WorldWideScience

Sample records for nonsmooth estimating functions

  1. A three critical point theorem for non-smooth functionals with ...

    Indian Academy of Sciences (India)

    1Department of Mathematics, Faculty of Mathematical Sciences, ... In many applications, we encounter problems with non-smooth energy functionals. These ... The next lemma shows that a locally Lipschitz functional with a compact gradient is ...

  2. A Modified Levenberg-Marquardt Method for Nonsmooth Equations with Finitely Many Maximum Functions

    Directory of Open Access Journals (Sweden)

    Shou-qiang Du

    2008-01-01

    Full Text Available For solving nonsmooth systems of equations, the Levenberg-Marquardt method and its variants are of particular importance because of their locally fast convergence rates. Systems of finitely many maximum functions are very useful in the study of nonlinear complementarity problems, variational inequality problems, Karush-Kuhn-Tucker systems of nonlinear programming problems, and many problems in mechanics and engineering. In this paper, we present a modified Levenberg-Marquardt method for nonsmooth equations with finitely many maximum functions. Under mild assumptions, the present method is shown to be Q-linearly convergent. Some numerical results comparing the proposed method with classical reformulations indicate that the modified Levenberg-Marquardt algorithm works quite well in practice.
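
    The kind of iteration this abstract describes can be sketched on a toy system of max-type equations. The sketch below is illustrative only: it uses the gradient of an active (argmax) branch as a generalized Jacobian and ties the Levenberg-Marquardt parameter to the residual norm; the test problem and parameter choice are assumptions, not the authors' algorithm or benchmarks.

```python
import numpy as np

def F(x):
    # Toy nonsmooth system: each component is a max of smooth functions.
    return np.array([max(x[0], x[0]**2 - 1.0),
                     max(x[1] - 1.0, x[0] + x[1] - 1.0)])

def gen_jacobian(x):
    # One element of the B-subdifferential: the gradient of an active branch.
    J = np.zeros((2, 2))
    J[0] = [1.0, 0.0] if x[0] >= x[0]**2 - 1.0 else [2.0 * x[0], 0.0]
    J[1] = [0.0, 1.0] if x[1] - 1.0 >= x[0] + x[1] - 1.0 else [1.0, 1.0]
    return J

def nonsmooth_lm(x, iters=200):
    for _ in range(iters):
        Fx = F(x)
        mu = np.linalg.norm(Fx)   # LM damping tied to the residual size
        if mu < 1e-12:
            break
        J = gen_jacobian(x)
        # Damped Gauss-Newton step: (J^T J + mu I) d = -J^T F
        d = np.linalg.solve(J.T @ J + mu * np.eye(2), -J.T @ Fx)
        x = x + d
    return x

x = nonsmooth_lm(np.array([2.0, 2.0]))
```

    As the residual shrinks, the damping vanishes and the step approaches a pure Gauss-Newton step on the active branch, which is where the fast local convergence comes from.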

  3. Ant colony optimisation for economic dispatch problem with non-smooth cost functions

    Energy Technology Data Exchange (ETDEWEB)

    Pothiya, Saravuth; Kongprawechnon, Waree [School of Communication, Instrumentation and Control, Sirindhorn International Institute of Technology, Thammasat University, P.O. Box 22, Pathumthani (Thailand); Ngamroo, Issarachai [Center of Excellence for Innovative Energy Systems, Faculty of Engineering, King Mongkut's Institute of Technology Ladkrabang, Bangkok 10520 (Thailand)

    2010-06-15

    This paper presents a novel and efficient optimisation approach based on ant colony optimisation (ACO) for solving the economic dispatch (ED) problem with non-smooth cost functions. To improve the performance of the ACO algorithm, three additional techniques are introduced: a priority list, variable reduction, and a zoom feature. To show its efficiency and effectiveness, the proposed ACO is applied to two types of ED problem with non-smooth cost functions: first, the ED problem with valve-point loading effects, with test cases of 13 and 40 generating units; second, the ED problem considering multiple fuels, with 10 units. Additionally, the results of the proposed ACO are compared with those of conventional heuristic approaches. The experimental results show that the proposed ACO approach obtains higher-quality solutions in less computational time. (author)
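
    The valve-point loading effect that makes the dispatch cost non-smooth is conventionally modelled by adding a rectified sinusoid to the quadratic fuel cost. A minimal sketch of that cost curve follows; the coefficient values are illustrative defaults, not the paper's test data:

```python
import math

def fuel_cost(p, a=240.0, b=7.0, c=0.007, e=300.0, f=0.035, p_min=100.0):
    # Quadratic fuel cost plus the rectified-sine valve-point term, which
    # introduces non-differentiable ripples into the cost curve.
    return a + b * p + c * p**2 + abs(e * math.sin(f * (p_min - p)))

# At p = p_min the valve-point term vanishes and the cost is purely quadratic.
print(fuel_cost(100.0))  # → 1010.0
```

    The kinks of the absolute-value term are what defeat gradient-based dispatch solvers and motivate derivative-free heuristics such as ACO.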

  4. Advanced H∞ control towards nonsmooth theory and applications

    CERN Document Server

    Orlov, Yury V

    2014-01-01

    This compact monograph is focused on disturbance attenuation in nonsmooth dynamic systems, developing an H∞ approach in the nonsmooth setting. Similar to the standard nonlinear H∞ approach, the proposed nonsmooth design guarantees both the internal asymptotic stability of a nominal closed-loop system and the dissipativity inequality, which states that the size of an error signal is uniformly bounded with respect to the worst-case size of an external disturbance signal. This guarantee is achieved by constructing an energy or storage function that satisfies the dissipativity inequality and is then utilized as a Lyapunov function to ensure the internal stability requirements.    Advanced H∞ Control is unique in the literature for its treatment of disturbance attenuation in nonsmooth systems. It synthesizes various tools, including Hamilton–Jacobi–Isaacs partial differential inequalities as well as Linear Matrix Inequalities. Along with the finite-dimensional treatment, the synthesis is exten...

  5. Non-linear second-order periodic systems with non-smooth potential

    Indian Academy of Sciences (India)

    In this paper we study second order non-linear periodic systems driven by the ordinary vector p-Laplacian with a non-smooth, locally Lipschitz potential function. Our approach is variational and it is based on the non-smooth critical point theory. We prove existence and multiplicity results under general growth conditions on ...

  6. Non-linear second-order periodic systems with non-smooth potential

    Indian Academy of Sciences (India)


    Abstract. In this paper we study second order non-linear periodic systems driven by the ordinary vector p-Laplacian with a non-smooth, locally Lipschitz potential function. Our approach is variational and it is based on the non-smooth critical point theory. We prove existence and multiplicity results under general growth ...

  7. Nonsmooth mechanics models, dynamics and control

    CERN Document Server

    Brogliato, Bernard

    2016-01-01

    Now in its third edition, this standard reference is a comprehensive treatment of nonsmooth mechanical systems refocused to give more prominence to control and modelling. It covers Lagrangian and Newton–Euler systems, detailing mathematical tools such as convex analysis and complementarity theory. The ways in which nonsmooth mechanics influence and are influenced by well-posedness analysis, numerical analysis and simulation, modelling and control are explained. Contact/impact laws, stability theory and trajectory-tracking control are given in-depth exposition connected by a framework formed from complementarity systems and measure-differential inclusions. Links are established with electrical circuits with set-valued nonsmooth elements and with other nonsmooth dynamical systems like impulsive and piecewise linear systems. Nonsmooth Mechanics (third edition) has been substantially rewritten, edited and updated to account for the significant body of results that have emerged in the twenty-first century—incl...

  8. OPTIMAL ESTIMATES FOR THE SEMIDISCRETE GALERKIN METHOD APPLIED TO PARABOLIC INTEGRO-DIFFERENTIAL EQUATIONS WITH NONSMOOTH DATA

    KAUST Repository

    GOSWAMI, DEEPJYOTI; PANI, AMIYA K.; YADAV, SANGITA

    2014-01-01

    We propose and analyse an alternate approach to a priori error estimates for the semidiscrete Galerkin approximation to a time-dependent parabolic integro-differential equation with nonsmooth initial data. The method is based on energy arguments combined with repeated use of time integration, but without using parabolic-type duality techniques. An optimal L2-error estimate is derived for the semidiscrete approximation when the initial data is in L2. A superconvergence result is obtained and then used to prove a maximum norm estimate for parabolic integro-differential equations defined on a two-dimensional bounded domain. © 2014 Australian Mathematical Society.

  9. Optimal Error Estimates of Two Mixed Finite Element Methods for Parabolic Integro-Differential Equations with Nonsmooth Initial Data

    KAUST Repository

    Goswami, Deepjyoti

    2013-05-01

    In the first part of this article, a new mixed method is proposed and analyzed for parabolic integro-differential equations (PIDE) with nonsmooth initial data. Compared to the standard mixed method for PIDE, the present method does not bank on a reformulation using a resolvent operator. Based on energy arguments combined with a repeated use of an integral operator, and without using parabolic-type duality techniques, optimal L2-error estimates are derived for semidiscrete approximations when the initial condition is in L2. Due to the presence of the integral term, it is further observed that a negative norm estimate plays a crucial role in our error analysis. Moreover, the proposed analysis follows the spirit of the proof techniques used in deriving optimal error estimates for finite element approximations to PIDE with smooth data, and therefore it unifies both theories, i.e., one for smooth data and the other for nonsmooth data. Finally, we extend the proposed analysis to the standard mixed method for PIDE with rough initial data and provide an optimal error estimate in L2, which improves upon the results available in the literature. © 2013 Springer Science+Business Media New York.

  10. An Approximate Redistributed Proximal Bundle Method with Inexact Data for Minimizing Nonsmooth Nonconvex Functions

    Directory of Open Access Journals (Sweden)

    Jie Shen

    2015-01-01

    Full Text Available We describe an extension of the redistributed technique from the classical proximal bundle method to the inexact situation for minimizing nonsmooth nonconvex functions. The cutting-plane model we construct approximates not the whole nonconvex function but a local convexification of the approximate objective function, and this local convexification is modified dynamically so as to always yield nonnegative linearization errors. Since we employ only approximate function values and approximate subgradients, the theoretical convergence analysis shows that an approximate stationary point, or some double approximate stationary point, can be obtained under mild conditions.

  11. On Estimation of the CES Production Function - Revisited

    DEFF Research Database (Denmark)

    Henningsen, Arne; Henningsen, Geraldine

    2012-01-01

    Estimation of the non-linear Constant Elasticity of Substitution (CES) function is generally considered problematic due to convergence problems and unstable and/or meaningless results. These problems often arise from a non-smooth objective function with large flat areas, the discontinuity of the CES function where the elasticity of substitution is one, and possibly significant rounding errors where the elasticity of substitution is close to one. We suggest three (combinable) solutions that alleviate these problems and improve the reliability and stability of the results.
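
    The discontinuity mentioned here occurs because at unit elasticity of substitution (substitution parameter rho = 0) the CES form degenerates, and its limit is the Cobb-Douglas function. A hedged sketch, with illustrative parameter values and an assumed explicit switch to the limit:

```python
def ces(K, L, gamma=1.0, delta=0.4, rho=0.5, nu=1.0):
    # Two-input CES production function. At rho = 0 the CES expression is
    # indeterminate, so we switch explicitly to its Cobb-Douglas limit.
    if abs(rho) < 1e-12:
        return gamma * K**(delta * nu) * L**((1.0 - delta) * nu)
    return gamma * (delta * K**(-rho) + (1.0 - delta) * L**(-rho))**(-nu / rho)

# Near rho = 0 the CES value approaches the Cobb-Douglas limit, but rounding
# errors in the exponentiation grow; this is exactly the numerical difficulty
# the abstract describes for elasticities close to one.
close = ces(2.0, 3.0, rho=1e-7)
limit = ces(2.0, 3.0, rho=0.0)
```

    An estimator that evaluates this function along a path where rho crosses zero must handle the branch explicitly, or the objective surface becomes non-smooth there.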

  12. An approach for spherical harmonic analysis of non-smooth data

    Science.gov (United States)

    Wang, Hansheng; Wu, Patrick; Wang, Zhiyong

    2006-12-01

    A method is proposed to evaluate the spherical harmonic coefficients of a global or regional, non-smooth, observable dataset sampled on an equiangular grid. The method is based on an integration strategy using new recursion relations. Because a bilinear function is used to interpolate points within the grid cells, this method is suitable for non-smooth data; the slope of the data may be piecewise continuous, with extreme changes at the boundaries. In order to validate the method, the coefficients of an axisymmetric model are computed and compared with the derived analytical expressions. Numerical results show that this method is indeed reasonable for non-smooth models, and that the maximum degree for spherical harmonic analysis should be empirically determined by several factors, including the model resolution and the degree of non-smoothness in the dataset; it can be several times larger than the total number of latitudinal grid points. It is also shown that this method is appropriate for the approximate analysis of a smooth dataset. Moreover, this paper provides the program flowchart and an internet address where the FORTRAN code with program specifications is made available.
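
    The cell-wise bilinear interpolant that makes the method tolerant of non-smooth data can be sketched generically. This is the standard bilinear formula on a unit cell, given as an assumption about the interpolation step, not the authors' FORTRAN code:

```python
def bilinear(f00, f10, f01, f11, s, t):
    # Interpolate within one grid cell from its four corner values;
    # s, t in [0, 1] are normalized coordinates inside the cell.
    return ((1 - s) * (1 - t) * f00 + s * (1 - t) * f10
            + (1 - s) * t * f01 + s * t * f11)

# Corner values are reproduced exactly; the cell centre is the corner average.
centre = bilinear(1.0, 2.0, 3.0, 4.0, 0.5, 0.5)  # → 2.5
```

    Because the interpolant is continuous across cell edges but its slope is not, the resulting integrand stays piecewise smooth even when the data itself has sharp boundary jumps.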

  13. Fast nonconvex nonsmooth minimization methods for image restoration and reconstruction.

    Science.gov (United States)

    Nikolova, Mila; Ng, Michael K; Tam, Chi-Pan

    2010-12-01

    Nonconvex nonsmooth regularization has advantages over convex regularization for restoring images with neat edges. However, its practical interest used to be limited by the difficulty of the computational stage, which requires a nonconvex nonsmooth minimization. In this paper, we deal with nonconvex nonsmooth minimization methods for image restoration and reconstruction. Our theoretical results show that the solution of the nonconvex nonsmooth minimization problem is composed of constant regions surrounded by closed contours and neat edges. The main goal of this paper is to develop fast minimization algorithms to solve the nonconvex nonsmooth minimization problem. Our experimental results demonstrate the effectiveness and efficiency of the proposed algorithms.

  14. Nonsmooth Mechanics and Convex Optimization

    CERN Document Server

    Kanno, Yoshihiro

    2011-01-01

    "This book concerns matter that is intrinsically difficult: convex optimization, complementarity and duality, nonsmooth analysis, linear and nonlinear programming, etc. The author has skillfully introduced these and many more concepts, and woven them into a seamless whole by retaining an easy and consistent style throughout. The book is not all theory: There are many real-life applications in structural engineering, cable networks, frictional contact problems, and plasticity! I recommend it to any reader who desires a modern, authoritative account of nonsmooth mechanics and convex optimization."

  15. A one-layer recurrent neural network for constrained nonsmooth optimization.

    Science.gov (United States)

    Liu, Qingshan; Wang, Jun

    2011-10-01

    This paper presents a novel one-layer recurrent neural network modeled by means of a differential inclusion for solving nonsmooth optimization problems, in which the number of neurons in the proposed neural network is the same as the number of decision variables of optimization problems. Compared with existing neural networks for nonsmooth optimization problems, the global convexity condition on the objective functions and constraints is relaxed, which allows the objective functions and constraints to be nonconvex. It is proven that the state variables of the proposed neural network are convergent to optimal solutions if a single design parameter in the model is larger than a derived lower bound. Numerical examples with simulation results substantiate the effectiveness and illustrate the characteristics of the proposed neural network.
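
    The behaviour of such a network can be imitated numerically by a forward-Euler discretization of a projected subgradient flow, which is one standard reading of the differential-inclusion model. The toy problem below (minimizing the nonsmooth function |x1| + |x2| on the hyperplane x1 + x2 = 1) and the step size are assumptions for illustration, not the network proposed in the paper:

```python
import numpy as np

def project(x):
    # Euclidean projection onto the affine constraint x1 + x2 = 1.
    return x - (x.sum() - 1.0) / 2.0

def neural_flow(x, h=0.01, steps=500):
    for _ in range(steps):
        g = np.sign(x)          # a subgradient of |x1| + |x2|
        x = project(x - h * g)  # Euler step of the projected subgradient flow
    return x

x = neural_flow(np.array([3.0, -2.0]))
```

    The state stays on the constraint set at every step and settles, up to the step size, in the optimal face where both coordinates are nonnegative and the objective value is 1; in the continuous-time inclusion the same convergence is established with a Lyapunov argument.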

  16. Damage Mechanism in Counter Pairs Caused by Bionic Non-smoothed Surface

    Directory of Open Access Journals (Sweden)

    ZHANG Zhan-hui

    2016-08-01

    Full Text Available Four biomimetic non-smoothed surface specimens with different shapes were prepared by laser processing. Tests were conducted on an MMU-5G wear and abrasion test machine to study the influence of non-smoothed surfaces on counter pairs. The results show that the mass loss of a friction pair matched with non-smoothed units is much greater than that of one matched with smooth specimens, and that pairs matched with different non-smoothed units suffer differently. The protruding zones of the non-smoothed surface exert micro-cutting on the counter pairs. The striation units cause the greatest mass loss, almost double that of the grid units, which suffer the least. The difference in pair damage is attributed to the different mechanisms of load bearing during wear. The damage can be alleviated effectively by changing the shapes of the units without increasing or decreasing the area ratio of the non-smoothed units.

  17. The Contact Dynamics method: A nonsmooth story

    Science.gov (United States)

    Dubois, Frédéric; Acary, Vincent; Jean, Michel

    2018-03-01

    When velocity jumps occur, the dynamics is said to be nonsmooth. For instance, in collections of contacting rigid bodies, jumps are caused by shocks and dry friction. Without compliance at the interface, contact laws are not only non-differentiable in the usual sense but also multi-valued. Modeling contacting bodies is of interest in order to understand the behavior of numerous mechanical systems such as flexible multi-body systems, granular materials or masonry. These granular materials behave puzzlingly either like a solid or a fluid, and a description in the frame of classical continuum mechanics, though it would be welcome, is still far from satisfactory nowadays. Jean-Jacques Moreau contributed greatly to convex analysis, functions of bounded variation, differential measure theory and sweeping process theory, the definitive mathematical tools for dealing with nonsmooth dynamics. He converted these underlying theoretical ideas into an original nonsmooth implicit numerical method called Contact Dynamics (CD): a robust and efficient method to simulate large collections of bodies with frictional contacts and impacts. The CD method offers a very interesting complementary alternative to the family of smooth explicit numerical methods, often called the Distinct Element Method (DEM). In this paper, developments and improvements of the CD method are presented, together with a critical comparative review of the advantages and drawbacks of both approaches.

  18. Adaptive Integration of Nonsmooth Dynamical Systems

    Science.gov (United States)

    2017-10-11

    W911NF-12-R-0012-03: Adaptive Integration of Nonsmooth Dynamical Systems. The views, opinions and/or findings contained in this report are those of the authors. Approved for public release; distribution is unlimited. Deliverables include a solver for dynamical systems with arbitrary unilateral and bilateral constraints (the key component of the time-stepping systems).

  19. Bifurcations of non-smooth systems

    Science.gov (United States)

    Angulo, Fabiola; Olivar, Gerard; Osorio, Gustavo A.; Escobar, Carlos M.; Ferreira, Jocirei D.; Redondo, Johan M.

    2012-12-01

    Non-smooth systems (namely piecewise-smooth systems) have received much attention in the last decade. Many contributions in this area show that theory and applications (to electronic circuits, mechanical systems, …) are relevant to problems in science and engineering. Specially, new bifurcations have been reported in the literature, and this was the topic of this minisymposium. Thus both bifurcation theory and its applications were included. Several contributions from different fields show that non-smooth bifurcations are a hot topic in research. Thus in this paper the reader can find contributions from electronics, energy markets and population dynamics. Also, a carefully-written specific algebraic software tool is presented.

  20. Spectral asymptotics for nonsmooth singular Green operators

    DEFF Research Database (Denmark)

    Grubb, Gerd

    2014-01-01

    is a singular Green operator. It is well known in smooth cases that when G is of negative order −t on a bounded domain, its eigenvalues or s-numbers have the behavior (*) s_j(G) ∼ c j^(−t/(n−1)) for j → ∞, governed by the boundary dimension n − 1. In some nonsmooth cases, upper estimates (**) s_j(G) ≤ C j^(−t/(n−1…

  1. Particle-based solid for nonsmooth multidomain dynamics

    Science.gov (United States)

    Nordberg, John; Servin, Martin

    2018-04-01

    A method for simulation of elastoplastic solids in multibody systems with nonsmooth and multidomain dynamics is developed. The solid is discretised into pseudo-particles using the meshfree moving least squares method for computing the strain tensor. Each particle's strain and stress tensor variables are mapped to a compliant deformation constraint. The discretised solid model thus fits a unified framework for nonsmooth multidomain dynamics simulations, including rigid multibodies with complex kinematic constraints such as articulation joints, unilateral contacts with dry friction, drivelines, and hydraulics. The nonsmooth formulation allows impact impulses to propagate instantly between the rigid multibody and the solid. Plasticity is introduced through an associative perfectly plastic modified Drucker-Prager model. The elastic and plastic dynamics are verified for simple test systems, and the capability of simulating tracked terrain vehicles driving on deformable terrain is demonstrated.

  2. The full Keller-Segel model is well-posed on nonsmooth domains

    Science.gov (United States)

    Horstmann, D.; Meinlschmidt, H.; Rehberg, J.

    2018-04-01

    In this paper we prove that the full Keller-Segel system, a quasilinear strongly coupled reaction-crossdiffusion system of four parabolic equations, is well-posed in the sense that it always admits a unique local-in-time solution in an adequate function space, provided that the initial values are suitably regular. The proof is done via an abstract solution theorem for nonlocal quasilinear equations by Amann and is carried out for general source terms. It is fundamentally based on recent nontrivial elliptic and parabolic regularity results which hold true even on rather general nonsmooth spatial domains. For space dimensions 2 and 3, this enables us to work in a nonsmooth setting which is not available in classical parabolic systems theory. Apparently, there exists no comparable existence result for the full Keller-Segel system up to now. Due to the large class of possibly nonsmooth domains admitted, we also obtain new results for the ‘standard’ Keller-Segel system consisting of only two equations as a special case. This work is dedicated to Prof Willi Jäger.

  3. Investigation of the Effect of Dimple Bionic Nonsmooth Surface on Tire Antihydroplaning.

    Science.gov (United States)

    Zhou, Haichao; Wang, Guolin; Ding, Yangmin; Yang, Jian; Zhai, Huihui

    2015-01-01

    Inspired by the idea that bionic nonsmooth surfaces (BNSS) reduce fluid adhesion and resistance, the effect of a dimple bionic nonsmooth structure arranged on the tire circumferential groove surface on antihydroplaning performance was investigated using Computational Fluid Dynamics (CFD). The physical model of the object (model of dimple bionic nonsmooth surface distribution, hydroplaning model) and the SST k-ω turbulence model are established for numerical analysis of tire hydroplaning. By virtue of the orthogonal table L16(4^5), the parameters of the dimple bionic nonsmooth structure design were analyzed against the smooth structure, and the priority level of the experimental factors as well as the best combination within the scope of the experiment was obtained. The simulation results show that the dimple bionic nonsmooth structure can reduce water flow resistance by disturbing the eddy movement in boundary layers. The optimal type of dimple bionic nonsmooth structure is then arranged on the bottom of the tire circumferential grooves for hydroplaning performance analysis. The results show that the dimple bionic nonsmooth structure effectively decreases the tread hydrodynamic pressure when driving on a water film and increases the tire hydroplaning velocity, thus improving tire antihydroplaning performance.

  4. Investigation of the Effect of Dimple Bionic Nonsmooth Surface on Tire Antihydroplaning

    Directory of Open Access Journals (Sweden)

    Haichao Zhou

    2015-01-01

    Full Text Available Inspired by the idea that bionic nonsmooth surfaces (BNSS) reduce fluid adhesion and resistance, the effect of a dimple bionic nonsmooth structure arranged on the tire circumferential groove surface on antihydroplaning performance was investigated using Computational Fluid Dynamics (CFD). The physical model of the object (model of dimple bionic nonsmooth surface distribution, hydroplaning model) and the SST k-ω turbulence model are established for numerical analysis of tire hydroplaning. By virtue of the orthogonal table L16(4^5), the parameters of the dimple bionic nonsmooth structure design were analyzed against the smooth structure, and the priority level of the experimental factors as well as the best combination within the scope of the experiment was obtained. The simulation results show that the dimple bionic nonsmooth structure can reduce water flow resistance by disturbing the eddy movement in boundary layers. The optimal type of dimple bionic nonsmooth structure is then arranged on the bottom of the tire circumferential grooves for hydroplaning performance analysis. The results show that the dimple bionic nonsmooth structure effectively decreases the tread hydrodynamic pressure when driving on a water film and increases the tire hydroplaning velocity, thus improving tire antihydroplaning performance.

  5. A one-layer recurrent neural network for constrained nonsmooth invex optimization.

    Science.gov (United States)

    Li, Guocheng; Yan, Zheng; Wang, Jun

    2014-02-01

    Invexity is an important notion in nonconvex optimization. In this paper, a one-layer recurrent neural network is proposed for solving constrained nonsmooth invex optimization problems, designed based on an exact penalty function method. It is proved herein that any state of the proposed neural network is globally convergent to the optimal solution set of constrained invex optimization problems, with a sufficiently large penalty parameter. In addition, any neural state is globally convergent to the unique optimal solution, provided that the objective function and constraint functions are pseudoconvex. Moreover, any neural state is globally convergent to the feasible region in finite time and stays there thereafter. The lower bounds of the penalty parameter and convergence time are also estimated. Two numerical examples are provided to illustrate the performances of the proposed neural network. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. Introduction to nonsmooth optimization theory, practice and software

    CERN Document Server

    Bagirov, Adil; Mäkelä, Marko M

    2014-01-01

    Attempts to be the first easy-to-read book about nonsmooth optimization. Covers both the theory and the numerical methods used in nonsmooth optimization, and offers a survey of different problems arising in the field. Both the theory and the most common problems are illustrated with examples, making the book suitable both for teaching purposes and for self-study.

  7. Non-smooth dynamical systems

    CERN Document Server

    2000-01-01

    The book provides a self-contained introduction to the mathematical theory of non-smooth dynamical problems, as they frequently arise from mechanical systems with friction and/or impacts. It is aimed at applied mathematicians, engineers, and applied scientists in general who wish to learn the subject.

  8. Application of pattern search method to power system security constrained economic dispatch with non-smooth cost function

    International Nuclear Information System (INIS)

    Al-Othman, A.K.; El-Naggar, K.M.

    2008-01-01

    Direct search (DS) methods are derivative-free algorithms used to solve optimization problems: they do not require any information about the gradient of the objective function at hand while searching for an optimum solution. One such method is the pattern search (PS) algorithm. This paper presents a new approach based on a constrained pattern search algorithm to solve a security constrained power system economic dispatch (SCED) problem with a non-smooth cost function. Operation of power systems demands a high degree of security to keep the system operating satisfactorily when subjected to disturbances, while at the same time attention must be paid to economic aspects. A pattern recognition technique is used first to assess dynamic security: linear classifiers that determine the stability of the electric power system are presented and added to the other system stability and operational constraints. The problem is formulated as a constrained optimization problem in a way that ensures secure and economic system operation. Pattern search is then applied to solve the constrained optimization formulation. In particular, the method is tested on three different test systems, and simulation results of the proposed approach are compared with those reported in the literature. The outcome is very encouraging and shows that pattern search is very applicable to solving the security constrained power system economic dispatch problem. In addition, valve-point loading effects and total system losses are considered to further investigate the potential of the PS technique. Based on the results, it can be concluded that PS has demonstrated ability in handling the highly nonlinear, discontinuous, non-smooth cost function of the SCED. (author)
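
    What makes pattern search suitable for non-smooth dispatch costs is that it never evaluates a gradient. A minimal compass-search sketch on an illustrative nonsmooth objective follows; the objective and parameters are assumptions for demonstration, not the paper's SCED formulation or constraints:

```python
def pattern_search(f, x, step=1.0, tol=1e-4):
    # Poll the 2n compass directions; move on improvement, otherwise
    # contract the step. No derivative information is ever used.
    n = len(x)
    while step > tol:
        best, moved = f(x), False
        for i in range(n):
            for s in (+step, -step):
                y = list(x)
                y[i] += s
                if f(y) < best:
                    x, best, moved = y, f(y), True
        if not moved:
            step *= 0.5
    return x

# Nonsmooth test objective with a kink at its minimizer (3, -1).
f = lambda x: abs(x[0] - 3.0) + abs(x[1] + 1.0)
x = pattern_search(f, [0.0, 0.0])
```

    The step contraction plays the role that a line search plays in smooth methods: polling stalls only when no compass direction improves, at which point the mesh is refined around the incumbent point.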

  9. Intensive Research Program on Advances in Nonsmooth Dynamics 2016

    CERN Document Server

    Jeffrey, Mike; Lázaro, J; Olm, Josep

    2017-01-01

    This volume contains extended abstracts outlining selected talks and other selected presentations given by participants throughout the "Intensive Research Program on Advances in Nonsmooth Dynamics 2016", held at the Centre de Recerca Matemàtica (CRM) in Barcelona from February 1st to April 29th, 2016. They include brief research articles reporting new results, descriptions of preliminary work or open problems, and outlines of prominent discussion sessions. The articles are all the result of direct collaborations initiated during the research program. The topic is the theory and applications of Nonsmooth Dynamics. This includes systems involving elements of: impacting, switching, on/off control, hybrid discrete-continuous dynamics, jumps in physical properties, and many others. Applications include: electronics, climate modeling, life sciences, mechanics, ecology, and more. Numerous new results are reported concerning the dimensionality and robustness of nonsmooth models, shadowing variables, numbers of limit...

  10. A variational approach to nonsmooth dynamics applications in unilateral mechanics and electronics

    CERN Document Server

    Adly, Samir

    2017-01-01

    This brief examines mathematical models in nonsmooth mechanics and nonregular electrical circuits, including evolution variational inequalities, complementarity systems, differential inclusions, second-order dynamics, Lur'e systems and Moreau's sweeping process. The field of nonsmooth dynamics is of great interest to mathematicians, mechanicians, automatic controllers and engineers. The present volume acknowledges this transversality and provides a multidisciplinary view as it outlines fundamental results in nonsmooth dynamics and explains how to use them to study various problems in engineering. In particular, the author explores the question of how to redefine the notion of dynamical systems in light of modern variational and nonsmooth analysis. With the aim of bridging between the communities of applied mathematicians, engineers and researchers in control theory and nonlinear systems, this brief outlines both relevant mathematical proofs and models in unilateral mechanics and electronics.

  11. The Nonsmooth Vibration of a Relative Rotation System with Backlash and Dry Friction

    Directory of Open Access Journals (Sweden)

    Minjia He

    2017-01-01

    Full Text Available We investigate a relative rotation system with backlash and dry friction. Firstly, the corresponding nonsmooth characters are discussed by the differential inclusion theory, and the analytic conditions for stick and nonstick motions are developed to understand the motion switching mechanism. Based on such analytic conditions of motion switching, the influence of the maximal static friction torque and the driving torque on the stick motion is studied. Moreover, the sliding time bifurcation diagrams, duty cycle figures, time history diagrams, and the K-function time history diagram are also presented, which confirm the analytic results. The methodology presented in this paper can be applied to predictions of motions in nonsmooth dynamical systems.

  12. Three-Field Modelling of Nonlinear Nonsmooth Boundary Value Problems and Stability of Differential Mixed Variational Inequalities

    Directory of Open Access Journals (Sweden)

    J. Gwinner

    2013-01-01

    Full Text Available The purpose of this paper is twofold. Firstly we consider nonlinear nonsmooth elliptic boundary value problems, and also related parabolic initial boundary value problems, that model in a simplified way steady-state unilateral contact with Tresca friction in solid mechanics or, respectively, stem from nonlinear transient heat conduction with unilateral boundary conditions. Here a recent duality approach, which augments the classical Babuška-Brezzi saddle point formulation for mixed variational problems to twofold saddle point formulations, is extended to the nonsmooth problems under consideration. This approach leads to variational inequalities of mixed form for three coupled fields as unknowns and to related differential mixed variational inequalities in the time-dependent case. Secondly we are concerned with the stability of the solution set of a general class of differential mixed variational inequalities. Here we present a novel upper set convergence result with respect to perturbations in the data, including perturbations of the associated nonlinear maps, the nonsmooth convex functionals, and the convex constraint set. We employ epiconvergence for the convergence of the functionals and Mosco convergence for set convergence. We impose weak convergence assumptions on the perturbed maps using the monotonicity method of Browder and Minty.

  13. Optimal Error Estimates of Two Mixed Finite Element Methods for Parabolic Integro-Differential Equations with Nonsmooth Initial Data

    KAUST Repository

    Goswami, Deepjyoti; Pani, Amiya K.; Yadav, Sangita

    2013-01-01

    In the first part of this article, a new mixed method is proposed and analyzed for parabolic integro-differential equations (PIDE) with nonsmooth initial data. Compared to the standard mixed method for PIDE, the present method does not bank on a

  14. A Non-smooth Newton Method for Multibody Dynamics

    International Nuclear Information System (INIS)

    Erleben, K.; Ortiz, R.

    2008-01-01

    In this paper we deal with the simulation of rigid bodies. Rigid body dynamics has become very important for simulating rigid body motion in interactive applications, such as computer games or virtual reality. We present a novel way of computing contact forces using a Newton method. The contact problem is reformulated as a system of non-linear and non-smooth equations, and we solve this system using a non-smooth version of Newton's method. One of the main contributions of this paper is the reformulation of the complementarity problems, used to model impacts, as a system of equations that can be solved using traditional methods.
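The reformulation idea can be sketched with the Fischer-Burmeister NCP function, a standard (assumed here, not necessarily the paper's) choice for turning complementarity conditions into semismooth equations solvable by a Newton iteration. The sketch below solves a small linear complementarity problem and omits globalization (no line search):

```python
import numpy as np

def fb(a, b):
    # Fischer-Burmeister NCP function: fb(a, b) = 0  iff  a >= 0, b >= 0, a*b = 0
    return np.sqrt(a * a + b * b) - a - b

def solve_lcp_newton(M, q, iters=50, tol=1e-10):
    """Semismooth Newton on F(z)_i = fb(z_i, (M z + q)_i) for the LCP
    0 <= z  perp  M z + q >= 0.  Sketch only: no line search or globalization."""
    n = len(q)
    z = np.ones(n)
    for _ in range(iters):
        w = M @ z + q
        F = fb(z, w)
        if np.linalg.norm(F) < tol:
            break
        r = np.sqrt(z * z + w * w)
        r[r == 0] = 1e-12            # pick a generalized-Jacobian element at kinks
        Da = z / r - 1.0             # partial derivative of fb w.r.t. its first slot
        Db = w / r - 1.0             # partial w.r.t. second slot, chained through M
        J = np.diag(Da) + np.diag(Db) @ M
        z = z - np.linalg.solve(J, F)
    return z
```

At the solution the complementarity between contact impulses z and constraint values M z + q holds componentwise, which is exactly the impact model the record describes.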

  15. The Modified HZ Conjugate Gradient Algorithm for Large-Scale Nonsmooth Optimization.

    Science.gov (United States)

    Yuan, Gonglin; Sheng, Zhou; Liu, Wenjie

    2016-01-01

    In this paper, the Hager and Zhang (HZ) conjugate gradient (CG) method and the modified HZ (MHZ) CG method are presented for large-scale nonsmooth convex minimization. Under some mild conditions, convergence results for the proposed methods are established. Numerical results show that the presented methods are more efficient on large-scale nonsmooth problems; several test problems with dimensions of up to 100,000 variables are solved.
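For orientation, the Hager-Zhang direction update can be sketched on a smooth quadratic with exact line search; the paper's nonsmooth treatment and its MHZ modification are not reproduced here, and the test function is our own:

```python
import numpy as np

def hz_beta(g_new, g_old, d):
    # Hager-Zhang conjugate-gradient parameter (Hager & Zhang, 2005)
    y = g_new - g_old
    dy = d @ y
    return (y - 2.0 * d * (y @ y) / dy) @ g_new / dy

def hz_cg_quadratic(A, b, x0, iters=100, tol=1e-10):
    """Minimize 0.5 x^T A x - b^T x with HZ nonlinear CG.
    The exact line search below is valid only for quadratics."""
    x = x0.copy()
    g = A @ x - b
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        alpha = -(g @ d) / (d @ (A @ d))   # exact step for a quadratic
        x = x + alpha * d
        g_new = A @ x - b
        d = -g_new + hz_beta(g_new, g, d) * d
        g = g_new
    return x
```

On a general (nonsmooth) objective, alpha would instead come from a Wolfe-type line search and the gradient would be replaced by a suitable (sub)gradient surrogate.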

  16. The Modified HZ Conjugate Gradient Algorithm for Large-Scale Nonsmooth Optimization.

    Directory of Open Access Journals (Sweden)

    Gonglin Yuan

    Full Text Available In this paper, the Hager and Zhang (HZ) conjugate gradient (CG) method and the modified HZ (MHZ) CG method are presented for large-scale nonsmooth convex minimization. Under some mild conditions, convergence results for the proposed methods are established. Numerical results show that the presented methods are more efficient on large-scale nonsmooth problems; several test problems with dimensions of up to 100,000 variables are solved.

  17. Analyzing the non-smooth dynamics induced by a split-path nonlinear integral controller

    NARCIS (Netherlands)

    Hunnekens, B.G.B.; van Loon, S.J.L.M.; van de Wouw, N.; Heemels, W.P.M.H.; Nijmeijer, H.; Ecker, Horst; Steindl, Alois; Jakubek, Stefan

    2014-01-01

    In this paper, we introduce a novel non-smooth integral controller, which aims at achieving a better transient response in terms of overshoot of a feedback controlled dynamical system. The resulting closed-loop system can be represented as a non-smooth system with different continuous dynamics being

  18. Recurrent neural network for non-smooth convex optimization problems with application to the identification of genetic regulatory networks.

    Science.gov (United States)

    Cheng, Long; Hou, Zeng-Guang; Lin, Yingzi; Tan, Min; Zhang, Wenjun Chris; Wu, Fang-Xiang

    2011-05-01

    A recurrent neural network is proposed for solving the non-smooth convex optimization problem with convex inequality and linear equality constraints. Since the objective function and inequality constraints may not be smooth, Clarke's generalized gradients of the objective function and inequality constraints are employed to describe the dynamics of the proposed neural network. It is proved that the equilibrium point set of the proposed neural network is equivalent to the optimal solution set of the original optimization problem by using the Lagrangian saddle-point theorem. Under weak conditions, the proposed neural network is proved to be stable, and the state of the neural network is convergent to one of its equilibrium points. Compared with the existing neural network models for non-smooth optimization problems, the proposed neural network can deal with a larger class of constraints and is not based on the penalty method. Finally, the proposed neural network is used to solve the identification problem of genetic regulatory networks, which can be transformed into a non-smooth convex optimization problem. The simulation results show satisfactory identification accuracy, which demonstrates the effectiveness and efficiency of the proposed approach.
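The continuous-time network can be caricatured by a forward-Euler discretization of a generalized-gradient flow x' in -df(x). The objective below (an l1-regularized quadratic, whose minimizer is the soft-threshold of c) and all step sizes are illustrative assumptions; the actual network additionally handles constraints via Lagrangian saddle-point dynamics:

```python
import numpy as np

def clarke_subgradient(x, c):
    # One element of the Clarke generalized gradient of
    # f(x) = ||x||_1 + 0.5 * ||x - c||^2   (sign(0) chosen as 0)
    return np.sign(x) + (x - c)

def subgradient_flow(c, step=0.01, iters=2000):
    """Forward-Euler discretization of x' in -df(x): a crude stand-in
    for simulating the continuous-time recurrent network."""
    x = np.zeros_like(c)
    for _ in range(iters):
        x = x - step * clarke_subgradient(x, c)
    return x
```

Near the kinks of the l1 term the discrete iterates chatter within a band of width proportional to the step size, which is why the continuous-time (differential inclusion) analysis in the paper is the cleaner setting.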

  19. DSPSO-TSA for economic dispatch problem with nonsmooth and noncontinuous cost functions

    Energy Technology Data Exchange (ETDEWEB)

    Khamsawang, S., E-mail: k_suwit999@yahoo.co [Electrical Engineering Department, Faculty of Engineering, King Mongkut' s Institute of Technology Ladkrabang, Ladkrabang District 10520, Bangkok (Thailand); Jiriwibhakorn, S., E-mail: kjsomcha@kmitl.ac.t [Electrical Engineering Department, Faculty of Engineering, King Mongkut' s Institute of Technology Ladkrabang, Ladkrabang District 10520, Bangkok (Thailand)

    2010-02-15

    This paper proposes a new approach based on particle swarm optimization (PSO) and the tabu search algorithm (TSA), called distributed Sobol PSO and TSA (DSPSO-TSA). In order to improve the convergence characteristics and solution quality of the search process, three mechanisms are presented. Firstly, the Sobol sequence is applied to generate the inertia factor instead of the existing process. Secondly, a distributed process is used so as to reach the global solution rapidly: the search is divided into multiple stages, and a short-term memory is used to record the best search history. Finally, to guarantee the global solution, TSA is activated to adjust the solution obtained by the DSPSO algorithm. To show its effectiveness, the proposed DSPSO-TSA is applied to four case studies of the economic dispatch (ED) problem considering nonsmooth and noncontinuous fuel cost functions of generating units. The simulation results obtained from DSPSO-TSA are compared with conventional approaches such as the genetic algorithm (GA), TSA, PSO, and others in the literature. The comparison shows that the proposed approach reaches higher-quality solutions with faster computational times than the conventional methods.

  20. DSPSO-TSA for economic dispatch problem with nonsmooth and noncontinuous cost functions

    International Nuclear Information System (INIS)

    Khamsawang, S.; Jiriwibhakorn, S.

    2010-01-01

    This paper proposes a new approach based on particle swarm optimization (PSO) and the tabu search algorithm (TSA), called distributed Sobol PSO and TSA (DSPSO-TSA). In order to improve the convergence characteristics and solution quality of the search process, three mechanisms are presented. Firstly, the Sobol sequence is applied to generate the inertia factor instead of the existing process. Secondly, a distributed process is used so as to reach the global solution rapidly: the search is divided into multiple stages, and a short-term memory is used to record the best search history. Finally, to guarantee the global solution, TSA is activated to adjust the solution obtained by the DSPSO algorithm. To show its effectiveness, the proposed DSPSO-TSA is applied to four case studies of the economic dispatch (ED) problem considering nonsmooth and noncontinuous fuel cost functions of generating units. The simulation results obtained from DSPSO-TSA are compared with conventional approaches such as the genetic algorithm (GA), TSA, PSO, and others in the literature. The comparison shows that the proposed approach reaches higher-quality solutions with faster computational times than the conventional methods.

  1. Dynamics and Control of Non-Smooth Systems with Applications to Supercavitating Vehicles

    Science.gov (United States)

    2011-01-01

    ABSTRACT Title of dissertation: Dynamics and Control of Non-Smooth Systems with Applications to Supercavitating Vehicles, Vincent Nguyen, Doctor of... This dissertation relates to the dynamics of non-smooth vehicle systems, and in particular, supercavitating vehicles. These high-speed underwater vehicles are...

  2. Sharp Spectral Asymptotics and Weyl Formula for Elliptic Operators with Non-smooth Coefficients

    International Nuclear Information System (INIS)

    Zielinski, Lech

    1999-01-01

    The aim of this paper is to give the Weyl formula for eigenvalues of self-adjoint elliptic operators, assuming that first-order derivatives of the coefficients are Lipschitz continuous. The approach is based on the asymptotic formula of Hörmander's type for the spectral function of pseudodifferential operators having Lipschitz continuous Hamiltonian flow, obtained via a regularization procedure of the nonsmooth coefficients.

  3. Sharp Spectral Asymptotics and Weyl Formula for Elliptic Operators with Non-smooth Coefficients

    Energy Technology Data Exchange (ETDEWEB)

    Zielinski, Lech [Universite Paris 7 (D. Diderot), Institut de Mathematiques de Paris-Jussieu UMR9994 (France)

    1999-09-15

    The aim of this paper is to give the Weyl formula for eigenvalues of self-adjoint elliptic operators, assuming that first-order derivatives of the coefficients are Lipschitz continuous. The approach is based on the asymptotic formula of Hörmander's type for the spectral function of pseudodifferential operators having Lipschitz continuous Hamiltonian flow, obtained via a regularization procedure of the nonsmooth coefficients.

  4. Effects of striated laser tracks on thermal fatigue resistance of cast iron samples with biomimetic non-smooth surface

    International Nuclear Information System (INIS)

    Tong, Xin; Zhou, Hong; Liu, Min; Dai, Ming-jiang

    2011-01-01

    In order to enhance the thermal fatigue resistance of cast iron materials, samples with a biomimetic non-smooth surface were processed by a Neodymium:Yttrium Aluminum Garnet (Nd:YAG) laser. With a self-controlled thermal fatigue test method, the thermal fatigue resistance of smooth and non-smooth samples was investigated. The effects of striated laser tracks on thermal fatigue resistance were also studied. The results indicated that the biomimetic non-smooth surface is beneficial for improving the thermal fatigue resistance of cast iron samples. The striated non-smooth units formed by laser tracks perpendicular to the thermal cracks had the best crack propagation resistance. The mechanisms behind these influences are discussed, and some schematic drawings are introduced to describe them.

  5. Fundamental solutions and local solvability for nonsmooth Hörmander’s operators

    CERN Document Server

    Bramanti, Marco; Manfredini, Maria

    2017-01-01

    The authors consider operators of the form L=\\sum_{i=1}^{n}X_{i}^{2}+X_{0} in a bounded domain of \\mathbb{R}^{p} where X_{0},X_{1},\\ldots,X_{n} are nonsmooth Hörmander's vector fields of step r such that the highest order commutators are only Hölder continuous. Applying Levi's parametrix method the authors construct a local fundamental solution \\gamma for L and provide growth estimates for \\gamma and its first derivatives with respect to the vector fields. Requiring the existence of one more derivative of the coefficients the authors prove that \\gamma also possesses second derivatives, and they deduce the local solvability of L, constructing, by means of \\gamma, a solution to Lu=f with Hölder continuous f. The authors also prove C_{X,loc}^{2,\\alpha} estimates on this solution.

  6. An introduction to nonsmooth analysis

    CERN Document Server

    Ferrera, Juan

    2013-01-01

    Nonsmooth Analysis is a relatively recent area of mathematical analysis. The literature on this subject consists mainly of research papers and books. The purpose of this book is to provide a handbook for undergraduate and graduate students of mathematics that introduces this interesting area in detail. It covers different kinds of sub- and superdifferentials as well as generalized gradients, together with the main tools of the theory, such as Sum and Chain Rules and Mean Value theorems. Content is introduced in an elementary way, developing many examples, allowing the reader to understand a theory which...

  7. Influence of non-smooth surface on tribological properties of glass fiber-epoxy resin composite sliding against stainless steel under natural seawater lubrication

    Science.gov (United States)

    Wu, Shaofeng; Gao, Dianrong; Liang, Yingna; Chen, Bo

    2015-11-01

    With the development of bionics, bionic non-smooth surfaces have been introduced to the field of tribology. Although non-smooth surfaces have been studied widely, studies of non-smooth surfaces under natural seawater lubrication, especially experimental ones, are still scarce. The influences of smooth and non-smooth surfaces on the frictional properties of a glass fiber-epoxy resin composite (GF/EPR) coupled with stainless steel 316L are investigated under natural seawater lubrication in this paper. The tested non-smooth surfaces include surfaces with semi-spherical pits, conical pits, cone-cylinder combined pits, cylindrical pits and through holes. The friction and wear tests are performed using a ring-on-disc test rig under a 60 N load and a 1000 r/min rotational speed. The test results show that GF/EPR with a bionic non-smooth surface has a considerably lower friction coefficient and better wear resistance than GF/EPR with a smooth surface without pits. The average friction coefficient of GF/EPR with semi-spherical pits is 0.088, a reduction of approximately 63.18% relative to GF/EPR with a smooth surface. In addition, the wear debris on the worn surfaces of GF/EPR is observed by a confocal scanning laser microscope. It is shown that the primary wear mechanism is abrasive wear. The research results provide some design parameters for non-smooth surfaces, and the experimental results can serve as a beneficial supplement to non-smooth surface studies.

  8. Non-smooth optimization methods for large-scale problems: applications to mid-term power generation planning

    International Nuclear Information System (INIS)

    Emiel, G.

    2008-01-01

    This manuscript deals with large-scale non-smooth optimization that may typically arise when performing Lagrangian relaxation of difficult problems. This technique is commonly used to tackle mixed-integer linear programs or large-scale convex problems. For example, a classical approach when dealing with power generation planning problems in a stochastic environment is to perform a Lagrangian relaxation of the coupling constraints of demand. In this approach, a master problem coordinates local subproblems, specific to each generation unit. The master problem deals with a separable non-smooth dual function which can be maximized with, for example, bundle algorithms. In chapter 2, we introduce basic tools of non-smooth analysis and some recent results regarding incremental or inexact instances of non-smooth algorithms. However, in some situations, the dual problem may still be very hard to solve. For instance, when the number of dualized constraints is very large (exponential in the dimension of the primal problem), explicit dualization may no longer be possible or the update of dual variables may fail. In order to reduce the dual dimension, different heuristics were proposed. They involve a separation procedure to dynamically select a restricted set of constraints to be dualized along the iterations. This relax-and-cut type approach has shown its numerical efficiency in many combinatorial problems. In chapter 3, we show primal-dual convergence of such a strategy when using an adapted sub-gradient method for the dual step and under minimal assumptions on the separation procedure. Another limit of Lagrangian relaxation may appear when the dual function is separable into highly numerous or complex sub-functions. In such a situation, the computational burden of solving all local subproblems may be preponderant in the whole iterative process. A natural strategy would be here to take full advantage of the dual separable structure, performing a dual iteration after having
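The master/subproblem coordination described above can be illustrated on a toy separable problem with quadratic unit costs and a single coupled demand constraint; every coefficient is hypothetical, and a plain subgradient step stands in for the bundle methods discussed:

```python
import numpy as np

def local_dispatch(lam, a, cap):
    # Unit subproblem: argmin_p  0.5*a*p^2 - lam*p   over [0, cap]
    return float(np.clip(lam / a, 0.0, cap))

def dual_subgradient(D, a_coeffs, caps, step=0.1, iters=500):
    """Subgradient ascent on the multiplier lam of the coupling constraint
    sum_i p_i = D for the separable problem  min sum_i 0.5*a_i*p_i^2.
    D - sum_i p_i(lam) is a supergradient of the concave dual function."""
    lam = 0.0
    for _ in range(iters):
        p = [local_dispatch(lam, a, c) for a, c in zip(a_coeffs, caps)]
        lam += step * (D - sum(p))
    p = [local_dispatch(lam, a, c) for a, c in zip(a_coeffs, caps)]
    return lam, p
```

At the dual optimum the multiplier equalizes marginal costs across units, the economic interpretation of the coordinating "price" the master problem computes.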

  9. On the numerical and computational aspects of non-smoothnesses that occur in railway vehicle dynamics

    DEFF Research Database (Denmark)

    True, Hans; Engsig-Karup, Allan Peter; Bigoni, Daniele

    2014-01-01

    In our examples the dynamical problems are formulated as systems of ordinary differential-algebraic equations due to the geometric constraints. The non-smoothnesses have been neglected, smoothened or entered into the dynamical systems as switching boundaries with relations, which govern the continuation of the solutions across these boundaries. We compare the resulting solutions that are found with the three different strategies of handling the non-smoothnesses. Several integrators, both explicit and implicit ones, have been tested and their performances are evaluated and compared with respect to accuracy...

  10. Nonlinear dynamics of a nonsmooth shape memory alloy oscillator

    International Nuclear Information System (INIS)

    Cardozo dos Santos, Bruno; Amorim Savi, Marcelo

    2009-01-01

    In recent years, there has been increasing interest in nonsmooth system dynamics, motivated by different applications including rotor dynamics, oil drilling and machining. Besides, shape memory alloys (SMAs) have been used in various applications exploiting their high dissipation capacity related to their hysteretic behavior. This contribution investigates the nonlinear dynamics of shape memory alloy nonsmooth systems, considering a linear oscillator with a discontinuous support built with an SMA element. A constitutive model developed by Paiva et al. [Paiva A, Savi MA, Braga AMB, Pacheco PMCL. A constitutive model for shape memory alloys considering tensile-compressive asymmetry and plasticity. Int J Solids Struct 2005;42(11-12):3439-57] is employed to describe the thermomechanical behavior of the SMA element. Numerical investigations show results where the SMA discontinuous support can dramatically change the system dynamics when compared to those associated with a linear elastic support. A parametric study shows the system behavior for different system characteristics, forcing excitations and gaps. These results show that smart materials can be employed in different kinds of mechanical systems, exploiting some of the remarkable properties of these alloys.

  11. Neural network for nonsmooth pseudoconvex optimization with general convex constraints.

    Science.gov (United States)

    Bian, Wei; Ma, Litao; Qin, Sitian; Xue, Xiaoping

    2018-05-01

    In this paper, a one-layer recurrent neural network is proposed for solving a class of nonsmooth, pseudoconvex optimization problems with general convex constraints. Based on the smoothing method, we construct a new regularization function, which does not depend on any information of the feasible region. Thanks to the special structure of the regularization function, we prove the global existence, uniqueness and "slow solution" character of the state of the proposed neural network. Moreover, the state solution of the proposed network is proved to be convergent to the feasible region in finite time and to the optimal solution set of the related optimization problem subsequently. In particular, the convergence of the state to an exact optimal solution is also considered in this paper. Numerical examples with simulation results are given to show the efficiency and good characteristics of the proposed network. In addition, some preliminary theoretical analysis and application of the proposed network for a wider class of dynamic portfolio optimization are included.

  12. Probability Density Estimation Using Neural Networks in Monte Carlo Calculations

    International Nuclear Information System (INIS)

    Shim, Hyung Jin; Cho, Jin Young; Song, Jae Seung; Kim, Chang Hyo

    2008-01-01

    The Monte Carlo neutronics analysis requires the capability to estimate a tally distribution, such as an axial power distribution or a flux gradient in a fuel rod. This problem can be regarded as a probability density function estimation from an observation set. We apply the neural network based density estimation method to an observation and sampling weight set produced by the Monte Carlo calculations. The neural network method is compared with the histogram and the functional expansion tally method for estimating a non-smooth density, a fission source distribution, and an absorption rate gradient in a burnable absorber rod. The application results show that the neural network method can approximate a tally distribution quite well. (authors)
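The histogram baseline mentioned in this record, applied to weighted Monte Carlo observations, might be sketched as follows (the interface is our own, not from the paper): bin the sample weights, then normalize by total weight and bin width so the estimate integrates to one.

```python
import numpy as np

def histogram_density(samples, weights, edges):
    """Histogram density estimate from weighted Monte Carlo observations.
    Each sample contributes its weight to its bin; dividing by the total
    weight and the bin widths makes the estimate integrate to one."""
    hist, _ = np.histogram(samples, bins=edges, weights=weights)
    widths = np.diff(edges)
    return hist / (np.sum(weights) * widths)
```

The neural-network and functional-expansion estimators of the paper aim to avoid the staircase artifacts this piecewise-constant estimate shows on non-smooth densities.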

  13. The selection pressures induced non-smooth infectious disease model and bifurcation analysis

    International Nuclear Information System (INIS)

    Qin, Wenjie; Tang, Sanyi

    2014-01-01

    Highlights: • A non-smooth infectious disease model to describe selection pressure is developed. • The effect of selection pressure on infectious disease transmission is addressed. • The key factors which are related to the threshold value are determined. • The stabilities and bifurcations of the model are revealed in more detail. • Strategies for the prevention of emerging infectious disease are proposed. - Abstract: Mathematical models can assist in the design of strategies to control emerging infectious disease. This paper deduces a non-smooth infectious disease model induced by selection pressures. Analysis of this model reveals rich dynamics including local and global stability of equilibria and local sliding bifurcations. Model solutions ultimately stabilize at either one real equilibrium or the pseudo-equilibrium on the switching surface of the present model, depending on the threshold value determined by some related parameters. Our main results show that reducing the threshold value to an appropriate level could contribute to the efficacy of prevention and treatment of emerging infectious disease, which indicates that the selection pressures can be beneficial to preventing the emerging infectious disease under medical resource limitation.

  14. Global gradient estimates for divergence-type elliptic problems involving general nonlinear operators

    Science.gov (United States)

    Cho, Yumi

    2018-05-01

    We study nonlinear elliptic problems with nonstandard growth and ellipticity related to an N-function. We establish global Calderón-Zygmund estimates of the weak solutions in the framework of Orlicz spaces over bounded non-smooth domains. Moreover, we prove a global regularity result for asymptotically regular problems which are getting close to the regular problems considered, when the gradient variable goes to infinity.

  15. $h - p$ Spectral element methods for elliptic problems on non-smooth domains using parallel computers

    NARCIS (Netherlands)

    Tomar, S.K.

    2002-01-01

    It is well known that elliptic problems when posed on non-smooth domains, develop singularities. We examine such problems within the framework of spectral element methods and resolve the singularities with exponential accuracy.

  16. A nonsmooth nonlinear conjugate gradient method for interactive contact force problems

    DEFF Research Database (Denmark)

    Silcowitz, Morten; Abel, Sarah Maria Niebe; Erleben, Kenny

    2010-01-01

    of a nonlinear complementarity problem (NCP), which can be solved using an iterative splitting method, such as the projected Gauss–Seidel (PGS) method. We present a novel method for solving the NCP problem by applying a Fletcher–Reeves type nonlinear nonsmooth conjugate gradient (NNCG) type method. We analyze and present experimental convergence behavior and properties of the new method. Our results show that the NNCG method has at least the same convergence rate as PGS, and in many cases better.
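The projected Gauss–Seidel baseline referenced in this record has a compact form in the linear complementarity case; this sketch omits the contact-specific structure of the paper and is only the generic iteration:

```python
import numpy as np

def projected_gauss_seidel(M, q, iters=200):
    """Projected Gauss-Seidel for the LCP  0 <= z  perp  M z + q >= 0:
    sweep over components, take an unconstrained Gauss-Seidel step,
    then project back onto the nonnegative orthant."""
    n = len(q)
    z = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            r = M[i] @ z + q[i]          # residual using the latest z values
            z[i] = max(0.0, z[i] - r / M[i, i])
    return z
```

Its linear convergence rate on ill-conditioned contact problems is the motivation for the conjugate-gradient acceleration the record proposes.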

  17. Lovelock action with nonsmooth boundaries

    Science.gov (United States)

    Cano, Pablo A.

    2018-05-01

    We examine the variational problem in Lovelock gravity when the boundary contains timelike and spacelike segments nonsmoothly glued. We show that two kinds of contributions have to be added to the action. The first one is associated with the presence of a boundary in every segment and it depends on intrinsic and extrinsic curvatures. We can think of this contribution as adding a total derivative to the usual surface term of Lovelock gravity. The second one appears in every joint between two segments and it involves the integral along the joint of the Jacobson-Myers entropy density weighted by the Lorentz boost parameter, which relates the orthonormal frames in each segment. We argue that this term can be straightforwardly extended to the case of joints involving null boundaries. As an application, we compute the contribution of these terms to the complexity of global anti-de Sitter space in Lovelock gravity by using the "complexity = action" proposal and we identify possible universal terms for arbitrary values of the Lovelock couplings. We find that they depend on the charge a* controlling the holographic entanglement entropy and on a new constant that we characterize.

  18. Effect of Nonsmooth Nose Surface of the Projectile on Penetration Using DEM Simulation

    Directory of Open Access Journals (Sweden)

    Jing Han

    2017-01-01

    Full Text Available The nonsmooth body surface of reptiles in nature plays an important role in the reduction of resistance and friction when they live in a soil environment. To assess whether this is feasible for improving the performance of a penetrating projectile, we investigated the influence of a convex structure as one kind of nonsmooth surface for the nose of a projectile. A numerical simulation study of the projectile against a concrete target was developed based on the discrete element method (DEM). The results show that the convex nose surface of the projectile greatly reduces the penetration resistance, which is also validated by the experiments. Compared to the traditional smooth nose structure, the main reason for the difference is the local contact normal pressure, which increases dramatically due to the abrupt change of curvature caused by the convex structure under the same conditions. Accordingly, the broken particles of the concrete target obtain more kinetic energy and their average radial flow velocities drastically increase, which favors decreasing the interface friction and the compaction density of the concrete target around the nose of the projectile.

  19. Invisibility cloaking via non-smooth transformation optics and ray tracing

    International Nuclear Information System (INIS)

    Crosskey, Miles M.; Nixon, Andrew T.; Schick, Leland M.; Kovacic, Gregor

    2011-01-01

    We present examples of theoretically-predicted invisibility cloaks with shapes other than spheres and cylinders, including cones and ellipsoids, as well as shapes spliced from parts of these simpler shapes. In addition, we present an example explicitly displaying the non-uniqueness of invisibility cloaks of the same shape. We depict rays propagating through these example cloaks using ray tracing for geometric optics. - Highlights: • Theoretically-predicted conical and ellipsoidal invisibility cloaks. • Non-smooth cloaks spliced from parts of simpler shapes. • Example displaying non-uniqueness of invisibility cloaks of the same shape. • Rays propagating through example cloaks depicted using geometric optics.

  20. An Alternate Approach to Optimal L 2 -Error Analysis of Semidiscrete Galerkin Methods for Linear Parabolic Problems with Nonsmooth Initial Data

    KAUST Repository

    Goswami, Deepjyoti

    2011-09-01

    In this article, we propose and analyze an alternate proof of a priori error estimates for semidiscrete Galerkin approximations to a general second order linear parabolic initial and boundary value problem with rough initial data. Our analysis is based on energy arguments without using parabolic duality. Further, it follows the spirit of the proof technique used for deriving optimal error estimates for finite element approximations to parabolic problems with smooth initial data and hence, it unifies both theories, that is, one for smooth initial data and other for nonsmooth data. Moreover, the proposed technique is also extended to a semidiscrete mixed method for linear parabolic problems. In both cases, optimal L2-error estimates are derived, when the initial data is in L2. A superconvergence phenomenon is also observed, which is then used to prove L∞-estimates for linear parabolic problems defined on two-dimensional spatial domain again with rough initial data.

  1. Clusters in nonsmooth oscillator networks

    Science.gov (United States)

    Nicks, Rachel; Chambon, Lucie; Coombes, Stephen

    2018-03-01

    For coupled oscillator networks with Laplacian coupling, the master stability function (MSF) has proven a particularly powerful tool for assessing the stability of the synchronous state. Using tools from group theory, this approach has recently been extended to treat more general cluster states. However, the MSF and its generalizations require the determination of a set of Floquet multipliers from variational equations obtained by linearization around a periodic orbit. Since closed form solutions for periodic orbits are invariably hard to come by, the framework is often explored using numerical techniques. Here, we show that further insight into network dynamics can be obtained by focusing on piecewise linear (PWL) oscillator models. Not only do these allow for the explicit construction of periodic orbits, their variational analysis can also be explicitly performed. The price for adopting such nonsmooth systems is that many of the notions from smooth dynamical systems, and in particular linear stability, need to be modified to take into account possible jumps in the components of Jacobians. This is naturally accommodated with the use of saltation matrices. By augmenting the variational approach for studying smooth dynamical systems with such matrices we show that, for a wide variety of networks that have been used as models of biological systems, cluster states can be explicitly investigated. By way of illustration, we analyze an integrate-and-fire network model with event-driven synaptic coupling as well as a diffusively coupled network built from planar PWL nodes, including a reduction of the popular Morris-Lecar neuron model. We use these examples to emphasize that the stability of network cluster states can depend as much on the choice of single node dynamics as it does on the form of network structural connectivity. 
Importantly, the procedure that we present here, for understanding cluster synchronization in networks, is valid for a wide variety of systems in
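
    For reference, the standard saltation matrix construction (our summary of the usual textbook form for an autonomous system whose trajectory crosses a switching surface h(x) = 0 transversally at x*, passing from vector field f^- to f^+; the paper's event-driven systems use the analogous event-specific forms) is:

```latex
% Saltation matrix at a transversal crossing of h(x)=0 at x^*,
% where the flow switches from f^- to f^+ (autonomous case):
S \;=\; I \;+\;
\frac{\bigl(f^{+}(x^{*}) - f^{-}(x^{*})\bigr)\,\nabla h(x^{*})^{\top}}
     {\nabla h(x^{*})^{\top} f^{-}(x^{*})}
```

    Perturbations are propagated through the event as δx⁺ = S δx⁻, and composing flow Jacobians with the saltation matrices around a periodic orbit yields the Floquet multipliers used in the cluster-state stability analysis.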

  2. Extension Theory and Krein-type Resolvent Formulas for Nonsmooth Boundary Value Problems

    DEFF Research Database (Denmark)

    Abels, Helmut; Grubb, Gerd; Wood, Ian Geoffrey

    2014-01-01

    The theory of selfadjoint extensions of symmetric operators, and more generally the theory of extensions of dual pairs, was implemented some years ago for boundary value problems for elliptic operators on smooth bounded domains. Recently, the questions have been taken up again for nonsmooth domains. In the present work we show that pseudodifferential methods can be used to obtain a full characterization, including Kreĭn resolvent formulas, of the realizations of nonselfadjoint second-order operators.

  3. Stability analysis of delayed Cohen-Grossberg BAM neural networks with impulses via nonsmooth analysis

    International Nuclear Information System (INIS)

    Wen Zhen; Sun Jitao

    2009-01-01

    In this paper, we investigate the existence and uniqueness of the equilibrium point for delayed Cohen-Grossberg bidirectional associative memory (BAM) neural networks with impulses, based on the nonsmooth analysis method. We then give criteria for the global exponential stability of the unique equilibrium point for the delayed BAM neural networks with impulses using the Lyapunov method. The new sufficient condition generalizes and improves the previously known results. Finally, we present examples to illustrate that our results are effective.

  4. A new fuzzy adaptive particle swarm optimization for non-smooth economic dispatch

    Energy Technology Data Exchange (ETDEWEB)

    Niknam, Taher; Mojarrad, Hassan Doagou; Nayeripour, Majid [Electrical and Electronic Engineering Department, Shiraz University of Technology, Shiraz (Iran)

    2010-04-15

    This paper proposes a novel method for solving Non-convex Economic Dispatch (NED) problems, using Fuzzy Adaptive Modified Particle Swarm Optimization (FAMPSO). Practical ED problems have non-smooth cost functions with equality and inequality constraints when generator valve-point loading effects are taken into account. Modern heuristic optimization techniques have received much attention from researchers due to their ability to find near-globally-optimal solutions for ED problems. PSO is one such heuristic algorithm, in which particles move to get close to the best position found so far and thereby locate the global minimum. However, classic PSO may converge to a local optimum, and its performance depends strongly on its internal parameters. To overcome these drawbacks, this paper proposes a new mutation to improve the global searching capability and prevent convergence to local minima; in addition, a fuzzy system is used to tune parameters such as the inertia weight and learning factors. In order to evaluate the performance of the proposed algorithm, it is applied to systems consisting of 13 and 40 thermal units whose fuel cost functions take into account the effect of valve-point loading. Simulation results demonstrate the superiority of the proposed algorithm compared to other optimization algorithms presented in the literature. (author)
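
    The valve-point loading term is what makes these cost curves non-smooth: a rectified sinusoid is superimposed on the usual quadratic fuel cost. Below is a minimal penalty-based PSO sketch on a hypothetical 3-unit system; the unit coefficients, penalty weight, and all parameter values are illustrative, not taken from the paper, and the paper's fuzzy tuning and mutation steps are omitted (fixed inertia and learning factors are used instead).

```python
import math
import random

# Hypothetical 3-unit system (illustrative coefficients, not from the paper):
# cost_i(P) = a + b*P + c*P^2 + |e * sin(f * (Pmin - P))|   (valve-point term)
UNITS = [  # (a, b, c, e, f, Pmin, Pmax)
    (550.0, 8.10, 0.00028, 300.0, 0.035, 100.0, 680.0),
    (309.0, 8.10, 0.00056, 200.0, 0.042, 50.0, 360.0),
    (307.0, 8.10, 0.00056, 150.0, 0.063, 50.0, 360.0),
]
DEMAND = 850.0  # MW

def total_cost(P):
    """Non-smooth fuel cost plus a penalty enforcing the demand balance."""
    cost = sum(a + b * p + c * p * p + abs(e * math.sin(f * (pmin - p)))
               for p, (a, b, c, e, f, pmin, pmax) in zip(P, UNITS))
    return cost + 1e4 * abs(sum(P) - DEMAND)  # drives sum(P) toward DEMAND

def pso(n_particles=30, iters=300, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    lo = [u[5] for u in UNITS]
    hi = [u[6] for u in UNITS]
    X = [[rng.uniform(l, h) for l, h in zip(lo, hi)] for _ in range(n_particles)]
    V = [[0.0] * len(UNITS) for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pcost = [total_cost(x) for x in X]
    g = min(range(n_particles), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(len(UNITS)):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d] + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                # clamp each unit's output to its generation limits
                X[i][d] = min(max(X[i][d] + V[i][d], lo[d]), hi[d])
            c = total_cost(X[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = X[i][:], c
                if c < gcost:
                    gbest, gcost = X[i][:], c
    return gbest, gcost

best_P, best_cost = pso()
```

    Because the cost is non-smooth, gradient-based dispatch methods stall at the sinusoid's kinks; the swarm only ever compares function values, which is why PSO-type methods are popular for this problem class.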

  5. Hybrid Adaptive Multilevel Monte Carlo Algorithm for Non-Smooth Observables of Itô Stochastic Differential Equations

    KAUST Repository

    Rached, Nadhir B.

    2014-01-06

    A new hybrid adaptive MC forward Euler algorithm for SDEs with singular coefficients and non-smooth observables is developed. This adaptive method is based on the derivation of a new error expansion with computable leading order terms. When a non-smooth binary payoff is considered, the new adaptive method achieves the same complexity as the uniform discretization does with smooth problems. Moreover, the new algorithm is extended to the multilevel Monte Carlo (MLMC) forward Euler setting, which reduces the complexity from O(TOL^-3) to O(TOL^-2 (log TOL)^2). For the binary option case, it recovers the standard multilevel computational cost O(TOL^-2 (log TOL)^2). When considering a higher order Milstein scheme, a similar complexity result was obtained by Giles using uniform time stepping for one-dimensional SDEs, see [2]. The difficulty of extending Giles' Milstein MLMC method to the multidimensional case is an argument for the flexibility of our new adaptive MLMC forward Euler method, which can be easily adapted to this setting. Similarly, the expected complexity O(TOL^-2 (log TOL)^2) is reached for the multidimensional case and verified numerically.
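
    The telescoping identity behind MLMC, E[g_L] = E[g_0] + Σ_{l=1..L} E[g_l − g_{l−1}], can be sketched with a uniform-step forward Euler discretization of geometric Brownian motion. This bare-bones illustration uses a fixed number of samples per level; a real implementation (and the adaptive method of the abstract) would choose time steps and per-level sample counts from error estimates. All names and parameter values below are ours.

```python
import math
import random

# MLMC forward Euler sketch for E[g(S_T)] of a GBM dS = r*S dt + sigma*S dW.
# Level l uses 2^l uniform steps; fine and coarse paths share Brownian
# increments, so the level corrections g_l - g_{l-1} have small variance.
S0, r, sigma, T = 1.0, 0.05, 0.2, 1.0

def level_estimator(l, n_samples, g, rng):
    """Mean of g(fine) - g(coarse) over coupled paths (level 0: no coarse path)."""
    nf = 2 ** l
    hf = T / nf
    acc = 0.0
    for _ in range(n_samples):
        sf = sc = S0
        dw_pair = 0.0
        for k in range(nf):
            dw = rng.gauss(0.0, math.sqrt(hf))
            sf += r * sf * hf + sigma * sf * dw          # fine Euler step
            dw_pair += dw
            if l > 0 and k % 2 == 1:                     # coarse step: summed increment
                sc += r * sc * (2 * hf) + sigma * sc * dw_pair
                dw_pair = 0.0
        acc += g(sf) - (g(sc) if l > 0 else 0.0)
    return acc / n_samples

def mlmc(L, n_samples, g, seed=7):
    rng = random.Random(seed)
    return sum(level_estimator(l, n_samples, g, rng) for l in range(L + 1))

est = mlmc(L=5, n_samples=10000, g=lambda s: s)  # E[S_T] = S0 * exp(r*T)
```

    Coupling the fine and coarse paths through shared Brownian increments is what keeps the correction variances, and hence the total cost, small; with a discontinuous binary payoff those variances decay more slowly, which is where the adaptive time stepping of the abstract comes in.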

  6. Hybrid Adaptive Multilevel Monte Carlo Algorithm for Non-Smooth Observables of Itô Stochastic Differential Equations

    KAUST Repository

    Rached, Nadhir B.; Hoel, Haakon; Tempone, Raul

    2014-01-01

    A new hybrid adaptive MC forward Euler algorithm for SDEs with singular coefficients and non-smooth observables is developed. This adaptive method is based on the derivation of a new error expansion with computable leading order terms. When a non-smooth binary payoff is considered, the new adaptive method achieves the same complexity as the uniform discretization does with smooth problems. Moreover, the new algorithm is extended to the multilevel Monte Carlo (MLMC) forward Euler setting, which reduces the complexity from O(TOL^-3) to O(TOL^-2 (log TOL)^2). For the binary option case, it recovers the standard multilevel computational cost O(TOL^-2 (log TOL)^2). When considering a higher order Milstein scheme, a similar complexity result was obtained by Giles using uniform time stepping for one-dimensional SDEs, see [2]. The difficulty of extending Giles' Milstein MLMC method to the multidimensional case is an argument for the flexibility of our new adaptive MLMC forward Euler method, which can be easily adapted to this setting. Similarly, the expected complexity O(TOL^-2 (log TOL)^2) is reached for the multidimensional case and verified numerically.

  7. Compact solitary waves in linearly elastic chains with non-smooth on-site potential

    Energy Technology Data Exchange (ETDEWEB)

    Gaeta, Giuseppe [Dipartimento di Matematica, Universita di Milano, Via Saldini 50, 20133 Milan (Italy); Gramchev, Todor [Dipartimento di Matematica e Informatica, Universita di Cagliari, Via Ospedale 72, 09124 Cagliari (Italy); Walcher, Sebastian [Lehrstuhl A Mathematik, RWTH Aachen, 52056 Aachen (Germany)

    2007-04-27

    It was recently observed by Saccomandi and Sgura that one-dimensional chains with nonlinear elastic interaction and regular on-site potential can support compact solitary waves, i.e. travelling solitary waves with strictly compact support. In this paper, we show that the same applies to chains with linear elastic interaction and an on-site potential which is continuous but non-smooth at minima. Some different features arise; in particular, the speed of compact solitary waves is not uniquely fixed by the equation. We also discuss several generalizations of our findings.

  8. A Projection free method for Generalized Eigenvalue Problem with a nonsmooth Regularizer.

    Science.gov (United States)

    Hwang, Seong Jae; Collins, Maxwell D; Ravi, Sathya N; Ithapu, Vamsi K; Adluru, Nagesh; Johnson, Sterling C; Singh, Vikas

    2015-12-01

    Eigenvalue problems are ubiquitous in computer vision, covering a very broad spectrum of applications ranging from estimation problems in multi-view geometry to image segmentation. Few other linear algebra problems have a more mature set of numerical routines available and many computer vision libraries leverage such tools extensively. However, the ability to call the underlying solver only as a "black box" can often become restrictive. Many 'human in the loop' settings in vision frequently exploit supervision from an expert, to the extent that the user can be considered a subroutine in the overall system. In other cases, there is additional domain knowledge, side or even partial information that one may want to incorporate within the formulation. In general, regularizing a (generalized) eigenvalue problem with such side information remains difficult. Motivated by these needs, this paper presents an optimization scheme to solve generalized eigenvalue problems (GEP) involving a (nonsmooth) regularizer. We start from an alternative formulation of GEP where the feasibility set of the model involves the Stiefel manifold. The core of this paper presents an end to end stochastic optimization scheme for the resultant problem. We show how this general algorithm enables improved statistical analysis of brain imaging data where the regularizer is derived from other 'views' of the disease pathology, involving clinical measurements and other image-derived representations.

  9. Image denoising: Learning the noise model via nonsmooth PDE-constrained optimization

    KAUST Repository

    Reyes, Juan Carlos De los

    2013-11-01

    We propose a nonsmooth PDE-constrained optimization approach for the determination of the correct noise model in total variation (TV) image denoising. An optimization problem for the determination of the weights corresponding to different types of noise distributions is stated and existence of an optimal solution is proved. A tailored regularization approach for the approximation of the optimal parameter values is proposed thereafter and its consistency studied. Additionally, the differentiability of the solution operator is proved and an optimality system characterizing the optimal solutions of each regularized problem is derived. The optimal parameter values are numerically computed by using a quasi-Newton method, together with semismooth Newton type algorithms for the solution of the TV-subproblems. © 2013 American Institute of Mathematical Sciences.
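
    The lower level of the problem described here is classical TV denoising. As a point of reference, here is a smoothed-TV gradient-descent sketch on a 1-D signal; this is not the paper's bilevel method or its semismooth Newton solver, and the smoothing parameter eps, the weight lam, and all other values are illustrative.

```python
import math
import random

# ROF-type TV denoising of a 1-D signal by gradient descent on
#   J(u) = 0.5*||u - f||^2 + lam * sum_i sqrt((u_{i+1} - u_i)^2 + eps),
# where eps > 0 smooths the non-smooth TV term so plain gradient descent applies.
def tv_denoise(f, lam=0.3, eps=1e-2, step=0.1, iters=500):
    u = list(f)
    n = len(u)
    for _ in range(iters):
        grad = [u[i] - f[i] for i in range(n)]      # data-fidelity gradient
        for i in range(n - 1):
            d = u[i + 1] - u[i]
            g = d / math.sqrt(d * d + eps)          # d/du of smoothed |u_{i+1}-u_i|
            grad[i] -= lam * g
            grad[i + 1] += lam * g
        u = [u[i] - step * grad[i] for i in range(n)]
    return u

rng = random.Random(0)
clean = [0.0] * 50 + [1.0] * 50                     # piecewise-constant signal
noisy = [c + rng.gauss(0.0, 0.3) for c in clean]
denoised = tv_denoise(noisy)

mse = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
```

    The bilevel approach of the abstract would sit on top of such a lower-level solver, learning the fidelity weights for different noise models rather than fixing lam by hand.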

  10. Image denoising: Learning the noise model via nonsmooth PDE-constrained optimization

    KAUST Repository

    Reyes, Juan Carlos De los; Schö nlieb, Carola-Bibiane

    2013-01-01

    We propose a nonsmooth PDE-constrained optimization approach for the determination of the correct noise model in total variation (TV) image denoising. An optimization problem for the determination of the weights corresponding to different types of noise distributions is stated and existence of an optimal solution is proved. A tailored regularization approach for the approximation of the optimal parameter values is proposed thereafter and its consistency studied. Additionally, the differentiability of the solution operator is proved and an optimality system characterizing the optimal solutions of each regularized problem is derived. The optimal parameter values are numerically computed by using a quasi-Newton method, together with semismooth Newton type algorithms for the solution of the TV-subproblems. © 2013 American Institute of Mathematical Sciences.

  11. Some error estimates for the lumped mass finite element method for a parabolic problem

    KAUST Repository

    Chatzipantelidis, P.; Lazarov, R. D.; Thomé e, V.

    2012-01-01

    We study the spatially semidiscrete lumped mass method for the model homogeneous heat equation with homogeneous Dirichlet boundary conditions. Improving earlier results we show that known optimal order smooth initial data error estimates for the standard Galerkin method carry over to the lumped mass method whereas nonsmooth initial data estimates require special assumptions on the triangulation. We also discuss the application to time discretization by the backward Euler and Crank-Nicolson methods.

  12. Estimation Methods for Infinite-Dimensional Systems Applied to the Hemodynamic Response in the Brain

    KAUST Repository

    Belkhatir, Zehor

    2018-05-01

    Infinite-Dimensional Systems (IDSs) which have been made possible by recent advances in mathematical and computational tools can be used to model complex real phenomena. However, due to physical, economic, or stringent non-invasive constraints on real systems, the underlying characteristics for mathematical models in general (and IDSs in particular) are often missing or subject to uncertainty. Therefore, developing efficient estimation techniques to extract missing pieces of information from available measurements is essential. The human brain is an example of IDSs with severe constraints on information collection from controlled experiments and invasive sensors. Investigating the intriguing modeling potential of the brain is, in fact, the main motivation for this work. Here, we will characterize the hemodynamic behavior of the brain using functional magnetic resonance imaging data. In this regard, we propose efficient estimation methods for two classes of IDSs, namely Partial Differential Equations (PDEs) and Fractional Differential Equations (FDEs). This work is divided into two parts. The first part addresses the joint estimation problem of the state, parameters, and input for a coupled second-order hyperbolic PDE and an infinite-dimensional ordinary differential equation using sampled-in-space measurements. Two estimation techniques are proposed: a Kalman-based algorithm that relies on a reduced finite-dimensional model of the IDS, and an infinite-dimensional adaptive estimator whose convergence proof is based on the Lyapunov approach. We study and discuss the identifiability of the unknown variables for both cases. The second part contributes to the development of estimation methods for FDEs where major challenges arise in estimating fractional differentiation orders and non-smooth pointwise inputs. First, we propose a fractional high-order sliding mode observer to jointly estimate the pseudo-state and input of commensurate FDEs. Second, we propose a

  13. Hybrid Adaptive Multilevel Monte Carlo Algorithm for Non-Smooth Observables of Itô Stochastic Differential Equations

    KAUST Repository

    Rached, Nadhir B.

    2013-12-01

    The Monte Carlo forward Euler method with uniform time stepping is the standard technique to compute an approximation of the expected payoff of a solution of an Itô SDE. For a given accuracy requirement TOL, the complexity of this technique for well behaved problems, that is the amount of computational work to solve the problem, is O(TOL^-3). A new hybrid adaptive Monte Carlo forward Euler algorithm for SDEs with non-smooth coefficients and low regular observables is developed in this thesis. This adaptive method is based on the derivation of a new error expansion with computable leading-order terms. The basic idea of the new expansion is the use of a mixture of prior information to determine the weight functions and posterior information to compute the local error. In a number of numerical examples the superior efficiency of the hybrid adaptive algorithm over the standard uniform time stepping technique is verified. When a non-smooth binary payoff with either GBM or drift singularity type of SDEs is considered, the new adaptive method achieves the same complexity as the uniform discretization does with smooth problems. Moreover, the new algorithm is extended to the MLMC forward Euler setting, which reduces the complexity from O(TOL^-3) to O(TOL^-2 (log TOL)^2). For the binary option case with the same type of Itô SDEs, the hybrid adaptive MLMC forward Euler recovers the standard multilevel computational cost O(TOL^-2 (log TOL)^2). When considering a higher order Milstein scheme, a similar complexity result was obtained by Giles using uniform time stepping for one-dimensional SDEs. The difficulty of extending Giles' Milstein MLMC method to the multidimensional case is an argument for the flexibility of our new adaptive MLMC forward Euler method which can be easily adapted to this setting. Similarly, the expected complexity O(TOL^-2 (log TOL)^2) is reached for the multidimensional case and verified numerically.

  14. Observer-Based Human Knee Stiffness Estimation.

    Science.gov (United States)

    Misgeld, Berno J E; Luken, Markus; Riener, Robert; Leonhardt, Steffen

    2017-05-01

    We consider the problem of stiffness estimation for the human knee joint during motion in the sagittal plane. The new stiffness estimator uses a nonlinear reduced-order biomechanical model and a body sensor network (BSN). The developed model is based on a two-dimensional knee kinematics approach to calculate the angle-dependent lever arms and the torques of the muscle-tendon complex. To minimize errors in the knee stiffness estimation procedure that result from model uncertainties, a nonlinear observer is developed. The observer uses the electromyogram (EMG) of involved muscles as input signals and the segmental orientation as the output signal to correct the observer-internal states. Because of dominating model nonlinearities and nonsmoothness of the corresponding nonlinear functions, an unscented Kalman filter is designed to compute and update the observer feedback (Kalman) gain matrix. The observer-based stiffness estimation algorithm is subsequently evaluated in simulations and in a test bench, specifically designed to provide robotic movement support for the human knee joint. In silico and experimental validation underline the good performance of the knee stiffness estimation even in the cases of knee stiffening due to antagonistic coactivation. We have shown the principle of an observer-based approach to knee stiffness estimation that employs EMG signals and segmental orientation provided by our own IPANEMA BSN. The presented approach makes real-time, model-based estimation of knee stiffness with minimal instrumentation possible.

  15. A one-layer recurrent neural network for non-smooth convex optimization subject to linear inequality constraints

    International Nuclear Information System (INIS)

    Liu, Xiaolan; Zhou, Mi

    2016-01-01

    In this paper, a one-layer recurrent network is proposed for solving a non-smooth convex optimization problem subject to linear inequality constraints. Compared with the existing neural networks for optimization, the proposed neural network is capable of solving more general convex optimization problems with linear inequality constraints. Convergence of the state variables of the proposed neural network to an optimal solution is guaranteed as long as the designed parameters in the model are larger than the derived lower bounds.
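
    A discrete-time cousin of such network dynamics is the projected subgradient method: follow a subgradient of the non-smooth objective, then project back onto the constraint set. The sketch below is an analogue for intuition only, not the paper's network model; the toy problem and all names are ours.

```python
# Projected subgradient method for a non-smooth convex problem with one
# linear inequality constraint:
#   minimize |x1| + |x2|   subject to   x1 + x2 >= 1.
# Every point of the segment {x >= 0, x1 + x2 = 1} is optimal, with value 1.

def subgrad_l1(x):
    """A subgradient of the l1-norm (0 is a valid choice at a kink)."""
    return [1.0 if v > 0 else -1.0 if v < 0 else 0.0 for v in x]

def project_halfspace(x, a, b):
    """Euclidean projection of x onto {y : a.y >= b}."""
    s = sum(ai * xi for ai, xi in zip(a, x))
    if s >= b:
        return x
    aa = sum(ai * ai for ai in a)
    return [xi + (b - s) * ai / aa for ai, xi in zip(a, x)]

def solve(x0, iters=200):
    x = list(x0)
    for k in range(1, iters + 1):
        step = 1.0 / k                   # diminishing step size
        g = subgrad_l1(x)
        x = [xi - step * gi for xi, gi in zip(x, g)]
        x = project_halfspace(x, [1.0, 1.0], 1.0)
    return x

x = solve([3.0, -2.0])
obj = abs(x[0]) + abs(x[1])
```

    With the diminishing step size 1/k the iterates settle on the optimal face {x >= 0, x1 + x2 = 1}, where the objective value is 1; the continuous-time networks of the abstract can be read as flow versions of the same subgradient-plus-feasibility mechanism.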

  16. Generalized Pattern Search methods for a class of nonsmooth optimization problems with structure

    Science.gov (United States)

    Bogani, C.; Gasparo, M. G.; Papini, A.

    2009-07-01

    We propose a Generalized Pattern Search (GPS) method to solve a class of nonsmooth minimization problems, where the set of nondifferentiability is included in the union of known hyperplanes and, therefore, is highly structured. Both unconstrained and linearly constrained problems are considered. At each iteration the set of poll directions is enforced to conform to the geometry of both the nondifferentiability set and the boundary of the feasible region, near the current iterate. This is the key issue to guarantee the convergence of certain subsequences of iterates to points which satisfy first-order optimality conditions. Numerical experiments on some classical problems validate the method.
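
    The basic GPS mechanics — poll a pattern of directions, accept an improving point, otherwise refine the mesh — can be sketched as follows. This bare coordinate-direction version omits the paper's key ingredient (conforming the poll directions to the nondifferentiability hyperplanes and to the feasible region's boundary), and all names are ours.

```python
# Minimal coordinate-direction pattern search for a non-smooth function.
def pattern_search(f, x0, step=1.0, tol=1e-8, max_iter=10000):
    x, fx = list(x0), f(x0)
    n = len(x0)
    it = 0
    while step > tol and it < max_iter:
        it += 1
        improved = False
        for d in range(n):                  # poll the 2n coordinate directions
            for s in (+1.0, -1.0):
                y = list(x)
                y[d] += s * step
                fy = f(y)
                if fy < fx:                 # accept the first improving poll point
                    x, fx, improved = y, fy, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5                     # unsuccessful poll: refine the mesh
    return x, fx

f = lambda x: abs(x[0]) + (x[1] - 1.0) ** 2
x_best, f_best = pattern_search(f, [2.3, -1.7])
```

    On f(x) = |x1| + (x2 - 1)^2 the nondifferentiability set is the known hyperplane x1 = 0, which is exactly the structured situation the paper targets: a conforming method would include poll directions parallel to that hyperplane.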

  17. Smooth and non-smooth travelling waves in a nonlinearly dispersive Boussinesq equation

    International Nuclear Information System (INIS)

    Shen Jianwei; Xu Wei; Lei Youming

    2005-01-01

    The dynamical behavior and special exact solutions of the nonlinear dispersive Boussinesq equation (B(m,n) equation), u_tt - u_xx - a(u^n)_xx + b(u^m)_xxxx = 0, are studied by using the bifurcation theory of dynamical systems. As a result, all possible phase portraits in the parametric space for the travelling wave system, solitary wave, kink and anti-kink wave solutions, and uncountably infinitely many smooth and non-smooth periodic wave solutions are obtained. It is shown that the existence of a singular straight line in the travelling wave system is the reason why smooth waves eventually converge to cusp waves. When the parameters are varied, various sufficient conditions guaranteeing the existence of the above solutions are given.

  18. Existence and smoothness of solutions to second initial boundary value problems for Schrodinger systems in cylinders with non-smooth bases

    Directory of Open Access Journals (Sweden)

    Nguyen Manh Hung

    2008-03-01

    In this paper, we consider the second initial boundary value problem for strongly general Schrodinger systems in both the finite and the infinite cylinders $Q_T$ ($0 < T \leq \infty$) with non-smooth base $\Omega$. Some results on the existence, uniqueness and smoothness with respect to the time variable of the generalized solution of this problem are given.

  19. Unbounded critical points for a class of lower semicontinuous functionals

    OpenAIRE

    Pellacci, Benedetta; Squassina, Marco

    2003-01-01

    In this paper we prove existence and multiplicity results of unbounded critical points for a general class of weakly lower semicontinuous functionals. We will apply a suitable nonsmooth critical point theory.

  20. Identification of some nonsmooth evolution systems with illustration on adhesive contacts at small strains

    Czech Academy of Sciences Publication Activity Database

    Adam, Lukáš; Outrata, Jiří; Roubíček, Tomáš

    2017-01-01

    Roč. 66, č. 12 (2017), s. 2025-2049 ISSN 0233-1934 R&D Projects: GA ČR GA13-25911S; GA ČR GA13-18652S; GA ČR GAP201/10/0357; GA ČR(CZ) GAP201/12/0671 Grant - others:GA UK(CZ) SVV 260225/2015 Institutional support: RVO:67985556 ; RVO:61388998 Keywords : rate-independent systems * optimal control * identification * fractional-step time discretization * quadratic programming * gradient evaluation * variational analysis * implicit programming approach * limiting subdifferential * coderivative * nonsmooth contact mechanics * delamination Subject RIV: BA - General Mathematics; BA - General Mathematics (UT-L) OBOR OECD: Pure mathematics; Pure mathematics (UT-L) Impact factor: 0.943, year: 2016 http://library.utia.cas.cz/separaty/2016/MTR/adam-0453289.pdf

  1. Some error estimates for the lumped mass finite element method for a parabolic problem

    KAUST Repository

    Chatzipantelidis, P.

    2012-01-01

    We study the spatially semidiscrete lumped mass method for the model homogeneous heat equation with homogeneous Dirichlet boundary conditions. Improving earlier results we show that known optimal order smooth initial data error estimates for the standard Galerkin method carry over to the lumped mass method whereas nonsmooth initial data estimates require special assumptions on the triangulation. We also discuss the application to time discretization by the backward Euler and Crank-Nicolson methods. © 2011 American Mathematical Society.

  2. Response of a uniform optical fiber Bragg grating to strain with a non-smooth distribution: measurements and simulations

    Science.gov (United States)

    Detka, Małgorzata

    2017-08-01

    The paper presents results of numerical analyses of the response of a uniform fiber Bragg grating subjected to a strain with a non-smooth profile. Results of measurements of the response of the grating to a compressive strain correspond well with results of the simulation and show that the induced strain profile of the grating causes a widening of its reflection spectrum with a considerable shape irregularity, dependent on the location of the point where the slope of the strain profile changes abruptly, and on the maximum value of the strain.

  3. A new honey bee mating optimization algorithm for non-smooth economic dispatch

    International Nuclear Information System (INIS)

    Niknam, Taher; Mojarrad, Hasan Doagou; Meymand, Hamed Zeinoddini; Firouzi, Bahman Bahmani

    2011-01-01

    The non-storage characteristics of electricity and increasing fuel costs worldwide call for the need to operate power systems more economically. Economic dispatch (ED) is one of the most important optimization problems in power systems. ED has the objective of dividing the power demand among the online generators economically while satisfying various constraints; its importance lies in obtaining the maximum usable power using minimum resources. To solve the static ED problem, the honey bee mating optimization (HBMO) algorithm can be used. The basic disadvantage of the original HBMO algorithm is that it may miss the optimum and provide a near-optimum solution within a limited runtime. To avoid this shortcoming, we propose a new method that improves the mating process of HBMO and combines the improved HBMO with a Chaotic Local Search (CLS), called Chaotic Improved Honey Bee Mating Optimization (CIHBMO). The proposed algorithm is used to solve ED problems taking into account nonlinear generator characteristics such as prohibited operating zones, multiple fuels and valve-point loading effects. The CIHBMO algorithm is tested on three test systems and compared with other methods in the literature. Results show that the proposed method is efficient and fast for ED problems with non-smooth and non-continuous fuel cost functions. Moreover, the optimal power dispatch obtained by the algorithm is superior to previously reported results. -- Research highlights: economic dispatch; reducing electrical energy loss; saving electrical energy; optimal operation.
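
    The chaotic local search component can be illustrated with a logistic map: a deterministic, non-repeating sequence supplies the perturbations around the incumbent solution. The sketch below is hypothetical — the function names, map seeds, search radius, and toy objective are ours, not the paper's.

```python
# Chaotic local search (CLS) sketch: logistic-map values z in (0,1) are mapped
# to perturbations in [-radius*(hi-lo), +radius*(hi-lo)] around the incumbent.
def chaotic_local_search(f, x, lo, hi, rounds=300, radius=0.05):
    best, fbest = list(x), f(x)
    # map seeds in (0,1), chosen away from the fixed points 0, 0.25, 0.5, 0.75
    z = [(0.31 + 0.17 * i) % 1.0 for i in range(len(x))]
    for _ in range(rounds):
        z = [4.0 * zi * (1.0 - zi) for zi in z]          # logistic map, r = 4
        cand = [min(max(bi + radius * (h - l) * (2.0 * zi - 1.0), l), h)
                for bi, zi, l, h in zip(best, z, lo, hi)]
        fc = f(cand)
        if fc < fbest:                                   # greedy acceptance
            best, fbest = cand, fc
    return best, fbest

# Toy usage on a 2-D non-smooth objective with optimum at (0.3, -0.2):
f = lambda x: abs(x[0] - 0.3) + abs(x[1] + 0.2)
best, fbest = chaotic_local_search(f, [0.0, 0.0], [-1.0, -1.0], [1.0, 1.0])
```

    In CIHBMO-style hybrids such a local search is typically run around the best solution returned by the global (mating) phase, trading the ergodicity of the chaotic sequence for the cost of extra function evaluations.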

  4. A two-layer recurrent neural network for nonsmooth convex optimization problems.

    Science.gov (United States)

    Qin, Sitian; Xue, Xiaoping

    2015-06-01

    In this paper, a two-layer recurrent neural network is proposed to solve the nonsmooth convex optimization problem subject to convex inequality and linear equality constraints. Compared with existing neural network models, the proposed neural network has a low model complexity and avoids penalty parameters. It is proved that from any initial point, the state of the proposed neural network reaches the equality feasible region in finite time and stays there thereafter. Moreover, the state is unique if the initial point lies in the equality feasible region. The equilibrium point set of the proposed neural network is proved to be equivalent to the Karush-Kuhn-Tucker optimality set of the original optimization problem. It is further proved that the equilibrium point of the proposed neural network is stable in the sense of Lyapunov. Moreover, from any initial point, the state is proved to be convergent to an equilibrium point of the proposed neural network. Finally, as applications, the proposed neural network is used to solve nonlinear convex programming with linear constraints and L1-norm minimization problems.

  5. Error Estimates for a Semidiscrete Finite Element Method for Fractional Order Parabolic Equations

    KAUST Repository

    Jin, Bangti

    2013-01-01

    We consider the initial boundary value problem for a homogeneous time-fractional diffusion equation with an initial condition ν(x) and a homogeneous Dirichlet boundary condition in a bounded convex polygonal domain Ω. We study two semidiscrete approximation schemes, i.e., the Galerkin finite element method (FEM) and the lumped mass Galerkin FEM, using piecewise linear functions. We establish error estimates that are almost optimal with respect to the data regularity, including the cases of smooth and nonsmooth initial data, i.e., ν ∈ H^2(Ω) ∩ H^1_0(Ω) and ν ∈ L^2(Ω). For the lumped mass method, the optimal L^2-norm error estimate is valid only under an additional assumption on the mesh, which in two dimensions is known to be satisfied for symmetric meshes. Finally, we present some numerical results that give insight into the reliability of the theoretical study. © 2013 Society for Industrial and Applied Mathematics.

  6. Regularity of the Maxwell equations in heterogeneous media and Lipschitz domains

    KAUST Repository

    Bonito, Andrea

    2013-12-01

    This note establishes regularity estimates for the solution of the Maxwell equations in Lipschitz domains with non-smooth coefficients and minimal regularity assumptions. The argumentation relies on elliptic regularity estimates for the Poisson problem with non-smooth coefficients. © 2013 Elsevier Ltd.

  7. Single image super-resolution based on approximated Heaviside functions and iterative refinement

    Science.gov (United States)

    Wang, Xin-Yu; Huang, Ting-Zhu; Deng, Liang-Jian

    2018-01-01

    One method of solving the single-image super-resolution problem is to use Heaviside functions. This has been done previously by making a binary classification of image components as “smooth” and “non-smooth”, describing these with approximated Heaviside functions (AHFs), and iteration including l1 regularization. We now introduce a new method in which the binary classification of image components is extended to different degrees of smoothness and non-smoothness, these components being represented by various classes of AHFs. Taking into account the sparsity of the non-smooth components, their coefficients are l1 regularized. In addition, to pick up more image details, the new method uses an iterative refinement for the residuals between the original low-resolution input and the downsampled resulting image. Experimental results showed that the new method is superior to the original AHF method and to four other published methods. PMID:29329298
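
    The building block here, an approximated Heaviside function, is easy to state: a step function mollified by an arctangent whose width parameter sets the "degree of smoothness". A sketch using one common AHF form follows; the exact parameterization used by the paper may differ.

```python
import math

# 1-D approximated Heaviside function (AHF): a smoothed unit step whose
# width delta controls the degree of smoothness of the modeled edge.
def ahf(x, delta):
    """Approximated Heaviside step: 0.5 * (1 + (2/pi) * arctan(x / delta))."""
    return 0.5 * (1.0 + (2.0 / math.pi) * math.atan(x / delta))

ts = [i / 100.0 for i in range(101)]
sharp = [ahf(t - 0.5, 0.01) for t in ts]   # narrow AHF: near-ideal edge at t = 0.5
smooth = [ahf(t - 0.5, 0.5) for t in ts]   # wide AHF: gentle ramp
```

    Representing image components as combinations of AHFs of different widths is what lets the method treat "smooth" and "non-smooth" content on a continuum rather than as a binary split.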

  8. A higher order numerical method for time fractional partial differential equations with nonsmooth data

    Science.gov (United States)

    Xing, Yanyuan; Yan, Yubin

    2018-03-01

    Gao et al. [11] (2014) introduced a numerical scheme to approximate the Caputo fractional derivative with the convergence rate O(k^(3-α)), 0 < α < 1. When the solution of the time fractional partial differential equation is sufficiently smooth, Lv and Xu [20] (2016) proved by using the energy method that the corresponding numerical method has the convergence rate O(k^(3-α)), 0 < α < 1. However, when the solution has low regularity, the numerical method fails to attain this convergence rate. In this paper, we approximate the Caputo fractional derivative by using piecewise quadratic interpolation polynomials. Based on this scheme, we introduce a time discretization scheme to approximate the time fractional partial differential equation and show, by using Laplace transform methods, that the scheme has the convergence rate O(k^(3-α)), 0 < α < 1, for smooth and nonsmooth data in both homogeneous and inhomogeneous cases. Numerical examples are given to show that the theoretical results are consistent with the numerical results.

  9. Second-order numerical methods for multi-term fractional differential equations: Smooth and non-smooth solutions

    Science.gov (United States)

    Zeng, Fanhai; Zhang, Zhongqiang; Karniadakis, George Em

    2017-12-01

    Starting with the asymptotic expansion of the error equation of the shifted Grünwald-Letnikov formula, we derive a new modified weighted shifted Grünwald-Letnikov (WSGL) formula by introducing appropriate correction terms. We then apply one special case of the modified WSGL formula to solve multi-term fractional ordinary and partial differential equations, and we prove the linear stability and second-order convergence for both smooth and non-smooth solutions. We show theoretically and numerically that numerical solutions up to certain accuracy can be obtained with only a few correction terms. Moreover, the correction terms can be tuned according to the fractional derivative orders without explicitly knowing the analytical solutions. Numerical simulations verify the theoretical results and demonstrate that the new formula leads to better performance compared to other known numerical approximations with similar resolution.
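
    Underneath all shifted and weighted variants sits the plain Grünwald-Letnikov approximation with the recursively computed weights g_0 = 1, g_k = g_{k-1} (k - 1 - α)/k, i.e. g_k = (-1)^k C(α, k). The sketch below shows this first-order building block only; the paper's WSGL formula adds shifts, weights, and correction terms on top of it, and all names here are ours.

```python
import math

# Unshifted Grünwald-Letnikov approximation of the Riemann-Liouville
# derivative of order alpha on [0, t], first-order accurate in h for
# sufficiently smooth f with f(0) = 0.
def gl_derivative(f, t, alpha, h):
    n = int(round(t / h))
    g, acc = 1.0, f(t)                    # k = 0 term, weight g_0 = 1
    for k in range(1, n + 1):
        g *= (k - 1 - alpha) / k          # g_k = (-1)^k * binom(alpha, k)
        acc += g * f(t - k * h)
    return acc / h ** alpha

# Check against the exact RL derivative of f(t) = t^2:
#   D^alpha t^2 = Gamma(3) / Gamma(3 - alpha) * t^(2 - alpha).
alpha, t, h = 0.5, 1.0, 1.0 / 1024.0
approx = gl_derivative(lambda s: s * s, t, alpha, h)
exact = math.gamma(3.0) / math.gamma(3.0 - alpha) * t ** (2.0 - alpha)
```

    The recursion for g_k avoids factorial overflow and is the standard way these weights are generated; shifting the stencil and combining several shifted sums is what lifts the order from one to two in WSGL-type formulas.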

  10. Finite-time convergent recurrent neural network with a hard-limiting activation function for constrained optimization with piecewise-linear objective functions.

    Science.gov (United States)

    Liu, Qingshan; Wang, Jun

    2011-04-01

    This paper presents a one-layer recurrent neural network for solving a class of constrained nonsmooth optimization problems with piecewise-linear objective functions. The proposed neural network is guaranteed to be globally convergent in finite time to the optimal solutions under a mild condition on a derived lower bound of a single gain parameter in the model. The number of neurons in the neural network is the same as the number of decision variables of the optimization problem. Compared with existing neural networks for optimization, the proposed neural network has a couple of salient features such as finite-time convergence and a low model complexity. Specific models for two important special cases, namely, linear programming and nonsmooth optimization, are also presented. In addition, applications to the shortest path problem and constrained least absolute deviation problem are discussed with simulation results to demonstrate the effectiveness and characteristics of the proposed neural network.

  11. On Functional Calculus Estimates

    NARCIS (Netherlands)

    Schwenninger, F.L.

    2015-01-01

    This thesis presents various results within the field of operator theory that are formulated in estimates for functional calculi. Functional calculus is the general concept of defining operators of the form $f(A)$, where f is a function and $A$ is an operator, typically on a Banach space. Norm

  12. Model-Based Estimation of Ankle Joint Stiffness

    Directory of Open Access Journals (Sweden)

    Berno J. E. Misgeld

    2017-03-01

    We address the estimation of biomechanical parameters with wearable measurement technologies. In particular, we focus on the estimation of sagittal plane ankle joint stiffness in dorsiflexion/plantar flexion. For this estimation, a novel nonlinear biomechanical model of the lower leg was formulated that is driven by electromyographic signals. The model incorporates a two-dimensional kinematic description in the sagittal plane for the calculation of muscle lever arms and torques. To reduce estimation errors due to model uncertainties, a filtering algorithm is necessary that employs segmental orientation sensor measurements. Because of the model’s inherent nonlinearities and nonsmooth dynamics, a square-root cubature Kalman filter was developed. The performance of the novel estimation approach was evaluated in silico and in an experimental procedure. The experimental study was conducted with body-worn sensors and a test-bench that was specifically designed to obtain reference angle and torque measurements for a single joint. Results show that the filter is able to reconstruct joint angle positions, velocities and torque, as well as, joint stiffness during experimental test bench movements.

  13. Model-Based Estimation of Ankle Joint Stiffness.

    Science.gov (United States)

    Misgeld, Berno J E; Zhang, Tony; Lüken, Markus J; Leonhardt, Steffen

    2017-03-29

    We address the estimation of biomechanical parameters with wearable measurement technologies. In particular, we focus on the estimation of sagittal plane ankle joint stiffness in dorsiflexion/plantar flexion. For this estimation, a novel nonlinear biomechanical model of the lower leg was formulated that is driven by electromyographic signals. The model incorporates a two-dimensional kinematic description in the sagittal plane for the calculation of muscle lever arms and torques. To reduce estimation errors due to model uncertainties, a filtering algorithm is necessary that employs segmental orientation sensor measurements. Because of the model's inherent nonlinearities and nonsmooth dynamics, a square-root cubature Kalman filter was developed. The performance of the novel estimation approach was evaluated in silico and in an experimental procedure. The experimental study was conducted with body-worn sensors and a test-bench that was specifically designed to obtain reference angle and torque measurements for a single joint. Results show that the filter is able to reconstruct joint angle positions, velocities and torque, as well as, joint stiffness during experimental test bench movements.

  14. Model-Based Estimation of Ankle Joint Stiffness

    Science.gov (United States)

    Misgeld, Berno J. E.; Zhang, Tony; Lüken, Markus J.; Leonhardt, Steffen

    2017-01-01

    We address the estimation of biomechanical parameters with wearable measurement technologies. In particular, we focus on the estimation of sagittal plane ankle joint stiffness in dorsiflexion/plantar flexion. For this estimation, a novel nonlinear biomechanical model of the lower leg was formulated that is driven by electromyographic signals. The model incorporates a two-dimensional kinematic description in the sagittal plane for the calculation of muscle lever arms and torques. To reduce estimation errors due to model uncertainties, a filtering algorithm is necessary that employs segmental orientation sensor measurements. Because of the model’s inherent nonlinearities and nonsmooth dynamics, a square-root cubature Kalman filter was developed. The performance of the novel estimation approach was evaluated in silico and in an experimental procedure. The experimental study was conducted with body-worn sensors and a test-bench that was specifically designed to obtain reference angle and torque measurements for a single joint. Results show that the filter is able to reconstruct joint angle positions, velocities and torque, as well as, joint stiffness during experimental test bench movements. PMID:28353683

  15. Estimation of Correlation Functions by Random Decrement

    DEFF Research Database (Denmark)

    Asmussen, J. C.; Brincker, Rune

    This paper illustrates how correlation functions can be estimated by the random decrement technique. Several different formulations of the random decrement technique for estimating the correlation functions are considered. The speed and accuracy of the different formulations of the random decrement... and the length of the correlation functions. The accuracy of the estimates with respect to the theoretical correlation functions and the modal parameters are both investigated. The modal parameters are extracted from the correlation functions using the polyreference time domain technique.

  16. Estimating Function Approaches for Spatial Point Processes

    Science.gov (United States)

    Deng, Chong

    Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization of a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer from a loss of information because they ignore the correlation among pairs. For many types of correlated data other than spatial point processes, when likelihood-based approaches are not desirable, estimating functions have been widely used for model fitting. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on asymptotically optimal estimating function theories, can be used to incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives for balancing the trade-off between computational complexity and estimation efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation with estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators for the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data. Second, we further explore the quasi-likelihood approach on fitting

  17. Estimating state-contingent production functions

    DEFF Research Database (Denmark)

    Rasmussen, Svend; Karantininis, Kostas

    The paper reviews the empirical problem of estimating state-contingent production functions. The major problem is that states of nature may not be registered and/or that the number of observations per state is low. Monte Carlo simulation is used to generate an artificial, uncertain production environment based on Cobb-Douglas production functions with state-contingent parameters. The parameters are subsequently estimated based on different sizes of samples using Generalized Least Squares and Generalized Maximum Entropy, and the results are compared. It is concluded that Maximum Entropy may...

  18. Non-Parametric Estimation of Correlation Functions

    DEFF Research Database (Denmark)

    Brincker, Rune; Rytter, Anders; Krenk, Steen

    In this paper three methods of non-parametric correlation function estimation are reviewed and evaluated: the direct method, estimation by the Fast Fourier Transform and finally estimation by the Random Decrement technique. The basic ideas of the techniques are reviewed, sources of bias are point...

  19. Surface morphology of laser tracks used for forming the non-smooth biomimetic unit of 3Cr2W8V steel under different processing parameters

    International Nuclear Information System (INIS)

    Zhang Zhihui; Zhou Hong; Ren Luquan; Tong Xin; Shan Hongyu; Li Xianzhou

    2008-01-01

    Aiming to form high-quality non-smooth biomimetic units, the influence of laser processing parameters (pulse energy, pulse duration, frequency and scanning speed in the present work) on the surface morphology of scanned tracks was studied on the 3Cr2W8V die steel. The evolution of the surface morphology was explained according to the degree of melting and vaporization of the surface material, and the trend of mean surface roughness and maximum peak-to-valley height. Cross-section morphology revealed the significant microstructural characteristics of the laser-treated zone used for forming the functional zone on the biomimetic surface. Results showed that the combination of pulse energy and pulse duration plays a major role in determining the local height difference on the irradiated surface and the occurrence of melting or vaporization, while frequency and scanning speed have a minor effect on the surface morphology, acting mainly through the overlapping amount and overlapping mode. The mechanisms behind these influences were discussed, and schematic drawings were introduced to describe them.

  20. Estimating Stochastic Volatility Models using Prediction-based Estimating Functions

    DEFF Research Database (Denmark)

    Lunde, Asger; Brix, Anne Floor

    In this paper prediction-based estimating functions (PBEFs), introduced in Sørensen (2000), are reviewed and PBEFs for the Heston (1993) stochastic volatility model are derived. The finite sample performance of the PBEF based estimator is investigated in a Monte Carlo study, and compared to the performance of the GMM estimator based on conditional moments of integrated volatility from Bollerslev and Zhou (2002). The case where the observed log-price process is contaminated by i.i.d. market microstructure (MMS) noise is also investigated. First, the impact of MMS noise on the parameter estimates from... ...to correctly account for the noise are investigated. Our Monte Carlo study shows that the estimator based on PBEFs outperforms the GMM estimator, both in the setting with and without MMS noise. Finally, an empirical application investigates the possible challenges and general performance of applying the PBEF...

  1. Receiver function estimated by maximum entropy deconvolution

    Institute of Scientific and Technical Information of China (English)

    吴庆举; 田小波; 张乃铃; 李卫平; 曾融生

    2003-01-01

    Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule to determine the auto-correlation and cross-correlation functions. The Toeplitz equation and Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method to measure the receiver function in the time domain.
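    The Toeplitz/Levinson step mentioned in the abstract can be sketched with the classical Levinson-Durbin recursion, which solves the Toeplitz normal equations for a prediction-error filter from a handful of autocorrelation lags. The AR(1)-style autocorrelation sequence below is a hypothetical example, not data from the paper.

    ```python
    def levinson_durbin(r, order):
        """Solve the Toeplitz normal equations for a prediction-error filter
        from autocorrelations r[0..order], returning the filter coefficients
        and the final prediction-error power (Levinson recursion)."""
        a = [0.0] * order
        err = r[0]
        for i in range(order):
            # Reflection coefficient; its magnitude stays below 1 for a valid
            # autocorrelation sequence, which keeps the recursion stable --
            # the stability property the abstract alludes to.
            k = (r[i + 1] - sum(a[j] * r[i - j] for j in range(i))) / err
            a_new = a[:]
            a_new[i] = k
            for j in range(i):
                a_new[j] = a[j] - k * a[i - 1 - j]
            a = a_new
            err *= (1.0 - k * k)
        return a, err

    # AR(1) autocorrelation r[k] = rho**k: the first coefficient recovers rho,
    # higher-order coefficients vanish, and the error power is r0*(1 - rho**2).
    rho = 0.5
    r = [rho ** k for k in range(3)]
    coeffs, e = levinson_durbin(r, 2)
    ```
    
    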

  2. Carleman estimates, observability inequalities and null controllability for interior degenerate nonsmooth parabolic equations

    CERN Document Server

    Fragnelli, Genni

    2016-01-01

    The authors consider a parabolic problem with degeneracy in the interior of the spatial domain, and they focus on observability results through Carleman estimates for the associated adjoint problem. The novelties of the present paper are twofold. First, the coefficient of the leading operator only belongs to a Sobolev space. Second, the degeneracy point is allowed to lie even in the interior of the control region, so that no previous result can be adapted to this situation; however, different cases can be handled, and new controllability results are established as a consequence.

  3. ESTIMATION OF FUNCTIONALS OF SPARSE COVARIANCE MATRICES.

    Science.gov (United States)

    Fan, Jianqing; Rigollet, Philippe; Wang, Weichen

    High-dimensional statistical tests often ignore correlations to gain simplicity and stability, leading to null distributions that depend on functionals of correlation matrices such as their Frobenius norm and other ℓ_r norms. Motivated by the computation of critical values of such tests, we investigate the difficulty of estimating functionals of sparse correlation matrices. Specifically, we show that simple plug-in procedures based on thresholded estimators of correlation matrices are sparsity-adaptive and minimax optimal over a large class of correlation matrices. Akin to previous results on functional estimation, the minimax rates exhibit an elbow phenomenon. Our results are further illustrated on simulated data as well as in an empirical study of data arising in financial econometrics.
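    The "simple plug-in procedure" idea can be illustrated with a minimal sketch: hard-threshold the entries of a sample correlation matrix and plug the result into the functional of interest (here the squared Frobenius norm of the off-diagonal part). The matrix entries and threshold below are hypothetical toy values, not the paper's estimator in full.

    ```python
    def thresholded_frobenius_sq(R, tau):
        """Plug-in estimate of the squared Frobenius norm of the off-diagonal
        part of a correlation matrix, after entrywise hard thresholding at tau."""
        p = len(R)
        return sum(
            R[i][j] ** 2
            for i in range(p)
            for j in range(p)
            if i != j and abs(R[i][j]) >= tau
        )

    # Toy sample correlation matrix: the small entries are treated as noise
    # and removed by the threshold, the large entries are kept.
    R_hat = [
        [1.00, 0.50, 0.02],
        [0.50, 1.00, 0.01],
        [0.02, 0.01, 1.00],
    ]
    tau = 0.10  # in theory the threshold scales like sqrt(log(p) / n)
    estimate = thresholded_frobenius_sq(R_hat, tau)
    ```

    Only the two symmetric 0.50 entries survive the threshold, so the estimate is 2 * 0.5**2 = 0.5.
    
    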

  4. Retrieval of Parameters for Three-Layer Media with Nonsmooth Interfaces for Subsurface Remote Sensing

    Directory of Open Access Journals (Sweden)

    Yuriy Goykhman

    2012-01-01

    A solution to the inverse problem for a three-layer medium with nonsmooth boundaries, representing a large class of natural subsurface structures, is developed in this paper using simulated radar data. The retrieval of the layered medium parameters is accomplished as a sequential nonlinear optimization starting from the top layer and progressively characterizing the layers below. The optimization process is achieved by an iterative technique built around the solution of the forward scattering problem. The forward scattering process is formulated by using the extended boundary condition method (EBCM) and constructing reflection and transmission matrices for each interface. These matrices are then combined into the generalized scattering matrix for the entire system, from which radar scattering coefficients are then computed. To be efficiently utilized in the inverse problem, the forward scattering model is simulated over a wide range of unknowns to obtain a complete set of subspace-based equivalent closed-form models that relate radar backscattering coefficients to the sought-for parameters including dielectric constants of each layer and separation of the layers. The inversion algorithm is implemented as a modified conjugate-gradient-based nonlinear optimization. It is shown that this technique results in accurate retrieval of surface and subsurface parameters, even in the presence of noise.

  5. Fractal diffusion coefficient from dynamical zeta functions

    Energy Technology Data Exchange (ETDEWEB)

    Cristadoro, Giampaolo [Max Planck Institute for the Physics of Complex Systems, Noethnitzer Str. 38, D 01187 Dresden (Germany)

    2006-03-10

    Dynamical zeta functions provide a powerful method to analyse low-dimensional dynamical systems when the underlying symbolic dynamics is under control. On the other hand, even simple one-dimensional maps can show an intricate structure of the grammar rules that may lead to a non-smooth dependence of global observables on parameter changes. A paradigmatic example is the fractal diffusion coefficient arising in a simple piecewise linear one-dimensional map of the real line. Using the Baladi-Ruelle generalization of the Milnor-Thurston kneading determinant, we provide the exact dynamical zeta function for such a map and compute the diffusion coefficient from its smallest zero. (letter to the editor)

  6. Fractal diffusion coefficient from dynamical zeta functions

    International Nuclear Information System (INIS)

    Cristadoro, Giampaolo

    2006-01-01

    Dynamical zeta functions provide a powerful method to analyse low-dimensional dynamical systems when the underlying symbolic dynamics is under control. On the other hand, even simple one-dimensional maps can show an intricate structure of the grammar rules that may lead to a non-smooth dependence of global observables on parameter changes. A paradigmatic example is the fractal diffusion coefficient arising in a simple piecewise linear one-dimensional map of the real line. Using the Baladi-Ruelle generalization of the Milnor-Thurston kneading determinant, we provide the exact dynamical zeta function for such a map and compute the diffusion coefficient from its smallest zero. (letter to the editor)

  7. Effect of biomimetic non-smooth unit morphology on thermal fatigue behavior of H13 hot-work tool steel

    Science.gov (United States)

    Meng, Chao; Zhou, Hong; Cong, Dalong; Wang, Chuanwei; Zhang, Peng; Zhang, Zhihui; Ren, Luquan

    2012-06-01

    The thermal fatigue behavior of hot-work tool steel processed by a biomimetic coupled laser remelting process shows a remarkable improvement compared to the untreated sample. The 'dowel pin effect', the 'dam effect' and the 'fence effect' of non-smooth units are the main reasons for this conspicuous improvement of the thermal fatigue behavior. In order to further enhance the 'dowel pin effect', the 'dam effect' and the 'fence effect', this study investigated the effect of different unit morphologies (including 'prolate', 'U' and 'V' morphologies) and of the same unit morphology in different sizes on the thermal fatigue behavior of H13 hot-work tool steel. The results showed that the 'U' morphology unit had the best thermal fatigue behavior, followed by the 'V' morphology, which was better than the 'prolate' morphology; when the unit morphology was identical, the thermal fatigue behavior of samples with large unit sizes was better than that of samples with small sizes.

  8. Variance computations for functionals of absolute risk estimates.

    Science.gov (United States)

    Pfeiffer, R M; Petracci, E

    2011-07-01

    We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates.

  9. Estimating Functions with Prior Knowledge, (EFPK) for diffusions

    DEFF Research Database (Denmark)

    Nolsøe, Kim; Kessler, Mathieu; Madsen, Henrik

    2003-01-01

    In this paper a method is formulated in an estimating function setting for parameter estimation, which allows the use of prior information. The main idea is to use prior knowledge of the parameters, either specified as moment restrictions or as a distribution, and use it in the construction of an estimating function. It may be useful when the full Bayesian analysis is difficult to carry out for computational reasons. This is almost always the case for diffusions, which is the focus of this paper, though the method applies in other settings.

  10. Using subjective percentiles and test data for estimating fragility functions

    International Nuclear Information System (INIS)

    George, L.L.; Mensing, R.W.

    1981-01-01

    Fragility functions are cumulative distribution functions (cdfs) of strengths at failure. They are needed for reliability analyses of systems such as power generation and transmission systems. Subjective opinions supplement sparse test data for estimating fragility functions. Often the opinions are opinions on the percentiles of the fragility function. Subjective percentiles are likely to be less biased than opinions on parameters of cdfs. Solutions to several problems in the estimation of fragility functions are found for subjective percentiles and test data. How subjective percentiles should be used to estimate subjective fragility functions, how subjective percentiles should be combined with test data, how fragility functions for several failure modes should be combined into a composite fragility function, and how inherent randomness and uncertainty due to lack of knowledge should be represented are considered. Subjective percentiles are treated as independent estimates of percentiles. The following are derived: least-squares parameter estimators for normal and lognormal cdfs, based on subjective percentiles (the method is applicable to any invertible cdf); a composite fragility function for combining several failure modes; estimators of variation within and between groups of experts for nonidentically distributed subjective percentiles; weighted least-squares estimators when subjective percentiles have higher variation at higher percents; and weighted least-squares and Bayes parameter estimators based on combining subjective percentiles and test data. 4 figures, 2 tables
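    The least-squares fit of normal cdf parameters to subjective percentiles can be sketched as follows; a lognormal fragility function is handled the same way by fitting the logarithms of the strengths. The percentile values below are hypothetical, and the sketch uses ordinary (unweighted) least squares rather than the report's weighted variants.

    ```python
    from statistics import NormalDist

    def fit_normal_from_percentiles(pairs):
        """Least-squares fit of (mu, sigma) from percentile pairs (p, x).
        A normal cdf implies x = mu + sigma * z_p, where z_p is the standard
        normal quantile of p, so we regress x on z_p."""
        z = [NormalDist().inv_cdf(p) for p, _ in pairs]
        x = [xi for _, xi in pairs]
        n = len(pairs)
        zbar, xbar = sum(z) / n, sum(x) / n
        sigma = (sum((zi - zbar) * (xi - xbar) for zi, xi in zip(z, x))
                 / sum((zi - zbar) ** 2 for zi in z))
        mu = xbar - sigma * zbar
        return mu, sigma

    # Subjective percentiles consistent with N(mu=3, sigma=2) are recovered
    # exactly; real elicited percentiles would scatter around the fitted line.
    true = NormalDist(mu=3.0, sigma=2.0)
    pairs = [(p, true.inv_cdf(p)) for p in (0.1, 0.25, 0.5, 0.75, 0.9)]
    mu, sigma = fit_normal_from_percentiles(pairs)
    ```
    
    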

  11. Malware Function Estimation Using API in Initial Behavior

    OpenAIRE

    KAWAGUCHI, Naoto; OMOTE, Kazumasa

    2017-01-01

    Malware proliferation has become a serious threat to the Internet in recent years. Most current malware are subspecies of existing malware that have been automatically generated by illegal tools. To conduct an efficient analysis of malware, estimating their functions in advance is effective when deciding which malware to analyze first. However, estimating the malware functions has been difficult due to the increasing sophistication of malware. Actually, the previous studies do not estimate the...

  12. Optimal estimation of the intensity function of a spatial point process

    DEFF Research Database (Denmark)

    Guan, Yongtao; Jalilian, Abdollah; Waagepetersen, Rasmus

    easily computable estimating functions. We derive the optimal estimating function in a class of first-order estimating functions. The optimal estimating function depends on the solution of a certain Fredholm integral equation and reduces to the likelihood score in case of a Poisson process. We discuss...

  13. Thresholding projection estimators in functional linear models

    OpenAIRE

    Cardot, Hervé; Johannes, Jan

    2010-01-01

    We consider the problem of estimating the regression function in functional linear regression models by proposing a new type of projection estimator which combines dimension reduction and thresholding. The introduction of a threshold rule allows us to obtain consistency under broad assumptions as well as minimax rates of convergence under additional regularity hypotheses. We also consider the particular case of Sobolev spaces generated by the trigonometric basis which permits to get easily mean squ...

  14. A logistic regression estimating function for spatial Gibbs point processes

    DEFF Research Database (Denmark)

    Baddeley, Adrian; Coeurjolly, Jean-François; Rubak, Ege

    We propose a computationally efficient logistic regression estimating function for spatial Gibbs point processes. The sample points for the logistic regression consist of the observed point pattern together with a random pattern of dummy points. The estimating function is closely related to the p...

  15. PHAZE, Parametric Hazard Function Estimation

    International Nuclear Information System (INIS)

    2002-01-01

    1 - Description of program or function: Phaze performs statistical inference calculations on a hazard function (also called a failure rate or intensity function) based on reported failure times of components that are repaired and restored to service. Three parametric models are allowed: the exponential, linear, and Weibull hazard models. The inference includes estimation (maximum likelihood estimators and confidence regions) of the parameters and of the hazard function itself, testing of hypotheses such as increasing failure rate, and checking of the model assumptions. 2 - Methods: PHAZE assumes that the failures of a component follow a time-dependent (or non-homogenous) Poisson process and that the failure counts in non-overlapping time intervals are independent. Implicit in the independence property is the assumption that the component is restored to service immediately after any failure, with negligible repair time. The failures of one component are assumed to be independent of those of another component; a proportional hazards model is used. Data for a component are called time censored if the component is observed for a fixed time-period, or plant records covering a fixed time-period are examined, and the failure times are recorded. The number of these failures is random. Data are called failure censored if the component is kept in service until a predetermined number of failures has occurred, at which time the component is removed from service. In this case, the number of failures is fixed, but the end of the observation period equals the final failure time and is random. A typical PHAZE session consists of reading failure data from a file prepared previously, selecting one of the three models, and performing data analysis (i.e., performing the usual statistical inference about the parameters of the model, with special emphasis on the parameter(s) that determine whether the hazard function is increasing). 
The final goals of the inference are a point estimate
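    As a hedged illustration of the kind of inference PHAZE performs, a Weibull hazard model for time-censored data corresponds to a power-law Poisson process intensity, whose conditional maximum likelihood estimators have a simple closed form (the classical power-law/Duane-type result). The failure times and observation window below are hypothetical, and this sketch is not PHAZE's actual implementation.

    ```python
    from math import log

    def power_law_mle(times, T):
        """Conditional MLEs for a power-law (Weibull-hazard) nonhomogeneous
        Poisson process with intensity lam * beta * t**(beta - 1), observed
        on (0, T] (time-censored data):
            beta_hat = n / sum(ln(T / t_i)),  lam_hat = n / T**beta_hat."""
        n = len(times)
        beta = n / sum(log(T / t) for t in times)
        lam = n / T ** beta
        return lam, beta

    # Hypothetical failure times over a 10-unit observation window.
    # beta_hat < 1 indicates a decreasing hazard (reliability growth).
    lam_hat, beta_hat = power_law_mle([1.0, 2.0, 4.0, 8.0], 10.0)
    ```
    
    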

  16. On the convergence of nonconvex minimization methods for image recovery.

    Science.gov (United States)

    Xiao, Jin; Ng, Michael Kwok-Po; Yang, Yu-Fei

    2015-05-01

    Nonconvex nonsmooth regularization methods have been shown to be effective for restoring images with neat edges. Fast alternating minimization schemes have also been proposed and developed to solve the nonconvex nonsmooth minimization problem. The main contribution of this paper is to show the convergence of these alternating minimization schemes, based on the Kurdyka-Łojasiewicz property. In particular, we show that the iterates generated by the alternating minimization scheme converge to a critical point of the nonconvex nonsmooth objective function. We also extend the analysis to a nonconvex nonsmooth regularization model with box constraints, and obtain similar convergence results for the related minimization algorithm. Numerical examples are given to illustrate our convergence analysis.
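    The basic building block of such schemes, a gradient step on the smooth term followed by the proximal map of the nonsmooth term, can be sketched on a one-dimensional convex prototype (the paper's setting is nonconvex, and this toy problem with hypothetical constants only illustrates the forward-backward splitting step, not the paper's algorithm):

    ```python
    def soft_threshold(v, t):
        """Proximal operator of t * |x|, the nonsmooth part of the objective."""
        if v > t:
            return v - t
        if v < -t:
            return v + t
        return 0.0

    def proximal_gradient(a, b, lam, step, iters):
        """Minimize 0.5 * (a*x - b)**2 + lam * |x| by forward-backward
        splitting: a gradient step on the smooth term, then the prox of
        the nonsmooth term."""
        x = 0.0
        for _ in range(iters):
            grad = a * (a * x - b)  # gradient of the smooth quadratic part
            x = soft_threshold(x - step * grad, step * lam)
        return x

    # With a=2, b=3, lam=1 the minimizer is x* = 1.25: the subgradient
    # optimality condition a*(a*x - b) + lam*sign(x) = 2*(2.5 - 3) + 1 = 0.
    x_star = proximal_gradient(a=2.0, b=3.0, lam=1.0, step=0.1, iters=100)
    ```

    The iteration here is a contraction, so it converges geometrically; the Kurdyka-Łojasiewicz machinery in the paper is what extends convergence-to-a-critical-point guarantees to the nonconvex case.
    
    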

  17. The implicit function theorem history, theory, and applications

    CERN Document Server

    Krantz, Steven G

    2003-01-01

    The implicit function theorem is part of the bedrock of mathematical analysis and geometry. Finding its genesis in eighteenth century studies of real analytic functions and mechanics, the implicit and inverse function theorems have now blossomed into powerful tools in the theories of partial differential equations, differential geometry, and geometric analysis. There are many different forms of the implicit function theorem, including (i) the classical formulation for C^k functions, (ii) formulations in other function spaces, (iii) formulations for non-smooth functions, (iv) formulations for functions with degenerate Jacobian. Particularly powerful implicit function theorems, such as the Nash-Moser theorem, have been developed for specific applications (e.g., the imbedding of Riemannian manifolds). All of these topics, and many more, are treated in the present volume. The history of the implicit function theorem is a lively and complex story, and is intimately bound up with the development of fundamental ideas in a...

  18. Geodesic B-Preinvex Functions and Multiobjective Optimization Problems on Riemannian Manifolds

    Directory of Open Access Journals (Sweden)

    Sheng-lan Chen

    2014-01-01

    We introduce a class of functions called geodesic B-preinvex and geodesic B-invex functions on Riemannian manifolds and generalize the notions to the so-called geodesic quasi/pseudo B-preinvex and geodesic quasi/pseudo B-invex functions. We discuss the links among these functions under appropriate conditions and obtain results concerning extremum points of a nonsmooth geodesic B-preinvex function by using the proximal subdifferential. Moreover, we study a differentiable multiobjective optimization problem involving new classes of generalized geodesic B-invex functions and derive Kuhn-Tucker-type sufficient conditions for a feasible point to be an efficient or properly efficient solution. Finally, a Mond-Weir type duality is formulated and some duality results are given for the pair of primal and dual programming.

  19. Efficient Estimating Functions for Stochastic Differential Equations

    DEFF Research Database (Denmark)

    Jakobsen, Nina Munkholt

    The overall topic of this thesis is approximate martingale estimating function-based estimation for solutions of stochastic differential equations, sampled at high frequency. Focus lies on the asymptotic properties of the estimators. The first part of the thesis deals with diffusions observed over...

  20. Impact of Base Functional Component Types on Software Functional Size based Effort Estimation

    OpenAIRE

    Gencel, Cigdem; Buglione, Luigi

    2008-01-01

    Software effort estimation is still a significant challenge for software management. Although Functional Size Measurement (FSM) methods have been standardized and have become widely used by software organizations, the relationship between functional size and development effort still needs further investigation. Most of the studies focus on the project cost drivers and consider total software functional size as the primary input to estimation models. In this study, we investigate whether u...

  1. Optimal Bandwidth Selection for Kernel Density Functionals Estimation

    Directory of Open Access Journals (Sweden)

    Su Chen

    2015-01-01

    The choice of bandwidth is crucial to kernel density estimation (KDE) and kernel based regression. Various bandwidth selection methods for KDE and local least squares regression have been developed in the past decade. It is known that scale and location parameters are proportional to density functionals ∫γ(x)f²(x)dx with an appropriate choice of γ(x), and furthermore that equality of scale and location tests can be transformed to comparisons of the density functionals among populations. ∫γ(x)f²(x)dx can be estimated nonparametrically via kernel density functionals estimation (KDFE). However, the optimal bandwidth selection for KDFE of ∫γ(x)f²(x)dx has not been examined. We propose a method to select the optimal bandwidth for the KDFE. The idea underlying this method is to search for the optimal bandwidth by minimizing the mean square error (MSE) of the KDFE. Two main practical bandwidth selection techniques for the KDFE of ∫γ(x)f²(x)dx are provided: normal scale bandwidth selection (namely, the 'Rule of Thumb') and direct plug-in bandwidth selection. Simulation studies show that our proposed bandwidth selection methods are superior to existing density estimation bandwidth selection methods in estimating density functionals.
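    For the special case γ(x) = 1, the functional is θ = ∫f²(x)dx, and the standard kernel estimator pairs every two observations through a rescaled kernel. The sketch below uses a Gaussian kernel with the normal-scale ("Rule of Thumb") bandwidth and a deterministic quantile-point stand-in for a standard normal sample; it illustrates the generic KDFE estimator, not the paper's optimized bandwidth selectors.

    ```python
    from math import sqrt
    from statistics import NormalDist

    def density_functional_estimate(sample, h):
        """Kernel estimate of theta = integral of f(x)**2 dx with a Gaussian
        kernel: theta_hat = (1/n**2) * sum_{i,j} phi_{h*sqrt(2)}(x_i - x_j),
        using the fact that the convolution of two Gaussian kernels of
        bandwidth h is a Gaussian of bandwidth h*sqrt(2)."""
        n = len(sample)
        kernel = NormalDist(mu=0.0, sigma=h * sqrt(2.0))
        return sum(kernel.pdf(xi - xj) for xi in sample for xj in sample) / n ** 2

    def silverman_bandwidth(n, sigma=1.0):
        """Normal-scale ('Rule of Thumb') bandwidth h = 1.06 * sigma * n**(-1/5)."""
        return 1.06 * sigma * n ** (-0.2)

    # Deterministic stand-in for a standard normal sample: quantile points.
    n = 500
    sample = [NormalDist().inv_cdf((i + 0.5) / n) for i in range(n)]
    theta_hat = density_functional_estimate(sample, silverman_bandwidth(n))
    # For the standard normal, the true value is 1 / (2 * sqrt(pi)) ~ 0.2821;
    # the rule-of-thumb bandwidth gives a slightly biased (oversmoothed) estimate.
    ```
    
    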

  2. Bias-corrected estimation of stable tail dependence function

    DEFF Research Database (Denmark)

    Beirlant, Jan; Escobar-Bach, Mikael; Goegebeur, Yuri

    2016-01-01

    We consider the estimation of the stable tail dependence function. We propose a bias-corrected estimator and we establish its asymptotic behaviour under suitable assumptions. The finite sample performance of the proposed estimator is evaluated by means of an extensive simulation study where...

  3. Unstable volatility functions: the break preserving local linear estimator

    DEFF Research Database (Denmark)

    Casas, Isabel; Gijbels, Irene

The objective of this paper is to introduce the break preserving local linear (BPLL) estimator for the estimation of unstable volatility functions. Breaks in the structure of the conditional mean and/or the volatility functions are common in finance. Markov switching models (Hamilton, 1989) and threshold models (Lin and Terasvirta, 1994) are amongst the most popular models to describe the behaviour of data with structural breaks. The local linear (LL) estimator is not consistent at points where the volatility function has a break, and it may even report negative values for finite samples...

  4. An improved method for estimating the frequency correlation function

    KAUST Repository

Chelli, Ali; Pätzold, Matthias

    2012-01-01

For time-invariant frequency-selective channels, the transfer function is a superposition of waves having different propagation delays and path gains. In order to estimate the frequency correlation function (FCF) of such channels, the frequency averaging technique can be utilized. The obtained FCF can be expressed as a sum of auto-terms (ATs) and cross-terms (CTs). The ATs are caused by the autocorrelation of individual path components. The CTs are due to the cross-correlation of different path components. These CTs have no physical meaning and lead to an estimation error. We propose a new estimation method aiming to improve the estimation accuracy of the FCF of a band-limited transfer function. The basic idea behind the proposed method is to introduce a kernel function aiming to reduce the CT effect, while preserving the ATs. In this way, we can improve the estimation of the FCF. The performance of the proposed method and the frequency averaging technique is analyzed using a synthetically generated transfer function. We show that the proposed method is more accurate than the frequency averaging technique. The accurate estimation of the FCF is crucial for the system design. In fact, we can determine the coherence bandwidth from the FCF. The exact knowledge of the coherence bandwidth is beneficial in both the design as well as optimization of frequency interleaving and pilot arrangement schemes. © 2012 IEEE.
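The baseline frequency-averaging technique that the record improves upon can be sketched as follows; the two-path transfer function (delays, gains, grid spacing) is synthetic and purely illustrative.

```python
import numpy as np

def fcf_frequency_averaging(H):
    """Frequency-averaging estimate of the frequency correlation function:
    entry m averages H[k] * conj(H[k+m]) over the available pairs.  Plain
    baseline; the record's kernel method additionally suppresses the
    cross-terms between different propagation paths."""
    N = len(H)
    return np.array([np.mean(H[:N - m] * np.conj(H[m:])) for m in range(N)])

# synthetic two-path band-limited transfer function (illustrative values)
f = np.arange(256) * 1e5                  # 256 samples, 100 kHz spacing
H = 1.0 * np.exp(-2j * np.pi * f * 1e-6) + 0.5 * np.exp(-2j * np.pi * f * 3e-6)
fcf = fcf_frequency_averaging(H)          # fcf[0] ≈ mean power ≈ 1.25
```

The cross-terms show up as oscillations in this raw estimate; the kernel weighting described in the record is designed to suppress them while preserving the auto-terms.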

  5. An improved method for estimating the frequency correlation function

    KAUST Repository

    Chelli, Ali

    2012-04-01

For time-invariant frequency-selective channels, the transfer function is a superposition of waves having different propagation delays and path gains. In order to estimate the frequency correlation function (FCF) of such channels, the frequency averaging technique can be utilized. The obtained FCF can be expressed as a sum of auto-terms (ATs) and cross-terms (CTs). The ATs are caused by the autocorrelation of individual path components. The CTs are due to the cross-correlation of different path components. These CTs have no physical meaning and lead to an estimation error. We propose a new estimation method aiming to improve the estimation accuracy of the FCF of a band-limited transfer function. The basic idea behind the proposed method is to introduce a kernel function aiming to reduce the CT effect, while preserving the ATs. In this way, we can improve the estimation of the FCF. The performance of the proposed method and the frequency averaging technique is analyzed using a synthetically generated transfer function. We show that the proposed method is more accurate than the frequency averaging technique. The accurate estimation of the FCF is crucial for the system design. In fact, we can determine the coherence bandwidth from the FCF. The exact knowledge of the coherence bandwidth is beneficial in both the design as well as optimization of frequency interleaving and pilot arrangement schemes. © 2012 IEEE.

  6. Bayesian error estimation in density-functional theory

    DEFF Research Database (Denmark)

    Mortensen, Jens Jørgen; Kaasbjerg, Kristen; Frederiksen, Søren Lund

    2005-01-01

    We present a practical scheme for performing error estimates for density-functional theory calculations. The approach, which is based on ideas from Bayesian statistics, involves creating an ensemble of exchange-correlation functionals by comparing with an experimental database of binding energies...

  7. ESTIMATION OF PARAMETERS AND RELIABILITY FUNCTION OF EXPONENTIATED EXPONENTIAL DISTRIBUTION: BAYESIAN APPROACH UNDER GENERAL ENTROPY LOSS FUNCTION

    Directory of Open Access Journals (Sweden)

    Sanjay Kumar Singh

    2011-06-01

Full Text Available In this paper we propose Bayes estimators of the parameters of the Exponentiated Exponential distribution and its reliability function under the General Entropy loss function for Type-II censored samples. The proposed estimators are compared with the corresponding Bayes estimators obtained under the Squared Error loss function and with maximum likelihood estimators in terms of their simulated risks (average loss over the sample space).

  8. Bayesian Nonparametric Mixture Estimation for Time-Indexed Functional Data in R

    Directory of Open Access Journals (Sweden)

    Terrance D. Savitsky

    2016-08-01

Full Text Available We present growfunctions for R that offers Bayesian nonparametric estimation models for analysis of dependent, noisy time series data indexed by a collection of domains. This data structure arises from combining periodically published government survey statistics, such as are reported in the Current Population Study (CPS). The CPS publishes monthly, by-state estimates of employment levels, where each state expresses a noisy time series. Published state-level estimates from the CPS are composed from household survey responses in a model-free manner and express high levels of volatility due to insufficient sample sizes. Existing software solutions borrow information over a modeled time-based dependence to extract a de-noised time series for each domain. These solutions, however, ignore the dependence among the domains that may be additionally leveraged to improve estimation efficiency. The growfunctions package offers two fully nonparametric mixture models that simultaneously estimate both a time and domain-indexed dependence structure for a collection of time series: (1) a Gaussian process (GP) construction, which is parameterized through the covariance matrix, estimates a latent function for each domain. The covariance parameters of the latent functions are indexed by domain under a Dirichlet process prior that permits estimation of the dependence among functions across the domains; (2) an intrinsic Gaussian Markov random field prior construction provides an alternative to the GP that expresses different computation and estimation properties. In addition to performing denoised estimation of latent functions from published domain estimates, growfunctions allows estimation of collections of functions for observation units (e.g., households), rather than aggregated domains, by accounting for an informative sampling design under which the probabilities for inclusion of observation units are related to the response variable. growfunctions includes plot

  9. LETTER TO THE EDITOR: Fractal diffusion coefficient from dynamical zeta functions

    Science.gov (United States)

    Cristadoro, Giampaolo

    2006-03-01

    Dynamical zeta functions provide a powerful method to analyse low-dimensional dynamical systems when the underlying symbolic dynamics is under control. On the other hand, even simple one-dimensional maps can show an intricate structure of the grammar rules that may lead to a non-smooth dependence of global observables on parameters changes. A paradigmatic example is the fractal diffusion coefficient arising in a simple piecewise linear one-dimensional map of the real line. Using the Baladi-Ruelle generalization of the Milnor-Thurnston kneading determinant, we provide the exact dynamical zeta function for such a map and compute the diffusion coefficient from its smallest zero.

  10. Development on electromagnetic impedance function modeling and its estimation

    Energy Technology Data Exchange (ETDEWEB)

    Sutarno, D., E-mail: Sutarno@fi.itb.ac.id [Earth Physics and Complex System Division Faculty of Mathematics and Natural Sciences Institut Teknologi Bandung (Indonesia)

    2015-09-30

Today electromagnetic methods such as magnetotellurics (MT) and controlled-source audio MT (CSAMT) are used in a broad variety of applications. Their usefulness in poor seismic areas and their negligible environmental impact are integral parts of effective exploration at minimum cost. As exploration was forced into more difficult areas, the importance of MT and CSAMT, in conjunction with other techniques, has tended to grow continuously. However, important and difficult problems obviously remain to be solved concerning our ability to collect, process and interpret MT as well as CSAMT data in complex 3-D structural environments. This talk aims at reviewing and discussing recent developments in MT and CSAMT impedance function modeling, as well as some improvements in estimation procedures for the corresponding impedance functions. In MT impedance modeling, research efforts focus on developing numerical methods for computing the impedance functions of three-dimensional (3-D) earth resistivity models. For that reason, 3-D finite element numerical modeling of the impedances has been developed based on the edge element method, whereas in the CSAMT case the efforts were focused on accommodating the non-plane-wave problem in the corresponding impedance functions. Concerning estimation of MT and CSAMT impedance functions, research has focused on improving the quality of the estimates. To that end, a non-linear regression approach based on robust M-estimators and the Hilbert transform, operating on the causal transfer functions, was used to deal with outliers (abnormal data) which are frequently superimposed on the normal ambient MT and CSAMT noise fields. As validated, the proposed MT impedance modeling method gives acceptable results for standard three-dimensional resistivity models, whilst the full-solution-based modeling that accommodates the non-plane-wave effect for CSAMT impedances is applied for all measurement zones, including near-, transition

  11. An analysis of the Rayleigh–Stokes problem for a generalized second-grade fluid

    KAUST Repository

    Bazhlekova, Emilia

    2014-11-26

    © 2014, The Author(s). We study the Rayleigh–Stokes problem for a generalized second-grade fluid which involves a Riemann–Liouville fractional derivative in time, and present an analysis of the problem in the continuous, space semidiscrete and fully discrete formulations. We establish the Sobolev regularity of the homogeneous problem for both smooth and nonsmooth initial data v, including v∈L2(Ω). A space semidiscrete Galerkin scheme using continuous piecewise linear finite elements is developed, and optimal with respect to initial data regularity error estimates for the finite element approximations are derived. Further, two fully discrete schemes based on the backward Euler method and second-order backward difference method and the related convolution quadrature are developed, and optimal error estimates are derived for the fully discrete approximations for both smooth and nonsmooth initial data. Numerical results for one- and two-dimensional examples with smooth and nonsmooth initial data are presented to illustrate the efficiency of the method, and to verify the convergence theory.

  12. An analysis of the Rayleigh–Stokes problem for a generalized second-grade fluid

    KAUST Repository

    Bazhlekova, Emilia; Jin, Bangti; Lazarov, Raytcho; Zhou, Zhi

    2014-01-01

    © 2014, The Author(s). We study the Rayleigh–Stokes problem for a generalized second-grade fluid which involves a Riemann–Liouville fractional derivative in time, and present an analysis of the problem in the continuous, space semidiscrete and fully discrete formulations. We establish the Sobolev regularity of the homogeneous problem for both smooth and nonsmooth initial data v, including v∈L2(Ω). A space semidiscrete Galerkin scheme using continuous piecewise linear finite elements is developed, and optimal with respect to initial data regularity error estimates for the finite element approximations are derived. Further, two fully discrete schemes based on the backward Euler method and second-order backward difference method and the related convolution quadrature are developed, and optimal error estimates are derived for the fully discrete approximations for both smooth and nonsmooth initial data. Numerical results for one- and two-dimensional examples with smooth and nonsmooth initial data are presented to illustrate the efficiency of the method, and to verify the convergence theory.

  13. An Approximate Proximal Bundle Method to Minimize a Class of Maximum Eigenvalue Functions

    Directory of Open Access Journals (Sweden)

    Wei Wang

    2014-01-01

Full Text Available We present an approximate nonsmooth algorithm to solve a minimization problem in which the objective function is the sum of a maximum eigenvalue function of matrices and a convex function. The essential idea for solving the optimization problem is similar to that of the proximal bundle method, the difference being that we choose an approximate subgradient and function value to construct an approximate cutting-plane model for the above-mentioned problem. An important advantage of the approximate cutting-plane model for the objective function is that it is more stable than the cutting-plane model. In addition, an approximate proximal bundle method algorithm is given. Furthermore, the sequences generated by the algorithm converge to the optimal solution of the original problem.

  14. Variance function estimation for immunoassays

    International Nuclear Information System (INIS)

    Raab, G.M.; Thompson, R.; McKenzie, I.

    1980-01-01

    A computer program is described which implements a recently described, modified likelihood method of determining an appropriate weighting function to use when fitting immunoassay dose-response curves. The relationship between the variance of the response and its mean value is assumed to have an exponential form, and the best fit to this model is determined from the within-set variability of many small sets of repeated measurements. The program estimates the parameter of the exponential function with its estimated standard error, and tests the fit of the experimental data to the proposed model. Output options include a list of the actual and fitted standard deviation of the set of responses, a plot of actual and fitted standard deviation against the mean response, and an ordered list of the 10 sets of data with the largest ratios of actual to fitted standard deviation. The program has been designed for a laboratory user without computing or statistical expertise. The test-of-fit has proved valuable for identifying outlying responses, which may be excluded from further analysis by being set to negative values in the input file. (Auth.)
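One common reading of such an exponential variance-mean model is Var(y) ≈ a·mean(y)^b, which can be fitted from the within-set variability of many small replicate sets; the sketch below uses plain least squares on the log scale (the program in the record uses a modified-likelihood fit, and all constants here are illustrative).

```python
import math
import random
import statistics

def fit_variance_function(replicate_sets):
    """Fit Var(y) ≈ a * mean(y)**b from small sets of repeated measurements,
    by ordinary least squares of log(variance) on log(mean).  A rough sketch
    of the idea; the record's modified-likelihood method is more refined."""
    xs, ys = [], []
    for reps in replicate_sets:
        m, v = statistics.mean(reps), statistics.variance(reps)
        if m > 0 and v > 0:
            xs.append(math.log(m))
            ys.append(math.log(v))
    xbar = sum(xs) / len(xs)
    ybar = sum(ys) / len(ys)
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    a = math.exp(ybar - b * xbar)
    return a, b

random.seed(1)
sets = []
for _ in range(400):
    mu = random.uniform(1.0, 100.0)
    sd = 0.5 * mu ** 0.75                 # i.e. Var = 0.25 * mu**1.5
    sets.append([random.gauss(mu, sd) for _ in range(4)])
a, b = fit_variance_function(sets)        # b should land near 1.5
```

With the simulated truth Var = 0.25·mean^1.5, the recovered exponent b should land near 1.5; sets whose actual-to-fitted variance ratio is extreme would be the outlier candidates the record's test-of-fit flags.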

  15. Towards an Early Software Effort Estimation Based on Functional and Non-Functional Requirements

    Science.gov (United States)

    Kassab, Mohamed; Daneva, Maya; Ormandjieva, Olga

    The increased awareness of the non-functional requirements as a key to software project and product success makes explicit the need to include them in any software project effort estimation activity. However, the existing approaches to defining size-based effort relationships still pay insufficient attention to this need. This paper presents a flexible, yet systematic approach to the early requirements-based effort estimation, based on Non-Functional Requirements ontology. It complementarily uses one standard functional size measurement model and a linear regression technique. We report on a case study which illustrates the application of our solution approach in context and also helps evaluate our experiences in using it.

  16. Estimating functions for inhomogeneous spatial point processes with incomplete covariate data

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus

    and this leads to parameter estimation error which is difficult to quantify. In this paper we introduce a Monte Carlo version of the estimating function used in "spatstat" for fitting inhomogeneous Poisson processes and certain inhomogeneous cluster processes. For this modified estimating function it is feasible...

  17. Estimating functions for inhomogeneous spatial point processes with incomplete covariate data

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus

    2008-01-01

    and this leads to parameter estimation error which is difficult to quantify. In this paper, we introduce a Monte Carlo version of the estimating function used in spatstat for fitting inhomogeneous Poisson processes and certain inhomogeneous cluster processes. For this modified estimating function, it is feasible...

  18. The Galerkin Finite Element Method for A Multi-term Time-Fractional Diffusion equation

    OpenAIRE

    Jin, Bangti; Lazarov, Raytcho; Liu, Yikan; Zhou, Zhi

    2014-01-01

    We consider the initial/boundary value problem for a diffusion equation involving multiple time-fractional derivatives on a bounded convex polyhedral domain. We analyze a space semidiscrete scheme based on the standard Galerkin finite element method using continuous piecewise linear functions. Nearly optimal error estimates for both cases of initial data and inhomogeneous term are derived, which cover both smooth and nonsmooth data. Further we develop a fully discrete scheme based on a finite...

  19. Consistent Parameter and Transfer Function Estimation using Context Free Grammars

    Science.gov (United States)

    Klotz, Daniel; Herrnegger, Mathew; Schulz, Karsten

    2017-04-01

This contribution presents a method for the inference of transfer functions for rainfall-runoff models. Here, transfer functions are defined as parametrized (functional) relationships between a set of spatial predictors (e.g. elevation, slope or soil texture) and model parameters. They are ultimately used for estimation of consistent, spatially distributed model parameters from a limited number of lumped global parameters. Additionally, they provide a straightforward method for parameter extrapolation from one set of basins to another and can even be used to derive parameterizations for multi-scale models [see: Samaniego et al., 2010]. Yet knowledge of the actual transfer functions is currently often implicitly assumed; in fact, in most cases these hypothesized transfer functions can rarely be measured and remain unknown. Therefore, this contribution presents a general method for the concurrent estimation of the structure of transfer functions and their respective (global) parameters. Note that, as a consequence, the distributed parameters of the rainfall-runoff model are also estimated. The method combines two steps to achieve this. The first generates different possible transfer functions. The second then estimates the respective global transfer function parameters. The structural estimation of the transfer functions is based on the context free grammar concept. Chomsky first introduced context free grammars in linguistics [Chomsky, 1956]. Since then, they have been widely applied in computer science but, to the knowledge of the authors, they have so far not been used in hydrology. Therefore, the contribution gives an introduction to context free grammars and shows how they can be constructed and used for the structural inference of transfer functions. This is enabled by new methods from evolutionary computation, such as grammatical evolution [O'Neill, 2001], which make it possible to exploit the constructed grammar as a
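A minimal illustration of generating candidate transfer-function structures from a context-free grammar (the grammar, nonterminals and predictor names below are invented for the sketch, not the authors'):

```python
import random

# A toy context-free grammar over candidate transfer-function expressions.
# Nonterminals are the dict keys; everything else is a terminal symbol.
GRAMMAR = {
    "<expr>": [["<expr>", "<op>", "<expr>"],
               ["<func>", "(", "<var>", ")"],
               ["<var>"], ["<const>"]],
    "<op>": [["+"], ["*"]],
    "<func>": [["log"], ["exp"]],
    "<var>": [["slope"], ["elevation"], ["soil_texture"]],
    "<const>": [["theta"]],
}

def derive(symbol="<expr>", rng=random, depth=0, max_depth=4):
    """Expand a nonterminal by randomly chosen productions, grammatical-
    evolution style.  Beyond max_depth the production with the fewest
    nonterminals is forced, so every derivation terminates."""
    if symbol not in GRAMMAR:
        return symbol
    options = GRAMMAR[symbol]
    if depth >= max_depth:
        prod = min(options, key=lambda o: sum(s in GRAMMAR for s in o))
    else:
        prod = rng.choice(options)
    return "".join(derive(s, rng, depth + 1, max_depth) for s in prod)

random.seed(3)
candidate = derive()    # e.g. a string like "log(slope)+elevation"
```

Each derived string is one hypothesized transfer-function structure; a wrapper search (e.g. grammatical evolution) would then score candidates by fitting their global parameters.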

20. Asymptotic normality of kernel estimator of ψ-regression function for functional ergodic data

    OpenAIRE

Laksaci, Ali; Benziadi, Fatima; Gheriballak, Abdelkader

    2016-01-01

In this paper we consider the problem of the estimation of the ψ-regression function when the covariates take values in an infinite dimensional space. Our main aim is to establish, under a stationary ergodic process assumption, the asymptotic normality of this estimate.

  1. Investigation of MLE in nonparametric estimation methods of reliability function

    International Nuclear Information System (INIS)

    Ahn, Kwang Won; Kim, Yoon Ik; Chung, Chang Hyun; Kim, Kil Yoo

    2001-01-01

There have been many attempts to estimate a reliability function. In the ESReDA 20th seminar, a new nonparametric method was proposed. The major point of that paper is how to use censored data efficiently. Generally there are three kinds of approach to estimating a reliability function in a nonparametric way, i.e., the Reduced Sample Method, the Actuarial Method and the Product-Limit (PL) Method. These three methods have some limits, so we suggest an advanced method that reflects censoring information more efficiently. In many instances there will be a unique maximum likelihood estimator (MLE) of an unknown parameter, and often it may be obtained by differentiation. It is well known that the three methods generally used to estimate a reliability function nonparametrically have maximum likelihood estimators that exist uniquely. Hence, the MLE of the new method is derived in this study. The procedure to calculate the MLE is similar to that of the PL-estimator; the difference between the two is that in the new method the mass (or weight) of each observation influences the others, whereas in the PL-estimator it does not
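The Product-Limit method named above is the Kaplan-Meier estimator; a plain sketch (not the record's modified weighting) on illustrative right-censored data:

```python
def product_limit(times, events):
    """Product-Limit (Kaplan-Meier) estimate of the reliability function
    from right-censored data.  events: 1 = observed failure, 0 = censored.
    Returns (time, survival) pairs at the observed failure times."""
    data = sorted(zip(times, events))
    at_risk, s, curve = len(data), 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        ties = sum(1 for tt, _ in data[i:] if tt == t)
        deaths = sum(1 for tt, e in data[i:] if tt == t and e == 1)
        if deaths:
            s *= (at_risk - deaths) / at_risk
            curve.append((t, s))
        at_risk -= ties
        i += ties
    return curve

# illustrative remission times in weeks ('+' would mark censored values)
times  = [6, 6, 6, 6, 7, 9, 10, 10, 11, 13, 16]
events = [1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1]
km = product_limit(times, events)       # first step: S(6) = 8/11
```

Each censored observation leaves the risk set without forcing a step in the curve, which is exactly the "use censored data efficiently" issue the record revisits.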

  2. Efficient Estimating Functions for Stochastic Differential Equations

    DEFF Research Database (Denmark)

    Jakobsen, Nina Munkholt

The overall topic of this thesis is approximate martingale estimating function-based estimation for solutions of stochastic differential equations, sampled at high frequency. Focus lies on the asymptotic properties of the estimators. The first part of the thesis deals with diffusions observed over ... a fixed time interval. Rate optimal and efficient estimators are obtained for a one-dimensional diffusion parameter. Stable convergence in distribution is used to achieve a practically applicable Gaussian limit distribution for suitably normalised estimators. In a simulation example, the limit distributions ... multidimensional parameter. Conditions for rate optimality and efficiency of estimators of drift-jump and diffusion parameters are given in some special cases. These conditions are found to extend the pre-existing conditions applicable to continuous diffusions, and impose much stronger requirements on the estimating...

  3. Approximated Function Based Spectral Gradient Algorithm for Sparse Signal Recovery

    Directory of Open Access Journals (Sweden)

    Weifeng Wang

    2014-02-01

Full Text Available Numerical algorithms for l0-norm regularized non-smooth non-convex minimization problems have recently become a topic of great interest within signal processing, compressive sensing, statistics, and machine learning. Nevertheless, the l0-norm makes the problem combinatorial and generally computationally intractable. In this paper, we construct a new surrogate function to approximate the l0-norm regularization, and subsequently make the discrete optimization problem continuous and smooth. Then we use the well-known spectral gradient algorithm to solve the resulting smooth optimization problem. Experiments are provided which illustrate that this method is very promising.
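The overall scheme (replace the l0 term by a smooth surrogate, then run a spectral/Barzilai-Borwein gradient method) can be sketched as below; the particular surrogate x²/(x²+ε), the safeguard step, and the problem sizes are illustrative choices, not the paper's construction.

```python
import numpy as np

def surrogate_l0(x, eps):
    # smooth surrogate for ||x||_0: each term tends to 1 when |x_i| >> eps
    return np.sum(x ** 2 / (x ** 2 + eps))

def spectral_gradient(A, b, lam=0.05, eps=1e-2, iters=300):
    """Safeguarded Barzilai-Borwein (spectral) gradient method applied to
    0.5*||Ax - b||^2 + lam * surrogate_l0(x)."""
    f = lambda x: 0.5 * np.sum((A @ x - b) ** 2) + lam * surrogate_l0(x, eps)
    g = lambda x: A.T @ (A @ x - b) + lam * 2 * eps * x / (x ** 2 + eps) ** 2
    L = np.linalg.norm(A, 2) ** 2 + 2 * lam / eps   # gradient Lipschitz bound
    x = np.zeros(A.shape[1])
    gx, alpha = g(x), 1.0 / L
    for _ in range(iters):
        x_new = x - alpha * gx
        if f(x_new) > f(x):               # safeguard: retreat to the safe step
            x_new = x - gx / L
        g_new = g(x_new)
        s, y = x_new - x, g_new - gx
        alpha = (s @ s) / (s @ y) if s @ y > 1e-12 else 1.0 / L   # BB1 step
        x, gx = x_new, g_new
    return x, f(x)

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 100)) / np.sqrt(40)    # underdetermined system
x_true = np.zeros(100)
x_true[[3, 30, 77]] = [2.0, -1.5, 1.0]
b = A @ x_true
x_hat, fval = spectral_gradient(A, b)
```

The safeguard keeps the objective monotone even though raw BB steps are non-monotone; the objective at the returned iterate is strictly below its starting value.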

  4. Production Functions for Water Delivery Systems: Analysis and Estimation Using Dual Cost Function and Implicit Price Specifications

    Science.gov (United States)

    Teeples, Ronald; Glyer, David

    1987-05-01

    Both policy and technical analysis of water delivery systems have been based on cost functions that are inconsistent with or are incomplete representations of the neoclassical production functions of economics. We present a full-featured production function model of water delivery which can be estimated from a multiproduct, dual cost function. The model features implicit prices for own-water inputs and is implemented as a jointly estimated system of input share equations and a translog cost function. Likelihood ratio tests are performed showing that a minimally constrained, full-featured production function is a necessary specification of the water delivery operations in our sample. This, plus the model's highly efficient and economically correct parameter estimates, confirms the usefulness of a production function approach to modeling the economic activities of water delivery systems.

  5. A comparison of dependence function estimators in multivariate extremes

    KAUST Repository

Vettori, Sabrina; Huser, Raphaël; Genton, Marc G.

    2017-01-01

Various nonparametric and parametric estimators of extremal dependence have been proposed in the literature. Nonparametric methods commonly suffer from the curse of dimensionality and have been mostly implemented in extreme-value studies up to three dimensions, whereas parametric models can tackle higher-dimensional settings. In this paper, we assess, through a vast and systematic simulation study, the performance of classical and recently proposed estimators in multivariate settings. In particular, we first investigate the performance of nonparametric methods and then compare them with classical parametric approaches under symmetric and asymmetric dependence structures within the commonly used logistic family. We also explore two different ways to make nonparametric estimators satisfy the necessary dependence function shape constraints, finding a general improvement in estimator performance either (i) by substituting the estimator with its greatest convex minorant, developing a computational tool to implement this method for dimensions D ≥ 2, or (ii) by projecting the estimator onto a subspace of dependence functions satisfying such constraints and taking advantage of Bernstein–Bézier polynomials. Implementing the convex minorant method leads to better estimator performance as the dimensionality increases.
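The greatest-convex-minorant substitution in option (i) can be illustrated generically: compute the lower convex hull of the raw estimates and read it back off on the grid (a self-contained sketch, not the authors' computational tool).

```python
def greatest_convex_minorant(ts, ys):
    """Greatest convex minorant of the points (ts[i], ys[i]), ts increasing,
    evaluated back on the same grid: the lower convex hull, linearly
    interpolated between its vertices."""
    hull = []
    for p in zip(ts, ys):
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # drop hull[-1] if it lies on/above the chord hull[-2] -> p
            if (y2 - y1) * (p[0] - x1) >= (p[1] - y1) * (x2 - x1):
                hull.pop()
            else:
                break
        hull.append(p)
    out, j = [], 0
    for t in ts:
        while j + 1 < len(hull) and hull[j + 1][0] < t:
            j += 1
        x1, y1 = hull[j]
        x2, y2 = hull[min(j + 1, len(hull) - 1)]
        out.append(y1 if x2 == x1 else y1 + (y2 - y1) * (t - x1) / (x2 - x1))
    return out

ts = [0, 1, 2, 3, 4]
ys = [0, 2, 1, 3, 0]
gcm = greatest_convex_minorant(ts, ys)   # lies on or below every point
```

Applying the same idea to a raw (possibly non-convex) dependence-function estimate yields the shape-constrained version the record compares against projection onto Bernstein–Bézier polynomials.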

  6. A comparison of dependence function estimators in multivariate extremes

    KAUST Repository

    Vettori, Sabrina

    2017-05-11

Various nonparametric and parametric estimators of extremal dependence have been proposed in the literature. Nonparametric methods commonly suffer from the curse of dimensionality and have been mostly implemented in extreme-value studies up to three dimensions, whereas parametric models can tackle higher-dimensional settings. In this paper, we assess, through a vast and systematic simulation study, the performance of classical and recently proposed estimators in multivariate settings. In particular, we first investigate the performance of nonparametric methods and then compare them with classical parametric approaches under symmetric and asymmetric dependence structures within the commonly used logistic family. We also explore two different ways to make nonparametric estimators satisfy the necessary dependence function shape constraints, finding a general improvement in estimator performance either (i) by substituting the estimator with its greatest convex minorant, developing a computational tool to implement this method for dimensions D ≥ 2, or (ii) by projecting the estimator onto a subspace of dependence functions satisfying such constraints and taking advantage of Bernstein–Bézier polynomials. Implementing the convex minorant method leads to better estimator performance as the dimensionality increases.

  7. Piecewise Geometric Estimation of a Survival Function.

    Science.gov (United States)

    1985-04-01

Langberg (1982). One of the by-products of the estimation process is an estimate of the failure rate function; here, another issue is raised. It is evident... envisaged as the infinite product probability space that may be constructed in the usual way from the sequence of probability spaces corresponding to the... received 6-MP (a mercaptopurine used in the treatment of leukemia). The ordered remission times in weeks are: 6, 6, 6, 6+, 7, 9+, 10, 10+, 11+, 13, 16

  8. On a family of Bessel type functions: Estimations, series, overconvergence

    Science.gov (United States)

    Paneva-Konovska, Jordanka

    2017-12-01

A family of Bessel-Maitland functions is considered in this paper and some useful estimations are obtained for them. Series defined by means of these functions are considered and their behaviour on the boundaries of the convergence domains is discussed. Using the obtained estimations, necessary and sufficient conditions for the overconvergence of the series, as well as a Hadamard-type theorem, are proposed.

  9. On estimation of the intensity function of a point process

    NARCIS (Netherlands)

    Lieshout, van M.N.M.

    2010-01-01

    Abstract. Estimation of the intensity function of spatial point processes is a fundamental problem. In this paper, we interpret the Delaunay tessellation field estimator recently introduced by Schaap and Van de Weygaert as an adaptive kernel estimator and give explicit expressions for the mean and

  10. Estimating variability in functional images using a synthetic resampling approach

    International Nuclear Information System (INIS)

    Maitra, R.; O'Sullivan, F.

    1996-01-01

Functional imaging of biologic parameters like in vivo tissue metabolism is made possible by Positron Emission Tomography (PET). Many techniques, such as mixture analysis, have been suggested for extracting such images from dynamic sequences of reconstructed PET scans. Methods for assessing the variability in these functional images are of scientific interest. The nonlinearity of the methods used in the mixture analysis approach makes analytic formulae for estimating variability intractable. The usual resampling approach is infeasible because of the prohibitive computational effort in simulating a number of sinogram datasets, applying image reconstruction, and generating parametric images for each replication. Here we introduce an approach that approximates the distribution of the reconstructed PET images by a Gaussian random field and generates synthetic realizations in the imaging domain. This eliminates the reconstruction steps in generating each simulated functional image and is therefore practical. Results of experiments done to evaluate the approach on a model one-dimensional problem are very encouraging. Post-processing of the estimated variances is seen to improve the accuracy of the estimation method. Mixture analysis is used to estimate functional images; however, the suggested approach is general enough to extend to other parametric imaging methods

  11. Quasi-Newton methods for parameter estimation in functional differential equations

    Science.gov (United States)

    Brewer, Dennis W.

    1988-01-01

    A state-space approach to parameter estimation in linear functional differential equations is developed using the theory of linear evolution equations. A locally convergent quasi-Newton type algorithm is applied to distributed systems with particular emphasis on parameters that induce unbounded perturbations of the state. The algorithm is computationally implemented on several functional differential equations, including coefficient and delay estimation in linear delay-differential equations.

  12. Survival Bayesian Estimation of Exponential-Gamma Under Linex Loss Function

    Science.gov (United States)

    Rizki, S. W.; Mara, M. N.; Sulistianingsih, E.

    2017-06-01

    This paper presents a study of censored data on cancer patients after treatment, using Bayesian estimation under the Linex loss function for a survival model with an assumed exponential distribution. Combining a Gamma prior with the likelihood function yields a Gamma posterior distribution. The posterior distribution is used to find the estimator λ̂BL by using the Linex approximation. From λ̂BL, the estimators of the hazard function ĥBL and the survival function ŜBL can be derived. Finally, we compare the results of Maximum Likelihood Estimation (MLE) and the Linex approximation to find the better method for this data by identifying the smaller MSE. The results show that the MSEs of the hazard and survival functions under MLE are 2.91728E-07 and 0.000309004, while under Bayesian Linex they are 2.8727E-07 and 0.000304131, respectively. We conclude that the Bayesian Linex estimator is better than the MLE.
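
    As a numerical illustration of this kind of estimator: under a Gamma(shape, rate) posterior for the exponential rate λ, the Linex Bayes estimator has the closed form λ̂BL = −(1/a)·ln E[e^{−aλ}] = (shape/a)·ln(1 + a/rate). The prior hyperparameters and sufficient statistics below are illustrative, not taken from the study.

```python
import math

def linex_estimator(shape, rate, a):
    """Bayes estimator of an exponential rate lam under Linex loss,
    given a Gamma(shape, rate) posterior.

    Uses the Gamma moment generating function
    E[exp(-a*lam)] = (rate / (rate + a))**shape, hence
    lam_hat = -(1/a) * log E[exp(-a*lam)] = (shape/a) * log(1 + a/rate).
    """
    return (shape / a) * math.log(1.0 + a / rate)

# Hypothetical prior hyperparameters and sufficient statistics: with a
# Gamma(a0, b0) prior, n (censoring-adjusted) failures and total time on
# test T give a Gamma(a0 + n, b0 + T) posterior for the rate lam.
a0, b0 = 1.0, 1.0
n, T = 2, 1.0
post_shape, post_rate = a0 + n, b0 + T

lam_bl = linex_estimator(post_shape, post_rate, a=1.0)
lam_mean = post_shape / post_rate  # posterior mean (squared-error loss)
# For a > 0 the Linex loss penalises overestimation, so lam_bl < lam_mean.
```

    From λ̂BL, plug-in hazard and survival estimators for the exponential model follow as ĥBL(t) = λ̂BL and ŜBL(t) = exp(−λ̂BL·t).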

  13. Headphone-To-Ear Transfer Function Estimation Using Measured Acoustic Parameters

    Directory of Open Access Journals (Sweden)

    Jinlin Liu

    2018-06-01

    Full Text Available This paper proposes to use an optimal five-microphone array method to measure the headphone acoustic reflectance and equivalent sound sources needed in the estimation of headphone-to-ear transfer functions (HpTFs). The performance of this method is theoretically analyzed and experimentally investigated. With the measured acoustic parameters, HpTFs for different headphones and ear canal area functions are estimated based on a computational acoustic model. The estimation results show that HpTFs vary considerably with headphones and ear canals, which suggests that individualized compensation for HpTFs is necessary for headphones to reproduce desired sounds for different listeners.

  14. Smoothed Conditional Scale Function Estimation in AR(1)-ARCH(1) Processes

    Directory of Open Access Journals (Sweden)

    Lema Logamou Seknewna

    2018-01-01

    Full Text Available The estimation of the smoothed conditional scale function for time series was carried out under conditional heteroscedastic innovations by imitating kernel smoothing in the nonparametric QAR-QARCH scheme. The estimation was based on the quantile regression methodology proposed by Koenker and Bassett. A proof of the asymptotic properties of the conditional scale function estimator for this type of process was given, and its consistency was shown.

  15. Econometric estimation of the “Constant Elasticity of Substitution" function in R

    DEFF Research Database (Denmark)

    Henningsen, Arne; Henningsen, Geraldine

    for estimating the traditional CES function with two inputs as well as nested CES functions with three and four inputs. Furthermore, we demonstrate how these approaches can be applied in R using the add-on package micEconCES and we describe how the various estimation approaches are implemented in the micEconCES package. Finally, we illustrate the usage of this package by replicating some estimations of CES functions that are reported in the literature....
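
    The package itself is written for R; as a language-neutral sketch of one linearization strategy for CES estimation, the following fits the two-input CES function via the Kmenta approximation, which is linear in its coefficients and therefore estimable by ordinary least squares. All parameter values are illustrative.

```python
import numpy as np

# Kmenta second-order approximation of the two-input CES function
#   y = gamma * (delta*x1**(-rho) + (1-delta)*x2**(-rho))**(-nu/rho)
# around rho = 0:
#   ln y ~ ln gamma + nu*delta*ln x1 + nu*(1-delta)*ln x2
#          - (rho*nu*delta*(1-delta)/2) * (ln x1 - ln x2)**2

rng = np.random.default_rng(42)
n = 500
x1 = rng.uniform(1.0, 10.0, n)
x2 = rng.uniform(1.0, 10.0, n)

# Simulate from the rho -> 0 (Cobb-Douglas) limit, where the
# approximation is exact: y = gamma * x1**(nu*delta) * x2**(nu*(1-delta)).
gamma, delta, nu = 2.0, 0.4, 1.1
y = gamma * x1 ** (nu * delta) * x2 ** (nu * (1.0 - delta))

l1, l2 = np.log(x1), np.log(x2)
X = np.column_stack([np.ones(n), l1, l2, -0.5 * (l1 - l2) ** 2])
b, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)

# Recover the structural CES parameters from the OLS coefficients.
nu_hat = b[1] + b[2]
delta_hat = b[1] / nu_hat
gamma_hat = np.exp(b[0])
rho_hat = b[3] / (nu_hat * delta_hat * (1.0 - delta_hat))
```

    Because the data are generated at the Cobb-Douglas limit, the recovered ρ̂ is near zero; with genuinely CES data the approximation error grows with |ρ|, which is one reason the nonlinear approaches implemented in micEconCES exist.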

  16. A single model procedure for tank calibration function estimation

    International Nuclear Information System (INIS)

    York, J.C.; Liebetrau, A.M.

    1995-01-01

    Reliable tank calibrations are a vital component of any measurement control and accountability program for bulk materials in a nuclear reprocessing facility. Tank volume calibration functions used in nuclear materials safeguards and accountability programs are typically constructed from several segments, each of which is estimated independently. Ideally, the segments correspond to structural features in the tank. In this paper the authors use an extension of the Thomas-Liebetrau model to estimate the entire calibration function in a single step. This procedure automatically takes significant run-to-run differences into account and yields an estimate of the entire calibration function in one operation. As with other procedures, the first step is to define suitable calibration segments. Next, a polynomial of low degree is specified for each segment. In contrast with the conventional practice of constructing a separate model for each segment, this information is used to set up the design matrix for a single model that encompasses all of the calibration data. Estimation of the model parameters is then done using conventional statistical methods. The method described here has several advantages over traditional methods. First, modeled run-to-run differences can be taken into account automatically at the estimation step. Second, no interpolation is required between successive segments. Third, variance estimates are based on all the data, rather than that from a single segment, with the result that discontinuities in confidence intervals at segment boundaries are eliminated. Fourth, the restrictive assumption of the Thomas-Liebetrau method, that the measured volumes be the same for all runs, is not required. Finally, the proposed methods are readily implemented using standard statistical procedures and widely used software packages.

  17. mBEEF-vdW: Robust fitting of error estimation density functionals

    DEFF Research Database (Denmark)

    Lundgård, Keld Troen; Wellendorff, Jess; Voss, Johannes

    2016-01-01

    . The functional is fitted within the Bayesian error estimation functional (BEEF) framework [J. Wellendorff et al., Phys. Rev. B 85, 235149 (2012); J. Wellendorff et al., J. Chem. Phys. 140, 144107 (2014)]. We improve the previously used fitting procedures by introducing a robust MM-estimator based loss function...... catalysis, including datasets that were not used for its training. Overall, we find that mBEEF-vdW has a higher general accuracy than competing popular functionals, and it is one of the best performing functionals on chemisorption systems, surface energies, lattice constants, and dispersion. We also show...

  18. Estimation of parameters of constant elasticity of substitution production functional model

    Science.gov (United States)

    Mahaboob, B.; Venkateswarlu, B.; Sankar, J. Ravi

    2017-11-01

    Nonlinear model building has become an increasingly important and powerful tool in mathematical economics. In recent years the popularity of applications of nonlinear models has risen dramatically. Several researchers in econometrics are very often interested in the inferential aspects of nonlinear regression models [6]. The present research study gives a distinct method for estimating a more complicated and highly nonlinear model, viz. the Constant Elasticity of Substitution (CES) production function model. Henningsen et al. [5] proposed three solutions in 2012 to avoid serious problems when estimating CES functions: (i) removing discontinuities by using the limits of the CES function and its derivatives; (ii) circumventing large rounding errors by local linear approximations; (iii) handling ill-behaved objective functions by a multi-dimensional grid search. Joel Chongeh et al. [7] discussed the estimation of the impact of capital and labour inputs on the gross output of agri-food products using a constant elasticity of substitution production function in the Tanzanian context. Pol Antras [8] presented new estimates of the elasticity of substitution between capital and labour using data from the private sector of the U.S. economy for the period 1948-1998.

  19. Power estimation on functional level for programmable processors

    Directory of Open Access Journals (Sweden)

    M. Schneider

    2004-01-01

    Full Text Available In this contribution, different approaches to power estimation for programmable processors are presented and evaluated with respect to their applicability to modern processor architectures such as Very Long Instruction Word (VLIW) architectures. Special emphasis is placed on the concept of so-called Functional-Level Power Analysis (FLPA). This approach is based on partitioning the processor architecture into functional blocks such as the processing unit, clock network, internal memory and others. The power consumption of these blocks is described by parameter-dependent arithmetic model functions. Input parameters such as the achieved degree of parallelism or the type of memory access are obtained by an automated, parser-based analysis of the assembler code of the system to be estimated. The approach is evaluated on two modern digital signal processors using a large number of basic digital signal processing algorithms, and the estimated values for the individual algorithms are compared with physically measured values. A very small maximum estimation error of 3% is obtained.

  20. Power estimation on functional level for programmable processors

    Science.gov (United States)

    Schneider, M.; Blume, H.; Noll, T. G.

    2004-05-01

    In this contribution, different approaches to power estimation for programmable processors are presented and evaluated with respect to their applicability to modern processor architectures such as Very Long Instruction Word (VLIW) architectures. Special emphasis is placed on the concept of so-called Functional-Level Power Analysis (FLPA). This approach is based on partitioning the processor architecture into functional blocks such as the processing unit, clock network, internal memory and others. The power consumption of these blocks is described by parameter-dependent arithmetic model functions. Input parameters such as the achieved degree of parallelism or the type of memory access are obtained by an automated, parser-based analysis of the assembler code of the system to be estimated. The approach is evaluated on two modern digital signal processors using a large number of basic digital signal processing algorithms, and the estimated values for the individual algorithms are compared with physically measured values. A very small maximum estimation error of 3% is obtained.

  1. Unbiased estimators for spatial distribution functions of classical fluids

    Science.gov (United States)

    Adib, Artur B.; Jarzynski, Christopher

    2005-01-01

    We use a statistical-mechanical identity closely related to the familiar virial theorem to derive unbiased estimators for spatial distribution functions of classical fluids. In particular, we obtain estimators for both the fluid density ρ(r) in the vicinity of a fixed solute and the pair correlation g(r) of a homogeneous classical fluid. We illustrate the utility of our estimators with numerical examples, which reveal advantages over traditional histogram-based methods of computing such distributions.

  2. Estimation of a monotone percentile residual life function under random censorship.

    Science.gov (United States)

    Franco-Pereira, Alba M; de Uña-Álvarez, Jacobo

    2013-01-01

    In this paper, we introduce a new estimator of a percentile residual life function with censored data under a monotonicity constraint. Specifically, it is assumed that the percentile residual life is a decreasing function. This assumption is useful when estimating the percentile residual life of units, which degenerate with age. We establish a law of the iterated logarithm for the proposed estimator, and its √n-equivalence to the unrestricted estimator. The asymptotic normal distribution of the estimator and its strong approximation to a Gaussian process are also established. We investigate the finite sample performance of the monotone estimator in an extensive simulation study. Finally, data from a clinical trial in primary biliary cirrhosis of the liver are analyzed with the proposed methods. One of the conclusions of our work is that the restricted estimator may be much more efficient than the unrestricted one. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Insights from Machine Learning for Evaluating Production Function Estimators on Manufacturing Survey Data

    OpenAIRE

    Arreola, José Luis Preciado; Johnson, Andrew L.

    2016-01-01

    Organizations like census bureaus rely on non-exhaustive surveys to estimate industry population-level production functions. In this paper we propose selecting an estimator based on a weighting of its in-sample and predictive performance on actual application datasets. We compare Cobb-Douglas functional assumptions to existing nonparametric shape-constrained estimators and a newly proposed estimator presented in this paper. For simulated data, we find that our proposed estimator has the lowest...

  4. Bias Errors due to Leakage Effects When Estimating Frequency Response Functions

    Directory of Open Access Journals (Sweden)

    Andreas Josefsson

    2012-01-01

    Full Text Available Frequency response functions are often utilized to characterize a system's dynamic response. For a wide range of engineering applications, it is desirable to determine frequency response functions for a system under stochastic excitation. In practice, the measurement data is contaminated by noise and some form of averaging is needed in order to obtain a consistent estimator. With Welch's method, the discrete Fourier transform is used and the data is segmented into smaller blocks so that averaging can be performed when estimating the spectrum. However, this segmentation introduces leakage effects. As a result, the estimated frequency response function suffers from both systematic (bias and random errors due to leakage. In this paper, the bias error in the H1- and H2-estimates is studied and a new method is proposed to derive an approximate expression for the relative bias error at the resonance frequency with different window functions. The method is based on using a sum of real exponentials to describe the window's deterministic autocorrelation function. Simple expressions are derived for a rectangular window and a Hanning window. The theoretical expressions are verified with numerical simulations and a very good agreement is found between the results from the proposed bias expressions and the empirical results.
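
    The H1 estimator analysed above can be sketched as a Welch-style average of cross- and auto-spectra over overlapping windowed segments. The FIR test system and the Hanning window below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def h1_estimate(x, y, nseg=256, window=None):
    """H1 frequency response estimate, H1 = Sxy / Sxx, with the spectra
    averaged over 50%-overlapping windowed segments (Welch's method)."""
    if window is None:
        window = np.hanning(nseg)
    step = nseg // 2
    sxx = np.zeros(nseg // 2 + 1)
    sxy = np.zeros(nseg // 2 + 1, dtype=complex)
    for start in range(0, len(x) - nseg + 1, step):
        xs = np.fft.rfft(window * x[start:start + nseg])
        ys = np.fft.rfft(window * y[start:start + nseg])
        sxx += (xs.conj() * xs).real
        sxy += xs.conj() * ys
    return sxy / sxx

# Illustrative system: y[n] = 0.5*x[n] + 0.25*x[n-1], driven by white noise.
rng = np.random.default_rng(1)
x = rng.standard_normal(1 << 16)
y = 0.5 * x + 0.25 * np.concatenate(([0.0], x[:-1]))

h_est = h1_estimate(x, y)
freqs = np.fft.rfftfreq(256)  # frequencies in cycles/sample
h_true = 0.5 + 0.25 * np.exp(-2j * np.pi * freqs)
# With many averaged segments the estimate tracks the true response closely;
# the residual deviation reflects the leakage-induced bias and random error
# that the abstract analyses.
```
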

  5. Clinical use of estimated glomerular filtration rate for evaluation of kidney function

    DEFF Research Database (Denmark)

    Broberg, Bo; Lindhardt, Morten; Rossing, Peter

    2013-01-01

    Estimating glomerular filtration rate by the Modification of Diet in Renal Disease or Chronic Kidney Disease Epidemiology Collaboration formulas gives a reasonable estimate of kidney function, e.g. for classification of chronic kidney disease. Additionally, the estimated glomerular filtration rate is a significant predictor for cardiovascular disease and may, along with classical cardiovascular risk factors, add useful information to risk estimation. Several cautions need to be taken into account, e.g. rapid changes in kidney function, dialysis, high age, obesity, underweight and diverging and unanticipated...
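
    As a concrete example of such estimating formulas, the following sketch implements the 2009 CKD-EPI creatinine equation with its commonly cited coefficients; treat the exact constants (and the newer race-free 2021 revision) as details to verify against the original references.

```python
import math

def egfr_ckd_epi_2009(scr_mg_dl, age_years, female):
    """Estimated GFR (mL/min/1.73 m^2) from serum creatinine using the
    2009 CKD-EPI equation. Coefficients are the commonly cited 2009
    values; the original equation also included a race coefficient, and
    the 2021 race-free revision uses different constants."""
    kappa = 0.7 if female else 0.9     # sex-specific creatinine threshold
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = (141.0
            * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** -1.209
            * 0.993 ** age_years)
    if female:
        egfr *= 1.018
    return egfr

# Example: a 50-year-old male with serum creatinine 0.9 mg/dL
# falls well above the 60 mL/min/1.73 m^2 chronic kidney disease threshold.
gfr = egfr_ckd_epi_2009(0.9, 50, female=False)
```
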

  6. Time asymmetric spacetimes near null and spatial infinity: II. Expansions of developments of initial data sets with non-smooth conformal metrics

    International Nuclear Information System (INIS)

    Kroon, Juan Antonio Valiente

    2005-01-01

    This paper uses the conformal Einstein equations and the conformal representation of spatial infinity introduced by Friedrich to analyse the behaviour of the gravitational field near null and spatial infinity for the development of initial data which are, in principle, non-conformally flat and time asymmetric. The paper is the continuation of the investigation started in Class. Quantum Grav. 21 (2004) 5457-92, where only conformally flat initial data sets were considered. For the purposes of this investigation, the conformal metric of the initial hypersurface is assumed to have a very particular type of non-smoothness at infinity in order to allow for the presence of non-Schwarzschildean stationary initial data sets in the class under study. The calculation of asymptotic expansions of the development of these initial data sets reveals, as in the conformally flat case, the existence of a hierarchy of obstructions to the smoothness of null infinity which are expressible in terms of the initial data. This allows for the possibility of having spacetimes where future and past null infinity have different degrees of smoothness. A conjecture regarding the general structure of the hierarchy of obstructions is presented.

  7. Conical square function estimates in UMD Banach spaces and applications to H∞-functional calculi

    NARCIS (Netherlands)

    Hytönen, T.; Van Neerven, J.; Portal, P.

    2008-01-01

    We study conical square function estimates for Banach-valued functions and introduce a vector-valued analogue of the Coifman-Meyer-Stein tent spaces. Following recent work of Auscher-McIntosh-Russ, the tent spaces in turn are used to construct a scale of vector-valued Hardy spaces associated with

  8. Coefficient Estimate Problem for a New Subclass of Biunivalent Functions

    OpenAIRE

    N. Magesh; T. Rosy; S. Varma

    2013-01-01

    We introduce a unified subclass of the function class Σ of biunivalent functions defined in the open unit disc. Furthermore, we find estimates on the coefficients |a2| and |a3| for functions in this subclass. In addition, many relevant connections with known or new results are pointed out.

  9. Estimation and Application of Ecological Memory Functions in Time and Space

    Science.gov (United States)

    Itter, M.; Finley, A. O.; Dawson, A.

    2017-12-01

    A common goal in quantitative ecology is the estimation or prediction of ecological processes as a function of explanatory variables (or covariates). Frequently, the ecological process of interest and associated covariates vary in time, space, or both. Theory indicates many ecological processes exhibit memory to local, past conditions. Despite such theoretical understanding, few methods exist to integrate observations from the recent past or within a local neighborhood as drivers of these processes. We build upon recent methodological advances in ecology and spatial statistics to develop a Bayesian hierarchical framework to estimate so-called ecological memory functions; that is, weight-generating functions that specify the relative importance of local, past covariate observations to ecological processes. Memory functions are estimated using a set of basis functions in time and/or space, allowing for flexible ecological memory based on a reduced set of parameters. Ecological memory functions are entirely data driven under the Bayesian hierarchical framework—no a priori assumptions are made regarding functional forms. Memory function uncertainty follows directly from posterior distributions for model parameters allowing for tractable propagation of error to predictions of ecological processes. We apply the model framework to simulated spatio-temporal datasets generated using memory functions of varying complexity. The framework is also applied to estimate the ecological memory of annual boreal forest growth to local, past water availability. Consistent with ecological understanding of boreal forest growth dynamics, memory to past water availability peaks in the year previous to growth and slowly decays to zero in five to eight years. The Bayesian hierarchical framework has applicability to a broad range of ecosystems and processes allowing for increased understanding of ecosystem responses to local and past conditions and improved prediction of ecological

  10. A method of moments to estimate bivariate survival functions: the copula approach

    Directory of Open Access Journals (Sweden)

    Silvia Angela Osmetti

    2013-05-01

    Full Text Available In this paper we discuss the problem of parametric and non-parametric estimation of the distributions generated by the Marshall-Olkin copula. This copula comes from the Marshall-Olkin bivariate exponential distribution used in reliability analysis. We generalize this model through the copula and different marginal distributions to construct several bivariate survival functions. The cumulative distribution functions are not absolutely continuous, and their unknown parameters often cannot be obtained in explicit form. In order to estimate the parameters we propose an easy procedure based on moments. This method consists of two steps: in the first step we estimate only the parameters of the marginal distributions, and in the second step we estimate only the copula parameter. This procedure can be used to estimate the parameters of complex survival functions for which it is difficult to find an explicit expression of the mixed moments. Moreover, it is preferred to the maximum likelihood method for its simpler mathematical form, in particular for distributions whose maximum likelihood parameter estimators cannot be obtained in explicit form.

  11. The Galerkin finite element method for a multi-term time-fractional diffusion equation

    KAUST Repository

    Jin, Bangti

    2015-01-01

    © 2014 The Authors. We consider the initial/boundary value problem for a diffusion equation involving multiple time-fractional derivatives on a bounded convex polyhedral domain. We analyze a space semidiscrete scheme based on the standard Galerkin finite element method using continuous piecewise linear functions. Nearly optimal error estimates for both cases of initial data and inhomogeneous term are derived, which cover both smooth and nonsmooth data. Further we develop a fully discrete scheme based on a finite difference discretization of the time-fractional derivatives, and discuss its stability and error estimate. Extensive numerical experiments for one- and two-dimensional problems confirm the theoretical convergence rates.

  12. Estimating the basilar-membrane input-output function in normal-hearing and hearing-impaired listeners

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve; Dau, Torsten

    To partly characterize the function of cochlear processing in humans, the basilar membrane (BM) input-output function can be estimated. In recent studies, forward masking has been used to estimate BM compression. If an on-frequency masker is processed compressively, while an off-frequency masker is transformed more linearly, the ratio between the slopes of growth of masking (GOM) functions provides an estimate of BM compression at the signal frequency. In this study, this paradigm is extended to also estimate the knee-point of the I/O-function between linear processing at low levels and compressive processing at medium levels. If a signal can be masked by a low-level on-frequency masker such that signal and masker fall in the linear region of the I/O-function, then a steeper GOM function is expected. The knee-point can then be estimated in the input level region where the GOM changes significantly...

  13. TS Fuzzy Model-Based Controller Design for a Class of Nonlinear Systems Including Nonsmooth Functions

    DEFF Research Database (Denmark)

    Vafamand, Navid; Asemani, Mohammad Hassan; Khayatiyan, Alireza

    2018-01-01

    This paper proposes a novel robust controller design for a class of nonlinear systems including hard nonlinearity functions. The proposed approach is based on Takagi-Sugeno (TS) fuzzy modeling, a nonquadratic Lyapunov function, and a nonparallel distributed compensation scheme. In this paper, a novel...... criterion, new robust controller design conditions in terms of linear matrix inequalities are derived. Three practical case studies (an electric power steering system, a helicopter model, and a servo-mechanical system) are presented to demonstrate the importance of such a class of nonlinear systems comprising

  14. Some aspects of the translog production function estimation

    Directory of Open Access Journals (Sweden)

    Florin-Marius PAVELESCU

    2011-06-01

    Full Text Available In a translog production function, the number of parameters practically "explodes" as the number of considered production factors increases. Consequently, a shortcoming in the estimation of such a production function is the occurrence of collinearity. Theoretically, the collinearity impact is minimal if a single production factor is taken into account. In this case, we can determine not only the output elasticity but also the elasticity of scale related to the respective production factor. In the present paper, we demonstrate that the relationship between the output elasticity and the estimated average elasticity of scale depends on the trajectory of the production factor's dynamics, underexponential or overexponential, respectively. At the end, a practical example is offered, dealing with the computation of the Gross Domestic Product elasticity and the average elasticity of scale related to the employed population in the United Kingdom and France during 1999-2009.

  15. On approximation and energy estimates for delta 6-convex functions.

    Science.gov (United States)

    Saleem, Muhammad Shoaib; Pečarić, Josip; Rehman, Nasir; Khan, Muhammad Wahab; Zahoor, Muhammad Sajid

    2018-01-01

    The smooth approximation and weighted energy estimates for delta 6-convex functions are derived in this research. Moreover, we conclude that if 6-convex functions are closed in uniform norm, then their third derivatives are closed in weighted L²-norm.

  16. Functional Mixed Effects Model for Small Area Estimation.

    Science.gov (United States)

    Maiti, Tapabrata; Sinha, Samiran; Zhong, Ping-Shou

    2016-09-01

    Functional data analysis has become an important area of research due to its ability to handle high dimensional and complex data structures. However, the development is limited in the context of linear mixed effect models, and in particular, for small area estimation. The linear mixed effect models are the backbone of small area estimation. In this article, we consider area level data, and fit a varying coefficient linear mixed effect model where the varying coefficients are semi-parametrically modeled via B-splines. We propose a method of estimating the fixed effect parameters and consider prediction of random effects that can be implemented using standard software. For measuring prediction uncertainties, we derive an analytical expression for the mean squared errors, and propose a method of estimating the mean squared errors. The procedure is illustrated via a real data example, and operating characteristics of the method are judged using finite sample simulation studies.

  17. Estimation and model selection of semiparametric multivariate survival functions under general censorship.

    Science.gov (United States)

    Chen, Xiaohong; Fan, Yanqin; Pouzo, Demian; Ying, Zhiliang

    2010-07-01

    We study estimation and model selection of semiparametric models of multivariate survival functions for censored data, which are characterized by possibly misspecified parametric copulas and nonparametric marginal survivals. We obtain the consistency and root-n asymptotic normality of a two-step copula estimator to the pseudo-true copula parameter value according to KLIC, and provide a simple consistent estimator of its asymptotic variance, allowing for a first-step nonparametric estimation of the marginal survivals. We establish the asymptotic distribution of the penalized pseudo-likelihood ratio statistic for comparing multiple semiparametric multivariate survival functions subject to copula misspecification and general censorship. An empirical application is provided.

  18. Estimated conditional score function for missing mechanism model with nonignorable nonresponse

    Institute of Scientific and Technical Information of China (English)

    CUI Xia; ZHOU Yong

    2017-01-01

    Missing data mechanisms often depend on the values of the responses, which leads to nonignorable nonresponse. In such a situation, inference based on approaches that ignore the missing data mechanism may not be valid. A crucial step is to model the nature of missingness. We specify a parametric model for the missingness mechanism, and then propose a conditional score function approach for estimation. This approach imputes the score function by taking the conditional expectation of the score function for the missing data given the available information. The inference procedure then follows by replacing unknown terms with the related nonparametric estimators based on the observed data. The proposed score function does not suffer from the non-identifiability problem, and the proposed estimator is shown to be consistent and asymptotically normal. We also construct a confidence region for the parameter of interest using the empirical likelihood method. Simulation studies demonstrate that the proposed inference procedure performs well in many settings. We apply the proposed method to a data set from research in a growth hormone and exercise intervention study.

  19. On approximation and energy estimates for delta 6-convex functions

    Directory of Open Access Journals (Sweden)

    Muhammad Shoaib Saleem

    2018-02-01

    Full Text Available Abstract The smooth approximation and weighted energy estimates for delta 6-convex functions are derived in this research. Moreover, we conclude that if 6-convex functions are closed in uniform norm, then their third derivatives are closed in weighted L²-norm.

  20. estimating an aggregate import demand function for ghana

    African Journals Online (AJOL)

    Administrator

    we estimate an import demand function for Ghana for the period 1970 to ... results also indicate that economic growth (real GDP) and depreciation in the ... 80% of shocks to real exchange rates, merchandise imports and GDP ... imports; capital goods, 43 percent; intermediate ... merchandise imports (World Bank, 2004). For.

  1. Towards an Early Software Effort Estimation Based on Functional and Non-Functional Requirements

    NARCIS (Netherlands)

    Kassab, M.; Daneva, Maia; Ormanjieva, Olga; Abran, A.; Braungarten, R.; Dumke, R.; Cuadrado-Gallego, J.; Brunekreef, J.

    2009-01-01

    The increased awareness of the non-functional requirements as a key to software project and product success makes explicit the need to include them in any software project effort estimation activity. However, the existing approaches to defining size-based effort relationships still pay insufficient

  2. ON THE ESTIMATION OF DISTANCE DISTRIBUTION FUNCTIONS FOR POINT PROCESSES AND RANDOM SETS

    Directory of Open Access Journals (Sweden)

    Dietrich Stoyan

    2011-05-01

    Full Text Available This paper discusses various estimators for the nearest neighbour distance distribution function D of a stationary point process and for the quadratic contact distribution function Hq of a stationary random closed set. It recommends the use of Hanisch's estimator of D, which is of Horvitz-Thompson type, and the minus-sampling estimator of Hq. This recommendation is based on simulations for Poisson processes and Boolean models.
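
    As a toy illustration of the minus-sampling idea, the sketch below estimates the nearest-neighbour distance distribution D(r) for points in a unit square with a simple border correction: only points at least r from the boundary contribute. This is a simplified stand-in, not the paper's Hanisch (Horvitz-Thompson type) estimator; the point pattern and intensity are illustrative assumptions.

```python
import numpy as np

def d_cdf_minus_sampling(pts, r, window=1.0):
    """Minus-sampling estimate of D(r): among points lying at least r from
    the window boundary, the fraction whose nearest neighbour is within r."""
    diff = pts[:, None, :] - pts[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    np.fill_diagonal(dist, np.inf)
    nn = dist.min(axis=1)                        # nearest-neighbour distances
    inner = np.all((pts >= r) & (pts <= window - r), axis=1)
    return float((nn[inner] <= r).mean())

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, size=(500, 2))       # ~ Poisson, intensity 500
r = 0.03
est = d_cdf_minus_sampling(pts, r)
theory = 1.0 - np.exp(-500 * np.pi * r ** 2)     # Poisson-process D(r)
print(round(est, 3), round(theory, 3))
```

    For a Poisson process the estimate should track the known closed form D(r) = 1 − exp(−λπr²), which is what the simulation checks.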

  3. Cost function estimation

    DEFF Research Database (Denmark)

    Andersen, C K; Andersen, K; Kragh-Sørensen, P

    2000-01-01

    on these criteria, a two-part model was chosen. In this model, the probability of incurring any costs was estimated using a logistic regression, while the level of the costs was estimated in the second part of the model. The choice of model had a substantial impact on the predicted health care costs, e...

  4. Multi-subject hierarchical inverse covariance modelling improves estimation of functional brain networks.

    Science.gov (United States)

    Colclough, Giles L; Woolrich, Mark W; Harrison, Samuel J; Rojas López, Pedro A; Valdes-Sosa, Pedro A; Smith, Stephen M

    2018-05-07

    A Bayesian model for sparse, hierarchical inverse covariance estimation is presented, and applied to multi-subject functional connectivity estimation in the human brain. It enables simultaneous inference of the strength of connectivity between brain regions at both subject and population level, and is applicable to fMRI, MEG and EEG data. Two versions of the model can encourage sparse connectivity, either using continuous priors to suppress irrelevant connections, or using an explicit description of the network structure to estimate the connection probability between each pair of regions. A large evaluation of this model, and thirteen methods that represent the state of the art of inverse covariance modelling, is conducted using both simulated and resting-state functional imaging datasets. Our novel Bayesian approach has similar performance to the best extant alternative, Ng et al.'s Sparse Group Gaussian Graphical Model algorithm, which also is based on a hierarchical structure. Using data from the Human Connectome Project, we show that these hierarchical models are able to reduce the measurement error in MEG beta-band functional networks by 10%, producing concomitant increases in estimates of the genetic influence on functional connectivity. Copyright © 2018. Published by Elsevier Inc.
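
    Inverse covariance (precision) matrices underpin this kind of connectivity estimation because their off-diagonal entries encode conditional dependence: the partial correlation between regions i and j given all others is −P_ij/√(P_ii·P_jj). A minimal numpy sketch using the plain sample precision matrix (not the paper's hierarchical Bayesian model) on a toy three-region chain:

```python
import numpy as np

rng = np.random.default_rng(5)
# Toy "connectivity": regions 0-1 and 1-2 are directly coupled, so the
# 0-2 marginal correlation is purely indirect.
n = 50_000
r0 = rng.standard_normal(n)
r1 = 0.8 * r0 + rng.standard_normal(n)
r2 = 0.8 * r1 + rng.standard_normal(n)
data = np.column_stack([r0, r1, r2])

prec = np.linalg.inv(np.cov(data, rowvar=False))   # sample precision matrix
d = np.sqrt(np.diag(prec))
partial = -prec / np.outer(d, d)        # partial correlations off-diagonal
np.fill_diagonal(partial, 1.0)
print(partial.round(2))                 # entry (0, 2) is near zero
```

    The near-zero (0, 2) entry is the conditional independence that sparse precision models exploit; the marginal correlation between regions 0 and 2 is substantial.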

  5. An estimating function approach to inference for inhomogeneous Neyman-Scott processes

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus

    2007-01-01

    This article is concerned with inference for a certain class of inhomogeneous Neyman-Scott point processes depending on spatial covariates. Regression parameter estimates obtained from a simple estimating function are shown to be asymptotically normal when the "mother" intensity for the Neyman-Sc...

  6. Error estimates for the Fourier-finite-element approximation of the Lame system in nonsmooth axisymmetric domains

    International Nuclear Information System (INIS)

    Nkemzi, Boniface

    2003-10-01

    This paper is concerned with the effective implementation of the Fourier-finite-element method, which combines the approximating Fourier and finite-element methods, for treating the Dirichlet problem for the Lamé equations in axisymmetric domains Ω̂ ⊂ ℝ³ with conical vertices and reentrant edges. The partial Fourier decomposition reduces the three-dimensional boundary value problem to an infinite sequence of decoupled two-dimensional boundary value problems on the plane meridian domain Ωₐ ⊂ ℝ₊² of Ω̂, with solutions u_n (n = 0, 1, 2, ...) being the Fourier coefficients of the solution u of the 3D problem. The asymptotic behavior of the Fourier coefficients near the angular points of Ωₐ is described by appropriate singular vector functions and treated numerically by linear finite elements on locally graded meshes. For a right-hand side function f̂ ∈ (L²(Ω̂))³ it is proved that, with appropriate mesh grading, the rate of convergence of the combined approximations in (W₂¹(Ω̂))³ is of the order O(h + N⁻¹), where h and N are the parameters of the finite-element and Fourier approximations, respectively, with h → 0 and N → ∞. (author)

  7. Comparing performance level estimation of safety functions in three distributed structures

    International Nuclear Information System (INIS)

    Hietikko, Marita; Malm, Timo; Saha, Heikki

    2015-01-01

    The capability of a machine control system to perform a safety function is expressed using performance levels (PL). This paper presents the results of a study where PL estimation was carried out for a safety function implemented using three different distributed control system structures. Challenges relating to the process of estimating PLs for safety-related distributed machine control functions are highlighted. One of these examines the use of different cabling schemes in the implementation of a safety function and their effect on the PL evaluation. The safety function used as a generic example in PL calculations relates to a mobile work machine. It is a safety stop function where different technologies (electrical, hydraulic and pneumatic) can be utilized. It was found that by replacing analogue cables with digital communication the system structure becomes simpler, with fewer failing components, which can improve the PL of the safety function. - Highlights: • Integration in distributed systems enables systems with fewer components. • It offers high reliability and diagnostic properties. • Analogue signals create uncertainty in signal reliability and complicate diagnostics

  8. An estimating function approach to inference for inhomogeneous Neyman-Scott processes

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus Plenge

    This paper is concerned with inference for a certain class of inhomogeneous Neyman-Scott point processes depending on spatial covariates. Regression parameter estimates obtained from a simple estimating function are shown to be asymptotically normal when the "mother" intensity for the Neyman-Scott...

  9. BAYESIAN ESTIMATION OF THE SHAPE PARAMETER OF THE GENERALISED EXPONENTIAL DISTRIBUTION UNDER DIFFERENT LOSS FUNCTIONS

    Directory of Open Access Journals (Sweden)

    SANKU DEY

    2010-11-01

    Full Text Available The generalized exponential (GE) distribution proposed by Gupta and Kundu (1999) is an important lifetime distribution in survival analysis. In this article, we propose to obtain Bayes estimators and their associated risk based on a class of non-informative priors under the assumption of three loss functions, namely, the quadratic loss function (QLF), the squared log-error loss function (SLELF) and the general entropy loss function (GELF). The motivation is to explore the most appropriate loss function among these three. The performances of the estimators are, therefore, compared on the basis of their risks obtained under QLF, SLELF and GELF separately. The relative efficiency of the estimators is also obtained. Finally, Monte Carlo simulations are performed to compare the performances of the Bayes estimates under different situations.
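
    Given posterior draws of the shape parameter, the three loss functions lead to different closed-form Bayes estimators: the posterior mean under quadratic loss, exp(E[ln θ]) under squared log-error loss, and (E[θ^(−c)])^(−1/c) under general entropy loss. A Monte Carlo sketch with a stand-in Gamma posterior (the paper's GE-shape posterior has a different form):

```python
import numpy as np

rng = np.random.default_rng(6)
# Stand-in posterior draws of the shape parameter: Gamma(4, 0.5), so the
# posterior mean is 2.0 and E[1/theta] = 1/1.5.
theta = rng.gamma(shape=4.0, scale=0.5, size=200_000)

est_qlf = theta.mean()                              # QLF: posterior mean
est_slelf = np.exp(np.log(theta).mean())            # SLELF: exp(E[log theta])
c = 1.0
est_gelf = (np.mean(theta ** (-c))) ** (-1.0 / c)   # GELF with parameter c
print(round(est_qlf, 2), round(est_slelf, 2), round(est_gelf, 2))
```

    Note the ordering: for c > 0 the GELF estimate sits below the SLELF estimate, which sits below the posterior mean, reflecting how the losses penalize over- versus under-estimation.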

  10. Method for estimating modulation transfer function from sample images.

    Science.gov (United States)

    Saiga, Rino; Takeuchi, Akihisa; Uesugi, Kentaro; Terada, Yasuko; Suzuki, Yoshio; Mizutani, Ryuta

    2018-02-01

    The modulation transfer function (MTF) represents the frequency domain response of imaging modalities. Here, we report a method for estimating the MTF from sample images. Test images were generated from a number of images, including those taken with an electron microscope and with an observation satellite. These original images were convolved with point spread functions (PSFs) including those of circular apertures. The resultant test images were subjected to a Fourier transformation. The logarithm of the squared norm of the Fourier transform was plotted against the squared distance from the origin. Linear correlations were observed in the logarithmic plots, indicating that the PSF of the test images can be approximated with a Gaussian. The MTF was then calculated from the Gaussian-approximated PSF. The obtained MTF closely coincided with the MTF predicted from the original PSF. The MTF of an x-ray microtomographic section of a fly brain was also estimated with this method. The obtained MTF showed good agreement with the MTF determined from an edge profile of an aluminum test object. We suggest that this approach is an alternative way of estimating the MTF, independently of the image type. Copyright © 2017 Elsevier Ltd. All rights reserved.
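
    The pipeline described above (Gaussian blur, Fourier transform, log squared norm against squared frequency, linear fit, Gaussian-approximated PSF, MTF) can be sketched in 1D with numpy. The white-noise "scene" and sampling grid below are illustrative assumptions, not the paper's test images:

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma = 8192, 2.0                 # samples; true Gaussian PSF width (px)

# Blur a white-noise scene in Fourier space: a Gaussian PSF of std sigma
# has optical transfer function exp(-2 pi^2 sigma^2 f^2).
f = np.fft.rfftfreq(n)               # spatial frequency, cycles per pixel
spec = np.fft.rfft(rng.standard_normal(n)) \
       * np.exp(-2 * np.pi ** 2 * sigma ** 2 * f ** 2)

# Log squared norm of the spectrum vs squared frequency is linear with
# slope -4 pi^2 sigma^2 (plus scatter from the scene's own spectrum).
mask = f > 0
slope = np.polyfit(f[mask] ** 2, np.log(np.abs(spec[mask]) ** 2), 1)[0]
sigma_hat = np.sqrt(-slope / (4 * np.pi ** 2))

mtf = np.exp(-2 * np.pi ** 2 * sigma_hat ** 2 * f ** 2)  # estimated MTF
print(round(sigma_hat, 2))
```

    With a flat-spectrum scene the linear fit recovers the PSF width closely, and the MTF follows as the Gaussian with that width, normalized to 1 at zero frequency.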

  11. Operational production of Geodetic Excitation Functions from EOP estimated values at ASI-CGS

    Science.gov (United States)

    Sciarretta, C.; Luceri, V.; Bianco, G.

    2009-04-01

    ASI-CGS routinely provides geodetic excitation functions from its own estimated EOP values (at present SLR and VLBI; use of GPS EOPs is also planned as soon as that product becomes fully operational) on the ASI geodetic web site (http://geodaf.mt.asi.it). This product was generated and monitored (for ASI internal use only) in a long pre-operational phase (more than two years), including validation and testing. The daily geodetic excitation functions are now updated weekly along with the operational ASI SLR and VLBI EOP solutions and compared, whenever possible, with the atmospheric excitation functions available at the IERS SBAAM, under the IB and non-IB assumptions, including the "wind" term. The work will present the available estimated geodetic function time series and its comparison with the relevant atmospheric excitation functions, deriving quantitative indicators on the quality of the estimates. The similarities as well as the discrepancies between the atmospheric and geodetic series will be analysed and commented on, evaluating in particular the degree of correlation between the two estimated time series and the likelihood of a linear dependence hypothesis.

  12. $L^{p}$-square function estimates on spaces of homogeneous type and on uniformly rectifiable sets

    CERN Document Server

    Hofmann, Steve; Mitrea, Marius; Morris, Andrew J

    2017-01-01

    The authors establish square function estimates for integral operators on uniformly rectifiable sets by proving a local T(b) theorem and applying it to show that such estimates are stable under the so-called big pieces functor. More generally, they consider integral operators associated with Ahlfors-David regular sets of arbitrary codimension in ambient quasi-metric spaces. The local T(b) theorem is then used to establish an inductive scheme in which square function estimates on so-called big pieces of an Ahlfors-David regular set are proved to be sufficient for square function estimates to hold on the entire set. Extrapolation results for L^p and Hardy space versions of these estimates are also established. Moreover, the authors prove square function estimates for integral operators associated with variable coefficient kernels, including the Schwartz kernels of pseudodifferential operators acting between vector bundles on subdomains with uniformly rectifiable boundaries on manifolds.

  13. The Navier-Stokes equations an elementary functional analytic approach

    CERN Document Server

    Sohr, Hermann

    2001-01-01

    The primary objective of this monograph is to develop an elementary and self-contained approach to the mathematical theory of a viscous incompressible fluid in a domain Ω of the Euclidean space ℝⁿ, described by the equations of Navier-Stokes. The book is mainly directed to students familiar with basic functional analytic tools in Hilbert and Banach spaces. However, for readers' convenience, in the first two chapters we collect without proof some fundamental properties of Sobolev spaces, distributions, operators, etc. Another important objective is to formulate the theory for a completely general domain Ω. In particular, the theory applies to arbitrary unbounded, non-smooth domains. For this reason, in the nonlinear case, we have to restrict ourselves to space dimensions n = 2, 3 that are also most significant from the physical point of view. For mathematical generality, we will develop the linearized theory for all n ≥ 2. Although the functional-analytic approach developed here is, in principle, known ...

  14. The risk function approach to profit maximizing estimation in direct mailing

    NARCIS (Netherlands)

    Muus, Lars; Scheer, Hiek van der; Wansbeek, Tom

    1999-01-01

    When the parameters of the model describing consumers' reaction to a mailing are known, addresses for a future mailing can be selected in a profit-maximizing way. Usually, these parameters are unknown and have to be estimated. Standard estimation methods are based on a quadratic loss function. In the present

  15. Discontinuous approximate molecular electronic wave-functions

    International Nuclear Information System (INIS)

    Stuebing, E.W.; Weare, J.H.; Parr, R.G.

    1977-01-01

    Following Kohn, Schlosser and Marcus, and Weare and Parr, an energy functional is defined for a molecular problem which is stationary in the neighborhood of the exact solution and permits the use of trial functions that are discontinuous. The functional differs from the functional of the standard Rayleigh-Ritz method in the replacement of the usual kinetic energy operators T̂(μ) with operators T̂′(μ) = T̂(μ) + Î(μ), where Î(μ) generates contributions from surfaces of nonsmooth behavior. If one uses the ∇Ψ · ∇Ψ way of writing the usual kinetic energy contributions, one must add surface integrals of the product of the average of ∇Ψ and the change of Ψ across surfaces of discontinuity. Various calculations are carried out for the hydrogen molecule-ion and the hydrogen molecule. It is shown that ab initio calculations on molecules can be carried out quite generally with a basis of atomic orbitals exactly obeying the zero-differential-overlap (ZDO) condition, and a firm basis is thereby provided for theories of molecular electronic structure invoking the ZDO approximation. It is demonstrated that a valence bond theory employing orbitals exactly obeying ZDO can provide an adequate account of chemical bonding, and several suggestions are made regarding molecular orbital methods

  16. A note on reliability estimation of functionally diverse systems

    International Nuclear Information System (INIS)

    Littlewood, B.; Popov, P.; Strigini, L.

    1999-01-01

    It has been argued that functional diversity might be a plausible means of claiming independence of failures between two versions of a system. We present a model of functional diversity, in the spirit of earlier models of diversity such as those of Eckhardt and Lee, and Hughes. In terms of the model, we show that the claims for independence between functionally diverse systems seem rather unrealistic. Instead, it seems likely that functionally diverse systems will exhibit positively correlated failures, and thus will be less reliable than an assumption of independence would suggest. The result does not, of course, suggest that functional diversity is not worthwhile; instead, it places upon the evaluator of such a system the onus to estimate the degree of dependence so as to evaluate the reliability of the system

  17. Comparing adaptive procedures for estimating the psychometric function for an auditory gap detection task.

    Science.gov (United States)

    Shen, Yi

    2013-05-01

    A subject's sensitivity to a stimulus variation can be studied by estimating the psychometric function. Generally speaking, three parameters of the psychometric function are of interest: the performance threshold, the slope of the function, and the rate at which attention lapses occur. In the present study, three psychophysical procedures were used to estimate the three-parameter psychometric function for an auditory gap detection task. These were an up-down staircase (up-down) procedure, an entropy-based Bayesian (entropy) procedure, and an updated maximum-likelihood (UML) procedure. Data collected from four young, normal-hearing listeners showed that while all three procedures provided similar estimates of the threshold parameter, the up-down procedure performed slightly better in estimating the slope and lapse rate for 200 trials of data collection. When the lapse rate was increased by mixing in random responses for the three adaptive procedures, the larger lapse rate was especially detrimental to the efficiency of the up-down procedure, and the UML procedure provided better estimates of the threshold and slope than did the other two procedures.
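
    A common three-parameter form of the psychometric function combines a logistic core with a threshold, a slope, and a lapse rate, plus a fixed guess rate (0.5 for a two-alternative task); the study's exact parameterization may differ, so this is a hedged sketch:

```python
import numpy as np

def psychometric(x, threshold, slope, lapse, guess=0.5):
    """p(correct) for a 2AFC task: a logistic core scaled to run between
    the guess rate and 1 - lapse."""
    core = 1.0 / (1.0 + np.exp(-slope * (x - threshold)))
    return guess + (1.0 - guess - lapse) * core

# Far below threshold performance sits at the guess rate; far above it
# saturates at 1 - lapse; at threshold it is halfway between the two.
x = np.array([-10.0, 0.0, 10.0])
p = psychometric(x, threshold=0.0, slope=2.0, lapse=0.02)
print(p.round(3))
```

    Adaptive procedures such as the up-down staircase or the updated maximum-likelihood method place trials so as to pin down these three parameters efficiently; the lapse rate matters because it caps asymptotic performance below 1.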

  18. Identification of the Diffusion Parameter in Nonlocal Steady Diffusion Problems

    Energy Technology Data Exchange (ETDEWEB)

    D’Elia, M., E-mail: mdelia@fsu.edu, E-mail: mdelia@sandia.gov [Sandia National Laboratories (United States); Gunzburger, M. [Florida State University (United States)

    2016-04-15

    The problem of identifying the diffusion parameter appearing in a nonlocal steady diffusion equation is considered. The identification problem is formulated as an optimal control problem having a matching functional as the objective of the control and the parameter function as the control variable. The analysis makes use of a nonlocal vector calculus that allows one to define a variational formulation of the nonlocal problem. In a manner analogous to the local partial differential equations counterpart, we demonstrate, for certain kernel functions, the existence of at least one optimal solution in the space of admissible parameters. We introduce a Galerkin finite element discretization of the optimal control problem and derive a priori error estimates for the approximate state and control variables. Using one-dimensional numerical experiments, we illustrate the theoretical results and show that by using nonlocal models it is possible to estimate non-smooth and discontinuous diffusion parameters.

  19. Local gradient estimate for harmonic functions on Finsler manifolds

    OpenAIRE

    Xia, Chao

    2013-01-01

    In this paper, we prove the local gradient estimate for harmonic functions on complete, noncompact Finsler measure spaces under the condition that the weighted Ricci curvature has a lower bound. As applications, we obtain Liouville type theorem on Finsler manifolds with nonnegative Ricci curvature.

  20. Estimating Aggregate Import-Demand Function In Nigeria: A Co ...

    African Journals Online (AJOL)

    This paper investigates the behaviour of Nigeria's aggregate imports between the periods 1980-2005. In the empirical analysis of the aggregate import demand function for Nigeria, cointegration and Error Correction modeling approaches have been used. Our econometric estimates suggest that real GDP largely explains ...

  1. Nonparametric estimation of the stationary M/G/1 workload distribution function

    DEFF Research Database (Denmark)

    Hansen, Martin Bøgsted

    2005-01-01

    In this paper it is demonstrated how a nonparametric estimator of the stationary workload distribution function of the M/G/1-queue can be obtained by systematic sampling the workload process. Weak convergence results and bootstrap methods for empirical distribution functions for stationary associ...

  2. Quantitative pre-surgical lung function estimation with SPECT/CT

    International Nuclear Information System (INIS)

    Bailey, D. L.; Willowson, K. P.; Timmins, S.; Harris, B. E.; Bailey, E. A.; Roach, P. J.

    2009-01-01

    Full text: Objectives: To develop methodology to predict lobar lung function based on SPECT/CT ventilation and perfusion (V/Q) scanning in candidates for lobectomy for lung cancer. Methods: This combines two development areas from our group: quantitative SPECT based on CT-derived corrections for scattering and attenuation of photons, and SPECT V/Q scanning with lobar segmentation from CT. Eight patients underwent baseline pulmonary function testing (PFT) including spirometry, measurement of DLCO and cardio-pulmonary exercise testing. A SPECT/CT V/Q scan was acquired at baseline. Using in-house software, each lobe was anatomically defined using CT to provide lobar ROIs which could be applied to the SPECT data. From these, the individual lobar contribution to overall function was calculated from counts within the lobe, and post-operative FEV1, DLCO and VO2 peak were predicted. This was compared with the quantitative planar scan method using 3 rectangular ROIs over each lung. Results: Post-operative FEV1 most closely matched that predicted by the planar quantification method, with SPECT V/Q over-estimating the loss of function by 8% (range -7 to +23%). However, post-operative DLCO and VO2 peak were both accurately predicted by SPECT V/Q (average error of 0 and 2% respectively) compared with planar. Conclusions: More accurate anatomical definition of lobar anatomy provides better estimates of post-operative loss of function for DLCO and VO2 peak than traditional planar methods. SPECT/CT provides the tools for accurate anatomical definition of the surgical target as well as being useful in producing quantitative 3D functional images for ventilation and perfusion.
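
    The counts-based prediction implied by the Methods section — scale the baseline value by the fraction of counts lying outside the resected lobe — is simple arithmetic. All lobar counts and the baseline FEV1 below are hypothetical:

```python
# Predicted post-operative function from a quantitative V/Q scan:
# baseline value scaled by the fraction of counts outside the resected lobe.
counts = {"RUL": 120_000, "RML": 60_000, "RLL": 150_000,
          "LUL": 140_000, "LLL": 130_000}    # hypothetical lobar ROI counts
baseline_fev1 = 2.8                           # litres, hypothetical
resected = "RLL"
fraction_resected = counts[resected] / sum(counts.values())
predicted_fev1 = baseline_fev1 * (1.0 - fraction_resected)
print(round(fraction_resected, 3), round(predicted_fev1, 2))
```

    The same scaling applies to DLCO or VO2 peak; the study's point is that CT-defined lobar ROIs give a more faithful count fraction than three rectangular planar ROIs.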

  3. Lipschitz estimates for convex functions with respect to vector fields

    Directory of Open Access Journals (Sweden)

    Valentino Magnani

    2012-12-01

    Full Text Available We present Lipschitz continuity estimates for a class of convex functions with respect to Hörmander vector fields. These results have been recently obtained in collaboration with M. Scienza, [22].

  4. Source Estimation for the Damped Wave Equation Using Modulating Functions Method: Application to the Estimation of the Cerebral Blood Flow

    KAUST Repository

    Asiri, Sharefa M.; Laleg-Kirati, Taous-Meriem

    2017-01-01

    In this paper, a method based on modulating functions is proposed to estimate the Cerebral Blood Flow (CBF). The problem is written as an input estimation problem for a damped wave equation which is used to model the spatiotemporal variations

  5. Estimating unsaturated hydraulic conductivity from soil moisture-tim function

    International Nuclear Information System (INIS)

    El Gendy, R.W.

    2002-01-01

    The unsaturated hydraulic conductivity for soil can be estimated from the θ(t) function and the dimensionless soil water content parameter Se = (θ − θr)/(θs − θr), where θ is the soil water content at any time (from the soil moisture depletion curve), θr is the residual water content and θs is the total soil porosity (equal to the saturation point). Se can be represented as a time function (Se = a t^b), where t is the measurement time and a and b are the regression constants. The recommended equation in this method is given by
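
    Fitting the power-law form Se = a·t^b reduces to linear regression in log-log space, since ln Se = ln a + b·ln t. The depletion data, residual water content and porosity below are hypothetical values for illustration:

```python
import numpy as np

# Hypothetical soil-moisture depletion data: time (h) and water content.
t = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
theta = np.array([0.42, 0.39, 0.36, 0.33, 0.30])
theta_r, theta_s = 0.05, 0.50
se = (theta - theta_r) / (theta_s - theta_r)   # effective saturation Se

# Fit Se = a * t**b by linear regression in log-log space.
b, log_a = np.polyfit(np.log(t), np.log(se), 1)
a = np.exp(log_a)
print(round(a, 3), round(b, 3))
```

    The fitted a and b then feed into the method's conductivity equation; b is negative because Se decreases as the soil drains.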

  6. Estimation of Correlation Functions by the Random Decrement Technique

    DEFF Research Database (Denmark)

    Brincker, Rune; Krenk, Steen; Jensen, Jakob Laigaard

    The Random Decrement (RDD) Technique is a versatile technique for characterization of random signals in the time domain. In this paper a short review of the theoretical basis is given, and the technique is illustrated by estimating auto-correlation functions and cross-correlation functions on modal responses simulated by two SDOF ARMA models loaded by the same bandlimited white noise. The speed and the accuracy of the RDD technique is compared to the Fast Fourier Transform (FFT) technique. The RDD technique does not involve multiplications, but only additions. Therefore, the technique is very fast...
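
    The triggering-and-averaging idea behind the RDD technique can be sketched in a few lines of numpy: for a zero-mean Gaussian signal, averaging the segments that start wherever the signal exceeds a trigger level yields a signature proportional to the auto-correlation function. The AR(1) test signal and trigger level below are illustrative assumptions:

```python
import numpy as np

def random_decrement(x, level, seg_len):
    """Average the segments of x starting wherever x exceeds `level`
    (positive-point triggering). For a zero-mean Gaussian signal the
    signature is proportional to the auto-correlation function."""
    triggers = np.where(x[:-seg_len] >= level)[0]
    segs = np.stack([x[i:i + seg_len] for i in triggers])
    return segs.mean(axis=0)

# AR(1) test signal: true auto-correlation decays as phi**lag.
rng = np.random.default_rng(2)
phi, n = 0.9, 200_000
e = rng.standard_normal(n)
x = np.zeros(n)
for k in range(1, n):
    x[k] = phi * x[k - 1] + e[k]

sig = random_decrement(x, level=x.std(), seg_len=20)
rho = sig / sig[0]            # normalised signature, approx. phi**lag
print(round(rho[1], 2))
```

    Note that forming the signature needs only comparisons and additions, which is the source of the speed advantage over FFT-based estimation noted in the abstract.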

  7. Estimation of Correlation Functions by the Random Decrement Technique

    DEFF Research Database (Denmark)

    Brincker, Rune; Krenk, Steen; Jensen, Jacob Laigaard

    1991-01-01

    The Random Decrement (RDD) Technique is a versatile technique for characterization of random signals in the time domain. In this paper a short review of the theoretical basis is given, and the technique is illustrated by estimating auto-correlation functions and cross-correlation functions on modal responses simulated by two SDOF ARMA models loaded by the same band-limited white noise. The speed and the accuracy of the RDD technique is compared to the Fast Fourier Transform (FFT) technique. The RDD technique does not involve multiplications, but only additions. Therefore, the technique is very fast...

  8. Estimation of Correlation Functions by the Random Decrement Technique

    DEFF Research Database (Denmark)

    Brincker, Rune; Krenk, Steen; Jensen, Jakob Laigaard

    1992-01-01

    The Random Decrement (RDD) Technique is a versatile technique for characterization of random signals in the time domain. In this paper a short review of the theoretical basis is given, and the technique is illustrated by estimating auto-correlation functions and cross-correlation functions on modal responses simulated by two SDOF ARMA models loaded by the same bandlimited white noise. The speed and the accuracy of the RDD technique is compared to the Fast Fourier Transform (FFT) technique. The RDD technique does not involve multiplications, but only additions. Therefore, the technique is very fast...

  9. Modulating Function-Based Method for Parameter and Source Estimation of Partial Differential Equations

    KAUST Repository

    Asiri, Sharefa M.

    2017-10-08

    Partial Differential Equations (PDEs) are commonly used to model complex systems that arise for example in biology, engineering, chemistry, and elsewhere. The parameters (or coefficients) and the source of PDE models are often unknown and are estimated from available measurements. Despite its importance, solving the estimation problem is mathematically and numerically challenging, especially when the measurements are corrupted by noise, which is often the case. Various methods have been proposed to solve estimation problems in PDEs, which can be classified into optimization methods and recursive methods. The optimization methods are usually computationally heavy, especially when the number of unknowns is large. In addition, they are sensitive to the initial guess and stop condition, and they suffer from a lack of robustness to noise. Recursive methods, such as observer-based approaches, are limited by their dependence on some structural properties such as observability and identifiability, which might be lost when approximating the PDE numerically. Moreover, most of these methods provide asymptotic estimates, which might not be useful for control applications, for example. An alternative non-asymptotic approach with less computational burden has been proposed in engineering fields based on the so-called modulating functions. In this dissertation, we propose to mathematically and numerically analyze the modulating functions based approaches. We also propose to extend these approaches to different situations. The contributions of this thesis are as follows. (i) Provide a mathematical analysis of the modulating function-based method (MFBM), which includes its well-posedness, statistical properties, and estimation errors. (ii) Provide a numerical analysis of the MFBM through some estimation problems, and study the sensitivity of the method to the modulating functions' parameters. (iii) Propose an effective algorithm for selecting the method's design parameters.
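
    The core modulating-function trick — multiply the differential equation by a function that vanishes at both endpoints and integrate by parts, so that no derivative of the noisy data is ever computed — can be illustrated on a toy first-order ODE. The polynomial modulating function and the signal model below are assumptions for illustration, not the dissertation's design:

```python
import numpy as np

# Toy problem: estimate a in y' = -a*y from noisy samples of y, with no
# numerical differentiation. With phi(0) = phi(T) = 0, integration by
# parts gives int(phi*y') = -int(phi'*y), so -int(phi'*y) = -a*int(phi*y)
# and hence a = int(phi'*y) / int(phi*y).
a_true, T, n = 1.5, 2.0, 2001
t = np.linspace(0.0, T, n)
rng = np.random.default_rng(3)
y = np.exp(-a_true * t) + 0.001 * rng.standard_normal(n)

phi = t ** 2 * (T - t) ** 2                   # polynomial modulating function
dphi = 2 * t * (T - t) ** 2 - 2 * t ** 2 * (T - t)

# Riemann sums share the same grid spacing, so it cancels in the ratio.
a_hat = np.sum(dphi * y) / np.sum(phi * y)
print(round(a_hat, 3))
```

    Because the integrals smooth the data, the estimate stays close to the true value despite the measurement noise, which is the non-asymptotic robustness the abstract refers to.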

  10. On Improving Density Estimators which are not Bona Fide Functions

    OpenAIRE

    Gajek, Leslaw

    1986-01-01

    In order to improve the rate of decrease of the IMSE for nonparametric kernel density estimators with nonrandom bandwidth beyond $O(n^{-4/5})$ all current methods must relax the constraint that the density estimate be a bona fide function, that is, be nonnegative and integrate to one. In this paper we show how to achieve similar improvement without relaxing any of these constraints. The method can also be applied for orthogonal series, adaptive orthogonal series, spline, jackknife, and other ...

  11. Estimation of Nonlinear Functions of State Vector for Linear Systems with Time-Delays and Uncertainties

    Directory of Open Access Journals (Sweden)

    Il Young Song

    2015-01-01

    Full Text Available This paper focuses on estimation of a nonlinear function of the state vector (NFS) in discrete-time linear systems with time-delays and model uncertainties. The NFS represents a multivariate nonlinear function of state variables, which can indicate useful information about a target system for control. The optimal nonlinear estimator of an NFS (in the mean square sense) represents a function of the receding horizon estimate and its error covariance. The proposed receding horizon filter represents the standard Kalman filter with time-delays and special initial horizon conditions described by Lyapunov-like equations. In the general case, to calculate an optimal estimator of an NFS we propose using the unscented transformation. The important class of polynomial NFS is considered in detail. In the case of a polynomial NFS, an optimal estimator has a closed-form computational procedure. The subsequent application of the proposed receding horizon filter and nonlinear estimator to a linear stochastic system with time-delays and uncertainties demonstrates their effectiveness.
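
    The unscented transformation mentioned above approximates the mean of a nonlinear function of a Gaussian state estimate from a small set of deterministically chosen sigma points. A generic numpy sketch with standard UT mean weights (not the paper's specific filter; the state estimate, covariance and quadratic NFS are illustrative):

```python
import numpy as np

def unscented_mean(x_hat, P, f, alpha=1.0, kappa=1.0):
    """Approximate E[f(x)] for x ~ N(x_hat, P) via the unscented transform."""
    n = x_hat.size
    lam = alpha ** 2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)       # sigma-point spread
    pts = [x_hat] + [x_hat + S[:, i] for i in range(n)] \
                  + [x_hat - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    w[0] = lam / (n + lam)
    return float(np.dot(w, [f(p) for p in pts]))

# Quadratic NFS f(x) = x'x: the exact mean is |x_hat|^2 + trace(P), and
# the unscented transform reproduces it (UT is exact to second order).
x_hat = np.array([1.0, 2.0])
P = np.array([[0.5, 0.1], [0.1, 0.3]])
est = unscented_mean(x_hat, P, lambda x: x @ x)
print(round(est, 3))
```

    For polynomial NFS of low degree the transform is exact, which is consistent with the abstract's closed-form procedure for the polynomial case.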

  12. Estimation of Multiple Point Sources for Linear Fractional Order Systems Using Modulating Functions

    KAUST Repository

    Belkhatir, Zehor; Laleg-Kirati, Taous-Meriem

    2017-01-01

    This paper proposes an estimation algorithm for the characterization of multiple point inputs for linear fractional order systems. First, using polynomial modulating functions method and a suitable change of variables the problem of estimating

  13. Application of chaos-based chaotic invasive weed optimization techniques for environmental OPF problems in the power system

    International Nuclear Information System (INIS)

    Ghasemi, Mojtaba; Ghavidel, Sahand; Aghaei, Jamshid; Gitizadeh, Mohsen; Falah, Hasan

    2014-01-01

    Highlights: • Chaotic invasive weed optimization techniques based on chaos. • Nonlinear environmental OPF problem considering non-smooth fuel cost curves. • A comparative study of CIWO techniques for the environmental OPF problem. - Abstract: This paper presents efficient chaotic invasive weed optimization (CIWO) techniques based on chaos for solving optimal power flow (OPF) problems with non-smooth generator fuel cost functions (non-smooth OPF) with the minimum pollution level (environmental OPF) in electric power systems. The OPF problem is used for developing corrective strategies and to perform least-cost dispatches. However, cost-based OPF solutions usually result in an unattractive system gas emission issue (environmental OPF). In the present paper, the OPF problem is formulated by considering the emission issue. The total emission can be expressed as a non-linear function of power generation, as a multi-objective optimization problem, where optimal control settings for simultaneous minimization of fuel cost and gas emission are obtained. The IEEE 30-bus test power system is presented to illustrate the application of the environmental OPF problem using CIWO techniques. Our experimental results suggest that CIWO techniques hold immense promise as efficient and powerful algorithms for optimization in power systems

  14. Machine Learning Estimation of Atom Condensed Fukui Functions.

    Science.gov (United States)

    Zhang, Qingyou; Zheng, Fangfang; Zhao, Tanfeng; Qu, Xiaohui; Aires-de-Sousa, João

    2016-02-01

    To enable the fast estimation of atom condensed Fukui functions, machine learning algorithms were trained with databases of DFT pre-calculated values for ca. 23,000 atoms in organic molecules. The problem was approached as the ranking of atom types with the Bradley-Terry (BT) model, and as the regression of the Fukui function. Random Forests (RF) were trained to predict the condensed Fukui function, to rank atoms in a molecule, and to classify atoms as high/low Fukui function. Atomic descriptors were based on counts of atom types in spheres around the kernel atom. The BT coefficients assigned to atom types enabled the identification (93-94 % accuracy) of the atom with the highest Fukui function in pairs of atoms in the same molecule with differences ≥0.1. In whole molecules, the atom with the top Fukui function could be recognized in ca. 50 % of the cases and, on average, about 3 of the top 4 atoms could be recognized in a shortlist of 4. Regression RF yielded predictions for test sets with R² = 0.68-0.69, improving the ability of BT coefficients to rank atoms in a molecule. Atom classification (as high/low Fukui function) was obtained with RF with a sensitivity of 55-61 % and a specificity of 94-95 %. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
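The Bradley-Terry model used here to rank atom types can be sketched as follows; the coefficients and atom-type names are purely illustrative, not values from the paper:

```python
import math

def bt_prob(beta_i, beta_j):
    """Bradley-Terry probability that item i outranks item j,
    given per-item coefficients beta."""
    return math.exp(beta_i) / (math.exp(beta_i) + math.exp(beta_j))

# Hypothetical BT coefficients for two atom types:
betas = {"carbonyl_C": 1.2, "methyl_C": -0.4}

# Probability that the carbonyl carbon has the higher Fukui function:
p = bt_prob(betas["carbonyl_C"], betas["methyl_C"])
```

Picking the atom with the larger coefficient is equivalent to thresholding this probability at 0.5, which is how pairwise BT coefficients induce a ranking over all atoms in a molecule.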

  15. LARF: Instrumental Variable Estimation of Causal Effects through Local Average Response Functions

    Directory of Open Access Journals (Sweden)

    Weihua An

    2016-07-01

    Full Text Available LARF is an R package that provides instrumental variable estimation of treatment effects when both the endogenous treatment and its instrument (i.e., the treatment inducement) are binary. The method (Abadie 2003) involves two steps. First, pseudo-weights are constructed from the probability of receiving the treatment inducement. By default LARF estimates the probability by a probit regression. It also provides semiparametric power series estimation of the probability and allows users to employ other external methods to estimate the probability. Second, the pseudo-weights are used to estimate the local average response function conditional on treatment and covariates. LARF provides both least squares and maximum likelihood estimates of the conditional treatment effects.
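The pseudo-weights in Abadie's (2003) first step are commonly written as κ = 1 − D(1−Z)/(1−π(X)) − (1−D)Z/π(X), with D the treatment, Z the binary inducement, and π(X) the estimated inducement probability. A sketch of that formula, assuming π has already been estimated (e.g., by the probit first stage LARF uses by default):

```python
def abadie_kappa(d, z, pi):
    """Abadie (2003) pseudo-weight for one observation (sketch).

    d  : observed treatment (0/1)
    z  : binary instrument / treatment inducement (0/1)
    pi : estimated P(Z = 1 | X), e.g. from a probit first stage
    """
    return 1.0 - d * (1 - z) / (1.0 - pi) - (1 - d) * z / pi

# Observations consistent with complier behaviour get weight 1,
# regardless of the estimated probability:
w = abadie_kappa(d=1, z=1, pi=0.6)
```

Negative weights flag observations inconsistent with complier behaviour; the second step then fits the local average response function by weighted least squares or maximum likelihood using these weights.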

  16. Estimation of cost function in the natural gas industry

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Young Duk [Korea Energy Economics Institute, Euiwang (Korea)

    1999-02-01

    The natural gas industry in Korea has the characteristics of a dual industrial structure, with wholesale and retail tiers and a regional monopoly for each city gas company. Recently there have been discussions on the restructuring of the gas industry and the problems arising from such an industrial organization. Against this background, the labor and capital costs of KOGAS, the wholesaler, were analyzed to assess its efficiency, and the cost function, focusing on distribution, was estimated to identify scale effects for the city gas companies, the retailers. The results show that, in the case of KOGAS, competitive power needs to be enhanced by improving labor productivity through stabilization of the labor structure and by maximizing value-added through a stable capital mix. The estimated cost function of the city gas companies indicates that the existing regional monopolies yield economies of scale only when both the area of operation and the amount consumed per end user increase. (author). 31 refs., 10 figs., 43 tabs.

  17. Bayesian Estimation of the Scale Parameter of Inverse Weibull Distribution under the Asymmetric Loss Functions

    Directory of Open Access Journals (Sweden)

    Farhad Yahgmaei

    2013-01-01

    Full Text Available This paper proposes different methods of estimating the scale parameter of the inverse Weibull distribution (IWD). Specifically, the maximum likelihood estimator of the scale parameter in the IWD is introduced. We then derive the Bayes estimators for the scale parameter in the IWD by considering quasi, gamma, and uniform prior distributions under the squared error, entropy, and precautionary loss functions. Finally, the different proposed estimators are compared through extensive simulation studies in terms of their mean square errors and risk functions.
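For a concrete instance of the maximum likelihood step, assume the common inverse Weibull parameterization F(x) = exp(−θ x^(−β)) with known shape β; the scale MLE then has the closed form θ̂ = n / Σ x_i^(−β). A sketch under that assumption (parameterizations of the IWD vary across papers):

```python
def iwd_scale_mle(xs, beta):
    """MLE of the scale parameter theta of the inverse Weibull density
    f(x) = theta * beta * x**-(beta + 1) * exp(-theta * x**-beta),
    assuming the shape parameter beta is known:
        theta_hat = n / sum(x_i ** -beta)
    """
    n = len(xs)
    return n / sum(x ** (-beta) for x in xs)

# Illustrative data, not from the paper:
theta_hat = iwd_scale_mle([0.8, 1.1, 1.5, 2.3], beta=2.0)
```

Setting the derivative of the log-likelihood n ln θ − θ Σ x_i^(−β) + const to zero gives the closed form directly.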

  18. On the robust nonparametric regression estimation for a functional regressor

    OpenAIRE

    Azzedine , Nadjia; Laksaci , Ali; Ould-Saïd , Elias

    2009-01-01

    Correspondence: Elias Ould-Saïd, Departement de Mathematiques, Univ. Djillali Liabes, BP 89, 22000 Sidi Bel Abbes, Algeria.

  19. Development of fragility functions to estimate homelessness after an earthquake

    Science.gov (United States)

    Brink, Susan A.; Daniell, James; Khazai, Bijan; Wenzel, Friedemann

    2014-05-01

    Immediately after an earthquake, many stakeholders need to make decisions about their response. These decisions often need to be made in a data-poor environment, as accurate information on the impact can take months or even years to be collected and publicized. Social fragility functions have been developed and applied to provide an estimate of the impact in terms of building damage, deaths and injuries in near real time. These rough estimates can help governments and response agencies determine what aid may be required, which can improve their emergency response and facilitate planning for longer-term response. Due to building damage, lifeline outages, fear of aftershocks, or other causes, people may become displaced or homeless after an earthquake. Especially in cold and dangerous locations, the rapid provision of safe emergency shelter can be a lifesaving necessity. However, immediately after an event there is little information available about the number of homeless, their locations, and whether they require public shelter, to aid the response agencies in decision making. In this research, we analyze homelessness after historic earthquakes using the CATDAT Damaging Earthquakes Database. CATDAT includes information on the hazard as well as the physical and social impact of over 7200 damaging earthquakes from 1900-2013 (Daniell et al. 2011). We explore the relationship of both earthquake characteristics and area characteristics with homelessness after the earthquake. We consider modelled variables such as population density, HDI, year, and measures of ground motion intensity developed in Daniell (2014) over the period 1900-2013, as well as temperature. Using a methodology based on that used for the PAGER fatality fragility curves developed by Jaiswal and Wald (2010), but with regression through time using the socioeconomic parameters developed in Daniell et al. (2012) for "socioeconomic fragility functions", we develop a set of fragility curves that can be

  20. Stability of the Minimizers of Least Squares with a Non-Convex Regularization. Part I: Local Behavior

    International Nuclear Information System (INIS)

    Durand, S.; Nikolova, M.

    2006-01-01

    Many estimation problems amount to minimizing a piecewise C^m objective function, with m ≥ 2, composed of a quadratic data-fidelity term and a general regularization term. It is widely accepted that the minimizers obtained using non-convex and possibly non-smooth regularization terms are frequently good estimates. However, few facts are known on the ways to control properties of these minimizers. This work is dedicated to the stability of the minimizers of such objective functions with respect to variations of the data. It consists of two parts: first we consider all local minimizers, whereas in a second part we derive results on global minimizers. In this part we focus on data points such that every local minimizer is isolated and results from a C^{m-1} local minimizer function, defined on some neighborhood. We demonstrate that all data points for which this fails form a set whose closure is negligible

  1. Estimating the Partition Function Zeros by Using the Wang-Landau Monte Carlo Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Seung-Yeon [Korea National University of Transportation, Chungju (Korea, Republic of)

    2017-03-15

    The partition function zeros provide one of the most efficient methods for investigating phase transitions and critical phenomena in various physical systems. Estimating the partition function zeros requires information on the density of states Ω(E) as a function of the energy E. Currently, the Wang-Landau Monte Carlo algorithm is one of the best methods for calculating Ω(E). The partition function zeros in the complex temperature plane of the Ising model on an L × L square lattice (L = 10 ∼ 80) with a periodic boundary condition have been estimated by using the Wang-Landau Monte Carlo algorithm. The efficiency of the Wang-Landau Monte Carlo algorithm and the accuracy of the partition function zeros have been evaluated for three different flatness criteria for the histogram H(E): 5%, 10%, and 20%.
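The Wang-Landau update that produces Ω(E) can be sketched as follows for a small 2D Ising lattice; the sweep length, flatness handling, and seeding of newly discovered energy levels below are simplifications, not the paper's exact settings:

```python
import math
import random

def wang_landau_ising(L=4, f_final=1e-4, flatness=0.8, max_sweeps=500, seed=1):
    """Wang-Landau estimate of ln g(E) for the 2D Ising model on an
    L x L periodic lattice -- a minimal sketch of the standard scheme."""
    random.seed(seed)
    spins = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

    def total_energy():
        e = 0
        for i in range(L):
            for j in range(L):
                e -= spins[i][j] * (spins[(i + 1) % L][j] + spins[i][(j + 1) % L])
        return e

    E = total_energy()
    log_g = {E: 0.0}   # running estimate of ln g(E)
    hist = {E: 0}      # visit histogram H(E)
    ln_f = 1.0         # ln of the modification factor f
    sweeps = 0
    while ln_f > f_final and sweeps < max_sweeps:
        for _ in range(10000):
            i, j = random.randrange(L), random.randrange(L)
            nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                  + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            dE = 2 * spins[i][j] * nb          # energy change of the flip
            E_new = E + dE
            if E_new not in log_g:             # first visit: seed with the
                log_g[E_new] = log_g[E]        # current level's estimate
                hist[E_new] = 0
            delta = log_g[E] - log_g[E_new]    # accept with prob g(E)/g(E_new)
            if delta >= 0 or random.random() < math.exp(delta):
                spins[i][j] = -spins[i][j]
                E = E_new
            log_g[E] += ln_f
            hist[E] += 1
        sweeps += 1
        counts = list(hist.values())
        if min(counts) > flatness * (sum(counts) / len(counts)):
            hist = {e: 0 for e in hist}        # histogram flat: halve ln f
            ln_f /= 2.0
    return log_g
```

The flatness criterion here (minimum count above a fraction of the mean) is the knob the abstract's 5%/10%/20% comparison varies; the resulting ln g(E) feeds directly into the partition function and its complex-temperature zeros.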

  2. Drawbacks of the use of indirect estimates of renal function to evaluate the effect of risk factors on renal function

    NARCIS (Netherlands)

    Verhave, JC; Gansevoort, RT; Hillege, HL; De Zeeuw, D; Curhan, GC; De Jong, PE

    Many epidemiologic studies presently aim to evaluate the effect of risk factors on renal function. As direct measurement of renal function is cumbersome to perform, epidemiologic studies generally use an indirect estimate of renal function. The consequences of using different methods of renal

  3. Multistability of memristive Cohen-Grossberg neural networks with non-monotonic piecewise linear activation functions and time-varying delays.

    Science.gov (United States)

    Nie, Xiaobing; Zheng, Wei Xing; Cao, Jinde

    2015-11-01

    The problem of coexistence and dynamical behaviors of multiple equilibrium points is addressed for a class of memristive Cohen-Grossberg neural networks with non-monotonic piecewise linear activation functions and time-varying delays. By virtue of the fixed point theorem, nonsmooth analysis theory and other analytical tools, some sufficient conditions are established to guarantee that such n-dimensional memristive Cohen-Grossberg neural networks can have 5^n equilibrium points, among which 3^n equilibrium points are locally exponentially stable. It is shown that greater storage capacity can be achieved by neural networks with the non-monotonic activation functions introduced herein than the ones with Mexican-hat-type activation function. In addition, unlike most existing multistability results of neural networks with monotonic activation functions, those obtained 3^n locally stable equilibrium points are located both in saturated regions and unsaturated regions. The theoretical findings are verified by an illustrative example with computer simulations. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Linear estimates of structure functions from deep inelastic lepton-nucleon scattering data. Part 1

    International Nuclear Information System (INIS)

    Anikeev, V.B.; Zhigunov, V.P.

    1991-01-01

    This paper concerns the linear estimation of structure functions from muon(electron)-nucleon scattering. The expressions obtained for the structure function estimates provide a correct analysis of the random error and the bias. The bias arises because of the finite number of experimental data and the finite resolution of the experiment. The approach suggested may become useful for data handling from experiments at HERA. 9 refs

  5. Combined Uncertainty and A-Posteriori Error Bound Estimates for General CFD Calculations: Theory and Software Implementation

    Science.gov (United States)

    Barth, Timothy J.

    2014-01-01

    This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are only of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data; sparse tensorization methods [2] utilizing node-nested hierarchies; and sampling methods [4] for high-dimensional random variable spaces.

  6. A recursive Monte Carlo method for estimating importance functions in deep penetration problems

    International Nuclear Information System (INIS)

    Goldstein, M.

    1980-04-01

    A practical recursive Monte Carlo method for estimating the importance function distribution, aimed at importance sampling for the solution of deep penetration problems in three-dimensional systems, was developed. The efficiency of the recursive method was investigated for sample problems including one- and two-dimensional, monoenergetic and multigroup problems, as well as for a practical deep-penetration problem with streaming. The results of the recursive Monte Carlo calculations agree fairly well with S_n results. It is concluded that the recursive Monte Carlo method promises to become a universal method for estimating the importance function distribution for the solution of deep-penetration problems, in all kinds of systems: for many systems the recursive method is likely to be more efficient than previously existing methods; for three-dimensional systems it is the first method that can estimate the importance function with the accuracy required for an efficient solution based on importance sampling of neutron deep-penetration problems in those systems

  7. Modulation transfer function estimation of optical lens system by adaptive neuro-fuzzy methodology

    Science.gov (United States)

    Petković, Dalibor; Shamshirband, Shahaboddin; Pavlović, Nenad T.; Anuar, Nor Badrul; Kiah, Miss Laiha Mat

    2014-07-01

    The quantitative assessment of image quality is an important consideration in any type of imaging system. The modulation transfer function (MTF) is a graphical description of the sharpness and contrast of an imaging system or of its individual components. The MTF is also known as the spatial frequency response. The MTF curve has different meanings according to the corresponding frequency. The MTF of an optical system specifies the contrast transmitted by the system as a function of image size, and is determined by the inherent optical properties of the system. In this study, an adaptive neuro-fuzzy inference system (ANFIS) estimator is designed and adapted to estimate the MTF value of an actual optical system. The neural network in ANFIS adjusts the parameters of the membership functions in the fuzzy logic of the fuzzy inference system. The backpropagation learning algorithm is used for training this network. This intelligent estimator is implemented using Matlab/Simulink and its performance is investigated. The simulation results presented in this paper show the effectiveness of the developed method.
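The quantity being estimated can be pinned down in a few lines: at each spatial frequency, the MTF is the ratio of the image modulation (contrast) to the object modulation. The ANFIS estimator itself is not reproduced here; the intensity values below are hypothetical:

```python
def modulation(i_max, i_min):
    """Michelson modulation (contrast) of a sinusoidal intensity pattern."""
    return (i_max - i_min) / (i_max + i_min)

def mtf(image_mod, object_mod):
    """MTF at one spatial frequency: the ratio of transmitted contrast."""
    return image_mod / object_mod

# Hypothetical measurement at a single spatial frequency:
m_obj = modulation(1.0, 0.0)    # perfect input contrast = 1.0
m_img = modulation(0.8, 0.2)    # blurred output contrast = 0.6
value = mtf(m_img, m_obj)       # MTF = 0.6 at this frequency
```

Sweeping the spatial frequency of the input pattern and repeating this ratio traces out the MTF curve the abstract refers to.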

  8. Iohexol clearance is superior to creatinine-based renal function estimating equations in detecting short-term renal function decline in chronic heart failure.

    Science.gov (United States)

    Cvan Trobec, Katja; Kerec Kos, Mojca; von Haehling, Stephan; Anker, Stefan D; Macdougall, Iain C; Ponikowski, Piotr; Lainscak, Mitja

    2015-12-01

    To compare the performance of iohexol plasma clearance and creatinine-based renal function estimating equations in monitoring longitudinal renal function changes in chronic heart failure (CHF) patients, and to assess the effects of body composition on the equation performance. Iohexol plasma clearance was measured in 43 CHF patients at baseline and after at least 6 months. Simultaneously, renal function was estimated with five creatinine-based equations (four- and six-variable Modification of Diet in Renal Disease, Cockcroft-Gault, Cockcroft-Gault adjusted for lean body mass, Chronic Kidney Disease Epidemiology Collaboration equation) and body composition was assessed using bioimpedance and dual-energy x-ray absorptiometry. Over a median follow-up of 7.5 months (range 6-17 months), iohexol clearance significantly declined (52.8 vs 44.4 mL/[min ×1.73 m2], P=0.001). This decline was significantly higher in patients receiving mineralocorticoid receptor antagonists at baseline (mean decline -22% of baseline value vs -3%, P=0.037). Mean serum creatinine concentration did not change significantly during follow-up and no creatinine-based renal function estimating equation was able to detect the significant longitudinal decline of renal function determined by iohexol clearance. After accounting for body composition, the accuracy of the equations improved, but not their ability to detect renal function decline. Renal function measured with iohexol plasma clearance showed relevant decline in CHF patients, particularly in those treated with mineralocorticoid receptor antagonists. None of the equations for renal function estimation was able to detect these changes. ClinicalTrials.gov registration number: NCT01829880.
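Of the creatinine-based equations compared in this study, the Cockcroft-Gault formula is the simplest to state. A sketch of the unadjusted form (the lean-body-mass variant in the study substitutes a different weight term):

```python
def cockcroft_gault(age_years, weight_kg, scr_mg_dl, female=False):
    """Cockcroft-Gault creatinine clearance estimate (mL/min):
    CrCl = (140 - age) * weight / (72 * SCr), times 0.85 for women."""
    crcl = (140 - age_years) * weight_kg / (72.0 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

# A 60-year-old, 70 kg man with serum creatinine 1.0 mg/dL:
crcl = cockcroft_gault(60, 70, 1.0)
```

Because serum creatinine depends on muscle mass, formulas of this shape can stay flat while measured clearance (e.g. by iohexol) declines, which is the failure mode the study documents.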

  9. School District Inputs and Biased Estimation of Educational Production Functions.

    Science.gov (United States)

    Watts, Michael

    1985-01-01

    In 1979, Eric Hanushek pointed out a potential problem in estimating educational production functions, particularly at the precollege level. He observed that it is frequently inappropriate to include school-system variables in equations using the individual student as the unit of observation. This study offers limited evidence supporting this…

  10. Bayesian Parameter Estimation via Filtering and Functional Approximations

    KAUST Repository

    Matthies, Hermann G.

    2016-11-25

    The inverse problem of determining parameters in a model by comparing some output of the model with observations is addressed. This is a description of what has to be done to use the Gauss-Markov-Kalman filter for the Bayesian estimation and updating of parameters in a computational model. This is a filter acting on random variables, and while its Monte Carlo variant --- the Ensemble Kalman Filter (EnKF) --- is fairly straightforward, we subsequently only sketch its implementation with the help of functional representations.

  11. Bayesian Parameter Estimation via Filtering and Functional Approximations

    KAUST Repository

    Matthies, Hermann G.; Litvinenko, Alexander; Rosic, Bojana V.; Zander, Elmar

    2016-01-01

    The inverse problem of determining parameters in a model by comparing some output of the model with observations is addressed. This is a description of what has to be done to use the Gauss-Markov-Kalman filter for the Bayesian estimation and updating of parameters in a computational model. This is a filter acting on random variables, and while its Monte Carlo variant --- the Ensemble Kalman Filter (EnKF) --- is fairly straightforward, we subsequently only sketch its implementation with the help of functional representations.

  12. Optimum Dispatch of Hybrid Solar Thermal (HSTP) Electric Power Plant Using Non-Smooth Cost Function and Emission Function for IEEE-30 Bus System

    Directory of Open Access Journals (Sweden)

    Saroj Kumar Dash

    2016-07-01

    Full Text Available The basic objective of economic load dispatch (ELD) is to optimize the total fuel cost of a hybrid solar thermal electric power plant (HSTP). In ELD problems the cost function for each generator has been approximated by a single quadratic cost equation. As the cost of coal increases, it becomes even more important to have a good model for the production cost of each generator in the solar thermal hybrid system. A more accurate formulation of the ELD problem is obtained by expressing the generation cost function as a piecewise quadratic cost function. However, solution methods for the ELD problem with a piecewise quadratic cost function require more complicated algorithms, such as the hierarchical structure approach along with evolutionary computations (ECs). A test system comprising 10 units with 29 different fuel [7] cost equations is considered in this paper. The applied genetic algorithm method provides an optimal solution for the given load demand.
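For contrast with the GA approach, the classical smooth-case baseline is lambda iteration: every unit runs at equal incremental cost dC/dP = λ, clamped to its limits, with λ found by bisection so that total output meets demand. A sketch with illustrative unit data (this is the method that piecewise quadratic and valve-point costs break, motivating evolutionary methods):

```python
def lambda_dispatch(units, demand, tol=1e-6):
    """Classical lambda-iteration economic dispatch for smooth quadratic
    costs C_i(P) = a + b*P + c*P**2 (a baseline sketch, not the paper's
    GA method). units: list of (a, b, c, p_min, p_max)."""
    def outputs(lam):
        ps = []
        for a, b, c, pmin, pmax in units:
            p = (lam - b) / (2.0 * c)       # equal incremental cost
            ps.append(min(max(p, pmin), pmax))  # respect unit limits
        return ps

    lo, hi = 0.0, 1000.0                    # bracket for lambda ($/MWh)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sum(outputs(mid)) < demand:
            lo = mid
        else:
            hi = mid
    return outputs(0.5 * (lo + hi))

# Two hypothetical units (a, b, c, Pmin, Pmax) and a 250 MW demand:
units = [(100.0, 2.0, 0.01, 10.0, 200.0),
         (120.0, 2.5, 0.02, 10.0, 150.0)]
ps = lambda_dispatch(units, demand=250.0)
```

With piecewise quadratic or valve-point cost curves the incremental cost is discontinuous, the inversion inside `outputs` is no longer single-valued, and population-based methods such as the GA used in the paper become attractive.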

  13. Pedotransfer functions to estimate soil water content at field capacity ...

    Indian Academy of Sciences (India)


    available scarce water resources in dry land agriculture, but direct measurement thereof for multiple locations in the field is not always feasible. Therefore, pedotransfer functions (PTFs) were developed to estimate soil water retention at FC and PWP for dryland soils of India. A soil database available for Arid Western India ...

  14. State-space model with deep learning for functional dynamics estimation in resting-state fMRI.

    Science.gov (United States)

    Suk, Heung-Il; Wee, Chong-Yaw; Lee, Seong-Whan; Shen, Dinggang

    2016-04-01

    Studies on resting-state functional Magnetic Resonance Imaging (rs-fMRI) have shown that different brain regions still actively interact with each other while a subject is at rest, and such functional interaction is not stationary but changes over time. In terms of a large-scale brain network, in this paper, we focus on time-varying patterns of functional networks, i.e., functional dynamics, inherent in rs-fMRI, which is one of the emerging issues along with the network modelling. Specifically, we propose a novel methodological architecture that combines deep learning and state-space modelling, and apply it to rs-fMRI based Mild Cognitive Impairment (MCI) diagnosis. We first devise a Deep Auto-Encoder (DAE) to discover hierarchical non-linear functional relations among regions, by which we transform the regional features into an embedding space, whose bases are complex functional networks. Given the embedded functional features, we then use a Hidden Markov Model (HMM) to estimate dynamic characteristics of functional networks inherent in rs-fMRI via internal states, which are unobservable but can be inferred from observations statistically. By building a generative model with an HMM, we estimate the likelihood of the input features of rs-fMRI as belonging to the corresponding status, i.e., MCI or normal healthy control, based on which we identify the clinical label of a testing subject. In order to validate the effectiveness of the proposed method, we performed experiments on two different datasets and compared with state-of-the-art methods in the literature. We also analyzed the functional networks learned by DAE, estimated the functional connectivities by decoding hidden states in HMM, and investigated the estimated functional connectivities by means of a graph-theoretic approach. Copyright © 2016 Elsevier Inc. All rights reserved.
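The HMM likelihood evaluation underlying such a generative classifier is the forward algorithm. A minimal discrete-emission sketch (the paper's model works on learned embeddings rather than symbols; all numbers here are illustrative):

```python
import math

def forward_loglik(pi, A, B, obs):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm.

    pi : initial state probabilities, pi[s]
    A  : transition matrix, A[s][t] = P(next = t | current = s)
    B  : emission matrix, B[s][o] = P(obs = o | state = s)
    obs: list of observation symbols (ints)
    """
    n = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    scale = sum(alpha)
    loglik = math.log(scale)
    alpha = [a / scale for a in alpha]       # rescale to avoid underflow
    for o in obs[1:]:
        alpha = [sum(alpha[s] * A[s][t] for s in range(n)) * B[t][o]
                 for t in range(n)]
        scale = sum(alpha)
        loglik += math.log(scale)
        alpha = [a / scale for a in alpha]
    return loglik

# Toy two-state example:
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
ll = forward_loglik(pi, A, B, [0, 1])
```

Classification then amounts to training one HMM per class (MCI vs. control) and assigning a test sequence to the class with the higher log-likelihood.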

  15. Galerkin FEM for Fractional Order Parabolic Equations with Initial Data in H^{-s}, 0 ≤ s ≤ 1

    KAUST Repository

    Jin, Bangti; Lazarov, Raytcho; Pasciak, Joseph; Zhou, Zhi

    2013-01-01

    We investigate semi-discrete numerical schemes based on the standard Galerkin and lumped mass Galerkin finite element methods for an initial-boundary value problem for homogeneous fractional diffusion problems with non-smooth initial data. We assume that Ω ⊂ ℝ^d, d = 1,2,3 is a convex polygonal (polyhedral) domain. We theoretically justify optimal order error estimates in L²- and H¹-norms for initial data in H^{-s}(Ω), 0 ≤ s ≤ 1. We confirm our theoretical findings with a number of numerical tests that include initial data v being a Dirac δ-function supported on a (d-1)-dimensional manifold. © 2013 Springer-Verlag.

  16. Joint brain connectivity estimation from diffusion and functional MRI data

    Science.gov (United States)

    Chu, Shu-Hsien; Lenglet, Christophe; Parhi, Keshab K.

    2015-03-01

    Estimating brain wiring patterns is critical to better understand the brain organization and function. Anatomical brain connectivity models axonal pathways, while the functional brain connectivity characterizes the statistical dependencies and correlation between the activities of various brain regions. The synchronization of brain activity can be inferred through the variation of blood-oxygen-level dependent (BOLD) signal from functional MRI (fMRI) and the neural connections can be estimated using tractography from diffusion MRI (dMRI). Functional connections between brain regions are supported by anatomical connections, and the synchronization of brain activities arises through sharing of information in the form of electro-chemical signals on axon pathways. Jointly modeling fMRI and dMRI data may improve the accuracy in constructing anatomical connectivity as well as functional connectivity. Such an approach may lead to novel multimodal biomarkers potentially able to better capture functional and anatomical connectivity variations. We present a novel brain network model which jointly models the dMRI and fMRI data to improve the anatomical connectivity estimation and extract the anatomical subnetworks associated with specific functional modes by constraining the anatomical connections as structural supports to the functional connections. The key idea is similar to a multi-commodity flow optimization problem that minimizes the cost or maximizes the efficiency for flow configuration and simultaneously fulfills the supply-demand constraint for each commodity. In the proposed network, the nodes represent the grey matter (GM) regions providing brain functionality, and the links represent white matter (WM) fiber bundles connecting those regions and delivering information. The commodities can be thought of as the information corresponding to brain activity patterns as obtained for instance by independent component analysis (ICA) of fMRI data. The concept of information

  17. Estimation of the reliability function for two-parameter exponentiated Rayleigh or Burr type X distribution

    Directory of Open Access Journals (Sweden)

    Anupam Pathak

    2014-11-01

    Full Text Available Abstract: Problem Statement: The two-parameter exponentiated Rayleigh distribution has been widely used, especially in the modelling of lifetime event data. It provides a statistical model which has a wide variety of applications in many areas, and its main advantage is its ability in the context of lifetime events among other distributions. The uniformly minimum variance unbiased and maximum likelihood estimation methods are ways to estimate the parameters of the distribution. In this study we explore and compare the performance of the uniformly minimum variance unbiased and maximum likelihood estimators of the reliability function R(t) = P(X > t) and P = P(X > Y) for the two-parameter exponentiated Rayleigh distribution. Approach: A new technique of obtaining these parametric functions is introduced, in which the major role is played by the powers of the parameter(s), and the functional forms of the parametric functions to be estimated are not needed. We explore the performance of these estimators numerically under varying conditions. Through the simulation study a comparison is made on the performance of these estimators with respect to bias, Mean Square Error (MSE), 95% confidence length and the corresponding coverage percentage. Conclusion: Based on the results of the simulation study, the UMVUEs of R(t) and ‘P’ for the two-parameter exponentiated Rayleigh distribution were found to be superior to the MLEs of R(t) and ‘P’.
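Taking the common Burr type X parameterization F(x) = (1 − exp(−λx²))^α (an assumption; conventions vary across papers), the reliability function R(t) has a closed form that is easy to cross-check by simulation:

```python
import math
import random

def er_reliability(t, alpha, lam):
    """R(t) = P(X > t) for the exponentiated Rayleigh (Burr type X)
    distribution with CDF F(x) = (1 - exp(-lam * x**2)) ** alpha."""
    return 1.0 - (1.0 - math.exp(-lam * t * t)) ** alpha

def er_sample(alpha, lam):
    """Draw one variate by inverting the CDF:
    x = sqrt(-ln(1 - u**(1/alpha)) / lam)."""
    u = random.random()
    return math.sqrt(-math.log(1.0 - u ** (1.0 / alpha)) / lam)

# Sanity check of the closed form against a Monte Carlo estimate:
random.seed(3)
alpha, lam, t = 2.0, 1.0, 1.0
analytic = er_reliability(t, alpha, lam)
mc = sum(er_sample(alpha, lam) > t for _ in range(50000)) / 50000.0
```

The UMVUE-versus-MLE comparison in the paper plugs the respective parameter estimates into exactly this kind of reliability expression and compares the resulting bias and MSE.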

  18. Iohexol clearance is superior to creatinine-based renal function estimating equations in detecting short-term renal function decline in chronic heart failure

    Science.gov (United States)

    Cvan Trobec, Katja; Kerec Kos, Mojca; von Haehling, Stephan; Anker, Stefan D.; Macdougall, Iain C.; Ponikowski, Piotr; Lainscak, Mitja

    2015-01-01

    Aim To compare the performance of iohexol plasma clearance and creatinine-based renal function estimating equations in monitoring longitudinal renal function changes in chronic heart failure (CHF) patients, and to assess the effects of body composition on the equation performance. Methods Iohexol plasma clearance was measured in 43 CHF patients at baseline and after at least 6 months. Simultaneously, renal function was estimated with five creatinine-based equations (four- and six-variable Modification of Diet in Renal Disease, Cockcroft-Gault, Cockcroft-Gault adjusted for lean body mass, Chronic Kidney Disease Epidemiology Collaboration equation) and body composition was assessed using bioimpedance and dual-energy x-ray absorptiometry. Results Over a median follow-up of 7.5 months (range 6-17 months), iohexol clearance significantly declined (52.8 vs 44.4 mL/[min ×1.73 m2], P = 0.001). This decline was significantly higher in patients receiving mineralocorticoid receptor antagonists at baseline (mean decline -22% of baseline value vs -3%, P = 0.037). Mean serum creatinine concentration did not change significantly during follow-up and no creatinine-based renal function estimating equation was able to detect the significant longitudinal decline of renal function determined by iohexol clearance. After accounting for body composition, the accuracy of the equations improved, but not their ability to detect renal function decline. Conclusions Renal function measured with iohexol plasma clearance showed relevant decline in CHF patients, particularly in those treated with mineralocorticoid receptor antagonists. None of the equations for renal function estimation was able to detect these changes. ClinicalTrials.gov registration number NCT01829880 PMID:26718759

  19. Application of independent component analysis for speech-music separation using an efficient score function estimation

    Science.gov (United States)

    Pishravian, Arash; Aghabozorgi Sahaf, Masoud Reza

    2012-12-01

    In this paper speech-music separation using Blind Source Separation is discussed. The separating algorithm is based on mutual information minimization, where the natural gradient algorithm is used for the minimization. In order to do that, score function estimation from samples of the observation signals (a combination of speech and music) is needed. The accuracy and speed of this estimation affect the quality of the separated signals and the processing time of the algorithm. The score function estimation in the presented algorithm is based on a Gaussian-mixture-based kernel density estimation method. The experimental results of the presented algorithm on speech-music separation, compared with a separating algorithm based on the Minimum Mean Square Error estimator, indicate that it achieves better performance with less processing time.
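The score function ψ(x) = −p′(x)/p(x) can be estimated directly from a Gaussian kernel density estimate, since the kernel's derivative is available in closed form. A sketch (the bandwidth choice and the mixture details are simplifications of the method described):

```python
import math
import random

def kde_score(x, samples, h):
    """Score function psi(x) = -p'(x)/p(x), where p is a Gaussian-kernel
    density estimate of bandwidth h built on the observed samples:
    psi(x) = sum(u_i * K(u_i)) / (h * sum(K(u_i))), u_i = (x - x_i)/h."""
    num = 0.0
    den = 0.0
    for xi in samples:
        u = (x - xi) / h
        k = math.exp(-0.5 * u * u)    # Gaussian kernel (constants cancel)
        num += u * k
        den += k
    return num / (h * den)

# For data from a standard normal source, the true score is psi(x) = x:
random.seed(11)
data = [random.gauss(0.0, 1.0) for _ in range(2000)]
h = 1.06 * 2000 ** -0.2   # Silverman-style bandwidth for unit variance
```

In the ICA context, this estimated score of each separated output drives the natural gradient update of the unmixing matrix.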

  20. An estimator of the survival function based on the semi-Markov model under dependent censorship.

    Science.gov (United States)

    Lee, Seung-Yeoun; Tsai, Wei-Yann

    2005-06-01

    Lee and Wolfe (Biometrics vol. 54, pp. 1176-1178, 1998) proposed the two-stage sampling design for testing the assumption of independent censoring, which involves further follow-up of a subset of lost-to-follow-up censored subjects. They also proposed an adjusted estimator of the survivor function for a proportional hazards model under the dependent censoring model. In this paper, a new estimator of the survivor function is proposed for the semi-Markov model under dependent censorship on the basis of the two-stage sampling data. The consistency and the asymptotic distribution of the proposed estimator are derived. The estimation procedure is illustrated with an example from a lung cancer clinical trial, and simulation results are reported for the mean squared errors of the estimators under a proportional hazards model and two different nonproportional hazards models.

  1. An open tool for input function estimation and quantification of dynamic PET FDG brain scans.

    Science.gov (United States)

    Bertrán, Martín; Martínez, Natalia; Carbajal, Guillermo; Fernández, Alicia; Gómez, Álvaro

    2016-08-01

    Positron emission tomography (PET) analysis of clinical studies is mostly restricted to qualitative evaluation. Quantitative analysis of PET studies is highly desirable to be able to compute an objective measurement of the process of interest in order to evaluate treatment response and/or compare patient data. But implementation of quantitative analysis generally requires the determination of the input function: the arterial blood or plasma activity which indicates how much tracer is available for uptake in the brain. The purpose of our work was to share with the community an open software tool that can assist in the estimation of this input function, and the derivation of a quantitative map from the dynamic PET study. Arterial blood sampling during the PET study is the gold standard method to get the input function, but is uncomfortable and risky for the patient so it is rarely used in routine studies. To overcome the lack of a direct input function, different alternatives have been devised and are available in the literature. These alternatives derive the input function from the PET image itself (image-derived input function) or from data gathered from previous similar studies (population-based input function). In this article, we present ongoing work that includes the development of a software tool that integrates several methods with novel strategies for the segmentation of blood pools and parameter estimation. The tool is available as an extension to the 3D Slicer software. Tests on phantoms were conducted in order to validate the implemented methods. We evaluated the segmentation algorithms over a range of acquisition conditions and vasculature size. Input function estimation algorithms were evaluated against ground truth of the phantoms, as well as on their impact over the final quantification map. End-to-end use of the tool yields quantification maps with [Formula: see text] relative error in the estimated influx versus ground truth on phantoms. The main

  2. Bayesian Estimation of Two-Parameter Weibull Distribution Using Extension of Jeffreys' Prior Information with Three Loss Functions

    Directory of Open Access Journals (Sweden)

    Chris Bambey Guure

    2012-01-01

    Full Text Available The Weibull distribution has been observed as one of the most useful distributions for modelling and analysing lifetime data in engineering, biology, and other fields. Many studies in the literature have sought the best method for estimating its parameters. Recently, much attention has been given to the Bayesian estimation approach, which is in contention with other estimation methods. In this paper, we examine the performance of the maximum likelihood estimator and of Bayesian estimators using an extension of Jeffreys' prior information with three loss functions, namely the linear exponential (LINEX) loss, the general entropy loss, and the squared error loss, for estimating the two-parameter Weibull failure time distribution. These methods are compared by mean square error through a simulation study with varying sample sizes. The results show that the Bayesian estimator using the extension of Jeffreys' prior under the linear exponential loss function in most cases gives the smallest mean square error and absolute bias for both the scale parameter α and the shape parameter β.
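
Under the linear exponential (LINEX) loss L(Δ) = exp(aΔ) - aΔ - 1, with Δ the estimation error, the Bayes estimator has the closed form -(1/a)·ln E[exp(-aθ)], the expectation taken over the posterior. A minimal Monte Carlo sketch follows; the value a = 0.5 and the gamma posterior in the usage note are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def linex_bayes_estimate(posterior_samples, a):
    """Bayes estimate under LINEX loss: -(1/a) * log E[exp(-a * theta)],
    computed from posterior samples with a log-sum-exp trick for stability."""
    z = -a * np.asarray(posterior_samples, dtype=float)
    m = z.max()
    return -(m + np.log(np.mean(np.exp(z - m)))) / a
```

For a > 0 the LINEX estimate sits below the posterior mean (by Jensen's inequality), reflecting the heavier penalty on overestimation.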

  3. Signal detection theory and vestibular perception: III. Estimating unbiased fit parameters for psychometric functions.

    Science.gov (United States)

    Chaudhuri, Shomesh E; Merfeld, Daniel M

    2013-03-01

    Psychophysics generally relies on estimating a subject's ability to perform a specific task as a function of an observed stimulus. For threshold studies, the fitted functions are called psychometric functions. While fitting psychometric functions to data acquired using adaptive sampling procedures (e.g., "staircase" procedures), investigators have encountered a bias in the spread ("slope" or "threshold") parameter that has been attributed to the serial dependency of the adaptive data. Using simulations, we confirm this bias for cumulative Gaussian parametric maximum likelihood fits on data collected via adaptive sampling procedures, and then present a bias-reduced maximum likelihood fit that substantially reduces the bias without reducing the precision of the spread parameter estimate and without reducing the accuracy or precision of the other fit parameters. As a separate topic, we explain how to implement this bias reduction technique using generalized linear model fits as well as other numeric maximum likelihood techniques such as the Nelder-Mead simplex. We then provide a comparison of the iterative bootstrap and observed information matrix techniques for estimating parameter fit variance from adaptive sampling procedure data sets. The iterative bootstrap technique is shown to be slightly more accurate; however, the observed information technique executes in a small fraction (0.005 %) of the time required by the iterative bootstrap technique, which is an advantage when a real-time estimate of parameter fit variance is required.
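
A maximum likelihood fit of a cumulative-Gaussian psychometric function can be sketched in a few lines. This toy version uses a coarse grid search rather than the generalized-linear-model or Nelder-Mead machinery discussed in the paper, and implements the ordinary fit, not the bias-reduced variant; the grid ranges and step are illustrative assumptions.

```python
import math

def norm_cdf(x, mu, sigma):
    """Cumulative Gaussian evaluated at x."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def fit_psychometric(levels, successes, trials):
    """Grid-search ML fit of (mu, sigma) for binomial response data."""
    best_mu, best_sigma, best_ll = 0.0, 1.0, -float("inf")
    for i in range(-20, 21):                  # mu in [-1, 1], step 0.05
        mu = 0.05 * i
        for j in range(10, 41):               # sigma in [0.5, 2], step 0.05
            sigma = 0.05 * j
            ll = 0.0
            for x, k, n in zip(levels, successes, trials):
                p = min(max(norm_cdf(x, mu, sigma), 1e-9), 1.0 - 1e-9)
                ll += k * math.log(p) + (n - k) * math.log(1.0 - p)
            if ll > best_ll:
                best_mu, best_sigma, best_ll = mu, sigma, ll
    return best_mu, best_sigma
```

With enough trials per stimulus level the grid fit recovers the generating parameters to within the grid step; the slope bias the paper analyzes arises when the levels come from an adaptive staircase rather than a fixed design.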

  4. On the a priori estimation of collocation error covariance functions: a feasibility study

    DEFF Research Database (Denmark)

    Arabelos, D.N.; Forsberg, René; Tscherning, C.C.

    2007-01-01

    and the associated error covariance functions were conducted in the Arctic region north of 64 degrees latitude. The correlation between the known features of the data and the parameters variance and correlation length of the computed error covariance functions was estimated using multiple regression analysis...

  5. Towards real-time diffuse optical tomography for imaging brain functions cooperated with Kalman estimator

    Science.gov (United States)

    Wang, Bingyuan; Zhang, Yao; Liu, Dongyuan; Ding, Xuemei; Dan, Mai; Pan, Tiantian; Wang, Yihan; Li, Jiao; Zhou, Zhongxing; Zhang, Limin; Zhao, Huijuan; Gao, Feng

    2018-02-01

    Functional near-infrared spectroscopy (fNIRS) is a non-invasive neuroimaging method for monitoring cerebral hemodynamics through optical changes measured at the scalp surface. It has played an increasingly important role in the psychology and medical imaging communities. Real-time imaging of brain function using NIRS makes it possible to explore sophisticated human brain functions unexplored before. A Kalman estimator has frequently been combined with modified Beer-Lambert law (MBLL) based optical topography (OT) for real-time brain function imaging. However, the spatial resolution of OT is low, hampering its application to more complicated brain functions. In this paper, we develop a real-time imaging method combining diffuse optical tomography (DOT) and a Kalman estimator, greatly improving the spatial resolution. Instead of presenting only a spatially distributed image of the changes in absorption coefficients at each time point of the recording, a single image is updated in real time by the Kalman estimator; each voxel represents the amplitude of the hemodynamic response function (HRF) associated with that voxel. We evaluate this method in simulation experiments, demonstrating that it yields images with more reliable spatial resolution. Furthermore, a statistical analysis is conducted to help decide whether a voxel in the field of view is activated or not.
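
Per voxel, a Kalman estimator of a single HRF amplitude reduces to a scalar filter. The sketch below uses a random-walk state model; the noise variances q and r and the regressor sequence c_t are illustrative assumptions, not the paper's full DOT formulation.

```python
def kalman_scalar(ys, cs, q=1e-4, r=0.01):
    """Scalar Kalman filter for y_t = c_t * x_t + noise,
    with a random-walk model x_t = x_{t-1} + w_t (var q).
    Returns the final state estimate."""
    x, p = 0.0, 1.0                      # initial state mean and variance
    for y, c in zip(ys, cs):
        p = p + q                        # predict step (random walk)
        k = p * c / (c * c * p + r)      # Kalman gain
        x = x + k * (y - c * x)          # measurement update
        p = (1.0 - k * c) * p
    return x
```

With a nearly static state (small q), the filter behaves like recursive least squares and converges to the underlying amplitude as observations accumulate.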

  6. Nonsmooth Newton method for Fischer function reformulation of contact force problems for interactive rigid body simulation

    DEFF Research Database (Denmark)

    Silcowitz, Morten; Niebe, Sarah Maria; Erleben, Kenny

    2009-01-01

    contact response. In this paper, we present a new approach to contact force determination. We reformulate the contact force problem as a nonlinear root search problem, using a Fischer function. We solve this problem using a generalized Newton method. Our new Fischer - Newton method shows improved...... qualities for specific configurations where the most widespread alternative, the Projected Gauss-Seidel method, fails. Experiments show superior convergence properties of the exact Fischer - Newton method....
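
The Fischer (Fischer-Burmeister) reformulation replaces the complementarity conditions 0 ≤ x ⊥ F(x) ≥ 0 by the root problem φ(x_i, F_i(x)) = 0 with φ(a, b) = sqrt(a² + b²) - a - b. A minimal generalized-Newton sketch on a 2×2 linear complementarity problem follows; the matrix M and vector q are illustrative, not a contact-dynamics model, and no globalization (line search) is included.

```python
import numpy as np

def fischer(a, b):
    """Fischer-Burmeister function: zero iff a >= 0, b >= 0, a*b = 0."""
    return np.sqrt(a * a + b * b) - a - b

def fb_newton(M, q, x0, tol=1e-10, max_iter=50):
    """Generalized Newton on Phi_i(x) = fischer(x_i, (Mx+q)_i) for an LCP."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        w = M @ x + q
        phi = fischer(x, w)
        if np.linalg.norm(phi) < tol:
            break
        r = np.sqrt(x * x + w * w)
        r = np.where(r == 0.0, 1.0, r)   # at the kink, pick the subgradient (-1,-1)
        da = x / r - 1.0                 # d(phi)/da element of the generalized Jacobian
        db = w / r - 1.0                 # d(phi)/db element
        J = np.diag(da) + np.diag(db) @ M
        x = x + np.linalg.solve(J, -phi)
    return x
```

For M = [[2,1],[1,2]], q = (-1,-1), the LCP has the interior solution x = (1/3, 1/3) with Mx + q = 0, which the iteration reaches in a handful of steps from x0 = (1, 1).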

  7. Effect of large weight reductions on measured and estimated kidney function

    DEFF Research Database (Denmark)

    von Scholten, Bernt Johan; Persson, Frederik; Svane, Maria S

    2017-01-01

    GFR (creatinine-based equations), whereas measured GFR (mGFR) and cystatin C-based eGFR would be unaffected if adjusted for body surface area. METHODS: Prospective, intervention study including 19 patients. All attended a baseline visit before gastric bypass surgery followed by a visit six months post-surgery. m...... for body surface area was unchanged. Estimates of GFR based on creatinine overestimate renal function likely due to changes in muscle mass, whereas cystatin C based estimates are unaffected. TRIAL REGISTRATION: ClinicalTrials.gov, NCT02138565 . Date of registration: March 24, 2014....

  8. Source Estimation for the Damped Wave Equation Using Modulating Functions Method: Application to the Estimation of the Cerebral Blood Flow

    KAUST Repository

    Asiri, Sharefa M.

    2017-10-19

    In this paper, a method based on modulating functions is proposed to estimate the Cerebral Blood Flow (CBF). The problem is written as an input estimation problem for a damped wave equation, which is used to model the spatiotemporal variations of blood mass density. The method is described and its performance is assessed through numerical simulations. The robustness of the method in the presence of noise is also studied.

  9. Three-dimensional habitat structure and landscape genetics: a step forward in estimating functional connectivity.

    Science.gov (United States)

    Milanesi, P; Holderegger, R; Bollmann, K; Gugerli, F; Zellweger, F

    2017-02-01

    Estimating connectivity among fragmented habitat patches is crucial for evaluating the functionality of ecological networks. However, current estimates of landscape resistance to animal movement and dispersal lack landscape-level data on local habitat structure. Here, we used a landscape genetics approach to show that high-fidelity habitat structure maps derived from Light Detection and Ranging (LiDAR) data critically improve functional connectivity estimates compared to conventional land cover data. We related pairwise genetic distances of 128 Capercaillie (Tetrao urogallus) genotypes to least-cost path distances at multiple scales derived from land cover data. Resulting β values of linear mixed effects models ranged from 0.372 to 0.495, while those derived from LiDAR ranged from 0.558 to 0.758. The identification and conservation of functional ecological networks suffering from habitat fragmentation and homogenization will thus benefit from the growing availability of detailed and contiguous data on three-dimensional habitat structure and associated habitat quality. © 2016 by the Ecological Society of America.

  10. Proceedings – Mathematical Sciences | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    Using the fixed point method, we prove the Hyers–Ulam stability of the Cauchy additive functional equation and the quadratic functional equation in matrix normed spaces. pp 413-447. ℎ- Spectral element methods for three dimensional elliptic problems on non-smooth domains, Part-II: Proof of stability theorem.

  11. Dosing of cytotoxic chemotherapy: impact of renal function estimates on dose.

    Science.gov (United States)

    Dooley, M J; Poole, S G; Rischin, D

    2013-11-01

    Oncology clinicians are now routinely provided with an estimated glomerular filtration rate on pathology reports whenever serum creatinine is requested. An assessment of its utility for dose determination of renally excreted drugs, compared with other existing methods, is needed to inform practice. Renal function was determined by [Tc(99m)]DTPA clearance in adult patients presenting for chemotherapy. Renal function was calculated using the 4-variable Modification of Diet in Renal Disease (4v-MDRD), Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI), Cockcroft and Gault (CG), Wright and Martin formulae. Doses for renally excreted cytotoxic drugs, including carboplatin, were calculated. The concordance of the renal function estimates according to the CKD classification with measured [Tc(99m)]DTPA clearance in 455 adults (median age 64.0 years; range 17-87 years) for the 4v-MDRD, CKD-EPI, CG, Martin and Wright formulae was 47.7%, 56.3%, 46.2%, 56.5% and 60.2%, respectively. Concordance for chemotherapy dose for these formulae was 89.0%, 89.5%, 85.1%, 89.9% and 89.9%, respectively. Concordance for carboplatin dose specifically was 66.4%, 71.4%, 64.0%, 73.8% and 73.2%. All bedside formulae provided similar levels of concordance in dosage selection for renally excreted chemotherapy drugs when compared with a direct measure of renal function.
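
Carboplatin is the drug for which the GFR estimate matters most directly, because it is conventionally dosed by the Calvert formula: dose (mg) = target AUC × (GFR + 25). A one-line sketch, with an illustrative AUC target and GFR (the abstract does not state which targets were used):

```python
def calvert_carboplatin_dose(target_auc, gfr_ml_min):
    """Calvert formula: total carboplatin dose in mg for a target AUC
    (mg/mL*min), given GFR in mL/min."""
    return target_auc * (gfr_ml_min + 25.0)
```

For example, a target AUC of 5 with GFR 60 mL/min gives 5 * (60 + 25) = 425 mg, which makes clear why discordant GFR estimates translate directly into discordant doses.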

  12. Optimal replacement time estimation for machines and equipment based on cost function

    OpenAIRE

    J. Šebo; J. Buša; P. Demeč; J. Svetlík

    2013-01-01

    The article deals with a multidisciplinary issue of estimating the optimal replacement time for the machines. Considered categories of machines, for which the optimization method is usable, are of the metallurgical and engineering production. Different models of cost function are considered (both with one and two variables). Parameters of the models were calculated through the least squares method. Models testing show that all are good enough, so for estimation of optimal replacement time is ...

  13. On the Reliability of Source Time Functions Estimated Using Empirical Green's Function Methods

    Science.gov (United States)

    Gallegos, A. C.; Xie, J.; Suarez Salas, L.

    2017-12-01

    The Empirical Green's Function (EGF) method (Hartzell, 1978) has been widely used to extract source time functions (STFs). In this method, seismograms generated by collocated events with different magnitudes are deconvolved. Under a fundamental assumption that the STF of the small event is a delta function, the deconvolved Relative Source Time Function (RSTF) yields the large event's STF. While this assumption can be empirically justified by examination of differences in event size and frequency content of the seismograms, there can be a lack of rigorous justification of the assumption. In practice, a small event might have a finite duration, in which case the RSTF is retrieved and interpreted as the large event's STF with a bias. In this study, we rigorously analyze this bias using synthetic waveforms generated by convolving a realistic Green's function waveform with pairs of finite-duration triangular or parabolic STFs. The RSTFs are found using a time-domain based matrix deconvolution. We find that when the STFs of smaller events are finite, the RSTFs are a series of narrow non-physical spikes. Interpreting these RSTFs as a series of high-frequency source radiations would be very misleading. The only reliable and unambiguous information we can retrieve from these RSTFs is the difference in durations and the moment ratio of the two STFs. We can apply a Tikhonov smoothing to obtain a single-pulse RSTF, but its duration is dependent on the choice of weighting, which may be subjective. We then test the Multi-Channel Deconvolution (MCD) method (Plourde & Bostock, 2017), which assumes that both STFs have finite durations to be solved for. A concern about the MCD method is that the number of unknown parameters is larger, which would tend to make the problem rank-deficient. Because the kernel matrix is dependent on the STFs to be solved for under a positivity constraint, we can only estimate the rank-deficiency with a semi-empirical approach. Based on the results so far, we find that the
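
Time-domain matrix deconvolution with Tikhonov smoothing can be sketched as a regularized least-squares solve: build the convolution matrix of the Green's function and solve (AᵀA + λI)s = Aᵀd. This is a generic illustration, not the authors' code; the positivity constraint used in MCD is omitted, and λ plays the role of the subjective weighting the abstract mentions.

```python
import numpy as np

def tikhonov_deconvolve(green, data, lam):
    """Recover a source time function s such that data ~ conv(green, s),
    via Tikhonov-regularized least squares."""
    n = len(data) - len(green) + 1
    A = np.zeros((len(data), n))
    for j in range(n):                       # column j holds green shifted by j
        A[j:j + len(green), j] = green
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ data)
```

With a tiny λ and noise-free data, the triangular pulse used to build the synthetic data is recovered almost exactly; increasing λ trades fidelity for smoothness.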

  14. Estimations for the Schwinger functions of relativistic quantum field theories

    International Nuclear Information System (INIS)

    Mayer, C.D.

    1981-01-01

    Schwinger functions of a relativistic neutral scalar field, whose underlying test function space is S or D, are estimated by methods of analytic continuation. Concerning the behaviour at coincident points it is shown: the two-point singularity of the n-point Schwinger function of a field theory is dominated by an inverse power of the distance of the two points, modulo a multiplicative constant, if the other n-2 points are sufficiently distant and remain fixed. The power thereby depends only on n. Using additional conditions on the field, the independence of the power from n may be proved. Concerning the behaviour at infinity it is shown: the n-point Schwinger functions of a field theory are globally bounded if the minimal distance of the arguments is positive. The bound depends only on n and the minimal distance of the arguments. (orig.)

  15. Modified Moment, Maximum Likelihood and Percentile Estimators for the Parameters of the Power Function Distribution

    Directory of Open Access Journals (Sweden)

    Azam Zaka

    2014-10-01

    Full Text Available This paper is concerned with the modifications of maximum likelihood, moments and percentile estimators of the two parameter Power function distribution. Sampling behavior of the estimators is indicated by Monte Carlo simulation. For some combinations of parameter values, some of the modified estimators appear better than the traditional maximum likelihood, moments and percentile estimators with respect to bias, mean square error and total deviation.

  16. Estimating crustal thickness and Vp/Vs ratio with joint constraints of receiver function and gravity data

    Science.gov (United States)

    Shi, Lei; Guo, Lianghui; Ma, Yawei; Li, Yonghua; Wang, Weilai

    2018-05-01

    The technique of teleseismic receiver function H-κ stacking is popular for estimating crustal thickness and the Vp/Vs ratio. However, it carries large uncertainty or ambiguity when the Moho multiples in the receiver function are difficult to identify. We present an improved technique to estimate crustal thickness and Vp/Vs ratio with joint constraints from receiver function and gravity data. The complete Bouguer gravity anomalies, composed of the anomalies due to the relief of the Moho interface and the heterogeneous density distribution within the crust, are associated with the crustal thickness, density and Vp/Vs ratio. According to the relationship formulae presented by Lowry and Pérez-Gussinyé, we invert the complete Bouguer gravity anomalies using a common maximum likelihood estimation algorithm to obtain the crustal thickness and Vp/Vs ratio, and then use them to constrain the receiver function H-κ stacking result. We verified the improved technique on three synthetic crustal models and evaluated the influence of the selected parameters; the results demonstrate that the new technique reduces the ambiguity and enhances the accuracy of the estimates. A real-data test at two stations in the NE margin of the Tibetan Plateau showed that the improved technique provides reliable estimates of crustal thickness and Vp/Vs ratio.
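
The core H-κ stacking step is a grid search: for each trial thickness H and ratio κ, sum the receiver-function amplitudes at the predicted arrival times of the Ps conversion and its multiples. The sketch below is a deliberately simplified version at vertical incidence (ray parameter ≈ 0); the velocity VP, sample interval DT, and phase weights are illustrative assumptions, not the paper's values.

```python
import numpy as np

VP = 6.3    # assumed average crustal P velocity (km/s)
DT = 0.05   # assumed sample interval of the receiver function (s)

def phase_times(h, kappa, vp=VP):
    """Moho phase delay times at vertical incidence (ray parameter ~ 0)."""
    t_ps = h * (kappa - 1.0) / vp          # Ps conversion
    t_ppps = h * (kappa + 1.0) / vp        # PpPs multiple
    t_psps = 2.0 * h * kappa / vp          # PpSs + PsPs multiple (negative polarity)
    return t_ps, t_ppps, t_psps

def hk_stack(rf, hs, kappas, w=(0.6, 0.3, 0.1)):
    """Grid search over (H, kappa): stack rf amplitudes at predicted times."""
    best = (None, None, -np.inf)
    for h in hs:
        for kappa in kappas:
            t1, t2, t3 = phase_times(h, kappa)
            idx = [int(round(t / DT)) for t in (t1, t2, t3)]
            if max(idx) >= len(rf):
                continue
            s = w[0] * rf[idx[0]] + w[1] * rf[idx[1]] - w[2] * rf[idx[2]]
            if s > best[2]:
                best = (h, kappa, s)
    return best[0], best[1]
```

On a synthetic receiver function built as three pulses at the times predicted for H = 40 km, κ = 1.75, the grid search recovers the generating pair; the ambiguity the abstract describes appears when the multiple arrivals are weak or noisy.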

  17. The Navier-Stokes equations an elementary functional analytic approach

    CERN Document Server

    Sohr, Hermann

    2001-01-01

    The primary objective of this monograph is to develop an elementary and self-contained approach to the mathematical theory of a viscous, incompressible fluid in a domain of the Euclidean space, described by the equations of Navier-Stokes. Moreover, the theory is presented for completely general domains, in particular, for arbitrary unbounded, nonsmooth domains. Therefore, restriction was necessary to space dimensions two and three, which are also the most significant from a physical point of view. For mathematical generality, however, the linearized theory is expounded for general dimensions higher than one. Although the functional analytic approach developed here is, in principle, known to specialists, the present book fills a gap in the literature providing a systematic treatment of a subject that has been documented until now only in fragments. The book is mainly directed to students familiar with basic tools in Hilbert and Banach spaces. However, for the readers’ convenience, some fundamental properties...

  18. A time-frequency analysis method to obtain stable estimates of magnetotelluric response function based on Hilbert-Huang transform

    Science.gov (United States)

    Cai, Jianhua

    2017-05-01

    The time-frequency analysis method represents a signal as a function of time and frequency, and is considered a powerful tool for handling arbitrary non-stationary time series through instantaneous frequency and instantaneous amplitude. It thus offers a possible alternative for the analysis of the non-stationary magnetotelluric (MT) signal. Based on the Hilbert-Huang transform (HHT), a time-frequency analysis method is proposed to obtain stable estimates of the magnetotelluric response function. In contrast to conventional methods, the response function estimation is performed in the time-frequency domain using instantaneous spectra rather than in the frequency domain, which allows the response parameter content to be imaged as a function of time and frequency. The theory of the method is presented, and the mathematical model and calculation procedure used to estimate the response function from the HHT time-frequency spectrum are discussed. To evaluate the results, response function estimates are compared with estimates from a standard MT data processing method based on the Fourier transform. All results show that apparent resistivities and phases calculated by the HHT time-frequency method are generally more stable and reliable than those determined from simple Fourier analysis. The proposed method overcomes the drawbacks of traditional Fourier methods, and the resulting parameters minimise the estimation bias caused by the non-stationary characteristics of MT data.

  19. An Estimation of the Gamma-Ray Burst Afterglow Apparent Optical Brightness Distribution Function

    Science.gov (United States)

    Akerlof, Carl W.; Swan, Heather F.

    2007-12-01

    By using recent publicly available observational data obtained in conjunction with the NASA Swift gamma-ray burst (GRB) mission and a novel data analysis technique, we have been able to make some rough estimates of the GRB afterglow apparent optical brightness distribution function. The results suggest that 71% of all burst afterglows have optical magnitudes with mR below a limiting value, and give a strong indication that the apparent optical magnitude distribution function peaks at mR ~ 19.5. Such estimates may prove useful in guiding future plans to improve GRB counterpart observation programs. The employed numerical techniques might find application in a variety of other data analysis problems in which the intrinsic distributions must be inferred from a heterogeneous sample.

  20. Time variation of the electromagnetic transfer function of the earth estimated by using wavelet transform.

    Science.gov (United States)

    Suto, Noriko; Harada, Makoto; Izutsu, Jun; Nagao, Toshiyasu

    2006-07-01

    In order to accurately estimate the geomagnetic transfer functions in the area of the volcano Mt. Iwate (IWT), we applied the interstation transfer function (ISTF) method to the three-component geomagnetic field data observed at Mt. Iwate station (IWT), using the Kakioka Magnetic Observatory, JMA (KAK) as remote reference station. Instead of the conventional Fourier transform, in which temporary transient noises badly degrade the accuracy of long term properties, continuous wavelet transform has been used. The accuracy of the results was as high as that of robust estimations of transfer functions obtained by the Fourier transform method. This would provide us with possibilities for routinely monitoring the transfer functions, without sophisticated statistical procedures, to detect changes in the underground electrical conductivity structure.

  1. Primal Interior Point Method for Minimization of Generalized Minimax Functions

    Czech Academy of Sciences Publication Activity Database

    Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan

    2010-01-01

    Roč. 46, č. 4 (2010), s. 697-721 ISSN 0023-5954 R&D Projects: GA ČR GA201/09/1957 Institutional research plan: CEZ:AV0Z10300504 Keywords : unconstrained optimization * large-scale optimization * nonsmooth optimization * generalized minimax optimization * interior-point methods * modified Newton methods * variable metric methods * global convergence * computational experiments Subject RIV: BA - General Mathematics Impact factor: 0.461, year: 2010 http://dml.cz/handle/10338.dmlcz/140779

  2. Bayesian Estimation Of Shift Point In Poisson Model Under Asymmetric Loss Functions

    Directory of Open Access Journals (Sweden)

    uma srivastava

    2012-01-01

    Full Text Available The paper deals with estimating a shift point occurring in a sequence of independent observations from a Poisson model in statistical process control, i.e. the point m in the sequence after which the process mean changes. The Bayes estimators of the shift point m and of the before- and after-shift process means are derived for symmetric and asymmetric loss functions under informative and non-informative priors. A sensitivity analysis of the Bayes estimators is carried out by simulation, with numerical comparisons in R. The results show the effectiveness of shift-point estimation in a Poisson sequence.
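
The shift-point posterior has a convenient closed form once the two Poisson rates are integrated out under conjugate Gamma priors. The sketch below assumes Gamma(a, b) priors on both rates and a uniform prior on m; these choices, and the values a = b = 0.5, are illustrative, not the paper's.

```python
import math
import numpy as np

def shift_point_posterior(counts, a=0.5, b=0.5):
    """Log-posterior (up to a constant) of the shift index m, where the
    first m counts are Poisson(l1) and the rest Poisson(l2), with the
    rates integrated out under Gamma(a, b) priors and a uniform prior on m."""
    counts = np.asarray(counts)
    n = len(counts)
    logpost = np.full(n + 1, -np.inf)
    for m in range(1, n):                     # shift strictly inside the sequence
        s1, s2 = counts[:m].sum(), counts[m:].sum()
        logpost[m] = (math.lgamma(a + s1) - (a + s1) * math.log(b + m)
                      + math.lgamma(a + s2) - (a + s2) * math.log(b + n - m))
    return logpost
```

The factorial terms of the Poisson likelihood are constant in m and so cancel; the MAP estimate of the shift point is simply the argmax of this array.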

  3. Assessment of various parameters in the estimation of differential renal function using technetium-99m mercaptoacetyltriglycine

    International Nuclear Information System (INIS)

    Lythgoe, M.F.; Gordon, I.; Khader, Z.; Smith, T.; Anderson, P.J.

    1999-01-01

    Differential renal function (DRF) is an important parameter that should be assessed from virtually every dynamic renogram. With the introduction of technetium-99m mercaptoacetyltriglycine (99mTc-MAG3), a tracer with a high renal extraction, the estimation of DRF might hopefully become accurate and reproducible both between observers in the same institution and also between institutions. The aim of this study was to assess the effect of different parameters on the estimation of DRF. To this end we investigated two groups of children: group A, comprising 35 children with a single kidney (27 of whom had poor renal function), and group B, comprising 20 children with two kidneys and normal global function who also had an associated 99mTc-dimercaptosuccinic acid (99mTc-DMSA) scan. The variables assessed for their effect on the estimation of DRF were: different operators, the choice of renal regions of interest (ROIs), the applied background subtraction, and six different techniques for analysis of the renogram. The six techniques were based on: linear regression of the slopes in the Rutland-Patlak plot, matrix deconvolution, differential method, integral method, linear regression of the slope of the renograms, and the area under the curve of the renogram. The estimation of DRF was less dependent upon both observer and method in patients with two normally functioning kidneys than in patients with a single kidney. The inter-observer comparison among children in either group was not dependent on either ROI or background subtraction. However, in patients with poor renal function the method of choice for the estimation of DRF was dependent on background subtraction, though not ROI. In children with two kidneys and normal renal function, the estimation of DRF from the 24 techniques gave similar results. Methods that produced DRF values closest to expected results, from either group of children, were the Rutland-Patlak plot and matrix deconvolution methods. (orig.)

  4. Distributed leader-follower flocking control for multi-agent dynamical systems with time-varying velocities

    NARCIS (Netherlands)

    Yu, Wenwu; Chen, Guanrong; Cao, Ming

    Using tools from algebraic graph theory and nonsmooth analysis in combination with ideas of collective potential functions, velocity consensus and navigation feedback, a distributed leader-follower flocking algorithm for multi-agent dynamical systems with time-varying velocities is developed where

  5. Estimating functional liver reserve following hepatic irradiation: Adaptive normal tissue response models

    International Nuclear Information System (INIS)

    Stenmark, Matthew H.; Cao, Yue; Wang, Hesheng; Jackson, Andrew; Ben-Josef, Edgar; Ten Haken, Randall K.; Lawrence, Theodore S.; Feng, Mary

    2014-01-01

    Purpose: To estimate the limit of functional liver reserve for safe application of hepatic irradiation using changes in indocyanine green, an established assay of liver function. Materials and methods: From 2005 to 2011, 60 patients undergoing hepatic irradiation were enrolled in a prospective study assessing the plasma retention fraction of indocyanine green at 15 min (ICG-R15) prior to, during (at 60% of planned dose), and after radiotherapy (RT). The limit of functional liver reserve was estimated from the damage fraction of functional liver (DFL) post-RT [1 − (ICG-R15(pre-RT) / ICG-R15(post-RT))] where no toxicity was observed using a beta distribution function. Results: Of 48 evaluable patients, 3 (6%) developed RILD, all within 2.5 months of completing RT. The mean ICG-R15 for non-RILD patients pre-RT, during RT and 1 month post-RT was 20.3% (SE 2.6), 22.0% (3.0), and 27.5% (2.8), and for RILD patients was 6.3% (4.3), 10.8% (2.7), and 47.6% (8.8). RILD was observed at post-RT damage fractions of ≥ 78%. Both DFL assessed by during-RT ICG and MLD predicted for DFL post-RT (p < 0.0001). Limiting the post-RT DFL to 50% predicted a 99% probability of a true complication rate < 15%. Conclusion: The DFL as assessed by changes in ICG during treatment serves as an early indicator of a patient's tolerance to hepatic irradiation.
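
The damage fraction defined in the abstract is simple arithmetic on the two retention fractions; a one-line sketch, checked against the RILD-group means reported above (pre-RT 6.3%, post-RT 47.6%):

```python
def damage_fraction(icg_r15_pre, icg_r15_post):
    """Post-RT damage fraction of functional liver:
    DFL = 1 - (ICG-R15 pre-RT / ICG-R15 post-RT)."""
    return 1.0 - icg_r15_pre / icg_r15_post
```

For the RILD group means this gives 1 - 6.3/47.6 ≈ 0.87, i.e. a damage fraction within the ≥ 78% range at which RILD was observed.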

  6. Estimation of demand function on natural gas and study of demand analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Y.D. [Korea Energy Economics Institute, Euiwang (Korea, Republic of)

    1998-04-01

    The demand function for natural gas is estimated with several methods and analyzed by usage. Since demand for natural gas, a large share of which is for heating, is closely related to temperature, the inter-seasonal trends of price and income elasticity are estimated taking temperature and economic conditions into account. The per-usage response of natural gas demand to changes in price and income is also estimated. The response of gas demand to changes in price and income was found to occur through changes in the number of users in the long term. As for unit consumption, only industrial use shows a long-term response to price. Since the gas price barely responds to changes in the exchange rate, the price-setting mechanism apparently does not reflect import conditions, such as the exchange rate, in a timely manner. 16 refs., 12 figs., 13 tabs.

  7. Application of a disease-specific mapping function to estimate utility gains with effective treatment of schizophrenia

    Directory of Open Access Journals (Sweden)

    Rupnow Marcia FT

    2005-09-01

    Full Text Available Abstract Background Most tools for estimating utilities use clinical trial data from general health status models, such as the 36-Item Short-Form Health Survey (SF-36). A disease-specific model may be more appropriate. The objective of this study was to apply a disease-specific utility mapping function for schizophrenia to data from a large, 1-year, open-label study of long-acting risperidone and to compare its performance with an SF-36-based utility mapping function. Methods Patients with schizophrenia or schizoaffective disorder by DSM-IV criteria received 25, 50, or 75 mg long-acting risperidone every 2 weeks for 12 months. The Positive and Negative Syndrome Scale (PANSS) and SF-36 were used to assess efficacy and health-related quality of life. Movement disorder severity was measured using the Extrapyramidal Symptom Rating Scale (ESRS); data concerning other common adverse effects (orthostatic hypotension, weight gain) were collected. Transforms were applied to estimate utilities. Results A total of 474 patients completed the study. Long-acting risperidone treatment was associated with a utility gain of 0.051 using the disease-specific function. The estimated gain using an SF-36-based mapping function was smaller: 0.0285. Estimates of gains were only weakly correlated (r = 0.2). Because of differences in scaling and variance, the requisite sample size for a randomized trial to confirm observed effects is much smaller for the disease-specific mapping function (156 versus 672 total subjects). Conclusion Application of a disease-specific mapping function was feasible. Differences in scaling and precision suggest the clinically based mapping function has greater power than the SF-36-based measure to detect differences in utility.

  8. Nonparametric adaptive estimation of linear functionals for low frequency observed Lévy processes

    OpenAIRE

    Kappus, Johanna

    2012-01-01

    For a Lévy process X having finite variation on compact sets and finite first moments, µ( dx) = xv( dx) is a finite signed measure which completely describes the jump dynamics. We construct kernel estimators for linear functionals of µ and provide rates of convergence under regularity assumptions. Moreover, we consider adaptive estimation via model selection and propose a new strategy for the data driven choice of the smoothing parameter.

  9. The Impact of Clinical and Cognitive Variables on Social Functioning in Parkinson's Disease: Patient versus Examiner Estimates

    Directory of Open Access Journals (Sweden)

    Patrick McNamara

    2010-01-01

    Results. Patients' estimates of their own social functioning were not significantly different from examiners' estimates. Among the clinical variables examined, depression showed the strongest association with social functioning in PD on both the patient and the examiner version of the Social Adaptation Self-Evaluation Scale. Conclusions. PD patients appear to be well aware of their social strengths and weaknesses. Depression and motor symptom severity are significant predictors of both self- and examiner-reported social functioning in patients with PD. Assessment and treatment of depression in patients with PD may improve social functioning and overall quality of life.

  10. Clinical use of estimated glomerular filtration rate for evaluation of kidney function

    DEFF Research Database (Denmark)

    Broberg, Bo; Lindhardt, Morten; Rossing, Peter

    2013-01-01

    is a significant predictor for cardiovascular disease and may along with classical cardiovascular risk factors add useful information to risk estimation. Several cautions need to be taken into account, e.g. rapid changes in kidney function, dialysis, high age, obesity, underweight and diverging and unanticipated...

  11. Singular boundary perturbations of distributed systems

    DEFF Research Database (Denmark)

    Pedersen, Michael

    1990-01-01

    Some problems arising in real-life control applications are addressed--namely, problems concerning non-smooth control inputs on the boundary of the spatial domain. The classical variational approach is extended, and sufficient conditions are given for the solutions to be continuous functions of time...

  12. Ep for efficient stochastic control with obstacles

    NARCIS (Netherlands)

    Mensink, T.; Verbeek, J.; Kappen, H.J.

    2010-01-01

    Abstract. We address the problem of continuous stochastic optimal control in the presence of hard obstacles. Due to the non-smooth character of the obstacles, the traditional approach using dynamic programming in combination with function approximation tends to fail. We consider a recently

  13. Estimation of the Lagrangian structure function constant C0 from surface-layer wind data

    DEFF Research Database (Denmark)

    Anfossi, D.; Degrazia, G.; Ferrero, E.

    2000-01-01

    Eulerian turbulence observations, made in the surface layer under unstable conditions (z/L < 0) by a sonic anemometer, were used to estimate the Lagrangian structure function constant C(0). Two methods were considered. The first one makes use of a relationship, widely used in the Lagrangian stochastic dispersion models, relating C(0) to the turbulent kinetic energy dissipation rate epsilon, wind velocity variance and Lagrangian decorrelation time. The second one employs a novel equation, connecting C(0) to the constant of the second-order Eulerian structure function. Before estimating C(0...
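A minimal sketch of the first method, assuming the commonly used surface-layer relation T_L = 2·σ_w² / (C0·ε) between the Lagrangian decorrelation time, the vertical velocity variance and the dissipation rate; the input values below are invented for illustration:

```python
def c0_from_surface_layer(sigma_w2, epsilon, t_l):
    """C0 from the commonly assumed relation T_L = 2*sigma_w^2 / (C0*epsilon),
    rearranged as C0 = 2*sigma_w^2 / (epsilon * T_L)."""
    return 2.0 * sigma_w2 / (epsilon * t_l)

# Hypothetical surface-layer values: sigma_w^2 = 0.25 m^2/s^2,
# epsilon = 0.01 m^2/s^3, T_L = 10 s.
print(c0_from_surface_layer(0.25, 0.01, 10.0))  # 5.0
```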

  14. Smooth extrapolation of unknown anatomy via statistical shape models

    Science.gov (United States)

    Grupp, R. B.; Chiang, H.; Otake, Y.; Murphy, R. J.; Gordon, C. R.; Armand, M.; Taylor, R. H.

    2015-03-01

    Several methods to perform extrapolation of unknown anatomy were evaluated. The primary application is to enhance surgical procedures that may use partial medical images or medical images of incomplete anatomy. Le Fort-based, face-jaw-teeth transplant is one such procedure. From CT data of 36 skulls and 21 mandibles separate Statistical Shape Models of the anatomical surfaces were created. Using the Statistical Shape Models, incomplete surfaces were projected to obtain complete surface estimates. The surface estimates exhibit non-zero error in regions where the true surface is known; it is desirable to keep the true surface and seamlessly merge the estimated unknown surface. Existing extrapolation techniques produce non-smooth transitions from the true surface to the estimated surface, resulting in additional error and a less aesthetically pleasing result. The three extrapolation techniques evaluated were: copying and pasting of the surface estimate (non-smooth baseline), a feathering between the patient surface and surface estimate, and an estimate generated via a Thin Plate Spline trained from displacements between the surface estimate and corresponding vertices of the known patient surface. Feathering and Thin Plate Spline approaches both yielded smooth transitions. However, feathering corrupted known vertex values. Leave-one-out analyses were conducted, with 5% to 50% of known anatomy removed from the left-out patient and estimated via the proposed approaches. The Thin Plate Spline approach yielded smaller errors than the other two approaches, with an average vertex error improvement of 1.46 mm and 1.38 mm for the skull and mandible respectively, over the baseline approach.
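The Thin Plate Spline step can be sketched with SciPy's RBF interpolator: train on the displacements between the statistical-shape-model estimate and the known patient vertices, then extrapolate those displacements into the unknown region so the two surfaces merge smoothly. All geometry below is synthetic, not from the study's CT data:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Train a TPS on displacements between the SSM surface estimate and the
# known patient vertices, then warp the estimate in the unknown region.
rng = np.random.default_rng(1)
known_est = rng.uniform(0, 100, (50, 3))    # SSM estimate at known vertices
displacement = 0.05 * known_est + 1.0       # hypothetical estimate-to-true offsets

tps = RBFInterpolator(known_est, displacement, kernel='thin_plate_spline')

unknown_est = rng.uniform(0, 100, (10, 3))  # SSM estimate in the unknown region
corrected = unknown_est + tps(unknown_est)  # smoothly extrapolated surface
print(corrected.shape)  # (10, 3)
```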

  15. An enhanced particle swarm optimization for dynamic economic dispatch problem considering valve-point loading

    Energy Technology Data Exchange (ETDEWEB)

    Sriyanyong, P. [King Mongkut's Univ. of Technology, Bangkok (Thailand). Dept. of Teacher Training in Electrical Engineering]

    2008-07-01

    This paper described the use of an enhanced particle swarm optimization (PSO) model to address the problem of dynamic economic dispatch (DED). A modified heuristic search method was incorporated into the PSO model. Both smooth and non-smooth cost functions were considered. The enhanced PSO model not only utilized the basic PSO algorithm in order to seek the optimal solution for the DED problem, but it also used a modified heuristic method to deal with constraints and increase the possibility of finding a feasible solution. In order to validate the enhanced PSO model, it was used and tested on 10-unit systems considering both smooth and non-smooth cost functions characteristics. The experimental results were also compared to other methods. The proposed technique was found to be better than other approaches. The enhanced PSO model outperformed others with respect to quality, stability and reliability. 23 refs., 1 tab., 8 figs.

  16. Collective estimation of multiple bivariate density functions with application to angular-sampling-based protein loop modeling

    KAUST Repository

    Maadooliat, Mehdi

    2015-10-21

    This paper develops a method for simultaneous estimation of density functions for a collection of populations of protein backbone angle pairs using a data-driven, shared basis that is constructed by bivariate spline functions defined on a triangulation of the bivariate domain. The circular nature of angular data is taken into account by imposing appropriate smoothness constraints across boundaries of the triangles. Maximum penalized likelihood is used to fit the model and an alternating blockwise Newton-type algorithm is developed for computation. A simulation study shows that the collective estimation approach is statistically more efficient than estimating the densities individually. The proposed method was used to estimate neighbor-dependent distributions of protein backbone dihedral angles (i.e., Ramachandran distributions). The estimated distributions were applied to protein loop modeling, one of the most challenging open problems in protein structure prediction, by feeding them into an angular-sampling-based loop structure prediction framework. Our estimated distributions compared favorably to the Ramachandran distributions estimated by fitting a hierarchical Dirichlet process model; and in particular, our distributions showed significant improvements on the hard cases where existing methods do not work well.

  17. Collective estimation of multiple bivariate density functions with application to angular-sampling-based protein loop modeling

    KAUST Repository

    Maadooliat, Mehdi; Zhou, Lan; Najibi, Seyed Morteza; Gao, Xin; Huang, Jianhua Z.

    2015-01-01

    This paper develops a method for simultaneous estimation of density functions for a collection of populations of protein backbone angle pairs using a data-driven, shared basis that is constructed by bivariate spline functions defined on a triangulation of the bivariate domain. The circular nature of angular data is taken into account by imposing appropriate smoothness constraints across boundaries of the triangles. Maximum penalized likelihood is used to fit the model and an alternating blockwise Newton-type algorithm is developed for computation. A simulation study shows that the collective estimation approach is statistically more efficient than estimating the densities individually. The proposed method was used to estimate neighbor-dependent distributions of protein backbone dihedral angles (i.e., Ramachandran distributions). The estimated distributions were applied to protein loop modeling, one of the most challenging open problems in protein structure prediction, by feeding them into an angular-sampling-based loop structure prediction framework. Our estimated distributions compared favorably to the Ramachandran distributions estimated by fitting a hierarchical Dirichlet process model; and in particular, our distributions showed significant improvements on the hard cases where existing methods do not work well.

  18. Bayesian estimation of dynamic matching function for U-V analysis in Japan

    Science.gov (United States)

    Kyo, Koki; Noda, Hideo; Kitagawa, Genshiro

    2012-05-01

    In this paper we propose a Bayesian method for analyzing unemployment dynamics. We derive a Beveridge curve for unemployment and vacancy (U-V) analysis from a Bayesian model based on a labor market matching function. In our framework, the efficiency of matching and the elasticities of new hiring with respect to unemployment and vacancy are regarded as time varying parameters. To construct a flexible model and obtain reasonable estimates in an underdetermined estimation problem, we treat the time varying parameters as random variables and introduce smoothness priors. The model is then described in a state space representation, enabling the parameter estimation to be carried out using Kalman filter and fixed interval smoothing. In such a representation, dynamic features of the cyclic unemployment rate and the structural-frictional unemployment rate can be accurately captured.
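The state-space idea described above can be sketched in a few lines: new hires follow a log-linear matching function whose coefficients are random walks (the smoothness prior), tracked by a Kalman filter. The simulated series, noise levels and constant "true" coefficients below are all hypothetical:

```python
import numpy as np

# log H_t = c_t + a_t*log U_t + b_t*log V_t + noise, with (c_t, a_t, b_t)
# modelled as random walks and estimated recursively by a Kalman filter.
rng = np.random.default_rng(2)
T = 300
logU = rng.normal(2.0, 0.3, T)                 # log unemployment (synthetic)
logV = rng.normal(1.5, 0.3, T)                 # log vacancies (synthetic)
true = np.array([0.5, 0.6, 0.4])               # constant truth, for checking
logH = true[0] + true[1]*logU + true[2]*logV + rng.normal(0, 0.01, T)

x = np.zeros(3)                                # state estimate (c, a, b)
P = np.eye(3) * 10.0                           # diffuse initial covariance
Q = np.eye(3) * 1e-6                           # random-walk (smoothness) variance
R = 0.01**2                                    # observation variance
for t in range(T):
    P = P + Q                                  # predict step (random walk)
    H = np.array([1.0, logU[t], logV[t]])      # time-varying design vector
    S = H @ P @ H + R
    K = P @ H / S                              # Kalman gain
    x = x + K * (logH[t] - H @ x)              # measurement update
    P = P - np.outer(K, H @ P)

print(np.round(x, 2))  # close to the true coefficients [0.5, 0.6, 0.4]
```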

  19. Estimation of bone Calcium-to-Phosphorous mass ratio using dual-energy nonlinear polynomial functions

    International Nuclear Information System (INIS)

    Sotiropoulou, P; Koukou, V; Martini, N; Nikiforidis, G; Michail, C; Kandarakis, I; Fountos, G; Kounadi, E

    2015-01-01

    In this study an analytical approximation of dual-energy inverse functions is presented for the estimation of the calcium-to-phosphorous (Ca/P) mass ratio, which is a crucial parameter in bone health. Bone quality could be examined by the X-ray dual-energy method (XDEM), in terms of bone tissue material properties. Low- and high-energy log-intensity measurements were combined by using a nonlinear function, to cancel out the soft tissue structures and generate the dual-energy bone Ca/P mass ratio. The dual-energy simulated data were obtained using variable Ca and PO4 thicknesses on a fixed total tissue thickness. The XDEM simulations were based on a bone phantom. Inverse fitting functions with least-squares estimation were used to obtain the fitting coefficients and to calculate the thickness of each material. The examined inverse mapping functions were linear, quadratic, and cubic. For every thickness, the nonlinear quadratic function provided the optimal fitting accuracy while requiring relatively few terms. The dual-energy method simulated in this work could be used to quantify the bone Ca/P mass ratio with photon-counting detectors. (paper)
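The quadratic inverse mapping can be sketched as an ordinary least-squares fit of a material thickness against a second-degree polynomial in the low- and high-energy log-intensities. The "measurements" below are a noiseless synthetic stand-in, not the study's phantom data:

```python
import numpy as np

# Fit a thickness as a quadratic polynomial in the low (L) and high (H)
# energy log-intensity measurements, with coefficients from least squares.
rng = np.random.default_rng(3)
L = rng.uniform(0.5, 2.0, 100)               # low-energy log-intensity
H = rng.uniform(0.5, 2.0, 100)               # high-energy log-intensity
t_ca = 0.3*L - 0.1*H + 0.05*L**2 + 0.02*L*H  # hypothetical Ca thickness

# quadratic basis: 1, L, H, L^2, L*H, H^2
X = np.column_stack([np.ones_like(L), L, H, L**2, L*H, H**2])
coef, res, rank, sv = np.linalg.lstsq(X, t_ca, rcond=None)

t_fit = X @ coef
print(np.max(np.abs(t_fit - t_ca)))  # essentially zero for this noiseless example
```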

  20. Case Study: On Objective Functions for the Peak Flow Calibration and for the Representative Parameter Estimation of the Basin

    Directory of Open Access Journals (Sweden)

    Jungwook Kim

    2018-05-01

    Full Text Available The objective function is usually used for verification of the optimization process between observed and simulated flows for the parameter estimation of a rainfall–runoff model. However, it does not focus on peak flow or on a representative parameter for the various rain storm events of the basin; rather, it estimates the optimal parameters by minimizing the overall error between observed and simulated flows. Therefore, the aim of this study is to suggest objective functions that can fit peak flow in the hydrograph and estimate the representative parameter of the basin for the events. The Streamflow Synthesis And Reservoir Regulation (SSARR) model was employed to perform flood runoff simulation for the Mihocheon stream basin in Geum River, Korea. Optimization was conducted using three calibration methods: genetic algorithm, pattern search, and the Shuffled Complex Evolution method developed at the University of Arizona (SCE-UA). Two objective functions suggested in this study for peak flow optimization, the Sum of Squared Residuals (SSR) and the Weighted Sum of Squared Residuals (WSSR), were applied. Since the parameters estimated using a single rain storm event do not represent the parameters for various rain storms in the basin, we used the representative objective function that minimizes the sum of the objective functions of the events. Six rain storm events were used for the parameter estimation: four events for calibration and the other two for validation; then, the results by SSR and WSSR were compared. Flow runoff simulation was carried out based on the proposed objective functions, and the objective function of WSSR was found to be more useful than that of SSR in the simulation of peak flow runoff. Representative parameters that minimize the objective function for each of the four rain storm events were estimated. The calibrated observed and simulated flow runoff hydrographs obtained from applying the estimated representative
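The two residual-based objectives can be written down directly. The peak-weighting scheme below (observed flow normalized by its maximum) is one plausible choice for illustration, not necessarily the exact weighting used in the study:

```python
import numpy as np

# SSR sums squared residuals equally; a weighted variant up-weights
# residuals near the peak so the calibration fits peak flow better.
def ssr(obs, sim):
    return float(np.sum((obs - sim)**2))

def wssr(obs, sim):
    w = obs / obs.max()            # larger weight at higher (peak) flows
    return float(np.sum(w * (obs - sim)**2))

obs = np.array([10., 50., 200., 80., 20.])   # hypothetical hydrograph
sim = np.array([12., 45., 180., 85., 18.])

print(ssr(obs, sim), wssr(obs, sim))  # 458.0 and 416.85
```

With this weighting, the same residual at the 200 m³/s peak contributes its full squared value, while residuals on the recession limb are discounted.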

  1. Estimation of the input function in dynamic positron emission tomography applied to fluorodeoxyglucose

    International Nuclear Information System (INIS)

    Jouvie, Camille

    2013-01-01

    Positron Emission Tomography (PET) is a method of functional imaging, used in particular for drug development and tumor imaging. In PET, the estimation of the arterial plasmatic activity concentration of the non-metabolized compound (the 'input function') is necessary for the extraction of the pharmacokinetic parameters. These parameters enable the quantification of the compound dynamics in the tissues. This PhD thesis contributes to the study of the input function by the development of a minimally invasive method to estimate it. This method uses the PET image and a few blood samples. In this work, the example of the FDG tracer is chosen. The proposed method relies on compartmental modeling: it deconvolves the three-compartment model. The originality of the method consists in using a large number of regions of interest (ROIs), a large number of sets of three ROIs, and an iterative process. To validate the method, simulations of PET images of increasing complexity have been performed, from a simple image simulated with an analytic simulator to a complex image simulated with a Monte-Carlo simulator. After simulation of the acquisition, reconstruction and corrections, the images were segmented (through segmentation of an MRI image and registration between the PET and MRI images) and corrected for partial volume effect by a variant of Rousset's method, to obtain the kinetics in the ROIs, which are the input data of the estimation method. The evaluation of the method on simulated and real data is presented, as well as a study of the method's robustness to different error sources, for example in the segmentation, in the registration or in the activity of the used blood samples. (author) [fr

  2. Comparison of density estimators. [Estimation of probability density functions

    Energy Technology Data Exchange (ETDEWEB)

    Kao, S.; Monahan, J.F.

    1977-09-01

    Recent work in the field of probability density estimation has included the introduction of some new methods, such as the polynomial and spline methods and the nearest neighbor method, and the study of asymptotic properties in depth. This earlier work is summarized here. In addition, the computational complexity of the various algorithms is analyzed, as are some simulations. The object is to compare the performance of the various methods in small samples and their sensitivity to change in their parameters, and to attempt to discover at what point a sample is so small that density estimation can no longer be worthwhile. (RWR)
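As a toy instance of such a comparison, the sketch below scores a histogram and a Gaussian kernel estimate against the true standard normal density on a small sample. Sample size, bin count and grid are arbitrary choices, not taken from the summarized study:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Compare two density estimators on the same sample: a histogram and a
# Gaussian kernel estimate, scored by mean squared error on a grid.
rng = np.random.default_rng(4)
x = rng.normal(0.0, 1.0, 200)

grid = np.linspace(-3, 3, 61)
true = np.exp(-grid**2 / 2) / np.sqrt(2*np.pi)   # standard normal density

hist, edges = np.histogram(x, bins=15, range=(-3, 3), density=True)
# map each grid point to its histogram bin value
hist_on_grid = hist[np.clip(np.searchsorted(edges, grid, side='right') - 1, 0, 14)]
kde_on_grid = gaussian_kde(x)(grid)

print(np.mean((hist_on_grid - true)**2), np.mean((kde_on_grid - true)**2))
```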

  3. Modulating functions method for parameters estimation in the fifth order KdV equation

    KAUST Repository

    Asiri, Sharefa M.; Liu, Da-Yan; Laleg-Kirati, Taous-Meriem

    2017-01-01

    In this work, the modulating functions method is proposed for estimating coefficients in higher-order nonlinear partial differential equation which is the fifth order Kortewegde Vries (KdV) equation. The proposed method transforms the problem into a

  4. Land-use change and carbon sinks: Econometric estimation of the carbon sequestration supply function

    Energy Technology Data Exchange (ETDEWEB)

    Lubowski, Ruben N.; Plantinga, Andrew J.; Stavins, Robert N.

    2001-01-01

    Increased attention by policy makers to the threat of global climate change has brought with it considerable interest in the possibility of encouraging the expansion of forest area as a means of sequestering carbon dioxide. The marginal costs of carbon sequestration or, equivalently, the carbon sequestration supply function will determine the ultimate effects and desirability of policies aimed at enhancing carbon uptake. In particular, marginal sequestration costs are the critical statistic for identifying a cost-effective policy mix to mitigate net carbon dioxide emissions. We develop a framework for conducting an econometric analysis of land use for the forty-eight contiguous United States and employing it to estimate the carbon sequestration supply function. By estimating the opportunity costs of land on the basis of econometric evidence of landowners' actual behavior, we aim to circumvent many of the shortcomings of previous sequestration cost assessments. By conducting the first nationwide econometric estimation of sequestration costs, endogenizing prices for land-based commodities, and estimating land-use transition probabilities in a framework that explicitly considers the range of land-use alternatives, we hope to provide better estimates eventually of the true costs of large-scale carbon sequestration efforts. In this way, we seek to add to understanding of the costs and potential of this strategy for addressing the threat of global climate change.

  5. Spectrum response estimation for deep-water floating platforms via retardation function representation

    Science.gov (United States)

    Liu, Fushun; Liu, Chengcheng; Chen, Jiefeng; Wang, Bin

    2017-08-01

    The key concept of spectrum response estimation with commercial software, such as the SESAM software tool, typically includes two main steps: finding a suitable loading spectrum and computing the response amplitude operators (RAOs) subjected to a frequency-specified wave component. In this paper, we propose a nontraditional spectrum response estimation method that uses a numerical representation of the retardation functions. Based on estimated added mass and damping matrices of the structure, we decompose and replace the convolution terms with a series of poles and corresponding residues in the Laplace domain. Then, we estimate the power density corresponding to each frequency component using the improved periodogram method. The advantage of this approach is that the frequency-dependent motion equations in the time domain can be transformed into the Laplace domain without requiring Laplace-domain expressions for the added mass and damping. To validate the proposed method, we use a numerical semi-submerged pontoon from the SESAM. The numerical results show that the responses of the proposed method match well with those obtained from the traditional method. Furthermore, the estimated spectrum also matches well, which indicates its potential application to deep-water floating structures.
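The power-density step mentioned above rests on periodogram estimation; the plain SciPy periodogram (rather than the paper's improved variant) is sketched here on a hypothetical narrow-band response record:

```python
import numpy as np
from scipy.signal import periodogram

# Estimate the power spectral density of a (synthetic) heave response:
# a 0.1 Hz wave-frequency component buried in measurement noise.
fs = 10.0                                  # sampling frequency, Hz
t = np.arange(0, 600, 1/fs)
rng = np.random.default_rng(5)
response = np.sin(2*np.pi*0.1*t) + 0.1*rng.standard_normal(t.size)

f, pxx = periodogram(response, fs=fs)      # one-sided PSD estimate
print(f[np.argmax(pxx)])                   # peak at the 0.1 Hz wave frequency
```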

  6. Estimates of the integral modulus of continuity of functions with rarely changing Fourier coefficients

    International Nuclear Information System (INIS)

    Telyakovskii, S A

    2002-01-01

    The functions under consideration are those satisfying the condition Δa_i = Δb_i = 0 for all i ≠ n_j, where {n_j} is a lacunary sequence. An asymptotic estimate of the rate of decrease of the modulus of continuity in the L-metric of such functions in terms of their Fourier coefficients is obtained.

  7. Diversity-interaction modeling: estimating contributions of species identities and interactions to ecosystem function

    DEFF Research Database (Denmark)

    Kirwan, L; Connolly, J; Finn, J A

    2009-01-01

    We develop a modeling framework that estimates the effects of species identity and diversity on ecosystem function and permits prediction of the diversity-function relationship across different types of community composition. Rather than just measure an overall effect of diversity, we separately ... to the roles of evenness, functional groups, and functional redundancy. These more parsimonious descriptions can be especially useful in identifying general diversity-function relationships in communities with large numbers of species. These models describe community-level performance and thus do not require separate measurement of the performance of individual species. This flexible modeling approach can be tailored to test many hypotheses in biodiversity research and can suggest the interaction mechanisms that may be acting. We provide an example of the application of the modeling framework ...

  8. Absolute Monotonicity of Functions Related To Estimates of First Eigenvalue of Laplace Operator on Riemannian Manifolds

    Directory of Open Access Journals (Sweden)

    Feng Qi

    2014-10-01

    Full Text Available The authors find the absolute monotonicity and complete monotonicity of some functions involving trigonometric functions and related to estimates of the lower bounds of the first eigenvalue of the Laplace operator on Riemannian manifolds.

  9. A Gaussian mixture model based cost function for parameter estimation of chaotic biological systems

    Science.gov (United States)

    Shekofteh, Yasser; Jafari, Sajad; Sprott, Julien Clinton; Hashemi Golpayegani, S. Mohammad Reza; Almasganj, Farshad

    2015-02-01

    As we know, many biological systems such as neurons or the heart can exhibit chaotic behavior. Conventional methods for parameter estimation in models of these systems have some limitations caused by sensitivity to initial conditions. In this paper, a novel cost function is proposed to overcome those limitations by building a statistical model on the distribution of the real system attractor in state space. This cost function is defined by the use of a likelihood score in a Gaussian mixture model (GMM) which is fitted to the observed attractor generated by the real system. Using that learned GMM, a similarity score can be defined by the computed likelihood score of the model time series. We have applied the proposed method to the parameter estimation of two important biological systems, a neuron and a cardiac pacemaker, which show chaotic behavior. Some simulated experiments are given to verify the usefulness of the proposed approach in clean and noisy conditions. The results show the adequacy of the proposed cost function.
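The GMM-based cost can be sketched with scikit-learn: fit a mixture to points sampled from the observed attractor, then score a candidate trajectory by its mean log-likelihood under that model. The Gaussian samples below are a stand-in for real attractor points, and the component count is an arbitrary choice:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Fit a GMM to the observed attractor, then define the cost of a candidate
# trajectory as its negative mean log-likelihood under that model.
rng = np.random.default_rng(6)
observed = rng.normal(0, 1, (1000, 2))   # stand-in for real attractor points

gmm = GaussianMixture(n_components=3, random_state=0).fit(observed)

def cost(trajectory):
    # negative mean log-likelihood; small when the trajectory's
    # state-space distribution matches the observed attractor
    return -gmm.score(trajectory)

similar = rng.normal(0, 1, (500, 2))     # matches the observed distribution
different = rng.normal(5, 1, (500, 2))   # shifted attractor
print(cost(similar) < cost(different))   # True
```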

  10. A method for estimating DMSA SPECT renal function for assessing the effect of percutaneous nephrolithotripsy on the treated pole

    International Nuclear Information System (INIS)

    AGUIAR, Pablo; RUIBAL, Álvaro; CORTÉS, Julia; PÉREZ-FENTES, Daniel; GARCÍA, Camilo; GARRIDO, Miguel

    2016-01-01

    The aim of this study was to develop a method for estimating DMSA SPECT renal function on each renal pole in order to evaluate the effect of percutaneous nephrolithotripsy by focusing the measurements on the region through which the percutaneous approach is performed. Twenty patients undergoing percutaneous nephrolithotripsy between November 2010 and June 2012 were included in this study. Both planar and SPECT DMSA studies were carried out before and after nephrolithotripsy. The effect of percutaneous nephrolithotripsy was evaluated by estimating the total renal function and the regional renal function of each renal pole. Although PCNL has been previously reported as a minimally invasive technique, our results showed regional renal function decreases in the treated pole in most patients, affecting the total renal function in a few of them. A quantification method was used for estimating the SPECT DMSA renal function of the upper, interpolar and lower renal poles. Our results confirmed that total renal function was preserved after nephrolithotripsy. Nevertheless, the proposed method showed that the regional renal function of the treated pole decreased in most patients (15 of 20 patients), allowing us to find differences in patients who had not shown changes in the total renal function obtained from conventional quantification methods. In conclusion, a method for estimating the SPECT DMSA renal function focused on the treated pole enabled us to show for the first time that nephrolithotripsy can lead to renal parenchymal damage restricted to the treated pole.

  11. Feasibility study of the non-invasive estimation of the β+ arterial input function for human PET imaging

    International Nuclear Information System (INIS)

    Hubert, X.

    2009-12-01

    This work deals with the estimation of the concentration of molecules in arterial blood which are labelled with positron-emitting radioelements. This concentration is called the 'β+ arterial input function'. It has to be estimated for a large number of pharmacokinetic analyses. Nowadays it is measured through series of arterial sampling, which is an accurate method but requires a stringent protocol. Complications might occur during arterial blood sampling because this method is invasive (hematomas, nosocomial infections). The objective of this work is to overcome this risk through a non-invasive estimation of the β+ input function with an external detector and a collimator. This allows the reconstruction of blood vessels and thus the discrimination of the arterial signal from signals in other tissues. Collimators in medical imaging are not adapted to estimating the β+ input function because their sensitivity is very low. During this work, they are replaced by coded-aperture collimators, originally developed for astronomy. New methods where coded apertures are used with statistical reconstruction algorithms are presented. Techniques for analytical ray-tracing and for the acceleration of reconstructions are proposed. A new method which decomposes reconstructions on temporal sets and on spatial sets is also developed to efficiently estimate the arterial input function from series of temporal acquisitions. This work demonstrates that the trade-off between sensitivity and spatial resolution in PET can be improved thanks to coded-aperture collimators and statistical reconstruction algorithms; it also provides new tools to implement such improvements. (author)

  12. Time-varying acceleration coefficients IPSO for solving dynamic economic dispatch with non-smooth cost function

    International Nuclear Information System (INIS)

    Mohammadi-ivatloo, Behnam; Rabiee, Abbas; Ehsan, Mehdi

    2012-01-01

    Highlights: ► New approach to solve power system dynamic economic dispatch. ► Valve-point effects and prohibited operating zones considered. ► TVAC-IPSO algorithm proposed. - Abstract: The objective of the dynamic economic dispatch (DED) problem is to schedule power generation for the online units over a given time horizon economically, satisfying various operational constraints. Due to the valve-point effects and prohibited operating zones (POZs) in the generating units' cost functions, the DED problem is a highly non-linear and non-convex optimization problem. The DED problem may be even more complicated if transmission losses and ramp-rate constraints are taken into account. This paper presents a novel heuristic algorithm to solve the DED problem of generating units by employing the time-varying acceleration coefficients iteration particle swarm optimization (TVAC-IPSO) method. The effectiveness of the proposed method is examined and validated by carrying out extensive tests on different test systems, i.e. 5-unit and 10-unit test systems. Valve-point effects, POZs and ramp-rate constraints along with transmission losses are considered. To examine the efficiency of the proposed TVAC-IPSO algorithm, comprehensive studies are carried out comparing the convergence properties of the proposed TVAC-IPSO approach with the conventional PSO algorithm, in addition to other recently reported approaches. Numerical results show that the TVAC-IPSO method has good convergence properties and the generation costs resulting from the proposed method are lower than those of other algorithms reported in recent literature.
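A bare-bones global-best PSO on a single-unit cost with a valve-point term illustrates why such heuristics are used here: the rectified-sine term makes the cost non-smooth and multimodal, so gradient methods struggle. Coefficients, bounds and PSO settings below are all illustrative, not the paper's TVAC-IPSO:

```python
import numpy as np

# Valve-point cost: F(P) = a + b*P + c*P^2 + |e*sin(f*(Pmin - P))|.
# The absolute-value sine ripple is the non-smooth part.
a, b, c, e, f = 550.0, 8.1, 0.00028, 300.0, 0.035
pmin, pmax = 100.0, 500.0

def cost(p):
    return a + b*p + c*p**2 + abs(e*np.sin(f*(pmin - p)))

rng = np.random.default_rng(7)
n, iters = 30, 200
pos = rng.uniform(pmin, pmax, n)               # particle positions (MW)
vel = np.zeros(n)
pbest, pbest_cost = pos.copy(), cost(pos)      # personal bests
g = pbest[np.argmin(pbest_cost)]               # global best
initial_best = pbest_cost.min()

for _ in range(iters):
    r1, r2 = rng.random(n), rng.random(n)
    vel = 0.7*vel + 1.5*r1*(pbest - pos) + 1.5*r2*(g - pos)
    pos = np.clip(pos + vel, pmin, pmax)       # enforce generation limits
    c_now = cost(pos)
    better = c_now < pbest_cost
    pbest[better], pbest_cost[better] = pos[better], c_now[better]
    g = pbest[np.argmin(pbest_cost)]

print(g, pbest_cost.min())  # best dispatch level and its non-smooth cost
```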

  13. Modulating functions method for parameters estimation in the fifth order KdV equation

    KAUST Repository

    Asiri, Sharefa M.

    2017-07-25

    In this work, the modulating functions method is proposed for estimating coefficients in a higher-order nonlinear partial differential equation, namely the fifth-order Korteweg-de Vries (KdV) equation. The proposed method transforms the problem into a system of linear algebraic equations in the unknowns. The statistical properties of the modulating functions solution are described in this paper. In addition, guidelines for choosing the number of modulating functions, which is an important design parameter, are provided. The effectiveness and robustness of the proposed method are shown through numerical simulations in both noise-free and noisy cases.
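    The core idea, turning coefficient estimation into linear algebra by moving derivatives onto known modulating functions via integration by parts, can be illustrated on a first-order toy model y' = a·y rather than the fifth-order KdV equation treated in the paper:

```python
import numpy as np

def trap(f, x):
    """Composite trapezoidal rule."""
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0)

a_true = -1.3
T = 2.0
t = np.linspace(0.0, T, 2001)
y = np.exp(a_true * t)            # signal obeying y' = a*y

# Modulating function vanishing at both endpoints; integration by parts gives
#   int phi * y' dt = -int phi' * y dt = a * int phi * y dt,
# so no derivative of the (possibly noisy) signal y is ever needed.
phi = t**2 * (T - t)**2
dphi = 2.0 * t * (T - t)**2 - 2.0 * t**2 * (T - t)

a_est = -trap(dphi * y, t) / trap(phi * y, t)
```

    With several modulating functions, one such equation per function yields the overdetermined linear system mentioned in the abstract.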

  14. Estimation of the pulmonary input function in dynamic whole body PET

    International Nuclear Information System (INIS)

    Ho-Shon, K.; Buchen, P.; Meikle, S.R.; Fulham, M.J.; University of Sydney, Sydney, NSW

    1998-01-01

    Full text: Dynamic data acquisition in Whole Body PET (WB-PET) has the potential to measure the metabolic rate of glucose (MRGlc) in tissue in-vivo. Estimation of changes in tumoral MRGlc may be a valuable tool in cancer by providing a quantitative index of response to treatment. A necessary requirement is an input function (IF) that can be obtained from arterial, 'arterialised' venous or pulmonary arterial blood in the case of lung tumours. Our aim was to extract the pulmonary input function from dynamic WB-PET data using Principal Component Analysis (PCA), Factor Analysis (FA) and Maximum Entropy (ME) for the evaluation of patients undergoing induction chemotherapy for non-small cell lung cancer. PCA is first used as a method of dimension reduction to obtain a signal space, defined by an optimal metric and a set of vectors. FA is used together with a ME constraint to rotate these vectors to obtain 'physiological' factors. A form of entropy function that does not require normalised data was used. This enabled the introduction of a penalty function based on the blood concentration at the last time point, which provides an additional constraint. Tissue functions from 10 planes through normal lung were simulated. The model was a linear combination of an IF and a tissue time activity curve (TAC). The proportion of the IF to TAC was varied over the planes to simulate the apical to basal gradient in vascularity of the lung, and pseudo-Poisson noise was added. The method accurately extracted the IF at noise levels spanning the expected range for dynamic ROI data acquired with the interplane septa extended. Our method is minimally invasive because it requires only 1 late venous blood sample, and it is applicable to a wide range of tracers since it does not assume a particular compartmental model. Pilot data from 2 patients have been collected, enabling comparison of the estimated IF with direct blood sampling from the pulmonary artery.

  15. Inverse heat transfer analysis of a functionally graded fin to estimate time-dependent base heat flux and temperature distributions

    International Nuclear Information System (INIS)

    Lee, Haw-Long; Chang, Win-Jin; Chen, Wen-Lih; Yang, Yu-Ching

    2012-01-01

    Highlights: ► Time-dependent base heat flux of a functionally graded fin is inversely estimated. ► An inverse algorithm based on the conjugate gradient method and the discrepancy principle is applied. ► The distributions of temperature in the fin are determined as well. ► The influence of measurement error and measurement location upon the precision of the estimated results is also investigated. - Abstract: In this study, an inverse algorithm based on the conjugate gradient method and the discrepancy principle is applied to estimate the unknown time-dependent base heat flux of a functionally graded fin from the knowledge of temperature measurements taken within the fin. Subsequently, the distributions of temperature in the fin can be determined as well. It is assumed that no prior information is available on the functional form of the unknown base heat flux; hence the procedure is classified as function estimation in inverse calculation. The temperature data obtained from the direct problem are used to simulate the temperature measurements. The influence of measurement errors and measurement location upon the precision of the estimated results is also investigated. Results show that an excellent estimation of the time-dependent base heat flux and temperature distributions can be obtained for the test case considered in this study.

  16. Estimation Methods of the Point Spread Function Axial Position: A Comparative Computational Study

    Directory of Open Access Journals (Sweden)

    Javier Eduardo Diaz Zamboni

    2017-01-01

    Full Text Available The precise knowledge of the point spread function is central for any imaging system characterization. In fluorescence microscopy, point spread function (PSF) determination has become a common and obligatory task for each new experimental device, mainly due to its strong dependence on acquisition conditions. During the last decade, algorithms have been developed for the precise calculation of the PSF, which fit model parameters that describe image formation on the microscope to experimental data. In order to contribute to this subject, a comparative study of three parameter estimation methods is reported, namely: I-divergence minimization (MIDIV), maximum likelihood (ML) and non-linear least squares (LSQR). They were applied to the estimation of the point source position on the optical axis, using a physical model. The methods' performance was evaluated under different conditions and noise levels using synthetic images and considering success percentage, iteration number, computation time, accuracy and precision. The main results showed that the axial position estimation requires a high SNR to achieve an acceptable success level, and higher still to be close to the estimation error lower bound. ML achieved a higher success percentage at lower SNR compared to MIDIV and LSQR with an intrinsic noise source. Only the ML and MIDIV methods achieved the error lower bound, but only with data belonging to the optical axis and high SNR. Extrinsic noise sources worsened the success percentage, but, for all methods studied, no difference was found between the noise sources for a given method.
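    As an illustration of least-squares axial position estimation of the kind compared above, the sketch below fits a hypothetical defocus-dependent Gaussian PSF model to a noisy lateral profile by grid search; the model, parameters, and high-SNR setting are assumptions, not the physical model used by the authors:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy model: lateral Gaussian whose width grows with axial defocus z,
#   sigma(z) = sigma0 * sqrt(1 + (z/z0)^2).
SIGMA0, Z0 = 0.25, 0.5                 # hypothetical optics constants (microns)
x = np.linspace(-2.0, 2.0, 401)

def model(z):
    sigma = SIGMA0 * np.sqrt(1.0 + (z / Z0) ** 2)
    return np.exp(-x**2 / (2.0 * sigma**2))

z_true, amp_true = 0.8, 100.0
data = amp_true * model(z_true) + rng.normal(0.0, 1.0, x.size)   # high SNR

def sse(z):
    g = model(z)
    amp = g @ data / (g @ g)           # closed-form optimal amplitude
    r = data - amp * g
    return r @ r

# Least-squares axial position estimate by dense grid search over z >= 0.
zgrid = np.linspace(0.0, 2.0, 2001)
z_est = zgrid[np.argmin([sse(z) for z in zgrid])]
```

    Note the sign ambiguity of z in this symmetric toy model, which is why the search is restricted to non-negative defocus.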

  17. An estimation of the structure function xF3 in neutrino-proton scattering

    International Nuclear Information System (INIS)

    Aoki, Kenzaburo; Arimoto, Shinsuke; Hoshino, Shigetoshi; Itoh, Nobuhisa; Konno, Toshiharu.

    1981-01-01

    The structure function xF₃(x, Q²) in deep-inelastic neutrino-proton scattering was estimated without differentiating with respect to Q² in the evolution function. First, the moment of the non-singlet structure function xF₃(x, Q²) is defined. Then, the kernel function f(z, Q²) is presented. Finally, the expression for the structure function xF₃ is given. The values of the structure function for various Q² are shown in five figures. A peak is seen in each figure, and the highest peak is at about Q² = 14 GeV². The analysis suggests a very small value of xF₃ in the small-Q² region. The kernel function f(x/y, Q²) may be interpreted, in quantum chromodynamics, as the probability of finding a quark with momentum fraction x arising from one with momentum fraction y. (Kato, T.)

  18. Fused Adaptive Lasso for Spatial and Temporal Quantile Function Estimation

    KAUST Repository

    Sun, Ying

    2015-09-01

    Quantile functions are important in characterizing the entire probability distribution of a random variable, especially when the tail of a skewed distribution is of interest. This article introduces new quantile function estimators for spatial and temporal data with a fused adaptive Lasso penalty to accommodate the dependence in space and time. This method penalizes the difference among neighboring quantiles, hence it is desirable for applications with features ordered in time or space without replicated observations. The theoretical properties are investigated and the performances of the proposed methods are evaluated by simulations. The proposed method is applied to particulate matter (PM) data from the Community Multiscale Air Quality (CMAQ) model to characterize the upper quantiles, which are crucial for studying spatial association between PM concentrations and adverse human health effects. © 2016 American Statistical Association and the American Society for Quality.

  19. Estimating functions for inhomogeneous Cox processes

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus

    2006-01-01

    Estimation methods are reviewed for inhomogeneous Cox processes with tractable first and second order properties. We illustrate the various suggestions by means of data examples.

  20. Estimation of gas and tissue lung volumes by MRI: functional approach of lung imaging.

    Science.gov (United States)

    Qanadli, S D; Orvoen-Frija, E; Lacombe, P; Di Paola, R; Bittoun, J; Frija, G

    1999-01-01

    The purpose of this work was to assess the accuracy of MRI for the determination of lung gas and tissue volumes. Fifteen healthy subjects underwent MRI of the thorax and pulmonary function tests [vital capacity (VC) and total lung capacity (TLC)] in the supine position. MR examinations were performed at inspiration and expiration. Lung volumes were measured using a technique previously validated on phantoms. Both individual and total lung volumes and capacities were calculated. MRI total vital capacity (VC(MRI)) was compared with spirometric vital capacity (VC(SP)). Capacities were correlated to lung volumes. Tissue volume (V(T)) was estimated as the difference between the total lung volume at full inspiration and the TLC. No significant difference was seen between VC(MRI) and VC(SP). Individual capacities were well correlated (r = 0.9) to static volume at full inspiration. The V(T) was estimated to be 836 ± 393 ml. This preliminary study demonstrates that MRI can accurately estimate lung gas and tissue volumes. The proposed approach appears well suited for functional imaging of the lung.

  1. Modified polarimetric bidirectional reflectance distribution function with diffuse scattering: surface parameter estimation

    Science.gov (United States)

    Zhan, Hanyu; Voelz, David G.

    2016-12-01

    The polarimetric bidirectional reflectance distribution function (pBRDF) describes the relationships between incident and scattered Stokes parameters, but the familiar surface-only microfacet pBRDF cannot capture diffuse scattering contributions and depolarization phenomena. We propose a modified pBRDF model with a diffuse scattering component developed from the Kubelka-Munk and Le Hors et al. theories, and apply it in the development of a method to jointly estimate refractive index, slope variance, and diffuse scattering parameters from a series of Stokes parameter measurements of a surface. An application of the model and estimation approach to experimental data published by Priest and Meier shows improved correspondence with measurements of normalized Mueller matrix elements. By converting the Stokes/Mueller calculus formulation of the model to a degree of polarization (DOP) description, the estimation results of the parameters from measured DOP values are found to be consistent with a previous DOP model and results.

  2. Sequential fitting-and-separating reflectance components for analytical bidirectional reflectance distribution function estimation.

    Science.gov (United States)

    Lee, Yu; Yu, Chanki; Lee, Sang Wook

    2018-01-10

    We present a sequential fitting-and-separating algorithm for surface reflectance components that separates individual dominant reflectance components and simultaneously estimates the corresponding bidirectional reflectance distribution function (BRDF) parameters from the separated reflectance values. We tackle the estimation of a Lafortune BRDF model, which combines a non-Lambertian diffuse reflection and multiple specular reflectance components, each with a different specular lobe. Our proposed method infers the appropriate number of BRDF lobes and their parameters by separating and estimating each of the reflectance components using an interval analysis-based branch-and-bound method in conjunction with iterative K-ordered scale estimation. The focus of this paper is the estimation of the Lafortune BRDF model. Nevertheless, our proposed method can be applied to other analytical BRDF models such as the Cook-Torrance and Ward models. Experiments were carried out to validate the proposed method using isotropic materials from the Mitsubishi Electric Research Laboratories-Massachusetts Institute of Technology (MERL-MIT) BRDF database, and the results show that our method is superior to a conventional minimization algorithm.

  3. FUNCTIONS OF THE HEAD OF SPECIAL (CORRECTIONAL) EDUCATIONAL INSTITUTION ON PERFECTION OF ESTIMATION OF EDUCATIONAL SYSTEM

    OpenAIRE

    Voynelenko Natalya Vaselyevna

    2012-01-01

    The article discusses the activities of the head of a special (correctional) educational institution in organizing the assessment of the quality of the educational system. A model of the joint activity of participants in the educational process for assessing educational objects, as a component of the quality management system of the educational institution, is presented. The functions of assessment of the educational system in the work of the head of the educational institution are formulated.

  4. Correlation Function Approach for Estimating Thermal Conductivity in Highly Porous Fibrous Materials

    Science.gov (United States)

    Martinez-Garcia, Jorge; Braginsky, Leonid; Shklover, Valery; Lawson, John W.

    2011-01-01

    Heat transport in highly porous fiber networks is analyzed via two-point correlation functions. Fibers are assumed to be long and thin to allow a large number of crossing points per fiber. The network is characterized by three parameters: the fiber aspect ratio, the porosity and the anisotropy of the structure. We show that the effective thermal conductivity of the system can be estimated from knowledge of the porosity and the correlation lengths of the correlation functions obtained from a fiber structure image. As an application, the effects of the fiber aspect ratio and the network anisotropy on the thermal conductivity are studied.
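    A minimal sketch of the two-point correlation computation on a synthetic anisotropic fiber image; the stripe-based image, fiber count, and lag range are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic binary "fiber" image: horizontal stripes give a longer correlation
# length along x (axis=1) than along y (axis=0), i.e. an anisotropic structure.
img = np.zeros((256, 256))
for _ in range(60):
    r0 = rng.integers(0, 256)
    c0 = rng.integers(0, 200)
    img[r0, c0:c0 + 50] = 1.0              # horizontal fiber of length 50

phi = img.mean()                           # solid fraction (1 - porosity)

def two_point(img, axis, max_lag=40):
    """S2(r) = P(both points solid at separation r along `axis`), via shifts."""
    s2 = []
    for r in range(max_lag):
        shifted = np.roll(img, r, axis=axis)
        s2.append((img * shifted).mean())
    return np.array(s2)

s2_x = two_point(img, axis=1)
s2_y = two_point(img, axis=0)
```

    S2 starts at the solid fraction at zero lag and decays toward its square; the lag at which it decays defines the correlation length used in the conductivity estimate, and the slower decay along x reflects the anisotropy.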

  5. Estimation of functional failure probability of passive systems based on adaptive importance sampling method

    International Nuclear Information System (INIS)

    Wang Baosheng; Wang Dongqing; Zhang Jianmin; Jiang Jing

    2012-01-01

    In order to estimate the functional failure probability of passive systems, an innovative adaptive importance sampling methodology is presented. In the proposed methodology, information about the variables is extracted with some pre-sampling of points in the failure region. An importance sampling density is then constructed from the sample distribution in the failure region. Taking the AP1000 passive residual heat removal system as an example, the uncertainties related to the model of a passive system and the numerical values of its input parameters are considered in this paper. The probability of functional failure is then estimated with the combination of the response surface method and the adaptive importance sampling method. The numerical results demonstrate the high computational efficiency and excellent accuracy of the methodology compared with traditional probability analysis methods. (authors)
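    The steps described above (pre-sample the failure region, fit an importance density to those samples, then reweight) can be sketched on a toy two-dimensional limit state; the Gaussian limit state, sample sizes, and density family are assumptions, not the AP1000 model:

```python
import numpy as np
from math import erf

rng = np.random.default_rng(3)

DIM = 2

def g(x):                 # toy limit state: "failure" when g(x) < 0
    return 4.5 - x.sum(axis=-1)

def norm_logpdf(x, mean, std):
    return (-0.5 * ((x - mean) / std) ** 2
            - np.log(std * np.sqrt(2.0 * np.pi))).sum(axis=-1)

# Step 1: crude pre-sampling with an inflated spread to locate the failure region.
pre = rng.normal(0.0, 3.0, size=(20000, DIM))
fail = pre[g(pre) < 0.0]

# Step 2: fit a Gaussian importance density to the failure-region samples
# (slightly widened for safety).
mu_is = fail.mean(axis=0)
sd_is = fail.std(axis=0) + 0.1

# Step 3: importance-sampling estimate under the true input model N(0, I).
N = 50000
xs = rng.normal(mu_is, sd_is, size=(N, DIM))
w = np.exp(norm_logpdf(xs, 0.0, 1.0) - norm_logpdf(xs, mu_is, sd_is))
pf = np.mean((g(xs) < 0.0) * w)

# Exact value for this toy case: P(x1 + x2 > 4.5) with x1 + x2 ~ N(0, 2).
pf_exact = 0.5 * (1.0 - erf(4.5 / 2.0))
```

    Centering the sampling density on the failure region means nearly every sample contributes to the estimate, which is why far fewer samples are needed than with crude Monte Carlo at this probability level.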

  6. A PEDOTRANSFER FUNCTION FOR ESTIMATING THE SOIL ERODIBILITY FACTOR IN SICILY

    Directory of Open Access Journals (Sweden)

    Vincenzo Bagarello

    2009-09-01

    Full Text Available The soil erodibility factor, K, of the Universal Soil Loss Equation (USLE) is a simple descriptor of the soil susceptibility to rill and interrill erosion. The original procedure for determining K requires knowledge of the soil particle size distribution (PSD), the soil organic matter (OM) content, and soil structure and permeability characteristics. However, OM data are often missing, and soil structure and permeability are not easily evaluated in regional analyses. The objective of this investigation was to develop a pedotransfer function (PTF) for estimating the K factor of the USLE in Sicily (south Italy) using only soil textural data. The nomograph soil erodibility factor and its associated first approximation, K’, were determined at 471 sampling points distributed throughout the island of Sicily. Two existing relationships for estimating K on the basis of the measured geometric mean particle diameter were initially tested. Then, two alternative PTFs for estimating K’ and K, respectively, on the basis of the measured PSD were derived. Testing analysis showed that the K estimate by the proposed PTF (eq. 11), which was characterized by a Nash-Sutcliffe efficiency index (NSEI) varying between 0.68 and 0.76 depending on the considered data set, was appreciably more accurate than the one obtained by other existing equations, which yielded NSEI values varying between 0.21 and 0.32.
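    The evaluation above relies on the Nash-Sutcliffe efficiency index; a minimal sketch of its computation follows (the sample values are invented, not the Sicilian data):

```python
import numpy as np

def nsei(observed, predicted):
    """Nash-Sutcliffe efficiency index: 1 - SSE / total sum of squares."""
    observed = np.asarray(observed, float)
    predicted = np.asarray(predicted, float)
    sse = np.sum((observed - predicted) ** 2)
    sst = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - sse / sst

# Invented erodibility-like values: perfect predictions give NSEI = 1,
# predicting the observed mean everywhere gives NSEI = 0.
obs = np.array([0.030, 0.045, 0.052, 0.038, 0.041])
perfect = nsei(obs, obs)
baseline = nsei(obs, np.full_like(obs, obs.mean()))
```

    Values between 0 and 1 therefore measure how much better the PTF performs than simply predicting the mean K.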

  7. Estimation of functional failure probability of passive systems based on subset simulation method

    International Nuclear Information System (INIS)

    Wang Dongqing; Wang Baosheng; Zhang Jianmin; Jiang Jing

    2012-01-01

    In order to solve the problem of multi-dimensional epistemic uncertainties and the small functional failure probability of passive systems, an innovative reliability analysis algorithm called subset simulation, based on Markov chain Monte Carlo, was presented. The method is founded on the idea that a small failure probability can be expressed as a product of larger conditional failure probabilities by introducing a proper choice of intermediate failure events. Markov chain Monte Carlo simulation was implemented to efficiently generate conditional samples for estimating the conditional failure probabilities. Taking the AP1000 passive residual heat removal system as an example, the uncertainties related to the model of a passive system and the numerical values of its input parameters were considered in this paper. The probability of functional failure was then estimated with the subset simulation method. The numerical results demonstrate that the subset simulation method has high computational efficiency and excellent accuracy compared with traditional probability analysis methods. (authors)
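    The idea of expressing a small failure probability as a product of larger conditional probabilities, with Markov chain Monte Carlo generating the conditional samples, can be sketched on a toy Gaussian limit state; the limit state, level probability p0 = 0.1, and modified-Metropolis proposal are illustrative assumptions:

```python
import numpy as np
from math import erf

rng = np.random.default_rng(4)

def g(u):                       # toy limit state: "failure" when g(u) >= 5
    return u[..., 0] + u[..., 1]

N, P0 = 2000, 0.1               # samples per level, conditional level probability
N_KEEP = int(N * P0)
THRESH = 5.0

u = rng.normal(size=(N, 2))
vals = g(u)
pf = 1.0
for _ in range(10):             # at most 10 subset levels
    order = np.argsort(vals)[::-1]
    b = vals[order[N_KEEP - 1]]             # intermediate threshold (p0-quantile)
    if b >= THRESH:
        pf *= np.mean(vals >= THRESH)       # final conditional probability
        break
    pf *= P0
    seeds = u[order[:N_KEEP]]
    # Modified Metropolis: grow each seed chain to N/N_KEEP states inside F_b.
    chains = [seeds]
    cur = seeds.copy()
    for _ in range(N // N_KEEP - 1):
        prop = cur + rng.normal(0.0, 1.0, size=cur.shape)
        # component-wise accept/reject against the standard-normal target
        acc = rng.random(cur.shape) < np.exp(0.5 * (cur**2 - prop**2))
        cand = np.where(acc, prop, cur)
        ok = g(cand) >= b                   # stay inside the current subset
        cur = np.where(ok[:, None], cand, cur)
        chains.append(cur.copy())
    u = np.concatenate(chains)
    vals = g(u)

# Exact value for this toy case: P(u1 + u2 >= 5) with u1 + u2 ~ N(0, 2).
pf_exact = 0.5 * (1.0 - erf(2.5))
```

    Each level only needs to resolve an event of probability about p0, so a probability of order 1e-4 is reached with a few thousand samples instead of the millions crude Monte Carlo would require.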

  8. Grid occupancy estimation for environment perception based on belief functions and PCR6

    Science.gov (United States)

    Moras, Julien; Dezert, Jean; Pannetier, Benjamin

    2015-05-01

    In this contribution, we propose to improve the grid map occupancy estimation method developed so far, which is based on belief function modeling and the classical Dempster's rule of combination. A grid map offers a useful representation of the perceived world for mobile robotics navigation. It will play a major role for the security (obstacle avoidance) of next generations of terrestrial vehicles, as well as for future autonomous navigation systems. In a grid map, the occupancy of each cell, representing a small piece of the surrounding area of the robot, must first be estimated from sensor measurements (typically LIDAR, or camera), and then it must also be classified into different classes in order to get a complete and precise perception of the dynamic environment in which the robot moves. So far, the estimation and the grid map updating have been done using fusion techniques based on the probabilistic framework, or on the classical belief function framework thanks to an inverse model of the sensors, mainly because the latter offers an interesting management of uncertainties when the quality of the available information is low and the sources of information are conflicting. To improve the performance of the grid map estimation, we propose in this paper to replace Dempster's rule of combination by the PCR6 rule (Proportional Conflict Redistribution rule #6) proposed in DSmT (Dezert-Smarandache Theory). As an illustrating scenario, we consider a platform moving in a dynamic area and we compare our new realistic simulation results (based on a LIDAR sensor) with those obtained by the probabilistic and classical belief-based approaches.
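    For two sources, the difference between Dempster's normalization of conflict and PCR6's proportional redistribution can be sketched on a two-hypothesis occupancy frame {occupied, free}; the sensor mass assignments are invented:

```python
# Frame {O, F}; 'OF' denotes the ignorance O ∪ F.
def conj(m1, m2):
    """Conjunctive combination; returns combined masses and conflict terms."""
    inter = {('O', 'O'): 'O', ('O', 'OF'): 'O', ('OF', 'O'): 'O',
             ('F', 'F'): 'F', ('F', 'OF'): 'F', ('OF', 'F'): 'F',
             ('OF', 'OF'): 'OF', ('O', 'F'): None, ('F', 'O'): None}
    out = {'O': 0.0, 'F': 0.0, 'OF': 0.0}
    conflicts = []
    for a, wa in m1.items():
        for b, wb in m2.items():
            c = inter[(a, b)]
            if c is None:
                conflicts.append((a, b, wa * wb))   # empty intersection
            else:
                out[c] += wa * wb
    return out, conflicts

def dempster(m1, m2):
    """Classical rule: renormalize by the total conflict mass."""
    out, conflicts = conj(m1, m2)
    k = sum(w for _, _, w in conflicts)
    return {h: v / (1.0 - k) for h, v in out.items()}

def pcr6(m1, m2):
    """Two-source PCR6: redistribute each conflicting product m1(X)m2(Y)
    back to X and Y proportionally to m1(X) and m2(Y)."""
    out, conflicts = conj(m1, m2)
    for a, b, _ in conflicts:
        wa, wb = m1[a], m2[b]
        out[a] += wa * wa * wb / (wa + wb)
        out[b] += wa * wb * wb / (wa + wb)
    return out

# Two highly conflicting sensor readings about one grid cell.
m1 = {'O': 0.7, 'F': 0.1, 'OF': 0.2}    # mostly "occupied"
m2 = {'O': 0.1, 'F': 0.7, 'OF': 0.2}    # mostly "free"
d = dempster(m1, m2)
p = pcr6(m1, m2)
```

    Both results are normalized basic belief assignments, but PCR6 keeps the conflicting mass attached to the hypotheses that generated it instead of spreading it over all focal sets through global renormalization.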

  9. A Scale Elasticity Measure for Directional Distance Function and its Dual: Theory and DEA Estimation

    OpenAIRE

    Valentin Zelenyuk

    2012-01-01

    In this paper we focus on a scale elasticity measure based on the directional distance function for multi-output, multi-input technologies, explore its fundamental properties and show its equivalence with the input-oriented and output-oriented scale elasticity measures. We also establish a duality relationship between the scale elasticity measure based on the directional distance function and the scale elasticity measure based on the profit function. Finally, we discuss the estimation issues of the scale...

  10. Land-use change and carbon sinks: Econometric estimation of the carbon sequestration supply function; FINAL

    International Nuclear Information System (INIS)

    Lubowski, Ruben N.; Plantinga, Andrew J.; Stavins, Robert N.

    2001-01-01

    Increased attention by policy makers to the threat of global climate change has brought with it considerable interest in the possibility of encouraging the expansion of forest area as a means of sequestering carbon dioxide. The marginal costs of carbon sequestration or, equivalently, the carbon sequestration supply function will determine the ultimate effects and desirability of policies aimed at enhancing carbon uptake. In particular, marginal sequestration costs are the critical statistic for identifying a cost-effective policy mix to mitigate net carbon dioxide emissions. We develop a framework for conducting an econometric analysis of land use for the forty-eight contiguous United States and employ it to estimate the carbon sequestration supply function. By estimating the opportunity costs of land on the basis of econometric evidence of landowners' actual behavior, we aim to circumvent many of the shortcomings of previous sequestration cost assessments. By conducting the first nationwide econometric estimation of sequestration costs, endogenizing prices for land-based commodities, and estimating land-use transition probabilities in a framework that explicitly considers the range of land-use alternatives, we hope eventually to provide better estimates of the true costs of large-scale carbon sequestration efforts. In this way, we seek to add to the understanding of the costs and potential of this strategy for addressing the threat of global climate change.

  11. Limitations of a Short Demographic Questionnaire for Bedside Estimation of Patients’ Global Cognitive Functioning in Epilepsy Patients

    Directory of Open Access Journals (Sweden)

    Iris Gorny

    2018-03-01

    Full Text Available Objectives: The German socio-demographic estimation scale was developed by Jahn et al. (1) to quickly predict premorbid global cognitive functioning in patients. So far, it has been validated in healthy adults and has shown a good correlation with the full and verbal IQ of the Wechsler Adult Intelligence Scale (WAIS) in this group. However, there are no data regarding its use as a bedside test in epilepsy patients. Methods: Forty native German-speaking adult patients with refractory epilepsy were included. They completed a neuropsychological assessment, including a nine-scale short form of the German version of the WAIS-III and the German socio-demographic estimation scale by Jahn et al. (1), during their presurgical diagnostic stay in our center. We calculated means, correlations, and the rate of concordance (range ±5 and ±7.5 IQ score points) between these two measures for the whole group, and for a subsample of 19 patients with a global cognitive functioning level within 1 SD of the mean (IQ score range 85–115) who had completed their formal education before epilepsy onset. Results: The German demographic estimation scale by Jahn et al. (1) showed a significant mean overestimation of the global cognitive functioning level of eight points in the epilepsy patient sample compared with the short form WAIS-III score. The accuracy within a range of ±5 or ±7.5 IQ score points for each patient was similar to that of the healthy controls reported by Jahn et al. (1) in our subsample, but not in our whole sample. Conclusion: Our results show that the socio-demographic scale by Jahn et al. (1) is not sufficiently reliable as an estimation tool of global cognitive functioning in epilepsy patients. It can be used to estimate global cognitive functioning in a subset of patients with a normal global cognitive functioning level who have completed their formal education before epilepsy onset, but it does not reliably predict global cognitive functioning in epilepsy patients.

  12. A non-penalty recurrent neural network for solving a class of constrained optimization problems.

    Science.gov (United States)

    Hosseini, Alireza

    2016-01-01

    In this paper, we explain a methodology to analyze the convergence of some differential inclusion-based neural networks for solving nonsmooth optimization problems. For a general differential inclusion, we show that if its right-hand-side set-valued map satisfies some conditions, then the solution trajectory of the differential inclusion converges to the optimal solution set of its corresponding optimization problem. Based on the obtained methodology, we introduce a new recurrent neural network for solving nonsmooth optimization problems. The objective function does not need to be convex on R^n, nor does the new neural network model require any penalty parameter. We compare our new method with some penalty-based and non-penalty-based models. Moreover, for differentiable cases, we implement the circuit diagram of the new neural network. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Estimating the input function non-invasively for FDG-PET quantification with multiple linear regression analysis: simulation and verification with in vivo data

    International Nuclear Information System (INIS)

    Fang, Yu-Hua; Kao, Tsair; Liu, Ren-Shyan; Wu, Liang-Chih

    2004-01-01

    A novel statistical method, namely Regression-Estimated Input Function (REIF), is proposed in this study for the purpose of non-invasive estimation of the input function for fluorine-18 2-fluoro-2-deoxy-D-glucose positron emission tomography (FDG-PET) quantitative analysis. We collected data from 44 patients who had undergone a blood sampling procedure during their FDG-PET scans. First, we generated tissue time-activity curves of the grey matter and the whole brain with a segmentation technique for every subject. Summations over different intervals of these two curves were used as a feature vector, which also included the net injection dose. Multiple linear regression analysis was then applied to find the correlation between the input function and the feature vector. After a simulation study with in vivo data, the data of 29 patients were used to calculate the regression coefficients, which were then used to estimate the input functions of the other 15 subjects. Comparing the estimated input functions with the corresponding real input functions, the averaged error percentages of the area under the curve and the cerebral metabolic rate of glucose (CMRGlc) were 12.13±8.85 and 16.60±9.61, respectively. Regression analysis of the CMRGlc values derived from the real and estimated input functions revealed a high correlation (r=0.91). No significant difference was found between the real CMRGlc and that derived from our regression-estimated input function (Student's t test, P>0.05). The proposed REIF method demonstrated good abilities for input function and CMRGlc estimation, and represents a reliable replacement for the blood sampling procedures in FDG-PET quantification. (orig.)
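    The regression step (a least-squares fit of an input-function summary onto a feature vector, trained on some subjects and applied to the rest) can be sketched on synthetic data; the feature dimension, coefficients, and noise level below are invented, not the study's features:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic stand-in for the REIF idea: each subject's input-function summary
# (e.g. its AUC) is a noisy linear function of features built from tissue
# curves plus the injected dose.  All numbers are illustrative.
n_train, n_test, n_feat = 29, 15, 5       # same train/test split sizes as the study
true_beta = np.array([0.8, -0.3, 0.5, 0.1, 1.2])

X_train = rng.normal(size=(n_train, n_feat))
y_train = X_train @ true_beta + rng.normal(0.0, 0.05, n_train)

# Ordinary least squares for the regression coefficients.
beta_hat, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

# Predict the held-out subjects' input-function summaries from features alone.
X_test = rng.normal(size=(n_test, n_feat))
y_pred = X_test @ beta_hat
y_true = X_test @ true_beta
```

    Once the coefficients are fixed on the training subjects, new subjects need only their (non-invasive) image-derived features, which is what makes the approach a replacement for blood sampling.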

  14. On the method of logarithmic cumulants for parametric probability density function estimation.

    Science.gov (United States)

    Krylov, Vladimir A; Moser, Gabriele; Serpico, Sebastiano B; Zerubia, Josiane

    2013-10-01

    Parameter estimation of probability density functions is one of the major steps in the area of statistical image and signal processing. In this paper we explore several properties and limitations of the recently proposed method of logarithmic cumulants (MoLC) parameter estimation approach which is an alternative to the classical maximum likelihood (ML) and method of moments (MoM) approaches. We derive the general sufficient condition for a strong consistency of the MoLC estimates which represents an important asymptotic property of any statistical estimator. This result enables the demonstration of the strong consistency of MoLC estimates for a selection of widely used distribution families originating from (but not restricted to) synthetic aperture radar image processing. We then derive the analytical conditions of applicability of MoLC to samples for the distribution families in our selection. Finally, we conduct various synthetic and real data experiments to assess the comparative properties, applicability and small sample performance of MoLC notably for the generalized gamma and K families of distributions. Supervised image classification experiments are considered for medical ultrasound and remote-sensing SAR imagery. The obtained results suggest that MoLC is a feasible and computationally fast yet not universally applicable alternative to MoM. MoLC becomes especially useful when the direct ML approach turns out to be unfeasible.
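    For the gamma family mentioned above, MoLC amounts to matching the first two log-cumulants: E[log X] = ψ(k) + log θ and Var[log X] = ψ′(k) for shape k and scale θ. A minimal sketch, inverting the trigamma function by bisection (the sample and parameters are synthetic):

```python
import numpy as np
from scipy.special import digamma, polygamma

rng = np.random.default_rng(6)

# Gamma-distributed sample, e.g. a stand-in for SAR intensity data.
k_true, theta_true = 3.0, 2.0
x = rng.gamma(k_true, theta_true, size=200000)

# Empirical log-cumulants.
c1 = np.mean(np.log(x))      # estimates psi(k) + log(theta)
c2 = np.var(np.log(x))       # estimates psi'(k) (trigamma)

# Invert trigamma(k) = c2 by bisection; trigamma is strictly decreasing on (0, inf).
lo, hi = 1e-3, 100.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if polygamma(1, mid) > c2:
        lo = mid
    else:
        hi = mid
k_hat = 0.5 * (lo + hi)
theta_hat = np.exp(c1 - digamma(k_hat))
```

    Because only log-moments are needed, the estimator involves no likelihood maximization, which is the computational advantage the abstract attributes to MoLC.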

  15. The Effect of Error in Item Parameter Estimates on the Test Response Function Method of Linking.

    Science.gov (United States)

    Kaskowitz, Gary S.; De Ayala, R. J.

    2001-01-01

    Studied the effect of item parameter estimation for computation of linking coefficients for the test response function (TRF) linking/equating method. Simulation results showed that linking was more accurate when there was less error in the parameter estimates, and that 15 or 25 common items provided better results than 5 common items under both…

  16. Parametric estimation of covariance function in Gaussian-process based Kriging models. Application to uncertainty quantification for computer experiments

    International Nuclear Information System (INIS)

    Bachoc, F.

    2013-01-01

    The parametric estimation of the covariance function of a Gaussian process is studied, in the framework of the Kriging model. Maximum Likelihood and Cross Validation estimators are considered. The correctly specified case, in which the covariance function of the Gaussian process does belong to the parametric set used for estimation, is first studied in an increasing-domain asymptotic framework. The sampling considered is a randomly perturbed multidimensional regular grid. Consistency and asymptotic normality are proved for the two estimators. It is then shown that strong perturbations of the regular grid are always beneficial to Maximum Likelihood estimation. The incorrectly specified case, in which the covariance function of the Gaussian process does not belong to the parametric set used for estimation, is then studied. It is shown that Cross Validation is more robust than Maximum Likelihood in this case. Finally, two applications of the Kriging model with Gaussian processes are carried out on industrial data. For a validation problem of the friction model of the thermal-hydraulic code FLICA 4, where experimental results are available, it is shown that Gaussian process modeling of the FLICA 4 code model error makes it possible to considerably improve its predictions. For a meta-modeling problem of the GERMINAL thermal-mechanical code, the interest of the Kriging model with Gaussian processes, compared to neural network methods, is shown. (author)
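    A minimal sketch of Maximum Likelihood estimation of a covariance parameter on a randomly perturbed sampling, assuming a one-dimensional exponential covariance with known unit variance (a deliberate simplification of the setting studied):

```python
import numpy as np

rng = np.random.default_rng(7)

# 1-D Gaussian process with exponential covariance C(h) = exp(-|h| / rho).
n, rho_true = 200, 0.8
t = np.sort(rng.uniform(0.0, 10.0, n))        # irregular (perturbed) sampling
H = np.abs(t[:, None] - t[None, :])
K_true = np.exp(-H / rho_true)
y = np.linalg.cholesky(K_true + 1e-10 * np.eye(n)) @ rng.normal(size=n)

def neg_log_lik(rho):
    """Gaussian negative log-likelihood (up to a constant) via Cholesky."""
    K = np.exp(-H / rho) + 1e-10 * np.eye(n)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L, y)             # so alpha @ alpha = y' K^-1 y
    return alpha @ alpha / 2.0 + np.log(np.diag(L)).sum()

# Maximum Likelihood estimate of the correlation length by grid search.
grid = np.linspace(0.2, 3.0, 141)
rho_hat = grid[np.argmin([neg_log_lik(r) for r in grid])]
```

    The Cholesky factorization supplies both the quadratic form and the log-determinant, which is the standard way to evaluate the Gaussian likelihood without explicitly inverting K.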

  17. Complex mode indication function and its applications to spatial domain parameter estimation

    Science.gov (United States)

    Shih, C. Y.; Tsuei, Y. G.; Allemang, R. J.; Brown, D. L.

    1988-10-01

    This paper introduces the concept of the Complex Mode Indication Function (CMIF) and its application in spatial domain parameter estimation. The CMIF is computed by performing a singular value decomposition (SVD) of the Frequency Response Function (FRF) matrix at each spectral line. The CMIF is defined as the eigenvalues, which are the squares of the singular values, of the normal matrix formed from the FRF matrix, [H(jω)]^H [H(jω)], at each spectral line. The CMIF appears to be a simple and efficient method for identifying the modes of a complex system. It identifies modes by showing the physical magnitude of each mode and the damped natural frequency for each root. Since multiple-reference data are used in the CMIF, repeated roots can be detected. The CMIF also gives global modal parameters, such as damped natural frequencies, mode shapes and modal participation vectors. Since the CMIF works in the spatial domain, unevenly spaced frequency data, such as data from spatial sine testing, can be used. A second-stage procedure for accurate damped natural frequency and damping estimation, as well as mode shape scaling, is also discussed in this paper.
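
    As a sketch of the CMIF computation (singular values of the FRF matrix at each spectral line), the following builds a synthetic two-mode FRF matrix and locates the dominant damped natural frequency from the first CMIF curve; all modal values here are invented for illustration.

```python
import numpy as np

freqs = np.linspace(1.0, 20.0, 400)                      # Hz
w = 2.0 * np.pi * freqs
fn = np.array([5.0, 12.0])                               # modal frequencies, Hz
zeta = 0.02                                              # damping ratio
lam = 2.0 * np.pi * fn * (-zeta + 1j * np.sqrt(1.0 - zeta**2))
phi = np.array([[1.0, 0.5], [0.8, -0.6], [0.3, 0.9]])    # mode shapes (3 outputs)
L = np.array([[1.0, 0.4], [0.2, 1.0]])                   # participation (2 inputs)

# Synthetic FRF matrix H(jw), shape (freq, outputs, inputs).
H = np.zeros((len(w), 3, 2), dtype=complex)
for r in range(2):
    R = np.outer(phi[:, r], L[:, r])                     # residue of mode r
    H += R / (1j * w[:, None, None] - lam[r])
    H += np.conj(R) / (1j * w[:, None, None] - np.conj(lam[r]))

# CMIF: squared singular values of the FRF matrix at each spectral line.
cmif = np.linalg.svd(H, compute_uv=False) ** 2           # shape (400, 2)

# The dominant peak of the first CMIF curve marks a damped natural frequency.
f_peak = float(freqs[int(np.argmax(cmif[:, 0]))])
```

    Plotting both columns of `cmif` against `freqs` would show a second peak near 12 Hz, and a repeated root would appear as simultaneous peaks in both curves.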

  18. Variance Function Estimation. Revision.

    Science.gov (United States)

    1987-03-01

    [Abstract not recoverable: the source text for this record is OCR-garbled.]

  19. Assessing a learning process with functional ANOVA estimators of EEG power spectral densities.

    Science.gov (United States)

    Gutiérrez, David; Ramírez-Moreno, Mauricio A

    2016-04-01

    We propose to assess the process of learning a task using electroencephalographic (EEG) measurements. In particular, we quantify changes in brain activity associated with the progression of the learning experience through functional analysis-of-variance (FANOVA) estimators of the EEG power spectral density (PSD). Such functional estimators provide a sense of the effect of training on the EEG dynamics. For that purpose, we implemented an experiment to monitor the process of learning to type using the Colemak keyboard layout during a twelve-lesson training. Hence, our aim is to identify statistically significant changes in the PSD of various EEG rhythms at different stages and difficulty levels of the learning process. Those changes are taken into account only when a probabilistic measure of the cognitive state ensures the high engagement of the volunteer in the training. Based on this, a series of statistical tests is performed in order to determine the personalized frequencies and sensors at which changes in PSD occur, and then the FANOVA estimates are computed and analyzed. Our experimental results showed a significant decrease in the power of [Formula: see text] and [Formula: see text] rhythms for ten volunteers during the learning process, and such decrease happens regardless of the difficulty of the lesson. These results are in agreement with previous reports of changes in PSD being associated with feature binding and memory encoding.

  20. [Nonparametric method of estimating survival functions containing right-censored and interval-censored data].

    Science.gov (United States)

    Xu, Yonghong; Gao, Xiaohuan; Wang, Zhengxi

    2014-04-01

    Missing data represent a general problem in many scientific fields, especially in medical survival analysis. When dealing with censored data, interpolation is one of the important methods. However, most interpolation methods replace the censored data with exact data, which distorts the real distribution of the censored data and reduces the probability of the real data falling into the interpolation data. In order to solve this problem, we propose in this paper a nonparametric method of estimating the survival function of right-censored and interval-censored data and compare its performance to the SC (self-consistent) algorithm. Compared to the average interpolation and the nearest neighbor interpolation methods, the method proposed in this paper replaces the right-censored data with interval-censored data, and greatly improves the probability of the real data falling into the imputation interval. It then relies on empirical distribution theory to estimate the survival function of right-censored and interval-censored data. The results of numerical examples and a real breast cancer data set demonstrated that the proposed method had higher accuracy and better robustness for different proportions of censored data. This paper provides a good method for comparing the performance of clinical treatments through estimation of the survival data of the patients, and offers some help to medical survival data analysis.
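
    For context, the classical Kaplan-Meier estimator for purely right-censored data, the baseline that interval-censored methods like the one above extend, can be written in a few lines; this is not the paper's algorithm, only the standard reference point.

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate for right-censored data.

    times:  observed times (event or censoring).
    events: 1 = event observed, 0 = right-censored.
    Returns a list of (event_time, survival_probability) pairs.
    """
    times = np.asarray(times, float)
    events = np.asarray(events, int)
    order = np.argsort(times)
    times, events = times[order], events[order]
    surv, out = 1.0, []
    for t in np.unique(times[events == 1]):
        at_risk = int(np.sum(times >= t))               # still under observation
        d = int(np.sum((times == t) & (events == 1)))   # events at time t
        surv *= 1.0 - d / at_risk
        out.append((float(t), surv))
    return out

# Small worked example: censored observations are marked with event = 0.
km = kaplan_meier([2, 3, 3, 5, 8], [1, 1, 0, 1, 0])
```

    For the toy data above the survival curve drops to 0.8 after time 2, 0.6 after time 3, and 0.3 after time 5.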

  1. A note on quasilinear elliptic eigenvalue problems

    Directory of Open Access Journals (Sweden)

    Gianni Arioli

    1999-11-01

    We study an eigenvalue problem using non-smooth critical point theory. Under general assumptions, we prove the existence of at least one solution as a minimum of a constrained energy functional. We apply some results of critical point theory with symmetry to provide a multiplicity result.

  2. Using step and path selection functions for estimating resistance to movement: Pumas as a case study

    Science.gov (United States)

    Katherine A. Zeller; Kevin McGarigal; Samuel A. Cushman; Paul Beier; T. Winston Vickers; Walter M. Boyce

    2015-01-01

    GPS telemetry collars and their ability to acquire accurate and consistently frequent locations have increased the use of step selection functions (SSFs) and path selection functions (PathSFs) for studying animal movement and estimating resistance. However, previously published SSFs and PathSFs often do not accommodate multiple scales or multiscale modeling....

  3. ESTIMATING THE PRODUCTION FUNCTION IN THE CASE OF ROMANIA: METHODOLOGY AND RESULTS

    Directory of Open Access Journals (Sweden)

    Simuț Ramona Marinela

    2015-07-01

    The problem of economic growth is a headline concern among economists, mathematicians and politicians. This is because of the major impact of economic growth on the entire population of a country, which has made achieving or maintaining a sustained growth rate the major objective of the macroeconomic policy of any country. Thus, in order to identify the present sources of economic growth for Romania, in our study we used a Cobb-Douglas type production function. The basic variables of this model are labour, capital stock and the part of economic growth determined by technical progress, the Solow residual or total factor productivity. To estimate this production function in the case of Romania, we used quarterly statistical data from the period between the first quarter of 2000 and the fourth quarter of 2014; the source of the data was Eurostat. The Cobb-Douglas production function with the variables labour and capital is valid in Romania's case because the parameters of the exogenous variables are significantly different from zero. This model became valid after we eliminated the autocorrelation of errors. Removing the autocorrelation of errors does not alter the structure of the production function. The adjusted R2 determination coefficient, as well as the α and β coefficients, have values close to those from the first estimated equation. The regression of GDP is characterized by decreasing marginal efficiency of the capital stock (α < 1) and decreasing efficiency of labour (β < 1). In our case the sum of the α and β coefficients is below 1 (0.75), as in the case of the second model (0.89), which corresponds to decreasing returns to scale of the production function. Concerning the working population of Romania, it registered a growing trend from 2000 until 2005, a period that coincided with sustained economic growth.
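
    The estimation step reduces to ordinary least squares on the log-linear form ln Y = ln A + α ln K + β ln L. Below is a self-contained sketch on synthetic data; the parameter values are invented for illustration, not Romania's estimates.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic economy: Y = A * K^alpha * L^beta with invented parameters.
n = 60
A_true, alpha_true, beta_true = 1.5, 0.3, 0.6
K = rng.uniform(50.0, 200.0, n)        # capital stock
L = rng.uniform(20.0, 100.0, n)        # labour
Y = A_true * K**alpha_true * L**beta_true * np.exp(0.01 * rng.standard_normal(n))

# OLS on the log-linear form: ln Y = ln A + alpha ln K + beta ln L.
X = np.column_stack([np.ones(n), np.log(K), np.log(L)])
coef, *_ = np.linalg.lstsq(X, np.log(Y), rcond=None)
lnA_hat, alpha_hat, beta_hat = coef
```

    Returns to scale are then read off from alpha_hat + beta_hat (below 1 indicates decreasing returns, as reported in the record).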

  4. Fitting psychometric functions using a fixed-slope parameter: an advanced alternative for estimating odor thresholds with data generated by ASTM E679.

    Science.gov (United States)

    Peng, Mei; Jaeger, Sara R; Hautus, Michael J

    2014-03-01

    Psychometric functions are predominantly used for estimating detection thresholds in vision and audition. However, the requirement of large data quantities for fitting psychometric functions (>30 replications) reduces their suitability in olfactory studies, because olfactory response data are often limited, such as data generated by the standard procedure ASTM E679. The slope parameter of the individual-judge psychometric function is fixed to be the same as that of the group function; the same-shaped symmetrical sigmoid function is fitted using only the intercept. This study evaluated the proposed method by comparing it with 2 available methods. Comparison to conventional psychometric functions (fitted slope and intercept) indicated that the assumption of a fixed slope did not compromise the precision of the threshold estimates. No systematic difference was obtained between the proposed method and the ASTM method in terms of group threshold estimates or threshold distributions, but there were changes in the rank, by threshold, of judges in the group. Overall, the fixed-slope psychometric function is recommended for obtaining relatively reliable individual threshold estimates when the quantity of data is limited.
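
    A minimal sketch of the fixed-slope idea: with the slope pinned to the group value, fitting an individual judge's sigmoid needs only a one-dimensional search over the intercept (threshold). The logistic form, trial counts and parameter values below are illustrative assumptions, not the authors' exact model.

```python
import numpy as np

rng = np.random.default_rng(2)

def logistic(x, mu, k):
    """Symmetrical sigmoid with threshold mu and slope k."""
    return 1.0 / (1.0 + np.exp(-k * (x - mu)))

x = np.linspace(-3.0, 3.0, 13)       # stimulus levels (e.g. log concentration)
k_group = 1.5                        # slope taken from the group-level fit

# Sparse individual data: 6 trials per level for a judge with threshold 0.4.
mu_true = 0.4
p_obs = rng.binomial(6, logistic(x, mu_true, k_group)) / 6.0

# Fixed-slope fit: only the intercept (threshold) is free -> 1-D grid search.
grid = np.linspace(-2.0, 2.0, 401)
sse = [np.sum((p_obs - logistic(x, mu, k_group)) ** 2) for mu in grid]
mu_hat = float(grid[int(np.argmin(sse))])
```

    A conventional fit would search jointly over (mu, k), which is poorly constrained with so few trials; fixing k is what makes the sparse-data fit stable.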

  5. An enriched cohesive zone model for delamination in brittle interfaces

    NARCIS (Netherlands)

    Samimi, M.; Dommelen, van J.A.W.; Geers, M.G.D.

    2009-01-01

    Application of standard cohesive zone models in a finite element framework to simulate delamination in brittle interfaces may trigger non-smooth load-displacement responses that lead to the failure of iterative solution procedures. This non-smoothness is an artifact of the discretization, and hence

  6. Volume-assisted estimation of liver function based on Gd-EOB-DTPA-enhanced MR relaxometry

    Energy Technology Data Exchange (ETDEWEB)

    Haimerl, Michael; Schlabeck, Mona; Verloh, Niklas; Fellner, Claudia; Stroszczynski, Christian; Wiggermann, Philipp [University Hospital Regensburg, Department of Radiology, Regensburg (Germany); Zeman, Florian [University Hospital Regensburg, Center for Clinical Trials, Regensburg (Germany); Nickel, Dominik [MR Applications Development, Siemens AG, Healthcare Sector, Erlangen (Germany); Barreiros, Ana Paula [University Hospital Regensburg, Department of Internal Medicine I, Regensburg (Germany); Loss, Martin [University Hospital Regensburg, Department of Surgery, Regensburg (Germany)

    2016-04-15

    To determine whether liver function, as determined by indocyanine green (ICG) clearance, can be estimated quantitatively from hepatic magnetic resonance (MR) relaxometry with gadoxetic acid (Gd-EOB-DTPA). One hundred and seven patients underwent an ICG clearance test and Gd-EOB-DTPA-enhanced MRI, including MR relaxometry at 3 Tesla. A transverse 3D VIBE sequence with inline T1 calculation was acquired prior to and 20 minutes after Gd-EOB-DTPA administration. The reduction rate of T1 relaxation time (rrT1) between pre- and post-contrast images and the liver volume-assisted index of T1 reduction rate (LVrrT1) were evaluated. The plasma disappearance rate of ICG (ICG-PDR) was correlated with the liver volume (LV), rrT1 and LVrrT1, providing an MRI-based estimated ICG-PDR value (ICG-PDR{sub est}). A simple linear regression model showed a significant correlation of ICG-PDR with LV (r = 0.32; p = 0.001), T1{sub post} (r = 0.65; p < 0.001) and rrT1 (r = 0.86; p < 0.001). Assessment of LV and consecutive evaluation of a multiple linear regression model revealed a stronger correlation of ICG-PDR with LVrrT1 (r = 0.92; p < 0.001), allowing for the calculation of ICG-PDR{sub est}. Liver function as determined using ICG-PDR can be estimated quantitatively from Gd-EOB-DTPA-enhanced MR relaxometry. Volume-assisted MR relaxometry has a stronger correlation with liver function than does MR relaxometry alone. (orig.)

  7. Estimation of Multiple Point Sources for Linear Fractional Order Systems Using Modulating Functions

    KAUST Repository

    Belkhatir, Zehor

    2017-06-28

    This paper proposes an estimation algorithm for the characterization of multiple point inputs for linear fractional order systems. First, using the polynomial modulating functions method and a suitable change of variables, the problem of estimating the locations and amplitudes of a multi-pointwise input is decoupled into two algebraic systems of equations. The first system is nonlinear and solves for the time locations iteratively, whereas the second system is linear and solves for the input's amplitudes. Second, closed-form formulas for both the time location and the amplitude are provided in the particular case of a single point input. Finally, numerical examples are given to illustrate the performance of the proposed technique in both noise-free and noisy cases. The joint estimation of the pointwise input and the fractional differentiation orders is also presented. Furthermore, a discussion on the performance of the proposed algorithm is provided.
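
    The core trick of modulating functions, transferring derivatives off the noisy signal by integration by parts, can be shown on the simplest integer-order case y' = −a·y. The specific modulating function φ(t) = t²(T−t)², which vanishes at both endpoints, is an illustrative choice, not the paper's polynomial basis.

```python
import numpy as np

def integral(f, t):
    """Composite trapezoidal rule."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))

# Data from y' = -a*y (a = 1.3), sampled on [0, T].
a_true, T = 1.3, 2.0
t = np.linspace(0.0, T, 2001)
y = np.exp(-a_true * t)

# Modulating function vanishing at both endpoints, and its derivative.
phi = t**2 * (T - t)**2
dphi = 2.0 * t * (T - t)**2 - 2.0 * t**2 * (T - t)

# Integration by parts (boundary terms vanish): int phi*y' dt = -int phi'*y dt.
# Substituting y' = -a*y gives  a = int(phi' * y) / int(phi * y),
# so the parameter is estimated without ever differentiating the data.
a_hat = integral(dphi * y, t) / integral(phi * y, t)
```

    The same mechanism, with fractional-order operators in place of the first derivative, is what turns the point-source problem above into algebraic systems.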

  8. Modulating functions-based method for parameters and source estimation in one-dimensional partial differential equations

    KAUST Repository

    Asiri, Sharefa M.; Laleg-Kirati, Taous-Meriem

    2016-01-01

    In this paper, modulating functions-based method is proposed for estimating space–time-dependent unknowns in one-dimensional partial differential equations. The proposed method simplifies the problem into a system of algebraic equations linear

  9. Optimal replacement time estimation for machines and equipment based on cost function

    Directory of Open Access Journals (Sweden)

    J. Šebo

    2013-01-01

    The article deals with the multidisciplinary issue of estimating the optimal replacement time for machines. The categories of machines considered, for which the optimization method is usable, are those of metallurgical and engineering production. Different models of the cost function are considered (with both one and two variables). The parameters of the models were calculated through the least squares method. Model testing shows that all are good enough, so for estimating the optimal replacement time it is sufficient to use the simpler models. In addition to the testing of models, we developed a method (tested on a selected simple model) which enables us, in actual real time (with a limited data set), to indicate the optimal replacement time. The indicated time moment is close enough to the optimal replacement time t*.
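
    A toy version of the cost-function approach: assume (hypothetically) a purchase price P and a linearly growing maintenance-cost rate, fit the rate parameters by least squares, and read off the replacement time that minimizes average cost per year. The model form is a textbook stand-in, not the article's fitted models.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical machine: purchase price P, maintenance-cost rate m(t) = a + b*t.
# Average cost per year C(t) = P/t + a + b*t/2 is minimized at t* = sqrt(2P/b).
P = 100.0
a_true, b_true = 5.0, 2.0
t_obs = np.arange(1.0, 11.0)                         # years in service
m_obs = a_true + b_true * t_obs + 0.2 * rng.standard_normal(10)

# Least-squares fit of the maintenance-rate parameters.
X = np.column_stack([np.ones_like(t_obs), t_obs])
(a_hat, b_hat), *_ = np.linalg.lstsq(X, m_obs, rcond=None)

t_star = float(np.sqrt(2.0 * P / b_hat))             # estimated replacement time
```

    Refitting a_hat and b_hat as each year of cost data arrives gives the "real-time" indication of t* described in the abstract.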

  10. Spectral velocity estimation using autocorrelation functions for sparse data sets

    DEFF Research Database (Denmark)

    2006-01-01

    The distribution of velocities of blood or tissue is displayed on ultrasound scanners by finding the power spectrum of the received signal. This is currently done by taking a Fourier transform of the received signal and then showing spectra in an M-mode display. It is desired to show a B-mode image for orientation, and data for this have to be acquired interleaved with the flow data. The power spectrum can be calculated from the Fourier transform of the autocorrelation function Ry(k), where its span of lags k is given by the number of emissions N in the data segment for velocity estimation...
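
    The Wiener-Khinchin route sketched above (power spectrum as the Fourier transform of the autocorrelation estimate Ry(k)) can be demonstrated on a synthetic single-velocity signal; the parameter values are arbitrary illustrations, not the paper's acquisition settings.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic slow-time signal, one complex sample per emission: a single
# "velocity" line at normalized frequency f0, plus complex white noise.
N, f0 = 64, 0.2
n = np.arange(N)
y = np.exp(2j * np.pi * f0 * n) \
    + 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# Biased autocorrelation estimate R_y(k) for lags k = 0 .. N-1.
R = np.array([np.sum(y[k:] * np.conj(y[:N - k])) / N for k in range(N)])

# Power spectrum as the Fourier transform of the autocorrelation (zero-padded).
nfft = 256
spectrum = np.abs(np.fft.fft(R, nfft))
f_axis = np.fft.fftfreq(nfft)
f_peak = float(f_axis[int(np.argmax(spectrum))])
```

    With interleaved B-mode acquisition the available lags k become sparse; the point of the autocorrelation formulation is that the spectrum can still be formed from whichever lags were measured.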

  11. Influence function method for fast estimation of BWR core performance

    International Nuclear Information System (INIS)

    Rahnema, F.; Martin, C.L.; Parkos, G.R.; Williams, R.D.

    1993-01-01

    The model, which is based on the influence function method, provides rapid estimates of important quantities such as margins to fuel operating limits, the effective multiplication factor, nodal power, void and bundle flow distributions, as well as the traversing in-core probe (TIP) and local power range monitor (LPRM) readings. The fast model has been incorporated into GE's three-dimensional core monitoring system (3D Monicore). In addition to its predictive capability, the model adapts to LPRM readings in the monitoring mode. Comparisons have shown that the agreement between the results of the fast method and those of the standard 3D Monicore is within a few percent. (orig.)

  12. Different shades of default mode disturbance in schizophrenia: Subnodal covariance estimation in structure and function.

    Science.gov (United States)

    Lefort-Besnard, Jérémy; Bassett, Danielle S; Smallwood, Jonathan; Margulies, Daniel S; Derntl, Birgit; Gruber, Oliver; Aleman, Andre; Jardri, Renaud; Varoquaux, Gaël; Thirion, Bertrand; Eickhoff, Simon B; Bzdok, Danilo

    2018-02-01

    Schizophrenia is a devastating mental disease with an apparent disruption in the highly associative default mode network (DMN). Interplay between this canonical network and others probably contributes to goal-directed behavior so its disturbance is a candidate neural fingerprint underlying schizophrenia psychopathology. Previous research has reported both hyperconnectivity and hypoconnectivity within the DMN, and both increased and decreased DMN coupling with the multimodal saliency network (SN) and dorsal attention network (DAN). This study systematically revisited network disruption in patients with schizophrenia using data-derived network atlases and multivariate pattern-learning algorithms in a multisite dataset (n = 325). Resting-state fluctuations in unconstrained brain states were used to estimate functional connectivity, and local volume differences between individuals were used to estimate structural co-occurrence within and between the DMN, SN, and DAN. In brain structure and function, sparse inverse covariance estimates of network coupling were used to characterize healthy participants and patients with schizophrenia, and to identify statistically significant group differences. Evidence did not confirm that the backbone of the DMN was the primary driver of brain dysfunction in schizophrenia. Instead, functional and structural aberrations were frequently located outside of the DMN core, such as in the anterior temporoparietal junction and precuneus. Additionally, functional covariation analyses highlighted dysfunctional DMN-DAN coupling, while structural covariation results highlighted aberrant DMN-SN coupling. Our findings reframe the role of the DMN core and its relation to canonical networks in schizophrenia. We thus underline the importance of large-scale neural interactions as effective biomarkers and indicators of how to tailor psychiatric care to single patients. © 2017 Wiley Periodicals, Inc.
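
    Sparse inverse covariance estimation penalizes the precision matrix, whose off-diagonal entries encode direct (conditional) coupling between regions. The unpenalized sketch below only illustrates that underlying quantity, partial correlation from the inverted empirical covariance, on synthetic data; the study itself adds an ℓ1 penalty (graphical lasso) to obtain sparse network estimates.

```python
import numpy as np

rng = np.random.default_rng(5)

# Ground truth: 4 "regions" with one direct coupling (0 <-> 1) encoded in the
# precision (inverse covariance) matrix.
n_samples, n_regions = 500, 4
prec_true = np.eye(n_regions)
prec_true[0, 1] = prec_true[1, 0] = -0.4
X = rng.multivariate_normal(np.zeros(n_regions),
                            np.linalg.inv(prec_true), size=n_samples)

# Unpenalized precision estimate (the sparse method adds an l1 penalty here).
emp_prec = np.linalg.inv(np.cov(X, rowvar=False))

# Partial correlations: conditional coupling strengths between region pairs.
d = np.sqrt(np.diag(emp_prec))
partial_corr = -emp_prec / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)
```

    Marginal correlation would also pick up indirect paths between regions; the precision-matrix view is what lets the study speak of direct DMN-DAN or DMN-SN coupling.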

  13. Smooth semi-nonparametric (SNP) estimation of the cumulative incidence function.

    Science.gov (United States)

    Duc, Anh Nguyen; Wolbers, Marcel

    2017-08-15

    This paper presents a novel approach to estimation of the cumulative incidence function in the presence of competing risks. The underlying statistical model is specified via a mixture factorization of the joint distribution of the event type and the time to the event. The time to event distributions conditional on the event type are modeled using smooth semi-nonparametric densities. One strength of this approach is that it can handle arbitrary censoring and truncation while relying on mild parametric assumptions. A stepwise forward algorithm for model estimation and adaptive selection of smooth semi-nonparametric polynomial degrees is presented, implemented in the statistical software R, evaluated in a sequence of simulation studies, and applied to data from a clinical trial in cryptococcal meningitis. The simulations demonstrate that the proposed method frequently outperforms both parametric and nonparametric alternatives. They also support the use of 'ad hoc' asymptotic inference to derive confidence intervals. An extension to regression modeling is also presented, and its potential and challenges are discussed. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  14. A regularized matrix factorization approach to induce structured sparse-low-rank solutions in the EEG inverse problem

    DEFF Research Database (Denmark)

    Montoya-Martinez, Jair; Artes-Rodriguez, Antonio; Pontil, Massimiliano

    2014-01-01

    We consider the estimation of the Brain Electrical Sources (BES) matrix from noisy electroencephalographic (EEG) measurements, commonly named the EEG inverse problem. We propose a new method to induce neurophysiologically meaningful solutions, which takes into account the smoothness, structured sparsity, and low rank of the BES matrix. The method is based on the factorization of the BES matrix as a product of a sparse coding matrix and a dense latent source matrix. The structured sparse-low-rank structure is enforced by minimizing a regularized functional that includes the ℓ21-norm of the coding matrix and the squared Frobenius norm of the latent source matrix. We develop an alternating optimization algorithm to solve the resulting nonsmooth-nonconvex minimization problem. We analyze the convergence of the optimization procedure, and we compare, under different synthetic scenarios...

  15. $\Upsilon\overline{B}B$ couplings, slope of the Isgur-Wise function and improved estimate of $V_{cb}$

    CERN Document Server

    Narison, Stéphan

    1994-01-01

    We estimate the sum of the $\Upsilon \bar{B}B$ couplings using QCD Spectral Sum Rules (QSSR). Our result implies the phenomenological bound $\xi'(vv'=1) \geq -1.04$ for the slope of the Isgur-Wise function. An analytic estimate of the (physical) slope to two loops within QSSR leads to the accurate value $\xi'(vv'=1) \simeq -(1.00 \pm 0.02)$ due to the (almost) complete cancellations between the perturbative and non-perturbative corrections at the stability points. Then, we deduce, from the present data, the improved estimate $\vert V_{cb} \vert \simeq (1.48\,\mbox{ps}/\tau_B)^{1/2}\,(37.3 \pm 1.2 \pm 1.4)\times 10^{-3}$, where the first error comes from the data analysis and the second one from the different model parametrizations of the Isgur-Wise function.

  16. Studies on the Zeroes of Bessel Functions and Methods for Their Computation: IV. Inequalities, Estimates, Expansions, etc., for Zeros of Bessel Functions

    Science.gov (United States)

    Kerimov, M. K.

    2018-01-01

    This paper is the fourth in a series of survey articles concerning zeros of Bessel functions and methods for their computation. Various inequalities, estimates, expansions, etc. for positive zeros are analyzed, and some results are described in detail with proofs.

  17. Estimation of Pulse Transit Time as a Function of Blood Pressure Using a Nonlinear Arterial Tube-Load Model.

    Science.gov (United States)

    Gao, Mingwu; Cheng, Hao-Min; Sung, Shih-Hsien; Chen, Chen-Huan; Olivier, Nicholas Bari; Mukkamala, Ramakrishna

    2017-07-01

    Pulse transit time (PTT) varies with blood pressure (BP) throughout the cardiac cycle, yet, because of wave reflection, only one PTT value, at the diastolic BP level, is conventionally estimated from proximal and distal BP waveforms. The objective was to establish a technique to estimate multiple PTT values at different BP levels in the cardiac cycle. A technique was developed for estimating PTT as a function of BP (to indicate the PTT value for every BP level) from proximal and distal BP waveforms. First, a mathematical transformation from one waveform to the other is defined in terms of the parameters of a nonlinear arterial tube-load model accounting for BP-dependent arterial compliance and wave reflection. Then, the parameters are estimated by optimally fitting the waveforms to each other via the model-based transformation. Finally, PTT as a function of BP is specified by the parameters. The technique was assessed in animals and patients in several ways, including the ability of its estimated PTT-BP function to serve as a subject-specific curve for calibrating PTT to BP. The calibration curve derived by the technique during a baseline period yielded bias and precision errors in mean BP of 5.1 ± 0.9 and 6.6 ± 1.0 mmHg, respectively, during hemodynamic interventions that varied mean BP widely. The new technique may permit, for the first time, estimation of PTT values throughout the cardiac cycle from proximal and distal waveforms. The technique could potentially be applied to improve arterial stiffness monitoring and help realize cuff-less BP monitoring.

  18. Bayesian switching factor analysis for estimating time-varying functional connectivity in fMRI.

    Science.gov (United States)

    Taghia, Jalil; Ryali, Srikanth; Chen, Tianwen; Supekar, Kaustubh; Cai, Weidong; Menon, Vinod

    2017-07-15

    There is growing interest in understanding the dynamical properties of functional interactions between distributed brain regions. However, robust estimation of temporal dynamics from functional magnetic resonance imaging (fMRI) data remains challenging due to limitations in extant multivariate methods for modeling time-varying functional interactions between multiple brain areas. Here, we develop a Bayesian generative model for fMRI time-series within the framework of hidden Markov models (HMMs). The model is a dynamic variant of the static factor analysis model (Ghahramani and Beal, 2000). We refer to this model as Bayesian switching factor analysis (BSFA) as it integrates factor analysis into a generative HMM in a unified Bayesian framework. In BSFA, brain dynamic functional networks are represented by latent states which are learnt from the data. Crucially, BSFA is a generative model which estimates the temporal evolution of brain states and transition probabilities between states as a function of time. An attractive feature of BSFA is the automatic determination of the number of latent states via Bayesian model selection arising from penalization of excessively complex models. Key features of BSFA are validated using extensive simulations on carefully designed synthetic data. We further validate BSFA using fingerprint analysis of multisession resting-state fMRI data from the Human Connectome Project (HCP). Our results show that modeling temporal dependencies in the generative model of BSFA results in improved fingerprinting of individual participants. Finally, we apply BSFA to elucidate the dynamic functional organization of the salience, central-executive, and default mode networks: three core neurocognitive systems with a central role in cognitive and affective information processing (Menon, 2011).
Across two HCP sessions, we demonstrate a high level of dynamic interactions between these networks and determine that the salience network has the highest temporal

  19. Fracture of functionally graded materials: application to hydrided zircaloy

    International Nuclear Information System (INIS)

    Perales, F.

    2005-12-01

    This thesis is devoted to the dynamic fracture of functionally graded materials. More particularly, it deals with the toughness of nuclear cladding at high burnup submitted to transient loading. The fracture is studied at the local scale using a cohesive zone model in a multi-body approach. The cohesive zone models include frictional contact to take mixed-mode fracture into account. Non-smooth dynamics problems are treated within the Non-Smooth Contact Dynamics (NSCD) framework. A multi-scale study is necessary because of the dimension of the clad. At the microscopic scale, the effective properties of the surface law between each body are obtained by periodic numerical homogenization. A two-field Finite Element formulation is thus written, and an extended formulation of the NSCD framework is obtained. The associated software allows one to simulate, in finite deformation, heterogeneous materials from crack initiation to post-fracture behavior. At the microscopic scale, random RVE calculations are made to determine effective properties. At the macroscopic scale, calculations on part of the clad are made to determine the roles of the mean hydrogen concentration and the hydrogen gradient in the toughness of the clad under dynamic loading. (author)

  20. Randomized Block Cubic Newton Method

    KAUST Repository

    Doikov, Nikita; Richtarik, Peter

    2018-01-01

    We study the problem of minimizing the sum of three convex functions: a differentiable term, a twice-differentiable term and a non-smooth term, in a high-dimensional setting. To this effect we propose and analyze a randomized block cubic Newton (RBCN) method, which in each iteration builds a model of the objective function formed as the sum of the natural models of its three components: a linear model with a quadratic regularizer for the differentiable term, a quadratic model with a cubic regularizer for the twice-differentiable term, and a perfect (proximal) model for the nonsmooth term. In each iteration our method minimizes the model over a random subset of blocks of the search variable. RBCN is the first algorithm with these properties, generalizing several existing methods and matching the best known bounds in all special cases. We establish ${\cal O}(1/\epsilon)$, ${\cal O}(1/\sqrt{\epsilon})$ and ${\cal O}(\log(1/\epsilon))$ rates under different assumptions on the component functions. Lastly, we show numerically that our method outperforms the state-of-the-art on a variety of machine learning problems, including cubically regularized least-squares, logistic regression with constraints, and Poisson regression.

  2. Estimation of Time-Varying Coherence and Its Application in Understanding Brain Functional Connectivity

    Directory of Open Access Journals (Sweden)

    Cheng Liu

    2010-01-01

    Time-varying coherence is a powerful tool for revealing functional dynamics between different regions in the brain. In this paper, we address ways of estimating the evolutionary spectrum and coherence using the general Cohen's class distributions. We show that the Cohen's class-based spectra and the evolutionary spectra defined on locally stationary time series are intimately connected through the kernel functions of the Cohen's class distributions. The time-varying spectra and coherence are further generalized with the Stockwell transform, a multiscale time-frequency representation. The Stockwell measures can be studied in the framework of the Cohen's class distributions with a generalized frequency-dependent kernel function. A magnetoencephalography study using the Stockwell coherence reveals an interesting temporal interaction between the contralateral and ipsilateral motor cortices under a multisource interference task.

  3. 99mTc-GSA dynamic SPECT for regional hepatic functional reserve estimation. Assessment of quantification

    International Nuclear Information System (INIS)

    Hwang, Eui-Hyo

    1999-01-01

    The aim of this study is to assess the physiological implications of the estimated parameters and the clinical value of this analysis method for hepatic functional reserve estimation. After venous injection of 185 MBq of GSA, fifteen sequential sets of SPECT data were acquired over 15 minutes. The first five sets of SPECT images were analyzed by Patlak plot and the hepatic GSA clearance was obtained for each matrix. The sum of the hepatic GSA clearances over all matrices (total hepatic GSA clearance) was calculated as an index of whole-liver functional reserve. Total hepatic GSA clearance was compared with the receptor index and the effective hepatic blood flow (EHBF) of the whole liver, which were obtained by the Direct Integral Linear Least Square Regression (DILS) method, to assess the physiological implications of hepatic GSA clearance. The clinical value of total hepatic GSA clearance was assessed by comparison with conventional hepatic function tests. Very good correlations were observed between total hepatic GSA clearance and the receptor index, whereas the correlations between total hepatic GSA clearance and EHBF were not significant. Significant correlations were also observed between total hepatic GSA clearance and conventional hepatic function tests such as choline esterase, albumin, the hepaplastin test, and ICG R15. (K.H.)
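
    The record applies a Patlak plot per image matrix. The sketch below shows the Patlak linearization on a single synthetic uptake curve: under an irreversible two-compartment model, Ct/Cp plotted against (∫Cp dt)/Cp is a straight line whose slope is the clearance. The curve shapes and parameter values are hypothetical.

```python
import numpy as np

t = np.linspace(0.0, 15.0, 60)            # minutes
Cp = np.exp(-0.2*t) + 0.1                 # hypothetical plasma input curve
icp = np.concatenate(([0.0], np.cumsum(0.5*(Cp[1:] + Cp[:-1])*np.diff(t))))

K_true, V_true = 0.05, 0.3                # clearance (1/min) and distribution volume
Ct = K_true*icp + V_true*Cp               # tissue curve under the Patlak model

# Patlak linearization: Ct/Cp = K * (int Cp dt)/Cp + V  ->  slope K, intercept V
x, y = icp/Cp, Ct/Cp
K_est, V_est = np.polyfit(x, y, 1)
print(K_est, V_est)  # ≈ 0.05, 0.3
```

    In the study, this regression is performed in every SPECT matrix and the per-matrix clearances are summed to give the total hepatic GSA clearance.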

  4. Economic Estimation of the Losses Caused by Surface Water Pollution Accidents in China From the Perspective of Water Bodies’ Functions

    Science.gov (United States)

    Yao, Hong; You, Zhen; Liu, Bo

    2016-01-01

    The number of surface water pollution accidents (abbreviated as SWPAs) has increased substantially in China in recent years. Estimation of economic losses due to SWPAs has been one of the focuses in China and is mentioned many times in the Environmental Protection Law of China promulgated in 2014. From the perspective of water bodies’ functions, pollution accident damages can be divided into eight types: damage to human health, water supply suspension, fishery, recreational functions, biological diversity, environmental property loss, the accident’s origin and other indirect losses. In the valuation of damage to people’s life, the procedure for compensation of traffic accidents in China was used. The functional replacement cost method was used in economic estimation of the losses due to water supply suspension and loss of water’s recreational functions. Damage to biological diversity was estimated by recovery cost analysis and damage to environmental property losses were calculated using pollutant removal costs. As a case study, using the proposed calculation procedure the economic losses caused by the major Songhuajiang River pollution accident that happened in China in 2005 have been estimated at 2263 billion CNY. The estimated economic losses for real accidents can sometimes be influenced by social and political factors, such as data authenticity and accuracy. Besides, one or more aspects in the method might be overestimated, underrated or even ignored. The proposed procedure may be used by decision makers for the economic estimation of losses in SWPAs. Estimates of the economic losses of pollution accidents could help quantify potential costs associated with increased risk sources along lakes/rivers but more importantly, highlight the value of clean water to society as a whole. PMID:26805869

  5. Economic Estimation of the Losses Caused by Surface Water Pollution Accidents in China From the Perspective of Water Bodies' Functions.

    Science.gov (United States)

    Yao, Hong; You, Zhen; Liu, Bo

    2016-01-22

    The number of surface water pollution accidents (abbreviated as SWPAs) has increased substantially in China in recent years. Estimation of economic losses due to SWPAs has been one of the focuses in China and is mentioned many times in the Environmental Protection Law of China promulgated in 2014. From the perspective of water bodies' functions, pollution accident damages can be divided into eight types: damage to human health, water supply suspension, fishery, recreational functions, biological diversity, environmental property loss, the accident's origin and other indirect losses. In the valuation of damage to people's life, the procedure for compensation of traffic accidents in China was used. The functional replacement cost method was used in economic estimation of the losses due to water supply suspension and loss of water's recreational functions. Damage to biological diversity was estimated by recovery cost analysis and damage to environmental property losses were calculated using pollutant removal costs. As a case study, using the proposed calculation procedure the economic losses caused by the major Songhuajiang River pollution accident that happened in China in 2005 have been estimated at 2263 billion CNY. The estimated economic losses for real accidents can sometimes be influenced by social and political factors, such as data authenticity and accuracy. Besides, one or more aspects in the method might be overestimated, underrated or even ignored. The proposed procedure may be used by decision makers for the economic estimation of losses in SWPAs. Estimates of the economic losses of pollution accidents could help quantify potential costs associated with increased risk sources along lakes/rivers but more importantly, highlight the value of clean water to society as a whole.

  6. Economic Estimation of the Losses Caused by Surface Water Pollution Accidents in China From the Perspective of Water Bodies’ Functions

    Directory of Open Access Journals (Sweden)

    Hong Yao

    2016-01-01

    Full Text Available The number of surface water pollution accidents (abbreviated as SWPAs) has increased substantially in China in recent years. Estimation of economic losses due to SWPAs has been one of the focuses in China and is mentioned many times in the Environmental Protection Law of China promulgated in 2014. From the perspective of water bodies’ functions, pollution accident damages can be divided into eight types: damage to human health, water supply suspension, fishery, recreational functions, biological diversity, environmental property loss, the accident’s origin and other indirect losses. In the valuation of damage to people’s life, the procedure for compensation of traffic accidents in China was used. The functional replacement cost method was used in economic estimation of the losses due to water supply suspension and loss of water’s recreational functions. Damage to biological diversity was estimated by recovery cost analysis and damage to environmental property losses were calculated using pollutant removal costs. As a case study, using the proposed calculation procedure the economic losses caused by the major Songhuajiang River pollution accident that happened in China in 2005 have been estimated at 2263 billion CNY. The estimated economic losses for real accidents can sometimes be influenced by social and political factors, such as data authenticity and accuracy. Besides, one or more aspects in the method might be overestimated, underrated or even ignored. The proposed procedure may be used by decision makers for the economic estimation of losses in SWPAs. Estimates of the economic losses of pollution accidents could help quantify potential costs associated with increased risk sources along lakes/rivers but more importantly, highlight the value of clean water to society as a whole.

  7. Estimates for the mixed derivatives of the Green functions on homogeneous manifolds of negative curvature

    Directory of Open Access Journals (Sweden)

    Roman Urban

    2004-12-01

    Full Text Available We consider the Green functions for second-order left-invariant differential operators on homogeneous manifolds of negative curvature, being a semi-direct product of a nilpotent Lie group $N$ and $A=\mathbb{R}^+$. We obtain estimates for mixed derivatives of the Green functions both in the coercive and non-coercive case. The current paper completes the previous results obtained by the author in a series of papers [14,15,16,19].

  8. Dictionary-Based Stochastic Expectation–Maximization for SAR Amplitude Probability Density Function Estimation

    OpenAIRE

    Moser, Gabriele; Zerubia, Josiane; Serpico, Sebastiano B.

    2006-01-01

    International audience; In remotely sensed data analysis, a crucial problem is represented by the need to develop accurate models for the statistics of the pixel intensities. This paper deals with the problem of probability density function (pdf) estimation in the context of synthetic aperture radar (SAR) amplitude data analysis. Several theoretical and heuristic models for the pdfs of SAR data have been proposed in the literature, which have been proved to be effective for different land-cov...

  9. Accounting for animal movement in estimation of resource selection functions: sampling and data analysis.

    Science.gov (United States)

    Forester, James D; Im, Hae Kyung; Rathouz, Paul J

    2009-12-01

    Patterns of resource selection by animal populations emerge as a result of the behavior of many individuals. Statistical models that describe these population-level patterns of habitat use can miss important interactions between individual animals and characteristics of their local environment; however, identifying these interactions is difficult. One approach to this problem is to incorporate models of individual movement into resource selection models. To do this, we propose a model for step selection functions (SSF) that is composed of a resource-independent movement kernel and a resource selection function (RSF). We show that standard case-control logistic regression may be used to fit the SSF; however, the sampling scheme used to generate control points (i.e., the definition of availability) must be accommodated. We used three sampling schemes to analyze simulated movement data and found that ignoring sampling and the resource-independent movement kernel yielded biased estimates of selection. The level of bias depended on the method used to generate control locations, the strength of selection, and the spatial scale of the resource map. Using empirical or parametric methods to sample control locations produced biased estimates under stronger selection; however, we show that the addition of a distance function to the analysis substantially reduced that bias. Assuming a uniform availability within a fixed buffer yielded strongly biased selection estimates that could be corrected by including the distance function but remained inefficient relative to the empirical and parametric sampling methods. As a case study, we used location data collected from elk in Yellowstone National Park, USA, to show that selection and bias may be temporally variable. Because under constant selection the amount of bias depends on the scale at which a resource is distributed in the landscape, we suggest that distance always be included as a covariate in SSF analyses. 
This approach to
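
    The fitting step described above reduces to case-control (conditional) logistic regression over matched sets: each observed step is compared against its own control locations. The sketch below ignores the resource-independent movement kernel and uses a single hypothetical resource covariate; all simulation settings are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
beta_true, n_steps, J = 1.5, 400, 20

# Each stratum: J candidate locations with resource covariate z;
# the used location is drawn with probability proportional to exp(beta*z)
Z = rng.standard_normal((n_steps, J))
probs = np.exp(beta_true*Z)
probs /= probs.sum(axis=1, keepdims=True)
used = np.array([rng.choice(J, p=p) for p in probs])

def neg_cond_loglik(beta):
    # Conditional logit: sum over strata of beta*z_used - log(sum_j exp(beta*z_j))
    lin = beta*Z
    return -(lin[np.arange(n_steps), used]
             - np.log(np.exp(lin).sum(axis=1))).sum()

beta_hat = minimize_scalar(neg_cond_loglik, bounds=(-5, 5), method="bounded").x
print(beta_hat)  # close to the true 1.5
```

    In a real SSF analysis the control locations would be drawn from an empirical or parametric movement kernel, and a distance covariate would be added as the record recommends.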

  10. Robust Improvement in Estimation of a Covariance Matrix in an Elliptically Contoured Distribution Respect to Quadratic Loss Function

    Directory of Open Access Journals (Sweden)

    Z. Khodadadi

    2008-03-01

    Full Text Available Let S be the matrix of the residual sum of squares in the linear model Y = Aβ + e, where the matrix e is distributed as elliptically contoured with unknown scale matrix Σ. In the present work, we consider the problem of estimating Σ with respect to the squared loss function $L(\hat{\Sigma}, \Sigma) = \operatorname{tr}(\hat{\Sigma}\Sigma^{-1} - I)^2$. It is shown that the improvements of the estimators obtained by James and Stein [7] and by Dey and Srinivasan [1] under the normality assumption remain robust under an elliptically contoured distribution with respect to the squared loss function.

  11. Estimation of age- and stage-specific Catalan breast cancer survival functions using US and Catalan survival data

    Science.gov (United States)

    2009-01-01

    Background During the last part of the 1990s the chance of surviving breast cancer increased. Changes in survival functions reflect a mixture of effects. Both the introduction of adjuvant treatments and early screening with mammography played a role in the decline in mortality. Evaluating the contribution of these interventions using mathematical models requires survival functions before and after their introduction. Furthermore, required survival functions may be different by age groups and are related to disease stage at diagnosis. Sometimes detailed information is not available, as was the case for the region of Catalonia (Spain). Then one may derive the functions using information from other geographical areas. This work presents the methodology used to estimate age- and stage-specific Catalan breast cancer survival functions from scarce Catalan survival data by adapting the age- and stage-specific US functions. Methods Cubic splines were used to smooth data and obtain continuous hazard rate functions. Afterwards, we fitted a Poisson model to derive hazard ratios. The model included time as a covariate. Then the hazard ratios were applied to US survival functions detailed by age and stage to obtain Catalan estimations. Results We started estimating the hazard ratios for Catalonia versus the USA before and after the introduction of screening. The hazard ratios were then multiplied by the age- and stage-specific breast cancer hazard rates from the USA to obtain the Catalan hazard rates. We also compared breast cancer survival in Catalonia and the USA in two time periods, before cancer control interventions (USA 1975–79, Catalonia 1980–89) and after (USA and Catalonia 1990–2001). Survival in Catalonia in the 1980–89 period was worse than in the USA during 1975–79, but the differences disappeared in 1990–2001. Conclusion Our results suggest that access to better treatments and quality of care contributed to large improvements in survival in Catalonia. On

  12. Inadmissibility of Usual and Mixed Estimators of Two Ordered Gamma Scale Parameters Under Reflected Gamma Loss Function

    Directory of Open Access Journals (Sweden)

    Z. Meghnatisi

    2009-06-01

    Full Text Available Let Xi1, …, Xini be a random sample from a gamma distribution with known shape parameter νi > 0 and unknown scale parameter βi > 0, i = 1, 2, satisfying 0 < β1 ≤ β2. We consider the class of mixed estimators for the estimation of β1 and β2 under the reflected gamma loss function. It has been shown that the minimum risk equivariant estimator of βi, i = 1, 2, which is admissible when no information on the ordering of the parameters is given, is inadmissible and dominated by a class of mixed estimators when it is known that the parameters are ordered. Also, the inadmissible estimators in the class of mixed estimators are derived. Finally, the results are extended to a subclass of the exponential family.

  13. Monitoring renal function in children with Fabry disease: comparisons of measured and creatinine-based estimated glomerular filtration rate

    NARCIS (Netherlands)

    Tøndel, Camilla; Ramaswami, Uma; Aakre, Kristin Moberg; Wijburg, Frits; Bouwman, Machtelt; Svarstad, Einar

    2010-01-01

    Studies on renal function in children with Fabry disease have mainly been done using estimated creatinine-based glomerular filtration rate (GFR). The aim of this study was to compare estimated creatinine-based GFR (eGFR) with measured GFR (mGFR) in children with Fabry disease and normal renal

  14. Modulating Functions Based Algorithm for the Estimation of the Coefficients and Differentiation Order for a Space-Fractional Advection-Dispersion Equation

    KAUST Repository

    Aldoghaither, Abeer

    2015-12-01

    In this paper, a new method, based on the so-called modulating functions, is proposed to estimate average velocity, dispersion coefficient, and differentiation order in a space-fractional advection-dispersion equation, where the average velocity and the dispersion coefficient are space-varying. First, the average velocity and the dispersion coefficient are estimated by applying the modulating functions method, where the problem is transformed into a linear system of algebraic equations. Then, the modulating functions method combined with a Newton's iteration algorithm is applied to estimate the coefficients and the differentiation order simultaneously. The local convergence of the proposed method is proved. Numerical results are presented with noisy measurements to show the effectiveness and robustness of the proposed method. It is worth mentioning that this method can be extended to general fractional partial differential equations.
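
    The core trick of the modulating functions method is to multiply the PDE by a function that vanishes (with its derivatives) at the interval endpoints, so that integration by parts shifts derivatives from the unknown solution onto the known modulating function, leaving algebraic equations in the parameters. The sketch below applies it to the much simpler integer-order transport equation u_t + a·u_x = 0 to recover the velocity a; the field, sensor spacing, and modulating function are all hypothetical.

```python
import numpy as np

a_true, T, x0 = 1.3, 2.0, 0.7
t = np.linspace(0.0, T, 2001)
trap = lambda f: float(np.sum(0.5*(f[1:] + f[:-1])*np.diff(t)))  # trapezoid rule

u = np.sin(x0 - a_true*t)                 # "measured" u(x0, t), solves u_t + a*u_x = 0
dx = 1e-4                                 # u_x from (hypothetical) nearby measurements
u_x = (np.sin(x0 + dx - a_true*t) - np.sin(x0 - dx - a_true*t)) / (2*dx)

phi  = t**2 * (T - t)**2                  # modulating function: phi(0) = phi(T) = 0
dphi = 2*t*(T - t)**2 - 2*t**2*(T - t)

# Multiply the PDE by phi, integrate over [0, T], integrate by parts in t:
#   -∫ phi' u dt + a ∫ phi u_x dt = 0   =>   a = ∫ phi' u dt / ∫ phi u_x dt
a_est = trap(dphi*u) / trap(phi*u_x)
print(a_est)  # ≈ 1.3
```

    No time derivative of the noisy data is ever computed, which is what makes the approach robust to measurement noise; the fractional-order case in the record follows the same pattern with fractional integration by parts.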

  15. Modulating Functions Based Algorithm for the Estimation of the Coefficients and Differentiation Order for a Space-Fractional Advection-Dispersion Equation

    KAUST Repository

    Aldoghaither, Abeer; Liu, Da-Yan; Laleg-Kirati, Taous-Meriem

    2015-01-01

    In this paper, a new method, based on the so-called modulating functions, is proposed to estimate average velocity, dispersion coefficient, and differentiation order in a space-fractional advection-dispersion equation, where the average velocity and the dispersion coefficient are space-varying. First, the average velocity and the dispersion coefficient are estimated by applying the modulating functions method, where the problem is transformed into a linear system of algebraic equations. Then, the modulating functions method combined with a Newton's iteration algorithm is applied to estimate the coefficients and the differentiation order simultaneously. The local convergence of the proposed method is proved. Numerical results are presented with noisy measurements to show the effectiveness and robustness of the proposed method. It is worth mentioning that this method can be extended to general fractional partial differential equations.

  16. Functional soil microbial diversity across Europe estimated by EEA, MicroResp and BIOLOG

    DEFF Research Database (Denmark)

    Winding, Anne; Rutgers, Michiel; Creamer, Rachel

    Soil microorganisms are abundant and essential for the bio-geochemical processes of soil, soil quality and soil ecosystem services. All this is dependent on the actual functions the microbial communities are performing in the soil. Measuring soil respiration has for many years been the basis of estimating soil microbial activity. However, today several techniques are in use for determining microbial functional diversity and assessing soil biodiversity: methods based on CO2 development by the microbes, such as substrate-induced respiration (SIR) on specific substrates, have led to the development … consisting of 81 soil samples covering five Biogeographical Zones and three land-uses, in order to test the sensitivity, ease and cost of performance, and biological significance of the data output. The techniques vary in how close they are to in situ functions and in their dependency on growth during incubation …

  17. Modulating functions-based method for parameters and source estimation in one-dimensional partial differential equations

    KAUST Repository

    Asiri, Sharefa M.

    2016-10-20

    In this paper, modulating functions-based method is proposed for estimating space–time-dependent unknowns in one-dimensional partial differential equations. The proposed method simplifies the problem into a system of algebraic equations linear in unknown parameters. The well-posedness of the modulating functions-based solution is proved. The wave and the fifth-order KdV equations are used as examples to show the effectiveness of the proposed method in both noise-free and noisy cases.

  18. hp Spectral element methods for three dimensional elliptic problems

    Indian Academy of Sciences (India)

    elliptic boundary value problems on non-smooth domains in R3. For Dirichlet problems, … of variable degree bounded by W. Let N denote the number of layers in the geometric mesh … We prove a stability theorem for mixed problems when the spectral element functions vanish … Applying Theorem 3.1, …

  19. An improved parameter estimation and comparison for soft tissue constitutive models containing an exponential function.

    Science.gov (United States)

    Aggarwal, Ankush

    2017-08-01

    Motivated by the well-known result that stiffness of soft tissue is proportional to the stress, many of the constitutive laws for soft tissues contain an exponential function. In this work, we analyze properties of the exponential function and how it affects the estimation and comparison of elastic parameters for soft tissues. In particular, we find that as a consequence of the exponential function there are lines of high covariance in the elastic parameter space. As a result, one can have widely varying mechanical parameters defining the tissue stiffness but similar effective stress-strain responses. Drawing from elementary algebra, we propose simple changes in the norm and the parameter space, which significantly improve the convergence of parameter estimation and robustness in the presence of noise. More importantly, we demonstrate that these changes improve the conditioning of the problem and provide a more robust solution in the case of heterogeneous material by reducing the chances of getting trapped in local minima. Based upon this new insight, we also propose a transformed parameter space which will allow for rational parameter comparison and avoid misleading conclusions regarding soft tissue mechanics.
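
    The high-covariance lines can be demonstrated with a minimal sketch, assuming a hypothetical one-term exponential law σ(ε) = c1·(exp(c2·ε) − 1): perturbing c2 by 10% and refitting c1 by least squares shifts c1 by about 20%, yet the two stress-strain curves remain nearly indistinguishable over the strain range.

```python
import numpy as np

eps = np.linspace(0.0, 0.2, 100)               # strain range (hypothetical)
stress = lambda c1, c2: c1*(np.exp(c2*eps) - 1.0)

sig = stress(1.0, 10.0)                        # "true" parameters c1 = 1.0, c2 = 10

# Perturb c2 by 10% and refit c1 by least squares along the covariance line
basis = np.exp(11.0*eps) - 1.0
c1_alt = (sig @ basis) / (basis @ basis)
sig_alt = c1_alt * basis

err = np.sqrt(np.mean((sig - sig_alt)**2)) / sig.max()
print(c1_alt, err)   # parameters differ markedly, curves almost coincide
```

    This is exactly why naive parameter-by-parameter comparison across specimens can mislead, and why the record argues for a transformed parameter space.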

  20. Inadmissibility of Usual and Mixed Estimators of Two Ordered Gamma Scale Parameters Under Reflected Gamma Loss Function

    OpenAIRE

    Z. Meghnatisi; N. Nematollahi

    2009-01-01

    Let Xi1, …, Xini be a random sample from a gamma distribution with known shape parameter νi > 0 and unknown scale parameter βi > 0, i = 1, 2, satisfying 0 < β1 ≤ β2. We consider the class of mixed estimators for the estimation of β1 and β2 under the reflected gamma loss function. It has been shown that the minimum risk equivariant estimator of βi, i = 1, 2, which is admissible when no information on the ordering of the parameters is given, is inadmissible and dominated by a cla...

  1. On the pth moment estimates of solutions to stochastic functional differential equations in the G-framework.

    Science.gov (United States)

    Faizullah, Faiz

    2016-01-01

    The aim of the current paper is to present the path-wise and moment estimates for solutions to stochastic functional differential equations with a non-linear growth condition in the framework of G-expectation and G-Brownian motion. Under the non-linear growth condition, the pth moment estimates for solutions to SFDEs driven by G-Brownian motion are proved. The properties of G-expectations, Hölder's inequality, Bihari's inequality, Gronwall's inequality and the Burkholder-Davis-Gundy inequalities are used to develop the above-mentioned theory. In addition, the path-wise asymptotic estimates and the continuity of the pth moment for solutions to SFDEs in the G-framework under the non-linear growth condition are shown.

  2. Nonparametric Estimation of Cumulative Incidence Functions for Competing Risks Data with Missing Cause of Failure

    DEFF Research Database (Denmark)

    Effraimidis, Georgios; Dahl, Christian Møller

    In this paper, we develop a fully nonparametric approach for the estimation of the cumulative incidence function with Missing At Random right-censored competing risks data. We obtain results on the pointwise asymptotic normality as well as the uniform convergence rate of the proposed nonparametric...

  3. An improved analysis of gravity drainage experiments for estimating the unsaturated soil hydraulic functions

    Science.gov (United States)

    Sisson, James B.; van Genuchten, Martinus Th.

    1991-04-01

    The unsaturated hydraulic properties are important parameters in any quantitative description of water and solute transport in partially saturated soils. Currently, most in situ methods for estimating the unsaturated hydraulic conductivity (K) are based on analyses that require estimates of the soil water flux and the pressure head gradient. These analyses typically involve differencing of field-measured pressure head (h) and volumetric water content (θ) data, a process that can significantly amplify instrumental and measurement errors. More reliable methods result when differencing of field data can be avoided. One such method is based on estimates of the gravity drainage curve K'(θ) = dK/dθ which may be computed from observations of θ and/or h during the drainage phase of infiltration drainage experiments assuming unit gradient hydraulic conditions. The purpose of this study was to compare estimates of the unsaturated soil hydraulic functions on the basis of different combinations of field data θ, h, K, and K'. Five different data sets were used for the analysis: (1) θ-h, (2) K-θ, (3) K'-θ (4) K-θ-h, and (5) K'-θ-h. The analysis was applied to previously published data for the Norfolk, Troup, and Bethany soils. The K-θ-h and K'-θ-h data sets consistently produced nearly identical estimates of the hydraulic functions. The K-θ and K'-θ data also resulted in similar curves, although results in this case were less consistent than those produced by the K-θ-h and K'-θ-h data sets. We conclude from this study that differencing of field data can be avoided and hence that there is no need to calculate soil water fluxes and pressure head gradients from inherently noisy field-measured θ and h data. The gravity drainage analysis also provides results over a much broader range of hydraulic conductivity values than is possible with the more standard instantaneous profile analysis, especially when augmented with independently measured soil water retention data.
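
    A minimal sketch of the unit-gradient (Sisson-type) drainage analysis, under the assumption (stated here, not taken from the record) that each water content θ propagates downward at speed dK/dθ, so a given θ is observed at depth z at time t = z/K'(θ). With a hypothetical power-law conductivity K(θ) = Ks·θ^b, the field estimate K' = z/t recovers both parameters from a log-log fit, with no differencing of θ or h data.

```python
import numpy as np

Ks, b, z = 10.0, 4.0, 100.0        # cm/d, exponent, observation depth (hypothetical)
theta = np.linspace(0.15, 0.35, 10)

Kprime = Ks*b*theta**(b - 1)       # true dK/dtheta for K = Ks*theta**b
t_obs = z / Kprime                 # kinematic-wave arrival times during drainage

# Field analysis: estimate K'(theta) = z/t, then fit the power law in log space
Kp_est = z / t_obs
slope, intercept = np.polyfit(np.log(theta), np.log(Kp_est), 1)
b_est = slope + 1.0                # log K' = log(Ks*b) + (b-1) log(theta)
Ks_est = np.exp(intercept) / b_est
print(b_est, Ks_est)  # ≈ 4.0, 10.0
```

    Combining these K'-θ pairs with independently measured θ-h retention data corresponds to the K'-θ-h data set compared in the study.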

  4. Estimation of time- and state-dependent delays and other parameters in functional differential equations

    Science.gov (United States)

    Murphy, K. A.

    1990-01-01

    A parameter estimation algorithm is developed which can be used to estimate unknown time- or state-dependent delays and other parameters (e.g., initial condition) appearing within a nonlinear nonautonomous functional differential equation. The original infinite dimensional differential equation is approximated using linear splines, which are allowed to move with the variable delay. The variable delays are approximated using linear splines as well. The approximation scheme produces a system of ordinary differential equations with nice computational properties. The unknown parameters are estimated within the approximating systems by minimizing a least-squares fit-to-data criterion. Convergence theorems are proved for time-dependent delays and state-dependent delays within two classes, which say essentially that fitting the data by using approximations will, in the limit, provide a fit to the data using the original system. Numerical test examples are presented which illustrate the method for all types of delay.
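
    The record's spline-based scheme is elaborate; the least-squares fit-to-data idea behind it can be sketched much more crudely for a constant delay. Below, a hypothetical delay equation x'(t) = −x(t − τ) with history x(t) = 1 for t ≤ 0 is simulated by Euler's method for candidate delays, and τ is estimated by minimizing the squared misfit to "observations" generated with the true delay.

```python
import numpy as np

def simulate(tau, T=5.0, dt=0.001):
    """Euler simulation of x'(t) = -x(t - tau), with history x(t) = 1 for t <= 0."""
    n = int(T/dt)
    lag = int(round(tau/dt))
    x = np.ones(n + 1)
    for k in range(n):
        x_del = x[k - lag] if k >= lag else 1.0   # delayed state (history before t = tau)
        x[k + 1] = x[k] - dt*x_del
    return x

data = simulate(0.8)                              # "observations", true tau = 0.8
taus = np.arange(0.5, 1.11, 0.01)
sse = [np.sum((simulate(tau) - data)**2) for tau in taus]
tau_hat = taus[int(np.argmin(sse))]
print(tau_hat)  # ≈ 0.8
```

    The approximation scheme in the record replaces this brute-force grid search with moving spline approximations of both the state and the (possibly time- or state-dependent) delay, with proven convergence.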

  5. Two Approaches to Estimating the Effect of Parenting on the Development of Executive Function in Early Childhood

    Science.gov (United States)

    Blair, Clancy; Raver, C. Cybele; Berry, Daniel J.

    2014-01-01

    In the current article, we contrast 2 analytical approaches to estimating the relation of parenting to executive function development in a sample of 1,292 children assessed longitudinally between 36 and 60 months of age. Children were administered a newly developed and validated battery of 6 executive function tasks tapping inhibitory…

  6. Joint entropy for space and spatial frequency domains estimated from psychometric functions of achromatic discrimination.

    Science.gov (United States)

    Silveira, Vladímir de Aquino; Souza, Givago da Silva; Gomes, Bruno Duarte; Rodrigues, Anderson Raiol; Silveira, Luiz Carlos de Lima

    2014-01-01

    We used psychometric functions to estimate the joint entropy for space discrimination and spatial frequency discrimination. Space discrimination was taken as discrimination of spatial extent. Seven subjects were tested. Gábor functions comprising unidimensional sinusoidal gratings (0.4, 2, and 10 cpd) and bidimensional Gaussian envelopes (1°) were used as reference stimuli. The experiment comprised the comparison between reference and test stimuli that differed in the grating's spatial frequency or the envelope's standard deviation. We tested 21 different envelope standard deviations around the reference standard deviation to study spatial extent discrimination and 19 different grating spatial frequencies around the reference spatial frequency to study spatial frequency discrimination. Two series of psychometric functions were obtained for 2%, 5%, 10%, and 100% stimulus contrast. The psychometric function data points for spatial extent discrimination or spatial frequency discrimination were fitted with Gaussian functions using the least square method, and the spatial extent and spatial frequency entropies were estimated from the standard deviation of these Gaussian functions. Then, joint entropy was obtained by multiplying the square root of the space extent entropy times the spatial frequency entropy. We compared our results to the theoretical minimum for unidimensional Gábor functions, 1/4π or 0.0796. At low and intermediate spatial frequencies and high contrasts, joint entropy reached levels below the theoretical minimum, suggesting non-linear interactions between two or more visual mechanisms. We concluded that non-linear interactions of visual pathways, such as the M and P pathways, could explain joint entropy values below the theoretical minimum at low and intermediate spatial frequencies and high contrasts. These non-linear interactions might be at work at intermediate and high contrasts at all spatial frequencies once there was a substantial decrease in joint
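
    The theoretical minimum 1/(4π) ≈ 0.0796 cited above is the space-frequency uncertainty bound attained by a Gaussian envelope. The sketch below checks it numerically, defining the spreads as standard deviations of the normalized energy densities |g(x)|² and |G(f)|²; the grid sizes are arbitrary.

```python
import numpy as np

# Sample a Gaussian window (the Gabor envelope) densely
sigma, L, n = 1.0, 40.0, 2**14
x = np.linspace(-L/2, L/2, n, endpoint=False)
g = np.exp(-x**2/(2*sigma**2))

# Spatial spread: std of the energy density |g(x)|^2
px = g**2 / np.sum(g**2)
dx_spread = np.sqrt(np.sum(px*x**2))

# Frequency spread: std of |G(f)|^2 via the FFT (f in cycles per unit x)
G = np.fft.fftshift(np.fft.fft(g))
f = np.fft.fftshift(np.fft.fftfreq(n, d=L/n))
pf = np.abs(G)**2 / np.sum(np.abs(G)**2)
df_spread = np.sqrt(np.sum(pf*f**2))

print(dx_spread*df_spread, 1/(4*np.pi))  # both ≈ 0.0796
```

    Empirical joint-entropy values below this bound, as reported in the record, therefore cannot come from a single linear filter, which is what motivates the authors' non-linear interaction interpretation.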

  7. 78 FR 17076 - Airworthiness Directives; Eurocopter France Helicopters

    Science.gov (United States)

    2013-03-20

    ...) swash-plate upper bearing (bearing) for a non-smooth point (friction point). This AD was prompted by a report of the premature deterioration of the MRH bearing of the rotating star installed on a Model... the MRH bearing for a non-smooth point (friction point), and if there is a friction point in the...

  8. Estimating the cost of improving quality in electricity distribution: A parametric distance function approach

    International Nuclear Information System (INIS)

    Coelli, Tim J.; Gautier, Axel; Perelman, Sergio; Saplacan-Pop, Roxana

    2013-01-01

    The quality of electricity distribution is being more and more scrutinized by regulatory authorities, with explicit reward and penalty schemes based on quality targets having been introduced in many countries. It is then of prime importance to know the cost of improving the quality for a distribution system operator. In this paper, we focus on one dimension of quality, the continuity of supply, and we estimated the cost of preventing power outages. For that, we make use of the parametric distance function approach, assuming that outages enter in the firm production set as an input, an imperfect substitute for maintenance activities and capital investment. This allows us to identify the sources of technical inefficiency and the underlying trade-off faced by operators between quality and other inputs and costs. For this purpose, we use panel data on 92 electricity distribution units operated by ERDF (Electricité de France - Réseau Distribution) in the 2003–2005 financial years. Assuming a multi-output multi-input translog technology, we estimate that the cost of preventing one interruption is equal to 10.7€ for an average DSO. Furthermore, as one would expect, marginal quality improvements tend to be more expensive as quality itself improves. - Highlights: ► We estimate the implicit cost of outages for the main distribution company in France. ► For this purpose, we make use of a parametric distance function approach. ► Marginal quality improvements tend to be more expensive as quality itself improves. ► The cost of preventing one interruption varies from 1.8 € to 69.2 € (2005 prices). ► We estimate that, on average, it lies 33% above the regulated price of quality.

  9. Estimation of the POD function and the LOD of a qualitative microbiological measurement method.

    Science.gov (United States)

    Wilrich, Cordula; Wilrich, Peter-Theodor

    2009-01-01

    Qualitative microbiological measurement methods in which the measurement results are either 0 (microorganism not detected) or 1 (microorganism detected) are discussed. The performance of such a measurement method is described by its probability of detection as a function of the contamination (CFU/g or CFU/mL) of the test material, or by the LOD(p), i.e., the contamination that is detected (measurement result 1) with a specified probability p. A complementary log-log model was used to statistically estimate these performance characteristics. An intralaboratory experiment for the detection of Listeria monocytogenes in various food matrixes illustrates the method. The estimate of LOD50% is compared with the Spearman-Kaerber method.
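
    Because the complementary log-log link inverts in closed form, the LOD for any probability p follows directly from the fitted model coefficients. A minimal sketch (the intercept a and slope b stand in for values fitted to detection data; they are illustrative, not taken from the paper):

```python
import math

def pod(x, a, b):
    """Probability of detection at contamination x (CFU/g) under a
    complementary log-log model: POD(x) = 1 - exp(-exp(a + b*log(x)))."""
    return 1.0 - math.exp(-math.exp(a + b * math.log(x)))

def lod(p, a, b):
    """Contamination level detected with probability p (the LOD_p),
    obtained by inverting the complementary log-log link."""
    return math.exp((math.log(-math.log(1.0 - p)) - a) / b)
```

    By construction, pod(lod(p, a, b), a, b) returns p, so LOD50% is simply lod(0.5, a, b) once a and b have been estimated.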

  10. Cerebral function estimation using electro-encephalography for the patients with brain tumor managed by radiotherapy

    International Nuclear Information System (INIS)

    Mariya, Yasushi; Saito, Fumio; Kimura, Tamaki

    1999-01-01

    Cerebral function of 12 patients with brain tumors managed by radiotherapy was serially estimated using electroencephalography (EEG), and the results were compared with tumor responses, analyzed by magnetic resonance imaging (MRI), and with clinical courses. After radiotherapy, EEG findings were improved in 7 patients, unchanged in 3, and worsened in 1. Clinical courses were generally correlated with serial changes in EEG findings and tumor responses. However, in 3 patients, clinical courses were explained better by EEG findings than by tumor responses. It is suggested that the combination of EEG and image analysis is clinically useful for comprehensive estimation of radiotherapeutic effects. (author)

  11. On the expected value and variance for an estimator of the spatio-temporal product density function

    DEFF Research Database (Denmark)

    Rodríguez-Corté, Francisco J.; Ghorbani, Mohammad; Mateu, Jorge

    Second-order characteristics are used to analyse the spatio-temporal structure of the underlying point process, and thus these methods provide a natural starting point for the analysis of spatio-temporal point process data. We restrict our attention to the spatio-temporal product density function, and develop a non-parametric edge-corrected kernel estimate of the product density under the second-order intensity-reweighted stationary hypothesis. The expectation and variance of the estimator are obtained, and closed form expressions derived under the Poisson case. A detailed simulation study is presented to compare our closed form expression for the variance with estimated ones for Poisson cases. The simulation experiments show that the theoretical form for the variance gives acceptable values, which can be used in practice. Finally, we apply the resulting estimator to data on the spatio-temporal distribution...

  12. Cost function estimates, scale economies and technological progress in the Turkish electricity generation sector

    International Nuclear Information System (INIS)

    Ali Akkemik, K.

    2009-01-01

    The Turkish electricity sector has undergone significant institutional changes since 1984. The recent developments since 2001, including the setting up of a regulatory agency to regulate the sector and the increasing participation of private investors in electricity generation, are of special interest. This paper estimates cost functions and investigates the degree of scale economies, overinvestment, and technological progress in the Turkish electricity generation sector for the period 1984-2006 using long-run and short-run translog cost functions. Estimations were done for six groups of firms, public and private. The results indicate the existence of scale economies throughout the period of analysis, and hence declining long-run average costs. The paper finds empirical support for the Averch-Johnson effect until 2001, i.e., firms overinvested in an environment where there were excess returns to capital. But this effect was largely reduced after 2002. Technological progress deteriorated slightly from 1984-1993 to 1994-2001 but improved after 2002. Overall, the paper finds that regulation of the market under the newly established regulatory agency after 2002 was effective and that there are potential gains from such regulation. (author)

  13. Estimating the parameters of stochastic differential equations using a criterion function based on the Kolmogorov-Smirnov statistic

    OpenAIRE

    McDonald, A. David; Sandal, Leif Kristoffer

    1998-01-01

    Estimation of parameters in the drift and diffusion terms of stochastic differential equations involves simulation and generally requires substantial data sets. We examine a method that can be applied when available time series are limited to less than 20 observations per replication. We compare and contrast parameter estimation for linear and nonlinear first-order stochastic differential equations using two criterion functions: one based on a Chi-square statistic, put forward by Hurn and Lin...
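
    The two building blocks of such a simulation-based criterion can be sketched in simplified form: an Euler-Maruyama path simulator for the SDE and a two-sample Kolmogorov-Smirnov statistic comparing simulated with observed values. This is a stand-in for the authors' criterion functions, which are not reproduced here:

```python
import math
import random

def euler_maruyama(x0, mu, sigma, dt, n, rng):
    """Simulate one path of dX = mu(X) dt + sigma(X) dW by Euler-Maruyama."""
    x = x0
    path = [x0]
    for _ in range(n):
        x += mu(x) * dt + sigma(x) * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap between
    the two empirical CDFs (O(n^2) here, adequate for small samples)."""
    a, b = sorted(sample_a), sorted(sample_b)
    d = 0.0
    for x in a + b:
        fa = sum(1 for v in a if v <= x) / len(a)
        fb = sum(1 for v in b if v <= x) / len(b)
        d = max(d, abs(fa - fb))
    return d
```

    Parameter estimation would then minimize the KS statistic between observed data and paths simulated at candidate parameter values.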

  14. A functional-type a posteriori error estimate of approximate solutions for Reissner-Mindlin plates and its implementation

    Science.gov (United States)

    Frolov, Maxim; Chistiakova, Olga

    2017-06-01

    The paper is devoted to a numerical justification of a recent a posteriori error estimate for Reissner-Mindlin plates. This majorant provides reliable control of the accuracy of any conforming approximate solution of the problem, including solutions obtained with commercial software for mechanical engineering. The estimate is developed on the basis of the functional approach and is applicable to several types of boundary conditions. To verify the approach, numerical examples with mesh refinements are provided.

  15. An Energy-Based Limit State Function for Estimation of Structural Reliability in Shock Environments

    Directory of Open Access Journals (Sweden)

    Michael A. Guthrie

    2013-01-01

    Full Text Available A limit state function is developed for the estimation of structural reliability in shock environments. This limit state function uses peak modal strain energies to characterize environmental severity and modal strain energies at failure to characterize the structural capacity. The Hasofer-Lind reliability index is briefly reviewed and its computation for the energy-based limit state function is discussed. Applications to two-degree-of-freedom mass-spring systems and to a simple finite element model are considered. For these examples, computation of the reliability index requires little effort beyond a modal analysis, but still accounts for relevant uncertainties in both the structure and the environment. For both examples, the reliability index is observed to agree well with the results of Monte Carlo analysis. In situations where fast, qualitative comparison of several candidate designs is required, the reliability index based on the proposed limit state function provides an attractive metric which can be used to compare and control reliability.
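
    For a limit state that is linear in standard normal variables, the Hasofer-Lind index reduces to the distance from the origin to the failure plane g(U) = 0, with failure when g < 0. A generic sketch of that special case (not the paper's energy-based limit state, which additionally requires modal strain energies from a modal analysis):

```python
import math

def hasofer_lind_linear(a, b):
    """Hasofer-Lind reliability index for a linear limit state
    g(U) = b + sum(a_i * U_i) in standard normal space: the shortest
    distance from the origin to the failure surface g(U) = 0."""
    norm = math.sqrt(sum(ai * ai for ai in a))
    return b / norm
```

    For nonlinear limit states the index is instead found by an iterative search for the most probable failure point, but the geometric interpretation is the same.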

  16. Estimating the small-x exponent of the structure function g1NS from the Bjorken sum rule

    International Nuclear Information System (INIS)

    Knauf, Anke; Meyer-Hermann, Michael; Soff, Gerhard

    2002-01-01

    We present a new estimate of the exponent governing the small-x behavior of the nonsinglet structure function g1^(p-n), derived under the assumption that the Bjorken sum rule is valid. We use the world average of α_s and the NNNLO QCD corrections to the Bjorken sum rule. The structure function g1^(NS) is found to be clearly divergent for small x.

  17. Moving-Horizon Modulating Functions-Based Algorithm for Online Source Estimation in a First Order Hyperbolic PDE

    KAUST Repository

    Asiri, Sharefa M.; Elmetennani, Shahrazed; Laleg-Kirati, Taous-Meriem

    2017-01-01

    In this paper, an on-line estimation algorithm of the source term in a first order hyperbolic PDE is proposed. This equation describes heat transport dynamics in concentrated solar collectors where the source term represents the received energy. This energy depends on the solar irradiance intensity and the collector characteristics affected by the environmental changes. Control strategies are usually used to enhance the efficiency of heat production; however, these strategies often depend on the source term which is highly affected by the external working conditions. Hence, efficient source estimation methods are required. The proposed algorithm is based on modulating functions method where a moving horizon strategy is introduced. Numerical results are provided to illustrate the performance of the proposed estimator in open and closed loops.
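
    The core trick of a modulating functions method, multiplying by a function that vanishes at both ends of the window and integrating by parts so that the measured signal never needs differentiating, can be illustrated on the simplest case of a constant source s in dT/dt = s. The window [0, 1] and the choice phi(t) = t(1 - t) below are illustrative, not the paper's setup:

```python
def estimate_source(times, T, phi, dphi):
    """Modulating-functions estimate of a constant source s in dT/dt = s.
    With phi(0) = phi(1) = 0, integration by parts gives
    s = -integral(phi' * T) / integral(phi), avoiding differentiation of T."""
    def trapz(vals):
        # composite trapezoidal rule on the (possibly non-uniform) grid
        return sum((a + b) / 2 * (t1 - t0)
                   for a, b, t0, t1 in zip(vals, vals[1:], times, times[1:]))
    num = -trapz([dphi(t) * x for t, x in zip(times, T)])
    den = trapz([phi(t) for t in times])
    return num / den
```

    A moving-horizon variant, as in the paper, would repeat this computation on a sliding window to track a time-varying source.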

  19. Min-max Extrapolation Scheme for Fast Estimation of 3D Potts Field Partition Functions. Application to the Joint Detection-Estimation of Brain Activity in fMRI

    International Nuclear Information System (INIS)

    Risser, L.; Vincent, T.; Ciuciu, P.; Risser, L.; Idier, J.; Risser, L.; Forbes, F.

    2011-01-01

    In this paper, we propose a fast numerical scheme to estimate Partition Functions (PF) of symmetric Potts fields. Our strategy is first validated on 2D two-color Potts fields and then on 3D two- and three-color Potts fields. It is then applied to the joint detection-estimation of brain activity from functional Magnetic Resonance Imaging (fMRI) data, where the goal is to automatically recover activated, deactivated and inactivated brain regions and to estimate region dependent hemodynamic filters. For any brain region, a specific 3D Potts field indeed embodies the spatial correlation over the hidden states of the voxels by modeling whether they are activated, deactivated or inactive. To make spatial regularization adaptive, the PFs of the Potts fields over all brain regions are computed prior to the brain activity estimation. Our approach is first based upon a classical path-sampling method to approximate a small subset of reference PFs corresponding to pre-specified regions. Then, we propose an extrapolation method that allows us to approximate the PFs associated to the Potts fields defined over the remaining brain regions. In comparison with preexisting methods either based on a path sampling strategy or mean-field approximations, our contribution strongly alleviates the computational cost and makes spatially adaptive regularization of whole brain fMRI datasets feasible. It is also robust against grid inhomogeneities and efficient irrespective of the topological configurations of the brain regions. (authors)
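
    For Potts fields small enough to enumerate, the partition function can be computed exactly by brute force; such exact values are useful as ground truth when validating fast approximation schemes like the path-sampling and extrapolation strategy described above. This sketch is not the authors' method:

```python
import itertools
import math

def potts_log_pf(n, q, beta, edges):
    """Exact log partition function of a small q-color Potts field with n
    sites by brute-force enumeration: Z = sum over colorings of
    exp(beta * #{edges whose endpoints share a color})."""
    z = 0.0
    for s in itertools.product(range(q), repeat=n):
        agree = sum(1.0 for i, j in edges if s[i] == s[j])
        z += math.exp(beta * agree)
    return math.log(z)
```

    The cost grows as q^n, so this is only feasible for a handful of sites, which is precisely why fast approximation schemes are needed for whole-brain fields.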

  20. The Reliability Estimation for the Open Function of Cabin Door Affected by the Imprecise Judgment Corresponding to Distribution Hypothesis

    Science.gov (United States)

    Yu, Z. P.; Yue, Z. F.; Liu, W.

    2018-05-01

    With the development of artificial intelligence, more and more reliability experts have noticed the role of subjective information in the reliability design of complex systems. Therefore, based on a certain number of experimental data and expert judgments, we have divided reliability estimation based on a distribution hypothesis into a cognition process and a reliability calculation. Consequently, to illustrate this modification, we have taken information fusion based on intuitionistic fuzzy belief functions as the diagnosis model of the cognition process, and completed the reliability estimation for the open function of a cabin door affected by the imprecise judgment corresponding to the distribution hypothesis.

  1. Variational estimate of the vacuum state of the SU(2) lattice gauge theory with a disordered trial wave function

    International Nuclear Information System (INIS)

    Heys, D.W.; Stump, D.R.

    1984-01-01

    The variational principle is used to estimate the ground state of the Kogut-Susskind Hamiltonian of the SU(2) lattice gauge theory, with a trial wave function for which the magnetic fields on different plaquettes are uncorrelated. This trial function describes a disordered state. The energy expectation value is evaluated by a Monte Carlo method. The variational results are compared to similar results for a related Abelian gauge theory. Also, the expectation value of the Wilson loop operator is computed for the trial state, and the resulting estimate of the string tension is compared to the prediction of asymptotic freedom

  2. A canonical process for estimation of convex functions : The "invelope" of integrated Brownian motion +t4

    NARCIS (Netherlands)

    Groeneboom, P.; Jongbloed, G.; Wellner, J.A.

    2001-01-01

    A process associated with integrated Brownian motion is introduced that characterizes the limit behavior of nonparametric least squares and maximum likelihood estimators of convex functions and convex densities, respectively. We call this process “the invelope” and show that it is an almost surely

  3. Complex dynamics analysis of impulsively coupled Duffing oscillators with ring structure

    International Nuclear Information System (INIS)

    Jiang Hai-Bo; Zhang Li-Ping; Yu Jian-Jiang

    2015-01-01

    Impulsively coupled systems are high-dimensional non-smooth systems that can exhibit rich and complex dynamics. This paper studies the complex dynamics of a non-smooth system which is unidirectionally impulsively coupled by three Duffing oscillators in a ring structure. By constructing a proper Poincaré map of the non-smooth system, an analytical expression of the Jacobian matrix of Poincaré map is given. Two-parameter Hopf bifurcation sets are obtained by combining the shooting method and the Runge–Kutta method. When the period is fixed and the coupling strength changes, the system undergoes stable, periodic, quasi-periodic, and hyper-chaotic solutions, etc. Floquet theory is used to study the stability of the periodic solutions of the system and their bifurcations. (paper)

  4. Estimating a Smooth Common Transfer Function with a Panel of Time Series - Inflow of Larvae Cod as an Example

    Directory of Open Access Journals (Sweden)

    Elizabeth Hansen

    2012-07-01

    Full Text Available The annual response variable in an ecological monitoring study often relates linearly to the weighted cumulative effect of some daily covariate, after adjusting for other annual covariates. Here we consider the problem of non-parametrically estimating the weights involved in computing the aforementioned cumulative effect, with a panel of short and contemporaneously correlated time series whose responses share the common cumulative effect of a daily covariate. The sequence of (unknown) daily weights constitutes the so-called transfer function. Specifically, we consider the problem of estimating a smooth common transfer function shared by a panel of short time series that are contemporaneously correlated. We propose an estimation scheme using a likelihood approach that penalizes the roughness of the common transfer function. We illustrate the proposed method with a simulation study and a biological example of indirectly estimating the spawning date distribution of North Sea cod.
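
    A roughness-penalized least squares fit is a simplified stand-in for the penalized likelihood scheme described above. Assuming a design matrix X whose columns hold the lagged daily covariate values and a tuning parameter lam (both illustrative), the estimated weights solve a ridge-type normal equation with a second-difference penalty:

```python
import numpy as np

def smooth_transfer_function(X, y, lam):
    """Estimate daily weights w in y = X w + e by penalized least squares,
    with roughness penalty lam * ||D2 w||^2 on second differences of w."""
    n = X.shape[1]
    D2 = np.diff(np.eye(n), n=2, axis=0)       # second-difference operator
    A = X.T @ X + lam * D2.T @ D2              # penalized normal equations
    return np.linalg.solve(A, X.T @ y)
```

    Because the penalty annihilates linear sequences, a linear weight profile is recovered exactly for any lam; larger lam progressively shrinks curvature in the estimated transfer function.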

  5. Pedotransfer functions estimating soil hydraulic properties using different soil parameters

    DEFF Research Database (Denmark)

    Børgesen, Christen Duus; Iversen, Bo Vangsø; Jacobsen, Ole Hørbye

    2008-01-01

    Estimates of soil hydraulic properties using pedotransfer functions (PTF) are useful in many studies such as hydrochemical modelling and soil mapping. The objective of this study was to calibrate and test parametric PTFs that predict soil water retention and unsaturated hydraulic conductivity parameters. The PTFs are based on neural networks and the Bootstrap method using different sets of predictors and predict the van Genuchten/Mualem parameters. A Danish soil data set (152 horizons) dominated by sandy and sandy loamy soils was used in the development of PTFs to predict the Mualem hydraulic conductivity parameters. A larger data set (1618 horizons) with a broader textural range was used in the development of PTFs to predict the van Genuchten parameters. The PTFs using either three or seven textural classes combined with soil organic matter and bulk density gave the most reliable predictions...

  6. Estimating receiver functions on dense arrays: application to the IRIS Community Wavefield Experiment in Oklahoma

    Science.gov (United States)

    Zhong, M.; Zhan, Z.

    2017-12-01

    Receiver functions (RF) estimated on dense arrays have been widely used for studies of Earth structures at different scales. However, there are still challenges in estimating and interpreting RF images due to the non-uniqueness of deconvolution, noise in data, and the lack of uncertainty estimates. Here, we develop a dense-array-based RF method towards robust and high-resolution RF images. We cast RF images as the models in a sparsity-promoted inverse problem, in which waveforms from multiple events recorded by neighboring stations are jointly inverted. We use the Neighborhood Algorithm to find the optimal model (i.e., RF image) as well as an ensemble of models for further uncertainty quantification. Synthetic tests and application to the IRIS Community Wavefield Experiment in Oklahoma demonstrate that the new method is able to deal with challenging datasets, retrieve reliable high-resolution RF images, and provide realistic uncertainty estimates.
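
    A classical single-station baseline that the sparsity-promoted joint inversion above improves upon is water-level deconvolution in the frequency domain; a minimal sketch (the water_level parameter is illustrative):

```python
import numpy as np

def receiver_function(radial, vertical, water_level=0.01):
    """Water-level frequency-domain deconvolution of the radial by the
    vertical component: the denominator |Z|^2 is clipped from below at a
    fraction of its maximum to stabilize division at notched frequencies."""
    R, Z = np.fft.rfft(radial), np.fft.rfft(vertical)
    power = (Z * np.conj(Z)).real
    denom = np.maximum(power, water_level * power.max())
    return np.fft.irfft(R * np.conj(Z) / denom, n=len(radial))
```

    Deconvolving a trace by itself should concentrate the result at zero lag, a useful sanity check for any deconvolution routine.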

  7. Efficient Estimation of Dynamic Density Functions with Applications in Streaming Data

    KAUST Repository

    Qahtan, Abdulhakim

    2016-05-11

    Recent advances in computing technology allow for collecting vast amounts of data that arrive continuously in the form of streams. Mining data streams is challenged by the speed and volume of the arriving data. Furthermore, the underlying distribution of the data changes over time in unpredicted scenarios. To reduce the computational cost, data streams are often studied in forms of condensed representation, e.g., Probability Density Function (PDF). This thesis aims at developing an online density estimator that builds a model called KDE-Track for characterizing the dynamic density of the data streams. KDE-Track estimates the PDF of the stream at a set of resampling points and uses interpolation to estimate the density at any given point. To reduce the interpolation error and computational complexity, we introduce adaptive resampling where more/less resampling points are used in high/low curved regions of the PDF. The PDF values at the resampling points are updated online to provide an up-to-date model of the data stream. Compared with other existing online density estimators, KDE-Track is often more accurate (as reflected by smaller error values) and more computationally efficient (as reflected by shorter running time). The anytime-available PDF estimated by KDE-Track can be applied for visualizing the dynamic density of data streams, outlier detection and change detection in data streams. In this thesis work, the first application is to visualize the taxi traffic volume in New York city. Utilizing KDE-Track allows for visualizing and monitoring the traffic flow in real time without extra overhead and provides insight analysis of the pick-up demand that can be utilized by service providers to improve service availability. The second application is to detect outliers in data streams from sensor networks based on the estimated PDF. The method detects outliers accurately and outperforms baseline methods designed for detecting and cleaning outliers in sensor data.
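
    The core of a KDE-Track-style estimator, kernel density values maintained online at fixed resampling points with interpolation in between, might be sketched as follows (class and parameter names are illustrative, and the adaptive resampling described above is omitted for brevity):

```python
import math

class StreamingKDE:
    """Online Gaussian KDE evaluated at fixed resampling points; the
    density at an arbitrary point is linearly interpolated."""
    def __init__(self, grid, bandwidth):
        self.grid = list(grid)
        self.h = bandwidth
        self.n = 0
        self.density = [0.0] * len(self.grid)

    def update(self, x):
        """Fold one stream sample into the running average of kernels."""
        self.n += 1
        c = self.h * math.sqrt(2 * math.pi)
        for i, g in enumerate(self.grid):
            k = math.exp(-0.5 * ((g - x) / self.h) ** 2) / c
            self.density[i] += (k - self.density[i]) / self.n

    def pdf(self, x):
        """Linear interpolation between the two nearest resampling points."""
        g, d = self.grid, self.density
        if x <= g[0]:
            return d[0]
        if x >= g[-1]:
            return d[-1]
        for i in range(len(g) - 1):
            if g[i] <= x <= g[i + 1]:
                t = (x - g[i]) / (g[i + 1] - g[i])
                return (1 - t) * d[i] + t * d[i + 1]
```

    Each update touches only the resampling grid, so the cost per sample is independent of the stream length, which is what makes the model "anytime available".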

  8. Turnpike theory of continuous-time linear optimal control problems

    CERN Document Server

    Zaslavski, Alexander J

    2015-01-01

    Individual turnpike results are of great interest due to their numerous applications in engineering and in economic theory; in this book the study is focused on new results of turnpike phenomenon in linear optimal control problems.  The book is intended for engineers as well as for mathematicians interested in the calculus of variations, optimal control, and in applied functional analysis. Two large classes of problems are studied in more depth. The first class studied in Chapter 2 consists of linear control problems with periodic nonsmooth convex integrands. Chapters 3-5 consist of linear control problems with autonomous nonconvex and nonsmooth integrands.  Chapter 6 discusses a turnpike property for dynamic zero-sum games with linear constraints. Chapter 7 examines genericity results. In Chapter 8, the description of structure of variational problems with extended-valued integrands is obtained. Chapter 9 ends the exposition with a study of turnpike phenomenon for dynamic games with extended value integran...

  9. A Nonlinear Dynamics-Based Estimator for Functional Electrical Stimulation: Preliminary Results From Lower-Leg Extension Experiments.

    Science.gov (United States)

    Allen, Marcus; Zhong, Qiang; Kirsch, Nicholas; Dani, Ashwin; Clark, William W; Sharma, Nitin

    2017-12-01

    Miniature inertial measurement units (IMUs) are wearable sensors that measure limb segment or joint angles during dynamic movements. However, IMUs are generally prone to drift, external magnetic interference, and measurement noise. This paper presents a new class of nonlinear state estimation technique called state-dependent coefficient (SDC) estimation to accurately predict joint angles from IMU measurements. The SDC estimation method uses limb dynamics, instead of limb kinematics, to estimate the limb state. Importantly, the nonlinear limb dynamic model is formulated into state-dependent matrices that facilitate the estimator design without performing a Jacobian linearization. The estimation method is experimentally demonstrated to predict knee joint angle measurements during functional electrical stimulation of the quadriceps muscle. The nonlinear knee musculoskeletal model was identified through a series of experiments. The SDC estimator was then compared with an extended Kalman filter (EKF), which uses a Jacobian linearization, and a rotation matrix method, which uses a kinematic model instead of the dynamic model. Each estimator's performance was evaluated against the true value of the joint angle, which was measured through a rotary encoder. The experimental results showed that the SDC estimator, the rotation matrix method, and the EKF had root mean square errors of 2.70°, 2.86°, and 4.42°, respectively. Our preliminary experimental results show the new estimator's clear advantage over the EKF method but only a slight advantage over the rotation matrix method. However, the information from the dynamic model allows the SDC method to use only one IMU to measure the knee angle, compared with the rotation matrix method, which uses two IMUs to estimate the angle.

  10. Influence of different temperatures on the thermal fatigue behavior and thermal stability of hot-work tool steel processed by a biomimetic couple laser technique

    Science.gov (United States)

    Meng, Chao; Zhou, Hong; Zhou, Ying; Gao, Ming; Tong, Xin; Cong, Dalong; Wang, Chuanwei; Chang, Fang; Ren, Luquan

    2014-04-01

    Three kinds of biomimetic non-smooth shapes (spot-shape, striation-shape and reticulation-shape) were fabricated on the surface of H13 hot-work tool steel by laser. We investigated the thermal fatigue behavior of biomimetic non-smooth samples with the three kinds of shapes at different thermal cycle temperatures. Moreover, the evolution of the microstructure, as well as the variations in hardness of the laser affected area and the matrix, were studied and compared. The results showed that the biomimetic non-smooth samples had better thermal fatigue behavior than the untreated samples at different thermal cycle temperatures. For a given maximal temperature, the biomimetic non-smooth sample with the reticulation shape had the best thermal fatigue behavior, followed by the striation shape, which in turn was better than the spot shape. The microstructure observations indicated that at different thermal cycle temperatures the coarsening degrees of the microstructures of the laser affected area differed, and the microstructures of the laser affected area were still finer than those of the untreated samples. Although the resistance to thermal cycling softening of the laser affected area was lower than that of the untreated sample, the laser affected area had higher microhardness than the untreated sample at different thermal cycle temperatures.

  11. Specification errors in estimating cost functions: the case of the nuclear-electric-generating industry

    International Nuclear Information System (INIS)

    Jorgensen, E.J.

    1987-01-01

    This study is an application of production-cost duality theory. Duality theory is reviewed for the competitive and the rate-of-return regulated firm. The cost function is developed for the nuclear electric-power-generating industry of the United States using capital, fuel, and labor factor inputs. A comparison is made between the Generalized Box-Cox (GBC) and Fourier Flexible (FF) functional forms. The GBC functional form nests the Generalized Leontief, Generalized Square Root Quadratic and Translog functional forms, and is based upon a second-order Taylor-series expansion. The FF form follows from a Fourier-series expansion in sine and cosine terms using the Sobolev norm as the goodness-of-fit measure. The Sobolev norm takes into account first and second derivatives. The cost function and two factor shares are estimated as a system of equations using maximum-likelihood techniques, with Additive Standard Normal and Logistic Normal error distributions. In summary, none of the special cases of the GBC functional form are accepted. Homotheticity of the underlying production technology can be rejected for both GBC and FF forms, leaving only the unrestricted versions supported by the data. Residual analysis indicates a slight improvement in skewness and kurtosis for univariate and multivariate cases when the Logistic Normal distribution is used.

  12. Slope Estimation in Noisy Piecewise Linear Functions.

    Science.gov (United States)

    Ingle, Atul; Bucklew, James; Sethares, William; Varghese, Tomy

    2015-03-01

    This paper discusses the development of a slope estimation algorithm called MAPSlope for piecewise linear data that is corrupted by Gaussian noise. The number and locations of slope change points (also known as breakpoints) are assumed to be unknown a priori though it is assumed that the possible range of slope values lies within known bounds. A stochastic hidden Markov model that is general enough to encompass real world sources of piecewise linear data is used to model the transitions between slope values and the problem of slope estimation is addressed using a Bayesian maximum a posteriori approach. The set of possible slope values is discretized, enabling the design of a dynamic programming algorithm for posterior density maximization. Numerical simulations are used to justify choice of a reasonable number of quantization levels and also to analyze mean squared error performance of the proposed algorithm. An alternating maximization algorithm is proposed for estimation of unknown model parameters and a convergence result for the method is provided. Finally, results using data from political science, finance and medical imaging applications are presented to demonstrate the practical utility of this procedure.
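
    The dynamic program over quantized slope values can be sketched as follows, with a simple per-switch penalty standing in for the paper's hidden Markov transition probabilities (a Viterbi-style simplification, not the published MAPSlope algorithm):

```python
def map_slopes(y, slopes, switch_cost):
    """MAP-style estimate of a piecewise constant slope sequence from the
    noisy increments of y: dynamic programming over quantized slope values,
    trading squared fit error against a penalty for each slope change."""
    inc = [b - a for a, b in zip(y, y[1:])]
    K = len(slopes)
    cost = [(inc[0] - s) ** 2 for s in slopes]
    back = []
    for d in inc[1:]:
        new, choice = [], []
        for k, s in enumerate(slopes):
            best_j = min(range(K),
                         key=lambda j: cost[j] + (switch_cost if j != k else 0.0))
            new.append(cost[best_j] + (switch_cost if best_j != k else 0.0)
                       + (d - s) ** 2)
            choice.append(best_j)
        cost, back = new, back + [choice]
    # backtrack from the cheapest terminal state
    k = min(range(K), key=lambda j: cost[j])
    path = [k]
    for choice in reversed(back):
        k = choice[k]
        path.append(k)
    path.reverse()
    return [slopes[k] for k in path]
```

    The switch cost plays the role of a prior on breakpoint frequency: larger values yield fewer, longer segments.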

  13. Estimation of the four-wave mixing noise probability-density function by the multicanonical Monte Carlo method.

    Science.gov (United States)

    Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas

    2005-01-01

    The performance of high-powered wavelength-division multiplexed (WDM) optical networks can be severely degraded by four-wave-mixing- (FWM-) induced distortion. The multicanonical Monte Carlo method (MCMC) is used to calculate the probability-density function (PDF) of the decision variable of a receiver, limited by FWM noise. Compared with the conventional Monte Carlo method previously used to estimate this PDF, the MCMC method is much faster and can accurately estimate smaller error probabilities. The method takes into account the correlation between the components of the FWM noise, unlike the Gaussian model, which is shown not to provide accurate results.

  14. Lithium-ion battery state of function estimation based on fuzzy logic algorithm with associated variables

    Science.gov (United States)

    Gan, L.; Yang, F.; Shi, Y. F.; He, H. L.

    2017-11-01

    Many applications, such as the rapidly developing electric vehicle, demand to know how much continuous and instantaneous power a battery can provide. Given their large-scale application, lithium-ion batteries are taken as our research object. Many experiments were designed to obtain the lithium-ion battery parameters and so ensure the relevance and reliability of the estimation. To evaluate the continuous and instantaneous load capability of a battery, called its state-of-function (SOF), this paper proposes a fuzzy logic algorithm based on the battery state-of-charge (SOC), state-of-health (SOH) and C-rate parameters. Simulation and experimental results indicate that the proposed approach is suitable for battery SOF estimation.
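
    The mechanics of such a fuzzy inference can be illustrated with a toy Mamdani-style rule base over SOC and SOH (the membership functions, rules, and the omission of the C-rate input are all simplifications, not the paper's design):

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def sof_fuzzy(soc, soh):
    """Toy fuzzy SOF: 'SOF is high' when both SOC and SOH are high,
    'SOF is low' otherwise; min for AND, weighted-centroid defuzzification."""
    soc_high = tri(soc, 0.4, 1.0, 1.6)
    soh_high = tri(soh, 0.4, 1.0, 1.6)
    w_high = min(soc_high, soh_high)
    w_low = max(1.0 - soc_high, 1.0 - soh_high)
    total = w_high + w_low
    return (w_high * 1.0 + w_low * 0.2) / total if total else 0.0
```

    A realistic rule base would add the C-rate input and several graded output levels, but the fire-then-defuzzify structure is the same.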

  15. Detection of crack-like indications in digital radiography by global optimisation of a probabilistic estimation function

    Energy Technology Data Exchange (ETDEWEB)

    Alekseychuk, O.

    2006-07-01

    A new algorithm for detection of longitudinal crack-like indications in radiographic images is developed in this work. Conventional local detection techniques give unsatisfactory results for this task due to the low signal to noise ratio (SNR ∝ 1) of crack-like indications in radiographic images. The usage of global features of crack-like indications provides the necessary noise resistance, but this is connected with prohibitive computational complexities of detection and difficulties in a formal description of the indication shape. Conventionally, the excessive computational complexity of the solution is reduced by usage of heuristics. The heuristics to be used, are selected on a trial and error basis, are problem dependent and do not guarantee the optimal solution. Not following this way is a distinctive feature of the algorithm developed here. Instead, a global characteristic of crack-like indication (the estimation function) is used, whose maximum in the space of all possible positions, lengths and shapes can be found exactly, i.e. without any heuristics. The proposed estimation function is defined as a sum of a posteriori information gains about hypothesis of indication presence in each point along the whole hypothetical indication. The gain in the information about hypothesis of indication presence results from the analysis of the underlying image in the local area. Such an estimation function is theoretically justified and exhibits a desirable behaviour on changing signals. The developed algorithm is implemented in the C++ programming language and tested on synthetic as well as on real images. It delivers good results (high correct detection rate by given false alarm rate) which are comparable to the performance of trained human inspectors.

  16. Detection of crack-like indications in digital radiography by global optimisation of a probabilistic estimation function

    International Nuclear Information System (INIS)

    Alekseychuk, O.

    2006-01-01

A new algorithm for the detection of longitudinal crack-like indications in radiographic images is developed in this work. Conventional local detection techniques give unsatisfactory results for this task due to the low signal-to-noise ratio (SNR ∝ 1) of crack-like indications in radiographic images. The use of global features of crack-like indications provides the necessary noise resistance, but it comes with prohibitive computational complexity of detection and difficulties in a formal description of the indication shape. Conventionally, the excessive computational complexity of the solution is reduced by the use of heuristics. The heuristics to be used are selected on a trial-and-error basis, are problem-dependent, and do not guarantee an optimal solution. Avoiding such heuristics is a distinctive feature of the algorithm developed here. Instead, a global characteristic of a crack-like indication (the estimation function) is used, whose maximum in the space of all possible positions, lengths and shapes can be found exactly, i.e. without any heuristics. The proposed estimation function is defined as a sum of a posteriori information gains about the hypothesis of indication presence at each point along the whole hypothetical indication. The gain in information about the hypothesis of indication presence results from the analysis of the underlying image in the local area. Such an estimation function is theoretically justified and exhibits the desired behaviour on changing signals. The developed algorithm is implemented in the C++ programming language and tested on synthetic as well as on real images. It delivers good results (a high correct-detection rate at a given false-alarm rate) which are comparable to the performance of trained human inspectors.
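The exact maximization described above, over all possible positions, lengths and shapes, can be organized as a dynamic program when candidate indications are modeled as near-vertical pixel paths. The sketch below is illustrative only: the per-pixel gain array and the path model are assumptions, not the paper's exact probabilistic formulation.

```python
import numpy as np

def best_path_score(gain, max_shift=1):
    """Maximize the sum of per-pixel information gains over all paths that
    traverse the image top to bottom, shifting at most `max_shift` columns
    per row. Plain dynamic programming: no heuristics, exact optimum."""
    rows, cols = gain.shape
    score = gain[0].copy()                     # best score ending at row 0
    for r in range(1, rows):
        prev = np.full(cols, -np.inf)
        for c in range(cols):
            lo, hi = max(0, c - max_shift), min(cols, c + max_shift + 1)
            prev[c] = score[lo:hi].max()       # best predecessor in reach
        score = prev + gain[r]
    return score.max()
```

For a 5x5 gain image whose middle column carries positive gain and all other pixels negative gain, the optimizer recovers the straight central path.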

  17. Flexible semiparametric joint modeling: an application to estimate individual lung function decline and risk of pulmonary exacerbations in cystic fibrosis

    Directory of Open Access Journals (Sweden)

    Dan Li

    2017-11-01

Full Text Available Abstract Background Epidemiologic surveillance of lung function is key to clinical care of individuals with cystic fibrosis, but lung function decline is nonlinear and often impacted by acute respiratory events known as pulmonary exacerbations. Statistical models are needed to simultaneously estimate lung function decline while providing risk estimates for the onset of pulmonary exacerbations, in order to identify relevant predictors of declining lung function and understand how these associations could be used to predict the onset of pulmonary exacerbations. Methods Using longitudinal lung function (FEV1) measurements and time-to-event data on pulmonary exacerbations from individuals in the United States Cystic Fibrosis Registry, we implemented a flexible semiparametric joint model consisting of a mixed-effects submodel with regression splines to fit repeated FEV1 measurements and a time-to-event submodel for possibly censored data on pulmonary exacerbations. We contrasted this approach with methods currently used in epidemiological studies and highlight clinical implications. Results The semiparametric joint model had the best fit of all models examined based on the deviance information criterion. Higher starting FEV1 implied more rapid lung function decline in both separate and joint models; however, individualized risk estimates for pulmonary exacerbation differed depending upon model type. Based on shared parameter estimates from the joint model, which accounts for the nonlinear FEV1 trajectory, patients with more positive rates of change were less likely to experience a pulmonary exacerbation (HR per one standard deviation increase in FEV1 rate of change = 0.566, 95% CI 0.516–0.619), and having higher absolute FEV1 also corresponded to lower risk of having a pulmonary exacerbation (HR per one standard deviation increase in FEV1 = 0.856, 95% CI 0.781–0.937). At the population level, both submodels indicated significant effects of birth

  18. A two-stage method for inverse medium scattering

    KAUST Repository

    Ito, Kazufumi

    2013-03-01

    We present a novel numerical method to the time-harmonic inverse medium scattering problem of recovering the refractive index from noisy near-field scattered data. The approach consists of two stages, one pruning step of detecting the scatterer support, and one resolution enhancing step with nonsmooth mixed regularization. The first step is strictly direct and of sampling type, and it faithfully detects the scatterer support. The second step is an innovative application of nonsmooth mixed regularization, and it accurately resolves the scatterer size as well as intensities. The nonsmooth model can be efficiently solved by a semi-smooth Newton-type method. Numerical results for two- and three-dimensional examples indicate that the new approach is accurate, computationally efficient, and robust with respect to data noise. © 2012 Elsevier Inc.

  19. Sensitivity of landscape resistance estimates based on point selection functions to scale and behavioral state: Pumas as a case study

    Science.gov (United States)

    Katherine A. Zeller; Kevin McGarigal; Paul Beier; Samuel A. Cushman; T. Winston Vickers; Walter M. Boyce

    2014-01-01

    Estimating landscape resistance to animal movement is the foundation for connectivity modeling, and resource selection functions based on point data are commonly used to empirically estimate resistance. In this study, we used GPS data points acquired at 5-min intervals from radiocollared pumas in southern California to model context-dependent point selection...

  20. Intake retention functions and their applications to bioassay and the estimation of internal radiation doses

    International Nuclear Information System (INIS)

    Skrable, K.W.; Chabot, G.E.; French, C.S.; La Bone, T.R.

    1988-01-01

This paper describes a way of obtaining intake retention functions and gives applications of them. These functions give the fraction of an intake of radioactive material expected to be present in a specified bioassay compartment at any time after a single acute exposure or after the onset of a continuous exposure. The intake retention functions are derived from a multicompartmental model and a recursive catenary kinetics equation that completely describe the metabolism of radioelements from intake to excretion, accounting for the delay in uptake from compartments in the respiratory and gastrointestinal tracts and the recycling of radioelements between systemic compartments. This approach, which treats excretion as the 'last' compartment of all catenary metabolic pathways, avoids the use of convolution integrals and provides algebraic solutions that can be programmed on hand-held calculators or personal computers. The estimation of intakes and internal radiation doses and the use of intake retention functions in the design of bioassay programs are discussed along with several examples.
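For a catenary chain without recycling, the retention fraction in each compartment after a single acute intake has the classical Bateman closed form. A minimal sketch, assuming distinct removal rates and that each compartment's entire outflow feeds the next; the biokinetic models discussed in the paper additionally handle branching, recycling and continuous intakes:

```python
import math

def bateman_retention(rates, t):
    """Fraction of a unit acute intake in each compartment of a catenary
    chain at time t. rates[i] is the total removal rate (per unit time) of
    compartment i; all rates must be distinct."""
    n = len(rates)
    frac = []
    for i in range(n):
        prefactor = 1.0
        for k in range(i):                 # transfer rates feeding compartment i
            prefactor *= rates[k]
        total = 0.0
        for j in range(i + 1):
            denom = 1.0
            for k in range(i + 1):
                if k != j:
                    denom *= rates[k] - rates[j]
            total += math.exp(-rates[j] * t) / denom
        frac.append(prefactor * total)
    return frac
```

At t = 0 the whole intake sits in the first compartment; at later times activity passes down the chain, exactly as the recursive catenary equation predicts for the non-recycling case.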

  1. Unbiased minimum variance estimator of a matrix exponential function. Application to Boltzmann/Bateman coupled equations solving

    International Nuclear Information System (INIS)

    Dumonteil, E.; Diop, C. M.

    2009-01-01

This paper derives an unbiased minimum variance estimator (UMVE) of a matrix exponential function of a normal mean. The result is then used to propose a reference scheme to solve Boltzmann/Bateman coupled equations by means of Monte Carlo transport codes. The last section presents numerical results on a simple example. (authors)

  2. On Estimation Of The Orientation Of Mobile Robots Using Turning Functions And SONAR Information

    Directory of Open Access Journals (Sweden)

    Dorel AIORDACHIOAIE

    2003-12-01

Full Text Available SONAR systems are widely used by some artificial objects, e.g. robots, and by animals, e.g. bats, for navigation and pattern recognition. The objective of this paper is to present a solution for estimating the orientation of mobile robots in their environment, in the context of navigation, using the turning function approach. The results are shown to be accurate and can be used further in the design of navigation strategies for mobile robots.
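A turning function represents a closed polygon by its cumulative turning angle as a function of normalized arc length, giving a signature that can be matched against a reference to estimate orientation. A minimal generic computational-geometry sketch; the paper's SONAR-specific preprocessing is not reproduced here:

```python
import math

def turning_function(poly):
    """Return (normalized arc length, cumulative heading) samples, one per
    vertex, for a closed polygon given as a list of (x, y) vertices."""
    n = len(poly)
    edges = [(poly[(i + 1) % n][0] - poly[i][0],
              poly[(i + 1) % n][1] - poly[i][1]) for i in range(n)]
    lengths = [math.hypot(dx, dy) for dx, dy in edges]
    total = sum(lengths)
    angle = math.atan2(edges[0][1], edges[0][0])   # heading of first edge
    s, samples = 0.0, []
    for i in range(n):
        samples.append((s / total, angle))
        s += lengths[i]
        if i + 1 < n:                              # signed turn to next edge
            d0, d1 = edges[i], edges[i + 1]
            angle += math.atan2(d0[0] * d1[1] - d0[1] * d1[0],
                                d0[0] * d1[0] + d0[1] * d1[1])
    return samples
```

Rotating the shape shifts the whole heading curve by a constant, which is what makes the representation useful for orientation estimation.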

  3. Plasma Levels of Middle Molecules to Estimate Residual Kidney Function in Haemodialysis without Urine Collection.

    Directory of Open Access Journals (Sweden)

    Enric Vilar

Full Text Available Residual Kidney Function (RKF) is associated with survival benefits in haemodialysis (HD) but is difficult to measure without urine collection. Middle molecules such as cystatin C and β2-microglobulin accumulate in renal disease, and plasma levels have been used to estimate kidney function early in this condition. We investigated their use to estimate RKF in patients on HD. Cystatin C, β2-microglobulin, urea and creatinine levels were studied in patients on incremental high-flux HD or haemodiafiltration (HDF). Over sequential HD sessions, blood was sampled pre- and post-session 1 and pre-session 2 for estimation of these parameters. Urine was collected during the whole interdialytic interval for estimation of residual GFR (GFRResidual = mean of urea and creatinine clearance). The relationships of plasma cystatin C and β2-microglobulin levels to GFRResidual and urea clearance were determined. Of the 341 patients studied, 64% had urine output > 100 ml/day; 32.6% were on high-flux HD and 67.4% on HDF. The parameters most closely correlated with GFRResidual were 1/β2-microglobulin (r² = 0.67) and 1/cystatin C (r² = 0.50). Both relationships were weaker at low GFRResidual. The best regression model for GFRResidual, explaining 67% of the variation, was GFRResidual = 160.3 · (1/β2m) − 4.2, where β2m is the pre-dialysis β2-microglobulin concentration (mg/L). This model was validated in a separate cohort of 50 patients using Bland-Altman analysis. The area under the curve in receiver operating characteristic analysis aimed at identifying subjects with urea clearance ≥ 2 ml/min/1.73 m² was 0.91 for β2-microglobulin and 0.86 for cystatin C. A plasma β2-microglobulin cut-off of ≤ 19.2 mg/L allowed identification of patients with urea clearance ≥ 2 ml/min/1.73 m² with 90% specificity and 65% sensitivity. Plasma pre-dialysis β2-microglobulin levels can provide estimates of RKF which may have clinical utility and appear superior to cystatin C. Use of cut-off levels
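The reported regression model and screening cut-off translate directly into code. A sketch with coefficients taken from the abstract; this is an illustration, not a validated clinical tool:

```python
def residual_gfr_from_b2m(b2m_mg_per_l):
    """Estimate residual GFR (ml/min/1.73 m^2) from the pre-dialysis
    beta2-microglobulin concentration, per the abstract's regression."""
    return 160.3 * (1.0 / b2m_mg_per_l) - 4.2

def likely_urea_clearance_ge_2(b2m_mg_per_l, cutoff=19.2):
    """Screening rule: b2m <= 19.2 mg/L flagged urea clearance >= 2
    ml/min/1.73 m^2 with 90% specificity and 65% sensitivity."""
    return b2m_mg_per_l <= cutoff
```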

  4. Density functionals for surface science: Exchange-correlation model development with Bayesian error estimation

    DEFF Research Database (Denmark)

    Wellendorff, Jess; Lundgård, Keld Troen; Møgelhøj, Andreas

    2012-01-01

A methodology for semiempirical density functional optimization, using regularization and cross-validation methods from machine learning, is developed. We demonstrate that such methods enable well-behaved exchange-correlation approximations in very flexible model spaces, thus avoiding overfitting. ... the energetics of intramolecular and intermolecular, bulk solid, and surface chemical bonding, and the developed optimization method explicitly handles making the compromise based on the directions in model space favored by different materials properties. The approach is applied to designing the Bayesian error ... sets validates the applicability of BEEF-vdW to studies in chemistry and condensed matter physics. Applications of the approximation and its Bayesian ensemble error estimate to two intricate surface science problems support this.

  5. A Simple But Effective Canonical Dual Theory Unified Algorithm for Global Optimization

    OpenAIRE

    Zhang, Jiapu

    2011-01-01

Numerical global optimization methods are often very time-consuming and cannot be applied to high-dimensional nonconvex/nonsmooth optimization problems. Due to the nonconvexity/nonsmoothness, directly solving the primal problems is sometimes very difficult. This paper presents a very simple but effective canonical duality theory (CDT) unified global optimization algorithm, whose convergence is proved in the paper. More importantly, for this CDT-unified algorithm, numerous...

  6. An Improved Differential Evolution Based Dynamic Economic Dispatch with Nonsmooth Fuel Cost Function

    Directory of Open Access Journals (Sweden)

    R. Balamurugan

    2007-09-01

    Full Text Available Dynamic economic dispatch (DED is one of the major operational decisions in electric power systems. DED problem is an optimization problem with an objective to determine the optimal combination of power outputs for all generating units over a certain period of time in order to minimize the total fuel cost while satisfying dynamic operational constraints and load demand in each interval. This paper presents an improved differential evolution (IDE method to solve the DED problem of generating units considering valve-point effects. Heuristic crossover technique and gene swap operator are introduced in the proposed approach to improve the convergence characteristic of the differential evolution (DE algorithm. To illustrate the effectiveness of the proposed approach, two test systems consisting of five and ten generating units have been considered. The results obtained through the proposed method are compared with those reported in the literature.
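The valve-point effect adds a rectified sinusoid to the quadratic fuel cost, which is what makes the objective nonsmooth. Below is a minimal single-unit sketch with a plain differential evolution loop; the coefficients and the DE variant are illustrative assumptions, and the paper's IDE additionally uses a heuristic crossover technique and a gene swap operator:

```python
import math
import random

def valve_point_cost(p, a, b, c, e, f, p_min):
    """Fuel cost with the rectified-sine valve-point term
    a + b*p + c*p^2 + |e * sin(f * (p_min - p))|."""
    return a + b * p + c * p * p + abs(e * math.sin(f * (p_min - p)))

def de_minimize(cost, bounds, pop_size=20, F=0.5, CR=0.9, gens=200, seed=1):
    """1-D differential evolution (DE/rand/1) with greedy selection."""
    random.seed(seed)
    lo, hi = bounds
    pop = [lo + random.random() * (hi - lo) for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            r1, r2, r3 = random.sample([j for j in range(pop_size) if j != i], 3)
            trial = pop[r1] + F * (pop[r2] - pop[r3]) if random.random() < CR else pop[i]
            trial = min(max(trial, lo), hi)        # clamp to feasible output range
            if cost(trial) <= cost(pop[i]):        # greedy replacement
                pop[i] = trial
    return min(pop, key=cost)
```

A multi-unit dispatch would use one vector component per generator plus a power-balance constraint; the single-variable version above only shows the mechanics.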

  7. Creatinine Versus Cystatin C: Differing Estimates of Renal Function in Hospitalized Veterans Receiving Anticoagulants.

    Science.gov (United States)

    Wang, Christina Hao; Rubinsky, Anna D; Minichiello, Tracy; Shlipak, Michael G; Price, Erika Leemann

    2018-05-31

Current practice in anticoagulation dosing relies on kidney function estimated from serum creatinine using the Cockcroft-Gault equation. However, creatinine can be unreliable in patients with low or high muscle mass. Cystatin C provides an alternative estimate of glomerular filtration rate (eGFR) that is independent of muscle. We compared cystatin C-based eGFR (eGFRcys) with multiple creatinine-based estimates of kidney function in hospitalized patients receiving anticoagulants, to assess for discordant results that could impact medication dosing. Retrospective chart review of hospitalized patients over 1 year who received non-vitamin K antagonist anticoagulation and who had same-day measurements of cystatin C and creatinine. Seventy-five inpatient veterans (median age 68) at the San Francisco VA Medical Center (SFVAMC). We compared the median difference between eGFR by the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) study equation using cystatin C (eGFRcys) and eGFRs from three creatinine-based equations: CKD-EPI (eGFR-EPI), Modification of Diet in Renal Disease (eGFR-MDRD), and Cockcroft-Gault (eGFR-CG). We categorized patients into standard KDIGO kidney stages and into drug-dosing categories based on each creatinine equation and calculated the proportions of patients reclassified across these categories based on cystatin C. Cystatin C predicted overall lower eGFR compared to creatinine-based equations, with a median difference of −7.1 (IQR −17.2, 2.6) mL/min/1.73 m² versus eGFR-EPI, −21.2 (IQR −43.7, −8.1) mL/min/1.73 m² versus eGFR-MDRD, and −25.9 (IQR −46.8, −8.7) mL/min/1.73 m² versus eGFR-CG. Thirty-one to 52% of patients were reclassified into lower drug-dosing categories using cystatin C compared to creatinine-based estimates. We found substantial discordance in eGFR comparing cystatin C with creatinine in this group of anticoagulated inpatients. Our sample size was limited and included few women. Further
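Of the equations compared above, Cockcroft-Gault is the one conventionally used for anticoagulant dosing, and it is simple enough to sketch. This is the standard textbook formula; the CKD-EPI and MDRD equations are more involved and are omitted here:

```python
def cockcroft_gault_crcl(age_years, weight_kg, scr_mg_dl, female):
    """Estimated creatinine clearance (ml/min) by Cockcroft-Gault:
    (140 - age) * weight / (72 * serum creatinine), times 0.85 if female."""
    crcl = (140 - age_years) * weight_kg / (72.0 * scr_mg_dl)
    return crcl * 0.85 if female else crcl
```

Note that Cockcroft-Gault returns clearance in ml/min, not ml/min/1.73 m², which is one source of the discordance the study quantifies.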

  8. Evaluating the impact of spatio-temporal smoothness constraints on the BOLD hemodynamic response function estimation: an analysis based on Tikhonov regularization

    International Nuclear Information System (INIS)

    Casanova, R; Yang, L; Hairston, W D; Laurienti, P J; Maldjian, J A

    2009-01-01

    Recently we have proposed the use of Tikhonov regularization with temporal smoothness constraints to estimate the BOLD fMRI hemodynamic response function (HRF). The temporal smoothness constraint was imposed on the estimates by using second derivative information while the regularization parameter was selected based on the generalized cross-validation function (GCV). Using one-dimensional simulations, we previously found this method to produce reliable estimates of the HRF time course, especially its time to peak (TTP), being at the same time fast and robust to over-sampling in the HRF estimation. Here, we extend the method to include simultaneous temporal and spatial smoothness constraints. This method does not need Gaussian smoothing as a pre-processing step as usually done in fMRI data analysis. We carried out two-dimensional simulations to compare the two methods: Tikhonov regularization with temporal (Tik-GCV-T) and spatio-temporal (Tik-GCV-ST) smoothness constraints on the estimated HRF. We focus our attention on quantifying the influence of the Gaussian data smoothing and the presence of edges on the performance of these techniques. Our results suggest that the spatial smoothing introduced by regularization is less severe than that produced by Gaussian smoothing. This allows more accurate estimates of the response amplitudes while producing similar estimates of the TTP. We illustrate these ideas using real data. (note)
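The estimator described above, ridge-type regularization with a second-derivative smoothness penalty and λ chosen by GCV, can be sketched in a few lines. This is a generic one-dimensional version under the usual linear-model assumptions; the paper's spatio-temporal extension adds a spatial penalty term:

```python
import numpy as np

def tikhonov_temporal(X, y, lambdas):
    """Estimate beta minimizing ||y - X beta||^2 + lambda ||D beta||^2,
    where D is the second-difference operator (temporal smoothness), with
    lambda selected by generalized cross-validation (GCV)."""
    n, p = X.shape
    D = np.diff(np.eye(p), n=2, axis=0)          # second-difference operator
    best = None
    for lam in lambdas:
        A = X.T @ X + lam * (D.T @ D)
        beta = np.linalg.solve(A, X.T @ y)
        H = X @ np.linalg.solve(A, X.T)          # influence (hat) matrix
        rss = np.sum((y - X @ beta) ** 2)
        gcv = n * rss / (n - np.trace(H)) ** 2
        if best is None or gcv < best[0]:
            best = (gcv, lam, beta)
    return best[1], best[2]
```

With X the HRF design matrix built from the stimulus sequence, beta is the estimated HRF time course.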

  9. Estimation of daily global solar radiation as a function of the solar energy potential at soil surface

    International Nuclear Information System (INIS)

    Pereira, A.B.; Vrisman, A.L.; Galvani, E.

    2002-01-01

The solar radiation received at the surface of the earth, apart from its relevance to several daily human activities, plays an important role in the growth and development of plants. The aim of the current work was to develop and calibrate an estimation model for the evaluation of the global solar radiation flux density as a function of the solar energy potential at the soil surface. Radiometric data were collected at Ponta Grossa, PR, Brazil (latitude 25°13' S, longitude 50°03' W, altitude 880 m). Estimated values of the solar energy potential, obtained as a function of only one measurement taken at solar noon, were compared with those measured by a Robitzsch bimetallic actinograph for days with insolation ratios higher than 0.85. This data set was submitted to a simple linear regression analysis, and a good fit between observed and calculated values was obtained. For the estimation of the coefficients a and b of Angström's equation, the method based on the solar energy potential at the soil surface was used for the site under study. The methodology was efficient for assessing the coefficients, aiming at the determination of the global solar radiation flux density quickly and simply; it was also found that the criterion for the estimation of the solar energy potential is equivalent to that of the classical Angström methodology. Knowledge of the available solar energy potential and global solar radiation flux density is of great importance for the estimation of the maximum atmospheric evaporative demand and of water consumption by irrigated crops, and also for building solar engineering equipment such as driers, heaters, solar ovens, refrigerators, etc.
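The coefficients a and b of the Angström (Angström-Prescott) equation, H/H0 = a + b·(n/N), are obtained by a simple linear regression of the clearness index H/H0 on the relative sunshine duration n/N. A minimal sketch with synthetic data; the values in the test are invented, not the Ponta Grossa coefficients:

```python
import numpy as np

def fit_angstrom(h_over_h0, n_over_n_max):
    """Least-squares fit of the Angstrom-Prescott relation
    H/H0 = a + b * (n/N); returns (a, b)."""
    b, a = np.polyfit(n_over_n_max, h_over_h0, 1)   # slope, intercept
    return a, b
```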

  10. Global sensitivity analysis by polynomial dimensional decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Rahman, Sharif, E-mail: rahman@engineering.uiowa.ed [College of Engineering, The University of Iowa, Iowa City, IA 52242 (United States)

    2011-07-15

    This paper presents a polynomial dimensional decomposition (PDD) method for global sensitivity analysis of stochastic systems subject to independent random input following arbitrary probability distributions. The method involves Fourier-polynomial expansions of lower-variate component functions of a stochastic response by measure-consistent orthonormal polynomial bases, analytical formulae for calculating the global sensitivity indices in terms of the expansion coefficients, and dimension-reduction integration for estimating the expansion coefficients. Due to identical dimensional structures of PDD and analysis-of-variance decomposition, the proposed method facilitates simple and direct calculation of the global sensitivity indices. Numerical results of the global sensitivity indices computed for smooth systems reveal significantly higher convergence rates of the PDD approximation than those from existing methods, including polynomial chaos expansion, random balance design, state-dependent parameter, improved Sobol's method, and sampling-based methods. However, for non-smooth functions, the convergence properties of the PDD solution deteriorate to a great extent, warranting further improvements. The computational complexity of the PDD method is polynomial, as opposed to exponential, thereby alleviating the curse of dimensionality to some extent.

  11. [Cardiac Synchronization Function Estimation Based on ASM Level Set Segmentation Method].

    Science.gov (United States)

    Zhang, Yaonan; Gao, Yuan; Tang, Liang; He, Ying; Zhang, Huie

At present, there are no accurate, quantitative methods for determining cardiac mechanical synchronism, and quantitative determination of the synchronization function of the four cardiac cavities from medical images has great clinical value. This paper uses whole-heart ultrasound image sequences and segments the left and right atria and left and right ventricles in each frame. After the segmentation, the number of pixels in each cavity in each frame is recorded, and the areas of the four cavities over the image sequence are therefore obtained. The area-change curves of the four cavities are then extracted, and the synchronization information of the four cavities is obtained. Because of the low SNR of ultrasound images, the boundary lines of cardiac cavities are vague, so the extraction of cardiac contours is still a challenging problem. Therefore, ASM model information is added to the traditional level set method to guide the curve evolution process. According to the experimental results, the improved method improves the accuracy of the segmentation. Furthermore, based on the ventricular segmentation, the right and left ventricular systolic functions are evaluated, mainly according to the area changes. The synchronization of the four cavities of the heart is estimated based on the area changes and the volume changes.

  12. Calculation of Coupled Vibroacoustics Response Estimates from a Library of Available Uncoupled Transfer Function Sets

    Science.gov (United States)

    Smith, Andrew; LaVerde, Bruce; Hunt, Ron; Fulcher, Clay; Towner, Robert; McDonald, Emmett

    2012-01-01

The design and theoretical basis of a new database tool that quickly generates vibroacoustic response estimates using a library of transfer functions (TFs) is discussed. During the early stages of a launch vehicle development program, these response estimates can be used to provide vibration environment specifications to hardware vendors. The tool accesses TFs from a database, combines the TFs, and multiplies these by input excitations to estimate vibration responses. The database is populated with two sets of uncoupled TFs; the first set representing the vibration response of a bare panel, designated H^s, and the second set representing the response of the free-free component equipment by itself, designated H^c. For a particular configuration undergoing analysis, the appropriate H^s and H^c are selected and coupled to generate an integrated TF, designated H^(s+c). This integrated TF is then used with the appropriate input excitations to estimate vibration responses. This simple yet powerful tool enables a user to estimate vibration responses without directly using finite element models, so long as suitable H^s and H^c sets are defined in the database libraries. The paper discusses the preparation of the database tool and provides the assumptions and methodologies necessary to combine H^s and H^c sets into an integrated H^(s+c). An experimental validation of the approach is also presented.

  13. Distribution-based estimates of minimum clinically important difference in cognition, arm function and lower body function after slow release-fampridine treatment of patients with multiple sclerosis

    DEFF Research Database (Denmark)

    Jensen, H B; Mamoei, Sepehr; Ravnborg, M.

    2016-01-01

OBJECTIVE: To provide distribution-based estimates of the minimal clinically important difference (MCID) in cognition and functional capacity after slow-release fampridine treatment in people with MS (PwMS). METHOD: MCID values were determined after SR-fampridine treatment in 105 PwMS. Testing

  14. Estimating renal function in children: a new GFR-model based on serum cystatin C and body cell mass.

    Science.gov (United States)

    Andersen, Trine Borup

    2012-07-01

This PhD thesis is based on four individual studies including 131 children aged 2-14 years with nephro-urologic disorders. The majority (72%) of children had normal renal function (GFR > 82 ml/min/1.73 m²), and only 8% had a renal function ... The thesis' main aims were: 1) to develop a more accurate GFR model based on a novel theory of body cell mass (BCM) and cystatin C (CysC); 2) to investigate its diagnostic performance in comparison to other models as well as serum CysC and creatinine; 3) to validate the new model's precision and validity. The model's diagnostic performance was investigated in study I as the ability to detect changes in renal function (total day-to-day variation), and in study IV as the ability to discriminate between normal and reduced function. The model's precision and validity were indirectly evaluated in studies II and III, and in study I accuracy was estimated by comparison to reference GFR. Several prediction models based on CysC or a combination of CysC and serum creatinine have been developed for predicting GFR in children. Despite these efforts to improve GFR estimates, no alternative to exogenous methods has been found, and Schwartz's formula, based on height, creatinine and an empirically derived constant, is still recommended for GFR estimation in children. However, the inclusion of BCM as a possible variable in a CysC-based prediction model has not yet been explored. As CysC is produced at a constant rate by all nucleated cells, we hypothesize that including BCM in a new prediction model will increase the accuracy of the GFR estimate. Study I aimed at deriving the new GFR-prediction model based on the novel theory of CysC and BCM and comparing its performance to previously published models. The BCM-model took the form GFR (mL/min) = 10.2 × (BCM/CysC)^0.40 × (height × body surface area/Crea)^0.65. The model predicted 99% within ±30% of reference GFR, and 67% within ±10%. This was higher than any other model. The

  15. Quantitative estimation of renal function with dynamic contrast-enhanced MRI using a modified two-compartment model.

    Directory of Open Access Journals (Sweden)

    Bin Chen

Full Text Available To establish a simple two-compartment model for glomerular filtration rate (GFR) and renal plasma flow (RPF) estimation by dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). A total of eight New Zealand white rabbits were included in DCE-MRI. The two-compartment model was modified with the impulse residue function in this study. First, the reliability of GFR measurement with the proposed model was compared with other published models in Monte Carlo simulation at different noise levels. Then, functional parameters were estimated in six healthy rabbits to test the feasibility of the new model. Moreover, in order to investigate the validity of its GFR estimation, two rabbits underwent an acute ischemia surgical procedure in one kidney before DCE-MRI, and pixel-wise measurements were implemented to detect cortical GFR alterations between normal and abnormal kidneys. The lowest variability of GFR and RPF measurements was found with the proposed model in the comparison. Mean GFR was 3.03±1.1 ml/min and mean RPF was 2.64±0.5 ml/g/min in normal animals, which is in good agreement with published values. Moreover, a large GFR decline was found in dysfunctional kidneys compared to the contralateral control group. Results in our study demonstrate that measurement of renal kinetic parameters based on the proposed model is feasible and that it has the ability to discriminate GFR changes between healthy and diseased kidneys.

  16. On the relation between S-Estimators and M-Estimators of multivariate location and covariance

    NARCIS (Netherlands)

    Lopuhaa, H.P.

    1987-01-01

We discuss the relation between S-estimators and M-estimators of multivariate location and covariance. As in the case of the estimation of a multiple regression parameter, S-estimators are shown to satisfy first-order conditions of M-estimators. We show that the influence function IF(x; S, F) of

  17. Improving Stability and Convergence for Adaptive Radial Basis Function Neural Networks Algorithm. (On-Line Harmonics Estimation Application

    Directory of Open Access Journals (Sweden)

    Eyad K Almaita

    2017-03-01

Keywords: Energy efficiency, power quality, radial basis function, neural networks, adaptive, harmonic. Article History: Received Dec 15, 2016; Received in revised form Feb 2nd 2017; Accepted 13rd 2017; Available online. How to Cite This Article: Almaita, E.K. and Shawawreh, J.Al (2017) Improving Stability and Convergence for Adaptive Radial Basis Function Neural Networks Algorithm (On-Line Harmonics Estimation Application). International Journal of Renewable Energy Development, 6(1), 9-17. http://dx.doi.org/10.14710/ijred.6.1.9-17

  18. Robust estimation for ordinary differential equation models.

    Science.gov (United States)

    Cao, J; Wang, L; Xu, J

    2011-12-01

    Applied scientists often like to use ordinary differential equations (ODEs) to model complex dynamic processes that arise in biology, engineering, medicine, and many other areas. It is interesting but challenging to estimate ODE parameters from noisy data, especially when the data have some outliers. We propose a robust method to address this problem. The dynamic process is represented with a nonparametric function, which is a linear combination of basis functions. The nonparametric function is estimated by a robust penalized smoothing method. The penalty term is defined with the parametric ODE model, which controls the roughness of the nonparametric function and maintains the fidelity of the nonparametric function to the ODE model. The basis coefficients and ODE parameters are estimated in two nested levels of optimization. The coefficient estimates are treated as an implicit function of ODE parameters, which enables one to derive the analytic gradients for optimization using the implicit function theorem. Simulation studies show that the robust method gives satisfactory estimates for the ODE parameters from noisy data with outliers. The robust method is demonstrated by estimating a predator-prey ODE model from real ecological data. © 2011, The International Biometric Society.
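The two-level scheme described above can be sketched for a toy case: represent x(t) in a polynomial basis, penalize deviation from the ODE x′(t) = −θx(t), solve the inner coefficient problem by linear least squares for each candidate θ, and pick the θ whose profiled fit to the data is best. This is a deliberate simplification: grid search replaces the gradient-based outer optimization via the implicit function theorem, and squared-error loss replaces the robust loss used in the paper:

```python
import numpy as np

def fit_decay_ode(t, y, thetas, lam=5.0, degree=6):
    """Profile basis coefficients for each candidate theta via stacked least
    squares (data fit + ODE penalty), then choose theta by data misfit."""
    X = np.vander(t, degree + 1, increasing=True)   # basis: t^0 .. t^degree
    Xd = np.zeros_like(X)                           # derivative of the basis
    for j in range(1, degree + 1):
        Xd[:, j] = j * t ** (j - 1)
    best = None
    for theta in thetas:
        P = Xd + theta * X                          # residual of x' = -theta x
        A = np.vstack([X, np.sqrt(lam) * P])
        b = np.concatenate([y, np.zeros(len(t))])
        c, *_ = np.linalg.lstsq(A, b, rcond=None)   # inner, linear problem
        sse = np.sum((y - X @ c) ** 2)              # outer criterion
        if best is None or sse < best[0]:
            best = (sse, theta)
    return best[1]
```

On noiseless data generated from x(t) = exp(−0.5 t), the profiled fit is markedly better at the true rate than at the other grid points.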

  19. The Kernel Estimation in Biosystems Engineering

    Directory of Open Access Journals (Sweden)

    Esperanza Ayuga Téllez

    2008-04-01

    Full Text Available In many fields of biosystems engineering, it is common to find works in which the statistical information analysed violates the basic hypotheses required by conventional forecasting methods. For those situations, it is necessary to find alternative methods that allow statistical analysis despite those infringements. Non-parametric function estimation includes methods that fit a target function locally, using data from a small neighbourhood of the point. Weak assumptions, such as continuity and differentiability of the target function, are used instead of an "a priori" assumption about the global shape of the target function (e.g., linear or quadratic). In this paper a few basic decision rules are enunciated for the application of the non-parametric estimation method. These statistical rules constitute the first step towards building a user-method interface for the consistent application of kernel estimation by non-expert users. To reach this aim, univariate and multivariate estimation methods and density functions were analysed, as well as regression estimators. In some cases the models to be applied in different situations, based on simulations, were defined. Different biosystems engineering applications of kernel estimation are also analysed in this review.
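
    As a concrete instance of the kernel estimation discussed above, a univariate Gaussian kernel density estimator with Silverman's rule-of-thumb bandwidth can be written in a few lines (the sample and all names below are our own illustration, not from the review):

```python
import numpy as np

rng = np.random.default_rng(1)
sample = rng.normal(loc=5.0, scale=2.0, size=500)   # synthetic data (our assumption)

def gaussian_kde(x, data, h=None):
    """Gaussian kernel density estimate at points x; Silverman's rule-of-thumb
    bandwidth is used when h is not given."""
    if h is None:
        h = 1.06 * data.std(ddof=1) * data.size ** (-1 / 5)
    u = (x[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (data.size * h * np.sqrt(2 * np.pi))

grid = np.linspace(-3.0, 13.0, 400)
dens = gaussian_kde(grid, sample)
# Trapezoidal mass over the grid -- should be close to 1
mass = float(np.sum((dens[1:] + dens[:-1]) / 2 * np.diff(grid)))
```

    Only continuity of the underlying density is assumed; no global parametric shape is imposed, which is exactly the trade-off the record describes.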

  20. Sliding bifurcations and chaos induced by dry friction in a braking system

    International Nuclear Information System (INIS)

    Yang, F.H.; Zhang, W.; Wang, J.

    2009-01-01

    In this paper, non-smooth bifurcations and chaotic dynamics are investigated for a braking system. A three-degree-of-freedom model is considered to capture the complicated nonlinear characteristics, in particular, non-smooth bifurcations in the braking system. The stick-slip transition is analyzed for the braking system. From the results of numerical simulation, it is observed that there also exist the grazing-sliding bifurcation and stick-slip chaos in the braking system.

  1. Estimating the residential demand function for natural gas in Seoul with correction for sample selection bias

    International Nuclear Information System (INIS)

    Yoo, Seung-Hoon; Lim, Hea-Jin; Kwak, Seung-Jun

    2009-01-01

    Over the last twenty years, the consumption of natural gas in Korea has increased dramatically. This increase has mainly resulted from the rise of consumption in the residential sector. The main objective of the study is to estimate households' demand function for natural gas by applying a sample selection model using data from a survey of households in Seoul. The results show that there exists a selection bias in the sample and that failure to correct for sample selection bias distorts the mean estimate of the demand for natural gas downward by 48.1%. In addition, according to the estimation results, the size of the house, the dummy variable for dwelling in an apartment, the dummy variable for having a bed in an inner room, and the household's income all have positive relationships with the demand for natural gas. On the other hand, the size of the family and the price of gas negatively contribute to the demand for natural gas. (author)

  2. Improved estimation of subject-level functional connectivity using full and partial correlation with empirical Bayes shrinkage.

    Science.gov (United States)

    Mejia, Amanda F; Nebel, Mary Beth; Barber, Anita D; Choe, Ann S; Pekar, James J; Caffo, Brian S; Lindquist, Martin A

    2018-05-15

    Reliability of subject-level resting-state functional connectivity (FC) is determined in part by the statistical techniques employed in its estimation. Methods that pool information across subjects to inform estimation of subject-level effects (e.g., Bayesian approaches) have been shown to enhance reliability of subject-level FC. However, fully Bayesian approaches are computationally demanding, while empirical Bayesian approaches typically rely on using repeated measures to estimate the variance components in the model. Here, we avoid the need for repeated measures by proposing a novel measurement error model for FC describing the different sources of variance and error, which we use to perform empirical Bayes shrinkage of subject-level FC towards the group average. In addition, since the traditional intra-class correlation coefficient (ICC) is inappropriate for biased estimates, we propose a new reliability measure denoted the mean squared error intra-class correlation coefficient (ICCMSE) to properly assess the reliability of the resulting (biased) estimates. We apply the proposed techniques to test-retest resting-state fMRI data on 461 subjects from the Human Connectome Project to estimate connectivity between 100 regions identified through independent components analysis (ICA). We consider both correlation and partial correlation as the measure of FC and assess the benefit of shrinkage for each measure, as well as the effects of scan duration. We find that shrinkage estimates of subject-level FC exhibit substantially greater reliability than traditional estimates across various scan durations, even for the most reliable connections and regardless of connectivity measure. Additionally, we find partial correlation reliability to be highly sensitive to the choice of penalty term, and to be generally worse than that of full correlations except for certain connections and a narrow range of penalty values. This suggests that the penalty term must be chosen carefully when partial correlation is used as the measure of FC.
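
    The core shrinkage step can be illustrated with a simplified scalar version of the idea (not the authors' measurement error model): each subject's noisy connectivity estimate is pulled toward the group average with a weight given by the ratio of noise variance to total variance, estimated by method of moments. The simulation below assumes a known within-subject noise variance; all values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
n_subj, tau, sigma = 200, 0.10, 0.25                      # between- and within-subject sd
true_fc = 0.4 + tau * rng.standard_normal(n_subj)         # subject-level true connectivity
obs_fc = true_fc + sigma * rng.standard_normal(n_subj)    # noisy subject-level estimates

group_mean = obs_fc.mean()
# Method-of-moments variance decomposition (assumes the noise variance is known)
var_between = max(obs_fc.var(ddof=1) - sigma ** 2, 1e-12)
lam = sigma ** 2 / (sigma ** 2 + var_between)             # shrinkage weight toward the group mean
shrunk = lam * group_mean + (1 - lam) * obs_fc

mse_raw = np.mean((obs_fc - true_fc) ** 2)
mse_shrunk = np.mean((shrunk - true_fc) ** 2)
```

    The shrinkage estimates are biased toward the group mean, which is precisely why the paper pairs them with a reliability measure designed for biased estimators, but they trade that bias for a large reduction in variance.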

  3. Estimation of optical rotation of γ-alkylidenebutenolide, cyclopropylamine, cyclopropyl-methanol and cyclopropenone based compounds by a Density Functional Theory (DFT) approach.

    Science.gov (United States)

    Shahzadi, Iram; Shaukat, Aqsa; Zara, Zeenat; Irfan, Muhammad; Eliasson, Bertil; Ayub, Khurshid; Iqbal, Javed

    2017-10-01

    Computing the optical rotation of organic molecules can be a real challenge, and various theoretical approaches have been developed in this regard. A benchmark study of the optical rotation of various classes of compounds was carried out by Density Functional Theory (DFT) methods. The aim of the present research study was to find the best-suited functional and basis set to estimate the optical rotations of selected compounds with respect to experimental literature values. Six DFT functionals, i.e., LSDA, BVP86, CAM-B3LYP, B3LYP, B3PW91, and PBE, were applied to 22 different compounds. Furthermore, six different basis sets, i.e., 3-21G, 6-31G, aug-cc-pVDZ, aug-cc-pVTZ, DGDZVP, and DGDZVP2, were also applied with the best-suited functional, B3LYP. After rigorous effort, it can be safely said that the best combination of functional and basis set is B3LYP/aug-cc-pVTZ for the estimation of optical rotation for the selected compounds. © 2017 Wiley Periodicals, Inc.

  4. Dimensions of Fractals Generated by Bi-Lipschitz Maps

    Directory of Open Access Journals (Sweden)

    Qi-Rong Deng

    2014-01-01

    Full Text Available On the class of iterated function systems of bi-Lipschitz mappings that are contractions with respect to some metrics, we introduce a logarithmic distortion property, which is weaker than the well-known bounded distortion property. By assuming this property, we prove the equality of the Hausdorff and box dimensions of the attractor. We also obtain a formula for the dimension of the attractor in terms of certain modified topological pressure functions, without imposing any separation condition. As an application, we prove the equality of Hausdorff and box dimensions for certain iterated function systems consisting of affine maps and nonsmooth maps.

  5. Estimating genetic covariance functions assuming a parametric correlation structure for environmental effects

    Directory of Open Access Journals (Sweden)

    Meyer Karin

    2001-11-01

    Full Text Available A random regression model for the analysis of "repeated" records in animal breeding is described which combines a random regression approach for additive genetic and other random effects with the assumption of a parametric correlation structure for within-animal covariances. Both stationary and non-stationary correlation models involving a small number of parameters are considered. Heterogeneity in within-animal variances is modelled through polynomial variance functions. Estimation of the parameters describing the dispersion structure of such a model by restricted maximum likelihood via an "average information" algorithm is outlined. An application to mature weight records of beef cows is given, and results are contrasted to those from analyses fitting sets of random regression coefficients for permanent environmental effects.

  6. Two-step estimation for inhomogeneous spatial point processes

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus; Guan, Yongtao

    This paper is concerned with parameter estimation for inhomogeneous spatial point processes with a regression model for the intensity function and tractable second order properties (K-function). Regression parameters are estimated using a Poisson likelihood score estimating function and in a second step minimum contrast estimation is applied for the residual clustering parameters. Asymptotic normality of parameter estimates is established under certain mixing conditions and we exemplify how the results may be applied in ecological studies of rain forests.

  7. Two-step estimation for inhomogeneous spatial point processes

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus; Guan, Yongtao

    2009-01-01

    The paper is concerned with parameter estimation for inhomogeneous spatial point processes with a regression model for the intensity function and tractable second-order properties (K-function). Regression parameters are estimated by using a Poisson likelihood score estimating function, and in the second step minimum contrast estimation is applied for the residual clustering parameters. Asymptotic normality of parameter estimates is established under certain mixing conditions and we exemplify how the results may be applied in ecological studies of rainforests.

  8. Sensitivity of Calibrated Parameters and Water Resource Estimates on Different Objective Functions and Optimization Algorithms

    Directory of Open Access Journals (Sweden)

    Delaram Houshmand Kouchi

    2017-05-01

    Full Text Available The successful application of hydrological models relies on careful calibration and uncertainty analysis. However, there are many different calibration/uncertainty analysis algorithms, and each could be run with different objective functions. In this paper, we highlight the fact that each combination of optimization algorithm and objective function may lead to a different set of optimum parameters while having the same performance; this makes the interpretation of dominant hydrological processes in a watershed highly uncertain. We used three different optimization algorithms (SUFI-2, GLUE, and PSO) and eight different objective functions (R2, bR2, NSE, MNS, RSR, SSQR, KGE, and PBIAS) in a SWAT model to calibrate the monthly discharges in two watersheds in Iran. The results show that all three algorithms, using the same objective function, produced acceptable calibration results, however with significantly different parameter ranges. Similarly, an algorithm using different objective functions also produced acceptable calibration results, but with different parameter ranges. The different calibrated parameter ranges consequently resulted in significantly different water resource estimates. Hence, the parameters and the outputs that they produce in a calibrated model are "conditioned" on the choice of optimization algorithm and objective function. This adds another level of non-negligible uncertainty to watershed models, calling for more attention and investigation in this area.
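
    Three of the objective functions named above (NSE, KGE, and PBIAS) are easy to state explicitly. A minimal sketch, with a made-up observed/simulated discharge pair purely for illustration:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, values below 0 are worse than the mean."""
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias: 0 is perfect; the sign indicates over- or under-estimation."""
    return 100 * np.sum(obs - sim) / np.sum(obs)

def kge(obs, sim):
    """Kling-Gupta efficiency (2009 formulation): 1 is perfect."""
    r = np.corrcoef(obs, sim)[0, 1]
    alpha = sim.std() / obs.std()        # variability ratio
    beta = sim.mean() / obs.mean()       # bias ratio
    return 1 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

# Hypothetical monthly discharges (not from the paper's watersheds)
obs = np.array([10., 12., 18., 25., 16., 11.])
sim = np.array([11., 13., 16., 24., 17., 10.])
```

    Because each function weights bias, variability, and correlation differently, optimizing against different ones can accept different parameter sets, which is the paper's point.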

  9. Development and testing of transfer functions for generating quantitative climatic estimates from Australian pollen data

    Science.gov (United States)

    Cook, Ellyn J.; van der Kaars, Sander

    2006-10-01

    We review attempts to derive quantitative climatic estimates from Australian pollen data, including the climatic envelope, climatic indicator and modern analogue approaches, and outline the need to pursue alternatives for use as input to, or validation of, simulations by models of past, present and future climate patterns. To this end, we have constructed and tested modern pollen-climate transfer functions for mainland southeastern Australia and Tasmania using the existing southeastern Australian pollen database and for northern Australia using a new pollen database we are developing. After testing for statistical significance, 11 parameters were selected for mainland southeastern Australia, seven for Tasmania and six for northern Australia. The functions are based on weighted-averaging partial least squares regression and their predictive ability evaluated against modern observational climate data using leave-one-out cross-validation. Functions for summer, annual and winter rainfall and temperatures are most robust for southeastern Australia, while in Tasmania functions for minimum temperature of the coldest period, mean winter and mean annual temperature are the most reliable. In northern Australia, annual and summer rainfall and annual and summer moisture indexes are the strongest. The validation of all functions means all can be applied to Quaternary pollen records from these three areas with confidence.

  10. A smooth generalized Newton method for a class of non-smooth equations

    International Nuclear Information System (INIS)

    Uko, L. U.

    1995-10-01

    This paper presents a Newton-type iterative scheme for finding the zero of the sum of a differentiable function and a multivalued maximal monotone function. Local and semi-local convergence results are proved for the Newton scheme, and an analogue of the Kantorovich theorem is proved for the associated modified scheme that uses only one Jacobian evaluation for the entire iteration. Applications in variational inequalities are discussed, and an illustrative numerical example is given. (author). 24 refs
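
    A generic semismooth Newton iteration of the broad family this record belongs to can be illustrated on a one-dimensional nonsmooth equation. The toy problem and the function names below are ours; they are not the paper's scheme, which treats a differentiable function plus a multivalued maximal monotone operator.

```python
def semismooth_newton(F, G, x0, tol=1e-10, max_iter=50):
    """Newton-type iteration that uses a generalized derivative G (an element
    of the Clarke subdifferential) wherever F is nonsmooth."""
    x = x0
    for _ in range(max_iter):
        fx = F(x)
        if abs(fx) < tol:
            break
        x -= fx / G(x)
    return x

# Toy nonsmooth equation (illustration only): F(x) = x + max(x, 0) - 3 = 0, root x = 1.5
F = lambda x: x + max(x, 0.0) - 3.0
G = lambda x: 1.0 + (1.0 if x > 0 else 0.0)   # generalized derivative of F
root = semismooth_newton(F, G, x0=-2.0)
```

    Starting from x0 = -2 the iteration crosses the kink at 0 once and then converges in a single further step, illustrating the locally fast convergence such methods are valued for.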

  11. Container Surface Evaluation by Function Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Wendelberger, James G. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-03

    Container images are analyzed for specific surface features, such as pits, cracks, and corrosion. The detection of these features is confounded by complicating features, including shape/curvature, welds, edges, scratches, and foreign objects, among others. A method is provided to discriminate between the various features. The method consists of estimating the image background, determining a residual image, and post-processing to determine the features present. The methodology is not finalized but demonstrates the feasibility of a method to determine the kind and size of the features present.

  12. Glomerular filtration rate is associated with free triiodothyronine in euthyroid subjects : Comparison between various equations to estimate renal function and creatinine clearance

    NARCIS (Netherlands)

    Anderson, Josephine L C; Gruppen, Eke G; van Tienhoven-Wind, Lynnda; Eisenga, Michele F; de Vries, Hanne; Gansevoort, Ron T; Bakker, Stephan J L; Dullaart, Robin P F

    BACKGROUND: Effects of variations in thyroid function within the euthyroid range on renal function are unclear. Cystatin C-based equations to estimate glomerular filtration rate (GFR) are currently advocated for mortality and renal risk prediction. However, the applicability of cystatin C-based

  13. A Block Coordinate Descent Method for Multi-Convex Optimization with Applications to Nonnegative Tensor Factorization and Completion

    Science.gov (United States)

    2012-08-01

    Sciandrone, On the convergence of the block nonlinear Gauss-Seidel method under convex constraints, Oper. Res. Lett., 26 (2000), pp. 127-136. [23] S.P...include nonsmooth functions. Our main interest is the block coordinate descent (BCD) method of the Gauss-Seidel type, which minimizes F cyclically over...original objective around the current iterate. They do not use extrapolation either and only have subsequence convergence. There are examples of ri
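
    The Gauss-Seidel-type BCD method mentioned in this record can be illustrated on nonnegative matrix factorization, where each block subproblem is a nonnegative least-squares problem solved exactly, so the objective is nonincreasing across sweeps. This is a sketch with synthetic data, not the report's algorithm, which also handles nonsmooth terms and extrapolation.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)
X = rng.random((30, 20))                 # synthetic data matrix (our assumption)
r = 5
H = rng.random((r, 20))                  # initial value of the second block

def loss(W, H):
    return np.linalg.norm(X - W @ H) ** 2

losses = []
for _ in range(20):
    # Block 1: exact minimization over W with H fixed (row-wise NNLS)
    W = np.vstack([nnls(H.T, X[i])[0] for i in range(X.shape[0])])
    # Block 2: exact minimization over H with W fixed (column-wise NNLS)
    H = np.column_stack([nnls(W, X[:, j])[0] for j in range(X.shape[1])])
    losses.append(loss(W, H))
```

    Solving each block exactly is what makes the cyclic sweep monotone; with inexact prox-style block updates, as in the report, extra arguments are needed for convergence.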

  14. On Nonconvex Decentralized Gradient Descent

    Science.gov (United States)

    2016-08-01

    and J. Bolte, On the convergence of the proximal algorithm for nonsmooth functions involving analytic features, Math. Program., 116: 5-16, 2009. [2] H...splitting, and regularized Gauss-Seidel methods, Math. Program., Ser. A, 137: 91-129, 2013. [3] P. Bianchi and J. Jakubowicz, Convergence of a multi-agent...subgradient method under random communication topologies, IEEE J. Sel. Top. Signal Process., 5:754-771, 2011. [11] A. Nedic and A. Ozdaglar, Distributed

  15. See food diet? Cultural differences in estimating fullness and intake as a function of plate size.

    Science.gov (United States)

    Peng, Mei; Adam, Sarah; Hautus, Michael J; Shin, Myoungju; Duizer, Lisa M; Yan, Huiquan

    2017-10-01

    Previous research has suggested that manipulations of plate size can have a direct impact on perception of food intake, measured by estimated fullness and intake. The present study, involving 570 individuals across Canada, China, Korea, and New Zealand, is the first empirical study to investigate cultural influences on perception of food portion as a function of plate size. The respondents viewed photographs of ten culturally diverse dishes presented on large (27 cm) and small (23 cm) plates, and then rated their estimated usual intake and expected fullness after consuming the dish, using 100-point visual analog scales. The data were analysed with a mixed-model ANCOVA controlling for individual BMI, liking and familiarity of the presented food. The results showed clear cultural differences: (1) manipulations of the plate size had no effect on the expected fullness or the estimated intake of the Chinese and Korean respondents, as opposed to significant effects in Canadians and New Zealanders (p Asian respondents. Overall, these findings, from a cultural perspective, support the notion that estimation of fullness and intake are learned through dining experiences, and highlight the importance of considering eating environments and contexts when assessing individual behaviours relating to food intake. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Mixture Item Response Theory-MIMIC Model: Simultaneous Estimation of Differential Item Functioning for Manifest Groups and Latent Classes

    Science.gov (United States)

    Bilir, Mustafa Kuzey

    2009-01-01

    This study uses a new psychometric model (mixture item response theory-MIMIC model) that simultaneously estimates differential item functioning (DIF) across manifest groups and latent classes. Current DIF detection methods investigate DIF from only one side, either across manifest groups (e.g., gender, ethnicity, etc.), or across latent classes…

  17. Robust extrapolation scheme for fast estimation of 3D Ising field partition functions: application to within subject fMRI data

    Energy Technology Data Exchange (ETDEWEB)

    Risser, L.; Vincent, T.; Ciuciu, Ph. [NeuroSpin CEA, F-91191 Gif sur Yvette (France); Risser, L.; Vincent, T. [Laboratoire de Neuroimagerie Assistee par Ordinateur (LNAO) CEA - DSV/I2BM/NEUROSPIN (France); Risser, L. [Institut de mecanique des fluides de Toulouse (IMFT), CNRS: UMR5502 - Universite Paul Sabatier - Toulouse III - Institut National Polytechnique de Toulouse - INPT (France); Idier, J. [Institut de Recherche en Communications et en Cybernetique de Nantes (IRCCyN) CNRS - UMR6597 - Universite de Nantes - ecole Centrale de Nantes - Ecole des Mines de Nantes - Ecole Polytechnique de l' Universite de Nantes (France)

    2009-07-01

    In this paper, we present a first numerical scheme to estimate Partition Functions (PF) of 3D Ising fields. Our strategy is applied to the context of the joint detection-estimation of brain activity from functional Magnetic Resonance Imaging (fMRI) data, where the goal is to automatically recover activated regions and estimate region-dependent hemodynamic filters. For any region, a specific binary Markov random field may embody spatial correlation over the hidden states of the voxels by modeling whether they are activated or not. To make this spatial regularization fully adaptive, our approach is first based upon a classical path-sampling method to approximate a small subset of reference PFs corresponding to pre-specified regions. Then, the proposed extrapolation method allows us to approximate the PFs associated with the Ising fields defined over the remaining brain regions. In comparison with preexisting approaches, our method is robust to topological inhomogeneities in the definition of the reference regions. As a result, it strongly alleviates the computational burden and makes spatially adaptive regularization of whole brain fMRI datasets feasible. (authors)

  18. Estimating Water Footprints of Vegetable Crops: Influence of Growing Season, Solar Radiation Data and Functional Unit

    Directory of Open Access Journals (Sweden)

    Betsie le Roux

    2016-10-01

    Full Text Available Water footprint (WF) accounting as proposed by the Water Footprint Network (WFN) can potentially provide important information for water resource management, especially in water-scarce countries relying on irrigation to help meet their food requirements. However, calculating accurate WFs of short-season vegetable crops such as carrots, cabbage, beetroot, broccoli and lettuce presented some challenges. Planting dates and inter-annual weather conditions impact WF results. Joining weather datasets of just rainfall, minimum and maximum temperature with ones that include solar radiation and wind speed affected crop model estimates and WF results. The functional unit selected can also have a major impact on results. For example, WFs according to the WFN approach do not account for crop residues used for other purposes, like composting and animal feed. Using yields in dry matter rather than fresh mass also impacts WF metrics, making comparisons difficult. To overcome this, using the nutritional value of crops as a functional unit can connect water use more directly to the potential benefits derived from different crops and allow more straightforward comparisons. Grey WFs based on nitrogen alone disregard water pollution caused by phosphates, pesticides and salinization. Poor understanding of the fate of nitrogen complicates estimation of nitrogen loads into the aquifer.

  19. Effects of exposure estimation errors on estimated exposure-response relations for PM2.5.

    Science.gov (United States)

    Cox, Louis Anthony Tony

    2018-07-01

    Associations between fine particulate matter (PM2.5) exposure concentrations and a wide variety of undesirable outcomes, from autism and auto theft to elderly mortality, suicide, and violent crime, have been widely reported. Influential articles have argued that reducing National Ambient Air Quality Standards for PM2.5 is desirable to reduce these outcomes. Yet, other studies have found that reducing black smoke and other particulate matter by as much as 70% and dozens of micrograms per cubic meter has not detectably affected all-cause mortality rates even after decades, despite strong, statistically significant positive exposure concentration-response (C-R) associations between them. This paper examines whether this disconnect between association and causation might be explained in part by ignored estimation errors in estimated exposure concentrations. We use EPA air quality monitor data from the Los Angeles area of California to examine the shapes of estimated C-R functions for PM2.5 when the true C-R functions are assumed to be step functions with well-defined response thresholds. The estimated C-R functions mistakenly show risk as smoothly increasing with concentrations even well below the response thresholds, thus incorrectly predicting substantial risk reductions from reductions in concentrations that do not affect health risks. We conclude that ignored estimation errors obscure the shapes of true C-R functions, including possible thresholds, possibly leading to unrealistic predictions of the changes in risk caused by changing exposures. Instead of estimating improvements in public health per unit reduction (e.g., per 10 µg/m 3 decrease) in average PM2.5 concentrations, it may be essential to consider how interventions change the distributions of exposure concentrations. Copyright © 2018 Elsevier Inc. All rights reserved.
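
    The paper's central point, that exposure estimation error smears a step-function concentration-response relation into an apparently smooth one, is easy to reproduce in simulation (the threshold, error variance, and names below are our own assumptions, not the paper's Los Angeles data):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50_000
true_exposure = rng.uniform(0, 40, n)                    # true concentrations
threshold = 25.0
response = (true_exposure > threshold).astype(float)     # true step-function C-R

measured = true_exposure + rng.normal(0, 6.0, n)         # exposure estimation error

# "Estimated" C-R: mean response within bins of *measured* exposure
bins = np.arange(0, 41, 2.0)
idx = np.digitize(measured, bins)
est_cr = np.array([response[idx == k].mean() for k in range(1, len(bins))])
centers = (bins[:-1] + bins[1:]) / 2

# Apparent risk at measured concentrations well below the true threshold
below = est_cr[centers < 20]
```

    The binned curve rises smoothly through the threshold and shows positive apparent risk well below it, exactly the artifact the paper attributes to ignored estimation error.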

  20. Pointwise estimates of pseudo-differential operators

    DEFF Research Database (Denmark)

    Johnsen, Jon

    As a new technique it is shown how general pseudo-differential operators can be estimated at arbitrary points in Euclidean space when acting on functions u with compact spectra. The estimate is a factorisation inequality, in which one factor is the Peetre–Fefferman–Stein maximal function of u, whilst the other is a symbol factor carrying the whole information on the symbol. The symbol factor is estimated in terms of the spectral radius of u, so that the framework is well suited for Littlewood–Paley analysis. It is also shown how it gives easy access to results on polynomial bounds and estimates in Lp, including a new result for type 1,1-operators that they are always bounded on Lp-functions with compact spectra.

  1. Pointwise estimates of pseudo-differential operators

    DEFF Research Database (Denmark)

    Johnsen, Jon

    2011-01-01

    As a new technique it is shown how general pseudo-differential operators can be estimated at arbitrary points in Euclidean space when acting on functions u with compact spectra. The estimate is a factorisation inequality, in which one factor is the Peetre–Fefferman–Stein maximal function of u, whilst the other is a symbol factor carrying the whole information on the symbol. The symbol factor is estimated in terms of the spectral radius of u, so that the framework is well suited for Littlewood–Paley analysis. It is also shown how it gives easy access to results on polynomial bounds and estimates in Lp, including a new result for type 1,1-operators that they are always bounded on Lp-functions with compact spectra.

  2. Histogram Estimators of Bivariate Densities

    National Research Council Canada - National Science Library

    Husemann, Joyce A

    1986-01-01

    One-dimensional fixed-interval histogram estimators of univariate probability density functions are less efficient than the analogous variable-interval estimators which are constructed from intervals...
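
    The contrast drawn in this record can be made concrete: a fixed-interval histogram uses equal-width bins, while a variable-interval estimator can use equal-count (quantile) bins, so bin width adapts to the local density. A minimal sketch with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(5)
data = rng.normal(0, 1, 2000)

# Fixed-interval histogram estimator: equal-width bins
k = 20
f_dens, f_edges = np.histogram(data, bins=k, density=True)

# Variable-interval estimator: equal-count (quantile) bins
v_edges = np.quantile(data, np.linspace(0, 1, k + 1))
v_counts, _ = np.histogram(data, bins=v_edges)
v_dens = v_counts / (data.size * np.diff(v_edges))

mass_fixed = np.sum(f_dens * np.diff(f_edges))     # both estimators integrate to ~1
mass_var = np.sum(v_dens * np.diff(v_edges))
```

    With quantile bins, narrow intervals fall where the data are dense and wide ones in the tails, which is the efficiency advantage the record alludes to.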

  3. Estimation of leaf area index in the sunflower as a function of thermal time

    Directory of Open Access Journals (Sweden)

    Dioneia Daiane Pitol Lucas

    Full Text Available The aim of this study was to obtain a mathematical model for estimating the leaf area index (LAI) of a sunflower crop as a function of accumulated thermal time. Generating the models and testing their coefficients was carried out using data obtained from experiments carried out for different sowing dates in the crop years of 2007/08, 2008/09, 2009/10 and 2010/11 with two sunflower hybrids, Aguará 03 and Hélio 358. Linear leaf dimensions were used for the non-destructive measurement of the leaf area, and thermal time was used to quantify the biological time. With the data for accumulated thermal time (TTa) and LAI known for any one day after emergence, mathematical models were generated for estimating the LAI. The following models were obtained, as they presented the best fit (lowest root-mean-square error, RMSE): gaussian peak, cubic polynomial, sigmoidal and an adjusted compound model, the modified sigmoidal. The modified sigmoidal model had the best fit to the generation data and the highest value for the coefficient of determination (R2). In testing the models, the lowest values for root-mean-square error, and the highest R2 between the observed and estimated values, were obtained with the modified sigmoidal model.
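
    A sigmoidal LAI-versus-thermal-time model of the kind compared in this study can be fitted with standard nonlinear least squares. The sketch below uses a plain logistic curve as a stand-in for the paper's modified sigmoidal model, with synthetic data and assumed parameter values:

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoidal_lai(tt, lai_max, k, tt_mid):
    """Logistic LAI curve vs. accumulated thermal time (a simple stand-in for
    the paper's modified sigmoidal model)."""
    return lai_max / (1 + np.exp(-k * (tt - tt_mid)))

rng = np.random.default_rng(6)
tt = np.linspace(0, 1200, 40)               # accumulated thermal time (degree-days, assumed)
lai_obs = sigmoidal_lai(tt, 4.2, 0.01, 600) + 0.1 * rng.standard_normal(tt.size)

popt, _ = curve_fit(sigmoidal_lai, tt, lai_obs, p0=[4.0, 0.005, 500])
lai_max, k, tt_mid = popt
rmse = np.sqrt(np.mean((sigmoidal_lai(tt, *popt) - lai_obs) ** 2))
```

    Model selection in the paper proceeds exactly this way: fit each candidate curve and keep the one with the lowest RMSE and highest R2 on held-out test data.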

  4. Comparison of volatility function technique for risk-neutral densities estimation

    Science.gov (United States)

    Bahaludin, Hafizah; Abdullah, Mimi Hafizah

    2017-08-01

    The volatility function technique, based on an interpolation approach, plays an important role in extracting the risk-neutral density (RND) of options. The aim of this study is to compare the performances of two interpolation approaches, namely smoothing spline and fourth-order polynomial, in extracting the RND. The implied volatility of options with respect to strike prices/delta is interpolated to obtain a well-behaved density. The statistical analysis and forecast accuracy are tested using moments of distribution. The difference between the first moment of the distribution and the price of the underlying asset at maturity is used as an input to analyze forecast accuracy. RNDs are extracted from the Dow Jones Industrial Average (DJIA) index options with a one-month constant maturity for the period from January 2011 until December 2015. The empirical results suggest that the estimation of RND using a fourth-order polynomial is more appropriate compared to a smoothing spline, in that the fourth-order polynomial gives the lowest mean square error (MSE). The results can be used to help market participants capture market expectations of future developments of the underlying asset.
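
    The fourth-order-polynomial route can be sketched end to end: fit the polynomial to an implied-volatility smile, price calls on a fine strike grid, and apply the Breeden-Litzenberger relation q(K) = e^{rT} d²C/dK² to obtain the RND. All market values below are synthetic assumptions, not DJIA data.

```python
import numpy as np
from scipy.stats import norm

S0, r, T = 100.0, 0.0, 1 / 12            # spot, rate, one-month maturity (assumed)

def bs_call(K, sigma):
    """Black-Scholes call price for strike K and volatility sigma."""
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

# "Observed" implied-vol smile at a handful of strikes (synthetic)
K_obs = np.array([80., 90., 95., 100., 105., 110., 120.])
iv_obs = 0.20 + 5e-5 * (K_obs - 100.0) ** 2

# Step 1: fit a fourth-order polynomial to the smile
coeffs = np.polyfit(K_obs, iv_obs, 4)

# Step 2: price calls on a fine strike grid with the fitted smile
K = np.linspace(70.0, 130.0, 601)
C = bs_call(K, np.polyval(coeffs, K))

# Step 3: Breeden-Litzenberger: q(K) = e^{rT} * d2C/dK2
dK = K[1] - K[0]
q = np.exp(r * T) * np.gradient(np.gradient(C, dK), dK)
mass = np.sum(q) * dK                    # should be close to 1
```

    With a gentle synthetic smile the recovered density is nonnegative and integrates to roughly one over the grid; with real smiles, a badly behaved interpolant can produce negative densities, which is why the choice between spline and polynomial matters in the paper.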

  5. Topological estimation of aerodynamic controlled airplane system functionality of quality

    Directory of Open Access Journals (Sweden)

    С.В. Павлова

    2005-01-01

    Full Text Available It is suggested to use topological methods for the estimation of aerodynamic airplane control quality over a wide range of flight conditions. The estimation is based on calculating the normalized virtual non-isotropy of configurational airplane systems.

  6. Smooth and robust solutions for Dirichlet boundary control of fluid-solid conjugate heat transfer problems

    KAUST Repository

    Yan, Yan

    2015-01-01

    We study a new optimization scheme that generates smooth and robust solutions for Dirichlet velocity boundary control (DVBC) of conjugate heat transfer (CHT) processes. The solutions to the DVBC of the incompressible Navier-Stokes equations are typically nonsmooth, due to the regularity degradation of the boundary stress in the adjoint Navier-Stokes equations. This nonsmoothness is inherited by the solutions to the DVBC of CHT processes, since the CHT process couples the Navier-Stokes equations of fluid motion with the convection-diffusion equations of fluid-solid thermal interaction. Our objective in the CHT boundary control problem is to select optimally the fluid inflow profile that minimizes an objective function that involves the sum of the mismatch between the temperature distribution in the fluid system and a prescribed temperature profile and the cost of the control. Our strategy to resolve the nonsmoothness of the boundary control solution is based on two features, namely, the objective function with a regularization term on the gradient of the control profile on both the continuous and the discrete levels, and the optimization scheme with either explicit or implicit smoothing effects, such as the smoothed Steepest Descent and the Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) methods. Our strategy to achieve the robustness of the solution process is based on combining the smoothed optimization scheme with the numerical continuation technique on the regularization parameters in the objective function. In the section of numerical studies, we present two suites of experiments. In the first one, we demonstrate the feasibility and effectiveness of our numerical schemes in recovering the boundary control profile of the standard case of a Poiseuille flow.
In the second one, we illustrate the robustness of our optimization schemes via solving more challenging DVBC problems for both the channel flow and the flow past a square cylinder, which use initial
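    The regularization-plus-continuation strategy described above can be sketched on a toy quadratic control problem. The operators, dimensions, and continuation schedule below are illustrative assumptions, not the paper's CHT setup, and a direct normal-equations solve stands in for the smoothed descent iterations.

```python
import numpy as np

# Toy sketch of the continuation idea: minimize J(u) = ||A u - b||^2
# + lam * ||D u||^2, where D is a first-difference operator penalizing the
# gradient of the control u, and lam is relaxed toward a small target value
# by numerical continuation. All problem data here are synthetic.

def solve_regularized(A, b, D, lam):
    """Normal-equations solve of the Tikhonov-type problem for fixed lam."""
    H = A.T @ A + lam * (D.T @ D)
    return np.linalg.solve(H, A.T @ b)

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)
# first-difference matrix: (D u)_i = u_{i+1} - u_i
D = np.diff(np.eye(n), axis=0)

# continuation: start with heavy smoothing and relax lam geometrically;
# with an iterative solver such as L-BFGS each solve would be warm-started
# at the previous solution.
u = None
for lam in [1e2, 1e1, 1e0, 1e-1, 1e-2]:
    u = solve_regularized(A, b, D, lam)

residual = np.linalg.norm(A @ u - b)
```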

  7. Assessment of pedotransfer functions for estimating soil water retention curves for the amazon region

    Directory of Open Access Journals (Sweden)

    João Carlos Medeiros

    2014-06-01

    Full Text Available Knowledge of the soil water retention curve (SWRC) is essential for understanding and modeling hydraulic processes in the soil. However, direct determination of the SWRC is time consuming and costly. In addition, it requires a large number of samples, due to the high spatial and temporal variability of soil hydraulic properties. An alternative is the use of models, called pedotransfer functions (PTFs), which estimate the SWRC from easy-to-measure properties. The aim of this paper was to test the accuracy of 16 point or parametric PTFs reported in the literature on different soils from the south and southeast of the State of Pará, Brazil. The PTFs tested were proposed by Pidgeon (1972), Lal (1979), Aina & Periaswamy (1985), Arruda et al. (1987), Dijkerman (1988), Vereecken et al. (1989), Batjes (1996), van den Berg et al. (1997), Tomasella et al. (2000), Hodnett & Tomasella (2002), Oliveira et al. (2002), and Barros (2010). We used a database that includes soil texture (sand, silt, and clay), bulk density, soil organic carbon, soil pH, cation exchange capacity, and the SWRC. Most of the PTFs tested did not show good performance in estimating the SWRC. The parametric PTFs, however, performed better than the point PTFs in assessing the SWRC in the tested region. Among the parametric PTFs, those proposed by Tomasella et al. (2000) achieved the best accuracy in estimating the empirical parameters of the van Genuchten (1980) model, especially when tested in the top soil layer.
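    A minimal sketch of the curve the parametric PTFs target: the van Genuchten (1980) retention model, whose empirical parameters the PTFs estimate. The parameter values below are illustrative only, not fitted to Amazonian soils.

```python
import numpy as np

# van Genuchten (1980) water retention model with the usual m = 1 - 1/n.

def van_genuchten(h, theta_r, theta_s, alpha, n):
    """Water content theta at suction head h (h >= 0)."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

h = np.logspace(-1, 4, 6)    # suction heads (e.g., cm), 0.1 to 10000
theta = van_genuchten(h, theta_r=0.10, theta_s=0.45, alpha=0.05, n=1.8)
# theta decreases monotonically from near theta_s toward theta_r
```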

  8. Estimate of the angular dimensions of objects and reconstruction of their shapes from the parameters of the fourth-order radiation correlation function

    International Nuclear Information System (INIS)

    Buryi, E V; Kosygin, A A

    2004-01-01

    It is shown that, when the angular resolution of a receiving optical system is insufficient, the angular dimensions of a located object can be estimated and its shape can be reconstructed by estimating the parameters of the fourth-order correlation function (CF) of scattered coherent radiation. The reliability of the estimates of CF counts obtained by the method of a discrete spatial convolution of the intensity-field counts, the possibility of estimating the CF profile counts by the method of one-dimensional convolution of intensity counts, and the applicability of the method for reconstructing the object shape are confirmed experimentally. (laser applications and other topics in quantum electronics)

  9. High throughput nonparametric probability density estimation.

    Science.gov (United States)

    Farmer, Jenny; Jacobs, Donald

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and overfitting the data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.
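    The quantile-residual diagnostic mentioned above can be sketched as follows. The scaling convention (sorted CDF values compared against uniform order statistics with mean i/(n+1)) is a plausible reading of the idea, not necessarily the authors' exact definition.

```python
import numpy as np
from math import erf

# If F is the true CDF, the values u_(i) = F(x_(i)) of sorted samples behave
# like uniform order statistics with mean i/(n+1); the scaled residuals
# sqrt(n) * (u_(i) - i/(n+1)) should fluctuate around zero when the density
# estimate is good, and drift systematically when it is not.

def scaled_quantile_residuals(samples, cdf):
    x = np.sort(samples)
    n = len(x)
    u = cdf(x)
    expected = np.arange(1, n + 1) / (n + 1)
    return np.sqrt(n) * (u - expected)

rng = np.random.default_rng(1)
data = rng.normal(size=2000)
# exact standard-normal CDF via the error function
cdf = lambda x: 0.5 * (1.0 + np.vectorize(erf)(x / np.sqrt(2.0)))
res = scaled_quantile_residuals(data, cdf)
```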

  10. Estimating the population size and colony boundary of subterranean termites by using the density functions of directionally averaged capture probability.

    Science.gov (United States)

    Su, Nan-Yao; Lee, Sang-Hee

    2008-04-01

    Marked termites were released in a linear-connected foraging arena, and the spatial heterogeneity of their capture probabilities was averaged for both directions at distance r from release point to obtain a symmetrical distribution, from which the density function of directionally averaged capture probability P(x) was derived. We hypothesized that as marked termites move into the population and given sufficient time, the directionally averaged capture probability may reach an equilibrium P(e) over the distance r and thus satisfy the equal mixing assumption of the mark-recapture protocol. The equilibrium capture probability P(e) was used to estimate the population size N. The hypothesis was tested in a 50-m extended foraging arena to simulate the distance factor of field colonies of subterranean termites. Over the 42-d test period, the density functions of directionally averaged capture probability P(x) exhibited four phases: exponential decline phase, linear decline phase, equilibrium phase, and postequilibrium phase. The equilibrium capture probability P(e), derived as the intercept of the linear regression during the equilibrium phase, correctly projected N estimates that were not significantly different from the known number of workers in the arena. Because the area beneath the probability density function is a constant (50% in this study), preequilibrium regression parameters and P(e) were used to estimate the population boundary distance l, which is the distance between the release point and the boundary beyond which the population is absent.
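    A hedged sketch of the equilibrium step with synthetic numbers (not the paper's data): during the equilibrium phase the directionally averaged capture probability is roughly flat over r, so the intercept of a linear fit gives P(e), and the equal-mixing mark-recapture relation N = marked / P(e) yields the population estimate. The paper's actual estimator may differ in detail.

```python
import numpy as np

rng = np.random.default_rng(2)
marked = 500                 # marked termites released (illustrative)
true_N = 10_000
P_e_true = marked / true_N   # 0.05 under equal mixing

# synthetic capture probabilities over distance r in the equilibrium phase:
# flat at P_e plus small measurement noise
r = np.linspace(10.0, 40.0, 16)
P = P_e_true + 0.0 * r + rng.normal(scale=0.002, size=r.size)

# intercept of the linear regression over the equilibrium phase -> P_e
slope, intercept = np.polyfit(r, P, 1)
N_hat = marked / intercept
```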

  11. Tensor Completion for Estimating Missing Values in Visual Data

    KAUST Repository

    Liu, Ji

    2012-01-25

    In this paper, we propose an algorithm to estimate missing values in tensors of visual data. The values can be missing due to problems in the acquisition process or because the user manually identified unwanted outliers. Our algorithm works even with a small amount of samples and it can propagate structure to fill larger missing regions. Our methodology is built on recent studies about matrix completion using the matrix trace norm. The contribution of our paper is to extend the matrix case to the tensor case by proposing the first definition of the trace norm for tensors and then by building a working algorithm. First, we propose a definition for the tensor trace norm that generalizes the established definition of the matrix trace norm. Second, similarly to matrix completion, the tensor completion is formulated as a convex optimization problem. Unfortunately, the straightforward problem extension is significantly harder to solve than the matrix case because of the dependency among multiple constraints. To tackle this problem, we developed three algorithms: simple low rank tensor completion (SiLRTC), fast low rank tensor completion (FaLRTC), and high accuracy low rank tensor completion (HaLRTC). The SiLRTC algorithm is simple to implement and employs a relaxation technique to separate the dependent relationships and uses the block coordinate descent (BCD) method to achieve a globally optimal solution; the FaLRTC algorithm utilizes a smoothing scheme to transform the original nonsmooth problem into a smooth one and can be used to solve a general tensor trace norm minimization problem; the HaLRTC algorithm applies the alternating direction method of multipliers (ADMMs) to our problem. Our experiments show potential applications of our algorithms and the quantitative evaluation indicates that our methods are more accurate and robust than heuristic approaches. The efficiency comparison indicates that FaLRTC and HaLRTC are more efficient than SiLRTC and between Fa

  12. Tensor Completion for Estimating Missing Values in Visual Data

    KAUST Repository

    Liu, Ji; Musialski, Przemyslaw; Wonka, Peter; Ye, Jieping

    2012-01-01

    In this paper, we propose an algorithm to estimate missing values in tensors of visual data. The values can be missing due to problems in the acquisition process or because the user manually identified unwanted outliers. Our algorithm works even with a small amount of samples and it can propagate structure to fill larger missing regions. Our methodology is built on recent studies about matrix completion using the matrix trace norm. The contribution of our paper is to extend the matrix case to the tensor case by proposing the first definition of the trace norm for tensors and then by building a working algorithm. First, we propose a definition for the tensor trace norm that generalizes the established definition of the matrix trace norm. Second, similarly to matrix completion, the tensor completion is formulated as a convex optimization problem. Unfortunately, the straightforward problem extension is significantly harder to solve than the matrix case because of the dependency among multiple constraints. To tackle this problem, we developed three algorithms: simple low rank tensor completion (SiLRTC), fast low rank tensor completion (FaLRTC), and high accuracy low rank tensor completion (HaLRTC). The SiLRTC algorithm is simple to implement and employs a relaxation technique to separate the dependent relationships and uses the block coordinate descent (BCD) method to achieve a globally optimal solution; the FaLRTC algorithm utilizes a smoothing scheme to transform the original nonsmooth problem into a smooth one and can be used to solve a general tensor trace norm minimization problem; the HaLRTC algorithm applies the alternating direction method of multipliers (ADMMs) to our problem. Our experiments show potential applications of our algorithms and the quantitative evaluation indicates that our methods are more accurate and robust than heuristic approaches. The efficiency comparison indicates that FaLRTC and HaLRTC are more efficient than SiLRTC and between Fa

  13. Tensor completion for estimating missing values in visual data.

    Science.gov (United States)

    Liu, Ji; Musialski, Przemyslaw; Wonka, Peter; Ye, Jieping

    2013-01-01

    In this paper, we propose an algorithm to estimate missing values in tensors of visual data. The values can be missing due to problems in the acquisition process or because the user manually identified unwanted outliers. Our algorithm works even with a small amount of samples and it can propagate structure to fill larger missing regions. Our methodology is built on recent studies about matrix completion using the matrix trace norm. The contribution of our paper is to extend the matrix case to the tensor case by proposing the first definition of the trace norm for tensors and then by building a working algorithm. First, we propose a definition for the tensor trace norm that generalizes the established definition of the matrix trace norm. Second, similarly to matrix completion, the tensor completion is formulated as a convex optimization problem. Unfortunately, the straightforward problem extension is significantly harder to solve than the matrix case because of the dependency among multiple constraints. To tackle this problem, we developed three algorithms: simple low rank tensor completion (SiLRTC), fast low rank tensor completion (FaLRTC), and high accuracy low rank tensor completion (HaLRTC). The SiLRTC algorithm is simple to implement and employs a relaxation technique to separate the dependent relationships and uses the block coordinate descent (BCD) method to achieve a globally optimal solution; the FaLRTC algorithm utilizes a smoothing scheme to transform the original nonsmooth problem into a smooth one and can be used to solve a general tensor trace norm minimization problem; the HaLRTC algorithm applies the alternating direction method of multipliers (ADMMs) to our problem. Our experiments show potential applications of our algorithms and the quantitative evaluation indicates that our methods are more accurate and robust than heuristic approaches. 
The efficiency comparison indicates that FaLRTC and HaLRTC are more efficient than SiLRTC and between FaLRTC an
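    A matrix-case sketch of the relaxation the LRTC algorithms build on: iterative singular-value shrinkage for trace-norm-regularized completion, with observed entries held fixed. The tensor algorithms (SiLRTC/FaLRTC/HaLRTC) apply this idea to each tensor unfolding; this toy code handles only a single low-rank matrix.

```python
import numpy as np

def shrink(X, tau):
    """Singular value shrinkage: proximal operator of tau * trace norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)

def complete(M, mask, tau=0.5, n_iter=200):
    """Alternate shrinkage with re-imposing the observed entries."""
    X = np.where(mask, M, 0.0)
    for _ in range(n_iter):
        X = shrink(X, tau)
        X[mask] = M[mask]          # keep observed entries fixed
    return X

rng = np.random.default_rng(3)
A = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))  # rank 2
mask = rng.random(A.shape) < 0.6                                 # 60% observed
X = complete(A, mask)
rel_err = np.linalg.norm(X - A) / np.linalg.norm(A)
```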

  14. Better estimation of protein-DNA interaction parameters improves prediction of functional sites

    Directory of Open Access Journals (Sweden)

    O'Flanagan Ruadhan A

    2008-12-01

    Full Text Available Abstract Background Characterizing transcription factor binding motifs is a common bioinformatics task. For transcription factors with variable binding sites, we need many suboptimal binding sites in our training dataset to get accurate estimates of free energy penalties for deviating from the consensus DNA sequence. One procedure to do that involves a modified SELEX (Systematic Evolution of Ligands by Exponential Enrichment) method designed to produce many such sequences. Results We analyzed low-stringency SELEX data for E. coli Catabolite Activator Protein (CAP), and we show here that appropriate quantitative analysis improves our ability to predict in vitro affinity. To obtain the large number of sequences required for this analysis we used a SELEX SAGE protocol developed by Roulet et al. The sequences obtained were subjected to bioinformatic analysis. The resulting bioinformatic model characterizes the sequence specificity of the protein more accurately than specificities predicted from previous analyses that used only the few known binding sites available in the literature. The consequences of this increase in accuracy for prediction of in vivo binding sites (and especially functional ones) in the E. coli genome are also discussed. We measured the dissociation constants of several putative CAP binding sites by EMSA (Electrophoretic Mobility Shift Assay) and compared the affinities to the bioinformatics scores provided by methods like the weight matrix method and QPMEME (Quadratic Programming Method of Energy Matrix Estimation) trained on known binding sites as well as on the new sites from SELEX SAGE data. We also checked predicted genome sites for conservation in the related species S. typhimurium. We found that bioinformatics scores based on SELEX SAGE data do better in terms of prediction of physical binding energies as well as in detecting functional sites. Conclusion We think that training binding site detection
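    A hedged sketch of the general weight-matrix scoring technique underlying this comparison (the actual QPMEME energy matrices in the paper are trained differently, and the count matrix below is made up for illustration):

```python
import numpy as np

# rows: A, C, G, T; columns: motif positions (illustrative counts)
counts = np.array([
    [8, 2, 10, 1],
    [1, 1,  0, 2],
    [1, 6,  0, 1],
    [2, 3,  2, 8],
], dtype=float)
freqs = (counts + 1.0) / (counts + 1.0).sum(axis=0)  # pseudocount smoothing
background = 0.25
pwm = np.log2(freqs / background)                    # log-odds weight matrix

index = {b: i for i, b in enumerate("ACGT")}

def score(seq):
    """Sum of per-position log-odds; higher = closer to the consensus."""
    return sum(pwm[index[b], j] for j, b in enumerate(seq))

s_cons = score("AGAT")   # consensus of the toy matrix
s_rand = score("CCCC")   # a poor match
```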

  15. Two Approaches to Estimating the Effect of Parenting on the Development of Executive Function in Early Childhood

    Science.gov (United States)

    Blair, Clancy; Raver, C. Cybele; Berry, Daniel J.

    2015-01-01

    In the current article, we contrast 2 analytical approaches to estimate the relation of parenting to executive function development in a sample of 1,292 children assessed longitudinally between 36 and 60 months of age. Children were administered a newly developed and validated battery of 6 executive function tasks tapping inhibitory control, working memory, and attention shifting. Residualized change analysis indicated that higher quality parenting as indicated by higher scores on widely used measures of parenting at both earlier and later time points predicted more positive gain in executive function at 60 months. Latent change score models in which parenting and executive function over time were held to standards of longitudinal measurement invariance provided additional evidence of the association between change in parenting quality and change in executive function. In these models, cross-lagged paths indicated that in addition to parenting predicting change in executive function, executive function bidirectionally predicted change in parenting quality. Results were robust with the addition of covariates, including child sex, race, maternal education, and household income-to-need. Strengths and drawbacks of the 2 analytic approaches are discussed, and the findings are considered in light of emerging methodological innovations for testing the extent to which executive function is malleable and open to the influence of experience. PMID:23834294
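    The first approach, residualized change analysis, can be sketched with synthetic data: regress the later executive function score on the earlier score plus parenting, so the parenting coefficient captures its association with *change* in executive function net of baseline. All numbers below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000
parenting = rng.normal(size=n)
ef_36 = rng.normal(size=n)                                  # EF at 36 months
ef_60 = 0.6 * ef_36 + 0.3 * parenting + rng.normal(scale=0.5, size=n)

# ordinary least squares: ef_60 ~ intercept + ef_36 + parenting
X = np.column_stack([np.ones(n), ef_36, parenting])
beta, *_ = np.linalg.lstsq(X, ef_60, rcond=None)
# beta[2] estimates the parenting effect on residualized change in EF
```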

  16. A Novel Group-Fused Sparse Partial Correlation Method for Simultaneous Estimation of Functional Networks in Group Comparison Studies.

    Science.gov (United States)

    Liang, Xiaoyun; Vaughan, David N; Connelly, Alan; Calamante, Fernando

    2018-05-01

    The conventional way to estimate functional networks is primarily based on Pearson correlation along with classic Fisher Z test. In general, networks are usually calculated at the individual-level and subsequently aggregated to obtain group-level networks. However, such estimated networks are inevitably affected by the inherent large inter-subject variability. A joint graphical model with Stability Selection (JGMSS) method was recently shown to effectively reduce inter-subject variability, mainly caused by confounding variations, by simultaneously estimating individual-level networks from a group. However, its benefits might be compromised when two groups are being compared, given that JGMSS is blinded to other groups when it is applied to estimate networks from a given group. We propose a novel method for robustly estimating networks from two groups by using group-fused multiple graphical-lasso combined with stability selection, named GMGLASS. Specifically, by simultaneously estimating similar within-group networks and between-group difference, it is possible to address inter-subject variability of estimated individual networks inherently related with existing methods such as Fisher Z test, and issues related to JGMSS ignoring between-group information in group comparisons. To evaluate the performance of GMGLASS in terms of a few key network metrics, as well as to compare with JGMSS and Fisher Z test, they are applied to both simulated and in vivo data. As a method aiming for group comparison studies, our study involves two groups for each case, i.e., normal control and patient groups; for in vivo data, we focus on a group of patients with right mesial temporal lobe epilepsy.

  17. Nonparametric Transfer Function Models

    Science.gov (United States)

    Liu, Jun M.; Chen, Rong; Yao, Qiwei

    2009-01-01

    In this paper a class of nonparametric transfer function models is proposed to model nonlinear relationships between ‘input’ and ‘output’ time series. The transfer function is smooth with unknown functional forms, and the noise is assumed to be a stationary autoregressive-moving average (ARMA) process. The nonparametric transfer function is estimated jointly with the ARMA parameters. By modeling the correlation in the noise, the transfer function can be estimated more efficiently. The parsimonious ARMA structure improves the estimation efficiency in finite samples. The asymptotic properties of the estimators are investigated. The finite-sample properties are illustrated through simulations and one empirical example. PMID:20628584
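    A toy sketch of the core idea: estimate a smooth transfer function f in y_t = f(x_{t-1}) + noise with a Nadaraya-Watson kernel smoother. The paper additionally models ARMA noise and estimates jointly, which this sketch omits; bandwidth and data are illustrative.

```python
import numpy as np

def nw_estimate(x_lag, y, grid, h=0.3):
    """Gaussian-kernel regression of the output on the lagged input."""
    w = np.exp(-0.5 * ((grid[:, None] - x_lag[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

rng = np.random.default_rng(6)
x = rng.uniform(-2, 2, size=1500)                    # stand-in for x_{t-1}
y = np.sin(x) + rng.normal(scale=0.1, size=x.size)   # y_t = f(x_{t-1}) + e_t
grid = np.linspace(-1.5, 1.5, 7)
f_hat = nw_estimate(x, y, grid)                      # recovers sin() closely
```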

  18. Estimation of delays and other parameters in nonlinear functional differential equations

    Science.gov (United States)

    Banks, H. T.; Lamm, P. K. D.

    1983-01-01

    A spline-based approximation scheme for nonlinear nonautonomous delay differential equations is discussed. Convergence results (using dissipative type estimates on the underlying nonlinear operators) are given in the context of parameter estimation problems which include estimation of multiple delays and initial data as well as the usual coefficient-type parameters. A brief summary of some of the related numerical findings is also given.
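    A hedged toy version of delay estimation (not the spline-based scheme of the paper): simulate x'(t) = -x(t - tau) by forward Euler with constant history, then recover tau from noisy observations by grid search on the least-squares fit.

```python
import numpy as np

def simulate(tau, T=10.0, dt=0.01):
    """Forward-Euler method of steps for x'(t) = -x(t - tau), x = 1 on [-tau, 0]."""
    n = int(T / dt)
    lag = int(round(tau / dt))
    x = np.ones(n + 1)
    for i in range(n):
        delayed = x[i - lag] if i - lag >= 0 else 1.0
        x[i + 1] = x[i] - dt * delayed
    return x

true_tau = 0.5
rng = np.random.default_rng(7)
noisy = simulate(true_tau) + rng.normal(scale=0.01, size=1001)

# least-squares grid search over candidate delays
taus = np.arange(0.1, 1.01, 0.05)
errors = [np.sum((simulate(t) - noisy) ** 2) for t in taus]
tau_hat = taus[int(np.argmin(errors))]
```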

  19.  Higher Order Improvements for Approximate Estimators

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Salanié, Bernard

    Many modern estimation methods in econometrics approximate an objective function, through simulation or discretization for instance. The resulting "approximate" estimator is often biased; and it always incurs an efficiency loss. We here propose three methods to improve the properties of such approximate estimators at a low computational cost. The first two methods correct the objective function so as to remove the leading term of the bias due to the approximation. One variant provides an analytical bias adjustment, but it only works for estimators based on stochastic approximators, such as simulation-based estimators. Our second bias correction is based on ideas from the resampling literature; it eliminates the leading bias term for non-stochastic as well as stochastic approximators. Finally, we propose an iterative procedure where we use Newton-Raphson (NR) iterations based on a much finer...
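    A toy version of the iterative idea: start from an estimator computed on a coarse simulated approximation of the objective, then take a Newton-Raphson step on a much finer approximation. The objective here is a simple simulated least-squares criterion, illustrative only.

```python
import numpy as np

def objective(theta, n_draws, seed=0):
    """Simulated approximation of E[(theta - Z)^2], minimized near E[Z] = 0."""
    rng = np.random.default_rng(seed)
    draws = rng.normal(size=n_draws)
    return np.mean((theta - draws) ** 2)

def newton_step(f, theta, eps=1e-4):
    """One Newton-Raphson step using central finite differences."""
    fp, f0, fm = f(theta + eps), f(theta), f(theta - eps)
    g = (fp - fm) / (2 * eps)
    h = (fp - 2 * f0 + fm) / eps ** 2
    return theta - g / h

coarse = lambda t: objective(t, n_draws=50)        # cheap, noisy objective
fine = lambda t: objective(t, n_draws=200_000)     # much finer approximation

theta0 = min(np.linspace(-1, 1, 41), key=coarse)   # coarse grid estimator
theta1 = newton_step(fine, theta0)                 # one NR refinement
```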

  20. Multivariable Frequency Response Functions Estimation for Industrial Robots

    NARCIS (Netherlands)

    Hardeman, T.; Aarts, Ronald G.K.M.; Jonker, Jan B.

    2005-01-01

    The accuracy of industrial robots limits their applicability for highly demanding processes, like robotised laser welding. We are working on a nonlinear flexible model of the robot manipulator to predict these inaccuracies. This poster presents the experimental results on estimating the Multivariable

  1. Error analysis of semidiscrete finite element methods for inhomogeneous time-fractional diffusion

    KAUST Repository

    Jin, B.; Lazarov, R.; Pasciak, J.; Zhou, Z.

    2014-01-01

    © 2014 Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved. We consider the initial-boundary value problem for an inhomogeneous time-fractional diffusion equation with a homogeneous Dirichlet boundary condition, a vanishing initial data and a nonsmooth right-hand side in a bounded convex polyhedral domain. We analyse two semidiscrete schemes based on the standard Galerkin and lumped mass finite element methods. Almost optimal error estimates are obtained for right-hand side data f(x, t) ∈ L∞(0, T; Hq(Ω)), −1 < q ≤ 1, for both semidiscrete schemes. For the lumped mass method, the optimal L2(Ω)-norm error estimate requires symmetric meshes. Finally, two-dimensional numerical experiments are presented to verify our theoretical results.

  2. On splice site prediction using weight array models: a comparison of smoothing techniques

    International Nuclear Information System (INIS)

    Taher, Leila; Meinicke, Peter; Morgenstern, Burkhard

    2007-01-01

    In most eukaryotic genes, protein-coding exons are separated by non-coding introns which are removed from the primary transcript by a process called 'splicing'. The positions where introns are cut and exons are spliced together are called 'splice sites'. Thus, computational prediction of splice sites is crucial for gene finding in eukaryotes. Weight array models are a powerful probabilistic approach to splice site detection. Parameters for these models are usually derived from m-tuple frequencies in trusted training data and subsequently smoothed to avoid zero probabilities. In this study we compare three different ways of parameter estimation for m-tuple frequencies, namely (a) non-smoothed probability estimation, (b) standard pseudo counts and (c) a Gaussian smoothing procedure that we recently developed
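    Options (a) and (b) from the abstract can be sketched for dinucleotide (2-tuple) frequencies at one position of a weight array model; the Gaussian smoothing of option (c) is specific to the paper and not reproduced here. The training sequences are made up.

```python
from itertools import product

training = ["AGGT", "AGGT", "ACGT", "AGGA", "TGGT"]
pairs = ["".join(p) for p in product("ACGT", repeat=2)]

def tuple_probs(seqs, pos, pseudocount=0.0):
    """P(pair at positions pos, pos+1), optionally pseudocount-smoothed."""
    counts = {p: pseudocount for p in pairs}
    for s in seqs:
        counts[s[pos:pos + 2]] += 1
    total = sum(counts.values())
    return {p: c / total for p, c in counts.items()}

raw = tuple_probs(training, 0)            # (a): zero probability for unseen pairs
smoothed = tuple_probs(training, 0, 1.0)  # (b): add-one pseudocounts avoid zeros
```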

  3. Error analysis of semidiscrete finite element methods for inhomogeneous time-fractional diffusion

    KAUST Repository

    Jin, B.

    2014-05-30

    © 2014 Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved. We consider the initial-boundary value problem for an inhomogeneous time-fractional diffusion equation with a homogeneous Dirichlet boundary condition, a vanishing initial data and a nonsmooth right-hand side in a bounded convex polyhedral domain. We analyse two semidiscrete schemes based on the standard Galerkin and lumped mass finite element methods. Almost optimal error estimates are obtained for right-hand side data f(x, t) ∈ L∞(0, T; Hq(Ω)), −1 < q ≤ 1, for both semidiscrete schemes. For the lumped mass method, the optimal L2(Ω)-norm error estimate requires symmetric meshes. Finally, two-dimensional numerical experiments are presented to verify our theoretical results.

  4. Hypnosis and pain perception: An Activation Likelihood Estimation (ALE) meta-analysis of functional neuroimaging studies.

    Science.gov (United States)

    Del Casale, Antonio; Ferracuti, Stefano; Rapinesi, Chiara; De Rossi, Pietro; Angeletti, Gloria; Sani, Gabriele; Kotzalidis, Georgios D; Girardi, Paolo

    2015-12-01

    Several studies reported that hypnosis can modulate pain perception and tolerance by affecting cortical and subcortical activity in brain regions involved in these processes. We conducted an Activation Likelihood Estimation (ALE) meta-analysis on functional neuroimaging studies of pain perception under hypnosis to identify brain activation-deactivation patterns occurring during hypnotic suggestions aiming at pain reduction, including hypnotic analgesic, pleasant, or depersonalization suggestions (HASs). We searched the PubMed, Embase and PsycInfo databases; we included papers published in peer-reviewed journals dealing with functional neuroimaging and hypnosis-modulated pain perception. The ALE meta-analysis encompassed data from 75 healthy volunteers reported in 8 functional neuroimaging studies. HASs during experimentally-induced pain compared to control conditions correlated with significant activations of the right anterior cingulate cortex (Brodmann's Area [BA] 32), left superior frontal gyrus (BA 6), and right insula, and deactivation of right midline nuclei of the thalamus. HASs during experimental pain impact both cortical and subcortical brain activity. The anterior cingulate, left superior frontal, and right insular cortices activation increases could induce a thalamic deactivation (top-down inhibition), which may correlate with reductions in pain intensity. Copyright © 2016 Elsevier Ltd. All rights reserved.
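    A one-dimensional toy illustration of the ALE principle: each study's reported focus becomes a Gaussian "modeled activation" map, and maps are combined by the probabilistic union 1 - prod(1 - MA_i). Real ALE works on 3-D brains with sample-size-dependent kernels and a permutation-based null; coordinates and kernel width below are invented.

```python
import numpy as np

grid = np.linspace(0, 100, 501)

def ma_map(focus, fwhm=10.0):
    """Gaussian modeled-activation map for a single reported focus."""
    sigma = fwhm / 2.355
    return np.exp(-0.5 * ((grid - focus) / sigma) ** 2)

foci = [48.0, 50.0, 52.0, 80.0]          # illustrative peak coordinates
ma = np.array([ma_map(f) for f in foci])
ale = 1.0 - np.prod(1.0 - ma, axis=0)    # probabilistic union of maps
peak = grid[int(np.argmax(ale))]         # lands in the converging cluster
```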

  5. Regularization by External Variables

    DEFF Research Database (Denmark)

    Bossolini, Elena; Edwards, R.; Glendinning, P. A.

    2016-01-01

    Regularization was a big topic at the 2016 CRM Intensive Research Program on Advances in Nonsmooth Dynamics. There are many open questions concerning well known kinds of regularization (e.g., by smoothing or hysteresis). Here, we propose a framework for an alternative and important kind of regula......Regularization was a big topic at the 2016 CRM Intensive Research Program on Advances in Nonsmooth Dynamics. There are many open questions concerning well known kinds of regularization (e.g., by smoothing or hysteresis). Here, we propose a framework for an alternative and important kind...

  6. Semiparametric estimation of covariance matrices for longitudinal data.

    Science.gov (United States)

    Fan, Jianqing; Wu, Yichao

    2008-12-01

    Estimation of longitudinal data covariance structure poses significant challenges because the data are usually collected at irregular time points. A viable semiparametric model for covariance matrices was proposed in Fan, Huang and Li (2007) that allows one to estimate the variance function nonparametrically and to estimate the correlation function parametrically via aggregating information from irregular and sparse data points within each subject. However, the asymptotic properties of their quasi-maximum likelihood estimator (QMLE) of parameters in the covariance model are largely unknown. In the current work, we address this problem in the context of more general models for the conditional mean function including parametric, nonparametric, or semi-parametric. We also consider the possibility of rough mean regression function and introduce the difference-based method to reduce biases in the context of varying-coefficient partially linear mean regression models. This provides a more robust estimator of the covariance function under a wider range of situations. Under some technical conditions, consistency and asymptotic normality are obtained for the QMLE of the parameters in the correlation function. Simulation studies and a real data example are used to illustrate the proposed approach.

  7. Estimating Glomerular Filtration Rate in Older People

    Directory of Open Access Journals (Sweden)

    Sabrina Garasto

    2014-01-01

    Full Text Available We aimed at reviewing age-related changes in kidney structure and function, methods for estimating kidney function, and impact of reduced kidney function on geriatric outcomes, as well as the reliability and applicability of equations for estimating glomerular filtration rate (eGFR) in older patients. CKD is associated with different comorbidities and adverse outcomes such as disability and premature death in older populations. Creatinine clearance and other methods for estimating kidney function are not easy to apply in older subjects. Thus, an accurate and reliable method for calculating eGFR would be highly desirable for early detection and management of CKD in this vulnerable population. Equations based on serum creatinine, age, race, and gender have been widely used. However, these equations have their own limitations, and no equation seems better than the other ones in older people. New equations specifically developed for use in older populations, especially those based on serum cystatin C, hold promise. However, further studies are needed to definitely accept them as the reference method to estimate kidney function in older patients in the clinical setting.
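    One widely used creatinine-based equation of the kind reviewed here is the CKD-EPI 2009 form. The coefficients below are as commonly published; verify them against the original source before any clinical use.

```python
def ckd_epi_2009(scr_mg_dl, age, female, black=False):
    """eGFR in mL/min/1.73 m^2 from serum creatinine (mg/dL), age, sex, race."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141.0
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

egfr = ckd_epi_2009(scr_mg_dl=1.0, age=75, female=False)
```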

  8. RETURN DAN RISIKO SAHAM PADA PERUSAHAAN PERATA LABA DAN BUKAN PERATA LABA

    Directory of Open Access Journals (Sweden)

    Dwi Putra R.A.

    2013-05-01

    Full Text Available Income smoothing is a common practice by corporate managers to reduce fluctuations in earnings, which is expected to have beneficial effects for management performance evaluation. Some researchers believe that investors have a much stronger tendency to invest in companies that apply income smoothing, believing that smoother companies offer different return and investment risk. Some studies find differences in return and investment risk between smoother and non-smoother companies; others find no difference. This study examines the difference in investment risk and return between smoother and non-smoother manufacturing companies listed on the Indonesia Stock Exchange in 2009-2011. The companies were classified with the Eckel Index, with earnings measured by operating income, income before taxes, and income after taxes. The study shows that there is no difference in investment return between smoother and non-smoother companies; however, there is a difference in investment risk between them. Keywords: Return, Risk, Income smoothing, Beta
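    The Eckel classification used in the study can be sketched as follows: a firm is classed as an income smoother when the ratio CV(change in income) / CV(change in sales) is below 1. The financial figures below are made up.

```python
import numpy as np

def eckel_index(income, sales):
    """Eckel (1981) index: CV of income changes over CV of sales changes."""
    d_inc, d_sal = np.diff(income), np.diff(sales)
    cv = lambda x: np.std(x, ddof=1) / abs(np.mean(x))
    return cv(d_inc) / cv(d_sal)

income = np.array([100.0, 104.0, 108.0, 112.0, 116.0])  # very steady growth
sales = np.array([400.0, 390.0, 450.0, 430.0, 480.0])   # volatile sales
idx = eckel_index(income, sales)
smoother = idx < 1.0     # classified as an income smoother
```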

  9. An Accurate FFPA-PSR Estimator Algorithm and Tool for Software Effort Estimation

    Directory of Open Access Journals (Sweden)

    Senthil Kumar Murugesan

    2015-01-01

    Full Text Available Software companies are now keen to provide secure software with respect to accuracy and reliability of their products especially related to the software effort estimation. Therefore, there is a need to develop a hybrid tool which provides all the necessary features. This paper attempts to propose a hybrid estimator algorithm and model which incorporates quality metrics, reliability factor, and the security factor with a fuzzy-based function point analysis. Initially, this method utilizes a fuzzy-based estimate to control the uncertainty in the software size with the help of a triangular fuzzy set at the early development stage. Secondly, the function point analysis is extended by the security and reliability factors in the calculation. Finally, the performance metrics are added with the effort estimation for accuracy. The experimentation is done with different project data sets on the hybrid tool, and the results are compared with the existing models. It shows that the proposed method not only improves the accuracy but also increases the reliability, as well as the security, of the product.
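The triangular-fuzzy size estimate described above can be sketched in a few lines. This is an illustrative reduction, not the paper's FFPA-PSR tool: the membership bounds, the centroid defuzzification, and the productivity factor are all assumed values.

```python
# Sketch of a triangular-fuzzy function-point effort estimate.
# All numbers and the hours-per-FP productivity factor are illustrative
# assumptions, not taken from the FFPA-PSR model.

def defuzzify_triangular(low, mode, high):
    """Centroid of a triangular fuzzy number (low, mode, high)."""
    return (low + mode + high) / 3.0

def fuzzy_effort(fp_low, fp_mode, fp_high, productivity):
    """Crisp effort (person-hours) from a fuzzy function-point count."""
    crisp_fp = defuzzify_triangular(fp_low, fp_mode, fp_high)
    return crisp_fp * productivity

# Size believed to lie between 90 and 130 FP, most likely 110,
# at an assumed 8 hours per function point.
effort = fuzzy_effort(90, 110, 130, productivity=8)
print(effort)  # 880.0
```

Replacing the crisp size with the fuzzy centroid is what controls the early-stage uncertainty: the estimate moves smoothly as the low/high bounds tighten during development.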

  10. Gastrointestinal Functionality of Aquatic Animal (Oreochromis niloticus) Carcass in Water Allows Estimating Time of Death.

    Science.gov (United States)

    Hahor, Waraporn; Thongprajukaew, Karun; Yoonram, Krueawan; Rodjaroen, Somrak

    2016-11-01

    Postmortem changes have been previously studied in some terrestrial animal models, but no prior information is available on aquatic species. Gastrointestinal functionality was investigated in terms of indices, protein concentration, digestive enzyme activity, and scavenging activity, in an aquatic animal model, Nile tilapia, to assess the postmortem changes. Dead fish were floated indoors, and samples were collected within 48 h after death. Stomasomatic index decreased with postmortem time and correlated positively with protein, pepsin-specific activity, and stomach scavenging activity. Also intestosomatic index decreased significantly and correlated positively with protein, specific activity of trypsin, chymotrypsin, amylase, lipase, and intestinal scavenging activity. In their postmortem changes, the digestive enzymes exhibited earlier lipid degradation than carbohydrate or protein. The intestine changed more rapidly than the stomach. The findings suggest that the postmortem changes of gastrointestinal functionality can serve as primary data for the estimation of time of death of an aquatic animal. © 2016 American Academy of Forensic Sciences.

  11. M-Estimation for Discrete Data. Asymptotic Distribution Theory and Implications.

    Science.gov (United States)

    1985-10-01

    outlying data points, can be specified in a direct way, since the influence function of an M-estimator is proportional to its score function; see Hampel... consistently estimates the parameter when the model is correct. Suppose now that the parameter is a scalar. The influence function at F of an M-estimator has the form... influence function at F. This is assuming, of course, that the estimator is asymptotically normal at F. The truncation point c determines the bounds

  12. Micro-Economic Estimation On The Demand Function For ...

    African Journals Online (AJOL)

    The article focused on the estimation of the prostitution demand behaviour in Adamawa State. An econometric model was specified based on economic theory and confronted with both primary and secondary data. Ordinary least square multiple regression techniques were adopted and the linear model was chosen as a ...

  13. Transfer Function Identification Using Orthogonal Fourier Transform Modeling Functions

    Science.gov (United States)

    Morelli, Eugene A.

    2013-01-01

    A method for transfer function identification, including both model structure determination and parameter estimation, was developed and demonstrated. The approach uses orthogonal modeling functions generated from frequency domain data obtained by Fourier transformation of time series data. The method was applied to simulation data to identify continuous-time transfer function models and unsteady aerodynamic models. Model fit error, estimated model parameters, and the associated uncertainties were used to show the effectiveness of the method for identifying accurate transfer function models from noisy data.
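The starting point of the method above is frequency-domain data obtained by Fourier transforming time series. As background only (this is not Morelli's orthogonal-modeling-function method), a minimal empirical FRF estimate from simulated input/output data; the first-order test system and all names are illustrative assumptions.

```python
# Minimal empirical frequency-response estimate H(f) = Sxy/Sxx from
# input/output time series -- the frequency-domain data that transfer
# function identification starts from. Toy system, not the paper's method.
import numpy as np

def frf_estimate(u, y, dt):
    """Empirical FRF from input u and output y sampled every dt seconds."""
    U = np.fft.rfft(u)
    Y = np.fft.rfft(y)
    f = np.fft.rfftfreq(len(u), d=dt)
    H = Y * np.conj(U) / (np.abs(U) ** 2 + 1e-12)  # cross/auto spectra
    return f, H

# First-order lag dy/dt = a*(u - y), probed with a 1 Hz sine that sits
# exactly on an FFT bin; theoretical gain there is a/sqrt(a^2 + w^2) ~ 0.62.
dt, n, a = 0.01, 4000, 5.0
t = np.arange(n) * dt
u = np.sin(2 * np.pi * 1.0 * t)
y = np.zeros(n)
for k in range(1, n):  # forward-Euler simulation
    y[k] = y[k - 1] + dt * a * (u[k - 1] - y[k - 1])
f, H = frf_estimate(u, y, dt)
k1 = int(np.argmin(np.abs(f - 1.0)))
print(round(abs(H[k1]), 2))  # gain at 1 Hz, close to the theoretical value
```

Model structure determination and parameter estimation then amount to fitting a rational transfer function to H(f) over the excited band.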

  14. Improved estimates of coordinate error for molecular replacement

    International Nuclear Information System (INIS)

    Oeffner, Robert D.; Bunkóczi, Gábor; McCoy, Airlie J.; Read, Randy J.

    2013-01-01

    A function for estimating the effective root-mean-square deviation in coordinates between two proteins has been developed that depends on both the sequence identity and the size of the protein and is optimized for use with molecular replacement in Phaser. A top peak translation-function Z-score of over 8 is found to be a reliable metric of when molecular replacement has succeeded. The estimate of the root-mean-square deviation (r.m.s.d.) in coordinates between the model and the target is an essential parameter for calibrating likelihood functions for molecular replacement (MR). Good estimates of the r.m.s.d. lead to good estimates of the variance term in the likelihood functions, which increases signal to noise and hence success rates in the MR search. Phaser has hitherto used an estimate of the r.m.s.d. that only depends on the sequence identity between the model and target and which was not optimized for the MR likelihood functions. Variance-refinement functionality was added to Phaser to enable determination of the effective r.m.s.d. that optimized the log-likelihood gain (LLG) for a correct MR solution. Variance refinement was subsequently performed on a database of over 21 000 MR problems that sampled a range of sequence identities, protein sizes and protein fold classes. Success was monitored using the translation-function Z-score (TFZ), where a TFZ of 8 or over for the top peak was found to be a reliable indicator that MR had succeeded for these cases with one molecule in the asymmetric unit. Good estimates of the r.m.s.d. are correlated with the sequence identity and the protein size. A new estimate of the r.m.s.d. that uses these two parameters in a function optimized to fit the mean of the refined variance is implemented in Phaser and improves MR outcomes. Perturbing the initial estimate of the r.m.s.d. from the mean of the distribution in steps of standard deviations of the distribution further increases MR success rates

  15. On the robustness of two-stage estimators

    KAUST Repository

    Zhelonkin, Mikhail

    2012-04-01

    The aim of this note is to provide a general framework for the analysis of the robustness properties of a broad class of two-stage models. We derive the influence function, the change-of-variance function, and the asymptotic variance of a general two-stage M-estimator, and provide their interpretations. We illustrate our results in the case of the two-stage maximum likelihood estimator and the two-stage least squares estimator. © 2011.
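The central object of the note, the influence function of a two-stage M-estimator, has a standard form. As a hedged sketch in assumed notation (not copied from the note): with a first stage solving E_F[ψ₁(z, θ₁)] = 0 and a second stage solving E_F[ψ₂(z, θ₁, θ₂)] = 0,

```latex
\mathrm{IF}(z; T_2, F) = M_2^{-1}\left(\psi_2(z,\theta_1,\theta_2)
   + \mathbb{E}_F\!\left[\frac{\partial \psi_2}{\partial \theta_1}\right]
     \mathrm{IF}(z; T_1, F)\right),
\qquad
M_2 = -\,\mathbb{E}_F\!\left[\frac{\partial \psi_2}{\partial \theta_2}\right].
```

The second term propagates the first-stage influence through the second-stage estimating equation; setting it to zero recovers the usual one-stage formula IF = M⁻¹ψ.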

  16. Fracture of functionally graded materials: application to hydrided zircaloy; Fissuration des materiaux a gradient de proprietes: application au zircaloy hydrure

    Energy Technology Data Exchange (ETDEWEB)

    Perales, F

    2005-12-15

    This thesis is devoted to the dynamic fracture of functionally graded materials, and more particularly to the toughness of high-burnup nuclear cladding subjected to transient loading. Fracture is studied at the local scale using a cohesive zone model in a multi-body approach; the cohesive zone models include frictional contact to account for mixed-mode fracture. Non-smooth dynamics problems are treated within the Non-Smooth Contact Dynamics (NSCD) framework. A multi-scale study is necessary because of the dimensions of the cladding. At the microscopic scale, the effective properties of the surface law between bodies are obtained by periodic numerical homogenization; a two-field finite element formulation is written accordingly, yielding an extended formulation of the NSCD framework. The associated software simulates, in finite deformation, everything from crack initiation to post-fracture behavior in heterogeneous materials. At the microscopic scale, random RVE calculations are performed to determine effective properties. At the macroscopic scale, calculations on part of the cladding are performed to determine the roles of the mean hydrogen concentration and the hydrogen gradient in the toughness of the cladding under dynamic loading. (author)

  17. An Efficient and Reliable Statistical Method for Estimating Functional Connectivity in Large Scale Brain Networks Using Partial Correlation.

    Science.gov (United States)

    Wang, Yikai; Kang, Jian; Kemmer, Phebe B; Guo, Ying

    2016-01-01

    Currently, network-oriented analysis of fMRI data has become an important tool for understanding brain organization and brain networks. Among the range of network modeling methods, partial correlation has shown great promise in accurately detecting true brain network connections. However, the application of partial correlation in investigating brain connectivity, especially in large-scale brain networks, has been limited so far due to the technical challenges in its estimation. In this paper, we propose an efficient and reliable statistical method for estimating partial correlation in large-scale brain network modeling. Our method derives partial correlation based on the precision matrix estimated via the Constrained L1-minimization Approach (CLIME), which is a recently developed statistical method that is more efficient and demonstrates better performance than the existing methods. To help select an appropriate tuning parameter for sparsity control in the network estimation, we propose a new Dens-based selection method that provides a more informative and flexible tool to allow the users to select the tuning parameter based on the desired sparsity level. Another appealing feature of the Dens-based method is that it is much faster than the existing methods, which provides an important advantage in neuroimaging applications. Simulation studies show that the Dens-based method demonstrates comparable or better performance with respect to the existing methods in network estimation. We applied the proposed partial correlation method to investigate resting state functional connectivity using rs-fMRI data from the Philadelphia Neurodevelopmental Cohort (PNC) study. Our results show that partial correlation analysis removed considerable between-module marginal connections identified by full correlation analysis, suggesting these connections were likely caused by global effects or common connections to other nodes. 
Based on partial correlation, we find that the most significant
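The quantity at the heart of the abstract above, partial correlation derived from the precision matrix, can be sketched directly. Plain matrix inversion stands in here for CLIME's sparse L1-constrained estimate, and the three-variable chain data is an illustrative assumption.

```python
# Partial correlation from the precision (inverse covariance) matrix:
# rho_ij = -Omega_ij / sqrt(Omega_ii * Omega_jj), with Omega = Sigma^{-1}.
# Plain inversion stands in for CLIME's sparse estimate in this sketch.
import numpy as np

def partial_correlation(cov):
    omega = np.linalg.inv(cov)
    d = np.sqrt(np.diag(omega))
    pcor = -omega / np.outer(d, d)
    np.fill_diagonal(pcor, 1.0)
    return pcor

# Chain graph x1 -> x2 -> x3: x1 and x3 are marginally correlated but
# conditionally independent given x2, so their partial correlation is ~0.
rng = np.random.default_rng(1)
x1 = rng.standard_normal(50000)
x2 = x1 + rng.standard_normal(50000)
x3 = x2 + rng.standard_normal(50000)
pcor = partial_correlation(np.cov(np.vstack([x1, x2, x3])))
print(round(float(pcor[0, 1]), 1))  # direct edge survives, ~0.5
```

This is exactly the effect the authors report at network scale: marginal (full-correlation) connections induced through intermediate nodes vanish under partial correlation, while direct connections remain.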

  18. Estimating Herd Immunity to Amphibian Chytridiomycosis in Madagascar Based on the Defensive Function of Amphibian Skin Bacteria

    Directory of Open Access Journals (Sweden)

    Molly C. Bletz

    2017-09-01

    Full Text Available For decades, amphibians have been globally threatened by the still expanding infectious disease, chytridiomycosis. Madagascar is an amphibian biodiversity hotspot where Batrachochytrium dendrobatidis (Bd has only recently been detected. While no Bd-associated population declines have been reported, the risk of declines is high when invasive virulent lineages become involved. Cutaneous bacteria contribute to host innate immunity by providing defense against pathogens for numerous animals, including amphibians. Little is known, however, about the cutaneous bacterial residents of Malagasy amphibians and the functional capacity they have against Bd. We cultured 3179 skin bacterial isolates from over 90 frog species across Madagascar, identified them via Sanger sequencing of approximately 700 bp of the 16S rRNA gene, and characterized their functional capacity against Bd. A subset of isolates was also tested against multiple Bd genotypes. In addition, we applied the concept of herd immunity to estimate Bd-associated risk for amphibian communities across Madagascar based on bacterial antifungal activity. We found that multiple bacterial isolates (39% of all isolates) cultured from the skin of Malagasy frogs were able to inhibit Bd. Mean inhibition was weakly correlated with bacterial phylogeny, and certain taxonomic groups appear to have a high proportion of inhibitory isolates, such as the Enterobacteriaceae, Pseudomonadaceae, and Xanthamonadaceae (84, 80, and 75%, respectively). Functional capacity of bacteria against Bd varied among Bd genotypes; however, there were some bacteria that showed broad spectrum inhibition against all tested Bd genotypes, suggesting that these bacteria would be good candidates for probiotic therapies. We estimated Bd-associated risk for sampled amphibian communities based on the concept of herd immunity. Multiple amphibian communities, including those in the amphibian diversity hotspots, Andasibe and Ranomafana, were

  19. Estimating Herd Immunity to Amphibian Chytridiomycosis in Madagascar Based on the Defensive Function of Amphibian Skin Bacteria.

    Science.gov (United States)

    Bletz, Molly C; Myers, Jillian; Woodhams, Douglas C; Rabemananjara, Falitiana C E; Rakotonirina, Angela; Weldon, Che; Edmonds, Devin; Vences, Miguel; Harris, Reid N

    2017-01-01

    For decades, amphibians have been globally threatened by the still expanding infectious disease, chytridiomycosis. Madagascar is an amphibian biodiversity hotspot where Batrachochytrium dendrobatidis (Bd) has only recently been detected. While no Bd-associated population declines have been reported, the risk of declines is high when invasive virulent lineages become involved. Cutaneous bacteria contribute to host innate immunity by providing defense against pathogens for numerous animals, including amphibians. Little is known, however, about the cutaneous bacterial residents of Malagasy amphibians and the functional capacity they have against Bd. We cultured 3179 skin bacterial isolates from over 90 frog species across Madagascar, identified them via Sanger sequencing of approximately 700 bp of the 16S rRNA gene, and characterized their functional capacity against Bd. A subset of isolates was also tested against multiple Bd genotypes. In addition, we applied the concept of herd immunity to estimate Bd-associated risk for amphibian communities across Madagascar based on bacterial antifungal activity. We found that multiple bacterial isolates (39% of all isolates) cultured from the skin of Malagasy frogs were able to inhibit Bd. Mean inhibition was weakly correlated with bacterial phylogeny, and certain taxonomic groups appear to have a high proportion of inhibitory isolates, such as the Enterobacteriaceae, Pseudomonadaceae, and Xanthamonadaceae (84, 80, and 75%, respectively). Functional capacity of bacteria against Bd varied among Bd genotypes; however, there were some bacteria that showed broad spectrum inhibition against all tested Bd genotypes, suggesting that these bacteria would be good candidates for probiotic therapies. We estimated Bd-associated risk for sampled amphibian communities based on the concept of herd immunity. 
Multiple amphibian communities, including those in the amphibian diversity hotspots, Andasibe and Ranomafana, were estimated to be

  20. Complex Estimation of Strength Properties of Functional Materials on the Basis of the Analysis of Grain-Phase Structure Parameters

    OpenAIRE

    Gitman, M.B.; Klyuev, A.V.; Stolbov, V.Y.; Gitman, I.M.

    2017-01-01

    The technique uses analysis of the grain-phase structure of a functional material to evaluate its performance, particularly its strength properties, and is based on the use of linguistic variables in the comprehensive evaluation. An example is given of estimating the strength properties of steel reinforcement subjected to a special heat treatment to obtain the desired grain-phase structure.

  1. Error analysis and new dual-cosine window for estimating the sensor frequency response function from the step response data

    Science.gov (United States)

    Yang, Shuang-Long; Liang, Li-Ping; Liu, Hou-De; Xu, Ke-Jun

    2018-03-01

    Aiming at reducing the estimation error of the sensor frequency response function (FRF) estimated by the commonly used window-based spectral estimation method, the error models of interpolation and transient errors are derived in the form of non-parametric models. Accordingly, window effects on the errors are analyzed, revealing that the commonly used Hanning window leads to smaller interpolation error, which can also be significantly reduced by the cubic spline interpolation method when estimating the FRF from step response data, and that a window with a smaller front-end value suppresses more of the transient error. Thus, a new dual-cosine window with its non-zero discrete Fourier transform bins at -3, -1, 0, 1, and 3 is constructed for FRF estimation. Compared with the Hanning window, the new dual-cosine window has equivalent interpolation-error suppression capability and better transient-error suppression capability when estimating the FRF from the step response; specifically, it reduces the asymptotic order of the transient error from O(N^-2) for the Hanning window method to O(N^-4), while increasing the uncertainty only slightly (by about 0.4 dB). One direction of a wind tunnel strain gauge balance, a high-order, small-damping, non-minimum-phase system, is then employed as an example for verifying the new dual-cosine window-based spectral estimation method. The model simulation results show that the new dual-cosine window method is better than the Hanning window method for FRF estimation and, compared with the Gans and LPM methods, has the advantages of simple computation, low time consumption, and short data requirements; the calculation on actual balance data is consistent with the simulation. Thus, the new dual-cosine window is effective and practical for FRF estimation.
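The step-response route to an FRF can be sketched generically: difference the step response to get an impulse response, apply a window, and take the FFT. This uses an ordinary Hann window rather than the paper's dual-cosine window, and the first-order sensor is an assumed example.

```python
# Generic FRF-from-step-response sketch (Hann window, not the paper's
# dual-cosine window). The first-order sensor is an illustrative assumption.
import numpy as np

def frf_from_step(step, dt):
    h = np.diff(step) / dt                  # impulse response from step response
    w = np.hanning(2 * len(h))[len(h):]     # falling half of a Hann window
    H = np.fft.rfft(h * w) * dt
    f = np.fft.rfftfreq(len(h), d=dt)
    return f, H

# First-order sensor: step response 1 - exp(-t/tau), tau = 0.1 s.
dt, tau = 1e-3, 0.1
t = np.arange(0, 2.0, dt)
step = 1.0 - np.exp(-t / tau)
f, H = frf_from_step(step, dt)
print(round(abs(H[0]), 2))  # DC gain close to 1, slightly attenuated by the window
```

The window's treatment of the tail is exactly where the transient error discussed in the abstract enters; the paper's dual-cosine window is designed to suppress that error faster in N than the Hann window does.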

  2. Blind Deconvolution for Jump-Preserving Curve Estimation

    Directory of Open Access Journals (Sweden)

    Xingfang Huang

    2010-01-01

    when recovering the signals. Our procedure is based on three local linear kernel estimates of the regression function, constructed from observations in a left-side, a right-side, and a two-side neighborhood of a given point, respectively. The estimated function at the given point is then defined by one of the three estimates with the smallest weighted residual sum of squares. To better remove the noise and blur, this estimate can also be updated iteratively. Performance of this procedure is investigated by both simulation and real data examples, from which it can be seen that our procedure performs well in various cases.
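The pick-the-best-sided-fit idea above can be sketched with local constant fits (the paper uses local linear kernel fits and an iterative update; the step signal and window size here are illustrative assumptions).

```python
# Jump-preserving curve estimate, simplified to local *constant* fits:
# at each point, fit on a left-sided, right-sided, and two-sided window,
# and keep the fit with the smallest normalized residual sum of squares.
import numpy as np

def jump_preserving_fit(y, h):
    n = len(y)
    out = np.empty(n)
    for i in range(n):
        windows = [y[max(0, i - h):i + 1],                 # left-sided
                   y[i:min(n, i + h + 1)],                 # right-sided
                   y[max(0, i - h):min(n, i + h + 1)]]     # two-sided
        fits = [(np.mean(w), np.mean((w - np.mean(w)) ** 2)) for w in windows]
        out[i] = min(fits, key=lambda fit: fit[1])[0]      # smallest residual wins
    return out

# Noisy step: a two-sided average would blur the jump at i = 50;
# the one-sided fits preserve it.
rng = np.random.default_rng(2)
y = np.where(np.arange(100) < 50, 0.0, 1.0) + 0.05 * rng.standard_normal(100)
fit = jump_preserving_fit(y, h=10)
print(round(float(fit[49]), 1), round(float(fit[50]), 1))  # levels on each side of the jump
```

Near the jump, the window straddling the discontinuity has a large residual sum of squares, so the procedure automatically switches to the one-sided fit and keeps the edge sharp.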

  3. A note on the conditional density estimate in single functional index model

    OpenAIRE

    2010-01-01

    Abstract In this paper, we consider estimation of the conditional density of a scalar response variable Y given a Hilbertian random variable X when the observations are linked with a single-index structure. We establish the pointwise and the uniform almost complete convergence (with the rate) of the kernel estimate of this model. As an application, we show how our result can be applied in the prediction problem via the conditional mode estimate. Finally, the estimation of the funct...

  4. Efficient estimation of semiparametric copula models for bivariate survival data

    KAUST Repository

    Cheng, Guang

    2014-01-01

    A semiparametric copula model for bivariate survival data is characterized by a parametric copula model of dependence and nonparametric models of two marginal survival functions. Efficient estimation for the semiparametric copula model has been recently studied for the complete data case. When the survival data are censored, semiparametric efficient estimation has only been considered for some specific copula models such as the Gaussian copulas. In this paper, we obtain the semiparametric efficiency bound and efficient estimation for general semiparametric copula models for possibly censored data. We construct an approximate maximum likelihood estimator by approximating the log baseline hazard functions with spline functions. We show that our estimates of the copula dependence parameter and the survival functions are asymptotically normal and efficient. Simple consistent covariance estimators are also provided. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2013 Elsevier Inc.

  5. Estimators for local non-Gaussianities

    International Nuclear Information System (INIS)

    Creminelli, P.; Senatore, L.; Zaldarriaga, M.

    2006-05-01

    We study the likelihood function of data given f_NL for the so-called local type of non-Gaussianity. In this case the curvature perturbation is a non-linear function, local in real space, of a Gaussian random field. We compute the Cramer-Rao bound for f_NL and show that for small values of f_NL the 3-point function estimator saturates the bound and is equivalent to calculating the full likelihood of the data. However, for sufficiently large f_NL, the naive 3-point function estimator has a much larger variance than previously thought. In the limit in which the departure from Gaussianity is detected with high confidence, error bars on f_NL only decrease as 1/ln N_pix rather than N_pix^(-1/2) as the size of the data set increases. We identify the physical origin of this behavior and explain why it only affects the local type of non-Gaussianity, where the contribution of the first multipoles is always relevant. We find a simple improvement to the 3-point function estimator that makes the square root of its variance decrease as N_pix^(-1/2) even for large f_NL, asymptotically approaching the Cramer-Rao bound. We show that using the modified estimator is practically equivalent to computing the full likelihood of f_NL given the data. Thus other statistics of the data, such as the 4-point function and Minkowski functionals, contain no additional information on f_NL. In particular, we explicitly show that the recent claims about the relevance of the 4-point function are not correct. By direct inspection of the likelihood, we show that the data do not contain enough information for any statistic to be able to constrain higher order terms in the relation between the Gaussian field and the curvature perturbation, unless these are orders of magnitude larger than the size suggested by the current limits on f_NL. (author)

  6. Experimental determination of frequency response function estimates for flexible joint industrial manipulators with serial kinematics

    Science.gov (United States)

    Saupe, Florian; Knoblach, Andreas

    2015-02-01

    Two different approaches for the determination of frequency response functions (FRFs) are used for the non-parametric closed loop identification of a flexible joint industrial manipulator with serial kinematics. The two applied experiment designs are based on low power multisine and high power chirp excitations. The main challenge is to eliminate disturbances of the FRF estimates caused by the numerous nonlinearities of the robot. For the experiment design based on chirp excitations, a simple iterative procedure is proposed which allows exploiting the good crest factor of chirp signals in a closed loop setup. An interesting synergy of the two approaches, beyond validation purposes, is pointed out.

  7. Use of digital image analysis to estimate fluid permeability of porous materials: Application of two-point correlation functions

    International Nuclear Information System (INIS)

    Berryman, J.G.; Blair, S.C.

    1986-01-01

    Scanning electron microscope images of cross sections of several porous specimens have been digitized and analyzed using image processing techniques. The porosity and specific surface area may be estimated directly from measured two-point spatial correlation functions. The measured values of porosity and image specific surface were combined with known values of electrical formation factors to estimate fluid permeability using one version of the Kozeny-Carman empirical relation. For glass bead samples with measured permeability values in the range of a few darcies, our estimates agree well (±10-20%) with the measurements. For samples of Ironton-Galesville sandstone with a permeability in the range of hundreds of millidarcies, our best results agree with the laboratory measurements again within about 20%. For Berea sandstone with still lower permeability (tens of millidarcies), our predictions from the images agree within 10-30%. Best results for the sandstones were obtained by using the porosities obtained at magnifications of about 100× (since less resolution and better statistics are required) and the image specific surface obtained at magnifications of about 500× (since greater resolution is required).
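The two measurements the abstract extracts from images can be sketched on a synthetic binary image: the two-point correlation S2(r) of the pore phase satisfies S2(0) = porosity, and the interface makes S2 decrease away from the origin (for an isotropic 3-D medium the slope at the origin gives the specific surface, s = -4·S2'(0)). The disk-packing image below is an illustrative stand-in for an SEM cross section.

```python
# Two-point correlation of a binary image: S2(0) recovers the porosity,
# and the negative slope at the origin carries the specific-surface
# information used in the Kozeny-Carman permeability estimate.
import numpy as np

def two_point_correlation(img, max_lag):
    """S2(r) along the x-axis of a 2-D binary image (pore = 1, solid = 0)."""
    vals = []
    for r in range(max_lag):
        a = img[:, : img.shape[1] - r] if r else img
        b = img[:, r:]
        vals.append(np.mean(a * b))  # P(both pixels are pore, lag r apart)
    return np.array(vals)

# Synthetic "cross section": solid disks punched out of a pore background.
n = 256
yy, xx = np.mgrid[0:n, 0:n]
img = np.ones((n, n))
rng = np.random.default_rng(3)
for cx, cy in rng.integers(0, n, size=(40, 2)):
    img[(xx - cx) ** 2 + (yy - cy) ** 2 < 15 ** 2] = 0.0
s2 = two_point_correlation(img, 30)
phi = s2[0]                # porosity
slope = s2[1] - s2[0]      # per-pixel slope at the origin (surface term)
print(bool(phi == img.mean()), bool(slope < 0))  # True True
```

With the specific surface and an electrical formation factor in hand, a Kozeny-Carman-type relation then yields the permeability estimate described in the abstract.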

  8. Estimation of probability density functions of damage parameter for valve leakage detection in reciprocating pump used in nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jong Kyeom; Kim, Tae Yun; Kim, Hyun Su; Chai, Jang Bom; Lee, Jin Woo [Div. of Mechanical Engineering, Ajou University, Suwon (Korea, Republic of)

    2016-10-15

    This paper presents an advanced estimation method for obtaining the probability density functions of a damage parameter for valve leakage detection in a reciprocating pump. The estimation method is based on a comparison of model data which are simulated by using a mathematical model, and experimental data which are measured on the inside and outside of the reciprocating pump in operation. The mathematical model, which is simplified and extended on the basis of previous models, describes not only the normal state of the pump, but also its abnormal state caused by valve leakage. The pressure in the cylinder is expressed as a function of the crankshaft angle, and an additional volume flow rate due to the valve leakage is quantified by a damage parameter in the mathematical model. The change in the cylinder pressure profiles due to the suction valve leakage is noticeable in the compression and expansion modes of the pump. The damage parameter value over 300 cycles is calculated in two ways, considering advance or delay in the opening and closing angles of the discharge valves. The probability density functions of the damage parameter are compared for diagnosis and prognosis on the basis of the probabilistic features of valve leakage.

  9. Estimation of probability density functions of damage parameter for valve leakage detection in reciprocating pump used in nuclear power plants

    International Nuclear Information System (INIS)

    Lee, Jong Kyeom; Kim, Tae Yun; Kim, Hyun Su; Chai, Jang Bom; Lee, Jin Woo

    2016-01-01

    This paper presents an advanced estimation method for obtaining the probability density functions of a damage parameter for valve leakage detection in a reciprocating pump. The estimation method is based on a comparison of model data which are simulated by using a mathematical model, and experimental data which are measured on the inside and outside of the reciprocating pump in operation. The mathematical model, which is simplified and extended on the basis of previous models, describes not only the normal state of the pump, but also its abnormal state caused by valve leakage. The pressure in the cylinder is expressed as a function of the crankshaft angle, and an additional volume flow rate due to the valve leakage is quantified by a damage parameter in the mathematical model. The change in the cylinder pressure profiles due to the suction valve leakage is noticeable in the compression and expansion modes of the pump. The damage parameter value over 300 cycles is calculated in two ways, considering advance or delay in the opening and closing angles of the discharge valves. The probability density functions of the damage parameter are compared for diagnosis and prognosis on the basis of the probabilistic features of valve leakage

  10. Estimation of Probability Density Functions of Damage Parameter for Valve Leakage Detection in Reciprocating Pump Used in Nuclear Power Plants

    Directory of Open Access Journals (Sweden)

    Jong Kyeom Lee

    2016-10-01

    Full Text Available This paper presents an advanced estimation method for obtaining the probability density functions of a damage parameter for valve leakage detection in a reciprocating pump. The estimation method is based on a comparison of model data which are simulated by using a mathematical model, and experimental data which are measured on the inside and outside of the reciprocating pump in operation. The mathematical model, which is simplified and extended on the basis of previous models, describes not only the normal state of the pump, but also its abnormal state caused by valve leakage. The pressure in the cylinder is expressed as a function of the crankshaft angle, and an additional volume flow rate due to the valve leakage is quantified by a damage parameter in the mathematical model. The change in the cylinder pressure profiles due to the suction valve leakage is noticeable in the compression and expansion modes of the pump. The damage parameter value over 300 cycles is calculated in two ways, considering advance or delay in the opening and closing angles of the discharge valves. The probability density functions of the damage parameter are compared for diagnosis and prognosis on the basis of the probabilistic features of valve leakage.
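The final step of the approach above, comparing probability density functions of the damage parameter across cycles, can be sketched with a Gaussian kernel density estimate. The data are simulated stand-ins, not the paper's pump model: the healthy and leaking distributions and the bandwidth are illustrative assumptions.

```python
# Kernel density estimates of a damage parameter over many pump cycles,
# for a healthy valve vs. a leaking one. Illustrative data, not the
# paper's reciprocating-pump model.
import numpy as np

def kde(samples, grid, bandwidth):
    """Gaussian kernel density estimate evaluated on grid."""
    z = (grid[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (
        len(samples) * bandwidth * np.sqrt(2 * np.pi))

rng = np.random.default_rng(4)
healthy = rng.normal(0.00, 0.02, 300)  # damage parameter near 0 over 300 cycles
leaking = rng.normal(0.15, 0.04, 300)  # shifted, wider distribution under leakage
grid = np.linspace(-0.1, 0.3, 401)
p_healthy = kde(healthy, grid, 0.01)
p_leaking = kde(leaking, grid, 0.01)
# The two densities peak at clearly separated damage-parameter values,
# which is what supports diagnosis (and, via the spread, prognosis).
print(bool(grid[np.argmax(p_healthy)] < grid[np.argmax(p_leaking)]))  # True
```

Diagnosis then reduces to asking which of the two reference densities the current cycle's damage-parameter values are more probable under.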

  11. Motion estimation for cardiac functional analysis using two x-ray computed tomography scans.

    Science.gov (United States)

    Fung, George S K; Ciuffo, Luisa; Ashikaga, Hiroshi; Taguchi, Katsuyuki

    2017-09-01

    This work concerns computed tomography (CT)-based cardiac functional analysis (CFA) with a reduced radiation dose. As CT-CFA requires images over the entire heartbeat, the scans are often performed at 10-20% of the tube current settings that are typically used for coronary CT angiography. A large image noise then degrades the accuracy of motion estimation. Moreover, even if the scan was performed during the sinus rhythm, the cardiac motion observed in CT images may not be cyclic with patients with atrial fibrillation. In this study, we propose to use two CT scan data, one for CT angiography at a quiescent phase at a standard dose and the other for CFA over the entire heart beat at a lower dose. We have made the following four modifications to an image-based cardiac motion estimation method we have previously developed for a full-dose retrospectively gated coronary CT angiography: (a) a full-dose prospectively gated coronary CT angiography image acquired at the least motion phase was used as the reference image; (b) a three-dimensional median filter was applied to lower-dose retrospectively gated cardiac images acquired at 20 phases over one heartbeat in order to reduce image noise; (c) the strength of the temporal regularization term was made adaptive; and (d) a one-dimensional temporal filter was applied to the estimated motion vector field in order to decrease jaggy motion patterns. We describe the conventional method iME1 and the proposed method iME2 in this article. Five observers assessed the accuracy of the estimated motion vector field of iME2 and iME1 using a 4-point scale. The observers repeated the assessment with data presented in a new random order 1 week after the first assessment session. The study confirmed that the proposed iME2 was robust against the mismatch of noise levels, contrast enhancement levels, and shapes of the chambers. There was a statistically significant difference between iME2 and iME1 (accuracy score, 2.08 ± 0.81 versus 2.77

  12. Estimating Functions of Distributions Defined over Spaces of Unknown Size

    Directory of Open Access Journals (Sweden)

    David H. Wolpert

    2013-10-01

    Full Text Available We consider Bayesian estimation of information-theoretic quantities from data, using a Dirichlet prior. Acknowledging the uncertainty of the event space size m and the Dirichlet prior’s concentration parameter c, we treat both as random variables set by a hyperprior. We show that the associated hyperprior, P(c, m), obeys a simple “Irrelevance of Unseen Variables” (IUV) desideratum iff P(c, m) = P(c)P(m). Thus, requiring IUV greatly reduces the number of degrees of freedom of the hyperprior. Some information-theoretic quantities can be expressed multiple ways, in terms of different event spaces, e.g., mutual information. With all hyperpriors (implicitly) used in earlier work, different choices of this event space lead to different posterior expected values of these information-theoretic quantities. We show that there is no such dependence on the choice of event space for a hyperprior that obeys IUV. We also derive a result that allows us to exploit IUV to greatly simplify calculations of quantities like the posterior expected mutual information or posterior expected multi-information. We also use computer experiments to favorably compare an IUV-based estimator of entropy to three alternative methods in common use. We end by discussing how seemingly innocuous changes to the formalization of an estimation problem can substantially affect the resultant estimates of posterior expectations.
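
    For a fixed event space size m and concentration parameter c, the posterior expected entropy under a symmetric Dirichlet prior has a closed form in terms of digamma functions. A sketch of that fixed-(c, m) building block, before any averaging over a hyperprior, is:

```python
import numpy as np
from scipy.special import digamma

def dirichlet_expected_entropy(counts, c=1.0, m=None):
    """Posterior expected Shannon entropy (nats) of a categorical
    distribution under a symmetric Dirichlet(c/m) prior, given counts.
    Closed form: E[H] = psi(A+1) - sum_i (a_i/A) psi(a_i+1),
    with a_i = n_i + c/m and A = sum_i a_i."""
    counts = np.asarray(counts, dtype=float)
    if m is None:
        m = counts.size  # assume the observed support is the event space
    a = counts + c / m
    A = a.sum()
    return float(digamma(A + 1.0) - np.sum((a / A) * digamma(a + 1.0)))

h = dirichlet_expected_entropy([10, 10, 10, 10], c=1.0)
print(round(h, 3))
```

For uniform counts the estimate sits slightly below the maximum log m, reflecting posterior uncertainty.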

  13. Estimating leaf functional traits by inversion of PROSPECT: Assessing leaf dry matter content and specific leaf area in mixed mountainous forest

    Science.gov (United States)

    Ali, Abebe Mohammed; Darvishzadeh, Roshanak; Skidmore, Andrew K.; Duren, Iris van; Heiden, Uta; Heurich, Marco

    2016-03-01

    Assessments of ecosystem functioning rely heavily on quantification of vegetation properties. The search is on for methods that produce reliable and accurate baseline information on plant functional traits. In this study, the inversion of the PROSPECT radiative transfer model was used to estimate two functional leaf traits: leaf dry matter content (LDMC) and specific leaf area (SLA). Inversion of PROSPECT usually aims at quantifying its direct input parameters. This is the first time the technique has been used to indirectly model LDMC and SLA. Biophysical parameters of 137 leaf samples were measured in July 2013 in the Bavarian Forest National Park, Germany. Spectra of the leaf samples were measured using an ASD FieldSpec3 equipped with an integrating sphere. PROSPECT was inverted using a look-up table (LUT) approach. The LUTs were generated with and without using prior information. The effect of incorporating prior information on the retrieval accuracy was studied before and after stratifying the samples into broadleaf and conifer categories. The estimated values were evaluated using R2 and normalized root mean square error (nRMSE). Among the retrieved variables, the lowest nRMSE (0.0899) was observed for LDMC. For both traits, higher R2 values (0.83 for LDMC and 0.89 for SLA) were found for the pooled samples. The use of prior information improved the accuracy of the retrieved traits. The strong correlation between the estimated traits and the NIR/SWIR region of the electromagnetic spectrum suggests that these leaf traits could be assessed at canopy level by using remotely sensed data.
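
    A look-up-table inversion of this kind reduces to a nearest-spectrum search. The sketch below uses a toy exponential forward model as a stand-in for PROSPECT (the model, trait range, and noise level are illustrative assumptions only):

```python
import numpy as np

def invert_with_lut(measured, lut_spectra, lut_traits):
    """LUT inversion: return the trait vector whose simulated spectrum
    has the smallest RMSE against the measured spectrum."""
    rmse = np.sqrt(np.mean((lut_spectra - measured) ** 2, axis=1))
    return lut_traits[np.argmin(rmse)]

# Toy stand-in for a radiative transfer model (NOT PROSPECT itself)
rng = np.random.default_rng(1)
x = np.linspace(0.1, 1.0, 50)                  # pseudo-wavelength axis
traits = rng.uniform(0.5, 2.0, size=(200, 1))  # LDMC-like scalar trait
lut = np.exp(-traits * x)                      # (200, 50) simulated spectra

true_trait = 1.3
measured = np.exp(-true_trait * x) + rng.normal(0, 0.005, size=x.size)
est = invert_with_lut(measured, lut, traits)
print(float(est[0]))
```

Incorporating prior information, as in the study, would amount to restricting or re-weighting the sampled trait range when the LUT is generated.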

  14. Estimation of Lung Ventilation

    Science.gov (United States)

    Ding, Kai; Cao, Kunlin; Du, Kaifang; Amelon, Ryan; Christensen, Gary E.; Raghavan, Madhavan; Reinhardt, Joseph M.

    Since the primary function of the lung is gas exchange, ventilation can be interpreted as an index of lung function in addition to perfusion. Injury and disease processes can alter lung function on a global and/or a local level. MDCT can be used to acquire multiple static breath-hold CT images of the lung taken at different lung volumes, or, with proper respiratory control, 4DCT images of the lung reconstructed at different respiratory phases. Image registration can be applied to these data to estimate a deformation field that transforms the lung from one volume configuration to the other. This deformation field can be analyzed to estimate local lung tissue expansion, calculate voxel-by-voxel intensity change, and make biomechanical measurements. The physiologic significance of the registration-based measures of respiratory function can be established by comparing them to more conventional measurements, such as nuclear medicine or contrast wash-in/wash-out studies with CT or MR. An important emerging application of these methods is the detection of pulmonary function change in subjects undergoing radiation therapy (RT) for lung cancer. During RT, treatment is commonly limited to sub-therapeutic doses due to unintended toxicity to normal lung tissue. Measurement of pulmonary function may be useful as a planning tool during RT planning, may be useful for tracking the progression of toxicity to nearby normal tissue during RT, and can be used to evaluate the effectiveness of a treatment post-therapy. This chapter reviews the basic measures to estimate regional ventilation from image registration of CT images, their comparison to the existing gold standard, and their application in radiation therapy.
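
    The registration-based expansion measure mentioned above is commonly the Jacobian determinant of the estimated transform, J = det(I + ∇u) for a displacement field u, with J > 1 indicating local expansion. A minimal sketch (finite-difference gradients; the grid and field are synthetic):

```python
import numpy as np

def jacobian_volume_change(u):
    """Local tissue expansion from a displacement field u with shape
    (3, z, y, x): J = det(I + grad u); J > 1 expansion, J < 1 contraction."""
    grads = np.stack([np.stack(np.gradient(u[i]), axis=0) for i in range(3)])
    # grads[i, j] = d u_i / d x_j, shape (3, 3, z, y, x)
    J = np.eye(3)[:, :, None, None, None] + grads
    J = np.moveaxis(J, (0, 1), (-2, -1))  # -> (z, y, x, 3, 3)
    return np.linalg.det(J)

# Uniform 1% expansion along each axis -> J = 1.01^3 everywhere
z, y, x = np.meshgrid(np.arange(8), np.arange(8), np.arange(8), indexing="ij")
u = 0.01 * np.stack([z, y, x]).astype(float)
Jdet = jacobian_volume_change(u)
print(round(float(Jdet.mean()), 4))  # 1.0303
```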

  15. Tree Species Classification in Temperate Forests Using Formosat-2 Satellite Image Time Series

    Directory of Open Access Journals (Sweden)

    David Sheeren

    2016-09-01

    Full Text Available Mapping forest composition is a major concern for forest management, biodiversity assessment and for understanding the potential impacts of climate change on tree species distribution. In this study, the suitability of a dense high spatial resolution multispectral Formosat-2 satellite image time series (SITS) to discriminate tree species in temperate forests is investigated. Based on a 17-date SITS acquired across one year, thirteen major tree species (8 broadleaves and 5 conifers) are classified in a study area of southwest France. The performance of parametric (GMM) and nonparametric (k-NN, RF, SVM) methods are compared at three class hierarchy levels for different versions of the SITS: (i) a smoothed noise-free version based on the Whittaker smoother; (ii) a non-smoothed cloudy version including all the dates; (iii) a non-smoothed noise-free version including only 14 dates. Noise refers to pixels contaminated by clouds and cloud shadows. The results of the 108 distinct classifications show a very high suitability of the SITS to identify the forest tree species based on phenological differences (average κ = 0.93, estimated by cross-validation on 1235 field-collected plots). SVM is found to be the best classifier, with very close results from the other classifiers. No clear benefit of removing noise by smoothing can be observed. Classification accuracy is even improved using the non-smoothed cloudy version of the SITS compared to the 14 cloud-free image time series. However, these conclusions need to be considered with caution because of possible overfitting. Disagreements also appear between the maps produced by the classifiers for complex mixed forests, suggesting a higher classification uncertainty in these contexts. Our findings suggest that time-series data can be a good alternative to hyperspectral data for mapping forest types. It also demonstrates the potential contribution of the recently launched Sentinel-2 satellite for
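
    The Whittaker smoother used for version (i) minimizes a least-squares fit penalized by squared differences of the fitted series. A minimal dense-matrix sketch (the smoothing parameter and the synthetic 17-point series are illustrative assumptions):

```python
import numpy as np

def whittaker_smooth(y, lam=10.0, d=2):
    """Whittaker smoother: minimize |y - z|^2 + lam * |D z|^2, where D is
    the d-th order difference matrix (dense solve; fine for short series)."""
    n = y.size
    D = np.diff(np.eye(n), n=d, axis=0)
    A = np.eye(n) + lam * D.T @ D
    return np.linalg.solve(A, y)

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 17)  # e.g. a 17-date reflectance-like series
y = np.sin(2 * np.pi * t) + rng.normal(0, 0.1, t.size)
z = whittaker_smooth(y, lam=5.0)
print(z.shape)
```

For long series a sparse banded solver would replace the dense solve, but the objective is the same.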

  16. A Comparative Study on Recently-Introduced Nature-Based Global Optimization Methods in Complex Mechanical System Design

    Directory of Open Access Journals (Sweden)

    Abdulbaset El Hadi Saad

    2017-10-01

    Full Text Available Advanced global optimization algorithms have been continuously introduced and improved to solve various complex design optimization problems for which the objective and constraint functions can only be evaluated through computation-intensive numerical analyses or simulations with a large number of design variables. The often implicit, multimodal, and ill-shaped objective and constraint functions in high-dimensional and “black-box” forms demand that the search be carried out using a low number of function evaluations with high search efficiency and good robustness. This work investigates the performance of six recently introduced, nature-inspired global optimization methods: Artificial Bee Colony (ABC), Firefly Algorithm (FFA), Cuckoo Search (CS), Bat Algorithm (BA), Flower Pollination Algorithm (FPA) and Grey Wolf Optimizer (GWO). These approaches are compared in terms of search efficiency and robustness in solving a set of representative benchmark problems in smooth-unimodal, non-smooth unimodal, smooth multimodal, and non-smooth multimodal function forms. In addition, four classic engineering optimization examples and a real-life complex mechanical system design optimization problem, floating offshore wind turbine design optimization, are used as additional test cases representing computationally expensive black-box global optimization problems. Results from this comparative study show that the ability of these global optimization methods to obtain a good solution diminishes as the dimension of the problem, or number of design variables, increases. Although none of these methods is universally capable, the study finds that GWO and ABC are more efficient on average than the other four in obtaining high-quality solutions efficiently and consistently, solving 86% and 80% of the tested benchmark problems, respectively. The research contributes to future improvements of global optimization methods.
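
    As an illustration of one of the compared methods, a minimal Grey Wolf Optimizer can be written in a few lines. This is a bare-bones sketch following the standard alpha/beta/delta update, not the benchmarked implementations (population size, iteration count, and the smooth-unimodal test function are assumptions):

```python
import numpy as np

def gwo(f, bounds, n_wolves=20, iters=300, seed=0):
    """Minimal Grey Wolf Optimizer for box-constrained minimization."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.size
    X = rng.uniform(lo, hi, size=(n_wolves, dim))
    for t in range(iters):
        fit = np.array([f(x) for x in X])
        alpha, beta, delta = X[np.argsort(fit)[:3]]  # three best wolves
        a = 2.0 * (1.0 - t / iters)                  # decreases 2 -> 0
        Xn = np.empty_like(X)
        for i, x in enumerate(X):
            cand = []
            for leader in (alpha, beta, delta):
                A = 2.0 * a * rng.random(dim) - a
                C = 2.0 * rng.random(dim)
                cand.append(leader - A * np.abs(C * leader - x))
            Xn[i] = np.clip(np.mean(cand, axis=0), lo, hi)
        X = Xn
    fit = np.array([f(x) for x in X])
    return X[np.argmin(fit)], float(fit.min())

sphere = lambda x: float(np.sum(x ** 2))  # smooth-unimodal benchmark
best, val = gwo(sphere, (np.full(5, -5.0), np.full(5, 5.0)))
print(val)
```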

  17. Projections onto Convex Sets Super-Resolution Reconstruction Based on Point Spread Function Estimation of Low-Resolution Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Chong Fan

    2017-02-01

    Full Text Available To solve the problem of inaccuracy when estimating the point spread function (PSF) of the ideal original image in traditional projection onto convex sets (POCS) super-resolution (SR) reconstruction, this paper presents an improved POCS SR algorithm based on PSF estimation of low-resolution (LR) remote sensing images. The proposed algorithm can improve the spatial resolution of the image and benefit agricultural crop visual interpretation. The PSF of the high-resolution (HR) image is unknown in reality. Therefore, analysis of the relationship between the PSF of the HR image and the PSF of the LR image is important for estimating the PSF of the HR image by using multiple LR images. In this study, the linear relationship between the PSFs of the HR and LR images is proven. In addition, a novel slant knife-edge method is employed, which can improve the accuracy of the PSF estimation of LR images. Finally, the proposed method is applied to reconstruct airborne digital sensor 40 (ADS40) three-line array images and the overlapped areas of two adjacent GF-2 images by embedding the estimated PSF of the HR image into the original POCS SR algorithm. Experimental results show that the proposed method yields higher quality reconstructed images than the blind SR method and the bicubic interpolation method.
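
    The data-consistency projection at the core of POCS constrains each simulated LR sample (a PSF-weighted combination of HR pixels) to lie within a bound of the observed value. A 1-D sketch with a synthetic PSF and signal (not the paper's 2-D implementation; delta0 = 0 reduces each projection to a Kaczmarz step):

```python
import numpy as np

def pocs_step(hr, lr, psf, scale, delta0=0.0):
    """One POCS data-consistency pass (1-D): for each LR sample, simulate
    it by PSF-weighted averaging of HR, then spread the residual in excess
    of the bound delta0 back over the PSF support."""
    k = psf.size
    hr = hr.copy()
    for j, yj in enumerate(lr):
        i0 = j * scale
        sim = float(hr[i0:i0 + k] @ psf)
        r = yj - sim
        if abs(r) > delta0:
            corr = (abs(r) - delta0) * np.sign(r)
            hr[i0:i0 + k] += corr * psf / float(psf @ psf)
        # residual within delta0 -> already inside this convex set
    return hr

psf = np.array([0.25, 0.5, 0.25])                 # assumed LR PSF
truth = np.sin(np.linspace(0, np.pi, 16))          # synthetic HR signal
lr = np.array([truth[2 * j:2 * j + 3] @ psf for j in range(7)])
est = np.zeros(16)
for _ in range(100):
    est = pocs_step(est, lr, psf, scale=2)
print(est.shape)
```

After repeated passes the estimate becomes consistent with every LR observation; the paper's contribution is supplying a more accurate PSF to this projection.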

  18. Arterial wave reflections and kidney function decline among persons with preserved estimated glomerular filtration rate: the Multi-Ethnic Study of Atherosclerosis.

    Science.gov (United States)

    Hsu, Jeffrey J; Katz, Ronit; Chirinos, Julio A; Jacobs, David R; Duprez, Daniel A; Peralta, Carmen A

    2016-05-01

    Differences in arterial wave reflections have been associated with increased risk for heart failure and mortality. Whether these measures are also associated with kidney function decline is not well established. Reflection magnitude (RM, defined as the ratio of the backward wave [Pb] to that of the forward wave [Pf]), augmentation index (AIx), and pulse pressure amplification (PPA) were derived from radial tonometry measures among 5232 participants free of cardiovascular disease who were enrolled in the Multi-Ethnic Study of Atherosclerosis. Kidney function was estimated by creatinine and cystatin C measurements, as well as albumin-to-creatinine ratio. We evaluated the associations of Pb, Pf, RM, AIx, and PPA with annualized estimated glomerular filtration rate (eGFR) change and rapid kidney function decline over 5 years, using generalized linear mixed models and logistic regression, respectively. Of the study participants, 48% were male, mean age was 62 years, and mean eGFR and median albumin-to-creatinine ratio at baseline were 84 mL/min/1.73 m(2) and 5.3 mg/g, respectively. In demographically adjusted models, both Pb and Pf had similarly strong associations with kidney function decline; compared to those in the lowest tertiles, persons in the highest tertiles of Pb and Pf had a 1.01 and 0.99 mL/min/1.73 m(2)/year faster eGFR decline, respectively (P function decline. In conclusion, the reflected and forward wave components were similarly associated with kidney function decline, and these associations were explained by differences in systolic blood pressure. Copyright © 2016 American Society of Hypertension. Published by Elsevier Inc. All rights reserved.
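
    The indices used here are simple ratios: RM = Pb/Pf, and PPA is the peripheral-to-central pulse pressure ratio. A trivial sketch with illustrative pressure values (not study data):

```python
def wave_reflection_indices(pb, pf, pp_peripheral, pp_central):
    """Reflection magnitude RM = Pb/Pf and pulse pressure amplification
    (PPA), the peripheral-to-central pulse pressure ratio."""
    return pb / pf, pp_peripheral / pp_central

# Illustrative pressures in mmHg (assumptions, not study values)
rm, ppa = wave_reflection_indices(pb=14.0, pf=25.0,
                                  pp_peripheral=52.0, pp_central=40.0)
print(rm, ppa)  # 0.56 1.3
```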

  19. Estimation of bone perfusion as a function of intramedullary pressure in sheep

    International Nuclear Information System (INIS)

    Rosenthal, M.S.; Lehner, C.E.; Pearson, D.W.; Kanikula, T.M.; Adler, G.G.; Venci, R.; Lanphier, E.H.; De Luca, P.M.

    1985-01-01

    It has been reported previously that following decompression (i.e. diving ascents) the intramedullary pressure (IMP) in bone can rise dramatically, possibly by a mechanism which can induce dysbaric osteonecrosis or the "silent bends". If the blood supply for the bone traverses the marrow compartment, then an increase in IMP could cause a temporary decrease in perfusion or hemostasis, and hence ischemia leading to bone necrosis. To test this hypothesis, the authors measured the perfusion of bone in sheep as a function of IMP. The bone perfusion was estimated by measuring the perfusion-limited clearance of Ar-41 (Eγ = 1293 keV, T1/2 = 1.83 h) from the bone mineral matrix of the sheep's tibia. The argon gas was formed in vivo by fast neutron activation of Ca-44 to Ar-41 via the Ca-44(n,α) reaction. Clearance of Ar-41 was measured by time-gated gamma-ray spectroscopy. These results indicate that an elevation of intramedullary pressure can decrease perfusion in bone and may cause bone necrosis.
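
    Because Ar-41 decays physically while it is also cleared by perfusion, the biological clearance rate is the fitted effective decay rate minus the physical decay constant ln(2)/T1/2. A sketch with synthetic count data (the biological rate and count model are assumptions):

```python
import numpy as np

def biological_clearance(t, counts, t_half_phys=1.83):
    """Estimate the perfusion-limited (biological) clearance rate (1/h)
    from gamma-count data: fit an effective decay constant on a log scale
    and subtract the physical decay of Ar-41 (T1/2 = 1.83 h)."""
    lam_eff = -np.polyfit(t, np.log(counts), 1)[0]
    lam_phys = np.log(2) / t_half_phys
    return lam_eff - lam_phys

t = np.linspace(0, 2, 30)          # hours after activation
lam_bio_true = 0.5                 # assumed biological clearance rate
lam_phys = np.log(2) / 1.83
counts = 1e4 * np.exp(-(lam_phys + lam_bio_true) * t)  # noiseless model
print(round(biological_clearance(t, counts), 3))  # 0.5
```

A drop in this biological rate at elevated IMP would indicate reduced perfusion.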

  20. Error Analysis of a Finite Element Method for the Space-Fractional Parabolic Equation

    KAUST Repository

    Jin, Bangti; Lazarov, Raytcho; Pasciak, Joseph; Zhou, Zhi

    2014-01-01

    © 2014 Society for Industrial and Applied Mathematics. We consider an initial boundary value problem for a one-dimensional fractional-order parabolic equation with a space fractional derivative of Riemann-Liouville type and order α ∈ (1, 2). We study a spatial semidiscrete scheme using the standard Galerkin finite element method with piecewise linear finite elements, as well as fully discrete schemes based on the backward Euler method and the Crank-Nicolson method. Error estimates in the L2(D)- and Hα/2(D)-norms are derived for the semidiscrete scheme, and in the L2(D)-norm for the fully discrete schemes. These estimates cover both smooth and nonsmooth initial data and are expressed directly in terms of the smoothness of the initial data. Extensive numerical results are presented to illustrate the theoretical results.