A three critical point theorem for non-smooth functionals with ...
Indian Academy of Sciences (India)
1Department of Mathematics, Faculty of Mathematics Sciences, ... In many applications, we encounter problems with non-smooth energy functionals. These ... The next lemma shows that a locally Lipschitz functional with a compact gradient is ...
A Modified Levenberg-Marquardt Method for Nonsmooth Equations with Finitely Many Maximum Functions
Directory of Open Access Journals (Sweden)
Shou-qiang Du
2008-01-01
Full Text Available For solving nonsmooth systems of equations, the Levenberg-Marquardt method and its variants are of particular importance because of their fast local convergence rates. Systems of finitely many maximum functions are very useful in the study of nonlinear complementarity problems, variational inequality problems, Karush-Kuhn-Tucker systems of nonlinear programming problems, and many problems in mechanics and engineering. In this paper, we present a modified Levenberg-Marquardt method for nonsmooth equations with finitely many maximum functions. Under mild assumptions, the present method is shown to converge Q-linearly. Numerical results comparing the proposed method with classical reformulations indicate that the modified Levenberg-Marquardt algorithm works quite well in practice.
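As a minimal sketch of the class of methods described above (not the paper's algorithm), the following applies a Levenberg-Marquardt iteration to a small system whose components are maxima of smooth functions. The test problem, the finite-difference Jacobian standing in for an element of the B-subdifferential, and the choice of LM parameter tied to the residual norm are all illustrative assumptions.

```python
import numpy as np

def F(x):
    # Nonsmooth system built from maxima of smooth functions (hypothetical
    # test problem). At a solution such as x* = (1, 0) both branches of the
    # first max are active, so F is nonsmooth there.
    return np.array([max(x[0]**2 - 1.0, x[0] - 1.0),
                     x[0] + x[1] - 1.0])

def fd_jacobian(F, x, h=1e-7):
    # Forward-difference Jacobian; at a kink this selects one smooth branch,
    # acting as a crude substitute for an element of the B-subdifferential.
    Fx = F(x)
    J = np.empty((len(Fx), len(x)))
    for j in range(len(x)):
        e = np.zeros(len(x)); e[j] = h
        J[:, j] = (F(x + e) - Fx) / h
    return J

def lm_nonsmooth(F, x0, iters=50):
    x = np.asarray(x0, float)
    for _ in range(iters):
        Fx = F(x)
        J = fd_jacobian(F, x)
        mu = np.linalg.norm(Fx)          # LM parameter shrinks with residual
        step = np.linalg.solve(J.T @ J + mu*np.eye(len(x)), -J.T @ Fx)
        x = x + step
    return x

x = lm_nonsmooth(F, [3.0, -2.0])
```

As the residual shrinks, mu tends to zero and the step approaches a Gauss-Newton step, which is the mechanism behind the fast local convergence mentioned in the abstract.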
Ant colony optimisation for economic dispatch problem with non-smooth cost functions
Energy Technology Data Exchange (ETDEWEB)
Pothiya, Saravuth; Kongprawechnon, Waree [School of Communication, Instrumentation and Control, Sirindhorn International Institute of Technology, Thammasat University, P.O. Box 22, Pathumthani (Thailand); Ngamroo, Issarachai [Center of Excellence for Innovative Energy Systems, Faculty of Engineering, King Mongkut' s Institute of Technology Ladkrabang, Bangkok 10520 (Thailand)
2010-06-15
This paper presents a novel and efficient optimisation approach based on ant colony optimisation (ACO) for solving the economic dispatch (ED) problem with non-smooth cost functions. In order to improve the performance of the ACO algorithm, three additional techniques, i.e. a priority list, variable reduction, and a zoom feature, are presented. To show its efficiency and effectiveness, the proposed ACO is applied to two types of ED problems with non-smooth cost functions. Firstly, the ED problem with valve-point loading effects is solved for systems of 13 and 40 generating units. Secondly, the ED problem considering multiple fuels is solved for a 10-unit system. Additionally, the results of the proposed ACO are compared with those of conventional heuristic approaches. The experimental results show that the proposed ACO approach is capable of obtaining higher-quality solutions in less computational time. (author)
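The valve-point loading effect that makes these cost curves non-smooth is commonly modelled as a rectified sinusoid added to a quadratic fuel cost. A small sketch with made-up coefficients (not taken from the paper) shows why such a curve defeats convex solvers:

```python
import numpy as np

def fuel_cost(P, a, b, c, e, f, Pmin):
    # Quadratic cost plus valve-point ripple |e*sin(f*(Pmin - P))|; the
    # absolute value introduces non-differentiable points (the "valve points").
    return a + b*P + c*P**2 + np.abs(e*np.sin(f*(Pmin - P)))

# Hypothetical single-unit coefficients
a, b, c, e, f, Pmin, Pmax = 150.0, 8.0, 0.01, 30.0, 0.3, 100.0, 500.0
P = np.linspace(Pmin, Pmax, 2001)
C = fuel_cost(P, a, b, c, e, f, Pmin)

# The ripple makes the curve non-convex: some sampled point lies strictly
# above the chord of its neighbours, which cannot happen for a convex cost.
nonconvex = np.any(C[1:-1] > 0.5*(C[:-2] + C[2:]) + 1e-9)
```

Because of the many local minima this ripple creates, population-based heuristics such as ACO are attractive for this problem.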
Directory of Open Access Journals (Sweden)
Jie Shen
2015-01-01
Full Text Available We describe an extension of the redistributed technique from the classical proximal bundle method to the inexact setting for minimizing nonsmooth nonconvex functions. The cutting-plane model we construct approximates not the whole nonconvex function but a local convexification of the approximate objective function, and this local convexification is modified dynamically so as to always yield nonnegative linearization errors. Since we employ only approximate function values and approximate subgradients, the theoretical convergence analysis shows that an approximate stationary point, or some doubly approximate stationary point, can be obtained under mild conditions.
GOSWAMI, DEEPJYOTI; PANI, AMIYA K.; YADAV, SANGITA
2014-01-01
We propose and analyse an alternate approach to a priori error estimates for the semidiscrete Galerkin approximation to a time-dependent parabolic integro-differential equation with nonsmooth initial data. The method is based on energy arguments combined with repeated use of time integration, but without using parabolic-type duality techniques. An optimal L2-error estimate is derived for the semidiscrete approximation when the initial data is in L2. A superconvergence result is obtained and then used to prove a maximum norm estimate for parabolic integro-differential equations defined on a two-dimensional bounded domain. © 2014 Australian Mathematical Society.
Goswami, Deepjyoti
2013-05-01
In the first part of this article, a new mixed method is proposed and analyzed for parabolic integro-differential equations (PIDE) with nonsmooth initial data. Compared to the standard mixed method for PIDE, the present method does not bank on a reformulation using a resolvent operator. Based on energy arguments combined with a repeated use of an integral operator, and without using a parabolic-type duality technique, optimal L2-error estimates are derived for semidiscrete approximations when the initial condition is in L2. Due to the presence of the integral term, it is further observed that a negative norm estimate plays a crucial role in our error analysis. Moreover, the proposed analysis follows the spirit of the proof techniques used in deriving optimal error estimates for finite element approximations to PIDE with smooth data, and therefore it unifies both theories, i.e., the one for smooth data and the one for nonsmooth data. Finally, we extend the proposed analysis to the standard mixed method for PIDE with rough initial data and provide an optimal error estimate in L2, which improves upon the results available in the literature. © 2013 Springer Science+Business Media New York.
DSPSO-TSA for economic dispatch problem with nonsmooth and noncontinuous cost functions
International Nuclear Information System (INIS)
Khamsawang, S.; Jiriwibhakorn, S.
2010-01-01
This paper proposes a new approach based on particle swarm optimization (PSO) and the tabu search algorithm (TSA), called distributed Sobol PSO and TSA (DSPSO-TSA). In order to improve the convergence characteristics and solution quality of the search process, three mechanisms are presented. Firstly, the Sobol sequence is applied to generate the inertia factor instead of the existing process. Secondly, a distributed process is used so as to reach the global solution rapidly: the search process is divided into multiple stages, and a short-term memory is used to record the best search history. Finally, to help guarantee the global solution, the TSA is activated to adjust the solution obtained by the DSPSO algorithm. To show its effectiveness, the proposed DSPSO-TSA is applied to four case studies of the economic dispatch (ED) problem considering nonsmooth and noncontinuous fuel cost functions of generating units. The simulation results obtained from DSPSO-TSA are compared with conventional approaches such as the genetic algorithm (GA), TSA, PSO, and others in the literature. The comparison shows that the proposed approach reaches higher-quality solutions in less computational time than the conventional methods.
DSPSO-TSA for economic dispatch problem with nonsmooth and noncontinuous cost functions
Energy Technology Data Exchange (ETDEWEB)
Khamsawang, S., E-mail: k_suwit999@yahoo.co [Electrical Engineering Department, Faculty of Engineering, King Mongkut' s Institute of Technology Ladkrabang, Ladkrabang District 10520, Bangkok (Thailand); Jiriwibhakorn, S., E-mail: kjsomcha@kmitl.ac.t [Electrical Engineering Department, Faculty of Engineering, King Mongkut' s Institute of Technology Ladkrabang, Ladkrabang District 10520, Bangkok (Thailand)
2010-02-15
This paper proposes a new approach based on particle swarm optimization (PSO) and the tabu search algorithm (TSA), called distributed Sobol PSO and TSA (DSPSO-TSA). In order to improve the convergence characteristics and solution quality of the search process, three mechanisms are presented. Firstly, the Sobol sequence is applied to generate the inertia factor instead of the existing process. Secondly, a distributed process is used so as to reach the global solution rapidly: the search process is divided into multiple stages, and a short-term memory is used to record the best search history. Finally, to help guarantee the global solution, the TSA is activated to adjust the solution obtained by the DSPSO algorithm. To show its effectiveness, the proposed DSPSO-TSA is applied to four case studies of the economic dispatch (ED) problem considering nonsmooth and noncontinuous fuel cost functions of generating units. The simulation results obtained from DSPSO-TSA are compared with conventional approaches such as the genetic algorithm (GA), TSA, PSO, and others in the literature. The comparison shows that the proposed approach reaches higher-quality solutions in less computational time than the conventional methods.
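For orientation, here is a plain global-best PSO kernel of the kind the DSPSO-TSA builds on, applied to a non-smooth toy objective. This sketch omits the paper's Sobol-based inertia, distributed stages, and tabu refinement; all parameters and the objective are illustrative.

```python
import numpy as np

def cost(x):
    # Non-smooth, non-convex toy objective (a stand-in for a fuel-cost
    # curve with valve-point ripple); global minimum 0 at the origin.
    return np.sum(np.abs(x)) + 0.2*np.abs(np.sin(5.0*np.sum(x)))

rng = np.random.default_rng(0)
n_particles, dim, iters = 30, 2, 300
w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration coefficients

x = rng.uniform(-10, 10, (n_particles, dim))
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
g = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    v = w*v + c1*r1*(pbest - x) + c2*r2*(g - x)
    x = np.clip(x + v, -10, 10)
    vals = np.array([cost(p) for p in x])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    g = pbest[np.argmin(pbest_val)].copy()

best_val = pbest_val.min()
```

The DSPSO-TSA modifications target exactly the weaknesses visible here: the inertia factor drives the explore/exploit balance, and a tabu stage can pull the swarm out of premature convergence.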
International Nuclear Information System (INIS)
Al-Othman, A.K.; El-Naggar, K.M.
2008-01-01
Direct search (DS) methods are derivative-free algorithms used to solve optimization problems: they do not require any information about the gradient of the objective function while searching for an optimum solution. One such method is the pattern search (PS) algorithm. This paper presents a new approach based on a constrained pattern search algorithm to solve the security constrained economic dispatch (SCED) problem with a non-smooth cost function. Operation of power systems demands a high degree of security to keep the system operating satisfactorily when subjected to disturbances, while at the same time paying attention to economic aspects. A pattern recognition technique is used first to assess dynamic security. Linear classifiers that determine the stability of the electric power system are presented and added to the other system stability and operational constraints. The problem is formulated as a constrained optimization problem in a way that ensures secure and economic system operation. The pattern search method is then applied to solve the constrained optimization formulation. In particular, the method is tested on three different test systems. Simulation results of the proposed approach are compared with those reported in the literature. The outcome is very encouraging and shows that pattern search is well suited to solving the SCED problem. In addition, valve-point loading effects and total system losses are considered to further investigate the potential of the PS technique. Based on the results, it can be concluded that PS has demonstrated its ability to handle the highly nonlinear, discontinuous, non-smooth cost function of the SCED. (author)
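The polling idea behind pattern search can be shown in a few lines with compass search, its simplest member: poll the coordinate directions, accept any improvement, and halve the mesh when no poll succeeds. This is an unconstrained sketch on a made-up non-smooth objective, not the paper's constrained SCED formulation.

```python
import numpy as np

def compass_search(f, x0, step=1.0, tol=1e-8, max_iter=5000):
    # Derivative-free polling: kinks in f are unproblematic because only
    # function values are compared.
    x = np.asarray(x0, float)
    fx = f(x)
    for _ in range(max_iter):
        if step <= tol:
            break
        improved = False
        for d in range(len(x)):
            for s in (step, -step):
                y = x.copy()
                y[d] += s
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5        # no improving direction: refine the mesh
    return x, fx

# Non-smooth objective with minimum 0 at (1, -2)
f = lambda x: abs(x[0] - 1.0) + abs(x[1] + 2.0)
x, fx = compass_search(f, [0.0, 0.0])
```

In the paper's setting, the security classifiers enter as constraints on which trial points are admissible during polling.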
Goswami, Deepjyoti; Pani, Amiya K.; Yadav, Sangita
2013-01-01
In the first part of this article, a new mixed method is proposed and analyzed for parabolic integro-differential equations (PIDE) with nonsmooth initial data. Compared to the standard mixed method for PIDE, the present method does not bank on a
Fragnelli, Genni
2016-01-01
The authors consider a parabolic problem with degeneracy in the interior of the spatial domain, and they focus on observability results through Carleman estimates for the associated adjoint problem. The novelties of the present paper are two. First, the coefficient of the leading operator only belongs to a Sobolev space. Second, the degeneracy point is allowed to lie even in the interior of the control region, so that no previous result can be adapted to this situation; however, different cases can be handled, and new controllability results are established as a consequence.
DEFF Research Database (Denmark)
Vafamand, Navid; Asemani, Mohammad Hassan; Khayatiyan, Alireza
2018-01-01
This paper proposes a novel robust controller design for a class of nonlinear systems including hard nonlinearity functions. The proposed approach is based on Takagi-Sugeno (TS) fuzzy modeling, nonquadratic Lyapunov function, and nonparallel distributed compensation scheme. In this paper, a novel...... criterion, new robust controller design conditions in terms of linear matrix inequalities are derived. Three practical case studies, electric power steering system, a helicopter model and servo-mechanical system, are presented to demonstrate the importance of such class of nonlinear systems comprising...
DEFF Research Database (Denmark)
Silcowitz, Morten; Niebe, Sarah Maria; Erleben, Kenny
2009-01-01
contact response. In this paper, we present a new approach to contact force determination. We reformulate the contact force problem as a nonlinear root search problem, using a Fischer function. We solve this problem using a generalized Newton method. Our new Fischer - Newton method shows improved...... qualities for specific configurations where the most widespread alternative, the Projected Gauss-Seidel method, fails. Experiments show superior convergence properties of the exact Fischer - Newton method....
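A Fischer function in this context is usually the Fischer-Burmeister function φ(a, b) = √(a² + b²) − a − b, whose zeros encode complementarity (a ≥ 0, b ≥ 0, ab = 0). A toy sketch on a 2×2 linear complementarity problem (not the paper's frictional-contact formulation), with a finite-difference Jacobian standing in for a generalized Jacobian in the Newton step:

```python
import numpy as np

def fb(a, b):
    # Fischer-Burmeister: fb(a,b) = 0  <=>  a >= 0, b >= 0, a*b = 0
    return np.sqrt(a*a + b*b) - a - b

M = np.array([[2.0, 1.0], [1.0, 2.0]])   # toy contact-like LCP: w = M z + q
q = np.array([-1.0, -1.0])

def Phi(z):
    w = M @ z + q
    return fb(z, w)          # root of Phi solves the LCP

def newton_fb(z, iters=50, h=1e-7):
    for _ in range(iters):
        Pz = Phi(z)
        J = np.empty((2, 2))
        for j in range(2):
            e = np.zeros(2); e[j] = h
            J[:, j] = (Phi(z + e) - Pz) / h   # FD generalized-Jacobian proxy
        z = z + np.linalg.solve(J, -Pz)
    return z

z = newton_fb(np.array([1.0, 1.0]))
w = M @ z + q
```

The reformulation turns the complementarity conditions of contact into a square nonlinear root-finding problem, which is what makes a (generalized) Newton method applicable.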
International Nuclear Information System (INIS)
Nkemzi, Boniface
2003-10-01
This paper is concerned with the effective implementation of the Fourier-finite-element method, which combines the approximating Fourier method with the finite-element method, for treating the Dirichlet problem for the Lamé equations in axisymmetric domains Ω̂ ⊂ R^3 with conical vertices and reentrant edges. The partial Fourier decomposition reduces the three-dimensional boundary value problem to an infinite sequence of decoupled two-dimensional boundary value problems on the plane meridian domain Ω_a ⊂ R_+^2 of Ω̂, with solutions u_n (n = 0, 1, 2, ...) being the Fourier coefficients of the solution û of the 3D problem. The asymptotic behavior of the Fourier coefficients near the angular points of Ω_a is described by appropriate singular vector-functions and treated numerically by linear finite elements on locally graded meshes. For a right-hand side function f̂ ∈ (L_2(Ω̂))^3 it is proved that with appropriate mesh grading the rate of convergence of the combined approximations in (W_2^1(Ω̂))^3 is of the order O(h + N^(-1)), where h and N are the parameters of the finite-element and Fourier approximations, respectively, with h → 0 and N → ∞. (author)
The Contact Dynamics method: A nonsmooth story
Dubois, Frédéric; Acary, Vincent; Jean, Michel
2018-03-01
When velocity jumps occur, the dynamics is said to be nonsmooth. For instance, in collections of contacting rigid bodies, jumps are caused by shocks and dry friction. Without compliance at the interface, contact laws are not only non-differentiable in the usual sense but also multi-valued. Modeling contacting bodies is of interest in order to understand the behavior of numerous mechanical systems such as flexible multi-body systems, granular materials or masonry. Granular materials behave puzzlingly, either like a solid or a fluid, and a description in the frame of classical continuum mechanics would be welcome, though it is far from satisfactory nowadays. Jean-Jacques Moreau greatly contributed to convex analysis, functions of bounded variation, differential measure theory, and sweeping process theory, which are definitive mathematical tools for dealing with nonsmooth dynamics. He converted these underlying theoretical ideas into an original nonsmooth implicit numerical method called Contact Dynamics (CD): a robust and efficient method to simulate large collections of bodies with frictional contacts and impacts. The CD method offers a very interesting complementary alternative to the family of smoothed explicit numerical methods, often called the Distinct Element Method (DEM). In this paper, developments and improvements of the CD method are presented together with a critical comparative review of the advantages and drawbacks of both approaches.
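A minimal flavour of CD-style implicit time stepping (far from Moreau's full sweeping-process machinery) is a 1D ball under gravity with a unilateral contact at q = 0 and a Newton impact law, advanced at the velocity level. All parameters below are illustrative; note that no event detection is needed, which is what lets such schemes survive the Zeno accumulation of bounces.

```python
# Moreau-Jean-style time stepping for a 1D bouncing ball (illustrative).
g, e, dt, T = 9.81, 0.5, 1e-3, 3.0   # gravity, restitution, step, horizon
q, v = 1.0, 0.0                       # initial height and velocity
for _ in range(int(T / dt)):
    v_free = v - g * dt               # smooth (free-flight) velocity update
    if q + dt * v_free <= 0.0:        # contact forecast: apply impact law
        # Newton restitution for an incoming velocity; otherwise hold the
        # velocity non-negative so the ball can come to rest on the ground.
        v = -e * v if v < 0.0 else max(v_free, 0.0)
    else:
        v = v_free
    q = max(q + dt * v, 0.0)          # enforce the constraint q >= 0
```

After the bounces decay (a geometric, Zeno-like sequence), the scheme settles into a persistent-contact state with q and v at rest, with no velocity ever differentiated through the jumps.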
On Estimation of the CES Production Function - Revisited
DEFF Research Database (Denmark)
Henningsen, Arne; Henningsen, Geraldine
2012-01-01
Estimation of the non-linear Constant Elasticity of Substitution (CES) function is generally considered problematic due to convergence problems and unstable and/or meaningless results. These problems often arise from a non-smooth objective function with large flat areas, the discontinuity of the CES...... function where the elasticity of substitution is one, and possibly significant rounding errors where the elasticity of substitution is close to one. We suggest three (combinable) solutions that alleviate these problems and improve the reliability and stability of the results....
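In its two-input form the CES function is y = γ(δK^(−ρ) + (1−δ)L^(−ρ))^(−1/ρ), which is discontinuous at ρ = 0 (the Cobb-Douglas limit). A toy illustration of one robust, if crude, estimation tactic, a brute-force grid search on noise-free synthetic data, which is immune to the flat objective regions that trip up gradient-based optimisers; the grids and true parameters are invented for this sketch and are not the authors' method:

```python
import numpy as np

def ces(K, L, gamma, delta, rho):
    # Two-input CES production function; rho != 0 here, keeping away from
    # the Cobb-Douglas limit where this expression is discontinuous.
    return gamma * (delta*K**(-rho) + (1.0 - delta)*L**(-rho))**(-1.0/rho)

rng = np.random.default_rng(1)
K = rng.uniform(1.0, 10.0, 50)
L = rng.uniform(1.0, 10.0, 50)
y = ces(K, L, gamma=2.0, delta=0.6, rho=0.5)   # noise-free synthetic data

# Brute-force grid search over (gamma, delta, rho): slow but derivative-free.
best, best_sse = None, np.inf
for gamma in np.linspace(1.0, 3.0, 21):
    for delta in np.linspace(0.1, 0.9, 17):
        for rho in np.linspace(0.1, 1.0, 19):
            sse = np.sum((y - ces(K, L, gamma, delta, rho))**2)
            if sse < best_sse:
                best, best_sse = (gamma, delta, rho), sse
```

In practice such a coarse grid would only seed a local refinement; the point here is that function evaluations alone suffice when the least-squares surface is non-smooth.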
Nonsmooth Mechanics and Convex Optimization
Kanno, Yoshihiro
2011-01-01
"This book concerns matter that is intrinsically difficult: convex optimization, complementarity and duality, nonsmooth analysis, linear and nonlinear programming, etc. The author has skillfully introduced these and many more concepts, and woven them into a seamless whole by retaining an easy and consistent style throughout. The book is not all theory: There are many real-life applications in structural engineering, cable networks, frictional contact problems, and plasticity! I recommend it to any reader who desires a modern, authoritative account of nonsmooth mechanics and convex optimiz
Spectral asymptotics for nonsmooth singular Green operators
DEFF Research Database (Denmark)
Grubb, Gerd
2014-01-01
is a singular Green operator. It is well-known in smooth cases that when G is of negative order −t on a bounded domain, its eigenvalues or s-numbers have the behavior (*) s_j(G) ∼ c j^(−t/(n−1)) for j → ∞, governed by the boundary dimension n − 1. In some nonsmooth cases, upper estimates (**) s_j(G) ≤ C j^(−t/(n−1...
Non-linear second-order periodic systems with non-smooth potential
Indian Academy of Sciences (India)
In this paper we study second order non-linear periodic systems driven by the ordinary vector p-Laplacian with a non-smooth, locally Lipschitz potential function. Our approach is variational and it is based on the non-smooth critical point theory. We prove existence and multiplicity results under general growth conditions on ...
Non-linear second-order periodic systems with non-smooth potential
Indian Academy of Sciences (India)
Abstract. In this paper we study second order non-linear periodic systems driven by the ordinary vector p-Laplacian with a non-smooth, locally Lipschitz potential function. Our approach is variational and it is based on the non-smooth critical point theory. We prove existence and multiplicity results under general growth ...
2000-01-01
The book provides a self-contained introduction to the mathematical theory of non-smooth dynamical problems, as they frequently arise from mechanical systems with friction and/or impacts. It is aimed at applied mathematicians, engineers, and applied scientists in general who wish to learn the subject.
Clusters in nonsmooth oscillator networks
Nicks, Rachel; Chambon, Lucie; Coombes, Stephen
2018-03-01
For coupled oscillator networks with Laplacian coupling, the master stability function (MSF) has proven a particularly powerful tool for assessing the stability of the synchronous state. Using tools from group theory, this approach has recently been extended to treat more general cluster states. However, the MSF and its generalizations require the determination of a set of Floquet multipliers from variational equations obtained by linearization around a periodic orbit. Since closed form solutions for periodic orbits are invariably hard to come by, the framework is often explored using numerical techniques. Here, we show that further insight into network dynamics can be obtained by focusing on piecewise linear (PWL) oscillator models. Not only do these allow for the explicit construction of periodic orbits, their variational analysis can also be explicitly performed. The price for adopting such nonsmooth systems is that many of the notions from smooth dynamical systems, and in particular linear stability, need to be modified to take into account possible jumps in the components of Jacobians. This is naturally accommodated with the use of saltation matrices. By augmenting the variational approach for studying smooth dynamical systems with such matrices we show that, for a wide variety of networks that have been used as models of biological systems, cluster states can be explicitly investigated. By way of illustration, we analyze an integrate-and-fire network model with event-driven synaptic coupling as well as a diffusively coupled network built from planar PWL nodes, including a reduction of the popular Morris-Lecar neuron model. We use these examples to emphasize that the stability of network cluster states can depend as much on the choice of single node dynamics as it does on the form of network structural connectivity. Importantly, the procedure that we present here, for understanding cluster synchronization in networks, is valid for a wide variety of systems in
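The saltation matrix that corrects the variational (Jacobian) flow across a transversal crossing of a switching manifold h(x) = 0 has the standard form S = I + (f⁺ − f⁻)nᵀ / (nᵀf⁻), with n = ∇h, and it satisfies S f⁻ = f⁺. A small numeric check under assumed vector fields (the planar fields and normal below are invented for illustration):

```python
import numpy as np

def saltation(f_minus, f_plus, n):
    # S = I + (f+ - f-) n^T / (n^T f-), valid for a transversal crossing of
    # a time-independent switching surface with normal n = grad h.
    f_minus, f_plus, n = map(np.asarray, (f_minus, f_plus, n))
    denom = n @ f_minus
    assert abs(denom) > 1e-12, "crossing must be transversal"
    return np.eye(len(n)) + np.outer(f_plus - f_minus, n) / denom

# Hypothetical planar PWL system: fields on either side of h(x) = x[0] = 0
f_m = np.array([1.0, -2.0])   # vector field just before the switch
f_p = np.array([1.0,  3.0])   # vector field just after the switch
n   = np.array([1.0,  0.0])   # normal to the switching manifold
S = saltation(f_m, f_p, n)
```

In the MSF computation described above, such matrices multiply the smooth fundamental solution at each switching event, so the Floquet multipliers of a nonsmooth orbit pick up one saltation factor per crossing.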
International Nuclear Information System (INIS)
Mohammadi-ivatloo, Behnam; Rabiee, Abbas; Ehsan, Mehdi
2012-01-01
Highlights: ► New approach to solve power systems dynamic economic dispatch. ► Considering valve-point effects and prohibited operation zones. ► Proposing the TVAC-IPSO algorithm. - Abstract: The objective of the dynamic economic dispatch (DED) problem is to schedule power generation for the online units over a given time horizon economically, satisfying various operational constraints. Due to valve-point effects and prohibited operating zones (POZs) in the generating units' cost functions, the DED problem is a highly non-linear and non-convex optimization problem. The DED problem may be even more complicated if transmission losses and ramp-rate constraints are taken into account. This paper presents a novel heuristic algorithm to solve the DED problem of generating units, employing the time varying acceleration coefficients iteration particle swarm optimization (TVAC-IPSO) method. The effectiveness of the proposed method is examined and validated by carrying out extensive tests on different test systems, i.e. 5-unit and 10-unit test systems. Valve-point effects, POZs and ramp-rate constraints along with transmission losses are considered. To examine the efficiency of the proposed TVAC-IPSO algorithm, comprehensive studies are carried out which compare the convergence properties of the proposed TVAC-IPSO approach with the conventional PSO algorithm, in addition to other recently reported approaches. Numerical results show that the TVAC-IPSO method has good convergence properties and that the generation costs resulting from the proposed method are lower than those of other algorithms reported in recent literature.
Advanced h∞ control towards nonsmooth theory and applications
Orlov, Yury V
2014-01-01
This compact monograph is focused on disturbance attenuation in nonsmooth dynamic systems, developing an H∞ approach in the nonsmooth setting. Similar to the standard nonlinear H∞ approach, the proposed nonsmooth design guarantees both the internal asymptotic stability of a nominal closed-loop system and the dissipativity inequality, which states that the size of an error signal is uniformly bounded with respect to the worst-case size of an external disturbance signal. This guarantee is achieved by constructing an energy or storage function that satisfies the dissipativity inequality and is then utilized as a Lyapunov function to ensure the internal stability requirements. Advanced H∞ Control is unique in the literature for its treatment of disturbance attenuation in nonsmooth systems. It synthesizes various tools, including Hamilton–Jacobi–Isaacs partial differential inequalities as well as Linear Matrix Inequalities. Along with the finite-dimensional treatment, the synthesis is exten...
An introduction to nonsmooth analysis
Ferrera, Juan
2013-01-01
Nonsmooth analysis is a relatively recent area of mathematical analysis. The literature on this subject consists mainly of research papers and books. The purpose of this book is to provide a handbook for undergraduate and graduate students of mathematics that introduces this interesting area in detail. Includes different kinds of sub- and superdifferentials as well as generalized gradients. Also includes the main tools of the theory, such as Sum and Chain Rules or Mean Value theorems. Content is introduced in an elementary way, developing many examples, allowing the reader to understand a theory which
Nonsmooth mechanics models, dynamics and control
Brogliato, Bernard
2016-01-01
Now in its third edition, this standard reference is a comprehensive treatment of nonsmooth mechanical systems refocused to give more prominence to control and modelling. It covers Lagrangian and Newton–Euler systems, detailing mathematical tools such as convex analysis and complementarity theory. The ways in which nonsmooth mechanics influence and are influenced by well-posedness analysis, numerical analysis and simulation, modelling and control are explained. Contact/impact laws, stability theory and trajectory-tracking control are given in-depth exposition connected by a framework formed from complementarity systems and measure-differential inclusions. Links are established with electrical circuits with set-valued nonsmooth elements and with other nonsmooth dynamical systems like impulsive and piecewise linear systems. Nonsmooth Mechanics (third edition) has been substantially rewritten, edited and updated to account for the significant body of results that have emerged in the twenty-first century—incl...
Lovelock action with nonsmooth boundaries
Cano, Pablo A.
2018-05-01
We examine the variational problem in Lovelock gravity when the boundary contains timelike and spacelike segments nonsmoothly glued. We show that two kinds of contributions have to be added to the action. The first one is associated with the presence of a boundary in every segment and it depends on intrinsic and extrinsic curvatures. We can think of this contribution as adding a total derivative to the usual surface term of Lovelock gravity. The second one appears in every joint between two segments and it involves the integral along the joint of the Jacobson-Myers entropy density weighted by the Lorentz boost parameter, which relates the orthonormal frames in each segment. We argue that this term can be straightforwardly extended to the case of joints involving null boundaries. As an application, we compute the contribution of these terms to the complexity of global anti-de Sitter space in Lovelock gravity by using the "complexity = action" proposal and we identify possible universal terms for arbitrary values of the Lovelock couplings. We find that they depend on the charge a* controlling the holographic entanglement entropy and on a new constant that we characterize.
On Functional Calculus Estimates
Schwenninger, F.L.
2015-01-01
This thesis presents various results within the field of operator theory that are formulated in estimates for functional calculi. Functional calculus is the general concept of defining operators of the form $f(A)$, where f is a function and $A$ is an operator, typically on a Banach space. Norm
Directory of Open Access Journals (Sweden)
Saroj Kumar Dash
2016-07-01
Full Text Available The basic objective of economic load dispatch (ELD) is to optimize the total fuel cost of a hybrid solar thermal electric power plant (HSTP). In ELD problems the cost function for each generator has been approximated by a single quadratic cost equation. As the cost of coal increases, it becomes even more important to have a good model for the production cost of each generator in the solar thermal hybrid system. A more accurate formulation is obtained for the ELD problem by expressing the generation cost function as a piecewise quadratic cost function. However, solution methods for the ELD problem with piecewise quadratic cost functions require more complicated algorithms, such as the hierarchical structure approach along with evolutionary computations (ECs). A test system comprising 10 units with 29 different fuel cost equations [7] is considered in this paper. The applied genetic algorithm method provides an optimal solution for the given load demand.
A one-layer recurrent neural network for constrained nonsmooth invex optimization.
Li, Guocheng; Yan, Zheng; Wang, Jun
2014-02-01
Invexity is an important notion in nonconvex optimization. In this paper, a one-layer recurrent neural network is proposed for solving constrained nonsmooth invex optimization problems, designed based on an exact penalty function method. It is proved herein that any state of the proposed neural network is globally convergent to the optimal solution set of constrained invex optimization problems, with a sufficiently large penalty parameter. In addition, any neural state is globally convergent to the unique optimal solution, provided that the objective function and constraint functions are pseudoconvex. Moreover, any neural state is globally convergent to the feasible region in finite time and stays there thereafter. The lower bounds of the penalty parameter and convergence time are also estimated. Two numerical examples are provided to illustrate the performance of the proposed neural network. Copyright © 2013 Elsevier Ltd. All rights reserved.
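The exact-penalty idea can be mimicked in discrete time by an Euler discretization of subgradient dynamics ẋ ∈ −∂(f + σP)(x), where P penalizes constraint violation and σ exceeds a threshold so that minimizers of the penalized function are feasible. The convex toy problem, penalty weight, and step size below are illustrative (the paper's setting is the more general invex one):

```python
import numpy as np

# minimize f(x) = |x1| + |x2|  subject to  x1 + x2 = 1;
# exact penalty E(x) = f(x) + sigma*|x1 + x2 - 1| with sigma > 1, so every
# minimizer of E is feasible and solves the constrained problem (value 1).

def subgrad_E(x, sigma=5.0):
    # One element of the subdifferential of the penalized function
    g = np.sign(x)                                   # subgradient of |x1|+|x2|
    g = g + sigma * np.sign(x[0] + x[1] - 1.0) * np.ones(2)
    return g

x = np.array([2.0, -3.0])
alpha = 5e-4                     # small constant step, an Euler discretization
for _ in range(40000):
    x = x - alpha * subgrad_E(x)

feas_gap = abs(x[0] + x[1] - 1.0)
obj = abs(x[0]) + abs(x[1])
```

With a constant step the iterate chatters in an O(alpha)-neighborhood of the solution set, which mirrors the finite-time convergence to the feasible region described in the abstract.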
Adaptive Integration of Nonsmooth Dynamical Systems
2017-10-11
W911NF-12-R-0012-03: Adaptive Integration of Nonsmooth Dynamical Systems (2017). Report Term: 0-Other. Email: drum@gwu.edu. Distribution Statement: 1-Approved for public release; distribution is ... classdrake_1_1systems_1_1_integrator_base.html; 3) a solver for dynamical systems with arbitrary unilateral and bilateral constraints (the key component of the time stepping systems).
Bifurcations of non-smooth systems
Angulo, Fabiola; Olivar, Gerard; Osorio, Gustavo A.; Escobar, Carlos M.; Ferreira, Jocirei D.; Redondo, Johan M.
2012-12-01
Non-smooth systems (namely, piecewise-smooth systems) have received much attention in the last decade. Many contributions in this area show that theory and applications (to electronic circuits, mechanical systems, …) are relevant to problems in science and engineering. In particular, new bifurcations have been reported in the literature, and this was the topic of this minisymposium, so both bifurcation theory and its applications were included. Several contributions from different fields show that non-smooth bifurcations are a hot research topic; in this paper the reader can find contributions from electronics, energy markets and population dynamics. Also, a carefully written, specific algebraic software tool is presented.
Fast nonconvex nonsmooth minimization methods for image restoration and reconstruction.
Nikolova, Mila; Ng, Michael K; Tam, Chi-Pan
2010-12-01
Nonconvex nonsmooth regularization has advantages over convex regularization for restoring images with neat edges. However, its practical interest used to be limited by the difficulty of the computational stage, which requires a nonconvex nonsmooth minimization. In this paper, we deal with nonconvex nonsmooth minimization methods for image restoration and reconstruction. Our theoretical results show that the solution of the nonconvex nonsmooth minimization problem is composed of constant regions surrounded by closed contours and neat edges. The main goal of this paper is to develop fast minimization algorithms to solve the nonconvex nonsmooth minimization problem. Our experimental results show the effectiveness and efficiency of the proposed algorithms.
An approach for spherical harmonic analysis of non-smooth data
Wang, Hansheng; Wu, Patrick; Wang, Zhiyong
2006-12-01
A method is proposed to evaluate the spherical harmonic coefficients of a global or regional, non-smooth, observable dataset sampled on an equiangular grid. The method is based on an integration strategy using new recursion relations. Because a bilinear function is used to interpolate points within the grid cells, this method is suitable for non-smooth data; the slope of the data may be piecewise continuous, with extreme changes at the boundaries. In order to validate the method, the coefficients of an axisymmetric model are computed and compared with the derived analytical expressions. Numerical results show that this method is indeed reasonable for non-smooth models, and that the maximum degree for spherical harmonic analysis should be empirically determined by several factors including the model resolution and the degree of non-smoothness in the dataset, and it can be several times larger than the total number of latitudinal grid points. It is also shown that this method is appropriate for the approximate analysis of a smooth dataset. Moreover, this paper provides the program flowchart and an internet address where the FORTRAN code with program specifications is made available.
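For the axisymmetric validation case mentioned above, the analysis reduces to zonal (Legendre) analysis in colatitude, c_l = (2l+1)/2 ∫₀^π f(θ) P_l(cos θ) sin θ dθ, which is easy to check numerically. The sketch below uses plain trapezoidal quadrature, not the paper's bilinear-interpolation recursion relations, and the test field is invented:

```python
import numpy as np

def zonal_coeffs(f, lmax, ntheta=2001):
    # c_l = (2l+1)/2 * integral_0^pi f(theta) P_l(cos theta) sin(theta) dtheta
    theta = np.linspace(0.0, np.pi, ntheta)
    mu = np.cos(theta)
    w = np.full(ntheta, np.pi / (ntheta - 1))   # trapezoidal weights
    w[0] *= 0.5
    w[-1] *= 0.5
    coeffs = []
    for l in range(lmax + 1):
        # Legendre series with a single unit coefficient at degree l = P_l(mu)
        Pl = np.polynomial.legendre.legval(mu, [0.0]*l + [1.0])
        coeffs.append((2*l + 1) / 2.0 * np.sum(w * f(theta) * Pl * np.sin(theta)))
    return np.array(coeffs)

# Test field: f(theta) = P_2(cos theta); exact coefficients are c_2 = 1,
# and all other c_l = 0, by orthogonality of the Legendre polynomials.
f = lambda th: 0.5 * (3.0*np.cos(th)**2 - 1.0)
c = zonal_coeffs(f, lmax=4)
```

For genuinely non-smooth data this simple quadrature degrades, which is exactly the gap the paper's bilinear-interpolation-based recursions are designed to close.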
Introduction to nonsmooth optimization theory, practice and software
Bagirov, Adil; Mäkelä, Marko M
2014-01-01
This book attempts to be the first easy-to-read book about nonsmooth optimization. It covers both the theory and the numerical methods used in nonsmooth optimization and offers a survey of different problems arising in the field. Both the theory and the most common problems are illustrated with examples, making the book suitable for teaching purposes as well as self-study.
DEFF Research Database (Denmark)
Andersen, C K; Andersen, K; Kragh-Sørensen, P
2000-01-01
on these criteria, a two-part model was chosen. In this model, the probability of incurring any costs was estimated using a logistic regression, while the level of the costs was estimated in the second part of the model. The choice of model had a substantial impact on the predicted health care costs, e...
Variance Function Estimation. Revision.
1987-03-01
A one-layer recurrent neural network for constrained nonsmooth optimization.
Liu, Qingshan; Wang, Jun
2011-10-01
This paper presents a novel one-layer recurrent neural network modeled by means of a differential inclusion for solving nonsmooth optimization problems, in which the number of neurons in the proposed neural network is the same as the number of decision variables of optimization problems. Compared with existing neural networks for nonsmooth optimization problems, the global convexity condition on the objective functions and constraints is relaxed, which allows the objective functions and constraints to be nonconvex. It is proven that the state variables of the proposed neural network are convergent to optimal solutions if a single design parameter in the model is larger than a derived lower bound. Numerical examples with simulation results substantiate the effectiveness and illustrate the characteristics of the proposed neural network.
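As a hedged illustration of the underlying idea, not the proposed network itself, a forward-Euler discretisation of the differential inclusion x' ∈ -∂f(x) already drives the state to the minimiser of a simple nonsmooth convex function; the test function and names below are invented for the sketch.

```python
def subgrad(x):
    # One Clarke subgradient of f(x) = |x - 2| + 0.5*|x + 1|  (minimiser: x = 2)
    s = 1.0 if x > 2 else (-1.0 if x < 2 else 0.0)
    s += 0.5 * (1.0 if x > -1 else (-1.0 if x < -1 else 0.0))
    return s

def flow(x0, dt=1e-3, steps=20000):
    # Forward-Euler discretisation of the differential inclusion x' in -df(x);
    # the state chatters around the minimiser within a band of width O(dt)
    x = x0
    for _ in range(steps):
        x -= dt * subgrad(x)
    return x
```

From either side of the kink the trajectory reaches a small neighbourhood of the minimiser x = 2 and stays there.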
Sharp Spectral Asymptotics and Weyl Formula for Elliptic Operators with Non-smooth Coefficients
Energy Technology Data Exchange (ETDEWEB)
Zielinski, Lech [Universite Paris 7 (D. Diderot), Institut de Mathematiques de Paris-Jussieu UMR9994 (France)
1999-09-15
The aim of this paper is to give the Weyl formula for eigenvalues of self-adjoint elliptic operators, assuming that first-order derivatives of the coefficients are Lipschitz continuous. The approach is based on the asymptotic formula of Hörmander's type for the spectral function of pseudodifferential operators having Lipschitz continuous Hamiltonian flow and obtained via a regularization procedure of nonsmooth coefficients.
The Nonsmooth Vibration of a Relative Rotation System with Backlash and Dry Friction
Directory of Open Access Journals (Sweden)
Minjia He
2017-01-01
Full Text Available We investigate a relative rotation system with backlash and dry friction. Firstly, the corresponding nonsmooth characters are discussed by the differential inclusion theory, and the analytic conditions for stick and nonstick motions are developed to understand the motion switching mechanism. Based on such analytic conditions of motion switching, the influence of the maximal static friction torque and the driving torque on the stick motion is studied. Moreover, the sliding time bifurcation diagrams, duty cycle figures, time history diagrams, and the K-function time history diagram are also presented, which confirm the analytic results. The methodology presented in this paper can be applied to predictions of motions in nonsmooth dynamical systems.
The full Keller-Segel model is well-posed on nonsmooth domains
Horstmann, D.; Meinlschmidt, H.; Rehberg, J.
2018-04-01
In this paper we prove that the full Keller-Segel system, a quasilinear strongly coupled reaction-crossdiffusion system of four parabolic equations, is well-posed in the sense that it always admits a unique local-in-time solution in an adequate function space, provided that the initial values are suitably regular. The proof is done via an abstract solution theorem for nonlocal quasilinear equations by Amann and is carried out for general source terms. It is fundamentally based on recent nontrivial elliptic and parabolic regularity results which hold true even on rather general nonsmooth spatial domains. For space dimensions 2 and 3, this enables us to work in a nonsmooth setting which is not available in classical parabolic systems theory. Apparently, there exists no comparable existence result for the full Keller-Segel system up to now. Due to the large class of possibly nonsmooth domains admitted, we also obtain new results for the ‘standard’ Keller-Segel system consisting of only two equations as a special case. This work is dedicated to Prof Willi Jäger.
Neural network for nonsmooth pseudoconvex optimization with general convex constraints.
Bian, Wei; Ma, Litao; Qin, Sitian; Xue, Xiaoping
2018-05-01
In this paper, a one-layer recurrent neural network is proposed for solving a class of nonsmooth, pseudoconvex optimization problems with general convex constraints. Based on the smoothing method, we construct a new regularization function, which does not depend on any information of the feasible region. Thanks to the special structure of the regularization function, we prove the global existence, uniqueness and "slow solution" character of the state of the proposed neural network. Moreover, the state solution of the proposed network is proved to be convergent to the feasible region in finite time and to the optimal solution set of the related optimization problem subsequently. In particular, the convergence of the state to an exact optimal solution is also considered in this paper. Numerical examples with simulation results are given to show the efficiency and good characteristics of the proposed network. In addition, some preliminary theoretical analysis and application of the proposed network for a wider class of dynamic portfolio optimization are included. Copyright © 2018 Elsevier Ltd. All rights reserved.
Particle-based solid for nonsmooth multidomain dynamics
Nordberg, John; Servin, Martin
2018-04-01
A method for simulation of elastoplastic solids in multibody systems with nonsmooth and multidomain dynamics is developed. The solid is discretised into pseudo-particles using the meshfree moving least squares method for computing the strain tensor. The particle's strain and stress tensor variables are mapped to a compliant deformation constraint. The discretised solid model thus fits a unified framework for nonsmooth multidomain dynamics simulations including rigid multibodies with complex kinematic constraints such as articulation joints, unilateral contacts with dry friction, drivelines, and hydraulics. The nonsmooth formulation allows for impact impulses to propagate instantly between the rigid multibody and the solid. Plasticity is introduced through an associative perfectly plastic modified Drucker-Prager model. The elastic and plastic dynamics are verified for simple test systems, and the capability of simulating tracked terrain vehicles driving on a deformable terrain is demonstrated.
Intensive Research Program on Advances in Nonsmooth Dynamics 2016
Jeffrey, Mike; Lázaro, J; Olm, Josep
2017-01-01
This volume contains extended abstracts outlining selected talks and other selected presentations given by participants throughout the "Intensive Research Program on Advances in Nonsmooth Dynamics 2016", held at the Centre de Recerca Matemàtica (CRM) in Barcelona from February 1st to April 29th, 2016. They include brief research articles reporting new results, descriptions of preliminary work or open problems, and outlines of prominent discussion sessions. The articles are all the result of direct collaborations initiated during the research program. The topic is the theory and applications of Nonsmooth Dynamics. This includes systems involving elements of: impacting, switching, on/off control, hybrid discrete-continuous dynamics, jumps in physical properties, and many others. Applications include: electronics, climate modeling, life sciences, mechanics, ecology, and more. Numerous new results are reported concerning the dimensionality and robustness of nonsmooth models, shadowing variables, numbers of limit...
A Non-smooth Newton Method for Multibody Dynamics
International Nuclear Information System (INIS)
Erleben, K.; Ortiz, R.
2008-01-01
In this paper we deal with the simulation of rigid bodies. Rigid body dynamics have become very important for simulating rigid body motion in interactive applications, such as computer games or virtual reality. We present a novel way of computing contact forces using a Newton method. The contact problem is reformulated as a system of non-linear and non-smooth equations, and we solve this system using a non-smooth version of Newton's method. One of the main contributions of this paper is the reformulation of the complementarity problems, used to model impacts, as a system of equations that can be solved using traditional methods.
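A minimal sketch of the reformulation idea, under the assumption that the complementarity condition is rewritten with the Fischer-Burmeister function (a standard choice for turning complementarity into equations; the paper's exact reformulation may differ), solved here in one dimension with a finite difference standing in for an element of the generalized Jacobian:

```python
import math

def fb(a, b):
    # Fischer-Burmeister NCP function: fb(a, b) = 0  iff  a >= 0, b >= 0, a*b = 0
    return math.sqrt(a * a + b * b) - a - b

def semismooth_newton(f, x0, tol=1e-9, itmax=50):
    # Solve the complementarity problem  0 <= x  perp  f(x) >= 0
    # via Newton's method on the nonsmooth equation fb(x, f(x)) = 0
    x = x0
    for _ in range(itmax):
        g = fb(x, f(x))
        if abs(g) < tol:
            break
        h = 1e-7
        dg = (fb(x + h, f(x + h)) - g) / h  # finite-difference generalized derivative
        x -= g / dg
    return x
```

For f(x) = x - 1 the solution is x = 1 (f active), and for f(x) = x + 2 it is x = 0 (bound active).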
PHAZE, Parametric Hazard Function Estimation
International Nuclear Information System (INIS)
2002-01-01
1 - Description of program or function: Phaze performs statistical inference calculations on a hazard function (also called a failure rate or intensity function) based on reported failure times of components that are repaired and restored to service. Three parametric models are allowed: the exponential, linear, and Weibull hazard models. The inference includes estimation (maximum likelihood estimators and confidence regions) of the parameters and of the hazard function itself, testing of hypotheses such as increasing failure rate, and checking of the model assumptions. 2 - Methods: PHAZE assumes that the failures of a component follow a time-dependent (or non-homogenous) Poisson process and that the failure counts in non-overlapping time intervals are independent. Implicit in the independence property is the assumption that the component is restored to service immediately after any failure, with negligible repair time. The failures of one component are assumed to be independent of those of another component; a proportional hazards model is used. Data for a component are called time censored if the component is observed for a fixed time-period, or plant records covering a fixed time-period are examined, and the failure times are recorded. The number of these failures is random. Data are called failure censored if the component is kept in service until a predetermined number of failures has occurred, at which time the component is removed from service. In this case, the number of failures is fixed, but the end of the observation period equals the final failure time and is random. A typical PHAZE session consists of reading failure data from a file prepared previously, selecting one of the three models, and performing data analysis (i.e., performing the usual statistical inference about the parameters of the model, with special emphasis on the parameter(s) that determine whether the hazard function is increasing). The final goals of the inference are a point estimate
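For the simplest of the three models, the exponential (constant) hazard with time-censored data, the maximum likelihood estimator has the closed form λ̂ = n/T. The sketch below is a hypothetical illustration of that one case, not PHAZE itself:

```python
import math

def loglik_exp(lam, n, T):
    # Poisson-process log-likelihood (up to an additive constant) for a
    # constant hazard lam: n failures observed over total exposure time T
    return n * math.log(lam) - lam * T

def mle_rate(n, T):
    # Closed-form maximiser of loglik_exp: lambda_hat = n / T
    return n / T
```

For the linear and Weibull hazard models described above, the maximisation is numerical rather than closed-form.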
Variance function estimation for immunoassays
International Nuclear Information System (INIS)
Raab, G.M.; Thompson, R.; McKenzie, I.
1980-01-01
A computer program is described which implements a recently described, modified likelihood method of determining an appropriate weighting function to use when fitting immunoassay dose-response curves. The relationship between the variance of the response and its mean value is assumed to have an exponential form, and the best fit to this model is determined from the within-set variability of many small sets of repeated measurements. The program estimates the parameter of the exponential function with its estimated standard error, and tests the fit of the experimental data to the proposed model. Output options include a list of the actual and fitted standard deviation of the set of responses, a plot of actual and fitted standard deviation against the mean response, and an ordered list of the 10 sets of data with the largest ratios of actual to fitted standard deviation. The program has been designed for a laboratory user without computing or statistical expertise. The test-of-fit has proved valuable for identifying outlying responses, which may be excluded from further analysis by being set to negative values in the input file. (Auth.)
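As a hedged stand-in for the modified likelihood method above (which fits the variance-mean relationship from the within-set variability of replicate sets), the following sketch fits a power-law variance function var ≈ a·mean^b by ordinary least squares on logs. The functional form and names are illustrative assumptions, not the program's actual model.

```python
import math

def fit_variance_function(means, variances):
    # Least-squares fit of log(var) = log(a) + b*log(mean),
    # i.e. the power-law variance function var ~ a * mean**b
    xs = [math.log(m) for m in means]
    ys = [math.log(v) for v in variances]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
        / sum((x - xbar) ** 2 for x in xs)
    a = math.exp(ybar - b * xbar)
    return a, b
```

The fitted (a, b) then supply weights 1/var(mean) for weighted fitting of the dose-response curve.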
Cheng, Long; Hou, Zeng-Guang; Lin, Yingzi; Tan, Min; Zhang, Wenjun Chris; Wu, Fang-Xiang
2011-05-01
A recurrent neural network is proposed for solving the non-smooth convex optimization problem with the convex inequality and linear equality constraints. Since the objective function and inequality constraints may not be smooth, the Clarke's generalized gradients of the objective function and inequality constraints are employed to describe the dynamics of the proposed neural network. It is proved that the equilibrium point set of the proposed neural network is equivalent to the optimal solution of the original optimization problem by using the Lagrangian saddle-point theorem. Under weak conditions, the proposed neural network is proved to be stable, and the state of the neural network is convergent to one of its equilibrium points. Compared with the existing neural network models for non-smooth optimization problems, the proposed neural network can deal with a larger class of constraints and is not based on the penalty method. Finally, the proposed neural network is used to solve the identification problem of genetic regulatory networks, which can be transformed into a non-smooth convex optimization problem. The simulation results show the satisfactory identification accuracy, which demonstrates the effectiveness and efficiency of the proposed approach.
Directory of Open Access Journals (Sweden)
J. Gwinner
2013-01-01
Full Text Available The purpose of this paper is twofold. Firstly we consider nonlinear nonsmooth elliptic boundary value problems, and also related parabolic initial boundary value problems, which model in a simplified way steady-state unilateral contact with Tresca friction in solid mechanics and, respectively, nonlinear transient heat conduction with unilateral boundary conditions. Here a recent duality approach, which augments the classical Babuška-Brezzi saddle point formulation for mixed variational problems to twofold saddle point formulations, is extended to the nonsmooth problems under consideration. This approach leads to variational inequalities of mixed form for three coupled fields as unknowns and to related differential mixed variational inequalities in the time-dependent case. Secondly we are concerned with the stability of the solution set of a general class of differential mixed variational inequalities. Here we present a novel upper set convergence result with respect to perturbations in the data, including perturbations of the associated nonlinear maps, the nonsmooth convex functionals, and the convex constraint set. We employ epiconvergence for the convergence of the functionals and Mosco convergence for set convergence. We impose weak convergence assumptions on the perturbed maps using the monotonicity method of Browder and Minty.
Estimation of Correlation Functions by Random Decrement
DEFF Research Database (Denmark)
Asmussen, J. C.; Brincker, Rune
This paper illustrates how correlation functions can be estimated by the random decrement technique. Several different formulations of the random decrement technique for estimating the correlation functions are considered. The speed and accuracy of the different formulations of the random decrement...... and the length of the correlation functions. The accuracy of the estimates with respect to the theoretical correlation functions and the modal parameters are both investigated. The modal parameters are extracted from the correlation functions using the polyreference time domain technique.
Fundamental solutions and local solvability for nonsmooth Hörmander’s operators
Bramanti, Marco; Manfredini, Maria
2017-01-01
The authors consider operators of the form L=\\sum_{i=1}^{n}X_{i}^{2}+X_{0} in a bounded domain of \\mathbb{R}^{p} where X_{0},X_{1},\\ldots,X_{n} are nonsmooth Hörmander's vector fields of step r such that the highest order commutators are only Hölder continuous. Applying Levi's parametrix method the authors construct a local fundamental solution \\gamma for L and provide growth estimates for \\gamma and its first derivatives with respect to the vector fields. Requiring the existence of one more derivative of the coefficients the authors prove that \\gamma also possesses second derivatives, and they deduce the local solvability of L, constructing, by means of \\gamma, a solution to Lu=f with Hölder continuous f. The authors also prove C_{X,loc}^{2,\\alpha} estimates on this solution.
Analyzing the non-smooth dynamics induced by a split-path nonlinear integral controller
Hunnekens, B.G.B.; van Loon, S.J.L.M.; van de Wouw, N.; Heemels, W.P.M.H.; Nijmeijer, H.; Ecker, Horst; Steindl, Alois; Jakubek, Stefan
2014-01-01
In this paper, we introduce a novel non-smooth integral controller, which aims at achieving a better transient response in terms of overshoot of a feedback controlled dynamical system. The resulting closed-loop system can be represented as a non-smooth system with different continuous dynamics being
Non-Parametric Estimation of Correlation Functions
DEFF Research Database (Denmark)
Brincker, Rune; Rytter, Anders; Krenk, Steen
In this paper three methods of non-parametric correlation function estimation are reviewed and evaluated: the direct method, estimation by the Fast Fourier Transform and finally estimation by the Random Decrement technique. The basic ideas of the techniques are reviewed, sources of bias are point...
Estimating Stochastic Volatility Models using Prediction-based Estimating Functions
DEFF Research Database (Denmark)
Lunde, Asger; Brix, Anne Floor
In this paper prediction-based estimating functions (PBEFs), introduced in Sørensen (2000), are reviewed and PBEFs for the Heston (1993) stochastic volatility model are derived. The finite sample performance of the PBEF based estimator is investigated in a Monte Carlo study, and compared...... to the performance of the GMM estimator based on conditional moments of integrated volatility from Bollerslev and Zhou (2002). The case where the observed log-price process is contaminated by i.i.d. market microstructure (MMS) noise is also investigated. First, the impact of MMS noise on the parameter estimates from...... to correctly account for the noise are investigated. Our Monte Carlo study shows that the estimator based on PBEFs outperforms the GMM estimator, both in the setting with and without MMS noise. Finally, an empirical application investigates the possible challenges and general performance of applying the PBEF...
Nonlinear dynamics of a nonsmooth shape memory alloy oscillator
International Nuclear Information System (INIS)
Cardozo dos Santos, Bruno; Amorim Savi, Marcelo
2009-01-01
In the last years, there is an increasing interest in nonsmooth system dynamics motivated by different applications including rotor dynamics, oil drilling and machining. Besides, shape memory alloys (SMAs) have been used in various applications exploring their high dissipation capacity related to their hysteretic behavior. This contribution investigates the nonlinear dynamics of shape memory alloy nonsmooth systems considering a linear oscillator with a discontinuous support built with an SMA element. A constitutive model developed by Paiva et al. [Paiva A, Savi MA, Braga AMB, Pacheco PMCL. A constitutive model for shape memory alloys considering tensile-compressive asymmetry and plasticity. Int J Solids Struct 2005;42(11-12):3439-57] is employed to describe the thermomechanical behavior of the SMA element. Numerical investigations show results where the SMA discontinuous support can dramatically change the system dynamics when compared to those associated with a linear elastic support system. A parametric study is of concern showing the system behavior for different system characteristics, forcing excitation and also gaps. These results show that smart materials can be employed in different kinds of mechanical systems exploring some of the remarkable properties of these alloys.
Receiver function estimated by maximum entropy deconvolution
Institute of Scientific and Technical Information of China (English)
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate receiver function, with the maximum entropy as the rule to determine auto-correlation and cross-correlation functions. The Toeplitz equation and Levinson algorithm are used to calculate the iterative formula of error-predicting filter, and receiver function is then estimated. During extrapolation, reflective coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside window increases the resolution of receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method to measure receiver function in time-domain.
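The Toeplitz/Levinson step mentioned above can be sketched directly. This is the textbook Levinson-Durbin recursion for the error-predicting (prediction-error) filter, not the authors' code; note the reflection coefficient, whose magnitude staying below 1 is what keeps the recursion stable, as the abstract remarks.

```python
def levinson(r, order):
    # Levinson-Durbin recursion: solve the Toeplitz normal equations built
    # from autocorrelations r[0..order] for the prediction-error filter a
    a = [1.0] + [0.0] * order
    err = r[0]
    for m in range(1, order + 1):
        acc = sum(a[j] * r[m - j] for j in range(m))
        k = -acc / err                      # reflection coefficient, |k| < 1
        a_prev = a[:]
        for j in range(1, m + 1):
            a[j] = a_prev[j] + k * a_prev[m - j]
        err *= (1.0 - k * k)                # prediction-error power update
    return a, err
```

For an AR(1) process with autocorrelation r[k] = 0.5^k, the recursion recovers the filter [1, -0.5, 0] with residual power 0.75.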
Damage Mechanism in Counter Pairs Caused by Bionic Non-smoothed Surface
Directory of Open Access Journals (Sweden)
ZHANG Zhan-hui
2016-08-01
Full Text Available Four biomimetic non-smoothed surface specimens with different shapes were prepared by laser processing. Tests were conducted on an MMU-5G wear and abrasion test machine to study the influence of non-smoothed surfaces on counter pairs. The results show that the mass loss of the friction pair matching with the non-smoothed units is much greater than that of the ones matching with the smooth specimens. Pairs matching with different non-smoothed units suffer differently. The non-smoothed surface protruding zone exerts micro-cutting on the counter pairs. The striation units cause the greatest mass loss of the pairs, almost double the damage of the grid units, which cause the least. The difference in pair damage is attributed to the different mechanisms of carrying the load in the process of wear. The damage can be alleviated effectively by changing the shapes of the units without increasing or decreasing the area ratio of the non-smoothed units.
Investigation of the Effect of Dimple Bionic Nonsmooth Surface on Tire Antihydroplaning
Directory of Open Access Journals (Sweden)
Haichao Zhou
2015-01-01
Full Text Available Inspired by the idea that bionic nonsmooth surfaces (BNSS) reduce fluid adhesion and resistance, the effect of a dimple bionic nonsmooth structure arranged in the tire circumferential grooves surface on antihydroplaning performance was investigated by using Computational Fluid Dynamics (CFD). The physical model of the object (model of dimple bionic nonsmooth surface distribution, hydroplaning model) and the SST k-ω turbulence model are established for numerical analysis of tire hydroplaning. By virtue of the orthogonal table L16(4^5), the parameters of the dimple bionic nonsmooth structure design compared to the smooth structure were analyzed, and the priority level of the experimental factors as well as the best combination within the scope of the experiment was obtained. The simulation results show that the dimple bionic nonsmooth structure can reduce water flow resistance by disturbing the eddy movement in boundary layers. Then, the optimal type of dimple bionic nonsmooth structure is arranged on the bottom of the tire circumferential grooves for hydroplaning performance analysis. The results show that the dimple bionic nonsmooth structure effectively decreases the tread hydrodynamic pressure when driving on a water film and increases the tire hydroplaning velocity, thus improving tire antihydroplaning performance.
Efficient Estimating Functions for Stochastic Differential Equations
DEFF Research Database (Denmark)
Jakobsen, Nina Munkholt
The overall topic of this thesis is approximate martingale estimating function-based estimation for solutions of stochastic differential equations, sampled at high frequency. Focus lies on the asymptotic properties of the estimators. The first part of the thesis deals with diffusions observed over
Thresholding projection estimators in functional linear models
Cardot, Hervé; Johannes, Jan
2010-01-01
We consider the problem of estimating the regression function in functional linear regression models by proposing a new type of projection estimators which combine dimension reduction and thresholding. The introduction of a threshold rule makes it possible to obtain consistency under broad assumptions as well as minimax rates of convergence under additional regularity hypotheses. We also consider the particular case of Sobolev spaces generated by the trigonometric basis, which permits to get easily mean squ...
Estimating Function Approaches for Spatial Point Processes
Deng, Chong
Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization from a stochastic process called spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer from a loss of information due to ignoring the correlation among pairs. For many types of correlated data other than spatial point processes, when likelihood-based approaches are not desirable, estimating functions have been widely used for model fitting. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on the asymptotic optimal estimating function theories, can be used to incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives to balance the trade-off between computation complexity and estimating efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation and estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators for the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data. Second, we further explore the quasi-likelihood approach on fitting
A variational approach to nonsmooth dynamics applications in unilateral mechanics and electronics
Adly, Samir
2017-01-01
This brief examines mathematical models in nonsmooth mechanics and nonregular electrical circuits, including evolution variational inequalities, complementarity systems, differential inclusions, second-order dynamics, Lur'e systems and Moreau's sweeping process. The field of nonsmooth dynamics is of great interest to mathematicians, mechanicians, automatic controllers and engineers. The present volume acknowledges this transversality and provides a multidisciplinary view as it outlines fundamental results in nonsmooth dynamics and explains how to use them to study various problems in engineering. In particular, the author explores the question of how to redefine the notion of dynamical systems in light of modern variational and nonsmooth analysis. With the aim of bridging between the communities of applied mathematicians, engineers and researchers in control theory and nonlinear systems, this brief outlines both relevant mathematical proofs and models in unilateral mechanics and electronics.
Tomar, S.K.
2002-01-01
It is well known that elliptic problems when posed on non-smooth domains, develop singularities. We examine such problems within the framework of spectral element methods and resolve the singularities with exponential accuracy.
Piecewise Geometric Estimation of a Survival Function.
1985-04-01
Langberg (1982). One of the by-products of the estimation process is an estimate of the failure rate function; here, another issue is raised. It is evident...envisaged as the infinite product probability space that may be constructed in the usual way from the sequence of probability spaces corresponding to the...received 6-MP (a mercaptopurine used in the treatment of leukemia). The ordered remission times in weeks are: 6, 6, 6, 6+, 7, 9+, 10, 10+, 11+, 13, 16
ESTIMATION OF FUNCTIONALS OF SPARSE COVARIANCE MATRICES.
Fan, Jianqing; Rigollet, Philippe; Wang, Weichen
High-dimensional statistical tests often ignore correlations to gain simplicity and stability, leading to null distributions that depend on functionals of correlation matrices such as their Frobenius norm and other ℓ_r norms. Motivated by the computation of critical values of such tests, we investigate the difficulty of estimating such functionals of sparse correlation matrices. Specifically, we show that simple plug-in procedures based on thresholded estimators of correlation matrices are sparsity-adaptive and minimax optimal over a large class of correlation matrices. Akin to previous results on functional estimation, the minimax rates exhibit an elbow phenomenon. Our results are further illustrated in simulated data as well as an empirical study of data arising in financial econometrics.
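A minimal sketch of the plug-in idea, assuming the functional of interest is the sum of squared off-diagonal correlations (a squared-Frobenius-type functional) and using hard thresholding of the sample correlation matrix; the names and the threshold choice are illustrative, not the paper's procedure:

```python
def thresholded_frobenius(R_hat, tau):
    # Plug-in estimator of sum_{i != j} rho_ij**2 computed from a
    # hard-thresholded sample correlation matrix: entries with
    # |R_hat[i][j]| < tau are treated as exact zeros (sparsity adaptation)
    p = len(R_hat)
    total = 0.0
    for i in range(p):
        for j in range(p):
            if i != j and abs(R_hat[i][j]) >= tau:
                total += R_hat[i][j] ** 2
    return total
```

In practice τ is calibrated to the sampling noise level of the correlation estimates, e.g. of order sqrt(log p / n).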
The Modified HZ Conjugate Gradient Algorithm for Large-Scale Nonsmooth Optimization.
Yuan, Gonglin; Sheng, Zhou; Liu, Wenjie
2016-01-01
In this paper, the Hager and Zhang (HZ) conjugate gradient (CG) method and the modified HZ (MHZ) CG method are presented for large-scale nonsmooth convex minimization. Under some mild conditions, convergence results for the proposed methods are established. Numerical results show that the presented methods are more efficient on large-scale nonsmooth problems; several test problems with dimensions of up to 100,000 variables are solved.
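For readers unfamiliar with the HZ update, here is a minimal sketch of the classical smooth HZ CG iteration on a quadratic test problem; the Armijo backtracking line search and the smooth objective are assumptions of this sketch, and the paper's nonsmooth modifications are not reproduced.

```python
import numpy as np

def hz_cg(f, grad, x, iters=200):
    """Hager-Zhang (HZ) conjugate gradient sketch on a smooth objective;
    the paper's nonsmooth variants work with subgradients and modified
    directions, which are not reproduced here."""
    x = np.asarray(x, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < 1e-8:
            break
        gd = np.dot(g, d)
        if gd >= 0:                   # safeguard: restart with steepest descent
            d = -g
            gd = np.dot(g, d)
        # Armijo backtracking line search (an assumption of this sketch)
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + 1e-4 * t * gd and t > 1e-16:
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        y = g_new - g
        dy = np.dot(d, y)
        if abs(dy) < 1e-12:
            beta = 0.0
        else:
            # HZ formula: beta = (y - 2 d ||y||^2 / (d'y))' g_new / (d'y)
            beta = np.dot(y - 2.0 * d * np.dot(y, y) / dy, g_new) / dy
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])   # SPD test matrix
b = np.array([1.0, 1.0])
fq = lambda x: 0.5 * x @ A @ x - b @ x
gq = lambda x: A @ x - b
x_star = hz_cg(fq, gq, np.zeros(2))      # exact minimizer is [0.2, 0.4]
```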
Dynamics and Control of Non-Smooth Systems with Applications to Supercavitating Vehicles
2011-01-01
ABSTRACT Title of dissertation: Dynamics and Control of Non-Smooth Systems with Applications to Supercavitating Vehicles Vincent Nguyen, Doctor of ... relates to the dynamics of non-smooth vehicle systems, and in particular, supercavitating vehicles. These high-speed underwater vehicles are ...
Estimating state-contingent production functions
DEFF Research Database (Denmark)
Rasmussen, Svend; Karantininis, Kostas
The paper reviews the empirical problem of estimating state-contingent production functions. The major problem is that states of nature may not be registered and/or that the number of observations per state is low. Monte Carlo simulation is used to generate an artificial, uncertain production environment based on Cobb-Douglas production functions with state-contingent parameters. The parameters are subsequently estimated from samples of different sizes using Generalized Least Squares and Generalized Maximum Entropy, and the results are compared. It is concluded that Maximum Entropy may ...
Cho, Yumi
2018-05-01
We study nonlinear elliptic problems with nonstandard growth and ellipticity related to an N-function. We establish global Calderón-Zygmund estimates of the weak solutions in the framework of Orlicz spaces over bounded non-smooth domains. Moreover, we prove a global regularity result for asymptotically regular problems which are getting close to the regular problems considered, when the gradient variable goes to infinity.
Efficient Estimating Functions for Stochastic Differential Equations
DEFF Research Database (Denmark)
Jakobsen, Nina Munkholt
The overall topic of this thesis is approximate martingale estimating function-based estimation for solutions of stochastic differential equations, sampled at high frequency. Focus lies on the asymptotic properties of the estimators. The first part of the thesis deals with diffusions observed over a fixed time interval. Rate optimal and efficient estimators are obtained for a one-dimensional diffusion parameter. Stable convergence in distribution is used to achieve a practically applicable Gaussian limit distribution for suitably normalised estimators. In a simulation example, the limit distributions ... multidimensional parameter. Conditions for rate optimality and efficiency of estimators of drift-jump and diffusion parameters are given in some special cases. These conditions are found to extend the pre-existing conditions applicable to continuous diffusions, and impose much stronger requirements on the estimating ...
A new fuzzy adaptive particle swarm optimization for non-smooth economic dispatch
Energy Technology Data Exchange (ETDEWEB)
Niknam, Taher; Mojarrad, Hassan Doagou; Nayeripour, Majid [Electrical and Electronic Engineering Department, Shiraz University of Technology, Shiraz (Iran)
2010-04-15
This paper proposes a novel method for solving non-convex economic dispatch (NED) problems: the Fuzzy Adaptive Modified Particle Swarm Optimization (FAMPSO). Practical ED problems have non-smooth cost functions with equality and inequality constraints when generator valve-point loading effects are taken into account. Modern heuristic optimization techniques have received much attention from researchers due to their ability to find near-global optimal solutions for ED problems. PSO is one of these heuristic algorithms, in which particles change position to get close to the best position and find the global minimum point. However, the classic PSO may converge to a local optimum, and its performance depends strongly on its internal parameters. To overcome these drawbacks, this paper proposes a new mutation to improve the global searching capability and prevent convergence to local minima. Also, a fuzzy system is used to tune parameters such as the inertia weight and learning factors. In order to evaluate the performance of the proposed algorithm, it is applied to systems consisting of 13 and 40 thermal units whose fuel cost functions account for the effect of valve-point loading. Simulation results demonstrate the superiority of the proposed algorithm over other optimization algorithms presented in the literature. (author)
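The non-smooth valve-point cost and a plain global-best PSO can be sketched as follows; the three-unit data, penalty weight, and PSO parameters are illustrative assumptions (not the paper's 13/40-unit benchmarks), and the fuzzy adaptation and mutation steps are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative 3-unit data (assumed for this sketch)
a = np.array([0.008, 0.009, 0.007])   # quadratic cost coefficients
b = np.array([7.0, 6.3, 6.8])         # linear cost coefficients
c = np.array([200.0, 180.0, 140.0])   # constant cost terms
e = np.array([50.0, 40.0, 30.0])      # valve-point magnitudes
fv = np.array([0.08, 0.09, 0.07])     # valve-point frequencies
p_min = np.array([50.0, 40.0, 30.0])
p_max = np.array([300.0, 250.0, 200.0])
demand = 450.0

def cost(P):
    """Non-smooth fuel cost with valve-point loading plus a linear
    penalty enforcing the power-balance constraint."""
    fuel = np.sum(c + b * P + a * P**2 + np.abs(e * np.sin(fv * (p_min - P))))
    return fuel + 1e4 * abs(np.sum(P) - demand)

# Plain global-best PSO (the paper's fuzzy tuning and mutation are omitted)
n_part, n_iter = 40, 300
X = rng.uniform(p_min, p_max, size=(n_part, 3))
V = np.zeros_like(X)
pbest = X.copy()
pbest_cost = np.array([cost(x) for x in X])
gbest = pbest[np.argmin(pbest_cost)].copy()
for _ in range(n_iter):
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    V = 0.7 * V + 1.5 * r1 * (pbest - X) + 1.5 * r2 * (gbest - X)
    X = np.clip(X + V, p_min, p_max)
    cst = np.array([cost(x) for x in X])
    better = cst < pbest_cost
    pbest[better] = X[better]
    pbest_cost[better] = cst[better]
    gbest = pbest[np.argmin(pbest_cost)].copy()

total = float(np.sum(gbest))          # dispatched power; should be near demand
```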
A Projection free method for Generalized Eigenvalue Problem with a nonsmooth Regularizer.
Hwang, Seong Jae; Collins, Maxwell D; Ravi, Sathya N; Ithapu, Vamsi K; Adluru, Nagesh; Johnson, Sterling C; Singh, Vikas
2015-12-01
Eigenvalue problems are ubiquitous in computer vision, covering a very broad spectrum of applications ranging from estimation problems in multi-view geometry to image segmentation. Few other linear algebra problems have a more mature set of numerical routines available and many computer vision libraries leverage such tools extensively. However, the ability to call the underlying solver only as a "black box" can often become restrictive. Many 'human in the loop' settings in vision frequently exploit supervision from an expert, to the extent that the user can be considered a subroutine in the overall system. In other cases, there is additional domain knowledge, side or even partial information that one may want to incorporate within the formulation. In general, regularizing a (generalized) eigenvalue problem with such side information remains difficult. Motivated by these needs, this paper presents an optimization scheme to solve generalized eigenvalue problems (GEP) involving a (nonsmooth) regularizer. We start from an alternative formulation of GEP where the feasibility set of the model involves the Stiefel manifold. The core of this paper presents an end to end stochastic optimization scheme for the resultant problem. We show how this general algorithm enables improved statistical analysis of brain imaging data where the regularizer is derived from other 'views' of the disease pathology, involving clinical measurements and other image-derived representations.
DEFF Research Database (Denmark)
True, Hans; Engsig-Karup, Allan Peter; Bigoni, Daniele
2014-01-01
... of the solutions across these boundaries. We compare the resulting solutions that are found with the three different strategies of handling the non-smoothnesses. Several integrators, both explicit and implicit ones, have been tested and their performances are evaluated and compared with respect to accuracy ... In the examples the dynamical problems are formulated as systems of ordinary differential-algebraic equations due to the geometric constraints. The non-smoothnesses have been neglected, smoothened or entered into the dynamical systems as switching boundaries with relations, which govern the continuation ...
International Nuclear Information System (INIS)
Emiel, G.
2008-01-01
This manuscript deals with large-scale non-smooth optimization, as typically arises when performing Lagrangian relaxation of difficult problems. This technique is commonly used to tackle mixed-integer linear programs or large-scale convex problems. For example, a classical approach when dealing with power generation planning problems in a stochastic environment is to perform a Lagrangian relaxation of the coupling demand constraints. In this approach, a master problem coordinates local subproblems, specific to each generation unit. The master problem deals with a separable non-smooth dual function which can be maximized with, for example, bundle algorithms. In chapter 2, we introduce basic tools of non-smooth analysis and some recent results regarding incremental or inexact instances of non-smooth algorithms. However, in some situations, the dual problem may still be very hard to solve. For instance, when the number of dualized constraints is very large (exponential in the dimension of the primal problem), explicit dualization may no longer be possible or the update of dual variables may fail. In order to reduce the dual dimension, different heuristics have been proposed. They involve a separation procedure to dynamically select a restricted set of constraints to be dualized along the iterations. This relax-and-cut type approach has shown its numerical efficiency in many combinatorial problems. In chapter 3, we show primal-dual convergence of such a strategy when using an adapted subgradient method for the dual step, under minimal assumptions on the separation procedure. Another limit of Lagrangian relaxation may appear when the dual function is separable into highly numerous or complex sub-functions. In such a situation, the computational burden of solving all local subproblems may dominate the whole iterative process. A natural strategy here would be to take full advantage of the separable dual structure, performing a dual iteration after having ...
Comparison of density estimators. [Estimation of probability density functions
Energy Technology Data Exchange (ETDEWEB)
Kao, S.; Monahan, J.F.
1977-09-01
Recent work in the field of probability density estimation has included the introduction of some new methods, such as the polynomial and spline methods and the nearest neighbor method, and the study of asymptotic properties in depth. This earlier work is summarized here. In addition, the computational complexity of the various algorithms is analyzed and some simulations are presented. The object is to compare the performance of the various methods in small samples and their sensitivity to changes in their parameters, and to attempt to discover at what point a sample is so small that density estimation is no longer worthwhile. (RWR)
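A minimal comparison in the spirit of this survey contrasts a histogram estimate with a Gaussian kernel estimate on simulated data; the bin count, Silverman bandwidth rule, and error metric are assumptions of this sketch.

```python
import numpy as np

def histogram_density(x, data, bins=30):
    """Histogram density estimate evaluated at points x."""
    counts, edges = np.histogram(data, bins=bins, density=True)
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    return counts[idx]

def kde(x, data, h=None):
    """Gaussian kernel density estimate with Silverman's bandwidth rule."""
    if h is None:
        h = 1.06 * data.std() * len(data) ** (-1 / 5)
    u = (x[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
data = rng.standard_normal(2000)
x = np.linspace(-3, 3, 61)
true = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)     # standard normal density
err_hist = float(np.mean((histogram_density(x, data) - true) ** 2))
err_kde = float(np.mean((kde(x, data) - true) ** 2))
```

At this sample size the smoother kernel estimate typically attains a smaller mean squared error than the histogram.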
A new honey bee mating optimization algorithm for non-smooth economic dispatch
International Nuclear Information System (INIS)
Niknam, Taher; Mojarrad, Hasan Doagou; Meymand, Hamed Zeinoddini; Firouzi, Bahman Bahmani
2011-01-01
The non-storage characteristic of electricity and increasing fuel costs worldwide call for operating power systems more economically. Economic dispatch (ED) is one of the most important optimization problems in power systems. ED has the objective of dividing the power demand among the online generators economically while satisfying various constraints; its aim is to get maximum usable power from minimum resources. To solve the static ED problem, the honey bee mating optimization (HBMO) algorithm can be used. The basic disadvantage of the original HBMO algorithm is that it may miss the optimum and provide a near-optimum solution within a limited runtime. To avoid this shortcoming, we propose a new method that improves the mating process of HBMO and combines the improved HBMO with a Chaotic Local Search (CLS), called Chaotic Improved Honey Bee Mating Optimization (CIHBMO). The proposed algorithm is used to solve ED problems taking into account nonlinear generator characteristics such as prohibited operating zones, multiple fuels and valve-point loading effects. The CIHBMO algorithm is tested on three test systems and compared with other methods in the literature. Results show that the proposed method is efficient and fast for ED problems with non-smooth and non-continuous fuel cost functions, and the optimal power dispatch obtained by the algorithm is superior to previously reported results. -- Research highlights: → Economic dispatch. → Reducing electrical energy loss. → Saving electrical energy. → Optimal operation.
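The chaotic local search component can be sketched with logistic maps driving perturbations of the incumbent solution; the map seeds, shrink rate, and test objective are assumptions of this sketch, and the coupling with HBMO mating is not reproduced.

```python
import numpy as np

def chaotic_local_search(f, x_best, lo, hi, n_steps=200, radius=0.1):
    """Chaotic local search sketch: logistic maps generate deterministic,
    non-repeating perturbations of the incumbent inside a shrinking
    neighbourhood.  Seeds and shrink rate are illustrative assumptions."""
    z = np.linspace(0.21, 0.77, np.size(x_best))   # per-dimension map seeds
    x, fx = np.asarray(x_best, dtype=float), f(x_best)
    for _ in range(n_steps):
        z = 4.0 * z * (1.0 - z)                    # chaotic logistic map
        cand = np.clip(x + radius * (hi - lo) * (2.0 * z - 1.0), lo, hi)
        fc = f(cand)
        if fc < fx:                                # greedy acceptance
            x, fx = cand, fc
        radius *= 0.99                             # shrink the neighbourhood
    return x, fx

f = lambda x: float(np.sum(np.asarray(x) ** 2))    # simple test objective
x0 = np.array([0.8, -0.6])
x_ls, fx_ls = chaotic_local_search(f, x0, -1.0, 1.0)
```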
Estimating functions for inhomogeneous Cox processes
DEFF Research Database (Denmark)
Waagepetersen, Rasmus
2006-01-01
Estimation methods are reviewed for inhomogeneous Cox processes with tractable first and second order properties. We illustrate the various suggestions by means of data examples.
Goswami, Deepjyoti
2011-09-01
In this article, we propose and analyze an alternate proof of a priori error estimates for semidiscrete Galerkin approximations to a general second order linear parabolic initial and boundary value problem with rough initial data. Our analysis is based on energy arguments without using parabolic duality. Further, it follows the spirit of the proof technique used for deriving optimal error estimates for finite element approximations to parabolic problems with smooth initial data and hence it unifies both theories: one for smooth initial data and the other for nonsmooth data. Moreover, the proposed technique is also extended to a semidiscrete mixed method for linear parabolic problems. In both cases, optimal L2-error estimates are derived when the initial data is in L2. A superconvergence phenomenon is also observed, which is then used to prove L∞-estimates for linear parabolic problems defined on two-dimensional spatial domains, again with rough initial data. Copyright © Taylor & Francis Group, LLC.
International Nuclear Information System (INIS)
Tong, Xin; Zhou, Hong; Liu, Min; Dai, Ming-jiang
2011-01-01
In order to enhance the thermal fatigue resistance of cast iron materials, samples with a biomimetic non-smooth surface were processed by a Neodymium:Yttrium Aluminum Garnet (Nd:YAG) laser. With a self-controlled thermal fatigue test method, the thermal fatigue resistance of smooth and non-smooth samples was investigated. The effects of striated laser tracks on thermal fatigue resistance were also studied. The results indicated that the biomimetic non-smooth surface is beneficial for improving the thermal fatigue resistance of cast iron samples. The striated non-smooth units formed by laser tracks that were perpendicular to thermal cracks had the best crack propagation resistance. The mechanisms behind these influences are discussed, and some schematic drawings are introduced to describe them.
A nonsmooth nonlinear conjugate gradient method for interactive contact force problems
DEFF Research Database (Denmark)
Silcowitz, Morten; Abel, Sarah Maria Niebe; Erleben, Kenny
2010-01-01
... of a nonlinear complementarity problem (NCP), which can be solved using an iterative splitting method, such as the projected Gauss–Seidel (PGS) method. We present a novel method for solving the NCP problem by applying a Fletcher–Reeves type nonlinear nonsmooth conjugate gradient (NNCG) type method. We analyze and present experimental convergence behavior and properties of the new method. Our results show that the NNCG method has at least the same convergence rate as PGS, and in many cases better.
International Nuclear Information System (INIS)
Wen Zhen; Sun Jitao
2009-01-01
In this paper, we investigate the existence and uniqueness of the equilibrium point for delayed Cohen-Grossberg bidirectional associative memory (BAM) neural networks with impulses, based on a nonsmooth analysis method. We also give criteria for the global exponential stability of the unique equilibrium point for the delayed BAM neural networks with impulses using the Lyapunov method. The new sufficient condition generalizes and improves the previously known results. Finally, we present examples to illustrate that our results are effective.
A logistic regression estimating function for spatial Gibbs point processes
DEFF Research Database (Denmark)
Baddeley, Adrian; Coeurjolly, Jean-François; Rubak, Ege
We propose a computationally efficient logistic regression estimating function for spatial Gibbs point processes. The sample points for the logistic regression consist of the observed point pattern together with a random pattern of dummy points. The estimating function is closely related to the p...
Slope Estimation in Noisy Piecewise Linear Functions.
Ingle, Atul; Bucklew, James; Sethares, William; Varghese, Tomy
2015-03-01
This paper discusses the development of a slope estimation algorithm called MAPSlope for piecewise linear data that is corrupted by Gaussian noise. The number and locations of slope change points (also known as breakpoints) are assumed to be unknown a priori, though it is assumed that the possible range of slope values lies within known bounds. A stochastic hidden Markov model that is general enough to encompass real world sources of piecewise linear data is used to model the transitions between slope values, and the problem of slope estimation is addressed using a Bayesian maximum a posteriori approach. The set of possible slope values is discretized, enabling the design of a dynamic programming algorithm for posterior density maximization. Numerical simulations are used to justify the choice of a reasonable number of quantization levels and also to analyze the mean squared error performance of the proposed algorithm. An alternating maximization algorithm is proposed for estimation of unknown model parameters and a convergence result for the method is provided. Finally, results using data from political science, finance and medical imaging applications are presented to demonstrate the practical utility of this procedure.
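The MAP-via-dynamic-programming idea can be sketched as a Viterbi pass over discretized slope states; the flat change penalty below replaces the paper's HMM transition model and is an assumption of this sketch.

```python
import numpy as np

def map_slope(y, dx, slopes, sigma=0.1, jump_penalty=5.0):
    """Viterbi-style MAP estimate of a piecewise-constant slope sequence
    from noisy samples of a piecewise linear function.  A flat slope-change
    penalty stands in for the paper's HMM transition probabilities."""
    n, L = len(y) - 1, len(slopes)
    d = np.diff(y)                                   # noisy increments
    emis = (d[:, None] - slopes[None, :] * dx) ** 2 / (2 * sigma**2)
    cost = emis[0].copy()
    back = np.zeros((n, L), dtype=int)
    for i in range(1, n):
        stay = cost                                  # keep the same slope
        switch = cost.min() + jump_penalty           # change to best previous
        back[i] = np.where(stay <= switch, np.arange(L), cost.argmin())
        cost = np.minimum(stay, switch) + emis[i]
    path = np.empty(n, dtype=int)                    # backtrack the MAP path
    path[-1] = cost.argmin()
    for i in range(n - 1, 0, -1):
        path[i - 1] = back[i, path[i]]
    return slopes[path]

rng = np.random.default_rng(0)
x = np.arange(0, 10, 0.1)
true_slope = np.where(x < 5, 1.0, -2.0)              # one breakpoint at x = 5
y = np.concatenate(([0.0], np.cumsum(true_slope[:-1] * 0.1)))
y += rng.normal(0, 0.05, size=y.shape)
est = map_slope(y, 0.1, np.linspace(-3, 3, 61), sigma=0.05)
seg1 = float(est[:45].mean())                        # should be near 1.0
seg2 = float(est[60:].mean())                        # should be near -2.0
```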
Container Surface Evaluation by Function Estimation
Energy Technology Data Exchange (ETDEWEB)
Wendelberger, James G. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-08-03
Container images are analyzed for specific surface features such as pits, cracks, and corrosion. The detection of these features is confounded by complicating features, including shape/curvature, welds, edges, scratches, and foreign objects, among others. A method is provided to discriminate between the various features. The method consists of estimating the image background, determining a residual image, and post-processing to determine the features present. The methodology is not finalized but demonstrates the feasibility of a method to determine the kind and size of the features present.
Using subjective percentiles and test data for estimating fragility functions
International Nuclear Information System (INIS)
George, L.L.; Mensing, R.W.
1981-01-01
Fragility functions are cumulative distribution functions (cdfs) of strengths at failure. They are needed for reliability analyses of systems such as power generation and transmission systems. Subjective opinions supplement sparse test data for estimating fragility functions. Often the opinions are opinions on the percentiles of the fragility function. Subjective percentiles are likely to be less biased than opinions on parameters of cdfs. Solutions to several problems in the estimation of fragility functions are found for subjective percentiles and test data. How subjective percentiles should be used to estimate subjective fragility functions, how subjective percentiles should be combined with test data, how fragility functions for several failure modes should be combined into a composite fragility function, and how inherent randomness and uncertainty due to lack of knowledge should be represented are considered. Subjective percentiles are treated as independent estimates of percentiles. The following are derived: least-squares parameter estimators for normal and lognormal cdfs, based on subjective percentiles (the method is applicable to any invertible cdf); a composite fragility function for combining several failure modes; estimators of variation within and between groups of experts for nonidentically distributed subjective percentiles; weighted least-squares estimators when subjective percentiles have higher variation at higher percents; and weighted least-squares and Bayes parameter estimators based on combining subjective percentiles and test data. 4 figures, 2 tables
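For a lognormal fragility function, the least-squares fit to subjective percentiles reduces to a linear regression, since ln s_p = mu + sigma * Phi^{-1}(p); the sketch below assumes equal weighting of percentiles and uses hypothetical expert values, not data from the report.

```python
import numpy as np
from statistics import NormalDist

def fit_lognormal_from_percentiles(percents, strengths):
    """Least-squares fit of a lognormal fragility cdf to subjective
    percentiles: ln s_p is linear in the standard normal quantile, so
    ordinary least squares recovers (mu, sigma).  Equal weighting of the
    percentiles is an assumption of this sketch."""
    z = np.array([NormalDist().inv_cdf(p) for p in percents])
    ln_s = np.log(strengths)
    sigma, mu = np.polyfit(z, ln_s, 1)     # slope = sigma, intercept = mu
    return mu, sigma

# Hypothetical expert percentiles for a component strength (units: g)
percents = [0.10, 0.25, 0.50, 0.75, 0.90]
strengths = [0.51, 0.63, 0.80, 1.01, 1.25]   # near lognormal(mu=ln 0.8, sigma=0.35)
mu, sigma = fit_lognormal_from_percentiles(percents, strengths)
median = float(np.exp(mu))                   # lognormal median = exp(mu)
```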
Some error estimates for the lumped mass finite element method for a parabolic problem
Chatzipantelidis, P.; Lazarov, R. D.; Thomée, V.
2012-01-01
... for the standard Galerkin method carry over to the lumped mass method, whereas nonsmooth initial data estimates require special assumptions on the triangulation. We also discuss the application to time discretization by the backward Euler and Crank-Nicolson methods.
Bias-corrected estimation of stable tail dependence function
DEFF Research Database (Denmark)
Beirlant, Jan; Escobar-Bach, Mikael; Goegebeur, Yuri
2016-01-01
We consider the estimation of the stable tail dependence function. We propose a bias-corrected estimator and we establish its asymptotic behaviour under suitable assumptions. The finite sample performance of the proposed estimator is evaluated by means of an extensive simulation study where...
On estimation of the intensity function of a point process
Lieshout, van M.N.M.
2010-01-01
Estimation of the intensity function of spatial point processes is a fundamental problem. In this paper, we interpret the Delaunay tessellation field estimator recently introduced by Schaap and Van de Weygaert as an adaptive kernel estimator and give explicit expressions for the mean and ...
On a family of Bessel type functions: Estimations, series, overconvergence
Paneva-Konovska, Jordanka
2017-12-01
A family of Bessel-Maitland functions is considered in this paper and some useful estimates are obtained for them. Series defined by means of these functions are considered and their behaviour on the boundaries of the convergence domains is discussed. Using the obtained estimates, necessary and sufficient conditions for the overconvergence of the series, as well as a Hadamard-type theorem, are proposed.
Malware Function Estimation Using API in Initial Behavior
KAWAGUCHI, Naoto; OMOTE, Kazumasa
2017-01-01
Malware proliferation has become a serious threat to the Internet in recent years. Most current malware are subspecies of existing malware that have been automatically generated by illegal tools. To conduct efficient analysis of malware, estimating their functions in advance is effective for prioritizing which malware to analyze. However, estimating malware functions has been difficult due to the increasing sophistication of malware. Indeed, previous research does not estimate the ...
Rached, Nadhir B.
2013-12-01
The Monte Carlo forward Euler method with uniform time stepping is the standard technique to compute an approximation of the expected payoff of a solution of an Itô SDE. For a given accuracy requirement TOL, the complexity of this technique for well-behaved problems, that is, the amount of computational work to solve the problem, is O(TOL^-3). A new hybrid adaptive Monte Carlo forward Euler algorithm for SDEs with non-smooth coefficients and low-regularity observables is developed in this thesis. This adaptive method is based on the derivation of a new error expansion with computable leading-order terms. The basic idea of the new expansion is the use of a mixture of prior information to determine the weight functions and posterior information to compute the local error. In a number of numerical examples the superior efficiency of the hybrid adaptive algorithm over the standard uniform time stepping technique is verified. When a non-smooth binary payoff with either GBM or drift-singularity type SDEs is considered, the new adaptive method achieves the same complexity as the uniform discretization does for smooth problems. Moreover, the newly developed algorithm is extended to the MLMC forward Euler setting, which reduces the complexity from O(TOL^-3) to O(TOL^-2 (log TOL)^2). For the binary option case with the same type of Itô SDEs, the hybrid adaptive MLMC forward Euler recovers the standard multilevel computational cost O(TOL^-2 (log TOL)^2). When considering a higher-order Milstein scheme, a similar complexity result was obtained by Giles using uniform time stepping for one-dimensional SDEs. The difficulty of extending Giles' Milstein MLMC method to the multidimensional case is an argument for the flexibility of our newly constructed adaptive MLMC forward Euler method, which can be easily adapted to this setting. Similarly, the expected complexity O(TOL^-2 (log TOL)^2) is reached for the multidimensional case and verified numerically.
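The MLMC forward Euler telescoping sum (with uniform, non-adaptive steps) can be sketched for E[S_T] of a geometric Brownian motion; the level counts, path numbers, and model parameters are illustrative assumptions, and the thesis' adaptive refinement is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_gbm_pair(n_paths, n_steps, T=1.0, s0=1.0, mu=0.05, sig=0.2):
    """Coupled coarse/fine Euler paths of GBM dS = mu S dt + sig S dW.
    The coarse path reuses sums of the fine Brownian increments, the
    standard MLMC coupling."""
    dt = T / n_steps
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    s_f = np.full(n_paths, s0)
    for k in range(n_steps):
        s_f = s_f * (1 + mu * dt + sig * dW[:, k])
    s_c = np.full(n_paths, s0)
    dWc = dW[:, ::2] + dW[:, 1::2]           # coarse increments
    for k in range(n_steps // 2):
        s_c = s_c * (1 + mu * 2 * dt + sig * dWc[:, k])
    return s_f, s_c

# MLMC estimator of E[S_T]; exact value is s0 * exp(mu * T) ~= 1.0513
levels, n0 = 5, 20000
fine0, _ = euler_gbm_pair(n0, 2)             # base level: 2 time steps
est = float(fine0.mean())
for l in range(1, levels):
    f, c = euler_gbm_pair(n0 // 2**l, 2**(l + 1))
    est += float((f - c).mean())             # level-l correction
```

The sum telescopes to the finest-level expectation while most paths are simulated on the cheap coarse levels.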
Probability Density Estimation Using Neural Networks in Monte Carlo Calculations
International Nuclear Information System (INIS)
Shim, Hyung Jin; Cho, Jin Young; Song, Jae Seung; Kim, Chang Hyo
2008-01-01
Monte Carlo neutronics analysis requires the capability to estimate tally distributions such as an axial power distribution or a flux gradient in a fuel rod. This problem can be regarded as estimating a probability density function from an observation set. We apply a neural network based density estimation method to the observation and sampling weight sets produced by Monte Carlo calculations. The neural network method is compared with the histogram and the functional expansion tally methods for estimating a non-smooth density, a fission source distribution, and the gradient of the absorption rate in a burnable absorber rod. The application results show that the neural network method can approximate a tally distribution quite well. (authors)
Coefficient Estimate Problem for a New Subclass of Biunivalent Functions
N. Magesh; T. Rosy; S. Varma
2013-01-01
We introduce a unified subclass of the function class Σ of biunivalent functions defined in the open unit disc. Furthermore, we find estimates on the coefficients |a2| and |a3| for functions in this subclass. In addition, many relevant connections with known or new results are pointed out.
Variance computations for functionals of absolute risk estimates.
Pfeiffer, R M; Petracci, E
2011-07-01
We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates.
Unstable volatility functions: the break preserving local linear estimator
DEFF Research Database (Denmark)
Casas, Isabel; Gijbels, Irene
The objective of this paper is to introduce the break preserving local linear (BPLL) estimator for the estimation of unstable volatility functions. Breaks in the structure of the conditional mean and/or the volatility functions are common in finance. Markov switching models (Hamilton, 1989) and threshold models (Lin and Terasvirta, 1994) are amongst the most popular models to describe the behaviour of data with structural breaks. The local linear (LL) estimator is not consistent at points where the volatility function has a break, and it may even report negative values for finite samples ...
Estimating Functions with Prior Knowledge, (EFPK) for diffusions
DEFF Research Database (Denmark)
Nolsøe, Kim; Kessler, Mathieu; Madsen, Henrik
2003-01-01
In this paper a method is formulated in an estimating function setting for parameter estimation, which allows the use of prior information. The main idea is to use prior knowledge of the parameters, either specified as moment restrictions or as a distribution, and use it in the construction of an estimating function. It may be useful when a full Bayesian analysis is difficult to carry out for computational reasons. This is almost always the case for diffusions, which are the focus of this paper, though the method applies in other settings.
Invisibility cloaking via non-smooth transformation optics and ray tracing
International Nuclear Information System (INIS)
Crosskey, Miles M.; Nixon, Andrew T.; Schick, Leland M.; Kovacic, Gregor
2011-01-01
We present examples of theoretically predicted invisibility cloaks with shapes other than spheres and cylinders, including cones and ellipsoids, as well as shapes spliced from parts of these simpler shapes. In addition, we present an example explicitly displaying the non-uniqueness of invisibility cloaks of the same shape. We depict rays propagating through these example cloaks using ray tracing for geometric optics. - Highlights: → Theoretically predicted conical and ellipsoidal invisibility cloaks. → Non-smooth cloaks spliced from parts of simpler shapes. → Example displaying non-uniqueness of invisibility cloaks of the same shape. → Rays propagating through example cloaks depicted using geometric optics.
Generalized Pattern Search methods for a class of nonsmooth optimization problems with structure
Bogani, C.; Gasparo, M. G.; Papini, A.
2009-07-01
We propose a Generalized Pattern Search (GPS) method to solve a class of nonsmooth minimization problems, where the set of nondifferentiability is included in the union of known hyperplanes and, therefore, is highly structured. Both unconstrained and linearly constrained problems are considered. At each iteration the set of poll directions is enforced to conform to the geometry of both the nondifferentiability set and the boundary of the feasible region, near the current iterate. This is the key issue to guarantee the convergence of certain subsequences of iterates to points which satisfy first-order optimality conditions. Numerical experiments on some classical problems validate the method.
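A stripped-down pattern search (a poll step over the coordinate directions with step halving on failure) on a nonsmooth objective might look as follows; the conforming poll directions that are the paper's key contribution are omitted from this sketch.

```python
import numpy as np

def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=10000):
    """Basic coordinate pattern search: poll the 2n coordinate directions,
    accept any improvement, halve the step on failure.  The GPS variant in
    the paper additionally conforms the poll directions to the known
    nondifferentiability hyperplanes, which is not reproduced here."""
    x, fx = np.asarray(x0, dtype=float), f(x0)
    n = x.size
    dirs = np.vstack([np.eye(n), -np.eye(n)])
    for _ in range(max_iter):
        improved = False
        for d in dirs:
            cand = x + step * d
            fc = f(cand)
            if fc < fx:
                x, fx, improved = cand, fc, True
                break
        if not improved:
            step *= 0.5                  # refine the mesh
            if step < tol:
                break
    return x, fx

# nonsmooth test objective with a kink along both axes
f = lambda x: abs(x[0]) + 2.0 * abs(x[1])
x_min, f_min = pattern_search(f, [1.3, -0.7])
```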
Compact solitary waves in linearly elastic chains with non-smooth on-site potential
Energy Technology Data Exchange (ETDEWEB)
Gaeta, Giuseppe [Dipartimento di Matematica, Università di Milano, Via Saldini 50, 20133 Milan (Italy); Gramchev, Todor [Dipartimento di Matematica e Informatica, Università di Cagliari, Via Ospedale 72, 09124 Cagliari (Italy); Walcher, Sebastian [Lehrstuhl A Mathematik, RWTH Aachen, 52056 Aachen (Germany)
2007-04-27
It was recently observed by Saccomandi and Sgura that one-dimensional chains with nonlinear elastic interaction and regular on-site potential can support compact solitary waves, i.e. travelling solitary waves with strictly compact support. In this paper, we show that the same applies to chains with linear elastic interaction and an on-site potential which is continuous but non-smooth at minima. Some different features arise; in particular, the speed of compact solitary waves is not uniquely fixed by the equation. We also discuss several generalizations of our findings.
Extension Theory and Krein-type Resolvent Formulas for Nonsmooth Boundary Value Problems
DEFF Research Database (Denmark)
Abels, Helmut; Grubb, Gerd; Wood, Ian Geoffrey
2014-01-01
The theory of selfadjoint extensions of symmetric operators, and more generally the theory of extensions of dual pairs, was implemented some years ago for boundary value problems for elliptic operators on smooth bounded domains. Recently, the questions have been taken up again for nonsmooth domains. ... In the present work we show that pseudodifferential methods can be used to obtain a full characterization, including Kreĭn resolvent formulas, of the realizations of nonselfadjoint second-order operators on...
Lipschitz estimates for convex functions with respect to vector fields
Directory of Open Access Journals (Sweden)
Valentino Magnani
2012-12-01
Full Text Available We present Lipschitz continuity estimates for a class of convex functions with respect to Hörmander vector fields. These results have been recently obtained in collaboration with M. Scienza, [22].
Unbiased estimators for spatial distribution functions of classical fluids
Adib, Artur B.; Jarzynski, Christopher
2005-01-01
We use a statistical-mechanical identity closely related to the familiar virial theorem to derive unbiased estimators for spatial distribution functions of classical fluids. In particular, we obtain estimators for both the fluid density ρ(r) in the vicinity of a fixed solute and the pair correlation function g(r) of a homogeneous classical fluid. We illustrate the utility of our estimators with numerical examples, which reveal advantages over traditional histogram-based methods of computing such distributions.
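For reference, the traditional histogram estimator of g(r) that the paper's unbiased estimators are compared against can be sketched as follows for a periodic cubic box. This is the baseline method, not the virial-identity estimator from the paper; the particle configuration and box parameters are invented.

```python
import numpy as np

def pair_correlation_histogram(positions, box, r_max, n_bins=50):
    """Traditional histogram estimator of the pair correlation g(r)
    for particles in a periodic cubic box (illustrative baseline)."""
    n, dim = positions.shape
    rho = n / box**dim
    edges = np.linspace(0.0, r_max, n_bins + 1)
    counts = np.zeros(n_bins)
    for i in range(n - 1):
        d = positions[i + 1:] - positions[i]
        d -= box * np.round(d / box)          # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        counts += np.histogram(r, bins=edges)[0]
    # normalize by the ideal-gas pair count expected in each spherical shell
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    ideal = rho * shell_vol * n / 2.0
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers, counts / ideal

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 10.0, size=(400, 3))   # ideal gas, so g(r) should be ~1
r, g = pair_correlation_histogram(pos, box=10.0, r_max=4.0)
```

The binning step is exactly where the histogram method loses accuracy (bin-width bias, large variance in thin shells), which is what the derivative-based unbiased estimators avoid.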
An improved method for estimating the frequency correlation function
Chelli, Ali; Pätzold, Matthias
2012-01-01
For time-invariant frequency-selective channels, the transfer function is a superposition of waves having different propagation delays and path gains. In order to estimate the frequency correlation function (FCF) of such channels, the frequency averaging technique can be utilized. The obtained FCF can be expressed as a sum of auto-terms (ATs) and cross-terms (CTs). The ATs are caused by the autocorrelation of individual path components. The CTs are due to the cross-correlation of different path components. These CTs have no physical meaning and lead to an estimation error. We propose a new estimation method aiming to improve the estimation accuracy of the FCF of a band-limited transfer function. The basic idea behind the proposed method is to introduce a kernel function that reduces the CT effect while preserving the ATs. In this way, we can improve the estimation of the FCF. The performance of the proposed method and the frequency averaging technique is analyzed using a synthetically generated transfer function. We show that the proposed method is more accurate than the frequency averaging technique. The accurate estimation of the FCF is crucial for system design. In fact, we can determine the coherence bandwidth from the FCF. Exact knowledge of the coherence bandwidth is beneficial in both the design and the optimization of frequency interleaving and pilot arrangement schemes. © 2012 IEEE.
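The frequency averaging technique itself is simple to sketch: correlate the sampled transfer function with a frequency-shifted copy of itself and average over frequency. The snippet below applies it to a synthetic two-path channel; the path gains and delays are invented, and the paper's kernel-based CT suppression is not reproduced here.

```python
import numpy as np

def fcf_frequency_averaging(H, max_lag):
    """Estimate the frequency correlation function of a sampled transfer
    function H(f) by averaging over frequency (the baseline technique).
    Returns FCF values for frequency lags 0..max_lag (in grid steps)."""
    H = np.asarray(H)
    fcf = np.empty(max_lag + 1, dtype=complex)
    for lag in range(max_lag + 1):
        fcf[lag] = np.mean(H[:len(H) - lag].conj() * H[lag:])
    return fcf

# Synthetic two-path channel: H(f) = a1*exp(-j2*pi*f*tau1) + a2*exp(-j2*pi*f*tau2)
f = np.arange(0.0, 1.0, 1e-3)            # normalized frequency grid
H = 1.0 * np.exp(-2j * np.pi * f * 2.0) + 0.5 * np.exp(-2j * np.pi * f * 5.0)
fcf = fcf_frequency_averaging(H, max_lag=100)
```

At lag 0 the estimate equals the average power of the channel (here 1.0² + 0.5² = 1.25, since the cross-term averages out over full oscillation periods); at larger lags the AT contributions decay according to the delay spread, which is what the coherence bandwidth is read off from.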
Bayesian error estimation in density-functional theory
DEFF Research Database (Denmark)
Mortensen, Jens Jørgen; Kaasbjerg, Kristen; Frederiksen, Søren Lund
2005-01-01
We present a practical scheme for performing error estimates for density-functional theory calculations. The approach, which is based on ideas from Bayesian statistics, involves creating an ensemble of exchange-correlation functionals by comparing with an experimental database of binding energies...
On approximation and energy estimates for delta 6-convex functions.
Saleem, Muhammad Shoaib; Pečarić, Josip; Rehman, Nasir; Khan, Muhammad Wahab; Zahoor, Muhammad Sajid
2018-01-01
The smooth approximation and weighted energy estimates for delta 6-convex functions are derived in this research. Moreover, we conclude that if 6-convex functions are closed in uniform norm, then their third derivatives are closed in weighted L²-norm.
Effect of Nonsmooth Nose Surface of the Projectile on Penetration Using DEM Simulation
Directory of Open Access Journals (Sweden)
Jing Han
2017-01-01
Full Text Available The nonsmooth body surface of reptiles in nature plays an important role in reducing resistance and friction when they live in a soil environment. To assess whether this is feasible for improving the performance of a penetrating projectile, we investigated the influence of a convex texture, one type of nonsmooth surface, on the nose of the projectile. A numerical simulation study of the projectile against a concrete target was developed based on the discrete element method (DEM). The results show that the convex nose surface of the projectile greatly reduces the penetration resistance, which is also validated by experiments. Compared to the traditional smooth nose structure, the main source of the difference is the local contact normal pressure, which increases dramatically due to the abrupt change of curvature caused by the convex texture under the same conditions. Accordingly, the broken particles of the concrete target obtain more kinetic energy and their average radial flow velocities increase drastically, which favors decreasing the interface friction and the compaction density of the concrete target around the nose of the projectile.
The selection pressures induced non-smooth infectious disease model and bifurcation analysis
International Nuclear Information System (INIS)
Qin, Wenjie; Tang, Sanyi
2014-01-01
Highlights: • A non-smooth infectious disease model to describe selection pressure is developed. • The effect of selection pressure on infectious disease transmission is addressed. • The key factors which are related to the threshold value are determined. • The stabilities and bifurcations of the model have been revealed in more detail. • Strategies for the prevention of emerging infectious disease are proposed. - Abstract: Mathematical models can assist in the design of strategies to control emerging infectious disease. This paper deduces a non-smooth infectious disease model induced by selection pressures. Analysis of this model reveals rich dynamics, including local and global stability of equilibria and local sliding bifurcations. Model solutions ultimately stabilize at either one real equilibrium or the pseudo-equilibrium on the switching surface of the present model, depending on the threshold value determined by some related parameters. Our main results show that reducing the threshold value to an appropriate level could contribute to the efficacy of prevention and treatment of emerging infectious disease, which indicates that selection pressures can be beneficial to preventing emerging infectious disease under medical resource limitation.
Investigation of MLE in nonparametric estimation methods of reliability function
International Nuclear Information System (INIS)
Ahn, Kwang Won; Kim, Yoon Ik; Chung, Chang Hyun; Kim, Kil Yoo
2001-01-01
There have been many attempts to estimate a reliability function. In the ESReDA 20th seminar, a new nonparametric method was proposed. The major point of that paper is how to use censored data efficiently. Generally there are three kinds of approach to estimating a reliability function in a nonparametric way, i.e., the Reduced Sample Method, the Actuarial Method and the Product-Limit (PL) Method. The above three methods have some limits. So we suggest an advanced method that reflects censored information more efficiently. In many instances there will be a unique maximum likelihood estimator (MLE) of an unknown parameter, and often it may be obtained by the process of differentiation. It is well known that the three methods generally used to estimate a reliability function in a nonparametric way have maximum likelihood estimators that exist uniquely. So, the MLE of the new method is derived in this study. The procedure to calculate the MLE is similar to that of the PL-estimator. The difference between the two is that in the new method the mass (or weight) of each observation influences the others, whereas in the PL-estimator it does not.
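The Product-Limit (Kaplan-Meier) baseline that the new method is compared against can be sketched in a few lines. The tiny data set below is invented for illustration; the paper's modified weighting of censored observations is not implemented here.

```python
import numpy as np

def product_limit(times, censored):
    """Product-Limit (Kaplan-Meier) estimate of the reliability function
    R(t), the classical nonparametric baseline (illustrative sketch).

    times    : observed times (failure or censoring)
    censored : 1 if the observation is right-censored, else 0
    Returns the distinct failure times and R(t) just after each of them.
    """
    order = np.argsort(times)
    t, c = np.asarray(times)[order], np.asarray(censored)[order]
    n = len(t)
    R, out_t, out_R = 1.0, [], []
    for i in range(n):
        at_risk = n - i                 # units still under observation
        if c[i] == 0:                   # failure: multiply survival factor
            R *= (at_risk - 1) / at_risk
            out_t.append(t[i])
            out_R.append(R)
        # censored observations only shrink the risk set
    return np.array(out_t), np.array(out_R)

# Worked example: failures at t = 2, 5, 7; one censoring at t = 3.
t, R = product_limit([2, 3, 5, 7], [0, 1, 0, 0])
```

Here the censored unit at t = 3 contributes to the risk set before it drops out, so R steps through 3/4, then 3/4 · 1/2 = 0.375, then 0; the proposed method differs precisely in how such censored mass is redistributed.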
Unbounded critical points for a class of lower semicontinuous functionals
Pellacci, Benedetta; Squassina, Marco
2003-01-01
In this paper we prove existence and multiplicity results of unbounded critical points for a general class of weakly lower semicontinuous functionals. We will apply a suitable nonsmooth critical point theory.
Development on electromagnetic impedance function modeling and its estimation
Energy Technology Data Exchange (ETDEWEB)
Sutarno, D., E-mail: Sutarno@fi.itb.ac.id [Earth Physics and Complex System Division Faculty of Mathematics and Natural Sciences Institut Teknologi Bandung (Indonesia)
2015-09-30
Today, electromagnetic methods such as magnetotellurics (MT) and controlled-source audio MT (CSAMT) are used in a broad variety of applications. Their usefulness in poor seismic areas and their negligible environmental impact are integral parts of effective exploration at minimum cost. As exploration was forced into more difficult areas, the importance of MT and CSAMT, in conjunction with other techniques, has tended to grow continuously. However, important and difficult problems obviously remain to be solved concerning our ability to collect, process and interpret MT as well as CSAMT data in complex 3D structural environments. This talk aims at reviewing and discussing recent developments in MT and CSAMT impedance function modeling, as well as some improvements in estimation procedures for the corresponding impedance functions. In MT impedance modeling, research efforts focus on developing numerical methods for computing the impedance functions of three-dimensional (3-D) earth resistivity models. For that reason, 3-D finite element numerical modeling of the impedances is developed based on the edge element method. In the CSAMT case, the efforts were focused on accommodating the non-plane-wave problem in the corresponding impedance functions. Concerning estimation of MT and CSAMT impedance functions, research focused on improving the quality of the estimates. To that end, a non-linear regression approach based on robust M-estimators and the Hilbert transform operating on the causal transfer functions was used to deal with outliers (abnormal data) which are frequently superimposed on normal ambient MT and CSAMT noise fields. As validated, the proposed MT impedance modeling method gives acceptable results for standard three-dimensional resistivity models, whilst the full-solution modeling that accommodates the non-plane-wave effect for CSAMT impedances is applied for all measurement zones, including near-, transition
On Improving Density Estimators which are not Bona Fide Functions
Gajek, Leslaw
1986-01-01
In order to improve the rate of decrease of the IMSE for nonparametric kernel density estimators with nonrandom bandwidth beyond $O(n^{-4/5})$, all current methods must relax the constraint that the density estimate be a bona fide function, that is, be nonnegative and integrate to one. In this paper we show how to achieve similar improvement without relaxing any of these constraints. The method can also be applied for orthogonal series, adaptive orthogonal series, spline, jackknife, and other ...
Optimal Bandwidth Selection for Kernel Density Functionals Estimation
Directory of Open Access Journals (Sweden)
Su Chen
2015-01-01
Full Text Available The choice of bandwidth is crucial to kernel density estimation (KDE) and kernel based regression. Various bandwidth selection methods for KDE and local least square regression have been developed in the past decade. It has been known that scale and location parameters are proportional to density functionals ∫γ(x)f²(x)dx with an appropriate choice of γ(x), and furthermore equality-of-scale and location tests can be transformed to comparisons of the density functionals among populations. ∫γ(x)f²(x)dx can be estimated nonparametrically via kernel density functionals estimation (KDFE). However, the optimal bandwidth selection for KDFE of ∫γ(x)f²(x)dx has not been examined. We propose a method to select the optimal bandwidth for the KDFE. The idea underlying this method is to search for the optimal bandwidth by minimizing the mean square error (MSE) of the KDFE. Two main practical bandwidth selection techniques for the KDFE of ∫γ(x)f²(x)dx are provided: normal scale bandwidth selection (namely, "Rule of Thumb") and direct plug-in bandwidth selection. Simulation studies show that our proposed bandwidth selection methods are superior to existing density estimation bandwidth selection methods in estimating density functionals.
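The KDFE idea for the simplest case γ(x) = 1, i.e. estimating θ = ∫f²(x)dx, can be sketched with a leave-one-out Gaussian-kernel U-statistic. For illustration the bandwidth below reuses the density-estimation normal-scale rule; the paper's point is precisely that an MSE-optimal bandwidth for the functional differs from this, so treat both the rule and the sample as invented stand-ins.

```python
import numpy as np

def kdfe_int_f_squared(x, h):
    """Leave-one-out kernel estimator of the density functional
    theta = \int f(x)^2 dx with a Gaussian kernel (KDFE sketch):
    theta_hat = sum_{i != j} K_h(x_i - x_j) / (n (n-1))."""
    x = np.asarray(x)
    n = len(x)
    d = (x[:, None] - x[None, :]) / h
    K = np.exp(-0.5 * d**2) / np.sqrt(2 * np.pi)
    np.fill_diagonal(K, 0.0)            # leave-one-out: drop i == j terms
    return K.sum() / (n * (n - 1) * h)

def rule_of_thumb_bandwidth(x):
    """Normal-scale ('Rule of Thumb') bandwidth, assuming roughly normal data."""
    x = np.asarray(x)
    return 1.06 * x.std(ddof=1) * len(x) ** (-1 / 5)

rng = np.random.default_rng(1)
x = rng.standard_normal(2000)
theta = kdfe_int_f_squared(x, rule_of_thumb_bandwidth(x))
# For N(0,1) the true value is 1/(2*sqrt(pi)), about 0.282.
```

Minimizing the MSE of theta_hat over h, rather than the MSE of the density estimate itself, is the bandwidth-selection problem the paper addresses.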
A comparison of dependence function estimators in multivariate extremes
Vettori, Sabrina; Huser, Raphaël; Genton, Marc G.
2017-01-01
Various nonparametric and parametric estimators of extremal dependence have been proposed in the literature. Nonparametric methods commonly suffer from the curse of dimensionality and have been mostly implemented in extreme-value studies up to three dimensions, whereas parametric models can tackle higher-dimensional settings. In this paper, we assess, through a vast and systematic simulation study, the performance of classical and recently proposed estimators in multivariate settings. In particular, we first investigate the performance of nonparametric methods and then compare them with classical parametric approaches under symmetric and asymmetric dependence structures within the commonly used logistic family. We also explore two different ways to make nonparametric estimators satisfy the necessary dependence function shape constraints, finding a general improvement in estimator performance either (i) by substituting the estimator with its greatest convex minorant, developing a computational tool to implement this method for dimensions D ≥ 2, or (ii) by projecting the estimator onto a subspace of dependence functions satisfying such constraints and taking advantage of Bernstein–Bézier polynomials. Implementing the convex minorant method leads to better estimator performance as the dimensionality increases.
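Shape-correction device (i), the greatest convex minorant, is just the lower convex hull of the estimated curve, evaluated back on the grid. The 1-D sketch below applies it to an invented noisy curve; a real Pickands dependence function would additionally need the endpoint and A(t) ≥ max(t, 1−t) constraints, which are omitted here.

```python
import numpy as np

def greatest_convex_minorant(t, y):
    """Greatest convex minorant of points (t_i, y_i) with t increasing:
    the lower convex hull (Andrew's monotone chain), interpolated back
    onto the t_i. Illustrative 1-D sketch of correction device (i)."""
    hull = []                            # indices of lower-hull vertices
    for i in range(len(t)):
        while len(hull) >= 2:
            i0, i1 = hull[-2], hull[-1]
            # drop i1 if it lies on or above the chord from i0 to i
            cross = (t[i1] - t[i0]) * (y[i] - y[i0]) \
                  - (t[i] - t[i0]) * (y[i1] - y[i0])
            if cross <= 0:
                hull.pop()
            else:
                break
        hull.append(i)
    return np.interp(t, t[hull], y[hull])

t = np.linspace(0.0, 1.0, 21)
noisy = np.maximum(t, 1 - t) + 0.05 * np.sin(12 * t)   # non-convex estimate
gcm = greatest_convex_minorant(t, noisy)
```

By construction the result lies on or below the input curve and has nondecreasing slopes, i.e. it is the largest convex function dominated by the estimate.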
Consistent Parameter and Transfer Function Estimation using Context Free Grammars
Klotz, Daniel; Herrnegger, Mathew; Schulz, Karsten
2017-04-01
This contribution presents a method for the inference of transfer functions for rainfall-runoff models. Here, transfer functions are defined as parametrized (functional) relationships between a set of spatial predictors (e.g. elevation, slope or soil texture) and model parameters. They are ultimately used for estimation of consistent, spatially distributed model parameters from a limited amount of lumped global parameters. Additionally, they provide a straightforward method for parameter extrapolation from one set of basins to another and can even be used to derive parameterizations for multi-scale models [see: Samaniego et al., 2010]. Yet, current approaches often implicitly assume that the transfer functions are known. As a matter of fact, for most cases these hypothesized transfer functions can rarely be measured and often remain unknown. Therefore, this contribution presents a general method for the concurrent estimation of the structure of transfer functions and their respective (global) parameters. Note that, as a consequence, the distributed parameters of the rainfall-runoff model are also estimated. The method combines two steps to achieve this. The first generates different possible transfer functions. The second then estimates the respective global transfer function parameters. The structural estimation of the transfer functions is based on the context free grammar concept. Chomsky first introduced context free grammars in linguistics [Chomsky, 1956]. Since then, they have been widely applied in computer science. But, to the knowledge of the authors, they have so far not been used in hydrology. Therefore, the contribution gives an introduction to context free grammars and shows how they can be constructed and used for the structural inference of transfer functions. This is enabled by new methods from evolutionary computation, such as grammatical evolution [O'Neill, 2001], which make it possible to exploit the constructed grammar as a
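The core mechanism, generating candidate transfer-function structures from a context free grammar, can be sketched with a toy grammar and a random derivation. The grammar, predictor names and constants below are all invented; grammatical evolution would additionally encode the rule choices in a genome and evolve them, which is not shown.

```python
import random

# A toy context-free grammar for transfer-function candidates,
# in the spirit of grammatical evolution (all names are made up).
GRAMMAR = {
    "<expr>":  [["<expr>", "<op>", "<expr>"],
                ["<func>", "(", "<expr>", ")"],
                ["<var>"], ["<const>"]],
    "<op>":    [["+"], ["-"], ["*"]],
    "<func>":  [["exp"], ["log1p"]],
    "<var>":   [["slope"], ["elevation"]],
    "<const>": [["0.5"], ["2.0"]],
}

def derive(symbol="<expr>", rng=None, depth=0, max_depth=4):
    """Randomly expand a nonterminal into a candidate expression string,
    capping recursion depth so every derivation terminates."""
    rng = rng or random.Random(0)
    if symbol not in GRAMMAR:
        return symbol                          # terminal token
    options = GRAMMAR[symbol]
    if symbol == "<expr>" and depth >= max_depth:
        options = [["<var>"], ["<const>"]]     # force termination
    rule = rng.choice(options)
    return "".join(derive(s, rng, depth + 1, max_depth) for s in rule)

rng = random.Random(42)
candidates = [derive("<expr>", rng) for _ in range(5)]
```

Each candidate string is one hypothesized transfer-function structure; the second step of the method would then fit its global parameters against the rainfall-runoff model.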
Pedotransfer functions to estimate soil water content at field capacity ...
Indian Academy of Sciences (India)
available scarce water resources in dry land agriculture, but direct measurement thereof for multiple locations in the field is not always feasible. Therefore, pedotransfer functions (PTFs) were developed to estimate soil water retention at FC and PWP for dryland soils of India. A soil database available for Arid Western India ...
Local gradient estimate for harmonic functions on Finsler manifolds
Xia, Chao
2013-01-01
In this paper, we prove the local gradient estimate for harmonic functions on complete, noncompact Finsler measure spaces under the condition that the weighted Ricci curvature has a lower bound. As applications, we obtain a Liouville-type theorem on Finsler manifolds with nonnegative Ricci curvature.
Estimating variability in functional images using a synthetic resampling approach
International Nuclear Information System (INIS)
Maitra, R.; O'Sullivan, F.
1996-01-01
Functional imaging of biologic parameters like in vivo tissue metabolism is made possible by Positron Emission Tomography (PET). Many techniques, such as mixture analysis, have been suggested for extracting such images from dynamic sequences of reconstructed PET scans. Methods for assessing the variability in these functional images are of scientific interest. The nonlinearity of the methods used in the mixture analysis approach makes analytic formulae for estimating variability intractable. The usual resampling approach is infeasible because of the prohibitive computational effort in simulating a number of sinogram datasets, applying image reconstruction, and generating parametric images for each replication. Here we introduce an approach that approximates the distribution of the reconstructed PET images by a Gaussian random field and generates synthetic realizations in the imaging domain. This eliminates the reconstruction steps in generating each simulated functional image and is therefore practical. Results of experiments done to evaluate the approach on a model one-dimensional problem are very encouraging. Post-processing of the estimated variances is seen to improve the accuracy of the estimation method. Mixture analysis is used to estimate functional images; however, the suggested approach is general enough to extend to other parametric imaging methods.
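The synthetic-resampling idea, approximating the reconstructed image by a Gaussian random field and resampling in the image domain instead of re-running the reconstruction, can be sketched on a 1-D toy problem. The mean, covariance and the nonlinear functional below are invented stand-ins (in the paper the Gaussian approximation is fitted to the reconstruction model and the functional comes from mixture analysis).

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in for a reconstructed 1-D "image": assumed mean and covariance.
n = 50
mean = np.linspace(1.0, 2.0, n)
idx = np.arange(n)
cov = 0.04 * np.exp(-np.abs(idx[:, None] - idx[None, :]) / 5.0)  # smooth noise

def nonlinear_functional(img):
    """Invented stand-in for the nonlinear image-to-parameter map."""
    return np.sqrt(np.mean(img**2))

# Synthetic resampling: draw Gaussian realizations directly in the image
# domain, skipping the expensive simulate-sinogram / reconstruct loop.
samples = rng.multivariate_normal(mean, cov, size=2000)
values = np.array([nonlinear_functional(s) for s in samples])
variability = values.std()
```

Because each replicate costs only a Gaussian draw, thousands of realizations of the nonlinear functional are affordable, giving a Monte Carlo variability estimate that the analytic route cannot provide.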
Estimating Aggregate Import-Demand Function In Nigeria: A Co ...
African Journals Online (AJOL)
This paper investigates the behaviour of Nigeria's aggregate imports between the periods 1980-2005. In the empirical analysis of the aggregate import demand function for Nigeria, cointegration and Error Correction modeling approaches have been used. Our econometric estimates suggest that real GDP largely explains ...
School District Inputs and Biased Estimation of Educational Production Functions.
Watts, Michael
1985-01-01
In 1979, Eric Hanushek pointed out a potential problem in estimating educational production functions, particularly at the precollege level. He observed that it is frequently inappropriate to include school-system variables in equations using the individual student as the unit of observation. This study offers limited evidence supporting this…
On the robust nonparametric regression estimation for a functional regressor
Azzedine , Nadjia; Laksaci , Ali; Ould-Saïd , Elias
2009-01-01
On the robust nonparametric regression estimation for a functional regressor. Corresponding author: Ould-Saïd, Elias. Affiliation (Azzedine, Nadjia; Laksaci, Ali): Département de Mathématiques, Univ. Djillali Liabès, BP 89, 22000 Sidi Bel Abbès, Algeria.
estimating an aggregate import demand function for ghana
African Journals Online (AJOL)
Administrator
we estimate an import demand function for Ghana for the period 1970 to ... results also indicate that economic growth (real GDP) and depreciation in the ... 80% of shocks to real exchange rates, merchandise imports and GDP ... imports; capital goods, 43 percent; intermediate ... merchandise imports (World Bank, 2004). For.
Functional Mixed Effects Model for Small Area Estimation.
Maiti, Tapabrata; Sinha, Samiran; Zhong, Ping-Shou
2016-09-01
Functional data analysis has become an important area of research due to its ability to handle high-dimensional and complex data structures. However, the development is limited in the context of linear mixed effect models, and in particular, for small area estimation. The linear mixed effect models are the backbone of small area estimation. In this article, we consider area level data, and fit a varying coefficient linear mixed effect model where the varying coefficients are semi-parametrically modeled via B-splines. We propose a method of estimating the fixed effect parameters and consider prediction of random effects that can be implemented using a standard software. For measuring prediction uncertainties, we derive an analytical expression for the mean squared errors, and propose a method of estimating the mean squared errors. The procedure is illustrated via a real data example, and operating characteristics of the method are judged using finite sample simulation studies.
Impact of Base Functional Component Types on Software Functional Size based Effort Estimation
Gencel, Cigdem; Buglione, Luigi
2008-01-01
Software effort estimation is still a significant challenge for software management. Although Functional Size Measurement (FSM) methods have been standardized and have become widely used by software organizations, the relationship between functional size and development effort still needs further investigation. Most of the studies focus on the project cost drivers and consider total software functional size as the primary input to estimation models. In this study, we investigate whether u...
Image denoising: Learning the noise model via nonsmooth PDE-constrained optimization
Reyes, Juan Carlos De los; Schönlieb, Carola-Bibiane
2013-01-01
We propose a nonsmooth PDE-constrained optimization approach for the determination of the correct noise model in total variation (TV) image denoising. An optimization problem for the determination of the weights corresponding to different types of noise distributions is stated and existence of an optimal solution is proved. A tailored regularization approach for the approximation of the optimal parameter values is proposed thereafter and its consistency studied. Additionally, the differentiability of the solution operator is proved and an optimality system characterizing the optimal solutions of each regularized problem is derived. The optimal parameter values are numerically computed by using a quasi-Newton method, together with semismooth Newton type algorithms for the solution of the TV-subproblems. © 2013 American Institute of Mathematical Sciences.
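The lower-level TV denoising problem that sits inside this bilevel framework can be sketched with a smoothed total-variation model solved by plain gradient descent. This is only an illustrative stand-in for the semismooth-Newton TV solvers used in the paper, and the weight, smoothing parameter and signal are all invented.

```python
import numpy as np

def tv_denoise_1d(y, lam=0.3, eps=0.05, n_iter=3000, step=0.05):
    """Denoise a 1-D signal by gradient descent on the smoothed TV model
        min_u 0.5*||u - y||^2 + lam * sum_i sqrt((u_{i+1}-u_i)^2 + eps^2).
    Illustrative sketch; eps smooths the nonsmooth TV term so plain
    gradient descent applies."""
    u = y.copy()
    for _ in range(n_iter):
        du = np.diff(u)
        w = du / np.sqrt(du**2 + eps**2)   # derivative of smoothed |du|
        g = np.zeros_like(u)
        g[1:] += w                          # +phi'(u_k - u_{k-1})
        g[:-1] -= w                         # -phi'(u_{k+1} - u_k)
        u -= step * ((u - y) + lam * g)     # gradient step
    return u

rng = np.random.default_rng(3)
clean = np.concatenate([np.zeros(50), np.ones(50)])   # piecewise constant
noisy = clean + 0.1 * rng.standard_normal(100)
denoised = tv_denoise_1d(noisy)
```

In the paper this solve is the inner problem; the outer (bilevel) problem then adjusts the noise-model weights such as lam by differentiating through the solution operator, which is where the nonsmooth PDE-constrained machinery is needed.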
Czech Academy of Sciences Publication Activity Database
Adam, Lukáš; Outrata, Jiří; Roubíček, Tomáš
2017-01-01
Roč. 66, č. 12 (2017), s. 2025-2049 ISSN 0233-1934 R&D Projects: GA ČR GA13-25911S; GA ČR GA13-18652S; GA ČR GAP201/10/0357; GA ČR(CZ) GAP201/12/0671 Grant - others:GA UK(CZ) SVV 260225/2015 Institutional support: RVO:67985556 ; RVO:61388998 Keywords : rate-independent systems * optimal control * identification * fractional-step time discretization * quadratic programming * gradient evaluation * variational analysis * implicit programming approach * limiting subdifferential * coderivative * nonsmooth contact mechanics * delamination Subject RIV: BA - General Mathematics; BA - General Mathematics (UT-L) OBOR OECD: Pure mathematics; Pure mathematics (UT-L) Impact factor: 0.943, year: 2016 http://library.utia.cas.cz/separaty/2016/MTR/adam-0453289.pdf
Smooth and non-smooth travelling waves in a nonlinearly dispersive Boussinesq equation
International Nuclear Information System (INIS)
Shen Jianwei; Xu Wei; Lei Youming
2005-01-01
The dynamical behavior and special exact solutions of the nonlinear dispersive Boussinesq equation (B(m,n) equation), u_{tt} - u_{xx} - a(u^n)_{xx} + b(u^m)_{xxxx} = 0, are studied by using the bifurcation theory of dynamical systems. As a result, all possible phase portraits in the parametric space for the travelling wave system, solitary wave, kink and anti-kink wave solutions, and uncountably infinitely many smooth and non-smooth periodic wave solutions are obtained. It is shown that the existence of a singular straight line in the travelling wave system is the reason why smooth waves converge to cusp waves. When parameters are varied, various sufficient conditions guaranteeing the existence of the above solutions are given
Xing, Yanyuan; Yan, Yubin
2018-03-01
Gao et al. [11] (2014) introduced a numerical scheme to approximate the Caputo fractional derivative with the convergence rate O(k^{3-α}), 0 < α < 1. When the solution of the equation is sufficiently smooth, Lv and Xu [20] (2016) proved by using the energy method that the corresponding numerical method for solving the time fractional partial differential equation has the convergence rate O(k^{3-α}), 0 < α < 1. When the solution of the equation has low regularity, the numerical method fails to have the convergence rate O(k^{3-α}) ... quadratic interpolation polynomials. Based on this scheme, we introduce a time discretization scheme to approximate the time fractional partial differential equation and show by using Laplace transform methods that the time discretization scheme has the convergence rate O(k^{3-α}), 0 < α < 1, for smooth and nonsmooth data in both homogeneous and inhomogeneous cases. Numerical examples are given to show that the theoretical results are consistent with the numerical results.
A note on reliability estimation of functionally diverse systems
International Nuclear Information System (INIS)
Littlewood, B.; Popov, P.; Strigini, L.
1999-01-01
It has been argued that functional diversity might be a plausible means of claiming independence of failures between two versions of a system. We present a model of functional diversity, in the spirit of earlier models of diversity such as those of Eckhardt and Lee, and Hughes. In terms of the model, we show that the claims for independence between functionally diverse systems seem rather unrealistic. Instead, it seems likely that functionally diverse systems will exhibit positively correlated failures, and thus will be less reliable than an assumption of independence would suggest. The result does not, of course, suggest that functional diversity is not worthwhile; instead, it places upon the evaluator of such a system the onus to estimate the degree of dependence so as to evaluate the reliability of the system
A single model procedure for tank calibration function estimation
International Nuclear Information System (INIS)
York, J.C.; Liebetrau, A.M.
1995-01-01
Reliable tank calibrations are a vital component of any measurement control and accountability program for bulk materials in a nuclear reprocessing facility. Tank volume calibration functions used in nuclear materials safeguards and accountability programs are typically constructed from several segments, each of which is estimated independently. Ideally, the segments correspond to structural features in the tank. In this paper the authors use an extension of the Thomas-Liebetrau model to estimate the entire calibration function in a single step. This procedure automatically takes significant run-to-run differences into account and yields an estimate of the entire calibration function in one operation. As with other procedures, the first step is to define suitable calibration segments. Next, a polynomial of low degree is specified for each segment. In contrast with the conventional practice of constructing a separate model for each segment, this information is used to set up the design matrix for a single model that encompasses all of the calibration data. Estimation of the model parameters is then done using conventional statistical methods. The method described here has several advantages over traditional methods. First, modeled run-to-run differences can be taken into account automatically at the estimation step. Second, no interpolation is required between successive segments. Third, variance estimates are based on all the data, rather than that from a single segment, with the result that discontinuities in confidence intervals at segment boundaries are eliminated. Fourth, the restrictive assumption of the Thomas-Liebetrau method, that the measured volumes be the same for all runs, is not required. Finally, the proposed methods are readily implemented using standard statistical procedures and widely-used software packages
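The single-model idea, one design matrix spanning all calibration segments instead of a separate fit per segment, can be sketched with a two-segment piecewise-linear calibration made continuous by a hinge term. The knot location, linear-per-segment form and data are invented for illustration; the Thomas-Liebetrau extension with run-to-run effects is not reproduced.

```python
import numpy as np

knot = 50.0          # assumed liquid level where the tank geometry changes

def design_matrix(level):
    """Columns: intercept, level, hinge (level - knot)_+ .
    The hinge column lets the slope change at the knot while keeping the
    fitted volume continuous there, so both segments are estimated in a
    single least-squares step with no interpolation between them."""
    level = np.asarray(level, dtype=float)
    return np.column_stack([
        np.ones_like(level),
        level,
        np.maximum(level - knot, 0.0),
    ])

rng = np.random.default_rng(5)
level = np.linspace(0.0, 100.0, 80)
true_vol = 10.0 + 2.0 * level + 1.5 * np.maximum(level - knot, 0.0)
volume = true_vol + rng.normal(0.0, 1.0, level.size)   # noisy calibration runs

coef, *_ = np.linalg.lstsq(design_matrix(level), volume, rcond=None)
fitted = design_matrix(level) @ coef
```

Because all observations enter one model, the residual variance (and hence any confidence interval) is estimated from the full data set, so the intervals do not jump at the segment boundary the way independently fitted segments would.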
Estimations for the Schwinger functions of relativistic quantum field theories
International Nuclear Information System (INIS)
Mayer, C.D.
1981-01-01
Schwinger functions of a relativistic neutral scalar field, whose underlying test function space is S or D, are estimated by methods of analytic continuation. Concerning the behaviour at coincident points it is shown: the two-point singularity of the n-point Schwinger function of a field theory is dominated by an inverse power of the distance of the two points, up to a multiplicative constant, if the other n-2 points are sufficiently distant and remain fixed. The power depends only on n. Under additional conditions on the field, the independence of the power from n can be proved. Concerning the behaviour at infinity it is shown: the n-point Schwinger functions of a field theory are globally bounded if the minimal distance of the arguments is positive. The bound depends only on n and the minimal distance of the arguments. (orig.) [de]
Joint brain connectivity estimation from diffusion and functional MRI data
Chu, Shu-Hsien; Lenglet, Christophe; Parhi, Keshab K.
2015-03-01
Estimating brain wiring patterns is critical to better understand the brain organization and function. Anatomical brain connectivity models axonal pathways, while the functional brain connectivity characterizes the statistical dependencies and correlation between the activities of various brain regions. The synchronization of brain activity can be inferred through the variation of blood-oxygen-level dependent (BOLD) signal from functional MRI (fMRI) and the neural connections can be estimated using tractography from diffusion MRI (dMRI). Functional connections between brain regions are supported by anatomical connections, and the synchronization of brain activities arises through sharing of information in the form of electro-chemical signals on axon pathways. Jointly modeling fMRI and dMRI data may improve the accuracy in constructing anatomical connectivity as well as functional connectivity. Such an approach may lead to novel multimodal biomarkers potentially able to better capture functional and anatomical connectivity variations. We present a novel brain network model which jointly models the dMRI and fMRI data to improve the anatomical connectivity estimation and extract the anatomical subnetworks associated with specific functional modes by constraining the anatomical connections as structural supports to the functional connections. The key idea is similar to a multi-commodity flow optimization problem that minimizes the cost or maximizes the efficiency for flow configuration and simultaneously fulfills the supply-demand constraint for each commodity. In the proposed network, the nodes represent the grey matter (GM) regions providing brain functionality, and the links represent white matter (WM) fiber bundles connecting those regions and delivering information. The commodities can be thought of as the information corresponding to brain activity patterns as obtained for instance by independent component analysis (ICA) of fMRI data. The concept of information
Some aspects of the translog production function estimation
Directory of Open Access Journals (Sweden)
Florin-Marius PAVELESCU
2011-06-01
Full Text Available In a translog production function, the number of parameters practically "explodes" as the number of considered production factors increases. Consequently, a shortcoming in the estimation of such a production function is the occurrence of collinearity. Theoretically, the collinearity impact is minimal if a single production factor is taken into account. In this case, we can determine not only the output elasticity but also the elasticity of scale related to the respective production factor. In the present paper, we demonstrate that the relationship between the output elasticity and the estimated average elasticity of scale depends on the dynamic trajectory of the production factor, underexponential or overexponential, respectively. At the end, a practical example is offered, dealing with the computation of the Gross Domestic Product elasticity and the average elasticity of scale related to the employed population in the United Kingdom and France during 1999-2009.
Estimation of Correlation Functions by the Random Decrement Technique
DEFF Research Database (Denmark)
Brincker, Rune; Krenk, Steen; Jensen, Jakob Laigaard
The Random Decrement (RDD) Technique is a versatile technique for characterization of random signals in the time domain. In this paper a short review of the theoretical basis is given, and the technique is illustrated by estimating auto-correlation functions and cross-correlation functions on modal responses simulated by two SDOF ARMA models loaded by the same bandlimited white noise. The speed and the accuracy of the RDD technique is compared to the Fast Fourier Transform (FFT) technique. The RDD technique does not involve multiplications, but only additions. Therefore, the technique is very fast.
Estimation of Correlation Functions by the Random Decrement Technique
DEFF Research Database (Denmark)
Brincker, Rune; Krenk, Steen; Jensen, Jacob Laigaard
1991-01-01
The Random Decrement (RDD) Technique is a versatile technique for characterization of random signals in the time domain. In this paper a short review of the theoretical basis is given, and the technique is illustrated by estimating auto-correlation functions and cross-correlation functions on modal responses simulated by two SDOF ARMA models loaded by the same band-limited white noise. The speed and the accuracy of the RDD technique is compared to the Fast Fourier Transform (FFT) technique. The RDD technique does not involve multiplications, but only additions. Therefore, the technique is very fast.
Estimation of Correlation Functions by the Random Decrement Technique
DEFF Research Database (Denmark)
Brincker, Rune; Krenk, Steen; Jensen, Jakob Laigaard
1992-01-01
The Random Decrement (RDD) Technique is a versatile technique for characterization of random signals in the time domain. In this paper a short review of the theoretical basis is given, and the technique is illustrated by estimating auto-correlation functions and cross-correlation functions on modal responses simulated by two SDOF ARMA models loaded by the same bandlimited white noise. The speed and the accuracy of the RDD technique is compared to the Fast Fourier Transform (FFT) technique. The RDD technique does not involve multiplications, but only additions. Therefore, the technique is very fast.
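The random decrement idea in the records above — averaging the signal segments that follow a level-crossing trigger, using only additions — can be sketched as follows. The AR(2) response and the trigger level are illustrative choices, not the papers' SDOF ARMA models.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stationary response: AR(2)-filtered white noise, a stand-in
# for an SDOF ARMA model response (coefficients are invented).
n = 20000
e = rng.normal(size=n)
y = np.zeros(n)
for t in range(2, n):
    y[t] = 1.6 * y[t - 1] - 0.8 * y[t - 2] + e[t]

lags = 50
level = y.std()                            # level-crossing trigger condition
idx = np.flatnonzero(y[:n - lags] >= level)

# Random decrement signature: the average of the segments that follow each
# trigger point -- additions only, no multiplications.
D = y[idx[:, None] + np.arange(lags)].mean(axis=0)
rho_rdd = D / D[0]                         # proportional to R_y(k)/R_y(0)

# Direct autocorrelation estimate for comparison.
rho = np.array([np.dot(y[: n - k], y[k:]) / (n - k) for k in range(lags)])
rho /= rho[0]
print(np.abs(rho_rdd - rho).max())         # small: the two estimates agree
```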
Estimating unsaturated hydraulic conductivity from soil moisture-time function
International Nuclear Information System (INIS)
El Gendy, R.W.
2002-01-01
The unsaturated hydraulic conductivity of a soil can be estimated from the θ(t) function and the dimensionless soil water content parameter Se = (θ - θr)/(θs - θr), where θ is the soil water content at any time (from the soil moisture depletion curve), θr is the residual water content and θs is the total soil porosity (equal to the saturation point). Se can be represented as a time function (Se = a·t^b), where t is the measurement time and a and b are the regression constants. The recommended equation in this method is given by
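A minimal sketch of recovering the regression constants a and b in Se = a·t^b by ordinary linear regression in log-log coordinates; the data are synthetic and noiseless, and the values of a and b are invented.

```python
import numpy as np

# Synthetic (time, Se) data with invented constants a = 0.9, b = -0.25.
t = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])   # measurement times
Se = 0.9 * t ** -0.25

# log Se = log a + b log t  ->  a linear fit recovers both constants.
b, log_a = np.polyfit(np.log(t), np.log(Se), 1)
a = np.exp(log_a)
print(round(a, 3), round(b, 3))   # -> 0.9 -0.25
```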
Asymptotic normality of kernel estimator of $\\psi$-regression function for functional ergodic data
Laksaci ALI; Benziadi Fatima; Gheriballak Abdelkader
2016-01-01
In this paper we consider the problem of the estimation of the $\\psi$-regression function when the covariates take values in an infinite dimensional space. Our main aim is to establish, under a stationary ergodic process assumption, the asymptotic normality of this estimate.
Wu, Shaofeng; Gao, Dianrong; Liang, Yingna; Chen, Bo
2015-11-01
With the development of bionics, bionic non-smooth surfaces have been introduced to the field of tribology. Although non-smooth surfaces have been studied widely, studies of non-smooth surfaces under natural seawater lubrication are still scarce, especially experimental ones. The influences of smooth and non-smooth surfaces on the frictional properties of the glass fiber-epoxy resin composite (GF/EPR) coupled with stainless steel 316L are investigated under natural seawater lubrication in this paper. The tested non-smooth surfaces include surfaces with semi-spherical pits, conical pits, cone-cylinder combined pits, cylindrical pits and through holes. The friction and wear tests are performed using a ring-on-disc test rig under a 60 N load and a 1000 r/min rotational speed. The test results show that GF/EPR with a bionic non-smooth surface has a considerably lower friction coefficient and better wear resistance than GF/EPR with a smooth surface without pits. The average friction coefficient of GF/EPR with semi-spherical pits is 0.088, the largest reduction, approximately 63.18% relative to GF/EPR with a smooth surface. In addition, the wear debris on the worn surfaces of GF/EPR is observed with a confocal scanning laser microscope. It is shown that the primary wear mechanism is abrasive wear. The research results provide some design parameters for non-smooth surfaces, and the experimental results can serve as a beneficial supplement to non-smooth surface studies.
Machine Learning Estimation of Atom Condensed Fukui Functions.
Zhang, Qingyou; Zheng, Fangfang; Zhao, Tanfeng; Qu, Xiaohui; Aires-de-Sousa, João
2016-02-01
To enable the fast estimation of atom condensed Fukui functions, machine learning algorithms were trained with databases of DFT pre-calculated values for ca. 23,000 atoms in organic molecules. The problem was approached as the ranking of atom types with the Bradley-Terry (BT) model, and as the regression of the Fukui function. Random Forests (RF) were trained to predict the condensed Fukui function, to rank atoms in a molecule, and to classify atoms as high/low Fukui function. Atomic descriptors were based on counts of atom types in spheres around the kernel atom. The BT coefficients assigned to atom types enabled the identification (93-94 % accuracy) of the atom with the highest Fukui function in pairs of atoms in the same molecule with differences ≥0.1. In whole molecules, the atom with the top Fukui function could be recognized in ca. 50 % of the cases and, on average, about 3 of the top 4 atoms could be recognized in a shortlist of 4. Regression RF yielded predictions for test sets with R² = 0.68-0.69, improving the ability of BT coefficients to rank atoms in a molecule. Atom classification (as high/low Fukui function) was obtained with RF with a sensitivity of 55-61 % and a specificity of 94-95 %. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Observer-Based Human Knee Stiffness Estimation.
Misgeld, Berno J E; Luken, Markus; Riener, Robert; Leonhardt, Steffen
2017-05-01
We consider the problem of stiffness estimation for the human knee joint during motion in the sagittal plane. The new stiffness estimator uses a nonlinear reduced-order biomechanical model and a body sensor network (BSN). The developed model is based on a two-dimensional knee kinematics approach to calculate the angle-dependent lever arms and the torques of the muscle-tendon complex. To minimize errors in the knee stiffness estimation procedure that result from model uncertainties, a nonlinear observer is developed. The observer uses the electromyogram (EMG) of involved muscles as input signals and the segmental orientation as the output signal to correct the observer-internal states. Because of dominating model nonlinearities and nonsmoothness of the corresponding nonlinear functions, an unscented Kalman filter is designed to compute and update the observer feedback (Kalman) gain matrix. The observer-based stiffness estimation algorithm is subsequently evaluated in simulations and in a test bench, specifically designed to provide robotic movement support for the human knee joint. In silico and experimental validation underline the good performance of the knee stiffness estimation even in the case of knee stiffening due to antagonistic coactivation. We have shown the principle function of an observer-based approach to knee stiffness estimation that employs EMG signals and segmental orientation provided by our own IPANEMA BSN. The presented approach makes real-time, model-based estimation of knee stiffness with minimal instrumentation possible.
Conical square function estimates in UMD Banach spaces and applications to H∞-functional calculi
Hytönen, T.; Van Neerven, J.; Portal, P.
2008-01-01
We study conical square function estimates for Banach-valued functions and introduce a vector-valued analogue of the Coifman-Meyer-Stein tent spaces. Following recent work of Auscher-McIntosh-Russ, the tent spaces in turn are used to construct a scale of vector-valued Hardy spaces associated with
International Nuclear Information System (INIS)
Liu, Xiaolan; Zhou, Mi
2016-01-01
In this paper, a one-layer recurrent network is proposed for solving a non-smooth convex optimization subject to linear inequality constraints. Compared with the existing neural networks for optimization, the proposed neural network is capable of solving more general convex optimization with linear inequality constraints. The convergence of the state variables of the proposed neural network to achieve solution optimality is guaranteed as long as the designed parameters in the model are larger than the derived lower bounds.
Method for estimating modulation transfer function from sample images.
Saiga, Rino; Takeuchi, Akihisa; Uesugi, Kentaro; Terada, Yasuko; Suzuki, Yoshio; Mizutani, Ryuta
2018-02-01
The modulation transfer function (MTF) represents the frequency domain response of imaging modalities. Here, we report a method for estimating the MTF from sample images. Test images were generated from a number of images, including those taken with an electron microscope and with an observation satellite. These original images were convolved with point spread functions (PSFs) including those of circular apertures. The resultant test images were subjected to a Fourier transformation. The logarithm of the squared norm of the Fourier transform was plotted against the squared distance from the origin. Linear correlations were observed in the logarithmic plots, indicating that the PSF of the test images can be approximated with a Gaussian. The MTF was then calculated from the Gaussian-approximated PSF. The obtained MTF closely coincided with the MTF predicted from the original PSF. The MTF of an x-ray microtomographic section of a fly brain was also estimated with this method. The obtained MTF showed good agreement with the MTF determined from an edge profile of an aluminum test object. We suggest that this approach is an alternative way of estimating the MTF, independently of the image type. Copyright © 2017 Elsevier Ltd. All rights reserved.
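The procedure described above can be sketched end-to-end on synthetic data: blur a white-noise "sample image" with a Gaussian PSF, fit the logarithm of the squared spectrum against the squared frequency, and recover the PSF width (and hence the MTF) from the slope. The image content, PSF width and frequency mask are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# "Sample image": white noise blurred by a Gaussian PSF of known width.
sigma_true = 1.5                       # PSF standard deviation in pixels
n = 256
img = rng.normal(size=(n, n))

f = np.fft.fftfreq(n)                  # spatial frequency, cycles/pixel
fx, fy = np.meshgrid(f, f)
f2 = fx ** 2 + fy ** 2                 # squared distance from the origin
otf = np.exp(-2 * np.pi ** 2 * sigma_true ** 2 * f2)   # Gaussian OTF/MTF
blurred = np.fft.ifft2(np.fft.fft2(img) * otf).real

# log |F|^2 is linear in f^2 for a Gaussian PSF; the slope gives sigma.
P = np.abs(np.fft.fft2(blurred)) ** 2
mask = (f2 > 0) & (f2 < 0.05)          # skip DC, keep low frequencies
slope, _ = np.polyfit(f2[mask], np.log(P[mask]), 1)
sigma_est = np.sqrt(-slope / (4 * np.pi ** 2))

mtf_est = np.exp(-2 * np.pi ** 2 * sigma_est ** 2 * f2)   # estimated MTF
print(round(sigma_est, 2))             # close to sigma_true = 1.5
```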
Directory of Open Access Journals (Sweden)
Sanjay Kumar Singh
2011-06-01
Full Text Available In this paper we propose Bayes estimators of the parameters of the Exponentiated Exponential distribution and its reliability functions under the General Entropy loss function for Type II censored samples. The proposed estimators have been compared with the corresponding Bayes estimators obtained under the Squared Error loss function and with maximum likelihood estimators in terms of their simulated risks (average loss over the sample space).
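For reference, the General Entropy loss mentioned above (in the form of Calabria and Pulcini, with shape parameter p) and the standard closed form of its Bayes estimator:

```latex
% General Entropy loss with shape parameter p:
L(\hat{\theta}, \theta) \propto
  \left(\frac{\hat{\theta}}{\theta}\right)^{p}
  - p \ln\!\left(\frac{\hat{\theta}}{\theta}\right) - 1 ,
% whose Bayes estimator is an inverse posterior moment:
\hat{\theta}_{GE} = \left( \operatorname{E}\!\left[\theta^{-p} \mid \text{data}\right] \right)^{-1/p} .
% For p = -1 this reduces to the posterior mean, i.e. the Bayes
% estimator under squared-error loss.
```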
Towards an Early Software Effort Estimation Based on Functional and Non-Functional Requirements
Kassab, Mohamed; Daneva, Maya; Ormandjieva, Olga
The increased awareness of the non-functional requirements as a key to software project and product success makes explicit the need to include them in any software project effort estimation activity. However, the existing approaches to defining size-based effort relationships still pay insufficient attention to this need. This paper presents a flexible, yet systematic approach to the early requirements-based effort estimation, based on Non-Functional Requirements ontology. It complementarily uses one standard functional size measurement model and a linear regression technique. We report on a case study which illustrates the application of our solution approach in context and also helps evaluate our experiences in using it.
Power estimation on functional level for programmable processors
Directory of Open Access Journals (Sweden)
M. Schneider
2004-01-01
Full Text Available In this contribution, different approaches to power estimation for programmable processors are presented and evaluated with respect to their applicability to modern processor architectures such as Very Long Instruction Word (VLIW) architectures. Special emphasis is placed on the concept of so-called Functional-Level Power Analysis (FLPA). This approach is based on partitioning the processor architecture into functional blocks such as the processing unit, the clock network, internal memory and others. The power consumption of these blocks is described by parameter-dependent arithmetic model functions. The input parameters, such as the achieved degree of parallelism or the type of memory access, are obtained by an automated, parser-based analysis of the assembler code of the system to be estimated. The approach is evaluated on two modern digital signal processors with a large set of basic digital signal processing algorithms; the estimates for the individual algorithms are compared with physically measured values, yielding a very small maximum estimation error of 3%.
Power estimation on functional level for programmable processors
Schneider, M.; Blume, H.; Noll, T. G.
2004-05-01
In this contribution, different approaches to power estimation for programmable processors are presented and evaluated with respect to their applicability to modern processor architectures such as Very Long Instruction Word (VLIW) architectures. Special emphasis is placed on the concept of so-called Functional-Level Power Analysis (FLPA). This approach is based on partitioning the processor architecture into functional blocks such as the processing unit, the clock network, internal memory and others. The power consumption of these blocks is described by parameter-dependent arithmetic model functions. The input parameters, such as the achieved degree of parallelism or the type of memory access, are obtained by an automated, parser-based analysis of the assembler code of the system to be estimated. The approach is evaluated on two modern digital signal processors with a large set of basic digital signal processing algorithms; the estimates for the individual algorithms are compared with physically measured values, yielding a very small maximum estimation error of 3%.
Optimal estimation of the intensity function of a spatial point process
DEFF Research Database (Denmark)
Guan, Yongtao; Jalilian, Abdollah; Waagepetersen, Rasmus
easily computable estimating functions. We derive the optimal estimating function in a class of first-order estimating functions. The optimal estimating function depends on the solution of a certain Fredholm integral equation and reduces to the likelihood score in case of a Poisson process. We discuss...
Influence function method for fast estimation of BWR core performance
International Nuclear Information System (INIS)
Rahnema, F.; Martin, C.L.; Parkos, G.R.; Williams, R.D.
1993-01-01
The model, which is based on the influence function method, provides rapid estimates of important quantities such as margins to fuel operating limits, the effective multiplication factor, nodal power, void and bundle flow distributions, as well as the traversing in-core probe (TIP) and local power range monitor (LPRM) readings. The fast model has been incorporated into GE's three-dimensional core monitoring system (3D Monicore). In addition to its predictive capability, the model adapts to LPRM readings in the monitoring mode. Comparisons have shown that the agreement between the results of the fast method and those of the standard 3D Monicore is within a few percent. (orig.)
Spectral velocity estimation using autocorrelation functions for sparse data sets
DEFF Research Database (Denmark)
2006-01-01
The distribution of velocities of blood or tissue is displayed using ultrasound scanners by finding the power spectrum of the received signal. This is currently done by making a Fourier transform of the received signal and then showing spectra in an M-mode display. It is desired to show a B-mode image for orientation, and data for this has to be acquired interleaved with the flow data. The power spectrum can be calculated from the Fourier transform of the autocorrelation function Ry(k), where its span of lags k is given by the number of emissions N in the data segment for velocity estimation...
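The autocorrelation route to the power spectrum mentioned above can be sketched as follows: estimate Ry(k) over a limited span of lags, extend it symmetrically, and take its FFT. The lowpass test signal and the lag span are illustrative, not ultrasound data.

```python
import numpy as np

rng = np.random.default_rng(5)

# Lowpass-ish test signal (moving-average filtered white noise).
n, lags = 4096, 64
x = np.convolve(rng.normal(size=n), np.ones(8) / 8, mode="same")

# Autocorrelation estimate over a limited span of lags.
R = np.array([np.dot(x[: n - k], x[k:]) / (n - k) for k in range(lags)])

# Symmetric (two-sided) extension, then FFT -> power spectrum estimate.
R_full = np.concatenate([R, R[-2:0:-1]])
S = np.fft.fft(R_full).real
print(S[:4].round(3))          # spectrum is concentrated at low frequencies
```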
Bayesian Parameter Estimation via Filtering and Functional Approximations
Matthies, Hermann G.
2016-11-25
The inverse problem of determining parameters in a model by comparing some output of the model with observations is addressed. This is a description of what has to be done to use the Gauss-Markov-Kalman filter for the Bayesian estimation and updating of parameters in a computational model. This is a filter acting on random variables, and while its Monte Carlo variant --- the Ensemble Kalman Filter (EnKF) --- is fairly straightforward, we subsequently only sketch its implementation with the help of functional representations.
Bayesian Parameter Estimation via Filtering and Functional Approximations
Matthies, Hermann G.; Litvinenko, Alexander; Rosic, Bojana V.; Zander, Elmar
2016-01-01
The inverse problem of determining parameters in a model by comparing some output of the model with observations is addressed. This is a description of what has to be done to use the Gauss-Markov-Kalman filter for the Bayesian estimation and updating of parameters in a computational model. This is a filter acting on random variables, and while its Monte Carlo variant --- the Ensemble Kalman Filter (EnKF) --- is fairly straightforward, we subsequently only sketch its implementation with the help of functional representations.
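A minimal sketch of the EnKF variant mentioned in both records: a scalar parameter with a linear toy forward model, updated from one noisy observation. All numbers (prior, observation, noise level) are invented; the real use case involves a computational model and functional representations.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy linear forward model y = 2*theta.
def model(theta):
    return 2.0 * theta

N = 5000
theta = rng.normal(1.0, 1.0, N)        # prior ensemble: N(1, 1)
y_obs = 4.0                            # the single observation
obs_std = 0.5                          # observation-noise std

# EnKF update: Kalman gain from ensemble covariances, perturbed observations.
y_ens = model(theta) + rng.normal(0.0, obs_std, N)
k_gain = np.cov(theta, y_ens)[0, 1] / np.var(y_ens)
theta_post = theta + k_gain * (y_obs - y_ens)

print(round(theta_post.mean(), 2))     # near the analytic posterior mean 1.94
```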
Asiri, Sharefa M.; Laleg-Kirati, Taous-Meriem
2017-01-01
In this paper, a method based on modulating functions is proposed to estimate the Cerebral Blood Flow (CBF). The problem is written in an input estimation problem for a damped wave equation which is used to model the spatiotemporal variations
Fused Adaptive Lasso for Spatial and Temporal Quantile Function Estimation
Sun, Ying
2015-09-01
Quantile functions are important in characterizing the entire probability distribution of a random variable, especially when the tail of a skewed distribution is of interest. This article introduces new quantile function estimators for spatial and temporal data with a fused adaptive Lasso penalty to accommodate the dependence in space and time. This method penalizes the difference among neighboring quantiles, hence it is desirable for applications with features ordered in time or space without replicated observations. The theoretical properties are investigated and the performances of the proposed methods are evaluated by simulations. The proposed method is applied to particulate matter (PM) data from the Community Multiscale Air Quality (CMAQ) model to characterize the upper quantiles, which are crucial for studying spatial association between PM concentrations and adverse human health effects. © 2016 American Statistical Association and the American Society for Quality.
A two-layer recurrent neural network for nonsmooth convex optimization problems.
Qin, Sitian; Xue, Xiaoping
2015-06-01
In this paper, a two-layer recurrent neural network is proposed to solve the nonsmooth convex optimization problem subject to convex inequality and linear equality constraints. Compared with existing neural network models, the proposed neural network has a low model complexity and avoids penalty parameters. It is proved that from any initial point, the state of the proposed neural network reaches the equality feasible region in finite time and stays there thereafter. Moreover, the state is unique if the initial point lies in the equality feasible region. The equilibrium point set of the proposed neural network is proved to be equivalent to the Karush-Kuhn-Tucker optimality set of the original optimization problem. It is further proved that the equilibrium point of the proposed neural network is stable in the sense of Lyapunov. Moreover, from any initial point, the state is proved to be convergent to an equilibrium point of the proposed neural network. Finally, as applications, the proposed neural network is used to solve nonlinear convex programming with linear constraints and L1-norm minimization problems.
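As a rough illustration of the kind of nonsmooth problem such networks target, here is a discrete-time projected subgradient iteration (not the paper's two-layer network) for the L1-norm minimization problem min ||x||_1 subject to Ax = b, with the equality constraint enforced by projection; the 3x8 system is invented.

```python
import numpy as np

rng = np.random.default_rng(6)

# min ||x||_1  s.t.  Ax = b, with a random underdetermined system.
A = rng.normal(size=(3, 8))
x_true = np.zeros(8)
x_true[[1, 4]] = [1.0, -2.0]           # a sparse feasible point, ||.||_1 = 3
b = A @ x_true

Ap = np.linalg.pinv(A)                 # A has full row rank (a.s.): A @ Ap = I
def proj(x):                           # projection onto the affine set Ax = b
    return x - Ap @ (A @ x - b)

x = proj(np.zeros(8))
best_val, x_best = np.inf, x
for k in range(1, 20001):
    # subgradient of ||x||_1 is sign(x); diminishing step keeps convergence
    x = proj(x - (0.5 / np.sqrt(k)) * np.sign(x))
    v = np.abs(x).sum()
    if v < best_val:
        best_val, x_best = v, x
print(round(best_val, 2))              # at most ~3.0, the value at x_true
```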
Directory of Open Access Journals (Sweden)
Yuriy Goykhman
2012-01-01
Full Text Available A solution to the inverse problem for a three-layer medium with nonsmooth boundaries, representing a large class of natural subsurface structures, is developed in this paper using simulated radar data. The retrieval of the layered medium parameters is accomplished as a sequential nonlinear optimization starting from the top layer and progressively characterizing the layers below. The optimization process is achieved by an iterative technique built around the solution of the forward scattering problem. The forward scattering process is formulated by using the extended boundary condition method (EBCM and constructing reflection and transmission matrices for each interface. These matrices are then combined into the generalized scattering matrix for the entire system, from which radar scattering coefficients are then computed. To be efficiently utilized in the inverse problem, the forward scattering model is simulated over a wide range of unknowns to obtain a complete set of subspace-based equivalent closed-form models that relate radar backscattering coefficients to the sought-for parameters including dielectric constants of each layer and separation of the layers. The inversion algorithm is implemented as a modified conjugate-gradient-based nonlinear optimization. It is shown that this technique results in accurate retrieval of surface and subsurface parameters, even in the presence of noise.
Estimation of cost function in the natural gas industry
Energy Technology Data Exchange (ETDEWEB)
Kim, Young Duk [Korea Energy Economics Institute, Euiwang (Korea)
1999-02-01
The natural gas industry in Korea is characterized by a dual industrial structure of wholesale and retail and by the regional monopoly of city gas companies. Recently there have been discussions on the restructuring of the gas industry and on the problems arising from such an industrial organization. Against this background, the labor and capital costs of KOGAS, the wholesaler, were analyzed to assess its efficiency, and a cost function focusing on distribution was estimated to identify scale effects among the city gas companies, the retailers. The analysis shows that KOGAS needs to enhance its competitiveness by improving labor productivity through stabilization of its labor structure and by maximizing value-added through a stable capital composition. The estimated cost function of the city gas companies indicates that the existing regional monopolies yield scale effects only when the area of operation and the number of end users increase while the amount used per end user remains the same. (author). 31 refs., 10 figs., 43 tabs.
Development of fragility functions to estimate homelessness after an earthquake
Brink, Susan A.; Daniell, James; Khazai, Bijan; Wenzel, Friedemann
2014-05-01
Immediately after an earthquake, many stakeholders need to make decisions about their response. These decisions often need to be made in a data-poor environment, as accurate information on the impact can take months or even years to be collected and publicized. Social fragility functions have been developed and applied to provide near-real-time estimates of the impact in terms of building damage, deaths and injuries. These rough estimates can help governments and response agencies determine what aid may be required, which can improve their emergency response and facilitate planning for longer term response. Due to building damage, lifeline outages, fear of aftershocks, or other causes, people may become displaced or homeless after an earthquake. Especially in cold and dangerous locations, the rapid provision of safe emergency shelter can be a lifesaving necessity. However, immediately after an event there is little information available about the number of homeless, their locations and whether they require public shelter to aid the response agencies in decision making. In this research, we analyze homelessness after historic earthquakes using the CATDAT Damaging Earthquakes Database. CATDAT includes information on the hazard as well as the physical and social impact of over 7200 damaging earthquakes from 1900-2013 (Daniell et al. 2011). We explore the relationship of both earthquake characteristics and area characteristics with homelessness after the earthquake. We consider modelled variables such as population density, HDI, year and measures of ground motion intensity developed in Daniell (2014) over the period 1900-2013, as well as temperature. Using a methodology based on that used for the PAGER fatality fragility curves developed by Jaiswal and Wald (2010), but with regression through time using the socioeconomic parameters developed in Daniell et al. (2012) for "socioeconomic fragility functions", we develop a set of fragility curves that can be
An Improved Differential Evolution Based Dynamic Economic Dispatch with Nonsmooth Fuel Cost Function
Directory of Open Access Journals (Sweden)
R. Balamurugan
2007-09-01
Full Text Available Dynamic economic dispatch (DED is one of the major operational decisions in electric power systems. DED problem is an optimization problem with an objective to determine the optimal combination of power outputs for all generating units over a certain period of time in order to minimize the total fuel cost while satisfying dynamic operational constraints and load demand in each interval. This paper presents an improved differential evolution (IDE method to solve the DED problem of generating units considering valve-point effects. Heuristic crossover technique and gene swap operator are introduced in the proposed approach to improve the convergence characteristic of the differential evolution (DE algorithm. To illustrate the effectiveness of the proposed approach, two test systems consisting of five and ten generating units have been considered. The results obtained through the proposed method are compared with those reported in the literature.
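The objective the IDE method optimizes, a quadratic fuel cost plus a rectified-sine valve-point term under a demand balance, can be minimized with a plain DE/rand/1/bin loop. The sketch below uses an invented 3-unit system and a quadratic penalty for the demand constraint; it is not the paper's IDE (no heuristic crossover or gene swap operator) nor its 5/10-unit test data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Invented 3-unit system: quadratic fuel cost + rectified-sine valve-point term.
a = np.array([0.008, 0.009, 0.007])
b = np.array([7.0, 6.3, 6.8])
c = np.array([200.0, 180.0, 140.0])
e = np.array([50.0, 40.0, 30.0])
f = np.array([0.06, 0.08, 0.07])
pmin, pmax, demand = 50.0, 250.0, 450.0

def cost(P):
    fuel = a * P ** 2 + b * P + c + np.abs(e * np.sin(f * (pmin - P)))
    return fuel.sum() + 1e3 * (P.sum() - demand) ** 2   # penalized demand balance

# Plain DE/rand/1/bin.
npop, dim, F_w, CR = 30, 3, 0.5, 0.9
pop = rng.uniform(pmin, pmax, (npop, dim))
fit = np.array([cost(p) for p in pop])
for _ in range(400):
    for i in range(npop):
        r1, r2, r3 = rng.choice(npop, 3, replace=False)
        mutant = np.clip(pop[r1] + F_w * (pop[r2] - pop[r3]), pmin, pmax)
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True          # keep at least one mutant gene
        trial = np.where(cross, mutant, pop[i])
        f_trial = cost(trial)
        if f_trial < fit[i]:                     # greedy selection
            pop[i], fit[i] = trial, f_trial

best = pop[fit.argmin()]
print(round(best.sum(), 1), round(fit.min(), 1))  # total output near the demand
```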
Towards an Early Software Effort Estimation Based on Functional and Non-Functional Requirements
Kassab, M.; Daneva, Maia; Ormanjieva, Olga; Abran, A.; Braungarten, R.; Dumke, R.; Cuadrado-Gallego, J.; Brunekreef, J.
2009-01-01
The increased awareness of the non-functional requirements as a key to software project and product success makes explicit the need to include them in any software project effort estimation activity. However, the existing approaches to defining size-based effort relationships still pay insufficient
Pedotransfer functions estimating soil hydraulic properties using different soil parameters
DEFF Research Database (Denmark)
Børgesen, Christen Duus; Iversen, Bo Vangsø; Jacobsen, Ole Hørbye
2008-01-01
Estimates of soil hydraulic properties using pedotransfer functions (PTF) are useful in many studies such as hydrochemical modelling and soil mapping. The objective of this study was to calibrate and test parametric PTFs that predict soil water retention and unsaturated hydraulic conductivity...... parameters. The PTFs are based on neural networks and the Bootstrap method using different sets of predictors and predict the van Genuchten/Mualem parameters. A Danish soil data set (152 horizons) dominated by sandy and sandy loamy soils was used in the development of PTFs to predict the Mualem hydraulic...... conductivity parameters. A larger data set (1618 horizons) with a broader textural range was used in the development of PTFs to predict the van Genuchten parameters. The PTFs using either three or seven textural classes combined with soil organic matter and bulk density gave the most reliable predictions...
International Nuclear Information System (INIS)
Zhang Zhihui; Zhou Hong; Ren Luquan; Tong Xin; Shan Hongyu; Li Xianzhou
2008-01-01
Aiming to form high-quality non-smooth biomimetic units, the influence of laser processing parameters (pulse energy, pulse duration, frequency and scanning speed in the present work) on the surface morphology of scanned tracks was studied on 3Cr2W8V die steel. The evolution of the surface morphology was explained according to the degree of melting and vaporization of the surface material, and the trend of mean surface roughness and maximum peak-to-valley height. Cross-section morphology revealed the significant microstructural characteristic of the laser-treated zone used for forming the functional zone on the biomimetic surface. Results showed that the combination of pulse energy and pulse duration plays a major role in determining the local height difference on the irradiated surface and the occurrence of melting or vaporization, while frequency and scanning speed have a minor effect on the surface morphology, acting mainly through the overlap amount and overlap mode. The mechanisms behind these influences were discussed, and schematic drawings were introduced to describe them.
Detka, Małgorzata
2017-08-01
The paper presents results of numerical analyses of the response of a uniform fiber Bragg grating subjected to a strain with a non-smooth profile. Results of measurements of the response of the grating to a compressive strain correspond well with results of the simulation and show that the induced strain profile of the grating causes a widening of its reflection spectrum with a considerable shape irregularity, dependent on the location of the point where the slope of the strain profile changes abruptly, and on the maximum value of the strain.
Error Estimates for a Semidiscrete Finite Element Method for Fractional Order Parabolic Equations
Jin, Bangti
2013-01-01
We consider the initial boundary value problem for a homogeneous time-fractional diffusion equation with an initial condition ν(x) and a homogeneous Dirichlet boundary condition in a bounded convex polygonal domain Ω. We study two semidiscrete approximation schemes, i.e., the Galerkin finite element method (FEM) and the lumped mass Galerkin FEM, using piecewise linear functions. We establish error estimates that are almost optimal with respect to the data regularity, including the cases of smooth and nonsmooth initial data, i.e., ν ∈ H^2(Ω) ∩ H^1_0(Ω) and ν ∈ L^2(Ω). For the lumped mass method, the optimal L^2-norm error estimate is valid only under an additional assumption on the mesh, which in two dimensions is known to be satisfied for symmetric meshes. Finally, we present some numerical results that give insight into the reliability of the theoretical study. © 2013 Society for Industrial and Applied Mathematics.
Estimation of Cumulative Absolute Velocity using Empirical Green's Function Method
International Nuclear Information System (INIS)
Park, Dong Hee; Yun, Kwan Hee; Chang, Chun Joong; Park, Se Moon
2009-01-01
In recognition of the need to develop a new criterion for determining when the OBE (Operating Basis Earthquake) has been exceeded at nuclear power plants, the Cumulative Absolute Velocity (CAV) was introduced by EPRI. The CAV accumulates the integral of the absolute acceleration, counting only one-second windows in which the acceleration exceeds 0.025 g: CAV = ∫_0^{t_max} |a(t)| dt (1), where t_max is the duration of the record and a(t) is the acceleration (> 0.025 g). Currently, the OBE exceedance criterion in Korea is based on Peak Ground Acceleration (PGA > 0.1 g). When the Odesan earthquake (M_L = 4.8, January 20th, 2007) and the Gyeongju earthquake (M_L = 3.4, June 2nd, 1999) occurred, we had already experienced PGA greater than 0.1 g that did not cause any damage even to the poorly-designed structures nearby. These moderate earthquakes have motivated Korea to begin using the CAV as the OBE exceedance criterion for NPPs, because the present OBE level has proved to be a poor indicator for small-to-moderate earthquakes, for which the low OBE level can cause an inappropriate shutdown of the plant. A more serious possibility is that this scenario will become a reality at a very high level. The Empirical Green's Function method is a simulation technique which can estimate the CAV value, and it is hereby introduced
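As a minimal illustration of the bracketed CAV described above (acceleration counted only during one-second windows whose peak exceeds 0.025 g), here is a hedged Python sketch. The non-overlapping windowing convention and the rectangular integration rule are assumptions for illustration, not EPRI's exact standardized procedure.

```python
def bracketed_cav(acc_g, dt):
    """Bracketed/standardized CAV as described in the abstract:
    integrate |a(t)| only over 1-second windows whose peak
    acceleration exceeds 0.025 g.
    acc_g: acceleration samples in units of g; dt: sample interval [s].
    Returns CAV in g*s."""
    per_window = max(1, round(1.0 / dt))          # samples per 1-s window
    cav = 0.0
    for start in range(0, len(acc_g), per_window):
        window = acc_g[start:start + per_window]
        if max(abs(a) for a in window) > 0.025:   # window exceeds threshold
            cav += sum(abs(a) for a in window) * dt  # rectangular rule
    return cav
```

For example, a one-second burst of constant 0.1 g contributes 0.1 g·s, while a window that never exceeds 0.025 g contributes nothing.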
Rached, Nadhir B.
2014-01-06
A new hybrid adaptive MC forward Euler algorithm for SDEs with singular coefficients and non-smooth observables is developed. This adaptive method is based on the derivation of a new error expansion with computable leading-order terms. When a non-smooth binary payoff is considered, the new adaptive method achieves the same complexity as the uniform discretization does for smooth problems. Moreover, the newly developed algorithm is extended to the multilevel Monte Carlo (MLMC) forward Euler setting, which reduces the complexity from O(TOL^{-3}) to O(TOL^{-2} (log TOL)^2). For the binary option case, it recovers the standard multilevel computational cost O(TOL^{-2} (log TOL)^2). When considering a higher-order Milstein scheme, a similar complexity result was obtained by Giles using uniform time stepping for one-dimensional SDEs, see [2]. The difficulty of extending Giles' Milstein MLMC method to the multidimensional case is an argument for the flexibility of our newly constructed adaptive MLMC forward Euler method, which can easily be adapted to this setting. Similarly, the expected complexity O(TOL^{-2} (log TOL)^2) is reached for the multidimensional case and verified numerically.
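For context, the uniform-time-step MLMC forward Euler estimator that the adaptive method builds on can be sketched as follows for geometric Brownian motion. The drift, volatility, and per-level sample counts are illustrative choices, not taken from the paper, and the adaptive time stepping itself is not reproduced.

```python
import random, math

def mlmc_gbm(payoff, L, M0=2000, mu=0.05, sigma=0.2, x0=1.0, T=1.0, seed=0):
    """Uniform-step MLMC forward Euler estimator of E[payoff(X_T)] for
    dX = mu*X dt + sigma*X dW.  Level l uses 2^l Euler steps; the fine
    and coarse paths at each level share the same Brownian increments."""
    rng = random.Random(seed)
    est = 0.0
    for l in range(L + 1):
        n_f = 2 ** l
        h_f = T / n_f
        samples = max(M0 >> l, 50)          # crude geometric sample decay
        acc = 0.0
        for _ in range(samples):
            dw = [rng.gauss(0.0, math.sqrt(h_f)) for _ in range(n_f)]
            xf = x0
            for w in dw:                    # fine Euler path
                xf += mu * xf * h_f + sigma * xf * w
            if l == 0:
                acc += payoff(xf)
            else:                           # coupled coarse path: paired increments
                xc, h_c = x0, 2 * h_f
                for k in range(0, n_f, 2):
                    xc += mu * xc * h_c + sigma * xc * (dw[k] + dw[k + 1])
                acc += payoff(xf) - payoff(xc)
        est += acc / samples                # telescoping sum over levels
    return est
```

For a smooth payoff the level corrections have rapidly decaying variance, which is what yields the O(TOL^{-2} (log TOL)^2) cost; for the binary payoff of the abstract the variances decay more slowly, which is the problem the adaptive time stepping addresses.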
Rached, Nadhir B.; Hoel, Haakon; Tempone, Raul
2014-01-01
A new hybrid adaptive MC forward Euler algorithm for SDEs with singular coefficients and non-smooth observables is developed. This adaptive method is based on the derivation of a new error expansion with computable leading-order terms. When a non-smooth binary payoff is considered, the new adaptive method achieves the same complexity as the uniform discretization does for smooth problems. Moreover, the newly developed algorithm is extended to the multilevel Monte Carlo (MLMC) forward Euler setting, which reduces the complexity from O(TOL^{-3}) to O(TOL^{-2} (log TOL)^2). For the binary option case, it recovers the standard multilevel computational cost O(TOL^{-2} (log TOL)^2). When considering a higher-order Milstein scheme, a similar complexity result was obtained by Giles using uniform time stepping for one-dimensional SDEs, see [2]. The difficulty of extending Giles' Milstein MLMC method to the multidimensional case is an argument for the flexibility of our newly constructed adaptive MLMC forward Euler method, which can easily be adapted to this setting. Similarly, the expected complexity O(TOL^{-2} (log TOL)^2) is reached for the multidimensional case and verified numerically.
Arreola, José Luis Preciado; Johnson, Andrew L.
2016-01-01
Organizations like census bureaus rely on non-exhaustive surveys to estimate industry population-level production functions. In this paper we propose selecting an estimator based on a weighting of its in-sample and predictive performance on actual application datasets. We compare Cobb-Douglas functional assumptions to existing nonparametric shape-constrained estimators and a newly proposed estimator presented in this paper. For simulated data, we find that our proposed estimator has the lowes...
Topological estimation of aerodynamic controlled airplane system functionality of quality
Directory of Open Access Journals (Sweden)
С.В. Павлова
2005-01-01
Full Text Available It is suggested to use topological methods for staged estimation of aerodynamic airplane control over a wide range of its conditions. The estimation is based on calculation of the normalized staged virtual non-isotropy of configurational airplane systems.
Estimating functions for inhomogeneous spatial point processes with incomplete covariate data
DEFF Research Database (Denmark)
Waagepetersen, Rasmus
and this leads to parameter estimation error which is difficult to quantify. In this paper we introduce a Monte Carlo version of the estimating function used in "spatstat" for fitting inhomogeneous Poisson processes and certain inhomogeneous cluster processes. For this modified estimating function it is feasible...
Estimating functions for inhomogeneous spatial point processes with incomplete covariate data
DEFF Research Database (Denmark)
Waagepetersen, Rasmus
2008-01-01
and this leads to parameter estimation error which is difficult to quantify. In this paper, we introduce a Monte Carlo version of the estimating function used in spatstat for fitting inhomogeneous Poisson processes and certain inhomogeneous cluster processes. For this modified estimating function, it is feasible...
Pedotransfer functions to estimate soil water content at field capacity ...
Indian Academy of Sciences (India)
Priyabrata Santra
2018-03-27
Mar 27, 2018 ... of the global population (Millennium Ecosystem Assessment 2005). Likewise, there is a .... Therefore, the main objective of this study was to develop PTFs for arid soils of India to estimate soil water content at FC and PWP.
Directory of Open Access Journals (Sweden)
Nguyen Manh Hung
2008-03-01
Full Text Available In this paper, we consider the second initial boundary value problem for strongly general Schrodinger systems in both the finite and the infinite cylinders $Q_T, 0
Estimation of functional preparedness of young handballers in setup time
Directory of Open Access Journals (Sweden)
Favoritоv V.N.
2012-11-01
Full Text Available The dynamics of the level of functional preparedness of young handballers during the setup period is shown. Alterations to the educational-training process were envisaged with the purpose of optimizing their functional preparedness. 11 youths of calendar age 14-15 years were included in the research. The computer program "SVSM" was applied to determine their level of functional preparedness. At the beginning of the setup period, the functional preparedness of 18.18% of all respondents was characterized by a "middle" level, 27.27% by a level "below average", and 54.54% by a level "above average". At the end of the setup period, athletes with functional preparedness "above average" prevailed (63.63%), 27.27% had a "high" level, and no athletes with a below-average level were observed. The efficiency of the proposed system of training sessions for optimizing the functional preparedness of young handballers is demonstrated.
Estimation of Multiple Point Sources for Linear Fractional Order Systems Using Modulating Functions
Belkhatir, Zehor; Laleg-Kirati, Taous-Meriem
2017-01-01
This paper proposes an estimation algorithm for the characterization of multiple point inputs for linear fractional order systems. First, using the polynomial modulating functions method and a suitable change of variables, the problem of estimating
Estimate of K-functionals and modulus of smoothness constructed ...
Indian Academy of Sciences (India)
... and K-functionals. The main result of the paper is the proof of the equivalence theorem for a K-functional and a modulus of smoothness for the Dunkl transform on R^d. Author Affiliations. M El Hamma1 R Daher1. Department of Mathematics, Faculty of Sciences Aïn Chock, University of Hassan II, Casablanca, Morocco ...
Micro-Economic Estimation On The Demand Function For ...
African Journals Online (AJOL)
The article focused on the estimation of the prostitution demand behaviour in Adamawa State. An econometric model was specified based on economic theory and confronted with both primary and secondary data. Ordinary least square multiple regression techniques were adopted and the linear model was chosen as a ...
Multivariable Frequency Response Functions Estimation for Industrial Robots
Hardeman, T.; Aarts, Ronald G.K.M.; Jonker, Jan B.
2005-01-01
The accuracy of industrial robots limits their applicability for highly demanding processes, like robotised laser welding. We are working on a nonlinear flexible model of the robot manipulator to predict these inaccuracies. This poster presents the experimental results on estimating the Multivariable
Estimating Functions of Distributions Defined over Spaces of Unknown Size
Directory of Open Access Journals (Sweden)
David H. Wolpert
2013-10-01
Full Text Available We consider Bayesian estimation of information-theoretic quantities from data, using a Dirichlet prior. Acknowledging the uncertainty of the event space size m and the Dirichlet prior’s concentration parameter c, we treat both as random variables set by a hyperprior. We show that the associated hyperprior, P(c, m), obeys a simple “Irrelevance of Unseen Variables” (IUV) desideratum iff P(c, m) = P(c)P(m). Thus, requiring IUV greatly reduces the number of degrees of freedom of the hyperprior. Some information-theoretic quantities can be expressed multiple ways, in terms of different event spaces, e.g., mutual information. With all hyperpriors (implicitly) used in earlier work, different choices of this event space lead to different posterior expected values of these information-theoretic quantities. We show that there is no such dependence on the choice of event space for a hyperprior that obeys IUV. We also derive a result that allows us to exploit IUV to greatly simplify calculations, like the posterior expected mutual information or posterior expected multi-information. We also use computer experiments to favorably compare an IUV-based estimator of entropy to three alternative methods in common use. We end by discussing how seemingly innocuous changes to the formalization of an estimation problem can substantially affect the resultant estimates of posterior expectations.
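As a hedged numerical companion to the abstract: for a fixed event space size and concentration (i.e., without the paper's hyperprior on c and m), the posterior expected entropy under a symmetric Dirichlet prior can be approximated by Monte Carlo using stdlib Gamma draws. The pseudocount convention and sample sizes below are assumptions for illustration only.

```python
import random, math

def posterior_mean_entropy(counts, c=1.0, draws=4000, seed=0):
    """Monte Carlo estimate of the posterior expected entropy
    E[H(p) | counts] under a symmetric Dirichlet prior with
    pseudocount c per bin (an illustrative simplification; the
    paper instead places a hyperprior on c and the space size m).
    Dirichlet samples are built from normalized Gamma draws."""
    rng = random.Random(seed)
    alphas = [n + c for n in counts]        # posterior Dirichlet parameters
    total = 0.0
    for _ in range(draws):
        g = [rng.gammavariate(a, 1.0) for a in alphas]
        s = sum(g)
        p = [x / s for x in g]              # one posterior draw of p
        total += -sum(q * math.log(q) for q in p if q > 0)
    return total / draws
```

With balanced counts over two bins, the estimate approaches log 2; skewed counts yield a lower posterior expected entropy, as one would expect.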
On the Reliability of Source Time Functions Estimated Using Empirical Green's Function Methods
Gallegos, A. C.; Xie, J.; Suarez Salas, L.
2017-12-01
The Empirical Green's Function (EGF) method (Hartzell, 1978) has been widely used to extract source time functions (STFs). In this method, seismograms generated by collocated events with different magnitudes are deconvolved. Under a fundamental assumption that the STF of the small event is a delta function, the deconvolved Relative Source Time Function (RSTF) yields the large event's STF. While this assumption can be empirically justified by examination of differences in event size and frequency content of the seismograms, there can be a lack of rigorous justification of the assumption. In practice, a small event might have a finite duration when the RSTF is retrieved and interpreted as the large event STF with a bias. In this study, we rigorously analyze this bias using synthetic waveforms generated by convolving a realistic Green's function waveform with pairs of finite-duration triangular or parabolic STFs. The RSTFs are found using a time-domain based matrix deconvolution. We find when the STFs of smaller events are finite, the RSTFs are a series of narrow non-physical spikes. Interpreting these RSTFs as a series of high-frequency source radiations would be very misleading. The only reliable and unambiguous information we can retrieve from these RSTFs is the difference in durations and the moment ratio of the two STFs. We can apply a Tikhonov smoothing to obtain a single-pulse RSTF, but its duration is dependent on the choice of weighting, which may be subjective. We then test the Multi-Channel Deconvolution (MCD) method (Plourde & Bostock, 2017) which assumes that both STFs have finite durations to be solved for. A concern about the MCD method is that the number of unknown parameters is larger, which would tend to make the problem rank-deficient. Because the kernel matrix is dependent on the STFs to be solved for under a positivity constraint, we can only estimate the rank-deficiency with a semi-empirical approach. Based on the results so far, we find that the
Econometric estimation of the “Constant Elasticity of Substitution" function in R
DEFF Research Database (Denmark)
Henningsen, Arne; Henningsen, Geraldine
for estimating the traditional CES function with two inputs as well as nested CES functions with three and four inputs. Furthermore, we demonstrate how these approaches can be applied in R using the add-on package micEconCES and we describe how the various estimation approaches are implemented in the micEconCES package. Finally, we illustrate the usage of this package by replicating some estimations of CES functions that are reported in the literature....
Asiri, Sharefa M.
2017-10-19
In this paper, a method based on modulating functions is proposed to estimate the Cerebral Blood Flow (CBF). The problem is written as an input estimation problem for a damped wave equation which is used to model the spatiotemporal variations of blood mass density. The method is described and its performance is assessed through some numerical simulations. The robustness of the method in the presence of noise is also studied.
Estimate of K-functionals and modulus of smoothness constructed ...
Indian Academy of Sciences (India)
2016-08-26
functional and a modulus of smoothness for the Dunkl transform on Rd. Author Affiliations. M El Hamma1 R Daher1. Department of Mathematics, Faculty of Sciences Aïn Chock, University of Hassan II, Casablanca, Morocco. Dates.
Argument estimates of certain multivalent functions involving a linear operator
Directory of Open Access Journals (Sweden)
Nak Eun Cho
2002-01-01
Full Text Available The purpose of this paper is to derive some argument properties of certain multivalent functions in the open unit disk involving a linear operator. We also investigate their integral preserving property in a sector.
Meng, Chao; Zhou, Hong; Cong, Dalong; Wang, Chuanwei; Zhang, Peng; Zhang, Zhihui; Ren, Luquan
2012-06-01
The thermal fatigue behavior of hot-work tool steel processed by a biomimetic coupled laser remelting process shows a remarkable improvement compared to an untreated sample. The 'dowel pin effect', the 'dam effect' and the 'fence effect' of non-smooth units are the main reasons for the conspicuous improvement of the thermal fatigue behavior. In order to further enhance the 'dowel pin effect', the 'dam effect' and the 'fence effect', this study investigated the effect of different unit morphologies (including 'prolate', 'U' and 'V' morphologies) and of the same unit morphology in different sizes on the thermal fatigue behavior of H13 hot-work tool steel. The results showed that the 'U' morphology unit had the optimum thermal fatigue behavior, followed by the 'V' morphology, which was better than the 'prolate' morphology unit; when the unit morphology was identical, the thermal fatigue behavior of the sample with large unit sizes was better than that with small sizes.
Zeng, Fanhai; Zhang, Zhongqiang; Karniadakis, George Em
2017-12-01
Starting with the asymptotic expansion of the error equation of the shifted Grünwald-Letnikov formula, we derive a new modified weighted shifted Grünwald-Letnikov (WSGL) formula by introducing appropriate correction terms. We then apply one special case of the modified WSGL formula to solve multi-term fractional ordinary and partial differential equations, and we prove the linear stability and second-order convergence for both smooth and non-smooth solutions. We show theoretically and numerically that numerical solutions up to certain accuracy can be obtained with only a few correction terms. Moreover, the correction terms can be tuned according to the fractional derivative orders without explicitly knowing the analytical solutions. Numerical simulations verify the theoretical results and demonstrate that the new formula leads to better performance compared to other known numerical approximations with similar resolution.
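The correction-term construction is the paper's contribution; the baseline it modifies, the (unshifted) Grünwald-Letnikov approximation, can be sketched with the standard weight recurrence. This is a hedged illustration only, checked against the known fractional derivative D^{1/2} t = t^{1/2}/Γ(3/2).

```python
import math

def gl_weights(alpha, n):
    """Grunwald-Letnikov binomial weights w_k = (-1)^k * C(alpha, k),
    computed via the standard recurrence w_k = (1 - (alpha+1)/k) * w_{k-1}."""
    w = [1.0]
    for k in range(1, n + 1):
        w.append((1.0 - (alpha + 1.0) / k) * w[-1])
    return w

def gl_derivative(f, alpha, t, h):
    """First-order GL approximation of the Riemann-Liouville fractional
    derivative D^alpha f at t (assumes f(0) = 0, so it agrees with the
    Caputo derivative)."""
    n = int(round(t / h))
    w = gl_weights(alpha, n)
    return sum(w[k] * f(t - k * h) for k in range(n + 1)) / h ** alpha
```

The WSGL formula of the abstract starts from a shifted variant of this sum and adds a few correction terms so that second-order accuracy survives for non-smooth solutions.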
Directory of Open Access Journals (Sweden)
Feng Qi
2014-10-01
Full Text Available The authors find the absolute monotonicity and complete monotonicity of some functions involving trigonometric functions and related to estimates of the lower bounds of the first eigenvalue of the Laplace operator on Riemannian manifolds.
Teeples, Ronald; Glyer, David
1987-05-01
Both policy and technical analysis of water delivery systems have been based on cost functions that are inconsistent with or are incomplete representations of the neoclassical production functions of economics. We present a full-featured production function model of water delivery which can be estimated from a multiproduct, dual cost function. The model features implicit prices for own-water inputs and is implemented as a jointly estimated system of input share equations and a translog cost function. Likelihood ratio tests are performed showing that a minimally constrained, full-featured production function is a necessary specification of the water delivery operations in our sample. This, plus the model's highly efficient and economically correct parameter estimates, confirms the usefulness of a production function approach to modeling the economic activities of water delivery systems.
HEDONIC PRICE FUNCTION ESTIMATION FOR MOBILE PHONE IN IRAN
Directory of Open Access Journals (Sweden)
Sayed Mahdi Mostafavi
2013-01-01
Full Text Available The aim of this paper is to survey mobile phone price determinants using a hedonic model. We have applied the hedonic price model to the mobile phone market in Iran in the year 2008. The brands include NOKIA, QTEK, HTC, MOTOROLA, SONY ERICSSON and SAMSUNG, comprising 193 types of mobile phone handsets. The results show that the largest parameters of the hedonic price function relate to the following variables, respectively: touch screen, hands-free and connectivity tools, while the smallest belong to clarity of monitor images, phone volume and phone memory. Moreover, except for the Motorola brand, the brand type does not have a significant parameter in the hedonic price function.
Verhave, JC; Gansevoort, RT; Hillege, HL; De Zeeuw, D; Curhan, GC; De Jong, PE
Many epidemiologic studies presently aim to evaluate the effect of risk factors on renal function. As direct measurement of renal function is cumbersome to perform, epidemiologic studies generally use an indirect estimate of renal function. The consequences of using different methods of renal
Estimate of K-functionals and modulus of smoothness constructed ...
Indian Academy of Sciences (India)
Casablanca, Morocco. E-mail: m_elhamma@yahoo.fr. MS received 17 January 2013. Abstract. Using a generalized spherical mean operator, we define a generalized modulus of smoothness in the space L^2_k(R^d). Based on the Dunkl operator we define a Sobolev-type space and K-functionals. The main result of the paper ...
Estimation of acoustic resonances for room transfer function equalization
DEFF Research Database (Denmark)
Gil-Cacho, Pepe; van Waterschoot, Toon; Moonen, Marc
2010-01-01
Strong acoustic resonances create long room impulse responses (RIRs) which may harm the speech transmission in an acoustic space and hence reduce speech intelligibility. Equalization is performed by cancelling the main acoustic resonances common to multiple room transfer functions (RTFs), i...
Pedotransfer functions to estimate soil water content at field capacity ...
Indian Academy of Sciences (India)
20
Soil water retention, Dry lands, Western India, Pedotransfer functions, Soil moisture calculator. ..... samples although it is known that structure and macro-porosity of the sample affect water retention (Unger ..... and OC content has positive influence on water retention whereas interaction of clay and OC has negative ...
Bayesian Nonparametric Mixture Estimation for Time-Indexed Functional Data in R
Directory of Open Access Journals (Sweden)
Terrance D. Savitsky
2016-08-01
Full Text Available We present growfunctions for R that offers Bayesian nonparametric estimation models for analysis of dependent, noisy time series data indexed by a collection of domains. This data structure arises from combining periodically published government survey statistics, such as are reported in the Current Population Study (CPS). The CPS publishes monthly, by-state estimates of employment levels, where each state expresses a noisy time series. Published state-level estimates from the CPS are composed from household survey responses in a model-free manner and express high levels of volatility due to insufficient sample sizes. Existing software solutions borrow information over a modeled time-based dependence to extract a de-noised time series for each domain. These solutions, however, ignore the dependence among the domains that may be additionally leveraged to improve estimation efficiency. The growfunctions package offers two fully nonparametric mixture models that simultaneously estimate both a time- and domain-indexed dependence structure for a collection of time series: (1) A Gaussian process (GP) construction, which is parameterized through the covariance matrix, estimates a latent function for each domain. The covariance parameters of the latent functions are indexed by domain under a Dirichlet process prior that permits estimation of the dependence among functions across the domains; (2) An intrinsic Gaussian Markov random field prior construction provides an alternative to the GP that expresses different computation and estimation properties. In addition to performing denoised estimation of latent functions from published domain estimates, growfunctions allows estimation of collections of functions for observation units (e.g., households), rather than aggregated domains, by accounting for an informative sampling design under which the probabilities for inclusion of observation units are related to the response variable. growfunctions includes plot
INCREASING OF PRECISE ESTIMATION OF OPTIMAL CRITERIA BOILER FUNCTIONING
Directory of Open Access Journals (Sweden)
Y. M. Skakovsk
2016-08-01
Full Text Available Results of laboratory and industrial research allowed us to offer a way to improve the accuracy of estimating the optimal criterion of boiler operation depending on fuel quality. The criterion is calculated continuously during boiler operation as the ratio of the heat transmitted to production with superheated steam to the thermal energy obtained by combusting fuel (natural gas) in the boiler's furnace. The non-linear dependence of steam enthalpy on its temperature and pressure is considered in the calculation, as well as changes in the calorific value of the natural gas depending on variations in its nitrogen content. A control algorithm and program for the Ukrainian PLC MIC-52 are offered. The program implements two search modes for the criterion maximum, selectable by the user: automated and automatic. The results are going to be used for upgrading the existing control system at a sugar factory.
$L^{p}$-square function estimates on spaces of homogeneous type and on uniformly rectifiable sets
Hofmann, Steve; Mitrea, Marius; Morris, Andrew J
2017-01-01
The authors establish square function estimates for integral operators on uniformly rectifiable sets by proving a local T(b) theorem and applying it to show that such estimates are stable under the so-called big pieces functor. More generally, they consider integral operators associated with Ahlfors-David regular sets of arbitrary codimension in ambient quasi-metric spaces. The local T(b) theorem is then used to establish an inductive scheme in which square function estimates on so-called big pieces of an Ahlfors-David regular set are proved to be sufficient for square function estimates to hold on the entire set. Extrapolation results for L^p and Hardy space versions of these estimates are also established. Moreover, the authors prove square function estimates for integral operators associated with variable coefficient kernels, including the Schwartz kernels of pseudodifferential operators acting between vector bundles on subdomains with uniformly rectifiable boundaries on manifolds.
ON THE ESTIMATION OF DISTANCE DISTRIBUTION FUNCTIONS FOR POINT PROCESSES AND RANDOM SETS
Directory of Open Access Journals (Sweden)
Dietrich Stoyan
2011-05-01
Full Text Available This paper discusses various estimators for the nearest neighbour distance distribution function D of a stationary point process and for the quadratic contact distribution function Hq of a stationary random closed set. It recommends the use of Hanisch's estimator of D, which is of Horvitz-Thompson type, and the minus-sampling estimator of Hq. This recommendation is based on simulations for Poisson processes and Boolean models.
Smoothed Conditional Scale Function Estimation in AR(1)-ARCH(1) Processes
Directory of Open Access Journals (Sweden)
Lema Logamou Seknewna
2018-01-01
Full Text Available The estimation of the smoothed conditional scale function for time series was carried out under conditional heteroscedastic innovations by imitating kernel smoothing in a nonparametric QAR-QARCH scheme. The estimation was based on the quantile regression methodology proposed by Koenker and Bassett. The asymptotic properties of the conditional scale function estimator for this type of process were proved and its consistency was shown.
Directory of Open Access Journals (Sweden)
Azam Zaka
2014-10-01
Full Text Available This paper is concerned with modifications of the maximum likelihood, moments and percentile estimators of the two-parameter power function distribution. The sampling behavior of the estimators is indicated by Monte Carlo simulation. For some combinations of parameter values, some of the modified estimators appear better than the traditional maximum likelihood, moments and percentile estimators with respect to bias, mean square error and total deviation.
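As a hedged illustration of the kind of comparison the abstract describes, the sketch below contrasts the maximum likelihood and a percentile estimator for the shape parameter of a power function distribution with scale fixed at 1 (F(x) = x^γ on (0, 1)); the paper's modified two-parameter estimators are not reproduced, and the sample sizes are arbitrary.

```python
import math, random

def mle_gamma(sample):
    """MLE of the shape of the power function distribution
    F(x) = x^gamma on (0, 1):  gamma_hat = -n / sum(log x_i)."""
    return -len(sample) / sum(math.log(x) for x in sample)

def percentile_gamma(sample, p=0.5):
    """Percentile estimator: solve F(x_(p)) = p for gamma using the
    sample p-quantile."""
    xs = sorted(sample)
    xq = xs[int(p * len(xs)) - 1]
    return math.log(p) / math.log(xq)

def simulate_bias(gamma, n=200, reps=300, seed=0):
    """Monte Carlo bias of both estimators; sampling by the inverse
    CDF, X = U^{1/gamma}."""
    rng = random.Random(seed)
    b_mle = b_pct = 0.0
    for _ in range(reps):
        s = [rng.random() ** (1.0 / gamma) for _ in range(n)]
        b_mle += mle_gamma(s) - gamma
        b_pct += percentile_gamma(s) - gamma
    return b_mle / reps, b_pct / reps
```

Runs of `simulate_bias` show both estimators nearly unbiased at moderate sample sizes, with the percentile estimator noisier, which is the kind of bias/MSE trade-off the abstract's Monte Carlo study quantifies.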
Cochlear function tests in estimation of speech dynamic range.
Han, Jung Ju; Park, So Young; Park, Shi Nae; Na, Mi Sun; Lee, Philip; Han, Jae Sang
2016-10-01
The loss of active cochlear mechanics causes elevated thresholds, loudness recruitment, and reduced frequency selectivity. The problems faced by hearing-impaired listeners are largely related to a reduced dynamic range (DR). The aim of this study was to determine which index of the cochlear function tests correlates best with the DR for speech stimuli. Audiological data on 516 ears with a pure tone average (PTA) of ≤55 dB and a word recognition score of ≥70% were analyzed. PTA, speech recognition threshold (SRT), uncomfortable loudness (UCL), and distortion product otoacoustic emission (DPOAE) were explored as indices of cochlear function. Audiometric configurations were classified. The correlation between each index and the DR was assessed and multiple regression analysis was done. PTA and SRT demonstrated strong negative correlations with the DR (r = -0.788 and -0.860, respectively), while the DPOAE sum was moderately correlated (r = 0.587). UCLs remained quite constant over the total range of the DR. The regression equation was Y (DR) = 75.238 - 0.719 × SRT (R^2 = 0.721, p ...).
Estimation Methods for Infinite-Dimensional Systems Applied to the Hemodynamic Response in the Brain
Belkhatir, Zehor
2018-05-01
Recent advances in mathematical and computational tools have made it possible to model complex real phenomena with Infinite-Dimensional Systems (IDSs). However, due to physical, economic, or stringent non-invasive constraints on real systems, the underlying characteristics for mathematical models in general (and IDSs in particular) are often missing or subject to uncertainty. Therefore, developing efficient estimation techniques to extract missing pieces of information from available measurements is essential. The human brain is an example of an IDS with severe constraints on information collection from controlled experiments and invasive sensors. Investigating the intriguing modeling potential of the brain is, in fact, the main motivation for this work. Here, we will characterize the hemodynamic behavior of the brain using functional magnetic resonance imaging data. In this regard, we propose efficient estimation methods for two classes of IDSs, namely Partial Differential Equations (PDEs) and Fractional Differential Equations (FDEs). This work is divided into two parts. The first part addresses the joint estimation problem of the state, parameters, and input for a coupled second-order hyperbolic PDE and an infinite-dimensional ordinary differential equation using sampled-in-space measurements. Two estimation techniques are proposed: a Kalman-based algorithm that relies on a reduced finite-dimensional model of the IDS, and an infinite-dimensional adaptive estimator whose convergence proof is based on the Lyapunov approach. We study and discuss the identifiability of the unknown variables for both cases. The second part contributes to the development of estimation methods for FDEs where major challenges arise in estimating fractional differentiation orders and non-smooth pointwise inputs. First, we propose a fractional high-order sliding mode observer to jointly estimate the pseudo-state and input of commensurate FDEs. Second, we propose a
Directory of Open Access Journals (Sweden)
SANKU DEY
2010-11-01
Full Text Available The generalized exponential (GE) distribution proposed by Gupta and Kundu (1999) is an important lifetime distribution in survival analysis. In this article, we propose to obtain Bayes estimators and their associated risks based on a class of non-informative priors under the assumption of three loss functions, namely, the quadratic loss function (QLF), the squared log-error loss function (SLELF) and the general entropy loss function (GELF). The motivation is to explore the most appropriate loss function among these three. The performances of the estimators are, therefore, compared on the basis of their risks obtained under QLF, SLELF and GELF separately. The relative efficiency of the estimators is also obtained. Finally, Monte Carlo simulations are performed to compare the performances of the Bayes estimates under different situations.
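The three loss functions named here lead to standard closed-form rules: squared error (QLF) gives the posterior mean, squared log-error (SLELF) gives exp(E[ln θ]), and general entropy (GELF) with shape c gives (E[θ^(-c)])^(-1/c). The sketch below approximates all three from posterior samples; the Gamma(5, 0.4) posterior is a hypothetical stand-in for illustration, not the GE-model posterior derived in the paper.

```python
import math
import random

random.seed(1)

def bayes_estimates(post_samples, c=1.0):
    """Bayes estimators of a positive parameter under three loss functions,
    approximated from posterior samples:
      QLF   (squared error)      -> posterior mean
      SLELF (squared log-error)  -> exp(E[log theta])
      GELF  (general entropy, c) -> (E[theta**(-c)])**(-1/c)
    """
    n = len(post_samples)
    qlf = sum(post_samples) / n
    slelf = math.exp(sum(math.log(t) for t in post_samples) / n)
    gelf = (sum(t ** (-c) for t in post_samples) / n) ** (-1.0 / c)
    return qlf, slelf, gelf

# Hypothetical Gamma(shape=5, scale=0.4) posterior (mean 2.0) for the GE parameter
samples = [random.gammavariate(5.0, 0.4) for _ in range(20000)]
qlf, slelf, gelf = bayes_estimates(samples)
print(qlf, slelf, gelf)
```

By Jensen's inequality the three estimates are ordered (harmonic ≤ geometric ≤ arithmetic mean for c = 1), which is why the choice of loss function materially changes the reported estimate and risk.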
Selection of the wavelet function for the frequencies estimation
International Nuclear Information System (INIS)
Garcia R, A.
2007-01-01
At present, signals are used to diagnose the state of systems by extracting their most important characteristics, such as frequencies, tendencies, changes and temporal evolutions. These characteristics are detected by diverse analysis techniques such as autoregressive methods, the Fourier transform, the short-time Fourier transform and the wavelet transform, among others. The present work uses the wavelet transform because it allows the analysis of stationary, quasi-stationary and transitory signals in the time-frequency plane. It also describes a methodology for selecting the scales and the wavelet function to which the wavelet transform is applied, with the objective of detecting the dominant system frequencies. (Author)
Estimation of a monotone percentile residual life function under random censorship.
Franco-Pereira, Alba M; de Uña-Álvarez, Jacobo
2013-01-01
In this paper, we introduce a new estimator of a percentile residual life function with censored data under a monotonicity constraint. Specifically, it is assumed that the percentile residual life is a decreasing function. This assumption is useful when estimating the percentile residual life of units that degenerate with age. We establish a law of the iterated logarithm for the proposed estimator, and its √n-equivalence to the unrestricted estimator. The asymptotic normal distribution of the estimator and its strong approximation to a Gaussian process are also established. We investigate the finite sample performance of the monotone estimator in an extensive simulation study. Finally, data from a clinical trial in primary biliary cirrhosis of the liver are analyzed with the proposed methods. One of the conclusions of our work is that the restricted estimator may be much more efficient than the unrestricted one. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
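A generic way to impose the decreasing-function constraint mentioned above is to project unrestricted pointwise estimates onto the set of nonincreasing sequences with the pool-adjacent-violators algorithm (PAVA). This is only a sketch of that projection step with made-up input values; the paper's restricted estimator is defined directly for censored data and differs in detail.

```python
def pava_decreasing(y, w=None):
    """Least-squares projection of y onto nonincreasing sequences,
    via pool-adjacent-violators applied to the reversed sequence."""
    y_rev = list(reversed(y))
    w = [1.0] * len(y) if w is None else list(reversed(w))
    vals, wts, counts = [], [], []
    for yi, wi in zip(y_rev, w):
        vals.append(yi); wts.append(wi); counts.append(1)
        # Merge adjacent blocks while the (nondecreasing) constraint is violated
        while len(vals) > 1 and vals[-2] > vals[-1]:
            v2, w2, c2 = vals.pop(), wts.pop(), counts.pop()
            v1, w1, c1 = vals.pop(), wts.pop(), counts.pop()
            wt = w1 + w2
            vals.append((w1 * v1 + w2 * v2) / wt)
            wts.append(wt); counts.append(c1 + c2)
    fit = []
    for v, c in zip(vals, counts):
        fit.extend([v] * c)
    return list(reversed(fit))

# Hypothetical unrestricted median-residual-life estimates at increasing ages
unrestricted = [10.2, 9.8, 10.1, 7.5, 7.9, 6.0, 6.1, 4.3]
print(pava_decreasing(unrestricted))
```

The projection pools adjacent violating values into block averages, so the fitted sequence is nonincreasing while preserving the overall mean of the input.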
mBEEF-vdW: Robust fitting of error estimation density functionals
DEFF Research Database (Denmark)
Lundgård, Keld Troen; Wellendorff, Jess; Voss, Johannes
2016-01-01
. The functional is fitted within the Bayesian error estimation functional (BEEF) framework [J. Wellendorff et al., Phys. Rev. B 85, 235149 (2012); J. Wellendorff et al., J. Chem. Phys. 140, 144107 (2014)]. We improve the previously used fitting procedures by introducing a robust MM-estimator based loss function...... catalysis, including datasets that were not used for its training. Overall, we find that mBEEF-vdW has a higher general accuracy than competing popular functionals, and it is one of the best performing functionals on chemisorption systems, surface energies, lattice constants, and dispersion. We also show...
Directory of Open Access Journals (Sweden)
Farhad Yahgmaei
2013-01-01
Full Text Available This paper proposes different methods of estimating the scale parameter of the inverse Weibull distribution (IWD). Specifically, the maximum likelihood estimator of the scale parameter of the IWD is introduced. We then derive the Bayes estimators of the scale parameter by considering quasi, gamma, and uniform prior distributions under the squared error, entropy, and precautionary loss functions. Finally, the different proposed estimators are compared through extensive simulation studies in terms of their mean square errors and the evolution of their risk functions.
Clinical use of estimated glomerular filtration rate for evaluation of kidney function
DEFF Research Database (Denmark)
Broberg, Bo; Lindhardt, Morten; Rossing, Peter
2013-01-01
is a significant predictor for cardiovascular disease and may along with classical cardiovascular risk factors add useful information to risk estimation. Several cautions need to be taken into account, e.g. rapid changes in kidney function, dialysis, high age, obesity, underweight and diverging and unanticipated......Estimating glomerular filtration rate by the Modification of Diet in Renal Disease or Chronic Kidney Disease Epidemiology Collaboration formulas gives a reasonable estimate of kidney function for e.g. classification of chronic kidney disease. Additionally the estimated glomerular filtration rate...
Chen, Xiaohong; Fan, Yanqin; Pouzo, Demian; Ying, Zhiliang
2010-07-01
We study estimation and model selection of semiparametric models of multivariate survival functions for censored data, which are characterized by possibly misspecified parametric copulas and nonparametric marginal survivals. We obtain the consistency and root-n asymptotic normality of a two-step copula estimator to the pseudo-true copula parameter value according to KLIC, and provide a simple consistent estimator of its asymptotic variance, allowing for a first-step nonparametric estimation of the marginal survivals. We establish the asymptotic distribution of the penalized pseudo-likelihood ratio statistic for comparing multiple semiparametric multivariate survival functions subject to copula misspecification and general censorship. An empirical application is provided.
Modulating functions method for parameters estimation in the fifth order KdV equation
Asiri, Sharefa M.; Liu, Da-Yan; Laleg-Kirati, Taous-Meriem
2017-01-01
In this work, the modulating functions method is proposed for estimating coefficients in a higher-order nonlinear partial differential equation, the fifth-order Korteweg-de Vries (KdV) equation. The proposed method transforms the problem into a
Voynelenko Natalya Vaselyevna
2012-01-01
The article discusses the work of the head of a special (correctional) educational institution in organizing the assessment of the quality of its educational system. A model of the joint activity of the participants in the educational process for assessing educational objects, as a component of the quality management system of the educational institution, is presented. The functions of educational system assessment in the work of the head of the educational institution are formulated.
The Effect of Error in Item Parameter Estimates on the Test Response Function Method of Linking.
Kaskowitz, Gary S.; De Ayala, R. J.
2001-01-01
Studied the effect of item parameter estimation for computation of linking coefficients for the test response function (TRF) linking/equating method. Simulation results showed that linking was more accurate when there was less error in the parameter estimates, and that 15 or 25 common items provided better results than 5 common items under both…
An estimating function approach to inference for inhomogeneous Neyman-Scott processes
DEFF Research Database (Denmark)
Waagepetersen, Rasmus
2007-01-01
This article is concerned with inference for a certain class of inhomogeneous Neyman-Scott point processes depending on spatial covariates. Regression parameter estimates obtained from a simple estimating function are shown to be asymptotically normal when the "mother" intensity for the Neyman-Sc...
The risk function approach to profit maximizing estimation in direct mailing
Muus, Lars; Scheer, Hiek van der; Wansbeek, Tom
1999-01-01
When the parameters of the model describing consumers' reaction to a mailing are known, addresses for a future mailing can be selected in a profit-maximizing way. Usually, these parameters are unknown and have to be estimated. Standard estimation methods are based on a quadratic loss function. In the present
Linear estimates of structure functions from deep inelastic lepton-nucleon scattering data. Part 1
International Nuclear Information System (INIS)
Anikeev, V.B.; Zhigunov, V.P.
1991-01-01
This paper concerns the linear estimation of structure functions from muon(electron)-nucleon scattering. The expressions obtained for the structure function estimates provide a correct analysis of the random error and the bias. The bias arises because of the finite number of experimental data and the finite resolution of the experiment. The approach suggested may become useful for data handling from experiments at HERA. 9 refs
Quasi-Newton methods for parameter estimation in functional differential equations
Brewer, Dennis W.
1988-01-01
A state-space approach to parameter estimation in linear functional differential equations is developed using the theory of linear evolution equations. A locally convergent quasi-Newton type algorithm is applied to distributed systems with particular emphasis on parameters that induce unbounded perturbations of the state. The algorithm is computationally implemented on several functional differential equations, including coefficient and delay estimation in linear delay-differential equations.
Headphone-To-Ear Transfer Function Estimation Using Measured Acoustic Parameters
Directory of Open Access Journals (Sweden)
Jinlin Liu
2018-06-01
Full Text Available This paper proposes to use an optimal five-microphone array method to measure the headphone acoustic reflectance and the equivalent sound sources needed in the estimation of headphone-to-ear transfer functions (HpTFs). The performance of this method is theoretically analyzed and experimentally investigated. With the measured acoustic parameters, HpTFs for different headphones and ear canal area functions are estimated based on a computational acoustic model. The estimation results show that HpTFs vary considerably with headphones and ear canals, which suggests that individualized compensations for HpTFs are necessary for headphones to reproduce desired sounds for different listeners.
An estimator of the survival function based on the semi-Markov model under dependent censorship.
Lee, Seung-Yeoun; Tsai, Wei-Yann
2005-06-01
Lee and Wolfe (Biometrics vol. 54, pp. 1176-1178, 1998) proposed the two-stage sampling design for testing the assumption of independent censoring, which involves further follow-up of a subset of lost-to-follow-up censored subjects. They also proposed an adjusted estimator of the survivor function for a proportional hazards model under the dependent censoring model. In this paper, a new estimator of the survivor function is proposed for the semi-Markov model under dependent censorship on the basis of the two-stage sampling data. The consistency and the asymptotic distribution of the proposed estimator are derived. The estimation procedure is illustrated with an example from a lung cancer clinical trial, and simulation results are reported on the mean squared errors of the estimators under a proportional hazards model and two different nonproportional hazards models.
Pishravian, Arash; Aghabozorgi Sahaf, Masoud Reza
2012-12-01
In this paper, speech-music separation using blind source separation is discussed. The separation algorithm is based on mutual information minimization, where the natural gradient algorithm is used for the minimization. In order to do that, score function estimation from samples of the observation signals (combinations of speech and music) is needed. The accuracy and the speed of this estimation affect the quality of the separated signals and the processing time of the algorithm. The score function estimation in the presented algorithm is based on a Gaussian mixture based kernel density estimation method. The experimental results of the presented algorithm on speech-music separation, compared with a separation algorithm based on the minimum mean square error estimator, indicate that it achieves better performance and less processing time.
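The score function ψ(x) = -f'(x)/f(x) that drives the mutual-information-minimization update can be estimated directly from a kernel density estimate, since both f and f' are sums of kernels. The sketch below uses a single Gaussian kernel rather than the paper's Gaussian-mixture-based estimator, and the bandwidth rule is an illustrative assumption.

```python
import math
import random

random.seed(7)

def kde_score(x, data, h):
    """Estimate the score function psi(x) = -f'(x)/f(x) from samples
    using a Gaussian kernel density estimate with bandwidth h."""
    f = fp = 0.0
    for xi in data:
        u = (x - xi) / h
        k = math.exp(-0.5 * u * u)   # unnormalised Gaussian kernel
        f += k
        fp += -(u / h) * k           # derivative of the kernel w.r.t. x
    return -fp / f                   # normalising constants cancel in the ratio

# For N(0,1) data the true score is psi(x) = x
data = [random.gauss(0.0, 1.0) for _ in range(5000)]
h = 1.06 * 5000 ** (-0.2)            # Silverman's rule with sigma = 1
print(kde_score(1.0, data, h))       # close to 1.0
```

In the separation setting the same estimator would be evaluated on the current separator outputs at each iteration, which is why both the accuracy and the cost of the density estimate matter.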
On the a priori estimation of collocation error covariance functions: a feasibility study
DEFF Research Database (Denmark)
Arabelos, D.N.; Forsberg, René; Tscherning, C.C.
2007-01-01
and the associated error covariance functions were conducted in the Arctic region north of 64 degrees latitude. The correlation between the known features of the data and the parameters variance and correlation length of the computed error covariance functions was estimated using multiple regression analysis...
International Nuclear Information System (INIS)
Telyakovskii, S A
2002-01-01
The functions under consideration are those satisfying the condition Δa_i = Δb_i = 0 for all i ≠ n_j, where {n_j} is a lacunary sequence. An asymptotic estimate of the rate of decrease of the modulus of continuity in the L-metric of such functions in terms of their Fourier coefficients is obtained.
Nonparametric estimation of the stationary M/G/1 workload distribution function
DEFF Research Database (Denmark)
Hansen, Martin Bøgsted
2005-01-01
In this paper it is demonstrated how a nonparametric estimator of the stationary workload distribution function of the M/G/1-queue can be obtained by systematic sampling the workload process. Weak convergence results and bootstrap methods for empirical distribution functions for stationary associ...
Using step and path selection functions for estimating resistance to movement: Pumas as a case study
Katherine A. Zeller; Kevin McGarigal; Samuel A. Cushman; Paul Beier; T. Winston Vickers; Walter M. Boyce
2015-01-01
GPS telemetry collars and their ability to acquire accurate and consistently frequent locations have increased the use of step selection functions (SSFs) and path selection functions (PathSFs) for studying animal movement and estimating resistance. However, previously published SSFs and PathSFs often do not accommodate multiple scales or multiscale modeling....
Survival Bayesian Estimation of Exponential-Gamma Under Linex Loss Function
Rizki, S. W.; Mara, M. N.; Sulistianingsih, E.
2017-06-01
This paper elaborates a study of cancer patients after receiving a treatment, with censored data, using Bayesian estimation under the Linex loss function for a survival model which is assumed to be exponentially distributed. Taking a gamma distribution as the prior, the likelihood function produces a gamma posterior distribution. The posterior distribution is used to find the estimator λ̂_BL by using the Linex approximation. After obtaining λ̂_BL, the estimators of the hazard function ĥ_BL and the survival function Ŝ_BL can be found. Finally, we compare the results of maximum likelihood estimation (MLE) and the Linex approach to find the better method for this observation, namely the one with the smaller MSE. The MSEs of the hazard and survival functions under MLE are 2.91728E-07 and 0.000309004, while under the Bayesian Linex approach they are 2.8727E-07 and 0.000304131, respectively. It is concluded that the Bayesian Linex estimator is better than the MLE.
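For the exponential-gamma model above, the Linex Bayes estimator has a closed form: with Linex loss L(d) = exp(a·d) - a·d - 1 and a Gamma(α, β) posterior (rate parameterization), λ̂_BL = -(1/a)·ln E[exp(-aλ)] = (α/a)·ln(1 + a/β). The sketch below checks this against a Monte Carlo evaluation of the posterior expectation; the numerical values of α, β and a are arbitrary illustrations, not the paper's data.

```python
import math
import random

random.seed(3)

def linex_bayes_exponential(shape, rate, a):
    """Closed-form Bayes estimator of the exponential rate lambda under
    Linex loss, when the posterior is Gamma(shape, rate):
        est = -(1/a) * ln E[exp(-a*lambda)] = (shape/a) * ln(1 + a/rate)
    """
    return (shape / a) * math.log(1.0 + a / rate)

shape, rate, a = 12.0, 30.0, 0.5     # hypothetical posterior and Linex parameter
closed = linex_bayes_exponential(shape, rate, a)

# Monte Carlo check of the posterior expectation
# (random.gammavariate takes a SCALE argument, so pass 1/rate)
draws = [random.gammavariate(shape, 1.0 / rate) for _ in range(200000)]
mc = -(1.0 / a) * math.log(sum(math.exp(-a * lam) for lam in draws) / len(draws))
print(closed, mc)
```

With a > 0 the Linex rule penalizes overestimation more heavily, so the estimator lies below the posterior mean α/β, which is the squared-error Bayes estimate.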
LARF: Instrumental Variable Estimation of Causal Effects through Local Average Response Functions
Directory of Open Access Journals (Sweden)
Weihua An
2016-07-01
Full Text Available LARF is an R package that provides instrumental variable estimation of treatment effects when both the endogenous treatment and its instrument (i.e., the treatment inducement) are binary. The method (Abadie 2003) involves two steps. First, pseudo-weights are constructed from the probability of receiving the treatment inducement. By default LARF estimates the probability by a probit regression. It also provides semiparametric power series estimation of the probability and allows users to employ other external methods to estimate the probability. Second, the pseudo-weights are used to estimate the local average response function conditional on treatment and covariates. LARF provides both least squares and maximum likelihood estimates of the conditional treatment effects.
A Scale Elasticity Measure for Directional Distance Function and its Dual: Theory and DEA Estimation
Valentin Zelenyuk
2012-01-01
In this paper we focus on scale elasticity measure based on directional distance function for multi-output-multi-input technologies, explore its fundamental properties and show its equivalence with the input oriented and output oriented scale elasticity measures. We also establish duality relationship between the scale elasticity measure based on the directional distance function with scale elasticity measure based on the profit function. Finally, we discuss the estimation issues of the scale...
Directory of Open Access Journals (Sweden)
Anupam Pathak
2014-11-01
Full Text Available Abstract: Problem Statement: The two-parameter exponentiated Rayleigh distribution has been widely used, especially in the modelling of lifetime event data. It provides a statistical model which has a wide variety of applications in many areas, and its main advantage is its flexibility in modelling lifetime events compared with other distributions. The uniformly minimum variance unbiased and maximum likelihood estimation methods are ways to estimate the parameters of the distribution. In this study we explore and compare the performance of the uniformly minimum variance unbiased estimators (UMVUEs) and maximum likelihood estimators (MLEs) of the reliability function R(t) = P(X > t) and of P = P(X > Y) for the two-parameter exponentiated Rayleigh distribution. Approach: A new technique for obtaining these parametric functions is introduced, in which a major role is played by the powers of the parameter(s), and the functional forms of the parametric functions to be estimated are not needed. We explore the performance of these estimators numerically under varying conditions. Through the simulation study a comparison is made of the performance of these estimators with respect to bias, mean square error (MSE), 95% confidence length and corresponding coverage percentage. Conclusion: Based on the results of the simulation study, the UMVUEs of R(t) and P for the two-parameter exponentiated Rayleigh distribution were found to be superior to the MLEs of R(t) and P.
A smooth generalized Newton method for a class of non-smooth equations
International Nuclear Information System (INIS)
Uko, L. U.
1995-10-01
This paper presents a Newton-type iterative scheme for finding the zero of the sum of a differentiable function and a multivalued maximal monotone function. Local and semi-local convergence results are proved for the Newton scheme, and an analogue of the Kantorovich theorem is proved for the associated modified scheme that uses only one Jacobian evaluation for the entire iteration. Applications in variational inequalities are discussed, and an illustrative numerical example is given. (author). 24 refs
Estimating the Partition Function Zeros by Using the Wang-Landau Monte Carlo Algorithm
Energy Technology Data Exchange (ETDEWEB)
Kim, Seung-Yeon [Korea National University of Transportation, Chungju (Korea, Republic of)
2017-03-15
The concept of partition function zeros is one of the most efficient methods for investigating phase transitions and critical phenomena in various physical systems. Estimating the partition function zeros requires information on the density of states Ω(E) as a function of the energy E. Currently, the Wang-Landau Monte Carlo algorithm is one of the best methods for calculating Ω(E). The partition function zeros in the complex temperature plane of the Ising model on an L × L square lattice (L = 10 ∼ 80) with a periodic boundary condition have been estimated by using the Wang-Landau Monte Carlo algorithm. The efficiency of the Wang-Landau Monte Carlo algorithm and the accuracies of the partition function zeros have been evaluated for three different flatness criteria for the histogram H(E): 5%, 10%, and 20%.
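The Wang-Landau step of this procedure, estimating ln Ω(E) by a random walk that is penalized for revisiting energies, can be sketched on a lattice small enough that Ω(E) is known exactly. The 2 × 2 periodic Ising lattice below has only three energy levels, with Ω(-8) = 2, Ω(0) = 12 and Ω(8) = 2; the flatness criterion and final modification factor are illustrative choices, and the zeros themselves (which require the full Ω(E) of a larger lattice) are not computed here.

```python
import math
import random

random.seed(0)

# Wang-Landau estimate of the density of states Omega(E) for a 2x2 periodic
# Ising lattice.  Exact values: Omega(-8)=2, Omega(0)=12, Omega(8)=2.
L = 2

def energy(s):
    e = 0
    for i in range(L):
        for j in range(L):
            e -= s[i][j] * (s[i][(j + 1) % L] + s[(i + 1) % L][j])
    return e

spins = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
levels = (-8, 0, 8)
lng = {e: 0.0 for e in levels}      # running estimate of ln Omega(E)
hist = {e: 0 for e in levels}
lnf = 1.0                            # modification factor ln f
e_cur = energy(spins)

while lnf > 1e-5:
    for _ in range(1000):
        i, j = random.randrange(L), random.randrange(L)
        spins[i][j] *= -1
        e_new = energy(spins)
        # Accept with probability min(1, Omega(E_cur)/Omega(E_new))
        d = lng[e_cur] - lng[e_new]
        if d >= 0 or random.random() < math.exp(d):
            e_cur = e_new
        else:
            spins[i][j] *= -1        # reject: undo the flip
        lng[e_cur] += lnf
        hist[e_cur] += 1
    counts = list(hist.values())
    if min(counts) > 0.8 * sum(counts) / len(counts):   # 80% flatness criterion
        lnf /= 2.0                   # refine the modification factor
        hist = {e: 0 for e in levels}

ratio = math.exp(lng[0] - lng[-8])   # exact value is 12/2 = 6
print(ratio)
```

Once Ω(E) is known, Z(β) = Σ_E Ω(E) exp(-βE) is a polynomial in exp(-β), whose complex roots are the partition function zeros studied in the paper.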
Nonparametric adaptive estimation of linear functionals for low frequency observed Lévy processes
Kappus, Johanna
2012-01-01
For a Lévy process X having finite variation on compact sets and finite first moments, µ(dx) = x ν(dx) is a finite signed measure which completely describes the jump dynamics. We construct kernel estimators for linear functionals of µ and provide rates of convergence under regularity assumptions. Moreover, we consider adaptive estimation via model selection and propose a new strategy for the data-driven choice of the smoothing parameter.
Optimal replacement time estimation for machines and equipment based on cost function
J. Šebo; J. Buša; P. Demeč; J. Svetlík
2013-01-01
The article deals with the multidisciplinary issue of estimating the optimal replacement time for machines. The categories of machines considered, for which the optimization method is usable, are from metallurgical and engineering production. Different models of the cost function are considered (both with one and two variables). The parameters of the models were calculated through the least squares method. Model testing shows that all fit well enough, so for estimation of optimal replacement time is ...
Directory of Open Access Journals (Sweden)
Il Young Song
2015-01-01
Full Text Available This paper focuses on estimation of a nonlinear function of state vector (NFS) in discrete-time linear systems with time-delays and model uncertainties. The NFS represents a multivariate nonlinear function of state variables, which can indicate useful information of a target system for control. The optimal nonlinear estimator of an NFS (in the mean square sense) represents a function of the receding horizon estimate and its error covariance. The proposed receding horizon filter represents the standard Kalman filter with time-delays and special initial horizon conditions described by Lyapunov-like equations. In the general case, to calculate an optimal estimator of an NFS we propose using the unscented transformation. The important class of polynomial NFS is considered in detail. In the case of a polynomial NFS an optimal estimator has a closed-form computational procedure. The subsequent application of the proposed receding horizon filter and nonlinear estimator to a linear stochastic system with time-delays and uncertainties demonstrates their effectiveness.
A method of moments to estimate bivariate survival functions: the copula approach
Directory of Open Access Journals (Sweden)
Silvia Angela Osmetti
2013-05-01
Full Text Available In this paper we discuss the problem of parametric and non-parametric estimation of the distributions generated by the Marshall-Olkin copula. This copula comes from the Marshall-Olkin bivariate exponential distribution used in reliability analysis. We generalize this model through the copula and different marginal distributions to construct several bivariate survival functions. The cumulative distribution functions are not absolutely continuous, and their unknown parameters often cannot be obtained in explicit form. In order to estimate the parameters we propose an easy procedure based on the moments. This method consists of two steps: in the first step we estimate only the parameters of the marginal distributions, and in the second step we estimate only the copula parameter. This procedure can be used to estimate the parameters of complex survival functions in which it is difficult to find an explicit expression for the mixed moments. Moreover, it is preferred to the maximum likelihood method for its simpler mathematical form, in particular for distributions whose maximum likelihood parameter estimators cannot be obtained in explicit form.
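A two-step moment procedure of the kind proposed here can be illustrated on the Marshall-Olkin bivariate exponential itself, where the marginals are exponential and the dependence parameter λ3 is identified by the probability of a tie, P(X = Y) = λ3/(λ1 + λ2 + λ3). This is a sketch with arbitrary parameter values, not the estimator for the general marginals treated in the paper.

```python
import random

random.seed(11)

# Marshall-Olkin bivariate exponential: X = min(Z1, Z3), Y = min(Z2, Z3)
# with independent Z_k ~ Exp(lam_k).  Then X ~ Exp(lam1 + lam3),
# Y ~ Exp(lam2 + lam3) and P(X = Y) = lam3 / (lam1 + lam2 + lam3).
lam1, lam2, lam3 = 1.0, 2.0, 0.8

def draw():
    z1 = random.expovariate(lam1)
    z2 = random.expovariate(lam2)
    z3 = random.expovariate(lam3)
    return min(z1, z3), min(z2, z3)

n = 50000
pairs = [draw() for _ in range(n)]

# Step 1: marginal rates from the sample means
a_hat = n / sum(x for x, _ in pairs)   # estimates lam1 + lam3
b_hat = n / sum(y for _, y in pairs)   # estimates lam2 + lam3

# Step 2: copula parameter from the tie probability
# p = lam3/(a + b - lam3)  =>  lam3 = p*(a + b)/(1 + p)
p_hat = sum(1 for x, y in pairs if x == y) / n
lam3_hat = p_hat * (a_hat + b_hat) / (1.0 + p_hat)

print(a_hat, b_hat, lam3_hat)
```

The tie probability is exactly the singular component of the Marshall-Olkin distribution (it is not absolutely continuous, as the abstract notes), which is what makes this moment-style identification possible without any mixed-moment expressions.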
International Nuclear Information System (INIS)
Lythgoe, M.F.; Gordon, I.; Khader, Z.; Smith, T.; Anderson, P.J.
1999-01-01
Differential renal function (DRF) is an important parameter that should be assessed from virtually every dynamic renogram. With the introduction of technetium-99m mercaptoacetyltriglycine (99mTc-MAG3), a tracer with a high renal extraction, the estimation of DRF might hopefully become accurate and reproducible, both between observers in the same institution and also between institutions. The aim of this study was to assess the effect of different parameters on the estimation of DRF. To this end we investigated two groups of children: group A, comprising 35 children with a single kidney (27 of whom had poor renal function), and group B, comprising 20 children with two kidneys and normal global function who also had an associated technetium-99m dimercaptosuccinic acid (99mTc-DMSA) scan. The variables assessed for their effect on the estimation of DRF were: different operators, the choice of renal regions of interest (ROIs), the applied background subtraction, and six different techniques for analysis of the renogram. The six techniques were based on: linear regression of the slopes in the Rutland-Patlak plot, matrix deconvolution, the differential method, the integral method, linear regression of the slope of the renograms, and the area under the curve of the renogram. The estimation of DRF was less dependent upon both observer and method in patients with two normally functioning kidneys than in patients with a single kidney. The inter-observer comparison among children in either group was not dependent on either ROI or background subtraction. However, in patients with poor renal function the method of choice for the estimation of DRF was dependent on background subtraction, though not ROI. In children with two kidneys and normal renal function, the estimation of DRF from the 24 techniques gave similar results. The methods that produced DRF values closest to expected results, for either group of children, were the Rutland-Patlak plot and matrix deconvolution methods. (orig.)
Shen, Yi
2013-05-01
A subject's sensitivity to a stimulus variation can be studied by estimating the psychometric function. Generally speaking, three parameters of the psychometric function are of interest: the performance threshold, the slope of the function, and the rate at which attention lapses occur. In the present study, three psychophysical procedures were used to estimate the three-parameter psychometric function for an auditory gap detection task. These were an up-down staircase (up-down) procedure, an entropy-based Bayesian (entropy) procedure, and an updated maximum-likelihood (UML) procedure. Data collected from four young, normal-hearing listeners showed that while all three procedures provided similar estimates of the threshold parameter, the up-down procedure performed slightly better in estimating the slope and lapse rate for 200 trials of data collection. When the lapse rate was increased by mixing in random responses for the three adaptive procedures, the larger lapse rate was especially detrimental to the efficiency of the up-down procedure, and the UML procedure provided better estimates of the threshold and slope than did the other two procedures.
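The up-down procedure compared in this study can be sketched as a transformed 2-down/1-up staircase run against a simulated observer: two consecutive correct responses make the task harder, one incorrect response makes it easier, and the threshold is read off from the reversal levels. The logistic observer, step size and trial count below are illustrative assumptions, not the gap-detection settings of the study.

```python
import math
import random

random.seed(5)

def p_correct(level, threshold=0.0, slope=1.5, lapse=0.02):
    """Simulated logistic psychometric function with a small lapse rate."""
    base = 1.0 / (1.0 + math.exp(-slope * (level - threshold)))
    return (1.0 - lapse) * base + lapse * 0.5

def staircase(n_trials=400, start=4.0, step=0.5):
    """2-down/1-up staircase, converging on the ~70.7%-correct level."""
    level, correct_run = start, 0
    reversals, last_dir = [], 0
    for _ in range(n_trials):
        if random.random() < p_correct(level):
            correct_run += 1
            if correct_run == 2:            # two correct in a row -> harder
                correct_run = 0
                if last_dir == +1:
                    reversals.append(level)  # direction change: a reversal
                level -= step
                last_dir = -1
        else:
            correct_run = 0                  # one wrong -> easier
            if last_dir == -1:
                reversals.append(level)
            level += step
            last_dir = +1
    # Threshold estimate: mean of the reversal levels, discarding the first few
    tail = reversals[4:]
    return sum(tail) / len(tail)

print(staircase())
```

This single-number threshold estimate is exactly the limitation the abstract points to: the staircase tracks one point on the psychometric function, whereas the Bayesian and updated maximum-likelihood procedures update estimates of threshold, slope and lapse rate jointly on every trial.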
Estimation and Application of Ecological Memory Functions in Time and Space
Itter, M.; Finley, A. O.; Dawson, A.
2017-12-01
A common goal in quantitative ecology is the estimation or prediction of ecological processes as a function of explanatory variables (or covariates). Frequently, the ecological process of interest and the associated covariates vary in time, space, or both. Theory indicates that many ecological processes exhibit memory of local, past conditions. Despite such theoretical understanding, few methods exist to integrate observations from the recent past or within a local neighborhood as drivers of these processes. We build upon recent methodological advances in ecology and spatial statistics to develop a Bayesian hierarchical framework for estimating so-called ecological memory functions; that is, weight-generating functions that specify the relative importance of local, past covariate observations to ecological processes. Memory functions are estimated using a set of basis functions in time and/or space, allowing for flexible ecological memory based on a reduced set of parameters. Ecological memory functions are entirely data driven under the Bayesian hierarchical framework: no a priori assumptions are made regarding functional forms. Memory function uncertainty follows directly from the posterior distributions of the model parameters, allowing for tractable propagation of error to predictions of ecological processes. We apply the model framework to simulated spatio-temporal datasets generated using memory functions of varying complexity. The framework is also applied to estimate the ecological memory of annual boreal forest growth to local, past water availability. Consistent with ecological understanding of boreal forest growth dynamics, memory of past water availability peaks in the year previous to growth and slowly decays to zero in five to eight years. The Bayesian hierarchical framework has applicability to a broad range of ecosystems and processes, allowing for increased understanding of ecosystem responses to local and past conditions and improved prediction of ecological
An estimation of the structure function xF3 in neutrino-proton scattering
International Nuclear Information System (INIS)
Aoki, Kenzaburo; Arimoto, Shinsuke; Hoshino, Shigetoshi; Itoh, Nobuhisa; Konno, Toshiharu.
1981-01-01
The structure function xF₃(x, Q²) in deep-inelastic neutrino-proton scattering was estimated without differentiating with respect to Q² in the evolution function. First, the moment of the non-singlet structure function xF₃(x, Q²) is defined. Then, the kernel function f(z, Q²) is presented. Finally, the expression for the structure function xF₃ is given. The values of the structure function for various Q² are shown in five figures. A peak is seen in each figure, and the highest peak is at about Q² = 14 GeV². The analysis suggests a very small value of xF₃ in the small-Q² region. In quantum chromodynamics, the kernel function f(x/y, Q²) may be interpreted as the probability of finding a quark with momentum fraction x arising from one with momentum fraction y. (Kato, T.)
Operational production of Geodetic Excitation Functions from EOP estimated values at ASI-CGS
Sciarretta, C.; Luceri, V.; Bianco, G.
2009-04-01
ASI-CGS routinely provides geodetic excitation functions derived from its own estimated EOP values (at present SLR and VLBI; the use of GPS EOPs is also planned as soon as that product is fully operational) on the ASI geodetic web site (http://geodaf.mt.asi.it). This product was generated and monitored (for ASI internal use only) during a long pre-operational phase (more than two years) that included validation and testing. The daily geodetic excitation functions are now updated weekly along with the operational ASI SLR and VLBI EOP solutions and compared, whenever possible, with the atmospheric excitation functions available at the IERS SBAAM, under both the IB and non-IB assumptions, including the "wind" term. The work will present the available estimated geodetic excitation function time series and its comparison with the relevant atmospheric excitation functions, deriving quantitative indicators of the quality of the estimates. The similarities as well as the discrepancies between the atmospheric and geodetic series will be analysed and discussed, evaluating in particular the degree of correlation between the two estimated time series and the likelihood of a linear dependence hypothesis.
Estimation of parameters of constant elasticity of substitution production functional model
Mahaboob, B.; Venkateswarlu, B.; Sankar, J. Ravi
2017-11-01
Nonlinear model building has become an increasingly important and powerful tool in mathematical economics, and in recent years the popularity of applications of nonlinear models has risen dramatically. Researchers in econometrics are often interested in the inferential aspects of nonlinear regression models [6]. The present study gives a distinct method of estimation for a more complicated and highly nonlinear model, viz. the Constant Elasticity of Substitution (CES) production functional model. Henningsen et al. [5] proposed three solutions in 2012 to avoid serious problems when estimating CES functions: (i) removing discontinuities by using the limits of the CES function and its derivatives; (ii) circumventing large rounding errors by local linear approximations; (iii) handling ill-behaved objective functions by a multi-dimensional grid search. Joel Chongeh et al. [7] discussed the estimation of the impact of capital and labour inputs on the gross output of agri-food products using a constant elasticity of substitution production function in the Tanzanian context. Pol Antras [8] presented new estimates of the elasticity of substitution between capital and labour using data from the private sector of the U.S. economy for the period 1948-1998.
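As a concrete illustration of CES estimation, the sketch below fits a two-input CES function to simulated data by nonlinear least squares. This is a minimal example, not the paper's method: the data, starting values, and bounds are assumptions, with the bounds chosen to keep ρ away from the ρ → 0 discontinuity noted by Henningsen et al.

```python
import numpy as np
from scipy.optimize import curve_fit

def ces(X, A, delta, rho):
    """CES production function Q = A*(delta*K^-rho + (1-delta)*L^-rho)^(-1/rho)."""
    K, L = X
    return A * (delta * K**(-rho) + (1 - delta) * L**(-rho)) ** (-1.0 / rho)

rng = np.random.default_rng(1)
K = rng.uniform(1, 10, 200)                    # simulated capital input
L = rng.uniform(1, 10, 200)                    # simulated labour input
true = (2.0, 0.4, 0.6)                         # assumed A, delta, rho
Q = ces((K, L), *true) * np.exp(rng.normal(0, 0.01, 200))  # multiplicative noise

# nonlinear least squares; bounds keep delta in (0,1) and rho away from 0
est, _ = curve_fit(ces, (K, L), Q, p0=(1.0, 0.5, 0.5),
                   bounds=([0.01, 0.01, 0.05], [10.0, 0.99, 5.0]))
sigma = 1.0 / (1.0 + est[2])                   # implied elasticity of substitution
```

A grid search over ρ, as in solution (iii) above, would be the natural fallback when this local optimizer stalls on an ill-behaved objective.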
Bias Errors due to Leakage Effects When Estimating Frequency Response Functions
Directory of Open Access Journals (Sweden)
Andreas Josefsson
2012-01-01
Frequency response functions are often used to characterize a system's dynamic response. For a wide range of engineering applications, it is desirable to determine frequency response functions for a system under stochastic excitation. In practice, the measurement data are contaminated by noise and some form of averaging is needed in order to obtain a consistent estimator. With Welch's method, the discrete Fourier transform is used and the data are segmented into smaller blocks so that averaging can be performed when estimating the spectrum. However, this segmentation introduces leakage effects. As a result, the estimated frequency response function suffers from both systematic (bias) and random errors due to leakage. In this paper the bias error in the H1 and H2 estimates is studied and a new method is proposed to derive an approximate expression for the relative bias error at the resonance frequency with different window functions. The method is based on using a sum of real exponentials to describe the window's deterministic autocorrelation function. Simple expressions are derived for a rectangular window and a Hanning window. The theoretical expressions are verified with numerical simulations, and very good agreement is found between the results from the proposed bias expressions and the empirical results.
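The Welch-averaged H1 estimate described above can be sketched as follows. This is a simplified simulation, not the paper's setup: a low-order IIR filter stands in for the structure, and the sampling rate, segment length, and noise level are illustrative assumptions.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(2)
fs = 1024.0
x = rng.normal(size=2**16)                 # stochastic excitation
# a simple second-order lowpass filter plays the role of the "structure"
b, a = signal.butter(2, 0.1)
y = signal.lfilter(b, a, x) + 0.01 * rng.normal(size=x.size)  # output + noise

# Welch-based H1 estimate: H1 = Gxy / Gxx over averaged, windowed segments
nperseg = 1024
f, Gxy = signal.csd(x, y, fs=fs, window='hann', nperseg=nperseg)
_, Gxx = signal.welch(x, fs=fs, window='hann', nperseg=nperseg)
H1 = Gxy / Gxx

# exact frequency response of the filter, for comparison
_, H_true = signal.freqz(b, a, worN=f, fs=fs)
```

Swapping `window='hann'` for `window='boxcar'` and shortening `nperseg` makes the leakage bias the paper analyses visible near the resonance.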
An open tool for input function estimation and quantification of dynamic PET FDG brain scans.
Bertrán, Martín; Martínez, Natalia; Carbajal, Guillermo; Fernández, Alicia; Gómez, Álvaro
2016-08-01
Positron emission tomography (PET) analysis of clinical studies is mostly restricted to qualitative evaluation. Quantitative analysis of PET studies is highly desirable to be able to compute an objective measurement of the process of interest in order to evaluate treatment response and/or compare patient data. But implementation of quantitative analysis generally requires the determination of the input function: the arterial blood or plasma activity which indicates how much tracer is available for uptake in the brain. The purpose of our work was to share with the community an open software tool that can assist in the estimation of this input function, and the derivation of a quantitative map from the dynamic PET study. Arterial blood sampling during the PET study is the gold standard method to get the input function, but is uncomfortable and risky for the patient so it is rarely used in routine studies. To overcome the lack of a direct input function, different alternatives have been devised and are available in the literature. These alternatives derive the input function from the PET image itself (image-derived input function) or from data gathered from previous similar studies (population-based input function). In this article, we present ongoing work that includes the development of a software tool that integrates several methods with novel strategies for the segmentation of blood pools and parameter estimation. The tool is available as an extension to the 3D Slicer software. Tests on phantoms were conducted in order to validate the implemented methods. We evaluated the segmentation algorithms over a range of acquisition conditions and vasculature size. Input function estimation algorithms were evaluated against ground truth of the phantoms, as well as on their impact over the final quantification map. End-to-end use of the tool yields quantification maps with [Formula: see text] relative error in the estimated influx versus ground truth on phantoms. The main
Adriana Keeting; John Handmer
2013-01-01
South-eastern Australia is one of the most fire-prone environments on earth. The devastating fires of February 2009 appear to have been off the charts climatically and economically; they led to a new category of fire danger aptly called 'catastrophic'. Almost all wildfire losses have been associated with these extreme conditions, and climate change will see an...
Suto, Noriko; Harada, Makoto; Izutsu, Jun; Nagao, Toshiyasu
2006-07-01
In order to accurately estimate the geomagnetic transfer functions in the area of the volcano Mt. Iwate, we applied the interstation transfer function (ISTF) method to the three-component geomagnetic field data observed at the Mt. Iwate station (IWT), using the Kakioka Magnetic Observatory, JMA (KAK) as a remote reference station. Instead of the conventional Fourier transform, in which temporary transient noises badly degrade the accuracy of long-term properties, the continuous wavelet transform has been used. The accuracy of the results was as high as that of robust estimations of transfer functions obtained by the Fourier transform method. This provides the possibility of routinely monitoring the transfer functions, without sophisticated statistical procedures, to detect changes in the underground electrical conductivity structure.
Directory of Open Access Journals (Sweden)
Roman Urban
2004-12-01
We consider the Green functions for second-order left-invariant differential operators on homogeneous manifolds of negative curvature, being a semi-direct product of a nilpotent Lie group $N$ and $A=\mathbb{R}^+$. We obtain estimates for mixed derivatives of the Green functions both in the coercive and non-coercive case. The current paper completes the previous results obtained by the author in a series of papers [14,15,16,19].
Some error estimates for the lumped mass finite element method for a parabolic problem
Chatzipantelidis, P.
2012-01-01
We study the spatially semidiscrete lumped mass method for the model homogeneous heat equation with homogeneous Dirichlet boundary conditions. Improving earlier results we show that known optimal order smooth initial data error estimates for the standard Galerkin method carry over to the lumped mass method whereas nonsmooth initial data estimates require special assumptions on the triangulation. We also discuss the application to time discretization by the backward Euler and Crank-Nicolson methods. © 2011 American Mathematical Society.
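The lumped mass method with backward Euler time stepping can be illustrated in one dimension. The sketch below is a minimal example on a uniform mesh with smooth initial data, where the optimal-order behaviour mentioned above is expected; the mesh size and time step are arbitrary choices.

```python
import numpy as np

# 1D model heat equation u_t = u_xx on (0,1), u = 0 at the boundary,
# piecewise-linear finite elements on a uniform mesh of M interior nodes.
M = 50
h = 1.0 / (M + 1)
x = np.linspace(h, 1 - h, M)

# stiffness matrix (tridiagonal) and LUMPED mass matrix (diagonal):
# row-summing the consistent mass matrix gives the diagonal entry h per node
A = (np.diag(np.full(M, 2.0)) - np.diag(np.ones(M - 1), 1)
     - np.diag(np.ones(M - 1), -1)) / h
Mlump = h * np.eye(M)

u = np.sin(np.pi * x)                    # smooth initial data
dt = 1e-3
# backward Euler: (Mlump + dt*A) u^{n+1} = Mlump u^n
step = np.linalg.inv(Mlump + dt * A) @ Mlump
for _ in range(100):
    u = step @ u

# exact PDE solution at t = 0.1 for comparison: exp(-pi^2 t) sin(pi x)
u_exact = np.exp(-np.pi**2 * 0.1) * np.sin(np.pi * x)
```

For the nonsmooth-data case discussed in the abstract, replacing the initial sine by, say, a step function exposes the mesh restrictions the paper studies.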
Wang, Bingyuan; Zhang, Yao; Liu, Dongyuan; Ding, Xuemei; Dan, Mai; Pan, Tiantian; Wang, Yihan; Li, Jiao; Zhou, Zhongxing; Zhang, Limin; Zhao, Huijuan; Gao, Feng
2018-02-01
Functional near-infrared spectroscopy (fNIRS) is a non-invasive neuroimaging method that monitors cerebral hemodynamics through optical changes measured at the scalp surface. It has played an increasingly important role in the psychology and medical imaging communities. Real-time imaging of brain function using NIRS makes it possible to explore sophisticated human brain functions unexplored before. The Kalman estimator has frequently been used in combination with modified Beer-Lambert law (MBLL) based optical topology (OT) for real-time brain function imaging. However, the spatial resolution of OT is low, hampering its application in exploring some complicated brain functions. In this paper, we develop a real-time imaging method combining diffuse optical tomography (DOT) and a Kalman estimator, much improving the spatial resolution. Instead of presenting only a spatially distributed image of the changes in the absorption coefficients at each time point during the recording process, a single image, updated in real time by the Kalman estimator, is provided. Each of its voxels represents the amplitude of the hemodynamic response function (HRF) associated with that voxel. We evaluate this method with simulation experiments, demonstrating that it can obtain images with more reliable spatial resolution. Furthermore, a statistical analysis is conducted to help decide whether a voxel in the field of view is activated or not.
Asiri, Sharefa M.
2017-10-08
Partial Differential Equations (PDEs) are commonly used to model complex systems that arise for example in biology, engineering, chemistry, and elsewhere. The parameters (or coefficients) and the source of PDE models are often unknown and are estimated from available measurements. Despite its importance, solving the estimation problem is mathematically and numerically challenging, especially when the measurements are corrupted by noise, which is often the case. Various methods have been proposed to solve estimation problems in PDEs, which can be classified into optimization methods and recursive methods. The optimization methods are usually computationally heavy, especially when the number of unknowns is large. In addition, they are sensitive to the initial guess and the stopping condition, and they lack robustness to noise. Recursive methods, such as observer-based approaches, are limited by their dependence on structural properties such as observability and identifiability, which might be lost when the PDE is approximated numerically. Moreover, most of these methods provide asymptotic estimates, which might not be useful for control applications, for example. An alternative non-asymptotic approach with less computational burden has been proposed in engineering fields, based on so-called modulating functions. In this dissertation, we propose to mathematically and numerically analyze the modulating functions based approaches. We also propose to extend these approaches to different situations. The contributions of this thesis are as follows. (i) Provide a mathematical analysis of the modulating function-based method (MFBM), including its well-posedness, statistical properties, and estimation errors. (ii) Provide a numerical analysis of the MFBM through some estimation problems, and study the sensitivity of the method to the modulating functions' parameters. (iii) Propose an effective algorithm for selecting the method's design parameters
DEFF Research Database (Denmark)
Effraimidis, Georgios; Dahl, Christian Møller
In this paper, we develop a fully nonparametric approach for the estimation of the cumulative incidence function with Missing At Random right-censored competing risks data. We obtain results on the pointwise asymptotic normality as well as the uniform convergence rate of the proposed nonparametric...
Clinical use of estimated glomerular filtration rate for evaluation of kidney function
DEFF Research Database (Denmark)
Broberg, Bo; Lindhardt, Morten; Rossing, Peter
2013-01-01
is a significant predictor for cardiovascular disease and may along with classical cardiovascular risk factors add useful information to risk estimation. Several cautions need to be taken into account, e.g. rapid changes in kidney function, dialysis, high age, obesity, underweight and diverging and unanticipated...
Groeneboom, P.; Jongbloed, G.; Wellner, J.A.
2001-01-01
A process associated with integrated Brownian motion is introduced that characterizes the limit behavior of nonparametric least squares and maximum likelihood estimators of convex functions and convex densities, respectively. We call this process “the invelope” and show that it is an almost surely
Chaudhuri, Shomesh E; Merfeld, Daniel M
2013-03-01
Psychophysics generally relies on estimating a subject's ability to perform a specific task as a function of an observed stimulus. For threshold studies, the fitted functions are called psychometric functions. While fitting psychometric functions to data acquired using adaptive sampling procedures (e.g., "staircase" procedures), investigators have encountered a bias in the spread ("slope" or "threshold") parameter that has been attributed to the serial dependency of the adaptive data. Using simulations, we confirm this bias for cumulative Gaussian parametric maximum likelihood fits on data collected via adaptive sampling procedures, and then present a bias-reduced maximum likelihood fit that substantially reduces the bias without reducing the precision of the spread parameter estimate and without reducing the accuracy or precision of the other fit parameters. As a separate topic, we explain how to implement this bias reduction technique using generalized linear model fits as well as other numeric maximum likelihood techniques such as the Nelder-Mead simplex. We then provide a comparison of the iterative bootstrap and observed information matrix techniques for estimating parameter fit variance from adaptive sampling procedure data sets. The iterative bootstrap technique is shown to be slightly more accurate; however, the observed information technique executes in a small fraction (0.005 %) of the time required by the iterative bootstrap technique, which is an advantage when a real-time estimate of parameter fit variance is required.
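A plain maximum likelihood fit of a cumulative-Gaussian psychometric function looks like the following. This is a baseline sketch on independently sampled (not adaptively sampled) simulated data; it omits both the bias-reduction term the paper proposes and lapse-rate parameters, and all numbers are assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_lik(params, stim, resp):
    """Negative log-likelihood of a cumulative-Gaussian psychometric function."""
    mu, log_sigma = params                     # log-sigma keeps spread positive
    p = norm.cdf(stim, mu, np.exp(log_sigma))  # P("yes") at each stimulus level
    p = np.clip(p, 1e-9, 1 - 1e-9)             # guard against log(0)
    return -np.sum(resp * np.log(p) + (1 - resp) * np.log(1 - p))

rng = np.random.default_rng(3)
stim = rng.uniform(-3, 3, 500)                 # stimulus levels
true_mu, true_sigma = 0.5, 1.0
resp = (rng.random(500) < norm.cdf(stim, true_mu, true_sigma)).astype(float)

# numeric ML fit via Nelder-Mead simplex, as mentioned in the abstract
fit = minimize(neg_log_lik, x0=[0.0, 0.0], args=(stim, resp),
               method='Nelder-Mead')
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
```

On staircase-generated data, the spread estimate from this plain fit would show the bias the authors describe; their correction modifies the likelihood rather than the optimizer.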
Colclough, Giles L; Woolrich, Mark W; Harrison, Samuel J; Rojas López, Pedro A; Valdes-Sosa, Pedro A; Smith, Stephen M
2018-05-07
A Bayesian model for sparse, hierarchical inverse covariance estimation is presented, and applied to multi-subject functional connectivity estimation in the human brain. It enables simultaneous inference of the strength of connectivity between brain regions at both subject and population level, and is applicable to fMRI, MEG and EEG data. Two versions of the model can encourage sparse connectivity, either using continuous priors to suppress irrelevant connections, or using an explicit description of the network structure to estimate the connection probability between each pair of regions. A large evaluation of this model, and of thirteen methods representing the state of the art of inverse covariance modelling, is conducted using both simulated and resting-state functional imaging datasets. Our novel Bayesian approach has similar performance to the best extant alternative, Ng et al.'s Sparse Group Gaussian Graphical Model algorithm, which is also based on a hierarchical structure. Using data from the Human Connectome Project, we show that these hierarchical models are able to reduce the measurement error in MEG beta-band functional networks by 10%, producing concomitant increases in estimates of the genetic influence on functional connectivity.
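A non-Bayesian relative of the continuous-prior version above is the L1-penalised graphical lasso. The sketch below is a stand-in using scikit-learn, not the paper's hierarchical model: the chain-network ground truth, sample size, and penalty value are all assumptions.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(7)
p = 6
# ground-truth sparse precision (inverse covariance): a chain network,
# so only neighbouring "regions" are conditionally dependent
prec = np.eye(p) + np.diag(np.full(p - 1, 0.4), 1) + np.diag(np.full(p - 1, 0.4), -1)
cov = np.linalg.inv(prec)
X = rng.multivariate_normal(np.zeros(p), cov, size=2000)

# L1-penalised inverse covariance: the penalty alpha plays a role loosely
# analogous to the sparsity-encouraging prior in the Bayesian model
model = GraphicalLasso(alpha=0.05).fit(X)
est_prec = model.precision_
```

The hierarchical models in the abstract go further by pooling such estimates across subjects; per-subject graphical lasso fits are a common single-level baseline.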
Estimation of demand function on natural gas and study of demand analysis
Energy Technology Data Exchange (ETDEWEB)
Kim, Y.D. [Korea Energy Economics Institute, Euiwang (Korea, Republic of)
1998-04-01
Demand functions for natural gas are estimated with several methods, and demand is analyzed by usage. Since the demand for natural gas, in which heating use has a big share, is closely related to temperature, the inter-season trend of price and income elasticity is estimated considering temperature and economic conditions. The per-usage response of natural gas demand to changes in price and income is also estimated. The response of gas demand to changes in price and income was found to occur through a change in the number of users in the long term. As for the response of unit consumption, only industrial use shows a long-term response to price. Since the gas price barely responds to changes in the exchange rate, the price-setting mechanism appears not to reflect import conditions, such as the exchange rate, in a timely manner. 16 refs., 12 figs., 13 tabs.
Modulating functions method for parameters estimation in the fifth order KdV equation
Asiri, Sharefa M.
2017-07-25
In this work, the modulating functions method is proposed for estimating coefficients in a higher-order nonlinear partial differential equation, namely the fifth-order Korteweg-de Vries (KdV) equation. The proposed method transforms the problem into a system of linear algebraic equations in the unknowns. The statistical properties of the modulating functions solution are described in this paper. In addition, guidelines for choosing the number of modulating functions, which is an important design parameter, are provided. The effectiveness and robustness of the proposed method are shown through numerical simulations in both noise-free and noisy cases.
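The core trick of the modulating functions method, moving derivatives off noisy data and onto smooth test functions by integration by parts, can be shown on a scalar ODE. This toy sketch is far simpler than the fifth-order KdV setting of the paper; the polynomial modulating functions are one common choice, and all values are assumptions. For y'(t) = a·y(t) and a test function φ with φ(0) = φ(T) = 0, integration by parts gives -∫φ'y dt = a∫φy dt, a linear algebraic equation in a.

```python
import numpy as np

T = 1.0
t = np.linspace(0, T, 1001)
a_true = -2.0
# "measured" data: exact solution plus small noise; note we never differentiate it
y = np.exp(a_true * t) + 0.001 * np.random.default_rng(4).normal(size=t.size)

def phi(n):       # polynomial modulating functions t^n (T-t)^n, zero at 0 and T
    return t**n * (T - t)**n

def dphi(n):      # their exact derivatives (differentiation falls on phi, not y)
    return n * t**(n - 1) * (T - t)**n - n * t**n * (T - t)**(n - 1)

dt_ = t[1] - t[0]
def integ(f):     # trapezoidal rule on the uniform grid
    return (f[:-1] + f[1:]).sum() * dt_ / 2

# one linear equation per modulating function; solve in least squares
ns = [2, 3, 4]
lhs = np.array([integ(y * phi(n)) for n in ns])
rhs = np.array([-integ(y * dphi(n)) for n in ns])
a_hat = np.linalg.lstsq(lhs[:, None], rhs, rcond=None)[0][0]
```

For the fifth-order KdV equation the same device is applied repeatedly, shifting up to five derivatives onto modulating functions that vanish to sufficient order at the boundaries.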
Comparing performance level estimation of safety functions in three distributed structures
International Nuclear Information System (INIS)
Hietikko, Marita; Malm, Timo; Saha, Heikki
2015-01-01
The capability of a machine control system to perform a safety function is expressed using performance levels (PL). This paper presents the results of a study in which PL estimation was carried out for a safety function implemented using three different distributed control system structures. Challenges relating to the process of estimating PLs for safety-related distributed machine control functions are highlighted. One of these concerns the use of different cabling schemes in the implementation of a safety function and their effect on the PL evaluation. The safety function used as a generic example in the PL calculations relates to a mobile work machine. It is a safety stop function in which different technologies (electrical, hydraulic and pneumatic) can be utilized. It was found that replacing analogue cables with digital communication makes the system structure simpler, with fewer components that can fail, which can improve the PL of the safety function. - Highlights: • Integration in distributed systems enables systems with fewer components. • It offers high reliability and diagnostic properties. • Analogue signals create uncertainty in signal reliability and complicate diagnostics
DEFF Research Database (Denmark)
Kirwan, L; Connolly, J; Finn, J A
2009-01-01
We develop a modeling framework that estimates the effects of species identity and diversity on ecosystem function and permits prediction of the diversity-function relationship across different types of community composition. Rather than just measure an overall effect of diversity, we separately ... the roles of evenness, functional groups, and functional redundancy. These more parsimonious descriptions can be especially useful in identifying general diversity-function relationships in communities with large numbers of species. We provide an example of the application of the modeling framework ... These models describe community-level performance and thus do not require separate measurement of the performance of individual species. This flexible modeling approach can be tailored to test many hypotheses in biodiversity research and can suggest the interaction mechanisms that may be acting.
Land-use change and carbon sinks: Econometric estimation of the carbon sequestration supply function
Energy Technology Data Exchange (ETDEWEB)
Lubowski, Ruben N.; Plantinga, Andrew J.; Stavins, Robert N.
2001-01-01
Increased attention by policy makers to the threat of global climate change has brought with it considerable interest in the possibility of encouraging the expansion of forest area as a means of sequestering carbon dioxide. The marginal costs of carbon sequestration or, equivalently, the carbon sequestration supply function will determine the ultimate effects and desirability of policies aimed at enhancing carbon uptake. In particular, marginal sequestration costs are the critical statistic for identifying a cost-effective policy mix to mitigate net carbon dioxide emissions. We develop a framework for conducting an econometric analysis of land use for the forty-eight contiguous United States and employing it to estimate the carbon sequestration supply function. By estimating the opportunity costs of land on the basis of econometric evidence of landowners' actual behavior, we aim to circumvent many of the shortcomings of previous sequestration cost assessments. By conducting the first nationwide econometric estimation of sequestration costs, endogenizing prices for land-based commodities, and estimating land-use transition probabilities in a framework that explicitly considers the range of land-use alternatives, we hope to provide better estimates eventually of the true costs of large-scale carbon sequestration efforts. In this way, we seek to add to understanding of the costs and potential of this strategy for addressing the threat of global climate change.
International Nuclear Information System (INIS)
Mariya, Yasushi; Saito, Fumio; Kimura, Tamaki
1999-01-01
The cerebral function of 12 patients with brain tumors managed by radiotherapy was serially estimated using electroencephalography (EEG), and the results were compared with tumor responses, analyzed by magnetic resonance imaging (MRI), and with clinical courses. After radiotherapy, EEG findings were improved in 7 patients, unchanged in 3, and worsened in 1. Clinical courses were generally correlated with serial changes in EEG findings and tumor responses. However, in 3 patients, the clinical course was explained better by the EEG findings than by the tumor response. It is suggested that the combination of EEG and image analysis is clinically useful for a comprehensive estimation of radiotherapeutic effects. (author)
Effect of large weight reductions on measured and estimated kidney function
DEFF Research Database (Denmark)
von Scholten, Bernt Johan; Persson, Frederik; Svane, Maria S
2017-01-01
GFR (creatinine-based equations), whereas measured GFR (mGFR) and cystatin C-based eGFR would be unaffected if adjusted for body surface area. METHODS: Prospective, intervention study including 19 patients. All attended a baseline visit before gastric bypass surgery followed by a visit six months post-surgery. m...... for body surface area was unchanged. Estimates of GFR based on creatinine overestimate renal function likely due to changes in muscle mass, whereas cystatin C based estimates are unaffected. TRIAL REGISTRATION: ClinicalTrials.gov, NCT02138565 . Date of registration: March 24, 2014....
Estimated conditional score function for missing mechanism model with nonignorable nonresponse
Institute of Scientific and Technical Information of China (English)
CUI Xia; ZHOU Yong
2017-01-01
The missing-data mechanism often depends on the values of the responses, which leads to nonignorable nonresponses. In such a situation, inference based on approaches that ignore the missing-data mechanism may not be valid. A crucial step is to model the nature of the missingness. We specify a parametric model for the missingness mechanism, and then propose a conditional score function approach for estimation. This approach imputes the score function by taking the conditional expectation of the score function for the missing data given the available information. The inference procedure then follows by replacing unknown terms with the related nonparametric estimators based on the observed data. The proposed score function does not suffer from the non-identifiability problem, and the proposed estimator is shown to be consistent and asymptotically normal. We also construct a confidence region for the parameter of interest using the empirical likelihood method. Simulation studies demonstrate that the proposed inference procedure performs well in many settings. We apply the proposed method to a data set from a growth hormone and exercise intervention study.
Zhan, Hanyu; Voelz, David G.
2016-12-01
The polarimetric bidirectional reflectance distribution function (pBRDF) describes the relationships between incident and scattered Stokes parameters, but the familiar surface-only microfacet pBRDF cannot capture diffuse scattering contributions and depolarization phenomena. We propose a modified pBRDF model with a diffuse scattering component developed from the Kubelka-Munk and Le Hors et al. theories, and apply it in the development of a method to jointly estimate refractive index, slope variance, and diffuse scattering parameters from a series of Stokes parameter measurements of a surface. An application of the model and estimation approach to experimental data published by Priest and Meier shows improved correspondence with measurements of normalized Mueller matrix elements. By converting the Stokes/Mueller calculus formulation of the model to a degree of polarization (DOP) description, the estimation results of the parameters from measured DOP values are found to be consistent with a previous DOP model and results.
Bayesian Estimation Of Shift Point In Poisson Model Under Asymmetric Loss Functions
Directory of Open Access Journals (Sweden)
uma srivastava
2012-01-01
The paper deals with estimating a shift point occurring in a sequence of independent observations from a Poisson model in statistical process control, i.e. the point m in the sequence after which the process mean changes. Bayes estimators of the shift point m and of the process means before and after the shift are derived for symmetric and asymmetric loss functions under informative and non-informative priors. A sensitivity analysis of the Bayes estimators is carried out by simulation and numerical comparisons in R. The results show the effectiveness of estimating a shift in a sequence of Poisson observations.
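A concrete variant of the shift-point computation can be sketched as follows. This is illustrative, not the paper's derivation: conjugate Gamma priors on both Poisson means and a uniform prior on m are assumed, and only the squared-error (symmetric-loss) Bayes estimator is computed, not the asymmetric-loss ones discussed above. With Gamma priors, each segment's marginal likelihood is available in closed form, so the posterior over m can be evaluated exactly.

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(5)
n, m_true = 60, 25
# Poisson mean shifts from 2.0 to 6.0 after observation m_true
x = np.concatenate([rng.poisson(2.0, m_true), rng.poisson(6.0, n - m_true)])

a, b = 0.5, 0.5                          # Gamma(a, b) priors on both means

def log_marginal(S, k):
    """log of the m-dependent part of ∫ prod Poisson(x|lam) Gamma(lam|a,b) dlam
    for a segment with sum S and length k."""
    return gammaln(a + S) - (a + S) * np.log(b + k)

# posterior over the shift point m under a uniform prior on 1..n-1
logp = np.array([log_marginal(x[:m].sum(), m) + log_marginal(x[m:].sum(), n - m)
                 for m in range(1, n)])
post = np.exp(logp - logp.max())
post /= post.sum()

m_map = 1 + np.argmax(post)              # posterior mode
m_mean = np.sum(np.arange(1, n) * post)  # Bayes estimator under squared error loss
```

Under an asymmetric loss such as LINEX, the Bayes estimator would be a different functional of this same posterior, which is where the sensitivity analysis in the abstract comes in.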
Estimation of Multiple Point Sources for Linear Fractional Order Systems Using Modulating Functions
Belkhatir, Zehor
2017-06-28
This paper proposes an estimation algorithm for the characterization of multiple point inputs for linear fractional order systems. First, using polynomial modulating functions method and a suitable change of variables the problem of estimating the locations and the amplitudes of a multi-pointwise input is decoupled into two algebraic systems of equations. The first system is nonlinear and solves for the time locations iteratively, whereas the second system is linear and solves for the input’s amplitudes. Second, closed form formulas for both the time location and the amplitude are provided in the particular case of single point input. Finally, numerical examples are given to illustrate the performance of the proposed technique in both noise-free and noisy cases. The joint estimation of pointwise input and fractional differentiation orders is also presented. Furthermore, a discussion on the performance of the proposed algorithm is provided.
International Nuclear Information System (INIS)
Yoo, Seung-Hoon; Lim, Hea-Jin; Kwak, Seung-Jun
2009-01-01
Over the last twenty years, the consumption of natural gas in Korea has increased dramatically. This increase has mainly resulted from rising consumption in the residential sector. The main objective of the study is to estimate households' demand function for natural gas by applying a sample selection model using data from a survey of households in Seoul. The results show that there exists a selection bias in the sample and that failure to correct for sample selection bias distorts the mean estimate of the demand for natural gas downward by 48.1%. In addition, according to the estimation results, the size of the house, the dummy variable for dwelling in an apartment, the dummy variable for having a bed in an inner room, and the household's income all have positive relationships with the demand for natural gas. On the other hand, the size of the family and the price of gas contribute negatively to the demand for natural gas. (author)
Murphy, K. A.
1990-01-01
A parameter estimation algorithm is developed which can be used to estimate unknown time- or state-dependent delays and other parameters (e.g., initial condition) appearing within a nonlinear nonautonomous functional differential equation. The original infinite dimensional differential equation is approximated using linear splines, which are allowed to move with the variable delay. The variable delays are approximated using linear splines as well. The approximation scheme produces a system of ordinary differential equations with nice computational properties. The unknown parameters are estimated within the approximating systems by minimizing a least-squares fit-to-data criterion. Convergence theorems are proved for time-dependent delays and state-dependent delays within two classes, which say essentially that fitting the data by using approximations will, in the limit, provide a fit to the data using the original system. Numerical test examples are presented which illustrate the method for all types of delay.
Bayesian estimation of dynamic matching function for U-V analysis in Japan
Kyo, Koki; Noda, Hideo; Kitagawa, Genshiro
2012-05-01
In this paper we propose a Bayesian method for analyzing unemployment dynamics. We derive a Beveridge curve for unemployment and vacancy (U-V) analysis from a Bayesian model based on a labor market matching function. In our framework, the efficiency of matching and the elasticities of new hiring with respect to unemployment and vacancy are regarded as time-varying parameters. To construct a flexible model and obtain reasonable estimates in an underdetermined estimation problem, we treat the time-varying parameters as random variables and introduce smoothness priors. The model is then described in a state space representation, enabling the parameter estimation to be carried out using the Kalman filter and fixed-interval smoothing. In such a representation, dynamic features of the cyclic unemployment rate and the structural-frictional unemployment rate can be accurately captured.
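The random-walk parameter idea can be illustrated with a minimal scalar sketch. Everything below is hypothetical (a single time-varying coefficient, synthetic data, arbitrary noise variances), not the authors' multivariate model with smoothness priors:

```python
import random

def kalman_random_walk(y, x, q=1e-4, r=2.5e-3, a0=0.0, p0=1.0):
    """Scalar Kalman filter for y_t = a_t * x_t + e_t, where the
    coefficient a_t follows a random walk a_t = a_{t-1} + w_t."""
    a, p = a0, p0
    estimates = []
    for yt, xt in zip(y, x):
        p += q                             # predict: random-walk variance grows
        k = p * xt / (xt * xt * p + r)     # Kalman gain
        a += k * (yt - a * xt)             # update with the innovation
        p *= (1.0 - k * xt)
        estimates.append(a)
    return estimates

# synthetic check: a slowly drifting coefficient (hypothetical data)
random.seed(0)
true_a = [0.5 + 0.001 * t for t in range(200)]
x = [1.0 + random.random() for _ in range(200)]
y = [a * xi + random.gauss(0.0, 0.05) for a, xi in zip(true_a, x)]
est = kalman_random_walk(y, x)
```

The filter tracks the drifting coefficient with a small lag set by the ratio of the state noise `q` to the measurement noise `r`; fixed-interval smoothing, as in the paper, would additionally run a backward pass over these filtered estimates.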
Zhong, M.; Zhan, Z.
2017-12-01
Receiver functions (RFs) estimated on dense arrays have been widely used for studies of Earth structures at different scales. However, there are still challenges in estimating and interpreting RF images due to the non-uniqueness of deconvolution, noise in the data, and the lack of uncertainty estimates. Here, we develop a dense-array-based RF method aimed at robust and high-resolution RF images. We cast RF images as the models in a sparsity-promoted inverse problem, in which waveforms from multiple events recorded by neighboring stations are jointly inverted. We use the Neighborhood Algorithm to find the optimal model (i.e., RF image) as well as an ensemble of models for further uncertainty quantification. Synthetic tests and application to the IRIS Community Wavefield Experiment in Oklahoma demonstrate that the new method is able to deal with challenging datasets, retrieve reliable high-resolution RF images, and provide realistic uncertainty estimates.
An Energy-Based Limit State Function for Estimation of Structural Reliability in Shock Environments
Directory of Open Access Journals (Sweden)
Michael A. Guthrie
2013-01-01
A limit state function is developed for the estimation of structural reliability in shock environments. This limit state function uses peak modal strain energies to characterize environmental severity and modal strain energies at failure to characterize the structural capacity. The Hasofer-Lind reliability index is briefly reviewed and its computation for the energy-based limit state function is discussed. Applications to two-degree-of-freedom mass-spring systems and to a simple finite element model are considered. For these examples, computation of the reliability index requires little effort beyond a modal analysis, but still accounts for relevant uncertainties in both the structure and the environment. For both examples, the reliability index is observed to agree well with the results of Monte Carlo analysis. In situations where fast, qualitative comparison of several candidate designs is required, the reliability index based on the proposed limit state function provides an attractive metric which can be used to compare and control reliability.
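For a limit state that is linear in standard normal variables, the Hasofer-Lind index has a closed form: for g(u) = b - a·u, beta = b/||a|| and the first-order failure probability is Phi(-beta). A small sketch with arbitrary illustrative coefficients (not the paper's strain-energy model), cross-checked by Monte Carlo:

```python
import math
import random

def hasofer_lind_linear(a, b):
    """Reliability index for a linear limit state g(u) = b - sum(a_i * u_i)
    with u_i independent standard normal: beta = b / ||a||."""
    return b / math.sqrt(sum(ai * ai for ai in a))

def std_normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

a, b = [1.0, 2.0], 5.0                    # illustrative coefficients
beta = hasofer_lind_linear(a, b)          # 5 / sqrt(5) = sqrt(5)
pf_form = std_normal_cdf(-beta)           # first-order failure probability

# Monte Carlo cross-check of P(g(U) < 0)
random.seed(1)
n = 200_000
fails = sum(1 for _ in range(n)
            if b - (random.gauss(0, 1) + 2 * random.gauss(0, 1)) < 0)
pf_mc = fails / n
```

For a nonlinear limit state, beta is instead found by minimizing ||u|| over the failure surface, but the linear case already shows why the index "requires little effort beyond a modal analysis."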
International Nuclear Information System (INIS)
Skrable, K.W.; Chabot, G.E.; French, C.S.; La Bone, T.R.
1988-01-01
This paper describes a way of obtaining and gives applications of intake retention functions. These functions give the fraction of an intake of radioactive material expected to be present in a specified bioassay compartment at any time after a single acute exposure or after onset of a continuous exposure. The intake retention functions are derived from a multicompartmental model and a recursive catenary kinetics equation that completely describe the metabolism of radioelements from intake to excretion, accounting for the delay in uptake from compartments in the respiratory and gastrointestinal tracts and the recycling of radioelements between systemic compartments. This approach, which treats excretion as the 'last' compartment of all catenary metabolic pathways, avoids the use of convolution integrals and provides algebraic solutions that can be programmed on hand held calculators or personal computers. The estimation of intakes and internal radiation doses and the use of intake retention functions in the design of bioassay programs are discussed along with several examples
International Nuclear Information System (INIS)
Wang Baosheng; Wang Dongqing; Zhang Jianmin; Jiang Jing
2012-01-01
In order to estimate the functional failure probability of passive systems, an innovative adaptive importance sampling methodology is presented. In the proposed methodology, information about the variables is extracted with some pre-sampling of points in the failure region. An importance sampling density is then constructed from the sample distribution in the failure region. Taking the AP1000 passive residual heat removal system as an example, the uncertainties related to the model of a passive system and the numerical values of its input parameters are considered in this paper. The probability of functional failure is then estimated with a combination of the response surface method and the adaptive importance sampling method. The numerical results demonstrate the high computational efficiency and excellent accuracy of the methodology compared with traditional probability analysis methods. (authors)
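The core reweighting step of importance sampling can be sketched on a one-dimensional toy problem, a standard normal with failure region x > t, where the answer is known exactly. This illustrates only the principle of sampling near the failure region and correcting with a likelihood ratio, not the AP1000 response-surface model:

```python
import math
import random

def pf_shifted_is(t, n=50_000, seed=2):
    """P(X > t) for X ~ N(0,1), sampling from the shifted density N(t,1)
    and reweighting by the likelihood ratio
    phi(x) / phi(x - t) = exp(t^2/2 - t*x)."""
    random.seed(seed)
    total = 0.0
    for _ in range(n):
        x = random.gauss(t, 1.0)     # proposal centered on the failure boundary
        if x > t:                    # indicator of the failure event
            total += math.exp(0.5 * t * t - t * x)
    return total / n

t = 3.0
pf_is = pf_shifted_is(t)
pf_exact = 0.5 * math.erfc(t / math.sqrt(2.0))   # Phi(-3), about 1.35e-3
```

Because most proposal samples land in the failure region, the estimator's variance is far smaller than crude Monte Carlo at the same sample size; an adaptive scheme like the paper's builds the proposal density from pre-samples instead of fixing it in advance.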
An Estimation of the Gamma-Ray Burst Afterglow Apparent Optical Brightness Distribution Function
Akerlof, Carl W.; Swan, Heather F.
2007-12-01
By using recent publicly available observational data obtained in conjunction with the NASA Swift gamma-ray burst (GRB) mission and a novel data analysis technique, we have been able to make some rough estimates of the GRB afterglow apparent optical brightness distribution function. The results suggest that 71% of all burst afterglows have optical magnitudes mR below a limiting value, and give a strong indication that the apparent optical magnitude distribution function peaks at mR ~ 19.5. Such estimates may prove useful in guiding future plans to improve GRB counterpart observation programs. The employed numerical techniques might find application in a variety of other data analysis problems in which the intrinsic distributions must be inferred from a heterogeneous sample.
Estimation of functional failure probability of passive systems based on subset simulation method
International Nuclear Information System (INIS)
Wang Dongqing; Wang Baosheng; Zhang Jianmin; Jiang Jing
2012-01-01
In order to address the multi-dimensional epistemic uncertainties and small functional failure probabilities of passive systems, an innovative reliability analysis algorithm, subset simulation based on Markov chain Monte Carlo, is presented. The method is founded on the idea that a small failure probability can be expressed as a product of larger conditional failure probabilities through a proper choice of intermediate failure events. Markov chain Monte Carlo simulation was implemented to efficiently generate conditional samples for estimating the conditional failure probabilities. Taking the AP1000 passive residual heat removal system as an example, the uncertainties related to the model of a passive system and the numerical values of its input parameters were considered in this paper. The probability of functional failure was then estimated with the subset simulation method. The numerical results demonstrate that the subset simulation method has high computational efficiency and excellent accuracy compared with traditional probability analysis methods. (authors)
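A minimal sketch of the subset simulation idea, on a toy problem whose answer is known in closed form: P(X >= 4.5) for a standard normal, about 3.4e-6. The sample count, conditional level probability p0 and Metropolis step size are arbitrary illustrative choices, not the paper's settings:

```python
import math
import random

def subset_sim_pf(b_final, n=2000, p0=0.1, step=1.0, seed=3):
    """Subset simulation estimate of P(X >= b_final) for X ~ N(0,1).
    The small probability is built up as a product of conditional
    probabilities over adaptively chosen intermediate thresholds."""
    random.seed(seed)
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]
    pf = 1.0
    for _ in range(30):                       # cap on the number of levels
        xs.sort(reverse=True)
        seeds = xs[: int(p0 * n)]             # top p0 fraction of samples
        b = seeds[-1]                         # adaptive intermediate threshold
        if b >= b_final:                      # final failure level reached
            return pf * sum(1 for x in xs if x >= b_final) / n
        pf *= p0
        xs = []
        per_chain = n // len(seeds)
        for x0 in seeds:                      # Metropolis chains in {x >= b}
            x = x0
            for _ in range(per_chain):
                cand = x + random.gauss(0.0, step)
                # accept with the N(0,1) density ratio, reject moves leaving {x >= b}
                if cand >= b and random.random() < math.exp(0.5 * (x * x - cand * cand)):
                    x = cand
                xs.append(x)
    return pf

pf = subset_sim_pf(4.5)
pf_exact = 0.5 * math.erfc(4.5 / math.sqrt(2.0))   # about 3.4e-6
```

With p0 = 0.1 the run needs only a handful of levels to reach a probability of order 1e-6, which is the efficiency argument the abstract makes against direct Monte Carlo.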
Correlation Function Approach for Estimating Thermal Conductivity in Highly Porous Fibrous Materials
Martinez-Garcia, Jorge; Braginsky, Leonid; Shklover, Valery; Lawson, John W.
2011-01-01
Heat transport in highly porous fiber networks is analyzed via two-point correlation functions. Fibers are assumed to be long and thin to allow a large number of crossing points per fiber. The network is characterized by three parameters: the fiber aspect ratio, the porosity and the anisotropy of the structure. We show that the effective thermal conductivity of the system can be estimated from knowledge of the porosity and the correlation lengths of the correlation functions obtained from a fiber structure image. As an application, the effects of the fiber aspect ratio and the network anisotropy on the thermal conductivity are studied.
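A two-point correlation function of the kind used here can be estimated directly from a binary image. The sketch below uses an uncorrelated synthetic image (not a fiber network) so the known limits, S2(0) equal to the solid fraction and S2(r) approaching its square, can be checked:

```python
import random

def two_point_correlation(img, max_r):
    """S2(r) along rows of a binary image: the probability that two pixels a
    horizontal distance r apart both belong to the solid phase (value 1)."""
    s2 = []
    for r in range(max_r + 1):
        hits = total = 0
        for row in img:
            for x in range(len(row) - r):
                total += 1
                hits += row[x] * row[x + r]
        s2.append(hits / total)
    return s2

# synthetic uncorrelated binary image with solid fraction 0.2;
# for independent pixels S2(0) -> 0.2 and S2(r > 0) -> 0.2**2 = 0.04
random.seed(4)
img = [[1 if random.random() < 0.2 else 0 for _ in range(200)]
       for _ in range(100)]
s2 = two_point_correlation(img, 5)
```

For a real fiber image, S2(r) decays from the solid fraction toward its square over a characteristic distance, and that correlation length, measured separately along and across the fiber direction, is what feeds the anisotropic conductivity estimate.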
International Nuclear Information System (INIS)
Sotiropoulou, P; Koukou, V; Martini, N; Nikiforidis, G; Michail, C; Kandarakis, I; Fountos, G; Kounadi, E
2015-01-01
In this study, an analytical approximation of dual-energy inverse functions is presented for the estimation of the calcium-to-phosphorus (Ca/P) mass ratio, which is a crucial parameter in bone health. Bone quality could be examined by the X-ray dual-energy method (XDEM), in terms of bone tissue material properties. Low- and high-energy log-intensity measurements were combined by using a nonlinear function to cancel out the soft tissue structures and generate the dual-energy bone Ca/P mass ratio. The dual-energy simulated data were obtained using variable Ca and PO4 thicknesses on a fixed total tissue thickness. The XDEM simulations were based on a bone phantom. Inverse fitting functions with least-squares estimation were used to obtain the fitting coefficients and to calculate the thickness of each material. The examined inverse mapping functions were linear, quadratic, and cubic. For every thickness, the nonlinear quadratic function provided the optimal fitting accuracy while requiring relatively few terms. The dual-energy method simulated in this work could be used to quantify the bone Ca/P mass ratio with photon-counting detectors. (paper)
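The inverse-fitting step reduces to an ordinary least-squares polynomial fit via the normal equations. The forward model below (a mildly nonlinear log-intensity versus thickness curve) and all coefficients are hypothetical stand-ins for the simulated calibration data, and a single measurement is used here rather than the paper's low/high-energy pair:

```python
def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 linear system."""
    n = 3
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def fit_quadratic(ms, ts):
    """Least-squares coefficients of t ~ c0 + c1*m + c2*m^2 (normal equations)."""
    A = [[sum(m ** (i + j) for m in ms) for j in range(3)] for i in range(3)]
    b = [sum(t * m ** i for m, t in zip(ms, ts)) for i in range(3)]
    return solve3(A, b)

# hypothetical calibration: log-intensity m from a mildly nonlinear forward model
thick = [0.5 * k for k in range(1, 11)]            # material thickness, cm
meas = [0.9 * t - 0.02 * t * t for t in thick]     # assumed attenuation curve
c0, c1, c2 = fit_quadratic(meas, thick)
t_est = c0 + c1 * meas[4] + c2 * meas[4] ** 2      # invert one measurement
```

Even though the true inverse of this forward model is not a polynomial, the quadratic fit recovers the middle calibration thickness to well under a millimeter, mirroring the paper's finding that a quadratic inverse mapping suffices.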
Estimation Methods of the Point Spread Function Axial Position: A Comparative Computational Study
Directory of Open Access Journals (Sweden)
Javier Eduardo Diaz Zamboni
2017-01-01
The precise knowledge of the point spread function is central for any imaging system characterization. In fluorescence microscopy, point spread function (PSF) determination has become a common and obligatory task for each new experimental device, mainly due to its strong dependence on acquisition conditions. During the last decade, algorithms have been developed for the precise calculation of the PSF, which fit model parameters that describe image formation on the microscope to experimental data. In order to contribute to this subject, a comparative study of three parameter estimation methods is reported, namely: I-divergence minimization (MIDIV), maximum likelihood (ML) and non-linear least squares (LSQR). They were applied to the estimation of the point source position on the optical axis, using a physical model. The methods' performance was evaluated under different conditions and noise levels using synthetic images, considering success percentage, iteration number, computation time, accuracy and precision. The main results showed that axial position estimation requires a high SNR to achieve an acceptable success level, and a higher SNR still to approach the estimation error lower bound. ML achieved a higher success percentage at lower SNR compared to MIDIV and LSQR with an intrinsic noise source. Only the ML and MIDIV methods achieved the error lower bound, and only with data belonging to the optical axis and high SNR. Extrinsic noise sources worsened the success percentage, but for a given method no difference was found between the noise sources studied.
On Estimation Of The Orientation Of Mobile Robots Using Turning Functions And SONAR Information
Directory of Open Access Journals (Sweden)
Dorel AIORDACHIOAIE
2003-12-01
SONAR systems are widely used by artificial objects, e.g. robots, and by animals, e.g. bats, for navigation and pattern recognition. The objective of this paper is to present a solution for estimating the orientation of mobile robots in their environment, in the context of navigation, using the turning function approach. The results are shown to be accurate and can be used further in the design of navigation strategies for mobile robots.
Moser, Gabriele; Zerubia, Josiane; Serpico, Sebastiano B.
2006-01-01
In remotely sensed data analysis, a crucial problem is represented by the need to develop accurate models for the statistics of the pixel intensities. This paper deals with the problem of probability density function (pdf) estimation in the context of synthetic aperture radar (SAR) amplitude data analysis. Several theoretical and heuristic models for the pdfs of SAR data have been proposed in the literature, which have been proved to be effective for different land-cov...
Milanesi, P; Holderegger, R; Bollmann, K; Gugerli, F; Zellweger, F
2017-02-01
Estimating connectivity among fragmented habitat patches is crucial for evaluating the functionality of ecological networks. However, current estimates of landscape resistance to animal movement and dispersal lack landscape-level data on local habitat structure. Here, we used a landscape genetics approach to show that high-fidelity habitat structure maps derived from Light Detection and Ranging (LiDAR) data critically improve functional connectivity estimates compared to conventional land cover data. We related pairwise genetic distances of 128 Capercaillie (Tetrao urogallus) genotypes to least-cost path distances at multiple scales derived from land cover data. Resulting β values of linear mixed effects models ranged from 0.372 to 0.495, while those derived from LiDAR ranged from 0.558 to 0.758. The identification and conservation of functional ecological networks suffering from habitat fragmentation and homogenization will thus benefit from the growing availability of detailed and contiguous data on three-dimensional habitat structure and associated habitat quality. © 2016 by the Ecological Society of America.
A recursive Monte Carlo method for estimating importance functions in deep penetration problems
International Nuclear Information System (INIS)
Goldstein, M.
1980-04-01
A practical recursive Monte Carlo method for estimating the importance function distribution, aimed at importance sampling for the solution of deep penetration problems in three-dimensional systems, was developed. The efficiency of the recursive method was investigated for sample problems including one- and two-dimensional, monoenergetic and multigroup problems, as well as for a practical deep-penetration problem with streaming. The results of the recursive Monte Carlo calculations agree fairly well with S_n results. It is concluded that the recursive Monte Carlo method promises to become a universal method for estimating the importance function distribution for the solution of deep-penetration problems in all kinds of systems: for many systems the recursive method is likely to be more efficient than previously existing methods; for three-dimensional systems it is the first method that can estimate the importance function with the accuracy required for an efficient solution based on importance sampling of neutron deep-penetration problems in those systems.
Modulation transfer function estimation of optical lens system by adaptive neuro-fuzzy methodology
Petković, Dalibor; Shamshirband, Shahaboddin; Pavlović, Nenad T.; Anuar, Nor Badrul; Kiah, Miss Laiha Mat
2014-07-01
The quantitative assessment of image quality is an important consideration in any type of imaging system. The modulation transfer function (MTF) is a graphical description of the sharpness and contrast of an imaging system or of its individual components; it is also known as the spatial frequency response. The MTF curve has different meanings according to the corresponding frequency. The MTF of an optical system specifies the contrast transmitted by the system as a function of image size, and is determined by the inherent optical properties of the system. In this study, an adaptive neuro-fuzzy inference system (ANFIS) estimator is designed and adapted to estimate the MTF value of an actual optical system. The neural network in ANFIS adjusts the parameters of the membership functions in the fuzzy logic of the fuzzy inference system. The backpropagation learning algorithm is used for training this network. This intelligent estimator is implemented using Matlab/Simulink and its performance is investigated. The simulation results presented in this paper show the effectiveness of the developed method.
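Independently of the ANFIS estimator, the MTF itself is commonly computed as the normalized Fourier magnitude of the line spread function (LSF). A short sketch with a hypothetical Gaussian LSF, chosen because its exact MTF is known in closed form:

```python
import math

def mtf_from_lsf(lsf):
    """MTF as the magnitude of the DFT of the line spread function,
    normalized to 1 at zero spatial frequency."""
    n = len(lsf)
    mags = []
    for k in range(n // 2 + 1):
        re = sum(v * math.cos(2 * math.pi * k * i / n) for i, v in enumerate(lsf))
        im = sum(v * math.sin(2 * math.pi * k * i / n) for i, v in enumerate(lsf))
        mags.append(math.hypot(re, im))
    return [m / mags[0] for m in mags]

# hypothetical Gaussian LSF sampled on 64 pixels
sigma = 2.0
lsf = [math.exp(-0.5 * ((i - 32) / sigma) ** 2) for i in range(64)]
mtf = mtf_from_lsf(lsf)

# a Gaussian LSF has a Gaussian MTF: exp(-2 * (pi * f * sigma)^2), f = k/64
expected_k4 = math.exp(-2 * (math.pi * 4 * sigma / 64) ** 2)
```

Values like these, computed at a few spatial frequencies, are what an estimator such as ANFIS would be trained to reproduce from system parameters.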
Smith, Andrew; LaVerde, Bruce; Hunt, Ron; Fulcher, Clay; Towner, Robert; McDonald, Emmett
2012-01-01
The design and theoretical basis of a new database tool that quickly generates vibroacoustic response estimates using a library of transfer functions (TFs) is discussed. During the early stages of a launch vehicle development program, these response estimates can be used to provide vibration environment specification to hardware vendors. The tool accesses TFs from a database, combines the TFs, and multiplies these by input excitations to estimate vibration responses. The database is populated with two sets of uncoupled TFs; the first set representing the vibration response of a bare panel, designated H^s, and the second set representing the response of the free-free component equipment by itself, designated H^c. For a particular configuration undergoing analysis, the appropriate H^s and H^c are selected and coupled to generate an integrated TF, designated H^(s+c). This integrated TF is then used with the appropriate input excitations to estimate vibration responses. This simple yet powerful tool enables a user to estimate vibration responses without directly using finite element models, so long as suitable H^s and H^c sets are defined in the database libraries. The paper discusses the preparation of the database tool and provides the assumptions and methodologies necessary to combine the H^s and H^c sets into an integrated H^(s+c). An experimental validation of the approach is also presented.
On the method of logarithmic cumulants for parametric probability density function estimation.
Krylov, Vladimir A; Moser, Gabriele; Serpico, Sebastiano B; Zerubia, Josiane
2013-10-01
Parameter estimation of probability density functions is one of the major steps in the area of statistical image and signal processing. In this paper we explore several properties and limitations of the recently proposed method of logarithmic cumulants (MoLC) parameter estimation approach which is an alternative to the classical maximum likelihood (ML) and method of moments (MoM) approaches. We derive the general sufficient condition for a strong consistency of the MoLC estimates which represents an important asymptotic property of any statistical estimator. This result enables the demonstration of the strong consistency of MoLC estimates for a selection of widely used distribution families originating from (but not restricted to) synthetic aperture radar image processing. We then derive the analytical conditions of applicability of MoLC to samples for the distribution families in our selection. Finally, we conduct various synthetic and real data experiments to assess the comparative properties, applicability and small sample performance of MoLC notably for the generalized gamma and K families of distributions. Supervised image classification experiments are considered for medical ultrasound and remote-sensing SAR imagery. The obtained results suggest that MoLC is a feasible and computationally fast yet not universally applicable alternative to MoM. MoLC becomes especially useful when the direct ML approach turns out to be unfeasible.
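The MoLC idea can be sketched for the gamma family, for which the first two log-cumulants satisfy mean(ln x) = psi(k) + ln(theta) and var(ln x) = psi'(k). The digamma/trigamma routines below are standard recurrence-plus-asymptotic-series approximations, included only to keep the sketch self-contained; the data are synthetic, not from the paper's SAR or ultrasound experiments:

```python
import math
import random

def digamma(x):
    """psi(x): recurrence up to x >= 6, then the asymptotic series."""
    s = 0.0
    while x < 6.0:
        s -= 1.0 / x
        x += 1.0
    inv = 1.0 / x
    return s + math.log(x) - 0.5 * inv - inv * inv * (
        1.0 / 12 - inv * inv * (1.0 / 120 - inv * inv / 252))

def trigamma(x):
    """psi'(x): recurrence up to x >= 6, then the asymptotic series."""
    s = 0.0
    while x < 6.0:
        s += 1.0 / (x * x)
        x += 1.0
    inv = 1.0 / x
    return s + inv + 0.5 * inv * inv + inv ** 3 * (
        1.0 / 6 - inv * inv * (1.0 / 30 - inv * inv / 42))

def molc_gamma(data):
    """MoLC fit of gamma(shape k, scale theta):
    solve psi'(k) = var(ln x), then theta = exp(mean(ln x) - psi(k))."""
    logs = [math.log(v) for v in data]
    c1 = sum(logs) / len(logs)                       # first log-cumulant
    c2 = sum((l - c1) ** 2 for l in logs) / len(logs)  # second log-cumulant
    k = 1.0
    for _ in range(60):                              # Newton iteration
        df = (trigamma(k + 1e-5) - trigamma(k - 1e-5)) / 2e-5
        k = max(k - (trigamma(k) - c2) / df, 1e-3)
    return k, math.exp(c1 - digamma(k))

# synthetic gamma(3, 2) sample: sum of three Exp(scale 2) draws
random.seed(5)
data = [sum(-2.0 * math.log(random.random()) for _ in range(3))
        for _ in range(20000)]
k_hat, theta_hat = molc_gamma(data)
```

Unlike the method of moments, only logarithms of the data enter the estimating equations, which is why MoLC behaves well on heavy-tailed amplitude distributions; the consistency conditions derived in the paper say when such equations admit a valid solution.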
Liu, Fushun; Liu, Chengcheng; Chen, Jiefeng; Wang, Bin
2017-08-01
Spectrum response estimation with commercial software, such as the SESAM software tool, typically includes two main steps: finding a suitable loading spectrum and computing the response amplitude operators (RAOs) for a frequency-specified wave component. In this paper, we propose a nontraditional spectrum response estimation method that uses a numerical representation of the retardation functions. Based on estimated added mass and damping matrices of the structure, we decompose and replace the convolution terms with a series of poles and corresponding residues in the Laplace domain. Then, we estimate the power density corresponding to each frequency component using the improved periodogram method. The advantage of this approach is that the frequency-dependent motion equations in the time domain can be transformed into the Laplace domain without requiring Laplace-domain expressions for the added mass and damping. To validate the proposed method, we use a numerical semi-submerged pontoon from SESAM. The numerical results show that the responses of the proposed method match well with those obtained from the traditional method. Furthermore, the estimated spectrum also matches well, which indicates the method's potential application to deep-water floating structures.
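The periodogram step can be illustrated with the simplest variance-reduction device, Bartlett averaging over non-overlapping segments. The signal below is synthetic, and the paper's "improved periodogram" may differ in windowing and overlap details:

```python
import math
import random

def periodogram(x):
    """Raw periodogram: squared DFT magnitude normalized by segment length."""
    n = len(x)
    p = []
    for k in range(n // 2 + 1):
        re = sum(v * math.cos(2 * math.pi * k * i / n) for i, v in enumerate(x))
        im = sum(v * math.sin(2 * math.pi * k * i / n) for i, v in enumerate(x))
        p.append((re * re + im * im) / n)
    return p

def averaged_periodogram(x, seg=64):
    """Bartlett averaging: mean of periodograms over non-overlapping
    segments, trading frequency resolution for lower estimator variance."""
    segs = [x[i:i + seg] for i in range(0, len(x) - seg + 1, seg)]
    ps = [periodogram(s) for s in segs]
    return [sum(col) / len(ps) for col in zip(*ps)]

# synthetic record: a sinusoid at 8 cycles per 64 samples, plus noise
random.seed(6)
x = [math.sin(2 * math.pi * 8 * i / 64) + random.gauss(0.0, 0.3)
     for i in range(1024)]
psd = averaged_periodogram(x)
peak = max(range(len(psd)), key=lambda k: psd[k])
```

Averaging 16 segment periodograms reduces the variance of each spectral estimate by roughly that factor, so the sinusoidal component stands out cleanly above the noise floor at its bin.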
Lee, Yu; Yu, Chanki; Lee, Sang Wook
2018-01-10
We present a sequential fitting-and-separating algorithm for surface reflectance components that separates individual dominant reflectance components and simultaneously estimates the corresponding bidirectional reflectance distribution function (BRDF) parameters from the separated reflectance values. We tackle the estimation of a Lafortune BRDF model, which combines a non-Lambertian diffuse reflection and multiple specular reflectance components, each with a different specular lobe. Our proposed method infers the appropriate number of BRDF lobes and their parameters by separating and estimating each of the reflectance components using an interval analysis-based branch-and-bound method in conjunction with iterative K-ordered scale estimation. The focus of this paper is the estimation of the Lafortune BRDF model; nevertheless, our proposed method can be applied to other analytical BRDF models such as the Cook-Torrance and Ward models. Experiments were carried out to validate the proposed method using isotropic materials from the Mitsubishi Electric Research Laboratories-Massachusetts Institute of Technology (MERL-MIT) BRDF database, and the results show that our method is superior to a conventional minimization algorithm.
Silveira, Vladímir de Aquino; Souza, Givago da Silva; Gomes, Bruno Duarte; Rodrigues, Anderson Raiol; Silveira, Luiz Carlos de Lima
2014-01-01
We used psychometric functions to estimate the joint entropy for space discrimination and spatial frequency discrimination. Space discrimination was taken as discrimination of spatial extent. Seven subjects were tested. Gábor functions comprising unidimensional sinusoidal gratings (0.4, 2, and 10 cpd) and bidimensional Gaussian envelopes (1°) were used as reference stimuli. The experiment comprised the comparison between reference and test stimuli that differed in the grating's spatial frequency or the envelope's standard deviation. We tested 21 different envelope standard deviations around the reference standard deviation to study spatial extent discrimination and 19 different grating spatial frequencies around the reference spatial frequency to study spatial frequency discrimination. Two series of psychometric functions were obtained for 2%, 5%, 10%, and 100% stimulus contrast. The psychometric function data points for spatial extent discrimination or spatial frequency discrimination were fitted with Gaussian functions using the least-squares method, and the spatial extent and spatial frequency entropies were estimated from the standard deviations of these Gaussian functions. Then, joint entropy was obtained by multiplying the square root of the spatial extent entropy times the spatial frequency entropy. We compared our results to the theoretical minimum for unidimensional Gábor functions, 1/4π or 0.0796. At low and intermediate spatial frequencies and high contrasts, joint entropy reached levels below the theoretical minimum, suggesting non-linear interactions between two or more visual mechanisms. We concluded that non-linear interactions of visual pathways, such as the M and P pathways, could explain joint entropy values below the theoretical minimum at low and intermediate spatial frequencies and high contrasts. These non-linear interactions might be at work at intermediate and high contrasts at all spatial frequencies once there was a substantial decrease in joint
DEFF Research Database (Denmark)
Jepsen, Morten Løve; Dau, Torsten
To partly characterize the function of cochlear processing in humans, the basilar membrane (BM) input-output (I/O) function can be estimated. In recent studies, forward masking has been used to estimate BM compression. If an on-frequency masker is processed compressively, while an off-frequency masker is transformed more linearly, the ratio between the slopes of growth-of-masking (GOM) functions provides an estimate of BM compression at the signal frequency. In this study, this paradigm is extended to also estimate the knee-point of the I/O function between linear processing at low levels and compressive processing at medium levels. If a signal can be masked by a low-level on-frequency masker such that signal and masker fall in the linear region of the I/O function, then a steeper GOM function is expected. The knee-point can then be estimated in the input level region where the GOM changes significantly.
Dosing of cytotoxic chemotherapy: impact of renal function estimates on dose.
Dooley, M J; Poole, S G; Rischin, D
2013-11-01
Oncology clinicians are now routinely provided with an estimated glomerular filtration rate on pathology reports whenever serum creatinine is requested. An assessment of the utility of this estimate for dose determination of renally excreted drugs, compared with other existing methods, is needed to inform practice. Renal function was determined by [Tc(99m)]DTPA clearance in adult patients presenting for chemotherapy. Renal function was calculated using the 4-variable Modification of Diet in Renal Disease (4v-MDRD), Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI), Cockcroft and Gault (CG), Wright and Martin formulae. Doses for renally excreted cytotoxic drugs, including carboplatin, were calculated. The concordance of the renal function estimates according to the CKD classification with measured [Tc(99m)]DTPA clearance in 455 adults (median age 64.0 years; range 17-87 years) for the 4v-MDRD, CKD-EPI, CG, Martin and Wright formulae was 47.7%, 56.3%, 46.2%, 56.5% and 60.2%, respectively. Concordance for chemotherapy dose for these formulae was 89.0%, 89.5%, 85.1%, 89.9% and 89.9%, respectively. Concordance for carboplatin dose specifically was 66.4%, 71.4%, 64.0%, 73.8% and 73.2%. All bedside formulae provide similar levels of concordance in dosage selection for renally excreted chemotherapy drugs when compared with the use of a direct measure of renal function.
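Two of the formulae involved are simple enough to sketch directly: the Cockcroft-Gault creatinine clearance estimate and the Calvert formula linking carboplatin dose to target AUC and GFR. The patient values below are illustrative only, not clinical guidance:

```python
def cockcroft_gault(age, weight_kg, scr_mg_dl, female):
    """Cockcroft-Gault creatinine clearance estimate (mL/min):
    (140 - age) * weight / (72 * serum creatinine), times 0.85 if female."""
    crcl = (140 - age) * weight_kg / (72.0 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

def calvert_carboplatin_dose(auc, gfr_ml_min):
    """Calvert formula: carboplatin dose (mg) = target AUC * (GFR + 25)."""
    return auc * (gfr_ml_min + 25.0)

# illustrative patient, roughly matching the study's median age
crcl = cockcroft_gault(age=64, weight_kg=70, scr_mg_dl=1.0, female=False)
dose = calvert_carboplatin_dose(auc=5, gfr_ml_min=crcl)   # about 494 mg
```

Because the Calvert dose is linear in GFR, the spread between bedside GFR formulae propagates directly into the carboplatin dose, which is why the study's carboplatin-specific concordance is so much lower than its overall dosing concordance.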
International Nuclear Information System (INIS)
Hwang, Eui-Hyo
1999-01-01
The aim of this study is to assess the physiological implications of the estimated parameters and the clinical value of this analysis method for hepatic functional reserve estimation. After intravenous injection of 185 MBq of GSA, fifteen sequential sets of SPECT data were acquired over 15 minutes. The first 5 sets of SPECT images were analyzed by Patlak plot, and hepatic GSA clearance was obtained in each matrix. The sum of the hepatic GSA clearance in each matrix (total hepatic GSA clearance) was calculated as an index of whole-liver functional reserve. Total hepatic GSA clearance was compared with the receptor index or effective blood flow (EHBF) of the whole liver, which were analyzed by the Direct Integral Linear Least Square Regression (DILS) method, to assess the physiological implications of hepatic GSA clearance. The clinical value of total hepatic GSA clearance was assessed in comparison with conventional hepatic function tests. Very good correlations were observed between total hepatic GSA clearance and the receptor index, whereas the correlations between total hepatic GSA clearance and EHBF were not significant. Significant correlations were also observed between total hepatic GSA clearance and conventional hepatic function tests, such as choline esterase, albumin, hepaplastin test, and ICG R15. (K.H.)
Functional soil microbial diversity across Europe estimated by EEA, MicroResp and BIOLOG
DEFF Research Database (Denmark)
Winding, Anne; Rutgers, Michiel; Creamer, Rachel
Soil microorganisms are abundant and essential for the bio-geochemical processes of soil, soil quality and soil ecosystem services. All this depends on the actual functions the microbial communities perform in the soil. Measuring soil respiration has for many years been the basis of estimating soil microbial activity. However, today several techniques are in use for determining microbial functional diversity and assessing soil biodiversity: methods based on CO2 development by the microbes, such as substrate-induced respiration (SIR) on specific substrates, have led to the development ... The three techniques were compared on a sample set consisting of 81 soil samples covering five Biogeographical Zones and three land-uses, in order to test the sensitivity, ease and cost of performance, and biological significance of the data output. The techniques vary in how close they are to in situ functions and in their dependency on growth during incubation.
Directory of Open Access Journals (Sweden)
Cheng Liu
2010-01-01
Time-varying coherence is a powerful tool for revealing functional dynamics between different regions in the brain. In this paper, we address ways of estimating the evolutionary spectrum and coherence using the general Cohen's class distributions. We show that the intimate connection between the Cohen's class-based spectra and the evolutionary spectra defined on locally stationary time series can be linked by the kernel functions of the Cohen's class distributions. The time-varying spectra and coherence are further generalized with the Stockwell transform, a multiscale time-frequency representation. The Stockwell measures can be studied in the framework of the Cohen's class distributions with a generalized frequency-dependent kernel function. A magnetoencephalography study using the Stockwell coherence reveals an interesting temporal interaction between the contralateral and ipsilateral motor cortices under a multisource interference task.
Quantitative pre-surgical lung function estimation with SPECT/CT
International Nuclear Information System (INIS)
Bailey, D. L.; Willowson, K. P.; Timmins, S.; Harris, B. E.; Bailey, E. A.; Roach, P. J.
2009-01-01
Objectives: To develop methodology to predict lobar lung function based on SPECT/CT ventilation and perfusion (V/Q) scanning in candidates for lobectomy for lung cancer. Methods: This combines two development areas from our group: quantitative SPECT based on CT-derived corrections for scattering and attenuation of photons, and SPECT V/Q scanning with lobar segmentation from CT. Eight patients underwent baseline pulmonary function testing (PFT) including spirometry, measurement of DLCO, and cardiopulmonary exercise testing. A SPECT/CT V/Q scan was acquired at baseline. Using in-house software, each lobe was anatomically defined using CT to provide lobar ROIs which could be applied to the SPECT data. From these, the individual lobar contribution to overall function was calculated from counts within the lobe, and post-operative FEV1, DLCO and VO2 peak were predicted. This was compared with the quantitative planar scan method using 3 rectangular ROIs over each lung. Results: Post-operative FEV1 most closely matched that predicted by the planar quantification method, with SPECT V/Q over-estimating the loss of function by 8% (range −7% to +23%). However, post-operative DLCO and VO2 peak were both accurately predicted by SPECT V/Q (average error of 0% and 2%, respectively) compared with planar. Conclusions: More accurate anatomical definition of lobar anatomy provides better estimates of post-operative loss of function for DLCO and VO2 peak than traditional planar methods. SPECT/CT provides the tools for accurate anatomical definition of the surgical target as well as being useful in producing quantitative 3D functional images for ventilation and perfusion.
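The lobar prediction step reduces to simple count fractions: the predicted post-operative value is the pre-operative value scaled by the share of counts outside the resected lobe. A sketch with hypothetical lobar counts (the lobe names follow standard anatomy; the numbers are invented):

```python
def predicted_postop(preop_value, lobe_counts, resected_lobe):
    """Predicted post-operative function = pre-operative value times
    (1 - fraction of total counts contributed by the resected lobe)."""
    total = sum(lobe_counts.values())
    return preop_value * (1.0 - lobe_counts[resected_lobe] / total)

# hypothetical lobar counts from a quantitative V/Q SPECT study
counts = {"RUL": 180, "RML": 90, "RLL": 250, "LUL": 230, "LLL": 250}
ppo_fev1 = predicted_postop(2.4, counts, "RUL")   # litres, illustrative
```

The same scaling applies to DLCO or VO2 peak; what the SPECT/CT approach changes is only how accurately the per-lobe count fractions are defined, via CT-based lobar ROIs rather than planar rectangles.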
Bayesian switching factor analysis for estimating time-varying functional connectivity in fMRI.
Taghia, Jalil; Ryali, Srikanth; Chen, Tianwen; Supekar, Kaustubh; Cai, Weidong; Menon, Vinod
2017-07-15
There is growing interest in understanding the dynamical properties of functional interactions between distributed brain regions. However, robust estimation of temporal dynamics from functional magnetic resonance imaging (fMRI) data remains challenging due to limitations in extant multivariate methods for modeling time-varying functional interactions between multiple brain areas. Here, we develop a Bayesian generative model for fMRI time-series within the framework of hidden Markov models (HMMs). The model is a dynamic variant of the static factor analysis model (Ghahramani and Beal, 2000). We refer to this model as Bayesian switching factor analysis (BSFA) as it integrates factor analysis into a generative HMM in a unified Bayesian framework. In BSFA, brain dynamic functional networks are represented by latent states which are learnt from the data. Crucially, BSFA is a generative model which estimates the temporal evolution of brain states and transition probabilities between states as a function of time. An attractive feature of BSFA is the automatic determination of the number of latent states via Bayesian model selection arising from penalization of excessively complex models. Key features of BSFA are validated using extensive simulations on carefully designed synthetic data. We further validate BSFA using fingerprint analysis of multisession resting-state fMRI data from the Human Connectome Project (HCP). Our results show that modeling temporal dependencies in the generative model of BSFA results in improved fingerprinting of individual participants. Finally, we apply BSFA to elucidate the dynamic functional organization of the salience, central-executive, and default mode networks, three core neurocognitive systems with a central role in cognitive and affective information processing (Menon, 2011). Across two HCP sessions, we demonstrate a high level of dynamic interactions between these networks and determine that the salience network has the highest temporal
Lefort-Besnard, Jérémy; Bassett, Danielle S; Smallwood, Jonathan; Margulies, Daniel S; Derntl, Birgit; Gruber, Oliver; Aleman, Andre; Jardri, Renaud; Varoquaux, Gaël; Thirion, Bertrand; Eickhoff, Simon B; Bzdok, Danilo
2018-02-01
Schizophrenia is a devastating mental disease with an apparent disruption in the highly associative default mode network (DMN). Interplay between this canonical network and others probably contributes to goal-directed behavior so its disturbance is a candidate neural fingerprint underlying schizophrenia psychopathology. Previous research has reported both hyperconnectivity and hypoconnectivity within the DMN, and both increased and decreased DMN coupling with the multimodal saliency network (SN) and dorsal attention network (DAN). This study systematically revisited network disruption in patients with schizophrenia using data-derived network atlases and multivariate pattern-learning algorithms in a multisite dataset (n = 325). Resting-state fluctuations in unconstrained brain states were used to estimate functional connectivity, and local volume differences between individuals were used to estimate structural co-occurrence within and between the DMN, SN, and DAN. In brain structure and function, sparse inverse covariance estimates of network coupling were used to characterize healthy participants and patients with schizophrenia, and to identify statistically significant group differences. Evidence did not confirm that the backbone of the DMN was the primary driver of brain dysfunction in schizophrenia. Instead, functional and structural aberrations were frequently located outside of the DMN core, such as in the anterior temporoparietal junction and precuneus. Additionally, functional covariation analyses highlighted dysfunctional DMN-DAN coupling, while structural covariation results highlighted aberrant DMN-SN coupling. Our findings reframe the role of the DMN core and its relation to canonical networks in schizophrenia. We thus underline the importance of large-scale neural interactions as effective biomarkers and indicators of how to tailor psychiatric care to single patients. © 2017 Wiley Periodicals, Inc.
International Nuclear Information System (INIS)
Jouvie, Camille
2013-01-01
Positron Emission Tomography (PET) is a method of functional imaging, used in particular for drug development and tumor imaging. In PET, estimation of the arterial plasma activity concentration of the non-metabolized compound (the 'input function') is necessary for the extraction of the pharmacokinetic parameters. These parameters enable the quantification of the compound dynamics in the tissues. This PhD thesis contributes to the study of the input function through the development of a minimally invasive method to estimate it, using the PET image and a few blood samples. In this work, the example of the FDG tracer is chosen. The proposed method relies on compartmental modeling: it deconvolves the three-compartment model. The originality of the method consists in using a large number of regions of interest (ROIs), a large number of sets of three ROIs, and an iterative process. To validate the method, simulations of PET images of increasing complexity have been performed, from a simple image simulated with an analytic simulator to a complex image simulated with a Monte-Carlo simulator. After simulation of the acquisition, reconstruction and corrections, the images were segmented (through segmentation of an MRI image and registration between the PET and MRI images) and corrected for partial volume effect by a variant of Rousset's method, to obtain the kinetics in the ROIs, which are the input data of the estimation method. The evaluation of the method on simulated and real data is presented, as well as a study of the method's robustness to different error sources, for example in the segmentation, in the registration, or in the activity of the blood samples used. (author)
Forester, James D; Im, Hae Kyung; Rathouz, Paul J
2009-12-01
Patterns of resource selection by animal populations emerge as a result of the behavior of many individuals. Statistical models that describe these population-level patterns of habitat use can miss important interactions between individual animals and characteristics of their local environment; however, identifying these interactions is difficult. One approach to this problem is to incorporate models of individual movement into resource selection models. To do this, we propose a model for step selection functions (SSF) that is composed of a resource-independent movement kernel and a resource selection function (RSF). We show that standard case-control logistic regression may be used to fit the SSF; however, the sampling scheme used to generate control points (i.e., the definition of availability) must be accommodated. We used three sampling schemes to analyze simulated movement data and found that ignoring sampling and the resource-independent movement kernel yielded biased estimates of selection. The level of bias depended on the method used to generate control locations, the strength of selection, and the spatial scale of the resource map. Using empirical or parametric methods to sample control locations produced biased estimates under stronger selection; however, we show that the addition of a distance function to the analysis substantially reduced that bias. Assuming a uniform availability within a fixed buffer yielded strongly biased selection estimates that could be corrected by including the distance function but remained inefficient relative to the empirical and parametric sampling methods. As a case study, we used location data collected from elk in Yellowstone National Park, USA, to show that selection and bias may be temporally variable. Because under constant selection the amount of bias depends on the scale at which a resource is distributed in the landscape, we suggest that distance always be included as a covariate in SSF analyses. This approach to
International Nuclear Information System (INIS)
Jorgensen, E.J.
1987-01-01
This study is an application of production-cost duality theory. Duality theory is reviewed for the competitive and rate-of-return regulated firm. The cost function is developed for the nuclear electric-power-generating industry of the United States using capital, fuel, and labor factor inputs. A comparison is made between the Generalized Box-Cox (GBC) and Fourier Flexible (FF) functional forms. The GBC functional form nests the Generalized Leontief, Generalized Square Root Quadratic and Translog functional forms, and is based upon a second-order Taylor-series expansion. The FF form follows from a Fourier-series expansion in sine and cosine terms using the Sobolev norm as the goodness-of-fit measure. The Sobolev norm takes into account first and second derivatives. The cost function and two factor shares are estimated as a system of equations using maximum-likelihood techniques, with Additive Standard Normal and Logistic Normal error distributions. In summary, none of the special cases of the GBC functional form is accepted. Homotheticity of the underlying production technology can be rejected for both the GBC and FF forms, leaving only the unrestricted versions supported by the data. Residual analysis indicates a slight improvement in skewness and kurtosis for the univariate and multivariate cases when the Logistic Normal distribution is used.
Directory of Open Access Journals (Sweden)
Delaram Houshmand Kouchi
2017-05-01
Full Text Available The successful application of hydrological models relies on careful calibration and uncertainty analysis. However, there are many different calibration/uncertainty analysis algorithms, and each could be run with different objective functions. In this paper, we highlight the fact that each combination of optimization algorithm and objective function may lead to a different set of optimum parameters while having the same performance; this makes the interpretation of dominant hydrological processes in a watershed highly uncertain. We used three different optimization algorithms (SUFI-2, GLUE, and PSO) and eight different objective functions (R2, bR2, NSE, MNS, RSR, SSQR, KGE, and PBIAS) in a SWAT model to calibrate the monthly discharges in two watersheds in Iran. The results show that all three algorithms, using the same objective function, produced acceptable calibration results; however, with significantly different parameter ranges. Similarly, an algorithm using different objective functions also produced acceptable calibration results, but with different parameter ranges. The different calibrated parameter ranges consequently resulted in significantly different water resource estimates. Hence, the parameters and the outputs that they produce in a calibrated model are “conditioned” on the choices of the optimization algorithm and objective function. This adds another level of non-negligible uncertainty to watershed models, calling for more attention and investigation in this area.
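Several of the objective functions listed above have standard closed forms. A sketch of three of them following the usual hydrological definitions; note that the sign convention for PBIAS differs between tools, so the one used below (positive = underestimation) is an assumption:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 matches the mean of obs."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias; positive values indicate underestimation here."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

def kge(obs, sim):
    """Kling-Gupta efficiency: combines correlation r, variability ratio
    alpha, and bias ratio beta; 1 is a perfect fit."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = np.array([1.2, 3.4, 2.2, 5.1, 4.0])  # e.g. monthly discharges
```

Because each metric weights errors differently (NSE emphasizes peaks, PBIAS only the water balance, KGE splits the fit into three components), optimizing them can legitimately pull the calibrated parameters in different directions, which is the paper's point.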
Estimation of gas and tissue lung volumes by MRI: functional approach of lung imaging.
Qanadli, S D; Orvoen-Frija, E; Lacombe, P; Di Paola, R; Bittoun, J; Frija, G
1999-01-01
The purpose of this work was to assess the accuracy of MRI for the determination of lung gas and tissue volumes. Fifteen healthy subjects underwent MRI of the thorax and pulmonary function tests [vital capacity (VC) and total lung capacity (TLC)] in the supine position. MR examinations were performed at inspiration and expiration. Lung volumes were measured by a previously validated technique on phantoms. Both individual and total lung volumes and capacities were calculated. MRI total vital capacity (VC(MRI)) was compared with spirometric vital capacity (VC(SP)). Capacities were correlated to lung volumes. Tissue volume (V(T)) was estimated as the difference between the total lung volume at full inspiration and the TLC. No significant difference was seen between VC(MRI) and VC(SP). Individual capacities were well correlated (r = 0.9) to static volume at full inspiration. The V(T) was estimated to be 836 ± 393 ml. This preliminary study demonstrates that MRI can accurately estimate lung gas and tissue volumes. The proposed approach appears well suited for functional imaging of the lung.
Assessing a learning process with functional ANOVA estimators of EEG power spectral densities.
Gutiérrez, David; Ramírez-Moreno, Mauricio A
2016-04-01
We propose to assess the process of learning a task using electroencephalographic (EEG) measurements. In particular, we quantify changes in brain activity associated with the progression of the learning experience through functional analysis-of-variance (FANOVA) estimators of the EEG power spectral density (PSD). Such functional estimators provide a sense of the effect of training on the EEG dynamics. For that purpose, we implemented an experiment to monitor the process of learning to type using the Colemak keyboard layout during a twelve-lesson training. Hence, our aim is to identify statistically significant changes in the PSD of various EEG rhythms at different stages and difficulty levels of the learning process. Those changes are taken into account only when a probabilistic measure of the cognitive state ensures the high engagement of the volunteer in the training. Based on this, a series of statistical tests are performed in order to determine the personalized frequencies and sensors at which changes in PSD occur, and then the FANOVA estimates are computed and analyzed. Our experimental results showed a significant decrease in the power of [Formula: see text] and [Formula: see text] rhythms for ten volunteers during the learning process, and such a decrease happens regardless of the difficulty of the lesson. These results are in agreement with previous reports of changes in PSD being associated with feature binding and memory encoding.
Xu, Yonghong; Gao, Xiaohuan; Wang, Zhengxi
2014-04-01
Missing data represent a general problem in many scientific fields, especially in medical survival analysis. For censored data, interpolation is one of the important approaches. However, most interpolation methods replace the censored data with exact values, which distorts the real distribution of the censored data and reduces the probability of the real data falling into the interpolation interval. In order to solve this problem, we propose a nonparametric method of estimating the survival function of right-censored and interval-censored data and compare its performance to the SC (self-consistent) algorithm. Compared to the average interpolation and the nearest neighbor interpolation methods, the method proposed in this paper replaces the right-censored data with interval-censored data, which greatly improves the probability of the real data falling into the imputation interval. It then builds on empirical distribution theory to estimate the survival function of right-censored and interval-censored data. The results of numerical examples and a real breast cancer data set demonstrate that the proposed method has higher accuracy and better robustness for different proportions of censored data. This paper provides a good method for comparing the performance of clinical treatments by estimating the survival data of the patients, which should be of some help to medical survival data analysis.
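For context, the standard nonparametric baseline for right-censored survival data is the Kaplan-Meier product-limit estimator. The sketch below is that textbook estimator, not the authors' interval-based imputation method:

```python
import numpy as np

def kaplan_meier(times, events):
    """Product-limit survival estimator.
    times: observed times; events: 1 = event observed, 0 = right-censored.
    Returns (distinct event times, survival estimates at those times)."""
    times = np.asarray(times, float)
    events = np.asarray(events, int)
    s, out_t, out_s = 1.0, [], []
    for t in np.unique(times[events == 1]):
        at_risk = np.sum(times >= t)                    # still under observation
        d = np.sum((times == t) & (events == 1))        # events at time t
        s *= 1.0 - d / at_risk
        out_t.append(t)
        out_s.append(s)
    return np.array(out_t), np.array(out_s)

# Three subjects, all with observed events: S drops 2/3 -> 1/3 -> 0
t_ev, surv = kaplan_meier([1.0, 2.0, 3.0], [1, 1, 1])
```

Censored subjects contribute to the risk sets before their censoring time but trigger no drop in the curve, which is exactly the information-loss the paper's imputation scheme tries to soften.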
See food diet? Cultural differences in estimating fullness and intake as a function of plate size.
Peng, Mei; Adam, Sarah; Hautus, Michael J; Shin, Myoungju; Duizer, Lisa M; Yan, Huiquan
2017-10-01
Previous research has suggested that manipulations of plate size can have a direct impact on perception of food intake, measured by estimated fullness and intake. The present study, involving 570 individuals across Canada, China, Korea, and New Zealand, is the first empirical study to investigate cultural influences on perception of food portion as a function of plate size. The respondents viewed photographs of ten culturally diverse dishes presented on large (27 cm) and small (23 cm) plates, and then rated their estimated usual intake and expected fullness after consuming the dish, using 100-point visual analog scales. The data were analysed with a mixed-model ANCOVA controlling for individual BMI, liking and familiarity of the presented food. The results showed clear cultural differences: (1) manipulations of the plate size had no effect on the expected fullness or the estimated intake of the Chinese and Korean respondents, as opposed to significant effects in Canadians and New Zealanders (p Asian respondents. Overall, these findings, from a cultural perspective, support the notion that estimation of fullness and intake are learned through dining experiences, and highlight the importance of considering eating environments and contexts when assessing individual behaviours relating to food intake. Copyright © 2017 Elsevier Ltd. All rights reserved.
A PEDOTRANSFER FUNCTION FOR ESTIMATING THE SOIL ERODIBILITY FACTOR IN SICILY
Directory of Open Access Journals (Sweden)
Vincenzo Bagarello
2009-09-01
Full Text Available The soil erodibility factor, K, of the Universal Soil Loss Equation (USLE) is a simple descriptor of the soil susceptibility to rill and interrill erosion. The original procedure for determining K needs knowledge of the soil particle size distribution (PSD), the soil organic matter (OM) content, and soil structure and permeability characteristics. However, OM data are often missing, and soil structure and permeability are not easily evaluated in regional analyses. The objective of this investigation was to develop a pedotransfer function (PTF) for estimating the K factor of the USLE in Sicily (southern Italy) using only soil textural data. The nomograph soil erodibility factor and its associated first approximation, K’, were determined at 471 sampling points distributed throughout the island of Sicily. Two existing relationships for estimating K on the basis of the measured geometric mean particle diameter were initially tested. Then, two alternative PTFs for estimating K’ and K, respectively, on the basis of the measured PSD were derived. Testing analysis showed that the K estimate by the proposed PTF (eq. 11), which was characterized by a Nash-Sutcliffe efficiency index, NSEI, varying between 0.68 and 0.76 depending on the considered data set, was appreciably more accurate than the one obtained by other existing equations, which yielded NSEI values varying between 0.21 and 0.32.
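One widely cited relationship of the kind tested here, estimating K from the geometric mean particle diameter Dg, is the Renard et al. (1997) approximation; whether it is one of the paper's two candidate equations is an assumption on our part:

```python
import math

def usle_k_from_dg(dg_mm):
    """Renard et al. (1997) approximation of the USLE K factor
    (SI units, t ha h / (ha MJ mm)) from the geometric mean particle
    diameter Dg in mm. Illustrative; not the paper's proposed PTF."""
    z = (math.log10(dg_mm) + 1.659) / 0.7101
    return 0.0034 + 0.0405 * math.exp(-0.5 * z * z)
```

The curve peaks at K = 0.0439 for Dg = 10^-1.659 mm (about 0.022 mm, a silty texture) and falls off toward both the clay and sand ends, which matches the qualitative behavior of the USLE nomograph.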
Optimal replacement time estimation for machines and equipment based on cost function
Directory of Open Access Journals (Sweden)
J. Šebo
2013-01-01
Full Text Available The article deals with the multidisciplinary issue of estimating the optimal replacement time for machines. The categories of machines for which the optimization method is usable are those of metallurgical and engineering production. Different models of the cost function are considered (with both one and two variables). Parameters of the models were calculated by the least squares method. Model testing shows that all are good enough, so for estimation of the optimal replacement time it is sufficient to use the simpler models. In addition to the testing of models, we developed a method (tested on a selected simple model) which enables us, in actual real time and with a limited data set, to indicate the optimal replacement time. The indicated time moment is close enough to the optimal replacement time t*.
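The classic formulation of this problem minimizes the average cost per unit of service time, A(t) = (P + cumulative maintenance cost up to t) / t. A numerical sketch under an assumed linearly growing maintenance-cost rate (the paper fits its own cost-function models, not this one):

```python
import numpy as np

def optimal_replacement_time(purchase_cost, maint_rate, horizon, n=10000):
    """Grid-minimize A(t) = (P + integral_0^t m(tau) dtau) / t.
    maint_rate: vectorized callable giving maintenance cost per year at age t."""
    t = np.linspace(1e-3, horizon, n)
    dt = t[1] - t[0]
    cum_maint = np.cumsum(maint_rate(t)) * dt   # crude Riemann integral
    avg_cost = (purchase_cost + cum_maint) / t
    return t[np.argmin(avg_cost)]

# Price 100, maintenance rate 2t: A(t) = 100/t + t, minimized at t = 10
t_star = optimal_replacement_time(100.0, lambda t: 2.0 * t, horizon=30.0)
```

With a fitted cost model in place of the illustrative `lambda`, the same grid search gives the "close enough" replacement moment the abstract refers to.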
Estimation of the POD function and the LOD of a qualitative microbiological measurement method.
Wilrich, Cordula; Wilrich, Peter-Theodor
2009-01-01
Qualitative microbiological measurement methods in which the measurement results are either 0 (microorganism not detected) or 1 (microorganism detected) are discussed. The performance of such a measurement method is described by its probability of detection as a function of the contamination (CFU/g or CFU/mL) of the test material, or by the LOD(p), i.e., the contamination that is detected (measurement result 1) with a specified probability p. A complementary log-log model was used to statistically estimate these performance characteristics. An intralaboratory experiment for the detection of Listeria monocytogenes in various food matrixes illustrates the method. The estimate of LOD50% is compared with the Spearman-Kaerber method.
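The complementary log-log model and the derived LOD(p) have simple closed forms: POD(x) = 1 − exp(−exp(a + b ln x)), inverted for the contamination level detected with probability p. A sketch with illustrative parameter values (the paper estimates a and b from experimental data):

```python
import math

def pod(x, a, b):
    """Complementary log-log model: POD(x) = 1 - exp(-exp(a + b*ln x)),
    with x the contamination in CFU/g or CFU/mL."""
    return 1.0 - math.exp(-math.exp(a + b * math.log(x)))

def lod(p, a, b):
    """Contamination detected with probability p: the inverse of pod."""
    return math.exp((math.log(-math.log(1.0 - p)) - a) / b)

a, b = 0.2, 1.0        # illustrative values, not fitted estimates
x50 = lod(0.5, a, b)   # LOD at 50% detection probability
```

With b = 1 the model reduces to POD(x) = 1 − exp(−λx), the form expected when detection is governed by the Poisson distribution of organisms in the test portion.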
Gan, L.; Yang, F.; Shi, Y. F.; He, H. L.
2017-11-01
Many battery applications, such as the rapidly developing electric vehicles, need to know how much continuous and instantaneous power the batteries can provide. Given their large-scale application, lithium-ion batteries are taken as the research object. Many experiments were designed to obtain the lithium-ion battery parameters, so as to ensure the relevance and reliability of the estimation. To evaluate the continuous and instantaneous load capability of a battery, called its state-of-function (SOF), this paper proposes a fuzzy logic algorithm based on the battery state-of-charge (SOC), state-of-health (SOH) and C-rate parameters. Simulation and experimental results indicate that the proposed approach is suitable for battery SOF estimation.
A Gaussian mixture model based cost function for parameter estimation of chaotic biological systems
Shekofteh, Yasser; Jafari, Sajad; Sprott, Julien Clinton; Hashemi Golpayegani, S. Mohammad Reza; Almasganj, Farshad
2015-02-01
Many biological systems, such as neurons or the heart, can exhibit chaotic behavior. Conventional methods for parameter estimation in models of these systems have some limitations caused by sensitivity to initial conditions. In this paper, a novel cost function is proposed to overcome those limitations by building a statistical model of the distribution of the real system attractor in state space. This cost function is defined by the use of a likelihood score in a Gaussian mixture model (GMM) which is fitted to the observed attractor generated by the real system. Using that learned GMM, a similarity score can be defined by the computed likelihood score of the model time series. We have applied the proposed method to the parameter estimation of two important biological systems, a neuron and a cardiac pacemaker, which show chaotic behavior. Some simulated experiments are given to verify the usefulness of the proposed approach in clean and noisy conditions. The results show the adequacy of the proposed cost function.
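The likelihood-based cost can be illustrated with a stripped-down stand-in: a single Gaussian (a 1-component "mixture") fitted to a delay-embedded attractor, scoring a candidate series by mean log-likelihood. The embedding dimension and lag are arbitrary, and the paper fits a full GMM rather than one component:

```python
import numpy as np

def delay_embed(x, dim=2, lag=1):
    """Stack lagged copies of a scalar series into state-space points."""
    n = len(x) - (dim - 1) * lag
    return np.column_stack([x[i * lag: i * lag + n] for i in range(dim)])

def attractor_loglik(ref, cand, dim=2, lag=1):
    """Fit one Gaussian to the embedded reference attractor and return the
    mean log-likelihood of the candidate's embedded points under it."""
    R = delay_embed(np.asarray(ref, float), dim, lag)
    mu = R.mean(axis=0)
    cov = np.cov(R.T) + 1e-9 * np.eye(dim)      # regularized covariance
    inv = np.linalg.inv(cov)
    logdet = np.linalg.slogdet(cov)[1]
    C = delay_embed(np.asarray(cand, float), dim, lag) - mu
    quad = np.einsum('ij,jk,ik->i', C, inv, C)
    return np.mean(-0.5 * (quad + logdet + dim * np.log(2 * np.pi)))

t = np.linspace(0.0, 20 * np.pi, 2000)
score_close = attractor_loglik(np.sin(t), np.sin(t + 0.5))  # same attractor
score_far = attractor_loglik(np.sin(t), np.sin(t) + 5.0)    # shifted attractor
```

The key property, preserved even in this toy version, is that the score compares attractor geometry rather than pointwise trajectories, so it is insensitive to the initial-condition mismatch that defeats conventional error norms on chaotic systems.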
Aggarwal, Ankush
2017-08-01
Motivated by the well-known result that the stiffness of soft tissue is proportional to the stress, many of the constitutive laws for soft tissues contain an exponential function. In this work, we analyze properties of the exponential function and how it affects the estimation and comparison of elastic parameters for soft tissues. In particular, we find that as a consequence of the exponential function there are lines of high covariance in the elastic parameter space. As a result, one can have widely varying mechanical parameters defining the tissue stiffness but similar effective stress-strain responses. Drawing from elementary algebra, we propose simple changes in the norm and the parameter space which significantly improve the convergence of parameter estimation and robustness in the presence of noise. More importantly, we demonstrate that these changes improve the conditioning of the problem and provide a more robust solution in the case of heterogeneous material by reducing the chances of getting trapped in a local minimum. Based upon this new insight, we also propose a transformed parameter space which will allow for rational parameter comparison and avoid misleading conclusions regarding soft tissue mechanics.
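The high-covariance lines can be demonstrated with a common exponential law, σ = c(exp(bε) − 1), which is an illustrative choice rather than the paper's exact constitutive model: trading c against b while keeping the low-strain stiffness c·b fixed changes the response only modestly over a physiological strain range.

```python
import numpy as np

def stress(eps, c, b):
    """Illustrative exponential soft-tissue law: sigma = c*(exp(b*eps) - 1).
    For small strain, sigma ~ c*b*eps, so c*b sets the initial stiffness."""
    return c * (np.exp(b * eps) - 1.0)

eps = np.linspace(0.0, 0.1, 50)
ref = stress(eps, c=10.0, b=12.0)   # c*b = 120
alt = stress(eps, c=12.0, b=10.0)   # different parameters, same c*b = 120
rel_err = np.max(np.abs(alt - ref) / (np.abs(ref) + 1e-9))
```

Here the individual parameters differ by 20% while the maximum relative difference in stress stays near 11%, and it shrinks further at smaller strains. This is why fitting (c, b) directly is ill-conditioned, and why reparameterizing, e.g. in terms of the product c·b and a log-scaled exponent, gives a better-behaved estimation problem of the kind the paper advocates.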
ESTIMATING THE PRODUCTION FUNCTION IN THE CASE OF ROMANIA: METHODOLOGY AND RESULTS
Directory of Open Access Journals (Sweden)
Simuț Ramona Marinela
2015-07-01
Full Text Available The problem of economic growth is a headline concern among economists, mathematicians and politicians, because of the major impact of economic growth on the entire population of a country; this has made achieving or maintaining a sustained growth rate the major objective of the macroeconomic policy of any country. Thus, in order to identify present sources of economic growth for Romania, our study used a Cobb-Douglas type production function. The basic variables of this model are the work factor, the capital stock, and the part of economic growth determined by technical progress, the Solow residual or total factor productivity. To estimate this production function in the case of Romania, we used quarterly statistical data from the first quarter of 2000 to the fourth quarter of 2014; the source of the data was Eurostat. The Cobb-Douglas production function with the variables work and capital is valid in Romania's case because the parameters of the exogenous variables are significantly different from zero. This model became valid after we eliminated the autocorrelation of errors; removing the autocorrelation of errors does not alter the structure of the production function. The adjusted R2 determination coefficient, as well as the α and β coefficients, have values close to those from the first estimated equation. The regression of the GDP is characterized by decreasing marginal efficiency of the capital stock (α < 1) and decreasing efficiency of work (β < 1). In our case the sum of the α and β coefficients is below 1 (0.75, and 0.89 in the case of the second model), which corresponds to decreasing returns of the production function. Concerning the working population of Romania, it registered a growing trend from 2000 until 2005, a period that coincided with sustained economic growth.
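Estimating a Cobb-Douglas function Y = A·K^α·L^β reduces to ordinary least squares on logs, ln Y = ln A + α ln K + β ln L. A sketch on synthetic data (the paper uses Romanian quarterly series from Eurostat, not these simulated values):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60                                   # e.g. quarterly observations
lnK = rng.normal(10.0, 1.0, n)           # synthetic log capital stock
lnL = rng.normal(8.0, 1.0, n)            # synthetic log labor input
# True parameters for the simulation: lnA = 1.2, alpha = 0.4, beta = 0.35
lnY = 1.2 + 0.4 * lnK + 0.35 * lnL + rng.normal(0.0, 0.01, n)

# ln Y = ln A + alpha*ln K + beta*ln L, fitted by ordinary least squares
X = np.column_stack([np.ones(n), lnK, lnL])
lnA, alpha, beta = np.linalg.lstsq(X, lnY, rcond=None)[0]
```

On real series the residuals are typically autocorrelated, which is why the paper re-estimates after correcting for error autocorrelation; the coefficient sum α + β then directly answers the returns-to-scale question (here it recovers 0.75, below 1, i.e. decreasing returns).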
International Nuclear Information System (INIS)
Stenmark, Matthew H.; Cao, Yue; Wang, Hesheng; Jackson, Andrew; Ben-Josef, Edgar; Ten Haken, Randall K.; Lawrence, Theodore S.; Feng, Mary
2014-01-01
Purpose: To estimate the limit of functional liver reserve for safe application of hepatic irradiation using changes in indocyanine green, an established assay of liver function. Materials and methods: From 2005 to 2011, 60 patients undergoing hepatic irradiation were enrolled in a prospective study assessing the plasma retention fraction of indocyanine green at 15 min (ICG-R15) prior to, during (at 60% of planned dose), and after radiotherapy (RT). The limit of functional liver reserve was estimated from the damage fraction of functional liver (DFL) post-RT, [1 − (ICG-R15(pre-RT)/ICG-R15(post-RT))], where no toxicity was observed, using a beta distribution function. Results: Of 48 evaluable patients, 3 (6%) developed RILD, all within 2.5 months of completing RT. The mean ICG-R15 for non-RILD patients pre-RT, during RT and 1 month post-RT was 20.3% (SE 2.6), 22.0% (3.0), and 27.5% (2.8), and for RILD patients was 6.3% (4.3), 10.8% (2.7), and 47.6% (8.8). RILD was observed at post-RT damage fractions of ≥78%. Both the DFL assessed by during-RT ICG and the MLD predicted the DFL post-RT (p < 0.0001). Limiting the post-RT DFL to 50% predicted a 99% probability of a true complication rate <15%. Conclusion: The DFL as assessed by changes in ICG during treatment serves as an early indicator of a patient's tolerance to hepatic irradiation.
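A "99% probability of a true complication rate <15%" statement is consistent with a zero-event beta posterior bound. The reading below, assuming a uniform Beta(1,1) prior so that zero events in n patients give a Beta(1, n+1) posterior, is our assumption about how such a figure can be computed, not necessarily the authors' exact procedure:

```python
def prob_rate_below(p, n_patients, n_events=0):
    """Posterior P(true complication rate < p) after n_events events in
    n_patients, uniform Beta(1,1) prior. For zero events the Beta(1, n+1)
    posterior has the closed form 1 - (1 - p)**(n + 1)."""
    if n_events != 0:
        raise NotImplementedError("closed form shown for zero events only")
    return 1.0 - (1.0 - p) ** (n_patients + 1)

# Smallest toxicity-free cohort giving 99% confidence the rate is < 15%
n = 0
while prob_rate_below(0.15, n) < 0.99:
    n += 1
```

Under these assumptions, 28 patients without toxicity below the 50% DFL threshold would support the stated confidence level.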
Estimation of the pulmonary input function in dynamic whole body PET
International Nuclear Information System (INIS)
Ho-Shon, K.; Buchen, P.; Meikle, S.R.; Fulham, M.J.; University of Sydney, Sydney, NSW
1998-01-01
Full text: Dynamic data acquisition in Whole Body PET (WB-PET) has the potential to measure the metabolic rate of glucose (MRGlc) in tissue in-vivo. Estimation of changes in tumoral MRGlc may be a valuable tool in cancer by providing a quantitative index of response to treatment. A necessary requirement is an input function (IF), which can be obtained from arterial, 'arterialised' venous or, in the case of lung tumours, pulmonary arterial blood. Our aim was to extract the pulmonary input function from dynamic WB-PET data using Principal Component Analysis (PCA), Factor Analysis (FA) and Maximum Entropy (ME) for the evaluation of patients undergoing induction chemotherapy for non-small cell lung cancer. PCA is first used as a method of dimension reduction to obtain a signal space, defined by an optimal metric and a set of vectors. FA is used together with an ME constraint to rotate these vectors to obtain 'physiological' factors. A form of entropy function that does not require normalised data was used. This enabled the introduction of a penalty function based on the blood concentration at the last time point, which provides an additional constraint. Tissue functions from 10 planes through normal lung were simulated. The model was a linear combination of an IF and a tissue time activity curve (TAC). The proportion of the IF to TAC was varied over the planes to simulate the apical-to-basal gradient in vascularity of the lung, and pseudo-Poisson noise was added. The method accurately extracted the IF at noise levels spanning the expected range for dynamic ROI data acquired with the interplane septa extended. Our method is minimally invasive because it requires only one late venous blood sample, and it is applicable to a wide range of tracers since it does not assume a particular compartmental model. Pilot data from 2 patients have been collected, enabling comparison of the estimated IF with direct blood sampling from the pulmonary artery
International Nuclear Information System (INIS)
Coelli, Tim J.; Gautier, Axel; Perelman, Sergio; Saplacan-Pop, Roxana
2013-01-01
The quality of electricity distribution is more and more scrutinized by regulatory authorities, with explicit reward and penalty schemes based on quality targets having been introduced in many countries. It is then of prime importance to know the cost of improving quality for a distribution system operator. In this paper, we focus on one dimension of quality, the continuity of supply, and we estimate the cost of preventing power outages. For that, we make use of the parametric distance function approach, assuming that outages enter the firm's production set as an input, an imperfect substitute for maintenance activities and capital investment. This allows us to identify the sources of technical inefficiency and the underlying trade-off faced by operators between quality and other inputs and costs. For this purpose, we use panel data on 92 electricity distribution units operated by ERDF (Electricité de France - Réseau Distribution) in the 2003–2005 financial years. Assuming a multi-output multi-input translog technology, we estimate that the cost of preventing one interruption is equal to 10.7 € for an average DSO. Furthermore, as one would expect, marginal quality improvements tend to be more expensive as quality itself improves. - Highlights: ► We estimate the implicit cost of outages for the main distribution company in France. ► For this purpose, we make use of a parametric distance function approach. ► Marginal quality improvements tend to be more expensive as quality itself improves. ► The cost of preventing one interruption varies from 1.8 € to 69.2 € (2005 prices). ► We estimate that, on average, it lies 33% above the regulated price of quality.
Grid occupancy estimation for environment perception based on belief functions and PCR6
Moras, Julien; Dezert, Jean; Pannetier, Benjamin
2015-05-01
In this contribution, we propose to improve the grid map occupancy estimation method developed so far based on belief function modeling and the classical Dempster's rule of combination. The grid map offers a useful representation of the perceived world for mobile robotics navigation. It will play a major role in the security (obstacle avoidance) of next generations of terrestrial vehicles, as well as in future autonomous navigation systems. In a grid map, the occupancy of each cell, representing a small piece of the surrounding area of the robot, must first be estimated from sensor measurements (typically LIDAR or camera), and then it must also be classified into different classes in order to get a complete and precise perception of the dynamic environment where the robot moves. So far, the estimation and the grid map updating have been done using fusion techniques based on the probabilistic framework, or on the classical belief function framework with an inverse model of the sensors, mainly because the latter offers an interesting management of uncertainties when the quality of available information is low and the sources of information appear as conflicting. To improve the performance of the grid map estimation, we propose in this paper to replace Dempster's rule of combination with the PCR6 rule (Proportional Conflict Redistribution rule #6) proposed in DSmT (Dezert-Smarandache Theory). As an illustrating scenario, we consider a platform moving in a dynamic area and we compare our new realistic simulation results (based on a LIDAR sensor) with those obtained by the probabilistic and the classical belief-based approaches.
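For a two-class occupancy frame, both combination rules fit in a few lines; for two sources PCR6 coincides with PCR5. A sketch over focal elements O (occupied), F (free) and their union OF (unknown), assuming this simple frame rather than the paper's full cell classification:

```python
def combine(m1, m2, rule='pcr5'):
    """Fuse two basic belief assignments over {O, F, OF}. 'dempster'
    normalizes the conflicting mass away; 'pcr5' (= PCR6 for two sources)
    redistributes each partial conflict m1(X)*m2(Y), X and Y disjoint,
    back to X and Y proportionally to their masses."""
    inter = {('O', 'O'): 'O', ('O', 'OF'): 'O', ('OF', 'O'): 'O',
             ('F', 'F'): 'F', ('F', 'OF'): 'F', ('OF', 'F'): 'F',
             ('OF', 'OF'): 'OF', ('O', 'F'): None, ('F', 'O'): None}
    out = {'O': 0.0, 'F': 0.0, 'OF': 0.0}
    k = 0.0
    for a in m1:
        for b in m2:
            p = m1[a] * m2[b]
            if p == 0.0:
                continue
            c = inter[(a, b)]
            if c is not None:
                out[c] += p
            elif rule == 'dempster':
                k += p                               # conflict, normalized away
            else:                                    # PCR5/PCR6 redistribution
                out[a] += p * m1[a] / (m1[a] + m2[b])
                out[b] += p * m2[b] / (m1[a] + m2[b])
    if rule == 'dempster':
        out = {x: v / (1.0 - k) for x, v in out.items()}
    return out

m1 = {'O': 0.6, 'F': 0.1, 'OF': 0.3}    # e.g. LIDAR-based inverse model
m2 = {'O': 0.2, 'F': 0.5, 'OF': 0.3}    # e.g. previous grid state
d = combine(m1, m2, 'dempster')
p = combine(m1, m2, 'pcr5')
```

The difference matters exactly in the high-conflict case the abstract targets: Dempster spreads the conflicting mass over all focal elements through normalization, while PCR5/PCR6 returns it to the hypotheses that generated it.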
Cook, Ellyn J.; van der Kaars, Sander
2006-10-01
We review attempts to derive quantitative climatic estimates from Australian pollen data, including the climatic envelope, climatic indicator and modern analogue approaches, and outline the need to pursue alternatives for use as input to, or validation of, simulations by models of past, present and future climate patterns. To this end, we have constructed and tested modern pollen-climate transfer functions for mainland southeastern Australia and Tasmania using the existing southeastern Australian pollen database, and for northern Australia using a new pollen database we are developing. After testing for statistical significance, 11 parameters were selected for mainland southeastern Australia, seven for Tasmania and six for northern Australia. The functions are based on weighted-averaging partial least squares regression, and their predictive ability was evaluated against modern observational climate data using leave-one-out cross-validation. Functions for summer, annual and winter rainfall and temperatures are most robust for southeastern Australia, while in Tasmania functions for minimum temperature of the coldest period, mean winter and mean annual temperature are the most reliable. In northern Australia, annual and summer rainfall and annual and summer moisture indices are the strongest. The validation of all functions means they can be applied to Quaternary pollen records from these three areas with confidence.
Volume-assisted estimation of liver function based on Gd-EOB-DTPA-enhanced MR relaxometry
Energy Technology Data Exchange (ETDEWEB)
Haimerl, Michael; Schlabeck, Mona; Verloh, Niklas; Fellner, Claudia; Stroszczynski, Christian; Wiggermann, Philipp [University Hospital Regensburg, Department of Radiology, Regensburg (Germany); Zeman, Florian [University Hospital Regensburg, Center for Clinical Trials, Regensburg (Germany); Nickel, Dominik [MR Applications Development, Siemens AG, Healthcare Sector, Erlangen (Germany); Barreiros, Ana Paula [University Hospital Regensburg, Department of Internal Medicine I, Regensburg (Germany); Loss, Martin [University Hospital Regensburg, Department of Surgery, Regensburg (Germany)
2016-04-15
To determine whether liver function as determined by indocyanine green (ICG) clearance can be estimated quantitatively from hepatic magnetic resonance (MR) relaxometry with gadoxetic acid (Gd-EOB-DTPA). One hundred and seven patients underwent an ICG clearance test and Gd-EOB-DTPA-enhanced MRI, including MR relaxometry at 3 Tesla. A transverse 3D VIBE sequence with an inline T1 calculation was acquired prior to and 20 minutes post-Gd-EOB-DTPA administration. The reduction rate of T1 relaxation time (rrT1) between pre- and post-contrast images and the liver volume-assisted index of T1 reduction rate (LVrrT1) were evaluated. The plasma disappearance rate of ICG (ICG-PDR) was correlated with the liver volume (LV), rrT1 and LVrrT1, providing an MRI-based estimated ICG-PDR value (ICG-PDR_est). Simple linear regression model showed a significant correlation of ICG-PDR with LV (r = 0.32; p = 0.001), T1_post (r = 0.65; p < 0.001) and rrT1 (r = 0.86; p < 0.001). Assessment of LV and consecutive evaluation of multiple linear regression model revealed a stronger correlation of ICG-PDR with LVrrT1 (r = 0.92; p < 0.001), allowing for the calculation of ICG-PDR_est. Liver function as determined using ICG-PDR can be estimated quantitatively from Gd-EOB-DTPA-enhanced MR relaxometry. Volume-assisted MR relaxometry has a stronger correlation with liver function than does MR relaxometry. (orig.)
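For concreteness, the T1 reduction rate can be sketched as below. The abstract does not print the formulas, so the conventional definition of a relative T1 drop is assumed, and a simple product with liver volume stands in for the volume-assisted index purely as an illustration.

```python
def t1_reduction_rate(t1_pre_ms: float, t1_post_ms: float) -> float:
    """Relative reduction of T1 between pre- and post-contrast images
    (assumed conventional definition, not quoted from the paper)."""
    return (t1_pre_ms - t1_post_ms) / t1_pre_ms

def lv_rr_t1(t1_pre_ms: float, t1_post_ms: float, liver_volume_l: float) -> float:
    """Liver-volume-assisted index: the abstract combines rrT1 with liver
    volume; a plain product is assumed here for illustration only."""
    return liver_volume_l * t1_reduction_rate(t1_pre_ms, t1_post_ms)
```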
Saupe, Florian; Knoblach, Andreas
2015-02-01
Two different approaches for the determination of frequency response functions (FRFs) are used for the non-parametric closed loop identification of a flexible joint industrial manipulator with serial kinematics. The two applied experiment designs are based on low power multisine and high power chirp excitations. The main challenge is to eliminate disturbances of the FRF estimates caused by the numerous nonlinearities of the robot. For the experiment design based on chirp excitations, a simple iterative procedure is proposed which allows exploiting the good crest factor of chirp signals in a closed loop setup. An interesting synergy of the two approaches, beyond validation purposes, is pointed out.
DEFF Research Database (Denmark)
Wellendorff, Jess; Lundgård, Keld Troen; Møgelhøj, Andreas
2012-01-01
A methodology for semiempirical density functional optimization, using regularization and cross-validation methods from machine learning, is developed. We demonstrate that such methods enable well-behaved exchange-correlation approximations in very flexible model spaces, thus avoiding the overfit... the energetics of intramolecular and intermolecular, bulk solid, and surface chemical bonding, and the developed optimization method explicitly handles making the compromise based on the directions in model space favored by different materials properties. The approach is applied to designing the Bayesian error... sets validates the applicability of BEEF-vdW to studies in chemistry and condensed matter physics. Applications of the approximation and its Bayesian ensemble error estimate to two intricate surface science problems support this.
Hahor, Waraporn; Thongprajukaew, Karun; Yoonram, Krueawan; Rodjaroen, Somrak
2016-11-01
Postmortem changes have been previously studied in some terrestrial animal models, but no prior information is available on aquatic species. Gastrointestinal functionality was investigated in terms of indices, protein concentration, digestive enzyme activity, and scavenging activity in an aquatic animal model, Nile tilapia, to assess postmortem changes. Dead fish were floated indoors, and samples were collected within 48 h after death. The stomasomatic index decreased with postmortem time and correlated positively with protein, pepsin-specific activity, and stomach scavenging activity. The intestosomatic index also decreased significantly and correlated positively with protein, the specific activities of trypsin, chymotrypsin, amylase, and lipase, and intestinal scavenging activity. In their postmortem changes, the digestive enzymes exhibited earlier degradation of lipid than of carbohydrate or protein. The intestine changed more rapidly than the stomach. The findings suggest that postmortem changes in gastrointestinal functionality can serve as primary data for estimating the time of death of an aquatic animal. © 2016 American Academy of Forensic Sciences.
Directory of Open Access Journals (Sweden)
João Carlos Medeiros
2014-06-01
Full Text Available Knowledge of the soil water retention curve (SWRC) is essential for understanding and modeling hydraulic processes in the soil. However, direct determination of the SWRC is time consuming and costly. In addition, it requires a large number of samples, due to the high spatial and temporal variability of soil hydraulic properties. An alternative is the use of models, called pedotransfer functions (PTFs), which estimate the SWRC from easy-to-measure properties. The aim of this paper was to test the accuracy of 16 point or parametric PTFs reported in the literature on different soils from the south and southeast of the State of Pará, Brazil. The PTFs tested were proposed by Pidgeon (1972), Lal (1979), Aina & Periaswamy (1985), Arruda et al. (1987), Dijkerman (1988), Vereecken et al. (1989), Batjes (1996), van den Berg et al. (1997), Tomasella et al. (2000), Hodnett & Tomasella (2002), Oliveira et al. (2002), and Barros (2010). We used a database that includes soil texture (sand, silt, and clay), bulk density, soil organic carbon, soil pH, cation exchange capacity, and the SWRC. Most of the PTFs tested did not show good performance in estimating the SWRC. The parametric PTFs, however, performed better than the point PTFs in assessing the SWRC in the tested region. Among the parametric PTFs, those proposed by Tomasella et al. (2000) achieved the best accuracy in estimating the empirical parameters of the van Genuchten (1980) model, especially when tested in the top soil layer.
Estimation of leaf area index in the sunflower as a function of thermal time
Directory of Open Access Journals (Sweden)
Dioneia Daiane Pitol Lucas
Full Text Available The aim of this study was to obtain a mathematical model for estimating the leaf area index (LAI) of a sunflower crop as a function of accumulated thermal time. The models were generated and their coefficients tested using data from experiments conducted on different sowing dates in the crop years 2007/08, 2008/09, 2009/10 and 2010/11 with two sunflower hybrids, Aguará 03 and Hélio 358. Linear leaf dimensions were used for the non-destructive measurement of the leaf area, and thermal time was used to quantify the biological time. With the accumulated thermal time (TTa) and LAI known for any one day after emergence, mathematical models were generated for estimating the LAI. The following models were obtained, as they presented the best fit (lowest root-mean-square error, RMSE): gaussian peak, cubic polynomial, sigmoidal and an adjusted compound model, the modified sigmoidal. The modified sigmoidal model had the best fit to the generation data and the highest value for the coefficient of determination (R²). In testing the models, the lowest values for root-mean-square error and the highest R² between the observed and estimated values were obtained with the modified sigmoidal model.
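The fitting workflow described above can be sketched as follows. The paper's actual "modified sigmoidal" form, coefficients and data are not given in the abstract, so a generic logistic curve, invented (TTa, LAI) observations, and guessed starting values are used purely for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid_lai(tta, lai_max, k, tta50):
    # generic sigmoidal LAI model: LAI as a function of
    # accumulated thermal time TTa (degree-days)
    return lai_max / (1.0 + np.exp(-k * (tta - tta50)))

# hypothetical (TTa, LAI) observations after emergence
tta = np.array([100.0, 300.0, 500.0, 700.0, 900.0, 1100.0])
lai = np.array([0.2, 0.8, 2.1, 3.4, 3.9, 4.0])

popt, _ = curve_fit(sigmoid_lai, tta, lai, p0=[4.0, 0.01, 500.0])
# RMSE is the fit criterion named in the abstract
rmse = np.sqrt(np.mean((sigmoid_lai(tta, *popt) - lai) ** 2))
```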
Directory of Open Access Journals (Sweden)
Enric Vilar
Full Text Available Residual Kidney Function (RKF) is associated with survival benefits in haemodialysis (HD) but is difficult to measure without urine collection. Middle molecules such as Cystatin C and β2-microglobulin accumulate in renal disease, and plasma levels have been used to estimate kidney function early in this condition. We investigated their use to estimate RKF in patients on HD. Cystatin C, β2-microglobulin, urea and creatinine levels were studied in patients on incremental high-flux HD or hemodiafiltration (HDF). Over sequential HD sessions, blood was sampled pre- and post-session 1 and pre-session 2 for estimation of these parameters. Urine was collected during the whole interdialytic interval for estimation of residual GFR (GFRResidual = mean of urea and creatinine clearance). The relationships of plasma Cystatin C and β2-microglobulin levels to GFRResidual and urea clearance were determined. Of the 341 patients studied, 64% had urine output >100 ml/day, 32.6% were on high-flux HD and 67.4% on HDF. The parameters most closely correlated with GFRResidual were 1/β2-microglobulin (r² = 0.67) and 1/Cystatin C (r² = 0.50). Both relationships were weaker at low GFRResidual. The best regression model for GFRResidual, explaining 67% of the variation, was: GFRResidual = 160.3 · (1/β2m) - 4.2, where β2m is the pre-dialysis β2-microglobulin concentration (mg/L). This model was validated in a separate cohort of 50 patients using Bland-Altman analysis. The area under the curve in Receiver Operating Characteristic analysis aimed at identifying subjects with urea clearance ≥2 ml/min/1.73 m² was 0.91 for β2-microglobulin and 0.86 for Cystatin C. A plasma β2-microglobulin cut-off of ≤19.2 mg/L allowed identification of patients with urea clearance ≥2 ml/min/1.73 m² with 90% specificity and 65% sensitivity. Plasma pre-dialysis β2-microglobulin levels can provide estimates of RKF which may have clinical utility and appear superior to Cystatin C. Use of cut-off levels
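The regression model and cut-off reported in the abstract can be applied directly; the function names below are ours, the numbers and units are those quoted above.

```python
def gfr_residual_est(beta2m_mg_per_l: float) -> float:
    """Estimated residual GFR (ml/min/1.73 m2) from the pre-dialysis
    beta2-microglobulin concentration, using the reported model
    GFRResidual = 160.3 * (1/beta2m) - 4.2."""
    return 160.3 / beta2m_mg_per_l - 4.2

def likely_urea_clearance_ge_2(beta2m_mg_per_l: float) -> bool:
    # reported cut-off: beta2m <= 19.2 mg/L identifies urea clearance
    # >= 2 ml/min/1.73 m2 with 90% specificity and 65% sensitivity
    return beta2m_mg_per_l <= 19.2
```

As the abstract notes, the relationship weakens at low residual GFR, so such point estimates should be read with caution near the lower range.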
Sisson, James B.; van Genuchten, Martinus Th.
1991-04-01
The unsaturated hydraulic properties are important parameters in any quantitative description of water and solute transport in partially saturated soils. Currently, most in situ methods for estimating the unsaturated hydraulic conductivity (K) are based on analyses that require estimates of the soil water flux and the pressure head gradient. These analyses typically involve differencing of field-measured pressure head (h) and volumetric water content (θ) data, a process that can significantly amplify instrumental and measurement errors. More reliable methods result when differencing of field data can be avoided. One such method is based on estimates of the gravity drainage curve K'(θ) = dK/dθ, which may be computed from observations of θ and/or h during the drainage phase of infiltration drainage experiments assuming unit gradient hydraulic conditions. The purpose of this study was to compare estimates of the unsaturated soil hydraulic functions on the basis of different combinations of field data θ, h, K, and K'. Five different data sets were used for the analysis: (1) θ-h, (2) K-θ, (3) K'-θ, (4) K-θ-h, and (5) K'-θ-h. The analysis was applied to previously published data for the Norfolk, Troup, and Bethany soils. The K-θ-h and K'-θ-h data sets consistently produced nearly identical estimates of the hydraulic functions. The K-θ and K'-θ data also resulted in similar curves, although results in this case were less consistent than those produced by the K-θ-h and K'-θ-h data sets. We conclude from this study that differencing of field data can be avoided and hence that there is no need to calculate soil water fluxes and pressure head gradients from inherently noisy field-measured θ and h data. The gravity drainage analysis also provides results over a much broader range of hydraulic conductivity values than is possible with the more standard instantaneous profile analysis, especially when augmented with independently measured soil water retention data.
International Nuclear Information System (INIS)
Ali Akkemik, K.
2009-01-01
The Turkish electricity sector has undergone significant institutional changes since 1984. The recent developments since 2001, including the setting up of a regulatory agency to oversee the sector and the increasing participation of private investors in electricity generation, are of special interest. This paper estimates cost functions and investigates the degree of scale economies, overinvestment, and technological progress in the Turkish electricity generation sector for the period 1984-2006 using long-run and short-run translog cost functions. Estimations were done for six groups of firms, public and private. The results indicate the existence of scale economies throughout the period of analysis, hence declining long-run average costs. The paper finds empirical support for the Averch-Johnson effect until 2001, i.e., firms overinvested in an environment where there were excess returns to capital. But this effect was largely reduced after 2002. Technological progress deteriorated slightly from 1984-1993 to 1994-2001 but improved after 2002. Overall, the paper finds that regulation of the market under the newly established regulatory agency after 2002 was effective and that there are potential gains from such regulation. (author)
Directory of Open Access Journals (Sweden)
Betsie le Roux
2016-10-01
Full Text Available Water footprint (WF) accounting as proposed by the Water Footprint Network (WFN) can potentially provide important information for water resource management, especially in water-scarce countries relying on irrigation to help meet their food requirements. However, calculating accurate WFs of short-season vegetable crops such as carrots, cabbage, beetroot, broccoli and lettuce presented some challenges. Planting dates and inter-annual weather conditions impact WF results. Supplementing weather datasets of just rainfall and minimum and maximum temperature with solar radiation and wind speed affected crop model estimates and WF results. The functional unit selected can also have a major impact on results. For example, WFs according to the WFN approach do not account for crop residues used for other purposes, like composting and animal feed. Using yields in dry matter rather than fresh mass also impacts WF metrics, making comparisons difficult. To overcome this, using the nutritional value of crops as a functional unit can connect water use more directly to the potential benefits derived from different crops and allow more straightforward comparisons. Grey WFs based on nitrogen alone disregard water pollution caused by phosphates, pesticides and salinization. Poor understanding of the fate of nitrogen complicates estimation of nitrogen loads into the aquifer.
Complex mode indication function and its applications to spatial domain parameter estimation
Shih, C. Y.; Tsuei, Y. G.; Allemang, R. J.; Brown, D. L.
1988-10-01
This paper introduces the concept of the Complex Mode Indication Function (CMIF) and its application in spatial domain parameter estimation. The concept of CMIF is developed by performing singular value decomposition (SVD) of the Frequency Response Function (FRF) matrix at each spectral line. The CMIF is defined as the eigenvalues, which are the squares of the singular values, solved from the normal matrix formed from the FRF matrix, [H(jω)]^H [H(jω)], at each spectral line. The CMIF appears to be a simple and efficient method for identifying the modes of the complex system. The CMIF identifies modes by showing the physical magnitude of each mode and the damped natural frequency for each root. Since multiple reference data is applied in CMIF, repeated roots can be detected. The CMIF also gives global modal parameters, such as damped natural frequencies, mode shapes and modal participation vectors. Since CMIF works in the spatial domain, uneven frequency spacing data such as data from spatial sine testing can be used. A second-stage procedure for accurate damped natural frequency and damping estimation as well as mode shape scaling is also discussed in this paper.
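The per-spectral-line SVD described above reduces to a few lines of NumPy; the array layout below is an assumption for illustration.

```python
import numpy as np

def cmif(frf: np.ndarray) -> np.ndarray:
    """Complex Mode Indication Function.

    frf : complex FRF matrix of shape (n_spectral_lines, n_outputs, n_inputs).
    Returns, for each spectral line, the eigenvalues of the normal matrix
    [H]^H [H], i.e. the squared singular values of H, sorted descending.
    """
    # batched SVD: singular values of H at every spectral line at once
    s = np.linalg.svd(frf, compute_uv=False)
    return s ** 2

# Peaks of each CMIF curve over frequency indicate damped natural
# frequencies; several curves peaking together reveal repeated roots.
```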
Directory of Open Access Journals (Sweden)
Patrick McNamara
2010-01-01
Results. Patients' estimates of their own social functioning were not significantly different from examiners' estimates. Analysis of the impact of clinical variables on social functioning in PD revealed depression to have the strongest association with social functioning on both the patient and the examiner version of the Social Adaptation Self-Evaluation Scale. Conclusions. PD patients appear to be well aware of their social strengths and weaknesses. Depression and motor symptom severity are significant predictors of both self- and examiner-reported social functioning in patients with PD. Assessment and treatment of depression in patients with PD may improve social functioning and overall quality of life.
Efficient Estimation of Dynamic Density Functions with Applications in Streaming Data
Qahtan, Abdulhakim
2016-05-11
Recent advances in computing technology allow for collecting vast amounts of data that arrive continuously in the form of streams. Mining data streams is challenged by the speed and volume of the arriving data. Furthermore, the underlying distribution of the data changes over time in unpredicted ways. To reduce the computational cost, data streams are often studied through a condensed representation, e.g., the Probability Density Function (PDF). This thesis aims at developing an online density estimator that builds a model called KDE-Track for characterizing the dynamic density of data streams. KDE-Track estimates the PDF of the stream at a set of resampling points and uses interpolation to estimate the density at any given point. To reduce the interpolation error and computational complexity, we introduce adaptive resampling, where more/fewer resampling points are used in highly/less curved regions of the PDF. The PDF values at the resampling points are updated online to provide an up-to-date model of the data stream. Compared with other existing online density estimators, KDE-Track is often more accurate (as reflected by smaller error values) and more computationally efficient (as reflected by shorter running time). The anytime-available PDF estimated by KDE-Track can be applied to visualizing the dynamic density of data streams, outlier detection and change detection in data streams. In this thesis work, the first application is to visualize the taxi traffic volume in New York City. Utilizing KDE-Track allows for visualizing and monitoring the traffic flow in real time without extra overhead and provides insight into the pick-up demand that can be utilized by service providers to improve service availability. The second application is to detect outliers in data streams from sensor networks based on the estimated PDF. The method detects outliers accurately and outperforms baseline methods designed for detecting and cleaning outliers in sensor data. The
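The resampling-points-plus-interpolation idea can be illustrated with a minimal fixed-grid online kernel density estimator. This is not the authors' KDE-Track algorithm (it omits adaptive resampling and the efficient update scheme); the grid, bandwidth and class design are our own for the sketch.

```python
import numpy as np

class OnlineKDE:
    """Illustrative online density estimator: the PDF is maintained at
    fixed resampling points and queried by linear interpolation."""

    def __init__(self, grid, bandwidth=0.3):
        self.grid = np.asarray(grid, dtype=float)
        self.h = bandwidth
        self.density = np.zeros_like(self.grid)
        self.n = 0

    def update(self, x: float) -> None:
        # incremental mean of Gaussian kernels centred at arriving points
        k = np.exp(-0.5 * ((self.grid - x) / self.h) ** 2)
        k /= self.h * np.sqrt(2.0 * np.pi)
        self.n += 1
        self.density += (k - self.density) / self.n

    def pdf(self, x):
        # interpolate between resampling points for any query location
        return np.interp(x, self.grid, self.density)
```

A true streaming estimator would additionally discount old data so the model tracks distribution drift; here every sample is weighted equally.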
Kerimov, M. K.
2018-01-01
This paper is the fourth in a series of survey articles concerning zeros of Bessel functions and methods for their computation. Various inequalities, estimates, expansions, etc. for positive zeros are analyzed, and some results are described in detail with proofs.
Wang, Christina Hao; Rubinsky, Anna D; Minichiello, Tracy; Shlipak, Michael G; Price, Erika Leemann
2018-05-31
Current practice in anticoagulation dosing relies on kidney function estimated by serum creatinine using the Cockcroft-Gault equation. However, creatinine can be unreliable in patients with low or high muscle mass. Cystatin C provides an alternative estimation of glomerular filtration rate (eGFR) that is independent of muscle. We compared cystatin C-based eGFR (eGFR_cys) with multiple creatinine-based estimates of kidney function in hospitalized patients receiving anticoagulants, to assess for discordant results that could impact medication dosing. Retrospective chart review of hospitalized patients over 1 year who received non-vitamin K antagonist anticoagulation, and who had same-day measurements of cystatin C and creatinine. Seventy-five inpatient veterans (median age 68) at the San Francisco VA Medical Center (SFVAMC). We compared the median difference between eGFR by the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) study equation using cystatin C (eGFR_cys) and eGFRs using three creatinine-based equations: CKD-EPI (eGFR_EPI), Modified Diet in Renal Disease (eGFR_MDRD), and Cockcroft-Gault (eGFR_CG). We categorized patients into standard KDIGO kidney stages and into drug-dosing categories based on each creatinine equation and calculated proportions of patients reclassified across these categories based on cystatin C. Cystatin C predicted overall lower eGFR compared to creatinine-based equations, with a median difference of -7.1 (IQR -17.2, 2.6) mL/min/1.73 m² versus eGFR_EPI, -21.2 (IQR -43.7, -8.1) mL/min/1.73 m² versus eGFR_MDRD, and -25.9 (IQR -46.8, -8.7) mL/min/1.73 m² versus eGFR_CG. Thirty-one to 52% of patients were reclassified into lower drug-dosing categories using cystatin C compared to creatinine-based estimates. We found substantial discordance in eGFR comparing cystatin C with creatinine in this group of anticoagulated inpatients. Our sample size was limited and included few women. Further
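The category-reclassification comparison described above can be sketched with the standard KDIGO GFR categories; the thresholds are the published KDIGO bands, while the helper names are ours and the study's drug-dosing categories (which differ from KDIGO stages) are not reproduced here.

```python
def kdigo_stage(egfr: float) -> str:
    """Map eGFR (mL/min/1.73 m2) to a KDIGO GFR category G1-G5."""
    if egfr >= 90:
        return "G1"
    if egfr >= 60:
        return "G2"
    if egfr >= 45:
        return "G3a"
    if egfr >= 30:
        return "G3b"
    if egfr >= 15:
        return "G4"
    return "G5"

def reclassified_by_cystatin(egfr_creatinine: float, egfr_cystatin: float) -> bool:
    # a patient counts as reclassified when the cystatin C-based estimate
    # lands in a different category than the creatinine-based one
    return kdigo_stage(egfr_creatinine) != kdigo_stage(egfr_cystatin)
```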
Del Casale, Antonio; Ferracuti, Stefano; Rapinesi, Chiara; De Rossi, Pietro; Angeletti, Gloria; Sani, Gabriele; Kotzalidis, Georgios D; Girardi, Paolo
2015-12-01
Several studies reported that hypnosis can modulate pain perception and tolerance by affecting cortical and subcortical activity in brain regions involved in these processes. We conducted an Activation Likelihood Estimation (ALE) meta-analysis on functional neuroimaging studies of pain perception under hypnosis to identify brain activation-deactivation patterns occurring during hypnotic suggestions aiming at pain reduction, including hypnotic analgesic, pleasant, or depersonalization suggestions (HASs). We searched the PubMed, Embase and PsycInfo databases; we included papers published in peer-reviewed journals dealing with functional neuroimaging and hypnosis-modulated pain perception. The ALE meta-analysis encompassed data from 75 healthy volunteers reported in 8 functional neuroimaging studies. HASs during experimentally-induced pain compared to control conditions correlated with significant activations of the right anterior cingulate cortex (Brodmann's Area [BA] 32), left superior frontal gyrus (BA 6), and right insula, and deactivation of right midline nuclei of the thalamus. HASs during experimental pain impact both cortical and subcortical brain activity. The anterior cingulate, left superior frontal, and right insular cortices activation increases could induce a thalamic deactivation (top-down inhibition), which may correlate with reductions in pain intensity. Copyright © 2016 Elsevier Ltd. All rights reserved.
Asiri, Sharefa M.; Laleg-Kirati, Taous-Meriem
2016-01-01
In this paper, a modulating-functions-based method is proposed for estimating space–time-dependent unknowns in one-dimensional partial differential equations. The proposed method simplifies the problem into a system of algebraic equations linear
International Nuclear Information System (INIS)
AGUIAR, Pablo; RUIBAL, Álvaro; CORTÉS, Julia; PÉREZ-FENTES, Daniel; GARCÍA, Camilo; GARRIDO, Miguel
2016-01-01
The aim of this study was to develop a method for estimating DMSA SPECT renal function in each renal pole in order to evaluate the effect of percutaneous nephrolithotripsy (PCNL) by focusing the measurements on the region through which the percutaneous approach is performed. Twenty patients undergoing PCNL between November 2010 and June 2012 were included in this study. Both planar and SPECT DMSA studies were carried out before and after nephrolithotripsy. The effect of PCNL was evaluated by estimating the total renal function and the regional renal function of each renal pole. Although PCNL has previously been reported to be a minimally invasive technique, our results showed that regional renal function decreased in the treated pole in most patients, affecting the total renal function in a few of them. A quantification method was used for estimating the SPECT DMSA renal function of the upper, interpolar and lower renal poles. Our results confirmed that total renal function was preserved after nephrolithotripsy. Nevertheless, the proposed method showed that the regional renal function of the treated pole decreased in most patients (15 of 20 patients), allowing us to find differences in patients who had not shown changes in the total renal function obtained from conventional quantification methods. In conclusion, a method for estimating the SPECT DMSA renal function focused on the treated pole enabled us to show for the first time that nephrolithotripsy can lead to renal parenchymal damage restricted to the treated pole.
Directory of Open Access Journals (Sweden)
Meyer Karin
2001-11-01
Full Text Available Abstract A random regression model for the analysis of "repeated" records in animal breeding is described which combines a random regression approach for additive genetic and other random effects with the assumption of a parametric correlation structure for within-animal covariances. Both stationary and non-stationary correlation models involving a small number of parameters are considered. Heterogeneity in within-animal variances is modelled through polynomial variance functions. Estimation of the parameters describing the dispersion structure of such a model by restricted maximum likelihood, via an "average information" algorithm, is outlined. An application to mature weight records of beef cows is given, and results are contrasted with those from analyses fitting sets of random regression coefficients for permanent environmental effects.
Katherine A. Zeller; Kevin McGarigal; Paul Beier; Samuel A. Cushman; T. Winston Vickers; Walter M. Boyce
2014-01-01
Estimating landscape resistance to animal movement is the foundation for connectivity modeling, and resource selection functions based on point data are commonly used to empirically estimate resistance. In this study, we used GPS data points acquired at 5-min intervals from radiocollared pumas in southern California to model context-dependent point selection...
Tøndel, Camilla; Ramaswami, Uma; Aakre, Kristin Moberg; Wijburg, Frits; Bouwman, Machtelt; Svarstad, Einar
2010-01-01
Studies on renal function in children with Fabry disease have mainly been done using estimated creatinine-based glomerular filtration rate (GFR). The aim of this study was to compare estimated creatinine-based GFR (eGFR) with measured GFR (mGFR) in children with Fabry disease and normal renal
Eilers, Anna-Christina; Hennawi, Joseph F.; Lee, Khee-Gan
2017-08-01
We present a new Bayesian algorithm making use of Markov Chain Monte Carlo sampling that allows us to simultaneously estimate the unknown continuum level of each quasar in an ensemble of high-resolution spectra, as well as their common probability distribution function (PDF) for the transmitted Lyα forest flux. This fully automated PDF regulated continuum fitting method models the unknown quasar continuum with a linear principal component analysis (PCA) basis, with the PCA coefficients treated as nuisance parameters. The method allows one to estimate parameters governing the thermal state of the intergalactic medium (IGM), such as the slope of the temperature-density relation γ − 1, while marginalizing out continuum uncertainties in a fully Bayesian way. Using realistic mock quasar spectra created from a simplified semi-numerical model of the IGM, we show that this method recovers the underlying quasar continua to a precision of ≃7% and ≃10% at z = 3 and z = 5, respectively. Given the number of principal component spectra, this is comparable to the underlying accuracy of the PCA model itself. Most importantly, we show that we can achieve a nearly unbiased estimate of the slope γ − 1 of the IGM temperature-density relation with a precision of ±8.6% at z = 3 and ±6.1% at z = 5, for an ensemble of ten mock high-resolution quasar spectra. Applying this method to real quasar spectra and comparing to a more realistic IGM model from hydrodynamical simulations would enable precise measurements of the thermal and cosmological parameters governing the IGM, albeit with somewhat larger uncertainties, given the increased flexibility of the model.
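A minimal sketch of the linear PCA continuum model mentioned above: the real method treats the PCA coefficients as nuisance parameters inside an MCMC sampler, whereas here we only show basis construction from training continua and a least-squares projection (function names and array shapes are our own).

```python
import numpy as np

def pca_basis(training_spectra: np.ndarray, n_components: int):
    """Build a linear PCA basis from training continua.

    training_spectra : (n_spectra, n_pixels) array.
    Returns the mean spectrum and the top n_components orthonormal
    principal components (rows of shape (n_components, n_pixels)).
    """
    mean = training_spectra.mean(axis=0)
    _, _, vt = np.linalg.svd(training_spectra - mean, full_matrices=False)
    return mean, vt[:n_components]

def reconstruct(spectrum: np.ndarray, mean: np.ndarray, basis: np.ndarray):
    # least-squares PCA coefficients (basis rows are orthonormal)
    coeffs = basis @ (spectrum - mean)
    return mean + coeffs @ basis, coeffs
```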
International Nuclear Information System (INIS)
Saito, Reiko; Uemura, Koji; Uchiyama, Akihiko; Toyama, Hinako; Ishii, Kenji; Senda, Michio
2001-01-01
The purpose of this paper is to estimate the extent of atrophy and the decline in brain function objectively and quantitatively. Two-dimensional (2D) projection images of three-dimensional (3D) transaxial images of positron emission tomography (PET) and magnetic resonance imaging (MRI) were made by means of the Mollweide method, which preserves the area of the brain surface. A correlation image was generated between the 2D projection images of MRI and the cerebral blood flow (CBF) or ¹⁸F-fluorodeoxyglucose (FDG) PET images, and the sulcus was extracted from the correlation image clustered by the K-means method. Furthermore, the extent of atrophy was evaluated from the extracted sulcus on the 2D-projection MRI, the cerebral cortical function such as blood flow or glucose metabolic rate was assessed in the cortex excluding the sulcus on the 2D-projection PET image, and then the relationship between cerebral atrophy and function was evaluated. This method was applied to two groups, young and aged normal subjects, and the relationship between age and the rate of atrophy or the cerebral blood flow was investigated. The method was also applied to FDG-PET and MRI studies in normal controls and in patients with corticobasal degeneration. The mean rate of atrophy in the aged group was found to be higher than that in the young. The mean value and the variance of the cerebral blood flow for the young were greater than those of the aged. The sulci were similarly extracted using either CBF or FDG PET images. The proposed method using 2D projection images of MRI and PET is clinically useful for quantitative assessment of atrophic change and functional disorder of the cerebral cortex. (author)
Smooth semi-nonparametric (SNP) estimation of the cumulative incidence function.
Duc, Anh Nguyen; Wolbers, Marcel
2017-08-15
This paper presents a novel approach to estimation of the cumulative incidence function in the presence of competing risks. The underlying statistical model is specified via a mixture factorization of the joint distribution of the event type and the time to the event. The time-to-event distributions conditional on the event type are modeled using smooth semi-nonparametric densities. One strength of this approach is that it can handle arbitrary censoring and truncation while relying on mild parametric assumptions. A stepwise forward algorithm for model estimation and adaptive selection of the smooth semi-nonparametric polynomial degrees is presented, implemented in the statistical software R, evaluated in a sequence of simulation studies, and applied to data from a clinical trial in cryptococcal meningitis. The simulations demonstrate that the proposed method frequently outperforms both parametric and nonparametric alternatives. They also support the use of 'ad hoc' asymptotic inference to derive confidence intervals. An extension to regression modeling is also presented, and its potential and challenges are discussed. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
International Nuclear Information System (INIS)
Azadeh, A.; Saberi, M.; Ghaderi, S.F.; Gitiforouz, A.; Ebrahimipour, V.
2008-01-01
This study presents an integrated fuzzy system, data mining, and time series framework to estimate and predict electricity demand under seasonal and monthly changes in electricity consumption, especially in developing countries such as China and Iran with non-stationary data. The uncertain behavior of energy consumption is difficult to model with a conventional fuzzy system or time series alone, so the integrated algorithm is an attractive substitute in such cases. A fuzzy system requires a rule base; because none is available for the demand function, the look-up table approach, one of the rule-extraction methods, is used to extract the rule base (this system is denoted FLT). The decision tree method, a data mining approach, is likewise used to extract a rule base (this system is denoted FDM). The preferred time series model is selected from linear (ARMA) and nonlinear candidates: after the preferred ARMA model is chosen, the McLeod-Li test is applied to check for nonlinearity; when nonlinearity is detected, the preferred nonlinear model is selected, compared with the preferred ARMA model, and one of the two is retained as the time series model. Finally, ANOVA is used to select the preferred model from among the fuzzy models and the time series model. The algorithm also accounts for the impact of data preprocessing and postprocessing on fuzzy system performance. A further distinctive feature of the proposed algorithm is the use of the autocorrelation function (ACF) to define the input variables, whereas conventional methods rely on trial and error. Monthly electricity consumption of Iran from 1995 to 2005 is considered as the case study. The MAPE of a genetic algorithm (GA) and an artificial neural network (ANN) versus the proposed algorithm shows the appropriateness of the proposed algorithm
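The ACF-based selection of input variables described above can be sketched in a few lines. The toy series, its 12-month seasonal period, and the ±2/√n significance cut-off are illustrative assumptions, not values from the study:

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelation function r_0 .. r_max_lag."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = np.dot(x, x)
    r = [1.0]
    for k in range(1, max_lag + 1):
        r.append(np.dot(x[:-k], x[k:]) / denom)
    return np.array(r)

# Toy monthly series with a 12-month seasonal cycle plus noise.
rng = np.random.default_rng(0)
n = 120
t = np.arange(n)
series = np.sin(2 * np.pi * t / 12) + 0.1 * rng.normal(size=n)

r = acf(series, 24)
# Keep lags with significant autocorrelation (roughly |r_k| > 2/sqrt(n))
# as candidate input variables for the fuzzy / time series models.
inputs = [k for k in range(1, 25) if abs(r[k]) > 2.0 / np.sqrt(n)]
```

For the seasonal series above, lag 12 (and its harmonics) ends up in the selected inputs, which is exactly the kind of structure a trial-and-error search can miss.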
Cai, Jianhua
2017-05-01
The time-frequency analysis method represents a signal as a function of time and frequency and is considered a powerful tool for handling arbitrary non-stationary time series through instantaneous frequency and instantaneous amplitude. It therefore provides a possible alternative for the analysis of the non-stationary magnetotelluric (MT) signal. Based on the Hilbert-Huang transform (HHT), a time-frequency analysis method is proposed to obtain stable estimates of the magnetotelluric response function. In contrast to conventional methods, the response function estimation is performed in the time-frequency domain using instantaneous spectra rather than in the frequency domain, which allows the response parameter content to be imaged as a function of time and frequency. The theory of the method is presented, and the mathematical model and calculation procedure used to estimate the response function from the HHT time-frequency spectrum are discussed. To evaluate the results, response function estimates are compared with estimates from a standard MT data processing method based on the Fourier transform. All results show that the apparent resistivities and phases calculated with the HHT time-frequency method are generally more stable and reliable than those determined from simple Fourier analysis. The proposed method overcomes the drawbacks of the traditional Fourier methods, and the resulting parameters minimise the estimation bias caused by the non-stationary characteristics of the MT data.
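A minimal numpy-only illustration of the instantaneous quantities that HHT-style processing relies on: the analytic signal yields instantaneous amplitude and frequency. The 50 Hz test tone and the sampling rate are arbitrary choices, not MT data:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT (a discrete Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(X * h)

fs = 1000.0                          # sampling rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)
sig = np.cos(2 * np.pi * 50.0 * t)   # 50 Hz test tone

z = analytic_signal(sig)
amp = np.abs(z)                                  # instantaneous amplitude
phase = np.unwrap(np.angle(z))
inst_freq = np.diff(phase) * fs / (2 * np.pi)    # instantaneous frequency (Hz)
core = inst_freq[100:-100]                       # discard edge effects
```

For a non-stationary MT record the instantaneous spectrum obtained this way (per intrinsic mode function, after empirical mode decomposition) is what replaces the Fourier spectrum in the response function estimate.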
Motion estimation for cardiac functional analysis using two x-ray computed tomography scans.
Fung, George S K; Ciuffo, Luisa; Ashikaga, Hiroshi; Taguchi, Katsuyuki
2017-09-01
This work concerns computed tomography (CT)-based cardiac functional analysis (CFA) with a reduced radiation dose. As CT-CFA requires images over the entire heartbeat, the scans are often performed at 10-20% of the tube current settings typically used for coronary CT angiography. The resulting large image noise degrades the accuracy of motion estimation. Moreover, even if the scan is performed during sinus rhythm, the cardiac motion observed in CT images may not be cyclic in patients with atrial fibrillation. In this study, we propose to use data from two CT scans: one for CT angiography at a quiescent phase at a standard dose, and the other for CFA over the entire heartbeat at a lower dose. We have made the following four modifications to an image-based cardiac motion estimation method we previously developed for full-dose retrospectively gated coronary CT angiography: (a) a full-dose prospectively gated coronary CT angiography image acquired at the least-motion phase was used as the reference image; (b) a three-dimensional median filter was applied to lower-dose retrospectively gated cardiac images acquired at 20 phases over one heartbeat in order to reduce image noise; (c) the strength of the temporal regularization term was made adaptive; and (d) a one-dimensional temporal filter was applied to the estimated motion vector field in order to decrease jagged motion patterns. We describe the conventional method iME1 and the proposed method iME2 in this article. Five observers assessed the accuracy of the estimated motion vector fields of iME2 and iME1 using a 4-point scale. The observers repeated the assessment with the data presented in a new random order 1 week after the first session. The study confirmed that the proposed iME2 was robust against mismatches in noise levels, contrast enhancement levels, and chamber shapes. There was a statistically significant difference between iME2 and iME1 (accuracy score, 2.08 ± 0.81 versus 2.77
[Cardiac Synchronization Function Estimation Based on ASM Level Set Segmentation Method].
Zhang, Yaonan; Gao, Yuan; Tang, Liang; He, Ying; Zhang, Huie
At present, there are no accurate, quantitative methods for determining cardiac mechanical synchronism, and quantitative determination of the synchronization function of the four cardiac cavities from medical images has great clinical value. This paper uses whole-heart ultrasound image sequences and segments the left and right atria and left and right ventricles in each frame. After segmentation, the number of pixels in each cavity in each frame is recorded, and the areas of the four cavities across the image sequence are thereby obtained. The area change curves of the four cavities are then extracted, yielding the synchrony information of the four cavities. Because of the low SNR of ultrasound images, the boundaries of the cardiac cavities are vague, so the extraction of cardiac contours remains a challenging problem. Therefore, ASM model information is added to the traditional level set method to guide the curve evolution process. According to the experimental results, the improved method increases the accuracy of the segmentation. Furthermore, based on the ventricular segmentation, the right and left ventricular systolic functions are evaluated, mainly from the area changes. The synchronization of the four cavities of the heart is estimated based on the area changes and the volume changes.
International Nuclear Information System (INIS)
Larsson, I.; Lindstedt, E.; Ohlin, P.; Strand, S.E.; White, T.
1975-01-01
A scintillation camera technique was used for measuring renal uptake of [131I]Hippuran 80-110 s after injection. Externally measured Hippuran uptake was markedly influenced by kidney depth, which was measured on a lateral-view image after injection of [99Tc]iron ascorbic acid complex or [197Hg]chlormerodrine. When one kidney was nearer to the dorsal surface of the body than the other, it was necessary to correct the externally measured Hippuran uptake for kidney depth to obtain reliable information on the true partition of Hippuran between the two kidneys. In some patients the glomerular filtration rate (GFR) was measured before and after nephrectomy. Measured postoperative GFR was compared with the preoperative predicted GFR, which was calculated by multiplying the preoperative Hippuran uptake of the kidney to be left in situ, as a fraction of the preoperative Hippuran uptake of both kidneys, by the measured preoperative GFR. The measured postoperative GFR was usually moderately higher than the preoperatively predicted GFR. The difference could be explained by a postoperative compensatory increase in function of the remaining kidney. Thus, the present method offers a possibility of estimating separate kidney function without arterial or ureteric catheterization. (auth)
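The prediction step described above reduces to multiplying the preoperative GFR by the remaining kidney's fractional Hippuran uptake. The numbers below are hypothetical, chosen only to make the arithmetic concrete:

```python
def predicted_postop_gfr(uptake_remaining, uptake_removed, preop_gfr):
    """Predicted post-nephrectomy GFR from the split of Hippuran uptake.

    uptake_remaining / uptake_removed: depth-corrected Hippuran uptakes of the
    kidney left in situ and the kidney to be removed (arbitrary units).
    preop_gfr: measured preoperative GFR (mL/min).
    """
    fraction = uptake_remaining / (uptake_remaining + uptake_removed)
    return fraction * preop_gfr

# Hypothetical case: the remaining kidney carries 60% of total uptake and the
# preoperative GFR is 90 mL/min.
gfr = predicted_postop_gfr(60.0, 40.0, 90.0)   # -> 54.0 mL/min
```

As the abstract notes, the measured postoperative GFR would typically come out somewhat above this prediction because of compensatory hypertrophy of the remaining kidney.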
Directory of Open Access Journals (Sweden)
Chris Bambey Guure
2012-01-01
Full Text Available The Weibull distribution has been observed to be one of the most useful distributions for modelling and analysing lifetime data in engineering, biology, and other fields. Studies have been carried out vigorously in the literature to determine the best method of estimating its parameters. Recently, much attention has been given to the Bayesian estimation approach, which is in contention with other estimation methods. In this paper, we examine the performance of the maximum likelihood estimator and Bayesian estimators using an extension of Jeffreys' prior information with three loss functions, namely the linear exponential loss, the general entropy loss, and the squared error loss function, for estimating the two-parameter Weibull failure time distribution. These methods are compared by mean squared error through a simulation study with varying sample sizes. The results show that the Bayesian estimator using the extension of Jeffreys' prior under the linear exponential loss function in most cases gives the smallest mean squared error and absolute bias for both the scale parameter α and the shape parameter β for the given values of the extension of Jeffreys' prior.
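How the choice of loss function changes the Bayes estimate can be illustrated with posterior draws. The gamma-shaped "posterior" below is a stand-in for illustration only, not the paper's Weibull posterior under the extended Jeffreys prior:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in posterior draws for a positive scale-type parameter (mean ~ 2.0).
theta = rng.gamma(shape=50.0, scale=0.04, size=20000)

def bayes_linex(samples, a):
    """Bayes estimator under the linear exponential (LINEX) loss:
    theta_hat = -(1/a) * log E[exp(-a * theta)]."""
    return -np.log(np.mean(np.exp(-a * samples))) / a

def bayes_squared_error(samples):
    """Under squared error loss the Bayes estimator is the posterior mean."""
    return float(np.mean(samples))

est_linex = bayes_linex(theta, a=1.0)
est_se = bayes_squared_error(theta)
# With a > 0, LINEX penalises overestimation more heavily, so by Jensen's
# inequality est_linex is pulled below the posterior mean.
```

The same posterior thus yields different point estimates under different losses, which is exactly the comparison the simulation study carries out for the Weibull parameters.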
Blair, Clancy; Raver, C. Cybele; Berry, Daniel J.
2014-01-01
In the current article, we contrast 2 analytical approaches to estimating the relation of parenting to executive function development in a sample of 1,292 children assessed longitudinally between 36 and 60 months of age. Children were administered a newly developed and validated battery of 6 executive function tasks tapping inhibitory…
Pedotransfer functions to estimate water retention parameters of soils in northeastern Brazil
Directory of Open Access Journals (Sweden)
Alexandre Hugo Cezar Barros
2013-04-01
Full Text Available Pedotransfer functions (PTFs) were developed to estimate the parameters (α, n, θr, and θs) of the van Genuchten (1980) model describing soil water retention curves. The data came from various sources, mainly from studies conducted by universities in Northeast Brazil, by the Brazilian Agricultural Research Corporation (Embrapa), and by the corporation for the development of the São Francisco and Parnaíba river basins (Codevasf), totaling 786 retention curves, which were divided into two data sets: 85 % for the development of the PTFs and 15 %, considered independent data, for testing and validation. Aside from general PTFs developed for all soils together, specific PTFs were developed for the soil classes Ultisols, Oxisols, Entisols, and Alfisols by multiple regression, using a stepwise procedure (forward and backward) to select the best predictors. Two types of PTFs were developed: the first included all predictors (bulk density and the proportions of sand, silt, clay, and organic matter), and the second only the proportions of sand, silt, and clay. The adequacy of the PTFs was evaluated using the correlation coefficient (R) and the Willmott index (d). To evaluate the PTFs for the moisture content at specific pressure heads, we used the root mean square error (RMSE). The PTF-predicted retention curve is relatively poor, except for the residual water content. The inclusion of organic matter as a PTF predictor improved the prediction of the van Genuchten parameter α. The performance of the soil-class-specific PTFs was not better than that of the general PTF. Except for the saturated water content estimated from particle size distribution, the tested models for predicting water content at specific pressure heads proved satisfactory. Predictions of water content at pressure heads more negative than -0.6 m, using a PTF based on particle size distribution, are only slightly poorer than those obtained with PTFs including bulk density and organic matter
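The van Genuchten (1980) retention model whose parameters the PTFs predict can be written down directly. The parameter values below are illustrative round numbers, not fitted values from this study:

```python
import numpy as np

def van_genuchten(h, theta_r, theta_s, alpha, n):
    """Water retention theta(h) of van Genuchten (1980).

    h: magnitude of the pressure head; alpha, n: shape parameters
    (m = 1 - 1/n); theta_r, theta_s: residual and saturated water contents.
    """
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

# Illustrative parameters for a sandy soil (not from the paper's data).
h = np.array([0.0, 0.6, 10.0, 150.0])   # pressure head magnitudes (m)
theta = van_genuchten(h, theta_r=0.05, theta_s=0.45, alpha=3.6, n=1.56)
# theta decreases monotonically from theta_s toward theta_r as |h| grows.
```

A PTF in the sense of the paper is simply a regression that maps readily measured properties (sand, silt, clay, bulk density, organic matter) onto these four parameters.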
Energy Technology Data Exchange (ETDEWEB)
Ramirez-Guinart, Oriol; Rigol, Anna; Vidal, Miquel [Analytical Chemistry department, Faculty of Chemistry, University of Barcelona, Mart i Franques 1-11, 08028, Barcelona (Spain)
2014-07-01
In the frame of the revision of IAEA TRS 364 (Handbook of parameter values for the prediction of radionuclide transfer in temperate environments), a database of radionuclide solid-liquid distribution coefficients (Kd) in soils was compiled with data from field and laboratory experiments, from references mostly from 1990 onwards, including reports, reviewed papers, and grey literature. The Kd values were grouped for each radionuclide according to two criteria. The first was based on the sand and clay mineral percentages referred to the mineral matter, and on the organic matter (OM) content of the soil; this defined the 'texture/OM' criterion. The second was to group soils by the specific soil factors governing the radionuclide-soil interaction (the 'cofactor' criterion). The cofactors depended on the radionuclide considered. An advantage of using cofactors was that the variability of the Kd ranges for a given soil group decreased considerably compared with that observed when the classification was based solely on sand, clay, and organic matter contents. The Kd best estimates were defined as the calculated geometric mean (GM) values, assuming that Kd values are always log-normally distributed. Risk assessment models may require as input data for a given parameter either a single value (a best estimate) or a continuous function from which not only individual best estimates but also confidence ranges and data variability can be derived. In the case of the Kd parameter, a suitable continuous function containing the statistical parameters (e.g. arithmetic/geometric mean, arithmetic/geometric standard deviation, mode, etc.) that best describe the distribution of the Kd values of a dataset is the Cumulative Distribution Function (CDF). To our knowledge, appropriate CDFs have not yet been proposed for radionuclide Kd in soils. Therefore, the aim of this work is to create CDFs for
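Under the log-normality assumption stated above, a Kd CDF is fully determined by the geometric mean (the best estimate) and the geometric standard deviation. The small Kd sample below is hypothetical, used only to show the fit:

```python
import math

def lognormal_cdf(x, gm, gsd):
    """CDF of a log-normal distribution parameterised by its geometric mean
    (gm) and geometric standard deviation (gsd > 1)."""
    z = (math.log(x) - math.log(gm)) / math.log(gsd)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical Kd dataset (L/kg) for one radionuclide/soil group.
kd = [120.0, 300.0, 80.0, 500.0, 210.0]
logs = [math.log(v) for v in kd]
mu = sum(logs) / len(logs)
var = sum((l - mu) ** 2 for l in logs) / len(logs)
gm, gsd = math.exp(mu), math.exp(math.sqrt(var))   # best estimate and spread

# By construction, half of the fitted distribution lies below the GM.
p = lognormal_cdf(gm, gm, gsd)   # -> 0.5
```

A risk assessment model can then sample Kd values from this CDF, or read off confidence ranges, instead of using the single best estimate.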
Directory of Open Access Journals (Sweden)
Z. Meghnatisi
2009-06-01
Full Text Available Let Xi1, · · · , Xini be a random sample from a gamma distribution with known shape parameter νi > 0 and unknown scale parameter βi > 0, i = 1, 2, satisfying 0 < β1 ≤ β2. We consider the class of mixed estimators for the estimation of β1 and β2 under the reflected gamma loss function. It is shown that the minimum risk equivariant estimator of βi, i = 1, 2, which is admissible when no information on the ordering of the parameters is given, is inadmissible and dominated by a class of mixed estimators when the parameters are known to be ordered. The inadmissible estimators within the class of mixed estimators are also derived. Finally, the results are extended to a subclass of the exponential family
Better estimation of protein-DNA interaction parameters improve prediction of functional sites
Directory of Open Access Journals (Sweden)
O'Flanagan Ruadhan A
2008-12-01
Full Text Available Abstract Background Characterizing transcription factor binding motifs is a common bioinformatics task. For transcription factors with variable binding sites, the training dataset must contain many suboptimal binding sites to yield accurate estimates of the free energy penalties for deviating from the consensus DNA sequence. One procedure for doing so involves a modified SELEX (Systematic Evolution of Ligands by Exponential Enrichment) method designed to produce many such sequences. Results We analyzed low-stringency SELEX data for the E. coli Catabolite Activator Protein (CAP), and we show here that appropriate quantitative analysis improves our ability to predict in vitro affinity. To obtain the large number of sequences required for this analysis we used the SELEX-SAGE protocol developed by Roulet et al. The sequences obtained were subjected to bioinformatic analysis. The resulting bioinformatic model characterizes the sequence specificity of the protein more accurately than specificities predicted by previous analyses using only the few known binding sites available in the literature. The consequences of this increase in accuracy for the prediction of in vivo binding sites (and especially functional ones) in the E. coli genome are also discussed. We measured the dissociation constants of several putative CAP binding sites by EMSA (Electrophoretic Mobility Shift Assay) and compared the affinities to the bioinformatics scores provided by methods such as the weight matrix method and QPMEME (Quadratic Programming Method of Energy Matrix Estimation) trained on known binding sites as well as on the new sites from the SELEX-SAGE data. We also checked predicted genomic sites for conservation in the related species S. typhimurium. We found that bioinformatics scores based on SELEX-SAGE data do better both in predicting physical binding energies and in detecting functional sites. Conclusion We think that training binding site detection
Haskell, Craig A.; Beauchamp, David A.; Bollens, Stephen M.
2017-01-01
Juvenile salmon (Oncorhynchus spp.) use of reservoir food webs is understudied. We examined the feeding behavior of subyearling Chinook salmon (O. tshawytscha) and its relation to growth by estimating the functional response of juvenile salmon to changes in the density of Daphnia, an important component of reservoir food webs. We then estimated salmon growth across a broad range of water temperatures and daily rations of two primary prey, Daphnia and juvenile American shad (Alosa sapidissima), using a bioenergetics model. Laboratory feeding experiments yielded a Type-II functional response curve, C = 29.858P/(4.271 + P), indicating that salmon consumption (C) of Daphnia was not affected until Daphnia densities (P) fell below 30·L⁻¹. Past field studies documented Daphnia densities in lower Columbia River reservoirs of < 3·L⁻¹ in July but as high as 40·L⁻¹ in August. Bioenergetics modeling indicated that subyearlings could not achieve positive growth above 22°C regardless of prey type or consumption rate. When feeding on Daphnia, subyearlings could not achieve positive growth above 20°C (water temperatures they commonly encounter in the lower Columbia River during summer). At 16-18°C, subyearlings had to consume about 27,000 Daphnia·day⁻¹ to achieve positive growth. However, when feeding on juvenile American shad, subyearlings had to consume 20 shad·day⁻¹ at 16-18°C, or at least 25 shad·day⁻¹ at 20°C, to achieve positive growth. Using empirical consumption rates and water temperatures from summer 2013, subyearlings exhibited negative growth during July (-0.23 to -0.29 g·d⁻¹) and August (-0.05 to -0.07 g·d⁻¹). By switching prey from Daphnia to juvenile shad, which have a higher energy density, subyearlings can partially compensate for the effects of the higher water temperatures they experience in the lower Columbia River during summer. However, achieving positive growth as piscivores requires subyearlings to feed at
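The fitted Type-II curve can be evaluated directly to see the saturation behaviour at the July and August field densities reported above:

```python
def consumption(P):
    """Type-II functional response fitted in the study:
    C = 29.858 * P / (4.271 + P), with P in Daphnia per litre."""
    return 29.858 * P / (4.271 + P)

# At low July-like densities consumption is far below the asymptote (29.858);
# at August-like densities (~40/L) it is close to saturation.
c_july = consumption(3.0)
c_august = consumption(40.0)
frac_of_max = c_august / 29.858
```

This is the defining feature of a Type-II response: consumption rises roughly linearly when prey are scarce and flattens toward a handling-limited maximum as density grows.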
Directory of Open Access Journals (Sweden)
Jungwook Kim
2018-05-01
Full Text Available An objective function is usually used to verify the optimization between observed and simulated flows when estimating the parameters of a rainfall–runoff model. However, conventional objective functions do not focus on peak flow or on a parameter set representative of the basin across various rain storm events; they estimate the optimal parameters by minimizing the overall error between observed and simulated flows. The aim of this study is therefore to suggest objective functions that fit the peak flow of the hydrograph and estimate a representative parameter set of the basin across events. The Streamflow Synthesis And Reservoir Regulation (SSARR) model was employed to simulate flood runoff for the Mihocheon stream basin in the Geum River, Korea. Optimization was conducted using three calibration methods: genetic algorithm, pattern search, and the Shuffled Complex Evolution method developed at the University of Arizona (SCE-UA). Two objective functions, the Sum of Squared Residuals (SSR) and the Weighted Sum of Squared Residuals (WSSR), suggested in this study for peak flow optimization, were applied. Since parameters estimated from a single rain storm event do not represent the various rain storms of the basin, we used a representative objective function that minimizes the sum of the objective functions over the events. Six rain storm events were used for parameter estimation: four for calibration and the other two for validation; the results of SSR and WSSR were then compared. Flow runoff simulation was carried out based on the proposed objective functions, and WSSR was found to be more useful than SSR for simulating peak flow runoff. Representative parameters that minimize the objective function for each of the four rain storm events were estimated. The calibrated observed and simulated flow runoff hydrographs obtained from applying the estimated representative
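The contrast between SSR and a peak-weighted WSSR can be sketched as follows. The paper's exact weighting scheme is not reproduced here, so the flow-proportional weights are an assumption; the toy hydrograph is illustrative:

```python
import numpy as np

def ssr(obs, sim):
    """Sum of squared residuals: all time steps weighted equally."""
    return float(np.sum((obs - sim) ** 2))

def wssr(obs, sim):
    """Weighted sum of squared residuals. Here the weights scale with the
    observed flow, so errors around the hydrograph peak dominate (the study's
    exact weighting may differ)."""
    w = obs / np.sum(obs)
    return float(np.sum(w * (obs - sim) ** 2))

obs = np.array([5.0, 20.0, 120.0, 60.0, 15.0])   # hydrograph with a peak
sim = np.array([5.0, 25.0, 100.0, 65.0, 15.0])   # simulated flows

e_ssr, e_wssr = ssr(obs, sim), wssr(obs, sim)
```

A calibrator minimizing the weighted form is pushed to shrink the 20-unit peak error first, which is the behaviour the study seeks for flood simulation; the representative objective function then sums such values over all events.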
International Nuclear Information System (INIS)
Lee, Haw-Long; Chang, Win-Jin; Chen, Wen-Lih; Yang, Yu-Ching
2012-01-01
Highlights: ► Time-dependent base heat flux of a functionally graded fin is inversely estimated. ► An inverse algorithm based on the conjugate gradient method and the discrepancy principle is applied. ► The distributions of temperature in the fin are determined as well. ► The influence of measurement error and measurement location upon the precision of the estimated results is also investigated. - Abstract: In this study, an inverse algorithm based on the conjugate gradient method and the discrepancy principle is applied to estimate the unknown time-dependent base heat flux of a functionally graded fin from the knowledge of temperature measurements taken within the fin. Subsequently, the distributions of temperature in the fin can be determined as well. It is assumed that no prior information is available on the functional form of the unknown base heat flux; hence the procedure is classified as function estimation in inverse calculation. The temperature data obtained from the direct problem are used to simulate the temperature measurements. The influence of measurement errors and measurement location upon the precision of the estimated results is also investigated. Results show that an excellent estimate of the time-dependent base heat flux and the temperature distributions can be obtained for the test case considered in this study.
Flood damage estimation of companies: A comparison of Stage-Damage-Functions and Random Forests
Sieg, Tobias; Kreibich, Heidi; Vogel, Kristin; Merz, Bruno
2017-04-01
The development of appropriate flood damage models plays an important role not only for damage assessment after an event but also for developing adaptation and risk mitigation strategies. So-called Stage-Damage-Functions (SDFs) are often applied as a standard approach to estimate flood damage. These functions assign a certain damage to the water depth, depending on the use or other characteristics of the exposed objects. Recent studies apply machine learning algorithms such as Random Forests (RFs) to model flood damage. These algorithms usually consider more influencing variables and promise a more detailed insight into the damage processes. In addition, they provide an inherent validation scheme. Our study focuses on the direct, tangible damage of single companies. The objective is to model and validate the flood damage suffered by single companies with SDFs and RFs. The data sets used are taken from two surveys conducted after the floods in the Elbe and Danube catchments in the years 2002 and 2013 in Germany. Damage to buildings (n = 430), equipment (n = 651), as well as goods and stock (n = 530) is taken into account. The model outputs are validated via a comparison with the actual flood damage recorded by the surveys and subsequently compared with each other. This study investigates the gain in model performance from additional data and the advantages and disadvantages of RFs compared with SDFs. RFs show an increase in model performance with an increasing number of data records over a comparatively large range, while the model performance of the SDFs is already saturated for a small set of records. In addition, the RFs are able to identify damage-influencing variables, which improves the understanding of damage processes. Hence, RFs can slightly improve flood damage predictions and provide additional insight into the underlying mechanisms compared with SDFs.
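A minimal SDF of the kind the study benchmarks against is just a depth-damage curve evaluated by interpolation. The breakpoints and building value below are illustrative, not derived from the survey data:

```python
import numpy as np

# Illustrative Stage-Damage-Function: relative damage as a piecewise-linear
# function of water depth (breakpoints are made up for this sketch).
depths = np.array([0.0, 0.5, 1.0, 2.0, 3.0])        # water depth (m)
rel_damage = np.array([0.0, 0.15, 0.35, 0.65, 0.85])

def sdf(depth):
    """Relative damage for a given water depth, interpolated from the curve."""
    return np.interp(depth, depths, rel_damage)

# Damage estimate for a company building worth a hypothetical 200,000 EUR
# flooded to 1.5 m.
loss = float(sdf(1.5)) * 200_000.0
```

An RF model would replace the single predictor (depth) with many influencing variables, which is where the reported performance gain and variable-importance insight come from.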
A note on the conditional density estimate in single functional index model
2010-01-01
Abstract In this paper, we consider estimation of the conditional density of a scalar response variable Y given a Hilbertian random variable X when the observations are linked with a single-index structure. We establish the pointwise and the uniform almost complete convergence (with the rate) of the kernel estimate of this model. As an application, we show how our result can be applied in the prediction problem via the conditional mode estimate. Finally, the estimation of the funct...
International Nuclear Information System (INIS)
Alves, Carolina Moura; Horodecki, Pawel; Oi, Daniel K. L.; Kwek, L. C.; Ekert, Artur K.
2003-01-01
We present a method of direct estimation of important properties of a shared bipartite quantum state, within the "distant laboratories" paradigm, using only local operations and classical communication. We apply this procedure to spectrum estimation of shared states, and to locally implementable structural physical approximations to incompletely positive maps. This procedure can also be applied to the estimation of channel capacity and measures of entanglement.
Estimates of azimuthal numbers associated with elementary elliptic cylinder wave functions
Kovalev, V. A.; Radaev, Yu. N.
2014-05-01
The paper deals with issues related to the construction of solutions, 2π-periodic in the angular variable, of the Mathieu differential equation for the circular elliptic cylinder harmonics, the associated characteristic values, and the azimuthal numbers needed to form the elementary elliptic cylinder wave functions. A superposition of the latter is one possible form for representing the analytic solution of the thermoelastic wave propagation problem in long waveguides with an elliptic cross-section contour. The classical Sturm-Liouville problem for the Mathieu equation is reduced to a spectral problem for a linear self-adjoint operator in the Hilbert space of infinite square-summable two-sided sequences. An approach is proposed that permits one to derive rather simple algorithms for computing the characteristic values of the angular Mathieu equation with real parameters and the corresponding eigenfunctions. Priority is given to the application of the most symmetric forms and equations that have not yet been used in the theory of the Mathieu equation. These algorithms amount to constructing a matrix diagonalizing an infinite symmetric pentadiagonal matrix. The problem of generalizing the notion of the azimuthal number of a wave propagating in a cylindrical waveguide to the case of elliptic geometry is considered. Two-sided mutually refining estimates are constructed for the spectral values of the Mathieu differential operator with periodic and half-periodic (antiperiodic) boundary conditions.
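As a numerical sketch of reducing the angular Mathieu problem to a matrix eigenproblem: the classical Fourier recurrence for the even π-periodic functions ce_2n yields a symmetric tridiagonal matrix (a simpler cousin of the pentadiagonal form the paper advocates). The truncation size and the value q = 1 are arbitrary choices:

```python
import numpy as np

def mathieu_a_even(q, n_modes=3, trunc=30):
    """Characteristic values a_{2n} of the even pi-periodic Mathieu functions
    ce_2n, from the truncated symmetric tridiagonal recurrence matrix
    (diagonal (2r)^2, off-diagonal q, with a sqrt(2)*q coupling to r = 0)."""
    d = np.array([(2 * r) ** 2 for r in range(trunc)], dtype=float)
    e = np.full(trunc - 1, q, dtype=float)
    e[0] = np.sqrt(2.0) * q          # symmetrised coupling to the r = 0 term
    M = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
    return np.sort(np.linalg.eigvalsh(M))[:n_modes]

a = mathieu_a_even(q=1.0)
# For small q the characteristic values stay near (2n)^2; at q = 1 the first
# three are close to -0.455, 4.371, 16.033.
```

Diagonalizing a truncated matrix of this kind is exactly the computational step the abstract describes, and the characteristic values it produces determine the admissible azimuthal numbers.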
Directory of Open Access Journals (Sweden)
Marina Solé
2014-01-01
Full Text Available Functional conformation and performance in Classic and Menorca Dressage are the main selection criteria in the Menorca Horse breeding program. Menorca Dressage is an alternative Classical Dressage discipline exclusive to the island of Menorca, which includes a series of movements that the animals perform in the traditional festivities called “Jaleo Menorquín”. One of these movements, called “el bot”, involves the horse raising its forelimbs and standing or walking on its hindlimbs. To make the Menorca horse breed more competitive in the equestrian market, it is necessary to understand the genetic background that characterizes the aptitude for Menorca Dressage and its relationship with conformation traits. The analysed data consisted of 15 conformation traits from 347 Menorca horses (200 males and 147 females), with 1,550 performance records in Menorca Dressage competitions. Genetic parameters were estimated using linear and threshold animal models. The heritabilities for heights and lengths were high (0.45-0.76), those for angulations and binary conformation traits were low to moderate (0.10-0.36), as were those for dressage performance scores (0.13-0.21). The results suggest that the analyzed traits could be used as an efficient tool for selecting breeding horses.
Estimation of bone perfusion as a function of intramedullary pressure in sheep
International Nuclear Information System (INIS)
Rosenthal, M.S.; Lehner, C.E.; Pearson, D.W.; Kanikula, T.M.; Adler, G.G.; Venci, R.; Lanphier, E.H.; De Luca, P.M.
1985-01-01
It has been reported previously that following decompression (i.e. diving ascents) the intramedullary pressure (IMP) in bone can rise dramatically, possibly by a mechanism that can induce dysbaric osteonecrosis or the "silent bends". If the blood supply to the bone traverses the marrow compartment, then an increase in IMP could cause a temporary decrease in perfusion or hemostasis, and hence ischemia leading to bone necrosis. To test this hypothesis, the authors measured the perfusion of bone in sheep as a function of IMP. Bone perfusion was estimated by measuring the perfusion-limited clearance of Ar-41 (Eγ = 1293 keV, T1/2 = 1.83 h) from the bone mineral matrix of the sheep's tibia. The argon gas was formed in vivo by fast neutron activation of Ca-44 to Ar-41 through the Ca-44(n,α) reaction. Clearance of Ar-41 was measured by time-gated gamma-ray spectroscopy. The results indicate that an elevation of intramedullary pressure can decrease perfusion in bone and may cause bone necrosis.
Energy Technology Data Exchange (ETDEWEB)
Sole, M.; Cervantes, I.; Gutierrez, J. P.; Gomez, M. D.; Valera, M.
2014-06-01
Functional conformation and performance in Classic and Menorca Dressage are the main selection criteria in the Menorca Horse breeding program. Menorca Dressage is an alternative Classical Dressage discipline which is exclusive of the Menorca Island, but including a series of movements that the animals perform in the traditional festivities called Jaleo Menorquin. One of these movements involves the horse raising its forelimbs and standing or walking on its hindlimbs, which is called el bot. To make the Menorca horse breed more competitive in the equestrian market, it is necessary to understand the genetic background that characterizes the aptitude for Menorca Dressage and its relationship with conformation traits. The analysed data consisted of 15 conformation traits from 347 Menorca horses (200 males and 147 females), with 1,550 performance records in Menorca Dressage competitions. Genetic parameters were estimated using linear and threshold animal models. The heritabilities for heights and lengths were high (0.45-0.76), those for angulations and binary conformation traits were low to moderate (0.10-0.36) as were the scores for dressage performance (0.13-0.21). The results suggest that the analyzed traits could be used as an efficient tool for selecting breeding horses. (Author)
Comparison of volatility function technique for risk-neutral densities estimation
Bahaludin, Hafizah; Abdullah, Mimi Hafizah
2017-08-01
The volatility function technique using an interpolation approach plays an important role in extracting the risk-neutral density (RND) of options. The aim of this study is to compare the performances of two interpolation approaches, namely a smoothing spline and a fourth-order polynomial, in extracting the RND. The implied volatilities of options with respect to strike prices/delta are interpolated to obtain a well-behaved density. The statistical analysis and forecast accuracy are tested using moments of the distribution. The difference between the first moment of the distribution and the price of the underlying asset at maturity is used as an input to analyze forecast accuracy. RNDs are extracted from Dow Jones Industrial Average (DJIA) index options with a one-month constant maturity for the period from January 2011 until December 2015. The empirical results suggest that estimation of the RND using a fourth-order polynomial is more appropriate than a smoothing spline, in that the fourth-order polynomial gives the lowest mean square error (MSE). The results can help market participants capture market expectations of future developments of the underlying asset.
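The extraction step the abstract describes (interpolate the smile, then differentiate twice) can be sketched as follows, assuming Black-Scholes pricing to map the fitted volatilities back to call prices; the Breeden-Litzenberger relation q(K) = e^{rT} d²C/dK² then gives the density. All numbers below are illustrative, not the DJIA data used in the study.

```python
import numpy as np
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes call price for spot S, strike K, maturity T, rate r."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def rnd_from_smile(strikes, ivs, S, T, r, grid):
    """Fit a 4th-order polynomial to the implied-vol smile, convert back to
    call prices, and apply Breeden-Litzenberger: q(K) = exp(rT) d2C/dK2."""
    coeffs = np.polyfit(strikes, ivs, 4)
    dK = grid[1] - grid[0]
    prices = np.array([bs_call(S, K, T, r, np.polyval(coeffs, K)) for K in grid])
    return np.exp(r * T) * np.gradient(np.gradient(prices, dK), dK)

# Hypothetical flat 20% smile: the recovered density is lognormal-like
S, T, r = 100.0, 1.0 / 12.0, 0.01
strikes = np.linspace(80, 120, 9)
ivs = np.full_like(strikes, 0.2)
grid = np.linspace(60, 140, 401)
q = rnd_from_smile(strikes, ivs, S, T, r, grid)
mass = float(np.sum(q) * (grid[1] - grid[0]))   # should be close to 1
```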
Directory of Open Access Journals (Sweden)
Amir LAKZIAN
2010-09-01
Full Text Available This paper presents the comparison of three different approaches to estimate soil water content at defined values of soil water potential based on selected parameters of the soil solid phase. Forty different sampling locations in northeast Iran were selected and undisturbed samples were taken to measure the water content at field capacity (FC, -33 kPa) and permanent wilting point (PWP, -1500 kPa). At each location the solid particles of each sample, including the percentages of sand, silt and clay, were measured. Organic carbon percentage and soil texture were also determined for each soil sample at each location. Three different techniques including a pattern recognition approach (k nearest neighbour, k-NN), Artificial Neural Network (ANN) and pedotransfer functions (PTF) were used to predict the soil water at each sampling location. Mean square deviation (MSD) and its components, index of agreement (d), root mean square difference (RMSD) and normalized RMSD (RMSDr) were used to evaluate the performance of all three approaches. Our results showed that k-NN and PTF performed better than ANN in prediction of water content at both FC and PWP matric potentials. Various statistical criteria for simulation performance also indicated that, between k-NN and PTF, the former predicted water content at PWP more accurately, although both approaches showed a similar accuracy in predicting water content at FC.
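A minimal sketch of the k-NN idea under the stated feature set (sand, silt and clay percentages plus organic carbon): average the water contents of the k most similar training soils. The training values below are hypothetical, not the Iranian samples.

```python
import math

def knn_predict(train_X, train_y, query, k=3):
    """Plain k-nearest-neighbour regression: average the water contents of
    the k soils closest to the query in (sand, silt, clay, OC) space."""
    dists = sorted((math.dist(x, query), y) for x, y in zip(train_X, train_y))
    return sum(y for _, y in dists[:k]) / k

# Hypothetical soils: (sand%, silt%, clay%, organic C%) -> FC water content
train_X = [(70, 20, 10, 0.5), (40, 40, 20, 1.0),
           (20, 40, 40, 1.5), (10, 30, 60, 2.0)]
train_y = [0.12, 0.22, 0.30, 0.38]
pred = knn_predict(train_X, train_y, (25, 40, 35, 1.2), k=2)
```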
Estimating dose rates to organs as a function of age following internal exposure to radionuclides
International Nuclear Information System (INIS)
Leggett, R.W.; Eckerman, K.F.; Dunning, D.E. Jr.; Cristy, M.; Crawford-Brown, D.J.; Williams, L.R.
1984-03-01
The AGEDOS methodology allows estimates of dose rates, as a function of age, to radiosensitive organs and tissues in the human body at arbitrary times during or after internal exposure to radioactive material. Presently there are few, if any, radionuclides for which sufficient metabolic information is available to allow full use of all features of the methodology. The intention has been to construct the methodology so that optimal information can be gained from a mixture of the limited amount of age-dependent, nuclide-specific data and the generally plentiful age-dependent physiological data now available. Moreover, an effort has been made to design the methodology so that constantly accumulating metabolic information can be incorporated with minimal alterations in the AGEDOS computer code. Some preliminary analyses performed by the authors, using the AGEDOS code in conjunction with age-dependent risk factors developed from the A-bomb survivor data and other studies, have indicated that the doses and subsequent risks of eventually experiencing radiogenic cancers may vary substantially with age for some exposure scenarios and may be relatively invariant with age for other scenarios. We believe that the AGEDOS methodology provides a convenient and efficient means for performing the internal dosimetry.
State-space model with deep learning for functional dynamics estimation in resting-state fMRI.
Suk, Heung-Il; Wee, Chong-Yaw; Lee, Seong-Whan; Shen, Dinggang
2016-04-01
Studies on resting-state functional Magnetic Resonance Imaging (rs-fMRI) have shown that different brain regions still actively interact with each other while a subject is at rest, and such functional interaction is not stationary but changes over time. In terms of a large-scale brain network, in this paper, we focus on time-varying patterns of functional networks, i.e., functional dynamics, inherent in rs-fMRI, which is one of the emerging issues along with the network modelling. Specifically, we propose a novel methodological architecture that combines deep learning and state-space modelling, and apply it to rs-fMRI based Mild Cognitive Impairment (MCI) diagnosis. We first devise a Deep Auto-Encoder (DAE) to discover hierarchical non-linear functional relations among regions, by which we transform the regional features into an embedding space, whose bases are complex functional networks. Given the embedded functional features, we then use a Hidden Markov Model (HMM) to estimate dynamic characteristics of functional networks inherent in rs-fMRI via internal states, which are unobservable but can be inferred from observations statistically. By building a generative model with an HMM, we estimate the likelihood of the input features of rs-fMRI as belonging to the corresponding status, i.e., MCI or normal healthy control, based on which we identify the clinical label of a testing subject. In order to validate the effectiveness of the proposed method, we performed experiments on two different datasets and compared with state-of-the-art methods in the literature. We also analyzed the functional networks learned by DAE, estimated the functional connectivities by decoding hidden states in HMM, and investigated the estimated functional connectivities by means of a graph-theoretic approach. Copyright © 2016 Elsevier Inc. All rights reserved.
International Nuclear Information System (INIS)
Kroon, Juan Antonio Valiente
2005-01-01
This paper uses the conformal Einstein equations and the conformal representation of spatial infinity introduced by Friedrich to analyse the behaviour of the gravitational field near null and spatial infinity for the development of initial data which are, in principle, non-conformally flat and time asymmetric. The paper is the continuation of the investigation started in Class. Quantum Grav. 21 (2004) 5457-92, where only conformally flat initial data sets were considered. For the purposes of this investigation, the conformal metric of the initial hypersurface is assumed to have a very particular type of non-smoothness at infinity in order to allow for the presence of non-Schwarzschildean stationary initial data sets in the class under study. The calculation of asymptotic expansions of the development of these initial data sets reveals, as in the conformally flat case, the existence of a hierarchy of obstructions to the smoothness of null infinity which are expressible in terms of the initial data. This allows for the possibility of having spacetimes where future and past null infinity have different degrees of smoothness. A conjecture regarding the general structure of the hierarchy of obstructions is presented.
McDonald, A. David; Sandal, Leif Kristoffer
1998-01-01
Estimation of parameters in the drift and diffusion terms of stochastic differential equations involves simulation and generally requires substantial data sets. We examine a method that can be applied when available time series are limited to less than 20 observations per replication. We compare and contrast parameter estimation for linear and nonlinear first-order stochastic differential equations using two criterion functions: one based on a Chi-square statistic, put forward by Hurn and Lin...
Frolov, Maxim; Chistiakova, Olga
2017-06-01
This paper is devoted to a numerical justification of a recent a posteriori error estimate for Reissner-Mindlin plates. This majorant provides reliable control of the accuracy of any conforming approximate solution of the problem, including solutions obtained with commercial software for mechanical engineering. The estimate is developed on the basis of the functional approach and is applicable to several types of boundary conditions. To verify the approach, numerical examples with mesh refinements are provided.
International Nuclear Information System (INIS)
Bachoc, F.
2013-01-01
The parametric estimation of the covariance function of a Gaussian process is studied, in the framework of the Kriging model. Maximum Likelihood and Cross Validation estimators are considered. The correctly specified case, in which the covariance function of the Gaussian process does belong to the parametric set used for estimation, is first studied in an increasing-domain asymptotic framework. The sampling considered is a randomly perturbed multidimensional regular grid. Consistency and asymptotic normality are proved for the two estimators. It is then shown that strong perturbations of the regular grid are always beneficial to Maximum Likelihood estimation. The incorrectly specified case, in which the covariance function of the Gaussian process does not belong to the parametric set used for estimation, is then studied. It is shown that Cross Validation is more robust than Maximum Likelihood in this case. Finally, two applications of the Kriging model with Gaussian processes are carried out on industrial data. For a validation problem of the friction model of the thermal-hydraulic code FLICA 4, where experimental results are available, it is shown that Gaussian process modeling of the FLICA 4 code model error makes it possible to considerably improve its predictions. For a metamodeling problem of the GERMINAL thermal-mechanical code, the interest of the Kriging model with Gaussian processes, compared to neural network methods, is shown. (author)
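The Maximum Likelihood side of the comparison can be sketched for a one-dimensional Gaussian process observed on a randomly perturbed regular grid, as in the abstract. An exponential covariance and a simple grid search over the range parameter stand in for the full estimation framework; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def exp_cov(x, sigma2, theta):
    """Exponential covariance matrix k(s,t) = sigma2 * exp(-|s-t|/theta)."""
    d = np.abs(x[:, None] - x[None, :])
    return sigma2 * np.exp(-d / theta)

def neg_log_lik(y, x, sigma2, theta):
    """Negative Gaussian log-likelihood via a Cholesky factorisation."""
    K = exp_cov(x, sigma2, theta) + 1e-10 * np.eye(len(x))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L, y)
    return float(alpha @ alpha / 2 + np.log(np.diag(L)).sum()
                 + len(x) / 2 * np.log(2 * np.pi))

# Simulate a GP on a perturbed regular grid, then recover theta by ML
n, true_theta = 200, 0.5
x = np.sort(np.arange(n) / n + rng.uniform(-0.2 / n, 0.2 / n, n))
K = exp_cov(x, 1.0, true_theta)
y = np.linalg.cholesky(K + 1e-10 * np.eye(n)) @ rng.standard_normal(n)
thetas = np.linspace(0.1, 1.5, 29)
nlls = [neg_log_lik(y, x, 1.0, t) for t in thetas]
theta_ml = thetas[int(np.argmin(nlls))]
```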
Z. Meghnatisi; N. Nematollahi
2009-01-01
Let X_{i1}, ..., X_{in_i} be a random sample from a gamma distribution with known shape parameter ν_i > 0 and unknown scale parameter β_i > 0, i = 1, 2, satisfying 0 < β_1 ≤ β_2. We consider the class of mixed estimators for the estimation of β_1 and β_2 under the reflected gamma loss function. It has been shown that the minimum risk equivariant estimator of β_i, i = 1, 2, which is admissible when no information on the ordering of the parameters is given, is inadmissible and dominated by a cla...
International Nuclear Information System (INIS)
Hubert, X.
2009-12-01
This work deals with the estimation of the concentration of molecules in arterial blood which are labelled with positron-emitting radioelements. This concentration is called the 'β+ arterial input function', and it has to be estimated for a large number of pharmacokinetic analyses. Nowadays it is measured through series of arterial samplings, which is an accurate method but one requiring a stringent protocol. Complications might occur during arterial blood sampling because this method is invasive (hematomas, nosocomial infections). The objective of this work is to avoid this risk through a non-invasive estimation of the β+ input function with an external detector and a collimator. This allows the reconstruction of blood vessels and thus the discrimination of the arterial signal from signals in other tissues. Collimators in medical imaging are not adapted to estimating the β+ input function because their sensitivity is very low. In this work, they are replaced by coded-aperture collimators, originally developed for astronomy. New methods where coded apertures are used with statistical reconstruction algorithms are presented. Techniques for analytical ray-tracing and for the acceleration of reconstructions are proposed. A new method which decomposes reconstructions on temporal sets and on spatial sets is also developed to efficiently estimate the arterial input function from series of temporal acquisitions. This work demonstrates that the trade-off between sensitivity and spatial resolution in PET can be improved thanks to coded-aperture collimators and statistical reconstruction algorithms; it also provides new tools to implement such improvements. (author)
Shi, Lei; Guo, Lianghui; Ma, Yawei; Li, Yonghua; Wang, Weilai
2018-05-01
The technique of teleseismic receiver function H-κ stacking is popular for estimating the crustal thickness and Vp/Vs ratio. However, it has large uncertainty or ambiguity when the Moho multiples in the receiver function are not easily identified. We present an improved technique to estimate the crustal thickness and Vp/Vs ratio by joint constraints of receiver function and gravity data. The complete Bouguer gravity anomalies, composed of the anomalies due to the relief of the Moho interface and the heterogeneous density distribution within the crust, are associated with the crustal thickness, density and Vp/Vs ratio. According to their relationship formulae presented by Lowry and Pérez-Gussinyé, we invert the complete Bouguer gravity anomalies by using a common algorithm of likelihood estimation to obtain the crustal thickness and Vp/Vs ratio, and then utilize them to constrain the receiver function H-κ stacking result. We verified the improved technique on three synthetic crustal models and evaluated the influence of the selected parameters, the results of which demonstrated that the novel technique could reduce the ambiguity and enhance the accuracy of estimation. A real-data test at two stations in the NE margin of the Tibetan Plateau illustrated that the improved technique provided reliable estimations of crustal thickness and Vp/Vs ratio.
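The conventional H-κ stacking that the improved technique builds on can be sketched as a grid search over crustal thickness H and Vp/Vs ratio κ, using the standard flat-layer travel-time predictions for the Ps, PpPs and PpSs+PsPs phases. The Vp, ray parameter and stacking weights below are typical illustrative choices, not values from the paper; the receiver function is synthetic.

```python
import numpy as np

VP, P = 6.3, 0.06            # assumed crustal Vp (km/s) and ray parameter (s/km)
W1, W2, W3 = 0.6, 0.3, 0.1   # illustrative stacking weights

def phase_times(H, kappa, vp=VP, p=P):
    """Flat-layer arrival times of Ps, PpPs and PpSs+PsPs after direct P."""
    eta_p = np.sqrt(vp**-2 - p**2)
    eta_s = np.sqrt((vp / kappa)**-2 - p**2)
    return H * (eta_s - eta_p), H * (eta_s + eta_p), 2 * H * eta_s

def hk_stack(t, rf, H_grid, k_grid):
    """Grid search over (H, kappa): stack RF amplitude at the predicted
    phase times; the maximum of the stack marks the best model."""
    amp = lambda tau: np.interp(tau, t, rf)
    best, best_hk = -np.inf, None
    for H in H_grid:
        for k in k_grid:
            t1, t2, t3 = phase_times(H, k)
            s = W1 * amp(t1) + W2 * amp(t2) - W3 * amp(t3)
            if s > best:
                best, best_hk = s, (H, k)
    return best_hk

# Synthetic RF with pulses at the arrivals for H = 40 km, kappa = 1.75
t = np.linspace(0, 30, 3001)
t1, t2, t3 = phase_times(40.0, 1.75)
rf = (np.exp(-(t - t1)**2 / 0.05) + 0.5 * np.exp(-(t - t2)**2 / 0.05)
      - 0.3 * np.exp(-(t - t3)**2 / 0.05))
H_est, k_est = hk_stack(t, rf, np.arange(30, 50.5, 0.5), np.arange(1.6, 1.9, 0.01))
```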
Maadooliat, Mehdi
2015-10-21
This paper develops a method for simultaneous estimation of density functions for a collection of populations of protein backbone angle pairs using a data-driven, shared basis that is constructed by bivariate spline functions defined on a triangulation of the bivariate domain. The circular nature of angular data is taken into account by imposing appropriate smoothness constraints across boundaries of the triangles. Maximum penalized likelihood is used to fit the model and an alternating blockwise Newton-type algorithm is developed for computation. A simulation study shows that the collective estimation approach is statistically more efficient than estimating the densities individually. The proposed method was used to estimate neighbor-dependent distributions of protein backbone dihedral angles (i.e., Ramachandran distributions). The estimated distributions were applied to protein loop modeling, one of the most challenging open problems in protein structure prediction, by feeding them into an angular-sampling-based loop structure prediction framework. Our estimated distributions compared favorably to the Ramachandran distributions estimated by fitting a hierarchical Dirichlet process model; and in particular, our distributions showed significant improvements on the hard cases where existing methods do not work well.
Maadooliat, Mehdi; Zhou, Lan; Najibi, Seyed Morteza; Gao, Xin; Huang, Jianhua Z.
2015-01-01
This paper develops a method for simultaneous estimation of density functions for a collection of populations of protein backbone angle pairs using a data-driven, shared basis that is constructed by bivariate spline functions defined on a triangulation of the bivariate domain. The circular nature of angular data is taken into account by imposing appropriate smoothness constraints across boundaries of the triangles. Maximum penalized likelihood is used to fit the model and an alternating blockwise Newton-type algorithm is developed for computation. A simulation study shows that the collective estimation approach is statistically more efficient than estimating the densities individually. The proposed method was used to estimate neighbor-dependent distributions of protein backbone dihedral angles (i.e., Ramachandran distributions). The estimated distributions were applied to protein loop modeling, one of the most challenging open problems in protein structure prediction, by feeding them into an angular-sampling-based loop structure prediction framework. Our estimated distributions compared favorably to the Ramachandran distributions estimated by fitting a hierarchical Dirichlet process model; and in particular, our distributions showed significant improvements on the hard cases where existing methods do not work well.
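A drastically simplified stand-in for the spline-based estimator above: the key point that angular data live on a circle can be illustrated with a one-dimensional wrapped kernel density (the paper's method is bivariate and spline-based; the 20° bandwidth here is an arbitrary assumption).

```python
import math

def wrapped_kde(angles_deg, query_deg, bw=20.0):
    """Kernel density on the circle: wrap each data angle into the three
    neighbouring periods so that mass near ±180° is handled correctly.
    A toy stand-in for the paper's smoothness-constrained spline estimator."""
    total = 0.0
    norm = 1.0 / (len(angles_deg) * bw * math.sqrt(2 * math.pi))
    for a in angles_deg:
        for shift in (-360.0, 0.0, 360.0):
            z = (query_deg - (a + shift)) / bw
            total += math.exp(-0.5 * z * z)
    return norm * total

# Density at -180° and +180° should agree for data clustered at the boundary
data = [-179.0, 179.0, -175.0, 176.0]
left, right = wrapped_kde(data, -180.0), wrapped_kde(data, 180.0)
```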
Directory of Open Access Journals (Sweden)
Z. Khodadadi
2008-03-01
Full Text Available Let S be the matrix of residual sums of squares in the linear model Y = Aβ + e, where the matrix e is distributed as elliptically contoured with unknown scale matrix Σ. In the present work, we consider the problem of estimating Σ with respect to the squared loss function L(Σ̂, Σ) = tr[(Σ̂Σ⁻¹ − I)²]. It is shown that the improvements of the estimators obtained by James and Stein [7] and by Dey and Srinivasan [1] under the normality assumption remain robust under an elliptically contoured distribution with respect to the squared loss function.
Yu, Z. P.; Yue, Z. F.; Liu, W.
2018-05-01
With the development of artificial intelligence, reliability experts have increasingly recognized the role of subjective information in the reliability design of complex systems. Based on a limited number of experimental data and expert judgments, we divide reliability estimation under a distribution hypothesis into a cognition process and a reliability calculation. To illustrate this modification, we take information fusion based on intuitionistic fuzzy belief functions as the diagnosis model of the cognition process, and complete the reliability estimation for the opening function of a cabin door affected by imprecise judgments about the distribution hypothesis.
International Nuclear Information System (INIS)
Heys, D.W.; Stump, D.R.
1984-01-01
The variational principle is used to estimate the ground state of the Kogut-Susskind Hamiltonian of the SU(2) lattice gauge theory, with a trial wave function for which the magnetic fields on different plaquettes are uncorrelated. This trial function describes a disordered state. The energy expectation value is evaluated by a Monte Carlo method. The variational results are compared to similar results for a related Abelian gauge theory. Also, the expectation value of the Wilson loop operator is computed for the trial state, and the resulting estimate of the string tension is compared to the prediction of asymptotic freedom
Estimation of the Lagrangian structure function constant C0 from surface-layer wind data
DEFF Research Database (Denmark)
Anfossi, D.; Degrazia, G.; Ferrero, E.
2000-01-01
Eulerian turbulence observations, made in the surface layer under unstable conditions (z/L < 0) by a sonic anemometer, were used to estimate the Lagrangian structure function constant C(0). Two methods were considered. The first one makes use of a relationship, widely used in Lagrangian stochastic dispersion models, relating C(0) to the turbulent kinetic energy dissipation rate epsilon, the wind velocity variance and the Lagrangian decorrelation time. The second one employs a novel equation connecting C(0) to the constant of the second-order Eulerian structure function. Before estimating C(0...
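The first method above reduces to simple arithmetic once the surface-layer quantities are known, via the standard Lagrangian stochastic-model relation T_L = 2σ_w²/(C0·ε), rearranged for C0. The numbers below are illustrative values, not the measured data of the study.

```python
# C0 from the Lagrangian decorrelation relationship T_L = 2*sigma_w^2/(C0*eps),
# rearranged as C0 = 2*sigma_w^2/(eps*T_L). Illustrative surface-layer values:
sigma_w2 = 0.36      # vertical velocity variance (m^2/s^2)
eps = 0.01           # TKE dissipation rate (m^2/s^3)
T_L = 24.0           # Lagrangian decorrelation time (s)

C0 = 2.0 * sigma_w2 / (eps * T_L)
```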
Cvan Trobec, Katja; Kerec Kos, Mojca; von Haehling, Stephan; Anker, Stefan D; Macdougall, Iain C; Ponikowski, Piotr; Lainscak, Mitja
2015-12-01
To compare the performance of iohexol plasma clearance and creatinine-based renal function estimating equations in monitoring longitudinal renal function changes in chronic heart failure (CHF) patients, and to assess the effects of body composition on the equation performance. Iohexol plasma clearance was measured in 43 CHF patients at baseline and after at least 6 months. Simultaneously, renal function was estimated with five creatinine-based equations (four- and six-variable Modification of Diet in Renal Disease, Cockcroft-Gault, Cockcroft-Gault adjusted for lean body mass, Chronic Kidney Disease Epidemiology Collaboration equation) and body composition was assessed using bioimpedance and dual-energy x-ray absorptiometry. Over a median follow-up of 7.5 months (range 6-17 months), iohexol clearance significantly declined (52.8 vs 44.4 mL/[min ×1.73 m2], P=0.001). This decline was significantly higher in patients receiving mineralocorticoid receptor antagonists at baseline (mean decline -22% of baseline value vs -3%, P=0.037). Mean serum creatinine concentration did not change significantly during follow-up and no creatinine-based renal function estimating equation was able to detect the significant longitudinal decline of renal function determined by iohexol clearance. After accounting for body composition, the accuracy of the equations improved, but not their ability to detect renal function decline. Renal function measured with iohexol plasma clearance showed relevant decline in CHF patients, particularly in those treated with mineralocorticoid receptor antagonists. None of the equations for renal function estimation was able to detect these changes. ClinicalTrials.gov registration number: NCT01829880.
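Of the estimating equations compared above, Cockcroft-Gault is the simplest to state. A sketch of the standard formula (creatinine clearance in mL/min, serum creatinine in mg/dL); the patient values below are hypothetical, not data from the trial.

```python
def cockcroft_gault(age_y, weight_kg, scr_mg_dl, female):
    """Cockcroft-Gault creatinine clearance estimate (mL/min):
    CrCl = (140 - age) * weight / (72 * SCr), times 0.85 for women."""
    crcl = (140 - age_y) * weight_kg / (72.0 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

# Hypothetical CHF patient: 70 y, 80 kg, serum creatinine 1.2 mg/dL, male
crcl = cockcroft_gault(70, 80.0, 1.2, female=False)
```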
Cvan Trobec, Katja; Kerec Kos, Mojca; von Haehling, Stephan; Anker, Stefan D.; Macdougall, Iain C.; Ponikowski, Piotr; Lainscak, Mitja
2015-01-01
Aim To compare the performance of iohexol plasma clearance and creatinine-based renal function estimating equations in monitoring longitudinal renal function changes in chronic heart failure (CHF) patients, and to assess the effects of body composition on the equation performance. Methods Iohexol plasma clearance was measured in 43 CHF patients at baseline and after at least 6 months. Simultaneously, renal function was estimated with five creatinine-based equations (four- and six-variable Modification of Diet in Renal Disease, Cockcroft-Gault, Cockcroft-Gault adjusted for lean body mass, Chronic Kidney Disease Epidemiology Collaboration equation) and body composition was assessed using bioimpedance and dual-energy x-ray absorptiometry. Results Over a median follow-up of 7.5 months (range 6-17 months), iohexol clearance significantly declined (52.8 vs 44.4 mL/[min ×1.73 m2], P = 0.001). This decline was significantly higher in patients receiving mineralocorticoid receptor antagonists at baseline (mean decline -22% of baseline value vs -3%, P = 0.037). Mean serum creatinine concentration did not change significantly during follow-up and no creatinine-based renal function estimating equation was able to detect the significant longitudinal decline of renal function determined by iohexol clearance. After accounting for body composition, the accuracy of the equations improved, but not their ability to detect renal function decline. Conclusions Renal function measured with iohexol plasma clearance showed relevant decline in CHF patients, particularly in those treated with mineralocorticoid receptor antagonists. None of the equations for renal function estimation was able to detect these changes. ClinicalTrials.gov registration number NCT01829880 PMID:26718759
Estimation of delays and other parameters in nonlinear functional differential equations
Banks, H. T.; Lamm, P. K. D.
1983-01-01
A spline-based approximation scheme for nonlinear nonautonomous delay differential equations is discussed. Convergence results (using dissipative type estimates on the underlying nonlinear operators) are given in the context of parameter estimation problems which include estimation of multiple delays and initial data as well as the usual coefficient-type parameters. A brief summary of some of the related numerical findings is also given.
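The flavour of estimating a delay from data can be sketched with a toy linear delay equation solved by the Euler method of steps, plus a grid search over candidate delays. This is an illustration of the parameter estimation problem, not the spline-based approximation scheme of the paper.

```python
def simulate_dde(tau, a, T=5.0, dt=0.01):
    """Euler method-of-steps for x'(t) = -a*x(t - tau) with x(t) = 1, t <= 0."""
    n = int(T / dt)
    lag = int(round(tau / dt))
    x = [1.0] * (lag + 1) + [0.0] * n      # history buffer, then solution
    for i in range(lag, lag + n):
        x[i + 1] = x[i] - dt * a * x[i - lag]
    return x[lag:]                          # trajectory on [0, T]

# Recover the delay by grid search against "observed" data (true tau = 0.5)
obs = simulate_dde(0.5, 1.0)

def sse(tau):
    return sum((s - o)**2 for s, o in zip(simulate_dde(tau, 1.0), obs))

taus = [0.1 * k for k in range(1, 11)]
tau_hat = min(taus, key=sse)
```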
Directory of Open Access Journals (Sweden)
Weikai Li
2017-08-01
Full Text Available Functional brain network (FBN) has been becoming an increasingly important way to model the statistical dependence among neural time courses of brain, and provides effective imaging biomarkers for diagnosis of some neurological or psychological disorders. Currently, Pearson's Correlation (PC) is the simplest and most widely-used method in constructing FBNs. Despite its advantages in statistical meaning and calculated performance, the PC tends to result in a FBN with dense connections. Therefore, in practice, the PC-based FBN needs to be sparsified by removing weak (potential noisy) connections. However, such a scheme depends on a hard-threshold without enough flexibility. Different from this traditional strategy, in this paper, we propose a new approach for estimating FBNs by remodeling PC as an optimization problem, which provides a way to incorporate biological/physical priors into the FBNs. In particular, we introduce an L1-norm regularizer into the optimization model for obtaining a sparse solution. Compared with the hard-threshold scheme, the proposed framework gives an elegant mathematical formulation for sparsifying PC-based networks. More importantly, it provides a platform to encode other biological/physical priors into the PC-based FBNs. To further illustrate the flexibility of the proposed method, we extend the model to a weighted counterpart for learning both sparse and scale-free networks, and then conduct experiments to identify autism spectrum disorders (ASD) from normal controls (NC) based on the constructed FBNs. Consequently, we achieved an 81.52% classification accuracy which outperforms the baseline and state-of-the-art methods.
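The remodelling idea has a particularly clean special case: when the data-fit term is the Frobenius distance to the Pearson matrix and the prior is an L1 penalty, the optimization has a closed-form solution by elementwise soft-thresholding. A minimal sketch on synthetic time courses (the regularization weight is an arbitrary assumption, not a value from the paper):

```python
import numpy as np

def sparse_fbn(ts, lam=0.6):
    """Closed-form solution of min_W ||W - R||_F^2 + lam*||W||_1, where R is
    the Pearson correlation matrix: elementwise soft-thresholding by lam/2.
    A minimal sketch of the L1-regularised remodelling of PC."""
    R = np.corrcoef(ts)                       # regions x regions
    W = np.sign(R) * np.maximum(np.abs(R) - lam / 2.0, 0.0)
    np.fill_diagonal(W, 0.0)                  # drop self-connections
    return W

rng = np.random.default_rng(42)
ts = rng.standard_normal((10, 120))           # 10 regions, 120 time points
W = sparse_fbn(ts, lam=0.6)
sparsity = float(np.mean(W == 0))             # weak edges are removed
```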
Li, Weikai; Wang, Zhengxia; Zhang, Limei; Qiao, Lishan; Shen, Dinggang
2017-01-01
Functional brain network (FBN) has been becoming an increasingly important way to model the statistical dependence among neural time courses of brain, and provides effective imaging biomarkers for diagnosis of some neurological or psychological disorders. Currently, Pearson's Correlation (PC) is the simplest and most widely-used method in constructing FBNs. Despite its advantages in statistical meaning and calculated performance, the PC tends to result in a FBN with dense connections. Therefore, in practice, the PC-based FBN needs to be sparsified by removing weak (potential noisy) connections. However, such a scheme depends on a hard-threshold without enough flexibility. Different from this traditional strategy, in this paper, we propose a new approach for estimating FBNs by remodeling PC as an optimization problem, which provides a way to incorporate biological/physical priors into the FBNs. In particular, we introduce an L 1 -norm regularizer into the optimization model for obtaining a sparse solution. Compared with the hard-threshold scheme, the proposed framework gives an elegant mathematical formulation for sparsifying PC-based networks. More importantly, it provides a platform to encode other biological/physical priors into the PC-based FBNs. To further illustrate the flexibility of the proposed method, we extend the model to a weighted counterpart for learning both sparse and scale-free networks, and then conduct experiments to identify autism spectrum disorders (ASD) from normal controls (NC) based on the constructed FBNs. Consequently, we achieved an 81.52% classification accuracy which outperforms the baseline and state-of-the-art methods.
The organization of the human cerebellum estimated by intrinsic functional connectivity
Krienen, Fenna M.; Castellanos, Angela; Diaz, Julio C.; Yeo, B. T. Thomas
2011-01-01
The cerebral cortex communicates with the cerebellum via polysynaptic circuits. Separate regions of the cerebellum are connected to distinct cerebral areas, forming a complex topography. In this study we explored the organization of cerebrocerebellar circuits in the human using resting-state functional connectivity MRI (fcMRI). Data from 1,000 subjects were registered using nonlinear deformation of the cerebellum in combination with surface-based alignment of the cerebral cortex. The foot, hand, and tongue representations were localized in subjects performing movements. fcMRI maps derived from seed regions placed in different parts of the motor body representation yielded the expected inverted map of somatomotor topography in the anterior lobe and the upright map in the posterior lobe. Next, we mapped the complete topography of the cerebellum by estimating the principal cerebral target for each point in the cerebellum in a discovery sample of 500 subjects and replicated the topography in 500 independent subjects. The majority of the human cerebellum maps to association areas. Quantitative analysis of 17 distinct cerebral networks revealed that the extent of the cerebellum dedicated to each network is proportional to the network's extent in the cerebrum with a few exceptions, including primary visual cortex, which is not represented in the cerebellum. Like somatomotor representations, cerebellar regions linked to association cortex have separate anterior and posterior representations that are oriented as mirror images of one another. The orderly topography of the representations suggests that the cerebellum possesses at least two large, homotopic maps of the full cerebrum and possibly a smaller third map. PMID:21795627
Directory of Open Access Journals (Sweden)
Hong Yao
2016-01-01
Full Text Available The number of surface water pollution accidents (abbreviated as SWPAs) has increased substantially in China in recent years. Estimation of economic losses due to SWPAs has been one of the focuses in China and is mentioned many times in the Environmental Protection Law of China promulgated in 2014. From the perspective of water bodies’ functions, pollution accident damages can be divided into eight types: damage to human health, water supply suspension, fishery, recreational functions, biological diversity, environmental property loss, the accident’s origin and other indirect losses. In the valuation of damage to people’s life, the procedure for compensation of traffic accidents in China was used. The functional replacement cost method was used in economic estimation of the losses due to water supply suspension and loss of water’s recreational functions. Damage to biological diversity was estimated by recovery cost analysis and damage to environmental property losses were calculated using pollutant removal costs. As a case study, using the proposed calculation procedure the economic losses caused by the major Songhuajiang River pollution accident that happened in China in 2005 have been estimated at 2263 billion CNY. The estimated economic losses for real accidents can sometimes be influenced by social and political factors, such as data authenticity and accuracy. Besides, one or more aspects in the method might be overestimated, underrated or even ignored. The proposed procedure may be used by decision makers for the economic estimation of losses in SWPAs. Estimates of the economic losses of pollution accidents could help quantify potential costs associated with increased risk sources along lakes/rivers but more importantly, highlight the value of clean water to society as a whole.
DEFF Research Database (Denmark)
Jensen, H B; Mamoei, Sepehr; Ravnborg, M.
2016-01-01
OBJECTIVE: To provide distribution-based estimates of the minimal clinically important difference (MCID) after slow-release fampridine treatment on cognition and functional capacity in people with MS (PwMS). METHOD: MCID values were determined after SR-Fampridine treatment in 105 PwMS. Testing...
DEFF Research Database (Denmark)
Jørgensen, Bent; Demétrio, Clarice G. B.; Kristensen, Erik
2011-01-01
Estimation of Taylor’s power law for species abundance data may be performed by linear regression of the log empirical variances on the log means, but this method suffers from a problem of bias for sparse data. We show that the bias may be reduced by using a bias-corrected Pearson estimating...
Allen, Marcus; Zhong, Qiang; Kirsch, Nicholas; Dani, Ashwin; Clark, William W; Sharma, Nitin
2017-12-01
Miniature inertial measurement units (IMUs) are wearable sensors that measure limb segment or joint angles during dynamic movements. However, IMUs are generally prone to drift, external magnetic interference, and measurement noise. This paper presents a new class of nonlinear state estimation technique called state-dependent coefficient (SDC) estimation to accurately predict joint angles from IMU measurements. The SDC estimation method uses limb dynamics, instead of limb kinematics, to estimate the limb state. Importantly, the nonlinear limb dynamic model is formulated into state-dependent matrices that facilitate the estimator design without performing a Jacobian linearization. The estimation method is experimentally demonstrated to predict knee joint angle measurements during functional electrical stimulation of the quadriceps muscle. The nonlinear knee musculoskeletal model was identified through a series of experiments. The SDC estimator was then compared with an extended Kalman filter (EKF), which uses a Jacobian linearization, and a rotation matrix method, which uses a kinematic model instead of the dynamic model. Each estimator's performance was evaluated against the true value of the joint angle, which was measured through a rotary encoder. The experimental results showed that the SDC estimator, the rotation matrix method, and the EKF had root mean square errors of 2.70°, 2.86°, and 4.42°, respectively. Our preliminary experimental results show the new estimator's clear advantage over the EKF method but only a slight advantage over the rotation matrix method. However, the information from the dynamic model allows the SDC method to use only one IMU to measure the knee angle, compared with the rotation matrix method, which uses two IMUs to estimate the angle.
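The core step in SDC estimation is factoring nonlinear dynamics f(x) into a state-dependent linear form A(x)x without Jacobian linearization. A minimal sketch on a toy pendulum-like limb segment (illustrative parameters, not the knee musculoskeletal model identified in the paper):

```python
import numpy as np

# Toy limb-segment dynamics: theta'' = -(g/L)*sin(theta) - b*theta'
g_over_L, b = 9.81 / 0.4, 1.5  # illustrative, not identified, parameters

def f(x):
    """Nonlinear dynamics x' = f(x), with state x = [theta, theta']."""
    theta, omega = x
    return np.array([omega, -g_over_L * np.sin(theta) - b * omega])

def A(x):
    """State-dependent coefficient factorization satisfying f(x) = A(x) @ x.
    Uses sin(theta)/theta (continuous at 0) to pull theta out linearly."""
    theta, _ = x
    s = np.sinc(theta / np.pi)  # np.sinc(t) = sin(pi*t)/(pi*t), so this equals sin(theta)/theta
    return np.array([[0.0, 1.0],
                     [-g_over_L * s, -b]])

x = np.array([0.7, -0.3])
assert np.allclose(A(x) @ x, f(x))  # exact factorization, no linearization error
```

Because the factorization is exact at every state, an estimator built on A(x) avoids the local-approximation error an EKF incurs from the Jacobian.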
Estimating the small-x exponent of the structure function g1NS from the Bjorken sum rule
International Nuclear Information System (INIS)
Knauf, Anke; Meyer-Hermann, Michael; Soff, Gerhard
2002-01-01
We present a new estimate of the exponent governing the small-x behavior of the nonsinglet structure function g1^(p-n), derived under the assumption that the Bjorken sum rule is valid. We use the worldwide average of α_s and the NNNLO QCD corrections to the Bjorken sum rule. The structure function g1^NS is found to be clearly divergent for small x.
Gitman, M.B.; Klyuev, A.V.; Stolbov, V.Y.; Gitman, I.M.
2017-01-01
The technique allows the grain-phase structure of a functional material to be analysed in order to evaluate its performance, particularly its strength properties. The technique is based on the use of a linguistic variable in the process of comprehensive evaluation. An example is given of estimating the strength properties of steel reinforcement subjected to a special heat treatment to obtain the desired grain-phase structure.
International Nuclear Information System (INIS)
Dumonteil, E.; Diop, C. M.
2009-01-01
This paper derives an unbiased minimum variance estimator (UMVE) of a matrix exponential function of a normal mean. The result is then used to propose a reference scheme to solve Boltzmann/Bateman coupled equations, thanks to Monte Carlo transport codes. The last section presents numerical results on a simple example. (authors)
Bilir, Mustafa Kuzey
2009-01-01
This study uses a new psychometric model (mixture item response theory-MIMIC model) that simultaneously estimates differential item functioning (DIF) across manifest groups and latent classes. Current DIF detection methods investigate DIF from only one side, either across manifest groups (e.g., gender, ethnicity, etc.), or across latent classes…
Narison, Stéphan
1994-01-01
We estimate the sum of the Υ B̄B couplings using QCD Spectral Sum Rules (QSSR). Our result implies the phenomenological bound ξ'(vv'=1) ≥ -1.04 for the slope of the Isgur-Wise function. An analytic estimate of the (physical) slope to two loops within QSSR leads to the accurate value ξ'(vv'=1) ≃ -(1.00 ± 0.02) due to the (almost) complete cancellations between the perturbative and non-perturbative corrections at the stability points. Then, we deduce, from the present data, the improved estimate |V_cb| ≃ (1.48 ps/τ_B)^(1/2) × (37.3 ± 1.2 ± 1.4) × 10^-3, where the first error comes from the data analysis and the second one from the different model parametrizations of the Isgur-Wise function.
Directory of Open Access Journals (Sweden)
Rupnow Marcia FT
2005-09-01
Full Text Available Abstract Background: Most tools for estimating utilities use clinical trial data from general health status models, such as the 36-Item Short-Form Health Survey (SF-36). A disease-specific model may be more appropriate. The objective of this study was to apply a disease-specific utility mapping function for schizophrenia to data from a large, 1-year, open-label study of long-acting risperidone and to compare its performance with an SF-36-based utility mapping function. Methods: Patients with schizophrenia or schizoaffective disorder by DSM-IV criteria received 25, 50, or 75 mg long-acting risperidone every 2 weeks for 12 months. The Positive and Negative Syndrome Scale (PANSS) and SF-36 were used to assess efficacy and health-related quality of life. Movement disorder severity was measured using the Extrapyramidal Symptom Rating Scale (ESRS); data concerning other common adverse effects (orthostatic hypotension, weight gain) were collected. Transforms were applied to estimate utilities. Results: A total of 474 patients completed the study. Long-acting risperidone treatment was associated with a utility gain of 0.051 using the disease-specific function. The estimated gain using an SF-36-based mapping function was smaller: 0.0285. Estimates of gains were only weakly correlated (r = 0.2). Because of differences in scaling and variance, the requisite sample size for a randomized trial to confirm observed effects is much smaller for the disease-specific mapping function (156 versus 672 total subjects). Conclusion: Application of a disease-specific mapping function was feasible. Differences in scaling and precision suggest the clinically based mapping function has greater power than the SF-36-based measure to detect differences in utility.
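The reported sample-size gap (156 vs. 672 total subjects) is the behavior predicted by the standard two-group formula n per group = 2·σ²·(z₁₋α/₂ + z₁₋β)²/Δ². A sketch with the study's two utility gains; the common SD value below is hypothetical, chosen only to show how a larger effect-to-SD ratio shrinks the requisite trial:

```python
import math
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.8):
    """Two-sample size to detect mean difference `delta` given SD `sigma`."""
    z = NormalDist().inv_cdf
    return math.ceil(2 * (sigma / delta) ** 2 * (z(1 - alpha / 2) + z(power)) ** 2)

# Utility gains from the abstract; sigma = 0.16 is a hypothetical common SD.
n_disease = n_per_group(delta=0.051, sigma=0.16)
n_sf36 = n_per_group(delta=0.0285, sigma=0.16)
assert n_disease < n_sf36  # larger detectable effect -> smaller required trial
```

With equal SDs the required n scales as 1/Δ², so the disease-specific function's larger gain alone accounts for a several-fold reduction; the study's actual figures also reflect differing variances between the two mappings.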
Connectivity among subpopulations of Louisiana black bears as estimated by a step selection function
Clark, Joseph D.; Laufenberg, Jared S.; Davidson, Maria; Murrow, Jennifer L.
2015-01-01
Habitat fragmentation is a fundamental cause of population decline and increased risk of extinction for many wildlife species; animals with large home ranges and small population sizes are particularly sensitive. The Louisiana black bear (Ursus americanus luteolus) exists only in small, isolated subpopulations as a result of land clearing for agriculture, but the relative potential for inter-subpopulation movement by Louisiana black bears has not been quantified, nor have characteristics of effective travel routes between habitat fragments been identified. We placed and monitored global positioning system (GPS) radio collars on 8 female and 23 male bears located in 4 subpopulations in Louisiana, which included a reintroduced subpopulation located between 2 of the remnant subpopulations. We compared characteristics of sequential radiolocations of bears (i.e., steps) with steps that were possible but not chosen by the bears to develop step selection function models based on conditional logistic regression. The probability of a step being selected by a bear increased as the distance to natural land cover and agriculture at the end of the step decreased and as distance from roads at the end of a step increased. To characterize connectivity among subpopulations, we used the step selection models to create 4,000 hypothetical correlated random walks for each subpopulation representing potential dispersal events to estimate the proportion that intersected adjacent subpopulations (hereafter referred to as successful dispersals). Based on the models, movement paths for males intersected all adjacent subpopulations but paths for females intersected only the most proximate subpopulations. Cross-validation and genetic and independent observation data supported our findings. Our models also revealed that successful dispersals were facilitated by a reintroduced population located between 2 distant subpopulations. Successful dispersals for males were dependent on natural land
On the expected value and variance for an estimator of the spatio-temporal product density function
DEFF Research Database (Denmark)
Rodríguez-Corté, Francisco J.; Ghorbani, Mohammad; Mateu, Jorge
Second-order characteristics are used to analyse the spatio-temporal structure of the underlying point process, and thus these methods provide a natural starting point for the analysis of spatio-temporal point process data. We restrict our attention to the spatio-temporal product density function, and develop a non-parametric edge-corrected kernel estimate of the product density under the second-order intensity-reweighted stationary hypothesis. The expectation and variance of the estimator are obtained, and closed form expressions derived under the Poisson case. A detailed simulation study is presented to compare our closed-form expression for the variance with estimated ones for Poisson cases. The simulation experiments show that the theoretical form for the variance gives acceptable values, which can be used in practice. Finally, we apply the resulting estimator to data on the spatio-temporal distribution...
Efficient estimation of dynamic density functions with an application to outlier detection
Qahtan, Abdulhakim Ali Ali; Zhang, Xiangliang; Wang, Suojin
2012-01-01
In this paper, we propose a new method to estimate the dynamic density over data streams, named KDE-Track as it is based on a conventional and widely used Kernel Density Estimation (KDE) method. KDE-Track can efficiently estimate the density with linear complexity by using interpolation on a kernel model, which is incrementally updated upon the arrival of streaming data. Both theoretical analysis and experimental validation show that KDE-Track outperforms traditional KDE and a baseline method Cluster-Kernels on estimation accuracy of the complex density structures in data streams, computing time and memory usage. KDE-Track is also demonstrated on timely catching the dynamic density of synthetic and real-world data. In addition, KDE-Track is used to accurately detect outliers in sensor data and compared with two existing methods developed for detecting outliers and cleaning sensor data. © 2012 ACM.
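The core idea of maintaining a density estimate on a fixed grid and updating it cheaply as stream items arrive can be sketched as follows. This is a simplified sliding-window Gaussian KDE, not KDE-Track's interpolation-based kernel model; the class name and parameters are illustrative:

```python
import numpy as np
from collections import deque

class StreamingKDE:
    """Gaussian KDE over a sliding window, evaluated on a fixed grid.
    Each update costs O(grid size), independent of the stream length."""
    def __init__(self, grid, bandwidth, window=500):
        self.grid, self.h = grid, bandwidth
        self.buf = deque(maxlen=window)
        self.dens = np.zeros_like(grid)

    def _kernel(self, x):
        return np.exp(-0.5 * ((self.grid - x) / self.h) ** 2) / (self.h * np.sqrt(2 * np.pi))

    def update(self, x):
        if len(self.buf) == self.buf.maxlen:      # evict the oldest point's kernel
            self.dens -= self._kernel(self.buf[0])
        self.buf.append(x)                        # deque drops the oldest item itself
        self.dens += self._kernel(x)

    def pdf(self):
        return self.dens / max(len(self.buf), 1)

rng = np.random.default_rng(0)
grid = np.linspace(-5, 5, 401)
kde = StreamingKDE(grid, bandwidth=0.3)
for x in rng.normal(size=2000):
    kde.update(x)
mass = float(np.sum(kde.pdf()) * (grid[1] - grid[0]))
assert abs(mass - 1.0) < 0.02   # estimated density integrates to ~1 on the grid
```

The add/subtract update is what gives linear complexity per arrival; KDE-Track additionally interpolates between grid points to keep the grid coarse without losing accuracy.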
Efficient Estimation of Dynamic Density Functions with Applications in Streaming Data
Qahtan, Abdulhakim Ali Ali
2016-01-01
application is to detect outliers in data streams from sensor networks based on the estimated PDF. The method detects outliers accurately and outperforms baseline methods designed for detecting and cleaning outliers in sensor data. The third application
Endogenous markers for estimation of renal function in peritoneal dialysis patients
DEFF Research Database (Denmark)
Kjaergaard, Krista Dybtved; Jensen, Jens Dam; Rehling, Michael
2012-01-01
OBJECTIVE: This method comparison study, conducted at the peritoneal dialysis (PD) outpatient clinic of the Department of Renal Medicine, Aarhus University Hospital, Denmark, set out to evaluate the accuracy and reproducibility of methods for estimating glomerular filtration rate (GFR) based...
Cost function approach for estimating derived demand for composite wood products
T. C. Marcin
1991-01-01
A cost function approach was examined for using the concept of duality between production and input factor demands. A translog cost function was used to represent residential construction costs and derived conditional factor demand equations. Alternative models were derived from the translog cost function by imposing parameter restrictions.
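Under the usual homogeneity restrictions (the α_i sum to one, each row of the symmetric Γ matrix sums to zero), Shephard's lemma turns a translog cost function into cost-share equations s_i = ∂ln C/∂ln p_i that sum to one. A numeric sketch with made-up coefficients, not estimates from the residential-construction data:

```python
import numpy as np

alpha = np.array([0.5, 0.3, 0.2])            # first-order terms, sum to 1
gamma = np.array([[ 0.10, -0.06, -0.04],
                  [-0.06,  0.09, -0.03],
                  [-0.04, -0.03,  0.07]])    # symmetric, rows sum to 0

def log_cost(log_p, a0=1.0):
    """Translog cost: ln C = a0 + alpha . ln p + 0.5 * ln p' Gamma ln p."""
    return a0 + alpha @ log_p + 0.5 * log_p @ gamma @ log_p

def shares(log_p):
    """Shephard's lemma: s_i = d ln C / d ln p_i = alpha_i + (Gamma ln p)_i."""
    return alpha + gamma @ log_p

lp = np.log(np.array([1.2, 0.8, 2.0]))       # input prices
s = shares(lp)
assert abs(s.sum() - 1.0) < 1e-12            # shares sum to one under homogeneity
```

Imposing the parameter restrictions before estimation, as the abstract describes, is exactly what guarantees the derived conditional factor demands behave like cost shares.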
Barth, Timothy J.
2014-01-01
This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are only of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data, sparse tensorization methods [2] utilizing node-nested hierarchies, and sampling methods [4] for high-dimensional random variable spaces.
Single image super-resolution based on approximated Heaviside functions and iterative refinement
Wang, Xin-Yu; Huang, Ting-Zhu; Deng, Liang-Jian
2018-01-01
One method of solving the single-image super-resolution problem is to use Heaviside functions. This has been done previously by making a binary classification of image components as “smooth” and “non-smooth”, describing these with approximated Heaviside functions (AHFs), and iteration including l1 regularization. We now introduce a new method in which the binary classification of image components is extended to different degrees of smoothness and non-smoothness, these components being represented by various classes of AHFs. Taking into account the sparsity of the non-smooth components, their coefficients are l1 regularized. In addition, to pick up more image details, the new method uses an iterative refinement for the residuals between the original low-resolution input and the downsampled resulting image. Experimental results showed that the new method is superior to the original AHF method and to four other published methods. PMID:29329298
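An approximated Heaviside function of the kind used here replaces the unit step with a smooth function whose sharpness is set by a scale parameter. The arctangent form below is one common choice, shown as a sketch rather than necessarily the exact form used in the paper:

```python
import numpy as np

def ahf(x, eps):
    """Approximated Heaviside function: a smooth step with transition width ~eps."""
    return 0.5 + np.arctan(x / eps) / np.pi

x = np.linspace(-1, 1, 5)
assert abs(ahf(0.0, 0.1) - 0.5) < 1e-12                   # centered at 1/2
assert ahf(1.0, 0.01) > 0.99 and ahf(-1.0, 0.01) < 0.01   # approaches 0/1 away from the jump
assert np.all(np.diff(ahf(x, 0.1)) > 0)                   # monotone increasing
```

Varying eps yields a family of AHFs of different smoothness, which is the degree-of-smoothness classification the new method exploits when decomposing image components.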
Yang, Shuang-Long; Liang, Li-Ping; Liu, Hou-De; Xu, Ke-Jun
2018-03-01
Aiming at reducing the estimation error of the sensor frequency response function (FRF) estimated by the commonly used window-based spectral estimation method, the error models of interpolation and transient errors are derived in the form of non-parametric models. Accordingly, window effects on the errors are analyzed and reveal that the commonly used Hanning window leads to a smaller interpolation error, which can be largely eliminated by the cubic spline interpolation method when estimating the FRF from step response data, and that a window with a smaller front-end value can restrain more of the transient error. Thus, a new dual-cosine window with its non-zero discrete Fourier transform bins at -3, -1, 0, 1, and 3 is constructed for FRF estimation. Compared with the Hanning window, the new dual-cosine window has equivalent interpolation error suppression capability and better transient error suppression capability when estimating the FRF from the step response; specifically, it reduces the asymptotic decay of the transient error from O(N^-2) for the Hanning window method to O(N^-4) while increasing the uncertainty only slightly (about 0.4 dB). Then, one direction of a wind tunnel strain gauge balance, which is a high-order, lightly damped, non-minimum-phase system, is employed as the example for verifying the new dual-cosine window-based spectral estimation method. The model simulation result shows that the new dual-cosine window method is better than the Hanning window method for FRF estimation, and compared with the Gans method and the LPM method, it has the advantages of simple computation, less time consumption, and a short data requirement; the actual data calculation result of the balance FRF is consistent with the simulation result. Thus, the new dual-cosine window is effective and practical for FRF estimation.
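The baseline pipeline the paper refines, differencing a step response into an impulse response and taking its DFT, can be sketched for a known first-order system. This minimal version omits the windowing step (the Hanning vs. dual-cosine design that controls interpolation and transient error is exactly the part the paper improves):

```python
import numpy as np

# Step response of the first-order system y[n] = a*y[n-1] + (1-a)*u[n]
a, N = 0.9, 1024
n = np.arange(N)
step = 1.0 - a ** (n + 1)

# Difference the step response to recover the impulse response, then DFT it
h = np.diff(step, prepend=0.0)          # h[n] = (1-a)*a**n
H_est = np.fft.rfft(h)                  # un-windowed FRF estimate

# True FRF of the system for comparison
w = 2 * np.pi * np.arange(len(H_est)) / N
H_true = (1 - a) / (1 - a * np.exp(-1j * w))

assert np.max(np.abs(H_est - H_true)) < 1e-6   # truncation error ~ a**N is negligible
```

For short records or lightly damped systems the truncation (transient) term is no longer negligible, which is why the window's front-end behavior, and hence the dual-cosine design, matters.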
Directory of Open Access Journals (Sweden)
Elizabeth Hansen
2012-07-01
Full Text Available The annual response variable in an ecological monitoring study often relates linearly to the weighted cumulative effect of some daily covariate, after adjusting for other annual covariates. Here we consider the problem of non-parametrically estimating the weights involved in computing the aforementioned cumulative effect, with a panel of short and contemporaneously correlated time series whose responses share the common cumulative effect of a daily covariate. The sequence of (unknown) daily weights constitutes the so-called transfer function. Specifically, we consider the problem of estimating a smooth common transfer function shared by a panel of short time series that are contemporaneously correlated. We propose an estimation scheme using a likelihood approach that penalizes the roughness of the common transfer function. We illustrate the proposed method with a simulation study and a biological example of indirectly estimating the spawning date distribution of North Sea cod.
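A roughness-penalized fit of the daily weights can be sketched as a distributed-lag regression with a squared second-difference penalty, the Gaussian-error analogue of the penalized likelihood described above; λ, the lag count, and the bell-shaped true transfer function below are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
L = 30                                     # number of daily lags
true_w = np.exp(-0.5 * ((np.arange(L) - 10) / 4.0) ** 2)   # smooth transfer function

n = 200
X = rng.normal(size=(n, L))                # rows: lagged daily covariate values
y = X @ true_w + rng.normal(scale=0.5, size=n)

# Second-difference matrix D: penalty sums (w[i-1] - 2*w[i] + w[i+1])**2
D = np.diff(np.eye(L), n=2, axis=0)
lam = 5.0
w_hat = np.linalg.solve(X.T @ X + lam * D.T @ D, X.T @ y)

rmse = np.sqrt(np.mean((w_hat - true_w) ** 2))
assert rmse < 0.1                          # penalized fit recovers the smooth weights
```

The penalty shrinks the curvature of the estimated transfer function, which is what makes the weights identifiable from short, noisy panels.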
Gao, Mingwu; Cheng, Hao-Min; Sung, Shih-Hsien; Chen, Chen-Huan; Olivier, Nicholas Bari; Mukkamala, Ramakrishna
2017-07-01
Pulse transit time (PTT) varies with blood pressure (BP) throughout the cardiac cycle, yet, because of wave reflection, only one PTT value at the diastolic BP level is conventionally estimated from proximal and distal BP waveforms. The objective was to establish a technique to estimate multiple PTT values at different BP levels in the cardiac cycle. A technique was developed for estimating PTT as a function of BP (to indicate the PTT value for every BP level) from proximal and distal BP waveforms. First, a mathematical transformation from one waveform to the other is defined in terms of the parameters of a nonlinear arterial tube-load model accounting for BP-dependent arterial compliance and wave reflection. Then, the parameters are estimated by optimally fitting the waveforms to each other via the model-based transformation. Finally, PTT as a function of BP is specified by the parameters. The technique was assessed in animals and patients in several ways, including the ability of its estimated PTT-BP function to serve as a subject-specific curve for calibrating PTT to BP. The calibration curve derived by the technique during a baseline period yielded bias and precision errors in mean BP of 5.1 ± 0.9 and 6.6 ± 1.0 mmHg, respectively, during hemodynamic interventions that varied mean BP widely. The new technique may permit, for the first time, estimation of PTT values throughout the cardiac cycle from proximal and distal waveforms. The technique could potentially be applied to improve arterial stiffness monitoring and help realize cuff-less BP monitoring.
On the growth estimates of entire functions of double complex variables
Directory of Open Access Journals (Sweden)
Sanjib Datta
2017-08-01
Full Text Available Recently, Datta et al. (2016) introduced the idea of relative type and relative weak type of entire functions of two complex variables with respect to another entire function of two complex variables and proved some related growth properties. In this paper, we further study some growth properties of entire functions of two complex variables on the basis of their relative types and relative weak types as introduced by Datta et al. (2016).
Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas
2005-01-01
The performance of high-powered wavelength-division multiplexed (WDM) optical networks can be severely degraded by four-wave-mixing- (FWM-) induced distortion. The multicanonical Monte Carlo method (MCMC) is used to calculate the probability-density function (PDF) of the decision variable of a receiver, limited by FWM noise. Compared with the conventional Monte Carlo method previously used to estimate this PDF, the MCMC method is much faster and can accurately estimate smaller error probabilities. The method takes into account the correlation between the components of the FWM noise, unlike the Gaussian model, which is shown not to provide accurate results.
A Complex Estimation Function based on Community Reputation for On-line Transaction Systems
Directory of Open Access Journals (Sweden)
Yu Yang
2012-09-01
Full Text Available A reputation management system is crucial in online transaction systems, and a reputation function is its central component. We propose a generalized set-theoretic reputation function in this paper, which can be configured to meet various assessment requirements of a wide range of reputation scenarios encountered in online transactions nowadays. We analyze and verify the tolerance of this reputation function against various socio-communal reputation attacks. We find the function to be dynamic, customizable, and tolerant against different attacks. As such, it can serve well in many online transaction systems such as e-commerce websites, online group activities, and P2P systems.
DEFF Research Database (Denmark)
Chon, K H; Cohen, R J; Holstein-Rathlou, N H
1997-01-01
A linear and nonlinear autoregressive moving average (ARMA) identification algorithm is developed for modeling time series data. The algorithm uses Laguerre expansion of kernels (LEK) to estimate Volterra-Wiener kernels. However, instead of estimating linear and nonlinear system dynamics via moving average models, as is the case for the Volterra-Wiener analysis, we propose an ARMA model-based approach. The proposed algorithm is essentially the same as LEK, but this algorithm is extended to include past values of the output as well. Thus, all of the advantages associated with using the Laguerre
John S. Hogland; Nathaniel M. Anderson
2015-01-01
Raster modeling is an integral component of spatial analysis. However, conventional raster modeling techniques can require a substantial amount of processing time and storage space, often limiting the types of analyses that can be performed. To address this issue, we have developed Function Modeling. Function Modeling is a new modeling framework that streamlines the...
Borovkova, Svetlana; Burton, Robert; Dehling, Herold
2001-01-01
In this paper we develop a general approach for investigating the asymptotic distribution of functionals X_n = f((Z_{n+k})_{k∈Z}) of absolutely regular stochastic processes (Z_n)_{n∈Z}. Such functionals occur naturally as orbits of chaotic dynamical systems, and thus our results can be used to study
DEFF Research Database (Denmark)
Shekarchi, Sayedali; Christensen-Dalsgaard, Jakob; Hallam, John
2015-01-01
A head-related transfer function (HRTF) model employing Legendre polynomials (LPs) is evaluated as an HRTF spatial complexity indicator and interpolation technique in the azimuth plane. LPs are a set of orthogonal functions derived on the sphere which can be used to compress an HRTF dataset...
Peng, Shitao; Zhou, Ran; Qin, Xuebo; Shi, Honghua; Ding, Dewen
2013-09-15
In this study, the functional group concept was first applied to evaluate the ecosystem health of Bohai Bay. Macrobenthos functional groups were defined according to feeding types and divided into five groups: a carnivorous group (CA), omnivorous group (OM), planktivorous group (PL), herbivorous group (HE), and detritivorous group (DE). Groups CA, DE, OM, and PL were identified, but the HE group was absent from Bohai Bay. Group DE was dominant during the study periods. The ecosystem health was assessed using a functional group evenness index. The functional group evenness values of most sampling stations were less than 0.40, indicating that the ecosystem health was deteriorated in Bohai Bay. Such deterioration could be attributed to land reclamation, industrial and sewage effluents, oil pollution, and hypersaline water discharge. This study demonstrates that the functional group concept can be applied to ecosystem health assessment in a semi-enclosed bay. Copyright © 2013 Elsevier Ltd. All rights reserved.
Bipp, T.; Steinmayr, R.; Spinath, B.
2012-01-01
Building on the notion that motivation energizes and directs resources in achievement situations, we argue that goal orientations affect perceptions of own intelligence and that the effect of goals on performance is partly mediated by self-estimates of intelligence. Studies 1 (n = 89) and 2 (n =
Simultaneous Estimation of Regression Functions for Marine Corps Technical Training Specialties.
Dunbar, Stephen B.; And Others
This paper considers the application of Bayesian techniques for simultaneous estimation to the specification of regression weights for selection tests used in various technical training courses in the Marine Corps. Results of a method for m-group regression developed by Molenaar and Lewis (1979) suggest that common weights for training courses…
Adding a Parameter Increases the Variance of an Estimated Regression Function
Withers, Christopher S.; Nadarajah, Saralees
2011-01-01
The linear regression model is one of the most popular models in statistics. It is also one of the simplest models in statistics. It has received applications in almost every area of science, engineering and medicine. In this article, the authors show that adding a predictor to a linear model increases the variance of the estimated regression…
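The claim can be checked numerically: the variance of the fitted regression function at a design point is σ²·h_ii, where h_ii is the hat-matrix diagonal, and projecting onto a larger column space can only increase each h_ii. A quick sketch (the quadratic term is just one illustrative added predictor):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
x = rng.normal(size=n)

def hat_diag(X):
    """Diagonal of the hat matrix H = X (X'X)^{-1} X', via a QR factorization."""
    Q, _ = np.linalg.qr(X)
    return np.sum(Q ** 2, axis=1)

X1 = np.column_stack([np.ones(n), x])            # intercept + slope
X2 = np.column_stack([np.ones(n), x, x ** 2])    # add a quadratic predictor

h1, h2 = hat_diag(X1), hat_diag(X2)
assert np.all(h2 >= h1 - 1e-12)   # Var(yhat_i) = sigma^2 * h_ii never decreases
assert h2.sum() > h1.sum()        # trace(H) equals the parameter count: 3 > 2
```

Geometrically, H2 - H1 is itself a projection matrix, hence positive semidefinite with a non-negative diagonal, which is the variance-inflation result the article proves.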
The Support Reduction Algorithm for Computing Non-Parametric Function Estimates in Mixture Models
GROENEBOOM, PIET; JONGBLOED, GEURT; WELLNER, JON A.
2008-01-01
In this paper, we study an algorithm (which we call the support reduction algorithm) that can be used to compute non-parametric M-estimators in mixture models. The algorithm is compared with natural competitors in the context of convex regression and the ‘Aspect problem’ in quantum physics.
Panel data estimates of the production function and product and labor market imperfections
Dobbelaere, S.; Mairesse, J.
2013-01-01
Consistent with two models of imperfect competition in the labor market-the efficient bargaining model and the monopsony model-we provide two extensions of a microeconomic version of Hall's framework for estimating price-cost margins. We show that both product and labor market imperfections generate
Saturated hydraulic conductivity Ksat is a fundamental characteristic in modeling flow and contaminant transport in soils and sediments. Therefore, many models have been developed to estimate Ksat from easily measurable parameters, such as textural properties, bulk density, etc. However, Ksat is no...
Liu, Qingshan; Wang, Jun
2011-04-01
This paper presents a one-layer recurrent neural network for solving a class of constrained nonsmooth optimization problems with piecewise-linear objective functions. The proposed neural network is guaranteed to be globally convergent in finite time to the optimal solutions under a mild condition on a derived lower bound of a single gain parameter in the model. The number of neurons in the neural network is the same as the number of decision variables of the optimization problem. Compared with existing neural networks for optimization, the proposed neural network has a couple of salient features such as finite-time convergence and a low model complexity. Specific models for two important special cases, namely, linear programming and nonsmooth optimization, are also presented. In addition, applications to the shortest path problem and constrained least absolute deviation problem are discussed with simulation results to demonstrate the effectiveness and characteristics of the proposed neural network.
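As a rough illustration of the piecewise-linear problems the network targets, here is a plain Euler-discretized subgradient descent for the least absolute deviation fit mentioned in the abstract. This is a simplified sketch, not the authors' one-layer recurrent network; the data, step size, and iteration count are arbitrary illustrative choices.

```python
# Minimize the nonsmooth piecewise-linear objective
#   L(a, b) = sum_i |y_i - (a + b*x_i)|
# by following a (sub)gradient flow, discretized with a fixed Euler step.

def sign(v):
    return (v > 0) - (v < 0)

def lad_loss(a, b, data):
    return sum(abs(y - (a + b * x)) for x, y in data)

data = [(0.0, 1.0), (1.0, 3.1), (2.0, 4.9), (3.0, 7.2), (4.0, 8.9)]
a, b = 0.0, 0.0
step = 0.01
loss0 = lad_loss(a, b, data)
for _ in range(5000):
    # a subgradient of the piecewise-linear objective at (a, b)
    ga = sum(-sign(y - (a + b * x)) for x, y in data)
    gb = sum(-sign(y - (a + b * x)) * x for x, y in data)
    a -= step * ga
    b -= step * gb
loss1 = lad_loss(a, b, data)   # hovers near the LAD optimum (b close to 2)
```

With a constant step the iterates oscillate in a small neighborhood of the optimum; the continuous-time network dynamics of the paper avoid that discretization artifact.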
Directory of Open Access Journals (Sweden)
Iris Gorny
2018-03-01
Objectives: The German socio-demographic estimation scale was developed by Jahn et al. (1) to quickly predict premorbid global cognitive functioning in patients. So far, it has been validated in healthy adults and has shown a good correlation with the full and verbal IQ of the Wechsler Adult Intelligence Scale (WAIS) in this group. However, there are no data regarding its use as a bedside test in epilepsy patients. Methods: Forty native German-speaking adult patients with refractory epilepsy were included. They completed a neuropsychological assessment, including a nine-scale short form of the German version of the WAIS-III and the German socio-demographic estimation scale by Jahn et al. (1), during their presurgical diagnostic stay in our center. We calculated means, correlations, and the rate of concordance (range ±5 and ±7.5 IQ score points) between these two measures for the whole group and for a subsample of 19 patients with a global cognitive functioning level within 1 SD of the mean (IQ score range 85–115) who had completed their formal education before epilepsy onset. Results: The German demographic estimation scale by Jahn et al. (1) showed a significant mean overestimation of the global cognitive functioning level of eight points in the epilepsy patient sample compared with the short-form WAIS-III score. The accuracy within a range of ±5 or ±7.5 IQ score points for each patient was similar to that of the healthy controls reported by Jahn et al. (1) in our subsample, but not in our whole sample. Conclusion: Our results show that the socio-demographic scale by Jahn et al. (1) is not sufficiently reliable as an estimation tool of global cognitive functioning in epilepsy patients. It can be used to estimate global cognitive functioning in a subset of patients with a normal global cognitive functioning level who completed their formal education before epilepsy onset, but it does not reliably predict global cognitive functioning in epilepsy patients in general.
Directory of Open Access Journals (Sweden)
Dan Li
2017-11-01
Background: Epidemiologic surveillance of lung function is key to clinical care of individuals with cystic fibrosis, but lung function decline is nonlinear and often impacted by acute respiratory events known as pulmonary exacerbations. Statistical models are needed to simultaneously estimate lung function decline while providing risk estimates for the onset of pulmonary exacerbations, in order to identify relevant predictors of declining lung function and understand how these associations could be used to predict the onset of pulmonary exacerbations. Methods: Using longitudinal lung function (FEV1) measurements and time-to-event data on pulmonary exacerbations from individuals in the United States Cystic Fibrosis Registry, we implemented a flexible semiparametric joint model consisting of a mixed-effects submodel with regression splines to fit repeated FEV1 measurements and a time-to-event submodel for possibly censored data on pulmonary exacerbations. We contrasted this approach with methods currently used in epidemiological studies and highlight clinical implications. Results: The semiparametric joint model had the best fit of all models examined based on the deviance information criterion. Higher starting FEV1 implied more rapid lung function decline in both separate and joint models; however, individualized risk estimates for pulmonary exacerbation differed depending upon model type. Based on shared parameter estimates from the joint model, which accounts for the nonlinear FEV1 trajectory, patients with more positive rates of change were less likely to experience a pulmonary exacerbation (HR per one standard deviation increase in FEV1 rate of change = 0.566, 95% CI 0.516–0.619), and having higher absolute FEV1 also corresponded to lower risk of having a pulmonary exacerbation (HR per one standard deviation increase in FEV1 = 0.856, 95% CI 0.781–0.937). At the population level, both submodels indicated significant effects of birth
Mejia, Amanda F; Nebel, Mary Beth; Barber, Anita D; Choe, Ann S; Pekar, James J; Caffo, Brian S; Lindquist, Martin A
2018-05-15
Reliability of subject-level resting-state functional connectivity (FC) is determined in part by the statistical techniques employed in its estimation. Methods that pool information across subjects to inform estimation of subject-level effects (e.g., Bayesian approaches) have been shown to enhance reliability of subject-level FC. However, fully Bayesian approaches are computationally demanding, while empirical Bayesian approaches typically rely on using repeated measures to estimate the variance components in the model. Here, we avoid the need for repeated measures by proposing a novel measurement error model for FC describing the different sources of variance and error, which we use to perform empirical Bayes shrinkage of subject-level FC towards the group average. In addition, since the traditional intra-class correlation coefficient (ICC) is inappropriate for biased estimates, we propose a new reliability measure denoted the mean squared error intra-class correlation coefficient (ICC_MSE) to properly assess the reliability of the resulting (biased) estimates. We apply the proposed techniques to test-retest resting-state fMRI data on 461 subjects from the Human Connectome Project to estimate connectivity between 100 regions identified through independent components analysis (ICA). We consider both correlation and partial correlation as the measure of FC and assess the benefit of shrinkage for each measure, as well as the effects of scan duration. We find that shrinkage estimates of subject-level FC exhibit substantially greater reliability than traditional estimates across various scan durations, even for the most reliable connections and regardless of connectivity measure. Additionally, we find partial correlation reliability to be highly sensitive to the choice of penalty term, and to be generally worse than that of full correlations except for certain connections and a narrow range of penalty values. This suggests that the penalty needs to be chosen carefully.
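The core of empirical Bayes shrinkage can be sketched in a few lines. This toy uses a simple variance-components weight (between-subject variance over total variance); the FC values and the assumed noise variance are hypothetical, and this is an illustration of the general idea, not the measurement error model of the paper.

```python
import statistics

subject_fc = [0.10, 0.35, 0.52, 0.41, 0.22]   # hypothetical subject-level FC
noise_var = 0.02                               # assumed within-subject error variance

group_mean = statistics.mean(subject_fc)
total_var = statistics.variance(subject_fc)
between_var = max(total_var - noise_var, 0.0)  # method-of-moments estimate
lam = between_var / (between_var + noise_var)  # shrinkage weight in [0, 1]

# pull each noisy subject estimate toward the group average
shrunk = [group_mean + lam * (x - group_mean) for x in subject_fc]
```

The noisier the individual estimates relative to true between-subject spread, the smaller lam becomes and the harder the estimates are pulled toward the group mean.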
Renal parenchyma thickness: a rapid estimation of renal function on computed tomography
International Nuclear Information System (INIS)
Kaplon, Daniel M.; Lasser, Michael S.; Sigman, Mark; Haleblian, George E.; Pareek, Gyan
2009-01-01
Purpose: To define the relationship between renal parenchyma thickness (RPT) on computed tomography and renal function on nuclear renography in chronically obstructed renal units (ORUs) and to define a minimal thickness ratio associated with adequate function. Materials and Methods: Twenty-eight consecutive patients undergoing both nuclear renography and CT during a six-month period between 2004 and 2006 were included. All patients that had a diagnosis of unilateral obstruction were included for analysis. RPT was measured in the following manner: the parenchyma thickness at three discrete levels of each kidney was measured using calipers on a CT workstation, and the mean of these three measurements was defined as RPT. The renal parenchyma thickness ratio of the ORUs and non-obstructed renal units (NORUs) was calculated, and this was compared to the observed function on MAG-3 Lasix renography. Results: A total of 28 patients were evaluated. Mean parenchyma thickness was 1.82 cm and 2.25 cm in the ORUs and NORUs, respectively. The mean relative renal function of ORUs was 39%. Linear regression analysis comparing renogram function to the RPT ratio revealed a correlation coefficient of 0.48. A thickness ratio of 0.68 correlated with 20% renal function. Conclusion: RPT on computed tomography appears to be a powerful predictor of relative renal function in ORUs. Assessment of RPT is a useful and readily available clinical tool for surgical decision making (renal salvage therapy versus nephrectomy) in patients with ORUs. (author)
International Nuclear Information System (INIS)
Nkemzi, B.
2005-10-01
Three-dimensional time-harmonic Maxwell's problems in axisymmetric domains Ω̂ with edges and conical points on the boundary are treated by means of the Fourier-finite-element method. The Fourier-FEM combines the approximating Fourier series expansion of the solution with respect to the rotational angle, using trigonometric polynomials of degree N (N → ∞), with the finite element approximation of the Fourier coefficients on the plane meridian domain Ω_a ⊂ R₊² of Ω̂ with mesh size h (h → 0). The singular behaviors of the Fourier coefficients near angular points of the domain Ω_a are fully described by suitable singular functions and treated numerically by means of the singular function method with the finite element method on graded meshes. It is proved that the rate of convergence of the mixed approximations in H¹(Ω̂)³ is of the order O(h + N⁻¹), as known for the classical Fourier-finite-element approximation of problems with regular solutions. (author)
Asiri, Sharefa M.; Elmetennani, Shahrazed; Laleg-Kirati, Taous-Meriem
2017-01-01
In this paper, an on-line estimation algorithm of the source term in a first order hyperbolic PDE is proposed. This equation describes heat transport dynamics in concentrated solar collectors where the source term represents the received energy. This energy depends on the solar irradiance intensity and the collector characteristics affected by the environmental changes. Control strategies are usually used to enhance the efficiency of heat production; however, these strategies often depend on the source term which is highly affected by the external working conditions. Hence, efficient source estimation methods are required. The proposed algorithm is based on modulating functions method where a moving horizon strategy is introduced. Numerical results are provided to illustrate the performance of the proposed estimator in open and closed loops.
Faizullah, Faiz
2016-01-01
The aim of the current paper is to present path-wise and moment estimates for solutions to stochastic functional differential equations (SFDEs) with a non-linear growth condition in the framework of G-expectation and G-Brownian motion. Under the non-linear growth condition, the pth moment estimates for solutions to SFDEs driven by G-Brownian motion are proved. The properties of G-expectations and Hölder's, Bihari's, Gronwall's, and Burkholder-Davis-Gundy inequalities are used to develop the above-mentioned theory. In addition, the path-wise asymptotic estimates and continuity of the pth moment for the solutions to SFDEs in the G-framework under the non-linear growth condition are shown.
Chai, Rui; Xu, Li-Sheng; Yao, Yang; Hao, Li-Ling; Qi, Lin
2017-01-01
This study analyzed the ascending branch slope (A_slope), dicrotic notch height (Hn), diastolic area (Ad), systolic area (As), diastolic blood pressure (DBP), systolic blood pressure (SBP), pulse pressure (PP), subendocardial viability ratio (SEVR), waveform parameter (k), stroke volume (SV), cardiac output (CO), and peripheral resistance (RS) of central pulse waves measured both invasively and non-invasively. Invasively measured parameters were compared with parameters estimated from brachial pulse waves by a regression model and by a transfer function model, and the accuracies of the parameters estimated by the two models were compared as well. Findings showed that the k value and the invasively measured central and brachial pulse wave parameters correlated positively. Regression model parameters, including A_slope, DBP, and SEVR, and transfer function model parameters both showed good consistency with invasively measured parameters, to the same degree. SBP, PP, SV, and CO could be calculated through the regression model, but their accuracies were worse than those of the transfer function model.
Blair, Clancy; Raver, C. Cybele; Berry, Daniel J.
2015-01-01
In the current article, we contrast 2 analytical approaches to estimate the relation of parenting to executive function development in a sample of 1,292 children assessed longitudinally between the ages of 36 and 60 months of age. Children were administered a newly developed and validated battery of 6 executive function tasks tapping inhibitory control, working memory, and attention shifting. Residualized change analysis indicated that higher quality parenting as indicated by higher scores on widely used measures of parenting at both earlier and later time points predicted more positive gain in executive function at 60 months. Latent change score models in which parenting and executive function over time were held to standards of longitudinal measurement invariance provided additional evidence of the association between change in parenting quality and change in executive function. In these models, cross-lagged paths indicated that in addition to parenting predicting change in executive function, executive function bidirectionally predicted change in parenting quality. Results were robust with the addition of covariates, including child sex, race, maternal education, and household income-to-need. Strengths and drawbacks of the 2 analytic approaches are discussed, and the findings are considered in light of emerging methodological innovations for testing the extent to which executive function is malleable and open to the influence of experience. PMID:23834294
Directory of Open Access Journals (Sweden)
Eyad K Almaita
2017-03-01
Keywords: Energy efficiency, Power quality, Radial basis function, neural networks, adaptive, harmonic. Article History: Received Dec 15, 2016; Received in revised form Feb 2nd 2017; Accepted 13rd 2017; Available online. How to Cite This Article: Almaita, E.K. and Shawawreh, J.Al (2017) Improving Stability and Convergence for Adaptive Radial Basis Function Neural Networks Algorithm (On-Line Harmonics Estimation Application). International Journal of Renewable Energy Development, 6(1), 9-17. http://dx.doi.org/10.14710/ijred.6.1.9-17
Asiri, Sharefa M.
2016-10-20
In this paper, a modulating functions-based method is proposed for estimating space-time-dependent unknowns in one-dimensional partial differential equations. The proposed method reduces the problem to a system of algebraic equations that is linear in the unknown parameters. The well-posedness of the modulating functions-based solution is proved. The wave and fifth-order KdV equations are used as examples to show the effectiveness of the proposed method in both noise-free and noisy cases.
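The modulating-functions idea can be shown on a toy problem much simpler than the PDEs of the paper: estimate the constant c in y'(t) = c·y(t) from samples of y alone. Multiplying by a modulating function φ with φ(0) = φ(T) = 0 and integrating by parts moves the derivative off the data, yielding the algebraic equation −∫φ'y dt = c∫φy dt. Everything below (the ODE, φ, the sample size) is an illustrative choice, not the paper's setting.

```python
import math

T = 1.0
c_true = -1.3
n = 1000
h = T / n
ts = [i * h for i in range(n + 1)]
ys = [math.exp(c_true * t) for t in ts]       # samples of the true solution

def phi(t):
    """Modulating function: vanishes (with its value) at both endpoints."""
    return (t * (T - t)) ** 2

def dphi(t):
    return 2 * t * (T - t) ** 2 - 2 * t ** 2 * (T - t)

# trapezoidal quadrature of -∫ phi' y dt  and  ∫ phi y dt
num = -sum(h * 0.5 * (dphi(ts[i]) * ys[i] + dphi(ts[i + 1]) * ys[i + 1])
           for i in range(n))
den = sum(h * 0.5 * (phi(ts[i]) * ys[i] + phi(ts[i + 1]) * ys[i + 1])
          for i in range(n))
c_est = num / den                              # algebraic, no differentiation of data
```

Because no derivative of the measured signal is ever computed, the same construction tolerates measurement noise far better than finite differencing, which is the method's main appeal.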
Estimate of the influence of muzzle smoke on function range of infrared system
Luo, Yan-ling; Wang, Jun; Wu, Jiang-hui; Wu, Jun; Gao, Meng; Gao, Fei; Zhao, Yu-jie; Zhang, Lei
2013-09-01
Muzzle smoke produced by weapons fire has an important influence on infrared (IR) systems while detecting targets. Based on the theoretical model of an IR system detecting spot targets and surface targets in the presence of muzzle smoke, the function ranges for detecting spot targets and surface targets are deduced separately according to the definitions of noise equivalent temperature difference (NETD) and minimum resolvable temperature difference (MRTD). Parameters of muzzle smoke affecting the function range of an IR system are also analyzed. Based on measured data of muzzle smoke for a single shot, the function ranges of an IR system for detecting typical targets are calculated separately with and without muzzle smoke in the 8-12 micron waveband. For our IR system, the function range is reduced by over 10% when detecting a tank if muzzle smoke exists. The results will provide evidence for evaluating the influence of muzzle smoke on IR systems and will help researchers improve ammunition technology.
Directory of Open Access Journals (Sweden)
Yuri B. Tebekin
2011-11-01
The article is devoted to the problem of quality management for multiphase processes on the basis of the probabilistic approach. A method with continuous response functions, based on the application of Lagrange multipliers, is offered.
Bornkamp, Björn; Ickstadt, Katja
2009-03-01
In this article, we consider monotone nonparametric regression in a Bayesian framework. The monotone function is modeled as a mixture of shifted and scaled parametric probability distribution functions, and a general random probability measure is assumed as the prior for the mixing distribution. We investigate the choice of the underlying parametric distribution function and find that the two-sided power distribution function is well suited both from a computational and mathematical point of view. The model is motivated by traditional nonlinear models for dose-response analysis, and provides possibilities to elicitate informative prior distributions on different aspects of the curve. The method is compared with other recent approaches to monotone nonparametric regression in a simulation study and is illustrated on a data set from dose-response analysis.
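The building block of the model is the two-sided power (TSP) distribution function on [0, 1]; a nonnegative mixture of such CDFs is automatically monotone, which is what makes it a convenient basis for monotone regression. The weights, modes, and powers below are arbitrary illustrative values, not fitted ones.

```python
def tsp_cdf(x, mode, power):
    """CDF of the two-sided power distribution on [0, 1] with 0 < mode < 1."""
    if x <= mode:
        return mode * (x / mode) ** power
    return 1.0 - (1.0 - mode) * ((1.0 - x) / (1.0 - mode)) ** power

weights = [0.3, 0.5, 0.2]          # nonnegative, sum to one
modes = [0.2, 0.5, 0.8]
powers = [2.0, 4.0, 3.0]

def curve(x):
    """Monotone regression curve: a convex combination of TSP CDFs."""
    return sum(w * tsp_cdf(x, m, p)
               for w, m, p in zip(weights, modes, powers))

grid = [i / 100 for i in range(101)]
values = [curve(x) for x in grid]   # nondecreasing from 0 to 1
```

In the Bayesian version of the paper, a random probability measure plays the role of the fixed weights here, so the posterior explores both the number and the placement of the mixture components.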
The Approach to an Estimation of a Local Area Network Functioning Efficiency
Directory of Open Access Journals (Sweden)
M. M. Taraskin
2010-09-01
In the article the authors call attention to the choice of a system of metrics that permits a qualitative assessment of local area network functioning efficiency under conditions of computer attacks.
Functional estimation of kidneys after extracorporeal shock wave therapy (ESWL) by clearance
International Nuclear Information System (INIS)
Sydow, K.; Kirschner, P.; Brien, G.; Buchali, K.; Frenzel, R.
1991-01-01
35 patients were scintiscanned with 99m-Tc-DTPA to determine the effects that extracorporeal shock waves, used to disintegrate renal concrements, may have on the patients' renal function. The therapy was conducted using a standard Lithostar unit (Siemens) (20 patients) and an additional overtable module (15 patients). Functional scintigraphy was performed using a gamma camera before lithotripsy and on the first day after it. Further control investigations were performed one or two weeks later and two to six months later. In both groups most of the patients developed temporary restrictions in renal function, some of them irreversible. Functional losses were found to be less severe with the use of the overtable module than with the standard Lithostar unit. (orig.)
Estimation of CN Parameter for Small Agricultural Watersheds Using Asymptotic Functions
Tomasz Kowalik; Andrzej Walega
2015-01-01
This paper investigates a possibility of using asymptotic functions to determine the value of curve number (CN) parameter as a function of rainfall in small agricultural watersheds. It also compares the actually calculated CN with its values provided in the Soil Conservation Service (SCS) National Engineering Handbook Section 4: Hydrology (NEH-4) and Technical Release 20 (TR-20). The analysis showed that empirical CN values presented in the National Engineering Handbook tables differed from t...
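A common asymptotic form relating CN to rainfall depth P is the Hawkins "standard response" CN(P) = CN∞ + (100 − CN∞)·exp(−k·P). Whether this is the exact function used in the paper is an assumption here; the fit below (log-linearized least squares on noiseless synthetic data) only illustrates the general approach of determining CN as an asymptotic function of rainfall.

```python
import math

cn_inf, k_true = 70.0, 0.08                    # assumed asymptotic CN and decay rate
rain = [10, 20, 40, 60, 80, 100, 150]          # rainfall depths, mm (hypothetical)
cn_obs = [cn_inf + (100 - cn_inf) * math.exp(-k_true * p) for p in rain]

# log-linearize: ln((CN - CNinf) / (100 - CNinf)) = -k * P,
# then fit k by least squares through the origin
zs = [math.log((c - cn_inf) / (100 - cn_inf)) for c in cn_obs]
k_est = -sum(p * z for p, z in zip(rain, zs)) / sum(p * p for p in rain)
```

With real watershed data CN∞ would itself be unknown and estimated jointly with k, typically by nonlinear least squares rather than the one-parameter shortcut shown here.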
International Nuclear Information System (INIS)
Cooke, D.J.
1983-01-01
A procedure has been developed for deriving functions which characterize the effect of geomagnetic cutoffs on the charged primary cosmic rays that give rise to neutrinos arriving in any given direction at specified points on or in the earth. These cutoff distribution functions, for use in atmospheric-neutrino flux calculations, have been determined for eight nucleon-decay experiment sites by use of a technique which employs the Störmer cutoff expression and which assumes collinear motion of the neutrino and its parent primary.
Using Empirical Data to Estimate Potential Functions in Commodity Markets: Some Initial Results
Shen, C.; Haven, E.
2017-12-01
This paper focuses on estimating real and quantum potentials from financial commodities. The log returns of six common commodities are considered. We find that some phenomena, such as vertical potential walls and the time-scale issue of the variation in returns, also exist in commodity markets. By comparing the quantum and classical potentials, we attempt to demonstrate that the information within these two types of potentials is different. We believe this empirical result is consistent with the theoretical assumption that quantum potentials (when embedded into social science contexts) may contain some social cognitive or market psychological information, while classical potentials mainly reflect `hard' market conditions. We also compare the two potential forces and explore their relationship by estimating the Pearson correlation between them. The medium or weak interaction effect may indicate that the cognitive system among traders is only partly affected by those `hard' market conditions.
Wang, Wei; Young, Bessie A.; Fülöp, Tibor; de Boer, Ian H.; Boulware, L. Ebony; Katz, Ronit; Correa, Adolfo; Griswold, Michael E.
2015-01-01
Background: The calibration to Isotope Dilution Mass Spectroscopy (IDMS) traceable creatinine is essential for valid use of the new Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation to estimate the glomerular filtration rate (GFR). Methods: For 5,210 participants in the Jackson Heart Study (JHS), serum creatinine was measured with a multipoint enzymatic spectrophotometric assay at the baseline visit (2000–2004) and re-measured using the Roche enzymatic method, traceable to IDMS, in a subset of 206 subjects. The 200 eligible samples (6 were excluded, 1 for failure of the re-measurement and 5 for outliers) were divided into three disjoint sets - training, validation, and test - to select a calibration model, estimate true errors, and assess performance of the final calibration equation. The calibration equation was applied to serum creatinine measurements of 5,210 participants to estimate GFR and the prevalence of CKD. Results: The selected Deming regression model provided a slope of 0.968 (95% Confidence Interval (CI), 0.904 to 1.053) and intercept of −0.0248 (95% CI, −0.0862 to 0.0366) with R² of 0.9527. Calibrated serum creatinine showed high agreement with actual measurements when applied to the unused test set (concordance correlation coefficient 0.934, 95% CI, 0.894 to 0.960). The baseline prevalence of CKD in the JHS (2000–2004) was 6.30% using calibrated values, compared with 8.29% using non-calibrated serum creatinine with the CKD-EPI equation. The calibrated creatinine measurements in the JHS thus provide a lower CKD prevalence estimate. PMID:25806862
DEFF Research Database (Denmark)
Hansen, Peter Reinhard; Lunde, Asger
An economic time series can often be viewed as a noisy proxy for an underlying economic variable. Measurement errors will influence the dynamic properties of the observed process and may conceal the persistence of the underlying time series. In this paper we develop instrumental variable (IV) estimators ... application despite the large sample. Unit root tests based on the IV estimator have better finite sample properties in this context.
Verification of functional a posteriori error estimates for obstacle problem in 1D
Czech Academy of Sciences Publication Activity Database
Harasim, P.; Valdman, Jan
2013-01-01
Roč. 49, č. 5 (2013), s. 738-754 ISSN 0023-5954 R&D Projects: GA ČR GA13-18652S Institutional support: RVO:67985556 Keywords : obstacle problem * a posteriori error estimate * variational inequalities Subject RIV: BA - General Mathematics Impact factor: 0.563, year: 2013 http://library.utia.cas.cz/separaty/2014/MTR/valdman-0424082.pdf
Verification of functional a posteriori error estimates for obstacle problem in 2D
Czech Academy of Sciences Publication Activity Database
Harasim, P.; Valdman, Jan
2014-01-01
Roč. 50, č. 6 (2014), s. 978-1002 ISSN 0023-5954 R&D Projects: GA ČR GA13-18652S Institutional support: RVO:67985556 Keywords : obstacle problem * a posteriori error estimate * finite element method * variational inequalities Subject RIV: BA - General Mathematics Impact factor: 0.541, year: 2014 http://library.utia.cas.cz/separaty/2015/MTR/valdman-0441661.pdf
Wang, Wei; Young, Bessie A; Fülöp, Tibor; de Boer, Ian H; Boulware, L Ebony; Katz, Ronit; Correa, Adolfo; Griswold, Michael E
2015-05-01
The calibration to isotope dilution mass spectrometry-traceable creatinine is essential for valid use of the new Chronic Kidney Disease Epidemiology Collaboration equation to estimate the glomerular filtration rate. For 5,210 participants in the Jackson Heart Study (JHS), serum creatinine was measured with a multipoint enzymatic spectrophotometric assay at the baseline visit (2000-2004) and remeasured using the Roche enzymatic method, traceable to isotope dilution mass spectrometry, in a subset of 206 subjects. The 200 eligible samples (6 were excluded, 1 for failure of the remeasurement and 5 for outliers) were divided into 3 disjoint sets-training, validation and test-to select a calibration model, estimate true errors and assess performance of the final calibration equation. The calibration equation was applied to serum creatinine measurements of 5,210 participants to estimate glomerular filtration rate and the prevalence of chronic kidney disease (CKD). The selected Deming regression model provided a slope of 0.968 (95% confidence interval [CI], 0.904-1.053) and intercept of -0.0248 (95% CI, -0.0862 to 0.0366) with an R² value of 0.9527. Calibrated serum creatinine showed high agreement with actual measurements when applied to the unused test set (concordance correlation coefficient 0.934, 95% CI, 0.894-0.960). The baseline prevalence of CKD in the JHS (2000-2004) was 6.30% using calibrated values compared with 8.29% using noncalibrated serum creatinine with the Chronic Kidney Disease Epidemiology Collaboration equation. The calibrated creatinine measurements in the JHS thus provide a lower CKD prevalence estimate.
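Deming regression, the model family selected for the calibration above, accounts for measurement error in both assays. Below is a minimal sketch with error-variance ratio λ = 1 on perfectly linear toy data; the slope/intercept values in the abstract come from the JHS samples, not from this illustration.

```python
def deming(xs, ys, lam=1.0):
    """Deming regression of y on x with error-variance ratio lam."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs) / (n - 1)
    syy = sum((y - ybar) ** 2 for y in ys) / (n - 1)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / (n - 1)
    # closed-form slope for errors-in-both-variables regression
    slope = ((syy - lam * sxx
              + ((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2) ** 0.5)
             / (2 * sxy))
    intercept = ybar - slope * xbar
    return slope, intercept

# toy calibration data: new assay = -0.02 + 0.97 * old assay, mg/dL
xs = [0.6, 0.8, 1.0, 1.3, 1.7, 2.2]
ys = [-0.02 + 0.97 * x for x in xs]
slope, intercept = deming(xs, ys)
```

Unlike ordinary least squares, the Deming slope is not attenuated when the x-assay is itself noisy, which is exactly why it is preferred for method-comparison calibration.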
Efficacy of using data from angler-caught Burbot to estimate population rate functions
Brauer, Tucker A.; Rhea, Darren T.; Walrath, John D.; Quist, Michael C.
2018-01-01
The effective management of a fish population depends on the collection of accurate demographic data from that population. Since demographic data are often expensive and difficult to obtain, developing cost‐effective and efficient collection methods is a high priority. This research evaluates the efficacy of using angler‐supplied data to monitor a nonnative population of Burbot Lota lota. Age and growth estimates were compared between Burbot collected by anglers and those collected in trammel nets from two Wyoming reservoirs. Collection methods produced different length‐frequency distributions, but no difference was observed in age‐frequency distributions. Mean back‐calculated lengths at age revealed that netted Burbot grew faster than angled Burbot in Fontenelle Reservoir. In contrast, angled Burbot grew slightly faster than netted Burbot in Flaming Gorge Reservoir. Von Bertalanffy growth models differed between collection methods, but differences in parameter estimates were minor. Estimates of total annual mortality (A) of Burbot in Fontenelle Reservoir were comparable between angled (A = 35.4%) and netted fish (33.9%); similar results were observed in Flaming Gorge Reservoir for angled (29.3%) and netted fish (30.5%). Beverton–Holt yield‐per‐recruit models were fit using data from both collection methods. Estimated yield differed by less than 15% between data sources and reservoir. Spawning potential ratios indicated that an exploitation rate of 20% would be required to induce recruitment overfishing in either reservoir, regardless of data source. Results of this study suggest that angler‐supplied data are useful for monitoring Burbot population dynamics in Wyoming and may be an option to efficiently monitor other fish populations in North America.
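One standard way to obtain total annual mortality figures of the kind reported for Burbot is a catch-curve regression on the descending limb of the catch-at-age distribution: ln(catch) declines linearly with age, the slope is −Z, and A = 1 − exp(−Z). The paper does not state that this exact estimator was used, and the counts below are hypothetical.

```python
import math

ages = [3, 4, 5, 6, 7, 8]                     # fully recruited ages
catch = [120, 78, 52, 34, 23, 15]             # hypothetical catch-at-age

logs = [math.log(c) for c in catch]
n = len(ages)
abar = sum(ages) / n
lbar = sum(logs) / n
slope = (sum((a - abar) * (l - lbar) for a, l in zip(ages, logs))
         / sum((a - abar) ** 2 for a in ages))
Z = -slope                                    # instantaneous total mortality
A = 1 - math.exp(-Z)                          # total annual mortality
```

Running the same calculation on angler-supplied and net-sampled age-frequency data is precisely the kind of comparison the study makes, since A depends only on the relative decline in numbers across ages, not on absolute catchability.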
Estimation of Import and Export demand Functions Using Bilateral Trade Data ___ the case of Pakistan
Jahanzaib Haider; Muhammad Afzal; Farah Riaz
2011-01-01
We estimated the import and export elasticities of Pakistan trade with traditional trade partners and some Asian countries to see the dynamics of Pakistan trade from 1973 to 2008. OLS results suggest that income is the principal determinant of exports and imports. Pakistan exports are cointegrated with Japan and USA while the imports are cointegrated with UAE and USA. Pakistan imports and exports are cointegrated with Bangladesh and Sri Lanka but not with India and China. Income and exchange ...
Hospital costs estimation and prediction as a function of patient and admission characteristics.
Ramiarina, Robert; Almeida, Renan Mvr; Pereira, Wagner Ca
2008-01-01
The present work analyzed the association between hospital costs and patient admission characteristics in a general public hospital in the city of Rio de Janeiro, Brazil. The unit costs method was used to estimate inpatient day costs associated with specific hospital clinics. With this aim, three "cost centers" were defined in order to group direct and indirect expenses pertaining to the clinics. After the costs were estimated, a standard linear regression model was developed for correlating cost units and their putative predictors (the patient's gender and age, the admission type (urgency/elective), ICU admission (yes/no), blood transfusion (yes/no), the admission outcome (death/no death), the complexity of the medical procedures performed, and a risk-adjustment index). Data were collected for 3,100 patients, January 2001-January 2003. Average inpatient costs across clinics ranged from US$ 1,135 [Orthopedics] to US$ 3,101 [Cardiology]. Costs increased according to increases in the risk-adjustment index in all clinics, and the index was statistically significant in all clinics except Urology, General surgery, and Clinical medicine. The occupation rate was inversely correlated with costs, and age had no association with costs. The adjusted percentage of explained variance varied between 36.3% [Clinical medicine] and 55.1% [Thoracic surgery clinic]. The estimates are an important step towards the standardization of hospital cost calculation, especially for countries that lack formal hospital accounting systems.
PEDO-TRANSFER FUNCTIONS FOR ESTIMATING SOIL BULK DENSITY IN CENTRAL AMAZONIA
Directory of Open Access Journals (Sweden)
Henrique Seixas Barros
2015-04-01
Full Text Available Under field conditions in the Amazon forest, soil bulk density is difficult to measure. Rigorous methodological criteria must be applied to obtain reliable inventories of C stocks and soil nutrients, making this process expensive and sometimes unfeasible. This study aimed to generate models to estimate soil bulk density based on parameters that can be easily and reliably measured in the field and that are available in many soil-related inventories. Stepwise regression models to predict bulk density were developed using data on soil C content, clay content and pH in water from 140 permanent plots in terra firme (upland) forests near Manaus, Amazonas State, Brazil. The model results were interpreted according to the coefficient of determination (R2) and the Akaike information criterion (AIC) and were validated with a dataset consisting of 125 plots different from those used to generate the models. The model with the best performance in estimating soil bulk density under the conditions of this study included clay content and pH in water as independent variables and had R2 = 0.73 and AIC = -250.29. The performance of this model for predicting soil density was compared with that of models from the literature. The results showed that the locally calibrated equation was the most accurate for estimating soil bulk density for upland forests in the Manaus region.
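The regression step described above can be sketched as follows. This is a minimal illustration of fitting a linear bulk-density model by ordinary least squares; all data and coefficients below are synthetic, not the paper's Amazonian calibration.

```python
import numpy as np

# Hypothetical model: bulk_density = b0 + b1*clay + b2*pH, fitted by OLS.
rng = np.random.default_rng(0)
n = 140
clay = rng.uniform(5, 80, n)     # clay content (%), synthetic
ph = rng.uniform(3.5, 6.0, n)    # pH in water, synthetic
b_true = np.array([1.60, -0.008, 0.05])   # assumed coefficients
rho = b_true[0] + b_true[1] * clay + b_true[2] * ph + rng.normal(0, 0.02, n)

# Design matrix with an intercept column, solved by least squares
X = np.column_stack([np.ones(n), clay, ph])
beta, *_ = np.linalg.lstsq(X, rho, rcond=None)

# Coefficient of determination, the R2 the abstract reports
pred = X @ beta
r2 = 1 - np.sum((rho - pred) ** 2) / np.sum((rho - rho.mean()) ** 2)
```

A stepwise variant would add or drop columns of `X` while tracking AIC, as the study does.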
Su, Nan-Yao; Lee, Sang-Hee
2008-04-01
Marked termites were released in a linear-connected foraging arena, and the spatial heterogeneity of their capture probabilities was averaged for both directions at distance r from the release point to obtain a symmetrical distribution, from which the density function of directionally averaged capture probability P(x) was derived. We hypothesized that as marked termites move into the population and given sufficient time, the directionally averaged capture probability may reach an equilibrium P(e) over the distance r and thus satisfy the equal mixing assumption of the mark-recapture protocol. The equilibrium capture probability P(e) was used to estimate the population size N. The hypothesis was tested in a 50-m extended foraging arena to simulate the distance factor of field colonies of subterranean termites. Over the 42-d test period, the density functions of directionally averaged capture probability P(x) exhibited four phases: an exponential decline phase, a linear decline phase, an equilibrium phase, and a postequilibrium phase. The equilibrium capture probability P(e), derived as the intercept of the linear regression during the equilibrium phase, correctly projected N estimates that were not significantly different from the known number of workers in the arena. Because the area beneath the probability density function is a constant (50% in this study), preequilibrium regression parameters and P(e) were used to estimate the population boundary distance l, which is the distance between the release point and the boundary beyond which the population is absent.
Liang, Xiaoyun; Vaughan, David N; Connelly, Alan; Calamante, Fernando
2018-05-01
The conventional way to estimate functional networks is based primarily on Pearson correlation along with the classic Fisher Z test. Networks are usually calculated at the individual level and subsequently aggregated to obtain group-level networks. However, networks estimated this way are inevitably affected by the inherently large inter-subject variability. A joint graphical model with stability selection (JGMSS) method was recently shown to effectively reduce inter-subject variability, mainly caused by confounding variations, by simultaneously estimating individual-level networks from a group. However, its benefits may be compromised when two groups are being compared, given that JGMSS is blind to the other group when it is applied to estimate networks from a given group. We propose a novel method for robustly estimating networks from two groups by using group-fused multiple graphical lasso combined with stability selection, named GMGLASS. Specifically, by simultaneously estimating similar within-group networks and the between-group difference, it is possible to address both the inter-subject variability of individual networks estimated with existing methods such as the Fisher Z test, and the issue of JGMSS ignoring between-group information in group comparisons. To evaluate the performance of GMGLASS in terms of a few key network metrics, and to compare it with JGMSS and the Fisher Z test, we applied these methods to both simulated and in vivo data. As a method aimed at group comparison studies, our study involves two groups for each case, i.e., normal control and patient groups; for the in vivo data, we focus on a group of patients with right mesial temporal lobe epilepsy.
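The conventional baseline the abstract mentions, Pearson correlation with the Fisher Z transform, can be sketched as follows. All signals are synthetic; the subject count, time-series length, and noise level are arbitrary assumptions for illustration.

```python
import numpy as np

# Sketch of the conventional pipeline: per-subject Pearson correlation for
# one network edge, Fisher z-transformed (variance ~ 1/(n_samples - 3)) so
# values can be averaged across subjects, then back-transformed.
rng = np.random.default_rng(1)

def fisher_z(r):
    return np.arctanh(r)

n_subj, n_time = 20, 200
z_vals = []
for _ in range(n_subj):
    shared = rng.normal(size=n_time)          # common signal driving the edge
    x = shared + 0.8 * rng.normal(size=n_time)
    y = shared + 0.8 * rng.normal(size=n_time)
    r = np.corrcoef(x, y)[0, 1]
    z_vals.append(fisher_z(r))

group_z = np.mean(z_vals)       # group-level aggregation in z-space
group_r = np.tanh(group_z)      # back-transform to a correlation
```

The inter-subject spread of `z_vals` is exactly the variability that the graphical-lasso approaches (JGMSS, GMGLASS) aim to control jointly rather than edge by edge.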
Functional approach in estimation of cultural ecosystem services of recreational areas
Sautkin, I. S.; Rogova, T. V.
2018-01-01
The article is devoted to the identification and analysis of cultural ecosystem services of recreational areas from the different forest plant functional groups in the suburbs of Kazan. The study explored two cultural ecosystem services supplied by forest plants by linking these services to different plant functional traits. Information on the functional traits of 76 plants occurring in the forest ecosystems of the investigated area was collected from reference books on the biological characteristics of plant species. Analysis of these species and traits with the Ward clustering method yielded four functional groups with different potentials for delivering ecosystem services. The results show that the contribution of species diversity to services can be characterized through the functional traits of plants. This proves that there is a stable relationship between biodiversity and the quality and quantity of ecosystem services. The proposed method can be extended to other types of services (regulating and supporting). The analysis can be used in the socio-economic assessment of natural ecosystems for recreation and other uses.
Estimation of Hepatic Function Using 99mTc-DISIDA Plasma Clearance Rate
International Nuclear Information System (INIS)
Lee, M. S.; Yoo, H. S.; Lee, J. T.; Park, C. Y.
1983-01-01
Various methods to determine hepatic function have been studied; among these, the calculated maximal removal rate of ICG (ICG R-max) is a more accurate and sensitive index for the quantification of hepatic function. However, calculation of ICG R-max is time-consuming, invasive, and expensive, and even the one-day ICG R-max study is still complicated. We therefore evaluated a hepatic function test using the 99mTc-DISIDA plasma clearance rate. The authors studied 11 normal controls, 4 cases of acute hepatitis, 8 cases of chronic hepatitis, and 19 cases of liver cirrhosis. The results were as follows: 1. The DISIDA-K was 0.70 min^-1 in normal controls, 0.25 min^-1 in liver cirrhosis, 0.46 min^-1 in acute hepatitis, and 0.14 min^-1 in chronic hepatitis. The most severely depressed DISIDA-K values were observed in liver cirrhosis. 2. Comparison of the DISIDA-K value with liver function indices revealed no correlation between the DISIDA-K value and serum albumin, prothrombin time, total bilirubin, SGOT, or alkaline phosphatase. 3. The DISIDA-K value in liver cirrhosis with complications such as ascites, splenomegaly, esophageal varices, or hepatic coma was lower than in cirrhosis without complications. From the above results, the DISIDA-K value was found to be an easily available, accurate, and simple index for the quantification of hepatic function.
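A plasma clearance constant of the kind reported above is conventionally obtained as the slope of log(activity) versus time for a mono-exponential disappearance curve. The sketch below illustrates that fit only; the sampling schedule and count values are assumptions, not the study's protocol.

```python
import numpy as np

# Mono-exponential clearance: counts(t) = C0 * exp(-K * t), so
# log(counts) is linear in t with slope -K (K in min^-1).
t = np.arange(5, 65, 5, dtype=float)     # minutes, assumed sampling times
k_true = 0.70                            # min^-1, a normal-range value
counts = 1e5 * np.exp(-k_true * t)       # noise-free synthetic activity

# Linear fit in log space recovers the clearance constant
slope, intercept = np.polyfit(t, np.log(counts), 1)
k_est = -slope
```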
2009-01-01
Background During the last part of the 1990s the chance of surviving breast cancer increased. Changes in survival functions reflect a mixture of effects: both the introduction of adjuvant treatments and early screening with mammography played a role in the decline in mortality. Evaluating the contribution of these interventions using mathematical models requires survival functions before and after their introduction. Furthermore, the required survival functions may differ by age group and are related to disease stage at diagnosis. Sometimes detailed information is not available, as was the case for the region of Catalonia (Spain). One may then derive the functions using information from other geographical areas. This work presents the methodology used to estimate age- and stage-specific Catalan breast cancer survival functions from scarce Catalan survival data by adapting the age- and stage-specific US functions. Methods Cubic splines were used to smooth the data and obtain continuous hazard rate functions. We then fitted a Poisson model to derive hazard ratios; the model included time as a covariate. The hazard ratios were then applied to US survival functions detailed by age and stage to obtain the Catalan estimates. Results We started by estimating the hazard ratios for Catalonia versus the USA before and after the introduction of screening. The hazard ratios were then multiplied by the age- and stage-specific breast cancer hazard rates from the USA to obtain the Catalan hazard rates. We also compared breast cancer survival in Catalonia and the USA in two time periods, before cancer control interventions (USA 1975-79, Catalonia 1980-89) and after (USA and Catalonia 1990-2001). Survival in Catalonia in the 1980-89 period was worse than in the USA during 1975-79, but the differences disappeared in 1990-2001. Conclusion Our results suggest that access to better treatments and quality of care contributed to large improvements in survival in Catalonia.
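The key transfer step in the methods above, applying a hazard ratio from a Poisson model to a reference hazard to obtain a target survival function, can be sketched as follows. The reference hazard and the hazard-ratio value are illustrative assumptions, not the paper's estimates.

```python
import numpy as np

# Given a reference hazard h_ref(t) and a hazard ratio HR, the target
# survival is S(t) = exp(-HR * integral of h_ref), i.e. S_ref(t)**HR.
t = np.linspace(0, 10, 101)             # years since diagnosis
h_ref = 0.05 * np.ones_like(t)          # constant reference hazard (assumed)
hr = 1.4                                # illustrative hazard ratio

# Cumulative hazard by a simple left Riemann sum
cum_h = np.cumsum(h_ref) * (t[1] - t[0])
s_ref = np.exp(-cum_h)                  # reference (e.g. US) survival
s_target = np.exp(-hr * cum_h)          # target (e.g. Catalan) survival
```

With HR > 1, the target survival curve lies uniformly below the reference curve, which is the behavior the paper exploits to adapt US curves to Catalonia.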
Unbiased determination of the proton structure function F2p with faithful uncertainty estimation
International Nuclear Information System (INIS)
Del Debbio, Luigi; Forte, Stefano; Latorre, Jose I.; Rojo, Joan; Piccione, Andrea
2005-01-01
We construct a parametrization of the deep-inelastic structure function of the proton, F2(x, Q2), based on all available experimental information from charged-lepton deep-inelastic scattering experiments. The parametrization effectively provides a bias-free determination of the probability measure in the space of structure functions, which retains information on experimental errors and correlations. The result is obtained in the form of a Monte Carlo sample of neural networks trained on an ensemble of replicas of the experimental data. We discuss in detail the techniques required for the construction of bias-free parametrizations of large amounts of structure function data, in view of future applications to the determination of parton distributions based on the same method. (author)
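The Monte Carlo replica idea at the heart of this approach can be sketched with a toy model: fluctuate the data within its errors many times, fit each replica, and read the uncertainty off the ensemble spread. The fit below is a straight line rather than a neural network, and all data are synthetic.

```python
import numpy as np

# Toy replica ensemble: the spread of fits to fluctuated pseudo-data
# propagates the experimental uncertainty into parameter space.
rng = np.random.default_rng(2)
x = np.linspace(0.1, 0.9, 30)
sigma = 0.03
data = 0.5 + 0.3 * x + rng.normal(0, sigma, x.size)   # toy "measurements"

n_rep = 200
fits = []
for _ in range(n_rep):
    replica = data + rng.normal(0, sigma, x.size)     # fluctuate within errors
    fits.append(np.polyfit(x, replica, 1))            # toy model fit per replica

fits = np.array(fits)
slope_mean, slope_std = fits[:, 0].mean(), fits[:, 0].std()
```

In the actual method each replica fit is a trained neural network, so the ensemble carries correlated uncertainties through to any derived observable.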
Quantitative pre-surgical lung function estimation with SPECT/CT
International Nuclear Information System (INIS)
Bailey, Dale L.; Timmins, Sophi; Harris, Benjamin E.; Bailey, Elizabeth A.; Roach, Paul J.; Willowson, Kathy P.
2009-01-01
Full text: Objectives: To develop methodology to predict lobar lung function based on SPECT/CT ventilation and perfusion (V/Q) scanning in candidates for lobectomy for lung cancer. This combines two development areas from our group: quantitative SPECT based on CT-derived corrections for scattering and attenuation of photons, and SPECT V/Q scanning with lobar segmentation from CT. Six patients underwent baseline pulmonary function testing (PFT) including spirometry, measurement of DLCO, and cardio-pulmonary exercise testing. A SPECT/CT V/Q scan was acquired at baseline. Using in-house software, each lobe was anatomically defined using CT to provide lobar ROIs which could be applied to the SPECT data. From these, the individual lobar contribution to overall function was calculated from counts within the lobe, and post-operative FEV1, DLCO and VO2 peak were predicted. This was compared with the quantitative planar scan method using 3 rectangular ROIs over each lung.
Estimation of the multidimensional transient functions of the human oculomotor system
Pavlenko, Vitaliy; Salata, Dmytro; Dombrovskyi, Mykola; Maksymenko, Yuri
2017-09-01
A new method is proposed for constructing nonparametric dynamic models of the human oculomotor system (OMS) in the form of multidimensional transition functions, based on experimental input-output data. Bright points displayed on the computer screen for a long duration were used as test signals. The OMS response was measured using eye-tracking technology and recorded on video. Processing of the experimental data yields a pupil coordinate versus time function. Using ordinary least squares (OLS), transition functions of the first, second and third order, i.e. integral transformations of the Volterra kernels, were determined, representing a model of the OMS. Experimental studies and computer simulations confirm the adequacy of the constructed approximation model to the real system.
Danjon, Frédéric; Caplan, Joshua S; Fortin, Mathieu; Meredieu, Céline
2013-01-01
Root systems of woody plants generally display a strong relationship between the cross-sectional area or cross-sectional diameter (CSD) of a root and the dry weight of biomass (DWd) or root volume (Vd) that has grown (i.e., is descendent) from a point. Specification of this relationship allows one to quantify root architectural patterns and estimate the amount of material lost when root systems are extracted from the soil. However, specifications of this relationship generally do not account for the fact that root systems are comprised of multiple types of roots. We assessed whether the relationship between CSD and Vd varies as a function of root type. Additionally, we sought to identify a more accurate and time-efficient method for estimating missing root volume than is currently available. We used a database that described the 3D root architecture of Pinus pinaster root systems (5, 12, or 19 years) from a stand in southwest France. We determined the relationship between CSD and Vd for 10,000 root segments from intact root branches. Models were specified that did and did not account for root type. The relationships were then applied to the diameters of 11,000 broken root ends to estimate the volume of missing roots. CSD was nearly linearly related to the square root of Vd, but the slope of the curve varied greatly as a function of root type. Sinkers and deep roots tapered rapidly, as they were limited by available soil depth. Distal shallow roots tapered gradually, as they were less limited spatially. We estimated that younger trees lost an average of 17% of root volume when excavated, while older trees lost 4%. Missing volumes were smallest in the central parts of root systems and largest in distal shallow roots. The slopes of the curves for each root type are synthetic parameters that account for differentiation due to genetics, soil properties, or mechanical stimuli. Accounting for this differentiation is critical to estimating root loss accurately.
International Nuclear Information System (INIS)
Pereira, A.B.; Vrisman, A.L.; Galvani, E.
2002-01-01
The solar radiation received at the surface of the earth, apart from its relevance to several daily human activities, plays an important role in the growth and development of plants. The aim of the current work was to develop and calibrate an estimation model for the evaluation of the global solar radiation flux density as a function of the solar energy potential at the soil surface. Radiometric data were collected at Ponta Grossa, PR, Brazil (latitude 25°13' S, longitude 50°03' W, altitude 880 m). Estimated values of the solar energy potential, obtained as a function of only one measurement taken at solar noon, were compared with those measured by a Robitzsch bimetallic actinograph for days with insolation ratios higher than 0.85. This data set was submitted to a simple linear regression analysis, and a good fit between observed and calculated values was obtained. For the estimation of the coefficients a and b of the Angström equation, the method based on the solar energy potential at the soil surface was used for the site under study. The methodology assessed the coefficients efficiently, quickly and simply for the determination of the global solar radiation flux density, and the criterion for the estimation of the solar energy potential was found to be equivalent to that of the classical Angström methodology. Knowledge of the available solar energy potential and the global solar radiation flux density is of great importance for the estimation of the maximum atmospheric evaporative demand, of water consumption by irrigated crops, and also for building solar engineering equipment, such as driers, heaters, solar ovens, refrigerators, etc.
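The Angström coefficient estimation mentioned above amounts to a simple linear regression of the clearness ratio on the insolation ratio, Rs/Ra = a + b·(n/N). The sketch below uses synthetic data around the classic a = 0.25, b = 0.50 values; it is not the Ponta Grossa calibration.

```python
import numpy as np

# Fit the Angstrom coefficients a and b from Rs/Ra = a + b*(n/N).
rng = np.random.default_rng(3)
ratio_nN = rng.uniform(0.85, 1.0, 50)    # insolation ratio n/N (clear days)
a_true, b_true = 0.25, 0.50              # assumed "classic" values
rs_ra = a_true + b_true * ratio_nN + rng.normal(0, 0.01, 50)

# np.polyfit returns highest-degree coefficient first: slope b, intercept a
b_est, a_est = np.polyfit(ratio_nN, rs_ra, 1)
```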
Jonge, de R.; Zanten, van J.H.
2012-01-01
We investigate posterior contraction rates for priors on multivariate functions that are constructed using tensor-product B-spline expansions. We prove that using a hierarchical prior with an appropriate prior distribution on the partition size and Gaussian prior weights on the B-spline
A non-parametric estimator for the doubly-periodic Poisson intensity function
R. Helmers (Roelof); I.W. Mangku (Wayan); R. Zitikis
2007-01-01
textabstractIn a series of papers, J. Garrido and Y. Lu have proposed and investigated a doubly-periodic Poisson model, and then applied it to analyze hurricane data. The authors have suggested several parametric models for the underlying intensity function. In the present paper we construct and
Optimization of the coherence function estimation for multi-core central processing unit
Cheremnov, A. G.; Faerman, V. A.; Avramchuk, V. S.
2017-02-01
The paper considers the use of parallel processing on a multi-core central processing unit to optimize the evaluation of the coherence function arising in digital signal processing. The coherence function, along with other methods of spectral analysis, is commonly used for vibration diagnosis of rotating machinery and its particular nodes. An algorithm is given for evaluating the function for signals represented by digital samples. The algorithm is analyzed with respect to its software implementation and computational issues. Optimization measures are described, including algorithmic, architectural and compiler optimization, and their results are assessed for multi-core processors from different manufacturers. The speed-up of parallel execution with respect to sequential execution was studied, and results are presented for Intel Core i7-4720HQ and AMD FX-9590 processors. The results show the comparatively high efficiency of the optimization measures taken. In particular, acceleration indicators and average CPU utilization were significantly improved, showing a high degree of parallelism in the constructed calculation functions. The developed software underwent state registration and will be used as part of a software and hardware solution for rotating machinery fault diagnosis and pipeline leak location with the acoustic correlation method.
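The quantity being accelerated, the magnitude-squared coherence, can be computed in a straightforward (unoptimized) way from segment-averaged spectra. The sketch below uses synthetic signals (a shared tone plus independent noise); segment counts and lengths are arbitrary choices.

```python
import numpy as np

# Magnitude-squared coherence: |Pxy|^2 / (Pxx * Pyy), with the cross- and
# auto-spectra averaged over segments (averaging is essential: a single
# segment always gives coherence identically 1).
rng = np.random.default_rng(4)
fs, nseg, seglen = 1000, 64, 256
t = np.arange(nseg * seglen) / fs
tone = np.sin(2 * np.pi * 62.5 * t)        # 62.5 Hz falls on an exact FFT bin
x = tone + 0.5 * rng.normal(size=t.size)
y = tone + 0.5 * rng.normal(size=t.size)

pxx = pyy = pxy = 0
for i in range(nseg):
    sl = slice(i * seglen, (i + 1) * seglen)
    fx, fy = np.fft.rfft(x[sl]), np.fft.rfft(y[sl])
    pxx = pxx + np.abs(fx) ** 2
    pyy = pyy + np.abs(fy) ** 2
    pxy = pxy + fx * np.conj(fy)

coh = np.abs(pxy) ** 2 / (pxx * pyy)
freqs = np.fft.rfftfreq(seglen, 1 / fs)
peak_bin = int(np.argmin(np.abs(freqs - 62.5)))
```

The per-segment FFTs are independent, which is exactly why the computation parallelizes well across CPU cores.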
Drug Dosing and Estimated Renal Function-Any Step Forward from Effersoe?
DEFF Research Database (Denmark)
Hornum, Mads; Feldt-Rasmussen, Bo
2017-01-01
Drug dosing in accordance with the renal function is a long-standing challenge to clinicians. For many years it has been evident that in many clinical situations there is no easy way to correctly dose any drug that is mainly cleared by the kidneys. Despite the development of many formulas...
Asymptotic Estimates of Gerber-Shiu Functions in the Renewal Risk Model with Exponential Claims
Institute of Scientific and Technical Information of China (English)
Li WEI
2012-01-01
This paper continues to study the asymptotic behavior of Gerber-Shiu expected discounted penalty functions in the renewal risk model as the initial capital becomes large. Under the assumption that the claim-size distribution is exponential, we establish an explicit asymptotic formula. Some straightforward consequences of this formula match existing results in the field.
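A concrete special case of a Gerber-Shiu function is the ruin probability (unit penalty, zero discounting). For the classical Cramér-Lundberg model with Exp(beta) claims, Poisson rate lam and premium rate c > lam/beta, the well-known closed form decays exponentially in the initial capital, matching the asymptotic regime studied here. The parameter values below are illustrative.

```python
import numpy as np

# Classical result: psi(u) = (lam / (c*beta)) * exp(-(beta - lam/c) * u)
# for the compound Poisson model with exponential claim sizes.
lam, beta, c = 1.0, 2.0, 0.75   # illustrative; net profit condition c > lam/beta

def psi(u):
    return (lam / (c * beta)) * np.exp(-(beta - lam / c) * u)

u = np.array([0.0, 1.0, 5.0, 10.0])
vals = psi(u)   # ruin probability at increasing initial capital
```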
Construction of New Electronic Density Functionals with Error Estimation Through Fitting
DEFF Research Database (Denmark)
Petzold, V.; Bligaard, T.; Jacobsen, K. W.
2012-01-01
We investigate the possibilities and limitations for the development of new electronic density functionals through large-scale fitting to databases of binding energies obtained experimentally or through high-quality calculations. We show that databases with up to a few hundred entries allow for u...
International Nuclear Information System (INIS)
Frid, I.A.; Berntstejn, M.I.; Evtyukhin, A.I.; Shul'ga, N.I.
1980-01-01
The functional state of the adrenal glands during surgical and combined treatment was examined in 38 radically operated patients with pulmonary cancer. Irradiation of lung cancer patients was found to stimulate adrenal gland activity, followed by a reduction of their functional reserve, manifested in a less marked increase of the catecholamine level and a decreased 11-OCS level in blood during surgical treatment.
Modelling of migration from multi-layers and functional barriers: Estimation of parameters
Dole, P.; Voulzatis, Y.; Vitrac, O.; Reynier, A.; Hankemeier, T.; Aucejo, S.; Feigenbaum, A.
2006-01-01
Functional barriers form parts of multi-layer packaging materials, which are deemed to protect the food from migration of a broad range of contaminants, e.g. those associated with reused packaging. Often, neither the presence nor the identity of the contaminants is known, so that safety assessment
Re-estimation of renal function with 99mTc-DTPA by the Gates' method
International Nuclear Information System (INIS)
Itoh, Kazuo; Arakawa, Masanori
1987-01-01
We analyzed the regression equation between the percent total renal uptake (%TRU) of 99mTc-DTPA and creatinine clearance (Ccr) by the Gates' method in 82 patients. 1) Regression equations between renal depth measured on CT scans and (weight in kg)/(height in cm) in Japanese subjects took the power-law form depth = a·(W/H)^b; for the right kidney, depth = 13.6361·(W/H)^0.6996 (n = 217, r = 0.86691), with further fits yielding exponents 0.7554 (n = 224, r = 0.8822), 0.8099 (n = 27, r = 0.9515) and 0.6997 (n = 21, r = 0.9213). 2) GFR = 13.15·(%TRU)^0.787 (n = 86, r = 0.820), with a further fit yielding exponent 0.753 (n = 40, r = 0.754). The Gates' method is very convenient for an immediate estimation of the glomerular filtration rate (GFR) after renal scintigraphy using 99mTc-DTPA. However, the correlation coefficient was not as high as in Gates' results. The equation reported by Gates is not necessarily applicable in routine studies; each facility that uses the Gates' method for estimating GFR should obtain its own corrected regression equation between %TRU and Ccr. (author)
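The corrected power-law regression quoted in the abstract can be applied directly once %TRU is known. The sketch below evaluates GFR = 13.15·(%TRU)^0.787 for a few uptake values; the coefficients are the facility-specific ones from this study and, as the authors stress, should be recalibrated at each site.

```python
import numpy as np

# Power-law GFR estimate from percent total renal uptake (%TRU),
# using the regression coefficients reported in this study.
def gfr_from_tru(pct_tru):
    return 13.15 * pct_tru ** 0.787

tru = np.array([2.0, 5.0, 8.0])   # example %TRU values (illustrative)
gfr = gfr_from_tru(tru)           # estimated GFR, increasing with uptake
```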
Estimation of the lower flammability limit of organic compounds as a function of temperature.
Rowley, J R; Rowley, R L; Wilding, W V
2011-02-15
A new method of estimating the lower flammability limit (LFL) of general organic compounds is presented. The LFL is predicted at 298 K for gases and the lower temperature limit for solids and liquids from structural contributions and the ideal gas heat of formation of the fuel. The average absolute deviation from more than 500 experimental data points is 10.7%. In a previous study, the widely used modified Burgess-Wheeler law was shown to underestimate the effect of temperature on the lower flammability limit when determined in a large-diameter vessel. An improved version of the modified Burgess-Wheeler law is presented that represents the temperature dependence of LFL data determined in large-diameter vessels more accurately. When the LFL is estimated at increased temperatures using a combination of this model and the proposed structural-contribution method, an average absolute deviation of 3.3% is returned when compared with 65 data points for 17 organic compounds determined in an ASHRAE-style apparatus.
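For reference, a commonly quoted linearized form of the modified Burgess-Wheeler temperature correction can be sketched as below. Both the coefficient value and the example LFL are assumptions for illustration; the paper's improved variant uses different coefficients that are not reproduced here.

```python
import numpy as np

# One commonly quoted linearized Burgess-Wheeler form:
# LFL(T) = LFL(298 K) * (1 - coeff * (T - 298)); the coefficient below is
# an assumed literature-style value, not this paper's fit.
def lfl_burgess_wheeler(lfl_298, temp_k, coeff=7.21e-4):
    return lfl_298 * (1.0 - coeff * (temp_k - 298.0))

lfl_25 = 5.0                            # vol %, assumed example fuel
temps = np.array([298.0, 373.0, 473.0])  # K
lfls = lfl_burgess_wheeler(lfl_25, temps)  # LFL narrows as T rises
```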
Haberman, Shelby J; Sinharay, Sandip; Chon, Kyong Hee
2013-07-01
Residual analysis (e.g. Hambleton & Swaminathan, Item response theory: principles and applications, Kluwer Academic, Boston, 1985; Hambleton, Swaminathan, & Rogers, Fundamentals of item response theory, Sage, Newbury Park, 1991) is a popular method to assess fit of item response theory (IRT) models. We suggest a form of residual analysis that may be applied to assess item fit for unidimensional IRT models. The residual analysis consists of a comparison of the maximum-likelihood estimate of the item characteristic curve with an alternative ratio estimate of the item characteristic curve. The large sample distribution of the residual is proved to be standardized normal when the IRT model fits the data. We compare the performance of our suggested residual to the standardized residual of Hambleton et al. (Fundamentals of item response theory, Sage, Newbury Park, 1991) in a detailed simulation study. We then calculate our suggested residuals using data from an operational test. The residuals appear to be useful in assessing the item fit for unidimensional IRT models.
Estimation of Net Radiation in Three Different Plant Functional Types in Korea
International Nuclear Information System (INIS)
Kwon, H.J.
2009-01-01
Net radiation (Rn) is the major driving force for biophysical and biogeochemical processes in terrestrial ecosystems, and it is one of the most critical variables in both measurement and modeling. Despite its importance, only 10 weather stations conduct Rn measurements among the 544 stations operated by the Korea Meteorological Administration (KMA; KMA, 2008). The measurement of incoming shortwave radiation (Rs↓) is, however, conducted at 22 stations, while that of sunshine duration is conducted at all the manned stations. In this context, the recent research on estimating Rn using Rs↓ in the Korean peninsula by Kwon (2009) is of great worth. The author used linear regression and radiation balance methods. We generally agree with the author that, in terms of simplicity and practicality, both methods show reliable applicability for estimating Rn. We noted, however, that the author's experimental method and analysis need some clarification and improvement, addressed from the following perspectives: (1) the use of daily integrated data for regression, (2) the use of measured albedo, (3) the use of linear coefficients for whole-year data, (4) methodological improvement, (5) the use of sunshine duration, and (6) the error assessment. (author)
Schneider, Hauke; Huynh, Thien J; Demchuk, Andrew M; Dowlatshahi, Dar; Rodriguez-Luna, David; Silva, Yolanda; Aviv, Richard; Dzialowski, Imanuel
2018-06-01
The intracerebral hemorrhage (ICH) score is the most commonly used grading scale for stratifying functional outcome in patients with acute ICH. We sought to determine whether a combination of the ICH score and the computed tomographic angiography (CTA) spot sign may improve outcome prediction in the cohort of a prospective multicenter hemorrhage trial. Prospectively collected data from 241 patients from the observational PREDICT study (Prediction of Hematoma Growth and Outcome in Patients With Intracerebral Hemorrhage Using the CT-Angiography Spot Sign) were analyzed. Functional outcome at 3 months was dichotomized using the modified Rankin Scale (0-3 versus 4-6). Performance of (1) the ICH score and (2) the spot sign ICH score (a scoring scale combining ICH score and spot sign number) was tested. Multivariable analysis demonstrated that ICH score (odds ratio, 3.2; 95% confidence interval, 2.2-4.8) and spot sign number (n=1: odds ratio, 2.7; 95% confidence interval, 1.1-7.4; n>1: odds ratio, 3.8; 95% confidence interval, 1.2-17.1) were independently predictive of functional outcome at 3 months, with similar odds ratios. Prediction of functional outcome was not significantly different using the spot sign ICH score compared with the ICH score alone (spot sign ICH score area under the curve versus ICH score area under the curve: P = 0.14). In the PREDICT cohort, a prognostic score adding the CTA-based spot sign to the established ICH score did not improve functional outcome prediction compared with the ICH score.
Directory of Open Access Journals (Sweden)
Atta Ullah
2014-01-01
Full Text Available In the practical use of a stratified random sampling scheme, the investigator faces the problem of selecting a sample that maximizes the precision of a finite population mean under a cost constraint. The allocation of sample sizes becomes complicated when more than one characteristic is observed on each selected unit in a sample. In many real-life situations, a linear cost function of the sample size n_h is not a good approximation to the actual cost of a sample survey when the traveling cost between selected units in a stratum is significant. In this paper, the sample allocation problem in multivariate stratified random sampling with the proposed cost function is formulated as an integer nonlinear multiobjective mathematical programming problem. A solution procedure is proposed using an extended lexicographic goal programming approach. A numerical example is presented to illustrate the computational details and to compare the efficiency of the proposed compromise allocation.
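The baseline that this paper generalizes is the classical optimum (Neyman-type) allocation under a linear cost function, n_h proportional to W_h·S_h/sqrt(c_h). The sketch below implements only that baseline with illustrative numbers; the paper's contribution, handling a nonlinear travel-cost term via goal programming, is not reproduced here.

```python
import numpy as np

# Neyman-type optimum allocation under linear cost: allocate the total
# sample in proportion to W_h * S_h / sqrt(c_h) per stratum h.
W = np.array([0.5, 0.3, 0.2])      # stratum weights N_h/N (illustrative)
S = np.array([10.0, 20.0, 5.0])    # stratum standard deviations
c = np.array([4.0, 9.0, 1.0])      # per-unit sampling costs c_h
n_total = 300

alloc = W * S / np.sqrt(c)
n_h = np.round(n_total * alloc / alloc.sum()).astype(int)
```

Variable strata (high S_h) and cheap strata (low c_h) receive proportionally more sample, which is the intuition the multiobjective formulation preserves under the nonlinear cost.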
Spectral Velocity Estimation using the Autocorrelation Function and Sparse data Sequences
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt
2005-01-01
Ultrasound scanners can be used for displaying the distribution of velocities in blood vessels by finding the power spectrum of the received signal. It is desired to show a B-mode image for orientation, and data for this have to be acquired interleaved with the flow data. Techniques for maintaining … both the B-mode frame rate, and at the same time having the highest possible $f_{prf}$, limited only by the depth of investigation, are thus of great interest. The power spectrum can be calculated from the Fourier transform of the autocorrelation function $R_r(k)$. The lag $k$ corresponds … of the sequence. The audio signal has also been synthesized from the autocorrelation data by passing white Gaussian noise through a filter designed from the power spectrum of the autocorrelation function. The results show that both the full velocity range can be maintained at the same time as a B-mode image …
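The core relation used above, that the power spectrum is the Fourier transform of the autocorrelation function (Wiener-Khinchin), can be sketched with a synthetic signal. The sampling and signal frequency below are arbitrary; this is not the scanner's data path.

```python
import numpy as np

# Wiener-Khinchin sketch: estimate the autocorrelation R(k), then take its
# Fourier transform to obtain a power-spectrum estimate whose peak sits at
# the signal's frequency.
rng = np.random.default_rng(5)
n = 512
t = np.arange(n)
sig = np.sin(2 * np.pi * 0.1 * t) + 0.1 * rng.normal(size=n)

# Biased autocorrelation estimate at lags 0..n-1
r = np.correlate(sig, sig, mode="full")[n - 1:] / n
spectrum = np.abs(np.fft.rfft(r))
freqs = np.fft.rfftfreq(r.size)        # cycles per sample

peak = freqs[np.argmax(spectrum)]      # should land near 0.1
```

In the sparse-sequence setting of the paper, only some lags k of R(k) are available, but the same transform still yields a usable velocity spectrum.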
Goel, R.; Kofman, I.; DeDios, Y. E.; Jeevarajan, J.; Stepanyan, V.; Nair, M.; Congdon, S.; Fregia, M.; Peters, B.; Cohen, H.;
2015-01-01
Sensorimotor changes such as postural and gait instabilities can affect the functional performance of astronauts when they transition across different gravity environments. We are developing a method, based on stochastic resonance (SR), to enhance information transfer by applying non-zero levels of external noise on the vestibular system (vestibular stochastic resonance, VSR). The goal of this project was to determine optimal levels of stimulation for SR applications by using a defined vestibular threshold of motion detection.
Estimating Diversifying Selection and Functional Constraint in the Presence of Recombination
Wilson, Daniel J.; McVean, Gilean
2006-01-01
Models of molecular evolution that incorporate the ratio of nonsynonymous to synonymous polymorphism (dN/dS ratio) as a parameter can be used to identify sites that are under diversifying selection or functional constraint in a sample of gene sequences. However, when there has been recombination in the evolutionary history of the sequences, reconstructing a single phylogenetic tree is not appropriate, and inference based on a single tree can give misleading results. In the presence of high le...
Estimation of Input Function from Dynamic PET Brain Data Using Bayesian Blind Source Separation
Czech Academy of Sciences Publication Activity Database
Tichý, Ondřej; Šmídl, Václav
2015-01-01
Roč. 12, č. 4 (2015), s. 1273-1287 ISSN 1820-0214 R&D Projects: GA ČR GA13-29225S Institutional support: RVO:67985556 Keywords : blind source separation * Variational Bayes method * dynamic PET * input function * deconvolution Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.623, year: 2015 http://library.utia.cas.cz/separaty/2015/AS/tichy-0450509.pdf
Estimation of placenta function using T2* measurements during hyper- and normoxia
DEFF Research Database (Denmark)
Peters, David Alberg; Sørensen, Anne Nødgård; Fründ, Ernst Torben
2012-01-01
MR imaging is becoming widely used for prenatal diagnosis1. Conventional prenatal imaging focuses on structural changes, but recently several groups have begun to investigate changes in the MR signal during oxygen breathing. The main focus has been changes in the BOLD signal2,3 in organs … such as liver, brain, lungs and heart. In this study we investigate the feasibility of pre- and post-oxygen T2* measurements to evaluate the function of the placenta....
Directory of Open Access Journals (Sweden)
Bin Chen
Full Text Available To establish a simple two-compartment model for glomerular filtration rate (GFR) and renal plasma flow (RPF) estimation by dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). A total of eight New Zealand white rabbits were included in DCE-MRI. The two-compartment model was modified with the impulse residue function in this study. First, the reliability of GFR measurement with the proposed model was compared with other published models in Monte Carlo simulation at different noise levels. Then, functional parameters were estimated in six healthy rabbits to test the feasibility of the new model. Moreover, in order to investigate the validity of its GFR estimation, two rabbits underwent an acute ischemia surgical procedure in a unilateral kidney before DCE-MRI, and pixel-wise measurements were implemented to detect the cortical GFR alterations between normal and abnormal kidneys. The lowest variability of GFR and RPF measurements was found with the proposed model in the comparison. Mean GFR was 3.03±1.1 ml/min and mean RPF was 2.64±0.5 ml/g/min in normal animals, which were in good agreement with published values. Moreover, a large GFR decline was found in dysfunctional kidneys compared to the contralateral control group. The results of our study demonstrate that measurement of renal kinetic parameters based on the proposed model is feasible and that it has the ability to discriminate GFR changes in healthy and diseased kidneys.
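The general shape of such a compartment model with an impulse residue function can be sketched as a convolution fit. This is a generic mono-exponential residue-function sketch under assumed parameter values, not the paper's modified two-compartment model or its rabbit data.

```python
import numpy as np
from scipy.optimize import curve_fit

def tissue_curve(t, K1, k2, aif, dt):
    """Tissue concentration = AIF convolved with an impulse residue
    function K1 * exp(-k2 * t); K1 plays the role of a flow/filtration
    parameter (hypothetical model, not the paper's)."""
    irf = K1 * np.exp(-k2 * t)
    return np.convolve(aif, irf)[:len(t)] * dt

dt = 0.5
t = np.arange(0, 60, dt)
aif = 5.0 * t * np.exp(-t / 4.0)          # synthetic gamma-variate input
true = tissue_curve(t, 0.25, 0.08, aif, dt)
noisy = true + np.random.default_rng(1).normal(0, 0.05, len(t))

# Recover the kinetic parameters by nonlinear least squares
popt, _ = curve_fit(lambda tt, K1, k2: tissue_curve(tt, K1, k2, aif, dt),
                    t, noisy, p0=[0.1, 0.05])
```

Pixel-wise application of such a fit is what allows cortical GFR-like maps to be compared between normal and ischemic kidneys.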
Yeo, B T Thomas; Krienen, Fenna M; Chee, Michael W L; Buckner, Randy L
2014-03-01
The organization of the human cerebral cortex has recently been explored using techniques for parcellating the cortex into distinct functionally coupled networks. The divergent and convergent nature of cortico-cortical anatomic connections suggests the need to consider the possibility of regions belonging to multiple networks and hierarchies among networks. Here we applied the Latent Dirichlet Allocation (LDA) model and spatial independent component analysis (ICA) to solve for functionally coupled cerebral networks without assuming that cortical regions belong to a single network. Data analyzed included 1000 subjects from the Brain Genomics Superstruct Project (GSP) and 12 high quality individual subjects from the Human Connectome Project (HCP). The organization of the cerebral cortex was similar regardless of whether a winner-take-all approach or the more relaxed constraints of LDA (or ICA) were imposed. This suggests that large-scale networks may function as partially isolated modules. Several notable interactions among networks were uncovered by the LDA analysis. Many association regions belong to at least two networks, while somatomotor and early visual cortices are especially isolated. As examples of interaction, the precuneus, lateral temporal cortex, medial prefrontal cortex and posterior parietal cortex participate in multiple paralimbic networks that together comprise subsystems of the default network. In addition, regions at or near the frontal eye field and human lateral intraparietal area homologue participate in multiple hierarchically organized networks. These observations were replicated in both datasets and could be detected (and replicated) in individual subjects from the HCP. © 2013.
Age-independent anti-Müllerian hormone (AMH) standard deviation scores to estimate ovarian function.
Helden, Josef van; Weiskirchen, Ralf
2017-06-01
To determine single-year age-specific anti-Müllerian hormone (AMH) standard deviation scores (SDS) for women, associated with normal ovarian function and with different ovarian disorders resulting in sub- or infertility. Determination of single-year median and mean AMH values with standard deviations (SD), and calculation of age-independent cut-off SDS for the discrimination between normal ovarian function and ovarian disorders. Single-year-specific median, mean, and SD values have been evaluated for the Beckman Access AMH immunoassay. While the decrease of both median and mean AMH values is strongly correlated with increasing age, the calculated SDS values were shown to be age independent, differentiating between normal ovarian function (measured as occurred ovulation with sufficient luteal activity) and either hyperandrogenemic cycle disorders or anovulation associated with high AMH values, or reduced ovarian activity or insufficiency associated with low AMH, respectively. These results will be helpful for the treatment of patients and the evaluation of the different reproductive options. Copyright © 2017 Elsevier B.V. All rights reserved.
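The age-independent score described above is a straightforward standardization. A minimal sketch, with the caveat that the reference table below is illustrative and not the Beckman Access single-year values from the paper:

```python
# Illustrative single-year reference table: age -> (mean, SD) in ng/ml.
# These numbers are invented placeholders, not the published values.
ref = {30: (3.2, 2.1), 31: (3.0, 2.0), 32: (2.8, 1.9)}

def amh_sds(amh_ng_ml, age_years):
    """Age-independent AMH standard deviation score:
    SDS = (AMH - age mean) / age SD."""
    mean, sd = ref[age_years]
    return (amh_ng_ml - mean) / sd

score = amh_sds(5.2, 31)   # 1.1 SDs above the mean for a 31-year-old
```

Because the score is expressed in SD units relative to the patient's own age group, a single cut-off can be applied across ages, which is the paper's central point.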
International Nuclear Information System (INIS)
Ding Shouguo; Xie Yu; Yang Ping; Weng Fuzhong; Liu Quanhua; Baum, Bryan; Hu Yongxiang
2009-01-01
The bulk-scattering properties of dust aerosols and clouds are computed for the community radiative transfer model (CRTM), which is a flagship effort of the Joint Center for Satellite Data Assimilation (JCSDA). The delta-fit method is employed to truncate the forward peaks of the scattering phase functions and to compute the Legendre expansion coefficients for reconstructing the truncated phase function. Use of more terms in the expansion gives a more accurate reconstruction of the phase function, but the issue remains as to how many terms are necessary for different applications. To explore this issue further, the bidirectional reflectances associated with dust aerosols, water clouds, and ice clouds are simulated with various numbers of Legendre expansion terms. To keep relative numerical errors smaller than 5%, the present analyses indicate that, in the visible spectrum, 16 Legendre polynomials should be used for dust aerosols, while 32 Legendre expansion terms should be used for both water and ice clouds. In the infrared spectrum, the brightness temperatures at the top of the atmosphere are computed by using the scattering properties of dust aerosols, water clouds, and ice clouds. Although small differences of brightness temperatures compared with the counterparts computed with 4, 8, and 128 expansion terms are observed at large viewing angles for each layer, it is shown that 4 terms of Legendre polynomials are sufficient in radiative transfer computations at infrared wavelengths for practical applications.
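The trade-off between the number of Legendre terms and reconstruction accuracy can be demonstrated on a stand-in phase function. The Henyey-Greenstein function is used here because its Legendre coefficients are known in closed form, c_l = (2l+1)g^l; the asymmetry parameter is an assumption, and this is not the delta-fit procedure itself.

```python
import numpy as np
from numpy.polynomial import legendre

g = 0.85                               # assumed asymmetry parameter
mu = np.linspace(-1.0, 1.0, 501)       # cosine of the scattering angle
# Henyey-Greenstein phase function (strong forward peak at mu = 1)
hg = (1 - g**2) / (1 + g**2 - 2 * g * mu) ** 1.5

def reconstruct(n_terms):
    """Rebuild the phase function from its first n_terms Legendre coefficients."""
    coeffs = np.array([(2 * l + 1) * g**l for l in range(n_terms)])
    return legendre.legval(mu, coeffs)

# Relative max error for 16 vs 64 expansion terms
err16 = np.max(np.abs(reconstruct(16) - hg)) / np.max(hg)
err64 = np.max(np.abs(reconstruct(64) - hg)) / np.max(hg)
```

The sharper the forward peak, the more terms are needed, which is why the abstract's recommended term counts differ between dust aerosols and clouds.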
Estimated glomerular filtration rate function in patients with and without metabolic syndrome
Directory of Open Access Journals (Sweden)
María E Lizardo
2016-06-01
Full Text Available Introduction: Metabolic syndrome (MS) is an independent risk factor that affects the development of chronic kidney disease, so the glomerular filtration rate (GFR) was evaluated as an indicator of glomerular function in patients with and without MS who attended the outpatient clinic "Los Grillitos", sector Caña de Azúcar. Materials and Methods: A comparative, correlational, cross-sectional study was conducted in a non-probability convenience sample consisting of 60 patients with MS diagnosed according to the ATP III Panel criteria and 60 apparently healthy individuals, in whom the GFR was determined by the Cockcroft-Gault equation, along with the clinical and biochemical parameters for the diagnosis of MS. Results: Of the total patients evaluated, 37 (30.7%) showed alterations that placed them in grades G2 and G3 of the CKD risk stratification system; of these, 18 and 19 corresponded to patients with and without MS, respectively. Glomerular hyperfiltration (>120 ml/min) was found in both groups: 28 (46.7%) and 24 (40%) cases among patients with and without MS, respectively. Glomerular function was strongly correlated with abdominal obesity and high blood pressure. As for the number of criteria and its relationship to the level of kidney damage present, no firm trend of the latter increasing with the former was observed (p=0.385). Conclusion: The change in glomerular function is not directly related to MS itself but to its components, specifically abdominal obesity and hypertension.
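The Cockcroft-Gault estimate used in this study is a simple closed-form formula; a minimal implementation (parameter names are my own):

```python
def cockcroft_gault(age_years, weight_kg, serum_creatinine_mg_dl, female=False):
    """Estimated creatinine clearance (ml/min) by Cockcroft-Gault:
    CrCl = (140 - age) * weight / (72 * serum creatinine),
    multiplied by 0.85 for women."""
    crcl = ((140 - age_years) * weight_kg) / (72.0 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

# A 40-year-old, 72 kg, creatinine 1.0 mg/dl -> 100 ml/min (85 if female)
crcl_m = cockcroft_gault(40, 72, 1.0)
crcl_f = cockcroft_gault(40, 72, 1.0, female=True)
```

Values above roughly 120 ml/min correspond to the glomerular hyperfiltration category reported in the Results.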
Antarctic ice sheet thickness estimation based on P-receiver function and waveform inversion
Yan, P.; Li, F.; LI, Z.; Li, J.; Yang, Y.; Hao, W.
2016-12-01
Antarctic ice sheet thickness is a key parameter and boundary condition for ice sheet model construction, and is of great significance for glacial isostatic adjustment, ice sheet mass balance, and global change studies. Ice thickness acquired using the seismological receiver function method can complement and cross-validate results obtained by the radar echo sounding method. In this paper, P-receiver functions (PRFs) are extracted for stations deployed on the Antarctic ice sheet, and the Vp/Vs ratio and ice thickness are obtained using H-Kappa stacking. Comparisons are made between the Bedmap2 dataset and the ice thickness from PRFs; most of the absolute differences are less than 200 meters, and only a few reach 600 meters. Taking into account the density of the Bedmap2 survey lines and the uncertainty of radio echo sounding, as well as the inherent complexity of the internal ice structure beneath some stations, the ice thickness obtained from the receiver function method is reliable. However, a limitation exists when using the H-Kappa stacking method for stations where sediment is squeezed between the ice and the bedrock layer. To better verify the PRF results, a global optimization method, the Neighbourhood Algorithm (NA), and spline interpolation are used to model PRFs, assuming an isotropic layered ice sheet with depth-varying densities and velocities beneath the stations. The velocity structure and ice sheet thickness are then obtained through a nonlinear search by optimally fitting the real and theoretical PRFs. The ice sheet thicknesses obtained beneath the stations agree well with the earlier H-Kappa method, but further detailed study is needed to constrain the internal ice velocity structure.
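The H-Kappa stacking step can be sketched as a grid search over layer thickness H and Vp/Vs ratio kappa, stacking receiver-function amplitudes at the predicted Ps, PpPs, and PpSs+PsPs arrival times. The ice velocity, ray parameter, weights, and synthetic pulses below are assumptions for illustration, not the paper's data.

```python
import numpy as np

vp, p = 3.87, 0.06     # assumed ice P velocity (km/s) and ray parameter (s/km)
dt = 0.01              # sample interval (s)

def delay_times(h, kappa):
    """Ps, PpPs, and PpSs+PsPs delays for a single layer of thickness h."""
    qs = np.sqrt((kappa / vp) ** 2 - p ** 2)
    qp = np.sqrt((1.0 / vp) ** 2 - p ** 2)
    return h * (qs - qp), h * (qs + qp), 2.0 * h * qs

def h_kappa_stack(rf, h_grid, k_grid, w=(0.6, 0.3, 0.1)):
    """Return the (H, kappa) pair maximizing the weighted stack; the
    third phase enters with a minus sign, as its pulse is negative."""
    best, best_hk = -np.inf, None
    for h in h_grid:
        for kappa in k_grid:
            t1, t2, t3 = delay_times(h, kappa)
            s = (w[0] * rf[int(round(t1 / dt))]
                 + w[1] * rf[int(round(t2 / dt))]
                 - w[2] * rf[int(round(t3 / dt))])
            if s > best:
                best, best_hk = s, (h, kappa)
    return best_hk

# Synthetic receiver function for a 2.0 km ice layer with Vp/Vs = 1.95
t = np.arange(0, 10, dt)
rf = np.zeros_like(t)
for amp, tt in zip((1.0, 0.5, -0.3), delay_times(2.0, 1.95)):
    rf += amp * np.exp(-((t - tt) / 0.1) ** 2)

h_est, k_est = h_kappa_stack(rf, np.arange(1.0, 3.001, 0.05),
                             np.arange(1.7, 2.201, 0.01))
```

In practice many events are stacked, and the trade-off ridge between H and kappa is what the multiple phases (and the paper's NA waveform inversion) help resolve.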
Energy Technology Data Exchange (ETDEWEB)
Lee, Jong Kyeom; Kim, Tae Yun; Kim, Hyun Su; Chai, Jang Bom; Lee, Jin Woo [Div. of Mechanical Engineering, Ajou University, Suwon (Korea, Republic of)
2016-10-15
This paper presents an advanced estimation method for obtaining the probability density functions of a damage parameter for valve leakage detection in a reciprocating pump. The estimation method is based on a comparison of model data which are simulated by using a mathematical model, and experimental data which are measured on the inside and outside of the reciprocating pump in operation. The mathematical model, which is simplified and extended on the basis of previous models, describes not only the normal state of the pump, but also its abnormal state caused by valve leakage. The pressure in the cylinder is expressed as a function of the crankshaft angle, and an additional volume flow rate due to the valve leakage is quantified by a damage parameter in the mathematical model. The change in the cylinder pressure profiles due to the suction valve leakage is noticeable in the compression and expansion modes of the pump. The damage parameter value over 300 cycles is calculated in two ways, considering advance or delay in the opening and closing angles of the discharge valves. The probability density functions of the damage parameter are compared for diagnosis and prognosis on the basis of the probabilistic features of valve leakage.
International Nuclear Information System (INIS)
Lee, Jong Kyeom; Kim, Tae Yun; Kim, Hyun Su; Chai, Jang Bom; Lee, Jin Woo
2016-01-01
This paper presents an advanced estimation method for obtaining the probability density functions of a damage parameter for valve leakage detection in a reciprocating pump. The estimation method is based on a comparison of model data which are simulated by using a mathematical model, and experimental data which are measured on the inside and outside of the reciprocating pump in operation. The mathematical model, which is simplified and extended on the basis of previous models, describes not only the normal state of the pump, but also its abnormal state caused by valve leakage. The pressure in the cylinder is expressed as a function of the crankshaft angle, and an additional volume flow rate due to the valve leakage is quantified by a damage parameter in the mathematical model. The change in the cylinder pressure profiles due to the suction valve leakage is noticeable in the compression and expansion modes of the pump. The damage parameter value over 300 cycles is calculated in two ways, considering advance or delay in the opening and closing angles of the discharge valves. The probability density functions of the damage parameter are compared for diagnosis and prognosis on the basis of the probabilistic features of valve leakage.
Directory of Open Access Journals (Sweden)
Jong Kyeom Lee
2016-10-01
Full Text Available This paper presents an advanced estimation method for obtaining the probability density functions of a damage parameter for valve leakage detection in a reciprocating pump. The estimation method is based on a comparison of model data which are simulated by using a mathematical model, and experimental data which are measured on the inside and outside of the reciprocating pump in operation. The mathematical model, which is simplified and extended on the basis of previous models, describes not only the normal state of the pump, but also its abnormal state caused by valve leakage. The pressure in the cylinder is expressed as a function of the crankshaft angle, and an additional volume flow rate due to the valve leakage is quantified by a damage parameter in the mathematical model. The change in the cylinder pressure profiles due to the suction valve leakage is noticeable in the compression and expansion modes of the pump. The damage parameter value over 300 cycles is calculated in two ways, considering advance or delay in the opening and closing angles of the discharge valves. The probability density functions of the damage parameter are compared for diagnosis and prognosis on the basis of the probabilistic features of valve leakage.
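The final step described above — comparing probability density functions of a damage parameter across many cycles — can be sketched with kernel density estimates. The damage-parameter samples below are synthetic placeholders, not the paper's pump measurements.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(42)
# Hypothetical damage-parameter values over 300 cycles for a healthy
# valve and a leaking valve (units and magnitudes are assumed)
healthy = rng.normal(0.02, 0.01, 300)
leaking = rng.normal(0.10, 0.02, 300)

kde_h, kde_l = gaussian_kde(healthy), gaussian_kde(leaking)
x = np.linspace(-0.05, 0.25, 600)

# Overlap coefficient: shared area under the two PDFs.
# Near 0 means the states are cleanly separable for diagnosis.
overlap = float(np.sum(np.minimum(kde_h(x), kde_l(x))) * (x[1] - x[0]))
```

A small overlap between the healthy and leaking PDFs is what makes the damage parameter useful for both diagnosis and prognosis.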
International Nuclear Information System (INIS)
Belisheva, N.K.; Popov, A.N.; Petukhova, N.V.; Pavlova, L.P.; Osipov, K.S.; Tkachenko, S.Eh.; Baranova, T.I.
1995-01-01
A comparison of the functional dynamics of the human brain with reference to qualitative and quantitative characteristics of local geomagnetic field (GMF) variations was conducted. Steady and unsteady states of the human brain can be determined: by geomagnetic disturbances before the observation period; by the structure and doses of GMF variations; and by different combinations of qualitative and quantitative characteristics of GMF variations. A decrease of the optimal GMF activity level and the appearance of aperiodic GMF disturbances can be a cause of an unsteady brain state. 18 refs.; 3 figs
Estimation of cluster stability using the theory of electron density functional
International Nuclear Information System (INIS)
Borisov, Yu.A.
1985-01-01
Prospects of using simple versions of the electron density functional for studying the energy characteristics of cluster compounds were discussed. The following types of cluster compounds were considered: clusters of the elements Cs, Be, B, Sr, Cd, Sc, In, V, Tl, and I as an intermediate form between molecule and solid body; metalloorganic Mo, W, Tc, Re, and Rn clusters; and elementoorganic compounds of the nido-cluster type. The problem concerning changes in the binding energy of homoatomic clusters depending on their size and three-dimensional structure was analysed.
6-Electron exchange function as a simple estimator of aromaticity in large polyaromatic hydrocarbons
Mandado, Marcos; Mosquera, Ricardo A.
2009-02-01
The 6-electron exchange function (6-EEF) is defined and calculated for a series of large polyaromatic hydrocarbons (PAHs). It is shown that the 6-EEF, computed at selected points in space, is able to reproduce in PAHs the same relative values as the multicenter electron delocalization indices with an affordable computational cost and without using any definition of the atom in the molecule. Calculations for a series of D6h PAHs ranging from C6H6 to C216H36 are performed. The results can be extrapolated to even larger PAHs and allow predicting the behaviour of a benzene ring in an infinite sheet of graphite.
Directory of Open Access Journals (Sweden)
Xiaoling Chen
2018-05-01
Full Text Available Recently, functional corticomuscular coupling (FCMC) between the cortex and the contralateral muscle has been used to evaluate motor function after stroke. As we know, the motor-control system is a closed-loop system regulated by complex self-regulating and interactive mechanisms that operate on multiple spatial and temporal scales. Multiscale analysis can represent this inherent complexity. However, previous studies of FCMC in stroke patients mainly focused on the coupling strength at a single time scale, without considering the changes of the inherently directional and multiscale properties of sensorimotor systems. In this paper, a multiscale causal model, named multiscale transfer entropy, was used to quantify the functional connection between the electroencephalogram over the scalp and the electromyogram from the flexor digitorum superficialis (FDS), recorded simultaneously during a steady-state grip task in eight stroke patients and eight healthy controls. Our results showed that healthy controls exhibited higher coupling when the scale reached up to about 12, and that the FCMC in the descending direction was stronger at certain scales (1, 7, 12, and 14) than in the ascending direction. Further analysis showed that these multi-time-scale characteristics were mainly concentrated in the beta1 band at scale 11 and the beta2 band at scales 9, 11, 13, and 15. Compared to controls, the multiscale properties of the FCMC in stroke patients were changed: the strengths in both directions were reduced, and the gaps between the descending and ascending directions disappeared over all scales. Further analysis in specific bands showed that the reduced FCMC was mainly concentrated in the alpha2 band at higher scales and in the beta1 and beta2 bands across almost the entire scale range. This multiscale study confirms that the FCMC between the brain and muscles exhibits complex and directional characteristics, and that these characteristics of the functional connection in stroke are destroyed by the structural lesion in the
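Multiscale transfer entropy can be sketched in two steps: coarse-grain both signals at each scale, then estimate the directed dependence TE(X→Y) from histogram probabilities. This is a generic binned-estimator sketch on synthetic coupled signals, not the paper's EEG-EMG pipeline; the bin count and coupling model are assumptions.

```python
import numpy as np

def coarse_grain(x, scale):
    """Non-overlapping moving average, the standard multiscale step."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def transfer_entropy(x, y, bins=4):
    """Binned TE(X -> Y) = sum p(y1,y0,x0) log[ p(y1,y0,x0) p(y0)
    / (p(y1,y0) p(y0,x0)) ], in nats, with quartile binning."""
    xs = np.digitize(x, np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1]))
    ys = np.digitize(y, np.quantile(y, np.linspace(0, 1, bins + 1)[1:-1]))
    trip = np.stack([ys[1:], ys[:-1], xs[:-1]], axis=1)
    p, _ = np.histogramdd(trip, bins=(bins, bins, bins))
    p /= p.sum()
    py1y0 = p.sum(axis=2)            # p(y_t, y_{t-1})
    py0x0 = p.sum(axis=0)            # p(y_{t-1}, x_{t-1})
    py0 = p.sum(axis=(0, 2))         # p(y_{t-1})
    te = 0.0
    for i in range(bins):
        for j in range(bins):
            for k in range(bins):
                if p[i, j, k] > 0:
                    te += p[i, j, k] * np.log(
                        p[i, j, k] * py0[j] / (py1y0[i, j] * py0x0[j, k]))
    return te

rng = np.random.default_rng(0)
x = rng.normal(size=4000)
y = np.roll(x, 1) + 0.5 * rng.normal(size=4000)   # y is driven by past x
te_xy = transfer_entropy(coarse_grain(x, 2), coarse_grain(y, 2))
te_yx = transfer_entropy(coarse_grain(y, 2), coarse_grain(x, 2))
```

The asymmetry te_xy > te_yx is what distinguishes the descending and ascending coupling directions in the study.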
Estimation of CN Parameter for Small Agricultural Watersheds Using Asymptotic Functions
Directory of Open Access Journals (Sweden)
Tomasz Kowalik
2015-03-01
Full Text Available This paper investigates the possibility of using asymptotic functions to determine the value of the curve number (CN) parameter as a function of rainfall in small agricultural watersheds. It also compares the actually calculated CN with the values provided in the Soil Conservation Service (SCS) National Engineering Handbook Section 4: Hydrology (NEH-4) and Technical Release 20 (TR-20). The analysis showed that the empirical CN values presented in the National Engineering Handbook tables differed from the actually observed values. Calculations revealed a strong correlation between the observed CN and precipitation (P). In three of the analyzed watersheds, a typical pattern of the observed CN stabilizing during abundant precipitation was observed. It was found that Model 2, based on a kinetics equation, most effectively described the P-CN relationship. In most cases, the observed CN in the investigated watersheds was similar to the empirical CN corresponding to the average moisture conditions set out by NEH-4. Model 2 also provided the greatest stability of CN at 90% sampled event rainfall.
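The asymptotic behavior described above — observed CN declining with event rainfall and stabilizing for large storms — is commonly fit with an exponential form. The sketch below uses the standard Hawkins-style model CN(P) = CN_inf + (100 − CN_inf)·exp(−kP) as a stand-in (not necessarily the paper's "Model 2"), on synthetic data.

```python
import numpy as np
from scipy.optimize import curve_fit

def cn_asymptotic(p_mm, cn_inf, k):
    """Asymptotic CN-rainfall model: CN falls from 100 toward cn_inf."""
    return cn_inf + (100.0 - cn_inf) * np.exp(-k * p_mm)

# Synthetic observed CN values for a set of event rainfalls (mm)
p = np.array([5, 10, 20, 30, 50, 80, 120], dtype=float)
cn_obs = cn_asymptotic(p, 70.0, 0.04) \
         + np.random.default_rng(3).normal(0, 0.5, p.size)

# Fit the stable (large-storm) CN and the decay rate
(cn_inf, k), _ = curve_fit(cn_asymptotic, p, cn_obs, p0=[60.0, 0.01])
```

The fitted cn_inf is the stabilized CN that the abstract describes appearing during abundant precipitation; comparing it against the NEH-4 tabulated CN is the paper's core exercise.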
Radionuclide estimation of kidney function in patients with acute renal failure
International Nuclear Information System (INIS)
Ilic, S.; Bogicevic, M.; Stefanovic, V.
1989-01-01
In order to evaluate kidney function, radionuclide studies were made in 51 patients in different phases of acute renal failure (ARF) within a period of six months from the beginning of the underlying disease. Low 99mTc-DTPA clearance values indicated a marked reduction of the glomerular filtration rate in the oligoanuric phase, with an improvement but not normalization during the diuretic and recovery phases. A decrease of the effective renal plasma flow was also found in 131I-hippurate studies. In the oligoanuric phase the glomerular filtration rate was more severely impaired than the renal plasma flow, while in the recovery phase this difference disappeared. In the oligoanuric phase of ARF the 99mTc-DTPA dynamic curves were flattened and those of 131I-hippurate showed an accumulation type; in the diuretic phase both radionuclides showed a hypofunction type; in the recovery phase a minority of them were completely normalized. It is suggested that radionuclide methods should be used to evaluate and follow up kidney function in patients in different phases of ARF. (orig.) [de