WorldWideScience

Sample records for mean-variance smoothing method

  1. A nonparametric mean-variance smoothing method to assess Arabidopsis cold stress transcriptional regulator CBF2 overexpression microarray data.

    Science.gov (United States)

    Hu, Pingsha; Maiti, Tapabrata

    2011-01-01

    Microarrays are a powerful tool for genome-wide gene expression analysis. In microarray expression data, the mean and variance often exhibit a systematic relationship. We present a non-parametric mean-variance smoothing method (NPMVS) to analyze differentially expressed genes. In this method, a nonlinear smoothing curve is fitted to estimate the relationship between mean and variance. Inference is then made upon shrinkage estimation of posterior means, assuming the variances are known. Different methods were applied to simulated datasets in which a variety of mean-variance relationships were imposed. The simulation study showed that NPMVS outperformed two other popular shrinkage estimation methods under some mean-variance relationships and was competitive with them under the others. A real biological dataset, in which a cold stress transcription factor gene, CBF2, was overexpressed, was also analyzed with the three methods. Gene ontology and cis-element analysis showed that NPMVS identified more cold and stress responsive genes than the other two methods did. The good performance of NPMVS is mainly due to its shrinkage estimation for both means and variances. In addition, NPMVS exploits a non-parametric regression between mean and variance, instead of assuming a specific parametric relationship. The source code, written in R, is available from the authors on request.
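
    A minimal sketch of the general idea (not the authors' R implementation, which is available on request): fit a LOWESS curve to the gene-wise (mean, variance) cloud and shrink each raw variance toward the fitted trend. The shrinkage weight w, the function names and the toy data are illustrative assumptions.

      # Sketch: nonparametric mean-variance smoothing, in the spirit of NPMVS.
      # Not the authors' code; `w` and all data are illustrative assumptions.
      import numpy as np
      from statsmodels.nonparametric.smoothers_lowess import lowess

      def smoothed_variances(expr, w=0.5):
          """expr: genes x replicates matrix of log-expression values."""
          means = expr.mean(axis=1)
          variances = expr.var(axis=1, ddof=1)
          # LOWESS fit of variance against mean, evaluated at each gene's mean
          fit = lowess(variances, means, frac=0.3, return_sorted=False)
          # Shrink each raw gene-wise variance toward the smooth trend
          return w * variances + (1 - w) * np.clip(fit, 1e-12, None)

      rng = np.random.default_rng(0)
      expr = rng.normal(8.0, 1.0, size=(500, 4))   # toy data: 500 genes, 4 replicates
      print(smoothed_variances(expr)[:5])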

  2. Mean-Variance-CVaR Model of Multiportfolio Optimization via Linear Weighted Sum Method

    Directory of Open Access Journals (Sweden)

    Younes Elahi

    2014-01-01

    We propose a new approach to optimizing portfolios with the mean-variance-CVaR (MVC) model. Although several studies have examined the optimal MVC portfolio model, the linear weighted sum method (LWSM) had not been applied in this area. The aim of this paper is to investigate the optimal portfolio model based on MVC via LWSM. With this method, the solution of the MVC portfolio model as a multiobjective problem is presented. In the data analysis section, the approach is investigated for an investment in two assets. An MVC model of the multiportfolio was implemented in MATLAB and tested on the presented problem. It is shown that using three objective functions helps investors to manage their portfolios better, minimizing the risk and maximizing the return of the portfolio. The main goal of this study is to modify the current models and simplify them by using LWSM to obtain better results.
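
    A hedged sketch of the linear weighted sum idea named above: the three MVC objectives are collapsed into one scalar objective with nonnegative weights and minimized over long-only portfolio weights. The scenario matrix, the weights and the historical-CVaR estimator are illustrative assumptions, not the paper's formulation; the CVaR term also makes the objective non-smooth, so a gradient-based solver is only a rough stand-in.

      # Sketch: linear weighted sum method (LWSM) for a mean-variance-CVaR portfolio.
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(1)
      R = rng.normal(0.001, 0.02, size=(1000, 2))   # 1000 return scenarios, 2 assets

      def cvar(losses, alpha=0.95):
          var = np.quantile(losses, alpha)          # historical VaR of the losses
          return losses[losses >= var].mean()       # mean loss beyond VaR

      def objective(x, w=(1.0, 1.0, 1.0), alpha=0.95):
          port = R @ x
          w1, w2, w3 = w
          # maximize mean -> minimize its negative; add variance and CVaR terms
          return -w1 * port.mean() + w2 * port.var() + w3 * cvar(-port, alpha)

      cons = ({'type': 'eq', 'fun': lambda x: x.sum() - 1.0},)
      res = minimize(objective, x0=np.array([0.5, 0.5]),
                     bounds=[(0.0, 1.0)] * 2, constraints=cons)
      print("weights:", res.x)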

  3. On Mean-Variance Analysis

    OpenAIRE

    Li, Yang; Pirvu, Traian A

    2011-01-01

    This paper considers the mean-variance portfolio management problem. We examine portfolios which contain both primary and derivative securities. The challenge in this context is due to the portfolio's nonlinearities. The delta-gamma approximation is employed to overcome it. Thus, the optimization problem is reduced to a well-posed quadratic program. The methodology developed in this paper can also be applied to pricing and hedging in incomplete markets.

  4. Efficient Cardinality/Mean-Variance Portfolios

    OpenAIRE

    Brito, R. Pedro; Vicente, Luís Nunes

    2014-01-01

    We propose a novel approach to handle cardinality in portfolio selection, by means of a biobjective cardinality/mean-variance problem, allowing the investor to analyze the efficient tradeoff between return-risk and the number of active positions. Recent progress in multiobjective optimization without derivatives allows us to robustly compute (in-sample) the whole cardinality/mean-variance efficient frontier, for a variety of data sets and mean-variance models. Our results…

  5. Portfolio optimization with mean-variance model

    Science.gov (United States)

    Hoe, Lam Weng; Siew, Lam Weng

    2016-06-01

    Investors wish to achieve a target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize the portfolio risk while achieving the target rate of return. The mean-variance model has been proposed for portfolio optimization. The mean-variance model is an optimization model that aims to minimize the portfolio risk, measured as the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the stocks enter the optimal portfolio in different proportions. Moreover, investors can obtain the target return at the minimum level of risk with the constructed optimal mean-variance portfolio.
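
    A minimal sketch of the mean-variance model just described: minimize the portfolio variance w'Σw subject to full investment and a target expected return. Toy data stand in for the FBMKLCI weekly returns used in the study.

      # Sketch: classical Markowitz minimum-variance portfolio at a target return.
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(2)
      weekly = rng.normal(0.002, 0.03, size=(260, 20))   # toy: 260 weeks, 20 stocks
      mu, Sigma = weekly.mean(axis=0), np.cov(weekly, rowvar=False)
      target = np.quantile(mu, 0.6)                      # an illustrative target return

      res = minimize(
          lambda w: w @ Sigma @ w,                       # portfolio variance
          x0=np.full(20, 1 / 20),
          bounds=[(0.0, 1.0)] * 20,                      # long-only
          constraints=[{'type': 'eq', 'fun': lambda w: w.sum() - 1.0},
                       {'type': 'ineq', 'fun': lambda w: w @ mu - target}],
      )
      print("risk (std):", np.sqrt(res.fun), " return:", res.x @ mu)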

  6. A Mean-Variance Analysis of Arbitrage Portfolios

    Science.gov (United States)

    Fang, Shuhong

    2007-03-01

    Based on the careful analysis of the definition of arbitrage portfolio and its return, the author presents a mean-variance analysis of the return of arbitrage portfolios, which implies that Korkie and Turtle's results ( B. Korkie, H.J. Turtle, A mean-variance analysis of self-financing portfolios, Manage. Sci. 48 (2002) 427-443) are misleading. A practical example is given to show the difference between the arbitrage portfolio frontier and the usual portfolio frontier.

  7. Dynamic Mean-Variance Asset Allocation

    OpenAIRE

    Basak, Suleyman; Chabakauri, Georgy

    2009-01-01

    Mean-variance criteria remain prevalent in multi-period problems, and yet not much is known about their dynamically optimal policies. We provide a fully analytical characterization of the optimal dynamic mean-variance portfolios within a general incomplete-market economy, and recover a simple structure that also inherits several conventional properties of static models. We also identify a probability measure that incorporates intertemporal hedging demands and facilitates much tractability in ...

  8. Multi-objective mean-variance-skewness model for generation portfolio allocation in electricity markets

    Energy Technology Data Exchange (ETDEWEB)

    Pindoriya, N.M.; Singh, S.N. [Department of Electrical Engineering, Indian Institute of Technology Kanpur, Kanpur 208016 (India); Singh, S.K. [Indian Institute of Management Lucknow, Lucknow 226013 (India)

    2010-10-15

    This paper proposes an approach for generation portfolio allocation based on a mean-variance-skewness (MVS) model, an extension of the classical mean-variance (MV) portfolio theory that deals with assets whose return distribution is non-normal. The MVS model allocates portfolios optimally by maximizing both the expected return and the skewness of the portfolio return while simultaneously minimizing the risk. Since this is a competing and conflicting non-smooth multi-objective optimization problem, this paper employs a multi-objective particle swarm optimization (MOPSO) based meta-heuristic technique to provide Pareto-optimal solutions in a single simulation run. Using a case study of the PJM electricity market, the performance of the MVS portfolio theory based method and the classical MV method is compared. It has been found that the MVS portfolio theory based method can provide significantly better portfolios in situations where non-normally distributed assets are traded. (author)

  9. Multi-objective mean-variance-skewness model for generation portfolio allocation in electricity markets

    International Nuclear Information System (INIS)

    Pindoriya, N.M.; Singh, S.N.; Singh, S.K.

    2010-01-01

    This paper proposes an approach for generation portfolio allocation based on a mean-variance-skewness (MVS) model, an extension of the classical mean-variance (MV) portfolio theory that deals with assets whose return distribution is non-normal. The MVS model allocates portfolios optimally by maximizing both the expected return and the skewness of the portfolio return while simultaneously minimizing the risk. Since this is a competing and conflicting non-smooth multi-objective optimization problem, this paper employs a multi-objective particle swarm optimization (MOPSO) based meta-heuristic technique to provide Pareto-optimal solutions in a single simulation run. Using a case study of the PJM electricity market, the performance of the MVS portfolio theory based method and the classical MV method is compared. It has been found that the MVS portfolio theory based method can provide significantly better portfolios in situations where non-normally distributed assets are traded. (author)
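
    For reference, the three MVS objectives written out for a scenario matrix of asset returns; a full MOPSO implementation is beyond a short sketch, but any multi-objective optimizer could consume these functions. All names and data are illustrative assumptions.

      # Sketch: the three objectives of a mean-variance-skewness (MVS) model.
      import numpy as np
      from scipy.stats import skew

      def mvs_objectives(x, R):
          """Return (expected return, variance, skewness) of portfolio x."""
          port = R @ x
          return port.mean(), port.var(), skew(port)

      rng = np.random.default_rng(3)
      R = rng.standard_t(df=4, size=(500, 5)) * 0.02   # fat-tailed toy returns
      x = np.full(5, 0.2)
      print(mvs_objectives(x, R))   # maximize 1st and 3rd, minimize 2nd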

  10. Cumulative prospect theory and mean variance analysis. A rigorous comparison

    OpenAIRE

    Hens, Thorsten; Mayer, Janos

    2012-01-01

    We compare asset allocations derived for cumulative prospect theory (CPT) based on two different methods: maximizing CPT along the mean-variance efficient frontier and maximizing it without that restriction. We find that with normally distributed returns the difference is negligible. However, using standard asset allocation data of pension funds, the difference is considerable. Moreover, with derivatives like call options the restriction to the mean-variance efficient frontier results in a sizeable…

  11. Geometric representation of the mean-variance-skewness portfolio frontier based upon the shortage function

    OpenAIRE

    Kerstens, Kristiaan; Mounier, Amine; Van de Woestyne, Ignace

    2008-01-01

    The literature suggests that investors prefer portfolios based on mean, variance and skewness rather than portfolios based on mean-variance (MV) criteria alone. Furthermore, a small variety of methods have been proposed to determine mean-variance-skewness (MVS) optimal portfolios. Recently, the shortage function has been introduced as a measure of efficiency, allowing MVS optimal portfolios to be characterized using non-parametric mathematical programming tools. While tracing the MV portfolio frontier…

  12. Continuous-Time Mean-Variance Portfolio Selection under the CEV Process

    OpenAIRE

    Ma, Hui-qiang

    2014-01-01

    We consider a continuous-time mean-variance portfolio selection model in which the stock price follows the constant elasticity of variance (CEV) process. The aim of this paper is to derive an optimal portfolio strategy and the efficient frontier. The mean-variance portfolio selection problem is formulated as a linearly constrained convex program. By employing the Lagrange multiplier method and stochastic optimal control theory, we obtain the optimal portfolio strategy and the mean-variance efficient frontier…

  13. A Mean-Variance Criterion for Economic Model Predictive Control of Stochastic Linear Systems

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Dammann, Bernd; Madsen, Henrik

    2014-01-01

    … the tractability of the resulting optimal control problem is addressed. We use a power management case study to compare different variations of the mean-variance strategy with EMPC based on the certainty equivalence principle. The certainty equivalence strategy is much more computationally efficient than the mean-variance strategies, but it does not account for the variance of the uncertain parameters. Open-loop simulations suggest that a single-stage mean-variance approach yields a significantly lower operating cost than the certainty equivalence strategy. In closed-loop, the single-stage formulation is overly conservative, but it can be modified to perform almost as well as the two-stage mean-variance formulation. Nevertheless, we argue that the mean-variance approach can be used both as a strategy for evaluating less computationally demanding methods, such as the certainty equivalence method, and as an individual control strategy when…

  14. DETERMINING THE OPTIMAL PORTFOLIO USING THE CONDITIONAL MEAN-VARIANCE MODEL

    Directory of Open Access Journals (Sweden)

    I GEDE ERY NISCAHYANA

    2016-08-01

    When stock returns exhibit autocorrelation and heteroscedasticity, conditional mean-variance models are a suitable way to model the behavior of the stocks. In this thesis, the implementation of the conditional mean-variance model for autocorrelated and heteroscedastic returns is discussed. The aim of this thesis was to assess the effect of autocorrelated and heteroscedastic returns on the optimal solution of a portfolio. The returns of four stocks, Fortune Mate Indonesia Tbk (FMII.JK), Bank Permata Tbk (BNLI.JK), Suryamas Dutamakmur Tbk (SMDM.JK) and Semen Gresik Indonesia Tbk (SMGR.JK), were estimated by a GARCH(1,1) model with standard innovations following the standard normal distribution and the t-distribution. The estimates were used to construct a portfolio. The optimal portfolio was found when the standard innovations followed the t-distribution with a standard deviation of 1.4532 and a mean of 0.8023, consisting of 0.9429 (94%) FMII stock, 0.0473 (5%) BNLI stock, 0% SMDM stock, and 1% SMGR stock.
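
    A hedged sketch of the estimation step described above, assuming the `arch` Python package is available: fit a GARCH(1,1) model with Student-t innovations to a return series. Simulated returns stand in for the four Indonesian stocks.

      # Sketch: GARCH(1,1) with Student-t innovations via the `arch` package.
      import numpy as np
      from arch import arch_model

      rng = np.random.default_rng(4)
      returns = rng.standard_t(df=5, size=1000)   # toy daily returns, percent scale

      am = arch_model(returns, mean='Constant', vol='GARCH', p=1, q=1, dist='t')
      res = am.fit(disp='off')
      print(res.params)                           # mu, omega, alpha[1], beta[1], nu
      cond_vol = res.conditional_volatility       # feeds the portfolio step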

  15. Discrete and continuous time dynamic mean-variance analysis

    OpenAIRE

    Reiss, Ariane

    1999-01-01

    Contrary to static mean-variance analysis, very few papers have dealt with dynamic mean-variance analysis. Here, the mean-variance efficient self-financing portfolio strategy is derived for n risky assets in discrete and continuous time. In the discrete setting, the resulting portfolio is mean-variance efficient in a dynamic sense. It is shown that the optimal strategy for n risky assets may be dominated if the expected terminal wealth is constrained to exactly attain a certain goal instead of…

  16. Discrete time and continuous time dynamic mean-variance analysis

    OpenAIRE

    Reiss, Ariane

    1999-01-01

    Contrary to static mean-variance analysis, very few papers have dealt with dynamic mean-variance analysis. Here, the mean-variance efficient self-financing portfolio strategy is derived for n risky assets in discrete and continuous time. In the discrete setting, the resulting portfolio is mean-variance efficient in a dynamic sense. It is shown that the optimal strategy for n risky assets may be dominated if the expected terminal wealth is constrained to exactly attain a certain goal instead of…

  17. Continuous-Time Mean-Variance Portfolio Selection under the CEV Process

    Directory of Open Access Journals (Sweden)

    Hui-qiang Ma

    2014-01-01

    We consider a continuous-time mean-variance portfolio selection model in which the stock price follows the constant elasticity of variance (CEV) process. The aim of this paper is to derive an optimal portfolio strategy and the efficient frontier. The mean-variance portfolio selection problem is formulated as a linearly constrained convex program. By employing the Lagrange multiplier method and stochastic optimal control theory, we obtain the optimal portfolio strategy and the mean-variance efficient frontier analytically. The results show that the mean-variance efficient frontier is still a parabola in the mean-variance plane, and the optimal strategies depend not only on the total wealth but also on the stock price. Moreover, some numerical examples are given to analyze the sensitivity of the efficient frontier with respect to the elasticity parameter and to illustrate the results presented in this paper. The numerical results show that the price of risk decreases as the elasticity coefficient increases.

  18. Mean-Variance Portfolio Selection with Margin Requirements

    Directory of Open Access Journals (Sweden)

    Yuan Zhou

    2013-01-01

    We study the continuous-time mean-variance portfolio selection problem in the situation where investors must pay margin for short selling. The problem is essentially a nonlinear stochastic optimal control problem, because the coefficients of the positive and negative parts of the control variables are different. We cannot apply the results for the stochastic linear-quadratic (LQ) problem, and the solution of the corresponding Hamilton-Jacobi-Bellman (HJB) equation is not smooth. Li et al. (2002) studied the case when short selling is prohibited, and therefore only needed to consider the positive part of the control variables, whereas we need to handle both the positive and the negative parts. The main difficulty is that the positive part and the negative part are not independent, so the previous results are not directly applicable. By decomposing the problem into several subproblems we work out the solutions of the HJB equation in two disjoint regions and then prove that it is the viscosity solution of the HJB equation. Finally, we formulate the solution of the optimal portfolio and the efficient frontier. We also present two examples showing how different margin rates affect the optimal solutions and the efficient frontier.

  19. ANALYSIS OF RESAMPLED EFFICIENT FRONTIER PORTFOLIOS BASED ON MEAN-VARIANCE OPTIMIZATION

    OpenAIRE

    Abdurakhman, Abdurakhman

    2008-01-01

    The right asset allocation decision in portfolio investment can maximize return and/or minimize risk. A method frequently used in portfolio optimization is the Markowitz Mean-Variance method. In practice, this method has the weakness of not being very stable: small changes in the estimated input parameters cause large changes in the portfolio composition. For this reason, a portfolio optimization method has been developed that can overcome the instability of the Mean-Variance…

  20. On the Endogeneity of the Mean-Variance Efficient Frontier.

    Science.gov (United States)

    Somerville, R. A.; O'Connell, Paul G. J.

    2002-01-01

    Explains that the endogeneity of the efficient frontier in the mean-variance model of portfolio selection is commonly obscured in portfolio selection literature and in widely used textbooks. Demonstrates endogeneity and discusses the impact of parameter changes on the mean-variance efficient frontier and on the beta coefficients of individual…

  1. A spatial mean-variance MIP model for energy market risk analysis

    International Nuclear Information System (INIS)

    Yu, Zuwei

    2003-01-01

    The paper presents a short-term market risk model based on the Markowitz mean-variance method for spatial electricity markets. The spatial nature is captured using the correlation of geographically separated markets and the consideration of wheeling administration. The model also includes transaction costs and other practical constraints, resulting in a mixed integer programming (MIP) model. The incorporation of those practical constraints makes the model more attractive than the traditional Markowitz portfolio model with continuity. A case study is used to illustrate the practical application of the model. The results show that the MIP portfolio efficient frontier is neither smooth nor concave. The paper also considers the possible extension of the model to other energy markets, including natural gas and oil markets

  2. A spatial mean-variance MIP model for energy market risk analysis

    International Nuclear Information System (INIS)

    Zuwei Yu

    2003-01-01

    The paper presents a short-term market risk model based on the Markowitz mean-variance method for spatial electricity markets. The spatial nature is captured using the correlation of geographically separated markets and the consideration of wheeling administration. The model also includes transaction costs and other practical constraints, resulting in a mixed integer programming (MIP) model. The incorporation of those practical constraints makes the model more attractive than the traditional Markowitz portfolio model with continuity. A case study is used to illustrate the practical application of the model. The results show that the MIP portfolio efficient frontier is neither smooth nor concave. The paper also considers the possible extension of the model to other energy markets, including natural gas and oil markets. (author)

  3. A spatial mean-variance MIP model for energy market risk analysis

    Energy Technology Data Exchange (ETDEWEB)

    Zuwei Yu [Purdue University, West Lafayette, IN (United States). Indiana State Utility Forecasting Group and School of Industrial Engineering

    2003-05-01

    The paper presents a short-term market risk model based on the Markowitz mean-variance method for spatial electricity markets. The spatial nature is captured using the correlation of geographically separated markets and the consideration of wheeling administration. The model also includes transaction costs and other practical constraints, resulting in a mixed integer programming (MIP) model. The incorporation of those practical constraints makes the model more attractive than the traditional Markowitz portfolio model with continuity. A case study is used to illustrate the practical application of the model. The results show that the MIP portfolio efficient frontier is neither smooth nor concave. The paper also considers the possible extension of the model to other energy markets, including natural gas and oil markets. (author)

  4. A spatial mean-variance MIP model for energy market risk analysis

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Zuwei [Indiana State Utility Forecasting Group and School of Industrial Engineering, Purdue University, Room 334, 1293 A.A. Potter, West Lafayette, IN 47907 (United States)

    2003-05-01

    The paper presents a short-term market risk model based on the Markowitz mean-variance method for spatial electricity markets. The spatial nature is captured using the correlation of geographically separated markets and the consideration of wheeling administration. The model also includes transaction costs and other practical constraints, resulting in a mixed integer programming (MIP) model. The incorporation of those practical constraints makes the model more attractive than the traditional Markowitz portfolio model with continuity. A case study is used to illustrate the practical application of the model. The results show that the MIP portfolio efficient frontier is neither smooth nor concave. The paper also considers the possible extension of the model to other energy markets, including natural gas and oil markets.
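
    A sketch of how practical constraints turn a mean-variance model into a MIP, in the spirit of the record above (though not its spatial electricity-market formulation): binary variables switch positions on and off and carry a fixed transaction cost. This assumes cvxpy with a mixed-integer-capable solver installed; all data are illustrative.

      # Sketch: mean-variance portfolio as a mixed integer program (MIQP).
      import cvxpy as cp
      import numpy as np

      rng = np.random.default_rng(5)
      n = 6
      mu = rng.normal(0.05, 0.02, n)
      A = rng.normal(size=(n, n))
      Sigma = A @ A.T / n + np.eye(n) * 0.01
      Sigma = (Sigma + Sigma.T) / 2            # ensure exact symmetry for quad_form
      fixed_cost = 0.002                       # per-position transaction cost

      x = cp.Variable(n)                       # position sizes
      y = cp.Variable(n, boolean=True)         # 1 if position i is open
      risk_aversion = 5.0

      constraints = [cp.sum(x) == 1, x >= 0, x <= y, cp.sum(y) <= 3]
      objective = cp.Maximize(mu @ x - risk_aversion * cp.quad_form(x, Sigma)
                              - fixed_cost * cp.sum(y))
      cp.Problem(objective, constraints).solve()   # needs a mixed-integer solver
      print(np.round(x.value, 3))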

  5. Replica approach to mean-variance portfolio optimization

    Science.gov (United States)

    Varga-Haszonits, Istvan; Caccioli, Fabio; Kondor, Imre

    2016-12-01

    We consider the problem of mean-variance portfolio optimization for a generic covariance matrix, subject to the budget constraint and the constraint for the expected return, with the application of the replica method borrowed from the statistical physics of disordered systems. We find that the replica symmetry of the solution does not need to be assumed, but emerges as the unique solution of the optimization problem. We also check the stability of this solution and find that the eigenvalues of the Hessian are positive for r = N/T < 1, where N is the dimension of the portfolio and T the length of the time series used to estimate the covariance matrix. The optimal in-sample variance is found to vanish at the critical point inversely proportionally to the divergent estimation error.
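
    A quick Monte Carlo illustration of the r = N/T behaviour mentioned above, not the replica calculation itself: with a true identity covariance, the in-sample variance of the estimated minimum-variance portfolio shrinks as the aspect ratio r approaches 1.

      # Sketch: in-sample variance of the minimum-variance portfolio vs r = N/T.
      import numpy as np

      rng = np.random.default_rng(6)
      T = 400
      for N in (40, 200, 360):                 # r = 0.1, 0.5, 0.9
          X = rng.normal(size=(T, N))          # i.i.d. returns, true covariance = I
          S = np.cov(X, rowvar=False)
          ones = np.ones(N)
          w = np.linalg.solve(S, ones)
          w /= ones @ w                        # min-variance under budget constraint
          print(f"r={N/T:.2f}  in-sample var={w @ S @ w:.4f}")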

  6. Mean-variance Optimal Reinsurance-investment Strategy in Continuous Time

    Directory of Open Access Journals (Sweden)

    Daheng Peng

    2017-10-01

    In this paper, the Lagrange method is used to solve the continuous-time mean-variance reinsurance-investment problem. Proportional reinsurance, multiple risky assets and a risk-free asset are considered jointly in the optimal strategy for insurers. By solving the backward stochastic differential equation for the Lagrange multiplier, we obtain the mean-variance optimal reinsurance-investment strategy and its efficient frontier in explicit form.

  7. Mean-variance Optimal Reinsurance-investment Strategy in Continuous Time

    OpenAIRE

    Daheng Peng; Fang Zhang

    2017-01-01

    In this paper, the Lagrange method is used to solve the continuous-time mean-variance reinsurance-investment problem. Proportional reinsurance, multiple risky assets and a risk-free asset are considered jointly in the optimal strategy for insurers. By solving the backward stochastic differential equation for the Lagrange multiplier, we obtain the mean-variance optimal reinsurance-investment strategy and its efficient frontier in explicit form.

  8. Multiperiod Mean-Variance Portfolio Optimization via Market Cloning

    Energy Technology Data Exchange (ETDEWEB)

    Ankirchner, Stefan, E-mail: ankirchner@hcm.uni-bonn.de [Rheinische Friedrich-Wilhelms-Universitaet Bonn, Institut fuer Angewandte Mathematik, Hausdorff Center for Mathematics (Germany); Dermoune, Azzouz, E-mail: Azzouz.Dermoune@math.univ-lille1.fr [Universite des Sciences et Technologies de Lille, Laboratoire Paul Painleve UMR CNRS 8524 (France)

    2011-08-15

    The problem of finding the mean-variance optimal portfolio in a multiperiod model cannot be solved directly by means of dynamic programming. In order to find a solution we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity we are able to solve the original mean-variance problem.

  9. Multiperiod Mean-Variance Portfolio Optimization via Market Cloning

    International Nuclear Information System (INIS)

    Ankirchner, Stefan; Dermoune, Azzouz

    2011-01-01

    The problem of finding the mean-variance optimal portfolio in a multiperiod model cannot be solved directly by means of dynamic programming. In order to find a solution we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity we are able to solve the original mean-variance problem.

  10. A mean-variance frontier in discrete and continuous time

    NARCIS (Netherlands)

    Bekker, Paul A.

    2004-01-01

    The paper presents a mean-variance frontier based on dynamic frictionless investment strategies in continuous time. The result applies to a finite number of risky assets whose price process is given by multivariate geometric Brownian motion with deterministically varying coefficients. The derivation is based on the solution for the frontier in discrete time.

  11. Comparison of some nonlinear smoothing methods

    International Nuclear Information System (INIS)

    Bell, P.R.; Dillon, R.S.

    1977-01-01

    Due to the poor quality of many nuclear medicine images, computer-driven smoothing procedures are frequently employed to enhance the diagnostic utility of these images. While linear methods were first tried, it was discovered that nonlinear techniques produced superior smoothing with little detail suppression. We have compared four methods: Gaussian smoothing (linear), two-dimensional least-squares smoothing (linear), two-dimensional least-squares bounding (nonlinear), and two-dimensional median smoothing (nonlinear). The two-dimensional least-squares procedures have yielded the most satisfactorily enhanced images, with the median smoothers providing quite good images, even in the presence of widely aberrant points.
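
    A small reproduction of the linear-versus-nonlinear contrast described above, using scipy.ndimage in place of the authors' implementations: Gaussian smoothing spreads an aberrant pixel, while median smoothing suppresses it. The toy image is an illustrative assumption.

      # Sketch: linear (Gaussian) vs nonlinear (median) smoothing of a noisy image.
      import numpy as np
      from scipy.ndimage import gaussian_filter, median_filter

      rng = np.random.default_rng(7)
      img = rng.poisson(50, size=(64, 64)).astype(float)   # Poisson counts
      img[32, 32] = 5000                                   # one widely aberrant pixel

      g = gaussian_filter(img, sigma=1.0)                  # spreads the outlier
      m = median_filter(img, size=3)                       # suppresses the outlier
      print("near outlier  gaussian:", g[31:34, 31:34].max(),
            " median:", m[31:34, 31:34].max())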

  12. A mean-variance frontier in discrete and continuous time

    OpenAIRE

    Bekker, Paul A.

    2004-01-01

    The paper presents a mean-variance frontier based on dynamic frictionless investment strategies in continuous time. The result applies to a finite number of risky assets whose price process is given by multivariate geometric Brownian motion with deterministically varying coefficients. The derivation is based on the solution for the frontier in discrete time. Using the same multiperiod framework as Li and Ng (2000), I provide an alternative derivation and an alternative formulation of the solution…

  13. ASYMMETRY OF MARKET RETURNS AND THE MEAN VARIANCE FRONTIER

    OpenAIRE

    SENGUPTA, Jati K.; PARK, Hyung S.

    1994-01-01

    The hypothesis that skewness and asymmetry have no significant impact on the mean-variance frontier is found to be strongly violated by monthly U.S. data over the period January 1965 through December 1974. This result raises serious doubts as to whether common market portfolios such as the S&P 500, value-weighted and equal-weighted returns can serve as suitable proxies for mean-variance efficient portfolios in the CAPM framework. A new test for assessing the impact of skewness on the variance frontier…

  14. Mean-Variance Analysis in a Multiperiod Setting

    OpenAIRE

    Frauendorfer, Karl; Siede, Heiko

    1997-01-01

    Similar to the classical Markowitz approach, it is possible to apply a mean-variance criterion to a multiperiod setting to obtain efficient portfolios. To represent the stochastic dynamic characteristics necessary for modelling returns, a process of asset returns is discretized with respect to time and space and summarized in a scenario tree. The resulting optimization problem is solved by means of stochastic multistage programming. The optimal solutions show structural properties equivalent to…

  15. A Note on the Kinks at the Mean Variance Frontier

    OpenAIRE

    Vörös, J.; Kriens, J.; Strijbosch, L.W.G.

    1997-01-01

    In this paper the standard portfolio case with short sales restrictions is analyzed. Dybvig pointed out that if there is a kink at a risky portfolio on the efficient frontier, then the securities in this portfolio have equal expected return, and that the converse of this statement is false. For the existence of kinks at the efficient frontier, a sufficient condition is given here, and a new procedure is used to derive the efficient frontier, i.e. the characteristics of the mean-variance frontier.

  16. A Decomposition Algorithm for Mean-Variance Economic Model Predictive Control of Stochastic Linear Systems

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Dammann, Bernd; Madsen, Henrik

    2014-01-01

    This paper presents a decomposition algorithm for solving the optimal control problem (OCP) that arises in Mean-Variance Economic Model Predictive Control of stochastic linear systems. The algorithm applies the alternating direction method of multipliers to a reformulation of the OCP...

  17. Deterministic mean-variance-optimal consumption and investment

    DEFF Research Database (Denmark)

    Christiansen, Marcus; Steffensen, Mogens

    2013-01-01

    In dynamic optimal consumption–investment problems one typically aims to find an optimal control from the set of adapted processes. This is also the natural starting point in the case of a mean-variance objective. In contrast, we solve the optimization problem with the special feature that the consumption rate and the investment proportion are constrained to be deterministic processes. As a result we get rid of a series of unwanted features of the stochastic solution, including diffusive consumption, satisfaction points and consistency problems. Deterministic strategies typically appear in unit-linked life insurance contracts, where the life-cycle investment strategy is age dependent but wealth independent. We explain how optimal deterministic strategies can be found numerically and present an example from life insurance where we compare the optimal solution with suboptimal deterministic strategies…

  18. A load factor based mean-variance analysis for fuel diversification

    Energy Technology Data Exchange (ETDEWEB)

    Gotham, Douglas; Preckel, Paul; Ruangpattana, Suriya [State Utility Forecasting Group, Purdue University, West Lafayette, IN (United States); Muthuraman, Kumar [McCombs School of Business, University of Texas, Austin, TX (United States); Rardin, Ronald [Department of Industrial Engineering, University of Arkansas, Fayetteville, AR (United States)

    2009-03-15

    Fuel diversification implies the selection of a mix of generation technologies for long-term electricity generation. The goal is to strike a good balance between reduced costs and reduced risk. The method of analysis that has been advocated and adopted for such studies is the mean-variance portfolio analysis pioneered by Markowitz (Markowitz, H., 1952. Portfolio selection. Journal of Finance 7(1) 77-91). However, the standard mean-variance methodology does not account for the ability of various fuels/technologies to adapt to varying loads. Such analysis often provides results that are easily dismissed by regulators and practitioners as unacceptable, since load cycles play critical roles in fuel selection. To account for such issues and still retain the convenience and elegance of the mean-variance approach, we propose a variant of the mean-variance analysis using the decomposition of the load into various types and utilizing the load factors of each load type. We also illustrate the approach using data for the state of Indiana and demonstrate the ability of the model to provide useful insights. (author)

  19. Ant Colony Optimization for Markowitz Mean-Variance Portfolio Model

    Science.gov (United States)

    Deng, Guang-Feng; Lin, Woo-Tsong

    This work presents Ant Colony Optimization (ACO), which was initially developed as a meta-heuristic for combinatorial optimization, for solving the cardinality-constrained Markowitz mean-variance portfolio model (a nonlinear mixed quadratic programming problem). To our knowledge, an efficient algorithmic solution for this problem had not been proposed previously, making heuristic algorithms imperative. Numerical solutions are obtained for five analyses of weekly price data for the following indices for the period March 1992 to September 1997: Hang Seng 31 in Hong Kong, DAX 100 in Germany, FTSE 100 in the UK, S&P 100 in the USA and Nikkei 225 in Japan. The test results indicate that ACO is much more robust and effective than particle swarm optimization (PSO), especially for low-risk investment portfolios.

  20. Swarm based mean-variance mapping optimization (MVMOS) for solving economic dispatch

    Science.gov (United States)

    Khoa, T. H.; Vasant, P. M.; Singh, M. S. Balbir; Dieu, V. N.

    2014-10-01

    The economic dispatch (ED) is an essential optimization task in the power generation system. It is defined as the process of allocating the real power output of generation units to meet the required load demand so that their total operating cost is minimized while satisfying all physical and operational constraints. This paper introduces a novel optimization technique named swarm-based mean-variance mapping optimization (MVMOS), an extension of the original single-particle mean-variance mapping optimization (MVMO). Its features make it a potentially attractive algorithm for solving optimization problems. The proposed method is implemented for three test power systems, comprising 3, 13 and 20 thermal generation units with quadratic cost functions, and the obtained results are compared with many other methods available in the literature. Test results indicate that the proposed method can be efficiently applied to solving economic dispatch.
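
    For reference, a sketch of the underlying economic dispatch problem that MVMOS targets: minimize total quadratic fuel cost subject to a demand balance and generator limits. A classical NLP solver stands in for the swarm optimizer here; all cost coefficients and limits are illustrative assumptions.

      # Sketch: economic dispatch with quadratic cost a*P^2 + b*P + c per unit.
      import numpy as np
      from scipy.optimize import minimize

      a = np.array([0.008, 0.009, 0.007])
      b = np.array([7.0, 6.3, 6.8])
      c = np.array([200.0, 180.0, 140.0])
      Pmin, Pmax = np.array([10., 10., 10.]), np.array([85., 80., 70.])
      demand = 150.0

      cost = lambda P: np.sum(a * P**2 + b * P + c)
      res = minimize(cost, x0=np.full(3, 50.0),
                     bounds=list(zip(Pmin, Pmax)),
                     constraints=[{'type': 'eq', 'fun': lambda P: P.sum() - demand}])
      print("dispatch:", np.round(res.x, 2), " cost:", round(res.fun, 2))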

  1. An adaptive method for γ spectra smoothing

    International Nuclear Information System (INIS)

    Xiao Gang; Zhou Chunlin; Li Tiantuo; Han Feng; Di Yuming

    2001-01-01

    An adaptive wavelet method and a polynomial-fitting sliding method are used for smoothing γ spectra, and then the FWHM of the 1332 keV peak of 60Co and the activity of a 238U standard specimen are calculated. The calculated results show that the adaptive wavelet method is better than the other.
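
    A hedged sketch of wavelet-based spectrum smoothing in this spirit, assuming the PyWavelets package: decompose a noisy spectrum, soft-threshold the detail coefficients with the universal threshold, and reconstruct. The wavelet and threshold rule are illustrative choices, not the authors' adaptive scheme.

      # Sketch: wavelet shrinkage smoothing of a simulated gamma spectrum.
      import numpy as np
      import pywt

      rng = np.random.default_rng(8)
      x = np.linspace(0, 1, 1024)
      spectrum = 1000 * np.exp(-0.5 * ((x - 0.6) / 0.01) ** 2) + 50   # peak + background
      noisy = rng.poisson(spectrum).astype(float)

      coeffs = pywt.wavedec(noisy, 'sym8', level=5)
      sigma = np.median(np.abs(coeffs[-1])) / 0.6745                  # noise estimate
      thr = sigma * np.sqrt(2 * np.log(len(noisy)))                   # universal threshold
      coeffs = [coeffs[0]] + [pywt.threshold(d, thr, mode='soft') for d in coeffs[1:]]
      smoothed = pywt.waverec(coeffs, 'sym8')
      print("background std  raw:", noisy[:100].std(), " smoothed:", smoothed[:100].std())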

  2. Multiple predictor smoothing methods for sensitivity analysis

    International Nuclear Information System (INIS)

    Helton, Jon Craig; Storlie, Curtis B.

    2006-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present

  3. Multiple predictor smoothing methods for sensitivity analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Helton, Jon Craig; Storlie, Curtis B.

    2006-08-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.
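
    A minimal sketch of the first of the smoothing procedures named above: LOESS of the model output against each input, ranked by the variance explained by the smooth, which picks up nonlinear input-output relationships that linear regression can miss. The test model is an illustrative assumption.

      # Sketch: LOESS-based sensitivity ranking of model inputs.
      import numpy as np
      from statsmodels.nonparametric.smoothers_lowess import lowess

      rng = np.random.default_rng(9)
      X = rng.uniform(-1, 1, size=(500, 3))
      y = np.sin(2 * X[:, 0]) + 0.3 * X[:, 1] ** 2 + 0.05 * rng.normal(size=500)

      for j in range(X.shape[1]):
          fit = lowess(y, X[:, j], frac=0.4, return_sorted=False)
          r2 = 1 - np.var(y - fit) / np.var(y)   # variance explained by the smooth
          print(f"input {j}: LOESS R^2 = {r2:.2f}")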

  4. R package MVR for Joint Adaptive Mean-Variance Regularization and Variance Stabilization.

    Science.gov (United States)

    Dazard, Jean-Eudes; Xu, Hua; Rao, J Sunil

    2011-01-01

    We present an implementation in the R language for statistical computing of our recent non-parametric joint adaptive mean-variance regularization and variance stabilization procedure. The method is specifically suited to the difficult problems posed by high-dimensional multivariate datasets (the p ≫ n paradigm), such as in 'omics'-type data, in which the variance is often a function of the mean, variable-specific estimators of variances are not reliable, and test statistics have low power due to a lack of degrees of freedom. The implementation offers a complete set of features including: (i) a normalization and/or variance stabilization function, (ii) computation of mean-variance-regularized t and F statistics, (iii) generation of diverse diagnostic plots, (iv) synthetic and real 'omics' test datasets, (v) a computationally efficient implementation, using C interfacing, and an option for parallel computing, and (vi) a manual and documentation on how to set up a cluster. To make each feature as user-friendly as possible, only one subroutine per functionality is to be handled by the end-user. It is available as an R package, called MVR ('Mean-Variance Regularization'), downloadable from CRAN.

  5. Measuring kinetics of complex single ion channel data using mean-variance histograms.

    Science.gov (United States)

    Patlak, J B

    1993-07-01

    The measurement of single ion channel kinetics is difficult when those channels exhibit subconductance events. When the kinetics are fast, and when the current magnitudes are small, as is the case for Na+, Ca2+, and some K+ channels, these difficulties can lead to serious errors in the estimation of channel kinetics. I present here a method, based on the construction and analysis of mean-variance histograms, that can overcome these problems. A mean-variance histogram is constructed by calculating the mean current and the current variance within a brief "window" (a set of N consecutive data samples) superimposed on the digitized raw channel data. Systematic movement of this window over the data produces large numbers of mean-variance pairs which can be assembled into a two-dimensional histogram. Defined current levels (open, closed, or sublevel) appear in such plots as low-variance regions. The total number of events in such low-variance regions is estimated by curve fitting and plotted as a function of window width. This function decreases with the same time constants as the original dwell-time probability distribution for each of the regions. The method can therefore be used: 1) to present a qualitative summary of the single channel data, from which the signal-to-noise ratio, open channel noise, steadiness of the baseline, and number of conductance levels can be quickly determined; 2) to quantify the dwell-time distribution in each of the levels exhibited. In this paper I present the analysis of a Na+ channel recording that had a number of complexities. The signal-to-noise ratio was only about 8 for the main open state, open channel noise and fast flickers to other states were present, as were a substantial number of subconductance states. "Standard" half-amplitude threshold analysis of these data produced open and closed time histograms that were well fitted by the sum of two exponentials, but with apparently erroneous time constants, whereas the mean-variance…
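
    A sketch of the histogram construction just described: slide a window of N samples over the current trace, collect (mean, variance) pairs, and bin them in 2D; defined current levels appear as low-variance clusters. Synthetic two-level data stand in for a real recording.

      # Sketch: mean-variance histogram of a simulated single-channel trace.
      import numpy as np

      rng = np.random.default_rng(10)
      # Toy trace: closed (0 pA) and open (-2 pA) dwells plus Gaussian noise
      levels = np.repeat(np.where(rng.random(200) < 0.5, 0.0, -2.0), 50)
      trace = levels + rng.normal(0, 0.25, size=levels.size)

      N = 10                                   # window width
      win = np.lib.stride_tricks.sliding_window_view(trace, N)
      means, variances = win.mean(axis=1), win.var(axis=1)
      hist, mean_edges, var_edges = np.histogram2d(means, variances, bins=(60, 40))
      # Low-variance columns of `hist` concentrate at the defined current levels
      peaks = hist[:, :5].sum(axis=1).argsort()[-2:]
      print("low-variance peak means near:", mean_edges[peaks])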

  6. A geometric approach to multiperiod mean variance optimization of assets and liabilities

    OpenAIRE

    Leippold, Markus; Trojani, Fabio; Vanini, Paolo

    2005-01-01

    We present a geometric approach to discrete-time multiperiod mean-variance portfolio optimization that largely simplifies the mathematical analysis and the economic interpretation of such model settings. We show that multiperiod mean-variance optimal policies can be decomposed into an orthogonal set of basis strategies, each having a clear economic interpretation. This implies that the corresponding multiperiod mean-variance frontiers are spanned by an orthogonal basis of dynamic returns. Specifically…

  7. Mean-Variance-Validation Technique for Sequential Kriging Metamodels

    International Nuclear Information System (INIS)

    Lee, Tae Hee; Kim, Ho Sung

    2010-01-01

    The rigorous validation of the accuracy of metamodels is an important topic in research on metamodel techniques. Although a leave-k-out cross-validation technique involves a considerably high computational cost, it cannot be used to measure the fidelity of metamodels. Recently, the mean-zero validation technique has been proposed to quantitatively determine the accuracy of metamodels. However, the use of the mean-zero validation criterion may lead to premature termination of the sampling process even if the kriging model is inaccurate. In this study, we propose a new validation technique based on the mean and variance of the response, evaluated when a sequential sampling method, such as maximum entropy sampling, is used. The proposed validation technique is more efficient and accurate than the leave-k-out cross-validation technique because, instead of performing numerical integration, the kriging model is explicitly integrated to accurately evaluate the mean and variance of the response. The error in the proposed validation technique resembles a root mean squared error, thus it can be used to determine a stopping criterion for the sequential sampling of metamodels.

  8. PET image reconstruction: mean, variance, and optimal minimax criterion

    International Nuclear Information System (INIS)

    Liu, Huafeng; Guo, Min; Gao, Fei; Shi, Pengcheng; Xue, Liying; Nie, Jing

    2015-01-01

    Given the noisy nature of positron emission tomography (PET) measurements, it is critical to know the image quality and reliability as well as the expected radioactivity map (mean image) for both qualitative interpretation and quantitative analysis. While existing efforts have often been devoted to providing only the reconstructed mean image, we present a unified framework for joint estimation of the mean and corresponding variance of the radioactivity map based on an efficient optimal minimax criterion. The proposed framework formulates the PET image reconstruction problem as a transformation from system uncertainties to estimation errors, where the minimax criterion is adopted to minimize the estimation errors under maximal system uncertainties. The estimation errors, in the form of a covariance matrix, express the measurement uncertainties in a complete way. The framework is then optimized by H∞-norm optimization and solved with the corresponding H∞ filter. Unlike conventional statistical reconstruction algorithms that rely on statistical modeling of the measurement data or noise, the proposed joint estimation starts from the point of view of signal energies and can handle anything from imperfect statistical assumptions to no a priori statistical assumptions at all. The performance and accuracy of the reconstructed mean and variance images are validated using Monte Carlo simulations. Experiments on phantom scans with a small animal PET scanner and real patient scans are also conducted to assess the clinical potential. (paper)

  9. On Mean-Variance Hedging of Bond Options with Stochastic Risk Premium Factor

    NARCIS (Netherlands)

    Aihara, ShinIchi; Bagchi, Arunabha; Kumar, Suresh K.

    2014-01-01

    We consider the mean-variance hedging problem for pricing bond options using the yield curve as the observation. The model considered contains infinite-dimensional noise sources with a stochastically varying risk premium. Hence our model is incomplete. We consider mean-variance hedging under the…

  10. Investor preferences for oil spot and futures based on mean-variance and stochastic dominance

    NARCIS (Netherlands)

    H.H. Lean (Hooi Hooi); M.J. McAleer (Michael); W.-K. Wong (Wing-Keung)

    2010-01-01

    This paper examines investor preferences for oil spot and futures based on mean-variance (MV) and stochastic dominance (SD) criteria. The mean-variance criterion cannot distinguish between preferences for spot and futures, whereas the SD tests lead to the conclusion that spot dominates futures in the downside…

  11. Mean-variance portfolio optimization by using time series approaches based on logarithmic utility function

    Science.gov (United States)

    Soeryana, E.; Fadhlina, N.; Sukono; Rusyaman, E.; Supian, S.

    2017-01-01

    Investors in stocks also face risk, because daily stock prices fluctuate. To minimize the level of risk, investors usually form an investment portfolio. Establishing a portfolio consisting of several stocks is intended to obtain the optimal composition of the investment portfolio. This paper discusses mean-variance optimization of a stock portfolio with non-constant mean and volatility, based on a logarithmic utility function. The non-constant mean is analysed using Autoregressive Moving Average (ARMA) models, while the non-constant volatility is analysed using Generalized Autoregressive Conditional Heteroscedasticity (GARCH) models. The optimization process is performed using the Lagrangian multiplier technique. As a numerical illustration, the method is used to analyse some Islamic stocks in Indonesia. The expected result is to obtain the proportion of investment in each Islamic stock analysed.

  12. Smoothing-Norm Preconditioning for Regularizing Minimum-Residual Methods

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Jensen, Toke Koldborg

    2006-01-01

    … take into account a smoothing norm for the solution. This technique is well established for CGLS, but it does not immediately carry over to minimum-residual methods when the smoothing norm is a seminorm or a Sobolev norm. We develop a new technique which works for any smoothing norm of the form $\|Lx\|$…

  13. Regime shifts in mean-variance efficient frontiers: some international evidence

    OpenAIRE

    Massimo Guidolin; Federica Ria

    2010-01-01

    Regime switching models have assumed a central role in financial applications because of their well-known ability to capture the presence of rich non-linear patterns in the joint distribution of asset returns. This paper examines how the presence of regimes in means, variances, and correlations of asset returns translates into explicit dynamics of the Markowitz mean-variance frontier. In particular, the paper shows both theoretically and through an application to international equity portfolios…

  14. Markov switching mean-variance frontier dynamics: theory and international evidence

    OpenAIRE

    M. Guidolin; F. Ria

    2010-01-01

    It is well-known that regime switching models are able to capture the presence of rich non-linear patterns in the joint distribution of asset returns. After reviewing key concepts and technical issues related to specifying, estimating, and using multivariate Markov switching models in financial applications, in this paper we map the presence of regimes in means, variances, and covariances of asset returns into explicit dynamics of the Markowitz mean-variance frontier. In particular, we show both theoretically and…

  15. Optimization Stock Portfolio With Mean-Variance and Linear Programming: Case In Indonesia Stock Market

    Directory of Open Access Journals (Sweden)

    Yen Sun

    2010-05-01

    It is observed that the number of Indonesian domestic investors involved in the stock exchange is very small compared to the total population (only about 0.1%). As a result, the Indonesia Stock Exchange (IDX) is highly affected by foreign investors, which can threaten the economy. Domestic investors tend to invest in risk-free assets such as bank deposits, since they are not yet familiar with the stock market and are anxious about the risk (risk-averse investors). Therefore, it is important to educate domestic investors to participate in the stock exchange. Investing in a portfolio of stocks is one of the best choices for risk-averse investors (such as Indonesian domestic investors), since it offers lower risk for a given level of return. This paper studies the optimization of an Indonesian stock portfolio. The data are the historical returns of 10 stocks of the LQ 45 over a 5-year period (January 2004 – December 2008). The focus is on selecting stocks into a portfolio, constructing 10 stock portfolios using the mean-variance method combined with linear programming (solver). Furthermore, based on the efficient frontier concept and the Sharpe measurement, one stock portfolio is picked as the optimum portfolio (namely Portfolio G). Then, the performance of Portfolio G is evaluated using the Sharpe, Treynor and Jensen measurements to show whether the return of Portfolio G exceeds the market return. This paper also illustrates how the stock composition of the optimum portfolio (G) succeeds in predicting the portfolio return in the future (5th January – 3rd April 2009). The study finds that portfolio optimization using mean-variance (consistent with Markowitz theory) combined with linear programming can be applied to Indonesian stock portfolios. All the measurements (Sharpe, Jensen, and Treynor) show that Portfolio G is a superior portfolio. It is also found that the composition (stock weights) of the optimum portfolio (G) can be used to…
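
    For reference, the three performance measures used in the study, written out under their usual definitions (rf the per-period risk-free rate, rm the market return, beta from a market regression). The numbers below are illustrative, not the paper's LQ 45 results.

      # Sketch: Sharpe, Treynor and Jensen performance measures.
      import numpy as np

      def sharpe(rp, rf):          return (rp.mean() - rf) / rp.std(ddof=1)
      def treynor(rp, rf, beta):   return (rp.mean() - rf) / beta
      def jensen_alpha(rp, rm, rf):
          beta = np.cov(rp, rm, ddof=1)[0, 1] / rm.var(ddof=1)
          return rp.mean() - (rf + beta * (rm.mean() - rf)), beta

      rng = np.random.default_rng(11)
      rm = rng.normal(0.002, 0.02, 260)                  # weekly market returns
      rp = 0.0005 + 1.1 * rm + rng.normal(0, 0.01, 260)  # a portfolio with alpha
      alpha, beta = jensen_alpha(rp, rm, rf=0.0005)
      print(f"Sharpe={sharpe(rp, 0.0005):.3f}  Treynor={treynor(rp, 0.0005, beta):.4f}  alpha={alpha:.5f}")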

  16. Mean-variance portfolio selection and efficient frontier for defined contribution pension schemes

    DEFF Research Database (Denmark)

    Højgaard, Bjarne; Vigna, Elena

    We solve a mean-variance portfolio selection problem in the accumulation phase of a defined contribution pension scheme. The efficient frontier, which is found for the 2 asset case as well as the n + 1 asset case, gives the member the possibility to decide his own risk/reward profile. The mean-variance approach is then compared with other investment strategies adopted in DC pension schemes; in particular, the target-based approach can be formulated as a mean-variance optimization problem. It is shown that the corresponding mean and variance of the final fund belong to the efficient frontier, and also the opposite, that each point on the efficient frontier corresponds to a target-based optimization problem. Furthermore, numerical results indicate that the largely adopted lifestyle strategy seems to be very far from being efficient in the mean-variance setting.

  17. A Random Parameter Model for Continuous-Time Mean-Variance Asset-Liability Management

    Directory of Open Access Journals (Sweden)

    Hui-qiang Ma

    2015-01-01

    We consider a continuous-time mean-variance asset-liability management problem in a market with random market parameters; that is, the interest rate, appreciation rates, and volatility rates are considered to be stochastic processes. By using the theories of stochastic linear-quadratic (LQ) optimal control and backward stochastic differential equations (BSDEs), we tackle this problem and derive optimal investment strategies as well as the mean-variance efficient frontier analytically in terms of the solutions of BSDEs. We find that the efficient frontier is still a parabola in a market with random parameters. Compared with the existing results, we also find that the liability does not affect the feasibility of the mean-variance portfolio selection problem. However, in an incomplete market with random parameters, the liability cannot be fully hedged.

  18. Mean-variance portfolio selection and efficient frontier for defined contribution pension schemes

    OpenAIRE

    Hoejgaard, B.; Vigna, E.

    2007-01-01

    We solve a mean-variance portfolio selection problem in the accumulation phase of a defined contribution pension scheme. The efficient frontier, which is found for the 2 asset case as well as the n + 1 asset case, gives the member the possibility to decide his own risk/reward profile. The mean-variance approach is then compared to other investment strategies adopted in DC pension schemes, namely the target-based approach and the lifestyle strategy. The comparison is done both in a theoretical...

  19. Mean-Variance Hedging on Uncertain Time Horizon in a Market with a Jump

    International Nuclear Information System (INIS)

    Kharroubi, Idris; Lim, Thomas; Ngoupeyou, Armand

    2013-01-01

    In this work, we study the problem of mean-variance hedging with a random horizon T∧τ, where T is a deterministic constant and τ is a jump time of the underlying asset price process. We first formulate this problem as a stochastic control problem and relate it to a system of BSDEs with a jump. We then provide a verification theorem which gives the optimal strategy for the mean-variance hedging using the solution of the previous system of BSDEs. Finally, we prove that this system of BSDEs admits a solution via a decomposition approach coming from filtration enlargement theory

  20. Mean-Variance Hedging on Uncertain Time Horizon in a Market with a Jump

    Energy Technology Data Exchange (ETDEWEB)

    Kharroubi, Idris, E-mail: kharroubi@ceremade.dauphine.fr [Université Paris Dauphine, CEREMADE, CNRS UMR 7534 (France); Lim, Thomas, E-mail: lim@ensiie.fr [Université d’Evry and ENSIIE, Laboratoire d’Analyse et Probabilités (France); Ngoupeyou, Armand, E-mail: armand.ngoupeyou@univ-paris-diderot.fr [Université Paris 7, Laboratoire de Probabilités et Modèles Aléatoires (France)

    2013-12-15

    In this work, we study the problem of mean-variance hedging with a random horizon T∧τ, where T is a deterministic constant and τ is a jump time of the underlying asset price process. We first formulate this problem as a stochastic control problem and relate it to a system of BSDEs with a jump. We then provide a verification theorem which gives the optimal strategy for the mean-variance hedging using the solution of the previous system of BSDEs. Finally, we prove that this system of BSDEs admits a solution via a decomposition approach coming from filtration enlargement theory.

  1. Mean-variance portfolio allocation with a value at risk constraint

    OpenAIRE

    Enrique Sentana

    2001-01-01

    In this Paper, I first provide a simple unifying approach to static Mean-Variance analysis and Value at Risk, which highlights their similarities and differences. Then I use it to explain how fund managers can take investment decisions that satisfy the VaR restrictions imposed on them by regulators, within the well-known Mean-Variance allocation framework. I do so by introducing a new type of line to the usual mean-standard deviation diagram, called IsoVaR,which represents all the portfolios ...

  2. Mean-Variance Portfolio Selection with a Fixed Flow of Investment in ...

    African Journals Online (AJOL)

    We consider a mean-variance portfolio selection problem for a fixed flow of investment in a continuous time framework. We consider a market structure that is characterized by a cash account, an indexed bond and a stock. We obtain the expected optimal terminal wealth for the investor. We also obtain a closed-form ...

  3. On the Computation of Optimal Monotone Mean-Variance Portfolios via Truncated Quadratic Utility

    OpenAIRE

    Ales Cerný; Fabio Maccheroni; Massimo Marinacci; Aldo Rustichini

    2008-01-01

    We report a surprising link between optimal portfolios generated by a special type of variational preferences called divergence preferences (cf. [8]) and optimal portfolios generated by classical expected utility. As a special case we connect optimization of truncated quadratic utility (cf. [2]) to the optimal monotone mean-variance portfolios (cf. [9]), thus simplifying the computation of the latter.

  4. Spectrally-Corrected Estimation for High-Dimensional Markowitz Mean-Variance Optimization

    NARCIS (Netherlands)

    Z. Bai (Zhidong); H. Li (Hua); M.J. McAleer (Michael); W.-K. Wong (Wing-Keung)

    2016-01-01

    This paper considers the portfolio problem for high dimensional data when the dimension and size are both large. We analyze the traditional Markowitz mean-variance (MV) portfolio by large dimension matrix theory, and find the spectral distribution of the sample covariance is the main...

  5. Mean-variance portfolio optimization with state-dependent risk aversion

    DEFF Research Database (Denmark)

    Bjoerk, Tomas; Murgoci, Agatha; Zhou, Xun Yu

    2014-01-01

    The objective of this paper is to study mean-variance portfolio optimization in continuous time. Since this problem is time-inconsistent, we attack it by placing the problem within a game-theoretic framework and looking for subgame perfect Nash equilibrium strategies. This particular problem has...

  6. A Mean-Variance Explanation of FDI Flows to Developing Countries

    DEFF Research Database (Denmark)

    Sunesen, Eva Rytter

    ...country to another. This will have implications for the way investors evaluate the return and risk of investing abroad. This paper utilises a simple mean-variance optimisation framework where global and regional factors capture the interdependence between countries. The model implies that FDI is driven...

  7. Mean-Coherent Risk and Mean-Variance Approaches in Portfolio Selection : An Empirical Comparison

    NARCIS (Netherlands)

    Polbennikov, S.Y.; Melenberg, B.

    2005-01-01

    We empirically analyze the implementation of coherent risk measures in portfolio selection. First, we compare optimal portfolios obtained through mean-coherent risk optimization with corresponding mean-variance portfolios. We find that, even for a typical portfolio of equities, the outcomes can be...

  8. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.

    Science.gov (United States)

    Dazard, Jean-Eudes; Rao, J Sunil

    2012-07-01

    The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common value-shrinkage estimators, or when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.
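
    The full procedure is implemented in the cited 'MVR' R package; as a rough illustration only (not the package's algorithm), the joint shrinkage idea can be sketched in Python. The shrinkage weights lam_mean and lam_var below are illustrative assumptions, not the adaptive, cluster-based estimators the paper derives.

      import numpy as np

      def regularized_t(X, lam_mean=0.3, lam_var=0.5):
          # X: (n samples) x (p variables). Shrink variable-wise means toward the
          # grand mean and variances toward the pooled variance, then form
          # regularized t-like statistics. Weights are illustrative, not adaptive.
          n, p = X.shape
          m = X.mean(axis=0)
          v = X.var(axis=0, ddof=1)
          m_shrunk = (1 - lam_mean) * m + lam_mean * m.mean()
          v_shrunk = (1 - lam_var) * v + lam_var * v.mean()
          return m_shrunk / np.sqrt(v_shrunk / n)

      rng = np.random.default_rng(0)
      stats = regularized_t(rng.normal(size=(10, 2000)))   # 2000 variables, 10 samples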

  9. Time-Consistent Strategies for a Multiperiod Mean-Variance Portfolio Selection Problem

    Directory of Open Access Journals (Sweden)

    Huiling Wu

    2013-01-01

    Full Text Available It remained prevalent in the past years to obtain the precommitment strategies for Markowitz's mean-variance portfolio optimization problems, but not much is known about their time-consistent strategies. This paper takes a step to investigate the time-consistent Nash equilibrium strategies for a multiperiod mean-variance portfolio selection problem. Under the assumption that the risk aversion is, respectively, a constant and a function of current wealth level, we obtain the explicit expressions for the time-consistent Nash equilibrium strategy and the equilibrium value function. Many interesting properties of the time-consistent results are identified through numerical sensitivity analysis and by comparing them with the classical pre-commitment solutions.

  10. Mean-Variance portfolio optimization when each asset has individual uncertain exit-time

    Directory of Open Access Journals (Sweden)

    Reza Keykhaei

    2016-12-01

    Full Text Available The standard Markowitz Mean-Variance optimization model is a single-period portfolio selection approach where the exit-time (or time-horizon) is deterministic. In this paper we study the Mean-Variance portfolio selection problem with uncertain exit-time, when each asset has an individual uncertain exit-time, which generalizes Markowitz's model. We provide some conditions under which the optimal portfolio of the generalized problem is independent of the exit-time distributions. Also, it is shown that under some general circumstances, the sets of optimal portfolios in the generalized model and the standard model are the same.

  11. Continuous-Time Mean-Variance Portfolio Selection: A Stochastic LQ Framework

    International Nuclear Information System (INIS)

    Zhou, X.Y.; Li, D.

    2000-01-01

    This paper is concerned with a continuous-time mean-variance portfolio selection model that is formulated as a bicriteria optimization problem. The objective is to maximize the expected terminal return and minimize the variance of the terminal wealth. By putting weights on the two criteria one obtains a single objective stochastic control problem which is however not in the standard form due to the variance term involved. It is shown that this nonstandard problem can be 'embedded' into a class of auxiliary stochastic linear-quadratic (LQ) problems. The stochastic LQ control model proves to be an appropriate and effective framework to study the mean-variance problem in light of the recent development on general stochastic LQ problems with indefinite control weighting matrices. This gives rise to the efficient frontier in a closed form for the original portfolio selection problem

  12. Mean-variance model for portfolio optimization with background risk based on uncertainty theory

    Science.gov (United States)

    Zhai, Jia; Bai, Manying

    2018-04-01

    The aim of this paper is to develop a mean-variance model for portfolio optimization considering background risk, liquidity and transaction costs, based on uncertainty theory. In the portfolio selection problem, returns of securities and asset liquidity are treated as uncertain variables because of incidents or a lack of historical data, which are common in economic and social environments. We provide crisp forms of the model and a hybrid intelligent algorithm to solve it. Under a mean-variance framework, we analyze the portfolio frontier characteristics considering independently additive background risk. In addition, we discuss some effects of background risk and the liquidity constraint on portfolio selection. Finally, we demonstrate the proposed models by numerical simulations.

  13. Mean-variance portfolio selection for defined-contribution pension funds with stochastic salary.

    Science.gov (United States)

    Zhang, Chubing

    2014-01-01

    This paper focuses on a continuous-time dynamic mean-variance portfolio selection problem of defined-contribution pension funds with stochastic salary, whose risk comes from both financial market and nonfinancial market. By constructing a special Riccati equation as a continuous (actually a viscosity) solution to the HJB equation, we obtain an explicit closed form solution for the optimal investment portfolio as well as the efficient frontier.

  14. Mean-Variance Portfolio Selection for Defined-Contribution Pension Funds with Stochastic Salary

    Directory of Open Access Journals (Sweden)

    Chubing Zhang

    2014-01-01

    Full Text Available This paper focuses on a continuous-time dynamic mean-variance portfolio selection problem of defined-contribution pension funds with stochastic salary, whose risk comes from both financial market and nonfinancial market. By constructing a special Riccati equation as a continuous (actually a viscosity) solution to the HJB equation, we obtain an explicit closed form solution for the optimal investment portfolio as well as the efficient frontier.

  15. Non-Linear Transaction Costs Inclusion in Mean-Variance Optimization

    Directory of Open Access Journals (Sweden)

    Christian Johannes Zimmer

    2005-12-01

    Full Text Available In this article we propose a new way to include transaction costs in mean-variance portfolio optimization. We consider brokerage fees, the bid/ask spread and the market impact of the trade. A pragmatic algorithm is proposed, which approximates the optimal portfolio, and we show that it converges in the absence of restrictions. Using Brazilian financial market data we compare our approximation algorithm with the results of a non-linear optimizer.
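
    The authors' approximation algorithm is not reproduced here; a minimal sketch of the underlying problem, under assumed cost parameters, is to optimize the cost-adjusted mean-variance objective directly with a generic solver. The fee and market-impact terms below are illustrative stand-ins for brokerage fees, spread and impact.

      import numpy as np
      from scipy.optimize import minimize

      mu = np.array([0.08, 0.05, 0.06])            # expected returns (illustrative)
      Sigma = np.array([[0.04, 0.01, 0.00],
                        [0.01, 0.03, 0.01],
                        [0.00, 0.01, 0.05]])
      w0 = np.full(3, 1 / 3)                       # current holdings
      lam, fee, impact = 3.0, 0.002, 0.01          # risk aversion and cost levels (assumed)

      def objective(w):
          trade = np.abs(w - w0)
          # Linear fees plus a concave-power market-impact term make the cost non-linear.
          cost = fee * trade.sum() + impact * (trade ** 1.5).sum()
          return -(mu @ w - lam * w @ Sigma @ w - cost)

      res = minimize(objective, w0, bounds=[(0, 1)] * 3,
                     constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
      print(res.x)                                 # cost-aware portfolio weights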

  16. Mean-Variance Portfolio Selection for Defined-Contribution Pension Funds with Stochastic Salary

    OpenAIRE

    Chubing Zhang

    2014-01-01

    This paper focuses on a continuous-time dynamic mean-variance portfolio selection problem of defined-contribution pension funds with stochastic salary, whose risk comes from both financial market and nonfinancial market. By constructing a special Riccati equation as a continuous (actually a viscosity) solution to the HJB equation, we obtain an explicit closed form solution for the optimal investment portfolio as well as the efficient frontier.

  17. Mean-Variance Portfolio Selection for Defined-Contribution Pension Funds with Stochastic Salary

    Science.gov (United States)

    Zhang, Chubing

    2014-01-01

    This paper focuses on a continuous-time dynamic mean-variance portfolio selection problem of defined-contribution pension funds with stochastic salary, whose risk comes from both financial market and nonfinancial market. By constructing a special Riccati equation as a continuous (actually a viscosity) solution to the HJB equation, we obtain an explicit closed form solution for the optimal investment portfolio as well as the efficient frontier. PMID:24782667

  18. Mean--variance portfolio optimization when means and covariances are unknown

    OpenAIRE

    Tze Leung Lai; Haipeng Xing; Zehao Chen

    2011-01-01

    Markowitz's celebrated mean--variance portfolio optimization theory assumes that the means and covariances of the underlying asset returns are known. In practice, they are unknown and have to be estimated from historical data. Plugging the estimates into the efficient frontier that assumes known parameters has led to portfolios that may perform poorly and have counter-intuitive asset allocation weights; this has been referred to as the "Markowitz optimization enigma." After reviewing differen...

  19. Neurobiological studies of risk assessment: a comparison of expected utility and mean-variance approaches.

    Science.gov (United States)

    D'Acremont, Mathieu; Bossaerts, Peter

    2008-12-01

    When modeling valuation under uncertainty, economists generally prefer expected utility because it has an axiomatic foundation, meaning that the resulting choices will satisfy a number of rationality requirements. In expected utility theory, values are computed by multiplying probabilities of each possible state of nature by the payoff in that state and summing the results. The drawback of this approach is that all state probabilities need to be dealt with separately, which becomes extremely cumbersome when it comes to learning. Finance academics and professionals, however, prefer to value risky prospects in terms of a trade-off between expected reward and risk, where the latter is usually measured in terms of reward variance. This mean-variance approach is fast and simple and greatly facilitates learning, but it impedes assigning values to new gambles on the basis of those of known ones. To date, it is unclear whether the human brain computes values in accordance with expected utility theory or with mean-variance analysis. In this article, we discuss the theoretical and empirical arguments that favor one or the other theory. We also propose a new experimental paradigm that could determine whether the human brain follows the expected utility or the mean-variance approach. Behavioral results of implementation of the paradigm are discussed.
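
    The contrast between the two valuation rules is easy to make concrete. A minimal sketch, with an assumed CARA utility and an assumed risk weight, values the same three-state gamble both ways:

      import numpy as np

      probs = np.array([0.5, 0.3, 0.2])            # state probabilities (illustrative)
      payoff = np.array([10.0, 0.0, -5.0])

      # Expected utility: weight each state's utility by its probability.
      u = lambda x: 1 - np.exp(-0.1 * x)           # CARA utility, risk aversion 0.1 (assumed)
      eu_value = probs @ u(payoff)

      # Mean-variance: trade expected reward off against payoff variance.
      mean = probs @ payoff
      var = probs @ (payoff - mean) ** 2
      mv_value = mean - 0.5 * 0.1 * var            # 0.1 is an assumed risk weight

    Note how the mean-variance rule needs only two summary statistics, which is what makes learning easy, while expected utility touches every state probability separately.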

  20. Regularization by fractional filter methods and data smoothing

    International Nuclear Information System (INIS)

    Klann, E; Ramlau, R

    2008-01-01

    This paper is concerned with the regularization of linear ill-posed problems by a combination of data smoothing and fractional filter methods. For the data smoothing, a wavelet shrinkage denoising is applied to the noisy data with known error level δ. For the reconstruction, an approximation to the solution of the operator equation is computed from the data estimate by fractional filter methods. These fractional methods are based on the classical Tikhonov and Landweber method, but avoid, at least partially, the well-known drawback of oversmoothing. Convergence rates as well as numerical examples are presented
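
    A minimal sketch of the two-stage scheme, assuming PyWavelets is available for the shrinkage step; the filter-factor exponent gamma below is one simple way to weaken the classical Tikhonov damping and is an assumption, not the paper's exact fractional filter.

      import numpy as np
      import pywt                                   # PyWavelets, assumed available

      def fractional_tikhonov(A, y_noisy, alpha=1e-2, gamma=0.7, delta=0.05):
          # Stage 1: wavelet shrinkage denoising at (known) noise level delta.
          coeffs = pywt.wavedec(y_noisy, 'db4', mode='periodization')
          coeffs = [coeffs[0]] + [pywt.threshold(c, delta, mode='soft') for c in coeffs[1:]]
          y_est = pywt.waverec(coeffs, 'db4', mode='periodization')[: len(y_noisy)]
          # Stage 2: SVD filter factors; gamma = 1 recovers classical Tikhonov,
          # gamma < 1 damps less and hence oversmooths less.
          U, s, Vt = np.linalg.svd(A, full_matrices=False)
          filt = (s ** 2 / (s ** 2 + alpha)) ** gamma
          return Vt.T @ (filt / s * (U.T @ y_est))

      n = 128
      A = np.tril(np.ones((n, n))) / n              # toy ill-posed integration operator
      x_true = np.sin(np.linspace(0, 3, n))
      y = A @ x_true + 0.01 * np.random.default_rng(1).normal(size=n)
      x_rec = fractional_tikhonov(A, y)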

  1. A Non-smooth Newton Method for Multibody Dynamics

    International Nuclear Information System (INIS)

    Erleben, K.; Ortiz, R.

    2008-01-01

    In this paper we deal with the simulation of rigid bodies. Rigid body dynamics have become very important for simulating rigid body motion in interactive applications, such as computer games or virtual reality. We present a novel way of computing contact forces using a Newton method. The contact problem is reformulated as a system of non-linear and non-smooth equations, and we solve this system using a non-smooth version of Newton's method. One of the main contributions of this paper is the reformulation of the complementarity problems, used to model impacts, as a system of equations that can be solved using traditional methods.
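
    The paper's contact formulation is not reproduced here, but the core numerical idea - a semismooth Newton method on a non-smooth reformulation of a complementarity problem - can be sketched on a small linear complementarity problem (LCP) using the Fischer-Burmeister function. The toy matrix and vector are assumptions for illustration.

      import numpy as np

      def fb_newton(M, q, tol=1e-10, iters=50):
          # Solve the LCP  w = M z + q,  z >= 0,  w >= 0,  z'w = 0  via Newton's
          # method on phi(z, w) = sqrt(z^2 + w^2) - z - w = 0 (non-smooth at 0).
          z = np.zeros(len(q))
          for _ in range(iters):
              w = M @ z + q
              r = np.sqrt(z ** 2 + w ** 2)
              F = r - z - w
              if np.linalg.norm(F) < tol:
                  break
              r = np.maximum(r, 1e-12)              # safeguard at the kink
              J = np.diag(z / r - 1.0) + np.diag(w / r - 1.0) @ M   # generalized Jacobian
              z = z - np.linalg.solve(J, F)
          return z

      M = np.array([[2.0, 1.0], [1.0, 2.0]])        # toy contact-like LCP (illustrative)
      q = np.array([-1.0, -1.0])
      print(fb_newton(M, q))                        # expected solution: [1/3, 1/3]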

  2. Continuous-Time Mean-Variance Portfolio Selection with Random Horizon

    International Nuclear Information System (INIS)

    Yu, Zhiyong

    2013-01-01

    This paper examines the continuous-time mean-variance optimal portfolio selection problem with random market parameters and random time horizon. Treating this problem as a linearly constrained stochastic linear-quadratic optimal control problem, I explicitly derive the efficient portfolios and efficient frontier in closed forms based on the solutions of two backward stochastic differential equations. Some related issues such as a minimum variance portfolio and a mutual fund theorem are also addressed. All the results are markedly different from those in the problem with deterministic exit time. A key part of my analysis involves proving the global solvability of a stochastic Riccati equation, which is interesting in its own right

  3. Continuous-Time Mean-Variance Portfolio Selection with Random Horizon

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Zhiyong, E-mail: yuzhiyong@sdu.edu.cn [Shandong University, School of Mathematics (China)

    2013-12-15

    This paper examines the continuous-time mean-variance optimal portfolio selection problem with random market parameters and random time horizon. Treating this problem as a linearly constrained stochastic linear-quadratic optimal control problem, I explicitly derive the efficient portfolios and efficient frontier in closed forms based on the solutions of two backward stochastic differential equations. Some related issues such as a minimum variance portfolio and a mutual fund theorem are also addressed. All the results are markedly different from those in the problem with deterministic exit time. A key part of my analysis involves proving the global solvability of a stochastic Riccati equation, which is interesting in its own right.

  4. Risk-Sensitive and Mean Variance Optimality in Markov Decision Processes

    Czech Academy of Sciences Publication Activity Database

    Sladký, Karel

    2013-01-01

    Roč. 7, č. 3 (2013), s. 146-161 ISSN 0572-3043 R&D Projects: GA ČR GAP402/10/0956; GA ČR GAP402/11/0150 Grant - others: AVČR and CONACyT (CZ) 171396 Institutional support: RVO:67985556 Keywords: Discrete-time Markov decision chains * exponential utility functions * certainty equivalent * mean-variance optimality * connections between risk-sensitive and risk-neutral models Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2013/E/sladky-0399099.pdf

  5. Robust Markowitz mean-variance portfolio selection under ambiguous covariance matrix

    OpenAIRE

    Ismail, Amine; Pham, Huyên

    2016-01-01

    This paper studies a robust continuous-time Markowitz portfolio selection problem where the model uncertainty carries on the covariance matrix of multiple risky assets. This problem is formulated into a min-max mean-variance problem over a set of non-dominated probability measures that is solved by a McKean-Vlasov dynamic programming approach, which allows us to characterize the solution in terms of a Bellman-Isaacs equation in the Wasserstein space of probability measures. We provide expli...

  6. Multiple predictor smoothing methods for sensitivity analysis: Description of techniques

    International Nuclear Information System (INIS)

    Storlie, Curtis B.; Helton, Jon C.

    2008-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. Then, in the second and concluding part of this presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present
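
    Of the four techniques, LOESS is the simplest to demonstrate; a minimal sketch using the implementation in statsmodels (assumed available) smooths a noisy, nonlinear input-output relationship of the kind linear regression would miss:

      import numpy as np
      from statsmodels.nonparametric.smoothers_lowess import lowess

      rng = np.random.default_rng(2)
      x = rng.uniform(0, 10, 200)                    # sampled model input
      y = np.sin(x) + 0.1 * x ** 2 + rng.normal(0, 0.5, 200)   # nonlinear model response
      fit = lowess(y, x, frac=0.3)                   # locally weighted regression (LOESS)
      # fit[:, 0] holds sorted inputs, fit[:, 1] the smoothed predictions; comparing
      # the spread explained by the curve against the residual spread indicates how
      # strongly this input drives the model output.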

  7. Multiple predictor smoothing methods for sensitivity analysis: Example results

    International Nuclear Information System (INIS)

    Storlie, Curtis B.; Helton, Jon C.

    2008-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described in the first part of this presentation: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. In this, the second and concluding part of the presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present

  8. Excluded-Mean-Variance Neural Decision Analyzer for Qualitative Group Decision Making

    Directory of Open Access Journals (Sweden)

    Ki-Young Song

    2012-01-01

    Full Text Available Many qualitative group decisions in professional fields such as law, engineering, economics, psychology, and medicine that appear to be crisp and certain are in reality shrouded in fuzziness as a result of uncertain environments and the nature of human cognition within which the group decisions are made. In this paper we introduce an innovative approach to group decision making in uncertain situations by using a mean-variance neural approach. The key idea of this proposed approach is to compute the excluded mean of individual evaluations and weight it by applying a variance influence function (VIF); this process of weighting the excluded mean by the VIF provides an improved result in group decision making. In this paper, a case study with the proposed excluded-mean-variance approach is also presented. The results of this case study indicate that this proposed approach can improve the effectiveness of qualitative decision making by providing the decision maker with a new cognitive tool to assist in the reasoning process.
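
    The paper's exact variance influence function is not reproduced here; a minimal sketch under an assumed Gaussian-shaped VIF shows the mechanics: each evaluator's score is excluded in turn, and exclusions that remove a discordant score (low leave-one-out variance) receive more weight.

      import numpy as np

      def excluded_mean_variance(scores, sigma0=1.0):
          # Excluded (leave-one-out) means, weighted by a variance influence
          # function. The Gaussian VIF form and sigma0 are assumptions.
          s = np.asarray(scores, float)
          n = s.size
          excl_mean = (s.sum() - s) / (n - 1)
          excl_var = np.array([np.var(np.delete(s, i)) for i in range(n)])
          vif = np.exp(-excl_var / (2 * sigma0 ** 2))
          return (vif * excl_mean).sum() / vif.sum()

      # The exclusion that drops the outlier 2 has the lowest variance and hence
      # the largest weight, pulling the aggregate toward the consensus of 6-9.
      print(excluded_mean_variance([7, 8, 6, 9, 2]))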

  9. Risk-sensitivity and the mean-variance trade-off: decision making in sensorimotor control.

    Science.gov (United States)

    Nagengast, Arne J; Braun, Daniel A; Wolpert, Daniel M

    2011-08-07

    Numerous psychophysical studies suggest that the sensorimotor system chooses actions that optimize the average cost associated with a movement. Recently, however, violations of this hypothesis have been reported in line with economic theories of decision-making that not only consider the mean payoff, but are also sensitive to risk, that is, the variability of the payoff. Here, we examine the hypothesis that risk-sensitivity in sensorimotor control arises as a mean-variance trade-off in movement costs. We designed a motor task in which participants could choose between a sure motor action that resulted in a fixed amount of effort and a risky motor action that resulted in a variable amount of effort that could be either lower or higher than the fixed effort. By changing the mean effort of the risky action while experimentally fixing its variance, we determined indifference points at which participants chose equiprobably between the sure, fixed amount of effort option and the risky, variable effort option. Depending on whether participants accepted a variable effort with a mean that was higher than, lower than or equal to the fixed effort, they could be classified as risk-seeking, risk-averse or risk-neutral. Most subjects were risk-sensitive in our task, consistent with a mean-variance trade-off in effort, thereby underlining the importance of risk-sensitivity in computational models of sensorimotor control.

  10. Simulating water hammer with corrective smoothed particle method

    NARCIS (Netherlands)

    Hou, Q.; Kruisbrink, A.C.H.; Tijsseling, A.S.; Keramat, A.

    2012-01-01

    The corrective smoothed particle method (CSPM) is used to simulate water hammer. The spatial derivatives in the water-hammer equations are approximated by a corrective kernel estimate. For the temporal derivatives, the Euler-forward time integration algorithm is employed. The CSPM results are in...

  11. Comparative Analysis for Robust Penalized Spline Smoothing Methods

    Directory of Open Access Journals (Sweden)

    Bin Wang

    2014-01-01

    Full Text Available Smoothing noisy data is commonly encountered in the engineering domain, and robust penalized regression spline models are currently perceived to be the most promising methods for coping with this issue, due to their flexibility in capturing the nonlinear trends in the data and effectively alleviating the disturbance from outliers. Against such a background, this paper conducts a thorough comparative analysis of two popular robust smoothing techniques, the M-type estimator and S-estimation for penalized regression splines, both of which are re-elaborated starting from their origins, with their derivation process reformulated and the corresponding algorithms reorganized under a unified framework. Performances of these two estimators are thoroughly evaluated in terms of fitting accuracy, robustness, and execution time on the MATLAB platform. Elaborate comparative experiments demonstrate that robust penalized spline smoothing methods possess the capability of resistance to the noise effect compared with the nonrobust penalized LS spline regression method. Furthermore, the M-estimator exerts stable performance only for observations with moderate perturbation error, whereas the S-estimator behaves fairly well even for heavily contaminated observations, but consumes more execution time. These findings can serve as guidance for the selection of the appropriate approach for smoothing noisy data.
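
    As a compact stand-in for the paper's penalized-spline M-estimator (the spline basis is replaced here by a Whittaker-type difference penalty, an assumption made to keep the sketch short), robust smoothing by iteratively reweighted penalized least squares with Huber weights looks as follows:

      import numpy as np

      def robust_smooth(y, lam=100.0, c=1.345, iters=20):
          # Minimize sum_i w_i (y_i - z_i)^2 + lam * ||D2 z||^2, re-deriving the
          # Huber weights w_i from the residuals at each iteration (M-type scheme).
          n = len(y)
          D = np.diff(np.eye(n), 2, axis=0)          # second-order difference penalty
          w = np.ones(n)
          for _ in range(iters):
              z = np.linalg.solve(np.diag(w) + lam * D.T @ D, w * y)
              r = y - z
              scale = np.median(np.abs(r)) / 0.6745 + 1e-12          # robust MAD scale
              w = np.minimum(1.0, c * scale / (np.abs(r) + 1e-12))   # Huber weights
          return z

      rng = np.random.default_rng(3)
      y = np.sin(np.linspace(0, 6, 300)) + rng.normal(0, 0.1, 300)
      y[::37] += 5.0                                 # heavy outliers
      z = robust_smooth(y)                           # outliers are down-weighted, not chased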

  12. A new media optimizer based on the mean-variance model

    Directory of Open Access Journals (Sweden)

    Pedro Jesus Fernandez

    2007-01-01

    Full Text Available In the financial markets, there is a well established portfolio optimization model called the generalized mean-variance model (or generalized Markowitz model). This model considers that a typical investor, while expecting returns to be high, also expects returns to be as certain as possible. In this paper we introduce a new media optimization system based on the mean-variance model, a novel approach in media planning. After presenting the model in its full generality, we discuss possible advantages of the mean-variance paradigm, such as its flexibility in modeling the optimization problem, its ability to deal with many media performance indices - satisfying most of the media plan needs - and, most important, the property of diversifying the media portfolios in a natural way, without the need to set up ad hoc constraints to enforce diversification.
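
    In its simplest, unconstrained form, the mean-variance engine behind such an optimizer has the classical closed-form frontier solution; a hedged sketch with illustrative numbers (standing in for media performance indices and their covariances) is:

      import numpy as np

      mu = np.array([0.10, 0.07, 0.12])              # expected performance indices (illustrative)
      Sigma = np.array([[0.05, 0.01, 0.02],
                        [0.01, 0.04, 0.01],
                        [0.02, 0.01, 0.06]])
      inv, ones = np.linalg.inv(Sigma), np.ones(3)
      A, B, C = ones @ inv @ ones, ones @ inv @ mu, mu @ inv @ mu

      def frontier_weights(target):
          # Minimum-variance weights for a target mean (fully invested, no bounds).
          lam = (C - B * target) / (A * C - B ** 2)
          gam = (A * target - B) / (A * C - B ** 2)
          return inv @ (lam * ones + gam * mu)

      w = frontier_weights(0.09)
      print(w, w @ Sigma @ w)                        # weights and portfolio variance

    Note how the optimizer spreads weight across all three 'media' without any diversification constraint, which is the property the abstract emphasizes.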

  13. Mean-Variance portfolio optimization by using non constant mean and volatility based on the negative exponential utility function

    Science.gov (United States)

    Soeryana, Endang; Halim, Nurfadhlina Bt Abdul; Sukono, Rusyaman, Endang; Supian, Sudradjat

    2017-03-01

    In stock investments, investors are also faced with the issue of risk, because daily stock prices fluctuate. To minimize the level of risk, investors usually form an investment portfolio. Establishing a portfolio consisting of several stocks is intended to obtain the optimal composition of the investment portfolio. This paper discusses Mean-Variance optimization of an investment portfolio of stocks using a non-constant mean and volatility, based on the negative exponential utility function. The non-constant mean is analyzed using an Autoregressive Moving Average (ARMA) model, while the non-constant volatility is analyzed using a Generalized Autoregressive Conditional Heteroscedastic (GARCH) model. The optimization process is performed using the Lagrangian multiplier technique. As a numerical illustration, the method is used to analyze some stocks in Indonesia. The expected result is to obtain the proportion of investment in each stock analyzed.
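
    A minimal sketch of the estimation step, assuming the 'arch' package is available (the Lagrangian portfolio step is omitted): fit an AR mean with GARCH(1,1) volatility to one stock's returns and extract the one-step-ahead conditional mean and variance that would feed the mean-variance optimization. The simulated returns are placeholders for the Indonesian stock data.

      import numpy as np
      from arch import arch_model                    # assumed available

      rng = np.random.default_rng(4)
      returns = 100 * rng.normal(0, 0.01, 1000)      # placeholder daily returns, in percent

      # AR(1) conditional mean with GARCH(1,1) conditional volatility.
      res = arch_model(returns, mean='AR', lags=1, vol='GARCH', p=1, q=1).fit(disp='off')
      fc = res.forecast(horizon=1)
      mu_hat = fc.mean.iloc[-1, 0]                   # one-step-ahead conditional mean
      var_hat = fc.variance.iloc[-1, 0]              # one-step-ahead conditional variance
      # Repeating this per stock yields the inputs (means, variances and covariances)
      # for the Lagrangian mean-variance step.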

  14. Market efficiency of oil spot and futures: A mean-variance and stochastic dominance approach

    International Nuclear Information System (INIS)

    Lean, Hooi Hooi; McAleer, Michael; Wong, Wing-Keung

    2010-01-01

    This paper examines the market efficiency of oil spot and futures prices by using both mean-variance (MV) and stochastic dominance (SD) approaches. Based on the West Texas Intermediate crude oil data for the sample period 1989-2008, we find no evidence of any MV and SD relationships between oil spot and futures indices. This implies that there is no arbitrage opportunity between these two markets, spot and futures do not dominate one another, investors are indifferent to investing spot or futures, and the spot and futures oil markets are efficient and rational. The empirical findings are robust to each sub-period before and after the crises for different crises, and also to portfolio diversification.

  15. Quantifying Systemic Risk by Solutions of the Mean-Variance Risk Model.

    Directory of Open Access Journals (Sweden)

    Jan Jurczyk

    Full Text Available The world is still recovering from the financial crisis that peaked in September 2008. The triggering event was the bankruptcy of Lehman Brothers. To detect such turmoils, one can investigate the time-dependent behaviour of correlations between assets or indices. These cross-correlations have been connected to the systemic risks within markets by several studies in the aftermath of this crisis. We study 37 different US indices which cover almost all aspects of the US economy and show that monitoring an average investor's behaviour can be used to quantify times of increased risk. In this paper the overall investing strategy is approximated by the ground-states of the mean-variance model along the efficient frontier bound to real-world constraints. Changes in the behaviour of the average investor are utilized as an early warning sign.

  16. Time Consistent Strategies for Mean-Variance Asset-Liability Management Problems

    Directory of Open Access Journals (Sweden)

    Hui-qiang Ma

    2013-01-01

    Full Text Available This paper studies the optimal time-consistent investment strategies in multiperiod asset-liability management problems under the mean-variance criterion. By applying the time-consistent model of Chen et al. (2013) and employing the dynamic programming technique, we derive two time-consistent policies for asset-liability management problems in a market with and without a riskless asset, respectively. We show that the presence of liability does affect the optimal strategy. More specifically, liability leads to a parallel shift of the optimal time-consistent investment policy. Moreover, for an arbitrarily risk-averse investor (under the variance criterion) with liability, the time-diversification effects could be ignored in a market with a riskless asset; however, they should be considered in a market without any riskless asset.

  17. Fuel mix diversification incentives in liberalized electricity markets: A Mean-Variance Portfolio theory approach

    International Nuclear Information System (INIS)

    Roques, Fabien A.; Newbery, David M.; Nuttall, William J.

    2008-01-01

    Monte Carlo simulations of gas, coal and nuclear plant investment returns are used as inputs of a Mean-Variance Portfolio optimization to identify optimal base load generation portfolios for large electricity generators in liberalized electricity markets. We study the impact of fuel, electricity, and CO2 price risks and their degree of correlation on optimal plant portfolios. High degrees of correlation between gas and electricity prices - as observed in most European markets - reduce gas plant risks and make portfolios dominated by gas plant more attractive. Long-term power purchase contracts and/or a lower cost of capital can rebalance optimal portfolios towards more diversified portfolios with larger shares of nuclear and coal plants

  18. Quantifying Systemic Risk by Solutions of the Mean-Variance Risk Model.

    Science.gov (United States)

    Jurczyk, Jan; Eckrot, Alexander; Morgenstern, Ingo

    2016-01-01

    The world is still recovering from the financial crisis that peaked in September 2008. The triggering event was the bankruptcy of Lehman Brothers. To detect such turmoils, one can investigate the time-dependent behaviour of correlations between assets or indices. These cross-correlations have been connected to the systemic risks within markets by several studies in the aftermath of this crisis. We study 37 different US indices which cover almost all aspects of the US economy and show that monitoring an average investor's behaviour can be used to quantify times of increased risk. In this paper the overall investing strategy is approximated by the ground-states of the mean-variance model along the efficient frontier bound to real-world constraints. Changes in the behaviour of the average investor are utilized as an early warning sign.

  19. Market efficiency of oil spot and futures: A mean-variance and stochastic dominance approach

    Energy Technology Data Exchange (ETDEWEB)

    Lean, Hooi Hooi [Economics Program, School of Social Sciences, Universiti Sains Malaysia (Malaysia); McAleer, Michael [Econometric Institute, Erasmus School of Economics, Erasmus University Rotterdam, and, Tinbergen Institute (Netherlands); Wong, Wing-Keung, E-mail: awong@hkbu.edu.h [Department of Economics, Hong Kong Baptist University (Hong Kong)

    2010-09-15

    This paper examines the market efficiency of oil spot and futures prices by using both mean-variance (MV) and stochastic dominance (SD) approaches. Based on the West Texas Intermediate crude oil data for the sample period 1989-2008, we find no evidence of any MV and SD relationships between oil spot and futures indices. This implies that there is no arbitrage opportunity between these two markets, spot and futures do not dominate one another, investors are indifferent to investing spot or futures, and the spot and futures oil markets are efficient and rational. The empirical findings are robust to each sub-period before and after the crises for different crises, and also to portfolio diversification.

  20. Market efficiency of oil spot and futures. A mean-variance and stochastic dominance approach

    Energy Technology Data Exchange (ETDEWEB)

    Lean, Hooi Hooi [Economics Program, School of Social Sciences, Universiti Sains Malaysia (Malaysia); McAleer, Michael [Econometric Institute, Erasmus School of Economics, Erasmus University Rotterdam (Netherlands); Wong, Wing-Keung [Department of Economics, Hong Kong Baptist University (China); Tinbergen Institute (Netherlands)

    2010-09-15

    This paper examines the market efficiency of oil spot and futures prices by using both mean-variance (MV) and stochastic dominance (SD) approaches. Based on the West Texas Intermediate crude oil data for the sample period 1989-2008, we find no evidence of any MV and SD relationships between oil spot and futures indices. This implies that there is no arbitrage opportunity between these two markets, spot and futures do not dominate one another, investors are indifferent to investing spot or futures, and the spot and futures oil markets are efficient and rational. The empirical findings are robust to each sub-period before and after the crises for different crises, and also to portfolio diversification. (author)

  1. Mean-Variance stochastic goal programming for sustainable mutual funds' portfolio selection.

    Directory of Open Access Journals (Sweden)

    García-Bernabeu, Ana

    2015-11-01

    Full Text Available Mean-Variance Stochastic Goal Programming models (MV-SGP) provide satisficing investment solutions in uncertain contexts. In this work, an MV-SGP model is proposed for portfolio selection which includes goals with regard to traditional and sustainable assets. The proposed approach is based on a two-step procedure. In the first step, sustainability and/or financial screens are applied to a set of assets (mutual funds) previously evaluated with TOPSIS to determine the opportunity set. In the second step, satisficing portfolios of assets are obtained using a Goal Programming approach. Two different goals are considered. The first goal reflects only the purely financial side of the target, while the second goal refers to the sustainability side. Absolute Risk Aversion (ARA) coefficients are estimated and incorporated into the investment decision making using two different approaches.

  2. Fuel mix diversification incentives in liberalized electricity markets: A Mean-Variance Portfolio theory approach

    Energy Technology Data Exchange (ETDEWEB)

    Roques, F.A.; Newbery, D.M.; Nuttall, W.J. [University of Cambridge, Cambridge (United Kingdom). Faculty of Economics

    2008-07-15

    Monte Carlo simulations of gas, coal and nuclear plant investment returns are used as inputs of a Mean-Variance Portfolio optimization to identify optimal base load generation portfolios for large electricity generators in liberalized electricity markets. We study the impact of fuel, electricity, and CO2 price risks and their degree of correlation on optimal plant portfolios. High degrees of correlation between gas and electricity prices - as observed in most European markets - reduce gas plant risks and make portfolios dominated by gas plant more attractive. Long-term power purchase contracts and/or a lower cost of capital can rebalance optimal portfolios towards more diversified portfolios with larger shares of nuclear and coal plants.

  3. International Diversification Versus Domestic Diversification: Mean-Variance Portfolio Optimization and Stochastic Dominance Approaches

    Directory of Open Access Journals (Sweden)

    Fathi Abid

    2014-05-01

    Full Text Available This paper applies the mean-variance portfolio optimization (PO approach and the stochastic dominance (SD test to examine preferences for international diversification versus domestic diversification from American investors’ viewpoints. Our PO results imply that the domestic diversification strategy dominates the international diversification strategy at a lower risk level and the reverse is true at a higher risk level. Our SD analysis shows that there is no arbitrage opportunity between international and domestic stock markets; domestically diversified portfolios with smaller risk dominate internationally diversified portfolios with larger risk and vice versa; and at the same risk level, there is no difference between the domestically and internationally diversified portfolios. Nonetheless, we cannot find any domestically diversified portfolios that stochastically dominate all internationally diversified portfolios, but we find some internationally diversified portfolios with small risk that dominate all the domestically diversified portfolios.

  4. A Method for Low-Delay Pitch Tracking and Smoothing

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll

    2012-01-01

    ...In the second step, a Kalman filter is used to smooth the estimates and separate the pitch into a slowly varying component and a rapidly varying component. The former represents the mean pitch while the latter represents vibrato, slides and other fast changes. The method is intended for use in applications that require fast and sample-by-sample estimates, like tuners for musical instruments, transcription tasks requiring details like vibrato, and real-time tracking of voiced speech.
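
    A minimal sketch of the smoothing step, with assumed noise variances: a scalar Kalman filter whose random-walk state tracks the slowly varying mean pitch, leaving vibrato and slides in the residual.

      import numpy as np

      def split_pitch(pitch, q=1e-4, r=1e-2):
          # q: process variance of the random-walk mean pitch; r: measurement
          # variance of the raw pitch estimates. Both values are assumptions.
          x, P = pitch[0], 1.0
          slow = np.empty_like(pitch)
          for t, y in enumerate(pitch):
              P += q                                  # predict (random-walk state)
              K = P / (P + r)                         # Kalman gain
              x += K * (y - x)                        # update with the new estimate
              P *= 1 - K
              slow[t] = x
          return slow, pitch - slow                   # (mean pitch, rapid component)

      t = np.linspace(0, 1, 500)
      pitch = 220 + 5 * t + 3 * np.sin(2 * np.pi * 6 * t)   # drift plus 6 Hz vibrato
      slow, fast = split_pitch(pitch)                 # runs sample by sample, low delay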

  5. Generalised method of moments tests of mean-variance efficiency of the Australian equity market

    OpenAIRE

    Lau, Silvana

    2017-01-01

    For many years the Capital Asset Pricing Model (CAPM) developed by Sharpe (1964) and Lintner (1965) was the primary asset pricing model of financial theory. Over time, persistent criticism regarding the strict assumptions underlying the model resulted in numerous extensions of the model. Each extension involved relaxing one or more of the underlying assumptions. Unfortunately, empirical tests of these extensions have not proven to be ultimately superior. Early tests of the CAPM faced many p...

  6. Identification of melanoma cells: a method based in mean variance of signatures via spectral densities.

    Science.gov (United States)

    Guerra-Rosas, Esperanza; Álvarez-Borrego, Josué; Angulo-Molina, Aracely

    2017-04-01

    In this paper, a new methodology is presented to detect and differentiate melanoma cells from normal cells through the averaged variances of 1D-signatures calculated with a binary mask. The sample images were obtained from histological sections of mouse melanoma tumors, 4 [Formula: see text] in thickness, contrasted with normal cells. The results show that melanoma cells present a well-defined range of averaged variance values obtained from the signatures in the four conditions used.

  7. Investigation on filter method for smoothing spiral phase plate

    Science.gov (United States)

    Zhang, Yuanhang; Wen, Shenglin; Luo, Zijian; Tang, Caixue; Yan, Hao; Yang, Chunlin; Liu, Mincai; Zhang, Qinghua; Wang, Jian

    2018-03-01

    A spiral phase plate (SPP) for generating vortex hollow beams has high efficiency in various applications. However, it is difficult to obtain an ideal spiral phase plate because of its continuously varying helical phase and discontinuous phase step. This paper describes the demonstration of a continuous spiral phase plate using filter methods. The numerical simulations indicate that different filter methods, including spatial-domain and frequency-domain filters, have distinct impacts on the surface topography of the SPP and the optical vortex characteristics. The experimental results reveal that the spatial Gaussian filter method for smoothing the SPP is suitable for the Computer Controlled Optical Surfacing (CCOS) technique and obtains good optical properties.

  8. DIFFERENCES BETWEEN MEAN-VARIANCE AND MEAN-CVAR PORTFOLIO OPTIMIZATION MODELS

    Directory of Open Access Journals (Sweden)

    Panna Miskolczi

    2016-07-01

    Full Text Available Everybody has already heard that one should not expect high returns without high risk, or safety without low returns. The goal of portfolio theory is to find the balance between maximizing the return and minimizing the risk. To do so we have to first understand and measure the risk. Naturally, a good risk measure has to satisfy several properties - in theory and in practice. Markowitz suggested using the variance as a risk measure in portfolio theory. This led to the so-called mean-variance model - for which Markowitz received the Nobel Prize in 1990. The model has been criticized because it is well suited for elliptical distributions but may lead to incorrect conclusions in the case of non-elliptical distributions. Since then many risk measures have been introduced, of which the Value at Risk (VaR) is the most widely used in recent years. Despite the widespread use of the Value at Risk there are some fundamental problems with it. It does not satisfy the subadditivity property and it ignores the severity of losses in the far tail of the profit-and-loss (P&L) distribution. Moreover, its non-convexity makes VaR impossible to use in optimization problems. To overcome these issues the Expected Shortfall (ES) was developed as a coherent risk measure. Expected Shortfall is also called Conditional Value at Risk (CVaR). Compared to Value at Risk, ES is more sensitive to the tail behaviour of the P&L distribution function. In the first part of the paper I state the definitions of these three risk measures. In the second part I deal with my main question: What happens if we replace the variance with the Expected Shortfall in the portfolio optimization process? Do we obtain different optimal portfolios as a solution? And thus, does the solution suggest deciding differently in the two cases? To answer these questions I analyse seven Hungarian stock exchange companies. First I use the mean-variance portfolio optimization model...
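
    The difference between the three risk measures already shows up on a simulated profit-and-loss sample; a minimal sketch of their empirical versions (the heavy-tailed sample and the 95% level are illustrative choices):

      import numpy as np

      rng = np.random.default_rng(5)
      pnl = rng.standard_t(df=3, size=10_000)        # heavy-tailed P&L sample
      alpha = 0.95

      variance = pnl.var()                           # Markowitz's risk measure
      var_95 = -np.quantile(pnl, 1 - alpha)          # VaR: the (1 - alpha) loss quantile
      es_95 = -pnl[pnl <= -var_95].mean()            # ES/CVaR: average loss beyond VaR
      # es_95 > var_95, because ES also measures how bad the far-tail losses are.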

  9. Model Optimisasi Portofolio Investasi Mean-Variance Tanpa dan Dengan Aset Bebas Risiko pada Saham Idx30 [Mean-Variance Investment Portfolio Optimization Models without and with a Risk-Free Asset on IDX30 Stocks]

    Directory of Open Access Journals (Sweden)

    Basuki Basuki

    2017-07-01

    Full Text Available In this paper, the Mean-Variance investment portfolio optimization model without a risk-free asset, known as Markowitz's basic model, is studied to obtain the optimum portfolio. Building on Markowitz's basic model, the Mean-Variance model with a risk-free asset is then examined. Both models are used to analyze the optimization of investment portfolios on several IDX30 stocks. In this paper it is assumed that a proportion of 10% is invested in a risk-free asset, namely a deposit yielding a return of 7% per year. Based on the analysis of the investment portfolios of the five selected stocks, the efficient surface of the Mean-Variance portfolio optimization with a risk-free asset lies above that of the Mean-Variance portfolio optimization without a risk-free asset. This shows that an investment portfolio combining a risk-free asset and risky assets is more profitable than an investment portfolio of risky assets only.

  10. A Mean-Variance Diagnosis of the Financial Crisis: International Diversification and Safe Havens

    Directory of Open Access Journals (Sweden)

    Alexander Eptas

    2010-12-01

    Full Text Available We use mean-variance analysis with short selling constraints to diagnose the effects of the recent global financial crisis by evaluating the potential benefits of international diversification in the search for ‘safe havens’. We use stock index data for a sample of developed, advanced-emerging and emerging countries. ‘Text-book’ results are obtained for the pre-crisis analysis with the optimal portfolio for any risk-averse investor being obtained as the tangency portfolio of the All-Country portfolio frontier. During the crisis there is a disjunction between bank lending and stock markets revealed by negative average returns and an absence of any empirical Capital Market Line. Israel and Colombia emerge as the safest havens for any investor during the crisis. For Israel this may reflect the protection afforded by special trade links and diaspora support, while for Colombia we speculate that this reveals the impact on world financial markets of the demand for cocaine.

  11. Firefly Algorithm for Cardinality Constrained Mean-Variance Portfolio Optimization Problem with Entropy Diversity Constraint

    Science.gov (United States)

Bacanin, Nebojsa; Tuba, Milan

    2014-01-01

    The portfolio optimization (selection) problem is an important and hard optimization problem that, with the addition of necessary realistic constraints, becomes computationally intractable. Nature-inspired metaheuristics are appropriate for solving such problems; however, a literature review shows that there are very few applications of nature-inspired metaheuristics to the portfolio optimization problem. This is especially true for swarm intelligence algorithms, which represent the newer branch of nature-inspired algorithms. No application of any swarm intelligence metaheuristic to the cardinality constrained mean-variance (CCMV) portfolio problem with entropy constraint was found in the literature. This paper introduces a modified firefly algorithm (FA) for the CCMV portfolio model with entropy constraint. The firefly algorithm is one of the latest and most successful swarm intelligence algorithms; however, it exhibits some deficiencies when applied to constrained problems. To overcome the lack of exploration power during early iterations, we modified the algorithm and tested it on standard portfolio benchmark data sets used in the literature. Our proposed modified firefly algorithm proved to be better than other state-of-the-art algorithms, while the introduction of the entropy diversity constraint further improved the results. PMID:24991645

  12. The benefit of regional diversification of cogeneration investments in Europe. A mean-variance portfolio analysis

    International Nuclear Information System (INIS)

    Westner, Guenther; Madlener, Reinhard

    2010-01-01

    The EU Directive 2004/8/EC, concerning the promotion of cogeneration, established principles on how EU member states can support combined heat and power generation (CHP). Up to now, the implementation of these principles into national law has not been uniform, and has led to the adoption of different promotion schemes for CHP across the EU member states. In this paper, we first give an overview of the promotion schemes for CHP in various European countries. In a next step, we take two standard CHP technologies, combined-cycle gas turbines (CCGT-CHP) and engine-CHP, and apply exemplarily four selected support mechanisms used in the four largest European energy markets: feed-in tariffs in Germany; energy efficiency certificates in Italy; benefits through tax reduction in the UK; and purchase obligations for power from CHP generation in France. For contracting companies, it could be of interest to diversify their investment in new CHP facilities regionally over several countries in order to reduce country and regulatory risk. By applying the Mean-Variance Portfolio (MVP) theory, we derive characteristic return-risk profiles of the selected CHP technologies in different countries. The results show that the returns on CHP investments differ significantly depending on the country, the support scheme, and the selected technology studied. While a regional diversification of investments in CCGT-CHP does not contribute to reducing portfolio risks, a diversification of investments in engine-CHP can decrease the risk exposure. (author)

  13. Firefly algorithm for cardinality constrained mean-variance portfolio optimization problem with entropy diversity constraint.

    Science.gov (United States)

    Bacanin, Nebojsa; Tuba, Milan

    2014-01-01

    The portfolio optimization (selection) problem is an important and hard optimization problem that, with the addition of necessary realistic constraints, becomes computationally intractable. Nature-inspired metaheuristics are appropriate for solving such problems; however, a literature review shows that there are very few applications of nature-inspired metaheuristics to the portfolio optimization problem. This is especially true for swarm intelligence algorithms, which represent the newer branch of nature-inspired algorithms. No application of any swarm intelligence metaheuristic to the cardinality constrained mean-variance (CCMV) portfolio problem with entropy constraint was found in the literature. This paper introduces a modified firefly algorithm (FA) for the CCMV portfolio model with entropy constraint. The firefly algorithm is one of the latest and most successful swarm intelligence algorithms; however, it exhibits some deficiencies when applied to constrained problems. To overcome the lack of exploration power during early iterations, we modified the algorithm and tested it on standard portfolio benchmark data sets used in the literature. Our proposed modified firefly algorithm proved to be better than other state-of-the-art algorithms, while the introduction of the entropy diversity constraint further improved the results.
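
    A minimal sketch of the firefly move applied to a toy cardinality-constrained mean-variance objective; the attraction/randomization update follows the standard FA form, while the repair step and all numeric values are assumptions, and the entropy diversity term is omitted.

      import numpy as np

      rng = np.random.default_rng(6)
      n_assets, K, n_ff = 8, 3, 15                   # K = cardinality limit
      mu = rng.uniform(0.02, 0.12, n_assets)
      A = rng.normal(size=(n_assets, n_assets))
      Sigma = A @ A.T / n_assets

      def repair(w):                                 # keep K largest weights, renormalize
          z = np.zeros_like(w)
          keep = np.argsort(w)[-K:]
          z[keep] = np.maximum(w[keep], 1e-6)
          return z / z.sum()

      def cost(w):                                   # risk-aversion-weighted MV objective
          return 3.0 * (w @ Sigma @ w) - mu @ w

      X = np.array([repair(rng.random(n_assets)) for _ in range(n_ff)])
      beta0, gamma, alpha_rand = 1.0, 1.0, 0.05      # standard FA parameters (values assumed)
      for _ in range(200):
          F = np.array([cost(x) for x in X])
          for i in range(n_ff):
              for j in range(n_ff):
                  if F[j] < F[i]:                    # move firefly i toward brighter j
                      r2 = np.sum((X[i] - X[j]) ** 2)
                      step = beta0 * np.exp(-gamma * r2) * (X[j] - X[i])
                      X[i] = repair(X[i] + step + alpha_rand * rng.normal(size=n_assets))
                      F[i] = cost(X[i])
      best = X[np.argmin([cost(x) for x in X])]      # best cardinality-K portfolio found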

  14. The benefit of regional diversification of cogeneration investments in Europe. A mean-variance portfolio analysis

    Energy Technology Data Exchange (ETDEWEB)

    Westner, Guenther; Madlener, Reinhard [E.ON Energy Projects GmbH, Arnulfstrasse 56, 80335 Munich (Germany)

    2010-12-15

    The EU Directive 2004/8/EC, concerning the promotion of cogeneration, established principles on how EU member states can support combined heat and power generation (CHP). Up to now, the implementation of these principles into national law has not been uniform, and has led to the adoption of different promotion schemes for CHP across the EU member states. In this paper, we first give an overview of the promotion schemes for CHP in various European countries. In a next step, we take two standard CHP technologies, combined-cycle gas turbines (CCGT-CHP) and engine-CHP, and apply exemplarily four selected support mechanisms used in the four largest European energy markets: feed-in tariffs in Germany; energy efficiency certificates in Italy; benefits through tax reduction in the UK; and purchase obligations for power from CHP generation in France. For contracting companies, it could be of interest to diversify their investment in new CHP facilities regionally over several countries in order to reduce country and regulatory risk. By applying the Mean-Variance Portfolio (MVP) theory, we derive characteristic return-risk profiles of the selected CHP technologies in different countries. The results show that the returns on CHP investments differ significantly depending on the country, the support scheme, and the selected technology studied. While a regional diversification of investments in CCGT-CHP does not contribute to reducing portfolio risks, a diversification of investments in engine-CHP can decrease the risk exposure. (author)

  15. The benefit of regional diversification of cogeneration investments in Europe: A mean-variance portfolio analysis

    Energy Technology Data Exchange (ETDEWEB)

    Westner, Guenther, E-mail: guenther.westner@eon-energie.co [E.ON Energy Projects GmbH, Arnulfstrasse 56, 80335 Munich (Germany); Madlener, Reinhard, E-mail: rmadlener@eonerc.rwth-aachen.d [Institute for Future Energy Consumer Needs and Behavior (FCN), Faculty of Business and Economics/E.ON Energy Research Center, RWTH Aachen University, Mathieustrasse 6, 52074 Aachen (Germany)

    2010-12-15

    The EU Directive 2004/8/EC, concerning the promotion of cogeneration, established principles on how EU member states can support combined heat and power generation (CHP). Up to now, the implementation of these principles into national law has not been uniform, and has led to the adoption of different promotion schemes for CHP across the EU member states. In this paper, we first give an overview of the promotion schemes for CHP in various European countries. In a next step, we take two standard CHP technologies, combined-cycle gas turbines (CCGT-CHP) and engine-CHP, and apply exemplarily four selected support mechanisms used in the four largest European energy markets: feed-in tariffs in Germany; energy efficiency certificates in Italy; benefits through tax reduction in the UK; and purchase obligations for power from CHP generation in France. For contracting companies, it could be of interest to diversify their investment in new CHP facilities regionally over several countries in order to reduce country and regulatory risk. By applying the Mean-Variance Portfolio (MVP) theory, we derive characteristic return-risk profiles of the selected CHP technologies in different countries. The results show that the returns on CHP investments differ significantly depending on the country, the support scheme, and the selected technology studied. While a regional diversification of investments in CCGT-CHP does not contribute to reducing portfolio risks, a diversification of investments in engine-CHP can decrease the risk exposure. - Research highlights: ► Preconditions for CHP investments differ significantly between the EU member states. ► Regional diversification of CHP investments can reduce the total portfolio risk. ► Risk reduction depends on the chosen CHP technology.

  16. Arima model and exponential smoothing method: A comparison

    Science.gov (United States)

    Wan Ahmad, Wan Kamarul Ariffin; Ahmad, Sabri

    2013-04-01

    This study presents a comparison between the Autoregressive Integrated Moving Average (ARIMA) model and the Exponential Smoothing Method in making predictions. The comparison focuses on the ability of both methods to produce forecasts with different numbers of data sources and different lengths of forecasting period. For this purpose, three different time series are used: the price of crude palm oil (RM/tonne), the exchange rate of the Ringgit Malaysia (RM) against the Great Britain Pound (GBP), and the price of SMR 20 rubber (cents/kg). The forecasting accuracy of each model is then measured by examining the prediction errors produced, using the Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE), and Mean Absolute Deviation (MAD). The study shows that the ARIMA model produces better predictions for long-term forecasting with limited data sources, but cannot produce a better prediction for time series with a narrow range from one point to another, as in the exchange rate series. On the contrary, the Exponential Smoothing Method produces better forecasts for the exchange rate series, which has a narrow range from one point to another, while it cannot produce a better prediction for a longer forecasting period.
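
    The three accuracy measures used in the comparison are simple to implement. A minimal sketch follows; the sample numbers are invented for illustration and are not the study's data.

        import numpy as np

        def forecast_errors(actual, predicted):
            """Return MSE, MAPE (%) and MAD for a forecast."""
            actual = np.asarray(actual, dtype=float)
            predicted = np.asarray(predicted, dtype=float)
            err = actual - predicted
            mse = np.mean(err ** 2)
            mape = np.mean(np.abs(err / actual)) * 100.0
            mad = np.mean(np.abs(err))
            return mse, mape, mad

        # Illustrative values only (e.g. monthly crude palm oil prices, RM/tonne).
        actual = [2310.0, 2295.5, 2402.0, 2380.5]
        predicted = [2298.0, 2310.0, 2385.0, 2401.0]
        print(forecast_errors(actual, predicted))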

  17. Managing risk and expected financial return from selective expansion of operating room capacity: mean-variance analysis of a hospital's portfolio of surgeons.

    Science.gov (United States)

    Dexter, Franklin; Ledolter, Johannes

    2003-07-01

    Surgeons using the same amount of operating room (OR) time differ in their achieved hospital contribution margins (revenue minus variable costs) by >1000%. Thus, to improve the financial return from perioperative facilities, OR strategic decisions should selectively focus additional OR capacity and capital purchasing on a few surgeons or subspecialties. These decisions use estimates of each surgeon's and/or subspecialty's contribution margin per OR hour. The estimates are subject to uncertainty (e.g., from outliers). We account for the uncertainties by using mean-variance portfolio analysis (i.e., quadratic programming). This method characterizes the problem of selectively expanding OR capacity based on the expected financial return and risk of different portfolios of surgeons. The assessment reveals whether the choices, of which surgeons have their OR capacity expanded, are sensitive to the uncertainties in the surgeons' contribution margins per OR hour. Thus, mean-variance analysis reduces the chance of making strategic decisions based on spurious information. We also assess the financial benefit of using mean-variance portfolio analysis when the planned expansion of OR capacity is well diversified over at least several surgeons or subspecialties. Our results show that, in such circumstances, there may be little benefit from further changing the portfolio to reduce its financial risk. Surgeon and subspecialty specific hospital financial data are uncertain, a fact that should be taken into account when making decisions about expanding operating room capacity. We show that mean-variance portfolio analysis can incorporate this uncertainty, thereby guiding operating room management decision-making and reducing the chance of a strategic decision being made based on spurious information.
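
    The computation the abstract describes is a quadratic program that trades expected contribution margin against the variance of its estimate. A minimal sketch under stated assumptions follows; the margins, covariances and risk-aversion weight for the four hypothetical surgeons are all illustrative.

        import numpy as np
        from scipy.optimize import minimize

        # Hypothetical contribution margins per OR hour ($) for four surgeons,
        # and a rough covariance of the estimates; all numbers are invented.
        margin = np.array([1500.0, 900.0, 1200.0, 600.0])
        cov = np.diag([300.0, 150.0, 250.0, 100.0]) ** 2

        risk_aversion = 1e-2  # trades expected margin against estimate uncertainty

        def objective(w):
            # Negative of (expected margin - risk penalty): minimized below.
            return -(w @ margin - risk_aversion * (w @ cov @ w))

        n = len(margin)
        res = minimize(
            objective,
            x0=np.full(n, 1.0 / n),
            bounds=[(0.0, 1.0)] * n,  # no negative allocations
            constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
        )
        print("share of expanded OR capacity per surgeon:", res.x.round(3))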

  18. Portfolios Dominating Indices: Optimization with Second-Order Stochastic Dominance Constraints vs. Minimum and Mean Variance Portfolios

    Directory of Open Access Journals (Sweden)

    Neslihan Fidan Keçeci

    2016-10-01

    Full Text Available The paper compares portfolio optimization with the Second-Order Stochastic Dominance (SSD constraints with mean-variance and minimum variance portfolio optimization. As a distribution-free decision rule, stochastic dominance takes into account the entire distribution of return rather than some specific characteristic, such as variance. The paper is focused on practical applications of the portfolio optimization and uses the Portfolio Safeguard (PSG package, which has precoded modules for optimization with SSD constraints, mean-variance and minimum variance portfolio optimization. We have done in-sample and out-of-sample simulations for portfolios of stocks from the Dow Jones, S&P 100 and DAX indices. The considered portfolios’ SSD dominate the Dow Jones, S&P 100 and DAX indices. Simulation demonstrated a superior performance of portfolios with SD constraints, versus mean-variance and minimum variance portfolios.

  19. Coupling of smooth particle hydrodynamics with the finite element method

    International Nuclear Information System (INIS)

    Attaway, S.W.; Heinstein, M.W.; Swegle, J.W.

    1994-01-01

    A gridless technique called smooth particle hydrodynamics (SPH) has been coupled with the transient dynamics finite element code PRONTO. In this paper, a new weighted residual derivation for the SPH method is presented, and the methods used to embed SPH within PRONTO are outlined. Example SPH-PRONTO calculations are also presented. One major difficulty associated with the Lagrangian finite element method is modeling materials with no shear strength, for example gases, fluids and explosive byproducts. Typically, these materials can be modeled for only a short time with a Lagrangian finite element code. Large distortions cause tangling of the mesh, which will eventually lead to numerical difficulties, such as negative element area or ''bow tie'' elements. Remeshing will allow the problem to continue for a short while, but the large distortions can prevent a complete analysis. SPH is a gridless Lagrangian technique. Requiring no mesh, SPH has the potential to model material fracture, large shear flows and penetration. SPH computes the strain rate and the stress divergence based on the nearest neighbors of a particle, which are determined using an efficient particle-sorting technique. Embedding the SPH method within PRONTO allows part of the problem to be modeled with quadrilateral finite elements, while other parts are modeled with the gridless SPH method. SPH elements are coupled to the quadrilateral elements through a contact-like algorithm. (orig.)

  20. On robust multi-period pre-commitment and time-consistent mean-variance portfolio optimization

    NARCIS (Netherlands)

    F. Cong (Fei); C.W. Oosterlee (Kees)

    2017-01-01

    textabstractWe consider robust pre-commitment and time-consistent mean-variance optimal asset allocation strategies, that are required to perform well also in a worst-case scenario regarding the development of the asset price. We show that worst-case scenarios for both strategies can be found by

  1. Mean-downside risk versus mean-variance efficient asset class allocations in relation to the investment horizon

    NARCIS (Netherlands)

    Ruiter, de A.J.C.; Brouwer, F.

    1996-01-01

    In this paper we examine the difference between a Mean-Downside Risk (MDR) based asset allocation decision and a Mean-Variance (MV) based decision. Using a vector autoregressive specification, future return series, from 1 month up to 10 years, of several US stock and bond asset classes have been

  2. A Fourier transform method for the selection of a smoothing interval

    International Nuclear Information System (INIS)

    Kekre, H.B.; Madan, V.K.; Bairi, B.R.

    1989-01-01

    A novel method for the selection of a smoothing interval for the widely used Savitzky and Golay smoothing filter is proposed. Complementary bandwidths for the nuclear spectral data and the smoothing filter are defined. The criterion for the selection of the smoothing interval is based on matching the bandwidth of the spectral data to that of the filter. Using the above method, five real observed spectral peaks of different full width at half maximum, viz. 23.5, 19.5, 17, 8.5 and 6.5 channels, were smoothed and the results are presented. (orig.)
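
    One plausible reading of the bandwidth-matching criterion is sketched below: estimate the half-power bandwidth of the observed peak data from its discrete Fourier transform, then pick the Savitzky-Golay window whose filter bandwidth comes closest. The input file name, half-power level and candidate window range are our assumptions, not the paper's prescription.

        import numpy as np
        from scipy.signal import savgol_coeffs, freqz

        def half_power_bw(h, w):
            """First frequency where the normalized magnitude drops below 1/2."""
            mag = np.abs(h) / np.abs(h).max()
            idx = np.flatnonzero(mag < 0.5)
            return w[idx[0]] if idx.size else w[-1]

        # Bandwidth of the observed spectral peak data (hypothetical input file).
        data = np.loadtxt("peak.dat")
        H = np.fft.rfft(data)
        w_data = np.linspace(0.0, np.pi, len(H))
        bw_data = half_power_bw(H, w_data)

        def sg_bw(m, order=2):
            # Frequency response of the Savitzky-Golay filter of length m.
            w, h = freqz(savgol_coeffs(m, order), worN=512)
            return half_power_bw(h, w)

        # Choose the odd window length whose bandwidth best matches the data.
        best_m = min(range(5, 41, 2), key=lambda m: abs(sg_bw(m) - bw_data))
        print("suggested smoothing interval:", best_m, "channels")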

  3. Multi-Period Mean-Variance Portfolio Selection with Uncertain Time Horizon When Returns Are Serially Correlated

    Directory of Open Access Journals (Sweden)

    Ling Zhang

    2012-01-01

    Full Text Available We study a multi-period mean-variance portfolio selection problem with an uncertain time horizon and serial correlations. Firstly, we embed the nonseparable multi-period optimization problem into a separable quadratic optimization problem with uncertain exit time by employing the embedding technique of Li and Ng (2000). Then we convert the latter into an optimization problem with deterministic exit time. Finally, using the dynamic programming approach, we explicitly derive the optimal strategy and the efficient frontier for the dynamic mean-variance optimization problem. A numerical example with an AR(1) return process is also presented, which shows that both the uncertainty of exit time and the serial correlations of returns have significant impacts on the optimal strategy and the efficient frontier.

  4. Stochastic Funding of a Defined Contribution Pension Plan with Proportional Administrative Costs and Taxation under Mean-Variance Optimization Approach

    Directory of Open Access Journals (Sweden)

    Charles I Nkeki

    2014-11-01

    Full Text Available This paper aims at studying a mean-variance portfolio selection problem with stochastic salary, proportional administrative costs and taxation in the accumulation phase of a defined contribution (DC) pension scheme. The fund process is subject to taxation while the contributions of the pension plan member (PPM) are tax exempt. It is assumed that the flow of contributions of a PPM is invested into a market that is characterized by a cash account and a stock. The optimal portfolio processes and expected wealth for the PPM are established. The efficient and parabolic frontiers of the PPM's portfolios in mean-variance space are obtained. It was found that the capital market line can be attained when the initial fund and the contribution rate are zero. It was also found that the optimal portfolio process involves an inter-temporal hedging term that will offset any shocks to the stochastic salary of the PPM.

  5. Portfolios dominating indices: Optimization with second-order stochastic dominance constraints vs. minimum and mean variance portfolios

    OpenAIRE

    Keçeci, Neslihan Fidan; Kuzmenko, Viktor; Uryasev, Stan

    2016-01-01

    The paper compares portfolio optimization with the Second-Order Stochastic Dominance (SSD) constraints with mean-variance and minimum variance portfolio optimization. As a distribution-free decision rule, stochastic dominance takes into account the entire distribution of return rather than some specific characteristic, such as variance. The paper is focused on practical applications of the portfolio optimization and uses the Portfolio Safeguard (PSG) package, which has precoded modules for op...

  6. Portfolios Dominating Indices: Optimization with Second-Order Stochastic Dominance Constraints vs. Minimum and Mean Variance Portfolios

    OpenAIRE

    Neslihan Fidan Keçeci; Viktor Kuzmenko; Stan Uryasev

    2016-01-01

    The paper compares portfolio optimization with the Second-Order Stochastic Dominance (SSD) constraints with mean-variance and minimum variance portfolio optimization. As a distribution-free decision rule, stochastic dominance takes into account the entire distribution of return rather than some specific characteristic, such as variance. The paper is focused on practical applications of the portfolio optimization and uses the Portfolio Safeguard (PSG) package, which has precoded modules for op...

  7. A method of piecewise-smooth numerical branching

    Czech Academy of Sciences Publication Activity Database

    Ligurský, Tomáš; Renard, Y.

    2017-01-01

    Roč. 97, č. 7 (2017), s. 815-827 ISSN 1521-4001 R&D Projects: GA MŠk LQ1602 Institutional support: RVO:68145535 Keywords : numerical branching * piecewise smooth * steady-state problem * contact problem * Coulomb friction Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics http://onlinelibrary.wiley.com/doi/10.1002/zamm.201600219/epdf

  8. Self-Averaging Property of Minimal Investment Risk of Mean-Variance Model.

    Science.gov (United States)

    Shinzato, Takashi

    2015-01-01

    In portfolio optimization problems, the minimum expected investment risk is not always smaller than the expected minimal investment risk. That is, using a well-known approach from operations research, it is possible to derive a strategy that minimizes the expected investment risk, but this strategy does not always result in the best rate of return on assets. Prior to making investment decisions, it is important to an investor to know the potential minimal investment risk (or the expected minimal investment risk) and to determine the strategy that will maximize the return on assets. We use the self-averaging property to analyze the potential minimal investment risk and the concentrated investment level for the strategy that gives the best rate of return. We compare the results from our method with the results obtained by the operations research approach and with those obtained by a numerical simulation using the optimal portfolio. The results of our method and the numerical simulation are in agreement, but they differ from that of the operations research approach.

  9. Self-Averaging Property of Minimal Investment Risk of Mean-Variance Model.

    Directory of Open Access Journals (Sweden)

    Takashi Shinzato

    Full Text Available In portfolio optimization problems, the minimum expected investment risk is not always smaller than the expected minimal investment risk. That is, using a well-known approach from operations research, it is possible to derive a strategy that minimizes the expected investment risk, but this strategy does not always result in the best rate of return on assets. Prior to making investment decisions, it is important to an investor to know the potential minimal investment risk (or the expected minimal investment risk) and to determine the strategy that will maximize the return on assets. We use the self-averaging property to analyze the potential minimal investment risk and the concentrated investment level for the strategy that gives the best rate of return. We compare the results from our method with the results obtained by the operations research approach and with those obtained by a numerical simulation using the optimal portfolio. The results of our method and the numerical simulation are in agreement, but they differ from that of the operations research approach.

  10. Suppression of stochastic pulsation in laser-plasma interaction by smoothing methods

    International Nuclear Information System (INIS)

    Hora, H.; Aydin, M.

    1992-01-01

    The control of the very complex behavior of a plasma with laser interaction by smoothing with induced spatial incoherence or other methods was related to improving the lateral uniformity of the irradiation. While this is important, it is shown from numerical hydrodynamic studies that the very strong temporal pulsation (stuttering) will mostly be suppressed by these smoothing methods too

  11. The Requirement of a Positive Definite Covariance Matrix of Security Returns for Mean-Variance Portfolio Analysis: A Pedagogic Illustration

    Directory of Open Access Journals (Sweden)

    Clarence C. Y. Kwan

    2010-07-01

    Full Text Available This study considers, from a pedagogic perspective, a crucial requirement for the covariance matrix of security returns in mean-variance portfolio analysis. Although the requirement that the covariance matrix be positive definite is fundamental in modern finance, it has not received any attention in standard investment textbooks. Being unaware of the requirement could cause confusion for students over some strange portfolio results that are based on seemingly reasonable input parameters. This study considers the requirement both informally and analytically. Electronic spreadsheet tools for constrained optimization and basic matrix operations are utilized to illustrate the various concepts involved.

  12. A nonlinear wavelet method for data smoothing of low-level gamma-ray spectra

    International Nuclear Information System (INIS)

    Gang Xiao; Li Deng; Benai Zhang; Jianshi Zhu

    2004-01-01

    A nonlinear wavelet method was designed for smoothing low-level gamma-ray spectra. The spectra of a 60Co graduated radioactive source and a mixed soil sample were smoothed according to this method and to a 5-point smoothing method. The FWHM of the 1332 keV peak of the 60Co source and the absolute activities of 238U in the soil sample were calculated. The results show that the nonlinear wavelet method is better than the traditional method, with less loss of spectral peak and a more complete reduction of statistical fluctuation. (author)
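
    A generic form of nonlinear wavelet smoothing is soft thresholding of the detail coefficients with a universal threshold. The sketch below follows that standard recipe; the wavelet choice, decomposition level and simulated spectrum are our assumptions and not necessarily the authors' exact scheme.

        import numpy as np
        import pywt  # PyWavelets

        def wavelet_smooth(counts, wavelet="sym6", level=4):
            """Soft-threshold the detail coefficients and reconstruct."""
            coeffs = pywt.wavedec(counts, wavelet, level=level)
            # Universal threshold estimated from the finest detail coefficients.
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745
            thr = sigma * np.sqrt(2.0 * np.log(len(counts)))
            coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)

        # Example: smooth a simulated low-level gamma-ray spectrum.
        channels = np.arange(1024)
        peak = 50.0 * np.exp(-0.5 * ((channels - 660) / 4.0) ** 2)
        spectrum = np.random.poisson(peak + 5.0)  # Poisson counting noise
        smoothed = wavelet_smooth(spectrum)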

  13. Big Data Challenges of High-Dimensional Continuous-Time Mean-Variance Portfolio Selection and a Remedy.

    Science.gov (United States)

    Chiu, Mei Choi; Pun, Chi Seng; Wong, Hoi Ying

    2017-08-01

    Investors interested in the global financial market must analyze financial securities internationally. Making an optimal global investment decision involves processing a huge amount of data for a high-dimensional portfolio. This article investigates the big data challenges of two mean-variance optimal portfolios: continuous-time precommitment and constant-rebalancing strategies. We show that both optimized portfolios implemented with the traditional sample estimates converge to the worst performing portfolio when the portfolio size becomes large. The crux of the problem is the estimation error accumulated from the huge dimension of stock data. We then propose a linear programming optimal (LPO) portfolio framework, which applies a constrained ℓ1-minimization to the theoretical optimal control to mitigate the risk associated with the dimensionality issue. The resulting portfolio becomes a sparse portfolio that selects stocks with a data-driven procedure and hence offers a stable mean-variance portfolio in practice. When the number of observations becomes large, the LPO portfolio converges to the oracle optimal portfolio, which is free of estimation error, even though the number of stocks grows faster than the number of observations. Our numerical and empirical studies demonstrate the superiority of the proposed approach. © 2017 Society for Risk Analysis.

  14. Comparisons and Characterizations of the Mean-Variance, Mean-VaR, Mean-CVaR Models for Portfolio Selection With Background Risk

    OpenAIRE

    Xu, Guo; Wing-Keung, Wong; Lixing, Zhu

    2013-01-01

    This paper investigates the impact of background risk on an investor’s portfolio choice in a mean-VaR, mean-CVaR and mean-variance framework, and analyzes the characterizations of the mean-variance boundary and mean-VaR efficient frontier in the presence of background risk. We also consider the case with a risk-free security.

  15. An adaptive segment method for smoothing lidar signal based on noise estimation

    Science.gov (United States)

    Wang, Yuzhao; Luo, Pingping

    2014-10-01

    An adaptive segmentation smoothing method (ASSM) is introduced in the paper to smooth the signal and suppress the noise. In the ASSM, the noise level is defined as 3σ of the background signal, and an integer N is defined for finding the changing positions in the signal curve: if the difference between two adjacent points is greater than 3Nσ, the position is recorded as an end point of a smoothing segment. All the end points detected in this way are recorded, and the curves between them are smoothed separately. In the traditional method, the end points of the smoothing windows are fixed; the ASSM instead derives changing end points from each signal, so the smoothing windows can be set adaptively. The windows are always set to half of the segment length, and average smoothing is then applied within each segment. An iterative process is required to reduce the end-point aberration effect of the average smoothing, and two or three iterations are enough. In the ASSM, the signals are smoothed in the spatial domain rather than the frequency domain, so frequency-domain disturbances are avoided. In the experimental work, a lidar echo was simulated, as if created by a space-borne lidar (e.g. CALIOP), and white Gaussian noise was added to the echo to represent the random noise from the environment and the detector. The ASSM was applied to the noisy echo to filter the noise. In the test, N was set to 3 and the iteration count to two. The results show that the signal can be smoothed adaptively by the ASSM, though N and the iteration count may need to be optimized when the ASSM is applied to a different lidar.
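
    The recipe above translates almost directly into code: estimate sigma from the background, cut the signal where the jump between adjacent points exceeds 3N*sigma, and average-smooth each segment with a window of half its length, iterating two or three times. A minimal sketch under those assumptions:

        import numpy as np

        def assm(signal, background, n=3, iterations=2):
            """Adaptive segment smoothing sketch following the abstract's recipe."""
            sigma = np.std(background)           # noise level from the background
            jumps = np.abs(np.diff(signal))
            cuts = np.flatnonzero(jumps > 3 * n * sigma) + 1
            ends = [0] + list(cuts) + [len(signal)]

            out = np.asarray(signal, dtype=float).copy()
            for _ in range(iterations):          # repeat to reduce end effects
                for a, b in zip(ends[:-1], ends[1:]):
                    seg = out[a:b]
                    win = max(1, len(seg) // 2)  # window: half the segment length
                    kernel = np.ones(win) / win
                    out[a:b] = np.convolve(seg, kernel, mode="same")
            return out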

  16. A numerical study of the Regge calculus and smooth lattice methods on a Kasner cosmology

    International Nuclear Information System (INIS)

    Brewin, Leo

    2015-01-01

    Two lattice based methods for numerical relativity, the Regge calculus and the smooth lattice relativity, will be compared with respect to accuracy and computational speed in a full 3+1 evolution of initial data representing a standard Kasner cosmology. It will be shown that both methods provide convergent approximations to the exact Kasner cosmology. It will also be shown that the Regge calculus is of the order of 110 times slower than the smooth lattice method. (paper)

  17. Power generation mixes evaluation applying the mean-variance theory. Analysis of the choices for Japanese energy policy

    International Nuclear Information System (INIS)

    Tabaru, Yasuhiko; Nonaka, Yuzuru; Nonaka, Shunsuke; Endou, Misao

    2013-01-01

    Optimal Japanese power generation mixes in 2030, for both economic efficiency and energy security (less cost variance risk), are evaluated by applying the mean-variance portfolio theory. Technical assumptions, including remaining generation capacity out of the present generation mix, future load duration curve, and Research and Development risks for some renewable energy technologies in 2030, are taken into consideration as either the constraints or parameters for the evaluation. Efficiency frontiers, which consist of the optimal generation mixes for several future scenarios, are identified, taking not only power balance but also capacity balance into account, and are compared with three power generation mixes submitted by the Japanese government as 'the choices for energy and environment'. (author)

  18. Continuous-time mean-variance portfolio selection with value-at-risk and no-shorting constraints

    Science.gov (United States)

    Yan, Wei

    2012-01-01

    An investment problem is considered with a dynamic mean-variance (M-V) portfolio criterion under discontinuous prices which follow jump-diffusion processes, in keeping with the actual prices of stocks and the normality and stability of the financial market. The short-selling of stocks is prohibited in this mathematical model. The corresponding stochastic Hamilton-Jacobi-Bellman (HJB) equation of the problem is presented, and the solution of the stochastic HJB equation is obtained based on the theory of stochastic LQ control and viscosity solutions. The efficient frontier and optimal strategies of the original dynamic M-V portfolio selection problem are also provided, and the effects of the value-at-risk constraint on the efficient frontier are illustrated. Finally, an example illustrating the discontinuous prices based on M-V portfolio selection is presented.

  19. $h - p$ Spectral element methods for elliptic problems on non-smooth domains using parallel computers

    NARCIS (Netherlands)

    Tomar, S.K.

    2002-01-01

    It is well known that elliptic problems when posed on non-smooth domains, develop singularities. We examine such problems within the framework of spectral element methods and resolve the singularities with exponential accuracy.

  20. Improved smoothed analysis of the k-means method

    NARCIS (Netherlands)

    Manthey, Bodo; Röglin, Heiko; Mathieu, C.

    2009-01-01

    The k-means method is a widely used clustering algorithm. One of its distinguished features is its speed in practice. Its worst-case running-time, however, is exponential, leaving a gap between practical and theoretical performance. Arthur and Vassilvitskii [3] aimed at closing this gap, and they

  1. Smoothed analysis of the k-means method

    NARCIS (Netherlands)

    Arthur, David; Manthey, Bodo; Röglin, Heiko

    2011-01-01

    The k-means method is one of the most widely used clustering algorithms, drawing its popularity from its speed in practice. Recently, however, it was shown to have exponential worst-case running time. In order to close the gap between practical performance and theoretical analysis, the k-means

  2. Analysis of elastic-plastic problems using edge-based smoothed finite element method

    International Nuclear Information System (INIS)

    Cui, X.Y.; Liu, G.R.; Li, G.Y.; Zhang, G.Y.; Sun, G.Y.

    2009-01-01

    In this paper, an edge-based smoothed finite element method (ES-FEM) is formulated for stress field determination in elastic-plastic problems using triangular meshes, in which smoothing domains associated with the edges of the triangles are used for smoothing operations to improve the accuracy and the convergence rate of the method. The smoothed Galerkin weak form is adopted to obtain the discretized system equations, and the numerical integration becomes a simple summation over the edge-based smoothing domains. The pseudo-elastic method is employed for the determination of the stress field, and Hencky's total deformation theory is used to define effective elastic material parameters, which are treated as field variables and considered as functions of the final state of the stress fields. The effective elastic material parameters are then obtained in an iterative manner, based on the strain-controlled projection method, from the uniaxial material curve. Some numerical examples are investigated and excellent results have been obtained, demonstrating the effectiveness of the present method.

  3. A Pragmatic Smoothing Method for Improving the Quality of the Results in Atomic Spectroscopy

    Science.gov (United States)

    Bennun, Leonardo

    2017-07-01

    A new smoothing method is presented for improving the identification and quantification of spectral functions, based on prior knowledge of the signals that are expected to be quantified. These signals are used as weighted coefficients in the smoothing algorithm. The method was conceived for application in atomic and nuclear spectroscopies, preferably in techniques where net counts are proportional to acquisition time, such as particle-induced X-ray emission (PIXE) and other X-ray fluorescence spectroscopic methods. When properly applied, the algorithm distorts neither the form nor the intensity of the signal, so it is well suited to all kinds of spectroscopic techniques. The method is extremely effective at reducing high-frequency noise in the signal, much more so than a single rectangular smooth of the same width. As with all smoothing techniques, the proposed method improves the precision of the results, but in this case we also found a systematic improvement in their accuracy. The improvement in the quality of the results when the method is applied to real experimental data remains to be evaluated; we expect better characterization of the net peak areas and smaller detection and quantification limits. We have applied this method to signals that obey Poisson statistics, but with the same ideas and criteria it could be applied to time series. In the general case, when the algorithm is applied to experimental results, the sought characteristic functions required for this weighted smoothing method should be obtained from a system with strong stability; if the sought signals are not perfectly clean, the method should be applied with care.
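
    One plausible reading of the weighting scheme is to convolve the measured spectrum with the normalized expected signal shape. The sketch below illustrates that reading with a Gaussian detector response; the kernel choice and every parameter are our assumptions, not the paper's exact algorithm.

        import numpy as np

        def reference_weighted_smooth(spectrum, reference):
            """Smooth with the expected clean peak shape as the weighting kernel."""
            kernel = np.asarray(reference, dtype=float)
            kernel = kernel / kernel.sum()  # preserve total counts
            return np.convolve(spectrum, kernel, mode="same")

        # A Gaussian detector response as a stand-in for the known signal shape.
        x = np.arange(-10, 11)
        response = np.exp(-0.5 * (x / 2.5) ** 2)
        true = 200.0 * np.exp(-0.5 * ((np.arange(512) - 256) / 2.5) ** 2) + 10.0
        noisy = np.random.poisson(true)  # counts obeying Poisson statistics
        smoothed = reference_weighted_smooth(noisy, response)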

  4. A Smooth Newton Method for Nonlinear Programming Problems with Inequality Constraints

    Directory of Open Access Journals (Sweden)

    Vasile Moraru

    2012-02-01

    Full Text Available The paper presents a reformulation of the Karush-Kuhn-Tucker (KKT) system associated with a nonlinear programming problem into an equivalent system of smooth equations. The classical Newton method is applied to solve the system of equations. The superlinear convergence of the primal sequence generated by the proposed method is proved. Preliminary numerical results on a test set of problems are presented.
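
    A standard way to obtain such a smooth reformulation, which we assume here purely for illustration, is to replace the KKT complementarity conditions with a smoothed Fischer-Burmeister equation and apply Newton's method to the resulting system. The toy problem, the smoothing parameter and the finite-difference Jacobian below are our choices, not the paper's.

        import numpy as np

        def kkt_residual(z, mu=1e-6):
            """Smoothed KKT system for: minimize (x-2)^2  s.t.  x - 1 <= 0."""
            x, lam = z
            grad_lagrangian = 2.0 * (x - 2.0) + lam  # df/dx + lam * dg/dx
            g = x - 1.0
            # Smoothed Fischer-Burmeister equation for lam >= 0, g <= 0, lam*g = 0.
            fb = np.sqrt(lam ** 2 + g ** 2 + 2.0 * mu ** 2) - (lam - g)
            return np.array([grad_lagrangian, fb])

        def newton(F, z0, tol=1e-10, max_iter=50, h=1e-7):
            z = np.asarray(z0, dtype=float)
            for _ in range(max_iter):
                r = F(z)
                if np.linalg.norm(r) < tol:
                    break
                # Forward-difference Jacobian (adequate for a 2x2 sketch).
                J = np.column_stack([(F(z + h * e) - r) / h for e in np.eye(len(z))])
                z = z - np.linalg.solve(J, r)
            return z

        print(newton(kkt_residual, [0.0, 0.0]))  # expect x ~ 1, lambda ~ 2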

  5. A Meshfree Cell-based Smoothed Point Interpolation Method for Solid Mechanics Problems

    International Nuclear Information System (INIS)

    Zhang Guiyong; Liu Guirong

    2010-01-01

    In the framework of a weakened weak (W²) formulation using a generalized gradient smoothing operation, this paper introduces a novel meshfree cell-based smoothed point interpolation method (CS-PIM) for solid mechanics problems. The W² formulation seeks solutions from a normed G space which includes both continuous and discontinuous functions and allows the use of many more types of methods to create shape functions for numerical methods. When PIM shape functions are used, the functions constructed are in general not continuous over the entire problem domain and hence are not compatible. Such an interpolation is not in a traditional H¹ space, but in a G¹ space. By introducing the generalized gradient smoothing operation properly, the requirement on the function is further weakened beyond the already weakened requirement for functions in an H¹ space, and a G¹ space can be viewed as a space of functions with a weakened weak (W²) requirement on continuity. The cell-based smoothed point interpolation method (CS-PIM) is formulated based on the W² formulation, in which the displacement field is approximated using the PIM shape functions, which possess the Kronecker delta property facilitating the enforcement of essential boundary conditions [3]. The gradient (strain) field is constructed by the generalized gradient smoothing operation within the cell-based smoothing domains, which are exactly the triangular background cells. A W² formulation of the generalized smoothed Galerkin (GS-Galerkin) weak form is used to derive the discretized system equations. It was found that the CS-PIM possesses the following attractive properties: (1) it is very easy to implement and works well with the simplest linear triangular mesh without introducing additional degrees of freedom; (2) it is at least linearly conforming; (3) the method is temporally stable and works well for dynamic analysis; (4) it possesses a close-to-exact stiffness, which is much softer than the overly-stiff FEM model and

  6. Application of Data Smoothing Method in Signal Processing for Vortex Flow Meters

    Directory of Open Access Journals (Sweden)

    Zhang Jun

    2017-01-01

    Full Text Available Vortex flow meters are typical flow measurement equipment. Their output signals can easily be impaired by environmental conditions. In order to obtain an improved estimate of the time-averaged velocity from the vortex flow meter, a signal filtering method is applied in this paper. The method is based on a simple Savitzky-Golay smoothing filter algorithm. Following the algorithm, a numerical program is developed in Python with the scientific library NumPy. Two sample data sets are processed through the program. The results demonstrate that the processed data are acceptable compared with the original data, and an improved estimate of the time-averaged velocity is obtained from the smoothed curves. Finally, the simple data smoothing program proves usable and stable for this filter.
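
    With SciPy the described filter is essentially a one-liner. The sketch below applies it to a synthetic velocity trace (all values illustrative) and extracts a time-averaged velocity estimate; the window length and polynomial order are our assumptions.

        import numpy as np
        from scipy.signal import savgol_filter

        # Simulated vortex flow meter trace: slowly varying mean flow plus noise.
        t = np.linspace(0.0, 10.0, 500)
        velocity = 3.0 + 0.5 * np.sin(0.4 * t) + np.random.normal(0.0, 0.2, t.size)

        # Savitzky-Golay smoothing: window of 51 samples, quadratic polynomial.
        smoothed = savgol_filter(velocity, window_length=51, polyorder=2)
        print(f"time-averaged velocity ~ {smoothed.mean():.3f} m/s")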

  7. A three-dimensional cell-based smoothed finite element method for elasto-plasticity

    International Nuclear Information System (INIS)

    Lee, Kye Hyung; Im, Se Yong; Lim, Jae Hyuk; Sohn, Dong Woo

    2015-01-01

    This work is concerned with a three-dimensional cell-based smoothed finite element method for application to elastic-plastic analysis. The formulation of smoothed finite elements is extended to cover elastic-plastic deformations beyond the classical linear theory of elasticity, which has been the major application domain of smoothed finite elements. The finite strain deformations are treated with the aid of the formulation based on the hyperelastic constitutive equation. The volumetric locking originating from the nearly incompressible behavior of elastic-plastic deformations is remedied by relaxing the volumetric strain through the mean value. The comparison with the conventional finite elements demonstrates the effectiveness and accuracy of the present approach.

  8. A three-dimensional cell-based smoothed finite element method for elasto-plasticity

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kye Hyung; Im, Se Yong [KAIST, Daejeon (Korea, Republic of); Lim, Jae Hyuk [KARI, Daejeon (Korea, Republic of); Sohn, Dong Woo [Korea Maritime and Ocean University, Busan (Korea, Republic of)

    2015-02-15

    This work is concerned with a three-dimensional cell-based smoothed finite element method for application to elastic-plastic analysis. The formulation of smoothed finite elements is extended to cover elastic-plastic deformations beyond the classical linear theory of elasticity, which has been the major application domain of smoothed finite elements. The finite strain deformations are treated with the aid of the formulation based on the hyperelastic constitutive equation. The volumetric locking originating from the nearly incompressible behavior of elastic-plastic deformations is remedied by relaxing the volumetric strain through the mean value. The comparison with the conventional finite elements demonstrates the effectiveness and accuracy of the present approach.

  9. Investigation of noise in gear transmissions by the method of mathematical smoothing of experiments

    Science.gov (United States)

    Sheftel, B. T.; Lipskiy, G. K.; Ananov, P. P.; Chernenko, I. K.

    1973-01-01

    A rotatable central component smoothing method is used to analyze rotating gear noise spectra. A matrix is formulated in which the randomized rows correspond to the various tests and the columns to factor values. Canonical analysis of the resulting regression equation permits the calculation of the optimal speed and load at a previously assigned noise level.
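
    The workflow of fitting a second-order regression surface to the test matrix and locating its stationary point by canonical analysis can be sketched as follows; the design points and noise readings are hypothetical stand-ins for the experiment's data.

        import numpy as np

        # Hypothetical design: speed (rpm), load (N) and measured noise (dB).
        speed = np.array([800, 800, 1200, 1200, 1000, 1000, 1000, 717, 1283])
        load = np.array([100, 300, 100, 300, 200, 58, 342, 200, 200])
        noise = np.array([78, 83, 85, 92, 80, 77, 86, 76, 88], dtype=float)

        # Second-order model: b0 + b1*s + b2*l + b11*s^2 + b22*l^2 + b12*s*l.
        X = np.column_stack([
            np.ones_like(speed), speed, load,
            speed ** 2, load ** 2, speed * load,
        ]).astype(float)
        beta, *_ = np.linalg.lstsq(X, noise, rcond=None)

        # Canonical analysis: stationary point where the fitted gradient vanishes.
        b = beta[1:3]
        B = np.array([[2 * beta[3], beta[5]], [beta[5], 2 * beta[4]]])
        stationary = np.linalg.solve(B, -b)
        print("stationary (speed, load):", stationary)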

  10. Second-order numerical methods for multi-term fractional differential equations: Smooth and non-smooth solutions

    Science.gov (United States)

    Zeng, Fanhai; Zhang, Zhongqiang; Karniadakis, George Em

    2017-12-01

    Starting with the asymptotic expansion of the error equation of the shifted Grünwald-Letnikov formula, we derive a new modified weighted shifted Grünwald-Letnikov (WSGL) formula by introducing appropriate correction terms. We then apply one special case of the modified WSGL formula to solve multi-term fractional ordinary and partial differential equations, and we prove the linear stability and second-order convergence for both smooth and non-smooth solutions. We show theoretically and numerically that numerical solutions up to a certain accuracy can be obtained with only a few correction terms. Moreover, the correction terms can be tuned according to the fractional derivative orders without explicitly knowing the analytical solutions. Numerical simulations verify the theoretical results and demonstrate that the new formula leads to better performance compared to other known numerical approximations with similar resolution.

  11. Risk implications of renewable support instruments: Comparative analysis of feed-in tariffs and premiums using a mean-variance approach

    DEFF Research Database (Denmark)

    Kitzing, Lena

    2014-01-01

    Using cash flow analysis, Monte Carlo simulations and mean-variance analysis, we quantify risk-return relationships for an exemplary offshore wind park in a simplified setting. We show that feed-in tariffs systematically require lower direct support levels than feed-in premiums while providing the same

  12. A method for smoothing segmented lung boundary in chest CT images

    Science.gov (United States)

    Yim, Yeny; Hong, Helen

    2007-03-01

    To segment low-density lung regions in chest CT images, most methods use the difference in gray-level values of pixels. However, radiodense pulmonary vessels and pleural nodules that contact the surrounding anatomy are often excluded from the segmentation result. To smooth the lung boundary segmented by gray-level processing in chest CT images, we propose a new method using scan line search. Our method consists of three main steps. First, the lung boundary is extracted by our automatic segmentation method. Second, the segmented lung contour is smoothed in each axial CT slice. We propose a scan line search to track the points on the lung contour and find rapidly changing curvature efficiently. Finally, to provide a consistent appearance between lung contours in adjacent axial slices, 2D closing in the coronal plane is applied within a pre-defined subvolume. Our method has been evaluated in terms of visual inspection, accuracy and processing time. The results show that the smoothness of the lung contour was considerably increased by compensating for pulmonary vessels and pleural nodules.

  13. Semi-Smooth Newton Method for Solving 2D Contact Problems with Tresca and Coulomb Friction

    Directory of Open Access Journals (Sweden)

    Kristina Motyckova

    2013-01-01

    Full Text Available The contribution deals with contact problems for two elastic bodies with friction. After the description of the problem we present its discretization based on linear or bilinear finite elements. The semi-smooth Newton method is used to find the solution, from which we derive active-set algorithms. Finally, we arrive at the globally convergent dual implementation of the algorithms in terms of the Lagrange multipliers for the Tresca problem. Numerical experiments conclude the paper.

  14. Research on industrialization of electric vehicles with its demand forecast using exponential smoothing method

    Directory of Open Access Journals (Sweden)

    Zhanglin Peng

    2015-04-01

    Full Text Available Purpose: The electric vehicle (EV) industry has developed rapidly around the world, especially in developed countries, but gaps remain between countries and regions. The industrialization experience of EVs in developed countries can be of great help for EV industrialization in developing countries. This paper researches the industrialization path and prospects of American EVs by forecasting EV demand and its proportion of total car sales, based on 37 months of historical EV and car sales spanning Dec. 2010 to Dec. 2013, and seeks the key measures that can help the Chinese government and automobile enterprises promote Chinese EV industrialization. Design/methodology: Compared with the single and double exponential smoothing methods, the triple exponential smoothing method is improved and applied in this study. Findings: The research results show that the American EV industry will keep growing in the next 3 months, and that the price of EVs, the price of fossil oil, the number of charging stations, EV technology, and government market and taxation policies each influence EV sales differently. EV manufacturers and policy-makers can therefore adjust or reformulate technology tactics and market measures according to the forecast results, and China can learn from American EV policies and measures to develop its own EV industry. Originality/value: The main contribution of this paper is to use the triple exponential smoothing method to forecast EV demand and its proportion of total automobile sales, and to analyze the industrial development of Chinese EVs in light of the American EV industry.
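
    Brown's triple exponential smoothing is one common variant of the method named above; the paper's improved version differs in its details. A minimal sketch with invented monthly sales figures:

        import numpy as np

        def triple_exp_smoothing_forecast(y, alpha=0.3, m=3):
            """Brown's triple exponential smoothing; forecasts 1..m steps ahead."""
            s1 = s2 = s3 = float(y[0])
            for obs in y[1:]:
                s1 = alpha * obs + (1 - alpha) * s1
                s2 = alpha * s1 + (1 - alpha) * s2
                s3 = alpha * s2 + (1 - alpha) * s3
            a = 3 * s1 - 3 * s2 + s3
            b = (alpha / (2 * (1 - alpha) ** 2)) * (
                (6 - 5 * alpha) * s1 - (10 - 8 * alpha) * s2 + (4 - 3 * alpha) * s3
            )
            c = (alpha ** 2 / (1 - alpha) ** 2) * (s1 - 2 * s2 + s3)
            steps = np.arange(1, m + 1)
            return a + b * steps + 0.5 * c * steps ** 2

        # Illustrative monthly EV sales (units); not the paper's data.
        sales = np.array([300, 420, 510, 620, 800, 950, 1100, 1500, 1800, 2400])
        print(triple_exp_smoothing_forecast(sales, alpha=0.4, m=3))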

  15. A Nonlinear Framework of Delayed Particle Smoothing Method for Vehicle Localization under Non-Gaussian Environment

    Directory of Open Access Journals (Sweden)

    Zhu Xiao

    2016-05-01

    Full Text Available In this paper, a novel nonlinear smoothing framework, the non-Gaussian delayed particle smoother (nGDPS), is proposed, which enables vehicle state estimation (VSE) with high accuracy, taking into account the non-Gaussianity of the measurement and process noises. Within the proposed method, the multivariate Student's t-distribution is adopted in order to compute the probability density function (PDF) related to the process and measurement noises, which are assumed to be non-Gaussian distributed. A computation approach based on the Ensemble Kalman Filter (EnKF) is designed to cope with the mean and the covariance matrix of the proposal non-Gaussian distribution. A delayed Gibbs sampling algorithm, which incorporates smoothing of the sampled trajectories over a fixed delay, is proposed to deal with the sample degeneracy of particles. The performance is investigated on real-world data collected by low-cost on-board vehicle sensors. The comparison study based on the real-world experiments and the statistical analysis demonstrates that the proposed nGDPS significantly improves the vehicle state accuracy and outperforms the existing filtering and smoothing methods.

  16. Window least squares method applied to statistical noise smoothing of positron annihilation data

    International Nuclear Information System (INIS)

    Adam, G.; Adam, S.; Barbiellini, B.; Hoffmann, L.; Manuel, A.A.; Peter, M.

    1993-06-01

    The paper deals with the off-line processing of experimental data obtained by the two-dimensional angular correlation of electron-positron annihilation radiation (2D-ACAR) technique on high-temperature superconductors. A piecewise-continuous window least squares (WLS) method for the statistical noise smoothing of 2D-ACAR data, under close control of the crystal reciprocal-lattice periodicity, is derived. Reliability evaluation of the constant-local-weight WLS smoothing formula (CW-WLSF) shows that consistent processing of 2D-ACAR data by CW-WLSF is possible. CW-WLSF analysis of 2D-ACAR data collected on untwinned YBa2Cu3O7-δ single crystals yields a significantly improved signature of the Fermi-surface ridge at second Umklapp processes and resolves, for the first time, the ridge signature at third Umklapp processes. (author). 24 refs, 9 figs

  17. A systematic method of smooth switching LPV controllers design for a morphing aircraft

    Directory of Open Access Journals (Sweden)

    Jiang Weilai

    2015-12-01

    Full Text Available This paper is concerned with a systematic method for the design of smooth switching linear parameter-varying (LPV) controllers for a morphing aircraft with a variable wing sweep angle. The morphing aircraft is modeled as an LPV system whose scheduling parameter is the variation rate of the wing sweep angle. By dividing the scheduling parameter set into subsets with overlaps, output feedback controllers which account for smooth switching are designed, and the controllers in the overlapped subsets are interpolated from the two adjacent subsets. A switching law without a constraint on the average dwell time is obtained, which makes the conclusion less conservative. Furthermore, a systematic algorithm is developed to improve the efficiency of the controller design process. The parameter set is divided into the fewest subsets on the premise that the closed-loop system has the desired performance. Simulation results demonstrate the effectiveness of this approach.

  18. Increased Wear Resistance of Surfaces of Rotation Bearings Methods Strengthening-Smoothing Processing

    Directory of Open Access Journals (Sweden)

    A.A. Tkachuk

    2016-05-01

    Full Text Available Trends in modern engineering impose ever higher quality requirements on bearings. This is especially true for the production of special-purpose bearings with high rotation speeds and a long service life. Considerably more opportunities for managing the quality of surface layers arise from the application of strengthening-smoothing methods based on surface plastic deformation. Working models of the cutting, grinding and tool-smoothing sequence reveal how the operational parameters are formed in the technological cycle of roller rings. A model of the dynamics of elastic deformation in the workpiece-tool system helps identify the action of the radial force in the “surface - indenter” contact. Mathematical modelling resolved a number of issues relevant to the process.

  19. A novel MPPT method for enhancing energy conversion efficiency taking power smoothing into account

    International Nuclear Information System (INIS)

    Liu, Jizhen; Meng, Hongmin; Hu, Yang; Lin, Zhongwei; Wang, Wei

    2015-01-01

    Highlights: • We discuss the disadvantages of the conventional OTC MPPT method. • We study the relationship between enhancing efficiency and power smoothing. • The conversion efficiency is enhanced and the volatility of power is suppressed. • Small signal analysis is used to verify the effectiveness of the proposed method. - Abstract: With the increasing capacity of wind energy conversion systems (WECS), the rotational inertia of wind turbines is becoming larger, and the efficiency of energy conversion is significantly reduced by the large inertia. This paper proposes a novel maximum power point tracking (MPPT) method to enhance the efficiency of energy conversion for large-scale wind turbines. Since improving the efficiency may increase the fluctuations of output power, power smoothing is considered as the second control objective. A T-S fuzzy inference system (FIS) is adapted to reduce the fluctuations according to the volatility of wind speed and accelerated rotor speed by regulating the compensation gain. To verify the effectiveness, stability and good dynamic performance of the new method, mechanism analyses, small signal analyses, and simulation studies are carried out on a doubly-fed induction generator (DFIG) wind turbine. Study results show that both the response speed and the efficiency of the proposed method are increased. In addition, the extra fluctuations of output power caused by the high efficiency are effectively reduced by the proposed method with FIS.

  20. The CACAO Method for Smoothing, Gap Filling, and Characterizing Seasonal Anomalies in Satellite Time Series

    Science.gov (United States)

    Verger, Aleixandre; Baret, F.; Weiss, M.; Kandasamy, S.; Vermote, E.

    2013-01-01

    Consistent, continuous, and long time series of global biophysical variables derived from satellite data are required for global change research. A novel climatology fitting approach called CACAO (Consistent Adjustment of the Climatology to Actual Observations) is proposed to reduce noise and fill gaps in time series by scaling and shifting the seasonal climatological patterns to the actual observations. The shift and scale CACAO parameters adjusted for each season allow quantifying shifts in the timing of seasonal phenology and inter-annual variations in magnitude as compared to the average climatology. CACAO was assessed first over simulated daily Leaf Area Index (LAI) time series with varying fractions of missing data and noise. Then, performances were analyzed over actual satellite LAI products derived from the AVHRR Long-Term Data Record for the 1981-2000 period over the BELMANIP2 globally representative sample of sites. Comparison with two widely used temporal filtering methods, the asymmetric Gaussian (AG) model and the Savitzky-Golay (SG) filter as implemented in TIMESAT, revealed that CACAO achieved better performance for smoothing AVHRR time series characterized by a high level of noise and frequent missing observations. The resulting smoothed time series captures the vegetation dynamics well and shows no gaps, as compared to the 50-60% of data still missing after AG or SG reconstruction. The results of the simulation experiments, as well as confrontation with actual AVHRR time series, indicate that the proposed CACAO method is more robust to noise and missing data than the AG and SG methods for phenology extraction.
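
    The core CACAO step, adjusting the climatology to the observations by a seasonal shift and scale, can be sketched as a grid search over shifts with a least-squares fit of scale and offset at each candidate. The function below is a minimal reading of that idea, not the authors' implementation; NaNs mark missing observations.

        import numpy as np

        def cacao_fit(obs, climatology, max_shift=30):
            """Shift-and-scale the climatology to best match the observations."""
            valid = ~np.isnan(obs)
            best = (np.inf, 0, 1.0, 0.0)
            for shift in range(-max_shift, max_shift + 1):
                clim = np.roll(climatology, shift)
                # Least-squares scale and offset on the valid observations.
                A = np.column_stack([clim[valid], np.ones(valid.sum())])
                (scale, offset), *_ = np.linalg.lstsq(A, obs[valid], rcond=None)
                resid = obs[valid] - (scale * clim[valid] + offset)
                sse = float(resid @ resid)
                if sse < best[0]:
                    best = (sse, shift, scale, offset)
            _, shift, scale, offset = best
            # Gap-free, smoothed series from the adjusted climatology.
            return scale * np.roll(climatology, shift) + offset, shift, scale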

  1. Smoothed Particle Inference: A Kilo-Parametric Method for X-ray Galaxy Cluster Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Peterson, John R.; Marshall, P.J.; /KIPAC, Menlo Park; Andersson, K.; /Stockholm U. /SLAC

    2005-08-05

    We propose an ambitious new method that models the intracluster medium in clusters of galaxies as a set of X-ray emitting smoothed particles of plasma. Each smoothed particle is described by a handful of parameters including temperature, location, size, and elemental abundances. Hundreds to thousands of these particles are used to construct a model cluster of galaxies, with the appropriate complexity estimated from the data quality. This model is then compared iteratively with X-ray data in the form of adaptively binned photon lists via a two-sample likelihood statistic and iterated via Markov Chain Monte Carlo. The complex cluster model is propagated through the X-ray instrument response using direct-sampling Monte Carlo methods. Using this approach the method can reproduce many of the features observed in the X-ray emission in a less assumption-dependent way than traditional analyses, and it allows for a more detailed characterization of the density, temperature, and metal abundance structure of clusters. Multi-instrument X-ray analyses and simultaneous X-ray, Sunyaev-Zeldovich (SZ), and lensing analyses are a straightforward extension of this methodology. Significant challenges still exist in understanding the degeneracy in these models and the statistical noise induced by the complexity of the models.

  2. Mean-variance analysis of block-iterative reconstruction algorithms modeling 3D detector response in SPECT

    Science.gov (United States)

    Lalush, D. S.; Tsui, B. M. W.

    1998-06-01

    We study the statistical convergence properties of two fast iterative reconstruction algorithms, the rescaled block-iterative (RBI) and ordered subset (OS) EM algorithms, in the context of cardiac SPECT with 3D detector response modeling. The Monte Carlo method was used to generate nearly noise-free projection data modeling the effects of attenuation, detector response, and scatter from the MCAT phantom. One thousand noise realizations were generated with an average count level approximating a typical Tl-201 cardiac study. Each noise realization was reconstructed using the RBI and OS algorithms for cases with and without detector response modeling. For each iteration up to twenty, we generated mean and variance images, as well as covariance images for six specific locations. Both OS and RBI converged in the mean to results that were close to the noise-free ML-EM result using the same projection model. When detector response was not modeled in the reconstruction, RBI exhibited considerably lower noise variance than OS for the same resolution. When 3D detector response was modeled, the RBI-EM provided a small improvement in the tradeoff between noise level and resolution recovery, primarily in the axial direction, while OS required about half the number of iterations of RBI to reach the same resolution. We conclude that OS is faster than RBI, but may be sensitive to errors in the projection model. Both OS-EM and RBI-EM are effective alternatives to the ML-EM algorithm, but noise level and speed of convergence depend on the projection model used.

  3. Review of smoothing methods for enhancement of noisy data from heavy-duty LHD mining machines

    Science.gov (United States)

    Wodecki, Jacek; Michalak, Anna; Stefaniak, Paweł

    2018-01-01

    Appropriate analysis of data measured on heavy-duty mining machines is essential for process monitoring, management, and optimization. Some particular classes of machines, for example LHD (load-haul-dump) machines, hauling trucks, and drilling/bolting machines, are characterized by cyclic operations. In those cases, identification of cycles and their segments, or in other words simply data segmentation, is key to evaluating their performance, which may be very useful from the management point of view, for example by opening the process to optimization. However, in many cases such raw signals are contaminated with various artifacts and are in general expected to be very noisy, which makes the segmentation task very difficult or even impossible. To deal with that problem, there is a need for efficient smoothing methods that retain the informative trends in the signals while disregarding noise and other undesired non-deterministic components. In this paper the authors present a review of various approaches to diagnostic data smoothing. The described methods can be used in a fast and efficient way, effectively cleaning the signals while preserving the informative deterministic behaviour that is crucial for precise segmentation and other approaches to industrial data analysis.

  4. Method of adiabatic modes in studying problems of smoothly irregular open waveguide structures

    International Nuclear Information System (INIS)

    Sevastianov, L. A.; Egorov, A. A.; Sevastyanov, A. L.

    2013-01-01

    Basic steps in developing an original method of adiabatic modes that makes it possible to solve the direct and inverse problems of simulating and designing three-dimensional multilayered smoothly irregular open waveguide structures are described. A new element in the method is that an approximate solution of Maxwell's equations is made to obey “inclined” boundary conditions at the interfaces between the media being considered. These boundary conditions take into account the obliqueness of planes tangent to nonplanar boundaries between the media and lead to new equations for coupled vector quasiwaveguide hybrid adiabatic modes. Solutions of these equations describe the phenomenon of “entanglement” of the two linear polarizations of an irregular multilayered waveguide, the appearance of a new mode in an entangled state, and the effect of rotation of the polarization plane of quasiwaveguide modes. The efficiency of the method is demonstrated by the example of numerically simulating a thin-film generalized waveguide Lüneburg lens.

  5. Face-based smoothed finite element method for real-time simulation of soft tissue

    Science.gov (United States)

    Mendizabal, Andrea; Bessard Duparc, Rémi; Bui, Huu Phuoc; Paulus, Christoph J.; Peterlik, Igor; Cotin, Stéphane

    2017-03-01

    In soft tissue surgery, a tumor and other anatomical structures are usually located using preoperative CT or MR images. However, due to the deformation of the concerned tissues, this information suffers from inaccuracy when employed directly during the surgery. In order to account for these deformations in the planning process, the use of a bio-mechanical model of the tissues is needed. Such models are often designed using the finite element method (FEM), which is, however, computationally expensive, in particular when a high accuracy of the simulation is required. In our work, we propose to use a smoothed finite element method (S-FEM) in the context of modeling the soft tissue deformation. This numerical technique has been introduced recently to overcome the overly stiff behavior of the standard FEM and to improve the solution accuracy and the convergence rate in solid mechanics problems. In this paper, a face-based smoothed finite element method (FS-FEM) using 4-node tetrahedral elements is presented. We show that in some cases, the method allows for reducing the number of degrees of freedom, while preserving the accuracy of the discretization. The method is evaluated on a simulation of a cantilever beam loaded at the free end and on a simulation of a 3D cube under traction and compression forces. Further, it is applied to the simulation of the brain shift and of the kidney's deformation. The results demonstrate that the method outperforms the standard FEM in a bending scenario and that it has accuracy similar to the standard FEM in the simulations of the brain shift and of the kidney's deformation.

  6. Adaptive Multilevel Methods with Local Smoothing for $H^1$- and $H^{\\mathrm{curl}}$-Conforming High Order Finite Element Methods

    KAUST Repository

    Janssen, Bä rbel; Kanschat, Guido

    2011-01-01

    A multilevel method on adaptive meshes with hanging nodes is presented, and the additional matrices appearing in the implementation are derived. Smoothers of overlapping Schwarz type are discussed; smoothing is restricted to the interior of the subdomains refined to the current level; thus it has optimal computational complexity. When applied to conforming finite element discretizations of elliptic problems and Maxwell equations, the method's convergence rates are very close to those for the nonadaptive version. Furthermore, the smoothers remain efficient for high order finite elements. We discuss the implementation in a general finite element code using the example of the deal.II library. © 2011 Society for Industrial and Applied Mathematics.

  7. On the computational efficiency of isogeometric methods for smooth elliptic problems using direct solvers

    KAUST Repository

    Collier, Nathan; Dalcin, Lisandro; Calo, Victor M.

    2014-01-01

    SUMMARY: We compare the computational efficiency of isogeometric Galerkin and collocation methods for partial differential equations in the asymptotic regime. We define a metric to identify when numerical experiments have reached this regime. We then apply these ideas to analyze the performance of different isogeometric discretizations, which encompass C0 finite element spaces and higher-continuous spaces. We derive convergence and cost estimates in terms of the total number of degrees of freedom and then perform an asymptotic numerical comparison of the efficiency of these methods applied to an elliptic problem. These estimates are derived assuming that the underlying solution is smooth, the full Gauss quadrature is used in each non-zero knot span and the numerical solution of the discrete system is found using a direct multi-frontal solver. We conclude that under the assumptions detailed in this paper, higher-continuous basis functions provide marginal benefits.

  9. Assessment of finite element and smoothed particles hydrodynamics methods for modeling serrated chip formation in hardened steel

    Directory of Open Access Journals (Sweden)

    Usama Umer

    2016-05-01

    Full Text Available This study performs comparative analyses of modeling serrated chip morphologies using the traditional finite element method and the smoothed particles hydrodynamics method. Although finite element models have been employed to predict machining performance variables for the last two decades, many drawbacks and limitations exist in current finite element models. Problems such as excessive mesh distortion, the high numerical cost of adaptive meshing techniques, and the need for geometric chip separation criteria hinder their practical implementation in the metal cutting industry. In this study, a mesh-free method, namely smoothed particles hydrodynamics, is implemented for modeling serrated chip morphology while machining AISI H13 hardened tool steel. The smoothed particles hydrodynamics models are compared with the traditional finite element models, and it has been found that the smoothed particles hydrodynamics models have good capabilities in handling large distortions and do not need any geometric or mesh-based chip separation criterion.

  10. Application of Holt exponential smoothing and ARIMA method for data population in West Java

    Science.gov (United States)

    Supriatna, A.; Susanti, D.; Hertini, E.

    2017-01-01

    One time series method that is often used to predict data containing a trend is Holt's method. Holt's method applies separate smoothing parameters to the level and the trend of the original data. In addition to Holt's method, the ARIMA method can be used on a wide variety of data, including data containing a trend pattern. Actual population data from 1998-2015 contain a trend, so both the Holt and ARIMA methods can be applied to obtain predictions for several periods ahead. The best method is selected by the smallest MAPE and MAE errors. The Holt method predicts a population of 47,205,749 in 2016, 47,535,324 in 2017, and 48,041,672 in 2018, with a MAPE of 0.469744 and an MAE of 189,731. The ARIMA method predicts 46,964,682 in 2016, 47,342,189 in 2017, and 47,899,696 in 2018, with a MAPE of 0.4380 and an MAE of 176,626.
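
    For reference, Holt's method maintains a level and a trend via two coupled recursions; a minimal sketch follows (the smoothing constants and the series below are illustrative, not the paper's fitted values):

        def holt_forecast(y, alpha=0.8, beta=0.2, horizon=3):
            """Holt's linear-trend exponential smoothing.

            level_t = alpha*y_t + (1 - alpha)*(level_{t-1} + trend_{t-1})
            trend_t = beta*(level_t - level_{t-1}) + (1 - beta)*trend_{t-1}
            The h-step-ahead forecast is level_T + h*trend_T.
            """
            level, trend = y[0], y[1] - y[0]
            for obs in y[1:]:
                prev_level = level
                level = alpha * obs + (1 - alpha) * (level + trend)
                trend = beta * (level - prev_level) + (1 - beta) * trend
            return [level + h * trend for h in range(1, horizon + 1)]

        population = [43.0, 43.5, 44.1, 44.6, 45.2, 45.9, 46.4]  # millions, illustrative
        print(holt_forecast(population))  # forecasts for the next three periods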

  11. A new smoothing modified three-term conjugate gradient method for the l1-norm minimization problem.

    Science.gov (United States)

    Du, Shouqiang; Chen, Miao

    2018-01-01

    We consider a class of nonsmooth optimization problems with l1-norm minimization, which has many applications in compressed sensing, signal reconstruction, and related engineering problems. Using smoothing approximation techniques, this kind of nonsmooth optimization problem can be transformed into a general unconstrained optimization problem, which can be solved by the proposed smoothing modified three-term conjugate gradient method. The method is based on the Polak-Ribière-Polyak conjugate gradient method, which has good numerical properties; the proposed method possesses the sufficient descent property without any line search and is also proved to be globally convergent. Finally, numerical experiments show the efficiency of the proposed method.
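
    The abstract does not spell out the smoothing approximation; one common choice, assumed here purely for illustration, replaces each |x_i| by the differentiable surrogate sqrt(x_i^2 + mu^2), after which gradient-based schemes such as conjugate gradient methods apply:

        import numpy as np

        def smoothed_l1(x, mu=1e-3):
            """Smooth approximation of ||x||_1: sum_i sqrt(x_i**2 + mu**2).

            Differentiable everywhere and converging to ||x||_1 as mu -> 0.
            """
            return np.sum(np.sqrt(x**2 + mu**2))

        def smoothed_l1_grad(x, mu=1e-3):
            """Gradient of the surrogate: x_i / sqrt(x_i**2 + mu**2)."""
            return x / np.sqrt(x**2 + mu**2)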

  12. Surface smoothness

    DEFF Research Database (Denmark)

    Tummala, Sudhakar; Dam, Erik B.

    2010-01-01

    accuracy, such novel markers must therefore be validated against clinically meaningful end-goals such as the ability to allow correct diagnosis. We present a method for automatic cartilage surface smoothness quantification in the knee joint. The quantification is based on a curvature flow method used....... We demonstrate that the fully automatic markers eliminate the time required for radiologist annotations, and in addition provide a diagnostic marker superior to the evaluated semi-manual markers....

  13. A DAFT DL_POLY distributed memory adaptation of the Smoothed Particle Mesh Ewald method

    Science.gov (United States)

    Bush, I. J.; Todorov, I. T.; Smith, W.

    2006-09-01

    The Smoothed Particle Mesh Ewald method [U. Essmann, L. Perera, M.L. Berkowtz, T. Darden, H. Lee, L.G. Pedersen, J. Chem. Phys. 103 (1995) 8577] for calculating long ranged forces in molecular simulation has been adapted for the parallel molecular dynamics code DL_POLY_3 [I.T. Todorov, W. Smith, Philos. Trans. Roy. Soc. London 362 (2004) 1835], making use of a novel 3D Fast Fourier Transform (DAFT) [I.J. Bush, The Daresbury Advanced Fourier transform, Daresbury Laboratory, 1999] that perfectly matches the Domain Decomposition (DD) parallelisation strategy [W. Smith, Comput. Phys. Comm. 62 (1991) 229; M.R.S. Pinches, D. Tildesley, W. Smith, Mol. Sim. 6 (1991) 51; D. Rapaport, Comput. Phys. Comm. 62 (1991) 217] of the DL_POLY_3 code. In this article we describe software adaptations undertaken to import this functionality and provide a review of its performance.

  14. Smooth polishing of femtosecond laser induced craters on cemented carbide by ultrasonic vibration method

    Science.gov (United States)

    Wang, H. P.; Guan, Y. C.; Zheng, H. Y.

    2017-12-01

    Rough surface features induced by laser irradiation have been a challenge for the fabrication of micro/nano scale features. In this work, we propose a hybrid ultrasonic vibration polishing method to improve the surface quality of microcraters produced by femtosecond laser irradiation on cemented carbide. The laser-roughened surfaces are significantly smoothed after ultrasonic vibration polishing due to the strong collision effect of diamond particles on the surfaces. 3D morphology, SEM and AFM analyses were conducted to characterize surface morphology and topography. The results indicate that a minimal surface roughness of Ra 7.60 nm was achieved on the polished surfaces. The fabrication of microcraters with smooth surfaces is applicable to molding processes for the mass production of micro-optical components.

  15. An effective method for smoothing the staggered dose distribution of multi-leaf collimator field edge

    International Nuclear Information System (INIS)

    Hwang, I.-M.; Lin, S.-Y.; Lee, M.-S.; Wang, C.-J.; Chuang, K.-S.; Ding, H.-J.

    2002-01-01

    Purpose: To smooth the staggered dose distribution that occurs in stepped leaves defined by a multi-leaf collimator (MLC). Materials and methods: The MLC Shaper program controlled the stepped leaves, which were shifted in a traveling range; the pattern of shift was from the out-bound position to the in-bound position with one-segment (cross-bound), three-segment, and five-segment shifts. Film was placed at a depth of 1.5 cm and irradiated with the same irradiation dose used for the cerrobend block experiment. Four field edges with the MLC defined at 15 deg., 30 deg., 45 deg. and 60 deg. angles relative to the jaw edge were measured, respectively, in this study. For the field edge defined by the multi-segment technique, the amplitude of the isodose lines for the 50% isodose line and both the 80% and 20% isodose lines was measured. The effective penumbra widths with 90-10% and 80-20% distances for different irradiations were determined at the four field edges with the MLC defined at 15 deg., 30 deg., 45 deg. and 60 deg. angles relative to the jaw edge. Results: Use of the five-segment technique for multi-leaf collimation at the 60 deg. angle field edge smooths each isodose line into an effectively straight line, similar to the pattern achieved using a cerrobend block. The separation of these lines is also important. The 80-20% effective penumbra width with the five-segment technique (8.23 mm) at a 60 deg. angle relative to the jaw edge is slightly wider (1.9 times) than the penumbra of the cerrobend block field edge (4.23 mm). We also found that the 90-10% effective penumbra width with the five-segment technique (12.68 mm) at a 60 deg. angle relative to the jaw edge is slightly wider (1.28 times) than the penumbra of the cerrobend block field edge (9.89 mm). Conclusion: The multi-segment technique is effective in smoothing the MLC staggered field edge. The effective penumbra width with more segments at larger angles relative to the field edge is slightly wider than the penumbra of a cerrobend block field edge.

  16. Methods for simultaneously identifying coherent local clusters with smooth global patterns in gene expression profiles

    Directory of Open Access Journals (Sweden)

    Lee Yun-Shien

    2008-03-01

    Full Text Available Abstract Background The hierarchical clustering tree (HCT) with a dendrogram [1] and the singular value decomposition (SVD) with a dimension-reduced representative map [2] are popular methods for two-way sorting of the gene-by-array matrix map employed in gene expression profiling. While HCT dendrograms tend to optimize local coherent clustering patterns, SVD leading eigenvectors usually identify better global grouping and transitional structures. Results This study proposes a flipping mechanism for a conventional agglomerative HCT using a rank-two ellipse (R2E) seriation, an improved SVD algorithm for sorting purposes by Chen [3], as an external reference. While HCTs always produce permutations with good local behaviour, the rank-two ellipse seriation gives the best global grouping patterns and smooth transitional trends. The resulting algorithm automatically integrates the desirable properties of each method so that users have access to a clustering and visualization environment for gene expression profiles that preserves coherent local clusters and identifies global grouping trends. Conclusion We demonstrate, through four examples, that the proposed method not only possesses better numerical and statistical properties, it also provides more meaningful biomedical insights than other sorting algorithms. We suggest that sorted proximity matrices for genes and arrays, in addition to the gene-by-array expression matrix, can greatly aid in the search for comprehensive understanding of gene expression structures. Software for the proposed methods can be obtained at http://gap.stat.sinica.edu.tw/Software/GAP.

  17. A moving control volume method for smooth computation of hydrodynamic forces and torques on immersed bodies

    Science.gov (United States)

    Nangia, Nishant; Patankar, Neelesh A.; Bhalla, Amneet P. S.

    2017-11-01

    Fictitious domain methods for simulating fluid-structure interaction (FSI) have been gaining popularity in the past few decades because of their robustness in handling arbitrarily moving bodies. Often the transient net hydrodynamic forces and torques on the body are desired quantities in these types of simulations. In past studies using immersed boundary (IB) methods, force measurements are contaminated with spurious oscillations due to the evaluation of possibly discontinuous spatial velocity or pressure gradients within or on the surface of the body. Based on an application of the Reynolds transport theorem, we present a moving control volume (CV) approach to computing the net forces and torques on a moving body immersed in a fluid. The approach is shown to be accurate for a wide array of FSI problems, including flow past stationary and moving objects, Stokes flow, and high Reynolds number free-swimming. The approach only requires far-field (smooth) velocity and pressure information, thereby suppressing spurious force oscillations and eliminating the need for any filtering. The proposed moving CV method is not limited to a specific IB method and is straightforward to implement within an existing parallel FSI simulation software. This work is supported by NSF (Award Numbers SI2-SSI-1450374, SI2-SSI-1450327, and DGE-1324585), the US Department of Energy, Office of Science, ASCR (Award Number DE-AC02-05CH11231), and NIH (Award Number HL117163).
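
    The abstract does not reproduce the discrete expressions; as a sketch, the continuous momentum balance behind a moving-control-volume force evaluation reads

        \mathbf{F}_{\mathrm{body}} = \oint_{S(t)} \boldsymbol{\sigma} \cdot \mathbf{n} \, \mathrm{d}A
        - \frac{\mathrm{d}}{\mathrm{d}t} \int_{V(t)} \rho \, \mathbf{u} \, \mathrm{d}V
        - \oint_{S(t)} \rho \, \mathbf{u} \, \big[ (\mathbf{u} - \mathbf{u}_S) \cdot \mathbf{n} \big] \, \mathrm{d}A,

    where V(t) is a control volume enclosing the body, S(t) is its outer surface moving with velocity u_S, sigma is the fluid stress tensor, and n is the outward normal. Only far-field quantities on S(t) enter the evaluation, which is what suppresses the spurious force oscillations near the immersed surface.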

  18. Scalable smoothing strategies for a geometric multigrid method for the immersed boundary equations

    Energy Technology Data Exchange (ETDEWEB)

    Bhalla, Amneet Pal Singh [Univ. of North Carolina, Chapel Hill, NC (United States); Knepley, Matthew G. [Rice Univ., Houston, TX (United States); Adams, Mark F. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Guy, Robert D. [Univ. of California, Davis, CA (United States); Griffith, Boyce E. [Univ. of North Carolina, Chapel Hill, NC (United States)

    2016-12-20

    The immersed boundary (IB) method is a widely used approach to simulating fluid-structure interaction (FSI). Although explicit versions of the IB method can suffer from severe time step size restrictions, these methods remain popular because of their simplicity and generality. In prior work (Guy et al., Adv Comput Math, 2015), some of us developed a geometric multigrid preconditioner for a stable semi-implicit IB method under Stokes flow conditions; however, this solver methodology used a Vanka-type smoother that presented limited opportunities for parallelization. This work extends this Stokes-IB solver methodology by developing smoothing techniques that are suitable for parallel implementation. Specifically, we demonstrate that an additive version of the Vanka smoother can yield an effective multigrid preconditioner for the Stokes-IB equations, and we introduce an efficient Schur complement-based smoother that is also shown to be effective for the Stokes-IB equations. We investigate the performance of these solvers for a broad range of material stiffnesses, both for Stokes flows and flows at nonzero Reynolds numbers, and for thick and thin structural models. We show here that linear solver performance degrades with increasing Reynolds number and material stiffness, especially for thin interface cases. Nonetheless, the proposed approaches promise to yield effective solution algorithms, especially at lower Reynolds numbers and at modest-to-high elastic stiffnesses.

  19. Developing and setting up optical methods to study the speckle patterns created by optical beam smoothing

    International Nuclear Information System (INIS)

    Surville, J.

    2005-12-01

    We have developed three main optical methods to study the speckles generated by a smoothed laser source. The first method addresses the measurement of the temporal and spatial correlation functions of the source with a modified Michelson interferometer. The second is a pump-probe technique designed to capture an image of the speckle pattern generated at a set time. The third is an evolution of the second method dedicated to time-frequency coding by means of a frequency-chirped probe pulse; with it, the speckles can be followed in time and their motion can be described. Using these three methods, the average size and duration of the speckles can be measured. It is also possible to measure the size and duration of each speckle and, most notably, its velocity in a given direction. All results obtained have been compared with the existing theories. We show that the statistical distributions of the measured speckle sizes and intensities agree satisfactorily with theoretical values.

  20. Mean-Variance Portfolio Selection Problem with Stochastic Salary for a Defined Contribution Pension Scheme: A Stochastic Linear-Quadratic-Exponential Framework

    Directory of Open Access Journals (Sweden)

    Charles Nkeki

    2013-11-01

    Full Text Available This paper examines a mean-variance portfolio selection problem with stochastic salary and an inflation protection strategy in the accumulation phase of a defined contribution (DC) pension plan. The utility function is assumed to be quadratic. It is assumed that the flow of contributions made by the pension plan member (PPM) is invested in a market characterized by a cash account, an inflation-linked bond and a stock. In this paper, the inflation-linked bond is traded and used to hedge inflation risks associated with the investment. The aim of this paper is to maximize the expected final wealth and minimize its variance. The efficient frontier for the three classes of assets (under the quadratic utility function) that enables PPMs to choose their own wealth and risk in their investment profile at retirement is obtained.
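
    As a numerical illustration of tracing such an efficient frontier for three assets, the sketch below solves the equality-constrained minimum-variance problem via its KKT system; the expected returns and covariance are illustrative placeholders, not the paper's market model:

        import numpy as np

        # Illustrative annual statistics for (cash account, inflation-linked bond, stock)
        mu = np.array([0.02, 0.04, 0.08])
        cov = np.array([[1e-4, 0.0,  0.0],
                        [0.0,  4e-3, 1e-3],
                        [0.0,  1e-3, 2.5e-2]])

        def min_variance_weights(target):
            """Fully invested portfolio of minimum variance with mean `target`.

            Solves min w'Cw subject to w'mu = target and w'1 = 1
            via the linear KKT system.
            """
            n = len(mu)
            ones = np.ones(n)
            A = np.block([[2 * cov, mu[:, None], ones[:, None]],
                          [mu[None, :], np.zeros((1, 2))],
                          [ones[None, :], np.zeros((1, 2))]])
            b = np.concatenate([np.zeros(n), [target, 1.0]])
            return np.linalg.solve(A, b)[:n]

        for target in (0.03, 0.05, 0.07):  # sample points on the frontier
            w = min_variance_weights(target)
            print(target, w, np.sqrt(w @ cov @ w))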

  2. Multicrack Localization in Rotors Based on Proper Orthogonal Decomposition Using Fractal Dimension and Gapped Smoothing Method

    Directory of Open Access Journals (Sweden)

    Zhiwen Lu

    2016-01-01

    Full Text Available Multicrack localization in operating rotor systems is still a challenge today. Focusing on this challenge, a new approach based on proper orthogonal decomposition (POD) is proposed for multicrack localization in rotors. A two-disc rotor-bearing system with breathing cracks is established by the finite element method, and simulated sensors are distributed along the rotor to obtain the steady-state transverse responses required by POD. Based on the discontinuities introduced in the proper orthogonal modes (POMs) at the locations of cracks, the characteristic POM (CPOM), which is sensitive to crack locations and robust to noise, is selected for crack localization. Instead of using the CPOM directly, due to its difficulty in localizing incipient cracks, damage indexes using fractal dimension (FD) and the gapped smoothing method (GSM) are adopted, in order to extract the locations more efficiently. The method proposed in this work is validated to be effective for multicrack localization in rotors by numerical experiments on rotors in different crack configuration cases considering the effects of noise. In addition, the feasibility of using fewer sensors is also investigated.
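
    The record does not give the gapped smoothing formula; a common variant, assumed here for illustration, fits a cubic through the four neighbors of each point and takes the squared gap between measurement and fit as the damage index:

        import numpy as np

        def gsm_damage_index(mode_shape):
            """Gapped smoothing method (GSM) damage index along a mode shape.

            A cubic fitted through points i-2, i-1, i+1, i+2 predicts the
            value at i; a large squared gap flags a local discontinuity
            such as a crack.
            """
            y = np.asarray(mode_shape, dtype=float)
            x = np.arange(len(y))
            index = np.zeros_like(y)
            for i in range(2, len(y) - 2):
                nbr = [i - 2, i - 1, i + 1, i + 2]
                coeffs = np.polyfit(x[nbr], y[nbr], 3)  # exact cubic through 4 points
                index[i] = (y[i] - np.polyval(coeffs, x[i])) ** 2
            return index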

  3. Method of adiabatic modes in studying problems of smoothly irregular open waveguide structures

    Energy Technology Data Exchange (ETDEWEB)

    Sevastianov, L. A., E-mail: sevast@sci.pfu.edu.ru [Peoples' Friendship University of Russia (Russian Federation); Egorov, A. A. [Russian Academy of Sciences, Prokhorov General Physics Institute (Russian Federation); Sevastyanov, A. L. [Peoples' Friendship University of Russia (Russian Federation)

    2013-02-15

    Basic steps in developing an original method of adiabatic modes that makes it possible to solve the direct and inverse problems of simulating and designing three-dimensional multilayered smoothly irregular open waveguide structures are described. A new element in the method is that an approximate solution of Maxwell's equations is made to obey 'inclined' boundary conditions at the interfaces between the media being considered. These boundary conditions take into account the obliqueness of planes tangent to nonplanar boundaries between the media and lead to new equations for coupled vector quasiwaveguide hybrid adiabatic modes. Solutions of these equations describe the phenomenon of 'entanglement' of two linear polarizations of an irregular multilayered waveguide, the appearance of a new mode in an entangled state, and the effect of rotation of the polarization plane of quasiwaveguide modes. The efficiency of the method is demonstrated by considering the example of numerically simulating a thin-film generalized waveguide Lueneburg lens.

  4. GPUs, a new tool of acceleration in CFD: efficiency and reliability on smoothed particle hydrodynamics methods.

    Directory of Open Access Journals (Sweden)

    Alejandro C Crespo

    Full Text Available Smoothed Particle Hydrodynamics (SPH) is a numerical method commonly used in Computational Fluid Dynamics (CFD) to simulate complex free-surface flows. Simulations with this mesh-free particle method far exceed the capacity of a single processor. In this paper, as part of a dual-functioning code for either central processing units (CPUs) or graphics processor units (GPUs), a parallelisation using GPUs is presented. The GPU parallelisation technique uses the Compute Unified Device Architecture (CUDA) of nVidia devices. Simulations with more than one million particles on a single GPU card exhibit speedups of up to two orders of magnitude over a single-core CPU. It is demonstrated that the code achieves different speedups on different CUDA-enabled GPUs. The numerical behaviour of the SPH code is validated with a standard benchmark test case of a dam-break flow impacting an obstacle, where good agreement with the experimental results is observed. Both the achieved speed-ups and the quantitative agreement with experiments suggest that CUDA-based GPU programming can be used in SPH methods with efficiency and reliability.
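
    The kernel summation that such GPU codes parallelize can be sketched serially; the cubic-spline kernel below is the standard 2D form, and the O(N^2) all-pairs loop stands in for the neighbor lists a production code (or its CUDA port) would use:

        import numpy as np

        def cubic_spline_kernel(r, h):
            """Standard 2D cubic-spline SPH kernel W(r, h)."""
            q = r / h
            sigma = 10.0 / (7.0 * np.pi * h**2)  # 2D normalization constant
            return sigma * np.where(q < 1.0, 1 - 1.5 * q**2 + 0.75 * q**3,
                            np.where(q < 2.0, 0.25 * (2 - q)**3, 0.0))

        def sph_density(positions, masses, h):
            """SPH density estimate: rho_i = sum_j m_j W(|x_i - x_j|, h)."""
            diff = positions[:, None, :] - positions[None, :, :]
            r = np.linalg.norm(diff, axis=-1)
            return (masses[None, :] * cubic_spline_kernel(r, h)).sum(axis=1)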

  5. Analysis and forecasting of wind velocity in chetumal, quintana roo, using the single exponential smoothing method

    Energy Technology Data Exchange (ETDEWEB)

    Cadenas, E. [Facultad de Ingenieria Mecanica, Universidad Michoacana de San Nicolas de Hidalgo, Santiago Tapia No. 403, Centro (Mexico); Jaramillo, O.A.; Rivera, W. [Centro de Ivestigacion en Energia, Universidad Nacional Autonoma de Mexico, Apartado Postal 34, Temixco 62580, Morelos (Mexico)

    2010-05-15

    In this paper the analysis and forecasting of wind velocities in Chetumal, Quintana Roo, Mexico is presented. Measurements were made by the Instituto de Investigaciones Electricas (IIE) over two years, from 2004 to 2005. This location exemplifies the wind energy generation potential of the Caribbean coast of Mexico that could be exploited by the hotel industry in the next decade. The wind speed and wind direction were measured at 10 m above ground level. Sensors with high accuracy and a low starting threshold were used. The wind velocity was recorded using a data acquisition system supplied by a 10 W photovoltaic panel. The wind speed values were sampled at a frequency of 1 Hz and the average wind speed was recorded over regular intervals of 10 min. First, a statistical analysis of the time series was carried out using conventional and robust measures. Then the last day of measurements was forecast using the single exponential smoothing (SES) method. The results showed very good accuracy with this technique for an α value of 0.9. Finally, the SES method was compared with an artificial neural network (ANN) method, with the former showing better results. (author)
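
    The SES recursion itself is a one-liner; a sketch using the α = 0.9 reported as best (the handling of the 10-min wind records is omitted):

        def ses_forecast(y, alpha=0.9):
            """Single exponential smoothing: s_t = alpha*y_t + (1-alpha)*s_{t-1}.

            The final smoothed value is the flat one-step-ahead forecast.
            """
            s = y[0]
            for obs in y[1:]:
                s = alpha * obs + (1 - alpha) * s
            return s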

  6. Methods of solving of the optimal stabilization problem for stationary smooth control systems. Part I

    Directory of Open Access Journals (Sweden)

    G. Kondrat'ev

    1999-10-01

    Full Text Available In this article some ideas from Hamiltonian mechanics and differential-algebraic geometry are used for the exact definition of the potential function (Bellman-Lyapunov function) in the optimal stabilization problem for smooth finite-dimensional systems.

  7. Preparation of smooth, flexible and stable silver nanowires- polyurethane composite transparent conductive films by transfer method

    Science.gov (United States)

    Bai, Shengchi; Wang, Haifeng; Yang, Hui; Zhang, He; Guo, Xingzhong

    2018-02-01

    Silver nanowire (AgNWs)-polyurethane (PU) composite transparent conductive films were fabricated via a transfer method using AgNWs conductive inks and polyurethane as starting materials, and the effects of post-treatments of the AgNWs film, including heat treatment, an NaCl solution bath and an HCl solution bath, on the sheet resistance and transmittance of the composite films were investigated in detail. AgNWs networks are uniformly embedded in the PU layer, which improves the adhesion and reduces the surface roughness of the AgNWs-PU composite films. Heat treatment melts and welds the nanowires, while the NaCl and HCl solution baths promote the dissolution and re-deposition of silver and the dissolving of the polymer, both of which form conduction pathways and improve the contact of AgNWs, reducing the sheet resistance. A smooth and flexible AgNWs-PU composite film with a transmittance of 85% and a sheet resistance of 15 Ω · sq‑1 is obtained after treatment in a 0.5 wt% HCl solution bath for 60 s, and the optoelectronic properties of the resulting composite film are maintained after 1000 bending cycles and 100 days.

  8. Methods and energy storage devices utilizing electrolytes having surface-smoothing additives

    Science.gov (United States)

    Xu, Wu; Zhang, Jiguang; Graff, Gordon L; Chen, Xilin; Ding, Fei

    2015-11-12

    Electrodeposition and energy storage devices utilizing an electrolyte having a surface-smoothing additive can result in self-healing, instead of self-amplification, of initial protuberant tips that give rise to roughness and/or dendrite formation on the substrate and anode surface. For electrodeposition of a first metal (M1) on a substrate or anode from one or more cations of M1 in an electrolyte solution, the electrolyte solution is characterized by a surface-smoothing additive containing cations of a second metal (M2), wherein cations of M2 have an effective electrochemical reduction potential in the solution lower than that of the cations of M1.

  9. A three-level support method for smooth switching of the micro-grid operation model

    Science.gov (United States)

    Zong, Yuanyang; Gong, Dongliang; Zhang, Jianzhou; Liu, Bin; Wang, Yun

    2018-01-01

    Smooth switching of a micro-grid between the grid-connected operation mode and the off-grid operation mode is one of the key technologies for ensuring it runs flexibly and efficiently. The basic control strategy and the switching principle of the micro-grid are analyzed in this paper. The causes of the voltage and frequency fluctuations in the switching process are analyzed from the viewpoints of power balance and control strategy, and the operation mode switching strategy is improved accordingly. From the three aspects of the controller's current inner-loop reference signal tracking, voltage outer-loop control strategy optimization and micro-grid energy balance management, a three-level strategy for smooth switching of the micro-grid operation mode is proposed. Finally, it is shown by simulation that the proposed control strategy makes the switching process smooth and stable and effectively reduces the voltage and frequency fluctuations.

  10. A modified compressible smoothed particle hydrodynamics method and its application on the numerical simulation of low and high velocity impacts

    International Nuclear Information System (INIS)

    Amanifard, N.; Haghighat Namini, V.

    2012-01-01

    In this study a Modified Compressible Smoothed Particle Hydrodynamics method is introduced which is applicable to problems involving shock wave structures and elastic-plastic deformations of solids. The algorithm of the method is based on an approach which discretizes the momentum equation into three parts, solves each part separately, and calculates their effects on the velocity field and the displacement of particles. The most distinctive feature of the method is that it exactly removes artificial viscosity from the formulations and shows good agreement with other credible numerical methods, without any spurious numerical fractures or tensile instabilities, while requiring no extra modifications. Two types of problems involving elastic-plastic deformations and shock waves are presented here to demonstrate the capability of the method in simulating such problems and its ability to capture shocks. The problems proposed here are low and high velocity impacts between aluminum projectiles and semi-infinite aluminum beams. An elastic-perfectly plastic model is chosen as the constitutive model of the aluminum, and the results of the simulations are compared with other credible studies of these cases.

  11. An efficient depth map preprocessing method based on structure-aided domain transform smoothing for 3D view generation.

    Directory of Open Access Journals (Sweden)

    Wei Liu

    Full Text Available Depth image-based rendering (DIBR), which is used to render virtual views from a color image and the corresponding depth map, is one of the key techniques in the 2D-to-3D conversion process. Due to the absence of knowledge about the 3D structure of a scene and its corresponding texture, DIBR in the 2D-to-3D conversion process inevitably leads to holes in the resulting 3D image as a result of newly exposed areas. In this paper, we propose a structure-aided depth map preprocessing framework in the transformed domain, inspired by the recently proposed domain transform for its low complexity and high efficiency. Firstly, our framework integrates hybrid constraints, including scene structure, edge consistency and visual saliency information, in the transformed domain to improve the performance of depth map preprocessing in an implicit way. Then, adaptive smoothing localization is incorporated into the proposed framework to further reduce over-smoothing and enhance optimization in the non-hole regions. Different from other similar methods, the proposed method can simultaneously achieve hole filling, edge correction and local smoothing for typical depth maps in a unified framework. Thanks to these advantages, it yields visually satisfactory results with less computational complexity for high-quality 2D-to-3D conversion. Numerical experimental results demonstrate the excellent performance of the proposed method.

  12. A generalized Fellner-Schall method for smoothing parameter optimization with application to Tweedie location, scale and shape models.

    Science.gov (United States)

    Wood, Simon N; Fasiolo, Matteo

    2017-12-01

    We consider the optimization of smoothing parameters and variance components in models with a regular log likelihood subject to quadratic penalization of the model coefficients, via a generalization of the method of Fellner (1986) and Schall (1991). In particular: (i) we generalize the original method to the case of penalties that are linear in several smoothing parameters, thereby covering the important cases of tensor product and adaptive smoothers; (ii) we show why the method's steps increase the restricted marginal likelihood of the model, that it tends to converge faster than the EM algorithm, or obvious accelerations of this, and investigate its relation to Newton optimization; (iii) we generalize the method to any Fisher regular likelihood. The method represents a considerable simplification over existing methods of estimating smoothing parameters in the context of regular likelihoods, without sacrificing generality: for example, it is only necessary to compute with the same first and second derivatives of the log-likelihood required for coefficient estimation, and not with the third or fourth order derivatives required by alternative approaches. Examples are provided which would have been impossible or impractical with pre-existing Fellner-Schall methods, along with an example of a Tweedie location, scale and shape model which would be a challenge for alternative methods, and a sparse additive modeling example where the method facilitates computational efficiency gains of several orders of magnitude. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited. © 2017, The Authors Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.

  13. Smoothing of the Time Structure of Slowly Extracted Beam From Synchrotron by RF-Knock-out Method

    International Nuclear Information System (INIS)

    Voloshnyuk, A.V.; Bezshyjko, O.A.; Dolinskiy, A.V.; Dolinskij, A.V.

    2005-01-01

    This work presents results of a study on smoothing the time structure of a bunch slowly extracted from a synchrotron. A numerical algorithm was designed to study the influence of the radio-frequency field of the resonator on the time structure of the bunch. The numerical algorithm is based on the Monte Carlo method, in which particles in the beam are extracted by slowly moving them toward the third-order resonance condition. As first experiments showed, the characteristics of the time structure are substantially smoothed when synchrotron oscillations are used. A theoretical explanation of the factors influencing the time structure of the slowly extracted beam is given in this work.

  14. Users manual for Opt-MS : local methods for simplicial mesh smoothing and untangling.

    Energy Technology Data Exchange (ETDEWEB)

    Freitag, L.

    1999-07-20

    Creating meshes containing good-quality elements is a challenging, yet critical, problem facing computational scientists today. Several researchers have shown that the size of the mesh, the shape of the elements within that mesh, and their relationship to the physical application of interest can profoundly affect the efficiency and accuracy of many numerical approximation techniques. If the application contains anisotropic physics, the mesh can be improved by considering both local characteristics of the approximate application solution and the geometry of the computational domain. If the application is isotropic, regularly shaped elements in the mesh reduce the discretization error, and the mesh can be improved a priori by considering geometric criteria only. The Opt-MS package provides several local node-point smoothing techniques that improve elements in the mesh by adjusting grid point locations using geometric criteria. The package is easy to use; only three subroutine calls are required for the user to begin using the software. The package is also flexible; the user may change the technique, function, or dimension of the problem at any time during the mesh smoothing process. Opt-MS is designed to interface with C and C++ codes, and examples for both two- and three-dimensional meshes are provided.

  15. Using LMS Method in Smoothing Reference Centile Curves for Lipid Profile of Iranian Children and Adolescents: A CASPIAN Study

    Directory of Open Access Journals (Sweden)

    M Hoseini

    2012-05-01

    Full Text Available
    Background and Objectives: LMS is a general method for fitting smooth reference centile curves in the medical sciences. Such curves describe the distribution of a measurement as it changes according to a covariate such as age or time. The method summarizes the changing distribution by three parameters: the median (M), the coefficient of variation (S) and the Box-Cox power (L, skewness). Applying maximum penalized likelihood and spline functions, the three curves are estimated and fitted, and optimum smoothness is achieved. This study was conducted to provide the percentiles of the lipid profile of Iranian children and adolescents by the LMS method.
    Methods: Smoothed reference centile curves of four lipid measures (triglycerides, total, LDL- and HDL-cholesterol) were developed from the data of 4824 Iranian school students, aged 6-18 years, living in six regions (Tabriz, Rasht, Gorgan, Mashad, Yazd and Tehran-Firouzkouh) of Iran. Demographic and laboratory data were taken from the national study of the surveillance and prevention of non-communicable diseases from childhood (CASPIAN study). After data management, data of 4824 students were included in the statistical analysis, which was conducted with the modified LMS method proposed by Cole. The curves were fitted with four to ten degrees of freedom, and tools such as deviance, Q tests and detrended Q-Q plots were used to check the goodness of fit of the models.
    Results: All tools confirmed the model, and the LMS method proved to be an appropriate method for smoothing reference centiles. The method revealed the distributional features of the variables, serving as an objective tool to determine their relative importance.
    Conclusion: This study showed that the triglycerides level is higher and
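
    For reference, Cole's LMS transformation converts a measurement y into a z-score from the fitted L (skewness), M (median) and S (coefficient of variation) curves at the child's age; a sketch with illustrative values:

        import math

        def lms_zscore(y, L, M, S):
            """Cole's LMS z-score: ((y/M)**L - 1)/(L*S), or log(y/M)/S if L == 0."""
            if abs(L) < 1e-12:
                return math.log(y / M) / S
            return ((y / M) ** L - 1.0) / (L * S)

        # Illustrative only: a triglyceride reading against hypothetical fitted curves
        print(lms_zscore(y=1.4, L=-0.5, M=1.1, S=0.3))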

  16. Bayesian Exponential Smoothing.

    OpenAIRE

    Forbes, C.S.; Snyder, R.D.; Shami, R.S.

    2000-01-01

    In this paper, a Bayesian version of the exponential smoothing method of forecasting is proposed. The approach is based on a state space model containing only a single source of error for each time interval. This model allows us to improve current practices surrounding exponential smoothing by providing both point predictions and measures of the uncertainty surrounding them.

  17. Smooth manifolds

    CERN Document Server

    Sinha, Rajnikant

    2014-01-01

    This book offers an introduction to the theory of smooth manifolds, helping students to familiarize themselves with the tools they will need for mathematical research on smooth manifolds and differential geometry. The book primarily focuses on topics concerning differential manifolds, tangent spaces, multivariable differential calculus, topological properties of smooth manifolds, embedded submanifolds, Sard’s theorem and Whitney embedding theorem. It is clearly structured, amply illustrated and includes solved examples for all concepts discussed. Several difficult theorems have been broken into many lemmas and notes (equivalent to sub-lemmas) to enhance the readability of the book. Further, once a concept has been introduced, it reoccurs throughout the book to ensure comprehension. Rank theorem, a vital aspect of smooth manifolds theory, occurs in many manifestations, including rank theorem for Euclidean space and global rank theorem. Though primarily intended for graduate students of mathematics, the book ...

  18. Exploration of faint absorption bands in the reflectance spectra of the asteroids by method of optimal smoothing: Vestoids

    Science.gov (United States)

    Shestopalov, D. I.; McFadden, L. A.; Golubeva, L. F.

    2007-04-01

    An optimization method of smoothing noisy spectra was developed to investigate faint absorption bands in the visual spectral region of reflectance spectra of asteroids and the compositional information derived from their analysis. The smoothing algorithm is called "optimal" because the algorithm determines the best running box size to separate weak absorption bands from the noise. The method was tested for its sensitivity to identifying false features in the smoothed spectrum, and its correctness in predicting real absorption bands was tested with artificial spectra simulating asteroid reflectance spectra. After validating the method we optimally smoothed 22 vestoid spectra from SMASS1 [Xu, Sh., Binzel, R.P., Burbine, T.H., Bus, S.J., 1995. Icarus 115, 1-35]. We show that the resulting bands are not telluric features. Interpretation of the absorption bands in the asteroid spectra was based on the spectral properties of both terrestrial and meteorite pyroxenes. We assigned the bands located near 480, 505, 530, and 550 nm to spin-forbidden crystal field bands of ferrous iron, whereas the bands near 570, 600, and 650 nm are attributed to the crystal field bands of trivalent chromium and/or ferric iron in low-calcium pyroxenes on the asteroids' surface. While not measured by microprobe analysis, Fe 3+ site occupancy can be measured with Mössbauer spectroscopy, and is seen in trace amounts in pyroxenes. We believe that trace amounts of Fe 3+ on vestoid surfaces may be due to oxidation from impacts by icy bodies. If that is the case, they should be ubiquitous in the asteroid belt wherever pyroxene absorptions are found. The pyroxene composition of four asteroids of our set is determined from the band positions of the absorptions at 505 and 1000 nm, implying that orthopyroxenes across the whole range of ferruginosity can occur on vestoid surfaces. For the present we cannot unambiguously interpret the faint absorption bands that are seen in the spectra of 4005 Dyagilev, 4038

  19. A Robust Method to Generate Mechanically Anisotropic Vascular Smooth Muscle Cell Sheets for Vascular Tissue Engineering.

    Science.gov (United States)

    Backman, Daniel E; LeSavage, Bauer L; Shah, Shivem B; Wong, Joyce Y

    2017-06-01

    In arterial tissue engineering, mimicking native structure and mechanical properties is essential because compliance mismatch can lead to graft failure and further disease. With bottom-up tissue engineering approaches, designing tissue components with proper microscale mechanical properties is crucial to achieve the necessary macroscale properties in the final implant. This study develops a thermoresponsive cell culture platform for growing aligned vascular smooth muscle cell (VSMC) sheets by photografting N-isopropylacrylamide (NIPAAm) onto micropatterned poly(dimethysiloxane) (PDMS). The grafting process is experimentally and computationally optimized to produce PNIPAAm-PDMS substrates optimal for VSMC attachment. To allow long-term VSMC sheet culture and increase the rate of VSMC sheet formation, PNIPAAm-PDMS surfaces were further modified with 3-aminopropyltriethoxysilane yielding a robust, thermoresponsive cell culture platform for culturing VSMC sheets. VSMC cell sheets cultured on patterned thermoresponsive substrates exhibit cellular and collagen alignment in the direction of the micropattern. Mechanical characterization of patterned, single-layer VSMC sheets reveals increased stiffness in the aligned direction compared to the perpendicular direction whereas nonpatterned cell sheets exhibit no directional dependence. Structural and mechanical anisotropy of aligned, single-layer VSMC sheets makes this platform an attractive microstructural building block for engineering a vascular graft to match the in vivo mechanical properties of native arterial tissue. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. A Well-Designed Parameter Estimation Method for Lifetime Prediction of Deteriorating Systems with Both Smooth Degradation and Abrupt Damage

    Directory of Open Access Journals (Sweden)

    Chuanqiang Yu

    2015-01-01

    Full Text Available Deteriorating systems, which are subject to both continuous smooth degradation and additional abrupt damage due to a shock process, are often encountered in engineering. Modeling the degradation evolution and predicting the lifetime of such systems are both interesting and challenging in practice. In this paper, we model the degradation trajectory of the deteriorating system by a random coefficient regression (RCR) model with positive jumps, where the RCR part models the continuous smooth degradation of the system and the jump part characterizes the abrupt damage due to random shocks. Based on a specified threshold level, the probability density function (PDF) and cumulative distribution function (CDF) of the lifetime can be derived analytically. The unknown parameters associated with the derived lifetime distributions can be estimated via a well-designed parameter estimation procedure on the basis of the available degradation recordings of the deteriorating systems. An illustrative example is finally provided to demonstrate the implementation and superiority of the newly proposed lifetime prediction method. The experimental results reveal that the proposed lifetime prediction method, with its dedicated parameter estimation strategy, gives more accurate lifetime predictions than the rival model in the literature.

  1. A purely Lagrangian method for simulating the shallow water equations on a sphere using smooth particle hydrodynamics

    Science.gov (United States)

    Capecelatro, Jesse

    2018-03-01

    It has long been suggested that a purely Lagrangian solution to global-scale atmospheric/oceanic flows can potentially outperform traditional Eulerian schemes. Meanwhile, a demonstration of a scalable and practical framework remains elusive. Motivated by recent progress in particle-based methods when applied to convection-dominated flows, this work presents a fully Lagrangian method for solving the inviscid shallow water equations on a rotating sphere in a smooth particle hydrodynamics framework. To avoid singularities at the poles, the governing equations are solved in Cartesian coordinates, augmented with a Lagrange multiplier to ensure that fluid particles are constrained to the surface of the sphere. An underlying grid in spherical coordinates is used to facilitate efficient neighbor detection and parallelization. The method is applied to a suite of canonical test cases, and conservation, accuracy, and parallel performance are assessed.

  2. Application of pattern search method to power system security constrained economic dispatch with non-smooth cost function

    International Nuclear Information System (INIS)

    Al-Othman, A.K.; El-Naggar, K.M.

    2008-01-01

    Direct search (DS) methods are evolutionary algorithms used to solve optimization problems; they do not require any information about the gradient of the objective function while searching for an optimum solution. One such method is the Pattern Search (PS) algorithm. This paper presents a new approach based on a constrained pattern search algorithm to solve a security constrained power system economic dispatch (SCED) problem with a non-smooth cost function. The operation of power systems demands a high degree of security, to keep the system operating satisfactorily when subjected to disturbances, while at the same time paying attention to the economic aspects. A pattern recognition technique is used first to assess dynamic security. Linear classifiers that determine the stability of the electric power system are presented and added to the other system stability and operational constraints. The problem is formulated as a constrained optimization problem in a way that ensures secure and economic system operation. The pattern search method is then applied to solve the constrained optimization formulation. In particular, the method is tested using three different test systems. Simulation results of the proposed approach are compared with those reported in the literature. The outcome is very encouraging and proves that pattern search is well suited for solving the security constrained power system economic dispatch problem. In addition, valve-point loading effects and total system losses are considered to further investigate the potential of the PS technique. Based on the results, it can be concluded that PS has demonstrated the ability to handle the highly nonlinear, discontinuous, non-smooth cost function of the SCED. (author)

  3. A Piecewise Acceleration-Optimal and Smooth-Jerk Trajectory Planning Method for Robot Manipulator along a Predefined Path

    Directory of Open Access Journals (Sweden)

    Yuan Chen

    2011-09-01

    Full Text Available This paper proposes a piecewise acceleration-optimal and smooth-jerk trajectory planning method for robot manipulators. The objective function is given by the weighted sum of two terms having opposite effects: the maximal acceleration and the minimal jerk. Several computing techniques are proposed to determine the optimal solution. These techniques take both the time intervals between two interpolation points and the control points of the B-spline function as optimization variables, redefine the kinematic constraints as constraints on the optimization variables, and reformulate the objective function in matrix form. The feasibility of the optimization method is illustrated by simulation and experimental results with the pan mechanism of a cooking robot.

  4. A Parallel Implementation of a Smoothed Particle Hydrodynamics Method on Graphics Hardware Using the Compute Unified Device Architecture

    International Nuclear Information System (INIS)

    Wong Unhong; Wong Honcheng; Tang Zesheng

    2010-01-01

    The smoothed particle hydrodynamics (SPH) method, a class of meshfree particle methods (MPMs), has a wide range of applications from micro-scale to macro-scale as well as from discrete systems to continuum systems. Graphics hardware, originally designed for computer graphics, now provides unprecedented computational power for scientific computation. Particle systems require a huge amount of computation in physical simulation. In this paper, an efficient parallel implementation of an SPH method on graphics hardware using the Compute Unified Device Architecture is developed for fluid simulation. Compared to the corresponding CPU implementation, our experimental results show that the new approach allows significant speedups of fluid simulation by handling the huge amount of computation in parallel on graphics hardware.

  5. Classical solutions of two dimensional Stokes problems on non smooth domains. 2: Collocation method for the Radon equation

    International Nuclear Information System (INIS)

    Lubuma, M.S.

    1991-05-01

    The non-uniquely solvable Radon boundary integral equation for the two-dimensional Stokes-Dirichlet problem on a non-smooth domain is transformed into a well-posed one by a suitable compact perturbation of the velocity double layer potential operator. The solution to the modified equation is decomposed into a regular part and a finite linear combination of intrinsic singular functions whose coefficients are computed from explicit formulae. Using these formulae, the classical collocation method, defined by continuous piecewise linear vector-valued basis functions, which converges slowly because of the lack of regularity of the solution, is improved into a collocation dual singular function method with optimal rates of convergence for the solution and for the coefficients of singularities. (author). 34 refs

  7. A convergent numerical method for the full Navier-Stokes-Fourier system in smooth physical domains

    Czech Academy of Sciences Publication Activity Database

    Feireisl, Eduard; Hošek, Radim; Michálek, Martin

    2016-01-01

    Roč. 54, č. 5 (2016), s. 3062-3082 ISSN 0036-1429 EU Projects: European Commission(XE) 320078 - MATHEF Institutional support: RVO:67985840 Keywords : Navier-Stokes-Fourier system * finite element method * finite volume method Subject RIV: BA - General Mathematics Impact factor: 1.978, year: 2016 http://epubs.siam.org/doi/abs/10.1137/15M1011809

  8. Combined smoothing method and its use in combining Earth orientation parameters measured by space techniques

    Czech Academy of Sciences Publication Activity Database

    Vondrák, Jan; Čepek, A.

    2000-01-01

    Roč. 147, č. 2 (2000), s. 347-359 ISSN 0365-0138 R&D Projects: GA ČR GA205/98/1104 Institutional research plan: CEZ:AV0Z1003909 Keywords: numerical methods * miscellaneous techniques * reference systems Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics Impact factor: 1.745, year: 2000

  9. Non-smooth optimization methods for large-scale problems: applications to mid-term power generation planning

    International Nuclear Information System (INIS)

    Emiel, G.

    2008-01-01

    This manuscript deals with large-scale non-smooth optimization problems that typically arise when performing Lagrangian relaxation of difficult problems. This technique is commonly used to tackle mixed-integer linear programs or large-scale convex problems. For example, a classical approach when dealing with power generation planning problems in a stochastic environment is to perform a Lagrangian relaxation of the coupling constraints of demand. In this approach, a master problem coordinates local subproblems, specific to each generation unit. The master problem deals with a separable non-smooth dual function which can be maximized with, for example, bundle algorithms. In chapter 2, we introduce basic tools of non-smooth analysis and some recent results regarding incremental or inexact instances of non-smooth algorithms. However, in some situations, the dual problem may still be very hard to solve. For instance, when the number of dualized constraints is very large (exponential in the dimension of the primal problem), explicit dualization may no longer be possible or the update of dual variables may fail. In order to reduce the dual dimension, different heuristics have been proposed. They involve a separation procedure to dynamically select a restricted set of constraints to be dualized along the iterations. This relax-and-cut type approach has shown its numerical efficiency in many combinatorial problems. In chapter 3, we show primal-dual convergence of such a strategy when an adapted subgradient method is used for the dual step, under minimal assumptions on the separation procedure. Another limit of Lagrangian relaxation may appear when the dual function is separable into a very large number of complex sub-functions. In such a situation, the computational burden of solving all local subproblems may dominate the whole iterative process. A natural strategy here would be to take full advantage of the separable dual structure, performing a dual iteration after having

  10. A smooth mixture of Tobits model for healthcare expenditure.

    Science.gov (United States)

    Keane, Michael; Stavrunova, Olena

    2011-09-01

    This paper develops a smooth mixture of Tobits (SMTobit) model for healthcare expenditure. The model is a generalization of the smoothly mixing regressions framework of Geweke and Keane (J Econometrics 2007; 138: 257-290) to the case of a Tobit-type limited dependent variable. A Markov chain Monte Carlo algorithm with data augmentation is developed to obtain the posterior distribution of model parameters. The model is applied to the US Medicare Current Beneficiary Survey data on total medical expenditure. The results suggest that the model can capture the overall shape of the expenditure distribution very well, and also provide a good fit to a number of characteristics of the conditional (on covariates) distribution of expenditure, such as the conditional mean, variance and probability of extreme outcomes, as well as the 50th, 90th, and 95th percentiles. We find that healthier individuals face an expenditure distribution with lower mean, variance and probability of extreme outcomes, compared with their counterparts in a worse state of health. Males have an expenditure distribution with higher mean, variance and probability of an extreme outcome, compared with their female counterparts. The results also suggest that heart and cardiovascular diseases affect the expenditure of males more than that of females. Copyright © 2011 John Wiley & Sons, Ltd.

  11. Monitoring county-level chlamydia incidence in Texas, 2004 – 2005: application of empirical Bayesian smoothing and Exploratory Spatial Data Analysis (ESDA) methods

    Directory of Open Access Journals (Sweden)

    Owens Chantelle J

    2009-02-01

    Background: Chlamydia continues to be the most prevalent disease in the United States. Effective spatial monitoring of chlamydia incidence is important for successful implementation of control and prevention programs. The objective of this study is to apply Bayesian smoothing and exploratory spatial data analysis (ESDA) methods to monitor Texas county-level chlamydia incidence rates by examining spatiotemporal patterns. We used county-level data on chlamydia incidence (for all ages, genders and races) from the National Electronic Telecommunications System for Surveillance (NETSS) for 2004 and 2005. Results: Bayesian-smoothed chlamydia incidence rates were spatially dependent both in levels and in relative changes. Erath county had significantly higher smoothed rates (more than 300 cases per 100,000 residents) than its contiguous neighbors (195 or less) in both years. Gaines county experienced the highest relative increase in smoothed rates (173%, from 139 to 379). The relative change in smoothed chlamydia rates in Newton county was also significant. Conclusion: Bayesian smoothing and ESDA methods can assist programs in using chlamydia surveillance data to identify outliers, as well as relevant changes in chlamydia incidence in specific geographic units. Secondly, they may also indirectly help in assessing existing differences and changes in chlamydia surveillance systems over time.
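
    As a rough illustration of the smoothing step, the sketch below applies Marshall's global empirical Bayes estimator, one common variant of empirical Bayesian rate smoothing, to invented county counts; it is not the study's exact model. Rates from small counties are shrunk strongly toward the overall mean, while large counties barely move.

      import numpy as np

      cases = np.array([12, 300, 45, 3, 80])               # hypothetical county case counts
      pop = np.array([8000, 90000, 30000, 1500, 52000])    # hypothetical county populations

      r = cases / pop                      # raw incidence rates
      m = cases.sum() / pop.sum()          # global (prior) mean rate
      # Marshall's moment estimator of the between-county variance
      s2 = max(np.sum(pop * (r - m) ** 2) / pop.sum() - m / pop.mean(), 0.0)
      w = s2 / (s2 + m / pop)              # shrinkage weights: small pop -> small w
      smoothed = m + w * (r - m)
      print((smoothed * 1e5).round(1))     # smoothed rates per 100,000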

  12. Modeling, analysis and comparison of TSR and OTC methods for MPPT and power smoothing in permanent magnet synchronous generator-based wind turbines

    International Nuclear Information System (INIS)

    Nasiri, M.; Milimonfared, J.; Fathi, S.H.

    2014-01-01

    Highlights: • Small signal modeling of a PMSG wind turbine with two controllers is introduced. • Pole and zero analysis of the OTC and TSR methods is performed. • Generator output power under varying wind speed in a PMSG wind turbine is studied. • The MPPT capability of the OTC and TSR methods under wind speed variations is compared. • The power smoothing capability and mechanical stress reduction of both methods are studied. - Abstract: This paper presents a small signal model of a direct-driven permanent magnet synchronous generator (PMSG) wind turbine connected to the grid via back-to-back converters. The proposed small signal model includes two maximum power point tracking (MPPT) controllers: tip speed ratio (TSR) control and optimal torque control (OTC). These methods are compared analytically to illustrate their MPPT and power smoothing capability. Then, to compare the MPPT and power smoothing operation of the two methods, simulations are performed in MATLAB/Simulink. The simulation results show that OTC is highly effective for power smoothing and extracts maximum power from the wind well, whereas TSR control responds faster to wind speed variations at the expense of higher power fluctuations due to its non-minimum phase characteristic
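
    The two control laws under comparison reduce to simple formulas: TSR control turns a wind speed measurement into a rotor speed reference, while OTC commands generator torque from the measured rotor speed alone. A minimal sketch with invented turbine parameters (air density rho, rotor radius R, optimal tip speed ratio lam_opt, peak power coefficient Cp_max):

      import numpy as np

      rho, R = 1.225, 40.0                 # air density, rotor radius (hypothetical)
      lam_opt, Cp_max = 7.0, 0.45          # optimal tip speed ratio, peak Cp (hypothetical)

      def tsr_speed_ref(v_wind):
          # TSR control: keep lambda = omega * R / v at its optimum;
          # needs a wind speed measurement, hence the fast response to gusts
          return lam_opt * v_wind / R      # rad/s reference for the speed loop

      def otc_torque_ref(omega):
          # OTC: at the optimum, aerodynamic torque equals k_opt * omega^2,
          # with k_opt = 0.5 * rho * pi * R^5 * Cp_max / lam_opt^3
          k_opt = 0.5 * rho * np.pi * R ** 5 * Cp_max / lam_opt ** 3
          return k_opt * omega ** 2        # N*m reference for the torque loop

      print(tsr_speed_ref(10.0), otc_torque_ref(1.75))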

  13. Comparison of ALE finite element method and adaptive smoothed finite element method for the numerical simulation of friction stir welding

    NARCIS (Netherlands)

    van der Stelt, A.A.; Bor, Teunis Cornelis; Geijselaers, Hubertus J.M.; Quak, W.; Akkerman, Remko; Huetink, Han; Menary, G

    2011-01-01

    In this paper, the material flow around the pin during friction stir welding (FSW) is simulated using a 2D plane strain model. A pin rotates without translation in a disc with elasto-viscoplastic material properties and the outer boundary of the disc is clamped. Two numerical methods are used to

  14. A two-dimensional method of manufactured solutions benchmark suite based on variations of Larsen's benchmark with escalating order of smoothness of the exact solution

    International Nuclear Information System (INIS)

    Schunert, Sebastian; Azmy, Yousry Y.

    2011-01-01

    The quantification of the error associated with the spatial discretization of the Discrete Ordinates (DO) equations in multidimensional Cartesian geometries is the central problem in error estimation of spatial discretization schemes for transport theory, as well as in computer code verification. Traditionally, fine mesh solutions are employed as references, because analytical solutions only exist in the absence of scattering. This approach, however, is inadequate when the discretization error associated with the reference solution is not small compared to the discretization error associated with the mesh under scrutiny. Typically this situation occurs if the mesh of interest is only a couple of refinement levels away from the reference solution, or if the order of accuracy of the numerical method (and hence of the reference as well) is lower than expected. In this work we present a Method of Manufactured Solutions (MMS) benchmark suite with variable order of smoothness of the underlying exact solution for two-dimensional Cartesian geometries, which provides analytical solutions averaged over arbitrary orthogonal meshes for scattering and non-scattering media. It should be emphasized that the developed MMS benchmark suite first eliminates the aforementioned limitation of fine mesh reference solutions, since it secures knowledge of the underlying true solution, and second that it allows for an arbitrary order of smoothness of the underlying exact solution. The latter is important because even for smooth parameters and boundary conditions the DO equations can feature exact solutions with limited smoothness. Moreover, the degree of smoothness is crucial for both the order of accuracy and the magnitude of the discretization error for any spatial discretization scheme. (author)
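
    The core mechanic of a manufactured-solutions benchmark can be shown on a toy one-group, angle-independent case (not the authors' suite): postulate an exact solution whose smoothness is set by an exponent p, substitute it into the transport operator, and keep the resulting source. A sympy sketch under those assumptions:

      import sympy as sp

      x, y, mu, eta = sp.symbols('x y mu eta')
      sig_t, sig_s, p = sp.Integer(1), sp.Rational(1, 2), 2   # cross sections, smoothness order

      # manufactured angle-independent exact solution; p controls its smoothness
      psi = (x * (1 - x) * y * (1 - y)) ** p

      # q = mu*dpsi/dx + eta*dpsi/dy + sig_t*psi - sig_s/(4*pi)*phi, and phi = 4*pi*psi here
      q = mu * sp.diff(psi, x) + eta * sp.diff(psi, y) + (sig_t - sig_s) * psi
      print(sp.simplify(q))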

  15. Low-loss integrated electrical surface plasmon source with ultra-smooth metal film fabricated by polymethyl methacrylate ‘bond and peel’ method

    Science.gov (United States)

    Liu, Wenjie; Hu, Xiaolong; Zou, Qiushun; Wu, Shaoying; Jin, Chongjun

    2018-06-01

    External light sources are mostly employed to functionalize plasmonic components, resulting in a bulky footprint. Electrically driven integrated plasmonic devices, combining ultra-compact critical feature sizes with extremely high transmission speeds and low power consumption, can link plasmonics with the present-day electronic world. In an effort to realize this prospect, suppressing the losses in plasmonic devices becomes a pressing issue. In this work, we developed a novel polymethyl methacrylate ‘bond and peel’ method to fabricate metal films with sub-nanometer smooth surfaces on semiconductor wafers. Based on this method, we further fabricated a compact plasmonic source containing a metal-insulator-metal (MIM) waveguide with an ultra-smooth metal surface on a GaAs-based light-emitting diode wafer. An increase in the propagation length of the SPP mode by a factor of 2.95 was achieved compared with the conventional device containing a relatively rough metal surface. Numerical calculations further confirmed that the propagation length is comparable to the theoretical prediction for an MIM waveguide with perfectly smooth metal surfaces. This method facilitates low-loss, highly integrated electrically driven plasmonic devices, thus providing an immediate opportunity for the practical application of on-chip integrated plasmonic circuits.

  16. Smooth Phase Interpolated Keying

    Science.gov (United States)

    Borah, Deva K.

    2007-01-01

    Smooth phase interpolated keying (SPIK) is an improved method of computing smooth phase-modulation waveforms for radio communication systems that convey digital information. SPIK is applicable to a variety of phase-shift-keying (PSK) modulation schemes, including quaternary PSK (QPSK), octonary PSK (8PSK), and 16PSK. In comparison with a related prior method, SPIK offers advantages of better performance and less complexity of implementation. In a PSK scheme, the underlying information waveform that one seeks to convey consists of discrete rectangular steps, but the spectral width of such a waveform is excessive for practical radio communication. Therefore, the problem is to smooth the step phase waveform in such a manner as to maintain power and bandwidth efficiency without incurring an unacceptably large error rate and without introducing undesired variations in the amplitude of the affected radio signal. Although the ideal constellation of PSK phasor points does not cause amplitude variations, filtering of the modulation waveform (in which, typically, a rectangular pulse is converted to a square-root raised cosine pulse) causes amplitude fluctuations. If a power-efficient nonlinear amplifier is used in the radio communication system, the fluctuating-amplitude signal can undergo significant spectral regrowth, thus compromising the bandwidth efficiency of the system. In the related prior method, one seeks to solve the problem in a procedure that comprises two major steps: phase-value generation and phase interpolation. SPIK follows the two-step approach of the related prior method, but the details of the steps are different. In the phase-value-generation step, the phase values of symbols in the PSK constellation are determined by a phase function that is said to be maximally smooth and that is chosen to minimize the spectral spread of the modulated signal. In this step, the constellation is divided into two groups by assigning, to information symbols, phase values

  17. Large-eddy simulations of 3D Taylor-Green vortex: comparison of Smoothed Particle Hydrodynamics, Lattice Boltzmann and Finite Volume methods

    International Nuclear Information System (INIS)

    Kajzer, A; Pozorski, J; Szewc, K

    2014-01-01

    In the paper we present large-eddy simulation (LES) results for the 3D Taylor-Green vortex obtained with three different computational approaches: Smoothed Particle Hydrodynamics (SPH), the Lattice Boltzmann Method (LBM) and the Finite Volume Method (FVM). The Smagorinsky model was chosen as the subgrid-scale closure in LES for all considered methods, and a selection of spatial resolutions has been investigated. The SPH and LBM computations were carried out with in-house codes executed on GPUs and compared, for validation purposes, with FVM results obtained using the open-source CFD software OpenFOAM. A comparative study in terms of one-point statistics and turbulent energy spectra shows good agreement of the LES results for all methods. An analysis of GPU code efficiency and implementation difficulties has been made. It is shown that both SPH and LBM may offer a significant advantage over mesh-based CFD methods.

  18. I-F starting method with smooth transition to EMF based motion-sensorless vector control of PM synchronous motor/generator

    DEFF Research Database (Denmark)

    Blaabjerg, Frede; Teodorescu, Remus; Fatu, M.

    2008-01-01

    This paper proposes a novel hybrid motion-sensorless control system for permanent magnet synchronous motors (PMSM) using a new robust start-up method called I-f control and a smooth transition to emf-based vector control. The I-f method is based on separate control of the id and iq currents with a ...-adaptive compensator to eliminate dc-offset and phase-delay. Digital simulations of PMSM start-up under full load torque are presented for different initial rotor positions. The transitions from I-f to emf-based motion-sensorless vector control and back, at very low speeds, are fully validated by experimental...

  19. Mathematical modelling for the drying method and smoothing drying rate using cubic spline for seaweed Kappaphycus Striatum variety Durian in a solar dryer

    Energy Technology Data Exchange (ETDEWEB)

    M Ali, M. K.; Ruslan, M. H. [Solar Energy Research Institute (SERI), Universiti Kebangsaan Malaysia, 43600 UKM Bangi, Selangor (Malaysia)]; Muthuvalu, M. S.; Wong, J. [Unit Penyelidikan Rumpai Laut (UPRL), Sekolah Sains dan Teknologi, Universiti Malaysia Sabah, 88400 Kota Kinabalu, Sabah (Malaysia)]; Sulaiman, J.; Yasir, S. Md. [Program Matematik dengan Ekonomi, Sekolah Sains dan Teknologi, Universiti Malaysia Sabah, 88400 Kota Kinabalu, Sabah (Malaysia)]

    2014-06-19

    The solar drying experiment on seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah, under the meteorological conditions of Malaysia. Drying of the seaweed sample in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m² and a mass flow rate of about 0.5 kg/s. Generally, the plots of drying rate need more smoothing than the moisture content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) is shown to be effective for moisture-time curves. The idea of the method is an approximation of the data by a CS regression having first and second derivatives; the analytical differentiation of the spline regression permits the determination of the instantaneous drying rate. The method of minimization of the functional of average risk was used successfully to solve the problem, and it permits the instantaneous rate to be obtained directly from the experimental data. The drying kinetics was fitted with six published exponential thin-layer drying models, using the coefficient of determination (R²) and the root mean square error (RMSE) as goodness-of-fit measures. The Two Term model was found to describe the drying behavior best. Besides that, the drying rate smoothed using the CS proves to be a good estimator for moisture-time curves, as well as for missing moisture content data of seaweed Kappaphycus Striatum variety Durian in the solar dryer under the conditions tested.
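
    A minimal SciPy sketch of the spline step on invented moisture readings (not the paper's data): fit a cubic smoothing spline to the moisture-time curve, differentiate it analytically to get the instantaneous drying rate, and evaluate it at unobserved times to fill gaps.

      import numpy as np
      from scipy.interpolate import UnivariateSpline

      t = np.array([0, 6, 12, 18, 24, 36, 48, 60, 72, 96])          # hours (hypothetical)
      M = np.array([93.4, 88.0, 80.5, 71.0, 62.0, 45.0, 30.0, 19.0, 12.5, 8.2])  # moisture %

      # cubic smoothing spline regression; s trades fidelity for smoothness
      spl = UnivariateSpline(t, M, k=3, s=4.0)

      rate = -spl.derivative()(t)          # analytic derivative -> instantaneous drying rate
      print(rate.round(3))                 # % per hour at the sample times
      print(round(float(spl(30.0)), 2))    # the spline also fills a missing reading at t = 30 h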

  20. Mathematical modelling for the drying method and smoothing drying rate using cubic spline for seaweed Kappaphycus Striatum variety Durian in a solar dryer

    International Nuclear Information System (INIS)

    M Ali, M. K.; Ruslan, M. H.; Muthuvalu, M. S.; Wong, J.; Sulaiman, J.; Yasir, S. Md.

    2014-01-01

    The solar drying experiment on seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah, under the meteorological conditions of Malaysia. Drying of the seaweed sample in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m² and a mass flow rate of about 0.5 kg/s. Generally, the plots of drying rate need more smoothing than the moisture content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) is shown to be effective for moisture-time curves. The idea of the method is an approximation of the data by a CS regression having first and second derivatives; the analytical differentiation of the spline regression permits the determination of the instantaneous drying rate. The method of minimization of the functional of average risk was used successfully to solve the problem, and it permits the instantaneous rate to be obtained directly from the experimental data. The drying kinetics was fitted with six published exponential thin-layer drying models, using the coefficient of determination (R²) and the root mean square error (RMSE) as goodness-of-fit measures. The Two Term model was found to describe the drying behavior best. Besides that, the drying rate smoothed using the CS proves to be a good estimator for moisture-time curves, as well as for missing moisture content data of seaweed Kappaphycus Striatum variety Durian in the solar dryer under the conditions tested

  1. Mathematical modelling for the drying method and smoothing drying rate using cubic spline for seaweed Kappaphycus Striatum variety Durian in a solar dryer

    Science.gov (United States)

    M Ali, M. K.; Ruslan, M. H.; Muthuvalu, M. S.; Wong, J.; Sulaiman, J.; Yasir, S. Md.

    2014-06-01

    The solar drying experiment on seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah, under the meteorological conditions of Malaysia. Drying of the seaweed sample in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m² and a mass flow rate of about 0.5 kg/s. Generally, the plots of drying rate need more smoothing than the moisture content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) is shown to be effective for moisture-time curves. The idea of the method is an approximation of the data by a CS regression having first and second derivatives; the analytical differentiation of the spline regression permits the determination of the instantaneous drying rate. The method of minimization of the functional of average risk was used successfully to solve the problem, and it permits the instantaneous rate to be obtained directly from the experimental data. The drying kinetics was fitted with six published exponential thin-layer drying models, using the coefficient of determination (R²) and the root mean square error (RMSE) as goodness-of-fit measures. The Two Term model was found to describe the drying behavior best. Besides that, the drying rate smoothed using the CS proves to be a good estimator for moisture-time curves, as well as for missing moisture content data of seaweed Kappaphycus Striatum variety Durian in the solar dryer under the conditions tested.

  2. A Signal Decomposition Method for Ultrasonic Guided Wave Generated from Debonding Combining Smoothed Pseudo Wigner-Ville Distribution and Vold–Kalman Filter Order Tracking

    Directory of Open Access Journals (Sweden)

    Junhua Wu

    2017-01-01

    Carbon fibre composites have a promising future in vehicle applications due to their excellent physical properties. Debonding is a major defect of the material, and analysis of wave packets is critical for identifying this defect in ultrasonic nondestructive evaluation and testing. In order to isolate different components of ultrasonic guided waves (GWs), a signal decomposition algorithm combining the Smoothed Pseudo Wigner-Ville distribution and Vold–Kalman filter order tracking is presented. In the algorithm, the time-frequency distribution of the GW is first obtained using the Smoothed Pseudo Wigner-Ville distribution. The frequencies of different modes are computed by summing the time-frequency coefficients in the frequency direction. On the basis of these frequencies, the different modes are isolated by Vold–Kalman filter order tracking. The results for a simulated signal and an experimental signal reveal that the presented algorithm succeeds in decomposing the multicomponent signal into monocomponents. Even when components overlap in the corresponding Fourier spectrum, they can be isolated with the presented algorithm, so the frequency resolution of the presented method is promising. Based on this, research can proceed on defect identification, calculation of the defect size, and locating the position of the defect.

  3. Numerical discretization-based estimation methods for ordinary differential equation models via penalized spline smoothing with applications in biomedical research.

    Science.gov (United States)

    Wu, Hulin; Xue, Hongqi; Kumar, Arun

    2012-06-01

    Differential equations are extensively used for modeling dynamics of physical processes in many scientific fields such as engineering, physics, and biomedical sciences. Parameter estimation of differential equation models is a challenging problem because of high computational cost and high-dimensional parameter space. In this article, we propose a novel class of methods for estimating parameters in ordinary differential equation (ODE) models, which is motivated by HIV dynamics modeling. The new methods exploit the form of numerical discretization algorithms for an ODE solver to formulate estimating equations. First, a penalized-spline approach is employed to estimate the state variables, and the estimated state variables are then plugged into a discretization formula of an ODE solver to obtain the ODE parameter estimates via a regression approach. We consider three discretization methods of different order: Euler's method, the trapezoidal rule, and the Runge-Kutta method. A higher-order numerical algorithm reduces numerical error in the approximation of the derivative, which produces a more accurate estimate, but its computational cost is higher. To balance the computational cost and estimation accuracy, we demonstrate, via simulation studies, that the trapezoidal discretization-based estimate is the best and is recommended for practical use. The asymptotic properties of the proposed numerical discretization-based estimators are established. Comparisons between the proposed methods and existing methods show a clear benefit of the proposed methods in regards to the trade-off between computational cost and estimation accuracy. We apply the proposed methods to an HIV study to further illustrate the usefulness of the proposed approaches. © 2012, The International Biometric Society.
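
    The two-stage idea can be sketched on the simplest ODE, x' = -theta * x: first smooth the noisy state, then plug the smoothed values into the trapezoidal-rule discretization and estimate theta by regression. The sketch below substitutes a plain smoothing spline for the paper's penalized splines; all numbers are invented.

      import numpy as np
      from scipy.interpolate import UnivariateSpline

      rng = np.random.default_rng(0)
      theta_true = 0.5
      t = np.linspace(0.0, 8.0, 41)
      x_obs = np.exp(-theta_true * t) + rng.normal(0.0, 0.02, t.size)

      # stage 1: spline smoothing of the state trajectory
      xhat = UnivariateSpline(t, x_obs, k=3, s=t.size * 0.02 ** 2)(t)

      # stage 2: trapezoidal estimating equation
      #   x_{i+1} - x_i = -(h/2) * theta * (x_i + x_{i+1})  ->  regression through the origin
      h = t[1] - t[0]
      dx = xhat[1:] - xhat[:-1]
      z = -(h / 2.0) * (xhat[1:] + xhat[:-1])
      print(float(z @ dx / (z @ z)))       # estimate of theta, close to 0.5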

  4. Error estimates for a numerical method for the compressible Navier-Stokes system on sufficiently smooth domains

    Czech Academy of Sciences Publication Activity Database

    Feireisl, Eduard; Hošek, Radim; Maltese, D.; Novotný, A.

    2017-01-01

    Roč. 51, č. 1 (2017), s. 279-319 ISSN 0764-583X EU Projects: European Commission(XE) 320078 - MATHEF Institutional support: RVO:67985840 Keywords : Navier-Stokes system * finite element numerical method * finite volume numerical method Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 1.727, year: 2016 http://www.esaim-m2an.org/articles/m2an/abs/2017/01/m2an150157/m2an150157.html

  5. Smooth polyhedral surfaces

    KAUST Repository

    Günther, Felix; Jiang, Caigui; Pottmann, Helmut

    2017-01-01

    Polyhedral surfaces are fundamental objects in architectural geometry and industrial design. Whereas closeness of a given mesh to a smooth reference surface and its suitability for numerical simulations were already studied extensively, the aim of our work is to find and to discuss suitable assessments of smoothness of polyhedral surfaces that only take the geometry of the polyhedral surface itself into account. Motivated by analogies to classical differential geometry, we propose a theory of smoothness of polyhedral surfaces including suitable notions of normal vectors, tangent planes, asymptotic directions, and parabolic curves that are invariant under projective transformations. It is remarkable that seemingly mild conditions significantly limit the shapes of faces of a smooth polyhedral surface. Besides being of theoretical interest, we believe that smoothness of polyhedral surfaces is of interest in the architectural context, where vertices and edges of polyhedral surfaces are highly visible.

  6. Smooth polyhedral surfaces

    KAUST Repository

    Günther, Felix

    2017-03-15

    Polyhedral surfaces are fundamental objects in architectural geometry and industrial design. Whereas closeness of a given mesh to a smooth reference surface and its suitability for numerical simulations were already studied extensively, the aim of our work is to find and to discuss suitable assessments of smoothness of polyhedral surfaces that only take the geometry of the polyhedral surface itself into account. Motivated by analogies to classical differential geometry, we propose a theory of smoothness of polyhedral surfaces including suitable notions of normal vectors, tangent planes, asymptotic directions, and parabolic curves that are invariant under projective transformations. It is remarkable that seemingly mild conditions significantly limit the shapes of faces of a smooth polyhedral surface. Besides being of theoretical interest, we believe that smoothness of polyhedral surfaces is of interest in the architectural context, where vertices and edges of polyhedral surfaces are highly visible.

  7. Smoothed Analysis of Local Search Algorithms

    NARCIS (Netherlands)

    Manthey, Bodo; Dehne, Frank; Sack, Jörg-Rüdiger; Stege, Ulrike

    2015-01-01

    Smoothed analysis is a method for analyzing the performance of algorithms for which classical worst-case analysis fails to explain the performance observed in practice. Smoothed analysis has been applied to explain the performance of a variety of algorithms in recent years. One particular class of

  8. Smooth quantile normalization.

    Science.gov (United States)

    Hicks, Stephanie C; Okrah, Kwame; Paulson, Joseph N; Quackenbush, John; Irizarry, Rafael A; Bravo, Héctor Corrada

    2018-04-01

    Between-sample normalization is a critical step in genomic data analysis to remove systematic bias and unwanted technical variation in high-throughput data. Global normalization methods are based on the assumption that observed variability in global properties is due to technical reasons and is unrelated to the biology of interest. For example, some methods correct for differences in sequencing read counts by scaling features to have similar median values across samples, but these fail to reduce other forms of unwanted technical variation. Methods such as quantile normalization transform the statistical distributions across samples to be the same and assume global differences in the distribution are induced by only technical variation. However, it remains unclear how to proceed with normalization if these assumptions are violated, for example, if there are global differences in the statistical distributions between biological conditions or groups, and external information, such as negative or control features, is not available. Here, we introduce a generalization of quantile normalization, referred to as smooth quantile normalization (qsmooth), which is based on the assumption that the statistical distribution of each sample should be the same (or have the same distributional shape) within biological groups or conditions, while allowing them to differ between groups. We illustrate the advantages of our method on several high-throughput datasets with global differences in distributions corresponding to different biological conditions. We also perform a Monte Carlo simulation study to illustrate the bias-variance tradeoff and root mean squared error of qsmooth compared to other global normalization methods. A software implementation is available from https://github.com/stephaniehicks/qsmooth.
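
    A stripped-down sketch of the central idea, quantile references computed within biological groups; the actual qsmooth package additionally weights each group reference against the overall reference in a data-driven way, which is omitted here (as is tie handling).

      import numpy as np

      def within_group_quantile_norm(X, groups):
          # quantile-normalize samples (columns) to the mean distribution of their own group
          Xn = np.empty_like(X, dtype=float)
          for g in np.unique(groups):
              cols = np.where(groups == g)[0]
              ref = np.sort(X[:, cols], axis=0).mean(axis=1)   # group reference distribution
              for c in cols:
                  ranks = X[:, c].argsort().argsort()          # rank of each feature in sample c
                  Xn[:, c] = ref[ranks]
          return Xn

      rng = np.random.default_rng(1)
      X = rng.lognormal(size=(200, 6))                 # toy expression matrix
      groups = np.array([0, 0, 0, 1, 1, 1])            # two biological conditions
      print(within_group_quantile_norm(X, groups).mean(axis=0).round(2))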

  9. Smoothness of limit functors

    Indian Academy of Sciences (India)

    Abstract. Let S be a scheme. Assume that we are given an action of the one-dimensional split torus Gm,S on a smooth affine S-scheme X. We consider the limit (also called attractor) subfunctor Xλ consisting of points whose orbit under the given action 'admits a limit at 0'. We show that Xλ is representable by a smooth ...

  10. Molecular Method for Sex Identification of Half-Smooth Tongue Sole (Cynoglossus semilaevis) Using a Novel Sex-Linked Microsatellite Marker

    Directory of Open Access Journals (Sweden)

    Xiaolin Liao

    2014-07-01

    Half-smooth tongue sole (Cynoglossus semilaevis) is one of the most important flatfish species for aquaculture in China. To produce a monosex population, we attempted to develop a marker-assisted sex control technique in this sexually size-dimorphic fish. In this study, we identified a co-dominant sex-linked marker (CyseSLM) by screening genomic microsatellites and further developed a novel molecular method for sex identification in the tongue sole. CyseSLM has a sequence similarity of 73%–75% with stickleback, medaka, Fugu and Tetraodon. At this locus, two alleles (A244 and A234) were amplified from 119 tongue sole individuals with the primer pair CyseSLM-F1 and CyseSLM-R. Allele A244 was present in all individuals, while allele A234 (the female-associated allele, FAA) was mostly present in females, with exceptions in four male individuals. Compared with the sequence of A244, A234 has a 10-bp deletion and 28 SNPs. A specific primer (CyseSLM-F2) was then designed based on the A234 sequence, which amplified a 204 bp fragment in all females and four males with primer CyseSLM-R. A time-efficient multiplex PCR program was developed using primers CyseSLM-F2, CyseSLM-R and the newly designed primer CyseSLM-F3. The multiplex PCR products with a co-dominant pattern can be detected by agarose gel electrophoresis, which accurately identifies the genetic sex of the tongue sole. We have therefore developed a rapid and reliable method for sex identification in tongue sole with a newly identified sex-linked microsatellite marker.

  11. Assessment of smoothed spectra using autocorrelation function

    International Nuclear Information System (INIS)

    Urbanski, P.; Kowalska, E.

    2006-01-01

    Recently, data and signal smoothing have become almost standard procedures in spectrometric and chromatographic methods. In radiometry, the main purpose of smoothing is to minimize statistical fluctuations while avoiding distortion. The aim of this work was to find a qualitative parameter which could be used as a figure of merit for detecting distortion of smoothed spectra, based on a linear model. It is assumed that as long as the part of the raw spectrum removed by the smoothing procedure (v_s) is of a random nature, the smoothed spectrum can be considered undistorted. Thanks to this property of the autocorrelation function, drifts of the mean value in the removed noise v_s, as well as its periodicity, can be detected more easily from the autocorrelogram than from the original data
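
    The check itself is easy to sketch: form the removed part v_s = raw - smoothed and inspect its autocorrelation; coefficients near zero at positive lags are consistent with v_s being pure noise, while sizeable small-lag coefficients flag distortion. A toy example with a Gaussian peak and a moving-average smoother (both invented):

      import numpy as np

      def removed_noise_autocorr(raw, smoothed, max_lag=20):
          v = raw - smoothed                 # the part removed by smoothing (v_s)
          v = v - v.mean()
          acf = np.correlate(v, v, mode='full')[v.size - 1:]
          return acf[:max_lag + 1] / acf[0]  # normalized: lag 0 equals 1 by construction

      rng = np.random.default_rng(2)
      x = np.linspace(0.0, 1.0, 256)
      raw = np.exp(-((x - 0.5) / 0.1) ** 2) + rng.normal(0.0, 0.05, x.size)
      smoothed = np.convolve(raw, np.ones(7) / 7, mode='same')   # simple moving average
      print(removed_noise_autocorr(raw, smoothed)[:5].round(2))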

  12. Smooth halos in the cosmic web

    International Nuclear Information System (INIS)

    Gaite, José

    2015-01-01

    Dark matter halos can be defined as smooth distributions of dark matter placed in a non-smooth cosmic web structure. This definition of halos demands a precise definition of smoothness and a characterization of the manner in which the transition from smooth halos to the cosmic web takes place. We introduce entropic measures of smoothness, related to measures of inequality previously used in economics and with the advantage of being connected with standard methods of multifractal analysis already used for characterizing the cosmic web structure in cold dark matter N-body simulations. These entropic measures provide us with a quantitative description of the transition from the small scales portrayed as a distribution of halos to the larger scales portrayed as a cosmic web and, therefore, allow us to assign definite sizes to halos. However, these "smoothness sizes" have no direct relation to the virial radii. Finally, we discuss the influence of N-body discreteness parameters on smoothness

  13. Smooth halos in the cosmic web

    Energy Technology Data Exchange (ETDEWEB)

    Gaite, José, E-mail: jose.gaite@upm.es [Physics Dept., ETSIAE, IDR, Universidad Politécnica de Madrid, Pza. Cardenal Cisneros 3, E-28040 Madrid (Spain)

    2015-04-01

    Dark matter halos can be defined as smooth distributions of dark matter placed in a non-smooth cosmic web structure. This definition of halos demands a precise definition of smoothness and a characterization of the manner in which the transition from smooth halos to the cosmic web takes place. We introduce entropic measures of smoothness, related to measures of inequality previously used in economics and with the advantage of being connected with standard methods of multifractal analysis already used for characterizing the cosmic web structure in cold dark matter N-body simulations. These entropic measures provide us with a quantitative description of the transition from the small scales portrayed as a distribution of halos to the larger scales portrayed as a cosmic web and, therefore, allow us to assign definite sizes to halos. However, these "smoothness sizes" have no direct relation to the virial radii. Finally, we discuss the influence of N-body discreteness parameters on smoothness.

  14. Mean-Variance Efficiency of the Market Portfolio

    OpenAIRE

    Rafael Falcão Noda; Roy Martelanc; José Roberto Securato

    2014-01-01

    The objective of this study is to answer the criticism to the CAPM based on findings that the market portfolio is far from the efficient frontier. We run a numeric optimization model, based on Brazilian stock market data from 2003 to 2012. For each asset, we obtain adjusted returns and standard deviations such that (i) the efficient frontier intersects with the market portfolio and (ii) the distance between the adjusted parameters and the sample parameters is minimized. We conclude that the a...

  15. Mean-Variance Efficiency of the Market Portfolio

    Directory of Open Access Journals (Sweden)

    Rafael Falcão Noda

    2014-06-01

    The objective of this study is to answer the criticism of the CAPM based on findings that the market portfolio is far from the efficient frontier. We run a numeric optimization model based on Brazilian stock market data from 2003 to 2012. For each asset, we obtain adjusted returns and standard deviations such that (i) the efficient frontier intersects with the market portfolio and (ii) the distance between the adjusted parameters and the sample parameters is minimized. We conclude that the adjusted parameters are not significantly different from the sample parameters, in line with the results of Levy and Roll (2010) for the USA stock market. Such results suggest that the imprecisions in the implementation of the CAPM stem mostly from parameter estimation errors and that other explanatory factors for returns may have low relevance. Therefore, our results contradict the above-mentioned criticisms of the CAPM in Brazil.

  16. Optimization problem and mean variance hedging on defaultable claims

    OpenAIRE

    Goutte, Stephane; Ngoupeyou, Armand

    2012-01-01

    We study the pricing and hedging of a claim ψ which depends on the default times of two firms A and B. In fact, we assume that in the market we cannot buy or sell any defaultable bond of firm B, but can only trade defaultable bonds of firm A. Our aim is then to find the best price and hedge of ψ using only bonds of firm A. Hence, we solve this problem in two cases: firstly in a Markov framework using indifference pricing and solving a system of Hamilton-Jacobi-Bellm...

  17. A Note on the Kinks at the Mean Variance Frontier

    NARCIS (Netherlands)

    Vörös, J.; Kriens, J.; Strijbosch, L.W.G.

    1997-01-01

    In this paper the standard portfolio case with short sales restrictions is analyzed. Dybvig pointed out that if there is a kink at a risky portfolio on the efficient frontier, then the securities in this portfolio have equal expected return, and the converse of this statement is false. For the

  18. Mean-Variance Optimization in Markov Decision Processes

    OpenAIRE

    Mannor, Shie; Tsitsiklis, John N.

    2011-01-01

    We consider finite horizon Markov decision processes under performance measures that involve both the mean and the variance of the cumulative reward. We show that either randomized or history-based policies can improve performance. We prove that the complexity of computing a policy that maximizes the mean reward under a variance constraint is NP-hard for some cases, and strongly NP-hard for others. We finally offer pseudo-polynomial exact and approximation algorithms.

  19. Revealed smooth nontransitive preferences

    DEFF Research Database (Denmark)

    Keiding, Hans; Tvede, Mich

    2013-01-01

    In the present paper, we are concerned with the behavioural consequences of consumers having nontransitive preference relations. Data sets consist of finitely many observations of price vectors and consumption bundles. A preference relation rationalizes a data set provided that for every observed consumption bundle, all strictly preferred bundles are more expensive than the observed bundle. Our main result is that data sets can be rationalized by a smooth nontransitive preference relation if and only if prices can be normalized such that the law of demand is satisfied. Market data sets consist of finitely many observations of price vectors, lists of individual incomes and aggregate demands. We apply our main result to characterize market data sets consistent with equilibrium behaviour of pure-exchange economies with smooth nontransitive consumers.

  20. Generalizing smooth transition autoregressions

    DEFF Research Database (Denmark)

    Chini, Emilio Zanetti

    We introduce a variant of the smooth transition autoregression - the GSTAR model - capable of parametrizing the asymmetry in the tails of the transition equation by using a particular generalization of the logistic function. A General-to-Specific modelling strategy is discussed in detail, with particular emphasis on two different LM-type tests for the null of symmetric adjustment towards a new regime and three diagnostic tests, whose power properties are explored via Monte Carlo experiments. Four classical real datasets illustrate the empirical properties of the GSTAR, jointly with a rolling

  1. Anti-smooth muscle antibody

    Science.gov (United States)

    Anti-smooth muscle antibody is a blood test that detects the presence ...

  2. Smooth functors vs. differential forms

    NARCIS (Netherlands)

    Schreiber, U.; Waldorf, K.

    2011-01-01

    We establish a relation between smooth 2-functors defined on the path 2-groupoid of a smooth manifold and differential forms on this manifold. This relation can be understood as a part of a dictionary between fundamental notions from category theory and differential geometry. We show that smooth

  3. Adsorption on smooth electrodes: A radiotracer study

    International Nuclear Information System (INIS)

    Rice-Jackson, L.M.

    1990-01-01

    Adsorption on solids is a complicated process and in most cases occurs as the early stage of other, more complicated processes, i.e. chemical reactions, electrooxidation, electroreduction. The research reported here combines an electroanalytical method, cyclic voltammetry, with the use of radio-labeled isotopes (soft beta emitters) to study adsorption processes at smooth electrodes. The in-situ radiotracer method is highly anion (molecule) specific and provides information on the structure and composition of the electric double layer. The emphasis of this research was on studying adsorption processes at smooth electrodes of copper, gold, and platinum. The application of the radiotracer method to these smooth surfaces has led to direct in-situ measurements from which surface coverage was determined, anions and molecules were identified, and weak interactions of adsorbates with the surface of the electrodes were readily monitored. 179 refs

  4. Local smoothness for global optical flow

    DEFF Research Database (Denmark)

    Rakêt, Lars Lau

    2012-01-01

    by this technique and work on local-global optical flow we propose a simple method for fusing optical flow estimates of different smoothness by evaluating interpolation quality locally by means of L1 block match on the corresponding set of gradient images. We illustrate the method in a setting where optical flows...

  5. Exponential smoothing weighted correlations

    Science.gov (United States)

    Pozzi, F.; Di Matteo, T.; Aste, T.

    2012-06-01

    In many practical applications, correlation matrices might be affected by the "curse of dimensionality" and by an excessive sensitiveness to outliers and remote observations. These shortcomings can cause problems of statistical robustness especially accentuated when a system of dynamic correlations over a running window is concerned. These drawbacks can be partially mitigated by assigning a structure of weights to observational events. In this paper, we discuss Pearson's ρ and Kendall's τ correlation matrices, weighted with an exponential smoothing, computed on moving windows using a data-set of daily returns for 300 NYSE highly capitalized companies in the period between 2001 and 2003. Criteria for jointly determining optimal weights together with the optimal length of the running window are proposed. We find that the exponential smoothing can provide more robust and reliable dynamic measures and we discuss that a careful choice of the parameters can reduce the autocorrelation of dynamic correlations whilst keeping significance and robustness of the measure. Weighted correlations are found to be smoother and recovering faster from market turbulence than their unweighted counterparts, helping also to discriminate more effectively genuine from spurious correlations.
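
    A minimal sketch of one ingredient, an exponentially weighted Pearson correlation on toy return series; the paper's criteria for jointly choosing the decay constant and the window length are not reproduced.

      import numpy as np

      def ewm_corr(x, y, alpha=0.05):
          # Pearson correlation with exponentially decaying weights, newest observation heaviest
          w = (1.0 - alpha) ** np.arange(x.size - 1, -1, -1)
          w = w / w.sum()
          mx, my = w @ x, w @ y
          cov = w @ ((x - mx) * (y - my))
          return cov / np.sqrt((w @ (x - mx) ** 2) * (w @ (y - my) ** 2))

      rng = np.random.default_rng(3)
      r1 = rng.normal(size=500)
      r2 = 0.6 * r1 + 0.8 * rng.normal(size=500)   # toy "returns" with known correlation ~0.6
      print(round(ewm_corr(r1, r2), 3))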

  6. Non-parametric smoothing of experimental data

    International Nuclear Information System (INIS)

    Kuketayev, A.T.; Pen'kov, F.M.

    2007-01-01

    Rapid processing of experimental data samples in nuclear physics often requires differentiation in order to find extrema. Therefore, even at the preliminary stage of data analysis, a range of noise reduction methods is used to smooth experimental data. There are many non-parametric smoothing techniques: interval averages, moving averages, exponential smoothing, etc. Nevertheless, it is more common to use a priori information about the behavior of the experimental curve to construct smoothing schemes based on least squares techniques. The advantage of the latter methodology is that the area under the curve can be preserved, which is equivalent to conservation of the total count rate. The disadvantage of this approach is the lack of a priori information. For example, during data processing the sums of peaks unresolved by a detector are very often replaced with one peak, introducing uncontrolled errors in the determination of the physical quantities. The problem is solvable only by having experienced personnel, whose skills greatly exceed the challenge. We propose a set of non-parametric techniques which allows the use of any additional information on the nature of the experimental dependence. The method is based on the construction of a functional which includes both experimental data and a priori information; the minimum of this functional is attained on a non-parametric smoothed curve. Euler (Lagrange) differential equations are constructed for these curves, and their solutions are obtained analytically or numerically. The proposed approach allows for automated processing of nuclear physics data, eliminating the need for highly skilled laboratory personnel. The approach also makes it possible to obtain smoothed curves within a given confidence interval, e.g. according to the χ² distribution. This approach is applicable when constructing smooth solutions of ill-posed problems, in particular when solving

  7. Smooth random change point models.

    Science.gov (United States)

    van den Hout, Ardo; Muniz-Terrera, Graciela; Matthews, Fiona E

    2011-03-15

    Change point models are used to describe processes over time that show a change in direction. An example of such a process is cognitive ability, where a decline a few years before death is sometimes observed. A broken-stick model consists of two linear parts and a breakpoint where the two lines intersect. Alternatively, models can be formulated that imply a smooth change between the two linear parts. Change point models can be extended by adding random effects to account for variability between subjects. A new smooth change point model is introduced and examples are presented that show how change point models can be estimated using functions in R for mixed-effects models. The Bayesian inference using WinBUGS is also discussed. The methods are illustrated using data from a population-based longitudinal study of ageing, the Cambridge City over 75 Cohort Study. The aim is to identify how many years before death individuals experience a change in the rate of decline of their cognitive ability. Copyright © 2010 John Wiley & Sons, Ltd.
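
    One way to write down a smooth change point mean function is a softplus-blended broken stick whose transition width gamma recovers the sharp two-line model as gamma approaches 0; the sketch below fits it by nonlinear least squares. The random effects and Bayesian machinery of the paper are omitted, and all data and parameter values are invented.

      import numpy as np
      from scipy.optimize import curve_fit

      def smooth_broken_stick(t, b0, b1, b2, tau, gamma):
          # slope b1 before tau, b2 after, with a smooth transition of width gamma
          return b0 + b1 * t + (b2 - b1) * gamma * np.logaddexp(0.0, (t - tau) / gamma)

      rng = np.random.default_rng(4)
      t = np.linspace(-10.0, 0.0, 60)               # years before death (hypothetical)
      y = smooth_broken_stick(t, 25, -0.1, -1.5, -3.0, 0.5) + rng.normal(0.0, 0.4, t.size)

      lb = [-np.inf, -np.inf, -np.inf, -10.0, 1e-3]
      ub = [np.inf, np.inf, np.inf, 0.0, 5.0]
      est, _ = curve_fit(smooth_broken_stick, t, y, p0=[25, 0, -1, -4, 1], bounds=(lb, ub))
      print(est.round(2))                           # recovers the generating values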

  8. Smooth functions statistics

    International Nuclear Information System (INIS)

    Arnold, V.I.

    2006-03-01

    To describe the topological structure of a real smooth function one associates to it a graph, formed by the topological variety whose points are the connected components of the level hypersurfaces of the function. For a Morse function, such a graph is a tree. Generically, it has T triple vertices, T + 2 endpoints, 2T + 2 vertices and 2T + 1 arrows. The main goal of the present paper is to study the statistics of the graphs corresponding to T triple points: what is the growth rate of the number φ(T) of different graphs? Which part of these graphs is representable by polynomial functions of the corresponding degree? A generic polynomial of degree n has at most (n − 1)² critical points on R², corresponding to 2T + 2 = (n − 1)² + 1, that is, to T = 2k(k − 1) saddle points for degree n = 2k

  9. Classification of smooth Fano polytopes

    DEFF Research Database (Denmark)

    Øbro, Mikkel

    A simplicial lattice polytope containing the origin in its interior is called a smooth Fano polytope if the vertices of every facet form a basis of the lattice. The study of smooth Fano polytopes is motivated by their connection to toric varieties. The thesis concerns the classification of smooth Fano polytopes up to isomorphism. A smooth Fano d-polytope can have at most 3d vertices; in the case of 3d vertices an explicit classification is known, and the thesis contains the classification in the case of 3d − 1 vertices. Complete classifications of smooth Fano d-polytopes previously existed only for small fixed d. In the thesis an algorithm for the classification of smooth Fano d-polytopes for any given d is presented. The algorithm has been implemented and used to obtain the complete classification for d ≤ 8.

  10. Lyapunov exponents and smooth ergodic theory

    CERN Document Server

    Barreira, Luis

    2001-01-01

    This book is a systematic introduction to smooth ergodic theory. The topics discussed include the general (abstract) theory of Lyapunov exponents and its applications to the stability theory of differential equations, stable manifold theory, absolute continuity, and the ergodic theory of dynamical systems with nonzero Lyapunov exponents (including geodesic flows). The authors consider several non-trivial examples of dynamical systems with nonzero Lyapunov exponents to illustrate some basic methods and ideas of the theory. This book is self-contained. The reader needs a basic knowledge of real analysis, measure theory, differential equations, and topology. The authors present basic concepts of smooth ergodic theory and provide complete proofs of the main results. They also state some more advanced results to give readers a broader view of smooth ergodic theory. This volume may be used by those nonexperts who wish to become familiar with the field.

  11. SmoothMoves : Smooth pursuits head movements for augmented reality

    NARCIS (Netherlands)

    Esteves, Augusto; Verweij, David; Suraiya, Liza; Islam, Rasel; Lee, Youryang; Oakley, Ian

    2017-01-01

    SmoothMoves is an interaction technique for augmented reality (AR) based on smooth pursuits head movements. It works by computing correlations between the movements of on-screen targets and the user's head while tracking those targets. The paper presents three studies. The first suggests that head

  12. Interval Forecast for Smooth Transition Autoregressive Model ...

    African Journals Online (AJOL)

    In this paper, we propose a simple method for constructing interval forecasts for smooth transition autoregressive (STAR) models. The interval forecast is based on bootstrapping the residual error of the estimated STAR model for each forecast horizon and computing various Akaike information criterion (AIC) functions. This new ...
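
    The residual-bootstrap recipe can be sketched with a plain AR(1) standing in for the estimated STAR model (the AIC computations are omitted): resample residuals to simulate forward paths, then take percentiles per horizon. All numbers below are invented.

      import numpy as np

      rng = np.random.default_rng(5)
      y = np.zeros(300)                      # toy series; a fitted STAR model would go here
      for i in range(1, 300):
          y[i] = 0.7 * y[i - 1] + rng.normal()

      phi = np.polyfit(y[:-1], y[1:], 1)[0]  # crude AR(1) coefficient estimate
      resid = y[1:] - phi * y[:-1]           # residuals to be resampled

      H, B = 5, 2000                         # forecast horizons, bootstrap replicates
      paths = np.empty((B, H))
      for b in range(B):
          level = y[-1]
          for h in range(H):
              level = phi * level + rng.choice(resid)   # one bootstrap step ahead
              paths[b, h] = level
      lo, hi = np.percentile(paths, [5, 95], axis=0)
      print(np.c_[lo, hi].round(2))          # 90% interval forecast per horizon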

  13. Polarization beam smoothing for inertial confinement fusion

    International Nuclear Information System (INIS)

    Rothenberg, Joshua E.

    2000-01-01

    For both direct and indirect drive approaches to inertial confinement fusion (ICF) it is imperative to obtain the best possible drive beam uniformity. The approach chosen for the National Ignition Facility uses a random-phase plate to generate a speckle pattern with a precisely controlled envelope on target. A number of temporal smoothing techniques can then be employed to utilize bandwidth to rapidly change the speckle pattern, and thus average out the small-scale speckle structure. One technique which generally can supplement other smoothing methods is polarization smoothing (PS): the illumination of the target with two distinct and orthogonally polarized speckle patterns. Since these two polarizations do not interfere, the intensity patterns add incoherently, and the rms nonuniformity can be reduced by a factor of √2. A number of PS schemes are described and compared on the basis of the aggregate rms and the spatial spectrum of the focused illumination distribution. The √2 rms nonuniformity reduction of PS is present on an instantaneous basis and is, therefore, of particular interest for the suppression of laser plasma instabilities, which have a very rapid response time. When combining PS and temporal methods, such as smoothing by spectral dispersion (SSD), PS can reduce the rms of the temporally smoothed illumination by an additional factor of √2. However, it has generally been thought that in order to achieve this reduction of √2, the increased divergence of the beam from PS must exceed the divergence of SSD. It is also shown here that, over the time scales of interest to direct or indirect drive ICF, under some conditions PS can reduce the smoothed illumination rms by nearly √2 even when the PS divergence is much smaller than that of SSD. (c) 2000 American Institute of Physics
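
    The instantaneous √2 reduction is easy to verify numerically: fully developed speckle has an intensity contrast (rms over mean) of 1, and the incoherent sum of two independent, orthogonally polarized speckle patterns has contrast 1/√2 ≈ 0.707. A toy Monte Carlo sketch:

      import numpy as np

      rng = np.random.default_rng(6)

      def speckle(n):
          # fully developed speckle: intensity of a circular complex Gaussian field
          g = rng.normal(size=n) + 1j * rng.normal(size=n)
          return np.abs(g) ** 2

      i1, i2 = speckle(1_000_000), speckle(1_000_000)
      contrast = lambda I: I.std() / I.mean()
      print(round(contrast(i1), 3))               # ~1.0 for a single polarization
      print(round(contrast(0.5 * (i1 + i2)), 3))  # ~0.707, i.e. reduced by sqrt(2)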

  14. Smoothness in Binomial Edge Ideals

    Directory of Open Access Journals (Sweden)

    Hamid Damadi

    2016-06-01

    In this paper we study some geometric properties of the algebraic set associated to the binomial edge ideal of a graph, in particular its singularity and smoothness. Some of these algebraic sets are irreducible and some are reducible. If every irreducible component of the algebraic set is smooth, we call the graph edge smooth; otherwise it is called edge singular. We show that complete graphs are edge smooth and introduce two conditions such that the graph G is edge singular if and only if it satisfies these conditions. Then, it is shown that cycles and most trees are edge singular. In addition, it is proved that complete bipartite graphs are edge smooth.

  15. Catalogue of methods of calculation, interpolation, smoothing, and reduction for the physical, chemical, and biological parameters of deep hydrology (CATMETH) (NODC Accession 7700442)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The document presents the methods, formulas and citations used by the BNDO to process physical, chemical, and biological data for deep hydrology including...

  16. Full Waveform Inversion Using Nonlinearly Smoothed Wavefields

    KAUST Repository

    Li, Y.; Choi, Yun Seok; Alkhalifah, Tariq Ali; Li, Z.

    2017-01-01

    The lack of low frequency information in the acquired data makes full waveform inversion (FWI) converge to the accurate solution only conditionally: an initial velocity model that results in data with events within a half cycle of their location in the observed data is required for convergence. The multiplication of wavefields with slightly different frequencies generates artificial low frequency components. This can be exploited by multiplying the wavefield with itself, which is a nonlinear operation, followed by a smoothing operator to extract the artificially produced low frequency information. We construct the objective function using the nonlinearly smoothed wavefields with a global-correlation norm to properly handle the energy imbalance in the nonlinearly smoothed wavefield. Similar to the multi-scale strategy, we progressively reduce the smoothing width applied to the multiplied wavefield to admit higher resolution. We calculate the gradient of the objective function using the adjoint-state technique, which is similar to conventional FWI except for the adjoint source. Examples on the Marmousi 2 model demonstrate the feasibility of the proposed FWI method to mitigate the cycle-skipping problem in the case of a lack of low frequency information.
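
    The low-frequency generation step can be demonstrated on a single trace: squaring a signal whose energy sits near 60-65 Hz creates a difference-frequency component at 5 Hz, which the smoothing (low-pass) operator then isolates. Frequencies and kernel length below are invented for illustration.

      import numpy as np

      fs = 1000.0
      t = np.arange(0.0, 1.0, 1.0 / fs)
      d = np.cos(2 * np.pi * 60 * t) + np.cos(2 * np.pi * 65 * t)   # band-limited "data"

      squared = d * d                        # nonlinear step: multiply the wavefield by itself
      kernel = np.ones(101) / 101            # smoothing operator acting as a low-pass filter
      low = np.convolve(squared, kernel, mode='same')

      freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
      amp = np.abs(np.fft.rfft(low - low.mean()))
      print(freqs[np.argmax(amp)])           # dominant component at 5 Hz = 65 - 60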

  17. Full Waveform Inversion Using Nonlinearly Smoothed Wavefields

    KAUST Repository

    Li, Y.

    2017-05-26

    The lack of low frequency information in the acquired data makes full waveform inversion (FWI) converge to the accurate solution only conditionally: an initial velocity model that results in data with events within a half cycle of their location in the observed data is required for convergence. The multiplication of wavefields with slightly different frequencies generates artificial low frequency components. This can be exploited by multiplying the wavefield with itself, which is a nonlinear operation, followed by a smoothing operator to extract the artificially produced low frequency information. We construct the objective function using the nonlinearly smoothed wavefields with a global-correlation norm to properly handle the energy imbalance in the nonlinearly smoothed wavefield. Similar to the multi-scale strategy, we progressively reduce the smoothing width applied to the multiplied wavefield to admit higher resolution. We calculate the gradient of the objective function using the adjoint-state technique, which is similar to conventional FWI except for the adjoint source. Examples on the Marmousi 2 model demonstrate the feasibility of the proposed FWI method to mitigate the cycle-skipping problem in the case of a lack of low frequency information.

  18. Adaptive Smoothed Finite Elements (ASFEM) for history dependent material models

    International Nuclear Information System (INIS)

    Quak, W.; Boogaard, A. H. van den

    2011-01-01

    A successful simulation of a bulk forming process with finite elements can be difficult due to distortion of the finite elements. Nodal smoothed Finite Elements (NSFEM) are an interesting option for such a process since they show good distortion insensitivity and moreover have locking-free behavior and good computational efficiency. In this paper a method is proposed which takes advantage of the nodally smoothed field. This method, named adaptive smoothed finite elements (ASFEM), revises the mesh for every step of a simulation without mapping the history dependent material parameters. In this paper an updated-Lagrangian implementation is presented. Several examples are given to illustrate the method and to show its properties.

  19. A classical Perron method for existence of smooth solutions to boundary value and obstacle problems for degenerate-elliptic operators via holomorphic maps

    Science.gov (United States)

    Feehan, Paul M. N.

    2017-09-01

    We prove existence of solutions to boundary value problems and obstacle problems for degenerate-elliptic, linear, second-order partial differential operators with partial Dirichlet boundary conditions using a new version of the Perron method. The elliptic operators considered have a degeneracy along a portion of the domain boundary which is similar to the degeneracy of a model linear operator identified by Daskalopoulos and Hamilton [9] in their study of the porous medium equation or the degeneracy of the Heston operator [21] in mathematical finance. Existence of a solution to the partial Dirichlet problem on a half-ball, where the operator becomes degenerate on the flat boundary and a Dirichlet condition is only imposed on the spherical boundary, provides the key additional ingredient required for our Perron method. Surprisingly, proving existence of a solution to this partial Dirichlet problem with "mixed" boundary conditions on a half-ball is more challenging than one might expect. Due to the difficulty in developing a global Schauder estimate and due to compatibility conditions arising where the "degenerate" and "non-degenerate boundaries" touch, one cannot directly apply the continuity or approximate solution methods. However, in dimension two, there is a holomorphic map from the half-disk onto the infinite strip in the complex plane and one can extend this definition to higher dimensions to give a diffeomorphism from the half-ball onto the infinite "slab". The solution to the partial Dirichlet problem on the half-ball can thus be converted to a partial Dirichlet problem on the slab, albeit for an operator which now has exponentially growing coefficients. The required Schauder regularity theory and existence of a solution to the partial Dirichlet problem on the slab can nevertheless be obtained using previous work of the author and C. Pop [16]. Our Perron method relies on weak and strong maximum principles for degenerate-elliptic operators, concepts of

  20. Smoothing optimization of supporting quadratic surfaces with Zernike polynomials

    Science.gov (United States)

    Zhang, Hang; Lu, Jiandong; Liu, Rui; Ma, Peifu

    2018-03-01

    A new optimization method to obtain a smooth freeform optical surface from an initial surface generated by the supporting quadratic method (SQM) is proposed. To smooth the initial surface, a 9-vertex system from the neighboring quadratic surface and the Zernike polynomials are employed to establish a linear equation system. A locally optimized surface for the 9-vertex system can be built by solving the equations. Finally, a continuous smooth optimized surface is constructed by applying the above algorithm over the whole initial surface and stitching the local results. The spot corresponding to the optimized surface is no longer a set of discrete pixels but a continuous distribution.

  1. Efficient computation of smoothing splines via adaptive basis sampling

    KAUST Repository

    Ma, Ping

    2015-06-24

    © 2015 Biometrika Trust. Smoothing splines provide flexible nonparametric regression estimators. However, the high computational cost of smoothing splines for large datasets has hindered their wide application. In this article, we develop a new method, named adaptive basis sampling, for efficient computation of smoothing splines in super-large samples. Except for the univariate case where the Reinsch algorithm is applicable, a smoothing spline for a regression problem with sample size n can be expressed as a linear combination of n basis functions and its computational complexity is generally O(n³). We achieve a more scalable computation in the multivariate case by evaluating the smoothing spline using a smaller set of basis functions, obtained by an adaptive sampling scheme that uses values of the response variable. Our asymptotic analysis shows that smoothing splines computed via adaptive basis sampling converge to the true function at the same rate as full basis smoothing splines. Using simulation studies and a large-scale deep earth core-mantle boundary imaging study, we show that the proposed method outperforms a sampling method that does not use the values of response variables.
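
    A minimal sketch of the underlying idea, computing a spline fit from far fewer basis functions than data points. This uses a plain least-squares B-spline with knots at quantiles of x, not the paper's adaptive basis-sampling scheme (which samples using the response values); the sample size, knot count, and test function are made up.

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

rng = np.random.default_rng(0)
n = 100_000                                  # one basis function per point would be prohibitive
x = np.sort(rng.uniform(0.0, 1.0, n))
y = np.sin(8 * np.pi * x**2) + 0.3 * rng.standard_normal(n)

k = 3                                        # cubic spline
n_interior = 40                              # a small basis instead of n functions
interior = np.quantile(x, np.linspace(0.0, 1.0, n_interior + 2)[1:-1])
t = np.r_[[x[0]] * (k + 1), interior, [x[-1]] * (k + 1)]

spl = make_lsq_spline(x, y, t, k=k)          # banded least squares, far cheaper than O(n^3)
rmse = np.sqrt(np.mean((spl(x) - np.sin(8 * np.pi * x**2)) ** 2))
print("rmse against the true function:", rmse)
```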

  2. Efficient computation of smoothing splines via adaptive basis sampling

    KAUST Repository

    Ma, Ping; Huang, Jianhua Z.; Zhang, Nan

    2015-01-01

    © 2015 Biometrika Trust. Smoothing splines provide flexible nonparametric regression estimators. However, the high computational cost of smoothing splines for large datasets has hindered their wide application. In this article, we develop a new method, named adaptive basis sampling, for efficient computation of smoothing splines in super-large samples. Except for the univariate case where the Reinsch algorithm is applicable, a smoothing spline for a regression problem with sample size n can be expressed as a linear combination of n basis functions and its computational complexity is generally O(n³). We achieve a more scalable computation in the multivariate case by evaluating the smoothing spline using a smaller set of basis functions, obtained by an adaptive sampling scheme that uses values of the response variable. Our asymptotic analysis shows that smoothing splines computed via adaptive basis sampling converge to the true function at the same rate as full basis smoothing splines. Using simulation studies and a large-scale deep earth core-mantle boundary imaging study, we show that the proposed method outperforms a sampling method that does not use the values of response variables.

  3. Smooth homogeneous structures in operator theory

    CERN Document Server

    Beltita, Daniel

    2005-01-01

    Geometric ideas and techniques play an important role in operator theory and the theory of operator algebras. Smooth Homogeneous Structures in Operator Theory builds the background needed to understand this circle of ideas and reports on recent developments in this fruitful field of research. Requiring only a moderate familiarity with functional analysis and general topology, the author begins with an introduction to infinite dimensional Lie theory with emphasis on the relationship between Lie groups and Lie algebras. A detailed examination of smooth homogeneous spaces follows. This study is illustrated by familiar examples from operator theory and develops methods that allow endowing such spaces with structures of complex manifolds. The final section of the book explores equivariant monotone operators and Kähler structures. It examines certain symmetry properties of abstract reproducing kernels and arrives at a very general version of the construction of restricted Grassmann manifolds from the theory of loo...

  4. Non-smooth dynamical systems

    CERN Document Server

    2000-01-01

    The book provides a self-contained introduction to the mathematical theory of non-smooth dynamical problems, as they frequently arise from mechanical systems with friction and/or impacts. It is aimed at applied mathematicians, engineers, and applied scientists in general who wish to learn the subject.

  5. Panel Smooth Transition Regression Models

    DEFF Research Database (Denmark)

    González, Andrés; Terasvirta, Timo; Dijk, Dick van

    We introduce the panel smooth transition regression model. This new model is intended for characterizing heterogeneous panels, allowing the regression coefficients to vary both across individuals and over time. Specifically, heterogeneity is allowed for by assuming that these coefficients are bou...

  6. Smoothing type buffer memory device

    International Nuclear Information System (INIS)

    Podorozhnyj, D.M.; Yashin, I.V.

    1990-01-01

    The layout of a micropower 4-bit smoothing-type buffer memory device is given; it allows recording, without counting, sequences of randomly distributed input pulses in multi-channel devices with serial polling. The power spent by a memory cell to record one binary digit is not greater than 0.15 mW, and the device dead time is 10 μs

  7. Covariances of smoothed observational data

    Czech Academy of Sciences Publication Activity Database

    Vondrák, Jan; Čepek, A.

    2000-01-01

    Roč. 40, 5-6 (2000), s. 42-44 ISSN 1210-2709 R&D Projects: GA ČR GA205/98/1104 Institutional research plan: CEZ:AV0Z1003909 Keywords : digital filter * smoothing * estimation of uncertainties Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics

  8. Income smoothing by Dutch hospitals

    NARCIS (Netherlands)

    Boterenbrood, D.R.

    2014-01-01

    Research indicates that hospitals manage their earnings. However, these findings might be influenced by methodological issues. In this study, I exploit specific features of Dutch hospitals to study income smoothing while limiting these methodological issues. The managers of Dutch hospitals have the

  9. Exchange rate smoothing in Hungary

    OpenAIRE

    Karádi, Péter

    2005-01-01

    The paper proposes a structural empirical model capable of examining exchange rate smoothing in the small, open economy of Hungary. The framework assumes the existence of an unobserved and changing implicit exchange rate target. The central bank is assumed to use interest rate policy to obtain this preferred rate in the medium term, while market participants are assumed to form rational expectations about this target and influence exchange rates accordingly. The paper applies unobserved varia...

  10. Workshop on advances in smooth particle hydrodynamics

    Energy Technology Data Exchange (ETDEWEB)

    Wingate, C.A.; Miller, W.A.

    1993-12-31

    These proceedings contain viewgraphs presented at the 1993 workshop held at Los Alamos National Laboratory. Discussed topics include: negative stress, reactive flow calculations, interface problems, boundaries and interfaces, energy conservation in viscous flows, linked penetration calculations, stability and consistency of the SPH method, instabilities, wall heating and conservative smoothing, tensors, tidal disruption of stars, breaking the 10,000,000 particle limit, modelling relativistic collapse, SPH without H, relativistic KSPH avoidance of velocity based kernels, tidal compression and disruption of stars near a supermassive rotating black hole, and finally relativistic SPH viscosity and energy.

  11. Smooth surfaces from rational bilinear patches

    KAUST Repository

    Shi, Ling; Wang, Jun; Pottmann, Helmut

    2014-01-01

    Smooth freeform skins from simple panels constitute a challenging topic arising in contemporary architecture. We contribute to this problem area by showing how to approximate a negatively curved surface by smoothly joined rational bilinear patches

  12. Localization method of picking point of apple target based on smoothing contour symmetry axis algorithm%基于平滑轮廓对称轴法的苹果目标采摘点定位方法

    Institute of Scientific and Technical Information of China (English)

    王丹丹; 徐越; 宋怀波; 何东健

    2015-01-01

    Accurate localization of the picking point is a key problem that picking robots must solve, and it is the first step in carrying out the picking task. Given the good symmetry of apple targets, and exploiting the translation and rotation invariance of the moment of inertia together with its property of reaching extreme values along the symmetry axis, a method for locating the picking point of apple targets based on the contour symmetry axis is proposed. To address the low localization accuracy caused by the insufficiently smooth edges of segmented apple targets, a contour smoothing method for apple targets is also presented. To verify the effectiveness of the algorithm, experiments were performed on 20 randomly selected images of unoccluded single apples, with and without contour smoothing. The results show that the average localization error without contour smoothing was 20.678°, whereas with contour smoothing it was 4.542°, a reduction of 78.035%; the average running time without contour smoothing was 10.2 ms, whereas with contour smoothing it was 7.5 ms, a reduction of 25.839%. This indicates that the contour smoothing algorithm improves both localization accuracy and computational efficiency. The smoothed contour symmetry axis algorithm reliably finds the symmetry axis of apple targets and locates the picking point, showing that applying this method to symmetry axis extraction and picking point localization of apple targets is feasible. The steps of the algorithm are as follows: first, the image was transformed from RGB color space into

  13. Calcium dynamics in vascular smooth muscle

    OpenAIRE

    Amberg, Gregory C.; Navedo, Manuel F.

    2013-01-01

    Smooth muscle cells are ultimately responsible for determining vascular luminal diameter and blood flow. Dynamic changes in intracellular calcium are a critical mechanism regulating vascular smooth muscle contractility. Processes influencing intracellular calcium are therefore important regulators of vascular function with physiological and pathophysiological consequences. In this review we discuss the major dynamic calcium signals identified and characterized in vascular smooth muscle cells....

  14. multiscale smoothing in supervised statistical learning

    Indian Academy of Sciences (India)

    Optimum level of smoothing is chosen based on the entire training sample, while a good choice of smoothing parameter may also depend on the observation to be classified. One may like to assess the strength of evidence in favor of different competing classes at different scales of smoothing. In allows only one single ...

  15. A SAS IML Macro for Loglinear Smoothing

    Science.gov (United States)

    Moses, Tim; von Davier, Alina

    2011-01-01

    Polynomial loglinear models for one-, two-, and higher-way contingency tables have important applications to measurement and assessment. They are essentially regarded as a smoothing technique, which is commonly referred to as loglinear smoothing. A SAS IML (SAS Institute, 2002a) macro was created to implement loglinear smoothing according to…

  16. Automatic smoothing parameter selection in GAMLSS with an application to centile estimation.

    Science.gov (United States)

    Rigby, Robert A; Stasinopoulos, Dimitrios M

    2014-08-01

    A method for automatic selection of the smoothing parameters in a generalised additive model for location, scale and shape (GAMLSS) is introduced. The method uses a P-spline representation of the smoothing terms to express them as random effect terms, with an internal (or local) maximum likelihood estimation on the predictor scale of each distribution parameter to estimate its smoothing parameters. This provides a fast method for estimating multiple smoothing parameters. The method is applied to centile estimation where all four parameters of a distribution for the response variable are modelled as smooth functions of a transformed explanatory variable x. This allows smooth modelling of the location, scale, skewness and kurtosis parameters of the response variable distribution as functions of x. © The Author(s) 2013.
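
    The P-spline ingredient the method builds on can be sketched compactly. Below is a minimal Python illustration, assuming a B-spline design matrix with a second-order difference penalty; for brevity the smoothing parameter is picked by a simple GCV grid search rather than the internal maximum likelihood estimation the paper actually uses, and the data and basis sizes are made up.

```python
import numpy as np
from scipy.interpolate import BSpline   # design_matrix needs SciPy >= 1.8

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 400)
y = np.exp(-4 * x) * np.sin(6 * np.pi * x) + 0.1 * rng.standard_normal(x.size)

k, n_basis = 3, 30
t = np.r_[[0.0] * (k + 1), np.linspace(0, 1, n_basis - k + 1)[1:-1], [1.0] * (k + 1)]
B = BSpline.design_matrix(x, t, k).toarray()       # 400 x 30 B-spline basis
D = np.diff(np.eye(n_basis), n=2, axis=0)          # second-order difference penalty

best = (np.inf, None)
for lam in 10.0 ** np.arange(-6, 4):               # simple grid in place of ML
    H = B @ np.linalg.solve(B.T @ B + lam * D.T @ D, B.T)   # hat matrix
    resid = y - H @ y
    edf = np.trace(H)                              # effective degrees of freedom
    gcv = x.size * (resid @ resid) / (x.size - edf) ** 2
    if gcv < best[0]:
        best = (gcv, lam)
print("GCV-selected smoothing parameter:", best[1])
```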

  17. Smoothing the payoff for efficient computation of Basket option prices

    KAUST Repository

    Bayer, Christian

    2017-07-22

    We consider the problem of pricing basket options in a multivariate Black–Scholes or Variance-Gamma model. From a numerical point of view, pricing such options corresponds to moderate and high-dimensional numerical integration problems with non-smooth integrands. Due to this lack of regularity, higher order numerical integration techniques may not be directly available, requiring the use of methods like Monte Carlo specifically designed to work for non-regular problems. We propose to use the inherent smoothing property of the density of the underlying in the above models to mollify the payoff function by means of an exact conditional expectation. The resulting conditional expectation is unbiased and yields a smooth integrand, which is amenable to the efficient use of adaptive sparse-grid cubature. Numerical examples indicate that the high-order method may perform orders of magnitude faster than Monte Carlo or Quasi Monte Carlo methods in dimensions up to 35.
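
    The mollification idea can be seen in one dimension. The sketch below is a toy Bachelier-style setting, not the paper's multivariate Black–Scholes or Variance-Gamma models: conditioning away a Gaussian component Y turns the kinked call payoff into an exactly smooth function of the remaining variable. The strike, volatility, and test point are made up.

```python
import numpy as np
from scipy.stats import norm

K, sigma = 1.0, 0.3          # made-up strike and conditional std of Y

def smoothed_payoff(x):
    # g(x) = E[(x + Y - K)^+] for Y ~ N(0, sigma^2): smooth in x, unlike the payoff itself
    d = (x - K) / sigma
    return (x - K) * norm.cdf(d) + sigma * norm.pdf(d)

# sanity check against brute-force Monte Carlo at one made-up point
rng = np.random.default_rng(2)
x0 = 1.1
mc = np.maximum(x0 + sigma * rng.standard_normal(1_000_000) - K, 0.0).mean()
print(smoothed_payoff(x0), mc)   # should agree to roughly three decimals
```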

  18. Calcium signaling in smooth muscle.

    Science.gov (United States)

    Hill-Eubanks, David C; Werner, Matthias E; Heppner, Thomas J; Nelson, Mark T

    2011-09-01

    Changes in intracellular Ca(2+) are central to the function of smooth muscle, which lines the walls of all hollow organs. These changes take a variety of forms, from sustained, cell-wide increases to temporally varying, localized changes. The nature of the Ca(2+) signal is a reflection of the source of Ca(2+) (extracellular or intracellular) and the molecular entity responsible for generating it. Depending on the specific channel involved and the detection technology employed, extracellular Ca(2+) entry may be detected optically as graded elevations in intracellular Ca(2+), junctional Ca(2+) transients, Ca(2+) flashes, or Ca(2+) sparklets, whereas release of Ca(2+) from intracellular stores may manifest as Ca(2+) sparks, Ca(2+) puffs, or Ca(2+) waves. These diverse Ca(2+) signals collectively regulate a variety of functions. Some functions, such as contractility, are unique to smooth muscle; others are common to other excitable cells (e.g., modulation of membrane potential) and nonexcitable cells (e.g., regulation of gene expression).

  19. Lensing smoothing of BAO wiggles

    Energy Technology Data Exchange (ETDEWEB)

    Dio, Enea Di, E-mail: enea.didio@oats.inaf.it [INAF—Osservatorio Astronomico di Trieste, Via G.B. Tiepolo 11, I-34143 Trieste (Italy)

    2017-03-01

    We study non-perturbatively the effect of the deflection angle on the BAO wiggles of the matter power spectrum in real space. We show that from redshift z ∼2 this introduces a dispersion of roughly 1 Mpc at BAO scale, which corresponds approximately to a 1% effect. The lensing effect induced by the deflection angle, which is completely geometrical and survey independent, smears out the BAO wiggles. The effect on the power spectrum amplitude at BAO scale is about 0.1 % for z ∼2 and 0.2 % for z ∼4. We compare the smoothing effects induced by the lensing potential and non-linear structure formation, showing that the two effects become comparable at z ∼ 4, while the lensing effect dominates for sources at higher redshifts. We note that this effect is not accounted through BAO reconstruction techniques.

  20. Radial smoothing and closed orbit

    International Nuclear Information System (INIS)

    Burnod, L.; Cornacchia, M.; Wilson, E.

    1983-11-01

    A complete simulation leading to a description of one of the error curves must involve four phases: (a) random drawing of the six set-up points within a normal population having a standard deviation of 1.3 mm; (b) random drawing of the six vertices of the curve in the sextant mode within a normal population having a standard deviation of 1.2 mm. These vertices are to be set with respect to the axis of the error lunes, while this axis has as its origins the positions defined by the preceding drawing; (c) mathematical definition of six parabolic curves and their junctions. These latter may be curves with very slight curvatures, or segments of a straight line passing through the set-up point and having lengths no longer than one LSS. Thus one gets a mean curve for the absolute errors; (d) plotting of the actually observed radial positions with respect to the mean curve (results of smoothing)

  1. Doing smooth pursuit paradigms in Windows 7

    DEFF Research Database (Denmark)

    Wilms, Inge Linda

    predict strengths or deficits in perception and attention. However, smooth pursuit movements have been difficult to study and very little normative data is available for smooth pursuit performance in children and adults. This poster describes the challenges in setting up a smooth pursuit paradigm...... in Windows 7 with live capturing of eye movements using a Tobii TX300 eye tracker. In particular, the poster describes the challenges and limitations created by the hardware and the software...

  2. Construction of a smoothed DEA frontier

    Directory of Open Access Journals (Sweden)

    Mello João Carlos Correia Baptista Soares de

    2002-01-01

    Full Text Available It is known that the DEA multipliers model does not allow a unique solution for the weights. This is due to the absence of unique derivatives at the extreme-efficient points, which is a consequence of the piecewise linear nature of the frontier. In this paper we propose a method to solve this problem, consisting of changing the original DEA frontier for a new one, smooth (with continuous derivatives at every point) and closest to the original frontier. We present the theoretical development for the general case, exemplified with the particular case of the BCC model with one input and one output. The 3-dimensional problem is briefly discussed. Some uses of the model are summarised, and one of them, a new Cross-Evaluation model, is presented.

  3. Income and Consumption Smoothing among US States

    DEFF Research Database (Denmark)

    Sørensen, Bent; Yosha, Oved

    within regions but not between regions. This suggests that capital markets transcend regional barriers while credit markets are regional in their nature. Smoothing within the club of rich states is accomplished mainly via capital markets whereas consumption smoothing is dominant within the club of poor...... states. The fraction of a shock to gross state products smoothed by the federal tax-transfer system is the same for various regions and other clubs of states. We calculate the scope for consumption smoothing within various regions and clubs, finding that most gains from risk sharing can be achieved...

  4. Restoring a smooth function from its noisy integrals

    Science.gov (United States)

    Goulko, Olga; Prokof'ev, Nikolay; Svistunov, Boris

    2018-05-01

    Numerical (and experimental) data analysis often requires the restoration of a smooth function from a set of sampled integrals over finite bins. We present the bin hierarchy method that efficiently computes the maximally smooth function from the sampled integrals using essentially all the information contained in the data. We perform extensive tests with different classes of functions and levels of data quality, including Monte Carlo data suffering from a severe sign problem and physical data for the Green's function of the Fröhlich polaron.
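
    The problem setup can be written down as a small penalized least-squares sketch. This is not the authors' bin hierarchy method, but it shows concretely what restoring a smooth function from sampled integrals means; grid sizes, the test function, noise level, and penalty weight are all made up.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n_bins = 400, 20                          # fine grid points, sampled bins
x = np.linspace(0.0, 1.0, m)
dx = x[1] - x[0]
f_true = np.sin(2 * np.pi * x) + 0.5 * x

# forward operator: row j of A integrates f over bin j (simple Riemann sum)
edges = np.linspace(0.0, 1.0, n_bins + 1)
A = np.zeros((n_bins, m))
for j in range(n_bins):
    inside = (x >= edges[j]) & (x < edges[j + 1])
    if j == n_bins - 1:
        inside |= x == edges[-1]
    A[j, inside] = dx
b = A @ f_true + 0.002 * rng.standard_normal(n_bins)   # noisy bin integrals

D = np.diff(np.eye(m), n=2, axis=0)          # roughness (second differences)
lam = 1e-6                                   # fixed penalty weight (made up)
f_hat = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ b)
print("max abs reconstruction error:", np.abs(f_hat - f_true).max())
```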

  5. Smooth torque speed characteristic of switched reluctance motors

    DEFF Research Database (Denmark)

    Zeng, Hui; Chen, Zhe; Chen, Hao

    2014-01-01

    The torque ripple of switched reluctance motors (SRMs) is the main disadvantage that limits the industrial application of these motors. Although several methods for smooth-toque operation (STO) have been proposed, STO works well only within a certain torque and speed range because...

  6. Smoothing identification of systems with small non-linearities

    Czech Academy of Sciences Publication Activity Database

    Kozánek, Jan; Piranda, J.

    2003-01-01

    Roč. 38, č. 1 (2003), s. 71-84 ISSN 0025-6455 R&D Projects: GA ČR GA101/00/1471 Institutional research plan: CEZ:AV0Z2076919 Keywords : identification * small non-linearities * smoothing methods Subject RIV: BI - Acoustics Impact factor: 0.237, year: 2003

  7. Technique for smoothing free-flight oscillation data.

    CSIR Research Space (South Africa)

    Beyers, ME

    1975-01-01

    Full Text Available A technique based on superposition of tricyclic solutions has been proposed for smoothing free-flight angular motion. When incorporated into a conventional tricyclic data reduction program, the method is convenient to use and does not require a...

  8. An inductive algorithm for smooth approximation of functions

    International Nuclear Information System (INIS)

    Kupenova, T.N.

    2011-01-01

    An inductive algorithm is presented for smooth approximation of functions, based on the Tikhonov regularization method and applied to a specific kind of the Tikhonov parametric functional. The discrepancy principle is used for estimation of the regularization parameter. The principle of heuristic self-organization is applied for assessment of some parameters of the approximating function
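
    A minimal Python sketch of the two named ingredients, Tikhonov regularization plus the discrepancy principle: the smoothing parameter is raised until the residual norm matches the known noise level. The concrete functional, the data, and the log-scale bisection are illustrative assumptions, not the paper's specific parametric functional.

```python
import numpy as np

rng = np.random.default_rng(4)
n, sigma = 200, 0.1
x = np.linspace(0.0, 1.0, n)
y = np.cos(3 * np.pi * x) + sigma * rng.standard_normal(n)

D = np.diff(np.eye(n), n=2, axis=0)          # roughness operator

def residual_norm(lam):
    f = np.linalg.solve(np.eye(n) + lam * D.T @ D, y)   # Tikhonov smoother
    return np.linalg.norm(y - f)

target = sigma * np.sqrt(n)                  # expected norm of the noise
lo, hi = 1e-8, 1e8                           # residual grows with lam, so bisect
for _ in range(60):
    mid = np.sqrt(lo * hi)                   # bisection on a log scale
    lo, hi = (mid, hi) if residual_norm(mid) < target else (lo, mid)
print("discrepancy-principle lambda: %.3g" % np.sqrt(lo * hi))
```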

  9. Two-dimensional interpolation with experimental data smoothing

    International Nuclear Information System (INIS)

    Trejbal, Z.

    1989-01-01

    A method of two-dimensional interpolation with smoothing of time statistically deflected points is developed for processing of magnetic field measurements at the U-120M cyclotron. Mathematical statement of initial requirements and the final result of relevant algebraic transformations are given. 3 refs.

  10. Effective Five Directional Partial Derivatives-Based Image Smoothing and a Parallel Structure Design.

    Science.gov (United States)

    Choongsang Cho; Sangkeun Lee

    2016-04-01

    Image smoothing has been used for image segmentation, image reconstruction, object classification, and 3D content generation. Several smoothing approaches have been used at the pre-processing step to retain the critical edge, while removing noise and small details. However, they have limited performance, especially in removing small details and smoothing discrete regions. Therefore, to provide fast and accurate smoothing, we propose an effective scheme that uses a weighted combination of the gradient, Laplacian, and diagonal derivatives of a smoothed image. In addition, to reduce computational complexity, we designed and implemented a parallel processing structure for the proposed scheme on a graphics processing unit (GPU). For an objective evaluation of the smoothing performance, the images were linearly quantized into several layers to generate experimental images, and the quantized images were smoothed using several methods for reconstructing the smoothly changed shape and intensity of the original image. Experimental results showed that the proposed scheme has higher objective scores and better successful smoothing performance than similar schemes, while preserving and removing critical and trivial details, respectively. For computational complexity, the proposed smoothing scheme running on a GPU provided 18 and 16 times lower complexity than the proposed smoothing scheme running on a CPU and the L0-based smoothing scheme, respectively. In addition, a simple noise reduction test was conducted to show the characteristics of the proposed approach; it reported that the presented algorithm outperforms the state-of-the-art algorithms by more than 5.4 dB. Therefore, we believe that the proposed scheme can be a useful tool for efficient image smoothing.
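
    A rough sketch in the spirit of the scheme: one explicit smoothing iteration built from a weighted combination of axis-aligned and diagonal Laplacian-like derivatives. The weights, step size, and iteration count are made up, the published five-directional formulation is not reproduced exactly, and no GPU parallelization is attempted.

```python
import numpy as np

def smooth(img, w_axis=0.8, w_diag=0.2, dt=0.2, n_iter=50):
    """Explicit diffusion built from axis and diagonal second derivatives."""
    out = img.astype(float).copy()
    for _ in range(n_iter):
        # 4-neighbour axis-aligned Laplacian via periodic shifts (for brevity)
        lap = (np.roll(out, 1, 0) + np.roll(out, -1, 0)
               + np.roll(out, 1, 1) + np.roll(out, -1, 1) - 4.0 * out)
        # 4-neighbour diagonal Laplacian; diagonal spacing is sqrt(2), hence the /2
        diag = (np.roll(np.roll(out, 1, 0), 1, 1)
                + np.roll(np.roll(out, 1, 0), -1, 1)
                + np.roll(np.roll(out, -1, 0), 1, 1)
                + np.roll(np.roll(out, -1, 0), -1, 1) - 4.0 * out)
        out += dt * (w_axis * lap + w_diag * diag / 2.0)  # small dt keeps this stable
    return out

noisy = np.random.default_rng(5).normal(0.5, 0.1, (64, 64))
print("pixel std before/after:", noisy.std(), smooth(noisy).std())
```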

  11. Smooth horizons and quantum ripples

    International Nuclear Information System (INIS)

    Golovnev, Alexey

    2015-01-01

    Black holes are unique objects which allow for meaningful theoretical studies of strong gravity and even quantum gravity effects. An infalling and a distant observer would have very different views on the structure of the world. However, a careful analysis has shown that it entails no genuine contradictions for physics, and the paradigm of observer complementarity has been coined. Recently this picture was put into doubt. In particular, it was argued that in old black holes a firewall must form in order to protect the basic principles of quantum mechanics. This AMPS paradox has already been discussed in a vast number of papers with different attitudes and conclusions. Here we want to argue that a possible source of confusion is the neglect of quantum gravity effects. Contrary to widespread perception, it does not necessarily mean that effective field theory is inapplicable in rather smooth neighbourhoods of large black hole horizons. The real offender might be an attempt to consistently use it over the huge distances from the near-horizon zone of old black holes to the early radiation. We give simple estimates to support this viewpoint and show how the Page time and (somewhat more speculative) scrambling time do appear. (orig.)

  12. Smooth horizons and quantum ripples

    Energy Technology Data Exchange (ETDEWEB)

    Golovnev, Alexey [Saint Petersburg State University, High Energy Physics Department, Saint-Petersburg (Russian Federation)

    2015-05-15

    Black holes are unique objects which allow for meaningful theoretical studies of strong gravity and even quantum gravity effects. An infalling and a distant observer would have very different views on the structure of the world. However, a careful analysis has shown that it entails no genuine contradictions for physics, and the paradigm of observer complementarity has been coined. Recently this picture was put into doubt. In particular, it was argued that in old black holes a firewall must form in order to protect the basic principles of quantum mechanics. This AMPS paradox has already been discussed in a vast number of papers with different attitudes and conclusions. Here we want to argue that a possible source of confusion is the neglect of quantum gravity effects. Contrary to widespread perception, it does not necessarily mean that effective field theory is inapplicable in rather smooth neighbourhoods of large black hole horizons. The real offender might be an attempt to consistently use it over the huge distances from the near-horizon zone of old black holes to the early radiation. We give simple estimates to support this viewpoint and show how the Page time and (somewhat more speculative) scrambling time do appear. (orig.)

  13. Local Transfer Coefficient, Smooth Channel

    Directory of Open Access Journals (Sweden)

    R. T. Kukreja

    1998-01-01

    Full Text Available Naphthalene sublimation technique and the heat/mass transfer analogy are used to determine the detailed local heat/mass transfer distributions on the leading and trailing walls of a two-pass square channel with smooth walls that rotates about a perpendicular axis. Since the variation of density is small in the flow through the channel, buoyancy effect is negligible. Results show that, in both the stationary and rotating channel cases, very large spanwise variations of the mass transfer exist in the turn and in the region immediately downstream of the turn in the second straight pass. In the first straight pass, the rotation-induced Coriolis forces reduce the mass transfer on the leading wall and increase the mass transfer on the trailing wall. In the turn, rotation significantly increases the mass transfer on the leading wall, especially in the upstream half of the turn. Rotation also increases the mass transfer on the trailing wall, more in the downstream half of the turn than in the upstream half of the turn. Immediately downstream of the turn, rotation causes the mass transfer to be much higher on the trailing wall near the downstream corner of the tip of the inner wall than on the opposite leading wall. The mass transfer in the second pass is higher on the leading wall than on the trailing wall. A slower flow causes higher mass transfer enhancement in the turn on both the leading and trailing walls.

  14. Diagnosis of osteoarthritis by cartilage surface smoothness quantified automatically from knee MRI

    DEFF Research Database (Denmark)

    Tummala, Sudhakar; Bay-Jensen, Anne-Christine; Karsdal, Morten A.

    2011-01-01

    Objective: We investigated whether surface smoothness of articular cartilage in the medial tibiofemoral compartment quantified from magnetic resonance imaging (MRI) could be appropriate as a diagnostic marker of osteoarthritis (OA). Method: At baseline, 159 community-based subjects aged 21 to 81...... with normal or OA-affected knees were recruited to provide a broad range of OA states. Smoothness was quantified using an automatic framework from low-field MRI in the tibial, femoral, and femoral subcompartments. Diagnostic ability of smoothness was evaluated by comparison with conventional OA markers......, correlations between smoothness and pain values and smoothness loss and cartilage loss supported a link to progression of OA. Thereby, smoothness markers may allow detection and monitoring of OA, supplementing currently accepted markers....

  15. Quantification of smoothing requirement for 3D optic flow calculation of volumetric images

    DEFF Research Database (Denmark)

    Bab-Hadiashar, Alireza; Tennakoon, Ruwan B.; de Bruijne, Marleen

    2013-01-01

    Complexities of dynamic volumetric imaging challenge the available computer vision techniques on a number of different fronts. This paper examines the relationship between the estimation accuracy and required amount of smoothness for a general solution from a robust statistics perspective. We show...... that a (surprisingly) small amount of local smoothing is required to satisfy both the necessary and sufficient conditions for accurate optic flow estimation. This notion is called 'just enough' smoothing, and its proper implementation has a profound effect on the preservation of local information in processing 3D...... dynamic scans. To demonstrate the effect of 'just enough' smoothing, a robust 3D optic flow method with quantized local smoothing is presented, and the effect of local smoothing on the accuracy of motion estimation in dynamic lung CT images is examined using both synthetic and real image sequences...

  16. Diffusion tensor smoothing through weighted Karcher means

    Science.gov (United States)

    Carmichael, Owen; Chen, Jun; Paul, Debashis; Peng, Jie

    2014-01-01

    Diffusion tensor magnetic resonance imaging (MRI) quantifies the spatial distribution of water diffusion at each voxel on a regular grid of locations in a biological specimen by diffusion tensors (3 × 3 positive definite matrices). Removal of noise from DTI is an important problem due to the high scientific relevance of DTI and the relatively low signal-to-noise ratio it provides. Leading approaches to this problem amount to estimation of weighted Karcher means of diffusion tensors within spatial neighborhoods, under various metrics imposed on the space of tensors. However, it is unclear how the behavior of these estimators varies with the magnitude of DTI sensor noise (the noise resulting from the thermal effects of MRI scanning) as well as the geometric structure of the underlying diffusion tensor neighborhoods. In this paper, we combine theoretical analysis, empirical analysis of simulated DTI data, and empirical analysis of real DTI scans to compare the noise removal performance of three kernel-based DTI smoothers that are based on Euclidean, log-Euclidean, and affine-invariant metrics. The results suggest, contrary to conventional wisdom, that imposing a simplistic Euclidean metric may in fact provide comparable or superior noise removal, especially in relatively unstructured regions and/or in the presence of moderate to high levels of sensor noise. In contrast, log-Euclidean and affine-invariant metrics may lead to better noise removal in highly structured anatomical regions, especially when the sensor noise is of low magnitude. These findings emphasize the importance of considering the interplay of sensor noise magnitude and tensor field geometric structure when assessing diffusion tensor smoothing options. They also point to the necessity for continued development of smoothing methods that perform well across a large range of scenarios. PMID:25419264
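
    Of the three metrics compared, the log-Euclidean one gives the weighted Karcher mean in closed form: the matrix exponential of the weighted sum of matrix logarithms. A minimal sketch follows; the example tensors and kernel weights are made up, and the affine-invariant mean would require an iterative solver instead.

```python
import numpy as np
from scipy.linalg import expm, logm

def log_euclidean_mean(tensors, weights):
    """Weighted Karcher mean of SPD matrices under the log-Euclidean metric."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    # logm of an SPD matrix is real; .real guards against rounding artifacts
    log_sum = sum(w * logm(T).real for w, T in zip(weights, tensors))
    return expm(log_sum)

# two SPD "diffusion tensors" in a voxel neighbourhood plus kernel weights
T1 = np.diag([3.0, 1.0, 1.0])
T2 = np.array([[2.0, 0.5, 0.0], [0.5, 1.5, 0.0], [0.0, 0.0, 1.0]])
mean = log_euclidean_mean([T1, T2], weights=[0.7, 0.3])
print(np.linalg.eigvalsh(mean))   # stays positive definite by construction
```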

  17. Quadratic Hedging Methods for Defaultable Claims

    International Nuclear Information System (INIS)

    Biagini, Francesca; Cretarola, Alessandra

    2007-01-01

    We apply the local risk-minimization approach to defaultable claims and we compare it with intensity-based evaluation formulas and the mean-variance hedging. We solve analytically the problem of finding respectively the hedging strategy and the associated portfolio for the three methods in the case of a default put option with random recovery at maturity

  18. Mediators on human airway smooth muscle.

    Science.gov (United States)

    Armour, C; Johnson, P; Anticevich, S; Ammit, A; McKay, K; Hughes, M; Black, J

    1997-01-01

    1. Bronchial hyperresponsiveness in asthma may be due to several abnormalities, but must include alterations in the airway smooth muscle responsiveness and/or volume. 2. Increased responsiveness of airway smooth muscle in vitro can be induced by certain inflammatory cell products and by induction of sensitization (atopy). 3. Increased airway smooth muscle growth can also be induced by inflammatory cell products and atopic serum. 4. Mast cell numbers are increased in the airways of asthmatics and, in our studies, in airway smooth muscle that is sensitized and hyperresponsive. 5. We propose that there is a relationship between mast cells and airway smooth muscle cells which, once an allergic process has been initiated, results in the development of critical features in the lungs in asthma.

  19. Modelling free surface flows with smoothed particle hydrodynamics

    Directory of Open Access Journals (Sweden)

    L.Di G.Sigalotti

    2006-01-01

    Full Text Available In this paper the method of Smoothed Particle Hydrodynamics (SPH is extended to include an adaptive density kernel estimation (ADKE procedure. It is shown that for a van der Waals (vdW fluid, this method can be used to deal with free-surface phenomena without difficulties. In particular, arbitrary moving boundaries can be easily handled because surface tension is effectively simulated by the cohesive pressure forces. Moreover, the ADKE method is seen to increase both the accuracy and stability of SPH since it allows the width of the kernel interpolant to vary locally in a way that only the minimum necessary smoothing is applied at and near free surfaces and sharp fluid-fluid interfaces. The method is robust and easy to implement. Examples of its resolving power are given for both the formation of a circular liquid drop under surface tension and the nonlinear oscillation of excited drops.

  20. Experimental investigation of smoothing by spectral dispersion

    International Nuclear Information System (INIS)

    Regan, Sean P.; Marozas, John A.; Kelly, John H.; Boehly, Thomas R.; Donaldson, William R.; Jaanimagi, Paul A.; Keck, Robert L.; Kessler, Terrance J.; Meyerhofer, David D.; Seka, Wolf

    2000-01-01

    Measurements of smoothing rates for smoothing by spectral dispersion (SSD) of high-power, solid-state laser beams used for inertial confinement fusion (ICF) research are reported. Smoothing rates were obtained from the intensity distributions of equivalent target plane images for laser pulses of varying duration. Simulations of the experimental data with the known properties of the phase plates and the frequency modulators are in good agreement with the experimental data. These results inspire confidence in extrapolating to higher bandwidths and other SSD configurations that may be suitable for ICF experiments and ultimately for direct-drive laser-fusion ignition. (c) 2000 Optical Society of America

  1. Bifurcations of non-smooth systems

    Science.gov (United States)

    Angulo, Fabiola; Olivar, Gerard; Osorio, Gustavo A.; Escobar, Carlos M.; Ferreira, Jocirei D.; Redondo, Johan M.

    2012-12-01

    Non-smooth systems (namely piecewise-smooth systems) have received much attention in the last decade. Many contributions in this area show that theory and applications (to electronic circuits, mechanical systems, …) are relevant to problems in science and engineering. In particular, new bifurcations have been reported in the literature, and this was the topic of this minisymposium. Thus both bifurcation theory and its applications were included. Several contributions from different fields show that non-smooth bifurcations are a hot topic in research. Thus in this paper the reader can find contributions from electronics, energy markets and population dynamics. Also, a carefully-written specific algebraic software tool is presented.

  2. Simple smoothing technique to reduce data scattering in physics experiments

    International Nuclear Information System (INIS)

    Levesque, L

    2008-01-01

    This paper describes an experiment involving motorized motion and a method to reduce data scattering from data acquisition. Jitter or minute instrumental vibrations add noise to a detected signal, which often renders small modulations of a graph very difficult to interpret. Here we describe a method to reduce scattering amongst data points from the signal measured by a photodetector that is motorized and scanned in a direction parallel to the plane of a rectangular slit during a computer-controlled diffraction experiment. The smoothing technique is investigated using subsets of many data points from the data acquisition. A limit for the number of data points in a subset is determined from the results based on the trend of the small measured signal to avoid severe changes in the shape of the signal from the averaging procedure. This simple smoothing method can be achieved using any type of spreadsheet software
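
    The subset-averaging idea reduces to a short moving-average sketch: each sample is replaced by the mean of a small window of neighbours, with the window kept short so small modulations of the curve survive. The window length and the diffraction-like test trace below are made up.

```python
import numpy as np

def subset_average(signal, window=5):
    """Replace each sample by the mean of a short window of neighbours."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

rng = np.random.default_rng(6)
x = np.linspace(0.0, 1.0, 500)
signal = np.sinc(8 * (x - 0.5)) + 0.05 * rng.standard_normal(x.size)  # jittered trace

def scatter(s):
    return np.abs(np.diff(s)).mean()   # point-to-point scatter proxy

print("scatter before/after:", scatter(signal), scatter(subset_average(signal)))
```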

  3. Smoothing of, and parameter estimation from, noisy biophysical recordings.

    Directory of Open Access Journals (Sweden)

    Quentin J M Huys

    2009-05-01

    Full Text Available Biophysically detailed models of single cells are difficult to fit to real data. Recent advances in imaging techniques allow simultaneous access to various intracellular variables, and these data can be used to significantly facilitate the modelling task. These data, however, are noisy, and current approaches to building biophysically detailed models are not designed to deal with this. We extend previous techniques to take the noisy nature of the measurements into account. Sequential Monte Carlo ("particle filtering") methods, in combination with a detailed biophysical description of a cell, are used for principled, model-based smoothing of noisy recording data. We also provide an alternative formulation of smoothing where the neural nonlinearities are estimated in a non-parametric manner. Biophysically important parameters of detailed models (such as channel densities, intercompartmental conductances, input resistances, and observation noise) are inferred automatically from noisy data via expectation-maximization. Overall, we find that model-based smoothing is a powerful, robust technique for smoothing of noisy biophysical data and for inference of biophysical parameters in the face of recording noise.
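
    A minimal bootstrap particle filter conveys the sequential Monte Carlo machinery the paper applies to detailed biophysical models; here the "cell" is just a one-dimensional random-walk state observed in Gaussian noise, and all noise levels and the particle count are made up.

```python
import numpy as np

rng = np.random.default_rng(7)
T, n_particles = 200, 1000
q, r = 0.05, 0.3                       # process / observation noise std

# simulate a hidden trajectory and its noisy observations
x_true = np.cumsum(q * rng.standard_normal(T))
y = x_true + r * rng.standard_normal(T)

particles = np.zeros(n_particles)
x_filt = np.empty(T)
for t in range(T):
    particles += q * rng.standard_normal(n_particles)       # propagate
    logw = -0.5 * ((y[t] - particles) / r) ** 2              # weight by likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    x_filt[t] = w @ particles                                # filtered mean
    idx = rng.choice(n_particles, n_particles, p=w)          # resample
    particles = particles[idx]

print("rmse of filtered vs true state:", np.sqrt(np.mean((x_filt - x_true) ** 2)))
```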

  4. Electron histochemical and autoradiographic studies of vascular smooth muscle cell

    International Nuclear Information System (INIS)

    Kameyama, Kohji; Aida, Takeo; Asano, Goro

    1982-01-01

    The authors have studied the vascular smooth muscle cell in the aorta and in the arteries of the brain and heart in autopsied cases, cholesterol-fed rabbits, and canines through electron histochemical and autoradiographic methods, using ³H-proline and ³H-thymidine. The vascular changes are variable, presumably due to functional and morphological differences between vessels. Aging, pathological conditions and physiological requirements induce disturbances of vascular functions such as contractility. Under various pathological conditions, the smooth muscle cells altered their shape, surface properties and arrangement of subcellular organelles, including changes in number. The morphological features of arteries during aging are characterized by thickening of the endothelium and media, decreasing cellularity, and increasing collagen content in the media. The autoradiographic and histochemical observations using periodic acid methenamine silver (PAM) and ruthenium red stains demonstrated that the smooth muscle cell is a connective tissue synthetic cell. The PAM impregnation has proved that small bundles of microfilaments become associated with small conglomerates of collagen and elastic fibers. Cytochemical examination will provide sufficient evidence to establish the contribution of subcellular structures. Acid phosphatase plays an important role in vascular disease and is directly involved in cellular lipid metabolism in cholesterol-fed animals, and the activity of Na-K ATPase on the plasma membrane may contribute to the regulation of vascular blood flow and vasospasms. Direct injury and subsequent abnormal contraction of smooth muscle cells may initiate increased permeability to plasma protein and lipid in the media layer and eventually may promote and enhance arteriosclerosis. (author)

  5. An analysis of 1-D smoothed particle hydrodynamics kernels

    International Nuclear Information System (INIS)

    Fulk, D.A.; Quinn, D.W.

    1996-01-01

    In this paper, the smoothed particle hydrodynamics (SPH) kernel is analyzed, resulting in measures of merit for one-dimensional SPH. Various methods of obtaining an objective measure of the quality and accuracy of the SPH kernel are addressed. Since the kernel is the key element in the SPH methodology, this should be of primary concern to any user of SPH. The results of this work are two measures of merit, one for smooth data and one near shocks. The measure of merit for smooth data is shown to be quite accurate and a useful delineator of better and poorer kernels. The measure of merit for non-smooth data is not quite as accurate, but results indicate the kernel is much less important for these types of problems. In addition to the theory, 20 kernels are analyzed using the measure of merit demonstrating the general usefulness of the measure of merit and the individual kernels. In general, it was decided that bell-shaped kernels perform better than other shapes. 12 refs., 16 figs., 7 tabs
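
    Two of the basic kernel quality measures can be checked numerically. The sketch below defines the standard 1-D cubic B-spline SPH kernel and verifies its normalization and second moment (which controls the leading smoothing error); this illustrates the kind of check involved, not the paper's specific measures of merit, and the smoothing length is made up.

```python
import numpy as np

def cubic_spline_kernel(x, h):
    """Standard 1-D cubic B-spline (M4) SPH kernel with smoothing length h."""
    q = np.abs(x) / h
    sigma = 2.0 / (3.0 * h)               # 1-D normalization constant
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))
    return sigma * w

h = 0.1
x = np.linspace(-2 * h, 2 * h, 20001)
W = cubic_spline_kernel(x, h)
print("integral (should be 1):", np.trapz(W, x))
print("second moment (sets the smoothing error, equals h^2/3):", np.trapz(x**2 * W, x))
```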

  6. Optimal Smoothing in Adaptive Location Estimation

    OpenAIRE

    Mammen, Enno; Park, Byeong U.

    1997-01-01

    In this paper higher order performance of kernel based adaptive location estimators is considered. Optimal choice of smoothing parameters is discussed and it is shown how much is lost in efficiency by not knowing the underlying translation density.

  7. Smooth surfaces from rational bilinear patches

    KAUST Repository

    Shi, Ling

    2014-01-01

    Smooth freeform skins from simple panels constitute a challenging topic arising in contemporary architecture. We contribute to this problem area by showing how to approximate a negatively curved surface by smoothly joined rational bilinear patches. The approximation problem is solved with help of a new computational approach to the hyperbolic nets of Huhnen-Venedey and Rörig and optimization algorithms based on it. We also discuss its limits which lie in the topology of the input surface. Finally, freeform deformations based on Darboux transformations are used to generate smooth surfaces from smoothly joined Darboux cyclide patches; in this way we eliminate the restriction to surfaces with negative Gaussian curvature. © 2013 Elsevier B.V.

  8. Smooth embeddings with Stein surface images

    OpenAIRE

    Gompf, Robert E.

    2011-01-01

    A simple characterization is given of open subsets of a complex surface that smoothly perturb to Stein open subsets. As applications, complex 2-space C^2 contains domains of holomorphy (Stein open subsets) that are exotic R^4's, and others homotopy equivalent to the 2-sphere but cut out by smooth, compact 3-manifolds. Pseudoconvex embeddings of Brieskorn spheres and other 3-manifolds into complex surfaces are constructed, as are pseudoconcave holomorphic fillings (with disagreeing contact and...

  9. Some splines produced by smooth interpolation

    Czech Academy of Sciences Publication Activity Database

    Segeth, Karel

    2018-01-01

    Roč. 319, 15 February (2018), s. 387-394 ISSN 0096-3003 R&D Projects: GA ČR GA14-02067S Institutional support: RVO:67985840 Keywords : smooth data approximation * smooth data interpolation * cubic spline Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 1.738, year: 2016 http://www.sciencedirect.com/science/article/pii/S0096300317302746?via%3Dihub

  10. Some splines produced by smooth interpolation

    Czech Academy of Sciences Publication Activity Database

    Segeth, Karel

    2018-01-01

    Roč. 319, 15 February (2018), s. 387-394 ISSN 0096-3003 R&D Projects: GA ČR GA14-02067S Institutional support: RVO:67985840 Keywords: smooth data approximation * smooth data interpolation * cubic spline Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 1.738, year: 2016 http://www.sciencedirect.com/science/article/pii/S0096300317302746?via%3Dihub

  11. Optimal Smooth Consumption and Annuity Design

    DEFF Research Database (Denmark)

    Bruhn, Kenneth; Steffensen, Mogens

    2013-01-01

    We propose an optimization criterion that yields extraordinary consumption smoothing compared to the well known results of the life-cycle model. Under this criterion we solve the related consumption and investment optimization problem faced by individuals with preferences for intertemporal stability in consumption. We find that the consumption and investment patterns demanded under the optimization criterion are in general offered as annuity benefits from products in the class of ‘Formula Based Smoothed Investment-Linked Annuities’.

  12. An implicit Smooth Particle Hydrodynamic code

    Energy Technology Data Exchange (ETDEWEB)

    Knapp, Charles E. [Univ. of New Mexico, Albuquerque, NM (United States)

    2000-05-01

    An implicit version of the Smooth Particle Hydrodynamic (SPH) code SPHINX has been written and is working. In conjunction with the SPHINX code the new implicit code models fluids and solids under a wide range of conditions. SPH codes are Lagrangian, meshless and use particles to model the fluids and solids. The implicit code makes use of the Krylov iterative techniques for solving large linear systems and a Newton-Raphson method for non-linear corrections. It uses numerical derivatives to construct the Jacobian matrix. It uses sparse techniques to save on memory storage and to reduce the amount of computation. It is believed that this is the first implicit SPH code to use Newton-Krylov techniques, and is also the first implicit SPH code to model solids. A description of SPH and the techniques used in the implicit code are presented. Then, the results of a number of test cases are discussed, which include a shock tube problem, a Rayleigh-Taylor problem, a breaking dam problem, and a single jet of gas problem. The results are shown to be in very good agreement with analytic solutions, experimental results, and the explicit SPHINX code. In the case of the single jet of gas it has been demonstrated that the implicit code can do a problem in much shorter time than the explicit code. The problem was, however, very unphysical, but it does demonstrate the potential of the implicit code. It is a first step toward a useful implicit SPH code.

  13. A generalized transport-velocity formulation for smoothed particle hydrodynamics

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Chi; Hu, Xiangyu Y., E-mail: xiangyu.hu@tum.de; Adams, Nikolaus A.

    2017-05-15

    The standard smoothed particle hydrodynamics (SPH) method suffers from tensile instability. In fluid-dynamics simulations this instability leads to particle clumping and void regions when negative pressure occurs. In solid-dynamics simulations, it results in unphysical structure fragmentation. In this work the transport-velocity formulation of Adami et al. (2013) is generalized for providing a solution of this long-standing problem. Other than imposing a global background pressure, a variable background pressure is used to modify the particle transport velocity and eliminate the tensile instability completely. Furthermore, such a modification is localized by defining a shortened smoothing length. The generalized formulation is suitable for fluid and solid materials with and without free surfaces. The results of extensive numerical tests on both fluid and solid dynamics problems indicate that the new method provides a unified approach for multi-physics SPH simulations.

  14. Gradient approach to quantify the gradation smoothness for output media

    Science.gov (United States)

    Kim, Youn Jin; Bang, Yousun; Choh, Heui-Keun

    2010-01-01

    We aim to quantify the perception of color gradation smoothness using objectively measurable properties. We propose a model to compute the smoothness of hardcopy color-to-color gradations. It is a gradient-based method that can be determined as a function of the 95th percentile of the second derivative for the tone-jump estimator and the fifth percentile of the first derivative for the tone-clipping estimator. Performance of the model and a previously suggested method were assessed psychophysically, and their prediction accuracies were compared to each other. Our model showed a stronger Pearson correlation to the corresponding visual data, and the magnitude of the Pearson correlation reached up to 0.87. Its statistical significance was verified through analysis of variance. Color variations of the representative memory colors (blue sky, green grass and Caucasian skin) were rendered as gradational scales and utilized as the test stimuli.
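
    The two estimators named in the abstract translate directly into percentile computations on derivatives. The sketch below applies them to a made-up coarse lightness ramp containing a small tone jump and a clipped end; the published scaling and any colour-difference preprocessing are not reproduced.

```python
import numpy as np

ramp = np.linspace(20.0, 80.0, 32)     # made-up 32-step printed gradation
ramp[16:] += 1.5                       # introduce a small tone jump mid-ramp
ramp[26:] = ramp[26]                   # clip (flatten) the final tones

d1 = np.gradient(ramp)
d2 = np.gradient(d1)
tone_jump = np.percentile(np.abs(d2), 95)   # large when abrupt steps exist
tone_clip = np.percentile(np.abs(d1), 5)    # near zero when the ramp flattens
print("tone-jump estimator:", tone_jump)
print("tone-clipping estimator:", tone_clip)
```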

  15. Full-waveform inversion using a nonlinearly smoothed wavefield

    KAUST Repository

    Li, Yuanyuan

    2017-12-08

    Conventional full-waveform inversion (FWI) based on the least-squares misfit function faces problems in converging to the global minimum when using gradient methods because of the cycle-skipping phenomena. An initial model producing data that are at most a half-cycle away from the observed data is needed for convergence to the global minimum. Low frequencies are helpful in updating low-wavenumber components of the velocity model to avoid cycle skipping. However, low enough frequencies are usually unavailable in field cases. The multiplication of wavefields of slightly different frequencies adds artificial low-frequency components in the data, which can be used for FWI to generate a convergent result and avoid cycle skipping. We generalize this process by multiplying the wavefield with itself and then applying a smoothing operator to the multiplied wavefield or its square to derive the nonlinearly smoothed wavefield, which is rich in low frequencies. The global correlation-norm-based objective function can mitigate the dependence on the amplitude information of the nonlinearly smoothed wavefield. Therefore, we have evaluated the use of this objective function when using the nonlinearly smoothed wavefield. The proposed objective function has much larger convexity than the conventional objective functions. We calculate the gradient of the objective function using the adjoint-state technique, which is similar to that of the conventional FWI except for the adjoint source. We progressively reduce the smoothing width applied to the nonlinear wavefield to naturally adopt the multiscale strategy. Using examples on the Marmousi 2 model, we determine that the proposed FWI helps to generate convergent results without the need for low-frequency information.

  16. Book vs. fair value accounting in banking and intertemporal smoothing

    OpenAIRE

    Freixas, Xavier; Tsomocos, Dimitrios P.

    2004-01-01

    The aim of this paper is to examine the pros and cons of book and fair value accounting from the perspective of the theory of banking. We consider the implications of the two accounting methods in an overlapping generations environment. As observed by Allen and Gale (1997), in an overlapping generations model, banks have a role as intergenerational connectors as they allow for intertemporal smoothing. Our main result is that when dividends depend on profits, book value ex ante dominates fair va...

  17. Design and simulation of origami structures with smooth folds.

    Science.gov (United States)

    Peraza Hernandez, E A; Hartl, D J; Lagoudas, D C

    2017-04-01

    Origami has enabled new approaches to the fabrication and functionality of multiple structures. Current methods for origami design are restricted to the idealization of folds as creases of zeroth-order geometric continuity. Such an idealization is not appropriate for origami structures of non-negligible fold thickness, or for those whose maximum curvature at the folds is restricted by material limitations. For such structures, folds are not properly represented as creases but rather as bent regions of higher-order geometric continuity. Such fold regions of arbitrary order of continuity are termed smooth folds. This paper presents a method for solving the following origami design problem: given a goal shape represented as a polygonal mesh (termed the goal mesh), find the geometry of a single planar sheet, its pattern of smooth folds, and the history of folding motion allowing the sheet to approximate the goal mesh. The parametrization of the planar sheet and the constraints that allow for a valid pattern of smooth folds are presented. The method is tested against various goal meshes having diverse geometries. The results show that every determined sheet approximates its corresponding goal mesh in a known folded configuration having fold angles obtained from the geometry of the goal mesh.

  18. Intelligent PV Power Smoothing Control Using Probabilistic Fuzzy Neural Network with Asymmetric Membership Function

    Directory of Open Access Journals (Sweden)

    Faa-Jeng Lin

    2017-01-01

    Full Text Available An intelligent PV power smoothing control using a probabilistic fuzzy neural network with asymmetric membership function (PFNN-AMF) is proposed in this study. First, a photovoltaic (PV) power plant with a battery energy storage system (BESS) is introduced. The BESS consists of a bidirectional DC/AC 3-phase inverter and LiFePO4 batteries. The difference between the actual PV power and the smoothed power is supplied by the BESS. Moreover, the network structure of the PFNN-AMF and its online learning algorithms are described in detail. Furthermore, the three-phase output currents of the PV power plant are converted to the dq-axis current components. The resulting q-axis current is the input of the PFNN-AMF power smoothing control, and the output is a smoothed PV power curve that achieves the effect of PV power smoothing. Compared to other smoothing methods, a minimum energy capacity of the BESS with a small fluctuation of the grid power can be achieved by the PV power smoothing control using the PFNN-AMF. In addition, a personal computer (PC) based PV power plant emulator and BESS are built for the experimentation. From the experimental results under various irradiance variation conditions, the effectiveness of the proposed intelligent PV power smoothing control can be verified.
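    The power-balance idea can be illustrated without the neural network: the grid receives a smoothed power, the BESS absorbs or supplies the difference, and the swing of the integrated BESS power bounds the required energy capacity. The sketch below uses a plain exponential smoother as a stand-in for the PFNN-AMF controller; the plant size and sampling interval are invented for illustration.

```python
import numpy as np

def smooth_and_size_bess(p_pv, alpha, dt_h):
    """Exponentially smooth PV power; the BESS covers the difference.

    p_grid is the smoothed power fed to the grid, p_bess = p_pv - p_grid
    is charged into (>0) or discharged from (<0) the store, and the swing
    of the integrated BESS power bounds the energy capacity needed.
    """
    p_grid = np.empty_like(p_pv)
    p_grid[0] = p_pv[0]
    for k in range(1, len(p_pv)):
        p_grid[k] = alpha * p_pv[k] + (1 - alpha) * p_grid[k - 1]
    p_bess = p_pv - p_grid
    energy = np.cumsum(p_bess) * dt_h            # stored-energy profile
    return p_grid, p_bess, energy.max() - energy.min()

# Hypothetical 1 kW plant sampled every 10 s with drifting irradiance.
rng = np.random.default_rng(0)
p_pv = 1.0 + 0.003 * rng.standard_normal(3600).cumsum()
_, _, e_cap = smooth_and_size_bess(p_pv, alpha=0.01, dt_h=10 / 3600)
print(f"required BESS energy capacity ~ {e_cap:.3f} kWh")
```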

  19. Effect of smoothing on robust chaos.

    Science.gov (United States)

    Deshpande, Amogh; Chen, Qingfei; Wang, Yan; Lai, Ying-Cheng; Do, Younghae

    2010-08-01

    In piecewise-smooth dynamical systems, situations can arise where the asymptotic attractors of the system in an open parameter interval are all chaotic (e.g., there are no periodic windows). This is the phenomenon of robust chaos. Previous works have established that robust chaos can occur through the mechanism of border-collision bifurcation, where the border is the phase-space region where discontinuities in the derivatives of the dynamical equations occur. We investigate the effect of smoothing on robust chaos and find that periodic windows can arise when a small amount of smoothness is present. We introduce a smoothing parameter and find that the measure of the periodic windows in the parameter space scales linearly with this parameter, regardless of the details of the smoothing function. Numerical support and a heuristic theory are provided to establish the scaling relation. Experimental evidence of periodic windows in a supposedly piecewise-linear dynamical system, which has been implemented as an electronic circuit, is also provided.
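    A minimal numerical illustration of this setup, assuming a tent-like map whose kink |x| is smoothed to sqrt(x^2 + eps^2); the authors' maps, circuit, and smoothing functions differ, so this only shows how windows (negative average log-derivative) can appear once eps > 0.

```python
import numpy as np

def smoothed_tent(x, mu, eps):
    """x -> 1 - mu*sqrt(x^2 + eps^2): eps = 0 is the piecewise-linear
    map (robust chaos); eps > 0 rounds the border where the derivative
    jumps, which is where periodic windows can reappear."""
    return 1.0 - mu * np.sqrt(x * x + eps * eps)

def avg_log_deriv(mu, eps, n=2000):
    """Average log|f'| along an orbit; negative values flag a window."""
    x, acc = 0.1, 0.0
    for _ in range(n):
        x = smoothed_tent(x, mu, eps)
        acc += np.log(mu * abs(x) / np.sqrt(x * x + eps * eps) + 1e-300)
    return acc / n

for eps in (0.0, 1e-3, 1e-2):
    hits = sum(avg_log_deriv(mu, eps) < 0 for mu in np.linspace(1.2, 1.9, 200))
    print(f"eps={eps:g}: {hits}/200 sampled mu values look periodic")
```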

  20. TAX SMOOTHING: TESTS ON INDONESIAN DATA

    Directory of Open Access Journals (Sweden)

    Rudi Kurniawan

    2011-01-01

    Full Text Available This paper contributes to the literature on public debt management by testing for tax smoothing behaviour in Indonesia. Tax smoothing means that the government smooths the tax rate across all future time periods to minimize the distortionary costs of taxation over time for a given path of government spending. In a stochastic economy with an incomplete bond market, tax smoothing implies that the tax rate approximates a random walk and changes in the tax rate are nearly unpredictable. For that purpose, two tests were performed. First, random walk behaviour of the tax rate was examined by undertaking unit root tests. The null hypothesis of a unit root cannot be rejected, indicating that the tax rate is nonstationary and, hence, follows a random walk. Second, the predictability of the tax rate was examined by regressing changes in the tax rate on its own lagged values and also on lagged values of changes in the government expenditure ratio and growth of real output. These are found to be not significant in predicting changes in the tax rate. Taken together, the present evidence seems to be consistent with tax smoothing and therefore provides support for this theory.
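    Both tests are standard and easy to reproduce. A minimal sketch, assuming statsmodels is available and substituting a synthetic random-walk series for the Indonesian tax-rate data:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

# Hypothetical tax-rate series (tax revenue / GDP); replace with real data.
rng = np.random.default_rng(1)
tax_rate = 0.12 + np.cumsum(0.002 * rng.standard_normal(120))

# Test 1: unit root. Failing to reject (large p-value) is consistent
# with the tax rate following a random walk, as tax smoothing implies.
adf_stat, pvalue, *_ = adfuller(tax_rate)
print(f"ADF statistic = {adf_stat:.2f}, p-value = {pvalue:.2f}")

# Test 2: predictability. Regress tax-rate changes on their own lag;
# under tax smoothing the lag coefficient should be insignificant.
d_tax = np.diff(tax_rate)
ols = sm.OLS(d_tax[1:], sm.add_constant(d_tax[:-1])).fit()
print(ols.summary().tables[1])
```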

  1. Smoothing densities under shape constraints

    OpenAIRE

    Davies, Paul Laurie; Meise, Monika

    2009-01-01

    In Davies and Kovac (2004) the taut string method was proposed for calculating a density which is consistent with the data and has the minimum number of peaks. The main disadvantage of the taut string density is that it is piecewise constant. In this paper a procedure is presented which gives a smoother density by minimizing the total variation of a derivative of the density subject to the number, positions and heights of the local extreme values obtained from the taut string density. ...

  2. Neurobiological studies of risk assessment: A comparison of expected utility and mean-variance approaches

    OpenAIRE

    d'Acremont, M.; Bossaerts, Peter

    2008-01-01

    When modeling valuation under uncertainty, economists generally prefer expected utility because it has an axiomatic foundation, meaning that the resulting choices will satisfy a number of rationality requirements. In expected utility theory, values are computed by multiplying probabilities of each possible state of nature by the payoff in that state and summing the results. The drawback of this approach is that all state probabilities need to be dealt with separately, which becomes extremely ...

  3. Optimization Stock Portfolio With Mean-Variance and Linear Programming: Case In Indonesia Stock Market

    OpenAIRE

    Yen Sun

    2010-01-01

    It is observed that the number of Indonesia's domestic investors involved in the stock exchange is very small compared to the total population (only about 0.1%). As a result, the Indonesia Stock Exchange (IDX) is highly affected by foreign investors, which can threaten the economy. Domestic investors tend to invest in risk-free assets such as bank deposits, since they are not yet familiar with the stock market and are anxious about the risk (risk-averse type of investor). Therefore, it is i...

  4. Mean-variance portfolio analysis data for optimizing community-based photovoltaic investment.

    Science.gov (United States)

    Shakouri, Mahmoud; Lee, Hyun Woo

    2016-03-01

    The amount of electricity generated by Photovoltaic (PV) systems is affected by factors such as shading, building orientation and roof slope. To increase electricity generation and reduce volatility in the generation of PV systems, a portfolio of PV systems can be made which takes advantage of the potential synergy among neighboring buildings. This paper contains data supporting the research article entitled: PACPIM: new decision-support model of optimized portfolio analysis for community-based photovoltaic investment [1]. We present a set of data relating to the physical properties of 24 houses in Oregon, USA, along with simulated hourly electricity data for the installed PV systems. The developed Matlab code to construct optimized portfolios is also provided in Supplementary materials. The application of these files can be generalized to a variety of communities interested in investing in PV systems.
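    The data lend themselves to a textbook mean-variance computation. Below is a minimal sketch of unconstrained minimum-variance weights over simulated hourly outputs, assuming plain Markowitz analysis as a stand-in for the fuller PACPIM model the record refers to; real deployments would add non-negativity and budget constraints.

```python
import numpy as np

def min_variance_weights(outputs):
    """Unconstrained minimum-variance weights w = C^-1 1 / (1' C^-1 1).

    `outputs` has one column per PV system (e.g. hourly output of each
    house); the covariance captures how shading and orientation make
    some systems fluctuate together. Weights may be negative here.
    """
    cov = np.cov(outputs, rowvar=False)
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

# Hypothetical hourly generation for 4 rooftop systems (kW).
rng = np.random.default_rng(7)
base = rng.random((8760, 1))
outputs = base * np.array([1.0, 0.9, 1.2, 0.8]) + 0.1 * rng.random((8760, 4))
print("portfolio shares per system:", np.round(min_variance_weights(outputs), 3))
```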

  5. Stochastic Dominance and Mean-Variance Measures of Profit and Loss for Business Planning and Investment

    OpenAIRE

    Wing-Keung Wong

    2007-01-01

    In this paper, we first extend the stochastic dominance (SD) theory by introducing the first three orders of both ascending SD (ASD) and descending SD (DSD) to decisions in business planning and investment for risk-averse and risk-loving decision makers so that they can compare both return and loss. We provide investors with more tools for empirical analysis, with which they can identify the first order ASD and DSD prospects and discern arbitrage opportunities that could increase his/her util...

  6. Firefly Algorithm for Cardinality Constrained Mean-Variance Portfolio Optimization Problem with Entropy Diversity Constraint

    Directory of Open Access Journals (Sweden)

    Nebojsa Bacanin

    2014-01-01

    ... portfolio model with entropy constraint. The firefly algorithm is one of the latest and most successful swarm intelligence algorithms; however, it exhibits some deficiencies when applied to constrained problems. To overcome its lack of exploration power during early iterations, we modified the algorithm and tested it on standard portfolio benchmark data sets used in the literature. Our proposed modified firefly algorithm proved to be better than other state-of-the-art algorithms, while the introduction of the entropy diversity constraint further improved results.
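    For orientation, a bare-bones firefly iteration (Yang's scheme) applied to a toy mean-variance objective with an entropy diversity penalty is sketched below; the paper's early-iteration exploration fix and its cardinality-constraint handling are omitted, and all numbers are invented.

```python
import numpy as np

def firefly_minimize(f, dim, n=25, iters=200, beta0=1.0, gamma=1.0, alpha=0.2):
    """Basic firefly algorithm: each firefly moves toward every brighter
    (lower-cost) one with distance-decaying attractiveness plus a random
    step; the random step is annealed over the iterations."""
    rng = np.random.default_rng(0)
    x = rng.random((n, dim))
    cost = np.array([f(xi) for xi in x])
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:
                    beta = beta0 * np.exp(-gamma * np.sum((x[i] - x[j]) ** 2))
                    x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
                    cost[i] = f(x[i])
        alpha *= 0.98
    best = cost.argmin()
    return x[best], cost[best]

# Toy mean-variance objective with an entropy diversity bonus.
mu = np.array([0.08, 0.12, 0.10, 0.07])      # expected returns
C = np.diag([0.04, 0.09, 0.06, 0.03])        # covariance (diagonal toy)
def objective(w):
    w = np.abs(w) / (np.abs(w).sum() + 1e-12)    # normalize to a portfolio
    entropy = -np.sum(w * np.log(w + 1e-12))
    return w @ C @ w - 0.5 * (mu @ w) - 0.1 * entropy

w, val = firefly_minimize(objective, dim=4)
print(np.round(np.abs(w) / np.abs(w).sum(), 3), round(val, 4))
```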

  7. Life history traits and exploitation affect the spatial mean-variance relationship in fish abundance.

    Science.gov (United States)

    Kuo, Ting-chun; Mandal, Sandip; Yamauchi, Atsushi; Hsieh, Chih-hao

    2016-05-01

    Fishing is expected to alter the spatial heterogeneity of fishes. As an effective index to quantify spatial heterogeneity, the exponent b in Taylor's power law (V = aM^b) measures how spatial variance (V) varies with changes in mean abundance (M) of a population, with larger b indicating higher spatial aggregation potential (i.e., more heterogeneity). Theory predicts that b is related to life history traits, but empirical evidence is lacking. Using 50-yr spatiotemporal data from the California Current Ecosystem, we examined fishing and life history effects on Taylor's exponent by comparing spatial distributions of exploited and unexploited fishes living in the same environment. We found that unexploited species with smaller size and generation time exhibit larger b, supporting the theoretical prediction. In contrast, this relationship in exploited species is much weaker, as the exponents of large exploited species were higher than those of unexploited species with similar traits. Our results suggest that fishing may increase the spatial aggregation potential of a species, likely through degrading their size/age structure. Results of moving-window cross-correlation analyses on b vs. age structure indices (mean age and age evenness) for some exploited species corroborate our findings. Furthermore, by linking our findings to other fundamental ecological patterns (occupancy-abundance and size-abundance relationships), we provide theoretical arguments for the usefulness of monitoring the exponent b for management purposes. We propose that age/size-truncated species might have a lower recovery rate in spatial occupancy, and that the spatial variance-mass relationship of a species might be non-linear. Our findings provide a theoretical basis explaining why fishery management strategy should be concerned with changes to the age and spatial structure of exploited fishes.
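    Estimating the exponent reduces to a log-log regression of spatial variance on mean abundance. A minimal sketch with synthetic survey counts (the gamma model with shape 1 yields V roughly proportional to M^2, i.e., b near 2); the study's actual estimation on California Current data is more involved.

```python
import numpy as np

def taylor_exponent(abundance):
    """Estimate b in Taylor's power law V = a * M**b.

    `abundance` is a (years x sites) matrix of counts for one species;
    each year gives one (mean, variance) pair across sites, and b is
    the slope of log V on log M.
    """
    M = abundance.mean(axis=1)
    V = abundance.var(axis=1, ddof=1)
    b, _ = np.polyfit(np.log(M), np.log(V), 1)
    return b

# Hypothetical survey: 50 years x 30 stations.
rng = np.random.default_rng(3)
means = rng.uniform(5, 200, size=50)
counts = rng.gamma(shape=1.0, scale=means[:, None], size=(50, 30))
print(f"estimated b = {taylor_exponent(counts):.2f}")
```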

  8. Asymmetries in conditional mean variance: modelling stock returns by asMA-asQGARCH

    NARCIS (Netherlands)

    de Gooijer, J.G.; Brännäs, K.

    2004-01-01

    We propose a nonlinear time series model where both the conditional mean and the conditional variance are asymmetric functions of past information. The model is particularly useful for analysing financial time series where it has been noted that there is an asymmetric impact of good news and bad

  9. A characterization of optimal portfolios under the tail mean-variance criterion

    OpenAIRE

    Owadally, I.; Landsman, Z.

    2013-01-01

    The tail mean–variance model was recently introduced for use in risk management and portfolio choice; it involves a criterion that focuses on the risk of rare but large losses, which is particularly important when losses have heavy-tailed distributions. If returns or losses follow a multivariate elliptical distribution, the use of risk measures that satisfy certain well-known properties is equivalent to risk management in the classical mean–variance framework. The tail mean–variance criterion...

  10. Mean-variance portfolio analysis data for optimizing community-based photovoltaic investment

    Directory of Open Access Journals (Sweden)

    Mahmoud Shakouri

    2016-03-01

    Full Text Available The amount of electricity generated by Photovoltaic (PV) systems is affected by factors such as shading, building orientation and roof slope. To increase electricity generation and reduce volatility in the generation of PV systems, a portfolio of PV systems can be made which takes advantage of the potential synergy among neighboring buildings. This paper contains data supporting the research article entitled: PACPIM: new decision-support model of optimized portfolio analysis for community-based photovoltaic investment [1]. We present a set of data relating to the physical properties of 24 houses in Oregon, USA, along with simulated hourly electricity data for the installed PV systems. The developed Matlab code to construct optimized portfolios is also provided in Supplementary materials. The application of these files can be generalized to a variety of communities interested in investing in PV systems. Keywords: Community solar, Photovoltaic system, Portfolio theory, Energy optimization, Electricity volatility

  11. Research on regularized mean-variance portfolio selection strategy with modified Roy safety-first principle.

    Science.gov (United States)

    Atta Mills, Ebenezer Fiifi Emire; Yan, Dawen; Yu, Bo; Wei, Xinyuan

    2016-01-01

    We propose a consolidated risk measure based on variance and the safety-first principle in a mean-risk portfolio optimization framework. The safety-first principle for financial portfolio selection is modified and improved. Our proposed models are subjected to norm regularization to seek near-optimal, stable and sparse portfolios. We compare the cumulative wealth of our preferred proposed model to a benchmark, the S&P 500 index, over the same period. Our proposed portfolio strategies have better out-of-sample performance than the selected alternative portfolio rules in the literature and control the downside risk of the portfolio returns.

  12. Some properties of the smoothed Wigner function

    International Nuclear Information System (INIS)

    Soto, F.; Claverie, P.

    1981-01-01

    Recently a modification of the Wigner function has been proposed which consists in smoothing it by convolution with a phase-space Gaussian function; this smoothed Wigner function is non-negative if the Gaussian parameters Δ and δ satisfy the condition Δδ > h/2π. We analyze in this paper the predictions of this modified Wigner function for the harmonic oscillator, for the anharmonic oscillator and finally for the hydrogen atom. We find agreement with experiment in the linear case, but for strongly nonlinear systems, such as the hydrogen atom, the results obtained are completely wrong. (orig.)

  13. Cardiac, Skeletal, and smooth muscle mitochondrial respiration

    DEFF Research Database (Denmark)

    Park, Song-Young; Gifford, Jayson R; Andtbacka, Robert H I

    2014-01-01

    Cardiac, skeletal, and smooth muscle was harvested from a total of 22 subjects (53±6 yrs) and mitochondrial respiration assessed in permeabilized fibers. Complex I+II, state 3 respiration, an index of oxidative phosphorylation capacity, fell progressively from cardiac to skeletal to smooth muscle (54±1, 39±4, and 15±1 pmol·s⁻¹·mg⁻¹, respectively). When respiration rates were normalized by CS activity (respiration per mitochondrial content), oxidative phosphorylation capacity was no longer different between the three muscle types. Interestingly, Complex I state 2 respiration normalized for CS activity, an index of non-phosphorylating respiration per mitochondrial content, increased progressively from cardiac to skeletal ...

  14. Smooth massless limit of field theories

    International Nuclear Information System (INIS)

    Fronsdal, C.

    1980-01-01

    The massless limit of Fierz-Pauli field theories, describing fields with fixed mass and spin interacting with external sources, is examined. Results are obtained for spins 1, 3/2, 2 and 3 using conventional models, and then for all half-integral spins in a relatively model-independent manner. It is found that the massless limit is smooth provided that the sources satisfy certain conditions. In the massless limit these conditions reduce to the conservation laws required by the internal consistency of massless field theory. Smoothness simply requires that quantities that vanish in the massless case approach zero in a certain well-defined manner. (orig.)

  15. Mapping of Agricultural Crops from Single High-Resolution Multispectral Images—Data-Driven Smoothing vs. Parcel-Based Smoothing

    Directory of Open Access Journals (Sweden)

    Asli Ozdarici-Ok

    2015-05-01

    Full Text Available Mapping agricultural crops is an important application of remote sensing. However, in many cases it is based either on hyperspectral imagery or on multitemporal coverage, both of which are difficult to scale up to large-scale deployment at high spatial resolution. In the present paper, we evaluate the possibility of crop classification based on single images from very high-resolution (VHR) satellite sensors. The main objective of this work is to expose the as-yet-unknown performance difference between state-of-the-art parcel-based smoothing and purely data-driven conditional random field (CRF) smoothing. To fulfill this objective, we perform extensive tests with four different classification methods (Support Vector Machines, Random Forest, Gaussian Mixtures, and Maximum Likelihood) to compute the pixel-wise data term, and we also test two different definitions of the pairwise smoothness term. We have performed a detailed evaluation on different multispectral VHR images (Ikonos, QuickBird, Kompsat-2). The main finding of this study is that pairwise CRF smoothing comes close to the state-of-the-art parcel-based method that requires parcel boundaries (average difference ≈ 2.5%). Our results indicate that a single multispectral (R, G, B, NIR) image is enough to reach satisfactory classification accuracy for six crop classes (corn, pasture, rice, sugar beet, wheat, and tomato) in a Mediterranean climate. Overall, it appears that crop mapping using only one-shot VHR imagery taken at the right time may be a viable alternative, especially since high-resolution multitemporal or hyperspectral coverage as well as parcel boundaries are in practice often not available.

  16. ON THE DERIVATIVE OF SMOOTH MEANINGFUL FUNCTIONS

    Directory of Open Access Journals (Sweden)

    Sanjo Zlobec

    2011-02-01

    Full Text Available The derivative of a function f in n variables at a point x* is one of the most important tools in mathematical modelling. If this object exists, it is represented by the row n-tuple ∇f(x*) = [∂f/∂xi(x*)], called the gradient of f at x*, abbreviated: "the gradient". The evaluation of ∇f(x*) is usually done in two stages, first by calculating the n partials and then their values at x = x*. In this talk we give an alternative approach. We show that one can characterize the gradient without differentiation! The idea is to fix an arbitrary row n-tuple G and answer the following question: What is a necessary and sufficient condition such that G is the gradient of a given f at a given x*? The answer is given after adjusting the quadratic envelope property introduced in [3]. We work with smooth, i.e., continuously differentiable, functions with a Lipschitz derivative on a compact convex set with a non-empty interior. Working with this class of functions is not a serious restriction. In fact, loosely speaking, "almost all" smooth meaningful functions used in modelling of real life situations are expected to have a bounded "acceleration", hence they belong to this class. In particular, the class contains all twice differentiable functions [1]. An important property of the functions from this class is that every f can be represented as the difference of some convex function and a convex quadratic function. This decomposition was used in [3] to characterize the zero derivative points. There we obtained reformulations and augmentations of some well known classic results on optimality such as Fermat's extreme value theorem (known from high school) and the Lagrange multiplier theorem from calculus [2, 3]. In this talk we extend the results on zero derivative points to characterize the relation G = ∇f(x*), where G is an arbitrary n-tuple. Some special cases: If G = 0, we recover the results on zero derivative points. For functions of a single

  17. 16-dimensional smooth projective planes with large collineation groups

    OpenAIRE

    Bödi, Richard

    1998-01-01

    Smooth projective planes are projective planes defined on smooth manifolds (i.e. the set of points and the set of lines are smooth manifolds) such that the geometric operations of join and intersection are smooth. A systematic study of such planes and of their collineation groups can be found in previous works of the author. We prove in this paper that a 16-dimensional smooth projective plane which admits a ...

  18. Experimental model of human corpus cavernosum smooth muscle relaxation

    Directory of Open Access Journals (Sweden)

    Rommel P. Regadas

    2010-08-01

    Full Text Available PURPOSE: To describe a technique for en bloc harvesting of the corpus cavernosum, cavernous artery and urethra from transplant organ donors, and contraction-relaxation experiments with corpus cavernosum smooth muscle. MATERIALS AND METHODS: The corpus cavernosum was dissected to the point of attachment with the crus penis. A 3 cm segment (corpus cavernosum and urethra) was isolated and placed in ice-cold sterile transportation buffer. Under magnification, the cavernous artery was dissected. Thus, 2 cm fragments of cavernous artery and corpus cavernosum were obtained. Strips measuring 3 × 3 × 8 mm³ were then mounted vertically in an isolated organ bath device. Contractions were measured isometrically with a Narco-Biosystems force displacement transducer (model F-60, Narco-Biosystems, Houston, TX, USA) and recorded on a 4-channel Narco-Biosystems desk model polygraph. RESULTS: Phenylephrine (1 µM) was used to induce tonic contractions in the corpus cavernosum (3-5 g tension) and cavernous artery (0.5-1 g tension) until reaching a plateau. After precontraction, smooth muscle relaxants were used to produce relaxation-response curves (10⁻¹² M to 10⁻⁴ M). Sodium nitroprusside was used as a relaxation control. CONCLUSION: The harvesting technique and the smooth muscle contraction-relaxation model described in this study were shown to be useful instruments in the search for new drugs for the treatment of human erectile dysfunction.

  19. Ensemble Kalman filtering with one-step-ahead smoothing

    KAUST Repository

    Raboudi, Naila F.

    2018-01-11

    The ensemble Kalman filter (EnKF) is widely used for sequential data assimilation. It operates as a succession of forecast and analysis steps. In realistic large-scale applications, EnKFs are implemented with small ensembles and poorly known model error statistics. This limits their representativeness of the background error covariances and, thus, their performance. This work explores the efficiency of the one-step-ahead (OSA) smoothing formulation of the Bayesian filtering problem to enhance the data assimilation performance of EnKFs. Filtering with OSA smoothing introduces an update step with future observations, conditioning the ensemble sampling on more information. This should provide an improved background ensemble in the analysis step, which may help to mitigate the suboptimal character of EnKF-based methods. Here, the authors demonstrate the efficiency of a stochastic EnKF with OSA smoothing for state estimation. They then introduce a deterministic-like EnKF-OSA based on the singular evolutive interpolated ensemble Kalman (SEIK) filter. The authors show that the proposed SEIK-OSA outperforms both SEIK, as it efficiently exploits the data twice, and the stochastic EnKF-OSA, as it avoids observational error undersampling. They present extensive assimilation results from numerical experiments conducted with the Lorenz-96 model to demonstrate SEIK-OSA's capabilities.
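    For reference, the standard stochastic EnKF analysis step with perturbed observations is sketched below; the OSA-smoothing variant described in the record reuses each observation in an additional smoothing pass before propagation, which this sketch omits.

```python
import numpy as np

def enkf_analysis(ens, y, H, R, rng):
    """One stochastic EnKF analysis step with perturbed observations.

    ens: (n_state, n_ens) forecast ensemble; y: observation vector;
    H: linear observation operator; R: observation error covariance.
    """
    n_state, n_ens = ens.shape
    A = ens - ens.mean(axis=1, keepdims=True)     # ensemble anomalies
    HA = H @ A
    P_yy = HA @ HA.T / (n_ens - 1) + R            # innovation covariance
    P_xy = A @ HA.T / (n_ens - 1)                 # state-obs covariance
    K = P_xy @ np.linalg.inv(P_yy)                # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(
        np.zeros(len(y)), R, size=n_ens).T        # perturbed observations
    return ens + K @ (Y - H @ ens)

rng = np.random.default_rng(5)
ens = rng.standard_normal((3, 20)) + np.array([[1.0], [2.0], [0.5]])
H = np.array([[1.0, 0.0, 0.0]])                   # observe first component
R = np.array([[0.1]])
post = enkf_analysis(ens, np.array([1.4]), H, R, rng)
print(post.mean(axis=1))
```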

  20. Smoothness in Banach spaces. Selected problems

    Czech Academy of Sciences Publication Activity Database

    Fabian, Marián; Montesinos, V.; Zizler, Václav

    2006-01-01

    Vol. 100, No. 2 (2006), pp. 101-125. ISSN 1578-7303. R&D Projects: GA ČR(CZ) GA201/04/0090; GA AV ČR(CZ) IAA100190610. Institutional research plan: CEZ:AV0Z10190503. Keywords: smooth norm * renorming * weakly compactly generated space. Subject RIV: BA - General Mathematics

  1. The Koch curve as a smooth manifold

    International Nuclear Information System (INIS)

    Epstein, Marcelo; Sniatycki, Jedrzej

    2008-01-01

    We show that there exists a homeomorphism between the closed interval [0,1] ⊂ ℝ and the Koch curve endowed with the subset topology of ℝ². We use this homeomorphism to endow the Koch curve with the structure of a smooth manifold with boundary.

  2. on Isolated Smooth Muscle Preparation in Rats

    African Journals Online (AJOL)

    Samuel Olaleye

    ABSTRACT. This study investigated the receptor effects of a methanolic root extract of ... Phytochemical Analysis: Phytochemistry of the methanolic extract was ... mounted with resting tension 0.5 g in an organ bath containing ... Effects of extracellular free Ca²⁺ and 0.5 mM ... isolated smooth muscle by high K⁺ on the other.

  3. PHANTOM: Smoothed particle hydrodynamics and magnetohydrodynamics code

    Science.gov (United States)

    Price, Daniel J.; Wurster, James; Nixon, Chris; Tricco, Terrence S.; Toupin, Stéven; Pettitt, Alex; Chan, Conrad; Laibe, Guillaume; Glover, Simon; Dobbs, Clare; Nealon, Rebecca; Liptai, David; Worpel, Hauke; Bonnerot, Clément; Dipierro, Giovanni; Ragusa, Enrico; Federrath, Christoph; Iaconi, Roberto; Reichardt, Thomas; Forgan, Duncan; Hutchison, Mark; Constantino, Thomas; Ayliffe, Ben; Mentiplay, Daniel; Hirsh, Kieran; Lodato, Giuseppe

    2017-09-01

    Phantom is a smoothed particle hydrodynamics and magnetohydrodynamics code focused on stellar, galactic, planetary, and high energy astrophysics. It is modular, and handles sink particles, self-gravity, two fluid and one fluid dust, ISM chemistry and cooling, physical viscosity, non-ideal MHD, and more. Its modular structure makes it easy to add new physics to the code.
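    SPH smoothing in such codes is built on a compact kernel. The sketch below evaluates the widely used M4 cubic spline kernel in 3-D and a smoothed density estimate; that Phantom uses exactly this kernel and normalization is an assumption here, not a statement about the code.

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """M4 cubic spline SPH kernel in 3-D, W(r, h) with q = r/h:

        W = (1/(pi h^3)) * { 1 - 1.5 q^2 + 0.75 q^3   for 0 <= q < 1
                             0.25 (2 - q)^3           for 1 <= q < 2
                             0                        for q >= 2 }
    """
    q = np.asarray(r, dtype=float) / h
    sigma = 1.0 / (np.pi * h ** 3)
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))
    return sigma * w

# Smoothed density estimate at one point from neighbouring particles.
pos = np.random.default_rng(2).random((100, 3))   # particles in unit cube
masses = np.full(100, 1e-3)
r = np.linalg.norm(pos - np.array([0.5, 0.5, 0.5]), axis=1)
rho = np.sum(masses * cubic_spline_kernel(r, h=0.2))
print(f"rho ~ {rho:.3f}")
```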

  4. Data driven smooth tests for composite hypotheses

    NARCIS (Netherlands)

    Inglot, Tadeusz; Kallenberg, Wilbert C.M.; Ledwina, Teresa

    1997-01-01

    The classical problem of testing goodness-of-fit of a parametric family is reconsidered. A new test for this problem is proposed and investigated. The new test statistic is a combination of the smooth test statistic and Schwarz's selection rule. More precisely, as the sample size increases, an

  5. On the theory of smooth structures. 2

    International Nuclear Information System (INIS)

    Shafei Deh Abad, A.

    1992-09-01

    In this paper we continue by introducing the concepts of substructures, quotient structures and tensor products, and examine some of their properties. By using the concept of tensor product, in the next paper we will give another product for smooth structures, which provides a characterization of integral domains that are not fields. (author). 2 refs

  6. Supplementary speed control for wind power smoothing

    NARCIS (Netherlands)

    Haan, de J.E.S.; Frunt, J.; Kechroud, A.; Kling, W.L.

    2010-01-01

    Wind fluctuations result in even larger wind power fluctuations because the power of wind is proportional to the cube of the wind speed. This report analyzes wind power fluctuations to investigate inertial power smoothing, in particular for the frequency range of 0.08 - 0.5 Hz. Due to the growing

  7. Arc-based smoothing of ion beam intensity on targets

    International Nuclear Information System (INIS)

    Friedman, Alex

    2012-01-01

    By manipulating a set of ion beams upstream of a target, it is possible to arrange for a smoother deposition pattern, so as to achieve more uniform illumination of the target. A uniform energy deposition pattern is important for applications including ion-beam-driven high energy density physics and heavy-ion beam-driven inertial fusion energy (“heavy-ion fusion”). Here, we consider an approach to such smoothing that is based on rapidly “wobbling” each of the beams back and forth along a short arc-shaped path, via oscillating fields applied upstream of the final pulse compression. In this technique, uniformity is achieved in the time-averaged sense; this is sufficient provided the beam oscillation timescale is short relative to the hydrodynamic timescale of the target implosion. This work builds on two earlier concepts: elliptical beams applied to a distributed-radiator target [D. A. Callahan and M. Tabak, Phys. Plasmas 7, 2083 (2000)] and beams that are wobbled so as to trace a number of full rotations around a circular or elliptical path [R. C. Arnold et al., Nucl. Instrum. Methods 199, 557 (1982)]. Here, we describe the arc-based smoothing approach and compare it to results obtainable using an elliptical-beam prescription. In particular, we assess the potential of these approaches for minimization of azimuthal asymmetry, for the case of a ring of beams arranged on a cone. It is found that, for small numbers of beams on the ring, the arc-based smoothing approach offers superior uniformity. In contrast with the full-rotation approach, arc-based smoothing remains usable when the geometry precludes wobbling the beams around a full circle, e.g., for the X-target [E. Henestroza, B. G. Logan, and L. J. Perkins, Phys. Plasmas 18, 032702 (2011)] and some classes of distributed-radiator targets.

  8. A smooth generalized Newton method for a class of non-smooth equations

    International Nuclear Information System (INIS)

    Uko, L. U.

    1995-10-01

    This paper presents a Newton-type iterative scheme for finding the zero of the sum of a differentiable function and a multivalued maximal monotone function. Local and semi-local convergence results are proved for the Newton scheme, and an analogue of the Kantorovich theorem is proved for the associated modified scheme that uses only one Jacobian evaluation for the entire iteration. Applications in variational inequalities are discussed, and an illustrative numerical example is given. (author). 24 refs
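    A one-dimensional illustration of the scheme's structure, assuming the multivalued term is the normal cone of an interval so that its resolvent is a projection; the paper's setting (general maximal monotone operators, semi-local convergence theory) is far broader.

```python
def generalized_newton(f, df, resolvent, x0, tol=1e-10, max_iter=50):
    """Newton-type iteration for 0 in f(x) + T(x), T maximal monotone.

    Each step linearizes only the smooth part f and resolves the
    monotone part exactly; for T = normal cone of a set, the resolvent
    is the projection onto that set. A 1-D sketch of the idea only.
    """
    x = x0
    for _ in range(max_iter):
        x_new = resolvent(x - f(x) / df(x))   # Newton step, then resolve T
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: 0 in (x - 2) + N_[0,1](x); the resolvent of the normal cone
# N_[0,1] is projection onto [0,1], so the zero is x* = 1.
f = lambda x: x - 2.0
df = lambda x: 1.0
proj = lambda z: min(max(z, 0.0), 1.0)
print(generalized_newton(f, df, proj, x0=0.3))    # -> 1.0
```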

  9. Suspension system vibration analysis with regard to variable type ability to smooth road irregularities

    Science.gov (United States)

    Rykov, S. P.; Rykova, O. A.; Koval, V. S.; Makhno, D. E.; Fedotov, K. V.

    2018-03-01

    The paper aims to analyze vibrations of the dynamic-system equivalent of the suspension system with regard to the tyre's ability to smooth road irregularities. The research is based on the statistical dynamics of linear automatic control systems and on methods of correlation, spectral and numerical analysis. Introducing new data on the smoothing effect of the pneumatic tyre, which reflects changes in the contact area between the wheel and the road as the suspension vibrates, makes the system non-linear, which requires the use of numerical analysis methods. By taking the variable smoothing ability of the tyre into account when calculating suspension vibrations, one can bring calculated results closer to experimental ones and improve on the assumption of a constant smoothing ability of the tyre.

  10. Role of Smooth Muscle in Intestinal Inflammation

    Directory of Open Access Journals (Sweden)

    Stephen M Collins

    1996-01-01

    Full Text Available The notion that smooth muscle function is altered in inflammation is prompted by clinical observations of altered motility in patients with inflammatory bowel disease (IBD. While altered motility may reflect inflammation-induced changes in intrinsic or extrinsic nerves to the gut, changes in gut hormone release and changes in muscle function, recent studies have provided in vitro evidence of altered muscle contractility in muscle resected from patients with ulcerative colitis or Crohn’s disease. In addition, the observation that smooth muscle cells are more numerous and prominent in the strictured bowel of IBD patients compared with controls suggests that inflammation may alter the growth of intestinal smooth muscle. Thus, inflammation is associated with changes in smooth muscle growth and contractility that, in turn, contribute to important symptoms of IBD including diarrhea (from altered motility and pain (via either altered motility or stricture formation. The involvement of smooth muscle in this context may be as an innocent bystander, where cells and products of the inflammatory process induce alterations in muscle contractility and growth. However, it is likely that intestinal muscle cells play a more active role in the inflammatory process via the elaboration of mediators and trophic factors, including cytokines, and via the production of collagen. The concept of muscle cells as active participants in the intestinal inflammatory process is a new concept that is under intense study. This report summarizes current knowledge as it relates to these two aspects of altered muscle function (growth and contractility in the inflamed intestine, and will focus on mechanisms underlying these changes, based on data obtained from animal models of intestinal inflammation.

  11. Smoothing a Piecewise-Smooth: An Example from Plankton Population Dynamics

    DEFF Research Database (Denmark)

    Piltz, Sofia Helena

    2016-01-01

    In this work we discuss a piecewise-smooth dynamical system inspired by plankton observations and constructed for one predator switching its diet between two different types of prey. We then discuss two smooth formulations of the piecewise-smooth model, obtained by using a hyperbolic tangent function and by adding a dimension to the system. We compare the model behaviour of the three systems and show an example case where the steepness of the switch is determined from a comparison with data on freshwater plankton.
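    The hyperbolic tangent formulation is compact enough to state directly. A minimal sketch, with an invented switching condition standing in for the paper's predator diet switch:

```python
import numpy as np

def switch_sharp(p1, p2):
    """Piecewise-smooth diet switch: the predator takes whichever prey
    is more abundant (a stand-in for the paper's switching condition)."""
    return 1.0 if p1 >= p2 else 0.0

def switch_tanh(p1, p2, k):
    """Smooth formulation: a hyperbolic tangent replaces the jump.
    k sets the steepness; k -> infinity recovers the sharp switch."""
    return 0.5 * (1.0 + np.tanh(k * (p1 - p2)))

for k in (1, 10, 100):
    print(k, round(switch_tanh(0.55, 0.5, k), 3), switch_sharp(0.55, 0.5))
```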

  12. Smooth-arm spiral galaxies: their properties and significance to cluster-galaxy evolution

    International Nuclear Information System (INIS)

    Wilkerson, M.S.

    1979-01-01

    In this dissertation a number of galaxies with optical appearances between those of normal, actively-star-forming spirals and S0 galaxies have been examined. These so-called smooth-arm spiral galaxies exhibit spiral arms without any of the spiral tracers - H II regions, O-B star associations, dust - indicative of current star formation. Tests were made to find whether these smooth-arm spirals could have, at one time, been normal, actively-star-forming spirals whose gas had somehow been removed, and that are currently transforming into S0 galaxies. This scenario proceeds as (1) removal of gas, (2) gradual dying of the disk density wave, (3) emergence of an S0 galaxy. If the dominant method of gas removal is ram-pressure stripping by a hot intracluster medium, then smooth-arm spirals should occur primarily in x-ray clusters. Some major findings of this dissertation are as follows: (1) Smooth-arm spirals are redder than normal spirals of the same morphological type. Most smooth-arm spirals cannot be distinguished by color from S0 galaxies. (2) A weak trend exists for smooth-arm spirals with stronger arms to be bluer than those with weaker arms, implying that the interval since gas removal has been shorter for the galaxies with stronger arms. (3) Smooth-arm spirals are deficient in neutral hydrogen - sometimes by an order of magnitude or, possibly, more.

  13. Poster - 52: Smoothing constraints in Modulated Photon Radiotherapy (XMRT) fluence map optimization

    International Nuclear Information System (INIS)

    McGeachy, Philip; Villarreal-Barajas, Jose Eduardo; Zinchenko, Yuriy; Khan, Rao

    2016-01-01

    Purpose: Modulated Photon Radiotherapy (XMRT), which simultaneously optimizes photon beamlet energy (6 and 18 MV) and fluence, has recently shown dosimetric improvement in comparison to conventional IMRT. That said, the degree of smoothness of the resulting fluence maps (FMs) has yet to be investigated and could impact the deliverability of XMRT. This study investigates FM smoothness and imposes smoothing constraints in the fluence map optimization. Methods: Smoothing constraints were modeled in the XMRT algorithm with the sum of positive gradient (SPG) technique. XMRT solutions, with and without SPG constraints, were generated for a clinical prostate scan using standard dosimetric prescriptions, constraints, and a seven-beam coplanar arrangement. The smoothness, with and without SPG constraints, was assessed by looking at the absolute and relative maximum SPG scores for each fluence map. Dose volume histograms were utilized when evaluating the impact on the dose distribution. Results: Imposing SPG constraints reduced the absolute and relative maximum SPG values by factors of up to 5 and 2, respectively, when compared with their non-SPG-constrained counterparts. This leads to a more seamless conversion of FMs to their respective MLC sequences. The improved smoothness resulted in an increase in organ-at-risk (OAR) dose; however, the increase is not clinically significant. Conclusions: For a clinical prostate case, there was a noticeable improvement in the smoothness of the XMRT FMs when SPG constraints were applied, with a minor increase in dose to OARs. This increase in OAR dose is not clinically meaningful.

  14. Poster - 52: Smoothing constraints in Modulated Photon Radiotherapy (XMRT) fluence map optimization

    Energy Technology Data Exchange (ETDEWEB)

    McGeachy, Philip; Villarreal-Barajas, Jose Eduardo; Zinchenko, Yuriy; Khan, Rao [Department of Medical Physics, CancerCare Manitoba, Winnipeg, MB, CAN, Department of Physics and Astronomy, University of Calgary, Calgary, AB, CAN, Department of Mathematics and Statistics, University of Calgary, Calgary, AB, CAN, Department of Radiation Oncology, Washington University School of Medicine, St Louis, MO (United States)

    2016-08-15

    Purpose: Modulated Photon Radiotherapy (XMRT), which simultaneously optimizes photon beamlet energy (6 and 18 MV) and fluence, has recently shown dosimetric improvement in comparison to conventional IMRT. That said, the degree of smoothness of the resulting fluence maps (FMs) has yet to be investigated and could impact the deliverability of XMRT. This study investigates FM smoothness and imposes smoothing constraints in the fluence map optimization. Methods: Smoothing constraints were modeled in the XMRT algorithm with the sum of positive gradient (SPG) technique. XMRT solutions, with and without SPG constraints, were generated for a clinical prostate scan using standard dosimetric prescriptions, constraints, and a seven-beam coplanar arrangement. The smoothness, with and without SPG constraints, was assessed by looking at the absolute and relative maximum SPG scores for each fluence map. Dose volume histograms were utilized when evaluating the impact on the dose distribution. Results: Imposing SPG constraints reduced the absolute and relative maximum SPG values by factors of up to 5 and 2, respectively, when compared with their non-SPG-constrained counterparts. This leads to a more seamless conversion of FMs to their respective MLC sequences. The improved smoothness resulted in an increase in organ-at-risk (OAR) dose; however, the increase is not clinically significant. Conclusions: For a clinical prostate case, there was a noticeable improvement in the smoothness of the XMRT FMs when SPG constraints were applied, with a minor increase in dose to OARs. This increase in OAR dose is not clinically meaningful.

  15. Numerical Non-Equilibrium and Smoothing of Solutions in The Difference Method for Plane 2-Dimensional Adhesive Joints / Nierównowaga Numeryczna i Wygładzanie Rozwiazań w Metodzie Różnicowej Dla Dwuwymiarowych Połączeń Klejowych

    Directory of Open Access Journals (Sweden)

    Rapp Piotr

    2016-03-01

    Full Text Available The subject of the paper is related to problems with numerical errors in the finite difference method used to solve the equations of the theory of elasticity describing 2-dimensional adhesive joints in the plane stress state. Adhesive joints are described in terms of displacements by four elliptic partial differential equations of the second order with static and kinematic boundary conditions. If the adhesive joint is constrained as a statically determinate body and is loaded by a self-equilibrated loading, the finite difference solution is sensitive to the kinematic boundary conditions. Displacements computed at the constraints are not exactly zero. Thus, the solution features a numerical error, as if the adhesive joint were not in equilibrium. Here this phenomenon is called numerical non-equilibrium. The disturbances in the displacement and stress distributions can be decreased or eliminated by a correction of the loading acting on the adhesive joint or by smoothing of the solutions based on the Dirichlet boundary value problem.

  16. Fuzzy Logic Based Edge Detection in Smooth and Noisy Clinical Images.

    Directory of Open Access Journals (Sweden)

    Izhar Haq

    Full Text Available Edge detection has beneficial applications in fields such as machine vision, pattern recognition and biomedical imaging. Edge detection highlights high-frequency components in the image. Edge detection is a challenging task, and it becomes more arduous when it comes to noisy images. This study focuses on fuzzy-logic-based edge detection in smooth and noisy clinical images. The proposed method (in noisy images) employs a 3 × 3 mask guided by a fuzzy rule set. Moreover, in the case of smooth clinical images, an extra contrast-adjustment mask is integrated with the edge detection mask to intensify the smooth images. The developed method was tested on noise-free, smooth and noisy images. The results were compared with other established edge detection techniques such as Sobel, Prewitt, Laplacian of Gaussian (LOG), Roberts and Canny. When the developed edge detection technique was applied to a smooth clinical image of size 270 × 290 pixels with 24 dB 'salt and pepper' noise, it detected very few (22) false edge pixels, compared to Sobel (1931), Prewitt (2741), LOG (3102), Roberts (1451) and Canny (1045). Therefore it is evident that the developed method offers an improved solution to the edge detection problem in smooth and noisy clinical images.

  17. A Smoothed Finite Element-Based Elasticity Model for Soft Bodies

    Directory of Open Access Journals (Sweden)

    Juan Zhang

    2017-01-01

    Full Text Available One of the major challenges in mesh-based deformation simulation in computer graphics is dealing with mesh distortion. In this paper, we present a novel mesh-insensitive and softer method for simulating deformable solid bodies under the assumptions of linear elastic mechanics. A face-based strain smoothing method is adopted to alleviate mesh distortion instead of the traditional spatial adaptive smoothing method. We then propose a way to combine the strain smoothing method and the corotational method. With this approach, the amplitude and frequency of transient displacements are only slightly affected by a distorted mesh. Realistic simulation results are generated under large rotation using a linear elasticity model without adding significant complexity or computational cost to the standard corotational FEM. Meanwhile, a softening effect is a by-product of our method.

  18. Smoothing of respiratory motion traces for motion-compensated radiotherapy

    International Nuclear Information System (INIS)

    Ernst, Floris; Schlaefer, Alexander; Schweikard, Achim

    2010-01-01

    Purpose: The CyberKnife system has been used successfully for several years to radiosurgically treat tumors without the need for stereotactic fixation or sedation of the patient. It has been shown that tumor motion in the lung, liver, and pancreas can be tracked with acceptable accuracy and repeatability. However, highly precise targeting for tumors in the lower abdomen, especially for tumors which exhibit strong motion, remains problematic. Reasons for this are manifold, like the slow tracking system operating at 26.5 Hz, and using the signal from the tracking camera "as is". Since the motion recorded with the camera is used to compensate for system latency by prediction, and the predicted signal is subsequently used to infer the tumor position from a correlation model based on x-ray imaging of gold fiducials around the tumor, camera noise directly influences the targeting accuracy. The goal of this work is to establish the suitability of a new smoothing method for respiratory motion traces used in motion-compensated radiotherapy. The authors endeavor to show that better prediction (with a lower rms error of the predicted signal) and/or smoother prediction is possible using this method. Methods: The authors evaluated six commercially available tracking systems (NDI Aurora, PolarisClassic, Polaris Vicra, MicronTracker2 H40, FP5000, and accuTrack compact). The authors first tracked markers both stationary and while in motion to establish the systems' noise characteristics. Then the authors applied a smoothing method based on the a trous wavelet decomposition to reduce the devices' noise level. Additionally, the smoothed signal of the moving target and a motion trace from actual human respiratory motion were subjected to prediction using the MULIN and the nLMS 2 algorithms. Results: The authors established that the noise distribution for a static target is Gaussian and that when the probe is moved such as to mimic human respiration, it remains Gaussian with the
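    The a trous (stationary) wavelet smoothing named in the record has a short classical implementation: convolve repeatedly with an upsampled B3-spline kernel and keep the coarsest approximation. The sketch below is a simplified version with an invented breathing-like trace; choices such as the number of levels are assumptions.

```python
import numpy as np

def a_trous_smooth(signal, levels):
    """Smooth a 1-D trace with the a trous (stationary) wavelet scheme.

    At each level the B3-spline kernel [1,4,6,4,1]/16 is applied with
    2**level - 1 zeros inserted between taps (the 'holes'). Returning
    the coarsest approximation discards the fine (noise) scales.
    """
    base = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    approx = np.asarray(signal, dtype=float)
    for level in range(levels):
        step = 2 ** level
        kernel = np.zeros((len(base) - 1) * step + 1)
        kernel[::step] = base                     # insert the holes
        approx = np.convolve(approx, kernel, mode="same")
    return approx

# Noisy breathing-like trace sampled at 26 Hz.
t = np.arange(0, 20, 1 / 26)
rng = np.random.default_rng(4)
trace = np.sin(2 * np.pi * 0.25 * t) + 0.1 * rng.standard_normal(len(t))
print(np.std(trace - a_trous_smooth(trace, levels=3)))
```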

  19. On smoothness-asymmetric null infinities

    International Nuclear Information System (INIS)

    Valiente Kroon, Juan Antonio

    2006-01-01

    We discuss the existence of asymptotically Euclidean initial data sets for the vacuum Einstein field equations which would give rise (modulo an existence result for the evolution equations near spatial infinity) to developments with a past and a future null infinity of different smoothness. For simplicity, the analysis is restricted to the class of conformally flat, axially symmetric initial data sets. It is shown how the free parameters in the second fundamental form of the data can be used to satisfy certain obstructions to the smoothness of null infinity. The resulting initial data sets could be interpreted as those of some sort of (nonlinearly) distorted Schwarzschild black hole. Their developments would be that they admit a peeling future null infinity, but at the same time have a polyhomogeneous (non-peeling) past null infinity

  20. Does responsive pricing smooth demand shocks?

    OpenAIRE

    Pascal, Courty; Mario, Pagliero

    2011-01-01

    Using data from a unique pricing experiment, we investigate Vickrey’s conjecture that responsive pricing can be used to smooth both predictable and unpredictable demand shocks. Our evidence shows that increasing the responsiveness of price to demand conditions reduces the magnitude of deviations in capacity utilization rates from a pre-determined target level. A 10 percent increase in price variability leads to a decrease in the variability of capacity utilization rates between...

  1. The Smooth Muscle of the Artery

    Science.gov (United States)

    1975-01-01

    ... of vascular smooth muscle are contraction, thereby mediating vasoconstriction, and the synthesis of the extracellular proteins and polysaccharides ... of the monosaccharides turned out to be different, for instance, from cornea to aorta (229, 283). In the conditions employed (4 hours incubation at 37 degrees ... polysaccharides only. This glycoprotein is not very rich in sugar components (~5%) (228, 284), but is a very acidic protein (286). Fig. 66 shows ...

  2. An approach for spherical harmonic analysis of non-smooth data

    Science.gov (United States)

    Wang, Hansheng; Wu, Patrick; Wang, Zhiyong

    2006-12-01

    A method is proposed to evaluate the spherical harmonic coefficients of a global or regional, non-smooth, observable dataset sampled on an equiangular grid. The method is based on an integration strategy using new recursion relations. Because a bilinear function is used to interpolate points within the grid cells, this method is suitable for non-smooth data; the slope of the data may be piecewise continuous, with extreme changes at the boundaries. In order to validate the method, the coefficients of an axisymmetric model are computed, and compared with the derived analytical expressions. Numerical results show that this method is indeed reasonable for non-smooth models, and that the maximum degree for spherical harmonic analysis should be empirically determined by several factors including the model resolution and the degree of non-smoothness in the dataset, and it can be several times larger than the total number of latitudinal grid points. It is also shown that this method is appropriate for the approximate analysis of a smooth dataset. Moreover, this paper provides the program flowchart and an internet address where the FORTRAN code with program specifications are made available.

  3. Log canonical thresholds of smooth Fano threefolds

    International Nuclear Information System (INIS)

    Cheltsov, Ivan A; Shramov, Konstantin A

    2008-01-01

    The complex singularity exponent is a local invariant of a holomorphic function determined by the integrability of fractional powers of the function. The log canonical thresholds of effective Q-divisors on normal algebraic varieties are algebraic counterparts of complex singularity exponents. For a Fano variety, these invariants have global analogues. In the former case, it is the so-called α-invariant of Tian; in the latter case, it is the global log canonical threshold of the Fano variety, which is the infimum of log canonical thresholds of all effective Q-divisors numerically equivalent to the anticanonical divisor. An appendix to this paper contains a proof that the global log canonical threshold of a smooth Fano variety coincides with its α-invariant of Tian. The purpose of the paper is to compute the global log canonical thresholds of smooth Fano threefolds (altogether, there are 105 deformation families of such threefolds). The global log canonical thresholds are computed for every smooth threefold in 64 deformation families, and the global log canonical thresholds are computed for a general threefold in 20 deformation families. Some bounds for the global log canonical thresholds are computed for 14 deformation families. Appendix A is due to J.-P. Demailly.

  4. Smooth Nb surfaces fabricated by buffered electropolishing

    International Nuclear Information System (INIS)

    Wu, Andy T.; Mammosser, John; Phillips, Larry; Delayen, Jean; Reece, Charles; Wilkerson, Amy; Smith, David; Ike, Robert

    2007-01-01

    It was demonstrated that smooth Nb surfaces could be obtained through buffered electropolishing (BEP) employing an electrolyte consisting of lactic, sulfuric, and hydrofluoric acids. Parameters that control the polishing process were optimized to achieve a smooth surface finish. The polishing rate of BEP was determined to be 0.646 μm/min, much higher than the 0.381 μm/min achieved by the conventional electropolishing (EP) process widely used in the superconducting radio frequency (SRF) community. Root-mean-square roughness measurements using a 3D profilometer revealed that Nb surfaces treated by BEP were an order of magnitude smoother than those treated by the optimized EP process. The chemical composition of the Nb surfaces after BEP was analyzed by static and dynamic secondary ion mass spectrometry (SIMS) systems. SIMS results implied that the surface oxide structure of Nb might be more complicated than is usually believed and could be inhomogeneous. Preliminary results of BEP on Nb SRF single-cell cavities and half-cells were reported. It was shown that smooth and bright surfaces could be obtained in 1800 s when the electric field inside an SRF cavity was uniform during a BEP process. This study showed that BEP is a promising technique for surface treatment of Nb SRF cavities to be used in particle accelerators

  5. Did you smooth your well logs the right way for seismic interpretation?

    International Nuclear Information System (INIS)

    Duchesne, Mathieu J; Gaillot, Philippe

    2011-01-01

    Correlations between physical properties and seismic reflection data are useful to determine the geological nature of seismic reflections and the lateral extent of geological strata. The difference in resolution between well logs and seismic data is a major hurdle faced by seismic interpreters when tying the two data sets. In general, log data have a resolution at least two orders of magnitude greater than seismic data. Smoothing physical property logs improves correlation at the seismic scale. Three different approaches were used and compared to smooth a density log: binomial filtering, seismic wavelet filtering and discrete wavelet transform (DWT) filtering. Regression plots between the density logs and the acoustic impedance show that the data smoothed with the DWT are the only ones that preserve the original relationship between the raw density data and the acoustic impedance. Smoothed logs were then used to generate synthetic seismograms that were tied to seismic data at the borehole site. The best ties were achieved using the synthetic seismogram computed with the density log processed with the DWT. The good performance of the DWT is explained by its adaptive multi-scale characteristic, which preserves significant local changes of density in the high-resolution data series that are also pictured at the seismic scale. Since synthetic seismograms are generated using smoothed logs, the choice of the smoothing method impacts the quality of seismic-to-well ties. This ultimately can have economic implications during hydrocarbon exploration or exploitation phases
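    The DWT smoothing step is straightforward to reproduce with PyWavelets: decompose the log, discard the finest detail scales, and reconstruct. The wavelet and the number of discarded levels below are assumptions, not the paper's published choices.

```python
import numpy as np
import pywt  # PyWavelets

def dwt_smooth(log, wavelet="db4", drop_levels=3):
    """Smooth a well log by zeroing its finest DWT detail scales.

    Unlike a fixed-width running average, the multi-scale cut preserves
    sharp bed boundaries carried by the coarse scales, which is the kind
    of behavior the record credits for keeping the density/impedance
    relationship intact. drop_levels finest detail bands are discarded.
    """
    coeffs = pywt.wavedec(log, wavelet)
    for i in range(1, drop_levels + 1):
        coeffs[-i] = np.zeros_like(coeffs[-i])   # kill fine-scale detail
    return pywt.waverec(coeffs, wavelet)[: len(log)]

# Hypothetical density log: blocky layers plus measurement noise.
rng = np.random.default_rng(6)
layers = np.repeat(rng.uniform(2.0, 2.7, 20), 50)        # g/cm^3
density = layers + 0.05 * rng.standard_normal(len(layers))
print(np.round(dwt_smooth(density)[:5], 3))
```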

  6. Output Power Smoothing Control for a Wind Farm Based on the Allocation of Wind Turbines

    Directory of Open Access Journals (Sweden)

    Ying Zhu

    2018-06-01

    Full Text Available This paper presents a new output power smoothing control strategy for a wind farm based on the allocation of wind turbines. The wind turbines in the wind farm are divided into control wind turbines (CWTs) and power wind turbines (PWTs). The PWTs are expected to output as much power as possible, and a maximum power point tracking (MPPT) control strategy combined with a rotor-inertia-based power smoothing method is adopted. The CWTs are in charge of the output power smoothing for the whole wind farm by delivering an appropriately calculated power. A battery energy storage system (BESS) with small capacity is installed as support, and its charge and discharge times are greatly reduced compared with those of traditional ESS-based power smoothing strategies. A simulation model of the permanent magnet synchronous generator (PMSG) based wind farm, taking the wake effect into account, is built in Matlab/Simulink to test the proposed power smoothing method. Three different working modes of the wind farm are considered in the simulation, and the simulation results verify the effectiveness of the proposed power smoothing control strategy.

  7. Smoothing of respiratory motion traces for motion-compensated radiotherapy.

    Science.gov (United States)

    Ernst, Floris; Schlaefer, Alexander; Schweikard, Achim

    2010-01-01

    The CyberKnife system has been used successfully for several years to radiosurgically treat tumors without the need for stereotactic fixation or sedation of the patient. It has been shown that tumor motion in the lung, liver, and pancreas can be tracked with acceptable accuracy and repeatability. However, highly precise targeting of tumors in the lower abdomen, especially tumors which exhibit strong motion, remains problematic. The reasons for this are manifold, such as the slow tracking system operating at 26.5 Hz and the use of the signal from the tracking camera "as is." Since the motion recorded with the camera is used to compensate for system latency by prediction, and the predicted signal is subsequently used to infer the tumor position from a correlation model based on x-ray imaging of gold fiducials around the tumor, camera noise directly influences the targeting accuracy. The goal of this work is to establish the suitability of a new smoothing method for respiratory motion traces used in motion-compensated radiotherapy. The authors endeavor to show that better prediction--with a lower RMS error of the predicted signal--and/or smoother prediction is possible using this method. The authors evaluated six commercially available tracking systems (NDI Aurora, Polaris Classic, Polaris Vicra, MicronTracker2 H40, FP5000, and accuTrack compact). The authors first tracked markers both stationary and in motion to establish the systems' noise characteristics. Then the authors applied a smoothing method based on the à trous wavelet decomposition to reduce the devices' noise level. Additionally, the smoothed signal of the moving target and a motion trace from actual human respiratory motion were subjected to prediction using the MULIN and the nLMS2 algorithms. The authors established that the noise distribution for a static target is Gaussian and that when the probe is moved such as to mimic human respiration, it remains Gaussian with the exception of the FP5000 and the
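
    The à trous ("with holes") decomposition used by the authors can be sketched in a few lines of Python; the B3-spline kernel is the classic filter for this transform, while the number of scales and the choice to discard only the finest detail plane are illustrative assumptions.

        # Sketch: à trous wavelet smoothing of a 1D motion trace.
        import numpy as np

        def a_trous_smooth(x, n_scales=3):
            h = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0  # B3-spline filter
            c = np.asarray(x, dtype=float)
            details = []
            for j in range(n_scales):
                # Dilate the filter by inserting 2**j - 1 zeros between taps.
                hj = np.zeros(4 * 2**j + 1)
                hj[:: 2**j] = h
                c_next = np.convolve(c, hj, mode="same")
                details.append(c - c_next)   # detail plane at scale j
                c = c_next
            # Exact reconstruction would add back all detail planes;
            # dropping the finest one (details[0]) removes camera noise.
            return c + sum(details[1:])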

  8. Smooth individual level covariates adjustment in disease mapping.

    Science.gov (United States)

    Huque, Md Hamidul; Anderson, Craig; Walton, Richard; Woolford, Samuel; Ryan, Louise

    2018-05-01

    Spatial models for disease mapping should ideally account for covariates measured both at individual and area levels. The newly available "indiCAR" model fits the popular conditional autoregressive (CAR) model by accommodating both individual and group level covariates while adjusting for spatial correlation in the disease rates. This algorithm has been shown to be effective but assumes log-linear associations between individual level covariates and outcome. In many studies, the relationship between individual level covariates and the outcome may be non-log-linear, and methods to capture such nonlinearity between individual level covariates and outcome in spatial regression modeling are not well developed. In this paper, we propose a new algorithm, smooth-indiCAR, to fit an extension of the popular conditional autoregressive model that can accommodate both linear and nonlinear individual level covariate effects while adjusting for group level covariates and spatial correlation in the disease rates. In this formulation, the effect of a continuous individual level covariate is accommodated via penalized splines. We describe a two-step estimation procedure to obtain reliable estimates of individual and group level covariate effects, in which individual and group level covariate effects are estimated separately. This distributed computing framework enhances its application in the Big Data domain with a large number of individual/group level covariates. We evaluate the performance of smooth-indiCAR through simulation. Our results indicate that the smooth-indiCAR method provides reliable estimates of all regression and random effect parameters. We illustrate our proposed methodology with an analysis of data on neutropenia admissions in New South Wales (NSW), Australia. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
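
    The penalized-spline component for a single continuous individual-level covariate can be sketched as below; the truncated-power basis with a ridge penalty is a common P-spline surrogate, and the knot count and smoothing parameter lam are assumptions rather than the paper's settings.

        # Sketch: penalized (cubic) spline fit of y on a continuous covariate x.
        import numpy as np

        def penalized_spline_fit(x, y, n_knots=10, lam=1.0):
            knots = np.quantile(x, np.linspace(0, 1, n_knots + 2)[1:-1])

            def basis(z):
                return np.column_stack(
                    [np.ones_like(z), z, z**2, z**3]
                    + [np.clip(z - k, 0.0, None) ** 3 for k in knots]
                )

            B = basis(x)
            # Ridge penalty on the truncated-power coefficients only, which
            # shrinks the fit toward a plain cubic polynomial as lam grows.
            pen = np.zeros(B.shape[1])
            pen[4:] = lam
            beta = np.linalg.solve(B.T @ B + np.diag(pen), B.T @ y)
            return lambda z: basis(z) @ beta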

  9. Smoothed dissipative particle dynamics with angular momentum conservation

    Energy Technology Data Exchange (ETDEWEB)

    Müller, Kathrin, E-mail: k.mueller@fz-juelich.de; Fedosov, Dmitry A., E-mail: d.fedosov@fz-juelich.de; Gompper, Gerhard, E-mail: g.gompper@fz-juelich.de

    2015-01-15

    Smoothed dissipative particle dynamics (SDPD) combines two popular mesoscopic techniques, the smoothed particle hydrodynamics and dissipative particle dynamics (DPD) methods, and can be considered an improved dissipative particle dynamics approach. Despite several advantages of the SDPD method over the conventional DPD model, the original formulation of SDPD by Español and Revenga (2003) [9] lacks angular momentum conservation, leading to unphysical results for problems where the conservation of angular momentum is essential. To overcome this limitation, we extend the SDPD method by introducing a particle spin variable such that local and global angular momentum conservation is restored. The new SDPD formulation (SDPD+a) is directly derived from the Navier–Stokes equation for fluids with spin, while thermal fluctuations are incorporated similarly to the DPD method. We test the new SDPD method and demonstrate that it properly reproduces fluid transport coefficients. SDPD with angular momentum conservation is also validated on two problems: (i) the Taylor–Couette flow with two immiscible fluids and (ii) a tank-treading vesicle in shear flow with a viscosity contrast between inner and outer fluids. For both problems, the new SDPD method leads to simulation predictions in agreement with the corresponding analytical theories, while the original SDPD method fails to capture the physical characteristics of the systems properly due to the violation of angular momentum conservation. In conclusion, the extended SDPD method with angular momentum conservation provides a new approach for tackling fluid problems such as multiphase flows and vesicle/cell suspensions, where the conservation of angular momentum is essential.

  10. Nodular smooth muscle metaplasia in multiple peritoneal endometriosis

    OpenAIRE

    Kim, Hyun-Soo; Yoon, Gun; Ha, Sang Yun; Song, Sang Yong

    2015-01-01

    We report here an unusual presentation of peritoneal endometriosis with smooth muscle metaplasia as multiple protruding masses on the lateral pelvic wall. Smooth muscle metaplasia is a common finding in rectovaginal endometriosis, whereas in peritoneal endometriosis, smooth muscle metaplasia is uncommon and its nodular presentation on the pelvic wall is even rarer. To the best of our knowledge, this is the first case of nodular smooth muscle metaplasia occurring in peritoneal endometriosis. A...

  11. Radial Basis Function Based Quadrature over Smooth Surfaces

    Science.gov (United States)

    2016-03-24

    [Table 1 residue -- radial basis functions φ(r), piecewise smooth (conditionally positive definite): MN monomial |r|^(2m+1); TPS thin plate spline |r|^(2m) ln|r|; plus an infinitely smooth class.] ...smooth surfaces using polynomial interpolants, while [27] couples thin plate spline interpolation (see Table 1) with Green's integral formula [29].

  12. Neurophysiology and Neuroanatomy of Smooth Pursuit in Humans

    Science.gov (United States)

    Lencer, Rebekka; Trillenberg, Peter

    2008-01-01

    Smooth pursuit eye movements enable us to focus our eyes on moving objects by utilizing well-established mechanisms of visual motion processing, sensorimotor transformation and cognition. Novel smooth pursuit tasks and quantitative measurement techniques can help unravel the different smooth pursuit components and complex neural systems involved…

  13. Pseudo-Random Sequences Generated by a Class of One-Dimensional Smooth Map

    Science.gov (United States)

    Wang, Xing-Yuan; Qin, Xue; Xie, Yi-Xin

    2011-08-01

    We extend a class of one-dimensional smooth maps. We ensure that for each desired interval of the parameter, the map's Lyapunov exponent is positive. We then propose a novel parameter perturbation method based on the good properties of the extended one-dimensional smooth map. We perturb the parameter r in each iteration by the real number x_i generated by the iteration. Finally, the auto-correlation function and the NIST statistical test suite are used to illustrate the method's randomness. We provide an application of this method in image encryption. Experiments show that the pseudo-random sequences are suitable for this application.

  14. Pseudo-Random Sequences Generated by a Class of One-Dimensional Smooth Map

    International Nuclear Information System (INIS)

    Wang Xing-Yuan; Qin Xue; Xie Yi-Xin

    2011-01-01

    We extend a class of one-dimensional smooth maps. We ensure that for each desired interval of the parameter, the map's Lyapunov exponent is positive. We then propose a novel parameter perturbation method based on the good properties of the extended one-dimensional smooth map. We perturb the parameter r in each iteration by the real number x_i generated by the iteration. Finally, the auto-correlation function and the NIST statistical test suite are used to illustrate the method's randomness. We provide an application of this method in image encryption. Experiments show that the pseudo-random sequences are suitable for this application. (general)
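
    Since the extended map itself is not reproduced in these records, the Python sketch below uses the logistic map as a stand-in to illustrate the parameter perturbation idea: the parameter r is perturbed at each iteration by the iterate x_i just produced, and bits are extracted by thresholding. The perturbation scale eps and the bit-extraction rule are assumptions.

        # Sketch: pseudo-random bits from a perturbed one-dimensional map.
        import numpy as np

        def perturbed_map_bits(x0=0.3141, r0=3.99, eps=0.005, n=10_000):
            bits = np.empty(n, dtype=np.uint8)
            x, r = x0, r0
            for i in range(n):
                x = r * x * (1.0 - x)           # one iteration of the map
                r = r0 + eps * (x - 0.5)        # perturb r by the iterate
                bits[i] = 1 if x >= 0.5 else 0  # threshold to a bit
            return bits

        # The resulting stream would then be checked with the auto-correlation
        # function and the NIST statistical test suite, as in the paper.
        bits = perturbed_map_bits()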

  15. Unexpected properties of bandwidth choice when smoothing discrete data for constructing a functional data classifier

    KAUST Repository

    Carroll, Raymond J.

    2013-12-01

    The data functions that are studied in the course of functional data analysis are assembled from discrete data, and the level of smoothing that is used is generally that which is appropriate for accurate approximation of the conceptually smooth functions that were not actually observed. Existing literature shows that this approach is effective, and even optimal, when using functional data methods for prediction or hypothesis testing. However, in the present paper we show that this approach is not effective in classification problems. There, a useful rule of thumb is that undersmoothing is often desirable, but there are several surprising qualifications to that approach. First, the effect of smoothing the training data can be more significant than that of smoothing the new data set to be classified; second, undersmoothing is not always the right approach, and in fact in some cases using a relatively large bandwidth can be more effective; and third, these perverse results are the consequence of very unusual properties of error rates, expressed as functions of smoothing parameters. For example, the orders of magnitude of optimal smoothing parameter choices depend on the signs and sizes of terms in an expansion of error rate, and those signs and sizes can vary dramatically from one setting to another, even for the same classifier.

  16. Unexpected properties of bandwidth choice when smoothing discrete data for constructing a functional data classifier

    KAUST Repository

    Carroll, Raymond J.; Delaigle, Aurore; Hall, Peter

    2013-01-01

    The data functions that are studied in the course of functional data analysis are assembled from discrete data, and the level of smoothing that is used is generally that which is appropriate for accurate approximation of the conceptually smooth functions that were not actually observed. Existing literature shows that this approach is effective, and even optimal, when using functional data methods for prediction or hypothesis testing. However, in the present paper we show that this approach is not effective in classification problems. There, a useful rule of thumb is that undersmoothing is often desirable, but there are several surprising qualifications to that approach. First, the effect of smoothing the training data can be more significant than that of smoothing the new data set to be classified; second, undersmoothing is not always the right approach, and in fact in some cases using a relatively large bandwidth can be more effective; and third, these perverse results are the consequence of very unusual properties of error rates, expressed as functions of smoothing parameters. For example, the orders of magnitude of optimal smoothing parameter choices depend on the signs and sizes of terms in an expansion of error rate, and those signs and sizes can vary dramatically from one setting to another, even for the same classifier.

  17. Voltage dependent potassium channel remodeling in murine intestinal smooth muscle hypertrophy induced by partial obstruction.

    Science.gov (United States)

    Liu, Dong-Hai; Huang, Xu; Guo, Xin; Meng, Xiang-Min; Wu, Yi-Song; Lu, Hong-Li; Zhang, Chun-Mei; Kim, Young-chul; Xu, Wen-Xie

    2014-01-01

    Partial obstruction of the small intestine causes obvious hypertrophy of smooth muscle cells and motility disorder in the bowel proximal to the obstruction. To identify electric remodeling of hypertrophic smooth muscles in partially obstructed murine small intestine, patch-clamp and intracellular microelectrode recording methods were used, and Western blot, immunofluorescence, and immunoprecipitation were utilized to examine changes in channel protein expression and phosphorylation levels. After 14 days of obstruction, partial obstruction caused obvious smooth muscle hypertrophy in the proximally located intestine. The slow waves of intestinal smooth muscles in the dilated region were significantly suppressed, their amplitude and frequency were reduced, and the resting membrane potentials were depolarized compared with normal and sham animals. The current density of the voltage dependent potassium channel (KV) was significantly decreased in the hypertrophic smooth muscle cells, and the voltage sensitivity of KV activation was altered. The sensitivity of KV currents (IKV) to TEA, a nonselective potassium channel blocker, increased significantly, but the sensitivity of IKV to 4-AP, a KV blocker, remained the same. The protein levels of KV4.3 and KV2.2 were up-regulated in the hypertrophic smooth muscle cell membrane. The serine and threonine phosphorylation levels of KV4.3 and KV2.2 were significantly increased in the hypertrophic smooth muscle cells. This study thus represents the first identification of KV channel remodeling in murine small intestinal smooth muscle hypertrophy induced by partial obstruction. The enhanced phosphorylation of KV4.3 and KV2.2 may be involved in this process.

  18. Voltage dependent potassium channel remodeling in murine intestinal smooth muscle hypertrophy induced by partial obstruction.

    Directory of Open Access Journals (Sweden)

    Dong-Hai Liu

    Full Text Available Partial obstruction of the small intestine causes obvious hypertrophy of smooth muscle cells and motility disorder in the bowel proximal to the obstruction. To identify electric remodeling of hypertrophic smooth muscles in partially obstructed murine small intestine, patch-clamp and intracellular microelectrode recording methods were used, and Western blot, immunofluorescence, and immunoprecipitation were utilized to examine changes in channel protein expression and phosphorylation levels. After 14 days of obstruction, partial obstruction caused obvious smooth muscle hypertrophy in the proximally located intestine. The slow waves of intestinal smooth muscles in the dilated region were significantly suppressed, their amplitude and frequency were reduced, and the resting membrane potentials were depolarized compared with normal and sham animals. The current density of the voltage dependent potassium channel (KV) was significantly decreased in the hypertrophic smooth muscle cells, and the voltage sensitivity of KV activation was altered. The sensitivity of KV currents (IKV) to TEA, a nonselective potassium channel blocker, increased significantly, but the sensitivity of IKV to 4-AP, a KV blocker, remained the same. The protein levels of KV4.3 and KV2.2 were up-regulated in the hypertrophic smooth muscle cell membrane. The serine and threonine phosphorylation levels of KV4.3 and KV2.2 were significantly increased in the hypertrophic smooth muscle cells. This study thus represents the first identification of KV channel remodeling in murine small intestinal smooth muscle hypertrophy induced by partial obstruction. The enhanced phosphorylation of KV4.3 and KV2.2 may be involved in this process.

  19. Voltage harmonic elimination with RLC based interface smoothing filter

    International Nuclear Information System (INIS)

    Chandrasekaran, K; Ramachandaramurthy, V K

    2015-01-01

    A method is proposed for designing a Dynamic Voltage Restorer (DVR) with an RLC interface smoothing filter. The RLC filter connected to the IGBT-based Voltage Source Inverter (VSI) is intended to eliminate voltage harmonics in the busbar voltage and switching harmonics from the VSI by producing a PWM-controlled harmonic voltage. In this method, the DVR or series active filter produces a PWM voltage that cancels the existing harmonic voltage due to any harmonic voltage source. The proposed method is valid for any distorted busbar voltage. The operating VSI handles no active power, only harmonic power. The DVR is able to suppress the lower-order switching harmonics generated by the IGBT-based VSI. Good dynamic and transient results were obtained. The Total Harmonic Distortion (THD) is minimized to zero at the sensitive load end. Digital simulations are carried out using PSCAD/EMTDC to validate the performance of the RLC filter. Simulated results are presented. (paper)

  20. Smooth and non-smooth travelling waves in a nonlinearly dispersive Boussinesq equation

    International Nuclear Information System (INIS)

    Shen Jianwei; Xu Wei; Lei Youming

    2005-01-01

    The dynamical behavior and special exact solutions of the nonlinearly dispersive Boussinesq equation (B(m,n) equation), $u_{tt} - u_{xx} - a(u^n)_{xx} + b(u^m)_{xxxx} = 0$, are studied by using the bifurcation theory of dynamical systems. As a result, all possible phase portraits in the parametric space for the travelling wave system, solitary wave, kink and anti-kink wave solutions, and uncountably infinitely many smooth and non-smooth periodic wave solutions are obtained. It is shown that the existence of a singular straight line in the travelling wave system is the reason why smooth waves ultimately converge to cusp waves. As the parameters are varied, various sufficient conditions guaranteeing the existence of the above solutions under different parametric conditions are given.

  1. Smooth extrapolation of unknown anatomy via statistical shape models

    Science.gov (United States)

    Grupp, R. B.; Chiang, H.; Otake, Y.; Murphy, R. J.; Gordon, C. R.; Armand, M.; Taylor, R. H.

    2015-03-01

    Several methods to perform extrapolation of unknown anatomy were evaluated. The primary application is to enhance surgical procedures that may use partial medical images or medical images of incomplete anatomy. Le Fort-based face-jaw-teeth transplant is one such procedure. From CT data of 36 skulls and 21 mandibles, separate Statistical Shape Models of the anatomical surfaces were created. Using the Statistical Shape Models, incomplete surfaces were projected to obtain complete surface estimates. The surface estimates exhibit non-zero error in regions where the true surface is known; it is desirable to keep the true surface and seamlessly merge the estimated unknown surface. Existing extrapolation techniques produce non-smooth transitions from the true surface to the estimated surface, resulting in additional error and a less aesthetically pleasing result. The three extrapolation techniques evaluated were: copying and pasting of the surface estimate (non-smooth baseline), feathering between the patient surface and the surface estimate, and an estimate generated via a Thin Plate Spline trained on displacements between the surface estimate and corresponding vertices of the known patient surface. The feathering and Thin Plate Spline approaches both yielded smooth transitions; however, feathering corrupted known vertex values. Leave-one-out analyses were conducted, with 5% to 50% of known anatomy removed from the left-out patient and estimated via the proposed approaches. The Thin Plate Spline approach yielded smaller errors than the other two approaches, with an average vertex error improvement of 1.46 mm and 1.38 mm for the skull and mandible, respectively, over the baseline approach.
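
    The Thin Plate Spline merging approach can be sketched with SciPy's RBF interpolator: train a TPS on the displacements between known vertices and their estimated positions, then warp the whole estimated surface. The variable names and the zero smoothing setting are assumptions.

        # Sketch: TPS-based seamless merge of an estimated surface with
        # known patient anatomy.
        import numpy as np
        from scipy.interpolate import RBFInterpolator

        def tps_merge(est_vertices, known_idx, known_vertices):
            """est_vertices: (N, 3) shape-model estimate; known_idx: indices
            where the true surface is known; known_vertices: true positions."""
            displacements = known_vertices - est_vertices[known_idx]
            tps = RBFInterpolator(
                est_vertices[known_idx], displacements,
                kernel="thin_plate_spline", smoothing=0.0,
            )
            # Known vertices map exactly onto the true surface; the warp
            # decays smoothly over the estimated region.
            return est_vertices + tps(est_vertices)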

  2. Bessel smoothing filter for spectral-element mesh

    Science.gov (United States)

    Trinh, P. T.; Brossier, R.; Métivier, L.; Virieux, J.; Wellington, P.

    2017-06-01

    Smoothing filters are extremely important tools in seismic imaging and inversion, such as for traveltime tomography, migration and waveform inversion. For efficiency, and as they can be used a number of times during inversion, it is important that these filters can easily incorporate prior information on the geological structure of the investigated medium, through variable coherent lengths and orientation. In this study, we promote the use of the Bessel filter to achieve these purposes. Instead of considering the direct application of the filter, we demonstrate that we can rely on the equation associated with its inverse filter, which amounts to the solution of an elliptic partial differential equation. This enhances the efficiency of the filter application, and also its flexibility. We apply this strategy within a spectral-element-based elastic full waveform inversion framework. Taking advantage of this formulation, we apply the Bessel filter by solving the associated partial differential equation directly on the spectral-element mesh through the standard weak formulation. This avoids cumbersome projection operators between the spectral-element mesh and a regular Cartesian grid, or expensive explicit windowed convolution on the finite-element mesh, which is often used for applying smoothing operators. The associated linear system is solved efficiently through a parallel conjugate gradient algorithm, in which the matrix vector product is factorized and highly optimized with vectorized computation. Significant scaling behaviour is obtained when comparing this strategy with the explicit convolution method. The theoretical numerical complexity of this approach increases linearly with the coherent length, whereas a sublinear relationship is observed practically. Numerical illustrations are provided here for schematic examples, and for a more realistic elastic full waveform inversion gradient smoothing on the SEAM II benchmark model. These examples illustrate well the
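
    The central trick -- applying the smoothing filter through its inverse, i.e., by solving an elliptic PDE -- can be sketched on a simple 1D finite-difference grid. The operator (I - (L**2/2) * Laplacian), the coherent length L, and the grid are assumptions; the paper solves the weak form directly on the spectral-element mesh instead.

        # Sketch: Bessel-type smoothing by one sparse conjugate-gradient solve.
        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import cg

        def bessel_smooth_1d(f, dx=1.0, L=5.0):
            n = f.size
            lap = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / dx**2
            A = sp.eye(n) - 0.5 * L**2 * lap   # SPD: CG is applicable
            s, info = cg(A, f)
            assert info == 0, "CG did not converge"
            return s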

  3. Effects of slope smoothing in river channel modeling

    Science.gov (United States)

    Kim, Kyungmin; Liu, Frank; Hodges, Ben R.

    2017-04-01

    In extending dynamic river modeling with the 1D Saint-Venant equations from a single reach to a large watershed, there are critical questions as to how much bathymetric knowledge is necessary and how it should be represented parsimoniously. The ideal model will include the detail necessary to provide realism, but not extraneous detail that should not exert a control on a 1D (cross-section averaged) solution. In a Saint-Venant model, the overall complexity of the river channel morphometry is typically abstracted into metrics for the channel slope, cross-sectional area, hydraulic radius, and roughness. In stream segments where cross-section surveys are closely spaced, it is not uncommon to have sharp changes in slope or even negative values (where a positive slope is in the downstream direction). However, solving river flow with the Saint-Venant equations requires a degree of smoothness in the equation parameters; with directly measured channel slopes, the equation set may not be Lipschitz continuous. The consequences of non-smoothness are typically extended computational times to converge solutions (or complete failure to converge) and/or numerical instabilities under transient conditions. We have investigated using cubic splines to smooth the bottom slope and ensure always-positive reference slopes within a 1D model. This method has been implemented in the Simulation Program for River Networks (SPRNT) and is compared to the standard HEC-RAS river solver. It is shown that the reformulation of the reference slope is both in keeping with the underlying derivation of the Saint-Venant equations and provides practical numerical stability without altering the realism of the simulation. This research was supported in part by the National Science Foundation under grant number CCF-1331610.
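
    A minimal sketch of the slope-smoothing idea, assuming surveyed bottom elevations at increasing downstream stations; SciPy's smoothing spline and the slope floor are illustrative stand-ins for SPRNT's actual cubic-spline formulation.

        # Sketch: smooth, always-positive reference slopes from survey data.
        import numpy as np
        from scipy.interpolate import UnivariateSpline

        def reference_slopes(x_downstream, z_bottom, s=1.0, slope_floor=1e-5):
            """x_downstream: station [m], strictly increasing; z_bottom: bed
            elevation [m]; s: spline smoothing factor (assumption)."""
            spline = UnivariateSpline(x_downstream, z_bottom, k=3, s=s)
            # Positive slope = bed dropping in the downstream direction.
            slope = -spline.derivative()(x_downstream)
            return np.maximum(slope, slope_floor)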

  4. ASIC PROTEINS REGULATE SMOOTH MUSCLE CELL MIGRATION

    OpenAIRE

    Grifoni, Samira C.; Jernigan, Nikki L.; Hamilton, Gina; Drummond, Heather A.

    2007-01-01

    The purpose of the present study was to investigate Acid Sensing Ion Channel (ASIC) protein expression and importance in cellular migration. We recently demonstrated that Epithelial Na+ Channel (ENaC) proteins are required for vascular smooth muscle cell (VSMC) migration; however, the role of the closely related ASIC proteins has not been addressed. We used RT-PCR and immunolabeling to determine expression of ASIC1, ASIC2, ASIC3 and ASIC4 in A10 cells. We used small interfering RNA to silence indi...

  5. A smooth exit from eternal inflation?

    Science.gov (United States)

    Hawking, S. W.; Hertog, Thomas

    2018-04-01

    The usual theory of inflation breaks down in eternal inflation. We derive a dual description of eternal inflation in terms of a deformed Euclidean CFT located at the threshold of eternal inflation. The partition function gives the amplitude of different geometries of the threshold surface in the no-boundary state. Its local and global behavior in dual toy models shows that the amplitude is low for surfaces which are not nearly conformal to the round three-sphere and essentially zero for surfaces with negative curvature. Based on this we conjecture that the exit from eternal inflation does not produce an infinite fractal-like multiverse, but is finite and reasonably smooth.

  6. On spaces of functions of smoothness zero

    International Nuclear Information System (INIS)

    Besov, Oleg V

    2012-01-01

    The paper is concerned with the new spaces $\bar{B}^0_{p,q}$ of functions of smoothness zero defined on the n-dimensional Euclidean space $\mathbb{R}^n$ or on a subdomain $G$ of $\mathbb{R}^n$. These spaces are compared with the spaces $B^0_{p,q}(\mathbb{R}^n)$ and $\mathrm{bmo}(\mathbb{R}^n)$. The embedding theorems for Sobolev spaces are refined in terms of the space $\bar{B}^0_{p,q}$ with the limiting exponent. Bibliography: 8 titles.

  7. Smooth Nanowire/Polymer Composite Transparent Electrodes

    KAUST Repository

    Gaynor, Whitney; Burkhard, George F.; McGehee, Michael D.; Peumans, Peter

    2011-01-01

    Smooth composite transparent electrodes are fabricated via lamination of silver nanowires into the polymer poly(3,4-ethylene dioxythiophene):poly(styrene-sulfonate) (PEDOT:PSS). The surface roughness is dramatically reduced compared to bare nanowires. High-efficiency P3HT:PCBM organic photovoltaic cells can be fabricated using these composites, reproducing the performance of cells on indium tin oxide (ITO) on glass and improving the performance of cells on ITO on plastic. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Smooth Nanowire/Polymer Composite Transparent Electrodes

    KAUST Repository

    Gaynor, Whitney

    2011-04-29

    Smooth composite transparent electrodes are fabricated via lamination of silver nanowires into the polymer poly(3,4-ethylene dioxythiophene):poly(styrene-sulfonate) (PEDOT:PSS). The surface roughness is dramatically reduced compared to bare nanowires. High-efficiency P3HT:PCBM organic photovoltaic cells can be fabricated using these composites, reproducing the performance of cells on indium tin oxide (ITO) on glass and improving the performance of cells on ITO on plastic. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Quantum key distribution with finite resources: Smooth Min entropy vs. Smooth Renyi entropy

    Energy Technology Data Exchange (ETDEWEB)

    Mertz, Markus; Abruzzo, Silvestre; Bratzik, Sylvia; Kampermann, Hermann; Bruss, Dagmar [Institut fuer Theoretische Physik III, Duesseldorf (Germany)

    2010-07-01

    We consider different entropy measures that play an important role in the analysis of the security of QKD with finite resources. The smooth min entropy leads to an optimal bound for the length of a secure key. Another bound on the secure key length was derived by using Renyi entropies. Unfortunately, it is very hard or even impossible to calculate these entropies for realistic QKD scenarios. To estimate the security rate it becomes important to find computable bounds on these entropies. Here, we compare a lower bound for the smooth min entropy with a bound using Renyi entropies. We compare these entropies for the six-state protocol with symmetric attacks.

  10. Smoothed bootstrap and its application in parametric test procedures

    Directory of Open Access Journals (Sweden)

    Handschuh, Dmitri

    2015-03-01

    Full Text Available In empirical research, the distribution of observations is usually unknown. This creates a problem if parametric methods are to be employed, since the functionality of parametric methods relies on strong parametric assumptions. If these are violated, the results of classical parametric methods are questionable. Therefore, modifications of the parametric methods are required if the appropriateness of their assumptions is in doubt. In this article, a modification of the smoothed bootstrap is presented, using linear interpolation to approximate the distribution law suggested by the data. Applying this modification to parametric statistical methods makes it possible to take into account deviations of the observed data distributions from the classical distribution assumptions without switching to other hypotheses, which is often implicit in using nonparametric methods. The approach is based on the Monte Carlo method and is presented using one-way ANOVA as an example. The original and the modified statistical methods lead to identical outcomes when the assumptions of the original method are satisfied. For strong violations of the distributional assumptions, the modified version of the method is generally preferable. All procedures have been implemented in SAS. Test characteristics (type 1 error, operating characteristic curve) of the modified ANOVA are calculated.
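
    A minimal sketch of the described modification: bootstrap draws are taken by inverse-transform sampling from the linearly interpolated empirical distribution function. The plotting-position convention is an assumption.

        # Sketch: smoothed bootstrap via a piecewise-linear inverse CDF.
        import numpy as np

        def smoothed_bootstrap_sample(data, size, rng=None):
            rng = np.random.default_rng(rng)
            xs = np.sort(np.asarray(data, dtype=float))
            n = xs.size
            probs = (np.arange(n) + 0.5) / n   # plotting positions
            u = rng.uniform(probs[0], probs[-1], size=size)
            return np.interp(u, probs, xs)     # linear-interpolation law

        # A Monte Carlo check of, e.g., one-way ANOVA would resample each
        # group with smoothed_bootstrap_sample and recompute the F statistic.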

  11. Fixed point iterations for quasi-contractive maps in uniformly smooth Banach spaces

    International Nuclear Information System (INIS)

    Chidume, C.E.; Osilike, M.O.

    1992-05-01

    Two well-known fixed point iteration methods are applied to approximate fixed points of quasi-contractive maps in real uniformly smooth Banach spaces. While our theorems generalize important known results, our method is of independent interest. (author). 25 refs

  12. Worst-case and smoothed analysis of k-means clustering with Bregman divergences

    NARCIS (Netherlands)

    Manthey, Bodo; Röglin, H.

    2013-01-01

    The $k$-means method is the method of choice for clustering large-scale data sets, and it performs exceedingly well in practice despite its exponential worst-case running time. To narrow the gap between theory and practice, $k$-means has been studied in the semi-random input model of smoothed analysis.

  13. WIENER-HOPF SOLVER WITH SMOOTH PROBABILITY DISTRIBUTIONS OF ITS COMPONENTS

    Directory of Open Access Journals (Sweden)

    Mr. Vladimir A. Smagin

    2016-12-01

    Full Text Available The Wiener–Hopf solver with smooth probability distributions of its components is presented. The method is based on hyper-delta approximations of the initial distributions. The use of the Fourier series transformation and the characteristic function allows working with random variables concentrated on the transversal abscissa axis.

  14. Transformation-invariant and nonparametric monotone smooth estimation of ROC curves.

    Science.gov (United States)

    Du, Pang; Tang, Liansheng

    2009-01-30

    When a new diagnostic test is developed, it is of interest to evaluate its accuracy in distinguishing diseased subjects from non-diseased subjects. The accuracy of the test is often evaluated by receiver operating characteristic (ROC) curves. Smooth ROC estimates are often preferable for continuous test results when the underlying ROC curves are in fact continuous. Nonparametric and parametric methods have been proposed by various authors to obtain smooth ROC curve estimates. However, there are certain drawbacks with the existing methods. Parametric methods need specific model assumptions. Nonparametric methods do not always satisfy the inherent properties of the ROC curves, such as monotonicity and transformation invariance. In this paper we propose a monotone spline approach to obtain smooth monotone ROC curves. Our method ensures important inherent properties of the underlying ROC curves, which include monotonicity, transformation invariance, and boundary constraints. We compare the finite sample performance of the newly proposed ROC method with other ROC smoothing methods in large-scale simulation studies. We illustrate our method through a real life example. Copyright (c) 2008 John Wiley & Sons, Ltd.
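
    The monotone-spline machinery itself is beyond a short sketch, but the two properties it enforces -- smoothness and monotonicity -- can be illustrated by kernel-smoothing the empirical ROC curve and projecting the result onto monotone curves with the pool-adjacent-violators algorithm. This is a simplified stand-in, not the authors' estimator; the bandwidth h is an assumption.

        # Sketch: a smooth, monotone ROC curve estimate.
        import numpy as np

        def pava(y):
            """Pool-adjacent-violators: closest non-decreasing sequence."""
            vals, wts = [], []
            for v in y:
                vals.append(float(v))
                wts.append(1)
                while len(vals) > 1 and vals[-2] > vals[-1]:
                    w = wts[-2] + wts[-1]
                    merged = (vals[-2] * wts[-2] + vals[-1] * wts[-1]) / w
                    vals[-2:] = [merged]
                    wts[-2:] = [w]
            return np.concatenate([np.full(w, v) for v, w in zip(vals, wts)])

        def smooth_monotone_roc(diseased, healthy, h=0.05, grid=200):
            p = np.linspace(0.0, 1.0, grid)             # false-positive rates
            thresholds = np.quantile(healthy, 1.0 - p)
            tpr = np.array([(diseased >= t).mean() for t in thresholds])
            K = np.exp(-0.5 * ((p[:, None] - p[None, :]) / h) ** 2)
            return p, pava(K @ tpr / K.sum(axis=1))     # smooth, then monotone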

  15. Isotropic Growth of Graphene toward Smoothing Stitching.

    Science.gov (United States)

    Zeng, Mengqi; Tan, Lifang; Wang, Lingxiang; Mendes, Rafael G; Qin, Zhihui; Huang, Yaxin; Zhang, Tao; Fang, Liwen; Zhang, Yanfeng; Yue, Shuanglin; Rümmeli, Mark H; Peng, Lianmao; Liu, Zhongfan; Chen, Shengli; Fu, Lei

    2016-07-26

    The quality of graphene grown via chemical vapor deposition still shows a very great disparity with its theoretical properties due to the inevitable formation of grain boundaries. Designing a single-crystal substrate with an anisotropic twofold symmetry for the unidirectional alignment of graphene seeds would be a promising way to eliminate grain boundaries at the wafer scale. However, such a delicate process is easily disrupted by the obstruction of defects or impurities. Here we investigated the isotropic growth behavior of graphene single crystals by melting the growth substrate to obtain an amorphous isotropic surface, which does not offer any specific grain orientation induction or preponderant growth rate toward a certain direction during graphene growth. The as-obtained graphene grains are isotropically round with mixed edges that exhibit high activity. The orientation of adjacent grains can easily self-adjust to smoothly match each other over a liquid catalyst with facile atom delocalization, owing to the low rotational steric hindrance of the isotropic grains, thus achieving smooth stitching of the adjacent graphene. Therefore, the adverse effects of grain boundaries are eliminated and the excellent transport performance of graphene is better guaranteed. Moreover, such an isotropic growth mode can be extended to other types of layered nanomaterials, such as hexagonal boron nitride and transition metal chalcogenides, for obtaining large-size intrinsic films with low defect densities.

  16. Smooth Tubercle Bacilli: Neglected Opportunistic Tropical Pathogens

    Directory of Open Access Journals (Sweden)

    Djaltou eAboubaker

    2016-01-01

    Full Text Available Smooth tubercle bacilli (STB), including "Mycobacterium canettii", are members of the Mycobacterium tuberculosis complex (MTBC) which cause non-contagious tuberculosis in humans. This group comprises fewer than one hundred isolates characterized by smooth colonies and cordless organisms. Most STB isolates have been obtained from patients exposed to the Republic of Djibouti, but seven isolates, including the three seminal ones obtained by Georges Canetti between 1968 and 1970, were recovered from patients in France, Madagascar, sub-Saharan East Africa, and French Polynesia. STB form a genetically heterogeneous group of MTBC organisms with large 4.48 ± 0.05 Mb genomes which may link Mycobacterium kansasii to MTBC organisms. The lack of inter-human transmission suggests a yet unknown environmental reservoir. Clinical data indicate a respiratory tract route of contamination, with the digestive tract as an alternative route. Further epidemiological and clinical studies are warranted to elucidate areas of uncertainty regarding these unusual mycobacteria and the tuberculosis they cause.

  17. Snap evaporation of droplets on smooth topographies.

    Science.gov (United States)

    Wells, Gary G; Ruiz-Gutiérrez, Élfego; Le Lirzin, Youen; Nourry, Anthony; Orme, Bethany V; Pradas, Marc; Ledesma-Aguilar, Rodrigo

    2018-04-11

    Droplet evaporation on solid surfaces is important in many applications including printing, micro-patterning and cooling. While seemingly simple, the configuration of evaporating droplets on solids is difficult to predict and control. This is because evaporation typically proceeds as a "stick-slip" sequence-a combination of pinning and de-pinning events dominated by static friction or "pinning", caused by microscopic surface roughness. Here we show how smooth, pinning-free, solid surfaces of non-planar topography promote a different process called snap evaporation. During snap evaporation a droplet follows a reproducible sequence of configurations, consisting of a quasi-static phase-change controlled by mass diffusion interrupted by out-of-equilibrium snaps. Snaps are triggered by bifurcations of the equilibrium droplet shape mediated by the underlying non-planar solid. Because the evolution of droplets during snap evaporation is controlled by a smooth topography, and not by surface roughness, our ideas can inspire programmable surfaces that manage liquids in heat- and mass-transfer applications.

  18. Effects of striated laser tracks on thermal fatigue resistance of cast iron samples with biomimetic non-smooth surface

    International Nuclear Information System (INIS)

    Tong, Xin; Zhou, Hong; Liu, Min; Dai, Ming-jiang

    2011-01-01

    In order to enhance the thermal fatigue resistance of cast iron materials, samples with a biomimetic non-smooth surface were processed with a Neodymium:Yttrium Aluminum Garnet (Nd:YAG) laser. The thermal fatigue resistance of smooth and non-smooth samples was investigated with a self-controlled thermal fatigue test method, and the effects of striated laser tracks on thermal fatigue resistance were also studied. The results indicated that a biomimetic non-smooth surface is beneficial for improving the thermal fatigue resistance of cast iron samples. The striated non-smooth units formed by laser tracks that were perpendicular to the thermal cracks had the best crack propagation resistance. The mechanisms behind these influences are discussed, and schematic drawings are introduced to describe them.

  19. SMOOTH MYOCYTES AND COLLAGENOUS FIBERS OF THE URINARY BLADDER OF RATS IN DIABETES MELLITUS

    Directory of Open Access Journals (Sweden)

    Nadiya Tokaruk

    2015-12-01

    Key words: diabetes mellitus; smooth myocytes; collagenous fibers. Introduction. Diabetes mellitus (DM) causes diabetic cystopathy, which is associated with detrusor dysfunction and changes in the content of collagenous fibers. The results of previous studies are ambiguous and often contradictory, requiring objective data that could be obtained by simultaneously determining the relative areas of smooth myocytes and collagenous fibers and studying their ultrastructure. Objective: To determine the peculiarities of the structural and metric organization of smooth myocytes and collagenous fibers of the urinary bladder (UB) of rats during different stages of DM. Materials and methods. DM was modeled by streptozotocin in Wistar rats. The relative areas of the studied structures were determined on digital images of histological sections of UB stained by Masson's method, using an original automatic procedure. Smooth myocytes were studied ultrastructurally. Results. During days 14-28 of DM development, the percentage area of collagenous fibers decreases and the percentage area of smooth myocytes of the UB wall increases. Expansion of intercellular spaces and the development of vacuolar degeneration of myocytes are observed. During days 42-56, the percentage area of collagenous fibers increases and the percentage area of smooth myocytes decreases. Ultrastructurally, subsiding vacuolar dystrophy, short-term balloon dystrophy, the appearance of dark myocytes, and moderate karyorrhexis were observed. By day 70 of the experiment, the percentage areas of collagenous fibers and smooth myocytes do not change significantly, most dark myocytes are involutive, and there are areas of local fibrosis and myocyte sequestration. Conclusions. The ultrastructural changes are characterized by pronounced polymorphism and have a chronological relationship.

  20. Smooth time-dependent receiver operating characteristic curve estimators.

    Science.gov (United States)

    Martínez-Camblor, Pablo; Pardo-Fernández, Juan Carlos

    2018-03-01

    The receiver operating characteristic curve is a popular graphical method often used to study the diagnostic capacity of continuous (bio)markers. When the considered outcome is a time-dependent variable, two main extensions have been proposed: the cumulative/dynamic receiver operating characteristic curve and the incident/dynamic receiver operating characteristic curve. In both cases, the main problem in developing appropriate estimators is the estimation of the joint distribution of the variables time-to-event and marker. As usual, different approximations lead to different estimators. In this article, the authors explore the use of a bivariate kernel density estimator which accounts for censored observations in the sample and produces smooth estimators of the time-dependent receiver operating characteristic curves. The performance of the resulting cumulative/dynamic and incident/dynamic receiver operating characteristic curves is studied by means of Monte Carlo simulations. Additionally, the influence of the choice of the required smoothing parameters is explored. Finally, two real applications are considered. An R package is also provided as a complement to this article.

  1. Parallel multigrid smoothing: polynomial versus Gauss-Seidel

    International Nuclear Information System (INIS)

    Adams, Mark; Brezina, Marian; Hu, Jonathan; Tuminaro, Ray

    2003-01-01

    Gauss-Seidel is often the smoother of choice within multigrid applications. In the context of unstructured meshes, however, maintaining good parallel efficiency is difficult with multiplicative iterative methods such as Gauss-Seidel. This leads us to consider alternative smoothers. We discuss the computational advantages of polynomial smoothers within parallel multigrid algorithms for positive definite symmetric systems. Two particular polynomials are considered: Chebyshev and a multilevel specific polynomial. The advantages of polynomial smoothing over traditional smoothers such as Gauss-Seidel are illustrated on several applications: Poisson's equation, thin-body elasticity, and eddy current approximations to Maxwell's equations. While parallelizing the Gauss-Seidel method typically involves a compromise between a scalable convergence rate and maintaining high flop rates, polynomial smoothers achieve parallel scalable multigrid convergence rates without sacrificing flop rates. We show that, although parallel computers are the main motivation, polynomial smoothers are often surprisingly competitive with Gauss-Seidel smoothers on serial machines.

  2. Parallel multigrid smoothing: polynomial versus Gauss-Seidel

    Science.gov (United States)

    Adams, Mark; Brezina, Marian; Hu, Jonathan; Tuminaro, Ray

    2003-07-01

    Gauss-Seidel is often the smoother of choice within multigrid applications. In the context of unstructured meshes, however, maintaining good parallel efficiency is difficult with multiplicative iterative methods such as Gauss-Seidel. This leads us to consider alternative smoothers. We discuss the computational advantages of polynomial smoothers within parallel multigrid algorithms for positive definite symmetric systems. Two particular polynomials are considered: Chebyshev and a multilevel specific polynomial. The advantages of polynomial smoothing over traditional smoothers such as Gauss-Seidel are illustrated on several applications: Poisson's equation, thin-body elasticity, and eddy current approximations to Maxwell's equations. While parallelizing the Gauss-Seidel method typically involves a compromise between a scalable convergence rate and maintaining high flop rates, polynomial smoothers achieve parallel scalable multigrid convergence rates without sacrificing flop rates. We show that, although parallel computers are the main motivation, polynomial smoothers are often surprisingly competitive with Gauss-Seidel smoothers on serial machines.
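
    A Chebyshev smoother of the kind advocated here needs nothing but matrix-vector products, which is what makes it attractive in parallel. This sketch assumes an SPD matrix and an available estimate of the largest eigenvalue (in practice obtained from a few power or Lanczos iterations); targeting the upper part of the spectrum via spectrum_fraction is a conventional assumption.

        # Sketch: Chebyshev polynomial smoothing sweeps for A x = b, A SPD.
        import numpy as np

        def chebyshev_smooth(A, b, x, lam_max, n_sweeps=3, spectrum_fraction=30.0):
            lam_min = lam_max / spectrum_fraction  # leave the smooth modes
            theta = 0.5 * (lam_max + lam_min)      # to the coarse grid
            delta = 0.5 * (lam_max - lam_min)
            sigma = theta / delta
            rho = 1.0 / sigma
            r = b - A @ x
            d = r / theta
            for _ in range(n_sweeps):
                x = x + d
                r = r - A @ d
                rho_new = 1.0 / (2.0 * sigma - rho)
                d = rho_new * rho * d + (2.0 * rho_new / delta) * r
                rho = rho_new
            return x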

  3. Smoothness without smoothing: why Gaussian naive Bayes is not naive for multi-subject searchlight studies.

    Directory of Open Access Journals (Sweden)

    Rajeev D S Raizada

    Full Text Available Spatial smoothness is helpful when averaging fMRI signals across multiple subjects, as it allows different subjects' corresponding brain areas to be pooled together even if they are slightly misaligned. However, smoothing is usually not applied when performing multivoxel pattern-based analyses (MVPA), as it runs the risk of blurring away the information that fine-grained spatial patterns contain. It would therefore be desirable, if possible, to carry out pattern-based analyses which take unsmoothed data as their input but which produce smooth images as output. We show here that the Gaussian Naive Bayes (GNB) classifier does precisely this when it is used in "searchlight" pattern-based analyses. We explain why this occurs, and illustrate the effect in real fMRI data. Moreover, we show that analyses using GNBs produce results at the multi-subject level which are statistically robust, neurally plausible, and which replicate across two independent data sets. By contrast, SVM classifiers applied to the same data do not generate a replication, even if the SVM-derived searchlight maps have smoothing applied to them. An additional advantage of GNB classifiers for searchlight analyses is that they are orders of magnitude faster to compute than more complex alternatives such as SVMs. Collectively, these results suggest that Gaussian Naive Bayes classifiers may be a highly non-naive choice for multi-subject pattern-based fMRI studies.

  4. Uniform flow in smooth circular channels. Part I: adaptation and validation of the Kazemipour method

    Directory of Open Access Journals (Sweden)

    Maurício C. Goldfarb

    2004-12-01

    Full Text Available Considering the von Karman-Prandtl equation for pressurized tubes, Kazemipour & Apelt (1980) developed a methodology for flow calculation in smooth circular channels, known as the Kazemipour method. In spite of good results, the Kazemipour method requires graphic tools in its application, which makes solution through computational methods, and comparison with other existing methodologies, difficult. In this work, the results of the analytic investigation that validates the Kazemipour method are shown, together with an adjustment following the procedure proposed by Silva & Figueiredo (1993), performed in such a way as to make the procedure fully expressible by equations, without the need for graphic tools. The result obtained is satisfactory and its use is demonstrated in an example of practical application.

  5. Smoothing and projecting age-specific probabilities of death by TOPALS

    Directory of Open Access Journals (Sweden)

    Joop de Beer

    2012-10-01

    Full Text Available BACKGROUND TOPALS is a new relational model for smoothing and projecting age schedules. The model is operationally simple, flexible, and transparent. OBJECTIVE This article demonstrates how TOPALS can be used for both smoothing and projecting age-specific mortality for 26 European countries and compares the results of TOPALS with those of other smoothing and projection methods. METHODS TOPALS uses a linear spline to describe the ratios between the age-specific death probabilities of a given country and a standard age schedule. For smoothing purposes I use the average of death probabilities over 15 Western European countries as standard, whereas for projection purposes I use an age schedule of 'best practice' mortality. A partial adjustment model projects how quickly the death probabilities move in the direction of the best-practice level of mortality. RESULTS On average, TOPALS performs better than the Heligman-Pollard model and the Brass relational method in smoothing mortality age schedules. TOPALS can produce projections that are similar to those of the Lee-Carter method, but can easily be used to produce alternative scenarios as well. This article presents three projections of life expectancy at birth for the year 2060 for 26 European countries. The Baseline scenario assumes a continuation of the past trend in each country, the Convergence scenario assumes that there is a common trend across European countries, and the Acceleration scenario assumes that the future decline of death probabilities will exceed that in the past. The Baseline scenario projects that average European life expectancy at birth will increase to 80 years for men and 87 years for women in 2060, whereas the Acceleration scenario projects an increase to 90 and 93 years respectively. CONCLUSIONS TOPALS is a useful new tool for demographers for both smoothing age schedules and making scenarios.
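
    The core of TOPALS -- a linear spline describing the ratio between observed death probabilities and a standard schedule -- can be sketched as a small least-squares problem; the knot positions, the use of a plain (rather than, e.g., log) ratio, and an age range of 0-100 are assumptions for illustration.

        # Sketch: TOPALS-style relational smoothing of a mortality schedule.
        import numpy as np

        def topals_smooth(q_obs, q_std, ages, knots=(0, 1, 10, 20, 40, 70, 100)):
            ages = np.asarray(ages, dtype=float)
            knots = np.asarray(knots, dtype=float)
            # Linear B-spline ("hat function") basis over the knots.
            B = np.empty((ages.size, knots.size))
            for j, k in enumerate(knots):
                lo = knots[j - 1] if j > 0 else k - 1.0
                hi = knots[j + 1] if j < knots.size - 1 else k + 1.0
                B[:, j] = np.clip(np.minimum((ages - lo) / (k - lo),
                                             (hi - ages) / (hi - k)), 0.0, None)
            coef, *_ = np.linalg.lstsq(B, q_obs / q_std, rcond=None)
            return q_std * (B @ coef)   # smoothed schedule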

  6. Smooth function approximation using neural networks.

    Science.gov (United States)

    Ferrari, Silvia; Stengel, Robert F

    2005-01-01

    An algebraic approach for representing multidimensional nonlinear functions by feedforward neural networks is presented. In this paper, the approach is implemented for the approximation of smooth batch data containing the function's input, output, and possibly, gradient information. The training set is associated to the network adjustable parameters by nonlinear weight equations. The cascade structure of these equations reveals that they can be treated as sets of linear systems. Hence, the training process and the network approximation properties can be investigated via linear algebra. Four algorithms are developed to achieve exact or approximate matching of input-output and/or gradient-based training sets. Their application to the design of forward and feedback neurocontrollers shows that algebraic training is characterized by faster execution speeds and better generalization properties than contemporary optimization techniques.

  7. Smooth driving of Moessbauer electromechanical transducers

    Energy Technology Data Exchange (ETDEWEB)

    Veiga, A., E-mail: veiga@fisica.unlp.edu.ar; Mayosky, M. A. [Universidad Nacional de La Plata, Facultad de Ingenieria (Argentina); Martinez, N.; Mendoza Zelis, P.; Pasquevich, G. A.; Sanchez, F. H. [Instituto de Fisica La Plata, CONICET (Argentina)

    2011-11-15

    The quality of Moessbauer spectra is strongly related to the performance of the source velocity modulator. Traditional electromechanical driving techniques demand hard-edged square or triangular velocity waveforms that introduce long settling times and require careful driver tuning. For this work, the behavior of commercial velocity transducers and drive units was studied under different working conditions. Different velocity reference waveforms were tested with constant-acceleration, constant-velocity, and programmable-velocity techniques. Significant improvement in spectrometer efficiency and accuracy was achieved by replacing hard triangular and square edges with continuous, smooth-shaped transitions. A criterion for best waveform selection and synchronization is presented, and the attainable enhancements are evaluated. In order to fully exploit this driving technique, a compact microprocessor-based architecture is proposed and a suitable data acquisition system implementation is presented. System linearity and efficiency characterizations are also shown.

  8. Smooth muscle cell phenotypic switching in stroke.

    Science.gov (United States)

    Poittevin, Marine; Lozeron, Pierre; Hilal, Rose; Levy, Bernard I; Merkulova-Rainon, Tatiana; Kubis, Nathalie

    2014-06-01

    Disruption of cerebral blood flow after stroke induces cerebral tissue injury through multiple mechanisms that are not yet fully understood. Smooth muscle cells (SMCs) in blood vessel walls play a key role in cerebral blood flow control. Cerebral ischemia triggers these cells to switch to a phenotype that will be either detrimental or beneficial to brain repair. Moreover, SMC can be primarily affected genetically or by toxic metabolic molecules. After stroke, this pathological phenotype has an impact on the incidence, pattern, severity, and outcome of the cerebral ischemic disease. Although little research has been conducted on the pathological role and molecular mechanisms of SMC in cerebrovascular ischemic diseases, some therapeutic targets have already been identified and could be considered for further pharmacological development. We examine these different aspects in this review.

  9. Smoothed Particle Hydrodynamics Coupled with Radiation Transfer

    Science.gov (United States)

    Susa, Hajime

    2006-04-01

    We have constructed a brand-new radiation hydrodynamics solver based upon Smoothed Particle Hydrodynamics, which works on a parallel computer system. The code is designed to investigate the formation and evolution of first-generation objects at z ≳ 10, where the radiative feedback from various sources plays important roles. The code can compute the fractions of the chemical species e, H+, H, H-, H2, and H2+ by fully implicit time integration. It can also deal with multiple sources of ionizing radiation, as well as radiation in the Lyman-Werner band. We compare the results of a few test calculations with the results of one-dimensional simulations and find good agreement. We also evaluate the speedup from parallelization, which is found to be almost ideal as long as the number of sources is comparable to the number of processors.

  10. Viscoplastic augmentation of the smooth cap model

    International Nuclear Information System (INIS)

    Schwer, Leonard E.

    1994-01-01

    The most common numerical viscoplastic implementations are formulations attributed to Perzyna. Although Perzyna-type algorithms are popular, they have several disadvantages relating to the lack of enforcement of the consistency condition in plasticity. The present work adapts a relatively little-known viscoplastic formulation attributed to Duvaut and Lions, generalized to multi-surface plasticity by Simo et al. The attraction of the Duvaut-Lions formulation is its ease of numerical implementation in existing elastoplastic algorithms. The present work provides a motivation for the Duvaut-Lions viscoplastic formulation, a derivation of the algorithm, and a comparison with the Perzyna algorithm. A simple uniaxial strain numerical simulation is used to compare the results of the Duvaut-Lions algorithm, as adapted to the DYNA3D smooth cap model, with results from a Perzyna algorithm adapted by Katona and Muleret to an implicit code. (orig.)

  11. Smoothing internal migration age profiles for comparative research

    Directory of Open Access Journals (Sweden)

    Aude Bernard

    2015-05-01

    Full Text Available Background: Age patterns are a key dimension for comparing migration between countries and over time. Comparative metrics can be reliably computed only if the data capture the underlying age distribution of migration. Model schedules, the prevailing smoothing method, fit a composite exponential function, but are sensitive to function selection and initial parameter setting. Although non-parametric alternatives exist, their performance is yet to be established. Objective: We compare cubic splines and kernel regressions against model schedules by assessing which method provides an accurate representation of the age profile and best performs on metrics for comparing aggregate age patterns. Methods: We use full population microdata for Chile to perform 1,000 Monte-Carlo simulations for nine sample sizes and two spatial scales. We use residual and graphic analysis to assess model performance on the age and intensity at which migration peaks and on the evolution of migration age patterns. Results: Model schedules generate a better fit when (1) the expected distribution of the age profile is known a priori, (2) the pre-determined shape of the model schedule adequately describes the true age distribution, and (3) the component curves and initial parameter values can be correctly set. When any of these conditions is not met, kernel regressions and cubic splines offer more reliable alternatives. Conclusions: Smoothing models should be selected according to research aims, age profile characteristics, and sample size. Kernel regressions and cubic splines enable a precise representation of aggregate migration age profiles for most sample sizes, without requiring parameter setting or imposing a pre-determined distribution, and therefore facilitate objective comparison.
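
    For readers who want to experiment with the two non-parametric alternatives discussed here, the sketch below applies a Gaussian kernel regression and a cubic smoothing spline to a synthetic age profile. The profile shape, noise level, and smoothing parameters are illustrative assumptions, not the paper's Chilean data.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
ages = np.arange(0, 81, dtype=float)
# stylized age profile: childhood decline plus a labor-force peak near age 22
truth = 0.03 * np.exp(-0.08 * ages) + 0.08 * np.exp(-0.5 * ((ages - 22) / 6.0) ** 2)
observed = truth + rng.normal(0, 0.005, ages.size)

def kernel_regression(grid, x, y, bandwidth=2.0):
    """Nadaraya-Watson estimator with a Gaussian kernel."""
    w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / bandwidth) ** 2)
    return (w @ y) / w.sum(axis=1)

smooth_kernel = kernel_regression(ages, ages, observed)
# s ~ n * noise variance is a reasonable starting point for the spline penalty
smooth_spline = UnivariateSpline(ages, observed, k=3, s=ages.size * 0.005 ** 2)(ages)
```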

  12. Seamless Heterogeneous 3D Tessellation via DWT Domain Smoothing and Mosaicking

    Directory of Open Access Journals (Sweden)

    Gilles Gesquière

    2010-01-01

    Full Text Available With today's geobrowsers, tessellations are far from being smooth for a variety of reasons, the principal ones being lighting differences and resolution heterogeneity. Whilst the former has been extensively dealt with in the literature through classic mosaicking techniques, the latter has received little attention. We focus on this latter aspect and present two DWT domain methods to seamlessly stitch tiles of heterogeneous resolutions. The first method is local in that each of the tiles that constitute the view is subjected to one of the three context-based smoothing functions proposed for horizontal, vertical, and radial smoothing, depending on its localization in the tessellation. These functions are applied at the DWT subband level and followed by an inverse DWT to give a smoothed tile. In the second method, though we assume the same tessellation scenario, the view field is thought of as a sliding window which may contain parts of the tiles from the heterogeneous tessellation. The window is refined in the DWT domain through mosaicking and smoothing followed by a global inverse DWT. Rather than in the traditional sense, the mosaicking employed here targets the heterogeneous resolution. Perceptually, this second method has shown better results than the first one. The methods have been successfully applied to practical examples of both the texture and its corresponding DEM for seamless 3D terrain visualization.

  13. Bifurcation theory for finitely smooth planar autonomous differential systems

    Science.gov (United States)

    Han, Maoan; Sheng, Lijuan; Zhang, Xiang

    2018-03-01

    In this paper we establish a bifurcation theory of limit cycles for planar Ck smooth autonomous differential systems, with k ∈ N. The key point is to study the smoothness of bifurcation functions, which are a basic and important tool in the study of Hopf bifurcation at a fine focus or a center, and of Poincaré bifurcation in a period annulus. We especially study the smoothness of the first order Melnikov function in degenerate Hopf bifurcation at an elementary center. As is known, the smoothness problem was solved for analytic and C∞ differential systems, but it had not been tackled for finitely smooth differential systems. Here, we present the optimal regularity of these bifurcation functions and their asymptotic expressions in the finitely smooth case.

  14. Impact of spectral smoothing on gamma radiation portal alarm probabilities

    International Nuclear Information System (INIS)

    Burr, T.; Hamada, M.; Hengartner, N.

    2011-01-01

    Gamma detector counts are included in radiation portal monitors (RPM) to screen for illicit nuclear material. Gamma counts are sometimes smoothed to reduce variance in the estimated underlying true mean count rate, which is the 'signal' in our context. Smoothing reduces total error variance in the estimated signal if the bias that smoothing introduces is more than offset by the variance reduction. An empirical RPM study for vehicle screening applications is presented for unsmoothed and smoothed gamma counts in low-resolution plastic scintillator detectors and in medium-resolution NaI detectors. - Highlights: → We evaluate options for smoothing counts from gamma detectors deployed for portal monitoring. → A new multiplicative bias correction (MBC) is shown to reduce bias in peak and valley regions. → Performance is measured using mean squared error and detection probabilities for sources. → Smoothing with the MBC improves detection probabilities and the mean squared error.
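
    The bias-variance tradeoff described above can be reproduced with a toy experiment: smoothing replicated Poisson spectra with a moving average lowers the total mean squared error away from the peak while biasing the peak itself. This generic sketch does not reproduce the paper's multiplicative bias correction, whose details are given in the article.

```python
import numpy as np

rng = np.random.default_rng(0)
channels = np.arange(256)
truth = 50 + 400 * np.exp(-0.5 * ((channels - 120) / 4.0) ** 2)  # peak on a flat background
counts = rng.poisson(truth, size=(1000, truth.size))             # replicated noisy spectra

kernel = np.ones(7) / 7.0                                        # simple moving average
smoothed = np.apply_along_axis(np.convolve, 1, counts, kernel, "same")

print("MSE raw:     ", np.mean((counts - truth) ** 2))
print("MSE smoothed:", np.mean((smoothed - truth) ** 2))         # lower overall...
peak = slice(115, 126)
print("MSE at peak: ", np.mean((smoothed[:, peak] - truth[peak]) ** 2))  # ...but biased here
```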

  15. Spatial smoothing coherence factor for ultrasound computed tomography

    Science.gov (United States)

    Lou, Cuijuan; Xu, Mengling; Ding, Mingyue; Yuchi, Ming

    2016-04-01

    In recent years, many research studies have been carried out on ultrasound computed tomography (USCT) for its application prospects in the early diagnosis of breast cancer. This paper applies four kinds of coherence-factor-like beamforming methods to improve the image quality of the synthetic aperture focusing method for USCT: the coherence factor (CF), the phase coherence factor (PCF), the sign coherence factor (SCF) and the spatial smoothing coherence factor (SSCF) (proposed in our previous work). The performance of these methods was tested with simulated raw data generated by the ultrasound simulation software PZFlex 2014. The simulated phantom was set to be water of 4 cm diameter with three nylon objects of different diameters inside. The ring-type transducer had 72 elements with a center frequency of 1 MHz. The results show that all the methods can reveal the biggest nylon circle, with a radius of 2.5 mm. SSCF achieves the highest SNR among the proposed methods and provides a more homogeneous background. None of these methods can reveal the two smaller nylon circles with radii of 0.75 mm and 0.25 mm. This may be due to the small number of elements.
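
    A minimal sketch of the basic coherence factor weighting applied to channel data that have already been delay-aligned; the SSCF variant additionally averages over spatially smoothed subapertures, which is not reproduced here.

```python
import numpy as np

def cf_beamform(aligned):
    """Coherence-factor-weighted delay-and-sum.

    aligned: array (n_elements, n_pixels) of channel data after focusing delays."""
    n = aligned.shape[0]
    das = aligned.sum(axis=0)                         # coherent sum
    incoherent = (np.abs(aligned) ** 2).sum(axis=0)   # incoherent channel energy
    cf = np.where(incoherent > 0, np.abs(das) ** 2 / (n * incoherent), 0.0)
    return cf * das                                   # CF-weighted output
```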

  16. Compare diagnostic tests using transformation-invariant smoothed ROC curves⋆

    Science.gov (United States)

    Tang, Liansheng; Du, Pang; Wu, Chengqing

    2012-01-01

    The receiver operating characteristic (ROC) curve, which plots the true positive rate against the false positive rate as the threshold varies, is an important tool for evaluating biomarkers in diagnostic medicine studies. By definition, the ROC curve is monotone increasing from 0 to 1 and is invariant to any monotone transformation of test results. It is also often a curve with a certain level of smoothness when test results from the diseased and non-diseased subjects follow continuous distributions. Most existing ROC curve estimation methods do not guarantee all of these properties. One of the exceptions is Du and Tang (2009), which applies a monotone spline regression procedure to empirical ROC estimates. However, their method does not account for the inherent correlations between empirical ROC estimates. This makes the derivation of the asymptotic properties very difficult. In this paper we propose a penalized weighted least squares estimation method, which incorporates the covariance between empirical ROC estimates as a weight matrix. The resulting estimator satisfies all the aforementioned properties, and we show that it is also consistent. A resampling approach is then used to extend our method to comparisons of two or more diagnostic tests. Our simulations show a significantly improved performance over the existing method, especially for steep ROC curves. We then apply the proposed method to a cancer diagnostic study that compares several newly developed diagnostic biomarkers to a traditional one. PMID:22639484
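
    The monotonicity property discussed above can be enforced on empirical ROC estimates with off-the-shelf tools. The sketch below uses isotonic regression as a simple monotone smoother; it is not the authors' penalized weighted least-squares estimator, and it yields a piecewise-constant rather than smooth curve.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def empirical_roc(diseased, healthy, n_grid=101):
    """Empirical (FPR, TPR) pairs on a fixed FPR grid."""
    fpr = np.linspace(0.0, 1.0, n_grid)
    thresholds = np.quantile(healthy, 1.0 - fpr)
    tpr = np.array([(diseased >= t).mean() for t in thresholds])
    return fpr, tpr

rng = np.random.default_rng(0)
fpr, tpr = empirical_roc(rng.normal(1.2, 1, 80), rng.normal(0, 1, 120))
# project the noisy empirical estimates onto a monotone curve in [0, 1]
tpr_mono = IsotonicRegression(y_min=0.0, y_max=1.0).fit_transform(fpr, tpr)
```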

  17. Six-term exact sequences for smooth generalized crossed products

    DEFF Research Database (Denmark)

    Gabriel, Olivier; Grensing, Martin

    2013-01-01

    We define smooth generalized crossed products and prove six-term exact sequences of Pimsner–Voiculescu type. This sequence may, in particular, be applied to smooth subalgebras of the quantum Heisenberg manifolds in order to compute the generators of their cyclic cohomology. Further, our results include the known results for smooth crossed products. Our proof is based on a combination of arguments from the setting of (Cuntz–)Pimsner algebras and the Toeplitz proof of Bott periodicity.

  18. Star Products with Separation of Variables Admitting a Smooth Extension

    Science.gov (United States)

    Karabegov, Alexander

    2012-08-01

    Given a complex manifold M with an open dense subset Ω endowed with a pseudo-Kähler form ω which cannot be smoothly extended to a larger open subset, we consider various examples where the corresponding Kähler-Poisson structure and a star product with separation of variables on (Ω, ω) admit smooth extensions to M. We give a simple criterion of the existence of a smooth extension of a star product and apply it to these examples.

  19. Star products with separation of variables admitting a smooth extension

    OpenAIRE

    Karabegov, Alexander

    2010-01-01

    Given a complex manifold $M$ with an open dense subset $\Omega$ endowed with a pseudo-Kaehler form $\omega$ which cannot be smoothly extended to a larger open subset, we consider various examples where the corresponding Kaehler-Poisson structure and a star product with separation of variables on $(\Omega, \omega)$ admit smooth extensions to $M$. We suggest a simple criterion of the existence of a smooth extension of a star product and apply it to these examples.

  20. Fast compact algorithms and software for spline smoothing

    CERN Document Server

    Weinert, Howard L

    2012-01-01

    Fast Compact Algorithms and Software for Spline Smoothing investigates algorithmic alternatives for computing cubic smoothing splines when the amount of smoothing is determined automatically by minimizing the generalized cross-validation score. These algorithms are based on Cholesky factorization, QR factorization, or the fast Fourier transform. All algorithms are implemented in MATLAB and are compared based on speed, memory use, and accuracy. An overall best algorithm is identified, which allows very large data sets to be processed quickly on a personal computer.
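
    A compact way to see GCV-driven smoothing in action is the discrete (Whittaker) analogue of the cubic smoothing spline, where the hat matrix is small enough to form explicitly. This dense O(n³) sketch is only illustrative; the book's algorithms achieve the same automatic selection with fast Cholesky, QR, or FFT computations.

```python
import numpy as np

def whittaker_gcv(y, lambdas=np.logspace(-2, 6, 50)):
    """Discrete smoothing-spline analogue with the smoothing parameter
    chosen by minimizing the generalized cross-validation score.
    Dense hat matrix: suitable only for modest n."""
    n = len(y)
    d = np.diff(np.eye(n), n=2, axis=0)          # second-difference operator (n-2, n)
    penalty = d.T @ d
    best = (np.inf, None, None)
    for lam in lambdas:
        hat = np.linalg.solve(np.eye(n) + lam * penalty, np.eye(n))
        z = hat @ y
        rss = np.sum((y - z) ** 2)
        gcv = n * rss / (n - np.trace(hat)) ** 2
        if gcv < best[0]:
            best = (gcv, lam, z)
    return best[2], best[1]                      # smoothed signal, chosen lambda
```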

  1. Inherited neurovascular diseases affecting cerebral blood vessels and smooth muscle.

    Science.gov (United States)

    Sam, Christine; Li, Fei-Feng; Liu, Shu-Lin

    2015-10-01

    Neurovascular diseases are among the leading causes of mortality and permanent disability due to stroke, aneurysm, and other cardiovascular complications. Cerebral autosomal-dominant arteriopathy with subcortical infarcts and leukoencephalopathy (CADASIL) and Marfan syndrome are two neurovascular disorders that affect smooth muscle cells, through accumulation of granular osmiophilic material and defective elastic fiber formation, respectively. Moyamoya disease, hereditary hemorrhagic telangiectasia (HHT), microcephalic osteodysplastic primordial dwarfism type II (MOPD II), and Fabry's disease are disorders that affect the endothelial cells of blood vessels through occlusion or abnormal development. While much research has been done on mapping out mutations in these diseases, the exact mechanisms are still largely unknown. This paper briefly introduces the pathogenesis, genetics, clinical symptoms, and current methods of treatment of these diseases in the hope that it can help us better understand their mechanisms and work on ways to develop better diagnosis and treatment.

  2. Numerical modelling of extreme waves by Smoothed Particle Hydrodynamics

    Directory of Open Access Journals (Sweden)

    M. H. Dao

    2011-02-01

    Full Text Available The impact of extreme/rogue waves can lead to serious damage of vessels as well as marine and coastal structures. Such extreme waves in deep water are characterized by steep wave fronts and an energetic wave crest. The process of wave breaking is highly complex and, apart from the general knowledge that impact loadings are highly impulsive, the dynamics of the breaking and impact are still poorly understood. Using an advanced numerical method, Smoothed Particle Hydrodynamics enhanced with parallel computing, it is possible to reproduce well the extreme waves and their breaking process. Once the waves and their breaking process are modelled successfully, the dynamics of the breaking and the characteristics of their impact on offshore structures can be studied. The computational methodology and numerical results are presented in this paper.

  3. Smooth conditional distribution function and quantiles under random censorship.

    Science.gov (United States)

    Leconte, Eve; Poiraud-Casanova, Sandrine; Thomas-Agnan, Christine

    2002-09-01

    We consider a nonparametric random design regression model in which the response variable is possibly right censored. The aim of this paper is to estimate the conditional distribution function and the conditional alpha-quantile of the response variable. We restrict attention to the case where the response variable and the explanatory variable are unidimensional and continuous. We propose and discuss two classes of estimators which are smooth with respect to the response variable as well as to the covariate. Simulations demonstrate that the new methods have better mean squared error performance than the generalized Kaplan-Meier estimator introduced by Beran (1981) and considered in the literature by Dabrowska (1989, 1992) and Gonzalez-Manteiga and Cadarso-Suarez (1994).

  4. Modeling and control of three phase rectifier with electronic smoothing inductor

    DEFF Research Database (Denmark)

    Singh, Yash Veer; Rasmussen, Peter Omand; Andersen, Torben Ole

    2011-01-01

    This paper presents a simple, direct method for deriving the approximate, small-signal, average model and control strategy for a three-phase diode bridge rectifier operating with an electronic smoothing technique. The electronic smoothing inductor (ESI) performs the function of an inductor that has controlled variable impedance. This increases the power factor (PF) and reduces total harmonic distortion (THD) in the mains current. The ESI-based rectifier enables a compact and cost-effective design of a three-phase electric drive, as the size of passive components is reduced significantly. In order to carry out

  5. Influence of smoothing of X-ray spectra on parameters of calibration model

    International Nuclear Information System (INIS)

    Antoniak, W.; Urbanski, P.; Kowalska, E.

    1998-01-01

    The parameters of the calibration model before and after smoothing of X-ray spectra have been investigated. The calibration model was calculated using a multivariate procedure, namely partial least squares regression (PLS). Investigations were performed on six sets of various standards used for calibration of instruments based on the X-ray fluorescence principle. Three smoothing methods were compared: regression splines, Savitzky-Golay filtering, and the discrete Fourier transform. The calculations were performed using the software package MATLAB and some home-made programs. (author)
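
    Of the three smoothing methods compared, Savitzky-Golay filtering is the easiest to try directly, for instance on a synthetic fluorescence peak; the window length and polynomial order below are illustrative choices, not those of the paper.

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
energy = np.arange(512)
truth = 20 + 300 * np.exp(-0.5 * ((energy - 250) / 8.0) ** 2)  # fluorescence-like peak
noisy = rng.poisson(truth).astype(float)

# window length must be odd and larger than the polynomial order
smoothed = savgol_filter(noisy, window_length=15, polyorder=3)
```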

  6. Convergence theorems for a class of nonlinear maps in uniformly smooth Banach spaces

    International Nuclear Information System (INIS)

    Chidume, C.E.; Osilike, M.O.

    1992-05-01

    Let K be a nonempty closed and convex subset of a real uniformly smooth Banach space, E, with modulus of smoothness of power type q>1. Let T be a mapping of K into itself, T is an element of C (in the notion of Browder and Petryshyn; and Rhoades). It is proved that the Mann iteration process, under suitable conditions, converges strongly to the unique fixed point of T. If K is also bounded, then the Ishikawa iteration process converges to the fixed point of T. While our theorems generalize important known results, our method is also of independent interest. (author). 14 refs

  7. The force recovery following repeated quick releases applied to pig urinary bladder smooth muscle

    NARCIS (Netherlands)

    R. van Mastrigt (Ron)

    1991-01-01

    textabstractA method for measuring several quick-releases during one contraction of a pig urinary bladder smooth muscle preparation was developed. The force recovery following quick release in this muscle type was studied by fitting a multiexponential model to 926 responses measured during the first

  8. On using smoothing spline and residual correction to fuse rain gauge observations and remote sensing data

    Science.gov (United States)

    Huang, Chengcheng; Zheng, Xiaogu; Tait, Andrew; Dai, Yongjiu; Yang, Chi; Chen, Zhuoqi; Li, Tao; Wang, Zhonglei

    2014-01-01

    A partial thin-plate smoothing spline model is used to construct the trend surface. Correction of the spline-estimated trend surface is often necessary in practice. The Cressman weight is modified and applied in residual correction. The modified Cressman weight performs better than the original Cressman weight. A method for estimating the error covariance matrix of the gridded field is provided.
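
    A sketch of the two-stage idea: fit a thin-plate spline trend to station data, then add distance-weighted residuals using the classic Cressman weight. The paper's partial spline (with covariates) and its modified weight are not reproduced, and the influence radius is an assumption.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def fuse(obs_xy, obs_val, grid_xy, radius=50.0):
    """Thin-plate-spline trend plus Cressman-weighted residual correction."""
    trend = RBFInterpolator(obs_xy, obs_val,
                            kernel="thin_plate_spline", smoothing=1.0)
    resid = obs_val - trend(obs_xy)                 # gauge-minus-trend residuals
    d = np.linalg.norm(grid_xy[:, None, :] - obs_xy[None, :, :], axis=-1)
    w = np.clip((radius**2 - d**2) / (radius**2 + d**2), 0.0, None)
    wsum = w.sum(axis=1)
    safe = np.where(wsum > 0, wsum, 1.0)
    correction = np.where(wsum > 0, (w @ resid) / safe, 0.0)
    return trend(grid_xy) + correction
```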

  9. Expectile smoothing: new perspectives on asymmetric least squares. An application to life expectancy

    NARCIS (Netherlands)

    Schnabel, S.K.

    2011-01-01

    While initially motivated by a demographic application, this thesis develops methodology for expectile estimation. To this end, the basic model for expectile curves using least asymmetrically weighted squares (LAWS) is first introduced, as well as methods for smoothing in this context. The simple
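
    The LAWS idea is easy to state for a single τ-expectile: iterate weighted least squares with asymmetric weights until the estimate stabilizes. A minimal sketch follows; the thesis extends this to penalized smooth curves.

```python
import numpy as np

def expectile(y, tau=0.5, tol=1e-8, max_iter=100):
    """Scalar tau-expectile via iterated asymmetrically weighted least squares."""
    mu = y.mean()
    for _ in range(max_iter):
        w = np.where(y > mu, tau, 1.0 - tau)   # asymmetric weights
        mu_new = np.sum(w * y) / np.sum(w)     # weighted least-squares solution
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu
```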

  10. Simulating Magnetized Laboratory Plasmas with Smoothed Particle Hydrodynamics

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Jeffrey N. [Univ. of California, Davis, CA (United States)

    2009-01-01

    The creation of plasmas in the laboratory continues to generate excitement in the physics community. Despite the best efforts of the intrepid plasma diagnostics community, the dynamics of these plasmas remains a difficult challenge to both the theorist and the experimentalist. This dissertation describes the simulation of strongly magnetized laboratory plasmas with Smoothed Particle Hydrodynamics (SPH), a method born of astrophysics but gaining broad support in the engineering community. We describe the mathematical formulation that best characterizes a strongly magnetized plasma under our circumstances of interest, and we review the SPH method and its application to astrophysical plasmas based on research by Phillips [1], Børve [2], and Price and Monaghan [3]. Some modifications and extensions to this method are necessary to simulate terrestrial plasmas, such as a treatment of magnetic diffusion based on work by Brookshaw [4] and by Atluri [5]; we describe these changes as we turn our attention toward laboratory experiments. Test problems that verify the method are provided throughout the discussion. Finally, we apply our method to the compression of a magnetized plasma performed by the Compact Toroid Injection eXperiment (CTIX) [6] and show that the experimental results support our computed predictions.

  11. A new smoothing procedure to reduce delivery segments for static MLC-based IMRT planning

    International Nuclear Information System (INIS)

    Sun Xuepeng; Xia Ping

    2004-01-01

    In the application of pixel-based intensity-modulated radiation therapy (IMRT) using the step-and-shoot delivery method, one major difficulty is the prolonged delivery time. In this study, we present an integrated IMRT planning system that involves a simple smoothing method to reduce the complexity of the beam profiles. The system consists of three main steps: (a) an inverse planning process based on a least-square dose-based cost function; (b) smoothing of the intensity maps; (c) reoptimization of the segment weights. Step (a) obtains the best plan with the lowest cost value using a simulated annealing optimization algorithm with discrete intensity levels. Step (b) takes the intensity maps obtained from (a) and reduces the complexity of the maps by smoothing the adjacent beamlet intensities. During this process each beamlet is assigned a structure index based on anatomical information. A smoothing update is applied to average adjacent beamlets with the same index. To control the quality of the plan, a predefined clinical protocol is used as an acceptance criterion. The smoothing updates that violate the criterion are rejected. After the smoothing process, the segment weights are reoptimized in step (c) to further improve the plan quality. Three clinical cases were studied using this system: a medulloblastoma, a prostate cancer, and an oropharyngeal carcinoma. While the final plans demonstrate a degradation of the original plan quality, they still meet the plan acceptance criterion. On the other hand, the segment numbers or delivery times are reduced by 40%, 20%, and 20% for the three cases, respectively
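
    A toy version of the anatomy-aware smoothing update in step (b): each beamlet is averaged only with neighbours that share its structure index. The acceptance test against the clinical protocol and the segment-weight reoptimization of step (c) are omitted here.

```python
import numpy as np

def smooth_intensity_map(intensity, structure_index):
    """One smoothing pass: average each beamlet with the 4-neighbours that
    carry the same structure index (anatomy-aware smoothing)."""
    out = intensity.copy()
    rows, cols = intensity.shape
    for i in range(rows):
        for j in range(cols):
            vals = [intensity[i, j]]
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if (0 <= ni < rows and 0 <= nj < cols
                        and structure_index[ni, nj] == structure_index[i, j]):
                    vals.append(intensity[ni, nj])
            out[i, j] = np.mean(vals)
    return out
```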

  12. Nuclear fusion-independent smooth muscle differentiation of human adipose-derived stem cells induced by a smooth muscle environment.

    Science.gov (United States)

    Zhang, Rong; Jack, Gregory S; Rao, Nagesh; Zuk, Patricia; Ignarro, Louis J; Wu, Benjamin; Rodríguez, Larissa V

    2012-03-01

    Human adipose-derived stem cells (hASCs) have been isolated and shown to have multilineage differentiation capacity. Although both plasticity and cell fusion have been suggested as mechanisms for cell differentiation in vivo, the effect of the local in vivo environment on the differentiation of adipose-derived stem cells has not been evaluated. We previously reported the in vitro capacity of these cells for smooth muscle differentiation. In this study, we evaluate the effect of an in vivo smooth muscle environment on the differentiation of hASCs. We studied this with two experimental designs: (a) in vivo evaluation of smooth muscle differentiation of hASCs injected into a smooth muscle environment and (b) in vitro evaluation of the smooth muscle differentiation capacity of hASCs exposed to bladder smooth muscle cells. Our results indicate a time-dependent differentiation of hASCs into mature smooth muscle cells when these cells are injected into the smooth musculature of the urinary bladder. Similar findings were seen when the cells were cocultured in vitro with primary bladder smooth muscle cells. Chromosomal analysis demonstrated that microenvironment cues, rather than nuclear fusion, are responsible for this differentiation. We conclude that cell plasticity is present in hASCs and that their differentiation is accomplished in the absence of nuclear fusion. Copyright © 2011 AlphaMed Press.

  13. Smoothing technology of gamma-ray spectrometry data based on matched filtering

    International Nuclear Information System (INIS)

    Gu Min; Ge Liangquan

    2009-01-01

    Traditional methods for smoothing gamma-ray spectrometry data easily give rise to aberration of the spectral curves. This article improves the sliding convolution transformation using the idea of matched filtering: a Gaussian plus an exponential function, instead of a pure Gaussian, is used as the converting function. The improved method not only strongly suppresses statistical fluctuation but also preserves the features of the spectral curves. An example verifies the superiority of the new method. (authors)
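
    A rough sketch of the kernel idea: convolve the spectrum with a Gaussian-plus-exponential converting function instead of a pure Gaussian. The mixture weight and widths below are arbitrary illustrations; the article's actual kernel parameters are not reproduced.

```python
import numpy as np

def gauss_exp_kernel(half_width=20, sigma=3.0, tail=6.0, mix=0.3):
    """Gaussian-plus-exponential convolution kernel, normalized to unit area.
    All shape parameters are illustrative assumptions."""
    x = np.arange(-half_width, half_width + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2) + mix * np.exp(-np.abs(x) / tail)
    return k / k.sum()

def smooth_spectrum(counts, kernel):
    return np.convolve(counts, kernel, mode="same")
```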

  14. ASIC proteins regulate smooth muscle cell migration.

    Science.gov (United States)

    Grifoni, Samira C; Jernigan, Nikki L; Hamilton, Gina; Drummond, Heather A

    2008-03-01

    The purpose of the present study was to investigate Acid Sensing Ion Channel (ASIC) protein expression and its importance in cellular migration. We recently demonstrated that Epithelial Na(+) Channel (ENaC) proteins are required for vascular smooth muscle cell (VSMC) migration; however, the role of the closely related ASIC proteins has not been addressed. We used RT-PCR and immunolabeling to determine expression of ASIC1, ASIC2, ASIC3 and ASIC4 in A10 cells. We used small interfering RNA to silence individual ASIC expression and determine the importance of ASIC proteins in wound healing and chemotaxis (PDGF-bb)-initiated migration. We found ASIC1, ASIC2, and ASIC3, but not ASIC4, expression in A10 cells. ASIC1, ASIC2, and ASIC3 siRNA molecules significantly suppressed expression of their respective proteins compared to non-targeting siRNA (RISC) transfected controls, by 63%, 44%, and 55%, respectively. Wound healing was inhibited by 10%, 20%, and 26% compared to RISC controls following suppression of ASIC1, ASIC2, and ASIC3, respectively. Chemotactic migration was inhibited by 30% and 45%, respectively, following suppression of ASIC1 and ASIC3. ASIC2 suppression produced a small, but significant, increase in chemotactic migration (4%). Our data indicate that ASIC expression is required for normal migration and may suggest a novel role for ASIC proteins in cellular migration.

  15. Static and dynamic properties of smoothed dissipative particle dynamics

    Science.gov (United States)

    Alizadehrad, Davod; Fedosov, Dmitry A.

    2018-03-01

    In this paper, static and dynamic properties of the smoothed dissipative particle dynamics (SDPD) method are investigated. We study the effect of method parameters on SDPD fluid properties, such as structure, speed of sound, and transport coefficients, and show that a proper choice of parameters leads to a well-behaved and accurate fluid model. In particular, the speed of sound, the radial distribution function (RDF), shear-thinning of viscosity, the mean-squared displacement (〈R²〉 ∝ t), and the Schmidt number (Sc ∼ O(10³)-O(10⁴)) can be controlled, such that the model exhibits fluid-like behavior for a wide range of temperatures in simulations. Furthermore, in addition to the consideration of fluid density variations for fluid compressibility, a more challenging test of incompressibility is performed by considering the Poisson ratio and the divergence of the velocity field in an elongational flow. Finally, as an example of complex-fluid flow, we present the applicability and validity of the SDPD method, with an appropriate choice of parameters, for the simulation of cellular blood flow in irregular geometries. In conclusion, the results demonstrate that the SDPD method is able to approximate well a nearly incompressible fluid behavior, including hydrodynamic interactions and consistent thermal fluctuations, thereby providing a powerful approach for simulations of complex mesoscopic systems.

  16. Mathematical pattern, smoothing and digital filtering of a speech signal

    International Nuclear Information System (INIS)

    Razzam, Mohamed Habib

    1979-01-01

    After presenting speech synthesis methods characterized either by processing of pre-recorded natural signals or by analog simulation of the vocal tract, we present a new synthesis method based on a mathematical model of the signal, developed from M. Rodet's method. Owing to their physiological origin, these signals are partially or totally voiced, or aleatory. For the voiced parts of a phoneme, we compute the formant curves, the sum of which constitutes the wave, directly in the time domain by applying a specific envelope (acting as an analysis time window) to a sinusoidal wave. The sinusoidal wave computation is made at the beginning of each pseudo-period of the signal. The transition between successive periods is assured by polynomial smoothing followed by digital filtering. For the aleatory parts, we present an aleatory method for computing the formant curves. Each signal is subjected to a melodic diagram computed in accordance with the nature of the phoneme (vowel or consonant) and its context (isolated or not). (author) [fr]
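
    The voiced-part synthesis described here, an envelope applied to a sinusoid restarted at each pseudo-period, can be sketched loosely in the spirit of Rodet's formant-wave-function idea; all frequencies, bandwidths, and amplitudes below are invented for illustration.

```python
import numpy as np

def formant_wave(freq, bandwidth, amp, period_s, fs=16000):
    """One pseudo-period of a formant: a sinusoid under a decaying envelope."""
    t = np.arange(int(period_s * fs)) / fs
    envelope = amp * np.exp(-np.pi * bandwidth * t)   # the time-window envelope
    return envelope * np.sin(2 * np.pi * freq * t)

# a voiced pseudo-period as the sum of three formant waves (illustrative values)
wave = sum(formant_wave(f, b, a, 0.008)
           for f, b, a in [(700, 80, 1.0), (1200, 100, 0.5), (2600, 120, 0.25)])
```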

  17. Hypoxic contraction of cultured pulmonary vascular smooth muscle cells

    International Nuclear Information System (INIS)

    Murray, T.R.; Chen, L.; Marshall, B.E.; Macarak, E.J.

    1990-01-01

    The cellular events involved in generating the hypoxic pulmonary vasoconstriction response are not clearly understood, in part because of the multitude of factors that alter pulmonary vascular tone. The goal of the present studies was to determine if a cell culture preparation containing vascular smooth muscle (VSM) cells could be made to contract when exposed to a hypoxic atmosphere. Cultures containing only fetal bovine pulmonary artery VSM cells were assessed for contractile responses to hypoxic stimuli by two methods. In the first, tension forces generated by cells grown on a flexible growth surface (polymerized polydimethyl siloxane) were manifested as wrinkles and distortions of the surface under the cells. Wrinkling of the surface was noted to progressively increase with time as the culture medium bathing the cells was made hypoxic (PO2 approximately 25 mmHg). The changes were sometimes reversible upon return to normoxic conditions and appeared to be enhanced in cells already exhibiting evidence of some baseline tone. Repeated passage in culture did not diminish the hypoxic response. Evidence for contractile responses to hypoxia was also obtained from measurements of myosin light chain (MLC) phosphorylation. Conversion of MLC to the phosphorylated species is an early step in the activation of smooth muscle contraction. Lowering the PO2 in the culture medium to 59 mmHg caused a 45% increase in the proportion of MLC in the phosphorylated form as determined by two-dimensional gel electrophoresis. Similarly, cultures preincubated for 4 h with 32P and then exposed to normoxia or hypoxia for a 5-min experimental period showed more than twice as much of the label in MLCs of the hypoxic cells

  18. Bandwidth selection in smoothing functions | Kibua | East African ...

    African Journals Online (AJOL)

    ... inexpensive and, hence, worth adopting. We argue that the bandwidth parameter is determined by two factors: the kernel function and the length of the smoothing region. We give an illustrative example of its application using real data. Keywords: Kernel, Smoothing functions, Bandwidth

  19. Three-phase electric drive with modified electronic smoothing inductor

    DEFF Research Database (Denmark)

    Singh, Yash Veer; Rasmussen, Peter Omand; Andersen, Torben Ole

    2010-01-01

    This paper presents a three-phase electric drive with a modified electronic smoothing inductor (MESI) having reduced size of passive components. The classical electronic smoothing inductor (ESI) is able to control a diode bridge output current and also reduce not only mains current harmonics...

  20. Smooth Maps of a Foliated Manifold in a Symplectic Manifold

    Indian Academy of Sciences (India)

    Let M be a smooth manifold with a regular foliation F and ω a 2-form which induces closed forms on the leaves of F in the leaf topology. A smooth map f : (M, F) ⟶ (N, σ) into a symplectic manifold (N, σ) is called a foliated symplectic immersion if f restricts to an immersion on each leaf of the foliation and further, the ...

  1. Classification of smooth structures on a homotopy complex ...

    Indian Academy of Sciences (India)

    Abstract. We classify, up to diffeomorphism, all closed smooth manifolds homeomorphic to the complex projective n-space CPn, where n = 3 and 4. Let M2n be a closed smooth 2n-manifold homotopy equivalent to CPn. We show that, up to diffeomorphism, M6 has a unique differentiable structure and M8 has at most two ...

  2. Classification of smooth structures on a homotopy complex ...

    Indian Academy of Sciences (India)

    We classify, up to diffeomorphism, all closed smooth manifolds homeomorphic to the complex projective n-space CPn, where n = 3 and 4. Let M2n be a closed smooth 2n-manifold homotopy equivalent to CPn. We show that, up to diffeomorphism, M6 has a unique differentiable structure and M8 has at most two ...

  3. Some asymptotic theory for variance function smoothing | Kibua ...

    African Journals Online (AJOL)

    Simple selection of the smoothing parameter is suggested. Both homoscedastic and heteroscedastic regression models are considered. Keywords: Asymptotic, Smoothing, Kernel, Bandwidth, Bias, Variance, Mean squared error, Homoscedastic, Heteroscedastic. East African Journal of Statistics Vol. 1 (1) 2005: pp. 9-22

  4. On smoothed analysis of quicksort and Hoare's find

    NARCIS (Netherlands)

    Fouz, Mahmoud; Kufleitner, Manfred; Manthey, Bodo; Zeini Jahromi, Nima; Ngo, H.Q.

    2009-01-01

    We provide a smoothed analysis of Hoare’s find algorithm and we revisit the smoothed analysis of quicksort. Hoare’s find algorithm – often called quickselect – is an easy-to-implement algorithm for finding the $k$-th smallest element of a sequence. While the worst-case number of comparisons that
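
    For reference, a textbook quickselect, the algorithm whose smoothed number of comparisons is analyzed in the paper (k is 1-based):

```python
import random

def quickselect(seq, k):
    """Return the k-th smallest element of seq (k = 1 gives the minimum)."""
    pivot = random.choice(seq)
    lows = [x for x in seq if x < pivot]
    pivots = [x for x in seq if x == pivot]
    if k <= len(lows):
        return quickselect(lows, k)
    if k <= len(lows) + len(pivots):
        return pivot
    highs = [x for x in seq if x > pivot]
    return quickselect(highs, k - len(lows) - len(pivots))
```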

  5. Investigation of angular and axial smoothing of PET data

    International Nuclear Information System (INIS)

    Daube-Witherspoon, M.E.; Carson, R.E.

    1996-01-01

    Radial filtering of emission and transmission data is routinely performed in PET during reconstruction in order to reduce image noise. Angular smoothing is not typically done, due to the introduction of a non-uniform resolution loss; axial filtering is also not usually performed on data acquired in 2D mode. The goal of this paper was to assess the effects of angular and axial smoothing on noise and resolution. Angular and axial smoothing was incorporated into the reconstruction process on the Scanditronix PC2048-15B brain PET scanner. In-plane spatial resolution and noise reduction were measured for different amounts of radial and angular smoothing. For radial positions away from the center of the scanner, noise reduction and degraded tangential resolution with no loss of radial resolution were seen. Near the center, no resolution loss was observed, but there was also no reduction in noise for angular filters up to a 7 degrees FWHM. These results can be understood by considering the combined effects of smoothing projections across rows (angles) and then summing (backprojecting). Thus, angular smoothing is not optimal due to its anisotropic noise reduction and resolution degradation properties. However, uniform noise reduction comparable to that seen with radial filtering can be achieved with axial smoothing of transmission data. The axial results suggest that combined radial and axial transmission smoothing could lead to improved noise characteristics with more isotropic resolution degradation

  6. A Note on the Definition of a Smooth Curve

    Science.gov (United States)

    Euler, Russell; Sadek, Jawad

    2005-01-01

    In many elementary calculus textbooks in use today, the definition of a "smooth curve" is slightly ambiguous from the students' perspective. Even when smoothness is defined carefully, there is a shortage of relevant exercises that would serve to elaborate on related subtle points which many students may find confusing. In this article, the authors…

  7. Smooth surfaces from bilinear patches: Discrete affine minimal surfaces

    KAUST Repository

    Käferböck, Florian

    2013-06-01

    Motivated by applications in freeform architecture, we study surfaces which are composed of smoothly joined bilinear patches. These surfaces turn out to be discrete versions of negatively curved affine minimal surfaces and share many properties with their classical smooth counterparts. We present computational design approaches and study special cases which should be interesting for the architectural application. 2013 Elsevier B.V.

  8. Dynamics of wetting on smooth and rough surfaces.

    NARCIS (Netherlands)

    Cazabat, A.M.; Cohen Stuart, M.A.

    1987-01-01

    The rate of spreading of non-volatile liquids on smooth and on rough surfaces was investigated. The radius of the wetted spot was found to agree with recently proposed scaling laws (t^(1/10) for capillarity-driven and t^(1/8) for gravity-driven spreading) when the surface was smooth. However, the

  9. Neurophysiology and Neuroanatomy of Smooth Pursuit: Lesion Studies

    Science.gov (United States)

    Sharpe, James A.

    2008-01-01

    Smooth pursuit impairment is recognized clinically by the presence of saccadic tracking of a small object and quantified by reduction in pursuit gain, the ratio of smooth eye movement velocity to the velocity of a foveal target. Correlation of the site of brain lesions, identified by imaging or neuropathological examination, with defective smooth…

  10. Mechanisms of mechanical strain memory in airway smooth muscle.

    Science.gov (United States)

    Kim, Hak Rim; Hai, Chi-Ming

    2005-10-01

    We evaluated the hypothesis that mechanical deformation of airway smooth muscle induces structural remodeling of airway smooth muscle cells, thereby modulating mechanical performance in subsequent contractions. This hypothesis implied that past experience of mechanical deformation was retained (or "memorized") as structural changes in airway smooth muscle cells, which modulated the cell's subsequent contractile responses. We termed this phenomenon mechanical strain memory. Preshortening has been found to induce attenuation of both force and isotonic shortening velocity in cholinergic receptor-activated airway smooth muscle. Rapid stretching of cholinergic receptor-activated airway smooth muscle from an initial length to a final length resulted in post-stretch force and myosin light chain phosphorylation that correlated significantly with initial length. Thus post-stretch muscle strips appeared to retain memory of the initial length prior to rapid stretch (mechanical strain memory). Cytoskeletal recruitment of actin- and integrin-binding proteins and Erk 1/2 MAPK appeared to be important mechanisms of mechanical strain memory. Sinusoidal length oscillation led to force attenuation during oscillation and in subsequent contractions in intact airway smooth muscle, and p38 MAPK appeared to be an important mechanism. In contrast, application of local mechanical strain to cultured airway smooth muscle cells induced local actin polymerization and cytoskeletal stiffening. It is conceivable that deep inspiration-induced bronchoprotection may be a manifestation of mechanical strain memory such that mechanical deformation from past breathing cycles modulated the mechanical performance of airway smooth muscle in subsequent cycles in a continuous and dynamic manner.

  11. Microtissues Enhance Smooth Muscle Differentiation and Cell Viability of hADSCs for Three Dimensional Bioprinting

    Directory of Open Access Journals (Sweden)

    Jin Yipeng

    2017-07-01

    Full Text Available Smooth muscle differentiated human adipose derived stem cells (hADSCs) provide a crucial stem cell source for urinary tissue engineering, but the induction of hADSCs for smooth muscle differentiation still has several issues to overcome, including a relatively long induction time and equipment dependence, which limits access to abundant stem cells within a short period of time for further application. Three-dimensional (3D) bioprinting holds great promise in regenerative medicine due to its controllable construction of a designed 3D structure. When evenly mixed with bioink, stem cells can be spatially distributed within a bioprinted 3D structure, thus avoiding drawbacks such as stem cell detachment in a conventional cell-scaffold strategy. Notwithstanding the advantages mentioned above, cell viability is often compromised during 3D bioprinting, often due to pressure during the bioprinting process. The objective of our study was to improve the efficiency of hADSC smooth muscle differentiation and the cell viability of a 3D bioprinted structure. Here, we employed the hanging-drop method to generate hADSC microtissues in a smooth muscle inductive medium containing human transforming growth factor β1 and bioprinted the induced microtissues onto a 3D structure. After 3 days of smooth muscle induction, the expression of α-smooth muscle actin and smoothelin was higher in microtissues than in their counterpart monolayer cultured hADSCs, as confirmed by immunofluorescence and western blotting analysis. The semi-quantitative assay showed that the expression of α-smooth muscle actin (α-SMA) was 0.218 ± 0.077 in MTs and 0.082 ± 0.007 in controls; smoothelin expression was 0.319 ± 0.02 in MTs and 0.178 ± 0.06 in controls. Induced MTs maintained their phenotype after the bioprinting process. Live/dead and cell counting kit-8 assays showed that cell viability and cell proliferation in the 3D structure printed with microtissues were higher at all time

  12. Adaptive smoothing based on Gaussian processes regression increases the sensitivity and specificity of fMRI data.

    Science.gov (United States)

    Strappini, Francesca; Gilboa, Elad; Pitzalis, Sabrina; Kay, Kendrick; McAvoy, Mark; Nehorai, Arye; Snyder, Abraham Z

    2017-03-01

    Temporal and spatial filtering of fMRI data is often used to improve statistical power. However, conventional methods, such as smoothing with fixed-width Gaussian filters, remove fine-scale structure in the data, necessitating a tradeoff between sensitivity and specificity. Specifically, smoothing may increase sensitivity (reduce noise and increase statistical power) but at the cost of specificity, in that fine-scale structure in neural activity patterns is lost. Here, we propose an alternative smoothing method based on Gaussian process (GP) regression for single-subject fMRI experiments. This method adapts the level of smoothing on a voxel-by-voxel basis according to the characteristics of the local neural activity patterns. GP-based fMRI analysis has heretofore been impractical owing to computational demands. Here, we demonstrate a new implementation of GP that makes it possible to handle the massive data dimensionality of the typical fMRI experiment. We demonstrate how GP can be used as a drop-in replacement for conventional preprocessing steps for temporal and spatial smoothing in a standard fMRI pipeline. We present simulated and experimental results that show the increased sensitivity and specificity compared to conventional smoothing strategies. Hum Brain Mapp 38:1438-1459, 2017. © 2016 Wiley Periodicals, Inc.
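
    The core of GP-based smoothing can be demonstrated on a one-dimensional time series with scikit-learn: fitting the kernel hyperparameters by marginal likelihood adapts the effective amount of smoothing to the data. This is only the basic idea; the paper's voxel-wise fMRI implementation is far more scalable.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 200)[:, None]
truth = np.sin(t).ravel() + 0.5 * np.sin(3 * t).ravel()
y = truth + rng.normal(0, 0.3, truth.size)

# the length scale (amount of smoothing) and noise level are fit by marginal likelihood
kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, y)
y_smooth, y_std = gp.predict(t, return_std=True)
```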

  13. Adaptively smoothed seismicity earthquake forecasts for Italy

    Directory of Open Access Journals (Sweden)

    Yan Y. Kagan

    2010-11-01

    Full Text Available We present a model for estimation of the probabilities of future earthquakes of magnitudes m ≥ 4.95 in Italy. This model is a modified version of that proposed for California, USA, by Helmstetter et al. [2007] and Werner et al. [2010a], and it approximates seismicity using a spatially heterogeneous, temporally homogeneous Poisson point process. The temporal, spatial and magnitude dimensions are entirely decoupled. Magnitudes are independently and identically distributed according to a tapered Gutenberg-Richter magnitude distribution. We have estimated the spatial distribution of future seismicity by smoothing the locations of past earthquakes listed in two Italian catalogs: a short instrumental catalog, and a longer instrumental and historic catalog. The bandwidth of the adaptive spatial kernel is estimated by optimizing the predictive power of the kernel estimate of the spatial earthquake density in retrospective forecasts. When available and reliable, we used small earthquakes of m ≥ 2.95 to reveal active fault structures and probable future epicenters. By calibrating the model with these two catalogs of different durations to create two forecasts, we intend to quantify the loss (or gain) of predictability incurred when only a short, but recent, data record is available. Both forecasts were scaled to five and ten years, and have been submitted to the Italian prospective forecasting experiment of the global Collaboratory for the Study of Earthquake Predictability (CSEP). An earlier forecast from the model was submitted by Helmstetter et al. [2007] to the Regional Earthquake Likelihood Model (RELM) experiment in California, and with more than half of the five-year experimental period over, the forecast has performed better than the others.

  14. Evaluating the impact of spatio-temporal smoothness constraints on the BOLD hemodynamic response function estimation: an analysis based on Tikhonov regularization

    International Nuclear Information System (INIS)

    Casanova, R; Yang, L; Hairston, W D; Laurienti, P J; Maldjian, J A

    2009-01-01

    Recently we have proposed the use of Tikhonov regularization with temporal smoothness constraints to estimate the BOLD fMRI hemodynamic response function (HRF). The temporal smoothness constraint was imposed on the estimates by using second derivative information while the regularization parameter was selected based on the generalized cross-validation function (GCV). Using one-dimensional simulations, we previously found this method to produce reliable estimates of the HRF time course, especially its time to peak (TTP), being at the same time fast and robust to over-sampling in the HRF estimation. Here, we extend the method to include simultaneous temporal and spatial smoothness constraints. This method does not need Gaussian smoothing as a pre-processing step as usually done in fMRI data analysis. We carried out two-dimensional simulations to compare the two methods: Tikhonov regularization with temporal (Tik-GCV-T) and spatio-temporal (Tik-GCV-ST) smoothness constraints on the estimated HRF. We focus our attention on quantifying the influence of the Gaussian data smoothing and the presence of edges on the performance of these techniques. Our results suggest that the spatial smoothing introduced by regularization is less severe than that produced by Gaussian smoothing. This allows more accurate estimates of the response amplitudes while producing similar estimates of the TTP. We illustrate these ideas using real data. (note)
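
    The temporal-smoothness version of the estimator has a closed form: with a convolutional design matrix X built from the stimulus and a second-difference operator D2, the HRF solves min_h ||y - Xh||^2 + lam ||D2 h||^2. A minimal sketch with a fixed lam follows; the paper selects the regularization parameter by GCV and adds the spatial coupling.

```python
import numpy as np

def estimate_hrf(stimulus, bold, hrf_len, lam):
    """Tikhonov-regularized deconvolution of the HRF with a
    second-derivative (temporal smoothness) penalty."""
    n = len(bold)
    # design matrix: column j holds the stimulus shifted by j samples
    X = np.zeros((n, hrf_len))
    for j in range(hrf_len):
        X[j:, j] = stimulus[: n - j]
    D2 = np.diff(np.eye(hrf_len), n=2, axis=0)        # second-difference operator
    return np.linalg.solve(X.T @ X + lam * (D2.T @ D2), X.T @ bold)
```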

  15. Restoration of an object from its complex cross sections and surface smoothing of the object

    International Nuclear Information System (INIS)

    Agui, Takeshi; Arai, Kiyoshi; Nakajima, Masayuki

    1990-01-01

    In clinical medicine, restoring the surface of a three-dimensional object from a set of parallel cross sections obtained by CT or MRI is useful for diagnosis. A method of connecting pairs of contours on neighboring cross sections by triangular patches is generally used for this restoration. This method, however, involves a complex triangulation algorithm and requires a large number of calculations when surface smoothing is executed. In our new method, the positions of sampling points are expressed in cylindrical coordinates. Sampling points, including auxiliary points, are extracted and connected using a simple algorithm. Surface smoothing is executed by moving the sampling points. This method extends the scope of application of restoring objects by triangulation. (author)

  16. Comparison of halo detection from noisy weak lensing convergence maps with Gaussian smoothing and MRLens treatment

    International Nuclear Information System (INIS)

    Jiao Yangxiu; Shan Huanyuan; Fan Zuhui

    2011-01-01

    Taking into account the noise from intrinsic ellipticities of source galaxies, we study the efficiency and completeness of halo detections from weak lensing convergence maps. In particular, with numerical simulations, we compare the Gaussian filter with the so-called MRLens treatment based on a modification of the Maximum Entropy Method. For a pure noise field without lensing signals, Gaussian smoothing results in a residual noise field that is approximately Gaussian in terms of statistics if a large enough number of galaxies are included in the smoothing window. On the other hand, the noise field after the MRLens treatment is significantly non-Gaussian, resulting in complications in characterizing the noise effects. Considering weak-lensing cluster detections, although the MRLens treatment effectively deletes false peaks arising from noise, it also removes real peaks heavily, due to its inability to distinguish real signals with relatively low amplitudes from noise in its restoration process. The higher the noise level, the larger the removal effects for the real peaks. For a survey with a source density n_g ∼ 30 arcmin⁻², the number of peaks found in an area of 3 × 3 deg² after MRLens filtering is only ∼ 50 for the detection threshold κ = 0.02, while the number of halos with M > 5 × 10¹³ M_⊙ and with redshift z ≤ 2 in the same area is expected to be ∼ 530. For the Gaussian smoothing treatment, the number of detections is ∼ 260, much larger than that of the MRLens. The Gaussianity of the noise statistics in the Gaussian smoothing case adds further advantages for this method to circumvent the problem of the relatively low efficiency in weak-lensing cluster detections. Therefore, in studies aiming to construct large cluster samples from weak-lensing surveys, the Gaussian smoothing method performs significantly better than the MRLens treatment.
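
    The Gaussian-filter detection pipeline reduces to two steps, smooth and threshold local maxima, which can be sketched as follows; the smoothing scale, local-maximum window, and threshold below are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def detect_peaks(kappa_noisy, sigma_pix=2.0, threshold=0.02, window=5):
    """Gaussian-smooth a noisy convergence map and list peak pixel positions
    above the detection threshold."""
    smoothed = gaussian_filter(kappa_noisy, sigma_pix)
    is_max = smoothed == maximum_filter(smoothed, size=window)
    return np.argwhere(is_max & (smoothed > threshold))
```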

  17. Smooth pursuit eye movements and schizophrenia: literature review.

    Science.gov (United States)

    Franco, J G; de Pablo, J; Gaviria, A M; Sepúlveda, E; Vilella, E

    2014-09-01

    To review the scientific literature on the relationship between impairment of smooth pursuit eye movements and schizophrenia. Narrative review that includes historical articles, reports of basic and clinical investigations, systematic reviews, and meta-analyses on the topic. Up to 80% of schizophrenic patients have impairment of smooth pursuit eye movements. Despite the diversity of test protocols, 65% of patients and controls are correctly classified by their overall performance during this pursuit. Smooth pursuit eye movements depend on the ability to anticipate the target's velocity and on visual feedback, as well as on learning and attention. The neuroanatomy implicated in smooth pursuit overlaps to some extent with certain frontal cortex zones associated with some clinical and neuropsychological characteristics of schizophrenia; therefore some specific components of smooth pursuit anomalies could serve as biomarkers of the disease. Due to their sedative effect, antipsychotics have a deleterious effect on smooth pursuit eye movements, thus these movements cannot be used to evaluate the efficacy of currently available treatments. Standardized evaluation of smooth pursuit eye movements in schizophrenia will allow the use of specific aspects of that pursuit as biomarkers for the study of its genetics, psychopathology, or neuropsychology. Copyright © 2013 Sociedad Española de Oftalmología. Published by Elsevier España. All rights reserved.

  18. Nodular smooth muscle metaplasia in multiple peritoneal endometriosis.

    Science.gov (United States)

    Kim, Hyun-Soo; Yoon, Gun; Ha, Sang Yun; Song, Sang Yong

    2015-01-01

    We report here an unusual presentation of peritoneal endometriosis with smooth muscle metaplasia as multiple protruding masses on the lateral pelvic wall. Smooth muscle metaplasia is a common finding in rectovaginal endometriosis, whereas in peritoneal endometriosis, smooth muscle metaplasia is uncommon and its nodular presentation on the pelvic wall is even rarer. To the best of our knowledge, this is the first case of nodular smooth muscle metaplasia occurring in peritoneal endometriosis. As observed in this case, when performing laparoscopic surgery in order to excise malignant tumors of intra-abdominal or pelvic organs, it can be difficult for surgeons to distinguish the metastatic tumors from benign nodular pelvic wall lesions, including endometriosis, based on the gross findings only. Therefore, an intraoperative frozen section biopsy of the pelvic wall nodules should be performed to evaluate the peritoneal involvement by malignant tumors. Moreover, this report implies that peritoneal endometriosis, as well as rectovaginal endometriosis, can clinically present as nodular lesions if obvious smooth muscle metaplasia is present. The pathological investigation of smooth muscle cells in peritoneal lesions can contribute not only to the precise diagnosis but also to the structure and function of smooth muscle cells and related cells involved in the histogenesis of peritoneal endometriosis.

  19. The Smoothing Hypothesis, Stock Returns and Risk in Brazil

    Directory of Open Access Journals (Sweden)

    Antonio Lopo Martinez

    2011-01-01

    Full Text Available Income smoothing is defined as the deliberate normalization of income in order to reach a desired trend. If smoothing causes more information to be reflected in the stock price, it is likely to improve the allocation of resources and can be a critical factor in investment decisions. This study aims to build metrics to determine the degree of smoothing in Brazilian public companies, to classify them as smoothing and non-smoothing companies, and additionally to present evidence on the long-term relationship between the smoothing hypothesis and stock return and risk. Using the Economatica and CVM databases, this study focuses on 145 companies in the period 1998-2007. We find that Brazilian smoothers have a smaller degree of systematic risk than non-smoothers. On average, the beta of smoothers is significantly lower than that of non-smoothers. Regarding return, we find that the abnormal annualized returns of smoothers are significantly higher. We confirm differences between the groups by nonparametric and parametric tests, in cross-section and as time series, indicating that there is a statistically significant difference in performance in the Brazilian market between firms that do and do not engage in smoothing.

  20. Modeling the dispersion effects of contractile fibers in smooth muscles

    Science.gov (United States)

    Murtada, Sae-Il; Kroon, Martin; Holzapfel, Gerhard A.

    2010-12-01

    Micro-structurally based models for smooth muscle contraction are crucial for a better understanding of pathological conditions such as atherosclerosis, incontinence and asthma. It is important that models consider the underlying mechanical structure and the biochemical activation. Hence, a simple mechanochemical model is proposed that includes the dispersion of the orientation of smooth muscle myofilaments and that is capable of capturing available experimental data on smooth muscle contraction. This allows a refined study of the effects of myofilament dispersion on smooth muscle contraction. A classical biochemical model is used to describe the cross-bridge interactions with the thin filament in smooth muscles, in which calcium-dependent myosin phosphorylation is the only regulatory mechanism. A novel mechanical model considers the dispersion of the contractile fiber orientations in smooth muscle cells by means of a strain-energy function in terms of one dispersion parameter. All model parameters have a biophysical meaning and may be estimated through comparisons with experimental data. The contraction of the middle layer of a carotid artery is studied numerically. Using a tube, the relationships between the internal pressure and the stretches are investigated as functions of the dispersion parameter, which implies a strong influence of the orientation of smooth muscle myofilaments on the contraction response. It is straightforward to implement this model in a finite element code to better analyze more complex boundary-value problems.

  1. Smoothing of Fused Spectral Consistent Satellite Images

    DEFF Research Database (Denmark)

    Sveinsson, Johannes; Aanæs, Henrik; Benediktsson, Jon Atli

    2006-01-01

    on satellite data. Additionally, most conventional methods are loosely connected to the image forming physics of the satellite image, giving these methods an ad hoc feel. Vesteinsson et al. (2005) proposed a method of fusion of satellite images that is based on the properties of imaging physics...

  2. Contraction of gut smooth muscle cells assessed by fluorescence imaging

    Directory of Open Access Journals (Sweden)

    Yohei Tokita

    2015-03-01

    Full Text Available Here we discuss the development of a novel cell imaging system for the evaluation of smooth muscle cell (SMC) contraction. SMCs were isolated from the circular and longitudinal muscular layers of mouse small intestine by enzymatic digestion. SMCs were stimulated by test agents and thereafter fixed in acrolein. Actin in fixed SMCs was stained with phalloidin, and cell length was determined by measuring the diameter at the large end of phalloidin-stained strings within the cells. The contractile response was taken as the decrease in the average length of a population of stimulated SMCs. Various mediators and chemically identified compounds of daikenchuto (DKT), a pharmaceutical-grade traditional Japanese prokinetic, were examined. Verification of the integrity of SMC morphology by phalloidin and DAPI staining, together with semi-automatic measurement of cell length using an imaging analyzer, proved a reliable method by which to quantify the contractile response. Serotonin, substance P, prostaglandin E2 and histamine induced SMC contraction in a concentration-dependent manner. Two components of DKT, hydroxy-α-sanshool and hydroxy-β-sanshool, induced contraction of SMCs. We established a novel cell imaging technique to evaluate SMC contractility. This method may facilitate investigation into SMC activity and its role in gastrointestinal motility, and may assist in the discovery of new prokinetic agents.
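
    The readout described here, the decrease in mean cell length of a stimulated population relative to an unstimulated one, is straightforward to compute. Below is a minimal sketch with hypothetical length measurements; the distributions, sample sizes and significance test are assumptions, not details from the paper.

      import numpy as np
      from scipy import stats

      def contraction_percent(control_lengths, stimulated_lengths):
          """Contractile response as the percent decrease in mean cell length."""
          c, s = np.mean(control_lengths), np.mean(stimulated_lengths)
          return 100.0 * (c - s) / c

      # Hypothetical phalloidin-based cell lengths (micrometers)
      rng = np.random.default_rng(0)
      control = rng.normal(60.0, 8.0, size=100)     # unstimulated SMCs
      stimulated = rng.normal(48.0, 8.0, size=100)  # SMCs after a test agent
      print(f"contraction: {contraction_percent(control, stimulated):.1f}%")
      print(f"p-value: {stats.ttest_ind(control, stimulated).pvalue:.2e}")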

  3. Smoothing expansion rate data to reconstruct cosmological matter perturbations

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez, J.E.; Alcaniz, J.S.; Carvalho, J.C., E-mail: javierernesto@on.br, E-mail: alcaniz@on.br, E-mail: jcarvalho@on.br [Departamento de Astronomia, Observatório Nacional, Rua Gal. José Cristino, 77, Rio de Janeiro, RJ 20921-400 (Brazil)

    2017-08-01

    The existing degeneracy between different dark energy and modified gravity cosmologies at the background level may be broken by analyzing quantities at the perturbative level. In this work, we apply a non-parametric smoothing (NPS) method to reconstruct the expansion history of the Universe H(z) from model-independent cosmic chronometers and high-z quasar data. Assuming a homogeneous and isotropic flat universe and general relativity (GR) as the gravity theory, we calculate the non-relativistic matter perturbations in the linear regime using the H(z) reconstruction and realistic values of Ω_m0 and σ_8 from the Planck and WMAP-9 collaborations. We find a good agreement between the measurements of the growth rate and fσ_8(z) from current large-scale structure observations and the estimates obtained from the reconstruction of the cosmic expansion history. Considering a recently proposed null test for GR using matter perturbations, we also apply the NPS method to reconstruct fσ_8(z). For this case, we find a ∼3σ tension (good agreement) with the standard relativistic cosmology when the Planck (WMAP-9) priors are used.
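
    The abstract does not spell out the smoothing kernel, but non-parametric reconstructions of H(z) in this vein typically convolve the data with a Gaussian kernel in ln(1+z), weighting each point by its measurement error. A minimal sketch under that assumption, with hypothetical cosmic-chronometer-like data:

      import numpy as np

      def smooth_Hz(z_data, H_data, sigma_H, z_grid, delta=0.3):
          """Error-weighted Gaussian-kernel smoothing of H(z) in ln(1+z)."""
          H_rec = np.empty_like(z_grid)
          for i, z in enumerate(z_grid):
              w = np.exp(-0.5 * ((np.log1p(z) - np.log1p(z_data)) / delta) ** 2)
              w /= sigma_H ** 2                  # down-weight noisy measurements
              H_rec[i] = np.sum(w * H_data) / np.sum(w)
          return H_rec

      # Hypothetical H(z) data points (km/s/Mpc)
      z_data = np.array([0.1, 0.3, 0.5, 0.9, 1.3, 1.8])
      H_data = np.array([69.0, 78.0, 88.0, 105.0, 125.0, 150.0])
      sigma_H = np.array([5.0, 6.0, 8.0, 10.0, 14.0, 18.0])
      Hz = smooth_Hz(z_data, H_data, sigma_H, np.linspace(0.0, 2.0, 51))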

  4. SOFT: smooth OPC fixing technique for ECO process

    Science.gov (United States)

    Zhang, Hongbo; Shi, Zheng

    2007-03-01

    SOFT (Smooth OPC Fixing Technique) is a new OPC flow developed from the basic OPC framework. It provides a new method to reduce the computational cost and complexity of ECO-OPC (Engineering Change Order - Optical Proximity Correction). In this paper, we introduce polygon comparison to extract the necessary but possibly lost fragmentation and offset information from the previous post-OPC layout. By reusing these data, we can start the modification of each segment from a more accurate initial offset. In addition, the fragmentation method at the boundary of the patch in the previous OPC process becomes available for engineers to stitch the regional ECO-OPC result back into the whole post-OPC layout seamlessly. As for the ripple effect in OPC, by comparing each segment's movement in each loop, we largely decouple the fixing speed from the patch size. We handle layout remodification, especially in three basic kinds of ECO-OPC processes, while maintaining closure of the rest of the design. Our experimental results show that, by utilizing the previous post-OPC layout, full-chip ECO-OPC achieves over 5X acceleration, and the regional ECO-OPC result can be stitched back into the whole layout seamlessly while accounting for the ripple effect of the lithography interaction.

  6. Smooth solutions of the Navier-Stokes equations

    International Nuclear Information System (INIS)

    Pokhozhaev, S I

    2014-01-01

    We consider smooth solutions of the Cauchy problem for the Navier-Stokes equations on the scale of smooth functions which are periodic with respect to x ∈ R^3. We obtain existence theorems for global (with respect to t > 0) and local solutions of the Cauchy problem. The statements of these theorems depend on the smoothness and the norm of the initial vector function. Upper bounds for the behaviour of solutions in both classes, which depend on t, are also obtained. Bibliography: 10 titles.
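
    For reference, the Cauchy problem in question is the standard incompressible Navier-Stokes system, written here in LaTeX (a textbook formulation, not quoted from the paper):

      \begin{aligned}
        &\partial_t u + (u \cdot \nabla)u - \nu \Delta u + \nabla p = f,
          \qquad \nabla \cdot u = 0, \\
        &u(x, 0) = u_0(x), \qquad x \in \mathbb{R}^3,\ t > 0,
      \end{aligned}

    with the velocity field u and the initial datum u_0 periodic in x, and ν > 0 the viscosity.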

  7. EXCHANGE-RATES FORECASTING: EXPONENTIAL SMOOTHING TECHNIQUES AND ARIMA MODELS

    Directory of Open Access Journals (Sweden)

    Dezsi Eva

    2011-07-01

    Full Text Available Exchange rate forecasting is, and has been, a challenging task in finance. Statistical and econometric models are widely used in the analysis and forecasting of foreign exchange rates. This paper investigates the behavior of daily exchange rates of the Romanian Leu against the Euro, United States Dollar, British Pound, Japanese Yen, Chinese Renminbi and the Russian Ruble. Smoothing techniques are fitted and compared with each other. These models include the Simple Exponential Smoothing technique, the Double Exponential Smoothing technique, the Simple Holt-Winters technique and the Additive Holt-Winters technique, as well as the Autoregressive Integrated Moving Average (ARIMA) model.
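
    The simplest of these, Simple Exponential Smoothing, updates a level estimate by the recursion s_t = alpha*x_t + (1 - alpha)*s_{t-1} and uses the last level as a flat one-step-ahead forecast. A minimal hand-rolled sketch; the rate series and the value of alpha are hypothetical:

      import numpy as np

      def simple_exponential_smoothing(x, alpha):
          """s_t = alpha * x_t + (1 - alpha) * s_{t-1}."""
          s = np.empty_like(x, dtype=float)
          s[0] = x[0]
          for t in range(1, len(x)):
              s[t] = alpha * x[t] + (1.0 - alpha) * s[t - 1]
          return s

      # Hypothetical daily EUR/RON closing rates
      rates = np.array([4.21, 4.23, 4.22, 4.25, 4.27, 4.26, 4.30, 4.29])
      smoothed = simple_exponential_smoothing(rates, alpha=0.3)
      forecast = smoothed[-1]   # flat one-step-ahead forecast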

  8. Electrochemically replicated smooth aluminum foils for anodic alumina nanochannel arrays

    International Nuclear Information System (INIS)

    Biring, Sajal; Tsai, K-T; Sur, Ujjal Kumar; Wang, Y-L

    2008-01-01

    A fast electrochemical replication technique has been developed to fabricate large-scale ultra-smooth aluminum foils by exploiting readily available large-scale smooth silicon wafers as the masters. Since the adhesion of aluminum on silicon depends on the duration of surface pretreatment in water, it is possible either to detach the replicated aluminum from the silicon master without damaging the replica or the master, or to integrate the aluminum film with the silicon substrate. Replicated ultra-smooth aluminum foils are used for the growth of both self-organized and lithographically guided long-range ordered arrays of anodic alumina nanochannels without any polishing pretreatment.

  9. Additional Smooth and Rough Water Trials of SKI-CAT.

    Science.gov (United States)

    1981-08-01

    Further tests of SKI-CAT were made in smooth and rough water. Smooth water results confirmed the performance results of earlier trials, and rough water tests showed reductions in the accelerations and motions of SKI-CAT in head seas.

  10. Smooth invariant densities for random switching on the torus

    Science.gov (United States)

    Bakhtin, Yuri; Hurth, Tobias; Lawley, Sean D.; Mattingly, Jonathan C.

    2018-04-01

    We consider a random dynamical system obtained by switching between the flows generated by two smooth vector fields on the 2d-torus, with the random switchings happening according to a Poisson process. Assuming that the driving vector fields are transversal to each other at all points of the torus and that each of them allows for a smooth invariant density and no periodic orbits, we prove that the switched system also has a smooth invariant density, for every switching rate. Our approach is based on an integration by parts formula inspired by techniques from Malliavin calculus.

  11. Investigating the effects of smoothness of interfaces on stability of probing nano-scale thin films by neutron reflectometry

    Directory of Open Access Journals (Sweden)

    S.S. Jahromi

    2012-03-01

    Full Text Available Most of the reflectometry methods used for determining the phase of the complex reflection coefficient, such as the Reference Method and the Variation of Surrounding Medium method, are based on solving the Schrödinger equation with a discontinuous, step-like scattering optical potential. However, during the deposition process for making a real sample, the two adjacent layers mix together, and the interface is not discontinuous and sharp. The smearing of adjacent layers at the interface (the smoothness of the interface) affects the reflectivity, the phase of the reflection coefficient and the reconstruction of the scattering length density (SLD) of the sample. In this paper, we have investigated the stability of the Reference Method in the presence of smooth interfaces. The smoothness of interfaces is modeled by using a continuous-function scattering potential. We have also proposed a method to achieve the most reliable output result while retrieving the SLD of the sample.

  12. Smooth muscle myosin light chain kinase efficiently phosphorylates serine 15 of cardiac myosin regulatory light chain

    International Nuclear Information System (INIS)

    Josephson, Matthew P.; Sikkink, Laura A.; Penheiter, Alan R.; Burghardt, Thomas P.; Ajtai, Katalin

    2011-01-01

    Highlights: ► Cardiac myosin regulatory light chain (MYL2) is phosphorylated at S15. ► Smooth muscle myosin light chain kinase (smMLCK) is a ubiquitous kinase. ► It is widely believed that MYL2 is a poor substrate for smMLCK. ► In fact, smMLCK efficiently and rapidly phosphorylates S15 in MYL2. ► Phosphorylation kinetics measured by a novel fluorescence method without radioactivity. -- Abstract: Specific phosphorylation of the human ventricular cardiac myosin regulatory light chain (MYL2) modifies the protein at S15. This modification affects MYL2 secondary structure and modulates the Ca2+ sensitivity of contraction in cardiac tissue. Smooth muscle myosin light chain kinase (smMLCK) is a ubiquitous kinase prevalent in uterus and present in other contracting tissues including cardiac muscle. The recombinant 130 kDa (short) smMLCK phosphorylated S15 in MYL2 in vitro. Specific modification of S15 was verified by direct detection of the phospho group on S15 with mass spectrometry. SmMLCK also specifically phosphorylated myosin regulatory light chain S15 in porcine ventricular myosin and chicken gizzard smooth muscle myosin (S20 in smooth muscle) but failed to phosphorylate the myosin regulatory light chain in rabbit skeletal myosin. Phosphorylation kinetics, measured using a novel fluorescence method eliminating the use of radioactive isotopes, indicate similar Michaelis-Menten V_max and K_M for regulatory light chain S15 phosphorylation in MYL2, porcine ventricular myosin, and chicken gizzard myosin. These data demonstrate that smMLCK is a specific and efficient kinase for the in vitro phosphorylation of MYL2, cardiac, and smooth muscle myosin. Whether smMLCK plays a role in cardiac muscle regulation or in the response to a disease-causing stimulus is unclear, but it should be considered a potentially significant kinase in cardiac tissue on the basis of its specificity, kinetics, and tissue expression.

  13. Effect of an Ethanol Extract of Scutellaria baicalensis on Relaxation in Corpus Cavernosum Smooth Muscle

    Directory of Open Access Journals (Sweden)

    Xiang Li

    2012-01-01

    Full Text Available Aims of study. The aim of the present study was to investigate whether an ethanol extract of Scutellaria baicalensis (ESB) relaxes penile corpus cavernosum muscle in organ bath experiments. Materials and methods. Changes in the tension of cavernous smooth muscle strips were determined in a penile strip chamber model and in a penile perfusion model. Isolated endothelium-intact rabbit corpus cavernosum was precontracted with phenylephrine (PE) and then treated with ESB. Results. ESB relaxed penile smooth muscle in a dose-dependent manner, and this was inhibited by pretreatment with NG-nitro-L-arginine methyl ester (L-NAME), a nitric oxide (NO) synthase inhibitor, and 1H-[1,2,4]-oxadiazolo-[4,3-α]-quinoxalin-1-one (ODQ), a soluble guanylyl cyclase (sGC) inhibitor. ESB-induced relaxation was significantly attenuated by pretreatment with tetraethylammonium (TEA), a nonselective K+ channel blocker, and charybdotoxin, a selective Ca2+-dependent K+ channel inhibitor. ESB increased the cGMP levels of rabbit corpus cavernosum in a concentration-dependent manner without changing cAMP levels. In a perfusion model of penile tissue, ESB also relaxed penile corpus cavernosum smooth muscle in a dose-dependent manner. Conclusion. Taken together, these results suggest that ESB relaxes rabbit cavernous smooth muscle via the NO/cGMP system and Ca2+-sensitive K+ channels in the corpus cavernosum.

  14. Using smooth sheets to describe groundfish habitat in Alaskan waters, with specific application to two flatfishes

    Science.gov (United States)

    Zimmermann, Mark; Reid, Jane A.; Golden, Nadine

    2016-01-01

    In this analysis we demonstrate how preferred fish habitat can be predicted and mapped for juveniles of two Alaskan groundfish species – Pacific halibut (Hippoglossus stenolepis) and flathead sole (Hippoglossoides elassodon) – at five sites (Kiliuda Bay, Izhut Bay, Port Dick, Aialik Bay, and the Barren Islands) in the central Gulf of Alaska. The method involves using geographic information system (GIS) software to extract appropriate information from National Ocean Service (NOS) smooth sheets that are available from NGDC (the National Geophysical Data Center). These smooth sheets are highly detailed charts that include more soundings, substrates, shoreline and feature information than the more commonly known navigational charts. By bringing the information from smooth sheets into a GIS, a variety of surfaces, such as depth, slope, rugosity and mean grain size, can be interpolated as rasters. Other measurements, such as site openness, shoreline length, proportion of the bay that is nearshore, areas of rocky reefs and kelp beds, water volumes, surface areas and vertical cross-sections, were also made in order to quantify differences between the study sites. Proper GIS processing also allows linking the smooth sheets to other data sets, such as orthographic satellite photographs, topographic maps and precipitation estimates from which watersheds and runoff can be derived. This same methodology can be applied to larger areas, taking advantage of these free data sets to describe predicted groundfish essential fish habitat (EFH) in Alaskan waters.
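
    Once the smooth-sheet soundings are interpolated to a gridded depth raster, derived surfaces such as slope and a simple rugosity proxy follow from local derivatives and neighborhood statistics. A minimal sketch on a toy bathymetry grid; the grid spacing, the 3x3 window and the use of a local standard deviation as the rugosity measure are assumptions, not the authors' exact GIS workflow.

      import numpy as np
      from scipy.ndimage import generic_filter

      def slope_degrees(depth, cell_size):
          """Slope of a depth raster from central differences."""
          dzdy, dzdx = np.gradient(depth, cell_size)
          return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

      def rugosity(depth):
          """Terrain-ruggedness proxy: local standard deviation of depth (3x3)."""
          return generic_filter(depth, np.std, size=3)

      # Toy 50 m resolution bathymetry grid (meters)
      rng = np.random.default_rng(1)
      depth = -40.0 + 5.0 * rng.standard_normal((100, 100))
      print(slope_degrees(depth, 50.0).mean(), rugosity(depth).mean())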

  15. Structure of modes of smoothly irregular three-dimensional integrated optical four-layer waveguide

    International Nuclear Information System (INIS)

    Egorov, A.A.; Ajryan, Eh.A.; Sevast'yanov, A.L.; Sevast'yanov, L.A.

    2009-01-01

    As a method for investigating an integrated optical multilayer waveguide satisfying the condition of smooth variation of the shape of the three-dimensional structure under study, an asymptotic method is used. The three-dimensional fields of the smoothly deforming modes of the integrated optical waveguide are described analytically. An explicit expression for the first-order contributions to the amplitudes of the electric and magnetic fields of the quasi-waveguide modes is obtained. The canonical form of the equations describing the propagation of quasi-TE and quasi-TM modes in the smoothly irregular part of a four-layer integrated optical waveguide is presented for the asymptotic method. With the help of the coupled-wave method and perturbation theory, the shifts of the complex propagation constants for quasi-TE and quasi-TM modes are obtained in explicit form. The theory developed is applicable to the analysis of similar structures of dielectric, magnetic and metamaterials in a sufficiently broad band of electromagnetic wavelengths.

  16. The Smoothing Artifact of Spatially Constrained Canonical Correlation Analysis in Functional MRI

    Directory of Open Access Journals (Sweden)

    Dietmar Cordes

    2012-01-01

    Full Text Available A wide range of studies show the capacity of multivariate statistical methods for fMRI to improve mapping of brain activations in a noisy environment. An advanced method uses local canonical correlation analysis (CCA to encompass a group of neighboring voxels instead of looking at the single voxel time course. The value of a suitable test statistic is used as a measure of activation. It is customary to assign the value to the center voxel; however, this is a choice of convenience and without constraints introduces artifacts, especially in regions of strong localized activation. To compensate for these deficiencies, different spatial constraints in CCA have been introduced to enforce dominance of the center voxel. However, even if the dominance condition for the center voxel is satisfied, constrained CCA can still lead to a smoothing artifact, often called the “bleeding artifact of CCA”, in fMRI activation patterns. In this paper a new method is introduced to measure and correct for the smoothing artifact for constrained CCA methods. It is shown that constrained CCA methods corrected for the smoothing artifact lead to more plausible activation patterns in fMRI as shown using data from a motor task and a memory task.
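
    At the core of local CCA for fMRI is the canonical correlation between the time courses of a small voxel neighborhood and a set of design regressors, with the resulting statistic assigned to the center voxel. A toy sketch with scikit-learn's CCA; the 3x3 neighborhood, the regressor count and the random data are assumptions, and the spatial constraints on the nine voxel weights discussed in the paper are omitted.

      import numpy as np
      from sklearn.cross_decomposition import CCA

      rng = np.random.default_rng(0)
      T = 120                                  # time points
      X = rng.standard_normal((T, 9))          # 3x3 neighborhood voxel time courses
      Y = rng.standard_normal((T, 2))          # hemodynamic basis regressors

      cca = CCA(n_components=1)
      Xc, Yc = cca.fit_transform(X, Y)
      # Canonical correlation, conventionally assigned to the center voxel
      r = np.corrcoef(Xc[:, 0], Yc[:, 0])[0, 1]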

  17. Derivatives of Multivariate Bernstein Operators and Smoothness with Jacobi Weights

    Directory of Open Access Journals (Sweden)

    Jianjun Wang

    2012-01-01

    Full Text Available Using the modulus of smoothness, directional derivatives of multivariate Bernstein operators with weights are characterized. The obtained results partly generalize the corresponding ones for multivariate Bernstein operators without weights.

  18. Smooth surfaces from bilinear patches: Discrete affine minimal surfaces

    KAUST Repository

    Kä ferbö ck, Florian; Pottmann, Helmut

    2013-01-01

    Motivated by applications in freeform architecture, we study surfaces which are composed of smoothly joined bilinear patches. These surfaces turn out to be discrete versions of negatively curved affine minimal surfaces and share many properties

  19. Ensemble Kalman filtering with one-step-ahead smoothing

    KAUST Repository

    Raboudi, Naila F.; Ait-El-Fquih, Boujemaa; Hoteit, Ibrahim

    2018-01-01

    error statistics. This limits their representativeness of the background error covariances and, thus, their performance. This work explores the efficiency of the one-step-ahead (OSA) smoothing formulation of the Bayesian filtering problem to enhance

  20. Estimate of K-functionals and modulus of smoothness constructed ...

    Indian Academy of Sciences (India)

    2016-08-26

    functional and a modulus of smoothness for the Dunkl transform on Rd. Author Affiliations. M El Hamma1 R Daher1. Department of Mathematics, Faculty of Sciences Aïn Chock, University of Hassan II, Casablanca, Morocco. Dates.

  1. Small Smooth Units ('Young' Lavas?) Abutting Lobate Scarps on Mercury

    Science.gov (United States)

    Malliband, C. C.; Rothery, D. A.; Balme, M. R.; Conway, S. J.

    2018-05-01

    We have identified small units abutting, and so stratigraphically younger than, lobate scarps. This postdates the end of large-scale smooth plains formation at the onset of global contraction. This elaborates the history of volcanism on Mercury.

  2. Carrier tracking by smoothing filter improves symbol SNR

    Science.gov (United States)

    Pomalaza-Raez, Carlos A.; Hurd, William J.

    1986-01-01

    The potential benefit of using a smoothing filter to estimate carrier phase over use of phase locked loops (PLL) is determined. Numerical results are presented for the performance of three possible configurations of the deep space network advanced receiver. These are residual carrier PLL, sideband aided residual carrier PLL, and finally sideband aiding with a Kalman smoother. The average symbol signal to noise ratio (SNR) after losses due to carrier phase estimation error is computed for different total power SNRs, symbol rates and symbol SNRs. It is found that smoothing is most beneficial for low symbol SNRs and low symbol rates. Smoothing gains up to 0.4 dB over a sideband aided residual carrier PLL, and the combined benefit of smoothing and sideband aiding relative to a residual carrier loop is often in excess of 1 dB.
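
    The benefit of smoothing over purely causal tracking can be illustrated with a scalar toy model: a random-walk carrier phase observed in noise, estimated by a forward Kalman filter and then refined by a Rauch-Tung-Striebel backward pass. This is only a sketch of the general idea; it is not the advanced receiver's actual estimator, and all noise levels are hypothetical.

      import numpy as np

      def kalman_rts(y, q, r, x0=0.0, p0=1.0):
          """Scalar random-walk Kalman filter plus RTS smoother.
          Model: x_t = x_{t-1} + w_t (var q), y_t = x_t + v_t (var r)."""
          n = len(y)
          xf, pf = np.empty(n), np.empty(n)    # filtered state and variance
          xp, pp = np.empty(n), np.empty(n)    # one-step predictions
          x, p = x0, p0
          for t in range(n):
              xp[t], pp[t] = x, p + q          # predict
              k = pp[t] / (pp[t] + r)          # Kalman gain
              x = xp[t] + k * (y[t] - xp[t])   # measurement update
              p = (1.0 - k) * pp[t]
              xf[t], pf[t] = x, p
          xs = xf.copy()
          for t in range(n - 2, -1, -1):       # backward (smoothing) pass
              g = pf[t] / pp[t + 1]
              xs[t] = xf[t] + g * (xs[t + 1] - xp[t + 1])
          return xf, xs

      # Toy track: random-walk phase in additive measurement noise
      rng = np.random.default_rng(2)
      phase = np.cumsum(rng.normal(0.0, 0.02, 500))
      y = phase + rng.normal(0.0, 0.3, 500)
      xf, xs = kalman_rts(y, q=0.02**2, r=0.3**2)
      print(np.std(xf - phase), np.std(xs - phase))  # smoothed error is lower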

  3. Carrier tracking by smoothing filter can improve symbol SNR

    Science.gov (United States)

    Hurd, W. J.; Pomalaza-Raez, C. A.

    1985-01-01

    The potential benefit of using a smoothing filter to estimate carrier phase over use of phase locked loops (PLL) is determined. Numerical results are presented for the performance of three possible configurations of the deep space network advanced receiver. These are residual carrier PLL, sideband aided residual carrier PLL, and finally sideband aiding with a Kalman smoother. The average symbol signal to noise ratio (SNR) after losses due to carrier phase estimation error is computed for different total power SNRs, symbol rates and symbol SNRs. It is found that smoothing is most beneficial for low symbol SNRs and low symbol rates. Smoothing gains up to 0.4 dB over a sideband aided residual carrier PLL, and the combined benefit of smoothing and sideband aiding relative to a residual carrier loop is often in excess of 1 dB.

  4. Calcium-sensitivity of smooth muscle contraction in the isolated ...

    African Journals Online (AJOL)

    sensitivity of smooth muscle contraction were studied in the isolated perfused rat tail artery, employing the activators noradrenaline (NA) (3 µM) and potassium chloride (KCl) (100 mM). Experiments were conducted in Ca2+-buffered saline.

  5. Effects of contrast on smooth pursuit eye movements.

    Science.gov (United States)

    Spering, Miriam; Kerzel, Dirk; Braun, Doris I; Hawken, Michael J; Gegenfurtner, Karl R

    2005-05-20

    It is well known that moving stimuli can appear to move more slowly when contrast is reduced (P. Thompson, 1982). Here we address the question of whether changes in stimulus contrast also affect smooth pursuit eye movements. Subjects were asked to smoothly track a moving Gabor patch. Targets varied in velocity (1, 8, and 15 deg/s), spatial frequency (0.1, 1, 4, and 8 c/deg), and contrast, ranging from just below individual thresholds to maximum contrast. Results show that smooth pursuit eye velocity gain rose significantly with increasing contrast. Below a contrast level of two to three times threshold, pursuit gain, acceleration, latency, and positional accuracy were severely impaired. Therefore, the smooth pursuit motor response shows the same kind of slowing at low contrast that was demonstrated in previous studies on perception.

  6. Aging may negatively impact movement smoothness during stair negotiation.

    Science.gov (United States)

    Dixon, P C; Stirling, L; Xu, X; Chang, C C; Dennerlein, J T; Schiffman, J M

    2018-05-26

    Stairs represent a barrier to safe locomotion for some older adults, potentially leading to the adoption of a cautious gait strategy that may lack fluidity. This strategy may be characterized as unsmooth; however, stair negotiation smoothness has yet to be quantified. The aims of this study were to assess age- and task-related differences in head and body center of mass (COM) acceleration patterns and smoothness during stair negotiation and to determine if smoothness was associated with the timed "Up and Go" (TUG) test of functional movement. Motion data from nineteen older and twenty young adults performing stair ascent, stair descent, and overground straight walking trials were analyzed and used to compute smoothness based on the log-normalized dimensionless jerk (LDJ) and the velocity spectral arc length (SPARC) metrics. The associations between TUG and smoothness measures were evaluated using Pearson's correlation coefficient (r). Stair tasks increased head and body COM acceleration pattern differences across groups, compared to walking (p < 0.05). LDJ smoothness for the head and body COM decreased in older adults during stair descent, compared to young adults (p ≤ 0.015) and worsened with increasing TUG for all tasks (-0.60 ≤ r ≤ -0.43). SPARC smoothness of the head and body COM increased in older adults, regardless of task (p < 0.001), while correlations showed improved SPARC smoothness with increasing TUG for some tasks (0.33 ≤ r ≤ 0.40). The LDJ outperforms SPARC in identifying age-related stair negotiation adaptations and is associated with performance on a clinical test of gait. Copyright © 2018 Elsevier B.V. All rights reserved.
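
    The log dimensionless jerk can be computed directly from a movement profile. The sketch below uses a standard velocity-based formulation, LDLJ = -ln( (T^3 / v_peak^2) * integral of (d^2 v/dt^2)^2 dt ), in the spirit of Balasubramanian and colleagues; whether this matches the paper's exact log-normalization of accelerometer data is an assumption, and the profiles are synthetic.

      import numpy as np

      def ldlj(v, dt):
          """Log dimensionless jerk of a 1-D velocity profile (higher = smoother)."""
          T = dt * (len(v) - 1)                          # movement duration
          jerk = np.gradient(np.gradient(v, dt), dt)     # second derivative of velocity
          dj = (T**3 / np.max(np.abs(v))**2) * np.sum(jerk**2) * dt
          return -np.log(dj)

      # Smooth versus jittery toy velocity profiles over 2 s
      t = np.linspace(0.0, 2.0, 200)
      smooth_v = np.sin(np.pi * t / 2.0) ** 2
      rng = np.random.default_rng(3)
      jittery_v = smooth_v + 0.05 * rng.standard_normal(t.size)
      dt = t[1] - t[0]
      print(ldlj(smooth_v, dt), ldlj(jittery_v, dt))     # the smooth profile scores higher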

  7. Ureter smooth muscle cell orientation in rat is predominantly longitudinal.

    Science.gov (United States)

    Spronck, Bart; Merken, Jort J; Reesink, Koen D; Kroon, Wilco; Delhaas, Tammo

    2014-01-01

    In ureter peristalsis, the orientation of the contracting smooth muscle cells is essential, yet current descriptions of orientation and composition of the smooth muscle layer in human as well as in rat ureter are inconsistent. The present study aims to improve quantification of smooth muscle orientation in rat ureters as a basis for mechanistic understanding of peristalsis. A crucial step in our approach is to use two-photon laser scanning microscopy and image analysis providing objective, quantitative data on smooth muscle cell orientation in intact ureters, avoiding the usual sectioning artifacts. In 36 rat ureter segments, originating from a proximal, middle or distal site and from a left or right ureter, we found close to the adventitia a well-defined longitudinal smooth muscle orientation. Towards the lamina propria, the orientation gradually became slightly more disperse, yet the main orientation remained longitudinal. We conclude that smooth muscle cell orientation in rat ureter is predominantly longitudinal, though the orientation gradually becomes more disperse towards the proprial side. These findings do not support identification of separate layers. The observed longitudinal orientation suggests that smooth muscle contraction would rather cause local shortening of the ureter, than cause luminal constriction. However, the net-like connective tissue of the ureter wall may translate local longitudinal shortening into co-local luminal constriction, facilitating peristalsis. Our quantitative, minimally invasive approach is a crucial step towards more mechanistic insight into ureter peristalsis, and may also be used to study smooth muscle cell orientation in other tube-like structures like gut and blood vessels.

  9. Some remarks on smooth renormings of Banach spaces

    Czech Academy of Sciences Publication Activity Database

    Hájek, Petr Pavel; Russo, T.

    2017-01-01

    Roč. 455, č. 2 (2017), s. 1272-1284 ISSN 0022-247X R&D Projects: GA ČR GA16-07378S Institutional support: RVO:67985840 Keywords : Fréchet smooth * approximation of norms * Ck-smooth norm Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 1.064, year: 2016 http://www.sciencedirect.com/science/article/pii/S0022247X17305462?via%3Dihub

  10. Smooth maps of a foliated manifold in a symplectic manifold

    Indian Academy of Sciences (India)

    Abstract. Let M be a smooth manifold with a regular foliation F and a 2-form ω which induces closed forms on the leaves of F in the leaf topology. A smooth map f : (M, F) −→ (N,σ) in a symplectic manifold (N,σ) is called a foliated symplectic immersion if f restricts to an immersion on each leaf of the foliation and further, the.

  11. Vardenafil inhibiting parasympathetic function of tracheal smooth muscle.

    Science.gov (United States)

    Lee, Fei-Peng; Chao, Pin-Zhir; Wang, Hsing-Won

    2018-07-01

    Levitra, a phosphodiesterase-5 (PDE5) inhibitor, is the trade name of vardenafil. Nowadays, it is applied to the treatment of erectile dysfunction. PDE5 inhibitors are employed to induce dilatation of vascular smooth muscle. The effect of Levitra on impotency is well known; however, its effect on tracheal smooth muscle has rarely been explored. When administered for sexual symptoms via oral intake or inhalation, Levitra might affect the trachea. This study assessed the effects of Levitra on isolated rat tracheal smooth muscle by examining its effect on the resting tension of tracheal smooth muscle, on contraction caused by 10^-6 M methacholine as a parasympathetic mimetic, and on electrically induced tracheal smooth muscle contractions. The results showed that adding methacholine to the incubation medium caused the trachea to contract in a dose-dependent manner. Addition of Levitra at doses of 10^-5 M or above elicited a significant relaxation of the contraction induced by 10^-6 M methacholine. Levitra could also inhibit electrical field stimulation-induced spike contraction. On its own, it had minimal effect on the basal tension of the trachea as the concentration increased. High concentrations of Levitra could inhibit parasympathetic function of the trachea. Levitra administered via oral intake might therefore reduce asthma attacks in impotent patients, because it might inhibit parasympathetic function and reduce methacholine-induced contraction of tracheal smooth muscle. Copyright © 2018. Published by Elsevier Taiwan LLC.

  12. Myosin light chain kinase phosphorylation in tracheal smooth muscle

    International Nuclear Information System (INIS)

    Stull, J.T.; Hsu, L.C.; Tansey, M.G.; Kamm, K.E.

    1990-01-01

    Purified myosin light chain kinase from smooth muscle is phosphorylated by cyclic AMP-dependent protein kinase, protein kinase C, and the multifunctional calmodulin-dependent protein kinase II. Because phosphorylation in a specific site (site A) by any one of these kinases desensitizes myosin light chain kinase to activation by Ca2+/calmodulin, kinase phosphorylation could play an important role in regulating smooth muscle contractility. This possibility was investigated in 32P-labeled bovine tracheal smooth muscle. Treatment of tissues with carbachol, KCl, isoproterenol, or phorbol 12,13-dibutyrate increased the extent of kinase phosphorylation. Six primary phosphopeptides (A-F) of myosin light chain kinase were identified. Site A was phosphorylated to an appreciable extent only with carbachol or KCl, agents which contract tracheal smooth muscle. The extent of site A phosphorylation correlated to increases in the concentration of Ca2+/calmodulin required for activation. These results show that cyclic AMP-dependent protein kinase and protein kinase C do not affect smooth muscle contractility by phosphorylating site A in myosin light chain kinase. It is proposed that phosphorylation of myosin light chain kinase in site A in contracting tracheal smooth muscle may play a role in the reported desensitization of contractile elements to activation by Ca2+.

  13. Stimulation of aortic smooth muscle cell mitogenesis by serotonin

    International Nuclear Information System (INIS)

    Nemecek, G.M.; Coughlin, S.R.; Handley, D.A.; Moskowitz, M.A.

    1986-01-01

    Bovine aortic smooth muscle cells in vitro responded to 1 nM to 10 μM serotonin with increased incorporation of [3H]thymidine into DNA. The mitogenic effect of serotonin was half-maximal at 80 nM and maximal above 1 μM. At a concentration of 1 μM, serotonin stimulated smooth muscle cell mitogenesis to the same extent as human platelet-derived growth factor (PDGF) at 12 ng/ml. Tryptamine was ≅ 1/10th as potent as serotonin as a mitogen for smooth muscle cells. Other indoles that are structurally related to serotonin (D- and L-tryptophan, 5-hydroxy-L-tryptophan, N-acetyl-5-hydroxytryptamine, melatonin, 5-hydroxyindoleacetic acid, and 5-hydroxytryptophol) and quipazine were inactive. The stimulatory effect of serotonin on smooth muscle cell DNA synthesis required prolonged (20-24 hr) exposure to the agonist and was attenuated in the presence of serotonin D receptor antagonists. When smooth muscle cells were incubated with submaximal concentrations of serotonin and PDGF, synergistic rather than additive mitogenic responses were observed. These data indicate that serotonin has a significant mitogenic effect on smooth muscle cells in vitro, which appears to be mediated by specific plasma membrane receptors.

  14. Effect of lovastatin on rabbit vascular smooth muscle cells

    International Nuclear Information System (INIS)

    Luan Zhaoxia; Pei Zhuguo

    2003-01-01

    Objective: To investigate the effect of lovastatin on the binding activity of activator protein-1 (AP-1) and nuclear factor-κB (NF-κB), and on the expression of matrix metalloproteinase-9 (MMP-9), in rabbit vascular smooth muscle cells (VSMCs). Methods: Oligonucleotides corresponding to the consensus NF-κB element or the consensus AP-1 element were labeled with [γ-32P]-ATP. AP-1 and NF-κB binding activity was detected by electrophoretic mobility shift assay (EMSA), and expression of MMP-9 was detected by zymography. Results: Lovastatin inhibited the expression of MMP-9 in a dose-dependent manner; this effect was reversed by mevalonate and GGPP but not by squalene. Lovastatin also significantly decreased AP-1 and NF-κB binding activity. Conclusion: Lovastatin decreased AP-1 and NF-κB binding activity and inhibited MMP-9 expression in rabbit VSMCs by inhibiting protein prenylation rather than by cholesterol lowering, and this might be the mechanism of its arteriosclerotic plaque-stabilizing effects.

  15. Impaired Arterial Smooth Muscle Cell Vasodilatory Function In Methamphetamine Users

    Directory of Open Access Journals (Sweden)

    Ghaemeh Nabaei

    2017-02-01

    Full Text Available Objectives: Methamphetamine use is a strong risk factor for stroke. This study was designed to evaluate arterial function and structure in methamphetamine users ultrasonographically. Methods: In a cross-sectional study, 20 methamphetamine users and 21 controls, aged between 20 and 40 years, were enrolled. Common carotid artery intima-media thickness (CCA-IMT), a marker of early atherogenesis; flow-mediated dilatation (FMD), a determinant of endothelium-dependent vasodilation; and nitroglycerine-mediated dilatation (NMD), an endothelium-independent marker of vasodilation, were measured in the two groups. Results: There were no significant differences between the two groups regarding demographic and metabolic characteristics. The mean (±SD) CCA-IMT in methamphetamine users was 0.58 ± 0.09 mm, versus 0.59 ± 0.07 mm in the controls (p = 0.84). Likewise, FMD% was not significantly different between the two groups (7.6 ± 6.1% in methamphetamine users vs. 8.2 ± 5.1% in the controls; p = 0.72), nor were peak flow and shear rate after hyperemia. However, NMD% was considerably decreased in the methamphetamine users (8.5 ± 7.8% vs. 13.4 ± 6.2% in controls; p = 0.03). Conclusion: According to our results, NMD is reduced among otherwise healthy methamphetamine users, which indicates smooth muscle dysfunction in this group. This may contribute to the high risk of stroke among methamphetamine users.

  16. Uremia modulates the phenotype of aortic smooth muscle cells

    DEFF Research Database (Denmark)

    Madsen, Marie; Pedersen, Annemarie Aarup; Albinsson, Sebastian

    2017-01-01

    the phenotype of aortic SMCs in vivo. METHODS: Moderate uremia was induced by 5/6 nephrectomy in apolipoprotein E knockout (ApoE(-/-)) and wildtype C57Bl/6 mice. Plasma analysis, gene expression, histology, and myography were used to determine uremia-mediated changes in the arterial wall. RESULTS: Induction...... of moderate uremia in ApoE(-/-) mice increased atherosclerosis in the aortic arch en face 1.6 fold (p = 0.04) and induced systemic inflammation. Based on histological analyses of aortic root sections, uremia increased the medial area, while there was no difference in the content of elastic fibers or collagen...... in the aortic media. In the aortic arch, mRNA and miRNA expression patterns were consistent with a uremia-mediated phenotypic modulation of SMCs; e.g. downregulation of myocardin, α-smooth muscle actin, and transgelin; and upregulation of miR146a. Notably, these expression patterns were observed after acute (2...

  17. Efficient Smoothed Concomitant Lasso Estimation for High Dimensional Regression

    Science.gov (United States)

    Ndiaye, Eugene; Fercoq, Olivier; Gramfort, Alexandre; Leclère, Vincent; Salmon, Joseph

    2017-10-01

    In high dimensional settings, sparse structures are crucial for efficiency, in terms of memory, computation, and performance. It is customary to use an ℓ1 penalty to enforce sparsity in such scenarios. Sparsity-enforcing methods, the Lasso being a canonical example, are popular candidates for addressing high dimensionality. For efficiency, they rely on tuning a parameter that trades data fitting against sparsity. For the Lasso theory to hold, this tuning parameter should be proportional to the noise level, yet the latter is often unknown in practice. A possible remedy is to jointly optimize over the regression parameter and the noise level. This has been considered under several names in the literature, for instance Scaled Lasso, Square-root Lasso, and Concomitant Lasso estimation, and could be of interest for uncertainty quantification. In this work, after illustrating numerical difficulties with the Concomitant Lasso formulation, we propose a modification, which we coin the Smoothed Concomitant Lasso, aimed at increasing numerical stability. We propose an efficient and accurate solver with a computational cost no more expensive than that of the Lasso. We leverage standard ingredients behind the success of fast Lasso solvers: a coordinate descent algorithm combined with safe screening rules, eliminating irrelevant features early to achieve speed.
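
    The joint problem can be written as minimizing, over beta and sigma >= sigma_0, the objective ||y - X beta||^2 / (2 n sigma) + sigma / 2 + lambda * ||beta||_1, where the lower bound sigma_0 is the smoothing that stabilizes the formulation. Below is a minimal alternating sketch, not the paper's coordinate-descent solver with screening rules: the sigma-step is closed form and the beta-step is a Lasso with a rescaled penalty.

      import numpy as np
      from sklearn.linear_model import Lasso

      def smoothed_concomitant_lasso(X, y, lam, sigma0, n_iter=20):
          """Alternate beta- and sigma-updates for the smoothed concomitant lasso."""
          n = len(y)
          sigma = max(sigma0, np.std(y))
          for _ in range(n_iter):
              # beta-step: sklearn's Lasso minimizes ||.||^2/(2n) + alpha*||.||_1
              beta = Lasso(alpha=lam * sigma, fit_intercept=False).fit(X, y).coef_
              # sigma-step: closed form, clipped at sigma0 (the "smoothing")
              sigma = max(sigma0, np.linalg.norm(y - X @ beta) / np.sqrt(n))
          return beta, sigma

      rng = np.random.default_rng(4)
      X = rng.standard_normal((100, 200))
      beta_true = np.zeros(200)
      beta_true[:5] = 2.0
      y = X @ beta_true + rng.normal(0.0, 1.0, 100)
      beta, sigma = smoothed_concomitant_lasso(X, y, lam=0.2, sigma0=1e-2)
      print(sigma, np.count_nonzero(beta))   # sigma estimates the noise level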

  18. Establishment of artery smooth muscle cell proliferation model after subarachnoid hemorrhage in rats

    Directory of Open Access Journals (Sweden)

    Yu-jie CHEN

    2011-12-01

    Full Text Available Objective The current paper aims to simulate the effects of hemolytic products on intracranial vascular smooth muscle cells after subarachnoid hemorrhage (SAH), and to probe the molecular mechanism of, and strategies for preventing and treating, vascular proliferation after SAH. Methods Thirty Sprague-Dawley rats were randomly divided into three groups: sham-operated, 24 h after SAH, and 72 h after SAH. An artificial hemorrhage model around the common carotid artery was established for the latter two groups. The animals were sacrificed after 24 h and 72 h, and the common carotid arteries were harvested to measure the expression levels of PCNA and SM-α-actin protein and mRNA in the smooth muscle cells. Results PCNA mRNA expression was significantly up-regulated in the 24-h group (P < 0.01). Expression in the 72-h group was lower than in the 24-h group (P < 0.01), but still remarkably higher than in the sham group (P < 0.01). SM-α-actin mRNA expression in the smooth muscle cells of the 24-h and 72-h groups decreased compared with the sham group (P < 0.05), and the 72-h group was significantly lower than the 24-h group (P < 0.05). The protein expression of PCNA and SM-α-actin showed a similar trend. Conclusion The current experiment effectively simulates the effects of hemolytic products on vascular smooth muscle cells after SAH. It also shows that artificial hemorrhage around the common carotid artery can stimulate vascular smooth muscle cells to switch from a contractile to a synthetic phenotype and promote their proliferation.

  19. Formulation and demonstration of a robust mean variance optimization approach for concurrent airline network and aircraft design

    Science.gov (United States)

    Davendralingam, Navindran

    Conceptual design of aircraft and of the airline network (routes) on which aircraft fly is inextricably linked to passenger-driven demand. Many factors influence passenger demand for various Origin-Destination (O-D) city pairs, including demographics, geographic location, seasonality, socio-economic factors and, naturally, the operations of directly competing airlines. The expansion of airline operations involves the identification of appropriate aircraft to meet projected future demand. The decisions made in incorporating and subsequently allocating these new aircraft to serve air travel demand affect the inherent risk and profit potential as predicted through the airline revenue management systems. Competition between airlines then translates into latent passenger observations of the routes served between O-D pairs and ticket pricing, which in effect reflexively drives future states of demand. This thesis addresses the integrated nature of aircraft design, airline operations and passenger demand, in order to maximize future expected profits as new aircraft are brought into service. The goal of this research is to develop an approach that treats aircraft design, airline network design and passenger demand as a unified framework, providing better integrated design solutions that maximize the expected profits of an airline. This is investigated through two approaches. The first is a static model that poses the concurrent engineering paradigm above as an investment portfolio problem. Modern financial portfolio optimization techniques are used to weigh the risk of serving future projected demand with a yet-to-be-introduced aircraft against the potential future profits. Robust optimization methodologies are incorporated to mitigate model sensitivity and address the estimation risks associated with such optimization techniques. The second approach extends the portfolio model to include the dynamic effects of an airline's operations. A dynamic programming approach is employed to simulate the reflexive nature of airline supply-demand interactions by modeling the aggregate changes in demand that result from tactical allocations of aircraft to maximize profit. The best yet-to-be-introduced aircraft maximizes profit by minimizing the long-term fleetwide direct operating costs.

  20. Diversification in the driveway: mean-variance optimization for greenhouse gas emissions reduction from the next generation of vehicles

    International Nuclear Information System (INIS)

    Oliver Gao, H.; Stasko, Timon H.

    2009-01-01

    Modern portfolio theory is applied to the problem of selecting which vehicle technologies and fuels to use in the next generation of vehicles. Selecting vehicles with the lowest lifetime cost is complicated by the fact that future prices are uncertain, just as selecting securities for an investment portfolio is complicated by the fact that future returns are uncertain. A quadratic program is developed based on modern portfolio theory, with the objective of minimizing the expected lifetime cost of the 'vehicle portfolio'. Constraints limit greenhouse gas emissions, as well as the variance of the cost. A case study is performed for light-duty passenger vehicles in the United States, drawing emissions and usage data from the US Environmental Protection Agency's MOVES and Department of Energy's GREET models, among other sources. Four vehicle technologies are considered: conventional gasoline, conventional diesel, grid-independent (non-plug-in) gasoline-electric hybrid, and flex fuel using E85. Results indicate that much of the uncertainty surrounding cost stems from fuel price fluctuations, and that fuel efficient vehicles can lower cost variance. Hybrids exhibit the lowest cost variances of the technologies considered, making them an arguably financially conservative choice.
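
    The quadratic program described above has a compact expression in a modeling tool such as cvxpy. A minimal sketch of a mean-variance vehicle portfolio follows; the four technologies mirror the case study, but every number below is hypothetical.

      import cvxpy as cp
      import numpy as np

      # Per-vehicle data: [gasoline, diesel, hybrid, flex-fuel E85] (all hypothetical)
      exp_cost = np.array([52.0, 55.0, 54.0, 53.0])   # expected lifetime cost (k$)
      emissions = np.array([60.0, 55.0, 40.0, 45.0])  # lifetime GHG (t CO2e)
      cov = np.diag([9.0, 8.0, 3.0, 12.0])            # cost covariance (fuel-price risk)

      x = cp.Variable(4)                              # fleet shares by technology
      problem = cp.Problem(
          cp.Minimize(exp_cost @ x),                  # expected portfolio cost
          [cp.sum(x) == 1, x >= 0,
           emissions @ x <= 48.0,                     # greenhouse gas cap per vehicle
           cp.quad_form(x, cov) <= 4.0])              # cap on cost variance
      problem.solve()
      print(x.value, problem.value)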