WorldWideScience

Sample records for algorithm asset-based dynamic

  1. Exploring consumption- and asset-based poverty dynamics in Ethiopia

    African Journals Online (AJOL)

    This paper examines the dynamics of wellbeing in Ethiopia by assessing changes in poverty status based on consumption and asset ownership. Using panel data from the first two waves of the Ethiopia Socioeconomic Survey (ESS), we discover that although the cross-sectional poverty remains relatively unchanged ...

  2. Static and dynamic factors in an information-based multi-asset artificial stock market

    Science.gov (United States)

    Ponta, Linda; Pastore, Stefano; Cincotti, Silvano

    2018-02-01

    An information-based multi-asset artificial stock market characterized by different types of stocks and populated by heterogeneous agents is presented. In the market, agents trade risky assets in exchange for cash. Besides the amount of cash and stocks owned, each agent is characterized by sentiments, and agents share their sentiments by means of interactions determined by sparsely connected networks. A central market maker (clearing house mechanism) determines the price process for each stock at the intersection of the demand and supply curves. Single stock price processes exhibit volatility clustering and fat-tailed distributions of returns, whereas the multivariate price process exhibits both static and dynamic stylized facts, i.e., the presence of static factors and common trends. Static factors are studied with reference to the cross-correlation of returns of different stocks. The common trends are investigated considering the variance-covariance matrix of prices. Results point out that the probability distribution of eigenvalues of the cross-correlation matrix of returns shows the presence of sectors, similar to those observed in real empirical data. As regards the dynamic factors, the variance-covariance matrix of prices points to a limited number of asset price series that are independent integrated processes, in close agreement with the empirical evidence from asset price time series of real stock markets. These results underline the crucial dependence of the statistical properties of a multi-asset stock market on the agents' interaction structure.
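
    The static-factor analysis described above can be reproduced generically. Below is a minimal numpy sketch (the `returns` array, its shape, and the sector reading of the spectrum are illustrative assumptions, not the authors' simulator):

    ```python
    import numpy as np

    def correlation_spectrum(returns):
        """Eigenvalue spectrum of the cross-correlation matrix of stock returns.

        returns: (T, N) array of T return observations for N stocks.
        """
        standardized = (returns - returns.mean(axis=0)) / returns.std(axis=0)
        corr = standardized.T @ standardized / len(returns)   # N x N cross-correlation
        eigvals = np.linalg.eigvalsh(corr)                     # sorted ascending
        return corr, eigvals

    # Eigenvalues that detach from the bulk of the spectrum are commonly read as
    # the market mode and sector-like static factors.
    ```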

  3. Exploring consumption- and asset-based poverty dynamics in Ethiopia

    African Journals Online (AJOL)

    poverty status based on consumption and asset ownership. Using panel data ... In recent years Ethiopia has experienced remarkable economic growth with a ...... Understanding the relationship between household demographics and poverty ...

  4. Forecasting financial asset processes: stochastic dynamics via learning neural networks.

    Science.gov (United States)

    Giebel, S; Rainer, M

    2010-01-01

    Models for financial asset dynamics usually take into account their inherent unpredictable nature by including a suitable stochastic component into their process. Unknown (forward) values of financial assets (at a given time in the future) are usually estimated as expectations of the stochastic asset under a suitable risk-neutral measure. This estimation requires the stochastic model to be calibrated to some history of sufficient length in the past. Apart from inherent limitations, due to the stochastic nature of the process, the predictive power is also limited by the simplifying assumptions of the common calibration methods, such as maximum likelihood estimation and regression methods, often performed without weights on the historic time series, or with static weights only. Here we propose a novel method of "intelligent" calibration, using learning neural networks in order to dynamically adapt the parameters of the stochastic model. Hence we have a stochastic process with time-dependent parameters, the dynamics of the parameters being themselves learned continuously by a neural network. Backpropagation in training the previous weights is limited to a certain memory length (in the examples we consider 10 previous business days), which is similar to the maximal time lag of autoregressive processes. We demonstrate the learning efficiency of the new algorithm by tracking the next-day forecasts for the EUR-TRY and EUR-HUF exchange rates.

  5. Development of algorithm for depreciation costs allocation in dynamic input-output industrial enterprise model

    Directory of Open Access Journals (Sweden)

    Keller Alevtina

    2017-01-01

    Full Text Available The article considers the issue of allocating depreciation costs in the dynamic input-output model of an industrial enterprise. Accounting for depreciation costs in such a model improves the policy of fixed assets management. It is particularly relevant to develop an algorithm for the allocation of depreciation costs in the construction of a dynamic input-output model of an industrial enterprise, since such enterprises have a significant amount of fixed assets. An adequate algorithm of this kind allows evaluating the appropriateness of investments in fixed assets and studying the final financial results of an industrial enterprise, depending on management decisions in the depreciation policy. It is necessary to note that the model in question is always degenerate for the enterprise. This is caused by the presence of zero rows in the matrix of capital expenditures for those structural elements unable to generate fixed assets (part of the service units, households, corporate consumers). The paper presents the algorithm for the allocation of depreciation costs for the model. This algorithm was developed by the authors and served as the basis for further development of the flowchart for subsequent implementation in software. The construction of such an algorithm and its use for dynamic input-output models of industrial enterprises is motivated by the internationally accepted effectiveness of input-output models for national and regional economic systems. This is what allows us to consider that the solutions discussed in the article are of interest to economists of various industrial enterprises.

  6. Genetic algorithms with memory- and elitism-based immigrants in dynamic environments.

    Science.gov (United States)

    Yang, Shengxiang

    2008-01-01

    In recent years the genetic algorithm community has shown a growing interest in studying dynamic optimization problems. Several approaches have been devised. The random immigrants and memory schemes are two major ones. The random immigrants scheme addresses dynamic environments by maintaining the population diversity, while the memory scheme aims to adapt genetic algorithms quickly to new environments by reusing historical information. This paper investigates a hybrid memory and random immigrants scheme, called memory-based immigrants, and a hybrid elitism and random immigrants scheme, called elitism-based immigrants, for genetic algorithms in dynamic environments. In these schemes, the best individual from memory or the elite from the previous generation is retrieved as the base for creating immigrants, which are inserted into the population after mutation. In this way, diversity is not only maintained but maintained more efficiently, adapting genetic algorithms to the current environment. Based on a series of systematically constructed dynamic problems, experiments are carried out to compare genetic algorithms with the memory-based and elitism-based immigrants schemes against genetic algorithms with traditional memory and random immigrants schemes and a hybrid memory and multi-population scheme. A sensitivity analysis regarding some key parameters is also carried out. Experimental results show that the memory-based and elitism-based immigrants schemes efficiently improve the performance of genetic algorithms in dynamic environments.
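
    A minimal sketch of the elitism-based immigrants step described above, assuming a binary-encoded population and generic parameter values (the replacement ratio, mutation rate, and function names are illustrative, not the paper's implementation):

    ```python
    import random

    def elitism_based_immigrants(population, fitness, ratio=0.2, p_mut=0.01):
        """Replace the worst individuals with mutated copies of the current elite.

        population: list of bit lists (0/1 genes); fitness: callable on an individual.
        """
        elite = max(population, key=fitness)              # best of the previous generation
        n_imm = int(ratio * len(population))
        immigrants = []
        for _ in range(n_imm):
            clone = elite[:]
            for i in range(len(clone)):                   # bitwise mutation of the elite copy
                if random.random() < p_mut:
                    clone[i] = 1 - clone[i]
            immigrants.append(clone)
        survivors = sorted(population, key=fitness, reverse=True)[:len(population) - n_imm]
        return survivors + immigrants                     # immigrants enter the population
    ```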

  7. Macroeconomic Dynamics of Assets, Leverage and Trust

    Science.gov (United States)

    Rozendaal, Jeroen C.; Malevergne, Yannick; Sornette, Didier

    A macroeconomic model based on the economic variables (i) assets, (ii) leverage (defined as debt over assets) and (iii) trust (defined as the maximum sustainable leverage) is proposed to investigate the role of credit in the dynamics of economic growth, and how credit may be associated with both economic performance and confidence. Our first notable finding is the mechanism of reward/penalty associated with patience, as quantified by the return on assets. In regular economies, where the EBITA/Assets ratio is larger than the cost of debt, starting with a trust higher than leverage results in the highest long-term return on assets (which can be seen as a proxy for economic growth). Therefore, patient economies that first build trust and then increase leverage are positively rewarded. Our second main finding concerns a recommendation for the reaction of a central bank to an external shock that negatively affects economic growth. We find that late policy intervention in the model economy results in the highest long-term return on assets. However, this comes at the cost of suffering longer from the crisis until the intervention occurs. The phenomenon that late intervention is most effective for attaining a high long-term return on assets can be ascribed to the fact that postponing intervention allows trust to increase first, and it is most effective to intervene when trust is high. These results are derived from two fundamental assumptions underlying our model: (a) trust tends to increase when it is above leverage; (b) economic agents learn optimally to adjust debt for a given level of trust and amount of assets. Using a Markov Switching Model for the EBITA/Assets ratio, we have successfully calibrated our model to the empirical data of the return on equity of the EURO STOXX 50 for the time period 2000-2013. We find that the dynamics of leverage and trust can be highly nonmonotonic, with curved trajectories, as a result of the nonlinear coupling between the variables. This ...

  8. A Tutorial on Nonlinear Time-Series Data Mining in Engineering Asset Health and Reliability Prediction: Concepts, Models, and Algorithms

    Directory of Open Access Journals (Sweden)

    Ming Dong

    2010-01-01

    Full Text Available The primary objective of engineering asset management is to optimize assets' service delivery potential and to minimize the related risks and costs over their entire life through the development and application of asset health and usage management, in which health and reliability prediction plays an important role. In real-life situations, where an engineering asset operates under dynamic operational and environmental conditions, the lifetime of an engineering asset is generally described by monitored nonlinear time-series data and is subject to high levels of uncertainty and unpredictability. It has been proved that the application of data mining techniques is very useful for extracting relevant features which can be used as parameters for asset diagnosis and prognosis. In this paper, a tutorial on nonlinear time-series data mining in engineering asset health and reliability prediction is given. Besides covering an overview of health and reliability prediction techniques for engineering assets, this tutorial focuses on concepts, models, algorithms, and applications of hidden Markov models (HMMs) and hidden semi-Markov models (HSMMs) in engineering asset health prognosis, which are representatives of recent engineering asset health prediction techniques.

  9. Improved dynamic-programming-based algorithms for segmentation of masses in mammograms

    International Nuclear Information System (INIS)

    Dominguez, Alfonso Rojas; Nandi, Asoke K.

    2007-01-01

    In this paper, two new boundary tracing algorithms for segmentation of breast masses are presented. These new algorithms are based on the dynamic-programming-based boundary tracing (DPBT) algorithm proposed in Timp and Karssemeijer [S. Timp and N. Karssemeijer, Med. Phys. 31, 958-971 (2004)]. The DPBT algorithm contains two main steps: (1) construction of a local cost function, and (2) application of dynamic programming to the selection of the optimal boundary based on the local cost function. The validity of some assumptions used in the design of the DPBT algorithm is tested in this paper using a set of 349 mammographic images. Based on the results of the tests, modifications to the computation of the local cost function have been designed and have resulted in the Improved-DPBT (IDPBT) algorithm. A procedure for the dynamic selection of the strength of the components of the local cost function is presented that makes these parameters independent of the image dataset. Incorporation of this dynamic selection procedure has produced another new algorithm, which we have called ID2PBT. Methods for the determination of some other parameters of the DPBT algorithm that were not covered in the original paper are presented as well. The merits of the new IDPBT and ID2PBT algorithms are demonstrated experimentally by comparison against the DPBT algorithm. The segmentation results are evaluated based on the area overlap measure and other segmentation metrics. Both of the new algorithms outperform the original DPBT; the improvements in the algorithms' performance are more noticeable around the values of the segmentation metrics corresponding to the highest segmentation accuracy, i.e., the new algorithms produce more optimally segmented regions, rather than a pronounced increase in the average quality of all the segmented regions.
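
    The dynamic-programming step (2) can be illustrated generically. Below is a minimal sketch of minimum-cost boundary tracing over a local cost image, e.g. sampled in polar coordinates around a mass candidate (textbook DP machinery under assumed array shapes, not the IDPBT code; contour closure is ignored):

    ```python
    import numpy as np

    def dp_boundary(cost, max_jump=1):
        """Minimum-cost boundary through an (n_angles, n_radii) local-cost image.

        cost[i, j] is the local cost of placing the boundary at radius j for angle i;
        the boundary may move at most max_jump radius bins between adjacent angles.
        """
        n_ang, n_rad = cost.shape
        acc = np.full((n_ang, n_rad), np.inf)
        back = np.zeros((n_ang, n_rad), dtype=int)
        acc[0] = cost[0]
        for i in range(1, n_ang):
            for j in range(n_rad):
                lo, hi = max(0, j - max_jump), min(n_rad, j + max_jump + 1)
                k = lo + int(np.argmin(acc[i - 1, lo:hi]))
                acc[i, j] = cost[i, j] + acc[i - 1, k]
                back[i, j] = k
        boundary = np.zeros(n_ang, dtype=int)              # trace back the optimal radii
        boundary[-1] = int(np.argmin(acc[-1]))
        for i in range(n_ang - 1, 0, -1):
            boundary[i - 1] = back[i, boundary[i]]
        return boundary
    ```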

  10. New MPPT algorithm based on hybrid dynamical theory

    KAUST Repository

    Elmetennani, Shahrazed

    2014-11-01

    This paper presents a new maximum power point tracking algorithm based on the hybrid dynamical theory. A multicell converter has been considered as an adaptation stage for the photovoltaic chain. The proposed algorithm is a hybrid automaton switching between eight different operating modes, which has been validated by simulation tests under different working conditions. © 2014 IEEE.

  11. New MPPT algorithm based on hybrid dynamical theory

    KAUST Repository

    Elmetennani, Shahrazed; Laleg-Kirati, Taous-Meriem; Benmansour, K.; Boucherit, M. S.; Tadjine, M.

    2014-01-01

    This paper presents a new maximum power point tracking algorithm based on the hybrid dynamical theory. A multicell converter has been considered as an adaptation stage for the photovoltaic chain. The proposed algorithm is a hybrid automaton switching between eight different operating modes, which has been validated by simulation tests under different working conditions. © 2014 IEEE.

  12. A Method Based on Dial's Algorithm for Multi-time Dynamic Traffic Assignment

    Directory of Open Access Journals (Sweden)

    Rongjie Kuang

    2014-03-01

    Full Text Available Because static traffic assignment performs poorly in reflecting the actual situation and dynamic traffic assignment may incur excessive computational cost, multi-time dynamic traffic assignment, which combines static and dynamic traffic assignment, balances precision and cost effectively. A method based on Dial's logit algorithm is proposed in this article to solve the dynamic stochastic user equilibrium problem in dynamic traffic assignment. Before that, a fitting function that approximately reflects the overloaded traffic condition of a link is proposed and used to build the corresponding model. A numerical example is given to illustrate the heuristic procedure of the method and to compare its results with those obtained for the same example by an algorithm from the literature. The results show that the method based on Dial's algorithm is preferable.

  13. Tactical Asset Allocation with Genetic Algorithms

    OpenAIRE

    Manuel Ammann; Christian Zenkner

    2003-01-01

    In this study of tactical asset allocation, we use a genetic algorithm to implement a market timing strategy. The algorithm makes a daily decision whether to invest in the market index or in a riskless asset. The market index is represented by the S&P500 Composite Index, the riskless asset by a 3-month T-Bill. The decision of the genetic algorithm is based on fundamental macroeconomic variables. The association of fundamental variables with a set of operators creates a space of possible strat...

  14. The Patch-Levy-Based Bees Algorithm Applied to Dynamic Optimization Problems

    Directory of Open Access Journals (Sweden)

    Wasim A. Hussein

    2017-01-01

    Full Text Available Many real-world optimization problems are actually of a dynamic nature. These problems change over time in terms of the objective function, decision variables, constraints, and so forth. Therefore, it is very important to study the performance of a metaheuristic algorithm in dynamic environments to assess its robustness in dealing with real-world problems. In addition, it is important to adapt existing metaheuristic algorithms to perform well in dynamic environments. This paper investigates a recently proposed version of the Bees Algorithm, called the Patch-Levy-based Bees Algorithm (PLBA), on solving dynamic problems, and adapts it to deal with such problems. The performance of the PLBA is compared with other BA versions and other state-of-the-art algorithms on a set of dynamic multimodal benchmark problems of different degrees of difficulty. The results of the experiments show that PLBA achieves better results than the other BA variants. The obtained results also indicate that PLBA significantly outperforms some of the other state-of-the-art algorithms and is competitive with others.

  15. Explicit symplectic algorithms based on generating functions for charged particle dynamics

    Science.gov (United States)

    Zhang, Ruili; Qin, Hong; Tang, Yifa; Liu, Jian; He, Yang; Xiao, Jianyuan

    2016-07-01

    Dynamics of a charged particle in canonical coordinates is a Hamiltonian system, and the well-known symplectic algorithm has been regarded as the de facto method for numerical integration of Hamiltonian systems due to its long-term accuracy and fidelity. For long-term simulations with high efficiency, explicit symplectic algorithms are desirable. However, it is generally believed that explicit symplectic algorithms are only available for sum-separable Hamiltonians, and this restriction limits the application of explicit symplectic algorithms to charged particle dynamics. To overcome this difficulty, we combine the familiar sum-split method and a generating function method to construct second- and third-order explicit symplectic algorithms for charged particle dynamics. The generating function method is designed to generate explicit symplectic algorithms for product-separable Hamiltonians of the form H(x, p) = p_i f(x) or H(x, p) = x_i g(p). Applied to simulations of charged particle dynamics, the explicit symplectic algorithms based on generating functions demonstrate superiority in conservation and efficiency.
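
    As a worked illustration of why the product-separable form is convenient (a standard observation stated in generic notation, not quoted from the paper): for a sub-Hamiltonian H(x, p) = p_i f(x), Hamilton's equations read

    ```latex
    \dot{x}_j = \frac{\partial H}{\partial p_j} = \delta_{ij}\, f(x), \qquad
    \dot{p}_j = -\frac{\partial H}{\partial x_j} = -\,p_i\,\frac{\partial f}{\partial x_j}(x),
    ```

    so only the coordinate x_i evolves while the others stay frozen; such pieces admit explicit symplectic updates (in the paper, obtained via generating functions) that are composed by sum-splitting with the updates of the remaining pieces.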

  16. A dynamic regrouping based sequential dynamic programming algorithm for unit commitment of combined heat and power systems

    DEFF Research Database (Denmark)

    Rong, Aiying; Hakonen, Henri; Lahdelma, Risto

    2009-01-01

    efficiency of the plants. We introduce in this paper the DRDP-RSC algorithm, a dynamic regrouping based dynamic programming (DP) algorithm built on linear relaxation of the ON/OFF states of the units and sequential commitment of units in small groups. Relaxed states of the plants are used to reduce...... the dimension of the UC problem and dynamic regrouping is used to improve the solution quality. Numerical results based on real-life data sets show that this algorithm is efficient, and optimal or near-optimal solutions with very small optimality gap are obtained....

  17. Dynamic Asset Allocation Strategies Based on Volatility, Unexpected Volatility and Financial Turbulence

    OpenAIRE

    Grimsrud, David Borkner

    2015-01-01

    Master's thesis in economics and administration, Universitetet i Agder, 2015. This master's thesis looks at the predictive ability of unexpected volatility and financial turbulence, and exploits these measures of financial risk, together with volatility, to create three dynamic asset allocation strategies, testing whether they can outperform a passive and naively diversified buy-and-hold strategy. The idea behind the dynamic strategies is to increase the portfolio return by keeping the portfolio risk at a low a...

  18. Algebraic dynamics algorithm: Numerical comparison with Runge-Kutta algorithm and symplectic geometric algorithm

    Institute of Scientific and Technical Information of China (English)

    WANG ShunJin; ZHANG Hua

    2007-01-01

    Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with the Runge-Kutta algorithm and the symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.

  20. Node-Dependence-Based Dynamic Incentive Algorithm in Opportunistic Networks

    Directory of Open Access Journals (Sweden)

    Ruiyun Yu

    2014-01-01

    Full Text Available Opportunistic networks lack end-to-end paths between source nodes and destination nodes, so communication is mainly carried out by the "store-carry-forward" strategy. Selfish behaviors of rejecting packet relay requests severely worsen network performance. Incentives are an efficient way to reduce selfish behaviors and hence improve the reliability and robustness of the networks. In this paper, we propose the node-dependence-based dynamic gaming incentive (NDI) algorithm, which exploits dynamic repeated gaming to motivate nodes to relay packets for other nodes. The NDI algorithm presents a mechanism for tolerating selfish behaviors of nodes. Reward and punishment methods are also designed based on the node dependence degree. Simulation results show that the NDI algorithm is effective in increasing the delivery ratio and decreasing average latency when there are many selfish nodes in the opportunistic networks.

  1. NONLINEAR FILTER METHOD OF GPS DYNAMIC POSITIONING BASED ON BANCROFT ALGORITHM

    Institute of Scientific and Technical Information of China (English)

    ZHANG Qin; TAO Ben-zao; ZHAO Chao-ying; WANG Li

    2005-01-01

    Because of the terms ignored after linearization, the extended Kalman filter (EKF) becomes a form of suboptimal gradient descent algorithm. A divergent tendency exists in the GPS solution when the filter equations are ill-posed, so deviation in the estimation cannot be avoided. Furthermore, the true solution may be lost in pseudorange positioning because the linearized pseudorange equations are partial solutions. To solve the above problems in GPS dynamic positioning by using the EKF, a closed-form Kalman filter method called the two-stage algorithm is presented for the nonlinear algebraic solution of GPS dynamic positioning, based on the global nonlinear least-squares closed-form method known as the Bancroft numerical algorithm. The method separates the spatial parts from the temporal parts when processing the GPS filter problems, and solves the nonlinear GPS dynamic positioning, thus yielding stable and reliable dynamic positioning solutions.
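
    For reference, a generic sketch of Bancroft's closed-form solution mentioned above (array shapes and the final candidate-selection heuristic are assumptions of this sketch, not necessarily the paper's choices):

    ```python
    import numpy as np

    def lorentz(p, q):
        """Lorentz (Minkowski) inner product used in Bancroft's method."""
        return p[..., 0]*q[..., 0] + p[..., 1]*q[..., 1] + p[..., 2]*q[..., 2] - p[..., 3]*q[..., 3]

    def bancroft(sat_pos, pseudoranges):
        """Closed-form GPS solution: returns (x, y, z, clock-bias*c).

        sat_pos: (n, 3) satellite positions; pseudoranges: (n,) measured pseudoranges.
        """
        B = np.column_stack([sat_pos, pseudoranges])    # rows a_i = (s_i, rho_i)
        alpha = 0.5 * lorentz(B, B)
        Bplus = np.linalg.pinv(B)
        d = Bplus @ np.ones(len(pseudoranges))
        f = Bplus @ alpha
        # quadratic <d,d> L^2 + 2(<f,d> - 1) L + <f,f> = 0 in the Lorentz-norm term L
        roots = np.roots([lorentz(d, d), 2.0 * (lorentz(f, d) - 1.0), lorentz(f, f)])
        M = np.diag([1.0, 1.0, 1.0, -1.0])
        candidates = [M @ (f + lam * d) for lam in roots.real]

        def residual(u):                                 # pick the physically consistent root
            predicted = np.linalg.norm(sat_pos - u[:3], axis=1) + u[3]
            return np.linalg.norm(predicted - pseudoranges)

        return min(candidates, key=residual)
    ```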

  2. A block chain based architecture for asset management in coalition operations

    Science.gov (United States)

    Verma, Dinesh; Desai, Nirmit; Preece, Alun; Taylor, Ian

    2017-05-01

    To support dynamic communities of interests in coalition operations, new architectures for efficient sharing of ISR assets are needed. The use of blockchain technology in wired business environments, such as digital currency systems, offers an interesting solution by creating a way to maintain a distributed shared ledger without requiring a single trusted authority. In this paper, we discuss how a blockchain-based system can be modified to provide a solution for dynamic asset sharing amongst coalition members, enabling the creation of a logically centralized asset management system by a seamless policy-compliant federation of different coalition systems. We discuss the use of blockchain for three different types of assets in a coalition context, showing how blockchain can offer a suitable solution for sharing assets in those environments. We also discuss the limitations in the current implementations of blockchain which need to be overcome for the technology to become more effective in a decentralized tactical edge environment.

  3. Loss Aversion, Adaptive Beliefs, and Asset Pricing Dynamics

    Directory of Open Access Journals (Sweden)

    Kamal Samy Selim

    2015-01-01

    Full Text Available We study asset pricing dynamics in an artificial financial market model. The financial market is populated with agents following two heterogeneous trading beliefs, the technical and the fundamental prediction rules. Agents switch between trading rules according to their past performance. The agents are loss averse over asset price fluctuations. Loss aversion behaviour depends on the past performance of the trading strategies in terms of an evolutionary fitness measure. We propose a novel application of prospect theory to agent-based modelling, and by simulation, the effect of the evolutionary fitness measure on the adaptive belief system is investigated. For comparison, we study the pricing dynamics of a financial market populated with chartists who perceive losses and gains symmetrically. One of our contributions is validating the agent-based models using real financial data of the Egyptian Stock Exchange. We find that our framework can explain important stylized facts in financial time series, such as random walk price behaviour, bubbles and crashes, fat-tailed return distributions, power-law tails in the distribution of returns, excess volatility, volatility clustering, the absence of autocorrelation in raw returns, and the power-law autocorrelations in absolute returns. In addition to this, we find that loss aversion improves market quality and market stability.
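
    Switching between the technical and fundamental rules as a function of past performance is commonly formalised with a discrete-choice (multinomial logit) rule; a generic version, with notation assumed here rather than taken from the paper, is

    ```latex
    n_{h,t} = \frac{\exp\!\big(\beta\, U_{h,t-1}\big)}{\sum_{k} \exp\!\big(\beta\, U_{k,t-1}\big)},
    ```

    where n_{h,t} is the fraction of agents using rule h, U_{h,t-1} its evolutionary fitness (here a loss-averse, prospect-theory-weighted performance measure), and beta the intensity of choice.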

  4. Optimization algorithm based on densification and dynamic canonical descent

    Science.gov (United States)

    Bousson, K.; Correia, S. D.

    2006-07-01

    Stochastic methods have gained some popularity in global optimization in that most of them do not assume the cost functions to be differentiable. They have the capability to avoid being trapped by local optima, and may converge even faster than gradient-based optimization methods on some problems. The present paper proposes an optimization method which reduces the search space by means of densification curves, coupled with the dynamic canonical descent algorithm. The performance of the new method is demonstrated on several known problems classically used for testing optimization algorithms, and it proves to outperform competitive algorithms such as simulated annealing and genetic algorithms.

  5. Dynamic asset allocation for bank under stochastic interest rates.

    OpenAIRE

    Chakroun, Fatma; Abid, Fathi

    2014-01-01

    This paper considers the optimal asset allocation strategy for a bank with stochastic interest rates when there are three types of assets: the bank account, loans and securities. The asset allocation problem is to maximize the expected utility from terminal wealth of a bank's shareholders over a finite time horizon. As a consequence, we apply a dynamic programming principle to solve the Hamilton-Jacobi-Bellman (HJB) equation explicitly in the case of the CRRA utility function. A case study is given ...

  6. Modeling the coupled return-spread high frequency dynamics of large tick assets

    Science.gov (United States)

    Curato, Gianbiagio; Lillo, Fabrizio

    2015-01-01

    Large tick assets, i.e. assets where one tick movement is a significant fraction of the price and the bid-ask spread is almost always equal to one tick, display dynamics in which price changes and spread are strongly coupled. We present an approach based on the hidden Markov model, also known in econometrics as the Markov switching model, for the dynamics of price changes, where the latent Markov process is described by the transitions between spreads. We then use a finite Markov mixture of logit regressions on past squared price changes to describe temporal dependencies in the dynamics of price changes. The model can thus be seen as a double chain Markov model. We show that the model describes the shape of the price change distribution at different time scales, volatility clustering, and the anomalous decrease of kurtosis. We calibrate our models on Nasdaq stocks and show that this model reproduces remarkably well the statistical properties of real data.

  7. A New Recommendation Algorithm Based on User’s Dynamic Information in Complex Social Network

    Directory of Open Access Journals (Sweden)

    Jiujun Cheng

    2015-01-01

    Full Text Available The development of recommendation systems brings with it the problems of data sparsity, cold start, scalability, and privacy protection. Even though many papers have proposed improved recommendation algorithms to solve those problems, there is still plenty of room for improvement. In a complex social network, we can take full advantage of dynamic information such as a user's hobbies, social relationships, and historical logs to improve the performance of the recommendation system. In this paper, we propose a new recommendation algorithm based on social users' dynamic information to solve the cold start problem of the traditional collaborative filtering algorithm while also considering dynamic factors. The algorithm takes users' response information, dynamic interest, and the classic similarity measure of collaborative filtering into account. We then compare the newly proposed recommendation algorithm with the traditional user-based collaborative filtering algorithm and present findings from the experiments. The results demonstrate that the proposed algorithm has better recommendation performance than the collaborative filtering algorithm in the cold start scenario.

  8. Dynamic Sensor Management Algorithm Based on Improved Efficacy Function

    Directory of Open Access Journals (Sweden)

    TANG Shujuan

    2016-01-01

    Full Text Available A dynamic sensor management algorithm based on an improved efficacy function is proposed to solve the multi-target, multi-sensor management problem. The tracking task precision requirement (TPR), target priority, and sensor use cost are considered in establishing the efficacy function as a weighted sum of the normalized values of the three factors. Dynamic sensor management is accomplished by controlling the difference between the desired covariance matrix (DCM) and the filtering covariance matrix (FCM). The DCM is preassigned in terms of the TPR and the FCM is obtained by the centralized sequential Kalman filtering algorithm. The simulation results prove that the proposed method can meet the requirements of desired tracking precision and adjust sensor selection according to target priority and the cost of sensor usage. This makes the sensor management scheme more reasonable and effective.
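
    A minimal sketch of such a weighted-sum efficacy score (the weights, the sign convention on cost, and the normalisation are illustrative assumptions, not the paper's exact function):

    ```python
    def efficacy(tpr, priority, cost, weights=(0.5, 0.3, 0.2)):
        """Weighted sum of normalised precision requirement, target priority and sensor cost.

        Each argument maps (sensor, target) pairs to raw values; all share the same keys.
        """
        def normalise(d):
            lo, hi = min(d.values()), max(d.values())
            return {k: (v - lo) / (hi - lo) if hi > lo else 0.0 for k, v in d.items()}

        tpr_n, pri_n, cost_n = normalise(tpr), normalise(priority), normalise(cost)
        w1, w2, w3 = weights
        # higher precision requirement and priority raise the score, cost lowers it
        return {k: w1 * tpr_n[k] + w2 * pri_n[k] - w3 * cost_n[k] for k in tpr}
    ```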

  9. Augmented Reality for Searching Potential Assets in Medan using GPS based Tracking

    Science.gov (United States)

    Muchtar, M. A.; Syahputra, M. F.; Syahputra, N.; Ashrafia, S.; Rahmat, R. F.

    2017-01-01

    Every city is required to introduce the variety of potential assets it has so that people know how to utilize them or develop their area. Potential assets include infrastructure, facilities, people, communities, organizations, and customs that affect the characteristics and way of life in Medan. Due to a lack of socialization and information, most people in Medan know only a few of these assets. Recently, many mobile apps have provided search and mapping features that can be used to find the location of potential assets in the user's area. However, the available information, such as text and digital maps, sometimes does not help the user clearly and dynamically. Therefore, Augmented Reality technology, which can display information over the real-world view, is implemented in this research so that the information is more interactive and easily understood by the user. The technology is implemented in a mobile app using GPS-based tracking, with the coordinates of the user's smartphone defined as a marker, so that it can help people dynamically and easily find the location of potential assets in the nearest area based on the direction of the user's camera view.

  10. A Dynamic Neighborhood Learning-Based Gravitational Search Algorithm.

    Science.gov (United States)

    Zhang, Aizhu; Sun, Genyun; Ren, Jinchang; Li, Xiaodong; Wang, Zhenjie; Jia, Xiuping

    2018-01-01

    Balancing exploration and exploitation according to evolutionary states is crucial to meta-heuristic search (M-HS) algorithms. Owing to its simplicity in theory and effectiveness in global optimization, the gravitational search algorithm (GSA) has attracted increasing attention in recent years. However, the tradeoff between exploration and exploitation in GSA is achieved mainly by adjusting the size of an archive that stores the superior agents after fitness sorting in each iteration. Since the global property of this archive remains unchanged over the whole evolutionary process, GSA emphasizes exploitation over exploration and suffers from rapid loss of diversity and premature convergence. To address these problems, in this paper we propose a dynamic neighborhood learning (DNL) strategy to replace this archive-based model and thereby present a DNL-based GSA (DNLGSA). The method incorporates local and global neighborhood topologies for enhancing exploration and obtaining an adaptive balance between exploration and exploitation. The local neighborhoods are dynamically formed based on evolutionary states. To delineate the evolutionary states, two convergence criteria, named limit value and population diversity, are introduced. Moreover, a mutation operator is designed for escaping from local optima on the basis of the evolutionary states. The proposed algorithm was evaluated on 27 benchmark problems with different characteristics and various difficulties. The results reveal that DNLGSA exhibits competitive performance when compared with a variety of state-of-the-art M-HS algorithms. Moreover, the incorporation of the local neighborhood topology reduces the number of calculations of gravitational force and thus alleviates the high computational cost of GSA.

  11. "Asset Ownership Across Generations"

    OpenAIRE

    Ngina S. Chiteji; Frank P. Stafford

    2000-01-01

    This paper examines cross-generational connections in asset ownership. It begins by presenting a theoretical framework that develops the distinction between the intergenerational transfer of knowledge about financial assets and the direct transfer of dollars from parents to children. Its analysis of data from the Panel Study of Income Dynamics (PSID) reveals intergenerational correlations in asset ownership, and we find evidence to suggest that parental asset ownership or family-based exposur...

  12. Dynamic asset allocation and downside-risk aversion

    NARCIS (Netherlands)

    A.B. Berkelaar (Arjan); R.R.P. Kouwenberg (Roy)

    2000-01-01

    textabstractThis paper considers dynamic asset allocation in a mean versus downside-risk framework. We derive closed-form solutions for the optimal portfolio weights when returns are lognormally distributed. Moreover, we study the impact of skewed and fat-tailed return distributions. We find that

  13. Dynamic Allocation or Diversification: A Regime-Based Approach to Multiple Assets

    DEFF Research Database (Denmark)

    Nystrup, Peter; Hansen, Bo William; Larsen, Henrik Olejasz

    2018-01-01

    ’ behavior and a new, more intuitive way of inferring the hidden market regimes. The empirical results show that regime-based asset allocation is profitable, even when compared to a diversified benchmark portfolio. The results are robust because they are based on available market data with no assumptions...... about forecasting skills....

  14. Dynamic training algorithm for dynamic neural networks

    International Nuclear Information System (INIS)

    Tan, Y.; Van Cauwenberghe, A.; Liu, Z.

    1996-01-01

    The widely used backpropagation algorithm for training neural networks, based on gradient descent, has the significant drawback of slow convergence. A Gauss-Newton-method-based recursive least squares (RLS) type algorithm with dynamic error backpropagation is presented to speed up the learning procedure of neural networks with local recurrent terms. Finally, simulation examples concerning the application of the RLS type algorithm to the identification of nonlinear processes using a local recurrent neural network are also included in this paper.

  15. An evolutionary algorithm technique for intelligence, surveillance, and reconnaissance plan optimization

    Science.gov (United States)

    Langton, John T.; Caroli, Joseph A.; Rosenberg, Brad

    2008-04-01

    To support an Effects Based Approach to Operations (EBAO), Intelligence, Surveillance, and Reconnaissance (ISR) planners must optimize collection plans within an evolving battlespace. A need exists for a decision support tool that allows ISR planners to rapidly generate and rehearse high-performing ISR plans that balance multiple objectives and constraints to address dynamic collection requirements for assessment. To meet this need we have designed an evolutionary algorithm (EA)-based "Integrated ISR Plan Analysis and Rehearsal System" (I2PARS) to support Effects-based Assessment (EBA). I2PARS supports ISR mission planning and dynamic replanning to coordinate assets and optimize their routes, allocation and tasking. It uses an evolutionary algorithm to address the large parametric space of route-finding problems which is sometimes discontinuous in the ISR domain because of conflicting objectives such as minimizing asset utilization yet maximizing ISR coverage. EAs are uniquely suited for generating solutions in dynamic environments and also allow user feedback. They are therefore ideal for "streaming optimization" and dynamic replanning of ISR mission plans. I2PARS uses the Non-dominated Sorting Genetic Algorithm (NSGA-II) to automatically generate a diverse set of high performing collection plans given multiple objectives, constraints, and assets. Intended end users of I2PARS include ISR planners in the Combined Air Operations Centers and Joint Intelligence Centers. Here we show the feasibility of applying the NSGA-II algorithm and EAs in general to the ISR planning domain. Unique genetic representations and operators for optimization within the ISR domain are presented along with multi-objective optimization criteria for ISR planning. Promising results of the I2PARS architecture design, early software prototype, and limited domain testing of the new algorithm are discussed. We also present plans for future research and development, as well as technology
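
    The selection core of NSGA-II, non-dominated sorting of candidate plans by multiple objectives, can be sketched generically (minimisation of every objective is assumed; this is textbook machinery, not the I2PARS code):

    ```python
    def dominates(a, b):
        """a dominates b if it is no worse in every objective and better in at least one."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def non_dominated_fronts(objectives):
        """Split a list of objective vectors into successive Pareto fronts (index lists)."""
        remaining = set(range(len(objectives)))
        fronts = []
        while remaining:
            front = [i for i in remaining
                     if not any(dominates(objectives[j], objectives[i])
                                for j in remaining if j != i)]
            fronts.append(front)
            remaining -= set(front)
        return fronts

    # Example: objectives = [(asset_hours, -coverage), ...]; fronts[0] then holds the
    # plans that are not outperformed on both asset utilisation and ISR coverage.
    ```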

  16. PRESS-based EFOR algorithm for the dynamic parametrical modeling of nonlinear MDOF systems

    Science.gov (United States)

    Liu, Haopeng; Zhu, Yunpeng; Luo, Zhong; Han, Qingkai

    2017-09-01

    In response to the identification problem concerning multi-degree of freedom (MDOF) nonlinear systems, this study presents the extended forward orthogonal regression (EFOR) based on predicted residual sums of squares (PRESS) to construct a nonlinear dynamic parametrical model. The proposed parametrical model is based on the non-linear autoregressive with exogenous inputs (NARX) model and aims to explicitly reveal the physical design parameters of the system. The PRESS-based EFOR algorithm is proposed to identify such a model for MDOF systems. By using the algorithm, we built a common-structured model based on the fundamental concept of evaluating its generalization capability through cross-validation. The resulting model aims to prevent over-fitting with poor generalization performance caused by the average error reduction ratio (AERR)-based EFOR algorithm. Then, a functional relationship is established between the coefficients of the terms and the design parameters of the unified model. Moreover, a 5-DOF nonlinear system is taken as a case to illustrate the modeling of the proposed algorithm. Finally, a dynamic parametrical model of a cantilever beam is constructed from experimental data. Results indicate that the dynamic parametrical model of nonlinear systems, which depends on the PRESS-based EFOR, can accurately predict the output response, thus providing a theoretical basis for the optimal design of modeling methods for MDOF nonlinear systems.
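
    The PRESS criterion driving term selection can be computed cheaply for any linear-in-the-parameters (e.g. NARX) regression through the hat matrix; a generic sketch, not the EFOR implementation itself:

    ```python
    import numpy as np

    def press_statistic(Phi, y):
        """Predicted residual sum of squares for the linear-in-parameters model y ~ Phi @ theta.

        Uses the leave-one-out identity e_i / (1 - h_ii) with hat matrix H = Phi pinv(Phi).
        """
        H = Phi @ np.linalg.pinv(Phi)            # pinv also handles rank deficiency
        residuals = y - H @ y
        loo_residuals = residuals / (1.0 - np.diag(H))
        return float(np.sum(loo_residuals ** 2))
    ```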

  17. A dynamic decision model for portfolio investment and assets management

    Institute of Scientific and Technical Information of China (English)

    QIAN Edward Y.; FENG Ying; HIGGISION James

    2005-01-01

    This paper addresses a dynamic portfolio investment problem. It discusses how we can dynamically choose candidate assets, achieve the maximum possible revenue and reduce the risk to the minimum level. The paper generalizes Markowitz's portfolio selection theory and Sharpe's rule for investment decisions. An analytical solution is presented to show how an institutional or individual investor can combine Markowitz's portfolio selection theory, the generalized Sharpe's rule and Value-at-Risk (VaR) to find candidate assets and the optimal level of position sizes for investment (dis-investment). The result shows that the generalized Markowitz's portfolio selection theory and generalized Sharpe's rule improve decision making for investment.
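
    For reference, the classical mean-variance building block behind such combined rules is the maximum-Sharpe-ratio (tangency) portfolio; in standard notation (not the authors' generalized formulation),

    ```latex
    w^{*} = \frac{\Sigma^{-1}\!\left(\mu - r_f \mathbf{1}\right)}{\mathbf{1}^{\top}\Sigma^{-1}\!\left(\mu - r_f \mathbf{1}\right)},
    ```

    where mu and Sigma are the expected returns and covariance matrix of the candidate assets and r_f is the risk-free rate; a VaR constraint then caps the admissible position size along this direction.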

  18. New segmentation-based tone mapping algorithm for high dynamic range image

    Science.gov (United States)

    Duan, Weiwei; Guo, Huinan; Zhou, Zuofeng; Huang, Huimin; Cao, Jianzhong

    2017-07-01

    The traditional tone mapping algorithm for the display of high dynamic range (HDR) images has the drawback of losing the impression of brightness, contrast, and color information. To overcome this phenomenon, we propose a new tone mapping algorithm based on dividing the image into different exposure regions. Firstly, the over-exposure region is determined using the Local Binary Pattern information of the HDR image. Then, based on the peak and average gray levels of the histogram, the under-exposure and normal-exposure regions of the HDR image are selected separately. Finally, the different exposure regions are mapped by differentiated tone mapping methods to get the final result. The experimental results show that the proposed algorithm achieves better performance in both visual quality and the objective contrast criterion than other algorithms.

  19. The generation algorithm of arbitrary polygon animation based on dynamic correction

    Directory of Open Access Journals (Sweden)

    Hou Ya Wei

    2016-01-01

    Full Text Available This paper, based on a key-frame polygon sequence, proposes a method that makes use of dynamic correction to produce continuous animation. Firstly, we use a quadratic Bezier curve to interpolate the corresponding edge vectors of consecutive frames of the polygon sequence and realize the continuity of the animation sequence. Then, according to the characteristics of the Bezier curve, we dynamically adjust the interpolation parameters to make the changes smooth. Meanwhile, we make use of the Lagrange multiplier method to correct the polygon so that it closes. Finally, we provide the concrete algorithm flow and present numerical experiment results. The experimental results show that the algorithm achieves an excellent effect.
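
    A minimal sketch of the quadratic Bezier interpolation step between corresponding edge vectors of two key-frame polygons (the control-vector argument and names are assumptions of this sketch; the Lagrange-multiplier closure correction mentioned above is omitted):

    ```python
    import numpy as np

    def quadratic_bezier(p0, p1, p2, t):
        """Evaluate a quadratic Bezier curve at parameter t in [0, 1]."""
        return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2

    def interpolate_edges(edges_a, edges_b, control, t):
        """Blend corresponding (n, 2) edge-vector arrays of two key-frame polygons.

        control is an (n, 2) array of Bezier control vectors; its choice shapes the
        in-between frames. The interpolated polygon generally needs a separate
        closure correction, which is not performed here.
        """
        return quadratic_bezier(edges_a, control, edges_b, t)
    ```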

  20. Scheduling with Group Dynamics: a Multi-Robot Task Allocation Algorithm based on Vacancy Chains

    National Research Council Canada - National Science Library

    Dahl, Torbjorn S; Mataric, Maja J; Sukhatme, Gaurav S

    2002-01-01

    .... We present a multi-robot task allocation algorithm that is sensitive to group dynamics. Our algorithm is based on vacancy chains, a resource distribution process common in human and animal societies...

  1. Algorithm for Stabilizing a POD-Based Dynamical System

    Science.gov (United States)

    Kalb, Virginia L.

    2010-01-01

    This algorithm provides a new way to improve the accuracy and asymptotic behavior of a low-dimensional system based on the proper orthogonal decomposition (POD). Given a data set representing the evolution of a system of partial differential equations (PDEs), such as the Navier-Stokes equations for incompressible flow, one may obtain a low-dimensional model in the form of ordinary differential equations (ODEs) that should model the dynamics of the flow. Temporal sampling of the direct numerical simulation of the PDEs produces a spatial time series. The POD extracts the temporal and spatial eigenfunctions of this data set. Truncating to retain only the most energetic modes, followed by Galerkin projection of these modes onto the PDEs, yields a dynamical system of ordinary differential equations for the time-dependent behavior of the flow. In practice, the steps leading to this system of ODEs entail numerically computing first-order derivatives of the mean data field and the eigenfunctions, and the computation of many inner products. This is far from a perfect process, and often results in a lack of long-term stability of the system and incorrect asymptotic behavior of the model. This algorithm describes a new stabilization method that utilizes the temporal eigenfunctions to derive correction terms for the coefficients of the dynamical system to significantly reduce these errors.

  2. Computer Based Asset Management System For Commercial Banks

    Directory of Open Access Journals (Sweden)

    Amanze

    2015-08-01

    Full Text Available ABSTRACT The Computer-based Asset Management System is a web-based system. It allows commercial banks to keep track of their assets. The main advantages of this system are effective asset management through record keeping and information retrieval. In this research I gather information to define the requirements of the new application and examine how commercial banks manage their assets.

  3. A Scheduling Algorithm for Cloud Computing System Based on the Driver of Dynamic Essential Path.

    Science.gov (United States)

    Xie, Zhiqiang; Shao, Xia; Xin, Yu

    2016-01-01

    To solve the problem of task scheduling in cloud computing systems, this paper proposes a scheduling algorithm for cloud computing based on the driver of dynamic essential path (DDEP). This algorithm applies a predecessor-task layer priority strategy to handle the constraint relations among task nodes. The strategy assigns different priority values to every task node based on its scheduling order as affected by the constraint relations, and the task node list is generated from these priority values. To address the scheduling order problem in which task nodes have the same priority value, a dynamic essential long path strategy is proposed. This strategy computes the dynamic essential path of the pre-scheduled task nodes based on the actual computation cost and communication cost of each task node in the scheduling process. The task node that has the longest dynamic essential path is scheduled first, as the completion time of the task graph is indirectly influenced by the finishing time of task nodes on the longest dynamic essential path. Finally, we demonstrate the proposed algorithm via simulation experiments using Matlab tools. The experimental results indicate that the proposed algorithm can effectively reduce the task makespan in most cases and meet a high-quality performance objective.

  4. An Approach for State Observation in Dynamical Systems Based on the Twisting Algorithm

    DEFF Research Database (Denmark)

    Schmidt, Lasse; Andersen, Torben Ole; Pedersen, Henrik C.

    2013-01-01

    This paper discusses a novel approach to state estimation in dynamical systems, with special focus on hydraulic valve-cylinder drives. The proposed observer structure is based on the framework of the so-called twisting algorithm. This algorithm utilizes the sign of the state being the target...
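
    For context, the generic twisting law on which such sliding-mode schemes are built drives a sliding variable and its derivative to zero in finite time; in standard textbook form (not the specific observer of this paper),

    ```latex
    u = -\,r_1\,\operatorname{sign}(\sigma) \;-\; r_2\,\operatorname{sign}(\dot{\sigma}),
    \qquad r_1 > r_2 > 0,
    ```

    with gains chosen large enough relative to the bounded disturbances acting on the second derivative of sigma.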

  5. Nonuniform Sparse Data Clustering Cascade Algorithm Based on Dynamic Cumulative Entropy

    Directory of Open Access Journals (Sweden)

    Ning Li

    2016-01-01

    Full Text Available A small amount of prior knowledge and randomly chosen initial cluster centers have a direct impact on the accuracy of the performance of an iterative clustering algorithm. In this paper, we propose a new algorithm to compute initial cluster centers for k-means clustering and the best number of clusters with little prior knowledge, and to optimize the clustering result. It constructs a Euclidean distance control factor based on the aggregation density sparse degree to select the initial cluster centers of nonuniform sparse data and obtains initial data clusters by a multidimensional diffusion density distribution. A multiobjective clustering approach based on dynamic cumulative entropy is adopted to optimize the initial data clusters and the best number of clusters. The experimental results show that the newly proposed algorithm performs well in obtaining the initial cluster centers for the k-means algorithm and effectively improves the clustering accuracy of nonuniform sparse data by about 5%.

  6. Optimal Control of Complex Systems Based on Improved Dual Heuristic Dynamic Programming Algorithm

    Directory of Open Access Journals (Sweden)

    Hui Li

    2017-01-01

    Full Text Available When applied to solving the data modeling and optimal control problems of complex systems, the dual heuristic dynamic programming (DHP) technique based on the BP neural network algorithm (BP-DHP) suffers from limited prediction accuracy, slow convergence, poor stability, and so forth. In this paper, a DHP technique based on the Extreme Learning Machine (ELM) algorithm (ELM-DHP) is proposed. By constructing three kinds of network structures, the paper gives the detailed realization process of the DHP technique with the ELM. A controller designed upon the ELM-DHP algorithm is used to control a molecular distillation system with complex features such as multivariability, strong coupling, and nonlinearity. Finally, the effectiveness of the algorithm is verified by simulations comparing DHP and HDP algorithms based on the ELM and BP neural networks. The algorithm can also be applied to solve the data modeling and optimal control problems of similar complex systems.
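
    The ELM building block itself is simple to sketch: hidden-layer weights are drawn at random and only the output weights are fitted by least squares (layer size, activation, and names below are generic choices, not those of the ELM-DHP controller):

    ```python
    import numpy as np

    def elm_fit(X, Y, n_hidden=50, seed=0):
        """Fit an Extreme Learning Machine: random hidden layer, least-squares output layer."""
        rng = np.random.default_rng(seed)
        W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights, never trained
        b = rng.normal(size=n_hidden)                 # random hidden biases
        H = np.tanh(X @ W + b)                        # hidden-layer activations
        beta = np.linalg.pinv(H) @ Y                  # output weights by least squares
        return W, b, beta

    def elm_predict(X, W, b, beta):
        return np.tanh(X @ W + b) @ beta
    ```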

  7. Portfolio management fees: assets or profits based compensation?

    OpenAIRE

    Gil-Bazo, Javier

    2001-01-01

    This paper compares assets-based portfolio management fees to profits-based fees. Whilst both forms of compensation can provide appropriate risk incentives, fund managers' limited liability induces more excess risk-taking under a profits-based fee contract. On the other hand, an assets-based fee is more costly to investors. In Spain, where the law explicitly permits both forms of remuneration, assets-based fees are observed far more frequently. Under this type of compensation, the paper provid...

  8. Dynamic route guidance algorithm based on artificial immune system

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    To improve the performance of K-shortest-paths search in intelligent traffic guidance systems, this paper proposes an optimal search algorithm based on intelligent optimization search theory and the memory mechanism of vertebrate immune systems. This algorithm, applied to the urban traffic network model established by the node-expanding method, can conveniently realize K-shortest-paths search in urban traffic guidance systems. Because of the immune memory and global parallel search ability of artificial immune systems, the K shortest paths can be found without any repetition, which indicates the superiority of the algorithm over conventional ones. Not only does the algorithm offer better parallelism, it also prevents the premature convergence that often occurs in genetic algorithms. Thus, it is especially suitable for the real-time requirements of traffic guidance systems and other engineering optimization applications. A case study verifies the efficiency and practicability of the aforementioned algorithm.

  9. Liquid markets and market liquids. Collective and single-asset dynamics in financial markets

    Science.gov (United States)

    Cuniberti, G.; Matassini, L.

    2001-04-01

    We characterize the collective phenomena of a liquid market. By interpreting the behavior of a no-arbitrage N asset market in terms of a particle system scenario, (thermo)dynamical-like properties can be extracted from the asset kinetics. In this scheme the mechanisms of the particle interaction can be widely investigated. We test the verisimilitude of our construction on two-decade stock market daily data (DAX30) and show the result obtained for the interaction potential among asset pairs.

  10. Congested Link Inference Algorithms in Dynamic Routing IP Network

    Directory of Open Access Journals (Sweden)

    Yu Chen

    2017-01-01

    Full Text Available The performance of current congested link inference algorithms, such as the classical CLINK algorithm, degrades markedly in dynamic routing IP networks. To overcome this problem, based on the assumptions of the Markov property and time homogeneity, we build a simplified Variable Structure Discrete Dynamic Bayesian (VSDDB) network model of a dynamic routing IP network. Under the simplified VSDDB model, based on the Bayesian Maximum A Posteriori (BMAP) and Rest Bayesian Network Model (RBNM), we propose an Improved CLINK (ICLINK) algorithm. Considering that congestion on multiple links usually happens concurrently, we also propose the CLILRS algorithm (Congested Link Inference algorithm based on Lagrangian Relaxation Subgradient) to infer the set of congested links. We validate our results by analysis, simulation, and actual Internet experiments.

  11. PID feedback controller used as a tactical asset allocation technique: The G.A.M. model

    Science.gov (United States)

    Gandolfi, G.; Sabatini, A.; Rossolini, M.

    2007-09-01

    The objective of this paper is to illustrate a tactical asset allocation technique utilizing the PID controller. The proportional-integral-derivative (PID) controller is widely applied in most industrial processes; it has been successfully used for over 50 years and is used in more than 95% of plant processes. It is a robust and easily understood algorithm that can provide excellent control performance in spite of the diverse dynamic characteristics of the process plant. In finance, the process plant controlled by the PID controller can be represented by financial market assets forming a portfolio. More specifically, in the present work, the plant is represented by a risk-adjusted return variable. Money and portfolio managers' main target is to achieve a relevant risk-adjusted return in their managing activities. In the literature and in the financial industry, numerous kinds of return/risk ratios are commonly studied and used. The aim of this work is to develop a tactical asset allocation technique consisting of the optimization of risk-adjusted return by means of asset allocation methodologies based on the model-free PID feedback control procedure. The process plant does not need to be mathematically modeled: the PID control action lies in altering the portfolio asset weights, according to the PID algorithm and its Ziegler-Nichols-tuned parameters, in order to approach the desired portfolio risk-adjusted return efficiently.
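
    A heavily simplified sketch of the idea, a discrete PID loop nudging the risky-asset weight toward a target risk-adjusted return (the error definition, gain values, and clipping are assumptions of this sketch, not the G.A.M. model itself):

    ```python
    def pid_step(error, state, kp, ki, kd, dt=1.0):
        """One update of a discrete PID controller; state carries the integral and last error."""
        state["integral"] += error * dt
        derivative = (error - state["prev_error"]) / dt
        state["prev_error"] = error
        return kp * error + ki * state["integral"] + kd * derivative

    def rebalance(weight, realized_ratio, target_ratio, state, kp=0.5, ki=0.05, kd=0.1):
        """Adjust the risky-asset weight from the gap between realized and target risk-adjusted return."""
        error = target_ratio - realized_ratio
        weight += pid_step(error, state, kp, ki, kd)
        return min(1.0, max(0.0, weight))     # long-only sketch: the remainder stays in cash

    # Before the first call: state = {"integral": 0.0, "prev_error": 0.0}
    ```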

  12. Algebraic dynamics solutions and algebraic dynamics algorithm for nonlinear ordinary differential equations

    Institute of Scientific and Technical Information of China (English)

    WANG; Shunjin; ZHANG; Hua

    2006-01-01

    The problem of preserving fidelity in numerical computation of nonlinear ordinary differential equations is studied in terms of preserving the local differential structure and approximating the global integration structure of the dynamical system. The ordinary differential equations are lifted to the corresponding partial differential equations in the framework of algebraic dynamics, and a new algorithm, the algebraic dynamics algorithm, is proposed based on the exact analytical solutions of the ordinary differential equations obtained by the algebraic dynamics method. In the new algorithm, the time evolution of the ordinary differential system is described locally by the time translation operator and globally by the time evolution operator. The exact analytical piecewise solution of the ordinary differential equations is expressed in terms of a Taylor series with a local convergence radius, and its finite-order truncation leads to the new numerical algorithm with a controllable precision better than the Runge-Kutta algorithm and the symplectic geometric algorithm.
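
    The record describes truncating a local Taylor expansion to a finite order as the basic numerical step. A minimal generic sketch of that idea, assuming an ODE whose higher time derivatives can be written down by repeated differentiation, is shown below for the Riccati equation y' = 1 + y^2 (exact solution tan t). It illustrates the truncated-Taylor step only, not the algebraic-dynamics construction itself.

```python
import math

def taylor_step(y, h, derivs):
    """One explicit step of a truncated Taylor-series integrator:
    y(t+h) ~ y + sum_k  y^(k) * h^k / k!  over the supplied derivatives."""
    new_y = y
    fact = 1.0
    for k, d in enumerate(derivs, start=1):
        fact *= k
        new_y += d * h**k / fact
    return new_y

def riccati_derivatives(y):
    """Total time derivatives for y' = 1 + y^2 (solution y = tan(t), y(0) = 0),
    obtained by differentiating the right-hand side repeatedly."""
    d1 = 1.0 + y * y
    d2 = 2.0 * y * d1
    d3 = 2.0 * d1 * d1 + 2.0 * y * d2
    return [d1, d2, d3]            # third-order truncation

h, y, t = 0.01, 0.0, 0.0
while t < 1.0 - 1e-12:
    y = taylor_step(y, h, riccati_derivatives(y))
    t += h
print("Taylor-3 estimate:", y, "exact tan(1):", math.tan(1.0))
```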

  13. Determinants of investment in fixed assets and in intangible assets for high-tech firms

    Directory of Open Access Journals (Sweden)

    Paulo Maçãs Nunes

    2017-05-01

    Full Text Available Based on a sample of 141 Portuguese high-tech firms for the period 2004-2012 and using the GMM system (1998) and LSDVC (2005) dynamic estimators, this paper studies whether the determinants of high-tech firms' investment in fixed assets are identical to the determinants of their investment in intangible assets. The empirical evidence obtained allows us to conclude that the determinants of their investment in fixed assets are considerably different from those of their investment in intangible assets. Debt is a determinant stimulating investment in fixed assets, with age being a determinant restricting such investment. Size, age, internal finance and GDP are determinants stimulating investment in intangible assets, whereas debt and interest rates restrict such investment. These results let us make important suggestions for the owners/managers of high-tech firms, and also for policy-makers.

  14. Asset management using genetic algorithm: Evidence from Tehran Stock Exchange

    Directory of Open Access Journals (Sweden)

    Abbas Sarijaloo

    2014-02-01

    Full Text Available This paper presents an empirical investigation of asset management using the Markowitz theorem. The study uses information on the 50 best performers on the Tehran Stock Exchange over the period 2006-2009; using the Markowitz theorem, the efficient asset allocations are determined and the results are analyzed. The proposed model of this paper has been solved using a genetic algorithm. The results indicate that the Tehran Stock Exchange managed to perform much better than the average world market in most years of the study, especially in 2009. The results of our investigation also indicate that one can reach outstanding results using a GA to form an efficient portfolio.
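
    A minimal sketch of the approach named in the abstract, genetic search over long-only Markowitz weights, is shown below. The utility function, GA operators, and synthetic return/covariance data are assumptions for illustration, not the study's actual Tehran Stock Exchange data or parameter settings.

```python
import numpy as np

def ga_portfolio(mu, cov, risk_aversion=3.0, pop_size=60, gens=200,
                 mut_sigma=0.05, seed=0):
    """Toy genetic algorithm for a long-only Markowitz problem:
    maximise  w'mu - (risk_aversion/2) * w'Cov w  over weights w >= 0, sum w = 1."""
    rng = np.random.default_rng(seed)
    n = len(mu)

    def normalise(w):
        w = np.clip(w, 0.0, None)
        s = w.sum()
        return w / s if s > 0 else np.full(n, 1.0 / n)

    def fitness(w):
        return w @ mu - 0.5 * risk_aversion * w @ cov @ w

    pop = np.array([normalise(rng.random(n)) for _ in range(pop_size)])
    for _ in range(gens):
        fit = np.array([fitness(w) for w in pop])
        order = np.argsort(fit)[::-1]
        elites = pop[order[: pop_size // 4]]           # keep the best quarter
        children = []
        while len(children) < pop_size - len(elites):
            a, b = elites[rng.integers(len(elites), size=2)]
            alpha = rng.random()
            child = alpha * a + (1 - alpha) * b        # blend crossover
            child += rng.normal(0.0, mut_sigma, n)     # Gaussian mutation
            children.append(normalise(child))
        pop = np.vstack([elites, children])
    best = max(pop, key=fitness)
    return best, fitness(best)

# Synthetic example with 5 assets (numbers are made up for illustration).
mu = np.array([0.08, 0.12, 0.10, 0.07, 0.15])
A = np.random.default_rng(1).normal(size=(5, 5)) * 0.1
cov = A @ A.T + np.eye(5) * 0.01                       # positive definite
w, f = ga_portfolio(mu, cov)
print("weights:", np.round(w, 3), "objective:", round(float(f), 4))
```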

  15. DYNAMICS OF ASSETS AND INVESTMENTS IN ROMANIA VOLUNTARY PENSION FUNDS

    Directory of Open Access Journals (Sweden)

    CONSTANTIN DURAC

    2016-10-01

    Full Text Available In most countries, private pensions have an increasingly important place in current pension systems. Their importance consists, on the one hand, in their contribution to preserving a level of retirement income comparable to that of the active period and, on the other hand, in the amounts collected from participants and invested in various investment instruments. In this article I analyze the overall evolution of total assets and net assets between 30 September 2007 and 30 September 2016, and the dynamics of the main investment instruments in which voluntary pension funds have invested.

  16. Explicit symplectic algorithms based on generating functions for relativistic charged particle dynamics in time-dependent electromagnetic field

    Science.gov (United States)

    Zhang, Ruili; Wang, Yulei; He, Yang; Xiao, Jianyuan; Liu, Jian; Qin, Hong; Tang, Yifa

    2018-02-01

    Relativistic dynamics of a charged particle in time-dependent electromagnetic fields has theoretical significance and a wide range of applications. The numerical simulation of relativistic dynamics is often multi-scale and requires accurate long-term numerical simulations. Therefore, explicit symplectic algorithms are much more preferable than non-symplectic methods and implicit symplectic algorithms. In this paper, we employ the proper time and express the Hamiltonian as the sum of exactly solvable terms and product-separable terms in space-time coordinates. Then, we give the explicit symplectic algorithms based on generating functions of orders 2 and 3 for the relativistic dynamics of a charged particle. The methodology itself is not new, as it has been applied to the non-relativistic dynamics of charged particles, but the algorithm for relativistic dynamics has much significance in practical simulations, such as the secular simulation of runaway electrons in tokamaks.

  17. A High-Speed Vision-Based Sensor for Dynamic Vibration Analysis Using Fast Motion Extraction Algorithms

    Directory of Open Access Journals (Sweden)

    Dashan Zhang

    2016-04-01

    Full Text Available The development of image sensors and optics enables the application of vision-based techniques to the non-contact dynamic vibration analysis of large-scale structures. As an emerging technology, a vision-based approach allows for remote measuring and does not add any mass to the measured object, in contrast with traditional contact measurements. In this study, a high-speed vision-based sensor system is developed to extract structure vibration signals in real time. A fast motion extraction algorithm is required for this system because the maximum sampling frequency of the charge-coupled device (CCD) sensor can reach up to 1000 Hz. Two efficient subpixel-level motion extraction algorithms, namely the modified Taylor approximation refinement algorithm and the localization refinement algorithm, are integrated into the proposed vision sensor. Quantitative analysis shows that both modified algorithms are at least five times faster than conventional upsampled cross-correlation approaches and achieve satisfactory error performance. The practicability of the developed sensor is evaluated by an experiment in a laboratory environment and a field test. Experimental results indicate that the developed high-speed vision-based sensor system can extract accurate dynamic structure vibration signals by tracking either artificial targets or natural features.
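
    As a generic point of reference for subpixel motion extraction, the sketch below estimates a 2-D shift by FFT-based cross-correlation followed by parabolic peak refinement. It is not the modified Taylor-approximation or localization refinement algorithm of the paper, and the test pattern and sizes are made up.

```python
import numpy as np

def subpixel_shift(ref, cur):
    """Estimate the 2-D translation of `cur` relative to `ref` by FFT-based
    cross-correlation followed by a quadratic (parabolic) peak refinement."""
    corr = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(cur)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)

    def refine(c_m, c_0, c_p):
        # Fit a parabola through the peak and its two neighbours.
        denom = c_m - 2.0 * c_0 + c_p
        return 0.0 if abs(denom) < 1e-12 else 0.5 * (c_m - c_p) / denom

    rows, cols = corr.shape
    r, c = peak
    dr = refine(corr[(r - 1) % rows, c], corr[r, c], corr[(r + 1) % rows, c])
    dc = refine(corr[r, (c - 1) % cols], corr[r, c], corr[r, (c + 1) % cols])
    # Convert the peak location to a signed shift (FFT wrap-around convention).
    shift_r = r + dr if r <= rows // 2 else r + dr - rows
    shift_c = c + dc if c <= cols // 2 else c + dc - cols
    return shift_r, shift_c

# Synthetic test: shift a smooth pattern by a known integer amount.
y, x = np.mgrid[0:64, 0:64]
ref = np.exp(-((x - 30) ** 2 + (y - 28) ** 2) / 50.0)
cur = np.roll(np.roll(ref, 3, axis=0), -2, axis=1)    # true shift (3, -2)
print(subpixel_shift(ref, cur))
```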

  18. Security Analysis of a Block Encryption Algorithm Based on Dynamic Sequences of Multiple Chaotic Systems

    Science.gov (United States)

    Du, Mao-Kang; He, Bo; Wang, Yong

    2011-01-01

    Recently, cryptosystems based on chaos have attracted much attention. Wang and Yu (Commun. Nonlin. Sci. Numer. Simulat. 14 (2009) 574) proposed a block encryption algorithm based on dynamic sequences of multiple chaotic systems. We analyze the potential flaws in the algorithm. Then, a chosen-plaintext attack is presented. Some remedial measures are suggested to avoid the flaws effectively. Furthermore, an improved encryption algorithm is proposed to resist the attacks and to keep all the merits of the original cryptosystem.

  19. Identifying asset-based trends in sustainable programmes which ...

    African Journals Online (AJOL)

    We indicate the similarities between the asset-based approach and current discourses focusing on the notion of schools as nodes of support and care. We conclude by suggesting that knowledge of asset-based good practices could be shared with families in school-based sessions, thereby developing schools', families' ...

  20. A highly accurate dynamic contact angle algorithm for drops on inclined surface based on ellipse-fitting.

    Science.gov (United States)

    Xu, Z N; Wang, S Y

    2015-02-01

    To improve the accuracy of dynamic contact angle calculation for drops on inclined surfaces, a significant number of numerical drop profiles on inclined surfaces with different inclination angles, drop volumes, and contact angles are generated based on the finite difference method, and a least-squares ellipse-fitting algorithm is used to calculate the dynamic contact angle. The influences of the above three factors are systematically investigated. The results reveal that the dynamic contact angle errors, including the errors of the left and right contact angles, evaluated by the ellipse-fitting algorithm tend to increase with inclination angle, drop volume, and contact angle. If the drop volume and the solid substrate are fixed, the errors of the left and right contact angles increase with inclination angle. After performing a tremendous amount of computation, the critical dimensionless drop volumes corresponding to the critical contact angle error are obtained. Based on the values of the critical volumes, a highly accurate dynamic contact angle algorithm is proposed and fully validated. Within nearly the whole hydrophobicity range, it can reduce the dynamic contact angle error of the inclined plane method to below a given value, even for different types of liquids.
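
    A minimal sketch of the core step, a least-squares (algebraic) conic fit to drop-profile points followed by the tangent angle at a contact point on a horizontal baseline, is given below. The fitting constraint (F = -1), the synthetic half-ellipse profile, and the angle convention are illustrative assumptions, not the paper's exact procedure or its error analysis. For the symmetric half-ellipse used here the expected contact angle is 90 degrees.

```python
import numpy as np

def fit_conic(x, y):
    """Least-squares fit of the conic  A x^2 + B x y + C y^2 + D x + E y = 1
    to profile points (a simple algebraic fit, not the paper's exact scheme)."""
    M = np.column_stack([x * x, x * y, y * y, x, y])
    coeffs, *_ = np.linalg.lstsq(M, np.ones_like(x), rcond=None)
    return coeffs                                     # A, B, C, D, E  (F = -1)

def contact_angle_deg(coeffs, x0, y0):
    """Contact angle at the contact point (x0, y0) on a horizontal baseline,
    from the implicit tangent of the fitted conic."""
    A, B, C, D, E = coeffs
    # Gradient of  F(x, y) = A x^2 + B x y + C y^2 + D x + E y - 1.
    fx = 2 * A * x0 + B * y0 + D
    fy = B * x0 + 2 * C * y0 + E
    # The tangent direction is perpendicular to the gradient.
    tangent = np.array([-fy, fx])
    return np.degrees(np.arctan2(abs(tangent[1]), abs(tangent[0])))

# Synthetic sessile-drop profile: upper half of an ellipse resting on y = 0.
theta = np.linspace(0.05, np.pi - 0.05, 200)
a, b = 2.0, 1.2                                       # semi-axes (arbitrary units)
x = a * np.cos(theta)
y = b * np.sin(theta)
coeffs = fit_conic(x, y)
print("contact angle at right contact point:",
      round(contact_angle_deg(coeffs, a, 0.0), 2), "degrees")
```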

  1. Two-dimensional priority-based dynamic resource allocation algorithm for QoS in WDM/TDM PON networks

    Science.gov (United States)

    Sun, Yixin; Liu, Bo; Zhang, Lijia; Xin, Xiangjun; Zhang, Qi; Rao, Lan

    2018-01-01

    Wavelength division multiplexing/time division multiplexing (WDM/TDM) passive optical networks (PON) are viewed as a promising solution for delivering multiple services and applications. A hybrid WDM/TDM PON uses a wavelength and bandwidth allocation strategy to control the distribution of the wavelength channels in the uplink direction, so that it can meet the high bandwidth requirements of multiple Optical Network Units (ONUs) while improving wavelength resource utilization. Our investigation of existing dynamic bandwidth allocation algorithms shows that they cannot satisfy the requirements of different service levels well while adapting to the structural characteristics of a hybrid WDM/TDM PON system. This paper introduces a novel wavelength and bandwidth allocation algorithm to efficiently utilize the bandwidth and support QoS (Quality of Service) guarantees in WDM/TDM PON. Two priority-based polling subcycles are introduced in order to increase system efficiency and improve system performance. The fixed-priority polling subcycle and the dynamic-priority polling subcycle follow different principles to implement wavelength and bandwidth allocation according to the priority of different service levels. A simulation was conducted to study the performance of priority-based polling in the dynamic resource allocation algorithm in WDM/TDM PON. The results show that the performance of delay-sensitive services is greatly improved without degrading the QoS guarantees of other services. Compared with traditional dynamic bandwidth allocation algorithms, this algorithm can meet the bandwidth needs of traffic classes of different priorities, achieve low loss rates, and ensure real-time delivery of high-priority traffic in terms of overall network traffic.
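
    As a rough sketch of a two-stage, priority-aware grant allocation (not the specific algorithm proposed in the paper), the code below serves guaranteed high-priority demand first and then shares the leftover cycle capacity proportionally among the remaining backlog. ONU names, byte counts, and thresholds are invented for illustration.

```python
def allocate_grants(requests, cycle_capacity, guaranteed):
    """Toy two-stage grant allocation for one polling cycle.

    requests:   {onu: {"high": bytes, "low": bytes}} bandwidth requests per ONU
    guaranteed: per-ONU guaranteed bytes for the high-priority (fixed) subcycle
    Stage 1 serves high-priority traffic up to its guarantee; stage 2 shares the
    leftover capacity among the remaining demand proportionally."""
    grants = {onu: {"high": 0, "low": 0} for onu in requests}
    remaining = cycle_capacity

    # Fixed-priority subcycle: guaranteed grants for delay-sensitive traffic.
    for onu, req in requests.items():
        g = min(req["high"], guaranteed, remaining)
        grants[onu]["high"] = g
        remaining -= g

    # Dynamic subcycle: share what is left in proportion to the outstanding demand.
    backlog = {onu: (req["high"] - grants[onu]["high"]) + req["low"]
               for onu, req in requests.items()}
    total_backlog = sum(backlog.values())
    if total_backlog > 0 and remaining > 0:
        for onu, need in backlog.items():
            grants[onu]["low"] = min(need, remaining * need // total_backlog)
    return grants

# Example cycle: three ONUs, 10000-byte cycle, 2000-byte high-priority guarantee.
reqs = {"onu1": {"high": 1500, "low": 4000},
        "onu2": {"high": 2500, "low": 1000},
        "onu3": {"high": 500,  "low": 6000}}
print(allocate_grants(reqs, cycle_capacity=10000, guaranteed=2000))
```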

  2. Analysis of Ant Colony Optimization and Population-Based Evolutionary Algorithms on Dynamic Problems

    DEFF Research Database (Denmark)

    Lissovoi, Andrei

    This thesis presents new running time analyses of nature-inspired algorithms on various dynamic problems. It aims to identify and analyse the features of algorithms and problem classes which allow efficient optimization to occur in the presence of dynamic behaviour. We consider the following ... settings: λ-MMAS on Dynamic Shortest Path Problems. We investigate how increasing the number of ants simulated per iteration may help an ACO algorithm to track the optimum in a dynamic problem. It is shown that while a constant number of ants per vertex is sufficient to track some oscillations, there also ... the dynamic optimum for finite alphabets up to size μ, while MMAS is able to do so for any finite alphabet size. Parallel Evolutionary Algorithms on Maze. We prove that while a (1 + λ) EA is unable to track the optimum of the dynamic fitness function Maze for offspring population size up to λ = O(n^(1-ε)) ...

  3. Genetic algorithms with memory- and elitism-based immigrants in dynamic environments

    OpenAIRE

    Yang, S

    2008-01-01

    Copyright @ 2008 by the Massachusetts Institute of Technology In recent years the genetic algorithm community has shown a growing interest in studying dynamic optimization problems. Several approaches have been devised. The random immigrants and memory schemes are two major ones. The random immigrants scheme addresses dynamic environments by maintaining the population diversity while the memory scheme aims to adapt genetic algorithms quickly to new environments by reusing historical inform...
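
    A minimal sketch of the immigrants idea on a dynamic one-max-style problem is shown below: the target string changes periodically, and each generation the worst individuals are replaced either by random immigrants or by mutated copies of the current elite. Population sizes, rates, and the toy problem are assumptions, not the paper's experimental setup.

```python
import random

def immigrants_ga(n_bits=30, pop_size=40, generations=300,
                  immigrant_rate=0.2, change_every=50, elitism_based=True, seed=1):
    """Minimal GA with an immigrants scheme on a dynamic one-max style problem:
    the target bit string is re-randomised every `change_every` generations.
    With elitism_based=True, immigrants are mutated copies of the current elite
    (elitism-based immigrants); otherwise they are fully random immigrants."""
    rng = random.Random(seed)
    target = [rng.randint(0, 1) for _ in range(n_bits)]
    fitness = lambda ind: sum(a == b for a, b in zip(ind, target))
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    for gen in range(generations):
        if gen > 0 and gen % change_every == 0:
            target = [rng.randint(0, 1) for _ in range(n_bits)]    # environment change
        pop.sort(key=fitness, reverse=True)
        elite = pop[0]
        # Replace the worst individuals with immigrants.
        n_imm = int(immigrant_rate * pop_size)
        for i in range(1, n_imm + 1):
            if elitism_based:
                pop[-i] = [b ^ (rng.random() < 0.1) for b in elite]  # mutated elite copy
            else:
                pop[-i] = [rng.randint(0, 1) for _ in range(n_bits)]
        # Standard generational step: tournament selection, crossover, mutation.
        new_pop = [elite]
        while len(new_pop) < pop_size:
            p1 = max(rng.sample(pop, 3), key=fitness)
            p2 = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, n_bits)
            child = p1[:cut] + p2[cut:]
            child = [b ^ (rng.random() < 1.0 / n_bits) for b in child]
            new_pop.append(child)
        pop = new_pop
    best = max(pop, key=fitness)
    return best, fitness(best)

best, score = immigrants_ga()
print("best match to current target:", score, "of", 30)
```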

  4. On the use of genetic algorithm to optimize industrial assets lifecycle management under safety and budget constraints

    International Nuclear Information System (INIS)

    Lonchampt, J.; Fessart, K.

    2013-01-01

    The purpose of this paper is to describe the method and tool dedicated to optimizing investment planning for industrial assets. These investments may either be preventive maintenance tasks, asset enhancements or logistic investments such as spare parts purchases. The methodological points to investigate in such an issue are: 1. The measure of the profitability of a portfolio of investments 2. The selection and planning of an optimal set of investments 3. The measure of the risk of a portfolio of investments. The measure of the profitability of a set of investments in the IPOP tool is synthesised in the Net Present Value indicator. The NPV is the sum of the differences of discounted cash flows (direct costs, forced outages...) between the situations with and without a given investment. These cash flows are calculated through a pseudo-Markov reliability model representing independently the components of the industrial asset and the spare parts inventories. The component model has been widely discussed over the years, but the spare part model is a new one based on some approximations that will be discussed. This model, referred to as the NPV function, takes an investment portfolio as input and returns its NPV. The second issue is to optimize the NPV. If all investments were independent, this optimization would be an easy calculation; unfortunately, there are two sources of dependency. The first one is introduced by the spare part model: although components are indeed independent in their reliability model, the fact that several components use the same inventory induces a dependency. The second dependency comes from economic, technical or logistic constraints, such as a global maintenance budget limit or a safety requirement limiting the residual risk of failure of a component or group of components, making the aggregation of individual optima not necessarily feasible. The algorithm used to solve such a difficult optimization problem is a genetic algorithm. After a description
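
    A minimal sketch of the selection step only, assuming the per-investment NPVs and costs are already available (the pseudo-Markov reliability and spare-part models that produce them in the paper are not reproduced), is a genetic algorithm over 0/1 portfolios with a soft budget penalty:

```python
import random

def ga_select_investments(npv, cost, budget, pop_size=50, gens=200, seed=7):
    """Toy genetic algorithm for selecting a portfolio of investments that
    maximises total NPV under a budget constraint.  NPVs and costs are assumed
    given; infeasible portfolios are penalised rather than rejected."""
    rng = random.Random(seed)
    n = len(npv)

    def fitness(mask):
        total_cost = sum(c for c, m in zip(cost, mask) if m)
        total_npv = sum(v for v, m in zip(npv, mask) if m)
        penalty = 10.0 * max(0.0, total_cost - budget)    # soft budget constraint
        return total_npv - penalty

    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]
            child[rng.randrange(n)] ^= 1                  # bit-flip mutation
            children.append(child)
        pop = survivors + children
    best = max(pop, key=fitness)
    return best, fitness(best)

# Hypothetical maintenance/spare-part investments (all values are made up).
npv =  [120.0, 80.0, 95.0, 40.0, 60.0, 150.0]
cost = [100.0, 50.0, 70.0, 20.0, 40.0, 130.0]
best, value = ga_select_investments(npv, cost, budget=250.0)
print("selected:", best, "portfolio value:", round(value, 1))
```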

  5. On the use of genetic algorithm to optimize industrial assets lifecycle management under safety and budget constraints

    Energy Technology Data Exchange (ETDEWEB)

    Lonchampt, J.; Fessart, K. [EDF R and D, Departement MRI, 6, quai Watier, 78401 Chatou cedex (France)

    2013-07-01

    The purpose of this paper is to describe the method and tool dedicated to optimizing investment planning for industrial assets. These investments may either be preventive maintenance tasks, asset enhancements or logistic investments such as spare parts purchases. The methodological points to investigate in such an issue are: 1. The measure of the profitability of a portfolio of investments 2. The selection and planning of an optimal set of investments 3. The measure of the risk of a portfolio of investments. The measure of the profitability of a set of investments in the IPOP tool is synthesised in the Net Present Value indicator. The NPV is the sum of the differences of discounted cash flows (direct costs, forced outages...) between the situations with and without a given investment. These cash flows are calculated through a pseudo-Markov reliability model representing independently the components of the industrial asset and the spare parts inventories. The component model has been widely discussed over the years, but the spare part model is a new one based on some approximations that will be discussed. This model, referred to as the NPV function, takes an investment portfolio as input and returns its NPV. The second issue is to optimize the NPV. If all investments were independent, this optimization would be an easy calculation; unfortunately, there are two sources of dependency. The first one is introduced by the spare part model: although components are indeed independent in their reliability model, the fact that several components use the same inventory induces a dependency. The second dependency comes from economic, technical or logistic constraints, such as a global maintenance budget limit or a safety requirement limiting the residual risk of failure of a component or group of components, making the aggregation of individual optima not necessarily feasible. The algorithm used to solve such a difficult optimization problem is a genetic algorithm. After a description

  6. Particle algorithms for population dynamics in flows

    International Nuclear Information System (INIS)

    Perlekar, Prasad; Toschi, Federico; Benzi, Roberto; Pigolotti, Simone

    2011-01-01

    We present and discuss particle-based algorithms for numerically studying the dynamics of populations subjected to an advecting flow. We discuss a few possible variants of the algorithms and compare them in a model compressible flow. A comparison against appropriate versions of the continuum stochastic Fisher equation (sFKPP) is also presented and discussed. The algorithms can be used to study population genetics in fluid environments.

  7. A Dynamic Health Assessment Approach for Shearer Based on Artificial Immune Algorithm

    Directory of Open Access Journals (Sweden)

    Zhongbin Wang

    2016-01-01

    Full Text Available In order to accurately identify the dynamic health of a shearer, reduce operating trouble and production accidents, and further improve coal production efficiency, a dynamic health assessment approach for shearers based on an artificial immune algorithm was proposed. The key technologies, such as the system framework, the selection of indicators for shearer dynamic health assessment, and the health assessment model, were provided, and the flowchart of the proposed approach was designed. A simulation example, with an accuracy of 96%, based on data collected from an industrial production site was provided. Furthermore, the comparison demonstrated that the proposed method exhibited higher classification accuracy than classifiers based on back propagation neural network (BP-NN) and support vector machine (SVM) methods. Finally, the proposed approach was applied to an engineering problem of shearer dynamic health assessment. The industrial application results showed that the research achievements of this paper could be used in combination with the shearer automation control system in a fully mechanized coal face. The simulation and application results indicated that the proposed method was feasible and outperformed the others.

  8. Algebraic dynamics solutions and algebraic dynamics algorithm for nonlinear partial differential evolution equations of dynamical systems

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Using the functional derivative technique of quantum field theory, the algebraic dynamics approach for the solution of ordinary differential evolution equations was generalized to treat partial differential evolution equations. The partial differential evolution equations were lifted to the corresponding functional partial differential equations in functional space by introducing the time translation operator. The functional partial differential evolution equations were solved by algebraic dynamics. The algebraic dynamics solutions are analytical in Taylor series in terms of both initial functions and time. Based on the exact analytical solutions, a new numerical algorithm, the algebraic dynamics algorithm, was proposed for partial differential evolution equations. The difficulty of and the way out for the algorithm were discussed. The application of the approach to, and computer numerical experiments on, the nonlinear Burgers equation and the meteorological advection equation indicate that the algebraic dynamics approach and the algebraic dynamics algorithm are effective for the solution of nonlinear partial differential evolution equations both analytically and numerically.

  9. Dynamic service contracting for on-demand asset delivery

    NARCIS (Netherlands)

    Zhao, X.; Angelov, S.A.; Grefen, P.W.P.J.

    2014-01-01

    Traditional financial asset lease operates in an asset-provider-centred mode, in which financiers passively provide financial solutions to the customers of their allied asset vendors. To capture the highly customised asset lease demands from the mass market, this paper advocates adopting a

  10. Optimization of Algorithms Using Extensions of Dynamic Programming

    KAUST Repository

    AbouEisha, Hassan M.

    2017-04-09

    We study and answer questions related to the complexity of various important problems such as: multi-frontal solvers of the hp-adaptive finite element method, sorting, and majority. We advocate the use of dynamic programming as a viable tool to study optimal algorithms for these problems. The main approach used to attack these problems is modeling classes of algorithms that may solve a problem using a discrete model of computation, then defining cost functions on this discrete structure that reflect different complexity measures of the represented algorithms. As a last step, dynamic programming algorithms are designed and used to optimize those models (algorithms) and to obtain exact results on the complexity of the studied problems. The first part of the thesis presents a novel model of computation (element partition tree) that represents a class of algorithms for multi-frontal solvers along with cost functions reflecting various complexity measures such as time and space. It then introduces dynamic programming algorithms for multi-stage and bi-criteria optimization of element partition trees. In addition, it presents results based on optimal element partition trees for famous benchmark meshes such as meshes with point and edge singularities. New improved heuristics for those benchmark meshes were obtained based on insights from the optimal results found by our algorithms. The second part of the thesis starts by introducing a general problem to which different problems can be reduced and shows how to use a decision table to model such a problem. We describe how decision trees and decision tests for this table correspond to adaptive and non-adaptive algorithms for the original problem. We present exact bounds on the average time complexity of adaptive algorithms for the eight-element sorting problem. Then bounds on adaptive and non-adaptive algorithms for a variant of the majority problem are introduced. Adaptive algorithms are modeled as decision trees whose depth

  11. An Asset-Based Approach to Tribal Community Energy Planning

    Energy Technology Data Exchange (ETDEWEB)

    Gutierrez, Rachael A. [Pratt Inst., Brooklyn, NY (United States). City and Regional Planning; Martino, Anthony [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Materials, Devices, and Energy Technologies; Begay, Sandra K. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Materials, Devices, and Energy Technologies

    2016-08-01

    Community energy planning is a vital component of successful energy resource development and project implementation. Planning can help tribes develop a shared vision and strategies to accomplish their energy goals. This paper explores the benefits of an asset-based approach to tribal community energy planning. While a framework for community energy planning and federal funding already exists, some areas of difficulty in the planning cycle have been identified. This paper focuses on developing a planning framework that offsets those challenges. The asset-based framework described here takes inventory of a tribe’s capital assets, such as: land capital, human capital, financial capital, and political capital. Such an analysis evaluates how being rich in a specific type of capital can offer a tribe unique advantages in implementing their energy vision. Finally, a tribal case study demonstrates the practical application of an asset-based framework.

  12. A Rule-Based Model for Bankruptcy Prediction Based on an Improved Genetic Ant Colony Algorithm

    Directory of Open Access Journals (Sweden)

    Yudong Zhang

    2013-01-01

    Full Text Available In this paper, we propose a hybrid system to predict corporate bankruptcy. The whole procedure consists of the following four stages: first, sequential forward selection was used to extract the most important features; second, a rule-based model was chosen to fit the given dataset since it can present physical meaning; third, a genetic ant colony algorithm (GACA) was introduced; the fitness scaling strategy and the chaotic operator were incorporated with GACA, forming a new algorithm, fitness-scaling chaotic GACA (FSCGACA), which was used to seek the optimal parameters of the rule-based model; and finally, the stratified K-fold cross-validation technique was used to enhance the generalization of the model. Simulation experiments on data from 1000 corporations collected from 2006 to 2009 demonstrated that the proposed model was effective. It selected the 5 most important factors as "net income to stockholders' equity," "quick ratio," "retained earnings to total assets," "stockholders' equity to total assets," and "financial expenses to sales." The total misclassification error of the proposed FSCGACA was only 7.9%, outperforming the genetic algorithm (GA), ant colony algorithm (ACA), and GACA. The average computation time of the model is 2.02 s.

  13. Dynamic Power Dispatch Considering Electric Vehicles and Wind Power Using Decomposition Based Multi-Objective Evolutionary Algorithm

    Directory of Open Access Journals (Sweden)

    Boyang Qu

    2017-12-01

    Full Text Available The intermittency of wind power and the large-scale integration of electric vehicles (EVs) bring new challenges to the reliability and economy of power system dispatching. In this paper, a novel multi-objective dynamic economic emission dispatch (DEED) model is proposed considering EVs and the uncertainties of wind power. The total fuel cost and pollutant emission are considered as the optimization objectives, and the vehicle-to-grid (V2G) power and the conventional generator output power are set as the decision variables. The stochastic wind power is derived from a Weibull probability distribution function. Under the premise of meeting the system energy and users' travel demands, the charging and discharging behavior of the EVs is dynamically managed. Moreover, we propose a two-step dynamic constraint processing strategy for decision variables based on a penalty function, and, on this basis, the Multi-Objective Evolutionary Algorithm Based on Decomposition (MOEA/D) is improved. The proposed model and approach are verified on a 10-generator system. The results demonstrate that the proposed DEED model and the improved MOEA/D algorithm are effective and reasonable.

  14. Stochastic quasi-gradient based optimization algorithms for dynamic reliability applications

    International Nuclear Information System (INIS)

    Bourgeois, F.; Labeau, P.E.

    2001-01-01

    On the one hand, PSA results are increasingly used in decision making, system management and optimization of system design. On the other hand, when severe accidental transients are considered, dynamic reliability appears appropriate to account for the complex interaction between transitions between hardware configurations, operator behavior and the dynamic evolution of the system. This paper presents an exploratory work in which the estimation of the system unreliability in a dynamic context is coupled with an optimization algorithm to determine the 'best' safety policy. Because some reliability parameters are likely to be distributed, the cost function to be minimized turns out to be a random variable. Stochastic programming techniques are therefore envisioned to determine an optimal strategy. Monte Carlo simulation is used at all stages of the computations, from the estimation of the system unreliability to that of the stochastic quasi-gradient. The optimization algorithm is illustrated on an HNO3 supply system.

  15. Algorithm for simulation of quantum many-body dynamics using dynamical coarse-graining

    International Nuclear Information System (INIS)

    Khasin, M.; Kosloff, R.

    2010-01-01

    An algorithm for simulation of quantum many-body dynamics having an su(2) spectrum-generating algebra is developed. The algorithm is based on the idea of dynamical coarse-graining. The original unitary dynamics of the target observables (the elements of the spectrum-generating algebra) is simulated by a surrogate open-system dynamics, which can be interpreted as weak measurement of the target observables performed on the evolving system. The open-system state can be represented by a mixture of pure states, localized in the phase space. The localization reduces the scaling of the computational resources with the Hilbert-space dimension n by a factor of n^(3/2)(ln n)^(-1) compared to conventional sparse-matrix methods. The guidelines for the choice of parameters for the simulation are presented and the scaling of the computational resources with the Hilbert-space dimension of the system is estimated. The algorithm is applied to the simulation of the dynamics of systems of 2×10^4 and 2×10^6 cold atoms in a double-well trap, described by the two-site Bose-Hubbard model.

  16. An improved Pattern Search based algorithm to solve the Dynamic Economic Dispatch problem with valve-point effect

    International Nuclear Information System (INIS)

    Alsumait, J.S.; Qasem, M.; Sykulski, J.K.; Al-Othman, A.K.

    2010-01-01

    In this paper, an improved algorithm based on the Pattern Search (PS) method to solve the Dynamic Economic Dispatch problem is proposed. The algorithm maintains the essential unit ramp rate constraint, along with all other necessary constraints, not only for the time horizon of operation (24 h), but also preserves these constraints through the transition period to the next time horizon (next day) in order to avoid discontinuity of the power system operation. The Dynamic Economic and Emission Dispatch (DEED) problem is also considered. The load balance constraints, operating limits, valve-point loading and network losses are included in the models of both DED and DEED. The numerical results clarify the significance of the improved algorithm and verify its performance.

  17. Heterogeneous beliefs and routes to complex dynamics in asset pricing models with price contingent contracts

    NARCIS (Netherlands)

    Brock, W.A.; Hommes, C.H.

    2001-01-01

    This paper discusses dynamic evolutionary multi-agent systems, as introduced by Brock and Hommes (1997). In particular the heterogeneous agent dynamic asset pricing model of Brock and Hommes (1998) is extended by introducing derivative securities by means of price contingent contracts. Numerical

  18. Higher Order Expectations in Asset Pricing

    OpenAIRE

    Philippe BACCHETTA; Eric VAN WINCOOP

    2004-01-01

    We examine formally Keynes' idea that higher order beliefs can drive a wedge between an asset price and its fundamental value based on expected future payoffs. Higher order expectations add an additional term to a standard asset pricing equation. We call this the higher order wedge, which depends on the difference between higher and first order expectations of future payoffs. We analyze the determinants of this wedge and its impact on the equilibrium price. In the context of a dynamic noisy r...

  19. Rational Asset Pricing Bubbles Revisited

    OpenAIRE

    Jan Werner

    2012-01-01

    Price bubble arises when the price of an asset exceeds the asset's fundamental value, that is, the present value of future dividend payments. The important result of Santos and Woodford (1997) says that price bubbles cannot exist in equilibrium in the standard dynamic asset pricing model with rational agents as long as assets are in strictly positive supply and the present value of total future resources is finite. This paper explores the possibility of asset price bubbles when either one of ...

  20. Initial cash/asset ratio and asset prices: an experimental study.

    Science.gov (United States)

    Caginalp, G; Porter, D; Smith, V

    1998-01-20

    A series of experiments, in which nine participants trade an asset over 15 periods, test the hypothesis that an initial imbalance of asset/cash will influence the trading price over an extended time. Participants know at the outset that the asset or "stock" pays a single dividend with fixed expectation value at the end of the 15th period. In experiments with a greater total value of cash at the start, the mean prices during the trading periods are higher, compared with those with greater amount of asset, with a high degree of statistical significance. The difference is most significant at the outset and gradually tapers near the end of the experiment. The results are very surprising from a rational expectations and classical game theory perspective, because the possession of a large amount of cash does not lead to a simple motivation for a trader to bid excessively on a financial instrument. The gradual erosion of the difference toward the end of trading, however, suggests that fundamental value is approached belatedly, offering some consolation to the rational expectations theory. It also suggests that there is a time scale on which an evolution toward fundamental value occurs. The experimental results are qualitatively compatible with the price dynamics predicted by a system of differential equations based on asset flow. The results have broad implications for the marketing of securities, particularly initial and secondary public offerings, government bonds, etc., where excess supply has been conjectured to suppress prices.

  1. VALUE-BASED APPROACH TO MANAGING CURRENT ASSETS OF CORPORATE CONSTRUCTION COMPANIES

    Directory of Open Access Journals (Sweden)

    Galyna Shapoval

    2017-09-01

    Full Text Available In modern conditions of management, the value of an enterprise becomes the main indicator, which is studied not only by scientists, but also by enterprise owners and potential investors. Current assets occupy a very important place among the factors that affect the value of an enterprise, so the management of current assets becomes more pressing from the standpoint of its impact on enterprise value. The purpose of the paper is to develop a system of value-based management of corporate construction companies' current assets. The main tasks are: the study of the impact of current assets on the value of corporate construction companies, the definition of a value-based approach to managing current assets of corporate enterprises, and the development of a value-based management system for corporate construction companies' current assets by elements. General scientific and special research methods were used while writing the work. Value-based management of current assets involves value-based management of the elements of current assets. Value-based inventory management includes the following stages: the assessment of reliability and choice of supplier according to the criterion of cash flow maximization, the classification of stocks in management accounting according to the rhythm of supply, and the establishment of the periodicity of supplies in accordance with the needs of the construction process. Value-based management of accounts receivable includes the following stages: assessment of the efficiency of investment of working capital into accounts receivable, assessment of customers' loyalty, definition of credit conditions, and monitoring of receivables by construction and debt instruments. Value-based cash management involves determining the required level of cash to ensure the continuity of the construction process, assessing the effectiveness of cash use according to the criterion of maximizing cash flow, as well as budget

  2. Asset prices and priceless assets

    NARCIS (Netherlands)

    Penasse, J.N.G.

    2014-01-01

    The doctoral thesis studies several aspects of asset return dynamics. The first three chapters focus on returns in the fine art market. The first chapter provides evidence for the existence of a slow-moving fad component in art prices that induces short-term return predictability. The article has

  3. Ambiguity and Volatility : Asset Pricing Implications

    NARCIS (Netherlands)

    Pataracchia, B.

    2011-01-01

    Using a simple dynamic consumption-based asset pricing model, this paper explores the implications of a representative investor with smooth ambiguity averse preferences [Klibanoff, Marinacci and Mukerji, Econometrica (2005)] and provides a comparative analysis of risk aversion and ambiguity aversion.

  4. A Method on Dynamic Path Planning for Robotic Manipulator Autonomous Obstacle Avoidance Based on an Improved RRT Algorithm.

    Science.gov (United States)

    Wei, Kun; Ren, Bingyin

    2018-02-13

    In a future intelligent factory, a robotic manipulator must work efficiently and safely in a human-robot collaborative and dynamic unstructured environment. Autonomous path planning is the most important issue which must be resolved first in the process of improving robotic manipulator intelligence. Among path-planning methods, the Rapidly Exploring Random Tree (RRT) algorithm, based on random sampling, has been widely applied in dynamic path planning for high-dimensional robotic manipulators, especially in complex environments, because of its probabilistic completeness, perfect expansion, and faster exploring speed than other planning methods. However, the existing RRT algorithm has limitations in path planning for a robotic manipulator in a dynamic unstructured environment. Therefore, an autonomous obstacle avoidance dynamic path-planning method for a robotic manipulator based on an improved RRT algorithm, called Smoothly RRT (S-RRT), is proposed. This method extends nodes toward a target direction, which dramatically increases the sampling speed and efficiency of RRT. A path optimization strategy based on a maximum curvature constraint is presented to generate a smooth and continuously curved executable path for a robotic manipulator. Finally, the correctness, effectiveness, and practicability of the proposed method are demonstrated and validated via a MATLAB static simulation, a Robot Operating System (ROS) dynamic simulation environment, and a real autonomous obstacle avoidance experiment in a dynamic unstructured environment for a robotic manipulator. The proposed method not only has great practical engineering significance for robotic manipulator obstacle avoidance in an intelligent factory, but also offers theoretical reference value for path planning of other types of robots.
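
    For orientation, a baseline RRT for a 2-D point robot with circular obstacles is sketched below; it is the standard algorithm, not the proposed S-RRT with target-directed extension and maximum-curvature smoothing. The workspace, obstacles, and parameters are invented, and collisions are checked only at new nodes for brevity.

```python
import math, random

def rrt(start, goal, obstacles, x_max=10.0, y_max=10.0,
        step=0.5, goal_bias=0.1, max_iter=5000, seed=3):
    """Minimal 2-D RRT for a point robot with circular obstacles."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}

    def collision_free(p):
        # Only the new node is checked, not the connecting segment (sketch only).
        return all(math.dist(p, (cx, cy)) > r for cx, cy, r in obstacles)

    for _ in range(max_iter):
        sample = goal if rng.random() < goal_bias else (rng.uniform(0, x_max),
                                                        rng.uniform(0, y_max))
        near_i = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], sample))
        near = nodes[near_i]
        d = math.dist(near, sample)
        if d < 1e-9:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if not collision_free(new):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = near_i
        if math.dist(new, goal) < step:              # goal reached, backtrack path
            path, i = [goal], len(nodes) - 1
            while i is not None:
                path.append(nodes[i])
                i = parent[i]
            return path[::-1]
    return None                                      # no path found within budget

obstacles = [(5.0, 5.0, 1.5), (3.0, 7.0, 1.0)]       # (x, y, radius), made-up scene
path = rrt(start=(1.0, 1.0), goal=(9.0, 9.0), obstacles=obstacles)
print("path length (nodes):", None if path is None else len(path))
```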

  5. Research and design on system of asset management based on RFID

    Science.gov (United States)

    Guan, Peng; Du, HuaiChang; Jing, Hua; Zhang, MengYue; Zhang, Meng; Xu, GuiXian

    2011-10-01

    By analyzing the problems in current asset management, this thesis proposes applying RFID technology to asset management in order to improve the level of automation and informatization of management. The paper designs equipment identification based on a 433 MHz RFID tag and reader, studied in depth on the basis of the RFID tag and card reader circuits, and also illustrates the asset management system. An RS232-to-Ethernet converter is used to transfer data to the PC monitoring software, and the asset management system is implemented with web techniques (PHP and MySQL).

  6. Large-Scale Portfolio Optimization Using Multiobjective Evolutionary Algorithms and Preselection Methods

    Directory of Open Access Journals (Sweden)

    B. Y. Qu

    2017-01-01

    Full Text Available Portfolio optimization problems involve the selection of different assets to invest in, in order to maximize the overall return and minimize the overall risk simultaneously. The complexity of the optimal asset allocation problem increases with the number of assets available to select from. The optimization problem becomes computationally challenging when there are more than a few hundred assets to select from. To reduce the complexity of large-scale portfolio optimization, two asset preselection procedures that consider the return and risk of individual assets and their pairwise correlations, in order to remove assets that are unlikely to be selected into any portfolio, are proposed in this paper. With these asset preselection methods, the number of assets considered for inclusion in a portfolio can be increased to thousands. To test the effectiveness of the proposed methods, a Normalized Multiobjective Evolutionary Algorithm based on Decomposition (NMOEA/D) and several other commonly used multiobjective evolutionary algorithms are applied and compared. Six experiments with different settings are carried out. The experimental results show that with the proposed methods the simulation time is reduced while return-risk trade-off performances are significantly improved. Meanwhile, the NMOEA/D is able to outperform the other compared algorithms on all experiments according to the comparative analysis.
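
    A minimal sketch of asset preselection in the spirit described above (drop mean-variance-dominated assets, then drop the weaker member of highly correlated pairs) is shown below; the rules, thresholds, and synthetic return data are illustrative assumptions, not the exact procedures proposed in the paper.

```python
import numpy as np

def preselect_assets(returns, max_corr=0.95):
    """Simple asset preselection sketch: (1) drop assets that are dominated,
    i.e. another asset has both higher mean return and lower risk;
    (2) among highly correlated pairs, keep the asset with the better
    return/risk ratio."""
    mu = returns.mean(axis=0)
    sigma = returns.std(axis=0, ddof=1)
    n = returns.shape[1]
    keep = np.ones(n, dtype=bool)

    # Rule 1: remove mean-variance dominated assets.
    for i in range(n):
        for j in range(n):
            if i != j and mu[j] >= mu[i] and sigma[j] <= sigma[i] \
                    and (mu[j] > mu[i] or sigma[j] < sigma[i]):
                keep[i] = False
                break

    # Rule 2: remove the weaker asset of any highly correlated surviving pair.
    corr = np.corrcoef(returns, rowvar=False)
    score = mu / np.where(sigma > 0, sigma, np.inf)
    for i in range(n):
        for j in range(i + 1, n):
            if keep[i] and keep[j] and corr[i, j] > max_corr:
                keep[j if score[i] >= score[j] else i] = False
    return np.flatnonzero(keep)

# Synthetic daily returns for 8 assets (purely illustrative data).
rng = np.random.default_rng(4)
base = rng.normal(0.0005, 0.01, size=(500, 8))
base[:, 1] = base[:, 0] * 0.99 + rng.normal(0, 0.0005, 500)   # near-duplicate asset
print("kept asset indices:", preselect_assets(base))
```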

  7. Application of quantum master equation for long-term prognosis of asset-prices

    Science.gov (United States)

    Khrennikova, Polina

    2016-05-01

    This study combines the disciplines of behavioral finance and an extension of econophysics, namely the concepts and mathematical structure of quantum physics. We apply the formalism of quantum theory to model the dynamics of some correlated financial assets, where the proposed model can potentially be applied to developing a long-term prognosis of asset price formation. At the informational level, the asset price states interact with each other by means of a "financial bath". The latter is composed of agents' expectations about the future developments of asset prices on the finance market, as well as financially important information from mass media, society, and politicians. One of the essential behavioral factors leading to the quantum-like dynamics of asset prices is the irrationality of agents' expectations operating on the finance market. These expectations lead to a deeper type of uncertainty concerning the future price dynamics of the assets than given by classical probability theory, e.g., in the framework of classical financial mathematics, which is based on the theory of stochastic processes. The quantum dimension of the uncertainty in price dynamics is expressed in the form of price-state superposition and entanglement between the prices of different financial assets. In our model, the resolution of this deep quantum uncertainty is mathematically captured with the aid of the quantum master equation (its quantum Markov approximation). We illustrate our model of preparing a future asset price prognosis by a numerical simulation involving two correlated assets. Their returns interact more intensively than is captured by a classical statistical correlation. The model predictions can be extended to more complex models to obtain price configurations for multiple assets and portfolios.

  8. Overview of fast algorithm in 3D dynamic holographic display

    Science.gov (United States)

    Liu, Juan; Jia, Jia; Pan, Yijie; Wang, Yongtian

    2013-08-01

    3D dynamic holographic display is one of the most attractive techniques for achieving real 3D vision with full depth cues without any extra devices. However, a huge amount of 3D information and data must be processed and computed in real time to generate the hologram in 3D dynamic holographic display, which is a challenge even for the most advanced computers. Many fast algorithms have been proposed for speeding up the calculation and reducing memory usage, such as the look-up table (LUT), compressed look-up table (C-LUT), split look-up table (S-LUT), and novel look-up table (N-LUT) based on the point-based method, as well as the fully analytical and one-step methods based on the polygon-based method. In this presentation, we give an overview of various fast algorithms based on the point-based and polygon-based methods, and focus on the fast algorithm with low memory usage, the C-LUT, and the one-step polygon-based method derived from the 2D Fourier analysis of the 3D affine transformation. Numerical simulations and optical experiments are presented, and several other algorithms are compared. The results show that the C-LUT algorithm and the one-step polygon-based method are efficient methods for saving calculation time. It is believed that these methods could be used in real-time 3D holographic display in the future.

  9. DARAL: A Dynamic and Adaptive Routing Algorithm for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Francisco José Estévez

    2016-06-01

    Full Text Available The evolution of Smart City projects is pushing researchers and companies to develop more efficient embedded hardware as well as more efficient communication technologies. These communication technologies are the focus of this work, which presents a new routing algorithm based on dynamically allocated sub-networks and node roles. Among its features, our algorithm presents a fast set-up time, a reduced overhead and a hierarchical organization, which allows for the application of complex management techniques. This work presents a routing algorithm based on dynamically allocated hierarchical clustering, which uses the link quality indicator as a reference parameter, maximizing the network coverage and minimizing the control message overhead and the convergence time. The present work bases its test scenario and analysis on the density measure, considered as the node degree. The routing algorithm is compared with some of the most well-known routing algorithms for different scenario densities.

  10. The Dynamics of Market Insurance, Insurable Assets, and Wealth Accumulation

    OpenAIRE

    Koeniger, Winfried

    2002-01-01

    We analyze dynamic interactions between market insurance, the stock of insurable assets and liquid wealth accumulation in a model with non-durable and durable consumption. The stock of the durable is exposed to risk against which households can insure. Since the model does not have a closed-form solution, we first provide an analytical approximation for the case in which households own abundant liquid wealth. It turns out that precautionary motives still matter because of fluctuations of the p...

  11. A controllable sensor management algorithm capable of learning

    Science.gov (United States)

    Osadciw, Lisa A.; Veeramacheneni, Kalyan K.

    2005-03-01

    Sensor management technology progress is challenged by the geographic space it spans, the heterogeneity of the sensors, and the real-time timeframes within which plans controlling the assets are executed. This paper presents a new sensor management paradigm and demonstrates its application in a sensor management algorithm designed for a biometric access control system. This approach consists of an artificial intelligence (AI) algorithm focused on uncertainty measures, which makes the high-level decisions to reduce uncertainties and interfaces with the user, integrated cohesively with a bottom-up evolutionary algorithm, which optimizes the sensor network's operation as determined by the AI algorithm. The sensor management algorithm presented is composed of a Bayesian network, the AI algorithm component, and a swarm optimization algorithm, the evolutionary algorithm. Thus, the algorithm can change its own performance goals in real time and will modify its own decisions based on observed measures within the sensor network. The definition of the measures as well as the Bayesian network determine the robustness of the algorithm and its utility in reacting dynamically to changes in the global system.

  12. A composite experimental dynamic substructuring method based on partitioned algorithms and localized Lagrange multipliers

    Science.gov (United States)

    Abbiati, Giuseppe; La Salandra, Vincenzo; Bursi, Oreste S.; Caracoglia, Luca

    2018-02-01

    Successful online hybrid (numerical/physical) dynamic substructuring simulations have shown their potential in enabling realistic dynamic analysis of almost any type of non-linear structural system (e.g., an as-built/isolated viaduct, a petrochemical piping system subjected to non-stationary seismic loading, etc.). Moreover, owing to faster and more accurate testing equipment, a number of different offline experimental substructuring methods, operating both in the time domain (e.g. impulse-based substructuring) and the frequency domain (i.e. Lagrange multiplier frequency-based substructuring), have been employed in mechanical engineering to examine dynamic substructure coupling. Numerous studies have dealt with the above-mentioned methods and with the consequent uncertainty propagation issues, associated either with experimental errors or with modelling assumptions. Nonetheless, a limited number of publications have systematically cross-examined the performance of the various Experimental Dynamic Substructuring (EDS) methods and the possibility of their exploitation in a complementary way to expedite a hybrid experimental/numerical simulation. From this perspective, this paper performs a comparative uncertainty propagation analysis of three EDS algorithms for coupling physical and numerical subdomains with a dual assembly approach based on localized Lagrange multipliers. The main results and comparisons are based on a series of Monte Carlo simulations carried out on five-DoF linear/non-linear chain-like systems that include typical aleatoric uncertainties emerging from measurement errors and excitation loads. In addition, we propose a new Composite-EDS (C-EDS) method to fuse both online and offline algorithms into a unique simulator. Capitalizing on the results of a more complex case study composed of a coupled isolated tank-piping system, we provide a feasible way to employ the C-EDS method when nonlinearities and multi-point constraints are present in the emulated system.

  13. Heuristic Scheduling Algorithm Oriented Dynamic Tasks for Imaging Satellites

    Directory of Open Access Journals (Sweden)

    Maocai Wang

    2014-01-01

    Full Text Available Imaging satellite scheduling is an NP-hard problem with many complex constraints. This paper studies the scheduling problem for dynamic tasks oriented to emergency cases. After the dynamic properties of satellite scheduling are analyzed, an optimization model is proposed. Based on the model, two heuristic algorithms are proposed to solve the problem. The first heuristic algorithm arranges new tasks by inserting or deleting them, then reinserting them repeatedly according to priority from low to high, and is named the IDI algorithm. The second one, called ISDR, adopts four steps: insert directly, insert by shifting, insert by deleting, and reinsert the deleted tasks. Moreover, two heuristic factors, the congestion degree of a time window and the overlapping degree of a task, are employed to improve the algorithm's performance. Finally, a case study is given to test the algorithms. The results show that the IDI algorithm is better than ISDR from the running time point of view, while the ISDR algorithm with heuristic factors is more effective with regard to algorithm performance. Moreover, the results also show that our method performs well for larger sets of dynamic tasks in comparison with the other two methods.

  14. A Stereo Dual-Channel Dynamic Programming Algorithm for UAV Image Stitching.

    Science.gov (United States)

    Li, Ming; Chen, Ruizhi; Zhang, Weilong; Li, Deren; Liao, Xuan; Wang, Lei; Pan, Yuanjin; Zhang, Peng

    2017-09-08

    Dislocation is one of the major challenges in unmanned aerial vehicle (UAV) image stitching. In this paper, we propose a new algorithm for seamlessly stitching UAV images based on a dynamic programming approach. Our solution consists of two steps: Firstly, an image matching algorithm is used to correct the images so that they are in the same coordinate system. Secondly, a new dynamic programming algorithm is developed based on the concept of a stereo dual-channel energy accumulation. A new energy aggregation and traversal strategy is adopted in our solution, which can find a more optimal seam line for image stitching. Our algorithm overcomes the theoretical limitation of the classical Duplaquet algorithm. Experiments show that the algorithm can effectively solve the dislocation problem in UAV image stitching, especially for the cases in dense urban areas. Our solution is also direction-independent, which has better adaptability and robustness for stitching images.
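
    For contrast with the proposed dual-channel scheme, the sketch below shows a classical single-channel dynamic-programming seam over an energy map built from the overlap of two image strips. The energy definition and test data are invented for illustration; this is the baseline Duplaquet-style DP, not the stereo dual-channel accumulation proposed in the paper.

```python
import numpy as np

def best_seam(energy):
    """Classic dynamic-programming seam: find the vertical path of minimum
    accumulated energy from the top row to the bottom row, moving to one of the
    three neighbouring columns at each step."""
    rows, cols = energy.shape
    cost = energy.astype(float).copy()
    back = np.zeros((rows, cols), dtype=int)
    for r in range(1, rows):
        for c in range(cols):
            lo, hi = max(0, c - 1), min(cols, c + 2)
            k = int(np.argmin(cost[r - 1, lo:hi])) + lo
            back[r, c] = k
            cost[r, c] += cost[r - 1, k]
    seam = [int(np.argmin(cost[-1]))]
    for r in range(rows - 1, 0, -1):
        seam.append(int(back[r, seam[-1]]))
    return seam[::-1]              # seam[r] = column index of the seam in row r

# Toy overlap-region energy: absolute difference of two synthetic image strips.
rng = np.random.default_rng(5)
left = rng.random((40, 30))
right = left + rng.normal(0, 0.05, size=left.shape)   # slightly perturbed copy
energy = np.abs(left - right)
print("seam columns (first 10 rows):", best_seam(energy)[:10])
```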

  15. Multiscale equation-free algorithms for molecular dynamics

    Science.gov (United States)

    Abi Mansour, Andrew

    Molecular dynamics is a physics-based computational tool that has been widely employed to study the dynamics and structure of macromolecules and their assemblies at the atomic scale. However, the efficiency of molecular dynamics simulation is limited because of the broad spectrum of timescales involved. To overcome this limitation, an equation-free algorithm is presented for simulating these systems using a multiscale model cast in terms of atomistic and coarse-grained variables. Both variables are evolved in time in such a way that the cross-talk between short and long scales is preserved. In this way, the coarse-grained variables guide the evolution of the atom-resolved states, while the latter provide the Newtonian physics for the former. While the atomistic variables are evolved using short molecular dynamics runs, time advancement at the coarse-grained level is achieved with a scheme that uses information from past and future states of the system while accounting for both the stochastic and deterministic features of the coarse-grained dynamics. To complete the multiscale cycle, an atom-resolved state consistent with the updated coarse-grained variables is recovered using algorithms from mathematical optimization. This multiscale paradigm is extended to nanofluidics using concepts from hydrodynamics, and it is demonstrated for macromolecular and nanofluidic systems. A toolkit is developed for prototyping these algorithms, which are then implemented within the GROMACS simulation package and released as an open source multiscale simulator.

  16. Dynamic game balancing implementation using adaptive algorithm in mobile-based Safari Indonesia game

    Science.gov (United States)

    Yuniarti, Anny; Nata Wardanie, Novita; Kuswardayan, Imam

    2018-03-01

    In developing a game there is one method that should be applied to maintain the interest of players, namely dynamic game balancing. Dynamic game balancing is a process of matching a player's playing style with the game's behaviour, attributes, and environment. This study applies dynamic game balancing using an adaptive algorithm in a scrolling-shooter game called Safari Indonesia, which was developed using Unity. This type of game features a fighter aircraft character trying to defend itself from insistent enemy attacks. This classic game type was chosen for implementing adaptive dynamic game balancing because it has quite complex attributes. Tests conducted by distributing questionnaires to a number of players indicate that this method managed to reduce frustration and increase the pleasure factor in playing.

  17. Detecting change points in VIX and S&P 500: A new approach to dynamic asset allocation

    DEFF Research Database (Denmark)

    Nystrup, Peter; Hansen, Bo William; Madsen, Henrik

    2016-01-01

    to DAA that is based on detection of change points without fitting a model with a fixed number of regimes to the data, without estimating any parameters and without assuming a specific distribution of the data. It is examined whether DAA is most profitable when based on changes in the Chicago Board Options Exchange Volatility Index or change points detected in daily returns of the S&P 500 index. In an asset universe consisting of the S&P 500 index and cash, it is shown that a dynamic strategy based on detected change points significantly improves the Sharpe ratio and reduces the drawdown risk when...
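
    Purely as an illustration of driving allocation from detected change points (the detection procedure of the paper is not reproduced here), a basic two-sided CUSUM detector on daily returns might look like this; threshold and drift are tuning assumptions.

        import numpy as np

        def cusum_change_points(returns, threshold=5.0, drift=0.0):
            # two-sided CUSUM on standardized daily returns
            x = (returns - np.mean(returns)) / np.std(returns)
            pos = neg = 0.0
            change_points = []
            for t, xi in enumerate(x):
                pos = max(0.0, pos + xi - drift)
                neg = min(0.0, neg + xi + drift)
                if pos > threshold or neg < -threshold:
                    change_points.append(t)   # a strategy could switch between the index and cash here
                    pos = neg = 0.0
            return change_points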

  18. Dynamic Programming Algorithms in Speech Recognition

    Directory of Open Access Journals (Sweden)

    Titus Felix FURTUNA

    2008-01-01

    Full Text Available In a word-based speech recognition system, recognition requires comparing the input signal of a word against the various words of the dictionary. The problem can be solved efficiently by a dynamic comparison algorithm whose goal is to optimally align the temporal scales of the two words. An algorithm of this type is Dynamic Time Warping. This paper presents two alternative implementations of the algorithm, designed for recognition of isolated words.
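
    A minimal dynamic time warping sketch, assuming a and b are one-dimensional feature sequences (for example per-frame energies of two utterances); the two implementation alternatives discussed in the paper are not reproduced here.

        import numpy as np

        def dtw_distance(a, b):
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j],      # insertion
                                         D[i, j - 1],      # deletion
                                         D[i - 1, j - 1])  # match
            return D[n, m]

        # an unknown word is assigned to the dictionary entry with the smallest DTW distance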

  19. Optimization of dynamic economic dispatch with valve-point effect using chaotic sequence based differential evolution algorithms

    International Nuclear Information System (INIS)

    He Dakuo; Dong Gang; Wang Fuli; Mao Zhizhong

    2011-01-01

    A chaotic sequence based differential evolution (DE) approach for solving the dynamic economic dispatch problem (DEDP) with valve-point effect is presented in this paper. The proposed method combines the DE algorithm with a local search technique to improve its performance. DE is the main optimizer, while an approximated model for local search is applied to fine-tune the solution of the DE run. To accelerate convergence of DE, a series of constraint handling rules are adopted. An initial population generated from a chaotic sequence improves the performance of the proposed algorithm. The combined algorithm is validated on two test systems consisting of 10 and 13 thermal units whose incremental fuel cost functions take into account the valve-point loading effects. The proposed combined method outperforms other algorithms reported in the literature for the DEDP considering valve-point effects.
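
    A common way to build such a chaotic initial population is to iterate the logistic map and scale the iterates into the decision-variable bounds; the sketch below is a generic illustration under that assumption, not the paper's exact scheme.

        import numpy as np

        def chaotic_population(pop_size, dim, lower, upper, mu=4.0):
            # logistic map x_{k+1} = mu * x_k * (1 - x_k); mu = 4 gives fully chaotic behaviour
            pop = np.empty((pop_size, dim))
            x = 0.37  # seed in (0, 1), avoiding 0, 0.25, 0.5, 0.75 and 1
            for i in range(pop_size):
                for j in range(dim):
                    x = mu * x * (1.0 - x)
                    pop[i, j] = lower[j] + x * (upper[j] - lower[j])
            return pop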

  20. Dynamic Relationships between Price and Net Asset Value for Asian Real Estate Stocks

    Directory of Open Access Journals (Sweden)

    Kim Hiang LIOW

    2018-03-01

    Full Text Available This paper examines short- and long-term behavior of the price-to net asset value ratio in six Asian public real estate markets. We find mean-reverting behavior of the ratio and spillover effects, where each of the examined public real estate markets correlates with other markets. Additionally, the unexpected shock correlating with the price-to-net asset value ratio in one market has a positive or negative correlation with the ratios of other markets. Our results offer fresh insights to portfolio managers, policymakers, and academic researchers into the regional and country market dynamics of public real estate valuation and cross-country interaction from the long-term and short-term perspectives.

  1. Variable threshold algorithm for division of labor analyzed as a dynamical system.

    Science.gov (United States)

    Castillo-Cagigal, Manuel; Matallanas, Eduardo; Navarro, Iñaki; Caamaño-Martín, Estefanía; Monasterio-Huelin, Félix; Gutiérrez, Álvaro

    2014-12-01

    Division of labor is a widely studied aspect of colony behavior of social insects. Division of labor models indicate how individuals distribute themselves in order to perform different tasks simultaneously. However, models that study division of labor from a dynamical system point of view cannot be found in the literature. In this paper, we define a division of labor model as a discrete-time dynamical system, in order to study the equilibrium points and their properties related to convergence and stability. By making use of this analytical model, an adaptive algorithm based on division of labor can be designed to satisfy dynamic criteria. In this way, we have designed and tested an algorithm that varies the response thresholds in order to modify the dynamic behavior of the system. This behavior modification allows the system to adapt to specific environmental and collective situations, making the algorithm a good candidate for distributed control applications. The variable threshold algorithm is based on specialization mechanisms. It is able to achieve an asymptotically stable behavior of the system in different environments and independently of the number of individuals. The algorithm has been successfully tested under several initial conditions and number of individuals.
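
    A minimal response-threshold sketch in the spirit of such models; the update rule and parameter names below are illustrative assumptions, not the paper's exact variable-threshold formulation.

        import numpy as np

        def simulate(n_agents, n_steps, theta0=0.5, stim_increase=0.1,
                     work_rate=0.02, learn=0.05, forget=0.05, seed=0):
            rng = np.random.default_rng(seed)
            theta = np.full(n_agents, theta0)      # response thresholds
            stimulus = 0.0
            engaged_fraction = []
            for _ in range(n_steps):
                # probability of engaging grows with the stimulus and shrinks with the threshold
                p = stimulus**2 / (stimulus**2 + theta**2 + 1e-12)
                working = rng.random(n_agents) < p
                # specialization: workers lower their threshold, idle agents raise it
                theta = np.clip(theta - learn * working + forget * (~working), 0.01, 1.0)
                stimulus = max(0.0, stimulus + stim_increase - work_rate * working.sum())
                engaged_fraction.append(working.mean())
            return np.array(engaged_fraction)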

  2. Dynamic gradient descent learning algorithms for enhanced empirical modeling of power plants

    International Nuclear Information System (INIS)

    Parlos, A.G.; Atiya, Amir; Chong, K.T.

    1991-01-01

    A newly developed dynamic gradient descent-based learning algorithm is used to train a recurrent multilayer perceptron network for use in empirical modeling of power plants. The two main advantages of the proposed learning algorithm are its ability to consider past error gradient information for future use and the two forward passes associated with its implementation, instead of one forward and one backward pass of the backpropagation algorithm. The latter advantage results in computational time saving because both passes can be performed simultaneously. The dynamic learning algorithm is used to train a hybrid feedforward/feedback neural network, a recurrent multilayer perceptron, which was previously found to exhibit good interpolation and extrapolation capabilities in modeling nonlinear dynamic systems. One of the drawbacks, however, of the previously reported work has been the long training times associated with accurate empirical models. The enhanced learning capabilities provided by the dynamic gradient descent-based learning algorithm are demonstrated by a case study of a steam power plant. The number of iterations required for accurate empirical modeling has been reduced from tens of thousands to hundreds, thus significantly expediting the learning process.

  3. Accounting providing of statistical analysis of intangible assets renewal under marketing strategy

    Directory of Open Access Journals (Sweden)

    I.R. Polishchuk

    2016-12-01

    Full Text Available The article analyzes the content of the Regulations on accounting policies of the surveyed enterprises with respect to operations concerning the amortization of intangible assets, on the following criteria: assessment on admission, determination of useful life, the period of depreciation, residual value, depreciation method, reflection in the financial statements, a unit of account, revaluation, and formation of fair value. The factors affecting the accounting policy and determining the mechanism for evaluating the completeness and timeliness of intangible assets renewal are characterized. An algorithm for selecting the method of intangible assets amortization is proposed. The knowledge base for statistical analysis of the timeliness and completeness of intangible assets renewal is expanded in terms of the developed internal reporting. Statistical indicators to assess the effectiveness of the amortization policy for intangible assets are proposed. Marketing strategies depending on the condition and amount of intangible assets, aimed at increasing marketing potential and ensuring continuity of economic activity, are described.

  4. Cognitive radio resource allocation based on coupled chaotic genetic algorithm

    International Nuclear Information System (INIS)

    Zu Yun-Xiao; Zhou Jie; Zeng Chang-Chang

    2010-01-01

    A coupled chaotic genetic algorithm for cognitive radio resource allocation, based on a genetic algorithm and a coupled logistic map, is proposed. A fitness function for cognitive radio resource allocation is provided. Simulations of cognitive radio resource allocation are conducted using the coupled chaotic genetic algorithm, a simple genetic algorithm and a dynamic allocation algorithm, respectively. The simulation results show that, compared with the simple genetic algorithm and the dynamic allocation algorithm, the coupled chaotic genetic algorithm reduces the total transmission power and bit error rate in the cognitive radio system and converges faster.

  5. Development of advanced risk informed asset management tool based on system dynamics approach for nuclear power plant

    International Nuclear Information System (INIS)

    Lee, Gyoung Cheol

    2007-02-01

    In the competitive environment of the electricity industry, the economic efficiency of a generation facility is the most important factor in increasing its competitiveness. For a nuclear power plant (NPP), safety is also an essential factor. Over the past several years, international efforts to develop safety-conscious, asset-value-maximizing methods, processes and tools have continued, and a Risk-Informed Asset Management (RIAM) methodology has been suggested by the Electric Power Research Institute (EPRI). This RIAM methodology is expected to provide plant operators with a project prioritization and life cycle management planning tool for making long-term maintenance plans, guiding plant budgeting, and determining the sensitivity of a plant's economic risk to the reliability and availability of systems, structures, and components (SSC), as well as other technical and economic parameters. The focus of this study is to develop a model that supports resource allocation and shows what effect such allocations have on the plant's economic and safety performance. The detailed research process toward this goal is as follows. The first step in developing the advanced RIAM model is to review EPRI's current RIAM model; this part describes the overall RIAM methodology including its conceptual model, implementation process, modular approach, etc. The second step is to perform a feasibility study of EPRI's current RIAM model with a case study; this part shows the results of the feasibility study and a discussion of those results. Finally, the concept of an advanced RIAM model is developed based on a system dynamics approach and the parameter relationships are formulated. In the advanced RIAM model, identification of the effect of scheduled maintenance on other parameters and of the relationship between PM activity and failure rate is the most important factor. In this study, these relationships are formulated based on the system dynamics approach. Creation of these modeling tools using Vensim

  6. Regime-Based Versus Static Asset Allocation: Letting the Data Speak

    DEFF Research Database (Denmark)

    Nystrup, Peter; Hansen, Bo William; Madsen, Henrik

    2015-01-01

    Regime shifts present a big challenge to traditional strategic asset allocation. This article investigates whether regime-based asset allocation can effectively respond to changes in financial regimes at the portfolio level, in an effort to provide better long-term results than more static approaches can offer. The authors center their regime-based approach around a regime-switching model with time-varying parameters that can match financial markets’ tendency to change behavior abruptly and the fact that the new behavior often persists for several periods after a change. In an asset universe...

  7. Exploring high dimensional data with Butterfly: a novel classification algorithm based on discrete dynamical systems.

    Science.gov (United States)

    Geraci, Joseph; Dharsee, Moyez; Nuin, Paulo; Haslehurst, Alexandria; Koti, Madhuri; Feilotter, Harriet E; Evans, Ken

    2014-03-01

    We introduce a novel method for visualizing high dimensional data via a discrete dynamical system. This method provides a 2D representation of the relationship between subjects according to a set of variables without geometric projections, transformed axes or principal components. The algorithm exploits a memory-type mechanism inherent in a certain class of discrete dynamical systems collectively referred to as the chaos game that are closely related to iterative function systems. The goal of the algorithm was to create a human readable representation of high dimensional patient data that was capable of detecting unrevealed subclusters of patients from within anticipated classifications. This provides a mechanism to further pursue a more personalized exploration of pathology when used with medical data. For clustering and classification protocols, the dynamical system portion of the algorithm is designed to come after some feature selection filter and before some model evaluation (e.g. clustering accuracy) protocol. In the version given here, a univariate features selection step is performed (in practice more complex feature selection methods are used), a discrete dynamical system is driven by this reduced set of variables (which results in a set of 2D cluster models), these models are evaluated for their accuracy (according to a user-defined binary classification) and finally a visual representation of the top classification models are returned. Thus, in addition to the visualization component, this methodology can be used for both supervised and unsupervised machine learning as the top performing models are returned in the protocol we describe here. Butterfly, the algorithm we introduce and provide working code for, uses a discrete dynamical system to classify high dimensional data and provide a 2D representation of the relationship between subjects. We report results on three datasets (two in the article; one in the appendix) including a public lung cancer
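
    A generic chaos-game style iteration of the kind referred to here maps a subject's selected, normalized feature values onto a trajectory in the plane; the following sketch only illustrates the general idea and is not the Butterfly algorithm itself.

        import numpy as np

        def chaos_game_2d(features, vertices, ratio=0.5):
            # features: 1D array of values in [0, 1] for one subject (after feature selection)
            # vertices: (k, 2) array of anchor points in the plane, one per feature bin
            point = np.array([0.5, 0.5])
            trajectory = []
            k = len(vertices)
            for f in features:
                v = vertices[min(int(f * k), k - 1)]   # bin the value to pick an anchor
                point = point + ratio * (v - point)    # move part-way toward that anchor
                trajectory.append(point.copy())
            return np.array(trajectory)   # the trajectory (or its endpoint) is the 2D representation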

  8. A Wavelet Analysis-Based Dynamic Prediction Algorithm to Network Traffic

    Directory of Open Access Journals (Sweden)

    Meng Fan-Bo

    2016-01-01

    Full Text Available Network traffic is a significantly important parameter for network traffic engineering, but it is highly dynamic in nature. Accordingly, it is difficult to directly predict the traffic volume of end-to-end flows. This paper proposes a new prediction algorithm for network traffic using wavelet analysis. Firstly, network traffic is converted into the time-frequency domain to capture its time-frequency features. Secondly, network traffic is modeled in the time-frequency domain for the different frequency components. Finally, we build the prediction model for network traffic, and the corresponding prediction algorithm is presented to attain network traffic prediction. Simulation results indicate that our approach is promising.

  9. Parameter identification for structural dynamics based on interval analysis algorithm

    Science.gov (United States)

    Yang, Chen; Lu, Zixing; Yang, Zhenyu; Liang, Ke

    2018-04-01

    A parameter identification method using an interval analysis algorithm for structural dynamics is presented in this paper. The proposed uncertain identification method is investigated by using the central difference method and an ARMA system. With the help of the fixed memory least squares method and the matrix inverse lemma, a set-membership identification technique is applied to obtain the best estimate of the identified parameters in a tight and accurate region. To cope with the lack of statistical description of the uncertain parameters, this paper treats uncertainties as non-probabilistic intervals. As long as the bounds of the uncertainties are known, this algorithm can obtain not only the center estimates of the parameters, but also the bounds of the errors. To improve the efficiency of the proposed method, a time-saving algorithm based on a recursive formula is presented. Finally, to verify the accuracy of the proposed method, two numerical examples are evaluated by three identification criteria respectively.

  10. Habit-based Asset Pricing with Limited Participation Consumption

    DEFF Research Database (Denmark)

    Bach, Christian; Møller, Stig Vinther

    We calibrate and estimate a consumption-based asset pricing model with habit formation using limited participation consumption data. Based on survey data of a representative sample of American households, we distinguish between assetholder and non-assetholder consumption, as well as the standard...

  11. Habit-based asset pricing with limited participation consumption

    DEFF Research Database (Denmark)

    Møller, Stig Vinther; Bach, Christian

    2011-01-01

    We calibrate and estimate a consumption-based asset pricing model with habit formation using limited participation consumption data. Based on survey data of a representative sample of American households, we distinguish between assetholder and non-assetholder consumption, as well as the standard...

  12. New MPPT algorithm for PV applications based on hybrid dynamical approach

    KAUST Repository

    Elmetennani, Shahrazed

    2016-10-24

    This paper proposes a new Maximum Power Point Tracking (MPPT) algorithm for photovoltaic applications using the multicellular converter as a stage of power adaptation. The proposed MPPT technique has been designed using a hybrid dynamical approach to model the photovoltaic generator. The hybrid dynamical theory has been applied taking advantage of the particular topology of the multicellular converter. Then, a hybrid automaton has been established to optimize the power production. The maximization of the produced solar energy is achieved by switching between the different operative modes of the hybrid automaton, which is conditioned by certain invariance and transition conditions. These conditions have been validated by simulation tests under different conditions of temperature and irradiance. Moreover, the performance of the proposed algorithm has then been evaluated by comparison with standard MPPT techniques, both numerically and by experimental tests under varying external working conditions. The results show the interesting features that the hybrid MPPT technique presents in terms of performance and simplicity for real-time implementation.

  13. New MPPT algorithm for PV applications based on hybrid dynamical approach

    KAUST Repository

    Elmetennani, Shahrazed; Laleg-Kirati, Taous-Meriem; Djemai, M.; Tadjine, M.

    2016-01-01

    This paper proposes a new Maximum Power Point Tracking (MPPT) algorithm for photovoltaic applications using the multicellular converter as a stage of power adaptation. The proposed MPPT technique has been designed using a hybrid dynamical approach to model the photovoltaic generator. The hybrid dynamical theory has been applied taking advantage of the particular topology of the multicellular converter. Then, a hybrid automaton has been established to optimize the power production. The maximization of the produced solar energy is achieved by switching between the different operative modes of the hybrid automaton, which is conditioned by certain invariance and transition conditions. These conditions have been validated by simulation tests under different conditions of temperature and irradiance. Moreover, the performance of the proposed algorithm has then been evaluated by comparison with standard MPPT techniques, both numerically and by experimental tests under varying external working conditions. The results show the interesting features that the hybrid MPPT technique presents in terms of performance and simplicity for real-time implementation.

  14. Dynamic airspace configuration algorithms for next generation air transportation system

    Science.gov (United States)

    Wei, Jian

    The National Airspace System (NAS) is under great pressure to safely and efficiently handle the record-high air traffic volume of today, and will face an even greater challenge in keeping pace with the steady increase in future air travel demand, which is projected to reach two to three times the current level by 2025. The inefficiency of traffic flow management initiatives causes severe airspace congestion and frequent flight delays, which cause billions of dollars in economic losses every year. To address the increasingly severe airspace congestion and delays, the Next Generation Air Transportation System (NextGen) has been proposed to transform the current static and rigid radar-based system into a dynamic and flexible satellite-based system. New operational concepts such as Dynamic Airspace Configuration (DAC) have been under development to provide the flexibility required to mitigate demand-capacity imbalances and increase the throughput of the entire NAS. In this dissertation, we address the DAC problem in the en route and terminal airspace under the framework of NextGen. We develop a series of algorithms to facilitate the implementation of innovative concepts relevant to DAC in both the en route and terminal airspace. We also develop a performance evaluation framework for comprehensive benefit analyses of different aspects of future sector design algorithms. First, we present a graph-based sectorization algorithm for DAC in the en route airspace, which models the underlying air route network with a weighted graph, converts the sectorization problem into a graph partition problem, partitions the weighted graph with an iterative spectral bipartition method, and constructs the sectors from the partitioned graph. The algorithm uses a graph model to accurately capture the complex traffic patterns of real flights, and generates sectors with high efficiency while evenly distributing the workload among them. We further improve

  15. Steam generator asset management: integrating technology and asset management

    International Nuclear Information System (INIS)

    Shoemaker, P.; Cislo, D.

    2006-01-01

    Asset Management is an established but often misunderstood discipline that is gaining momentum within the nuclear generation industry. The global impetus behind the movement toward asset management is sustainability. The discipline of asset management is based upon three fundamental aspects: key performance indicators (KPIs), activity-based cost accounting, and cost-benefit/risk analysis. The technology associated with these three aspects is fairly well developed, in all but the most critical area: cost-benefit/risk analysis. There are software programs that calculate, trend, and display key performance indicators to ensure high-level visibility. Activity-based costing is a little more difficult, requiring a consensus on the definition of what comprises an activity and then adjusting cost accounting systems to track it. In the United States, the Nuclear Energy Institute's Standard Nuclear Process Model (SNPM) serves as the basis for activity-based costing. As a result, the software industry has quickly adapted to develop tracking systems that include the SNPM structure. Both the KPIs and activity-based cost accounting feed the cost-benefit/risk analysis to allow for continuous improvement and task optimization, the goal of asset management. In cases where the benefits and risks are clearly understood and defined, there has been much progress in applying technology for continuous improvement. Within the nuclear generation industry, more specialized and unique software systems have been developed for active components, such as pumps and motors. Active components lend themselves well to the application of asset management techniques because failure rates can be established, which serves as the basis for quantifying risk in the cost-benefit/risk analysis. A key issue with respect to asset management technologies is only now being understood and addressed: how to manage passive components. Passive components, such as nuclear steam generators, reactor vessels

  16. Heuristic algorithm for single resource constrained project scheduling problem based on the dynamic programming

    Directory of Open Access Journals (Sweden)

    Stanimirović Ivan

    2009-01-01

    Full Text Available We introduce a heuristic method for the single resource constrained project scheduling problem, based on the dynamic programming solution of the knapsack problem. This method schedules projects with one type of resource in the non-preemptive case: once started, an activity is not interrupted and runs to completion. We compare our implementation of this method with a well-known heuristic scheduling method called Minimum Slack First (also known as the Gray-Kidd algorithm), as well as with Microsoft Project.
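
    For reference, the dynamic programming solution of the 0/1 knapsack problem that such a heuristic can call at each scheduling decision point looks roughly as follows; how activities map to items and the resource limit to the capacity is a modelling choice not detailed here.

        def knapsack(capacity, weights, values):
            # classic 0/1 knapsack: choose items maximizing value under the capacity
            n = len(weights)
            dp = [[0] * (capacity + 1) for _ in range(n + 1)]
            for i in range(1, n + 1):
                for c in range(capacity + 1):
                    dp[i][c] = dp[i - 1][c]
                    if weights[i - 1] <= c:
                        dp[i][c] = max(dp[i][c],
                                       dp[i - 1][c - weights[i - 1]] + values[i - 1])
            # recover the chosen subset by walking back through the table
            chosen, c = [], capacity
            for i in range(n, 0, -1):
                if dp[i][c] != dp[i - 1][c]:
                    chosen.append(i - 1)
                    c -= weights[i - 1]
            return dp[n][capacity], chosen[::-1]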

  17. Basel III and Asset Securitization

    Directory of Open Access Journals (Sweden)

    M. Mpundu

    2013-01-01

    Full Text Available Asset securitization via special purpose entities involves the process of transforming assets into securities that are issued to investors. These investors hold the rights to payments supported by the cash flows from an asset pool held by the said entity. In this paper, we discuss the mechanism by which low- and high-quality entities securitize low- and high-quality assets, respectively, into collateralized debt obligations. During the 2007–2009 financial crisis, asset securitization was seriously inhibited. In response to this, for instance, new Basel III capital and liquidity regulations were introduced. Here, we find that we can explicitly determine the transaction costs related to low-quality asset securitization. Also, in the case of dynamic and static multipliers, the effects of unexpected negative shocks such as rating downgrades on asset price and input, debt obligation price and output, and profit will be quantified. In this case, we note that Basel III has been designed to provide countercyclical capital buffers to negate procyclicality. Moreover, we will develop an illustrative example of low-quality asset securitization for subprime mortgages. Furthermore, numerical examples to illustrate the key results will be provided. In addition, connections between Basel III and asset securitization will be highlighted.

  18. Conjugate-Gradient Algorithms For Dynamics Of Manipulators

    Science.gov (United States)

    Fijany, Amir; Scheid, Robert E.

    1993-01-01

    Algorithms for serial and parallel computation of the forward dynamics of multiple-link robotic manipulators by the conjugate-gradient method are developed. The parallel algorithms have potential for speedup of computations on multiple linked, specialized processors implemented in very-large-scale integrated circuits. Such processors could be used to simulate dynamics, possibly faster than in real time, for purposes of planning and control.
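
    In forward dynamics the joint accelerations typically satisfy a linear system M(q) a = f with a symmetric positive-definite inertia matrix, which the conjugate-gradient method solves iteratively; the dense-matrix sketch below is illustrative and not the paper's serial or parallel formulation.

        import numpy as np

        def conjugate_gradient(M, f, tol=1e-10, max_iter=200):
            # solves M a = f for the joint accelerations a (M symmetric positive definite)
            a = np.zeros_like(f)
            r = f - M @ a
            p = r.copy()
            rs_old = r @ r
            for _ in range(max_iter):
                Mp = M @ p
                alpha = rs_old / (p @ Mp)
                a += alpha * p
                r -= alpha * Mp
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs_old) * p
                rs_old = rs_new
            return a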

  19. Entropy-based financial asset pricing.

    Directory of Open Access Journals (Sweden)

    Mihály Ormos

    Full Text Available We investigate entropy as a financial risk measure. Entropy explains the equity premium of securities and portfolios in a simpler way and, at the same time, with higher explanatory power than the beta parameter of the capital asset pricing model. For asset pricing we define the continuous entropy as an alternative measure of risk. Our results show that entropy decreases as a function of the number of securities involved in a portfolio, in a similar way to the standard deviation, and that efficient portfolios are situated on a hyperbola in the expected return-entropy system. For the empirical investigation we use daily returns of 150 randomly selected securities for a period of 27 years. Our regression results show that entropy has a higher explanatory power for the expected return than the capital asset pricing model beta. Furthermore, we show the time-varying behavior of the beta along with entropy.

  20. Entropy-based financial asset pricing.

    Science.gov (United States)

    Ormos, Mihály; Zibriczky, Dávid

    2014-01-01

    We investigate entropy as a financial risk measure. Entropy explains the equity premium of securities and portfolios in a simpler way and, at the same time, with higher explanatory power than the beta parameter of the capital asset pricing model. For asset pricing we define the continuous entropy as an alternative measure of risk. Our results show that entropy decreases as a function of the number of securities involved in a portfolio, in a similar way to the standard deviation, and that efficient portfolios are situated on a hyperbola in the expected return-entropy system. For the empirical investigation we use daily returns of 150 randomly selected securities for a period of 27 years. Our regression results show that entropy has a higher explanatory power for the expected return than the capital asset pricing model beta. Furthermore, we show the time-varying behavior of the beta along with entropy.
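
    One simple way to estimate such a continuous (differential) entropy from daily returns is a histogram plug-in estimator; this sketch is only an illustration and not necessarily the estimator used in the papers above.

        import numpy as np

        def histogram_entropy(returns, bins=50):
            counts, edges = np.histogram(returns, bins=bins)
            widths = np.diff(edges)
            p = counts / counts.sum()
            nz = p > 0
            # H ~ -sum p_i * log(p_i / width_i), a discretized differential entropy
            return float(-np.sum(p[nz] * np.log(p[nz] / widths[nz])))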

  1. Dynamic load balancing algorithm for molecular dynamics based on Voronoi cells domain decompositions

    Energy Technology Data Exchange (ETDEWEB)

    Fattebert, J.-L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Richards, D.F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Glosli, J.N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2012-12-01

    We present a new algorithm for automatic parallel load balancing in classical molecular dynamics. It assumes a spatial domain decomposition of particles into Voronoi cells. It is a gradient method which attempts to minimize a cost function by displacing Voronoi sites associated with each processor/sub-domain along steepest descent directions. Excellent load balance has been obtained for quasi-2D and 3D practical applications, with up to 440 million particles on 65,536 MPI tasks.

  2. A QoS-Based Dynamic Queue Length Scheduling Algorithm in Multiantenna Heterogeneous Systems

    Directory of Open Access Journals (Sweden)

    Verikoukis Christos

    2010-01-01

    Full Text Available The use of real-time delay-sensitive applications in wireless systems has grown significantly in recent years. Therefore, the designers of wireless systems face the challenging issue of guaranteeing the required Quality of Service (QoS). On the other hand, the recent advances in and extensive use of multiple antennas have already been included in several commercial standards, where multibeam opportunistic transmission beamforming strategies have been proposed to improve the performance of wireless systems. A cross-layer-based dynamically tuned queue length scheduler is presented in this paper for the downlink of multiuser and multiantenna WLAN systems with heterogeneous traffic requirements. To align with modern wireless systems transmission strategies, an opportunistic scheduling algorithm is employed, while priority over the different traffic classes is applied. A tradeoff between maximizing the throughput of the system and guaranteeing the maximum allowed delay is obtained. Therefore, the length of the queue is dynamically adjusted to select the appropriate conditions based on the operator requirements.

  3. Using Genetic Algorithms for Navigation Planning in Dynamic Environments

    Directory of Open Access Journals (Sweden)

    Ferhat Uçan

    2012-01-01

    Full Text Available Navigation planning can be considered as a combination of searching for and executing the most convenient flight path from an initial waypoint to a destination waypoint. Generally, the aim is to follow the flight path which provides minimum fuel consumption for the air vehicle. For dynamic environments, constraints change dynamically during flight; this is a special case of dynamic path planning. As the main concern of this paper is flight planning, the conditions and objectives most likely to be used in the navigation problem are considered. In this paper, the genetic algorithm solution of the dynamic flight planning problem is explained. The evolutionary dynamic navigation planning algorithm is developed to compensate for the existing deficiencies of other approaches. Existing fully dynamic algorithms process unit changes to the topology one modification at a time, but when several such operations occur in the environment simultaneously, the algorithms are quite inefficient. The proposed algorithm may respond to concurrent constraint updates in a shorter time in a dynamic environment. The most secure navigation of the air vehicle is planned and executed so that fuel consumption is minimized.

  4. Algorithms for optimal sequencing of dynamic multileaf collimators

    Energy Technology Data Exchange (ETDEWEB)

    Kamath, Srijit [Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL (United States); Sahni, Sartaj [Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL (United States); Palta, Jatinder [Department of Radiation Oncology, University of Florida, Gainesville, FL (United States); Ranka, Sanjay [Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL (United States)

    2004-01-07

    Dynamic multileaf collimator (DMLC) intensity modulated radiation therapy (IMRT) is used to deliver intensity modulated beams using a multileaf collimator (MLC), with the leaves in motion. DMLC-IMRT requires the conversion of a radiation intensity map into a leaf sequence file that controls the movement of the MLC while the beam is on. It is imperative that the intensity map delivered using the leaf sequence file be as close as possible to the intensity map generated by the dose optimization algorithm, while satisfying hardware constraints of the delivery system. Optimization of the leaf-sequencing algorithm has been the subject of several recent investigations. In this work, we present a systematic study of the optimization of leaf-sequencing algorithms for dynamic multileaf collimator beam delivery and provide rigorous mathematical proofs of optimized leaf sequence settings in terms of monitor unit (MU) efficiency under the most common leaf movement constraints that include leaf interdigitation constraint. Our analytical analysis shows that leaf sequencing based on unidirectional movement of the MLC leaves is as MU efficient as bi-directional movement of the MLC leaves.

  5. Algorithms for optimal sequencing of dynamic multileaf collimators

    International Nuclear Information System (INIS)

    Kamath, Srijit; Sahni, Sartaj; Palta, Jatinder; Ranka, Sanjay

    2004-01-01

    Dynamic multileaf collimator (DMLC) intensity modulated radiation therapy (IMRT) is used to deliver intensity modulated beams using a multileaf collimator (MLC), with the leaves in motion. DMLC-IMRT requires the conversion of a radiation intensity map into a leaf sequence file that controls the movement of the MLC while the beam is on. It is imperative that the intensity map delivered using the leaf sequence file be as close as possible to the intensity map generated by the dose optimization algorithm, while satisfying hardware constraints of the delivery system. Optimization of the leaf-sequencing algorithm has been the subject of several recent investigations. In this work, we present a systematic study of the optimization of leaf-sequencing algorithms for dynamic multileaf collimator beam delivery and provide rigorous mathematical proofs of optimized leaf sequence settings in terms of monitor unit (MU) efficiency under the most common leaf movement constraints that include leaf interdigitation constraint. Our analytical analysis shows that leaf sequencing based on unidirectional movement of the MLC leaves is as MU efficient as bi-directional movement of the MLC leaves

  6. A Localization Algorithm Based on AOA for Ad-Hoc Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yang Sun Lee

    2012-01-01

    Full Text Available Knowledge of the positions of sensor nodes in Wireless Sensor Networks (WSNs) will make possible many applications such as asset monitoring, object tracking and routing. In WSNs, errors may occur in the measurement of distances and angles between pairs of nodes, and as these errors propagate to other nodes, the estimation of sensor node positions can be difficult and suffer large errors. In this paper, we propose a localization algorithm based on both the distance and the angle to a landmark. We introduce a method based on the incident angle to a landmark, together with an algorithm to exchange physical data such as distances and incident angles and to update the position of a node by utilizing multiple landmarks and multiple paths to landmarks.

  7. On Newton-Raphson formulation and algorithm for displacement based structural dynamics problem with quadratic damping nonlinearity

    Directory of Open Access Journals (Sweden)

    Koh Kim Jie

    2017-01-01

    Full Text Available Quadratic damping nonlinearity is challenging for displacement-based structural dynamics problems, as the problem is nonlinear in the time derivative of the primitive variable. For such nonlinearity, the formulation of the tangent stiffness matrix is not clearly presented in the literature. Consequently, ambiguity related to the kinematics update arises when implementing the time integration-iterative algorithm. In the present work, an Euler-Bernoulli beam vibration problem with quadratic damping nonlinearity is addressed, as the main source of quadratic damping nonlinearity is drag force estimation, which is generally valid only for slender structures. Employing the Newton-Raphson formulation, the tangent stiffness components associated with the quadratic damping nonlinearity require velocity input for their evaluation. For this reason, two mathematically equivalent algorithm structures with different kinematics arrangements are tested. Both algorithm structures result in the same accuracy and convergence characteristics of the solution.
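
    To make the role of the velocity in the tangent stiffness concrete, the sketch below treats a single degree of freedom (the paper addresses an Euler-Bernoulli beam) with m*a + c*v*|v| + k*u = f(t), using Newmark time stepping and Newton-Raphson iterations; the derivative d(v*|v|)/dv = 2*|v| is what brings the velocity into the tangent.

        import numpy as np

        def sdof_quadratic_damping(m, c, k, f, dt, n_steps, tol=1e-10, max_iter=30):
            beta, gamma = 0.25, 0.5          # Newmark average acceleration
            u = v = 0.0
            a = (f(0.0) - c * v * abs(v) - k * u) / m
            history = []
            for n in range(1, n_steps + 1):
                t = n * dt
                u_new = u                    # predictor
                for _ in range(max_iter):
                    a_new = (u_new - u) / (beta * dt**2) - v / (beta * dt) \
                            - (1.0 / (2.0 * beta) - 1.0) * a
                    v_new = gamma * (u_new - u) / (beta * dt) - (gamma / beta - 1.0) * v \
                            - dt * (gamma / (2.0 * beta) - 1.0) * a
                    residual = f(t) - m * a_new - c * v_new * abs(v_new) - k * u_new
                    # tangent stiffness: the quadratic damping contributes 2*c*|v| times dv/du
                    k_tan = m / (beta * dt**2) + 2.0 * c * abs(v_new) * gamma / (beta * dt) + k
                    du = residual / k_tan
                    u_new += du
                    if abs(du) < tol:
                        break
                u, v, a = u_new, v_new, a_new
                history.append(u)
            return np.array(history)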

  8. Using genetic algorithm to solve a new multi-period stochastic optimization model

    Science.gov (United States)

    Zhang, Xin-Li; Zhang, Ke-Cun

    2009-09-01

    This paper presents a new asset allocation model based on the CVaR risk measure and transaction costs. Institutional investors manage their strategic asset mix over time to achieve favorable returns subject to various uncertainties, policy and legal constraints, and other requirements. One may use a multi-period portfolio optimization model in order to determine an optimal asset mix. Recently, an alternative stochastic programming model with simulated paths was proposed by Hibiki [N. Hibiki, A hybrid simulation/tree multi-period stochastic programming model for optimal asset allocation, in: H. Takahashi, (Ed.) The Japanese Association of Financial Econometrics and Engineering, JAFFE Journal (2001) 89-119 (in Japanese); N. Hibiki A hybrid simulation/tree stochastic optimization model for dynamic asset allocation, in: B. Scherer (Ed.), Asset and Liability Management Tools: A Handbook for Best Practice, Risk Books, 2003, pp. 269-294], which was called a hybrid model. However, transaction costs were not considered in that paper. In this paper, we improve Hibiki's model in the following aspects: (1) the risk measure CVaR is introduced to control the wealth loss risk while maximizing the expected utility; (2) typical market imperfections such as short-sale constraints and proportional transaction costs are considered simultaneously; (3) the application of a genetic algorithm to solve the resulting model is discussed in detail. Numerical results show the suitability and feasibility of our methodology.
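
    For reference, the scenario-based CVaR used as the risk control in such models can be computed from simulated portfolio losses roughly as follows (a generic sketch, not the paper's formulation).

        import numpy as np

        def var_cvar(losses, alpha=0.95):
            # losses: simulated end-of-horizon portfolio losses (positive = loss)
            losses = np.asarray(losses, dtype=float)
            var = np.quantile(losses, alpha)          # value-at-risk at level alpha
            cvar = losses[losses >= var].mean()       # mean loss in the worst (1 - alpha) tail
            return var, cvar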

  9. Land of Addicts? An Empirical Investigation of Habit-Based Asset Pricing Behavior

    OpenAIRE

    Xiaohong Chen; Sydney C. Ludvigson

    2004-01-01

    This paper studies the ability of a general class of habit-based asset pricing models to match the conditional moment restrictions implied by asset pricing theory. We treat the functional form of the habit as unknown and estimate it along with the rest of the model's finite-dimensional parameters. Using quarterly data on consumption growth, asset returns and instruments, our empirical results indicate that the estimated habit function is nonlinear, and the habit formation is better described...

  10. Enhanced TDMA Based Anti-Collision Algorithm with a Dynamic Frame Size Adjustment Strategy for Mobile RFID Readers

    Directory of Open Access Journals (Sweden)

    Kwang Cheol Shin

    2009-02-01

    Full Text Available In the fields of production, manufacturing and supply chain management, Radio Frequency Identification (RFID is regarded as one of the most important technologies. Nowadays, Mobile RFID, which is often installed in carts or forklift trucks, is increasingly being applied to the search for and checkout of items in warehouses, supermarkets, libraries and other industrial fields. In using Mobile RFID, since the readers are continuously moving, they can interfere with each other when they attempt to read the tags. In this study, we suggest a Time Division Multiple Access (TDMA based anti-collision algorithm for Mobile RFID readers. Our algorithm automatically adjusts the frame size of each reader without using manual parameters by adopting the dynamic frame size adjustment strategy when collisions occur at a reader. Through experiments on a simulated environment for Mobile RFID readers, we show that the proposed method improves the number of successful transmissions by about 228% on average, compared with Colorwave, a representative TDMA based anti-collision algorithm.

  11. Enhanced TDMA Based Anti-Collision Algorithm with a Dynamic Frame Size Adjustment Strategy for Mobile RFID Readers.

    Science.gov (United States)

    Shin, Kwang Cheol; Park, Seung Bo; Jo, Geun Sik

    2009-01-01

    In the fields of production, manufacturing and supply chain management, Radio Frequency Identification (RFID) is regarded as one of the most important technologies. Nowadays, Mobile RFID, which is often installed in carts or forklift trucks, is increasingly being applied to the search for and checkout of items in warehouses, supermarkets, libraries and other industrial fields. In using Mobile RFID, since the readers are continuously moving, they can interfere with each other when they attempt to read the tags. In this study, we suggest a Time Division Multiple Access (TDMA) based anti-collision algorithm for Mobile RFID readers. Our algorithm automatically adjusts the frame size of each reader without using manual parameters by adopting the dynamic frame size adjustment strategy when collisions occur at a reader. Through experiments on a simulated environment for Mobile RFID readers, we show that the proposed method improves the number of successful transmissions by about 228% on average, compared with Colorwave, a representative TDMA based anti-collision algorithm.

  12. Application of multiple tabu search algorithm to solve dynamic economic dispatch considering generator constraints

    International Nuclear Information System (INIS)

    Pothiya, Saravuth; Ngamroo, Issarachai; Kongprawechnon, Waree

    2008-01-01

    This paper presents a new optimization technique based on a multiple tabu search algorithm (MTS) to solve the dynamic economic dispatch (ED) problem with generator constraints. In the constrained dynamic ED problem, the load demand and spinning reserve capacity, as well as some practical operation constraints of generators such as ramp rate limits and prohibited operating zones, are taken into consideration. The MTS algorithm introduces additional mechanisms such as initialization, adaptive searches, multiple searches, crossover and a restarting process. To show its efficiency, the MTS algorithm is applied to solve constrained dynamic ED problems of power systems with 6 and 15 units. The results obtained from the MTS algorithm are compared to those achieved by conventional approaches, such as simulated annealing (SA), genetic algorithm (GA), tabu search (TS) algorithm and particle swarm optimization (PSO). The experimental results show that the proposed MTS algorithm is able to obtain higher-quality solutions efficiently and with less computational time than the conventional approaches.

  13. Digital asset management.

    Science.gov (United States)

    Humphrey, Clinton D; Tollefson, Travis T; Kriet, J David

    2010-05-01

    Facial plastic surgeons are accumulating massive digital image databases with the evolution of photodocumentation and widespread adoption of digital photography. Managing and maximizing the utility of these vast data repositories, or digital asset management (DAM), is a persistent challenge. Developing a DAM workflow that incorporates a file naming algorithm and metadata assignment will increase the utility of a surgeon's digital images. Copyright 2010 Elsevier Inc. All rights reserved.
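
    A DAM workflow of this kind typically pairs a deterministic file naming algorithm with metadata assignment; the sketch below is a hypothetical example of such a naming function, and the field names and format are illustrative rather than a published standard.

        import datetime

        def asset_filename(patient_id, procedure, view, ext=".jpg"):
            # date + anonymized patient ID + procedure + view, lowercase, no spaces
            date = datetime.date.today().isoformat()
            name = f"{date}_{patient_id}_{procedure}_{view}{ext}"
            return name.lower().replace(" ", "-")

        # asset_filename("P0042", "rhinoplasty", "lateral-right")
        # -> e.g. '2024-05-01_p0042_rhinoplasty_lateral-right.jpg'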

  14. Lorentz covariant canonical symplectic algorithms for dynamics of charged particles

    Science.gov (United States)

    Wang, Yulei; Liu, Jian; Qin, Hong

    2016-12-01

    In this paper, the Lorentz covariance of algorithms is introduced. Under Lorentz transformation, both the form and performance of a Lorentz covariant algorithm are invariant. To acquire the advantages of symplectic algorithms and Lorentz covariance, a general procedure for constructing Lorentz covariant canonical symplectic algorithms (LCCSAs) is provided, based on which an explicit LCCSA for dynamics of relativistic charged particles is built. LCCSA possesses Lorentz invariance as well as long-term numerical accuracy and stability, due to the preservation of a discrete symplectic structure and the Lorentz symmetry of the system. For situations with time-dependent electromagnetic fields, which are difficult to handle in traditional construction procedures of symplectic algorithms, LCCSA provides a perfect explicit canonical symplectic solution by implementing the discretization in 4-spacetime. We also show that LCCSA has built-in energy-based adaptive time steps, which can optimize the computation performance when the Lorentz factor varies.

  15. A New Multiobjective Evolutionary Algorithm for Community Detection in Dynamic Complex Networks

    Directory of Open Access Journals (Sweden)

    Guoqiang Chen

    2013-01-01

    Full Text Available Community detection in dynamic networks is an important research topic and has received an enormous amount of attention in recent years. Modularity was selected as the measure to quantify the quality of the community partition in previous detection methods, but modularity is known to suffer from resolution limits. In this paper, we propose a novel multiobjective evolutionary algorithm for community detection in dynamic networks based on the framework of the nondominated sorting genetic algorithm. Modularity density, which can address the limitations of the modularity function, is adopted to measure the snapshot cost, and normalized mutual information is selected to measure the temporal cost, respectively. Knowledge of the problem's characteristics is used in designing the genetic operators. Furthermore, a local search operator is designed, which improves the effectiveness and efficiency of community detection. Experimental studies based on synthetic datasets show that the proposed algorithm obtains better performance than the compared algorithms.

  16. An FDTD algorithm for simulating light propagation in anisotropic dynamic gain media

    KAUST Repository

    Al-Jabr, A. A.; San Roman Alerigi, Damian; Ooi, Boon S.; Alsunaidi, M. A.

    2014-01-01

    Simulating light propagation in anisotropic dynamic gain media such as semiconductors and solid-state lasers using the finite difference time-domain FDTD technique is a tedious process, as many variables need to be evaluated in the same instant of time. The algorithm has to take care of the laser dynamic gain, rate equations, anisotropy and dispersion. In this paper, to the best of our knowledge, we present the first algorithm that solves this problem. The algorithm is based on separating calculations into independent layers and hence solving each problem in a layer of calculations. The anisotropic gain medium is presented and tested using a one-dimensional set-up. The algorithm is then used for the analysis of a two-dimensional problem.

  17. An FDTD algorithm for simulating light propagation in anisotropic dynamic gain media

    KAUST Repository

    Al-Jabr, A. A.

    2014-05-02

    Simulating light propagation in anisotropic dynamic gain media such as semiconductors and solid-state lasers using the finite difference time-domain FDTD technique is a tedious process, as many variables need to be evaluated in the same instant of time. The algorithm has to take care of the laser dynamic gain, rate equations, anisotropy and dispersion. In this paper, to the best of our knowledge, we present the first algorithm that solves this problem. The algorithm is based on separating calculations into independent layers and hence solving each problem in a layer of calculations. The anisotropic gain medium is presented and tested using a one-dimensional set-up. The algorithm is then used for the analysis of a two-dimensional problem.

  18. A Uniform Energy Consumption Algorithm for Wireless Sensor and Actuator Networks Based on Dynamic Polling Point Selection

    Science.gov (United States)

    Li, Shuo; Peng, Jun; Liu, Weirong; Zhu, Zhengfa; Lin, Kuo-Chi

    2014-01-01

    Recent research has indicated that using the mobility of the actuator in wireless sensor and actuator networks (WSANs) to achieve mobile data collection can greatly increase the sensor network lifetime. However, mobile data collection may result in unacceptable collection delays in the network if the path of the actuator is too long. Because real-time network applications require meeting data collection delay constraints, planning the path of the actuator is a very important issue to balance the prolongation of the network lifetime and the reduction of the data collection delay. In this paper, a multi-hop routing mobile data collection algorithm is proposed based on dynamic polling point selection with delay constraints to address this issue. The algorithm can actively update the selection of the actuator's polling points according to the sensor nodes' residual energies and their locations while also considering the collection delay constraint. It also dynamically constructs the multi-hop routing trees rooted by these polling points to balance the sensor node energy consumption and the extension of the network lifetime. The effectiveness of the algorithm is validated by simulation. PMID:24451455

  19. A MODIFIED GIFFLER AND THOMPSON ALGORITHM COMBINED WITH DYNAMIC SLACK TIME FOR SOLVING DYNAMIC SCHEDULE PROBLEMS

    Directory of Open Access Journals (Sweden)

    Tanti Octavia

    2003-01-01

    Full Text Available A modified Giffler and Thompson algorithm combined with dynamic slack time is used to allocate machine resources in a dynamic environment. It is compared with a Real Time Order Promising (RTP) algorithm. The performance of the modified Giffler and Thompson and RTP algorithms is measured by mean tardiness. The result shows that the modified Giffler and Thompson algorithm combined with dynamic slack time provides significantly better results than the RTP algorithm in terms of mean tardiness.

  20. Research on Methodology to Prioritize Critical Digital Assets based on Nuclear Risk Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Wonjik; Kwon, Kookheui; Kim, Hyundoo [Korea Institute of Nuclear Nonproliferation and Control, Daejeon (Korea, Republic of)

    2016-10-15

    Digital systems are used in nuclear facilities to monitor and control various types of field devices, as well as to obtain and store vital information. It is therefore increasingly important for nuclear facilities to protect digital systems from cyber-attack in terms of safe operation and public health, since cyber compromise of these systems could lead to unacceptable radiological consequences. Based on KINAC/RS-015, a cyber security regulatory standard, regulatory activities for cyber security at nuclear facilities generally focus on critical digital assets (CDAs), which are safety, security, and emergency preparedness related digital assets. Critical digital assets are estimated to make up over 60% of all digital assets in a nuclear power plant. It was therefore necessary to prioritize critical digital assets to improve the efficiency of regulation and implementation. In this paper, the research status of methodology development to prioritize critical digital assets based on nuclear risk assessment is introduced. To identify the digital assets that directly affect accidents, PRA results (ET, FT, and minimal cut sets) are analyzed. According to the results of the analysis, the digital systems related to CD are the ESF-CCS (safety-related component control system) and the Process-CCS (non-safety-related component control system), as well as the Engineered Safety Features Actuation System (ESFAS). These digital assets can be identified as Vital Digital Assets (VDAs). Hereafter, to develop a general methodology for identifying accident-related VDAs among CDAs, (1) a method using the minimal cut set results of the PRA model will be studied, and (2) a method quantifying the results of a digital I and C PRA, which is performed to reflect all digital cabinets related to the systems in the FT, will be studied.

  1. Research on Methodology to Prioritize Critical Digital Assets based on Nuclear Risk Assessment

    International Nuclear Information System (INIS)

    Kim, Wonjik; Kwon, Kookheui; Kim, Hyundoo

    2016-01-01

    Digital systems are used in nuclear facilities to monitor and control various types of field devices, as well as to obtain and store vital information. It is therefore increasingly important for nuclear facilities to protect digital systems from cyber-attack in terms of safe operation and public health, since cyber compromise of these systems could lead to unacceptable radiological consequences. Based on KINAC/RS-015, a cyber security regulatory standard, regulatory activities for cyber security at nuclear facilities generally focus on critical digital assets (CDAs), which are safety, security, and emergency preparedness related digital assets. Critical digital assets are estimated to make up over 60% of all digital assets in a nuclear power plant. It was therefore necessary to prioritize critical digital assets to improve the efficiency of regulation and implementation. In this paper, the research status of methodology development to prioritize critical digital assets based on nuclear risk assessment is introduced. To identify the digital assets that directly affect accidents, PRA results (ET, FT, and minimal cut sets) are analyzed. According to the results of the analysis, the digital systems related to CD are the ESF-CCS (safety-related component control system) and the Process-CCS (non-safety-related component control system), as well as the Engineered Safety Features Actuation System (ESFAS). These digital assets can be identified as Vital Digital Assets (VDAs). Hereafter, to develop a general methodology for identifying accident-related VDAs among CDAs, (1) a method using the minimal cut set results of the PRA model will be studied, and (2) a method quantifying the results of a digital I and C PRA, which is performed to reflect all digital cabinets related to the systems in the FT, will be studied.

  2. Cluster-Based Multipolling Sequencing Algorithm for Collecting RFID Data in Wireless LANs

    Science.gov (United States)

    Choi, Woo-Yong; Chatterjee, Mainak

    2015-03-01

    With the growing use of RFID (Radio Frequency Identification), it is becoming important to devise ways to read RFID tags in real time. Access points (APs) of IEEE 802.11-based wireless Local Area Networks (LANs) are being integrated with RFID networks that can efficiently collect real-time RFID data. Several schemes, such as multipolling methods based on the dynamic search algorithm and random sequencing, have been proposed. However, as the number of RFID readers associated with an AP increases, it becomes difficult for the dynamic search algorithm to derive the multipolling sequence in real time. Though multipolling methods can eliminate the polling overhead, we still need to enhance the performance of the multipolling methods based on random sequencing. To that extent, we propose a real-time cluster-based multipolling sequencing algorithm that drastically eliminates more than 90% of the polling overhead, particularly so when the dynamic search algorithm fails to derive the multipolling sequence in real time.

  3. Agent-based Algorithm for Spatial Distribution of Objects

    KAUST Repository

    Collier, Nathan

    2012-06-02

    In this paper we present an agent-based algorithm for the spatial distribution of objects. The algorithm is a generalization of the bubble mesh algorithm, initially created for the point insertion stage of the meshing process of the finite element method. The bubble mesh algorithm treats objects in space as bubbles, which repel and attract each other. The dynamics of each bubble are approximated by solving a series of ordinary differential equations. We present numerical results for a meshing application as well as a graph visualization application.
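    A minimal sketch of the bubble-style dynamics described above is given below: points repel when closer than a target spacing, attract when slightly farther, and are relaxed by integrating damped equations of motion. The force law, damping, and constants are illustrative assumptions rather than the bubble mesh algorithm's exact formulation.

```python
# Sketch of a bubble-mesh-style point distribution in 2D: points ("bubbles")
# repel when closer than a target spacing and attract when slightly farther,
# and their damped dynamics are integrated with explicit Euler steps.
import numpy as np

rng = np.random.default_rng(0)
n, target, dt, damping = 60, 0.15, 0.02, 0.9
pos = rng.random((n, 2))          # initial positions in the unit square
vel = np.zeros_like(pos)

for _ in range(500):
    diff = pos[:, None, :] - pos[None, :, :]          # pairwise displacement
    dist = np.linalg.norm(diff, axis=-1) + np.eye(n)  # avoid divide-by-zero on diagonal
    # spring-like interaction: repulsive below the target spacing, attractive above it
    mag = np.where(dist < 1.5 * target, (target - dist) / target, 0.0)
    np.fill_diagonal(mag, 0.0)
    force = (mag[..., None] * diff / dist[..., None]).sum(axis=1)
    vel = damping * (vel + dt * force)
    pos = np.clip(pos + dt * vel, 0.0, 1.0)           # keep bubbles in the domain

nn = np.sort(np.linalg.norm(pos[:, None] - pos[None, :], axis=-1) + np.eye(n), axis=1)[:, 0]
print("final mean nearest-neighbour spacing:", nn.mean())
```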

  4. IoT-based Asset Management System for Healthcare-related Industries

    Directory of Open Access Journals (Sweden)

    Lee Carman Ka Man

    2015-11-01

    The healthcare industry has been focusing efforts on optimizing inventory management procedures through the incorporation of Information and Communication Technology, in the form of tracking devices and data mining, to establish ideal inventory models. In this paper, a roadmap is developed towards a technological assessment of the Internet of Things (IoT) in the healthcare industry, 2010–2020. According to the roadmap, an IoT-based healthcare asset management system (IoT-HAMS) is proposed and developed based on Artificial Neural Network (ANN) and Fuzzy Logic (FL), incorporating IoT technologies for asset management to optimize the supply of resources.

  5. Dynamic systems models new methods of parameter and state estimation

    CERN Document Server

    2016-01-01

    This monograph is an exposition of a novel method for solving inverse problems, a method of parameter estimation for time series data collected from simulations of real experiments. These time series might be generated by measuring the dynamics of aircraft in flight, by the function of a hidden Markov model used in bioinformatics or speech recognition or when analyzing the dynamics of asset pricing provided by the nonlinear models of financial mathematics. Dynamic Systems Models demonstrates the use of algorithms based on polynomial approximation which have weaker requirements than already-popular iterative methods. Specifically, they do not require a first approximation of a root vector and they allow non-differentiable elements in the vector functions being approximated. The text covers all the points necessary for the understanding and use of polynomial approximation from the mathematical fundamentals, through algorithm development to the application of the method in, for instance, aeroplane flight dynamic...

  6. Incomplete Financial Markets and Jumps in Asset Prices

    DEFF Research Database (Denmark)

    Crès, Hervé; Markeprand, Tobias Ejnar; Tvede, Mich

    A dynamic pure-exchange general equilibrium model with uncertainty is studied. Fundamentals are supposed to depend continuously on states of nature. It is shown that: (1) if financial markets are complete, then asset prices vary continuously with states of nature; and (2) if financial markets are incomplete, jumps in asset prices may be unavoidable. Consequently, incomplete financial markets may increase volatility in asset prices significantly.

  7. A Relative-Localization Algorithm Using Incomplete Pairwise Distance Measurements for Underwater Applications

    Directory of Open Access Journals (Sweden)

    Kae Y. Foo

    2010-01-01

    The task of localizing underwater assets involves the relative localization of each unit using only pairwise distance measurements, usually obtained from time-of-arrival or time-delay-of-arrival measurements. In the fluctuating underwater environment, a complete set of pairwise distance measurements can often be difficult to acquire, thus hindering a straightforward closed-form solution in deriving the assets' relative coordinates. An iterative multidimensional scaling approach is presented based upon a weighted-majorization algorithm that tolerates missing or inaccurate distance measurements. Substantial modifications are proposed to optimize the algorithm, while the effects of refractive propagation paths are considered. A parametric study of the algorithm based upon simulation results is shown. An acoustic field-trial was then carried out, presenting field measurements to highlight the practical implementation of this algorithm.
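    The following sketch illustrates the underlying idea of fitting relative coordinates to an incomplete, noisy distance matrix. It minimizes a weighted stress with plain gradient descent (missing pairs get zero weight) instead of the paper's weighted-majorization update, and the geometry and noise levels are invented for the example.

```python
# Sketch: recover relative 2D coordinates from an incomplete set of pairwise
# distances by minimizing a weighted stress function with gradient descent.
# (The paper uses a weighted-majorization update; plain gradient descent is
# used here only to keep the illustration short.)
import numpy as np

rng = np.random.default_rng(1)
true_pos = rng.random((6, 2)) * 10.0             # hypothetical asset layout (m)
d_true = np.linalg.norm(true_pos[:, None] - true_pos[None, :], axis=-1)
w = (rng.random(d_true.shape) > 0.3).astype(float)   # roughly 30% of pairs missing
w = np.triu(w, 1); w = w + w.T                        # symmetric weights, zero diagonal
d_meas = d_true + rng.normal(0, 0.1, d_true.shape)    # noisy measurements

x = rng.random((6, 2)) * 10.0                    # initial coordinate guess
lr = 0.005
for _ in range(5000):
    diff = x[:, None] - x[None, :]
    dist = np.linalg.norm(diff, axis=-1) + 1e-9
    # gradient of sum_ij w_ij * (dist_ij - d_ij)^2 with respect to x_i
    coef = 2.0 * w * (dist - d_meas) / dist
    grad = (coef[..., None] * diff).sum(axis=1)
    x -= lr * grad

resid = w * (np.linalg.norm(x[:, None] - x[None, :], axis=-1) - d_meas)
print("weighted RMS distance residual:", np.sqrt((resid ** 2).sum() / w.sum()))
```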

  8. Dynamically Predicting the Quality of Service: Batch, Online, and Hybrid Algorithms

    Directory of Open Access Journals (Sweden)

    Ya Chen

    2017-01-01

    This paper studies the problem of dynamically modeling the quality of web services and sets out the philosophy of designing practical web service recommender systems. A general system architecture for such systems continuously collects user-service invocation records and includes both an online training module and an offline training module for quality prediction. In addition, we introduce matrix factorization-based online and offline training algorithms based on gradient descent and demonstrate the fitness of this online/offline algorithm framework to the proposed architecture. The superiority of the proposed model is confirmed by empirical studies on a real-life quality of web service data set and comparisons with existing web service recommendation algorithms.
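    A minimal sketch of the matrix-factorization idea is shown below: the same stochastic-gradient update serves both as the offline (batch, multi-pass) and the online (per-record) training step. Dimensions, hyperparameters, and the synthetic invocation records are assumptions, not the paper's setup.

```python
# Sketch: matrix-factorization QoS prediction trained by stochastic gradient
# descent, in the spirit of the online/offline framework described above.
import numpy as np

rng = np.random.default_rng(2)
n_users, n_services, k = 50, 40, 5
# sparse set of observed (user, service, response_time) records
records = [(rng.integers(n_users), rng.integers(n_services), rng.gamma(2.0, 0.5))
           for _ in range(800)]

U = 0.1 * rng.standard_normal((n_users, k))      # user latent factors
S = 0.1 * rng.standard_normal((n_services, k))   # service latent factors
lr, reg = 0.01, 0.05

def sgd_update(u, s, q):
    """One update for a single invocation record (u, s, q)."""
    pu, ps = U[u].copy(), S[s].copy()
    err = q - pu @ ps
    U[u] += lr * (err * ps - reg * pu)
    S[s] += lr * (err * pu - reg * ps)

# "offline" training: several passes over the batch of historical records
for epoch in range(20):
    rng.shuffle(records)
    for u, s, q in records:
        sgd_update(u, s, q)

# "online" training: apply the same update as each new record arrives
sgd_update(3, 7, 1.2)
print("predicted QoS for (user 3, service 7):", U[3] @ S[7])
```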

  9. Review of Methods and Algorithms for Dynamic Management of CBRNE Collection Assets

    Science.gov (United States)

    2013-07-01

    Excerpt (only fragments of the report's text, acronym list, and references are present): SAA for UAVs is not a mature technology in the DoD; AFRL's SAA architecture contains an algorithm called MuSICA (Multi-Sensor Integrated Conflict Avoidance). Acronyms: … moving target indicator; MuSICA, Multi-Sensor Integrated Conflict Avoidance; MVOI, multivariate optimal interpolation; MVU, maximum variance unfolding. References cited include Li (2000) and Graham, S., and J. Kay, 2012, "Multi-Sensor Integrated Conflict Avoidance (MuSICA)," International Test and Evaluation Association.

  10. Cone Algorithm of Spinning Vehicles under Dynamic Coning Environment

    Directory of Open Access Journals (Sweden)

    Shuang-biao Zhang

    2015-01-01

    Because the attitude error of spinning vehicles tends to diverge under a worsening coning environment, a model of the dynamic coning environment is first derived. By investigating its effect on the Euler attitude algorithm, which is equivalent to the traditional attitude algorithm, it is found that the attitude error is essentially the roll angle error, comprising a drifting error and an oscillating error; this error is induced directly by the dynamic coning environment and propagates to the pitch and yaw angles. Based on the definitions of the cone frame and cone attitude, a cone algorithm is proposed that uses rotation relationships to calculate the cone attitude, and the relationship between the cone attitude and the Euler attitude of a spinning vehicle is established. Numerical simulations under different dynamic coning conditions show that the induced Euler attitude error fluctuates with the variation of precession and especially nutation, and that the oscillating frequency of the roll angle error is twice that of the pitch and yaw angle errors. In addition, the rotation angle describes the spinning process of vehicles under a coning environment better than the Euler angle gamma, and the real pitch and yaw angles are finally calculated.

  11. Scaling symmetry, renormalization, and time series modeling: the case of financial assets dynamics.

    Science.gov (United States)

    Zamparo, Marco; Baldovin, Fulvio; Caraglio, Michele; Stella, Attilio L

    2013-12-01

    We present and discuss a stochastic model of financial assets dynamics based on the idea of an inverse renormalization group strategy. With this strategy we construct the multivariate distributions of elementary returns based on the scaling with time of the probability density of their aggregates. In its simplest version the model is the product of an endogenous autoregressive component and a random rescaling factor designed to embody also exogenous influences. Mathematical properties like increments' stationarity and ergodicity can be proven. Thanks to the relatively low number of parameters, model calibration can be conveniently based on a method of moments, as exemplified in the case of historical data of the S&P500 index. The calibrated model accounts very well for many stylized facts, like volatility clustering, power-law decay of the volatility autocorrelation function, and multiscaling with time of the aggregated return distribution. In agreement with empirical evidence in finance, the dynamics is not invariant under time reversal, and, with suitable generalizations, skewness of the return distribution and leverage effects can be included. The analytical tractability of the model opens interesting perspectives for applications, for instance, in terms of obtaining closed formulas for derivative pricing. Further important features are the possibility of making contact, in certain limits, with autoregressive models widely used in finance and the possibility of partially resolving the long- and short-memory components of the volatility, with consistent results when applied to historical series.

  12. Scaling symmetry, renormalization, and time series modeling: The case of financial assets dynamics

    Science.gov (United States)

    Zamparo, Marco; Baldovin, Fulvio; Caraglio, Michele; Stella, Attilio L.

    2013-12-01

    We present and discuss a stochastic model of financial assets dynamics based on the idea of an inverse renormalization group strategy. With this strategy we construct the multivariate distributions of elementary returns based on the scaling with time of the probability density of their aggregates. In its simplest version the model is the product of an endogenous autoregressive component and a random rescaling factor designed to embody also exogenous influences. Mathematical properties like increments’ stationarity and ergodicity can be proven. Thanks to the relatively low number of parameters, model calibration can be conveniently based on a method of moments, as exemplified in the case of historical data of the S&P500 index. The calibrated model accounts very well for many stylized facts, like volatility clustering, power-law decay of the volatility autocorrelation function, and multiscaling with time of the aggregated return distribution. In agreement with empirical evidence in finance, the dynamics is not invariant under time reversal, and, with suitable generalizations, skewness of the return distribution and leverage effects can be included. The analytical tractability of the model opens interesting perspectives for applications, for instance, in terms of obtaining closed formulas for derivative pricing. Further important features are the possibility of making contact, in certain limits, with autoregressive models widely used in finance and the possibility of partially resolving the long- and short-memory components of the volatility, with consistent results when applied to historical series.

  13. Dynamic programming algorithms for biological sequence comparison.

    Science.gov (United States)

    Pearson, W R; Miller, W

    1992-01-01

    Efficient dynamic programming algorithms are available for a broad class of protein and DNA sequence comparison problems. These algorithms require computer time proportional to the product of the lengths of the two sequences being compared [O(N²)] but require memory space proportional only to the sum of these lengths [O(N)]. Although the requirement for O(N²) time limits use of the algorithms to the largest computers when searching protein and DNA sequence databases, many other applications of these algorithms, such as calculation of distances for evolutionary trees and comparison of a new sequence to a library of sequence profiles, are well within the capabilities of desktop computers. In particular, the results of library searches with rapid searching programs, such as FASTA or BLAST, should be confirmed by performing a rigorous optimal alignment. Whereas rapid methods do not overlook significant sequence similarities, FASTA limits the number of gaps that can be inserted into an alignment, so that a rigorous alignment may extend the alignment substantially in some cases. BLAST does not allow gaps in the local regions that it reports; a calculation that allows gaps is very likely to extend the alignment substantially. Although a Monte Carlo evaluation of the statistical significance of a similarity score with a rigorous algorithm is much slower than the heuristic approach used by the RDF2 program, the dynamic programming approach should take less than 1 hr on a 386-based PC or desktop Unix workstation. For descriptive purposes, we have limited our discussion to methods for calculating similarity scores and distances that use gap penalties of the form g = rk. Nevertheless, programs for the more general case (g = q+rk) are readily available. Versions of these programs that run either on Unix workstations, IBM-PC class computers, or the Macintosh can be obtained from either of the authors.
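    For illustration, the following sketch computes a global alignment score by dynamic programming with a linear gap penalty g = rk while keeping only two rows in memory, matching the O(N²) time / O(N) space behaviour described above; the scoring values are arbitrary.

```python
# Sketch: global alignment score by dynamic programming with a linear gap
# penalty g = r*k, using O(N) memory (two rows). Scoring values are
# illustrative, not those used by FASTA or BLAST.
def align_score(a, b, match=1, mismatch=-1, r=-2):
    prev = [j * r for j in range(len(b) + 1)]   # DP row for the empty prefix of a
    for i in range(1, len(a) + 1):
        curr = [i * r] + [0] * len(b)
        for j in range(1, len(b) + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            curr[j] = max(prev[j - 1] + sub,    # align a[i-1] with b[j-1]
                          prev[j] + r,          # gap in b
                          curr[j - 1] + r)      # gap in a
        prev = curr
    return prev[-1]

print(align_score("GATTACA", "GCATGCU"))
```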

  14. Asset Return Dynamics and Learning

    OpenAIRE

    William A. Branch; George W. Evans

    2010-01-01

    This article advocates a theory of expectation formation that incorporates many of the central motivations of behavioral finance theory while retaining much of the discipline of the rational expectations approach. We provide a framework in which agents, in an asset pricing model, underparameterize their forecasting model in a spirit similar to Hong, Stein, and Yu (2007) and Barberis, Shleifer, and Vishny (1998), except that the parameters of the forecasting model and the choice of predictor a...

  15. Ising model of financial markets with many assets

    Science.gov (United States)

    Eckrot, A.; Jurczyk, J.; Morgenstern, I.

    2016-11-01

    Many models of financial markets exist, but most of them simulate single asset markets. We study a multi asset Ising model of a financial market. Each agent has two possible actions (buy/sell) for every asset. The agents dynamically adjust their coupling coefficients according to past market returns and external news. This leads to fat tails and volatility clustering independent of the number of assets. We find that a separation of news into different channels leads to sector structures in the cross correlations, similar to those found in real markets.
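    The sketch below is a toy version of such a multi-asset spin market: each agent carries a buy/sell spin per asset, spins are resampled from a local field built from the average opinion plus a per-asset news term, and couplings are nudged by recent absolute returns. All functional forms and constants are illustrative assumptions, not the paper's specification.

```python
# Toy multi-asset Ising-style market: each agent holds a buy/sell spin per
# asset, the return of each asset is proportional to its net demand, and the
# herding coupling is adjusted by recent absolute returns.
import numpy as np

rng = np.random.default_rng(3)
n_agents, n_assets, steps, beta = 200, 3, 2000, 1.0
spins = rng.choice([-1, 1], size=(n_agents, n_assets))
coupling = np.full(n_assets, 0.5)          # per-asset herding strength
returns = np.zeros((steps, n_assets))

for t in range(steps):
    news = rng.normal(0, 1, n_assets)      # one news channel per asset
    field = coupling * spins.mean(axis=0) + 0.3 * news
    prob_up = 1.0 / (1.0 + np.exp(-2.0 * beta * field))
    spins = np.where(rng.random((n_agents, n_assets)) < prob_up, 1, -1)
    returns[t] = spins.mean(axis=0) * 0.02  # return proportional to net demand
    # strengthen couplings after large absolute returns, weaken otherwise
    coupling = np.clip(coupling + 0.05 * (np.abs(returns[t]) / 0.02 - 0.5), 0.0, 1.5)

print("excess kurtosis per asset:",
      ((returns - returns.mean(0)) ** 4).mean(0) / returns.var(0) ** 2 - 3.0)
```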

  16. Dynamical Consensus Algorithm for Second-Order Multi-Agent Systems Subjected to Communication Delay

    International Nuclear Information System (INIS)

    Liu Chenglin; Liu Fei

    2013-01-01

    To solve the dynamical consensus problem of second-order multi-agent systems with communication delay, delay-dependent compensations are added into the normal asynchronously-coupled consensus algorithm so as to make the agents achieve a dynamical consensus. Based on frequency-domain analysis, sufficient conditions are gained for second-order multi-agent systems with communication delay under leaderless and leader-following consensus algorithms respectively. Simulation illustrates the correctness of the results. (interdisciplinary physics and related areas of science and technology)

  17. A pathway to a more sustainable water sector: sustainability-based asset management.

    Science.gov (United States)

    Marlow, D R; Beale, D J; Burn, S

    2010-01-01

    The water sectors of many countries are faced with the need to address simultaneously two overarching challenges; the need to undertake effective asset management coupled with the broader need to evolve business processes so as to embrace sustainability principles. Research has thus been undertaken into the role sustainability principles play in asset management. As part of this research, a series of 25 in-depth interviews were undertaken with water sector professionals from around Australia. Drawing on the results of these interviews, this paper outlines the conceptual relationship between asset management and sustainability along with a synthesis of the relevant opinions voiced in the interviews. The interviews indicated that the participating water authorities have made a strong commitment to sustainability, but there is a need to facilitate change processes to embed sustainability principles into business as usual practices. Interviewees also noted that asset management and sustainability are interlinked from a number of perspectives, especially in the way decision making is undertaken with respect to assets and service provision. The interviews also provided insights into the research needed to develop a holistic sustainability-based asset management framework.

  18. Performance-based contracting for maintaining transportation assets with emphasis on bridges

    Directory of Open Access Journals (Sweden)

    Alsharqawi Mohammed

    2017-01-01

    With a large number of aging transportation infrastructure assets in North America and the growing problem of deterioration across the globe, managing these assets has been the subject of ongoing research. There is an overwhelming amount of maintenance and rehabilitation work to be done, and selecting a suitable maintenance, repair or replacement (MRR) strategy is one of the most challenging tasks for decision makers; limited budgets and resources make the decision-making process even more challenging. Maintaining infrastructure at the highest possible condition while investing the minimal amount of money has promoted innovative contracting approaches. Transportation agencies have increased private-sector involvement through long-term performance-based maintenance contracts, or Performance-Based Contracting (PBC). PBC is a type of contract that pays a contractor based on the results achieved, not on the methods used to perform the maintenance work. The literature shows that agencies are expanding the amount of contracting they do in order to maintain and achieve a better standard of infrastructure facilities. Therefore, the objective of this paper is to study and review performance-based contracting for transportation infrastructure, with emphasis on bridge assets.

  19. An improved energy conserving implicit time integration algorithm for nonlinear dynamic structural analysis

    International Nuclear Information System (INIS)

    Haug, E.; Rouvray, A.L. de; Nguyen, Q.S.

    1977-01-01

    This study proposes a general nonlinear algorithm stability criterion; it introduces a nonlinear algorithm, easily implemented in existing incremental/iterative codes, and it applies the new scheme beneficially to problems of linear elastic dynamic snap buckling. Based on the concept of energy conservation, the paper outlines an algorithm which degenerates into the trapezoidal rule, if applied to linear systems. The new algorithm conserves energy in systems having elastic potentials up to the fourth order in the displacements. This is true in the important case of nonlinear total Lagrange formulations where linear elastic material properties are substituted. The scheme is easily implemented in existing incremental-iterative codes with provisions for stiffness reformation and containing the basic Newmark scheme. Numerical analyses of dynamic stability can be dramatically sensitive to amplitude errors, because damping algorithms may mask, and overestimating schemes may numerically trigger, the physical instability. The newly proposed scheme has been applied with larger time steps and less cost to the dynamic snap buckling of simple one and multi degree-of-freedom structures for various initial conditions
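    As a point of reference for the linear case mentioned above, the sketch below applies the trapezoidal rule (Newmark with beta = 1/4, gamma = 1/2) to an undamped linear oscillator and reports the drift of the total mechanical energy; the parameters are arbitrary illustrative values, and the nonlinear scheme of the paper is not implemented.

```python
# Sketch: the trapezoidal rule (Newmark average acceleration, beta=1/4,
# gamma=1/2) applied to a linear undamped oscillator m*u'' + k*u = 0, to
# illustrate the energy-conserving behaviour the algorithm above reduces to
# in the linear case.
import numpy as np

m, k, dt, steps = 1.0, 4.0 * np.pi ** 2, 0.05, 400
beta, gamma = 0.25, 0.5
u, v = 1.0, 0.0                      # initial displacement and velocity
a = -(k / m) * u                     # initial acceleration from the equation of motion
energies = []

keff = k + m / (beta * dt ** 2)      # effective stiffness (no damping term)
for _ in range(steps):
    # effective load from the predictor terms of the Newmark scheme
    feff = m * (u / (beta * dt ** 2) + v / (beta * dt) + (1 / (2 * beta) - 1) * a)
    u_new = feff / keff
    a_new = (u_new - u) / (beta * dt ** 2) - v / (beta * dt) - (1 / (2 * beta) - 1) * a
    v_new = v + dt * ((1 - gamma) * a + gamma * a_new)
    u, v, a = u_new, v_new, a_new
    energies.append(0.5 * m * v ** 2 + 0.5 * k * u ** 2)

print("relative energy drift:", (max(energies) - min(energies)) / energies[0])
```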

  20. A New Fuzzy Harmony Search Algorithm Using Fuzzy Logic for Dynamic Parameter Adaptation

    Directory of Open Access Journals (Sweden)

    Cinthia Peraza

    2016-10-01

    In this paper, a new fuzzy harmony search algorithm (FHS) for solving optimization problems is presented. FHS is based on a recent method using fuzzy logic for dynamic adaptation of the harmony memory accepting (HMR) and pitch adjustment (PArate) parameters that improve the convergence rate of the traditional harmony search algorithm (HS). The objective of the method is to dynamically adjust the parameters in the range from 0.7 to 1. The impact of using fixed parameters in the harmony search algorithm is discussed and a strategy for efficiently tuning these parameters using fuzzy logic is presented. The FHS algorithm was successfully applied to different benchmarking optimization problems. The results of simulation and comparison studies demonstrate the effectiveness and efficiency of the proposed approach.
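    A simplified stand-in for the scheme is sketched below: a basic harmony search in which HMR and PArate are varied over the 0.7–1.0 range during the run. A linear schedule replaces the fuzzy inference system of the FHS, so this only illustrates the effect of dynamic parameters, not the paper's fuzzy rules.

```python
# Sketch of harmony search with dynamically adjusted memory-accepting (HMR)
# and pitch-adjustment (PArate) parameters, swept linearly within 0.7-1.0
# instead of being set by a fuzzy system.
import numpy as np

rng = np.random.default_rng(4)
dim, hms, iters, bw = 5, 20, 2000, 0.05
lo, hi = -5.0, 5.0
sphere = lambda x: float(np.sum(x ** 2))          # benchmark objective

memory = rng.uniform(lo, hi, (hms, dim))
fitness = np.array([sphere(x) for x in memory])

for t in range(iters):
    frac = t / iters
    hmr = 0.7 + 0.3 * frac                        # dynamic parameters in [0.7, 1.0]
    par = 1.0 - 0.3 * frac
    new = np.empty(dim)
    for d in range(dim):
        if rng.random() < hmr:                    # take value from harmony memory
            new[d] = memory[rng.integers(hms), d]
            if rng.random() < par:                # pitch adjustment
                new[d] += bw * rng.uniform(-1, 1)
        else:                                     # random re-initialisation
            new[d] = rng.uniform(lo, hi)
    new = np.clip(new, lo, hi)
    worst = int(np.argmax(fitness))
    if sphere(new) < fitness[worst]:              # replace the worst harmony
        memory[worst], fitness[worst] = new, sphere(new)

print("best value found:", fitness.min())
```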

  1. Computational plasticity algorithm for particle dynamics simulations

    Science.gov (United States)

    Krabbenhoft, K.; Lyamin, A. V.; Vignes, C.

    2018-01-01

    The problem of particle dynamics simulation is interpreted in the framework of computational plasticity leading to an algorithm which is mathematically indistinguishable from the common implicit scheme widely used in the finite element analysis of elastoplastic boundary value problems. This algorithm provides somewhat of a unification of two particle methods, the discrete element method and the contact dynamics method, which usually are thought of as being quite disparate. In particular, it is shown that the former appears as the special case where the time stepping is explicit while the use of implicit time stepping leads to the kind of schemes usually labelled contact dynamics methods. The framing of particle dynamics simulation within computational plasticity paves the way for new approaches similar (or identical) to those frequently employed in nonlinear finite element analysis. These include mixed implicit-explicit time stepping, dynamic relaxation and domain decomposition schemes.

  2. Seismic active control by a heuristic-based algorithm

    International Nuclear Information System (INIS)

    Tang, Yu.

    1996-01-01

    A heuristic-based algorithm for seismic active control is generalized to permit consideration of the effects of control-structure interaction and actuator dynamics. The control force is computed one time step ahead before being applied to the structure; therefore, the proposed control algorithm is free from the problem of time delay. A numerical example is presented to show the effectiveness of the proposed control algorithm. Also, two indices are introduced in the paper to assess the effectiveness and efficiency of control laws.

  3. End-of-life conversations and care: an asset-based model for community engagement.

    Science.gov (United States)

    Matthiesen, Mary; Froggatt, Katherine; Owen, Elaine; Ashton, John R

    2014-09-01

    Public awareness work regarding palliative and end-of-life care is increasingly promoted within national strategies for palliative care. Different approaches to undertaking this work are being used, often based upon broader educational principles, but little is known about how to undertake such initiatives in a way that equally engages both the health and social care sector and the local communities. An asset-based community engagement approach has been developed that facilitates community-led awareness initiatives concerning end-of-life conversations and care by identifying and connecting existing skills and expertise. (1) To describe the processes and features of an asset-based community engagement approach that facilitates community-led awareness initiatives with a focus on end-of-life conversations and care; and (2) to identify key community-identified priorities for sustainable community engagement processes. An asset-based model of community engagement specific to end-of-life issues using a four-step process is described (getting started, coming together, action planning and implementation). The use of this approach, in two regional community engagement programmes, based across rural and urban communities in the northwest of England, is described. The assets identified in the facilitated community engagement process encompassed people's talents and skills, community groups and networks, government and non-government agencies, physical and economic assets and community values and stories. Five priority areas were addressed to ensure active community engagement work: information, outreach, education, leadership and sustainability. A facilitated, asset-based approach of community engagement for end-of-life conversations and care can catalyse community-led awareness initiatives. This occurs through the involvement of community and local health and social care organisations as co-creators of this change across multiple sectors in a sustainable way. This approach

  4. Genetic Algorithms for Agent-Based Infrastructure Interdependency Modeling and Analysis

    Energy Technology Data Exchange (ETDEWEB)

    May Permann

    2007-03-01

    Today’s society relies greatly upon an array of complex national and international infrastructure networks such as transportation, electric power, telecommunication, and financial networks. This paper describes initial research combining agent-based infrastructure modeling software and genetic algorithms (GAs) to help optimize infrastructure protection and restoration decisions. This research proposes to apply GAs to the problem of infrastructure modeling and analysis in order to determine the optimum assets to restore or protect from attack or other disaster. This research is just commencing and therefore the focus of this paper is the integration of a GA optimization method with a simulation through the simulation’s agents.
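    As a toy version of the optimization step, the sketch below runs a small genetic algorithm that selects which assets to protect under a budget so that the protected value is maximized. The asset values, costs, and GA settings are hypothetical, and the simple fitness function stands in for the agent-based infrastructure simulation described above.

```python
# Sketch: a small genetic algorithm choosing which infrastructure assets to
# protect under a fixed budget so that the protected value is maximised.
import numpy as np

rng = np.random.default_rng(5)
n_assets, budget, pop_size, gens = 20, 50.0, 40, 200
value = rng.uniform(1, 10, n_assets)       # consequence avoided if protected
cost = rng.uniform(1, 10, n_assets)

def fitness(mask):
    chosen = mask.astype(bool)
    return value[chosen].sum() if cost[chosen].sum() <= budget else 0.0

pop = rng.integers(0, 2, (pop_size, n_assets))
for _ in range(gens):
    fit = np.array([fitness(ind) for ind in pop])
    # tournament selection
    idx = rng.integers(0, pop_size, (pop_size, 2))
    parents = pop[np.where(fit[idx[:, 0]] > fit[idx[:, 1]], idx[:, 0], idx[:, 1])]
    # single-point crossover followed by bit-flip mutation
    children = parents.copy()
    for i in range(0, pop_size - 1, 2):
        cut = rng.integers(1, n_assets)
        children[i, cut:], children[i + 1, cut:] = (parents[i + 1, cut:].copy(),
                                                    parents[i, cut:].copy())
    flip = rng.random(children.shape) < 0.02
    pop = np.where(flip, 1 - children, children)

best = max(pop, key=fitness)
print("protected value:", fitness(best), "cost:", cost[best.astype(bool)].sum())
```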

  5. Framework, process and tool for managing technology-based assets

    CSIR Research Space (South Africa)

    Kfir, R

    2000-10-01

    … and the intellectual property (IP) of the organisation. The study describes a framework linking the core processes supporting the management of technology-based assets and offerings with other organisational elements such as leadership, strategy, and culture. Specific…

  6. A molecular dynamics-based algorithm for evaluating the glycosaminoglycan mimicking potential of synthetic, homogenous, sulfated small molecules.

    Directory of Open Access Journals (Sweden)

    Balaji Nagarajan

    Glycosaminoglycans (GAGs) are key natural biopolymers that exhibit a range of biological functions including growth and differentiation. Despite this multiplicity of function, natural GAG sequences have not yielded drugs because of problems of heterogeneity and synthesis. Recently, several homogenous non-saccharide glycosaminoglycan mimetics (NSGMs) have been reported as agents displaying major therapeutic promise. Yet, it remains unclear whether sulfated NSGMs structurally mimic sulfated GAGs. To address this, we developed a three-step molecular dynamics (MD)-based algorithm to compare sulfated NSGMs with GAGs. In the first step of this algorithm, parameters related to the range of conformations sampled by the two highly sulfated molecules as free entities in water were compared. The second step compared identity of binding site geometries and the final step evaluated comparable dynamics and interactions in the protein-bound state. Using a test case of interactions with fibroblast growth factor-related proteins, we show that this three-step algorithm effectively predicts the GAG structure mimicking property of NSGMs. Specifically, we show that two unique dimeric NSGMs mimic hexameric GAG sequences in the protein-bound state. In contrast, closely related monomeric and trimeric NSGMs do not mimic GAG in either the free or bound states. These results correspond well with the functional properties of NSGMs. The results show for the first time that appropriately designed sulfated NSGMs can be good structural mimetics of GAGs and the incorporation of a MD-based strategy at the NSGM library screening stage can identify promising mimetics of targeted GAG sequences.

  7. Estimating Phenomenological Parameters in Multi-Assets Markets

    Science.gov (United States)

    Raffaelli, Giacomo; Marsili, Matteo

    Financial correlations exhibit a non-trivial dynamic behavior. This is reproduced by a simple phenomenological model of a multi-asset financial market, which takes into account the impact of portfolio investment on price dynamics. This captures the fact that correlations determine the optimal portfolio but are affected by investment based on it. Such a feedback on correlations gives rise to an instability when the volume of investment exceeds a critical value. Close to the critical point the model exhibits dynamical correlations very similar to those observed in real markets. We discuss how the model's parameter can be estimated in real market data with a maximum likelihood principle. This confirms the main conclusion that real markets operate close to a dynamically unstable point.

  8. Dynamic Harmony Search with Polynomial Mutation Algorithm for Valve-Point Economic Load Dispatch

    Directory of Open Access Journals (Sweden)

    M. Karthikeyan

    2015-01-01

    …mutation (DHSPM) algorithm to solve the ORPD problem. In the DHSPM algorithm the key parameters of the HS algorithm, namely the harmony memory considering rate (HMCR) and the pitch adjusting rate (PAR), are changed dynamically and there is no need to predefine these parameters. Additionally, polynomial mutation is inserted in the updating step of the HS algorithm to favor exploration and exploitation of the search space. The DHSPM algorithm is tested with three power system cases consisting of 3, 13, and 40 thermal units. The computational results show that the DHSPM algorithm is more effective in finding better solutions than other computational intelligence based methods.

  9. Saving-Based Asset Pricing

    DEFF Research Database (Denmark)

    Dreyer, Johannes Kabderian; Schneider, Johannes; T. Smith, William

    2013-01-01

    This paper explores the implications of a novel class of preferences for the behavior of asset prices. Following a suggestion by Marshall (1920), we entertain the possibility that people derive utility not only from consumption, but also from the very act of saving. These "saving-based" preferences…

  10. [Health promotion based on assets: how to work with this perspective in local interventions?]

    Science.gov (United States)

    Cofiño, Rafael; Aviñó, Dory; Benedé, Carmen Belén; Botello, Blanca; Cubillo, Jara; Morgan, Antony; Paredes-Carbonell, Joan Josep; Hernán, Mariano

    2016-11-01

    An asset-based approach could be useful to revitalise health promotion or community health interventions combining work with multiple partnerships, positive health, community engagement, equity and orientation of health determinants. We set some recommendations about how to incorporate the assets model in programmes, projects and interventions in health promotion. Some techniques are described for assets mapping and some experiences with this methodology being developed in different regions are systematised. We propose the term "Asset-based Health Promotion/Community Health" as an operational definition to work at the local level with a community engagement and participatory approach, building alliances between different institutions at the state-regional level and trying to create a framework for action with the generation of evaluations and evidence to work on population interventions from the perspective of positive health. Copyright © 2016 SESPAS. All rights reserved.

  11. Improving the asset pricing ability of the Consumption-Capital Asset Pricing Model?

    DEFF Research Database (Denmark)

    Rasmussen, Anne-Sofie Reng

    This paper compares the asset pricing ability of the traditional consumption-based capital asset pricing model to models from two strands of literature attempting to improve on the poor empirical results of the C-CAPM. One strand is based on the intertemporal asset pricing model of Campbell (1993) … able to price assets conditionally as suggested by Cochrane (1996) and Lettau and Ludvigson (2001b). The unconditional C-CAPM is rewritten as a scaled factor model using the approximate log consumption-wealth ratio cay, developed by Lettau and Ludvigson (2001a), as scaling variable. The models … and composite. Thus, there is no unambiguous solution to the pricing ability problems of the C-CAPM. Models from both the alternative literature strands are found to outperform the traditional C-CAPM on average pricing errors. However, when weighting pricing errors by the full variance-covariance matrix …

  12. Complex-based OCT angiography algorithm recovers microvascular information better than amplitude- or phase-based algorithms in phase-stable systems.

    Science.gov (United States)

    Xu, Jingjiang; Song, Shaozhen; Li, Yuandong; Wang, Ruikang K

    2017-12-19

    Optical coherence tomography angiography (OCTA) is increasingly becoming a popular inspection tool for biomedical imaging applications. By exploring the amplitude, phase and complex information available in OCT signals, numerous algorithms have been proposed that contrast functional vessel networks within microcirculatory tissue beds. However, it is not clear which algorithm delivers optimal imaging performance. Here, we investigate systematically how amplitude and phase information have an impact on the OCTA imaging performance, to establish the relationship of amplitude and phase stability with OCT signal-to-noise ratio (SNR), time interval and particle dynamics. With either repeated A-scan or repeated B-scan imaging protocols, the amplitude noise increases with the increase of OCT SNR; however, the phase noise does the opposite, i.e. it increases with the decrease of OCT SNR. Coupled with experimental measurements, we utilize a simple Monte Carlo (MC) model to simulate the performance of amplitude-, phase- and complex-based algorithms for OCTA imaging, the results of which suggest that complex-based algorithms deliver the best performance when the phase noise is … The complex-based algorithm delivers better performance than either the amplitude- or phase-based algorithms for both the repeated A-scan and the B-scan imaging protocols, which agrees well with the conclusion drawn from the MC simulations.

  13. Investments in fixed assets and depreciation of fixed assets: theoretical and practical aspects of study and analysis

    Directory of Open Access Journals (Sweden)

    Irina D. Demina

    2017-01-01

    It is indicated that the domestic economy is experiencing a shortage of investment, and the acceleration of the processes of import substitution is one of the most important challenges facing the domestic economy at present. Investments, especially capital investments and the related investment relations, constitute the basis for the development of the national economy and for improving the efficiency of social production as a whole. The formation of the amortization fund remains a topical problem at the moment; in the modern scientific and educational literature, the amortization fund means the fund whose resources are used for the complete restoration and repair of fixed assets. This paper analyses the situation in the area of investment in fixed capital that has developed in Russia over the past several years. The aim of this paper is to study the investment climate in the country based on the analysis of investments in fixed capital by source of financing and type of economic activity. The work is based on dynamic and structural analysis of analytical and statistical information on the processes occurring in this field. As a result, it can be noted that, in spite of a number of efforts being made, there are in general low growth rates in industry and a deficit of investment in fixed assets. Most investments in fixed assets are financed from the organizations' own funds, and a significant number of economic entities do not have the means necessary for technological renewal. Unfortunately, the regulatory framework in the field of accounting for fixed assets and accrual of depreciation does not provide for a special account for the accumulation, and, most importantly, for the purposeful control of the use, of the depreciation fund. First of all, this is necessary for companies with state participation and for monopoly organizations. The lack of control over the targeted use of the depreciation fund …

  14. Improvement of the methods for company’s fixed assets analysis

    Directory of Open Access Journals (Sweden)

    T. A. Zhurkina

    2018-01-01

    Fixed assets are an integral component of the productive capacity of any enterprise, and the financial results of the enterprise largely depend on the intensity and efficiency of their use. The analysis of fixed assets is usually carried out using an integrated and systematic approach, based on their availability, their movement, and the efficiency of their use (including their active part). In the opinion of some authors, the traditional methods of analyzing fixed assets have a number of shortcomings, since they do not take into account the life cycle of an enterprise, the ecological aspects of the operation of fixed assets, or the operational specifics of the individual divisions of a company and its branches. In order to improve the methodology for analyzing fixed assets, the authors propose using formalized and non-formalized criteria for analyzing the risks associated with fixed asset use. A survey questionnaire was designed to determine the likelihood of the risk of economic losses associated with the use of fixed assets, and the authors propose using an integral indicator for analyzing this risk in dynamics. In order to improve the audit procedure, the authors propose segregating economic transactions with fixed assets according to their cycles, in accordance with the stage of their reproduction. Operational analysis is important for managing the efficiency of fixed asset use, especially during critical periods. Analysis of the regularity of grain combine performance would reduce losses during harvesting, allow the work to be implemented within a strictly defined time frame, and remunerate employees for high-quality and intensive performance of their tasks.

  15. Q-learning-based adjustable fixed-phase quantum Grover search algorithm

    International Nuclear Information System (INIS)

    Guo Ying; Shi Wensha; Wang Yijun; Hu, Jiankun

    2017-01-01

    We demonstrate that the rotation phase can be suitably chosen to increase the efficiency of the phase-based quantum search algorithm, leading to a dynamic balance between iterations and success probabilities of the fixed-phase quantum Grover search algorithm with Q-learning for a given number of solutions. In this search algorithm, the proposed Q-learning algorithm, which is in essence a model-free reinforcement learning strategy, is used to perform a matching algorithm based on the fraction of marked items λ and the rotation phase α. After establishing the policy function α = π(λ), we complete the fixed-phase Grover algorithm, where the phase parameter is selected via the learned policy. Simulation results show that the Q-learning-based Grover search algorithm (QLGA) requires fewer iterations and yields higher success probabilities. Compared with conventional Grover algorithms, it avoids locally optimal situations, thereby enabling success probabilities to approach one. (author)

  16. A Thrust Allocation Method for Efficient Dynamic Positioning of a Semisubmersible Drilling Rig Based on the Hybrid Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Luman Zhao

    2015-01-01

    A thrust allocation method was proposed based on a hybrid optimization algorithm to efficiently and dynamically position a semisubmersible drilling rig. That is, the thrust allocation was optimized to produce the generalized forces and moment required while at the same time minimizing the total power consumption, under the premise that forbidden zones should be taken into account. An optimization problem was mathematically formulated to provide the optimal thrust allocation by introducing the corresponding design variables, objective function, and constraints. A hybrid optimization algorithm consisting of a genetic algorithm and a sequential quadratic programming (SQP) algorithm was selected and used to solve this problem. The proposed method was evaluated by applying it to a thrust allocation problem for a semisubmersible drilling rig. The results indicate that the proposed method can be used as part of a cost-effective strategy for thrust allocation of the rig.
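    A much-reduced sketch of the allocation problem is given below: total power is minimized subject to matching the demanded surge force, sway force, and yaw moment, using SciPy's SLSQP started from several random points (a crude stand-in for the genetic-algorithm stage). The thruster layout, limits, power model, and demands are assumptions, and forbidden zones are omitted.

```python
# Sketch: thrust allocation as constrained minimisation of total power for
# three azimuth thrusters, solved with SLSQP from random multistarts.
import numpy as np
from scipy.optimize import minimize

# thruster positions (x, y) in metres relative to the vessel origin (assumed)
pos = np.array([[30.0, 0.0], [-30.0, 8.0], [-30.0, -8.0]])
t_max = 300.0                                   # thrust limit per thruster (kN)
tau_d = np.array([200.0, 50.0, 1500.0])         # demanded surge, sway, yaw moment

def unpack(z):                                  # z = [T1, a1, T2, a2, T3, a3]
    return z[0::2], z[1::2]

def power(z):                                   # power model ~ sum T^(3/2)
    T, _ = unpack(z)
    return np.sum(np.abs(T) ** 1.5)

def force_balance(z):                           # generalized forces must match demand
    T, a = unpack(z)
    fx, fy = T * np.cos(a), T * np.sin(a)
    mz = pos[:, 0] * fy - pos[:, 1] * fx
    return np.array([fx.sum(), fy.sum(), mz.sum()]) - tau_d

rng = np.random.default_rng(6)
best = None
for _ in range(10):                             # random multistart ("GA" stage stand-in)
    z0 = np.column_stack([rng.uniform(0, t_max, 3),
                          rng.uniform(-np.pi, np.pi, 3)]).ravel()
    res = minimize(power, z0, method="SLSQP",
                   bounds=[(0, t_max), (-np.pi, np.pi)] * 3,
                   constraints={"type": "eq", "fun": force_balance})
    if res.success and (best is None or res.fun < best.fun):
        best = res

if best is None:
    raise RuntimeError("no feasible allocation found from the random starts")
T, a = unpack(best.x)
print("thrusts (kN):", np.round(T, 1), "angles (deg):", np.round(np.degrees(a), 1))
```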

  17. Next Generation Suspension Dynamics Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Schunk, Peter Randall [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Higdon, Jonathon [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Chen, Steven [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-12-01

    This research project has the objective to extend the range of application, improve the efficiency and conduct simulations with the Fast Lubrication Dynamics (FLD) algorithm for concentrated particle suspensions in a Newtonian fluid solvent. The research involves a combination of mathematical development, new computational algorithms, and application to processing flows of relevance in materials processing. The mathematical developments clarify the underlying theory, facilitate verification against classic monographs in the field and provide the framework for a novel parallel implementation optimized for an OpenMP shared memory environment. The project considered application to consolidation flows of major interest in high throughput materials processing and identified hitherto unforeseen challenges in the use of FLD in these applications. Extensions to the algorithm have been developed to improve its accuracy in these applications.

  18. Learning Agents for Autonomous Space Asset Management (LAASAM)

    Science.gov (United States)

    Scally, L.; Bonato, M.; Crowder, J.

    2011-09-01

    Current and future space systems will continue to grow in complexity and capabilities, creating a formidable challenge to monitor, maintain, and utilize these systems and manage their growing network of space and related ground-based assets. Integrated System Health Management (ISHM), and in particular, Condition-Based System Health Management (CBHM), is the ability to manage and maintain a system using dynamic real-time data to prioritize, optimize, maintain, and allocate resources. CBHM entails the maintenance of systems and equipment based on an assessment of current and projected conditions (situational and health related conditions). A complete, modern CBHM system comprises a number of functional capabilities: sensing and data acquisition; signal processing; conditioning and health assessment; diagnostics and prognostics; and decision reasoning. In addition, an intelligent Human System Interface (HSI) is required to provide the user/analyst with relevant context-sensitive information, the system condition, and its effect on overall situational awareness of space (and related) assets. Colorado Engineering, Inc. (CEI) and Raytheon are investigating and designing an Intelligent Information Agent Architecture that will provide a complete range of CBHM and HSI functionality from data collection through recommendations for specific actions. The research leverages CEI’s expertise with provisioning management network architectures and Raytheon’s extensive experience with learning agents to define a system to autonomously manage a complex network of current and future space-based assets to optimize their utilization.

  19. INNOVATION IN ACCOUNTING BIOLOGIC ASSETS

    OpenAIRE

    Stolуarova M. A.; Shcherbina I. D.

    2016-01-01

    The article describes the innovations in the classification and measurement of biological assets according to IFRS (IAS) 41 "Agriculture". The difficulties faced by agricultural producers in applying the standard are set out in the article. The classification is based on the adopted amendments, according to which fruit-bearing plants, previously accounted for as biological assets measured at fair value, are now included in the category of fixed assets. The structure of biological assets and main means has been…

  20. The Sociological Imagination and Community-Based Learning: Using an Asset-Based Approach

    Science.gov (United States)

    Garoutte, Lisa

    2018-01-01

    Fostering a sociological imagination in students is a central goal for most introductory sociology courses and sociology departments generally, yet success is difficult to achieve. This project suggests that using elements of asset-based community development can be used in sociology classrooms to develop a sociological perspective. After…

  1. Parameter identification of PEMFC model based on hybrid adaptive differential evolution algorithm

    International Nuclear Information System (INIS)

    Sun, Zhe; Wang, Ning; Bi, Yunrui; Srinivasan, Dipti

    2015-01-01

    In this paper, a HADE (hybrid adaptive differential evolution) algorithm is proposed for the identification problem of PEMFC (proton exchange membrane fuel cell). Inspired by biological genetic strategy, a novel adaptive scaling factor and a dynamic crossover probability are presented to improve the adaptive and dynamic performance of differential evolution algorithm. Moreover, two kinds of neighborhood search operations based on the bee colony foraging mechanism are introduced for enhancing local search efficiency. Through testing the benchmark functions, the proposed algorithm exhibits better performance in convergent accuracy and speed. Finally, the HADE algorithm is applied to identify the nonlinear parameters of PEMFC stack model. Through experimental comparison with other identified methods, the PEMFC model based on the HADE algorithm shows better performance. - Highlights: • We propose a hybrid adaptive differential evolution algorithm (HADE). • The search efficiency is enhanced in low and high dimension search space. • The effectiveness is confirmed by testing benchmark functions. • The identification of the PEMFC model is conducted by adopting HADE.
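    The sketch below shows a differential evolution loop with self-adaptive F and CR (jDE-style rules) on a benchmark function, as a simplified stand-in for HADE; the bee-colony-style neighborhood search of the paper is omitted and the PEMFC model is replaced by the Rosenbrock function.

```python
# Sketch of differential evolution with self-adaptive scale factor F and
# crossover rate CR (jDE-style adaptation) on a benchmark objective.
import numpy as np

rng = np.random.default_rng(7)
dim, npop, gens = 10, 30, 300
lo, hi = -5.0, 5.0
rosenbrock = lambda x: float(np.sum(100 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2))

pop = rng.uniform(lo, hi, (npop, dim))
fit = np.array([rosenbrock(x) for x in pop])
F = np.full(npop, 0.5)
CR = np.full(npop, 0.9)

for _ in range(gens):
    for i in range(npop):
        # self-adaptation of control parameters (jDE rules)
        Fi = rng.uniform(0.1, 1.0) if rng.random() < 0.1 else F[i]
        CRi = rng.random() if rng.random() < 0.1 else CR[i]
        a, b, c = rng.choice([j for j in range(npop) if j != i], 3, replace=False)
        mutant = np.clip(pop[a] + Fi * (pop[b] - pop[c]), lo, hi)
        cross = rng.random(dim) < CRi
        cross[rng.integers(dim)] = True          # ensure at least one gene crosses
        trial = np.where(cross, mutant, pop[i])
        f_trial = rosenbrock(trial)
        if f_trial <= fit[i]:                    # greedy selection, keep the new F/CR
            pop[i], fit[i], F[i], CR[i] = trial, f_trial, Fi, CRi

print("best objective value:", fit.min())
```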

  2. Coherency Identification of Generators Using a PAM Algorithm for Dynamic Reduction of Power Systems

    Directory of Open Access Journals (Sweden)

    Seung-Il Moon

    2012-11-01

    This paper presents a new coherency identification method for dynamic reduction of a power system. To achieve dynamic reduction, coherency-based equivalence techniques divide generators into groups according to coherency, and then aggregate them. In order to minimize the changes in the dynamic response of the reduced equivalent system, coherency identification of the generators should be clearly defined. The objective of the proposed coherency identification method is to determine the optimal coherent groups of generators with respect to the dynamic response, using the Partitioning Around Medoids (PAM) algorithm. For this purpose, the coherency between generators is first evaluated from the dynamic simulation time response, and in the proposed method this result is then used to define a dissimilarity index. Based on the PAM algorithm, the coherent generator groups are then determined so that the sum of the index in each group is minimized. This approach ensures that the dynamic characteristics of the original system are preserved, by providing the optimized coherency identification. To validate the effectiveness of the technique, simulated cases with an IEEE 39-bus test system are evaluated using PSS/E. The proposed method is compared with an existing coherency identification method, which uses the K-means algorithm, and is found to provide a better estimate of the original system.
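    The sketch below illustrates the clustering step on synthetic time responses: a dissimilarity matrix is built from RMS differences between simulated swing curves and generators are grouped by a simplified k-medoids iteration (an alternating assign/update loop rather than the full PAM build-and-swap). Data and settings are invented for the example.

```python
# Sketch: group generators by a simplified k-medoids iteration on a
# dissimilarity matrix built from synthetic rotor-angle responses.
import numpy as np

rng = np.random.default_rng(8)
n_gen, n_samples, k = 10, 200, 3
t = np.linspace(0, 5, n_samples)
# synthetic post-fault responses: three underlying coherent swing patterns + noise
base = np.vstack([np.sin(2 * np.pi * f * t) for f in (0.8, 1.1, 1.5)])
true_group = rng.integers(0, 3, n_gen)
responses = base[true_group] + 0.1 * rng.standard_normal((n_gen, n_samples))

# dissimilarity: RMS difference between time responses
diss = np.sqrt(((responses[:, None] - responses[None, :]) ** 2).mean(axis=-1))

medoids = rng.choice(n_gen, k, replace=False)
for _ in range(20):
    labels = np.argmin(diss[:, medoids], axis=1)            # assign to nearest medoid
    new_medoids = medoids.copy()
    for c in range(k):
        members = np.where(labels == c)[0]
        if members.size:
            within = diss[np.ix_(members, members)].sum(axis=1)
            new_medoids[c] = members[np.argmin(within)]      # minimise within-group sum
    if np.array_equal(new_medoids, medoids):
        break
    medoids = new_medoids

print("coherent groups:", [list(np.where(labels == c)[0]) for c in range(k)])
```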

  3. PACE: A dynamic programming algorithm for hardware/software partitioning

    DEFF Research Database (Denmark)

    Knudsen, Peter Voigt; Madsen, Jan

    1996-01-01

    This paper presents the PACE partitioning algorithm which is used in the LYCOS co-synthesis system for partitioning control/dataflow graphs into hardware and software parts. The algorithm is a dynamic programming algorithm which solves both the problem of minimizing system execution time...

  4. A Computational Fluid Dynamics Algorithm on a Massively Parallel Computer

    Science.gov (United States)

    Jespersen, Dennis C.; Levit, Creon

    1989-01-01

    The discipline of computational fluid dynamics is demanding ever-increasing computational power to deal with complex fluid flow problems. We investigate the performance of a finite-difference computational fluid dynamics algorithm on a massively parallel computer, the Connection Machine. Of special interest is an implicit time-stepping algorithm; to obtain maximum performance from the Connection Machine, it is necessary to use a nonstandard algorithm to solve the linear systems that arise in the implicit algorithm. We find that the Connection Machine can achieve very high computation rates on both explicit and implicit algorithms. The performance of the Connection Machine puts it in the same class as today's most powerful conventional supercomputers.

  5. A comparative study of three model-based algorithms for estimating state-of-charge of lithium-ion batteries under a new combined dynamic loading profile

    International Nuclear Information System (INIS)

    Yang, Fangfang; Xing, Yinjiao; Wang, Dong; Tsui, Kwok-Leung

    2016-01-01

    Highlights: • Three different model-based filtering algorithms for SOC estimation are compared. • A combined dynamic loading profile is proposed to evaluate the three algorithms. • Robustness against uncertainty of initial states of SOC estimators are investigated. • Battery capacity degradation is considered in SOC estimation. - Abstract: Accurate state-of-charge (SOC) estimation is critical for the safety and reliability of battery management systems in electric vehicles. Because SOC cannot be directly measured and SOC estimation is affected by many factors, such as ambient temperature, battery aging, and current rate, a robust SOC estimation approach is necessary to be developed so as to deal with time-varying and nonlinear battery systems. In this paper, three popular model-based filtering algorithms, including extended Kalman filter, unscented Kalman filter, and particle filter, are respectively used to estimate SOC and their performances regarding to tracking accuracy, computation time, robustness against uncertainty of initial values of SOC, and battery degradation, are compared. To evaluate the performances of these algorithms, a new combined dynamic loading profile composed of the dynamic stress test, the federal urban driving schedule and the US06 is proposed. The comparison results showed that the unscented Kalman filter is the most robust to different initial values of SOC, while the particle filter owns the fastest convergence ability when an initial guess of SOC is far from a true initial SOC.
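    As a minimal illustration of the model-based filtering idea, the sketch below runs a scalar extended Kalman filter on a coulomb-counting state equation with a linearised open-circuit-voltage measurement and a deliberately wrong initial SOC. The OCV curve, cell parameters, and noise levels are assumptions, and the RC dynamics, UKF, and particle filter of the comparison are not included.

```python
# Sketch: a scalar extended Kalman filter estimating state-of-charge from a
# coulomb-counting state equation and a linearised OCV measurement.
import numpy as np

rng = np.random.default_rng(9)
dt, capacity_As, r0 = 1.0, 2.3 * 3600.0, 0.01    # 2.3 Ah cell, 10 mOhm (assumed)
ocv = lambda soc: 3.2 + 0.8 * soc                # crude linear OCV(SOC) model
docv = 0.8                                       # dOCV/dSOC for the linear model

soc_true, soc_est, P = 0.9, 0.5, 0.1             # deliberately wrong initial estimate
Q, R = 1e-7, 1e-3                                # process / measurement noise variances

for step in range(1800):                         # half an hour of 1C discharge
    current = 2.3                                # constant discharge current (A)
    soc_true -= current * dt / capacity_As
    v_meas = ocv(soc_true) - r0 * current + rng.normal(0, np.sqrt(R))

    # EKF predict (state transition is linear in SOC, jacobian = 1)
    soc_pred = soc_est - current * dt / capacity_As
    P = P + Q
    # EKF update with the linearised measurement model
    H = docv
    K = P * H / (H * P * H + R)
    soc_est = soc_pred + K * (v_meas - (ocv(soc_pred) - r0 * current))
    P = (1 - K * H) * P

print(f"true SOC: {soc_true:.3f}, EKF estimate: {soc_est:.3f}")
```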

  6. A Synthetic Algorithm for Tracking a Moving Object in a Multiple-Dynamic Obstacles Environment Based on Kinematically Planar Redundant Manipulators

    Directory of Open Access Journals (Sweden)

    Hongzhe Jin

    2017-01-01

    This paper presents a synthetic algorithm for tracking a moving object in an environment with multiple dynamic obstacles, based on kinematically planar manipulators. By observing the motions of the object and obstacles, a spline filter associated with polynomial fitting is utilized to predict their moving paths for a period of time in the future. Several feasible paths for the manipulator in Cartesian space can be planned according to the predicted moving paths and the defined feasibility criterion. The shortest one among these feasible paths is selected as the optimized path. Then the real-time path along the optimized path is planned for the manipulator to track the moving object in real time. To improve the convergence rate of tracking, a virtual controller based on a PD controller is designed to adaptively adjust the real-time path. In the process of tracking, the null space of the inverse kinematics and the local rotation coordinate method (LRCM) are utilized for the arms and the end-effector to avoid obstacles, respectively. Finally, the moving object in the multiple-dynamic-obstacles environment is tracked via real-time updating of the joint angles of the manipulator according to the iterative method. Simulation results show that the proposed algorithm is feasible for tracking a moving object in such an environment.

  7. Dynamic traffic assignment : genetic algorithms approach

    Science.gov (United States)

    1997-01-01

    Real-time route guidance is a promising approach to alleviating congestion on the nation's highways. A dynamic traffic assignment model is central to the development of guidance strategies. The artificial intelligence technique of genetic algorithm...

  8. Local natural and cultural heritage assets and community based ...

    African Journals Online (AJOL)

    Community-based tourism (CBT) is seen as an opportunity, which mass tourism does not offer, for rural communities in particular to develop their natural and cultural assets into tourism activities for the benefit of the community. The point of CBT is that the community, collectively and individually, gains a livelihood from ...

  9. Asset Pricing in Markets with Illiquid Assets

    OpenAIRE

    Longstaff, Francis A

    2005-01-01

    Many important classes of assets are illiquid in the sense that they cannot always be traded immediately. Thus, a portfolio position in these types of illiquid investments becomes at least temporarily irreversible. We study the asset-pricing implications of illiquidity in a two-asset exchange economy with heterogeneous agents. In this market, one asset is always liquid. The other asset can be traded initially, but then not again until after a "blackout" period. Illiquidity has a dramatic effect…

  10. An improved genetic algorithm with dynamic topology

    International Nuclear Information System (INIS)

    Cai Kai-Quan; Tang Yan-Wu; Zhang Xue-Jun; Guan Xiang-Min

    2016-01-01

    The genetic algorithm (GA) is a nature-inspired evolutionary algorithm that finds optima in the search space via the interaction of individuals. Recently, researchers demonstrated that the interaction topology plays an important role in information exchange among individuals of an evolutionary algorithm. In this paper, we investigate the effect of different network topologies adopted to represent the interaction structures. It is found that a GA with a high-density topology is more likely to end up with an unsatisfactory solution, whereas a low-density topology can impede convergence. Consequently, we propose an improved GA with dynamic topology, named DT-GA, in which the topology structure varies dynamically along with the fitness evolution. Several experiments executed with 15 well-known test functions have illustrated that DT-GA outperforms the other tested GAs by striking a balance between convergence speed and optimum quality. Our work may have implications for the combination of complex networks and computational intelligence. (paper)

  11. An Adaptive Sweep-Circle Spatial Clustering Algorithm Based on Gestalt

    Directory of Open Access Journals (Sweden)

    Qingming Zhan

    2017-08-01

    An adaptive spatial clustering (ASC) algorithm is proposed in this present study, which employs sweep-circle techniques and a dynamic threshold setting based on the Gestalt theory to detect spatial clusters. The proposed algorithm can automatically discover clusters in one pass, rather than through the modification of the initial model (for example, a minimal spanning tree, Delaunay triangulation, or Voronoi diagram). It can quickly identify arbitrarily-shaped clusters while adapting efficiently to non-homogeneous density characteristics of spatial data, without the need for prior knowledge or parameters. The proposed algorithm is also ideal for use in data streaming technology with dynamic characteristics flowing in the form of spatial clustering in large data sets.

  12. Digital asset ecosystems rethinking crowds and cloud

    CERN Document Server

    Blanke, Tobias

    2014-01-01

    Digital asset management is undergoing a fundamental transformation. Near universal availability of high-quality web-based assets makes it important to pay attention to the new world of digital ecosystems and what it means for managing, using and publishing digital assets. The Ecosystem of Digital Assets reflects on these developments and what the emerging 'web of things' could mean for digital assets. The book is structured into three parts, each covering an important aspect of digital assets. Part one introduces the emerging ecosystems of digital assets. Part two examines digital asset manag

  13. Optimized Bayesian dynamic advising theory and algorithms

    CERN Document Server

    Karny, Miroslav

    2006-01-01

    Written by one of the world's leading groups in the area of Bayesian identification, control, and decision making, this book provides the theoretical and algorithmic basis of optimized probabilistic advising. Starting from abstract ideas and formulations, and culminating in detailed algorithms, the book comprises a unified treatment of an important problem of the design of advisory systems supporting supervisors of complex processes. It introduces the theoretical and algorithmic basis of developed advising, relying on novel and powerful combination black-box modelling by dynamic mixture models

  14. Historical development of derivatives’ underlying assets

    Directory of Open Access Journals (Sweden)

    Sylvie Riederová

    2011-01-01

    Derivative transactions are able to eliminate the unexpected risk arising from the price volatility of an asset; this need for risk elimination underlies the application of derivatives. This paper is focused on the underlying assets of derivatives themselves. With a plain description, supported by progressive summarization, the authors analysed the relevant theoretical sources and dealt with derivatives, their underlying assets and their development over the centuries. Starting in ancient history, around 2000 BC, the first non-standard transactions, very close to today's understanding of derivatives, began to be concluded between counterparties. Over time, in different kingdoms and empires, derivatives started to play a significant role in daily life, helping to reduce the uncertainty of the future. But the real golden era for derivatives started with the so-called 'new derivative markets' and computer-supported trading. They have extended their form from simple tools to the most complex structures, without changing their main purpose: hedging and risk reduction. For the main purpose of this paper it is impossible to separate the development of derivatives from the very wide extension of underlying assets. The change of these assets was one of the main drivers of derivatives development. Understanding the dynamic character of these assets helps to understand the world of derivatives.

  15. A multilevel-skin neighbor list algorithm for molecular dynamics simulation

    Science.gov (United States)

    Zhang, Chenglong; Zhao, Mingcan; Hou, Chaofeng; Ge, Wei

    2018-01-01

    Searching for interaction pairs and organizing the interaction processes are important steps in molecular dynamics (MD) algorithms and are critical to the overall efficiency of the simulation. Neighbor lists are widely used for these steps, where a thicker skin can reduce the frequency of list updating but is offset by more computation in distance checks for the particle pairs. In this paper, we propose a new neighbor-list-based algorithm with a precisely designed multilevel skin which can reduce unnecessary computation on inter-particle distances. The performance advantages over traditional methods are then analyzed against the main simulation parameters on Intel CPUs and MICs (many integrated cores), and are clearly demonstrated. The algorithm can be generalized for various discrete simulations using neighbor lists.
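
    As a point of reference for the skin idea discussed above, the sketch below builds a conventional single-skin Verlet neighbour list in Python; the multilevel-skin refinement of the paper is not reproduced, and the cutoff, skin width, and rebuild criterion are illustrative assumptions.

```python
import numpy as np

def build_neighbor_list(pos, cutoff, skin):
    """List pairs within cutoff + skin; the skin lets the list be reused for several steps."""
    r_list = cutoff + skin
    n = len(pos)
    pairs = []
    for i in range(n - 1):
        d = np.linalg.norm(pos[i + 1:] - pos[i], axis=1)
        for j in np.flatnonzero(d < r_list):
            pairs.append((i, i + 1 + j))
    return pairs

def needs_rebuild(pos, pos_at_build, skin):
    """Common criterion: rebuild once any particle has moved more than half the skin."""
    disp = np.linalg.norm(pos - pos_at_build, axis=1)
    return disp.max() > 0.5 * skin

def interacting_pairs(pos, pairs, cutoff):
    """Distance check against the true cutoff; a thicker skin means more wasted checks here."""
    return [(i, j) for i, j in pairs if np.linalg.norm(pos[i] - pos[j]) < cutoff]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pos = rng.uniform(0, 10, size=(200, 3))
    pairs = build_neighbor_list(pos, cutoff=1.5, skin=0.4)
    print(len(pairs), "listed pairs,", len(interacting_pairs(pos, pairs, 1.5)), "within cutoff")
```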

  16. DTFP-Growth: Dynamic Threshold-Based FP-Growth Rule Mining Algorithm Through Integrating Gene Expression, Methylation, and Protein-Protein Interaction Profiles.

    Science.gov (United States)

    Mallik, Saurav; Bhadra, Tapas; Mukherji, Ayan; Mallik, Saurav; Bhadra, Tapas; Mukherji, Ayan; Mallik, Saurav; Bhadra, Tapas; Mukherji, Ayan

    2018-04-01

    Association rule mining is an important technique for identifying interesting relationships between gene pairs in a biological data set. Earlier methods basically work on a single biological data set, and in most cases a single minimum support cutoff is applied globally, i.e., across all genesets/itemsets. To overcome this limitation, in this paper we propose a dynamic threshold-based FP-growth rule mining algorithm that integrates gene expression, methylation and protein-protein interaction profiles based on weighted shortest distance to find novel associations among different pairs of genes in multi-view data sets. For this purpose, we introduce three new thresholds, namely, Distance-based Variable/Dynamic Supports (DVS), Distance-based Variable Confidences (DVC), and Distance-based Variable Lifts (DVL), for each rule by integrating co-expression, co-methylation, and protein-protein interactions existing in the multi-omics data set. We develop the proposed algorithm utilizing these three novel multiple threshold measures. In the proposed algorithm, the values of DVS, DVC, and DVL are computed for each rule separately, and subsequently it is verified whether the support, confidence, and lift of each evolved rule are greater than or equal to the corresponding individual DVS, DVC, and DVL values, respectively, or not. If all three conditions for a rule are found to be true, the rule is treated as a resultant rule. One of the major advantages of the proposed method compared with other related state-of-the-art methods is that it considers both the quantitative and interactive significance among all pairwise genes belonging to each rule. Moreover, the proposed method generates fewer rules, takes less running time, and provides greater biological significance for the resultant top-ranking rules compared to previous methods.
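
    To make the per-rule threshold idea concrete, the fragment below filters candidate rules against rule-specific support, confidence, and lift cutoffs; the rule representation and the DVS/DVC/DVL values are placeholders, not the paper's distance-based formulas, and the gene names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    antecedent: frozenset
    consequent: frozenset
    support: float
    confidence: float
    lift: float
    dvs: float   # Distance-based Variable Support for this rule (assumed precomputed)
    dvc: float   # Distance-based Variable Confidence
    dvl: float   # Distance-based Variable Lift

def keep_rule(r: Rule) -> bool:
    """A rule survives only if it clears all three of its own dynamic thresholds."""
    return r.support >= r.dvs and r.confidence >= r.dvc and r.lift >= r.dvl

rules = [
    Rule(frozenset({"GENE_A"}), frozenset({"GENE_B"}), 0.30, 0.80, 1.6, 0.25, 0.70, 1.2),
    Rule(frozenset({"GENE_C"}), frozenset({"GENE_D"}), 0.10, 0.55, 1.1, 0.20, 0.60, 1.3),
]
print([r for r in rules if keep_rule(r)])
```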

  17. Behavioural modelling using the MOESP algorithm, dynamic neural networks and the Bartels-Stewart algorithm

    NARCIS (Netherlands)

    Schilders, W.H.A.; Meijer, P.B.L.; Ciggaar, E.

    2008-01-01

    In this paper we discuss the use of the state-space modelling MOESP algorithm to generate precise information about the number of neurons and hidden layers in dynamic neural networks developed for the behavioural modelling of electronic circuits. The Bartels–Stewart algorithm is used to transform

  18. Using satellite imagery to evaluate land-based camouflage assets

    CSIR Research Space (South Africa)

    Baumbach, J

    2006-02-01

    A camouflage field trial experiment was conducted, and satellite imagery of the trial was analysed using change detection, unsupervised classification, supervised classification and object-based classification. The results summarise, for each of the different targets, whether it was detected or not detected using...

  19. A formal analysis of a dynamic distributed spanning tree algorithm

    NARCIS (Netherlands)

    Mooij, A.J.; Wesselink, J.W.

    2003-01-01

    Abstract. We analyze the spanning tree algorithm in the IEEE 1394.1 draft standard, whose correctness has not previously been proved. This algorithm is a fully dynamic distributed graph algorithm, which, in general, is hard to develop. The approach we use is to formally develop an algorithm that is

  20. Dynamic Airspace Management - Models and Algorithms

    OpenAIRE

    Cheng, Peng; Geng, Rui

    2010-01-01

    This chapter investigates the models and algorithms for implementing the concept of Dynamic Airspace Management. Three models are discussed. The first two models concern how to use or adjust air routes dynamically in order to speed up air traffic flow and reduce delay. The third model gives a way to dynamically generate the optimal sector configuration for an air traffic control center, both to balance the controllers' workload and to save control resources. The first model, called the Dynami...

  1. Functional segmentation of dynamic PET studies: Open source implementation and validation of a leader-follower-based algorithm.

    Science.gov (United States)

    Mateos-Pérez, José María; Soto-Montenegro, María Luisa; Peña-Zalbidea, Santiago; Desco, Manuel; Vaquero, Juan José

    2016-02-01

    We present a novel segmentation algorithm for dynamic PET studies that groups pixels according to the similarity of their time-activity curves. Sixteen mice bearing a human tumor cell line xenograft (CH-157MN) were imaged with three different (68)Ga-DOTA-peptides (DOTANOC, DOTATATE, DOTATOC) using a small animal PET-CT scanner. Regional activities (input function and tumor) were obtained after manual delineation of regions of interest over the image. The algorithm was implemented under the jClustering framework and used to extract the same regional activities as in the manual approach. The volume of distribution in the tumor was computed using the Logan linear method. A Kruskal-Wallis test was used to investigate significant differences between the manually and automatically obtained volumes of distribution. The algorithm successfully segmented all the studies. No significant differences were found for the same tracer across different segmentation methods. Manual delineation revealed significant differences between DOTANOC and the other two tracers (DOTANOC - DOTATATE, p=0.020; DOTANOC - DOTATOC, p=0.033). Similar differences were found using the leader-follower algorithm. An open implementation of a novel segmentation method for dynamic PET studies is presented and validated in rodent studies. It successfully replicated the manual results obtained in small-animal studies, thus making it a reliable substitute for this task and, potentially, for other dynamic segmentation procedures. Copyright © 2016 Elsevier Ltd. All rights reserved.
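
    A generic leader-follower (sequential, one-pass) clustering of time-activity curves might look like the sketch below; the similarity measure (Euclidean distance between normalised curves) and the threshold are assumptions, and the jClustering implementation referenced above should be consulted for the actual details.

```python
import numpy as np

def leader_follower(curves, threshold):
    """One-pass clustering: join the nearest leader if close enough, else start a new cluster."""
    leaders, members = [], []
    for idx, c in enumerate(curves):
        c = c / (np.linalg.norm(c) + 1e-12)           # normalise each time-activity curve
        if leaders:
            d = [np.linalg.norm(c - l) for l in leaders]
            k = int(np.argmin(d))
            if d[k] < threshold:
                members[k].append(idx)
                n = len(members[k])                   # incremental update of the leader
                leaders[k] = ((n - 1) * leaders[k] + c) / n
                continue
        leaders.append(c)
        members.append([idx])
    return members

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    t = np.linspace(0, 60, 30)
    slow = np.exp(-t / 40) + rng.normal(0, 0.02, (50, t.size))   # synthetic "tumour-like" curves
    fast = np.exp(-t / 5) + rng.normal(0, 0.02, (50, t.size))    # synthetic "blood-like" curves
    clusters = leader_follower(np.vstack([slow, fast]), threshold=0.3)
    print("number of clusters found:", len(clusters))
```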

  2. The Reach-and-Evolve Algorithm for Reachability Analysis of Nonlinear Dynamical Systems

    NARCIS (Netherlands)

    P.J. Collins (Pieter); A. Goldsztejn

    2008-01-01

    This paper introduces a new algorithm dedicated to the rigorous reachability analysis of nonlinear dynamical systems. The algorithm is initially presented in the context of discrete time dynamical systems, and then extended to continuous time dynamical systems driven by ODEs. In

  3. Arbitrage Pricing, Capital Asset Pricing, and Agricultural Assets

    OpenAIRE

    Louise M. Arthur; Colin A. Carter; Fay Abizadeh

    1988-01-01

    A new asset pricing model, the arbitrage pricing theory, has been developed as an alternative to the capital asset pricing model. The arbitrage pricing theory model is used to analyze the relationship between risk and return for agricultural assets. The major conclusion is that the arbitrage pricing theory results support previous capital asset pricing model findings that the estimated risk associated with agricultural assets is low. This conclusion is more robust for the arbitrage pricing th...

  4. Genetic algorithm enhanced by machine learning in dynamic aperture optimization

    Science.gov (United States)

    Li, Yongjun; Cheng, Weixing; Yu, Li Hua; Rainer, Robert

    2018-05-01

    With the aid of machine learning techniques, the genetic algorithm has been enhanced and applied to the multi-objective optimization problem presented by the dynamic aperture of the National Synchrotron Light Source II (NSLS-II) Storage Ring. During the evolution processes employed by the genetic algorithm, the population is classified into different clusters in the search space. The clusters with top average fitness are given "elite" status. Intervention on the population is implemented by repopulating some potentially competitive candidates based on the experience learned from the accumulated data. These candidates replace randomly selected candidates among the original data pool. The average fitness of the population is therefore improved while diversity is not lost. Maintaining diversity ensures that the optimization is global rather than local. The quality of the population increases and produces more competitive descendants, accelerating the evolution process significantly. When the distribution of optimal candidates is identified, they appear to be located in isolated islands within the search space. Some of these optimal candidates have been experimentally confirmed at the NSLS-II storage ring. The machine learning techniques used to enhance the genetic algorithm can also be applied to other population-based optimization problems such as the particle swarm algorithm.

  5. An Enhanced Hybrid Social Based Routing Algorithm for MANET-DTN

    Directory of Open Access Journals (Sweden)

    Martin Matis

    2016-01-01

    A new routing algorithm for mobile ad hoc networks is proposed in this paper: an Enhanced Hybrid Social Based Routing (HSBR) algorithm for MANET-DTN, as an optimal solution for well-connected multihop mobile networks (MANETs) and/or poorly connected MANETs with a small density of nodes and/or MANETs fragmented by mobility into two or more subnetworks or islands. This proposed HSBR algorithm is fully decentralized, combining the main features of both the Dynamic Source Routing (DSR) and Social Based Opportunistic Routing (SBOR) algorithms. The proposed scheme is simulated and evaluated by replaying real-life traces which exhibit this highly dynamic topology. Evaluation of the newly proposed HSBR algorithm was made by comparison with DSR and SBOR. All methods were simulated with different levels of velocity. The results show that HSBR has the highest success of packet delivery, but with higher delay in comparison with DSR, and much lower delay in comparison with SBOR. Simulation results indicate that the HSBR approach can be applicable in networks where MANET or DTN solutions are separately useless or ineffective. This method provides delivery of the message in every possible situation in areas without infrastructure and can be used as a backup method for disaster situations when infrastructure is destroyed.

  6. Fast Dynamic Simulation-Based Small Signal Stability Assessment and Control

    Energy Technology Data Exchange (ETDEWEB)

    Acharya, Naresh [General Electric Company, Fairfield, CT (United States); Baone, Chaitanya [General Electric Company, Fairfield, CT (United States); Veda, Santosh [General Electric Company, Fairfield, CT (United States); Dai, Jing [General Electric Company, Fairfield, CT (United States); Chaudhuri, Nilanjan [General Electric Company, Fairfield, CT (United States); Leonardi, Bruno [General Electric Company, Fairfield, CT (United States); Sanches-Gasca, Juan [General Electric Company, Fairfield, CT (United States); Diao, Ruisheng [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Wu, Di [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Huang, Zhenyu [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Zhang, Yu [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Jin, Shuangshuang [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Zheng, Bin [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Chen, Yousu [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2014-12-31

    Power grid planning and operation decisions are made based on simulation of the dynamic behavior of the system. Enabling substantial energy savings while increasing the reliability of the aging North American power grid through improved utilization of existing transmission assets hinges on the adoption of wide-area measurement systems (WAMS) for power system stabilization. However, adoption of WAMS alone will not suffice if the power system is to reach its full entitlement in stability and reliability. It is necessary to enhance predictability with "faster than real-time" dynamic simulations that will enable assessment of dynamic stability margins, proactive real-time control, and improved grid resiliency to fast time-scale phenomena such as cascading network failures. Present-day dynamic simulations are performed only during offline planning studies, considering only worst-case conditions such as summer peak, winter peak days, etc. With widespread deployment of renewable generation, controllable loads, energy storage devices and plug-in hybrid electric vehicles expected in the near future, and greater integration of cyber infrastructure (communications, computation and control), monitoring and controlling the dynamic performance of the grid in real time will become increasingly important. State-of-the-art dynamic simulation tools have limited computational speed and are not suitable for real-time applications, given the large set of contingency conditions to be evaluated. These tools are optimized for best performance on single-processor computers, but the simulation is still several times slower than real time due to its computational complexity. With recent significant advances in numerical methods and computational hardware, expectations have been rising towards more efficient and faster techniques to be implemented in power system simulators. This is a natural expectation, given that the core solution algorithms of most commercial simulators were developed

  7. Algorithm of Dynamic Model Structural Identification of the Multivariable Plant

    Directory of Open Access Journals (Sweden)

    Л.М. Блохін

    2004-02-01

    A new algorithm for structural identification of the dynamic model of a multivariable stabilized plant with observable and unobservable disturbances in regular operating modes is offered in this paper. With the help of the offered algorithm it is possible to define the "perturbed" models of the dynamics not only of the plant, but also the dynamic characteristics of observable and unobservable random disturbances, taking into account the absence of correlation between the disturbances themselves and between the control inputs and the unobservable perturbations.

  8. Asset Prices and Trading Volume under Fixed Transactions Costs.

    Science.gov (United States)

    Lo, Andrew W.; Mamaysky, Harry; Wang, Jiang

    2004-01-01

    We propose a dynamic equilibrium model of asset prices and trading volume when agents face fixed transactions costs. We show that even small fixed costs can give rise to large "no-trade" regions for each agent's optimal trading policy. The inability to trade more frequently reduces the agents' asset demand and in equilibrium gives rise to a…
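
    The "no-trade region" behaviour described above can be illustrated with a toy simulation in which an agent rebalances towards a target holding only when the deviation exceeds a band whose width grows with the fixed cost; the band rule and all parameters below are illustrative, not the paper's equilibrium solution.

```python
import numpy as np

def simulate(fixed_cost, target=100.0, band_per_cost=40.0, steps=2000, seed=3):
    """Count trades when rebalancing only outside a no-trade band around the target."""
    rng = np.random.default_rng(seed)
    band = band_per_cost * fixed_cost        # wider no-trade region for larger fixed costs
    holding, trades = target, 0
    for _ in range(steps):
        holding += rng.normal(0, 1.0)        # random shocks to the desired position
        if abs(holding - target) > band:
            holding = target                 # pay the fixed cost and rebalance fully
            trades += 1
    return trades

for c in (0.05, 0.2, 0.5):
    print(f"fixed cost {c}: {simulate(c)} trades")
```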

  9. Asset sales, asset exchanges, and shareholder wealth in China

    Directory of Open Access Journals (Sweden)

    Weiting Huang

    2012-01-01

    In this paper, we study a sample of 1376 corporate asset sales and 250 asset exchanges in China between 1998 and 2006. We find that corporate asset sales in China enhance firm value, with a cumulative abnormal return (CAR) of 0.46% for the pre-announcement five-day period, which is consistent with the evidence discovered in both the U.K. and the U.S. For companies that exchanged assets during the sample period, the pre-announcement five-day CAR of 1.32% is statistically significant. We also discover that gains from divesting assets are positively related to managerial performance measured by Tobin's q ratio and the relative size of the asset sold or exchanged. Well-managed (high-q) companies are more likely to sell or exchange assets in a value-maximizing fashion than poorly managed (low-q) companies. Furthermore, asset-seller gains are not related to enhancing corporate focus, but improving corporate focus by exchanging for core assets enhances firm value.

  10. Generalized-ensemble molecular dynamics and Monte Carlo algorithms beyond the limit of the multicanonical algorithm

    International Nuclear Information System (INIS)

    Okumura, Hisashi

    2010-01-01

    I review two new generalized-ensemble algorithms for molecular dynamics and Monte Carlo simulations of biomolecules, that is, the multibaric–multithermal algorithm and the partial multicanonical algorithm. In the multibaric–multithermal algorithm, two-dimensional random walks not only in the potential-energy space but also in the volume space are realized. One can discuss the temperature dependence and pressure dependence of biomolecules with this algorithm. The partial multicanonical simulation samples a wide range of only an important part of potential energy, so that one can concentrate the effort to determine a multicanonical weight factor only on the important energy terms. This algorithm has higher sampling efficiency than the multicanonical and canonical algorithms. (review)

  11. Thickness determination in textile material design: dynamic modeling and numerical algorithms

    International Nuclear Information System (INIS)

    Xu, Dinghua; Ge, Meibao

    2012-01-01

    Textile material design is of paramount importance in the study of functional clothing design. It is therefore important to determine the dynamic heat and moisture transfer characteristics in the human body–clothing–environment system, which directly determine the heat–moisture comfort level of the human body. Based on a model of dynamic heat and moisture transfer with condensation in porous fabric at low temperature, this paper presents a new inverse problem of textile thickness determination (IPTTD). Adopting the idea of the least-squares method, we formulate the IPTTD as a function minimization problem. By means of the finite-difference method, the quasi-solution method and direct search methods for one-dimensional minimization problems, we construct iterative algorithms for the approximate solution of the IPTTD. Numerical simulation results validate the formulation of the IPTTD and demonstrate the effectiveness of the proposed numerical algorithms. (paper)
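
    The inverse problem is cast as a one-dimensional minimisation over thickness. The sketch below pairs a stand-in forward model with a golden-section direct search, one standard 1-D direct search method; the forward model and the "measured" value are fabricated placeholders, not the paper's heat-moisture transfer solver.

```python
import math

def forward_model(thickness):
    """Hypothetical stand-in for the heat-moisture transfer solver: predicted inner temperature."""
    return 35.0 - 8.0 * math.exp(-thickness / 2.0)

def misfit(thickness, measured):
    """Least-squares discrepancy between simulated and measured quantities."""
    return (forward_model(thickness) - measured) ** 2

def golden_section(f, a, b, tol=1e-5):
    """Direct-search minimisation of f on [a, b]."""
    phi = (math.sqrt(5) - 1) / 2
    c, d = b - phi * (b - a), a + phi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - phi * (b - a)
        else:
            a, c = c, d
            d = a + phi * (b - a)
    return (a + b) / 2

measured_temperature = 33.5   # fabricated "measurement"
best_thickness = golden_section(lambda x: misfit(x, measured_temperature), 0.1, 10.0)
print(f"estimated thickness: {best_thickness:.3f}")
```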

  12. Efficiently Inefficient Markets for Assets and Asset Management

    DEFF Research Database (Denmark)

    Garleanu, Nicolae; Heje Pedersen, Lasse

    We consider a model where investors can invest directly or search for an asset manager, information about assets is costly, and managers charge an endogenous fee. The efficiency of asset prices is linked to the efficiency of the asset management market: if investors can find managers more easily, more money is allocated to active management, fees are lower, and asset prices are more efficient. Informed managers outperform after fees, uninformed managers underperform after fees, and the net performance of the average manager depends on the number of "noise allocators." Finally, we show why large...

  13. A Novel Image Stream Cipher Based On Dynamic Substitution

    OpenAIRE

    Elsharkawi, A.; El-Sagheer, R. M.; Akah, H.; Taha, H.

    2016-01-01

    Recently, many chaos-based stream cipher algorithms have been developed. Traditional chaos stream cipher is based on XORing a generated secure random number sequence based on chaotic maps (e.g. logistic map, Bernoulli Map, Tent Map etc.) with the original image to get the encrypted image, This type of stream cipher seems to be vulnerable to chosen plaintext attacks. This paper introduces a new stream cipher algorithm based on dynamic substitution box. The new algorithm uses one substitution b...

  14. ASSETS STRUCTURE AT CREDIT UNIONS

    Directory of Open Access Journals (Sweden)

    Tiplea Augustin

    2011-12-01

    The balance sheet is a static tool for assessing the entity's position, while the profit and loss account on the one hand and the cash flow statement on the other are dynamic statements, showing respectively the effectiveness or ineffectiveness of the total consumption of resources (profit and loss) and the viability of the entity's business (cash flows). As a reflection of financial position, the balance sheet, established at the end of the reporting period (called a financial year), describes separately the items of assets, liabilities and equity of the company. Assets are resources controlled by the enterprise as a result of past events and from which future economic benefits are expected to flow to the enterprise. The economic benefits correspond to a production potential, a possibility of conversion into cash, or a reduction in the outflow of funds (cost reduction) that an asset contributes, directly or indirectly, to the company's cash flow.

  15. Dynamic multiple thresholding breast boundary detection algorithm for mammograms

    International Nuclear Information System (INIS)

    Wu, Yi-Ta; Zhou Chuan; Chan, Heang-Ping; Paramagul, Chintana; Hadjiiski, Lubomir M.; Daly, Caroline Plowden; Douglas, Julie A.; Zhang Yiheng; Sahiner, Berkman; Shi Jiazheng; Wei Jun

    2010-01-01

    Purpose: Automated detection of breast boundary is one of the fundamental steps for computer-aided analysis of mammograms. In this study, the authors developed a new dynamic multiple thresholding based breast boundary (MTBB) detection method for digitized mammograms. Methods: A large data set of 716 screen-film mammograms (442 CC view and 274 MLO view) obtained from consecutive cases of an Institutional Review Board approved project were used. An experienced breast radiologist manually traced the breast boundary on each digitized image using a graphical interface to provide a reference standard. The initial breast boundary (MTBB-Initial) was obtained by dynamically adapting the threshold to the gray level range in local regions of the breast periphery. The initial breast boundary was then refined by using gradient information from horizontal and vertical Sobel filtering to obtain the final breast boundary (MTBB-Final). The accuracy of the breast boundary detection algorithm was evaluated by comparison with the reference standard using three performance metrics: The Hausdorff distance (HDist), the average minimum Euclidean distance (AMinDist), and the area overlap measure (AOM). Results: In comparison with the authors' previously developed gradient-based breast boundary (GBB) algorithm, it was found that 68%, 85%, and 94% of images had HDist errors less than 6 pixels (4.8 mm) for GBB, MTBB-Initial, and MTBB-Final, respectively. 89%, 90%, and 96% of images had AMinDist errors less than 1.5 pixels (1.2 mm) for GBB, MTBB-Initial, and MTBB-Final, respectively. 96%, 98%, and 99% of images had AOM values larger than 0.9 for GBB, MTBB-Initial, and MTBB-Final, respectively. The improvement by the MTBB-Final method was statistically significant for all the evaluation measures by the Wilcoxon signed rank test (p<0.0001). Conclusions: The MTBB approach that combined dynamic multiple thresholding and gradient information provided better performance than the breast boundary

  16. Time Consistent Strategies for Mean-Variance Asset-Liability Management Problems

    Directory of Open Access Journals (Sweden)

    Hui-qiang Ma

    2013-01-01

    This paper studies optimal time-consistent investment strategies in multiperiod asset-liability management problems under the mean-variance criterion. By applying the time-consistent model of Chen et al. (2013) and employing a dynamic programming technique, we derive two time-consistent policies for asset-liability management problems in a market with and without a riskless asset, respectively. We show that the presence of liability does affect the optimal strategy. More specifically, liability leads to a parallel shift of the optimal time-consistent investment policy. Moreover, for an arbitrarily risk-averse investor (under the variance criterion) with liability, the time-diversification effects could be ignored in a market with a riskless asset; however, they should be considered in a market without any riskless asset.

  17. A Novel Dynamic Algorithm for IT Outsourcing Risk Assessment Based on Transaction Cost Theory

    Directory of Open Access Journals (Sweden)

    Guodong Cong

    2015-01-01

    With the great risk exposure in IT outsourcing, how to assess IT outsourcing risk becomes a critical issue. However, most approaches to date need to be further adapted to the particular complexity of IT outsourcing risk, falling short in subjective bias, inaccuracy, or efficiency. This paper proposes a dynamic algorithm for risk assessment. It first puts forward an extended three-layer (risk factors, risks, and risk consequences) transferring mechanism based on transaction cost theory (TCT) as the framework of risk analysis, which bridges the interconnection of components in the three layers with preset transferring probabilities and impacts. Then, it establishes an equation group between risk factors and risk consequences, which makes the "attribution" more precise by tracking the specific sources that lead to a certain loss. Namely, in each phase of the outsourcing lifecycle, both the likelihood and the loss of each risk factor and those of each risk are acquired by solving the equation group with real data on risk consequences. In this "reverse" way, risk assessment becomes a responsive and interactive process driven by real data instead of subjective estimation, which improves accuracy and alleviates bias in risk assessment. A numerical case proves the effectiveness of the algorithm compared with the approaches put forward in other references.
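
    The "reverse" attribution step, in which the likelihood and loss of risk factors are recovered from observed risk consequences through preset transferring probabilities, amounts to solving a linear system. The tiny example below shows that step with made-up transfer matrices and consequence data; the numbers and matrix shapes are purely illustrative.

```python
import numpy as np

# Assumed transfer matrices (rows: risks/consequences, columns: factors/risks).
# In the paper these come from transaction cost theory; here they are placeholders.
T_factor_to_risk = np.array([[0.6, 0.2, 0.0],
                             [0.3, 0.5, 0.4],
                             [0.1, 0.3, 0.6]])
T_risk_to_consequence = np.array([[0.7, 0.2, 0.1],
                                  [0.2, 0.6, 0.3]])

observed_consequence_loss = np.array([120.0, 80.0])   # collected "real data" (fabricated here)

# Combined mapping from risk-factor losses to consequence losses.
A = T_risk_to_consequence @ T_factor_to_risk

# Least-squares "attribution": recover the loss contributed by each risk factor.
factor_loss, *_ = np.linalg.lstsq(A, observed_consequence_loss, rcond=None)
risk_loss = T_factor_to_risk @ factor_loss

print("estimated loss per risk factor:", np.round(factor_loss, 2))
print("implied loss per risk:", np.round(risk_loss, 2))
```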

  18. An Improved Shuffled Frog Leaping Algorithm and Its Application in Dynamic Emergency Vehicle Dispatching

    Directory of Open Access Journals (Sweden)

    Xiaohong Duan

    2018-01-01

    The traditional method for solving the dynamic emergency vehicle dispatching problem can only obtain a locally optimal strategy in each horizon. In order to obtain a dispatching strategy that can better respond to changes in road conditions during the whole dispatching process, real-time and time-dependent link travel speeds are fused, and a time-dependent polygonal-shaped link travel speed function is set up to simulate predictable changes in road conditions. Response times, accident severity, and accident time windows are taken as key factors to build an emergency vehicle dispatching model integrating dynamic emergency vehicle routing and selection. For the unpredictable changes in road conditions caused by accidents, the dispatching strategy is adjusted based on the real-time link travel speed. In order to solve the dynamic emergency vehicle dispatching model, an improved shuffled frog leaping algorithm (ISFLA) is proposed. The global search of the improved algorithm uses the probability model of the estimation of distribution algorithm to avoid partial optimal solutions. Based on the Beijing expressway network, the efficacy of the model and the improved algorithm were tested from three aspects. The results show the following: (1) Compared with SFLA, the optimization performance of ISFLA improves as the number of decision variables increases. When there are 815 possible emergency vehicle selection strategies, the objective function value of the optimal selection strategies obtained by the base algorithm is 210.10% larger than that of ISFLA. (2) The prediction error of the travel speed affects the accuracy of the initial emergency vehicle dispatching. A prediction error of ±10 can basically meet the requirements of the initial dispatching. (3) The adjustment of the emergency vehicle dispatching strategy can successfully bypass road sections affected by accidents and shorten the response time.

  19. Analysis of the Multi Strategy Goal Programming for Micro-Grid Based on Dynamic ant Genetic Algorithm

    Science.gov (United States)

    Qiu, J. P.; Niu, D. X.

    Micro-grids are one of the key technologies of future energy supplies. Economic planning, reliability, and environmental protection of the micro-grid are taken as the basis for analysing multi-strategy objective programming problems for a micro-grid containing wind power, solar power, a battery and a micro gas turbine. The mathematical models of the generation characteristics and energy dissipation of each source are established, and the multi-objective micro-grid planning function under different operating strategies is converted into a single-objective model based on the AHP method. An example analysis shows that a hybrid dynamic ant colony and genetic algorithm can obtain the optimal power output of this model.

  20. Detection of algorithmic trading

    Science.gov (United States)

    Bogoev, Dimitar; Karam, Arzé

    2017-10-01

    We develop a new approach to reflect the behavior of algorithmic traders. Specifically, we provide an analytical and tractable way to infer patterns of quote volatility and price momentum consistent with different types of strategies employed by algorithmic traders, and we propose two ratios to quantify these patterns. Quote volatility ratio is based on the rate of oscillation of the best ask and best bid quotes over an extremely short period of time; whereas price momentum ratio is based on identifying patterns of rapid upward or downward movement in prices. The two ratios are evaluated across several asset classes. We further run a two-stage Artificial Neural Network experiment on the quote volatility ratio; the first stage is used to detect the quote volatility patterns resulting from algorithmic activity, while the second is used to validate the quality of signal detection provided by our measure.
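
    The exact definitions of the two ratios are the authors'; as a rough illustration of the kind of statistic involved, the snippet below computes a simple oscillation count of the best ask over a short window and a directional run length for prices from synthetic data. These proxy formulas are assumptions, not the paper's measures.

```python
import numpy as np

def quote_oscillation_ratio(best_ask, window=50):
    """Fraction of ticks in the window where the best ask reverses direction (a volatility proxy)."""
    recent = np.asarray(best_ask[-window:])
    moves = np.sign(np.diff(recent))
    reversals = np.sum(moves[1:] * moves[:-1] < 0)
    return reversals / max(len(moves) - 1, 1)

def price_momentum_ratio(prices, window=50):
    """Longest monotone run divided by the window length (a rapid up/down movement proxy)."""
    recent = np.asarray(prices[-window:])
    moves = np.sign(np.diff(recent))
    longest = run = 1
    for a, b in zip(moves[:-1], moves[1:]):
        run = run + 1 if a == b and a != 0 else 1
        longest = max(longest, run)
    return longest / max(len(moves), 1)

rng = np.random.default_rng(4)
ask = 100 + np.cumsum(rng.choice([-0.01, 0.01], size=500))
print("quote oscillation:", round(quote_oscillation_ratio(ask), 3))
print("price momentum:", round(price_momentum_ratio(ask), 3))
```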

  1. A dynamic global and local combined particle swarm optimization algorithm

    International Nuclear Information System (INIS)

    Jiao Bin; Lian Zhigang; Chen Qunxian

    2009-01-01

    The particle swarm optimization (PSO) algorithm has been developing rapidly and many results have been reported. The PSO algorithm has shown some important advantages by providing a high convergence speed on specific problems, but it has a tendency to get stuck in near-optimal solutions, and it can be difficult to improve solution accuracy by fine tuning. This paper presents a dynamic global and local combined particle swarm optimization (DGLCPSO) algorithm to improve the performance of the original PSO, in which all particles dynamically share the best information of the local particle, the global particle and the group particles. It is tested on a set of eight benchmark functions with different dimensions and compared with the original PSO. Experimental results indicate that the DGLCPSO algorithm improves the search performance on the benchmark functions significantly, and show the effectiveness of the algorithm in solving optimization problems.
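
    The key idea is a velocity update that mixes the particle's personal best, the global best, and group-level information. The following sketch implements one plausible form of that combination on a benchmark function; the weighting scheme and the "best fifth of the swarm" group definition are assumptions, not the exact DGLCPSO update.

```python
import numpy as np

rng = np.random.default_rng(5)

def sphere(x):
    return np.sum(x ** 2, axis=-1)

def combined_pso(n=30, dim=10, iters=300, w=0.7, c1=1.4, c2=1.4, c3=0.7):
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest, pbest_f = x.copy(), sphere(x)
    for _ in range(iters):
        gbest = pbest[np.argmin(pbest_f)]          # global best particle
        group_idx = pbest_f.argsort()[: n // 5]    # best fifth of the swarm ("group particles")
        group_mean = pbest[group_idx].mean(axis=0)
        r1, r2, r3 = rng.random((3, n, dim))
        v = (w * v
             + c1 * r1 * (pbest - x)               # local (personal) best information
             + c2 * r2 * (gbest - x)               # global best information
             + c3 * r3 * (group_mean - x))         # shared group information
        x = np.clip(x + v, -5, 5)
        f = sphere(x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
    return pbest_f.min()

print("best value found:", combined_pso())
```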

  2. A Dynamic Spectrum Allocation Algorithm for a Maritime Cognitive Radio Communication System Based on a Queuing Model

    Directory of Open Access Journals (Sweden)

    Jingbo Zhang

    2017-09-01

    With the rapid development of maritime digital communication, the demand for spectrum resources is increasing, and building a maritime cognitive radio communication system is an effective solution. In this paper, the problem of how to effectively allocate spectrum to secondary users (SUs) with different priorities in a maritime cognitive radio communication system is studied. According to the characteristics of maritime cognitive radio and existing research on cognitive radio systems, this paper establishes a centralized maritime cognitive radio communication model and creates a simplified queuing model with two queues for the communication model. In view of the behaviors of SUs and primary users (PUs), we propose a dynamic spectrum allocation (DSA) algorithm based on the system status, and analyze it with a two-dimensional Markov chain. Simulation results show that, when different types of SUs have similar arrival rates, the algorithm can vary the priority factor according to changes in the users' status in the system, so as to adjust the channel allocation and decrease system congestion. The improvement of the algorithm is about 7–26%, and the specific improvement is negatively correlated with the SU arrival rate.

  3. Imperialist Competitive Algorithm with Dynamic Parameter Adaptation Using Fuzzy Logic Applied to the Optimization of Mathematical Functions

    Directory of Open Access Journals (Sweden)

    Emer Bernal

    2017-01-01

    In this paper we present a method using fuzzy logic for dynamic parameter adaptation in the imperialist competitive algorithm, which is usually known by its acronym ICA. The ICA algorithm was initially studied in its original form to find out how it works and which parameters have the most effect upon its results. Based on this study, several designs of fuzzy systems for dynamic adjustment of the ICA parameters are proposed. The experiments were performed on the basis of solving complex optimization problems, particularly applied to benchmark mathematical functions. A comparison of the original imperialist competitive algorithm and our proposed fuzzy imperialist competitive algorithm was performed. In addition, the fuzzy ICA was compared with another metaheuristic using a statistical test to measure the advantage of the proposed fuzzy approach for dynamic parameter adaptation.

  4. Measurement correction method for force sensor used in dynamic pressure calibration based on artificial neural network optimized by genetic algorithm

    Science.gov (United States)

    Gu, Tingwei; Kong, Deren; Shang, Fei; Chen, Jing

    2017-12-01

    We present an optimization algorithm to obtain low-uncertainty dynamic pressure measurements from a force-transducer-based device. In this paper, the advantages and disadvantages of the methods that are commonly used to measure propellant powder gas pressure, the applicable scope of dynamic pressure calibration devices, and the shortcomings of the traditional comparison calibration method based on the drop-weight device are first analysed in detail. Then, a dynamic calibration method for measuring pressure using a force sensor based on a drop-weight device is introduced. This method can effectively save time when many pressure sensors are calibrated simultaneously and extend the life of expensive reference sensors. However, the force sensor is installed between the drop-weight and the hammerhead by transition pieces through a bolt-fastened connection, which causes adverse effects such as additional pretightening and inertia forces. To address these effects, the influence mechanisms of the pretightening force, the inertia force and other factors on the force measurement are theoretically analysed. Then a measurement correction method for the force measurement is proposed based on an artificial neural network optimized by a genetic algorithm. The training and testing data sets are obtained from calibration tests, and the selection criteria for the key parameters of the correction model are discussed. The evaluation results for the test data show that the correction model can effectively improve the force measurement accuracy of the force sensor. Compared with the traditional high-accuracy comparison calibration method, the percentage difference of the impact-force-based measurement is less than 0.6% and the relative uncertainty of the corrected force value is 1.95%, which can meet the requirements of engineering applications.

  5. Rule-Based Analytic Asset Management for Space Exploration Systems (RAMSES), Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Payload Systems Inc. (PSI) and the Massachusetts Institute of Technology (MIT) were selected to jointly develop the Rule-based Analytic Asset Management for Space...

  6. Binaural model-based dynamic-range compression.

    Science.gov (United States)

    Ernst, Stephan M A; Kortlang, Steffen; Grimm, Giso; Bisitz, Thomas; Kollmeier, Birger; Ewert, Stephan D

    2018-01-26

    Binaural cues such as interaural level differences (ILDs) are used to organise auditory perception and to segregate sound sources in complex acoustical environments. In bilaterally fitted hearing aids, dynamic-range compression operating independently at each ear potentially alters these ILDs, thus distorting binaural perception and sound source segregation. A binaurally-linked model-based fast-acting dynamic compression algorithm designed to approximate the normal-hearing basilar membrane (BM) input-output function in hearing-impaired listeners is suggested. A multi-center evaluation in comparison with an alternative binaural and two bilateral fittings was performed to assess the effect of binaural synchronisation on (a) speech intelligibility and (b) perceived quality in realistic conditions. 30 and 12 hearing impaired (HI) listeners were aided individually with the algorithms for both experimental parts, respectively. A small preference towards the proposed model-based algorithm in the direct quality comparison was found. However, no benefit of binaural-synchronisation regarding speech intelligibility was found, suggesting a dominant role of the better ear in all experimental conditions. The suggested binaural synchronisation of compression algorithms showed a limited effect on the tested outcome measures, however, linking could be situationally beneficial to preserve a natural binaural perception of the acoustical environment.

  7. Extracting quantum dynamics from genetic learning algorithms through principal control analysis

    International Nuclear Information System (INIS)

    White, J L; Pearson, B J; Bucksbaum, P H

    2004-01-01

    Genetic learning algorithms are widely used to control ultrafast optical pulse shapes for photo-induced quantum control of atoms and molecules. An unresolved issue is how to use the solutions found by these algorithms to learn about the system's quantum dynamics. We propose a simple method based on covariance analysis of the control space, which can reveal the degrees of freedom in the effective control Hamiltonian. We have applied this technique to stimulated Raman scattering in liquid methanol. A simple model of two-mode stimulated Raman scattering is consistent with the results. (letter to the editor)

  8. A Dealer Model of Foreign Exchange Market with Finite Assets

    Science.gov (United States)

    Hamano, Tomoya; Kanazawa, Kiyoshi; Takayasu, Hideki; Takayasu, Misako

    An agent-based model is introduced to study the finite-asset effect in foreign exchange markets. We find that the transacted price asymptotically approaches an equilibrium price, which is determined by the monetary balance between the pair of currencies. We phenomenologically derive a formula to estimate the equilibrium price, and we model its relaxation dynamics around the equilibrium price on the basis of a Langevin-like equation.
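
    A one-line caricature of the relaxation dynamics described above: the transacted price is pulled towards an equilibrium level set by the monetary balance between the two currencies, plus noise. The functional form of the equilibrium price (a simple ratio of total holdings) and the relaxation rate are illustrative guesses, not the paper's derived formula.

```python
import numpy as np

def simulate_price(money_yen=1.2e6, money_dollar=1.0e4, kappa=0.05, sigma=0.2,
                   p0=90.0, steps=5000, seed=6):
    """Langevin-like relaxation of the transacted price towards a monetary-balance level."""
    rng = np.random.default_rng(seed)
    p_eq = money_yen / money_dollar          # assumed equilibrium: ratio of total holdings
    p = np.empty(steps)
    p[0] = p0
    for t in range(1, steps):
        p[t] = p[t - 1] + kappa * (p_eq - p[t - 1]) + sigma * rng.normal()
    return p, p_eq

prices, p_eq = simulate_price()
print(f"equilibrium level {p_eq:.1f}, mean of last 1000 steps {prices[-1000:].mean():.1f}")
```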

  9. Tailored Algorithm for Sensitivity Enhancement of Gas Concentration Sensors Based on Tunable Laser Absorption Spectroscopy.

    Science.gov (United States)

    Vargas-Rodriguez, Everardo; Guzman-Chavez, Ana Dinora; Baeza-Serrato, Roberto

    2018-06-04

    In this work, a novel tailored algorithm to enhance the overall sensitivity of gas concentration sensors based on the Direct Absorption Tunable Laser Absorption Spectroscopy (DA-ATLAS) method is presented. By using this algorithm, the sensor sensitivity can be custom-designed to be quasi constant over a much larger dynamic range compared with that obtained by typical methods based on a single statistics feature of the sensor signal output (peak amplitude, area under the curve, mean or RMS). Additionally, it is shown that with our algorithm, an optimal function can be tailored to get a quasi linear relationship between the concentration and some specific statistics features over a wider dynamic range. In order to test the viability of our algorithm, a basic C 2 H 2 sensor based on DA-ATLAS was implemented, and its experimental measurements support the simulated results provided by our algorithm.

  10. A Robust Rational Route to Randomness in a Simple Asset Pricing Model

    OpenAIRE

    Hommes, C.H.; Huang, H.; Wang, D.

    2002-01-01

    We investigate asset pricing dynamics in an adaptive evolutionary asset pricing model with fundamentalists, trend followers and a market maker. Agents can choose between a fundamentalist strategy at positive information cost or choose a trend following strategy for free. Price adjustment is proportional to the excess demand in the asset market. Agents asynchronously update their strategy according to realized net profits in the recent past. As agents become more sensitive to differences in st...

  11. A multithreaded parallel implementation of a dynamic programming algorithm for sequence comparison.

    Science.gov (United States)

    Martins, W S; Del Cuvillo, J B; Useche, F J; Theobald, K B; Gao, G R

    2001-01-01

    This paper discusses the issues involved in implementing a dynamic programming algorithm for biological sequence comparison on a general-purpose parallel computing platform based on a fine-grain event-driven multithreaded program execution model. Fine-grain multithreading permits efficient parallelism exploitation in this application both by taking advantage of asynchronous point-to-point synchronizations and communication with low overheads and by effectively tolerating latency through the overlapping of computation and communication. We have implemented our scheme on EARTH, a fine-grain event-driven multithreaded execution and architecture model which has been ported to a number of parallel machines with off-the-shelf processors. Our experimental results show that the dynamic programming algorithm can be efficiently implemented on EARTH systems with high performance (e.g., speedup of 90 on 120 nodes), good programmability and reasonable cost.
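
    For readers unfamiliar with the underlying kernel, a plain serial dynamic-programming sequence comparison in the Needleman-Wunsch form is sketched below; the EARTH implementation discussed above parallelises the anti-diagonal wavefront of exactly this kind of table with fine-grain threads. The scoring parameters are arbitrary.

```python
def needleman_wunsch(a, b, match=2, mismatch=-1, gap=-2):
    """Fill the DP table of optimal alignment scores. Cell (i, j) depends only on its
    left, upper, and upper-left neighbours, which is what makes anti-diagonals mutually
    independent and amenable to multithreaded wavefront execution."""
    n, m = len(a), len(b)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[n][m]

print(needleman_wunsch("GATTACA", "GCATGCA"))
```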

  12. Prediction of China's coal production-environmental pollution based on a hybrid genetic algorithm-system dynamics model

    International Nuclear Information System (INIS)

    Yu Shiwei; Wei Yiming

    2012-01-01

    This paper proposes a hybrid model based on a genetic algorithm (GA) and system dynamics (SD) for the coal production–environmental pollution load in China. GA has been utilized in the optimization of the parameters of the SD model to reduce implementation subjectivity. The chain of "Economic development–coal demand–coal production–environmental pollution load" of China in 2030 was predicted, and scenarios were analyzed. Results show that: (1) GA performs well in optimizing the parameters of the SD model objectively and in simulating the historical data; (2) The demand for coal energy continuously increases, although the coal intensity has actually decreased because of China's persistent economic development. Furthermore, instead of reaching a turning point by 2030, the environmental pollution load continuously increases each year even under the scenario where coal intensity decreases by 20% and investment in pollution abatement increases by 20%; (3) For abating the amount of the "three types of wastes", reducing the coal intensity is more effective than reducing the polluted production per tonne of coal and increasing investment in pollution control. - Highlights: ► We propose a GA-SD model for China's coal production-pollution prediction. ► The genetic algorithm (GA) can objectively and accurately optimize the parameters of the system dynamics (SD) model. ► Environmental pollution in China is projected to grow in our scenarios by 2030. ► The mechanism of reducing waste production per tonne of coal mined is more effective than others.

  13. Parallel algorithms for continuum dynamics

    International Nuclear Information System (INIS)

    Hicks, D.L.; Liebrock, L.M.

    1987-01-01

    Simply porting existing parallel programs to a new parallel processor may not achieve the full speedup possible; to achieve the maximum efficiency may require redesigning the parallel algorithms for the specific architecture. The authors discuss here parallel algorithms that were developed first for the HEP processor and then ported to the CRAY X-MP/4, the ELXSI/10, and the Intel iPSC/32. Focus is mainly on the most recent parallel processing results produced, i.e., those on the Intel Hypercube. The applications are simulations of continuum dynamics in which the momentum and stress gradients are important. Examples of these are inertial confinement fusion experiments, severe breaks in the coolant system of a reactor, weapons physics, shock-wave physics. Speedup efficiencies on the Intel iPSC Hypercube are very sensitive to the ratio of communication to computation. Great care must be taken in designing algorithms for this machine to avoid global communication. This is much more critical on the iPSC than it was on the three previous parallel processors

  14. Accelerating convergence of molecular dynamics-based structural relaxation

    DEFF Research Database (Denmark)

    Christensen, Asbjørn

    2005-01-01

    We describe strategies to accelerate the terminal stage of molecular dynamics (MD)-based relaxation algorithms, where a large fraction of the computational resources are used. First, we analyze the qualitative and quantitative behavior of the QuickMin family of MD relaxation algorithms and explore...
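
    As background for the QuickMin family mentioned above, a minimal velocity-projection relaxation step is sketched below on a toy quadratic potential; the masses, step size, and potential are illustrative, and the acceleration strategies of the paper itself are not reproduced.

```python
import numpy as np

def quickmin_relax(x0, force, dt=0.05, steps=500, tol=1e-8):
    """QuickMin-style relaxation: keep only the velocity component along the force,
    and zero the velocity whenever it points against the force."""
    x = np.array(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(steps):
        f = force(x)
        if np.linalg.norm(f) < tol:
            break
        fhat = f / np.linalg.norm(f)
        v = fhat * np.dot(v, fhat) if np.dot(v, fhat) > 0 else np.zeros_like(v)
        v += dt * f                      # simple Euler kick along the force
        x += dt * v
    return x

# Toy potential U(x) = 0.5 * x^T K x with its minimum at the origin.
K = np.diag([1.0, 4.0, 9.0])
force = lambda x: -K @ x
print("relaxed position:", np.round(quickmin_relax([1.0, -2.0, 0.5], force), 6))
```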

  15. Dynamic Heat Supply Prediction Using Support Vector Regression Optimized by Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Meiping Wang

    2016-01-01

    We developed an effective intelligent model to predict the dynamic heat supply of a heat source. A hybrid forecasting method is proposed based on a support vector regression (SVR) model optimized by particle swarm optimization (PSO) algorithms. Due to the interaction of meteorological conditions and the heating parameters of the heating system, it is extremely difficult to forecast the dynamic heat supply. Firstly, the correlations among heat supply and related influencing factors in the heating system were analyzed through the correlation analysis of statistical theory. Then, the SVR model was employed to forecast the dynamic heat supply. In the model, the input variables were selected based on the correlation analysis, and three crucial parameters, including the penalty factor, the gamma of the RBF kernel, and the insensitive loss function, were optimized by PSO algorithms. The optimized SVR model was compared with the basic SVR, the genetic algorithm-optimized SVR (GA-SVR), and an artificial neural network (ANN) on six groups of experimental data from two heat sources. The results of the correlation coefficient analysis revealed the relationship between the influencing factors and the forecasted heat supply and determined the input variables. The performance of the PSO-SVR model is superior to those of the other three models. The PSO-SVR method is statistically robust and can be applied to practical heating systems.
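
    A compact version of the tuning loop described above, optimising the SVR penalty factor C, RBF gamma, and epsilon with a small PSO over cross-validated error, is sketched below; the swarm settings and the synthetic data are placeholders, and scikit-learn is assumed to be available.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
X = rng.uniform(-3, 3, (200, 3))                       # synthetic "heating" features
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.1, 200)

def cv_error(params):
    c, gamma, eps = params
    model = SVR(C=c, gamma=gamma, epsilon=eps)
    return -cross_val_score(model, X, y, cv=3, scoring="neg_mean_squared_error").mean()

# Bounds for (C, gamma, epsilon); the ranges are a common heuristic choice.
lo, hi = np.array([0.1, 0.001, 0.001]), np.array([100.0, 1.0, 0.5])
n, iters = 12, 20
pos = rng.uniform(lo, hi, (n, 3))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([cv_error(p) for p in pos])

for _ in range(iters):
    gbest = pbest[np.argmin(pbest_f)]
    r1, r2 = rng.random((2, n, 3))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([cv_error(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]

print("best (C, gamma, epsilon):", np.round(pbest[np.argmin(pbest_f)], 4),
      "CV MSE:", round(pbest_f.min(), 4))
```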

  16. ASSESSMENT OF BANKING ASSETS ON FINANCIAL RISK MANAGEMENT - ALBANIAN CASE

    Directory of Open Access Journals (Sweden)

    ADRIATIK KOTORRI

    2014-02-01

    Recognizing the volatility of asset value dynamics in financial institutions and the importance of its recognition both for financial reporting purposes and for risk management, this paper aims to provide a practical model for the valuation of the assets of financial institutions, especially banks. It also aims to present a model to measure the value of banking assets for risk management purposes, as an opportunity to identify banking risks at an early stage. The paper develops the forms of bank asset assessment and the basis of the mathematical modelling of this assessment in general. It also identifies the evaluation factors, for example time to maturity, the market interest rate for the assets (YTM), the agreed interest rate, early repayment of the loan, interest ceilings and floors, off-balance-sheet treatment, etc.

  17. A Dynamic Multistage Hybrid Swarm Intelligence Optimization Algorithm for Function Optimization

    Directory of Open Access Journals (Sweden)

    Daqing Wu

    2012-01-01

    A novel dynamic multistage hybrid swarm intelligence optimization algorithm is introduced, which is abbreviated as DM-PSO-ABC. DM-PSO-ABC combines the exploration capabilities of the dynamic multiswarm particle swarm optimizer (PSO) and the stochastic exploitation of the cooperative artificial bee colony algorithm (CABC) for solving function optimization. In the proposed hybrid algorithm, the whole process is divided into three stages. In the first stage, a dynamic multiswarm PSO is constructed to maintain the population diversity. In the second stage, the parallel, positive feedback of CABC is implemented in each small swarm. In the third stage, we make use of the particle swarm optimization global model, which has a faster convergence speed, to enhance the global convergence in solving the whole problem. To verify the effectiveness and efficiency of the proposed hybrid algorithm, benchmark problems of various scales are tested to demonstrate the potential of the proposed multistage hybrid swarm intelligence optimization algorithm. The results show that DM-PSO-ABC is better in search precision and convergence properties and has a strong ability to escape from local suboptima when compared with several other peer algorithms.

  18. Implementation of ASSET concept in India

    International Nuclear Information System (INIS)

    Koley, J.

    1997-01-01

    The paper presents a retrospective assessment of the use of the ASSET methodology in India since the first ASSET seminar organized by the IAEA in collaboration with the Atomic Energy Regulatory Board, India (AERB) in May 1994. The first ASSET seminar was organized to initiate the spread of the idea among operating and research organizations and regulatory body personnel. The participants were carefully chosen from various fields and with different levels of experience to generate teams with a sufficiently wide spectrum of knowledge base. AERB took the initiative in leading by example and formed ASSET teams to carry out the first ASSET reviews in India. These teams, at the instance of AERB, carried out ASSET reviews of three safety-related events, two at nuclear power plants and one at a research reactor. This paper describes the outcome of these ASSET studies and the subsequent implementation of the recommendations. The initiative taken by the regulatory body has led to the formation of ASSET teams by the utilities to carry out ASSET studies on their own. The results of these studies are yet to be assessed by the regulatory body. The ASSET experience reveals that it has further potential in improving safety performance and safety culture and bringing in fresh enthusiasm among safety professionals of Indian nuclear utilities

  19. Implementation of ASSET concept in India

    Energy Technology Data Exchange (ETDEWEB)

    Koley, J [Operating Plants Safety Div., AERB, Mumbai (India)]

    1997-10-01

    The paper presents a retrospective assessment of the use of the ASSET methodology in India since the first ASSET seminar organized by the IAEA in collaboration with the Atomic Energy Regulatory Board, India (AERB) in May 1994. The first ASSET seminar was organized to initiate the spread of the idea among operating and research organizations and regulatory body personnel. The participants were carefully chosen from various fields and with different levels of experience to generate teams with a sufficiently wide spectrum of knowledge base. AERB took the initiative in leading by example and formed ASSET teams to carry out the first ASSET reviews in India. These teams, at the instance of AERB, carried out ASSET reviews of three safety-related events, two at nuclear power plants and one at a research reactor. This paper describes the outcome of these ASSET studies and the subsequent implementation of the recommendations. The initiative taken by the regulatory body has led to the formation of ASSET teams by the utilities to carry out ASSET studies on their own. The results of these studies are yet to be assessed by the regulatory body. The ASSET experience reveals that it has further potential in improving safety performance and safety culture and bringing in fresh enthusiasm among safety professionals of Indian nuclear utilities.

  20. Asset management using an extended Markowitz theorem

    Directory of Open Access Journals (Sweden)

    Paria Karimi

    2014-06-01

    Full Text Available The Markowitz theorem is one of the most popular techniques for asset management and has been applied successfully in many settings. In this paper, we present a multi-objective Markowitz model to determine asset allocation under cardinality constraints. The resulting model is an NP-hard problem, and the proposed study uses two metaheuristics, namely the genetic algorithm (GA) and particle swarm optimization (PSO), to find efficient solutions. The proposed study has been applied to data collected from the Tehran Stock Exchange over the period 2009-2011. The study considers four objectives: cash return, 12-month return, 36-month return and Lower Partial Moment (LPM). The results indicate that there was no statistically significant difference between the PSO and GA implementations.
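
    As an illustration of the kind of metaheuristic used here, the sketch below applies a simple genetic algorithm to a cardinality-constrained Markowitz portfolio with a single risk-penalized-return objective; it is not the paper's four-objective model, and the returns, covariance matrix, and GA settings are synthetic assumptions.

        import numpy as np
        rng = np.random.default_rng(0)

        n_assets, cardinality, risk_aversion = 20, 5, 3.0
        mu = rng.uniform(0.02, 0.15, n_assets)                 # expected returns (synthetic)
        A = rng.standard_normal((n_assets, n_assets))
        cov = A @ A.T / n_assets * 0.01                        # synthetic covariance matrix

        def decode(chromosome):
            """Chromosome = asset scores; keep the top-k assets, weight them proportionally."""
            chosen = np.argsort(chromosome)[-cardinality:]
            w = np.zeros(n_assets)
            w[chosen] = chromosome[chosen] / chromosome[chosen].sum()
            return w

        def fitness(chromosome):
            w = decode(chromosome)
            return mu @ w - risk_aversion * w @ cov @ w        # mean-variance utility

        pop = rng.uniform(0.01, 1.0, (80, n_assets))
        for generation in range(200):
            scores = np.array([fitness(c) for c in pop])
            parents = pop[np.argsort(scores)[-40:]]            # truncation selection
            children = []
            while len(children) < len(pop):
                a, b = parents[rng.integers(len(parents), size=2)]
                mask = rng.random(n_assets) < 0.5              # uniform crossover
                child = np.where(mask, a, b)
                child = np.clip(child + rng.normal(0, 0.05, n_assets), 0.01, 1.0)
                children.append(child)
            pop = np.array(children)

        best = max(pop, key=fitness)
        print("selected weights:", decode(best).round(3))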

  1. SIMPLE HEURISTIC ALGORITHM FOR DYNAMIC VM REALLOCATION IN IAAS CLOUDS

    Directory of Open Access Journals (Sweden)

    Nikita A. Balashov

    2018-03-01

    Full Text Available The rapid development of cloud technologies and their high prevalence in both commercial and academic areas have stimulated active research in the domain of optimal cloud resource management. One of the most active research directions is dynamic virtual machine (VM) placement optimization in clouds built on the Infrastructure-as-a-Service model. This kind of research may pursue different goals, with energy-aware optimization being the most common, as it addresses an urgent problem of green cloud computing: reducing energy consumption by data centers. In this paper we present a new heuristic algorithm for dynamic reallocation of VMs based on an approach presented in one of our previous works. In the algorithm we apply a two-rank strategy that classifies VMs and servers into highly and lowly active classes and solves four tasks: VM classification, host classification, forming a VM migration map, and VM migration. By dividing all VMs and servers into two classes, we attempt to reduce the risk of hardware overloads under overcommitment conditions and to lessen the influence of any overloads that occur on the performance of the cloud VMs. The presented algorithm was developed for the workload profile of the JINR cloud (a scientific private cloud) with the goal of maximizing its usage, but it can also be applied in both public and private commercial clouds to organize the simultaneous use of different SLA and QoS levels in the same cloud environment by giving each VM rank its own level of overcommitment.
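
    A toy Python sketch of the two-rank idea follows: VMs and hosts are classified as highly or lowly active by simple load thresholds, and a migration map moves VMs away from overloaded hosts onto underloaded ones. The thresholds, data structures, and selection rule are illustrative assumptions, not the JINR implementation.

        HIGH_VM, LOW_VM = "high", "low"

        def classify_vms(vm_loads, threshold=0.5):
            return {vm: (HIGH_VM if load >= threshold else LOW_VM) for vm, load in vm_loads.items()}

        def host_load(host_vms, vm_loads):
            return sum(vm_loads[vm] for vm in host_vms)

        def build_migration_map(placement, vm_loads, overload=3.0, underload=1.5):
            """placement: host -> list of VMs. Returns {vm: (source_host, target_host)}."""
            ranks = classify_vms(vm_loads)
            overloaded = [h for h, vms in placement.items() if host_load(vms, vm_loads) > overload]
            underloaded = [h for h, vms in placement.items() if host_load(vms, vm_loads) < underload]
            migrations = {}
            for src in overloaded:
                # Prefer to move lowly active VMs first; they disturb the workload least.
                for vm in sorted(placement[src], key=lambda v: (ranks[v] == HIGH_VM, vm_loads[v])):
                    if host_load(placement[src], vm_loads) <= overload or not underloaded:
                        break
                    dst = min(underloaded, key=lambda h: host_load(placement[h], vm_loads))
                    placement[src].remove(vm)
                    placement[dst].append(vm)
                    migrations[vm] = (src, dst)
            return migrations

        placement = {"host1": ["vm1", "vm2", "vm3", "vm4"], "host2": ["vm5"]}
        vm_loads = {"vm1": 0.9, "vm2": 0.8, "vm3": 0.2, "vm4": 1.6, "vm5": 0.3}
        print(build_migration_map(placement, vm_loads))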

  2. Optimisation of Offshore Wind Farm Cable Connection Layout Considering Levelised Production Cost Using Dynamic Minimum Spanning Tree Algorithm

    DEFF Research Database (Denmark)

    Hou, Peng; Hu, Weihao; Chen, Cong

    2016-01-01

    The approach in this paper has been developed to optimize the cable connection layout of large-scale offshore wind farms. The objective is to minimize the Levelised Production Cost (LPC) of an offshore wind farm by optimizing the cable connection configuration. Based on the minimum spanning tree...... (MST) algorithm, an improved algorithm, the Dynamic Minimum Spanning Tree (DMST) algorithm, is proposed. The current-carrying capacity of the cable is considered the main constraint and the cable sectional area is changed dynamically. An irregularly shaped wind farm is chosen as the study case...
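
    The sketch below illustrates the underlying idea with plain Prim's minimum spanning tree over turbine positions, followed by sizing each cable from the number of downstream turbines it carries; it is not the DMST algorithm itself, and the coordinates, cable ratings, and single-substation layout are assumptions.

        import math

        positions = {"sub": (0.0, 0.0), "T1": (1.0, 0.5), "T2": (2.0, 0.4),
                     "T3": (1.2, 1.6), "T4": (2.3, 1.5)}

        def dist(a, b):
            (x1, y1), (x2, y2) = positions[a], positions[b]
            return math.hypot(x1 - x2, y1 - y2)

        # Prim's MST rooted at the substation.
        in_tree, parent = {"sub"}, {}
        while len(in_tree) < len(positions):
            _, u, v = min((dist(u, v), u, v) for u in in_tree for v in positions if v not in in_tree)
            parent[v] = u
            in_tree.add(v)

        # Count turbines downstream of each edge, then pick the smallest adequate cable.
        def downstream_count(node):
            children = [v for v, p in parent.items() if p == node]
            return 1 + sum(downstream_count(c) for c in children)

        cable_types = [(1, "150 mm2"), (2, "240 mm2"), (4, "400 mm2")]   # (max turbines, label)
        for v, u in parent.items():
            n = downstream_count(v)
            label = next(lbl for cap, lbl in cable_types if cap >= n)
            print(f"{u} -> {v}: length {dist(u, v):.2f}, carries {n} turbine(s), cable {label}")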

  3. Analysing the performance of dynamic multi-objective optimisation algorithms

    CSIR Research Space (South Africa)

    Helbig, M

    2013-06-01

    Full Text Available and the goal of the algorithm is to track a set of tradeoff solutions over time. Analysing the performance of a dynamic multi-objective optimisation algorithm (DMOA) is not a trivial task. For each environment (before a change occurs) the DMOA has to find a set...

  4. Return models and dynamic asset allocation strategies

    OpenAIRE

    Shi, Wyanet Wen

    2017-01-01

    This thesis studies the design of optimal investment strategies. A strategy is considered optimal when it minimizes the variance of terminal portfolio wealth for a given level of expected terminal portfolio wealth, or equivalently, maximizes an investor's utility. We study this issue in two particular situations: when asset returns follow a continuous-time path-independent process, and when they follow a discrete-time path-dependent process. Continuous-time path-independent return mode...

  5. Case Based Asset Maintenance for the Electric Equipment

    International Nuclear Information System (INIS)

    Kim, Ji-Hyeon; Jung, Jae-Cheon; Chang, Young-Woo; Chang, Hoon-Seon; Kim, Jae-Cheol; Kim, Hang-Bae; Kim, Kyu-Ho; Hur, Yong; Lee, Dong-Chul

    2006-01-01

    The electric equipment maintenance strategies are changing from PM (Preventive Maintenance) or CM (Corrective Maintenance) to CBM (Condition Based Maintenance). The main benefits of CBM are a reduced possibility of service failures of critical equipment and reduced costs of maintenance work. In CBM, the equipment status needs to be monitored continuously and a decision should be made as to whether a piece of equipment needs to be repaired or replaced. For the maintenance decision making, a CBR (Case-Based Reasoning) system is introduced. The CBR system receives the current equipment status and searches the case-based historical database to determine any possible equipment failure under current conditions. In retrieving the case-based historical data, the suggested DSS (Decision Support System) uses a reasoning engine with an equipment/asset ontology that describes the equipment subsumption relationships.

  6. Synthesizing Dynamic Programming Algorithms from Linear Temporal Logic Formulae

    Science.gov (United States)

    Rosu, Grigore; Havelund, Klaus

    2001-01-01

    The problem of testing a linear temporal logic (LTL) formula on a finite execution trace of events, generated by an executing program, occurs naturally in runtime analysis of software. We present an algorithm which takes an LTL formula and generates an efficient dynamic programming algorithm. The generated algorithm tests whether the LTL formula is satisfied by a finite trace of events given as input. The generated algorithm runs in linear time, its constant depending on the size of the LTL formula. The memory needed is constant, also depending on the size of the formula.
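
    A minimal Python sketch of the dynamic-programming idea follows: it checks an LTL formula over a finite trace by a single backwards pass, rather than generating code as the paper does. The formula encoding and finite-trace semantics used here are simplifying assumptions.

        # Formulas are nested tuples: ('ap', name), ('not', f), ('and', f, g),
        # ('next', f), ('until', f, g); 'eventually'/'always' can be derived from these.

        def subformulas(f):
            """Post-order list of subformulas, so children precede parents."""
            if f[0] == 'ap':
                return [f]
            return sum((subformulas(g) for g in f[1:]), []) + [f]

        def check_ltl(formula, trace):
            subs = subformulas(formula)
            index = {g: i for i, g in enumerate(subs)}
            nxt = [False] * len(subs)            # values just past the end of the trace
            for state in reversed(trace):         # state: set of atomic propositions true there
                now = [False] * len(subs)
                for i, g in enumerate(subs):
                    op = g[0]
                    if op == 'ap':
                        now[i] = g[1] in state
                    elif op == 'not':
                        now[i] = not now[index[g[1]]]
                    elif op == 'and':
                        now[i] = now[index[g[1]]] and now[index[g[2]]]
                    elif op == 'next':
                        now[i] = nxt[index[g[1]]]
                    elif op == 'until':           # f U g: g now, or f now and (f U g) next
                        now[i] = now[index[g[2]]] or (now[index[g[1]]] and nxt[i])
                nxt = now
            return nxt[index[formula]]

        trace = [{'p'}, {'p'}, {'q'}]
        print(check_ltl(('until', ('ap', 'p'), ('ap', 'q')), trace))   # True: p holds until q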

  7. Application of Symplectic Algebraic Dynamics Algorithm to Circular Restricted Three-Body Problem

    International Nuclear Information System (INIS)

    Wei-Tao, Lu; Hua, Zhang; Shun-Jin, Wang

    2008-01-01

    Symplectic algebraic dynamics algorithm (SADA) for ordinary differential equations is applied to solve numerically the circular restricted three-body problem (CR3BP) in dynamical astronomy for both stable motion and chaotic motion. The results are compared with those of the fourth-order Runge–Kutta algorithm and the fourth-order symplectic algorithm, and show that SADA has higher accuracy than the others in long-term calculations of the CR3BP. (general)

  8. Efficiently Inefficient Markets for Assets and Asset Management

    DEFF Research Database (Denmark)

    Garleanu, Nicolae; Pedersen, Lasse Heje

    We consider a model where investors can invest directly or search for an asset manager, information about assets is costly, and managers charge an endogenous fee. The efficiency of asset prices is linked to the efficiency of the asset management market: if investors can find managers more easily......, more money is allocated to active management, fees are lower, and asset prices are more efficient. Informed managers outperform after fees, uninformed managers underperform after fees, and the net performance of the average manager depends on the number of "noise allocators." Small investors should...... be passive, but large and sophisticated investors benefit from searching for informed active managers since their search cost is low relative to capital. Hence, managers with larger and more sophisticated investors are expected to outperform....

  9. Algorithm for predicting the evolution of series of dynamics of complex systems in solving information problems

    Science.gov (United States)

    Kasatkina, T. I.; Dushkin, A. V.; Pavlov, V. A.; Shatovkin, R. R.

    2018-03-01

    In the development of information systems and software for predicting dynamic series, neural network methods have recently been applied. They are more flexible than existing analogues and are capable of capturing the nonlinearities of the series. In this paper, we propose a modified algorithm for predicting dynamic series, which includes a method for training neural networks and an approach to describing and presenting input data, based on prediction with a multilayer perceptron. To construct the neural network, the values of the series at its extremum points and the corresponding time values, formed with the sliding-window method, are used as input data. The proposed algorithm can act as an independent approach to predicting dynamic series, or serve as one part of a forecasting system. The efficiency of short-term one-step and long-term multi-step forecasts of the evolution of dynamic series by the classical multilayer perceptron method and by the modified algorithm is compared using synthetic and real data. The result of this modification is the minimization of the iterative error that accumulates when previously predicted values are fed back as inputs to the neural network, as well as an increase in the accuracy of the network's iterative predictions.
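
    For illustration, the sketch below trains a multilayer perceptron on sliding windows of a synthetic series and then produces an iterative multi-step forecast by feeding predictions back as inputs; it does not reproduce the paper's extremum-point input encoding, and it assumes scikit-learn is available.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        def make_windows(series, window):
            X = np.array([series[i:i + window] for i in range(len(series) - window)])
            y = np.array(series[window:])
            return X, y

        series = np.sin(np.linspace(0, 20, 400)) + 0.05 * np.random.randn(400)
        window = 10
        X, y = make_windows(series, window)
        model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0).fit(X, y)

        # Iterative multi-step forecast: each prediction becomes part of the next input window.
        history = list(series[-window:])
        multi_step = []
        for _ in range(20):
            pred = model.predict(np.array(history[-window:]).reshape(1, -1))[0]
            multi_step.append(pred)
            history.append(pred)
        print(multi_step[:5])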

  10. A dynamic material discrimination algorithm for dual MV energy X-ray digital radiography

    International Nuclear Information System (INIS)

    Li, Liang; Li, Ruizhe; Zhang, Siyuan; Zhao, Tiao; Chen, Zhiqiang

    2016-01-01

    Dual-energy X-ray radiography has become a well-established technique in medical, industrial, and security applications, because of its material or tissue discrimination capability. The main difficulty of this technique is dealing with the materials overlapping problem. When there are two or more materials along the X-ray beam path, its material discrimination performance will be affected. In order to solve this problem, a new dynamic material discrimination algorithm is proposed for dual-energy X-ray digital radiography, which can also be extended to multi-energy X-ray situations. The algorithm has three steps: α-curve-based pre-classification, decomposition of overlapped materials, and the final material recognition. The key of the algorithm is to establish a dual-energy radiograph database of both pure basis materials and pair combinations of them. After the pre-classification results, original dual-energy projections of overlapped materials can be dynamically decomposed into two sets of dual-energy radiographs of each pure material by the algorithm. Thus, more accurate discrimination results can be provided even with the existence of the overlapping problem. Both numerical and experimental results that prove the validity and effectiveness of the algorithm are presented. - Highlights: • A material discrimination algorithm for dual MV energy X-ray digital radiography is proposed. • To solve the materials overlapping problem of the current dual energy algorithm. • The experimental results with the 4/7 MV container inspection system are shown.

  11. Dynamic Consensus Algorithm based Distributed Voltage Harmonic Compensation in Islanded Microgrids

    DEFF Research Database (Denmark)

    Meng, Lexuan; Tang, Fen; Firoozabadi, Mehdi Savaghebi

    2015-01-01

    generators can be employed as compensators to enhance the power quality on the consumer side. However, conventional centralized control faces obstacles because of the distributed fashion of generation and consumption. Accordingly, this paper proposes a consensus-algorithm-based distributed hierarchical...

  12. Using genetic algorithms for calibrating simplified models of nuclear reactor dynamics

    International Nuclear Information System (INIS)

    Marseguerra, Marzio; Zio, Enrico; Canetta, Raffaele

    2004-01-01

    In this paper the use of genetic algorithms for the estimation of the effective parameters of a model of nuclear reactor dynamics is investigated. The calibration of the effective parameters is achieved by best fitting the model responses of the quantities of interest (e.g., reactor power, average fuel and coolant temperatures) to the actual evolution profiles, here simulated by the Quandry-based reactor kinetics (Quark) code available from the Nuclear Energy Agency. Alternative schemes of single- and multi-objective optimization are investigated. The efficiency of convergence of the algorithm with respect to the different effective parameters to be calibrated is studied with reference to the physical relationships involved.
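
    A minimal sketch of genetic-algorithm calibration in this spirit is shown below: a toy response model's parameters are fitted to a synthetic "observed" profile by minimizing the squared error. The model, data, and GA settings are illustrative assumptions and are unrelated to the Quark code.

        import numpy as np
        rng = np.random.default_rng(0)

        t = np.linspace(0.0, 10.0, 200)
        true_params = np.array([1.5, 0.3])                     # e.g. gain and decay rate

        def model_response(params, t):
            gain, rate = params
            return gain * np.exp(-rate * t)

        observed = model_response(true_params, t) + 0.01 * rng.standard_normal(t.size)

        def fitness(params):
            return -np.mean((model_response(params, t) - observed) ** 2)   # maximize

        bounds = np.array([[0.1, 5.0], [0.01, 1.0]])
        pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(60, 2))
        for generation in range(100):
            scores = np.array([fitness(p) for p in pop])
            parents = pop[np.argsort(scores)[-30:]]            # truncation selection
            children = []
            while len(children) < len(pop):
                a, b = parents[rng.integers(len(parents), size=2)]
                w = rng.random()
                child = w * a + (1 - w) * b                    # blend crossover
                child += rng.normal(0, 0.05, size=2)           # Gaussian mutation
                children.append(np.clip(child, bounds[:, 0], bounds[:, 1]))
            pop = np.array(children)

        best = pop[np.argmax([fitness(p) for p in pop])]
        print("calibrated parameters:", best)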

  13. Using genetic algorithms for calibrating simplified models of nuclear reactor dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Marseguerra, Marzio E-mail: marzio.marseguerra@polimi.it; Zio, Enrico E-mail: enrico.zio@polimi.it; Canetta, Raffaele

    2004-07-01

    In this paper the use of genetic algorithms for the estimation of the effective parameters of a model of nuclear reactor dynamics is investigated. The calibration of the effective parameters is achieved by best fitting the model responses of the quantities of interest (e.g., reactor power, average fuel and coolant temperatures) to the actual evolution profiles, here simulated by the Quandry-based reactor kinetics (Quark) code available from the Nuclear Energy Agency. Alternative schemes of single- and multi-objective optimization are investigated. The efficiency of convergence of the algorithm with respect to the different effective parameters to be calibrated is studied with reference to the physical relationships involved.

  14. A variant of the dynamic programming algorithm for unit commitment of combined heat and power systems

    DEFF Research Database (Denmark)

    Rong, Aiying; Hakonen, Henri; Lahdelma, Risto

    2008-01-01

    introduce in this paper the DP-RSC1 algorithm, which is a variant of the dynamic programming (DP) algorithm based on linear relaxation of the ON/OFF states of the units and sequential commitment of units one by one. The time complexity of DP-RSC1 is proportional to the number of generating units...

  15. Improved k-t PCA Algorithm Using Artificial Sparsity in Dynamic MRI.

    Science.gov (United States)

    Wang, Yiran; Chen, Zhifeng; Wang, Jing; Yuan, Lixia; Xia, Ling; Liu, Feng

    2017-01-01

    The k-t principal component analysis (k-t PCA) is an effective approach for high spatiotemporal resolution dynamic magnetic resonance (MR) imaging. However, it suffers from larger residual aliasing artifacts and noise amplification when the reduction factor goes higher. To further enhance the performance of this technique, we propose a new method called sparse k-t PCA that combines the k-t PCA algorithm with an artificial sparsity constraint. It is a self-calibrated procedure that is based on the traditional k-t PCA method by further eliminating the reconstruction error derived from complex subtraction of the sampled k-t space from the original reconstructed k-t space. The proposed method is tested through both simulations and in vivo datasets with different reduction factors. Compared to the standard k-t PCA algorithm, the sparse k-t PCA can improve the normalized root-mean-square error performance and the accuracy of temporal resolution. It is thus useful for rapid dynamic MR imaging.

  16. Investments Portfolio Optimal Planning for industrial assets management: Method and Tool

    International Nuclear Information System (INIS)

    Lonchampt, Jerome; Fessart, Karine

    2012-01-01

    The purpose of this paper is to describe the method and tool dedicated to optimizing investment planning for industrial assets. These investments may be preventive maintenance tasks, asset enhancements, or logistic investments such as spare parts purchases. The three methodological points to investigate in such an issue are: (1) the measure of the profitability of a portfolio of investments, (2) the selection and planning of an optimal set of investments, and (3) the measure of the risk of a portfolio of investments. The measure of the profitability of a set of investments in the IPOP (registered) tool is synthesised in the Net Present Value (NPV) indicator. The NPV is the sum of the differences of discounted cash flows (direct costs, forced outages...) between the situations with and without a given investment. These cash flows are calculated through a pseudo-Markov reliability model representing independently the components of the industrial asset and the spare parts inventories. The component model has been widely discussed over the years, but the spare part model is a new one based on some approximations that will be discussed. This model, referred to as the NPV function, takes an investment portfolio as input and gives its NPV. The second issue is to optimize the NPV. If all investments were independent, this optimization would be an easy calculation; unfortunately, there are two sources of dependency. The first is introduced by the spare part model: although components are independent in their reliability models, the fact that several components use the same inventory induces a dependency. The second dependency comes from economic, technical or logistic constraints, such as a global maintenance budget limit or a precedence constraint between two investments, making the aggregation of individual optima not necessarily feasible. The algorithm used to solve such a difficult optimization problem is a genetic algorithm. After a description of the features of the software a
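
    A minimal sketch of the NPV indicator described above follows: the NPV of an investment is the discounted sum of the differences between cash flows with and without that investment, and a portfolio is scored by summing member NPVs. The cash-flow series are illustrative placeholders, not outputs of a reliability model, and the sketch ignores the dependencies (shared spares, budget constraints) discussed above.

        def npv(cash_flows_with, cash_flows_without, rate):
            """Discounted sum of yearly cash-flow differences (year 0 is undiscounted)."""
            return sum((w - wo) / (1.0 + rate) ** year
                       for year, (w, wo) in enumerate(zip(cash_flows_with, cash_flows_without)))

        def portfolio_npv(investments, rate):
            return sum(npv(inv['with'], inv['without'], rate) for inv in investments)

        investments = [
            {'name': 'spare transformer', 'with': [-2.0, 0.8, 0.8, 0.8], 'without': [0.0, 0.0, 0.0, 0.0]},
            {'name': 'preventive overhaul', 'with': [-1.0, 0.2, 0.6, 0.6], 'without': [0.0, -0.3, -0.3, -0.3]},
        ]
        print(round(portfolio_npv(investments, rate=0.08), 3))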

  17. PM Synchronous Motor Dynamic Modeling with Genetic Algorithm ...

    African Journals Online (AJOL)

    Adel

    This paper proposes dynamic modeling simulation for ac Surface Permanent Magnet Synchronous ... Simulations are implemented using MATLAB with its genetic algorithm toolbox. .... selection, the process that drives biological evolution.

  18. Developing Subdomain Allocation Algorithms Based on Spatial and Communicational Constraints to Accelerate Dust Storm Simulation

    Science.gov (United States)

    Gui, Zhipeng; Yu, Manzhu; Yang, Chaowei; Jiang, Yunfeng; Chen, Songqing; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Hassan, Mohammed Anowarul; Jin, Baoxuan

    2016-01-01

    Dust storms have serious disastrous impacts on the environment, human health, and assets. The development and application of dust storm models have contributed significantly to better understanding and predicting the distribution, intensity and structure of dust storms. However, dust storm simulation is a data- and computing-intensive process. To improve the computing performance, high performance computing has been widely adopted by dividing the entire study area into multiple subdomains and allocating each subdomain to different computing nodes in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, allocation is a key factor that may impact the efficiency of the parallel process. An allocation algorithm is expected to consider the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire simulation. This research introduces three algorithms to optimize the allocation by considering the spatial and communicational constraints: 1) an Integer Linear Programming (ILP) based algorithm from a combinatorial optimization perspective; 2) a combined K-Means and Kernighan-Lin heuristic algorithm (K&K) integrating geometric and coordinate-free methods by merging local and global partitioning; 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared based on different factors. Further, we adopt the K&K algorithm as the demonstration algorithm for the experiment of dust model simulation with the non-hydrostatic mesoscale model (NMM-dust) and compare the performance with the MPI default sequential allocation. The results demonstrate that the K&K method significantly improves the simulation performance with better subdomain allocation. This method can also be adopted for other relevant atmospheric and numerical

  19. Combining household income and asset data to identify livelihood strategies and their dynamics

    DEFF Research Database (Denmark)

    Walelign, Solomon Zena; Pouliot, Mariéve; Larsen, Helle Overgaard

    2017-01-01

    Current approaches to identifying and describing rural livelihood strategies, and household movements between strategies over time, in developing countries are imprecise. Here we: (i) present a new statistical quantitative approach combining income and asset data to identify household activity...... of livelihood strategies and household movements between strategies over time than using only income or asset data. Most households changed livelihood strategy at least once over the two three-year periods. A common pathway out of poverty included an intermediate step during which households accumulate assets...

  20. Secondary Coordinated Control of Islanded Microgrids Based on Consensus Algorithms

    DEFF Research Database (Denmark)

    Wu, Dan; Dragicevic, Tomislav; Vasquez, Juan Carlos

    2014-01-01

    This paper proposes a decentralized secondary control for islanded microgrids based on consensus algorithms. In a microgrid, the secondary control is implemented in order to eliminate the frequency changes caused by the primary control when coordinating renewable energy sources and energy storage...... systems. Nevertheless, the conventional decentralized secondary control, although it does not need to be implemented in a microgrid central controller (MGCC), has the limitation that all decentralized controllers must be mutually synchronized. In clear-cut contrast, the proposed secondary control...... requires only a more simplified communication protocol and a sparse communication network. Moreover, the proposed approach based on dynamic consensus algorithms is able to achieve the coordinated secondary performance even when all units are initially out of synchronism. The control algorithm implemented...
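
    For illustration, the sketch below runs a discrete-time average-consensus iteration over a sparse communication graph of the kind such distributed secondary controllers rely on; the graph, gain, and measurement values are assumptions, not the proposed controller.

        import numpy as np

        # Undirected ring of 4 units; each unit only talks to its neighbours.
        neighbours = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
        epsilon = 0.2                      # consensus gain (small enough to keep the iteration stable)

        local_measurements = np.array([49.8, 50.1, 50.3, 49.9])   # e.g. locally measured frequency (Hz)
        x = local_measurements.copy()       # each unit's running estimate of the global average

        for step in range(50):
            x_new = x.copy()
            for i, neigh in neighbours.items():
                x_new[i] += epsilon * sum(x[j] - x[i] for j in neigh)
            x = x_new

        print(x)                            # all entries converge towards the average (about 50.025)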

  1. Semi-flocking algorithm for motion control of mobile sensors in large-scale surveillance systems.

    Science.gov (United States)

    Semnani, Samaneh Hosseini; Basir, Otman A

    2015-01-01

    The ability of sensors to self-organize is an important asset in surveillance sensor networks. Self-organization implies self-control at the sensor level and coordination at the network level. Biologically inspired approaches have recently gained significant attention as a tool to address the issue of sensor control and coordination in sensor networks. These approaches are exemplified by two well-known algorithms, namely, the Flocking algorithm and the Anti-Flocking algorithm. Generally speaking, although these two biologically inspired algorithms have demonstrated promising performance, they expose deficiencies when it comes to their ability to maintain simultaneous robust dynamic area coverage and target coverage. These two coverage performance objectives are inherently conflicting. This paper presents Semi-Flocking, a biologically inspired algorithm that benefits from key characteristics of both the Flocking and Anti-Flocking algorithms. The Semi-Flocking algorithm approaches the problem by assigning a small flock of sensors to each target, while at the same time leaving some sensors free to explore the environment. This allows the algorithm to strike a balance between robust area coverage and target coverage. Such balance is facilitated via flock-sensor coordination. The performance of the proposed Semi-Flocking algorithm is examined and compared with the other two flocking-based algorithms, once using randomly moving targets and once using a standard walking pedestrian dataset. The results of both experiments show that the Semi-Flocking algorithm outperforms both the Flocking algorithm and the Anti-Flocking algorithm with respect to the area coverage and target coverage objectives. Furthermore, the results show that the proposed algorithm demonstrates shorter target detection time and fewer undetected targets than the other two flocking-based algorithms.

  2. Parallel algorithms and architecture for computation of manipulator forward dynamics

    Science.gov (United States)

    Fijany, Amir; Bejczy, Antal K.

    1989-01-01

    Parallel computation of manipulator forward dynamics is investigated. Considering three classes of algorithms for the solution of the problem, that is, the O(n), O(n^2), and O(n^3) algorithms, parallelism in the problem is analyzed. It is shown that the problem belongs to the class NC and that the time and processor bounds are O(log^2 n) and O(n^4), respectively. However, the fastest stable parallel algorithms achieve a computation time of O(n) and can be derived by parallelization of the O(n^3) serial algorithms. Parallel computation of the O(n^3) algorithms requires the development of parallel algorithms for a set of fundamentally different problems, that is, the Newton-Euler formulation, the computation of the inertia matrix, decomposition of the symmetric, positive definite matrix, and the solution of triangular systems. Parallel algorithms for this set of problems are developed which can be efficiently implemented on a unique architecture, a triangular array of n(n+2)/2 processors with a simple nearest-neighbor interconnection. This architecture is particularly suitable for VLSI and WSI implementations. The developed parallel algorithm, compared to the best serial O(n) algorithm, achieves an asymptotic speedup of more than two orders of magnitude in the computation of the forward dynamics.

  3. Community Asset Mapping. Trends and Issues Alert.

    Science.gov (United States)

    Kerka, Sandra

    Asset mapping involves documenting tangible and intangible resources of a community viewed as a place with assets to be preserved and enhanced, not deficits to be remedied. Kretzmann and McKnight (1993) are credited with developing the concept of asset-based community development (ABCD) that draws on appreciative inquiry; recognition of social…

  4. An independent dose calculation algorithm for MLC-based stereotactic radiotherapy

    International Nuclear Information System (INIS)

    Lorenz, Friedlieb; Killoran, Joseph H.; Wenz, Frederik; Zygmanski, Piotr

    2007-01-01

    We have developed an algorithm to calculate dose in a homogeneous phantom for radiotherapy fields defined by a multi-leaf collimator (MLC) for both static and dynamic MLC delivery. The algorithm was developed to supplement the dose algorithms of the commercial treatment planning systems (TPS). The motivation for this work is to provide an independent dose calculation, primarily for quality assurance (QA) and secondarily for the development of static-MLC-field-based inverse planning. The dose calculation utilizes a pencil-beam kernel. However, an explicit analytical integration results in a closed form for rectangular beamlets defined by single leaf pairs. This approach reduces spatial integration to summation and leads to a simple method of determining model parameters. The total dose for any static or dynamic MLC field is obtained by summing over all individual rectangles from each segment, which allows fast calculation of two-dimensional dose distributions at any depth in the phantom. Standard beam data used in the commissioning of the TPS were used as input data for the algorithm. The calculated results were compared with the TPS and measurements for static and dynamic MLC. The agreement was very good (<2.5%) for all tested cases except for very small static MLC sizes of 0.6 cm x 0.6 cm (<6%) and some ion chamber measurements in a high-gradient region (<4.4%). This finding enables us to use the algorithm for routine QA as well as for research developments.

  5. Final Report: Sampling-Based Algorithms for Estimating Structure in Big Data.

    Energy Technology Data Exchange (ETDEWEB)

    Matulef, Kevin Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-02-01

    The purpose of this project was to develop sampling-based algorithms to discover hidden structure in massive data sets. Inferring structure in large data sets is an increasingly common task in many critical national security applications. These data sets come from myriad sources, such as network traffic, sensor data, and data generated by large-scale simulations. They are often so large that traditional data mining techniques are time consuming or even infeasible. To address this problem, we focus on a class of algorithms that do not compute an exact answer, but instead use sampling to compute an approximate answer using fewer resources. The particular class of algorithms that we focus on are streaming algorithms, so called because they are designed to handle high-throughput streams of data. Streaming algorithms have only a small amount of working storage - much less than the size of the full data stream - so they must necessarily use sampling to approximate the correct answer. We present two results: * A streaming algorithm called HyperHeadTail, which estimates the degree distribution of a graph (i.e., the distribution of the number of connections for each node in a network). The degree distribution is a fundamental graph property, but prior work on estimating the degree distribution in a streaming setting was impractical for many real-world applications. We improve upon prior work by developing an algorithm that can handle streams with repeated edges and graph structures that evolve over time. * An algorithm for the task of maintaining a weighted subsample of items in a stream, when the items must be sampled according to their weight and the weights are dynamically changing. To our knowledge, this is the first such algorithm designed for dynamically evolving weights. We expect it may be useful as a building block for other streaming algorithms on dynamic data sets.
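
    As a related illustration, the sketch below implements standard weighted reservoir sampling over a stream (the Efraimidis-Spirakis A-Res scheme) for fixed weights; the report's algorithm additionally handles dynamically changing weights, which this simple sketch does not.

        import heapq
        import random

        def weighted_reservoir(stream, k):
            """stream yields (item, weight) pairs; returns k items sampled with probability proportional to weight."""
            heap = []                                   # min-heap of (key, item)
            for item, weight in stream:
                key = random.random() ** (1.0 / weight)
                if len(heap) < k:
                    heapq.heappush(heap, (key, item))
                elif key > heap[0][0]:
                    heapq.heapreplace(heap, (key, item))
            return [item for _, item in heap]

        stream = ((f"edge{i}", 1.0 + (i % 5)) for i in range(10_000))
        print(weighted_reservoir(stream, k=5))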

  6. Towards the Automatic Detection of Efficient Computing Assets in a Heterogeneous Cloud Environment

    OpenAIRE

    Iglesias, Jesus Omana; Stokes, Nicola; Ventresque, Anthony; Murphy, Liam, B.E.; Thorburn, James

    2013-01-01

    peer-reviewed In a heterogeneous cloud environment, the manual grading of computing assets is the first step in the process of configuring IT infrastructures to ensure optimal utilization of resources. Grading the efficiency of computing assets is, however, a difficult, subjective and time-consuming manual task. Thus, an automatic efficiency grading algorithm is highly desirable. In this paper, we compare the effectiveness of the different criteria used in the manual gr...

  7. Siberia snow depth climatology derived from SSM/I data using a combined dynamic and static algorithm

    Science.gov (United States)

    Grippa, M.; Mognard, N.; Le, Toan T.; Josberger, E.G.

    2004-01-01

    One of the major challenges in determining snow depth (SD) from passive microwave measurements is to take into account the spatiotemporal variations of the snow grain size. Static algorithms based on a constant snow grain size cannot provide accurate estimates of snowpack thickness, particularly over large regions where the snowpack is subject to large spatial temperature variations. A recent dynamic algorithm that accounts for the dependence of the microwave scattering on the snow grain size has been developed to estimate snow depth from the Special Sensor Microwave/Imager (SSM/I) over the Northern Great Plains (NGP) in the US. In this paper, we develop a combined dynamic and static algorithm to estimate snow depth from 13 years of SSM/I observations over Central Siberia. This region is characterised by extremely cold surface air temperatures and by the presence of permafrost that significantly affects the ground temperature. The dynamic algorithm is implemented to take into account these effects and yields accurate snow depths early in the winter, when thin snowpacks combine with cold air temperatures to generate rapid crystal growth. However, it is not applicable later in the winter when the grain size growth slows. Combining the dynamic algorithm with a static algorithm with a temporally constant but spatially varying coefficient, we obtain reasonable snow depth estimates throughout the entire snow season. Validation is carried out by comparing the satellite snow depth monthly averages to monthly climatological data. We show that the location of the snow depth maxima and minima is improved when applying the combined algorithm, since its dynamic portion explicitly incorporates the thermal gradient through the snowpack. The results obtained are presented and evaluated for five different vegetation zones of Central Siberia. Comparison with in situ measurements is also shown and discussed. © 2004 Elsevier Inc. All rights reserved.

  8. Asset management: the big picture.

    Science.gov (United States)

    Deinstadt, Deborah C

    2005-10-01

    To develop a comprehensive asset management plan, you need, first of all, to understand the asset management continuum. A key preliminary step is to thoroughly assess the existing equipment base. A critical objective is to ensure that there are open lines of communication among the teams charged with managing the plan's various phases.

  9. An algorithm for the solution of dynamic linear programs

    Science.gov (United States)

    Psiaki, Mark L.

    1989-01-01

    The algorithm's objective is to efficiently solve Dynamic Linear Programs (DLP) by taking advantage of their special staircase structure. This algorithm constitutes a stepping stone to an improved algorithm for solving Dynamic Quadratic Programs, which, in turn, would make the nonlinear programming method of Successive Quadratic Programs more practical for solving trajectory optimization problems. The ultimate goal is to bring trajectory optimization solution speeds into the realm of real-time control. The algorithm exploits the staircase nature of the large constraint matrix of the equality-constrained DLPs encountered when solving inequality-constrained DLPs by an active set approach. A numerically stable, staircase QL factorization of the staircase constraint matrix is carried out starting from its last rows and columns. The resulting recursion is like the time-varying Riccati equation from multi-stage LQR theory. The resulting factorization increases the efficiency of all of the typical LP solution operations over that of a dense matrix LP code. At the same time numerical stability is ensured. The algorithm also takes advantage of dynamic programming ideas about the cost-to-go by relaxing active pseudo constraints in a backwards sweeping process. This further decreases the cost per update of the LP rank-1 updating procedure, although it may result in more changes of the active set than if pseudo constraints were relaxed in a non-stagewise fashion. The usual stability of closed-loop Linear/Quadratic optimally-controlled systems, if it carries over to strictly linear cost functions, implies that the saving due to reduced factor update effort may outweigh the cost of an increased number of updates. An aerospace example is presented in which a ground-to-ground rocket's distance is maximized. This example demonstrates the applicability of this class of algorithms to aerospace guidance. It also sheds light on the efficacy of the proposed pseudo constraint relaxation

  10. System and Method for Monitoring Distributed Asset Data

    Science.gov (United States)

    Gorinevsky, Dimitry (Inventor)

    2015-01-01

    A computer-based monitoring system and monitoring method implemented in computer software for detecting, estimating, and reporting the condition states, their changes, and anomalies for many assets. The assets are of the same type, are operated over a period of time, and are outfitted with data collection systems. The proposed monitoring method accounts for the variability of working conditions for each asset by using a regression model that characterizes asset performance. The assets are of the same type but not identical. The proposed monitoring method accounts for asset-to-asset variability; it also accounts for drifts and trends in the asset condition and data. The proposed monitoring system can perform distributed processing of massive amounts of historical data without discarding any useful information, in cases where moving all the asset data into one central computing system might be infeasible. The overall processing includes distributed preprocessing of data records from each asset to produce compressed data.

  11. Constraint treatment techniques and parallel algorithms for multibody dynamic analysis. Ph.D. Thesis

    Science.gov (United States)

    Chiou, Jin-Chern

    1990-01-01

    Computational procedures for kinematic and dynamic analysis of three-dimensional multibody dynamic (MBD) systems are developed from the differential-algebraic equations (DAEs) viewpoint. Constraint violations during the time integration process are minimized, and penalty constraint stabilization techniques and partitioning schemes are developed. The governing equations of motion are treated with a two-stage staggered explicit-implicit numerical algorithm that takes advantage of a partitioned solution procedure. A robust and parallelizable integration algorithm is developed. This algorithm uses a two-stage staggered central difference algorithm to integrate the translational coordinates and the angular velocities. The angular orientations of bodies in MBD systems are then obtained by using an implicit algorithm via the kinematic relationship between Euler parameters and angular velocities. It is shown that the combination of the present solution procedures yields a computationally more accurate solution. To speed up the computational procedures, parallel implementation of the present constraint treatment techniques and of the two-stage staggered explicit-implicit numerical algorithm was efficiently carried out. The DAEs and the constraint treatment techniques were transformed into arrowhead matrices, for which a Schur complement form was derived. By fully exploiting sparse matrix structural analysis techniques, a parallel preconditioned conjugate gradient numerical algorithm is used to solve the system equations written in Schur complement form. A software testbed was designed and implemented on both sequential and parallel computers. This testbed was used to demonstrate the robustness and efficiency of the constraint treatment techniques, the accuracy of the two-stage staggered explicit-implicit numerical algorithm, and the speedup of the Schur-complement-based parallel preconditioned conjugate gradient algorithm on a parallel computer.

  12. Discrete-Time Nonzero-Sum Games for Multiplayer Using Policy-Iteration-Based Adaptive Dynamic Programming Algorithms.

    Science.gov (United States)

    Zhang, Huaguang; Jiang, He; Luo, Chaomin; Xiao, Geyang

    2017-10-01

    In this paper, we investigate the nonzero-sum games for a class of discrete-time (DT) nonlinear systems by using a novel policy iteration (PI) adaptive dynamic programming (ADP) method. The main idea of our proposed PI scheme is to utilize the iterative ADP algorithm to obtain the iterative control policies, which not only ensure the system to achieve stability but also minimize the performance index function for each player. This paper integrates game theory, optimal control theory, and reinforcement learning technique to formulate and handle the DT nonzero-sum games for multiplayer. First, we design three actor-critic algorithms, an offline one and two online ones, for the PI scheme. Subsequently, neural networks are employed to implement these algorithms and the corresponding stability analysis is also provided via the Lyapunov theory. Finally, a numerical simulation example is presented to demonstrate the effectiveness of our proposed approach.

  13. The estimation of time-varying risks in asset pricing modelling using B-Spline method

    Science.gov (United States)

    Nurjannah; Solimun; Rinaldo, Adji

    2017-12-01

    Asset pricing modelling has been extensively studied in the past few decades to explore the risk-return relationship. The asset pricing literature typically assumed a static risk-return relationship. However, several studies have found anomalies in asset pricing modelling that capture the presence of risk instability. Dynamic models have been proposed to offer a better description. The main problem highlighted in the dynamic model literature is that the set of conditioning information is unobservable and therefore some assumptions have to be made. Hence, the estimation requires additional assumptions about the dynamics of risk. To overcome this problem, nonparametric estimators can be used as an alternative for estimating risk. The flexibility of the nonparametric setting avoids the problem of misspecification derived from selecting a functional form. This paper investigates the estimation of time-varying asset pricing models using the B-spline method, one of the nonparametric approaches. The advantages of the spline method are its computational speed and simplicity, as well as the clarity of controlling curvature directly. Three popular asset pricing models are investigated, namely the CAPM (Capital Asset Pricing Model), the Fama-French three-factor model and the Carhart four-factor model. The results suggest that the estimated risks are time-varying and not stable over time, which confirms the risk instability anomaly. The result is more pronounced in Carhart's four-factor model.
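
    A minimal sketch of the B-spline idea for a time-varying CAPM beta follows: beta(t) is expanded in a B-spline basis and the coefficients are estimated by least squares on simulated returns. The data, knot placement, and spline degree are illustrative assumptions, not the paper's setup.

        import numpy as np
        from scipy.interpolate import BSpline

        def bspline_basis(knots, degree, t):
            """Evaluate all B-spline basis functions of the given knot vector at points t."""
            n_basis = len(knots) - degree - 1
            basis = np.empty((t.size, n_basis))
            for j in range(n_basis):
                coeffs = np.zeros(n_basis)
                coeffs[j] = 1.0
                basis[:, j] = BSpline(knots, coeffs, degree)(t)
            return basis

        rng = np.random.default_rng(1)
        n = 500
        t = np.linspace(0.0, 1.0, n)
        market = 0.01 * rng.standard_normal(n)
        true_beta = 0.8 + 0.6 * np.sin(2 * np.pi * t)                 # slowly varying risk
        asset = 0.001 + true_beta * market + 0.005 * rng.standard_normal(n)

        degree = 3
        interior = np.linspace(0.0, 1.0, 8)
        knots = np.concatenate(([0.0] * degree, interior, [1.0] * degree))   # clamped knot vector
        B = bspline_basis(knots, degree, t)

        # Regression: asset return on an intercept and the (basis * market return) columns.
        X = np.column_stack([np.ones(n), B * market[:, None]])
        coef, *_ = np.linalg.lstsq(X, asset, rcond=None)
        beta_hat = B @ coef[1:]                                       # estimated beta(t)
        print("max abs deviation from true beta(t):", np.max(np.abs(beta_hat - true_beta)).round(3))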

  14. THE PROBLEMS OF FIXED ASSETS CLASSIFICATION FOR ACCOUNTING

    Directory of Open Access Journals (Sweden)

    Sophiia Kafka

    2016-06-01

    Full Text Available This article provides a critical analysis of research on the accounting of fixed assets; the basic issues of fixed asset accounting developed by Ukrainian scientists during 1999-2016 have been determined. It is established that the problems of non-current asset taxation and classification are the most noteworthy. In the dissertations, the issues of fixed asset classification are treated in an exclusively branch-specific manner, so improving the classification is important. The purpose of the article is to develop a science-based classification of fixed assets for accounting purposes, since their composition is quite diverse. The classification of fixed assets for accounting purposes has been summarized and developed in Figure 1 according to the results of the research. The analysis of existing approaches to the classification of fixed assets has made it possible to specify its basic types and to justify the classification criteria for the main objects of fixed assets. Key words: non-current assets, fixed assets, accounting, valuation, classification of fixed assets. JEL: G, M41

  15. A stochastic-programming approach to integrated asset and liability ...

    African Journals Online (AJOL)

    This increase in complexity has provided an impetus for the investigation into integrated asset- and liability-management frameworks that could realistically address dynamic portfolio allocation in a risk-controlled way. In this paper the authors propose a multi-stage dynamic stochastic-programming model for the integrated ...

  16. Rigorous lower bound on the dynamic critical exponent of some multilevel Swendsen-Wang algorithms

    International Nuclear Information System (INIS)

    Li, X.; Sokal, A.D.

    1991-01-01

    We prove the rigorous lower bound z_exp ≥ α/ν for the dynamic critical exponent of a broad class of multilevel (or ''multigrid'') variants of the Swendsen-Wang algorithm. This proves that such algorithms do suffer from critical slowing down. We conjecture that such algorithms in fact lie in the same dynamic universality class as the standard Swendsen-Wang algorithm.

  17. Quadtree of TIN: a new algorithm of dynamic LOD

    Science.gov (United States)

    Zhang, Junfeng; Fei, Lifan; Chen, Zhen

    2009-10-01

    Currently, real-time visualization of large-scale digital elevation models mainly employs the regular GRID structure based on quadtrees and triangle simplification methods based on the triangulated irregular network (TIN). Compared with GRID, TIN is a refined means of representing the terrain surface in computer science. However, the data structure of the TIN model is complex, and it is difficult to realize view-dependent level-of-detail (LOD) representation quickly. GRID is a simple method to realize terrain LOD, but it yields a higher triangle count. A new algorithm, which takes full advantage of the merits of the two methods, is presented in this paper. This algorithm combines TIN with a quadtree structure to realize view-dependent LOD control over irregular sampling point sets, preserving detail according to the viewpoint distance and the geometric error of the terrain. Experiments indicate that this approach can generate an efficient quadtree triangulation hierarchy over any irregular sampling point set and achieve dynamic, visual multi-resolution rendering of large-scale terrain in real time.

  18. Regionalisation of asset values for risk analyses

    Directory of Open Access Journals (Sweden)

    A. H. Thieken

    2006-01-01

    Full Text Available In risk analysis there is a spatial mismatch of hazard data, which are commonly modelled on an explicit raster level, and exposure data, which are often only available for aggregated units, e.g. communities. Dasymetric mapping techniques that use ancillary information to disaggregate data within a spatial unit help to bridge this gap. This paper presents dasymetric maps showing the population density and a unit value of residential assets for the whole of Germany. A dasymetric mapping approach, which uses land cover data (CORINE Land Cover) as the ancillary variable, was adapted and applied to regionalize aggregated census data that are provided for all communities in Germany. The results were validated by two approaches. First, it was ascertained whether population data disaggregated at the community level can be used to estimate population in postcodes. Secondly, disaggregated population and asset data were used for a loss evaluation of two flood events that occurred in 1999 and 2002, respectively. It must be concluded that the algorithm tends to underestimate the population in urban areas and to overestimate population in other land cover classes. Nevertheless, the flood loss evaluations demonstrate that the approach is capable of providing realistic estimates of the number of exposed people and assets. Thus, the maps are sufficient for applications in large-scale risk assessments such as the estimation of population and assets exposed to natural and man-made hazards.
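
    A toy sketch of the dasymetric disaggregation step follows: a community's aggregated total is spread over its raster cells in proportion to weights assigned to land-cover classes. The class weights and cell data are illustrative assumptions, not the CORINE-based weights used in the paper.

        import numpy as np

        # Per-cell land-cover class for one community (0 = urban, 1 = agriculture, 2 = forest).
        land_cover = np.array([[0, 0, 1],
                               [0, 1, 2],
                               [1, 2, 2]])
        class_weight = {0: 10.0, 1: 1.0, 2: 0.1}        # assumed relative population density per class

        community_population = 12_000                    # aggregated census value for the unit

        weights = np.vectorize(class_weight.get)(land_cover).astype(float)
        cell_population = community_population * weights / weights.sum()
        print(cell_population.round(1))                  # disaggregated population per raster cell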

  19. Control algorithms for dynamic attenuators

    Energy Technology Data Exchange (ETDEWEB)

    Hsieh, Scott S., E-mail: sshsieh@stanford.edu [Department of Radiology, Stanford University, Stanford, California 94305 and Department of Electrical Engineering, Stanford University, Stanford, California 94305 (United States); Pelc, Norbert J. [Department of Radiology, Stanford University, Stanford California 94305 and Department of Bioengineering, Stanford University, Stanford, California 94305 (United States)

    2014-06-15

    Purpose: The authors describe algorithms to control dynamic attenuators in CT and compare their performance using simulated scans. Dynamic attenuators are prepatient beam shaping filters that modulate the distribution of x-ray fluence incident on the patient on a view-by-view basis. These attenuators can reduce dose while improving key image quality metrics such as peak or mean variance. In each view, the attenuator presents several degrees of freedom which may be individually adjusted. The total number of degrees of freedom across all views is very large, making many optimization techniques impractical. The authors develop a theory for optimally controlling these attenuators. Special attention is paid to a theoretically perfect attenuator which controls the fluence for each ray individually, but the authors also investigate and compare three other, practical attenuator designs which have been previously proposed: the piecewise-linear attenuator, the translating attenuator, and the double wedge attenuator. Methods: The authors pose and solve the optimization problems of minimizing the mean and peak variance subject to a fixed dose limit. For a perfect attenuator and mean variance minimization, this problem can be solved in simple, closed form. For other attenuator designs, the problem can be decomposed into separate problems for each view to greatly reduce the computational complexity. Peak variance minimization can be approximately solved using iterated, weighted mean variance (WMV) minimization. Also, the authors develop heuristics for the perfect and piecewise-linear attenuators which do not require a priori knowledge of the patient anatomy. The authors compare these control algorithms on different types of dynamic attenuators using simulated raw data from forward projected DICOM files of a thorax and an abdomen. Results: The translating and double wedge attenuators reduce dose by an average of 30% relative to current techniques (bowtie filter with tube current

  20. Control algorithms for dynamic attenuators

    International Nuclear Information System (INIS)

    Hsieh, Scott S.; Pelc, Norbert J.

    2014-01-01

    Purpose: The authors describe algorithms to control dynamic attenuators in CT and compare their performance using simulated scans. Dynamic attenuators are prepatient beam shaping filters that modulate the distribution of x-ray fluence incident on the patient on a view-by-view basis. These attenuators can reduce dose while improving key image quality metrics such as peak or mean variance. In each view, the attenuator presents several degrees of freedom which may be individually adjusted. The total number of degrees of freedom across all views is very large, making many optimization techniques impractical. The authors develop a theory for optimally controlling these attenuators. Special attention is paid to a theoretically perfect attenuator which controls the fluence for each ray individually, but the authors also investigate and compare three other, practical attenuator designs which have been previously proposed: the piecewise-linear attenuator, the translating attenuator, and the double wedge attenuator. Methods: The authors pose and solve the optimization problems of minimizing the mean and peak variance subject to a fixed dose limit. For a perfect attenuator and mean variance minimization, this problem can be solved in simple, closed form. For other attenuator designs, the problem can be decomposed into separate problems for each view to greatly reduce the computational complexity. Peak variance minimization can be approximately solved using iterated, weighted mean variance (WMV) minimization. Also, the authors develop heuristics for the perfect and piecewise-linear attenuators which do not require a priori knowledge of the patient anatomy. The authors compare these control algorithms on different types of dynamic attenuators using simulated raw data from forward projected DICOM files of a thorax and an abdomen. Results: The translating and double wedge attenuators reduce dose by an average of 30% relative to current techniques (bowtie filter with tube current

  1. Control algorithms for dynamic attenuators.

    Science.gov (United States)

    Hsieh, Scott S; Pelc, Norbert J

    2014-06-01

    The authors describe algorithms to control dynamic attenuators in CT and compare their performance using simulated scans. Dynamic attenuators are prepatient beam shaping filters that modulate the distribution of x-ray fluence incident on the patient on a view-by-view basis. These attenuators can reduce dose while improving key image quality metrics such as peak or mean variance. In each view, the attenuator presents several degrees of freedom which may be individually adjusted. The total number of degrees of freedom across all views is very large, making many optimization techniques impractical. The authors develop a theory for optimally controlling these attenuators. Special attention is paid to a theoretically perfect attenuator which controls the fluence for each ray individually, but the authors also investigate and compare three other, practical attenuator designs which have been previously proposed: the piecewise-linear attenuator, the translating attenuator, and the double wedge attenuator. The authors pose and solve the optimization problems of minimizing the mean and peak variance subject to a fixed dose limit. For a perfect attenuator and mean variance minimization, this problem can be solved in simple, closed form. For other attenuator designs, the problem can be decomposed into separate problems for each view to greatly reduce the computational complexity. Peak variance minimization can be approximately solved using iterated, weighted mean variance (WMV) minimization. Also, the authors develop heuristics for the perfect and piecewise-linear attenuators which do not require a priori knowledge of the patient anatomy. The authors compare these control algorithms on different types of dynamic attenuators using simulated raw data from forward projected DICOM files of a thorax and an abdomen. The translating and double wedge attenuators reduce dose by an average of 30% relative to current techniques (bowtie filter with tube current modulation) without

  2. THEORETICAL ASPECTS REGARDING THE VALUATION OF INTANGIBLE ASSETS

    Directory of Open Access Journals (Sweden)

    HOLT GHEORGHE

    2015-03-01

    Full Text Available Valuation of intangible assets represents one of the most delicate problems in valuing a company. Usually, intangible assets are valued as part of valuing the enterprise as a whole. Therefore, intangible asset valuers must have detailed knowledge of business valuation, in particular of the income-based valuation methods (capitalization/discounting of net cash flow). Valuation of intangible assets is the objective of the International Valuation Standards (GN 4 Valuation of Intangible Assets, revised 2010). Alongside it, GN 16 Valuation of Intangible Assets for IFRS Reporting was recently proposed. International Accounting Standard IAS 38 Intangible Assets prescribes the accounting treatment for intangible assets, analyses the criteria that an intangible asset must meet to be recognized, specifies the carrying amount of intangible assets and sets out requirements for the disclosure of intangible assets. From an accounting perspective, the relevant professional accounting standards also include the following: IFRS 3 Business Combinations, IAS 36 Impairment of Assets and SFAS 157 Fair Value Measurement, developed by the FASB. The provisions of GN 4 are closely aligned with those of IAS 38. Therefore, a good intangible asset valuation professional must know thoroughly the conditions, principles, criteria and valuation methods recognized by those standards.

  3. A Dynamic Enhancement With Background Reduction Algorithm: Overview and Application to Satellite-Based Dust Storm Detection

    Science.gov (United States)

    Miller, Steven D.; Bankert, Richard L.; Solbrig, Jeremy E.; Forsythe, John M.; Noh, Yoo-Jeong; Grasso, Lewis D.

    2017-12-01

    This paper describes a Dynamic Enhancement Background Reduction Algorithm (DEBRA) applicable to multispectral satellite imaging radiometers. DEBRA uses ancillary information about the clear-sky background to reduce false detections of atmospheric parameters in complex scenes. Applied here to the detection of lofted dust, DEBRA enlists a surface emissivity database coupled with a climatological database of surface temperature to approximate the clear-sky equivalent signal for selected infrared-based multispectral dust detection tests. This background allows for suppression of false alarms caused by land surface features while retaining some ability to detect dust above those problematic surfaces. The algorithm is applicable to both day and nighttime observations and enables weighted combinations of dust detection tests. The results are provided quantitatively, as a detection confidence factor [0, 1], but are also readily visualized as enhanced imagery. Utilizing the DEBRA confidence factor as a scaling factor in false color red/green/blue imagery enables depiction of the targeted parameter in the context of the local meteorology and topography. In this way, the method holds utility to both automated clients and human analysts alike. Examples of DEBRA performance from notable dust storms and comparisons against other detection methods and independent observations are presented.

  4. An accurate projection algorithm for array processor based SPECT systems

    International Nuclear Information System (INIS)

    King, M.A.; Schwinger, R.B.; Cool, S.L.

    1985-01-01

    A data re-projection algorithm has been developed for use in single photon emission computed tomography (SPECT) on an array processor based computer system. The algorithm makes use of an accurate representation of pixel activity (uniform square pixel model of intensity distribution), and is rapidly performed due to the efficient handling of an array based algorithm and the Fast Fourier Transform (FFT) on parallel processing hardware. The algorithm consists of using a pixel driven nearest neighbour projection operation to an array of subdivided projection bins. This result is then convolved with the projected uniform square pixel distribution before being compressed to original bin size. This distribution varies with projection angle and is explicitly calculated. The FFT combined with a frequency space multiplication is used instead of a spatial convolution for more rapid execution. The new algorithm was tested against other commonly used projection algorithms by comparing the accuracy of projections of a simulated transverse section of the abdomen against analytically determined projections of that transverse section. The new algorithm was found to yield comparable or better standard error and yet result in easier and more efficient implementation on parallel hardware. Applications of the algorithm include iterative reconstruction and attenuation correction schemes and evaluation of regions of interest in dynamic and gated SPECT
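
    The following is a 1-D toy analogue of the projection scheme described above: activity is deposited into subdivided bins by a pixel-driven nearest-neighbour step, convolved with a square-pixel footprint through frequency-space multiplication, and compressed back to the original bin size. It is only a sketch; the real algorithm works on 2-D transverse sections, uses an angle-dependent footprint, and runs on array-processor hardware.

```python
import numpy as np

def project_with_footprint(activities, positions, n_bins, subdiv=4, footprint_width=1.0):
    """Toy 1-D analogue of the projection scheme: deposit each pixel's activity
    into the nearest of n_bins*subdiv fine bins, convolve with the projected
    square-pixel footprint via FFT, then sum back to the coarse bins."""
    fine_n = n_bins * subdiv
    fine = np.zeros(fine_n)
    # pixel-driven nearest-neighbour deposition onto the subdivided bins
    idx = np.clip(np.round(positions * subdiv).astype(int), 0, fine_n - 1)
    np.add.at(fine, idx, activities)
    # footprint of a uniform square pixel, expressed in fine-bin units
    half = int(round(footprint_width * subdiv / 2))
    kernel = np.zeros(fine_n)
    kernel[:half + 1] = 1.0
    if half > 0:
        kernel[-half:] = 1.0          # wrap-around so the kernel is centred at bin 0
    kernel /= kernel.sum()
    # frequency-space multiplication instead of a spatial convolution
    smeared = np.real(np.fft.ifft(np.fft.fft(fine) * np.fft.fft(kernel)))
    # compress back to the original bin size
    return smeared.reshape(n_bins, subdiv).sum(axis=1)

acts = np.array([1.0, 2.0, 0.5])
pos = np.array([3.2, 7.8, 12.1])      # pixel centres in coarse-bin units
print(project_with_footprint(acts, pos, n_bins=16))
```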

  5. Warehouse stocking optimization based on dynamic ant colony genetic algorithm

    Science.gov (United States)

    Xiao, Xiaoxu

    2018-04-01

    In view of the varied orders handled by FAW (First Automotive Works) International Logistics Co., Ltd., the SLP method is used to optimize the layout of the warehousing units in the enterprise, thereby optimizing warehouse logistics and improving the speed at which orders are processed. In addition, the relevant intelligent algorithms for optimizing the stocking route problem are analyzed. The ant colony algorithm and the genetic algorithm, which have good applicability, are studied in particular. The parameters of the ant colony algorithm are optimized by the genetic algorithm, which improves the performance of the ant colony algorithm. A typical path optimization problem model is taken as an example to prove the effectiveness of the parameter optimization.
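
    A compact sketch of the parameter-tuning idea is given below: a simple genetic algorithm searches over the ant colony parameters (pheromone weight alpha, heuristic weight beta, evaporation rate rho). The run_aco function is only a stand-in objective, since the paper's actual ACO implementation and routing instance are not reproduced here.

```python
import random

def run_aco(alpha, beta, rho):
    """Stand-in for an ACO run on the stocking-route problem: in the real
    system this would return the best tour length found with the given
    pheromone weight (alpha), heuristic weight (beta) and evaporation
    rate (rho). A smooth synthetic surface is used so the sketch runs."""
    return (alpha - 1.0) ** 2 + (beta - 5.0) ** 2 + 10 * (rho - 0.5) ** 2

def ga_tune(pop_size=20, generations=30, mut_rate=0.2):
    bounds = [(0.1, 5.0), (0.1, 10.0), (0.01, 0.99)]           # alpha, beta, rho
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: run_aco(*ind))                # lower tour length is better
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]        # arithmetic crossover
            for i, (lo, hi) in enumerate(bounds):              # bounded Gaussian mutation
                if random.random() < mut_rate:
                    child[i] = min(hi, max(lo, child[i] + random.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        pop = parents + children
    return min(pop, key=lambda ind: run_aco(*ind))

print(ga_tune())
```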

  6. Nonlinear dynamics optimization with particle swarm and genetic algorithms for SPEAR3 emittance upgrade

    International Nuclear Information System (INIS)

    Huang, Xiaobiao; Safranek, James

    2014-01-01

    Nonlinear dynamics optimization is carried out for a low emittance upgrade lattice of SPEAR3 in order to improve its dynamic aperture and Touschek lifetime. Two multi-objective optimization algorithms, a genetic algorithm and a particle swarm algorithm, are used for this study. The performance of the two algorithms is compared. The result shows that the particle swarm algorithm converges significantly faster to similar or better solutions than the genetic algorithm and that it does not require seeding of good solutions in the initial population. These advantages of the particle swarm algorithm may make it more suitable for many accelerator optimization applications.

  7. Nonlinear dynamics optimization with particle swarm and genetic algorithms for SPEAR3 emittance upgrade

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Xiaobiao, E-mail: xiahuang@slac.stanford.edu; Safranek, James

    2014-09-01

    Nonlinear dynamics optimization is carried out for a low emittance upgrade lattice of SPEAR3 in order to improve its dynamic aperture and Touschek lifetime. Two multi-objective optimization algorithms, a genetic algorithm and a particle swarm algorithm, are used for this study. The performance of the two algorithms is compared. The result shows that the particle swarm algorithm converges significantly faster to similar or better solutions than the genetic algorithm and that it does not require seeding of good solutions in the initial population. These advantages of the particle swarm algorithm may make it more suitable for many accelerator optimization applications.

  8. Dynamic Programming and Graph Algorithms in Computer Vision*

    Science.gov (United States)

    Felzenszwalb, Pedro F.; Zabih, Ramin

    2013-01-01

    Optimization is a powerful paradigm for expressing and solving problems in a wide range of areas, and has been successfully applied to many vision problems. Discrete optimization techniques are especially interesting, since by carefully exploiting problem structure they often provide non-trivial guarantees concerning solution quality. In this paper we briefly review dynamic programming and graph algorithms, and discuss representative examples of how these discrete optimization techniques have been applied to some classical vision problems. We focus on the low-level vision problem of stereo; the mid-level problem of interactive object segmentation; and the high-level problem of model-based recognition. PMID:20660950
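
    To make the dynamic-programming theme concrete, here is a minimal scanline stereo matcher: the data term is the absolute intensity difference and the smoothness term penalises disparity jumps, solved exactly along one row by dynamic programming. This is a generic textbook formulation, not a reproduction of the specific methods reviewed in the paper.

```python
import numpy as np

def scanline_stereo(left, right, max_disp=4, lam=1.0):
    """Dynamic-programming disparity estimation along one scanline.
    Data term: absolute intensity difference; smoothness term: lam*|d - d'|."""
    n = len(left)
    ndisp = max_disp + 1
    data = np.full((n, ndisp), np.inf)
    for x in range(n):
        for d in range(ndisp):
            if x - d >= 0:
                data[x, d] = abs(float(left[x]) - float(right[x - d]))
    C = np.full((n, ndisp), np.inf)
    back = np.zeros((n, ndisp), dtype=int)
    C[0] = data[0]
    for x in range(1, n):
        for d in range(ndisp):
            trans = C[x - 1] + lam * np.abs(np.arange(ndisp) - d)
            back[x, d] = int(np.argmin(trans))
            C[x, d] = data[x, d] + trans[back[x, d]]
    # backtrack the minimum-cost disparity path
    disp = np.zeros(n, dtype=int)
    disp[-1] = int(np.argmin(C[-1]))
    for x in range(n - 1, 0, -1):
        disp[x - 1] = back[x, disp[x]]
    return disp

left = np.array([10, 10, 50, 50, 50, 10, 10, 10])
right = np.array([10, 50, 50, 50, 10, 10, 10, 10])   # scene shifted by one pixel
print(scanline_stereo(left, right))
```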

  9. Fully distributed monitoring architecture supporting multiple trackees and trackers in indoor mobile asset management application.

    Science.gov (United States)

    Jeong, Seol Young; Jo, Hyeong Gon; Kang, Soon Ju

    2014-03-21

    A tracking service like asset management is essential in a dynamic hospital environment consisting of numerous mobile assets (e.g., wheelchairs or infusion pumps) that are continuously relocated throughout a hospital. The tracking service is accomplished based on the key technologies of an indoor location-based service (LBS), such as locating and monitoring multiple mobile targets inside a building in real time. An indoor LBS such as a tracking service entails numerous resource lookups being requested concurrently and frequently from several locations, as well as a network infrastructure requiring support for high scalability in indoor environments. A traditional centralized architecture needs to maintain a geographic map of the entire building or complex in its central server, which can cause low scalability and traffic congestion. This paper presents a self-organizing and fully distributed indoor mobile asset management (MAM) platform, and proposes an architecture for multiple trackees (such as mobile assets) and trackers based on the proposed distributed platform in real time. In order to verify the suggested platform, scalability performance according to increases in the number of concurrent lookups was evaluated in a real test bed. Tracking latency and traffic load ratio in the proposed tracking architecture was also evaluated.

  10. Event-chain algorithm for the Heisenberg model: Evidence for z≃1 dynamic scaling.

    Science.gov (United States)

    Nishikawa, Yoshihiko; Michel, Manon; Krauth, Werner; Hukushima, Koji

    2015-12-01

    We apply the event-chain Monte Carlo algorithm to the three-dimensional ferromagnetic Heisenberg model. The algorithm is rejection-free and also realizes an irreversible Markov chain that satisfies global balance. The autocorrelation functions of the magnetic susceptibility and the energy indicate a dynamical critical exponent z≈1 at the critical temperature, while that of the magnetization does not measure the performance of the algorithm. We show that the event-chain Monte Carlo algorithm substantially reduces the dynamical critical exponent from the conventional value of z≃2.
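
    The dynamical critical exponent is extracted from autocorrelation functions such as those mentioned above. The sketch below shows a standard way to estimate a normalized autocorrelation function and an integrated autocorrelation time from a Monte Carlo time series; the AR(1) test signal and the truncation window are illustrative choices, not part of the event-chain algorithm itself.

```python
import numpy as np

def autocorrelation(series):
    """Normalized autocorrelation function of a Monte Carlo time series."""
    x = np.asarray(series, dtype=float) - np.mean(series)
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    return acf / acf[0]

def integrated_autocorrelation_time(series, window=None):
    """tau_int = 1/2 + sum_t rho(t); the sum is truncated at `window` lags."""
    rho = autocorrelation(series)
    if window is None:
        window = max(1, len(rho) // 10)
    return 0.5 + np.sum(rho[1:window])

# toy example: an AR(1) process with known correlation time
rng = np.random.default_rng(0)
phi, n = 0.9, 5000
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()
print(integrated_autocorrelation_time(x, window=200))   # expected near 0.5 + phi/(1-phi) = 9.5
```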

  11. Implementation of Asset Management System Based on Wireless Sensor Technology

    Directory of Open Access Journals (Sweden)

    Nan WANG

    2014-02-01

    Full Text Available RFID technology is regarded as one of the top ten key technologies of the 21st century and has extensive application prospects in various fields, including asset management, public safety and so on. After analyzing the current problems in asset management, this paper proposes applying RFID technology to device management in order to effectively improve the level of automation and informatization of device management, and designs an equipment monitoring system based on 433 MHz RFID electronic tags and readers. The hardware part of the monitoring system consists of the RFID sensor terminals attached to the devices and the readers distributed at each monitoring site. The readers upload the information collected by the tags to the backend server and the management system, allowing managers and decision makers to see the usage rate and location of the experimental instruments and providing managers with a scientific basis for decision making, which effectively addresses the relatively backward state of current device management.

  12. THEORETICAL ASPECTS REGARDING THE VALUATION OF INTANGIBLE ASSETS

    OpenAIRE

    HOLT GHEORGHE

    2015-01-01

    Valuation of intangible assets represents one of the most delicate problems of assessing a company. Usually, valuation of intangible assets is in the process of evaluating enterprise as a whole. Therefore, Intangible Asset Valuers must have detailed knowledge on business valuation, in particular, the income-based valuation methods (capitalization / updating net cash flow). Valuation of Intangible Assets is the objective of the International Valuation Standards (GN) 4 Valuation of Intangible A...

  13. Casting a Resource-Based View on Intangible Assets and Export Behaviour

    Directory of Open Access Journals (Sweden)

    Seyyed Mohammad Tabatabaei Nasab

    2013-09-01

    Full Text Available Prosperous companies in the 21st century have come to recognize intangible assets as an important factor in achieving sustainable competitive advantage and a constant presence in international markets. Hence, the purpose of this paper is to examine intangible assets and evaluate their relationship with export behaviour in terms of export intensity (export-sales ratio) and export type (permanent, occasional and periodical). The population under study includes all exporting firms in Yazd province, Iran, from 2002 to 2010. Research data were collected by questionnaire, and in order to answer the research questions and test the hypotheses, MCDM techniques (i.e., AHP and TOPSIS) and statistical analysis (i.e., ANOVA) were utilized. According to the research results, human capital, relational capital, technological capital, corporate reputation, and structural capital ranked as the first to fifth most significant factors, respectively. Findings revealed that there is a significant difference in intangible assets between permanent and occasional presence in international markets, as the mean level of intangible assets in firms with permanent exports is higher than in firms with occasional exports. However, no significant relationship was found between intangible assets and export intensity.

  14. An algorithm for engineering regime shifts in one-dimensional dynamical systems

    Science.gov (United States)

    Tan, James P. L.

    2018-01-01

    Regime shifts are discontinuous transitions between stable attractors hosting a system. They can occur as a result of a loss of stability in an attractor as a bifurcation is approached. In this work, we consider one-dimensional dynamical systems where attractors are stable equilibrium points. Relying on critical slowing down signals related to the stability of an equilibrium point, we present an algorithm for engineering regime shifts such that a system may escape an undesirable attractor into a desirable one. We test the algorithm on synthetic data from a one-dimensional dynamical system with a multitude of stable equilibrium points and also on a model of the population dynamics of spruce budworms in a forest. The algorithm and other ideas discussed here contribute to an important part of the literature on exercising greater control over the sometimes unpredictable nature of nonlinear systems.
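
    A minimal illustration of the critical-slowing-down signal the algorithm relies on: a one-dimensional overdamped system with a slowly ramped control parameter is simulated, and the lag-1 autocorrelation computed in sliding windows rises as the fold bifurcation is approached. The specific drift term, noise level and window size are assumptions for the sketch, not the authors' model.

```python
import numpy as np

def lag1_autocorr(window):
    w = window - window.mean()
    return float(np.dot(w[:-1], w[1:]) / np.dot(w, w))

# overdamped dynamics dx/dt = r + x - x**3 + noise; as r is ramped toward the
# fold bifurcation at r ~ 0.385, the lower equilibrium loses stability and
# recovery from fluctuations slows down
rng = np.random.default_rng(1)
dt, n = 0.01, 60000
r = np.linspace(-1.0, 0.3, n)          # slowly ramped control parameter
x = np.empty(n)
x[0] = -1.0
for t in range(1, n):
    drift = r[t] + x[t - 1] - x[t - 1] ** 3
    x[t] = x[t - 1] + drift * dt + 0.05 * np.sqrt(dt) * rng.normal()

# rising lag-1 autocorrelation is the critical-slowing-down warning signal
for start in range(0, n - 5000, 10000):
    print(start, round(lag1_autocorr(x[start:start + 5000]), 3))
```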

  15. 12 CFR 615.5210 - Risk-adjusted assets.

    Science.gov (United States)

    2010-01-01

    ... appropriate credit conversion factor in § 615.5212, is assigned to one of the risk categories specified in... risk-based capital requirement for the credit-enhanced assets, the risk-based capital required under..., determine the appropriate risk weight for any asset or credit equivalent amount that does not fit wholly...

  16. Accounting valuation development of specific assets

    Directory of Open Access Journals (Sweden)

    I.V. Zhigley

    2017-12-01

    Full Text Available The current issues of accounting estimate development are considered. The necessity of the development of accounting estimate in the context of the non-institutional theory principles based on the selection of a number of reasons is grounded. The reasons for deterioration of accounting reputation as a separate socio-economic institute in the context of developing the methodology for specific assets accounting are discovered. The system of normative regulation of accounting estimate of enterprise non-current assets in the case of diminishing their usefulness is analyzed. The procedure for determining and accounting for the depreciation of assets in accordance with IFRS 36 «Depreciation of Assets» is developed. The features of the joint use of the concept of «value in use» and «fair value» in the accounting system are disclosed. The procedure for determining the value of compensation depending on the degree of specificity of assets is developed. The necessity to clarify the features that indicate the possibility of diminishing the usefulness of specific assets (termination or pre-term termination of the contract for the use of a specific asset is grounded.

  17. Algorithms for testing of fractional dynamics: a practical guide to ARFIMA modelling

    International Nuclear Information System (INIS)

    Burnecki, Krzysztof; Weron, Aleksander

    2014-01-01

    In this survey paper we present a systematic methodology which demonstrates how to identify the origins of fractional dynamics. We consider three mechanisms which lead to it, namely fractional Brownian motion, fractional Lévy stable motion and an autoregressive fractionally integrated moving average (ARFIMA) process but we concentrate on the ARFIMA modelling. The methodology is based on statistical tools for identification and validation of the fractional dynamics, in particular on an ARFIMA parameter estimator, an ergodicity test, a self-similarity index estimator based on sample p-variation and a memory parameter estimator based on sample mean-squared displacement. A complete list of algorithms needed for this is provided in appendices A–F. Finally, we illustrate the methodology on various empirical data and show that ARFIMA can be considered as a universal model for fractional dynamics. Thus, we provide a practical guide for experimentalists on how to efficiently use ARFIMA modelling for a large class of anomalous diffusion data. (paper)
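
    As a simplified illustration of one ingredient, the memory parameter estimator based on the sample mean-squared displacement, the sketch below fits the power-law growth MSD(tau) ~ tau^(2H) of the cumulative trajectory and converts the slope to a memory parameter via H = d + 1/2. The exact estimator and any bias corrections used in the paper may differ.

```python
import numpy as np

def sample_msd(traj, max_lag):
    """Time-averaged sample mean-squared displacement of a trajectory."""
    traj = np.asarray(traj, dtype=float)
    lags = np.arange(1, max_lag + 1)
    return lags, np.array([np.mean((traj[lag:] - traj[:-lag]) ** 2) for lag in lags])

def memory_parameter_from_msd(increments, max_lag=100):
    """Estimate the memory parameter d of an increment series from the
    power-law growth MSD(tau) ~ tau^(2H) of its cumulative trajectory,
    using H = d + 1/2 for a fractionally integrated process."""
    traj = np.cumsum(increments)
    lags, msd = sample_msd(traj, max_lag)
    slope, _ = np.polyfit(np.log(lags), np.log(msd), 1)
    return slope / 2.0 - 0.5

# toy check with uncorrelated increments (d = 0): the estimate should be near 0
rng = np.random.default_rng(2)
print(memory_parameter_from_msd(rng.normal(size=20000)))
```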

  18. GENERAL: Application of Symplectic Algebraic Dynamics Algorithm to Circular Restricted Three-Body Problem

    Science.gov (United States)

    Lu, Wei-Tao; Zhang, Hua; Wang, Shun-Jin

    2008-07-01

    Symplectic algebraic dynamics algorithm (SADA) for ordinary differential equations is applied to solve numerically the circular restricted three-body problem (CR3BP) in dynamical astronomy for both stable motion and chaotic motion. The result is compared with those of Runge-Kutta algorithm and symplectic algorithm under the fourth order, which shows that SADA has higher accuracy than the others in the long-term calculations of the CR3BP.

  19. An Initialization Method Based on Hybrid Distance for k-Means Algorithm.

    Science.gov (United States)

    Yang, Jie; Ma, Yan; Zhang, Xiangfen; Li, Shunbao; Zhang, Yuping

    2017-11-01

    The traditional k-means algorithm has been widely used as a simple and efficient clustering method. However, the performance of this algorithm is highly dependent on the selection of initial cluster centers. Therefore, the method adopted for choosing initial cluster centers is extremely important. In this letter, we redefine the density of points according to the number of their neighbors, as well as the distance between points and their neighbors. In addition, we define a new distance measure that considers both Euclidean distance and density. Based on that, we propose an algorithm for selecting initial cluster centers that can dynamically adjust the weighting parameter. Furthermore, we propose a new internal clustering validation measure, the clustering validation index based on the neighbors (CVN), which can be exploited to select the optimal result among multiple clustering results. Experimental results show that the proposed algorithm outperforms existing initialization methods on real-world data sets and demonstrates the adaptability of the proposed algorithm to data sets with various characteristics.
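
    A rough sketch of the initialization idea follows: each point's density is derived from its neighbours, and centres are chosen greedily to be both dense and far from the centres already selected. The particular density definition (inverse mean k-nearest-neighbour distance) and the weighting between distance and density are assumptions; the paper's exact hybrid distance and dynamic weighting rule are not reproduced.

```python
import numpy as np

def density(points, k=10):
    """Density of each point, taken here as the inverse of the mean distance to
    its k nearest neighbours (a stand-in for the neighbour-based definition)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    knn = np.sort(d, axis=1)[:, 1:k + 1]          # skip the zero self-distance
    return 1.0 / (knn.mean(axis=1) + 1e-12)

def init_centers(points, n_clusters, w=0.5, k=10):
    """Pick initial centres that are both dense and far from centres chosen so far.
    The weighting w between the two criteria is the tunable parameter."""
    dens = density(points, k)
    dens = dens / dens.max()
    centers = [int(np.argmax(dens))]              # densest point first
    for _ in range(n_clusters - 1):
        dist = np.min(np.linalg.norm(points[:, None, :] - points[centers][None, :, :],
                                     axis=-1), axis=1)
        score = w * dist / (dist.max() + 1e-12) + (1 - w) * dens
        score[centers] = -np.inf                  # never pick the same point twice
        centers.append(int(np.argmax(score)))
    return points[centers]

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in ((0, 0), (3, 0), (0, 3))])
print(init_centers(X, 3))
```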

  20. Comparative behaviour of the Dynamically Penalized Likelihood algorithm in inverse radiation therapy planning

    Energy Technology Data Exchange (ETDEWEB)

    Llacer, Jorge [EC Engineering Consultants, LLC, Los Gatos, CA (United States)]. E-mail: jllacer@home.com; Solberg, Timothy D. [Department of Radiation Oncology, University of California, Los Angeles, CA (United States)]. E-mail: Solberg@radonc.ucla.edu; Promberger, Claus [BrainLAB AG, Heimstetten (Germany)]. E-mail: promberg@brainlab.com

    2001-10-01

    This paper presents a description of tests carried out to compare the behaviour of five algorithms in inverse radiation therapy planning: (1) The Dynamically Penalized Likelihood (DPL), an algorithm based on statistical estimation theory; (2) an accelerated version of the same algorithm; (3) a new fast adaptive simulated annealing (ASA) algorithm; (4) a conjugate gradient method; and (5) a Newton gradient method. A three-dimensional mathematical phantom and two clinical cases have been studied in detail. The phantom consisted of a U-shaped tumour with a partially enclosed 'spinal cord'. The clinical examples were a cavernous sinus meningioma and a prostate case. The algorithms have been tested in carefully selected and controlled conditions so as to ensure fairness in the assessment of results. It has been found that all five methods can yield relatively similar optimizations, except when a very demanding optimization is carried out. For the easier cases, the differences are principally in robustness, ease of use and optimization speed. In the more demanding case, there are significant differences in the resulting dose distributions. The accelerated DPL emerges as possibly the algorithm of choice for clinical practice. An appendix describes the differences in behaviour between the new ASA method and the one based on a patent by the Nomos Corporation. (author)

  1. Asset Substitution, Money Demand, and the Inflation Process in Brazil.

    OpenAIRE

    Calomiris, Charles W; Domowitz, Ian

    1989-01-01

    Various domestic financial assets in Brazil have provided relatively liquid nonmonetary alternatives. Monthly money demand estimates, which include domestic asset opportunity costs and take account of T-bill repurchase agreements in a dynamic error-correction model, demonstrate the importance of domestic substitutes in explaining money holdings. Money demand appears responsive and stable. Moreover, T-bills and indexed bonds have acted as an alternative to central bank liabilities as a source ...

  2. Monitoring highway assets using remote sensing technology : research spotlight.

    Science.gov (United States)

    2014-04-01

    Collecting inventory data about roadway assets is a critical part of MDOT's asset management efforts, which help the department operate, maintain and upgrade these assets cost-effectively. Federal law requires that states develop a risk-based...

  3. Looking for Synergy with Momentum in Main Asset Classes

    OpenAIRE

    Lukas Macijauskas; Dimitrios I. Maditinos

    2014-01-01

    As correlations between the main asset classes falter during turbulent market conditions, classical asset management concepts seem unreliable. This problem stimulates the search for non-discretionary asset allocation methods. The aim of the paper is to test whether the momentum phenomenon could be used as a stand-alone investment strategy across all main asset classes. The study is based on exploring historical prices of various asset classes; a statistical data analysis method is used. Result...

  4. A new parallel molecular dynamics algorithm for organic systems

    International Nuclear Information System (INIS)

    Plimpton, S.; Hendrickson, B.; Heffelfinger, G.

    1993-01-01

    A new parallel algorithm for simulating bonded molecular systems such as polymers and proteins by molecular dynamics (MD) is presented. In contrast to methods that extract parallelism by breaking the spatial domain into sub-pieces, the new method does not require regular geometries or uniform particle densities to achieve high parallel efficiency. For very large, regular systems spatial methods are often the best choice, but in practice the new method is faster for systems with tens-of-thousands of atoms simulated on large numbers of processors. It is also several times faster than the techniques commonly used for parallelizing bonded MD that assign a subset of atoms to each processor and require all-to-all communication. Implementation of the algorithm in a CHARMm-like MD model with many body forces and constraint dynamics is discussed and timings on the Intel Delta and Paragon machines are given. Example calculations using the algorithm in simulations of polymers and liquid-crystal molecules will also be briefly discussed

  5. Adaptive swarm cluster-based dynamic multi-objective synthetic minority oversampling technique algorithm for tackling binary imbalanced datasets in biomedical data classification.

    Science.gov (United States)

    Li, Jinyan; Fong, Simon; Sung, Yunsick; Cho, Kyungeun; Wong, Raymond; Wong, Kelvin K L

    2016-01-01

    An imbalanced dataset is defined as a training dataset that has imbalanced proportions of data in both interesting and uninteresting classes. Often in biomedical applications, samples from the stimulating class are rare in a population, such as medical anomalies, positive clinical tests, and particular diseases. Although the target samples in the primitive dataset are small in number, the induction of a classification model over such training data leads to poor prediction performance due to insufficient training from the minority class. In this paper, we use a novel class-balancing method named adaptive swarm cluster-based dynamic multi-objective synthetic minority oversampling technique (ASCB_DmSMOTE) to solve this imbalanced dataset problem, which is common in biomedical applications. The proposed method combines under-sampling and over-sampling into a swarm optimisation algorithm. It adaptively selects suitable parameters for the rebalancing algorithm to find the best solution. Compared with the other versions of the SMOTE algorithm, significant improvements, which include higher accuracy and credibility, are observed with ASCB_DmSMOTE. Our proposed method tactfully combines two rebalancing techniques. It reasonably re-allocates the majority class and dynamically optimises the two parameters of SMOTE to synthesise a reasonable amount of minority-class data for each clustered sub-imbalanced dataset. The proposed method ultimately outperforms the other conventional methods and attains higher credibility together with greater accuracy of the classification model.
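
    For readers unfamiliar with the underlying oversampling step, the sketch below implements plain SMOTE: each synthetic minority sample interpolates between a minority point and one of its k nearest minority-class neighbours. The adaptive swarm clustering and dynamic parameter optimisation that distinguish ASCB_DmSMOTE are not shown.

```python
import numpy as np

def smote(minority, n_synthetic, k=5, rng=None):
    """Basic SMOTE: each synthetic sample is an interpolation between a random
    minority sample and one of its k nearest minority-class neighbours."""
    rng = rng or np.random.default_rng()
    minority = np.asarray(minority, dtype=float)
    d = np.linalg.norm(minority[:, None] - minority[None, :], axis=-1)
    neighbours = np.argsort(d, axis=1)[:, 1:k + 1]        # skip self at column 0
    synthetic = []
    for _ in range(n_synthetic):
        i = rng.integers(len(minority))
        j = neighbours[i, rng.integers(neighbours.shape[1])]
        gap = rng.random()                                 # position along the segment
        synthetic.append(minority[i] + gap * (minority[j] - minority[i]))
    return np.array(synthetic)

rng = np.random.default_rng(4)
minority = rng.normal(size=(20, 3))     # 20 rare-class samples, 3 features
print(smote(minority, n_synthetic=5, rng=rng).shape)
```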

  6. Three-dimensional GIS approach for management of assets

    Science.gov (United States)

    Lee, S. Y.; Yee, S. X.; Majid, Z.; Setan, H.

    2014-02-01

    Assets play an important role in human life, especially for an organization. Organizations strive and put more effort into improving their operations and asset management. The development of GIS technology has provided a powerful management tool, as it is able to provide a complete inventory for managing assets with location-based information. Spatial information is one of the requirements for decision making in various areas, including asset management in buildings. This paper describes a 3D GIS approach for the management of assets. An asset management system was developed by integrating the GIS concept and 3D models of assets. The purpose of 3D visualization for managing assets is to facilitate analysis and understanding in a complex environment. Behind the 3D model of assets is a database that stores the asset information. A user-friendly interface was also designed to make the application easier to operate. In the developed application, the location of each individual asset can easily be tracked using the associated spatial information and 3D viewing. The 3D GIS approach described in this paper would certainly be useful in asset management. Systematic management of assets can be carried out, and this will make management less time-consuming and more cost-effective. The results in this paper show a new approach to improve asset management.

  7. Three-dimensional GIS approach for management of assets

    International Nuclear Information System (INIS)

    Lee, S Y; Yee, S X; Majid, Z; Setan, H

    2014-01-01

    Assets play an important role in human life, especially for an organization. Organizations strive and put more effort into improving their operations and asset management. The development of GIS technology has provided a powerful management tool, as it is able to provide a complete inventory for managing assets with location-based information. Spatial information is one of the requirements for decision making in various areas, including asset management in buildings. This paper describes a 3D GIS approach for the management of assets. An asset management system was developed by integrating the GIS concept and 3D models of assets. The purpose of 3D visualization for managing assets is to facilitate analysis and understanding in a complex environment. Behind the 3D model of assets is a database that stores the asset information. A user-friendly interface was also designed to make the application easier to operate. In the developed application, the location of each individual asset can easily be tracked using the associated spatial information and 3D viewing. The 3D GIS approach described in this paper would certainly be useful in asset management. Systematic management of assets can be carried out, and this will make management less time-consuming and more cost-effective. The results in this paper show a new approach to improve asset management.

  8. Prudent management of utility assets -- Problem or promise?

    International Nuclear Information System (INIS)

    Hatch, D.; Serwinowski, M.

    1998-01-01

    As utilities move into a deregulated market, the extent and nature of their asset base, as well as the manner in which they have managed it, may be a key factor in the form of regulatory recovery. Utilities must face the issue of stranded assets. One way of addressing this issue is using "EVA", Economic Value Added, as a mechanism to form financial models for prudent asset management. The authors present an approach to this challenging aspect of deregulation. They focus on the following utility assets: buildings/facilities and excess real physical assets. Focusing primarily on Niagara Mohawk, two or three case studies are used to demonstrate how proactive management and EVA analysis transform underperforming utility assets. These will be presented in a way that shows benefits for all utility stakeholders, such as cost avoidance, load growth, real estate tax savings, stranded asset reductions, environmental gains, corporate image enhancement, and regulatory/governmental gains, over and above possible economic gains. Examples will be given that include the transformation of utility assets into award-winning commercial, residential, and industrial developments as well as recreational/park lands and greenways. Similarly, other examples will show the many tangible and intangible benefits of an effective investment recovery and waste stream management program. Various strategies will also be presented that detail how utilities can begin to develop a comprehensive plan for their asset portfolio. The first step in realizing and maximizing EVA across a portfolio of assets is a change in corporate policy, from passive ownership to active, prudent management. Service and cost will drive the competition resulting from full deregulation. To drive down costs, utilities will need to become more efficient in dealing with their asset base. By embracing an EVA model for an entire asset portfolio, utilities can prepare for and excel in the newly shaped marketplace.

  9. Nonequilibrium molecular dynamics theory, algorithms and applications

    CERN Document Server

    Todd, Billy D

    2017-01-01

    Written by two specialists with over twenty-five years of experience in the field, this valuable text presents a wide range of topics within the growing field of nonequilibrium molecular dynamics (NEMD). It introduces theories which are fundamental to the field - namely, nonequilibrium statistical mechanics and nonequilibrium thermodynamics - and provides state-of-the-art algorithms and advice for designing reliable NEMD code, as well as examining applications for both atomic and molecular fluids. It discusses homogenous and inhomogenous flows and pays considerable attention to highly confined fluids, such as nanofluidics. In addition to statistical mechanics and thermodynamics, the book covers the themes of temperature and thermodynamic fluxes and their computation, the theory and algorithms for homogenous shear and elongational flows, response theory and its applications, heat and mass transport algorithms, applications in molecular rheology, highly confined fluids (nanofluidics), the phenomenon of slip and...

  10. High speed railway track dynamics models, algorithms and applications

    CERN Document Server

    Lei, Xiaoyan

    2017-01-01

    This book systematically summarizes the latest research findings on high-speed railway track dynamics, made by the author and his research team over the past decade. It explores cutting-edge issues concerning the basic theory of high-speed railways, covering the dynamic theories, models, algorithms and engineering applications of the high-speed train and track coupling system. Presenting original concepts, systematic theories and advanced algorithms, the book places great emphasis on the precision and completeness of its content. The chapters are interrelated yet largely self-contained, allowing readers to either read through the book as a whole or focus on specific topics. It also combines theories with practice to effectively introduce readers to the latest research findings and developments in high-speed railway track dynamics. It offers a valuable resource for researchers, postgraduates and engineers in the fields of civil engineering, transportation, highway & railway engineering.

  11. A parallel attractor-finding algorithm based on Boolean satisfiability for genetic regulatory networks.

    Directory of Open Access Journals (Sweden)

    Wensheng Guo

    Full Text Available In biological systems, dynamic analysis methods have gained increasing attention in the past decade. The Boolean network is the most common model of a genetic regulatory network. The interactions of activation and inhibition in the genetic regulatory network are modeled as a set of functions of the Boolean network, while the state transitions in the Boolean network reflect the dynamic properties of a genetic regulatory network. A difficult problem for state transition analysis is finding attractors. In this paper, we modeled the genetic regulatory network as a Boolean network and proposed an algorithm to tackle the attractor finding problem. In the proposed algorithm, we partitioned the Boolean network into several blocks consisting of strongly connected components according to their gradients, and defined the connections between blocks as decision nodes. Based on the solutions calculated at the decision nodes and using a satisfiability solving algorithm, we identified the attractors in the state transition graph of each block. The proposed algorithm is benchmarked on a variety of genetic regulatory networks. Compared with existing algorithms, it achieved similar performance on small test cases and outperformed them on larger and more complex ones, which matches the trend of modern genetic regulatory networks. Furthermore, while the existing satisfiability-based algorithms cannot be parallelized due to their inherent algorithm design, the proposed algorithm exhibits good scalability on parallel computing architectures.
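
    To make the attractor-finding problem concrete, the sketch below enumerates attractors of a tiny synchronous Boolean network by brute-force state-space traversal. This is only feasible for very small networks; it illustrates what the SAT-based, block-decomposed algorithm computes, not how it scales.

```python
from itertools import product

def find_attractors(update_fns):
    """Exhaustively enumerate attractors of a synchronous Boolean network.
    update_fns[i](state) returns the next value of gene i given the full state."""
    n = len(update_fns)
    step = lambda s: tuple(f(s) for f in update_fns)
    attractors = set()
    for start in product((0, 1), repeat=n):
        seen = set()
        s = start
        while s not in seen:            # walk the trajectory until a state repeats
            seen.add(s)
            s = step(s)
        cycle, t = [], s                # the cycle through the revisited state is an attractor
        while True:
            cycle.append(t)
            t = step(t)
            if t == s:
                break
        attractors.add(frozenset(cycle))
    return attractors

# toy 3-gene network: g0 is inhibited by g2, g1 is activated by g0, g2 by g1
fns = [lambda s: 1 - s[2], lambda s: s[0], lambda s: s[1]]
for a in find_attractors(fns):
    print(sorted(a))
```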

  12. A semi-active suspension control algorithm for vehicle comprehensive vertical dynamics performance

    Science.gov (United States)

    Nie, Shida; Zhuang, Ye; Liu, Weiping; Chen, Fan

    2017-08-01

    Comprehensive performance of the vehicle, including ride quality and road-holding, is of great practical value. Many up-to-date semi-active control algorithms improve vehicle dynamics performance effectively. However, it is hard to improve comprehensive performance because of the conflict between ride quality and road-holding around the second-order resonance. Hence, a new control algorithm is proposed to achieve a good trade-off between ride quality and road-holding. In this paper, the properties of the invariant points are analysed, which gives an insight into the performance conflict around the second-order resonance. Based on this, a new control algorithm is proposed. The algorithm employs a novel frequency selector to balance suspension ride and handling performance by adopting medium damping around the second-order resonance. The results of this study show that the proposed control algorithm could improve ride quality and suspension working space by up to 18.3% and 8.2%, respectively, with little loss of road-holding compared to the passive suspension. Consequently, the comprehensive performance can be improved by 6.6%. Hence, the proposed algorithm has great potential to be implemented in practice.

  13. Log-Linear Model Based Behavior Selection Method for Artificial Fish Swarm Algorithm

    Directory of Open Access Journals (Sweden)

    Zhehuang Huang

    2015-01-01

    Full Text Available The artificial fish swarm algorithm (AFSA) is a population-based optimization technique inspired by the social behavior of fish. In the past several years, AFSA has been successfully applied in many research and application areas. The behavior of the fish has a crucial impact on the performance of AFSA, such as its global exploration ability and convergence speed. How to construct and select the behaviors of the fish is therefore an important task. To address this, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. There are three main contributions. First, we propose a new behavior selection algorithm based on a log-linear model which can enhance the decision-making ability of behavior selection. Second, an adaptive movement behavior based on adaptive weights is presented, which can adjust dynamically according to the diversity of the fish. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve the global optimization capability. Experiments on high-dimensional function optimization showed that the improved algorithm has more powerful global exploration ability and reasonable convergence speed compared with the standard artificial fish swarm algorithm.
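
    A minimal sketch of a log-linear (softmax) behaviour selection step is shown below: behaviour scores are weighted sums of state features and a behaviour is sampled with probability proportional to the exponentiated score. The feature set, weights and temperature are hypothetical; the paper's exact model and training procedure are not reproduced.

```python
import math, random

def select_behavior(features, weights, temperature=1.0):
    """Log-linear (softmax) behaviour selection: each behaviour's score is a
    weighted sum of state features, and a behaviour is sampled with probability
    proportional to exp(score / temperature)."""
    scores = {b: sum(w * features.get(f, 0.0) for f, w in fw.items())
              for b, fw in weights.items()}
    m = max(scores.values())                      # subtract max for numerical stability
    expo = {b: math.exp((s - m) / temperature) for b, s in scores.items()}
    z = sum(expo.values())
    probs = {b: e / z for b, e in expo.items()}
    choice = random.choices(list(probs), weights=list(probs.values()))[0]
    return choice, probs

# hypothetical state features of one artificial fish and per-behaviour weights
features = {"crowding": 0.7, "food_gradient": 0.2}
weights = {"prey":   {"food_gradient": 2.0, "crowding": -0.5},
           "swarm":  {"crowding": 1.0},
           "follow": {"food_gradient": 1.0, "crowding": 0.5}}
print(select_behavior(features, weights))
```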

  14. Log-linear model based behavior selection method for artificial fish swarm algorithm.

    Science.gov (United States)

    Huang, Zhehuang; Chen, Yidong

    2015-01-01

    The artificial fish swarm algorithm (AFSA) is a population-based optimization technique inspired by the social behavior of fish. In the past several years, AFSA has been successfully applied in many research and application areas. The behavior of the fish has a crucial impact on the performance of AFSA, such as its global exploration ability and convergence speed. How to construct and select the behaviors of the fish is therefore an important task. To address this, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. There are three main contributions. First, we propose a new behavior selection algorithm based on a log-linear model which can enhance the decision-making ability of behavior selection. Second, an adaptive movement behavior based on adaptive weights is presented, which can adjust dynamically according to the diversity of the fish. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve the global optimization capability. Experiments on high-dimensional function optimization showed that the improved algorithm has more powerful global exploration ability and reasonable convergence speed compared with the standard artificial fish swarm algorithm.

  15. Solar Asset Management Software

    Energy Technology Data Exchange (ETDEWEB)

    Iverson, Aaron [Ra Power Management, Inc., Oakland, CA (United States); Zviagin, George [Ra Power Management, Inc., Oakland, CA (United States)

    2016-09-30

    Ra Power Management (RPM) has developed a cloud based software platform that manages the financial and operational functions of third party financed solar projects throughout their lifecycle. RPM’s software streamlines and automates the sales, financing, and management of a portfolio of solar assets. The software helps solar developers automate the most difficult aspects of asset management, leading to increased transparency, efficiency, and reduction in human error. More importantly, our platform will help developers save money by improving their operating margins.

  16. A parallel row-based algorithm for standard cell placement with integrated error control

    Science.gov (United States)

    Sargent, Jeff S.; Banerjee, Prith

    1989-01-01

    A new row-based parallel algorithm for standard-cell placement targeted for execution on a hypercube multiprocessor is presented. Key features of this implementation include a dynamic simulated-annealing schedule, row-partitioning of the VLSI chip image, and two novel approaches to control error in parallel cell-placement algorithms: (1) Heuristic Cell-Coloring; (2) Adaptive Sequence Length Control.

  17. The application of statistical methods to assess economic assets

    Directory of Open Access Journals (Sweden)

    D. V. Dianov

    2017-01-01

    Full Text Available The article is devoted to the valuation of machinery, equipment and special equipment, methodological aspects of the use of standards for the assessment of buildings and structures in current prices, the valuation of residential and specialized houses and office premises, the assessment and reassessment of existing and inactive military assets, and the application of statistical methods to obtain the relevant cost estimates. The objective of the article is to consider the possible application of statistical tools in the valuation of the assets that compose the core group of elements of national wealth, the fixed assets. Capital tangible assets constitute the material basis for the creation of new value, products and non-financial services. The gain accumulated in tangible assets of a capital nature is a part of the gross domestic product, and from its volume and share in the composition of GDP one can judge the scope of reproductive processes in the country. Based on the methodological materials of the state statistics bodies of the Russian Federation and the principles of statistical theory, which describe statistical analysis methods such as indices, averages and regression, a methodical approach is structured for applying statistical tools to obtain value estimates of property, plant and equipment with significant accumulated depreciation. Until now, the use of statistical methodology in the practice of the economic assessment of assets has been only fragmentary. This applies both to federal legislation (Federal Law No. 135 "On Valuation Activities in the Russian Federation" dated 16.07.1998, as amended 05.07.2016) and to the methodological documents and regulations governing valuation activities, in particular the valuation standards. A particular problem is the use of the digital database of Rosstat (Federal State Statistics Service), as to the specific fixed assets the comparison should be carried

  18. Partial multicanonical algorithm for molecular dynamics and Monte Carlo simulations.

    Science.gov (United States)

    Okumura, Hisashi

    2008-09-28

    Partial multicanonical algorithm is proposed for molecular dynamics and Monte Carlo simulations. The partial multicanonical simulation samples a wide range of a part of the potential-energy terms, which is necessary to sample the conformational space widely, whereas a wide range of total potential energy is sampled in the multicanonical algorithm. Thus, one can concentrate the effort to determine the weight factor only on the important energy terms in the partial multicanonical simulation. The partial multicanonical, multicanonical, and canonical molecular dynamics algorithms were applied to an alanine dipeptide in explicit water solvent. The canonical simulation sampled the states of P(II), C(5), alpha(R), and alpha(P). The multicanonical simulation covered the alpha(L) state as well as these states. The partial multicanonical simulation also sampled the C(7) (ax) state in addition to the states that were sampled by the multicanonical simulation. In the partial multicanonical simulation, furthermore, backbone dihedral angles phi and psi rotated more frequently than those in the multicanonical and canonical simulations. These results mean that the partial multicanonical algorithm has a higher sampling efficiency than the multicanonical and canonical algorithms.

  19. An Empirical Study of the Relationship between the Fixed Assets Investment and Urban-rural Income Gap during the Transition Period

    Institute of Scientific and Technical Information of China (English)

    Yingliang ZHANG; Xingxi LIU; Fang YANG; Yongbin GUAN

    2014-01-01

    As the gap in income between urban and rural residents grows bigger and bigger, this paper, based on data from 1978 to 2007, makes an empirical study of the dynamic relationship between fixed assets investment and the difference in income between urban and rural residents. The outcome of the study indicates that a long-term equilibrium exists between the rate of fixed assets investment and the difference in income between urban and rural residents. A short-term deviation from this equilibrium is corrected only over a long period. To a certain extent, the city-oriented fixed assets investment policy is the main cause of the big gap in income between urban and rural residents. The big gap in income between urban and rural residents in turn reinforces their relative social status, thus further strengthening the city-oriented, rather than countryside-oriented, fixed assets investment policy. Based on that, this paper puts forward some suggestions on adjusting the fixed assets investment policy so as to narrow the difference in income between urban and rural residents and realize the goal of harmonious development between city and countryside.

  20. Optimization of Algorithms Using Extensions of Dynamic Programming

    KAUST Repository

    AbouEisha, Hassan M.

    2017-01-01

    of the thesis presents a novel model of computation (element partition tree) that represents a class of algorithms for multi-frontal solvers along with cost functions reflecting various complexity measures such as: time and space. It then introduces dynamic

  1. Fuzzy PID control algorithm based on PSO and application in BLDC motor

    Science.gov (United States)

    Lin, Sen; Wang, Guanglong

    2017-06-01

    A fuzzy PID control algorithm based on improved particle swarm optimization (PSO) is studied for brushless DC (BLDC) motor control, which offers high accuracy, good disturbance rejection and good steady-state accuracy compared with traditional PID control. The mathematical and simulation model of the BLDC motor is established in Simulink, and the speed-loop fuzzy PID controller is designed. The simulation results show that the fuzzy PID control algorithm based on PSO has higher stability, high control precision and a faster dynamic response.
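
    The sketch below illustrates the tuning idea in simplified form: particle swarm optimization searches the gains of a plain (non-fuzzy) PID controller wrapped around a toy second-order plant, minimising an ITAE cost on the step response. The plant, cost function and PSO settings are assumptions; the paper tunes a fuzzy PID for a BLDC motor model in Simulink.

```python
import numpy as np

def step_response_cost(gains, sim_time=2.0, dt=0.001):
    """Simulate a PID loop around a simple second-order plant and return the
    integral of time-weighted absolute error (ITAE) for a unit step setpoint."""
    kp, ki, kd = gains
    y = v = integral = prev_err = 0.0
    itae = 0.0
    for k in range(int(sim_time / dt)):
        err = 1.0 - y
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv
        prev_err = err
        a = u - 2.0 * v - y          # toy plant: y'' + 2 y' + y = u
        v += a * dt
        y += v * dt
        itae += (k * dt) * abs(err) * dt
    return itae

def pso(cost, bounds, n_particles=15, iters=40, w=0.7, c1=1.5, c2=1.5, seed=5):
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    x = rng.uniform(lo, hi, size=(n_particles, len(bounds)))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([cost(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

print(pso(step_response_cost, bounds=[(0, 50), (0, 50), (0, 5)]))
```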

  2. ASSET guidelines

    International Nuclear Information System (INIS)

    1990-11-01

    The IAEA Assessment of Safety Significant Events Team (ASSET) Service provides advice and assistance to Member States to enhance the overall level of plant safety while dealing with the policy of prevention of incidents at nuclear power plants. The ASSET programme, initiated in 1986, is not restricted to any particular group of Member States, whether developing or industrialized, but is available to all countries with nuclear power plants in operation or approaching commercial operation. The IAEA Safety Series publications form common basis for the ASSET reviews, including the Nuclear Safety Standards (NUSS) and the Basic Safety Principles (Recommendations of Safety Series No. 75-INSAG-3). The ASSET Guidelines provide overall guidance for the experts to ensure the consistency and comprehensiveness of their review of incident investigations. Additional guidance and reference material is provided by the IAEA to complement the expertise of the ASSET members. ASSET reviews accept different approaches that contribute to ensuring an effective prevention of incidents at plants. Suggestions are offered to enhance plant safety performance. Commendable good practices are identified and generic lessons are communicated to other plants, where relevant, for long term improvement

  3. A hybrid algorithm for parallel molecular dynamics simulations

    Science.gov (United States)

    Mangiardi, Chris M.; Meyer, R.

    2017-10-01

    This article describes algorithms for the hybrid parallelization and SIMD vectorization of molecular dynamics simulations with short-range forces. The parallelization method combines domain decomposition with a thread-based parallelization approach. The goal of the work is to enable efficient simulations of very large (tens of millions of atoms) and inhomogeneous systems on many-core processors with hundreds or thousands of cores and SIMD units with large vector sizes. In order to test the efficiency of the method, simulations of a variety of configurations with up to 74 million atoms have been performed. Results are shown that were obtained on multi-core systems with Sandy Bridge and Haswell processors as well as systems with Xeon Phi many-core processors.

  4. Artificial bee colony algorithm with dynamic multi-population

    Science.gov (United States)

    Zhang, Ming; Ji, Zhicheng; Wang, Yan

    2017-07-01

    To improve the convergence rate and strike a balance between global search and local fine-tuning abilities, this paper proposes a decentralized form of the artificial bee colony (ABC) algorithm with dynamic multiple populations formed by means of fuzzy C-means (FCM) clustering. Each subpopulation periodically enlarges by the same amount during the search process, and the overlapping individuals among different subareas deliver information, in effect exploring the search space through the diffusion of solutions. Moreover, a Gaussian-based search equation with a redefined local attractor is proposed to further accelerate the diffusion of the best solution and guide the search towards potential areas. Experimental results on a set of benchmarks demonstrate the competitive performance of the proposed approach.

  5. Using Self-Adaptive Evolutionary Algorithms to Evolve Dynamism-Oriented Maps for a Real Time Strategy Game

    OpenAIRE

    Lara-Cabrera, Raúl; Cotta, Carlos; Fernández Leiva, Antonio J.

    2013-01-01

    This work presents a procedural content generation system that uses an evolutionary algorithm in order to generate interesting maps for a real-time strategy game, called Planet Wars. Interestingness is here captured by the dynamism of games (i.e., the extent to which they are action-packed). We consider two different approaches to measure the dynamism of the games resulting from these generated maps, one based on fluctuations in the resources controlled by either player and another one based ...

  6. Comprehensive transportation asset management : risk-based inventory expansion and data needs.

    Science.gov (United States)

    2011-12-01

    Several agencies are applying asset management principles as a business tool and paradigm to help them define goals and prioritize agency resources in decision making. Previously, transportation asset management (TAM) has focused more on big ticke...

  7. Dynamic airspace configuration by genetic algorithm

    Directory of Open Access Journals (Sweden)

    Marina Sergeeva

    2017-06-01

    Full Text Available With continuous air traffic growth and limited resources, there is a need to reduce congestion in airspace systems. Several projects have now been launched aimed at modernizing the global air transportation system and air traffic management. In recent years, special interest has been paid to solving the dynamic airspace configuration problem. Airspace sector configurations need to be dynamically adjusted to provide maximum efficiency and flexibility in response to changing weather and traffic conditions. The main objective of this work is to automatically adapt airspace configurations according to the evolution of traffic. In order to reach this objective, the airspace is considered to be divided into predefined 3D airspace blocks which have to be grouped or ungrouped depending on the traffic situation. The airspace structure is represented as a graph and each airspace configuration is created using a graph partitioning technique. We optimize airspace configurations using a genetic algorithm. The developed algorithm generates a sequence of sector configurations for one day of operation with minimized controller workload. The overall methodology is implemented and successfully tested with air traffic data for one day and for several different airspace control areas of Europe.

  8. Quantum algorithm for simulating the dynamics of an open quantum system

    International Nuclear Information System (INIS)

    Wang Hefeng; Ashhab, S.; Nori, Franco

    2011-01-01

    In the study of open quantum systems, one typically obtains the decoherence dynamics by solving a master equation. The master equation is derived using knowledge of some basic properties of the system, the environment, and their interaction: One basically needs to know the operators through which the system couples to the environment and the spectral density of the environment. For a large system, it could become prohibitively difficult to even write down the appropriate master equation, let alone solve it on a classical computer. In this paper, we present a quantum algorithm for simulating the dynamics of an open quantum system. On a quantum computer, the environment can be simulated using ancilla qubits with properly chosen single-qubit frequencies and with properly designed coupling to the system qubits. The parameters used in the simulation are easily derived from the parameters of the system + environment Hamiltonian. The algorithm is designed to simulate Markovian dynamics, but it can also be used to simulate non-Markovian dynamics provided that this dynamics can be obtained by embedding the system of interest into a larger system that obeys Markovian dynamics. We estimate the resource requirements for the algorithm. In particular, we show that for sufficiently slow decoherence a single ancilla qubit could be sufficient to represent the entire environment, in principle.

  9. Development of an international scale of socio-economic position based on household assets.

    Science.gov (United States)

    Townend, John; Minelli, Cosetta; Harrabi, Imed; Obaseki, Daniel O; El-Rhazi, Karima; Patel, Jaymini; Burney, Peter

    2015-01-01

    The importance of studying associations between socio-economic position and health has often been highlighted. Previous studies have linked the prevalence and severity of lung disease with national wealth and with socio-economic position within some countries but there has been no systematic evaluation of the association between lung function and poverty at the individual level on a global scale. The BOLD study has collected data on lung function for individuals in a wide range of countries, however a barrier to relating this to personal socio-economic position is the need for a suitable measure to compare individuals within and between countries. In this paper we test a method for assessing socio-economic position based on the scalability of a set of durable assets (Mokken scaling), and compare its usefulness across countries of varying gross national income per capita. Ten out of 15 candidate asset questions included in the questionnaire were found to form a Mokken type scale closely associated with GNI per capita (Spearman's rank rs = 0.91, p = 0.002). The same set of assets conformed to a scale in 7 out of the 8 countries, the remaining country being Saudi Arabia where most respondents owned most of the assets. There was good consistency in the rank ordering of ownership of the assets in the different countries (Cronbach's alpha = 0.96). Scores on the Mokken scale were highly correlated with scores developed using principal component analysis (rs = 0.977). Mokken scaling is a potentially valuable tool for uncovering links between disease and socio-economic position within and between countries. It provides an alternative to currently used methods such as principal component analysis for combining personal asset data to give an indication of individuals' relative wealth. Relative strengths of the Mokken scale method were considered to be ease of interpretation, adaptability for comparison with other datasets, and reliability of imputation for even quite
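
    For orientation, the sketch below computes Loevinger's scalability coefficient H for a set of binary asset items, the quantity on which Mokken scaling is based: H equals one minus the ratio of observed to expected Guttman errors. Ties in item popularity are simply skipped here, and the full Mokken procedure (item selection, monotonicity checks) is not shown, so this is only an illustrative approximation of the method used in the study.

```python
import numpy as np

def loevinger_H(X):
    """Scale scalability coefficient H for binary items (rows = respondents,
    columns = items): H = 1 - observed Guttman errors / expected errors."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    p = X.mean(axis=0)                          # popularity of each item
    observed = expected = 0.0
    for i in range(k):
        for j in range(k):
            if p[i] > p[j]:                     # item i is the 'easier' (more owned) asset
                observed += np.sum((X[:, j] == 1) & (X[:, i] == 0))
                expected += n * p[j] * (1 - p[i])
    return 1.0 - observed / expected

# toy asset ownership matrix: rows are households, columns are assets
# (a perfect Guttman pattern, so H should come out as 1.0)
X = np.array([[1, 0, 0],
              [1, 1, 0],
              [1, 1, 1],
              [1, 0, 0],
              [0, 0, 0],
              [1, 1, 0]])
print(round(loevinger_H(X), 3))
```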

  10. A Novel Immune-Inspired Shellcode Detection Algorithm Based on Hyperellipsoid Detectors

    Directory of Open Access Journals (Sweden)

    Tianliang Lu

    2018-01-01

    Full Text Available Shellcodes are machine language codes injected into target programs in the form of network packets or malformed files. Shellcodes can trigger buffer overflow vulnerabilities and execute malicious instructions. Signature matching technology used by antivirus software or intrusion detection systems has a low detection rate for unknown or polymorphic shellcodes; to solve this problem, an immune-inspired shellcode detection algorithm, named ISDA, is proposed. Static analysis and dynamic analysis were both applied. The shellcodes were disassembled into assembly instructions during static analysis and, for dynamic analysis, the API function sequences of the shellcodes were obtained by simulated execution to capture the behavioral features of polymorphic shellcodes. The extracted features of the shellcodes were encoded into antigens based on an n-gram model. Immature detectors become mature after immune tolerance based on the negative selection algorithm. To improve the coverage rate of the nonself space, the immune detectors were encoded as hyperellipsoids. To generate better antibody offspring, the detectors were optimized through a clonal selection algorithm with genetic mutation. Finally, shellcode samples were collected and tested, and the results show that the proposed method has higher detection accuracy for both non-encoded and polymorphic shellcodes.

  11. A risk-based approach to sanitary sewer pipe asset management.

    Science.gov (United States)

    Baah, Kelly; Dubey, Brajesh; Harvey, Richard; McBean, Edward

    2015-02-01

    Wastewater collection systems are an important component of proper management of wastewater to prevent environmental and human health implications from mismanagement of anthropogenic waste. Due to aging and inadequate asset management practices, the wastewater collection assets of many cities around the globe are in a state of rapid decline and in need of urgent attention. Risk management is a tool which can help prioritize resources to better manage and rehabilitate wastewater collection systems. In this study, a risk matrix and a weighted sum multi-criteria decision-matrix are used to assess the consequence and risk of sewer pipe failure for a mid-sized city, using ArcGIS. The methodology shows that six percent of the uninspected sewer pipe assets of the case study have a high consequence of failure while four percent of the assets have a high risk of failure and hence provide priorities for inspection. A map incorporating risk of sewer pipe failure and consequence is developed to facilitate future planning, rehabilitation and maintenance programs. The consequence of failure assessment also includes a novel failure impact factor which captures the effect of structurally defective stormwater pipes on the failure assessment. The methodology recommended in this study can serve as a basis for future planning and decision making and has the potential to be universally applied by municipal sewer pipe asset managers globally to effectively manage the sanitary sewer pipe infrastructure within their jurisdiction. Copyright © 2014 Elsevier B.V. All rights reserved.
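
    As a minimal illustration of the risk-matrix step, the sketch below combines a likelihood score and a consequence score into a risk category. The 1-to-5 scales and the category thresholds are generic assumptions, not the weighting scheme or GIS workflow used in the study.

```python
def risk_category(likelihood, consequence):
    """Classic 5x5 risk matrix: both inputs are scores from 1 (lowest) to 5 (highest)."""
    score = likelihood * consequence
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# hypothetical sewer pipe records: (pipe id, likelihood of failure, consequence of failure)
pipes = [("P-101", 4, 5), ("P-102", 2, 2), ("P-103", 3, 3)]
for pid, likelihood, consequence in pipes:
    print(pid, risk_category(likelihood, consequence))
```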

  12. Matrix product algorithm for stochastic dynamics on networks applied to nonequilibrium Glauber dynamics

    Science.gov (United States)

    Barthel, Thomas; De Bacco, Caterina; Franz, Silvio

    2018-01-01

    We introduce and apply an efficient method for the precise simulation of stochastic dynamical processes on locally treelike graphs. Networks with cycles are treated in the framework of the cavity method. Such models correspond, for example, to spin-glass systems, Boolean networks, neural networks, or other technological, biological, and social networks. Building upon ideas from quantum many-body theory, our approach is based on a matrix product approximation of the so-called edge messages—conditional probabilities of vertex variable trajectories. Computation costs and accuracy can be tuned by controlling the matrix dimensions of the matrix product edge messages (MPEM) in truncations. In contrast to Monte Carlo simulations, the algorithm has a better error scaling and works for both single instances as well as the thermodynamic limit. We employ it to examine prototypical nonequilibrium Glauber dynamics in the kinetic Ising model. Because of the absence of cancellation effects, observables with small expectation values can be evaluated accurately, allowing for the study of decay processes and temporal correlations.

  13. INTEGRATING CASE-BASED REASONING, KNOWLEDGE-BASED APPROACH AND TSP ALGORITHM FOR MINIMUM TOUR FINDING

    Directory of Open Access Journals (Sweden)

    Hossein Erfani

    2009-07-01

    Full Text Available Imagine you have traveled to an unfamiliar city. Before you start your daily tour around the city, you need to know a good route. In Network Theory (NT), this is the traveling salesman problem (TSP). A dynamic programming algorithm is often used for solving this problem. However, when the road network of the city is very complicated and dense, which is usually the case, it will take too long for the algorithm to find the shortest path. Furthermore, in reality, things are not as simple as those stated in NT. For instance, the cost of travel for the same part of the city at different times may not be the same. In this project, we have integrated the TSP algorithm with an AI knowledge-based approach and case-based reasoning in solving the problem. With this integration, knowledge about the geographical information and past cases is used to help the TSP algorithm in finding a solution. This approach dramatically reduces the computation time required for minimum tour finding.
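
    For reference, the dynamic programming approach mentioned above is typically the Held-Karp recursion, sketched below in Python for a tiny symmetric distance matrix. Its O(n^2 * 2^n) state space is exactly why the record argues that plain DP becomes too slow on dense city road networks and needs knowledge-based support; the distance matrix here is an arbitrary example.

        # Minimal Held-Karp dynamic-programming solver for small TSP instances.
        from itertools import combinations

        def held_karp(dist):
            n = len(dist)
            # dp[(subset, j)] = cheapest cost of starting at city 0, visiting subset, ending at j
            dp = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
            for size in range(2, n):
                for subset in combinations(range(1, n), size):
                    s = frozenset(subset)
                    for j in subset:
                        dp[(s, j)] = min(dp[(s - {j}, k)] + dist[k][j]
                                         for k in subset if k != j)
            full = frozenset(range(1, n))
            return min(dp[(full, j)] + dist[j][0] for j in range(1, n))

        dist = [[0, 2, 9, 10],
                [2, 0, 6, 4],
                [9, 6, 0, 3],
                [10, 4, 3, 0]]
        print(held_karp(dist))  # length of the shortest closed tour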

  14. Global Tactical Cross-Asset Allocation: Applying Value and Momentum Across Asset Classes

    NARCIS (Netherlands)

    D.C. Blitz (David); P. van Vliet (Pim)

    2008-01-01

    textabstractIn this paper we examine global tactical asset allocation (GTAA) strategies across a broad range of asset classes. Contrary to market timing for single asset classes and tactical allocation across similar assets, this topic has received little attention in the existing literature. Our

  15. Parallel conjugate gradient algorithms for manipulator dynamic simulation

    Science.gov (United States)

    Fijany, Amir; Scheld, Robert E.

    1989-01-01

    Parallel conjugate gradient algorithms for the computation of multibody dynamics are developed for the specialized case of a robot manipulator. For an n-dimensional positive-definite linear system, the Classical Conjugate Gradient (CCG) algorithms are guaranteed to converge in n iterations, each with a computation cost of O(n); this leads to a total computational cost of O(n^2) on a serial processor. A conjugate gradient algorithm is presented that provides greater efficiency by using a preconditioner, which reduces the number of iterations required, and by exploiting parallelism, which reduces the cost of each iteration. Two Preconditioned Conjugate Gradient (PCG) algorithms are proposed which respectively use a diagonal and a tridiagonal matrix, composed of the diagonal and tridiagonal elements of the mass matrix, as preconditioners. Parallel algorithms are developed to compute the preconditioners and their inversions in O(log2 n) steps using n processors. A parallel algorithm is also presented which, on the same architecture, achieves the computational time of O(log2 n) for each iteration. Simulation results for a seven degree-of-freedom manipulator are presented. Variants of the proposed algorithms are also developed which can be efficiently implemented on the Robot Mathematics Processor (RMP).
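
    A minimal serial NumPy sketch of the preconditioned conjugate gradient idea follows, using the diagonal of the matrix as a Jacobi-style preconditioner in the spirit of the first PCG variant above; the parallel O(log2 n) scheduling and the tridiagonal variant are not reproduced, and the random matrix only stands in for a mass matrix.

        # Preconditioned conjugate gradient with a diagonal (Jacobi) preconditioner.
        import numpy as np

        def pcg(A, b, tol=1e-10, max_iter=None):
            n = len(b)
            max_iter = max_iter or n
            M_inv = 1.0 / np.diag(A)          # diagonal (Jacobi) preconditioner
            x = np.zeros(n)
            r = b - A @ x
            z = M_inv * r
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol:
                    break
                z = M_inv * r
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x

        # Example on a random symmetric positive-definite system (stand-in for a mass matrix).
        rng = np.random.default_rng(0)
        Q = rng.standard_normal((7, 7))
        A = Q @ Q.T + 7 * np.eye(7)
        b = rng.standard_normal(7)
        print(np.allclose(A @ pcg(A, b), b))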

  16. Fast stochastic algorithm for simulating evolutionary population dynamics

    Science.gov (United States)

    Tsimring, Lev; Hasty, Jeff; Mather, William

    2012-02-01

    Evolution and co-evolution of ecological communities are stochastic processes often characterized by vastly different rates of reproduction and mutation and a coexistence of very large and very small sub-populations of co-evolving species. This creates serious difficulties for accurate statistical modeling of evolutionary dynamics. In this talk, we introduce a new exact algorithm for fast fully stochastic simulations of birth/death/mutation processes. It produces a significant speedup compared to the direct stochastic simulation algorithm in a typical case when the total population size is large and the mutation rates are much smaller than birth/death rates. We illustrate the performance of the algorithm on several representative examples: evolution on a smooth fitness landscape, NK model, and stochastic predator-prey system.
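
    For context, the baseline that such methods accelerate is the direct Gillespie stochastic simulation algorithm. The sketch below applies it to a two-genotype birth/death/mutation process; the rates and the two-genotype setup are illustrative assumptions, and the record's faster algorithm itself is not reproduced here.

        # Direct Gillespie simulation of a simple birth/death/mutation process.
        import random

        def ssa_birth_death_mutation(n=(100, 0), birth=1.0, death=0.9, mu=1e-3, t_end=10.0):
            n = list(n)                           # populations of genotype 0 and mutant genotype 1
            t = 0.0
            while t < t_end and sum(n) > 0:
                rates = [birth * n[0] * (1.0 - mu),   # faithful birth of genotype 0
                         birth * n[0] * mu,           # birth of genotype 0 with mutation to 1
                         death * n[0],                # death of genotype 0
                         birth * n[1],                # birth of genotype 1
                         death * n[1]]                # death of genotype 1
                total = sum(rates)
                t += random.expovariate(total)        # exponentially distributed waiting time
                r = random.uniform(0.0, total)        # choose which reaction fires
                if r < rates[0]:
                    n[0] += 1
                elif r < sum(rates[:2]):
                    n[1] += 1
                elif r < sum(rates[:3]):
                    n[0] -= 1
                elif r < sum(rates[:4]):
                    n[1] += 1
                else:
                    n[1] -= 1
            return t, n

        print(ssa_birth_death_mutation())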

  17. Dynamic Synchronous Capture Algorithm for an Electromagnetic Flowmeter.

    Science.gov (United States)

    Fanjiang, Yong-Yi; Lu, Shih-Wei

    2017-04-10

    This paper proposes a dynamic synchronous capture (DSC) algorithm to calculate the flow rate for an electromagnetic flowmeter. The DSC algorithm accurately calculates the flow rate signal and efficiently converts the analog signal, which improves the execution performance of the microcontroller unit (MCU). Furthermore, it can reduce interference from abnormal noise. It is extremely stable and insensitive to fluctuations in the flow measurement. Moreover, it can calculate the current flow rate (in m/s) immediately. The DSC algorithm can be applied to a current general-purpose MCU firmware platform without using DSP (Digital Signal Processing) or a high-speed, high-end MCU platform, and signal amplification by hardware reduces the demand for ADC accuracy, which reduces the cost.

  18. Development of transportation asset management decision support tools : final report.

    Science.gov (United States)

    2017-08-09

    This study developed a web-based prototype decision support platform to demonstrate the benefits of transportation asset management in monitoring asset performance, supporting asset funding decisions, planning budget tradeoffs, and optimizing resourc...

  19. Discrete bacteria foraging optimization algorithm for graph based problems - a transition from continuous to discrete

    Science.gov (United States)

    Sur, Chiranjib; Shukla, Anupam

    2018-03-01

    The Bacteria Foraging Optimisation Algorithm is a collective behaviour-based meta-heuristic search method that depends on the social influence of the bacteria co-agents in the search space of the problem. The algorithm faces considerable hindrance in its application to discrete and graph-based problems because of its biased mathematical modelling and dynamic structure. This has been the key motivation to revive the algorithm and introduce a discrete form, called the Discrete Bacteria Foraging Optimisation (DBFO) Algorithm, for discrete problems, which in real life outnumber the continuous-domain problems represented by mathematical and numerical equations. In this work, we have mainly simulated a graph-based road multi-objective optimisation problem and have discussed the prospect of its utilisation in other similar optimisation problems and graph-based problems. The various solution representations that can be handled by this DBFO have also been discussed. The implications and dynamics of the various parameters used in the DBFO are illustrated from the point of view of the problems and combine both exploration and exploitation. The results of DBFO have been compared with the Ant Colony Optimisation and Intelligent Water Drops algorithms. Important features of DBFO are that the bacteria agents do not depend on local heuristic information but estimate new exploration schemes depending upon previous experience and covered-path analysis. This makes the algorithm better at generating combinations for graph-based problems and for NP-hard problems.

  20. Using Photovoice and Asset Mapping to Inform a Community-Based Diabetes Intervention, Boston, Massachusetts, 2015.

    Science.gov (United States)

    Florian, Jana; Roy, Nicole M St Omer; Quintiliani, Lisa M; Truong, Ve; Feng, Yi; Bloch, Philippe P; Russinova, Zlatka L; Lasser, Karen E

    2016-08-11

    Diabetes self-management takes place within a complex social and environmental context.  This study's objective was to examine the perceived and actual presence of community assets that may aid in diabetes control. We conducted one 6-hour photovoice session with 11 adults with poorly controlled diabetes in Boston, Massachusetts.  Participants were recruited from census tracts with high numbers of people with poorly controlled diabetes (diabetes "hot spots").  We coded the discussions and identified relevant themes.  We further explored themes related to the built environment through community asset mapping.  Through walking surveys, we evaluated 5 diabetes hot spots related to physical activity resources, walking environment, and availability of food choices in restaurants and food stores. Community themes from the photovoice session were access to healthy food, restaurants, and prepared foods; food assistance programs; exercise facilities; and church.  Asset mapping identified 114 community assets including 22 food stores, 22 restaurants, and 5 exercise facilities.  Each diabetes hot spot contained at least 1 food store with 5 to 9 varieties of fruits and vegetables.  Only 1 of the exercise facilities had signage regarding hours or services.  Memberships ranged from free to $9.95 per month.  Overall, these findings were inconsistent with participants' reports in the photovoice group. We identified a mismatch between perceptions of community assets and built environment and the objective reality of that environment. Incorporating photovoice and community asset mapping into a community-based diabetes intervention may bring awareness to underused neighborhood resources that can help people control their diabetes.

  1. A dynamic inertia weight particle swarm optimization algorithm

    International Nuclear Information System (INIS)

    Jiao Bin; Lian Zhigang; Gu Xingsheng

    2008-01-01

    Particle swarm optimization (PSO) algorithm has been developing rapidly and has been applied widely since it was introduced, as it is easily understood and realized. This paper presents an improved particle swarm optimization algorithm (IPSO) to improve the performance of standard PSO, which uses a dynamic inertia weight that decreases as the iteration count increases. It is tested with a set of 6 benchmark functions with 30, 50 and 150 different dimensions and compared with standard PSO. Experimental results indicate that the IPSO improves the search performance on the benchmark functions significantly.
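
    A minimal sketch of PSO with a linearly decreasing inertia weight, the mechanism this record describes, follows. The 0.9 to 0.4 schedule, the acceleration coefficients and the sphere benchmark are common textbook defaults rather than the paper's exact settings.

        # PSO with an inertia weight that shrinks linearly as the iteration count grows.
        import numpy as np

        def pso_dynamic_inertia(f, dim=30, swarm=40, iters=500, w_max=0.9, w_min=0.4,
                                c1=2.0, c2=2.0, lo=-5.0, hi=5.0, seed=0):
            rng = np.random.default_rng(seed)
            x = rng.uniform(lo, hi, (swarm, dim))
            v = np.zeros((swarm, dim))
            pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
            g = pbest[np.argmin(pbest_val)].copy()
            for t in range(iters):
                w = w_max - (w_max - w_min) * t / iters    # inertia decreases per generation
                r1, r2 = rng.random((swarm, dim)), rng.random((swarm, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                vals = np.apply_along_axis(f, 1, x)
                improved = vals < pbest_val
                pbest[improved], pbest_val[improved] = x[improved], vals[improved]
                g = pbest[np.argmin(pbest_val)].copy()
            return g, pbest_val.min()

        sphere = lambda z: float(np.sum(z ** 2))   # one of the usual benchmark functions
        best, best_val = pso_dynamic_inertia(sphere, dim=30)
        print(best_val)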

  2. Global Tactical Cross-Asset Allocation: Applying Value and Momentum Across Asset Classes

    OpenAIRE

    Blitz, D.C.; van Vliet, P.

    2008-01-01

    textabstractIn this paper we examine global tactical asset allocation (GTAA) strategies across a broad range of asset classes. Contrary to market timing for single asset classes and tactical allocation across similar assets, this topic has received little attention in the existing literature. Our main finding is that momentum and value strategies applied to GTAA across twelve asset classes deliver statistically and economically significant abnormal returns. For a long top-quartile and short b...

  3. Combinatorial Optimization Algorithms for Dynamic Multiple Fault Diagnosis in Automotive and Aerospace Applications

    Science.gov (United States)

    Kodali, Anuradha

    In this thesis, we develop dynamic multiple fault diagnosis (DMFD) algorithms to diagnose faults that are sporadic and coupled. Firstly, we formulate a coupled factorial hidden Markov model-based (CFHMM) framework to diagnose dependent faults occurring over time (dynamic case). Here, we implement a mixed memory Markov coupling model to determine the most likely sequence of (dependent) fault states, the one that best explains the observed test outcomes over time. An iterative Gauss-Seidel coordinate ascent optimization method is proposed for solving the problem. A soft Viterbi algorithm is also implemented within the framework for decoding dependent fault states over time. We demonstrate the algorithm on simulated and real-world systems with coupled faults; the results show that this approach improves the correct isolation rate as compared to the formulation where independent fault states are assumed. Secondly, we formulate a generalization of set-covering, termed dynamic set-covering (DSC), which involves a series of coupled set-covering problems over time. The objective of the DSC problem is to infer the most probable time sequence of a parsimonious set of failure sources that explains the observed test outcomes over time. The DSC problem is NP-hard and intractable due to the fault-test dependency matrix that couples the failed tests and faults via the constraint matrix, and the temporal dependence of failure sources over time. Here, the DSC problem is motivated from the viewpoint of a dynamic multiple fault diagnosis problem, but it has wide applications in operations research, for e.g., facility location problem. Thus, we also formulated the DSC problem in the context of a dynamically evolving facility location problem. Here, a facility can be opened, closed, or can be temporarily unavailable at any time for a given requirement of demand points. These activities are associated with costs or penalties, viz., phase-in or phase-out for the opening or closing of a

  4. Overlapping community detection based on link graph using distance dynamics

    Science.gov (United States)

    Chen, Lei; Zhang, Jing; Cai, Li-Jun

    2018-01-01

    The distance dynamics model was recently proposed to detect the disjoint community of a complex network. To identify the overlapping structure of a network using the distance dynamics model, an overlapping community detection algorithm, called L-Attractor, is proposed in this paper. The process of L-Attractor mainly consists of three phases. In the first phase, L-Attractor transforms the original graph to a link graph (a new edge graph) to assure that one node has multiple distances. In the second phase, using the improved distance dynamics model, a dynamic interaction process is introduced to simulate the distance dynamics (shrink or stretch). Through the dynamic interaction process, all distances converge, and the disjoint community structure of the link graph naturally manifests itself. In the third phase, a recovery method is designed to convert the disjoint community structure of the link graph to the overlapping community structure of the original graph. Extensive experiments are conducted on the LFR benchmark networks as well as real-world networks. Based on the results, our algorithm demonstrates higher accuracy and quality than other state-of-the-art algorithms.

  5. STREAM PROCESSING ALGORITHMS FOR DYNAMIC 3D SCENE ANALYSIS

    Science.gov (United States)

    2018-02-15

    [Report front matter (contract and grant numbers) and list-of-figures residue removed. Recoverable content: the report, under contract FA8750-14-2-0072, describes a 3D processing pipeline for dynamic 3D scene analysis whose key modules include a structure-from-motion and bundle adjustment algorithm and fusion of depth masks of the scene.]

  6. The model of asset management of commercial banks

    OpenAIRE

    Shaymardanov, Shakhzod; Nuriddinov, Sadriddin; Mamadaliev, Donierbek; Murodkhonov, Mukhammad

    2018-01-01

    The main objective of the commercial bank's policy in the sphere of asset and liability management is to maintain the optimal structure of assets and liabilities, ensure the compliance of amounts, terms and currency of attracting and allocating resources. The objectives and principles of asset and liability management are based on the bank's strategy and the fundamental principles of the risk management policy.

  7. The Diversification Benefits of Including Carbon Assets in Financial Portfolios

    Directory of Open Access Journals (Sweden)

    Yinpeng Zhang

    2017-03-01

    Full Text Available Carbon allowances traded in the EU Emission Trading Scheme (EU-ETS) were initially designed as an economic instrument for efficiently curbing greenhouse gas emissions, but they now mimic quite a few characteristics of financial assets and have been used as candidate products in building financial portfolios. In this study, we examine the time-varying correlations between carbon allowance prices and other financial indices during the third phase of the EU-ETS. The results show that, at the beginning of this period, the carbon price was still strongly correlated with other financial indices. However, this connection weakened over time. Given the relative independence of carbon assets from other financial assets, we argue for the diversification benefits of including carbon assets in financial portfolios, and build such portfolios, respectively, with the traditional global minimum variance (GMV) strategy, the mean-variance-OGARCH (MV-OGARCH) strategy, and the dynamic conditional correlation (DCC) strategy. It is shown that the portfolio built with the MV-OGARCH strategy far outperforms the others and that including carbon assets in financial portfolios does help reduce investment risks.
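
    Of the three portfolio strategies named above, the global minimum variance (GMV) weights are simple enough to sketch: w = inv(Sigma) 1 / (1' inv(Sigma) 1). The snippet below estimates them from randomly generated stand-in return series rather than actual index and carbon-allowance data.

        # Global minimum variance weights estimated from a T x N matrix of asset returns.
        import numpy as np

        def gmv_weights(returns):
            cov = np.cov(returns, rowvar=False)
            ones = np.ones(cov.shape[0])
            w = np.linalg.solve(cov, ones)     # inv(Sigma) @ 1 without an explicit inverse
            return w / w.sum()

        rng = np.random.default_rng(1)
        returns = rng.normal(0.0005, 0.01, size=(500, 4))   # placeholder for real return data
        w = gmv_weights(returns)
        print(w, w.sum())                                    # weights sum to 1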

  8. The strategic importance of identifying knowledge-based and intangible assets for generating value, competitiveness and innovation in sub-Saharan Africa

    Directory of Open Access Journals (Sweden)

    Nicoline Ondari-Okemwa

    2011-01-01

    Full Text Available This article discusses the strategic importance of identifying intangible assets for creating value and enhancing competitiveness and innovation in science and technology in a knowledge economy with particular reference to the sub- Saharan Africa region. It has always been difficult to gather the prerequisite information to manage such assets and create value from them. The paper discusses the nature of intangible assets, the characteristics of a knowledge economy and the role of knowledge workers in a knowledge economy. The paper also discusses the importance of identifying intangible assets in relation to capturing the value of such assets, the transfer of intangible assets to other owners and the challenges of managing organizational intangible assets. Objectives of the article include: underscoring the strategic importance of identifying intangible assets in sub-Saharan Africa; examining the performance of intangible assets in a knowledge economy; how intangible assets may generate competitiveness, economic growth and innovation; and assess how knowledge workers are becoming a dominant factor in the knowledge economy. An extensive literature review was employed to collect data for this article. It is concluded in the article that organizations and governments in sub-Saharan Africa should look at knowledge-based assets as strategic resources, even though the traditional accounting systems may still be having problems in determining the exact book value of such assets. It is recommended that organizations and government departments in sub-Saharan Africa should implement a system of the reporting of the value of intangible organizational assets just like the reporting of the value of tangible assets; and that organizations in sub-Saharan Africa should use knowledge to produce “smart products and services” which command premium prices.

  9. Hierarchical Control Strategy for Active Hydropneumatic Suspension Vehicles Based on Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Jinzhi Feng

    2015-02-01

    Full Text Available A new hierarchical control strategy for active hydropneumatic suspension systems is proposed. This strategy considers the dynamic characteristics of the actuator. The top-hierarchy controller uses a combined control scheme: a genetic algorithm (GA) based self-tuning proportional-integral-derivative controller and a fuzzy logic controller. For practical implementations of the proposed control scheme, a GA-based self-learning process is initiated only when the defined performance index of vehicle dynamics exceeds a certain debounce time threshold. The designed control algorithm is implemented on a virtual prototype and cosimulations are performed with different road disturbance inputs. Cosimulation results show that the active hydropneumatic suspension system designed in this study significantly improves the riding comfort characteristics of vehicles. The robustness and adaptability of the proposed controller are also examined when the control system is subjected to extremely rough road conditions.
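
    As a toy illustration of GA-based PID self-tuning of the kind described, the sketch below evolves (Kp, Ki, Kd) gains that minimize the integral of squared error for a simple discretized first-order plant. The plant, GA settings and gain bounds are assumptions for the sketch and bear no relation to the paper's hydropneumatic suspension model or its fuzzy layer.

        # Real-coded GA tuning PID gains against an ISE cost on a toy first-order plant.
        import random

        def ise(gains, setpoint=1.0, dt=0.01, steps=500):
            # Integral of squared error for the plant dy/dt = (-y + u) / tau.
            kp, ki, kd = gains
            y, integ, prev_err, tau, cost = 0.0, 0.0, setpoint, 0.5, 0.0
            for _ in range(steps):
                err = setpoint - y
                integ += err * dt
                deriv = (err - prev_err) / dt
                u = kp * err + ki * integ + kd * deriv
                y += dt * (-y + u) / tau
                cost += err * err * dt
                prev_err = err
                if abs(y) > 1e6:        # penalize controllers that drive the plant unstable
                    return 1e9
            return cost

        def ga_tune(pop_size=30, generations=40, bounds=(0.0, 20.0)):
            # Truncation selection, averaging crossover, Gaussian mutation.
            pop = [[random.uniform(*bounds) for _ in range(3)] for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=ise)
                elite = pop[:pop_size // 2]
                children = []
                while len(children) < pop_size - len(elite):
                    a, b = random.sample(elite, 2)
                    child = [(x + y) / 2.0 for x, y in zip(a, b)]
                    if random.random() < 0.3:
                        i = random.randrange(3)
                        child[i] = min(max(child[i] + random.gauss(0, 1.0), bounds[0]), bounds[1])
                    children.append(child)
                pop = elite + children
            return min(pop, key=ise)

        print(ga_tune())   # tuned (Kp, Ki, Kd) gains for the toy plant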

  10. Opposition-Based Adaptive Fireworks Algorithm

    Directory of Open Access Journals (Sweden)

    Chibing Gong

    2016-07-01

    Full Text Available A fireworks algorithm (FWA) is a recent swarm intelligence algorithm that is inspired by observing fireworks explosions. An adaptive fireworks algorithm (AFWA) adds adaptive amplitudes to improve the performance of the enhanced fireworks algorithm (EFWA). The purpose of this paper is to add opposition-based learning (OBL) to AFWA with the goal of further boosting performance and achieving global optimization. Twelve benchmark functions are tested using an opposition-based adaptive fireworks algorithm (OAFWA). The final results conclude that OAFWA significantly outperformed EFWA and AFWA in terms of solution accuracy. Additionally, OAFWA was compared with a bat algorithm (BA), differential evolution (DE), self-adapting control parameters in differential evolution (jDE), a firefly algorithm (FA), and a standard particle swarm optimization 2011 (SPSO2011) algorithm. The research results indicate that OAFWA ranks the highest of the six algorithms for both solution accuracy and runtime cost.
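
    The opposition-based learning step added to AFWA can be sketched in a few lines: for a candidate x in [a, b]^d, the opposite point is a + b - x, and the better of the two is retained. The snippet below shows only this OBL idea on a sphere function, not the full OAFWA algorithm.

        # Opposition-based learning: keep the better of a candidate and its opposite point.
        import numpy as np

        def opposition_step(x, f, lower, upper):
            opposite = lower + upper - x
            return x if f(x) <= f(opposite) else opposite

        sphere = lambda z: float(np.sum(z ** 2))
        lower, upper = -5.0, 5.0
        rng = np.random.default_rng(2)
        x = rng.uniform(lower, upper, 10)
        x_new = opposition_step(x, sphere, lower, upper)
        print(sphere(x), sphere(x_new))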

  11. Deterministic global optimization algorithm based on outer approximation for the parameter estimation of nonlinear dynamic biological systems.

    Science.gov (United States)

    Miró, Anton; Pozo, Carlos; Guillén-Gosálbez, Gonzalo; Egea, Jose A; Jiménez, Laureano

    2012-05-10

    The estimation of parameter values for mathematical models of biological systems is an optimization problem that is particularly challenging due to the nonlinearities involved. One major difficulty is the existence of multiple minima in which standard optimization methods may fall during the search. Deterministic global optimization methods overcome this limitation, ensuring convergence to the global optimum within a desired tolerance. Global optimization techniques are usually classified into stochastic and deterministic. The former typically lead to lower CPU times but offer no guarantee of convergence to the global minimum in a finite number of iterations. In contrast, deterministic methods provide solutions of a given quality (i.e., optimality gap), but tend to lead to large computational burdens. This work presents a deterministic outer approximation-based algorithm for the global optimization of dynamic problems arising in the parameter estimation of models of biological systems. Our approach, which offers a theoretical guarantee of convergence to global minimum, is based on reformulating the set of ordinary differential equations into an equivalent set of algebraic equations through the use of orthogonal collocation methods, giving rise to a nonconvex nonlinear programming (NLP) problem. This nonconvex NLP is decomposed into two hierarchical levels: a master mixed-integer linear programming problem (MILP) that provides a rigorous lower bound on the optimal solution, and a reduced-space slave NLP that yields an upper bound. The algorithm iterates between these two levels until a termination criterion is satisfied. The capabilities of our approach were tested in two benchmark problems, in which the performance of our algorithm was compared with that of the commercial global optimization package BARON. The proposed strategy produced near optimal solutions (i.e., within a desired tolerance) in a fraction of the CPU time required by BARON.

  12. Opposition-Based Adaptive Fireworks Algorithm

    OpenAIRE

    Chibing Gong

    2016-01-01

    A fireworks algorithm (FWA) is a recent swarm intelligence algorithm that is inspired by observing fireworks explosions. An adaptive fireworks algorithm (AFWA) proposes additional adaptive amplitudes to improve the performance of the enhanced fireworks algorithm (EFWA). The purpose of this paper is to add opposition-based learning (OBL) to AFWA with the goal of further boosting performance and achieving global optimization. Twelve benchmark functions are tested in use of an opposition-based a...

  13. Optimal algorithmic trading and market microstructure

    OpenAIRE

    Labadie , Mauricio; Lehalle , Charles-Albert

    2010-01-01

    The efficient frontier is a core concept in Modern Portfolio Theory. Based on this idea, we will construct optimal trading curves for different types of portfolios. These curves correspond to the algorithmic trading strategies that minimize the expected transaction costs, i.e. the joint effect of market impact and market risk. We will study five portfolio trading strategies. For the first three (single-asset, general multi-asset and balanced portfolios) we will assume that the underlyings fo...

  14. Research and implementation of the algorithm for unwrapped and distortion correction basing on CORDIC for panoramic image

    Science.gov (United States)

    Zhang, Zhenhai; Li, Kejie; Wu, Xiaobing; Zhang, Shujiang

    2008-03-01

    The unwrapping and distortion-correction algorithm based on the Coordinate Rotation Digital Computer (CORDIC) and a bilinear interpolation algorithm is presented in this paper, with the purpose of processing dynamic panoramic annular images. An original annular panoramic image captured by a panoramic annular lens (PAL) can be unwrapped and corrected to a conventional rectangular image without distortion, which is much more consistent with human vision. The algorithm for panoramic image processing is modeled in VHDL and implemented in an FPGA. The experimental results show that the proposed panoramic image algorithm for unwrapping and distortion correction has lower computational complexity and that the architecture for dynamic panoramic image processing has lower hardware cost and power consumption. The proposed algorithm is thus shown to be valid.
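
    For readers unfamiliar with CORDIC, the rotation-mode iteration that such hardware implementations build on can be sketched as below: a rotation by an angle is decomposed into shift-and-add micro-rotations. This is the textbook algorithm written in floating point, not the paper's fixed-point VHDL implementation.

        # Rotation-mode CORDIC: rotate (x, y) by `angle` radians via shift-and-add steps.
        import math

        def cordic_rotate(x, y, angle, iterations=16):
            # Precomputed arctangents of 2^-i and the accumulated gain K.
            atans = [math.atan(2.0 ** -i) for i in range(iterations)]
            K = 1.0
            for i in range(iterations):
                K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
            z = angle
            for i in range(iterations):
                d = 1.0 if z >= 0 else -1.0
                x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
                z -= d * atans[i]
            return K * x, K * y

        # A 30-degree rotation of (1, 0) should give approximately (cos 30, sin 30).
        print(cordic_rotate(1.0, 0.0, math.radians(30)))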

  15. Dynamic contrast-enhanced MRI of the prostate. Comparison of two different post-processing algorithms

    International Nuclear Information System (INIS)

    Beyersdorff, Dirk; Franiel, T.; Luedemann, L.; Dietz, E.; Galler, D.; Marchot, P.

    2011-01-01

    Purpose: To evaluate the usefulness of a commercially available post-processing software tool for detecting prostate cancer on dynamic contrast-enhanced magnetic resonance imaging (MRI) and to compare the results to those obtained with a custom-made post-processing algorithm already tested under clinical conditions. Materials and Methods: Forty-eight patients with proven prostate cancer were examined by standard MRI supplemented by dynamic contrast-enhanced dual susceptibility contrast (DCE-DSC) MRI prior to prostatectomy. A custom-made post-processing algorithm was used to analyze the MRI data sets and the results were compared to those obtained using a post-processing algorithm from Invivo Corporation (Dyna CAD for Prostate) applied to dynamic T1-weighted images. Histology was used as the gold standard. Results: The sensitivity for prostate cancer detection was 78 % for the custom-made algorithm and 60 % for the commercial algorithm and the specificity was 79 % and 82 %, respectively. The accuracy was 79 % for our algorithm and 77.5 % for the commercial software tool. The chi-square test (McNemar-Bowker test) yielded no significant differences between the two tools (p = 0.06). Conclusion: The two investigated post-processing algorithms did not differ in terms of prostate cancer detection. The commercially available software tool allows reliable and fast analysis of dynamic contrast-enhanced MRI for the detection of prostate cancer. (orig.)

  16. Application of Dynamic Mutated Particle Swarm Optimization Algorithm to Design Water Distribution Networks

    Directory of Open Access Journals (Sweden)

    Kazem Mohammadi- Aghdam

    2015-10-01

    Full Text Available This paper proposes the application of a new version of the heuristic particle swarm optimization (PSO) method for designing water distribution networks (WDNs). The optimization problem of looped water distribution networks is recognized as an NP-hard combinatorial problem which cannot be easily solved using traditional mathematical optimization techniques. In this paper, the concept of dynamic swarm size is considered in an attempt to increase the convergence speed of the original PSO algorithm. In this strategy, the size of the swarm is dynamically changed according to the iteration number of the algorithm. Furthermore, a novel mutation approach is introduced to increase the diversification property of the PSO and to help the algorithm to avoid trapping in local optima. The new version of the PSO algorithm is called dynamic mutated particle swarm optimization (DMPSO). The proposed DMPSO is then applied to solve WDN design problems. Finally, two illustrative examples are used for comparison to verify the efficiency of the proposed DMPSO as compared to other intelligent algorithms.

  17. Efficient conjugate gradient algorithms for computation of the manipulator forward dynamics

    Science.gov (United States)

    Fijany, Amir; Scheid, Robert E.

    1989-01-01

    The applicability of conjugate gradient algorithms for computation of the manipulator forward dynamics is investigated. The redundancies in the previously proposed conjugate gradient algorithm are analyzed. A new version is developed which, by avoiding these redundancies, achieves a significantly greater efficiency. A preconditioned conjugate gradient algorithm is also presented. A diagonal matrix whose elements are the diagonal elements of the inertia matrix is proposed as the preconditioner. In order to increase the computational efficiency, an algorithm is developed which exploits the synergism between the computation of the diagonal elements of the inertia matrix and that required by the conjugate gradient algorithm.

  18. A model for the dynamic behavior of financial assets affected by news: The case of Tohoku-Kanto earthquake

    Science.gov (United States)

    Ochiai, T.; Nacher, J. C.

    2011-09-01

    The prices of financial products in markets are determined by the behavior of investors, who are influenced by positive and negative news. Here, we present a mathematical model to reproduce the price movements in real financial markets affected by news. The model has both positive and negative feed-back mechanisms. Furthermore, the behavior of the model is examined by considering two types of noise. Our results show that the dynamic balance of positive and negative feed-back mechanisms with the noise effect determines the asset price movement.
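
    The record does not give the model's equations, but the interplay it describes can be illustrated with a simple log-price simulation that combines a positive feed-back (trend-following) term, a negative feed-back (mean-reversion) term, a one-off news shock, and Gaussian noise; all functional forms and parameters below are assumptions made for the sketch.

        # Illustrative log-price path with positive/negative feed-back, a news shock and noise.
        import numpy as np

        def simulate_price(steps=1000, p0=0.0, fundamental=0.0, news_time=500, news_size=-0.05,
                           alpha=0.3, beta=0.05, sigma=0.01, seed=3):
            rng = np.random.default_rng(seed)
            p = np.empty(steps)
            p[0] = p[1] = p0
            for t in range(1, steps - 1):
                trend = p[t] - p[t - 1]                       # positive feed-back (momentum)
                reversion = fundamental - p[t]                # negative feed-back
                shock = news_size if t == news_time else 0.0  # exogenous news impact
                noise = sigma * rng.standard_normal()
                p[t + 1] = p[t] + alpha * trend + beta * reversion + shock + noise
            return p

        path = simulate_price()
        print(path[:5], path[-1])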

  19. Momentum and mean-reversion in strategic asset allocation

    NARCIS (Netherlands)

    Koijen, R.S.J.; Rodriguez, J.C.; Sbuelz, A.

    2009-01-01

    We study a dynamic asset allocation problem in which stock returns exhibit short-run momentum and long-run mean reversion. We develop a tractable continuous-time model that captures these two predictability features and derive the optimal investment strategy in closed form. The model predicts

  20. Multiscale Reaction-Diffusion Algorithms: PDE-Assisted Brownian Dynamics

    KAUST Repository

    Franz, Benjamin

    2013-06-19

    Two algorithms that combine Brownian dynamics (BD) simulations with mean-field partial differential equations (PDEs) are presented. This PDE-assisted Brownian dynamics (PBD) methodology provides exact particle tracking data in parts of the domain, whilst making use of a mean-field reaction-diffusion PDE description elsewhere. The first PBD algorithm couples BD simulations with PDEs by randomly creating new particles close to the interface, which partitions the domain, and by reincorporating particles into the continuum PDE description when they cross the interface. The second PBD algorithm introduces an overlap region, where both descriptions exist in parallel. It is shown that the overlap region is required to accurately compute variances using PBD simulations. Advantages of both PBD approaches are discussed and illustrative numerical examples are presented. © 2013 Society for Industrial and Applied Mathematics.

  1. The Algorithm of Continuous Optimization Based on the Modified Cellular Automaton

    Directory of Open Access Journals (Sweden)

    Oleg Evsutin

    2016-08-01

    Full Text Available This article is devoted to the application of the cellular automata mathematical apparatus to the problem of continuous optimization. The cellular automaton with an objective function is introduced as a new modification of the classic cellular automaton. The algorithm of continuous optimization, which is based on dynamics of the cellular automaton having the property of geometric symmetry, is obtained. The results of the simulation experiments with the obtained algorithm on standard test functions are provided, and a comparison between the analogs is shown.

  2. Multiple Time-Step Dual-Hamiltonian Hybrid Molecular Dynamics - Monte Carlo Canonical Propagation Algorithm.

    Science.gov (United States)

    Chen, Yunjie; Kale, Seyit; Weare, Jonathan; Dinner, Aaron R; Roux, Benoît

    2016-04-12

    A multiple time-step integrator based on a dual Hamiltonian and a hybrid method combining molecular dynamics (MD) and Monte Carlo (MC) is proposed to sample systems in the canonical ensemble. The Dual Hamiltonian Multiple Time-Step (DHMTS) algorithm is based on two similar Hamiltonians: a computationally expensive one that serves as a reference and a computationally inexpensive one to which the workload is shifted. The central assumption is that the difference between the two Hamiltonians is slowly varying. Earlier work has shown that such dual Hamiltonian multiple time-step schemes effectively precondition nonlinear differential equations for dynamics by reformulating them into a recursive root finding problem that can be solved by propagating a correction term through an internal loop, analogous to RESPA. Of special interest in the present context, a hybrid MD-MC version of the DHMTS algorithm is introduced to enforce detailed balance via a Metropolis acceptance criterion and ensure consistency with the Boltzmann distribution. The Metropolis criterion suppresses the discretization errors normally associated with the propagation according to the computationally inexpensive Hamiltonian, treating the discretization error as an external work. Illustrative tests are carried out to demonstrate the effectiveness of the method.

  3. Risk Management of Assets Dependency Based on Copulas Function

    Directory of Open Access Journals (Sweden)

    Cheng Lei

    2017-01-01

    Full Text Available The risk of financial securities such as stocks and bonds, two important instruments of the financial market, has been a hot topic in the financial field; at the same time, since financial assets are influenced by many common factors, the correlation between portfolio returns has attracted considerable research. This paper presents a Copula-SV-t model that uses the SV-t model to describe the marginal distributions and the Copula-t method to obtain the high-dimensional joint distribution. It not only addresses the deviation that arises when ARCH-family models are used to calculate portfolio risk, but also addresses the tendency to overestimate risk when extreme value theory is used to study financial risk. Through empirical research, the conclusion shows that the model better describes the assets and their tail characteristics and is more in line with the reality of the market. Furthermore, the empirical evidence also shows that when the assets in the portfolio are highly correlated, the ability to diversify portfolio risk is relatively weak.
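
    A simplified sketch of the copula idea used here: draw correlated samples whose dependence follows a Student-t copula, then push the resulting uniforms through marginal distributions. For brevity the SV-t margins of the paper are replaced by plain Student-t margins, and the correlation matrix, degrees of freedom and scale are illustrative assumptions.

        # Student-t copula sampling with simple heavy-tailed margins (SciPy).
        import numpy as np
        from scipy import stats

        def t_copula_samples(corr, df, n, seed=4):
            """Uniform samples whose dependence follows a t-copula with the given correlation."""
            rng = np.random.default_rng(seed)
            mvt = stats.multivariate_t(loc=np.zeros(len(corr)), shape=corr, df=df, seed=rng)
            z = mvt.rvs(size=n)
            return stats.t.cdf(z, df=df)

        corr = np.array([[1.0, 0.6], [0.6, 1.0]])        # illustrative stock/bond correlation
        u = t_copula_samples(corr, df=5, n=10000)
        returns = stats.t.ppf(u, df=5) * 0.01            # heavy-tailed margins, 1% scale
        print(np.corrcoef(returns, rowvar=False))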

  4. Hybrid SOA-SQP algorithm for dynamic economic dispatch with valve-point effects

    Energy Technology Data Exchange (ETDEWEB)

    Sivasubramani, S.; Swarup, K.S. [Department of Electrical Engineering, Indian Institute of Technology Madras, Chennai 600036 (India)

    2010-12-15

    This paper proposes a hybrid technique combining a new heuristic algorithm named seeker optimization algorithm (SOA) and sequential quadratic programming (SQP) method for solving dynamic economic dispatch problem with valve-point effects. The SOA is based on the concept of simulating the act of human searching, where the search direction is based on the empirical gradient (EG) by evaluating the response to the position changes and the step length is based on uncertainty reasoning by using a simple fuzzy rule. In this paper, SOA is used as a base level search, which can give a good direction to the optimal global region and SQP as a local search to fine tune the solution obtained from SOA. Thus SQP guides SOA to find optimal or near optimal solution in the complex search space. Two test systems i.e., 5 unit with losses and 10 unit without losses, have been taken to validate the efficiency of the proposed hybrid method. Simulation results clearly show that the proposed method outperforms the existing method in terms of solution quality. (author)

  5. Analytical Provision of Management of Intangible Assets

    Directory of Open Access Journals (Sweden)

    Shelest Viktoriya S.

    2013-11-01

    Full Text Available The goal of the article is to study the process of conducting economic analysis of such a complex product of the innovation and information society as objects of intellectual property, which are recognised in business accounting as intangible assets. Pervasive integration processes in the economy and the large-scale spread of information technologies influence the capital structure. Thus, recognising intangible assets as a driving factor of competitiveness, enterprises prefer these assets and reduce the share of tangible assets. Taking this into account, scientists have thoroughly studied the issues of economic analysis of intangible assets, since the obtained data are the main source of accounting and analytical information required for making well-founded managerial decisions. At the same time, the issues of authenticity, accuracy, efficiency and transparency of the obtained results become topical. In the course of the study, the article shows the information content of the accounting and analytical data arising from the introduction of accounting and the conduct of economic analysis of intangible assets. The article considers the current state of the methods of analysis of intangible assets based on the opinions of scientists. It characterises the economic and legal state of development of licence agreements in Ukraine and justifies the economic expediency of the use of such agreements. It outlines ways of making efficient managerial decisions on the use of intangible assets in the economic activity of business entities.

  6. Computer Vision Based Measurement of Wildfire Smoke Dynamics

    Directory of Open Access Journals (Sweden)

    BUGARIC, M.

    2015-02-01

    Full Text Available This article presents a novel method for measurement of wildfire smoke dynamics based on computer vision and augmented reality techniques. The aspect of smoke dynamics is an important feature in video smoke detection that could distinguish smoke from visually similar phenomena. However, most of the existing smoke detection systems are not capable of measuring the real-world size of the detected smoke regions. Using computer vision and GIS-based augmented reality, we measure the real dimensions of smoke plumes, and observe the change in size over time. The measurements are performed on offline video data with known camera parameters and location. The observed data is analyzed in order to create a classifier that could be used to eliminate certain categories of false alarms induced by phenomena with different dynamics than smoke. We carried out an offline evaluation where we measured the improvement in the detection process achieved using the proposed smoke dynamics characteristics. The results show a significant increase in algorithm performance, especially in terms of reducing the false alarm rate. From this it follows that the proposed method for measurement of smoke dynamics could be used to improve existing smoke detection algorithms, or taken into account when designing new ones.

  7. Dynamic population artificial bee colony algorithm for multi-objective optimal power flow

    Directory of Open Access Journals (Sweden)

    Man Ding

    2017-03-01

    Full Text Available This paper proposes a novel artificial bee colony algorithm with dynamic population (ABC-DP), which synergizes the idea of an extended life-cycle evolving model to balance the exploration and exploitation tradeoff. The proposed ABC-DP is a more realistic bee-colony model in which bees can reproduce and die dynamically throughout the foraging process, so the population size varies as the algorithm runs. ABC-DP is then used for solving the optimal power flow (OPF) problem in power systems that considers the cost, loss, and emission impacts as the objective functions. The 30-bus IEEE test system is presented to illustrate the application of the proposed algorithm. The simulation results, which are also compared to the nondominated sorting genetic algorithm II (NSGAII) and multi-objective ABC (MOABC), are presented to illustrate the effectiveness and robustness of the proposed method.

  8. A dynamic programming–enhanced simulated annealing algorithm for solving bi-objective cell formation problem with duplicate machines

    Directory of Open Access Journals (Sweden)

    Mohammad Mohammadi

    2015-04-01

    Full Text Available The cell formation process is one of the first and most important steps in designing cellular manufacturing systems. It consists of identifying part families according to the similarities in the design, shape, and processes of parts and dedicating machines to each part family based on the operations required by the parts. In this study, a hybrid method based on a combination of the simulated annealing algorithm and dynamic programming was developed to solve a bi-objective cell formation problem with duplicate machines. In the proposed hybrid method, each solution is represented as a permutation of parts, which is created by the simulated annealing algorithm, and dynamic programming is used to partition this permutation into part families and determine the number of machines in each cell such that the total dissimilarity between the parts and the total machine investment cost are minimized. The performance of the algorithm was evaluated by performing numerical experiments of different sizes. Our computational experiments indicated that the results were very encouraging in terms of computational time and solution quality.

  9. Complex scenes and situations visualization in hierarchical learning algorithm with dynamic 3D NeoAxis engine

    Science.gov (United States)

    Graham, James; Ternovskiy, Igor V.

    2013-06-01

    We applied a two stage unsupervised hierarchical learning system to model complex dynamic surveillance and cyber space monitoring systems using a non-commercial version of the NeoAxis visualization software. The hierarchical scene learning and recognition approach is based on hierarchical expectation maximization, and was linked to a 3D graphics engine for validation of learning and classification results and understanding the human - autonomous system relationship. Scene recognition is performed by taking synthetically generated data and feeding it to a dynamic logic algorithm. The algorithm performs hierarchical recognition of the scene by first examining the features of the objects to determine which objects are present, and then determines the scene based on the objects present. This paper presents a framework within which low level data linked to higher-level visualization can provide support to a human operator and be evaluated in a detailed and systematic way.

  10. Archimedean copula estimation of distribution algorithm based on artificial bee colony algorithm

    Institute of Scientific and Technical Information of China (English)

    Haidong Xu; Mingyan Jiang; Kun Xu

    2015-01-01

    The artificial bee colony (ABC) algorithm is a competitive stochastic population-based optimization algorithm. However, the ABC algorithm does not use the social information and lacks the knowledge of the problem structure, which leads to insufficiency in both convergence speed and searching precision. The Archimedean copula estimation of distribution algorithm (ACEDA) is a relatively simple, time-economic and multivariate correlated EDA. This paper proposes a novel hybrid algorithm based on the ABC algorithm and ACEDA called the Archimedean copula estimation of distribution based on the artificial bee colony (ACABC) algorithm. The hybrid algorithm utilizes ACEDA to estimate the distribution model and then uses the information to help artificial bees to search more efficiently in the search space. Six benchmark functions are introduced to assess the performance of the ACABC algorithm on numerical function optimization. Experimental results show that the ACABC algorithm converges much faster with greater precision compared with the ABC algorithm, ACEDA and the global best (gbest)-guided ABC (GABC) algorithm in most of the experiments.

  11. Evaluation of the Effect of Non-Current Fixed Assets on Profitability and Asset Management Efficiency

    Science.gov (United States)

    Lubyanaya, Alexandra V.; Izmailov, Airat M.; Nikulina, Ekaterina Y.; Shaposhnikov, Vladislav A.

    2016-01-01

    The purpose of this article is to investigate the problem, which stems from non-current fixed assets affecting profitability and asset management efficiency. Tangible assets, intangible assets and financial assets are all included in non-current fixed assets. The aim of the research is to identify the impact of estimates and valuation in…

  12. Packets Distributing Evolutionary Algorithm Based on PSO for Ad Hoc Network

    Science.gov (United States)

    Xu, Xiao-Feng

    2018-03-01

    Wireless communication networks have features such as limited bandwidth, changing channels and dynamic topology. Ad hoc networks face many difficulties in access control, bandwidth distribution, resource assignment and congestion control. Therefore, a wireless packet-distributing evolutionary algorithm based on PSO (DPSO) for ad hoc networks is proposed. Firstly, the impact of parameters on network performance is analyzed and studied to obtain an effective network performance function. Secondly, the improved PSO evolutionary algorithm is used to solve the optimization problem from local to global in the process of distributing network packets. The simulation results show that the algorithm can ensure fairness and timeliness of network transmission, as well as improve the integrated utilization efficiency of ad hoc network resources.

  13. A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.

    Science.gov (United States)

    Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing

    2018-01-15

    Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and support vector machine (SVM) is often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were always set manually, which cannot ensure the model's performance. In this paper, an SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to improve its ability to avoid local optima. To verify the performance of NAPSO-SVM, three types of algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO optimization algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are applied as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method has better prediction precision and smaller prediction errors, and is an effective method for predicting the dynamic measurement errors of sensors.

  14. Regularity of the exercise boundary for American put options on assets with discrete dividends

    NARCIS (Netherlands)

    Jourdain, B.; Vellekoop, M.

    2009-01-01

    We analyze the regularity of the optimal exercise boundary for the American Put option when the underlying asset pays a discrete dividend at a known time td during the lifetime of the option. The ex-dividend asset price process is assumed to follow Black-Scholes dynamics and the dividend amount is a

  15. Competitive Swarm Optimizer Based Gateway Deployment Algorithm in Cyber-Physical Systems.

    Science.gov (United States)

    Huang, Shuqiang; Tao, Ming

    2017-01-22

    Wireless sensor network topology optimization is a highly important issue, and topology control through node selection can improve the efficiency of data forwarding, while saving energy and prolonging lifetime of the network. To address the problem of connecting a wireless sensor network to the Internet in cyber-physical systems, here we propose a geometric gateway deployment based on a competitive swarm optimizer algorithm. The particle swarm optimization (PSO) algorithm has a continuous search feature in the solution space, which makes it suitable for finding the geometric center of gateway deployment; however, its search mechanism is limited to the individual optimum (pbest) and the population optimum (gbest); thus, it easily falls into local optima. In order to improve the particle search mechanism and enhance the search efficiency of the algorithm, we introduce a new competitive swarm optimizer (CSO) algorithm. The CSO search algorithm is based on an inter-particle competition mechanism and can effectively prevent the population from falling into a local optimum. With the improvement of an adaptive opposition-based search and the ability to adjust parameters dynamically, this algorithm can maintain the diversity of the entire swarm to solve geometric K-center gateway deployment problems. The simulation results show that this CSO algorithm has a good global explorative ability as well as convergence speed and can improve the network quality of service (QoS) level of cyber-physical systems by obtaining a minimum network coverage radius. We also find that the CSO algorithm is more stable, robust and effective in solving the problem of geometric gateway deployment as compared to the PSO or K-medoids algorithms.
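
    The core CSO update referred to above (losers of random pairwise comparisons learn from winners and from the swarm mean, while winners survive unchanged) can be sketched as follows; the objective and settings are illustrative, and the gateway-deployment geometry of the record is not modelled.

        # Competitive swarm optimizer: pairwise competitions, losers learn from winners.
        import numpy as np

        def cso_minimize(f, dim=20, swarm=60, iters=300, phi=0.1, lo=-10.0, hi=10.0, seed=5):
            rng = np.random.default_rng(seed)
            x = rng.uniform(lo, hi, (swarm, dim))
            v = np.zeros((swarm, dim))
            for _ in range(iters):
                mean = x.mean(axis=0)
                idx = rng.permutation(swarm)
                for a, b in zip(idx[0::2], idx[1::2]):
                    winner, loser = (a, b) if f(x[a]) <= f(x[b]) else (b, a)
                    r1, r2, r3 = rng.random((3, dim))
                    v[loser] = (r1 * v[loser] + r2 * (x[winner] - x[loser])
                                + phi * r3 * (mean - x[loser]))
                    x[loser] = np.clip(x[loser] + v[loser], lo, hi)
            best = min(range(swarm), key=lambda i: f(x[i]))
            return x[best], f(x[best])

        sphere = lambda z: float(np.sum(z ** 2))
        print(cso_minimize(sphere)[1])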

  16. Competitive Swarm Optimizer Based Gateway Deployment Algorithm in Cyber-Physical Systems

    Directory of Open Access Journals (Sweden)

    Shuqiang Huang

    2017-01-01

    Full Text Available Wireless sensor network topology optimization is a highly important issue, and topology control through node selection can improve the efficiency of data forwarding, while saving energy and prolonging lifetime of the network. To address the problem of connecting a wireless sensor network to the Internet in cyber-physical systems, here we propose a geometric gateway deployment based on a competitive swarm optimizer algorithm. The particle swarm optimization (PSO) algorithm has a continuous search feature in the solution space, which makes it suitable for finding the geometric center of gateway deployment; however, its search mechanism is limited to the individual optimum (pbest) and the population optimum (gbest); thus, it easily falls into local optima. In order to improve the particle search mechanism and enhance the search efficiency of the algorithm, we introduce a new competitive swarm optimizer (CSO) algorithm. The CSO search algorithm is based on an inter-particle competition mechanism and can effectively prevent the population from falling into a local optimum. With the improvement of an adaptive opposition-based search and the ability to adjust parameters dynamically, this algorithm can maintain the diversity of the entire swarm to solve geometric K-center gateway deployment problems. The simulation results show that this CSO algorithm has a good global explorative ability as well as convergence speed and can improve the network quality of service (QoS) level of cyber-physical systems by obtaining a minimum network coverage radius. We also find that the CSO algorithm is more stable, robust and effective in solving the problem of geometric gateway deployment as compared to the PSO or K-medoids algorithms.

  17. Competitive Swarm Optimizer Based Gateway Deployment Algorithm in Cyber-Physical Systems

    Science.gov (United States)

    Huang, Shuqiang; Tao, Ming

    2017-01-01

    Wireless sensor network topology optimization is a highly important issue, and topology control through node selection can improve the efficiency of data forwarding, while saving energy and prolonging lifetime of the network. To address the problem of connecting a wireless sensor network to the Internet in cyber-physical systems, here we propose a geometric gateway deployment based on a competitive swarm optimizer algorithm. The particle swarm optimization (PSO) algorithm has a continuous search feature in the solution space, which makes it suitable for finding the geometric center of gateway deployment; however, its search mechanism is limited to the individual optimum (pbest) and the population optimum (gbest); thus, it easily falls into local optima. In order to improve the particle search mechanism and enhance the search efficiency of the algorithm, we introduce a new competitive swarm optimizer (CSO) algorithm. The CSO search algorithm is based on an inter-particle competition mechanism and can effectively prevent the population from falling into a local optimum. With the improvement of an adaptive opposition-based search and the ability to adjust parameters dynamically, this algorithm can maintain the diversity of the entire swarm to solve geometric K-center gateway deployment problems. The simulation results show that this CSO algorithm has a good global explorative ability as well as convergence speed and can improve the network quality of service (QoS) level of cyber-physical systems by obtaining a minimum network coverage radius. We also find that the CSO algorithm is more stable, robust and effective in solving the problem of geometric gateway deployment as compared to the PSO or K-medoids algorithms. PMID:28117735

  18. Japanese views on ASSET

    Energy Technology Data Exchange (ETDEWEB)

    Hirano, M [Department of Reactor Safety Research, Japan Atomic Energy Research Inst. (Japan)

    1997-10-01

    The presentation briefly reviews the following aspects directed to ensuring NPP safety: Japanese participation in ASSET activities; views to ASSET activities; recent operating experience in Japan; future ASSET activities.

  19. Japanese views on ASSET

    International Nuclear Information System (INIS)

    Hirano, M.

    1997-01-01

    The presentation briefly reviews the following aspects directed to ensuring NPP safety: Japanese participation in ASSET activities; views to ASSET activities; recent operating experience in Japan; future ASSET activities

  20. Estimating the value of a Country's built assets: investment-based exposure modelling for global risk assessment

    Science.gov (United States)

    Daniell, James; Pomonis, Antonios; Gunasekera, Rashmin; Ishizawa, Oscar; Gaspari, Maria; Lu, Xijie; Aubrecht, Christoph; Ungar, Joachim

    2017-04-01

    In order to quantify disaster risk, there is a need to determine a consistent and reliable economic value of built assets exposed to natural hazards at the national or sub-national level. The value of the built stock in the context of a city or a country is critical for risk modelling applications, as it allows the upper bound of potential losses to be established. Under the World Bank probabilistic disaster risk assessment - Country Disaster Risk Profiles (CDRP) Program and rapid post-disaster loss analyses in CATDAT, key methodologies have been developed that quantify the asset exposure of a country. In this study, we assess two complementary methods for determining the value of the building stock: capital investment data versus aggregated ground-up values based on built area and unit construction cost analyses. Different approaches to modelling exposure around the world have resulted in estimated values of built assets of some countries differing by order(s) of magnitude. Using the aforementioned methodology of comparing investment-based capital stock and bottom-up unit construction cost values per square meter of assets, a suitable range of capital stock estimates for built assets has been created. A blind test format was undertaken to compare the two types of approaches, top-down (investment) and bottom-up (construction cost per unit). In many cases, census, demographic, engineering and construction cost data from previous years are key for the bottom-up calculations. Similarly, for the top-down investment approach, distributed GFCF (Gross Fixed Capital Formation) data is also required. Over the past few years, numerous studies have been undertaken through the World Bank Caribbean and Central America disaster risk assessment program adopting this methodology, initially developed by Gunasekera et al. (2015). The range of values of the building stock is tested for around 15 countries. In addition, three types of costs - Reconstruction cost

  1. AN ECOSYSTEM PERSPECTIVE ON ASSET MANAGEMENT INFORMATION

    Directory of Open Access Journals (Sweden)

    Lasse METSO

    2017-07-01

    Full Text Available Big Data and Internet of Things will increase the amount of data on asset management exceedingly. Data sharing with an increased number of partners in the area of asset management is important when developing business opportunities and new ecosystems. An asset management ecosystem is a complex set of relationships between parties taking part in asset management actions. In this paper, the current barriers and benefits of data sharing are identified based on the results of an interview study. The main benefits are transparency, access to data and reuse of data. New services can be created by taking advantage of data sharing. The main barriers to sharing data are an unclear view of the data sharing process and difficulties in recognizing the benefits of data sharing. For overcoming the barriers in data sharing, this paper applies the ecosystem perspective on asset management information. The approach is explained by using the Swedish railway industry as an example.

  2. An Ecosystem Perspective On Asset Management Information

    Science.gov (United States)

    Metso, Lasse; Kans, Mirka

    2017-09-01

    Big Data and Internet of Things will increase the amount of data on asset management exceedingly. Data sharing with an increased number of partners in the area of asset management is important when developing business opportunities and new ecosystems. An asset management ecosystem is a complex set of relationships between parties taking part in asset management actions. In this paper, the current barriers and benefits of data sharing are identified based on the results of an interview study. The main benefits are transparency, access to data and reuse of data. New services can be created by taking advantage of data sharing. The main barriers to sharing data are an unclear view of the data sharing process and difficulties in recognizing the benefits of data sharing. For overcoming the barriers in data sharing, this paper applies the ecosystem perspective on asset management information. The approach is explained by using the Swedish railway industry as an example.

  3. A study on intangible assets disclosure: An evidence from Indian companies

    Directory of Open Access Journals (Sweden)

    Subash Chander

    2011-04-01

    Full Text Available Purpose: India has emerged at the top of the pedestal in the present knowledge-driven global marketplace, where intangible assets hold much more value than physical assets. The objective of this study is to determine the extent of intangible asset disclosure by companies in India. Design/methodology/approach: This study relates to the years 2003-04 and 2007-08 and is based on 243 companies selected from the BT-500 companies. The annual reports of these companies were analyzed using content analysis so as to examine the level of disclosure of intangible asset information. An intangible assets disclosure index based on the intangible assets framework given by Sveiby (1997) and used and tested by Guthrie and Petty (2000) and many other subsequent studies was modified and used for this study. Findings: The results showed that external capital is the most disclosed intangible asset category, with a disclosure score of 37.90% and 35.83% in the years 2003-04 and 2007-08 respectively. Infosys Technologies Ltd. is the company with the highest intangible assets reporting for both years (2003-04: 68.52%, 2007-08: 81.48%). Further, the reporting of intangible assets is unorganized and unsystematic, and there is a lack of an appropriate framework for disclosing intangible assets information in the annual reports. Originality/value: This is perhaps the first comprehensive study on intangible assets disclosure based on a large sample of companies from India. The literature reveals that intangible assets now play an increasingly significant role in the decision-making process of various users of corporate reports. This study shows that the overall disclosure of intangible assets is low in India. Thus, this study may be of value to the corporate sector in India in exploring the areas of intangible assets disclosure so that companies can provide useful and relevant information to the users of annual reports.

  4. Mobile Ad Hoc Network Energy Cost Algorithm Based on Artificial Bee Colony

    Directory of Open Access Journals (Sweden)

    Mustafa Tareq

    2017-01-01

    Full Text Available A mobile ad hoc network (MANET) is a collection of mobile nodes that dynamically form a temporary network without using any existing network infrastructure. MANET selects a path with a minimal number of intermediate nodes to reach the destination node. As the distance between each node increases, the required transmission power increases. The power level of nodes affects the ease with which a route is constituted between a pair of nodes. This study utilizes the swarm intelligence technique through the artificial bee colony (ABC) algorithm to optimize the energy consumption in a dynamic source routing (DSR) protocol in MANET. The proposed algorithm is called bee DSR (BEEDSR). The ABC algorithm is used to identify the optimal path from the source to the destination to overcome energy problems. The performance of the BEEDSR algorithm is compared with the DSR and bee-inspired protocols (BeeIP). The comparison was conducted based on average energy consumption, average throughput, average end-to-end delay, routing overhead, and packet delivery ratio performance metrics, varying the node speed and packet size. The BEEDSR algorithm is superior in performance to the other protocols in terms of energy conservation and delay degradation relating to node speed and packet size.
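    As a concrete illustration of the bee-colony search that BEEDSR builds on, the following sketch implements a generic artificial bee colony loop (employed, onlooker and scout phases) on a placeholder continuous cost function standing in for the route energy cost; the function, bounds, colony size and abandonment limit are illustrative assumptions, not the protocol's actual routing metrics.

```python
# Generic Artificial Bee Colony (ABC) sketch on a placeholder cost surrogate.
import numpy as np

rng = np.random.default_rng(1)

def cost(x):                       # placeholder energy-cost surrogate
    return float(np.sum(x ** 2))

def abc(cost, dim=5, food_sources=20, iters=300, limit=30, low=-5.0, high=5.0):
    foods = rng.uniform(low, high, size=(food_sources, dim))
    costs = np.array([cost(f) for f in foods])
    trials = np.zeros(food_sources, dtype=int)

    def try_neighbour(i):
        k = rng.integers(food_sources)
        while k == i:
            k = rng.integers(food_sources)
        j = rng.integers(dim)
        candidate = foods[i].copy()
        candidate[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
        candidate = np.clip(candidate, low, high)
        c = cost(candidate)
        if c < costs[i]:                       # greedy selection
            foods[i], costs[i], trials[i] = candidate, c, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(food_sources):          # employed-bee phase
            try_neighbour(i)
        fit = 1.0 / (1.0 + costs)              # onlooker selection probabilities
        probs = fit / fit.sum()
        for i in rng.choice(food_sources, size=food_sources, p=probs):
            try_neighbour(i)                   # onlooker-bee phase
        worn = np.argmax(trials)               # scout phase: abandon a stale source
        if trials[worn] > limit:
            foods[worn] = rng.uniform(low, high, size=dim)
            costs[worn] = cost(foods[worn])
            trials[worn] = 0
    best = np.argmin(costs)
    return foods[best], costs[best]

print(abc(cost))
```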

  5. Gradient descent learning algorithm overview: a general dynamical systems perspective.

    Science.gov (United States)

    Baldi, P

    1995-01-01

    Gives a unified treatment of gradient descent learning algorithms for neural networks using a general framework of dynamical systems. This general approach organizes and simplifies all the known algorithms and results which have been originally derived for different problems (fixed point/trajectory learning), for different models (discrete/continuous), for different architectures (forward/recurrent), and using different techniques (backpropagation, variational calculus, adjoint methods, etc.). The general approach can also be applied to derive new algorithms. The author then briefly examines some of the complexity issues and limitations intrinsic to gradient descent learning. Throughout the paper, the author focuses on the problem of trajectory learning.

  6. Maintenance of Process Control Algorithms based on Dynamic Program Slicing

    DEFF Research Database (Denmark)

    Hansen, Ole Fink; Andersen, Nils Axel; Ravn, Ole

    2010-01-01

    Today’s industrial control systems gradually lose performance after installation and must be regularly maintained by means of adjusting parameters and modifying the control algorithm, in order to regain high performance. Industrial control algorithms are complex software systems, and it is partic...

  7. Theoretical and Empirical Review of Asset Pricing Models: A Structural Synthesis

    Directory of Open Access Journals (Sweden)

    Saban Celik

    2012-01-01

    Full Text Available The purpose of this paper is to give a comprehensive theoretical review of asset pricing models by emphasizing their static and dynamic versions in line with their empirical investigations. A considerable amount of the financial economics literature is devoted to the concept of asset pricing and its implications. The main task of an asset pricing model can be seen as evaluating the present value of payoffs or cash flows discounted for risk and time lags. The difficulty in the discounting process is that the relevant factors affecting the payoffs vary through time, whereas the theoretical framework is still useful for incorporating the changing factors into an asset pricing model. This paper fills a gap in the literature by giving a comprehensive review of the models and evaluating the historical stream of empirical investigations in the form of a structural empirical review.

  8. An algorithm for gradient-based dynamic optimization of UV flash processes

    DEFF Research Database (Denmark)

    Ritschel, Tobias Kasper Skovborg; Capolei, Andrea; Gaspar, Jozsef

    2017-01-01

    This paper presents a novel single-shooting algorithm for gradient-based solution of optimal control problems with vapor-liquid equilibrium constraints. Such optimal control problems are important in several engineering applications, for instance in control of distillation columns, in certain two... software as well as the performance of different compilers in a Linux operating system. These tests indicate that real-time nonlinear model predictive control of UV flash processes is computationally feasible.

  9. Inverse Analysis of Pavement Structural Properties Based on Dynamic Finite Element Modeling and Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Xiaochao Tang

    2013-03-01

    Full Text Available With the movement towards the implementation of the mechanistic-empirical pavement design guide (MEPDG), an accurate determination of pavement layer moduli is vital for predicting pavement critical mechanistic responses. A backcalculation procedure is commonly used to estimate the pavement layer moduli based on non-destructive falling weight deflectometer (FWD) tests. Backcalculation of flexible pavement layer properties is an inverse problem with known input and output signals, based upon which the unknown parameters of the pavement system are evaluated. In this study, an inverse analysis procedure that combines finite element analysis and a population-based optimization technique, the Genetic Algorithm (GA), has been developed to determine the pavement layer structural properties. A lightweight deflectometer (LWD) was used to infer the moduli of instrumented three-layer scaled flexible pavement models. While the common practice in backcalculating pavement layer properties still assumes a static FWD load and uses only peak values of the load and deflections, dynamic analysis was conducted to simulate the impulse LWD load. The recorded time histories of the LWD load were used as the known inputs into the pavement system, while the measured time histories of surface central deflections and subgrade deflections measured with linear variable differential transformers (LVDTs) were considered as the outputs. As a result, consistent pavement layer moduli can be obtained through this inverse analysis procedure.
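    The inverse-analysis loop can be sketched as follows: a genetic algorithm searches for layer moduli that minimise the misfit between measured and simulated deflections. The forward model here is a made-up analytic surrogate (in the study it would be the dynamic finite-element simulation of the LWD test), and the synthetic moduli, load value and GA settings are assumptions for illustration.

```python
# GA-based backcalculation sketch with a placeholder forward model.
import numpy as np

rng = np.random.default_rng(2)

def forward_deflections(moduli):
    """Placeholder surrogate for the FE model: deflections fall with stiffness."""
    load = 7000.0                               # hypothetical LWD peak load [N]
    return load / (np.asarray(moduli) * np.array([1.0, 2.5, 6.0]))

true_moduli = np.array([300.0, 150.0, 60.0])     # MPa, synthetic "ground truth"
measured = forward_deflections(true_moduli)

def misfit(moduli):
    return float(np.sum((forward_deflections(moduli) - measured) ** 2))

def genetic_algorithm(pop_size=60, gens=150, bounds=(20.0, 600.0), pm=0.2):
    low, high = bounds
    pop = rng.uniform(low, high, size=(pop_size, 3))
    for _ in range(gens):
        fitness = np.array([misfit(ind) for ind in pop])
        # tournament selection of parents
        parents = np.array([pop[min(rng.integers(pop_size, size=2),
                                    key=lambda i: fitness[i])]
                            for _ in range(pop_size)])
        # arithmetic crossover between consecutive parents
        alpha = rng.random((pop_size, 1))
        children = alpha * parents + (1 - alpha) * np.roll(parents, 1, axis=0)
        # Gaussian mutation of a random subset of genes
        mask = rng.random(children.shape) < pm
        children[mask] += rng.normal(0.0, 10.0, size=mask.sum())
        pop = np.clip(children, low, high)
    best = pop[np.argmin([misfit(ind) for ind in pop])]
    return best

print("backcalculated moduli:", genetic_algorithm())
```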

  10. A physics-based algorithm for the estimation of bearing spall width using vibrations

    Science.gov (United States)

    Kogan, G.; Klein, R.; Bortman, J.

    2018-05-01

    Evaluation of the damage severity in a mechanical system is required for the assessment of its remaining useful life. In rotating machines, bearings are crucial components. Hence, the estimation of the size of spalls in bearings is important for prognostics of the remaining useful life. Recently, this topic has been extensively studied and many of the methods used for the estimation of spall size are based on the analysis of vibrations. A new tool is proposed in the current study for the estimation of the spall width on the outer ring raceway of a rolling element bearing. The understanding and analysis of the dynamics of the rolling element-spall interaction enabled the development of a generic and autonomous algorithm. The algorithm is generic in the sense that it does not require any human interference to make adjustments for each case. All of the algorithm's parameters are defined by analytical expressions describing the dynamics of the system. The required conditions, such as sampling rate, spall width and depth, defining the feasible region of such algorithms, are analyzed in the paper. The algorithm performance was demonstrated with experimental data for different spall widths.

  11. Engineering Asset Management and Infrastructure Sustainability : Proceedings of the 5th World Congress on Engineering Asset Management

    CERN Document Server

    Ma, Lin; Tan, Andy; Weijnen, Margot; Lee, Jay

    2012-01-01

    Engineering Asset Management 2010 represents state-of-the art trends and developments in the emerging field of engineering asset management as presented at the Fifth World Congress on Engineering Asset Management (WCEAM). The proceedings of the WCEAM 2010 is an excellent reference for practitioners, researchers and students in the multidisciplinary field of asset management, covering topics such as: Asset condition monitoring and intelligent maintenance Asset data warehousing, data mining and fusion Asset performance and level-of-service models Design and life-cycle integrity of physical assets Education and training in asset management Engineering standards in asset management Fault diagnosis and prognostics Financial analysis methods for physical assets Human dimensions in integrated asset management Information quality management Information systems and knowledge management Intelligent sensors and devices Maintenance strategies in asset management Optimisation decisions in asset management Risk management ...

  12. Self-consistent asset pricing models

    Science.gov (United States)

    Malevergne, Y.; Sornette, D.

    2007-08-01

    We discuss the foundations of factor or regression models in the light of the self-consistency condition that the market portfolio (and more generally the risk factors) is (are) constituted of the assets whose returns it is (they are) supposed to explain. As already reported in several articles, self-consistency implies correlations between the return disturbances. As a consequence, the alphas and betas of the factor model are unobservable. Self-consistency leads to renormalized betas with zero effective alphas, which are observable with standard OLS regressions. When the conditions derived from internal consistency are not met, the model is necessarily incomplete, which means that some sources of risk cannot be replicated (or hedged) by a portfolio of stocks traded on the market, even for infinite economies. Analytical derivations and numerical simulations show that, for arbitrary choices of the proxy which are different from the true market portfolio, a modified linear regression holds with a non-zero value αi at the origin between an asset i's return and the proxy's return. Self-consistency also introduces “orthogonality” and “normality” conditions linking the betas, alphas (as well as the residuals) and the weights of the proxy portfolio. Two diagnostics based on these orthogonality and normality conditions are implemented on a basket of 323 assets which have been components of the S&P500 in the period from January 1990 to February 2005. These two diagnostics show interesting departures from dynamical self-consistency starting about 2 years before the end of the Internet bubble. Assuming that the CAPM holds with the self-consistency condition, the OLS method automatically obeys the resulting orthogonality and normality conditions and therefore provides a simple way to self-consistently assess the parameters of the model by using proxy portfolios made only of the assets which are used in the CAPM regressions. Finally, the factor decomposition with the

  13. [A quick algorithm of dynamic spectrum photoelectric pulse wave detection based on LabVIEW].

    Science.gov (United States)

    Lin, Ling; Li, Na; Li, Gang

    2010-02-01

    Dynamic spectrum (DS) detection is attractive among the numerous noninvasive blood component detection methods because it eliminates the main interference from individual discrepancies and measurement conditions. DS is a kind of spectrum extracted from the photoelectric pulse wave and closely related to arterial blood. It can be used in noninvasive examination of blood component concentration. The key issues in DS detection are high detection precision and high operation speed. The measurement precision can be improved by applying over-sampling and lock-in amplification to the pick-up of the photoelectric pulse wave in DS detection. In the present paper, the theoretical expression of the over-sampling and lock-in amplifying method is deduced first. Then, in order to overcome the problems of large data volume and excessive computation brought about by this technology, a quick algorithm based on LabVIEW and a method of using external C code in the pick-up of the photoelectric pulse wave are presented. Experimental verification was conducted in the LabVIEW environment. The results show that, with the presented method, the operation speed is greatly increased and the data memory is largely reduced.
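    A minimal sketch of the over-sampling and lock-in idea is shown below: a noisy, modulated pulse-wave signal is mixed with in-phase and quadrature references and averaged over an integer number of periods to recover its amplitude. The sampling rate, reference frequency and noise level are illustrative, and the real implementation described above uses LabVIEW with external C code rather than Python.

```python
# Digital lock-in amplification sketch on a synthetic, over-sampled signal.
import numpy as np

rng = np.random.default_rng(8)
fs = 100_000.0                 # over-sampled rate [Hz] (assumed)
f_ref = 1_000.0                # modulation / reference frequency [Hz] (assumed)
t = np.arange(0, 0.1, 1.0 / fs)

amplitude = 0.8                # pulse-wave amplitude to recover
signal = amplitude * np.sin(2 * np.pi * f_ref * t) + 0.5 * rng.standard_normal(t.size)

# Mix with in-phase and quadrature references, then low-pass by averaging over
# an integer number of reference periods, which rejects noise far from f_ref.
i_mix = signal * np.sin(2 * np.pi * f_ref * t)
q_mix = signal * np.cos(2 * np.pi * f_ref * t)
recovered = 2.0 * np.hypot(i_mix.mean(), q_mix.mean())
print("recovered amplitude:", recovered)   # close to 0.8 despite heavy noise
```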

  14. Dynamic Vehicle Routing Using an Improved Variable Neighborhood Search Algorithm

    Directory of Open Access Journals (Sweden)

    Yingcheng Xu

    2013-01-01

    Full Text Available In order to effectively solve the dynamic vehicle routing problem with time windows, a mathematical model is established and an improved variable neighborhood search algorithm is proposed. In the algorithm, customer allocation and route planning for the initial solution are completed by a clustering method. Hybrid insert and exchange operators are used to achieve the shaking process, a subsequent optimization process is applied to improve the solution space, and the best-improvement strategy is adopted, which allows the algorithm to achieve a better balance between solution quality and running time. The idea of simulated annealing is introduced to control the acceptance of new solutions, and the influences of arrival time, distribution of geographical locations, and time window range on route selection are analyzed. In the experiment, the proposed algorithm is applied to solve DVRP instances of different sizes. Comparison with other algorithms shows that the algorithm is effective and feasible.
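    The skeleton below illustrates the variable neighbourhood search loop on a deliberately reduced single-route (TSP-like) toy: insert and exchange moves form the shaking neighbourhoods and a simulated-annealing rule governs acceptance of new solutions. Customer coordinates, the cooling schedule and the move set are illustrative and omit the time-window and multi-vehicle aspects of the actual DVRP.

```python
# Variable neighbourhood search skeleton with SA-style acceptance (toy TSP).
import math
import random

random.seed(3)
customers = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(25)]

def route_length(route):
    return sum(math.dist(customers[route[i]], customers[route[(i + 1) % len(route)]])
               for i in range(len(route)))

def insert_move(route):                      # relocate one customer elsewhere
    r = route[:]
    i, j = random.sample(range(len(r)), 2)
    r.insert(j, r.pop(i))
    return r

def exchange_move(route):                    # swap two customers
    r = route[:]
    i, j = random.sample(range(len(r)), 2)
    r[i], r[j] = r[j], r[i]
    return r

def vns(iters=5000, temp=50.0, cooling=0.999):
    current = list(range(len(customers)))
    random.shuffle(current)
    best = current[:]
    neighbourhoods = [insert_move, exchange_move]
    for _ in range(iters):
        k = 0
        while k < len(neighbourhoods):
            candidate = neighbourhoods[k](current)     # shaking
            delta = route_length(candidate) - route_length(current)
            # simulated-annealing acceptance of new solutions
            if delta < 0 or random.random() < math.exp(-delta / temp):
                current = candidate
                k = 0                                  # restart from first neighbourhood
            else:
                k += 1
        if route_length(current) < route_length(best):
            best = current[:]
        temp *= cooling
    return best, route_length(best)

route, length = vns()
print("best route length:", round(length, 1))
```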

  15. Advanced Emergency Braking Control Based on a Nonlinear Model Predictive Algorithm for Intelligent Vehicles

    Directory of Open Access Journals (Sweden)

    Ronghui Zhang

    2017-05-01

    Full Text Available Focusing on safety and comfort, and with the overall aim of comprehensively improving a vision-based intelligent vehicle, a novel Advanced Emergency Braking System (AEBS) is proposed based on a nonlinear model predictive algorithm. Considering the nonlinearities of vehicle dynamics, a vision-based longitudinal vehicle dynamics model is established. On account of the nonlinear coupling characteristics of the driver, surroundings, and vehicle itself, a hierarchical control structure is proposed to decouple and coordinate the system. To avoid or reduce the collision risk between the intelligent vehicle and collision objects, a coordinated cost function of tracking safety, comfort, and fuel economy is formulated. Based on the terminal constraints of stable tracking, a multi-objective optimization controller is proposed using the theory of nonlinear model predictive control. To quickly and precisely track the control target in a finite time, an electronic brake controller for the AEBS is designed based on the Nonsingular Fast Terminal Sliding Mode (NFTSM) control theory. To validate the performance and advantages of the proposed algorithm, simulations are implemented. According to the simulation results, the proposed algorithm has better integrated performance in reducing the collision risk and improving the driving comfort and fuel economy of the smart car compared with the existing single AEBS.

  16. The Normalized-Rate Iterative Algorithm: A Practical Dynamic Spectrum Management Method for DSL

    Science.gov (United States)

    Statovci, Driton; Nordström, Tomas; Nilsson, Rickard

    2006-12-01

    We present a practical solution for dynamic spectrum management (DSM) in digital subscriber line systems: the normalized-rate iterative algorithm (NRIA). Supported by a novel optimization problem formulation, the NRIA is the only DSM algorithm that jointly addresses spectrum balancing for frequency division duplexing systems and power allocation for the users sharing a common cable bundle. With a focus on being implementable rather than obtaining the highest possible theoretical performance, the NRIA is designed to efficiently solve the DSM optimization problem with the operators' business models in mind. This is achieved with the help of two types of parameters: the desired network asymmetry and the desired user priorities. The NRIA is a centralized DSM algorithm based on the iterative water-filling algorithm (IWFA) for finding efficient power allocations, but extends the IWFA by finding the achievable bitrates and by optimizing the bandplan. It is compared with three other DSM proposals: the IWFA, the optimal spectrum balancing algorithm (OSBA), and the bidirectional IWFA (bi-IWFA). We show that the NRIA achieves better bitrate performance than the IWFA and the bi-IWFA. It can even achieve performance almost as good as the OSBA, but with dramatically lower requirements on complexity. Additionally, the NRIA can achieve bitrate combinations that cannot be supported by any other DSM algorithm.
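    The single-user water-filling step that the IWFA (and hence the NRIA) iterates is easy to sketch: given per-tone noise-to-gain ratios and a power budget, the water level is found by bisection and each tone receives the power above its floor. The channel values and budget below are illustrative, not part of the paper's DSL scenarios.

```python
# Single-user water-filling power allocation found by bisection on the level.
import numpy as np

def water_filling(inv_gains, total_power, tol=1e-9):
    """inv_gains[k] = noise / |H_k|^2 for tone k; returns per-tone powers."""
    lo, hi = 0.0, np.max(inv_gains) + total_power
    while hi - lo > tol:
        level = 0.5 * (lo + hi)
        power = np.maximum(level - inv_gains, 0.0)
        if power.sum() > total_power:
            hi = level
        else:
            lo = level
    return np.maximum(lo - inv_gains, 0.0)

inv_gains = np.array([0.2, 0.5, 1.0, 2.5, 4.0])    # hypothetical tone qualities
powers = water_filling(inv_gains, total_power=3.0)
rates = np.log2(1.0 + powers / inv_gains)          # achievable bits per tone
print(powers.round(3), rates.round(2))
```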

  17. Policy Gradient Adaptive Dynamic Programming for Data-Based Optimal Control.

    Science.gov (United States)

    Luo, Biao; Liu, Derong; Wu, Huai-Ning; Wang, Ding; Lewis, Frank L

    2017-10-01

    The model-free optimal control problem of general discrete-time nonlinear systems is considered in this paper, and a data-based policy gradient adaptive dynamic programming (PGADP) algorithm is developed to design an adaptive optimal controller. By using offline and online data rather than the mathematical system model, the PGADP algorithm improves the control policy with a gradient descent scheme. The convergence of the PGADP algorithm is proved by demonstrating that the constructed Q-function sequence converges to the optimal Q-function. Based on the PGADP algorithm, the adaptive control method is developed with an actor-critic structure and the method of weighted residuals. Its convergence properties are analyzed, where the approximate Q-function converges to its optimum. Computer simulation results demonstrate the effectiveness of the PGADP-based adaptive control method.
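    In the same spirit, the sketch below runs a model-free, data-based Q-learning loop on a scalar linear system: a quadratic Q-function is fitted by gradient steps on the Bellman residual, and the policy is improved from the fitted Q. The system, features, step sizes and exploration noise are assumptions, and the sketch does not reproduce the paper's actor-critic structure or weighted-residual method.

```python
# Data-based Q-function fitting by gradient descent on the Bellman residual.
import numpy as np

rng = np.random.default_rng(4)
a_true, b_true = 0.8, 0.5          # unknown to the learner; used only to simulate data
gamma, q_cost, r_cost = 0.9, 1.0, 1.0

def step(x, u):
    """One transition of the unknown plant plus the observed stage cost."""
    return a_true * x + b_true * u, q_cost * x * x + r_cost * u * u

w = np.array([1.0, 0.0, 1.0])      # Q(x, u) ~ w0*x^2 + w1*x*u + w2*u^2

def features(x, u):
    return np.array([x * x, x * u, u * u])

def greedy_u(x):                   # argmin over u of the fitted quadratic Q
    return -0.5 * w[1] / max(w[2], 1e-6) * x

lr = 0.01
for episode in range(300):
    x = rng.uniform(-1.0, 1.0)
    for _ in range(25):
        u = float(np.clip(greedy_u(x) + 0.2 * rng.standard_normal(), -5.0, 5.0))
        x_next, cost = step(x, u)
        target = cost + gamma * features(x_next, greedy_u(x_next)) @ w
        td_error = features(x, u) @ w - target
        w -= lr * td_error * features(x, u)     # gradient step on the Bellman residual
        x = x_next

print("learned feedback gain u = K*x, K =", round(-0.5 * w[1] / w[2], 3))
```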

  18. An Agent-Based Framework for E-Commerce Information Retrieval Management Using Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Floarea NASTASE

    2009-01-01

    Full Text Available The paper addresses the issue of improving retrieval performance management for retrieval from document collections that exist on the Internet. It also proposes a solution that uses the benefits of agent technology and genetic algorithms in the information retrieval management process. The most important paradigms of information retrieval are mentioned, with the goal of making the advantages of the genetic-algorithm-based one more evident. Within the paper, a genetic algorithm that can be used for the proposed solution is detailed, and a comparative description of the dynamic and static proposed solutions is made. In the end, new future directions are outlined based on the elements presented in this paper. The future results look very encouraging.

  19. Development of GPS Receiver Kalman Filter Algorithms for Stationary, Low-Dynamics, and High-Dynamics Applications

    Science.gov (United States)

    2016-06-01

    Filter Algorithms for Stationary, Low-Dynamics, and High-Dynamics Applications. Executive Summary: The Global Positioning System (GPS) is the primary... software that may need to be developed for performance prediction of current or future systems that incorporate GPS. The ultimate aim is to help inform... Defence Science and Technology Organisation in 1986. His major areas of work were adaptive tracking, signal processing, and radar systems engineering.

  20. Computational Intelligence Based Data Fusion Algorithm for Dynamic sEMG and Skeletal Muscle Force Modelling

    Energy Technology Data Exchange (ETDEWEB)

    Chandrasekhar Potluri; Madhavi Anugolu; Marco P. Schoen; D. Subbaram Naidu

    2013-08-01

    In this work, an array of three surface electromyography (sEMG) sensors is used to acquire muscle extension and contraction signals from 18 healthy test subjects. The skeletal muscle force is estimated using the acquired sEMG signals and a nonlinear Wiener-Hammerstein model relating the two signals in a dynamic fashion. The model is obtained using a System Identification (SI) algorithm. The force models obtained for each sensor are fused using a proposed fuzzy logic concept with the intent to improve force estimation accuracy and resilience to sensor failure or misalignment. For the fuzzy logic inference system, the sEMG entropy, the relative error, and the correlation of the force signals are considered for defining the membership functions. The proposed fusion algorithm yields an average of 92.49% correlation between the actual force and the overall estimated force output. In addition, the proposed fusion-based approach is implemented on a test platform. Experiments indicate an improvement in finger/hand force estimation.

  1. Accounting treatment of intangible assets

    OpenAIRE

    Gorgieva-Trajkovska, Olivera; Koleva, Blagica; Georgieva Svrtinov, Vesna

    2015-01-01

    The accounting for fixed assets is, in many cases, a straightforward exercise, but it isn’t always so when it comes to the issue of intangible fixed assets and recognizing such assets on the balance sheet. IAS 38, Intangible Assets, outlines the accounting requirements for intangible assets, which are non-monetary assets without physical substance and identifiable (either being separable or arising from contractual or other legal rights). Intangible assets meeting ...

  2. A self-learning algorithm for biased molecular dynamics

    Science.gov (United States)

    Tribello, Gareth A.; Ceriotti, Michele; Parrinello, Michele

    2010-01-01

    A new self-learning algorithm for accelerated dynamics, reconnaissance metadynamics, is proposed that is able to work with a very large number of collective coordinates. Acceleration of the dynamics is achieved by constructing a bias potential in terms of a patchwork of one-dimensional, locally valid collective coordinates. These collective coordinates are obtained from trajectory analyses so that they adapt to any new features encountered during the simulation. We show how this methodology can be used to enhance sampling in real chemical systems citing examples both from the physics of clusters and from the biological sciences. PMID:20876135

  3. The Social Relationship Based Adaptive Multi-Spray-and-Wait Routing Algorithm for Disruption Tolerant Network

    Directory of Open Access Journals (Sweden)

    Jianfeng Guan

    2017-01-01

    Full Text Available The existing spray-based routing algorithms in DTN cannot dynamically adjust the number of message copies based on actual conditions, which results in a waste of resources and a reduction of the message delivery rate. Besides, the existing spray-based routing protocols may result in blind spots or dead-end problems due to the limitations of various given metrics. Therefore, this paper proposes a social relationship based adaptive multiple spray-and-wait routing algorithm (called SRAMSW) which retransmits message copies based on their residence times in the node via buffer management and selects forwarders based on the social relationship. By these means, the proposed algorithm can relieve message congestion in the buffer and improve the probability of replicas reaching their destinations. The simulation results under different scenarios show that the SRAMSW algorithm can improve the message delivery rate, reduce the messages’ dwell time in the cache, and use the buffer more effectively.

  4. Regret Theory and Equilibrium Asset Prices

    Directory of Open Access Journals (Sweden)

    Jiliang Sheng

    2014-01-01

    Full Text Available Regret theory is a behavioral approach to decision making under uncertainty. In this paper we assume that there are two representative investors in a frictionless market, a representative active investor who selects his optimal portfolio based on regret theory and a representative passive investor who invests only in the benchmark portfolio. In a partial equilibrium setting, the objective of the representative active investor is modeled as minimization of the regret about final wealth relative to the benchmark portfolio. In equilibrium this optimal strategy gives rise to a behavioral asset pricing model. We show that the market beta and the benchmark beta that is related to the investor’s regret are the determinants of equilibrium asset prices. We also extend our model to a market with multibenchmark portfolios. Empirical tests using stock price data from the Shanghai Stock Exchange show strong support for the asset pricing model based on regret theory.

  5. Inversion for Refractivity Parameters Using a Dynamic Adaptive Cuckoo Search with Crossover Operator Algorithm.

    Science.gov (United States)

    Zhang, Zhihua; Sheng, Zheng; Shi, Hanqing; Fan, Zhiqiang

    2016-01-01

    Using the RFC technique to estimate refractivity parameters is a complex nonlinear optimization problem. In this paper, an improved cuckoo search (CS) algorithm is proposed to deal with this problem. To enhance the performance of the CS algorithm, a dynamic adaptive parameter operation and a crossover operation were integrated into the standard CS (DACS-CO). Rechenberg's 1/5 criterion combined with a learning factor was used to control the dynamic adaptive parameter adjusting process. The crossover operation of the genetic algorithm was utilized to guarantee population diversity. The new hybrid algorithm has better local search ability and contributes to superior performance. To verify the ability of the DACS-CO algorithm to estimate atmospheric refractivity parameters, both simulation data and real radar clutter data are used. The numerical experiments demonstrate that the DACS-CO algorithm can provide an effective method for near-real-time estimation of the atmospheric refractivity profile from radar clutter.
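    A hedged sketch of such a hybrid loop is given below: Lévy-flight moves whose step size is adapted by a 1/5-success rule, followed by a GA-style crossover phase for diversity. The objective is a generic surrogate rather than a refractivity forward model, and all constants are illustrative.

```python
# Cuckoo-search-style loop with 1/5-rule step adaptation and a crossover phase.
import numpy as np

rng = np.random.default_rng(5)

def objective(x):                               # placeholder misfit function
    return float(np.sum((x - 1.5) ** 2))

def levy(dim, beta=1.5):                        # Mantegna's Levy-flight step
    from math import gamma, sin, pi
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return rng.normal(0, sigma, dim) / np.abs(rng.normal(0, 1, dim)) ** (1 / beta)

def adaptive_cs(dim=4, nests=25, iters=300, pa=0.25, low=-5.0, high=5.0):
    x = rng.uniform(low, high, size=(nests, dim))
    f = np.array([objective(v) for v in x])
    alpha = 0.1                                 # Levy step scale, adapted online
    for _ in range(iters):
        successes = 0
        for i in range(nests):                  # Levy-flight phase
            cand = np.clip(x[i] + alpha * levy(dim), low, high)
            fc = objective(cand)
            if fc < f[i]:
                x[i], f[i], successes = cand, fc, successes + 1
        # 1/5 rule: grow the step if more than a fifth of the moves succeeded
        alpha *= 1.2 if successes > nests / 5 else 0.85
        for i in range(nests):                  # crossover phase for diversity
            if rng.random() < pa:
                j = rng.integers(nests)
                lam = rng.random()
                cand = np.clip(lam * x[i] + (1 - lam) * x[j], low, high)
                fc = objective(cand)
                if fc < f[i]:
                    x[i], f[i] = cand, fc
    best = np.argmin(f)
    return x[best], f[best]

print(adaptive_cs())
```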

  6. Real options and asset valuation in competitive energy markets

    Science.gov (United States)

    Oduntan, Adekunle Richard

    The focus of this work is to develop a robust valuation framework for physical power assets operating in competitive markets such as peaking or mid-merit thermal power plants and baseload power plants. The goal is to develop a modeling framework that can be adapted to different energy assets with different types of operating flexibilities and technical constraints and which can be employed for various purposes such as capital budgeting, business planning, risk management and strategic bidding planning among others. The valuation framework must also be able to capture the reality of power market rules and opportunities, as well as technical constraints of different assets. The modeling framework developed conceptualizes operating flexibilities of power assets as "switching options" whereby the asset operator decides at every decision point whether to switch from one operating mode to another mutually exclusive mode, within the limits of the equipment constraints of the asset. As a current decision to switch operating modes may affect future operating flexibilities of the asset and hence cash flows, a dynamic optimization framework is employed. The developed framework accounts for the uncertain nature of key value drivers by representing them with appropriate stochastic processes. Specifically, the framework developed conceptualizes the operation of a power asset as a multi-stage decision making problem where the operator has to make a decision at every stage to alter operating mode given currently available information about key value drivers. The problem is then solved dynamically by decomposing it into a series of two-stage sub-problems according to Bellman's optimality principle. The solution algorithm employed is the Least Squares Monte Carlo (LSM) method. The developed valuation framework was adapted for a gas-fired thermal power plant, a peaking hydroelectric power plant and a baseload power plant. This work built on previously published real options valuation
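    The regression step at the heart of the Least Squares Monte Carlo method can be sketched compactly on a Bermudan put option (a stand-in for the switching decisions of a power asset): simulate price paths, then work backwards, regressing continuation values on the current price and exercising when the immediate payoff exceeds the fitted continuation value. The GBM dynamics and all numbers are illustrative.

```python
# Longstaff-Schwartz (LSM) sketch for a Bermudan put under GBM dynamics.
import numpy as np

rng = np.random.default_rng(6)
s0, strike, r, sigma, T = 100.0, 100.0, 0.05, 0.3, 1.0
steps, paths = 50, 20_000
dt = T / steps
disc = np.exp(-r * dt)

# simulate geometric Brownian motion price paths
z = rng.standard_normal((paths, steps))
s = s0 * np.exp(np.cumsum((r - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z, axis=1))
s = np.hstack([np.full((paths, 1), s0), s])

cash = np.maximum(strike - s[:, -1], 0.0)          # exercise value at maturity
for t in range(steps - 1, 0, -1):
    cash *= disc                                   # discount back one step
    itm = strike - s[:, t] > 0                     # regress only in-the-money paths
    if itm.sum() > 3:
        coeff = np.polyfit(s[itm, t], cash[itm], deg=2)
        continuation = np.polyval(coeff, s[itm, t])
        exercise = strike - s[itm, t]
        cash[itm] = np.where(exercise > continuation, exercise, cash[itm])
value = disc * cash.mean()
print("LSM Bermudan put value:", round(value, 3))
```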

  7. Accounting of Long-Term Biological Assets

    OpenAIRE

    Valeriy Mossakovskyy; Vasyl Korytnyy

    2015-01-01

    The article is devoted to generalizing experience in the valuation of long-term biological assets in plant-growing and animal-breeding, and to preparing suggestions for improving accounting in this field. Recommendations concerning the accounting of such assets are given based on a study of accounting practice at a specific agricultural company over a long period of time. The authors believe that fair value is applicable only if the price level for agricultural products is fixed by the gov...

  8. Covariance-Based Measurement Selection Criterion for Gaussian-Based Algorithms

    Directory of Open Access Journals (Sweden)

    Fernando A. Auat Cheein

    2013-01-01

    Full Text Available Process modeling by means of Gaussian-based algorithms often suffers from redundant information which usually increases the estimation computational complexity without significantly improving the estimation performance. In this article, a non-arbitrary measurement selection criterion for Gaussian-based algorithms is proposed. The measurement selection criterion is based on the determination of the most significant measurement from both an estimation convergence perspective and the covariance matrix associated with the measurement. The selection criterion is independent from the nature of the measured variable. This criterion is used in conjunction with three Gaussian-based algorithms: the EIF (Extended Information Filter), the EKF (Extended Kalman Filter) and the UKF (Unscented Kalman Filter). Nevertheless, the measurement selection criterion shown herein can also be applied to other Gaussian-based algorithms. Although this work is focused on environment modeling, the results shown herein can be applied to other Gaussian-based algorithm implementations. Mathematical descriptions and implementation results that validate the proposal are also included in this work.
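    A minimal sketch of a covariance-driven selection step is shown below: among several candidate measurements, the filter picks the one whose Kalman update yields the smallest posterior covariance trace. The state model, candidate sensors and noise levels are invented for illustration, and the sketch does not reproduce the convergence-based part of the criterion.

```python
# Covariance-based measurement selection for a linear Kalman update.
import numpy as np

P = np.diag([4.0, 1.0])                      # prior covariance of a 2-D state
candidates = [
    (np.array([[1.0, 0.0]]), np.array([[0.5]])),   # sensor observing state 0
    (np.array([[0.0, 1.0]]), np.array([[0.1]])),   # sensor observing state 1
    (np.array([[1.0, 1.0]]), np.array([[1.0]])),   # sensor observing the sum
]

def posterior_cov(P, H, R):
    """Joseph-free covariance update for one measurement (H, R)."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return (np.eye(P.shape[0]) - K @ H) @ P

traces = [np.trace(posterior_cov(P, H, R)) for H, R in candidates]
best = int(np.argmin(traces))
print("selected measurement:", best, "posterior traces:", np.round(traces, 3))
```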

  9. A novel algorithm for image encryption based on mixture of chaotic maps

    International Nuclear Information System (INIS)

    Behnia, S.; Akhshani, A.; Mahmodi, H.; Akhavan, A.

    2008-01-01

    Chaos-based encryption appeared in the early 1990s as an original application of nonlinear dynamics in the chaotic regime. In this paper, an implementation of a digital image encryption scheme based on a mixture of chaotic systems is reported. The chaotic cryptography technique used in this paper is symmetric key cryptography. In this algorithm, a typical coupled map was mixed with a one-dimensional chaotic map and used for high-security image encryption while keeping its speed acceptable. The proposed algorithm is described in detail, along with its security analysis and implementation. The experimental results based on the mixture of chaotic maps confirm the effectiveness of the proposed method and the implementation of the algorithm. This mixed application of chaotic maps shows the advantages of a large key space and high-level security. The ciphertext generated by this method is the same size as the plaintext and is suitable for practical use in the secure transmission of confidential information over the Internet.
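    The flavour of chaos-based symmetric encryption can be shown with a toy logistic-map stream cipher: a keystream derived from the map is XOR-ed with the plaintext, so the ciphertext has the same size as the plaintext. This fragment only illustrates the general idea; the paper's scheme mixes a coupled map with a one-dimensional map and includes a security analysis that this sketch does not attempt to reproduce.

```python
# Toy chaos-based stream cipher: logistic-map keystream XOR-ed with the data.
import numpy as np

def logistic_keystream(length, x0=0.61234, r=3.99):
    x, out = x0, np.empty(length, dtype=np.uint8)
    for i in range(length):
        x = r * x * (1.0 - x)                  # logistic map in its chaotic regime
        out[i] = int(x * 256) % 256            # quantise the state to one byte
    return out

def encrypt(data: bytes, key=(0.61234, 3.99)) -> bytes:
    ks = logistic_keystream(len(data), *key)
    return bytes(np.frombuffer(data, dtype=np.uint8) ^ ks)

plain = b"pixel data or any other plaintext"
cipher = encrypt(plain)
assert encrypt(cipher) == plain                # XOR stream cipher is its own inverse
print(cipher.hex())
```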

  10. A divide-conquer-recombine algorithmic paradigm for large spatiotemporal quantum molecular dynamics simulations

    Science.gov (United States)

    Shimojo, Fuyuki; Hattori, Shinnosuke; Kalia, Rajiv K.; Kunaseth, Manaschai; Mou, Weiwei; Nakano, Aiichiro; Nomura, Ken-ichi; Ohmura, Satoshi; Rajak, Pankaj; Shimamura, Kohei; Vashishta, Priya

    2014-05-01

    We introduce an extension of the divide-and-conquer (DC) algorithmic paradigm called divide-conquer-recombine (DCR) to perform large quantum molecular dynamics (QMD) simulations on massively parallel supercomputers, in which interatomic forces are computed quantum mechanically in the framework of density functional theory (DFT). In DCR, the DC phase constructs globally informed, overlapping local-domain solutions, which in the recombine phase are synthesized into a global solution encompassing large spatiotemporal scales. For the DC phase, we design a lean divide-and-conquer (LDC) DFT algorithm, which significantly reduces the prefactor of the O(N) computational cost for N electrons by applying a density-adaptive boundary condition at the peripheries of the DC domains. Our globally scalable and locally efficient solver is based on a hybrid real-reciprocal space approach that combines: (1) a highly scalable real-space multigrid to represent the global charge density; and (2) a numerically efficient plane-wave basis for local electronic wave functions and charge density within each domain. Hybrid space-band decomposition is used to implement the LDC-DFT algorithm on parallel computers. A benchmark test on an IBM Blue Gene/Q computer exhibits an isogranular parallel efficiency of 0.984 on 786 432 cores for a 50.3 × 10⁶-atom SiC system. As a test of production runs, LDC-DFT-based QMD simulation involving 16 661 atoms is performed on the Blue Gene/Q to study on-demand production of hydrogen gas from water using LiAl alloy particles. As an example of the recombine phase, LDC-DFT electronic structures are used as a basis set to describe global photoexcitation dynamics with nonadiabatic QMD (NAQMD) and kinetic Monte Carlo (KMC) methods. The NAQMD simulations are based on the linear response time-dependent density functional theory to describe electronic excited states and a surface-hopping approach to describe transitions between the excited states. A series of techniques

  11. A divide-conquer-recombine algorithmic paradigm for large spatiotemporal quantum molecular dynamics simulations

    International Nuclear Information System (INIS)

    Shimojo, Fuyuki; Hattori, Shinnosuke; Kalia, Rajiv K.; Mou, Weiwei; Nakano, Aiichiro; Nomura, Ken-ichi; Rajak, Pankaj; Vashishta, Priya; Kunaseth, Manaschai; Ohmura, Satoshi; Shimamura, Kohei

    2014-01-01

    We introduce an extension of the divide-and-conquer (DC) algorithmic paradigm called divide-conquer-recombine (DCR) to perform large quantum molecular dynamics (QMD) simulations on massively parallel supercomputers, in which interatomic forces are computed quantum mechanically in the framework of density functional theory (DFT). In DCR, the DC phase constructs globally informed, overlapping local-domain solutions, which in the recombine phase are synthesized into a global solution encompassing large spatiotemporal scales. For the DC phase, we design a lean divide-and-conquer (LDC) DFT algorithm, which significantly reduces the prefactor of the O(N) computational cost for N electrons by applying a density-adaptive boundary condition at the peripheries of the DC domains. Our globally scalable and locally efficient solver is based on a hybrid real-reciprocal space approach that combines: (1) a highly scalable real-space multigrid to represent the global charge density; and (2) a numerically efficient plane-wave basis for local electronic wave functions and charge density within each domain. Hybrid space-band decomposition is used to implement the LDC-DFT algorithm on parallel computers. A benchmark test on an IBM Blue Gene/Q computer exhibits an isogranular parallel efficiency of 0.984 on 786 432 cores for a 50.3 × 10⁶-atom SiC system. As a test of production runs, LDC-DFT-based QMD simulation involving 16 661 atoms is performed on the Blue Gene/Q to study on-demand production of hydrogen gas from water using LiAl alloy particles. As an example of the recombine phase, LDC-DFT electronic structures are used as a basis set to describe global photoexcitation dynamics with nonadiabatic QMD (NAQMD) and kinetic Monte Carlo (KMC) methods. The NAQMD simulations are based on the linear response time-dependent density functional theory to describe electronic excited states and a surface-hopping approach to describe transitions between the excited states. A series of

  12. A novel vehicle dynamics stability control algorithm based on the hierarchical strategy with constrain of nonlinear tyre forces

    Science.gov (United States)

    Li, Liang; Jia, Gang; Chen, Jie; Zhu, Hongjun; Cao, Dongpu; Song, Jian

    2015-08-01

    Direct yaw moment control (DYC), which differentially brakes the wheels to produce a yaw moment for the vehicle stability in a steering process, is an important part of electric stability control system. In this field, most control methods utilise the active brake pressure with a feedback controller to adjust the braked wheel. However, the method might lead to a control delay or overshoot because of the lack of a quantitative project relationship between target values from the upper stability controller to the lower pressure controller. Meanwhile, the stability controller usually ignores the implementing ability of the tyre forces, which might be restrained by the combined-slip dynamics of the tyre. Therefore, a novel control algorithm of DYC based on the hierarchical control strategy is brought forward in this paper. As for the upper controller, a correctional linear quadratic regulator, which not only contains feedback control but also contains feed forward control, is introduced to deduce the object of the stability yaw moment in order to guarantee the yaw rate and side-slip angle stability. As for the medium and lower controller, the quantitative relationship between the vehicle stability object and the target tyre forces of controlled wheels is proposed to achieve smooth control performance based on a combined-slip tyre model. The simulations with the hardware-in-the-loop platform validate that the proposed algorithm can improve the stability of the vehicle effectively.

  13. Dynamics of fragments and associated phenomena in heavy-ion collisions using a modified secondary algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, Rohit [Department of Physics, Panjab University, Chandigarh-160014 (India)]

    2016-05-06

    We discuss the stability of fragments identified by secondary algorithms used to construct fragments within the quantum molecular dynamics model. For this purpose we employ three different algorithms for fragment identification: 1) the conventional minimum spanning tree (MST) method based on spatial correlations, 2) an improved version of MST with additional binding-energy constraints of cold nuclear matter, and 3) one with binding-energy constraints of hot matter. We find a significant role of thermal binding energies relative to cold-matter binding energies; a significant effect is observed for fragment multiplicities and the stopping of fragments, whereas an insignificant effect is observed on fragment flow.

  14. A new algorithm for combined dynamic economic emission dispatch with security constraints

    International Nuclear Information System (INIS)

    Arul, R.; Velusami, S.; Ravi, G.

    2015-01-01

    The primary objective of the CDEED (combined dynamic economic emission dispatch) problem is to determine the optimal power generation schedule for the online generating units over the time horizon considered, while simultaneously minimizing the emission level and satisfying the generator and system constraints. The CDEED problem is a bi-objective optimization problem, where generation cost and emission are considered as two competing objective functions. This bi-objective CDEED problem is represented as a single-objective optimization problem by assigning different weights to each objective function. The weights are varied in steps, and for each variation one compromise solution is generated; finally, a fuzzy-based selection method is used to select the best compromise solution from the set of compromise solutions obtained. In order to make the test systems considered reflect a real power system model, the security constraints are also taken into account. Three new versions of DHS (differential harmony search) algorithms have been proposed to solve the CDEED problems. The feasibility of the proposed algorithms is demonstrated on the IEEE-26 and IEEE-39 bus systems. The result obtained by the proposed CSADHS (chaotic self-adaptive differential harmony search) algorithm is found to be better than EP (evolutionary programming), DHS, and the other proposed algorithms in terms of solution quality, convergence speed and computation time. - Highlights: • In this paper, three new algorithms CDHS, SADHS and CSADHS are proposed. • To solve DED with emission, POZs, spinning reserve and security constraints. • Results obtained by the proposed CSADHS algorithm are better than others. • The proposed CSADHS algorithm has a faster convergence characteristic than the others
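    The weighted-sum sweep and fuzzy best-compromise selection can be sketched on a deliberately tiny single-unit dispatch toy, with cost and emission as competing quadratics of the unit output; the coefficients, weight grid and min-based fuzzy aggregation are illustrative assumptions, not the paper's test systems.

```python
# Weighted-sum sweep plus fuzzy best-compromise selection on a one-unit toy.
import numpy as np

P = np.linspace(100.0, 500.0, 401)                       # feasible unit output [MW]
cost = 0.004 * P ** 2 + 5.0 * P + 300.0                  # $/h (assumed curve)
emission = 0.006 * (P - 450.0) ** 2 + 20.0               # kg/h (assumed curve)

# one compromise solution per weight: minimise w*cost + (1 - w)*emission
weights = np.linspace(0.0, 1.0, 11)
solutions = [int(np.argmin(w * cost + (1 - w) * emission)) for w in weights]

def membership(values):
    """Fuzzy membership: 1 at the best objective value, 0 at the worst."""
    lo, hi = values.min(), values.max()
    return (hi - values) / (hi - lo)

mu_cost = membership(cost[solutions])
mu_emis = membership(emission[solutions])
score = np.minimum(mu_cost, mu_emis)                     # conservative (min) aggregation
best = solutions[int(np.argmax(score))]
print(f"best compromise: P = {P[best]:.1f} MW, "
      f"cost = {cost[best]:.0f} $/h, emission = {emission[best]:.0f} kg/h")
```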

  15. Dynamic Priority Schedule Algorithm Based on ISM

    Institute of Scientific and Technical Information of China (English)

    余祖峰; 蔡启先; 刘明

    2011-01-01

    The EDF scheduling algorithm, one of the main real-time scheduling algorithms of the embedded Linux operating system, cannot handle scheduling under overload. For this reason, the paper introduces the SLAD and BACKSLASH algorithms, which perform well under heavy system load. Based on the idea of the ISM algorithm, a dynamic priority scheduling algorithm is proposed. According to the overload situation within a period of time, the algorithm can flexibly switch between the EDF and SLAD algorithms, thus improving the scheduling efficiency of the system under both normal load and overload conditions. Test results for the real-time task deadline miss ratio (DMR) demonstrate the improvement.
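    For reference, the EDF baseline discussed above can be sketched in a few lines: at every tick the ready job with the nearest absolute deadline runs. The task set and time horizon are illustrative, and the SLAD/BACKSLASH overload handling is not reproduced here.

```python
# Minimal earliest-deadline-first (EDF) scheduling sketch for periodic tasks.
import heapq

# (period, worst-case execution time) pairs for three tasks -- hypothetical values
tasks = [(5, 1), (8, 2), (10, 3)]
horizon = 40
ready = []            # heap of (absolute_deadline, task_id, remaining_time)
missed = 0

for t in range(horizon):
    for tid, (period, wcet) in enumerate(tasks):
        if t % period == 0:                       # release a new job
            heapq.heappush(ready, (t + period, tid, wcet))
    # drop and count jobs whose deadline has already passed
    while ready and ready[0][0] <= t:
        missed += 1
        heapq.heappop(ready)
    if ready:                                     # run the earliest-deadline job
        deadline, tid, remaining = heapq.heappop(ready)
        if remaining > 1:
            heapq.heappush(ready, (deadline, tid, remaining - 1))

print("deadline misses over", horizon, "ticks:", missed)
```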

  16. Substantiation of Biological Assets Classification Indexes for Enhancing Their Accounting Efficiency

    OpenAIRE

    Rayisa Tsyhan; Olha Chubka

    2013-01-01

    Present-day national agricultural companies sell their products in both domestic and foreign markets, which has a significant impact on the specifics of biological assets accounting. The article presents the biological assets classification provided in the Practical Guide to Accounting for Biological Assets and, in addition, specifications proposed by various scientists. Based on the analysis, the biological assets classification has been supplemented with new classification factors and their appropriateness ha...

  17. Financier-led asset lease model

    NARCIS (Netherlands)

    Zhao, X.; Angelov, S.A.; Grefen, P.W.P.J.; Meersman, R.A.; Dillon, T.S.

    2010-01-01

    Nowadays, the business globalisation trend drives organisations to spread their business worldwide, which in turn generates vast asset demands. In this context, broader asset channels and higher financial capacities are required to boost the asset lease sector to meet the increasing asset demands

  18. Asset Pricing - A Brief Review

    OpenAIRE

    Li, Minqiang

    2010-01-01

    I first introduce the early-stage and modern classical asset pricing and portfolio theories. These include: the capital asset pricing model (CAPM), the arbitrage pricing theory (APT), the consumption capital asset pricing model (CCAPM), the intertemporal capital asset pricing model (ICAPM), and some other important modern concepts and techniques. Finally, I discuss the most recent development during the last decade and the outlook in the field of asset pricing.

  19. An Event-Driven Hybrid Molecular Dynamics and Direct Simulation Monte Carlo Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Donev, A; Garcia, A L; Alder, B J

    2007-07-30

    A novel algorithm is developed for the simulation of polymer chains suspended in a solvent. The polymers are represented as chains of hard spheres tethered by square wells and interact with the solvent particles with hard core potentials. The algorithm uses event-driven molecular dynamics (MD) for the simulation of the polymer chain and the interactions between the chain beads and the surrounding solvent particles. The interactions between the solvent particles themselves are not treated deterministically as in event-driven algorithms, rather, the momentum and energy exchange in the solvent is determined stochastically using the Direct Simulation Monte Carlo (DSMC) method. The coupling between the solvent and the solute is consistently represented at the particle level, however, unlike full MD simulations of both the solvent and the solute, the spatial structure of the solvent is ignored. The algorithm is described in detail and applied to the study of the dynamics of a polymer chain tethered to a hard wall subjected to uniform shear. The algorithm closely reproduces full MD simulations with two orders of magnitude greater efficiency. Results do not confirm the existence of periodic (cycling) motion of the polymer chain.

  20. Assessing Asset Pricing Anomalies

    NARCIS (Netherlands)

    W.A. de Groot (Wilma)

    2017-01-01

    One of the most important challenges in the field of asset pricing is to understand anomalies: empirical patterns in asset returns that cannot be explained by standard asset pricing models. Currently, there is no consensus in the academic literature on the underlying causes of

  1. A New Algorithm for ABS/GPS Integration Based on Fuzzy-Logic in Vehicle Navigation System

    Directory of Open Access Journals (Sweden)

    Ali Amin Zadeh

    2011-10-01

    Full Text Available GPS-based vehicle navigation systems have difficulties in tracking vehicles in urban canyons due to poor satellite availability. An ABS (Antilock Brake System) navigation system consists of self-contained optical encoders mounted on vehicle wheels that can continuously provide accurate short-term positioning information. In this paper, a new concept regarding GPS/ABS integration, based on fuzzy logic, is presented. The proposed algorithm is used to identify GPS position accuracy based on environment and vehicle dynamics knowledge. The GPS is used as the reference while it is in good condition and is replaced by the ABS positioning system when GPS information is unreliable. We compare our proposed algorithm with other common algorithms in a real environment. Our results show that the proposed algorithm can significantly improve the stability and reliability of the ABS/GPS navigation system.

  2. Inversion for Refractivity Parameters Using a Dynamic Adaptive Cuckoo Search with Crossover Operator Algorithm

    Directory of Open Access Journals (Sweden)

    Zhihua Zhang

    2016-01-01

    Full Text Available Using the RFC technique to estimate refractivity parameters is a complex nonlinear optimization problem. In this paper, an improved cuckoo search (CS) algorithm is proposed to deal with this problem. To enhance the performance of the CS algorithm, a dynamic adaptive parameter operation and a crossover operation were integrated into the standard CS (DACS-CO). Rechenberg’s 1/5 criterion combined with a learning factor was used to control the dynamic adaptive parameter adjusting process. The crossover operation of the genetic algorithm was utilized to guarantee population diversity. The new hybrid algorithm has better local search ability and contributes to superior performance. To verify the ability of the DACS-CO algorithm to estimate atmospheric refractivity parameters, both simulation data and real radar clutter data are used. The numerical experiments demonstrate that the DACS-CO algorithm can provide an effective method for near-real-time estimation of the atmospheric refractivity profile from radar clutter.

  3. The Algorithm for Algorithms: An Evolutionary Algorithm Based on Automatic Designing of Genetic Operators

    Directory of Open Access Journals (Sweden)

    Dazhi Jiang

    2015-01-01

    Full Text Available At present there is a wide range of evolutionary algorithms available to researchers and practitioners. Despite the great diversity of these algorithms, virtually all of the algorithms share one feature: they have been manually designed. A fundamental question is “are there any algorithms that can design evolutionary algorithms automatically?” A more complete formulation of the question is “can a computer construct an algorithm which will generate algorithms according to the requirements of a problem?” In this paper, a novel evolutionary algorithm based on automatic design of genetic operators is presented to address these questions. The resulting algorithm not only explores solutions in the problem space like most traditional evolutionary algorithms do, but also automatically generates genetic operators in the operator space. In order to verify the performance of the proposed algorithm, comprehensive experiments on 23 well-known benchmark optimization problems are conducted. The results show that the proposed algorithm can outperform the standard differential evolution algorithm in terms of convergence speed and solution accuracy, which shows that an algorithm designed automatically by computers can compete with algorithms designed by human beings.

  4. A comparative analysis of particle swarm optimization and differential evolution algorithms for parameter estimation in nonlinear dynamic systems

    International Nuclear Information System (INIS)

    Banerjee, Amit; Abu-Mahfouz, Issam

    2014-01-01

    The use of evolutionary algorithms has been popular in recent years for solving the inverse problem of identifying system parameters given the chaotic response of a dynamical system. The inverse problem is reformulated as a minimization problem and population-based optimizers such as evolutionary algorithms have been shown to be efficient solvers of the minimization problem. However, to the best of our knowledge, there has been no published work that evaluates the efficacy of using the two most popular evolutionary techniques – particle swarm optimization and differential evolution algorithm, on a wide range of parameter estimation problems. In this paper, the two methods along with their variants (for a total of seven algorithms) are applied to fifteen different parameter estimation problems of varying degrees of complexity. Estimation results are analyzed using nonparametric statistical methods to identify if an algorithm is statistically superior to others over the class of problems analyzed. Results based on parameter estimation quality suggest that there are significant differences between the algorithms with the newer, more sophisticated algorithms performing better than their canonical versions. More importantly, significant differences were also found among variants of the particle swarm optimizer and the best performing differential evolution algorithm
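    A minimal differential evolution (DE/rand/1/bin) sketch of such an inverse problem is shown below; to keep it short and reproducible, it fits the parameters of a smooth damped-oscillator response rather than a chaotic one, so the surrogate system, bounds and DE settings are assumptions rather than the paper's benchmark problems.

```python
# DE/rand/1/bin parameter estimation on a damped-oscillator surrogate.
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(0.0, 10.0, 200)

def response(params):
    damping, freq = params
    return np.exp(-damping * t) * np.cos(freq * t)

observed = response((0.35, 2.2))                 # synthetic "measured" response

def mismatch(params):
    return float(np.sum((response(params) - observed) ** 2))

def differential_evolution(pop_size=25, gens=120, F=0.7, CR=0.9,
                           low=np.array([0.0, 0.5]), high=np.array([1.0, 5.0])):
    dim = len(low)
    pop = rng.uniform(low, high, size=(pop_size, dim))
    cost = np.array([mismatch(ind) for ind in pop])
    for _ in range(gens):
        for i in range(pop_size):
            idx = rng.choice([k for k in range(pop_size) if k != i], 3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), low, high)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True       # at least one gene from the mutant
            trial = np.where(cross, mutant, pop[i])
            trial_cost = mismatch(trial)
            if trial_cost < cost[i]:              # greedy selection
                pop[i], cost[i] = trial, trial_cost
    best = np.argmin(cost)
    return pop[best], cost[best]

params, err = differential_evolution()
print("estimated (damping, frequency):", np.round(params, 3), "misfit:", round(err, 6))
```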

  5. A dynamic model of the marriage market-part 1: matching algorithm based on age preference and availability.

    Science.gov (United States)

    Matthews, A P; Garenne, M L

    2013-09-01

    The matching algorithm in a dynamic marriage market model is described in this first of two companion papers. Iterative Proportional Fitting is used to find a marriage function (an age distribution of new marriages for both sexes), in a stable reference population, that is consistent with the one-sex age distributions of new marriages, and includes age preference. The one-sex age distributions (which are the marginals of the two-sex distribution) are based on the Picrate model, and age preference on a normal distribution, both of which may be adjusted by choice of parameter values. For a population that is perturbed from the reference state, the total number of new marriages is found as the harmonic mean of target totals for men and women obtained by applying reference population marriage rates to the perturbed population. The marriage function uses the age preference function, assumed to be the same for the reference and the perturbed populations, to distribute the total number of new marriages. The marriage function also has an availability factor that varies as the population changes with time, where availability depends on the supply of unmarried men and women. To simplify exposition, only first marriage is treated, and the algorithm is illustrated by application to Zambia. In the second paper, remarriage and dissolution are included. Copyright © 2013 Elsevier Inc. All rights reserved.
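
A minimal sketch of the Iterative Proportional Fitting step mentioned above, assuming a Gaussian age-preference seed and hypothetical one-sex marginal distributions; it is not the authors' full marriage-market model.

```python
import numpy as np

def ipf(seed, row_targets, col_targets, iters=100, tol=1e-9):
    """Iterative Proportional Fitting: rescale rows and columns of a seed matrix
    until its marginals match the target one-sex age distributions."""
    table = seed.astype(float).copy()
    for _ in range(iters):
        table *= (row_targets / table.sum(axis=1))[:, None]
        table *= (col_targets / table.sum(axis=0))[None, :]
        if (np.allclose(table.sum(axis=1), row_targets, atol=tol)
                and np.allclose(table.sum(axis=0), col_targets, atol=tol)):
            break
    return table

ages = np.arange(15, 50)
# hypothetical age-preference seed: husbands on average two years older, spread of three years
pref = np.exp(-0.5 * ((ages[:, None] - ages[None, :] - 2) / 3.0) ** 2)
men_marginal = np.exp(-0.5 * ((ages - 27) / 5.0) ** 2)
men_marginal /= men_marginal.sum()
women_marginal = np.exp(-0.5 * ((ages - 25) / 5.0) ** 2)
women_marginal /= women_marginal.sum()

# two-sex age distribution of new marriages consistent with both one-sex marginals
marriage_function = ipf(pref, men_marginal, women_marginal)
```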

  6. An Iterative Algorithm to Determine the Dynamic User Equilibrium in a Traffic Simulation Model

    Science.gov (United States)

    Gawron, C.

    An iterative algorithm to determine the dynamic user equilibrium with respect to link costs defined by a traffic simulation model is presented. Each driver's route choice is modeled by a discrete probability distribution which is used to select a route in the simulation. After each simulation run, the probability distribution is adapted to minimize the travel costs. Although the algorithm does not depend on the simulation model, a queuing model is used for performance reasons. The stability of the algorithm is analyzed for a simple example network. As an application example, a dynamic version of Braess's paradox is studied.
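
A toy sketch of the iterative idea: each driver keeps a probability distribution over routes and relaxes it toward cheaper routes after every simulated run. The three-route network, congestion function and relaxation weights below are illustrative assumptions, not the queuing model used in the record.

```python
import numpy as np

def update_route_probs(probs, observed_costs, beta=0.1):
    """Shift a driver's route-choice distribution toward cheaper routes
    (a simple logit-style relaxation, one step per simulation run)."""
    utilities = -np.asarray(observed_costs, dtype=float)
    target = np.exp(beta * (utilities - utilities.max()))
    target /= target.sum()
    new_probs = 0.9 * np.asarray(probs) + 0.1 * target   # relax old distribution toward target
    return new_probs / new_probs.sum()

# toy example: three alternative routes whose cost grows with their own usage
probs = np.array([1 / 3, 1 / 3, 1 / 3])
for run in range(50):
    costs = np.array([10.0, 15.0, 12.0]) + 5.0 * probs   # congestion term depends on route shares
    probs = update_route_probs(probs, costs)
print(probs)   # settles near an equilibrium where no route is clearly cheaper
```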

  7. Capital Structure and Assets

    DEFF Research Database (Denmark)

    Flor, Christian Riis

    2008-01-01

    This paper analyzes a firm's capital structure choice when assets have outside value. Valuable assets implicitly provide a collateral and increase tax shield exploitation. The key feature in this paper is asset value uncertainty, implying that it is unknown ex ante whether the equity holders ex p...

  8. Energy Efficient Routing Algorithms in Dynamic Optical Core Networks with Dual Energy Sources

    DEFF Research Database (Denmark)

    Wang, Jiayuan; Fagertun, Anna Manolova; Ruepp, Sarah Renée

    2013-01-01

    This paper proposes new energy-efficient routing algorithms for optical core networks, with the application of solar energy sources and bundled links. A comprehensive solar energy model is described for the proposed network scenarios. Network performance in terms of energy savings, connection blocking probability, resource utilization and bundled link usage is evaluated with dynamic network simulations. Results show that the proposed algorithms, which aim to reduce the dynamic part of the network's energy consumption, may meanwhile raise the fixed part of the energy consumption.

  9. Normalization based K means Clustering Algorithm

    OpenAIRE

    Virmani, Deepali; Taneja, Shweta; Malhotra, Geetika

    2015-01-01

    K-means is an effective clustering technique used to separate similar data into groups based on initial cluster centroids. In this paper, a Normalization-based K-means clustering algorithm (N-K means) is proposed. The proposed N-K means algorithm applies normalization to the available data prior to clustering and calculates the initial centroids based on weights. Experimental results prove the improvement of the proposed N-K means clustering algorithm over existing...
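
One possible reading of the described approach, min-max normalization before clustering plus weight-based initial centroids, sketched with scikit-learn. The weighting and initialization scheme below is an assumption for illustration, not the authors' exact N-K means procedure.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import MinMaxScaler

def normalized_weighted_kmeans(X, k, weights=None, seed=0):
    """Min-max normalize the features first, then seed K-means with weighted means
    of k random partitions of the data as initial centroids."""
    Xn = MinMaxScaler().fit_transform(X)
    rng = np.random.default_rng(seed)
    if weights is None:
        weights = np.ones(len(Xn))
    # weight-based initial centroids computed from a random split of the samples
    parts = np.array_split(rng.permutation(len(Xn)), k)
    init = np.vstack([np.average(Xn[p], axis=0, weights=weights[p]) for p in parts])
    km = KMeans(n_clusters=k, init=init, n_init=1, random_state=seed).fit(Xn)
    return km.labels_, km.cluster_centers_

# synthetic data with wildly different feature scales, where normalization matters
X = np.random.default_rng(1).normal(size=(300, 4)) * [1, 10, 100, 1000]
labels, centers = normalized_weighted_kmeans(X, k=3)
```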

  10. Bidirectional Dynamic Diversity Evolutionary Algorithm for Constrained Optimization

    Directory of Open Access Journals (Sweden)

    Weishang Gao

    2013-01-01

    Full Text Available Evolutionary algorithms (EAs) were shown to be effective for complex constrained optimization problems. However, inflexible exploration-exploitation and an improper penalty in EAs with a penalty function would lead to losing the global optimum nearby or on the constrained boundary. Determining an appropriate penalty coefficient is also difficult in most studies. In this paper, we propose a bidirectional dynamic diversity evolutionary algorithm (Bi-DDEA) with multiagents guiding exploration-exploitation through local extrema to the global optimum in suitable steps. In Bi-DDEA, potential advantage is detected by three kinds of agents. The scale and the density of agents change dynamically according to the emergence of potentially optimal areas, which plays an important role in flexible exploration-exploitation. Meanwhile, a novel double optimum estimation strategy with objective fitness and penalty fitness is suggested to compute, respectively, the dominance trend of agents in the feasible region and the forbidden region. This bidirectional evolving with multiagents can not only effectively avoid the problem of determining the penalty coefficient but also quickly converge to the global optimum nearby or on the constrained boundary. By examining the rapidity and veracity of Bi-DDEA across benchmark functions, the proposed method is shown to be effective.

  11. Comprehensive transportation asset management : making a business case and prioritizing assets for inclusion in formal asset management programs.

    Science.gov (United States)

    2011-12-01

    Several agencies are applying asset management principles as a business tool and paradigm to help them define goals and prioritize agency resources in decision making. Previously, transportation asset management (TAM) has focused more on big ticke...

  12. A Spatial Queuing-Based Algorithm for Multi-Robot Task Allocation

    Directory of Open Access Journals (Sweden)

    William Lenagh

    2015-08-01

    Full Text Available Multi-robot task allocation (MRTA) is an important area of research in autonomous multi-robot systems. The main problem in MRTA is to allocate a set of tasks to a set of robots so that the tasks can be completed by the robots while ensuring that a certain metric, such as the time required to complete all tasks, or the distance traveled, or the energy expended by the robots is reduced. We consider a scenario where tasks can appear dynamically and a task needs to be performed by multiple robots to be completed. We propose a new algorithm called SQ-MRTA (Spatial Queueing-MRTA) that uses a spatial queue-based model to allocate tasks between robots in a distributed manner. We have implemented the SQ-MRTA algorithm on accurately simulated models of Corobot robots within the Webots simulator for different numbers of robots and tasks and compared its performance with other state-of-the-art MRTA algorithms. Our results show that the SQ-MRTA algorithm is able to scale up with the number of tasks and robots in the environment, and it either outperforms or performs comparably with respect to other distributed MRTA algorithms.

  13. CLUSTER ANALYSIS OF TOTAL ASSETS PROVIDED BY BANKS FROM FOUR CONTINENTS

    Directory of Open Access Journals (Sweden)

    MIRELA CĂTĂLINA TÜRKEȘ

    2017-08-01

    Full Text Available The paper analyses the total assets achieved in 2016 by the 96 strongest banks from four continents: Europe, America, Asia and Africa. It aims to evaluate the level of total assets provided by banks in 2016 and the degree of differentiation of the continental banking markets, in order to determine the overall condition of the banks. The methodologies used in this study are based on cluster and descriptive analysis. The data set was built from the information on total assets reported by the banks. The results indicate that most of the total banking assets are found in Asia and the fewest in Africa. At the end of 2016, the top 16 global banks owned total assets of $30.19 trillion; in the data set they form cluster 1, whose centroid was (2.25, 2.11, 3.06, 0.01).

  14. Proactive pavement asset management with climate change aspects

    Science.gov (United States)

    Zofka, Adam

    2018-05-01

    A Pavement Asset Management System is a systematic and objective tool to manage a pavement network based on rational, engineering and economic principles. Once implemented and mature, a Pavement Asset Management System serves the entire range of users, starting with maintenance engineers and ending with decision-makers. Such a system is necessary to coordinate an agency's management strategy, including proactive maintenance. Basic inputs in the majority of existing Pavement Asset Management System approaches comprise the actual pavement inventory with associated construction history and condition, traffic information, as well as various economic parameters. Some Pavement Management System approaches also include weather aspects, which are of particular importance considering ongoing climate change. This paper presents challenges in implementing a Pavement Asset Management System for those National Road Administrations that manage their pavement assets using more traditional strategies, e.g. a worst-first approach. Special consideration is given to weather-related inputs and the associated analysis to demonstrate the effects of climate change over short- and long-term horizons. Based on the presented examples, this paper concludes that National Road Administrations should account for weather-related factors in their Pavement Management Systems, as this has a significant impact on the system outcomes from the safety and economic perspective.

  15. The Normalized-Rate Iterative Algorithm: A Practical Dynamic Spectrum Management Method for DSL

    Directory of Open Access Journals (Sweden)

    Statovci Driton

    2006-01-01

    Full Text Available We present a practical solution for dynamic spectrum management (DSM) in digital subscriber line systems: the normalized-rate iterative algorithm (NRIA). Supported by a novel optimization problem formulation, the NRIA is the only DSM algorithm that jointly addresses spectrum balancing for frequency division duplexing systems and power allocation for the users sharing a common cable bundle. With a focus on being implementable rather than obtaining the highest possible theoretical performance, the NRIA is designed to efficiently solve the DSM optimization problem with the operators' business models in mind. This is achieved with the help of two types of parameters: the desired network asymmetry and the desired user priorities. The NRIA is a centralized DSM algorithm based on the iterative water-filling algorithm (IWFA) for finding efficient power allocations, but extends the IWFA by finding the achievable bitrates and by optimizing the bandplan. It is compared with three other DSM proposals: the IWFA, the optimal spectrum balancing algorithm (OSBA), and the bidirectional IWFA (bi-IWFA). We show that the NRIA achieves better bitrate performance than the IWFA and the bi-IWFA. It can even achieve performance almost as good as the OSBA, but with dramatically lower requirements on complexity. Additionally, the NRIA can achieve bitrate combinations that cannot be supported by any other DSM algorithm.
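
The iterative water-filling core that the NRIA builds on can be sketched as follows. The crosstalk coupling, channel gains and power budgets are illustrative assumptions rather than a DSL-accurate channel model.

```python
import numpy as np

def iterative_water_filling(gains, powers, noise=1e-3, crosstalk=0.05, iters=50):
    """Iterative water-filling: each user allocates its power budget across tones,
    treating the other users' current allocations (scaled by an assumed crosstalk
    coupling) as additional noise, and the loop is repeated until it settles."""
    n_users, n_tones = gains.shape
    p = np.zeros((n_users, n_tones))
    for _ in range(iters):
        for u in range(n_users):
            others = np.arange(n_users) != u
            interference = noise + crosstalk * (gains[others] * p[others]).sum(axis=0)
            inv = interference / gains[u]                # effective inverse channel per tone
            # bisection on the water level mu so that sum(max(mu - inv, 0)) equals the budget
            lo, hi = inv.min(), inv.max() + powers[u]
            for _ in range(60):
                mu = 0.5 * (lo + hi)
                if np.maximum(mu - inv, 0).sum() > powers[u]:
                    hi = mu
                else:
                    lo = mu
            p[u] = np.maximum(mu - inv, 0)
    return p

gains = np.random.default_rng(0).uniform(0.1, 1.0, size=(2, 16))   # two users, 16 tones
allocation = iterative_water_filling(gains, powers=np.array([1.0, 1.0]))
```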

  16. Trustworthiness Measurement Algorithm for TWfMS Based on Software Behaviour Entropy

    Directory of Open Access Journals (Sweden)

    Qiang Han

    2018-03-01

    Full Text Available As the virtual mirror of complex real-time business processes of organisations’ underlying information systems, the workflow management system (WfMS has emerged in recent decades as a new self-autonomous paradigm in the open, dynamic, distributed computing environment. In order to construct a trustworthy workflow management system (TWfMS, the design of a software behaviour trustworthiness measurement algorithm is an urgent task for researchers. Accompanying the trustworthiness mechanism, the measurement algorithm, with uncertain software behaviour trustworthiness information of the WfMS, should be resolved as an infrastructure. Based on the framework presented in our research prior to this paper, we firstly introduce a formal model for the WfMS trustworthiness measurement, with the main property reasoning based on calculus operators. Secondly, this paper proposes a novel measurement algorithm from the software behaviour entropy of calculus operators through the principle of maximum entropy (POME and the data mining method. Thirdly, the trustworthiness measurement algorithm for incomplete software behaviour tests and runtime information is discussed and compared by means of a detailed explanation. Finally, we provide conclusions and discuss certain future research areas of the TWfMS.

  17. FPGA-Based Implementation of Lithuanian Isolated Word Recognition Algorithm

    Directory of Open Access Journals (Sweden)

    Tomyslav Sledevič

    2013-05-01

    Full Text Available The paper describes the FPGA-based implementation of a Lithuanian isolated word recognition algorithm. An FPGA is selected for parallel process implementation using VHDL to ensure fast signal processing at a low clock rate. Cepstrum analysis was applied to feature extraction in voice. The dynamic time warping algorithm was used to compare the vectors of cepstrum coefficients. A library of features for 100 words was created and stored in the internal FPGA BRAM memory. Experimental testing with speaker-dependent records demonstrated a recognition rate of 94%. A recognition rate of 58% was achieved for speaker-independent records. Calculation of cepstrum coefficients took 8.52 ms at a 50 MHz clock, while 100 DTWs took 66.56 ms at a 25 MHz clock. (Article in Lithuanian)
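
A compact sketch of the dynamic time warping comparison of cepstral-coefficient vectors described above, as a plain Python/NumPy version rather than the FPGA implementation; the feature dimensions and word library are hypothetical.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping cost between two sequences of feature vectors."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def recognize(word_features, library):
    """Return the library entry whose feature sequence warps most cheaply onto the input."""
    return min(library, key=lambda name: dtw_distance(word_features, library[name]))

# hypothetical 12-dimensional cepstrum frames for two reference words
rng = np.random.default_rng(0)
library = {"labas": rng.normal(size=(40, 12)), "aciu": rng.normal(size=(35, 12))}
query = library["labas"] + 0.05 * rng.normal(size=(40, 12))
print(recognize(query, library))   # -> "labas"
```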

  18. Transmission probability-based dynamic power control for multi-radio mesh networks

    CSIR Research Space (South Africa)

    Olwal, TO

    2008-09-01

    Full Text Available This paper presents an analytical model for the selection of the transmission power based on the bi-directional medium access information. Most of dynamic transmission power control algorithms are based on the single directional channel...

  19. An improved contrast enhancement algorithm for infrared images based on adaptive double plateaus histogram equalization

    Science.gov (United States)

    Li, Shuo; Jin, Weiqi; Li, Li; Li, Yiyang

    2018-05-01

    Infrared thermal images can reflect the thermal-radiation distribution of a particular scene. However, the contrast of the infrared images is usually low. Hence, it is generally necessary to enhance the contrast of infrared images in advance to facilitate subsequent recognition and analysis. Based on the adaptive double plateaus histogram equalization, this paper presents an improved contrast enhancement algorithm for infrared thermal images. In the proposed algorithm, the normalized coefficient of variation of the histogram, which characterizes the level of contrast enhancement, is introduced as feedback information to adjust the upper and lower plateau thresholds. The experiments on actual infrared images show that compared to the three typical contrast-enhancement algorithms, the proposed algorithm has better scene adaptability and yields better contrast-enhancement results for infrared images with more dark areas or a higher dynamic range. Hence, it has high application value in contrast enhancement, dynamic range compression, and digital detail enhancement for infrared thermal images.
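
A minimal sketch of double-plateau histogram equalization with fixed thresholds. In the paper the upper and lower plateaus are adapted from the histogram's normalized coefficient of variation; that feedback loop is omitted here, and the threshold values and image data are assumptions.

```python
import numpy as np

def double_plateau_equalize(img, upper, lower, levels=65536, out_levels=256):
    """Clip the histogram between a lower and an upper plateau, then equalize.
    The upper plateau suppresses dominant background bins; the lower plateau
    lifts sparse but occupied bins so that detail is not lost."""
    hist, _ = np.histogram(img.ravel(), bins=levels, range=(0, levels))
    clipped = np.clip(hist, 0, upper)
    clipped[(hist > 0) & (clipped < lower)] = lower
    cdf = np.cumsum(clipped).astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
    lut = np.round(cdf * (out_levels - 1)).astype(np.uint8)
    return lut[img]

# low-contrast synthetic 16-bit frame standing in for an infrared image
raw = np.random.default_rng(0).integers(20000, 21000, size=(128, 128))
enhanced = double_plateau_equalize(raw, upper=200, lower=5)
```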

  20. The Dynamic Enterprise Network Composition Algorithm for Efficient Operation in Cloud Manufacturing

    Directory of Open Access Journals (Sweden)

    Gilseung Ahn

    2016-11-01

    Full Text Available As a service-oriented and networked model, cloud manufacturing (CM) has been proposed recently for solving a variety of manufacturing problems, including diverse requirements from customers. In CM, on-demand manufacturing services are provided by a temporary production network composed of several enterprises participating within an enterprise network. In other words, the production network is the main agent of production and a subset of an enterprise network. Therefore, it is essential to compose the enterprise network in a way that can respond to demands properly. A properly composed enterprise network means the network can handle demands that arrive at the CM with minimal network composition and operation costs, such as participation contract costs, system maintenance costs, and so forth. Due to trade-offs among costs (e.g., contract cost and opportunity cost of production), it is a non-trivial problem to find the optimal enterprise network composition. In addition, this includes probabilistic constraints, such as forecasted demand. In this paper, we propose an algorithm, named the dynamic enterprise network composition algorithm (DENCA), based on a genetic algorithm to solve the enterprise network composition problem. A numerical simulation result is provided to demonstrate the performance of the proposed algorithm.

  1. Comparison of Controller and Flight Deck Algorithm Performance During Interval Management with Dynamic Arrival Trees (STARS)

    Science.gov (United States)

    Battiste, Vernol; Lawton, George; Lachter, Joel; Brandt, Summer; Koteskey, Robert; Dao, Arik-Quang; Kraut, Josh; Ligda, Sarah; Johnson, Walter W.

    2012-01-01

    Managing the interval between arrival aircraft is a major part of the en route and TRACON controller's job. In an effort to reduce controller workload and low altitude vectoring, algorithms have been developed to allow pilots to take responsibility for, achieve and maintain proper spacing. Additionally, algorithms have been developed to create dynamic weather-free arrival routes in the presence of convective weather. In a recent study we examined an algorithm to handle dynamic re-routing in the presence of convective weather and two distinct spacing algorithms. The spacing algorithms originated from different core algorithms; both were enhanced with trajectory intent data for the study. These two algorithms were used simultaneously in a human-in-the-loop (HITL) simulation where pilots performed weather-impacted arrival operations into Louisville International Airport while also performing interval management (IM) on some trials. The controllers retained responsibility for separation, for managing the en route airspace and, on some trials, for managing IM. The goal was a stress test of dynamic arrival algorithms with ground and airborne spacing concepts. The flight deck spacing algorithms or controller-managed spacing not only had to be robust to the dynamic nature of aircraft re-routing around weather but also had to be compatible with two alternative algorithms for achieving the spacing goal. Flight deck interval management spacing in this simulation provided a clear reduction in controller workload relative to when controllers were responsible for spacing the aircraft. At the same time, spacing was much less variable with the flight deck automated spacing. Even though the approaches taken by the two spacing algorithms to achieve the interval management goals were slightly different, they seem to be simpatico in achieving the interval management goal of 130 sec by the TRACON boundary.

  2. A new algorithm for extended nonequilibrium molecular dynamics simulations of mixed flow

    NARCIS (Netherlands)

    Hunt, T.A.; Hunt, Thomas A.; Bernardi, Stefano; Todd, B.D.

    2010-01-01

    In this work, we develop a new algorithm for nonequilibrium molecular dynamics of fluids under planar mixed flow, a linear combination of planar elongational flow and planar Couette flow. To date, the only way of simulating mixed flow using nonequilibrium molecular dynamics techniques was to impose

  3. Prediction of future asset prices

    Science.gov (United States)

    Seong, Ng Yew; Hin, Pooi Ah; Ching, Soo Huei

    2014-12-01

    This paper attempts to incorporate trading volumes as an additional predictor for predicting asset prices. Denoting r(t) as the vector consisting of the time-t values of the trading volume and price of a given asset, we model the time-(t+1) asset price to be dependent on the present and l-1 past values r(t), r(t-1), ..., r(t-l+1) via a conditional distribution which is derived from a (2l+1)-dimensional power-normal distribution. A prediction interval based on the 100(α/2)% and 100(1-α/2)% points of the conditional distribution is then obtained. By examining the average lengths of the prediction intervals found by using the composite indices of the Malaysian stock market for the period 2008 to 2013, we found that the value 2 appears to be a good choice for l. With the omission of the trading volume in the vector r(t), the corresponding prediction interval exhibits a slightly longer average length, showing that it might be desirable to keep trading volume as a predictor. From the above conditional distribution, the probability that the time-(t+1) asset price will be larger than the time-t asset price is next computed. When the probability differs from 0 (or 1) by less than 0.03, the observed time-(t+1) increase in price tends to be negative (or positive). Thus the above probability has a good potential of being used as a market indicator in technical analysis.
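
A simplified stand-in for the described model: a Gaussian linear regression of the next price on the last l (volume, price) pairs, used to form a prediction interval. The power-normal conditional distribution of the paper is replaced here by a plain normal one, and all data below are synthetic.

```python
import numpy as np
from scipy.stats import norm

def conditional_interval(history, alpha=0.05, lags=2):
    """Regress the next price on the last `lags` (volume, price) pairs and report
    a (1 - alpha) prediction interval under a Gaussian residual assumption."""
    X, y = [], []
    for t in range(lags, len(history)):
        X.append(history[t - lags:t].ravel())   # stacked past (volume, price) values
        y.append(history[t, 1])                 # next price
    X = np.column_stack([np.ones(len(X)), np.array(X)])
    y = np.array(y)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid_std = np.std(y - X @ beta, ddof=X.shape[1])
    x_new = np.concatenate([[1.0], history[-lags:].ravel()])
    mean = x_new @ beta
    z = norm.ppf(1 - alpha / 2)
    return mean - z * resid_std, mean + z * resid_std

# hypothetical daily (volume, price) pairs
rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0, 1, 300)) + 100
volumes = rng.lognormal(10, 0.3, 300)
lo, hi = conditional_interval(np.column_stack([volumes, prices]))
```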

  4. The assets-based approach: furthering a neoliberal agenda or rediscovering the old public health? A critical examination of practitioner discourses.

    Science.gov (United States)

    Roy, Michael J

    2017-08-08

    The 'assets-based approach' to health and well-being has, on the one hand, been presented as a potentially empowering means to address the social determinants of health while, on the other, been criticised for obscuring structural drivers of inequality and encouraging individualisation and marketisation; in essence, for being a tool of neoliberalism. This study looks at how this apparent contestation plays out in practice through a critical realist-inspired examination of practitioner discourses, specifically of those working within communities to address social vulnerabilities that we know impact upon health. The study finds that practitioners interact with the assets-based policy discourse in interesting ways. Rather than unwitting tools of neoliberalism, they considered their work to be about mitigating the worst effects of poverty and social vulnerability in ways that enhance collectivism and solidarity, concepts that neoliberalism arguably seeks to disrupt. Furthermore, rather than a different, innovative, way of working, they consider the assets-based approach to simply be a re-labelling of what they have been doing anyway, for as long as they can remember. So, for practitioners, rather than a 'new' approach to public health, the assets-based public health movement seems to be a return to recognising and appreciating the role of community within public health policy and practice; ideals that predate neoliberalism by quite some considerable time.

  5. Tapping the Value Potential of Extended Asset Services - Experiences from Finnish Companies

    Science.gov (United States)

    Kortelainen, Helena; Hanski, Jyri; Valkokari, Pasi; Ahonen, Toni

    2017-09-01

    Recent developments in information technology and business models enable a wide variety of new services for companies looking for growth in services. Currently, manufacturing companies have been actively developing and providing novel asset based services such as condition monitoring and remote control. However, there is still untapped potential in extending the service delivery to the long-term co-operative development of physical assets over the whole lifecycle. Close collaboration with the end-customer and other stakeholders is needed in order to understand the value generation options. In this paper, we assess some of the asset services manufacturing companies are currently developing. The descriptions of the asset services are based on the results of an industrial workshop in which the companies presented their service development plans. The service propositions are compared with the Total Cost of Ownership and the closed loop life cycle frameworks. Based on the comparison, gaps that indicate potential for extended asset service concepts are recognised. In conclusion, we argue that the manufacturing companies do not recognise the whole potential for asset based services and for optimizing the performance of the end customers' processes.

  6. Managing Assets in The Infrastructure Sector

    Directory of Open Access Journals (Sweden)

    T.P. van Houten

    2010-09-01

    Full Text Available In view of the importance of managing assets and the lack of research in managing assets in the infrastructure sector, we develop an asset management model in this study. This model is developed in line with the unique characteristics of the infrastructure assets and asset management principles and criteria. In the proposed model, we consider activities at three levels, namely the strategical, tactical and operational levels. The interviews with experts in asset management and officials in several Dutch organizations have proven the potential of our asset management model.

  7. Dynamic airspace configuration method based on a weighted graph model

    Directory of Open Access Journals (Sweden)

    Chen Yangzhou

    2014-08-01

    Full Text Available This paper proposes a new method for dynamic airspace configuration based on a weighted graph model. The method begins with the construction of an undirected graph for the given airspace, where the vertices represent those key points such as airports, waypoints, and the edges represent those air routes. Those vertices are used as the sites of Voronoi diagram, which divides the airspace into units called as cells. Then, aircraft counts of both each cell and of each air-route are computed. Thus, by assigning both the vertices and the edges with those aircraft counts, a weighted graph model comes into being. Accordingly the airspace configuration problem is described as a weighted graph partitioning problem. Then, the problem is solved by a graph partitioning algorithm, which is a mixture of general weighted graph cuts algorithm, an optimal dynamic load balancing algorithm and a heuristic algorithm. After the cuts algorithm partitions the model into sub-graphs, the load balancing algorithm together with the heuristic algorithm transfers aircraft counts to balance workload among sub-graphs. Lastly, airspace configuration is completed by determining the sector boundaries. The simulation result shows that the designed sectors satisfy not only workload balancing condition, but also the constraints such as convexity, connectivity, as well as minimum distance constraint.

  8. Methodological aspects of network assets accounting

    Directory of Open Access Journals (Sweden)

    Yuhimenko-Nazaruk I.A.

    2017-08-01

    Full Text Available The necessity of using innovative tools for the processing and representation of information about network assets is substantiated. Suggestions for displaying network assets in accounts are presented. The main reasons for the need to display network assets in the financial statements of all members of the network structure (the economic essence of network assets as an object of accounting; the non-additive model for the formation of the value of network assets; the internetwork mechanism for the formation of the value of network assets) are identified. The stages of the accounting valuation of network assets are defined and substantiated. An analytical table for estimating the value of network assets and additional network capital in accounting is developed. The procedure for reflecting additional network capital in accounting is developed. The method of revaluation of network assets in accounting in the broad sense is revealed. The procedure for accounting for network assets when the number of participants in the network structure increases or decreases is determined.

  9. Dynamic characterization of oil fields, complex stratigraphically using genetic algorithms

    International Nuclear Information System (INIS)

    Gonzalez, Santiago; Hidrobo, Eduardo A

    2004-01-01

    A novel methodology is presented in this paper for the characterization of highly heterogeneous oil fields by integrating the oil field's dynamic information into the updated static model. The objective of the oil field characterization process is to build an oil field model, as realistic as possible, through the incorporation of all the available information. The classical approach consists in producing a model based on the oil field's static information, with the final stage of the process being the validation of the model against the available dynamic information. It is important to clarify that the term validation implies a punctual process by nature, generally intended to secure the required coherence between productive zones and petrophysical properties. The objective of the proposed methodology is to enhance the prediction capacity of the oil field model by previously integrating parameters inherent to the oil field's fluid dynamics through a process of dynamic data inversion based on an optimization procedure that uses evolutionary computation. The proposed methodology relies on the construction of a high-resolution static model of the oil field, upscaled by means of hybrid techniques while aiming to preserve the oil field's heterogeneity. Afterwards, using an analytic simulator as reference, the scaled model is methodically modified by means of an optimization process that uses genetic algorithms and production data as conditioning information. The process's final product is a model that respects the static and dynamic conditions of the oil field, with the capacity to minimize the economic impact that production history adjustments generate in simulation tasks. This final model features some petrophysical properties (porosity, permeability and water saturation), as modified to achieve a better adjustment of the simulated production history versus the real one (history matching). Additionally, the process involves a slight modification of relative permeability, which has

  10. A boundary PDE feedback control approach for the stabilization of mortgage price dynamics

    Science.gov (United States)

    Rigatos, G.; Siano, P.; Sarno, D.

    2017-11-01

    Several transactions taking place in financial markets are dependent on the pricing of mortgages (loans for the purchase of residences, land or farms). In this article, a method for stabilization of mortgage price dynamics is developed. It is considered that mortgage prices follow a PDE model which is equivalent to a multi-asset Black-Scholes PDE. Actually it is a diffusion process evolving in a 2D assets space, where the first asset is the house price and the second asset is the interest rate. By applying semi-discretization and a finite differences scheme this multi-asset PDE is transformed into a state-space model consisting of ordinary nonlinear differential equations. For the local subsystems, into which the mortgage PDE is decomposed, it becomes possible to apply boundary-based feedback control. The controller design proceeds by showing that the state-space model of the mortgage price PDE stands for a differentially flat system. Next, for each subsystem which is related to a nonlinear ODE, a virtual control input is computed, that can invert the subsystem's dynamics and can eliminate the subsystem's tracking error. From the last row of the state-space description, the control input (boundary condition) that is actually applied to the multi-factor mortgage price PDE system is found. This control input contains recursively all virtual control inputs which were computed for the individual ODE subsystems associated with the previous rows of the state-space equation. Thus, by tracing the rows of the state-space model backwards, at each iteration of the control algorithm, one can finally obtain the control input that should be applied to the mortgage price PDE system so as to assure that all its state variables will converge to the desirable setpoints. By showing the feasibility of such a control method it is also proven that through selected modification of the PDE boundary conditions the price of the mortgage can be made to converge and stabilize at specific
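
The semi-discretization step mentioned above, turning a two-asset diffusion PDE into ordinary differential equations by finite differences, can be sketched as follows. The drift terms, the flatness-based controller and the boundary-control law of the paper are omitted; all coefficients, grid sizes and the explicit time-stepping choice are illustrative assumptions.

```python
import numpy as np

def semi_discretize_step(P, h_s, h_r, sigma_s, sigma_r, dt):
    """One explicit time step of a toy two-asset diffusion PDE
    P_t = 0.5*sigma_s^2*P_ss + 0.5*sigma_r^2*P_rr, discretized by central differences.
    Boundary rows/columns are held fixed, mimicking boundary-applied control inputs."""
    P_new = P.copy()
    P_ss = (P[2:, 1:-1] - 2 * P[1:-1, 1:-1] + P[:-2, 1:-1]) / h_s ** 2
    P_rr = (P[1:-1, 2:] - 2 * P[1:-1, 1:-1] + P[1:-1, :-2]) / h_r ** 2
    P_new[1:-1, 1:-1] += dt * (0.5 * sigma_s ** 2 * P_ss + 0.5 * sigma_r ** 2 * P_rr)
    return P_new

# grid over house price (rows) and interest rate (columns)
P = np.random.default_rng(0).uniform(90, 110, size=(50, 50))
for _ in range(1000):
    P = semi_discretize_step(P, h_s=1.0, h_r=1.0, sigma_s=0.2, sigma_r=0.1, dt=0.1)
```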

  11. Algorithmic Information Dynamics of Persistent Patterns and Colliding Particles in the Game of Life

    KAUST Repository

    Zenil, Hector

    2018-02-18

    We demonstrate the way to apply and exploit the concept of algorithmic information dynamics in the characterization and classification of dynamic and persistent patterns, motifs and colliding particles in, without loss of generalization, Conway's Game of Life (GoL) cellular automaton as a case study. We analyze the distribution of prevailing motifs that occur in GoL from the perspective of algorithmic probability. We demonstrate how the tools introduced are an alternative to computable measures such as entropy and compression algorithms which are often nonsensitive to small changes and features of non-statistical nature in the study of evolving complex systems and their emergent structures.

  12. Managing assets in the infrastructure sector

    NARCIS (Netherlands)

    van Houten, T.P.; Zhang, L.

    2010-01-01

    In view of the importance of managing assets and the lack of research in managing assets in the infrastructure sector, we develop an asset management model in this study. This model is developed in line with the unique characteristics of the infrastructure assets and asset management principles and

  13. Effectiveness of infrastructure asset management: challenges for public agencies

    NARCIS (Netherlands)

    Schraven, Daan; Hartmann, Andreas; Dewulf, Geert P.M.R.

    2011-01-01

    Purpose: The aim of this research is to better understand the decisions in infrastructure asset management at public agencies and the challenges of these agencies to improve the effectiveness of their decision making. Design/methodology/approach: Based on a literature review on asset management at

  14. Empowering file-based radio production through media asset management systems

    Science.gov (United States)

    Muylaert, Bjorn; Beckers, Tom

    2006-10-01

    In recent years, IT-based production and archiving of media has matured to a level which enables broadcasters to switch over from tape- or CD-based to file-based workflows for the production of their radio and television programs. This technology is essential for the future of broadcasters as it provides the flexibility and speed of execution the customer demands by enabling, among others, concurrent access and production, faster than real-time ingest, edit during ingest, centrally managed annotation and quality preservation of media. In terms of automation of program production, the radio department is the most advanced within the VRT, the Flemish broadcaster. Since a couple of years ago, the radio department has been working with digital equipment and producing its programs mainly on standard IT equipment. Historically, the shift from analogue to digital based production has been a step by step process initiated and coordinated by each radio station separately, resulting in a multitude of tools and metadata collections, some of them developed in-house, lacking integration. To make matters worse, each of those stations adopted a slightly different production methodology. The planned introduction of a company-wide Media Asset Management System allows a coordinated overhaul to a unified production architecture. Benefits include the centralized ingest and annotation of audio material and the uniform, integrated (in terms of IT infrastructure) workflow model. Needless to say, the ingest strategy, metadata management and integration with radio production systems play a major role in the level of success of any improvement effort. This paper presents a data model for audio-specific concepts relevant to radio production. It includes an investigation of ingest techniques and strategies. Cooperation with external, professional production tools is demonstrated through a use-case scenario: the integration of an existing, multi-track editing tool with a commercially available

  15. An Energy-Efficient Spectrum-Aware Reinforcement Learning-Based Clustering Algorithm for Cognitive Radio Sensor Networks.

    Science.gov (United States)

    Mustapha, Ibrahim; Mohd Ali, Borhanuddin; Rasid, Mohd Fadlee A; Sali, Aduwati; Mohamad, Hafizal

    2015-08-13

    It is well-known that clustering partitions network into logical groups of nodes in order to achieve energy efficiency and to enhance dynamic channel access in cognitive radio through cooperative sensing. While the topic of energy efficiency has been well investigated in conventional wireless sensor networks, the latter has not been extensively explored. In this paper, we propose a reinforcement learning-based spectrum-aware clustering algorithm that allows a member node to learn the energy and cooperative sensing costs for neighboring clusters to achieve an optimal solution. Each member node selects an optimal cluster that satisfies pairwise constraints, minimizes network energy consumption and enhances channel sensing performance through an exploration technique. We first model the network energy consumption and then determine the optimal number of clusters for the network. The problem of selecting an optimal cluster is formulated as a Markov Decision Process (MDP) in the algorithm and the obtained simulation results show convergence, learning and adaptability of the algorithm to dynamic environment towards achieving an optimal solution. Performance comparisons of our algorithm with the Groupwise Spectrum Aware (GWSA)-based algorithm in terms of Sum of Square Error (SSE), complexity, network energy consumption and probability of detection indicate improved performance from the proposed approach. The results further reveal that an energy savings of 9% and a significant Primary User (PU) detection improvement can be achieved with the proposed approach.

  16. Continuous Time Dynamic Contraflow Models and Algorithms

    Directory of Open Access Journals (Sweden)

    Urmila Pyakurel

    2016-01-01

    Full Text Available The research on evacuation planning problem is promoted by the very challenging emergency issues due to large scale natural or man-created disasters. It is the process of shifting the maximum number of evacuees from the disastrous areas to the safe destinations as quickly and efficiently as possible. Contraflow is a widely accepted model for good solution of evacuation planning problem. It increases the outbound road capacity by reversing the direction of roads towards the safe destination. The continuous dynamic contraflow problem sends the maximum number of flow as a flow rate from the source to the sink in every moment of time unit. We propose the mathematical model for the continuous dynamic contraflow problem. We present efficient algorithms to solve the maximum continuous dynamic contraflow and quickest continuous contraflow problems on single source single sink arbitrary networks and continuous earliest arrival contraflow problem on single source single sink series-parallel networks with undefined supply and demand. We also introduce an approximation solution for continuous earliest arrival contraflow problem on two-terminal arbitrary networks.

  17. Algorithms for computational fluid dynamics on parallel processors

    International Nuclear Information System (INIS)

    Van de Velde, E.F.

    1986-01-01

    A study of parallel algorithms for the numerical solution of partial differential equations arising in computational fluid dynamics is presented. The actual implementation on parallel processors of shared and nonshared memory design is discussed. The performance of these algorithms is analyzed in terms of machine efficiency, communication time, bottlenecks and software development costs. For elliptic equations, a parallel preconditioned conjugate gradient method is described, which has been used to solve pressure equations discretized with high order finite elements on irregular grids. A parallel full multigrid method and a parallel fast Poisson solver are also presented. Hyperbolic conservation laws were discretized with parallel versions of finite difference methods like the Lax-Wendroff scheme and with the Random Choice method. Techniques are developed for comparing the behavior of an algorithm on different architectures as a function of problem size and local computational effort. Effective use of these advanced architecture machines requires the use of machine dependent programming. It is shown that the portability problems can be minimized by introducing high level operations on vectors and matrices structured into program libraries
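
A serial sketch of the preconditioned conjugate gradient method referred to above, using a simple Jacobi (diagonal) preconditioner; the parallel, distributed-memory aspects (partitioned matrix-vector products and global reductions) are only indicated in the comments, and the test matrix is an assumed 1D Laplacian.

```python
import numpy as np

def jacobi_pcg(A, b, tol=1e-10, max_iter=500):
    """Conjugate gradient with a Jacobi preconditioner. The matrix-vector product
    and the inner products are the operations that would be distributed across
    processors in a parallel implementation."""
    x = np.zeros_like(b)
    M_inv = 1.0 / np.diag(A)          # diagonal preconditioner
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p                    # distributed mat-vec in the parallel setting
        alpha = rz / (p @ Ap)         # inner products become global reductions
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# symmetric positive definite test matrix (discrete 1D Laplacian)
n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
x = jacobi_pcg(A, np.ones(n))
```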

  18. Optimal Management Of Renewable-Based Mgs An Intelligent Approach Through The Evolutionary Algorithm

    Directory of Open Access Journals (Sweden)

    Mehdi Nafar

    2015-08-01

    Full Text Available Abstract- This article proposes a probabilistic framework built on scenario generation to account for the uncertainties in the optimal operation management of Micro Grids (MGs). The MG contains different renewable energy resources such as a Wind Turbine (WT), Micro Turbine (MT), Photovoltaic (PV) unit, Fuel Cell (FC) and one battery as the storage device. The proposed framework is based on scenario generation and the roulette wheel mechanism to produce different scenarios for handling the uncertainties of the relevant factors. It uses the normal distribution as the probability distribution function of the random factors. The uncertainties considered in this paper are grid bid variations, load demand forecasting error, and PV and WT output power production. It is worth noting that solving the MG problem for 24 hours of a day, while considering diverse uncertainties and different constraints, requires a powerful optimization method that converges fast without falling into local optima. To this end, the Group Search Optimization (GSO) method is presented to search the total search space globally. The GSO algorithm is inspired by the group behaviour of animals, and one modification of the GSO procedure is also proposed for this algorithm. The proposed framework and method are applied to one test grid-connected MG as a typical grid.

  19. Dynamic Mean-Variance Asset Allocation

    OpenAIRE

    Basak, Suleyman; Chabakauri, Georgy

    2009-01-01

    Mean-variance criteria remain prevalent in multi-period problems, and yet not much is known about their dynamically optimal policies. We provide a fully analytical characterization of the optimal dynamic mean-variance portfolios within a general incomplete-market economy, and recover a simple structure that also inherits several conventional properties of static models. We also identify a probability measure that incorporates intertemporal hedging demands and facilitates much tractability in ...

  20. Content-based and algorithmic classifications of journals: Perspectives on the dynamics of scientific communication and indexer effects

    NARCIS (Netherlands)

    Rafols, I; Leydesdorff, L.

    2009-01-01

    The aggregated journal-journal citation matrix—based on the Journal Citation Reports (JCR) of the Science Citation Index—can be decomposed by indexers or algorithmically. In this study, we test the results of two recently available algorithms for the decomposition of large matrices against two

  1. Preconditioned dynamic mode decomposition and mode selection algorithms for large datasets using incremental proper orthogonal decomposition

    Science.gov (United States)

    Ohmichi, Yuya

    2017-07-01

    In this letter, we propose a simple and efficient framework of dynamic mode decomposition (DMD) and mode selection for large datasets. The proposed framework explicitly introduces a preconditioning step using an incremental proper orthogonal decomposition (POD) to DMD and mode selection algorithms. By performing the preconditioning step, the DMD and mode selection can be performed with low memory consumption and therefore can be applied to large datasets. Additionally, we propose a simple mode selection algorithm based on a greedy method. The proposed framework is applied to the analysis of three-dimensional flow around a circular cylinder.
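
A minimal sketch of DMD with a POD projection step; the paper's incremental POD update is replaced here by a one-shot SVD for brevity, and the traveling-wave snapshot data are synthetic.

```python
import numpy as np

def pod_preconditioned_dmd(X, Y, r=10):
    """DMD with a POD preconditioning step: project the snapshot pairs (X, Y)
    onto the leading r POD modes, fit the low-order linear operator there,
    then lift the eigenvectors back to the full space (exact DMD modes)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Ur, sr, Vr = U[:, :r], s[:r], Vt[:r].T
    A_tilde = Ur.T @ Y @ Vr / sr            # reduced-order operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = (Y @ Vr / sr) @ W               # DMD modes in the full state space
    return eigvals, modes

# snapshot matrix of a toy traveling wave; columns are successive time samples
x = np.linspace(0, 2 * np.pi, 200)
t = np.linspace(0, 10, 80)
data = np.array([np.sin(x - 0.5 * tk) for tk in t]).T
eigvals, modes = pod_preconditioned_dmd(data[:, :-1], data[:, 1:], r=4)
```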

  2. Unveiling the development of intracranial injury using dynamic brain EIT: an evaluation of current reconstruction algorithms.

    Science.gov (United States)

    Li, Haoting; Chen, Rongqing; Xu, Canhua; Liu, Benyuan; Tang, Mengxing; Yang, Lin; Dong, Xiuzhen; Fu, Feng

    2017-08-21

    Dynamic brain electrical impedance tomography (EIT) is a promising technique for continuously monitoring the development of cerebral injury. While there are many reconstruction algorithms available for brain EIT, there is still a lack of studies comparing their performance in the context of dynamic brain monitoring. To address this problem, we develop a framework for evaluating current algorithms in terms of their ability to correctly identify small intracranial conductivity changes. Firstly, a simulated 3D head phantom with a realistic layered structure and impedance distribution is developed. Next, several reconstruction algorithms, such as back projection (BP), damped least-squares (DLS), Bayesian, split Bregman (SB) and GREIT, are introduced. We investigate their temporal response, noise performance, and location and shape error with respect to different noise levels on the simulation phantom. The results show that the SB algorithm demonstrates superior performance in reducing image error. To further improve the location accuracy, we optimize SB by incorporating brain structure-based conductivity distribution priors, in which the differences in conductivity between brain tissues and the inhomogeneous conductivity distribution of the skull are considered. We compare this novel algorithm (called SB-IBCD) with SB and DLS using anatomically correct head-shaped phantoms with spatially varying skull conductivity. Main results and significance: the results showed that SB-IBCD is the most effective in unveiling small intracranial conductivity changes, reducing the image error by an average of 30.0% compared to DLS.

  3. Fast and robust wavelet-based dynamic range compression and contrast enhancement model with color restoration

    Science.gov (United States)

    Unaldi, Numan; Asari, Vijayan K.; Rahman, Zia-ur

    2009-05-01

    Recently we proposed a wavelet-based dynamic range compression algorithm to improve the visual quality of digital images captured from high dynamic range scenes with non-uniform lighting conditions. The fast image enhancement algorithm that provides dynamic range compression, while preserving the local contrast and tonal rendition, is also a good candidate for real time video processing applications. Although the colors of the enhanced images produced by the proposed algorithm are consistent with the colors of the original image, the proposed algorithm fails to produce color constant results for some "pathological" scenes that have very strong spectral characteristics in a single band. The linear color restoration process is the main reason for this drawback. Hence, a different approach is required for the final color restoration process. In this paper the latest version of the proposed algorithm, which deals with this issue is presented. The results obtained by applying the algorithm to numerous natural images show strong robustness and high image quality.

  4. Identification of the vital digital assets based on PSA results analysis

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Moon Kyoung; Seong, Poong Hyun [KAIST, Daejeon (Korea, Republic of); Son, Han Seong [Joongbu Univiersity, Geumsan (Korea, Republic of); Kim, Hyundoo [Korea Institute of Nuclear Nonproliferation and Control, Daejeon (Korea, Republic of)

    2016-10-15

    As the main systems for the overall management of operation, control, monitoring, measurement, and safety functions in an emergency, instrumentation and control (I and C) systems in nuclear power plants have been digitalized gradually for precise operation and convenience. The digitalization of infrastructure makes systems vulnerable to cyber threats and hybrid attacks. According to the ICS-CERT report, the number of vulnerabilities in ICS industries increases rapidly over time. Recently, due to the digitalization of I and C, the need for cyber security in the digitalized I and C of NPPs has begun to rise. However, there are too many critical digital assets (CDAs) in NPPs. More than 60% of the total critical systems are digital systems. Addressing more than 100 security controls for each CDA requires too much effort from both the licensee and the inspector. It is necessary to focus on the more significant CDAs for effective regulation. Probabilistic Safety Analysis (PSA) results are analyzed in order to identify the more significant CDAs which could evoke an accident in NPPs through digital malfunction or cyber-attacks. By eliciting minimal cut sets using fault tree analyses, accident-related CDAs are identified. Also, the CDAs that must be secured from outsiders are elicited for certain accident scenarios. It is expected that effective cyber security regulation based on the graded approach can be implemented. Furthermore, defense-in-depth of digital assets for NPP safety can be built up. Digital technologies such as computers, control systems, and data networks currently play essential roles in modern NPPs. Further, the introduction of new digitalized technologies is also being considered. These digital technologies make the operation of NPPs more convenient and economical; however, they are inherently susceptible to problems such as digital malfunction of components or cyber-attacks. Recently, needs for cyber security on digitalized nuclear Instrumentation and Control (I and C

  5. Identification of the vital digital assets based on PSA results analysis

    International Nuclear Information System (INIS)

    Choi, Moon Kyoung; Seong, Poong Hyun; Son, Han Seong; Kim, Hyundoo

    2016-01-01

    As the main systems for the overall management of operation, control, monitoring, measurement, and safety functions in an emergency, instrumentation and control (I and C) systems in nuclear power plants have been digitalized gradually for precise operation and convenience. The digitalization of infrastructure makes systems vulnerable to cyber threats and hybrid attacks. According to the ICS-CERT report, the number of vulnerabilities in ICS industries increases rapidly over time. Recently, due to the digitalization of I and C, the need for cyber security in the digitalized I and C of NPPs has begun to rise. However, there are too many critical digital assets (CDAs) in NPPs. More than 60% of the total critical systems are digital systems. Addressing more than 100 security controls for each CDA requires too much effort from both the licensee and the inspector. It is necessary to focus on the more significant CDAs for effective regulation. Probabilistic Safety Analysis (PSA) results are analyzed in order to identify the more significant CDAs which could evoke an accident in NPPs through digital malfunction or cyber-attacks. By eliciting minimal cut sets using fault tree analyses, accident-related CDAs are identified. Also, the CDAs that must be secured from outsiders are elicited for certain accident scenarios. It is expected that effective cyber security regulation based on the graded approach can be implemented. Furthermore, defense-in-depth of digital assets for NPP safety can be built up. Digital technologies such as computers, control systems, and data networks currently play essential roles in modern NPPs. Further, the introduction of new digitalized technologies is also being considered. These digital technologies make the operation of NPPs more convenient and economical; however, they are inherently susceptible to problems such as digital malfunction of components or cyber-attacks. Recently, needs for cyber security on digitalized nuclear Instrumentation and Control (I and C

  6. A structural dynamic factor model for the effects of monetary policy estimated by the EM algorithm

    DEFF Research Database (Denmark)

    Bork, Lasse

    This paper applies the maximum likelihood based EM algorithm to a large-dimensional factor analysis of US monetary policy. Specifically, economy-wide effects of shocks to the US federal funds rate are estimated in a structural dynamic factor model in which 100+ US macroeconomic and financial time series are driven by the joint dynamics of the federal funds rate and a few correlated dynamic factors. This paper contains a number of methodological contributions to the existing literature on data-rich monetary policy analysis. Firstly, the identification scheme allows for correlated factor dynamics as opposed to the orthogonal factors resulting from the popular principal component approach to structural factor models. Correlated factors are economically more sensible and important for a richer monetary policy transmission mechanism. Secondly, I consider both static factor loadings as well as dynamic...

  7. Identifying Assets Associated with Quality Extension Programming at the Local Level

    Directory of Open Access Journals (Sweden)

    Amy Harder

    2017-10-01

    Full Text Available County Extension offices are responsible for the majority of programming delivered in the United States. The purpose of this study was to identify and explore assets influencing the quality of county Extension programs. A basic qualitative research design was followed to conduct constant comparative analysis of five Extension county program review reports. Using the appreciative inquiry process as the lens through which to view the county program review reports revealed multiple assets leading to quality programming. Assets of the reviewed county Extension programs were found to cluster within the following themes: competent and enthusiastic Extension faculty, community partnerships, engaged and supportive stakeholders, effective resource management, sufficient and stable workforce, meeting stakeholder needs, positive reputation, access to facilities, positive relationships between county and state faculty, and innovative practices. The use of both needs-based and assets-based paradigms will provide Extension organizations with a more holistic understanding of its assets and a research-based foundation from which to make decisions about strengthening the organization at all levels.

  8. Wavelet-LMS algorithm-based echo cancellers

    Science.gov (United States)

    Seetharaman, Lalith K.; Rao, Sathyanarayana S.

    2002-12-01

    This paper presents Echo Cancellers based on the Wavelet-LMS algorithm. The performance of the Least Mean Square algorithm in the wavelet transform domain is observed and its application to echo cancellation is analyzed. The Widrow-Hoff Least Mean Square algorithm is the most widely used algorithm for adaptive filters that function as echo cancellers. Present-day communication signals are widely non-stationary in nature, and some errors crop up when the Least Mean Square algorithm is used for echo cancellers handling such signals. The analysis of non-stationary signals often involves a compromise regarding how well transitions or discontinuities can be located. The multi-scale or multi-resolution signal analysis, which is the essence of the wavelet transform, makes wavelets popular in non-stationary signal analysis. In this paper, we present a Wavelet-LMS algorithm wherein the wavelet coefficients of a signal are modified adaptively using the Least Mean Square algorithm and then reconstructed to give an echo-free signal. The Echo Canceller based on this algorithm is found to have better convergence and a comparatively lower MSE (Mean Square Error).
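
A sketch of the transform-domain LMS idea: the tap-input vector is mapped by an orthonormal Haar matrix before a (normalized) Widrow-Hoff update. The echo path, step size and signal lengths below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar transform matrix of size n (n must be a power of two)."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])
    bottom = np.kron(np.eye(n // 2), [1.0, -1.0])
    return np.vstack([top, bottom]) / np.sqrt(2.0)

def wavelet_lms_echo_canceller(far_end, mic, taps=16, mu=0.05):
    """Transform-domain LMS: the regressor is Haar-transformed before the usual
    Widrow-Hoff update, which tends to speed convergence for correlated,
    non-stationary speech-like inputs. Returns the echo-cancelled signal."""
    T = haar_matrix(taps)
    w = np.zeros(taps)
    out = np.zeros(len(mic))
    for n in range(taps, len(mic)):
        u = T @ far_end[n - taps:n][::-1]        # wavelet-domain regressor
        e = mic[n] - w @ u                       # error = echo-cancelled sample
        w += mu * e * u / (u @ u + 1e-8)         # normalized LMS step
        out[n] = e
    return out

rng = np.random.default_rng(0)
far = rng.normal(size=4000)
echo_path = rng.normal(size=16) * np.exp(-np.arange(16) / 4)     # assumed echo impulse response
mic = np.convolve(far, echo_path)[:4000] + 0.01 * rng.normal(size=4000)
cancelled = wavelet_lms_echo_canceller(far, mic)
```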

  9. PeerShield: determining control and resilience criticality of collaborative cyber assets in networks

    Science.gov (United States)

    Cam, Hasan

    2012-06-01

    As attackers get more coordinated and advanced in cyber attacks, cyber assets are required to have much more resilience, control effectiveness, and collaboration in networks. Such a requirement makes it essential to take a comprehensive and objective approach for measuring the individual and relative performances of cyber security assets in network nodes. To this end, this paper presents four techniques as to how the relative importance of cyber assets can be measured more comprehensively and objectively by considering together the main variables of risk assessment (e.g., threats, vulnerabilities), multiple attributes (e.g., resilience, control, and influence), network connectivity and controllability among collaborative cyber assets in networks. In the first technique, a Bayesian network is used to include the random variables for control, recovery, and resilience attributes of nodes, in addition to the random variables of threats, vulnerabilities, and risk. The second technique shows how graph matching and coloring can be utilized to form collaborative pairs of nodes to shield together against threats and vulnerabilities. The third technique ranks the security assets of nodes by incorporating multiple weights and thresholds of attributes into a decision-making algorithm. In the fourth technique, the hierarchically well-separated tree is enhanced to first identify critical nodes of a network with respect to their attributes and network connectivity, and then select some nodes as driver nodes for network controllability.
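
    The third technique, ranking assets by weighted attributes with thresholds, can be illustrated with a minimal sketch; the attribute names, weights, and thresholds below are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of attribute-weighted ranking of cyber assets (illustrative).
assets = {
    "fw-01":  {"resilience": 0.8, "control": 0.7, "influence": 0.4},
    "ids-02": {"resilience": 0.6, "control": 0.9, "influence": 0.7},
    "db-03":  {"resilience": 0.3, "control": 0.5, "influence": 0.9},
}
weights = {"resilience": 0.4, "control": 0.35, "influence": 0.25}
thresholds = {"resilience": 0.2, "control": 0.2, "influence": 0.2}

def score(attrs):
    # Attributes below their threshold contribute nothing to the ranking score.
    return sum(weights[a] * v for a, v in attrs.items() if v >= thresholds[a])

ranking = sorted(assets, key=lambda n: score(assets[n]), reverse=True)
print(ranking)  # most critical collaborative assets first
```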

  10. Dynamic statistical optimization of GNSS radio occultation bending angles: advanced algorithm and performance analysis

    Science.gov (United States)

    Li, Y.; Kirchengast, G.; Scherllin-Pirscher, B.; Norman, R.; Yuan, Y. B.; Fritzer, J.; Schwaerz, M.; Zhang, K.

    2015-08-01

    We introduce a new dynamic statistical optimization algorithm to initialize ionosphere-corrected bending angles of Global Navigation Satellite System (GNSS)-based radio occultation (RO) measurements. The new algorithm estimates background and observation error covariance matrices with geographically varying uncertainty profiles and realistic global-mean correlation matrices. The error covariance matrices estimated by the new approach are more accurate and realistic than in simplified existing approaches and can therefore be used in statistical optimization to provide optimal bending angle profiles for high-altitude initialization of the subsequent Abel transform retrieval of refractivity. The new algorithm is evaluated against the existing Wegener Center Occultation Processing System version 5.6 (OPSv5.6) algorithm, using simulated data on two test days from January and July 2008 and real observed CHAllenging Minisatellite Payload (CHAMP) and Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) measurements from the complete months of January and July 2008. The following is achieved for the new method's performance compared to OPSv5.6: (1) significant reduction of random errors (standard deviations) of optimized bending angles down to about half of their size or more; (2) reduction of the systematic differences in optimized bending angles for simulated MetOp data; (3) improved retrieval of refractivity and temperature profiles; and (4) realistically estimated global-mean correlation matrices and realistic uncertainty fields for the background and observations. Overall the results indicate high suitability for employing the new dynamic approach in the processing of long-term RO data into a reference climate record, leading to well-characterized and high-quality atmospheric profiles over the entire stratosphere.
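
    The core statistical-optimization step can be sketched as a minimum-variance blend of background and observed bending-angle profiles weighted by their error covariances; the profiles, error models, and the diagonal covariance structure below are assumptions for illustration, not the OPS implementation.

```python
import numpy as np

# Synthetic stand-ins for a background (climatology) and an observed profile.
z = np.linspace(40, 80, 81)                     # impact heights (km), illustrative
alpha_bg = 1e-3 * np.exp(-z / 7.0)              # background bending angle
alpha_obs = 1.05 * alpha_bg + 2e-7 * np.random.default_rng(1).standard_normal(z.size)

sigma_bg = 0.10 * alpha_bg                      # background uncertainty profile
sigma_obs = np.full(z.size, 2e-7)               # observation noise floor
B = np.diag(sigma_bg**2)                        # background error covariance
O = np.diag(sigma_obs**2)                       # observation error covariance

# Minimum-variance combination: alpha = alpha_bg + B (B + O)^-1 (alpha_obs - alpha_bg)
K = B @ np.linalg.inv(B + O)
alpha_opt = alpha_bg + K @ (alpha_obs - alpha_bg)
print(alpha_opt[:3])
```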

  11. A Genetic Algorithms Based Approach for Identification of Escherichia coli Fed-batch Fermentation

    Directory of Open Access Journals (Sweden)

    Olympia Roeva

    2004-10-01

    Full Text Available This paper presents the use of genetic algorithms for identification of the Escherichia coli fed-batch fermentation process. Genetic algorithms are a directed random search technique, based on the mechanics of natural selection and natural genetics, which can find the global optimal solution in a complex multidimensional search space. The dynamic behavior of the considered process has a known nonlinear structure, described by a system of deterministic nonlinear differential equations according to the mass balance. The parameters of the model are estimated using genetic algorithms. Simulation examples demonstrating the effectiveness and robustness of the proposed identification scheme are included. As a result, the model accurately predicts the process of cultivation of E. coli.
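
    A minimal sketch of the idea, assuming a toy Monod-type batch model in place of the actual E. coli fed-batch equations: a simple genetic algorithm with truncation selection, arithmetic crossover, and Gaussian mutation recovers the model parameters from noisy simulated data.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(mu_max, ks, yxs, x0=0.1, s0=5.0, dt=0.05, steps=200):
    """Toy Monod-type growth model (illustrative stand-in for the E. coli model)."""
    x, s, xs = x0, s0, []
    for _ in range(steps):
        mu = mu_max * s / (ks + s)
        x += dt * mu * x
        s = max(s - dt * mu * x / yxs, 0.0)
        xs.append(x)
    return np.array(xs)

true_params = (0.5, 0.3, 0.6)
data = simulate(*true_params) + 0.01 * rng.standard_normal(200)

def fitness(p):
    return -np.sum((simulate(*p) - data) ** 2)          # minimize squared error

bounds = np.array([[0.05, 1.0], [0.01, 1.0], [0.1, 1.0]])
pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(40, 3))

for gen in range(60):
    fit = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(fit)[::-1][:20]]            # truncation selection
    children = []
    for _ in range(20):
        a, b = parents[rng.integers(20)], parents[rng.integers(20)]
        w = rng.random(3)
        child = w * a + (1 - w) * b                      # arithmetic crossover
        child += 0.02 * rng.standard_normal(3)           # Gaussian mutation
        children.append(np.clip(child, bounds[:, 0], bounds[:, 1]))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(p) for p in pop])]
print("estimated (mu_max, Ks, Yxs):", np.round(best, 3))
```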

  12. Content-based and algorithmic classifications of journals: perspectives on the dynamics of scientific communication and indexer effects

    NARCIS (Netherlands)

    Rafols, I.; Leydesdorff, L.; Larsen, B.; Leta, J.

    2009-01-01

    The aggregated journal-journal citation matrix—based on the Journal Citation Reports (JCR) of the Science Citation Index—can be decomposed by indexers and/or algorithmically. In this study, we test the results of two recently available algorithms for the decomposition of large matrices against two

  13. Algorithms and programs of dynamic mixture estimation unified approach to different types of components

    CERN Document Server

    Nagy, Ivan

    2017-01-01

    This book provides a general theoretical background for constructing recursive Bayesian estimation algorithms for mixture models. It collects the recursive algorithms for estimating dynamic mixtures of various distributions and brings them into a unified form, providing a scheme for constructing the estimation algorithm for a mixture of components modeled by distributions with reproducible statistics. It offers the recursive estimation of dynamic mixtures, which is free of iterative processes and as close to analytical solutions as possible. In addition, these methods can be used online and simultaneously perform learning, which improves their efficiency during estimation. The book includes detailed program codes for solving the presented theoretical tasks. Codes are implemented in an open source platform for engineering computations. The program codes given serve to illustrate the theory and demonstrate the work of the included algorithms.

  14. Asset management techniques

    International Nuclear Information System (INIS)

    Schneider, Joachim; Gaul, Armin J.; Neumann, Claus; Hograefer, Juergen; Wellssow, Wolfram; Schwan, Michael; Schnettler, Armin

    2006-01-01

    Deregulation and increasing competition in electricity markets urge energy suppliers to optimize the utilization of their equipment, focusing on technical and cost-effective aspects. As a response to these requirements, utilities introduce methods formerly used by investment managers or insurance companies. The article describes the usage of these methods, particularly with regard to asset management and risk management within electrical grids. The essential information needed to set up an appropriate asset management system and differences between asset management systems in transmission and distribution systems are discussed. The bulk of costs in electrical grids can be found in costs for maintenance and capital depreciation. A comprehensive approach for asset management in transmission systems thus focuses on the 'life-cycle costs' of the individual equipment. The objective of the life management process is the optimal utilisation of the remaining lifetime with regard to a given reliability of service and a constant distribution of costs for reinvestment and maintenance ensuring a suitable return. In distribution systems the high number of components would require an enormous effort for the consideration of single individuals. Therefore statistical approaches have been used successfully in practical applications. The newest insights gained by a German research project on asset management systems in distribution grids give an outlook on future developments. (author)

  15. Asset management techniques

    Energy Technology Data Exchange (ETDEWEB)

    Schneider, Joachim; Gaul, Armin J. [RWE Energy AG, Assetmanagement, Dortmund (Germany); Neumann, Claus [RWE Transportnetz Strom GmbH, Dortmund (Germany); Hograefer, Juergen [SAG Energieversorgungsloesungen GmbH, Langen (Germany); Wellssow, Wolfram; Schwan, Michael [Siemens AG, Power Transmission and Distribution, Erlangen (Germany); Schnettler, Armin [RWTH-Aachen, Institut fuer Hochspannungstechnik, Aachen (Germany)

    2006-11-15

    Deregulation and increasing competition in electricity markets urge energy suppliers to optimize the utilization of their equipment, focusing on technical and cost-effective aspects. As a response to these requirements, utilities introduce methods formerly used by investment managers or insurance companies. The article describes the usage of these methods, particularly with regard to asset management and risk management within electrical grids. The essential information needed to set up an appropriate asset management system and differences between asset management systems in transmission and distribution systems are discussed. The bulk of costs in electrical grids can be found in costs for maintenance and capital depreciation. A comprehensive approach for asset management in transmission systems thus focuses on the 'life-cycle costs' of the individual equipment. The objective of the life management process is the optimal utilisation of the remaining lifetime with regard to a given reliability of service and a constant distribution of costs for reinvestment and maintenance ensuring a suitable return. In distribution systems the high number of components would require an enormous effort for the consideration of single individuals. Therefore statistical approaches have been used successfully in practical applications. The newest insights gained by a German research project on asset management systems in distribution grids give an outlook on future developments. (author)

  16. Valuation of intangible assets

    OpenAIRE

    Karlíková, Jitka

    2010-01-01

    The thesis is focused on the valuation of intangible assets, particularly trademarks and copyrights. In the beginning it deals with the problems of valuation of intangible assets. The main part of the thesis provides an overview of methods for valuation of intangible assets. This part is followed by a practical section that illustrates the procedure of valuation of trademarks and copyrights on a concrete example.

  17. Asset Management as a Precondition for Knowledge Management

    International Nuclear Information System (INIS)

    Bajramovic, E.; Waedt, K.; Gupta, D.; Gao, Y.; Parekh, M.

    2016-01-01

    Full text: Smart sensors and extensively configurable devices are gradually imposed by the automation market. Except for safety systems, they find their way into the next instrumentation and control (I&C) generation. The understanding and handling of these devices require extensive knowledge management (KM). This will be outlined for security, testing and training. For legacy systems, security often relates to vetting and access control. For digital devices, a refined asset management is needed, e.g., down to board-level support chipsets. Firmware and system/application software have their own configurations, versions and patch levels. So, here, as a first step of the KM, a user needs to know the firmware configurability. Then, trainings can address when to apply patches, perform regression tests and what to focus on, based on accumulated experience. While assets are often addressed implicitly, this document justifies an explicit and semiformal representation of primary and supporting assets (the asset portfolio) and the establishment of an asset management system as a basis for robust knowledge management. (author)

  18. An improved molecular dynamics algorithm to study thermodiffusion in binary hydrocarbon mixtures

    Science.gov (United States)

    Antoun, Sylvie; Saghir, M. Ziad; Srinivasan, Seshasai

    2018-03-01

    In multicomponent liquid mixtures, the diffusion flow of chemical species can be induced by temperature gradients, which leads to a separation of the constituent components. This cross effect between temperature and concentration is known as thermodiffusion or the Ludwig-Soret effect. The performance of boundary driven non-equilibrium molecular dynamics along with the enhanced heat exchange (eHEX) algorithm was studied by assessing the thermodiffusion process in n-pentane/n-decane (nC5-nC10) binary mixtures. The eHEX algorithm consists of an extended version of the HEX algorithm with an improved energy conservation property. In addition to this, the transferable potentials for phase equilibria-united atom force field was employed in all molecular dynamics (MD) simulations to precisely model the molecular interactions in the fluid. The Soret coefficients of the n-pentane/n-decane (nC5-nC10) mixture for three different compositions (at 300.15 K and 0.1 MPa) were calculated and compared with the experimental data and other MD results available in the literature. Results of the newly employed MD algorithm showed good agreement with experimental data and better accuracy compared to other MD procedures.
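
    The quantity ultimately reported, the Soret coefficient, can be estimated from the steady-state profiles produced by such a boundary-driven run; the sketch below assumes synthetic binned temperature and mole-fraction profiles and applies the standard steady-state relation.

```python
import numpy as np

# Estimating the Soret coefficient S_T of a binary mixture from steady-state
# temperature and mole-fraction profiles (binned profiles below are synthetic).
z = np.linspace(0.0, 1.0, 20)                    # reduced box coordinate
T = 300.15 + 20.0 * (z - 0.5)                    # imposed temperature profile (K)
x1 = 0.5 - 0.004 * (z - 0.5)                     # resulting n-pentane mole fraction

dT_dz = np.polyfit(z, T, 1)[0]                   # linear-fit gradients in the bulk
dx_dz = np.polyfit(z, x1, 1)[0]
x_mean = x1.mean()

# Steady state: grad(x1) = -S_T * x1 * (1 - x1) * grad(T)
S_T = -dx_dz / (x_mean * (1.0 - x_mean) * dT_dz)
print(f"S_T ~ {S_T:.2e} 1/K")
```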

  19. Problems of intangible assets commercialization accounting

    Directory of Open Access Journals (Sweden)

    S.F. Legenchyk

    2016-03-01

    Full Text Available The growing role of intangible assets in the postindustrialization of the global economy is substantiated. The problems of intangible assets accounting are singled out. The basic tasks of accounting for the intangible assets commercialization process are determined. The difference between the commercialization of intellectual property and of intangible assets is considered. The basic approaches to understanding the essence of intangible assets commercialization are singled out and substantiated. The basic forms and methods of intangible assets commercialization researched by the author are analyzed. The procedure for the accounting reflection of licensee royalties is considered. The factors influencing the accounting process of intangible assets commercialization are determined. The necessity of solving the problem of accounting for lease payments for a computer program provided as access to a SaaS environment is substantiated. The prospects for further studies of intangible assets commercialization accounting are determined.

  20. Risk-based asset management methodology for highway infrastructure systems.

    Science.gov (United States)

    2004-01-01

    Maintaining the infrastructure of roads, highways, and bridges is paramount to ensuring that these assets will remain safe and reliable in the future. If maintenance costs remain the same or continue to escalate, and additional funding is not made av...

  1. Modeling and Sensitivity Study of Consensus Algorithm-Based Distributed Hierarchical Control for DC Microgrids

    DEFF Research Database (Denmark)

    Meng, Lexuan; Dragicevic, Tomislav; Roldan Perez, Javier

    2016-01-01

    Distributed control methods based on consensus algorithms have become popular in recent years for microgrid (MG) systems. These kinds of algorithms can be applied to share information in order to coordinate multiple distributed generators within a MG. However, stability analysis becomes a challenging ...... in the communication network, continuous-time methods can be inaccurate for this kind of dynamic study. Therefore, this paper aims at modeling a complete DC MG using a discrete-time approach in order to perform a sensitivity analysis taking into account the effects of the consensus algorithm. To this end, a generalized modeling method is proposed and the influence of key control parameters, the communication topology and the communication speed are studied in detail. The theoretical results obtained with the proposed model are verified by comparing them with the results obtained with a detailed switching...
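
    A minimal sketch of the discrete-time consensus iteration such a study builds on is shown below; the communication topology, gain, and initial bus values are illustrative assumptions.

```python
import numpy as np

# Each distributed generator updates its local estimate from its neighbours.
A = np.array([[0, 1, 0, 1],            # adjacency matrix of the communication graph
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A          # graph Laplacian
epsilon = 0.2                           # consensus gain (< 1/max degree keeps it stable)

x = np.array([48.0, 50.5, 49.0, 51.5])  # e.g. local DC-bus voltage measurements
for _ in range(50):
    x = x - epsilon * (L @ x)           # x[k+1] = (I - eps*L) x[k]
print(np.round(x, 3))                   # converges to the average of the initial values
```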

  2. A Dynamic Fuzzy Cluster Algorithm for Time Series

    Directory of Open Access Journals (Sweden)

    Min Ji

    2013-01-01

    This paper proposes a dynamic fuzzy clustering algorithm for time series by introducing the definition of key point and improving the FCM algorithm. The proposed algorithm works by determining those time series whose class labels are vague and further partitioning them into different clusters over time. The main advantage of this approach compared with other existing algorithms is that the property of some time series belonging to different clusters over time can be partially revealed. Results from simulation-based experiments on geographical data demonstrate the excellent performance of the method, and the desired results have been obtained. The proposed algorithm can be applied to solve other clustering problems in data mining.
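
    A minimal sketch of the standard fuzzy c-means updates that such an algorithm builds on is given below; the key-point extraction and dynamic re-partitioning of the paper are not reproduced, and the data is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (30, 2)), rng.normal(3, 0.5, (30, 2))])
c, m = 2, 2.0                                   # number of clusters, fuzzifier

U = rng.random((len(X), c))
U /= U.sum(axis=1, keepdims=True)               # fuzzy memberships sum to 1 per sample

for _ in range(50):
    # Cluster centers weighted by membership^m, then membership update from distances.
    centers = (U**m).T @ X / (U**m).sum(axis=0)[:, None]
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    U = 1.0 / (d ** (2 / (m - 1)))
    U /= U.sum(axis=1, keepdims=True)

print(np.round(centers, 2))
```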

  3. Financial Integration and Asset Returns

    OpenAIRE

    P Martin; H Rey

    2000-01-01

    The paper investigates the impact of financial integration on asset return, risk diversification and breadth of financial markets. We analyse a three-country macroeconomic model in which (i) the number of financial assets is endogenous; (ii) assets are imperfect substitutes; (iii) cross-border asset trade entails some transaction costs; (iv) the investment technology is indivisible. In such an environment, lower transaction costs between two financial markets translate to higher demand for as...

  4. A New Augmentation Based Algorithm for Extracting Maximal Chordal Subgraphs.

    Science.gov (United States)

    Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh

    2015-02-01

    A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms' parallelizability. In this paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. We experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.
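
    A much-simplified serial sketch of the augmentation idea, assuming networkx for the chordality test: start from a spanning forest (which is chordal) and keep each additional edge only if the subgraph remains chordal. This is not the paper's parallel algorithm and is far less efficient, but it illustrates the construction.

```python
import networkx as nx

def maximal_chordal_subgraph(G):
    """Greedy augmentation of a spanning chordal subgraph (illustrative sketch)."""
    H = nx.Graph()
    H.add_nodes_from(G.nodes())
    H.add_edges_from(nx.minimum_spanning_edges(G, data=False))  # forests are chordal
    for u, v in G.edges():
        if H.has_edge(u, v):
            continue
        H.add_edge(u, v)
        if not nx.is_chordal(H):      # the edge closed a chordless cycle; undo it
            H.remove_edge(u, v)
    return H

G = nx.cycle_graph(6)                  # a 6-cycle is not chordal
H = maximal_chordal_subgraph(G)
print(H.number_of_edges(), nx.is_chordal(H))
```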

  5. Pension plan asset valuation

    OpenAIRE

    Owadally, M. I; Haberman, S.

    2001-01-01

    Various asset valuation methods are used in the context of funding valuations. The motivation for such methods and their properties are briefly described. Some smoothed value or market-related methods based on arithmetic averaging and exponential smoothing are considered and their effect on funding is discussed. Suggestions for further research are also made.

  6. Inflation, Index-Linked Bonds, and Asset Allocation

    OpenAIRE

    Zvi Bodie

    1988-01-01

    The recent introduction of CPI-linked bonds by several financial institutions is a milestone in the history of the U.S. financial system. It has potentially far-reaching effects on individual and institutional asset allocation decisions because these securities represent the only true long-run hedge against inflation risk. CPI-linked bonds make possible the creation of additional financial innovations that would use them as the asset base. One such innovation that seems likely is inflation-pr...

  7. Hybrid employment recommendation algorithm based on Spark

    Science.gov (United States)

    Li, Zuoquan; Lin, Yubei; Zhang, Xingming

    2017-08-01

    Aiming at the real-time application of collaborative filtering employment recommendation algorithm (CF), a clustering collaborative filtering recommendation algorithm (CCF) is developed, which applies hierarchical clustering to CF and narrows the query range of neighbour items. In addition, to solve the cold-start problem of content-based recommendation algorithm (CB), a content-based algorithm with users’ information (CBUI) is introduced for job recommendation. Furthermore, a hybrid recommendation algorithm (HRA) which combines CCF and CBUI algorithms is proposed, and implemented on Spark platform. The experimental results show that HRA can overcome the problems of cold start and data sparsity, and achieve good recommendation accuracy and scalability for employment recommendation.
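
    A single-machine sketch of the hybrid scoring idea (the paper implements it on Spark) is shown below: a collaborative-filtering score is blended with a content-based score, and cold-start users fall back to the content-based score alone; all names and profiles are synthetic assumptions.

```python
import numpy as np

cf_scores = {("user1", "jobA"): 0.82, ("user1", "jobB"): 0.40}   # from a CF model

def content_score(user_profile, job_profile):
    u, j = np.asarray(user_profile, float), np.asarray(job_profile, float)
    return float(u @ j / (np.linalg.norm(u) * np.linalg.norm(j)))  # cosine similarity

users = {"user1": [1, 0, 1, 0], "new_user": [0, 1, 1, 0]}          # skill/interest vectors
jobs = {"jobA": [1, 0, 1, 1], "jobB": [0, 1, 0, 1]}

def hybrid_score(user, job, alpha=0.6):
    cb = content_score(users[user], jobs[job])
    cf = cf_scores.get((user, job))
    return cb if cf is None else alpha * cf + (1 - alpha) * cb      # cold start -> CB only

for u in users:
    ranked = sorted(jobs, key=lambda j: hybrid_score(u, j), reverse=True)
    print(u, ranked)
```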

  8. The feasibility of magnetic resonance imaging of the dynamic swallowing

    International Nuclear Information System (INIS)

    Yang Jingquan; Gao Mingyong; Luo Suling; Lu Ruiliang; He Xiaohong

    2012-01-01

    Objective: To offer visual and valuable clinical bases for the diagnosis and treatment of pharynx disease by comparing the influence of different scanning sequences on image quality and scanning time, and by studying their application to dynamic swallowing MRI scanning. Methods: Dynamic swallowing scanning of the pharynx was performed on 20 nasopharyngeal carcinoma patients without deglutition disorders on a GE 3.0 T MRI system with fast imaging employing steady state acquisition (FIESTA) and fast gradient recalled echo (Fast GRE) sequences, combined with the array spatial sensitivity encoding technique (ASSET) with an acceleration factor of 2.0 (phase), and sixty dynamic images were acquired sequentially. The image quality was graded into three classes (excellent, favorable and poor), visually assessed by three senior MRI physicians using a double-blinded method. The quantitative data were analyzed statistically with the SPSS 13.0 software. Results: Under the same parameters, the scanning times with the FIESTA, FIESTA+ASSET, Fast GRE and Fast GRE+ASSET sequences were 54 s, 28 s, 49 s and 25 s respectively. The numbers of excellent images with the four sequences were 44, 52, 52 and 56 respectively. The scanning time was the shortest and the image quality was the best with the Fast GRE+ASSET sequence. Conclusions: Dynamic imaging of swallowing in the sagittal view was achieved with the Fast GRE+ASSET sequence on the GE 3.0 T MRI system. It presented the status of the pharynx well, and the soft tissue involved in swallowing was shown clearly in the dynamic images. These findings provide visual and effective evidence for clinical diagnosis and treatment. (authors)

  9. Autodriver algorithm

    Directory of Open Access Journals (Sweden)

    Anna Bourmistrova

    2011-02-01

    Full Text Available The autodriver algorithm is an intelligent method to eliminate the need for steering by a driver on a well-defined road. The proposed method performs best on a four-wheel steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on making the actual vehicle center of rotation coincide with the road center of curvature by adjusting the kinematic center of rotation. The road center of curvature is assumed to be prior information for a given road, while the dynamic center of rotation is the output of the dynamic equations of motion of the vehicle using steering angle and velocity measurements as inputs. We use the kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. With an increase of forward speed the road and tire characteristics, along with the motion dynamics of the vehicle, cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed-loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.
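
    A minimal kinematic sketch of the underlying idea, for a 4WS bicycle model: choose the front and rear steering angles so that the kinematic centre of rotation sits at a desired point (here standing in for the road centre of curvature). The geometry values and the placement of the centre are assumptions for illustration.

```python
import math

a, b = 1.2, 1.6            # distances from CG to front and rear axles (m), illustrative

def fourws_steering(R_lat, y_c=0.0):
    """Steering angles (rad) that place the kinematic rotation centre at lateral
    distance R_lat from the vehicle axis, at longitudinal offset y_c from the CG."""
    delta_f = math.atan2(a - y_c, R_lat)
    delta_r = math.atan2(-b - y_c, R_lat)
    return delta_f, delta_r

# Road centre of curvature 40 m to the left of the vehicle, level with the CG:
df, dr = fourws_steering(40.0)
print(f"front {math.degrees(df):.2f} deg, rear {math.degrees(dr):.2f} deg")
```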

  10. Locality constrained joint dynamic sparse representation for local matching based face recognition.

    Science.gov (United States)

    Wang, Jianzhong; Yi, Yugen; Zhou, Wei; Shi, Yanjiao; Qi, Miao; Zhang, Ming; Zhang, Baoxue; Kong, Jun

    2014-01-01

    Recently, Sparse Representation-based Classification (SRC) has attracted a lot of attention for its applications to various tasks, especially in biometric techniques such as face recognition. However, factors such as lighting, expression, pose and disguise variations in face images will decrease the performances of SRC and most other face recognition techniques. In order to overcome these limitations, we propose a robust face recognition method named Locality Constrained Joint Dynamic Sparse Representation-based Classification (LCJDSRC) in this paper. In our method, a face image is first partitioned into several smaller sub-images. Then, these sub-images are sparsely represented using the proposed locality constrained joint dynamic sparse representation algorithm. Finally, the representation results for all sub-images are aggregated to obtain the final recognition result. Compared with other algorithms which process each sub-image of a face image independently, the proposed algorithm regards the local matching-based face recognition as a multi-task learning problem. Thus, the latent relationships among the sub-images from the same face image are taken into account. Meanwhile, the locality information of the data is also considered in our algorithm. We evaluate our algorithm by comparing it with other state-of-the-art approaches. Extensive experiments on four benchmark face databases (ORL, Extended YaleB, AR and LFW) demonstrate the effectiveness of LCJDSRC.
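
    A heavily simplified stand-in for the local-matching pipeline is sketched below: the image is split into blocks, each block is coded against the corresponding training blocks of each class, and per-block residuals are aggregated. A per-class least-squares fit replaces the joint dynamic sparse solver of the paper, and the data is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def blocks(img, size=8):
    h, w = img.shape
    return [img[i:i+size, j:j+size].ravel()
            for i in range(0, h, size) for j in range(0, w, size)]

def classify(test_img, train_imgs, train_labels):
    test_blocks = blocks(test_img)
    scores = {}
    for label in set(train_labels):
        class_blocks = [blocks(t) for t, l in zip(train_imgs, train_labels) if l == label]
        total = 0.0
        for k, y in enumerate(test_blocks):
            D = np.column_stack([cb[k] for cb in class_blocks])   # dictionary for block k
            coef, *_ = np.linalg.lstsq(D, y, rcond=None)
            total += np.linalg.norm(y - D @ coef)                 # block residual
        scores[label] = total
    return min(scores, key=scores.get)                            # smallest aggregate residual

train = [rng.random((16, 16)) for _ in range(6)]
labels = [0, 0, 0, 1, 1, 1]
test = train[4] + 0.05 * rng.random((16, 16))
print(classify(test, train, labels))   # expected: 1
```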

  11. Locality constrained joint dynamic sparse representation for local matching based face recognition.

    Directory of Open Access Journals (Sweden)

    Jianzhong Wang

    Full Text Available Recently, Sparse Representation-based Classification (SRC) has attracted a lot of attention for its applications to various tasks, especially in biometric techniques such as face recognition. However, factors such as lighting, expression, pose and disguise variations in face images will decrease the performances of SRC and most other face recognition techniques. In order to overcome these limitations, we propose a robust face recognition method named Locality Constrained Joint Dynamic Sparse Representation-based Classification (LCJDSRC) in this paper. In our method, a face image is first partitioned into several smaller sub-images. Then, these sub-images are sparsely represented using the proposed locality constrained joint dynamic sparse representation algorithm. Finally, the representation results for all sub-images are aggregated to obtain the final recognition result. Compared with other algorithms which process each sub-image of a face image independently, the proposed algorithm regards the local matching-based face recognition as a multi-task learning problem. Thus, the latent relationships among the sub-images from the same face image are taken into account. Meanwhile, the locality information of the data is also considered in our algorithm. We evaluate our algorithm by comparing it with other state-of-the-art approaches. Extensive experiments on four benchmark face databases (ORL, Extended YaleB, AR and LFW) demonstrate the effectiveness of LCJDSRC.

  12. An Ising spin state explanation for financial asset allocation

    Science.gov (United States)

    Horvath, Philip A.; Roos, Kelly R.; Sinha, Amit

    2016-03-01

    We build on the developments in the application of statistical mechanics, notably the identity of the spin degree of freedom in the Ising model, to explain asset price dynamics in financial markets with a representative agent. Specifically, we consider the value of an individual spin to represent the proportional holdings in various assets. We use partial moment arguments to identify asymmetric reactions to information and develop an extension of a plunging and dumping model. This unique identification of the spin is a relaxation of the conventional discrete state limitation on an Ising spin to accommodate a new archetype in Ising model-finance applications wherein spin states may take on continuous values, and may evolve in time continuously, or discretely, depending on the values of the partial moments.

  13. Analysis of Population Diversity of Dynamic Probabilistic Particle Swarm Optimization Algorithms

    Directory of Open Access Journals (Sweden)

    Qingjian Ni

    2014-01-01

    Full Text Available In evolutionary algorithms, population diversity is an important factor for solving performance. In this paper, combined with population diversity analysis methods from other evolutionary algorithms, three indicators are introduced as measures of population diversity in PSO algorithms: the standard deviation of population fitness values, the population entropy, and the Manhattan norm of the standard deviation of population positions. The three measures are used to analyze the population diversity in a relatively new PSO variant—Dynamic Probabilistic Particle Swarm Optimization (DPPSO). The results show that the three measures can fully reflect the evolution of population diversity in DPPSO algorithms from different angles, and the impact of population diversity on the DPPSO variants is also discussed. The conclusions on population diversity in DPPSO can be used to analyze, design, and improve DPPSO algorithms, thus improving optimization performance, and can also be beneficial for understanding the working mechanism of DPPSO theoretically.
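
    The three indicators can be computed directly from a swarm snapshot, as in the sketch below; computing the population entropy from a histogram of fitness values is an assumption about how that measure is realised.

```python
import numpy as np

rng = np.random.default_rng(0)
positions = rng.normal(0.0, 2.0, size=(30, 5))        # 30 particles, 5 dimensions
fitness = np.sum(positions**2, axis=1)                # e.g. sphere function

fitness_std = float(np.std(fitness))                  # 1) std of population fitness values

hist, _ = np.histogram(fitness, bins=10)              # 2) population entropy (histogram-based)
p = hist[hist > 0] / hist.sum()
entropy = float(-np.sum(p * np.log(p)))

position_diversity = float(np.sum(np.std(positions, axis=0)))  # 3) Manhattan norm of per-dimension std

print(fitness_std, entropy, position_diversity)
```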

  14. Asset planning performance measurement framework

    NARCIS (Netherlands)

    Arthur, D.; Hodkiewicz, M.; Schoenmaker, R.; Muruvan, S.

    2014-01-01

    The international asset management standard ISO 55001, introduced in early 2014, outlines the requirement for an effective Asset Management System. Asset Management practitioners are seeking guidance on implementing one of the key requirements of the standard: the “line of sight” between the

  15. Optimal design of planar slider-crank mechanism using teaching-learning-based optimization algorithm

    International Nuclear Information System (INIS)

    Chaudhary, Kailash; Chaudhary, Himanshu

    2015-01-01

    In this paper, a two-stage optimization technique is presented for the optimum design of a planar slider-crank mechanism. The slider-crank mechanism needs to be dynamically balanced to reduce vibrations and noise in the engine and to improve vehicle performance. For dynamic balancing, minimization of the shaking force and the shaking moment is achieved by finding the optimum mass distribution of the crank and connecting rod using the equimomental system of point-masses in the first stage of the optimization. In the second stage, their shapes are synthesized systematically by a closed parametric curve, i.e., a cubic B-spline curve, corresponding to the optimum inertial parameters found in the first stage. The multi-objective optimization problem to minimize both the shaking force and the shaking moment is solved using the Teaching-learning-based optimization algorithm (TLBO) and its computational performance is compared with the Genetic algorithm (GA).

  16. Optimal design of planar slider-crank mechanism using teaching-learning-based optimization algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Chaudhary, Kailash; Chaudhary, Himanshu [Malaviya National Institute of Technology, Jaipur (India)

    2015-11-15

    In this paper, a two-stage optimization technique is presented for the optimum design of a planar slider-crank mechanism. The slider-crank mechanism needs to be dynamically balanced to reduce vibrations and noise in the engine and to improve vehicle performance. For dynamic balancing, minimization of the shaking force and the shaking moment is achieved by finding the optimum mass distribution of the crank and connecting rod using the equimomental system of point-masses in the first stage of the optimization. In the second stage, their shapes are synthesized systematically by a closed parametric curve, i.e., a cubic B-spline curve, corresponding to the optimum inertial parameters found in the first stage. The multi-objective optimization problem to minimize both the shaking force and the shaking moment is solved using the Teaching-learning-based optimization algorithm (TLBO) and its computational performance is compared with the Genetic algorithm (GA).
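
    A minimal TLBO sketch is given below on a stand-in sphere objective (the actual problem would minimise the combined shaking force and moment of the slider-crank); the population size, bounds, and iteration count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(x):               # placeholder for the combined shaking force/moment
    return np.sum(x**2)

pop_size, dim, iters = 20, 4, 100
lb, ub = -5.0, 5.0
X = rng.uniform(lb, ub, (pop_size, dim))

for _ in range(iters):
    f = np.apply_along_axis(objective, 1, X)
    teacher = X[np.argmin(f)]
    TF = rng.integers(1, 3)                              # teaching factor: 1 or 2
    # Teacher phase: move the class toward the teacher, away from the mean.
    X_new = np.clip(X + rng.random((pop_size, dim)) * (teacher - TF * X.mean(axis=0)), lb, ub)
    improve = np.apply_along_axis(objective, 1, X_new) < f
    X[improve] = X_new[improve]
    # Learner phase: learn from a randomly chosen partner.
    f = np.apply_along_axis(objective, 1, X)
    partners = rng.permutation(pop_size)
    step = np.where((f < f[partners])[:, None], X - X[partners], X[partners] - X)
    X_new = np.clip(X + rng.random((pop_size, dim)) * step, lb, ub)
    improve = np.apply_along_axis(objective, 1, X_new) < f
    X[improve] = X_new[improve]

print("best objective:", objective(X[np.argmin(np.apply_along_axis(objective, 1, X))]))
```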

  17. Keystroke Dynamics-Based Credential Hardening Systems

    Science.gov (United States)

    Bartlow, Nick; Cukic, Bojan

    Keystroke dynamics are becoming a well-known method for strengthening username- and password-based credential sets. The familiarity and ease of use of these traditional authentication schemes, combined with the increased trustworthiness associated with biometrics, make them prime candidates for application in many web-based scenarios. Our keystroke dynamics system uses Breiman’s random forests algorithm to classify keystroke input sequences as genuine or imposter. The system is capable of operating at various points on a traditional ROC curve depending on application-specific security needs. As a username/password authentication scheme, our approach decreases the system penetration rate associated with compromised passwords by up to 99.15%. Beyond presenting results demonstrating the credential hardening effect of our scheme, we look into the notion that a user’s familiarity with components of a credential set can non-trivially impact error rates.
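
    A minimal sketch of the classification step, assuming dwell and flight times as the feature vector and scikit-learn's random forest as the classifier; the timing distributions and sample sizes are synthetic assumptions, not data from the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def sample(mean_dwell, mean_flight, n_keys=8):
    dwell = rng.normal(mean_dwell, 10, n_keys)        # key-down to key-up (ms)
    flight = rng.normal(mean_flight, 15, n_keys - 1)  # key-up to next key-down (ms)
    return np.concatenate([dwell, flight])

X = np.array([sample(95, 120) for _ in range(50)] +   # genuine user
             [sample(140, 200) for _ in range(50)])   # imposters
y = np.array([1] * 50 + [0] * 50)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
attempt = sample(97, 125).reshape(1, -1)
print("genuine probability:", clf.predict_proba(attempt)[0][1])
```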

  18. Toronto Hydro-Electric System Limited, 2010 asset condition assessment audit

    Energy Technology Data Exchange (ETDEWEB)

    Lotho, K.; Wang, F. [Kinectrics Inc., Toronto, ON (Canada)

    2010-07-15

    Toronto Hydro-Electric System Limited (THESL) has long been devoted to the enhancement of its asset management program. In 2006, Kinectrics Incorporated (Kinectrics) performed a full asset condition assessment (ACA) for important distribution assets. Subsequently, THESL made efforts to follow the recommendations given in the 2006 ACA and to enhance the quality of its asset condition data. THESL also created an application that measures the health indices of assets based on current and best available inspection data. In 2009, THESL performed a new ACA with this health index calculator. Kinectrics was requested to evaluate the improvement achieved by THESL between 2006 and 2009, and to compare the results obtained from the two ACAs performed. An examination of the changes and ACA results between 2009 and 2010 was conducted by Kinectrics, and the findings were reported in the 2010 asset condition assessment audit report. The Health Index (HI) formulation and the results obtained between 2009 and 2010 were examined for twenty-one asset categories. The health index formulation, including condition parameters, condition parameter weights and condition criteria, the granularity within the asset category, the percentage of the population presenting sufficient condition data, and the health index classification distribution were compared for each of the asset categories between 2009 and 2010. This report provides recommendations to facilitate future improvements.
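
    A weighted health-index calculation of the kind described can be sketched as follows; the condition parameters, scores, and weights are illustrative assumptions, not THESL's actual formulation.

```python
# Weighted health index: scored condition parameters normalised by the maximum
# achievable weighted score (all values below are made up for illustration).
condition = {                      # parameter: (score 0..4, weight)
    "oil_quality":        (3, 4),
    "winding_insulation": (2, 5),
    "bushing_condition":  (4, 2),
    "tap_changer":        (1, 3),
}
max_score = 4

hi = sum(s * w for s, w in condition.values()) / (max_score * sum(w for _, w in condition.values()))
print(f"Health Index: {100 * hi:.1f}%")   # 100% corresponds to as-new condition
```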

  19. Offshore Wind Farm Cable Connection Configuration Optimization using Dynamic Minimum Spanning Tree Algorithm

    DEFF Research Database (Denmark)

    Hou, Peng; Hu, Weihao; Chen, Zhe

    2015-01-01

    A new approach, the Dynamic Minimal Spanning Tree (DMST) algorithm, which is based on the MST algorithm, is proposed in this paper to optimize the cable connection layout for a large-scale offshore wind farm collection system. The current-carrying capacity of the cable is considered as the main constraint.... It is a more economical way for cable connection configuration design of an offshore wind farm collection system....
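
    The starting point of such an approach, a plain minimum spanning tree over turbine and substation positions with cable length as the edge weight, can be sketched as below; the DMST extension that enforces the cable current-carrying capacity is not shown, and the coordinates are made up.

```python
import itertools
import math
import networkx as nx

positions = {"sub": (0, 0), "wt1": (1, 2), "wt2": (2, 1), "wt3": (3, 3), "wt4": (4, 1)}

G = nx.Graph()
for a, b in itertools.combinations(positions, 2):
    (x1, y1), (x2, y2) = positions[a], positions[b]
    G.add_edge(a, b, weight=math.hypot(x2 - x1, y2 - y1))   # cable length as weight

layout = nx.minimum_spanning_tree(G)                          # baseline MST cable layout
print(sorted(layout.edges(data="weight")))
```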

  20. 24 CFR 990.270 - Asset management.

    Science.gov (United States)

    2010-04-01

    24 CFR Part 990, The Public Housing Operating Fund Program, Asset Management; § 990.270 Asset management (2010-04-01 edition). As owners, PHAs have asset management responsibilities that are above and beyond property management activities. These...