On Probabilistic Automata in Continuous Time
DEFF Research Database (Denmark)
Eisentraut, Christian; Hermanns, Holger; Zhang, Lijun
2010-01-01
We develop a compositional behavioural model that integrates a variation of probabilistic automata into a conservative extension of interactive Markov chains. The model is rich enough to embody the semantics of generalised stochastic Petri nets. We define strong and weak bisimulations and discuss...
Probabilistic Survivability Versus Time Modeling
Joyner, James J., Sr.
2016-01-01
This presentation documents Kennedy Space Center's Independent Assessment work completed on three assessments for the Ground Systems Development and Operations (GSDO) Program, which assisted the Chief Safety and Mission Assurance Officer during key programmatic reviews and provided the GSDO Program with analyses of how egress time affects the likelihood of astronaut and ground worker survival during an emergency. For each assessment, a team developed probability distributions for hazard scenarios to address statistical uncertainty, resulting in survivability plots over time. The first assessment developed a mathematical model of probabilistic survivability versus time to reach a safe location using an ideal Emergency Egress System at Launch Complex 39B (LC-39B); the second used the first model to evaluate and compare various egress systems under consideration at LC-39B. The third used a modified LC-39B model to determine whether a specific hazard decreased survivability more rapidly than other events during flight hardware processing in Kennedy's Vehicle Assembly Building.
Probabilistic properties of the continuous double auction
Czech Academy of Sciences Publication Activity Database
Šmíd, Martin
2012-01-01
Vol. 48, No. 1 (2012), pp. 50-82. ISSN 0023-5954. R&D Projects: GA ČR GAP402/10/1610; GA ČR GAP402/10/0956; GA ČR (CZ) GA402/06/1417. Institutional research plan: CEZ:AV0Z10750506. Keywords: continuous double auction; limit order market; distribution. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 0.619, year: 2012. http://library.utia.cas.cz/separaty/2012/E/smid-0372702.pdf
Probabilistic real-time contingency ranking method
International Nuclear Information System (INIS)
Mijuskovic, N.A.; Stojnic, D.
2000-01-01
This paper describes a real-time contingency ranking method based on a probabilistic index, the expected energy not supplied. In this way it is possible to take into account the stochastic nature of electric power system equipment outages. This approach enables more comprehensive ranking of contingencies, and it is possible to form reliability cost values that can serve as the basis for hourly spot price calculations. The electric power system of Serbia is used as an example for the proposed method. (author)
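The abstract does not give the index's formula, but the standard expected-energy-not-supplied (EENS) ranking can be sketched as follows. All contingency names, probabilities, and curtailment figures below are invented for illustration; only the shape of the computation (probability times curtailed energy, sorted descending) follows the usual definition of the index.

```python
# Illustrative EENS-based contingency ranking. The data are invented;
# a real study would derive probabilities from outage statistics and
# curtailed energy from power flow analysis of each contingency.

def eens(outage_probability, energy_not_supplied_mwh):
    """Expected energy not supplied for one contingency."""
    return outage_probability * energy_not_supplied_mwh

# (name, annual outage probability, energy curtailed if it occurs [MWh])
contingencies = [
    ("line A-B outage",   0.02,  1200.0),
    ("transformer T1",    0.005, 9000.0),
    ("generator G3 trip", 0.01,   300.0),
]

# Rank contingencies by decreasing expected energy not supplied.
ranking = sorted(contingencies, key=lambda c: eens(c[1], c[2]), reverse=True)
for name, p, e in ranking:
    print(f"{name}: EENS = {eens(p, e):.1f} MWh/yr")
```

Note how the low-probability transformer outage outranks the more frequent line outage because of its much larger consequence, which is the point of a probabilistic index over a deterministic severity list.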
Probabilistic Power Flow Method Considering Continuous and Discrete Variables
Directory of Open Access Journals (Sweden)
Xuexia Zhang
2017-04-01
This paper proposes a probabilistic power flow (PPF) method considering continuous and discrete variables (continuous and discrete power flow, CDPF) for power systems. The proposed method—based on the cumulant method (CM) and multiple deterministic power flow (MDPF) calculations—can deal with continuous variables such as wind power generation (WPG) and loads, and discrete variables such as fuel cell generation (FCG). In this paper, continuous variables follow a normal distribution (loads) or a non-normal distribution (WPG), and discrete variables follow a binomial distribution (FCG). Through testing on IEEE 14-bus and IEEE 118-bus power systems, the proposed method (CDPF) has better accuracy compared with the CM, and higher efficiency compared with the Monte Carlo simulation method (MCSM).
On probabilistic forecasting of wind power time-series
DEFF Research Database (Denmark)
Pinson, Pierre
… power dynamics. In both cases, the model parameters are adaptively and recursively estimated, time-adaptivity being the result of exponential forgetting of past observations. The probabilistic forecasting methodology is applied at the Horns Rev wind farm in Denmark, for 10-minute-ahead probabilistic forecasting of wind power generation. Probabilistic forecasts generated from the proposed methodology clearly have higher skill than those obtained from a classical Gaussian assumption about wind power predictive densities. Corresponding point forecasts also exhibit significantly lower error criteria.
Inference for Continuous-Time Probabilistic Programming
2017-12-01
network of Ising model dynamics. The Ising model is a well-known interaction model with applications in many fields, including statistical mechanics, genetics, and neuroscience. This is a Markovian model …
A probabilistic model for US nuclear power construction times
International Nuclear Information System (INIS)
Shash, A.A.H.
1988-01-01
Construction time for nuclear power plants is an important element in planning for resources to meet future load demands. Analysis of actual versus estimated construction times for past US nuclear power plants indicates that utilities have consistently underestimated their plants' construction durations. The analysis also indicates that the average actual construction time has been increasing, and that the actual durations of power plants permitted for construction in the same year varied substantially. This study presents two probabilistic models of nuclear power plant construction time for use by the nuclear industry as estimating tools. The study also presents a detailed explanation of the factors responsible for the increase and variation in nuclear power construction times. Observations on 91 completed nuclear units were used in three interdependent analyses in the process of explaining and deriving the probabilistic models. The historical data were first used in data envelopment analysis (DEA) to obtain frontier index measures of project-management achievement in building nuclear power plants
Directory of Open Access Journals (Sweden)
Yuri B. Tebekin
2011-11-01
The article is devoted to the problem of quality management for multiphase processes on the basis of a probabilistic approach. A method with continuous response functions, derived via the method of Lagrange multipliers, is proposed.
Time Alignment as a Necessary Step in the Analysis of Sleep Probabilistic Curves
Rošt'áková, Zuzana; Rosipal, Roman
2018-02-01
Sleep can be characterised as a dynamic process that moves through a finite set of sleep stages during the night. The standard Rechtschaffen and Kales sleep model produces a discrete representation of sleep and does not take its dynamic structure into account. In contrast, the continuous sleep representation provided by the probabilistic sleep model accounts for the dynamics of the sleep process. However, analysis of the sleep probabilistic curves is problematic when time misalignment is present. In this study, we highlight the necessity of curve synchronisation before further analysis. Original and time-aligned sleep probabilistic curves were transformed into a finite-dimensional vector space, and their ability to predict subjects' age or daily measures was evaluated. We conclude that curve alignment significantly improves the prediction of the daily measures, especially in the case of the S2-related sleep states or slow-wave sleep.
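The abstract does not specify the alignment method used, so as one illustration of why synchronisation matters, here is a minimal dynamic-time-warping (DTW) sketch: two curves with the same shape but a time shift look very different under pointwise comparison, yet nearly identical once warping is allowed. The Gaussian-bump "curves" are invented stand-ins, not actual sleep probability curves.

```python
# Minimal DTW sketch: warping absorbs a pure time shift, so the aligned
# mismatch between the two curves is far smaller than the pointwise one.
import math

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW with absolute-difference cost."""
    n, m = len(a), len(b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# Two curves of identical shape, shifted in time.
t = [k / 50.0 for k in range(100)]
curve1 = [math.exp(-(x - 0.8) ** 2) for x in t]
curve2 = [math.exp(-(x - 1.1) ** 2) for x in t]

euclidean = sum(abs(u - v) for u, v in zip(curve1, curve2))
warped = dtw_distance(curve1, curve2)
```

Any downstream vector-space analysis applied to the unaligned curves would treat the time shift as a genuine shape difference, which is the failure mode the study warns against.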
Learning Probabilistic Inference through Spike-Timing-Dependent Plasticity.
Pecevski, Dejan; Maass, Wolfgang
2016-01-01
Numerous experimental data show that the brain is able to extract information from complex, uncertain, and often ambiguous experiences. Furthermore, it can use such learnt information for decision making through probabilistic inference. Several models have been proposed that aim at explaining how probabilistic inference could be performed by networks of neurons in the brain. We propose here a model that can also explain how such a neural network could acquire the necessary information from examples. We show that spike-timing-dependent plasticity, in combination with intrinsic plasticity, generates in ensembles of pyramidal cells with lateral inhibition a fundamental building block for this: probabilistic associations between neurons that represent, through their firing, current values of random variables. Furthermore, by combining such adaptive network motifs in a recursive manner, the resulting network is enabled to extract statistical information from complex input streams and to build an internal model of the distribution p* that generates the examples it receives. This holds even if p* contains higher-order moments. The analysis of this learning process is supported by a rigorous theoretical foundation. Furthermore, we show that the network can use the learnt internal model immediately for prediction, decision making, and other types of probabilistic inference.
Probabilistic assessment methodology for continuous-type petroleum accumulations
Crovelli, R.A.
2003-01-01
The analytic resource assessment method, called ACCESS (Analytic Cell-based Continuous Energy Spreadsheet System), was developed to calculate estimates of petroleum resources for the geologic assessment model, called FORSPAN, in continuous-type petroleum accumulations. The ACCESS method is based upon mathematical equations derived from probability theory in the form of a computer spreadsheet system. © 2003 Elsevier B.V. All rights reserved.
Probabilistic eruption forecasting at short and long time scales
Marzocchi, Warner; Bebbington, Mark S.
2012-10-01
Any effective volcanic risk mitigation strategy requires a scientific assessment of the future evolution of a volcanic system and its eruptive behavior. Some consider that the onus should be on volcanologists to provide simple but emphatic deterministic forecasts. This traditional way of thinking, however, does not deal with the implications of inherent uncertainties, both aleatoric and epistemic, that are inevitably present in observations, monitoring data, and interpretation of any natural system. In contrast to deterministic predictions, probabilistic eruption forecasting attempts to quantify these inherent uncertainties utilizing all available information, to the extent that it can be relied upon and is informative. As with many other natural hazards, probabilistic eruption forecasting is becoming established as the primary scientific basis for planning rational risk mitigation actions: at short time scales (hours to weeks or months), it allows decision-makers to prioritize actions in a crisis; and at long time scales (years to decades), it is the basic component for land use and emergency planning. Probabilistic eruption forecasting consists of estimating the probability of an eruption event, situated within a complex multidimensional time-space-magnitude framework. In this review, we discuss the key developments and features of models that have been used to address the problem.
Dynamic probabilistic models and social structure essays on socioeconomic continuity
Gómez M., Guillermo L.
1992-01-01
Mathematical models have been very successful in the study of the physical world. Galilei and Newton introduced point particles moving without friction under the action of simple forces as the basis for the description of concrete motions like the ones of the planets. This approach was sustained by appropriate mathematical methods, namely infinitesimal calculus, which was being developed at that time. In this way classical analytical mechanics was able to establish some general results, gaining insight through explicit solution of some simple cases and developing various methods of approximation for handling more complicated ones. Special relativity theory can be seen as an extension of this kind of modelling. In the study of electromagnetic phenomena and in general relativity another mathematical model is used, in which the concept of classical field plays the fundamental role. The equations of motion here are partial differential equations, and the methods of study used involve further developments of cl...
Distributed synthesis in continuous time
DEFF Research Database (Denmark)
Hermanns, Holger; Krčál, Jan; Vester, Steen
2016-01-01
We introduce a formalism modelling communication of distributed agents strictly in continuous time. Within this framework, we study the problem of synthesising local strategies for individual agents such that a specified set of goal states is reached, or reached with at least a given probability. The flow of time is modelled explicitly based on continuous-time randomness, with two natural implications: First, the non-determinism stemming from interleaving disappears. Second, when we restrict to a subclass of non-urgent models, the quantitative value problem for two players can be solved in EXPTIME. Indeed, the explicit continuous time enables players to communicate their states by delaying synchronisation (which is unrestricted for non-urgent models). In general, the problems are undecidable already for two players in the quantitative case and three players in the qualitative case. The qualitative …
Real-time probabilistic covariance tracking with efficient model update.
Wu, Yi; Cheng, Jian; Wang, Jinqiao; Lu, Hanqing; Wang, Jun; Ling, Haibin; Blasch, Erik; Bai, Li
2012-05-01
The recently proposed covariance region descriptor has been proven robust and versatile for a modest computational cost. The covariance matrix enables efficient fusion of different types of features, where the spatial and statistical properties, as well as their correlation, are characterized. The similarity between two covariance descriptors is measured on Riemannian manifolds. Based on the same metric but with a probabilistic framework, we propose a novel tracking approach on Riemannian manifolds with a novel incremental covariance tensor learning (ICTL). To address the appearance variations, ICTL incrementally learns a low-dimensional covariance tensor representation and efficiently adapts online to appearance changes of the target with only O(1) computational complexity, resulting in a real-time performance. The covariance-based representation and the ICTL are then combined with the particle filter framework to allow better handling of background clutter, as well as the temporary occlusions. We test the proposed probabilistic ICTL tracker on numerous benchmark sequences involving different types of challenges including occlusions and variations in illumination, scale, and pose. The proposed approach demonstrates excellent real-time performance, both qualitatively and quantitatively, in comparison with several previously proposed trackers.
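The descriptor itself is simple enough to sketch: each pixel in a region is mapped to a feature vector, and the region is summarised by the covariance matrix of those vectors. The feature set below (position, intensity, gradient magnitudes) is one common choice, and the synthetic patch is invented; the tracker, manifold metric, and ICTL update from the abstract are not reproduced here.

```python
# Sketch of a covariance region descriptor: the patch is summarised by
# the covariance of per-pixel feature vectors (x, y, I, |dI/dx|, |dI/dy|).

def covariance_descriptor(patch):
    h, w = len(patch), len(patch[0])
    feats = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Central-difference gradient magnitudes.
            ix = abs(patch[y][x + 1] - patch[y][x - 1]) / 2.0
            iy = abs(patch[y + 1][x] - patch[y - 1][x]) / 2.0
            feats.append([float(x), float(y), float(patch[y][x]), ix, iy])
    n, d = len(feats), 5
    mean = [sum(f[k] for f in feats) / n for k in range(d)]
    cov = [[0.0] * d for _ in range(d)]
    for f in feats:
        for i in range(d):
            for j in range(d):
                cov[i][j] += (f[i] - mean[i]) * (f[j] - mean[j])
    return [[cov[i][j] / (n - 1) for j in range(d)] for i in range(d)]

# An invented 8x8 intensity patch.
patch = [[(x * y) % 7 for x in range(8)] for y in range(8)]
C = covariance_descriptor(patch)
```

The result is a small symmetric positive semi-definite matrix whose size depends only on the number of features, not on the region size, which is why it fuses heterogeneous features at modest cost; comparing two descriptors then requires a metric on the manifold of such matrices, as the abstract notes.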
Performance analysis of chi models using discrete-time probabilistic reward graphs
Trcka, N.; Georgievska, S.; Markovski, J.; Andova, S.; Vink, de E.P.
2008-01-01
We propose the model of discrete-time probabilistic reward graphs (DTPRGs) for performance analysis of systems exhibiting discrete deterministic time delays and probabilistic behavior, via their interpretation as discrete-time Markov reward chains, a full-fledged platform for qualitative and …
Chemical Continuous Time Random Walks
Aquino, T.; Dentz, M.
2017-12-01
Traditional methods for modeling solute transport through heterogeneous media employ Eulerian schemes to solve for solute concentration. More recently, Lagrangian methods have removed the need for spatial discretization through the use of Monte Carlo implementations of Langevin equations for solute particle motions. While there have been recent advances in modeling chemically reactive transport with recourse to Lagrangian methods, these remain less developed than their Eulerian counterparts, and many open problems such as efficient convergence and reconstruction of the concentration field remain. We explore a different avenue and consider the question: In heterogeneous chemically reactive systems, is it possible to describe the evolution of macroscopic reactant concentrations without explicitly resolving the spatial transport? Traditional Kinetic Monte Carlo methods, such as the Gillespie algorithm, model chemical reactions as random walks in particle number space, without the introduction of spatial coordinates. The inter-reaction times are exponentially distributed under the assumption that the system is well mixed. In real systems, transport limitations lead to incomplete mixing and decreased reaction efficiency. We introduce an arbitrary inter-reaction time distribution, which may account for the impact of incomplete mixing. This process defines an inhomogeneous continuous time random walk in particle number space, from which we derive a generalized chemical Master equation and formulate a generalized Gillespie algorithm. We then determine the modified chemical rate laws for different inter-reaction time distributions. We trace Michaelis-Menten-type kinetics back to finite-mean delay times, and predict time-nonlocal macroscopic reaction kinetics as a consequence of broadly distributed delays. Non-Markovian kinetics exhibit weak ergodicity breaking and show key features of reactions under local non-equilibrium.
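The core algorithmic idea in this abstract, a Gillespie-style random walk in particle-number space with a pluggable inter-reaction time distribution, can be sketched in a few lines. The decay reaction, rate value, and gamma choice of delay distribution below are invented illustrations; the authors' generalized Master equation and rate-law derivations are not reproduced.

```python
# Illustrative "generalized Gillespie" sketch: first-order decay A -> 0
# simulated as a random walk in particle-number space. The classical
# (well-mixed, Markovian) algorithm draws exponential waiting times; a
# gamma distribution with the same mean stands in for a non-exponential
# delay mimicking incomplete mixing.
import random

def generalized_gillespie(n0, rate, draw_delay, t_max):
    """Return a list of (time, particle_count) reaction events."""
    t, n = 0.0, n0
    trajectory = [(t, n)]
    while n > 0:
        propensity = rate * n          # total reaction propensity
        t += draw_delay(propensity)    # arbitrary inter-reaction time
        if t > t_max:
            break
        n -= 1                         # one decay event fires
        trajectory.append((t, n))
    return trajectory

# Classical Markovian choice: exponential waiting times with mean 1/a.
exp_delay = lambda a: random.expovariate(a)
# Non-exponential alternative with the same mean 1/a (shape-2 gamma).
gamma_delay = lambda a: random.gammavariate(2.0, 1.0 / (2.0 * a))

random.seed(1)
traj = generalized_gillespie(100, 0.5, gamma_delay, t_max=20.0)
```

Swapping `draw_delay` is the whole generalization: with `exp_delay` the process is the standard Gillespie chain, while broad-tailed choices produce the delayed, non-Markovian kinetics the abstract describes.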
Hayashi, Hideaki; Shima, Keisuke; Shibanoki, Taro; Kurita, Yuichi; Tsuji, Toshio
2013-01-01
This paper outlines a probabilistic neural network developed on the basis of time-series discriminant component analysis (TSDCA) that can be used to classify high-dimensional time-series patterns. TSDCA involves the compression of high-dimensional time series into a lower-dimensional space using a set of orthogonal transformations, and the calculation of posterior probabilities based on a continuous-density hidden Markov model that incorporates a Gaussian mixture model expressed in the reduced-dimensional space. The analysis can be incorporated into a neural network so that parameters can be obtained appropriately as network coefficients according to a backpropagation-through-time-based training algorithm. The network is considered to enable high-accuracy classification of high-dimensional time-series patterns and to reduce the computation time taken for network training. In the experiments conducted during the study, the validity of the proposed network was demonstrated for EEG signals.
A Probabilistic Approach to Control of Complex Systems and Its Application to Real-Time Pricing
Directory of Open Access Journals (Sweden)
Koichi Kobayashi
2014-01-01
Control of complex systems is one of the fundamental problems in control theory. In this paper, a control method for complex systems modeled by a probabilistic Boolean network (PBN) is studied. A PBN is widely used as a model of complex systems such as gene regulatory networks. For a PBN, the structural control problem is newly formulated. In this problem, a discrete probability distribution appearing in a PBN is controlled by a continuous-valued input. For this problem, an approximate solution method using a matrix-based representation of a PBN is proposed, and the problem is then approximated by a linear programming problem. Furthermore, the proposed method is applied to the design of real-time pricing systems for electricity. Electricity conservation is achieved by appropriately determining the electricity price over time. The effectiveness of the proposed method is demonstrated by a numerical example on real-time pricing systems.
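The matrix-based representation mentioned in the abstract can be sketched as follows: the one-step transition matrix is a convex combination of constituent Boolean-network matrices, and here the selection probability is treated as the continuous-valued input u in [0, 1]. The two-gene example and both matrices are invented; the point of the sketch is only that the distribution update is linear in u, the property a linear-programming approximation can exploit.

```python
# Toy 2-gene PBN over states {00, 01, 10, 11}. Column j of each matrix
# holds the successor distribution of state j (column-stochastic).
P1 = [[1.0, 0.0, 0.0, 0.0],   # constituent Boolean network 1
      [0.0, 0.0, 1.0, 0.0],
      [0.0, 1.0, 0.0, 0.0],
      [0.0, 0.0, 0.0, 1.0]]
P2 = [[0.0, 1.0, 0.0, 0.0],   # constituent Boolean network 2
      [1.0, 0.0, 0.0, 0.0],
      [0.0, 0.0, 0.0, 1.0],
      [0.0, 0.0, 1.0, 0.0]]

def step(pi, u):
    """One-step update pi' = (u*P1 + (1-u)*P2) @ pi, linear in u."""
    P = [[u * P1[i][j] + (1 - u) * P2[i][j] for j in range(4)]
         for i in range(4)]
    return [sum(P[i][j] * pi[j] for j in range(4)) for i in range(4)]

pi0 = [0.25, 0.25, 0.25, 0.25]   # uniform initial state distribution
pi1 = step(pi0, 0.3)
```

Because the updated distribution is an affine function of u, constraints on the state distribution translate into linear constraints on the input, which is what makes an LP relaxation of the structural control problem natural.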
For Time-Continuous Optimisation
DEFF Research Database (Denmark)
Heinrich, Mary Katherine; Ayres, Phil
2016-01-01
Strategies for optimisation in design normatively assume an artefact end-point, disallowing continuous architecture that engages living systems, dynamic behaviour, and complex systems. In our Flora Robotica investigations of symbiotic plant-robot bio-hybrids, we require computational tools...
Amigó, José M; Hirata, Yoshito; Aihara, Kazuyuki
2017-08-01
In a previous paper, the authors studied the limits of probabilistic prediction in nonlinear time series analysis in a perfect model scenario, i.e., in the ideal case that the uncertainty of an otherwise deterministic model is due only to the finite precision of the observations. The model consisted of the symbolic dynamics of a measure-preserving transformation with respect to a finite partition of the state space, and the quality of the predictions was measured by the so-called ignorance score, which is a conditional entropy. In practice, though, partitions are dispensed with by considering numerical and experimental data to be continuous, which prompts us to trade off in this paper the Shannon entropy for the differential entropy. Despite technical differences, we show that the core of the previous results also holds in this extended scenario for sufficiently high precision. The corresponding imperfect model scenario is revisited too, because it is relevant for the applications. The theoretical part and its application to probabilistic forecasting are illustrated with numerical simulations and a new prediction algorithm.
An approach to handle Real Time and Probabilistic behaviors in e-commerce
DEFF Research Database (Denmark)
Diaz, G.; Larsen, Kim Guldstrand; Pardo, J.
2005-01-01
In this work we describe an approach to deal with systems having at the same time probabilistic and real-time behaviors. The main goal of the paper is to show the automatic translation from a real-time model based on the UPPAAL tool, which performs automatic verification of real-time systems, to the R...
Innovative real time simulation training and nuclear probabilistic risk assessment
International Nuclear Information System (INIS)
Reisinger, M.F.
1991-01-01
Operator errors have been an area of public concern for the safe operation of nuclear power plants since the TMI2 incident. Simply stated, nuclear plants are very complex systems and the public is skeptical of the operators' ability to comprehend and deal with the vast indications and complexities of potential nuclear power plant events. Prior to the TMI2 incident, operator errors and human factors were not included as contributing factors in the Probabilistic Risk Assessment (PRA) studies of nuclear power plant accidents. More recent efforts in nuclear risk assessment have addressed some of the human factors affecting safe nuclear plant operations. One study found four major factors having significant impact on operator effectiveness. This paper discusses human factor PRAs, new applications in simulation training and the specific potential benefits from simulation in promoting safer operation of future power plants as well as current operating power plants
Hayashi, Hideaki; Shibanoki, Taro; Shima, Keisuke; Kurita, Yuichi; Tsuji, Toshio
2015-12-01
This paper proposes a probabilistic neural network (NN) developed on the basis of time-series discriminant component analysis (TSDCA) that can be used to classify high-dimensional time-series patterns. TSDCA involves the compression of high-dimensional time series into a lower dimensional space using a set of orthogonal transformations and the calculation of posterior probabilities based on a continuous-density hidden Markov model with a Gaussian mixture model expressed in the reduced-dimensional space. The analysis can be incorporated into an NN, which is named a time-series discriminant component network (TSDCN), so that parameters of dimensionality reduction and classification can be obtained simultaneously as network coefficients according to a backpropagation through time-based learning algorithm with the Lagrange multiplier method. The TSDCN is considered to enable high-accuracy classification of high-dimensional time-series patterns and to reduce the computation time taken for network training. The validity of the TSDCN is demonstrated for high-dimensional artificial data and electroencephalogram signals in the experiments conducted during the study.
Liu, Hongjian; Wang, Zidong; Shen, Bo; Huang, Tingwen; Alsaadi, Fuad E
2018-06-01
This paper is concerned with the globally exponential stability problem for a class of discrete-time stochastic memristive neural networks (DSMNNs) with both leakage delays and probabilistic time-varying delays. For the probabilistic delays, a sequence of Bernoulli distributed random variables is utilized to determine within which intervals the time-varying delays fall at a given time instant. The sector-bounded activation function is considered in the addressed DSMNN. By taking into account the state-dependent characteristics of the network parameters and choosing an appropriate Lyapunov-Krasovskii functional, some sufficient conditions are established under which the underlying DSMNN is globally exponentially stable in the mean square. The derived conditions depend on both the leakage and the probabilistic delays, and are therefore less conservative than traditional delay-independent criteria. A simulation example is given to show the effectiveness of the proposed stability criterion. Copyright © 2018 Elsevier Ltd. All rights reserved.
International Nuclear Information System (INIS)
Barnett, C.S.
1991-01-01
The Double Contingency Principle (DCP) is widely applied to criticality safety practice in the United States. Most practitioners base their application of the principle on qualitative, intuitive assessments. The recent trend toward probabilistic safety assessments provides a motive to search for a quantitative, probabilistic foundation for the DCP. A Markov model is tractable and leads to relatively simple results. The model yields estimates of mean time to simultaneous collapse of two contingencies as a function of estimates of mean failure times and mean recovery times of two independent contingencies. The model is a tool that can be used to supplement the qualitative methods now used to assess effectiveness of the DCP. (Author)
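The kind of Markov model this abstract describes can be sketched directly: two independent contingencies, each alternating between "in place" and "failed" with constant failure rate lam = 1/MTTF and recovery rate mu = 1/MTTR. First-step analysis of the four-state chain, with "both failed" absorbing, yields the mean time to simultaneous collapse. The rate values in the example are arbitrary illustrations, not figures from the report.

```python
# Mean time until two independent contingencies are failed simultaneously,
# from first-step analysis of states (up,up), (down,up), (up,down):
#   t_uu = (1 + lam1*t_du + lam2*t_ud) / (lam1 + lam2)
#   t_du = (1 + mu1*t_uu) / (mu1 + lam2)
#   t_ud = (1 + mu2*t_uu) / (mu2 + lam1)
# Substituting the last two into the first gives the closed form below.

def mean_time_to_double_failure(lam1, mu1, lam2, mu2):
    a = lam1 + lam2
    denom = a - lam1 * mu1 / (mu1 + lam2) - lam2 * mu2 / (mu2 + lam1)
    return (1 + lam1 / (mu1 + lam2) + lam2 / (mu2 + lam1)) / denom

# Example: each contingency has MTTF = 1000 h and MTTR = 10 h.
t = mean_time_to_double_failure(1e-3, 0.1, 1e-3, 0.1)
```

In the symmetric case (lam1 = lam2 = lam, mu1 = mu2 = mu) the expression reduces to (mu + 3*lam) / (2*lam**2), so the mean time to double failure grows roughly with the square of the single-contingency MTTF, which is the quantitative intuition behind the Double Contingency Principle.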
Directory of Open Access Journals (Sweden)
A. Mahmoodzadeh
2016-10-01
Ground conditions and construction (excavation and support) time and costs are the key factors in decision-making during the planning and design phases of a tunnel project. An innovative methodology for probabilistic estimation of ground conditions and construction time and costs is proposed, integrating a ground prediction approach based on a Markov process with time and cost variance analysis based on Monte Carlo (MC) simulation. The former provides a probabilistic description of the ground classification along the tunnel alignment according to the geological information revealed by the geological profile and boreholes. The latter provides a probabilistic description of the expected construction time and costs for each operation according to survey feedback from experts. An engineering application to the Hamro tunnel is then presented to demonstrate how the ground conditions and the construction time and costs are estimated in a probabilistic way. For most items, the data needed for this methodology were estimated by distributing questionnaires among tunnelling experts and applying the mean values of the responses. These results help both owners and contractors to be aware of the risks they carry before construction, and are useful for both tendering and bidding.
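The time-estimation step of such a methodology can be sketched as follows: ground classes along the tunnel evolve as a Markov chain, each class has an expert-elicited triangular advance-time distribution per section, and Monte Carlo sampling yields the distribution of total construction time (costs would follow analogously). The two ground classes, transition probabilities, and per-section durations below are invented, not the paper's Hamro tunnel data.

```python
# Monte Carlo sketch of probabilistic tunnel construction time: Markov
# ground classes + triangular per-section advance times (days).
import random

transition = {"good": {"good": 0.8, "poor": 0.2},
              "poor": {"good": 0.4, "poor": 0.6}}
time_per_section = {"good": (1.0, 1.5, 2.5),   # (min, mode, max) days
                    "poor": (3.0, 5.0, 9.0)}

def simulate_tunnel(n_sections, start="good"):
    state, total = start, 0.0
    for _ in range(n_sections):
        lo, mode, hi = time_per_section[state]
        total += random.triangular(lo, hi, mode)
        state = ("good" if random.random() < transition[state]["good"]
                 else "poor")
    return total

random.seed(7)
samples = [simulate_tunnel(100) for _ in range(2000)]
mean_days = sum(samples) / len(samples)
p90 = sorted(samples)[int(0.9 * len(samples))]
```

Reporting a quantile such as the 90th percentile alongside the mean is what lets owners and contractors see the schedule risk they carry before construction, rather than a single deterministic estimate.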
Continuity of Local Time: An applied perspective
Ramirez, Jorge M.; Waymire, Edward C.; Thomann, Enrique A.
2015-01-01
Continuity of local time for Brownian motion ranks among the most notable mathematical results in the theory of stochastic processes. This article addresses its implications from the point of view of applications. In particular an extension of previous results on an explicit role of continuity of (natural) local time is obtained for applications to recent classes of problems in physics, biology and finance involving discontinuities in a dispersion coefficient. The main theorem and its corolla...
Multivariate time-varying volatility modeling using probabilistic fuzzy systems
Basturk, N.; Almeida, R.J.; Golan, R.; Kaymak, U.
2016-01-01
Methods to accurately analyze financial risk have drawn considerable attention in financial institutions. One difficulty in financial risk analysis is the fact that banks and other financial institutions invest in several assets which show time-varying volatilities and hence time-varying financial
Parameter Estimation in Continuous Time Domain
Directory of Open Access Journals (Sweden)
Gabriela M. ATANASIU
2016-12-01
This paper presents the application of a continuous-time parameter estimation method for estimating the structural parameters of a real bridge structure. To illustrate the method, two case studies of a bridge pile located in a highly seismic area are considered, for which the structural parameters of mass, damping, and stiffness are estimated. The estimation process is followed by validation of the analytical results and comparison with the measurement data. Further benefits and applications of the continuous-time parameter estimation method in civil engineering are presented in the final part of the paper.
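The abstract does not detail the estimation algorithm, but the basic idea of recovering mass, damping, and stiffness from a continuous-time equation of motion can be sketched with a single-degree-of-freedom model m*x'' + c*x' + k*x = f(t) and least squares. The trajectory below is synthetic and the "true" parameters are invented; a real study would use measured response and excitation signals.

```python
# Least-squares recovery of (m, c, k) for m*x'' + c*x' + k*x = f(t)
# from sampled trajectory data.
import math

m_true, c_true, k_true = 2.0, 0.3, 5.0

# Two-harmonic synthetic response, so the regressors x'', x', x are not
# collinear (a single sinusoid would make x'' proportional to x).
def x(t):   return math.sin(t) + 0.5 * math.sin(2.0 * t)
def xd(t):  return math.cos(t) + 1.0 * math.cos(2.0 * t)
def xdd(t): return -math.sin(t) - 2.0 * math.sin(2.0 * t)
def f(t):   return m_true * xdd(t) + c_true * xd(t) + k_true * x(t)

ts = [0.01 * i for i in range(1000)]
A = [[xdd(t), xd(t), x(t)] for t in ts]   # regressor matrix
b = [f(t) for t in ts]                    # measured force

# Normal equations (A^T A) theta = A^T b, solved by Gaussian elimination.
ata = [[sum(r[i] * r[j] for r in A) for j in range(3)] for i in range(3)]
atb = [sum(r[i] * v for r, v in zip(A, b)) for i in range(3)]
for i in range(3):                        # forward elimination
    for r in range(i + 1, 3):
        fct = ata[r][i] / ata[i][i]
        ata[r] = [a - fct * q for a, q in zip(ata[r], ata[i])]
        atb[r] -= fct * atb[i]
theta = [0.0, 0.0, 0.0]
for i in (2, 1, 0):                       # back substitution
    theta[i] = (atb[i] - sum(ata[i][j] * theta[j]
                             for j in range(i + 1, 3))) / ata[i][i]
m_est, c_est, k_est = theta
```

With noise-free data the estimates reproduce the true parameters to numerical precision; with measured data the same regression quantifies how well the identified model matches the validation measurements.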
Time, physics, and the paradoxes of continuity
Steinberg, D A
2003-01-01
A recent article in this journal proposes a radical reformulation of classical and quantum dynamics based on a perceived deficiency in current definitions of time. The argument is incorrect but the errors highlight aspects of the foundations of mathematics and physics that are commonly confused and misunderstood. For this reason, the article provides an important and heuristic opportunity to reexamine the types of time and non-standard analysis. This paper will discuss the differences between physical time and experiential time and explain how an expanded system of real analysis containing infinitesimals can resolve the paradoxes of continuity without sacrificing the modern edifice of mathematical physics.
Continuous Time Dynamic Contraflow Models and Algorithms
Directory of Open Access Journals (Sweden)
Urmila Pyakurel
2016-01-01
Full Text Available Research on the evacuation planning problem is driven by the very challenging emergency issues that arise from large-scale natural or man-made disasters. Evacuation planning is the process of shifting the maximum number of evacuees from disaster areas to safe destinations as quickly and efficiently as possible. Contraflow is a widely accepted model for obtaining good solutions to the evacuation planning problem: it increases outbound road capacity by reversing the direction of roads towards the safe destination. The continuous dynamic contraflow problem sends the maximum flow, as a flow rate, from the source to the sink at every moment in time. We propose a mathematical model for the continuous dynamic contraflow problem. We present efficient algorithms to solve the maximum continuous dynamic contraflow and quickest continuous contraflow problems on single-source single-sink arbitrary networks, and the continuous earliest arrival contraflow problem on single-source single-sink series-parallel networks with undefined supply and demand. We also introduce an approximate solution for the continuous earliest arrival contraflow problem on two-terminal arbitrary networks.
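The core contraflow idea, making a road's full two-way capacity available toward the safe destination by reversing its inbound lanes, can be illustrated with a static max-flow sketch. The paper's model is continuous-time and dynamic; the toy below shows only the capacity-merging step on a hypothetical three-node network, using a plain Edmonds-Karp max-flow routine.

```python
from collections import deque, defaultdict

def max_flow(cap, s, t):
    """Edmonds-Karp max flow; cap is a nested dict {u: {v: capacity}}."""
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # Recover the path, find the bottleneck, and push flow along it.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= aug
            cap[v][u] = cap[v].get(u, 0) + aug
        flow += aug

def contraflow_capacities(arcs):
    """Contraflow relaxation: each road's two directed capacities are
    merged, so the full road width can be oriented toward the sink
    (opposite flows on the same road cancel in the max-flow solution)."""
    merged = defaultdict(dict)
    for (u, v), c in arcs.items():
        merged[u][v] = c + arcs.get((v, u), 0)
    return merged

# Hypothetical road network: directed capacities per direction.
arcs = {('s', 'a'): 2, ('a', 's'): 1, ('a', 't'): 1, ('t', 'a'): 2}
print(max_flow(contraflow_capacities(arcs), 's', 't'))  # 3 with contraflow
```

Without lane reversal the same network carries only 1 unit from s to t, so contraflow triples the outbound capacity in this toy case.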
Robust stabilisation of time-varying delay systems with probabilistic uncertainties
Jiang, Ning; Xiong, Junlin; Lam, James
2016-09-01
For robust stabilisation of time-varying delay systems, only sufficient conditions are available to date. A natural question is as follows: if the existing sufficient conditions are not satisfied, and hence no controllers can be found, what can one do to improve the stability performance of time-varying delay systems? This question is addressed in this paper when there is a probabilistic structure on the parameter uncertainty set. A randomised algorithm is proposed to design a state-feedback controller, which stabilises the system over the uncertainty domain in a probabilistic sense. The capability of the designed controller is quantified by the probability of stability of the resulting closed-loop system. The accuracy of the solution obtained from the randomised algorithm is also analysed. Finally, numerical examples are used to illustrate the effectiveness and advantages of the developed controller design approach.
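The randomised-algorithm idea can be sketched in miniature: sample the uncertain parameter from its probability distribution, test closed-loop stability for each sample, and report the empirical probability of stability. The toy below uses a delay-free two-state system with hypothetical matrices and a scalar Gaussian uncertainty, not the paper's delay-system machinery.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy closed loop: x' = (A0 + delta * A1) x with scalar uncertainty delta.
# (Hypothetical matrices; the paper treats time-varying delay systems.)
A0 = np.array([[-1.0, 1.0], [0.0, -2.0]])
A1 = np.array([[0.5, 0.0], [0.0, 0.5]])

def is_stable(delta):
    # Hurwitz test: all eigenvalues strictly in the open left half-plane.
    return np.max(np.linalg.eigvals(A0 + delta * A1).real) < 0

# Randomised (scenario) estimate of the probability of stability
# over delta ~ N(0, 1).
samples = rng.normal(0.0, 1.0, size=20000)
p_stable = np.mean([is_stable(d) for d in samples])
print(p_stable)
```

For this toy system the closed-loop eigenvalues are -1 + 0.5*delta and -2 + 0.5*delta, so stability holds exactly when delta < 2 and the Monte Carlo estimate should approach the Gaussian tail probability of that event.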
A Time-Varied Probabilistic ON/OFF Switching Algorithm for Cellular Networks
Rached, Nadhir B.; Ghazzai, Hakim; Kadri, Abdullah; Alouini, Mohamed-Slim
2018-01-01
In this letter, we develop a time-varied probabilistic on/off switching planning method for cellular networks to reduce their energy consumption. It consists of a risk-aware optimization approach that takes into consideration the randomness of the user profile associated with each base station (BS). The proposed approach jointly determines (i) the instants of time at which the current active BS configuration must be updated due to an increase or decrease of the network traffic load, and (ii) the minimum set of BSs to be activated to serve the networks' subscribers. Probabilistic metrics modeling the traffic profile variation are developed to trigger this dynamic on/off switching operation. Selected simulation results are then presented to validate the proposed algorithm for different system parameters.
a Continuous-Time Positive Linear System
Directory of Open Access Journals (Sweden)
Kyungsup Kim
2013-01-01
Full Text Available This paper discusses a computational method to construct positive realizations with sparse matrices for continuous-time positive linear systems with multiple complex poles. To construct a positive realization of a continuous-time system, we use a Markov sequence similar to the impulse response sequence that is used in the discrete-time case. The existence of the proposed positive realization can be analyzed with the concept of a polyhedral convex cone. We provide a constructive algorithm to compute positive realizations with sparse matrices of some positive systems under certain conditions. A sufficient condition for the existence of a positive realization, under which the proposed constructive algorithm works well, is analyzed.
Whither probabilistic security management for real-time operation of power systems?
Karangelos, Efthymios; Panciatici, Patrick; Wehenkel, Louis
2016-01-01
This paper investigates the stakes of introducing probabilistic approaches for the management of power system security. In real-time operation, the aim is to arbitrate in a rational way between preventive and corrective control, while taking into account (i) the prior probabilities of contingencies, (ii) the possible failure modes of corrective control actions, and (iii) the socio-economic consequences of service interruptions. This work is a first step towards the construction of a globally co...
Time-delay analyzer with continuous discretization
International Nuclear Information System (INIS)
Bayatyan, G.L.; Darbinyan, K.T.; Mkrtchyan, K.K.; Stepanyan, S.S.
1988-01-01
A time-delay analyzer is described which, when triggered by a start pulse of adjustable duration, performs continuous discretization of the analyzed signal in time intervals of nearly 22 ns, records the result in a memory unit, and then slowly reads the information out to a computer for processing. The time-delay analyzer consists of four CAMAC-VECTOR systems of unit width. With its help one can separate comparatively short, small-amplitude, rare signals against a background of quasistationary noise processes. 4 refs.; 3 figs
Path probabilities of continuous time random walks
International Nuclear Information System (INIS)
Eule, Stephan; Friedrich, Rudolf
2014-01-01
Employing the path integral formulation of a broad class of anomalous diffusion processes, we derive exact relations for the path probability densities of these processes. In particular, we obtain a closed analytical solution for the path probability distribution of a Continuous Time Random Walk (CTRW) process. This solution is given in terms of the waiting time distribution and the short-time propagator of the corresponding random walk, as the solution of a Dyson equation. Applying our analytical solution, we derive generalized Feynman–Kac formulae. (paper)
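A plain Monte Carlo CTRW simulator helps make the object of study concrete. The sketch below (all parameters hypothetical) draws heavy-tailed Pareto waiting times and symmetric unit jumps, and records walker positions at a fixed observation time; the paper derives the corresponding path probabilities analytically rather than by simulation.

```python
import numpy as np

rng = np.random.default_rng(1)

def ctrw_position(T, alpha=1.5, n_walkers=5000):
    """Positions at time T of CTRW walkers with Pareto waiting times
    (P(w > t) ~ t**(-alpha), minimum wait 1) and symmetric unit jumps."""
    pos = np.zeros(n_walkers)
    clock = np.zeros(n_walkers)
    active = np.ones(n_walkers, dtype=bool)
    while active.any():
        # Draw the next waiting time for every still-active walker.
        wait = rng.pareto(alpha, size=active.sum()) + 1.0
        clock[active] += wait
        # A jump is counted only if it happens before the observation time T.
        jumped = active.copy()
        jumped[active] = clock[active] <= T
        pos[jumped] += rng.choice([-1.0, 1.0], size=jumped.sum())
        active = jumped
    return pos

x = ctrw_position(T=200.0)
print(x.mean(), x.var())
```

With alpha > 1 the mean waiting time is finite (here alpha/(alpha-1) = 3), so the sample variance at T = 200 should be of the order of the expected jump count T/3; taking alpha < 1 instead would give the subdiffusive regime where CTRW path statistics differ sharply from Brownian motion.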
Lin, T.
2014-01-01
In this paper, we settle a problem in the probabilistic verification of infinite-state processes (specifically, probabilistic pushdown processes). We show that model checking stateless probabilistic pushdown processes (pBPA) against probabilistic computation tree logic (PCTL) is undecidable.
Progress in methodology for probabilistic assessment of accidents: timing of accident sequences
International Nuclear Information System (INIS)
Lanore, J.M.; Villeroux, C.; Bouscatie, F.; Maigret, N.
1981-09-01
There is an important limitation in probabilistic studies of accident sequences using current event tree techniques: the method does not take into account the time dependence of real accident scenarios, which involve the random behaviour of systems (lack of or delay in intervention, partial failures, repair, operator actions, ...) and the correlated evolution of the physical parameters. A powerful method for the probabilistic treatment of these complex sequences (dynamic evolution of systems and the associated physics) is Monte Carlo simulation, with very rare events treated with the help of suitable weighting and biasing techniques. As a practical example, the accident sequences related to the loss of the residual heat removal system in a fast breeder reactor have been treated with this method.
Interaction-aided continuous time quantum search
International Nuclear Information System (INIS)
Bae, Joonwoo; Kwon, Younghun; Baek, Inchan; Yoon, Dalsun
2005-01-01
The continuous-time quantum search algorithm (based on the Farhi-Gutmann Hamiltonian evolution) is known to be analogous to the Grover (discrete-time quantum search) algorithm. Any errors introduced into the Grover algorithm are fatal to its success. In the same way, the Farhi-Gutmann Hamiltonian algorithm encounters severe difficulty when the Hamiltonian is perturbed. In this letter we show that the interaction term in the quantum search Hamiltonian (more precisely, in the generalized quantum search Hamiltonian) can rescue the perturbed Farhi-Gutmann Hamiltonian, which would otherwise fail. We note that this fact is quite remarkable, since it implies that introducing an interaction can be a way to correct some errors in continuous time quantum search
Adaptive predictors based on probabilistic SVM for real time disruption mitigation on JET
Murari, A.; Lungaroni, M.; Peluso, E.; Gaudio, P.; Vega, J.; Dormido-Canto, S.; Baruzzo, M.; Gelfusa, M.; Contributors, JET
2018-05-01
Detecting disruptions with sufficient anticipation time is essential to undertake any form of remedial strategy, whether mitigation or avoidance. Traditional predictors based on machine learning techniques can perform very well if properly optimised, but they do not provide a natural estimate of the quality of their outputs and they typically age very quickly. In this paper a new set of tools, based on probabilistic extensions of support vector machines (SVM), is introduced and applied for the first time to JET data. The probabilistic output constitutes a natural qualification of the prediction quality and provides additional flexibility. An adaptive training strategy ‘from scratch’ has also been devised, which allows the performance to be preserved even when the experimental conditions change significantly. Large JET databases of disruptions, covering entire campaigns and thousands of discharges, have been analysed for both the graphite and the ITER-Like Wall. Performance significantly better than that of any previous predictor using adaptive training has been achieved, satisfying even the requirements of the next generation of devices. The adaptive approach to training has also provided unique information about the evolution of the operational space. The fact that the developed tools give the probability of disruption improves the interpretability of the results, provides an estimate of predictor quality and gives new insights into the physics. Moreover, the probabilistic treatment makes it easier to insert these classifiers into general decision support and control systems.
Probabilistic tsunami hazard assessment considering time-lag of seismic event on Nankai trough
International Nuclear Information System (INIS)
Sugino, Hideharu; Sakagami, Masaharu; Ebisawa, Katsumi; Korenaga, Mariko
2011-01-01
In the area in front of the Nankai trough, tsunami wave height may increase if tsunamis arriving from several wave sources overlap because of the time-lag between seismic events on the Nankai trough. To evaluate the tsunami risk to important facilities located in front of the Nankai trough, we propose a probabilistic tsunami hazard assessment that considers the uncertainty in the time-lag between seismic events on the Nankai trough, and we evaluate the influence of this time-lag on the tsunami hazard at some representative points. (author)
Light water reactor sequence timing: its significance to probabilistic safety assessment modeling
International Nuclear Information System (INIS)
Bley, D.C.; Buttemer, D.R.; Stetkar, J.W.
1988-01-01
This paper examines event sequence timing in light water reactor plants from the viewpoint of probabilistic safety assessment (PSA). The analytical basis for the ideas presented here comes primarily from the authors' work in support of more than 20 PSA studies over the past several years. Timing effects are important for establishing success criteria for support and safety system response and for identifying the time available for operator recovery actions. The principal results of this paper are as follows: 1. Analysis of event sequence timing is necessary for meaningful probabilistic safety assessment; both the success criteria for systems performance and the probability of recovery are tightly linked to sequence timing. 2. Simple engineering analyses based on first principles are often sufficient to provide adequate resolution of the time available for recovery in PSA scenarios. Only those parameters that influence sequence timing and its variability and uncertainty need be examined. 3. Time available for recovery is the basic criterion for the evaluation of human performance, whether or not time is an explicit parameter of the operator actions analysis. (author)
Language Emptiness of Continuous-Time Parametric Timed Automata
DEFF Research Database (Denmark)
Benes, Nikola; Bezdek, Peter; Larsen, Kim Guldstrand
2015-01-01
Parametric timed automata extend the standard timed automata with the possibility to use parameters in the clock guards. In general, if the parameters are real-valued, the problem of language emptiness of such automata is undecidable even for various restricted subclasses. We thus focus on the case...... where parameters are assumed to be integer-valued, while the time still remains continuous. On the one hand, we show that the problem remains undecidable for parametric timed automata with three clocks and one parameter. On the other hand, for the case with arbitrary many clocks where only one......-time semantics only. To the best of our knowledge, this is the first positive result in the case of continuous-time and unbounded integer parameters, except for the rather simple case of single-clock automata....
Expectation propagation for continuous time stochastic processes
International Nuclear Information System (INIS)
Cseke, Botond; Schnoerr, David; Sanguinetti, Guido; Opper, Manfred
2016-01-01
We consider the inverse problem of reconstructing the posterior measure over the trajectories of a diffusion process from discrete time observations and continuous time constraints. We cast the problem in a Bayesian framework and derive approximations to the posterior distributions of single time marginals using variational approximate inference, giving rise to an expectation propagation type algorithm. For non-linear diffusion processes, this is achieved by leveraging moment closure approximations. We then show how the approximation can be extended to a wide class of discrete-state Markov jump processes by making use of the chemical Langevin equation. Our empirical results show that the proposed method is computationally efficient and provides good approximations for these classes of inverse problems. (paper)
A continuous time Cournot duopoly with delays
International Nuclear Information System (INIS)
Gori, Luca; Guerrini, Luca; Sodini, Mauro
2015-01-01
This paper extends the classical repeated duopoly model with quantity-setting firms of Bischi et al. (1998) by assuming that production of goods is subject to some gestation lags but exchanges take place continuously in the market. The model is expressed in the form of differential equations with discrete delays. By using some recent mathematical techniques and numerical experiments, results show some dynamic phenomena that cannot be observed when delays are absent. In addition, depending on the extent of time delays and inertia, synchronisation failure can arise even in the event of homogeneous firms.
Stochastic volatility of volatility in continuous time
DEFF Research Database (Denmark)
Barndorff-Nielsen, Ole; Veraart, Almut
This paper introduces the concept of stochastic volatility of volatility in continuous time and, hence, extends standard stochastic volatility (SV) models to allow for an additional source of randomness associated with greater variability in the data. We discuss how stochastic volatility...... of volatility can be defined both non-parametrically, where we link it to the quadratic variation of the stochastic variance process, and parametrically, where we propose two new SV models which allow for stochastic volatility of volatility. In addition, we show that volatility of volatility can be estimated...
Heterogeneous continuous-time random walks
Grebenkov, Denis S.; Tupikina, Liubov
2018-01-01
We introduce a heterogeneous continuous-time random walk (HCTRW) model as a versatile analytical formalism for studying and modeling diffusion processes in heterogeneous structures, such as porous or disordered media, multiscale or crowded environments, weighted graphs or networks. We derive the exact form of the propagator and investigate the effects of spatiotemporal heterogeneities onto the diffusive dynamics via the spectral properties of the generalized transition matrix. In particular, we show how the distribution of first-passage times changes due to local and global heterogeneities of the medium. The HCTRW formalism offers a unified mathematical language to address various diffusion-reaction problems, with numerous applications in material sciences, physics, chemistry, biology, and social sciences.
Stochastic Simulation and Forecast of Hydrologic Time Series Based on Probabilistic Chaos Expansion
Li, Z.; Ghaith, M.
2017-12-01
Hydrological processes are characterized by many complex features, such as nonlinearity, dynamics and uncertainty. How to quantify and address such complexities and uncertainties has been a challenging task for water engineers and managers for decades. To support robust uncertainty analysis, an innovative approach for the stochastic simulation and forecast of hydrologic time series is developed in this study. Probabilistic Chaos Expansions (PCEs) are established through probabilistic collocation to tackle uncertainties associated with the parameters of traditional hydrological models. The uncertainties are quantified in the model outputs as Hermite polynomials with respect to standard normal random variables. Subsequently, multivariate analysis techniques are used to analyze the complex nonlinear relationships between meteorological inputs (e.g., temperature, precipitation, evapotranspiration, etc.) and the coefficients of the Hermite polynomials. With the established relationships between model inputs and PCE coefficients, forecasts of hydrologic time series can be generated, and the uncertainties in the future time series can be further tackled. The proposed approach is demonstrated using a case study in China and is compared to a traditional stochastic simulation technique, the Markov-Chain Monte-Carlo (MCMC) method. Results show that the proposed approach can serve as a reliable proxy for complicated hydrological models. It can provide probabilistic forecasting in a more computationally efficient manner than the traditional MCMC method. This work provides technical support for addressing uncertainties associated with hydrological modeling and for enhancing the reliability of hydrological modeling results. Applications of the developed approach can be extended to many other complicated geophysical and environmental modeling systems to support the associated uncertainty quantification and risk analysis.
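In one dimension, the collocation step described above reduces to computing Hermite-chaos coefficients by Gaussian quadrature. The sketch below uses a toy response function (not a hydrological model) together with probabilists' Hermite polynomials and Gauss-Hermite nodes from NumPy.

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi

def pce_coefficients(g, order, n_nodes=20):
    """Coefficients c_k of the expansion g(Z) ~ sum_k c_k He_k(Z) for
    Z ~ N(0,1), computed as c_k = E[g(Z) He_k(Z)] / k! via Gauss-Hermite
    quadrature with the probabilists' weight exp(-x**2 / 2)."""
    x, w = He.hermegauss(n_nodes)
    w = w / sqrt(2.0 * pi)          # normalise to the standard normal pdf
    coeffs = []
    for k in range(order + 1):
        he_k = He.hermeval(x, [0.0] * k + [1.0])   # He_k at the nodes
        coeffs.append(np.sum(w * g(x) * he_k) / factorial(k))
    return np.array(coeffs)

# Toy "model response": g(z) = z**2, which is exactly He_2(z) + 1.
c = pce_coefficients(lambda z: z**2, order=4)
print(np.round(c, 6))   # ~ [1, 0, 1, 0, 0]
```

Once the coefficients are known, the output mean is c[0] and the variance is sum over k >= 1 of c[k]**2 * k!, which is how a PCE surrogate delivers uncertainty statistics without rerunning the underlying model.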
Discrete time and continuous time dynamic mean-variance analysis
Reiss, Ariane
1999-01-01
Contrary to static mean-variance analysis, very few papers have dealt with dynamic mean-variance analysis. Here, the mean-variance efficient self-financing portfolio strategy is derived for n risky assets in discrete and continuous time. In the discrete setting, the resulting portfolio is mean-variance efficient in a dynamic sense. It is shown that the optimal strategy for n risky assets may be dominated if the expected terminal wealth is constrained to exactly attain a certain goal instead o...
The criterion for time symmetry of probabilistic theories and the reversibility of quantum mechanics
International Nuclear Information System (INIS)
Holster, A T
2003-01-01
Physicists routinely claim that the fundamental laws of physics are 'time symmetric' or 'time reversal invariant' or 'reversible'. In particular, it is claimed that the theory of quantum mechanics is time symmetric. But it is shown in this paper that the orthodox analysis suffers from a fatal conceptual error, because the logical criterion for judging the time symmetry of probabilistic theories has been incorrectly formulated. The correct criterion requires symmetry between future-directed laws and past-directed laws. This criterion is formulated and proved in detail. The orthodox claim that quantum mechanics is reversible is re-evaluated. The property demonstrated in the orthodox analysis is shown to be quite distinct from time reversal invariance. The view of Satosi Watanabe that quantum mechanics is time asymmetric is verified, as well as his view that this feature does not merely show a de facto or 'contingent' asymmetry, as commonly supposed, but implies a genuine failure of time reversal invariance of the laws of quantum mechanics. The laws of quantum mechanics would be incompatible with a time-reversed version of our universe
Directory of Open Access Journals (Sweden)
Flavio De Martino
2013-10-01
Full Text Available Stormwater tank performance significantly depends on management practices. This paper proposes a procedure to assess tank efficiency in terms of volume and pollutant concentration using four different capture tank management protocols. The comparison of the efficiency results reveals that, as expected, a combined bypass-stormwater tank system achieves better results than a tank alone. The management practices tested for the tank-only systems provide notably different efficiency results. The practice of emptying immediately after the end of the event exhibits significant levels of efficiency and operational advantages. All other configurations exhibit either significant operational problems or very low performance. The continuous simulation and semi-probabilistic approaches for the best tank management practice are compared. The semi-probabilistic approach is based on a Weibull probabilistic model of the main characteristics of the rainfall process. Following this approach, efficiency indexes were established. The comparison with continuous simulations shows the reliability of the probabilistic approach, even though the latter is certainly very site-sensitive.
Discounting Models for Outcomes over Continuous Time
DEFF Research Database (Denmark)
Harvey, Charles M.; Østerdal, Lars Peter
Events that occur over a period of time can be described either as sequences of outcomes at discrete times or as functions of outcomes in an interval of time. This paper presents discounting models for events of the latter type. Conditions on preferences are shown to be satisfied if and only if t...... if the preferences are represented by a function that is an integral of a discounting function times a scale defined on outcomes at instants of time....
Gutmann, Bernd
2011-01-01
In the last decade remarkable progress has been made on combining statistical machine learning techniques, reasoning under uncertainty, and relational representations. The branch of Artificial Intelligence working on the synthesis of these three areas is known as statistical relational learning or probabilistic logic learning.ProbLog, one of the probabilistic frameworks developed, is an extension of the logic programming language Prolog with independent random variables that are defined by an...
A Bimodal Hybrid Model for Time-Dependent Probabilistic Seismic Hazard Analysis
Yaghmaei-Sabegh, Saman; Shoaeifar, Nasser; Shoaeifar, Parva
2018-03-01
The evaluation of evidence provided by geological studies and historical catalogs indicates that in some seismic regions and faults, multiple large earthquakes occur in clusters. The occurrences of large earthquakes are then followed by quiescence, during which only small-to-moderate earthquakes take place. Clustering of large earthquakes is the most distinguishable departure from the assumption of constant hazard of random earthquake occurrence made in conventional seismic hazard analysis. In the present study, a time-dependent recurrence model is proposed to account for series of large earthquakes that occur in clusters. The model is flexible enough to reflect the quasi-periodic behavior of large earthquakes with long-term clustering, and it can be used in time-dependent probabilistic seismic hazard analysis for engineering purposes. In this model, the time-dependent hazard is estimated by a hazard function comprising three parts: a decreasing hazard from the last large-earthquake cluster, an increasing hazard toward the next large-earthquake cluster, and a constant hazard of random occurrence of small-to-moderate earthquakes. In the final part of the paper, the time-dependent seismic hazard of the New Madrid Seismic Zone at different time intervals is calculated for illustrative purposes.
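The three-part hazard function can be sketched numerically. The functional forms and parameters below are hypothetical illustrations of the shape the abstract describes (a decreasing, an increasing, and a constant part), not the model actually proposed in the paper.

```python
import numpy as np

def hazard(t, h0=0.02, a=0.1, b=0.1, c=2e-5, p=2.0):
    """Composite hazard rate (hypothetical parametric forms):
    - a * exp(-b*t): decreasing hazard after the last large-earthquake cluster,
    - c * t**p:      increasing hazard as the next cluster approaches,
    - h0:            constant background of small-to-moderate events."""
    return a * np.exp(-b * t) + c * t**p + h0

# Probability of at least one event in (0, T] from the cumulative hazard
# H(T) = integral of h(t) dt, via a simple Riemann sum.
T, dt = 50.0, 0.05
ts = np.arange(0.0, T, dt)
H = np.sum(hazard(ts)) * dt
p_event = 1.0 - np.exp(-H)
print(p_event)
```

With these parameters the hazard is high immediately after a cluster, passes through a minimum during the quiescent period, and rises again, i.e. the departure from the constant-hazard (Poisson) assumption that motivates the time-dependent analysis.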
Rosas-Carbajal, M.; Linde, N.; Peacock, J.; Zyserman, F. I.; Kalscheuer, T.; Thiel, S.
2015-12-01
Surface-based monitoring of mass transfer caused by injections and extractions in deep boreholes is crucial to maximize oil, gas and geothermal production. Inductive electromagnetic methods, such as magnetotellurics, are appealing for these applications due to their large penetration depths and sensitivity to changes in fluid conductivity and fracture connectivity. In this work, we propose a 3-D Markov chain Monte Carlo inversion of time-lapse magnetotelluric data to image mass transfer following a saline fluid injection. The inversion estimates the posterior probability density function of the resulting plume, and thereby quantifies model uncertainty. To decrease computation times, we base the parametrization on a reduced Legendre moment decomposition of the plume. A synthetic test shows that our methodology is effective when the electrical resistivity structure prior to the injection is well known. The centre of mass and spread of the plume are well retrieved. We then apply our inversion strategy to an injection experiment in an enhanced geothermal system at Paralana, South Australia, and compare it to a 3-D deterministic time-lapse inversion. The latter retrieves resistivity changes that are more shallow than the actual injection interval, whereas the probabilistic inversion retrieves plumes that are located at the correct depths and oriented in a preferential north-south direction. To explain the time-lapse data, the inversion requires unrealistically large resistivity changes with respect to the base model. We suggest that this is partly explained by unaccounted subsurface heterogeneities in the base model from which time-lapse changes are inferred.
Near-real time 3D probabilistic earthquakes locations at Mt. Etna volcano
Barberi, G.; D'Agostino, M.; Mostaccio, A.; Patane', D.; Tuve', T.
2012-04-01
An automatic procedure for locating earthquakes in quasi-real time must provide a good estimate of earthquake location within a few seconds after the event is first detected, and is strongly needed for seismic warning systems. The reliability of an automatic location algorithm is influenced by several factors, such as errors in picking seismic phases, network geometry, and velocity model uncertainties. On Mt. Etna, the seismic network is managed by INGV and the quasi-real time earthquake locations are performed using an automatic picking algorithm based on short-term-average to long-term-average ratios (STA/LTA), calculated from an approximate squared envelope function of the seismogram, which furnishes a list of P-wave arrival times, and the location algorithm Hypoellipse with a 1D velocity model. The main purpose of this work is to investigate the performance of a different automatic procedure to improve the quasi-real time earthquake locations. Since the automatic data processing may be affected by outliers (wrong picks), the use of traditional earthquake location techniques based on a least-squares misfit function (L2 norm) often yields unstable and unreliable solutions. Moreover, on Mt. Etna, the 1D model is often unable to represent the complex structure of the volcano (in particular the strong lateral heterogeneities), whereas the increasing accuracy of 3D velocity models at Mt. Etna during recent years allows their use today in routine earthquake locations. Therefore, we selected as reference locations all the events that occurred on Mt. Etna in the last year (2011) and were automatically detected and located by means of the Hypoellipse code. Using this dataset (more than 300 events), we applied a nonlinear probabilistic earthquake location algorithm using the Equal Differential Time (EDT) likelihood function (Font et al., 2004; Lomax, 2005), which is much more robust in the presence of outliers in the data. Successively, by using a probabilistic
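The robustness of the EDT approach comes from differencing arrival times between station pairs, which cancels the unknown origin time. A minimal 2-D grid-search sketch follows (constant velocity, hypothetical station layout and source; the real procedure is 3-D with a full velocity model and a likelihood rather than a plain misfit).

```python
import numpy as np
from itertools import combinations

# Hypothetical station layout (km) and constant P-wave velocity (km/s).
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0],
                     [10.0, 10.0], [5.0, 9.0]])
v = 3.0

def travel_times(src):
    return np.linalg.norm(stations - src, axis=1) / v

# Synthetic observed arrivals; the arbitrary origin time (12.34 s)
# cancels in every EDT residual below.
true_src = np.array([3.0, 4.0])
t_obs = 12.34 + travel_times(true_src)

def edt_misfit(src):
    """Sum over station pairs of squared Equal Differential Time residuals:
    (t_obs_i - t_obs_j) - (t_calc_i - t_calc_j)."""
    t_calc = travel_times(src)
    return sum(((t_obs[i] - t_obs[j]) - (t_calc[i] - t_calc[j]))**2
               for i, j in combinations(range(len(stations)), 2))

# Grid search over candidate epicentres.
grid = [np.array([x, y]) for x in np.arange(0.0, 10.5, 0.5)
                         for y in np.arange(0.0, 10.5, 0.5)]
best = min(grid, key=edt_misfit)
print(best)   # [3. 4.]
```

Because each residual involves only a pair of picks, a single bad pick corrupts only the pairs it enters, rather than dragging the whole least-squares solution as an L2 misfit on absolute arrival times would.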
Probabilistic analysis of degradation incubation time of steam generator tubing materials
International Nuclear Information System (INIS)
Pandey, M.D.; Jyrkama, M.I.; Lu, Y.; Chi, L.
2012-01-01
The prediction of the degradation-free lifetime of steam generator (SG) tubing material is an important step in the life cycle management and replacement decisions for steam generators during the refurbishment of a nuclear station. Therefore, an extensive experimental research program has been undertaken by the Canadian nuclear industry to investigate the degradation of widely used SG tubing alloys, namely Alloy 600 TT, Alloy 690 TT, and Alloy 800. The corrosion-related degradation of passive metals, such as pitting, crevice corrosion and stress corrosion cracking (SCC), is assumed to start with the breakdown of the passive film at the tube-environment interface, which is characterized by the incubation time for passivity breakdown and then by the degradation growth rate; both are influenced by the chemical environment and coolant temperature. Since the incubation time and growth rate exhibit significant variability in the laboratory tests used to simulate these degradation processes, the use of probabilistic modeling is warranted. A pit is initiated with the breakdown of the passive film on the SG tubing surface. Upon exposure to aggressive environments, pitting corrosion may not initiate immediately, or may initiate and then re-passivate. The time required to initiate pitting corrosion is called the pitting incubation time, which can be used to characterize the corrosion resistance of a material under specific test conditions. Pitting may be the precursor to other corrosion degradation mechanisms, such as environmentally assisted cracking. This paper provides an overview of the results of the first stage of the experimental program, in which samples of Alloy 600 TT, Alloy 690 TT, and Alloy 800 were tested under various temperatures and potentials and simulated crevice environments. The testing environment was chosen to represent layup, startup, and full operating conditions of the steam generators. Degradation incubation times for over 80 samples were
Wang, Jin; Sun, Xiangping; Nahavandi, Saeid; Kouzani, Abbas; Wu, Yuchuan; She, Mary
2014-11-01
Biomedical time series clustering that automatically groups a collection of time series according to their internal similarity is of importance for medical record management and inspection such as bio-signals archiving and retrieval. In this paper, a novel framework that automatically groups a set of unlabelled multichannel biomedical time series according to their internal structural similarity is proposed. Specifically, we treat a multichannel biomedical time series as a document and extract local segments from the time series as words. We extend a topic model, i.e., the Hierarchical probabilistic Latent Semantic Analysis (H-pLSA), which was originally developed for visual motion analysis to cluster a set of unlabelled multichannel time series. The H-pLSA models each channel of the multichannel time series using a local pLSA in the first layer. The topics learned in the local pLSA are then fed to a global pLSA in the second layer to discover the categories of multichannel time series. Experiments on a dataset extracted from multichannel Electrocardiography (ECG) signals demonstrate that the proposed method performs better than previous state-of-the-art approaches and is relatively robust to the variations of parameters including length of local segments and dictionary size. Although the experimental evaluation used the multichannel ECG signals in a biometric scenario, the proposed algorithm is a universal framework for multichannel biomedical time series clustering according to their structural similarity, which has many applications in biomedical time series management. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Remembering the time: a continuous clock.
Lewis, Penelope A; Miall, R Chris
2006-09-01
The neural mechanisms for time measurement are currently a subject of much debate. This article argues that our brains can measure time using the same dorsolateral prefrontal cells that are known to be involved in working memory. Evidence for this is: (1) the dorsolateral prefrontal cortex is integral to both cognitive timing and working memory; (2) both behavioural processes are modulated by dopamine and disrupted by manipulation of dopaminergic projections to the dorsolateral prefrontal cortex; (3) the neurons in question ramp their activity in a temporally predictable way during both types of processing; and (4) this ramping activity is modulated by dopamine. The dual involvement of these prefrontal neurons in working memory and cognitive timing supports a view of the prefrontal cortex as a multipurpose processor recruited by a wide variety of tasks.
Application of a time probabilistic approach to seismic landslide hazard estimates in Iran
Rajabi, A. M.; Del Gaudio, V.; Capolongo, D.; Khamehchiyan, M.; Mahdavifar, M. R.
2009-04-01
Iran is located in a tectonically active belt and is prone to earthquakes and related phenomena. In recent years, several earthquakes have caused many fatalities and much damage to facilities, e.g. the Manjil (1990), Avaj (2002), Bam (2003) and Firuzabad-e-Kojur (2004) earthquakes. These earthquakes generated many landslides. For instance, catastrophic landslides triggered by the Manjil earthquake (Ms = 7.7) in 1990 buried the village of Fatalak, killed more than 130 people and cut many important roads and other lifelines, resulting in major economic disruption. In general, earthquakes in Iran have been concentrated in two major zones with different seismicity characteristics: one is the region of Alborz and Central Iran and the other is the Zagros Orogenic Belt. Understanding where seismically induced landslides are most likely to occur is crucial for reducing property damage and loss of life in future earthquakes. For this purpose, a time probabilistic approach for earthquake-induced landslide hazard at regional scale, proposed by Del Gaudio et al. (2003), has been applied to the whole Iranian territory to provide the basis of hazard estimates. This method consists of evaluating the recurrence of seismically induced slope failure conditions inferred from Newmark's model. First, adopting Arias intensity to quantify seismic shaking and using different Arias attenuation relations for the Alborz - Central Iran and Zagros regions, well-established methods of seismic hazard assessment, based on the Cornell (1968) method, were employed to obtain the occurrence probabilities for different levels of seismic shaking in a time interval of interest (50 years). Then, following Jibson (1998), empirical formulae specifically developed for Alborz - Central Iran and Zagros were used to represent, according to Newmark's model, the relation linking Newmark's displacement Dn to Arias intensity Ia and to slope critical acceleration ac. These formulae were employed to evaluate
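The Newmark-displacement step can be sketched numerically. The default coefficients below are the widely cited global regression of Jibson (1998) and stand in only for the region-specific formulae developed in the paper, which are not reproduced here; treat them as illustrative placeholders.

```python
import math

def newmark_displacement_cm(ia, ac, a=1.521, b=-1.993, c=-1.546):
    """Empirical Newmark displacement Dn (cm) from Arias intensity Ia (m/s)
    and slope critical acceleration ac (g), in the log-linear form
        log10(Dn) = a*log10(Ia) + b*log10(ac) + c.
    Default coefficients follow the global fit of Jibson (1998); the
    Alborz-Central Iran and Zagros fits would replace them."""
    return 10 ** (a * math.log10(ia) + b * math.log10(ac) + c)
```

For a given hazard level (Arias intensity with a 50-year exceedance probability), comparing Dn against a damage threshold gives the slope-failure recurrence estimate.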
A Filtering of Incomplete GNSS Position Time Series with Probabilistic Principal Component Analysis
Gruszczynski, Maciej; Klos, Anna; Bogusz, Janusz
2018-04-01
For the first time, we introduced probabilistic principal component analysis (pPCA) for the spatio-temporal filtering of Global Navigation Satellite System (GNSS) position time series, to estimate and remove the Common Mode Error (CME) without interpolation of missing values. We used data from International GNSS Service (IGS) stations that contributed to the latest International Terrestrial Reference Frame (ITRF2014). The efficiency of the proposed algorithm was tested on simulated incomplete time series; then the CME was estimated for a set of 25 stations located in Central Europe. The newly applied pPCA was compared with previously used algorithms, which showed that this method is capable of resolving the problem of proper spatio-temporal filtering of GNSS time series characterized by different observation time spans. We showed that filtering can be carried out with the pPCA method even when two time series in the dataset have fewer than 100 common epochs of observations. The 1st Principal Component (PC) explained more than 36% of the total variance of the time series residuals (series with the deterministic model removed), which, compared to the variances of the other PCs (less than 8% each), means that common signals are significant in GNSS residuals. A clear improvement in the spectral indices of the power-law noise was noticed for the Up component, reflected by an average shift towards white noise from -0.98 to -0.67 (30%). We observed a significant average reduction in the uncertainty of the stations' velocities estimated from filtered residuals: by 35, 28 and 69% for the North, East, and Up components, respectively. The CME series were also analysed in the context of the influence of environmental mass loading on the filtering results. Subtracting the environmental loading models from the GNSS residuals reduces the estimated CME variance by 20 and 65% for the horizontal and vertical components, respectively.
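The key property of pPCA exploited here is that, under its generative model, the E-step can be evaluated over only the observed entries of each epoch, so missing values need no interpolation. A minimal EM sketch (an assumption about the implementation, not the authors' code; names are illustrative):

```python
import numpy as np

def ppca_em(X, q=1, n_iter=50, sigma2=1.0, seed=0):
    """Minimal EM for probabilistic PCA with missing values (NaNs).
    X: (n_epochs, n_stations) residual matrix.
    Returns loadings W (d, q), mean mu (d,), latent scores Zm (n, q);
    the common-mode reconstruction is Zm @ W.T."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    mu = np.nanmean(X, axis=0)
    W = rng.standard_normal((d, q))
    obs = ~np.isnan(X)
    Zm = np.zeros((n, q))
    for _ in range(n_iter):
        Zc = np.zeros((n, q, q))
        for t in range(n):                       # E-step over observed entries only
            o = obs[t]
            Wo = W[o]
            Minv = np.linalg.inv(Wo.T @ Wo + sigma2 * np.eye(q))
            Zm[t] = Minv @ Wo.T @ (X[t, o] - mu[o])
            Zc[t] = sigma2 * Minv + np.outer(Zm[t], Zm[t])
        for j in range(d):                       # M-step, one station at a time
            tj = obs[:, j]
            W[j] = np.linalg.solve(Zc[tj].sum(axis=0),
                                   (X[tj, j] - mu[j]) @ Zm[tj])
        num, count = 0.0, 0                      # noise variance over observed cells
        for t in range(n):
            o = obs[t]
            r = X[t, o] - mu[o] - W[o] @ Zm[t]
            cov = Zc[t] - np.outer(Zm[t], Zm[t])
            num += r @ r + np.trace(W[o] @ cov @ W[o].T)
            count += int(o.sum())
        sigma2 = num / count
    return W, mu, Zm
```

With q = 1, the reconstruction Zm @ W.T plays the role of the CME estimate that is subtracted from each station's residual series.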
Rosas-Carbajal, Marina; Linde, Nicolas; Peacock, Jared R.; Zyserman, F. I.; Kalscheuer, Thomas; Thiel, Stephan
2015-01-01
Surface-based monitoring of mass transfer caused by injections and extractions in deep boreholes is crucial to maximize oil, gas and geothermal production. Inductive electromagnetic methods, such as magnetotellurics, are appealing for these applications due to their large penetration depths and sensitivity to changes in fluid conductivity and fracture connectivity. In this work, we propose a 3-D Markov chain Monte Carlo inversion of time-lapse magnetotelluric data to image mass transfer following a saline fluid injection. The inversion estimates the posterior probability density function of the resulting plume, and thereby quantifies model uncertainty. To decrease computation times, we base the parametrization on a reduced Legendre moment decomposition of the plume. A synthetic test shows that our methodology is effective when the electrical resistivity structure prior to the injection is well known. The centre of mass and spread of the plume are well retrieved. We then apply our inversion strategy to an injection experiment in an enhanced geothermal system at Paralana, South Australia, and compare it to a 3-D deterministic time-lapse inversion. The latter retrieves resistivity changes that are more shallow than the actual injection interval, whereas the probabilistic inversion retrieves plumes that are located at the correct depths and oriented in a preferential north-south direction. To explain the time-lapse data, the inversion requires unrealistically large resistivity changes with respect to the base model. We suggest that this is partly explained by unaccounted subsurface heterogeneities in the base model from which time-lapse changes are inferred.
Probabilistic systems coalgebraically: A survey
Sokolova, Ana
2011-01-01
We survey the work on both discrete and continuous-space probabilistic systems as coalgebras, starting with how probabilistic systems are modeled as coalgebras and followed by a discussion of their bisimilarity and behavioral equivalence, mentioning results that follow from the coalgebraic treatment of probabilistic systems. It is interesting to note that, for different reasons, for both discrete and continuous probabilistic systems it may be more convenient to work with behavioral equivalence than with bisimilarity. PMID:21998490
Including foreshocks and aftershocks in time-independent probabilistic seismic hazard analyses
Boyd, Oliver S.
2012-01-01
Time‐independent probabilistic seismic‐hazard analysis treats each source as being temporally and spatially independent; hence foreshocks and aftershocks, which are both spatially and temporally dependent on the mainshock, are removed from earthquake catalogs. Yet, intuitively, these earthquakes should be considered part of the seismic hazard, capable of producing damaging ground motions. In this study, I consider the mainshock and its dependents as a time‐independent cluster, each cluster being temporally and spatially independent from any other. The cluster has a recurrence time of the mainshock; and, by considering the earthquakes in the cluster as a union of events, dependent events have an opportunity to contribute to seismic ground motions and hazard. Based on the methods of the U.S. Geological Survey for a high‐hazard site, the inclusion of dependent events causes ground motions that are exceeded at probability levels of engineering interest to increase by about 10% but could be as high as 20% if variations in aftershock productivity can be accounted for reliably.
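The "union of events" step described above reduces to a short calculation. The sketch below assumes conditional independence of exceedances within a cluster (an assumption of this illustration; the study's full treatment also weights by mainshock recurrence), and the probabilities are placeholders:

```python
def cluster_exceedance_prob(event_probs):
    """Probability that at least one earthquake in a mainshock-aftershock
    cluster exceeds a given ground-motion level, treating cluster members
    as a union of conditionally independent events:
        P = 1 - prod_i (1 - p_i)."""
    p_none = 1.0
    for p_i in event_probs:
        p_none *= 1.0 - p_i
    return 1.0 - p_none
```

Because P is always at least max(p_i), including dependent events can only raise the hazard relative to a declustered catalog, consistent with the ~10-20% increase reported.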
Agus, M.; Hitchcott, P. K.; Penna, M. P.; Peró-Cebollero, M.; Guàrdia-Olmos, J.
2016-11-01
Many studies have investigated the features of probabilistic reasoning developed in relation to different formats of problem presentation, showing that it is affected by various individual and contextual factors. Incomplete understanding of the identity and role of these factors may explain the inconsistent evidence concerning the effect of problem presentation format. Thus, superior performance has sometimes been observed for graphically, rather than verbally, presented problems. The present study was undertaken to address this issue. Psychology undergraduates without any statistical expertise (N = 173 in Italy; N = 118 in Spain; N = 55 in England) were administered statistical problems in two formats (verbal-numerical and graphical-pictorial) under a condition of time pressure. Students also completed additional measures indexing several potentially relevant individual dimensions (statistical ability, statistical anxiety, attitudes towards statistics and confidence). Interestingly, a facilitatory effect of graphical presentation was observed in the Italian and Spanish samples but not in the English one. Significantly, the individual dimensions predicting statistical performance also differed between the samples, highlighting a different role of confidence. Hence, these findings confirm previous observations concerning problem presentation format while simultaneously highlighting the importance of individual dimensions.
Directory of Open Access Journals (Sweden)
J. Schmidt
2008-04-01
A project established at the National Institute of Water and Atmospheric Research (NIWA) in New Zealand is aimed at developing a prototype of a real-time landslide forecasting system. The objective is to predict temporal changes in landslide probability for shallow, rainfall-triggered landslides, based on quantitative weather forecasts from numerical weather prediction models. Global weather forecasts from the United Kingdom Met Office (MO) Numerical Weather Prediction (NWP) model are coupled with a regional data-assimilating NWP model (New Zealand Limited Area Model, NZLAM) to forecast atmospheric variables such as precipitation and temperature up to 48 h ahead for all of New Zealand. The weather forecasts are fed into a hydrologic model to predict the development of soil moisture and groundwater levels. The forecasted catchment-scale patterns in soil moisture and soil saturation are then downscaled using topographic indices to predict soil moisture status at the local scale, and an infinite slope stability model is applied to determine the triggering soil water threshold at a local scale. The model uses the uncertainty of soil parameters to produce probabilistic forecasts of spatio-temporal landslide occurrence 48 h ahead. The system was evaluated for a damaging landslide event in New Zealand. Comparison with landslide densities estimated from satellite imagery resulted in hit rates of 70–90%.
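The infinite slope stability step can be illustrated with the standard factor-of-safety formula and its inversion for the triggering saturation threshold. Parameter names and values below are illustrative, not NIWA's calibration:

```python
import math

def factor_of_safety(c, phi_deg, gamma, gamma_w, z, beta_deg, m):
    """Infinite-slope factor of safety.
    c: effective cohesion (kPa), phi_deg: friction angle (deg),
    gamma: soil unit weight (kN/m^3), gamma_w: water unit weight (kN/m^3),
    z: soil depth (m), beta_deg: slope angle (deg),
    m: saturation ratio (water table height / soil depth)."""
    b = math.radians(beta_deg)
    t = math.tan(math.radians(phi_deg))
    return (c + (gamma - m * gamma_w) * z * math.cos(b) ** 2 * t) / \
           (gamma * z * math.sin(b) * math.cos(b))

def critical_saturation(c, phi_deg, gamma, gamma_w, z, beta_deg):
    """Saturation ratio m at which FS = 1, i.e. the triggering
    soil water threshold for the given slope."""
    b = math.radians(beta_deg)
    t = math.tan(math.radians(phi_deg))
    num = (c + gamma * z * math.cos(b) ** 2 * t
           - gamma * z * math.sin(b) * math.cos(b))
    return num / (gamma_w * z * math.cos(b) ** 2 * t)
```

Propagating distributions of c, phi and z through critical_saturation, and comparing against forecast soil saturation, yields the probabilistic landslide forecast described above.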
Mapping of uncertainty relations between continuous and discrete time.
Chiuchiù, Davide; Pigolotti, Simone
2018-03-01
Lower bounds on fluctuations of thermodynamic currents depend on the nature of time, discrete or continuous. To understand the physical reason, we compare current fluctuations in discrete-time Markov chains and continuous-time master equations. We prove that current fluctuations in the master equations are always more likely, due to random timings of transitions. This comparison leads to a mapping of the moments of a current between discrete and continuous time. We exploit this mapping to obtain uncertainty bounds. Our results reduce the quests for uncertainty bounds in discrete and continuous time to a single problem.
Probabilistic logics and probabilistic networks
Haenni, Rolf; Wheeler, Gregory; Williamson, Jon; Andrews, Jill
2014-01-01
Probabilistic Logic and Probabilistic Networks presents a groundbreaking framework within which various approaches to probabilistic logic naturally fit. Additionally, the text shows how to develop computationally feasible methods to mesh with this framework.
Probabilistic mapping of flood-induced backscatter changes in SAR time series
Schlaffer, Stefan; Chini, Marco; Giustarini, Laura; Matgen, Patrick
2017-04-01
The information content of flood extent maps can be increased considerably by including information on the uncertainty of the flood area delineation. This additional information can be of benefit in flood forecasting and monitoring. Furthermore, flood probability maps can be converted to binary maps showing flooded and non-flooded areas by applying a threshold probability value pF = 0.5. In this study, a probabilistic change detection approach for flood mapping based on synthetic aperture radar (SAR) time series is proposed. For this purpose, conditional probability density functions (PDFs) for land and open water surfaces were estimated from ENVISAT ASAR Wide Swath (WS) time series containing >600 images using a reference mask of permanent water bodies. A pixel-wise harmonic model was used to account for seasonality in backscatter from land areas caused by soil moisture and vegetation dynamics. The approach was evaluated for a large-scale flood event along the River Severn, United Kingdom. The retrieved flood probability maps were compared to a reference flood mask derived from high-resolution aerial imagery by means of reliability diagrams. The obtained performance measures indicate both high reliability and confidence although there was a slight under-estimation of the flood extent, which may in part be attributed to topographically induced radar shadows along the edges of the floodplain. Furthermore, the results highlight the importance of local incidence angle for the separability between flooded and non-flooded areas as specular reflection properties of open water surfaces increase with a more oblique viewing geometry.
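The pixel-wise probabilistic classification can be sketched with Bayes' rule. Gaussian class-conditional PDFs are an assumption of this sketch; the paper estimates the conditional PDFs empirically from the ASAR time series, with a harmonic model for seasonal land backscatter:

```python
import math

def flood_probability(backscatter_db, water_mu, water_sd,
                      land_mu, land_sd, prior=0.5):
    """Posterior flood probability for one pixel from class-conditional
    Gaussian PDFs of SAR backscatter (dB). All distribution parameters
    and the prior are illustrative assumptions."""
    def gauss(x, mu, sd):
        return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))
    p_water = gauss(backscatter_db, water_mu, water_sd) * prior
    p_land = gauss(backscatter_db, land_mu, land_sd) * (1 - prior)
    return p_water / (p_water + p_land)
```

Thresholding the posterior at pF = 0.5 then yields the binary flooded/non-flooded map mentioned in the abstract.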
Continuous-time quantum walks on star graphs
International Nuclear Information System (INIS)
Salimi, S.
2009-01-01
In this paper, we investigate continuous-time quantum walks on star graphs. It is shown, via a quantum central limit theorem, that the continuous-time quantum walk on the N-fold star power graph, which is invariant under the quantum component of the adjacency matrix, converges to the continuous-time quantum walk on the K2 graph (the complete graph with two vertices), and that the probability of observing the walk tends to the uniform distribution.
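A continuous-time quantum walk is generated directly by the adjacency matrix, so the walk on a star graph is easy to compute numerically. A sketch (hub at index 0; the indexing convention is this illustration's, not the paper's) of the distribution |⟨j| e^{-iAt} |0⟩|²:

```python
import numpy as np

def ctqw_distribution(n_leaves, t, start=0):
    """Probability distribution of a continuous-time quantum walk
    |psi(t)> = exp(-i A t)|start> on the star graph S_n, vertex 0 = hub."""
    n = n_leaves + 1
    A = np.zeros((n, n))
    A[0, 1:] = A[1:, 0] = 1.0               # hub connected to every leaf
    w, V = np.linalg.eigh(A)                # A = V diag(w) V^T, V orthogonal
    U = V @ np.diag(np.exp(-1j * w * t)) @ V.T
    psi = U[:, start]
    return np.abs(psi) ** 2
```

By the symmetry of the star, all leaves always carry equal probability, so the walk effectively lives on a two-level (hub/leaf) system, in line with the K2 limit described above.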
Model checking conditional CSL for continuous-time Markov chains
DEFF Research Database (Denmark)
Gao, Yang; Xu, Ming; Zhan, Naijun
2013-01-01
In this paper, we consider the model-checking problem of continuous-time Markov chains (CTMCs) with respect to conditional logic. To the end, we extend Continuous Stochastic Logic introduced in Aziz et al. (2000) [1] to Conditional Continuous Stochastic Logic (CCSL) by introducing a conditional...
International Nuclear Information System (INIS)
Walker, J.R.
1990-02-01
The presence of high levels of moisture in the annulus gas system of a CANDU reactor indicates that a leaking crack may be present in a pressure tube. This will initiate the shutdown of the reactor to prevent the possibility of fuel channel damage. It is also desirable, however, to keep the reactor partially pressurized at hot shutdown for as long as it is necessary to unambiguously identify the leaking pressure tube. A premature full depressurization may cause an extended shutdown while the leaking tube is being located. However, fast fracture could occur during an excessively long hot shutdown period. A probabilistic methodology, together with an associated computer code (called MARATHON), has been developed to calculate the time from first leakage to unstable fracture in a probabilistic format. The methodology explicitly uses distributions of material properties and allows the risk associated with leak-before-break to be estimated. A model of the leak detection system is integrated into the methodology to calculate the time from leak detection to unstable fracture. The sensitivity of the risk to changing reactor conditions allows the optimization of reactor management after leak detection. In this report we describe the probabilistic model and give details of the quality assurance and verification of the MARATHON code. Examples of the use of MARATHON are given using preliminary material property distributions. These preliminary material property distributions indicate that the probability of unstable fracture is very low, and that ample time is available to locate the leaking tube
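The structure of such a leak-before-break calculation can be sketched as a Monte Carlo over material-property distributions. This is emphatically NOT the MARATHON code, and the distributions below are invented placeholders, not CANDU-qualified data:

```python
import random

def fracture_risk(n_samples, hold_time_h, seed=1):
    """Monte Carlo sketch of a leak-before-break risk estimate: sample a
    leaking crack length, a critical crack length and a crack growth rate
    from assumed (purely illustrative) lognormal distributions, and
    estimate the probability that the crack becomes unstable within a
    given hot-shutdown hold time."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        a_leak = rng.lognormvariate(3.0, 0.10)    # leaking crack length, mm
        a_crit = rng.lognormvariate(4.0, 0.15)    # critical crack length, mm
        rate = rng.lognormvariate(-2.0, 0.50)     # crack growth rate, mm/h
        time_to_unstable = max(a_crit - a_leak, 0.0) / rate
        if time_to_unstable <= hold_time_h:
            failures += 1
    return failures / n_samples
```

Evaluating the risk as a function of hold time is what supports the optimization of reactor management after leak detection: the longer the hot shutdown, the higher the (initially very small) probability of unstable fracture.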
Continuous Time Structural Equation Modeling with R Package ctsem
Directory of Open Access Journals (Sweden)
Charles C. Driver
2017-04-01
We introduce ctsem, an R package for continuous time structural equation modeling of panel (N > 1) and time series (N = 1) data, using full information maximum likelihood. Most dynamic models (e.g., cross-lagged panel models) in the social and behavioural sciences are discrete time models. An assumption of discrete time models is that time intervals between measurements are equal, and that all subjects were assessed at the same intervals. Violations of this assumption are often ignored due to the difficulty of accounting for varying time intervals; therefore parameter estimates can be biased and the time course of effects becomes ambiguous. By using stochastic differential equations to estimate an underlying continuous process, continuous time models allow for any pattern of measurement occasions. By interfacing to OpenMx, ctsem combines the flexible specification of structural equation models with the enhanced data gathering opportunities and improved estimation of continuous time models. ctsem can estimate relationships over time for multiple latent processes, measured by multiple noisy indicators with varying time intervals between observations. Within and between effects are estimated simultaneously by modeling both observed covariates and unobserved heterogeneity. Exogenous shocks with different shapes, group differences, higher order diffusion effects and oscillating processes can all be simply modeled. We first introduce and define continuous time models, then show how to specify and estimate a range of continuous time models using ctsem.
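ctsem itself is an R package built on OpenMx, but the core relationship it exploits is language-independent: the discrete-time autoregressive matrix implied by a continuous-time drift matrix A for a measurement interval dt is the matrix exponential expm(A*dt), so unequal intervals are handled exactly. A sketch with an illustrative two-process drift matrix:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative continuous-time drift matrix for two coupled latent processes
# (values are placeholders, not estimates from any dataset):
A = np.array([[-0.4,  0.1],
              [ 0.2, -0.3]])

def discrete_autoregression(A, dt):
    """Exact discrete-time autoregressive matrix for interval dt:
    A_dt = expm(A * dt). Varying intervals just vary dt."""
    return expm(A * dt)

A_half = discrete_autoregression(A, 0.5)
A_one = discrete_autoregression(A, 1.0)
# Two half-interval steps compose exactly to one full-interval step,
# which is what makes unequal measurement spacing unproblematic.
```

A discrete-time cross-lagged model, by contrast, estimates a single A_dt tied to one fixed interval, which is why unequal spacing biases its estimates.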
A continuous-time control model on production planning network ...
African Journals Online (AJOL)
A continuous-time control model on production planning network. DEA Omorogbe, MIU Okunsebor. Abstract. In this paper, we give a slightly detailed review of Graves and Hollywood model on constant inventory tactical planning model for a job shop. The limitations of this model are pointed out and a continuous time ...
Continuous Time Modeling of the Cross-Lagged Panel Design
Oud, J.H.L.
2002-01-01
Since Newton (1642-1727), continuous time modeling by means of differential equations has been the standard approach to dynamic phenomena in natural science. It is argued that most processes in behavioral science also unfold in continuous time and should be analyzed accordingly. After dealing with the
Oh-Descher, Hanna; Beck, Jeffrey M; Ferrari, Silvia; Sommer, Marc A; Egner, Tobias
2017-11-15
Real-life decision-making often involves combining multiple probabilistic sources of information under finite time and cognitive resources. To mitigate these pressures, people "satisfice", foregoing a full evaluation of all available evidence to focus on a subset of cues that allow for fast and "good-enough" decisions. Although this form of decision-making likely mediates many of our everyday choices, very little is known about the way in which the neural encoding of cue information changes when we satisfice under time pressure. Here, we combined human functional magnetic resonance imaging (fMRI) with a probabilistic classification task to characterize neural substrates of multi-cue decision-making under low (1500 ms) and high (500 ms) time pressure. Using variational Bayesian inference, we analyzed participants' choices to track and quantify cue usage under each experimental condition, which was then applied to model the fMRI data. Under low time pressure, participants performed near-optimally, appropriately integrating all available cues to guide choices. Both cortical (prefrontal and parietal cortex) and subcortical (hippocampal and striatal) regions encoded individual cue weights, and activity linearly tracked trial-by-trial variations in the amount of evidence and decision uncertainty. Under increased time pressure, participants adaptively shifted to using a satisficing strategy by discounting the least informative cue in their decision process. This strategic change in decision-making was associated with an increased involvement of the dopaminergic midbrain, striatum, thalamus, and cerebellum in representing and integrating cue values. We conclude that satisficing the probabilistic inference process under time pressure leads to a cortical-to-subcortical shift in the neural drivers of decisions. Copyright © 2017 Elsevier Inc. All rights reserved.
Stability of continuous-time quantum filters with measurement imperfections
Amini, H.; Pellegrini, C.; Rouchon, P.
2014-07-01
The fidelity between the state of a continuously observed quantum system and the state of its associated quantum filter is shown to be always a submartingale. The observed system is assumed to be governed by a continuous-time Stochastic Master Equation (SME), driven simultaneously by Wiener and Poisson processes, which takes into account incompleteness and errors in measurements. This stability result is the continuous-time counterpart of a similar stability result already established for discrete-time quantum systems, where the measurement imperfections are modelled by a left stochastic matrix.
Integral-Value Models for Outcomes over Continuous Time
DEFF Research Database (Denmark)
Harvey, Charles M.; Østerdal, Lars Peter
Models of preferences between outcomes over continuous time are important for individual, corporate, and social decision making, e.g., medical treatment, infrastructure development, and environmental regulation. This paper presents a foundation for such models. It shows that conditions on preferences between real- or vector-valued outcomes over continuous time are satisfied if and only if the preferences are represented by a value function having an integral form.
Continuous-time Markov decision processes theory and applications
Guo, Xianping
2009-01-01
This volume provides the first book entirely devoted to recent developments on the theory and applications of continuous-time Markov decision processes (MDPs). The MDPs presented here include most of the cases that arise in applications.
A continuous time formulation of the Regge calculus
International Nuclear Information System (INIS)
Brewin, Leo
1988-01-01
A complete continuous time formulation of the Regge calculus is presented by developing the associated continuous time Regge action. It is shown that the time constraint is, by way of the Bianchi identities, conserved by the evolution equations. This analysis leads to an explicit first integral for each of the evolution equations. The dynamical equations of the theory are therefore reduced to a set of first-order differential equations. In this formalism the time constraints reduce to a simple sum of the integration constants. This result is unique to the Regge calculus: there does not appear to be a complete set of first integrals available for the vacuum Einstein equations. (author)
Application of continuous-time random walk to statistical arbitrage
Directory of Open Access Journals (Sweden)
Sergey Osmekhin
2015-01-01
An analytical statistical arbitrage strategy is proposed, in which the distribution of the spread is modelled as a continuous-time random walk. Optimal boundaries, computed as a function of the mean and variance of the first-passage time of the spread, maximise an objective function. The predictability of the trading strategy is analysed and contrasted for two forms of continuous-time random walk processes. We found that the waiting-time distribution has a significant impact on the prediction of the expected profit for intraday trading.
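The first-passage quantity behind the optimal boundaries can be estimated by simulating a continuous-time random walk: exponential waiting times between jumps, absorption when the spread crosses a boundary. The paper's treatment is analytical; this Monte Carlo sketch with placeholder parameters only illustrates the object being computed:

```python
import random

def mean_first_passage(n_paths, boundary, lam=1.0, jump_sd=0.1, seed=7):
    """Monte Carlo mean first-passage time of a continuous-time random
    walk: exponential waiting times (rate lam) between Gaussian jumps,
    starting at spread 0, absorbed when |spread| >= boundary. All
    parameter values are illustrative."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        t, x = 0.0, 0.0
        while abs(x) < boundary:
            t += rng.expovariate(lam)       # waiting time until next trade/jump
            x += rng.gauss(0.0, jump_sd)    # jump in the spread
        total += t
    return total / n_paths
```

Widening the boundary lengthens the expected holding time roughly quadratically (more jumps are needed to diffuse further), which is the trade-off the objective function balances against profit per round trip.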
Continuous-time quantum random walks require discrete space
International Nuclear Information System (INIS)
Manouchehri, K; Wang, J B
2007-01-01
Quantum random walks are shown to have non-intuitive dynamics which makes them an attractive area of study for devising quantum algorithms for long-standing open problems as well as those arising in the field of quantum computing. In the case of continuous-time quantum random walks, such peculiar dynamics can arise from simple evolution operators closely resembling the quantum free-wave propagator. We investigate the divergence of quantum walk dynamics from the free-wave evolution and show that, in order for continuous-time quantum walks to display their characteristic propagation, the state space must be discrete. This behavior rules out many continuous quantum systems as possible candidates for implementing continuous-time quantum random walks
Probabilistic Decision Graphs - Combining Verification and AI Techniques for Probabilistic Inference
DEFF Research Database (Denmark)
Jaeger, Manfred
2004-01-01
We adopt probabilistic decision graphs developed in the field of automated verification as a tool for probabilistic model representation and inference. We show that probabilistic inference has linear time complexity in the size of the probabilistic decision graph, that the smallest probabilistic ...
On Transaction-Cost Models in Continuous-Time Markets
Directory of Open Access Journals (Sweden)
Thomas Poufinas
2015-04-01
Transaction-cost models in continuous-time markets are considered. Given that investors decide to buy or sell at certain time instants, we study the existence of trading strategies that reach a certain final wealth level in continuous-time markets, under the assumption that transaction costs, built in certain recommended ways, have to be paid. Markets prove to behave in manners that resemble those of complete ones for a wide variety of transaction-cost types. The results are important for, but not exclusive to, the pricing of options with transaction costs.
Continuous time modeling of panel data by means of SEM
Oud, J.H.L.; Delsing, M.J.M.H.; Montfort, C.A.G.M.; Oud, J.H.L.; Satorra, A.
2010-01-01
After a brief history of continuous time modeling and its implementation in panel analysis by means of structural equation modeling (SEM), the problems of discrete time modeling are discussed in detail. This is done by means of the popular cross-lagged panel design. Next, the exact discrete model
Identification of continuous-time systems from samples of input ...
Indian Academy of Sciences (India)
This paper presents an introductory survey of the methods that have been developed for identification of continuous-time systems from samples of input–output data. The two basic approaches may be described as (i) the indirect method, where first a discrete-time model is estimated from the sampled data and then ...
Lloyd, G T; Bapst, D W; Friedman, M; Davis, K E
2016-11-01
Branch lengths-measured in character changes-are an essential requirement of clock-based divergence estimation, regardless of whether the fossil calibrations used represent nodes or tips. However, a separate set of divergence time approaches are typically used to date palaeontological trees, which may lack such branch lengths. Among these methods, sophisticated probabilistic approaches have recently emerged, in contrast with simpler algorithms relying on minimum node ages. Here, using a novel phylogenetic hypothesis for Mesozoic dinosaurs, we apply two such approaches to estimate divergence times for: (i) Dinosauria, (ii) Avialae (the earliest birds) and (iii) Neornithes (crown birds). We find: (i) the plausibility of a Permian origin for dinosaurs to be dependent on whether Nyasasaurus is the oldest dinosaur, (ii) a Middle to Late Jurassic origin of avian flight regardless of whether Archaeopteryx or Aurornis is considered the first bird and (iii) a Late Cretaceous origin for Neornithes that is broadly congruent with other node- and tip-dating estimates. Demonstrating the feasibility of probabilistic time-scaling further opens up divergence estimation to the rich histories of extinct biodiversity in the fossil record, even in the absence of detailed character data. © 2016 The Authors.
A continuous-time/discrete-time mixed audio-band sigma delta ADC
International Nuclear Information System (INIS)
Liu Yan; Hua Siliang; Wang Donghui; Hou Chaohuan
2011-01-01
This paper introduces a mixed continuous-time/discrete-time, single-loop, fourth-order, 4-bit audio-band sigma delta ADC that combines the benefits of continuous-time and discrete-time circuits, while mitigating the challenges associated with continuous-time design. Measurement results show that the peak SNR of this ADC reaches 100 dB and the total power consumption is less than 30 mW. (semiconductor integrated circuits)
Safety Verification for Probabilistic Hybrid Systems
DEFF Research Database (Denmark)
Zhang, Lijun; She, Zhikun; Ratschan, Stefan
2010-01-01
The interplay of random phenomena and continuous real-time control deserves increased attention for instance in wireless sensing and control applications. Safety verification for such systems thus needs to consider probabilistic variations of systems with hybrid dynamics. In safety verification o...... on a number of case studies, tackled using a prototypical implementation....
Pseudo-Hermitian continuous-time quantum walks
Energy Technology Data Exchange (ETDEWEB)
Salimi, S; Sorouri, A, E-mail: shsalimi@uok.ac.i, E-mail: a.sorouri@uok.ac.i [Department of Physics, University of Kurdistan, PO Box 66177-15175, Sanandaj (Iran, Islamic Republic of)
2010-07-09
In this paper we present a model exhibiting a new type of continuous-time quantum walk (as a quantum-mechanical transport process) on networks, which is described by a non-Hermitian Hamiltonian possessing a real spectrum. We call it pseudo-Hermitian continuous-time quantum walk. We introduce a method to obtain the probability distribution of walk on any vertex and then study a specific system. We observe that the probability distribution on certain vertices increases compared to that of the Hermitian case. This formalism makes the transport process faster and can be useful for search algorithms.
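The vertex-probability distribution of a continuous-time quantum walk can be sketched for the standard Hermitian case (the paper's pseudo-Hermitian construction is not reproduced here; the 4-cycle graph, the choice of the adjacency matrix as Hamiltonian, and the parameters below are illustrative assumptions):

```python
import numpy as np

# Adjacency matrix of a 4-cycle; the standard Hermitian CTQW uses
# H = A (or the graph Laplacian) as the Hamiltonian.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

def ctqw_probabilities(H, psi0, t):
    """Return |<v|exp(-iHt)|psi0>|^2 for every vertex v."""
    w, V = np.linalg.eigh(H)                    # H is Hermitian
    U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T
    psi_t = U @ psi0
    return np.abs(psi_t) ** 2

psi0 = np.zeros(4); psi0[0] = 1.0               # walker starts on vertex 0
p = ctqw_probabilities(A, psi0, t=1.0)
assert np.isclose(p.sum(), 1.0)                 # unitary evolution conserves probability
```

For the pseudo-Hermitian walk of the paper the evolution is generated by a non-Hermitian Hamiltonian with a real spectrum, so the eigendecomposition step above would need to be replaced accordingly.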
Directory of Open Access Journals (Sweden)
L. I. Lytkina; A. A. Shevtsov; E. S. Shentsova; O. A. Apalikhina
2016-01-01
A mathematical model of the polydisperse-medium mixing process reflects its stochastic features: the uneven distribution of the residence times of phase elements in the apparatus, particle size, pulsations of the apparatus hold-up, the random distribution of the material and thermal phase flows in the working volume, and the heterogeneity of the medium's physical and chemical properties, complicated by chemical reaction. For the mathematical description of the mixing of animal-feed ingredients in the presence of a chemical reaction, the system of differential equations of Academician V.V. Kafarov was used. His hypothesis, based on the theory of Markov processes, that "any multicomponent mixture can be considered as the result of an iterative process of mixing two components to achieve the desired uniformity of all the ingredients in the mixture" allows the mixing of a binary composition in a paddle mixer to be described by differential equations for the repeated changes in the concentrations of the two ingredients until a homogeneous mixture is obtained. It was found that the mixing of the two-component mixture in a paddle mixer is characterised by a constant mixing rate and a limiting (equilibrium) dispersion of the ingredients in the mixture, i.e. by its uniformity. The model parameters were adjusted against experimental studies on mixing crushed wheat with a metallomagnetic impurity, which served as the key (indicator) component. The model parameters were identified from the best values of the continuous mixing-rate constant and the equilibrium dispersions of the ingredient contents. The results obtained are used to develop a new-generation mixer design.
Learning Probabilistic Logic Models from Probabilistic Examples.
Chen, Jianzhong; Muggleton, Stephen; Santos, José
2008-10-01
We revisit an application developed originally using abductive Inductive Logic Programming (ILP) for modeling inhibition in metabolic networks. The example data was derived from studies of the effects of toxins on rats using Nuclear Magnetic Resonance (NMR) time-trace analysis of their biofluids together with background knowledge representing a subset of the Kyoto Encyclopedia of Genes and Genomes (KEGG). We now apply two Probabilistic ILP (PILP) approaches - abductive Stochastic Logic Programs (SLPs) and PRogramming In Statistical modeling (PRISM) to the application. Both approaches support abductive learning and probability predictions. Abductive SLPs are a PILP framework that provides possible worlds semantics to SLPs through abduction. Instead of learning logic models from non-probabilistic examples as done in ILP, the PILP approach applied in this paper is based on a general technique for introducing probability labels within a standard scientific experimental setting involving control and treated data. Our results demonstrate that the PILP approach provides a way of learning probabilistic logic models from probabilistic examples, and the PILP models learned from probabilistic examples lead to a significant decrease in error accompanied by improved insight from the learned results compared with the PILP models learned from non-probabilistic examples.
International Nuclear Information System (INIS)
Martorell, S.A.; Serradell, V.G.; Samanta, P.K.
1995-01-01
Technical Specifications (TS) define the limits and conditions for operating nuclear plants safely. We selected the Limiting Conditions for Operations (LCO) and Surveillance Requirements (SR), both within TS, as the main items to be evaluated using probabilistic methods. In particular, we focused on the Allowed Outage Time (AOT) and Surveillance Test Interval (STI) requirements in LCO and SR, respectively. Already, significant operating and design experience has accumulated revealing several problems which require modifications in some TS rules. Developments in Probabilistic Safety Assessment (PSA) allow the evaluation of effects due to such modifications in AOT and STI from a risk point of view. Thus, some changes have already been adopted in some plants. However, the combined effect of several changes in AOT and STI, i.e. through their interactions, is not addressed. This paper presents a methodology which encompasses, along with the definition of AOT and STI interactions, the quantification of interactions in terms of risk using PSA methods, an approach for evaluating simultaneous AOT and STI modifications, and an assessment of strategies for giving flexibility to plant operation through simultaneous changes on AOT and STI using trade-off-based risk criteria
Wakker, P.P.; Thaler, R.H.; Tversky, A.
1997-01-01
Probabilistic insurance is an insurance policy involving a small probability that the consumer will not be reimbursed. Survey data suggest that people dislike probabilistic insurance and demand more than a 20% reduction in the premium to compensate for a 1% default risk. While these preferences are intuitively appealing they are difficult to reconcile with expected utility theory. Under highly plausible assumptions about the utility function, willingness to pay for probabilistic i...
Incomplete Continuous-time Securities Markets with Stochastic Income Volatility
DEFF Research Database (Denmark)
Christensen, Peter Ove; Larsen, Kasper
2014-01-01
We derive closed-form solutions for the equilibrium interest rate and market price of risk processes in an incomplete continuous-time market with uncertainty generated by Brownian motions. The economy has a finite number of heterogeneous exponential utility investors, who receive partially...
Modeling of water treatment plant using timed continuous Petri nets
Nurul Fuady Adhalia, H.; Subiono, Adzkiya, Dieky
2017-08-01
Petri nets graphically represent certain conditions and rules. In this paper, we construct a model of a Water Treatment Plant (WTP) using timed continuous Petri nets. Specifically, we assume that (1) the water pump is always active and (2) the water source is always available. After obtaining the model, the flow through the transitions and the token conservation laws are calculated.
Incomplete Continuous-Time Securities Markets with Stochastic Income Volatility
DEFF Research Database (Denmark)
Christensen, Peter Ove; Larsen, Kasper
In an incomplete continuous-time securities market governed by Brownian motions, we derive closed-form solutions for the equilibrium risk-free rate and equity premium processes. The economy has a finite number of heterogeneous exponential utility investors, who receive partially unspanned income ...
The deviation matrix of a continuous-time Markov chain
Coolen-Schrijner, P.; van Doorn, E.A.
2001-01-01
The deviation matrix of an ergodic, continuous-time Markov chain with transition probability matrix $P(.)$ and ergodic matrix $\\Pi$ is the matrix $D \\equiv \\int_0^{\\infty} (P(t)-\\Pi)dt$. We give conditions for $D$ to exist and discuss properties and a representation of $D$. The deviation matrix of a
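For a finite ergodic chain with generator Q, one standard representation of the integral defining D is the closed form D = inv(Pi - Q) - Pi. A minimal numerical sketch (the two-state rates below are illustrative assumptions) checks this against the defining identities QD = DQ = Pi - I and pi D = 0:

```python
import numpy as np

# Generator of a small ergodic two-state CTMC (rates are illustrative).
Q = np.array([[-1.0, 1.0],
              [ 2.0, -2.0]])

# Stationary distribution pi solves pi @ Q = 0 with pi summing to 1;
# for this two-state chain pi = (2/3, 1/3).
pi = np.array([2/3, 1/3])
Pi = np.tile(pi, (2, 1))          # ergodic matrix: every row equals pi

# Closed-form deviation matrix D = inv(Pi - Q) - Pi, equal to the
# integral of P(t) - Pi over [0, infinity) for an ergodic chain.
D = np.linalg.inv(Pi - Q) - Pi

# Defining properties of the deviation matrix.
I = np.eye(2)
assert np.allclose(Q @ D, Pi - I)
assert np.allclose(D @ Q, Pi - I)
assert np.allclose(pi @ D, 0.0)
```

The paper itself works at the level of general (possibly infinite) chains, where existence of D is the delicate point; the closed form above only makes sense in the finite ergodic case.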
The deviation matrix of a continuous-time Markov chain
Coolen-Schrijner, Pauline; van Doorn, Erik A.
2002-01-01
The deviation matrix of an ergodic, continuous-time Markov chain with transition probability matrix $P(.)$ and ergodic matrix $\\Pi$ is the matrix $D \\equiv \\int_0^{\\infty} (P(t)-\\Pi)dt$. We give conditions for $D$ to exist and discuss properties and a representation of $D$. The deviation matrix of a
Asymptotic absolute continuity for perturbed time-dependent ...
Indian Academy of Sciences (India)
We study the notion of asymptotic velocity for a class of perturbed time-dependent ... Using (2.4) we can readily continue α(t) to the whole half-axis.
Noise Simulation of Continuous-Time ΣΔ Modulators
International Nuclear Information System (INIS)
Arias, J.; Quintanilla, L.; Bisbal, D.; San Pablo, J.; Enriquez, L.; Vicente, J.; Barbolla, J.
2005-01-01
In this work, an approach for simulating the effect of noise sources on the performance of continuous-time ΣΔ modulators is presented. Electrical noise, including thermal noise and 1/f noise, together with clock jitter, is included in a simulation program, and its impact on system performance is analyzed
Stability and the structure of continuous-time economic models
Nieuwenhuis, H.J.; Schoonbeek, L.
In this paper we investigate the relationship between the stability of macroeconomic, or macroeconometric, continuous-time models and the structure of the matrices appearing in these models. In particular, we concentrate on dominant-diagonal structures. We derive general stability results for models
A mean-variance frontier in discrete and continuous time
Bekker, Paul A.
2004-01-01
The paper presents a mean-variance frontier based on dynamic frictionless investment strategies in continuous time. The result applies to a finite number of risky assets whose price process is given by multivariate geometric Brownian motion with deterministically varying coefficients. The derivation is based on the solution for the frontier in discrete time. Using the same multiperiod framework as Li and Ng (2000), I provide an alternative derivation and an alternative formulation of the solu...
Continuous Time Portfolio Selection under Conditional Capital at Risk
Directory of Open Access Journals (Sweden)
Gordana Dmitrasinovic-Vidovic
2010-01-01
Portfolio optimization with respect to different risk measures is of interest to both practitioners and academics. For there to be a well-defined optimal portfolio, it is important that the risk measure be coherent and quasiconvex with respect to the proportion invested in risky assets. In this paper we investigate one such measure, conditional capital at risk, and find the optimal strategies under this measure in the Black-Scholes continuous-time setting with time-dependent coefficients.
Deep Brain Stimulation, Continuity over Time, and the True Self.
Nyholm, Sven; O'Neill, Elizabeth
2016-10-01
One of the topics that often comes up in ethical discussions of deep brain stimulation (DBS) is the question of what impact DBS has, or might have, on the patient's self. This is often understood as a question of whether DBS poses a threat to personal identity, which is typically understood as having to do with psychological and/or narrative continuity over time. In this article, we argue that the discussion of whether DBS is a threat to continuity over time is too narrow. There are other questions concerning DBS and the self that are overlooked in discussions exclusively focusing on psychological and/or narrative continuity. For example, it is also important to investigate whether DBS might sometimes have a positive (e.g., a rehabilitating) effect on the patient's self. To widen the discussion of DBS, so as to make it encompass a broader range of considerations that bear on DBS's impact on the self, we identify six features of the commonly used concept of a person's "true self." We apply these six features to the relation between DBS and the self. And we end with a brief discussion of the role DBS might play in treating otherwise treatment-refractory anorexia nervosa. This further highlights the importance of discussing both continuity over time and the notion of the true self.
A stochastic surplus production model in continuous time
DEFF Research Database (Denmark)
Pedersen, Martin Wæver; Berg, Casper Willestofte
2017-01-01
Surplus production modelling has a long history as a method for managing data-limited fish stocks. Recent advancements have cast surplus production models as state-space models that separate random variability of stock dynamics from error in observed indices of biomass. We present a stochastic surplus production model in continuous time (SPiCT), which in addition to stock dynamics also models the dynamics of the fisheries. This enables error in the catch process to be reflected in the uncertainty of estimated model parameters and management quantities. Benefits of the continuous-time state-space model formulation include the ability to provide estimates of exploitable biomass and fishing mortality at any point in time from data sampled at arbitrary and possibly irregular intervals. We show in a simulation that the ability to analyse subannual data can increase the effective sample size...
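SPiCT itself is a stochastic state-space model; as a rough illustration of the underlying surplus production dynamics, the deterministic Schaefer skeleton dB/dt = rB(1 - B/K) - FB (with process noise omitted, and all parameter values illustrative assumptions) can be integrated directly:

```python
def simulate_biomass(r, K, F, B0, t_end, dt=0.01):
    """Euler integration of the Schaefer surplus production ODE
    dB/dt = r*B*(1 - B/K) - F*B, the deterministic skeleton that
    continuous-time state-space models build on (process noise omitted).
    Returns the final biomass and the accumulated catch."""
    B, t, catch = B0, 0.0, 0.0
    while t < t_end:
        catch += F * B * dt              # catch accumulates at rate F*B
        B += (r * B * (1.0 - B / K) - F * B) * dt
        t += dt
    return B, catch

# With no fishing (F = 0) the stock approaches carrying capacity K.
B, _ = simulate_biomass(r=0.5, K=100.0, F=0.0, B0=10.0, t_end=50.0)
assert abs(B - 100.0) < 1.0
```

The continuous-time formulation is what lets a model of this kind absorb observations at arbitrary, irregular sampling times.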
Continuous real-time water information: an important Kansas resource
Loving, Brian L.; Putnam, James E.; Turk, Donita M.
2014-01-01
Continuous real-time information on streams, lakes, and groundwater is an important Kansas resource that can safeguard lives and property, and ensure adequate water resources for a healthy State economy. The U.S. Geological Survey (USGS) operates approximately 230 water-monitoring stations at Kansas streams, lakes, and groundwater sites. Most of these stations are funded cooperatively in partnerships with local, tribal, State, or other Federal agencies. The USGS real-time water-monitoring network provides long-term, accurate, and objective information that meets the needs of many customers. Whether the customer is a water-management or water-quality agency, an emergency planner, a power or navigational official, a farmer, a canoeist, or a fisherman, all can benefit from the continuous real-time water information gathered by the USGS.
Martingale Regressions for a Continuous Time Model of Exchange Rates
Guo, Zi-Yi
2017-01-01
One of the daunting problems in international finance is the weak explanatory power of existing theories of the nominal exchange rates, the so-called “foreign exchange rate determination puzzle”. We propose a continuous-time model to study the impact of order flow on foreign exchange rates. The model is estimated by a newly developed econometric tool based on a time-change sampling from calendar to volatility time. The estimation results indicate that the effect of order flow on exchange rate...
Verification of Continuous Dynamical Systems by Timed Automata
DEFF Research Database (Denmark)
Sloth, Christoffer; Wisniewski, Rafael
2011-01-01
This paper presents a method for abstracting continuous dynamical systems by timed automata. The abstraction is based on partitioning the state space of a dynamical system using positive invariant sets, which form cells that represent locations of a timed automaton. The abstraction is intended ... The partition is generated utilizing sub-level sets of Lyapunov functions, as they are positive invariant sets. It is shown that this partition generates sound and complete abstractions. Furthermore, the complete abstractions can be composed of multiple timed automata, allowing parallelization...
Coupled continuous time-random walks in quenched random environment
Magdziarz, M.; Szczotka, W.
2018-02-01
We introduce a coupled continuous-time random walk with coupling which is characteristic for Lévy walks. Additionally we assume that the walker moves in a quenched random environment, i.e. the site disorder at each lattice point is fixed in time. We analyze the scaling limit of such a random walk. We show that for large times the behaviour of the analyzed process is exactly the same as in the case of uncoupled quenched trap model for Lévy flights.
Correlating defect density with growth time in continuous graphene films.
Kang, Cheong; Jung, Da Hee; Nam, Ji Eun; Lee, Jin Seok
2014-12-01
We report graphene flakes and films synthesized by a copper-catalyzed atmospheric-pressure chemical vapor deposition (APCVD) method using a mixture of Ar, H2, and CH4 gases. It was found that variations in the reaction parameters, such as reaction temperature, annealing time, and growth time, influenced the domain size of the as-grown graphene, as well as the number of layers, the degree of defects, and the uniformity of the graphene films. Increasing the growth temperature and the annealing time tends to accelerate the graphene growth rate and increase the diffusion length, respectively, thereby increasing the average size of graphene domains. In addition, we confirmed that the number of pinholes decreased with increasing growth time. Micro-Raman analysis of the as-grown graphene films confirmed that a continuous graphene monolayer film with low defects and high uniformity could be obtained with a prolonged reaction time, under appropriate annealing time and growth temperature.
Continuous equilibrium scores: factoring in the time before a fall.
Wood, Scott J; Reschke, Millard F; Owen Black, F
2012-07-01
The equilibrium (EQ) score commonly used in computerized dynamic posturography is normalized between 0 and 100, with falls assigned a score of 0. The resulting mixed discrete-continuous distribution limits certain statistical analyses and treats all trials with falls equally. We propose a simple modification of the formula in which peak-to-peak sway data from trials with falls are scaled according to the percentage of the trial completed to derive a continuous equilibrium (cEQ) score. The cEQ scores for trials without falls remain unchanged from the original methodology. The cEQ factors in the time before a fall and results in a continuous variable retaining the central tendencies of the original EQ distribution. A random set of 5315 Sensory Organization Test trials that included 81 falls was pooled. A comparison of the original and cEQ distributions and their rank ordering demonstrated that trials with falls continue to constitute the lower range of scores with the cEQ methodology. The area under the receiver operating characteristic curve (0.997) demonstrates that the cEQ retained near-perfect discrimination between trials with and without falls. We conclude that the cEQ score provides the ability to distinguish ballistic falls from falls that occur later in the trial. This approach of incorporating time and sway magnitude can easily be extended to enhance other balance tests that include fall data or incomplete trials. Copyright © 2012 Elsevier B.V. All rights reserved.
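The scaling idea can be sketched as follows; the exact cEQ formula is not given in this abstract, so the 12.5-degree sway normalization (the value usually quoted for the Sensory Organization Test) and the division of observed sway by the completed fraction are assumptions made here for illustration:

```python
def eq_score(sway_deg, max_sway_deg=12.5):
    """Standard equilibrium score: 100 at no sway, 0 at the
    normative peak-to-peak sway limit (assumed 12.5 degrees)."""
    return max(0.0, 100.0 * (1.0 - sway_deg / max_sway_deg))

def ceq_score(sway_deg, fraction_completed, fell, max_sway_deg=12.5):
    """Continuous equilibrium score (sketch): for trials with falls,
    the observed peak-to-peak sway is scaled by the fraction of the
    trial completed, so earlier (more ballistic) falls score lower."""
    if not fell:
        return eq_score(sway_deg, max_sway_deg)
    scaled = sway_deg / max(fraction_completed, 1e-9)
    return eq_score(scaled, max_sway_deg)

# A fall early in the trial scores lower than the same sway later on.
assert ceq_score(2.0, 0.2, True) < ceq_score(2.0, 0.8, True)
# Trials without falls are unchanged from the original EQ formula.
assert ceq_score(4.0, 1.0, False) == eq_score(4.0)
```

Whatever the exact scaling used, the key property shown by the assertions is that fall trials become ordered by how long the subject stayed up rather than all collapsing to zero.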
DEFF Research Database (Denmark)
Jensen, Finn Verner; Lauritzen, Steffen Lilholt
2001-01-01
This article describes the basic ideas and algorithms behind specification and inference in probabilistic networks based on directed acyclic graphs, undirected graphs, and chain graphs.
International Nuclear Information System (INIS)
Laurens, J.M.; Thompson, B.G.J.; Sumerling, T.J.
1990-01-01
During the past decade, the UKDoE has funded the development of an integrated assessment procedure centred around probabilistic risk analysis (p.r.a.) using Monte Carlo simulation techniques to account for the effects of parameter value uncertainty, including those associated with temporal changes in the environment over a postclosure period of about one million years. The influence of these changes can now be incorporated explicitly into the p.r.a. simulator VANDAL (Variability ANalysis of Disposal ALternatives) briefly described here. Although a full statistically converged time-dependent p.r.a. will not be demonstrated until the current Dry Run 3 trial is complete, illustrative examples are given showing the ability of VANDAL to represent spatially complex groundwater and repository systems evolving under the influence of climatic change. 18 refs., 10 figs., 1 tab
Introducing the Dimensional Continuous Space-Time Theory
International Nuclear Information System (INIS)
Martini, Luiz Cesar
2013-01-01
This article is an introduction to a new theory. The name of the theory is justified by the dimensional description of the continuous space-time of matter, energy and empty space, which gathers all the real things that exist in the universe. The theory presents itself as a consolidation of the classical, quantum and relativity theories. A basic equation that describes the formation of the Universe, relating time, space, matter, energy and movement, is deduced. The four fundamental physical constants (the speed of light in empty space, the gravitational constant, Boltzmann's constant and Planck's constant), as well as the masses of the fundamental particles, the electrical charges, the energies, empty space and time, are also obtained from this basic equation. This theory provides a new vision of the Big Bang and of how the galaxies, stars, black holes and planets were formed. Based on it, it is possible to have a perfect comprehension of the wave-particle duality, which is an intrinsic characteristic of matter and energy. It becomes possible to comprehend the formation of orbitals and to derive the equations of atomic orbits. The theory presents a singular comprehension of the relativity of mass, length and time. It is demonstrated that the continuous space-time is tridimensional, inelastic and temporally instantaneous, eliminating the possibility of spatial folds, slot space, wormholes, time travel and parallel universes. It is shown that many concepts, like dark matter and the strong forces that hypothetically keep the cohesion of the atomic nucleons, are without sense.
Continuous-Time Symmetric Hopfield Nets are Computationally Universal
Czech Academy of Sciences Publication Activity Database
Šíma, Jiří; Orponen, P.
2003-01-01
Roč. 15, č. 3 (2003), s. 693-733 ISSN 0899-7667 R&D Projects: GA AV ČR IAB2030007; GA ČR GA201/02/1456 Institutional research plan: AV0Z1030915 Keywords : continuous-time Hopfield network * Liapunov function * analog computation * computational power * Turing universality Subject RIV: BA - General Mathematics Impact factor: 2.747, year: 2003
Parallel algorithms for simulating continuous time Markov chains
Nicol, David M.; Heidelberger, Philip
1992-01-01
We have previously shown that the mathematical technique of uniformization can serve as the basis of synchronization for the parallel simulation of continuous-time Markov chains. This paper reviews the basic method and compares five different methods based on uniformization, evaluating their strengths and weaknesses as a function of problem characteristics. The methods vary in their use of optimism, logical aggregation, communication management, and adaptivity. Performance evaluation is conducted on the Intel Touchstone Delta multiprocessor, using up to 256 processors.
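Uniformization itself, independent of the parallel methods the paper compares, computes transient CTMC distributions from an embedded DTMC driven by a Poisson clock. A minimal serial sketch (the two-state generator and time horizon are illustrative assumptions):

```python
import math
import numpy as np

def transient_distribution(Q, p0, t, K=60):
    """Transient distribution of a CTMC via uniformization:
    p(t) = sum_k Poisson(k; Lam*t) * p0 @ P^k, where P = I + Q/Lam
    is the uniformized DTMC and the sum is truncated at K terms."""
    Lam = max(-Q.diagonal())          # uniformization rate >= all exit rates
    P = np.eye(len(Q)) + Q / Lam
    term = p0.copy()
    out = np.zeros_like(p0)
    for k in range(K + 1):
        weight = math.exp(-Lam * t) * (Lam * t) ** k / math.factorial(k)
        out += weight * term
        term = term @ P               # advance the embedded DTMC one step
    return out

Q = np.array([[-1.0, 1.0], [2.0, -2.0]])
p0 = np.array([1.0, 0.0])
p = transient_distribution(Q, p0, t=0.5)

# Closed form for this two-state chain: P(t) = Pi + exp(-3t) (I - Pi).
pi = np.array([2/3, 1/3])
exact = pi + math.exp(-1.5) * (p0 - pi)
assert np.allclose(p, exact)
```

The parallel-simulation methods discussed in the paper exploit the fact that the Poisson clock can be shared across processors, which is what makes uniformization a natural synchronization mechanism.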
Estimation of Continuous Time Models in Economics: an Overview
Clifford R. Wymer
2009-01-01
The dynamics of economic behaviour is often developed in theory as a continuous time system. Rigorous estimation and testing of such systems, and the analysis of some aspects of their properties, is of particular importance in distinguishing between competing hypotheses and the resulting models. The consequences for the international economy during the past eighteen months of failures in the financial sector, and particularly the banking sector, make it essential that the dynamics of financia...
International Nuclear Information System (INIS)
Liang Jinling; Cao Jinde
2004-01-01
First, the convergence of continuous-time Bidirectional Associative Memory (BAM) neural networks is studied. By using Lyapunov functionals and some analysis techniques, delay-independent sufficient conditions are obtained for the networks to converge exponentially toward the equilibrium associated with the constant input sources. Second, discrete-time analogues of the continuous-time BAM networks are formulated and studied. It is shown that the convergence characteristics of the continuous-time systems are preserved by the discrete-time analogues without any restriction imposed on the uniform discretization step size. An illustrative example is given to demonstrate the effectiveness of the obtained results
The problem with time in mixed continuous/discrete time modelling
Rovers, K.C.; Kuper, Jan; Smit, Gerardus Johannes Maria
The design of cyber-physical systems requires the use of mixed continuous time and discrete time models. Current modelling tools have problems with time transformations (such as a time delay) or multi-rate systems. We will present a novel approach that implements signals as functions of time,
Dynamical continuous time random Lévy flights
Liu, Jian; Chen, Xiaosong
2016-03-01
The diffusive behaviour of Lévy flights is studied within the framework of the dynamical continuous-time random walk (DCTRW) method, with nonlinear friction introduced in each step. In the DCTRW method, the Lévy random walker in each step flies according to Newton's second law, with the nonlinear friction f(v) = -γ₀v - γ₂v³ considered instead of Stokes friction. It is shown that after introducing the nonlinear friction, superdiffusive Lévy flights converge and exhibit localization in the long-time limit, but for the Lévy index μ = 2 the process is still Brownian motion.
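The per-step relaxation under the nonlinear friction f(v) = -γ0·v - γ2·v³ can be sketched with a simple Euler integration (unit mass; the parameter values are illustrative assumptions, and this is not the full Lévy-flight simulation of the paper):

```python
def friction_relaxation(v0, gamma0=1.0, gamma2=0.5, dt=1e-3, t=5.0):
    """Relax a walker's velocity under the nonlinear friction
    f(v) = -gamma0*v - gamma2*v**3 (unit mass), as assumed per step
    in the DCTRW picture. Returns displacement and final velocity."""
    v, x = v0, 0.0
    for _ in range(int(t / dt)):
        x += v * dt
        v += (-gamma0 * v - gamma2 * v ** 3) * dt
    return x, v

# A large initial kick relaxes toward zero velocity: the cubic term
# damps big velocities quickly, the linear term finishes the job.
x, v = friction_relaxation(v0=10.0)
assert abs(v) < 0.05
```

It is this damping of large velocities that tames the heavy-tailed steps and produces the localization reported in the abstract.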
Infinite time interval backward stochastic differential equations with continuous coefficients.
Zong, Zhaojun; Hu, Feng
2016-01-01
In this paper, we study the existence theorem for [Formula: see text] [Formula: see text] solutions to a class of 1-dimensional infinite time interval backward stochastic differential equations (BSDEs) under the conditions that the coefficients are continuous and have linear growth. We also obtain the existence of a minimal solution. Furthermore, we study the existence and uniqueness theorem for [Formula: see text] [Formula: see text] solutions of infinite time interval BSDEs with non-uniformly Lipschitz coefficients. It should be pointed out that the assumptions of this result are weaker than those of Theorem 3.1 in Zong (Turkish J Math 37:704-718, 2013).
Quantum trajectories and measurements in continuous time. The diffusive case
International Nuclear Information System (INIS)
Barchielli, Alberto; Gregoratti, Matteo
2009-01-01
continuous time for quantum systems. The two-level atom is again used to introduce and study an example of feedback based on the observed output. (orig.)
Coaction versus reciprocity in continuous-time models of cooperation.
van Doorn, G Sander; Riebli, Thomas; Taborsky, Michael
2014-09-07
Cooperating animals frequently show closely coordinated behaviours organized by a continuous flow of information between interacting partners. Such real-time coaction is not captured by the iterated prisoner's dilemma and other discrete-time reciprocal cooperation games, which inherently feature a delay in information exchange. Here, we study the evolution of cooperation when individuals can dynamically respond to each other's actions. We develop continuous-time analogues of iterated-game models and describe their dynamics in terms of two variables, the propensity of individuals to initiate cooperation (altruism) and their tendency to mirror their partner's actions (coordination). These components of cooperation stabilize at an evolutionary equilibrium or show oscillations, depending on the chosen payoff parameters. Unlike reciprocal altruism, cooperation by coaction does not require that those willing to initiate cooperation pay in advance for uncertain future benefits. Correspondingly, we show that introducing a delay to information transfer between players is equivalent to increasing the cost of cooperation. Cooperative coaction can therefore evolve much more easily than reciprocal cooperation. When delays entirely prevent coordination, we recover results from the discrete-time alternating prisoner's dilemma, indicating that coaction and reciprocity are connected by a continuum of opportunities for real-time information exchange. Copyright © 2014 Elsevier Ltd. All rights reserved.
The space-time model according to dimensional continuous space-time theory
International Nuclear Information System (INIS)
Martini, Luiz Cesar
2014-01-01
This article results from the Dimensional Continuous Space-Time Theory, whose theoretical introduction was presented in [1]. A theoretical model of the continuous space-time is presented. The wave equation of time in the absolutely stationary empty-space referential will be described in detail. The complex time, that is, the time fixed on the referential of infinite phase-time speed, is deduced from the New View of Relativity Theory that is being submitted simultaneously with this article at this congress. Finally, considering the inseparable space-time, the wave-particle duality equation is presented.
Distinct timing mechanisms produce discrete and continuous movements.
Directory of Open Access Journals (Sweden)
Raoul Huys
2008-04-01
The differentiation of discrete and continuous movement is one of the pillars of motor behavior classification. Discrete movements have a definite beginning and end, whereas continuous movements do not have such discriminable end points. In the past decade there has been vigorous debate about whether this classification implies different control processes. This debate up until the present has been empirically based. Here, we present an unambiguous non-empirical classification based on theorems in dynamical systems theory that sets discrete and continuous movements apart. Through computational simulations of representative modes of each class and topological analysis of the flow in state space, we show that distinct control mechanisms underwrite discrete and fast rhythmic movements. In particular, we demonstrate that discrete movements require a time keeper while fast rhythmic movements do not. We validate our computational findings experimentally using a behavioral paradigm in which human participants performed finger flexion-extension movements at various movement paces and under different instructions. Our results demonstrate that the human motor system employs different timing control mechanisms (presumably via differential recruitment of neural subsystems) to accomplish varying behavioral functions such as speed constraints.
Relative entropy and waiting time for continuous-time Markov processes
Chazottes, J.R.; Giardinà, C.; Redig, F.H.J.
2006-01-01
For discrete-time stochastic processes, there is a close connection between return (resp. waiting) times and entropy (resp. relative entropy). Such a connection cannot be straightforwardly extended to the continuous-time setting. Contrarily to the discrete-time case one needs a reference measure on
Nonequilibrium thermodynamic potentials for continuous-time Markov chains.
Verley, Gatien
2016-01-01
We connect the rare fluctuations of an equilibrium (EQ) process and the typical fluctuations of a nonequilibrium (NE) stationary process. In the framework of large deviation theory, this observation allows us to introduce NE thermodynamic potentials. For continuous-time Markov chains, we identify the relevant pairs of conjugated variables and propose two NE ensembles: one with fixed dynamics and fluctuating time-averaged variables, and another with fixed time-averaged variables, but a fluctuating dynamics. Accordingly, we show that NE processes are equivalent to conditioned EQ processes ensuring that NE potentials are Legendre dual. We find a variational principle satisfied by the NE potentials that reach their maximum in the NE stationary state and whose first derivatives produce the NE equations of state and second derivatives produce the NE Maxwell relations generalizing the Onsager reciprocity relations.
L. I. Lytkina; A. A. Shevtsov; E. S. Shentsova; O. A. Apalikhina
2016-01-01
A mathematical model of the mixing of a polydisperse medium reflects the stochastic features of the process: the uneven distribution of phase elements with respect to their residence time in the apparatus, particle size, pulsed hold-up of the apparatus, the random distribution of material and thermal phase flows over the working volume, and the heterogeneity of the physical and chemical properties of the medium, complicated by chemical reaction. For the mathematical description of the mixing process of animal feed ingr...
A continuous-time neural model for sequential action.
Kachergis, George; Wyatte, Dean; O'Reilly, Randall C; de Kleijn, Roy; Hommel, Bernhard
2014-11-05
Action selection, planning and execution are continuous processes that evolve over time, responding to perceptual feedback as well as evolving top-down constraints. Existing models of routine sequential action (e.g. coffee- or pancake-making) generally fall into one of two classes: hierarchical models that include hand-built task representations, or heterarchical models that must learn to represent hierarchy via temporal context, but thus far lack goal-orientedness. We present a biologically motivated model of the latter class that, because it is situated in the Leabra neural architecture, affords an opportunity to include both unsupervised and goal-directed learning mechanisms. Moreover, we embed this neurocomputational model in the theoretical framework of the theory of event coding (TEC), which posits that actions and perceptions share a common representation with bidirectional associations between the two. Thus, in this view, not only does perception select actions (along with task context), but actions are also used to generate perceptions (i.e. intended effects). We propose a neural model that implements TEC to carry out sequential action control in hierarchically structured tasks such as coffee-making. Unlike traditional feedforward discrete-time neural network models, which use static percepts to generate static outputs, our biological model accepts continuous-time inputs and likewise generates non-stationary outputs, making short-timescale dynamic predictions. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
Recommender engine for continuous-time quantum Monte Carlo methods
Huang, Li; Yang, Yi-feng; Wang, Lei
2017-03-01
Recommender systems play an essential role in the modern business world. They recommend favorable items such as books, movies, and search queries to users based on their past preferences. Applying similar ideas and techniques to Monte Carlo simulations of physical systems boosts their efficiency without sacrificing accuracy. Exploiting the quantum to classical mapping inherent in the continuous-time quantum Monte Carlo methods, we construct a classical molecular gas model to reproduce the quantum distributions. We then utilize powerful molecular simulation techniques to propose efficient quantum Monte Carlo updates. The recommender engine approach provides a general way to speed up the quantum impurity solvers.
Continuous-time quantum walks on multilayer dendrimer networks
Galiceanu, Mircea; Strunz, Walter T.
2016-08-01
We consider continuous-time quantum walks (CTQWs) on multilayer dendrimer networks (MDs) and their application to quantum transport. A detailed study of properties of CTQWs is presented and transport efficiency is determined in terms of the exact and average return probabilities. The latter depends only on the eigenvalues of the connectivity matrix, which even for very large structures allows a complete analytical solution for this particular choice of network. In the case of MDs we observe an interplay between strong localization effects, due to the dendrimer topology, and good efficiency from the linear segments. We show that quantum transport is enhanced by interconnecting more layers of dendrimers.
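The average return probability the abstract refers to can be computed exactly for a small network by exponentiating the Hamiltonian (taken here, as is common for CTQWs, to be the graph Laplacian). A minimal sketch using a star graph as a stand-in for one dendrimer generation; the graph choice and time point are illustrative only:

```python
import numpy as np
from scipy.linalg import expm

def ctqw_return_probabilities(A, t):
    """Exact return probabilities |<j| e^{-iHt} |j>|^2 for a CTQW whose
    Hamiltonian is the graph Laplacian H = D - A."""
    H = np.diag(A.sum(axis=1)) - A          # graph Laplacian
    U = expm(-1j * H * t)                   # time-evolution operator
    return np.abs(np.diag(U)) ** 2

# Star graph with 4 leaves, standing in for a single dendrimer generation
N = 5
A = np.zeros((N, N))
A[0, 1:] = A[1:, 0] = 1.0
p = ctqw_return_probabilities(A, t=1.0)
avg_return = p.mean()   # low average return probability = efficient transport
```

For an actual multilayer dendrimer one would substitute its adjacency matrix; the eigenvalue-only dependence of the long-time average noted in the abstract is what makes large structures tractable analytically.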
Measuring and modelling occupancy time in NHS continuing healthcare
Directory of Open Access Journals (Sweden)
Millard Peter H
2011-06-01
Full Text Available Abstract Background Due to increasing demand and financial constraints, NHS continuing healthcare systems seek better ways of forecasting demand and budgeting for care. This paper investigates two areas of concern, namely, how long existing patients stay in service and the number of patients likely to still be in care after a period of time. Methods An anonymised dataset containing information for all funded admissions to placement and home care in the NHS continuing healthcare system was provided by 26 (out of 31) London primary care trusts. Data on 11,289 patients staying in placement and home care between 1 April 2005 and 31 May 2008 were first analysed. Using a methodology based on length-of-stay (LoS) modelling, we captured the distribution of LoS of patients to estimate the probability of a patient staying in care over a period of time. Using the estimated probabilities we forecasted the number of patients likely to still be in care after a period of time (e.g. monthly). Results We noticed that within the NHS continuing healthcare system there are three main categories of patients: some are discharged after a short stay (a few days), others stay for a few months, and a third category stays for a long period of time (years). Some variations in proportions of discharge and transition between types of care as well as between care groups (e.g. palliative, functional mental health) were observed. A close agreement between the observed and the expected numbers of patients suggests a good prediction model. Conclusions The model was tested for care groups within the NHS continuing healthcare system in London to support Primary Care Trusts in budget planning and improve their responsiveness to meet increasing demand under limited availability of resources. Its applicability can be extended to other types of care, such as hospital care and re-ablement. Further work will be geared towards
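The three observed patient categories suggest a mixture-of-exponentials length-of-stay model; a hedged sketch of the forecasting idea (the weights and mean stays below are invented for illustration, not taken from the study):

```python
import numpy as np

def survival(t, weights, mean_los):
    """Survival function of a mixture-of-exponentials LoS model:
    S(t) = sum_k w_k * exp(-t / mu_k), the probability a patient
    admitted at time 0 is still in care at time t."""
    w = np.asarray(weights, float)
    mu = np.asarray(mean_los, float)
    return (w * np.exp(-np.asarray(t, float)[..., None] / mu)).sum(axis=-1)

# Hypothetical parameters: short (days), medium (months), long (years) stays
weights = [0.3, 0.5, 0.2]
mean_los = [5.0, 90.0, 900.0]          # mean LoS in days, illustrative only

admitted = 1000                         # cohort admitted on day 0
months = np.arange(0, 13) * 30.0
still_in_care = admitted * survival(months, weights, mean_los)
```

Summing such cohort curves over successive admission months gives the monthly occupancy forecast described in the abstract.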
Price discovery in a continuous-time setting
DEFF Research Database (Denmark)
Dias, Gustavo Fruet; Fernandes, Marcelo; Scherrer, Cristina
We formulate a continuous-time price discovery model in which the price discovery measure varies (stochastically) at daily frequency. We estimate daily measures of price discovery using a kernel-based OLS estimator instead of running separate daily VECM regressions as is standard in the literature. We show that our estimator is not only consistent, but also outperforms the standard daily VECM in finite samples. We illustrate our theoretical findings by studying the price discovery process of 10 actively traded stocks in the U.S. from 2007 to 2013.
Discrete and continuous time dynamic mean-variance analysis
Reiss, Ariane
1999-01-01
Contrary to static mean-variance analysis, very few papers have dealt with dynamic mean-variance analysis. Here, the mean-variance efficient self-financing portfolio strategy is derived for n risky assets in discrete and continuous time. In the discrete setting, the resulting portfolio is mean-variance efficient in a dynamic sense. It is shown that the optimal strategy for n risky assets may be dominated if the expected terminal wealth is constrained to exactly attain a certain goal instead o...
Finite time convergent learning law for continuous neural networks.
Chairez, Isaac
2014-02-01
This paper addresses the design of a discontinuous finite time convergent learning law for neural networks with continuous dynamics. The neural network was used here to obtain a non-parametric model for uncertain systems described by a set of ordinary differential equations. The source of uncertainties was the presence of some external perturbations and poor knowledge of the nonlinear function describing the system dynamics. A new adaptive algorithm based on discontinuous algorithms was used to adjust the weights of the neural network. The adaptive algorithm was derived by means of a non-standard Lyapunov function that is lower semi-continuous and differentiable in almost the whole space. A compensator term was included in the identifier to reject some specific perturbations using a nonlinear robust algorithm. Two numerical examples demonstrated the improvements achieved by the learning algorithm introduced in this paper compared to classical schemes with continuous learning methods. The first one dealt with a benchmark problem used in the paper to explain how the discontinuous learning law works. The second one used the methane production model to show the benefits in engineering applications of the learning law proposed in this paper. Copyright © 2013 Elsevier Ltd. All rights reserved.
Hofstede, ter F.; Wedel, M.
1998-01-01
This study investigates the effects of time aggregation in discrete and continuous-time hazard models. A Monte Carlo study is conducted in which data are generated according to various continuous and discrete-time processes, and aggregated into daily, weekly and monthly intervals. These data are
Time-predictable model application in probabilistic seismic hazard analysis of faults in Taiwan
Directory of Open Access Journals (Sweden)
Yu-Wen Chang
2017-01-01
Full Text Available Given the probability distribution function relating the recurrence interval to the occurrence time of the previous event on a fault, a time-dependent model of a particular fault for seismic hazard assessment was developed that takes into account the cyclic rupture characteristics of the active fault over a particular lifetime up to the present. The Gutenberg and Richter (1944) exponential frequency-magnitude relation is used to describe the earthquake recurrence rate for a regional source; it serves as a reference for a composite procedure that models the occurrence rate of large earthquakes on a fault when activity information is scarce. The time-dependent model was used to describe the characteristic behavior of the fault. The seismic hazard contributions from all sources, including both time-dependent and time-independent models, were then added together to obtain the annual total lifetime hazard curves. The effects of time-dependent and time-independent fault models [e.g., Brownian passage time (BPT) and Poisson, respectively] on hazard calculations are also discussed. The results of the proposed fault model show that the seismic demands of near-fault areas are lower than current hazard estimates when the time-dependent model is used on those faults, particularly where the elapsed time since the last event on a fault (such as the Chelungpu fault) is short.
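The BPT renewal model mentioned above is an inverse Gaussian distribution with mean recurrence interval μ and aperiodicity α; the conditional probability of rupture within a horizon of interest, given the elapsed time since the last event, can be sketched as follows (the parameter values are illustrative, not those of any actual Taiwanese fault):

```python
from scipy.stats import invgauss

def bpt_conditional_probability(mu, alpha, elapsed, horizon):
    """P(rupture within `horizon` years | quiet for `elapsed` years)
    under a Brownian passage time (inverse Gaussian) recurrence model
    with mean recurrence `mu` and aperiodicity `alpha`."""
    # scipy's invgauss(m, scale=s) has mean m*s and shape parameter s;
    # BPT(mu, alpha) corresponds to shape lambda = mu / alpha**2.
    scale = mu / alpha**2
    dist = invgauss(alpha**2, scale=scale)   # mean = alpha^2 * scale = mu
    F = dist.cdf
    return (F(elapsed + horizon) - F(elapsed)) / (1.0 - F(elapsed))

# Illustrative numbers only: 300-year mean recurrence, 50-year lifetime
p = bpt_conditional_probability(mu=300.0, alpha=0.5, elapsed=20.0, horizon=50.0)
```

Because the BPT hazard rate starts near zero right after an event, a short elapsed time yields a much lower conditional probability than the memoryless Poisson model, which is the effect the abstract describes for recently ruptured faults.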
Correlated continuous time random walk and option pricing
Lv, Longjin; Xiao, Jianbin; Fan, Liangzhong; Ren, Fuyao
2016-04-01
In this paper, we study a correlated continuous time random walk (CCTRW) with averaged waiting time, whose probability density function (PDF) is proved to follow a stretched Gaussian distribution. Then, we apply this process to the option pricing problem. Supposing the price of the underlying is driven by this CCTRW, we find this model captures the subdiffusive characteristic of financial markets. By using the mean self-financing hedging strategy, we obtain closed-form pricing formulas for a European option with and without transaction costs, respectively. Finally, comparing the obtained model with the classical Black-Scholes model, we find the price obtained in this paper is higher than that obtained from the Black-Scholes model. An empirical analysis is also introduced to confirm that the obtained results fit real data well.
International Nuclear Information System (INIS)
Stanzel, Ph; Kahl, B; Haberl, U; Herrnegger, M; Nachtnebel, H P
2008-01-01
A hydrological modelling framework applied within operational flood forecasting systems in three alpine Danube tributary basins, Traisen, Salzach and Enns, is presented. A continuous, semi-distributed rainfall-runoff model, accounting for the main hydrological processes of snow accumulation and melt, interception, evapotranspiration, infiltration, runoff generation and routing, is set up. Spatial discretization relies on the division of watersheds into subbasins and subsequently into hydrologic response units based on spatial information on soil types, land cover and elevation bands. The hydrological models are calibrated with meteorological ground measurements and with meteorological analyses incorporating radar information. Operationally, each forecasting sequence starts with the re-calculation of the last 24 to 48 hours. Errors between simulated and observed runoff are minimized by optimizing a correction factor for the input to provide improved system states. For the hydrological forecast, quantitative 48- or 72-hour forecast grids of temperature and precipitation - deterministic and probabilistic - are used as input. The forecasted hydrograph is corrected with an autoregressive model. The forecasting sequences are repeated every 15 minutes. First evaluations of the resulting hydrological forecasts are presented and the reliability of forecasts with different lead times is discussed.
An Analytical Solution for Probabilistic Guarantees of Reservation Based Soft Real--Time Systems
Palopoli, Luigi; Fontanelli, Daniele; Abeni, Luca; Villalba Frias, Bernardo
2015-01-01
We show a methodology for the computation of the probability of deadline miss for a periodic real-time task scheduled by a resource reservation algorithm. We propose a modelling technique for the system that reduces the computation of such a probability to that of the steady state probability of an infinite state Discrete Time Markov Chain with a periodic structure. This structure is exploited to develop an efficient numeric solution where different accuracy/computation time trade-offs can be...
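The core step the abstract describes, reducing the deadline-miss probability to a steady-state computation on a Discrete Time Markov Chain, can be sketched generically (the 3-state chain below is a toy stand-in, not the paper's periodic-structure model):

```python
import numpy as np

def steady_state(P):
    """Stationary distribution pi of an ergodic DTMC with transition
    matrix P: solve pi P = pi together with sum(pi) = 1."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Toy 3-state chain; state 2 represents "job missed its deadline"
P = np.array([[0.7, 0.25, 0.05],
              [0.5, 0.40, 0.10],
              [0.9, 0.10, 0.00]])
pi = steady_state(P)
miss_probability = pi[2]   # long-run fraction of periods with a deadline miss
```

The paper's contribution is handling the infinite, periodically structured state space efficiently; the finite least-squares solve above only illustrates the underlying steady-state idea.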
On properties of continuous-time random walks with non-Poissonian jump-times
International Nuclear Information System (INIS)
Villarroel, Javier; Montero, Miquel
2009-01-01
The usual development of the continuous-time random walk (CTRW) proceeds by assuming that the present is one of the jumping times. Under this restrictive assumption integral equations for the propagator and mean escape times have been derived. We generalize these results to the case when the present is an arbitrary time by recourse to renewal theory. The case of Erlang distributed times is analyzed in detail. Several concrete examples are considered.
Continuous-time quantum Monte Carlo impurity solvers
Gull, Emanuel; Werner, Philipp; Fuchs, Sebastian; Surer, Brigitte; Pruschke, Thomas; Troyer, Matthias
2011-04-01
Continuous-time quantum Monte Carlo impurity solvers are algorithms that sample the partition function of an impurity model using diagrammatic Monte Carlo techniques. The present paper describes codes that implement the interaction expansion algorithm originally developed by Rubtsov, Savkin, and Lichtenstein, as well as the hybridization expansion method developed by Werner, Millis, Troyer, et al. These impurity solvers are part of the ALPS-DMFT application package and are accompanied by an implementation of dynamical mean-field self-consistency equations for (single orbital single site) dynamical mean-field problems with arbitrary densities of states.
Program summary
Program title: dmft
Catalogue identifier: AEIL_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIL_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: ALPS LIBRARY LICENSE version 1.1
No. of lines in distributed program, including test data, etc.: 899 806
No. of bytes in distributed program, including test data, etc.: 32 153 916
Distribution format: tar.gz
Programming language: C++
Operating system: The ALPS libraries have been tested on the following platforms and compilers: Linux with GNU Compiler Collection (g++ version 3.1 and higher) and Intel C++ Compiler (icc version 7.0 and higher); MacOS X with GNU Compiler (g++ Apple-version 3.1, 3.3 and 4.0); IBM AIX with Visual Age C++ (xlC version 6.0) and GNU (g++ version 3.1 and higher) compilers; Compaq Tru64 UNIX with Compaq C++ Compiler (cxx); SGI IRIX with MIPSpro C++ Compiler (CC); HP-UX with HP C++ Compiler (aCC); Windows with Cygwin or coLinux platforms and GNU Compiler Collection (g++ version 3.1 and higher)
RAM: 10 MB-1 GB
Classification: 7.3
External routines: ALPS [1], BLAS/LAPACK, HDF5
Nature of problem: (See [2].) Quantum impurity models describe an atom or molecule embedded in a host material with which it can exchange electrons. They are basic to nanoscience as
Stochastic calculus for uncoupled continuous-time random walks.
Germano, Guido; Politi, Mauro; Scalas, Enrico; Schilling, René L
2009-06-01
The continuous-time random walk (CTRW) is a pure-jump stochastic process with several applications not only in physics but also in insurance, finance, and economics. A definition is given for a class of stochastic integrals driven by a CTRW, which includes the Itō and Stratonovich cases. An uncoupled CTRW with zero-mean jumps is a martingale. It is proved that, as a consequence of the martingale transform theorem, if the CTRW is a martingale, the Itō integral is a martingale too. It is shown how the definition of the stochastic integrals can be used to easily compute them by Monte Carlo simulation. The relations between a CTRW, its quadratic variation, its Stratonovich integral, and its Itō integral are highlighted by numerical calculations when the jumps in space of the CTRW have a symmetric Lévy α-stable distribution and its waiting times have a one-parameter Mittag-Leffler distribution. Remarkably, these distributions have fat tails and an unbounded quadratic variation. In the diffusive limit of vanishing scale parameters, the probability density of this kind of CTRW satisfies the space-time fractional diffusion equation (FDE) or, more generally, the fractional Fokker-Planck equation, which generalizes the standard diffusion equation, solved by the probability density of the Wiener process, and thus provides a phenomenological model of anomalous diffusion. We also provide an analytic expression for the quadratic variation of the stochastic process described by the FDE and check it by Monte Carlo.
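The martingale property of the Itō integral driven by an uncoupled zero-mean CTRW can be checked by exactly the kind of Monte Carlo computation the abstract mentions; a sketch with exponential waiting times and ±1 jumps (deliberately simpler than the Mittag-Leffler/Lévy case studied in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def ito_integral_ctrw(T=10.0, lam=1.0):
    """One sample of the Ito integral  int_0^T X(t-) dX(t)  where X is an
    uncoupled CTRW with exponential(lam) waiting times and zero-mean
    +/-1 jumps (hence a martingale)."""
    t, x, integral = 0.0, 0.0, 0.0
    while True:
        t += rng.exponential(1.0 / lam)       # waiting time to next jump
        if t > T:
            return integral
        dx = rng.choice([-1.0, 1.0])          # zero-mean jump in space
        integral += x * dx                    # Ito: evaluate at left endpoint
        x += dx

samples = [ito_integral_ctrw() for _ in range(2000)]
mean_estimate = np.mean(samples)    # near 0, since the Ito integral is a martingale
```

Evaluating the integrand at the right endpoint instead would give the non-martingale (Stratonovich-like) variant, whose sample mean drifts with the accumulated quadratic variation.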
Effect of hydraulic retention time on continuous biocatalytic calcification reactor
International Nuclear Information System (INIS)
Isik, Mustafa; Altas, Levent; Kurmac, Yakup; Ozcan, Samet; Oruc, Ozcan
2010-01-01
High calcium concentrations in wastewaters are problematic because they lead to clogging of pipelines, boilers and heat exchangers through scaling (as carbonate, sulfate or phosphate precipitates), or to malfunctioning of aerobic and anaerobic reactors. As a remedy to this problem, the industry typically uses chemical crystallization reactors, which are efficient but often require complex monitoring and control and, as a drawback, can give rise to highly alkaline effluents. Biomineralization is emerging as an alternative mechanism for the removal of calcium from aqueous environments. Biocatalytic calcification reactors (BCR) utilize microbial urea hydrolysis by bacteria for the removal of calcium, as calcite, from industrial wastewater. The effect of hydraulic retention time (HRT) on calcium removal was studied with a continuous-feed BCR treating a simulated pulp and paper wastewater. The study showed that HRT is an important parameter and that an HRT of 5-6 h is optimal for calcium removal from calcium-rich wastewaters.
A Continuous-Time Model for Valuing Foreign Exchange Options
Directory of Open Access Journals (Sweden)
James J. Kung
2013-01-01
Full Text Available This paper makes use of stochastic calculus to develop a continuous-time model for valuing European options on foreign exchange (FX) when both domestic and foreign spot rates follow a generalized Wiener process. Using the dollar/euro exchange rate as input for parameter estimation and employing our FX option model as a yardstick, we find that the traditional Garman-Kohlhagen FX option model, which assumes constant spot rates, incorrectly values calls and puts for different values of the ratio of exchange rate to exercise price. Specifically, it undervalues calls when the ratio is between 0.70 and 1.08, and it overvalues calls when the ratio is between 1.18 and 1.30, whereas it overvalues puts when the ratio is between 0.70 and 0.82, and it undervalues puts when the ratio is between 0.86 and 1.30.
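For reference, the Garman-Kohlhagen benchmark used in the comparison prices a European FX call in closed form; a minimal sketch (the dollar/euro parameter values below are illustrative, not the paper's estimates):

```python
from math import log, sqrt, exp
from statistics import NormalDist

def garman_kohlhagen_call(S, K, T, r_d, r_f, sigma):
    """Garman-Kohlhagen price of a European FX call: spot S (domestic
    currency per unit of foreign), strike K, maturity T in years,
    constant domestic/foreign rates r_d/r_f, and volatility sigma."""
    N = NormalDist().cdf
    d1 = (log(S / K) + (r_d - r_f + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * exp(-r_f * T) * N(d1) - K * exp(-r_d * T) * N(d2)

# Illustrative at-the-money dollar/euro call
price = garman_kohlhagen_call(S=1.10, K=1.10, T=0.5, r_d=0.02, r_f=0.01, sigma=0.10)
```

The foreign rate plays the role of a continuous dividend yield; the paper's point is that treating r_d and r_f as constants, rather than generalized Wiener processes, biases prices across moneyness.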
Short-term wind power forecasting: probabilistic and space-time aspects
DEFF Research Database (Denmark)
Tastu, Julija
This work deals with the proposal and evaluation of new mathematical models and forecasting methods for short-term wind power forecasting, accounting for space-time dynamics based on geographically distributed information. Different forms of power predictions are considered, starting from traditional point … into the corresponding models are analysed. As a final step, emphasis is placed on generating space-time trajectories: this calls for the prediction of joint multivariate predictive densities describing wind power generation at a number of distributed locations and for a number of successive lead times. In addition … Optimal integration of wind energy into power systems calls for high-quality wind power predictions. State-of-the-art forecasting systems typically provide forecasts for every location individually, without taking into account information coming from neighbouring territories. It is however
Time-dependent reliability analysis of nuclear reactor operators using probabilistic network models
International Nuclear Information System (INIS)
Oka, Y.; Miyata, K.; Kodaira, H.; Murakami, S.; Kondo, S.; Togo, Y.
1987-01-01
Human factors are very important for the reliability of a nuclear power plant. Human behavior is essentially time-dependent in nature, and the details of thinking and decision-making processes are important for detailed analysis of human reliability. They have, however, not been well considered by conventional methods of human reliability analysis. The present paper describes models for time-dependent and detailed human reliability analysis. Recovery by an operator is taken into account, and two-operator models are also presented
Physical time scale in kinetic Monte Carlo simulations of continuous-time Markov chains.
Serebrinsky, Santiago A
2011-03-01
We rigorously establish a physical time scale for a general class of kinetic Monte Carlo algorithms for the simulation of continuous-time Markov chains. This class of algorithms encompasses rejection-free (or BKL) and rejection (or "standard") algorithms. For rejection algorithms, it was formerly considered that the availability of a physical time scale (instead of Monte Carlo steps) was empirical, at best. Use of Monte Carlo steps as a time unit now becomes completely unnecessary.
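The physical time scale for rejection-free (BKL) kinetic Monte Carlo follows the standard continuous-time Markov chain recipe: pick an event with probability proportional to its rate and advance the clock by an exponential increment governed by the total rate. A minimal sketch (the rate values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)

def bkl_step(rates):
    """One rejection-free (BKL) KMC step: choose an event with probability
    proportional to its rate, and advance physical time by dt = -ln(u)/R,
    where R is the total rate of the underlying continuous-time chain."""
    R = rates.sum()
    event = rng.choice(len(rates), p=rates / R)
    dt = -np.log(rng.random()) / R
    return event, dt

rates = np.array([1.0, 2.0, 3.0])
t, counts = 0.0, np.zeros(3)
for _ in range(30000):
    e, dt = bkl_step(rates)
    counts[e] += 1
    t += dt
# Event frequencies converge to rates/R; mean dt converges to 1/R,
# giving trajectories a physical clock rather than a Monte Carlo step count.
```

For rejection ("standard") algorithms the paper's result is that the same physical clock applies rigorously, so counting Monte Carlo steps as a time unit is unnecessary there too.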
Time limit and time at VO2max during a continuous and an intermittent run.
Demarie, S; Koralsztein, J P; Billat, V
2000-06-01
The purpose of this study was to verify, by track field tests, whether sub-elite runners (n=15) could (i) reach their VO2max while running at v50%delta, i.e. midway between the speed associated with the lactate threshold (vLAT) and that associated with maximal aerobic power (vVO2max), and (ii) whether an intermittent exercise provokes maximal and/or supramaximal oxygen consumption for longer than a continuous one. Within three days, subjects underwent a multistage incremental test during which their vVO2max and vLAT were determined; they then performed two additional testing sessions, in which continuous and intermittent running exercises at v50%delta were performed up to exhaustion. Subjects' gas exchange and heart rate were continuously recorded by means of a telemetric apparatus. Blood samples were taken from the fingertip and analysed for blood lactate concentration. In both the continuous and the intermittent tests, peak VO2 exceeded VO2max values as determined during the incremental test. However, in the intermittent exercise, peak VO2, time to exhaustion and time at VO2max reached significantly higher values, while blood lactate accumulation was significantly lower than in the continuous one. The v50%delta is sufficient to stimulate VO2max in both intermittent and continuous running. Intermittent exercise is more effective than continuous exercise at increasing maximal aerobic power, allowing a longer time at VO2max and yielding a higher peak VO2 with lower lactate accumulation.
Moghadas, Davood
2017-10-17
Monitoring spatiotemporal variations of soil water content (θ) is important across a range of research fields, including agricultural engineering, hydrology, meteorology and climatology. Low frequency electromagnetic induction (EMI) systems have proven to be useful tools in mapping soil apparent electrical conductivity (σa) and soil moisture. However, obtaining depth profile water content is an area that has not been fully explored using EMI. To examine this, we performed time-lapse EMI measurements using a CMD mini-Explorer sensor along a 10 m transect of a maize field over a 6 day period. Reference data were measured at the end of the profile via an excavated pit using 5TE capacitance sensors. In order to derive a time-lapse, depth-specific subsurface image of electrical conductivity (σ), we applied a probabilistic sampling approach, DREAM(ZS), on the measured EMI data. The inversely estimated σ values were subsequently converted to θ using the Rhoades et al. (1976) petrophysical relationship. The uncertainties in measured σa, as well as inaccuracies in the inverted data, introduced some discrepancies between estimated σ and reference values in time and space. Moreover, the disparity between the measurement footprints of the 5TE and CMD Mini-Explorer sensors also led to differences. The obtained θ permitted an accurate monitoring of the spatiotemporal distribution and variation of soil water content due to root water uptake and evaporation. The proposed EMI measurement and modeling technique also allowed for detecting temporal root zone soil moisture variations. The time-lapse θ monitoring approach developed using DREAM(ZS) thus appears to be a useful technique to understand spatiotemporal patterns of soil water content and provide insights into linked soil moisture vegetation processes and the dynamics of soil moisture/infiltration processes.
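The σ-to-θ conversion step can be sketched by inverting the Rhoades et al. (1976) relationship, written here in its commonly used quadratic form σ = σ_w·θ·(aθ + b) + σ_s; the calibration constants below are hypothetical, since in practice they are site-specific:

```python
import numpy as np

def rhoades_theta(sigma, sigma_w, a, b, sigma_s):
    """Invert the Rhoades-type petrophysical model
        sigma = sigma_w * theta * (a*theta + b) + sigma_s
    for volumetric water content theta: a quadratic in theta, of which
    we take the physically meaningful (positive) root."""
    A = a * sigma_w
    B = b * sigma_w
    C = sigma_s - np.asarray(sigma, float)
    return (-B + np.sqrt(B**2 - 4 * A * C)) / (2 * A)

# Hypothetical calibration constants -- site-specific in practice
theta = rhoades_theta(sigma=0.05, sigma_w=1.0, a=1.4, b=-0.11, sigma_s=0.01)
```

Applying this cell-wise to the depth-specific σ image produced by the DREAM(ZS) inversion yields the time-lapse θ profiles the abstract describes.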
Continuous data recording on fast real-time systems
Energy Technology Data Exchange (ETDEWEB)
Zabeo, L., E-mail: lzabeo@jet.u [Euratom-CCFE, Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom); Sartori, F. [Euratom-CCFE, Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom); Neto, A. [Associacao Euratom-IST, Instituto de Plasmas e Fusao Nuclear, Av. Rovisco Pais, 1049-001 Lisboa (Portugal); Piccolo, F. [Euratom-CCFE, Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom); Alves, D. [Associacao Euratom-IST, Instituto de Plasmas e Fusao Nuclear, Av. Rovisco Pais, 1049-001 Lisboa (Portugal); Vitelli, R. [Dipartimento di Informatica, Sistemi e Produzione, Universita di Roma, Tor Vergata, Via del Politecnico, 1-00133 Roma (Italy); Barbalace, A. [Euratom-ENEA Association, Consorzio RFX, 35127 Padova (Italy); De Tommasi, G. [Associazione EURATOM/ENEA/CREATE, Universita di Napoli Federico II, Napoli (Italy)
2010-07-15
The PCU Project, launched for the enhancement of the vertical stabilisation system at JET, required the design of a new real-time control system with the challenging specifications of 2 Gops and a cycle time of 50 µs. The RTAI-based architecture running on x86 multi-core processor technology proved to be the best platform for meeting these demanding requirements. Moreover, on this architecture, thanks to the smart allocation of interrupts, it was possible to demonstrate simultaneous data streaming at 50 MB/s on Ethernet while handling a real-time 100 kHz interrupt source with a maximum jitter of just 3 µs. Because of the memory limitation imposed by the 32-bit version of Linux running in kernel mode, the new RTAI-based controller allows a maximum practical data storage of 800 MB per pulse. While this amount of data is acceptable for normal JET operation, it posed some limitations on the debugging and commissioning of the system. In order to increase the data acquisition capability of the system, we have designed a mechanism that allows continuous full-bandwidth (56 MB/s) data streaming from the real-time task (running in kernel mode) to either a data collector (running in user mode) or an external data acquisition server. The architecture is a peer-to-peer mechanism in which the sender, running in RTAI kernel mode, broadcasts large chunks of data as UDP packets, implemented using the 'fcomm' RTAI extension, to a receiver that stores the data. The paper presents the results of the initial RTAI operating system tests, the design of the streaming architecture and the first experimental results.
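The chunked UDP streaming idea can be illustrated in miniature. This loopback sketch is not the 'fcomm' implementation (which runs in RTAI kernel mode); it only shows the sequence-numbered datagram scheme a sender and a data collector might use, with no loss handling.

```python
import socket

# Sender splits a large buffer into sequence-numbered UDP datagrams;
# receiver reassembles them in order. Loopback only, hypothetical framing.
CHUNK = 1024

recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))          # OS picks a free port
recv_sock.settimeout(1.0)
addr = recv_sock.getsockname()

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
payload = bytes(range(256)) * 16          # 4096 bytes of "pulse data"
chunks = [payload[i:i + CHUNK] for i in range(0, len(payload), CHUNK)]
for seq, chunk in enumerate(chunks):
    # 4-byte big-endian sequence number prepended to each chunk
    send_sock.sendto(seq.to_bytes(4, "big") + chunk, addr)

received = {}
for _ in chunks:
    datagram, _ = recv_sock.recvfrom(CHUNK + 4)
    received[int.from_bytes(datagram[:4], "big")] = datagram[4:]
reassembled = b"".join(received[s] for s in sorted(received))
```

A real deployment would add loss detection and flow control; the sequence numbers are what make out-of-order delivery recoverable.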
Requeno, José Ignacio; Colom, José Manuel
2014-12-01
Model checking is a generic verification technique that allows the phylogeneticist to focus on models and specifications instead of on implementation issues. Phylogenetic trees are considered as transition systems over which we interrogate phylogenetic questions written as formulas of temporal logic. Nonetheless, standard logics become insufficient for certain practices of phylogenetic analysis since they do not allow the inclusion of explicit time and probabilities. The aim of this paper is to extend the application of model checking techniques beyond qualitative phylogenetic properties and adapt the existing logical extensions and tools to the field of phylogeny. The introduction of time and probabilities in phylogenetic specifications is motivated by the study of a real example: the analysis of the ratio of lactose intolerance in some populations and the date of appearance of this phenotype.
International Nuclear Information System (INIS)
Schneeberger, B.; Breuleux, R.
1977-01-01
Assuming that earthquake ground motion is a stationary time function, the seismic analysis of a linear structure can be done by probabilistic methods using the 'power spectral density function' (PSD), instead of applying the more traditional time-step integration using earthquake time histories (TH). A given structure was analysed by both PSD and TH methods, computing and comparing 'floor response spectra'. The analysis using TH was performed for two different TH and different frequency intervals for the 'floor-response-spectra'. The analysis using PSD first produced PSD functions of the responses of the floors, which were then converted into 'floor-response-spectra'. Plots of the resulting 'floor-response-spectra' show: (1) The agreement of TH and PSD results is quite close. (2) The curves produced by PSD are much smoother than those produced by TH and mostly form an envelope of the latter. (3) The curves produced by TH are quite jagged, with the location and magnitude of the peaks depending on the choice of frequencies at which the 'floor-response-spectra' were evaluated and on the choice of TH. (Auth.)
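The PSD route can be sketched for a single-degree-of-freedom floor oscillator: the response PSD is the input PSD multiplied by the squared magnitude of the oscillator's transfer function, and the RMS response follows from the area under it. The flat input spectrum and the second-order transfer-function form below are illustrative assumptions, not the actual excitation or structure from the study.

```python
import numpy as np

# S_out(w) = |H(w)|^2 * S_in(w) for an illustrative second-order oscillator
wn, zeta = 2 * np.pi * 5.0, 0.05            # 5 Hz natural frequency, 5% damping
w = np.linspace(0.1, 2 * np.pi * 20, 2000)  # frequency axis, rad/s
S_in = np.full_like(w, 1e-3)                # assumed flat (white) input PSD
H2 = wn**4 / ((wn**2 - w**2) ** 2 + (2 * zeta * wn * w) ** 2)  # |H(w)|^2
S_out = H2 * S_in

dw = w[1] - w[0]
rms = np.sqrt(np.sum(S_out) * dw)           # RMS response = sqrt(area under PSD)
```

The smoothness of the PSD-derived curve, noted in point (2) above, is inherent: no particular time history is sampled, only the spectral transfer relation.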
A probabilistic approach for evaluation of load time history of an aircraft impact
International Nuclear Information System (INIS)
Zorn, N.F.; Schueller, G.I.; Riera, J.D.
1981-01-01
In the context of an overall structural reliability study for a containment located in the F.R. Germany, the external load case of aircraft impact is investigated. Previous investigations have been based on deterministic evaluations of the load time history. However, a close analysis of the input parameters, such as the mass distribution, the stiffness of the aircraft, the impact velocity and the impact angle, reveals their random properties. This in turn leads to a stochastic load time history, the parameters of which have been determined in this study. In other words, the randomness of the input parameters is introduced in the calculation of the load time history, and its influence with regard to the load magnitude and frequency content is determined. The statistical parameters such as the mean values and the standard deviations of the mechanical properties are evaluated directly from the design plans of the manufacturer for the aircraft Phantom F4-F. This includes rupture loads, mass distributions etc. The probability distributions of the crash velocity and impact angle are based on a thorough statistical evaluation of the crash histories of the airplane under consideration. Reference was made only to crashes which occurred in the F.R. Germany. (orig.)
From discrete-time models to continuous-time, asynchronous modeling of financial markets
Boer, Katalin; Kaymak, Uzay; Spiering, Jaap
2007-01-01
Most agent-based simulation models of financial markets are discrete-time in nature. In this paper, we investigate to what degree such models are extensible to continuous-time, asynchronous modeling of financial markets. We study the behavior of a learning market maker in a market with information
From Discrete-Time Models to Continuous-Time, Asynchronous Models of Financial Markets
K. Boer-Sorban (Katalin); U. Kaymak (Uzay); J. Spiering (Jaap)
2006-01-01
textabstractMost agent-based simulation models of financial markets are discrete-time in nature. In this paper, we investigate to what degree such models are extensible to continuous-time, asynchronous modelling of financial markets. We study the behaviour of a learning market maker in a market with
Verhoeven, Ronald; Dalmau Codina, Ramon; Prats Menéndez, Xavier; de Gelder, Nico
2014-01-01
In this paper an initial implementation of a real-time aircraft trajectory optimization algorithm is presented. The aircraft trajectory for descent and approach is computed for minimum use of thrust and speed brake in support of a "green" continuous descent and approach flight operation, while complying with ATC time constraints for maintaining runway throughput and co...
Chaos and unpredictability in evolution of cooperation in continuous time
You, Taekho; Kwon, Minji; Jo, Hang-Hyun; Jung, Woo-Sung; Baek, Seung Ki
2017-12-01
Cooperators benefit others at a cost to themselves. Evolution of cooperation crucially depends on the cost-benefit ratio of cooperation, denoted as c. In this work, we investigate the infinitely repeated prisoner's dilemma for various values of c with four of the representative memory-one strategies, i.e., unconditional cooperation, unconditional defection, tit-for-tat, and win-stay-lose-shift. We consider replicator dynamics, which deterministically describes how the fraction of each strategy evolves over time in an infinite-sized well-mixed population in the presence of implementation error and mutation among the four strategies. Our finding is that this three-dimensional continuous-time dynamics exhibits chaos through a bifurcation sequence similar to that of a logistic map as c varies. If mutation occurs with rate μ ≪ 1, the position of the bifurcation sequence on the c axis is numerically found to scale as μ^0.1, and such sensitivity to μ suggests that mutation may have nonperturbative effects on evolutionary paths. It demonstrates how the microscopic randomness of the mutation process can be amplified to macroscopic unpredictability by evolutionary dynamics.
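The replicator-with-mutation dynamics can be sketched generically. The 4x4 payoff matrix below is a hypothetical stand-in for the error-corrected repeated-game payoffs among the four memory-one strategies (it is not computed from the paper's model), and the Euler step and mutation kernel are simplifications for illustration.

```python
import numpy as np

# Hypothetical payoff matrix among AllC, AllD, TFT, WSLS (rows: focal strategy)
A = np.array([[3.0, 0.0, 3.0, 3.0],
              [5.0, 1.0, 1.2, 2.0],
              [3.0, 1.0, 3.0, 3.0],
              [3.0, 0.5, 3.0, 3.0]])
mu = 1e-3                      # mutation rate among the four strategies
x = np.full(4, 0.25)           # initial strategy fractions
dt = 0.01

for _ in range(5000):
    f = A @ x                  # fitness of each strategy
    phi = x @ f                # mean population fitness
    dx = x * (f - phi)         # replicator (selection) term
    dx += mu * (0.25 - x)      # uniform mutation toward equal fractions
    x = np.clip(x + dt * dx, 0.0, None)
    x /= x.sum()               # keep fractions on the simplex
```

Scanning the cost-benefit ratio c (which would reshape A) and recording asymptotic states is how a bifurcation sequence like the paper's would be traced.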
Continuous Fine-Fault Estimation with Real-Time GNSS
Norford, B. B.; Melbourne, T. I.; Szeliga, W. M.; Santillan, V. M.; Scrivner, C.; Senko, J.; Larsen, D.
2017-12-01
Thousands of real-time telemetered GNSS stations operate throughout the circum-Pacific that may be used for rapid earthquake characterization and estimation of local tsunami excitation. We report on the development of a GNSS-based finite-fault inversion system that continuously estimates slip using real-time GNSS position streams from the Cascadia subduction zone and which is being expanded throughout the circum-Pacific. The system uses 1 Hz precise point position streams computed in the ITRF14 reference frame using clock and satellite orbit corrections from the IGS. The software is implemented as seven independent modules that filter time series using Kalman filters, trigger and estimate coseismic offsets, invert for slip using a non-negative least squares method developed by Lawson and Hanson (1974) and elastic half-space Green's Functions developed by Okada (1985), smooth the results temporally and spatially, and write the resulting streams of time-dependent slip to a RabbitMQ messaging server for use by downstream modules such as tsunami excitation modules. Additional fault models can be easily added to the system for other circum-Pacific subduction zones as additional real-time GNSS data become available. The system is currently being tested using data from well-recorded earthquakes including the 2011 Tohoku earthquake, the 2010 Maule earthquake, the 2015 Illapel earthquake, the 2003 Tokachi-oki earthquake, the 2014 Iquique earthquake, the 2010 Mentawai earthquake, the 2016 Kaikoura earthquake, the 2016 Ecuador earthquake, the 2015 Gorkha earthquake, and others. Test data will be fed to the system and the resultant earthquake characterizations will be compared with published earthquake parameters. Seismic events will be assumed to occur on major faults, so, for example, only the San Andreas fault will be considered in Southern California, while the hundreds of other faults in the region will be ignored. Rake will be constrained along each subfault to be
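The inversion step can be sketched with synthetic data: scipy.optimize.nnls implements exactly the Lawson and Hanson (1974) non-negative least squares algorithm cited above, while the Green's function matrix and the "true" slip here are invented stand-ins for Okada (1985) kernels and a real fault geometry.

```python
import numpy as np
from scipy.optimize import nnls

# Toy slip inversion: Green's functions G map subfault slip to station offsets d.
rng = np.random.default_rng(0)
n_stations, n_subfaults = 12, 4
G = rng.uniform(0.1, 1.0, size=(n_stations, n_subfaults))  # stand-in Green's functions
true_slip = np.array([0.0, 1.5, 0.3, 0.0])                 # metres (hypothetical)
d = G @ true_slip + rng.normal(0, 1e-3, n_stations)        # synthetic GNSS offsets

# Non-negative least squares keeps the estimated slip physically one-signed,
# which is what the rake constraint on each subfault accomplishes.
slip, residual = nnls(G, d)
```

In the operational system this solve would run continuously on Kalman-filtered position streams, with temporal and spatial smoothing applied afterwards.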
Time inconsistency and reputation in monetary policy: a strategic model in continuous time
Li, Jingyuan; Tian, Guoqiang
2005-01-01
This article develops a model to examine the equilibrium behavior of the time inconsistency problem in a continuous time economy with stochastic and endogenized distortion. First, the authors introduce the notion of sequentially rational equilibrium, and show that the time inconsistency problem may be solved with trigger reputation strategies in a stochastic setting. The conditions for the existence of sequentially rational equilibrium are provided. Then, the concept of sequen...
Continuous time quantum random walks in free space
Eichelkraut, Toni; Vetter, Christian; Perez-Leija, Armando; Christodoulides, Demetrios; Szameit, Alexander
2014-05-01
We show theoretically and experimentally that two-dimensional continuous time coherent random walks are possible in free space, that is, in the absence of any external potential, by properly tailoring the associated initial wave function. These effects are experimentally demonstrated using classical paraxial light. Evidently, the usage of classical beams to explore the dynamics of point-like quantum particles is possible since both phenomena are mathematically equivalent. This in turn makes our approach suitable for the realization of random walks using different quantum particles, including electrons and photons. To study the spatial evolution of a wavefunction theoretically, we consider the one-dimensional paraxial wave equation (i∂_z + (1/2)∂_x²)Ψ = 0. Starting with the initially localized wavefunction Ψ(x, 0) = exp[-x²/2σ²] J₀(αx), one can show that the evolution of such Gaussian-apodized Bessel envelopes within a region of validity resembles the probability pattern of a quantum walker traversing a uniform lattice. In order to generate the desired input field in our experimental setting we shape the amplitude and phase of a collimated light beam originating from a classical HeNe laser (633 nm) utilizing a spatial light modulator.
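The free-space evolution of such an input can be checked numerically by spectral propagation of the paraxial equation, which is exact on a periodic grid. The grid extent, σ and α below are illustrative choices, not the experimental beam parameters.

```python
import numpy as np
from scipy.special import j0

# Spectral propagation of (i d/dz + (1/2) d^2/dx^2) psi = 0:
# in Fourier space each mode just acquires the phase exp(-i k^2 z / 2).
x = np.linspace(-60, 60, 2048)
dx = x[1] - x[0]
sigma, alpha = 10.0, 2.0
psi0 = np.exp(-x**2 / (2 * sigma**2)) * j0(alpha * x)  # Gaussian-apodized Bessel input

k = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)
z = 5.0
psi_z = np.fft.ifft(np.exp(-0.5j * k**2 * z) * np.fft.fft(psi0))

# Free propagation is unitary: the total power is conserved.
norm0 = np.sum(np.abs(psi0) ** 2) * dx
norm_z = np.sum(np.abs(psi_z) ** 2) * dx
```

Plotting |psi_z|² over a range of z values reproduces the lattice-walker-like interference pattern described in the abstract, within the input's region of validity.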
Anomalous transport in turbulent plasmas and continuous time random walks
International Nuclear Information System (INIS)
Balescu, R.
1995-01-01
The possibility of modelling anomalous transport in a turbulent plasma by a purely stochastic process is investigated. The theory of continuous time random walks (CTRW's) is briefly reviewed. It is shown that a particular class, called the standard long-tail CTRW's, is of special interest for the description of subdiffusive transport. Its evolution is described by a non-Markovian diffusion equation that is constructed in such a way as to yield exact values for all the moments of the density profile. The concept of a CTRW model is compared to an exact solution of a simple test problem: transport of charged particles in a fluctuating magnetic field in the limit of infinite perpendicular correlation length. Although the well-known behavior of the mean square displacement proportional to t^(1/2) is easily recovered, the exact density profile cannot be modeled by a CTRW. However, the quasilinear approximation of the kinetic equation has the form of a non-Markovian diffusion equation and can thus be generated by a CTRW.
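A minimal Monte Carlo sketch of a standard long-tail CTRW (waiting-time density ~ t^(-3/2), i.e. tail index α = 1/2, with unit ±1 jumps) reproduces the subdiffusive scaling of the mean square displacement. Walker counts and observation times are arbitrary choices for illustration.

```python
import numpy as np

# Long-tail CTRW: Pareto waiting times (minimum 1, tail index alpha = 1/2)
# and symmetric unit jumps give <x^2(t)> ~ t^(1/2) rather than ~ t.
rng = np.random.default_rng(1)
alpha, n_walkers = 0.5, 4000

def msd_at(T):
    # Mean square displacement over n_walkers independent walks observed at time T.
    total = 0.0
    for _ in range(n_walkers):
        t, x = 0.0, 0
        while True:
            t += rng.random() ** (-1.0 / alpha)    # Pareto(alpha) waiting time
            if t > T:
                break
            x += 1 if rng.random() < 0.5 else -1   # symmetric unit jump
        total += x * x
    return total / n_walkers

ratio = msd_at(4000.0) / msd_at(250.0)
# Time grew by a factor of 16: t^(1/2) scaling predicts a ratio near
# 16**0.5 = 4, well below the Brownian value of 16.
```

The clear gap between the measured ratio and the Brownian prediction is the numerical signature of subdiffusion.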
Inverse Ising problem in continuous time: A latent variable approach
Donner, Christian; Opper, Manfred
2017-12-01
We consider the inverse Ising problem: the inference of network couplings from observed spin trajectories for a model with continuous time Glauber dynamics. By introducing two sets of auxiliary latent random variables we render the likelihood into a form which allows for simple iterative inference algorithms with analytical updates. The variables are (1) Poisson variables to linearize an exponential term which is typical for point process likelihoods and (2) Pólya-Gamma variables, which make the likelihood quadratic in the coupling parameters. Using the augmented likelihood, we derive an expectation-maximization (EM) algorithm to obtain the maximum likelihood estimate of network parameters. Using a third set of latent variables we extend the EM algorithm to sparse couplings via L1 regularization. Finally, we develop an efficient approximate Bayesian inference algorithm using a variational approach. We demonstrate the performance of our algorithms on data simulated from an Ising model. For data which are simulated from a more biologically plausible network with spiking neurons, we show that the Ising model captures well the low order statistics of the data and how the Ising couplings are related to the underlying synaptic structure of the simulated network.
Continuous-time quantum algorithms for unstructured problems
International Nuclear Information System (INIS)
Hen, Itay
2014-01-01
We consider a family of unstructured optimization problems, for which we propose a method for constructing analogue, continuous-time (not necessarily adiabatic) quantum algorithms that are faster than their classical counterparts. In this family of problems, which we refer to as ‘scrambled input’ problems, one has to find a minimum-cost configuration of a given integer-valued n-bit black-box function whose input values have been scrambled in some unknown way. Special cases within this set of problems are Grover’s search problem of finding a marked item in an unstructured database, certain random energy models, and the functions of the Deutsch–Jozsa problem. We consider a couple of examples in detail. In the first, we provide an O(1) deterministic analogue quantum algorithm to solve the seminal problem of Deutsch and Jozsa, in which one has to determine whether an n-bit boolean function is constant (gives 0 on all inputs or 1 on all inputs) or balanced (returns 0 on half the input states and 1 on the other half). We also study one variant of the random energy model, and show that, as one might expect, its minimum energy configuration can be found quadratically faster with a quantum adiabatic algorithm than with classical algorithms. (paper)
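For comparison, the standard gate-model statement of the Deutsch-Jozsa promise (not the paper's analogue algorithm) can be checked in a few lines: after Hadamards on all qubits, a phase oracle (-1)^f(x), and Hadamards again, the amplitude of |0…0⟩ is ±1 for a constant f and exactly 0 for a balanced f.

```python
import numpy as np

def zero_amplitude(f, n):
    # Amplitude of |0...0> after H^n, phase oracle (-1)^f(x), H^n.
    N = 2 ** n
    state = np.full(N, 1 / np.sqrt(N))                    # H^n |0...0>
    state = state * np.array([(-1) ** f(x) for x in range(N)])  # phase oracle
    return state.sum() / np.sqrt(N)                       # overlap with |0...0>

n = 4
constant = lambda x: 1                       # constant function
balanced = lambda x: bin(x).count("1") % 2   # parity: balanced on n-bit inputs
a_const = zero_amplitude(constant, n)
a_bal = zero_amplitude(balanced, n)
```

Measuring the register then distinguishes the two cases with a single oracle query, which is the speedup the analogue construction in the paper also achieves.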
Stamenkovic, V.
2017-12-01
We focus on the connections between plate tectonics and planet composition, studying how plate yielding is affected by surface and mantle water, and by variable amounts of Fe, SiC, or radiogenic heat sources within the planet interior. We especially explore whether we can make any robust conclusions if we account for variable initial conditions, current uncertainties in model parameters and the pressure dependence of the viscosity, as well as uncertainties on how a variable composition affects mantle rheology, melting temperatures, and thermal conductivities. We use a 1D thermal evolution model to explore, with more than 200,000 simulations, the robustness of our results and use our previous results from 3D calculations to help determine the most likely scenario within the uncertainties we still face today. The results that are robust in spite of all uncertainties are that iron-rich mantle rock seems to reduce the efficiency of plate yielding on silicate planets like the Earth if those planets formed along or above the mantle solidus, and that carbon planets do not seem to be ideal candidates for plate tectonics because of slower creep rates and generally higher thermal conductivities for SiC. All other conclusions depend on not yet sufficiently constrained parameters. For the most likely case based on our current understanding, we find that, within our range of varied planet conditions (1-10 Earth masses), planets with the greatest efficiency of plate yielding are silicate rocky planets of 1 Earth mass with large metallic cores (average density 5500-7000 kg m-3) with minimal mantle concentrations of iron (as little as 0% is preferred) and radiogenic isotopes at formation (up to 10 times less than Earth's initial abundance; fewer heat sources do not mean no heat sources). Based on current planet formation scenarios and observations of stellar abundances across the Galaxy as well as models of the evolution of the interstellar medium, such planets are
Bod, R.; Heine, B.; Narrog, H.
2010-01-01
Probabilistic linguistics takes all linguistic evidence as positive evidence and lets statistics decide. It allows for accurate modelling of gradient phenomena in production and perception, and suggests that rule-like behaviour is no more than a side effect of maximizing probability. This chapter
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard; Burcharth, H. F.
This chapter describes how partial safety factors can be used in design of vertical wall breakwaters and an example of a code format is presented. The partial safety factors are calibrated on a probabilistic basis. The code calibration process used to calibrate some of the partial safety factors...
Simulating continuous-time Hamiltonian dynamics by way of a discrete-time quantum walk
International Nuclear Information System (INIS)
Schmitz, A.T.; Schwalm, W.A.
2016-01-01
Much effort has been made to connect the continuous-time and discrete-time quantum walks. We present a method for making that connection for a general graph Hamiltonian on a bigraph. Furthermore, such a scheme may be adapted for simulating discretized quantum models on a quantum computer. A coin operator is found for the discrete-time quantum walk which exhibits the same dynamics as the continuous-time evolution. Given the spectral decomposition of the graph Hamiltonian and certain restrictions, the discrete-time evolution is solved for explicitly and understood at or near important values of the parameters. Finally, this scheme is connected to past results for the 1D chain. - Highlights: • A discrete-time quantum walk is proposed which approximates a continuous-time quantum walk. • The proposed quantum walk could be used to simulate Hamiltonian dynamics on a quantum computer. • Given the spectral decomposition of the Hamiltonian, the quantum walk is solved explicitly. • The method is demonstrated and connected to previous work done on the 1D chain.
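The continuous-time side of the correspondence is easy to simulate: diagonalising a small graph Hamiltonian gives U(t) = exp(-iHt) directly from its spectral decomposition, which is the dynamics a matching discrete-time (coined) walk would have to reproduce. The 6-cycle and evolution time below are illustrative choices.

```python
import numpy as np

# Continuous-time quantum walk on a 6-cycle, H = adjacency matrix.
n = 6
H = np.zeros((n, n))
for j in range(n):
    H[j, (j + 1) % n] = H[(j + 1) % n, j] = 1.0

evals, evecs = np.linalg.eigh(H)   # spectral decomposition of H

def U(t):
    # U(t) = exp(-i H t) built from the eigendecomposition
    return evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T

psi0 = np.zeros(n, dtype=complex)
psi0[0] = 1.0                      # walker starts at vertex 0
p = np.abs(U(2.0) @ psi0) ** 2     # site occupation probabilities at t = 2
```

Unitarity keeps the probabilities normalised, and the reflection symmetry of the cycle about the starting vertex makes the distribution symmetric, two quick sanity checks for any discrete-time approximant as well.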
Time-aggregation effects on the baseline of continuous-time and discrete-time hazard models
ter Hofstede, F.; Wedel, M.
In this study we reinvestigate the effect of time-aggregation for discrete- and continuous-time hazard models. We reanalyze the results of a previous Monte Carlo study by ter Hofstede and Wedel (1998), in which the effects of time-aggregation on the parameter estimates of hazard models were
Anticontrol of chaos in continuous-time systems via time-delay feedback.
Wang, Xiao Fan; Chen, Guanrong; Yu, Xinghuo
2000-12-01
In this paper, a systematic design approach based on time-delay feedback is developed for anticontrol of chaos in a continuous-time system. This anticontrol method can drive a finite-dimensional, continuous-time, autonomous system from nonchaotic to chaotic, and can also enhance the existing chaos of an originally chaotic system. Asymptotic analysis is used to establish an approximate relationship between a time-delay differential equation and a discrete map. Anticontrol of chaos is then accomplished based on this relationship and the differential-geometry control theory. Several examples are given to verify the effectiveness of the methodology and to illustrate the systematic design procedure. (c) 2000 American Institute of Physics.
Finite-Time Stability and Controller Design of Continuous-Time Polynomial Fuzzy Systems
Directory of Open Access Journals (Sweden)
Xiaoxing Chen
2017-01-01
Full Text Available The finite-time stability and stabilization problem is first investigated for continuous-time polynomial fuzzy systems. The concept of finite-time stability and stabilization is given for polynomial fuzzy systems based on the ideas of classical references. A sum-of-squares (SOS) based approach is used to obtain the finite-time stability and stabilization conditions, which include some classical results as special cases. The proposed conditions can be solved with the help of the powerful Matlab toolbox SOSTOOLS and a semidefinite programming (SDP) solver. Finally, two numerical examples and one practical example are employed to illustrate the validity and effectiveness of the provided conditions.
Continuous time modelling of dynamical spatial lattice data observed at sparsely distributed times
DEFF Research Database (Denmark)
Rasmussen, Jakob Gulddahl; Møller, Jesper
2007-01-01
We consider statistical and computational aspects of simulation-based Bayesian inference for a spatial-temporal model based on a multivariate point process which is only observed at sparsely distributed times. The point processes are indexed by the sites of a spatial lattice, and they exhibit spatial interaction. For specificity we consider a particular dynamical spatial lattice data set which has previously been analysed by a discrete time model involving unknown normalizing constants. We discuss the advantages and disadvantages of using continuous time processes compared with discrete time processes in the setting of the present paper as well as other spatial-temporal situations.
Finite-Time H∞ Filtering for Linear Continuous Time-Varying Systems with Uncertain Observations
Directory of Open Access Journals (Sweden)
Huihong Zhao
2012-01-01
Full Text Available This paper is concerned with the finite-time H∞ filtering problem for linear continuous time-varying systems with uncertain observations and ℒ2-norm bounded noise. The design of the finite-time H∞ filter is equivalent to the problem of ensuring that a certain indefinite quadratic form has a minimum and that the filter makes this minimum positive. The quadratic form is related to a Krein state-space model according to the Krein space linear estimation theory. By using the projection theory in Krein space, the finite-time H∞ filtering problem is solved. A numerical example is given to illustrate the performance of the H∞ filter.
Voelkle, Manuel C; Oud, Johan H L
2013-02-01
When designing longitudinal studies, researchers often aim at equal intervals. In practice, however, this goal is hardly ever met, with different time intervals between assessment waves and different time intervals between individuals being more the rule than the exception. One of the reasons for the introduction of continuous time models by means of structural equation modelling has been to deal with irregularly spaced assessment waves (e.g., Oud & Delsing, 2010). In the present paper we extend the approach to individually varying time intervals for oscillating and non-oscillating processes. In addition, we show not only that equal intervals are unnecessary but also that it can be advantageous to use unequal sampling intervals, in particular when the sampling rate is low. Two examples are provided to support our arguments. In the first example we compare a continuous time model of a bivariate coupled process with varying time intervals to a standard discrete time model to illustrate the importance of accounting for the exact time intervals. In the second example the effect of different sampling intervals on estimating a damped linear oscillator is investigated by means of a Monte Carlo simulation. We conclude that it is important to account for individually varying time intervals, and encourage researchers to conceive of longitudinal studies with different time intervals within and between individuals as an opportunity rather than a problem. © 2012 The British Psychological Society.
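The core of the continuous time approach can be sketched in a few lines: given a drift matrix A, the discrete-time autoregressive matrix for any interval Δt is the matrix exponential exp(AΔt), so unequal intervals each simply get their own coefficient matrix, and intervals compose by the semigroup property. The drift values below are hypothetical.

```python
import numpy as np
from scipy.linalg import expm

# Stable bivariate drift matrix of a coupled continuous-time process (hypothetical)
A = np.array([[-0.5, 0.2],
              [0.1, -0.3]])

Phi1 = expm(A * 0.8)        # autoregression over, e.g., 0.8 years between waves 1 and 2
Phi2 = expm(A * 1.7)        # over 1.7 years between waves 2 and 3
Phi_total = expm(A * 2.5)   # over the combined 2.5-year span
```

Because exp(A·s)·exp(A·t) = exp(A·(s+t)), chaining the two unequal-interval matrices reproduces the whole-span matrix exactly; a discrete-time model with a single fixed lag cannot represent this when intervals vary.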
Stylised facts of financial time series and hidden Markov models in continuous time
DEFF Research Database (Denmark)
Nystrup, Peter; Madsen, Henrik; Lindström, Erik
2015-01-01
presents an extension to continuous time where it is possible to increase the number of states with a linear rather than quadratic growth in the number of parameters. The possibility of increasing the number of states leads to a better fit to both the distributional and temporal properties of daily returns....
Global dissipativity of continuous-time recurrent neural networks with time delay
International Nuclear Information System (INIS)
Liao Xiaoxin; Wang Jun
2003-01-01
This paper addresses the global dissipativity of a general class of continuous-time recurrent neural networks. First, the concepts of global dissipation and global exponential dissipation are defined and elaborated. Next, the sets of global dissipativity and global exponentially dissipativity are characterized using the parameters of recurrent neural network models. In particular, it is shown that the Hopfield network and cellular neural networks with or without time delays are dissipative systems
Echocardiography as an indication of continuous-time cardiac quiescence
Wick, C. A.; Auffermann, W. F.; Shah, A. J.; Inan, O. T.; Bhatti, P. T.; Tridandapani, S.
2016-07-01
Cardiac computed tomography (CT) angiography using prospective gating requires that data be acquired during intervals of minimal cardiac motion to obtain diagnostic images of the coronary vessels free of motion artifacts. This work is intended to assess B-mode echocardiography as a continuous-time indication of these quiescent periods to determine if echocardiography can be used as a cost-efficient, non-ionizing modality to develop new prospective gating techniques for cardiac CT. These new prospective gating approaches will not be based on echocardiography itself but on CT-compatible modalities derived from the mechanics of the heart (e.g. seismocardiography and impedance cardiography), unlike the current standard electrocardiogram. To this end, echocardiography and retrospectively-gated CT data were obtained from ten patients with varied cardiac conditions. CT reconstructions were made throughout the cardiac cycle. Motion of the interventricular septum (IVS) was calculated from both echocardiography and CT reconstructions using correlation-based, deviation techniques. The IVS was chosen because it (1) is visible in echocardiography images, whereas the coronary vessels generally are not, and (2) has been shown to be a suitable indicator of cardiac quiescence. Quiescent phases were calculated as the minima of IVS motion and CT volumes were reconstructed for these phases. The diagnostic quality of the CT reconstructions from phases calculated from echocardiography and CT data was graded on a four-point Likert scale by a board-certified radiologist fellowship-trained in cardiothoracic radiology. Using a Wilcoxon signed-rank test, no significant difference in the diagnostic quality of the coronary vessels was found between CT volumes reconstructed from echocardiography- and CT-selected phases. Additionally, there was a correlation of 0.956 between the echocardiography- and CT-selected phases. This initial work suggests that B-mode echocardiography can be used as a
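The correlation-based motion estimate can be illustrated in one dimension: the displacement of an intensity profile between two frames is the lag that maximizes their cross-correlation. The synthetic Gaussian "echo" below is a stand-in for an actual interventricular septum profile.

```python
import numpy as np

# Two frames of a 1D intensity profile; the feature moves by 4 pixels.
x = np.arange(256)
profile = np.exp(-((x - 100) ** 2) / 50.0)   # septum-like echo in frame 1
shifted = np.exp(-((x - 104) ** 2) / 50.0)   # same echo in frame 2, moved +4 px

# Mean-subtracted full cross-correlation; the argmax gives the shift.
corr = np.correlate(shifted - shifted.mean(), profile - profile.mean(), mode="full")
lag = int(corr.argmax()) - (len(profile) - 1)
```

Repeating this frame to frame yields a motion trace whose minima mark the quiescent phases used for gating.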
International Nuclear Information System (INIS)
Li Hongjie; Yue Dong
2010-01-01
The paper investigates the synchronization stability problem for a class of complex dynamical networks with Markovian jumping parameters and mixed time delays. The complex networks consist of m modes and the networks switch from one mode to another according to a Markovian chain with known transition probability. The mixed time delays are composed of discrete and distributed delays; the discrete time delay is assumed to be random and its probability distribution is known a priori. In terms of the probability distribution of the delays, a new type of system model with probability-distribution-dependent parameter matrices is proposed. Based on stochastic analysis techniques and the properties of the Kronecker product, delay-dependent synchronization stability criteria in the mean square are derived in the form of linear matrix inequalities which can be readily solved by using the LMI toolbox in MATLAB; the solvability of the derived conditions depends not only on the size of the delay, but also on the probability of the delay taking values in some intervals. Finally, a numerical example is given to illustrate the feasibility and effectiveness of the proposed method.
Probabilistic Logic and Probabilistic Networks
Haenni, R.; Romeijn, J.-W.; Wheeler, G.; Williamson, J.
2009-01-01
While in principle probabilistic logics might be applied to solve a range of problems, in practice they are rarely applied at present. This is perhaps because they seem disparate, complicated, and computationally intractable. However, we shall argue in this programmatic paper that several approaches
Wang, Lianzhen; Pei, Yulong
2014-09-01
This real road driving study was conducted to investigate the effects of driving time and rest time on the driving performance and recovery of commercial coach drivers. Thirty-three commercial coach drivers participated in the study, and were divided into three groups according to driving time: (a) 2 h, (b) 3 h, and (c) 4 h. The Stanford Sleepiness Scale (SSS) was used to assess the subjective fatigue level of the drivers. One-way ANOVA was employed to analyze the variation in driving performance. The statistical analysis revealed that driving time had a significant effect on the subjective fatigue and driving performance measures among the three groups. After 2 h of driving, both the subjective fatigue and driving performance measures began to deteriorate. After 4 h of driving, all of the driving performance indicators changed significantly except for depth perception. A certain amount of rest time eliminated the negative effects of fatigue. A 15-minute rest allowed drivers to recover from a two-hour driving task. This needed to be prolonged to 30 min for driving tasks of 3 to 4 h of continuous driving. Drivers' attention, reactions, operating ability, and perceptions are all affected in turn after over 2 h of continuous driving. Drivers should take a certain amount of rest to recover from the fatigue effects before they continue driving. Copyright © 2014 National Safety Council and Elsevier Ltd. All rights reserved.
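The one-way ANOVA step can be sketched with invented data: reaction times for the three driving-time groups with progressively degraded means. The numbers below are illustrative placeholders, not the study's measurements.

```python
import numpy as np
from scipy.stats import f_oneway

# Synthetic reaction times (seconds) for the 2 h, 3 h and 4 h groups,
# with means drifting upward to mimic fatigue (invented values).
rng = np.random.default_rng(7)
rt_2h = rng.normal(0.60, 0.05, 11)
rt_3h = rng.normal(0.68, 0.05, 11)
rt_4h = rng.normal(0.80, 0.05, 11)

F, p = f_oneway(rt_2h, rt_3h, rt_4h)   # one-way ANOVA across the three groups
```

A small p-value indicates that at least one group mean differs, which is the kind of evidence behind the reported effect of driving time on performance measures.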
Fernandez-Garcia, D.; Sanchez-Vila, X.; Bolster, D.; Tartakovsky, D. M.
2010-12-01
The release of non-aqueous phase liquids (NAPLs) such as petroleum hydrocarbons and chlorinated solvents in the subsurface is a severe source of groundwater and vapor contamination. Because these liquids are essentially immiscible and have low solubility, the contaminants dissolve slowly in groundwater and/or volatilize in the vadose zone, threatening the environment and public health over a long period. Many remediation technologies and strategies have been developed in recent decades for restoring the water quality of these contaminated sites. The failure of an on-site treatment technology is often due to the unnoticed presence of dissolved NAPL entrapped in low-permeability areas (heterogeneity) and/or the persistence of substantial amounts of pure phase after remediation efforts. Fully understanding the impact of remediation efforts is complicated by the many interlinked physical and biochemical processes taking place through several potential pathways of exposure to multiple receptors in a highly uncertain heterogeneous environment. Because of these difficulties, the design of remediation strategies and the definition of remediation endpoints have traditionally been determined without quantifying the risk associated with the failure of such efforts. We conduct a probabilistic risk analysis (PRA) of the likelihood of success of an on-site NAPL treatment technology that easily integrates all aspects of the problem (causes, pathways, and receptors) without extensive modeling. Importantly, the method can further incorporate the inherent uncertainty that often exists in the exact location where the dissolved NAPL plume leaves the source zone. This is achieved by describing the failure of the system as a function of this source-zone exit location, parameterized in terms of a vector of parameters. Using a Bayesian interpretation of the system and by means of the posterior multivariate distribution, the failure of the
An Expectation Maximization Algorithm to Model Failure Times by Continuous-Time Markov Chains
Directory of Open Access Journals (Sweden)
Qihong Duan
2010-01-01
In many applications, the failure rate function may present a bathtub-shaped curve. In this paper, an expectation maximization algorithm is proposed to construct a suitable continuous-time Markov chain which models the failure time data as the first time of reaching the absorbing state. Assume that the system is described by the method of supplementary variables, the device of stages, and so on. Given a data set, the maximum likelihood estimators of the initial distribution and the infinitesimal transition rates of the Markov chain can be obtained by our novel algorithm. Suppose that there are m transient states in the system and that there are n failure time data. The devised algorithm only needs to compute the exponential of m×m upper triangular matrices O(nm²) times in each iteration. Finally, the algorithm is applied to two real data sets, which indicates the practicality and efficiency of our algorithm.
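The central computation in each iteration of such an algorithm is evaluating the first-passage (phase-type) failure density f(t) = α·exp(St)·s0 for an upper triangular sub-generator S. A minimal sketch of that evaluation, using a hypothetical 2-transient-state chain with illustrative rates (not the paper's data or its actual EM updates), could look like this:

```python
import math

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(A, terms=20):
    """Matrix exponential via scaling-and-squaring with a truncated
    Taylor series; adequate for the small matrices used here."""
    n = len(A)
    norm = max(sum(abs(x) for x in row) for row in A)
    s = 0
    while norm > 0.5:          # scale until the norm is small
        norm /= 2.0
        s += 1
    As = [[x / (2 ** s) for x in row] for row in A]
    E = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    P = [row[:] for row in E]
    for k in range(1, terms):  # Taylor series of exp(As)
        P = [[x / k for x in row] for row in mat_mul(P, As)]
        E = [[E[i][j] + P[i][j] for j in range(n)] for i in range(n)]
    for _ in range(s):         # square back up
        E = mat_mul(E, E)
    return E

# Hypothetical chain: S is the upper triangular sub-generator over the
# transient states, s0 the exit rates to absorption (rows of [S | s0]
# sum to zero), alpha the initial distribution.
S = [[-2.0, 1.0],
     [0.0, -3.0]]
s0 = [1.0, 3.0]
alpha = [1.0, 0.0]

def failure_density(t):
    """f(t) = alpha * exp(S t) * s0, density of first absorption."""
    E = mat_exp([[S[i][j] * t for j in range(2)] for i in range(2)])
    return sum(alpha[i] * E[i][j] * s0[j]
               for i in range(2) for j in range(2))
```

For this toy chain the density integrates to one, as a first-passage density must; in a full EM implementation this evaluation is repeated O(nm²) times per iteration, as the abstract notes.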
Occupation times and ergodicity breaking in biased continuous time random walks
International Nuclear Information System (INIS)
Bel, Golan; Barkai, Eli
2005-01-01
Continuous time random walk (CTRW) models are widely used to model diffusion in condensed matter. There are two classes of such models, distinguished by the convergence or divergence of the mean waiting time. Systems with finite average sojourn time are ergodic, and thus Boltzmann-Gibbs statistics can be applied. We investigate the statistical properties of CTRW models with infinite average sojourn time; in particular, the occupation time probability density function is obtained. It is shown that in the non-ergodic phase the distribution of the occupation time of the particle on a given lattice point exhibits a bimodal U or trimodal W shape, related to the arcsine law. The key points are as follows. (a) In a CTRW with finite or infinite mean waiting time, the distribution of the number of visits to a lattice point is determined by the probability that a member of an ensemble of particles in equilibrium occupies the lattice point. (b) The asymmetry parameter of the probability distribution function of occupation times is related to the Boltzmann probability and to the partition function. (c) The ensemble average is given by Boltzmann-Gibbs statistics for either finite or infinite mean sojourn time, when detailed balance conditions hold. (d) A non-ergodic generalization of Boltzmann-Gibbs statistical mechanics for systems with infinite mean sojourn time is found.
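The U-shaped (bimodal) occupation-time statistics described above can be reproduced with a few lines of Monte Carlo. The sketch below is our own illustrative reduction to a two-site CTRW with power-law waiting times (exponent and time horizon are assumed values, not from the paper): for alpha < 1 the mean sojourn time diverges, and individual trajectories spend almost all of their time on one site or the other.

```python
import random

def occupation_fraction(alpha=0.5, total_time=1e6, rng=random):
    """Fraction of time a two-site CTRW spends on site A, with
    power-law waiting times w = u**(-1/alpha) whose mean diverges
    for alpha < 1 (non-ergodic phase)."""
    t, t_on_a, site = 0.0, 0.0, 0
    while t < total_time:
        w = rng.random() ** (-1.0 / alpha)   # heavy-tailed sojourn time
        w = min(w, total_time - t)           # clip at observation window
        if site == 0:
            t_on_a += w
        t += w
        site = 1 - site                      # hop to the other site
    return t_on_a / total_time

random.seed(1)
fractions = [occupation_fraction() for _ in range(500)]
# Arcsine-law-like behaviour: extreme occupation fractions are more
# common than balanced ones, so the histogram is U-shaped.
tails = sum(f < 0.1 or f > 0.9 for f in fractions)
middle = sum(0.4 <= f <= 0.6 for f in fractions)
```

With a finite mean waiting time (alpha > 1) the same experiment instead concentrates the fractions near 1/2, recovering ergodic behaviour.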
International Nuclear Information System (INIS)
Helmstetter, A.; Sornette, D.
2002-01-01
The epidemic-type aftershock sequence (ETAS) model is a simple stochastic process modeling seismicity, based on the two best-established empirical laws, the Omori law (power-law decay ∼1/t^(1+θ) of seismicity after an earthquake) and the Gutenberg-Richter law (power-law distribution of earthquake energies). In order to describe also the space distribution of seismicity, we use in addition a power-law distribution ∼1/r^(1+μ) of distances between triggered and triggering earthquakes. The ETAS model has been studied for the last two decades to model real seismicity catalogs and to obtain short-term probabilistic forecasts. Here, we present a mapping between the ETAS model and a class of CTRW (continuous time random walk) models, based on the identification of their corresponding master equations. This mapping allows us to use the wealth of results previously obtained on anomalous diffusion of CTRW. After translating into the relevant variables for the ETAS model, we provide a classification of the different regimes of diffusion of seismic activity triggered by a mainshock. Specifically, we derive the relation between the average distance between aftershocks and the mainshock as a function of the time from the mainshock, and the joint probability distribution of the times and locations of the aftershocks. The different regimes are fully characterized by the two exponents θ and μ. Our predictions are checked by careful numerical simulations. We stress the distinction between the 'bare' Omori law describing the seismic rate activated directly by a mainshock and the 'renormalized' Omori law taking into account all possible cascades from mainshocks to aftershocks of aftershocks of aftershocks, and so on. In particular, we predict that seismic diffusion or subdiffusion occurs and should be observable only when the observed Omori exponent is less than 1, because this signals the operation of the renormalization of the bare Omori law, also at the origin of seismic diffusion in
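The two power-law ingredients of the ETAS-to-CTRW mapping, waiting times ∼1/t^(1+θ) and jump lengths ∼1/r^(1+μ), are both Pareto-type distributions that can be drawn by inverse-transform sampling. The exponents and cutoffs below are illustrative assumptions, not values fitted to any catalog:

```python
import random

def power_law_sample(exponent, xmin, rng):
    """Inverse-transform sample from pdf ~ x^-(1 + exponent), x >= xmin:
    if u ~ Uniform(0, 1), then xmin * u**(-1/exponent) has this law."""
    return xmin * rng.random() ** (-1.0 / exponent)

rng = random.Random(7)
theta, mu = 0.2, 1.0   # illustrative Omori / jump-length exponents
t0, r0 = 1.0, 1.0      # assumed short-time and short-range cutoffs

# Times and distances of one generation of triggered events:
times = [power_law_sample(theta, t0, rng) for _ in range(20000)]
dists = [power_law_sample(mu, r0, rng) for _ in range(20000)]
```

For a Pareto law with exponent mu the analytic median is xmin * 2^(1/mu), which gives a quick sanity check on the sampler; cascading such samples through generations of triggers is what produces the renormalized Omori law discussed above.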
Cluster Observations of Non-Time Continuous Magnetosonic Waves
Walker, Simon N.; Demekhov, Andrei G.; Boardsen, Scott A.; Ganushkina, Natalia Y.; Sibeck, David G.; Balikhin, Michael A.
2016-01-01
Equatorial magnetosonic waves are normally observed as temporally continuous sets of emissions lasting from minutes to hours. Recent observations, however, have shown that this is not always the case. Using Cluster data, this study identifies two distinct forms of these non-temporally continuous emissions. The first, referred to as rising tone emissions, are characterized by the systematic onset of wave activity at increasing proton gyroharmonic frequencies. Sets of harmonic emissions (emission elements) are observed to occur periodically in the region ±10° of the geomagnetic equator. The sweep rate of these emissions maximizes at the geomagnetic equator. In addition, the ellipticity and propagation direction also change systematically as Cluster crosses the geomagnetic equator. It is shown that the observed frequency sweep rate is unlikely to result from the sideband instability related to nonlinear trapping of suprathermal protons in the wave field. The second form of emissions is characterized by the simultaneous onset of activity across a range of harmonic frequencies. These waves are observed at irregular intervals. Their occurrence correlates with changes in the spacecraft potential, a measurement that is used as a proxy for electron density. Thus, these waves appear to be trapped within regions of localized enhancement of the electron density.
Course Development Cycle Time: A Framework for Continuous Process Improvement.
Lake, Erinn
2003-01-01
Details Edinboro University's efforts to reduce the extended cycle time required to develop new courses and programs. Describes a collaborative process improvement framework, illustrated data findings, the team's recommendations for improvement, and the outcomes of those recommendations. (EV)
On synchronous parallel computations with independent probabilistic choice
International Nuclear Information System (INIS)
Reif, J.H.
1984-01-01
This paper introduces probabilistic choice to synchronous parallel machine models, in particular parallel RAMs. The power of probabilistic choice in parallel computations is illustrated by parallelizing some known probabilistic sequential algorithms. The authors characterize the computational complexity of time-, space-, and processor-bounded probabilistic parallel RAMs in terms of the computational complexity of probabilistic sequential RAMs. They show that parallelism uniformly speeds up time-bounded probabilistic sequential RAM computations by nearly a quadratic factor. They also show that probabilistic choice can be eliminated from parallel computations by introducing nonuniformity.
An automated quasi-continuous capillary refill timing device
International Nuclear Information System (INIS)
Blaxter, L L; Morris, D E; Crowe, J A; Hayes-Gill, B R; Henry, C; Hill, S; Sharkey, D; Vyas, H
2016-01-01
Capillary refill time (CRT) is a simple means of cardiovascular assessment which is widely used in clinical care. Currently, CRT is measured through manual assessment of the time taken for skin tone to return to normal colour following blanching of the skin surface. There is evidence to suggest that manually assessed CRT is subject to bias from ambient light conditions, a lack of standardisation of both blanching time and manually applied pressure, subjectiveness of return to normal colour, and variability in the manual assessment of time. We present a novel automated system for CRT measurement, incorporating three components: a non-invasive adhesive sensor incorporating a pneumatic actuator, a diffuse multi-wavelength reflectance measurement device, and a temperature sensor; a battery-operated datalogger unit containing a self-contained pneumatic supply; and PC-based data analysis software for the extraction of refill time, patient skin surface temperature, and sensor signal quality. Through standardisation of the test, it is hoped that some of the shortcomings of manual CRT can be overcome. In addition, an automated system will facilitate easier integration of CRT into electronic record keeping and clinical monitoring or scoring systems, as well as reducing demands on clinicians. A summary analysis of volunteer (n = 30) automated CRT datasets is presented, from 15 healthy adults and 15 healthy children (aged from 5 to 15 years), as their arms were cooled from ambient temperature to 5°C. A more detailed analysis of two typical datasets is also presented, demonstrating that the response of automated CRT to cooling matches that of previously published studies.
Mental time travel: a case for evolutionary continuity.
Corballis, Michael C
2013-01-01
In humans, hippocampal activity responds to the imagining of past or future events. In rats, hippocampal activity is tied to particular locations in a maze, occurs after the animal has been in the maze, and sometimes corresponds to locations the animal did not actually visit. This suggests that mental time travel has neurophysiological underpinnings that go far back in evolution, and may not be, as some (including myself) have claimed, unique to humans. Copyright © 2012 Elsevier Ltd. All rights reserved.
Continuous radon measurements in schools: time variations and related parameters
International Nuclear Information System (INIS)
Giovani, C.; Cappelletto, C.; Garavaglia, M.; Pividore, S.; Villalta, R.
2004-01-01
Some results are reported of observations made within a four-year survey, during different seasons and in different conditions of school building use. Natural radon variations (day-night cycles, seasonal and temperature-dependent variations, etc.) and artificial ones (opening of windows, weekends and vacations, deployment of air conditioning or heating systems, etc.) were investigated as parameters affecting time-dependent radon concentrations. (P.A.)
Real-time continuous nitrate monitoring in Illinois in 2013
Warner, Kelly L.; Terrio, Paul J.; Straub, Timothy D.; Roseboom, Donald; Johnson, Gary P.
2013-01-01
Many sources contribute to the nitrogen found in surface water in Illinois. Illinois is located in the most productive agricultural area in the country, and nitrogen fertilizer is commonly used to maximize corn production in this area. Additionally, septic/wastewater systems, industrial emissions, and lawn fertilizer are common sources of nitrogen in urban areas of Illinois. In agricultural areas, the use of fertilizer has increased grain production to meet the needs of a growing population, but also has resulted in increases in nitrogen concentrations in many streams and aquifers (Dubrovsky and others, 2010). The urban sources can increase nitrogen concentrations, too. The Federal limit for nitrate nitrogen in water that is safe to drink is 10 milligrams per liter (mg/L) (http://water.epa.gov/drink/contaminants/basicinformation/nitrate.cfm, accessed on May 24, 2013). In addition to the concern with nitrate nitrogen in drinking water, nitrogen, along with phosphorus, is an aquatic concern because it feeds the intensive growth of algae that are responsible for the hypoxic zone in the Gulf of Mexico. The largest nitrogen flux to the waters feeding the Gulf of Mexico is from Illinois (Alexander and others, 2008). Most studies of nitrogen in surface water and groundwater include samples for nitrate nitrogen collected weekly or monthly, but nitrate concentrations can change rapidly and these discrete samples may not capture rapid changes in nitrate concentrations that can affect human and aquatic health. Continuous monitoring for nitrate could inform scientists and water-resource managers of these changes and provide information on the transport of nitrate in surface water and groundwater.
Continuous-time random walks on networks with vertex- and time-dependent forcing.
Angstmann, C N; Donnelly, I C; Henry, B I; Langlands, T A M
2013-08-01
We have investigated the transport of particles moving as random walks on the vertices of a network, subject to vertex- and time-dependent forcing. We have derived the generalized master equations for this transport using continuous time random walks, characterized by jump and waiting time densities, as the underlying stochastic process. The forcing is incorporated through a vertex- and time-dependent bias in the jump densities governing the random walking particles. As a particular case, we consider particle forcing proportional to the concentration of particles on adjacent vertices, analogous to self-chemotactic attraction in a spatial continuum. Our algebraic and numerical studies of this system reveal an interesting pair-aggregation pattern formation in which the steady state is composed of a high concentration of particles on a small number of isolated pairs of adjacent vertices. The steady states do not exhibit this pair aggregation if the transport is random on the vertices, i.e., without forcing. The manifestation of pair aggregation on a transport network may thus be a signature of self-chemotactic-like forcing.
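The concentration-dependent bias described above can be illustrated with a crude mean-field sketch. The ring topology, synchronous update, and the specific bias rule below are our own simplifying assumptions (the paper works with general jump and waiting-time densities in generalized master equations): each vertex's mass jumps to a neighbour with probability proportional to the neighbour's current concentration, a self-chemotactic-like forcing.

```python
import random

def step(c, eps=1e-12):
    """One synchronous update of mean-field particle concentrations on a
    ring network. All mass on vertex i jumps, split between the two
    neighbours with a bias proportional to the neighbours' current
    concentrations (eps avoids division by zero)."""
    n = len(c)
    new = [0.0] * n
    for i in range(n):
        left, right = c[(i - 1) % n], c[(i + 1) % n]
        total = left + right + eps
        new[(i - 1) % n] += c[i] * left / total    # biased jump left
        new[(i + 1) % n] += c[i] * right / total   # biased jump right
    return new

random.seed(3)
c = [1.0 + 0.01 * random.random() for _ in range(20)]  # near-uniform start
mass0 = sum(c)                                         # conserved quantity
for _ in range(200):
    c = step(c)
```

Total mass is conserved exactly by construction, while the positive feedback in the bias lets small fluctuations grow; in the paper's full model this leads to the pair-aggregation steady states, with mass oscillating between isolated pairs of adjacent vertices.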
Directory of Open Access Journals (Sweden)
Mikaël Cozic
2016-11-01
The modeling of awareness and unawareness is a significant topic in the doxastic logic literature, where it is usually tackled in terms of full belief operators. The present paper aims at a treatment in terms of partial belief operators. It draws upon the modal probabilistic logic that was introduced by Aumann (1999) at the semantic level, and then axiomatized by Heifetz and Mongin (2001). The paper embodies in this framework those properties of unawareness that have been highlighted in the seminal paper by Modica and Rustichini (1999). Their paper deals with full belief, but we argue that the properties in question also apply to partial belief. Our main result is a (soundness and completeness) theorem that reunites the two strands, modal and probabilistic, of doxastic logic.
Chen, Xiaofeng; Song, Qiankun; Li, Zhongshan; Zhao, Zhenjiang; Liu, Yurong
2018-07-01
This paper addresses the problem of stability for continuous-time and discrete-time quaternion-valued neural networks (QVNNs) with linear threshold neurons. Applying the semidiscretization technique to the continuous-time QVNNs, the discrete-time analogs are obtained, which preserve the dynamical characteristics of their continuous-time counterparts. Via the plural decomposition method of quaternion, homeomorphic mapping theorem, as well as Lyapunov theorem, some sufficient conditions on the existence, uniqueness, and global asymptotical stability of the equilibrium point are derived for the continuous-time QVNNs and their discrete-time analogs, respectively. Furthermore, a uniform sufficient condition on the existence, uniqueness, and global asymptotical stability of the equilibrium point is obtained for both continuous-time QVNNs and their discrete-time version. Finally, two numerical examples are provided to substantiate the effectiveness of the proposed results.
28 CFR 301.204 - Continuation of lost-time wages.
2010-07-01
... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Continuation of lost-time wages. 301.204... ACCIDENT COMPENSATION Lost-Time Wages § 301.204 Continuation of lost-time wages. (a) Once approved, the inmate shall receive lost-time wages until the inmate: (1) Is released; (2) Is transferred to another...
Accurate Lithium-ion battery parameter estimation with continuous-time system identification methods
International Nuclear Information System (INIS)
Xia, Bing; Zhao, Xin; Callafon, Raymond de; Garnier, Hugues; Nguyen, Truong; Mi, Chris
2016-01-01
Highlights: • Continuous-time system identification is applied in Lithium-ion battery modeling. • Continuous-time and discrete-time identification methods are compared in detail. • The instrumental variable method is employed to further improve the estimation. • Simulations and experiments validate the advantages of continuous-time methods. - Abstract: The modeling of Lithium-ion batteries usually utilizes discrete-time system identification methods to estimate parameters of discrete models. However, in real applications, there is a fundamental limitation of the discrete-time methods in dealing with sensitivity when the system is stiff and the storage resolutions are limited. To overcome this problem, this paper adopts direct continuous-time system identification methods to estimate the parameters of equivalent circuit models for Lithium-ion batteries. Compared with discrete-time system identification methods, the continuous-time system identification methods provide more accurate estimates of both fast and slow dynamics in battery systems and are less sensitive to disturbances. A case of a 2nd-order equivalent circuit model is studied, which shows that the continuous-time estimates are more robust to high sampling rates, measurement noises and rounding errors. In addition, the estimation by the conventional continuous-time least squares method is further improved in the case of noisy output measurement by introducing the instrumental variable method. Simulation and experiment results validate the analysis and demonstrate the advantages of the continuous-time system identification methods in battery applications.
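The flavour of continuous-time parameter recovery for an equivalent circuit model can be shown with a deliberately simple, noise-free sketch. Everything here is a toy assumption: a 1st-order (not the paper's 2nd-order) RC model, made-up parameter values, a known steady-state voltage, and a log-linear least-squares fit instead of the paper's instrumental-variable method.

```python
import math

# Hypothetical first-order equivalent-circuit parameters (illustrative):
# series resistance R0, RC branch (R1, C1), discharge current I, and OCV.
R0, R1, C1, I, OCV = 0.05, 0.03, 2000.0, 2.0, 3.7
tau = R1 * C1

def terminal_voltage(t):
    """Noise-free terminal voltage under a constant discharge current I."""
    return OCV - I * R0 - I * R1 * (1.0 - math.exp(-t / tau))

ts = [float(i) for i in range(1, 300)]       # 1 Hz samples
vs = [terminal_voltage(t) for t in ts]

# Ohmic resistance from the instantaneous voltage drop at t -> 0+:
R0_hat = (OCV - terminal_voltage(0.0)) / I

# The relaxation term is I*R1*exp(-t/tau), so log(v - v_inf) is linear
# in t; v_inf is assumed known here to keep the sketch linear.
v_inf = OCV - I * (R0 + R1)
ys = [math.log(v - v_inf) for v in vs]
n = len(ts)
sx, sy = sum(ts), sum(ys)
sxx = sum(t * t for t in ts)
sxy = sum(t * y for t, y in zip(ts, ys))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
tau_hat = -1.0 / slope                          # recovered time constant
R1_hat = math.exp((sy - slope * sx) / n) / I    # intercept gives I*R1
```

On noise-free data this recovers R0, R1, and tau exactly; it is precisely under measurement noise and coarse storage resolution that the plain least-squares step degrades and the instrumental-variable refinement discussed in the abstract pays off.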
Probabilistic Forecasting of the Wave Energy Flux
DEFF Research Database (Denmark)
Pinson, Pierre; Reikard, G.; Bidlot, J.-R.
2012-01-01
Wave energy will certainly have a significant role to play in the deployment of renewable energy generation capacities. As with wind and solar, probabilistic forecasts of wave power over horizons of a few hours to a few days are required for power system operation as well as trading in electricity...% and 70% in terms of Continuous Rank Probability Score (CRPS), depending upon the test case and the lead time. It is finally shown that the log-Normal assumption can be seen as acceptable, even though it may be refined in the future....
Arbitrage and Hedging in a non probabilistic framework
Alvarez, Alexander; Ferrando, Sebastian; Olivares, Pablo
2011-01-01
The paper studies the concepts of hedging and arbitrage in a non-probabilistic framework. It provides conditions for non-probabilistic arbitrage based on the topological structure of the trajectory space and makes connections with the usual notion of arbitrage. Several examples illustrate non-probabilistic arbitrage as well as perfect replication of options under continuous and discontinuous trajectories; the results can then be applied in probabilistic models path by path. The approach is r...
Probabilistic Graph Layout for Uncertain Network Visualization.
Schulz, Christoph; Nocaj, Arlind; Goertler, Jochen; Deussen, Oliver; Brandes, Ulrik; Weiskopf, Daniel
2017-01-01
We present a novel uncertain network visualization technique based on node-link diagrams. Nodes expand spatially in our probabilistic graph layout, depending on the underlying probability distributions of edges. The visualization is created by computing a two-dimensional graph embedding that combines samples from the probabilistic graph. A Monte Carlo process is used to decompose a probabilistic graph into its possible instances and to continue with our graph layout technique. Splatting and edge bundling are used to visualize point clouds and network topology. The results provide insights into probability distributions for the entire network-not only for individual nodes and edges. We validate our approach using three data sets that represent a wide range of network types: synthetic data, protein-protein interactions from the STRING database, and travel times extracted from Google Maps. Our approach reveals general limitations of the force-directed layout and allows the user to recognize that some nodes of the graph are at a specific position just by chance.
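The Monte Carlo decomposition step, sampling deterministic graph instances from the probabilistic graph before layout, can be sketched in a few lines. The three-node graph, its edge probabilities, and the connectivity statistic computed below are illustrative assumptions; the paper feeds the sampled instances into a force-directed layout with splatting and edge bundling rather than computing connectivity.

```python
import random

# Hypothetical probabilistic graph: each edge exists independently
# with the given probability.
nodes = {"a", "b", "c"}
edges = {("a", "b"): 0.9, ("b", "c"): 0.5, ("a", "c"): 0.2}

def sample_instance(rng):
    """Draw one deterministic graph instance from the probabilistic graph."""
    return [e for e, p in edges.items() if rng.random() < p]

def connected(edge_list):
    """Depth-first check that the instance connects all nodes."""
    adj = {v: set() for v in nodes}
    for u, v in edge_list:
        adj[u].add(v)
        adj[v].add(u)
    seen, stack = {"a"}, ["a"]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == nodes

rng = random.Random(42)
p_connected = sum(connected(sample_instance(rng))
                  for _ in range(10000)) / 10000.0
```

For this toy graph the exact probability of full connectivity is 0.55 (at least two of the three edges present), so the Monte Carlo estimate gives a quick correctness check; in the visualization pipeline the same sampled instances drive per-instance layouts that are then aggregated.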
Probabilistic numerical discrimination in mice.
Berkay, Dilara; Çavdaroğlu, Bilgehan; Balcı, Fuat
2016-03-01
Previous studies showed that both human and non-human animals can discriminate between different quantities (i.e., time intervals, numerosities) with a limited level of precision due to their endogenous/representational uncertainty. In addition, other studies have shown that subjects can modulate their temporal categorization responses adaptively by incorporating information gathered regarding probabilistic contingencies into their time-based decisions. Despite the psychophysical similarities between the interval timing and nonverbal counting functions, the sensitivity of count-based decisions to probabilistic information remains an unanswered question. In the current study, we investigated whether exogenous probabilistic information can be integrated into numerosity-based judgments by mice. In the task employed in this study, reward was presented either after few (i.e., 10) or many (i.e., 20) lever presses, the last of which had to be emitted on the lever associated with the corresponding trial type. In order to investigate the effect of probabilistic information on performance in this task, we manipulated the relative frequency of different trial types across different experimental conditions. We evaluated the behavioral performance of the animals under models that differed in terms of their assumptions regarding the cost of responding (e.g., logarithmically increasing vs. no response cost). Our results showed for the first time that mice could adaptively modulate their count-based decisions based on the experienced probabilistic contingencies in directions predicted by optimality.
Integrating Continuous-Time and Discrete-Event Concepts in Process Modelling, Simulation and Control
Beek, van D.A.; Gordijn, S.H.F.; Rooda, J.E.; Ertas, A.
1995-01-01
Currently, modelling of systems in the process industry requires the use of different specification languages for the specification of the discrete-event and continuous-time subsystems. In this way, models are restricted to individual subsystems of either a continuous-time or discrete-event nature.
Selva, Jacopo; Costa, Antonio; Sandri, Laura; Rouwet, Dmtri; Tonini, Roberto; Macedonio, Giovanni; Marzocchi, Warner
2015-04-01
Probabilistic Volcanic Hazard Assessment (PVHA) represents the most complete scientific contribution for planning rational strategies aimed at mitigating the risk posed by volcanic activity at different time scales. The definition of the space-time window for PVHA is related to the kind of risk mitigation actions that are under consideration. Short temporal intervals (days to weeks) are important for short-term risk mitigation actions like the evacuation of a volcanic area. During volcanic unrest episodes or eruptions, it is of primary importance to produce short-term tephra fallout forecast, and frequently update it to account for the rapidly evolving situation. This information is obviously crucial for crisis management, since tephra may heavily affect building stability, public health, transportations and evacuation routes (airports, trains, road traffic) and lifelines (electric power supply). In this study, we propose a methodology named BET_VHst (Selva et al. 2014) for short-term PVHA of volcanic tephra dispersal based on automatic interpretation of measures from the monitoring system and physical models of tephra dispersal from all possible vent positions and eruptive sizes based on frequently updated meteorological forecasts. The large uncertainty at all the steps required for the analysis, both aleatory and epistemic, is treated by means of Bayesian inference and statistical mixing of long- and short-term analyses. The BET_VHst model is here presented through its implementation during two exercises organized for volcanoes in the Neapolitan area: MESIMEX for Mt. Vesuvius, and VUELCO for Campi Flegrei. References Selva J., Costa A., Sandri L., Macedonio G., Marzocchi W. (2014) Probabilistic short-term volcanic hazard in phases of unrest: a case study for tephra fallout, J. Geophys. Res., 119, doi: 10.1002/2014JB011252
International Nuclear Information System (INIS)
Huo Haifeng; Li Wantong
2009-01-01
This paper is concerned with the global stability characteristics of a system of equations modelling the dynamics of continuous-time bidirectional associative memory neural networks with impulses. Sufficient conditions which guarantee the existence of a unique equilibrium and its exponential stability are obtained. For the purpose of computation, discrete-time analogues of the corresponding continuous-time bidirectional associative memory neural networks with impulses are also formulated and studied. Our results show that both the continuous-time and discrete-time systems with impulses preserve the dynamics of the networks without impulses when we make some modifications and impose some additional conditions on the systems; the convergence characteristics of the networks are preserved by both continuous-time and discrete-time systems under some restrictions imposed on the impulse effects.
Schweizer, B
2005-01-01
Topics include special classes of probabilistic metric spaces, topologies, and several related structures, such as probabilistic normed and inner-product spaces. 1983 edition, updated with 3 new appendixes. Includes 17 illustrations.
Dynamical systems probabilistic risk assessment
Energy Technology Data Exchange (ETDEWEB)
Denman, Matthew R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ames, Arlo Leroy [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2014-03-01
Probabilistic Risk Assessment (PRA) is the primary tool used to risk-inform nuclear power regulatory and licensing activities. Risk-informed regulations are intended to reduce inherent conservatism in regulatory metrics (e.g., allowable operating conditions and technical specifications) which are built into the regulatory framework by quantifying both the total risk profile as well as the change in the risk profile caused by an event or action (e.g., in-service inspection procedures or power uprates). Dynamical Systems (DS) analysis has been used to understand unintended time-dependent feedbacks in both industrial and organizational settings. In dynamical systems analysis, feedback loops can be characterized and studied as a function of time to describe the changes to the reliability of plant Structures, Systems and Components (SSCs). While DS has been used in many subject areas, some even within the PRA community, it has not been applied toward creating long-time horizon, dynamic PRAs (with time scales ranging between days and decades depending upon the analysis). Understanding slowly developing dynamic effects, such as wear-out, on SSC reliabilities may be instrumental in ensuring a safely and reliably operating nuclear fleet. Improving the estimation of a plant's continuously changing risk profile will allow for more meaningful risk insights, greater stakeholder confidence in risk insights, and increased operational flexibility.
Soundness of Timed-Arc Workflow Nets in Discrete and Continuous-Time Semantics
DEFF Research Database (Denmark)
Mateo, Jose Antonio; Srba, Jiri; Sørensen, Mathias Grund
2015-01-01
Analysis of workflow processes with quantitative aspects like timing is of interest in numerous time-critical applications. We suggest a workflow model based on timed-arc Petri nets and study the foundational problems of soundness and strong (time-bounded) soundness. We first consider the discrete-t...
Application of probabilistic precipitation forecasts from a ...
African Journals Online (AJOL)
2014-02-14
The aim of this paper is to investigate the increase in the lead-time of flash flood warnings of the SAFFG using probabilistic precipitation forecasts from a deterministic model. The procedure is applied to a real flash flood event and the ensemble-based...
Elastic LiDAR Fusion: Dense Map-Centric Continuous-Time SLAM
Park, Chanoh; Moghadam, Peyman; Kim, Soohwan; Elfes, Alberto; Fookes, Clinton; Sridharan, Sridha
2017-01-01
The concept of continuous-time trajectory representation has brought increased accuracy and efficiency to multi-modal sensor fusion in modern SLAM. Regardless of these advantages, however, its offline nature, caused by the requirement of global batch optimization, critically hinders its relevance for real-time and life-long applications. In this paper, we present a dense map-centric SLAM method based on a continuous-time trajectory to cope with this problem. The proposed system locally f...
Dalmau Codina, Ramon; Prats Menéndez, Xavier
2017-01-01
Continuous descent operations with controlled times of arrival at one or several metering fixes could enable environmentally friendly procedures without compromising terminal airspace capacity. This paper focuses on controlled time of arrival updates once the descent has already been initiated, assessing the feasible time window (and associated fuel consumption) of continuous descent operations requiring neither thrust nor speed-brake usage along the whole descent (i.e. only elevator control ...
CMOS continuous-time adaptive equalizers for high-speed serial links
Gimeno Gasca, Cecilia; Aldea Chagoyen, Concepción
2015-01-01
This book introduces readers to the design of adaptive equalization solutions integrated in standard CMOS technology for high-speed serial links. Since continuous-time equalizers offer various advantages as an alternative to discrete-time equalizers at multi-gigabit rates, this book provides a detailed description of continuous-time adaptive equalizer design, both at transistor and system levels, their main characteristics and performances. The authors begin with a complete review and analysis of the state of the art of equalizers for wireline applications, describing why they are necessary, their types, and their main applications. Next, theoretical fundamentals of continuous-time adaptive equalizers are explored. Then, new structures are proposed to implement the different building blocks of the adaptive equalizer: line equalizer, loop-filters, power comparator, etc. The authors demonstrate the design of a complete low-power, low-voltage, high-speed, continuous-time adaptive equalizer. Finally, a cost-...
Elliott, Thomas J.; Gu, Mile
2018-03-01
Continuous-time stochastic processes pervade everyday experience, and the simulation of models of these processes is of great utility. Classical models of systems operating in continuous-time must typically track an unbounded amount of information about past behaviour, even for relatively simple models, enforcing limits on precision due to the finite memory of the machine. However, quantum machines can require less information about the past than even their optimal classical counterparts to simulate the future of discrete-time processes, and we demonstrate that this advantage extends to the continuous-time regime. Moreover, we show that this reduction in the memory requirement can be unboundedly large, allowing for arbitrary precision even with a finite quantum memory. We provide a systematic method for finding superior quantum constructions, and a protocol for analogue simulation of continuous-time renewal processes with a quantum machine.
Directory of Open Access Journals (Sweden)
Mokaedi V. Lekgari
2014-01-01
We investigate random-time state-dependent Foster-Lyapunov analysis on subgeometric rate ergodicity of continuous-time Markov chains (CTMCs). We are mainly concerned with making use of the available results on deterministic state-dependent drift conditions for CTMCs and on random-time state-dependent drift conditions for discrete-time Markov chains, and with transferring them to CTMCs.
Probabilistic reasoning in data analysis.
Sirovich, Lawrence
2011-09-20
This Teaching Resource provides lecture notes, slides, and a student assignment for a lecture on probabilistic reasoning in the analysis of biological data. General probabilistic frameworks are introduced, and a number of standard probability distributions are described using simple intuitive ideas. Particular attention is focused on random arrivals that are independent of prior history (Markovian events), with an emphasis on waiting times, Poisson processes, and Poisson probability distributions. The use of these various probability distributions is applied to biomedical problems, including several classic experimental studies.
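The lecture's central point, that Markovian arrivals have exponential waiting times and Poisson window counts, can be checked with a short simulation. The rate and sample size below are arbitrary illustrative choices, not values from the teaching resource:

```python
import numpy as np

rng = np.random.default_rng(0)
rate = 2.0            # arrivals per unit time (illustrative)
n = 200_000

# Inter-arrival (waiting) times of a Poisson process are exponential(rate).
waits = rng.exponential(1.0 / rate, size=n)

# Counts of arrivals in unit-length windows are Poisson(rate), so the
# sample mean and variance of the counts should nearly coincide.
arrivals = np.cumsum(waits)
counts, _ = np.histogram(arrivals, bins=np.arange(0.0, arrivals[-1], 1.0))

mean_wait = waits.mean()                   # close to 1/rate = 0.5
dispersion = counts.var() / counts.mean()  # close to 1 for Poisson counts
```

The unit dispersion (variance equal to mean) is the classic fingerprint of arrivals that are independent of prior history.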
Delsing, M.J.M.H.; Oud, J.H.L.; Bruyn, E.E.J. De
2005-01-01
In family research, bidirectional influences between the family and the individual are usually analyzed in discrete time. Results from discrete time analysis, however, have been shown to be highly dependent on the length of the observation interval. Continuous time analysis using stochastic
Probabilistic population aging
2017-01-01
We merge two methodologies, prospective measures of population aging and probabilistic population forecasts. We compare the speed of change and variability in forecasts of the old age dependency ratio and the prospective old age dependency ratio as well as the same comparison for the median age and the prospective median age. While conventional measures of population aging are computed on the basis of the number of years people have already lived, prospective measures are computed also taking account of the expected number of years they have left to live. Those remaining life expectancies change over time and differ from place to place. We compare the probabilistic distributions of the conventional and prospective measures using examples from China, Germany, Iran, and the United States. The changes over time and the variability of the prospective indicators are smaller than those that are observed in the conventional ones. A wide variety of new results emerge from the combination of methodologies. For example, for Germany, Iran, and the United States the likelihood that the prospective median age of the population in 2098 will be lower than it is today is close to 100 percent. PMID:28636675
A New Approach to Rational Discrete-Time Approximations to Continuous-Time Fractional-Order Systems
Matos, Carlos; Ortigueira, Manuel
2012-01-01
Part 10: Signal Processing; In this paper a new approach to rational discrete-time approximations to continuous-time fractional-order systems of the form 1/(sα+p) is proposed. We will show that such a fractional-order LTI system can be decomposed into sub-systems. One has the classic behavior and the other is similar to a Finite Impulse Response (FIR) system. The conversion from continuous-time to discrete-time systems will be done using the Laplace transform inversion integr...
Hardware solution for continuous time-resolved burst detection of single molecules in flow
Wahl, Michael; Erdmann, Rainer; Lauritsen, Kristian; Rahn, Hans-Juergen
1998-04-01
Time Correlated Single Photon Counting (TCSPC) is a valuable tool for Single Molecule Detection (SMD). However, existing TCSPC systems did not support continuous data collection and processing as is desirable for applications such as SMD for e.g. DNA sequencing in a liquid flow. First attempts at using existing instrumentation in this kind of operation mode required additional routing hardware to switch between several memory banks and were not truly continuous. We have designed a hardware and software system to perform continuous real-time TCSPC based upon a modern solid state Time to Digital Converter (TDC). Short dead times of the fully digital TDC design combined with fast Field Programmable Gate Array logic permit a continuous data throughput as high as 3 Mcounts/s. The histogramming time may be set as short as 100 µs. Every histogram or every single fluorescence photon can be real-time tagged at 200 ns resolution, in addition to recording its arrival time relative to the excitation pulse. Continuous switching between memory banks permits concurrent histogramming and data read-out. The instrument provides a time resolution of 60 ps and up to 4096 histogram channels. The overall instrument response function, in combination with a low-cost picosecond diode laser and an inexpensive photomultiplier tube, was found to be 180 ps and well sufficient to measure sub-nanosecond fluorescence lifetimes.
Continuous and Discrete-Time Optimal Controls for an Isolated Signalized Intersection
Directory of Open Access Journals (Sweden)
Jiyuan Tan
2017-01-01
A classical control problem for an isolated oversaturated intersection is revisited with a focus on the optimal control policy to minimize total delay. The difference and connection between existing continuous-time planning models and recently proposed discrete-time planning models are studied. A gradient descent algorithm is proposed to convert the optimal control plan of the continuous-time model to the plan of the discrete-time model in many cases. Analytic proof and numerical tests for the algorithm are also presented. The findings shed light on the links between the two kinds of models.
Probabilistic analysis and related topics
Bharucha-Reid, A T
1983-01-01
Probabilistic Analysis and Related Topics, Volume 3 focuses on the continuity, integrability, and differentiability of random functions, including operator theory, measure theory, and functional and numerical analysis. The selection first offers information on the qualitative theory of stochastic systems and Langevin equations with multiplicative noise. Discussions focus on phase-space evolution via direct integration, phase-space evolution, linear and nonlinear systems, linearization, and generalizations. The text then ponders on the stability theory of stochastic difference systems and Marko
Probabilistic analysis and related topics
Bharucha-Reid, A T
1979-01-01
Probabilistic Analysis and Related Topics, Volume 2 focuses on the integrability, continuity, and differentiability of random functions, as well as functional analysis, measure theory, operator theory, and numerical analysis.The selection first offers information on the optimal control of stochastic systems and Gleason measures. Discussions focus on convergence of Gleason measures, random Gleason measures, orthogonally scattered Gleason measures, existence of optimal controls without feedback, random necessary conditions, and Gleason measures in tensor products. The text then elaborates on an
Probabilistic pathway construction.
Yousofshahi, Mona; Lee, Kyongbum; Hassoun, Soha
2011-07-01
Expression of novel synthesis pathways in host organisms amenable to genetic manipulations has emerged as an attractive metabolic engineering strategy to overproduce natural products, biofuels, biopolymers and other commercially useful metabolites. We present a pathway construction algorithm for identifying viable synthesis pathways compatible with balanced cell growth. Rather than exhaustive exploration, we investigate probabilistic selection of reactions to construct the pathways. Three different selection schemes are investigated for the selection of reactions: high metabolite connectivity, low connectivity and uniformly random. For all case studies, which involved a diverse set of target metabolites, the uniformly random selection scheme resulted in the highest average maximum yield. When compared to an exhaustive search enumerating all possible reaction routes, our probabilistic algorithm returned nearly identical distributions of yields, while requiring far less computing time (minutes vs. years). The pathways identified by our algorithm have previously been confirmed in the literature as viable, high-yield synthesis routes. Prospectively, our algorithm could facilitate the design of novel, non-native synthesis routes by efficiently exploring the diversity of biochemical transformations in nature. Copyright © 2011 Elsevier Inc. All rights reserved.
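The uniformly random selection scheme can be sketched on an invented toy network (the reaction names, metabolite labels and target T below are ours, not from the paper):

```python
import random

random.seed(6)

# Toy reaction network: each reaction maps a substrate set to a product set.
reactions = {
    "r1": ({"A"}, {"B"}),
    "r2": ({"B"}, {"C"}),
    "r3": ({"A"}, {"D"}),
    "r4": ({"D"}, {"C"}),
    "r5": ({"C"}, {"T"}),
}

def random_pathway(start, target, max_len=10):
    """Grow a pathway by uniformly random choice among applicable reactions."""
    pool, path = {start}, []
    for _ in range(max_len):
        if target in pool:
            return path
        usable = [r for r, (subs, _) in reactions.items() if subs <= pool]
        if not usable:
            return None
        r = random.choice(usable)      # the uniformly random selection scheme
        path.append(r)
        pool |= reactions[r][1]
    return path if target in pool else None

# Repeated probabilistic construction discovers alternative routes to T.
paths = set()
for _ in range(200):
    p = random_pathway("A", "T")
    if p:
        paths.add(tuple(p))
```

Repeating the construction samples the space of viable routes (here via B or via D) without exhaustively enumerating it, which is the efficiency argument the paper makes.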
Probabilistic biological network alignment.
Todor, Andrei; Dobra, Alin; Kahveci, Tamer
2013-01-01
Interactions between molecules are probabilistic events. An interaction may or may not happen with some probability, depending on a variety of factors such as the size, abundance, or proximity of the interacting molecules. In this paper, we consider the problem of aligning two biological networks. Unlike existing methods, we allow one of the two networks to contain probabilistic interactions. Allowing interaction probabilities makes the alignment more biologically relevant at the expense of explosive growth in the number of alternative topologies that may arise from different subsets of interactions that take place. We develop a novel method that efficiently and precisely characterizes this massive search space. We represent the topological similarity between pairs of aligned molecules (i.e., proteins) with the help of random variables and compute their expected values. We validate our method showing that, without sacrificing the running time performance, it can produce novel alignments. Our results also demonstrate that our method identifies biologically meaningful mappings under a comprehensive set of criteria used in the literature as well as the statistical coherence measure that we developed to analyze the statistical significance of the similarity of the functions of the aligned protein pairs.
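The key idea, scoring an alignment in expectation without enumerating the exponentially many topologies a probabilistic network can realize, rests on linearity of expectation. A minimal sketch with an invented random network and an identity alignment (both assumptions of ours):

```python
import numpy as np

rng = np.random.default_rng(7)

# Probabilistic network: edge (i, j) exists with probability P[i, j].
# Deterministic network: adjacency A. For brevity the alignment maps
# node i of one network to node i of the other.
n = 6
P = np.triu(rng.uniform(0, 1, (n, n)), 1)
A = np.triu(rng.integers(0, 2, (n, n)), 1)

# By linearity of expectation, the expected number of conserved edges is
# the sum of P over pairs whose images are also connected; no enumeration
# of the 2^|E| possible topologies is needed.
expected_conserved = (P * A).sum()

# Monte Carlo check: sample concrete topologies and average.
samples = (rng.random((20_000, n, n)) < P) & (A == 1)
mc = samples.sum(axis=(1, 2)).mean()
```

The closed-form expectation and the Monte Carlo average agree, illustrating why the massive search space can be characterized exactly and cheaply.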
van Rosmalen, Joost; Toy, Mehlika; O'Mahony, James F
2013-08-01
Markov models are a simple and powerful tool for analyzing the health and economic effects of health care interventions. These models are usually evaluated in discrete time using cohort analysis. The use of discrete time assumes that changes in health states occur only at the end of a cycle period. Discrete-time Markov models only approximate the process of disease progression, as clinical events typically occur in continuous time. The approximation can yield biased cost-effectiveness estimates for Markov models with long cycle periods and if no half-cycle correction is made. The purpose of this article is to present an overview of methods for evaluating Markov models in continuous time. These methods use mathematical results from stochastic process theory and control theory. The methods are illustrated using an applied example on the cost-effectiveness of antiviral therapy for chronic hepatitis B. The main result is a mathematical solution for the expected time spent in each state in a continuous-time Markov model. It is shown how this solution can account for age-dependent transition rates and discounting of costs and health effects, and how the concept of tunnel states can be used to account for transition rates that depend on the time spent in a state. The applied example shows that the continuous-time model yields more accurate results than the discrete-time model but does not require much computation time and is easily implemented. In conclusion, continuous-time Markov models are a feasible alternative to cohort analysis and can offer several theoretical and practical advantages.
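The paper's main result, the expected time spent in each state of a continuous-time Markov model, can be illustrated on a two-state chain, for which the integral of the transition probabilities has a closed form. The rates and horizon below are arbitrary illustrative values:

```python
import numpy as np

# Two-state generator: leave state 0 at rate a, leave state 1 at rate b.
a, b = 0.3, 0.7
Q = np.array([[-a,  a],
              [ b, -b]])

T, steps = 10.0, 100_000
dt = T / steps

# Integrate the Kolmogorov forward equation dp/dt = p Q and accumulate
# the expected occupancy time of each state over [0, T].
p = np.array([1.0, 0.0])          # start in state 0
occupancy = np.zeros(2)
for _ in range(steps):
    occupancy += p * dt
    p = p + (p @ Q) * dt

# Closed form for this chain: p00(t) = pi0 + pi1 * exp(-(a+b) t), so the
# expected time in state 0 is pi0*T + pi1*(1 - exp(-(a+b)T))/(a+b).
pi0, pi1 = b / (a + b), a / (a + b)
exact0 = pi0 * T + pi1 * (1.0 - np.exp(-(a + b) * T)) / (a + b)
```

The numerical occupancy matches the analytic expression; the paper's contribution is the general matrix version of this quantity, plus discounting and age-dependent rates, which this two-state sketch omits.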
DEFF Research Database (Denmark)
Pinson, Pierre; Madsen, Henrik
2008-01-01
Better modelling and forecasting of very short-term power fluctuations at large offshore wind farms may significantly enhance control and management strategies of their power output. The paper introduces a new methodology for modelling and forecasting such very short-term fluctuations. The proposed methodology is based on a Markov-switching autoregressive model with time-varying coefficients. An advantage of the method is that one can easily derive full predictive densities. The quality of this methodology is demonstrated from the test case of 2 large offshore wind farms in Denmark. The exercise consists in a 1-step-ahead forecasting exercise on time series of wind generation with a time resolution of 10 minutes. The quality of the introduced forecasting methodology and its interest for better understanding power fluctuations are finally discussed.
[Design and implementation of real-time continuous glucose monitoring instrument].
Huang, Yonghong; Liu, Hongying; Tian, Senfu; Jia, Ziru; Wang, Zi; Pi, Xitian
2017-12-01
Real-time continuous glucose monitoring can help diabetics to control blood sugar levels within the normal range. However, in practical monitoring, the output of a real-time continuous glucose monitoring system is susceptible to glucose sensor and environmental noise, which influences the measurement accuracy of the system. To address this problem, a dual-calibration approach is proposed in this paper, combining a moving-window double-layer filtering algorithm with a real-time self-compensation calibration algorithm to realise signal-drift compensation for the current data. A real-time continuous glucose monitoring instrument based on this study was designed, consisting of an adjustable excitation voltage module, a current-voltage converter module, a microprocessor and a wireless transceiver module. For portability, the size of the device was only 40 mm × 30 mm × 5 mm and its weight only 30 g. In addition, a communication command code algorithm was designed to ensure the security and integrity of data transmission. Results of in vitro experiments showed that the current detection of the device worked effectively. A 5-hour monitoring of blood glucose level in vivo showed that the device could continuously monitor blood glucose in real time. The relative error of the monitoring results of the designed device ranged from 2.22% to 7.17% when compared to a portable blood glucose meter.
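The exact moving-window double-layer filter is not spelled out in the abstract; as a stand-in, the sketch below cascades two moving-average passes over a synthetic noisy glucose trace (the signal shape, noise level and window length are all our assumptions):

```python
import numpy as np

def moving_average(x, w):
    """Centered moving-average filter of window length w."""
    kernel = np.ones(w) / w
    return np.convolve(x, kernel, mode="same")

rng = np.random.default_rng(2)
t = np.linspace(0, 5, 500)                  # hours
glucose = 5.5 + 1.5 * np.sin(t)             # mmol/L, illustrative trend
raw = glucose + rng.normal(0, 0.4, t.size)  # sensor noise

# Two cascaded moving-average passes stand in for the paper's
# (unpublished in full) double-layer filter.
smoothed = moving_average(moving_average(raw, 15), 15)

noise_before = np.std(raw - glucose)
noise_after = np.std((smoothed - glucose)[30:-30])  # ignore edge effects
```

The cascade markedly reduces sensor noise while tracking the slow physiological trend, which is the role such a filtering stage plays before calibration.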
Exploratory Study for Continuous-time Parameter Estimation of Ankle Dynamics
Kukreja, Sunil L.; Boyle, Richard D.
2014-01-01
Recently, a parallel pathway model to describe ankle dynamics was proposed. This model provides a relationship between ankle angle and net ankle torque as the sum of a linear and nonlinear contribution. A technique to identify parameters of this model in discrete-time has been developed. However, these parameters are a nonlinear combination of the continuous-time physiology, making insight into the underlying physiology impossible. The stable and accurate estimation of continuous-time parameters is critical for accurate disease modeling, clinical diagnosis, robotic control strategies, development of optimal exercise protocols for long-term space exploration, sports medicine, etc. This paper explores the development of a system identification technique to estimate the continuous-time parameters of ankle dynamics. The effectiveness of this approach is assessed via simulation of a continuous-time model of ankle dynamics with typical parameters found in clinical studies. The results show that although this technique improves estimates, it does not provide robust estimates of continuous-time parameters of ankle dynamics. We therefore conclude that alternative modeling strategies and more advanced estimation techniques should be considered for future work.
Toward Continuous GPS Carrier-Phase Time Transfer: Eliminating the Time Discontinuity at an Anomaly.
Yao, Jian; Levine, Judah; Weiss, Marc
2015-01-01
The wide application of Global Positioning System (GPS) carrier-phase (CP) time transfer is limited by the problem of boundary discontinuity (BD). The discontinuity has two categories. One is "day boundary discontinuity," which has been studied extensively and can be solved by multiple methods [1-8]. The other category of discontinuity, called "anomaly boundary discontinuity (anomaly-BD)," comes from a GPS data anomaly. The anomaly can be a data gap (i.e., missing data), a GPS measurement error (i.e., bad data), or a cycle slip. Initial study of the anomaly-BD shows that we can fix the discontinuity if the anomaly lasts no more than 20 min, using the polynomial curve-fitting strategy to repair the anomaly [9]. However, sometimes the data anomaly lasts longer than 20 min, so a better curve-fitting strategy is needed. In addition, a cycle slip, as another type of data anomaly, can occur and lead to an anomaly-BD. To solve these problems, this paper proposes a new strategy, i.e., the satellite-clock-aided curve-fitting strategy with the function of cycle slip detection. Basically, this new strategy applies the satellite clock correction to the GPS data. After that, we do the polynomial curve fitting for the code and phase data, as before. Our study shows that the phase-data residual is only ~3 mm for all GPS satellites. The new strategy also detects and finds the number of cycle slips by searching the minimum curve-fitting residual. Extensive examples show that this new strategy enables us to repair up to a 40-min GPS data anomaly, regardless of whether the anomaly is due to a data gap, a cycle slip, or a combination of the two. We also find that interference of the GPS signal, known as "jamming", can possibly lead to a time-transfer error, and that this new strategy can compensate for jamming outages. Thus, the new strategy can eliminate the impact of jamming on time transfer. As a whole, we greatly improve the robustness of the GPS CP time transfer.
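The curve-fitting repair of a data anomaly can be sketched as follows. The clock-drift model, noise level and sampling interval are invented for illustration; only the 40-minute gap length mirrors the paper's worst case:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic clock-phase series (ns) sampled every 30 s: a quadratic drift
# plus noise stands in for real GPS carrier-phase data.
t = np.arange(0, 7200, 30.0)
phase = 2.0 + 1e-3 * t + 3e-8 * t**2 + rng.normal(0, 0.05, t.size)

# Knock out a 40-minute anomaly (80 samples).
gap = (t >= 3000) & (t < 5400)
good_t, good_phase = t[~gap], phase[~gap]

# Fit a low-order polynomial to the good data and bridge the gap with it,
# avoiding the boundary discontinuity a restart would introduce.
coeffs = np.polyfit(good_t, good_phase, deg=2)
repaired = phase.copy()
repaired[gap] = np.polyval(coeffs, t[gap])

true_trend = 2.0 + 1e-3 * t[gap] + 3e-8 * t[gap] ** 2
residual = np.max(np.abs(repaired[gap] - true_trend))
```

The bridged segment stays within the noise floor of the true trend; the paper's satellite-clock-aided variant improves on this plain fit by first removing satellite clock error and searching over cycle-slip counts.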
STATISTICAL ANALYSIS OF NOTATIONAL AFL DATA USING CONTINUOUS TIME MARKOV CHAINS
Directory of Open Access Journals (Sweden)
Denny Meyer
2006-12-01
Animal biologists commonly use continuous-time Markov chain models to describe patterns of animal behaviour. In this paper we consider the use of these models for describing AFL football. In particular, we test the assumptions for continuous-time Markov chain models (CTMCs), with time, distance and speed values associated with each transition. Using a simple event categorisation, it is found that a semi-Markov chain model is appropriate for these data. This validates the use of Markov chains for future studies in which the outcomes of AFL matches are simulated.
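One simple way to probe the CTMC assumption tested in the paper is to check whether state holding times look exponential, e.g. via their coefficient of variation (exactly 1 for an exponential). The sketch below uses synthetic holding times, not AFL data:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic holding times: one sample consistent with a CTMC (exponential),
# one with a semi-Markov alternative (gamma, shape 4, whose CV is 0.5).
exp_times = rng.exponential(4.0, 5000)
gamma_times = rng.gamma(4.0, 1.0, 5000)

cv = lambda x: x.std() / x.mean()   # coefficient of variation
```

`cv(exp_times)` sits near 1, flagging that sample as CTMC-compatible, while `cv(gamma_times)` sits near 0.5, pointing to a semi-Markov description, the same kind of distinction the paper draws for match events.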
Measurement of average continuous-time structure of a bond and ...
African Journals Online (AJOL)
The expected continuous-time structure of a bond and a bond's interest rate risk in an investment setting were studied. We determined the expected number of years an investor or manager will wait until the stock comes to maturity. The expected principal amount to be paid back per stock at time 't' was determined, while ...
Lyapunov stability robust analysis and robustness design for linear continuous-time systems
Luo, J.S.; Johnson, A.; Bosch, van den P.P.J.
1995-01-01
The linear continuous-time systems to be discussed are described by state space models with structured time-varying uncertainties. First, the explicit maximal perturbation bound for maintaining quadratic Lyapunov stability of the closed-loop systems is presented. Then, a robust design method is
Continuous-time random walk as a guide to fractional Schroedinger equation
International Nuclear Information System (INIS)
Lenzi, E. K.; Ribeiro, H. V.; Mukai, H.; Mendes, R. S.
2010-01-01
We argue that the continuous-time random walk approach may be a useful guide to extend the Schroedinger equation in order to incorporate nonlocal effects, avoiding the inconsistencies raised by Jeng et al. [J. Math. Phys. 51, 062102 (2010)]. As an application, we work out a free particle in a half space, obtaining the time dependent solution by considering an arbitrary initial condition.
Robust model predictive control for constrained continuous-time nonlinear systems
Sun, Tairen; Pan, Yongping; Zhang, Jun; Yu, Haoyong
2018-02-01
In this paper, a robust model predictive control (MPC) is designed for a class of constrained continuous-time nonlinear systems with bounded additive disturbances. The robust MPC consists of a nonlinear feedback control and a continuous-time model-based dual-mode MPC. The nonlinear feedback control guarantees the actual trajectory being contained in a tube centred at the nominal trajectory. The dual-mode MPC is designed to ensure asymptotic convergence of the nominal trajectory to zero. This paper extends current results on discrete-time model-based tube MPC and linear system model-based tube MPC to continuous-time nonlinear model-based tube MPC. The feasibility and robustness of the proposed robust MPC have been demonstrated by theoretical analysis and applications to a cart-damper-spring system and a one-link robot manipulator.
Sturm, M E; Arroyo-López, F N; Garrido-Fernández, A; Querol, A; Mercado, L A; Ramirez, M L; Combina, M
2014-01-17
The present study uses a probabilistic model to determine the growth/no growth interfaces of the spoilage wine yeast Dekkera bruxellensis CH29 as a function of ethanol (10-15%, v/v), pH (3.4-4.0) and free SO2 (0-50 mg/l), using time (7, 14, 21 and 30 days) as a dummy variable. The model, built with a total of 756 growth/no growth data points obtained in a simile wine medium, could have application in the winery industry to determine the wine conditions needed to inhibit the growth of this species. For example, at 12.5% ethanol and pH 3.7, for a growth probability of 0.01 it is necessary to add 30 mg/l of free SO2 to inhibit yeast growth for 7 days. However, the concentration of free SO2 should be raised to 48 mg/l to achieve a probability of no growth of 0.99 for 30 days under the same wine conditions. Other combinations of environmental variables can also be determined using the mathematical model, depending on the needs of the industry. Copyright © 2013 Elsevier B.V. All rights reserved.
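Growth/no-growth interfaces of this kind are commonly fitted with logistic regression. The sketch below fits such a model to synthetic data; the coefficients used to generate the data are invented for illustration and do not reproduce the paper's fitted model:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000

# Synthetic environmental conditions over the ranges studied in the paper.
ethanol = rng.uniform(10, 15, n)      # % v/v
ph = rng.uniform(3.4, 4.0, n)
so2 = rng.uniform(0, 50, n)           # mg/l free SO2

# Centred/scaled design matrix; the generating coefficients are invented
# and are NOT the paper's model.
X = np.column_stack([np.ones(n), ethanol - 12.5, (so2 - 25) / 10, ph - 3.7])
true_w = np.array([2.6, -1.2, -1.2, 2.0])
grow = (rng.random(n) < 1 / (1 + np.exp(-X @ true_w))).astype(float)

# Fit a logistic model by plain gradient ascent on the mean log-likelihood.
w = np.zeros(4)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w += 0.5 * X.T @ (grow - p) / n
```

The fitted signs recover the qualitative interface: growth probability falls with ethanol and free SO2 and rises with pH, and contours of the fitted probability (e.g. at 0.01 or 0.99) trace the growth/no-growth boundary the paper tabulates.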
Probabilistic Logical Characterization
DEFF Research Database (Denmark)
Hermanns, Holger; Parma, Augusto; Segala, Roberto
2011-01-01
Probabilistic automata exhibit both probabilistic and non-deterministic choice. They are therefore a powerful semantic foundation for modeling concurrent systems with random phenomena arising in many applications, ranging from artificial intelligence, security and systems biology to performance modeling. Several variations of bisimulation and simulation relations have proved to be useful as means to abstract and compare different automata. This paper develops a taxonomy of logical characterizations of these relations on image-finite and image-infinite probabilistic automata.
Conditional Probabilistic Population Forecasting
Sanderson, W.C.; Scherbov, S.; O'Neill, B.C.; Lutz, W.
2003-01-01
Since policy makers often prefer to think in terms of scenarios, the question has arisen as to whether it is possible to make conditional population forecasts in a probabilistic context. This paper shows that it is both possible and useful to make these forecasts. We do this with two different kinds of examples. The first is the probabilistic analog of deterministic scenario analysis. Conditional probabilistic scenario analysis is essential for policy makers because it allows them to answer "what if"...
Conditional probabilistic population forecasting
Sanderson, Warren; Scherbov, Sergei; O'Neill, Brian; Lutz, Wolfgang
2003-01-01
Since policy-makers often prefer to think in terms of alternative scenarios, the question has arisen as to whether it is possible to make conditional population forecasts in a probabilistic context. This paper shows that it is both possible and useful to make these forecasts. We do this with two different kinds of examples. The first is the probabilistic analog of deterministic scenario analysis. Conditional probabilistic scenario analysis is essential for policy-makers because it allows them...
Conditional Probabilistic Population Forecasting
Sanderson, Warren C.; Scherbov, Sergei; O'Neill, Brian C.; Lutz, Wolfgang
2004-01-01
Since policy-makers often prefer to think in terms of alternative scenarios, the question has arisen as to whether it is possible to make conditional population forecasts in a probabilistic context. This paper shows that it is both possible and useful to make these forecasts. We do this with two different kinds of examples. The first is the probabilistic analog of deterministic scenario analysis. Conditional probabilistic scenario analysis is essential for policy-makers because...
Directory of Open Access Journals (Sweden)
Tao Wang
2013-01-01
To obtain reliable transient auditory evoked potentials (AEPs) from EEGs recorded using the high stimulus rate (HSR) paradigm, it is critical to design stimulus sequences with appropriate frequency properties. Traditionally, the individual stimulus events in a stimulus sequence occur only at discrete time points dependent on the sampling frequency of the recording system and the duration of the stimulus sequence. This dependency likely causes the implementation of suboptimal stimulus sequences, sacrificing the reliability of the resulting AEPs. In this paper, we explicate the use of continuous-time stimulus sequences for the HSR paradigm, which are independent of the discrete electroencephalogram (EEG) recording system. We employ simulation studies to examine the applicability of the continuous-time stimulus sequences and the impact of sampling frequency on AEPs in traditional studies using discrete-time design. Results from these studies show that the continuous-time sequences can offer better frequency properties and improve the reliability of recovered AEPs. Furthermore, we find that the errors in the recovered AEPs depend critically on the sampling frequencies of the experimental systems, and that their relationship can be fitted using a reciprocal function. As such, our study contributes to the literature by demonstrating the applicability and advantages of continuous-time stimulus sequences for the HSR paradigm and by revealing the relationship between the reliability of AEPs and the sampling frequencies of the experimental systems when discrete-time stimulus sequences are used in the traditional manner.
Local and global dynamics of Ramsey model: From continuous to discrete time.
Guzowska, Malgorzata; Michetti, Elisabetta
2018-05-01
The choice of time as a discrete or continuous variable may radically affect equilibrium stability in an endogenous growth model with durable consumption. In the continuous-time Ramsey model [F. P. Ramsey, Econ. J. 38(152), 543-559 (1928)], the steady state is locally saddle-path stable with monotonic convergence. However, in the discrete-time version, the steady state may be unstable or saddle-path stable with monotonic or oscillatory convergence or periodic solutions [see R.-A. Dana et al., Handbook on Optimal Growth 1 (Springer, 2006) and G. Sorger, Working Paper No. 1505 (2015)]. When this occurs, the discrete-time counterpart of the continuous-time model is not consistent with the initial framework. In order to obtain a discrete-time Ramsey model preserving the main properties of its continuous-time counterpart, we use a general backward and forward discretisation as initially proposed by Bosi and Ragot [Theor. Econ. Lett. 2(1), 10-15 (2012)]. The main result of the study presented here is that, with this hybrid discretisation method, fixed points and local dynamics do not change. As concerns global dynamics, i.e., long-run behaviour for initial conditions taken on the state space, we mainly perform numerical analysis with the main aim of comparing both the qualitative and quantitative evolution of the two systems, also varying some parameters of interest.
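The stability issue can be illustrated on a scalar law of motion: explicit (forward) discretisation can destabilise a steady state that a symmetric backward/forward mix preserves. The function and step size below are illustrative stand-ins, not the Ramsey model itself:

```python
import numpy as np

f = lambda x: x * (1.0 - x)     # stand-in law of motion, steady state x* = 1
fp = lambda x: 1.0 - 2.0 * x    # its derivative
h = 2.5                         # deliberately coarse time step
x_star = 1.0

# Forward Euler multiplier at the steady state: 1 + h f'(x*).
# Symmetric backward/forward (trapezoidal) multiplier:
# (1 + h f'(x*)/2) / (1 - h f'(x*)/2).
# Both schemes keep f(x*) = 0 as a fixed point, but at this step size only
# the mixed scheme keeps it locally stable.
mult_fwd = 1 + h * fp(x_star)
mult_mix = (1 + 0.5 * h * fp(x_star)) / (1 - 0.5 * h * fp(x_star))

# Iterate the mixed scheme; each implicit step is solved by Newton's method.
x = 0.9
for _ in range(50):
    xn = x
    for _ in range(20):
        g = xn - x - 0.5 * h * (f(x) + f(xn))
        xn -= g / (1.0 - 0.5 * h * fp(xn))
    x = xn
```

Here |mult_fwd| > 1 (the explicit scheme oscillates away from the steady state) while |mult_mix| < 1 and the mixed iteration converges to x* = 1, mirroring the paper's point that a hybrid backward/forward discretisation preserves fixed points and local dynamics.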
Guérin, Charles-Antoine; Grilli, Stéphan T.
2018-01-01
We present a new method for inverting ocean surface currents from beam-forming HF radar data. In contrast with the classical method, which inverts radial currents based on shifts of the main Bragg line in the radar Doppler spectrum, the method works in the temporal domain and inverts currents from the amplitude modulation of the I and Q radar time series. Based on this principle, we propose a Maximum Likelihood approach, which can be combined with a Bayesian inference method assuming a prior current distribution, to infer values of the radial surface currents. We assess the method performance by using synthetic radar signal as well as field data, and systematically comparing results with those of the Doppler method. The new method is found advantageous for its robustness to noise at long range, its ability to accommodate shorter time series, and the possibility to use a priori information to improve the estimates. Limitations are related to current sign errors at far-ranges and biased estimates for small current values and very short samples. We apply the new technique to a data set from a typical 13.5 MHz WERA radar, acquired off of Vancouver Island, BC, and show that it can potentially improve standard synoptic current mapping.
Directory of Open Access Journals (Sweden)
Mohammad Reza Bazargan-Lari
2011-01-01
Developing optimal operating policies for conjunctive use of surface and groundwater resources when different decision makers and stakeholders with conflicting objectives are involved is usually a challenging task. This problem becomes more complex when objectives related to surface and groundwater quality are taken into account. In this paper, a new methodology is developed for real-time conjunctive use of surface and groundwater resources. In the proposed methodology, a well-known multi-objective genetic algorithm, namely the Non-dominated Sorting Genetic Algorithm II (NSGA-II), is employed to develop a Pareto front among the objectives. The Young conflict resolution theory is also used for resolving the conflict of interests among decision makers. To develop the real-time conjunctive use operating rules, Probabilistic Support Vector Machines (PSVMs), which are capable of providing probability distribution functions of decision variables, are utilized. The proposed methodology is applied to the Tehran Aquifer in the Tehran metropolitan area, Iran. Stakeholders in the study area have some conflicting interests, including supplying water with acceptable quality, reducing pumping costs, improving groundwater quality and controlling groundwater table fluctuations. In the proposed methodology, MODFLOW and MT3D groundwater quantity and quality simulation models are linked with the NSGA-II optimization model to develop Pareto fronts among the objectives. The best solutions on the Pareto fronts are then selected using the Young conflict resolution theory. The selected solution (optimal monthly operating policies) is used to train and verify a PSVM. The results show the significance of applying an integrated conflict resolution approach and the capability of support vector machines for the real-time conjunctive use of surface and groundwater resources in the study area. It is also shown that the validation accuracy of the proposed operating rules is higher than 80...
Generalization bounds of ERM-based learning processes for continuous-time Markov chains.
Zhang, Chao; Tao, Dacheng
2012-12-01
Many existing results on statistical learning theory are based on the assumption that samples are independently and identically distributed (i.i.d.). However, the assumption of i.i.d. samples is not suitable for practical application to problems in which samples are time dependent. In this paper, we are mainly concerned with the empirical risk minimization (ERM) based learning process for time-dependent samples drawn from a continuous-time Markov chain. This learning process covers many kinds of practical applications, e.g., the prediction for a time series and the estimation of channel state information. Thus, it is significant to study its theoretical properties including the generalization bound, the asymptotic convergence, and the rate of convergence. It is noteworthy that, since samples are time dependent in this learning process, the concerns of this paper cannot (at least straightforwardly) be addressed by existing methods developed under the sample i.i.d. assumption. We first develop a deviation inequality for a sequence of time-dependent samples drawn from a continuous-time Markov chain and present a symmetrization inequality for such a sequence. By using the resultant deviation inequality and symmetrization inequality, we then obtain the generalization bounds of the ERM-based learning process for time-dependent samples drawn from a continuous-time Markov chain. Finally, based on the resultant generalization bounds, we analyze the asymptotic convergence and the rate of convergence of the learning process.
Optimization of Modulator and Circuits for Low Power Continuous-Time Delta-Sigma ADC
DEFF Research Database (Denmark)
Marker-Villumsen, Niels; Bruun, Erik
2014-01-01
This paper presents a new optimization method for achieving minimum current consumption in a continuous-time Delta-Sigma analog-to-digital converter (ADC). The method is applied to a continuous-time modulator realised with active-RC integrators and with a folded-cascode operational transconductance ... levels are swept. Based on the results of the circuit analysis, for each modulator combination the summed current consumption of the 1st integrator and quantizer of the ADC is determined. By also sweeping the partitioning of the noise power for the different circuit parts, the optimum modulator...
Romano, Jennifer C; Howard, James H; Howard, Darlene V
2010-05-01
Procedural skills such as riding a bicycle and playing a musical instrument play a central role in daily life. Such skills are learned gradually and are retained throughout life. The present study investigated 1-year retention of procedural skill in a version of the widely used serial reaction time task (SRTT) in young and older motor-skill experts and older controls in two experiments. The young experts were college-age piano and action video-game players, and the older experts were piano players. Previous studies have reported sequence-specific skill retention in the SRTT as long as 2 weeks but not at 1 year. Results indicated that both young and older experts and older non-experts revealed sequence-specific skill retention after 1 year with some evidence that general motor skill was retained as well. These findings are consistent with theoretical accounts of procedural skill learning such as the procedural reinstatement theory as well as with previous studies of retention of other motor skills.
Directory of Open Access Journals (Sweden)
Yonggu Kim
2017-10-01
Full Text Available This research determines the optimal investment timing using real options valuation to support decision-making for economic sustainability assessment. This paper illustrates an option pricing model using the Black-Scholes model applied to a case project to understand the model performance. Applicability of the project to the model requires two Monte Carlo simulations to satisfy a Markov process and a Wiener process. The project developer is not only the seller of products but also the buyer of raw materials, so real options valuation can be influenced by the volatility of cash outflow as well as the volatility of cash inflow. This study suggests two-color rainbow options valuation to overcome this issue, which is demonstrated for a steel plant project. The asymmetric results of the case study show that cash outflow (put option) influences the value of the steel plant project more than cash inflow (call option) does; these results are discussed further in a sensitivity analysis. The real options valuation method proposed in this study contributes to the literature on applying the new model, taking into consideration that investors maximize project profitability for economic sustainable development.
Probabilistic escalation modelling
Energy Technology Data Exchange (ETDEWEB)
Korneliussen, G.; Eknes, M.L.; Haugen, K.; Selmer-Olsen, S. [Det Norske Veritas, Oslo (Norway)
1997-12-31
This paper describes how structural reliability methods may successfully be applied within quantitative risk assessment (QRA) as an alternative to traditional event tree analysis. The emphasis is on fire escalation in hydrocarbon production and processing facilities. This choice was made due to potential improvements over current QRA practice associated with both the probabilistic approach and more detailed modelling of the dynamics of escalating events. The physical phenomena important for the events of interest are explicitly modelled as functions of time. Uncertainties are represented through probability distributions. The uncertainty modelling enables the analysis to be simple when possible and detailed when necessary. The methodology features several advantages compared with traditional risk calculations based on event trees. (Author)
Against all odds -- Probabilistic forecasts and decision making
Liechti, Katharina; Zappa, Massimiliano
2015-04-01
In the city of Zurich (Switzerland), the damage potential due to flooding of the river Sihl is estimated at about 5 billion US dollars. The flood forecasting system used by the administration for decision making has run continuously since 2007. It has a time horizon of at most five days and operates at hourly time steps. The flood forecasting system includes three different model chains. Two of those are driven by the deterministic NWP models COSMO-2 and COSMO-7, and one is driven by the probabilistic NWP COSMO-LEPS. The model chains have been consistent since February 2010, so five full years are available for the evaluation of the system. The system was evaluated continuously and is a very nice example with which to present the added value that lies in probabilistic forecasts. The forecasts are available to the decision makers on an online platform. Several graphical representations of the forecasts and forecast history are available to support decision making and to rate the current situation. The communication between forecasters and decision makers is quite close. To put it short, an ideal situation. However, an event, or rather a non-event, in summer 2014 showed that knowing about the general superiority of probabilistic forecasts doesn't necessarily mean that the decisions taken in a specific situation will be based on that probabilistic forecast. Some years of experience allow gaining confidence in the system, both for the forecasters and for the decision makers. Even if, from the theoretical point of view, the handling during a crisis situation is well designed, a first event demonstrated that the dialogue with the decision makers still lacks exercise during such situations. We argue that a false alarm is a needed experience to consolidate real-time emergency procedures relying on ensemble predictions. A missed event would probably also fit, but, in our case, we are very happy not to report on this option.
Aristizábal, Natalia; Ramírez, Alex; Hincapié-García, Jaime; Laiton, Estefany; Aristizábal, Carolina; Cuesta, Diana; Monsalve, Claudia; Hincapié, Gloria; Zapata, Eliana; Abad, Verónica; Delgado, Maria-Rocio; Torres, José-Luis; Palacio, Andrés; Botero, José
2015-11-01
To describe baseline characteristics of diabetic patients who were started on an insulin pump with real-time continuous glucose monitoring (CSII-rtCGM) in a specialized center in Medellin, Colombia. All patients with diabetes with complete data who were started on CSII-rtCGM between February 2010 and May 2014 were included. This is a descriptive analysis of the sociodemographic and clinical characteristics. 141 of 174 patients attending the clinic were included. 90.1% had type 1 diabetes (T1D). The average age of T1D patients at the beginning of therapy was 31.4 years (SD 14.1). 75.8% of patients had normal weight (BMI30). The median duration of T1D was 13 years (P25-P75 = 10.7-22.0). 14.2% of the patients were admitted at least once in the year preceding the start of CSII-rtCGM because of diabetes-related complications. Mean A1c was 8.6% ± 1.46%. The main reasons for starting CSII-rtCGM were: poor glycemic control (50.2%); frequent hypoglycemia, nocturnal hypoglycemia, hypoglycemia related to exercise or asymptomatic hypoglycemia (30.2%); severe hypoglycemia (16.44%); and the dawn phenomenon (3.1%). Baseline characteristics of the patients included in this study who were started on CSII-rtCGM are similar to those reported in the literature. The clinic starts CSII-rtCGM mainly in T1D patients with poor glycemic control and frequent or severe hypoglycemia despite being on basal/bolus therapy. Copyright © 2015 SEEN. Published by Elsevier España, S.L.U. All rights reserved.
Duplicate Detection in Probabilistic Data
Panse, Fabian; van Keulen, Maurice; de Keijzer, Ander; Ritter, Norbert
2009-01-01
Collected data often contains uncertainties. Probabilistic databases have been proposed to manage uncertain data. To combine data from multiple autonomous probabilistic databases, an integration of probabilistic data has to be performed. Until now, however, data integration approaches have focused
Next-generation probabilistic seismicity forecasting
Energy Technology Data Exchange (ETDEWEB)
Hiemer, S.
2014-07-01
The development of probabilistic seismicity forecasts is one of the most important tasks of seismologists at present time. Such forecasts form the basis of probabilistic seismic hazard assessment, a widely used approach to generate ground motion exceedance maps. These hazard maps guide the development of building codes, and in the absence of the ability to deterministically predict earthquakes, good building and infrastructure planning is key to prevent catastrophes. Probabilistic seismicity forecasts are models that specify the occurrence rate of earthquakes as a function of space, time and magnitude. The models presented in this thesis are time-invariant mainshock occurrence models. Accordingly, the reliable estimation of the spatial and size distribution of seismicity are of crucial importance when constructing such probabilistic forecasts. Thereby we focus on data-driven approaches to infer these distributions, circumventing the need for arbitrarily chosen external parameters and subjective expert decisions. Kernel estimation has been shown to appropriately transform discrete earthquake locations into spatially continuous probability distributions. However, we show that neglecting the information from fault networks constitutes a considerable shortcoming and thus limits the skill of these current seismicity models. We present a novel earthquake rate forecast that applies the kernel-smoothing method to both past earthquake locations and slip rates on mapped crustal faults applied to Californian and European data. Our model is independent from biases caused by commonly used non-objective seismic zonations, which impose artificial borders of activity that are not expected in nature. Studying the spatial variability of the seismicity size distribution is of great importance. The b-value of the well-established empirical Gutenberg-Richter model forecasts the rates of hazard-relevant large earthquakes based on the observed rates of abundant small events. We propose a
Next-generation probabilistic seismicity forecasting
International Nuclear Information System (INIS)
Hiemer, S.
2014-01-01
Quantum cooling and squeezing of a levitating nanosphere via time-continuous measurements
Genoni, Marco G.; Zhang, Jinglei; Millen, James; Barker, Peter F.; Serafini, Alessio
2015-07-01
With the purpose of controlling the steady state of a dielectric nanosphere levitated within an optical cavity, we study its conditional dynamics under simultaneous sideband cooling and additional time-continuous measurement of either the output cavity mode or the nanosphere's position. We find that the average phonon number, purity and quantum squeezing of the steady states can all be made more non-classical through the addition of time-continuous measurement. We predict that the continuous monitoring of the system, together with Markovian feedback, allows one to stabilize the dynamics for any value of the laser frequency driving the cavity. By considering state-of-the-art values of the experimental parameters, we prove that one can in principle obtain a non-classical (squeezed) steady state with an average phonon number n_ph ≈ 0.5.
Fixed-point Characterization of Compositionality Properties of Probabilistic Processes Combinators
Directory of Open Access Journals (Sweden)
Daniel Gebler
2014-08-01
Full Text Available Bisimulation metric is a robust behavioural semantics for probabilistic processes. Given any SOS specification of probabilistic processes, we provide a method to compute, for each operator of the language, its respective metric compositionality property. The compositionality property of an operator is defined as its modulus of continuity, which gives the relative increase of the distance between processes when they are combined by that operator. The compositionality property of an operator is computed by recursively counting how many times the combined processes are copied along their evolution. The compositionality properties allow one to derive an upper bound on the distance between processes purely by inspecting the operators used to specify those processes.
Continuous-Time Random Walk with multi-step memory: an application to market dynamics
Gubiec, Tomasz; Kutner, Ryszard
2017-11-01
An extended version of the Continuous-Time Random Walk (CTRW) model with memory is herein developed. This memory involves the dependence between an arbitrary number of successive jumps of the process, while waiting times between jumps are considered as i.i.d. random variables. This dependence was established by analyzing empirical histograms for the stochastic process of a single share price on a market within the high-frequency time scale. It was then justified theoretically by considering the bid-ask bounce mechanism, which contains a delay characteristic of any double-auction market. Our model turns out to be exactly analytically solvable. It therefore enables a direct comparison of its predictions with their empirical counterparts, for instance, with the empirical velocity autocorrelation function. Thus, the present research significantly extends the capabilities of the CTRW formalism. Contribution to the Topical Issue "Continuous Time Random Walk Still Trendy: Fifty-year History, Current State and Outlook", edited by Ryszard Kutner and Jaume Masoliver.
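The bid-ask-bounce memory described above can be caricatured in a few lines: i.i.d. exponential waiting times and unit price jumps that tend to reverse the sign of the previous jump. This is an illustrative toy, not the authors' exactly solvable model; the reversal probability `p_reverse` and the exponential waiting-time law are our assumptions.

```python
import random

def simulate_ctrw(n_steps, p_reverse=0.9, mean_wait=1.0, seed=1):
    """Toy CTRW with one-step memory: exponential i.i.d. waiting times and
    unit jumps that reverse the previous jump's sign with probability
    p_reverse (a caricature of the bid-ask bounce)."""
    rng = random.Random(seed)
    t, x, last = 0.0, 0.0, 1
    times, path = [0.0], [0.0]
    for _ in range(n_steps):
        t += rng.expovariate(1.0 / mean_wait)   # waiting time before the jump
        if rng.random() < p_reverse:            # bounce: undo previous move
            last = -last
        x += last
        times.append(t)
        path.append(x)
    return times, path

times, path = simulate_ctrw(500)
```

With `p_reverse` close to 1 the increments are strongly anticorrelated, which is what produces the negative one-lag velocity autocorrelation seen in high-frequency price data.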
Wang, Jun; Liang, Jin-Rong; Lv, Long-Jin; Qiu, Wei-Yuan; Ren, Fu-Yao
2012-02-01
In this paper, we study the problem of continuous-time option pricing with transaction costs by using the homogeneous subdiffusive fractional Brownian motion (HFBM) Z(t) = X(S_α(t)), 0 < α < 1, … transaction costs of replicating strategies. We also give the total transaction costs.
Bayesian inference and the analytic continuation of imaginary-time quantum Monte Carlo data
International Nuclear Information System (INIS)
Gubernatis, J.E.; Bonca, J.; Jarrell, M.
1995-01-01
We present a brief description of how methods of Bayesian inference are used to obtain real-frequency information by the analytic continuation of imaginary-time quantum Monte Carlo data. We present the procedure we used, which is due to R. K. Bryan, and summarize several bottleneck issues
Directory of Open Access Journals (Sweden)
Wilson S
2015-01-01
Full Text Available Scott Wilson,1,2 Andrea Bowyer,3 Stephen B Harrap4 1Department of Renal Medicine, The Alfred Hospital, 2Baker IDI, Melbourne, 3Department of Anaesthesia, Royal Melbourne Hospital, 4University of Melbourne, Parkville, VIC, Australia Abstract: The clinical characterization of cardiovascular dynamics during hemodialysis (HD) has important pathophysiological implications from diagnostic, cardiovascular risk assessment, and treatment efficacy perspectives. Currently the diagnosis of significant intradialytic systolic blood pressure (SBP) changes among HD patients is imprecise and opportunistic, reliant upon the presence of hypotensive symptoms in conjunction with coincident but isolated noninvasive brachial cuff blood pressure (NIBP) readings. Considering hemodynamic variables as a time series makes a continuous recording approach more desirable than intermittent measures; however, in the clinical environment, the data signal is susceptible to corruption due to both impulsive and Gaussian-type noise. Signal preprocessing is an attractive solution to this problem. Prospectively collected continuous noninvasive SBP data over the short-break intradialytic period in ten patients were preprocessed using a novel median hybrid filter (MHF) algorithm and compared with 50 time-coincident pairs of intradialytic NIBP measures from routine HD practice. The median hybrid preprocessing technique for continuously acquired cardiovascular data yielded a dynamic regression without significant noise and artifact, suitable for high-level profiling of time-dependent SBP behavior. Signal accuracy is highly comparable with standard NIBP measurement, with the added clinical benefit of dynamic real-time hemodynamic information. Keywords: continuous monitoring, blood pressure
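The abstract does not specify the median hybrid filter's construction, but its purpose, suppressing impulsive corruption that a linear smoother would smear across neighbouring samples, can be illustrated with a plain sliding-window median. This is an assumption-laden stand-in for the actual MHF algorithm, not a reimplementation of it.

```python
import numpy as np

def median_filter(x, k=5):
    """Sliding-window median of odd width k: a single-sample spike is
    replaced by the local median, while slow trends pass through."""
    pad = k // 2
    xp = np.pad(np.asarray(x, float), pad, mode='edge')  # repeat edge values
    return np.array([np.median(xp[i:i + k]) for i in range(len(x))])

# A one-sample artifact in an otherwise plausible SBP trace is removed:
sbp = np.array([120.0, 121.0, 250.0, 119.0, 120.0])
clean = median_filter(sbp, k=3)
```

A mean (or any linear) filter of the same width would instead spread the 250 mmHg spike into its neighbours, which is exactly the failure mode the median-based approach avoids.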
A sixth-order continuous-time bandpass sigma-delta modulator for digital radio IF
Engelen, van J.A.E.P.; Plassche, van de R.J.; Stikvoort, E.F.; Venes, A.G.W.
1999-01-01
This paper presents a sixth-order continuous-time bandpass sigma-delta modulator (SDM) for analog-to-digital conversion of intermediate-frequency signals. An important aspect in the design of this SDM is the stability analysis using the describing function method. The key to the analysis is the
DEFF Research Database (Denmark)
Lauridsen, Mette Munk; Grønbæk, Henning; Næser, Esben
2012-01-01
Abstract Minimal hepatic encephalopathy (MHE) is a metabolic brain disorder occurring in patients with liver cirrhosis. MHE lessens a patient's quality of life, but is treatable when identified. The continuous reaction times (CRT) method is used in screening for MHE. Gender and age effects...
Computing continuous-time Markov chains as transformers of unbounded observables
DEFF Research Database (Denmark)
Danos, Vincent; Heindel, Tobias; Garnier, Ilias
2017-01-01
The paper studies continuous-time Markov chains (CTMCs) as transformers of real-valued functions on their state space, considered as generalised predicates and called observables. Markov chains are assumed to take values in a countable state space S; observables f: S → ℝ may be unbounded...
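The view of a CTMC as a transformer of observables can be made concrete on a finite state space: the semigroup e^{tQ} applied to an observable f gives s ↦ E_s[f(X_t)]. The sketch below uses uniformization and covers only the finite, bounded case; the paper's contribution is precisely the extension to unbounded observables on countable state spaces, which this toy does not attempt.

```python
import numpy as np

def transform_observable(Q, f, t, terms=60):
    """Compute (e^{tQ} f)(s) = E_s[f(X_t)] for a finite-state CTMC with
    generator Q via uniformization:
    e^{tQ} = e^{-lam*t} * sum_k (lam*t)^k / k! * P^k,  P = I + Q/lam."""
    Q = np.asarray(Q, float)
    lam = max(-np.diag(Q).min(), 1e-12)       # uniformization rate
    P = np.eye(len(Q)) + Q / lam              # embedded stochastic matrix
    term = np.asarray(f, float).copy()        # holds P^k f
    out = np.zeros_like(term)
    weight = np.exp(-lam * t)                 # Poisson(lam*t) pmf at k
    for k in range(terms):
        out += weight * term
        term = P @ term
        weight *= lam * t / (k + 1)
    return out

# Two-state chain flipping at rate 1: E_s[f(X_t)] relaxes toward the mean of f.
Q = [[-1.0, 1.0], [1.0, -1.0]]
out = transform_observable(Q, [1.0, 0.0], 1.0)
```

For this symmetric two-state chain the exact answer is known in closed form, (1 ± e^{-2t})/2, which makes the sketch easy to sanity-check.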
Continuous performance test assessed with time-domain functional near infrared spectroscopy
Torricelli, Alessandro; Contini, Davide; Spinelli, Lorenzo; Caffini, Matteo; Butti, Michele; Baselli, Giuseppe; Bianchi, Anna M.; Bardoni, Alessandra; Cerutti, Sergio; Cubeddu, Rinaldo
2007-07-01
A time-domain fNIRS multichannel system was used in a sustained attention protocol (continuous performance test) to study activation of the prefrontal cortex. Preliminary results on volunteers show significant activation (a decrease in deoxy-hemoglobin and an increase in oxy-hemoglobin) in both the left and right prefrontal cortex.
Exploring Continuity of Care in Patients with Alcohol Use Disorders Using Time-Variant Measures
S.C. de Vries (Sjoerd); A.I. Wierdsma (André)
2008-01-01
textabstractBackground/Aims: We used time-variant measures of continuity of care to study fluctuations in long-term treatment use by patients with alcohol-related disorders. Methods: Data on service use were extracted from the Psychiatric Case Register for the Rotterdam Region, The Netherlands.
Continuous relaxation time spectrum of α-process in glass-like B2O3
International Nuclear Information System (INIS)
Bartenev, G.M.; Lomovskij, V.A.
1991-01-01
The α-process of relaxation of glass-like B2O3 was investigated in a wide temperature range. A continuous spectrum of relaxation times H(τ) for this process was constructed using data from dynamic methods of investigation. It is shown that an increase in the temperature at which the α-process is investigated changes the glass-like B2O3 structure in such a way that the H(τ) spectrum tends to the Maxwell one with a single relaxation time
Measuring patient-centered medical home access and continuity in clinics with part-time clinicians.
Rosland, Ann-Marie; Krein, Sarah L; Kim, Hyunglin Myra; Greenstone, Clinton L; Tremblay, Adam; Ratz, David; Saffar, Darcy; Kerr, Eve A
2015-05-01
Common patient-centered medical home (PCMH) performance measures value access to a single primary care provider (PCP), which may have unintended consequences for clinics that rely on part-time PCPs and team-based care. Retrospective analysis of 110,454 primary care visits from 2 Veterans Health Administration clinics from 2010 to 2012. Multi-level models examined associations between PCP availability in clinic and performance on access and continuity measures. Patient experiences with access and continuity were compared using 2012 patient survey data (N = 2881). Patients of PCPs with fewer half-day clinic sessions per week were significantly less likely to get a requested same-day appointment with their usual PCP (predicted probability 17% for PCPs with 2 sessions/week, 20% for 5 sessions/week, and 26% for 10 sessions/week). Among requests that did not result in a same-day appointment with the usual PCP, there were no significant differences in same-day access to a different PCP, or access within 2 to 7 days with patients' usual PCP. Overall, patients had >92% continuity with their usual PCP at the hospital-based site regardless of PCP sessions/week. Patients of full-time PCPs reported timely appointments for urgent needs more often than patients of part-time PCPs (82% vs 71%). Part-time PCP performance appeared worse when using measures focused on same-day access to patients' usual PCP. However, clinic-level same-day access, same-week access to the usual PCP, and overall continuity were similar for patients of part-time and full-time PCPs. Measures of in-person access to a usual PCP do not capture alternate access approaches encouraged by the PCMH, and often used by part-time providers, such as team-based or non-face-to-face care.
Global stabilization of linear continuous time-varying systems with bounded controls
International Nuclear Information System (INIS)
Phat, V.N.
2004-08-01
This paper deals with the problem of global stabilization of a class of linear continuous time-varying systems with bounded controls. Based on the controllability of the nominal system, a sufficient condition for the global stabilizability is proposed without solving any Riccati differential equation. Moreover, we give sufficient conditions for the robust stabilizability of perturbation/uncertain linear time-varying systems with bounded controls. (author)
Energy Technology Data Exchange (ETDEWEB)
Bartosch, T. [Erlangen-Nuernberg Univ., Erlangen (Germany). Lehrstuhl fuer Nachrichtentechnik I; Seidl, D. [Seismologisches Zentralobservatorium Graefenberg, Erlangen (Germany). Bundesanstalt fuer Geowissenschaften und Rohstoffe
1999-06-01
Among a variety of spectrogram methods, the short-time Fourier transform (STFT) and the continuous wavelet transform (CWT) were selected to analyse transients in non-stationary signals. Depending on their properties, tremor signals from the volcanoes Mt. Stromboli, Mt. Semeru and Mt. Pinatubo were analyzed using both methods. The CWT can also be used to extend the definition of coherency into a time-varying coherency spectrogram. An example is given using array data from the volcano Mt. Stromboli (Italy).
Period, epoch, and prediction errors of ephemerides from continuous sets of timing measurements
Deeg, H. J.
2015-06-01
Space missions such as Kepler and CoRoT have led to large numbers of eclipse or transit measurements in nearly continuous time series. This paper shows how to obtain the period error in such measurements from a basic linear least-squares fit, and how to correctly derive the timing error in the prediction of future transit or eclipse events. Assuming strict periodicity, a formula for the period error of these time series is derived: σ_P = σ_T (12/(N³-N))^(1/2), where σ_P is the period error, σ_T the timing error of a single measurement, and N the number of measurements. Compared to the iterative method for period error estimation by Mighell & Plavchan (2013), this much simpler formula leads to smaller period errors, whose correctness has been verified through simulations. For the prediction of times of future periodic events, the usual linear ephemerides, where epoch errors are quoted for the first time measurement, are prone to an overestimation of the error of that prediction. This may be avoided by a correction for the duration of the time series. An alternative is the derivation of ephemerides whose reference epoch and epoch error are given for the centre of the time series. For long continuous or near-continuous time series whose acquisition is completed, such central epochs should be the preferred way to quote linear ephemerides. While this work was motivated by the analysis of eclipse timing measures in space-based light curves, it should be applicable to any other problem with an uninterrupted sequence of discrete timings for which the determination of a zero point, of a constant period, and of the associated errors is needed.
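The quoted period-error formula is simple enough to compute directly. In the sketch below, the formula for σ_P is the one stated in the abstract; the combination of epoch and period errors in `prediction_error` is our own standard error propagation (independent errors added in quadrature), not a formula taken from the paper.

```python
import math

def period_error(sigma_T, N):
    """Period error of a strictly periodic, continuous series of N timings,
    each with individual timing error sigma_T:
    sigma_P = sigma_T * sqrt(12 / (N**3 - N))."""
    return sigma_T * math.sqrt(12.0 / (N**3 - N))

def prediction_error(sigma_T, sigma_P, n_cycles):
    """Assumed error of an event predicted n_cycles after the reference
    epoch, treating epoch and period errors as independent."""
    return math.sqrt(sigma_T**2 + (n_cycles * sigma_P)**2)

# Example: 100 transit timings, each good to 1 minute.
sigma_P = period_error(1.0, 100)  # period error, minutes per cycle
```

The N³ scaling in the denominator is why long, uninterrupted series pin down the period so much better than the same number of scattered timings.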
Detectability of Granger causality for subsampled continuous-time neurophysiological processes.
Barnett, Lionel; Seth, Anil K
2017-01-01
Granger causality is well established within the neurosciences for inference of directed functional connectivity from neurophysiological data. These data usually consist of time series which subsample a continuous-time biophysiological process. While it is well known that subsampling can lead to imputation of spurious causal connections where none exist, less is known about the effects of subsampling on the ability to reliably detect causal connections which do exist. We present a theoretical analysis of the effects of subsampling on Granger-causal inference. Neurophysiological processes typically feature signal propagation delays on multiple time scales; accordingly, we base our analysis on a distributed-lag, continuous-time stochastic model, and consider Granger causality in continuous time at finite prediction horizons. Via exact analytical solutions, we identify relationships among sampling frequency, underlying causal time scales and detectability of causalities. We reveal complex interactions between the time scale(s) of neural signal propagation and sampling frequency. We demonstrate that detectability decays exponentially as the sample time interval increases beyond causal delay times, identify detectability "black spots" and "sweet spots", and show that downsampling may potentially improve detectability. We also demonstrate that the invariance of Granger causality under causal, invertible filtering fails at finite prediction horizons, with particular implications for inference of Granger causality from fMRI data. Our analysis emphasises that sampling rates for causal analysis of neurophysiological time series should be informed by domain-specific time scales, and that state-space modelling should be preferred to purely autoregressive modelling. On the basis of a very general model that captures the structure of neurophysiological processes, we are able to help identify confounds, and offer practical insights, for successful detection of causal connectivity
Energy Technology Data Exchange (ETDEWEB)
Salimi, S; Radgohar, R, E-mail: shsalimi@uok.ac.i, E-mail: r.radgohar@uok.ac.i [Faculty of Science, Department of Physics, University of Kurdistan, Pasdaran Ave, Sanandaj (Iran, Islamic Republic of)
2010-01-28
In this paper, we consider decoherence in continuous-time quantum walks on long-range interacting cycles (LRICs), which are extensions of the cycle graphs. For this purpose, we use Gurvitz's model and assume that every node is monitored by a corresponding point contact induced by the decoherence process. We then focus on large rates of decoherence, calculate the probability distribution analytically, and obtain the lower and upper bounds of the mixing time. Our results prove that the mixing time is proportional to the rate of decoherence and to the inverse of the square of the distance parameter (m). This shows that the mixing time decreases with increasing range of interaction. Also, what we obtain for m = 0 is in agreement with Fedichkin, Solenov and Tamon's results [48] for the cycle, and we see that the mixing time of CTQWs on the cycle improves with added interacting edges.
Learning of temporal motor patterns: An analysis of continuous vs. reset timing
Directory of Open Access Journals (Sweden)
Rodrigo eLaje
2011-10-01
Full Text Available Our ability to generate well-timed sequences of movements is critical to an array of behaviors, including the ability to play a musical instrument or a video game. Here we address two questions relating to timing with the goal of better understanding the neural mechanisms underlying temporal processing. First, how do accuracy and variance change over the course of learning of complex spatiotemporal patterns? Second, is the timing of sequential responses most consistent with starting and stopping an internal timer at each interval, or with continuous timing? To address these questions we used a psychophysical task in which subjects learned to reproduce a sequence of finger taps in the correct order and at the correct times, much like playing a melody at the piano. This task allowed us to calculate the variance of the responses at different time points using data from the same trials. Our results show that while the standard Weber's law is clearly violated, variance does increase as a function of time squared, as expected according to the generalized form of Weber's law, which separates the source of variance into time-dependent and time-independent components. Over the course of learning, both the time-independent variance and the coefficient of the time-dependent term decrease. Our analyses also suggest that the timing of sequential events does not rely on the resetting of an internal timer at each event. We describe and interpret our results in the context of computer simulations that capture some of our psychophysical findings. Specifically, we show that continuous timing, as opposed to reset timing, is expected from population clock models in which timing emerges from the internal dynamics of recurrent neural networks.
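The generalized Weber's law described above is linear in t²: var(t) = σ₀² + k·t², with a time-independent term σ₀² and a time-dependent coefficient k. Both can therefore be recovered by ordinary least squares on the regressor t². The variable names below are ours; this is a sketch of the fitting idea, not the authors' analysis code.

```python
import numpy as np

def fit_generalized_weber(t, var):
    """Fit var(t) = sigma0_sq + k * t**2 by ordinary least squares,
    regressing observed variance on t squared."""
    k, sigma0_sq = np.polyfit(np.asarray(t)**2, np.asarray(var), 1)
    return sigma0_sq, k

# Synthetic check: variances generated with sigma0_sq = 4.0, k = 0.01
t = np.array([0.25, 0.5, 1.0, 2.0, 4.0])        # interval durations (s)
var = 4.0 + 0.01 * t**2                          # noiseless generalized Weber
sigma0_sq, k = fit_generalized_weber(t, var)
```

Under this decomposition, the paper's learning effect corresponds to both fitted quantities, σ₀² and k, decreasing with practice.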
Smartphone-based Continuous Blood Pressure Measurement Using Pulse Transit Time.
Gholamhosseini, Hamid; Meintjes, Andries; Baig, Mirza; Linden, Maria
2016-01-01
The increasing availability of low-cost and easy-to-use personalized medical monitoring devices has opened the door for new and innovative methods of health monitoring. Cuff-less and continuous methods of measuring blood pressure are particularly attractive, as blood pressure is one of the most important measurements of long-term cardiovascular health. Current methods of noninvasive blood pressure measurement are based on inflation and deflation of a cuff, which affects the arteries where blood pressure is being measured; inflation can also cause patient discomfort and alter the measurement results. In this work, a mobile application was developed to collate the photoplethysmogram (PPG) waveform provided by a pulse oximeter with the electrocardiogram (ECG) in order to calculate the pulse transit time, which is then indirectly related to the user's systolic blood pressure. The developed application successfully connects to the PPG and ECG monitoring devices over Bluetooth and stores the data on an online server. The pulse transit time is estimated in real time, and the user's systolic blood pressure can be estimated once the system has been calibrated. The synchronization between the two devices was found to pose a challenge to this method of continuous blood pressure monitoring. Nevertheless, the implemented system effectively serves as a proof of concept, and the considerable benefits that an accurate and robust continuous blood pressure monitoring system would provide make it worthwhile to develop this system further.
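The indirect pulse-transit-time-to-SBP mapping described above requires a per-user calibration. A minimal sketch under the commonly assumed inverse-linear model SBP = a + b / PTT, calibrated from two cuff readings; the model form, function names, and numbers are assumptions for illustration, since the abstract does not state the paper's calibration equation:

```python
def calibrate(ptt1, sbp1, ptt2, sbp2):
    """Solve SBP = a + b / PTT from two (PTT, SBP) calibration pairs."""
    b = (sbp1 - sbp2) / (1.0 / ptt1 - 1.0 / ptt2)
    a = sbp1 - b / ptt1
    return a, b

def estimate_sbp(a, b, ptt):
    """Estimate systolic blood pressure from a measured pulse transit time."""
    return a + b / ptt

# Hypothetical calibration: cuff readings of 120 and 135 mmHg were taken
# when the measured PTT was 0.25 s and 0.20 s respectively.
a, b = calibrate(0.25, 120.0, 0.20, 135.0)
sbp = estimate_sbp(a, b, 0.22)  # estimate for a new PTT measurement
```

Shorter transit times map to higher pressure estimates, matching the physiological expectation that stiffer, higher-pressure arteries propagate the pulse wave faster.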
Park, K. C.; Belvin, W. Keith
1990-01-01
A general form for the first-order representation of the continuous second-order linear structural-dynamics equations is introduced to derive a corresponding form of first-order continuous Kalman filtering equations. Time integration of the resulting equations is carried out via a set of linear multistep integration formulas. It is shown that a judicious combined selection of computational paths and the undetermined matrices introduced in the general form of the first-order linear structural systems leads to a class of second-order discrete Kalman filtering equations involving only symmetric sparse N x N solution matrices.
Summary statistics for end-point conditioned continuous-time Markov chains
DEFF Research Database (Denmark)
Hobolth, Asger; Jensen, Jens Ledet
Continuous-time Markov chains are a widely used modelling tool. Applications include DNA sequence evolution, ion channel gating behavior and mathematical finance. We consider the problem of calculating properties of summary statistics (e.g. mean time spent in a state, mean number of jumps between two states and the distribution of the total number of jumps) for discretely observed continuous-time Markov chains. Three alternative methods for calculating properties of summary statistics are described and the pros and cons of the methods are discussed. The methods are based on (i) an eigenvalue decomposition of the rate matrix, (ii) the uniformization method, and (iii) integrals of matrix exponentials. In particular we develop a framework that allows for analyses of rather general summary statistics using the uniformization method.
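Method (ii), uniformization, expresses the transition probabilities of a chain with rate matrix Q as a Poisson-weighted mixture of powers of a discrete-time chain, P(t) = Σₙ Poisson(λt; n) Bⁿ with B = I + Q/λ. A minimal pure-Python sketch for a two-state chain (the rates are illustrative), which can be checked against the known closed form:

```python
import math

def uniformize(Q, t, n_terms=200):
    """Transition matrix P(t) = exp(Qt) via uniformization:
    P(t) = sum_n Poisson(lam*t; n) * B^n, with B = I + Q/lam."""
    dim = len(Q)
    lam = max(-Q[i][i] for i in range(dim)) or 1.0
    B = [[(1.0 if i == j else 0.0) + Q[i][j] / lam for j in range(dim)]
         for i in range(dim)]
    P = [[0.0] * dim for _ in range(dim)]
    Bn = [[1.0 if i == j else 0.0 for j in range(dim)] for i in range(dim)]
    weight = math.exp(-lam * t)  # Poisson weight for n = 0
    for n in range(n_terms):
        for i in range(dim):
            for j in range(dim):
                P[i][j] += weight * Bn[i][j]
        # advance to B^(n+1) and the Poisson weight for n+1
        Bn = [[sum(Bn[i][k] * B[k][j] for k in range(dim)) for j in range(dim)]
              for i in range(dim)]
        weight *= lam * t / (n + 1)
    return P

# Two-state chain: 0 -> 1 at rate 2, 1 -> 0 at rate 1
lam01, lam10, t = 2.0, 1.0, 0.7
Q = [[-lam01, lam01], [lam10, -lam10]]
P = uniformize(Q, t)
# Closed form: P[0][1] = lam01/(lam01+lam10) * (1 - exp(-(lam01+lam10)*t))
```

The same Poisson-weighted sum underlies the paper's framework for summary statistics, where matrix powers are replaced by sums over paths of the uniformized chain.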
Optimal batch production strategies under continuous price decrease and time discounting
Directory of Open Access Journals (Sweden)
Mandal S.
2007-01-01
Full Text Available A single price discount in unit cost for bulk purchasing is quite common in practice as well as in the inventory literature. However, in today's high-tech industries, such as personal computer and mobile phone manufacturing, a continuous decrease in unit cost is a regular phenomenon. In the present paper, an attempt has been made to investigate the effects of continuous price decrease and the time-value of money on optimal decisions for inventoried goods having time-dependent demand and production rates. The proposed models are developed over a finite time horizon, considering inventory both with and without shortages. Numerical examples are given to illustrate the developed models and to examine the sensitivity of the model parameters.
Directory of Open Access Journals (Sweden)
Songlin Wo
2018-01-01
Full Text Available Singular systems arise in many domains of engineering and can be used to model problems that are more difficult and more extensive than those handled by regular systems. In this paper, the problem of finite-time robust H∞ control for uncertain linear continuous-time singular systems is presented. The problem we address is to design a robust state feedback controller for a singular system with time-varying norm-bounded exogenous disturbance, such that the singular system is finite-time robust bounded (FTRB) with disturbance attenuation γ. Sufficient conditions for the existence of solutions to this problem are obtained in terms of linear matrix inequalities (LMIs). When these LMIs are feasible, the desired robust controller is given. A detailed solving method is proposed for the restricted linear matrix inequalities. Finally, examples are given to show the validity of the methodology.
Continuous-Time Mean-Variance Portfolio Selection with Random Horizon
International Nuclear Information System (INIS)
Yu, Zhiyong
2013-01-01
This paper examines the continuous-time mean-variance optimal portfolio selection problem with random market parameters and random time horizon. Treating this problem as a linearly constrained stochastic linear-quadratic optimal control problem, I explicitly derive the efficient portfolios and efficient frontier in closed forms based on the solutions of two backward stochastic differential equations. Some related issues such as a minimum variance portfolio and a mutual fund theorem are also addressed. All the results are markedly different from those in the problem with deterministic exit time. A key part of my analysis involves proving the global solvability of a stochastic Riccati equation, which is interesting in its own right.
Continuous-Time Mean-Variance Portfolio Selection with Random Horizon
Energy Technology Data Exchange (ETDEWEB)
Yu, Zhiyong, E-mail: yuzhiyong@sdu.edu.cn [Shandong University, School of Mathematics (China)
2013-12-15
This paper examines the continuous-time mean-variance optimal portfolio selection problem with random market parameters and random time horizon. Treating this problem as a linearly constrained stochastic linear-quadratic optimal control problem, I explicitly derive the efficient portfolios and efficient frontier in closed forms based on the solutions of two backward stochastic differential equations. Some related issues such as a minimum variance portfolio and a mutual fund theorem are also addressed. All the results are markedly different from those in the problem with deterministic exit time. A key part of my analysis involves proving the global solvability of a stochastic Riccati equation, which is interesting in its own right.
Continuous-variable quantum computing in optical time-frequency modes using quantum memories.
Humphreys, Peter C; Kolthammer, W Steven; Nunn, Joshua; Barbieri, Marco; Datta, Animesh; Walmsley, Ian A
2014-09-26
We develop a scheme for time-frequency encoded continuous-variable cluster-state quantum computing using quantum memories. In particular, we propose a method to produce, manipulate, and measure two-dimensional cluster states in a single spatial mode by exploiting the intrinsic time-frequency selectivity of Raman quantum memories. Time-frequency encoding enables the scheme to be extremely compact, requiring a number of memories that are a linear function of only the number of different frequencies in which the computational state is encoded, independent of its temporal duration. We therefore show that quantum memories can be a powerful component for scalable photonic quantum information processing architectures.
Bi-Criteria System Optimum Traffic Assignment in Networks With Continuous Value of Time
Directory of Open Access Journals (Sweden)
Xin Wang
2013-04-01
Full Text Available For an elastic-demand transportation network with continuously distributed value of time, the system disutility can be measured either in time units or in cost units. The user equilibrium model and the system optimization model are each formulated under the two different criteria. The conditions required for making the system-optimum link flow pattern equivalent to the user-equilibrium link flow pattern are derived. Furthermore, a bi-objective model has been developed which simultaneously minimizes the system travel time and the system travel cost. The existence of a pricing scheme with anonymous link tolls which can decentralize a Pareto system optimum into the user equilibrium has been investigated.
CONTINUITY OF THE MEANINGS AND FORMS OF PATRIOTISM IN THE CONTEXT OF SOCIAL TIME STUDY
Directory of Open Access Journals (Sweden)
Olga Valerjevna Kashirina
2017-06-01
Full Text Available Purpose. The objective of this work is to identify the focus of the continuity of meanings and forms of patriotism in patriotic choice, as the frame meaning of the main life strategy of every civilized subject, whether an individual or a social community of any size. The truthfulness of the choice is defined by the presence of time continuity of meaning and by the approach of its structure to «the right rate». Methodology. The problem is analysed on the basis of a transdisciplinary dialectical and trialectical method of distinction and meaning-making, with respect to the intellectual technology of the continuity of civilized and noospheric patriotism. Results. The article considers the continuity of the meanings and forms of patriotism in the context of social time study and searches for a solution to the problem of patriotism along three lines: (1) the problem of civilized patriotism of the Great and Small Motherland; (2) the problem of noospheric patriotism; (3) the problem of the continuity of meanings between them. It highlights the flexibility of the solution, which is related to the fact that social time study considers patriotism as a culture phenomenon that has a dialectical «nature of existence» and, at the same time, a three-way model of the meanings of civilized-reality «existence»: entirety of the present, continuity of the past and reasonability of the future. The article argues that the dynamic balance of the meanings of civilized and noospheric patriotism in the identity culture of a civilized subject, making up the culture of his/her behavior and activity, provides for the formation and stability of the moral and spiritual immunity that arises by virtue of them in the semantic field of patriotism. Practical implications. The research can be used to develop courses on philosophy, culture philosophy, etc. The theory of social time study can be realized in the teaching practice of the new course unit «The basics of social time study» as a humanity
Astrand, Elaine
2018-06-01
Working memory (WM), crucial for successful behavioral performance in most of our everyday activities, holds a central role in goal-directed behavior. As task demands increase, inducing higher WM load, maintaining successful behavioral performance requires the brain to work at the higher end of its capacity. Because it depends on both external and internal factors, individual WM load likely varies in a continuous fashion. Whether such a continuous measure can be extracted in time, and whether it correlates with behavioral performance during a working memory task, has remained unsolved. Multivariate pattern decoding was used to test whether a decoder constructed from two discrete levels of WM load can generalize to produce a continuous measure that predicts task performance. Specifically, a linear regression with L2-regularization was chosen, with input features from EEG oscillatory activity recorded from healthy participants while performing the n-back task, [Formula: see text]. The feasibility of extracting a continuous time-resolved measure that correlates positively with trial-by-trial working memory task performance is demonstrated (r = 0.47, p < 0.05), as is the ability to predict performance before action (r = 0.49, p < 0.05). We show that the extracted continuous measure enables study of the temporal dynamics of the complex activation pattern of WM encoding during the n-back task. Specifically, temporally precise contributions of different spectral features are observed, which extends previous findings of traditional univariate approaches. These results constitute an important contribution towards a wide range of applications in the field of cognitive brain-machine interfaces. Monitoring mental processes related to attention and WM load to reduce the risk of committing errors in high-risk environments could potentially prevent many devastating consequences, or using the continuous measure as neurofeedback opens up new possibilities to develop novel rehabilitation techniques for
Probabilistic Structural Analysis Program
Pai, Shantaram S.; Chamis, Christos C.; Murthy, Pappu L. N.; Stefko, George L.; Riha, David S.; Thacker, Ben H.; Nagpal, Vinod K.; Mital, Subodh K.
2010-01-01
NASA/NESSUS 6.2c is a general-purpose, probabilistic analysis program that computes probability of failure and probabilistic sensitivity measures of engineered systems. Because NASA/NESSUS uses highly computationally efficient and accurate analysis techniques, probabilistic solutions can be obtained even for extremely large and complex models. Once the probabilistic response is quantified, the results can be used to support risk-informed decisions regarding reliability for safety-critical and one-of-a-kind systems, as well as for maintaining a level of quality while reducing manufacturing costs for larger-quantity products. NASA/NESSUS has been successfully applied to a diverse range of problems in aerospace, gas turbine engines, biomechanics, pipelines, defense, weaponry, and infrastructure. This program combines state-of-the-art probabilistic algorithms with general-purpose structural analysis and life-prediction methods to compute the probabilistic response and reliability of engineered structures. Uncertainties in load, material properties, geometry, boundary conditions, and initial conditions can be simulated. The structural analysis methods include non-linear finite-element methods, heat-transfer analysis, polymer/ceramic matrix composite analysis, monolithic (conventional metallic) materials life-prediction methodologies, boundary element methods, and user-written subroutines. Several probabilistic algorithms are available such as the advanced mean value method and the adaptive importance sampling method. NASA/NESSUS 6.2c is structured in a modular format with 15 elements.
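As a toy illustration of the probability-of-failure computation that NASA/NESSUS automates (with far more sophisticated methods such as advanced mean value and adaptive importance sampling), here is a plain Monte Carlo estimate for a stress-strength limit state with independent normal variables; all numbers and names are invented for the sketch and are not from the program:

```python
import math
import random

def prob_failure_mc(mu_r, sd_r, mu_s, sd_s, n=200_000, seed=1):
    """Monte Carlo estimate of P(failure) = P(strength R < load S)."""
    rng = random.Random(seed)
    fails = sum(
        rng.gauss(mu_r, sd_r) < rng.gauss(mu_s, sd_s) for _ in range(n)
    )
    return fails / n

def prob_failure_exact(mu_r, sd_r, mu_s, sd_s):
    """Closed form for independent normals: Phi(-beta), with reliability
    index beta = (mu_r - mu_s) / sqrt(sd_r**2 + sd_s**2)."""
    beta = (mu_r - mu_s) / math.hypot(sd_r, sd_s)
    return 0.5 * math.erfc(beta / math.sqrt(2.0))

# Hypothetical member: strength ~ N(500, 40) MPa, load effect ~ N(350, 30) MPa
pf_mc = prob_failure_mc(500.0, 40.0, 350.0, 30.0)
pf_exact = prob_failure_exact(500.0, 40.0, 350.0, 30.0)
```

Plain sampling needs many draws to resolve small failure probabilities, which is precisely why variance-reduction schemes like adaptive importance sampling matter in production tools.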
A new continuous-time formulation for scheduling crude oil operations
International Nuclear Information System (INIS)
Reddy, P. Chandra Prakash; Karimi, I.A.; Srinivasan, R.
2004-01-01
In today's competitive business climate characterized by uncertain oil markets, responding effectively and speedily to market forces, while maintaining reliable operations, is crucial to a refinery's bottom line. Optimal crude oil scheduling enables cost reduction by using cheaper crudes intelligently, minimizing crude changeovers, and avoiding ship demurrage. So far, only discrete-time formulations have stood up to the challenge of this important, nonlinear problem. A continuous-time formulation would offer numerous advantages; however, existing work in this area has only begun to scratch the surface. In this paper, we present the first complete continuous-time mixed integer linear programming (MILP) formulation for the short-term scheduling of operations in a refinery that receives crude from very large crude carriers via a high-volume single buoy mooring pipeline. This novel formulation accounts for real-world operational practices. We use an iterative algorithm to eliminate the crude composition discrepancy that has proven to be the Achilles heel for existing formulations. While it does not guarantee global optimality, the algorithm needs only MILP solutions and obtains excellent maximum-profit schedules for industrial problems with up to 7 days of scheduling horizon. We also report the first comparison of discrete- vs. continuous-time formulations for this complex problem. (Author)
Optimal control of nonlinear continuous-time systems in strict-feedback form.
Zargarzadeh, Hassan; Dierks, Travis; Jagannathan, Sarangapani
2015-10-01
This paper proposes a novel optimal tracking control scheme for nonlinear continuous-time systems in strict-feedback form with uncertain dynamics. The optimal tracking problem is transformed into an equivalent optimal regulation problem through a feedforward adaptive control input that is generated by modifying the standard backstepping technique. Subsequently, a neural network-based optimal control scheme is introduced to estimate the cost, or value function, over an infinite horizon for the resulting nonlinear continuous-time systems in affine form when the internal dynamics are unknown. The estimated cost function is then used to obtain the optimal feedback control input; therefore, the overall optimal control input for the nonlinear continuous-time system in strict-feedback form includes the feedforward plus the optimal feedback terms. It is shown that the estimated cost function minimizes the Hamilton-Jacobi-Bellman estimation error in a forward-in-time manner without using any value or policy iterations. Finally, optimal output feedback control is introduced through the design of a suitable observer. Lyapunov theory is utilized to show the overall stability of the proposed schemes without requiring an initial admissible controller. Simulation examples are provided to validate the theoretical results.
Wilson, Scott; Bowyer, Andrea; Harrap, Stephen B
2015-01-01
The clinical characterization of cardiovascular dynamics during hemodialysis (HD) has important pathophysiological implications in terms of diagnostic, cardiovascular risk assessment, and treatment efficacy perspectives. Currently the diagnosis of significant intradialytic systolic blood pressure (SBP) changes among HD patients is imprecise and opportunistic, reliant upon the presence of hypotensive symptoms in conjunction with coincident but isolated noninvasive brachial cuff blood pressure (NIBP) readings. Considering hemodynamic variables as a time series makes a continuous recording approach more desirable than intermittent measures; however, in the clinical environment, the data signal is susceptible to corruption due to both impulsive and Gaussian-type noise. Signal preprocessing is an attractive solution to this problem. Prospectively collected continuous noninvasive SBP data over the short-break intradialytic period in ten patients was preprocessed using a novel median hybrid filter (MHF) algorithm and compared with 50 time-coincident pairs of intradialytic NIBP measures from routine HD practice. The median hybrid preprocessing technique for continuously acquired cardiovascular data yielded a dynamic regression without significant noise and artifact, suitable for high-level profiling of time-dependent SBP behavior. Signal accuracy is highly comparable with standard NIBP measurement, with the added clinical benefit of dynamic real-time hemodynamic information.
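A median hybrid filter of the general kind referenced above takes the median of a few linear sub-filter outputs, here the mean of samples to the left, the center sample, and the mean of samples to the right. This is a generic textbook construction sketched for illustration, not the authors' exact MHF algorithm, and the window size and SBP values are assumptions:

```python
def median_hybrid_filter(signal, half_window=2):
    """Median hybrid filter: median of (left mean, center sample, right mean).

    Rejects impulsive spikes like a median filter while preserving
    slow trends like a moving average. Edge samples are left unfiltered.
    """
    out = list(signal)
    h = half_window
    for i in range(h, len(signal) - h):
        left = sum(signal[i - h:i]) / h
        right = sum(signal[i + 1:i + 1 + h]) / h
        out[i] = sorted((left, signal[i], right))[1]  # median of three
    return out

# A flat SBP trace (mmHg) corrupted by one impulsive artifact at index 4
sbp = [118.0, 118.0, 118.0, 118.0, 190.0, 118.0, 118.0, 118.0, 118.0]
clean = median_hybrid_filter(sbp)
```

Because the spike never wins the three-way median, a single corrupted sample is removed without smearing neighboring values, which is the property that makes such filters attractive for impulsive noise in hemodynamic recordings.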
Du, Yue; Clark, Jane E; Whitall, Jill
2017-05-01
Timing control, such as producing movements at a given rate or synchronizing movements to an external event, has been studied through a finger-tapping task where timing is measured at the initial contact between finger and tapping surface or the point when a key is pressed. However, the point of peak force occurs after the time registered at the tapping surface and is thus a less obvious but still important event during finger tapping. Here, we compared the time at initial contact with the time at peak force as participants tapped their finger on a force sensor at a given rate after the metronome was turned off (continuation task) or in synchrony with the metronome (sensorimotor synchronization task). We found that, in the continuation task, timing was comparably accurate between initial contact and peak force. These two timing events also exhibited similar trial-by-trial statistical dependence (i.e., lag-one autocorrelation). However, the central clock variability was lower at the peak force than the initial contact. In the synchronization task, timing control at peak force appeared to be less variable and more accurate than that at initial contact. In addition to lower central clock variability, the mean SE magnitude at peak force (SEP) was around zero while SE at initial contact (SEC) was negative. Although SEC and SEP demonstrated the same trial-by-trial statistical dependence, we found that participants adjusted the time of tapping to correct SEP, but not SEC, toward zero. These results suggest that timing at peak force is a meaningful target of timing control, particularly in synchronization tapping. This result may explain the fact that SE at initial contact is typically negative as widely observed in the preexisting literature.
Sivak, David A; Chodera, John D; Crooks, Gavin E
2014-06-19
When simulating molecular systems using deterministic equations of motion (e.g., Newtonian dynamics), such equations are generally numerically integrated according to a well-developed set of algorithms that share commonly agreed-upon desirable properties. However, for stochastic equations of motion (e.g., Langevin dynamics), there is still broad disagreement over which integration algorithms are most appropriate. While multiple desiderata have been proposed throughout the literature, consensus on which criteria are important is absent, and no published integration scheme satisfies all desiderata simultaneously. Additional nontrivial complications stem from simulating systems driven out of equilibrium using existing stochastic integration schemes in conjunction with recently developed nonequilibrium fluctuation theorems. Here, we examine a family of discrete time integration schemes for Langevin dynamics, assessing how each member satisfies a variety of desiderata that have been enumerated in prior efforts to construct suitable Langevin integrators. We show that the incorporation of a novel time step rescaling in the deterministic updates of position and velocity can correct a number of dynamical defects in these integrators. Finally, we identify a particular splitting (related to the velocity Verlet discretization) that has essentially universally appropriate properties for the simulation of Langevin dynamics for molecular systems in equilibrium, nonequilibrium, and path sampling contexts.
Continuous time random walk: Galilei invariance and relation for the nth moment
International Nuclear Information System (INIS)
Fa, Kwok Sau
2011-01-01
We consider a decoupled continuous time random walk model with a generic waiting time probability density function (PDF). For the force-free case we derive an integro-differential diffusion equation which is related to the Galilei invariance of the probability density. We also derive a general relation which connects the nth moment in the presence of any external force to the second moment without external force; it is valid for any waiting time PDF. This general relation includes the generalized second Einstein relation, which connects the first moment in the presence of any external force to the second moment without any external force. These expressions for the first two moments are verified using several kinds of waiting time PDFs. Moreover, we present new anomalous diffusion behaviours for a waiting time PDF given by a product of power-law and exponential functions.
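As a minimal numerical illustration of a decoupled CTRW (not the paper's general derivation), for an exponential waiting time PDF with unit mean and unbiased ±1 jumps the force-free second moment grows linearly, ⟨x²(t)⟩ = t. A seeded simulation sketch with illustrative parameters:

```python
import random

def ctrw_position(t_max, rng):
    """Decoupled CTRW: exponential waiting times (mean 1), jumps of +/-1.

    Returns the walker's position at time t_max."""
    t, x = 0.0, 0
    while True:
        t += rng.expovariate(1.0)   # waiting time before the next jump
        if t > t_max:
            return x
        x += rng.choice((-1, 1))

rng = random.Random(42)
t_max, n_walkers = 10.0, 20_000
second_moment = sum(
    ctrw_position(t_max, rng) ** 2 for _ in range(n_walkers)
) / n_walkers
# Force-free case: <x^2(t)> equals the mean number of jumps, here t_max = 10
```

Replacing the exponential by a heavy-tailed waiting time PDF in the same loop is what produces the anomalous (subdiffusive) scaling the abstract discusses.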
Probabilistic programmable quantum processors
International Nuclear Information System (INIS)
Buzek, V.; Ziman, M.; Hillery, M.
2004-01-01
We analyze how to improve the performance of probabilistic programmable quantum processors. We show how the probability of success of a probabilistic processor can be enhanced by using the processor in loops. In addition, we show that arbitrary SU(2) transformations of qubits can be encoded in the program state of a universal programmable probabilistic quantum processor. The probability of success of this processor can be enhanced by a systematic correction of errors via conditional loops. Finally, we show that all our results can be generalized to qudits. (Abstract Copyright [2004], Wiley Periodicals, Inc.)
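At its simplest, the benefit of running such a processor in loops is probability arithmetic: if a single run succeeds with probability p and a failed run can be conditionally corrected and retried, then under the simplifying assumption of independent retries the success probability after at most k rounds is 1 − (1 − p)^k. The values of p and k below are illustrative, not taken from the paper:

```python
def success_after_loops(p_single, k):
    """Probability that at least one of k independent retries succeeds."""
    return 1.0 - (1.0 - p_single) ** k

# A hypothetical probabilistic gate succeeding with p = 1/4 per run
probs = {k: success_after_loops(0.25, k) for k in (1, 2, 4, 8)}
```

Even a modest number of conditional loops drives the overall success probability toward one, which is the qualitative effect the paper exploits (its loops additionally correct the error introduced by a failed run, rather than merely retrying).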
Continuous-Time Classical and Quantum Random Walk on Direct Product of Cayley Graphs
International Nuclear Information System (INIS)
Salimi, S.; Jafarizadeh, M. A.
2009-01-01
In this paper we define the direct product of graphs and give a recipe for obtaining the probability of observing the particle on the vertices in continuous-time classical and quantum random walks. In the recipe, the probability of observing the particle on the direct product of graphs is obtained by multiplying the probabilities on the corresponding sub-graphs; this method is useful for determining the probability of a walk on complicated graphs. Using this method, we calculate the probability of continuous-time classical and quantum random walks on many finite direct products of Cayley graphs (complete cycle, complete graph K_n, charter and n-cube). Also, we find that for the classical walk the stationary uniform distribution is reached as t → ∞, but for the quantum walk this is not always the case. (general)
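The multiplication recipe can be checked numerically for the classical walk: taking the walk to be generated by the graph Laplacian (an assumption for this sketch), the occupation probabilities on a direct product graph factorize into the product of the factor-graph probabilities, since exp(−(L⊗I + I⊗L)t) = exp(−Lt) ⊗ exp(−Lt). A small pure-Python check using the two-vertex complete graph K2 as both factors, with a truncated Taylor series standing in for the matrix exponential:

```python
def expm(A, t, n_terms=60):
    """exp(t*A) by truncated Taylor series (adequate for small matrices/times)."""
    dim = len(A)
    P = [[1.0 if i == j else 0.0 for j in range(dim)] for i in range(dim)]
    term = [row[:] for row in P]
    for n in range(1, n_terms):
        term = [[sum(term[i][k] * A[k][j] * t / n for k in range(dim))
                 for j in range(dim)] for i in range(dim)]
        for i in range(dim):
            for j in range(dim):
                P[i][j] += term[i][j]
    return P

# Classical continuous-time walk generated by minus the Laplacian: p(t) = exp(-L t)
L2 = [[1.0, -1.0], [-1.0, 1.0]]             # Laplacian of K2
t = 0.7
p2 = expm([[-v for v in row] for row in L2], t)   # factor-walk probabilities

# Direct product K2 x K2: Laplacian L⊗I + I⊗L; vertex (a,b) -> index 2*a+b
L4 = [[0.0] * 4 for _ in range(4)]
for a in range(2):
    for b in range(2):
        for c in range(2):
            for d in range(2):
                L4[2 * a + b][2 * c + d] = (
                    L2[a][c] * (1.0 if b == d else 0.0)
                    + (1.0 if a == c else 0.0) * L2[b][d]
                )
p4 = expm([[-v for v in row] for row in L4], t)
# Factorization: p4[(a,b) -> (c,d)] == p2[a][c] * p2[b][d]
```

The quantum walk obeys the analogous product rule for amplitudes under exp(−iHt), which is why the recipe extends to both cases in the paper.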
System Level Design of a Continuous-Time Delta-Sigma Modulator for Portable Ultrasound Scanners
DEFF Research Database (Denmark)
Llimos Muntal, Pere; Færch, Kjartan; Jørgensen, Ivan Harald Holger
2015-01-01
In this paper the system level design of a continuous-time ∆Σ modulator for portable ultrasound scanners is presented. The overall required signal-to-noise ratio (SNR) is derived to be 42 dB and the sampling frequency used is 320 MHz for an oversampling ratio of 16. In order to match these requirements, based on high-level VerilogA simulations, the performance of the ∆Σ modulator versus various block performance parameters is presented as trade-off curves. Based on these results, the block specifications are derived.
The continuous time random walk, still trendy: fifty-year history, state of art and outlook
Kutner, Ryszard; Masoliver, Jaume
2017-03-01
In this article we demonstrate the very inspiring role of the continuous-time random walk (CTRW) formalism, the numerous modifications permitted by its flexibility, its various applications, and the promising perspectives in various fields of knowledge. A short review of significant achievements and possibilities is given. However, this review is still far from complete. We focus on the pivotal role of CTRWs mainly in anomalous stochastic processes discovered in physics and beyond. This article plays the role of an extended announcement of the Eur. Phys. J. B Special Issue [http://epjb.epj.org/open-calls-for-papers/123-epj-b/1090-ctrw-50-years-on] containing articles which show the incredible possibilities of CTRWs. Contribution to the Topical Issue "Continuous Time Random Walk Still Trendy: Fifty-year History, Current State and Outlook", edited by Ryszard Kutner and Jaume Masoliver.
Continuous time sigma delta ADC design and non-idealities analysis
International Nuclear Information System (INIS)
Yuan Jun; Chen Zhenhai; Yang Yintang; Zhang Zhaofeng; Wu Jun; Wang Chao; Qian Wenrong
2011-01-01
A wide-bandwidth continuous-time sigma delta ADC is implemented in 130 nm CMOS. A detailed analysis of non-idealities (excess loop delay, clock jitter, finite gain and GBW, comparator offset and DAC mismatch) is performed in Matlab/Simulink. This design is targeted at wide-bandwidth applications such as video or wireless base-stations. A third-order continuous-time sigma delta modulator comprises a third-order RC operational-amplifier-based loop filter and a 3-bit internal quantizer operated at a 512 MHz clock frequency. The sigma delta ADC achieves 60 dB SNR and 59.3 dB SNDR over a 16-MHz signal band at an OSR of 16. The power consumption of the CT sigma delta modulator is 22 mW from the 1.2-V supply. (semiconductor integrated circuits)
Fermion bag approach to Hamiltonian lattice field theories in continuous time
Huffman, Emilie; Chandrasekharan, Shailesh
2017-12-01
We extend the idea of fermion bags to Hamiltonian lattice field theories in the continuous time formulation. Using a class of models we argue that the temperature is a parameter that splits the fermion dynamics into small spatial regions that can be used to identify fermion bags. Using this idea we construct a continuous time quantum Monte Carlo algorithm and compute critical exponents in the 3d Ising Gross-Neveu universality class using a single flavor of massless Hamiltonian staggered fermions. We find η = 0.54(6) and ν = 0.88(2) using lattices up to N = 2304 sites. We argue that even sizes up to N = 10,000 sites should be accessible with supercomputers available today.
International Nuclear Information System (INIS)
Pyragas, V.; Pyragas, K.
2011-01-01
We propose a simple adaptive delayed feedback control algorithm for stabilization of unstable periodic orbits with unknown periods. The state dependent time delay is varied continuously towards the period of controlled orbit according to a gradient-descent method realized through three simple ordinary differential equations. We demonstrate the efficiency of the algorithm with the Roessler and Mackey-Glass chaotic systems. The stability of the controlled orbits is proven by computation of the Lyapunov exponents of linearized equations. -- Highlights: → A simple adaptive modification of the delayed feedback control algorithm is proposed. → It enables the control of unstable periodic orbits with unknown periods. → The delay time is varied continuously according to a gradient descend method. → The algorithm is embodied by three simple ordinary differential equations. → The validity of the algorithm is proven by computation of the Lyapunov exponents.
Impulsive Control for Continuous-Time Markov Decision Processes: A Linear Programming Approach
Energy Technology Data Exchange (ETDEWEB)
Dufour, F., E-mail: dufour@math.u-bordeaux1.fr [Bordeaux INP, IMB, UMR CNRS 5251 (France); Piunovskiy, A. B., E-mail: piunov@liv.ac.uk [University of Liverpool, Department of Mathematical Sciences (United Kingdom)
2016-08-15
In this paper, we investigate an optimization problem for continuous-time Markov decision processes with both impulsive and continuous controls. We consider the so-called constrained problem where the objective of the controller is to minimize a total expected discounted optimality criterion associated with a cost rate function while keeping other performance criteria of the same form, but associated with different cost rate functions, below some given bounds. Our model allows multiple impulses at the same time moment. The main objective of this work is to study the associated linear program defined on a space of measures including the occupation measures of the controlled process and to provide sufficient conditions to ensure the existence of an optimal control.
A Wearable System for Real-Time Continuous Monitoring of Physical Activity
Directory of Open Access Journals (Sweden)
Fabrizio Taffoni
2018-01-01
Full Text Available Over the last decades, wearable systems have gained interest for the monitoring of physiological variables, promoting health, and improving exercise adherence in different populations, ranging from elite athletes to patients. In this paper, we present a wearable system for the continuous real-time monitoring of respiratory frequency (fR), heart rate (HR), and movement cadence during physical activity. The system has been experimentally tested in the laboratory (by simulating the breathing pattern with a mechanical ventilator) and by collecting data from one healthy volunteer. Results show the feasibility of the proposed device for real-time continuous monitoring of fR, HR, and movement cadence both in resting conditions and during activity. Finally, different synchronization techniques have been investigated to enable simultaneous data collection from different wearable modules.
Transport properties of the continuous-time random walk with a long-tailed waiting-time density
International Nuclear Information System (INIS)
Weissman, H.; Havlin, S.; Weiss, G.H.
1989-01-01
The authors derive asymptotic properties of the propagator p(r, t) of a continuous-time random walk (CTRW) in which the waiting-time density has the asymptotic form ψ(t) ∼ T^α/t^(α+1) for t >> T, with 0 < α < 1, in contrast to the case in which the mean waiting time <t> = ∫_0^∞ τψ(τ)dτ is finite. Two major differences from the finite-mean case are found. One is that the asymptotic behavior of p(0, t) is dominated by the waiting time at the origin rather than by the dimension. The second difference is that in the presence of a field, p(r, t) no longer remains symmetric around a moving peak. Rather, it is shown that the peak of this probability always occurs at r = 0, and the effect of the field is to break the symmetry that occurs when <t> < ∞. Finally, they calculate similar properties, although in not such great detail, for the case in which the single-step jump probabilities themselves have an infinite mean
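A minimal Monte Carlo sketch of such a walk, assuming Pareto-distributed waiting times with exponent α = 0.5 (so the mean waiting time diverges) and symmetric unit jumps; all parameter values are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def ctrw_positions(alpha, t_max, n_walkers):
    """Simulate a CTRW with Pareto-tailed waiting times
    psi(t) = alpha / t**(alpha + 1) for t >= 1 (so <t> diverges for alpha < 1)
    and symmetric unit jumps; return walker positions at time t_max."""
    pos = np.zeros(n_walkers)
    t = np.zeros(n_walkers)
    active = np.ones(n_walkers, dtype=bool)
    while active.any():
        n = active.sum()
        # inverse-CDF sampling of Pareto(alpha) waiting times
        waits = (1.0 - rng.random(n)) ** (-1.0 / alpha)
        t[active] += waits
        still = t[active] <= t_max            # only jumps completed before t_max count
        steps = rng.choice([-1.0, 1.0], size=n) * still
        pos[active] += steps
        active[active] = still
    return pos

x = ctrw_positions(alpha=0.5, t_max=1e3, n_walkers=2000)
msd = np.mean(x**2)   # grows like t**alpha, far below the normal-diffusion value ~ t
```

The sublinear growth of the mean squared displacement reflects the anomalous (subdiffusive) transport discussed in the abstract.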
Mean-variance Optimal Reinsurance-investment Strategy in Continuous Time
Directory of Open Access Journals (Sweden)
Daheng Peng
2017-10-01
Full Text Available In this paper, Lagrange method is used to solve the continuous-time mean-variance reinsurance-investment problem. Proportional reinsurance, multiple risky assets and risk-free asset are considered synthetically in the optimal strategy for insurers. By solving the backward stochastic differential equation for the Lagrange multiplier, we get the mean-variance optimal reinsurance-investment strategy and its effective frontier in explicit forms.
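The continuous-time BSDE solution is not reproduced here, but the flavour of the Lagrange-multiplier approach can be illustrated with its static Markowitz analogue, where the multipliers for the budget and target-mean constraints give the frontier weights in closed form (the return vector and covariance matrix below are assumed for illustration):

```python
import numpy as np

mu = np.array([0.08, 0.12, 0.10])          # expected returns (assumed)
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.06]])     # covariance matrix (assumed)

inv = np.linalg.inv(Sigma)
ones = np.ones(3)
a = ones @ inv @ ones
b = ones @ inv @ mu
c = mu @ inv @ mu

def frontier_weights(target):
    """Minimum-variance, fully invested weights for a given target mean:
    w = Sigma^{-1} (lambda * 1 + gamma * mu), with the two Lagrange
    multipliers fixed by the budget and target-mean constraints."""
    lam = (c - b * target) / (a * c - b * b)
    gam = (a * target - b) / (a * c - b * b)
    return inv @ (lam * ones + gam * mu)

w = frontier_weights(0.10)
```

Varying the target mean traces out the efficient frontier; the continuous-time problem in the abstract replaces this algebra with a backward stochastic differential equation for the multiplier.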
Continuous time Boolean modeling for biological signaling: application of Gillespie algorithm.
Stoll, Gautier; Viara, Eric; Barillot, Emmanuel; Calzone, Laurence
2012-01-01
Mathematical modeling is used as a Systems Biology tool to answer biological questions, and more precisely, to validate a network that describes biological observations and predict the effect of perturbations. This article presents an algorithm for modeling biological networks in a discrete framework with continuous time. Background: There exist two major types of mathematical modeling approaches: (1) quantitative modeling, representing various chemical species concentrations by real...
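A toy sketch of continuous-time Boolean dynamics in the Gillespie style: unstable nodes (those whose Boolean rule disagrees with their current state) flip after exponentially distributed waiting times. The two-node network and the flip rates below are hypothetical illustrations, not the authors' algorithm or software:

```python
import random

random.seed(1)

# Toy 2-node Boolean network: A is on unless B represses it; B copies A (assumed rules).
rates = {"A": 1.0, "B": 0.5}   # flip propensities when a node's rule disagrees with its state

def rule(state, node):
    if node == "A":
        return 0 if state["B"] else 1
    return state["A"]

def gillespie_boolean(state, t_max):
    t, trajectory = 0.0, [(0.0, dict(state))]
    while True:
        unstable = [n for n in state if rule(state, n) != state[n]]
        if not unstable:
            break                                  # steady state reached
        total = sum(rates[n] for n in unstable)
        t += random.expovariate(total)             # time to the next flip
        if t > t_max:
            break
        # choose which unstable node flips, weighted by its rate
        node = random.choices(unstable, weights=[rates[m] for m in unstable])[0]
        state[node] = rule(state, node)
        trajectory.append((t, dict(state)))
    return trajectory

traj = gillespie_boolean({"A": 1, "B": 0}, t_max=50.0)
```

This negative-feedback toy network keeps flipping, so the trajectory records an asynchronous oscillation in continuous time.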
International Nuclear Information System (INIS)
Mikadze, I.; Namchevadze, T.; Gobiani, I.
2007-01-01
A generalized mathematical model is proposed for a queuing system with time redundancy, without preliminary checking of the queuing system at the transition from the free state into the engaged one. The model accounts for various failures of the queuing system detected by continuous instrument control, periodic control, control during recovery, and failures revealed immediately after accumulation of a certain number of failures. The generating function of queue length in both stationary and nonstationary modes is determined. (author)
Directory of Open Access Journals (Sweden)
Y. Saiki
2007-09-01
Full Text Available An infinite number of unstable periodic orbits (UPOs) are embedded in a chaotic system which models some complex phenomenon. Several algorithms which extract UPOs numerically from continuous-time chaotic systems have been proposed. In this article the damped Newton-Raphson-Mees algorithm is reviewed, and some important techniques and remarks concerning the practical numerical computations are exemplified by employing the Lorenz system.
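As a simplified illustration of the damped Newton iteration underlying such algorithms, the sketch below locates an equilibrium of the Lorenz system (a fixed point rather than a periodic orbit; the damping factor and starting point are assumptions):

```python
import numpy as np

sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0   # classical Lorenz parameters

def f(v):
    x, y, z = v
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def jac(v):
    x, y, z = v
    return np.array([[-sigma, sigma, 0.0],
                     [rho - z, -1.0, -x],
                     [y, x, -beta]])

def damped_newton(v, damping=0.5, tol=1e-10, max_iter=200):
    """Damped Newton iteration v <- v - damping * J(v)^{-1} f(v)."""
    for _ in range(max_iter):
        step = np.linalg.solve(jac(v), f(v))
        v = v - damping * step
        if np.linalg.norm(f(v)) < tol:
            break
    return v

# starting near the C+ equilibrium (sqrt(beta*(rho-1)), sqrt(beta*(rho-1)), rho-1)
v_star = damped_newton(np.array([8.0, 8.0, 26.0]))
```

The UPO extraction discussed in the abstract applies the same damped Newton idea to a Poincaré return map rather than to the vector field itself.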
A continuous time model of the bandwagon effect in collective action
Arieh Gavious; Shlomo Mizrahi
2001-01-01
The paper offers a complex and systematic model of the bandwagon effect in collective action using continuous-time equations. The model treats the bandwagon effect as a process influenced by the ratio between the mobilization efforts of social activists and the resources invested by the government to counteract this activity. The complex modeling approach makes it possible to identify the conditions for specific types of the bandwagon effect and determines the scope of that effect. Relying on ce...
A comparison of numerical methods for the solution of continuous-time DSGE models
DEFF Research Database (Denmark)
Parra-Alvarez, Juan Carlos
This paper evaluates the accuracy of a set of techniques that approximate the solution of continuous-time DSGE models. Using the neoclassical growth model I compare linear-quadratic, perturbation and projection methods. All techniques are applied to the HJB equation and the optimality conditions...... parameters of the model and suggest the use of projection methods when a high degree of accuracy is required....
Directory of Open Access Journals (Sweden)
Hajnalka Péics
2016-08-01
Full Text Available The asymptotic behavior of solutions of a system of difference equations with continuous time and a lag function lying between two known real functions is studied. The cases when the lag function lies between two linear delay functions, between two power delay functions, and between two constant delay functions are considered and illustrated by examples. Asymptotic estimates of solutions of the considered system are obtained.
Wang, Xinghu; Hong, Yiguang; Yi, Peng; Ji, Haibo; Kang, Yu
2017-05-24
In this paper, a distributed optimization problem is studied for continuous-time multiagent systems with unknown-frequency disturbances. A distributed gradient-based control is proposed for the agents to achieve the optimal consensus with estimating unknown frequencies and rejecting the bounded disturbance in the semi-global sense. Based on convex optimization analysis and adaptive internal model approach, the exact optimization solution can be obtained for the multiagent system disturbed by exogenous disturbances with uncertain parameters.
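A discretised sketch of gradient-based optimal consensus of the kind described, assuming quadratic local costs and a ring communication graph; the disturbance rejection and adaptive internal-model components of the paper are omitted:

```python
import numpy as np

# Each agent i minimises f_i(x) = 0.5 * (x - b_i)^2; the team optimum of
# sum_i f_i is the average of the b_i (values assumed for illustration).
b = np.array([1.0, 3.0, 5.0, 7.0])
L = np.array([[ 2, -1,  0, -1],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [-1,  0, -1,  2]], dtype=float)   # Laplacian of the 4-node ring graph

x = np.zeros(4)            # agent states
v = np.zeros(4)            # integral variable enforcing zero gradient sum
dt = 0.01
for _ in range(5000):      # forward-Euler integration of the consensus dynamics
    grad = x - b                       # local gradients
    x_dot = -grad - L @ x - L @ v
    v_dot = L @ x
    x = x + dt * x_dot
    v = v + dt * v_dot

# all agents approach the team optimum mean(b) = 4.0
```

The integral term v lets disagreement in the local gradients cancel across the network, so consensus lands on the minimiser of the sum rather than of any single cost.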
Time series analysis of continuous-wave coherent Doppler Lidar wind measurements
International Nuclear Information System (INIS)
Sjoeholm, M; Mikkelsen, T; Mann, J; Enevoldsen, K; Courtney, M
2008-01-01
The influence of spatial volume averaging of a focused 1.55 μm continuous-wave coherent Doppler Lidar on observed wind turbulence measured in the atmospheric surface layer over homogeneous terrain is described and analysed. Comparison of Lidar-measured turbulent spectra with spectra simultaneously obtained from a mast-mounted sonic anemometer at 78 meters height at the test station for large wind turbines at Hoevsoere in Western Jutland, Denmark is presented for the first time
Correlated continuous-time random walks—scaling limits and Langevin picture
International Nuclear Information System (INIS)
Magdziarz, Marcin; Metzler, Ralf; Szczotka, Wladyslaw; Zebrowski, Piotr
2012-01-01
In this paper we analyze correlated continuous-time random walks introduced recently by Tejedor and Metzler (2010 J. Phys. A: Math. Theor. 43 082002). We obtain the Langevin equations associated with this process and the corresponding scaling limits of their solutions. We prove that the limit processes are self-similar and display anomalous dynamics. Moreover, we extend the model to include external forces. Our results are confirmed by Monte Carlo simulations
Stability Tests of Positive Fractional Continuous-time Linear Systems with Delays
Directory of Open Access Journals (Sweden)
Tadeusz Kaczorek
2013-06-01
Full Text Available Necessary and sufficient conditions for the asymptotic stability of positive fractional continuous-time linear systems with many delays are established. It is shown that: (1) the asymptotic stability of the positive fractional system is independent of its delays; (2) checking the asymptotic stability of positive fractional systems with delays can be reduced to checking the asymptotic stability of positive standard linear systems without delays.
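The reduction in point (2) can be sketched numerically: for a positive system, delay-independent stability is checked through the eigenvalues of the delay-free sum of the system matrices. The matrices below are assumed illustrative examples (A0 Metzler, A1 nonnegative):

```python
import numpy as np

A0 = np.array([[-3.0, 1.0],
               [ 0.5, -2.0]])   # Metzler matrix (assumed)
A1 = np.array([[ 0.5, 0.2],
               [ 0.1, 0.4]])    # nonnegative delay matrix (assumed)

# Delay-independent test: Hurwitz stability of the delay-free sum A0 + A1.
A = A0 + A1
stable = bool(np.all(np.linalg.eigvals(A).real < 0))
```

For this pair the sum is Hurwitz, so the positive delayed system is asymptotically stable for every delay.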
Regularity of the Rotation Number for the One-Dimensional Time-Continuous Schroedinger Equation
Energy Technology Data Exchange (ETDEWEB)
Amor, Sana Hadj, E-mail: sana_hadjamor@yahoo.fr [Ecole Nationale d' Ingenieurs de Monastir (Tunisia)
2012-12-15
Starting from results already obtained for quasi-periodic co-cycles in SL(2, R), we show that the rotation number of the one-dimensional time-continuous Schroedinger equation with Diophantine frequencies and a small analytic potential has the behavior of a 1/2-Hoelder function. We also give a sub-exponential estimate of the length of the gaps, which depends on the label given to each gap by the gap-labeling theorem.
Mean-variance Optimal Reinsurance-investment Strategy in Continuous Time
Daheng Peng; Fang Zhang
2017-01-01
In this paper, Lagrange method is used to solve the continuous-time mean-variance reinsurance-investment problem. Proportional reinsurance, multiple risky assets and risk-free asset are considered synthetically in the optimal strategy for insurers. By solving the backward stochastic differential equation for the Lagrange multiplier, we get the mean-variance optimal reinsurance-investment strategy and its effective frontier in explicit forms.
Continuous-Time Mean-Variance Portfolio Selection under the CEV Process
Ma, Hui-qiang
2014-01-01
We consider a continuous-time mean-variance portfolio selection model when stock price follows the constant elasticity of variance (CEV) process. The aim of this paper is to derive an optimal portfolio strategy and the efficient frontier. The mean-variance portfolio selection problem is formulated as a linearly constrained convex program problem. By employing the Lagrange multiplier method and stochastic optimal control theory, we obtain the optimal portfolio strategy and mean-variance effici...
Do probabilistic forecasts lead to better decisions?
Directory of Open Access Journals (Sweden)
M. H. Ramos
2013-06-01
Full Text Available The last decade has seen growing research in producing probabilistic hydro-meteorological forecasts and increasing their reliability. This followed the promise that, supplied with information about uncertainty, people would take better risk-based decisions. In recent years, therefore, research and operational developments have also started focusing attention on ways of communicating the probabilistic forecasts to decision-makers. Communicating probabilistic forecasts includes preparing tools and products for visualisation, but also requires understanding how decision-makers perceive and use uncertainty information in real time. At the EGU General Assembly 2012, we conducted a laboratory-style experiment in which several cases of flood forecasts and a choice of actions to take were presented as part of a game to participants, who acted as decision-makers. Answers were collected and analysed. In this paper, we present the results of this exercise and discuss if we indeed make better decisions on the basis of probabilistic forecasts.
Investigation of continuous-time quantum walk via modules of Bose-Mesner and Terwilliger algebras
International Nuclear Information System (INIS)
Jafarizadeh, M A; Salimi, S
2006-01-01
The continuous-time quantum walk on the underlying graphs of association schemes has been studied via the algebraic combinatorial structures of association schemes, namely semi-simple modules of their Bose-Mesner and Terwilliger algebras. It is shown that the Terwilliger algebra stratifies the graph into a disjoint union of (d + 1) strata, which is different from the stratification based on distance, except for distance-regular graphs. In underlying graphs of association schemes, the probability amplitudes and average probabilities are given in terms of dual eigenvalues of association schemes, such that the amplitudes of observing the continuous-time quantum walk on all sites belonging to a given stratum are the same; therefore there are at most (d + 1) different observing probabilities. The importance of association schemes in continuous-time quantum walks is shown by some worked-out examples such as arbitrary finite group association schemes, followed by the symmetric group S_n, the dihedral group D_2m and cyclic groups. At the end it is shown that the highest irreducible representations of Terwilliger algebras pave the way to use the spectral distribution method of Jafarizadeh and Salimi (2005 Preprint quant-ph/0510174) in studying quantum walks on some rather important graphs called distance-regular graphs
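A small numerical sketch of a continuous-time quantum walk, here on the cycle graph C_8 (a distance-regular example chosen for illustration), computed through the eigendecomposition of the adjacency matrix:

```python
import numpy as np

n = 8
A = np.zeros((n, n))                     # adjacency matrix of the n-cycle
for j in range(n):
    A[j, (j + 1) % n] = A[(j + 1) % n, j] = 1.0

# continuous-time quantum walk amplitudes <j| exp(-i A t) |0>
evals, evecs = np.linalg.eigh(A)

def probabilities(t):
    U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.T
    amp = U[:, 0]                        # walk started at vertex 0
    return np.abs(amp) ** 2

p = probabilities(2.0)
```

By the cycle's symmetry, vertices at equal distance from the start carry identical probability, mirroring the stratum-wise equality of amplitudes described in the abstract.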
Relay selection in cooperative communication systems over continuous time-varying fading channel
Directory of Open Access Journals (Sweden)
Ke Geng
2017-02-01
Full Text Available In this paper, we study relay selection under outdated channel state information (CSI) in a decode-and-forward (DF) cooperative system. Unlike previous research on cooperative communication under outdated CSI, we consider that the channel varies continuously over time, i.e., the channel not only changes between relay selection and data transmission but also changes during data transmission. Thus the accuracy of the CSI used in relay selection degrades as data transmission proceeds. We first evaluate the packet error rate (PER) of the cooperative system under a continuous time-varying fading channel, and find that the PER performance deteriorates more seriously under a continuous time-varying fading channel than when the channel is assumed to be constant during data transmission. Then, we propose a repeated relay selection (RRS) strategy to improve the PER performance, in which the forwarded data is divided into multiple segments and the relay is reselected before the transmission of each segment based on the updated CSI. Finally, we propose a combined relay selection (CRS) strategy which takes advantage of three different relay selection strategies to further mitigate the impact of outdated CSI.
A New Continuous-Time Equality-Constrained Optimization to Avoid Singularity.
Quan, Quan; Cai, Kai-Yuan
2016-02-01
In equality-constrained optimization, a standard regularity assumption associated with feasible point methods is that the gradients of the constraints are linearly independent. In practice, this regularity assumption may be violated. In order to avoid such a singularity, a new projection matrix is proposed, based on which a feasible point method for continuous-time, equality-constrained optimization is developed. First, the equality constraint is transformed into a continuous-time dynamical system whose solutions always satisfy the equality constraint. Second, a new projection matrix without singularity is proposed to realize the transformation. An update (or say a controller) is subsequently designed to decrease the objective function along the solutions of the transformed continuous-time dynamical system. The invariance principle is then applied to analyze the behavior of the solution. Furthermore, the proposed method is modified to address cases in which solutions do not satisfy the equality constraint. Finally, the proposed optimization approach is applied to three examples to demonstrate its effectiveness.
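A sketch of the projection-based gradient flow described above, using the conventional projection P = I - J^T (J J^T)^{-1} J on a constraint where J keeps full rank throughout (the paper's new singularity-free projection is not reproduced); the objective and constraint below are assumed examples:

```python
import numpy as np

def g(x):            # equality constraint: the unit circle
    return np.array([x[0]**2 + x[1]**2 - 1.0])

def jac_g(x):        # constraint Jacobian J(x)
    return np.array([[2.0 * x[0], 2.0 * x[1]]])

def grad_f(x):       # objective f(x) = x0 (minimise the first coordinate)
    return np.array([1.0, 0.0])

x = np.array([0.0, 1.0])                 # feasible starting point
dt = 0.01
for _ in range(5000):                    # forward-Euler integration of x' = -P grad_f
    J = jac_g(x)
    P = np.eye(2) - J.T @ np.linalg.solve(J @ J.T, J)
    x = x - dt * P @ grad_f(x)
    x = x / np.linalg.norm(x)            # cancel Euler drift off the constraint

# the flow approaches the constrained minimiser (-1, 0)
```

The projected gradient is always tangent to the constraint surface, so the flow decreases f while (up to discretisation drift) staying feasible, which is the mechanism the paper builds on.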
Directory of Open Access Journals (Sweden)
D. Seidl
1999-06-01
Full Text Available Among a variety of spectrogram methods Short-Time Fourier Transform (STFT and Continuous Wavelet Transform (CWT were selected to analyse transients in non-stationary tremor signals. Depending on the properties of the tremor signal a more suitable representation of the signal is gained by CWT. Three selected broadband tremor signals from the volcanos Mt. Stromboli, Mt. Semeru and Mt. Pinatubo were analyzed using both methods. The CWT can also be used to extend the definition of coherency into a time-varying coherency spectrogram. An example is given using array data from the volcano Mt. Stromboli.
Liang, Yingjie; Chen, Wen
2018-04-01
The mean squared displacement (MSD) of traditional ultraslow diffusion is a logarithmic function of time. Recently, the continuous-time random walk model has been employed to characterize this ultraslow diffusion dynamics by connecting the heavy-tailed logarithmic function and its variation as the asymptotic waiting-time density. In this study we investigate the limiting waiting-time density of a general ultraslow diffusion model via the inverse Mittag-Leffler function, whose special cases include the traditional logarithmic ultraslow diffusion model. The MSD of the general ultraslow diffusion model is analytically derived as an inverse Mittag-Leffler function and is observed to increase even more slowly than that of the logarithmic model. Very long waiting times occur with the largest probability in the inverse Mittag-Leffler case, compared with the power-law and logarithmic function models. Monte Carlo simulations of the one-dimensional sample path of a single particle are also performed. The results show that the inverse Mittag-Leffler waiting-time density is effective in depicting general ultraslow random motion.
A test on analytic continuation of thermal imaginary-time data
International Nuclear Information System (INIS)
Burnier, Y.; Laine, M.; Mether, L.
2011-01-01
Some time ago, Cuniberti et al. have proposed a novel method for analytically continuing thermal imaginary-time correlators to real time, which requires no model input and should be applicable with finite-precision data as well. Given that these assertions go against common wisdom, we report on a naive test of the method with an idealized example. We do encounter two problems, which we spell out in detail; this implies that systematic errors are difficult to quantify. On a more positive note, the method is simple to implement and allows for an empirical recipe by which a reasonable qualitative estimate for some transport coefficient may be obtained, if statistical errors of an ultraviolet-subtracted imaginary-time measurement can be reduced to roughly below the per mille level. (orig.)
Time-Frequency-Wavenumber Analysis of Surface Waves Using the Continuous Wavelet Transform
Poggi, V.; Fäh, D.; Giardini, D.
2013-03-01
A modified approach to surface wave dispersion analysis using active sources is proposed. The method is based on continuous recordings, and uses the continuous wavelet transform to analyze the phase velocity dispersion of surface waves. This gives the possibility to accurately localize the phase information in time, and to isolate the most significant contribution of the surface waves. To extract the dispersion information, a hybrid technique is then applied to the narrowband-filtered seismic recordings. The technique combines the flexibility of the slant stack method in identifying waves that propagate in space and time with the resolution of f-k approaches. This is particularly beneficial for higher-mode identification in cases of high noise levels. To process the continuous wavelet transform, a new mother wavelet is presented and compared to the classical and widely used Morlet type. The proposed wavelet is obtained from a raised-cosine envelope function (Hanning type). The proposed approach is particularly suitable when using continuous recordings (e.g., from seismological-like equipment) since it does not require any hardware-based source triggering; this can subsequently be done with the proposed method. Estimation of the surface wave phase delay is performed in the frequency domain by means of a covariance matrix averaging procedure over successive wave field excitations. Thus, no record stacking is necessary in the time domain and a large number of consecutive shots can be used. This leads to a certain simplification of the field procedures. To demonstrate the effectiveness of the method, we tested it on synthetics as well as on real field data. For the real case we also combine dispersion curves from ambient vibrations and active measurements.
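A minimal construction of a raised-cosine (Hanning-envelope) complex wavelet of the kind described; the centre frequency, number of cycles and sampling rate are illustrative assumptions, not the authors' parameterisation:

```python
import numpy as np

def hanning_wavelet(n_cycles, freq, fs):
    """Complex wavelet with a raised-cosine (Hanning) envelope, an
    alternative to the Morlet type; normalised to unit energy."""
    duration = n_cycles / freq                   # support length in seconds
    n = int(round(duration * fs))
    t = (np.arange(n) - n / 2) / fs              # centred time axis
    envelope = 0.5 * (1 + np.cos(2 * np.pi * t / duration))   # Hanning window
    w = envelope * np.exp(2j * np.pi * freq * t)              # complex carrier
    return w / np.linalg.norm(w)

w = hanning_wavelet(n_cycles=5, freq=10.0, fs=500.0)
```

Convolving a recording with a bank of such wavelets at different centre frequencies yields the narrowband-filtered signals from which phase delays are then extracted.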
Reinforcement learning using a continuous time actor-critic framework with spiking neurons.
Directory of Open Access Journals (Sweden)
Nicolas Frémaux
2013-04-01
Full Text Available Animals repeat rewarded behaviors, but the physiological basis of reward-based learning has only been partially elucidated. On one hand, experimental evidence shows that the neuromodulator dopamine carries information about rewards and affects synaptic plasticity. On the other hand, the theory of reinforcement learning provides a framework for reward-based learning. Recent models of reward-modulated spike-timing-dependent plasticity have made first steps towards bridging the gap between the two approaches, but faced two problems. First, reinforcement learning is typically formulated in a discrete framework, ill-adapted to the description of natural situations. Second, biologically plausible models of reward-modulated spike-timing-dependent plasticity require precise calculation of the reward prediction error, yet it remains to be shown how this can be computed by neurons. Here we propose a solution to these problems by extending the continuous temporal difference (TD) learning of Doya (2000) to the case of spiking neurons in an actor-critic network operating in continuous time, with continuous state and action representations. In our model, the critic learns to predict expected future rewards in real time. Its activity, together with actual rewards, conditions the delivery of a neuromodulatory TD signal to itself and to the actor, which is responsible for action choice. In simulations, we show that such an architecture can solve a Morris water-maze-like navigation task, in a number of trials consistent with reported animal performance. We also use our model to solve the acrobot and the cartpole problems, two complex motor control tasks. Our model provides a plausible way of computing reward prediction error in the brain. Moreover, the analytically derived learning rule is consistent with experimental evidence for dopamine-modulated spike-timing-dependent plasticity.
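A critic-only discretised sketch in the spirit of the continuous TD rule (TD error δ = r - V/τ + dV/dt), with a hypothetical one-dimensional drift-to-goal task and a linear value function over radial basis features; all rates and parameters are assumptions, and the spiking actor of the paper is omitted:

```python
import numpy as np

rng = np.random.default_rng(2)

tau, dt, lr = 2.0, 0.01, 0.05              # discount time constant, step, learning rate
centers = np.linspace(0.0, 1.0, 11)        # radial basis function centres

def features(s):
    return np.exp(-((s - centers) ** 2) / 0.02)

w = np.zeros_like(centers)                 # critic weights, V(s) = w . features(s)
for episode in range(60):
    s = 0.0
    while s < 1.0:
        phi = features(s)
        v = w @ phi
        # drift toward the goal at s = 1 with small diffusion (assumed task)
        s_next = max(0.0, s + 0.5 * dt + 0.05 * np.sqrt(dt) * rng.standard_normal())
        r = 1.0 / dt if s_next >= 1.0 else 0.0          # impulse reward at the goal
        v_next = 0.0 if s_next >= 1.0 else w @ features(s_next)
        delta = r - v / tau + (v_next - v) / dt         # continuous-time TD error
        w += lr * delta * phi * dt                      # gradient update of the critic
        s = s_next

# learned values are higher near the rewarded goal state
```

In the paper this TD signal is computed by a spiking critic and broadcast as a neuromodulatory signal to both critic and actor; here it only trains the tabular-like value estimate.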
On disturbed time continuity in schizophrenia: an elementary impairment in visual perception?
Directory of Open Access Journals (Sweden)
Anne eGiersch
2013-05-01
Full Text Available Schizophrenia is associated with a series of visual perception impairments, which might impact the patients' everyday life and be related to clinical symptoms. However, the heterogeneity of the visual disorders makes it a challenge to understand both the mechanisms and the consequences of these impairments, i.e. the way patients experience the outer world. Based on earlier psychiatric literature, we argue that issues regarding time might shed new light on the disorders observed in patients with schizophrenia. We briefly review the mechanisms involved in the sense of time continuity and the clinical evidence that they are impaired in patients with schizophrenia. We then summarize a recent experimental approach to the coding of event structure in time, namely the ability to discriminate between simultaneous and asynchronous events. The use of an original method of analysis allowed us to distinguish between explicit and implicit judgements of synchrony. We showed that for SOAs below 20 ms, neither patients nor controls fuse events in time; on the contrary, subjects distinguish events at an implicit level even when judging them as synchronous. In addition, the implicit responses of patients and controls differ qualitatively: it is as if controls always put more weight on the last event to occur, whereas patients have difficulty following events in time at an implicit level. In patients, there is a clear dissociation between results at short and long asynchronies, which suggests selective mechanisms for the implicit coding of event structure in time. These results might explain the disruption of the sense of time continuity in patients. We argue that this line of research might also help us to better understand the mechanisms of the visual impairments in patients and how they see their environment.
HMM_Model-Checker for probabilistic verification HMM_Model ...
African Journals Online (AJOL)
ASSIA
probabilistic - Hubble Telescope. Abstract. Probabilistic verification for embedded systems continues to attract more and more followers in the research community. Given a probabilistic model, a formula of temporal logic describing a property of a system, and an exploration algorithm to check whether the property is satisfied ...
Hellrung, Lydia; Dietrich, Anja; Hollmann, Maurice; Pleger, Burkhard; Kalberlah, Christian; Roggenhofer, Elisabeth; Villringer, Arno; Horstmann, Annette
2018-02-01
Real-time fMRI neurofeedback is a feasible tool to learn the volitional regulation of brain activity. So far, most studies provide continuous feedback information that is presented upon every volume acquisition. Although this maximizes the temporal resolution of feedback information, it may be accompanied by some disadvantages. Participants can be distracted from the regulation task due to (1) the intrinsic delay of the hemodynamic response and associated feedback and (2) limited cognitive resources available to simultaneously evaluate feedback information and stay engaged with the task. Here, we systematically investigate differences between groups presented with different variants of feedback (continuous vs. intermittent) and a control group receiving no feedback on their ability to regulate amygdala activity using positive memories and feelings. In contrast to the feedback groups, no learning effect was observed in the group without any feedback presentation. The group receiving intermittent feedback exhibited better amygdala regulation performance when compared with the group receiving continuous feedback. Behavioural measurements show that these effects were reflected in differences in task engagement. Overall, we not only demonstrate that the presentation of feedback is a prerequisite to learn volitional control of amygdala activity but also that intermittent feedback is superior to continuous feedback presentation. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Probabilistic Infinite Secret Sharing
Csirmaz, László
2013-01-01
The study of probabilistic secret sharing schemes using arbitrary probability spaces and possibly infinite number of participants lets us investigate abstract properties of such schemes. It highlights important properties, explains why certain definitions work better than others, connects this topic to other branches of mathematics, and might yield new design paradigms. A probabilistic secret sharing scheme is a joint probability distribution of the shares and the secret together with a colle...
Probabilistic Programming (Invited Talk)
Yang, Hongseok
2017-01-01
Probabilistic programming refers to the idea of using standard programming constructs for specifying probabilistic models from machine learning and statistics, and employing generic inference algorithms for answering various queries on these models, such as posterior inference and estimation of model evidence. Although this idea itself is not new and was, in fact, explored by several programming-language and statistics researchers in the early 2000, it is only in the last few years that proba...
Fluctuations around equilibrium laws in ergodic continuous-time random walks.
Schulz, Johannes H P; Barkai, Eli
2015-06-01
We study occupation time statistics in ergodic continuous-time random walks. Under thermal detailed balance conditions, the average occupation time is given by the Boltzmann-Gibbs canonical law. But close to the nonergodic phase, the finite-time fluctuations around this mean are large and nontrivial. They exhibit dual time scaling and distribution laws: the infinite density of large fluctuations complements the Lévy-stable density of bulk fluctuations. Neither of the two should be interpreted as a stand-alone limiting law, as each has its own deficiency: the infinite density has an infinite norm (despite particle conservation), while the stable distribution has an infinite variance (although occupation times are bounded). These unphysical divergences are remedied by consistent use and interpretation of both formulas. Interestingly, while the system's canonical equilibrium laws naturally determine the mean occupation time of the ergodic motion, they also control the infinite and Lévy-stable densities of fluctuations. The duality of stable and infinite densities is in fact ubiquitous for these dynamics, as it concerns the time averages of general physical observables.
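The Boltzmann-Gibbs mean occupation time under thermal detailed balance can be checked with a simple two-state simulation; the energies and rates below are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two-state system with detailed balance: k01 / k10 = exp(-(E1 - E0) / kT),
# so the mean occupation of state 1 follows the canonical Boltzmann-Gibbs law.
E0, E1, kT = 0.0, 1.0, 1.0
k01 = np.exp(-(E1 - E0) / kT)    # rate 0 -> 1
k10 = 1.0                        # rate 1 -> 0

t_total, t_in_1, state = 0.0, 0.0, 0
while t_total < 1e5:
    rate = k01 if state == 0 else k10
    dwell = rng.exponential(1.0 / rate)      # exponential dwell time in current state
    if state == 1:
        t_in_1 += dwell
    t_total += dwell
    state = 1 - state

frac = t_in_1 / t_total
boltzmann = np.exp(-E1 / kT) / (1.0 + np.exp(-E1 / kT))   # canonical prediction
```

With exponential dwell times the time-averaged fraction matches the canonical weight; the abstract's point is that with heavy-tailed dwell times near the nonergodic phase, the finite-time fluctuations around this mean become large and require the dual infinite/Lévy-stable description.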
International Nuclear Information System (INIS)
Doebrich, Marcus; Markstaller, Klaus; Karmrodt, Jens; Kauczor, Hans-Ulrich; Eberle, Balthasar; Weiler, Norbert; Thelen, Manfred; Schreiber, Wolfgang G
2005-01-01
In this study, an algorithm was developed to measure the distribution of pulmonary time constants (TCs) from dynamic computed tomography (CT) data sets during a sudden airway pressure step-up. Simulations with synthetic data were performed to test the methodology as well as the influence of experimental noise. Furthermore, the algorithm was applied to in vivo data. In five pigs, sudden changes in airway pressure were imposed during dynamic CT acquisition in healthy lungs and in a saline-lavage ARDS model. The fractional gas content in the imaged slice (FGC) was calculated by density measurements for each CT image. Temporal variations of the FGC were analysed assuming a model with a continuous distribution of exponentially decaying time constants. The simulations proved the feasibility of the method, and the influence of experimental noise could be well evaluated. Analysis of the in vivo data showed that in healthy lungs ventilation processes are more likely characterized by discrete TCs, whereas in ARDS lungs continuous distributions of TCs are observed. The temporal behaviour of lung inflation and deflation can be characterized objectively using the described new methodology. This study indicates that continuous distributions of TCs reflect lung ventilation mechanics more accurately than discrete TCs
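The idea of recovering a distribution of time constants can be sketched by regressing a synthetic inflation curve onto a grid of exponential basis functions (a plain least-squares stand-in for the authors' analysis; all values are assumed):

```python
import numpy as np

# Synthetic lung-inflation curve built from two discrete time constants (assumed):
# FGC(t) = 0.7 * (1 - exp(-t/0.5)) + 0.3 * (1 - exp(-t/3.0))
t = np.linspace(0.0, 10.0, 200)
fgc = 0.7 * (1 - np.exp(-t / 0.5)) + 0.3 * (1 - np.exp(-t / 3.0))

# Recover the amplitude distribution over a grid of candidate time constants
taus = np.array([0.25, 0.5, 1.0, 2.0, 3.0, 5.0])
basis = 1 - np.exp(-t[:, None] / taus[None, :])     # one column per candidate TC
amps, *_ = np.linalg.lstsq(basis, fgc, rcond=None)  # fitted amplitude per TC
```

The fit puts all the weight on the two true time constants; with noisy in vivo data, a regularised or nonnegativity-constrained variant would be the more robust choice.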
The new Big Bang Theory according to dimensional continuous space-time theory
International Nuclear Information System (INIS)
Martini, Luiz Cesar
2014-01-01
This New View of the Big Bang Theory results from the Dimensional Continuous Space-Time Theory, for which the introduction was presented in [1]. This theory is based on the concept that the primitive Universe before the Big Bang was constituted only from elementary cells of potential energy disposed side by side. In the primitive Universe there were no particles, charges, movement and the Universe temperature was absolute zero Kelvin. The time was always present, even in the primitive Universe, time is the integral part of the empty space, it is the dynamic energy of space and it is responsible for the movement of matter and energy inside the Universe. The empty space is totally stationary; the primitive Universe was infinite and totally occupied by elementary cells of potential energy. In its event, the Big Bang started a production of matter, charges, energy liberation, dynamic movement, temperature increase and the conformation of galaxies respecting a specific formation law. This article presents the theoretical formation of the Galaxies starting from a basic equation of the Dimensional Continuous Space-time Theory.
The New Big Bang Theory according to Dimensional Continuous Space-Time Theory
Martini, Luiz Cesar
2014-04-01
This New View of the Big Bang Theory results from the Dimensional Continuous Space-Time Theory, for which the introduction was presented in [1]. This theory is based on the concept that the primitive Universe before the Big Bang was constituted only from elementary cells of potential energy disposed side by side. In the primitive Universe there were no particles, charges, movement and the Universe temperature was absolute zero Kelvin. The time was always present, even in the primitive Universe, time is the integral part of the empty space, it is the dynamic energy of space and it is responsible for the movement of matter and energy inside the Universe. The empty space is totally stationary; the primitive Universe was infinite and totally occupied by elementary cells of potential energy. In its event, the Big Bang started a production of matter, charges, energy liberation, dynamic movement, temperature increase and the conformation of galaxies respecting a specific formation law. This article presents the theoretical formation of the Galaxies starting from a basic equation of the Dimensional Continuous Space-time Theory.
Real-time electrocardiogram transmission from Mount Everest during continued ascent.
Kao, Wei-Fong; Huang, Jyh-How; Kuo, Terry B J; Chang, Po-Lun; Chang, Wen-Chen; Chan, Kuo-Hung; Liu, Wen-Hsiung; Wang, Shih-Hao; Su, Tzu-Yao; Chiang, Hsiu-chen; Chen, Jin-Jong
2013-01-01
The feasibility of a real-time electrocardiogram (ECG) transmission via satellite phone from Mount Everest to determine a climber's suitability for continued ascent was examined. Four Taiwanese climbers were enrolled in the 2009 Mount Everest summit program. Physiological measurements were taken at base camp (5300 m), camp 2 (6400 m), camp 3 (7100 m), and camp 4 (7950 m) 1 hour after arrival and following a 10 minute rest period. A total of 3 out of 4 climbers were able to summit Mount Everest successfully. Overall, ECG and global positioning system (GPS) coordinates of climbers were transmitted in real-time via satellite phone successfully from base camp, camp 2, camp 3, and camp 4. At each camp, Resting Heart Rate (RHR) was transmitted and recorded: base camp (54-113 bpm), camp 2 (94-130 bpm), camp 3 (98-115 bpm), and camp 4 (93-111 bpm). Real-time ECG and GPS coordinate transmission via satellite phone is feasible for climbers on Mount Everest. Real-time RHR data can be used to evaluate a climber's physiological capacity to continue an ascent and to summit.
Real-time electrocardiogram transmission from Mount Everest during continued ascent.
Directory of Open Access Journals (Sweden)
Wei-Fong Kao
The feasibility of a real-time electrocardiogram (ECG) transmission via satellite phone from Mount Everest to determine a climber's suitability for continued ascent was examined. Four Taiwanese climbers were enrolled in the 2009 Mount Everest summit program. Physiological measurements were taken at base camp (5300 m), camp 2 (6400 m), camp 3 (7100 m), and camp 4 (7950 m) 1 hour after arrival and following a 10 minute rest period. A total of 3 out of 4 climbers were able to summit Mount Everest successfully. Overall, ECG and global positioning system (GPS) coordinates of climbers were transmitted in real-time via satellite phone successfully from base camp, camp 2, camp 3, and camp 4. At each camp, Resting Heart Rate (RHR) was transmitted and recorded: base camp (54-113 bpm), camp 2 (94-130 bpm), camp 3 (98-115 bpm), and camp 4 (93-111 bpm). Real-time ECG and GPS coordinate transmission via satellite phone is feasible for climbers on Mount Everest. Real-time RHR data can be used to evaluate a climber's physiological capacity to continue an ascent and to summit.
OPTIMAL STRATEGIES FOR CONTINUOUS GRAVITATIONAL WAVE DETECTION IN PULSAR TIMING ARRAYS
International Nuclear Information System (INIS)
Ellis, J. A.; Siemens, X.; Creighton, J. D. E.
2012-01-01
Supermassive black hole binaries (SMBHBs) are expected to emit a continuous gravitational wave signal in the pulsar timing array (PTA) frequency band (10⁻⁹ to 10⁻⁷ Hz). The development of data analysis techniques aimed at efficient detection and characterization of these signals is critical to the gravitational wave detection effort. In this paper, we leverage methods developed for LIGO continuous wave searches and explore the use of the F-statistic for such searches in pulsar timing data. Babak and Sesana have used this approach in the context of PTAs to show that one can resolve multiple SMBHB sources in the sky. Our work improves on several aspects of prior continuous wave search methods developed for PTA data analysis. The algorithm is implemented fully in the time domain, which naturally deals with the irregular sampling typical of PTA data and avoids the spectral leakage problems associated with frequency domain methods. We take into account the fitting of the timing model and have generalized our approach to deal with both correlated and uncorrelated colored noise sources. We also develop an incoherent detection statistic that maximizes over all pulsar-dependent contributions to the likelihood. To test the effectiveness and sensitivity of our detection statistics, we perform a number of Monte Carlo simulations. We produce sensitivity curves for PTAs of various configurations and outline an implementation of a fully functional data analysis pipeline. Finally, we present a derivation of the likelihood maximized over the gravitational wave phases at the pulsar locations, which results in a vast reduction of the search parameter space.
Probabilistic studies for a safety assurance program
International Nuclear Information System (INIS)
Iyer, S.S.; Davis, J.F.
1985-01-01
The adequate supply of energy is always a matter of concern for any country. Nuclear power has played, and will continue to play, an important role in supplying this energy. However, safety in nuclear power production is a fundamental prerequisite for fulfilling this role. This paper outlines a program to ensure safe operation of a nuclear power plant utilizing probabilistic safety studies.
Astrand, Elaine
2018-06-01
Objective. Working memory (WM), crucial for successful behavioral performance in most of our everyday activities, holds a central role in goal-directed behavior. As task demands increase, inducing higher WM load, maintaining successful behavioral performance requires the brain to work at the higher end of its capacity. Because it depends on both external and internal factors, individual WM load likely varies in a continuous fashion. Whether such a continuous measure can be extracted over time, and whether it correlates with behavioral performance during a working memory task, has remained unresolved. Approach. Multivariate pattern decoding was used to test whether a decoder constructed from two discrete levels of WM load can generalize to produce a continuous measure that predicts task performance. Specifically, a linear regression with L2-regularization was chosen, with input features from EEG oscillatory activity recorded from healthy participants while performing the n-back task, n ∈ {1, 2}. Main results. The feasibility of extracting a continuous time-resolved measure that correlates positively with trial-by-trial working memory task performance is demonstrated (r = 0.47, p < 0.05). It is furthermore shown that this measure allows task performance to be predicted before action (r = 0.49, p < 0.05). We show that the extracted continuous measure enables study of the temporal dynamics of the complex activation pattern of WM encoding during the n-back task. Specifically, temporally precise contributions of different spectral features are observed, which extends previous findings of traditional univariate approaches. Significance. These results constitute an important contribution towards a wide range of applications in the field of cognitive brain–machine interfaces. Monitoring mental processes related to attention and WM load to reduce the risk of committing errors in high-risk environments could potentially prevent many devastating consequences or…
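As a rough illustration of the decoding idea in this abstract, the sketch below (synthetic data, assumed feature dimensions; not the study's pipeline) fits an L2-regularized linear regression to band-power-like features labeled with the two discrete load levels, after which its linear read-out yields a continuous load estimate on new data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 200 trials x 32 spectral features (e.g. band power
# per electrode); WM load n in {1, 2} serves as the discrete training label.
n_trials, n_feats = 200, 32
true_w = rng.normal(size=n_feats)                 # assumed "load direction"
load = rng.integers(1, 3, size=n_trials).astype(float)
X = load[:, None] * true_w + rng.normal(scale=1.0, size=(n_trials, n_feats))

def ridge_fit(X, y, lam=1.0):
    """Closed-form L2-regularized linear regression (ridge)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w = ridge_fit(X, load)

# Although trained on two discrete levels only, the linear read-out X @ w
# is continuous-valued, which is what permits a time-resolved load estimate.
load_hat = X @ w
```

Because the decoder is linear, nothing restricts its output on unseen trials to the two training labels; generalizing between the levels is exactly the property the study exploits.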
Time-synchronized continuous wave laser-induced fluorescence on an oscillatory xenon discharge.
MacDonald, N A; Cappelli, M A; Hargus, W A
2012-11-01
A novel approach to time-synchronizing laser-induced fluorescence measurements to an oscillating current in a 60 Hz xenon discharge lamp using a continuous wave laser is presented. A sample-hold circuit is implemented to separate out signals at different phases along a current cycle, and is followed by a lock-in amplifier to pull out the resulting time-synchronized fluorescence trace from the large background signal. The time evolution of lower state population is derived from the changes in intensity of the fluorescence excitation line shape resulting from laser-induced fluorescence measurements of the 6s′[1/2]₁⁰–6p′[3/2]₂ xenon atomic transition at λ = 834.68 nm. Results show that the lower state population oscillates at twice the frequency of the discharge current, 120 Hz.
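The lock-in stage described above can be imitated in software. The sketch below (assumed sample rate and signal levels, not the experimental code) recovers a weak 120 Hz component from a large noisy background by mixing with quadrature references and averaging:

```python
import numpy as np

rng = np.random.default_rng(1)

fs = 50_000.0                        # assumed sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)        # 1 s record = 120 full signal cycles
f_sig = 120.0                        # twice the 60 Hz discharge frequency
amp = 0.05                           # weak fluorescence modulation depth

x = (amp * np.cos(2 * np.pi * f_sig * t)
     + 5.0 + rng.normal(scale=0.5, size=t.size))  # large background + noise

# Mix with in-phase/quadrature references; averaging over an integer number
# of cycles acts as the lock-in's low-pass filter and rejects the background.
I = 2.0 * np.mean(x * np.cos(2 * np.pi * f_sig * t))
Q = 2.0 * np.mean(x * np.sin(2 * np.pi * f_sig * t))
recovered = np.hypot(I, Q)           # close to amp despite the background
```

The quadrature pair makes the estimate insensitive to the unknown phase between the reference and the fluorescence signal, which is the same reason hardware lock-ins provide both X and Y outputs.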
Time-synchronized continuous wave laser-induced fluorescence on an oscillatory xenon discharge
Energy Technology Data Exchange (ETDEWEB)
MacDonald, N. A.; Cappelli, M. A. [Stanford Plasma Physics Laboratory, Stanford University, Stanford, California 94305 (United States); Hargus, W. A. Jr. [Air Force Research Laboratory, Edwards AFB, California 93524 (United States)
2012-11-15
A novel approach to time-synchronizing laser-induced fluorescence measurements to an oscillating current in a 60 Hz xenon discharge lamp using a continuous wave laser is presented. A sample-hold circuit is implemented to separate out signals at different phases along a current cycle, and is followed by a lock-in amplifier to pull out the resulting time-synchronized fluorescence trace from the large background signal. The time evolution of lower state population is derived from the changes in intensity of the fluorescence excitation line shape resulting from laser-induced fluorescence measurements of the 6s′[1/2]₁⁰–6p′[3/2]₂ xenon atomic transition at λ = 834.68 nm. Results show that the lower state population oscillates at twice the frequency of the discharge current, 120 Hz.
Offset-Free Direct Power Control of DFIG Under Continuous-Time Model Predictive Control
DEFF Research Database (Denmark)
Errouissi, Rachid; Al-Durra, Ahmed; Muyeen, S.M.
2017-01-01
This paper presents a robust continuous-time model predictive direct power control for the doubly fed induction generator (DFIG). The proposed approach uses a Taylor series expansion to predict the stator current in the synchronous reference frame over a finite time horizon. The predicted stator current is directly used to compute the required rotor voltage in order to minimize the difference between the actual stator currents and their references over the predictive time. However, as the proposed strategy is sensitive to parameter variations and external disturbances, a disturbance observer is embedded into the control loop to remove the steady-state error of the stator current. It turns out that the steady-state and the transient performances can be identified by simple design parameters. In this paper, the reference of the stator current is directly calculated from the desired stator active and reactive powers…
DEFF Research Database (Denmark)
Tataru, Paula Cristina; Hobolth, Asger
2011-01-01
BACKGROUND: Continuous time Markov chains (CTMCs) are a widely used model for describing the evolution of DNA sequences on the nucleotide, amino acid or codon level. The sufficient statistics for CTMCs are the time spent in a state and the number of changes between any two states. In applications past evolutionary events (exact times and types of changes) are inaccessible and the past must be inferred from DNA sequence data observed in the present. RESULTS: We describe and implement three algorithms for computing linear combinations of expected values of the sufficient statistics, conditioned… An implementation of the algorithms is available at www.birc.au.dk/~paula/. CONCLUSIONS: We use two different models to analyze the accuracy and eight experiments to investigate the speed of the three algorithms. We find that they have similar accuracy and that EXPM is the slowest method. Furthermore we find that UNI is usually…
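The matrix-exponential ("EXPM") approach mentioned in the comparison can be sketched with Van Loan's block trick: a single exponential of an augmented matrix yields the integral needed for the expected time spent in a state (toy 2-state rate matrix assumed; this is not the authors' implementation):

```python
import numpy as np
from scipy.linalg import expm

Q = np.array([[-1.0,  1.0],
              [ 2.0, -2.0]])        # toy CTMC rate matrix (rows sum to 0)
T = 0.5                             # branch length (time)

def expected_time_in_state(Q, T, i):
    """Return M with M[a, b] = E[time in state i on [0, T]; X_T = b | X_0 = a].

    Van Loan's trick: the top-right block of expm([[Q, E_i], [0, Q]] * T)
    equals the integral of e^{Qs} E_i e^{Q(T-s)} ds over [0, T].
    """
    n = Q.shape[0]
    E = np.zeros((n, n))
    E[i, i] = 1.0
    A = np.block([[Q, E], [np.zeros((n, n)), Q]])
    return expm(A * T)[:n, n:]

# Sanity check: since the E_i sum to the identity, summing over all states
# must recover T * expm(Q * T).
M = sum(expected_time_in_state(Q, T, i) for i in range(2))
```

The same augmented-matrix construction with E replaced by a rate-weighted indicator gives the expected number of changes between two states, which is the other sufficient statistic the paper treats.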
Probabilistic Open Set Recognition
Jain, Lalit Prithviraj
support vector machines. Building from the success of statistical EVT based recognition methods such as PI-SVM and W-SVM on the open set problem, we present a new general supervised learning algorithm for multi-class classification and multi-class open set recognition called the Extreme Value Local Basis (EVLB). The design of this algorithm is motivated by the observation that extrema from known negative class distributions are the closest negative points to any positive sample during training, and thus should be used to define the parameters of a probabilistic decision model. In the EVLB, the kernel distribution for each positive training sample is estimated via an EVT distribution fit over the distances to the separating hyperplane between the positive training sample and its closest negative samples, with a subset of the overall positive training data retained to form a probabilistic decision boundary. Using this subset as a frame of reference, the probability of a sample at test time decreases as it moves away from the positive class. Possessing this property, the EVLB is well-suited to open set recognition problems where samples from unknown or novel classes are encountered at test time. Our experimental evaluation shows that the EVLB provides a substantial improvement in scalability compared to standard radial basis function kernel machines, as well as PI-SVM and W-SVM, with improved accuracy in many cases. We evaluate our algorithm on open set variations of the standard visual learning benchmarks, as well as with an open subset of classes from Caltech 256 and ImageNet. Our experiments show that PI-SVM, W-SVM and EVLB provide significant advances over the previous state-of-the-art solutions for the same tasks.
Estimating the continuous-time dynamics of energy and fat metabolism in mice.
Guo, Juen; Hall, Kevin D
2009-09-01
The mouse has become the most popular organism for investigating molecular mechanisms of body weight regulation. But understanding the physiological context by which a molecule exerts its effect on body weight requires knowledge of energy intake, energy expenditure, and fuel selection. Furthermore, measurements of these variables made at an isolated time point cannot explain why body weight has its present value since body weight is determined by the past history of energy and macronutrient imbalance. While food intake and body weight changes can be frequently measured over several weeks (the relevant time scale for mice), correspondingly frequent measurements of energy expenditure and fuel selection are not currently feasible. To address this issue, we developed a mathematical method based on the law of energy conservation that uses the measured time course of body weight and food intake to estimate the underlying continuous-time dynamics of energy output and net fat oxidation. We applied our methodology to male C57BL/6 mice consuming various ad libitum diets during weight gain and loss over several weeks and present the first continuous-time estimates of energy output and net fat oxidation rates underlying the observed body composition changes. We show that transient energy and fat imbalances in the first several days following a diet switch can account for a significant fraction of the total body weight change. We also discovered a time-invariant curve relating body fat and fat-free masses in male C57BL/6 mice, and the shape of this curve determines how diet, fuel selection, and body composition are interrelated.
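The core energy-conservation step can be illustrated numerically (assumed weights, intake, and energy density, not the paper's parameter values): expenditure is recovered from measured intake and the body-weight time course alone:

```python
import numpy as np

rho = 9.4        # assumed energy density of body-weight change, kcal per g
days = np.arange(7.0)
weight = np.array([25.0, 25.2, 25.5, 25.7, 26.0, 26.1, 26.3])  # grams
intake = np.full(7, 12.0)                                      # kcal/day

# Energy conservation: rho * dW/dt = intake - expenditure, so the
# continuous-time expenditure follows from the weight trajectory alone.
dW_dt = np.gradient(weight, days)        # g/day
expenditure = intake - rho * dW_dt       # kcal/day
```

The paper's model is richer (it separates fat and fat-free compartments and estimates net fat oxidation); this two-line balance is only the underlying accounting identity.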
Continuous-time digital front-ends for multistandard wireless transmission
Nuyts, Pieter A J; Dehaene, Wim
2014-01-01
This book describes the design of fully digital multistandard transmitter front-ends which can directly drive one or more switching power amplifiers, thus eliminating all other analog components. After reviewing different architectures, the authors focus on polar architectures using pulse width modulation (PWM), which are entirely based on unclocked delay lines and other continuous-time digital hardware. As a result, readers are enabled to shift accuracy concerns from the voltage domain to the time domain, to coincide with submicron CMOS technology scaling. The authors present different architectural options and compare them, based on their effect on the signal and spectrum quality. Next, a high-level theoretical analysis of two different PWM-based architectures – baseband PWM and RF PWM – is made. On the circuit level, traditional digital components and design techniques are revisited from the point of view of continuous-time digital circuits. Important design criteria are identified and diff...
Energy Technology Data Exchange (ETDEWEB)
Migunov, Vadim, E-mail: v.migunov@fz-juelich.de [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons, Peter Grünberg Institute, Forschungszentrum Jülich, D-52425 Jülich (Germany); Dwyer, Christian [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons, Peter Grünberg Institute, Forschungszentrum Jülich, D-52425 Jülich (Germany); Department of Physics, Arizona State University, Tempe, AZ 85287 (United States); Boothroyd, Chris B. [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons, Peter Grünberg Institute, Forschungszentrum Jülich, D-52425 Jülich (Germany); Pozzi, Giulio [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons, Peter Grünberg Institute, Forschungszentrum Jülich, D-52425 Jülich (Germany); Department of Physics and Astronomy, University of Bologna, viale B. Pichat 6/2, Bologna 40127 (Italy); Dunin-Borkowski, Rafal E. [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons, Peter Grünberg Institute, Forschungszentrum Jülich, D-52425 Jülich (Germany)
2017-07-15
The technique of double exposure electron holography, which is based on the superposition of two off-axis electron holograms, was originally introduced before the availability of digital image processing to allow differences between electron-optical phases encoded in two electron holograms to be visualised directly without the need for holographic reconstruction. Here, we review the original method and show how it can now be extended to permit quantitative studies of phase shifts that oscillate in time. We begin with a description of the theory of off-axis electron hologram formation for a time-dependent electron wave that results from the excitation of a specimen using an external stimulus with a square, sinusoidal, triangular or other temporal dependence. We refer to the more general method as continuous exposure electron holography, present preliminary experimental measurements and discuss how the technique can be used to image electrostatic potentials and magnetic fields during high frequency switching experiments. - Highlights: • Double and continuous exposure electron holography are described in detail. • The ability to perform quantitative studies of phase shifts that are oscillating in time is illustrated. • Theoretical considerations related to noise are presented. • Future high frequency electromagnetic switching experiments are proposed.
Alken, P.; Chulliat, A.; Maus, S.
2012-12-01
The day-time eastward equatorial electric field (EEF) in the ionospheric E-region plays an important role in equatorial ionospheric dynamics. It is responsible for driving the equatorial electrojet (EEJ) current system, equatorial vertical ion drifts, and the equatorial ionization anomaly (EIA). Due to its importance, there is much interest in accurately measuring and modeling the EEF. However, there are limited sources of direct EEF measurements with full temporal and spatial coverage of the equatorial ionosphere. In this work, we propose a method of estimating a continuous day-time time series of the EEF at any longitude, provided there is a pair of ground magnetic observatories in the region which can accurately track changes in the strength of the EEJ. First, we derive a climatological unit latitudinal current profile from direct overflights of the CHAMP satellite and use delta H measurements from the ground observatory pair to determine the magnitude of the current. The time series of current profiles is then inverted for the EEF by solving the governing electrodynamic equations. While this method has previously been applied and validated in the Peruvian sector, in this work we demonstrate the method using a pair of magnetometers in Africa (Samogossoni, SAM, 0.18 degrees magnetic latitude and Tamanrasset, TAM, 11.5 degrees magnetic latitude) and validate the resulting EEF values against the CINDI ion velocity meter (IVM) instrument on the C/NOFS satellite. We find a very good 80% correlation with C/NOFS IVM measurements and a root-mean-square difference of 9 m/s in vertical drift velocity. This technique can be extended to any pair of ground observatories which can capture the day-time strength of the EEJ. We plan to apply this work to more observatory pairs around the globe and distribute real-time equatorial electric field values to the community.
Alyushin, M. V.; Kolobashkina, L. V.
2017-01-01
An education technology with continuous monitoring of the current functional and emotional states of students is suggested. Applying this technology makes it possible to increase the effectiveness of practice through informed planning of the training load. For monitoring the current functional and emotional states of students, the use of non-contact remote technologies for registering a person's bioparameters is encouraged. These technologies are based on recording and processing the main bioparameters of a person in real time and in a purely passive mode. Experimental testing of this technology has confirmed its effectiveness.
Numerical solution of continuous-time DSGE models under Poisson uncertainty
DEFF Research Database (Denmark)
Posch, Olaf; Trimborn, Timo
We propose a simple and powerful method for determining the transition process in continuous-time DSGE models under Poisson uncertainty numerically. The idea is to transform the system of stochastic differential equations into a system of functional differential equations of the retarded type. We...... classes of models. We illustrate the algorithm simulating both the stochastic neoclassical growth model and the Lucas model under Poisson uncertainty which is motivated by the Barro-Rietz rare disaster hypothesis. We find that, even for non-linear policy functions, the maximum (absolute) error is very...
On the rate of convergence in von Neumann's ergodic theorem with continuous time
International Nuclear Information System (INIS)
Kachurovskii, A G; Reshetenko, Anna V
2010-01-01
The rate of convergence in von Neumann's mean ergodic theorem is studied for continuous time. The condition that the rate of convergence of the ergodic averages be of power-law type is shown to be equivalent to requiring that the spectral measure of the corresponding dynamical system have a power-type singularity at 0. This forces the estimates for the convergence rate in the above ergodic theorem to be necessarily spectral. All the results obtained have obvious exact analogues for wide-sense stationary processes. Bibliography: 7 titles.
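Schematically, the spectral identity behind this equivalence can be written as follows (standard notation for the ergodic averages of a measure-preserving flow; the admissible range of the exponent and the exact constants follow the paper):

```latex
A_t f \;=\; \frac{1}{t}\int_0^t f \circ T_s \, ds, \qquad
\| A_t f - Pf \|^2 \;=\;
\int_{\mathbb{R}\setminus\{0\}}
  \left| \frac{e^{i\lambda t} - 1}{i\lambda t} \right|^2 d\sigma_f(\lambda),
```

so a power-law rate of convergence corresponds to a power-type singularity of the spectral measure $\sigma_f$ at zero:

```latex
\| A_t f - Pf \|^2 = O(t^{-\alpha})
\quad\Longleftrightarrow\quad
\sigma_f\bigl([-\lambda, \lambda]\setminus\{0\}\bigr) = O(\lambda^{\alpha}).
```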
Study of N-13 decay on time using continuous kinetic function method
International Nuclear Information System (INIS)
Tran Dai Nghiep; Vu Hoang Lam; Nguyen Ngoc Son; Nguyen Duc Thanh
1993-01-01
The decay function of the radioisotope ¹³N formed in the reaction ¹⁴N(γ,n)¹³N was registered by a high-resolution gamma spectrometer in multiscanning mode at a gamma energy of 511 keV. The experimental data were processed by the common method and by the continuous kinetic function method. Continuous comparison of the decay function over time makes it possible to determine possible deviations from a purely exponential decay curve. The results were described by several decay theories, and the degree of correspondence between theory and experiment was evaluated by a goodness factor. A complex type of decay was considered. (author). 9 refs, 2 tabs, 6 figs
DEFF Research Database (Denmark)
Lauridsen, M M; Schaffalitzky de Muckadell, O B; Vilstrup, H
2015-01-01
Minimal hepatic encephalopathy (MHE) is a frequent complication to liver cirrhosis that causes poor quality of life and a great burden to caregivers, and it can be treated. For diagnosis and grading, the international guidelines recommend the use of psychometric tests of different modalities (computer based vs. paper and pencil). To compare results of the Continuous Reaction time (CRT) and the Portosystemic Encephalopathy (PSE) tests in a large unselected cohort of cirrhosis patients without clinically detectable brain impairment and to clinically characterize the patients according to their test…
A 10 MHz Bandwidth Continuous-Time Delta-Sigma Modulator for Portable Ultrasound Scanners
DEFF Research Database (Denmark)
Llimos Muntal, Pere; Jørgensen, Ivan Harald Holger; Bruun, Erik
2016-01-01
A fourth-order 1-bit continuous-time delta-sigma modulator designed in a 65 nm process for portable ultrasound scanners is presented in this paper. The loop filter consists of RC integrators, with programmable capacitor arrays and resistors, and the quantizer is implemented with a high-speed clocked comparator and a pull-down clocked latch. The feedback signal is generated with voltage DACs based on transmission gates. Using this implementation, a small and low-power solution required for portable ultrasound scanner applications is achieved. The modulator has a bandwidth of 10 MHz with an oversampling…
A Continuous-Time Agency Model of Optimal Contracting and Capital Structure
Peter M. DeMarzo; Yuliy Sannikov
2004-01-01
We consider a principal-agent model in which the agent needs to raise capital from the principal to finance a project. Our model is based on DeMarzo and Fishman (2003), except that the agent's cash flows are given by a Brownian motion with drift in continuous time. The difficulty in writing an appropriate financial contract in this setting is that the agent can conceal and divert cash flows for his own consumption rather than pay back the principal. Alternatively, the agent may reduce the mea...
Forecasting the Global Mean Sea Level, a Continuous-Time State-Space Approach
DEFF Research Database (Denmark)
Boldrini, Lorenzo
In this paper we propose a continuous-time, Gaussian, linear, state-space system to model the relation between global mean sea level (GMSL) and the global mean temperature (GMT), with the aim of making long-term projections for the GMSL. We provide a justification for the model specification based......) and the temperature reconstruction from Hansen et al. (2010). We compare the forecasting performance of the proposed specification to the procedures developed in Rahmstorf (2007b) and Vermeer and Rahmstorf (2009). Finally, we compute projections for the sea-level rise conditional on the 21st century SRES temperature...
An Equivalent LMI Representation of Bounded Real Lemma for Continuous-Time Systems
Directory of Open Access Journals (Sweden)
Xie Wei
2008-01-01
An equivalent linear matrix inequality (LMI) representation of the bounded real lemma (BRL) for linear continuous-time systems is introduced. For LTI systems including polytopic-type uncertainties, there are several LMI-based formulations for the analysis and synthesis of H∞ performance using a parameter-dependent Lyapunov function. All of these representations provide only different sufficient conditions. Compared with previous methods, the new representation proposed here makes it possible to obtain better results. Finally, some numerical examples are illustrated to show the effectiveness of the proposed method.
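For reference, one common normalization of the LMI form of the bounded real lemma to which such representations are equivalent (continuous-time system $\dot x = Ax + Bw$, $z = Cx + Dw$, with $A$ Hurwitz) reads:

```latex
\lVert C(sI - A)^{-1}B + D \rVert_{\infty} < \gamma
\;\Longleftrightarrow\;
\exists\, P = P^{\mathsf T} \succ 0 :\quad
\begin{bmatrix}
A^{\mathsf T} P + P A & P B & C^{\mathsf T} \\
B^{\mathsf T} P & -\gamma I & D^{\mathsf T} \\
C & D & -\gamma I
\end{bmatrix} \prec 0 .
```

Equivalent slack-variable representations decouple the Lyapunov matrix $P$ from the system matrices, which is what enables parameter-dependent $P$ in the polytopic-uncertainty case.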
Downing, Bryan D.; Bergamaschi, Brian; Kendall, Carol; Kraus, Tamara; Dennis, Kate J.; Carter, Jeffery A.; von Dessonneck, Travis
2016-01-01
Stable isotopes present in water (δ²H, δ¹⁸O) have been used extensively to evaluate hydrological processes on the basis of parameters such as evaporation, precipitation, mixing, and residence time. In estuarine aquatic habitats, residence time (τ) is a major driver of biogeochemical processes, affecting trophic subsidies and conditions in fish-spawning habitats. But τ is highly variable in estuaries, owing to constant changes in river inflows, tides, wind, and water height, all of which combine to affect τ in unpredictable ways. It recently became feasible to measure δ²H and δ¹⁸O continuously, at a high sampling frequency (1 Hz), using diffusion sample introduction into a cavity ring-down spectrometer. To better understand the relationship of τ to biogeochemical processes in a dynamic estuarine system, we continuously measured δ²H and δ¹⁸O, nitrate, and water quality parameters on board a small, high-speed boat (5 to >10 m s⁻¹) fitted with a hull-mounted underwater intake. We then calculated τ as is classically done using the isotopic signals of evaporation. The result was high-resolution (∼10 m) maps of residence time, nitrate, and other parameters that showed strong spatial gradients corresponding to geomorphic attributes of the different channels in the area. The mean measured value of τ was 30.5 d, with a range of 0–50 d. We used the measured spatial gradients in both τ and nitrate to calculate whole-ecosystem uptake rates, and the values ranged from 0.006 to 0.039 d⁻¹. The capability to measure residence time over single tidal cycles in estuaries will be useful for evaluating and further understanding drivers of phytoplankton abundance, resolving differences attributable to mixing and water sources, explicitly calculating biogeochemical rates, and exploring the complex linkages among time-dependent biogeochemical processes in hydrodynamically complex environments such as estuaries.
Anomalous dispersion in correlated porous media: a coupled continuous time random walk approach
Comolli, Alessandro; Dentz, Marco
2017-09-01
We study the causes of anomalous dispersion in Darcy-scale porous media characterized by spatially heterogeneous hydraulic properties. Spatial variability in hydraulic conductivity leads to spatial variability in the flow properties through Darcy's law and thus impacts on solute and particle transport. We consider purely advective transport in heterogeneity scenarios characterized by broad distributions of heterogeneity length scales and point values. Particle transport is characterized in terms of the stochastic properties of equidistantly sampled Lagrangian velocities, which are determined by the flow and conductivity statistics. The persistence length scales of flow and transport velocities are imprinted in the spatial disorder and reflect the distribution of heterogeneity length scales. Particle transitions over the velocity length scales are kinematically coupled with the transition time through velocity. We show that the average particle motion follows a coupled continuous time random walk (CTRW), which is fully parameterized by the distribution of flow velocities and the medium geometry in terms of the heterogeneity length scales. The coupled CTRW provides a systematic framework for the investigation of the origins of anomalous dispersion in terms of heterogeneity correlation and the distribution of conductivity point values. We derive analytical expressions for the asymptotic scaling of the moments of the spatial particle distribution and first arrival time distribution (FATD), and perform numerical particle tracking simulations of the coupled CTRW to capture the full average transport behavior. Broad distributions of heterogeneity point values and lengths scales may lead to very similar dispersion behaviors in terms of the spatial variance. Their mechanisms, however are very different, which manifests in the distributions of particle positions and arrival times, which plays a central role for the prediction of the fate of dissolved substances in
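The kinematic coupling described above, where the transition time over a heterogeneity length scale is set by the sampled velocity, can be sketched with a toy particle-tracking simulation (illustrative velocity distribution and parameters, not the paper's flow statistics):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy coupled CTRW: particles hop a fixed heterogeneity length ell, and the
# transition time is kinematically coupled to a random velocity, t += ell / v.
n_part, n_steps, ell, beta = 5000, 100, 1.0, 0.8

u = rng.random((n_part, n_steps))
v = u ** (1.0 / beta)              # broad velocity pdf: P(v < e) ~ e**beta
arrival = (ell / v).sum(axis=1)    # first arrival time at x = n_steps * ell

# The abundance of slow velocities gives the single-transition time a
# power-law tail ~ t**(-1-beta); with beta < 1 the arrival-time
# distribution is strongly skewed, a signature of anomalous dispersion.
```

Comparing the sample mean to the median of `arrival` makes the skew visible: the mean is dominated by rare, very slow transitions, which is the mechanism by which low-velocity zones control late-time tailing.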
Modeling commodity salam contract between two parties for discrete and continuous time series
Hisham, Azie Farhani Badrol; Jaffar, Maheran Mohd
2017-08-01
In order for Islamic finance to remain competitive with conventional finance, there needs to be new development of syariah-compliant products, such as Islamic derivatives that can be used to manage risk. However, under syariah principles and regulations, financial instruments must not conflict with five syariah elements, which are riba (interest paid), rishwah (corruption), gharar (uncertainty or unnecessary risk), maysir (speculation or gambling) and jahl (taking advantage of the counterparty's ignorance). This study proposes a traditional Islamic contract, namely salam, that can be built into an Islamic derivative product. Although many studies have discussed and proposed the implementation of the salam contract as an Islamic product, they focus mainly on qualitative and legal issues. Since quantitative studies of the salam contract are lacking, this study introduces mathematical models that can value the appropriate salam price for a commodity salam contract between two parties. In modeling the commodity salam contract, this study modifies the existing conventional derivative model with adjustments to comply with syariah rules and regulations. The cost of carry model has been chosen as the foundation for developing the commodity salam model between two parties for discrete and continuous time series. However, the conventional time value of money results from the concept of interest, which is prohibited in Islam. Therefore, this study adopts the idea of the Islamic time value of money, known as positive time preference, in modeling the commodity salam contract between two parties for discrete and continuous time series.
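A minimal numerical sketch of the cost-of-carry foundation in its discrete-time and continuous-time forms (hypothetical figures; here k stands for a syariah-compliant profit rate playing the role of positive time preference, not an interest rate, and the symbols are illustrative rather than the paper's):

```python
import math

S0 = 100.0      # spot commodity price at contract initiation (assumed)
k = 0.05        # assumed profit rate reflecting positive time preference
T = 2.0         # delivery deferred by two periods

# Discrete-time and continuous-time versions of the same carry relation.
salam_discrete = S0 * (1.0 + k) ** T          # periodic compounding
salam_continuous = S0 * math.exp(k * T)       # continuous compounding
```

The continuous form is the limit of the discrete one as the compounding interval shrinks, which is why the paper can treat both time-series settings with one underlying model.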
Continuous time Boolean modeling for biological signaling: application of Gillespie algorithm.
Stoll, Gautier; Viara, Eric; Barillot, Emmanuel; Calzone, Laurence
2012-08-29
Mathematical modeling is used as a Systems Biology tool to answer biological questions, and more precisely, to validate a network that describes biological observations and to predict the effect of perturbations. This article presents an algorithm for modeling biological networks in a discrete framework with continuous time. There exist two major types of mathematical modeling approaches: (1) quantitative modeling, representing various chemical species concentrations by real numbers, mainly based on differential equations and chemical kinetics formalism; and (2) qualitative modeling, representing chemical species concentrations or activities by a finite set of discrete values. Both approaches answer particular (and often different) biological questions. The qualitative approach permits a simple and less detailed description of the biological system and efficiently describes stable state identification, but remains inconvenient for describing the transient kinetics leading to these states; in this context, time is represented by discrete steps. Quantitative modeling, on the other hand, can describe the dynamical behavior of biological processes more accurately, as it follows the evolution of concentrations or activities of chemical species as a function of time, but it requires a large amount of parameter information that is difficult to find in the literature. Here, we propose a modeling framework based on a qualitative approach that is intrinsically continuous in time. The algorithm presented in this article fills the gap between qualitative and quantitative modeling. It is based on a continuous-time Markov process applied to a Boolean state space. In order to describe the temporal evolution of the biological process we wish to model, we explicitly specify the transition rates for each node. For that purpose, we built a language that can be seen as a generalization of Boolean equations. Mathematically, this approach can be translated into a set of ordinary differential
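A minimal sketch of a Gillespie simulation on a Boolean state space, assuming a toy two-node network with made-up transition rates (the actual tool specifies rates through its own generalized-Boolean-equation language):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 2-node Boolean network: A activates B, B inhibits A.
# The per-node flip rates below are illustrative assumptions.
def node_rates(state):
    a, b = state
    return [
        (0, 0.5 if b else 2.0),   # node A flips: slow while B is on
        (1, 3.0 if a else 0.1),   # node B flips: fast while A is on
    ]

def gillespie_boolean(state=(0, 0), t_end=50.0):
    t, traj = 0.0, [(0.0, tuple(state))]
    state = list(state)
    while t < t_end:
        rates = node_rates(state)
        total = sum(r for _, r in rates)
        t += rng.exponential(1.0 / total)              # time to next event
        # pick the flipping node proportionally to its rate
        idx = rng.choice(len(rates), p=[r / total for _, r in rates])
        state[rates[idx][0]] ^= 1                       # Boolean flip
        traj.append((t, tuple(state)))
    return traj

traj = gillespie_boolean()
print("number of events:", len(traj) - 1)
```

Each trajectory is a piecewise-constant path on {0,1}^n with exponentially distributed holding times, which is exactly the continuous-time Markov process on a Boolean state space that the abstract describes.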
Meng, Tianhui; Li, Xiaofan; Zhang, Sha; Zhao, Yubin
2016-09-28
Wireless sensor networks (WSNs) have recently gained popularity for a wide spectrum of applications. Monitoring tasks can be performed in various environments. This may be beneficial in many scenarios, but it certainly exhibits new challenges in terms of security due to increased data transmission over the wireless channel with potentially unknown threats. Among possible security issues are timing attacks, which are not prevented by traditional cryptographic security. Moreover, the limited energy and memory resources prohibit the use of complex security mechanisms in such systems. Therefore, balancing between security and the associated energy consumption becomes a crucial challenge. This paper proposes a secure scheme for WSNs while maintaining the requirement of the security-performance tradeoff. In order to proceed to a quantitative treatment of this problem, a hybrid continuous-time Markov chain (CTMC) and queueing model are put forward, and the tradeoff analysis of the security and performance attributes is carried out. By extending and transforming this model, the mean time to security attributes failure is evaluated. Through tradeoff analysis, we show that our scheme can enhance the security of WSNs, and the optimal rekeying rate of the performance and security tradeoff can be obtained.
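The mean time to security attributes failure of a CTMC can be obtained by a standard linear solve over the transient states. The three-state chain and its rates below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Generator of a small illustrative CTMC: state 0 = healthy,
# 1 = under attack, 2 = security failure (absorbing).
Q = np.array([
    [-0.2,  0.2,  0.0],   # healthy -> attacked at rate 0.2
    [ 0.5, -0.6,  0.1],   # attacked -> rekeyed (0.5) or failed (0.1)
    [ 0.0,  0.0,  0.0],   # failed is absorbing
])

# Mean time to absorption from each transient state:
# solve (-Q_T) m = 1, where Q_T restricts Q to the transient states.
QT = Q[:2, :2]
m = np.linalg.solve(-QT, np.ones(2))
print("mean time to failure from healthy state:", m[0])   # 40.0
```

Raising the rekeying rate (the 0.5 entry) lengthens the mean time to failure, which is the security side of the tradeoff the paper analyzes.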
Yoshida, Yutaka; Yokoyama, Kiyoko; Ishii, Naohiro
It is necessary to monitor one's daily health condition to prevent stress syndrome. In this study, we propose a method for assessing mental and physiological condition, such as work stress or relaxation, using heart rate variability in real time and continuously. The instantaneous heart rate (HR), and the ratio of the number of extreme points (NEP) to the number of heart beats, were calculated to assess mental and physiological condition. In this method, 20 heart beats were used to calculate these indexes, which were updated at every beat interval. Three conditions, namely sitting at rest, performing mental arithmetic and watching a relaxation movie, were assessed using our proposed algorithm. The assessment accuracies were 71.9% and 55.8% when performing mental arithmetic and watching the relaxation movie, respectively. Because the mental and physiological condition is assessed using only the 20 most recent heart beats, this method can be considered a real-time assessment method.
Stabilization of Continuous-Time Random Switching Systems via a Fault-Tolerant Controller
Directory of Open Access Journals (Sweden)
Guoliang Wang
2017-01-01
This paper focuses on the stabilization problem of continuous-time random switching systems via a fault-tolerant controller, where the dwell time of each subsystem consists of a fixed part and a random part. It is known from traditional design methods that the computational complexity of the LMIs, which is related to the number of fault combinations, is very large, particularly when the system dimension or the number of subsystems is large. In order to reduce the number of fault combinations used, new sufficient LMI conditions for designing such a controller are established by a robust approach; these conditions are fault-free and can be solved directly. Moreover, fault-tolerant stabilization realized by a mode-independent controller is considered and suitably applied to a practical case without mode information. Finally, a numerical example is used to demonstrate the effectiveness and superiority of the proposed methods.
Absolute continuity under time shift of trajectories and related stochastic calculus
Löbus, Jörg-Uwe
2017-01-01
The text is concerned with a class of two-sided stochastic processes of the form X=W+A. Here W is a two-sided Brownian motion with random initial data at time zero and A\\equiv A(W) is a function of W. Elements of the related stochastic calculus are introduced. In particular, the calculus is adjusted to the case when A is a jump process. Absolute continuity of (X,P) under time shift of trajectories is investigated. For example under various conditions on the initial density with respect to the Lebesgue measure, m, and on A with A_0=0 we verify \\frac{P(dX_{\\cdot -t})}{P(dX_\\cdot)}=\\frac{m(X_{-t})}{m(X_0)}\\cdot \\prod_i\\left|\
Donier, J.; Bouchaud, J.-P.
2016-12-01
In standard Walrasian auctions, the price of a good is defined as the point where the supply and demand curves intersect. Since both curves are generically regular, the response to small perturbations is linearly small. However, a crucial ingredient is absent from the theory, namely transactions themselves. What happens after they occur? To answer the question, we develop a dynamic theory for supply and demand based on agents with heterogeneous beliefs. When the inter-auction time is infinitely long, the Walrasian mechanism is recovered. When transactions are allowed to happen in continuous time, a peculiar property emerges: close to the price, supply and demand vanish quadratically, which we confirm empirically on Bitcoin data. This explains why price impact in financial markets is universally observed to behave as the square root of the excess volume. The consequences are important, as they imply that the very fact of clearing the market makes prices hypersensitive to small fluctuations.
An approach to the drone fleet survivability assessment based on a stochastic continuous-time model
Kharchenko, Vyacheslav; Fesenko, Herman; Doukas, Nikos
2017-09-01
An approach and an algorithm for drone fleet survivability assessment based on a stochastic continuous-time model are proposed. The input data are the number of drones, the drone fleet redundancy coefficient, the drone stability and restoration rate, the limit deviation from the norms of drone fleet recovery, the drone fleet operational availability coefficient, the probability of drone failure-free operation, and the time needed for the drone fleet to perform the required tasks. Ways of improving the survivability of a recoverable drone fleet, taking into account adverse factors of system accidents, are suggested. Dependencies of the drone fleet survivability rate on both the drone stability and the number of drones are analysed.
Continuous-time random walks with reset events. Historical background and new perspectives
Montero, Miquel; Masó-Puigdellosas, Axel; Villarroel, Javier
2017-09-01
In this paper, we consider a stochastic process that may experience random reset events which relocate the system to its starting position. We focus our attention on a one-dimensional, monotonic continuous-time random walk with a constant drift: the process moves in a fixed direction between the reset events, either by the effect of the random jumps, or by the action of a deterministic bias. However, the orientation of its motion is randomly determined after each restart. As a result of these alternating dynamics, interesting properties do emerge. General formulas for the propagator as well as for two extreme statistics, the survival probability and the mean first-passage time, are also derived. The rigor of these analytical results is verified by numerical estimations, for particular but illuminating examples.
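A Monte Carlo sketch of the mean first-passage time for the drifting, resetting walker described above, assuming purely ballistic motion between resets and illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(2)

def mfpt_drift_with_resets(v=1.0, r=0.5, target=2.0, n_paths=20_000):
    """Monte Carlo mean first-passage time to `target` for a particle that
    moves ballistically at speed v, is relocated to the origin at Poisson
    rate r, and picks a fresh random orientation (+/-) after each restart."""
    fpts = np.empty(n_paths)
    for i in range(n_paths):
        t = 0.0
        while True:
            direction = rng.choice((-1.0, 1.0))
            tau = rng.exponential(1.0 / r)       # time until the next reset
            if direction > 0 and v * tau >= target:
                t += target / v                  # target reached before reset
                break
            t += tau                             # reset happens first
        fpts[i] = t
    return fpts.mean()

mfpt = mfpt_drift_with_resets()
print("estimated MFPT:", mfpt)
```

Because each restart gives an independent chance of heading toward the target, the mean first-passage time is finite even though half the excursions move away from it.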
Fitting timeseries by continuous-time Markov chains: A quadratic programming approach
International Nuclear Information System (INIS)
Crommelin, D.T.; Vanden-Eijnden, E.
2006-01-01
Construction of stochastic models that describe the effective dynamics of observables of interest is a useful instrument in various fields of application, such as physics, climate science, and finance. We present a new technique for the construction of such models. From the timeseries of an observable, we construct a discrete-in-time Markov chain and calculate the eigenspectrum of its transition probability (or stochastic) matrix. As a next step we aim to find the generator of a continuous-time Markov chain whose eigenspectrum resembles the observed eigenspectrum as closely as possible, using an appropriate norm. The generator is found by solving a minimization problem: the norm is chosen such that the objective function is quadratic and convex, so that the minimization problem can be solved using quadratic programming techniques. The technique is illustrated on various toy problems as well as on datasets stemming from simulations of molecular dynamics and of atmospheric flows.
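As a rough stand-in for the quadratic-programming spectral fit of the paper, the sketch below estimates a discrete-time transition matrix from a placeholder timeseries and projects a crude first-order generator estimate onto the set of valid generators. It illustrates the constraints involved (non-negative off-diagonal, zero row sums), not the authors' eigenspectrum-matching method:

```python
import numpy as np

def empirical_transition_matrix(series, n_states):
    """Count-based estimate of the discrete-time transition matrix."""
    C = np.zeros((n_states, n_states))
    for a, b in zip(series[:-1], series[1:]):
        C[a, b] += 1
    return C / C.sum(axis=1, keepdims=True)

def crude_generator(P, h):
    """First-order estimate L = (P - I)/h, projected onto valid generators
    (non-negative off-diagonal entries, rows summing to zero)."""
    L = (P - np.eye(len(P))) / h
    off = L - np.diag(np.diag(L))
    off[off < 0] = 0.0
    return off - np.diag(off.sum(axis=1))

rng = np.random.default_rng(3)
series = rng.integers(0, 3, size=5_000)   # placeholder 3-state timeseries
P = empirical_transition_matrix(series, 3)
L = crude_generator(P, h=0.1)
print(np.allclose(L.sum(axis=1), 0.0))    # generator rows sum to zero
```

The paper's contribution is to replace this crude projection with a convex quadratic program that matches the eigenspectrum of P, which behaves much better when no exact generator exists.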
Continuous Positive Airway Pressure Device Time to Procurement in a Disadvantaged Population
Directory of Open Access Journals (Sweden)
Lourdes M. DelRosso
2015-01-01
Introduction. The management of obstructive sleep apnea (OSA) in patients who cannot afford a continuous positive airway pressure (CPAP) device is challenging. In this study we compare time to CPAP procurement in three groups of patients diagnosed with OSA: uninsured subsidized by a humanitarian grant (Group 1), uninsured unsubsidized (Group 2), and those with Medicare or Medicaid (Group 3). We evaluate follow-up and adherence in Group 1. We hypothesize that additional factors, rather than just the ability to obtain CPAP, may uniquely affect follow-up and adherence in uninsured patients. Methods. There were 30 patients each in Groups 1 and 2, and 12 patients in Group 3. Time to CPAP procurement, from OSA diagnosis to CPAP initiation, was assessed in all groups. CPAP adherence data were collected for Group 1 patients at 1, 3, 6, and 9 months. Results. There were no significant differences between groups in gender, age, body mass index, or apnea hypopnea index. The mean time to procurement in Group 1 was shorter than in Group 2, but the difference was not significant. Compared to both Group 1 and Group 2, Group 3 patients had significantly shorter times to device procurement. Conclusion. Time to procurement of CPAP was significantly shorter in those with Medicaid/Medicare insurance compared to the uninsured.
Directory of Open Access Journals (Sweden)
Pieprzyca J.
2015-04-01
A common method used in the identification of hydrodynamic phenomena occurring in the tundish of a continuous casting (CC) device is to determine residence time distribution (RTD) curves. These curves allow the way the liquid steel flows and mixes in the tundish to be determined. They can be identified either as the result of numerical simulation or experimentally, by studying physical models. A particular problem is objectivity when conducting physical research: the time constants that characterize the studied phenomena must be determined precisely from data acquired by measuring the change in tracer concentration in the model liquid's volume. The mathematical description of the determined curves is based on approximate differential equations formulated in the theory of fluid mechanics. Solving these equations to calculate the time constants requires special software and is very time-consuming. To improve the process, a method was created to calculate the time constants using elements of automation. It allows the problem to be solved algebraically, which improves the interpretation of the results of physical modeling.
Freeman, Jonathan B; Ambady, Nalini; Midgley, Katherine J; Holcomb, Phillip J
2011-01-01
Using event-related potentials, we investigated how the brain extracts information from another's face and translates it into relevant action in real time. In Study 1, participants made between-hand sex categorizations of sex-typical and sex-atypical faces. Sex-atypical faces evoked negativity between 250 and 550 ms (N300/N400 effects), reflecting the integration of accumulating sex-category knowledge into a coherent sex-category interpretation. Additionally, the lateralized readiness potential revealed that the motor cortex began preparing for a correct hand response while social category knowledge was still gradually evolving in parallel. In Study 2, participants made between-hand eye-color categorizations as part of go/no-go trials that were contingent on a target's sex. On no-go trials, although the hand did not actually move, information about eye color partially prepared the motor cortex to move the hand before perception of sex had finalized. Together, these findings demonstrate the dynamic continuity between person perception and action, such that ongoing results from face processing are immediately and continuously cascaded into the motor system over time. The preparation of action begins based on tentative perceptions of another's face before perceivers have finished interpreting what they just saw. © 2010 Psychology Press, an imprint of the Taylor & Francis Group, an Informa business
Backward jump continuous-time random walk: An application to market trading
Gubiec, Tomasz; Kutner, Ryszard
2010-10-01
The backward jump modification of the continuous-time random walk model, or the version of the model driven by negative feedback, is derived herein for the spatiotemporal continuum in the context of share price evolution on a stock exchange. Within the frame of the model, we describe the stochastic evolution of a typical share price on a stock exchange with moderate liquidity on a high-frequency time scale. The model was validated by the satisfactory agreement of the theoretical velocity autocorrelation function with its empirical counterpart obtained for continuous quotation. This agreement is mainly a result of a sharp backward correlation, found and considered in this article. This correlation is reminiscent of the bid-ask bounce phenomenon, where a backward price jump has the same, or almost the same, length as the preceding jump. We suggest that this correlation dominates the dynamics of stock markets with moderate liquidity. Although the assumptions of the model were inspired by high-frequency empirical market data, its potential applications extend beyond the financial market, for instance, to the field covered by the Le Chatelier-Braun principle of contrariness.
Clarke, Hannah; Done, Fay; Casadio, Stefano; Mackin, Stephen; Dinelli, Bianca Maria; Castelli, Elisa
2016-08-01
The long time-series of observations made by the Along Track Scanning Radiometer (ATSR) missions represents a valuable resource for a wide range of research and EO applications. With the advent of ESA's Long-Term Data Preservation (LTDP) programme, thought has turned to the preservation and improved understanding of such long time-series, to support their continued exploitation in both existing and new areas of research, bringing the possibility of improving the existing data set and informing and contributing towards future missions. For this reason, the 'Long Term Stability of the ATSR Instrument Series: SWIR Calibration, Cloud Masking and SAA' project, commonly known as the ATSR Long Term Stability (or ALTS) project, is designed to explore the key characteristics of the data set and new and innovative ways of enhancing and exploiting it. Work has focussed on: a new approach to the assessment of Short Wave Infra-Red (SWIR) channel calibration; development of a new method for Total Column Water Vapour (TCWV) retrieval; study of the South Atlantic Anomaly (SAA); Radiative Transfer (RT) modelling for ATSR; providing AATSR observations with their location in the original instrument grid; strategies for the retrieval and archiving of historical ATSR documentation; study of TCWV retrieval over land; and development of new methods for cloud masking. This paper provides an overview of these activities and illustrates the importance of preserving and understanding 'old' data for continued use in the future.
Gatto, Riccardo
2017-12-01
This article considers the random walk over R^p, with p ≥ 2, where a given particle starts at the origin and moves stepwise with uniformly distributed step directions and step lengths following a common distribution. Step directions and step lengths are independent. The case where the number of steps of the particle is fixed and the more general case where it follows an independent continuous time inhomogeneous counting process are considered. Saddlepoint approximations to the distribution of the distance from the position of the particle to the origin are provided. Despite the p-dimensional nature of the random walk, the computations of the saddlepoint approximations are one-dimensional and thus simple. Explicit formulae are derived with dimension p = 3: for uniformly and exponentially distributed step lengths, for fixed and for Poisson distributed number of steps. In these situations, the high accuracy of the saddlepoint approximations is illustrated by numerical comparisons with Monte Carlo simulation. Contribution to the "Topical Issue: Continuous Time Random Walk Still Trendy: Fifty-year History, Current State and Outlook", edited by Ryszard Kutner and Jaume Masoliver.
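The Monte Carlo comparison mentioned in the abstract can be reproduced schematically for p = 3 with exponential step lengths and a fixed number of steps:

```python
import numpy as np

rng = np.random.default_rng(4)

def walk_distance(n_steps, n_walks=100_000):
    """Distance from the origin after n_steps steps in R^3 with uniform
    directions and exponential(1) step lengths (one of the cases treated)."""
    # uniform directions on the sphere via normalized Gaussian vectors
    d = rng.normal(size=(n_walks, n_steps, 3))
    d /= np.linalg.norm(d, axis=2, keepdims=True)
    steps = rng.exponential(1.0, size=(n_walks, n_steps, 1)) * d
    return np.linalg.norm(steps.sum(axis=1), axis=1)

r = walk_distance(10)
p_tail = (r > 5).mean()
print("P(distance > 5) ≈", p_tail)
```

Tail probabilities like `p_tail` are exactly where the saddlepoint approximation is most valuable, since plain Monte Carlo needs many samples for accurate small probabilities.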
Bravo-Zanoguera, Miguel E; Laris, Casey A; Nguyen, Lam K; Oliva, Mike; Price, Jeffrey H
2007-01-01
Efficient image cytometry of a conventional microscope slide means rapid acquisition and analysis of 20 gigapixels of image data (at 0.3-microm sampling). The voluminous data motivate increased acquisition speed to enable many biomedical applications. Continuous-motion time-delay-and-integrate (TDI) scanning has the potential to speed image acquisition while retaining sensitivity, but the challenge of implementing high-resolution autofocus operating simultaneously with acquisition has limited its adoption. We develop a dynamic autofocus system for this need using: 1. a "volume camera," consisting of nine fiber optic imaging conduits to charge-coupled device (CCD) sensors, that acquires images in parallel from different focal planes; 2. an array of mixed analog-digital processing circuits that measure the high spatial frequencies of the multiple image streams to create focus indices; and 3. a software system that reads and analyzes the focus data streams and calculates best focus for closed feedback loop control. Our system updates autofocus at 56 Hz (or once every 21 microm of stage travel) to collect sharply focused images sampled at 0.3x0.3 microm(2)/pixel at a stage speed of 2.3 mm/s. The system, tested by focusing in phase contrast and imaging long fluorescence strips, achieves high-performance closed-loop image-content-based autofocus in continuous scanning for the first time.
Real-time CT-video registration for continuous endoscopic guidance
Merritt, Scott A.; Rai, Lav; Higgins, William E.
2006-03-01
Previous research has shown that CT-image-based guidance could be useful for the bronchoscopic assessment of lung cancer. This research drew upon the registration of bronchoscopic video images to CT-based endoluminal renderings of the airway tree. The proposed methods either were restricted to discrete single-frame registration, which took several seconds to complete, or required non-real-time buffering and processing of video sequences. We have devised a fast 2D/3D image registration method that performs single-frame CT-video registration in under 1/15th of a second. This allows the method to be used for real-time registration at full video frame rates without significantly altering the physician's behavior. The method achieves its speed through a gradient-based optimization method that allows most of the computation to be performed off-line. During live registration, the optimization iteratively steps toward the locally optimal viewpoint at which a CT-based endoluminal view is most similar to the current bronchoscopic video frame. After an initial registration to begin the process (generally done in the trachea for bronchoscopy), subsequent registrations are performed in real time on each incoming video frame. As each new bronchoscopic video frame becomes available, the current optimization is initialized using the previous frame's optimization result, allowing continuous guidance to proceed without manual re-initialization. Tests were performed using both synthetic and pre-recorded bronchoscopic video. The results show that the method is robust to initialization errors, that registration accuracy is high, and that continuous registration can proceed on real-time video at more than 15 frames per second with minimal user intervention.
Sayers, Adrian; Ben-Shlomo, Yoav; Blom, Ashley W; Steele, Fiona
2016-06-01
Studies involving the use of probabilistic record linkage are becoming increasingly common. However, the methods underpinning probabilistic record linkage are not widely taught or understood, and therefore these studies can appear to be a 'black box' research tool. In this article, we aim to describe the process of probabilistic record linkage through a simple exemplar. We first introduce the concept of deterministic linkage and contrast this with probabilistic linkage. We illustrate each step of the process using a simple exemplar and describe the data structure required to perform a probabilistic linkage. We describe the process of calculating and interpreting match weights and how to convert match weights into posterior probabilities of a match using Bayes' theorem. We conclude this article with a brief discussion of some of the computational demands of record linkage, how you might assess the quality of your linkage algorithm, and how epidemiologists can maximize the value of their record-linked research using robust record linkage methods. © The Author 2015; Published by Oxford University Press on behalf of the International Epidemiological Association.
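The weight-and-posterior calculation described above can be sketched in the Fellegi-Sunter style. The m- and u-probabilities, the field names, and the prior below are illustrative assumptions, not values from the article:

```python
import math

# Illustrative m-probabilities (agreement given a true match) and
# u-probabilities (agreement given a non-match) for two fields.
fields = {
    "surname":    {"m": 0.95, "u": 0.01},
    "birth_year": {"m": 0.99, "u": 0.05},
}

def match_weight(agreements):
    """Sum of log2 likelihood ratios over the compared fields."""
    w = 0.0
    for f, p in fields.items():
        if agreements[f]:
            w += math.log2(p["m"] / p["u"])
        else:
            w += math.log2((1 - p["m"]) / (1 - p["u"]))
    return w

def posterior(weight, prior):
    """Convert a match weight into P(match | data) via Bayes' theorem."""
    odds = (prior / (1 - prior)) * 2.0 ** weight
    return odds / (1 + odds)

w = match_weight({"surname": True, "birth_year": True})
post = posterior(w, prior=1e-3)
print("weight:", round(w, 2), " posterior:", round(post, 4))
```

Even with a very small prior (one candidate pair in a thousand being a true match), agreement on two discriminating fields pushes the posterior probability well above one half.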
Tataru, Paula; Hobolth, Asger
2011-12-05
Continuous time Markov chains (CTMCs) are a widely used model for describing the evolution of DNA sequences on the nucleotide, amino acid or codon level. The sufficient statistics for CTMCs are the time spent in a state and the number of changes between any two states. In applications, past evolutionary events (exact times and types of changes) are inaccessible and the past must be inferred from DNA sequence data observed in the present. We describe and implement three algorithms for computing linear combinations of expected values of the sufficient statistics, conditioned on the end-points of the chain, and compare their performance with respect to accuracy and running time. The first algorithm is based on an eigenvalue decomposition of the rate matrix (EVD), the second on uniformization (UNI), and the third on integrals of matrix exponentials (EXPM). The implementation in R of the algorithms is available at http://www.birc.au.dk/~paula/. We use two different models to analyze the accuracy and eight experiments to investigate the speed of the three algorithms. We find that they have similar accuracy and that EXPM is the slowest method. Furthermore, we find that UNI is usually faster than EVD.
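The EXPM idea, integrals of matrix exponentials, can be sketched with Van Loan's block-matrix construction for the expected time spent in a state conditioned on the end-points. The two-state rate matrix below is an illustrative assumption:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative 2-state rate matrix (not taken from the paper)
Q = np.array([[-1.0,  1.0],
              [ 2.0, -2.0]])
T = 1.5
a, b = 0, 1   # condition on X_0 = a and X_T = b

def expected_time_in_state(Q, T, a, b, s):
    """E[time spent in state s on [0, T] | X_0 = a, X_T = b], read off a
    block of the exponential of an augmented matrix (Van Loan's trick)."""
    n = len(Q)
    E = np.zeros((n, n)); E[s, s] = 1.0
    A = np.block([[Q, E], [np.zeros((n, n)), Q]])
    M = expm(A * T)
    integral = M[:n, n:]   # equals ∫_0^T expm(Q t) E expm(Q (T - t)) dt
    return integral[a, b] / expm(Q * T)[a, b]

t0 = expected_time_in_state(Q, T, a, b, 0)
t1 = expected_time_in_state(Q, T, a, b, 1)
print(t0, t1, t0 + t1)   # the two conditional occupancy times sum to T
```

Summing the expected occupancy times over all states recovers T exactly, which is a convenient correctness check on the construction.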
Directory of Open Access Journals (Sweden)
Tataru Paula
2011-12-01
Abstract. Background. Continuous time Markov chains (CTMCs) are a widely used model for describing the evolution of DNA sequences on the nucleotide, amino acid or codon level. The sufficient statistics for CTMCs are the time spent in a state and the number of changes between any two states. In applications, past evolutionary events (exact times and types of changes) are inaccessible and the past must be inferred from DNA sequence data observed in the present. Results. We describe and implement three algorithms for computing linear combinations of expected values of the sufficient statistics, conditioned on the end-points of the chain, and compare their performance with respect to accuracy and running time. The first algorithm is based on an eigenvalue decomposition of the rate matrix (EVD), the second on uniformization (UNI), and the third on integrals of matrix exponentials (EXPM). The implementation in R of the algorithms is available at http://www.birc.au.dk/~paula/. Conclusions. We use two different models to analyze the accuracy and eight experiments to investigate the speed of the three algorithms. We find that they have similar accuracy and that EXPM is the slowest method. Furthermore, we find that UNI is usually faster than EVD.
Continuous performance task in ADHD: Is reaction time variability a key measure?
Levy, Florence; Pipingas, Andrew; Harris, Elizabeth V; Farrow, Maree; Silberstein, Richard B
2018-01-01
To compare the use of the Continuous Performance Task (CPT) reaction time variability (intraindividual variability or standard deviation of reaction time), as a measure of vigilance in attention-deficit hyperactivity disorder (ADHD), and stimulant medication response, utilizing a simple CPT X-task vs an A-X-task. Comparative analyses of two separate X-task vs A-X-task data sets, and subgroup analyses of performance on and off medication were conducted. The CPT X-task reaction time variability had a direct relationship to ADHD clinician severity ratings, unlike the CPT A-X-task. Variability in X-task performance was reduced by medication compared with the children's unmedicated performance, but this effect did not reach significance. When the coefficient of variation was applied, severity measures and medication response were significant for the X-task, but not for the A-X-task. The CPT-X-task is a useful clinical screening test for ADHD and medication response. In particular, reaction time variability is related to default mode interference. The A-X-task is less useful in this regard.
Votava, Ondrej; Mašát, Milan; Parker, Alexander E; Jain, Chaithania; Fittschen, Christa
2012-04-01
We present in this work a new tracking servoloop electronics for continuous wave cavity-ringdown absorption spectroscopy (cw-CRDS) and its application to time resolved cw-CRDS measurements by coupling the system with a pulsed laser photolysis set-up. The tracking unit significantly increases the repetition rate of the CRDS events and thus improves effective time resolution (and/or the signal-to-noise ratio) in kinetics studies with cw-CRDS in given data acquisition time. The tracking servoloop uses novel strategy to track the cavity resonances that result in a fast relocking (few ms) after the loss of tracking due to an external disturbance. The microcontroller based design is highly flexible and thus advanced tracking strategies are easy to implement by the firmware modification without the need to modify the hardware. We believe that the performance of many existing cw-CRDS experiments, not only time-resolved, can be improved with such tracking unit without any additional modification to the experiment. © 2012 American Institute of Physics
Formalizing Probabilistic Safety Claims
Herencia-Zapana, Heber; Hagen, George E.; Narkawicz, Anthony J.
2011-01-01
A safety claim for a system is a statement that the system, which is subject to hazardous conditions, satisfies a given set of properties. Following work by John Rushby and Bev Littlewood, this paper presents a mathematical framework that can be used to state and formally prove probabilistic safety claims. It also enables hazardous conditions, their uncertainties, and their interactions to be integrated into the safety claim. This framework provides a formal description of the probabilistic composition of an arbitrary number of hazardous conditions and their effects on system behavior. An example is given of a probabilistic safety claim for a conflict detection algorithm for aircraft in a 2D airspace. The motivation for developing this mathematical framework is that it can be used in an automated theorem prover to formally verify safety claims.
Discrete- vs. Continuous-Time Modeling of Unequally Spaced Experience Sampling Method Data
Directory of Open Access Journals (Sweden)
Silvia de Haan-Rietdijk
2017-10-01
The Experience Sampling Method (ESM) is a common approach in psychological research for collecting intensive longitudinal data with high ecological validity. One characteristic of ESM data is that they are often unequally spaced, because the measurement intervals within a day are deliberately varied, and measurement continues over several days. This poses a problem for discrete-time (DT) modeling approaches, which are based on the assumption that all measurements are equally spaced. Nevertheless, DT approaches such as (vector) autoregressive modeling are often used to analyze ESM data, for instance in the context of affective dynamics research. There are equivalent continuous-time (CT) models, but they are more difficult to implement. In this paper we take a pragmatic approach and evaluate the practical relevance of the violated model assumption in DT AR(1) and VAR(1) models, for the N = 1 case. We use simulated data under an ESM measurement design to investigate the bias in the parameters of interest under four different model implementations, ranging from the true CT model that accounts for all the exact measurement times, to the crudest possible DT model implementation, where even the nighttime is treated as a regular interval. An analysis of empirical affect data illustrates how the differences between DT and CT modeling can play out in practice. We find that the size and the direction of the bias in DT (V)AR models for unequally spaced ESM data depend quite strongly on the true parameter, in addition to data characteristics. Our recommendation is to use CT modeling whenever possible, especially now that new software implementations have become available.
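The bias under discussion can be illustrated by simulating a continuous-time AR(1) process (an Ornstein-Uhlenbeck process) observed at unequal intervals, then fitting a naive DT AR(1) that ignores the spacing. The parameters and interval distribution are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulate a CT AR(1) (Ornstein-Uhlenbeck) process at unequal intervals,
# as in an ESM design; theta is the assumed continuous-time dynamics.
theta, sigma = 0.7, 1.0
intervals = rng.uniform(0.5, 3.0, size=5_000)   # unequal spacing
x = [0.0]
for dt in intervals:
    phi = np.exp(-theta * dt)                   # exact CT discretization
    sd = sigma * np.sqrt((1 - phi**2) / (2 * theta))
    x.append(phi * x[-1] + sd * rng.normal())
x = np.array(x)

# Naive DT AR(1) fit that treats all intervals as equal:
phi_dt = np.sum(x[:-1] * x[1:]) / np.sum(x[:-1] ** 2)
# Implied CT parameter if the mean interval were a true constant spacing:
theta_naive = -np.log(phi_dt) / intervals.mean()
print("true theta:", theta, " naive DT-implied estimate:", round(theta_naive, 3))
```

The naive estimate averages the interval-dependent autoregressive effect over the spacing distribution, so it systematically misses the true CT parameter; a CT model that uses the exact measurement times does not.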
Fitting and interpreting continuous-time latent Markov models for panel data.
Lange, Jane M; Minin, Vladimir N
2013-11-20
Multistate models characterize disease processes within an individual. Clinical studies often observe the disease status of individuals at discrete time points, making the exact times of transitions between disease states unknown. Such panel data pose considerable modeling challenges. If the disease process is assumed to progress as a standard continuous-time Markov chain (CTMC), the likelihood is tractable, but the implied assumption of exponential sojourn time distributions is typically unrealistic. More flexible semi-Markov models permit generic sojourn distributions yet yield intractable likelihoods for panel data in the presence of reversible transitions. One attractive alternative is to assume that the disease process is characterized by an underlying latent CTMC, with multiple latent states mapping to each disease state. These models retain analytic tractability due to the CTMC framework but allow for flexible, duration-dependent disease state sojourn distributions. We have developed a robust and efficient expectation-maximization algorithm in this context. Our complete data state space consists of the observed data and the underlying latent trajectory, yielding computationally efficient expectation and maximization steps. Our algorithm outperforms alternative methods measured in terms of time to convergence and robustness. We also examine the frequentist performance of latent CTMC point and interval estimates of disease process functionals based on simulated data. The performance of estimates depends on time, functional, and data-generating scenario. Finally, we illustrate the interpretive power of latent CTMC models for describing disease processes on a dataset of lung transplant patients. We hope our work will encourage wider use of these models in the biomedical setting. Copyright © 2013 John Wiley & Sons, Ltd.
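For contrast with the latent-CTMC machinery, the tractable panel-data likelihood of a plain CTMC mentioned above is just a product of matrix-exponential transition probabilities. The generator, observation times, and states below are illustrative:

```python
import numpy as np
from scipy.linalg import expm

def panel_loglik(Q, times, states):
    """Panel-data log-likelihood for a plain CTMC: with observations s_i
    at times t_i, the likelihood is the product of transition probabilities
    P(t_{i+1} - t_i)[s_i, s_{i+1}], where P(t) = expm(Q t)."""
    ll = 0.0
    for i in range(len(times) - 1):
        P = expm(Q * (times[i + 1] - times[i]))
        ll += np.log(P[states[i], states[i + 1]])
    return ll

Q = np.array([[-0.3, 0.3], [0.1, -0.1]])   # illustrative 2-state generator
times = [0.0, 1.0, 2.5, 4.0]               # unequally spaced clinic visits
states = [0, 0, 1, 1]                      # observed disease states
ll = panel_loglik(Q, times, states)
print("log-likelihood:", ll)
```

The latent-CTMC models in the paper keep this same matrix-exponential structure on an enlarged latent state space, which is what preserves tractability while relaxing the exponential-sojourn assumption.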
DEFF Research Database (Denmark)
Larsen, Kim Guldstrand; Mardare, Radu Iulian; Xue, Bingtian
2016-01-01
We introduce a version of the probabilistic µ-calculus (PMC) built on top of a probabilistic modal logic that allows encoding n-ary inequational conditions on transition probabilities. PMC extends previously studied calculi, and we prove that, despite its expressiveness, it enjoys a series of good metaproperties. Firstly, we prove the decidability of satisfiability checking by establishing the small model property. An algorithm for deciding the satisfiability problem is developed. As a second major result, we provide a complete axiomatization for the alternation-free fragment of PMC. The completeness proof...
Probabilistic conditional independence structures
Studeny, Milan
2005-01-01
Probabilistic Conditional Independence Structures provides the mathematical description of probabilistic conditional independence structures; the author uses non-graphical methods for their description and takes an algebraic approach. The monograph presents the methods of structural imsets and supermodular functions, and deals with independence implication and equivalence of structural imsets. Motivation, mathematical foundations and areas of application are included, and a rough overview of graphical methods is also given. In particular, the author has been careful to use suitable terminology, and presents the work so that it will be understood both by statisticians and by researchers in artificial intelligence. The necessary elementary mathematical notions are recalled in an appendix.
Probabilistic approach to mechanisms
Sandler, BZ
1984-01-01
This book discusses the application of probabilistic methods to the investigation of mechanical systems. It shows, for example, how random function theory can be applied directly to the investigation of random processes in the deflection of cam profiles, the pitch of gear teeth, pressure in pipes, etc. The author also deals with other technical applications of probabilistic theory, including those relating to pneumatic and hydraulic mechanisms and roller bearings. Many of the aspects are illustrated by examples of applications of the techniques under discussion.
Toward a continuous 405-kyr-calibrated Astronomical Time Scale for the Mesozoic Era
Hinnov, Linda; Ogg, James; Huang, Chunju
2010-05-01
Mesozoic cyclostratigraphy is being assembled into a continuous Astronomical Time Scale (ATS) tied to the Earth's cyclic orbital parameters. Recognition of a nearly ubiquitous, dominant ~400-kyr cycling in formations throughout the era has been particularly striking. Composite formations spanning contiguous intervals up to 50 myr clearly express these long-eccentricity cycles, and in some cases, this cycling is defined by third- or fourth-order sea-level sequences. This frequency is associated with the 405-kyr orbital eccentricity cycle, which provides a basic metronome and enables the extension of the well-defined Cenozoic ATS to scale the majority of the Mesozoic Era. This astronomical calibration has a resolution comparable to the 1% to 0.1% precision for radioisotope dating of Mesozoic ash beds, but with the added benefit of providing continuous stratigraphic coverage between dated beds. Extended portions of the Mesozoic ATS provide solutions to long-standing geologic problems of tectonics, eustasy, paleoclimate change, and rates of seafloor spreading.
Vibration analysis diagnostics by continuous-time models: A case study
International Nuclear Information System (INIS)
Pedregal, Diego J.; Carmen Carnero, Ma.
2009-01-01
In this paper a forecasting system for condition monitoring is developed, based on vibration signals, in order to improve the diagnosis of a critical piece of equipment at an industrial plant. The system combines statistical models capable of forecasting the state of the equipment with a cost model that defines the time of preventive replacement as the point at which the minimum of the expected cost per unit of time is reached in the future. The most relevant features of the system are that (i) it is developed for bivariate signals; (ii) the statistical models are set up in a continuous-time framework, due to the specific nature of the data; and (iii) it has been developed from scratch for a real case study and may be generalised to other pieces of equipment. The system is thoroughly tested on the equipment available, showing its correctness with the data in a statistical sense and its capability of producing sensible results for the condition monitoring programme.
Liu, Jian; Miller, William H
2008-09-28
The maximum entropy analytic continuation (MEAC) method is used to extend the range of accuracy of the linearized semiclassical initial value representation (LSC-IVR)/classical Wigner approximation for real-time correlation functions. The LSC-IVR provides a very effective "prior" for the MEAC procedure since it is very good at short times, exact for all times and temperatures for harmonic potentials (even for correlation functions of nonlinear operators), and becomes exact in the classical high-temperature limit. This combined MEAC+LSC-IVR approach is applied here to two highly nonlinear dynamical systems: a pure quartic potential in one dimension, and liquid para-hydrogen at two thermal state points (25 and 14 K under nearly zero external pressure). The former example shows the MEAC procedure to be a very significant enhancement of the LSC-IVR for correlation functions of both linear and nonlinear operators, especially at low temperature where semiclassical approximations are least accurate. For liquid para-hydrogen, the LSC-IVR is already seen to be excellent at T=25 K, but the MEAC procedure produces a significant correction at the lower temperature (T=14 K). Comparisons are also made as to how the MEAC procedure is able to provide corrections for other trajectory-based dynamical approximations when used as priors.
Xu, Tingting; Close, Dan M.; Webb, James D.; Price, Sarah L.; Ripp, Steven A.; Sayler, Gary S.
2013-05-01
Bioluminescent imaging is an emerging biomedical surveillance strategy that uses external cameras to detect in vivo light generated in small animal models of human physiology, or in vitro light generated in tissue culture or tissue scaffold mimics of human anatomy. The most widely utilized reporter is the firefly luciferase (luc) gene; however, it generates light only upon addition of a chemical substrate, and thus produces only intermittent single-time-point data snapshots. To overcome this disadvantage, we have demonstrated substrate-independent bioluminescent imaging using an optimized bacterial bioluminescence (lux) system. The lux reporter produces bioluminescence autonomously using components found naturally within the cell, thereby allowing imaging to occur continuously and in real time over the lifetime of the host. We have validated this technology in human cells with demonstrated chemical toxicological profiling against exotoxin exposures at signal strengths comparable to existing luc systems (~1.33 × 10⁷ photons/second). As a proof-of-principle demonstration, we have engineered breast carcinoma cells to express bioluminescence for real-time screening of endocrine-disrupting chemicals and validated detection of 17β-estradiol (EC₅₀ ≈ 10 pM). These and other applications of this new reporter technology will be discussed as potential new pathways towards improved models of target chemical bioavailability, toxicology, efficacy, and human safety.
International Nuclear Information System (INIS)
Louzguine-Luzgin, Dmitri V.; Inoue, Akihisa
2005-01-01
The time-temperature transformation (TTT) diagrams for the onset of devitrification of the Ge-Ni-La and Cu-Hf-Ti glassy alloys were calculated from isothermal differential calorimetry data using an Arrhenius equation. The continuous heating transformation (CHT) diagrams for the onset of devitrification of the glassy alloys were subsequently recalculated from the TTT diagrams. The recalculation method used for converting the TTT into CHT diagrams produces reasonable results and is not sensitive to the type of devitrification reaction (polymorphous or primary transformation). The diagrams allow a comparison of the stabilities of glassy alloys on a long-term scale. The relationship between these diagrams is discussed.
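A common way to recalculate a TTT diagram into a CHT diagram is Scheil's additivity rule: during heating at rate β, onset is declared once the accumulated fractions of the isothermal onset times sum to one. The abstract does not specify the exact recalculation used, so the following is only an illustrative sketch with hypothetical Arrhenius constants:

```python
import numpy as np

R = 8.314      # gas constant, J/(mol K)
Ea = 2.0e5     # hypothetical activation energy, J/mol
t0 = 1e-18     # hypothetical pre-exponential factor, s

def ttt_onset(T):
    """Isothermal onset time t(T) from an Arrhenius fit."""
    return t0 * np.exp(Ea / (R * T))

def cht_onset_temperature(beta, T_start=500.0, dT=0.01):
    """Scheil additivity: onset when the integral of dt / t_TTT(T) reaches 1
    along a ramp T(t) = T_start + beta * t."""
    T, s = T_start, 0.0
    while s < 1.0:
        s += dT / (beta * ttt_onset(T))
        T += dT
    return T

t_slow = cht_onset_temperature(beta=0.5)   # K/s
t_fast = cht_onset_temperature(beta=5.0)
print(t_slow, t_fast)
```

As expected for devitrification, a faster heating rate pushes the CHT onset to a higher temperature, which is the qualitative relationship between TTT and CHT curves the abstract refers to.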
Stochastic Games for Continuous-Time Jump Processes Under Finite-Horizon Payoff Criterion
Energy Technology Data Exchange (ETDEWEB)
Wei, Qingda, E-mail: weiqd@hqu.edu.cn [Huaqiao University, School of Economics and Finance (China); Chen, Xian, E-mail: chenxian@amss.ac.cn [Peking University, School of Mathematical Sciences (China)
2016-10-15
In this paper we study two-person nonzero-sum games for continuous-time jump processes with randomized history-dependent strategies under the finite-horizon payoff criterion. The state space is countable, and the transition rates and payoff functions are allowed to be unbounded from above and from below. Under suitable conditions, we introduce a new topology for the set of all randomized Markov multi-strategies and establish its compactness and metrizability. Then, by constructing approximating sequences of the transition rates and payoff functions, we show that the optimal value function for each player is the unique solution to the corresponding optimality equation, and we obtain the existence of a randomized Markov Nash equilibrium. Furthermore, we illustrate the applications of our main results with a controlled birth-and-death system.
Mixed-integrator-based bi-quad cell for designing a continuous time filter
International Nuclear Information System (INIS)
Chen Yong; Zhou Yumei
2010-01-01
A new mixed-integrator-based bi-quad cell is proposed. It offers an alternative mechanism for synthesizing complex poles compared with source-follower-based bi-quad cells, which are designed using the positive-feedback technique. Using negative feedback to combine different integrators, the proposed bi-quad cell synthesizes complex poles for continuous-time filter design. It exhibits several advantages, including a compact topology, high gain, no parasitic pole, no need for a CMFB circuit, and high capability. A fourth-order Butterworth lowpass filter using the proposed cells has been fabricated in 0.18 μm CMOS technology. The active area occupied by the filter with its test buffer is only 200 × 170 μm². The filter consumes only 201 μW and achieves a 68.5 dB dynamic range. (semiconductor integrated circuits)
New readout integrated circuit using continuous time fixed pattern noise correction
Dupont, Bertrand; Chammings, G.; Rapellin, G.; Mandier, C.; Tchagaspanian, M.; Dupont, Benoit; Peizerat, A.; Yon, J. J.
2008-04-01
LETI has been involved in IRFPA development since 1978, and its design department (LETI/DCIS) has focused on new ROIC architectures for many years. The trend is to integrate advanced functions into the CMOS design to achieve cost-efficient sensor production. The thermal imaging market is increasingly demanding systems with instant-on capability and low power consumption. The purpose of this paper is to present the latest developments in continuous-time fixed pattern noise correction. Several architectures are proposed: some are based on hardwired digital processing and some are purely analog. Both use scene-based algorithms. Moreover, a new method is proposed for the simultaneous correction of pixel offsets and sensitivities. In this scope, a new readout integrated circuit architecture has been implemented in 0.18 μm CMOS technology. The specification and the application of the ROIC are discussed in detail.
A toolbox for safety instrumented system evaluation based on improved continuous-time Markov chain
Wardana, Awang N. I.; Kurniady, Rahman; Pambudi, Galih; Purnama, Jaka; Suryopratomo, Kutut
2017-08-01
A safety instrumented system (SIS) is designed to restore a plant to a safe condition when a pre-hazardous event occurs. It plays a vital role, especially in the process industries. A SIS shall meet its safety requirement specification, and to confirm this, the SIS shall be evaluated. Typically, the evaluation is calculated by hand. This paper presents a toolbox for SIS evaluation, developed on the basis of an improved continuous-time Markov chain. The toolbox supports a detailed evaluation approach. The paper also illustrates an industrial application of the toolbox to evaluate the arch burner safety system of a primary reformer. The results of the case study demonstrate that the toolbox can be used to evaluate an industrial SIS in detail and to plan its maintenance strategy.
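A continuous-time Markov chain evaluation of a SIS typically builds a generator matrix from failure and repair rates and integrates the transient unavailability over a proof-test interval. As a hedged illustration (hypothetical 1oo1 rates, not the toolbox's actual model):

```python
import numpy as np
from scipy.linalg import expm

# hypothetical rates for a 1oo1 safety function (per hour)
lam_du = 2e-6     # dangerous undetected failure rate
lam_dd = 1e-6     # dangerous detected failure rate
mu_dd = 1 / 8.0   # repair rate for detected failures (MTTR = 8 h)

# states: 0 = OK, 1 = failed detected (under repair), 2 = failed undetected
Q = np.array([
    [-(lam_dd + lam_du), lam_dd, lam_du],
    [mu_dd,              -mu_dd, 0.0   ],
    [0.0,                0.0,    0.0   ],  # absorbing until the proof test
])

ti = 8760.0                                # proof-test interval: one year
ts = np.linspace(0.0, ti, 200)
p0 = np.array([1.0, 0.0, 0.0])
failed = np.array([0.0, 1.0, 1.0])
unavail = [p0 @ expm(Q * t) @ failed for t in ts]
pfd_avg = float(np.mean(unavail))          # average probability of failure on demand
print(pfd_avg)
```

For these rates the result sits near the textbook approximation λ_DU·TI/2 ≈ 8.8e-3, which is a useful sanity check on any more detailed Markov evaluation.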
A Random Parameter Model for Continuous-Time Mean-Variance Asset-Liability Management
Directory of Open Access Journals (Sweden)
Hui-qiang Ma
2015-01-01
We consider a continuous-time mean-variance asset-liability management problem in a market with random market parameters; that is, the interest rate, appreciation rates, and volatility rates are considered to be stochastic processes. By using the theories of stochastic linear-quadratic (LQ) optimal control and backward stochastic differential equations (BSDEs), we tackle this problem and derive optimal investment strategies as well as the mean-variance efficient frontier analytically in terms of the solutions of BSDEs. We find that the efficient frontier is still a parabola in a market with random parameters. Comparing with existing results, we also find that the liability does not affect the feasibility of the mean-variance portfolio selection problem. However, in an incomplete market with random parameters, the liability cannot be fully hedged.
Continuous-Time Mean-Variance Portfolio Selection under the CEV Process
Directory of Open Access Journals (Sweden)
Hui-qiang Ma
2014-01-01
We consider a continuous-time mean-variance portfolio selection model in which the stock price follows the constant elasticity of variance (CEV) process. The aim of this paper is to derive an optimal portfolio strategy and the efficient frontier. The mean-variance portfolio selection problem is formulated as a linearly constrained convex programming problem. By employing the Lagrange multiplier method and stochastic optimal control theory, we obtain the optimal portfolio strategy and the mean-variance efficient frontier analytically. The results show that the mean-variance efficient frontier is still a parabola in the mean-variance plane, and that the optimal strategies depend not only on the total wealth but also on the stock price. Moreover, some numerical examples are given to analyze the sensitivity of the efficient frontier with respect to the elasticity parameter and to illustrate the results presented in this paper. The numerical results show that the price of risk decreases as the elasticity coefficient increases.
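The CEV dynamics dS = μS dt + σS^β dW can be simulated with a simple Euler-Maruyama scheme, e.g. to sanity-check sensitivities to the elasticity parameter β. A sketch with hypothetical parameters (not the paper's calibration):

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_cev(s0, mu, sigma, beta, t, n_steps, n_paths):
    """Euler-Maruyama for dS = mu*S dt + sigma*S**beta dW."""
    dt = t / n_steps
    s = np.full(n_paths, float(s0))
    for _ in range(n_steps):
        dw = rng.standard_normal(n_paths) * np.sqrt(dt)
        # floor at a tiny positive value so S**beta stays well defined
        s = np.maximum(s + mu * s * dt + sigma * s**beta * dw, 1e-12)
    return s

s_t = simulate_cev(s0=100, mu=0.05, sigma=0.2, beta=0.8, t=1.0,
                   n_steps=252, n_paths=20000)
print(s_t.mean())   # close to 100 * exp(0.05), since the diffusion term has zero mean
```

Because the diffusion term is a martingale increment, the Monte Carlo mean should track s0·e^{μT} regardless of β, while the dispersion of terminal prices is what the elasticity parameter controls.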
Lattin, Frank G.; Paul, Donald G.
1996-11-01
A sorbent-based gas chromatographic method provides continuous quantitative measurement of phosgene, hydrogen cyanide, and cyanogen chloride in ambient air. These compounds are subject to workplace exposure limits as well as regulation under the terms of the Chemical Arms Treaty and Title III of the 1990 Clean Air Act amendments. The method was developed for on-site use in a mobile laboratory during remediation operations. Incorporated into the method are automated multi-level calibrations at time-weighted average concentrations or lower. Gaseous standards are prepared in fused-silica-lined air sampling canisters, then transferred to the analytical system through dynamic spiking. Precision and accuracy studies performed to validate the method are described, as are the system deactivation and passivation techniques critical to optimum method performance.
Continuous-Time Mean-Variance Portfolio Selection: A Stochastic LQ Framework
International Nuclear Information System (INIS)
Zhou, X.Y.; Li, D.
2000-01-01
This paper is concerned with a continuous-time mean-variance portfolio selection model that is formulated as a bicriteria optimization problem. The objective is to maximize the expected terminal return and minimize the variance of the terminal wealth. By putting weights on the two criteria, one obtains a single-objective stochastic control problem, which is, however, not in the standard form due to the variance term involved. It is shown that this nonstandard problem can be 'embedded' into a class of auxiliary stochastic linear-quadratic (LQ) problems. The stochastic LQ control model proves to be an appropriate and effective framework for studying the mean-variance problem in light of recent developments on general stochastic LQ problems with indefinite control weighting matrices. This gives rise to the efficient frontier in closed form for the original portfolio selection problem.
Esfandiari, Kasra; Abdollahi, Farzaneh; Talebi, Heidar Ali
2017-09-01
In this paper, an identifier-critic structure is introduced to find an online near-optimal controller for continuous-time nonaffine nonlinear systems with a saturated control signal. By employing two neural networks (NNs), the solution of the Hamilton-Jacobi-Bellman (HJB) equation associated with the cost function is derived without requiring a priori knowledge of the system dynamics. The weights of the identifier and critic NNs are tuned online and simultaneously, such that the unknown terms are approximated accurately and the control signal is kept between the saturation bounds. The convergence of the NN weights, the identification error, and the system states is guaranteed using Lyapunov's direct method. Finally, simulations are performed on two nonlinear systems to confirm the effectiveness of the proposed control strategy. Copyright © 2017 Elsevier Ltd. All rights reserved.
Yang, Xiong; Liu, Derong; Wang, Ding
2014-03-01
In this paper, an adaptive reinforcement-learning-based solution is developed for the infinite-horizon optimal control problem of constrained-input continuous-time nonlinear systems in the presence of nonlinearities with unknown structures. Two different types of neural networks (NNs) are employed to approximate the Hamilton-Jacobi-Bellman equation: a recurrent NN is constructed to identify the unknown dynamical system, and two feedforward NNs are used as the actor and the critic to approximate the optimal control and the optimal cost, respectively. Based on this framework, the action NN and the critic NN are tuned simultaneously, without requiring knowledge of the system drift dynamics. Moreover, by using Lyapunov's direct method, the weights of the action NN and the critic NN are guaranteed to be uniformly ultimately bounded, while keeping the closed-loop system stable. To demonstrate the effectiveness of the present approach, simulation results are illustrated.
Accuracy evaluation of a new real-time continuous glucose monitoring algorithm in hypoglycemia
DEFF Research Database (Denmark)
Mahmoudi, Zeinab; Jensen, Morten Hasselstrøm; Johansen, Mette Dencker
2014-01-01
Background: The purpose of this study was to evaluate the performance of a new continuous glucose monitoring (CGM) calibration algorithm and to compare it with the Guardian(®) REAL-Time (RT) (Medtronic Diabetes, Northridge, CA) calibration algorithm in hypoglycemia. SUBJECTS AND METHODS: CGM data were obtained from 10 type 1 diabetes patients undergoing insulin-induced hypoglycemia. Data were obtained in two separate sessions using the Guardian RT CGM device. Data from the same CGM sensor were calibrated by two different algorithms: the Guardian RT algorithm and a new calibration algorithm. The accuracy of the two algorithms was compared using four performance metrics. RESULTS: The median (mean) absolute relative deviation over the whole range of plasma glucose was 20.2% (32.1%) for the Guardian RT calibration and 17.4% (25.9%) for the new calibration algorithm. The mean (SD...
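The absolute relative deviation metric reported above is straightforward to compute from paired sensor and plasma readings. A minimal sketch (the glucose values below are made up for illustration, not the study's data):

```python
import numpy as np

def absolute_relative_deviation(cgm, reference):
    """ARD (%) between CGM sensor readings and reference plasma glucose."""
    cgm = np.asarray(cgm, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return np.abs(cgm - reference) / reference * 100.0

# hypothetical paired readings (mg/dL): sensor vs. plasma reference
cgm = [62, 70, 55, 48, 90]
ref = [70, 75, 60, 60, 85]
ard = absolute_relative_deviation(cgm, ref)
print(np.median(ard), ard.mean())
```

Reporting both the median and the mean, as the study does, is informative because large single deviations (common in hypoglycemia) inflate the mean far more than the median.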
Probabilistic assessment of SGTR management
International Nuclear Information System (INIS)
Champ, M.; Cornille, Y.; Lanore, J.M.
1989-04-01
In the case of a steam generator tube rupture (SGTR) event in France, mitigation of the accident relies on operator intervention through the application of a specific accident procedure. A detailed probabilistic analysis has been conducted, which required assessing the failure probability of the operator actions; for that purpose it was necessary to estimate the time available for the operator to apply the adequate procedure for various sequences. The results indicate that, by taking into account the delays and the existence of adequate accident procedures, the risk is reduced to a reasonably low level.
Vogelmann, James; Gallant, Alisa L.; Shi, Hua; Zhu, Zhe
2016-01-01
There are many types of changes occurring over the Earth's landscapes that can be detected and monitored using Landsat data. Here we focus on monitoring "within-state," gradual changes in vegetation in contrast with traditional monitoring of "abrupt" land-cover conversions. Gradual changes result from a variety of processes, such as vegetation growth and succession, damage from insects and disease, responses to shifts in climate, and other factors. Despite the prevalence of gradual changes across the landscape, they are largely ignored by the remote sensing community. Gradual changes are best characterized and monitored using time-series analysis, and with the successful launch of Landsat 8 we now have appreciable data continuity that extends the Landsat legacy across the previous 43 years. In this study, we conducted three related analyses: (1) comparison of spectral values acquired by Landsats 7 and 8, separated by eight days, to ensure compatibility for time-series evaluation; (2) tracking of multitemporal signatures for different change processes across Landsat 5, 7, and 8 sensors using anniversary-date imagery; and (3) tracking the same types of processes using all available acquisitions. In this investigation, we found that data representing natural vegetation from Landsats 5, 7, and 8 were comparable and did not indicate a need for major modification prior to use for long-term monitoring. Analyses using anniversary-date imagery can be very effective for assessing long-term patterns and trends occurring across the landscape, and are especially good for providing insights into long-term, continuous trends of growth or decline. We found that use of all available data provided a much more comprehensive understanding of the trends occurring, providing information about rate, duration, and intra- and inter-annual variability that could not be readily gleaned from the anniversary-date analyses. We observed that using all
Confluence reduction for probabilistic systems
Timmer, Mark; van de Pol, Jan Cornelis; Stoelinga, Mariëlle Ida Antoinette
In this presentation we introduce a novel technique for state space reduction of probabilistic specifications, based on a newly developed notion of confluence for probabilistic automata. We proved that this reduction preserves branching probabilistic bisimulation and can be applied on-the-fly. To
Probabilistic Flood Defence Assessment Tools
Directory of Open Access Journals (Sweden)
Slomp Robert
2016-01-01
institutions managing the flood defences, and not by just a small number of experts in probabilistic assessment. Therefore, data management and the use of software are main issues that have been covered in courses and training in 2016 and 2017. All in all, this is the largest change in the assessment of Dutch flood defences since 1996. In 1996, probabilistic techniques were first introduced to determine hydraulic boundary conditions (water levels and waves (wave height, wave period and direction)) for different return periods. To simplify the process, the assessment continues to consist of a three-step approach, moving from simple decision rules, to methods for semi-probabilistic assessment, and finally to a fully probabilistic analysis to compare the strength of flood defences with the hydraulic loads. The formal assessment results are thus mainly based on the fully probabilistic analysis and the ultimate limit state of the strength of a flood defence. For complex flood defences, additional models and software were developed. The current Hydra software suite (for policy analysis, formal flood defence assessment and design) will be replaced by the model Ringtoets. New stand-alone software has been developed for revetments, geotechnical analysis and slope stability of the foreshore. Design software and policy analysis software, including the Delta model, will be updated in 2018. A fully probabilistic method results in more precise assessments and more transparency in the process of assessment and reconstruction of flood defences. This is of increasing importance, as large-scale infrastructural projects in a highly urbanized environment are increasingly subject to political and societal pressure to add additional features. For this reason, it is of increasing importance to be able to determine which new feature really adds to flood protection, to quantify how much it adds to the level of flood protection, and to evaluate whether it is really worthwhile.
Please note: The Netherlands
Continuous-time random-walk model for anomalous diffusion in expanding media
Le Vot, F.; Abad, E.; Yuste, S. B.
2017-09-01
Expanding media are typical in many different fields, e.g., in biology and cosmology. In general, a medium expansion (contraction) brings about dramatic changes in the behavior of diffusive transport properties such as the set of positional moments and the Green's function. Here, we focus on the characterization of such effects when the diffusion process is described by the continuous-time random-walk (CTRW) model. As is well known, when the medium is static this model yields anomalous diffusion for a proper choice of the probability density function (pdf) for the jump length and the waiting time, but the behavior may change drastically if a medium expansion is superimposed on the intrinsic random motion of the diffusing particle. For the case where the jump length and the waiting time pdfs are long-tailed, we derive a general bifractional diffusion equation which reduces to a normal diffusion equation in the appropriate limit. We then study some particular cases of interest, including Lévy flights and subdiffusive CTRWs. In the former case, we find an analytical exact solution for the Green's function (propagator). When the expansion is sufficiently fast, the contribution of the diffusive transport becomes irrelevant at long times and the propagator tends to a stationary profile in the comoving reference frame. In contrast, for a contracting medium a competition between the spreading effect of diffusion and the concentrating effect of contraction arises. In the specific case of a subdiffusive CTRW in an exponentially contracting medium, the latter effect prevails for sufficiently long times, and all the particles are eventually localized at a single point in physical space. This "big crunch" effect, totally absent in the case of normal diffusion, stems from inefficient particle spreading due to subdiffusion. We also derive a hierarchy of differential equations for the moments of the transport process described by the subdiffusive CTRW model in an expanding medium.
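The qualitative effect of expansion on a subdiffusive CTRW can be reproduced with a few lines of simulation: heavy-tailed waiting times produce subdiffusion, and an exponential scale factor a(t) shrinks later jumps in comoving coordinates, suppressing spreading. A rough sketch (hypothetical parameters, not the paper's derivation):

```python
import numpy as np

rng = np.random.default_rng(3)

def ctrw_comoving(alpha, hubble, t_max, n_particles):
    """CTRW with heavy-tailed waits (tail index alpha < 1 -> subdiffusion);
    unit jumps are converted to comoving coordinates with a(t) = exp(hubble*t)."""
    y = np.zeros(n_particles)
    for i in range(n_particles):
        t = 0.0
        while True:
            t += rng.pareto(alpha) + 1.0        # waiting time with Pareto tail
            if t > t_max:
                break
            # a unit physical jump shrinks in comoving coordinates as a(t) grows
            y[i] += rng.choice([-1.0, 1.0]) / np.exp(hubble * t)
    return y

static = ctrw_comoving(alpha=0.7, hubble=0.0, t_max=200.0, n_particles=2000)
expand = ctrw_comoving(alpha=0.7, hubble=0.05, t_max=200.0, n_particles=2000)
print(static.var(), expand.var())
```

The comoving variance of the expanding-medium walk stays well below the static one, mirroring the abstract's observation that fast expansion freezes the propagator into a stationary comoving profile.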
Bergstra, J.A.; Middelburg, C.A.
2015-01-01
We add probabilistic features to basic thread algebra and its extensions with thread-service interaction and strategic interleaving. Here, threads represent the behaviours produced by instruction sequences under execution and services represent the behaviours exhibited by the components of execution
Probabilistic simple sticker systems
Selvarajoo, Mathuri; Heng, Fong Wan; Sarmin, Nor Haniza; Turaev, Sherzod
2017-04-01
A model for DNA computing using the recombination behavior of DNA molecules, known as a sticker system, was introduced by L. Kari, G. Paun, G. Rozenberg, A. Salomaa, and S. Yu in the paper "DNA computing, sticker systems and universality" (Acta Informatica, vol. 35, pp. 401-420, 1998). A sticker system uses the Watson-Crick complementarity of DNA molecules: starting from incomplete double-stranded sequences, sticking operations are applied iteratively until a complete double-stranded sequence is obtained. It is known that sticker systems with finite sets of axioms and sticker rules generate only regular languages. Hence, different types of restrictions have been considered to increase the computational power of sticker systems. Recently, a variant of restricted sticker systems, called probabilistic sticker systems, has been introduced [4]. In this variant, probabilities are initially associated with the axioms, and the probability of a generated string is computed by multiplying the probabilities of all occurrences of the initial strings in the computation of the string. Strings of the language are selected according to some probabilistic requirement. In this paper, we study fundamental properties of probabilistic simple sticker systems. We prove that the probabilistic enhancement increases the computational power of simple sticker systems.
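The probability semantics described in the abstract can be sketched directly: multiply the probabilities of the axioms used in a derivation and keep only strings that clear a cut-point. The axiom names, derivations, and cut-point below are hypothetical illustrations, not examples from the paper.

```python
def derivation_probability(axiom_probs, derivation):
    """Multiply the probabilities of all axiom occurrences used in a derivation."""
    p = 1.0
    for axiom in derivation:
        p *= axiom_probs[axiom]
    return p

def probabilistic_language(axiom_probs, derivations, cut_point):
    """Keep only the strings whose derivation probability exceeds the cut-point."""
    return {s for s, d in derivations.items()
            if derivation_probability(axiom_probs, d) > cut_point}

axiom_probs = {"A1": 0.5, "A2": 0.4}  # hypothetical axioms with probabilities
derivations = {"ab": ["A1"], "abab": ["A1", "A1"], "abba": ["A1", "A2"]}
lang = probabilistic_language(axiom_probs, derivations, 0.21)
# "ab" -> 0.5, "abab" -> 0.25, "abba" -> 0.2, so only the first two survive
```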
Visualizing Probabilistic Proof
Guerra-Pujol, Enrique
2015-01-01
The author revisits the Blue Bus Problem, a famous thought-experiment in law involving probabilistic proof, and presents simple Bayesian solutions to different versions of the blue bus hypothetical. In addition, the author expresses his solutions in standard and visual formats, i.e. in terms of probabilities and natural frequencies.
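The Bayesian calculation the abstract refers to fits in a few lines; the base rate and witness reliability below are illustrative assumptions in the spirit of the hypothetical, not figures from the paper.

```python
def posterior_blue(prior_blue, witness_accuracy):
    """Bayes' rule: P(bus was blue | witness reports blue)."""
    p_true_blue = prior_blue * witness_accuracy          # blue bus, reported blue
    p_false_blue = (1 - prior_blue) * (1 - witness_accuracy)  # green, misreported
    return p_true_blue / (p_true_blue + p_false_blue)

# Hypothetical numbers: 15% of the town's buses are blue, witness is 80% reliable.
p = posterior_blue(0.15, 0.80)   # = 0.12 / (0.12 + 0.17) ~= 0.414
```

In natural-frequency terms: of 1,000 accidents, 150 involve a blue bus and 120 of those are reported as blue, while 170 of the 850 green-bus cases are misreported as blue, so a "blue" report is correct in 120 of 290 cases.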
Memristive Probabilistic Computing
Alahmadi, Hamzah
2017-10-01
In the era of the Internet of Things and Big Data, unconventional techniques are rising to accommodate the large size of data and the resource constraints. New computing structures are advancing based on non-volatile memory technologies and different processing paradigms. Additionally, the intrinsic resiliency of current applications leads to the development of creative techniques in computation. In such applications, approximate computing is a natural fit, optimizing energy efficiency at the cost of some accuracy. In this work, we build probabilistic adders based on stochastic memristors. The probabilistic adders are analyzed with respect to the stochastic behavior of the underlying memristors. Multiple adder implementations are investigated and compared. The memristive probabilistic adder provides a different approach from typical approximate CMOS adders. Furthermore, it allows for high area savings and design flexibility in trading off performance against power. While reaching a performance level similar to that of approximate CMOS adders, the memristive adder achieves 60% power savings. An image-compression application is investigated using the memristive probabilistic adders, illustrating the trade-off between performance and energy.
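As a rough software analogue of the idea (not the paper's memristor model), a probabilistic adder can be emulated by flipping each sum bit of a ripple-carry adder with some small probability; the flip probability and bit width below are arbitrary choices.

```python
import random

def noisy_ripple_add(a, b, n_bits, p_flip, rng):
    """n-bit ripple-carry adder whose sum bits flip with probability p_flip
    (a software stand-in for the stochastic switching of a memristive adder)."""
    carry, out = 0, 0
    for i in range(n_bits):
        ai, bi = (a >> i) & 1, (b >> i) & 1
        s = ai ^ bi ^ carry                       # full-adder sum bit
        carry = (ai & bi) | (carry & (ai ^ bi))   # full-adder carry-out
        if rng.random() < p_flip:                 # stochastic bit error
            s ^= 1
        out |= s << i
    return out

rng = random.Random(42)
exact = 100 + 27
trials = [noisy_ripple_add(100, 27, 8, 0.05, rng) for _ in range(1000)]
mean_abs_err = sum(abs(t - exact) for t in trials) / len(trials)
```

With `p_flip = 0` the circuit is an exact adder; raising `p_flip` trades accuracy for (in hardware) energy, which is the knob approximate-computing designs exploit.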
DEFF Research Database (Denmark)
Chen, Peiyuan; Chen, Zhe; Bak-Jensen, Birgitte
2008-01-01
This paper reviews the development of the probabilistic load flow (PLF) techniques. Applications of the PLF techniques in different areas of power system steady-state analysis are also discussed. The purpose of the review is to identify different available PLF techniques and their corresponding...
Transitive probabilistic CLIR models.
Kraaij, W.; de Jong, Franciska M.G.
2004-01-01
Transitive translation could be a useful technique to enlarge the number of supported language pairs for a cross-language information retrieval (CLIR) system in a cost-effective manner. The paper describes several setups for transitive translation based on probabilistic translation models. The
Well-posedness and accuracy of the ensemble Kalman filter in discrete and continuous time
Kelly, D. T B
2014-09-22
The ensemble Kalman filter (EnKF) is a method for combining a dynamical model with data in a sequential fashion. Despite its widespread use, there has been little analysis of its theoretical properties. Many of the algorithmic innovations associated with the filter, which are required to make a usable algorithm in practice, are derived in an ad hoc fashion. The aim of this paper is to initiate the development of a systematic analysis of the EnKF, in particular to do so for small ensemble size. The perspective is to view the method as a state estimator, and not as an algorithm which approximates the true filtering distribution. The perturbed-observation version of the algorithm is studied, without and with variance inflation. Without variance inflation, well-posedness of the filter is established; with variance inflation, accuracy of the filter with respect to the true signal underlying the data is established. The algorithm is considered in discrete time, and also for a continuous time limit arising when observations are frequent and subject to large noise. The underlying dynamical model, and the assumptions about it, are sufficiently general to include the Lorenz '63 and '96 models, together with the incompressible Navier-Stokes equation on a two-dimensional torus. The analysis is limited to the case of complete observation of the signal with additive white noise. Numerical results are presented for the Navier-Stokes equation on a two-dimensional torus for both complete and partial observations of the signal with additive white noise.
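A minimal sketch of the perturbed-observation EnKF analysis step with multiplicative variance inflation, for a scalar state. The paper treats far more general models; the ensemble size, rates, and noise levels here are illustrative assumptions only.

```python
import random

def enkf_step(ensemble, y_obs, model, obs_var, inflation, rng):
    """One perturbed-observation EnKF step for a scalar, fully observed state."""
    forecast = [model(x) for x in ensemble]                 # propagate members
    m = sum(forecast) / len(forecast)
    forecast = [m + inflation * (x - m) for x in forecast]  # inflate the spread
    var = sum((x - m) ** 2 for x in forecast) / (len(forecast) - 1)
    gain = var / (var + obs_var)                            # scalar Kalman gain
    # each member is nudged toward its own perturbed copy of the observation
    return [x + gain * (y_obs + rng.gauss(0.0, obs_var ** 0.5) - x)
            for x in forecast]

rng = random.Random(0)
ens = [rng.gauss(0.0, 1.0) for _ in range(20)]              # prior ensemble
ens = enkf_step(ens, y_obs=2.0, model=lambda x: 0.9 * x,
                obs_var=1e-4, inflation=1.05, rng=rng)
mean = sum(ens) / len(ens)   # with a precise observation, pulled close to 2.0
```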
Continuous-Time Public Good Contribution Under Uncertainty: A Stochastic Control Approach
International Nuclear Information System (INIS)
Ferrari, Giorgio; Riedel, Frank; Steg, Jan-Henrik
2017-01-01
In this paper we study continuous-time stochastic control problems with both monotone and classical controls motivated by the so-called public good contribution problem. That is the problem of n economic agents aiming to maximize their expected utility allocating initial wealth over a given time period between private consumption and irreversible contributions to increase the level of some public good. We investigate the corresponding social planner problem and the case of strategic interaction between the agents, i.e. the public good contribution game. We show existence and uniqueness of the social planner’s optimal policy, we characterize it by necessary and sufficient stochastic Kuhn–Tucker conditions and we provide its expression in terms of the unique optional solution of a stochastic backward equation. Similar stochastic first order conditions prove to be very useful for studying any Nash equilibria of the public good contribution game. In the symmetric case they allow us to prove (qualitative) uniqueness of the Nash equilibrium, which we again construct as the unique optional solution of a stochastic backward equation. We finally also provide a detailed analysis of the so-called free rider effect.
Detection of Coronal Mass Ejections Using Multiple Features and Space-Time Continuity
Zhang, Ling; Yin, Jian-qin; Lin, Jia-ben; Feng, Zhi-quan; Zhou, Jin
2017-07-01
Coronal Mass Ejections (CMEs) release tremendous amounts of energy into the solar system, affecting satellites, power facilities and wireless transmission. To effectively detect a CME in Large Angle Spectrometric Coronagraph (LASCO) C2 images, we propose a novel algorithm to locate the suspected CME regions, using the Extreme Learning Machine (ELM) method and taking into account grayscale and texture features. Furthermore, space-time continuity is used in the detection algorithm to exclude false CME regions. The algorithm comprises three steps: i) define the feature vector, which contains textural and grayscale features of a running difference image; ii) design the detection algorithm based on the ELM method according to the feature vector; iii) improve the detection accuracy rate by using a decision rule based on space-time continuity. Experimental results show the efficiency and superiority of the proposed algorithm in detecting CMEs compared with traditional methods. In addition, our algorithm is insensitive to most noise.
Learning a Continuous-Time Streaming Video QoE Model.
Ghadiyaram, Deepti; Pan, Janice; Bovik, Alan C
2018-05-01
Over-the-top adaptive video streaming services are frequently impacted by fluctuating network conditions that can lead to rebuffering events (stalling events) and sudden bitrate changes. These events visually impact video consumers' quality of experience (QoE) and can lead to consumer churn. The development of models that can accurately predict viewers' instantaneous subjective QoE under such volatile network conditions could potentially enable the more efficient design of quality-control protocols for media-driven services, such as YouTube, Amazon, Netflix, and so on. However, most existing models only predict a single overall QoE score on a given video and are based on simple global video features, without accounting for relevant aspects of human perception and behavior. We have created a QoE evaluator, called the time-varying QoE Indexer, that accounts for interactions between stalling events, analyzes the spatial and temporal content of a video, predicts the perceptual video quality, models the state of the client-side data buffer, and consequently predicts continuous-time quality scores that agree quite well with human opinion scores. The new QoE predictor also embeds the impact of relevant human cognitive factors, such as memory and recency, and their complex interactions with the video content being viewed. We evaluated the proposed model on three different video databases and attained standout QoE prediction performance.
Continuous time random walk analysis of solute transport in fractured porous media
Energy Technology Data Exchange (ETDEWEB)
Cortis, Andrea; Cortis, Andrea; Birkholzer, Jens
2008-06-01
The objective of this work is to discuss solute transport phenomena in fractured porous media, where the macroscopic transport of contaminants in the highly permeable interconnected fractures can be strongly affected by solute exchange with the porous rock matrix. We are interested in a wide range of rock types, with matrix hydraulic conductivities varying from almost impermeable (e.g., granites) to somewhat permeable (e.g., porous sandstones). In the first case, molecular diffusion is the only transport process causing the transfer of contaminants between the fractures and the matrix blocks. In the second case, additional solute transfer occurs as a result of a combination of advective and dispersive transport mechanisms, with considerable impact on the macroscopic transport behavior. We start our study by conducting numerical tracer experiments employing a discrete (microscopic) representation of fractures and matrix. Using the discrete simulations as a surrogate for the 'correct' transport behavior, we then evaluate the accuracy of macroscopic (continuum) approaches in comparison with the discrete results. However, instead of using dual-continuum models, which are quite often used to account for this type of heterogeneity, we develop a macroscopic model based on the Continuous Time Random Walk (CTRW) framework, which characterizes the interaction between the fractured and porous rock domains by using a probability distribution function of residence times. A parametric study of how CTRW parameters evolve is presented, describing transport as a function of the hydraulic conductivity ratio between fractured and porous domains.
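The core CTRW idea of drawing waiting times from a heavy-tailed residence-time distribution can be sketched as a simulation; the Pareto-type waiting-time law and the exponent below are illustrative choices, not the calibrated distribution from the study.

```python
import random

def ctrw_final_position(t_max, alpha, rng):
    """One CTRW trajectory: unit jumps separated by heavy-tailed waiting
    times with P(tau > t) ~ t^(-alpha), 0 < alpha < 1 -- the regime that
    produces trapping, subdiffusion and long breakthrough tails."""
    t, x = 0.0, 0
    while True:
        tau = rng.random() ** (-1.0 / alpha) - 1.0   # Pareto-type wait
        if t + tau > t_max:
            return x                                 # particle trapped past t_max
        t += tau
        x += rng.choice((-1, 1))                     # unbiased unit jump

rng = random.Random(7)
positions = [ctrw_final_position(100.0, 0.7, rng) for _ in range(500)]
msd = sum(x * x for x in positions) / len(positions)  # grows sublinearly in time
```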
Energy Technology Data Exchange (ETDEWEB)
Geiger, S.; Cortis, A.; Birkholzer, J.T.
2010-04-01
Solute transport in fractured porous media is typically 'non-Fickian'; that is, it is characterized by early breakthrough, long tailing, and nonlinear growth of the centered second moment of the Green function. This behavior is due to the effects of (1) multirate diffusion occurring between the highly permeable fracture network and the low-permeability rock matrix, (2) a wide range of advection rates in the fractures and, possibly, the matrix as well, and (3) a range of path lengths. As a consequence, prediction of solute transport processes at the macroscale represents a formidable challenge. Classical dual-porosity (or mobile-immobile) approaches in conjunction with an advection-dispersion equation and a macroscopic dispersivity commonly fail to predict breakthrough in fractured porous media accurately. It was recently demonstrated that the continuous time random walk (CTRW) method can be used as a generalized upscaling approach. Here we extend this work and use results from high-resolution finite element-finite volume-based simulations of solute transport in an outcrop analogue of a naturally fractured reservoir to calibrate the CTRW method by extracting a distribution of retention times. This procedure allows us to predict breakthrough at other model locations accurately and to gain significant insight into the nature of the fracture-matrix interaction in naturally fractured porous reservoirs with geologically realistic fracture geometries.
A joint logistic regression and covariate-adjusted continuous-time Markov chain model.
Rubin, Maria Laura; Chan, Wenyaw; Yamal, Jose-Miguel; Robertson, Claudia Sue
2017-12-10
The use of longitudinal measurements to predict a categorical outcome is an increasingly common goal in research studies. Joint models are commonly used to describe two or more models simultaneously by considering the correlated nature of their outcomes and the random error present in the longitudinal measurements. However, there is limited research on joint models with longitudinal predictors and categorical cross-sectional outcomes. Perhaps the most challenging task is how to model the longitudinal predictor process such that it represents the true biological mechanism that dictates the association with the categorical response. We propose a joint logistic regression and Markov chain model to describe a binary cross-sectional response, where the unobserved transition rates of a two-state continuous-time Markov chain are included as covariates. We use the method of maximum likelihood to estimate the parameters of our model. In a simulation study, coverage probabilities of about 95%, standard deviations close to standard errors, and low biases for the parameter values show that our estimation method is adequate. We apply the proposed joint model to a dataset of patients with traumatic brain injury to describe and predict a 6-month outcome based on physiological data collected post-injury and admission characteristics. Our analysis indicates that the information provided by physiological changes over time may help improve prediction of long-term functional status of these severely ill subjects. Copyright © 2017 John Wiley & Sons, Ltd.
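For a two-state continuous-time Markov chain like the one underlying the proposed joint model, the transition probabilities P(t) = exp(Qt) have a simple closed form; the rate values below are hypothetical, not estimates from the paper.

```python
import math

def two_state_ctmc_P(t, lam, mu):
    """Transition matrix P(t) = exp(Qt) for a two-state CTMC with generator
    Q = [[-lam, lam], [mu, -mu]], written out in closed form."""
    s = lam + mu
    e = math.exp(-s * t)
    p00 = mu / s + (lam / s) * e   # stay in state 0
    p11 = lam / s + (mu / s) * e   # stay in state 1
    return [[p00, 1 - p00], [1 - p11, p11]]

P = two_state_ctmc_P(1.5, lam=0.4, mu=0.9)
# each row is a probability distribution over the two states
```

As t grows, both rows converge to the stationary distribution (mu/(lam+mu), lam/(lam+mu)); at t = 0 the matrix is the identity, a quick sanity check for any implementation.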
Coherent exciton transport in dendrimers and continuous-time quantum walks
Mülken, Oliver; Bierbaum, Veronika; Blumen, Alexander
2006-03-01
We model coherent exciton transport in dendrimers by continuous-time quantum walks. For dendrimers up to the second generation the coherent transport shows perfect recurrences when the initial excitation starts at the central node. For larger dendrimers, the recurrence ceases to be perfect, a fact which resembles results for discrete quantum carpets. Moreover, depending on the initial excitation site, we find that the coherent transport to certain nodes of the dendrimer has a very low probability. When the initial excitation starts from the central node, the problem can be mapped onto a line which simplifies the computational effort. Furthermore, the long time average of the quantum mechanical transition probabilities between pairs of nodes shows characteristic patterns and allows us to classify the nodes into clusters with identical limiting probabilities. For the (space) average of the quantum mechanical probability to be still or to be again at the initial site, we obtain, based on the Cauchy-Schwarz inequality, a simple lower bound which depends only on the eigenvalue spectrum of the Hamiltonian.
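The perfect recurrences mentioned above are easy to see in the smallest possible example, a continuous-time quantum walk on two connected nodes (a toy analogue of the dendrimer case, not the paper's model).

```python
import cmath

def ctqw_return_probability(t):
    """Return probability for a continuous-time quantum walk on two connected
    nodes with Hamiltonian H = graph Laplacian [[1, -1], [-1, 1]].
    Diagonalizing H (eigenvalues 0 and 2) gives the amplitude on the start
    node as (1 + e^{-2it}) / 2, so the return probability is cos(t)**2."""
    amp = (1 + cmath.exp(-2j * t)) / 2
    return abs(amp) ** 2

p_start = ctqw_return_probability(0.0)            # 1: walk starts here
p_swap = ctqw_return_probability(cmath.pi / 2)    # 0: fully on the other node
p_back = ctqw_return_probability(cmath.pi)        # 1: perfect recurrence
```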
Optimal Compensation with Hidden Action and Lump-Sum Payment in a Continuous-Time Model
International Nuclear Information System (INIS)
Cvitanic, Jaksa; Wan, Xuhu; Zhang Jianfeng
2009-01-01
We consider a problem of finding optimal contracts in continuous time, when the agent's actions are unobservable by the principal, who pays the agent with a one-time payoff at the end of the contract. We fully solve the case of quadratic cost and separable utility, for general utility functions. The optimal contract is, in general, a nonlinear function of the final outcome only, while in the previously solved cases, for exponential and linear utility functions, the optimal contract is linear in the final output value. In a specific example we compute, the first-best principal's utility is infinite, while with hidden action it becomes finite and increasing in the value of the output. In the second part of the paper we formulate a general mathematical theory for the problem. We apply the stochastic maximum principle to give necessary conditions for optimal contracts. Sufficient conditions are hard to establish, but we suggest a way to check sufficiency using non-convex optimization.
Lv, Yongfeng; Na, Jing; Yang, Qinmin; Wu, Xing; Guo, Yu
2016-01-01
An online adaptive optimal control is proposed for continuous-time nonlinear systems with completely unknown dynamics, which is achieved by developing a novel identifier-critic-based approximate dynamic programming algorithm with a dual neural network (NN) approximation structure. First, an adaptive NN identifier is designed to obviate the requirement of complete knowledge of system dynamics, and a critic NN is employed to approximate the optimal value function. Then, the optimal control law is computed based on the information from the identifier NN and the critic NN, so that the actor NN is not needed. In particular, a novel adaptive law design method with the parameter estimation error is proposed to online update the weights of both identifier NN and critic NN simultaneously, which converge to small neighbourhoods around their ideal values. The closed-loop system stability and the convergence to small vicinity around the optimal solution are all proved by means of the Lyapunov theory. The proposed adaptation algorithm is also improved to achieve finite-time convergence of the NN weights. Finally, simulation results are provided to exemplify the efficacy of the proposed methods.
Data-driven strategies for robust forecast of continuous glucose monitoring time-series.
Fiorini, Samuele; Martini, Chiara; Malpassi, Davide; Cordera, Renzo; Maggi, Davide; Verri, Alessandro; Barla, Annalisa
2017-07-01
Over the past decade, continuous glucose monitoring (CGM) has proven to be a very resourceful tool for diabetes management. To date, CGM devices are employed for both retrospective and online applications. Their use allows clinicians to better describe the patients' pathology as well as to achieve better control of the patients' level of glycemia. The analysis of CGM sensor data makes it possible to observe a wide range of metrics, such as the glycemic variability during the day or the amount of time spent below or above certain glycemic thresholds. However, due to the high variability of the glycemic signals among sensors and individuals, CGM data analysis is a non-trivial task. Standard signal filtering solutions fall short when an appropriate model personalization is not applied. State-of-the-art data-driven strategies for online CGM forecasting rely upon the use of recursive filters. Each time a new sample is collected, such models need to adjust their parameters in order to predict the next glycemic level. In this paper we aim at demonstrating that the problem of online CGM forecasting can be successfully tackled by personalized machine learning models that do not need to recursively update their parameters.
Continuous Real-time Measurements of Vertical Distribution of Magnetic Susceptibility In Soils
Petrovsky, E.; Hulka, Z.; Kapicka, A.; Magprox Team
Measurements of top-soil magnetic susceptibility are used for approximate outlining of polluted areas. However, one of the serious limitations of the method is discriminating top-soil layers enhanced by atmospherically deposited anthropogenic particles from those dominated by natural particles migrating from magnetically rich basement rocks. For this purpose, measuring the vertical distribution of magnetic susceptibility along soil profiles is one of the most effective ways of estimating the lithogenic contribution. Until now, in most cases soil cores had to be measured in the laboratory. This method is quite time-consuming and does not allow a flexible decision about the suitability of the measured site for surface magnetic mapping. In our contribution we will present a new device enabling continuous real-time measurements of the vertical distribution of magnetic susceptibility directly in the field, performed in holes left after soil coring. The method is fast, yields smooth curves (6 data points per 1 mm depth), is at least as sensitive as the laboratory methods available until now, and an attached notebook enables direct, online control of the lithogenic versus anthropogenic contributions.
DEFF Research Database (Denmark)
Secher, A L; Madsen, A B; Nielsen, Lene Ringholm
2012-01-01
Aim: To evaluate self-reported satisfaction and barriers to initiating real-time continuous glucose monitoring in early pregnancy among women with pregestational diabetes. Methods: Fifty-four women with Type 1 diabetes and 14 women with Type 2 diabetes were offered continuous glucose monitoring... (of initial monitoring). Ten women (15%) did not wish to use continuous glucose monitoring again in pregnancy. Main causes behind early removal of continuous glucose monitoring were self-reported skin irritation, technical problems and continuous glucose monitoring inaccuracy. No differences were found...
Directory of Open Access Journals (Sweden)
Charmaine Demanuele
Discriminating spatiotemporal stages of information processing involved in complex cognitive processes remains a challenge for neuroscience. This is especially so in the prefrontal cortex, whose subregions, such as the dorsolateral prefrontal (DLPFC), anterior cingulate (ACC) and orbitofrontal (OFC) cortices, are known to have differentiable roles in cognition. Yet it is much less clear how these subregions contribute to the different cognitive processes required by a given task. To investigate this, we use functional MRI data recorded from a group of healthy adults during a "Jumping to Conclusions" probabilistic reasoning task. We used a novel approach combining multivariate test statistics with bootstrap-based procedures to discriminate between different task stages reflected in the fMRI blood oxygenation level dependent signal pattern and to unravel differences in task-related information encoded by these regions. Furthermore, we implemented a new feature extraction algorithm that selects voxels from any set of brain regions that are jointly maximally predictive about specific task stages. Using both the multivariate statistics approach and the algorithm that searches for maximally informative voxels, we show that during the Jumping to Conclusions task, the DLPFC and ACC contribute more to the decision-making phase comprising the accumulation of evidence and probabilistic reasoning, while the OFC is more involved in choice evaluation and uncertainty feedback. Moreover, we show that in presumably non-task-related regions (temporal cortices) all the information there was about task processing could be extracted from just one voxel (indicating the unspecific nature of that information), while for prefrontal areas a wider multivariate pattern of activity was maximally informative. We present a new approach to reveal the different roles of brain regions during the processing of one task from multivariate activity patterns measured by fMRI. This method can be a valuable
Probabilistic numerics and uncertainty in computations.
Hennig, Philipp; Osborne, Michael A; Girolami, Mark
2015-07-08
We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data has led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations.
Comolli, Alessandro; Hakoun, Vivien; Dentz, Marco
2017-04-01
Understanding the process of solute transport in heterogeneous porous media is of crucial importance for several environmental and social purposes, ranging from aquifer contamination and remediation to risk assessment in nuclear waste repositories. The complexity of this aim is mainly ascribable to the heterogeneity of natural media, which can be observed at all the scales of interest, from the pore scale to the catchment scale. In fact, the intrinsic heterogeneity of porous media gives rise to the well-known non-Fickian footprints of transport, including heavy-tailed breakthrough curves, non-Gaussian spatial density profiles and the non-linear growth of the mean squared displacement. Several studies have investigated the processes through which heterogeneity impacts the transport properties, which include local modifications to the advective-dispersive motion of solutes, mass exchanges between mobile and immobile phases (e.g. sorption/desorption reactions or diffusion into the solid matrix) and spatial correlation of the flow field. In the last decades, the continuous time random walk (CTRW) model has often been used to describe solute transport in heterogeneous conditions and to quantify the impact of point heterogeneity, spatial correlation and mass transfer on the average transport properties [1]. Open issues regarding this approach are the possibility of relating measurable properties of the medium to the parameters of the model, as well as its capability to provide predictive information. In a recent work [2] the authors have shed new light on the relationship between Lagrangian and Eulerian dynamics as well as on their evolution from arbitrary initial conditions. On the basis of these results, we derive a CTRW model for the description of Darcy-scale transport in d-dimensional media characterized by spatially random permeability fields. The CTRW approach models particle velocities as a spatial Markov process, which is
A probabilistic approach for representation of interval uncertainty
International Nuclear Information System (INIS)
Zaman, Kais; Rangavajhala, Sirisha; McDonald, Mark P.; Mahadevan, Sankaran
2011-01-01
In this paper, we propose a probabilistic approach to represent interval data for input variables in reliability and uncertainty analysis problems, using flexible families of continuous Johnson distributions. Such a probabilistic representation of interval data facilitates a unified framework for handling aleatory and epistemic uncertainty. For fitting probability distributions, methods such as moment matching are commonly used in the literature. However, unlike point data, where single estimates for the moments of data can be calculated, moments of interval data can only be computed in terms of upper and lower bounds. Finding bounds on the moments of interval data has been generally considered an NP-hard problem because it includes a search among the combinations of multiple values of the variables, including interval endpoints. In this paper, we present efficient algorithms based on continuous optimization to find the bounds on second and higher moments of interval data. With numerical examples, we show that the proposed bounding algorithms are scalable in polynomial time with respect to the number of intervals. Using the bounds on moments computed with the proposed approach, we fit a family of Johnson distributions to interval data. Furthermore, using an optimization approach based on percentiles, we find the bounding envelopes of the family of distributions, termed a Johnson p-box. The idea of bounding envelopes for the family of Johnson distributions is analogous to the notion of the empirical p-box in the literature. Several sets of interval data with different numbers of intervals and types of overlap are presented to demonstrate the proposed methods. In contrast to the computationally expensive nested analysis that is typically required in the presence of interval variables, the proposed probabilistic representation enables inexpensive optimization-based strategies to estimate bounds on an output quantity of interest.
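For raw (non-central) moments, the bounds decompose interval by interval, as in this sketch; it is central moments such as the variance where the combinatorial search over interval endpoints that the abstract mentions becomes hard. The data values below are made up for illustration.

```python
def mean_bounds(intervals):
    """Bounds on the sample mean of interval data [l_i, u_i]: the mean is
    monotone in each x_i, so the extremes sit at the interval endpoints."""
    n = len(intervals)
    return (sum(l for l, _ in intervals) / n,
            sum(u for _, u in intervals) / n)

def second_raw_moment_bounds(intervals):
    """Bounds on (1/n) * sum(x_i^2): x^2 over [l, u] is minimized at 0 if the
    interval straddles 0 (else at the endpoint nearer 0), and maximized at
    the endpoint with the larger magnitude."""
    n = len(intervals)
    lo = sum((0.0 if l <= 0.0 <= u else min(l * l, u * u))
             for l, u in intervals) / n
    hi = sum(max(l * l, u * u) for l, u in intervals) / n
    return lo, hi

data = [(1.0, 2.0), (-1.0, 3.0), (0.5, 0.8)]   # hypothetical interval data
m_lo, m_hi = mean_bounds(data)                  # (0.5/3, 5.8/3)
s_lo, s_hi = second_raw_moment_bounds(data)     # (1.25/3, 13.64/3)
```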
Real-time continuous image-guided surgery: Preclinical investigation in glossectomy.
Tabanfar, Reza; Qiu, Jimmy; Chan, Harley; Aflatouni, Niousha; Weersink, Robert; Hasan, Wael; Irish, Jonathan C
2017-10-01
To develop, validate, and study the efficacy of an intraoperative real-time continuous image-guided surgery (RTC-IGS) system for glossectomy. Prospective study. We created a RTC-IGS system and surgical simulator for glossectomy, enabling definition of a surgical target preoperatively, real-time cautery tracking, and display of a surgical plan intraoperatively. System performance was evaluated by a group of otolaryngology residents, fellows, medical students, and staff under a reproducible setting by using realistic tongue phantoms. Evaluators were grouped into a senior and a junior group based on surgical experience, and guided and unguided tumor resections were performed. National Aeronautics and Space Administration Task Load Index (NASA-TLX) scores and a Likert scale were used to measure workloads and impressions of the system, respectively. Efficacy was studied by comparing surgical accuracy, time, collateral damage, and workload between RTC-IGS and non-navigated resections. The senior group performed more accurately (80.9% ± 3.7% vs. 75.2% ± 5.5%, P = .28), required less time (5.0 ± 1.3 minutes vs. 7.3 ± 1.2 minutes, P = .17), and experienced lower workload (43 ± 2.0 vs. 64.4 ± 1.3 NASA-TLX score, P = .08), suggesting a trend of construct validity. Impressions were favorable, with participants reporting the system is a valuable practice tool (4.0/5 ± 0.3) and increases confidence (3.9/5 ± 0.4). Use of RTC-IGS improved both groups' accuracy, with the junior group improving from 64.4% ± 5.4% to 75.2% ± 5.5% (P = .01) and the senior group improving from 76.1% ± 4.5% to 80.9% ± 3.7% (P = .16). We created an RTC-IGS system and surgical simulator and demonstrated a trend of construct validity. Our navigated simulator allows junior trainees to practice glossectomies outside the operating room. In all evaluators, navigation assistance resulted in increased surgical accuracy. NA Laryngoscope, 127:E347-E353, 2017. © 2017 The American Laryngological
Lee, Jae Young; Park, Jin Bae; Choi, Yoon Ho
2015-05-01
This paper focuses on a class of reinforcement learning (RL) algorithms, named integral RL (I-RL), that solve continuous-time (CT) nonlinear optimal control problems with input-affine system dynamics. First, we extend the concepts of exploration, integral temporal difference, and invariant admissibility to the target CT nonlinear system that is governed by a control policy plus a probing signal called an exploration. Then, we show input-to-state stability (ISS) and invariant admissibility of the closed-loop systems with the policies generated by the integral policy iteration (I-PI) or invariantly admissible PI (IA-PI) method. Based on these, three online I-RL algorithms, named explorized I-PI and integral Q-learning I and II, are proposed, all of which generate the same convergent sequences as I-PI and IA-PI under the required excitation condition on the exploration. All the proposed methods are partially or completely model-free, and can simultaneously explore the state space in a stable manner during the online learning processes. ISS, invariant admissibility, and convergence properties of the proposed methods are also investigated, and related to these, we show the design principles of the exploration for safe learning. Neural-network-based implementation methods for the proposed schemes are also presented in this paper. Finally, several numerical simulations are carried out to verify the effectiveness of the proposed methods.
Continuous-time interval model identification of blood glucose dynamics for type 1 diabetes
Kirchsteiger, Harald; Johansson, Rolf; Renard, Eric; del Re, Luigi
2014-07-01
While good physiological models of the glucose metabolism in type 1 diabetic patients are well known, their parameterisation is difficult. The high intra-patient variability observed is a further major obstacle. This holds for data-based models too, so that no good patient-specific models are available. Against this background, this paper proposes the use of interval models to cover the different metabolic conditions. The control-oriented models contain a carbohydrate and insulin sensitivity factor to be used for insulin bolus calculators directly. Available clinical measurements were sampled on an irregular schedule which prompts the use of continuous-time identification, also for the direct estimation of the clinically interpretable factors mentioned above. An identification method is derived and applied to real data from 28 diabetic patients. Model estimation was done on a clinical data-set, whereas validation results shown were done on an out-of-clinic, everyday life data-set. The results show that the interval model approach allows a much more regular estimation of the parameters and avoids physiologically incompatible parameter estimates.
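The carbohydrate and correction (insulin sensitivity) factors mentioned above feed directly into the standard bolus-calculator arithmetic, sketched below. The function name, units and numbers are illustrative only; the paper's contribution is estimating interval bounds on these factors, not the calculator itself.

```python
def insulin_bolus(carbs_g, glucose, target, carb_factor, correction_factor):
    """Standard bolus-calculator arithmetic: meal dose = carbs / carbohydrate
    factor, correction dose = (current - target glucose) / correction factor.
    With interval-valued factors, plugging in their endpoints yields a dose
    interval instead of a single number."""
    return carbs_g / carb_factor + (glucose - target) / correction_factor
```

For example, 60 g of carbohydrate at glucose 180 mg/dl against a 100 mg/dl target, with a carbohydrate factor of 10 g/U and a correction factor of 40 mg/dl per U, gives 6 + 2 = 8 units.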
EVALUATING CONTINUOUS-TIME SLAM USING A PREDEFINED TRAJECTORY PROVIDED BY A ROBOTIC ARM
Directory of Open Access Journals (Sweden)
B. Koch
2017-09-01
Full Text Available Recently published approaches to SLAM algorithms process laser sensor measurements and output a map as a point cloud of the environment. Often the actual precision of the map remains unclear, since SLAM algorithms apply local improvements to the resulting map. Unfortunately, it is not trivial to compare the performance of SLAM algorithms objectively, especially without an accurate ground truth. This paper presents a novel benchmarking technique that allows a precise map generated with an accurate ground-truth trajectory to be compared to a map whose trajectory was distorted by different forms of noise. The accurate ground truth is acquired by mounting a laser scanner on an industrial robotic arm. The robotic arm is moved on a predefined path while the position and orientation of the end-effector tool are monitored. During this process the 2D profile measurements of the laser scanner are recorded in six degrees of freedom and afterwards used to generate a precise point cloud of the test environment. For benchmarking, an offline continuous-time SLAM algorithm is subsequently applied to remove the inserted distortions. Finally, it is shown that the manipulated point cloud can be restored to its previous state and is even slightly improved compared to the original version, since small errors introduced by imprecise assumptions, sensor noise and calibration errors are removed as well.
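The benchmark's core idea — perturb a ground-truth trajectory with noise, then measure how far the resulting map drifts — can be sketched in a few lines. This is a deliberate simplification (2D positions with Gaussian noise and a point-to-point RMSE, whereas the paper distorts full 6-DoF poses and evaluates point clouds); all names are illustrative.

```python
import math
import random

def distort(trajectory, sigma=0.05):
    """Perturb each 2D pose of a ground-truth trajectory with Gaussian
    noise, mimicking the benchmark's manipulated trajectories."""
    return [(x + random.gauss(0.0, sigma), y + random.gauss(0.0, sigma))
            for x, y in trajectory]

def rmse(a, b):
    """Root-mean-square point-to-point error between two equally sampled
    trajectories -- a simple stand-in for a map-quality metric."""
    return math.sqrt(sum((ax - bx) ** 2 + (ay - by) ** 2
                         for (ax, ay), (bx, by) in zip(a, b)) / len(a))

random.seed(42)
truth = [(0.1 * i, 0.0) for i in range(100)]   # straight ground-truth path
noisy = distort(truth)                          # the "manipulated" version
```

A continuous-time SLAM algorithm then succeeds on the benchmark if it drives `rmse(truth, corrected)` back toward `rmse(truth, truth) == 0`.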
Continuous-time modeling of cell fate determination in Arabidopsis flowers
Directory of Open Access Journals (Sweden)
Angenent Gerco C
2010-07-01
Full Text Available Abstract Background The genetic control of floral organ specification is currently being investigated by various approaches, both experimentally and through modeling. Models and simulations have mostly involved boolean or related methods, and so far a quantitative, continuous-time approach has not been explored. Results We propose an ordinary differential equation (ODE) model that describes the gene expression dynamics of a gene regulatory network that controls floral organ formation in the model plant Arabidopsis thaliana. In this model, the dimerization of MADS-box transcription factors is incorporated explicitly. The unknown parameters are estimated from (known) experimental expression data. The model is validated by simulation studies of known mutant plants. Conclusions The proposed model gives realistic predictions with respect to independent mutation data. A simulation study is carried out to predict the effects of a new type of mutation that has so far not been made in Arabidopsis, but that could be used as a severe test of the validity of the model. According to our predictions, the role of dimers is surprisingly important. Moreover, the functional loss of any dimer leads to one or more phenotypic alterations.
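As a toy illustration of the modelling style — continuous-time ODEs with explicit dimerization — the sketch below integrates two monomer concentrations and their dimer with explicit Euler steps. Every variable name and rate constant here is an invented placeholder, not the paper's network or estimated parameters.

```python
def euler_step(state, dt, beta=1.0, delta=0.2, k_on=1.0, k_off=0.1):
    """One explicit-Euler step of a toy gene-regulation ODE: monomer
    concentrations x and y and their dimer d. The dimer forms by mass
    action and activates production of y through a saturating term."""
    x, y, d = state
    dimerization = k_on * x * y - k_off * d   # mass-action dimer formation
    activation = d / (1.0 + d)                # saturating activation by the dimer
    dx = beta - delta * x - dimerization
    dy = beta * activation - delta * y - dimerization
    dd = dimerization - delta * d
    return (x + dt * dx, y + dt * dy, d + dt * dd)

state = (1.0, 0.5, 0.0)        # initial monomer and dimer concentrations
for _ in range(2000):          # integrate to t = 20
    state = euler_step(state, 0.01)
```

A mutant (e.g. a dimer knockout) is then simulated by zeroing the corresponding rate constant and comparing the resulting steady state, which is the kind of in-silico mutation study the abstract describes.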
A lattice-model representation of continuous-time random walks
International Nuclear Information System (INIS)
Campos, Daniel; Mendez, Vicenc
2008-01-01
We report some ideas for constructing lattice models (LMs) as a discrete approach to the reaction-dispersal (RD) or reaction-random walks (RRW) models. The analysis of a rather general class of Markovian and non-Markovian processes, from the point of view of their wavefront solutions, lets us show that in some regimes their macroscopic dynamics (front speed) turns out to be different from that predicted by classical reaction-diffusion equations, which are often used as a mean-field approximation to the problem. Hence the convenience of a more general framework, such as that given by continuous-time random walks (CTRW), is claimed. Here we use LMs as a numerical approach in order to support that idea, whereas in previous works our discussion was restricted to analytical models. For the two specific cases studied here, we derive and analyze the mean-field expressions for our LMs. As a result, we are able to provide some links between the numerical and analytical approaches studied.
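A minimal CTRW on the integer lattice can be simulated directly: draw a waiting time, then take a symmetric jump. With exponential waiting times (the Markovian special case below) the walk is diffusive, with mean-squared displacement growing linearly in time; non-Markovian regimes would use heavy-tailed waiting times instead. The parameters are illustrative.

```python
import random

def ctrw_position(t_max, rate=1.0):
    """One continuous-time random walk on the integers: exponential
    waiting times with the given rate, then a symmetric +/-1 jump.
    Returns the walker's position at time t_max."""
    t, x = 0.0, 0
    while True:
        t += random.expovariate(rate)
        if t > t_max:
            return x
        x += random.choice((-1, 1))

random.seed(0)
positions = [ctrw_position(10.0) for _ in range(2000)]
msd = sum(p * p for p in positions) / len(positions)
# Exponential waiting times make the walk Markovian and diffusive:
# MSD ~ rate * t, i.e. about 10 for these parameters.
```

Replacing `random.expovariate` with a Pareto-distributed waiting time is the one-line change that produces the anomalous, non-mean-field behaviour the abstract alludes to.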
Low Power Continuous-Time Delta-Sigma ADC with Current Output DAC
DEFF Research Database (Denmark)
Marker-Villumsen, Niels; Jørgensen, Ivan Harald Holger; Bruun, Erik
2015-01-01
The paper presents a continuous-time (CT) DeltaSigma (∆Σ) analog-to-digital converter (ADC) using a current output digital-to-analog converter (DAC) for the feedback. From circuit analysis it is shown that using a current output DAC makes it possible to relax the noise requirements of the 1st integrator of the loopfilter, and thereby reduce the current consumption. Furthermore, the noise of the current output DAC depends on the ADC input signal level, enabling a dynamic range that is larger than the peak signal-to-noise ratio (SNR). The current output DAC is used in a 3rd order multibit CT ∆Σ ADC for audio applications, designed in a 0.18 µm CMOS process, with active-RC integrators, a 7-level Flash ADC quantizer and current output DAC for the feedback. From simulations the ADC achieves a dynamic range of 95.0 dB in the audio band, with a current consumption of 284 µA for a 1.7 V…
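The noise-shaping principle behind any ∆Σ converter can be shown with a first-order discrete-time model: the integrator accumulates the error between the input and the feedback DAC, so the bit-stream average tracks the input while quantization noise is pushed to high frequencies. This 1-bit sketch is only the textbook principle, not the paper's 3rd-order continuous-time, 7-level design.

```python
def delta_sigma_1st_order(samples):
    """First-order discrete-time delta-sigma modulator with a 1-bit
    quantizer and a +/-1 full-scale feedback DAC. Returns the bit-stream;
    its running average approximates the (slowly varying) input."""
    integrator, feedback, bits = 0.0, 0.0, []
    for x in samples:
        integrator += x - feedback          # integrate input minus DAC output
        bit = 1 if integrator >= 0.0 else 0
        feedback = 1.0 if bit else -1.0     # 1-bit feedback DAC
        bits.append(bit)
    return bits

bits = delta_sigma_1st_order([0.5] * 4000)
duty = sum(bits) / len(bits)   # for a DC input u, duty approaches (u + 1) / 2
```

For the constant input 0.5 the modulator settles into a 1,1,1,0 pattern, so the duty cycle converges to 0.75, i.e. the decimated output recovers the input.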
The cascade model of teachers’ continuing professional development in Kenya: A time for change?
Directory of Open Access Journals (Sweden)
Harry Kipkemoi Bett
2016-12-01
Full Text Available Kenya is one of the countries whose teachers the UNESCO (2015) report cited as lacking curriculum support in the classroom. As is the case in many African countries, a large portion of teachers in Kenya enter the teaching profession inadequately prepared, while those already in the field receive insufficient support in their professional lives. The cascade model has often been utilized in the country whenever the need for teachers’ continuing professional development (TCPD) has arisen, especially on a large scale. The preference for the model is due to, among other things, its cost effectiveness and its ability to reach many teachers within a short period of time. Many researchers have, however, cast aspersions on this model for its glaring shortcomings. On the contrary, TCPD programmes that are collaborative in nature and based on teachers’ contexts have been found to be more effective than those that are not. This paper briefly examines cases of the cascade model in Kenya and the challenges associated with this model, and proposes the adoption of collaborative and institution-based models to mitigate these challenges. The education sectors of many nations in Africa, and of the developing world more broadly, will find the discussion here relevant.
A lattice-model representation of continuous-time random walks
Energy Technology Data Exchange (ETDEWEB)
Campos, Daniel [School of Mathematics, Department of Applied Mathematics, University of Manchester, Manchester M60 1QD (United Kingdom); Mendez, Vicenc [Grup de Fisica Estadistica, Departament de Fisica, Universitat Autonoma de Barcelona, 08193 Bellaterra (Barcelona) (Spain)], E-mail: daniel.campos@uab.es, E-mail: vicenc.mendez@uab.es
2008-02-29
We report some ideas for constructing lattice models (LMs) as a discrete approach to the reaction-dispersal (RD) or reaction-random walks (RRW) models. The analysis of a rather general class of Markovian and non-Markovian processes, from the point of view of their wavefront solutions, lets us show that in some regimes their macroscopic dynamics (front speed) turns out to be different from that predicted by classical reaction-diffusion equations, which are often used as a mean-field approximation to the problem. Hence the convenience of a more general framework, such as that given by continuous-time random walks (CTRW), is claimed. Here we use LMs as a numerical approach in order to support that idea, whereas in previous works our discussion was restricted to analytical models. For the two specific cases studied here, we derive and analyze the mean-field expressions for our LMs. As a result, we are able to provide some links between the numerical and analytical approaches studied.
Real-time continuous glucose monitoring systems in the classroom/school environment.
Benassi, Kari; Drobny, Jessica; Aye, Tandy
2013-05-01
Children with type 1 diabetes (T1D) spend 4-7 h/day in school with very little supervision of their diabetes management. Therefore, families have become more dependent on technology, such as use of real-time continuous glucose monitoring (RT-CGM), to provide increased supervision of their diabetes management. We sought to assess the impact of RT-CGM use in the classroom/school environment. Children with T1D using RT-CGM, their parents, and teachers completed a questionnaire about RT-CGM in the classroom/school environment. The RT-CGM was tolerated well in the classroom/school environment. Seventy percent of parents, 75% of students, and 51% of teachers found RT-CGM useful in the classroom/school environment. The students found the device to be more disruptive than did their parents and teachers. However, all three groups agreed that RT-CGM increased their comfort with diabetes management at school. Our study suggests that RT-CGM is useful and not disruptive in the classroom/school environment. The development of education materials for teachers could further increase its acceptance in the classroom/school environment.
Integration of Continuous-Time Dynamics in a Spiking Neural Network Simulator
Directory of Open Access Journals (Sweden)
Jan Hahne
2017-05-01
Full Text Available Contemporary modeling approaches to the dynamics of neural networks include two important classes of models: biologically grounded spiking neuron models and functionally inspired rate-based units. We present a unified simulation framework that supports the combination of the two for multi-scale modeling, enables the quantitative validation of mean-field approaches by spiking network simulations, and provides an increase in reliability by usage of the same simulation code and the same network model specifications for both model classes. While most spiking simulations rely on the communication of discrete events, rate models require time-continuous interactions between neurons. Exploiting the conceptual similarity to the inclusion of gap junctions in spiking network simulations, we arrive at a reference implementation of instantaneous and delayed interactions between rate-based models in a spiking network simulator. The separation of rate dynamics from the general connection and communication infrastructure ensures flexibility of the framework. In addition to the standard implementation we present an iterative approach based on waveform-relaxation techniques to reduce communication and increase performance for large-scale simulations of rate-based models with instantaneous interactions. Finally we demonstrate the broad applicability of the framework by considering various examples from the literature, ranging from random networks to neural-field models. The study provides the prerequisite for interactions between rate-based and spiking models in a joint simulation.
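The time-continuous rate dynamics that such a framework must integrate alongside spiking events can be illustrated with a plain explicit-Euler loop. The model form tau·dr/dt = -r + tanh(Wr) and all parameters below are a generic choice for illustration, not the simulator's actual units or solver.

```python
import math

def simulate_rates(W, steps=400, dt=0.1, tau=1.0):
    """Explicit-Euler integration of the rate dynamics
    tau * dr/dt = -r + tanh(W r), the kind of time-continuous unit the
    paper couples to event-based spiking neurons. W is a coupling matrix
    given as a list of lists."""
    n = len(W)
    r = [0.1] * n
    for _ in range(steps):
        drive = [math.tanh(sum(W[i][j] * r[j] for j in range(n)))
                 for i in range(n)]
        r = [r[i] + (dt / tau) * (-r[i] + drive[i]) for i in range(n)]
    return r

# Weak symmetric coupling: the only fixed point is r = 0, so activity decays.
rates = simulate_rates([[0.0, 0.5], [0.5, 0.0]])
```

The waveform-relaxation idea the abstract mentions amounts to iterating such integrations over a communication interval, exchanging whole rate waveforms between processes instead of values at every step.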
A Continuous-Time Delta-Sigma ADC for Portable Ultrasound Scanners
DEFF Research Database (Denmark)
Llimos Muntal, Pere; Jørgensen, Ivan Harald Holger; Bruun, Erik
2017-01-01
A fully diﬀerential fourth-order 1-bit continuous-time delta-sigma ADC designed in a 65nm process for portable ultrasound scanners is presented in this paper. The circuit design, implementation and measurements on the fabricated die are shown. The loop ﬁlter consists of RC-integrators, programmable...... capacitor arrays, resistors and voltage feedback DACs. The quantizer contains a pulse generator, a high-speed clocked comparator and a pull-down clocked latch to ensure constant delay in the feedback loop. Using this implementation, a small and low-power solution required for portable ultrasound scanner...... applications is achieved. The converter has a supply voltage of 1.2V, a bandwidth of 10MHz and an oversampling ratio of 16 leading to an operating frequency of 320MHz. The design occupies a die area of 0.0175mm2. Simulations with extracted parasitics show a SNR of 45.2dB and a current consumption of 489 µ...
A low power CMOS 3.3 Gbps continuous-time adaptive equalizer for serial link
International Nuclear Information System (INIS)
Ju Hao; Zhou Yumei; Zhao Jianzhong
2011-01-01
This paper describes using a high-speed continuous-time analog adaptive equalizer as the front-end of a receiver for a high-speed serial interface, which is compliant with many serial communication specifications such as USB2.0, PCI-E2.0 and Rapid IO. The low and high frequency loops are merged to decrease the effect of delay between the two paths, in addition, the infinite input impedance facilitates the cascade stages in order to improve the high frequency boosting gain. The implemented circuit architecture could facilitate the wide frequency range from 1 to 3.3 Gbps with different length FR4-PCB traces, which brings as much as 25 dB loss. The replica control circuits are injected to provide a convenient way to regulate common-mode voltage for full differential operation. In addition, AC coupling is adopted to suppress the common input from the forward stage. A prototype chip was fabricated in 0.18-μm 1P6M mixed-signal CMOS technology. The actual area is 0.6 × 0.57 mm² and the analog equalizer operates up to 3.3 Gbps over FR4-PCB trace with 25 dB loss. The overall power dissipation is approximately 23.4 mW. (semiconductor integrated circuits)
A low power CMOS 3.3 Gbps continuous-time adaptive equalizer for serial link
Hao, Ju; Yumei, Zhou; Jianzhong, Zhao
2011-09-01
This paper describes using a high-speed continuous-time analog adaptive equalizer as the front-end of a receiver for a high-speed serial interface, which is compliant with many serial communication specifications such as USB2.0, PCI-E2.0 and Rapid IO. The low and high frequency loops are merged to decrease the effect of delay between the two paths, in addition, the infinite input impedance facilitates the cascade stages in order to improve the high frequency boosting gain. The implemented circuit architecture could facilitate the wide frequency range from 1 to 3.3 Gbps with different length FR4-PCB traces, which brings as much as 25 dB loss. The replica control circuits are injected to provide a convenient way to regulate common-mode voltage for full differential operation. In addition, AC coupling is adopted to suppress the common input from the forward stage. A prototype chip was fabricated in 0.18-μm 1P6M mixed-signal CMOS technology. The actual area is 0.6 × 0.57 mm² and the analog equalizer operates up to 3.3 Gbps over FR4-PCB trace with 25 dB loss. The overall power dissipation is approximately 23.4 mW.
Integration of Continuous-Time Dynamics in a Spiking Neural Network Simulator.
Hahne, Jan; Dahmen, David; Schuecker, Jannis; Frommer, Andreas; Bolten, Matthias; Helias, Moritz; Diesmann, Markus
2017-01-01
Contemporary modeling approaches to the dynamics of neural networks include two important classes of models: biologically grounded spiking neuron models and functionally inspired rate-based units. We present a unified simulation framework that supports the combination of the two for multi-scale modeling, enables the quantitative validation of mean-field approaches by spiking network simulations, and provides an increase in reliability by usage of the same simulation code and the same network model specifications for both model classes. While most spiking simulations rely on the communication of discrete events, rate models require time-continuous interactions between neurons. Exploiting the conceptual similarity to the inclusion of gap junctions in spiking network simulations, we arrive at a reference implementation of instantaneous and delayed interactions between rate-based models in a spiking network simulator. The separation of rate dynamics from the general connection and communication infrastructure ensures flexibility of the framework. In addition to the standard implementation we present an iterative approach based on waveform-relaxation techniques to reduce communication and increase performance for large-scale simulations of rate-based models with instantaneous interactions. Finally we demonstrate the broad applicability of the framework by considering various examples from the literature, ranging from random networks to neural-field models. The study provides the prerequisite for interactions between rate-based and spiking models in a joint simulation.
Evaluating Continuous-Time Slam Using a Predefined Trajectory Provided by a Robotic Arm
Koch, B.; Leblebici, R.; Martell, A.; Jörissen, S.; Schilling, K.; Nüchter, A.
2017-09-01
Recently published approaches to SLAM algorithms process laser sensor measurements and output a map as a point cloud of the environment. Often the actual precision of the map remains unclear, since SLAM algorithms apply local improvements to the resulting map. Unfortunately, it is not trivial to compare the performance of SLAM algorithms objectively, especially without an accurate ground truth. This paper presents a novel benchmarking technique that allows a precise map generated with an accurate ground-truth trajectory to be compared to a map whose trajectory was distorted by different forms of noise. The accurate ground truth is acquired by mounting a laser scanner on an industrial robotic arm. The robotic arm is moved on a predefined path while the position and orientation of the end-effector tool are monitored. During this process the 2D profile measurements of the laser scanner are recorded in six degrees of freedom and afterwards used to generate a precise point cloud of the test environment. For benchmarking, an offline continuous-time SLAM algorithm is subsequently applied to remove the inserted distortions. Finally, it is shown that the manipulated point cloud can be restored to its previous state and is even slightly improved compared to the original version, since small errors introduced by imprecise assumptions, sensor noise and calibration errors are removed as well.
A policy iteration approach to online optimal control of continuous-time constrained-input systems.
Modares, Hamidreza; Naghibi Sistani, Mohammad-Bagher; Lewis, Frank L
2013-09-01
This paper is an effort towards developing an online learning algorithm to find the optimal control solution for continuous-time (CT) systems subject to input constraints. The proposed method is based on the policy iteration (PI) technique which has recently evolved as a major technique for solving optimal control problems. Although a number of online PI algorithms have been developed for CT systems, none of them take into account the input constraints caused by actuator saturation. In practice, however, ignoring these constraints leads to performance degradation or even system instability. In this paper, to deal with the input constraints, a suitable nonquadratic functional is employed to encode the constraints into the optimization formulation. Then, the proposed PI algorithm is implemented on an actor-critic structure to solve the Hamilton-Jacobi-Bellman (HJB) equation associated with this nonquadratic cost functional in an online fashion. That is, two coupled neural network (NN) approximators, namely an actor and a critic are tuned online and simultaneously for approximating the associated HJB solution and computing the optimal control policy. The critic is used to evaluate the cost associated with the current policy, while the actor is used to find an improved policy based on information provided by the critic. Convergence to a close approximation of the HJB solution as well as stability of the proposed feedback control law are shown. Simulation results of the proposed method on a nonlinear CT system illustrate the effectiveness of the proposed approach. Copyright © 2013 ISA. All rights reserved.
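For concreteness, the nonquadratic integrand most often used in this line of work to encode a scalar saturation bound |u| ≤ λ, together with the saturated policy it induces, is shown below. This is the generic form from the constrained-input HJB literature; the paper's exact functional may differ in detail.

```latex
W(u) = 2\int_{0}^{u} \lambda \,\tanh^{-1}\!\left(\frac{v}{\lambda}\right) R \, dv,
\qquad
u^{*}(x) = -\lambda \tanh\!\left(\frac{1}{2\lambda}\, R^{-1} g(x)^{\top} \nabla V^{*}(x)\right).
```

Because the integrand's slope grows without bound as |u| approaches λ, minimising the cost automatically keeps the optimal control strictly inside the saturation limits, which is what lets the actor-critic scheme respect actuator constraints by construction.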
Well-posedness and accuracy of the ensemble Kalman filter in discrete and continuous time
International Nuclear Information System (INIS)
Kelly, D T B; Stuart, A M; Law, K J H
2014-01-01
The ensemble Kalman filter (EnKF) is a method for combining a dynamical model with data in a sequential fashion. Despite its widespread use, there has been little analysis of its theoretical properties. Many of the algorithmic innovations associated with the filter, which are required to make a usable algorithm in practice, are derived in an ad hoc fashion. The aim of this paper is to initiate the development of a systematic analysis of the EnKF, in particular to do so for small ensemble size. The perspective is to view the method as a state estimator, and not as an algorithm which approximates the true filtering distribution. The perturbed observation version of the algorithm is studied, without and with variance inflation. Without variance inflation, well-posedness of the filter is established; with variance inflation, accuracy of the filter, with respect to the true signal underlying the data, is established. The algorithm is considered in discrete time, and also for a continuous time limit arising when observations are frequent and subject to large noise. The underlying dynamical model, and the assumptions about it, are sufficiently general to include the Lorenz '63 and '96 models, together with the incompressible Navier–Stokes equation on a two-dimensional torus. The analysis is limited to the case of complete observation of the signal with additive white noise. Numerical results are presented for the Navier–Stokes equation on a two-dimensional torus for both complete and partial observations of the signal with additive white noise. (paper)
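One analysis step of the perturbed-observation variant studied here can be written compactly in NumPy. This is the textbook update without the variance inflation the paper analyses; the synthetic 1D example at the end is illustrative only.

```python
import numpy as np

def enkf_analysis(ensemble, y, H, R, rng):
    """One perturbed-observation EnKF analysis step.
    ensemble: (N, d) array of state members; y: length-k observation;
    H: (k, d) linear observation operator; R: (k, k) noise covariance."""
    N = ensemble.shape[0]
    anomalies = ensemble - ensemble.mean(axis=0)
    P = anomalies.T @ anomalies / (N - 1)          # sample covariance
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    perturbed = y + rng.multivariate_normal(np.zeros(len(y)), R, size=N)
    innovations = perturbed - ensemble @ H.T       # per-member innovations
    return ensemble + innovations @ K.T

rng = np.random.default_rng(0)
prior = rng.normal(0.0, 2.0, size=(200, 1))        # prior ensemble ~ N(0, 4)
posterior = enkf_analysis(prior, np.array([3.0]), np.array([[1.0]]),
                          np.array([[0.5]]), rng)
```

With prior variance 4 and observation noise 0.5, the gain is near 0.9, so the posterior ensemble mean moves most of the way toward the observation at 3 while the spread contracts — the state-estimator behaviour whose well-posedness the paper establishes.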
Scott, John W; Nyinawankusi, Jeanne D'Arc; Enumah, Samuel; Maine, Rebecca; Uwitonze, Eric; Hu, Yihan; Kabagema, Ignace; Byiringiro, Jean Claude; Riviello, Robert; Jayaraman, Sudha
2017-07-01
Injury is a major cause of premature death and disability in East Africa, and high-quality pre-hospital care is essential for optimal trauma outcomes. The Rwandan pre-hospital emergency care service (SAMU) uses an electronic database to evaluate and optimize pre-hospital care through a continuous quality improvement programme (CQIP), beginning March 2014. The SAMU database was used to assess pre-hospital quality metrics including supplementary oxygen for hypoxia (O2), intravenous fluids for hypotension (IVF), cervical collar placement for head injuries (c-collar), and either splinting (splint) or administration of pain medications (pain) for long bone fractures. Targets of >90% were set for each metric and daily team meetings and monthly feedback sessions were implemented to address opportunities for improvement. These five pre-hospital quality metrics were assessed monthly before and after implementation of the CQIP. Met and unmet needs for O2, IVF, and c-collar were combined into a summative monthly SAMU Trauma Quality Scores (STQ score). An interrupted time series linear regression model compared the STQ score during 14 months before the CQIP implementation to the first 14 months after. During the 29-month study period 3,822 patients met study criteria. 1,028 patients needed one or more of the five studied interventions during the study period. All five endpoints had a significant increase between the pre-CQI and post-CQI periods (pRwanda. This programme may be used as an example for additional efforts engaging frontline staff with real-time data feedback in order to rapidly translate data collection efforts into improved care for the injured in a resource-limited setting. Copyright © 2017 Elsevier Ltd. All rights reserved.
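The interrupted time series design mentioned above fits a level and a slope before the intervention, plus a level change and slope change after it. The sketch below is the standard segmented-regression design matrix solved by ordinary least squares; it illustrates the method only and is not SAMU's analysis code.

```python
import numpy as np

def its_fit(y, intervention):
    """Least-squares fit of the standard interrupted-time-series model
    y_t = b0 + b1*t + b2*post_t + b3*(t - T)*post_t, where post_t flags
    periods at or after the intervention time T. Returns (b0, b1, b2, b3):
    baseline level, baseline trend, level change, and trend change."""
    t = np.arange(len(y), dtype=float)
    post = (t >= intervention).astype(float)
    X = np.column_stack([np.ones_like(t), t, post, (t - intervention) * post])
    beta, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    return beta

# Smoke test: 14 flat pre-intervention months at level 1, then a +2 jump.
beta = its_fit([1.0] * 14 + [3.0] * 14, intervention=14)
```

Here `beta[2]` recovers the level change of +2, which in the study's setting would correspond to the jump in the monthly STQ score after the CQIP began.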
GPU-accelerated algorithms for many-particle continuous-time quantum walks
Piccinini, Enrico; Benedetti, Claudia; Siloi, Ilaria; Paris, Matteo G. A.; Bordone, Paolo
2017-06-01
Many-particle continuous-time quantum walks (CTQWs) represent a resource for several tasks in quantum technology, including quantum search algorithms and universal quantum computation. In order to design and implement CTQWs in a realistic scenario, one needs effective simulation tools for Hamiltonians that take into account static noise and fluctuations in the lattice, i.e. Hamiltonians containing stochastic terms. To this aim, we suggest a parallel algorithm based on the Taylor series expansion of the evolution operator, and compare its performance with that of algorithms based on the exact diagonalization of the Hamiltonian or a 4th-order Runge–Kutta integration. We prove that both the Taylor-series expansion and Runge–Kutta algorithms are reliable and have a low computational cost, the Taylor-series expansion showing the additional advantage of a memory allocation that does not depend on the precision of calculation. Both algorithms are also highly parallelizable within the SIMT paradigm, and are thus suitable for GPGPU computing. We have benchmarked 4 NVIDIA GPUs and 3 quad-core Intel CPUs for a 2-particle system over lattices of increasing dimension, showing that the speedup provided by GPU computing, with respect to the OpenMP parallelization, lies in the range between 8x and (more than) 20x, depending on the frequency of post-processing. GPU-accelerated codes thus allow one to overcome concerns about execution time, and make simulations with many interacting particles on large lattices possible, with the only limit being the memory available on the device.
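The Taylor-expansion propagator at the heart of the algorithm builds exp(-iHt)|ψ₀⟩ term by term, needing only repeated matrix-vector products — which is what makes it GPU-friendly. The plain NumPy sketch below is sequential and single-particle (a 4-site ring), purely to show the recurrence; the paper's implementation handles two particles, stochastic Hamiltonians, and CUDA parallelism.

```python
import numpy as np

def ctqw_taylor(H, psi0, t, terms=40):
    """Propagate |psi(t)> = exp(-iHt)|psi0> by a truncated Taylor series,
    building the k-th term iteratively as (-i t / k) * H @ (previous term).
    Adequate while ||H||*t is modest; larger times should be sub-stepped."""
    psi = psi0.astype(complex).copy()
    term = psi0.astype(complex).copy()
    for k in range(1, terms):
        term = (-1j * t / k) * (H @ term)
        psi += term
    return psi

# 4-site ring (adjacency matrix as hopping Hamiltonian), walker at site 0.
H = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
psi0 = np.zeros(4); psi0[0] = 1.0
psi = ctqw_taylor(H, psi0, t=1.0)
```

A converged series must reproduce a unitary evolution, so checking that the state norm stays 1 (and that the ring symmetry between sites 1 and 3 is preserved) is a cheap correctness test.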
"I had a good time when I was young": Interpreting descriptions of continuity among older people.
Breheny, Mary; Griffiths, Zoë
2017-04-01
Messages describing how best to age are prominent in gerontological theory, research and the media. These prescriptions for ageing may foster positive experiences in later life; however, they may also obscure the social and situated nature of expectations for ageing well. Continuity Theory proposes ageing well is achieved through continuity of activity and stability of relationships and identity over the life course. Continuity seems adaptive, yet prioritising continuity may not match the expectations, desires and realities of older people. To understand continuity among older people, the present study used interpretative phenomenological analysis (IPA) to analyse transcripts from eleven participants over the age of 79 years. Continuity was important for older people in this study, who described a range of practices that supported internal and external continuity. Participants acknowledged both positive and negative changes in roles and obligations as they aged which impacted on continuity of identity. Continuity of identity was linked both to being 'just like always' and 'just like everyone else'. Examining these accounts shows how they are tied to expectations that older people should both maintain earlier patterns of behaviour while also negotiating changing social expectations for behaviour that are linked to age. These tensions point to the balance between physical, environmental and interpersonal change and the negotiation of social expectations which together structure possibilities for ageing well. Copyright © 2017 Elsevier Inc. All rights reserved.
Probabilistic cellular automata: Some statistical mechanical considerations
International Nuclear Information System (INIS)
Lebowitz, J.L.; Maes, C.; Speer, E.R.
1990-01-01
Spin systems evolving in continuous or discrete time under the action of stochastic dynamics are used to model phenomena as diverse as the structure of alloys and the functioning of neural networks. While in some cases the dynamics are secondary, designed to produce a specific stationary measure whose properties one is interested in studying, there are other cases in which the only available information is the dynamical rule. Prime examples of the former are computer simulations, via Glauber dynamics, of equilibrium Gibbs measures with a specified interaction potential. Examples of the latter include various types of majority rule dynamics used as models for pattern recognition and for error-tolerant computations. The present note discusses ways in which techniques found useful in equilibrium statistical mechanics can be applied to a particular class of models of the latter types. These are cellular automata with noise: systems in which the spins are updated stochastically at integer times, simultaneously at all sites of some regular lattice. These models were first investigated in detail in the Soviet literature of the late sixties and early seventies. They are now generally referred to as Stochastic or Probabilistic Cellular Automata (PCA), and may be considered to include deterministic automata (CA) as special limits. 16 refs., 3 figs
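The synchronous stochastic update that defines a PCA can be illustrated with a noisy majority rule on a one-dimensional spin chain; the lattice size and noise level below are illustrative, not taken from the note:

```python
import numpy as np

def pca_step(spins, noise, rng):
    """Synchronous update of a 1D +/-1 spin chain: each site takes the
    majority sign of its 3-site neighbourhood (periodic boundaries),
    then flips independently with probability `noise`."""
    left, right = np.roll(spins, 1), np.roll(spins, -1)
    majority = np.sign(left + spins + right)  # never 0 for an odd neighbourhood
    flips = rng.random(spins.size) < noise
    return np.where(flips, -majority, majority)

rng = np.random.default_rng(1)
spins = rng.choice([-1, 1], size=32)
for _ in range(20):
    spins = pca_step(spins, noise=0.05, rng=rng)
```

With `noise=0` this reduces to a deterministic cellular automaton, matching the remark that CA arise as special limits of PCA.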
Probabilistic analysis of tokamak plasma disruptions
International Nuclear Information System (INIS)
Sanzo, D.L.; Apostolakis, G.E.
1985-01-01
An approximate analytical solution to the heat conduction equations used in modeling component melting and vaporization resulting from plasma disruptions is presented. This solution is then used to propagate uncertainties in the input data characterizing disruptions, namely, energy density and disruption time, to obtain a probabilistic description of the output variables of interest, material melted and vaporized. (orig.)
Some probabilistic properties of fractional point processes
Garra, Roberto; Orsingher, Enzo; Scavino, Marco
2017-01-01
The probabilities P{T_k^(α) < ∞} are explicitly obtained and analyzed. The processes N^f(t) are time-changed Poisson processes N(H^f(t)) with subordinators H^f(t); here we study N(Σ_{j=1}^n H^{f_j}(t)) and obtain probabilistic features
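A time-changed Poisson process N(H(t)) can be simulated by sampling the subordinator first and then the Poisson count. The paper's subordinators H^f(t) depend on the Bernstein function f; as a hedged stand-in, the sketch below uses a gamma subordinator with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(4)
lam, t = 3.0, 2.0      # Poisson rate and observation time (illustrative)
a, b = 1.5, 0.8        # gamma subordinator shape rate and scale (illustrative)

H = rng.gamma(shape=a * t, scale=b, size=100_000)  # samples of H(t)
N = rng.poisson(lam * H)                           # time-changed counts N(H(t))

# By conditioning, E[N(H(t))] = lam * E[H(t)] = lam * a * t * b
print(N.mean(), lam * a * t * b)  # both approximately 7.2
```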
Application of probabilistic precipitation forecasts from a ...
African Journals Online (AJOL)
Application of probabilistic precipitation forecasts from a deterministic model towards increasing the lead-time of flash flood forecasts in South Africa. ... The procedure is applied to a real flash flood event and the ensemble-based rainfall forecasts are verified against rainfall estimated by the SAFFG system. The approach ...
Directory of Open Access Journals (Sweden)
Elodie Magnanou
2009-06-01
Full Text Available Laboratory conditions nullify the extrinsic factors that determine the expected lifespan in the wild and release the intrinsic or potential lifespan. Thus, wild animals reared in a laboratory often show an increased lifespan, and consequently an extended senescence phase. Senescence is associated with a broad suite of physiological changes, including a decreased responsiveness of the circadian system. The time-keeping hormone melatonin, an important chemical player in this system, is suspected to have an anti-aging role. The Greater White-toothed shrew Crocidura russula is an ideal study model to address questions related to aging and associated changes in biological functions: its lifespan is short and is substantially increased in captivity; daily and seasonal rhythms, while very marked in the first year of life, are dramatically altered during the senescence process, which starts during the second year. Here we report on an investigation of the effects of melatonin administration on the locomotor activity of aging shrews. (1) The diel fluctuations of melatonin levels in young, adult and aging shrews were quantified in the pineal gland and plasma. In both, a marked diel rhythm (low diurnal concentration; high nocturnal concentration) was present in young animals but then decreased in adults and, as a result of a loss in nocturnal production, was absent in old animals. (2) The daily locomotor activity rhythm was monitored in pre-senescent animals that had received either a subcutaneous melatonin implant, an empty implant or no implant at all. In non-implanted and sham-implanted shrews, the rhythm was well marked in adults. A marked degradation in both period and amplitude, however, started after the age of 14-16 months. This pattern was considerably delayed in melatonin-implanted shrews, which maintained the daily rhythm for significantly longer. This is the first long-term study (>500 days of observation of the same individuals) that investigates the effects of
Bugaresti, J M; Tator, C H; Szalai, J P
1991-06-01
The present study was conducted to determine whether automated, continuous turning beds would reduce the nursing care time for spinal cord injured (SCI) patients by freeing hospital staff from manual turning of patients every 2 hours. Seventeen patients were randomly assigned to continuous or intermittent turning and were observed during the 8-hour shift for 1 to 18 days following injury. Trained observers recorded the time taken for patient contact activities performed by the nursing staff (direct nursing care) and other hospital staff. The mean direct nursing care time per dayshift per patient was 130 +/- 22 (mean +/- SD) minutes for 9 patients managed with continuous turning and 115 +/- 41 (mean +/- SD) minutes for 8 patients managed with intermittent turning. The observed difference in care time between the two treatment groups was not significant (p > 0.05). Numerous factors including neurological level, time following injury, and medical complications appeared to affect the direct nursing care time. Although continuous turning did not reduce nursing care time, it offered major advantages for the treatment of selected cases of acute SCI. Spinal alignment was easier to maintain during continuous turning in patients with injuries of the cervical spine. Continuous turning allowed radiological procedures on the spine, chest and abdomen to be more easily performed without having to alter the patients' position in bed. Therapy and nursing staff indicated that the continuous turning bed facilitated patient positioning for such activities as chest physiotherapy. With continuous turning, one nurse was sufficient to provide care for an individual SCI patient without having to rely on the assistance of other nurses on the ward for patient turning every 2 hours.
Probabilistic safety assessment in nuclear power plant management
International Nuclear Information System (INIS)
Holloway, N.J.
1989-06-01
Probabilistic Safety Assessment (PSA) techniques have been widely used over the past few years to assist in understanding how engineered systems respond to abnormal conditions, particularly during a severe accident. The use of PSAs in the design and operation of such systems thus contributes to the safety of nuclear power plants. Probabilistic safety assessments can be maintained to provide a continuous up-to-date assessment (Living PSA), supporting the management of plant operations and modifications
Probabilistic assessment of faults
International Nuclear Information System (INIS)
Foden, R.W.
1987-01-01
Probabilistic safety analysis (PSA) is the process by which the probability (or frequency of occurrence) of reactor fault conditions which could lead to unacceptable consequences is assessed. The basic objective of a PSA is to allow a judgement to be made as to whether or not the principal probabilistic requirement is satisfied. It also gives insights into the reliability of the plant which can be used to identify possible improvements. This is explained in the article. The scope of a PSA and the PSA performed by the National Nuclear Corporation (NNC) for the Heysham II and Torness AGRs and Sizewell-B PWR are discussed. The NNC methods for hazards, common cause failure and operator error are mentioned. (UK)
Modelling and real-time simulation of continuous-discrete systems in mechatronics
Energy Technology Data Exchange (ETDEWEB)
Lindow, H. [Rostocker, Magdeburg (Germany)
1996-12-31
This work presents a methodology for the simulation and modelling of systems with continuous-discrete dynamics. It derives hybrid discrete event models from Lagrange's equations of motion. The method combines continuous mechanical, electrical and thermodynamical submodels on the one hand with discrete event models on the other into a hybrid discrete event model. This straightforward software development avoids numeric overhead.
2016-06-01
This paper develops a microeconomic theory-based multiple discrete continuous choice model that considers: (a) that both goods consumption and time allocations (to work and non-work activities) enter separately as decision variables in the utility fu...
Probabilistic Model Development
Adam, James H., Jr.
2010-01-01
Objective: Develop a Probabilistic Model for the Solar Energetic Particle Environment. Develop a tool to provide a reference solar particle radiation environment that: 1) will not be exceeded at a user-specified confidence level; and 2) will provide reference environments for: a) peak flux; b) event-integrated fluence; and c) mission-integrated fluence. The reference environments will consist of elemental energy spectra for protons, helium and heavier ions.
Geothermal probabilistic cost study
Energy Technology Data Exchange (ETDEWEB)
Orren, L.H.; Ziman, G.M.; Jones, S.C.; Lee, T.K.; Noll, R.; Wilde, L.; Sadanand, V.
1981-08-01
A tool is presented to quantify the risks of geothermal projects, the Geothermal Probabilistic Cost Model (GPCM). The GPCM model is used to evaluate a geothermal reservoir for a binary-cycle electric plant at Heber, California. Three institutional aspects of the geothermal risk which can shift the risk among different agents are analyzed. The leasing of geothermal land, contracting between the producer and the user of the geothermal heat, and insurance against faulty performance are examined. (MHR)
Probabilistic approaches to recommendations
Barbieri, Nicola; Ritacco, Ettore
2014-01-01
The importance of accurate recommender systems has been widely recognized by academia and industry, and recommendation is rapidly becoming one of the most successful applications of data mining and machine learning. Understanding and predicting the choices and preferences of users is a challenging task: real-world scenarios involve users behaving in complex situations, where prior beliefs, specific tendencies, and reciprocal influences jointly contribute to determining the preferences of users toward huge amounts of information, services, and products. Probabilistic modeling represents a robus
Probabilistic liver atlas construction.
Dura, Esther; Domingo, Juan; Ayala, Guillermo; Marti-Bonmati, Luis; Goceri, E
2017-01-13
Anatomical atlases are 3D volumes or shapes representing an organ or structure of the human body. They contain either the prototypical shape of the object of interest together with other shapes representing its statistical variations (statistical atlas) or a probability map of belonging to the object (probabilistic atlas). Probabilistic atlases are mostly built with simple estimations only involving the data at each spatial location. A new method for probabilistic atlas construction that uses a generalized linear model is proposed. This method aims to improve the estimation of the probability to be covered by the liver. Furthermore, all methods to build an atlas involve previous coregistration of the sample of shapes available. The influence of the geometrical transformation adopted for registration in the quality of the final atlas has not been sufficiently investigated. The ability of an atlas to adapt to a new case is one of the most important quality criteria that should be taken into account. The presented experiments show that some methods for atlas construction are severely affected by the previous coregistration step. We show the good performance of the new approach. Furthermore, results suggest that extremely flexible registration methods are not always beneficial, since they can reduce the variability of the atlas and hence its ability to give sensible values of probability when used as an aid in segmentation of new cases.
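The classical voxelwise construction that the paper improves upon, i.e. the probability map as a per-voxel relative frequency over pre-registered binary masks, can be sketched as follows (random masks stand in for real coregistered liver segmentations):

```python
import numpy as np

# Hypothetical stand-in: n pre-registered binary liver masks on a common grid.
rng = np.random.default_rng(2)
masks = rng.random((10, 4, 4, 4)) < 0.5   # 10 subjects, 4x4x4 volume

# Classical probabilistic atlas: per-voxel relative frequency of coverage.
atlas = masks.mean(axis=0)

# Each voxel now holds an estimate of the probability of belonging to the organ.
print(atlas.min(), atlas.max())
```

The paper's contribution replaces this simple per-voxel estimate with a generalized linear model, which is not reproduced here.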
Belytschko, Ted; Wing, Kam Liu
1987-01-01
In the Probabilistic Finite Element Method (PFEM), finite element methods have been efficiently combined with second-order perturbation techniques to provide an effective method for informing the designer of the range of response which is likely in a given problem. The designer must provide as input the statistical character of the input variables, such as yield strength, load magnitude, and Young's modulus, by specifying their mean values and their variances. The output then consists of the mean response and the variance in the response. Thus the designer is given a much broader picture of the predicted performance than with simply a single response curve. These methods are applicable to a wide class of problems, provided that the scale of randomness is not too large and the probabilistic density functions possess decaying tails. By incorporating the computational techniques we have developed in the past 3 years for efficiency, the probabilistic finite element methods are capable of handling large systems with many sources of uncertainties. Sample results for an elastic-plastic ten-bar structure and an elastic-plastic plane continuum with a circular hole subject to cyclic loadings with the yield stress on the random field are given.
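The core second-order perturbation idea, propagating a mean and variance through a response function, can be sketched for a scalar toy response (the function and moments below are illustrative, not the PFEM structural examples):

```python
def perturb_stats(f, df, d2f, mu, var):
    """Second-order perturbation estimates for Y = f(X), with X of mean mu
    and variance var:  E[Y] ~ f(mu) + 0.5 f''(mu) var,  Var[Y] ~ (f'(mu))^2 var."""
    mean_y = f(mu) + 0.5 * d2f(mu) * var
    var_y = df(mu) ** 2 * var
    return mean_y, var_y

# Toy response y = x^2 (hypothetical), input mean 1.0 and variance 0.01
mean_y, var_y = perturb_stats(lambda x: x**2, lambda x: 2 * x,
                              lambda x: 2.0, 1.0, 0.01)
print(mean_y, var_y)  # 1.01 0.04
```

As in the PFEM description, the designer supplies input means and variances and reads off the mean response and its variance, rather than a single response curve.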
Toporov, Maria; Löhnert, Ulrich; Potthast, Roland; Cimini, Domenico; De Angelis, Francesco
2017-04-01
Short-term forecasts of current high-resolution numerical weather prediction models still have large deficits in forecasting the exact temporal and spatial location of severe, locally influenced weather such as summer-time convective storms or cool season lifted stratus or ground fog. Often, the thermodynamic instability - especially in the boundary layer - plays an essential role in the evolution of weather events. While the thermodynamic state of the atmosphere is well measured close to the surface (i.e. 2 m) by in-situ sensors and in the upper troposphere by satellite sounders, the planetary boundary layer remains a largely under-sampled region of the atmosphere where only sporadic information from radiosondes or aircraft observations is available. The major objective of the presented DWD-funded project ARON (Extramural Research Programme) is to overcome this observational gap and to design an optimized network of ground based microwave radiometers (MWR) and compact Differential Absorption Lidars (DIAL) for a continuous, near-real-time monitoring of temperature and humidity in the atmospheric boundary layer in order to monitor thermodynamic (in)stability. Previous studies showed, that microwave profilers are well suited for continuously monitoring the temporal development of atmospheric stability (i.e. Cimini et al., 2015) before the initiation of deep convection, especially in the atmospheric boundary layer. However, the vertical resolution of microwave temperature profiles is best in the lowest kilometer above the surface, decreasing rapidly with increasing height. In addition, humidity profile retrievals typically cannot be resolved with more than two degrees of freedom for signal, resulting in a rather poor vertical resolution throughout the troposphere. Typical stability indices used to assess the potential of convection rely on temperature and humidity values not only in the region of the boundary layer but also in the layers above. Therefore, satellite
Online probabilistic learning with an ensemble of forecasts
Thorey, Jean; Mallet, Vivien; Chaussin, Christophe
2016-04-01
Our objective is to produce a calibrated weighted ensemble to forecast a univariate time series. In addition to a meteorological ensemble of forecasts, we rely on observations or analyses of the target variable. The celebrated Continuous Ranked Probability Score (CRPS) is used to evaluate the probabilistic forecasts. However applying the CRPS on weighted empirical distribution functions (deriving from the weighted ensemble) may introduce a bias because of which minimizing the CRPS does not produce the optimal weights. Thus we propose an unbiased version of the CRPS which relies on clusters of members and is strictly proper. We adapt online learning methods for the minimization of the CRPS. These methods generate the weights associated to the members in the forecasted empirical distribution function. The weights are updated before each forecast step using only past observations and forecasts. Our learning algorithms provide the theoretical guarantee that, in the long run, the CRPS of the weighted forecasts is at least as good as the CRPS of any weighted ensemble with weights constant in time. In particular, the performance of our forecast is better than that of any subset ensemble with uniform weights. A noteworthy advantage of our algorithm is that it does not require any assumption on the distributions of the observations and forecasts, both for the application and for the theoretical guarantee to hold. As application example on meteorological forecasts for photovoltaic production integration, we show that our algorithm generates a calibrated probabilistic forecast, with significant performance improvements on probabilistic diagnostic tools (the CRPS, the reliability diagram and the rank histogram).
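The CRPS of a weighted ensemble, in its standard kernel form CRPS = E|X - y| - 0.5 E|X - X'|, can be computed directly; note the paper's point is precisely that this naive weighted form can be biased, and its unbiased cluster-based variant is not reproduced here:

```python
import numpy as np

def crps(members, y, weights=None):
    """CRPS of an ensemble forecast against observation y, using the kernel
    form CRPS = E|X - y| - 0.5 E|X - X'| with member weights (uniform by default)."""
    x = np.asarray(members, dtype=float)
    w = np.full(x.size, 1.0 / x.size) if weights is None else np.asarray(weights)
    term1 = np.sum(w * np.abs(x - y))
    term2 = 0.5 * np.sum(w[:, None] * w[None, :] * np.abs(x[:, None] - x[None, :]))
    return term1 - term2

print(crps([2.0], 5.0))  # single member: reduces to |2 - 5| = 3.0
```

Lower CRPS is better, and the score is strictly proper for the empirical distribution, which is what makes it usable as the loss in the online weight-learning scheme the abstract describes.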
Directory of Open Access Journals (Sweden)
François Niragire
2017-05-01
Full Text Available Child survival programmes are efficient when they target the most significant and area-specific factors. This study aimed to assess the key determinants and spatial variation of child mortality at the district level in Rwanda. Data from the 2010 Rwanda Demographic and Health Survey were analysed for 8817 live births that occurred during five years preceding the survey. Out of the children born, 433 had died before survey interviews were carried out. A full Bayesian geo-additive continuous-time hazard model enabled us to maximise data utilisation and hence improve the accuracy of our estimates. The results showed substantial district- level spatial variation in childhood mortality in Rwanda. District-specific spatial characteristics were particularly associated with higher death hazards in two districts: Musanze and Nyabihu. The model estimates showed that there were lower death rates among children from households of medium and high economic status compared to those from low-economic status households. Factors, such as four antenatal care visits, delivery at a health facility, prolonged breastfeeding and mothers younger than 31 years were associated with lower child death rates. Long preceding birth intervals were also associated with fewer hazards. For these reasons, programmes aimed at reducing child mortality gaps between districts in Rwanda should target maternal factors and take into consideration district-specific spatial characteristics. Further, child survival gains require strengthening or scaling-up of existing programmes pertaining to access to, and utilisation of maternal and child health care services as well as reduction of the household gap in the economic status.
Niragire, François; Achia, Thomas N O; Lyambabaje, Alexandre; Ntaganira, Joseph
2017-05-11
Child survival programmes are efficient when they target the most significant and area-specific factors. This study aimed to assess the key determinants and spatial variation of child mortality at the district level in Rwanda. Data from the 2010 Rwanda Demographic and Health Survey were analysed for 8817 live births that occurred during five years preceding the survey. Out of the children born, 433 had died before survey interviews were carried out. A full Bayesian geo-additive continuous-time hazard model enabled us to maximise data utilisation and hence improve the accuracy of our estimates. The results showed substantial district- level spatial variation in childhood mortality in Rwanda. District-specific spatial characteristics were particularly associated with higher death hazards in two districts: Musanze and Nyabihu. The model estimates showed that there were lower death rates among children from households of medium and high economic status compared to those from low-economic status households. Factors, such as four antenatal care visits, delivery at a health facility, prolonged breastfeeding and mothers younger than 31 years were associated with lower child death rates. Long preceding birth intervals were also associated with fewer hazards. For these reasons, programmes aimed at reducing child mortality gaps between districts in Rwanda should target maternal factors and take into consideration district-specific spatial characteristics. Further, child survival gains require strengthening or scaling-up of existing programmes pertaining to access to, and utilisation of maternal and child health care services as well as reduction of the household gap in the economic status.
Directory of Open Access Journals (Sweden)
Botond Molnár
Full Text Available There has been a long history of using neural networks for combinatorial optimization and constraint satisfaction problems. Symmetric Hopfield networks and similar approaches use steepest descent dynamics, and they always converge to the closest local minimum of the energy landscape. For finding global minima additional parameter-sensitive techniques are used, such as classical simulated annealing or the so-called chaotic simulated annealing, which induces chaotic dynamics by addition of extra terms to the energy landscape. Here we show that asymmetric continuous-time neural networks can solve constraint satisfaction problems without getting trapped in non-solution attractors. We concentrate on a model solving Boolean satisfiability (k-SAT), which is a quintessential NP-complete problem. There is a one-to-one correspondence between the stable fixed points of the neural network and the k-SAT solutions, and we present numerical evidence that limit cycles may also be avoided by appropriately choosing the parameters of the model. This optimal parameter region is fairly independent of the size and hardness of instances, so parameters can be chosen independently of the properties of problems and no tuning is required during the dynamical process. The model is similar to cellular neural networks already used in CNN computers. On an analog device, solving a SAT problem would take a single operation: the connection weights are determined by the k-SAT instance, and starting from any initial condition the system searches until finding a solution. In this new approach, transient chaotic behavior appears as a natural consequence of optimization hardness and not as an externally induced effect.
Probabilistic Tsunami Hazard Analysis
Thio, H. K.; Ichinose, G. A.; Somerville, P. G.; Polet, J.
2006-12-01
The recent tsunami disaster caused by the 2004 Sumatra-Andaman earthquake has focused our attention on the hazard posed by large earthquakes that occur under water, in particular subduction zone earthquakes, and the tsunamis that they generate. Even though these kinds of events are rare, the very large loss of life and material destruction caused by this earthquake warrant a significant effort towards the mitigation of the tsunami hazard. For ground motion hazard, Probabilistic Seismic Hazard Analysis (PSHA) has become a standard practice in the evaluation and mitigation of seismic hazard to populations, in particular with respect to structures, infrastructure and lifelines. Its ability to condense the complexities and variability of seismic activity into a manageable set of parameters greatly facilitates not only the design of effective seismic-resistant buildings but also the planning of infrastructure projects. Probabilistic Tsunami Hazard Analysis (PTHA) achieves the same goal for hazards posed by tsunami. There are great advantages to implementing such a method to evaluate the total risk (seismic and tsunami) to coastal communities. The method that we have developed is based on the traditional PSHA and is therefore completely consistent with standard seismic practice. Because of the strong dependence of tsunami wave heights on bathymetry, we use full tsunami waveform computations in lieu of the attenuation relations that are common in PSHA. By pre-computing and storing the tsunami waveforms at points along the coast generated for sets of subfaults that comprise larger earthquake faults, we can efficiently synthesize tsunami waveforms for any slip distribution on those faults by summing the individual subfault tsunami waveforms (weighted by their slip). This efficiency makes it feasible to use Green's function summation in lieu of attenuation relations to provide very accurate estimates of tsunami height for probabilistic calculations, where one typically computes
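The Green's function summation step can be sketched as a slip-weighted superposition of pre-computed unit-slip waveforms (the numbers below are purely illustrative):

```python
import numpy as np

# Hypothetical pre-computed unit-slip tsunami waveforms: one time series per
# subfault at a single coastal point (rows: subfaults, columns: time samples).
unit_waveforms = np.array([[0.0, 0.2, 0.5, 0.3],
                           [0.0, 0.1, 0.4, 0.6],
                           [0.0, 0.0, 0.2, 0.5]])

slip = np.array([2.0, 1.0, 0.5])  # slip on each subfault for one scenario

# Green's function summation: the scenario waveform is the slip-weighted sum
# of the unit waveforms, exploiting the linearity of the wave response.
waveform = slip @ unit_waveforms
print(waveform)
```

Because each scenario reduces to a single weighted sum over stored waveforms, probabilistic calculations over many slip distributions become cheap, which is the efficiency argument made in the abstract.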
Desvillettes, Laurent; Fellner, Klemens
2010-01-01
We study a continuous coagulation-fragmentation model with constant kernels for reacting polymers (see [M. Aizenman and T. Bak, Comm. Math. Phys., 65 (1979), pp. 203-230]). The polymers are set to diffuse within a smooth bounded one
Sari, Dwi Ivayana; Budayasa, I. Ketut; Juniati, Dwi
2017-08-01
Formulation of mathematical learning goals now is not only oriented on cognitive product, but also leads to cognitive process, which is probabilistic thinking. Probabilistic thinking is needed by students to make a decision. Elementary school students are required to develop probabilistic thinking as foundation to learn probability at higher level. A framework of probabilistic thinking of students had been developed by using SOLO taxonomy, which consists of prestructural probabilistic thinking, unistructural probabilistic thinking, multistructural probabilistic thinking and relational probabilistic thinking. This study aimed to analyze of probability task completion based on taxonomy of probabilistic thinking. The subjects were two students of fifth grade; boy and girl. Subjects were selected by giving test of mathematical ability and then based on high math ability. Subjects were given probability tasks consisting of sample space, probability of an event and probability comparison. The data analysis consisted of categorization, reduction, interpretation and conclusion. Credibility of data used time triangulation. The results was level of boy's probabilistic thinking in completing probability tasks indicated multistructural probabilistic thinking, while level of girl's probabilistic thinking in completing probability tasks indicated unistructural probabilistic thinking. The results indicated that level of boy's probabilistic thinking was higher than level of girl's probabilistic thinking. The results could contribute to curriculum developer in developing probability learning goals for elementary school students. Indeed, teachers could teach probability with regarding gender difference.
Some probabilistic aspects of fracture
International Nuclear Information System (INIS)
Thomas, J.M.
1982-01-01
Some probabilistic aspects of fracture in structural and mechanical components are examined. The principles of fracture mechanics, material quality and inspection uncertainty are formulated into a conceptual and analytical framework for prediction of failure probability. The role of probabilistic fracture mechanics in a more global context of risk and optimization of decisions is illustrated. An example, where Monte Carlo simulation was used to implement a probabilistic fracture mechanics analysis, is discussed. (orig.)
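A probabilistic fracture mechanics calculation of the kind implemented by Monte Carlo simulation can be sketched as sampling the inputs of the stress intensity factor K = Y·σ·√(πa) and counting exceedances of the toughness K_IC; all distributions and values below are illustrative assumptions, not data from the article:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

# Hypothetical input distributions (illustrative values only):
stress = rng.normal(100.0, 10.0, n)            # applied stress, MPa
crack = rng.lognormal(np.log(2e-2), 0.3, n)    # crack depth, m
k_ic = rng.normal(30.0, 5.0, n)                # fracture toughness, MPa*sqrt(m)

# Failure when the stress intensity factor K = Y*sigma*sqrt(pi*a) exceeds K_IC.
Y = 1.12  # geometry factor for a surface crack (illustrative)
K = Y * stress * np.sqrt(np.pi * crack)
p_fail = np.mean(K > k_ic)
print(p_fail)
```

The failure probability estimate is simply the fraction of sampled input combinations that violate the fracture criterion, which is the essence of the simulation-based approach the abstract mentions.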
Panattoni, Laura; Stone, Ashley; Chung, Sukyung; Tai-Seale, Ming
2015-03-01
The growing number of primary care physicians (PCPs) reducing their clinical work hours has raised concerns about meeting the future demand for services and fulfilling the continuity and access mandates for patient-centered care. However, the patient's experience of care with part-time physicians is relatively unknown, and may be mediated by continuity and access to care outcomes. We aimed to examine the relationships between a physicians' clinical full-time equivalent (FTE), continuity of care, access to care, and patient satisfaction with the physician. We used a multi-level structural equation estimation, with continuity and access modeled as mediators, for a cross-section in 2010. The study included family medicine (n = 104) and internal medicine (n = 101) physicians in a multi-specialty group practice, along with their patient satisfaction survey responses (n = 12,688). Physician level FTE, continuity of care received by patients, continuity of care provided by physician, and a Press Ganey patient satisfaction with the physician score, on a 0-100 % scale, were measured. Access to care was measured as days to the third next-available appointment. Physician FTE was directly associated with better continuity of care received (0.172% per FTE, p part-time PCPs in practice redesign efforts and initiatives to meet the demand for primary care services.
Time series analysis of continuous-wave coherent Doppler Lidar wind measurements
DEFF Research Database (Denmark)
Sjöholm, Mikael; Mikkelsen, Torben; Mann, Jakob
2008-01-01
The influence of spatial volume averaging of a focused 1.55 mu m continuous-wave coherent Doppler Lidar on observed wind turbulence measured in the atmospheric surface layer over homogeneous terrain is described and analysed. Comparison of Lidar-measured turbulent spectra with spectra simultaneou...
Synchronized Scheme of Continuous Space-Vector PWM with the Real-Time Control Algorithms
DEFF Research Database (Denmark)
Oleschuk, V.; Blaabjerg, Frede
2004-01-01
This paper describes in details the basic peculiarities of a new method of feedforward synchronous pulsewidth modulation (PWM) of three-phase voltage source inverters for adjustable speed ac drives. It is applied to a continuous scheme of voltage space vector modulation. The method is based...... their position inside clock-intervals. In order to provide smooth shock-less pulse-ratio changing and quarter-wave symmetry of the voltage waveforms, special synchronising signals are formed on the boundaries of the 60 clock-intervals. The process of gradual transition from continuous to discontinuous...
In My Own Time: Tuition Fees, Class Time and Student Effort in Non-Formal (Or Continuing) Education
Bolli, Thomas; Johnes, Geraint
2015-01-01
We develop and empirically test a model which examines the impact of changes in class time and tuition fees on student effort in the form of private study. The data come from the European Union's Adult Education Survey, conducted over the period 2005-2008. We find, in line with theoretical predictions, that the time students devote to private…
International Nuclear Information System (INIS)
Qiu-Ye, Sun; Hua-Guang, Zhang; Yan, Zhao
2010-01-01
This paper investigates the chaotification problem of a stable continuous-time T–S fuzzy system. A simple nonlinear state time-delay feedback controller is designed by the parallel distributed compensation technique. Then, the asymptotically approximate relationship between the controlled continuous-time T–S fuzzy system with time-delay and a discrete-time T–S fuzzy system is established. Based on the discrete-time T–S fuzzy system, it is proved that the chaos in the discrete-time T–S fuzzy system satisfies the Li–Yorke definition, by choosing appropriate controller parameters via the revised Marotto theorem. Finally, the effectiveness of the proposed chaotic anticontrol method is verified by a practical example. (general)
Probabilistic safety assessment
International Nuclear Information System (INIS)
Hoertner, H.; Schuetz, B.
1982-09-01
For the purpose of assessing the applicability and informativeness of risk-analysis methods in licensing procedures under atomic law, the choice of instruments for probabilistic analysis, the problems encountered and experience gained in their application, and the discussion of safety goals with respect to such instruments are of paramount significance. Naturally, such a complex field can only be dealt with step by step, making contributions on specific problems. The report at hand presents the essentials of a 'stocktaking' of systems reliability studies in the licensing procedure under atomic law and of an American report (NUREG-0739) on 'Quantitative Safety Goals'. (orig.) [de]
Probabilistic methods for physics
International Nuclear Information System (INIS)
Cirier, G
2013-01-01
We present an asymptotic method giving the probability of presence of the iterated points of R^d under a polynomial function f. We use the well-known Perron–Frobenius (PF) operator, which leaves certain sets and measures invariant under f. Probabilistic solutions can exist for the deterministic iteration. While the theoretical result is already known, here we quantify these probabilities. This approach seems interesting for computations in situations where deterministic methods fail. Among the applications examined are asymptotic solutions of the Lorenz, Navier–Stokes and Hamilton equations. In this approach, linearity induces many difficult problems, not all of which we have yet resolved.
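The abstract above concerns probabilities of presence for a deterministic iteration via the Perron–Frobenius operator, whose fixed points are the invariant densities of the map. A minimal illustrative sketch (not taken from the paper; the map, parameters, and function names here are hypothetical choices for illustration) estimates such an invariant density empirically, using the fact that for an ergodic map the histogram of a long orbit approximates the PF-invariant density:

```python
import numpy as np

# Hypothetical sketch: approximate the invariant density of a deterministic
# iteration x -> f(x). The Perron-Frobenius operator pushes densities forward
# under f; for an ergodic map, the normalized histogram of a long orbit
# converges to the invariant fixed-point density of that operator.
def orbit_histogram(f, x0, n_iter=100_000, n_burn=1_000, bins=50):
    x = x0
    for _ in range(n_burn):          # discard the transient part of the orbit
        x = f(x)
    samples = np.empty(n_iter)
    for i in range(n_iter):
        x = f(x)
        samples[i] = x
    # density=True normalizes so the histogram integrates to 1
    hist, edges = np.histogram(samples, bins=bins, range=(0.0, 1.0), density=True)
    return hist, edges

# Example map: the logistic map at r = 4, whose invariant density is known
# in closed form, 1 / (pi * sqrt(x * (1 - x))), peaking near 0 and 1.
f = lambda x: 4.0 * x * (1.0 - x)
hist, edges = orbit_histogram(f, x0=0.123)
```

The U-shape of `hist` (mass concentrated near the endpoints) matches the known closed-form density, which is the kind of quantitative information the PF approach provides when no analytic solution is available.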
Quantum probability for probabilists
Meyer, Paul-André
1993-01-01
In recent years, the classical theory of stochastic integration and stochastic differential equations has been extended to a non-commutative setting to develop models for quantum noises. The author, a specialist in classical stochastic calculus and martingale theory, provides an introduction to this rapidly expanding field in a way that should be accessible to probabilists familiar with the Itô integral. Conversely, it can also provide physicists familiar with Fock space analysis a means of access to the methods of stochastic calculus.
Integration of Probabilistic Exposure Assessment and Probabilistic Hazard Characterization
Voet, van der H.; Slob, W.
2007-01-01
A method is proposed for integrated probabilistic risk assessment where exposure assessment and hazard characterization are both included in a probabilistic way. The aim is to specify the probability that a random individual from a defined (sub)population will have an exposure high enough to cause a…