WorldWideScience

Sample records for time model based

  1. A unified model of time perception accounts for duration-based and beat-based timing mechanisms

    Directory of Open Access Journals (Sweden)

    Sundeep Teki

    2012-01-01

    Full Text Available Accurate timing is an integral aspect of sensory and motor processes such as the perception of speech and music and the execution of skilled movement. Neuropsychological studies of time perception in patient groups and functional neuroimaging studies of timing in normal participants suggest common neural substrates for perceptual and motor timing. A timing system is implicated in core regions of the motor network such as the cerebellum, inferior olive, basal ganglia, pre-supplementary and supplementary motor area, pre-motor cortex, and higher regions such as the prefrontal cortex. In this article, we assess how distinct parts of the timing system subserve different aspects of perceptual timing. We previously established brain bases for absolute, duration-based timing and relative, beat-based timing in the olivocerebellar and striato-thalamo-cortical circuits respectively (Teki et al., 2011). However, neurophysiological and neuroanatomical studies provide a basis to suggest that the timing functions of these circuits may not be independent. Here, we propose a unified model of time perception based on coordinated activity in the core striatal and olivocerebellar networks, which are interconnected with each other and the cerebral cortex …

  2. Real-time traffic signal optimization model based on average delay time per person

    Directory of Open Access Journals (Sweden)

    Pengpeng Jiao

    2015-10-01

    Full Text Available Real-time traffic signal control is very important for relieving urban traffic congestion. Many existing traffic control models are formulated using an optimization approach, with objective functions that minimize vehicle delay time. To improve people's trip efficiency, this article instead aims to minimize delay time per person. Based on time-varying traffic flow data at intersections, the article first fits curves of cumulative arriving and departing vehicles, together with the corresponding functions. It then converts vehicle delay time to personal delay time using the average passenger loads of cars and buses, employs this as the objective function, and proposes a signal timing optimization model for intersections that yields real-time signal parameters, including cycle length and green time. The research further implements a case study based on practical data collected at an intersection in Beijing, China. The average delay time per person and queue length are employed as evaluation indices to show the performance of the model. The results show that the proposed methodology is capable of improving traffic efficiency and is very effective for real-world applications.
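
    To make the objective concrete, here is a minimal sketch of the per-person delay idea: vehicle delays from a Webster-style uniform-delay term are weighted by assumed average passenger loads, and a grid search picks the cycle length and green split. All arrival rates, occupancies and the two-phase layout are invented for illustration, not taken from the paper.

```python
# Illustrative sketch (not the paper's model): convert vehicle delay to
# person delay with average passenger loads, then search cycle/green split.
# Arrival rates, saturation flow and occupancies are made-up numbers.
import numpy as np

ARRIVALS = {"cars": (0.30, 0.25), "buses": (0.02, 0.03)}  # veh/s per phase
OCCUPANCY = {"cars": 1.5, "buses": 25.0}                   # persons/vehicle
SAT_FLOW = 0.5                                             # veh/s when green
LOST_TIME = 8.0                                            # s per cycle

def uniform_delay(cycle, green, q):
    """Webster's uniform delay term for one movement (s/veh)."""
    x = min(q * cycle / (SAT_FLOW * green), 0.98)          # degree of saturation
    return 0.5 * cycle * (1 - green / cycle) ** 2 / (1 - x * green / cycle)

def person_delay(cycle, g1):
    g2 = cycle - LOST_TIME - g1
    if g1 <= 5 or g2 <= 5:
        return np.inf
    total_delay, total_people = 0.0, 0.0
    for mode, (q1, q2) in ARRIVALS.items():
        for q, g in ((q1, g1), (q2, g2)):
            people = q * OCCUPANCY[mode]                   # persons/s arriving
            total_delay += uniform_delay(cycle, g, q) * people
            total_people += people
    return total_delay / total_people                      # s per person

best = min(((c, g) for c in range(60, 121, 5) for g in range(10, 100, 2)
            if g < c - LOST_TIME), key=lambda cg: person_delay(*cg))
print("cycle %ds, phase-1 green %ds -> %.1f s/person"
      % (best[0], best[1], person_delay(*best)))
```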

  3. Reaction time for trimolecular reactions in compartment-based reaction-diffusion models

    Science.gov (United States)

    Li, Fei; Chen, Minghan; Erban, Radek; Cao, Yang

    2018-05-01

    Trimolecular reaction models are investigated in the compartment-based (lattice-based) framework for stochastic reaction-diffusion modeling. The formulae for the first collision time and the mean reaction time are derived for the case where three molecules are present in the solution under periodic boundary conditions. For the case of reflecting boundary conditions, similar formulae are obtained using a computer-assisted approach. The accuracy of these formulae is further verified through comparison with numerical results. The presented derivation is based on the first passage time analysis of Montroll [J. Math. Phys. 10, 753 (1969)]. Montroll's results for two-dimensional lattice-based random walks are adapted and applied to compartment-based models of trimolecular reactions, which are studied in one-dimensional or pseudo one-dimensional domains.
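
    As a rough illustration of the quantity these formulae describe, the following toy Monte Carlo estimates the mean first time at which three molecules, random-walking on a one-dimensional periodic lattice of compartments, occupy the same compartment. It is a simulation check of the first-collision time, not the authors' analytical derivation.

```python
# Toy Monte Carlo (not the paper's derivation): mean first time at which
# three molecules diffusing on a 1-D periodic lattice meet in one compartment.
import numpy as np

rng = np.random.default_rng(0)

def first_meeting_time(n_comp=8, jump_rate=1.0, max_steps=10**6):
    pos = rng.integers(0, n_comp, size=3)     # initial compartments
    t = 0.0
    for _ in range(max_steps):
        if pos[0] == pos[1] == pos[2]:
            return t
        # three molecules, each jumping left/right at rate `jump_rate`
        t += rng.exponential(1.0 / (3 * 2 * jump_rate))
        mol = rng.integers(0, 3)
        pos[mol] = (pos[mol] + rng.choice((-1, 1))) % n_comp
    raise RuntimeError("no meeting within max_steps")

samples = [first_meeting_time() for _ in range(2000)]
print("estimated mean first-collision time:", np.mean(samples))
```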

  4. Timing-based business models for flexibility creation in the electric power sector

    International Nuclear Information System (INIS)

    Helms, Thorsten; Loock, Moritz; Bohnsack, René

    2016-01-01

    Energy policies in many countries push for an increase in the generation of wind and solar power. Alongside these developments, the balance between supply and demand becomes more challenging, as the generation of wind and solar power is volatile, and flexibility of supply and demand becomes valuable. As a consequence, companies in the electric power sector develop new business models that create flexibility through activities of timing supply and demand. Based on an extensive qualitative analysis of interviews and industry research in the energy industry, the paper at hand explores the role of timing-based business models in the power sector and sheds light on the mechanisms of flexibility creation through timing. In particular, we distill four ideal-type business models of flexibility creation with timing and reveal how they can be classified along two dimensions, namely costs of multiplicity and intervention costs. We put forward that these business models offer ‘coupled services’, combining resource-centered and service-centered perspectives. This complementary character has important implications for energy policy. - Highlights: •Explores timing-based business models providing flexibility in the energy industry. •Timing-based business models can be classified on two dimensions. •Timing-based business models offer ‘coupled services’. •‘Coupled services’ couple timing as a service with supply- or demand-side valuables. •Policy and managerial implications for energy market design.

  5. A travel time forecasting model based on change-point detection method

    Science.gov (United States)

    LI, Shupeng; GUANG, Xiaoping; QIAN, Yongsheng; ZENG, Junwei

    2017-06-01

    Travel time parameters obtained from road traffic sensor data play an important role in traffic management practice. In this paper, a travel time forecasting model based on the change-point detection method is proposed for urban road traffic sensor data. A first-order differencing operation is used to preprocess the actual loop data; a change-point detection algorithm is designed to classify the long sequence of travel time data items into several patterns; a travel time forecasting model is then established based on the autoregressive integrated moving average (ARIMA) model. In computer simulations, different control parameters are chosen for the adaptive change-point search over the travel time series, which is divided into several sections of similar state. A linear weight function is then used to fit the travel time sequence and to forecast travel time. The results show that the model has high accuracy in travel time forecasting.
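
    A compressed sketch of such a pipeline is shown below: difference the series, segment it with a deliberately naive threshold change-point detector (a stand-in for the paper's search algorithm), and fit an ARIMA model to the current regime. It assumes statsmodels is available; the loop-detector data are synthetic.

```python
# Simplified sketch of the pipeline: difference the series, split it at
# change points, then forecast with ARIMA. The threshold detector below
# is a stand-in for the paper's change-point search algorithm.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def change_points(travel_times, k=3.0):
    """Flag indices where the first-order difference jumps by > k sigma."""
    diff = np.diff(travel_times)
    sigma = np.std(diff)
    return [i + 1 for i, d in enumerate(diff) if abs(d) > k * sigma]

# synthetic loop-detector travel times (s) with one regime shift
rng = np.random.default_rng(1)
series = np.concatenate([60 + rng.normal(0, 2, 120),
                         95 + rng.normal(0, 4, 80)])

cps = change_points(series)
start = cps[-1] if cps else 0           # model only the current regime
model = ARIMA(series[start:], order=(1, 1, 1)).fit()
print("change points at:", cps)
print("next 5 travel times:", model.forecast(5).round(1))
```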

  6. Traffic Incident Clearance Time and Arrival Time Prediction Based on Hazard Models

    Directory of Open Access Journals (Sweden)

    Yangbeibei Ji

    2014-01-01

    Full Text Available Accurate prediction of incident duration is not only important information for Traffic Incident Management Systems, but also an effective input for travel time prediction. In this paper, hazard-based prediction models are developed for both incident clearance time and arrival time. The data are obtained from the Queensland Department of Transport and Main Roads’ STREAMS Incident Management System (SIMS) for one year ending in November 2010. The best fitting distributions are drawn for both clearance and arrival time for three types of incident: crash, stationary vehicle, and hazard. The results show that the Gamma, Log-logistic, and Weibull distributions are the best fits for crash, stationary vehicle, and hazard incidents, respectively. The significant impact factors are identified for crash clearance time and arrival time, and the quantitative influences for crash and hazard incidents are presented for both clearance and arrival. The model accuracy is analyzed at the end.
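
    The distribution-selection step can be reproduced in miniature with scipy: fit the three candidate duration families to a sample of clearance times and rank them by AIC. The data below are synthetic stand-ins for the SIMS records.

```python
# Sketch of the distribution-fitting step: fit candidate duration models to
# clearance times and rank them by AIC. The data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
clearance_min = rng.gamma(shape=2.0, scale=20.0, size=500)  # fake crash data

candidates = {
    "gamma": stats.gamma,
    "log-logistic": stats.fisk,   # scipy's name for the log-logistic
    "weibull": stats.weibull_min,
}
for name, dist in candidates.items():
    params = dist.fit(clearance_min, floc=0)       # fix location at zero
    loglik = np.sum(dist.logpdf(clearance_min, *params))
    aic = 2 * (len(params) - 1) - 2 * loglik       # floc fixed, not estimated
    print(f"{name:13s} AIC = {aic:8.1f}")
```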

  7. A Dynamic Travel Time Estimation Model Based on Connected Vehicles

    Directory of Open Access Journals (Sweden)

    Daxin Tian

    2015-01-01

    Full Text Available With advances in connected vehicle technology, dynamic vehicle route guidance models are gradually becoming indispensable tools for drivers. Traditional route guidance models are designed to direct a vehicle along the shortest path from the origin to the destination without considering dynamic traffic information. In this paper, a dynamic travel time estimation model is presented that can collect and distribute traffic data based on connected vehicles. To estimate real-time travel time more accurately, a road link dynamic dividing algorithm is proposed. The efficiency of the model is confirmed by simulations, and the experimental results prove the effectiveness of the travel time estimation method.

  8. An overview of the recent advances in delay-time-based maintenance modelling

    International Nuclear Information System (INIS)

    Wang, Wenbin

    2012-01-01

    Industrial plant maintenance is an area with enormous potential for improvement. It is also an area that has attracted significant attention from mathematical modellers because of the random nature of plant failures. This paper reviews the recent advances in delay-time-based maintenance modelling, which is one of the mathematical techniques for optimising inspection planning and related problems. The delay-time concept divides a plant failure process into two stages: from new until the point of an identifiable defect, and then from this point to failure. The first stage is called the normal working stage and the second stage is called the failure delay-time stage. If the distributions of the two stages can be quantified, the relationship between the number of failures and the inspection interval can be readily established, and this can then be used for optimising the inspection interval and other related decision variables. In this review, we pay particular attention to new methodological developments and industrial applications of delay-time-based models over the last few decades. The use of the delay-time concept and modelling techniques in areas other than maintenance is also reviewed, and future research directions are highlighted. - Highlights: ► Reviewed the recent advances in delay-time-based maintenance models and applications. ► Compared the delay-time-based models with other models. ► Focused on methodologies and applications. ► Pointed out future research directions.
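
    For readers unfamiliar with the delay-time formulation, a standard textbook version (not any specific model from the review) can be written down in a few lines: with defects arriving as a Poisson process of rate λ and delay time h with distribution F, the expected number of failures in an inspection cycle of length T is E_f(T) = λ ∫_0^T F(T−u) du, and a downtime rate D(T) = (E_f(T)·d_f + d_i)/(T + d_i) can be minimized over T. The sketch below evaluates this numerically with assumed parameter values.

```python
# A standard textbook form of the delay-time inspection model (a sketch,
# not a specific model from the review). Defects arrive as a Poisson
# process with rate lam; the delay time h ~ Exp(mu). A defect arising at
# time u in (0, T) causes a failure if h < T - u; otherwise it is found
# at the inspection at time T.
import numpy as np

lam, mu = 0.05, 1 / 30.0      # defects/day, 1/mean delay time (days)
d_f, d_i = 0.5, 0.1           # downtime per failure / per inspection (days)

def expected_failures(T, n=2000):
    u = np.linspace(0.0, T, n)
    F = 1.0 - np.exp(-mu * (T - u))          # P(h < T - u)
    return lam * np.trapz(F, u)

def downtime_rate(T):
    return (expected_failures(T) * d_f + d_i) / (T + d_i)

Ts = np.arange(5, 121, 5)
best = Ts[np.argmin([downtime_rate(T) for T in Ts])]
print("best inspection interval: %d days (D = %.4f)" % (best, downtime_rate(best)))
```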

  9. Model-based Clustering of Categorical Time Series with Multinomial Logit Classification

    Science.gov (United States)

    Frühwirth-Schnatter, Sylvia; Pamminger, Christoph; Winter-Ebmer, Rudolf; Weber, Andrea

    2010-09-01

    A common problem in many areas of applied statistics is to identify groups of similar time series in a panel of time series. However, distance-based clustering methods cannot easily be extended to time series data, where an appropriate distance measure is rather difficult to define, particularly for discrete-valued time series. Markov chain clustering, proposed by Pamminger and Frühwirth-Schnatter [6], is an approach for clustering discrete-valued time series obtained by observing a categorical variable with several states. This model-based clustering method is based on finite mixtures of first-order time-homogeneous Markov chain models. In order to further explain group membership, we present an extension to the approach of Pamminger and Frühwirth-Schnatter [6] by formulating a probabilistic model for the latent group indicators within the Bayesian classification rule by using a multinomial logit model. The parameters are estimated for a fixed number of clusters within a Bayesian framework using a Markov chain Monte Carlo (MCMC) sampling scheme representing a (full) Gibbs-type sampler which involves only draws from standard distributions. Finally, an application to a panel of Austrian wage mobility data is presented which leads to an interesting segmentation of the Austrian labour market.

  10. A Perspective for Time-Varying Channel Compensation with Model-Based Adaptive Passive Time-Reversal

    Directory of Open Access Journals (Sweden)

    Lussac P. MAIA

    2015-06-01

    Full Text Available Underwater communications mainly rely on acoustic propagation, which is strongly affected by frequency-dependent attenuation, shallow water multipath propagation, and significant Doppler spread/shift induced by source-receiver-surface motion. Time-reversal based techniques offer a low complexity solution to decrease interference caused by multipath, but complete equalization cannot be reached (the gain saturates as the signal-to-noise ratio is maximized), and these techniques in their conventional form are quite sensitive to channel variations during the transmission. Acoustic propagation modeling in the high frequency regime can yield physics-based information that is potentially useful to channel compensation methods such as passive time-reversal (pTR), which is often employed in Digital Acoustic Underwater Communications (DAUC) systems because of its low computational cost. Aiming to overcome the difficulty of pTR in handling time variations of underwater channels, we intend to insert physical knowledge from acoustic propagation modeling into the pTR filtering. The authors are investigating the influence of channel physical parameters on the propagation of coherent acoustic signals transmitted through shallow water waveguides and received on a vertical line array of sensors. A time-variant approach is used, as required to model high frequency acoustic propagation in realistic scenarios, and applied to a DAUC simulator containing an adaptive passive time-reversal receiver (ApTR). Understanding how changes in the physical features of the channel affect propagation can lead to the design of ApTR filters that help improve communication system performance. This work presents a short extension and review of the paper [12], which tested Doppler distortion induced by source-surface motion and ApTR compensation for a DAUC system on a simulated time-variant channel, in the scope of model-based equalization. Environmental focusing approach …

  11. Nonlinear Prediction Model for Hydrologic Time Series Based on Wavelet Decomposition

    Science.gov (United States)

    Kwon, H.; Khalil, A.; Brown, C.; Lall, U.; Ahn, H.; Moon, Y.

    2005-12-01

    Traditionally, forecasting and characterization of hydrologic systems are performed using many techniques. Stochastic linear methods such as AR and ARIMA, and nonlinear ones such as tools based on statistical learning theory, have been extensively used. The common difficulty for all methods is the determination of sufficient and necessary information and predictors for a successful prediction. Relationships between hydrologic variables are often highly nonlinear and interrelated across temporal scales. A new hybrid approach is proposed for the simulation of hydrologic time series combining the wavelet transform and a nonlinear time series model. The present model employs the merits of both the wavelet transform and the nonlinear time series model. The wavelet transform is adopted to decompose a hydrologic nonlinear process into a set of mono-component signals, which are simulated by the nonlinear model. The hybrid methodology is formulated in a manner to improve the accuracy of long-term forecasting. The proposed hybrid model yields much better results in terms of capturing and reproducing the time-frequency properties of the system at hand. Prediction results are promising when compared to traditional univariate time series models. An application demonstrating the plausibility of the proposed methodology is provided, and the results show that the wavelet-based time series model can be used for simulating and forecasting hydrologic variables reasonably well. This will ultimately serve the purpose of integrated water resources planning and management.
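
    The hybrid decompose-model-recombine idea can be sketched as follows, with PyWavelets for the decomposition and a simple AR(2) standing in for the nonlinear component models used in the paper; boundary effects of the transform are ignored in this toy.

```python
# Sketch of the hybrid scheme: decompose the series into wavelet sub-bands,
# model each band separately (a simple AR(2) stands in for the paper's
# nonlinear component models), and sum the per-band one-step forecasts.
import numpy as np
import pywt

def ar2_predict(x):
    """One-step AR(2) forecast fitted by least squares."""
    X = np.column_stack([x[1:-1], x[:-2]])
    a, b = np.linalg.lstsq(X, x[2:], rcond=None)[0]
    return a * x[-1] + b * x[-2]

def band_signals(x, wavelet="db4", level=3):
    """Reconstruct each sub-band as a full-length signal."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    bands = []
    for i in range(len(coeffs)):
        sel = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        bands.append(pywt.waverec(sel, wavelet)[: len(x)])
    return bands

rng = np.random.default_rng(3)
t = np.arange(512)
flow = 50 + 10 * np.sin(2 * np.pi * t / 64) + rng.normal(0, 1, t.size)

forecast = sum(ar2_predict(band) for band in band_signals(flow))
print("one-step hybrid forecast:", round(float(forecast), 2))
```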

  12. A bootstrap based space-time surveillance model with an application to crime occurrences

    Science.gov (United States)

    Kim, Youngho; O'Kelly, Morton

    2008-06-01

    This study proposes a bootstrap-based space-time surveillance model. Designed to find emerging hotspots in near-real time, the bootstrap-based model is characterized by its use of past occurrence information and bootstrap permutations. Many existing space-time surveillance methods, which use population-at-risk data to generate expected values, produce hotspots bounded by administrative area units and are of limited use for near-real-time applications because of the population data needed. This study instead generates expected values for local hotspots from past occurrences rather than population at risk, and bootstrap permutations of previous occurrences are used for significance tests. Consequently, the bootstrap-based model, without the requirement of population-at-risk data, (1) is free from administrative area restrictions, (2) enables more frequent surveillance of continuously updated registry databases, and (3) is readily applicable to criminology and epidemiology surveillance. The bootstrap-based model performs better for space-time surveillance than the space-time scan statistic, as shown by means of simulations and an application to residential crime occurrences in Columbus, OH, in the year 2000.
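
    A minimal sketch of the surveillance logic, with invented counts: the expected value for a local window comes from its own past occurrences, and a bootstrap resample of that history supplies the reference distribution for the significance test, in place of population-at-risk data.

```python
# Minimal sketch of the surveillance step: the expectation for a local
# window comes from its own past occurrences, and bootstrap resampling of
# that history replaces population-at-risk data. Counts are invented.
import numpy as np

rng = np.random.default_rng(4)
history = rng.poisson(3.0, size=60)        # past daily counts in the window
today = 9                                  # today's observed count

boot = rng.choice(history, size=100_000, replace=True)
p_value = np.mean(boot >= today)           # one-sided permutation-style test
print(f"expected {history.mean():.2f}, observed {today}, p = {p_value:.4f}")
```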

  13. Agent-Based Modeling of Day-Ahead Real Time Pricing in a Pool-Based Electricity Market

    Directory of Open Access Journals (Sweden)

    Sh. Yousefi

    2011-09-01

    Full Text Available In this paper, an agent-based structure of the electricity retail market is presented, based on which day-ahead (DA) energy procurement for customers is modeled. Here, we focus on the operation of a single Retail Energy Provider (REP) agent who purchases energy from the DA pool-based wholesale market and offers DA real time tariffs to a group of its customers. As a model of customer response to the offered real time prices, an hourly acceptance function is proposed in order to represent the hourly changes in the customers' effective demand according to the prices. A Q-learning (QL) approach is applied in day-ahead real time pricing for the customers, enabling the REP agent to discover which price yields the most benefit through trial-and-error search. Numerical studies are presented based on New England day-ahead market data, including a comparison of the results of RTP based on the QL approach with those of genetic-based pricing.
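
    Since the abstract describes a trial-and-error search over prices for a single hour, the essential loop reduces to a bandit-style (single-state) Q-learning update. The price grid, acceptance curve, and market numbers below are all assumptions for illustration.

```python
# Minimal Q-learning sketch (made-up numbers): the REP agent picks an
# hourly retail price; an assumed acceptance function scales effective
# demand, and the reward is retail revenue minus wholesale cost.
import numpy as np

rng = np.random.default_rng(5)
prices = np.array([0.08, 0.10, 0.12, 0.14])   # $/kWh actions
wholesale = 0.07                               # $/kWh DA pool price
base_demand = 100.0                            # kWh in this hour

def acceptance(price):
    """Assumed response curve: demand share falls as price rises."""
    return max(0.0, 1.0 - 4.0 * (price - 0.08))

Q = np.zeros(len(prices))
alpha, eps = 0.1, 0.2
for episode in range(5000):
    a = rng.integers(len(prices)) if rng.random() < eps else int(np.argmax(Q))
    demand = base_demand * acceptance(prices[a]) * rng.uniform(0.9, 1.1)
    reward = (prices[a] - wholesale) * demand
    Q[a] += alpha * (reward - Q[a])            # single-state (bandit) update
print("learned values:", Q.round(2), "-> best price:", prices[int(np.argmax(Q))])
```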

  14. RB Particle Filter Time Synchronization Algorithm Based on the DPM Model.

    Science.gov (United States)

    Guo, Chunsheng; Shen, Jia; Sun, Yao; Ying, Na

    2015-09-03

    Time synchronization is essential for node localization, target tracking, data fusion, and various other Wireless Sensor Network (WSN) applications. To improve the estimation accuracy of the continuous clock offset and skew of mobile nodes in WSNs, we propose a novel time synchronization algorithm: the Rao-Blackwellised (RB) particle filter time synchronization algorithm based on the Dirichlet process mixture (DPM) model. In a state-space equation with a linear substructure, the state variables are divided into linear and non-linear variables by the RB particle filter algorithm. These two variables can be estimated using a Kalman filter and a particle filter, respectively, which improves computational efficiency compared with using the particle filter alone. In addition, the DPM model is used to describe the distribution of non-deterministic delays and to automatically adjust the number of Gaussian mixture model components based on the observational data. This improves the estimation accuracy of clock offset and skew, which enables time synchronization. The time synchronization performance of this algorithm is validated by computer simulations and experimental measurements. The results show that the proposed algorithm has higher time synchronization precision than traditional time synchronization algorithms.

  15. Automatic CT-based finite element model generation for temperature-based death time estimation: feasibility study and sensitivity analysis.

    Science.gov (United States)

    Schenkl, Sebastian; Muggenthaler, Holger; Hubig, Michael; Erdmann, Bodo; Weiser, Martin; Zachow, Stefan; Heinrich, Andreas; Güttler, Felix Victor; Teichgräber, Ulf; Mall, Gita

    2017-05-01

    Temperature-based death time estimation is based either on simple phenomenological models of corpse cooling or on detailed physical heat transfer models. The latter are much more complex but allow a higher accuracy of death time estimation, as in principle all relevant cooling mechanisms can be taken into account. Here, a complete workflow for finite element-based cooling simulation is presented. The following steps are demonstrated on a CT phantom:

    1. Computed tomography (CT) scan
    2. Segmentation of the CT images for thermodynamically relevant features of individual geometries and compilation in a geometric computer-aided design (CAD) model
    3. Conversion of the segmentation result into a finite element (FE) simulation model
    4. Computation of the model cooling curve (MOD)
    5. Calculation of the cooling time (CTE)

    For the first time in FE-based cooling time estimation, the steps from the CT image over segmentation to FE model generation are performed semi-automatically. The cooling time calculation results are compared to cooling measurements performed on the phantoms under controlled conditions; in this context, the method is validated using a CT phantom. Some of the phantoms' thermodynamic material parameters had to be determined via independent experiments. Moreover, the impact of geometry and material parameter uncertainties on the estimated cooling time is investigated by a sensitivity analysis.
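
    For contrast with the FE workflow, the "simple phenomenological model" family the abstract refers to can be illustrated with Henssge's widely used double-exponential cooling model (standard form for ambient temperatures up to about 23 °C), solved for cooling time with a root finder. The formula and all case values below are quoted from memory and for illustration only.

```python
# Sketch of the phenomenological alternative the abstract mentions:
# Henssge's double-exponential cooling model, solved for time since death.
# Values are illustrative, not from the study.
from scipy.optimize import brentq
import math

def henssge_Q(t_h, mass_kg):
    """Normalized temperature Q(t) for cooling time t_h in hours."""
    B = -1.2815 * mass_kg ** -0.625 + 0.0284
    return 1.25 * math.exp(B * t_h) - 0.25 * math.exp(5 * B * t_h)

def cooling_time(t_rectal, t_ambient, mass_kg, t0=37.2):
    Q = (t_rectal - t_ambient) / (t0 - t_ambient)
    return brentq(lambda t: henssge_Q(t, mass_kg) - Q, 1e-3, 72.0)

print("estimated cooling time: %.1f h"
      % cooling_time(t_rectal=30.0, t_ambient=18.0, mass_kg=75.0))
```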

  16. A stochastic HMM-based forecasting model for fuzzy time series.

    Science.gov (United States)

    Li, Sheng-Tun; Cheng, Yi-Chung

    2010-10-01

    Recently, fuzzy time series have attracted more academic attention than traditional time series due to their capability of dealing with the uncertainty and vagueness inherent in the data collected. The formulation of fuzzy relations is one of the key issues affecting forecasting results. Most of the present works adopt IF-THEN rules for relationship representation, which leads to higher computational overhead and rule redundancy. Sullivan and Woodall proposed a Markov-based formulation and a forecasting model to reduce computational overhead; however, its applicability is limited to handling one-factor problems. In this paper, we propose a novel forecasting model based on the hidden Markov model, enhancing Sullivan and Woodall's work to allow handling of two-factor forecasting problems. Moreover, in order to make the conjectural and random nature of forecasting more realistic, the Monte Carlo method is adopted to estimate the outcome. To test the effectiveness of the resulting stochastic model, we conduct two experiments and compare the results with those from other models. The first experiment consists of forecasting the daily average temperature and cloud density in Taipei, Taiwan, and the second experiment is based on the Taiwan Weighted Stock Index, forecasting the exchange rate of the New Taiwan dollar against the U.S. dollar. In addition to improving forecasting accuracy, the proposed model adheres to the central limit theorem, and thus the result statistically approximates the real mean of the target value being forecast.

  17. A Sarsa(λ)-based control model for real-time traffic light coordination.

    Science.gov (United States)

    Zhou, Xiaoke; Zhu, Fei; Liu, Quan; Fu, Yuchen; Huang, Wei

    2014-01-01

    Traffic problems often occur when traffic demand exceeds the capacity of the road network. Maximizing traffic flow and minimizing the average waiting time are the goals of intelligent traffic control. Each junction seeks larger traffic flow; in the process, junctions form a policy of coordination as well as constraints with adjacent junctions to maximize their own interests. A good traffic signal timing policy is helpful in solving the problem. However, as there are so many factors that can affect the traffic control model, it is difficult to find the optimal solution. The inability of traffic light controllers to learn from past experience leaves them unable to adapt to dynamic changes of traffic flow. Considering the dynamic characteristics of the actual traffic environment, a traffic control approach based on a reinforcement learning algorithm can be applied to obtain an optimal scheduling policy. The proposed Sarsa(λ)-based real-time traffic control optimization model can maintain the traffic signal timing policy more effectively. The Sarsa(λ)-based model learns the traffic cost of each vehicle, which accounts for delay time, the number of waiting vehicles, and the integrated saturation, from its experiences, and uses it to determine the optimal actions. The experiment results show an inspiring improvement in traffic control, indicating the proposed model is capable of facilitating real-time dynamic traffic control.
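
    The named learning rule is standard tabular Sarsa(λ) with eligibility traces; the sketch below shows that update on an invented toy state space (the paper's traffic cost combining delay, queue length, and saturation is reduced here to a single made-up cost signal).

```python
# Generic tabular Sarsa(lambda) with eligibility traces -- the core learning
# rule named by the abstract; the tiny environment is invented.
import numpy as np

rng = np.random.default_rng(6)
n_states, n_actions = 16, 2           # e.g. queue-level bins x {keep, switch}
Q = np.zeros((n_states, n_actions))
alpha, gamma, lam, eps = 0.1, 0.95, 0.9, 0.1

def policy(s):
    return rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))

def step(s, a):
    """Toy dynamics: negative reward = traffic cost (delay, queue, saturation)."""
    s2 = (s + (1 if a == 1 else rng.integers(-1, 2))) % n_states
    return s2, -float(s2)             # cost grows with queue-level state

for episode in range(500):
    E = np.zeros_like(Q)              # eligibility traces
    s, a = rng.integers(n_states), 0
    for _ in range(200):
        s2, r = step(s, a)
        a2 = policy(s2)
        delta = r + gamma * Q[s2, a2] - Q[s, a]
        E[s, a] += 1.0                # accumulating trace
        Q += alpha * delta * E
        E *= gamma * lam
        s, a = s2, a2
print("greedy action per state:", np.argmax(Q, axis=1))
```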

  19. Connectivity-based neurofeedback: Dynamic causal modeling for real-time fMRI

    Science.gov (United States)

    Koush, Yury; Rosa, Maria Joao; Robineau, Fabien; Heinen, Klaartje; W. Rieger, Sebastian; Weiskopf, Nikolaus; Vuilleumier, Patrik; Van De Ville, Dimitri; Scharnowski, Frank

    2013-01-01

    Neurofeedback based on real-time fMRI is an emerging technique that can be used to train voluntary control of brain activity. Such brain training has been shown to lead to behavioral effects that are specific to the functional role of the targeted brain area. However, real-time fMRI-based neurofeedback has so far been limited mainly to training localized brain activity within a region of interest. Here, we overcome this limitation by presenting near real-time dynamic causal modeling in order to provide feedback information based on connectivity between brain areas rather than activity within a single brain area. Using a visual–spatial attention paradigm, we show that participants can voluntarily control a feedback signal that is based on the Bayesian model comparison between two predefined model alternatives, i.e., the connectivity between left visual cortex and left parietal cortex vs. the connectivity between right visual cortex and right parietal cortex. Our new approach thus allows for training voluntary control over specific functional brain networks. Because most mental functions and most neurological disorders are associated with network activity rather than with activity in a single brain region, this novel approach is an important methodological innovation that more directly targets functionally relevant brain networks. PMID:23668967

  20. Intuitionistic Fuzzy Time Series Forecasting Model Based on Intuitionistic Fuzzy Reasoning

    Directory of Open Access Journals (Sweden)

    Ya’nan Wang

    2016-01-01

    Full Text Available Fuzzy set theory cannot describe data comprehensively, which has greatly limited the objectivity of fuzzy time series in forecasting uncertain data. In this regard, an intuitionistic fuzzy time series forecasting model is built. In the new model, a fuzzy clustering algorithm is used to divide the universe of discourse into unequal intervals, and a more objective technique for ascertaining the membership function and nonmembership function of the intuitionistic fuzzy set is proposed. On this basis, forecast rules based on intuitionistic fuzzy approximate reasoning are established. Finally, comparative experiments on the enrollments of the University of Alabama and the Taiwan Stock Exchange Capitalization Weighted Stock Index are carried out. The results show that the new model has a clear advantage in improving forecast accuracy.

  1. Efficient model checking for duration calculus based on branching-time approximations

    DEFF Research Database (Denmark)

    Fränzle, Martin; Hansen, Michael Reichhardt

    2008-01-01

    Duration Calculus (abbreviated to DC) is an interval-based, metric-time temporal logic designed for reasoning about embedded real-time systems at a high level of abstraction. But the complexity of model checking any decidable fragment featuring both negation and chop, DC's only modality, is non-elementary …

  2. Household time allocation model based on a group utility function

    NARCIS (Netherlands)

    Zhang, J.; Borgers, A.W.J.; Timmermans, H.J.P.

    2002-01-01

    Existing activity-based models typically assume an individual decision-making process. In household decision-making, however, interaction exists among household members and their activities during the allocation of the members' limited time. This paper therefore attempts to develop a new household …

  3. Hybrid perturbation methods based on statistical time series models

    Science.gov (United States)

    San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario

    2016-04-01

    In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies derived from the fact that, in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered, not to mention the fact that mathematical models of perturbations do not always reproduce physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing the missing dynamics in the previously integrated approximation. This combination results in improved precision of conventional numerical, analytical and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators formed by the combination of three different orders of approximation of an analytical theory and a statistical time series model, and analyse their capability to process the effect produced by the flattening of the Earth. The three considered analytical components are the integration of the Kepler problem, a first-order analytical theory and a second-order analytical theory, whereas the prediction technique is the same in the three cases, namely an additive Holt-Winters method.
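
    The structure of such a hybrid propagator can be mimicked in a few lines: a low-order "analytical theory" leaves a structured residual (here a fake secular-plus-periodic term standing in for the Earth-flattening effect), and an additive Holt-Winters model, as named in the abstract, learns and corrects it. statsmodels is assumed available; nothing here is real orbital dynamics.

```python
# Toy version of the hybrid scheme: the "analytical theory" leaves a
# structured residual; a Holt-Winters model learns the residual from past
# epochs and corrects future predictions. All dynamics are faked.
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

t = np.arange(300)                                  # epochs
analytical = 100.0 * t                              # e.g. Kepler-only angle (deg)
truth = analytical + 0.05 * t + 2.0 * np.sin(2 * np.pi * t / 15)

split = 200
resid = truth[:split] - analytical[:split]          # training residuals
hw = ExponentialSmoothing(resid, trend="add",
                          seasonal="add", seasonal_periods=15).fit()
corrected = analytical[split:] + hw.forecast(len(t) - split)

raw_err = np.abs(truth[split:] - analytical[split:]).mean()
hyb_err = np.abs(truth[split:] - corrected).mean()
print(f"mean error: analytical only {raw_err:.2f}, hybrid {hyb_err:.2f}")
```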

  4. A Model-free Approach to Fault Detection of Continuous-time Systems Based on Time Domain Data

    Institute of Scientific and Technical Information of China (English)

    Ping Zhang; Steven X. Ding

    2007-01-01

    In this paper, a model-free approach is presented to design an observer-based fault detection system for linear continuous-time systems, based on input and output data in the time domain. The core of the approach is to directly identify the parameters of the observer-based residual generator from a numerically reliable data equation obtained by filtering and sampling the input and output signals.

  5. Short-Term Bus Passenger Demand Prediction Based on Time Series Model and Interactive Multiple Model Approach

    Directory of Open Access Journals (Sweden)

    Rui Xue

    2015-01-01

    Full Text Available Although bus passenger demand prediction has attracted increased attention during recent years, limited research has been conducted in the context of short-term passenger demand forecasting. This paper proposes an interactive multiple model (IMM) filter algorithm-based model to predict short-term passenger demand. After being aggregated into 15-min intervals, passenger demand data collected from a busy bus route over four months were used to generate time series. Considering that passenger demand exhibits different characteristics at different time scales, three time series were developed: weekly, daily, and 15-min. After correlation, periodicity, and stationarity analyses, time series models were constructed. In particular, the heteroscedasticity of the time series was explored to achieve better prediction performance. Finally, the IMM filter algorithm was applied to combine the individual forecasting models and dynamically predict passenger demand for the next interval. Different error indices were adopted for the analyses of the individual and hybrid models. The performance comparison indicates that the hybrid model forecasts are superior to individual ones in accuracy. The findings of this study are of theoretical and practical significance for bus scheduling.

  6. CD-SEM real time bias correction using reference metrology based modeling

    Science.gov (United States)

    Ukraintsev, V.; Banke, W.; Zagorodnev, G.; Archie, C.; Rana, N.; Pavlovsky, V.; Smirnov, V.; Briginas, I.; Katnani, A.; Vaid, A.

    2018-03-01

    Accuracy of patterning impacts yield, IC performance and technology time to market. Accuracy of patterning relies on optical proximity correction (OPC) models built using CD-SEM inputs and on intra-die critical dimension (CD) control based on CD-SEM. Sub-nanometer measurement uncertainty (MU) of CD-SEM is required for current technologies. The reported design- and process-related bias variation of CD-SEM is in the range of several nanometers. Reference metrology and numerical modeling are used to correct SEM, but both methods are too slow for real-time bias correction. We report on real-time CD-SEM bias correction using empirical models based on reference metrology (RM) data. A significant amount of currently untapped information (sidewall angle, corner rounding, etc.) is obtainable from SEM waveforms. Using additional RM information provided for a specific technology (design rules, materials, processes), CD extraction algorithms can be pre-built and then used in real time for accurate CD extraction from regular CD-SEM images. The art and challenge of SEM modeling is in finding a robust correlation between SEM waveform features and CD-SEM bias, as well as in minimizing the RM inputs needed to create a model that is accurate within the design and process space. The new approach was applied to improve the CD-SEM accuracy of 45 nm GATE and 32 nm MET1 OPC 1D models. In both cases the MU of the state-of-the-art CD-SEM was improved by 3x and reduced to the nanometer level. A similar approach can be applied to 2D (end of line, contours, etc.) and 3D (sidewall angle, corner rounding, etc.) cases.

  7. Nonlinear System Identification via Basis Functions Based Time Domain Volterra Model

    Directory of Open Access Journals (Sweden)

    Yazid Edwar

    2014-07-01

    Full Text Available This paper proposes a basis-functions-based time domain Volterra model for nonlinear system identification. The Volterra kernels are expanded using complex exponential basis functions and estimated via a genetic algorithm (GA). The accuracy and practicability of the proposed method are then assessed experimentally on a scaled 1:100 model of a prototype truss spar platform. Identification results in the time and frequency domains are presented, and coherence functions are computed to check the quality of the identification results. It is shown that the results from the experimental data and the proposed method are in good agreement.

  8. Real time polymer nanocomposites-based physical nanosensors: theory and modeling

    Science.gov (United States)

    Bellucci, Stefano; Shunin, Yuri; Gopeyenko, Victor; Lobanova-Shunina, Tamara; Burlutskaya, Nataly; Zhukovskii, Yuri

    2017-09-01

    Functionalized carbon nanotube and graphene nanoribbon nanostructures, serving as the basis for the creation of physical pressure and temperature nanosensors, are considered as tools for ecological monitoring and medical applications. Fragments of nanocarbon inclusions with different morphologies, presenting a disordered system, are regarded as models for nanocomposite materials based on carbon nanocluster suspensions in dielectric polymer environments (e.g., epoxy resins). We have formulated an approach to conductivity calculations for carbon-based polymer nanocomposites using the effective media cluster approach, disordered systems theory and conductivity mechanisms analysis, and obtained the calibration dependences. Providing a proper description of electric responses in nanosensing systems, we demonstrate the implementation of advanced simulation models suitable for real-time control nanosystems. We also consider the prospects and prototypes of the proposed physical nanosensor models, providing comparisons with experimental calibration dependences.

  9. From discrete-time models to continuous-time, asynchronous modeling of financial markets

    NARCIS (Netherlands)

    Boer, Katalin; Kaymak, Uzay; Spiering, Jaap

    2007-01-01

    Most agent-based simulation models of financial markets are discrete-time in nature. In this paper, we investigate to what degree such models are extensible to continuous-time, asynchronous modeling of financial markets. We study the behavior of a learning market maker in a market with information …

  10. From Discrete-Time Models to Continuous-Time, Asynchronous Models of Financial Markets

    NARCIS (Netherlands)

    K. Boer-Sorban (Katalin); U. Kaymak (Uzay); J. Spiering (Jaap)

    2006-01-01

    Most agent-based simulation models of financial markets are discrete-time in nature. In this paper, we investigate to what degree such models are extensible to continuous-time, asynchronous modelling of financial markets. We study the behaviour of a learning market maker in a market with …

  11. Deformation analysis of polymers composites: rheological model involving time-based fractional derivative

    DEFF Research Database (Denmark)

    Zhou, H. W.; Yi, H. Y.; Mishnaevsky, Leon

    2017-01-01

    A modeling approach to the time-dependent properties of Glass Fiber Reinforced Polymer (GFRP) composites is of special interest for the quantitative description of long-term behavior. An electronic creep machine is employed to investigate the time-dependent deformation of four specimens of dog-bone-shaped GFRP composites at various stress levels. A negative exponent function based on structural changes is introduced to describe the damage evolution of material properties in the process of the creep test. Accordingly, a new creep constitutive equation, referred to as the fractional derivative Maxwell model, is proposed. The results given by the fractional derivative Maxwell model are in good agreement with the experimental data. It is shown that the new creep constitutive model needs few parameters to represent various time-dependent behaviors.

  12. A GIS-based time-dependent seismic source modeling of Northern Iran

    Science.gov (United States)

    Hashemi, Mahdi; Alesheikh, Ali Asghar; Zolfaghari, Mohammad Reza

    2017-01-01

    The first step in any seismic hazard study is the definition of seismogenic sources and the estimation of magnitude-frequency relationships for each source. There is as yet no standard methodology for source modeling, and many researchers have worked on this topic. This study is an effort to define linear and area seismic sources for Northern Iran. The linear (fault) sources are developed based on tectonic features and characteristic earthquakes, while the area sources are developed based on the spatial distribution of small to moderate earthquakes. Time-dependent recurrence relationships are developed for fault sources using a renewal approach, while time-independent frequency-magnitude relationships are proposed for area sources based on a Poisson process. GIS functionalities are used in this study to introduce and incorporate spatial-temporal and geostatistical indices in delineating area seismic sources. The proposed methodology is used to model seismic sources for an area of about 500 km by 400 km around Tehran. Previous research and reports are studied to compile an earthquake/fault catalog that is as complete as possible. All events are transformed to a uniform magnitude scale; duplicate events and dependent shocks are removed. The completeness and time distribution of the compiled catalog are taken into account. The proposed area and linear seismic sources, in conjunction with the defined recurrence relationships, can be used to develop a time-dependent probabilistic seismic hazard analysis of Northern Iran.

  13. Creating wavelet-based models for real-time synthesis of perceptually convincing environmental sounds

    Science.gov (United States)

    Miner, Nadine Elizabeth

    1998-09-01

    This dissertation presents a new wavelet-based method for synthesizing perceptually convincing, dynamic sounds using parameterized sound models. The sound synthesis method is applicable to a variety of applications including Virtual Reality (VR), multimedia, entertainment, and the World Wide Web (WWW). A unique contribution of this research is the modeling of the stochastic, or non-pitched, sound components. This stochastic-based modeling approach leads to perceptually compelling sound synthesis. Two preliminary studies provide data on multi-sensory interaction and audio-visual synchronization timing; these results contributed to the design of the new sound synthesis method. The method uses a four-phase development process, including analysis, parameterization, synthesis and validation, to create the wavelet-based sound models. A patent is pending for this dynamic sound synthesis method, which provides perceptually realistic, real-time sound generation. This dissertation also presents a battery of perceptual experiments developed to verify the sound synthesis results. These experiments are applicable for validation of any sound synthesis technique.

  14. Ergodicity of forward times of the renewal process in a block-based inspection model using the delay time concept

    International Nuclear Information System (INIS)

    Wang Wenbin; Banjevic, Dragan

    2012-01-01

    The delay time concept and the techniques developed for modelling and optimising plant inspection practice have been reported in many papers and case studies. For a system subject to a few major failure modes, component-based delay time models have been developed under the assumption of an age-based inspection policy. An age-based inspection assumes that an inspection is scheduled according to the age of the component, and if there is a failure renewal, the next inspection is always, say, τ time units from the time of the failure renewal. This applies to certain cases, particularly important plant items where the time since the last renewal or inspection is key to scheduling the next inspection service. In most cases, however, the inspection service is not scheduled according to the need of a particular component; rather, it is scheduled at a fixed calendar time regardless of whether the component being inspected was just renewed or not. This policy is called a block-based inspection, which has the advantage of easy planning and is particularly useful for plant items that are part of a larger system to be inspected. If a block-based inspection policy is used, the time to failure since the last inspection prior to the failure for a particular item is a random variable, called the forward time in this paper. To optimise the inspection interval for block-based inspections, the usual criterion functions such as expected cost or downtime per unit time depend on the distribution of this forward time. We report in this paper the development of a theoretical proof that a limiting distribution for such a forward time exists if certain conditions are met, and we propose a recursive algorithm for determining this limiting distribution. A numerical example is presented to demonstrate the existence of the limiting distribution.
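
    The forward time is easy to visualize by simulation: renew the component at each failure, inspect at fixed calendar times k·τ, and record the time from the last inspection to each failure. The Weibull lifetimes and parameter values below are arbitrary; the stabilizing histogram is the empirical counterpart of the limiting distribution proved in the paper.

```python
# Simulation sketch of the forward time: renew at each failure, inspect at
# fixed calendar times k*tau, and record the time from the last inspection
# before a failure to that failure. Parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(7)
tau, horizon = 10.0, 2.0e5            # inspection block length, sim length
scale = 23.7                          # Weibull scale; shape-2 lifetimes

t, forward_times = 0.0, []
while t < horizon:
    t += scale * rng.weibull(2.0)     # next failure (instantaneous renewal)
    forward_times.append(t - tau * np.floor(t / tau))

density, _ = np.histogram(forward_times, bins=10, range=(0, tau), density=True)
print("empirical forward-time density on [0, tau):", density.round(3))
```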

  15. Time-driven activity-based costing: A dynamic value assessment model in pediatric appendicitis.

    Science.gov (United States)

    Yu, Yangyang R; Abbas, Paulette I; Smith, Carolyn M; Carberry, Kathleen E; Ren, Hui; Patel, Binita; Nuchtern, Jed G; Lopez, Monica E

    2017-06-01

    Healthcare reform policies are emphasizing value-based healthcare delivery. We hypothesize that time-driven activity-based costing (TDABC) can be used to appraise healthcare interventions in pediatric appendicitis. Triage-based standing delegation orders, surgical advanced practice providers, and a same-day discharge protocol were implemented to target deficiencies identified in our initial TDABC model. Post-intervention process maps for a hospital episode were created using electronic time-stamp data for simple appendicitis cases during February to March 2016. Total personnel and consumable costs were determined using TDABC methodology. The post-intervention TDABC model featured 6 phases of care, 33 processes, and 19 personnel types. Our interventions reduced duration and costs in the emergency department (-41 min, -$23) and on the pre-operative floor (-57 min, -$18). While post-anesthesia care unit duration and costs increased (+224 min, +$41), the same-day discharge protocol eliminated post-operative floor costs (-$306). Our model incorporating all three interventions reduced total direct costs by 11% ($2753.39 to $2447.68) and duration of hospitalization by 51% (1984 min to 966 min). Time-driven activity-based costing can dynamically model changes in our healthcare delivery as a result of process improvement interventions. It is an effective tool to continuously assess the impact of these interventions on the value of appendicitis care. Level of evidence: II (economic analysis).
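
    The TDABC arithmetic itself is simple: each process consumes minutes of some resource, and each minute is priced at that resource's capacity cost rate. The sketch below uses invented times and rates (only the 224-min PACU duration echoes the abstract) purely to show the mechanics.

```python
# TDABC in miniature: episode cost = sum over processes of
# (minutes used) x (capacity cost rate of the resource performing it).
# Process times and rates below are invented, not the study's data.
processes = [
    # (phase/process, minutes, personnel, $ per minute of capacity)
    ("ED triage (standing delegation orders)", 15, "nurse", 0.80),
    ("Surgical consult (advanced practice provider)", 20, "APP", 1.10),
    ("Appendectomy", 45, "surgeon + OR staff", 6.50),
    ("PACU recovery (same-day discharge protocol)", 224, "PACU nurse", 0.90),
]

total = sum(minutes * rate for _, minutes, _, rate in processes)
for name, minutes, who, rate in processes:
    print(f"{name:48s} {minutes:4d} min x ${rate:4.2f}/min = ${minutes * rate:7.2f}")
print(f"{'episode direct personnel cost':48s} {'':16s} ${total:7.2f}")
```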

  16. An Efficient Explicit-time Description Method for Timed Model Checking

    Directory of Open Access Journals (Sweden)

    Hao Wang

    2009-12-01

    Full Text Available Timed model checking, the method to formally verify real-time systems, is attracting increasing attention from both the model checking community and the real-time community. Explicit-time description methods verify real-time systems using the general model constructs found in standard un-timed model checkers. Lamport proposed an explicit-time description method using a clock-ticking process (Tick) to simulate the passage of time, together with a group of global variables to model time requirements. Two further methods, the Sync-based Explicit-time Description Method using rendezvous synchronization steps and the Semaphore-based Explicit-time Description Method using only one global variable, have been proposed; they both achieve better modularity than Lamport's method in modeling real-time systems. In contrast to timed-automata-based model checkers like UPPAAL, explicit-time description methods can access and store the current time instant for future calculations necessary for many real-time systems, especially those with pre-emptive scheduling. However, the Tick process in the above three methods increments the time by one unit in each tick; the state spaces therefore grow relatively fast as the time parameters increase, a problem when the system's time period is relatively long. In this paper, we propose a more efficient method which enables the Tick process to leap multiple time units in one tick. Preliminary experimental results in a high performance computing environment show that this new method significantly reduces the state space and improves both time and memory efficiency.
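
    The state-space saving from leaping can be seen with a deliberately tiny example, unrelated to the paper's benchmarks: for a single timer with deadline T, a unit-step Tick visits every intermediate time point, while a leaping Tick jumps straight to the next relevant instant.

```python
# Why leaping helps, in miniature: count explored (time, location) states
# for one timer that expires at T, with a unit-step Tick versus a Tick
# that leaps straight to the next relevant time point.
def unit_tick_states(T):
    states = set()
    t, loc = 0, "waiting"
    while loc == "waiting":
        states.add((t, loc))
        t += 1                          # Tick advances one time unit
        if t >= T:
            loc = "expired"
    states.add((t, loc))
    return len(states)

def leaping_tick_states(T):
    states = {(0, "waiting"), (T, "expired")}   # leap directly to deadline
    return len(states)

for T in (10, 1000, 100000):
    print(f"T={T:6d}: unit tick {unit_tick_states(T):7d} states, "
          f"leaping tick {leaping_tick_states(T)} states")
```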

  17. Real-time process optimization based on grey-box neural models

    Directory of Open Access Journals (Sweden)

    F. A. Cubillos

    2007-09-01

    Full Text Available This paper investigates the feasibility of using grey-box neural models (GNM) in Real Time Optimization (RTO). These models are based on a suitable combination of fundamental conservation laws and neural networks, being used in at least two different ways: to complement available phenomenological knowledge with empirical information, or to reduce the dimensionality of complex rigorous physical models. We have observed that the benefits of using these simple adaptable models are counteracted by some difficulties associated with the solution of the optimization problem. Nonlinear Programming (NLP) algorithms failed to find the global optimum because neural networks can introduce multimodal objective functions. One alternative considered to solve this problem was the use of evolutionary algorithms such as Genetic Algorithms (GA). Although these algorithms produced better results in terms of finding the appropriate region, they took long periods of time to reach the global optimum. It was found that a combination of genetic and nonlinear programming algorithms can be used to quickly obtain the optimum solution. The proposed approach was applied to the Williams-Otto reactor, considering three different GNM models of increasing complexity. Results demonstrated that the use of GNM models and mixed GA/NLP optimization algorithms is a promising approach for solving dynamic RTO problems.

  18. Time series modeling by a regression approach based on a latent process.

    Science.gov (United States)

    Chamroukhi, Faicel; Samé, Allou; Govaert, Gérard; Aknin, Patrice

    2009-01-01

    Time series are used in many domains, including finance, engineering, economics and bioinformatics, generally to represent the change of a measurement over time. Modeling techniques may then be used to give a synthetic representation of such data. A new approach for time series modeling is proposed in this paper. It consists of a regression model incorporating a discrete hidden logistic process that allows different polynomial regression models to be activated smoothly or abruptly. The model parameters are estimated by the maximum likelihood method, performed by a dedicated Expectation Maximization (EM) algorithm. The M step of the EM algorithm uses a multi-class Iterative Reweighted Least-Squares (IRLS) algorithm to estimate the hidden process parameters. To evaluate the proposed approach, an experimental study on simulated data and real-world data was performed using two alternative approaches: a heteroskedastic piecewise regression model using a global optimization algorithm based on dynamic programming, and a Hidden Markov Regression Model whose parameters are estimated by the Baum-Welch algorithm. Finally, in the context of the remote monitoring of components of the French railway infrastructure, and more particularly the switch mechanism, the proposed approach has been applied to modeling and classifying time series representing the condition measurements acquired during switch operations.

  19. Time-based collision risk modeling for air traffic management

    Science.gov (United States)

    Bell, Alan E.

    Since the emergence of commercial aviation in the early part of last century, economic forces have driven a steadily increasing demand for air transportation. Increasing density of aircraft operating in a finite volume of airspace is accompanied by a corresponding increase in the risk of collision, and in response to a growing number of incidents and accidents involving collisions between aircraft, governments worldwide have developed air traffic control systems and procedures to mitigate this risk. The objective of any collision risk management system is to project conflicts and provide operators with sufficient opportunity to recognize potential collisions and take necessary actions to avoid them. It is therefore the assertion of this research that the currency of collision risk management is time. Future Air Traffic Management Systems are being designed around the foundational principle of four dimensional trajectory based operations, a method that replaces legacy first-come, first-served sequencing priorities with time-based reservations throughout the airspace system. This research will demonstrate that if aircraft are to be sequenced in four dimensions, they must also be separated in four dimensions. In order to separate aircraft in four dimensions, time must emerge as the primary tool by which air traffic is managed. A functional relationship exists between the time-based performance of aircraft, the interval between aircraft scheduled to cross some three dimensional point in space, and the risk of collision. This research models that relationship and presents two key findings. First, a method is developed by which the ability of an aircraft to meet a required time of arrival may be expressed as a robust standard for both industry and operations. Second, a method by which airspace system capacity may be increased while maintaining an acceptable level of collision risk is presented and demonstrated for the purpose of formulating recommendations for procedures

  20. Working Memory Span Development: A Time-Based Resource-Sharing Model Account

    Science.gov (United States)

    Barrouillet, Pierre; Gavens, Nathalie; Vergauwe, Evie; Gaillard, Vinciane; Camos, Valerie

    2009-01-01

    The time-based resource-sharing model (P. Barrouillet, S. Bernardin, & V. Camos, 2004) assumes that during complex working memory span tasks, attention is frequently and surreptitiously switched from processing to reactivate decaying memory traces before their complete loss. Three experiments involving children from 5 to 14 years of age…

  1. Detection of Common Problems in Real-Time and Multicore Systems Using Model-Based Constraints

    Directory of Open Access Journals (Sweden)

    Raphaël Beamonte

    2016-01-01

    Full Text Available Multicore systems are complex in that multiple processes run concurrently and can interfere with each other. Real-time systems add time constraints on top of that, making results invalid as soon as a deadline has been missed. Tracing is often the most reliable and accurate tool available to study and understand those systems. However, tracing requires that users understand the kernel events and their meaning, and it is therefore not very accessible. Using modeling to generate source code or to represent an application's workflow is handy for developers and has emerged as part of the model-driven development methodology. In this paper, we propose a new approach to system analysis using model-based constraints, on top of userspace and kernel traces. We introduce the constraints representation and how traces can be used to follow the application's workflow and check the constraints we set on the model. We then present a number of common problems that we encountered in real-time and multicore systems and describe how our model-based constraints could have helped to save time by automatically identifying the unwanted behavior.

  2. Seismic wavefield modeling based on time-domain symplectic and Fourier finite-difference method

    Science.gov (United States)

    Fang, Gang; Ba, Jing; Liu, Xin-xin; Zhu, Kun; Liu, Guo-Chang

    2017-06-01

    Seismic wavefield modeling is important for improving seismic data processing and interpretation. Calculations of wavefield propagation are sometimes unstable when forward modeling of seismic waves uses large time steps over long simulation times. Based on the Hamiltonian expression of the acoustic wave equation, we propose a structure-preserving method for seismic wavefield modeling by applying the symplectic finite-difference method on time grids and the Fourier finite-difference method on space grids to solve the acoustic wave equation. The proposed method, called the symplectic Fourier finite-difference (symplectic FFD) method, offers high computational accuracy and improved computational stability. Using the acoustic approximation, we extend the method to anisotropic media. We discuss the calculations in the symplectic FFD method for seismic wavefield modeling of isotropic and anisotropic media, and use the BP salt model and BP TTI model to test the proposed method. The numerical examples suggest that the proposed method can be used in seismic modeling of strongly variable velocities, offering high computational accuracy and low numerical dispersion. The symplectic FFD method suppresses the residual qSV wave in seismic modeling of anisotropic media and maintains the stability of the wavefield propagation for large time steps.

  3. Compiling models into real-time systems

    International Nuclear Information System (INIS)

    Dormoy, J.L.; Cherriaux, F.; Ancelin, J.

    1992-08-01

    This paper presents an architecture for building real-time systems from models, and model-compiling techniques. This has been applied for building a real-time model-based monitoring system for nuclear plants, called KSE, which is currently being used in two plants in France. We describe how we used various artificial intelligence techniques for building it: a model-based approach, a logical model of its operation, a declarative implementation of these models, and original knowledge-compiling techniques for automatically generating the real-time expert system from those models. Some of those techniques have just been borrowed from the literature, but we had to modify or invent other techniques which simply did not exist. We also discuss two important problems, which are often underestimated in the artificial intelligence literature: size, and errors. Our architecture, which could be used in other applications, combines the advantages of the model-based approach with the efficiency requirements of real-time applications, while in general model-based approaches present serious drawbacks on this point.

  5. Modeling Complex Time Limits

    Directory of Open Access Journals (Sweden)

    Oleg Svatos

    2013-01-01

    Full Text Available In this paper we analyze the complexity of time limits found especially in regulated processes of public administration. First we review the most popular process modeling languages. An example scenario based on current Czech legislation is defined and then captured in the discussed process modeling languages. The analysis shows that contemporary process modeling languages support capturing of time limits only partially. This causes trouble for analysts and unnecessary complexity in the models. Given the unsatisfactory results of the contemporary process modeling languages, we analyze the complexity of time limits in greater detail and outline the lifecycles of a time limit using the multiple dynamic generalizations pattern. As an alternative to the popular process modeling languages, the PSD process modeling language is presented, which supports the defined lifecycles of a time limit natively and therefore allows keeping the models simple and easy to understand.

  6. Model-based Integration of Past & Future in TimeTravel

    DEFF Research Database (Denmark)

    Khalefa, Mohamed E.; Fischer, Ulrike; Pedersen, Torben Bach

    2012-01-01

    We demonstrate TimeTravel, an efficient DBMS system for seamless integrated querying of past and (forecasted) future values of time series, allowing the user to view past and future values as one joint time series. This functionality is important for advanced application domains like energy....... The main idea is to compactly represent time series as models. By using models, the TimeTravel system answers queries approximately on past and future data with error guarantees (absolute error and confidence) one order of magnitude faster than when accessing the time series directly. In addition...... it to answer approximate and exact queries. TimeTravel is implemented into PostgreSQL, thus achieving complete user transparency at the query level. In the demo, we show the easy building of a hierarchical model index for a real-world time series and the effect of varying the error guarantees on the speed up...

  7. Grey-Theory-Based Optimization Model of Emergency Logistics Considering Time Uncertainty.

    Science.gov (United States)

    Qiu, Bao-Jian; Zhang, Jiang-Hua; Qi, Yuan-Tao; Liu, Yang

    2015-01-01

    Natural disasters occur frequently in recent years, causing huge casualties and property losses. Nowadays, people pay more and more attention to the emergency logistics problems. This paper studies the emergency logistics problem with multi-center, multi-commodity, and single-affected-point. Considering that the path near the disaster point may be damaged, the information of the state of the paths is not complete, and the travel time is uncertainty, we establish the nonlinear programming model that objective function is the maximization of time-satisfaction degree. To overcome these drawbacks: the incomplete information and uncertain time, this paper firstly evaluates the multiple roads of transportation network based on grey theory and selects the reliable and optimal path. Then simplify the original model under the scenario that the vehicle only follows the optimal path from the emergency logistics center to the affected point, and use Lingo software to solve it. The numerical experiments are presented to show the feasibility and effectiveness of the proposed method.

  8. A prediction method based on wavelet transform and multiple models fusion for chaotic time series

    International Nuclear Information System (INIS)

    Zhongda, Tian; Shujiang, Li; Yanhong, Wang; Yi, Sha

    2017-01-01

    In order to improve the prediction accuracy of chaotic time series, a prediction method based on wavelet transform and multiple-model fusion is proposed. The chaotic time series is decomposed and reconstructed by wavelet transform, yielding approximation components and detail components. According to the different characteristics of each component, a least squares support vector machine (LSSVM) is used as the predictive model for the approximation components, with an improved free search algorithm utilized to optimize the predictive model parameters. An autoregressive integrated moving average (ARIMA) model is used as the predictive model for the detail components. The predictions of the multiple models are fused by the Gauss–Markov algorithm; the error variance of the fused prediction is smaller than that of any single model, so the prediction accuracy is improved. The method is evaluated on two typical chaotic time series, the Lorenz time series and the Mackey–Glass time series. The simulation results show that the proposed method achieves better prediction accuracy.
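
    A minimal sketch of this decompose-predict-combine pipeline, with several stand-ins: pywt for the wavelet transform, sklearn's kernel SVR in place of LSSVM, statsmodels ARIMA for the detail components, and a plain sum of component predictions in place of the Gauss–Markov fusion and free-search optimization. All settings are illustrative assumptions:

    ```python
    # Hedged sketch of wavelet decomposition + per-component prediction.
    # SVR stands in for LSSVM; component predictions are simply summed.
    import numpy as np
    import pywt
    from sklearn.svm import SVR
    from statsmodels.tsa.arima.model import ARIMA

    def wavelet_components(x, wavelet="db4", level=3):
        """Split x into per-band components that sum back to x."""
        coeffs = pywt.wavedec(x, wavelet, level=level)
        comps = []
        for i in range(len(coeffs)):
            keep = [c if j == i else np.zeros_like(c)
                    for j, c in enumerate(coeffs)]
            comps.append(pywt.waverec(keep, wavelet)[: len(x)])
        return comps                  # [approximation, detail, ..., detail]

    def svr_one_step(comp, lags=8):
        """One-step prediction of the approximation component with SVR."""
        X = np.array([comp[i - lags:i] for i in range(lags, len(comp))])
        model = SVR(kernel="rbf", C=10.0).fit(X, comp[lags:])
        return model.predict(comp[-lags:].reshape(1, -1))[0]

    def arima_one_step(comp):
        """One-step prediction of a detail component with ARIMA."""
        return ARIMA(comp, order=(2, 0, 2)).fit().forecast(1)[0]

    x = np.sin(0.3 * np.arange(500)) + 0.1 * np.random.default_rng(0).normal(size=500)
    approx, *details = wavelet_components(x)
    pred = svr_one_step(approx) + sum(arima_one_step(d) for d in details)
    print("one-step-ahead prediction:", pred)
    ```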

  9. Evidence-based guidelines, time-based health outcomes, and the Matthew effect

    NARCIS (Netherlands)

    M.L.E. Essink-Bot (Marie-Louise); M.E. Kruijshaar (Michelle); J.J.M. Barendregt (Jan); L.G.A. Bonneux (Luc)

    2007-01-01

    Background: Cardiovascular risk management guidelines are 'risk based'; health economists' practice is 'time based'. The 'medical' risk-based allocation model maximises numbers of deaths prevented by targeting subjects at high risk, for example, elderly and smokers. The time-based model

  10. Evidence-based guidelines, time-based health outcomes, and the Matthew effect

    NARCIS (Netherlands)

    Essink-Bot, Marie-Louise; Kruijshaar, Michelle E.; Barendregt, Jan J.; Bonneux, Luc G. A.

    2007-01-01

    BACKGROUND: Cardiovascular risk management guidelines are 'risk based'; health economists' practice is 'time based'. The 'medical' risk-based allocation model maximises numbers of deaths prevented by targeting subjects at high risk, for example, elderly and smokers. The time-based model maximises

  11. Non-linear time variant model intended for polypyrrole-based actuators

    Science.gov (United States)

    Farajollahi, Meisam; Madden, John D. W.; Sassani, Farrokh

    2014-03-01

    Polypyrrole-based actuators are of interest due to their biocompatibility, low operation voltage and relatively high strain and force. Modeling and simulation are very important to predict the behaviour of each actuator. To develop an accurate model, we need to know the electro-chemo-mechanical specifications of the Polypyrrole. In this paper, the non-linear time-variant model of Polypyrrole film is derived and proposed using a combination of an RC transmission line model and a state space representation. The model incorporates the potential dependent ionic conductivity. A function of ionic conductivity of Polypyrrole vs. local charge is proposed and implemented in the non-linear model. Matching of the measured and simulated electrical response suggests that ionic conductivity of Polypyrrole decreases significantly at negative potential vs. silver/silver chloride and leads to reduced current in the cyclic voltammetry (CV) tests. The next stage is to relate the distributed charging of the polymer to actuation via the strain to charge ratio. Further work is also needed to identify ionic and electronic conductivities as well as capacitance as a function of oxidation state so that a fully predictive model can be created.

  12. A multiscale MDCT image-based breathing lung model with time-varying regional ventilation

    Science.gov (United States)

    Yin, Youbing; Choi, Jiwoong; Hoffman, Eric A.; Tawhai, Merryn H.; Lin, Ching-Long

    2012-01-01

    A novel algorithm is presented that links local structural variables (regional ventilation and deforming central airways) to global function (total lung volume) in the lung over three imaged lung volumes, to derive a breathing lung model for computational fluid dynamics simulation. The algorithm constitutes the core of an integrative, image-based computational framework for subject-specific simulation of the breathing lung. For the first time, the algorithm is applied to three multi-detector row computed tomography (MDCT) volumetric lung images of the same individual. A key technique in linking global and local variables over multiple images is an in-house mass-preserving image registration method. Throughout breathing cycles, cubic interpolation is employed to ensure C1 continuity in constructing time-varying regional ventilation at the whole lung level, flow rate fractions exiting the terminal airways, and airway deformation. The imaged exit airway flow rate fractions are derived from regional ventilation with the aid of a three-dimensional (3D) and one-dimensional (1D) coupled airway tree that connects the airways to the alveolar tissue. An in-house parallel large-eddy simulation (LES) technique is adopted to capture turbulent-transitional-laminar flows in both normal and deep breathing conditions. The results obtained by the proposed algorithm when using three lung volume images are compared with those using only one or two volume images. The three-volume-based lung model produces physiologically-consistent time-varying pressure and ventilation distribution. The one-volume-based lung model under-predicts pressure drop and yields un-physiological lobar ventilation. The two-volume-based model can account for airway deformation and non-uniform regional ventilation to some extent, but does not capture the non-linear features of the lung. PMID:23794749

  13. A multiscale MDCT image-based breathing lung model with time-varying regional ventilation

    Energy Technology Data Exchange (ETDEWEB)

    Yin, Youbing, E-mail: youbing-yin@uiowa.edu [Department of Mechanical and Industrial Engineering, The University of Iowa, Iowa City, IA 52242 (United States); IIHR-Hydroscience and Engineering, The University of Iowa, Iowa City, IA 52242 (United States); Department of Radiology, The University of Iowa, Iowa City, IA 52242 (United States); Choi, Jiwoong, E-mail: jiwoong-choi@uiowa.edu [Department of Mechanical and Industrial Engineering, The University of Iowa, Iowa City, IA 52242 (United States); IIHR-Hydroscience and Engineering, The University of Iowa, Iowa City, IA 52242 (United States); Hoffman, Eric A., E-mail: eric-hoffman@uiowa.edu [Department of Radiology, The University of Iowa, Iowa City, IA 52242 (United States); Department of Biomedical Engineering, The University of Iowa, Iowa City, IA 52242 (United States); Department of Internal Medicine, The University of Iowa, Iowa City, IA 52242 (United States); Tawhai, Merryn H., E-mail: m.tawhai@auckland.ac.nz [Auckland Bioengineering Institute, The University of Auckland, Auckland (New Zealand); Lin, Ching-Long, E-mail: ching-long-lin@uiowa.edu [Department of Mechanical and Industrial Engineering, The University of Iowa, Iowa City, IA 52242 (United States); IIHR-Hydroscience and Engineering, The University of Iowa, Iowa City, IA 52242 (United States)

    2013-07-01

    A novel algorithm is presented that links local structural variables (regional ventilation and deforming central airways) to global function (total lung volume) in the lung over three imaged lung volumes, to derive a breathing lung model for computational fluid dynamics simulation. The algorithm constitutes the core of an integrative, image-based computational framework for subject-specific simulation of the breathing lung. For the first time, the algorithm is applied to three multi-detector row computed tomography (MDCT) volumetric lung images of the same individual. A key technique in linking global and local variables over multiple images is an in-house mass-preserving image registration method. Throughout breathing cycles, cubic interpolation is employed to ensure C1 continuity in constructing time-varying regional ventilation at the whole lung level, flow rate fractions exiting the terminal airways, and airway deformation. The imaged exit airway flow rate fractions are derived from regional ventilation with the aid of a three-dimensional (3D) and one-dimensional (1D) coupled airway tree that connects the airways to the alveolar tissue. An in-house parallel large-eddy simulation (LES) technique is adopted to capture turbulent-transitional-laminar flows in both normal and deep breathing conditions. The results obtained by the proposed algorithm when using three lung volume images are compared with those using only one or two volume images. The three-volume-based lung model produces physiologically-consistent time-varying pressure and ventilation distribution. The one-volume-based lung model under-predicts pressure drop and yields un-physiological lobar ventilation. The two-volume-based model can account for airway deformation and non-uniform regional ventilation to some extent, but does not capture the non-linear features of the lung.

  14. Lapse of time effects on tax evasion in an agent-based econophysics model

    Science.gov (United States)

    Seibold, Götz; Pickhardt, Michael

    2013-05-01

    We investigate an inhomogeneous Ising model in the context of tax evasion dynamics where different types of agents are parameterized via local temperatures and magnetic fields. In particular, we analyze the impact of lapse of time effects (i.e. backauditing) and endogenously determined penalty rates on tax compliance. Both features contribute to a microfoundation of agent-based econophysics models of tax evasion.
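
    A toy sketch of such an agent-based Ising model of tax compliance (not the authors' code; the lattice size, agent-type parameters, audit probability, and lapse length are illustrative assumptions):

    ```python
    # Toy inhomogeneous Ising model of tax compliance: spin +1 = compliant,
    # -1 = evading; agent types differ in local temperature T and field B.
    # Back-auditing is mimicked by forcing audited evaders to comply for
    # `lapse` time steps. All parameters are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 50                                     # lattice side
    s = rng.choice([-1, 1], size=(N, N))       # compliance states
    T = rng.choice([1.0, 3.0], size=(N, N))    # heterogeneous temperatures
    B = rng.choice([0.0, 0.5], size=(N, N))    # heterogeneous fields
    frozen = np.zeros((N, N), dtype=int)       # forced-compliance countdown
    p_audit, lapse = 0.05, 10

    for t in range(200):
        for _ in range(N * N):                 # one Monte Carlo sweep
            i, j = rng.integers(N, size=2)
            if frozen[i, j] > 0:
                continue
            nb = s[(i + 1) % N, j] + s[(i - 1) % N, j] \
               + s[i, (j + 1) % N] + s[i, (j - 1) % N]
            dE = 2 * s[i, j] * (nb + B[i, j])  # cost of flipping the spin
            if dE <= 0 or rng.random() < np.exp(-dE / T[i, j]):
                s[i, j] *= -1
        audited = (s == -1) & (rng.random((N, N)) < p_audit)
        s[audited] = 1                         # audited evaders comply...
        frozen[audited] = lapse                # ...for `lapse` periods
        frozen[frozen > 0] -= 1
        if t % 50 == 0:
            print(t, "evader fraction:", (s == -1).mean())
    ```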

  15. Performance of joint modelling of time-to-event data with time-dependent predictors: an assessment based on transition to psychosis data

    Directory of Open Access Journals (Sweden)

    Hok Pan Yuen

    2016-10-01

    Full Text Available Joint modelling has emerged as a potential tool to analyse data with a time-to-event outcome and longitudinal measurements collected over a series of time points. Joint modelling involves the simultaneous modelling of the two components, namely the time-to-event component and the longitudinal component. The main challenges of joint modelling are its mathematical and computational complexity. Recent advances in joint modelling have seen the emergence of several software packages which have implemented some of the computational requirements to run joint models. These packages have opened the door for more routine use of joint modelling. Through simulations and real data based on transition to psychosis research, we compared joint model analysis of time-to-event outcomes with conventional Cox regression analysis. We also compared a number of packages for fitting joint models. Our results suggest that joint modelling does have advantages over conventional analysis despite its potential complexity. Our results also suggest that the results of analyses may depend on how the methodology is implemented.

  16. SEM Based CARMA Time Series Modeling for Arbitrary N.

    Science.gov (United States)

    Oud, Johan H L; Voelkle, Manuel C; Driver, Charles C

    2018-01-01

    This article explains in detail the state space specification and estimation of first and higher-order autoregressive moving-average models in continuous time (CARMA) in an extended structural equation modeling (SEM) context for N = 1 as well as N > 1. To illustrate the approach, simulations will be presented in which a single panel model (T = 41 time points) is estimated for a sample of N = 1,000 individuals as well as for samples of N = 100 and N = 50 individuals, followed by estimating 100 separate models for each of the one-hundred N = 1 cases in the N = 100 sample. Furthermore, we will demonstrate how to test the difference between the full panel model and each N = 1 model by means of a subject-group-reproducibility test. Finally, the proposed analyses will be applied in an empirical example, in which the relationships between mood at work and mood at home are studied in a sample of N = 55 women. All analyses are carried out by ctsem, an R-package for continuous time modeling, interfacing to OpenMx.

  17. Parameter Estimation of a Delay Time Model of Wearing Parts Based on Objective Data

    Directory of Open Access Journals (Sweden)

    Y. Tang

    2015-01-01

    Full Text Available The wearing parts of a system have a very high failure frequency, making it necessary to carry out continual functional inspections and maintenance to protect the system from unscheduled downtime. This allows for the collection of a large amount of maintenance data. Taking the unique characteristics of the wearing parts into consideration, we establish their respective delay time models in ideal inspection cases and nonideal inspection cases. The model parameters are estimated entirely using the collected maintenance data. Then, a likelihood function of all renewal events is derived based on their occurring probability functions, and the model parameters are calculated with the maximum likelihood function method, which is solved by the CRM. Finally, using two wearing parts from the oil and gas drilling industry as examples—the filter element and the blowout preventer rubber core—the parameters of the distribution function of the initial failure time and the delay time for each example are estimated, and their distribution functions are obtained. Such parameter estimation based on objective data will contribute to the optimization of the reasonable function inspection interval and will also provide some theoretical models to support the integrity management of equipment or systems.

  18. 3D airborne EM modeling based on the spectral-element time-domain (SETD) method

    Science.gov (United States)

    Cao, X.; Yin, C.; Huang, X.; Liu, Y.; Zhang, B., Sr.; Cai, J.; Liu, L.

    2017-12-01

    In the field of 3D airborne electromagnetic (AEM) modeling, both the finite-difference time-domain (FDTD) method and the finite-element time-domain (FETD) method have limitations: the FDTD method depends too much on the grids and time steps, while FETD requires a large number of grids for complex structures. We propose a spectral-element time-domain (SETD) method based on GLL interpolation basis functions for spatial discretization and the Backward Euler (BE) technique for time discretization. The spectral-element method is based on a weighted residual technique with polynomials as vector basis functions. It can deliver accurate results by increasing the order of the polynomials and suppressing spurious solutions. The BE method is a stable time discretization technique that has no limitation on time steps and can guarantee higher accuracy during the iteration process. To minimize the number of non-zero entries in the sparse matrix and obtain a diagonal mass matrix, we apply the reduced-order integral technique. A direct solver, with its speed independent of the condition number, is adopted for quickly solving the large-scale sparse linear equation system. To check the accuracy of our SETD algorithm, we compare our results with semi-analytical solutions for a three-layered earth model within the time range of 10^-6 to 10^-2 s for different physical meshes and SE orders. The results show that the relative errors for the magnetic field B and its time derivative dB/dt are both around 3-5%. Further, we calculate AEM responses for an AEM system over a 3D earth model in Figure 1. From numerical experiments for both the 1D and 3D models, we draw the conclusions that: 1) SETD can deliver accurate results for both dB/dt and B; 2) increasing the SE order improves the modeling accuracy for early to middle time channels when the EM field diffuses fast, so the high-order SE can model the detailed variation; 3) at very late time channels, increasing the SE order has little improvement on modeling accuracy, but the time interval plays

  19. Robust self-triggered model predictive control for constrained discrete-time LTI systems based on homothetic tubes

    NARCIS (Netherlands)

    Aydiner, E.; Brunner, F.D.; Heemels, W.P.M.H.; Allgower, F.

    2015-01-01

    In this paper we present a robust self-triggered model predictive control (MPC) scheme for discrete-time linear time-invariant systems subject to input and state constraints and additive disturbances. In self-triggered model predictive control, at every sampling instant an optimization problem based

  20. Value of time determination for the city of Alexandria based on a disaggregate binary mode choice model

    Directory of Open Access Journals (Sweden)

    Mounir Mahmoud Moghazy Abdel-Aal

    2017-12-01

    Full Text Available In the travel demand modeling field, mode choice is the most important decision, affecting the resulting road congestion. The behavioral nature of disaggregate models and their advantages over aggregate models have led to their extensive use. This paper proposes a framework to determine the value of time (VoT) for the city of Alexandria by calibrating a disaggregate, linear-in-parameters, utility-based binary logit mode choice model of the city. The mode attributes (travel time and travel cost) along with traveler attributes (car ownership and income) were selected as the utility attributes of the basic model formulation, which included 5 models. Three additional alternative utility formulations based on transformations of the mode attributes, including relative travel cost (cost divided by income) and log(travel time), as well as the combination of the two transformations, were introduced. The parameter estimation procedure was based on the likelihood maximization technique and was performed in EXCEL. Out of the 20 models estimated, only 2 are considered successful in terms of the correct signs of the parameter estimates and the magnitude of their significance (t-statistic values). The determination of the VoT also serves in model validation. The best two models estimated the value of time at LE 11.30/hr and LE 14.50/hr, with relative errors of +3.7% and +33.0%, respectively, against the hourly salary of LE 10.9/hr. The two proposed models prove to be sensitive to trip time and income levels as factors affecting the choice mechanism. A sensitivity analysis was performed and showed that the model with the higher relative error is marginally more robust. Keywords: Transportation modeling, Binary mode choice, Parameter estimation, Value of time, Likelihood maximization, Sensitivity analysis
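
    A minimal sketch of the likelihood-maximization step for a linear-in-parameters binary logit model (the paper performs this in EXCEL; scipy is used here, and the attributes, coefficients, and data below are synthetic assumptions):

    ```python
    # Hedged sketch of binary logit calibration by likelihood maximization.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    # X: differences in utility attributes between the two modes per traveler,
    # e.g. [travel time, travel cost, car ownership, income]
    X = rng.normal(size=(200, 4))
    beta_true = np.array([-0.8, -0.5, 0.6, 0.3])
    y = (rng.random(200) < 1 / (1 + np.exp(-X @ beta_true))).astype(float)

    def neg_log_likelihood(beta):
        p = 1 / (1 + np.exp(-(X @ beta)))      # logit choice probability
        p = np.clip(p, 1e-9, 1 - 1e-9)         # numerical safety
        return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

    beta_hat = minimize(neg_log_likelihood, np.zeros(4), method="BFGS").x
    # With time in minutes and cost in LE, VoT (LE/hr) = 60 * b_time / b_cost.
    print("estimates:", beta_hat, "VoT:", 60 * beta_hat[0] / beta_hat[1])
    ```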

  1. Modelling of the acid base properties of two thermophilic bacteria at different growth times

    Science.gov (United States)

    Heinrich, Hannah T. M.; Bremer, Phil J.; McQuillan, A. James; Daughney, Christopher J.

    2008-09-01

    Acid-base titrations and electrophoretic mobility measurements were conducted on the thermophilic bacteria Anoxybacillus flavithermus and Geobacillus stearothermophilus at two different growth times corresponding to exponential and stationary/death phase. The data showed significant differences between the two investigated growth times for both bacterial species. In stationary/death phase samples, cells were disrupted and their buffering capacity was lower than that of exponential phase cells. For G. stearothermophilus the electrophoretic mobility profiles changed dramatically. Chemical equilibrium models were developed to simultaneously describe the data from the titrations and the electrophoretic mobility measurements. A simple approach was developed to determine confidence intervals for the overall variance between the model and the experimental data, in order to identify statistically significant changes in model fit and thereby select the simplest model that was able to adequately describe each data set. Exponential phase cells of the investigated thermophiles had a higher total site concentration than the average found for mesophilic bacteria (based on a previously published generalised model for the acid-base behaviour of mesophiles), whereas the opposite was true for cells in stationary/death phase. The results of this study indicate that growth phase is an important parameter that can affect ion binding by bacteria, that growth phase should be considered when developing or employing chemical models for bacteria-bearing systems.

  2. Real-Time Model-Based Leak-Through Detection within Cryogenic Flow Systems

    Science.gov (United States)

    Walker, M.; Figueroa, F.

    2015-01-01

    The timely detection of leaks within cryogenic fuel replenishment systems is of significant importance to operators on account of the safety and economic impacts associated with material loss and operational inefficiencies. The associated loss of pressure control also affects the stability and the ability to control the phase of cryogenic fluids during replenishment operations. Current research dedicated to providing Prognostics and Health Management (PHM) coverage of such cryogenic replenishment systems has focused on the detection of leaks to atmosphere involving relatively simple model-based diagnostic approaches that, while effective, are unable to isolate the fault to specific piping system components. The authors have extended this research to focus on the detection of leaks through closed valves that are intended to isolate sections of the piping system from the flow and pressurization of cryogenic fluids. The described approach employs model-based detection of leak-through conditions based on correlations of pressure changes across isolation valves and attempts to isolate the faults to specific valves. Implementation of this capability is enabled by knowledge and information embedded in the domain model of the system. The approach has been used effectively to detect such leak-through faults during cryogenic operational testing at the Cryogenic Testbed at NASA's Kennedy Space Center.

  3. Hydrological real-time modelling in the Zambezi river basin using satellite-based soil moisture and rainfall data

    Directory of Open Access Journals (Sweden)

    P. Meier

    2011-03-01

    Full Text Available Reliable real-time forecasts of the discharge can provide valuable information for the management of a river basin system. For the management of ecological releases, even discharge forecasts with moderate accuracy can be beneficial. Sequential data assimilation using the Ensemble Kalman Filter provides a tool that is both efficient and robust for a real-time modelling framework. One key parameter in a hydrological system is the soil moisture, which can now be characterized by satellite-based measurements. A forecasting framework for the prediction of discharges is developed and applied to three different sub-basins of the Zambezi River Basin. The model is solely based on remote sensing data providing soil moisture and rainfall estimates. The soil moisture product used is based on the back-scattering intensity of a radar signal measured by a radar scatterometer. These soil moisture data correlate well with the measured discharge of the corresponding watershed if the data are shifted by a time lag which depends on the size and the dominant runoff process of the catchment. This time lag is the basis for the applicability of the soil moisture data to hydrological forecasts. The conceptual model developed is based on two storage compartments. The processes modeled include evaporation losses, infiltration and percolation. The application of this model in a real-time modelling framework yields good results in watersheds where soil storage is an important factor. The lead time of the forecast depends on the size and the retention capacity of the watershed. For the largest watershed, a forecast over 40 days can be provided. However, the quality of the forecast increases significantly with decreasing prediction time. In a watershed with little soil storage and a quick response to rainfall events, the performance is relatively poor and the lead time is only 10 days.
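
    A compact sketch of the Ensemble Kalman Filter update at the heart of such a framework, with the two-compartment storage model reduced to a toy state-propagation function (all constants, the observation operator, and the data are illustrative assumptions):

    ```python
    # Hedged sketch of one EnKF forecast/analysis cycle for a toy
    # two-storage hydrological model. Not the paper's code.
    import numpy as np

    rng = np.random.default_rng(0)
    n_ens, n_state = 50, 2                 # ensemble size; (soil, groundwater)

    def step(x, rain):
        """Toy two-storage water balance: infiltration, losses, percolation."""
        soil = x[0] + 0.8 * rain - 0.1 * x[0]   # infiltration minus evaporation
        gw = x[1] + 0.05 * x[0]                 # percolation to the lower store
        return np.array([soil, gw])

    ens = rng.normal(50.0, 5.0, size=(n_ens, n_state))    # initial ensemble
    for rain, q_obs in [(10.0, 3.2), (0.0, 2.9), (5.0, 3.0)]:
        # forecast: propagate members and add model-error noise
        ens = np.array([step(x, rain) for x in ens]) + rng.normal(0, 1.0, ens.shape)
        h = 0.05 * ens[:, 1]               # observation operator: discharge
        C = np.cov(np.vstack([ens.T, h]))  # joint state/observation covariance
        gain = C[:n_state, -1] / (C[-1, -1] + 0.1 ** 2)   # 0.1 = obs error std
        for k in range(n_ens):             # update with perturbed observations
            ens[k] += gain * (q_obs + rng.normal(0, 0.1) - h[k])
        print("analysis mean state:", ens.mean(axis=0))
    ```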

  4. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method

    OpenAIRE

    Jun-He Yang; Ching-Hsue Cheng; Chia-Pan Chan

    2017-01-01

    Reservoirs are important for households and impact the national economy. This paper proposed a time-series forecasting model based on estimating a missing value followed by variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated dataset based on ordering of the data as a research dataset. The proposed time-series forecasting m...

  5. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method.

    Science.gov (United States)

    Yang, Jun-He; Cheng, Ching-Hsue; Chan, Chia-Pan

    2017-01-01

    Reservoirs are important for households and impact the national economy. This paper proposed a time-series forecasting model based on estimating missing values followed by variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated research dataset based on the ordering of the data. The proposed time-series forecasting model has three foci. First, this study uses five imputation methods to treat the missing values rather than simply deleting them. Second, we identified the key variables via factor analysis and then deleted the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level, which is compared with the listing method in terms of forecasting error. The experimental results indicate that the Random Forest forecasting model, when applied to variable selection with full variables, has better forecasting performance than the listing model. In addition, the experiment shows that the proposed variable selection can help the five forecasting methods used here to improve their forecasting capability.
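
    A hedged sketch of the impute-select-forecast pipeline (mean imputation and impurity-based selection stand in for the paper's five imputation methods and factor analysis; columns and data are synthetic assumptions):

    ```python
    # Sketch: impute missing values, select variables, fit a Random Forest
    # to forecast the next-day water level. Not the paper's configuration.
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.impute import SimpleImputer

    df = pd.DataFrame(                        # daily reservoir + weather data
        np.random.default_rng(0).normal(size=(365, 4)),
        columns=["rainfall", "inflow", "temperature", "level"],
    )
    df.iloc[::30, 0] = np.nan                 # synthetic gaps in rainfall

    X = SimpleImputer(strategy="mean").fit_transform(df.drop(columns="level"))
    y = df["level"].shift(-1).to_numpy()[:-1]  # next-day level as target
    X = X[:-1]

    rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
    keep = rf.feature_importances_ > 0.1       # crude variable selection
    rf_sel = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:, keep], y)
    print("kept:", df.drop(columns="level").columns[keep].tolist())
    print("next-day forecast:", rf_sel.predict(X[-1:, keep])[0])
    ```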

  6. Impedance based time-domain modeling of lithium-ion batteries: Part I

    Science.gov (United States)

    Gantenbein, Sophia; Weiss, Michael; Ivers-Tiffée, Ellen

    2018-03-01

    This paper presents a novel lithium-ion cell model, which simulates the current-voltage characteristic as a function of state of charge (0%-100%) and temperature (0-30 °C). It predicts the cell voltage at each operating point by calculating the total overvoltage from the individual contributions of (i) the ohmic loss η0, (ii) the charge transfer loss of the cathode ηCT,C, (iii) the charge transfer loss and the solid electrolyte interface loss of the anode ηSEI/CT,A, and (iv) the solid state and electrolyte diffusion loss ηDiff,A/C/E. This approach is based on a physically meaningful equivalent circuit model, which is parametrized by electrochemical impedance spectroscopy and time domain measurements, covering a wide frequency range from MHz to μHz. As an example, the model is parametrized for a commercial, high-power 350 mAh graphite/LiNiCoAlO2-LiCoO2 pouch cell and validated by continuous discharge and charge curves at varying temperature. For the first time, the physical background of the model allows the operator to draw conclusions about the performance-limiting factor at various operating conditions. Not only can the model help to choose application-optimized cell characteristics, but it can also support the battery management system when taking corrective actions during operation.

  7. Verifying real-time systems against scenario-based requirements

    DEFF Research Database (Denmark)

    Larsen, Kim Guldstrand; Li, Shuhao; Nielsen, Brian

    2009-01-01

    We propose an approach to automatic verification of real-time systems against scenario-based requirements. A real-time system is modeled as a network of Timed Automata (TA), and a scenario-based requirement is specified as a Live Sequence Chart (LSC). We define a trace-based semantics for a kernel...... subset of the LSC language. By equivalently translating an LSC chart into an observer TA and then non-intrusively composing this observer with the original system model, the problem of verifying a real-time system against a scenario-based requirement reduces to a classical real-time model checking...

  8. A discrete time-varying internal model-based approach for high precision tracking of a multi-axis servo gantry.

    Science.gov (United States)

    Zhang, Zhen; Yan, Peng; Jiang, Huan; Ye, Peiqing

    2014-09-01

    In this paper, we consider the discrete time-varying internal model-based control design for high precision tracking of complicated reference trajectories generated by time-varying systems. Based on a novel parallel time-varying internal model structure, asymptotic tracking conditions for the design of internal model units are developed, and a low order robust time-varying stabilizer is further synthesized. In a discrete time setting, the high precision tracking control architecture is deployed on a Voice Coil Motor (VCM) actuated servo gantry system, where numerical simulations and real time experimental results are provided, achieving tracking errors of around 3.5‰ for frequency-varying signals. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  9. A turbulent time scale based k–ε model for probability density function modeling of turbulence/chemistry interactions: Application to HCCI combustion

    International Nuclear Information System (INIS)

    Maroteaux, Fadila; Pommier, Pierre-Lin

    2013-01-01

    Highlights: ► The turbulent time evolution is introduced in the stochastic modeling approach. ► The particle number is optimized through a restricted initial distribution. ► The initial distribution amplitude is modeled by the magnitude of the turbulence field. -- Abstract: Homogeneous Charge Compression Ignition (HCCI) engine technology is known as an alternative to reduce NOx and particulate matter (PM) emissions. As shown by several experimental studies published in the literature, the ideally homogeneous mixture charge becomes stratified in composition and temperature, and turbulent mixing is found to play an important role in controlling the combustion progress. In a previous study, an IEM model (Interaction by Exchange with the Mean) was used to describe the micromixing in a stochastic reactor model that simulates the HCCI process. The IEM model is a deterministic model, based on the principle that the scalar value approaches the mean value over the entire volume with a characteristic mixing time. In this previous model, the turbulent time scale was treated as a fixed parameter. The present study focuses on the development of a micro-mixing time model, in order to take into account the physical phenomena it stands for. For that purpose, a (k–ε) model is used to express this micro-mixing time. The turbulence model used here is based on a zero-dimensional energy cascade applied during the compression and expansion strokes: mean kinetic energy is converted to turbulent kinetic energy, and turbulent kinetic energy is converted to heat through viscous dissipation. In addition, a relation for calculating the amplitude of the initial heterogeneities is proposed. The comparison of simulation results against experimental data shows overall satisfactory agreement with a variable turbulent time scale
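
    A standard way to tie the micro-mixing time to the turbulence field in such a (k–ε) approach is shown below; the model constant C_φ and the cascade form are textbook assumptions, not taken from the abstract:

    ```latex
    % Micro-mixing time tied to the k-epsilon field (illustrative form):
    % tau_m scales with the integral turbulent time scale k/epsilon.
    \tau_m = C_{\phi}\,\frac{k}{\varepsilon},
    \qquad
    \frac{dk}{dt} = P_k - \varepsilon
    % P_k: production of k from the mean kinetic energy (the cascade),
    % epsilon: viscous dissipation converting k to heat.
    ```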

  10. Modeling of the attenuation of stress waves in concrete based on the Rayleigh damping model using time-reversal and PZT transducers

    Science.gov (United States)

    Tian, Zhen; Huo, Linsheng; Gao, Weihang; Li, Hongnan; Song, Gangbing

    2017-10-01

    Wave-based concrete structural health monitoring has attracted much attention. A stress wave experiences significant attenuation in concrete; however, there is no unified method for predicting the attenuation coefficient of the stress wave. In this paper, a simple and effective absorption attenuation model of stress waves in concrete is developed based on the Rayleigh damping model, which indicates that the absorption attenuation coefficient of stress waves in concrete is directly proportional to the square of the stress wave frequency when the damping ratio is small. In order to verify the theoretical model, related experiments were carried out. During the experiments, a concrete beam was designed in which d33-mode piezoelectric smart aggregates were embedded to detect the propagation of stress waves. It is difficult to distinguish direct stress waves due to the complex propagation paths and the reflection and scattering of stress waves in concrete. Hence, as another innovation of this paper, a new method for computing the absorption attenuation coefficient based on the time-reversal method is developed. Due to the self-adaptive focusing properties of the time-reversal method, the time-reversed stress wave focuses and generates a peak value. The time-reversal method eliminates the adverse effects of multipath propagation, reflection, and scattering. The absorption attenuation coefficient is computed by analyzing the changes in the peak value of the time-reversal focused signal. Finally, the experimental results are found to be in good agreement with the theoretical model.
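
    The frequency-squared dependence quoted above is consistent with the standard Rayleigh damping form; a brief sketch of the reasoning (textbook results, with notation assumed here rather than taken from the paper):

    ```latex
    % Rayleigh damping C = a_0 M + a_1 K gives the modal damping ratio
    \zeta(\omega) = \frac{a_0}{2\omega} + \frac{a_1\,\omega}{2}.
    % For stiffness-proportional damping (a_0 \approx 0), small \zeta, and
    % wave speed c, the spatial absorption coefficient becomes
    \alpha(\omega) \approx \frac{\zeta(\omega)\,\omega}{c}
                   = \frac{a_1\,\omega^{2}}{2c} \propto f^{2}.
    ```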

  11. Gaussian Mixture Random Coefficient model based framework for SHM in structures with time-dependent dynamics under uncertainty

    Science.gov (United States)

    Avendaño-Valencia, Luis David; Fassois, Spilios D.

    2017-12-01

    The problem of vibration-based damage diagnosis in structures characterized by time-dependent dynamics under significant environmental and/or operational uncertainty is considered. A stochastic framework consisting of a Gaussian Mixture Random Coefficient model of the uncertain time-dependent dynamics under each structural health state, proper estimation methods, and Bayesian or minimum distance type decision making, is postulated. The Random Coefficient (RC) time-dependent stochastic model with coefficients following a multivariate Gaussian Mixture Model (GMM) allows for significant flexibility in uncertainty representation. Certain of the model parameters are estimated via a simple procedure which is founded on the related Multiple Model (MM) concept, while the GMM weights are explicitly estimated for optimizing damage diagnostic performance. The postulated framework is demonstrated via damage detection in a simple simulated model of a quarter-car active suspension with time-dependent dynamics and considerable uncertainty on the payload. Comparisons with a simpler Gaussian RC model based method are also presented, with the postulated framework shown to be capable of offering considerable improvement in diagnostic performance.

  12. Modeling and simulation of tumor-influenced high resolution real-time physics-based breast models for model-guided robotic interventions

    Science.gov (United States)

    Neylon, John; Hasse, Katelyn; Sheng, Ke; Santhanam, Anand P.

    2016-03-01

    Breast radiation therapy is typically delivered to the patient in either supine or prone position. Each of these positioning systems has its limitations in terms of tumor localization, dose to the surrounding normal structures, and patient comfort. We envision developing a pneumatically controlled breast immobilization device that will enable the benefits of both supine and prone positioning. In this paper, we present a physics-based breast deformable model that aids in both the design of the breast immobilization device as well as a control module for the device during every day positioning. The model geometry is generated from a subject's CT scan acquired during the treatment planning stage. A GPU based deformable model is then generated for the breast. A mass-spring-damper approach is then employed for the deformable model, with the spring modeled to represent a hyperelastic tissue behavior. Each voxel of the CT scan is then associated with a mass element, which gives the model its high resolution nature. The subject specific elasticity is then estimated from a CT scan in prone position. Our results show that the model can deform at >60 deformations per second, which satisfies the real-time requirement for robotic positioning. The model interacts with a computer designed immobilization device to position the breast and tumor anatomy in a reproducible location. The design of the immobilization device was also systematically varied based on the breast geometry, tumor location, elasticity distribution and the reproducibility of the desired tumor location.
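
    A minimal sketch of one integration step for a voxel-style mass-spring-damper model of the kind described (semi-implicit Euler; the toy hyperelastic spring law and all constants are illustrative assumptions):

    ```python
    # One semi-implicit Euler step for a toy mass-spring-damper mesh.
    # Spring law and constants are assumptions, not the paper's values.
    import numpy as np

    def step(pos, vel, edges, rest_len, m=1.0, k=50.0, c=0.5, dt=1e-3):
        """Advance node positions and velocities by one time step."""
        force = np.zeros_like(pos)
        for (i, j), L0 in zip(edges, rest_len):
            d = pos[j] - pos[i]
            L = np.linalg.norm(d)
            stretch = L / L0
            # toy hyperelastic law: stiffness grows with stretch
            f = k * (stretch - 1.0) * (1.0 + (stretch - 1.0) ** 2) * d / L
            force[i] += f
            force[j] -= f
        force -= c * vel                   # viscous damping
        vel = vel + dt * force / m
        return pos + dt * vel, vel

    # two nodes joined by one spring, initially stretched by 20%
    pos = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0]])
    vel = np.zeros_like(pos)
    for _ in range(1000):
        pos, vel = step(pos, vel, edges=[(0, 1)], rest_len=[1.0])
    print("final separation:", np.linalg.norm(pos[1] - pos[0]))
    ```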

  13. Time-varying metamaterials based on graphene-wrapped microwires: Modeling and potential applications

    Science.gov (United States)

    Salary, Mohammad Mahdi; Jafar-Zanjani, Samad; Mosallaei, Hossein

    2018-03-01

    The successful realization of metamaterials and metasurfaces requires the judicious choice of constituent elements. In this paper, we demonstrate the implementation of time-varying metamaterials in the terahertz frequency regime by utilizing graphene-wrapped microwires as building blocks and modulation of graphene conductivity through exterior electrical gating. These elements enable enhancement of light-graphene interaction by utilizing optical resonances associated with Mie scattering, yielding a large tunability and modulation depth. We develop a semianalytical framework based on transition-matrix formulation for modeling and analysis of periodic and aperiodic arrays of such time-varying building blocks. The proposed method is validated against full-wave numerical results obtained using the finite-difference time-domain method. It provides an ideal tool for mathematical synthesis and analysis of space-time gradient metamaterials, eliminating the need for computationally expensive numerical models. Moreover, it allows for a wider exploration of exotic space-time scattering phenomena in time-modulated metamaterials. We apply the method to explore the role of modulation parameters in the generation of frequency harmonics and their emerging wavefronts. Several potential applications of such platforms are demonstrated, including frequency conversion, holographic generation of frequency harmonics, and spatiotemporal manipulation of light. The presented results provide key physical insights to design time-modulated functional metadevices using various building blocks and open up new directions in the emerging paradigm of time-modulated metamaterials.

  14. Conceptual framework for model-based analysis of residence time distribution in twin-screw granulation

    DEFF Research Database (Denmark)

    Kumar, Ashish; Vercruysse, Jurgen; Vanhoorne, Valerie

    2015-01-01

    Twin-screw granulation is a promising continuous alternative for traditional batchwise wet granulation processes. The twin-screw granulator (TSG) screws consist of transport and kneading element modules. Therefore, the granulation to a large extent is governed by the residence time distribution...... within each module where different granulation rate processes dominate over others. Currently, experimental data is used to determine the residence time distributions. In this study, a conceptual model based on classical chemical engineering methods is proposed to better understand and simulate...... the residence time distribution in a TSG. The experimental data were compared with the proposed most suitable conceptual model to estimate the parameters of the model and to analyse and predict the effects of changes in number of kneading discs and their stagger angle, screw speed and powder feed rate...

  15. Research on Adaptive Neural Network Control System Based on Nonlinear U-Model with Time-Varying Delay

    Directory of Open Access Journals (Sweden)

    Fengxia Xu

    2014-01-01

    Full Text Available The U-model can approximate a large class of smooth nonlinear time-varying delay systems to any accuracy by using a time-varying delay parameter polynomial. This paper proposes a new approach, namely, the U-model approach, to solving the problems of analysis and synthesis for nonlinear systems. Based on the idea of the discrete-time U-model with time-varying delay, an adaptive neural network identification algorithm is given for the nonlinear model. Then, the controller is designed using the Newton-Raphson formula, and a stability analysis is given for the closed-loop nonlinear systems. Finally, illustrative examples are given to show the validity and applicability of the obtained results.

  16. A model for food and stimulus changes that signal time-based contingency changes.

    Science.gov (United States)

    Cowie, Sarah; Davison, Michael; Elliffe, Douglas

    2014-11-01

    When the availability of reinforcers depends on time since an event, time functions as a discriminative stimulus. Behavioral control by elapsed time is generally weak, but may be enhanced by added stimuli that act as additional time markers. The present paper assessed the effect of brief and continuous added stimuli on control by time-based changes in the reinforcer differential, using a procedure in which the local reinforcer ratio reversed at a fixed time after the most recent reinforcer delivery. Local choice was enhanced by the presentation of the brief stimuli, even when the stimulus change signalled only elapsed time, but not the local reinforcer ratio. The effect of the brief stimulus presentations on choice decreased as a function of time since the most recent stimulus change. We compared the ability of several versions of a model of local choice to describe these data. The data were best described by a model which assumed that error in discriminating the local reinforcer ratio arose from imprecise discrimination of reinforcers in both time and space, suggesting that timing behavior is controlled not only by discrimination of elapsed time, but by discrimination of the reinforcer differential in time. © Society for the Experimental Analysis of Behavior.

  17. Adaptive time-variant models for fuzzy-time-series forecasting.

    Science.gov (United States)

    Wong, Wai-Keung; Bai, Enjian; Chu, Alice Wai-Ching

    2010-12-01

    A fuzzy time series has been applied to the prediction of enrollment, temperature, stock indices, and other domains. Related studies mainly focus on three factors, namely, the partition of discourse, the content of forecasting rules, and the methods of defuzzification, all of which greatly influence the prediction accuracy of forecasting models. These studies use fixed analysis window sizes for forecasting. In this paper, an adaptive time-variant fuzzy-time-series forecasting model (ATVF) is proposed to improve forecasting accuracy. The proposed model automatically adapts the analysis window size of fuzzy time series based on the prediction accuracy in the training phase and uses heuristic rules to generate forecasting values in the testing phase. The performance of the ATVF model is tested using both simulated and actual time series including the enrollments at the University of Alabama, Tuscaloosa, and the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX). The experiment results show that the proposed ATVF model achieves a significant improvement in forecasting accuracy as compared to other fuzzy-time-series forecasting models.

  18. Time-Weighted Balanced Stochastic Model Reduction

    DEFF Research Database (Denmark)

    Tahavori, Maryamsadat; Shaker, Hamid Reza

    2011-01-01

    A new relative error model reduction technique for linear time invariant (LTI) systems is proposed in this paper. Both continuous and discrete time systems can be reduced within this framework. The proposed model reduction method is mainly based upon time-weighted balanced truncation and a recently...

  19. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method

    Directory of Open Access Journals (Sweden)

    Jun-He Yang

    2017-01-01

    Full Text Available Reservoirs are important for households and impact the national economy. This paper proposed a time-series forecasting model based on estimating missing values followed by variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated research dataset based on the ordering of the data. The proposed time-series forecasting model has three foci. First, this study uses five imputation methods to treat the missing values rather than simply deleting them. Second, we identified the key variables via factor analysis and then deleted the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level, which is compared with the listing method in terms of forecasting error. The experimental results indicate that the Random Forest forecasting model, when applied to variable selection with full variables, has better forecasting performance than the listing model. In addition, the experiment shows that the proposed variable selection can help the five forecasting methods used here to improve their forecasting capability.

  20. A Distributed Web-based Solution for Ionospheric Model Real-time Management, Monitoring, and Short-term Prediction

    Science.gov (United States)

    Kulchitsky, A.; Maurits, S.; Watkins, B.

    2006-12-01

    With the widespread availability of the Internet today, many people can monitor various scientific research activities. It is important to accommodate this interest by providing on-line access to dynamic and illustrative Web resources that can demonstrate different aspects of ongoing research. It is especially important to explain these research activities to high school and undergraduate students, thereby providing more information for making decisions concerning their future studies. Such Web resources are also important for clarifying scientific research for the general public, in order to achieve better awareness of research progress in various fields. Particularly rewarding is the dissemination of information about ongoing projects within universities and research centers to their local communities. The benefits of this type of scientific outreach are mutual, since the development of Web-based automatic systems is a prerequisite for many research projects targeting real-time monitoring and/or modeling of natural conditions. Continuous operation of such systems provides ongoing research opportunities for statistically massive validation of the models as well. We have developed a Web-based system to run the University of Alaska Fairbanks Polar Ionospheric Model in real time. This system makes use of networking and computational resources at the Arctic Region Supercomputing Center, and was designed to be portable among various operating systems and computational resources. Its components can be installed across different computers, separating Web servers and computational engines. The core of the system is a Real-Time Management module (RMM) written in Python, which coordinates remote input data transfers, the ionospheric model runs, MySQL database population, and the PHP scripts for Web-page preparation. The RMM downloads current geophysical inputs as soon as they become available at different on-line depositories. This information is processed to

  1. Comparing and Contrasting Traditional Membrane Bioreactor Models with Novel Ones Based on Time Series Analysis

    Directory of Open Access Journals (Sweden)

    Parneet Paul

    2013-02-01

    Full Text Available The computer modelling and simulation of wastewater treatment plants and their specific technologies, such as membrane bioreactors (MBRs), are becoming increasingly useful to consultant engineers when designing, upgrading, retrofitting, operating and controlling these plants. This research uses traditional phenomenological mechanistic models based on MBR filtration and biochemical processes to measure the effectiveness of alternative and novel time series models based upon input–output system identification methods. Both model types are calibrated and validated using similar plant layouts and data sets derived for this purpose. Results prove that although both approaches have their advantages, they also have specific disadvantages. In conclusion, the MBR plant designer and/or operator who wishes to use good quality, calibrated models to gain a better understanding of their process should carefully consider which model type to select based upon their initial modelling objectives.

  2. Informing hydrological models with ground-based time-lapse relative gravimetry: potential and limitations

    DEFF Research Database (Denmark)

    Bauer-Gottwein, Peter; Christiansen, Lars; Rosbjerg, Dan

    2011-01-01

    parameter uncertainty decreased significantly when TLRG data was included in the inversion. The forced infiltration experiment caused changes in unsaturated zone storage, which were monitored using TLRG and ground-penetrating radar. A numerical unsaturated zone model was subsequently conditioned on both......Coupled hydrogeophysical inversion emerges as an attractive option to improve the calibration and predictive capability of hydrological models. Recently, ground-based time-lapse relative gravity (TLRG) measurements have attracted increasing interest because there is a direct relationship between...

  3. ARIMA-Based Time Series Model of Stochastic Wind Power Generation

    DEFF Research Database (Denmark)

    Chen, Peiyuan; Pedersen, Troels; Bak-Jensen, Birgitte

    2010-01-01

    This paper proposes a stochastic wind power model based on an autoregressive integrated moving average (ARIMA) process. The model takes into account the nonstationarity and physical limits of stochastic wind power generation. The model is constructed based on wind power measurement of one year from...... the Nysted offshore wind farm in Denmark. The proposed limited-ARIMA (LARIMA) model introduces a limiter and characterizes the stochastic wind power generation by mean level, temporal correlation and driving noise. The model is validated against the measurement in terms of temporal correlation...... and probability distribution. The LARIMA model outperforms a first-order transition matrix based discrete Markov model in terms of temporal correlation, probability distribution and model parameter number. The proposed LARIMA model is further extended to include the monthly variation of the stochastic wind power...
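
The LARIMA construction, an ARMA-type core capturing mean level, temporal correlation and driving noise followed by a limiter enforcing the physical limits of wind power, can be sketched as follows; the AR(1) core and all parameter values are illustrative assumptions, not the fitted Nysted model.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p_rated = 1_000, 1.0              # number of samples; rated power (normalized)

phi, mu, sigma = 0.98, 0.4, 0.05     # AR(1) stand-in for the ARMA core
x = np.empty(n)
x[0] = mu
for t in range(1, n):
    # mean level + temporal correlation + driving noise ...
    x[t] = mu + phi * (x[t - 1] - mu) + sigma * rng.standard_normal()

# ... and the limiter enforces the physical limits of wind power generation.
power = np.clip(x, 0.0, p_rated)

share = ((power == 0.0) | (power == p_rated)).mean()
print("share of samples pinned at the limits:", round(float(share), 3))
```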

  4. Autoregressive-model-based missing value estimation for DNA microarray time series data.

    Science.gov (United States)

    Choong, Miew Keen; Charbit, Maurice; Yan, Hong

    2009-01-01

Missing value estimation is important in DNA microarray data analysis. A number of algorithms have been developed to solve this problem, but they have several limitations. Most existing algorithms are not able to deal with the situation where a particular time point (column) of the data is missing entirely. In this paper, we present an autoregressive-model-based missing value estimation method (ARLSimpute) that takes into account the dynamic property of microarray temporal data and the local similarity structures in the data. ARLSimpute is especially effective for the situation where a particular time point contains many missing values or where the entire time point is missing. Experimental results suggest that the proposed algorithm is an accurate missing value estimator in comparison with other imputation methods on simulated as well as real microarray time series datasets.
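
In the same spirit, the sketch below fits autoregressive coefficients by least squares across genes and uses them to reconstruct a wholly missing time point. It is a bare-bones illustration on synthetic data, not the ARLSimpute algorithm itself (which also exploits local similarity structures).

```python
import numpy as np

rng = np.random.default_rng(6)
genes, T, p = 200, 12, 3                # genes x time points; AR order

# Synthetic expression: each gene follows a shared AR(3) process plus noise.
coef = np.array([0.5, 0.3, -0.2])       # weights on lags 1, 2, 3
X = rng.standard_normal((genes, T))
for t in range(p, T):
    X[:, t] = X[:, t - p:t] @ coef[::-1] + 0.1 * rng.standard_normal(genes)

missing_t = 8                           # pretend this whole time point is missing

# Fit AR coefficients by least squares, pooling all genes and earlier times.
A = np.vstack([X[:, t - p:t] for t in range(p, missing_t)])
b = np.concatenate([X[:, t] for t in range(p, missing_t)])
ar, *_ = np.linalg.lstsq(A, b, rcond=None)

# Impute the missing column from the p preceding time points.
imputed = X[:, missing_t - p:missing_t] @ ar
rmse = np.sqrt(np.mean((imputed - X[:, missing_t]) ** 2))
print("imputation RMSE:", round(float(rmse), 3))
```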

  5. TIME SERIES ANALYSIS USING A UNIQUE MODEL OF TRANSFORMATION

    Directory of Open Access Journals (Sweden)

    Goran Klepac

    2007-12-01

Full Text Available The REFII model is an authorial mathematical model for time series data mining. The main purpose of the model is to automate time series analysis through a unique transformation model of time series. An advantage of this approach to time series analysis is that it links different methods for time series analysis, connecting traditional data mining tools for time series with newly constructed algorithms for time series analysis. It is worth mentioning that the REFII model is not a closed system, i.e., it is not restricted to a finite set of methods. First and foremost, it is a model for the transformation of time-series values, which prepares the data used by different sets of methods based on the same transformation model within the problem-space domain. The REFII model offers a new approach to time series analysis based on a unique model of transformation, which serves as a basis for all kinds of time series analysis. The advantage of the REFII model is its possible application in many different areas such as finance, medicine, voice recognition, face recognition and text mining.

  6. Accounting for large deformations in real-time simulations of soft tissues based on reduced-order models.

    Science.gov (United States)

    Niroomandi, S; Alfaro, I; Cueto, E; Chinesta, F

    2012-01-01

Model reduction techniques have been shown to constitute a valuable tool for real-time simulation in surgical environments and other fields. However, some limitations imposed by real-time constraints have not yet been overcome. One such limitation is the severe time restriction (a resolution frequency of 500 Hz) that precludes the use of Newton-like schemes for solving non-linear models such as the ones usually employed for modeling biological tissues. In this work we present a technique able to deal with geometrically non-linear models, based on model reduction techniques together with an efficient non-linear solver. Examples of the performance of the technique are given. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  7. On the validity of travel-time based nonlinear bioreactive transport models in steady-state flow.

    Science.gov (United States)

    Sanz-Prat, Alicia; Lu, Chuanhe; Finkel, Michael; Cirpka, Olaf A

    2015-01-01

    Travel-time based models simplify the description of reactive transport by replacing the spatial coordinates with the groundwater travel time, posing a quasi one-dimensional (1-D) problem and potentially rendering the determination of multidimensional parameter fields unnecessary. While the approach is exact for strictly advective transport in steady-state flow if the reactive properties of the porous medium are uniform, its validity is unclear when local-scale mixing affects the reactive behavior. We compare a two-dimensional (2-D), spatially explicit, bioreactive, advective-dispersive transport model, considered as "virtual truth", with three 1-D travel-time based models which differ in the conceptualization of longitudinal dispersion: (i) neglecting dispersive mixing altogether, (ii) introducing a local-scale longitudinal dispersivity constant in time and space, and (iii) using an effective longitudinal dispersivity that increases linearly with distance. The reactive system considers biodegradation of dissolved organic carbon, which is introduced into a hydraulically heterogeneous domain together with oxygen and nitrate. Aerobic and denitrifying bacteria use the energy of the microbial transformations for growth. We analyze six scenarios differing in the variance of log-hydraulic conductivity and in the inflow boundary conditions (constant versus time-varying concentration). The concentrations of the 1-D models are mapped to the 2-D domain by means of the kinematic (for case i), and mean groundwater age (for cases ii & iii), respectively. The comparison between concentrations of the "virtual truth" and the 1-D approaches indicates extremely good agreement when using an effective, linearly increasing longitudinal dispersivity in the majority of the scenarios, while the other two 1-D approaches reproduce at least the concentration tendencies well. At late times, all 1-D models give valid approximations of two-dimensional transport. We conclude that the

  8. Characterization of Models for Time-Dependent Behavior of Soils

    DEFF Research Database (Denmark)

    Liingaard, Morten; Augustesen, Anders; Lade, Poul V.

    2004-01-01

  Different classes of constitutive models have been developed to capture the time-dependent viscous phenomena (creep, stress relaxation, and rate effects) observed in soils. Models based on empirical, rheological, and general stress-strain-time concepts have been studied. The first part....... Special attention is paid to elastoviscoplastic models that combine inviscid elastic and time-dependent plastic behavior. Various general elastoviscoplastic models can roughly be divided into two categories: models based on the concept of overstress and models based on nonstationary flow surface theory...

  9. A new incomplete-repair model based on a ''reciprocal-time'' pattern of sublethal damage repair

    International Nuclear Information System (INIS)

    Dale, R.G.; Fowler, J.F.

    1999-01-01

    A radiobiological model for closely spaced non-instantaneous radiation fractions is presented, based on the premise that the time process of sublethal damage (SLD) repair is 'reciprocal-time' (second order), rather than exponential (first order), in form. The initial clinical implications of such an incomplete-repair model are assessed. A previously derived linear-quadratic-based model was revised to take account of the possibility that SLD may repair with time such that the fraction of an element of initial damage remaining at time t is given as 1/(1+zt), where z is an appropriate rate constant; z is the reciprocal of the first half-time (τ) of repair. The general equation so derived for incomplete repair is applicable to all types of radiotherapy delivered at high, low and medium dose-rate in fractions delivered at regular time intervals. The model allows both the fraction duration and interfraction intervals to vary between zero and infinity. For any given value of z, reciprocal repair is associated with an apparent 'slowing-down' in the SLD repair rate as treatment proceeds. The instantaneous repair rates are not directly governed by total dose or dose per fraction, but are influenced by the treatment duration and individual fraction duration. Instantaneous repair rates of SLD appear to be slower towards the end of a continuous treatment, and are also slower following 'long' fractions than they are following 'short' fractions. The new model, with its single repair-rate parameter, is shown to be capable of providing a degree of quantitative explanation for some enigmas that have been encountered in clinical studies. A single-component reciprocal repair process provides an alternative explanation for the apparent existence of a range of repair rates in human tissues, and which have hitherto been explained by postulating the existence of a multi-exponential repair process. The build-up of SLD over extended treatments is greater than would be inferred using a
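
The contrast between the two repair kinetics is easy to tabulate: under the reciprocal-time law the fraction of sublethal damage remaining is 1/(1 + zt), versus 0.5^(t/tau) for first-order repair with a matched first half-time tau = 1/z. The numbers below are purely illustrative, not clinical values.

```python
import numpy as np

tau = 1.0                      # illustrative first half-time of repair (h)
z = 1.0 / tau                  # rate constant: reciprocal of the first half-time
t = np.linspace(0.0, 8.0, 9)   # hours after irradiation

reciprocal = 1.0 / (1.0 + z * t)      # second-order ("reciprocal-time") repair
exponential = 0.5 ** (t / tau)        # first-order repair with half-time tau

# Both laws leave 50% of damage at t = tau, but the reciprocal law
# "slows down" afterwards, retaining more sublethal damage at late times.
for ti, r, e in zip(t, reciprocal, exponential):
    print(f"t = {ti:4.1f} h   reciprocal: {r:5.3f}   exponential: {e:5.3f}")
```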

  10. New time scale based k-epsilon model for near-wall turbulence

    Science.gov (United States)

    Yang, Z.; Shih, T. H.

    1993-01-01

A k-epsilon model is proposed for wall-bounded turbulent flows. In this model, the eddy viscosity is characterized by a turbulent velocity scale and a turbulent time scale. The time scale is bounded from below by the Kolmogorov time scale. The dissipation equation is reformulated using this time scale, and no singularity exists at the wall. The damping function used in the eddy viscosity is chosen to be a function of R_y = k^(1/2)y/ν instead of y+. Hence, the model can be used for flows with separation. The model constants used are the same as in the high-Reynolds-number standard k-epsilon model. Thus, the proposed model is also suitable for flows far from the wall. Turbulent channel flows at different Reynolds numbers and turbulent boundary layer flows with and without pressure gradient are calculated. Results show that the model predictions are in good agreement with direct numerical simulation and experimental data.
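
The central device, a turbulent time scale that cannot fall below the Kolmogorov scale, takes only a couple of lines; taking the maximum of the two scales is one simple way to impose the bound, though the published model's exact blending may differ.

```python
import numpy as np

def bounded_time_scale(k, eps, nu):
    """Turbulent time scale with a Kolmogorov lower bound.

    The dynamic estimate k/eps vanishes as the wall is approached (k -> 0),
    so bounding it below by sqrt(nu/eps) keeps the reformulated dissipation
    equation free of singularities at the wall.
    """
    return np.maximum(k / eps, np.sqrt(nu / eps))

# Illustrative values: near the wall k is tiny while eps and nu stay finite.
print(bounded_time_scale(k=1e-6, eps=0.1, nu=1.5e-5))   # Kolmogorov-limited
print(bounded_time_scale(k=0.5,  eps=0.1, nu=1.5e-5))   # dynamic k/eps regime
```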

  11. Time delay and profit accumulation effect on a mine-based uranium market clearing model

    International Nuclear Information System (INIS)

    Auzans, Aris; Teder, Allan; Tkaczyk, Alan H.

    2016-01-01

Highlights: • An improved version of a mine-based uranium market clearing model for the front-end uranium market and enrichment industries is proposed. • A profit accumulation algorithm and a time-delay function provide a more realistic uranium mine decision-making process. • Operational decision delay increased uranium market price volatility. - Abstract: The mining industry faces a number of challenges such as market volatility, investment safety, and issues surrounding employment and productivity. Therefore, computer simulations are highly relevant in order to reduce the financial risks associated with these challenges. In the mining industry, each firm must compete with other mines, and the basic target is profit maximization. The aim of this paper is to evaluate the world uranium (U) supply by simulating the financial management challenges faced by an individual U mine that are caused by a variety of regulation issues. In this paper, a front-end nuclear fuel cycle tool is used to simulate market conditions and the effects they have on the stability of the U supply. An individual U mine’s exit from or entry into the market might cause changes on the U supply side which can increase or decrease the market price. In this paper we offer a more advanced version of a mine-based U market clearing model. The existing U market model incorporates the market of primary U from uranium mines with secondary uranium (depleted uranium, DU), highly enriched uranium (HEU) and enrichment services. In the model, each uranium mine acts as an independent agent that is able to make operational decisions based on the market price. This paper introduces a more realistic decision-making algorithm for an individual U mine that adds constraints to production decisions. The authors added an accumulated profit model, which allows the accumulated profits to cover possible future economic losses, and a time-delay algorithm to simulate the delayed process of reopening a U mine. The U market simulation covers time period 2010
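
A toy sketch of the decision rule described in the abstract: each mine accumulates profit at the market price, exits when its buffer is exhausted, and reopens only after a delay. Costs, capacities and delays are entirely hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Mine:
    unit_cost: float       # hypothetical production cost per kgU
    capacity: float        # kgU produced per period when open
    reopen_delay: int      # periods needed to restart a closed mine
    profit: float = 0.0    # accumulated profit buffer against losses
    idle: int = 0
    open: bool = True

    def step(self, price: float) -> float:
        """Return this period's production given the market price."""
        if self.open:
            self.profit += (price - self.unit_cost) * self.capacity
            if self.profit < 0:          # buffer exhausted: exit the market
                self.open, self.idle = False, 0
            return self.capacity
        self.idle += 1                   # closed: reopening takes time
        if price > self.unit_cost and self.idle >= self.reopen_delay:
            self.open, self.profit = True, 0.0   # restart with a fresh buffer
        return 0.0

mine = Mine(unit_cost=80.0, capacity=1_000.0, reopen_delay=4)
for price in (90, 70, 60, 60, 95, 95, 95, 95):
    produced = mine.step(price)
    print(f"price {price}: produced {produced:6.0f}, open={mine.open}")
```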

  12. Time delay and profit accumulation effect on a mine-based uranium market clearing model

    Energy Technology Data Exchange (ETDEWEB)

    Auzans, Aris [Institute of Physics, University of Tartu, Ostwaldi 1, EE-50411 Tartu (Estonia); Teder, Allan [School of Economics and Business Administration, University of Tartu, Narva mnt 4, EE-51009 Tartu (Estonia); Tkaczyk, Alan H., E-mail: alan@ut.ee [Institute of Physics, University of Tartu, Ostwaldi 1, EE-50411 Tartu (Estonia)

    2016-12-15

Highlights: • An improved version of a mine-based uranium market clearing model for the front-end uranium market and enrichment industries is proposed. • A profit accumulation algorithm and a time-delay function provide a more realistic uranium mine decision-making process. • Operational decision delay increased uranium market price volatility. - Abstract: The mining industry faces a number of challenges such as market volatility, investment safety, and issues surrounding employment and productivity. Therefore, computer simulations are highly relevant in order to reduce the financial risks associated with these challenges. In the mining industry, each firm must compete with other mines, and the basic target is profit maximization. The aim of this paper is to evaluate the world uranium (U) supply by simulating the financial management challenges faced by an individual U mine that are caused by a variety of regulation issues. In this paper, a front-end nuclear fuel cycle tool is used to simulate market conditions and the effects they have on the stability of the U supply. An individual U mine’s exit from or entry into the market might cause changes on the U supply side which can increase or decrease the market price. In this paper we offer a more advanced version of a mine-based U market clearing model. The existing U market model incorporates the market of primary U from uranium mines with secondary uranium (depleted uranium, DU), highly enriched uranium (HEU) and enrichment services. In the model, each uranium mine acts as an independent agent that is able to make operational decisions based on the market price. This paper introduces a more realistic decision-making algorithm for an individual U mine that adds constraints to production decisions. The authors added an accumulated profit model, which allows the accumulated profits to cover possible future economic losses, and a time-delay algorithm to simulate the delayed process of reopening a U mine. The U market simulation covers time period 2010

  13. Validation of Energy Expenditure Prediction Models Using Real-Time Shoe-Based Motion Detectors.

    Science.gov (United States)

    Lin, Shih-Yun; Lai, Ying-Chih; Hsia, Chi-Chun; Su, Pei-Fang; Chang, Chih-Han

    2017-09-01

This study aimed to verify and compare the accuracy of energy expenditure (EE) prediction models using shoe-based motion detectors with embedded accelerometers. Three physical activity (PA) datasets (unclassified, recognition, and intensity segmentation) were used to develop three prediction models. A multiple classification flow and these models were used to estimate EE. The "unclassified" dataset was defined as the data without PA recognition, the "recognition" as the data classified with PA recognition, and the "intensity segmentation" as the data with intensity segmentation. The three datasets contained accelerometer signals (quantified as signal magnitude area, SMA) and net heart rate (HRnet). The accuracy of these models was assessed according to the deviation between physically measured EE and model-estimated EE. The variance between physically measured and model-estimated EE, expressed by simple linear regressions, increased by 63% and 13% when using SMA and HRnet, respectively. The accuracy of the EE predicted from accelerometer signals is influenced by the different activities that exhibit different count-EE relationships within the same prediction model. The recognition model provides a better estimation and lower variability of EE compared with the unclassified and intensity segmentation models. The proposed shoe-based motion detectors can improve the accuracy of EE estimation and have great potential to be used to manage everyday exercise in real time.

  14. Partition-based discrete-time quantum walks

    Science.gov (United States)

    Konno, Norio; Portugal, Renato; Sato, Iwao; Segawa, Etsuo

    2018-04-01

We introduce a family of discrete-time quantum walks, called the two-partition model, based on two equivalence-class partitions of the computational basis, which establish the notion of local dynamics. This family encompasses most versions of unitary discrete-time quantum walks driven by two local operators studied in the literature, such as the coined model, Szegedy's model, and the 2-tessellable staggered model. We also analyze the connection of those models with the two-step coined model, which is driven by the square of the evolution operator of the standard discrete-time coined walk. We prove formally that the two-step coined model, an extension of the Szegedy model for multigraphs, and the two-tessellable staggered model are unitarily equivalent. Then, selecting one specific model among those families is a matter of taste, not generality.

  15. SEM based CARMA time series modeling for arbitrary N

    NARCIS (Netherlands)

    Oud, J.H.L.; Völkle, M.C.; Driver, C.C.

    2018-01-01

    This article explains in detail the state space specification and estimation of first and higher-order autoregressive moving-average models in continuous time (CARMA) in an extended structural equation modeling (SEM) context for N = 1 as well as N > 1. To illustrate the approach, simulations will be

  16. Model based Computerized Ionospheric Tomography in space and time

    Science.gov (United States)

    Tuna, Hakan; Arikan, Orhan; Arikan, Feza

    2018-04-01

Reconstruction of the ionospheric electron density distribution in space and time not only provides a basis for better understanding the physical nature of the ionosphere, but also provides improvements in various applications including HF communication. The recently developed IONOLAB-CIT technique provides a physically admissible 3D model of the ionosphere by using both Slant Total Electron Content (STEC) measurements obtained from a GPS satellite-receiver network and the IRI-Plas model. The IONOLAB-CIT technique optimizes IRI-Plas model parameters in the region of interest such that the synthetic STEC computations obtained from the IRI-Plas model are in accordance with the actual STEC measurements. In this work, the IONOLAB-CIT technique is extended to provide reconstructions both in space and time. This extension exploits the temporal continuity of the ionosphere to provide more reliable reconstructions with a reduced computational load. The proposed 4D-IONOLAB-CIT technique is validated on real measurement data obtained from the TNPGN-Active GPS receiver network in Turkey.

  17. Extracting a robust U.S. business cycle using a time-varying multivariate model-based bandpass filter

    NARCIS (Netherlands)

    Koopman, S.J.; Creal, D.D.

    2010-01-01

    We develop a flexible business cycle indicator that accounts for potential time variation in macroeconomic variables. The coincident economic indicator is based on a multivariate trend cycle decomposition model and is constructed from a moderate set of US macroeconomic time series. In particular, we

  18. The Time Division Multi-Channel Communication Model and the Correlative Protocol Based on Quantum Time Division Multi-Channel Communication

    International Nuclear Information System (INIS)

    Liu Xiao-Hui; Pei Chang-Xing; Nie Min

    2010-01-01

Based on classical time division multi-channel communication theory, we present a scheme of quantum time-division multi-channel communication (QTDMC). Moreover, a model of a quantum time division switch (QTDS) and a correlative protocol for QTDMC are proposed. The quantum bit error rate (QBER) is analyzed and a QBER simulation test is performed. The scheme shows that the QTDS can carry out multi-user communication through a quantum channel, that the QBER can reach the reliability requirement of communication, and that the protocol of QTDMC is highly practical and portable. The scheme of QTDS may play an important role in the establishment of large-scale quantum communication in the future. (general)

  19. Bayesian Model Selection under Time Constraints

    Science.gov (United States)

    Hoege, M.; Nowak, W.; Illman, W. A.

    2017-12-01

Bayesian model selection (BMS) provides a consistent framework for rating and comparing models in multi-model inference. In cases where models of vastly different complexity compete with each other, we also face vastly different computational runtimes of such models. For instance, time series of a quantity of interest can be simulated by an autoregressive process model that takes less than a second for one run, or by a partial-differential-equation-based model with runtimes up to several hours or even days. Classical BMS is based on a quantity called Bayesian model evidence (BME). It determines the model weights in the selection process and resembles a trade-off between the bias of a model and its complexity. In practice, however, the runtime of models is another factor relevant to the weighting in model selection. Hence, we believe that it should be included, leading to an overall trade-off problem between bias, variance and computing effort. We approach this triple trade-off from the viewpoint of our ability to generate realizations of the models under a given computational budget. One way to obtain BME values is through sampling-based integration techniques. We argue from the fact that, under time constraints, more expensive models can be sampled much less often than faster models (in inverse proportion to their runtime). The evidence computed in favor of a more expensive model is statistically less significant than the evidence computed in favor of a faster model, since sampling-based strategies are always subject to statistical sampling error. We present a straightforward way to include this imbalance in the model weights that are the basis for model selection. Our approach follows directly from the idea of insufficient significance. It is based on a computationally cheap bootstrapping error estimate of model evidence and is easy to implement. The approach is illustrated in a small synthetic modeling study.
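
A rough sketch of the idea under stated assumptions: Monte Carlo BME estimates from prior sampling, with the number of draws per model dictated by its runtime, and a cheap bootstrap standard error quantifying how statistically significant each estimate is. The models, likelihoods and runtimes are synthetic stand-ins, not the study's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
budget_s = 60.0                                  # total compute budget (seconds)

# Synthetic stand-ins: each "model" is just a likelihood sampler; runtime_s
# is the hypothetical cost of one model run.
models = {
    "fast_ar":  {"runtime_s": 0.01},
    "slow_pde": {"runtime_s": 1.00},
}

for name, m in models.items():
    n = max(2, int(budget_s / m["runtime_s"]))   # affordable prior samples
    # Stand-in for likelihoods of the data under prior parameter draws.
    likelihoods = rng.lognormal(mean=0.0, sigma=1.0, size=n)
    bme = likelihoods.mean()                     # Monte Carlo estimate of BME
    # Cheap bootstrap estimate of the sampling error of the BME estimate;
    # the slow model's fewer samples yield a visibly larger error.
    boot = np.array([rng.choice(likelihoods, size=n, replace=True).mean()
                     for _ in range(200)])
    print(f"{name}: n={n:6d}  BME={bme:.3f}  bootstrap SE={boot.std(ddof=1):.3f}")
```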

  20. SCS-CN based time-distributed sediment yield model

    Science.gov (United States)

    Tyagi, J. V.; Mishra, S. K.; Singh, Ranvir; Singh, V. P.

    2008-05-01

Summary: A sediment yield model is developed to estimate the temporal rates of sediment yield from rainfall events on natural watersheds. The model utilizes the SCS-CN based infiltration model for computation of the rainfall-excess rate, and the SCS-CN-inspired proportionality concept for computation of sediment-excess. For computation of sedimentographs, the sediment-excess is routed to the watershed outlet using a single linear reservoir technique. Analytical development of the model shows that the ratio of the potential maximum erosion (A) to the potential maximum retention (S) of the SCS-CN method is constant for a watershed. The model is calibrated and validated on a number of events using data from seven watersheds in India and the USA. Representative values of the A/S ratio computed for the watersheds from calibration are used for the validation of the model. The encouraging results of the proposed simple four-parameter model exhibit its potential for field application.
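
For reference, the textbook SCS-CN rainfall-excess relation on which such models build can be written in a few lines (with the common initial-abstraction assumption Ia = 0.2S); the curve number is an arbitrary example, and this is only the standard relation, not the paper's time-distributed sediment model.

```python
def scs_cn_runoff(p_mm: float, cn: float, lam: float = 0.2) -> float:
    """Rainfall excess Q (mm) from storm rainfall P via the SCS-CN method.

    S = 25400/CN - 254 (mm); Ia = lam * S; Q = (P - Ia)^2 / (P - Ia + S).
    """
    s = 25400.0 / cn - 254.0          # potential maximum retention (mm)
    ia = lam * s                      # initial abstraction (mm)
    if p_mm <= ia:
        return 0.0                    # all rainfall abstracted, no excess
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

print(scs_cn_runoff(p_mm=75.0, cn=80.0))   # example storm on a CN = 80 watershed
```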

  1. GOODNESS-OF-FIT TEST FOR THE ACCELERATED FAILURE TIME MODEL BASED ON MARTINGALE RESIDUALS

    Czech Academy of Sciences Publication Activity Database

    Novák, Petr

    2013-01-01

    Roč. 49, č. 1 (2013), s. 40-59 ISSN 0023-5954 R&D Projects: GA MŠk(CZ) 1M06047 Grant - others:GA MŠk(CZ) SVV 261315/2011 Keywords : accelerated failure time model * survival analysis * goodness-of-fit Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.563, year: 2013 http://library.utia.cas.cz/separaty/2013/SI/novak-goodness-of-fit test for the aft model based on martingale residuals.pdf

  2. Electricity price modeling with stochastic time change

    International Nuclear Information System (INIS)

    Borovkova, Svetlana; Schmeck, Maren Diane

    2017-01-01

In this paper, we develop a novel approach to electricity price modeling, based on the powerful technique of stochastic time change. This technique allows us to incorporate the characteristic features of electricity prices (such as seasonal volatility, time-varying mean reversion and seasonally occurring price spikes) into the model in an elegant and economically justifiable way. The stochastic time change introduces stochastic as well as deterministic (e.g., seasonal) features in the price process' volatility and in the jump component. We specify the base process as a mean-reverting jump diffusion and the time change as an absolutely continuous stochastic process with a seasonal component. The activity rate of the stochastic time change can be related to the factors that influence supply and demand. Here we use the temperature as a proxy for the demand and hence as the driving factor of the stochastic time change, and show that this choice leads to realistic price paths. We derive properties of the resulting price process and develop the model calibration procedure. We calibrate the model to the historical EEX power prices and apply it to generating realistic price paths by Monte Carlo simulations. We show that the simulated price process matches the distributional characteristics of the observed electricity prices in periods of both high and low demand. - Highlights: • We develop a novel approach to electricity price modeling, based on the powerful technique of stochastic time change. • We incorporate the characteristic features of electricity prices, such as seasonal volatility and spikes, into the model. • We use the temperature as a proxy for the demand and hence as the driving factor of the stochastic time change. • We derive properties of the resulting price process and develop the model calibration procedure. • We calibrate the model to the historical EEX power prices and apply it to generating realistic price paths.
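
A compact simulation sketch of the general construction: a mean-reverting jump diffusion evaluated on a stochastic clock whose activity rate carries a seasonal, temperature-like component, so that spikes cluster in high-activity periods. All parameters are illustrative, not the calibrated EEX values.

```python
import numpy as np

rng = np.random.default_rng(1)
n_days, dt = 365, 1.0 / 365.0

# Activity rate of the stochastic clock: seasonal base plus noise
# (a temperature-like proxy for demand), kept strictly positive.
t = np.arange(n_days) * dt
rate = 1.0 + 0.5 * np.cos(2 * np.pi * t) + 0.1 * rng.standard_normal(n_days)
rate = np.clip(rate, 0.05, None)
tau = np.cumsum(rate * dt)                 # business time (stochastic clock)

# Mean-reverting jump diffusion run in business time tau.
kappa, mu, sigma = 5.0, 40.0, 8.0          # reversion speed, level, volatility
jump_prob, jump_size = 0.01, 25.0          # spike intensity scale and mean size
x = np.empty(n_days)
x[0] = mu
for i in range(1, n_days):
    d_tau = tau[i] - tau[i - 1]
    # Spikes are more likely when the activity rate (demand proxy) is high.
    jump = jump_size * rng.exponential() * (rng.random() < jump_prob * rate[i])
    x[i] = (x[i - 1] + kappa * (mu - x[i - 1]) * d_tau
            + sigma * np.sqrt(d_tau) * rng.standard_normal() + jump)

print(x[:5].round(2))                      # first few simulated daily prices
```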

  3. Model-based framework for multi-axial real-time hybrid simulation testing

    Science.gov (United States)

    Fermandois, Gaston A.; Spencer, Billie F.

    2017-10-01

Real-time hybrid simulation is an efficient and cost-effective dynamic testing technique for performance evaluation of structural systems subjected to earthquake loading with rate-dependent behavior. A loading assembly with multiple actuators is required to impose realistic boundary conditions on physical specimens. However, such a testing system is expected to exhibit significant dynamic coupling of the actuators and suffer from time lags that are associated with the dynamics of the servo-hydraulic system, as well as control-structure interaction (CSI). One approach to reducing experimental errors considers a multi-input, multi-output (MIMO) controller design, yielding accurate reference tracking and noise rejection. In this paper, a framework for multi-axial real-time hybrid simulation (maRTHS) testing is presented. The methodology employs a real-time feedback-feedforward controller for multiple actuators commanded in Cartesian coordinates. Kinematic transformations between actuator space and Cartesian space are derived for all six degrees of freedom of the moving platform. Then, a frequency domain identification technique is used to develop an accurate MIMO transfer function of the system. Further, a Cartesian-domain model-based feedforward-feedback controller is implemented for time lag compensation and to increase the robustness of the reference tracking for given model uncertainty. The framework is implemented using the 1/5th-scale Load and Boundary Condition Box (LBCB) located at the University of Illinois at Urbana-Champaign. To demonstrate the efficacy of the proposed methodology, a single-story frame subjected to earthquake loading is tested. One of the columns in the frame is represented physically in the laboratory as a cantilevered steel column. For real-time execution, the numerical substructure, kinematic transformations, and controllers are implemented on a digital signal processor. Results show excellent performance of the maRTHS framework when six

  4. Stability Analysis of Positive Polynomial Fuzzy-Model-Based Control Systems with Time Delay under Imperfect Premise Matching

    OpenAIRE

    Li, Xiaomiao; Lam, Hak Keung; Song, Ge; Liu, Fucai

    2017-01-01

This paper deals with the stability and positivity analysis of polynomial-fuzzy-model-based (PFMB) control systems with time delay, which are formed by a polynomial fuzzy model and a polynomial fuzzy controller connected in a closed loop, under imperfect premise matching. To improve the design and realization flexibility, the polynomial fuzzy model and the polynomial fuzzy controller are allowed to have their own sets of premise membership functions. A sum-of-squares (SOS)-based stability ana...

  5. A Fault Prognosis Strategy Based on Time-Delayed Digraph Model and Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Ningyun Lu

    2012-01-01

Full Text Available Because of the interlinking of process equipment in the process industry, event information may propagate through the plant and affect many downstream process variables. Specifying the causality and estimating the time delays among process variables are critically important for data-driven fault prognosis. They are not only helpful for finding the root cause when a plant-wide disturbance occurs, but also for revealing the evolution of an abnormal event propagating through the plant. This paper is concerned with the information flow directionality and time-delay estimation problems in the process industry and presents an information synchronization technique to assist fault prognosis. Time-delayed mutual information (TDMI) is used for both causality analysis and time-delay estimation. To represent the causality structure of high-dimensional process variables, a time-delayed signed digraph (TD-SDG) model is developed. Then, a general fault prognosis strategy is developed based on the TD-SDG model and principal component analysis (PCA). The proposed method is applied to an air separation unit and has achieved satisfying results in predicting the frequently occurring “nitrogen-block” fault.
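
Time-delayed mutual information can be estimated with a simple histogram estimator scanned over candidate delays, the argmax giving the delay estimate. This is a minimal sketch assuming equally sampled series and coarse binning, not the paper's estimator.

```python
import numpy as np

def mutual_info(x, y, bins=16):
    """Histogram-based mutual information I(X; Y) in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

def tdmi(x, y, max_delay=50):
    """I(x(t); y(t+d)) for d = 0..max_delay; the argmax estimates the delay."""
    return [mutual_info(x[:-d or None], y[d:]) for d in range(max_delay + 1)]

# Synthetic example: y is a noisy copy of x delayed by 20 samples.
rng = np.random.default_rng(2)
x = rng.standard_normal(5000).cumsum()
y = np.roll(x, 20) + 0.1 * rng.standard_normal(5000)
scores = tdmi(x, y)
print("estimated delay:", int(np.argmax(scores)))   # should print ~20
```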

  6. Impedance models in time domain

    NARCIS (Netherlands)

    Rienstra, S.W.

    2005-01-01

    Necessary conditions for an impedance function are derived. Methods available in the literature are discussed. A format with recipe is proposed for an exact impedance condition in time domain on a time grid, based on the Helmholtz resonator model. An explicit solution is given of a pulse reflecting

  7. Nonlinear dynamic modeling of a simple flexible rotor system subjected to time-variable base motions

    Science.gov (United States)

    Chen, Liqiang; Wang, Jianjun; Han, Qinkai; Chu, Fulei

    2017-09-01

Rotor systems carried in transportation systems or under seismic excitation are considered to have a moving base. To study the dynamic behavior of flexible rotor systems subjected to time-variable base motions, a general model is developed based on the finite element method and Lagrange's equation. Two groups of Euler angles are defined to describe the rotation of the rotor with respect to the base and that of the base with respect to the ground. It is found that the base rotations would cause nonlinearities in the model. To verify the proposed model, a novel test rig which can simulate base angular movement is designed. Dynamic experiments on a flexible rotor-bearing system with base angular motions are carried out. Based upon these, numerical simulations are conducted to further study the dynamic response of the flexible rotor under harmonic angular base motions. The effects of base angular amplitude, rotating speed and base frequency on response behaviors are discussed by means of FFT, waterfall, frequency response curves and orbits of the rotor. The FFT and waterfall plots of the disk horizontal and vertical vibrations are marked with multiples of the base frequency and sum and difference tones of the rotating frequency and the base frequency. Their amplitudes increase remarkably when they meet the whirling frequencies of the rotor system.

  8. Downsizer - A Graphical User Interface-Based Application for Browsing, Acquiring, and Formatting Time-Series Data for Hydrologic Modeling

    Science.gov (United States)

    Ward-Garrison, Christian; Markstrom, Steven L.; Hay, Lauren E.

    2009-01-01

    The U.S. Geological Survey Downsizer is a computer application that selects, downloads, verifies, and formats station-based time-series data for environmental-resource models, particularly the Precipitation-Runoff Modeling System. Downsizer implements the client-server software architecture. The client presents a map-based, graphical user interface that is intuitive to modelers; the server provides streamflow and climate time-series data from over 40,000 measurement stations across the United States. This report is the Downsizer user's manual and provides (1) an overview of the software design, (2) installation instructions, (3) a description of the graphical user interface, (4) a description of selected output files, and (5) troubleshooting information.

  9. A Unified Trading Model Based on Robust Optimization for Day-Ahead and Real-Time Markets with Wind Power Integration

    Directory of Open Access Journals (Sweden)

    Yuewen Jiang

    2017-04-01

    Full Text Available In a conventional electricity market, trading is conducted based on power forecasts in the day-ahead market, while the power imbalance is regulated in the real-time market, which is a separate trading scheme. With large-scale wind power connected into the power grid, power forecast errors increase in the day-ahead market which lowers the economic efficiency of the separate trading scheme. This paper proposes a robust unified trading model that includes the forecasts of real-time prices and imbalance power into the day-ahead trading scheme. The model is developed based on robust optimization in view of the undefined probability distribution of clearing prices of the real-time market. For the model to be used efficiently, an improved quantum-behaved particle swarm algorithm (IQPSO is presented in the paper based on an in-depth analysis of the limitations of the static character of quantum-behaved particle swarm algorithm (QPSO. Finally, the impacts of associated parameters on the separate trading and unified trading model are analyzed to verify the superiority of the proposed model and algorithm.

  10. Real-time physics-based 3D biped character animation using an inverted pendulum model.

    Science.gov (United States)

    Tsai, Yao-Yang; Lin, Wen-Chieh; Cheng, Kuangyou B; Lee, Jehee; Lee, Tong-Yee

    2010-01-01

    We present a physics-based approach to generate 3D biped character animation that can react to dynamical environments in real time. Our approach utilizes an inverted pendulum model to online adjust the desired motion trajectory from the input motion capture data. This online adjustment produces a physically plausible motion trajectory adapted to dynamic environments, which is then used as the desired motion for the motion controllers to track in dynamics simulation. Rather than using Proportional-Derivative controllers whose parameters usually cannot be easily set, our motion tracking adopts a velocity-driven method which computes joint torques based on the desired joint angular velocities. Physically correct full-body motion of the 3D character is computed in dynamics simulation using the computed torques and dynamical model of the character. Our experiments demonstrate that tracking motion capture data with real-time response animation can be achieved easily. In addition, physically plausible motion style editing, automatic motion transition, and motion adaptation to different limb sizes can also be generated without difficulty.

  11. Kernel based methods for accelerated failure time model with ultra-high dimensional data

    Directory of Open Access Journals (Sweden)

    Jiang Feng

    2010-12-01

Full Text Available Abstract Background Most genomic data have ultra-high dimensions with more than 10,000 genes (probes). Regularization methods with L1 and Lp penalties have been extensively studied in survival analysis with high-dimensional genomic data. However, when the sample size n ≪ m (the number of genes), directly identifying a small subset of genes from ultra-high-dimensional (m > 10,000) data is time-consuming and not computationally efficient. In current microarray analysis, what people really do is select a couple of thousand (or hundred) genes using univariate analysis or statistical tests, and then apply a LASSO-type penalty to further reduce the number of disease-associated genes. This two-step procedure may introduce bias and inaccuracy and lead us to miss biologically important genes. Results The accelerated failure time (AFT) model is a linear regression model and a useful alternative to the Cox model for survival analysis. In this paper, we propose a nonlinear kernel based AFT model and an efficient variable selection method with adaptive kernel ridge regression. Our proposed variable selection method is based on the kernel matrix and the dual problem with a much smaller n × n matrix. It is very efficient when the number of unknown variables (genes) is much larger than the number of samples. Moreover, the primal variables are explicitly updated and the sparsity in the solution is exploited. Conclusions Our proposed methods can simultaneously identify survival-associated prognostic factors and predict survival outcomes with ultra-high dimensional genomic data. We have demonstrated the performance of our methods with both simulation and real data, and the proposed method performs superbly in these limited computational studies.
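
The computational point, solving an n × n dual (kernel) problem instead of anything of size m, can be illustrated with plain kernel ridge regression on log survival times; this toy ignores censoring, which the actual AFT method must handle.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 100, 10_000                    # few samples, ultra-high dimension

X = rng.standard_normal((n, m))       # toy expression matrix
beta = np.zeros(m)
beta[:5] = 1.0                        # only five genes truly matter
y = X @ beta + 0.1 * rng.standard_normal(n)   # log survival times, no censoring

# Dual kernel ridge regression: solve an n x n system, never an m x m one.
gamma, lam = 1.0 / m, 1e-2
sq = (X**2).sum(1)[:, None] + (X**2).sum(1)[None, :] - 2.0 * X @ X.T
K = np.exp(-gamma * sq)               # RBF Gram matrix, shape (n, n)
alpha = np.linalg.solve(K + lam * np.eye(n), y)    # dual coefficients

def predict(X_new):
    sq_new = ((X_new**2).sum(1)[:, None] + (X**2).sum(1)[None, :]
              - 2.0 * X_new @ X.T)
    return np.exp(-gamma * sq_new) @ alpha

rmse = np.sqrt(np.mean((predict(X) - y) ** 2))
print("train RMSE:", round(float(rmse), 4))
```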

  12. The Abridgment and Relaxation Time for a Linear Multi-Scale Model Based on Multiple Site Phosphorylation.

    Directory of Open Access Journals (Sweden)

    Shuo Wang

Full Text Available Random effects in cellular systems are an important topic in systems biology and are often simulated with Gillespie's stochastic simulation algorithm (SSA). Abridgment refers to model reduction that approximates a group of reactions by a smaller group with fewer species and reactions. This paper presents a theoretical analysis, based on comparison of the first exit time, of the abridgment of a linear chain reaction model motivated by systems with multiple phosphorylation sites. The analysis shows that if the relaxation time of the fast subsystem is much smaller than the mean firing time of the slow reactions, the abridgment can be applied with little error. This analysis is further verified with numerical experiments for models of bistable switches and oscillations in which the linear chain system plays a critical role.

  13. The application of convolution-based statistical model on the electrical breakdown time delay distributions in neon

    International Nuclear Information System (INIS)

    Maluckov, Cedomir A.; Karamarkovic, Jugoslav P.; Radovic, Miodrag K.; Pejovic, Momcilo M.

    2004-01-01

The convolution-based model of the electrical breakdown time delay distribution is applied for the statistical analysis of experimental results obtained in a neon-filled diode tube at 6.5 mbar. First, the numerical breakdown time delay density distributions are obtained by stochastic modeling as the sum of two independent random variables: the electrical breakdown statistical time delay, with an exponential distribution, and the discharge formative time, with a Gaussian distribution. Then, the single characteristic breakdown time delay distribution is obtained as the convolution of these two random variables with previously determined parameters. These distributions show good correspondence with the experimental distributions, obtained on the basis of 1000 successive and independent measurements. The shape of the distributions is investigated, and the corresponding skewness and kurtosis are plotted, in order to follow the transition from the Gaussian to the exponential distribution
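
Numerically, the density of the total delay is just the (discretized) convolution of the exponential and Gaussian densities; the parameters below are arbitrary illustrations, not the fitted neon data.

```python
import numpy as np

dt = 0.01                      # time step of the grid (ms, illustrative)
t = np.arange(0.0, 20.0, dt)

lam = 0.5                      # rate of the exponential statistical delay (1/ms)
mu, sig = 4.0, 0.5             # mean and std of the Gaussian formative time (ms)

f_stat = lam * np.exp(-lam * t)                               # exponential density
f_form = np.exp(-0.5 * ((t - mu) / sig) ** 2) / (sig * np.sqrt(2.0 * np.pi))

f_total = np.convolve(f_stat, f_form)[: t.size] * dt          # density of the sum

print("total area ~ 1:", round(float(f_total.sum() * dt), 3))
print("mode of the total delay (ms):", round(float(t[np.argmax(f_total)]), 2))
```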

  14. A Unified Trading Model Based on Robust Optimization for Day-Ahead and Real-Time Markets with Wind Power Integration

    DEFF Research Database (Denmark)

    Jiang, Yuewen; Chen, Meisen; You, Shi

    2017-01-01

    In a conventional electricity market, trading is conducted based on power forecasts in the day-ahead market, while the power imbalance is regulated in the real-time market, which is a separate trading scheme. With large-scale wind power connected into the power grid, power forecast errors increase...... in the day-ahead market which lowers the economic efficiency of the separate trading scheme. This paper proposes a robust unified trading model that includes the forecasts of real-time prices and imbalance power into the day-ahead trading scheme. The model is developed based on robust optimization in view...... of the undefined probability distribution of clearing prices of the real-time market. For the model to be used efficiently, an improved quantum-behaved particle swarm algorithm (IQPSO) is presented in the paper based on an in-depth analysis of the limitations of the static character of quantum-behaved particle...

  15. Scenario-based verification of real-time systems using UPPAAL

    DEFF Research Database (Denmark)

    Li, Shuhao; Belaguer, Sandie; David, Alexandre

    2010-01-01

This paper proposes two approaches to tool-supported automatic verification of dense real-time systems against scenario-based requirements, where a system is modeled as a network of timed automata (TAs) or as a set of driving live sequence charts (LSCs), and a requirement is specified...... as a separate monitored LSC chart. We make timed extensions to a kernel subset of the LSC language and define a trace-based semantics. By translating a monitored LSC chart to a behavior-equivalent observer TA and then non-intrusively composing this observer with the original TA modeled real-time system......, the problem of scenario-based verification reduces to a computation tree logic (CTL) real-time model checking problem. In case the real-time system is modeled as a set of driving LSC charts, we translate these driving charts and the monitored chart into a behavior-equivalent network of TAs by using a “one...

  16. Modeling Financial Time Series Based on a Market Microstructure Model with Leverage Effect

    OpenAIRE

    Yanhui Xi; Hui Peng; Yemei Qin

    2016-01-01

    The basic market microstructure model specifies that the price/return innovation and the volatility innovation are independent Gaussian white noise processes. However, the financial leverage effect has been found to be statistically significant in many financial time series. In this paper, a novel market microstructure model with leverage effects is proposed. The model specification assumed a negative correlation in the errors between the price/return innovation and the volatility innovation....

  17. Interevent times in a new alarm-based earthquake forecasting model

    Science.gov (United States)

    Talbi, Abdelhak; Nanjo, Kazuyoshi; Zhuang, Jiancang; Satake, Kenji; Hamdache, Mohamed

    2013-09-01

    This study introduces a new earthquake forecasting model that uses the moment ratio (MR) of the first to second order moments of earthquake interevent times as a precursory alarm index to forecast large earthquake events. This MR model is based on the idea that the MR is associated with anomalous long-term changes in background seismicity prior to large earthquake events. In a given region, the MR statistic is defined as the inverse of the index of dispersion or Fano factor, with MR values (or scores) providing a biased estimate of the relative regional frequency of background events, here termed the background fraction. To test the forecasting performance of this proposed MR model, a composite Japan-wide earthquake catalogue for the years between 679 and 2012 was compiled using the Japan Meteorological Agency catalogue for the period between 1923 and 2012, and the Utsu historical seismicity records between 679 and 1922. MR values were estimated by sampling interevent times from events with magnitude M ≥ 6 using an earthquake random sampling (ERS) algorithm developed during previous research. Three retrospective tests of M ≥ 7 target earthquakes were undertaken to evaluate the long-, intermediate- and short-term performance of MR forecasting, using mainly Molchan diagrams and optimal spatial maps obtained by minimizing forecasting error defined by miss and alarm rate addition. This testing indicates that the MR forecasting technique performs well at long-, intermediate- and short-term. The MR maps produced during long-term testing indicate significant alarm levels before 15 of the 18 shallow earthquakes within the testing region during the past two decades, with an alarm region covering about 20 per cent (alarm rate) of the testing region. The number of shallow events missed by forecasting was reduced by about 60 per cent after using the MR method instead of the relative intensity (RI) forecasting method. At short term, our model succeeded in forecasting the
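
The alarm index itself is simple to compute once the first and second order moments are read as the sample mean and variance of the interevent times, which makes MR the inverse of the index of dispersion; the sketch below contrasts a Poisson-like sequence with an artificially clustered one.

```python
import numpy as np

def moment_ratio(event_times):
    """MR alarm index: mean / variance of the interevent times,
    i.e. the inverse of the index of dispersion (Fano factor)."""
    dt = np.diff(np.sort(np.asarray(event_times, dtype=float)))
    return float(dt.mean() / dt.var())

rng = np.random.default_rng(4)
background = np.cumsum(rng.exponential(1.0, size=500))   # Poisson-like sequence

# Add tight aftershock-like clusters around 100 random background events.
parents = rng.choice(background, size=100)
clustered = np.concatenate(
    [background, (parents[:, None] + rng.exponential(0.01, (100, 5))).ravel()])

print("Poisson-like MR:", round(moment_ratio(background), 2))   # ~ 1
print("clustered MR:  ", round(moment_ratio(clustered), 2))     # < 1, overdispersed
```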

  18. Continuous time modelling of dynamical spatial lattice data observed at sparsely distributed times

    DEFF Research Database (Denmark)

    Rasmussen, Jakob Gulddahl; Møller, Jesper

    2007-01-01

    Summary. We consider statistical and computational aspects of simulation-based Bayesian inference for a spatial-temporal model based on a multivariate point process which is only observed at sparsely distributed times. The point processes are indexed by the sites of a spatial lattice......, and they exhibit spatial interaction. For specificity we consider a particular dynamical spatial lattice data set which has previously been analysed by a discrete time model involving unknown normalizing constants. We discuss the advantages and disadvantages of using continuous time processes compared...... with discrete time processes in the setting of the present paper as well as other spatial-temporal situations....

  19. Robust model predictive control for constrained continuous-time nonlinear systems

    Science.gov (United States)

    Sun, Tairen; Pan, Yongping; Zhang, Jun; Yu, Haoyong

    2018-02-01

    In this paper, a robust model predictive control (MPC) is designed for a class of constrained continuous-time nonlinear systems with bounded additive disturbances. The robust MPC consists of a nonlinear feedback control and a continuous-time model-based dual-mode MPC. The nonlinear feedback control guarantees the actual trajectory being contained in a tube centred at the nominal trajectory. The dual-mode MPC is designed to ensure asymptotic convergence of the nominal trajectory to zero. This paper extends current results on discrete-time model-based tube MPC and linear system model-based tube MPC to continuous-time nonlinear model-based tube MPC. The feasibility and robustness of the proposed robust MPC have been demonstrated by theoretical analysis and applications to a cart-damper springer system and a one-link robot manipulator.

  20. Time-dependent pharmacokinetics of dexamethasone and its efficacy in human breast cancer xenograft mice: a semi-mechanism-based pharmacokinetic/pharmacodynamic model.

    Science.gov (United States)

    Li, Jian; Chen, Rong; Yao, Qing-Yu; Liu, Sheng-Jun; Tian, Xiu-Yun; Hao, Chun-Yi; Lu, Wei; Zhou, Tian-Yan

    2018-03-01

Dexamethasone (DEX) is the substrate of CYP3A. However, the activity of CYP3A could be induced by DEX when DEX was persistently administered, resulting in auto-induction and time-dependent pharmacokinetics (pharmacokinetics with time-dependent clearance) of DEX. In this study we investigated the pharmacokinetic profiles of DEX after single or multiple doses in human breast cancer xenograft nude mice and established a semi-mechanism-based pharmacokinetic/pharmacodynamic (PK/PD) model for characterizing the time-dependent PK of DEX as well as its anti-cancer effect. The mice were orally given a single or multiple doses (8 mg/kg) of DEX, and the plasma concentrations of DEX were assessed using LC-MS/MS. Tumor volumes were recorded daily. Based on the experimental data, a two-compartment model with first order absorption and time-dependent clearance was established, and the time-dependence of clearance was modeled by a sigmoid Emax equation. Moreover, a semi-mechanism-based PK/PD model was developed, in which the auto-induction effect of DEX on its metabolizing enzyme CYP3A was integrated and drug potency was described using an Emax equation. The PK/PD model was further used to predict the drug efficacy when the auto-induction effect was or was not considered, which further revealed the necessity of adding the auto-induction effect into the final PK/PD model. This study established a semi-mechanism-based PK/PD model for characterizing the time-dependent pharmacokinetics of DEX and its anti-cancer effect in breast cancer xenograft mice. The model may serve as a reference for DEX dose adjustments or optimization in future preclinical or clinical studies.
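
A minimal sketch of auto-induced, time-dependent clearance in a one-compartment reduction: clearance climbs from its baseline toward a fully induced value along a sigmoid Emax time course, so elimination accelerates over time. The structure and numbers are illustrative, not the paper's fitted two-compartment model.

```python
import numpy as np
from scipy.integrate import odeint

V = 1.5                                 # volume of distribution (L, toy)
cl0, cl_max = 0.2, 0.8                  # baseline and fully induced clearance (L/h)
t50, gamma = 48.0, 3.0                  # half-maximal induction time (h) and slope

def clearance(t):
    """Sigmoid Emax time course of the auto-induced clearance."""
    return cl0 + (cl_max - cl0) * t**gamma / (t50**gamma + t**gamma)

def dcdt(c, t):
    return -clearance(t) / V * c        # first-order elimination

t = np.linspace(0.0, 120.0, 121)
conc = odeint(dcdt, y0=10.0, t=t).ravel()   # concentration after a bolus (toy)
print(conc[[0, 24, 48, 96]].round(3))       # elimination speeds up as CYP3A is induced
```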

  1. USE OF TRANS-CONTEXTUAL MODEL-BASED PHYSICAL ACTIVITY COURSE IN DEVELOPING LEISURE-TIME PHYSICAL ACTIVITY BEHAVIOR OF UNIVERSITY STUDENTS.

    Science.gov (United States)

    Müftüler, Mine; İnce, Mustafa Levent

    2015-08-01

    This study examined how a physical activity course based on the Trans-Contextual Model affected the variables of perceived autonomy support, autonomous motivation, determinants of leisure-time physical activity behavior, basic psychological needs satisfaction, and leisure-time physical activity behaviors. The participants were 70 Turkish university students (M age=23.3 yr., SD=3.2). A pre-test-post-test control group design was constructed. Initially, the participants were randomly assigned into an experimental (n=35) and a control (n=35) group. The experimental group followed a 12 wk. trans-contextual model-based intervention. The participants were pre- and post-tested in terms of Trans-Contextual Model constructs and of self-reported leisure-time physical activity behaviors. Multivariate analyses showed significant increases over the 12 wk. period for perceived autonomy support from instructor and peers, autonomous motivation in leisure-time physical activity setting, positive intention and perceived behavioral control over leisure-time physical activity behavior, more fulfillment of psychological needs, and more engagement in leisure-time physical activity behavior in the experimental group. These results indicated that the intervention was effective in developing leisure-time physical activity and indicated that the Trans-Contextual Model is a useful way to conceptualize these relationships.

  2. Numerical modelling of softwood time-dependent behaviour based on microstructure

    DEFF Research Database (Denmark)

    Engelund, Emil Tang

    2010-01-01

The time-dependent mechanical behaviour of softwood such as creep or relaxation can be predicted, from knowledge of the microstructural arrangement of the cell wall, by applying deformation kinetics. This has been done several times before; however, often without considering the constraints defined...... by the basic physical mechanism behind the time-dependent behaviour. The mechanism causing time-dependency is thought to be sliding of the microfibrils past each other as a result of breaking and re-bonding of hydrogen bonds. This can be incorporated in a numerical model by only allowing time-dependency in shear...

  3. FPGA-Based Real Time, Multichannel Emulated-Digital Retina Model Implementation

    Directory of Open Access Journals (Sweden)

    Zsolt Vörösházi

    2009-01-01

Full Text Available The function of the low-level image processing that takes place in the biological retina is to compress only the relevant visual information to a manageable size. The behavior of the layers and different channels of the neuromorphic retina has been successfully modeled by cellular neural/nonlinear networks (CNNs). In this paper, we present an extended, application-specific emulated-digital CNN-universal machine (UM) architecture to compute the complex dynamics of this mammalian retina in video real time. The proposed emulated-digital implementation of the multichannel retina model is compared to the previously developed models from three key aspects, which are processing speed, number of physical cells, and accuracy. Our primary aim was to build up a simple, real-time test environment with camera input and display output in order to mimic the behavior of the retina model implementation on emulated-digital CNN by using low-cost, moderate-sized field-programmable gate array (FPGA) architectures.

  4. Time Domain Analysis of Graphene Nanoribbon Interconnects Based on Transmission Line ‎Model

    Directory of Open Access Journals (Sweden)

    S. Haji Nasiri

    2012-03-01

Full Text Available Time domain analysis of multilayer graphene nanoribbon (MLGNR) interconnects, based on transmission line modeling (TLM) using a sixth-order linear parametric expression, has been presented for the first time. We have studied the effects of interconnect geometry along with its contact resistance on its step response and Nyquist stability. It is shown that by increasing interconnect dimensions, the propagation delays increase and, accordingly, the system becomes relatively more stable. In addition, we have compared the time responses and Nyquist stabilities of MLGNR and SWCNT bundle interconnects with the same external dimensions. The results show that under the same conditions, the propagation delays for MLGNR interconnects are smaller than those of SWCNT bundle interconnects. Hence, SWCNT bundle interconnects are relatively more stable than their MLGNR rivals.

  5. Modelling mean transit time of stream base flow during tropical cyclone rainstorm in a steep relief forested catchment

    Science.gov (United States)

Lee, Jun-Yi; Huang, Jr-Chuan

    2017-04-01

Mean transit time (MTT) is one of the fundamental catchment descriptors used to advance understanding of hydrological, ecological, and biogeochemical processes and to improve water resources management. However, few studies have documented the base flow partitioning (BFP) and mean transit time within a mountainous catchment in typhoon alley. We used a unique data set of 18O isotope and conductivity composition of rainfall (136 mm to 778 mm) and streamflow water samples collected for 14 tropical cyclone events (during 2011 to 2015) in a steep-relief forested catchment (Pinglin, in northern Taiwan). A lumped hydrological model, HBV, considering a dispersion-model transit time distribution, was used to estimate total flow, base flow, and the MTT of stream base flow. Linear regressions between MTT and hydrometric (precipitation intensity and antecedent precipitation index) variables were used to explore controls on MTT variation. Results revealed that the simulation performance for both total flow and base flow was satisfactory; the Nash-Sutcliffe model efficiency coefficients of total flow and base flow were 0.848 and 0.732, respectively. The estimated MTTs decreased with increasing event magnitude. Meanwhile, the estimated MTTs varied from 4 to 21 days as the BFP increased between 63 and 92%. The negative correlation between event magnitude and both MTT and BFP shows that the forcing controls the MTT and BFP. Besides, a negative relationship between MTT and the antecedent precipitation index was also found. In other words, wetter antecedent moisture conditions activate the fast flow paths more rapidly. This approach is well suited for constraining process-based modeling in a range of high-precipitation-intensity, steep-relief forested environments.

  6. H∞ Filtering for Discrete Markov Jump Singular Systems with Mode-Dependent Time Delay Based on T-S Fuzzy Model

    Directory of Open Access Journals (Sweden)

    Cheng Gong

    2014-01-01

    Full Text Available This paper investigates the H∞ filtering problem for discrete singular Markov jump systems (SMJSs) with mode-dependent time delay based on a T-S fuzzy model. First, using a Lyapunov-Krasovskii functional approach, a delay-dependent sufficient condition for H∞ disturbance attenuation is presented, in which both stability and a prescribed H∞ performance are required to be achieved for the filtering-error systems. Then, based on this condition, a delay-dependent H∞ filter design scheme for SMJSs with mode-dependent time delay based on the T-S fuzzy model is developed in terms of linear matrix inequalities (LMIs). Finally, an example is given to illustrate the effectiveness of the result.
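
Conditions of this kind are ultimately checked as LMI feasibility problems. The sketch below solves a much simpler discrete-time Lyapunov LMI with cvxpy, purely to illustrate the mechanics; it is not the paper's filter-design LMI, and the system matrix is an arbitrary example.

```python
import cvxpy as cp
import numpy as np

A = np.array([[0.5, 0.1],
              [0.0, 0.8]])                  # arbitrary stable example system
P = cp.Variable((2, 2), symmetric=True)
eps = 1e-6

constraints = [P >> eps * np.eye(2),                   # P positive definite
               A.T @ P @ A - P << -eps * np.eye(2)]    # Lyapunov decrease condition
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print("LMI feasible:", prob.status == cp.OPTIMAL)
```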

  7. Forecasting with nonlinear time series models

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Teräsvirta, Timo

    In this paper, nonlinear models are restricted to mean nonlinear parametric models. Several such models popular in time series econometrics are presented and some of their properties discussed. This includes two models based on universal approximators: the Kolmogorov-Gabor polynomial model and two versions of a simple artificial neural network model. Techniques for generating multi-period forecasts from nonlinear models recursively are considered, and the direct (non-recursive) method for this purpose is mentioned as well. Forecasting with complex dynamic systems, albeit less frequently applied to economic forecasting problems, is briefly highlighted. A number of large published studies comparing macroeconomic forecasts obtained using different time series models are discussed, and the paper also contains a small simulation study comparing recursive and direct forecasts in a particular ...
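
The recursive-versus-direct distinction can be made concrete with a toy example: fit a Kolmogorov-Gabor-style polynomial autoregression and produce a 4-step forecast both ways. Everything below (data generator, model order, horizon) is an invented illustration, not the paper's simulation study.

```python
import numpy as np

rng = np.random.default_rng(0)
y = np.zeros(300)
for t in range(1, 300):                  # toy nonlinear AR(1) data generator
    y[t] = 0.8 * y[t - 1] - 0.2 * y[t - 1] ** 2 + rng.normal(0, 0.1)

def fit_poly_ar(y, horizon):
    """Regress y[t+horizon] on (y[t], y[t]^2): a Kolmogorov-Gabor style model."""
    X = np.column_stack([y[:-horizon], y[:-horizon] ** 2, np.ones(len(y) - horizon)])
    beta, *_ = np.linalg.lstsq(X, y[horizon:], rcond=None)
    return beta

h = 4
b1 = fit_poly_ar(y, 1)
yhat = y[-1]
for _ in range(h):                       # recursive: iterate the one-step model
    yhat = b1 @ np.array([yhat, yhat ** 2, 1.0])

bh = fit_poly_ar(y, h)                   # direct: fit one h-step-ahead model
yhat_direct = bh @ np.array([y[-1], y[-1] ** 2, 1.0])
print(f"recursive: {yhat:.3f}   direct: {yhat_direct:.3f}")
```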

  8. Logic Model Checking of Time-Periodic Real-Time Systems

    Science.gov (United States)

    Florian, Mihai; Gamble, Ed; Holzmann, Gerard

    2012-01-01

    In this paper we report on the work we performed to extend the logic model checker SPIN with built-in support for the verification of periodic, real-time embedded software systems, as commonly used in aircraft, automobiles, and spacecraft. We first extended the SPIN verification algorithms to model priority-based scheduling policies. Next, we added a library to support the modeling of periodic tasks. This library was used in a recent application of the SPIN model checker to verify the engine control software of an automobile, in order to study the feasibility of software triggers for unintended acceleration events.

  9. Using a Time-Driven Activity-Based Costing Model To Determine the Actual Cost of Services Provided by a Transgenic Core.

    Science.gov (United States)

    Gerwin, Philip M; Norinsky, Rada M; Tolwani, Ravi J

    2018-03-01

    Laboratory animal programs and core laboratories often set service rates based on cost estimates. However, actual costs may be unknown, and service rates may not reflect the actual cost of services. Accurately evaluating the actual costs of services can be challenging and time-consuming. We used a time-driven activity-based costing (ABC) model to determine the cost of services provided by a resource laboratory at our institution. The time-driven approach is a more efficient approach to calculating costs than using a traditional ABC model. We calculated only 2 parameters: the time required to perform an activity and the unit cost of the activity based on employee cost. This method allowed us to rapidly and accurately calculate the actual cost of services provided, including microinjection of a DNA construct, microinjection of embryonic stem cells, embryo transfer, and in vitro fertilization. We successfully implemented a time-driven ABC model to evaluate the cost of these services and the capacity of labor used to deliver them. We determined how actual costs compared with current service rates. In addition, we determined that the labor supplied to conduct all services (10,645 min/wk) exceeded the practical labor capacity (8400 min/wk), indicating that the laboratory team was highly efficient and that additional labor capacity was needed to prevent overloading of the current team. Importantly, this time-driven ABC approach allowed us to establish a baseline model that can easily be updated to reflect operational changes or changes in labor costs. We demonstrated that a time-driven ABC model is a powerful management tool that can be applied to other core facilities as well as to entire animal programs, providing valuable information that can be used to set rates based on the actual cost of services and to improve operating efficiency.
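
The arithmetic of the two time-driven parameters is simple enough to show directly. In this sketch the practical capacity figure (8,400 min/wk) comes from the abstract, while the weekly labor cost, activity times, and demand figures are invented placeholders.

```python
WEEKLY_LABOR_COST = 6000.00     # fully loaded team cost per week (assumed)
PRACTICAL_CAPACITY = 8400       # practical labor capacity in min/wk (from the study)

cost_per_min = WEEKLY_LABOR_COST / PRACTICAL_CAPACITY

# (service, minutes per unit, units demanded per week) -- all assumed values
services = [
    ("DNA construct microinjection", 180, 12),
    ("ES cell microinjection",       240,  8),
    ("embryo transfer",               90, 30),
    ("in vitro fertilization",       300,  6),
]

supplied = 0
for name, minutes, demand in services:
    supplied += minutes * demand
    print(f"{name:30s} actual cost per unit = ${minutes * cost_per_min:7.2f}")

print(f"labor supplied: {supplied} min/wk vs practical capacity {PRACTICAL_CAPACITY}")
if supplied > PRACTICAL_CAPACITY:
    print("-> team overloaded; additional labor capacity needed")
```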

  10. The Effect of Inquiry Training Learning Model Based on Just in Time Teaching for Problem Solving Skill

    Science.gov (United States)

    Turnip, Betty; Wahyuni, Ida; Tanjung, Yul Ifda

    2016-01-01

    One of the factors that can support successful learning activity is the use of learning models suited to the objectives to be achieved. This study aimed to analyze differences in the physics problem-solving ability of students taught with the Inquiry Training learning model based on Just In Time Teaching [JITT] and students taught with a conventional cooperative model…

  11. Real time wave forecasting using wind time history and numerical model

    Science.gov (United States)

    Jain, Pooja; Deo, M. C.; Latha, G.; Rajendran, V.

    Operational activities in the ocean, like planning for structural repairs or fishing expeditions, require real-time prediction of waves over typical durations of, say, a few hours. Such predictions can be made by using a numerical model or a time series model employing continuously recorded waves. This paper presents another option, based on a different time series approach in which the input is in the form of preceding wind speed and wind direction observations. This is useful for stations where costly wave buoys are not deployed and only meteorological buoys measuring wind are moored. The technique employs the alternative artificial intelligence approaches of an artificial neural network (ANN), genetic programming (GP) and model tree (MT) to carry out the time series modeling of wind to obtain waves. Wind observations at four offshore sites along the east coast of India were used. For calibration purposes the wave data was generated using a numerical model. The predicted waves obtained using the proposed time series models, when compared with the numerically generated waves, showed good resemblance in terms of the selected error criteria. Large differences across the chosen techniques of ANN, GP, and MT were not noticed. Wave hindcasting at the same time step and predictions over shorter lead times were better than predictions over longer lead times. The proposed method is a cost-effective and convenient option when site-specific information is desired.
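
A minimal sketch of the ANN variant of this idea, assuming scikit-learn: lagged wind speed plus direction components in, wave height out. The synthetic series below stands in for the buoy winds and the numerically generated calibration waves.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
n, lags = 2000, 6
speed = 8 + 3 * np.sin(np.arange(n) / 50) + rng.normal(0, 1, n)   # wind speed (m/s)
direction = rng.uniform(0, 360, n)                                # wind direction (deg)

X = np.column_stack([np.roll(speed, l) for l in range(1, lags + 1)]
                    + [np.sin(np.radians(direction)), np.cos(np.radians(direction))])
y = 0.02 * speed ** 2 + rng.normal(0, 0.2, n)   # stand-in for model-generated Hs (m)
X, y = X[lags:], y[lags:]                       # drop rows contaminated by wrap-around

ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
print("training R^2:", round(ann.score(X, y), 3))
```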

  12. Real-time advanced nuclear reactor core model

    International Nuclear Information System (INIS)

    Koclas, J.; Friedman, F.; Paquette, C.; Vivier, P.

    1990-01-01

    The paper describes a multi-nodal advanced nuclear reactor core model. The model is based on the application of modern equivalence theory to the solution of the neutron diffusion equation in real time, employing the finite-difference method. The use of equivalence theory allows the application of the finite-difference method to cores divided into hundreds of nodes, as opposed to the much finer divisions (on the order of ten thousand nodes) where the unmodified method is currently applied. As a result, the model can be used to model core kinetics for real-time full-scope training simulators. Results of benchmarks validate the basic assumptions of the model and its applicability to real-time simulation. (orig./HP)
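
For flavor, here is a minimal explicit finite-difference step for one-group, one-dimensional neutron diffusion, the kind of kernel a coarse-node real-time model iterates; all group constants and the mesh are illustrative, and the equivalence-theory corrections the paper relies on are omitted.

```python
import numpy as np

n, dx, dt = 50, 10.0, 1e-4     # nodes, node width (cm), time step (s); illustrative
D, sig_a, nu_sig_f, v = 1.2, 0.010, 0.011, 2.2e5   # assumed one-group constants

phi = np.ones(n)               # initial flux guess; vacuum (zero) ghost nodes outside
for _ in range(1000):
    lap = np.empty(n)
    lap[1:-1] = (phi[:-2] - 2 * phi[1:-1] + phi[2:]) / dx ** 2
    lap[0] = (-2 * phi[0] + phi[1]) / dx ** 2          # left ghost node = 0
    lap[-1] = (phi[-2] - 2 * phi[-1]) / dx ** 2        # right ghost node = 0
    # (1/v) dphi/dt = D lap(phi) - sig_a phi + nu_sig_f phi
    phi += v * dt * (D * lap - sig_a * phi + nu_sig_f * phi)

print("normalized flux shape (first 5 nodes):", np.round(phi / phi.max(), 3)[:5])
```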

  13. Unifying Model-Based and Reactive Programming within a Model-Based Executive

    Science.gov (United States)

    Williams, Brian C.; Gupta, Vineet; Norvig, Peter (Technical Monitor)

    1999-01-01

    Real-time, model-based deduction has recently emerged as a vital component in AI's toolbox for developing highly autonomous reactive systems. Yet one of the current hurdles towards developing model-based reactive systems is the number of methods simultaneously employed, and their corresponding melange of programming and modeling languages. This paper offers an important step towards unification. We introduce RMPL, a rich modeling language that combines probabilistic, constraint-based modeling with reactive programming constructs, while offering a simple semantics in terms of hidden-state Markov processes. We introduce probabilistic, hierarchical constraint automata (PHCA), which allow Markov processes to be expressed in a compact representation that preserves the modularity of RMPL programs. Finally, a model-based executive, called Reactive Burton, is described that exploits this compact encoding to perform efficient simulation, belief state update and control sequence generation.

  14. A practical procedure for the selection of time-to-failure models based on the assessment of trends in maintenance data

    International Nuclear Information System (INIS)

    Louit, D.M.; Pascual, R.; Jardine, A.K.S.

    2009-01-01

    Reliability studies often rely on false premises, such as the assumption of independent and identically distributed times between failures (a renewal process). This can lead to erroneous model selection for the time to failure of a particular component or system, which can in turn lead to wrong conclusions and decisions. A strong statistical focus, a lack of a systematic approach, and sometimes inadequate theoretical background seem to have made it difficult for maintenance analysts to adopt the necessary stage of data testing before the selection of a suitable model. In this paper, a framework for selecting a model to represent the failure process of a component or system is presented, based on a review of available trend tests. The paper focuses only on single-time-variable models and is primarily directed at analysts responsible for reliability analyses in an industrial maintenance environment. The model selection framework discriminates between the use of statistical distributions to represent the time to failure (the 'renewal approach') and the use of stochastic point processes (the 'repairable systems approach') when system ageing or reliability growth may be present. An illustrative example based on failure data from a fleet of backhoes is included.
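
One of the simplest trend tests such a framework reviews is the Laplace test; a sketch of its usual time-truncated form is below, with invented failure times. Under the null hypothesis of a homogeneous Poisson process the statistic is approximately standard normal.

```python
import math

# assumed cumulative failure times (h) for one machine; observation truncated at T
t = [120.0, 310.0, 480.0, 590.0, 680.0, 750.0, 800.0]
T = 850.0

n = len(t)
u = (sum(t) / n - T / 2) / (T * math.sqrt(1.0 / (12 * n)))
print(f"Laplace statistic u = {u:.2f}")
if abs(u) < 1.96:
    print("no trend at the 5% level -> i.i.d. (renewal) models are admissible")
else:
    print("significant trend -> consider a non-stationary point-process model")
```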

  15. Accounting for detectability in fish distribution models: an approach based on time-to-first-detection

    Directory of Open Access Journals (Sweden)

    Mário Ferreira

    2015-12-01

    Full Text Available Imperfect detection (i.e., failure to detect a species when it is present) is increasingly recognized as an important source of uncertainty and bias in species distribution modeling. Although methods have been developed to solve this problem by explicitly incorporating variation in detectability in the modeling procedure, their use in freshwater systems remains limited. This is probably because most methods imply repeated sampling (≥ 2 visits) of each location within a short time frame, which may be impractical or too expensive in most studies. Here we explore a novel approach to control for detectability based on the time-to-first-detection, which requires only a single sampling occasion and so may find more general applicability in freshwaters. The approach uses a Bayesian framework to combine conventional occupancy modeling with techniques borrowed from parametric survival analysis, jointly modeling the factors affecting the probability of occupancy and the time required to detect a species. To illustrate the method, we modeled large-scale factors (elevation, stream order and precipitation) affecting the distribution of six fish species in a catchment in north-eastern Portugal, while accounting for factors potentially affecting detectability at sampling points (stream depth and width). Species detectability was most influenced by depth, and to a lesser extent by stream width, and tended to increase over time for most species. Occupancy was consistently affected by stream order, elevation and annual precipitation. The species presented widespread distributions, with higher uncertainty in tributaries and upper stream reaches. This approach can be used to estimate sampling efficiency and provides a practical framework to incorporate variation in detection rates into fish distribution models.

  16. Gap timing and the spectral timing model.

    Science.gov (United States)

    Hopson, J W

    1999-04-01

    A hypothesized mechanism underlying gap timing was implemented in the Spectral Timing Model [Grossberg, S., Schmajuk, N., 1989. Neural dynamics of adaptive timing and temporal discrimination during associative learning. Neural Netw. 2, 79-102], a neural network timing model. The activation of the network nodes was made to decay in the absence of the timed signal, causing the model to shift its peak response time in a fashion similar to that shown in animal subjects. The model was then able to accurately simulate a parametric study of gap timing [Cabeza de Vaca, S., Brown, B., Hemmes, N., 1994. Internal clock and memory processes in animal timing. J. Exp. Psychol.: Anim. Behav. Process. 20 (2), 184-198]. The addition of a memory decay process appears to produce the correct pattern of results in both Scalar Expectancy Theory models and in the Spectral Timing Model, and the fact that the same process is effective in two such disparate models argues strongly that it reflects a true aspect of animal cognition.

  17. Assessment of surface solar irradiance derived from real-time modelling techniques and verification with ground-based measurements

    Science.gov (United States)

    Kosmopoulos, Panagiotis G.; Kazadzis, Stelios; Taylor, Michael; Raptis, Panagiotis I.; Keramitsoglou, Iphigenia; Kiranoudis, Chris; Bais, Alkiviadis F.

    2018-02-01

    This study focuses on the assessment of surface solar radiation (SSR) based on operational neural network (NN) and multi-regression function (MRF) modelling techniques that produce instantaneous (in less than 1 min) outputs. Using real-time cloud and aerosol optical properties inputs from the Spinning Enhanced Visible and Infrared Imager (SEVIRI) on board the Meteosat Second Generation (MSG) satellite and the Copernicus Atmosphere Monitoring Service (CAMS), respectively, these models are capable of calculating SSR at high resolution (1 nm, 0.05°, 15 min) that can be used for spectrally integrated irradiance maps, databases and various applications related to energy exploitation. The real-time models are validated against ground-based measurements of the Baseline Surface Radiation Network (BSRN) on temporal scales ranging from 15 min to monthly means, while a sensitivity analysis of the cloud and aerosol effects on SSR is performed to ensure reliability under different sky and climatological conditions. The simulated outputs, compared to their common training dataset created by the radiative transfer model (RTM) libRadtran, showed median error values in the range -15 to 15 % for the NN that produces spectral irradiances (NNS), 5-6 % underestimation for the integrated NN and close-to-zero errors for the MRF technique. The verification against BSRN revealed that the real-time calculation uncertainty ranges from -100 to 40 and -20 to 20 W m-2, for the 15 min and monthly mean global horizontal irradiance (GHI) averages, respectively, while the accuracy of the input parameters, in terms of aerosol and cloud optical thickness (AOD and COT), and their impact on GHI, was of the order of 10 % as compared to the ground-based measurements. The proposed system is intended for studies and real-time applications related to solar energy production planning and use.

  18. The "Carbon Data Explorer": Web-Based Space-Time Visualization of Modeled Carbon Fluxes

    Science.gov (United States)

    Billmire, M.; Endsley, K. A.

    2014-12-01

    The visualization of and scientific "sense-making" from large datasets varying in both space and time is a challenge; one that is still being addressed in a number of different fields. The approaches taken thus far are often specific to a given academic field due to the unique questions that arise in different disciplines; however, basic approaches such as geographic maps and time series plots are still widely useful. The proliferation of model estimates of increasing size and resolution further complicates what ought to be a simple workflow: model some geophysical phenomenon, obtain results and measure uncertainty, organize and display the data, make comparisons across trials, and share findings. A new tool is in development that is intended to help scientists with the latter parts of that workflow. The tentatively-titled "Carbon Data Explorer" (http://spatial.mtri.org/flux-client/) enables users to access carbon science and related spatio-temporal science datasets over the web. All that is required to access multiple interactive visualizations of carbon science datasets is a compatible web browser and an internet connection. While the application targets atmospheric and climate science datasets, particularly spatio-temporal model estimates of carbon products, the software architecture takes an agnostic approach to the data to be visualized. Any atmospheric, biophysical, or geophysical quantity that varies in space and time, including one or more measures of uncertainty, can be visualized within the application. Within the web application, users have seamless control over a flexible and consistent symbology for map-based visualizations and plots. Where time series data are represented by one or more data "frames" (e.g. a map), users can animate the data. In the "coordinated view," users can make direct comparisons between different frames and different models or model runs, facilitating intermodal comparisons and assessments of spatio-temporal variability. Map

  19. Discrete-time modelling of musical instruments

    International Nuclear Information System (INIS)

    Vaelimaeki, Vesa; Pakarinen, Jyri; Erkut, Cumhur; Karjalainen, Matti

    2006-01-01

    This article describes physical modelling techniques that can be used for simulating musical instruments. The methods are closely related to digital signal processing. They discretize the system with respect to time, because the aim is to run the simulation using a computer. The physics-based modelling methods can be classified as mass-spring, modal, wave digital, finite difference, digital waveguide and source-filter models. We present the basic theory and a discussion on possible extensions for each modelling technique. For some methods, a simple model example is chosen from the existing literature demonstrating a typical use of the method. For instance, in the case of the digital waveguide modelling technique a vibrating string model is discussed, and in the case of the wave digital filter technique we present a classical piano hammer model. We tackle some nonlinear and time-varying models and include new results on the digital waveguide modelling of a nonlinear string. Current trends and future directions in physical modelling of musical instruments are discussed
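
As a taste of the digital waveguide family, here is a minimal Karplus-Strong plucked-string sketch (a simplified waveguide with a two-point-average loss filter); pitch, loss factor, and duration are arbitrary choices.

```python
import numpy as np

fs, f0, dur = 44100, 220.0, 1.0       # sample rate, fundamental (Hz), length (s)
N = int(fs / f0)                      # delay-line length sets the pitch

delay = np.random.default_rng(1).uniform(-1, 1, N)   # noise burst "plucks" the string
out = np.empty(int(fs * dur))

for i in range(out.size):
    out[i] = delay[i % N]
    # loop filter: two-point average models frequency-dependent string losses
    delay[i % N] = 0.996 * 0.5 * (delay[i % N] + delay[(i + 1) % N])

# 'out' now holds a decaying plucked-string tone (write it out with scipy.io.wavfile)
```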

  20. Evaluation of different time domain peak models using extreme learning machine-based peak detection for EEG signal.

    Science.gov (United States)

    Adam, Asrul; Ibrahim, Zuwairie; Mokhtar, Norrima; Shapiai, Mohd Ibrahim; Cumming, Paul; Mubin, Marizan

    2016-01-01

    Various peak models have been introduced to detect and analyze peaks in the time domain analysis of electroencephalogram (EEG) signals. In general, a peak model in the time domain analysis consists of a set of signal parameters, such as amplitude, width, and slope. Models including those proposed by Dumpala, Acir, Liu, and Dingle are routinely used to detect peaks in EEG signals acquired in clinical studies of epilepsy or eye blink. The optimal peak model is the one with the most reliable peak detection performance in a particular application. A fair measure of the performance of different models requires a common and unbiased platform. In this study, we evaluate the performance of the four different peak models using the extreme learning machine (ELM)-based peak detection algorithm. We found that the Dingle model gave the best performance, with 72 % accuracy in the analysis of real EEG data. Statistical analysis confirmed that the Dingle model afforded significantly better mean testing accuracy than did the Acir and Liu models, which were in the range 37-52 %. Meanwhile, the Dingle model showed no significant difference compared to the Dumpala model.
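
The shared vocabulary of these peak models (amplitude, width, slope) can be illustrated with a generic feature-extraction sketch using scipy; the thresholds and the test signal are invented, and none of the four published models is reproduced exactly.

```python
import numpy as np
from scipy.signal import find_peaks

t = np.linspace(0, 1, 500)
sig = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.default_rng(0).standard_normal(500)

peaks, props = find_peaks(sig, prominence=0.5, width=5)   # thresholds are arbitrary
for p, w in zip(peaks, props["widths"]):
    slope = (sig[p] - sig[max(p - 5, 0)]) / 5.0           # crude rising-edge slope
    print(f"peak@{p:3d}  amplitude={sig[p]:+.2f}  width={w:5.1f}  slope={slope:+.2f}")
```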

  1. Reverse time migration by Krylov subspace reduced order modeling

    Science.gov (United States)

    Basir, Hadi Mahdavi; Javaherian, Abdolrahim; Shomali, Zaher Hossein; Firouz-Abadi, Roohollah Dehghani; Gholamy, Shaban Ali

    2018-04-01

    Imaging is a key step in seismic data processing. To date, a myriad of advanced pre-stack depth migration approaches have been developed; however, reverse time migration (RTM) is still considered the high-end imaging algorithm. The main limitations on the performance of reverse time migration are the intensive computation of the forward and backward simulations, time consumption, and the memory allocation related to the imaging condition. Based on reduced order modeling, we propose an algorithm that addresses all the aforementioned factors. Our method uses the Krylov subspace method to compute certain mode shapes of the velocity model, which serve as an orthogonal basis for the reduced order model. Reverse time migration by reduced order modeling lends itself to highly parallel computation and strongly reduces the memory requirements of reverse time migration. The synthetic model results showed that the suggested method can decrease the computational costs of reverse time migration by several orders of magnitude, compared with reverse time migration by the finite element method.
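
A sketch of the basis-construction idea under broad assumptions: use a Krylov eigensolver (Lanczos, via scipy) to extract a few low-frequency mode shapes of a discrete Laplacian standing in for the medium operator, then project a wavefield onto that reduced basis. This illustrates the reduced-order mechanics only, not the paper's RTM pipeline.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n, k = 400, 20                                # grid points, retained mode shapes
lap = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n))

# Lanczos (a Krylov method) for the k lowest modes via shift-invert
vals, modes = eigsh(-lap, k=k, sigma=0, which="LM")

wavefield = np.exp(-0.01 * (np.arange(n) - n / 2) ** 2)  # toy snapshot
coeffs = modes.T @ wavefield                  # project onto the reduced basis
recon = modes @ coeffs                        # reduced-order reconstruction
print("relative reconstruction error:",
      np.linalg.norm(wavefield - recon) / np.linalg.norm(wavefield))
```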

  2. Design base transient analysis using the real-time nuclear reactor simulator model

    International Nuclear Information System (INIS)

    Tien, K.K.; Yakura, S.J.; Morin, J.P.; Gregory, M.V.

    1987-01-01

    A real-time simulation model has been developed to describe the dynamic response of all major systems in a nuclear process reactor. The model consists of a detailed representation of all hydraulic components in the external coolant circulating loops: piping, valves, pumps and heat exchangers. The reactor core is described by a three-dimensional neutron kinetics model with detailed representation of assembly coolant and moderator thermal hydraulics. The models were developed to support a real-time training simulator; therefore, they reproduce system parameters characteristic of steady-state normal operation with high precision. The system responses for postulated severe transients such as large pipe breaks, loss of pumping power, piping leaks, malfunctions in control rod insertion, and emergency injection of neutron absorber are calculated to be in good agreement with reference safety analyses. Restrictions were imposed by the requirement that the resulting code run in real time with sufficient spare time to allow interfacing with secondary systems and simulator hardware. Because of the hardware set-up and real plant instrumentation, simplifications based on symmetry were not allowed. The resulting code represents a coarse-node engineering model in which the level of detail has been tailored to the available computing power of a present-generation super-minicomputer. Results for several significant transients, as calculated by the real-time model, are compared both to actual plant data and to results generated by fine-mesh analysis codes

  4. Performance Evaluation of Components Using a Granularity-based Interface Between Real-Time Calculus and Timed Automata

    Directory of Open Access Journals (Sweden)

    Karine Altisen

    2010-06-01

    Full Text Available To analyze complex and heterogeneous real-time embedded systems, recent works have proposed interface techniques between real-time calculus (RTC and timed automata (TA, in order to take advantage of the strengths of each technique for analyzing various components. But the time to analyze a state-based component modeled by TA may be prohibitively high, due to the state space explosion problem. In this paper, we propose a framework of granularity-based interfacing to speed up the analysis of a TA modeled component. First, we abstract fine models to work with event streams at coarse granularity. We perform analysis of the component at multiple coarse granularities and then based on RTC theory, we derive lower and upper bounds on arrival patterns of the fine output streams using the causality closure algorithm. Our framework can help to achieve tradeoffs between precision and analysis time.

  5. Incident Duration Modeling Using Flexible Parametric Hazard-Based Models

    Directory of Open Access Journals (Sweden)

    Ruimin Li

    2014-01-01

    Full Text Available Assessing and prioritizing the duration and effects of traffic incidents on major roads present significant challenges for road network managers. This study examines the effect of numerous factors associated with various types of incidents on their duration and proposes an incident duration prediction model. Several parametric accelerated failure time hazard-based models were examined, including Weibull, log-logistic, log-normal, and generalized gamma, as well as all of these models with gamma heterogeneity and flexible parametric hazard-based models with degrees of freedom ranging from one to ten, by analyzing a traffic incident dataset obtained from the Incident Reporting and Dispatching System in Beijing in 2008. Results show that different factors significantly affect different incident time phases, and the best-fitting distributions differ across phases. Given the best hazard-based model for each incident time phase, reasonable predictions can be obtained for most incidents. The results of this study can aid traffic incident management agencies not only in implementing strategies that reduce incident duration, and thus reduce congestion, secondary incidents, and the associated human and economic losses, but also in effectively predicting incident duration.
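
A sketch of the distribution-comparison step, assuming scipy: fit the four baseline families to (uncensored) durations and rank by AIC. The durations are invented, and covariates, gamma heterogeneity, and the flexible spline baselines are all omitted.

```python
import numpy as np
from scipy import stats

durations = np.array([12, 18, 22, 25, 31, 35, 44, 52, 61, 75, 90, 120.0])  # minutes

candidates = {
    "Weibull":      stats.weibull_min,
    "log-logistic": stats.fisk,
    "log-normal":   stats.lognorm,
    "gen. gamma":   stats.gengamma,
}
for name, dist in candidates.items():
    params = dist.fit(durations, floc=0)       # location pinned at zero
    ll = np.sum(dist.logpdf(durations, *params))
    aic = 2 * (len(params) - 1) - 2 * ll       # loc is fixed, not a free parameter
    print(f"{name:12s} AIC = {aic:6.1f}")
```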

  6. Examining School-Based Bullying Interventions Using Multilevel Discrete Time Hazard Modeling

    Science.gov (United States)

    Wagaman, M. Alex; Geiger, Jennifer Mullins; Bermudez-Parsai, Monica; Hedberg, E. C.

    2014-01-01

    Although schools have been trying to address bullying by utilizing different approaches that stop or reduce the incidence of bullying, little remains known about which specific intervention strategies are most successful in reducing bullying in the school setting. Using the social-ecological framework, this paper examines school-based disciplinary interventions often used to deliver consequences to deter the reoccurrence of bullying and aggressive behaviors among school-aged children. Data for this study are drawn from the School-Wide Information System (SWIS), with the final analytic sample consisting of 1,221 students in grades K-12 who received an office disciplinary referral for bullying during the first semester. Using Kaplan-Meier failure functions and multilevel discrete time hazard models, determinants of the probability of a student receiving a second referral over time were examined. Of the seven interventions tested, only Parent-Teacher Conference (AOR=0.65) was significantly associated with a reduced reoccurrence of bullying and aggressive behaviors. By using a social-ecological framework, schools can develop strategies that deter the reoccurrence of bullying by identifying key factors that enhance a sense of connection between the students' mesosystems as well as utilizing disciplinary strategies that take into consideration students' microsystem roles. PMID:22878779

  7. A practical procedure for the selection of time-to-failure models based on the assessment of trends in maintenance data

    Energy Technology Data Exchange (ETDEWEB)

    Louit, D.M. [Komatsu Chile, Av. Americo Vespucio 0631, Quilicura, Santiago (Chile)], E-mail: rpascual@ing.puc.cl; Pascual, R. [Centro de Mineria, Pontificia Universidad Catolica de Chile, Av. Vicuna Mackenna 4860, Santiago (Chile); Jardine, A.K.S. [Department of Mechanical and Industrial Engineering, University of Toronto, 5 King's College Road, Toronto, Ont., M5S 3G8 (Canada)

    2009-10-15

    Reliability studies often rely on false premises, such as the assumption of independent and identically distributed times between failures (a renewal process). This can lead to erroneous model selection for the time to failure of a particular component or system, which can in turn lead to wrong conclusions and decisions. A strong statistical focus, a lack of a systematic approach, and sometimes inadequate theoretical background seem to have made it difficult for maintenance analysts to adopt the necessary stage of data testing before the selection of a suitable model. In this paper, a framework for selecting a model to represent the failure process of a component or system is presented, based on a review of available trend tests. The paper focuses only on single-time-variable models and is primarily directed at analysts responsible for reliability analyses in an industrial maintenance environment. The model selection framework discriminates between the use of statistical distributions to represent the time to failure (the 'renewal approach') and the use of stochastic point processes (the 'repairable systems approach') when system ageing or reliability growth may be present. An illustrative example based on failure data from a fleet of backhoes is included.

  8. RTMOD: Real-Time MODel evaluation

    International Nuclear Information System (INIS)

    Graziani, G; Galmarini, S.; Mikkelsen, T.

    2000-01-01

    The 1998-1999 RTMOD project was a system based on automated statistical evaluation for the inter-comparison of real-time forecasts produced by long-range atmospheric dispersion models for national nuclear emergency predictions of cross-boundary consequences. The background of RTMOD was the 1994 ETEX project, which involved about 50 models run in several institutes around the world to simulate two real tracer releases covering a large part of the European territory. In the preliminary phase of ETEX, three dry runs (i.e. real-time simulations of fictitious releases) were carried out. At that time, the World Wide Web was not available to all the exercise participants, and plume predictions were therefore submitted to JRC-Ispra by fax and regular mail for subsequent processing. The rapid development of the World Wide Web in the second half of the nineties, together with the experience gained during the ETEX exercises, suggested the development of this project. RTMOD featured a web-based user-friendly interface for data submission and an interactive program module for display, intercomparison and analysis of the forecasts. RTMOD focussed on model intercomparison of concentration predictions at the nodes of a regular grid with 0.5 degrees of resolution in both latitude and longitude, the domain extending from 5W to 40E and 40N to 65N. Hypothetical releases were notified around the world to the 28 model forecasters via the web with one day's advance warning. They then accessed the RTMOD web page for detailed information on the actual release, uploaded their predictions to the RTMOD server as soon as possible, and could soon after start their inter-comparison analysis with other modelers. When additional forecast data arrived, the existing statistical results were recalculated to include the influence of all available predictions. The new web-based RTMOD concept has proven useful as a practical decision-making tool for realtime

  9. Towards real-time communication between in vivo neurophysiological data sources and simulator-based brain biomimetic models.

    Science.gov (United States)

    Lee, Giljae; Matsunaga, Andréa; Dura-Bernal, Salvador; Zhang, Wenjie; Lytton, William W; Francis, Joseph T; Fortes, José Ab

    2014-11-01

    Development of more sophisticated implantable brain-machine interfaces (BMIs) will require both interpretation of the neurophysiological data being measured and subsequent determination of signals to be delivered back to the brain. Computational models are at the heart of BMI machinery and therefore an essential tool in both of these processes. One approach is to utilize brain biomimetic models (BMMs) to develop and instantiate these algorithms. These then must be connected as hybrid systems in order to interface the BMM with in vivo data acquisition devices and prosthetic devices. The combined system then provides a test bed for neuroprosthetic rehabilitative solutions and medical devices for the repair and enhancement of the damaged brain. We propose here a computer network-based design for this purpose, detailing its internal modules and data flows. We describe a prototype implementation of the design, enabling interaction between the Plexon Multichannel Acquisition Processor (MAP) server, a commercial tool to collect signals from microelectrodes implanted in a live subject, and a BMM: a NEURON-based model of sensorimotor cortex capable of controlling a virtual arm. The prototype implementation supports an online mode for real-time simulations, as well as an offline mode for data analysis and simulations without real-time constraints, and provides binning operations to discretize continuous input to the BMM and filtering operations for dealing with noise. Evaluation demonstrated that the implementation successfully delivered monkey spiking activity to the BMM through LAN environments while respecting real-time constraints.

  10. T-UPPAAL: Online Model-based Testing of Real-Time Systems

    DEFF Research Database (Denmark)

    Mikucionis, Marius; Larsen, Kim Guldstrand; Nielsen, Brian

    2004-01-01

    The goal of testing is to gain confidence in a physical computer based system by means of executing it. More than one third of typical project resources is spent on testing embedded and real-time systems, but still it remains ad-hoc, based on heuristics, and error-prone. Therefore systematic...

  11. Time-and-ID-Based Proxy Reencryption Scheme

    Directory of Open Access Journals (Sweden)

    Kambombo Mtonga

    2014-01-01

    Full Text Available A time- and ID-based proxy reencryption scheme is proposed in this paper. Type-based proxy reencryption enables the delegator to implement fine-grained policies with one key pair without any additional trust in the proxy. However, in some applications, the time within which the data was sampled or collected is critical. In such applications, for example healthcare and criminal investigations, the delegatee may be interested in only some of the messages of certain types sampled within some time bound, instead of the entire subset. Hence, in order to cater for such situations, in this paper we propose a time-and-identity-based proxy reencryption scheme that takes into account the time within which the data was collected as a factor to consider when categorizing data, in addition to its type. Our scheme is based on the Boneh and Boyen identity-based scheme (BB-IBE) and Matsuo's proxy reencryption scheme for identity-based encryption (IBE to IBE). We prove that our scheme is semantically secure in the standard model.

  12. Study of Railway Track Irregularity Standard Deviation Time Series Based on Data Mining and Linear Model

    Directory of Open Access Journals (Sweden)

    Jia Chaolong

    2013-01-01

    Full Text Available A good track geometry state ensures the safe operation of railway passenger and freight services. Railway transportation plays an important role in Chinese economic and social development. This paper studies track irregularity standard deviation time series data and focuses on the characteristics and trend changes of the track state by applying clustering analysis. A linear recursive model and a linear ARMA model based on wavelet decomposition and reconstruction are proposed, both of which offer support for the safe management of railway transportation.

  13. Adaptation to shift work: physiologically based modeling of the effects of lighting and shifts' start time.

    Directory of Open Access Journals (Sweden)

    Svetlana Postnova

    Full Text Available Shift work has become an integral part of our life with almost 20% of the population being involved in different shift schedules in developed countries. However, the atypical work times, especially the night shifts, are associated with reduced quality and quantity of sleep that leads to increase of sleepiness often culminating in accidents. It has been demonstrated that shift workers' sleepiness can be improved by a proper scheduling of light exposure and optimizing shifts timing. Here, an integrated physiologically-based model of sleep-wake cycles is used to predict adaptation to shift work in different light conditions and for different shift start times for a schedule of four consecutive days of work. The integrated model combines a model of the ascending arousal system in the brain that controls the sleep-wake switch and a human circadian pacemaker model. To validate the application of the integrated model and demonstrate its utility, its dynamics are adjusted to achieve a fit to published experimental results showing adaptation of night shift workers (n = 8) in conditions of either bright or regular lighting. Further, the model is used to predict the shift workers' adaptation to the same shift schedule, but for conditions not considered in the experiment. The model demonstrates that the intensity of shift light can be reduced fourfold from that used in the experiment and still produce good adaptation to night work. The model predicts that sleepiness of the workers during night shifts on a protocol with either bright or regular lighting can be significantly improved by starting the shift earlier in the night, e.g., at 21:00 instead of 00:00. Finally, the study predicts that people of the same chronotype, i.e. with identical sleep times in normal conditions, can have drastically different responses to shift work depending on their intrinsic circadian and homeostatic parameters.

  14. Adaptation to shift work: physiologically based modeling of the effects of lighting and shifts' start time.

    Science.gov (United States)

    Postnova, Svetlana; Robinson, Peter A; Postnov, Dmitry D

    2013-01-01

    Shift work has become an integral part of our life with almost 20% of the population being involved in different shift schedules in developed countries. However, the atypical work times, especially the night shifts, are associated with reduced quality and quantity of sleep that leads to increase of sleepiness often culminating in accidents. It has been demonstrated that shift workers' sleepiness can be improved by a proper scheduling of light exposure and optimizing shifts timing. Here, an integrated physiologically-based model of sleep-wake cycles is used to predict adaptation to shift work in different light conditions and for different shift start times for a schedule of four consecutive days of work. The integrated model combines a model of the ascending arousal system in the brain that controls the sleep-wake switch and a human circadian pacemaker model. To validate the application of the integrated model and demonstrate its utility, its dynamics are adjusted to achieve a fit to published experimental results showing adaptation of night shift workers (n = 8) in conditions of either bright or regular lighting. Further, the model is used to predict the shift workers' adaptation to the same shift schedule, but for conditions not considered in the experiment. The model demonstrates that the intensity of shift light can be reduced fourfold from that used in the experiment and still produce good adaptation to night work. The model predicts that sleepiness of the workers during night shifts on a protocol with either bright or regular lighting can be significantly improved by starting the shift earlier in the night, e.g., at 21:00 instead of 00:00. Finally, the study predicts that people of the same chronotype, i.e. with identical sleep times in normal conditions, can have drastically different responses to shift work depending on their intrinsic circadian and homeostatic parameters.

  16. Travel Time Reliability for Urban Networks : Modelling and Empirics

    NARCIS (Netherlands)

    Zheng, F.; Liu, Xiaobo; van Zuylen, H.J.; Li, Jie; Lu, Chao

    2017-01-01

    The importance of travel time reliability in traffic management, control, and network design has received a lot of attention in the past decade. In this paper, a network travel time distribution model based on the Johnson curve system is proposed. The model is applied to field travel time data

  17. Modeling Financial Time Series Based on a Market Microstructure Model with Leverage Effect

    Directory of Open Access Journals (Sweden)

    Yanhui Xi

    2016-01-01

    Full Text Available The basic market microstructure model specifies that the price/return innovation and the volatility innovation are independent Gaussian white noise processes. However, the financial leverage effect has been found to be statistically significant in many financial time series. In this paper, a novel market microstructure model with leverage effects is proposed. The model specification assumes a negative correlation between the errors of the price/return innovation and the volatility innovation. With the new representation, a theoretical explanation of the leverage effect is provided. Simulated data and daily stock market indices (Shanghai composite index, Shenzhen component index, and Standard and Poor's 500 Composite index) are used to estimate the leverage market microstructure model via the Bayesian Markov chain Monte Carlo (MCMC) method. The results verify the effectiveness of the model and its estimation approach and also indicate that the stock markets have strong leverage effects. Compared with the classical leverage stochastic volatility (SV) model in terms of DIC (Deviance Information Criterion), the leverage market microstructure model fits the data better.
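
The leverage mechanism can be demonstrated by simulation: draw the volatility innovation with a negative correlation to the previous return innovation and check that negative returns precede higher volatility. All parameter values below are illustrative, not the paper's MCMC estimates.

```python
import numpy as np

rng = np.random.default_rng(42)
T, mu, phi, sigma_eta, rho = 2000, -0.2, 0.97, 0.15, -0.5  # assumed parameters

h = np.full(T, mu)                 # latent log-volatility
r = np.zeros(T)                    # returns
for t in range(T - 1):
    eps = rng.standard_normal()                    # return innovation
    r[t] = np.exp(h[t] / 2.0) * eps
    # volatility innovation negatively correlated with eps (leverage)
    eta = rho * eps + np.sqrt(1.0 - rho ** 2) * rng.standard_normal()
    h[t + 1] = mu + phi * (h[t] - mu) + sigma_eta * eta

# leverage signature: today's return correlates negatively with tomorrow's |return|
print("corr(r_t, |r_{t+1}|) =", np.corrcoef(r[:-2], np.abs(r[1:-1]))[0, 1])
```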

  18. Time dependent policy-based access control

    DEFF Research Database (Denmark)

    Vasilikos, Panagiotis; Nielson, Flemming; Nielson, Hanne Riis

    2017-01-01

    Access control policies are essential to determine who is allowed to access data in a system without compromising the data's security. However, applications inside a distributed environment may require those policies to be dependent on the actual content of the data, the flow of information, and also on other attributes of the environment such as the time. In this paper, we use systems of Timed Automata to model distributed systems and we present a logic in which one can express time-dependent policies for access control. We show how a fragment of our logic can be reduced to a logic that current model checkers for Timed Automata such as UPPAAL can handle, and we present a translator that performs this reduction. We then use our translator and UPPAAL to enforce time-dependent policy-based access control on an example application from the aerospace industry.
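
As a toy illustration only (the paper works with Timed Automata and UPPAAL, not Python), a time-dependent access-control policy boils down to a clock-guarded check like the one below; the subjects, resources, and window are invented.

```python
from dataclasses import dataclass

@dataclass
class Policy:
    subject: str
    resource: str
    window: tuple          # (start, end) of the permitted access interval, seconds

def allowed(policy, subject, resource, clock):
    lo, hi = policy.window
    return (subject == policy.subject and resource == policy.resource
            and lo <= clock <= hi)

p = Policy("ground_station", "telemetry_log", (0, 3600))
print(allowed(p, "ground_station", "telemetry_log", 1200))   # True: inside window
print(allowed(p, "ground_station", "telemetry_log", 7200))   # False: window expired
```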

  19. Gradient-based model calibration with proxy-model assistance

    Science.gov (United States)

    Burrows, Wesley; Doherty, John

    2016-02-01

    Use of a proxy model in gradient-based calibration and uncertainty analysis of a complex groundwater model with large run times and problematic numerical behaviour is described. The methodology is general, and can be used with models of all types. The proxy model is based on a series of analytical functions that link all model outputs used in the calibration process to all parameters requiring estimation. In enforcing history-matching constraints during the calibration and post-calibration uncertainty analysis processes, the proxy model is run for the purposes of populating the Jacobian matrix, while the original model is run when testing parameter upgrades; the latter process is readily parallelized. Use of a proxy model in this fashion dramatically reduces the computational burden of complex model calibration and uncertainty analysis. At the same time, the effect of model numerical misbehaviour on calculation of local gradients is mitigated, thus allowing access to the benefits of gradient-based analysis where a lack of integrity in finite-difference derivatives calculation would otherwise have impeded such access. Construction of a proxy model, and its subsequent use in calibration of a complex model, and in analysing the uncertainties of predictions made by that model, is implemented in the PEST suite.
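
A conceptual sketch of the division of labor described above, not PEST itself: a locally refit linear proxy supplies the Jacobian, while the expensive model is run only to test parameter upgrades. The "expensive" model here is a trivial stand-in for a long-running simulator.

```python
import numpy as np

def expensive_model(p):                       # stand-in for a long-running simulator
    return np.array([np.exp(-p[0]) + p[1], p[0] * p[1]])

obs = np.array([1.2, 0.3])                    # observations to history-match
p = np.array([0.5, 0.5])

for it in range(5):
    # fit a local linear proxy y ~ a0 + a1*p0 + a2*p1 from 3 expensive runs
    samples = np.array([p, p + [0.1, 0.0], p + [0.0, 0.1]])
    X = np.column_stack([np.ones(3), samples])
    Y = np.array([expensive_model(s) for s in samples])
    coeff = np.linalg.solve(X, Y).T           # proxy coefficients, one row per output

    J = coeff[:, 1:]                          # Jacobian taken from the proxy (cheap)
    r = obs - expensive_model(p)
    step, *_ = np.linalg.lstsq(J, r, rcond=None)
    if np.sum((obs - expensive_model(p + step)) ** 2) < np.sum(r ** 2):
        p = p + step                          # upgrade tested on the original model
print("calibrated parameters:", p)
```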

  20. A Virtual Machine Migration Strategy Based on Time Series Workload Prediction Using Cloud Model

    Directory of Open Access Journals (Sweden)

    Yanbing Liu

    2014-01-01

    Full Text Available Aimed at resolving the imbalance of resources and workloads at data centers, together with the overhead and high cost of virtual machine (VM) migrations, this paper proposes a new VM migration strategy based on a cloud-model time series workload prediction algorithm. By setting upper and lower workload bounds for host machines, forecasting the tendency of their subsequent workloads by creating a workload time series using the cloud model, and stipulating a general VM migration criterion, workload-aware migration (WAM), the proposed strategy selects a source host machine, a destination host machine, and a VM on the source host to carry out the migration. Experimental results and analyses show, through comparison with other peer research works, that the proposed method can effectively avoid VM migrations caused by momentary peak workload values, significantly lower the number of VM migrations, and dynamically reach and maintain a resource and workload balance for virtual machines, promoting improved utilization of resources in the entire data center.
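
A minimal sketch of a WAM-style decision, with the cloud-model predictor replaced by a plain linear-trend extrapolation purely for illustration; the bounds and workload histories are invented.

```python
import numpy as np

UPPER, LOWER = 0.85, 0.20              # host workload bounds (assumed)

def predict_next(history):
    """Placeholder predictor: linear trend fit over the recent window."""
    t = np.arange(len(history))
    slope, intercept = np.polyfit(t, history, 1)
    return slope * len(history) + intercept

hosts = {"h1": [0.70, 0.76, 0.82], "h2": [0.30, 0.28, 0.25], "h3": [0.50, 0.49, 0.52]}
forecast = {h: predict_next(w) for h, w in hosts.items()}

source = max(forecast, key=forecast.get)
dest = min(forecast, key=forecast.get)
if forecast[source] > UPPER and forecast[dest] < UPPER:
    print(f"migrate a VM from {source} (pred {forecast[source]:.2f}) to {dest}")
else:
    print("no migration: no host forecast to breach its bounds")
```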

  1. Dynamic Bus Travel Time Prediction Models on Road with Multiple Bus Routes.

    Science.gov (United States)

    Bai, Cong; Peng, Zhong-Ren; Lu, Qing-Chang; Sun, Jian

    2015-01-01

    Accurate and real-time travel time information for buses can help passengers better plan their trips and minimize waiting times. A dynamic travel time prediction model for buses, addressing the case of roads served by multiple bus routes, is proposed in this paper, based on support vector machines (SVMs) and a Kalman filtering-based algorithm. In the proposed model, the well-trained SVM model predicts the baseline bus travel times from the historical bus trip data; the Kalman filtering-based dynamic algorithm then adjusts the bus travel times using the latest bus operation information and the estimated baseline travel times. The performance of the proposed dynamic model is validated with real-world data from roads with multiple bus routes in Shenzhen, China. The results show that the proposed dynamic model is feasible and applicable for bus travel time prediction, and that it has the best prediction accuracy among the five models considered in the study.
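
A sketch of the two-stage structure, assuming the regressor's output is already available: the baseline plays the role of the trained SVM's prediction, and a scalar Kalman filter blends it with the latest observed travel times. All variances and observations are invented.

```python
baseline = 420.0                  # s; stands in for the trained SVM's prediction
x, P = baseline, 60.0 ** 2        # state estimate and its variance
Q, R = 20.0 ** 2, 45.0 ** 2       # assumed process / measurement noise variances

for z in [450.0, 465.0, 440.0]:   # latest observed travel times on the link (s)
    # predict: the state drifts back toward the baseline between observations
    x, P = 0.9 * x + 0.1 * baseline, 0.9 ** 2 * P + Q
    # update with the newest observation z
    K = P / (P + R)
    x, P = x + K * (z - x), (1 - K) * P

print(f"adjusted travel-time prediction: {x:.0f} s")
```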

  3. Time dependent patient no-show predictive modelling development.

    Science.gov (United States)

    Huang, Yu-Li; Hanauer, David A

    2016-05-09

    Purpose - The purpose of this paper is to develop evidence-based predictive no-show models that consider each of a patient's past appointment statuses, a time-dependent component, as an independent predictor to improve predictability. Design/methodology/approach - A ten-year retrospective data set was extracted from a pediatric clinic. It consisted of 7,291 distinct patients who had at least two visits, along with their appointment characteristics, patient demographics, and insurance information. Logistic regression was adopted to develop the no-show models, using two-thirds of the data for training and the remaining data for validation. The no-show threshold was then determined by minimizing the misclassification of show/no-show assignments. A total of 26 predictive models were developed, based on the number of available past appointments. Simulation was employed to test the effect of each model on the costs of patient wait time, physician idle time, and overtime. Findings - The results demonstrated that the misclassification rate and the area under the curve of the receiver operating characteristic gradually improved as more appointment history was included, until around the 20th predictive model. The overbooking method with no-show predictive models suggested incorporating up to the 16th model and outperformed other overbooking methods by as much as 9.4 per cent in cost per patient while allowing two additional patients in a clinic day. Research limitations/implications - The challenge now is to actually implement the no-show predictive model systematically to further demonstrate its robustness and simplicity in various scheduling systems. Originality/value - This paper provides examples of how to build no-show predictive models with time-dependent components to improve the overbooking policy. Accurately identifying scheduled patients' show/no-show status allows clinics to proactively schedule patients to reduce the negative impact of patient no-shows.
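
A sketch of the time-dependent idea, assuming scikit-learn: each of the k most recent appointment statuses enters a logistic regression as its own predictor. The synthetic data generator below is invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
k = 5                                    # number of past appointment statuses used
past = rng.integers(0, 2, (2000, k))     # 1 = no-show at that earlier appointment
lead = rng.uniform(0, 60, 2000)          # days between booking and appointment

X = np.column_stack([past, lead])
# synthetic truth: recent no-shows and long lead times raise no-show risk
logit = -2 + past @ np.linspace(1.0, 0.2, k) + 0.02 * lead
y = rng.random(2000) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(X, y)
print("coefficients (most recent -> oldest, lead time):", model.coef_.round(2))
```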

  4. Modeling pollen time series using seasonal-trend decomposition procedure based on LOESS smoothing.

    Science.gov (United States)

    Rojo, Jesús; Rivero, Rosario; Romero-Morte, Jorge; Fernández-González, Federico; Pérez-Badia, Rosa

    2017-02-01

    Analysis of airborne pollen concentrations provides valuable information on plant phenology and is thus a useful tool in agriculture (for predicting harvests in crops such as the olive and for deciding when to apply phytosanitary treatments) as well as in medicine and the environmental sciences. Variations in airborne pollen concentrations, moreover, are indicators of changing plant life cycles. By modeling pollen time series, we can not only identify the variables influencing pollen levels but also predict future pollen concentrations. In this study, airborne pollen time series were modeled using a seasonal-trend decomposition procedure based on LOcally wEighted Scatterplot Smoothing (LOESS), known as STL. The data series (daily Poaceae pollen concentrations over the period 2006-2014) was broken up into seasonal and residual (stochastic) components. The seasonal component was compared with data on Poaceae flowering phenology obtained by field sampling. Residuals were fitted to a model generated from daily temperature and rainfall values and daily pollen concentrations, using partial least squares regression (PLSR). This method was then applied to predict daily pollen concentrations for 2014 (independent validation data) using results for the seasonal component of the time series and estimates of the residual component for the period 2006-2013. The correlation between predicted and observed values was r = 0.79 for the pre-peak period (i.e., the period prior to the peak pollen concentration) and r = 0.63 for the post-peak period. Separate analysis of each of the components of the pollen data series enables the sources of variability to be identified more accurately than analysis of the original non-decomposed series, and for this reason the procedure has proved to be a suitable technique for analyzing the main environmental factors influencing airborne pollen concentrations.
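
A sketch of the decomposition step, assuming statsmodels' STL implementation and a synthetic daily series with an annual period standing in for the Poaceae data; the residual component would then be fed to the PLSR stage, which is omitted here.

```python
import numpy as np
from statsmodels.tsa.seasonal import STL

days = np.arange(9 * 365)                       # 2006-2014, ignoring leap days
pollen = (np.maximum(0, np.sin(2 * np.pi * days / 365)) ** 4 * 100
          + np.random.default_rng(0).gamma(2.0, 2.0, days.size))

res = STL(pollen, period=365).fit()
seasonal, resid = res.seasonal, res.resid
# 'seasonal' is compared with flowering phenology; 'resid' feeds the PLSR stage
print(seasonal[:3].round(2), resid[:3].round(2))
```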

  5. Developing a local least-squares support vector machines-based neuro-fuzzy model for nonlinear and chaotic time series prediction.

    Science.gov (United States)

    Miranian, A; Abdollahzade, M

    2013-02-01

    Local modeling approaches, owing to their ability to model different operating regimes of nonlinear systems and processes by independent local models, seem appealing for modeling, identification, and prediction applications. In this paper, we propose a local neuro-fuzzy (LNF) approach based on least-squares support vector machines (LSSVMs). The proposed LNF approach employs LSSVMs, which are powerful in modeling and predicting time series, as local models and uses a hierarchical binary tree (HBT) learning algorithm for fast and efficient estimation of its parameters. The HBT algorithm heuristically partitions the input space into smaller subdomains by axis-orthogonal splits. In each partitioning, the validity functions automatically form a unity partition and therefore normalization side effects, e.g., reactivation, are prevented. Integration of LSSVMs into the LNF network as local models, along with the HBT learning algorithm, yields a high-performance approach for modeling and prediction of complex nonlinear time series. The proposed approach is applied to the modeling and prediction of different nonlinear and chaotic real-world and hand-designed systems and time series. Analysis of the prediction results and comparisons with recent and earlier studies demonstrate the promising performance of the proposed LNF approach with the HBT learning algorithm for modeling and prediction of nonlinear and chaotic systems and time series.
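
    A minimal sketch of a single LS-SVM regressor of the kind used as a local model (not the authors' full LNF network): in the LS-SVM dual, training reduces to solving one linear system. The RBF width, regularization constant and toy series are illustrative choices.

        import numpy as np

        def rbf_kernel(A, B, sigma=1.0):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2 * sigma ** 2))

        def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
            n = len(X)
            K = rbf_kernel(X, X, sigma)
            # Solve [[0, 1^T], [1, K + I/gamma]] [b, alpha]^T = [0, y]^T
            A = np.zeros((n + 1, n + 1))
            A[0, 1:] = 1.0
            A[1:, 0] = 1.0
            A[1:, 1:] = K + np.eye(n) / gamma
            sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
            return sol[0], sol[1:]            # bias b, dual weights alpha

        def lssvm_predict(X_new, X, b, alpha, sigma=1.0):
            return rbf_kernel(X_new, X, sigma) @ alpha + b

        # One-step-ahead prediction of a toy time series from 3 lagged values.
        s = np.sin(0.3 * np.arange(203)) + 0.05 * np.random.default_rng(2).normal(size=203)
        X = np.column_stack([s[i:i + 200] for i in range(3)])
        y = s[3:203]
        b, alpha = lssvm_fit(X, y)
        print(lssvm_predict(X[-1:], X, b, alpha))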

  6. Building Chaotic Model From Incomplete Time Series

    Science.gov (United States)

    Siek, Michael; Solomatine, Dimitri

    2010-05-01

    This paper presents a number of novel techniques for building a predictive chaotic model from incomplete time series. A predictive chaotic model is built by reconstructing the time-delayed phase space from observed time series, and the prediction is made by a global model or by adaptive local models based on the dynamical neighbors found in the reconstructed phase space. In general, the building of any data-driven model depends on the completeness and quality of the data itself. However, complete data availability cannot always be guaranteed, since measurement or data transmission may intermittently fail for various reasons. We propose two main classes of solutions for dealing with incomplete time series: imputing and non-imputing methods. For the imputing methods, we utilized interpolation methods (weighted sums of linear interpolations, Bayesian principal component analysis and cubic spline interpolation) and predictive models (neural network, kernel machine, chaotic model) for estimating the missing values. After imputing the missing values, the phase space reconstruction and chaotic model prediction are executed as a standard procedure. For the non-imputing methods, we reconstructed the time-delayed phase space from the observed time series with missing values. This reconstruction results in non-continuous trajectories. However, the local model prediction can still be made from the other dynamical neighbors reconstructed from non-missing values. We implemented and tested these methods to construct a chaotic model for predicting storm surges at Hoek van Holland, the entrance of Rotterdam Port. The hourly surge time series is available for the period 1990-1996. For measuring the performance of the proposed methods, a synthetic time series with missing values, generated by applying a random masking process to the original (complete) time series, is utilized. There exist two main performance measures used in this work: (1) error measures between the actual
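
    A minimal sketch of the non-imputing idea: reconstruct a delay-embedded phase space, keep only vectors built entirely from non-missing values, and predict from the nearest dynamical neighbors. Embedding dimension, delay, neighbor count and the toy series are illustrative.

        import numpy as np

        def embed(series, dim=3, tau=2):
            n = len(series) - (dim - 1) * tau
            return np.column_stack([series[i * tau: i * tau + n] for i in range(dim)])

        rng = np.random.default_rng(3)
        s = np.sin(0.25 * np.arange(500)) + 0.05 * rng.normal(size=500)
        s[rng.choice(500, 50, replace=False)] = np.nan      # simulate missing values

        X = embed(s[:-1], dim=3, tau=2)
        y = s[(3 - 1) * 2 + 1:]                             # value one step after each vector
        ok = ~np.isnan(X).any(axis=1) & ~np.isnan(y)        # drop vectors touching gaps
        X, y = X[ok], y[ok]

        query = X[-1]                                       # predict the continuation
        d = np.linalg.norm(X[:-1] - query, axis=1)
        nn = np.argsort(d)[:5]                              # 5 nearest dynamical neighbors
        print("local-model prediction:", y[:-1][nn].mean())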

  7. Multiobjective optimization model of intersection signal timing considering emissions based on field data: A case study of Beijing.

    Science.gov (United States)

    Kou, Weibin; Chen, Xumei; Yu, Lei; Gong, Huibo

    2018-04-18

    Most existing signal timing models aim to minimize the total delay and stops at intersections, without considering environmental factors. This paper analyzes the trade-off between vehicle emissions and traffic efficiency on the basis of field data. First, considering the different operating modes of cruising, acceleration, deceleration, and idling, field emissions and Global Positioning System (GPS) data are collected to estimate emission rates for heavy-duty and light-duty vehicles. Second, a multiobjective signal timing optimization model is established based on a genetic algorithm to minimize delay, stops, and emissions. Finally, a case study is conducted in Beijing. Nine scenarios are designed considering different weights of emissions and traffic efficiency. The results, compared with those using the Highway Capacity Manual (HCM) 2010, show that signal timing optimized by the proposed model can decrease vehicle delays and emissions significantly. The optimization model can be applied in different cities, providing support for eco-signal design and development.
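
    A toy sketch of the optimization pattern only (not the paper's calibrated model): a small genetic algorithm searching cycle length and green split against a hypothetical weighted delay-plus-emissions objective. Every formula and constant here is invented for illustration.

        import numpy as np

        rng = np.random.default_rng(4)

        def cost(C, g, w_delay=1.0, w_emis=0.5):
            # Hypothetical smooth surrogates for delay and emissions at one intersection.
            delay = C * (1 - g) ** 2 / (2 * (1 - 0.8 * g)) + 100.0 / C
            emissions = 0.02 * C + 5.0 * (1 - g)
            return w_delay * delay + w_emis * emissions

        def ga(pop=40, gens=60):
            P = np.column_stack([rng.uniform(40, 120, pop),    # cycle length (s)
                                 rng.uniform(0.3, 0.7, pop)])  # effective green split
            for _ in range(gens):
                f = np.array([cost(C, g) for C, g in P])
                parents = P[np.argsort(f)[: pop // 2]]         # truncation selection
                kids = parents[rng.integers(0, len(parents), pop - len(parents))].copy()
                kids += rng.normal(0, [2.0, 0.02], kids.shape) # Gaussian mutation
                kids[:, 0] = kids[:, 0].clip(40, 120)
                kids[:, 1] = kids[:, 1].clip(0.3, 0.7)
                P = np.vstack([parents, kids])
            return P[np.argmin([cost(C, g) for C, g in P])]

        print("best (cycle, split):", ga())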

  8. Koopman Operator Framework for Time Series Modeling and Analysis

    Science.gov (United States)

    Surana, Amit

    2018-01-01

    We propose an interdisciplinary framework for time series classification, forecasting, and anomaly detection by combining concepts from Koopman operator theory, machine learning, and linear systems and control theory. At the core of this framework is nonlinear dynamic generative modeling of time series using the Koopman operator, which is an infinite-dimensional but linear operator. Rather than working with the underlying nonlinear model, we propose two simpler linear representations or model forms based on Koopman spectral properties. We show that these model forms are invariants of the generative model and can be identified directly from data using techniques for computing Koopman spectral properties, without requiring explicit knowledge of the generative model. We also introduce different notions of distance on the space of such model forms, which is essential for model comparison/clustering. We employ the space of Koopman model forms equipped with distance, in conjunction with classical machine learning techniques, to develop a framework for automatic feature generation for time series classification. The forecasting/anomaly detection framework is based on using Koopman model forms along with classical linear systems and control approaches. We demonstrate the proposed framework on human activity classification, and on time series forecasting/anomaly detection in a power grid application.
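
    A minimal sketch of one standard way to identify a linear model form directly from data, via dynamic mode decomposition; DMD is used here as a stand-in for the paper's techniques for computing Koopman spectral properties, and the snapshots are synthetic.

        import numpy as np

        rng = np.random.default_rng(5)
        A_true = np.array([[0.9, -0.2], [0.2, 0.9]])        # hidden linear dynamics
        X = np.empty((2, 100)); X[:, 0] = [1.0, 0.0]
        for k in range(99):
            X[:, k + 1] = A_true @ X[:, k] + 0.01 * rng.normal(size=2)

        X1, X2 = X[:, :-1], X[:, 1:]                        # snapshot pairs
        U, s, Vt = np.linalg.svd(X1, full_matrices=False)
        A_dmd = U.T @ X2 @ Vt.T @ np.diag(1 / s)            # reduced linear operator

        eigvals = np.linalg.eigvals(A_dmd)                  # Koopman eigenvalue estimates,
        print(eigvals)                                      # usable as comparison features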

  9. Real-time prediction of respiratory motion based on a local dynamic model in an augmented space.

    Science.gov (United States)

    Hong, S-M; Jung, B-H; Ruan, D

    2011-03-21

    Motion-adaptive radiotherapy aims to deliver ablative radiation dose to the tumor target with minimal normal tissue exposure, by accounting for real-time target movement. In practice, prediction is usually necessary to compensate for system latency induced by measurement, communication and control. This work focuses on predicting respiratory motion, which is most dominant for thoracic and abdominal tumors. We develop and investigate the use of a local dynamic model in an augmented space, motivated by the observation that respiratory movement exhibits a locally circular pattern in a plane augmented with a delayed axis. By including the angular velocity as part of the system state, the proposed dynamic model effectively captures the natural evolution of respiratory motion. The first-order extended Kalman filter is used to propagate and update the state estimate. The target location is predicted by evaluating the local dynamic model equations at the required prediction length. This method is complementary to existing work in that (1) the local circular motion model characterizes 'turning', overcoming the limitation of linear motion models; (2) it uses a natural state representation including the local angular velocity and updates the state estimate systematically, offering explicit physical interpretations; and (3) it relies on a parametric model and is much less data-demanding than typical adaptive semiparametric or nonparametric methods. We tested the performance of the proposed method with ten RPM traces, using the normalized root mean squared difference between the predicted value and the retrospective observation as the error metric. Its performance was compared with predictors based on the linear model, the interacting multiple linear models and the kernel density estimator for various combinations of prediction lengths and observation rates. The local dynamic model based approach provides the best performance for short to medium prediction lengths under relatively
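
    A simplified sketch of the filtering idea, assuming a state z = [x1, x2, w] where (x1, x2) is the position in the augmented (delayed-axis) plane and w the local angular velocity: the dynamics rotate the point by w*dt, and a first-order EKF tracks the state from noisy scalar observations. Noise levels, dt and the synthetic trace are illustrative, not the authors' exact parameterization.

        import numpy as np

        dt = 0.2                                   # sampling interval (illustrative)

        def f(z):                                  # rotate (x1, x2) by w*dt; w constant
            x1, x2, w = z
            c, s = np.cos(w * dt), np.sin(w * dt)
            return np.array([c * x1 - s * x2, s * x1 + c * x2, w])

        def F_jac(z):                              # Jacobian of f at z
            x1, x2, w = z
            c, s = np.cos(w * dt), np.sin(w * dt)
            return np.array([[c, -s, dt * (-s * x1 - c * x2)],
                             [s,  c, dt * ( c * x1 - s * x2)],
                             [0.0, 0.0, 1.0]])

        H = np.array([[1.0, 0.0, 0.0]])            # only x1 (displacement) is observed
        Q, R = 1e-4 * np.eye(3), np.array([[1e-2]])

        z, P = np.array([1.0, 0.0, 0.5]), np.eye(3)
        rng = np.random.default_rng(6)
        for k in range(200):                       # noisy quasi-periodic breathing trace
            meas = np.cos(0.5 * dt * k) + 0.1 * rng.normal()
            Fk = F_jac(z)                          # EKF predict (Jacobian at prior mean)
            z = f(z)
            P = Fk @ P @ Fk.T + Q
            S = H @ P @ H.T + R                    # EKF update
            K = P @ H.T @ np.linalg.inv(S)
            z = z + K @ (np.array([meas]) - H @ z)
            P = (np.eye(3) - K @ H) @ P

        print("one-step-ahead prediction:", f(z)[0])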

  10. Automated time activity classification based on global positioning system (GPS) tracking data.

    Science.gov (United States)

    Wu, Jun; Jiang, Chengsheng; Houston, Douglas; Baker, Dean; Delfino, Ralph

    2011-11-14

    Air pollution epidemiological studies are increasingly using the global positioning system (GPS) to collect time-location data because it offers continuous tracking, high temporal resolution, and minimal reporting burden for participants. However, substantial uncertainties in the processing and classification of raw GPS data create challenges for reliably characterizing time activity patterns. We developed and evaluated models to classify people's major time activity patterns from continuous GPS tracking data. We developed and evaluated two automated models to classify major time activity patterns (i.e., indoor, outdoor static, outdoor walking, and in-vehicle travel) based on GPS time activity data collected under free-living conditions for 47 participants (N = 131 person-days) from the Harbor Communities Time Location Study (HCTLS) in 2008 and supplemental GPS data collected from three UC-Irvine research staff (N = 21 person-days) in 2010. Time activity patterns used for model development were manually classified by research staff using information from participant GPS recordings, activity logs, and follow-up interviews. We evaluated two models: (a) a rule-based model that developed user-defined rules based on time, speed, and spatial location, and (b) a random forest decision tree model. Indoor, outdoor static, outdoor walking and in-vehicle travel activities accounted for 82.7%, 6.1%, 3.2% and 7.2% of manually classified time activities in the HCTLS dataset, respectively. The rule-based model classified indoor and in-vehicle travel periods reasonably well (indoor: sensitivity > 91%, specificity > 80%, and precision > 96%; in-vehicle travel: sensitivity > 71%, specificity > 99%, and precision > 88%), but its performance was moderate for outdoor static and outdoor walking predictions. No striking differences in performance were observed between the rule-based and the random forest models. The random forest model was fast and easy to execute, but was likely less robust
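
    A minimal sketch of the second model type: a random forest over GPS-derived features. The three features and the synthetic labels are placeholders; with random labels the cross-validated accuracy should hover near chance (0.25), which makes the harness easy to sanity-check.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(7)
        n = 2000
        speed = rng.gamma(2.0, 1.5, n)                  # m/s, from consecutive fixes
        sat_snr = rng.normal(30, 8, n)                  # signal quality drops indoors
        dist_to_home = rng.exponential(500, n)          # metres, from a spatial overlay
        X = np.column_stack([speed, sat_snr, dist_to_home])
        y = rng.integers(0, 4, n)                       # 0 indoor, 1 outdoor static,
                                                        # 2 outdoor walking, 3 in-vehicle
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        print(cross_val_score(clf, X, y, cv=5).mean())  # ~chance level on random labels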

  11. On the Designing of Model Checkers for Real-Time Distributed Systems

    Directory of Open Access Journals (Sweden)

    D. Yu. Volkanov

    2012-01-01

    Full Text Available To verify real-time properties of UML statecharts one may apply UPPAAL, a toolbox for model checking of real-time systems. One of the most suitable ways to specify an operational semantics of UML statecharts is to invoke the formal model of Hierarchical Timed Automata. Since the modelling language of UPPAAL is based on Networks of Timed Automata, one has to provide a conversion of Hierarchical Timed Automata to Networks of Timed Automata. In this paper we describe this conversion algorithm and prove that it is correct with respect to the UPPAAL query language, which is based on a subset of Timed CTL.

  12. Lévy-based growth models

    DEFF Research Database (Denmark)

    Jónsdóttir, Kristjana Ýr; Schmiegel, Jürgen; Jensen, Eva Bjørn Vedel

    2008-01-01

    In the present paper, we give a condensed review, for the nonspecialist reader, of a new modelling framework for spatio-temporal processes, based on Lévy theory. We show the potential of the approach in stochastic geometry and spatial statistics by studying Lévy-based growth modelling of planar objects. The growth models considered are spatio-temporal stochastic processes on the circle. As a by-product, flexible new models for space–time covariance functions on the circle are provided. An application of the Lévy-based growth models to tumour growth is discussed.

  13. Applications For Real Time NOMADS At NCEP To Disseminate NOAA's Operational Model Data Base

    Science.gov (United States)

    Alpert, J. C.; Wang, J.; Rutledge, G.

    2007-05-01

    A wide range of environmental information, in digital form, with metadata descriptions and supporting infrastructure is contained in the NOAA Operational Modeling Archive Distribution System (NOMADS) and its Real Time (RT) project prototype at the National Centers for Environmental Prediction (NCEP). NOMADS is now delivering on its goal of a seamless framework, from archival to real-time data dissemination, for NOAA's operational model data holdings. A process is under way to make NOMADS part of NCEP's operational production of products. A goal is to foster collaborations among the research and education communities, value-added retailers, and public access for science and development. In the National Research Council's "Completing the Forecast", Recommendation 3.4 states: "NOMADS should be maintained and extended to include (a) long-term archives of the global and regional ensemble forecasting systems at their native resolution, and (b) re-forecast datasets to facilitate post-processing." As one of many participants of NOMADS, NCEP serves the operational model data base using the Data Access Protocol (OPeNDAP, also known as DODS) and other services for participants to serve their data sets and users to obtain them. Using the NCEP global ensemble data as an example, we show an OPeNDAP client application that provides a request-and-fulfill mechanism for access to the complex ensemble matrix of holdings. As an example of the DAP service, we show a client application which accesses the Global or Regional Ensemble data set to produce user-selected weather element event probabilities. The event probabilities are easily extended over model forecast time to show probability histograms defining the future trend of user-selected events. This approach ensures an efficient use of computer resources because users transmit only the data necessary for their tasks. Data sets are served by OPeNDAP, allowing commercial clients such as MATLAB or IDL as well as freeware clients
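
    A minimal sketch of the request-and-fulfill pattern with a generic OPeNDAP client (xarray): only the requested subset crosses the network. The endpoint URL and the variable/dimension names (tmp2m, ens) are placeholders, not a real NOMADS endpoint.

        import xarray as xr

        url = "https://example.gov/dods/gens"          # hypothetical OPeNDAP endpoint
        ds = xr.open_dataset(url)                      # lazy: no bulk download
        subset = ds["tmp2m"].sel(lat=40.0, lon=254.0, method="nearest")
        print(subset.mean(dim="ens"))                  # e.g. ensemble-mean 2 m temperature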

  14. Risk-based technical specifications: Development and application of an approach to the generation of a plant specific real-time risk model

    International Nuclear Information System (INIS)

    Puglia, B.; Gallagher, D.; Amico, P.; Atefi, B.

    1992-10-01

    This report describes a process developed to convert an existing PRA into a model amenable to real-time, risk-based technical specification calculations. In earlier studies (culminating in NUREG/CR-5742), several risk-based approaches to technical specifications were evaluated. A real-time approach using a plant-specific PRA capable of modeling plant configurations as they change was identified as the most comprehensive approach to controlling plant risk. A master fault tree logic model representative of all of the core damage sequences was developed. Portions of the system fault trees were modularized, and supercomponents comprised of component failures with similar effects were developed to reduce the size of the model and the quantification time. Modifications to the master fault tree logic were made to properly model the effect of maintenance and recovery actions. Fault trees representing several actuation systems not modeled in detail in the existing PRA were added to the master fault tree logic. This process was applied to the Surry NUREG-1150 Level 1 PRA. The master logic model was confirmed. The model was then used to evaluate the core damage frequency associated with several plant configurations using the IRRAS code. For all cases analyzed, the computation time was less than three minutes. This document, Volume 2, contains appendices A, B, and C. These provide, respectively: the Surry Technical Specifications Model Database, the Surry Technical Specifications Model, and a list of supercomponents used in the Surry Technical Specifications Model.

  15. Models and synchronization of time-delayed complex dynamical networks with multi-links based on adaptive control

    International Nuclear Information System (INIS)

    Peng Haipeng; Wei Nan; Li Lixiang; Xie Weisheng; Yang Yixian

    2010-01-01

    In this Letter, time-delays are introduced to split the networks, upon which a model of complex dynamical networks with multi-links is constructed. Moreover, based on Lyapunov stability theory and some hypotheses, we achieve synchronization between two complex networks with different structures by designing effective controllers. The validity of the results is proved through numerical simulations.

  16. Time-domain modeling for shielding effectiveness of materials against electromagnetic pulse based on system identification

    International Nuclear Information System (INIS)

    Chen, Xiang; Chen, Yong Guang; Wei, Ming; Hu, Xiao Feng

    2013-01-01

    The shielding effectiveness (SE) of materials against electromagnetic pulse (EMP) cannot be well estimated by traditional test methods, which consider only the amplitude-frequency characteristics of materials and ignore the phase-frequency ones. To solve this problem, a model of the SE of materials against EMP was established based on a system identification (SI) method with a time-domain linear cosine frequency-sweep signal. The feasibility of the method was examined using an infinite planar material and simulations of the coaxial test method and of a windowed semi-anechoic box. The results show that the amplitude-frequency and phase-frequency information at each frequency can be fully extracted with this method. The SE of materials against a strong EMP can be evaluated with a low-field-strength (voltage) time-domain cosine frequency-sweep signal, and the SE of materials against a variety of EMPs can be predicted by the model.

  17. State-space prediction model for chaotic time series

    Science.gov (United States)

    Alparslan, A. K.; Sayar, M.; Atilgan, A. R.

    1998-08-01

    A simple method for predicting the continuation of scalar chaotic time series ahead in time is proposed. The false nearest neighbors technique, in connection with time-delayed embedding, is employed so as to reconstruct the state space. A local forecasting model based upon the time evolution of the topological neighborhood in the reconstructed phase space is suggested. A moving root-mean-square error is utilized in order to monitor the error along the prediction horizon. The model is tested on the convection amplitude of the Lorenz model. The results indicate that, for approximately 100 cycles of training data, the prediction follows the actual continuation very closely for about six cycles. The proposed model, like other state-space forecasting models, captures the long-term behavior of the system due to the use of spatial neighbors in the state space.

  18. The Research of Car-Following Model Based on Real-Time Maximum Deceleration

    Directory of Open Access Journals (Sweden)

    Longhai Yang

    2015-01-01

    Full Text Available This paper is concerned with the effect of real-time maximum deceleration in car-following. The real-time maximum deceleration is estimated with vehicle dynamics. It is known that an intelligent driver model (IDM) can control adaptive cruise control (ACC) well. The disadvantages of the IDM at high and constant speed are analyzed. A new car-following model applicable to ACC is established accordingly by modifying the desired minimum gap and the structure of the IDM. We simulated the new car-following model and the IDM under two different kinds of road conditions. In the first, the vehicles drive on a single road, taking a dry asphalt road as the example in this paper. In the second, vehicles drive onto a different road surface; this paper analyzes the situation in which vehicles drive from a dry asphalt road onto an icy road. From the simulation, we found that the new car-following model can not only ensure driving security and comfort but also control the steady driving of the vehicle with a smaller time headway than the IDM.
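
    For reference, a minimal sketch of the standard IDM acceleration law that the paper modifies; b_max stands in for the real-time maximum deceleration estimated from vehicle dynamics, and all numbers are illustrative only.

        import numpy as np

        def idm_accel(v, dv, s, v0=33.0, T=1.5, a_max=1.0, b_max=2.0, s0=2.0):
            """v: speed, dv: closing speed on the leader, s: gap (SI units)."""
            s_star = s0 + v * T + v * dv / (2 * np.sqrt(a_max * b_max))
            return a_max * (1 - (v / v0) ** 4 - (s_star / s) ** 2)

        # A smaller available deceleration (e.g. on ice) enlarges the desired gap,
        # which makes the commanded acceleration more cautious.
        for b in (2.0, 0.5):                        # dry asphalt vs ice (illustrative)
            print(b, idm_accel(v=20.0, dv=2.0, s=30.0, b_max=b))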

  19. Parametric, nonparametric and parametric modelling of a chaotic circuit time series

    Science.gov (United States)

    Timmer, J.; Rust, H.; Horbelt, W.; Voss, H. U.

    2000-09-01

    The determination of a differential equation underlying a measured time series is a frequently arising task in nonlinear time series analysis. In the validation of a proposed model, one often faces the dilemma that it is hard to decide whether possible discrepancies between the time series and the model output are caused by an inappropriate model, by bad estimates of parameters in a correct type of model, or both. We propose a combination of parametric modelling based on Bock's multiple shooting algorithm and nonparametric modelling based on optimal transformations as a strategy to test proposed models and, if they are rejected, to suggest and test new ones. We exemplify this strategy on an experimental time series from a chaotic circuit, where we obtain an extremely accurate reconstruction of the observed attractor.

  20. Speech Silicon: An FPGA Architecture for Real-Time Hidden Markov-Model-Based Speech Recognition

    Directory of Open Access Journals (Sweden)

    Schuster Jeffrey

    2006-01-01

    Full Text Available This paper examines the design of an FPGA-based system-on-a-chip capable of performing continuous speech recognition on medium sized vocabularies in real time. Through the creation of three dedicated pipelines, one for each of the major operations in the system, we were able to maximize the throughput of the system while simultaneously minimizing the number of pipeline stalls in the system. Further, by implementing a token-passing scheme between the later stages of the system, the complexity of the control was greatly reduced and the amount of active data present in the system at any time was minimized. Additionally, through in-depth analysis of the SPHINX 3 large vocabulary continuous speech recognition engine, we were able to design models that could be efficiently benchmarked against a known software platform. These results, combined with the ability to reprogram the system for different recognition tasks, serve to create a system capable of performing real-time speech recognition in a vast array of environments.

  2. Degeneracy of time series models: The best model is not always the correct model

    International Nuclear Information System (INIS)

    Judd, Kevin; Nakamura, Tomomichi

    2006-01-01

    There are a number of good techniques for finding, in some sense, the best model of a deterministic system given a time series of observations. We examine a problem called model degeneracy, which has the consequence that even when a perfect model of a system exists, one does not find it using the best techniques currently available. The problem is illustrated using global polynomial models and the theory of Groebner bases

  3. Real-time modeling of heat distributions

    Science.gov (United States)

    Hamann, Hendrik F.; Li, Hongfei; Yarlanki, Srinivas

    2018-01-02

    Techniques for real-time modeling of temperature distributions based on streaming sensor data are provided. In one aspect, a method for creating a three-dimensional temperature distribution model for a room having a floor and a ceiling is provided. The method includes the following steps. A ceiling temperature distribution in the room is determined. A floor temperature distribution in the room is determined. An interpolation between the ceiling temperature distribution and the floor temperature distribution is used to obtain the three-dimensional temperature distribution model for the room.
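
    A minimal sketch of the interpolation step described above: given ceiling and floor temperature fields on the same horizontal grid, the volume is filled linearly in height. Grid sizes and temperatures are made up.

        import numpy as np

        nx, ny, nz = 20, 10, 8
        floor_T = 18.0 + np.random.default_rng(8).normal(0, 0.5, (nx, ny))
        ceil_T = floor_T + 6.0                      # warmer air near the ceiling

        z = np.linspace(0.0, 1.0, nz)               # normalized height
        T3d = floor_T[:, :, None] * (1 - z) + ceil_T[:, :, None] * z
        print(T3d.shape, T3d[:, :, 0].mean(), T3d[:, :, -1].mean())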

  4. Real-Time Corrected Traffic Correlation Model for Traffic Flow Forecasting

    Directory of Open Access Journals (Sweden)

    Hua-pu Lu

    2015-01-01

    Full Text Available This paper focuses on the problems of short-term traffic flow forecasting. The main goal is to put forward a traffic correlation model and a real-time correction algorithm for traffic flow forecasting. The traffic correlation model is established based on the temporal, spatial, and historical correlation characteristics of traffic big data. In order to simplify the traffic correlation model, this paper presents a correction coefficient optimization algorithm. Considering the multistate characteristic of traffic big data, a dynamic part is added to the traffic correlation model. A real-time correction algorithm based on a fuzzy neural network is presented to overcome the nonlinear mapping problems. A case study based on a real-world road network in Beijing, China, is implemented to test the efficiency and applicability of the proposed modeling methods.

  5. Average inactivity time model, associated orderings and reliability properties

    Science.gov (United States)

    Kayid, M.; Izadkhah, S.; Abouammoh, A. M.

    2018-02-01

    In this paper, we introduce and study a new model called the 'average inactivity time model'. This new model is specifically applicable for handling heterogeneity in the failure time of a system in which some inactive items exist. We provide some bounds for the mean average inactivity time of a lifespan unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when the original random variable possesses certain aging behaviors. Based on the concept of the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are discussed.

  6. Real-Time EEG-Based Happiness Detection System

    Directory of Open Access Journals (Sweden)

    Noppadon Jatupaiboon

    2013-01-01

    Full Text Available We propose to use real-time EEG signals to classify happy and unhappy emotions elicited by pictures and classical music. We use PSD as a feature and SVM as a classifier. The average accuracies of the subject-dependent model and the subject-independent model are approximately 75.62% and 65.12%, respectively. Considering each pair of channels, the temporal pair of channels (T7 and T8) gives a better result than the other areas. Considering different frequency bands, high-frequency bands (Beta and Gamma) give a better result than low-frequency bands. Considering different time durations for emotion elicitation, the result from 30 seconds does not differ significantly from the result from 60 seconds. From all of these results, we implement a real-time EEG-based happiness detection system using only one pair of channels. Furthermore, we develop games based on the happiness detection system to help users recognize and control their happiness.
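
    A minimal sketch of the pipeline: Welch PSD band power from a T7/T8-like channel pair in the Beta and Gamma bands, then an SVM. The "EEG" here is synthetic noise with an injected 25 Hz rhythm, so only the plumbing, not the accuracy, is meaningful.

        import numpy as np
        from scipy.signal import welch
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        fs, n_trials = 128, 120
        rng = np.random.default_rng(9)

        def band_power(x, lo, hi):
            f, p = welch(x, fs=fs, nperseg=fs)
            return p[(f >= lo) & (f < hi)].sum()

        X, y = [], []
        for i in range(n_trials):
            label = i % 2                                   # 0 unhappy, 1 happy (toy)
            t = np.arange(4 * fs) / fs
            t7 = rng.normal(size=4 * fs) + label * 0.4 * np.sin(2 * np.pi * 25 * t)
            t8 = rng.normal(size=4 * fs)
            X.append([band_power(c, lo, hi) for c in (t7, t8)
                      for lo, hi in ((13, 30), (30, 45))])  # Beta and Gamma power
            y.append(label)

        print(cross_val_score(SVC(), np.array(X), np.array(y), cv=5).mean())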

  7. Optimal time points sampling in pathway modelling.

    Science.gov (United States)

    Hu, Shiyan

    2004-01-01

    Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling as well as the related parameter estimation. However, few studies give consideration to the issue of optimal sampling time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time consuming and expensive. Therefore, approximating parameters for models from only a few available sampling data points is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the process of selecting time points in an optimal way so as to minimize the variance of parameter estimates. In the method, we first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the problems of selecting good initial values and becoming stuck in local optima that usually accompany conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.

  8. Research of Manufacture Time Management System Based on PLM

    Science.gov (United States)

    Jing, Ni; Juan, Zhu; Liangwei, Zhong

    This system targets the machine shops of manufacturing enterprises: it analyzes their business needs and builds a plant management information system for manufacture time and manufacture-time information management in the manufacturing process. Combining WEB technology with EXCEL VBA-based development methods, it constructs a hybrid, PLM-based framework for a workshop manufacture time management information system, and discusses the functionality of the system architecture and the database structure.

  9. Modelling road accidents: An approach using structural time series

    Science.gov (United States)

    Junus, Noor Wahida Md; Ismail, Mohd Tahir

    2014-09-01

    In this paper, the trend of road accidents in Malaysia for the years 2001 to 2012 was modelled using a structural time series approach. The structural time series model was identified using a stepwise method, and the residuals of each model were tested. The best-fitted model was chosen based on the smallest Akaike Information Criterion (AIC) and prediction error variance. In order to check the quality of the model, a data validation procedure was performed by predicting the monthly number of road accidents for the year 2012. Results indicate that the best specification of the structural time series model to represent road accidents is the local level with a seasonal model.
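
    A minimal sketch of fitting and comparing structural time series specifications by AIC with statsmodels' UnobservedComponents; the monthly accident counts are simulated placeholders.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(10)
        idx = pd.date_range("2001-01", periods=144, freq="MS")   # 2001-2012, monthly
        season = 10 * np.sin(2 * np.pi * np.arange(144) / 12)
        y = pd.Series(500 + np.cumsum(rng.normal(0, 2, 144)) + season, index=idx)

        specs = {"local level": dict(level="llevel"),
                 "local level + seasonal": dict(level="llevel", seasonal=12)}
        for name, kw in specs.items():
            res = sm.tsa.UnobservedComponents(y, **kw).fit(disp=False)
            print(name, "AIC:", round(res.aic, 1))   # smaller AIC wins, as in the paper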

  10. Mixed Hitting-Time Models

    NARCIS (Netherlands)

    Abbring, J.H.

    2009-01-01

    We study mixed hitting-time models, which specify durations as the first time a Lévy process (a continuous-time process with stationary and independent increments) crosses a heterogeneous threshold. Such models are of substantial interest because they can be reduced from optimal-stopping models with

  11. An approach to the drone fleet survivability assessment based on a stochastic continuous-time model

    Science.gov (United States)

    Kharchenko, Vyacheslav; Fesenko, Herman; Doukas, Nikos

    2017-09-01

    An approach and an algorithm for drone fleet survivability assessment based on a stochastic continuous-time model are proposed. The input data are the number of drones, the drone fleet redundancy coefficient, the drone stability and restoration rate, the limit deviation from the norms of drone fleet recovery, the drone fleet operational availability coefficient, the probability of drone failure-free operation, and the time needed for the drone fleet to perform the required tasks. Ways of improving the survivability of a recoverable drone fleet, taking into account damaging factors of system accidents, are suggested. Dependencies of the drone fleet survivability rate on both the drone stability and the number of drones are analysed.

  12. Model based analysis of the time scales associated to pump start-ups

    Energy Technology Data Exchange (ETDEWEB)

    Dazin, Antoine, E-mail: antoine.dazin@lille.ensam.fr [Arts et métiers ParisTech/LML Laboratory UMR CNRS 8107, 8 bld Louis XIV, 59046 Lille cedex (France); Caignaert, Guy [Arts et métiers ParisTech/LML Laboratory UMR CNRS 8107, 8 bld Louis XIV, 59046 Lille cedex (France); Dauphin-Tanguy, Geneviève, E-mail: genevieve.dauphin-tanguy@ec-lille.fr [Univ Lille Nord de France, Ecole Centrale de Lille/CRISTAL UMR CNRS 9189, BP 48, 59651, Villeneuve d’Ascq cedex F 59000 (France)

    2015-11-15

    Highlights: • A dynamic model of a hydraulic system has been built. • Three periods in a pump start-up have been identified. • The time scales of each period have been estimated. • The parameters affecting the rapidity of a pump start-up have been explored. - Abstract: The paper presents a non-dimensional analysis of the behaviour of a hydraulic system during pump fast start-ups. The system is composed of a radial flow pump and its suction and delivery pipes. It is modelled using the bond graph methodology. The prediction of the model is validated by comparison with experimental results. An analysis of the time evolution of the terms acting on the total pump pressure is proposed. It allows for a decomposition of the start-up into three consecutive periods. The time scales associated with these periods are estimated. The effects of the parameters (angular acceleration, final rotation speed, pipe length and resistance) affecting the start-up rapidity are then explored.

  13. Modelling endurance and resumption times for repetitive one-hand pushing.

    Science.gov (United States)

    Rose, Linda M; Beauchemin, Catherine A A; Neumann, W Patrick

    2018-07-01

    This study's objective was to develop models of endurance time (ET) as a function of load level (LL), and of resumption time (RT) after loading as a function of both LL and loading time (LT), for repeated loadings. Ten male participants with experience in construction work each performed 15 different one-handed repeated pushing tasks at shoulder height with varied exerted force and duration. These data were used to create regression models predicting ET and RT. It is concluded that power-law relationships are most appropriate for modelling ET and RT. While the data the equations are based on are limited regarding the number of participants, gender, postures, and the magnitude and type of exerted force, the paper suggests how this kind of modelling can be used in job design and in further research. Practitioner Summary: Adequate muscular recovery during work shifts is important to create sustainable jobs. This paper describes mathematical modelling and presents models for endurance times and resumption times (an aspect of recovery need), based on data from an empirical study. The models can be used to help manage fatigue levels in job design.
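
    A minimal sketch of fitting the power-law form ET = a * LL^b by linear regression in log-log space; the (LL, ET) pairs are invented for illustration and are not the study's data.

        import numpy as np

        LL = np.array([20, 30, 40, 50, 60, 70], dtype=float)   # % of max voluntary force
        ET = np.array([600, 210, 95, 52, 30, 19], dtype=float) # endurance time (s)

        b, log_a = np.polyfit(np.log(LL), np.log(ET), 1)       # linear fit on log-log scale
        a = np.exp(log_a)
        print(f"ET = {a:.1f} * LL^{b:.2f}")                    # b < 0: higher load, shorter ET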

  14. Traceability in Model-Based Testing

    Directory of Open Access Journals (Sweden)

    Mathew George

    2012-11-01

    Full Text Available The growing complexity of software and the demand for shorter time to market are two important challenges that face today's IT industry. These challenges demand increases in both the productivity and the quality of software. Model-based testing is a promising technique for meeting these challenges. Traceability modeling is a key issue and challenge in model-based testing. Relationships between the different models help to navigate from one model to another, and to trace back to the respective requirements and the design model when a test fails. In this paper, we present an approach for bridging the gaps between the different models in model-based testing. We propose the relation definition markup language (RDML) for defining the relationships between models.

  15. Is there any correlation between model-based perfusion parameters and model-free parameters of time-signal intensity curve on dynamic contrast enhanced MRI in breast cancer patients?

    Energy Technology Data Exchange (ETDEWEB)

    Yi, Boram; Kang, Doo Kyoung; Kim, Tae Hee [Ajou University School of Medicine, Department of Radiology, Suwon, Gyeonggi-do (Korea, Republic of); Yoon, Dukyong [Ajou University School of Medicine, Department of Biomedical Informatics, Suwon (Korea, Republic of); Jung, Yong Sik; Kim, Ku Sang [Ajou University School of Medicine, Department of Surgery, Suwon (Korea, Republic of); Yim, Hyunee [Ajou University School of Medicine, Department of Pathology, Suwon (Korea, Republic of)

    2014-05-15

    To find out whether there is any correlation between dynamic contrast-enhanced (DCE) model-based parameters and model-free parameters, and to evaluate correlations between perfusion parameters and histologic prognostic factors. Model-based parameters (Ktrans, Kep and Ve) of 102 invasive ductal carcinomas were obtained using DCE-MRI and post-processing software. Correlations between model-based and model-free parameters and between perfusion parameters and histologic prognostic factors were analysed. Mean Kep was significantly higher in cancers showing initial rapid enhancement (P = 0.002) and a delayed washout pattern (P = 0.001). Ve was significantly lower in cancers showing a delayed washout pattern (P = 0.015). Kep significantly correlated with time to peak enhancement (TTP) (ρ = -0.33, P < 0.001) and washout slope (ρ = 0.39, P = 0.002). Ve was significantly correlated with TTP (ρ = 0.33, P = 0.002). Mean Kep was higher in tumours with high nuclear grade (P = 0.017). Mean Ve was lower in tumours with high histologic grade (P = 0.005) and in tumours with negative oestrogen receptor status (P = 0.047). TTP was shorter in tumours with negative oestrogen receptor status (P = 0.037). We could acquire general information about tumour vascular physiology, interstitial space volume and pathologic prognostic factors by analysing the time-signal intensity curve, without the complicated acquisition process required for the model-based parameters. (orig.)

  16. Time Series Analysis of Non-Gaussian Observations Based on State Space Models from Both Classical and Bayesian Perspectives

    NARCIS (Netherlands)

    Durbin, J.; Koopman, S.J.M.

    1998-01-01

    The analysis of non-Gaussian time series using state space models is considered from both classical and Bayesian perspectives. The treatment in both cases is based on simulation using importance sampling and antithetic variables; Markov chain Monte Carlo methods are not employed. Non-Gaussian

  17. Parameterizing Spatial Models of Infectious Disease Transmission that Incorporate Infection Time Uncertainty Using Sampling-Based Likelihood Approximations.

    Directory of Open Access Journals (Sweden)

    Rajat Malik

    Full Text Available A class of discrete-time models of infectious disease spread, referred to as individual-level models (ILMs, are typically fitted in a Bayesian Markov chain Monte Carlo (MCMC framework. These models quantify probabilistic outcomes regarding the risk of infection of susceptible individuals due to various susceptibility and transmissibility factors, including their spatial distance from infectious individuals. The infectious pressure from infected individuals exerted on susceptible individuals is intrinsic to these ILMs. Unfortunately, quantifying this infectious pressure for data sets containing many individuals can be computationally burdensome, leading to a time-consuming likelihood calculation and, thus, computationally prohibitive MCMC-based analysis. This problem worsens when using data augmentation to allow for uncertainty in infection times. In this paper, we develop sampling methods that can be used to calculate a fast, approximate likelihood when fitting such disease models. A simple random sampling approach is initially considered followed by various spatially-stratified schemes. We test and compare the performance of our methods with both simulated data and data from the 2001 foot-and-mouth disease (FMD epidemic in the U.K. Our results indicate that substantial computation savings can be obtained--albeit, of course, with some information loss--suggesting that such techniques may be of use in the analysis of very large epidemic data sets.
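
    A minimal sketch of the core idea: the infectious pressure on a susceptible individual, normally a sum over all infectives, is approximated by a scaled random sample of them. The spatial kernel (an inverse power law) and all parameters are illustrative.

        import numpy as np

        rng = np.random.default_rng(11)
        xy = rng.uniform(0, 10, (5000, 2))                 # individual locations
        infectious = rng.choice(5000, 800, replace=False)  # currently infectious ids
        susceptible = np.setdiff1d(np.arange(5000), infectious)

        def pressure_full(j, beta=0.5, delta=2.0):
            d = np.linalg.norm(xy[infectious] - xy[j], axis=1)
            return beta * np.sum(d ** -delta)

        def pressure_sampled(j, m=100, beta=0.5, delta=2.0):
            sub = rng.choice(infectious, m, replace=False) # simple random sample
            d = np.linalg.norm(xy[sub] - xy[j], axis=1)
            return beta * (len(infectious) / m) * np.sum(d ** -delta)

        j = susceptible[0]
        print(pressure_full(j), pressure_sampled(j))       # close, at ~1/8 the cost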

  18. Application of WRF - SWAT OpenMI 2.0 based models integration for real time hydrological modelling and forecasting

    Science.gov (United States)

    Bugaets, Andrey; Gonchukov, Leonid

    2014-05-01

    The adoption of deterministic distributed hydrological models in operational water management requires intensive collection and input of spatially distributed climatic information in a timely manner, which is both time consuming and laborious. The lead time of the data pre-processing stage can be essentially reduced by coupling hydrological and numerical weather prediction models. This is especially important for regions such as the south of the Russian Far East, where the geographical position combined with a monsoon climate affected by typhoons and extreme heavy rains causes rapid rises of mountain river water levels and leads to flash flooding and enormous damage. The objective of this study is the development of an end-to-end workflow that executes, in a loosely coupled mode, an integrated modeling system comprised of the Weather Research and Forecasting (WRF) atmospheric model and the Soil and Water Assessment Tool (SWAT 2012) hydrological model, using OpenMI 2.0 and web-service technologies. Making SWAT OpenMI-compliant involves reorganizing the model into separate initialization, timestep, and finalization functions that can be accessed from outside. To preserve SWAT's normal behavior, the source code was separated from the OpenMI-specific implementation into a static library. The modified code was assembled into a dynamic library and wrapped into a C# class implementing the OpenMI ILinkableComponent interface. The WRF OpenMI-compliant component was developed by wrapping web-service clients into a linkable component that seamlessly accesses output netCDF files without an actual model connection. The weather state variables (precipitation, wind, solar radiation, air temperature and relative humidity) are processed by an automatic input selection algorithm to single out the most relevant values used by the SWAT model to yield climatic data at the subbasin scale. Spatial interpolation between the WRF regular grid and SWAT subbasin centroids (which are

  19. Modelling and analysis of real-time and hybrid systems

    Energy Technology Data Exchange (ETDEWEB)

    Olivero, A

    1994-09-29

    This work deals with the modelling and analysis of real-time and hybrid systems. We first present timed-graphs as a model for real-time systems and we recall the basic notions of the analysis of real-time systems. We describe the temporal properties of the timed-graphs using TCTL formulas. We consider two methods for property verification: on the one hand we study symbolic model-checking (based on backward analysis), and on the other hand we propose a verification method derived from the construction of the simulation graph (based on forward analysis). Both methods have been implemented within the KRONOS verification tool. Their application to the automatic verification of several real-time systems confirms the practical interest of our approach. In a second part we study hybrid systems, i.e. systems combining discrete components with continuous ones. As the analysis of this kind of systems is not decidable in the general case, we identify two sub-classes of hybrid systems and we give a construction-based method for the generation of a timed-graph from an element of the sub-classes. We prove that in one case the timed-graph obtained is bisimilar to the considered system and that there exists a simulation in the other case. These relationships allow the application of the described techniques to the hybrid systems in the defined sub-classes. (authors). 60 refs., 43 figs., 8 tabs., 2 annexes.

  20. Time series segmentation: a new approach based on Genetic Algorithm and Hidden Markov Model

    Science.gov (United States)

    Toreti, A.; Kuglitsch, F. G.; Xoplaki, E.; Luterbacher, J.

    2009-04-01

    The subdivision of a time series into homogeneous segments has been performed using various methods applied to different disciplines. In climatology, for example, it is accompanied by the well-known homogenization problem and the detection of artificial change points. In this context, we present a new method (GAMM) based on a Hidden Markov Model (HMM) and a Genetic Algorithm (GA), applicable to series of independent observations (and easily adaptable to autoregressive processes). A left-to-right hidden Markov model, estimating the parameters and the best-state sequence with the Baum-Welch and Viterbi algorithms respectively, was applied. In order to avoid the well-known dependence of the Baum-Welch algorithm on the initial conditions, a Genetic Algorithm was developed. This algorithm is characterized by mutation, elitism and a crossover procedure implemented with some restrictive rules. Moreover, the function to be minimized was derived following the approach of Kehagias (2004), i.e. it is the so-called complete log-likelihood. The number of states was determined by applying a two-fold cross-validation procedure (Celeux and Durand, 2008). Since this last issue is complex and influences the entire analysis, a Multi-Response Permutation Procedure (MRPP; Mielke et al., 1981) was added: it tests the model with K+1 states (where K is the number of states of the best model) when its likelihood is close to that of the K-state model. Finally, an evaluation of the performance of GAMM, applied as a break detection method in the field of climate time series homogenization, is shown. 1. G. Celeux and J.B. Durand, Comput Stat 2008. 2. A. Kehagias, Stoch Envir Res 2004. 3. P.W. Mielke, K.J. Berry, G.W. Brier, Monthly Wea Rev 1981.
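
    A minimal sketch of the initialization problem the GA addresses, using hmmlearn: plain multiple random restarts of Baum-Welch stand in for the genetic search (no left-to-right constraint is imposed here), and the series is synthetic with one mean shift.

        import numpy as np
        from hmmlearn.hmm import GaussianHMM

        rng = np.random.default_rng(12)
        x = np.concatenate([rng.normal(0, 1, 300),            # segment 1
                            rng.normal(3, 1, 300)])[:, None]  # segment 2 (shifted mean)

        best, best_ll = None, -np.inf
        for seed in range(10):                   # restarts replace GA mutation/elitism
            m = GaussianHMM(n_components=2, n_iter=100, random_state=seed).fit(x)
            ll = m.score(x)
            if ll > best_ll:
                best, best_ll = m, ll

        states = best.predict(x)                 # Viterbi best-state sequence
        print("change point near:", int(np.argmax(states != states[0])))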

  1. Introduction to Time Series Modeling

    CERN Document Server

    Kitagawa, Genshiro

    2010-01-01

    In time series modeling, the behavior of a certain phenomenon is expressed in relation to the past values of itself and other covariates. Since many important phenomena in statistical analysis are actually time series and the identification of conditional distribution of the phenomenon is an essential part of the statistical modeling, it is very important and useful to learn fundamental methods of time series modeling. Illustrating how to build models for time series using basic methods, "Introduction to Time Series Modeling" covers numerous time series models and the various tools f

  2. Limited information estimation of the diffusion-based item response theory model for responses and response times.

    Science.gov (United States)

    Ranger, Jochen; Kuhn, Jörg-Tobias; Szardenings, Carsten

    2016-05-01

    Psychological tests are usually analysed with item response models. Recently, some alternative measurement models have been proposed that were derived from cognitive process models developed in experimental psychology. These models consider the responses but also the response times of the test takers. Two such models are the Q-diffusion model and the D-diffusion model. Both models can be calibrated with the diffIRT package of the R statistical environment via marginal maximum likelihood (MML) estimation. In this manuscript, an alternative approach to model calibration is proposed. The approach is based on weighted least squares estimation and parallels the standard estimation approach in structural equation modelling. Estimates are determined by minimizing the discrepancy between the observed and the implied covariance matrix. The estimator is simple to implement, consistent, and asymptotically normally distributed. Least squares estimation also provides a test of model fit by comparing the observed and implied covariance matrix. The estimator and the test of model fit are evaluated in a simulation study. Although parameter recovery is good, the estimator is less efficient than the MML estimator. © 2016 The British Psychological Society.

  3. Patient specific dynamic geometric models from sequential volumetric time series image data.

    Science.gov (United States)

    Cameron, B M; Robb, R A

    2004-01-01

    Generating patient-specific dynamic models is complicated by the complexity of the motion intrinsic and extrinsic to the anatomic structures being modeled. Using a physics-based sequentially deforming algorithm, an anatomically accurate dynamic four-dimensional model can be created from a sequence of 3-D volumetric time series data sets. While such algorithms may accurately track the cyclic non-linear motion of the heart, they generally fail to accurately track extrinsic structural and non-cyclic motion. To accurately model these motions, we have modified a physics-based deformation algorithm to use a meta-surface defining the temporal and spatial maxima of the anatomic structure as the base reference surface. A mass-spring physics-based deformable model, which can expand or shrink with the local intrinsic motion, is applied to the meta-surface, deforming this base reference surface to the volumetric data at each time point. As the meta-surface encompasses the temporal maxima of the structure, any extrinsic motion is inherently encoded into the base reference surface, allowing the computation of the time point surfaces to be performed in parallel. The resultant 4-D model can be interactively transformed and viewed from different angles, showing the spatial and temporal motion of the anatomic structure. Using texture maps and per-vertex coloring, additional data such as physiological and/or biomechanical variables (e.g., mapping electrical activation sequences onto contracting myocardial surfaces) can be associated with the dynamic model, producing a 5-D model. For acquisition systems that may capture only limited time series data (e.g., only images at end-diastole/end-systole or inhalation/exhalation), this algorithm can provide useful interpolated surfaces between the time points. Such models help minimize the number of time points required to usefully depict the motion of anatomic structures for quantitative assessment of regional dynamics.

  4. A model of negotiation scenarios based on time, relevance andcontrol used to define advantageous positions in a negotiation

    Directory of Open Access Journals (Sweden)

    Omar Guillermo Rojas Altamirano

    2016-04-01

    Full Text Available Models that apply to negotiation are based on different perspectives, ranging from the relationship between the actors to game theory or the steps in a procedure. This research proposes a model of negotiation scenarios that considers three factors (time, relevance and control), which emerge as the most important in a negotiation. These factors interact with each other and create different scenarios for each of the actors involved in a negotiation. The proposed model not only facilitates the creation of a negotiation strategy but also an ideal choice of effective tactics.

  5. Hard real-time multibody simulations using ARM-based embedded systems

    Energy Technology Data Exchange (ETDEWEB)

    Pastorino, Roland, E-mail: roland.pastorino@kuleuven.be, E-mail: rpastorino@udc.es; Cosco, Francesco, E-mail: francesco.cosco@kuleuven.be; Naets, Frank, E-mail: frank.naets@kuleuven.be; Desmet, Wim, E-mail: wim.desmet@kuleuven.be [KU Leuven, PMA division, Department of Mechanical Engineering (Belgium); Cuadrado, Javier, E-mail: javicuad@cdf.udc.es [Universidad de La Coruña, Laboratorio de Ingeniería Mecánica (Spain)

    2016-05-15

    The real-time simulation of multibody models on embedded systems is of particular interest for controllers and observers such as model predictive controllers and state observers, which rely on a dynamic model of the process and are customarily executed in electronic control units. This work first identifies the software techniques and tools required to easily write efficient code for multibody models to be simulated on ARM-based embedded systems. Automatic Programming and Source Code Translation are the two techniques that were chosen to generate source code for multibody models in different programming languages. Automatic Programming is used to generate procedural code in an intermediate representation from an object-oriented library and Source Code Translation is used to translate the intermediate representation automatically to an interpreted language or to a compiled language for efficiency purposes. An implementation of these techniques is proposed. It is based on a Python template engine and AST tree walkers for Source Code Generation and on a model-driven translator for the Source Code Translation. The code is translated from a metalanguage to any of the following four programming languages: Python-Numpy, Matlab, C++-Armadillo, C++-Eigen. Two examples of multibody models were simulated: a four-bar linkage with multiple loops and a 3D vehicle steering system. The code for these examples has been generated and executed on two ARM-based single-board computers. Using compiled languages, both models could be simulated faster than real-time despite the low resources and performance of these embedded systems. Finally, the real-time performance of both models was evaluated when executed in hard real-time on Xenomai for both embedded systems. This work shows through measurements that Automatic Programming and Source Code Translation are valuable techniques to develop real-time multibody models to be used in embedded observers and controllers.

  7. Physical models on discrete space and time

    International Nuclear Information System (INIS)

    Lorente, M.

    1986-01-01

    The idea of space and time quantum operators with a discrete spectrum has been proposed frequently since the discovery that some physical quantities exhibit measured values that are multiples of fundamental units. This paper first reviews a number of these physical models. They are: the method of finite elements proposed by Bender et al.; the quantum field theory model on discrete space-time proposed by Yamamoto; the finite dimensional quantum mechanics approach proposed by Santhanam et al.; the idea of space-time as lattices of n-simplices proposed by Kaplunovsky et al.; and the theory of elementary processes proposed by Weizsaecker and his colleagues. The paper then presents a model proposed by the authors and based on the (n+1)-dimensional space-time lattice where fundamental entities interact among themselves 1 to 2n in order to build up an n-dimensional cubic lattice as a ground field where the physical interactions take place. The space-time coordinates are nothing more than the labelling of the ground field and take only discrete values. 11 references

  8. Fuzzy model-based adaptive synchronization of time-delayed chaotic systems

    International Nuclear Information System (INIS)

    Vasegh, Nastaran; Majd, Vahid Johari

    2009-01-01

    In this paper, fuzzy model-based synchronization of a class of first order chaotic systems described by delayed-differential equations is addressed. To design the fuzzy controller, the chaotic system is modeled by a Takagi-Sugeno fuzzy system, taking into account the properties of the nonlinear part of the system. Assuming that the parameters of the chaotic system are unknown, an adaptive law is derived to estimate these unknown parameters, and the stability of the error dynamics is guaranteed by Lyapunov theory. Numerical examples are given to demonstrate the validity of the proposed adaptive synchronization approach.

  9. Nonparametric volatility density estimation for discrete time models

    NARCIS (Netherlands)

    Es, van Bert; Spreij, P.J.C.; Zanten, van J.H.

    2005-01-01

    We consider discrete time models for asset prices with a stationary volatility process. We aim at estimating the multivariate density of this process at a set of consecutive time instants. A Fourier-type deconvolution kernel density estimator based on the logarithm of the squared process is proposed.

  10. Modeling Dynamic Systems with Efficient Ensembles of Process-Based Models.

    Directory of Open Access Journals (Sweden)

    Nikola Simidjievski

    Full Text Available Ensembles are a well established machine learning paradigm, leading to accurate and robust models, predominantly applied to predictive modeling tasks. Ensemble models comprise a finite set of diverse predictive models whose combined output is expected to yield an improved predictive performance as compared to an individual model. In this paper, we propose a new method for learning ensembles of process-based models of dynamic systems. The process-based modeling paradigm employs domain-specific knowledge to automatically learn models of dynamic systems from time-series observational data. Previous work has shown that ensembles based on sampling observational data (i.e., bagging and boosting, significantly improve predictive performance of process-based models. However, this improvement comes at the cost of a substantial increase of the computational time needed for learning. To address this problem, the paper proposes a method that aims at efficiently learning ensembles of process-based models, while maintaining their accurate long-term predictive performance. This is achieved by constructing ensembles with sampling domain-specific knowledge instead of sampling data. We apply the proposed method to and evaluate its performance on a set of problems of automated predictive modeling in three lake ecosystems using a library of process-based knowledge for modeling population dynamics. The experimental results identify the optimal design decisions regarding the learning algorithm. The results also show that the proposed ensembles yield significantly more accurate predictions of population dynamics as compared to individual process-based models. Finally, while their predictive performance is comparable to the one of ensembles obtained with the state-of-the-art methods of bagging and boosting, they are substantially more efficient.

  11. Damage-Based Time-Dependent Modeling of Paraglacial to Postglacial Progressive Failure of Large Rock Slopes

    Science.gov (United States)

    Riva, Federico; Agliardi, Federico; Amitrano, David; Crosta, Giovanni B.

    2018-01-01

    Large alpine rock slopes undergo long-term evolution in paraglacial to postglacial environments. Rock mass weakening and increased permeability associated with the progressive failure of deglaciated slopes promote the development of potentially catastrophic rockslides. We captured the entire life cycle of alpine slopes in one damage-based, time-dependent 2-D model of brittle creep, including deglaciation, damage-dependent fluid occurrence, and rock mass property upscaling. We applied the model to the Spriana rock slope (Central Alps), affected by long-term instability after Last Glacial Maximum and representing an active threat. We simulated the evolution of the slope from glaciated conditions to present day and calibrated the model using site investigation data and available temporal constraints. The model tracks the entire progressive failure path of the slope from deglaciation to rockslide development, without a priori assumptions on shear zone geometry and hydraulic conditions. Complete rockslide differentiation occurs through the transition from dilatant damage to a compacting basal shear zone, accounting for observed hydraulic barrier effects and perched aquifer formation. Our model investigates the mechanical role of deglaciation and damage-controlled fluid distribution in the development of alpine rockslides. The absolute simulated timing of rock slope instability development supports a very long "paraglacial" period of subcritical rock mass damage. After initial damage localization during the Lateglacial, rockslide nucleation initiates soon after the onset of Holocene, whereas full mechanical and hydraulic rockslide differentiation occurs during Mid-Holocene, supporting a key role of long-term damage in the reported occurrence of widespread rockslide clusters of these ages.

  12. Residence-time framework for modeling multicomponent reactive transport in stream hyporheic zones

    Science.gov (United States)

    Painter, S. L.; Coon, E. T.; Brooks, S. C.

    2017-12-01

    Process-based models for transport and transformation of nutrients and contaminants in streams require tractable representations of solute exchange between the stream channel and biogeochemically active hyporheic zones. Residence-time based formulations provide an alternative to detailed three-dimensional simulations and have had good success in representing hyporheic exchange of non-reacting solutes. We extend the residence-time formulation for hyporheic transport to accommodate general multicomponent reactive transport. To that end, the integro-differential form of previous residence time models is replaced by an equivalent formulation based on a one-dimensional advection dispersion equation along the channel coupled at each channel location to a one-dimensional transport model in Lagrangian travel-time form. With the channel discretized for numerical solution, the associated Lagrangian model becomes a subgrid model representing an ensemble of streamlines that are diverted into the hyporheic zone before returning to the channel. In contrast to the previous integro-differential forms of the residence-time based models, the hyporheic flowpaths have semi-explicit spatial representation (parameterized by travel time), thus allowing coupling to general biogeochemical models. The approach has been implemented as a stream-corridor subgrid model in the open-source integrated surface/subsurface modeling software ATS. We use bedform-driven flow coupled to a biogeochemical model with explicit microbial biomass dynamics as an example to show that the subgrid representation is able to represent redox zonation in sediments and resulting effects on metal biogeochemical dynamics in a tractable manner that can be scaled to reach scales.

  13. Activity-based DEVS modeling

    DEFF Research Database (Denmark)

    Alshareef, Abdurrahman; Sarjoughian, Hessam S.; Zarrin, Bahram

    2018-01-01

    Use of model-driven approaches has been increasing to significantly benefit the process of building complex systems. Recently, an approach for specifying model behavior using UML activities has been devised to support the creation of DEVS models in a disciplined manner based on the model-driven architecture and the UML concepts. In this paper, we further this work by grounding Activity-based DEVS modeling and developing a fully-fledged modeling engine to demonstrate applicability. We also detail the relevant aspects of the created metamodel in terms of modeling and simulation. A significant number of the artifacts of the UML 2.5 activities and actions, from the vantage point of DEVS behavioral modeling, is covered in detail. Their semantics are discussed to the extent of time-accurate requirements for simulation. We characterize them in correspondence with the specification of the atomic model behavior.

  14. GPS-based microenvironment tracker (MicroTrac) model to estimate time-location of individuals for air pollution exposure assessments: model evaluation in central North Carolina.

    Science.gov (United States)

    Breen, Michael S; Long, Thomas C; Schultz, Bradley D; Crooks, James; Breen, Miyuki; Langstaff, John E; Isaacs, Kristin K; Tan, Yu-Mei; Williams, Ronald W; Cao, Ye; Geller, Andrew M; Devlin, Robert B; Batterman, Stuart A; Buckley, Timothy J

    2014-07-01

    A critical aspect of air pollution exposure assessment is the estimation of the time spent by individuals in various microenvironments (ME). Accounting for the time spent in different ME with different pollutant concentrations can reduce exposure misclassifications, while failure to do so can add uncertainty and bias to risk estimates. In this study, a classification model, called MicroTrac, was developed to estimate time of day and duration spent in eight ME (indoors and outdoors at home, work, school; inside vehicles; other locations) from global positioning system (GPS) data and geocoded building boundaries. Based on a panel study, MicroTrac estimates were compared with 24-h diary data from nine participants, with corresponding GPS data and building boundaries of home, school, and work. MicroTrac correctly classified the ME for 99.5% of the daily time spent by the participants. The capability of MicroTrac could help to reduce the time-location uncertainty in air pollution exposure models and exposure metrics for individuals in health studies.
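
    A minimal sketch of the kind of rule-based time-location classification MicroTrac performs is given below: GPS fixes are assigned to microenvironments from speed and geocoded building boundaries. The bounding boxes, speed threshold and rule order are assumptions for illustration, not the published algorithm.

```python
# Illustrative GPS-to-microenvironment classifier; HOME/WORK boxes and the
# 5 m/s in-vehicle threshold are invented values.
HOME = (35.00, -78.90, 0.0005)   # lat, lon, half-width of bounding box (deg)
WORK = (35.05, -78.95, 0.0005)

def inside(lat, lon, box):
    clat, clon, r = box
    return abs(lat - clat) <= r and abs(lon - clon) <= r

def classify(fix):
    """fix = (lat, lon, speed_m_s) -> microenvironment label."""
    lat, lon, speed = fix
    if speed > 5.0:                 # sustained speed implies in-vehicle
        return "vehicle"
    if inside(lat, lon, HOME):
        return "home"
    if inside(lat, lon, WORK):
        return "work"
    return "other"

track = [(35.0001, -78.9002, 0.1), (35.02, -78.92, 12.0), (35.05, -78.95, 0.0)]
print([classify(f) for f in track])   # ['home', 'vehicle', 'work']
```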

  15. The Trick Simulation Toolkit: A NASA/Opensource Framework for Running Time Based Physics Models

    Science.gov (United States)

    Penn, John M.

    2016-01-01

    The Trick Simulation Toolkit is a simulation development environment used to create high fidelity training and engineering simulations at the NASA Johnson Space Center and many other NASA facilities. Its purpose is to generate a simulation executable from a collection of user-supplied models and a simulation definition file. For each Trick-based simulation, Trick automatically provides job scheduling, numerical integration, the ability to write and restore human readable checkpoints, data recording, interactive variable manipulation, a run-time interpreter, and many other commonly needed capabilities. This allows simulation developers to concentrate on their domain expertise and the algorithms and equations of their models. Also included in Trick are tools for plotting recorded data and various other supporting utilities and libraries. Trick is written in C/C++ and Java and supports both Linux and MacOSX computer operating systems. This paper describes Trick's design and use at NASA Johnson Space Center.

  16. A Time-Variant Reliability Model for Copper Bending Pipe under Seawater-Active Corrosion Based on the Stochastic Degradation Process

    Directory of Open Access Journals (Sweden)

    Bo Sun

    2018-03-01

    Full Text Available In the degradation process, the randomness and multiplicity of variables are difficult to describe by mathematical models. However, they are common in engineering and cannot be neglected, so it is necessary to study this issue in depth. In this paper, the copper bending pipe in seawater piping systems is taken as the analysis object, and the time-variant reliability is calculated by solving the interference of limit strength and maximum stress. We performed degradation and tensile experiments on the copper material and obtained the limit strength at each time. In addition, degradation experiments on the copper bending pipe were carried out and the thickness at each time was obtained; the response of maximum stress was then calculated by simulation. Further, with the help of a Monte Carlo method that we propose, the time-variant reliability of the copper bending pipe was calculated based on the stochastic degradation process and interference theory. Compared with traditional methods and verified by maintenance records, the results show that the time-variant reliability model based on the stochastic degradation process proposed in this paper has better applicability in reliability analysis, and it is more convenient and accurate for predicting the replacement cycle of copper bending pipe under seawater-active corrosion.
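
    The stress-strength interference computation at the core of such a model is easy to sketch by Monte Carlo: reliability at time t is the probability that degrading strength still exceeds growing stress. The degradation laws and all parameter values below are assumptions for the sketch, not the paper's fitted copper data.

```python
import numpy as np

rng = np.random.default_rng(0)

def reliability(t_years, n=100_000):
    # Limit strength degrades linearly on average, with lognormal scatter.
    strength = rng.lognormal(mean=np.log(240.0 - 6.0 * t_years), sigma=0.05, size=n)
    # Maximum stress grows as wall thickness is lost to corrosion.
    stress = rng.normal(loc=120.0 + 4.0 * t_years, scale=10.0, size=n)
    return np.mean(strength > stress)   # P(no failure at time t)

for t in range(0, 11, 2):
    print(f"t = {t:2d} y  R(t) = {reliability(t):.4f}")
```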

  17. Convex-based void filling method for CAD-based Monte Carlo geometry modeling

    International Nuclear Information System (INIS)

    Yu, Shengpeng; Cheng, Mengyun; Song, Jing; Long, Pengcheng; Hu, Liqin

    2015-01-01

    Highlights: • We present a new void filling method named CVF for CAD-based MC geometry modeling. • We describe convex-based void description and quality-based space subdivision. • The results show the improvements provided by CVF in both modeling and MC calculation efficiency. - Abstract: CAD-based automatic geometry modeling tools have been widely applied to generate Monte Carlo (MC) calculation geometry for complex systems according to CAD models. Automatic void filling is one of the main functions in CAD-based MC geometry modeling tools, because the void space between parts in CAD models is traditionally not modeled, while MC codes such as MCNP need all of the problem space to be described. A dedicated void filling method, named Convex-based Void Filling (CVF), is proposed in this study for efficient void filling and concise void descriptions. The method subdivides all the problem space into disjointed regions using Quality-based Subdivision (QS) and describes the void space in each region with complementary descriptions of the convex volumes intersecting with that region. It has been implemented in SuperMC/MCAM, the Multiple-Physics Coupling Analysis Modeling Program, and tested on the International Thermonuclear Experimental Reactor (ITER) Alite model. The results showed that the new method reduced both automatic modeling time and MC calculation time.

  18. Space-time latent component Modeling of Geo-referenced health data

    OpenAIRE

    Lawson, Andrew B.; Song, Hae-Ryoung; Cai, Bo; Hossain, Md Monir; Huang, Kun

    2010-01-01

    Latent structure models have been proposed in many applications. For space-time health data it is often important to be able to find underlying trends in time which are supported by subsets of small areas. Latent structure modeling is one approach to this analysis. This paper presents a mixture-based approach that can be applied to component selection. The analysis of a Georgia ambulatory asthma county-level data set is presented and a simulation-based evaluation is made.

  19. Flatness-based control and Kalman filtering for a continuous-time macroeconomic model

    Science.gov (United States)

    Rigatos, G.; Siano, P.; Ghosh, T.; Busawon, K.; Binns, R.

    2017-11-01

    The article proposes flatness-based control for a nonlinear macro-economic model of the UK economy. The differential flatness properties of the model are proven. This makes it possible to introduce a transformation (diffeomorphism) of the system's state variables and to express the state-space description of the model in the linear canonical (Brunovsky) form, in which both the feedback control and the state estimation problem can be solved. For the linearized equivalent model of the macroeconomic system, stabilizing feedback control can be achieved using pole placement methods. Moreover, to implement stabilizing feedback control of the system by measuring only a subset of its state vector elements, the Derivative-free nonlinear Kalman Filter is used. This consists of the Kalman Filter recursion applied on the linearized equivalent model of the financial system and of an inverse transformation that is based again on differential flatness theory. The asymptotic stability properties of the control scheme are confirmed.
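
    Once a flatness transformation brings a model into Brunovsky form (a chain of integrators), stabilization by pole placement is routine. The sketch below shows that step on a generic third-order chain; the chain and the chosen poles are illustrative, not the UK-economy model itself.

```python
import numpy as np
from scipy.signal import place_poles

# Brunovsky canonical form: z1' = z2, z2' = z3, z3' = v.
n = 3
A = np.diag(np.ones(n - 1), k=1)
B = np.zeros((n, 1)); B[-1, 0] = 1.0

poles = np.array([-1.0, -2.0, -3.0])    # desired closed-loop poles (assumed)
K = place_poles(A, B, poles).gain_matrix

A_cl = A - B @ K                        # closed loop under v = -K z
print(np.sort(np.linalg.eigvals(A_cl))) # ~ [-3, -2, -1]
```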

  20. Using Agent-Based Modeling to Enhance System-Level Real-time Control of Urban Stormwater Systems

    Science.gov (United States)

    Rimer, S.; Mullapudi, A. M.; Kerkez, B.

    2017-12-01

    The ability to reduce combined-sewer overflow (CSO) events is an issue that challenges over 800 U.S. municipalities. When the volume of a combined sewer system or wastewater treatment plant is exceeded, untreated wastewater overflows (a CSO event) into nearby streams, rivers, or other water bodies, causing localized urban flooding and pollution. The likelihood and impact of CSO events have only been exacerbated by urbanization, population growth, climate change, aging infrastructure, and system complexity. Thus, there is an urgent need for urban areas to manage CSO events. Traditionally, mitigating CSO events has been carried out via time-intensive and expensive structural interventions such as retention basins or sewer separation, which are able to reduce CSO events but are costly, arduous, and only provide a fixed solution to a dynamic problem. Real-time control (RTC) of urban drainage systems using sensor and actuator networks has served as an inexpensive and versatile alternative to traditional CSO intervention. In particular, retrofitting individual stormwater elements for sensing and automated active distributed control has been shown to significantly reduce the volume of discharge during CSO events, with some RTC models demonstrating a reduction upwards of 90% when compared to traditional passive systems. As more stormwater elements become retrofitted for RTC, system-level RTC across complete watersheds is an attainable possibility. However, when considering the diverse set of control needs of each of these individual stormwater elements, such system-level RTC becomes a far more complex problem. To address such diverse control needs, agent-based modeling is employed such that each individual stormwater element is treated as an autonomous agent with diverse decision-making capabilities. We present preliminary results and limitations of utilizing the agent-based modeling computational framework for the system-level control of diverse, interacting stormwater elements.

  1. Population PK modelling and simulation based on fluoxetine and norfluoxetine concentrations in milk: a milk concentration-based prediction model.

    Science.gov (United States)

    Tanoshima, Reo; Bournissen, Facundo Garcia; Tanigawara, Yusuke; Kristensen, Judith H; Taddio, Anna; Ilett, Kenneth F; Begg, Evan J; Wallach, Izhar; Ito, Shinya

    2014-10-01

    Population pharmacokinetic (pop PK) modelling can be used for PK assessment of drugs in breast milk. However, complex mechanistic modelling of a parent and an active metabolite using both blood and milk samples is challenging. We aimed to develop a simple predictive pop PK model for milk concentration-time profiles of a parent and a metabolite, using data on fluoxetine (FX) and its active metabolite, norfluoxetine (NFX), in milk. Using a previously published data set of drug concentrations in milk from 25 women treated with FX, a pop PK model predictive of milk concentration-time profiles of FX and NFX was developed. Simulation was performed with the model to generate FX and NFX concentration-time profiles in milk of 1000 mothers. This milk concentration-based pop PK model was compared with the previously validated plasma/milk concentration-based pop PK model of FX. Milk FX and NFX concentration-time profiles were described reasonably well by a one compartment model with a FX-to-NFX conversion coefficient. Median values of the simulated relative infant dose on a weight basis (sRID: weight-adjusted daily doses of FX and NFX through breastmilk to the infant, expressed as a fraction of therapeutic FX daily dose per body weight) were 0.028 for FX and 0.029 for NFX. The FX sRID estimates were consistent with those of the plasma/milk-based pop PK model. A predictive pop PK model based on only milk concentrations can be developed for simultaneous estimation of milk concentration-time profiles of a parent (FX) and an active metabolite (NFX). © 2014 The British Pharmacological Society.
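
    The structural part of such a model, a one-compartment parent drug converting to an active metabolite, is a small linear ODE system. The sketch below simulates that structure; the rate constants and dose are invented for illustration and are not the fitted FX/NFX parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parent (FX) absorbed from a depot, eliminated and partly converted to the
# metabolite (NFX); all values below are assumed, arbitrary units.
ka, ke_fx, k_conv, ke_nfx = 1.0, 0.08, 0.02, 0.06   # 1/h
dose = 20.0

def rhs(t, y):
    gut, fx, nfx = y
    return [-ka * gut,
            ka * gut - (ke_fx + k_conv) * fx,   # parent elimination + conversion
            k_conv * fx - ke_nfx * nfx]         # metabolite formation/elimination

sol = solve_ivp(rhs, (0.0, 120.0), [dose, 0.0, 0.0], dense_output=True)
for ti in np.linspace(0, 120, 7):
    _, fx, nfx = sol.sol(ti)
    print(f"t={ti:5.1f} h  FX={fx:6.3f}  NFX={nfx:6.3f}")
```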

  2. A Procedure for Identification of Appropriate State Space and ARIMA Models Based on Time-Series Cross-Validation

    Directory of Open Access Journals (Sweden)

    Patrícia Ramos

    2016-11-01

    Full Text Available In this work, a cross-validation procedure is used to identify an appropriate Autoregressive Integrated Moving Average model and an appropriate state space model for a time series. A minimum size for the training set is specified. The procedure is based on one-step forecasts and uses different training sets, each containing one more observation than the previous one. All possible state space models and all ARIMA models where the orders are allowed to range reasonably are fitted, considering raw data and log-transformed data with regular differencing (up to second order differences) and, if the time series is seasonal, seasonal differencing (up to first order differences). The root mean squared error for each model is calculated by averaging the one-step forecast errors. The model which has the lowest root mean squared error value and passes the Ljung–Box test using all of the available data with a reasonable significance level is selected among all the ARIMA and state space models considered. The procedure is exemplified in this paper with a case study of retail sales of different categories of women's footwear from a Portuguese retailer, and its accuracy is compared with three reliable forecasting approaches. The results show that our procedure consistently forecasts more accurately than the other approaches and that the improvements in accuracy are significant.
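
    A minimal sketch of this expanding-window, one-step-ahead cross-validation for a single candidate ARIMA order is shown below, assuming the statsmodels package; iterating over a grid of orders and state space models follows the same pattern. The stand-in series and the chosen order are assumptions.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

y = np.log(np.random.default_rng(1).gamma(9.0, 10.0, size=120))  # stand-in data
min_train, order = 80, (1, 1, 1)

# Expanding training window: each fold adds one observation, forecast one step.
errors = []
for t in range(min_train, len(y)):
    fit = ARIMA(y[:t], order=order).fit()
    errors.append(y[t] - fit.forecast(1)[0])
rmse = float(np.sqrt(np.mean(np.square(errors))))

# Ljung-Box check on residuals of the model fitted to all available data.
resid = ARIMA(y, order=order).fit().resid
lb_p = acorr_ljungbox(resid, lags=[10])["lb_pvalue"].iloc[0]
print(f"CV RMSE = {rmse:.4f}, Ljung-Box p = {lb_p:.3f}")
```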

  3. Modeling Nonstationary Emotion Dynamics in Dyads using a Time-Varying Vector-Autoregressive Model.

    Science.gov (United States)

    Bringmann, Laura F; Ferrer, Emilio; Hamaker, Ellen L; Borsboom, Denny; Tuerlinckx, Francis

    2018-01-01

    Emotion dynamics are likely to arise in an interpersonal context. Standard methods to study emotions in interpersonal interaction are limited because stationarity is assumed. This means that the dynamics, for example, time-lagged relations, are invariant across time periods. However, this is generally an unrealistic assumption. Whether caused by an external (e.g., divorce) or an internal (e.g., rumination) event, emotion dynamics are prone to change. The semi-parametric time-varying vector-autoregressive (TV-VAR) model is based on well-studied generalized additive models, implemented in the software R. The TV-VAR can explicitly model changes in temporal dependency without pre-existing knowledge about the nature of change. A simulation study is presented, showing that the TV-VAR model is superior to the standard time-invariant VAR model when the dynamics change over time. The TV-VAR model is applied to empirical data on daily feelings of positive affect (PA) from a single couple. Our analyses indicate reliable changes in the male's emotion dynamics over time, but not in the female's, which were not predicted by her own affect or that of her partner. This application illustrates the usefulness of using a TV-VAR model to detect changes in the dynamics in a system.

  4. Modeling and Understanding Time-Evolving Scenarios

    Directory of Open Access Journals (Sweden)

    Riccardo Melen

    2015-08-01

    Full Text Available In this paper, we consider the problem of modeling application scenarios characterized by variability over time and involving heterogeneous kinds of knowledge. The evolution of distributed technologies creates new and challenging possibilities of integrating different kinds of problem solving methods, obtaining many benefits from the user point of view. In particular, we propose here a multilayer modeling system and adopt the Knowledge Artifact concept to tie together statistical and Artificial Intelligence rule-based methods to tackle problems in ubiquitous and distributed scenarios.

  5. Survey of time preference, delay discounting models

    Directory of Open Access Journals (Sweden)

    John R. Doyle

    2013-03-01

    Full Text Available The paper surveys over twenty models of delay discounting (also known as temporal discounting, time preference, or time discounting) that psychologists and economists have put forward to explain the way people actually trade off time and money. Using little more than the basic algebra of powers and logarithms, I show how the models are derived, what assumptions they are based upon, and how different models relate to each other. Rather than concentrate only on discount functions themselves, I show how discount functions may be manipulated to isolate rate parameters for each model. This approach, consistently applied, helps focus attention on the three main components in any discounting model: subjectively perceived money; subjectively perceived time; and how these elements are combined. We group models by the number of parameters that have to be estimated, which means our exposition follows a trajectory of increasing complexity to the models. However, as the story unfolds it becomes clear that most models fall into a smaller number of families. We also show how new models may be constructed by combining elements of different models. The surveyed models are: Exponential; Hyperbolic; Arithmetic; Hyperboloid (Green and Myerson, Rachlin); Loewenstein and Prelec Generalized Hyperboloid; quasi-Hyperbolic (also known as beta-delta discounting); Benhabib et al.'s fixed cost; Benhabib et al.'s Exponential / Hyperbolic / quasi-Hyperbolic; Read's discounting fractions; Roelofsma's exponential time; Scholten and Read's discounting-by-intervals (DBI); Ebert and Prelec's constant sensitivity (CS); Bleichrodt et al.'s constant absolute decreasing impatience (CADI); Bleichrodt et al.'s constant relative decreasing impatience (CRDI); Green, Myerson, and Macaux's hyperboloid over intervals models; Killeen's additive utility; size-sensitive additive utility; Yi, Landes, and Bickel's memory trace models; McClure et al.'s two exponentials; and Scholten and Read's trade-off model.
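
    Three of the most common discount functions from this family are one-liners; the sketch below shows the exponential, hyperbolic (Mazur) and quasi-hyperbolic (beta-delta) forms, with k, beta and delta as free rate parameters to be estimated from choice data.

```python
import numpy as np

def exponential(t, k):                  # V = A * exp(-k t)
    return np.exp(-k * t)

def hyperbolic(t, k):                   # Mazur: V = A / (1 + k t)
    return 1.0 / (1.0 + k * t)

def quasi_hyperbolic(t, beta, delta):   # beta-delta: V = A * beta * delta**t for t > 0
    return np.where(t == 0, 1.0, beta * delta ** t)

t = np.array([0, 1, 7, 30, 365], dtype=float)   # delays in days (example values)
print(exponential(t, 0.01))
print(hyperbolic(t, 0.01))
print(quasi_hyperbolic(t, 0.9, 0.995))
```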

  6. A Data-Driven Modeling Strategy for Smart Grid Power Quality Coupling Assessment Based on Time Series Pattern Matching

    Directory of Open Access Journals (Sweden)

    Hao Yu

    2018-01-01

    Full Text Available This study introduces a data-driven modeling strategy for smart grid power quality (PQ) coupling assessment based on time series pattern matching to quantify the influence of single and integrated disturbances among nodes in different pollution patterns. Periodic and random PQ patterns are constructed by using multidimensional frequency-domain decomposition for all disturbances. A multidimensional piecewise linear representation based on local extreme points is proposed to extract the pattern features of single and integrated disturbances in consideration of disturbance variation trend and severity. A feature distance of pattern (FDP) is developed to implement pattern matching on univariate PQ time series (UPQTS) and multivariate PQ time series (MPQTS) to quantify the influence of single and integrated disturbances among nodes in the pollution patterns. Case studies on a 14-bus distribution system are performed and analyzed; the accuracy and applicability of the FDP in smart grid PQ coupling assessment are verified by comparison with other time series pattern matching methods.
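
    The piecewise linear representation anchored at local extreme points is simple to sketch: keep only the extrema (plus endpoints) and compare the resulting segment slopes between two series. The naive distance below is illustrative; the paper's FDP weighting is richer than this.

```python
import numpy as np

def extrema_plr(x):
    """Indices of local extrema (plus endpoints) of a 1-D series."""
    d = np.diff(x)
    idx = np.where(d[:-1] * d[1:] < 0)[0] + 1   # sign change of slope
    return np.concatenate(([0], idx, [len(x) - 1]))

def feature_distance(x, y):
    """Crude distance between the slope sequences of two PLRs."""
    def slopes(z):
        i = extrema_plr(z)
        return np.diff(z[i]) / np.diff(i)
    sx, sy = slopes(x), slopes(y)
    m = max(len(sx), len(sy))
    sx = np.pad(sx, (0, m - len(sx))); sy = np.pad(sy, (0, m - len(sy)))
    return float(np.mean(np.abs(sx - sy)))

t = np.linspace(0, 4 * np.pi, 200)
print(feature_distance(np.sin(t), np.sin(t + 0.3)))   # small for similar patterns
```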

  7. Predicting the Risk of Attrition for Undergraduate Students with Time Based Modelling

    Science.gov (United States)

    Chai, Kevin E. K.; Gibson, David

    2015-01-01

    Improving student retention is an important and challenging problem for universities. This paper reports on the development of a student attrition model for predicting which first year students are most at-risk of leaving at various points in time during their first semester of study. The objective of developing such a model is to assist…

  8. Use of Just in Time Maintenance of Reinforced Concrete Bridge Structures based on Real Historical Data Deterioration Models

    Directory of Open Access Journals (Sweden)

    Abu-Tair A.

    2016-01-01

    Full Text Available Concrete is the backbone of any developed economy. Concrete can suffer from a large number of deleterious effects with physical, chemical, and biological causes. Organizations owning large numbers of bridge structures face serious questions when asking for maintenance budgets: they need to justify the work and its urgency, and also to predict or show the consequences of delaying the rehabilitation of a particular structure. There is therefore a need for a probabilistic model that can estimate the range of service lives of bridge populations and also the likelihood of the level of deterioration reached in every incremental time interval. A model was developed for such estimation based on statistical data from actual inspection records of a large reinforced concrete bridge portfolio. The method used both deterministic and stochastic approaches to predict the service life of a bridge; using these service lives in combination with the just-in-time (JIT) principle of management would enable maintenance managers to justify the need for action and the budgets required, and to intervene at the optimum time in the life of the structure and that of the deterioration. The paper reports on the model, which is based on a large database of deterioration records of concrete bridges covering a period of over 60 years and including data from over 400 bridge structures. The paper also illustrates how the service life model was developed and how these service lives, combined with JIT, can be used to effectively allocate resources and keep a major infrastructure asset moving with little disruption to the transport system and its users.

  9. A four-stage hybrid model for hydrological time series forecasting.

    Science.gov (United States)

    Di, Chongli; Yang, Xiaohua; Wang, Xiaochao

    2014-01-01

    Hydrological time series forecasting remains a difficult task due to its complicated nonlinear, non-stationary and multi-scale characteristics. To solve this difficulty and improve the prediction accuracy, a novel four-stage hybrid model is proposed for hydrological time series forecasting based on the principle of 'denoising, decomposition and ensemble'. The proposed model has four stages, i.e., denoising, decomposition, components prediction and ensemble. In the denoising stage, the empirical mode decomposition (EMD) method is utilized to reduce the noises in the hydrological time series. Then, an improved method of EMD, the ensemble empirical mode decomposition (EEMD), is applied to decompose the denoised series into a number of intrinsic mode function (IMF) components and one residual component. Next, the radial basis function neural network (RBFNN) is adopted to predict the trend of all of the components obtained in the decomposition stage. In the final ensemble prediction stage, the forecasting results of all of the IMF and residual components obtained in the third stage are combined to generate the final prediction results, using a linear neural network (LNN) model. For illustration and verification, six hydrological cases with different characteristics are used to test the effectiveness of the proposed model. The proposed hybrid model performs better than conventional single models, the hybrid models without denoising or decomposition and the hybrid models based on other methods, such as the wavelet analysis (WA)-based hybrid models. In addition, the denoising and decomposition strategies decrease the complexity of the series and reduce the difficulties of the forecasting. With its effective denoising and accurate decomposition ability, high prediction precision and wide applicability, the new model is very promising for complex time series forecasting. This new forecast model is an extension of nonlinear prediction models.

  10. A Four-Stage Hybrid Model for Hydrological Time Series Forecasting

    Science.gov (United States)

    Di, Chongli; Yang, Xiaohua; Wang, Xiaochao

    2014-01-01

    Hydrological time series forecasting remains a difficult task due to its complicated nonlinear, non-stationary and multi-scale characteristics. To solve this difficulty and improve the prediction accuracy, a novel four-stage hybrid model is proposed for hydrological time series forecasting based on the principle of ‘denoising, decomposition and ensemble’. The proposed model has four stages, i.e., denoising, decomposition, components prediction and ensemble. In the denoising stage, the empirical mode decomposition (EMD) method is utilized to reduce the noises in the hydrological time series. Then, an improved method of EMD, the ensemble empirical mode decomposition (EEMD), is applied to decompose the denoised series into a number of intrinsic mode function (IMF) components and one residual component. Next, the radial basis function neural network (RBFNN) is adopted to predict the trend of all of the components obtained in the decomposition stage. In the final ensemble prediction stage, the forecasting results of all of the IMF and residual components obtained in the third stage are combined to generate the final prediction results, using a linear neural network (LNN) model. For illustration and verification, six hydrological cases with different characteristics are used to test the effectiveness of the proposed model. The proposed hybrid model performs better than conventional single models, the hybrid models without denoising or decomposition and the hybrid models based on other methods, such as the wavelet analysis (WA)-based hybrid models. In addition, the denoising and decomposition strategies decrease the complexity of the series and reduce the difficulties of the forecasting. With its effective denoising and accurate decomposition ability, high prediction precision and wide applicability, the new model is very promising for complex time series forecasting. This new forecast model is an extension of nonlinear prediction models. PMID:25111782

  11. Real-time Face Detection using Skin Color Model

    Institute of Scientific and Technical Information of China (English)

    LU Yao-xin; LIU Zhi-Qiang; ZHU Xiang-hua

    2004-01-01

    This paper presents a new face detection approach for real-time applications, based on a skin color model and morphological filtering. First, the non-skin color pixels of the input image are removed based on the skin color model in the YCrCb chrominance space, from which candidate human face regions are extracted. Then a mathematical morphological filter is used to remove noisy regions and fill the holes in the candidate skin color regions. We use the similarity between human face features and the candidate face regions to locate the face regions in the original image. We have implemented the algorithm in our smart media system. The experimental results show that this system is effective in real-time applications.
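
    The pipeline described above maps almost directly onto OpenCV primitives: YCrCb conversion, chrominance thresholding, then morphological opening and closing. The Cr/Cb thresholds below are commonly used textbook values, not necessarily the ones tuned in this paper, and "input.jpg" stands for any BGR test image.

```python
import cv2
import numpy as np

img = cv2.imread("input.jpg")                      # any BGR test image (assumed)
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)

lower = np.array([0, 133, 77], dtype=np.uint8)     # Y, Cr, Cb skin bounds (assumed)
upper = np.array([255, 173, 127], dtype=np.uint8)
mask = cv2.inRange(ycrcb, lower, upper)            # keep skin-colored pixels

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # drop small noise
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)   # fill small holes

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
candidates = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]
print(candidates)   # candidate face regions, to be checked against face features
```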

  12. A Seasonal Time-Series Model Based on Gene Expression Programming for Predicting Financial Distress.

    Science.gov (United States)

    Cheng, Ching-Hsue; Chan, Chia-Pang; Yang, Jun-He

    2018-01-01

    The issue of financial distress prediction is an important and challenging research topic in the financial field. Currently, there are many methods for predicting firm bankruptcy and financial crisis, including artificial intelligence and traditional statistical methods, and past studies have shown that the prediction results of artificial intelligence methods are better than those of traditional statistical methods. Financial statements are quarterly reports; hence, the financial crisis of companies is seasonal time-series data, and the attribute data affecting the financial distress of companies is nonlinear and nonstationary time-series data with fluctuations. Therefore, this study employed a nonlinear attribute selection method to build a nonlinear financial distress prediction model: that is, this paper proposed a novel seasonal time-series gene expression programming model for predicting the financial distress of companies. The proposed model has several advantages, including the following: (i) the proposed model differs from previous models, which lack the concept of time series; (ii) the proposed integrated attribute selection method can find the core attributes and reduce high dimensional data; and (iii) the proposed model can generate the rules and mathematical formulas of financial distress, providing references to investors and decision makers. The results show that the proposed method is better than the listed classifiers under three criteria; hence, the proposed model has competitive advantages in predicting the financial distress of companies.

  13. A valuation-Based Test of Market Timing

    NARCIS (Netherlands)

    Koeter-Kant, J.; Elliott, W.B.; Warr, R.S.

    2007-01-01

    We implement an earnings-based fundamental valuation model to test the impact of market timing on the firm's method of funding the financing deficit. We argue that our valuation metric provides a superior measure of equity misvaluation because it avoids multiple interpretation problems faced by the

  14. Real-time interferometric monitoring and measuring of photopolymerization based stereolithographic additive manufacturing process: sensor model and algorithm

    International Nuclear Information System (INIS)

    Zhao, X; Rosen, D W

    2017-01-01

    As additive manufacturing is poised for growth and innovation, it faces the barrier of lacking the in-process metrology and control needed to advance into wider industry applications. Exposure controlled projection lithography (ECPL) is a layerless mask-projection stereolithographic additive manufacturing process in which parts are fabricated from photopolymers on a stationary transparent substrate. To improve the process accuracy with closed-loop control for ECPL, this paper develops an interferometric curing monitoring and measuring (ICM&M) method that addresses the sensor modeling and algorithm issues. A physical sensor model for ICM&M is derived based on interference optics utilizing the concept of instantaneous frequency. The associated calibration procedure is outlined to ensure ICM&M measurement accuracy. To solve the sensor model, particularly in real time, an online evolutionary parameter estimation algorithm is developed, adopting moving-horizon exponentially weighted Fourier curve fitting and numerical integration. As a preliminary validation, simulated real-time measurement by offline analysis of a video of interferograms acquired in the ECPL process is presented. The agreement between the cured height estimated by ICM&M and that measured by microscope indicates that the measurement principle is promising as real-time metrology for global measurement and control of the ECPL process.
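
    A standard way to recover instantaneous frequency, the quantity the ICM&M sensor model is built on, is via the analytic signal (Hilbert transform). The synthetic chirp below merely stands in for a real interferogram pixel trace; it is not the paper's estimation algorithm.

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0                                   # sampling rate (assumed)
t = np.arange(0, 2.0, 1.0 / fs)
signal = np.cos(2 * np.pi * (5.0 * t + 2.0 * t ** 2))   # true frequency: 5 + 4 t Hz

analytic = hilbert(signal)                    # analytic signal
phase = np.unwrap(np.angle(analytic))         # unwrapped instantaneous phase
inst_freq = np.gradient(phase, 1.0 / fs) / (2 * np.pi)

print(inst_freq[200], inst_freq[1500])        # ~5.8 Hz at t=0.2 s, ~11 Hz at t=1.5 s
```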

  15. Time series modeling for syndromic surveillance

    Directory of Open Access Journals (Sweden)

    Mandl Kenneth D

    2003-01-01

    Full Text Available Abstract Background Emergency department (ED) based syndromic surveillance systems identify abnormally high visit rates that may be an early signal of a bioterrorist attack. For example, an anthrax outbreak might first be detectable as an unusual increase in the number of patients reporting to the ED with respiratory symptoms. Reliably identifying these abnormal visit patterns requires a good understanding of the normal patterns of healthcare usage. Unfortunately, systematic methods for determining the expected number of ED visits on a particular day have not yet been well established. We present here a generalized methodology for developing models of expected ED visit rates. Methods Using time-series methods, we developed robust models of ED utilization for the purpose of defining expected visit rates. The models were based on nearly a decade of historical data at a major metropolitan academic, tertiary care pediatric emergency department. The historical data were fit using trimmed-mean seasonal models, and additional models were fit with autoregressive integrated moving average (ARIMA) residuals to account for recent trends in the data. The detection capabilities of the model were tested with simulated outbreaks. Results Models were built both for overall visits and for respiratory-related visits, classified according to the chief complaint recorded at the beginning of each visit. The mean absolute percentage error of the ARIMA models was 9.37% for overall visits and 27.54% for respiratory visits. A simple detection system based on the ARIMA model of overall visits was able to detect 7-day-long simulated outbreaks of 30 visits per day with 100% sensitivity and 97% specificity. Sensitivity decreased with outbreak size, dropping to 94% for outbreaks of 20 visits per day, and 57% for 10 visits per day, all while maintaining a 97% benchmark specificity. Conclusions Time series methods applied to historical ED utilization data are an important tool for syndromic surveillance.
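
    The trimmed-mean seasonal baseline plus a simple exceedance detector can be sketched in a few lines; the real system layers ARIMA modeling of the residuals on top of this. The weekly visit pattern and control limit below are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

days = 7 * 104                                   # two years of daily ED counts
dow = np.arange(days) % 7
base = np.array([130, 120, 118, 117, 119, 125, 140])   # weekly pattern (assumed)
visits = rng.poisson(base[dow])                  # synthetic historical data

# Day-of-week expected rates from 20%-trimmed means of the history.
expected = np.array([stats.trim_mean(visits[dow == d], 0.2) for d in range(7)])
resid = visits - expected[dow]
threshold = 3.0 * resid.std()                    # crude control limit (assumed)

today_dow, today_count = 2, 160
alarm = (today_count - expected[today_dow]) > threshold
print(f"expected {expected[today_dow]:.1f}, observed {today_count}, alarm={alarm}")
```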

  16. Small Sample Properties of Bayesian Multivariate Autoregressive Time Series Models

    Science.gov (United States)

    Price, Larry R.

    2012-01-01

    The aim of this study was to compare the small sample (N = 1, 3, 5, 10, 15) performance of a Bayesian multivariate vector autoregressive (BVAR-SEM) time series model relative to frequentist power and parameter estimation bias. A multivariate autoregressive model was developed based on correlated autoregressive time series vectors of varying…

  17. Fisher information framework for time series modeling

    Science.gov (United States)

    Venkatesan, R. C.; Plastino, A.

    2017-08-01

    A robust prediction model invoking the Takens embedding theorem, whose working hypothesis is obtained via an inference procedure based on the minimum Fisher information principle, is presented. The coefficients of the ansatz, central to the working hypothesis, satisfy a time-independent Schrödinger-like equation in a vector setting. The inference of (i) the probability density function of the coefficients of the working hypothesis and (ii) the constraint-driven pseudo-inverse condition for the modeling phase of the prediction scheme is made, for the case of normal distributions, with the aid of the quantum mechanical virial theorem. The well-known reciprocity relations and the associated Legendre transform structure for the Fisher information measure (FIM)-based model in a vector setting (with least squares constraints) are self-consistently derived. These relations are demonstrated to yield an intriguing form of the FIM for the modeling phase, which defines the working hypothesis solely in terms of the observed data. Prediction cases employ time series obtained from: (i) the Mackey-Glass delay-differential equation; (ii) an ECG signal from the MIT-Beth Israel Deaconess Hospital (MIT-BIH) cardiac arrhythmia database; and (iii) an ECG signal from the Creighton University ventricular tachyarrhythmia database. The ECG samples were obtained from the Physionet online repository. These examples demonstrate the efficiency of the prediction model. Numerical examples for exemplary cases are provided.
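
    For readers unfamiliar with Takens-style prediction, the generic scheme the working hypothesis refines looks like the sketch below: embed the scalar series in delay coordinates, then forecast by averaging the futures of the nearest embedded neighbors. The embedding dimension, delay and neighbor count are arbitrary choices here, and the FIM machinery itself is not implemented.

```python
import numpy as np

def embed(x, dim, tau):
    """Delay-coordinate embedding: rows are states (x[j], x[j+tau], ...)."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

def predict_next(x, dim=4, tau=5, k=5):
    E = embed(x, dim, tau)
    query, library = E[-1], E[:-1]                  # last state vs history
    dist = np.linalg.norm(library - query, axis=1)
    nearest = np.argsort(dist)[:k]
    futures = x[nearest + (dim - 1) * tau + 1]      # one step after each neighbor
    return futures.mean()

t = np.arange(2000)
x = np.sin(0.05 * t) + 0.1 * np.random.default_rng(3).standard_normal(2000)
print(predict_next(x), np.sin(0.05 * 2000))         # forecast vs noiseless truth
```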

  18. Model-based schedulability analysis of safety critical hard real-time Java programs

    DEFF Research Database (Denmark)

    Bøgholm, Thomas; Kragh-Hansen, Henrik; Olsen, Petur

    2008-01-01

    The approach produces models verifiable by the Uppaal model checker [23]. Schedulability analysis is reduced to a simple reachability question, checking for deadlock freedom. Model-based schedulability analysis has been developed by Amnell et al. [2], but has so far only been applied to high-level specifications, not actual...

  19. NASA AVOSS Fast-Time Wake Prediction Models: User's Guide

    Science.gov (United States)

    Ahmad, Nash'at N.; VanValkenburg, Randal L.; Pruis, Matthew

    2014-01-01

    The National Aeronautics and Space Administration (NASA) is developing and testing fast-time wake transport and decay models to safely enhance the capacity of the National Airspace System (NAS). The fast-time wake models are empirical algorithms used for real-time predictions of wake transport and decay based on aircraft parameters and ambient weather conditions. The aircraft dependent parameters include the initial vortex descent velocity and the vortex pair separation distance. The atmospheric initial conditions include vertical profiles of temperature or potential temperature, eddy dissipation rate, and crosswind. The current distribution includes the latest versions of the APA (3.4) and the TDP (2.1) models. This User's Guide provides detailed information on the model inputs, file formats, and the model output. An example of a model run and a brief description of the Memphis 1995 Wake Vortex Dataset is also provided.

  20. Stability Analysis and H∞ Model Reduction for Switched Discrete-Time Time-Delay Systems

    Directory of Open Access Journals (Sweden)

    Zheng-Fan Liu

    2014-01-01

    Full Text Available This paper is concerned with the problem of exponential stability and H∞ model reduction of a class of switched discrete-time systems with state time-varying delay. Some subsystems can be unstable. Based on the average dwell time technique and the Lyapunov-Krasovskii functional (LKF) approach, sufficient conditions for exponential stability with H∞ performance of such systems are derived in terms of linear matrix inequalities (LMIs). For high-order systems, sufficient conditions for the existence of a reduced-order model are derived in terms of LMIs. Moreover, the error system is guaranteed to be exponentially stable and an H∞ error performance is guaranteed. Numerical examples are given to demonstrate the effectiveness and reduced conservatism of the obtained results.
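
    The basic ingredient of such LMI conditions is easy to sketch numerically: feasibility of the discrete-time Lyapunov inequality A'PA - P < 0 with P > 0 certifies stability of one (delay-free) mode. The paper's conditions extend this pattern with delay terms and average dwell time coupling between modes; the sketch below, assuming the cvxpy package, covers only the single-mode case with an invented matrix A.

```python
import numpy as np
import cvxpy as cp

A = np.array([[0.8, 0.3],
              [0.0, 0.6]])          # one stable mode (assumed)
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),                    # P positive definite
               A.T @ P @ A - P << -eps * np.eye(n)]     # Lyapunov decrease
prob = cp.Problem(cp.Minimize(0), constraints)          # pure feasibility
prob.solve(solver=cp.SCS)

print(prob.status)                  # 'optimal' -> LMI feasible, mode is stable
print(np.round(P.value, 3))
```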

  1. A Time-Space Symmetry Based Cylindrical Model for Quantum Mechanical Interpretations

    Science.gov (United States)

    Vo Van, Thuan

    2017-12-01

    Following a bi-cylindrical model of geometrical dynamics, our study shows that a 6D gravitational equation leads to a geodesic description in an extended symmetrical time-space, which fits Hubble-like expansion on a microscopic scale. As a duality, the geodesic solution is mathematically equivalent to the basic Klein-Gordon-Fock equations of free massive elementary particles, in particular the squared Dirac equations of leptons. The quantum indeterminism is proved to have originated from space-time curvatures. Interpretation of some important issues of quantum mechanical reality is carried out in comparison with the 5D space-time-matter theory. A solution of the lepton mass hierarchy is proposed by extending to higher dimensional curvatures of time-like hyper-spherical surfaces than those of the cylindrical dynamical geometry. As a result, reasonable charged lepton mass ratios have been calculated, which would be tested experimentally.

  2. Statistical performance and information content of time lag analysis and redundancy analysis in time series modeling.

    Science.gov (United States)

    Angeler, David G; Viedma, Olga; Moreno, José M

    2009-11-01

    Time lag analysis (TLA) is a distance-based approach used to study temporal dynamics of ecological communities by measuring community dissimilarity over increasing time lags. Despite its increased use in recent years, its performance in comparison with other more direct methods (i.e., canonical ordination) has not been evaluated. This study fills this gap using extensive simulations and real data sets from experimental temporary ponds (true zooplankton communities) and landscape studies (landscape categories as pseudo-communities) that differ in community structure and anthropogenic stress history. Modeling time with a principal coordinates of neighbor matrices (PCNM) approach, the canonical ordination technique (redundancy analysis; RDA) consistently outperformed the other statistical tests (i.e., TLAs, Mantel test, and RDA based on linear time trends) using all real data. In addition, the RDA-PCNM revealed different patterns of temporal change, and the strength of each individual time pattern, in terms of adjusted variance explained, could be evaluated. It also identified species contributions to these patterns of temporal change. This additional information is not provided by distance-based methods. The simulation study revealed better Type I error properties of the canonical ordination techniques compared with the distance-based approaches when no deterministic component of change was imposed on the communities. The simulation also revealed that strong emphasis on uniform deterministic change and low variability at other temporal scales is needed to result in decreased statistical power of the RDA-PCNM approach relative to the other methods. Based on the statistical performance of and information content provided by RDA-PCNM models, this technique serves ecologists as a powerful tool for modeling temporal change of ecological (pseudo-) communities.
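
    The TLA computation being compared here is itself very simple: compute a community dissimilarity for every pair of samples, regress it on the (square root of the) time lag, and read directional change off a positive slope. A bare-bones sketch with Bray-Curtis dissimilarity and synthetic drifting communities follows.

```python
import numpy as np

def bray_curtis(a, b):
    return np.abs(a - b).sum() / (a + b).sum()

def tla_slope(community):                 # community: (time, species) matrix
    T = community.shape[0]
    lags, dissim = [], []
    for lag in range(1, T):
        for t in range(T - lag):
            lags.append(lag)
            dissim.append(bray_curtis(community[t], community[t + lag]))
    # Dissimilarity regressed on sqrt(lag), as is conventional in TLA.
    return np.polyfit(np.sqrt(lags), dissim, 1)[0]

rng = np.random.default_rng(4)
drift = np.cumsum(rng.normal(0, 0.1, (30, 8)), axis=0) + 10.0  # drifting community
print(f"TLA slope = {tla_slope(drift):.4f}  (positive -> directional change)")
```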

  3. Rule-based approach to cognitive modeling of real-time decision making

    International Nuclear Information System (INIS)

    Thorndyke, P.W.

    1982-01-01

    Recent developments in the fields of cognitive science and artificial intelligence have made possible the creation of a new class of models of complex human behavior. These models, referred to as either expert or knowledge-based systems, describe the high-level cognitive processing undertaken by a skilled human to perform a complex, largely mental, task. Expert systems have been developed to provide simulations of skilled performance of a variety of tasks. These include problems of data interpretation, system monitoring and fault isolation, prediction, planning, diagnosis, and design. In general, such systems strive to produce prescriptive (error-free) behavior, rather than model descriptively the typical human's errorful behavior. However, some research has sought to develop descriptive models of human behavior using the same theoretical frameworks adopted by expert systems builders. This paper presents an overview of this theoretical framework and modeling approach, and indicates the applicability of such models to the development of a model of control room operators in a nuclear power plant. Such a model could serve several beneficial functions in plant design, licensing, and operation
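
    The core mechanism of such rule-based models is forward chaining over a working memory of facts. The toy engine below illustrates that mechanism; the control-room facts and rules are invented for illustration, not taken from any plant model.

```python
# Toy forward-chaining rule engine; each rule is (set of conditions, conclusion).
rules = [
    ({"coolant_flow_low", "reactor_power_high"}, "overheating_risk"),
    ({"overheating_risk"}, "recommend_power_reduction"),
    ({"alarm_acknowledged", "overheating_risk"}, "monitor_temperature_trend"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:                        # keep firing rules until a fixed point
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"coolant_flow_low", "reactor_power_high"}, rules))
```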

  4. A GIS-based groundwater travel time model to evaluate stream nitrate concentration reductions from land use change

    Science.gov (United States)

    Schilling, K.E.; Wolter, C.F.

    2007-01-01

    Excessive nitrate-nitrogen (nitrate) loss from agricultural watersheds is an environmental concern. A common conservation practice to improve stream water quality is to retire vulnerable row croplands to grass. In this paper, a groundwater travel time model based on a geographic information system (GIS) analysis of readily available soil and topographic variables was used to evaluate the time needed to observe stream nitrate concentration reductions from conversion of row crop land to native prairie in Walnut Creek watershed, Iowa. Average linear groundwater velocity in 5-m cells was estimated by overlaying GIS layers of soil permeability, land slope (surrogates for hydraulic conductivity and gradient, respectively) and porosity. Cells were summed backwards from the stream network to watershed divide to develop a travel time distribution map. Results suggested that groundwater from half of the land planted in prairie has reached the stream network during the 10 years of ongoing water quality monitoring. The mean travel time for the watershed was estimated to be 10.1 years, consistent with results from a simple analytical model. The proportion of land in the watershed and subbasins with prairie groundwater reaching the stream (10-22%) was similar to the measured reduction of stream nitrate (11-36%). Results provide encouragement that additional nitrate reductions in Walnut Creek are probable in the future as reduced nitrate groundwater from distal locations discharges to the stream network in the coming years. The high spatial resolution of the model (5-m cells) and its simplicity may make it potentially applicable for land managers interested in communicating lag time issues to the public, particularly related to nitrate concentration reductions over time. © 2007 Springer-Verlag.
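
    The per-cell computation described above is essentially Darcy's law, v = K i / n, applied raster-wise and then summed along flow paths. The sketch below uses synthetic stand-in grids for the soil and slope layers; a real implementation would accumulate cell crossing times along mapped flow directions rather than a straight row.

```python
import numpy as np

rng = np.random.default_rng(5)

K = rng.uniform(0.5, 5.0, (4, 4))         # hydraulic conductivity (m/d), from permeability
slope = rng.uniform(0.005, 0.05, (4, 4))  # land slope as hydraulic-gradient surrogate
porosity = np.full((4, 4), 0.30)

velocity = K * slope / porosity           # average linear velocity per cell (m/d)
cell_size = 5.0                           # the paper's 5-m cells
cell_time = cell_size / velocity / 365.0  # years to cross one cell

# Travel time of a simplified straight flow path from a cell to the stream
# (here: three cells of row 2 draining toward column 0).
path = cell_time[2, :3]
print(f"travel time ~ {path.sum():.1f} years")
```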

  5. A Space-Time Network-Based Modeling Framework for Dynamic Unmanned Aerial Vehicle Routing in Traffic Incident Monitoring Applications

    Directory of Open Access Journals (Sweden)

    Jisheng Zhang

    2015-06-01

    Full Text Available It is essential for transportation management centers to equip and manage a network of fixed and mobile sensors in order to quickly detect traffic incidents and further monitor the related impact areas, especially for high-impact accidents with dramatic traffic congestion propagation. As emerging small Unmanned Aerial Vehicles (UAVs) start to have a more flexible regulation environment, it is critically important to fully explore the potential of using UAVs for monitoring recurring and non-recurring traffic conditions and special events on transportation networks. This paper presents a space-time network-based modeling framework for integrated fixed and mobile sensor networks, in order to provide a rapid and systematic road traffic monitoring mechanism. By constructing a discretized space-time network to characterize not only the speed of UAVs but also the time-sensitive impact areas of traffic congestion, we formulate the problem as a linear integer programming model to minimize the detection delay cost and operational cost, subject to feasible flying route constraints. A Lagrangian relaxation solution framework is developed to decompose the original complex problem into a series of computationally efficient time-dependent and least cost path finding sub-problems. Several examples are used to demonstrate the results of the proposed models in UAVs' route planning for small and medium-scale networks.

  6. Space-time latent component modeling of geo-referenced health data.

    Science.gov (United States)

    Lawson, Andrew B; Song, Hae-Ryoung; Cai, Bo; Hossain, Md Monir; Huang, Kun

    2010-08-30

    Latent structure models have been proposed in many applications. For space-time health data it is often important to be able to find the underlying trends in time, which are supported by subsets of small areas. Latent structure modeling is one such approach to this analysis. This paper presents a mixture-based approach that can be applied to component selection. The analysis of a Georgia ambulatory asthma county-level data set is presented and a simulation-based evaluation is made. Copyright (c) 2010 John Wiley & Sons, Ltd.

  7. A Seasonal Time-Series Model Based on Gene Expression Programming for Predicting Financial Distress

    Science.gov (United States)

    2018-01-01

    The issue of financial distress prediction is an important and challenging research topic in the financial field. Currently, there are many methods for predicting firm bankruptcy and financial crisis, including artificial intelligence and traditional statistical methods, and past studies have shown that the prediction results of artificial intelligence methods are better than those of traditional statistical methods. Financial statements are quarterly reports; hence, the financial crisis of companies is seasonal time-series data, and the attribute data affecting the financial distress of companies is nonlinear and nonstationary time-series data with fluctuations. Therefore, this study employed a nonlinear attribute selection method to build a nonlinear financial distress prediction model: that is, this paper proposed a novel seasonal time-series gene expression programming model for predicting the financial distress of companies. The proposed model has several advantages, including the following: (i) the proposed model differs from previous models, which lack the concept of time series; (ii) the proposed integrated attribute selection method can find the core attributes and reduce high dimensional data; and (iii) the proposed model can generate the rules and mathematical formulas of financial distress, providing references to investors and decision makers. The results show that the proposed method is better than the listed classifiers under three criteria; hence, the proposed model has competitive advantages in predicting the financial distress of companies. PMID:29765399

  8. A Seasonal Time-Series Model Based on Gene Expression Programming for Predicting Financial Distress

    Directory of Open Access Journals (Sweden)

    Ching-Hsue Cheng

    2018-01-01

    Full Text Available The issue of financial distress prediction is an important and challenging research topic in the financial field. Currently, there have been many methods for predicting firm bankruptcy and financial crisis, including artificial intelligence and traditional statistical methods, and past studies have shown that the prediction results of artificial intelligence methods are better than those of traditional statistical methods. Financial statements are quarterly reports; hence, the financial crisis of companies is seasonal time-series data, and the attribute data affecting the financial distress of companies is nonlinear and nonstationary time-series data with fluctuations. Therefore, this study employed a nonlinear attribute selection method to build a nonlinear financial distress prediction model: that is, this paper proposed a novel seasonal time-series gene expression programming model for predicting the financial distress of companies. The proposed model has several advantages, including the following: (i) the proposed model is different from previous models lacking the concept of time series; (ii) the proposed integrated attribute selection method can find the core attributes and reduce high-dimensional data; and (iii) the proposed model can generate the rules and mathematical formulas of financial distress, providing references to investors and decision makers. The results show that the proposed method is better than the listed classifiers under three criteria; hence, the proposed model has competitive advantages in predicting the financial distress of companies.

  9. Modelling bursty time series

    International Nuclear Information System (INIS)

    Vajna, Szabolcs; Kertész, János; Tóth, Bálint

    2013-01-01

    Many human-related activities show power-law decaying interevent time distribution with exponents usually varying between 1 and 2. We study a simple task-queuing model, which produces bursty time series due to the non-trivial dynamics of the task list. The model is characterized by a priority distribution as an input parameter, which describes the choice procedure from the list. We give exact results on the asymptotic behaviour of the model and we show that the interevent time distribution is power-law decaying for any kind of input distributions that remain normalizable in the infinite list limit, with exponents tunable between 1 and 2. The model satisfies a scaling law between the exponents of interevent time distribution (β) and autocorrelation function (α): α + β = 2. This law is general for renewal processes with power-law decaying interevent time distribution. We conclude that slowly decaying autocorrelation function indicates long-range dependence only if the scaling law is violated. (paper)
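
    A toy simulation in the spirit of the task-list model makes the burstiness tangible: if the highest-priority task is always executed and replaced by a fresh task with a uniform random priority, low-priority tasks can linger for very long times, producing a heavy-tailed waiting time distribution. The list length, step count, and uniform priority law below are illustrative choices, not the paper's exact protocol.

        import random

        def simulate_queue(list_len=10, steps=200000, seed=1):
            # Fixed-length task list: each step the highest-priority task is
            # executed and replaced by a new task with a random priority.
            rng = random.Random(seed)
            tasks = [(rng.random(), 0) for _ in range(list_len)]  # (priority, arrival step)
            waits = []
            for t in range(1, steps + 1):
                i = max(range(list_len), key=lambda k: tasks[k][0])
                waits.append(t - tasks[i][1])
                tasks[i] = (rng.random(), t)
            return sorted(waits)

        waits = simulate_queue()
        # The median wait is tiny while the maximum spans orders of magnitude.
        print(waits[len(waits) // 2], waits[-1])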

  10. DeepTravel: a Neural Network Based Travel Time Estimation Model with Auxiliary Supervision

    OpenAIRE

    Zhang, Hanyuan; Wu, Hao; Sun, Weiwei; Zheng, Baihua

    2018-01-01

    Estimating the travel time of a path is of great importance to smart urban mobility. Existing approaches are either based on estimating the time cost of each road segment, which cannot capture many cross-segment complex factors, or designed heuristically in a non-learning-based way, which fails to utilize the existing abundant temporal labels of the data, i.e., the time stamp of each trajectory point. In this paper, we leverage new developments in deep neural networks and propose a no...

  11. Quality prediction modeling for sintered ores based on mechanism models of sintering and extreme learning machine based error compensation

    Science.gov (United States)

    Tiebin, Wu; Yunlian, Liu; Xinjun, Li; Yi, Yu; Bin, Zhang

    2018-06-01

    Aiming at the difficulty of quality prediction for sintered ores, a hybrid prediction model is established based on mechanism models of sintering and time-weighted error compensation using the extreme learning machine (ELM). First, mechanism models of the drum index, total iron, and alkalinity are constructed according to the chemical reaction mechanism and conservation of matter in the sintering process. As the process is simplified in the mechanism models, these models cannot describe the high nonlinearity, so errors are inevitable. For this reason, a time-weighted ELM-based error compensation model is established. Simulation results verify that the hybrid model has high accuracy and can meet the requirements of industrial applications.

  12. Modeling biological pathway dynamics with timed automata.

    Science.gov (United States)

    Schivo, Stefano; Scholma, Jetse; Wanders, Brend; Urquidi Camacho, Ricardo A; van der Vet, Paul E; Karperien, Marcel; Langerak, Rom; van de Pol, Jaco; Post, Janine N

    2014-05-01

    Living cells are constantly subjected to a plethora of environmental stimuli that require integration into an appropriate cellular response. This integration takes place through signal transduction events that form tightly interconnected networks. The understanding of these networks requires capturing their dynamics through computational support and models. ANIMO (Analysis of Networks with Interactive Modeling) is a tool that enables the construction and exploration of executable models of biological networks, helping to derive hypotheses and to plan wet-lab experiments. The tool is based on the formalism of Timed Automata, which can be analyzed via the UPPAAL model checker. Thanks to Timed Automata, we can provide a formal semantics for the domain-specific language used to represent signaling networks. This enforces precision and uniformity in the definition of signaling pathways, contributing to the integration of isolated signaling events into complex network models. We propose an approach to discretization of reaction kinetics that allows us to efficiently use UPPAAL as the computational engine to explore the dynamic behavior of the network of interest. A user-friendly interface hides the use of Timed Automata from the user, while keeping the expressive power intact. Abstraction to single-parameter kinetics speeds up construction of models that remain faithful enough to provide meaningful insight. The resulting dynamic behavior of the network components is displayed graphically, allowing for an intuitive and interactive modeling experience.

  13. Laboratory Load Model Based on 150 kVA Power Frequency Converter and Simulink Real-Time – Concept, Implementation, Experiments

    Directory of Open Access Journals (Sweden)

    Robert Małkowski

    2016-09-01

    Full Text Available The first section of the paper provides the technical specification of a laboratory load model based on a 150 kVA power frequency converter and the Simulink Real-Time platform. The assumptions, as well as the control algorithm structure, are presented. Theoretical considerations on which load types may be simulated using the discussed laboratory setup are described. As the described model contains a transformer with a thyristor-controlled tap changer, a wider scope of device capabilities is presented. The paper lists and describes tunable parameters, both those tunable during device operation and those changed only before starting the experiment. Implementation details are given in the second section of the paper. The hardware structure is presented and described. Information about the communication interface, the data maintenance and storage solution, and the Simulink Real-Time features used is presented. A list and description of all measurements is provided. The potential of laboratory setup modifications is evaluated. The third section describes the performed laboratory tests. Different load configurations are described and experimental results are presented. This includes simulation of under-frequency load shedding, frequency- and voltage-dependent characteristics of groups of load units, time characteristics of a group of different load units in a chosen area, and arbitrary active and reactive power regulation based on a defined schedule. Different operation modes of the control algorithm are described: apparent power control, active and reactive power control, and active and reactive current RMS value control.

  14. Pattern formation in individual-based systems with time-varying parameters

    Science.gov (United States)

    Ashcroft, Peter; Galla, Tobias

    2013-12-01

    We study the patterns generated in finite-time sweeps across symmetry-breaking bifurcations in individual-based models. Similar to the well-known Kibble-Zurek scenario of defect formation, large-scale patterns are generated when model parameters are varied slowly, whereas fast sweeps produce a large number of small domains. The symmetry breaking is triggered by intrinsic noise, originating from the discrete dynamics at the microlevel. Based on a linear-noise approximation, we calculate the characteristic length scale of these patterns. We demonstrate the applicability of this approach in a simple model of opinion dynamics, a model in evolutionary game theory with a time-dependent fitness structure, and a model of cell differentiation. Our theoretical estimates are confirmed in simulations. In further numerical work, we observe a similar phenomenon when the symmetry-breaking bifurcation is triggered by population growth.

  15. Fuzzy time-series based on Fibonacci sequence for stock price forecasting

    Science.gov (United States)

    Chen, Tai-Liang; Cheng, Ching-Hsue; Jong Teoh, Hia

    2007-07-01

    Time-series models have been utilized to make reasonably accurate predictions in the areas of stock price movements, academic enrollments, weather, etc. For promoting the forecasting performance of fuzzy time-series models, this paper proposes a new model, which incorporates the concept of the Fibonacci sequence, the framework of Song and Chissom's model and the weighted method of Yu's model. This paper employs a 5-year period TSMC (Taiwan Semiconductor Manufacturing Company) stock price data and a 13-year period of TAIEX (Taiwan Stock Exchange Capitalization Weighted Stock Index) stock index data as experimental datasets. By comparing our forecasting performances with Chen's (Forecasting enrollments based on fuzzy time-series. Fuzzy Sets Syst. 81 (1996) 311-319), Yu's (Weighted fuzzy time-series models for TAIEX forecasting. Physica A 349 (2004) 609-624) and Huarng's (The application of neural networks to forecast fuzzy time series. Physica A 336 (2006) 481-491) models, we conclude that the proposed model surpasses in accuracy these conventional fuzzy time-series models.

  16. Model-Assisted Control of Flow Front in Resin Transfer Molding Based on Real-Time Estimation of Permeability/Porosity Ratio

    Directory of Open Access Journals (Sweden)

    Bai-Jian Wei

    2016-09-01

    Full Text Available Resin transfer molding (RTM) is a popular manufacturing technique that produces fiber reinforced polymer (FRP) composites. In this paper, a model-assisted flow front control system is developed based on real-time estimation of the permeability/porosity ratio using the information acquired by a visualization system. In the proposed control system, a radial basis function (RBF) network meta-model is utilized to predict the position of the future flow front by inputting the injection pressure, the current position of the flow front, and the estimated ratio. By conducting optimization based on the meta-model, the value of injection pressure to be implemented at each step is obtained. Moreover, a cascade control structure is established to further improve the control performance. Experiments show that the developed system successfully enhances the performance of flow front control in RTM. In particular, the cascade structure makes the control system robust to model mismatch.

  17. Finite-time adaptive sliding mode force control for electro-hydraulic load simulator based on improved GMS friction model

    Science.gov (United States)

    Kang, Shuo; Yan, Hao; Dong, Lijing; Li, Changchun

    2018-03-01

    This paper addresses the force tracking problem of an electro-hydraulic load simulator under the influence of nonlinear friction and uncertain disturbance. A nonlinear system model combined with the improved generalized Maxwell-slip (GMS) friction model is first derived to describe the characteristics of the load simulator system more accurately. Then, by using the particle swarm optimization (PSO) algorithm combined with an analysis of the system hysteresis characteristic, the GMS friction parameters are identified. To compensate for nonlinear friction and uncertain disturbance, a finite-time adaptive sliding mode control method is proposed based on the accurate system model. This controller ensures that the system state moves along the nonlinear sliding surface to steady state in a short time and exhibits good dynamic properties under the influence of parametric uncertainties and disturbance, which further improves the force loading accuracy and rapidity. At the end of this work, simulation and experimental results are employed to demonstrate the effectiveness of the proposed sliding mode control strategy.

  18. A novel model for Time-Series Data Clustering Based on piecewise SVD and BIRCH for Stock Data Analysis on Hadoop Platform

    Directory of Open Access Journals (Sweden)

    Ibgtc Bowala

    2017-06-01

    Full Text Available With the rapid growth of financial markets, analysts are paying more attention to predictions. Stock data are time series data of huge volume. A feasible solution for handling this increasing amount of data is to use a cluster for parallel processing, and the Hadoop parallel computing platform is a typical representative. There are various statistical models for forecasting time series data, but accurate clusters are a pre-requisite. Clustering analysis for time series data is one of the main methods for mining time series data for many other analysis processes. However, general clustering algorithms cannot perform well on time series data, because such data have a special structure and high dimensionality, with highly correlated values and a high noise level. A novel model for time series clustering is presented using BIRCH, based on piecewise SVD, leading to a novel dimension reduction approach. Highly correlated features are handled using SVD, with a novel approach to dimensionality reduction that keeps the correlated behaviour optimal, before BIRCH is used for clustering. The algorithm is a novel model that can handle massive time series data. Finally, this new model is successfully applied to real stock time series data from Yahoo finance with satisfactory results.
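
    A compact sketch of the pipeline, with the piecewise aspect simplified to a single global SVD: reduce each series to a few SVD coordinates, then let BIRCH cluster the reduced vectors. The synthetic series, the number of retained components, and the cluster count are all illustrative.

        import numpy as np
        from sklearn.cluster import Birch

        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 1.0, 250)
        shapes = [np.sin(2 * np.pi * t), np.cos(2 * np.pi * t), t]
        # 60 noisy series built from three latent shapes.
        X = np.vstack([shapes[i % 3] + 0.1 * rng.standard_normal(250) for i in range(60)])

        # Dimension reduction: coordinates of each series on the k leading
        # singular directions of the centred data matrix.
        k = 5
        U, s, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
        X_red = U[:, :k] * s[:k]

        labels = Birch(n_clusters=3).fit_predict(X_red)
        print(labels[:9])  # series with the same underlying shape tend to share a label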

  19. A model of interval timing by neural integration.

    Science.gov (United States)

    Simen, Patrick; Balci, Fuat; de Souza, Laura; Cohen, Jonathan D; Holmes, Philip

    2011-06-22

    We show that simple assumptions about neural processing lead to a model of interval timing as a temporal integration process, in which a noisy firing-rate representation of time rises linearly on average toward a response threshold over the course of an interval. Our assumptions include: that neural spike trains are approximately independent Poisson processes, that correlations among them can be largely cancelled by balancing excitation and inhibition, that neural populations can act as integrators, and that the objective of timed behavior is maximal accuracy and minimal variance. The model accounts for a variety of physiological and behavioral findings in rodents, monkeys, and humans, including ramping firing rates between the onset of reward-predicting cues and the receipt of delayed rewards, and universally scale-invariant response time distributions in interval timing tasks. It furthermore makes specific, well-supported predictions about the skewness of these distributions, a feature of timing data that is usually ignored. The model also incorporates a rapid (potentially one-shot) duration-learning procedure. Human behavioral data support the learning rule's predictions regarding learning speed in sequences of timed responses. These results suggest that simple, integration-based models should play as prominent a role in interval timing theory as they do in theories of perceptual decision making, and that a common neural mechanism may underlie both types of behavior.
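
    A minimal simulation conveys the mechanism: excitatory and inhibitory Poisson spike counts nearly cancel, so the integrated difference drifts slowly and noisily toward a threshold, and the first-crossing time is the produced interval. The rates, imbalance, and threshold below are illustrative, not fitted values from the paper.

        import numpy as np

        def timed_responses(rate=200.0, imbalance=0.25, theta=100.0,
                            trials=1000, dt=0.001, horizon=4.0, seed=0):
            rng = np.random.default_rng(seed)
            n = int(horizon / dt)
            exc = rng.poisson(rate * dt, size=(trials, n))
            inh = rng.poisson(rate * (1.0 - imbalance) * dt, size=(trials, n))
            x = np.cumsum(exc - inh, axis=1)          # noisy ramp toward threshold
            crossed = x >= theta
            rt = crossed.argmax(axis=1)[crossed.any(axis=1)] * dt
            # Scalar-timing signature: spread grows with the mean (roughly constant CV).
            print(f"mean RT {rt.mean():.2f} s, CV {rt.std() / rt.mean():.2f}")
            return rt

        timed_responses()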

  20. Real-time deformation of human soft tissues: A radial basis meshless 3D model based on Marquardt's algorithm.

    Science.gov (United States)

    Zhou, Jianyong; Luo, Zu; Li, Chunquan; Deng, Mi

    2018-01-01

    When the meshless method is used to establish the mathematical-mechanical model of human soft tissues, it is necessary to define the space occupied by human tissues as the problem domain and the boundary of the domain as the surface of those tissues. Nodes should be distributed in both the problem domain and on the boundaries. Under external force, the displacement of the node is computed by the meshless method to represent the deformation of biological soft tissues. However, computation by the meshless method consumes too much time, which will affect the simulation of real-time deformation of human tissues in virtual surgery. In this article, Marquardt's algorithm is proposed to fit the nodal displacement at the problem domain's boundary and obtain the relationship between surface deformation and force. When different external forces are applied, the deformation of soft tissues can be quickly obtained based on this relationship. The analysis and discussion show that the improved model equations with Marquardt's algorithm can not only simulate the deformation in real time but also preserve the authenticity of the deformation model's physical properties. Copyright © 2017 Elsevier B.V. All rights reserved.
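
    The fitting step can be sketched with SciPy, whose curve_fit uses the Levenberg-Marquardt algorithm for unconstrained problems. The saturating force-to-displacement law and the sample data below are hypothetical stand-ins for the nodal displacements precomputed by the meshless solver.

        import numpy as np
        from scipy.optimize import curve_fit

        # Hypothetical (force, displacement) samples for one boundary node.
        force = np.linspace(0.0, 5.0, 25)
        disp = 2.0 * (1.0 - np.exp(-0.8 * force)) \
               + 0.02 * np.random.default_rng(0).standard_normal(25)

        def surface_model(f, a, b):
            # Assumed saturating deformation law; the paper's form may differ.
            return a * (1.0 - np.exp(-b * f))

        (a, b), _ = curve_fit(surface_model, force, disp, p0=(1.0, 1.0))
        print(f"fitted a={a:.3f}, b={b:.3f}")
        # At render time, deformation under a new force is a direct evaluation.
        print(surface_model(1.5, a, b))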

  1. Draft Forecasts from Real-Time Runs of Physics-Based Models - A Road to the Future

    Science.gov (United States)

    Hesse, Michael; Rastatter, Lutz; MacNeice, Peter; Kuznetsova, Masha

    2008-01-01

    The Community Coordinated Modeling Center (CCMC) is a US inter-agency activity aimed at research in support of the generation of advanced space weather models. As one of its main functions, the CCMC provides researchers with the use of space science models, even if they are not model owners themselves. The second focus of CCMC activities is on validation and verification of space weather models, and on the transition of appropriate models to space weather forecast centers. As part of the latter activity, the CCMC develops real-time simulation systems that stress models through routine execution. A by-product of these real-time calculations is the ability to derive model products which may be useful for space weather operators. After consultations with NOAA/SEC and with AFWA, CCMC has developed a set of tools as a first step to make real-time model output useful to forecast centers. In this presentation, we will discuss the motivation for this activity, the actions taken so far, and options for future tools from model output.

  2. A residence-time-based transport approach for the groundwater pathway in performance assessment models

    Science.gov (United States)

    Robinson, Bruce A.; Chu, Shaoping

    2013-03-01

    This paper presents the theoretical development and numerical implementation of a new modeling approach for representing the groundwater pathway in a risk assessment or performance assessment model of a contaminant transport system. The model developed in the present study, called the Residence Time Distribution (RTD) Mixing Model (RTDMM), allows an arbitrary distribution of fluid travel times to be represented, capturing the effects on the breakthrough curve of flow processes such as channelized flow, fast pathways, and complex three-dimensional dispersion. Mathematical methods for constructing the model for a given RTD are derived directly from the theory of residence time distributions in flowing systems. A simple mixing model is presented, along with the basic equations required to enable an arbitrary RTD to be reproduced using the model. The practical advantages of the RTDMM include easy incorporation into a multi-realization probabilistic simulation; a computational burden no more onerous than a one-dimensional model with the same number of grid cells; and straightforward implementation into available flow and transport modeling codes, enabling one to utilize the advanced transport features of those codes. For example, in this study we incorporated diffusion into the stagnant fluid in the rock matrix away from the flowing fractures, using a generalized dual porosity model formulation. A suite of example calculations presented herein shows the utility of the RTDMM for the case of a radioactive decay chain, dual porosity transport and sorption.
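
    The core idea reduces to a convolution: once an RTD is specified, the outlet breakthrough curve for any inlet history follows directly. The bimodal RTD below (a fast channelized peak plus a slow dispersed tail) and the pulse release are illustrative only; decay chains and matrix diffusion are omitted.

        import numpy as np

        t = np.linspace(0.0, 50.0, 1001)    # years
        dt = t[1] - t[0]
        # Hypothetical RTD: narrow fast-path peak plus exponential tail.
        rtd = 0.5 * np.exp(-0.5 * ((t - 5.0) / 1.0) ** 2) + 0.5 * np.exp(-t / 15.0) / 15.0
        rtd /= rtd.sum() * dt               # normalize to unit area

        inlet = (t < 10.0).astype(float)    # 10-year pulse release at the source
        # Outlet breakthrough = convolution of the inlet history with the RTD.
        outlet = np.convolve(inlet, rtd)[: t.size] * dt
        print(f"peak outlet concentration {outlet.max():.3f} at t = {t[outlet.argmax()]:.1f} y")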

  3. An Agent-Based Model for Analyzing Control Policies and the Dynamic Service-Time Performance of a Capacity-Constrained Air Traffic Management Facility

    Science.gov (United States)

    Conway, Sheila R.

    2006-01-01

    Simple agent-based models may be useful for investigating air traffic control strategies as a precursory screening for more costly, higher fidelity simulation. Of concern is the ability of the models to capture the essence of the system and provide insight into system behavior in a timely manner and without breaking the bank. The method is put to the test with the development of a model to address situations where capacity is overburdened and propagation of the resultant delay through later flights is possible via flight dependencies. The resultant model includes primitive representations of principal air traffic system attributes, namely system capacity, demand, airline schedules and strategy, and aircraft capability. It affords a venue to explore their interdependence in a time-dependent, dynamic system simulation. The scope of the research question and the carefully-chosen modeling fidelity did allow for the development of an agent-based model in short order. The model predicted non-linear behavior given certain initial conditions and system control strategies. Additionally, a combination of the model and dimensionless techniques borrowed from fluid systems was demonstrated that can predict the system's dynamic behavior across a wide range of parametric settings.

  4. Predicting Charging Time of Battery Electric Vehicles Based on Regression and Time-Series Methods: A Case Study of Beijing

    Directory of Open Access Journals (Sweden)

    Jun Bi

    2018-04-01

    Full Text Available Battery electric vehicles (BEVs) reduce energy consumption and air pollution as compared with conventional vehicles. However, the limited driving range and potentially long charging time of BEVs create new problems. Accurate charging time prediction of BEVs helps drivers determine travel plans and alleviates their range anxiety during trips. This study proposed a combined model for charging time prediction based on regression and time-series methods according to actual data from BEVs operating in Beijing, China. After data analysis, a regression model was established by considering the charged amount for charging time prediction. Furthermore, a time-series method was adopted to calibrate the regression model, which significantly improved the fitting accuracy of the model. The parameters of the model were determined by using the actual data. Verification results confirmed the accuracy of the model and showed that the model errors were small. The proposed model can accurately depict the charging time characteristics of BEVs in Beijing.
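
    The two-step structure is easy to sketch: an ordinary least-squares regression of charging time on charged amount, followed by an AR(1) correction of the residuals that supplies the time-series calibration. The synthetic data, the AR coefficient, and the 30 kWh query are illustrative, not the Beijing fleet values.

        import numpy as np

        rng = np.random.default_rng(42)
        amount = rng.uniform(5.0, 45.0, 200)          # kWh charged per session
        resid = np.zeros(200)
        for i in range(1, 200):                       # autocorrelated disturbance
            resid[i] = 0.6 * resid[i - 1] + rng.normal(0.0, 3.0)
        minutes = 10.0 + 2.2 * amount + resid         # synthetic charging times

        # Step 1: regression model, minutes ~ a + b * amount.
        b, a = np.polyfit(amount, minutes, 1)
        e = minutes - (a + b * amount)

        # Step 2: time-series calibration, an AR(1) fit to the residuals.
        phi = e[1:] @ e[:-1] / (e[:-1] @ e[:-1])

        # One-step-ahead prediction for a hypothetical 30 kWh charge.
        pred = a + b * 30.0 + phi * e[-1]
        print(f"a={a:.1f}, b={b:.2f}, phi={phi:.2f}, predicted {pred:.0f} min")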

  5. Finite Time Blowup in a Realistic Food-Chain Model

    KAUST Repository

    Parshad, Rana; Ait Abderrahmane, Hamid; Upadhyay, Ranjit Kumar; Kumari, Nitu

    2013-01-01

    We investigate a realistic three-species food-chain model with a generalist top predator. The model, based on a modified version of the Leslie-Gower scheme, incorporates mutual interference in all three populations and generalizes several other known models in the ecological literature. We show that the model exhibits finite time blowup in a certain parameter range and for large enough initial data. This result implies that finite time blowup is possible in a large class of such three-species food-chain models. We propose a modification to the model and prove that the modified model has globally existing classical solutions, as well as a global attractor. We reconstruct the attractor using nonlinear time series analysis and show that it possesses rich dynamics, including chaos in a certain parameter regime, whilst avoiding blowup in any parameter regime. We also provide estimates on its fractal dimension as well as provide numerical simulations to visualise the spatiotemporal chaos.

  6. Finite Time Blowup in a Realistic Food-Chain Model

    KAUST Repository

    Parshad, Rana

    2013-05-19

    We investigate a realistic three-species food-chain model with a generalist top predator. The model, based on a modified version of the Leslie-Gower scheme, incorporates mutual interference in all three populations and generalizes several other known models in the ecological literature. We show that the model exhibits finite time blowup in a certain parameter range and for large enough initial data. This result implies that finite time blowup is possible in a large class of such three-species food-chain models. We propose a modification to the model and prove that the modified model has globally existing classical solutions, as well as a global attractor. We reconstruct the attractor using nonlinear time series analysis and show that it possesses rich dynamics, including chaos in a certain parameter regime, whilst avoiding blowup in any parameter regime. We also provide estimates on its fractal dimension as well as provide numerical simulations to visualise the spatiotemporal chaos.

  7. Modeling the Quiet Time Outflow Solution in the Polar Cap

    Science.gov (United States)

    Glocer, Alex

    2011-01-01

    We use the Polar Wind Outflow Model (PWOM) to study geomagnetically quiet conditions in the polar cap during solar maximum. The PWOM solves the gyrotropic transport equations for O(+), H(+), and He(+) along several magnetic field lines in the polar region in order to reconstruct the full 3D solution. We directly compare our simulation results to the data-based empirical model of electron density of Kitamura et al. [2011], which is based on 63 months of Akebono satellite observations. The modeled ion and electron temperatures are also compared with a statistical compilation of quiet time data obtained by the EISCAT Svalbard Radar (ESR) and Intercosmos satellites (Kitamura et al. [2011]). The data and model agree reasonably well. This study shows that photoelectrons play an important role in explaining the differences between sunlit and dark results, the ion composition, and the ion and electron temperatures of the quiet time polar wind solution. Moreover, these results provide validation of the PWOM's ability to model the quiet time ("background") solution.

  8. Soft sensor modelling by time difference, recursive partial least squares and adaptive model updating

    International Nuclear Information System (INIS)

    Fu, Y; Xu, O; Yang, W; Zhou, L; Wang, J

    2017-01-01

    To investigate time-variant and nonlinear characteristics in industrial processes, a soft sensor modelling method based on time difference, moving-window recursive partial least squares (PLS) and adaptive model updating is proposed. In this method, time difference values of input and output variables are used as training samples to construct the model, which can reduce the effects of the nonlinear characteristic on modelling accuracy and retain the advantages of the recursive PLS algorithm. To address the otherwise high updating frequency of the model, a confidence value is introduced, which can be updated adaptively according to the results of the model performance assessment. Only once the confidence value is updated can the model be updated. The proposed method has been used to predict the 4-carboxy-benz-aldehyde (CBA) content in the purified terephthalic acid (PTA) oxidation reaction process. The results show that the proposed soft sensor modelling method can reduce computation effectively, improve prediction accuracy by making use of process information and reflect the process characteristics accurately. (paper)

  9. Relative Error Model Reduction via Time-Weighted Balanced Stochastic Singular Perturbation

    DEFF Research Database (Denmark)

    Tahavori, Maryamsadat; Shaker, Hamid Reza

    2012-01-01

    A new mixed method for relative error model reduction of linear time invariant (LTI) systems is proposed in this paper. This order reduction technique is mainly based upon time-weighted balanced stochastic model reduction method and singular perturbation model reduction technique. Compared...... by using the concept and properties of the reciprocal systems. The results are further illustrated by two practical numerical examples: a model of CD player and a model of the atmospheric storm track....

  10. Modelling time-dependent mechanical behaviour of softwood using deformation kinetics

    DEFF Research Database (Denmark)

    Engelund, Emil Tang; Svensson, Staffan

    2010-01-01

    The time-dependent mechanical behaviour (TDMB) of softwood is relevant, e.g., when wood is used as building material where the mechanical properties must be predicted for decades ahead. The established mathematical models should be able to predict the time-dependent behaviour. However, these models...... are not always based on the actual physical processes causing time-dependent behaviour and the physical interpretation of their input parameters is difficult. The present study describes the TDMB of a softwood tissue and its individual tracheids. A model is constructed with a local coordinate system that follows...... macroscopic viscoelasticity, i.e., the time-dependent processes are to a significant degree reversible....

  11. Model Checking Real-Time Systems

    DEFF Research Database (Denmark)

    Bouyer, Patricia; Fahrenberg, Uli; Larsen, Kim Guldstrand

    2018-01-01

    This chapter surveys timed automata as a formalism for model checking real-time systems. We begin with introducing the model, as an extension of finite-state automata with real-valued variables for measuring time. We then present the main model-checking results in this framework, and give a hint...

  12. An advection-based model to increase the temporal resolution of PIV time series.

    Science.gov (United States)

    Scarano, Fulvio; Moore, Peter

    A numerical implementation of the advection equation is proposed to increase the temporal resolution of PIV time series. The method is based on the principle that velocity fluctuations are transported passively, similar to Taylor's hypothesis of "frozen turbulence". In the present work, the advection model is extended to unsteady three-dimensional flows. The main objective of the method is to lower the requirement on the PIV repetition rate from the Eulerian frequency toward the Lagrangian one. The local trajectory of the fluid parcel is obtained by forward projection of the instantaneous velocity at the preceding time instant and backward projection from the subsequent time step. The trajectories are approximated by the instantaneous streamlines, which yields accurate results when the amplitude of velocity fluctuations is small with respect to the convective motion. The verification is performed with two experiments conducted at temporal resolutions significantly higher than that dictated by the Nyquist criterion. The flow past the trailing edge of a NACA0012 airfoil closely approximates "frozen turbulence", where the largest ratio between the Lagrangian and Eulerian temporal scales is expected. An order of magnitude reduction of the needed acquisition frequency is demonstrated by the velocity spectra of super-sampled series. The application to three-dimensional data is made with time-resolved tomographic PIV measurements of a transitional jet. Here, the 3D advection equation is implemented to estimate the fluid trajectories. The reduction in the minimum sampling rate by the use of super-sampling in this case is less, due to the fact that vortices occurring in the jet shear layer are not well approximated by sole advection at large time separation. Both cases reveal that the current requirements for time-resolved PIV experiments can be revised when information is poured from space to time. An additional favorable effect is observed by the analysis in the
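
    In one dimension the super-sampling rule is a few lines: fluctuations are assumed frozen and carried at the convective speed, so an intermediate field is obtained by projecting the earlier snapshot forward, the later one backward, and blending. The sine-wave fluctuation and all parameters below are illustrative.

        import numpy as np

        def super_sample(u0, u1, c, dx, tau, dt):
            # Project u0 forward by tau and u1 backward by dt - tau along the
            # convective motion, then blend linearly in time.
            x = np.arange(u0.size) * dx
            w = tau / dt
            forward = np.interp(x - c * tau, x, u0)
            backward = np.interp(x + c * (dt - tau), x, u1)
            return (1.0 - w) * forward + w * backward

        dx, dt, c = 0.01, 0.1, 2.0
        x = np.arange(256) * dx
        u_t = np.sin(2 * np.pi * x / 0.64)                 # fluctuation at t
        u_t1 = np.sin(2 * np.pi * (x - c * dt) / 0.64)     # convected by c*dt
        u_mid = super_sample(u_t, u_t1, c, dx, dt / 2, dt)
        exact = np.sin(2 * np.pi * (x - c * dt / 2) / 0.64)
        # Interior error is tiny; only points advected in from outside differ.
        print(np.abs(u_mid - exact)[25:-25].max())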

  13. 3D Model Visualization Enhancements in Real-Time Game Engines

    Science.gov (United States)

    Merlo, A.; Sánchez Belenguer, C.; Vendrell Vidal, E.; Fantini, F.; Aliperta, A.

    2013-02-01

    This paper describes two procedures used to disseminate tangible cultural heritage through real-time 3D simulations providing accurate, scientific representations. The main idea is to create simple geometries (with low-poly count) and apply two different texture maps to them: a normal map and a displacement map. There are two ways to achieve models that fit with normal or displacement maps: with the former (normal maps), the number of polygons in the reality-based model may be dramatically reduced by decimation algorithms and then normals may be calculated by rendering them to texture solutions (baking). With the latter, a LOD model is needed; its topology has to be quad-dominant for it to be converted to a good quality subdivision surface (with consistent tangency and curvature all over). The subdivision surface is constructed using methodologies for the construction of assets borrowed from character animation: these techniques have been recently implemented in many entertainment applications known as "retopology". The normal map is used as usual, in order to shade the surface of the model in a realistic way. The displacement map is used to finish, in real-time, the flat faces of the object, by adding the geometric detail missing in the low-poly models. The accuracy of the resulting geometry is progressively refined based on the distance from the viewing point, so the result is like a continuous level of detail, the only difference being that there is no need to create different 3D models for one and the same object. All geometric detail is calculated in real-time according to the displacement map. This approach can be used in Unity, a real-time 3D engine originally designed for developing computer games. It provides a powerful rendering engine, fully integrated with a complete set of intuitive tools and rapid workflows that allow users to easily create interactive 3D contents. With the release of Unity 4.0, new rendering features have been added, including Direct

  14. The forecasting of menstruation based on a state-space modeling of basal body temperature time series.

    Science.gov (United States)

    Fukaya, Keiichi; Kawamori, Ai; Osada, Yutaka; Kitazawa, Masumi; Ishiguro, Makio

    2017-09-20

    Women's basal body temperature (BBT) shows a periodic pattern that is associated with the menstrual cycle. Although this fact suggests that daily BBT time series can be useful for estimating the underlying phase state as well as for predicting the length of the current menstrual cycle, little attention has been paid to modelling BBT time series. In this study, we propose a state-space model that involves the menstrual phase as a latent state variable to explain the daily fluctuation of BBT and the menstrual cycle length. Conditional distributions of the phase are obtained by using sequential Bayesian filtering techniques. A predictive distribution of the next menstruation day can be derived based on this conditional distribution and the model, leading to a novel statistical framework that provides a sequentially updated prediction of the upcoming menstruation day. We applied this framework to a real data set of women's BBT and menstruation days and compared the prediction accuracy of the proposed method with that of previous methods, showing that the proposed method generally provides a better prediction. Because BBT can be obtained with relatively small cost and effort, the proposed method can be useful for women's health management. Potential extensions of this framework as the basis for modeling and predicting events that are associated with menstrual cycles are discussed. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
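
    The sequential filtering step can be sketched with a simple particle filter: a latent phase in [0, 1) advances by roughly 1/cycle-length per day, BBT is modelled as biphasic in the phase, and the particles are reweighted by each morning's reading. The temperature law, noise levels, and cycle length below are hypothetical, not the paper's fitted model.

        import numpy as np

        def bbt_filter(bbt, n=5000, cycle_len=28.0, seed=0):
            rng = np.random.default_rng(seed)
            phase = rng.uniform(0.0, 1.0, n)          # latent menstrual phase
            for y in bbt:
                phase = (phase + rng.normal(1.0 / cycle_len, 0.01, n)) % 1.0
                mean = 36.3 + 0.4 * (phase > 0.5)     # biphasic temperature law
                w = np.exp(-0.5 * ((y - mean) / 0.15) ** 2)
                phase = phase[rng.choice(n, n, p=w / w.sum())]  # resample
            # Expected days until the phase wraps, i.e., next menstruation.
            return (1.0 - phase.mean()) * cycle_len

        days = np.arange(20)                          # synthetic record from day 0
        temps = 36.3 + 0.4 * (days / 28.0 > 0.5) \
                + np.random.default_rng(1).normal(0.0, 0.1, 20)
        print(f"predicted next menstruation in ~{bbt_filter(temps):.0f} days")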

  15. Bayesian Modelling of fMRI Time Series

    DEFF Research Database (Denmark)

    Højen-Sørensen, Pedro; Hansen, Lars Kai; Rasmussen, Carl Edward

    2000-01-01

    We present a Hidden Markov Model (HMM) for inferring the hidden psychological state (or neural activity) during single trial fMRI activation experiments with blocked task paradigms. Inference is based on Bayesian methodology, using a combination of analytical and a variety of Markov Chain Monte Carlo (MCMC) sampling techniques. The advantage of this method is that detection of short time learning effects between repeated trials is possible since inference is based only on single trial experiments.

  16. Thermally-aware composite run-time CPU power models

    OpenAIRE

    Walker, Matthew J.; Diestelhorst, Stephan; Hansson, Andreas; Balsamo, Domenico; Merrett, Geoff V.; Al-Hashimi, Bashir M.

    2016-01-01

    Accurate and stable CPU power modelling is fundamental in modern system-on-chips (SoCs) for two main reasons: 1) they enable significant online energy savings by providing a run-time manager with reliable power consumption data for controlling CPU energy-saving techniques; 2) they can be used as accurate and trusted reference models for system design and exploration. We begin by showing the limitations in typical performance monitoring counter (PMC) based power modelling approaches and illust...

  17. A model for quantification of temperature profiles via germination times

    DEFF Research Database (Denmark)

    Pipper, Christian Bressen; Adolf, Verena Isabelle; Jacobsen, Sven-Erik

    2013-01-01

    Current methodology to quantify temperature characteristics in germination of seeds is predominantly based on analysis of the time to reach a given germination fraction, that is, the quantiles in the distribution of the germination time of a seed. In practice, interpolation between observed germination fractions at given monitoring times is used to obtain the time to reach a given germination fraction. As a consequence, the obtained value will be highly dependent on the actual monitoring scheme used in the experiment. In this paper a link between currently used quantile models for the germination time and a specific type of accelerated failure time models is provided. As a consequence, the observed number of germinated seeds at given monitoring times may be analysed directly by a grouped time-to-event model from which characteristics of the temperature profile may be identified and estimated.

  18. A new costing model in hospital management: time-driven activity-based costing system.

    Science.gov (United States)

    Öker, Figen; Özyapıcı, Hasan

    2013-01-01

    Traditional cost systems cause cost distortions because they cannot meet the requirements of today's businesses. Therefore, a new and more effective cost system is needed. Consequently, the time-driven activity-based costing system has emerged. The unit cost of supplying capacity and the time needed to perform an activity are the only 2 factors considered by the system. Furthermore, this system determines unused capacity by considering practical capacity. The purpose of this article is to emphasize the efficiency of the time-driven activity-based costing system and to display how it can be applied in a health care institution. A case study was conducted in a private hospital in Cyprus. Interviews and direct observations were used to collect the data. The case study revealed that the cost of unused capacity is allocated to both open and laparoscopic (closed) surgeries. Thus, by using the time-driven activity-based costing system, managers should eliminate the cost of unused capacity so as to obtain better results. Based on the results of the study, hospital management is better able to understand the costs of different surgeries. In addition, managers can easily notice the cost of unused capacity and decide how many employees should be dismissed or redirected to other productive areas.
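
    The arithmetic behind the system is only two numbers deep, which a tiny worked example makes plain. All figures below are invented for illustration.

        # Capacity cost rate: cost of supplied capacity per minute.
        capacity_cost = 600_000.0        # monthly cost of the surgical unit
        practical_minutes = 20_000.0     # practical capacity, minutes per month
        rate = capacity_cost / practical_minutes      # 30.0 per minute

        minutes_per_case = {"open surgery": 120, "laparoscopic surgery": 90}
        cases = {"open surgery": 60, "laparoscopic surgery": 80}

        for kind, m in minutes_per_case.items():
            print(kind, "cost per case:", rate * m)

        # Unused capacity is reported separately rather than loaded onto cases.
        used = sum(minutes_per_case[k] * cases[k] for k in cases)
        print("unused capacity cost:", rate * (practical_minutes - used))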

  19. An actor-based model of social network influence on adolescent body size, screen time, and playing sports.

    Directory of Open Access Journals (Sweden)

    David A Shoham

    Full Text Available Recent studies suggest that obesity may be "contagious" between individuals in social networks. Social contagion (influence), however, may not be identifiable using traditional statistical approaches because they cannot distinguish contagion from homophily (the propensity for individuals to select friends who are similar to themselves) or from shared environmental influences. In this paper, we apply the stochastic actor-based model (SABM) framework developed by Snijders and colleagues to data on adolescent body mass index (BMI), screen time, and playing active sports. Our primary hypothesis was that social influences on adolescent body size and related behaviors are independent of friend selection. Employing the SABM, we simultaneously modeled network dynamics (friendship selection based on homophily and structural characteristics of the network) and social influence. We focused on the 2 largest schools in the National Longitudinal Study of Adolescent Health (Add Health) and held the school environment constant by examining the 2 school networks separately (N = 624 and 1151). Results show support in both schools for homophily on BMI, but also for social influence on BMI. There was no evidence of homophily on screen time in either school, while only one of the schools showed homophily on playing active sports. There was, however, evidence of social influence on screen time in one of the schools, and playing active sports in both schools. These results suggest that both homophily and social influence are important in understanding patterns of adolescent obesity. Intervention efforts should take into consideration peers' influence on one another, rather than treating "high risk" adolescents in isolation.

  20. Modelling and finite-time stability analysis of psoriasis pathogenesis

    Science.gov (United States)

    Oza, Harshal B.; Pandey, Rakesh; Roper, Daniel; Al-Nuaimi, Yusur; Spurgeon, Sarah K.; Goodfellow, Marc

    2017-08-01

    A new systems model of psoriasis is presented and analysed from the perspective of control theory. Cytokines are treated as actuators to the plant model that govern the cell population under the reasonable assumption that cytokine dynamics are faster than the cell population dynamics. The analysis of various equilibria is undertaken based on singular perturbation theory. Finite-time stability and stabilisation have been studied in various engineering applications where the principal paradigm uses non-Lipschitz functions of the states. A comprehensive study of the finite-time stability properties of the proposed psoriasis dynamics is carried out. It is demonstrated that the dynamics are finite-time convergent to certain equilibrium points rather than asymptotically or exponentially convergent. This feature of finite-time convergence motivates the development of a modified version of the Michaelis-Menten function, frequently used in biology. This framework is used to model cytokines as fast finite-time actuators.

  1. The elastic body model: a pedagogical approach integrating real time measurements and modelling activities

    International Nuclear Information System (INIS)

    Fazio, C; Guastella, I; Tarantino, G

    2007-01-01

    In this paper, we describe a pedagogical approach to elastic body movement based on measurements of the contact times between a metallic rod and small bodies colliding with it and on modelling of the experimental results by using a microcomputer-based laboratory and simulation tools. The experiments and modelling activities have been built in the context of the laboratory of mechanical wave propagation of the two-year graduate teacher education programme of Palermo's University. Some considerations about observed modifications in trainee teachers' attitudes in utilizing experiments and modelling are discussed

  2. A Personalized Predictive Framework for Multivariate Clinical Time Series via Adaptive Model Selection.

    Science.gov (United States)

    Liu, Zitao; Hauskrecht, Milos

    2017-11-01

    Building an accurate predictive model of clinical time series for a patient is critical for understanding the patient's condition, its dynamics, and optimal patient management. Unfortunately, this process is not straightforward. First, patient-specific variations are typically large, and population-based models derived or learned from many different patients are often unable to support accurate predictions for each individual patient. Moreover, time series observed for one patient at any point in time may be too short and insufficient to learn a high-quality patient-specific model just from the patient's own data. To address these problems we propose, develop and experiment with a new adaptive forecasting framework for building multivariate clinical time series models for a patient and for supporting patient-specific predictions. The framework relies on an adaptive model switching approach that at any point in time selects the most promising time series model out of a pool of many possible models, and consequently combines the advantages of population, patient-specific and short-term individualized predictive models. We demonstrate that the adaptive model switching framework is a very promising approach to support personalized time series prediction, and that it is able to outperform predictions based on pure population and patient-specific models, as well as other patient-specific model adaptation strategies.
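
    The switching rule itself is simple to sketch: every model in the pool issues one-step-ahead predictions, and at each step the framework adopts the model with the lowest squared error over a recent window. The two toy models and the simulated series below are illustrative stand-ins for the population and patient-specific models.

        import numpy as np

        def switching_forecast(y, models, window=10):
            names = list(models)
            # One-step-ahead predictions of every model at every time step.
            preds = {m: np.array([models[m](y[:t]) for t in range(1, len(y))])
                     for m in names}
            out = []
            for i in range(len(y) - 1):
                if i < window:
                    best = names[0]               # default until errors accrue
                else:
                    err = {m: np.mean((preds[m][i - window:i]
                                       - y[i - window + 1:i + 1]) ** 2)
                           for m in names}
                    best = min(err, key=err.get)  # most promising recent model
                out.append(preds[best][i])
            return np.array(out)

        pool = {"population": lambda h: 5.0,                  # fixed population mean
                "patient": lambda h: float(np.mean(h[-5:]))}  # recent personal mean
        rng = np.random.default_rng(0)
        y = np.concatenate([rng.normal(5.0, 0.3, 30), rng.normal(7.0, 0.3, 30)])
        print(np.round(switching_forecast(y, pool)[-5:], 2))  # tracks the shift to ~7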

  3. Model Based Temporal Reasoning

    Science.gov (United States)

    Rabin, Marla J.; Spinrad, Paul R.; Fall, Thomas C.

    1988-03-01

    Systems that assess the real world must cope with evidence that is uncertain, ambiguous, and spread over time. Typically, the most important function of an assessment system is to identify when activities are occurring that are unusual or unanticipated. Model based temporal reasoning addresses both of these requirements. The differences among temporal reasoning schemes lie in the methods used to avoid computational intractability. If we had n pieces of data and we wanted to examine how they were related, the worst case would be where we had to examine every subset of these points to see if that subset satisfied the relations. This would be 2^n subsets, which is intractable. Models compress this: if several data points are all compatible with a model, then that model represents all those data points. Data points are then considered related if they lie within the same model or if they lie in models that are related. Models thus address the intractability problem. They also address the problem of determining unusual activities: if the data do not agree with models that are indicated by earlier data, then something out of the norm is taking place. The models can summarize what we know up to that time, so when they are not predicting correctly, either something unusual is happening or we need to revise our models. The model based reasoner developed at Advanced Decision Systems is thus both intuitive and powerful. It is currently being used on one operational system and several prototype systems. It has enough power to be used in domains spanning the spectrum from manufacturing engineering and project management to low-intensity conflict and strategic assessment.

  4. Modeling of the time sharing for lecturers

    Directory of Open Access Journals (Sweden)

    E. Yu. Shakhova

    2017-01-01

    Full Text Available In the context of the modernization of the Russian system of higher education, it is necessary to analyze the working time of university lecturers, taking into account both the basic job functions of a university lecturer and other duties. The mathematical problem of optimal working time planning for university lecturers is presented. A review of the documents and of native and foreign works on the subject is made. Simulation conditions, based on an analysis of the subject area, are defined. Models of the optimal working time sharing of university lecturers («the second half of the day») are developed and implemented in the system MathCAD. Optimal solutions have been obtained. Three problems have been solved: 1) to find the optimal time sharing for «the second half of the day» in a certain position of a university lecturer; 2) to find the optimal time sharing for «the second half of the day» for all positions of university lecturers in view of the established model of academic load differentiation; 3) to find the volume of the non-standardized part of work time in the department for the academic year, taking into account the established model of academic load differentiation, the distribution of faculty numbers across positions, and the optimal time sharing of «the second half of the day» for the university lecturers of the department. Examples of the analysis results are given. The practical application of the research: the developed models can be used when planning the working time of an individual professor in the preparation of the work plan of the university department for the academic year, as well as to conduct a comprehensive analysis of administrative decisions in the development of local university regulations.

  5. Stochastic models for time series

    CERN Document Server

    Doukhan, Paul

    2018-01-01

    This book presents essential tools for modelling non-linear time series. The first part of the book describes the main standard tools of probability and statistics that directly apply to the time series context to obtain a wide range of modelling possibilities. Functional estimation and bootstrap are discussed, and stationarity is reviewed. The second part describes a number of tools from Gaussian chaos and proposes a tour of linear time series models. It goes on to address nonlinearity from polynomial or chaotic models for which explicit expansions are available, then turns to Markov and non-Markov linear models and discusses Bernoulli shifts time series models. Finally, the volume focuses on the limit theory, starting with the ergodic theorem, which is seen as the first step for statistics of time series. It defines the distributional range to obtain generic tools for limit theory under long or short-range dependences (LRD/SRD) and explains examples of LRD behaviours. More general techniques (central limit ...

  6. Stochastic time-dependent vehicle routing problem: Mathematical models and ant colony algorithm

    Directory of Open Access Journals (Sweden)

    Zhengyu Duan

    2015-11-01

    Full Text Available This article addresses the stochastic time-dependent vehicle routing problem. Two mathematical models, named the robust optimal schedule time model and the minimum expected schedule time model, are proposed for the stochastic time-dependent vehicle routing problem; both can guarantee delivery within the time windows of customers. The robust optimal schedule time model only requires the variation range of link travel time, which can be conveniently derived from historical traffic data. In addition, the robust optimal schedule time model, based on the robust optimization method, can be converted into a time-dependent vehicle routing problem. Moreover, an ant colony optimization algorithm is designed to solve the stochastic time-dependent vehicle routing problem. Owing to improvements in the initial solution and the transition probability, the ant colony optimization algorithm shows good convergence performance. Through computational instances and Monte Carlo simulation tests, the robust optimal schedule time model is proved to be better than the minimum expected schedule time model in computational efficiency and in coping with travel time fluctuations. Therefore, the robust optimal schedule time model is applicable in real road networks.

  7. Cluster Based Text Classification Model

    DEFF Research Database (Denmark)

    Nizamani, Sarwat; Memon, Nasrullah; Wiil, Uffe Kock

    2011-01-01

    We propose a cluster based classification model for suspicious email detection and other text classification tasks. The text classification tasks comprise many training examples that require a complex classification model. Using clusters for classification makes the model simpler and increases the accuracy at the same time. The test example is classified using a simpler and smaller model. The training examples in a particular cluster share a common vocabulary. At the time of clustering, we do not take into account the labels of the training examples. After the clusters have been created, the classifier is trained on each cluster, having reduced dimensionality and fewer examples. The experimental results show that the proposed model outperforms the existing classification models for the task of suspicious email detection and topic categorization on the Reuters-21578 and 20 Newsgroups datasets.
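
    A minimal sketch of the pipeline with scikit-learn: cluster the training documents without their labels, train one small classifier per cluster, and route each test example to its nearest cluster's model. The toy corpus, the cluster count, and the use of k-means and logistic regression are illustrative substitutes for the paper's setup.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression

        texts = ["urgent transfer funds verify account password",
                 "quarterly funds report for the finance meeting",
                 "verify your password account suspended click",
                 "team meeting agenda and project schedule",
                 "wire money now account verification required",
                 "project schedule update and meeting notes"] * 8
        y = np.array([1, 0, 1, 0, 1, 0] * 8)      # 1 = suspicious (toy labels)

        vec = TfidfVectorizer()
        X = vec.fit_transform(texts)

        # Cluster without labels, then fit one small model per cluster.
        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
        models = {}
        for c in range(2):
            rows = np.where(km.labels_ == c)[0]
            models[c] = (LogisticRegression().fit(X[rows], y[rows])
                         if len(set(y[rows])) > 1 else int(y[rows][0]))

        # Route a test example to its cluster and use that cluster's model.
        x_new = vec.transform(["please verify the account password and transfer funds"])
        m = models[km.predict(x_new)[0]]
        print(m if isinstance(m, int) else int(m.predict(x_new)[0]))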

  8. Analytical model of SiPM time resolution and order statistics with crosstalk

    International Nuclear Information System (INIS)

    Vinogradov, S.

    2015-01-01

    Time resolution is the most important parameter of photon detectors in a wide range of time-of-flight and time correlation applications within the areas of high energy physics, medical imaging, and others. Silicon photomultipliers (SiPM) have been initially recognized as perfect photon-number-resolving detectors; now they also provide outstanding results in the scintillator timing resolution. However, crosstalk and afterpulsing introduce false secondary non-Poissonian events, and SiPM time resolution models are experiencing significant difficulties with that. This study presents an attempt to develop an analytical model of the timing resolution of an SiPM taking into account statistics of secondary events resulting from a crosstalk. Two approaches have been utilized to derive an analytical expression for time resolution: the first one based on statistics of independent identically distributed detection event times and the second one based on order statistics of these times. The first approach is found to be more straightforward and “analytical-friendly” to model analog SiPMs. Comparisons of coincidence resolving times predicted by the model with the known experimental results from a LYSO:Ce scintillator and a Hamamatsu MPPC are presented

  9. Analytical model of SiPM time resolution and order statistics with crosstalk

    Energy Technology Data Exchange (ETDEWEB)

    Vinogradov, S., E-mail: Sergey.Vinogradov@liverpool.ac.uk [University of Liverpool and Cockcroft Institute, Sci-Tech Daresbury, Keckwick Lane, Warrington WA4 4AD (United Kingdom); P.N. Lebedev Physical Institute of the Russian Academy of Sciences, 119991 Leninskiy Prospekt 53, Moscow (Russian Federation)

    2015-07-01

    Time resolution is the most important parameter of photon detectors in a wide range of time-of-flight and time correlation applications within the areas of high energy physics, medical imaging, and others. Silicon photomultipliers (SiPM) have been initially recognized as perfect photon-number-resolving detectors; now they also provide outstanding results in the scintillator timing resolution. However, crosstalk and afterpulsing introduce false secondary non-Poissonian events, and SiPM time resolution models are experiencing significant difficulties with that. This study presents an attempt to develop an analytical model of the timing resolution of an SiPM taking into account statistics of secondary events resulting from a crosstalk. Two approaches have been utilized to derive an analytical expression for time resolution: the first one based on statistics of independent identically distributed detection event times and the second one based on order statistics of these times. The first approach is found to be more straightforward and “analytical-friendly” to model analog SiPMs. Comparisons of coincidence resolving times predicted by the model with the known experimental results from a LYSO:Ce scintillator and a Hamamatsu MPPC are presented.

  10. Measurement-based reliability/performability models

    Science.gov (United States)

    Hsueh, Mei-Chen

    1987-01-01

    Measurement-based models based on real error data collected on a multiprocessor system are described. Model development from the raw error data to the estimation of cumulative reward is also described. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation, in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.
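
    The semi-Markov point above, that non-exponential holding times change the picture, is easy to demonstrate. The following sketch simulates a hypothetical two-state (normal/error-recovery) semi-Markov process with Weibull holding times; the states and parameters are invented for illustration and have no connection to the IBM 3081 measurements.

```python
# Illustrative sketch (not the IBM 3081 data): a two-state semi-Markov
# process where holding times are Weibull rather than exponential, as the
# measured data suggested. States: 0 = normal operation, 1 = error/recovery.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical holding-time parameters (Weibull shape k, scale lam, hours).
HOLD = {0: (0.7, 50.0),   # k < 1: heavy-tailed normal-operation periods
        1: (1.5, 0.5)}    # recovery times cluster around the scale

def simulate_semi_markov(t_end=1000.0):
    t, state, log = 0.0, 0, []
    while t < t_end:
        k, lam = HOLD[state]
        hold = lam * rng.weibull(k)          # non-exponential holding time
        log.append((t, state, hold))
        t += hold
        state = 1 - state                    # deterministic 2-state switching
    return log

log = simulate_semi_markov()
up = sum(h for _, s, h in log if s == 0)
total = sum(h for _, s, h in log)
print(f"availability ~ {up/total:.3f} over {len(log)} sojourns")
```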

  11. Martingale Regressions for a Continuous Time Model of Exchange Rates

    OpenAIRE

    Guo, Zi-Yi

    2017-01-01

    One of the daunting problems in international finance is the weak explanatory power of existing theories of the nominal exchange rates, the so-called “foreign exchange rate determination puzzle”. We propose a continuous-time model to study the impact of order flow on foreign exchange rates. The model is estimated by a newly developed econometric tool based on a time-change sampling from calendar to volatility time. The estimation results indicate that the effect of order flow on exchange rate...

  12. Splitting Travel Time Based on AFC Data: Estimating Walking, Waiting, Transfer, and In-Vehicle Travel Times in Metro System

    Directory of Open Access Journals (Sweden)

    Yong-Sheng Zhang

    2015-01-01

    Full Text Available The walking, waiting, transfer, and delayed in-vehicle travel times mainly contribute to a route's travel time reliability in the metro system. The automatic fare collection (AFC) system provides huge amounts of smart card records which can be used to estimate the distributions of all these times. A new estimation model based on a Bayesian inference formulation is proposed in this paper by integrating the probability measurement of the OD pair with only one effective route, in which all of these times follow truncated normal distributions. Then, a Markov chain Monte Carlo method is designed to estimate all parameters endogenously. Finally, based on AFC data from the Guangzhou Metro, the estimations show that all parameters can be estimated endogenously and identifiably. Meanwhile, the truncated property of the travel time is significant, and the threshold tested by the surveyed data is reliable. Furthermore, the superiority of the proposed model over the existing model in estimation and forecasting accuracy is also demonstrated.
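
    Two ingredients of the model above, a truncated normal time component and Markov chain Monte Carlo estimation, can be sketched compactly. The toy below pretends the walking times of one OD pair are observed directly and recovers their mean by random-walk Metropolis; the full model instead infers all components jointly from AFC journey totals, and all bounds and parameters here are assumptions.

```python
# Much-simplified sketch of two ingredients of the paper's model: a truncated
# normal for a time component and a Metropolis sampler for its mean.
# The full model infers walk/wait/transfer/in-vehicle times jointly from AFC
# totals; here we pretend walking times for one OD pair are observed directly.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

LOW, HIGH, SIGMA = 0.0, 15.0, 2.0        # truncation bounds (min), known sd
true_mu = 5.0
a0, b0 = (LOW - true_mu) / SIGMA, (HIGH - true_mu) / SIGMA
walk = stats.truncnorm.rvs(a0, b0, loc=true_mu, scale=SIGMA,
                           size=300, random_state=42)

def log_post(mu):
    if not (LOW < mu < HIGH):            # flat prior on (LOW, HIGH)
        return -np.inf
    a, b = (LOW - mu) / SIGMA, (HIGH - mu) / SIGMA
    return stats.truncnorm.logpdf(walk, a, b, loc=mu, scale=SIGMA).sum()

# Random-walk Metropolis on the unknown mean mu.
mu, chain = 8.0, []
lp = log_post(mu)
for _ in range(5000):
    prop = mu + rng.normal(0, 0.2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        mu, lp = prop, lp_prop
    chain.append(mu)
print(f"posterior mean of mu ~ {np.mean(chain[1000:]):.2f} (truth {true_mu})")
```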

  13. Space-time least-squares Petrov-Galerkin projection in nonlinear model reduction.

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Youngsoo [Sandia National Laboratories (SNL-CA), Livermore, CA (United States). Extreme-scale Data Science and Analytics Dept.; Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Carlberg, Kevin Thomas [Sandia National Laboratories (SNL-CA), Livermore, CA (United States). Extreme-scale Data Science and Analytics Dept.

    2017-09-01

    Our work proposes a space-time least-squares Petrov-Galerkin (ST-LSPG) projection method for model reduction of nonlinear dynamical systems. In contrast to typical nonlinear model-reduction methods that first apply Petrov-Galerkin projection in the spatial dimension and subsequently apply time integration to numerically resolve the resulting low-dimensional dynamical system, the proposed method applies projection in space and time simultaneously. To accomplish this, the method first introduces a low-dimensional space-time trial subspace, which can be obtained by computing tensor decompositions of state-snapshot data. The method then computes discrete-optimal approximations in this space-time trial subspace by minimizing the residual arising after time discretization over all space and time in a weighted ℓ2-norm. This norm can be defined to enable complexity reduction (i.e., hyper-reduction) in time, which leads to space-time collocation and space-time GNAT variants of the ST-LSPG method. Advantages of the approach relative to typical spatial-projection-based nonlinear model reduction methods such as Galerkin projection and least-squares Petrov-Galerkin projection include: (1) a reduction of both the spatial and temporal dimensions of the dynamical system, (2) the removal of spurious temporal modes (e.g., unstable growth) from the state space, and (3) error bounds that exhibit slower growth in time. Numerical examples performed on model problems in fluid dynamics demonstrate the ability of the method to generate orders-of-magnitude computational savings relative to spatial-projection-based reduced-order models without sacrificing accuracy.
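
    For a linear full-order model the ST-LSPG idea reduces to a single least-squares solve, which makes for a compact illustration. The sketch below builds a space-time trial basis from the SVD of vectorized training trajectories of a 1-D diffusion model and minimizes the stacked implicit-Euler residual over that basis; hyper-reduction, nonlinearity, and the tensor decompositions of the paper are not reproduced, and all sizes and diffusivities are illustrative.

```python
# Minimal linear sketch of the space-time LSPG idea (the paper treats general
# nonlinear systems with hyper-reduction; none of that is reproduced here).
# Full model: u' = nu * lap u, implicit Euler. The space-time residual is
# linear in the stacked trajectory U, so ST-LSPG becomes one lstsq call.
import numpy as np

N, Nt, dt = 60, 40, 0.01
# 1-D diffusion stencil; training diffusivities below are hypothetical.
lap = -2 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)

def trajectory(nu, u0):
    M = np.eye(N) - dt * nu * lap            # implicit Euler step matrix
    U, u = [], u0
    for _ in range(Nt):
        u = np.linalg.solve(M, u)
        U.append(u)
    return np.concatenate(U)                 # space-time vector, length N*Nt

u0 = np.exp(-((np.linspace(0, 1, N) - 0.5) ** 2) / 0.01)
train = np.stack([trajectory(nu, u0) for nu in (0.8, 1.0, 1.2, 1.5)], axis=1)
Phi = np.linalg.svd(train, full_matrices=False)[0][:, :3]  # ST trial basis

# Assemble the full space-time residual operator L U = b for a test nu:
# row block n enforces M u_n - u_{n-1} = 0, with u_0 given.
nu_test = 1.1
M = np.eye(N) - dt * nu_test * lap
L = np.kron(np.eye(Nt), M) - np.kron(np.eye(Nt, k=-1), np.eye(N))
b = np.concatenate([u0] + [np.zeros(N)] * (Nt - 1))

y = np.linalg.lstsq(L @ Phi, b, rcond=None)[0]  # ST-LSPG: min ||L Phi y - b||
u_true = trajectory(nu_test, u0)
err = np.linalg.norm(Phi @ y - u_true) / np.linalg.norm(u_true)
print(f"relative space-time error with 3 modes: {err:.2e}")
```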

  14. Time-Elastic Generative Model for Acceleration Time Series in Human Activity Recognition.

    Science.gov (United States)

    Munoz-Organero, Mario; Ruiz-Blazquez, Ramona

    2017-02-08

    Body-worn sensors in general and accelerometers in particular have been widely used in order to detect human movements and activities. The execution of each type of movement by each particular individual generates sequences of time series of sensed data from which specific movement related patterns can be assessed. Several machine learning algorithms have been used over windowed segments of sensed data in order to detect such patterns in activity recognition based on intermediate features (either hand-crafted or automatically learned from data). The underlying assumption is that the computed features will capture statistical differences that can properly classify different movements and activities after a training phase based on sensed data. In order to achieve high accuracy and recall rates (and guarantee the generalization of the system to new users), the training data have to contain enough information to characterize all possible ways of executing the activity or movement to be detected. This could imply large amounts of data and a complex and time-consuming training phase, which has been shown to be even more relevant when automatically learning the optimal features to be used. In this paper, we present a novel generative model that is able to generate sequences of time series for characterizing a particular movement based on the time elasticity properties of the sensed data. The model is used to train a stack of auto-encoders in order to learn the particular features able to detect human movements. The results of movement detection using a newly generated database with information on five users performing six different movements are presented. The generalization of results using an existing database is also presented in the paper. The results show that the proposed mechanism is able to obtain acceptable recognition rates (F = 0.77) even in the case of using different people executing a different sequence of movements and using different hardware.

  15. The Design of Model-Based Training Programs

    Science.gov (United States)

    Polson, Peter; Sherry, Lance; Feary, Michael; Palmer, Everett; Alkin, Marty; McCrobie, Dan; Kelley, Jerry; Rosekind, Mark (Technical Monitor)

    1997-01-01

    This paper proposes a model-based training program for the skills necessary to operate advanced avionics systems that incorporate advanced autopilots and flight management systems. The training model is based on a formalism, the operational procedure model, that represents the mission model, the rules, and the functions of a modern avionics system. This formalism has been defined such that it can be understood and shared by pilots, the avionics software, and design engineers. Each element of the software is defined in terms of its intent (What?), the rationale (Why?), and the resulting behavior (How?). The Advanced Computer Tutoring project at Carnegie Mellon University has developed a type of model-based, computer-aided instructional technology called cognitive tutors. They summarize numerous studies showing that training to a specified level of competence can be achieved in one third the time of conventional classroom instruction. We are developing a similar model-based training program for the skills necessary to operate the avionics. The model underlying the instructional program, which simulates the effects of pilots' entries and the behavior of the avionics, is based on the operational procedure model. Pilots are given a series of vertical flightpath management problems. Entries that result in violations, such as failure to make a crossing restriction or violating the speed limits, result in error messages with instruction. At any time, the flightcrew can request suggestions on the appropriate set of actions. A similar and successful training program for basic skills for the FMS on the Boeing 737-300 was developed and evaluated. The results strongly support the claim that the training methodology can be adapted to the cockpit.

  16. A Model-Based Approach to Infer Shifts in Regional Fire Regimes Over Time Using Sediment Charcoal Records

    Science.gov (United States)

    Itter, M.; Finley, A. O.; Hooten, M.; Higuera, P. E.; Marlon, J. R.; McLachlan, J. S.; Kelly, R.

    2016-12-01

    Sediment charcoal records are used in paleoecological analyses to identify individual local fire events and to estimate fire frequency and regional biomass burned at centennial to millennial time scales. Methods to identify local fire events based on sediment charcoal records have been well developed over the past 30 years; however, an integrated statistical framework for fire identification is still lacking. We build upon existing paleoecological methods to develop a hierarchical Bayesian point process model for local fire identification and estimation of fire return intervals. The model is unique in that it combines sediment charcoal records from multiple lakes across a region in a spatially-explicit fashion, leading to estimation of a joint, regional fire return interval in addition to lake-specific local fire frequencies. Further, the model estimates a joint regional charcoal deposition rate free from the effects of local fires that can be used as a measure of regional biomass burned over time. Finally, the hierarchical Bayesian approach allows for tractable error propagation such that estimates of fire return intervals reflect the full range of uncertainty in sediment charcoal records. Specific sources of uncertainty addressed include sediment age models, the separation of local versus regional charcoal sources, and generation of a composite charcoal record. The model is applied to sediment charcoal records from a dense network of lakes in the Yukon Flats region of Alaska. The multivariate joint modeling approach results in improved estimates of regional charcoal deposition, with reduced uncertainty in the identification of individual fire events and local fire return intervals compared to individual-lake approaches. Modeled individual-lake fire return intervals range from 100 to 500 years with a regional interval of roughly 200 years. Regional charcoal deposition to the network of lakes is correlated up to 50 kilometers. Finally, the joint regional charcoal

  17. The problem with time in mixed continuous/discrete time modelling

    NARCIS (Netherlands)

    Rovers, K.C.; Kuper, Jan; Smit, Gerardus Johannes Maria

    The design of cyber-physical systems requires the use of mixed continuous time and discrete time models. Current modelling tools have problems with time transformations (such as a time delay) or multi-rate systems. We will present a novel approach that implements signals as functions of time,

  18. Time-Based Readout of a Silicon Photomultiplier (SiPM) for Time of Flight Positron Emission Tomography (TOF-PET)

    CERN Document Server

    Powolny, F; Brunner, S E; Hillemanns, H; Meyer, T; Garutti, E; Williams, M C S; Auffray, E; Shen, W; Goettlich, M; Jarron, P; Schultz-Coulon, H C

    2011-01-01

    Time of flight (TOF) measurements in positron emission tomography (PET) are very challenging in terms of timing performance, and should ideally achieve less than 100 ps FWHM precision. We present a time-based differential technique to read out silicon photomultipliers (SiPMs) which has less than 20 ps FWHM electronic jitter. The novel readout is a fast front end circuit (NINO) based on a first stage differential current mode amplifier with 20 Ω input resistance. Therefore the amplifier inputs are connected differentially to the SiPM's anode and cathode ports. The leading edge of the output signal provides the time information, while the trailing edge provides the energy information. Based on a Monte Carlo photon-generation model, HSPICE simulations were run with a 3 × 3 mm² SiPM model, read out with a differential current amplifier. The results of these simulations are presented here and compared with experimental data obtained with a 3 × 3 × 15 mm³ LSO crystal coupled to a SiPM. The measured time coi...

  19. Constitutive modeling for uniaxial time-dependent ratcheting of SS304 stainless steel

    International Nuclear Information System (INIS)

    Kan Qianhua; Kang Guozheng; Zhang Juan

    2007-01-01

    Based on the experimental results of uniaxial time-dependent ratcheting behavior of SS304 stainless steel at room temperature and 973K, a new time-dependent constitutive model was proposed. The model describes the time-dependent ratcheting by adding a static/thermal recovery into the Abdel-Karim-Ohno non-linear kinematic hardening rule. The capability of the model to describe the time-dependent ratcheting was discussed by comparing the simulations with the corresponding experimental results. It is shown that the revised unified viscoplastic model can simulate the time-dependent ratcheting reasonably both at room and high temperatures. (authors)

  20. Time series modeling of live-cell shape dynamics for image-based phenotypic profiling.

    Science.gov (United States)

    Gordonov, Simon; Hwang, Mun Kyung; Wells, Alan; Gertler, Frank B; Lauffenburger, Douglas A; Bathe, Mark

    2016-01-01

    Live-cell imaging can be used to capture spatio-temporal aspects of cellular responses that are not accessible to fixed-cell imaging. As the use of live-cell imaging continues to increase, new computational procedures are needed to characterize and classify the temporal dynamics of individual cells. For this purpose, here we present the general experimental-computational framework SAPHIRE (Stochastic Annotation of Phenotypic Individual-cell Responses) to characterize phenotypic cellular responses from time series imaging datasets. Hidden Markov modeling is used to infer and annotate morphological state and state-switching properties from image-derived cell shape measurements. Time series modeling is performed on each cell individually, making the approach broadly useful for analyzing asynchronous cell populations. Two-color fluorescent cells simultaneously expressing actin and nuclear reporters enabled us to profile temporal changes in cell shape following pharmacological inhibition of cytoskeleton-regulatory signaling pathways. Results are compared with existing approaches conventionally applied to fixed-cell imaging datasets, and indicate that time series modeling captures heterogeneous dynamic cellular responses that can improve drug classification and offer additional important insight into mechanisms of drug action. The software is available at http://saphire-hcs.org.
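
    The core modeling step, a hidden Markov model over per-cell shape time series, can be sketched with off-the-shelf tools. The example below assumes the third-party hmmlearn package is available and uses fake two-feature (area, eccentricity) trajectories; the feature set, state count, and data are illustrative stand-ins, not those of SAPHIRE.

```python
# Hedged sketch of the SAPHIRE-style ingredient: an HMM over a single cell's
# shape time series. Uses the third-party `hmmlearn` package
# (pip install hmmlearn); features and state count are illustrative.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(4)

# Fake single-cell trajectory: two morphological states with different
# (area, eccentricity) statistics, switching halfway through the movie.
t1 = rng.normal([1.0, 0.2], 0.05, (60, 2))
t2 = rng.normal([1.6, 0.7], 0.05, (60, 2))
X = np.vstack([t1, t2])

model = GaussianHMM(n_components=2, covariance_type="diag", n_iter=200,
                    random_state=0)
model.fit(X)                        # one cell => one unbroken sequence
states = model.predict(X)           # per-frame morphological state annotation
print("inferred switch near frame", int(np.argmax(np.diff(states) != 0)) + 1)
```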

  1. Evaluating perceptual integration: uniting response-time- and accuracy-based methodologies.

    Science.gov (United States)

    Eidels, Ami; Townsend, James T; Hughes, Howard C; Perry, Lacey A

    2015-02-01

    This investigation brings together a response-time system identification methodology (e.g., Townsend & Wenger Psychonomic Bulletin & Review 11, 391-418, 2004a) and an accuracy methodology, intended to assess models of integration across stimulus dimensions (features, modalities, etc.) that were proposed by Shaw and colleagues (e.g., Mulligan & Shaw Perception & Psychophysics 28, 471-478, 1980). The goal was to theoretically examine these separate strategies and to apply them conjointly to the same set of participants. The empirical phases were carried out within an extension of an established experimental design called the double factorial paradigm (e.g., Townsend & Nozawa Journal of Mathematical Psychology 39, 321-359, 1995). That paradigm, based on response times, permits assessments of architecture (parallel vs. serial processing), stopping rule (exhaustive vs. minimum time), and workload capacity, all within the same blocks of trials. The paradigm introduced by Shaw and colleagues uses a statistic formally analogous to that of the double factorial paradigm, but based on accuracy rather than response times. We demonstrate that the accuracy measure cannot discriminate between parallel and serial processing. Nonetheless, the class of models supported by the accuracy data possesses a suitable interpretation within the same set of models supported by the response-time data. The supported model, consistent across individuals, is parallel and has limited capacity, with the participants employing the appropriate stopping rule for the experimental setting.

  2. Forecast of useful energy for the TIMES-Norway model

    International Nuclear Information System (INIS)

    Rosenberg, Eva

    2012-01-01

    A regional forecast of useful energy demand in seven Norwegian regions is calculated based on earlier work with a national forecast. This forecast will be input to the energy system model TIMES-Norway, and analyses will result in forecasts of energy use of different energy carriers with varying external conditions (not included in this report). The forecast presented here describes the methodology used and the resulting forecast of useful energy. It is based on information on the long-term development of the economy from the Ministry of Finance, projections of population growth from Statistics Norway, and several other studies. The definition of a forecast of useful energy demand is not absolute, but depends on the purpose. One has to be careful not to include parts that belong in the energy system model, such as energy efficiency measures. In the forecast presented here, the influence of new building regulations, the prohibition of the production of incandescent light bulbs in the EU, etc. are included. Other energy efficiency measures such as energy management, heat pumps, tightening of leaks, etc. are modelled as technologies to invest in and are included in the TIMES-Norway model. The elasticity between different energy carriers is handled by the TIMES-Norway model, and some elasticity is also included as the possibility to invest in energy efficiency measures. The forecast results in an increase of total useful energy of 18% from 2006 to 2050. The growth is expected to be highest in the regions South and East. Industry remains at a constant level in the base case, and increased or reduced energy demand is analysed as different scenarios with the TIMES-Norway model. The most important driver is population growth; together with the assumptions made, it results in increased useful energy demand in the household and service sectors of 25% and 57%, respectively. (au)

  3. Forecast of useful energy for the TIMES-Norway model

    Energy Technology Data Exchange (ETDEWEB)

    Rosenberg, Eva

    2012-07-25

    A regional forecast of useful energy demand in seven Norwegian regions is calculated based on earlier work with a national forecast. This forecast will be input to the energy system model TIMES-Norway, and analyses will result in forecasts of energy use of different energy carriers with varying external conditions (not included in this report). The forecast presented here describes the methodology used and the resulting forecast of useful energy. It is based on information on the long-term development of the economy from the Ministry of Finance, projections of population growth from Statistics Norway, and several other studies. The definition of a forecast of useful energy demand is not absolute, but depends on the purpose. One has to be careful not to include parts that belong in the energy system model, such as energy efficiency measures. In the forecast presented here, the influence of new building regulations, the prohibition of the production of incandescent light bulbs in the EU, etc. are included. Other energy efficiency measures such as energy management, heat pumps, tightening of leaks, etc. are modelled as technologies to invest in and are included in the TIMES-Norway model. The elasticity between different energy carriers is handled by the TIMES-Norway model, and some elasticity is also included as the possibility to invest in energy efficiency measures. The forecast results in an increase of total useful energy of 18% from 2006 to 2050. The growth is expected to be highest in the regions South and East. Industry remains at a constant level in the base case, and increased or reduced energy demand is analysed as different scenarios with the TIMES-Norway model. The most important driver is population growth; together with the assumptions made, it results in increased useful energy demand in the household and service sectors of 25% and 57%, respectively. (au)

  4. High-order fuzzy time-series based on multi-period adaptation model for forecasting stock markets

    Science.gov (United States)

    Chen, Tai-Liang; Cheng, Ching-Hsue; Teoh, Hia-Jong

    2008-02-01

    Stock investors usually make their short-term investment decisions according to recent stock information such as late market news, technical analysis reports, and price fluctuations. To reflect these short-term factors which impact stock prices, this paper proposes a comprehensive fuzzy time-series model, which factors linear relationships between recent periods of stock prices and fuzzy logical relationships (nonlinear relationships) mined from the time series into the forecasting process. In the empirical analysis, the TAIEX (Taiwan Stock Exchange Capitalization Weighted Stock Index) and HSI (Hang Seng Index) are employed as experimental datasets, and four recent fuzzy time-series models, Chen’s (1996), Yu’s (2005), Cheng’s (2006) and Chen’s (2007), are used as comparison models. In addition, to compare with a conventional statistical method, the method of least squares is utilized to estimate the auto-regressive models of the testing periods within the databases. The performance comparisons indicate that the multi-period adaptation model proposed in this paper can effectively improve the forecasting performance of conventional fuzzy time-series models, which only factor fuzzy logical relationships into the forecasting process. Both the traditional statistical method and the proposed model reveal that stock price patterns in the Taiwan and Hong Kong stock markets are short-term.
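
    A first-order fuzzy time-series forecaster in the spirit of Chen (1996), one of the comparison models above, fits in a few lines and clarifies the fuzzification and fuzzy-logical-relationship steps. The multi-period adaptation proposed by the paper, and the TAIEX/HSI data, are not reproduced; the price series below is invented.

```python
# Compact sketch of a first-order fuzzy time-series forecaster in the spirit
# of Chen (1996); the paper's multi-period adaptation is not reproduced.
import numpy as np

def fuzzy_ts_forecast(series, n_intervals=7):
    lo, hi = min(series), max(series)
    edges = np.linspace(lo, hi, n_intervals + 1)
    mids = (edges[:-1] + edges[1:]) / 2
    # Fuzzify: map each observation to the interval (fuzzy set) containing it.
    labels = np.clip(np.searchsorted(edges, series, side="right") - 1,
                     0, n_intervals - 1)
    # Fuzzy logical relationship groups: A_i -> {A_j, ...}
    flrg = {}
    for a, b in zip(labels[:-1], labels[1:]):
        flrg.setdefault(a, set()).add(b)
    # Forecast: mean of the midpoints of the last state's successor group.
    successors = flrg.get(labels[-1], {labels[-1]})
    return float(np.mean([mids[j] for j in successors]))

prices = [100, 102, 101, 105, 107, 106, 110, 108, 111, 113]  # invented data
print(f"next-step forecast: {fuzzy_ts_forecast(prices):.1f}")
```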

  5. Doubly stochastic Poisson process models for precipitation at fine time-scales

    Science.gov (United States)

    Ramesh, Nadarajah I.; Onof, Christian; Xie, Dichao

    2012-09-01

    This paper considers a class of stochastic point process models, based on doubly stochastic Poisson processes, in the modelling of rainfall. We examine the application of this class of models, a neglected alternative to the widely-known Poisson cluster models, in the analysis of fine time-scale rainfall intensity. These models are mainly used to analyse tipping-bucket raingauge data from a single site but an extension to multiple sites is illustrated which reveals the potential of this class of models to study the temporal and spatial variability of precipitation at fine time-scales.
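
    A doubly stochastic Poisson process is straightforward to simulate once the random intensity is specified. The sketch below uses the simplest choice, a two-state Markov-modulated rate (dry versus storm), to generate synthetic tip times; the rates and sojourn parameters are illustrative, not fitted to raingauge data.

```python
# Minimal sketch of a doubly stochastic Poisson process: rainfall arrivals
# whose intensity is itself random, here a two-state Markov-modulated rate.
# All parameter values are illustrative, not fitted tipping-bucket data.
import numpy as np

rng = np.random.default_rng(5)

RATES = (0.05, 3.0)       # events per hour in state 0 (dry) and 1 (storm)
SWITCH = (1/30.0, 1/4.0)  # state-leaving rates: mean 30 h dry, 4 h storm

def simulate_mmpp(t_end=500.0):
    t, state, events = 0.0, 0, []
    while t < t_end:
        sojourn = rng.exponential(1.0 / SWITCH[state])
        # Conditional on the count, Poisson arrivals fall uniformly in time.
        n = rng.poisson(RATES[state] * sojourn)
        events.extend(t + rng.uniform(0, sojourn, n))
        t += sojourn
        state = 1 - state
    return np.sort(events)

tips = simulate_mmpp()
print(f"{len(tips)} bucket tips in 500 h; "
      f"mean rate {len(tips)/500:.2f} per hour")
```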

  6. Multi-disciplinary techniques for understanding time-varying space-based imagery

    Science.gov (United States)

    Casasent, D.; Sanderson, A.; Kanade, T.

    1984-06-01

    A multidisciplinary program for space-based image processing is reported. This project combines optical and digital processing techniques and pattern recognition, image understanding and artificial intelligence methodologies. Time change image processing was recognized as the key issue to be addressed. Three time change scenarios were defined based on the frame rate of the data change. This report details the recent research on: various statistical and deterministic image features, recognition of sub-pixel targets in time varying imagery, and 3-D object modeling and recognition.

  7. Vibration analysis diagnostics by continuous-time models: A case study

    International Nuclear Information System (INIS)

    Pedregal, Diego J.; Carmen Carnero, Ma.

    2009-01-01

    In this paper a forecasting system in condition monitoring is developed based on vibration signals in order to improve the diagnosis of a critical piece of equipment at an industrial plant. The system is based on statistical models capable of forecasting the state of the equipment, combined with a cost model that defines the time of preventive replacement as the point at which the minimum of the expected cost per unit of time is reached in the future. The most relevant features of the system are that (i) it is developed for bivariate signals; (ii) the statistical models are set up in a continuous-time framework, due to the specific nature of the data; and (iii) it has been developed from scratch for a real case study and may be generalised to other pieces of equipment. The system is thoroughly tested on the equipment available, showing its correctness with the data in a statistical sense and its capability of producing sensible results for the condition monitoring programme.

  8. Vibration analysis diagnostics by continuous-time models: A case study

    Energy Technology Data Exchange (ETDEWEB)

    Pedregal, Diego J. [Escuela Tecnica Superior de Ingenieros Industriales, Universidad de Castilla-La Mancha, 13071 Ciudad Real (Spain)], E-mail: Diego.Pedregal@uclm.es; Carmen Carnero, Ma. [Escuela Tecnica Superior de Ingenieros Industriales, Universidad de Castilla-La Mancha, 13071 Ciudad Real (Spain)], E-mail: Carmen.Carnero@uclm.es

    2009-02-15

    In this paper a forecasting system in condition monitoring is developed based on vibration signals in order to improve the diagnosis of a critical piece of equipment at an industrial plant. The system is based on statistical models capable of forecasting the state of the equipment, combined with a cost model that defines the time of preventive replacement as the point at which the minimum of the expected cost per unit of time is reached in the future. The most relevant features of the system are that (i) it is developed for bivariate signals; (ii) the statistical models are set up in a continuous-time framework, due to the specific nature of the data; and (iii) it has been developed from scratch for a real case study and may be generalised to other pieces of equipment. The system is thoroughly tested on the equipment available, showing its correctness with the data in a statistical sense and its capability of producing sensible results for the condition monitoring programme.
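
    The cost component described above is essentially the classical age-replacement trade-off, which admits a short worked example. The sketch below assumes a Weibull failure-time model and scans for the preventive replacement time T minimizing the expected cost per unit time, C(T) = (cp·R(T) + cf·F(T)) / ∫0..T R(u) du; all costs and Weibull parameters are assumptions, not outputs of the paper's vibration models.

```python
# Worked sketch of the cost ingredient: choose the preventive-replacement
# time T minimizing expected cost per unit time (classic age replacement).
# All parameter values are assumptions, not the paper's forecasts.
import numpy as np

k, lam = 2.5, 100.0          # Weibull shape (>1: wear-out) and scale, hours
cp, cf = 1.0, 10.0           # preventive vs. failure replacement cost

t = np.linspace(1, 300, 3000)
R = np.exp(-(t / lam) ** k)                       # survival function
F = 1.0 - R
# Expected cycle length: integral of R from 0 to T (cumulative trapezoid).
cycle = np.concatenate(([0.0],
                        np.cumsum((R[1:] + R[:-1]) / 2 * np.diff(t))))
cost_rate = (cp * R + cf * F) / np.maximum(cycle, 1e-9)

T_opt = t[np.argmin(cost_rate)]
print(f"optimal preventive replacement at T ~ {T_opt:.0f} h, "
      f"cost rate {cost_rate.min():.4f} per hour")
```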

  9. Model based design introduction: modeling game controllers to microprocessor architectures

    Science.gov (United States)

    Jungwirth, Patrick; Badawy, Abdel-Hameed

    2017-04-01

    We present an introduction to model based design. Model based design is a visual representation, generally a block diagram, used to model and incrementally develop a complex system. Model based design is a commonly used design methodology for digital signal processing, control systems, and embedded systems. Model based design's philosophy is to solve a problem a step at a time. The approach can be compared to a series of steps converging to a solution. A block diagram simulation tool allows a design to be simulated with real-world measurement data. For example, if an analog control system is being upgraded to a digital control system, the analog sensor input signals can be recorded. The digital control algorithm can be simulated with the real-world sensor data. The output from the simulated digital control system can then be compared to the old analog-based control system. Model based design can be compared to Agile software development. The Agile software development goal is to develop working software in incremental steps, and progress is measured in completed and tested code units. Progress in model based design is measured in completed and tested blocks. We present a concept for a video game controller and then use model based design to iterate the design towards a working system. We also describe a model based design effort to develop an OS Friendly Microprocessor Architecture based on RISC-V.

  10. Modeling the impact of forecast-based regime switches on macroeconomic time series

    NARCIS (Netherlands)

    K. Bel (Koen); R. Paap (Richard)

    2013-01-01

    Forecasts of key macroeconomic variables may lead to policy changes of governments, central banks and other economic agents. Policy changes in turn lead to structural changes in macroeconomic time series models. To describe this phenomenon we introduce a logistic smooth transition

  11. Real Time Animation of Trees Based on BBSC in Computer Games

    Directory of Open Access Journals (Sweden)

    Xuefeng Ao

    2009-01-01

    Full Text Available Researchers in the field of computer games usually find it difficult to simulate the motion of actual 3D model trees because the tree model itself has a very complicated structure, and many sophisticated factors need to be considered during the simulation. Though there are some works on simulating 3D trees and their motion, few of them are used in computer games due to the high demand for real-time performance in computer games. In this paper, an approach to animating trees in computer games based on a novel tree model representation, Ball B-Spline Curves (BBSCs), is proposed. By taking advantage of the good features of the BBSC-based model, physical simulation of the motion of leafless trees in blowing wind becomes easier and more efficient. The method can generate realistic 3D tree animation in real time, which meets the high requirement for real time in computer games.

  12. Analog computing for a new nuclear reactor dynamic model based on a time-dependent second order form of the neutron transport equation

    International Nuclear Information System (INIS)

    Pirouzmand, Ahmad; Hadad, Kamal; Suh, Kune Y.

    2011-01-01

    This paper considers the concept of analog computing based on a cellular neural network (CNN) paradigm to simulate nuclear reactor dynamics using a time-dependent second order form of the neutron transport equation. Instead of solving nuclear reactor dynamic equations numerically, which is time-consuming and suffers from such weaknesses as vulnerability to transient phenomena, accumulation of round-off errors and floating-point overflows, use is made of a new method based on a cellular neural network. The state-of-the-art shows the CNN as being an alternative solution to the conventional numerical computation method. Indeed CNN is an analog computing paradigm that performs ultra-fast calculations and provides accurate results. In this study use is made of the CNN model to simulate the space-time response of scalar flux distribution in steady state and transient conditions. The CNN model also is used to simulate step perturbation in the core. The accuracy and capability of the CNN model are examined in 2D Cartesian geometry for two fixed source problems, a mini-BWR assembly, and a TWIGL Seed/Blanket problem. We also use the CNN model concurrently for a typical small PWR assembly to simulate the effect of temperature feedback, poisons, and control rods on the scalar flux distribution.

  13. GPU-accelerated 3-D model-based tracking

    International Nuclear Information System (INIS)

    Brown, J Anthony; Capson, David W

    2010-01-01

    Model-based approaches to tracking the pose of a 3-D object in video are effective but computationally demanding. While statistical estimation techniques, such as the particle filter, are often employed to minimize the search space, real-time performance remains unachievable on current generation CPUs. Recent advances in graphics processing units (GPUs) have brought massively parallel computational power to the desktop environment and powerful developer tools, such as NVIDIA Compute Unified Device Architecture (CUDA), have provided programmers with a mechanism to exploit it. NVIDIA GPUs' single-instruction multiple-thread (SIMT) programming model is well-suited to many computer vision tasks, particularly model-based tracking, which requires several hundred 3-D model poses to be dynamically configured, rendered, and evaluated against each frame in the video sequence. Using 6 degree-of-freedom (DOF) rigid hand tracking as an example application, this work harnesses consumer-grade GPUs to achieve real-time, 3-D model-based, markerless object tracking in monocular video.

  14. An ELM-Based Approach for Estimating Train Dwell Time in Urban Rail Traffic

    Directory of Open Access Journals (Sweden)

    Wen-jun Chu

    2015-01-01

    Full Text Available Dwell time estimation plays an important role in the operation of urban rail systems. For this specific problem, a range of models based on either polynomial regression or microsimulation have been proposed. However, the generalization performance of polynomial regression models is limited and the accuracy of existing microsimulation models is unstable. In this paper, a new dwell time estimation model based on the extreme learning machine (ELM) is proposed. The underlying factors that may affect urban rail dwell time are analyzed first. Then, the relationships among the different factors are extracted and modeled by ELM neural networks, on the basis of which an overall estimation model is proposed. Finally, a set of observed data from the Beijing subway is used to illustrate the proposed method and verify its overall performance.
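
    The ELM itself is simple enough to sketch in full: a random, untrained hidden layer followed by a closed-form least-squares fit of the output weights. The features and dwell-time relationship below are hypothetical stand-ins for the paper's factors, not the Beijing subway data.

```python
# Minimal extreme learning machine (ELM) regressor sketch: random hidden
# layer, output weights by least squares. The dwell-time features below are
# hypothetical stand-ins (boarding/alighting counts), not the paper's data.
import numpy as np

rng = np.random.default_rng(6)

class ELM:
    def __init__(self, n_hidden=50):
        self.n_hidden = n_hidden

    def fit(self, X, y):
        d = X.shape[1]
        self.W = rng.normal(size=(d, self.n_hidden))  # random, never trained
        self.b = rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)              # hidden activations
        self.beta = np.linalg.lstsq(H, y, rcond=None)[0]  # closed-form output
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# Synthetic example: dwell time grows nonlinearly with passenger exchange.
X = rng.uniform(0, 1, (500, 2))               # [boarding, alighting], scaled
y = 20 + 15 * np.sqrt(X[:, 0] + X[:, 1]) + rng.normal(0, 1, 500)
model = ELM().fit(X[:400], y[:400])
rmse = np.sqrt(np.mean((model.predict(X[400:]) - y[400:]) ** 2))
print(f"test RMSE ~ {rmse:.2f} s")
```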

  15. Web-Based Real Time Earthquake Forecasting and Personal Risk Management

    Science.gov (United States)

    Rundle, J. B.; Holliday, J. R.; Graves, W. R.; Turcotte, D. L.; Donnellan, A.

    2012-12-01

    Earthquake forecasts have been computed by a variety of countries and economies world-wide for over two decades. For the most part, forecasts have been computed for insurance, reinsurance and underwriters of catastrophe bonds. One example is the Working Group on California Earthquake Probabilities, which has been responsible for the official California earthquake forecast since 1988. However, in a time of increasingly severe global financial constraints, we are now moving inexorably towards personal risk management, wherein mitigating risk is becoming the responsibility of individual members of the public. Under these circumstances, open access to a variety of web-based tools, utilities and information is a necessity. Here we describe a web-based system that has been operational since 2009 at www.openhazards.com and www.quakesim.org. Models for earthquake physics and forecasting require input data, along with model parameters. The models we consider are the Natural Time Weibull (NTW) model for regional earthquake forecasting, together with models for activation and quiescence. These models use small earthquakes ("seismicity-based models") to forecast the occurrence of large earthquakes, either through varying rates of small earthquake activity, or via an accumulation of this activity over time. These approaches use data-mining algorithms combined with the ANSS earthquake catalog. The basic idea is to compute large earthquake probabilities using the number of small earthquakes that have occurred in a region since the last large earthquake. Each of these approaches has computational challenges associated with computing forecast information in real time. Using 25 years of data from the ANSS California-Nevada catalog of earthquakes, we show that real-time forecasting is possible at a grid scale of 0.1°. We have analyzed the performance of these models using Reliability/Attributes and standard Receiver Operating Characteristic (ROC) tests. We show how the Reliability and

  16. Comprehensive model of annual plankton succession based on the whole-plankton time series approach.

    Directory of Open Access Journals (Sweden)

    Jean-Baptiste Romagnan

    Full Text Available Ecological succession provides a widely accepted description of seasonal changes in phytoplankton and mesozooplankton assemblages in the natural environment, but concurrent changes in smaller (i.e. microbes and larger (i.e. macroplankton organisms are not included in the model because plankton ranging from bacteria to jellies are seldom sampled and analyzed simultaneously. Here we studied, for the first time in the aquatic literature, the succession of marine plankton in the whole-plankton assemblage that spanned 5 orders of magnitude in size from microbes to macroplankton predators (not including fish or fish larvae, for which no consistent data were available. Samples were collected in the northwestern Mediterranean Sea (Bay of Villefranche weekly during 10 months. Simultaneously collected samples were analyzed by flow cytometry, inverse microscopy, FlowCam, and ZooScan. The whole-plankton assemblage underwent sharp reorganizations that corresponded to bottom-up events of vertical mixing in the water-column, and its development was top-down controlled by large gelatinous filter feeders and predators. Based on the results provided by our novel whole-plankton assemblage approach, we propose a new comprehensive conceptual model of the annual plankton succession (i.e. whole plankton model characterized by both stepwise stacking of four broad trophic communities from early spring through summer, which is a new concept, and progressive replacement of ecological plankton categories within the different trophic communities, as recognised traditionally.

  17. Continuous time Boolean modeling for biological signaling: application of Gillespie algorithm.

    Science.gov (United States)

    Stoll, Gautier; Viara, Eric; Barillot, Emmanuel; Calzone, Laurence

    2012-08-29

    Mathematical modeling is used as a Systems Biology tool to answer biological questions, and more precisely, to validate a network that describes biological observations and predict the effect of perturbations. This article presents an algorithm for modeling biological networks in a discrete framework with continuous time. There exist two major types of mathematical modeling approaches: (1) quantitative modeling, representing various chemical species concentrations by real numbers, mainly based on differential equations and chemical kinetics formalism; and (2) qualitative modeling, representing chemical species concentrations or activities by a finite set of discrete values. Both approaches answer particular (and often different) biological questions. The qualitative modeling approach permits a simple and less detailed description of the biological systems, efficiently describes stable state identification, but remains inconvenient in describing the transient kinetics leading to these states. In this context, time is represented by discrete steps. Quantitative modeling, on the other hand, can describe more accurately the dynamical behavior of biological processes as it follows the evolution of concentrations or activities of chemical species as a function of time, but requires an important amount of information on parameters difficult to find in the literature. Here, we propose a modeling framework based on a qualitative approach that is intrinsically continuous in time. The algorithm presented in this article fills the gap between qualitative and quantitative modeling. It is based on a continuous time Markov process applied on a Boolean state space. In order to describe the temporal evolution of the biological process we wish to model, we explicitly specify the transition rates for each node. For that purpose, we built a language that can be seen as a generalization of Boolean equations. Mathematically, this approach can be translated into a set of ordinary differential
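
    The algorithmic core, Gillespie's method on a Boolean state space with user-specified transition rates, can be sketched directly. The three-node network and the rates below are hypothetical, not the article's signaling examples.

```python
# Minimal sketch of the continuous-time Boolean idea: Gillespie's algorithm
# over a Boolean state space. The 3-node network and rates are hypothetical.
import numpy as np

rng = np.random.default_rng(7)

def targets(x):
    """Boolean logic: the value each node 'wants' given the current state."""
    a, b, c = x
    return (int(not c),        # A is inhibited by C
            int(a),            # B is activated by A
            int(a and b))      # C needs both A and B

RATE_UP, RATE_DOWN = 1.0, 0.5  # transition rates for 0->1 and 1->0 flips

def gillespie(x=(1, 0, 0), t_end=20.0):
    x, t, path = list(x), 0.0, []
    while t < t_end:
        goal = targets(x)
        # A node is "unstable" if its current value differs from its logic.
        rates = [(i, RATE_UP if goal[i] else RATE_DOWN)
                 for i in range(3) if x[i] != goal[i]]
        if not rates:                      # stable state reached
            break
        total = sum(r for _, r in rates)
        t += rng.exponential(1.0 / total)  # waiting time to next transition
        probs = [r / total for _, r in rates]
        i = rates[rng.choice(len(rates), p=probs)][0]
        x[i] = 1 - x[i]
        path.append((t, tuple(x)))
    return path

for t, state in gillespie():
    print(f"t={t:5.2f}  state={state}")
```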

  18. Self-calibration for lab-μCT using space-time regularized projection-based DVC and model reduction

    Science.gov (United States)

    Jailin, C.; Buljac, A.; Bouterf, A.; Poncelet, M.; Hild, F.; Roux, S.

    2018-02-01

    An online calibration procedure for x-ray lab-CT is developed using projection-based digital volume correlation. An initial reconstruction of the sample is positioned in the 3D space for every angle so that its projection matches the initial one. This procedure allows a space-time displacement field to be estimated for the scanned sample, which is regularized with (i) rigid body motions in space and (ii) modal time shape functions computed using model reduction techniques (i.e. proper generalized decomposition). The result is an accurate identification of the position of the sample adapted for each angle, which may deviate from the desired perfect rotation required for standard reconstructions. An application of this procedure to a 4D in situ mechanical test is shown. The proposed correction leads to a much improved tomographic reconstruction quality.

  19. A real-time, dynamic early-warning model based on uncertainty analysis and risk assessment for sudden water pollution accidents.

    Science.gov (United States)

    Hou, Dibo; Ge, Xiaofan; Huang, Pingjie; Zhang, Guangxin; Loáiciga, Hugo

    2014-01-01

    A real-time, dynamic, early-warning model (EP-risk model) is proposed to cope with sudden water quality pollution accidents affecting downstream areas with raw-water intakes (denoted as EPs). The EP-risk model outputs the risk level of water pollution at the EP by calculating the likelihood of pollution and evaluating the impact of pollution. A generalized form of the EP-risk model for river pollution accidents based on Monte Carlo simulation, the analytic hierarchy process (AHP) method, and the risk matrix method is proposed. The likelihood of water pollution at the EP is calculated by the Monte Carlo method, which is used for uncertainty analysis of pollutants' transport in rivers. The impact of water pollution at the EP is evaluated by expert knowledge and the results of Monte Carlo simulation based on the analytic hierarchy process. The final risk level of water pollution at the EP is determined by the risk matrix method. A case study of the proposed method is illustrated with a phenol spill accident in China.

  20. A meshless EFG-based algorithm for 3D deformable modeling of soft tissue in real-time.

    Science.gov (United States)

    Abdi, Elahe; Farahmand, Farzam; Durali, Mohammad

    2012-01-01

    The meshless element-free Galerkin method was generalized and an algorithm was developed for 3D dynamic modeling of deformable bodies in real time. The efficacy of the algorithm was investigated in a 3D linear viscoelastic model of human spleen subjected to a time-varying compressive force exerted by a surgical grasper. The model remained stable in spite of the considerably large deformations occurred. There was a good agreement between the results and those of an equivalent finite element model. The computational cost, however, was much lower, enabling the proposed algorithm to be effectively used in real-time applications.

  1. Timing properties and pulse shape discrimination of LAB-based liquid scintillator

    International Nuclear Information System (INIS)

    Li Xiaobo; Xiao Hualin; Cao Jun; Li Jin; Heng Yuekun; Ruan Xichao

    2011-01-01

    Linear Alkyl Benzene (LAB) is a promising liquid scintillator solvent in neutrino experiments because it has many appealing properties. The timing properties of LAB-based liquid scintillator have been studied through ultraviolet and ionization excitation in this study. The decay time of LAB, PPO and bis-MSB is found to be 48.6 ns, 1.55 ns and 1.5 ns, respectively. A model can describe the absorption and re-emission process between PPO and bis-MSB perfectly. The energy transfer time between LAB and PPO with different concentrations can be obtained via another model. We also show that the LAB-based liquid scintillator has good (n, γ) and (α, γ) discrimination power. (authors)

  2. Modelling the thermal quenching mechanism in quartz based on time-resolved optically stimulated luminescence

    International Nuclear Information System (INIS)

    Pagonis, V.; Ankjaergaard, C.; Murray, A.S.; Jain, M.; Chen, R.; Lawless, J.; Greilich, S.

    2010-01-01

    This paper presents a new numerical model for thermal quenching in quartz, based on the previously suggested Mott-Seitz mechanism. In the model electrons from a dosimetric trap are raised by optical or thermal stimulation into the conduction band, followed by an electronic transition from the conduction band into an excited state of the recombination center. Subsequently electrons in this excited state undergo either a direct radiative transition into a recombination center, or a competing thermally assisted non-radiative process into the ground state of the recombination center. As the temperature of the sample is increased, more electrons are removed from the excited state via the non-radiative pathway. This reduction in the number of available electrons leads to both a decrease of the intensity of the luminescence signal and to a simultaneous decrease of the luminescence lifetime. Several simulations are carried out of time-resolved optically stimulated luminescence (TR-OSL) experiments, in which the temperature dependence of luminescence lifetimes in quartz is studied as a function of the stimulation temperature. Good quantitative agreement is found between the simulation results and new experimental data obtained using a single-aliquot procedure on a sedimentary quartz sample.
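
    The Mott-Seitz competition underlying the model gives simple closed forms for how lifetime and efficiency fall with temperature, which is worth a short worked example. In the sketch below the radiative rate competes with a thermally assisted escape ν·exp(-ΔE/kT); the parameter values are illustrative, not the paper's fitted quartz values.

```python
# Worked sketch of the Mott-Seitz competition the model builds on: the
# radiative rate Gamma_r competes with a thermally assisted non-radiative
# escape nu*exp(-E/kT), so lifetime and efficiency both drop with temperature.
# E, nu, and Gamma_r below are illustrative, not fitted quartz values.
import numpy as np

K_B = 8.617e-5                 # Boltzmann constant, eV/K
GAMMA_R = 1.0 / 42e-6          # radiative rate (~42 us lifetime), 1/s
NU, E = 1e13, 0.64             # attempt frequency (1/s), activation energy (eV)

def lifetime(T):
    """Observed luminescence lifetime tau(T) = 1/(Gamma_r + nu exp(-E/kT))."""
    return 1.0 / (GAMMA_R + NU * np.exp(-E / (K_B * T)))

def efficiency(T):
    """Fraction decaying radiatively: eta(T) = Gamma_r * tau(T)."""
    return GAMMA_R * lifetime(T)

for T in (300, 400, 450, 500):
    print(f"T={T} K: tau={lifetime(T)*1e6:7.2f} us, eta={efficiency(T):.3f}")
```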

  3. Extended Cellular Automata Models of Particles and Space-Time

    Science.gov (United States)

    Beedle, Michael

    2005-04-01

    Models of particles and space-time are explored through simulations and theoretical models that use extended cellular automata. The expanded cellular automata models go beyond simple scalar binary cell-fields to discrete multi-level group representations such as SO(2), SU(2), SU(3), and Spin(3,1). The propagation and evolution of these expanded cellular automata are then compared to quantum field theories based on the "harmonic paradigm", i.e. built from an infinite number of harmonic oscillators, and to gravitational models.

  4. A Time Scheduling Model of Logistics Service Supply Chain Based on the Customer Order Decoupling Point: A Perspective from the Constant Service Operation Time

    Science.gov (United States)

    Yang, Yi; Xu, Haitao; Liu, Xiaoyan; Wang, Yijia; Liang, Zhicheng

    2014-01-01

    In mass customization logistics service, reasonable scheduling of the logistics service supply chain (LSSC), especially time scheduling, is beneficial for increasing its competitiveness. Therefore, the effect of a customer order decoupling point (CODP) on time scheduling performance should be considered. To minimize the total order operation cost of the LSSC, minimize the difference between the expected and actual times of completing the service orders, and maximize the satisfaction of functional logistics service providers, this study establishes an LSSC time scheduling model based on the CODP. Matlab 7.8 software is used in the numerical analysis for a specific example. Results show that the order completion time of the LSSC can be delayed or brought ahead of schedule, but cannot be infinitely advanced or infinitely delayed. The optimal comprehensive performance can be obtained if the expected order completion time is appropriately delayed. The increase in supply chain comprehensive performance caused by an increase in the relationship coefficient of the logistics service integrator (LSI) is limited. The relative degree of concern of the LSI for cost and service delivery punctuality leads not only to changes in the CODP but also to changes in the scheduling performance of the LSSC. PMID:24715818

  5. A time scheduling model of logistics service supply chain based on the customer order decoupling point: a perspective from the constant service operation time.

    Science.gov (United States)

    Liu, Weihua; Yang, Yi; Xu, Haitao; Liu, Xiaoyan; Wang, Yijia; Liang, Zhicheng

    2014-01-01

    In mass customization logistics service, reasonable scheduling of the logistics service supply chain (LSSC), especially time scheduling, is beneficial for increasing its competitiveness. Therefore, the effect of a customer order decoupling point (CODP) on time scheduling performance should be considered. To minimize the total order operation cost of the LSSC, minimize the difference between the expected and actual times of completing the service orders, and maximize the satisfaction of functional logistics service providers, this study establishes an LSSC time scheduling model based on the CODP. Matlab 7.8 software is used in the numerical analysis for a specific example. Results show that the order completion time of the LSSC can be delayed or brought ahead of schedule, but cannot be infinitely advanced or infinitely delayed. The optimal comprehensive performance can be obtained if the expected order completion time is appropriately delayed. The increase in supply chain comprehensive performance caused by an increase in the relationship coefficient of the logistics service integrator (LSI) is limited. The relative degree of concern of the LSI for cost and service delivery punctuality leads not only to changes in the CODP but also to changes in the scheduling performance of the LSSC.

  6. Software reliability growth models with normal failure time distributions

    International Nuclear Information System (INIS)

    Okamura, Hiroyuki; Dohi, Tadashi; Osaki, Shunji

    2013-01-01

    This paper proposes software reliability growth models (SRGMs) in which the software failure time follows a normal distribution. The proposed model is mathematically tractable and has sufficient ability to fit the software failure data. In particular, we consider the parameter estimation algorithm for the SRGM with a normal distribution. The developed algorithm is based on an EM (expectation-maximization) algorithm and is quite simple to implement as a software application. A numerical experiment is devoted to investigating the fitting ability of the SRGMs with normal distribution through 16 types of failure time data collected in real software projects.

  7. Stochastic modeling of hourly rainfall times series in Campania (Italy)

    Science.gov (United States)

    Giorgio, M.; Greco, R.

    2009-04-01

    Occurrence of flowslides and floods in small catchments is difficult to predict, since it is affected by a number of variables, such as mechanical and hydraulic soil properties, slope morphology, vegetation coverage, and rainfall spatial and temporal variability. Consequently, landslide risk assessment procedures and early warning systems still rely on simple empirical models based on correlations between recorded rainfall data and observed landslides and/or river discharges. The effectiveness of such systems could be improved by reliable quantitative rainfall prediction, which can allow gaining larger lead times. Analysis of on-site recorded rainfall height time series represents the most effective approach for a reliable prediction of the local temporal evolution of rainfall. Hydrological time series analysis is a widely studied field in hydrology, often carried out by means of autoregressive models, such as AR, ARMA, ARX, and ARMAX (e.g. Salas [1992]). Such models gave the best results when applied to the analysis of autocorrelated hydrological time series, like river flow or level time series. Conversely, they are not able to model the behaviour of intermittent time series, like point rainfall height series usually are, especially when recorded with short sampling time intervals. More useful for this issue are the so-called DRIP (Disaggregated Rectangular Intensity Pulse) and NSRP (Neyman-Scott Rectangular Pulse) models [Heneker et al., 2001; Cowpertwait et al., 2002], usually adopted to generate synthetic point rainfall series. In this paper, the DRIP model approach is adopted, in which the sequence of rain storms and dry intervals constituting the structure of the rainfall time series is modeled as an alternating renewal process. The final aim of the study is to provide a useful tool to implement an early warning system for hydrogeological risk management. Model calibration has been carried out with hourly rainfall height data provided by the rain gauges of Campania Region civil

  8. Random walk-percolation-based modeling of two-phase flow in porous media: Breakthrough time and net to gross ratio estimation

    Science.gov (United States)

    Ganjeh-Ghazvini, Mostafa; Masihi, Mohsen; Ghaedi, Mojtaba

    2014-07-01

    Fluid flow modeling in porous media has many applications in waste treatment, hydrology and petroleum engineering. In any geological model, flow behavior is controlled by multiple properties. These properties must be known in advance of common flow simulations. When uncertainties are present, deterministic modeling often produces poor results. Percolation and Random Walk (RW) methods have recently been used in flow modeling. Their stochastic basis is useful in dealing with uncertainty problems. They are also useful in finding the relationship between porous media descriptions and flow behavior. This paper employs a simple methodology based on random walk and percolation techniques. The method is applied to a well-defined model reservoir in which the breakthrough time distributions are estimated. The results of this method and the conventional simulation are then compared. The effect of the net to gross ratio on the breakthrough time distribution is studied in terms of Shannon entropy. Use of the entropy plot allows one to assign the appropriate net to gross ratio to any porous medium.
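
    The random-walk-percolation idea and the entropy summary can both be sketched on a toy grid: walkers diffuse through the open (net) cells of a binary field, the breakthrough time is the first-passage time to the far edge, and Shannon entropy summarizes the spread of its distribution. The grid size, net-to-gross value, and step budget below are illustrative.

```python
# Minimal sketch of the random-walk-on-percolation idea: walkers move through
# open (net) cells of a binary grid; breakthrough time is the first-passage
# time to the far edge. Grid size and net-to-gross values are illustrative.
import numpy as np

rng = np.random.default_rng(8)

def breakthrough_time(ntg, n=25, max_steps=20000):
    open_cell = rng.uniform(size=(n, n)) < ntg    # net-to-gross ~ open fraction
    open_cell[:, 0] = open_cell[:, -1] = True     # keep inlet/outlet open
    pos = np.array([rng.integers(n), 0])          # start on the inlet face
    moves = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])
    for step in range(max_steps):
        if pos[1] == n - 1:                       # reached the outlet face
            return step
        nxt = pos + moves[rng.integers(4)]
        if (0 <= nxt[0] < n and 0 <= nxt[1] < n and open_cell[tuple(nxt)]):
            pos = nxt
    return None                                   # walker never broke through

def shannon_entropy(samples, bins=20):
    p, _ = np.histogram(samples, bins=bins)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log(p)).sum())

times = [bt for _ in range(100)
         if (bt := breakthrough_time(0.7)) is not None]
print(f"{len(times)} breakthroughs, median {np.median(times):.0f} steps, "
      f"entropy {shannon_entropy(times):.2f} nats")
```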

  9. Multiple Indicator Stationary Time Series Models.

    Science.gov (United States)

    Sivo, Stephen A.

    2001-01-01

    Discusses the propriety and practical advantages of specifying multivariate time series models in the context of structural equation modeling for time series and longitudinal panel data. For time series data, the multiple indicator model specification improves on classical time series analysis. For panel data, the multiple indicator model…

  10. Are Model Transferability And Complexity Antithetical? Insights From Validation of a Variable-Complexity Empirical Snow Model in Space and Time

    Science.gov (United States)

    Lute, A. C.; Luce, Charles H.

    2017-11-01

    The related challenges of predictions in ungauged basins and predictions in ungauged climates point to the need to develop environmental models that are transferable across both space and time. Hydrologic modeling has historically focused on modeling one or only a few basins using highly parameterized conceptual or physically based models. However, model parameters and structures have been shown to change significantly when calibrated to new basins or time periods, suggesting that model complexity and model transferability may be antithetical. Empirical space-for-time models provide a framework within which to assess model transferability and any tradeoff with model complexity. Using 497 SNOTEL sites in the western U.S., we develop space-for-time models of April 1 SWE and snow residence time based on mean winter temperature and cumulative winter precipitation. The transferability of the models to new conditions (in both space and time) is assessed using non-random cross-validation tests with consideration of the influence of model complexity on transferability. As others have noted, the algorithmic empirical models transfer best when minimal extrapolation in input variables is required. Temporal split-sample validations use pseudoreplicated samples, resulting in the selection of overly complex models, which has implications for the design of hydrologic model validation tests. Finally, we show that low- to moderate-complexity models transfer most successfully to new conditions in space and time, providing empirical confirmation of the parsimony principle.

  11. Real-time simulation of a Doubly-Fed Induction Generator based wind power system on the eMEGASim Real-Time Digital Simulator

    Science.gov (United States)

    Boakye-Boateng, Nasir Abdulai

    The growing demand for wind power integration into the generation mix prompts the need to subject these systems to stringent performance requirements. This study sought to identify the tools and procedures needed to perform real-time simulation studies of Doubly-Fed Induction Generator (DFIG) based wind generation systems, as a basis for more practical tests of reliability and performance for both grid-connected and islanded wind generation systems. The author focused on developing a platform for wind generation studies and, in addition, tested the performance of two DFIG models on the platform's real-time simulation model: an average SimPowerSystems DFIG wind turbine, and a detailed DFIG-based wind turbine using ARTEMiS components. The platform model implemented here consists of a high-voltage transmission system with four integrated wind farm models comprising in total 65 DFIG-based wind turbines; it was developed and tested on OPAL-RT's eMEGASim Real-Time Digital Simulator.

  12. Convergent Time-Varying Regression Models for Data Streams: Tracking Concept Drift by the Recursive Parzen-Based Generalized Regression Neural Networks.

    Science.gov (United States)

    Duda, Piotr; Jaworski, Maciej; Rutkowski, Leszek

    2018-03-01

    One of the greatest challenges in data mining is related to the processing and analysis of massive data streams. Contrary to traditional static data mining problems, data streams require that each element is processed only once, the amount of allocated memory is constant, and the models incorporate changes of the investigated streams. A vast majority of available methods have been developed for data stream classification and only a few of them attempt to solve regression problems, using various heuristic approaches. In this paper, we develop mathematically justified regression models working in a time-varying environment. More specifically, we study incremental versions of generalized regression neural networks, called IGRNNs, and we prove their tracking properties: weak (in probability) and strong (with probability one) convergence under various concept drift scenarios. First, we present the IGRNNs, based on the Parzen kernels, for modeling stationary systems under nonstationary noise. Next, we extend our approach to modeling time-varying systems under nonstationary noise. We present several types of concept drift to be handled by our approach in such a way that weak and strong convergence holds under certain conditions. In a series of simulations, we compare our method with commonly used heuristic approaches, based on forgetting mechanisms or sliding windows, to deal with concept drift. Finally, we apply our concept in a real-life scenario, solving the problem of currency exchange rate prediction.
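
    As a rough illustration of the recursive Parzen-kernel idea, the sketch below updates a Nadaraya-Watson-type regression estimate one stream element at a time on a fixed grid, so each sample is seen once and memory stays constant. It is not the IGRNN estimator of the paper; the grid, bandwidth schedule and toy drift are invented.

    ```python
    import numpy as np

    class RecursiveParzenRegressor:
        """Recursive kernel regression on a fixed grid of query points."""

        def __init__(self, grid, h0=0.5, decay=0.2):
            self.grid = np.asarray(grid, dtype=float)
            self.num = np.zeros_like(self.grid)   # running kernel-weighted sum of y
            self.den = np.zeros_like(self.grid)   # running kernel mass
            self.t = 0
            self.h0, self.decay = h0, decay

        def update(self, x, y):
            self.t += 1
            h = self.h0 * self.t ** -self.decay   # slowly shrinking bandwidth
            k = np.exp(-0.5 * ((self.grid - x) / h) ** 2) / (h * np.sqrt(2 * np.pi))
            self.num += (y * k - self.num) / self.t   # stochastic-approximation step
            self.den += (k - self.den) / self.t

        def predict(self):
            return np.where(self.den > 1e-12, self.num / (self.den + 1e-12), np.nan)

    # Stream from a slowly drifting system y = sin(x) + drift(t) + noise.
    rng = np.random.default_rng(1)
    model = RecursiveParzenRegressor(grid=np.linspace(0, 2 * np.pi, 50))
    for t in range(5000):
        x = rng.uniform(0, 2 * np.pi)
        y = np.sin(x) + 1e-4 * t + rng.normal(scale=0.3)   # gradual concept drift
        model.update(x, y)
    print(np.round(model.predict()[:5], 2))
    ```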

  13. A finite element-based machine learning approach for modeling the mechanical behavior of the breast tissues under compression in real-time.

    Science.gov (United States)

    Martínez-Martínez, F; Rupérez-Moreno, M J; Martínez-Sober, M; Solves-Llorens, J A; Lorente, D; Serrano-López, A J; Martínez-Sanchis, S; Monserrat, C; Martín-Guerrero, J D

    2017-11-01

    This work presents a data-driven method to simulate, in real time, the biomechanical behavior of the breast tissues in some image-guided interventions such as biopsies or radiotherapy dose delivery, as well as to speed up multimodal registration algorithms. Ten real breasts were used for this work. Their deformation due to the displacement of two compression plates was simulated off-line using the finite element (FE) method. Three machine learning models were trained with the data from those simulations. Then, they were used to predict in real time the deformation of the breast tissues during the compression. The models were a decision tree and two tree-based ensemble methods (extremely randomized trees and random forest). Two different experimental setups were designed to validate and study the performance of these models under different conditions. The mean 3D Euclidean distance between nodes predicted by the models and those extracted from the FE simulations was calculated to assess the performance of the models in the validation set. The experiments proved that extremely randomized trees performed better than the other two models. The mean error committed by the three models in the prediction of the nodal displacements was under 2 mm, a threshold usually set for clinical applications. The time needed for breast compression prediction is sufficiently short to allow its use in real time (<0.2 s). Copyright © 2017 Elsevier Ltd. All rights reserved.
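
    A minimal sketch of the surrogate-modelling step with scikit-learn's extremely randomized trees, evaluated with the mean 3D Euclidean error used in the paper. The training data here are a synthetic stand-in for the FE simulations: the toy displacement field and all parameters are invented.

    ```python
    import numpy as np
    from sklearn.ensemble import ExtraTreesRegressor
    from sklearn.model_selection import train_test_split

    # Each row: (plate compression, node rest position x/y/z); target: 3D displacement.
    rng = np.random.default_rng(0)
    X = rng.uniform(size=(20_000, 4))
    y_true = np.stack([X[:, 0] * X[:, 1],          # invented displacement field
                       0.5 * X[:, 0] * X[:, 2],
                       -0.8 * X[:, 0] * X[:, 3]], axis=1)
    y = y_true + rng.normal(scale=0.01, size=y_true.shape)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = ExtraTreesRegressor(n_estimators=100, n_jobs=-1, random_state=0)
    model.fit(X_tr, y_tr)               # trained off-line, queried in real time

    # Mean 3D Euclidean distance between predicted and "FE" nodal displacements.
    err = np.linalg.norm(model.predict(X_te) - y_te, axis=1).mean()
    print(f"mean 3D error: {err:.4f}")
    ```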

  14. NHPP-Based Software Reliability Models Using Equilibrium Distribution

    Science.gov (United States)

    Xiao, Xiao; Okamura, Hiroyuki; Dohi, Tadashi

    Non-homogeneous Poisson processes (NHPPs) have gained much popularity in actual software testing phases to estimate the software reliability, the number of remaining faults in software and the software release timing. In this paper, we propose a new modeling approach for the NHPP-based software reliability models (SRMs) to describe the stochastic behavior of software fault-detection processes. The fundamental idea is to apply the equilibrium distribution to the fault-detection time distribution in NHPP-based modeling. We also develop efficient parameter estimation procedures for the proposed NHPP-based SRMs. Through numerical experiments, it can be concluded that the proposed NHPP-based SRMs outperform the existing ones in many data sets from the perspective of goodness-of-fit and prediction performance.

  15. Temporal Drivers of Liking Based on Functional Data Analysis and Non-Additive Models for Multi-Attribute Time-Intensity Data of Fruit Chews.

    Science.gov (United States)

    Kuesten, Carla; Bi, Jian

    2018-06-03

    Conventional drivers-of-liking analysis was extended with a time dimension into temporal drivers of liking (TDOL), based on functional data analysis methodology and non-additive models for multiple-attribute time-intensity (MATI) data. The non-additive models, which consider both the direct effects and the interaction effects of attributes on consumer overall liking, include the Choquet integral and fuzzy measures from multi-criteria decision-making, and linear regression based on variance decomposition. The dynamics of TDOL, i.e., the derivatives of the relative-importance functional curves, were also explored. The well-established R packages 'fda', 'kappalab' and 'relaimpo' were used for developing TDOL. Applied use of these methods shows that the relative importance of MATI curves offers insights for understanding the temporal aspects of consumer liking for fruit chews.

  16. Using Model Replication to Improve the Reliability of Agent-Based Models

    Science.gov (United States)

    Zhong, Wei; Kim, Yushim

    The basic presupposition of model replication activities for a computational model such as an agent-based model (ABM) is that, as a robust and reliable tool, it must be replicable in other computing settings. This assumption has recently gained attention in the community of artificial society and simulation due to the challenges of model verification and validation. Illustrating the replication, by a different author and in NetLogo, of an ABM representing fraudulent behavior in a public service delivery system originally developed in the Java-based MASON toolkit, this paper exemplifies how model replication exercises provide unique opportunities for the model verification and validation process. At the same time, replication helps accumulate best practices and patterns of model replication and contributes to the agenda of developing a standard methodological protocol for agent-based social simulation.

  17. Modeling nonstationarity in space and time.

    Science.gov (United States)

    Shand, Lyndsay; Li, Bo

    2017-09-01

    We propose to model a spatio-temporal random field that has nonstationary covariance structure in both space and time domains by applying the concept of the dimension expansion method in Bornn et al. (2012). Simulations are conducted for both separable and nonseparable space-time covariance models, and the model is also illustrated with a streamflow dataset. Both simulation and data analyses show that modeling nonstationarity in both space and time can improve the predictive performance over stationary covariance models or models that are nonstationary in space but stationary in time. © 2017, The International Biometric Society.

  18. A point-based rendering approach for real-time interaction on mobile devices

    Institute of Scientific and Technical Information of China (English)

    LIANG XiaoHui; ZHAO QinPing; HE ZhiYing; XIE Ke; LIU YuBo

    2009-01-01

    Mobile devices are an important interactive platform. Due to their limited computation, memory, display area and energy, how to realize efficient, real-time interaction with 3D models on mobile devices is an important research topic. Considering the features of mobile devices, this paper adopts a remote rendering mode and point models, and then proposes a transmission and rendering approach that supports real-time interaction. First, an improved simplification algorithm based on MLS and the display resolution of mobile devices is proposed. Then, a hierarchy selection scheme for point models and a QoS transmission control strategy are given, based on the operator's area of interest, the degree of interest of each object in the virtual environment, and the rendering error; these reduce energy consumption. Finally, the rendering of and interaction with point models are completed on mobile devices. The experiments show that our method is efficient.

  19. Real-time DSP implementation for MRF-based video motion detection.

    Science.gov (United States)

    Dumontier, C; Luthon, F; Charras, J P

    1999-01-01

    This paper describes the real-time implementation of a simple and robust motion detection algorithm based on Markov random field (MRF) modeling. MRF-based algorithms often require a significant amount of computation. The intrinsically parallel nature of MRF modeling has led most implementations toward parallel machines and neural networks, but none of these approaches offers an efficient solution for real-world (i.e., industrial) applications. Here, an alternative implementation for the problem at hand is presented, yielding a complete, efficient and autonomous real-time system for motion detection. This system is based on a hybrid architecture, associating pipeline modules with one asynchronous module to perform the whole process, from video acquisition to the visualization of moving-object masks. A board prototype is presented, and a processing rate of 15 images/s is achieved, showing the validity of the approach.

  20. A comparison of the conditional inference survival forest model to random survival forests based on a simulation study as well as on two applications with time-to-event data.

    Science.gov (United States)

    Nasejje, Justine B; Mwambi, Henry; Dheda, Keertan; Lesosky, Maia

    2017-07-28

    Random survival forest (RSF) models have been identified as alternative methods to the Cox proportional hazards model in analysing time-to-event data. These methods, however, have been criticised for the bias that results from favouring covariates with many split-points, and hence conditional inference forests for time-to-event data have been suggested. Conditional inference forests (CIF) are known to correct the bias in RSF models by separating the procedure for the best covariate to split on from that of the best split point search for the selected covariate. In this study, we compare the random survival forest model to the conditional inference model (CIF) using twenty-two simulated time-to-event datasets. We also analysed two real time-to-event datasets. The first dataset is based on the survival of children under five years of age in Uganda and it consists of categorical covariates with most of them having more than two levels (many split-points). The second dataset is based on the survival of patients with extremely drug resistant tuberculosis (XDR TB), which consists of mainly categorical covariates with two levels (few split-points). The study findings indicate that the conditional inference forest model is superior to random survival forest models in analysing time-to-event data that consist of covariates with many split-points, based on the values of the bootstrap cross-validated estimates for integrated Brier scores. However, conditional inference forests perform comparably to random survival forest models in analysing time-to-event data consisting of covariates with fewer split-points. Although survival forests are promising methods in analysing time-to-event data, it is important to identify the best forest model for analysis based on the nature of the covariates of the dataset in question.

  1. A comparison of the conditional inference survival forest model to random survival forests based on a simulation study as well as on two applications with time-to-event data

    Directory of Open Access Journals (Sweden)

    Justine B. Nasejje

    2017-07-01

    Full Text Available Abstract Background Random survival forest (RSF) models have been identified as alternative methods to the Cox proportional hazards model in analysing time-to-event data. These methods, however, have been criticised for the bias that results from favouring covariates with many split-points, and hence conditional inference forests for time-to-event data have been suggested. Conditional inference forests (CIF) are known to correct the bias in RSF models by separating the procedure for the best covariate to split on from that of the best split point search for the selected covariate. Methods In this study, we compare the random survival forest model to the conditional inference model (CIF) using twenty-two simulated time-to-event datasets. We also analysed two real time-to-event datasets. The first dataset is based on the survival of children under five years of age in Uganda and it consists of categorical covariates with most of them having more than two levels (many split-points). The second dataset is based on the survival of patients with extremely drug resistant tuberculosis (XDR TB), which consists of mainly categorical covariates with two levels (few split-points). Results The study findings indicate that the conditional inference forest model is superior to random survival forest models in analysing time-to-event data that consist of covariates with many split-points, based on the values of the bootstrap cross-validated estimates for integrated Brier scores. However, conditional inference forests perform comparably to random survival forest models in analysing time-to-event data consisting of covariates with fewer split-points. Conclusion Although survival forests are promising methods in analysing time-to-event data, it is important to identify the best forest model for analysis based on the nature of the covariates of the dataset in question.

  2. A multi-component and multi-failure mode inspection model based on the delay time concept

    International Nuclear Information System (INIS)

    Wang Wenbin; Banjevic, Dragan; Pecht, Michael

    2010-01-01

    The delay time concept and the techniques developed for modelling and optimising plant inspection practices have been reported in many papers and case studies. For a system comprised of many components and subject to many different failure modes, one of the most convenient ways to model the inspection and failure processes is to use a stochastic point process for defect arrivals and a common delay time distribution for the duration between the arrival of a defect and the resulting failure, shared by all defects. This is an approximation, but has been proven to be valid when the number of components is large. However, for a system with just a few key components and subject to few major failure modes, the approximation may be poor. In this paper, a model is developed to address this situation, where each component and failure mode is modelled individually and then pooled together to form the system inspection model. Since inspections are usually scheduled for the whole system rather than individual components, we then formulate the inspection model when the time to the next inspection from the point of a component failure renewal is random. This complicates the model, and an asymptotic solution was found. Simulation algorithms have also been proposed as a comparison to the analytical results. A numerical example is presented to demonstrate the model.
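
    For intuition about the delay time calculus underlying such models, the sketch below treats a single component with Poisson defect arrivals and an exponential delay time, and checks a Monte Carlo estimate of the expected failures per inspection cycle against the standard analytic expression. All parameters are invented and perfect inspections are assumed.

    ```python
    import numpy as np
    from scipy import integrate, stats

    lam, mh = 0.4, 5.0              # defect arrival rate; mean delay time
    H = stats.expon(scale=mh)       # delay time distribution (defect -> failure)

    def expected_failures(T):
        """E[failures per cycle] = lam * integral_0^T F_H(T - u) du."""
        val, _ = integrate.quad(lambda u: H.cdf(T - u), 0.0, T)
        return lam * val

    def mc_failures(T, n_cycles=20_000, rng=np.random.default_rng(2)):
        fails = 0
        for _ in range(n_cycles):
            n = rng.poisson(lam * T)                 # defects arising in the cycle
            u = rng.uniform(0, T, size=n)            # their arrival times
            h = H.rvs(size=n, random_state=rng)      # their delay times
            fails += np.sum(u + h < T)               # failed before the inspection
        return fails / n_cycles

    for T in (2.0, 5.0, 10.0):                       # candidate inspection intervals
        print(f"T={T:4.1f}  analytic={expected_failures(T):.3f}"
              f"  monte-carlo={mc_failures(T):.3f}")
    ```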

  3. Incorporating time-delays in S-System model for reverse engineering genetic networks.

    Science.gov (United States)

    Chowdhury, Ahsan Raja; Chetty, Madhu; Vinh, Nguyen Xuan

    2013-06-18

    In any gene regulatory network (GRN), the complex interactions occurring amongst transcription factors and target genes can be either instantaneous or time-delayed. However, many existing modeling approaches currently applied for inferring GRNs are unable to represent both these interactions simultaneously. As a result, all these approaches cannot detect important interactions of the other type. S-System model, a differential equation based approach which has been increasingly applied for modeling GRNs, also suffers from this limitation. In fact, all S-System based existing modeling approaches have been designed to capture only instantaneous interactions, and are unable to infer time-delayed interactions. In this paper, we propose a novel Time-Delayed S-System (TDSS) model which uses a set of delay differential equations to represent the system dynamics. The ability to incorporate time-delay parameters in the proposed S-System model enables simultaneous modeling of both instantaneous and time-delayed interactions. Furthermore, the delay parameters are not limited to just positive integer values (corresponding to time stamps in the data), but can also take fractional values. Moreover, we also propose a new criterion for model evaluation exploiting the sparse and scale-free nature of GRNs to effectively narrow down the search space, which not only reduces the computation time significantly but also improves model accuracy. The evaluation criterion systematically adapts the max-min in-degrees and also systematically balances the effect of network accuracy and complexity during optimization. The four well-known performance measures applied to the experimental studies on synthetic networks with various time-delayed regulations clearly demonstrate that the proposed method can capture both instantaneous and delayed interactions correctly with high precision. The experiments carried out on two well-known real-life networks, namely IRMA and SOS DNA repair network in
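
    A minimal sketch of a time-delayed S-System of the general form the paper describes, integrated with fixed-step Euler and a history buffer. The two-gene network, parameters and integer-step delays below are invented; the paper's fractional delays would require interpolating the stored history.

    ```python
    import numpy as np

    # Two-gene time-delayed S-System of the general form
    #   dx_i/dt = alpha_i * prod_j x_j(t - tau_ij)^g_ij - beta_i * prod_j x_j(t)^h_ij
    alpha = np.array([2.0, 1.5]); beta = np.array([1.0, 1.0])
    g = np.array([[0.0, -0.8],     # gene 1 repressed by delayed gene 2
                  [0.5,  0.0]])    # gene 2 activated by delayed gene 1
    h = np.array([[1.0, 0.0],
                  [0.0, 1.0]])     # first-order self-degradation
    tau = np.array([[0.0, 1.2],
                    [0.6, 0.0]])   # regulation delays in time units

    dt, T = 0.01, 40.0
    steps = int(T / dt)
    lag = np.round(tau / dt).astype(int)  # fractional delays would interpolate
    x = np.full((steps + 1, 2), 0.5)      # history buffer, constant initial history

    for k in range(lag.max(), steps):
        prod = np.ones(2); deg = np.ones(2)
        for i in range(2):
            for j in range(2):
                prod[i] *= x[k - lag[i, j], j] ** g[i, j]   # delayed regulation
                deg[i] *= x[k, j] ** h[i, j]                # instantaneous degradation
        x[k + 1] = x[k] + dt * (alpha * prod - beta * deg)  # forward Euler step

    print(np.round(x[-1], 3))             # state after the transient
    ```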

  4. Chaos Time Series Prediction Based on Membrane Optimization Algorithms

    Directory of Open Access Journals (Sweden)

    Meng Li

    2015-01-01

    Full Text Available This paper puts forward a prediction model for chaotic time series based on a membrane computing optimization algorithm; the model simultaneously optimizes the phase space reconstruction parameters (τ, m) and the least squares support vector machine (LS-SVM) parameters (γ, σ) using the membrane computing optimization algorithm. Accurately predicting the trend of parameters in the electromagnetic environment is an important basis for spectrum management, and can help decision makers adopt optimal actions. The model presented in this paper is used to forecast the band occupancy rate of the frequency modulation (FM) broadcasting band and the interphone band. To show the applicability and superiority of the proposed model, this paper compares it with conventional similar models. The experimental results show that for both single-step and multistep prediction, the proposed model performs best based on three error measures, namely, normalized mean square error (NMSE), root mean square error (RMSE), and mean absolute percentage error (MAPE).
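
    The phase space reconstruction that the membrane algorithm tunes can be sketched as a plain delay embedding. The (τ, m) values and toy series below are illustrative, and the resulting matrix of delay vectors would feed the LS-SVM (or any other regressor) for one-step-ahead prediction.

    ```python
    import numpy as np

    def embed(series, m, tau):
        """Takens-style delay embedding: rows are [x(t), x(t+tau), ...,
        x(t+(m-1)*tau)] for every admissible t."""
        series = np.asarray(series, dtype=float)
        n = len(series) - (m - 1) * tau
        return np.column_stack([series[i * tau: i * tau + n] for i in range(m)])

    # Toy series standing in for the band occupancy rate measurements.
    x = np.sin(0.3 * np.arange(500)) + 0.05 * np.random.default_rng(3).normal(size=500)
    m, tau = 4, 2              # the paper optimizes these jointly with the LS-SVM
    X = embed(x, m, tau)[:-1]  # delay vectors
    y = x[(m - 1) * tau + 1:]  # value to predict one step ahead
    print(X.shape, y.shape)    # (493, 4) (493,)
    ```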

  5. An adaptive time-stepping strategy for solving the phase field crystal model

    International Nuclear Information System (INIS)

    Zhang, Zhengru; Ma, Yuan; Qiao, Zhonghua

    2013-01-01

    In this work, we will propose an adaptive time step method for simulating the dynamics of the phase field crystal (PFC) model. The numerical simulation of the PFC model needs a long time to reach steady state, so a large time-stepping method is necessary. Unconditionally energy stable schemes are used to solve the PFC model. The time steps are adaptively determined based on the time derivative of the corresponding energy. It is found that the proposed time step adaptivity can resolve not only the steady state solution but also the dynamical development of the solution, efficiently and accurately. The numerical experiments demonstrate that the CPU time is significantly reduced for long time simulations
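
    A toy sketch of energy-based step adaptivity in the spirit described above: small steps while the energy changes fast, large steps near steady state. The step-size formula and the scalar relaxation problem standing in for the PFC dynamics are illustrative, not the authors' scheme.

    ```python
    import numpy as np

    # Illustrative step-size rule driven by the energy decay rate:
    #   dt = max(dt_min, dt_max / sqrt(1 + alpha * |dE/dt|^2))
    dt_min, dt_max, alpha = 1e-3, 1.0, 1e4

    # Stand-in for the PDE: relaxation u' = -u with energy E(u) = u^2 / 2,
    # advanced with backward Euler so that any step size is stable.
    u, t, dt, steps = 5.0, 0.0, dt_min, 0
    E_prev = 0.5 * u * u
    while t < 50.0:
        u /= (1.0 + dt)                # unconditionally stable implicit update
        t += dt
        steps += 1
        E = 0.5 * u * u
        dE_dt = (E - E_prev) / dt      # discrete energy decay rate
        dt = max(dt_min, dt_max / np.sqrt(1.0 + alpha * dE_dt ** 2))
        E_prev = E

    print(f"reached t=50 in {steps} adaptive steps "
          f"(a fixed dt_min would need {int(50 / dt_min)})")
    ```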

  6. Power Supply Interruption Costs: Models and Methods Incorporating Time Dependent Patterns

    International Nuclear Information System (INIS)

    Kjoelle, G.H.

    1996-12-01

    This doctoral thesis develops models and methods for the estimation of annual interruption costs for delivery points, emphasizing the handling of time dependent patterns and uncertainties in the variables determining the annual costs. It presents an analytical method for the calculation of annual expected interruption costs for delivery points in radial systems, based on a radial reliability model with time dependent variables, and a similar method for meshed systems, based on a list of outage events, assuming that these events are found in advance from load flow and contingency analyses. A Monte Carlo simulation model is given which handles both time variations and stochastic variations in the input variables and is based on the same list of outage events. This general procedure for radial and meshed systems provides expectation values and probability distributions for interruption costs from delivery points. There is also a procedure for handling uncertainties in input variables by a fuzzy description, giving annual interruption costs as a fuzzy membership function. The methods are developed for practical applications in radial and meshed systems, based on available data from failure statistics, load registrations and customer surveys. Traditional reliability indices, such as annual interruption time and power and energy not supplied, are calculated as by-products. The methods are presented as algorithms and/or procedures which are available as prototypes. 97 refs., 114 figs., 62 tabs

  7. Power Supply Interruption Costs: Models and Methods Incorporating Time Dependent Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Kjoelle, G.H.

    1996-12-01

    This doctoral thesis develops models and methods for the estimation of annual interruption costs for delivery points, emphasizing the handling of time dependent patterns and uncertainties in the variables determining the annual costs. It presents an analytical method for the calculation of annual expected interruption costs for delivery points in radial systems, based on a radial reliability model with time dependent variables, and a similar method for meshed systems, based on a list of outage events, assuming that these events are found in advance from load flow and contingency analyses. A Monte Carlo simulation model is given which handles both time variations and stochastic variations in the input variables and is based on the same list of outage events. This general procedure for radial and meshed systems provides expectation values and probability distributions for interruption costs from delivery points. There is also a procedure for handling uncertainties in input variables by a fuzzy description, giving annual interruption costs as a fuzzy membership function. The methods are developed for practical applications in radial and meshed systems, based on available data from failure statistics, load registrations and customer surveys. Traditional reliability indices, such as annual interruption time and power and energy not supplied, are calculated as by-products. The methods are presented as algorithms and/or procedures which are available as prototypes. 97 refs., 114 figs., 62 tabs.

  8. Models and analysis for multivariate failure time data

    Science.gov (United States)

    Shih, Joanna Huang

    The goal of this research is to develop and investigate models and analytic methods for multivariate failure time data. We compare models in terms of direct modeling of the margins, flexibility of the dependency structure, local vs. global measures of association, and ease of implementation. In particular, we study copula models, and models produced by right neutral cumulative hazard functions and right neutral hazard functions. We examine the changes of association over time for families of bivariate distributions induced from these models by displaying their density contour plots, conditional density plots, correlation curves of Doksum et al., and local cross ratios of Oakes. We know that bivariate distributions with the same margins may exhibit quite different dependency structures. In addition to modeling, we study estimation procedures. For copula models, we investigate three estimation procedures. The first procedure is full maximum likelihood. The second procedure is two-stage maximum likelihood: at stage 1, we estimate the parameters in the margins by maximizing the marginal likelihood; at stage 2, we estimate the dependency structure with the margins fixed at their estimates. The third procedure is two-stage partially parametric maximum likelihood. It is similar to the second procedure, but we estimate the margins by the Kaplan-Meier estimate. We derive asymptotic properties for these three estimation procedures and compare their efficiency by Monte-Carlo simulations and direct computations. For models produced by right neutral cumulative hazards and right neutral hazards, we derive the likelihood and investigate the properties of the maximum likelihood estimates. Finally, we develop goodness of fit tests for the dependency structure in the copula models. We derive a test statistic and its asymptotic properties based on the test of homogeneity of Zelterman and Chen (1988), and a graphical diagnostic procedure based on the empirical Bayes approach. We study the

  9. Modelling tourism demand in Madeira since 1946: an historical overview based on a time series approach

    Directory of Open Access Journals (Sweden)

    António Manuel Martins de Almeida

    2016-06-01

    Full Text Available Tourism is the leading economic sector in most islands, and for that reason market trends are closely monitored, owing to the huge impact of even relatively minor changes in demand patterns. An interesting line of research regarding the analysis of market trends concerns the examination of time series to get an historical overview of the data patterns. The modelling of demand patterns is obviously dependent on data availability, and the measurement of changes in demand patterns is quite often focused on a few decades. In this paper, we use long-term time-series data to analyse the evolution of the main markets in Madeira, by country of origin, in order to re-examine the Butler life cycle model, based on data available from 1946 onwards. This study is an opportunity to document the historical development of the industry in Madeira and to open the discussion on the rejuvenation of a mature destination. Tourism development in Madeira experienced rapid growth until the late 1990s, as one of the leading destinations in the European context. However, annual growth rates are no longer within acceptable ranges, which has led policy-makers and experts to recommend a thorough assessment of the industry's prospects.

  10. Time dependent mechanical modeling for polymers based on network theory

    Energy Technology Data Exchange (ETDEWEB)

    Billon, Noëlle [MINES ParisTech, PSL-Research University, CEMEF – Centre de mise en forme des matériaux, CNRS UMR 7635, CS 10207 rue Claude Daunesse 06904 Sophia Antipolis Cedex (France)

    2016-05-18

    Despite many attempts in recent years, the complex mechanical behaviour of polymers remains incompletely modelled, making the industrial design of structures under complex, cyclic and severe loadings not fully reliable. The nonlinear and dissipative viscoelastic, viscoplastic behaviour of these materials makes it necessary to take into account the nonlinear and combined effects of mechanical and thermal phenomena. With this in view, a visco-hyperelastic, viscoplastic model based on a network description of the material has recently been developed, designed within a complete thermodynamic framework in order to account for these main thermo-mechanical couplings. A way to account for the coupled effects of strain rate and temperature was also suggested. First experimental validations, conducted in the 1D limit on an amorphous rubbery PMMA under isothermal conditions, led to reasonably good results. In this paper a more complete formalism is presented and validated on semi-crystalline polymers: a PA66 and a PET (either amorphous or semi-crystalline) are used. The protocol for identifying the constitutive parameters is described. It is concluded that this new approach should be the route to accurately modelling the thermo-mechanical behaviour of polymers using a reduced number of parameters with physical meaning.

  11. PSO-MISMO modeling strategy for multistep-ahead time series prediction.

    Science.gov (United States)

    Bao, Yukun; Xiong, Tao; Hu, Zhongyi

    2014-05-01

    Multistep-ahead time series prediction is one of the most challenging research topics in the field of time series modeling and prediction, and is continually under research. Recently, the multiple-input several multiple-outputs (MISMO) modeling strategy has been proposed as a promising alternative for multistep-ahead time series prediction, exhibiting advantages compared with the two currently dominating strategies, the iterated and the direct strategies. Built on the established MISMO strategy, this paper proposes a particle swarm optimization (PSO)-based MISMO modeling strategy, which is capable of determining the number of sub-models in a self-adaptive mode, with varying prediction horizons. Rather than deriving crisp divides with equal-sized prediction horizons from the established MISMO, the proposed PSO-MISMO strategy, implemented with neural networks, employs a heuristic to create flexible divides with varying sizes of prediction horizons and to generate corresponding sub-models, providing considerable flexibility in model construction, which has been validated with simulated and real datasets.

  12. A new wind speed forecasting strategy based on the chaotic time series modelling technique and the Apriori algorithm

    International Nuclear Information System (INIS)

    Guo, Zhenhai; Chi, Dezhong; Wu, Jie; Zhang, Wenyu

    2014-01-01

    Highlights: • Impact of meteorological factors on wind speed forecasting is taken into account. • Forecasted wind speed results are corrected by the associated rules. • Forecasting accuracy is improved by the new wind speed forecasting strategy. • Robustness of the proposed model is validated by data sampled from different sites. - Abstract: Wind energy has been the fastest growing renewable energy resource in recent years. Because of the intermittent nature of wind, wind power is a fluctuating source of electrical energy. Therefore, to minimize the impact of wind power on the electrical grid, accurate and reliable wind power forecasting is mandatory. In this paper, a new wind speed forecasting approach based on the chaotic time series modelling technique and the Apriori algorithm has been developed. The new approach consists of four procedures: (I) clustering by using the k-means clustering approach; (II) employing the Apriori algorithm to discover the association rules; (III) forecasting the wind speed according to the chaotic time series forecasting model; and (IV) correcting the forecasted wind speed data using the association rules discovered previously. This procedure has been verified by 31-day-ahead daily average wind speed forecasting case studies, which employed the wind speed and other meteorological data collected from four meteorological stations located in the Hexi Corridor area of China. The results of these case studies reveal that the chaotic forecasting model can efficiently improve the accuracy of the wind speed forecasting, and the Apriori algorithm can effectively discover the association rules between the wind speed and other meteorological factors. In addition, the correction results demonstrate that the association rules discovered by the Apriori algorithm are effective in correcting the forecasted wind speed values when those values do not match the classification given by the association rules.

  13. Comparing an Annual and a Daily Time-Step Model for Predicting Field-Scale Phosphorus Loss.

    Science.gov (United States)

    Bolster, Carl H; Forsberg, Adam; Mittelstet, Aaron; Radcliffe, David E; Storm, Daniel; Ramirez-Avila, John; Sharpley, Andrew N; Osmond, Deanna

    2017-11-01

    A wide range of mathematical models are available for predicting phosphorus (P) losses from agricultural fields, ranging from simple, empirically based annual time-step models to more complex, process-based daily time-step models. In this study, we compare field-scale P-loss predictions between the Annual P Loss Estimator (APLE), an empirically based annual time-step model, and the Texas Best Management Practice Evaluation Tool (TBET), a process-based daily time-step model based on the Soil and Water Assessment Tool. We first compared predictions of field-scale P loss from both models using field and land management data collected from 11 research sites throughout the southern United States. We then compared predictions of P loss from both models with measured P-loss data from these sites. We observed a strong and statistically significant correlation in predicted P loss between the two models; however, APLE predicted, on average, 44% greater dissolved P loss, whereas TBET predicted, on average, 105% greater particulate P loss for the conditions simulated in our study. When we compared model predictions with measured P-loss data, neither model consistently outperformed the other, indicating that more complex models do not necessarily produce better predictions of field-scale P loss. Our results also highlight limitations of both models and the need for continued efforts to improve their accuracy. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  14. Opinion dynamics model based on quantum formalism

    Energy Technology Data Exchange (ETDEWEB)

    Artawan, I. Nengah, E-mail: nengahartawan@gmail.com [Theoretical Physics Division, Department of Physics, Udayana University (Indonesia); Trisnawati, N. L. P., E-mail: nlptrisnawati@gmail.com [Biophysics, Department of Physics, Udayana University (Indonesia)

    2016-03-11

    An opinion dynamics model based on quantum formalism is proposed. The core of the quantum formalism is the half-spin dynamics system. In this research, the implicit time evolution operators are derived. The analogy between this model and the Deffuant and Sznajd models is discussed.

  15. A discrete-time Bayesian network reliability modeling and analysis framework

    International Nuclear Information System (INIS)

    Boudali, H.; Dugan, J.B.

    2005-01-01

    Dependability tools are becoming indispensable for modeling and analyzing (critical) systems. However, the growing complexity of such systems calls for increasing sophistication of these tools. Dependability tools need to not only capture the complex dynamic behavior of the system components, but must also be easy to use, intuitive, and computationally efficient. In general, current tools have a number of shortcomings, including lack of modeling power, the incapacity to efficiently handle general component failure distributions, and ineffectiveness in solving large models that exhibit complex dependencies between their components. We propose a novel reliability modeling and analysis framework based on the Bayesian network (BN) formalism. The overall approach is to investigate timed Bayesian networks and to find a suitable reliability framework for dynamic systems. We have applied our methodology to two example systems, and preliminary results are promising. We have defined a discrete-time BN reliability formalism and demonstrated its capabilities from a modeling and analysis point of view. This research shows that a BN-based reliability formalism is a powerful potential solution to modeling and analyzing various kinds of system component behaviors and interactions. Moreover, being based on the BN formalism, the framework is easy to use and intuitive for non-experts, and provides a basis for more advanced and useful analyses such as system diagnosis.

  16. Time-series models on somatic cell score improve detection of mastitis

    DEFF Research Database (Denmark)

    Norberg, E; Korsgaard, I R; Sloth, K H M N

    2008-01-01

    In-line detection of mastitis using frequent milk sampling was studied in 241 cows in a Danish research herd. Somatic cell scores obtained on a daily basis were analyzed using a mixture of four time-series models. Probabilities were assigned to each model for the observations belonging to a normal "steady-state" development, a change in "level", a change of "slope", or an "outlier". Mastitis was indicated by the sum of the probabilities for the "level" and "slope" models. The time-series models were based on the Kalman filter. Reference data were obtained from veterinary assessment of health status combined with bacteriological findings. At a sensitivity of 90% the corresponding specificity was 68%, which increased to 83% using a one-step-back smoothing. It is concluded that mixture models based on Kalman filters are efficient in handling in-line sensor data for the detection of mastitis and may be useful for similar...
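
    A rough sketch of the Kalman filter machinery behind such monitoring: a single local-linear (level + slope) model whose standardized innovation flags abrupt changes. This is one filter with an alarm rule, not the paper's four-model mixture; all parameters and the simulated somatic cell scores are invented.

    ```python
    import numpy as np

    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition: level += slope
    Hm = np.array([[1.0, 0.0]])              # we observe the level only
    Q = np.diag([0.01, 0.001])               # state noise (level, slope)
    R = np.array([[0.25]])                   # observation noise

    x = np.array([3.0, 0.0])                 # initial level and slope
    P = np.eye(2)

    rng = np.random.default_rng(4)
    scores = 3.0 + rng.normal(scale=0.5, size=60)
    scores[40:] += 3.0                       # simulated infection: jump in SCS

    for day, z in enumerate(scores):
        x = F @ x                            # predict
        P = F @ P @ F.T + Q
        S = Hm @ P @ Hm.T + R                # innovation variance
        v = z - (Hm @ x)[0]                  # innovation
        if abs(v) / np.sqrt(S[0, 0]) > 3.0:
            print(f"day {day}: standardized innovation "
                  f"{v / np.sqrt(S[0, 0]):+.1f} -> possible mastitis")
        K = P @ Hm.T @ np.linalg.inv(S)      # update
        x = x + K[:, 0] * v
        P = (np.eye(2) - K @ Hm) @ P
    ```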

  17. [Predicting Incidence of Hepatitis E in China Using Fuzzy Time Series Based on Fuzzy C-Means Clustering Analysis].

    Science.gov (United States)

    Luo, Yi; Zhang, Tao; Li, Xiao-song

    2016-05-01

    To explore the application of a fuzzy time series model based on fuzzy c-means clustering in forecasting the monthly incidence of Hepatitis E in mainland China. A predictive model (fuzzy time series method based on fuzzy c-means clustering) was developed using Hepatitis E incidence data in mainland China between January 2004 and July 2014. The incidence data from August 2014 to November 2014 were used to test the fitness of the predictive model. The forecasting results were compared with those resulting from traditional fuzzy time series models. The fuzzy time series model based on fuzzy c-means clustering had a mean squared error (MSE) of fitting of 0.0011 and an MSE of forecasting of 6.9775 × 10⁻⁴, compared with 0.0017 and 0.0014 from the traditional forecasting model. The results indicate that the fuzzy time series model based on fuzzy c-means clustering has a better performance in forecasting the incidence of Hepatitis E.
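
    A minimal NumPy sketch of the fuzzy c-means step that defines the fuzzy intervals of such a model; the synthetic monthly incidence series is invented and the downstream fuzzy-time-series forecasting logic is omitted.

    ```python
    import numpy as np

    def fuzzy_c_means(x, c=5, m=2.0, iters=100, seed=5):
        """Fuzzy c-means on a 1-D series; the sorted cluster centres then
        define the intervals of the fuzzy time series model."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x, dtype=float).reshape(-1, 1)
        u = rng.dirichlet(np.ones(c), size=len(x))        # random fuzzy memberships
        for _ in range(iters):
            w = u ** m
            centres = (w.T @ x) / w.sum(axis=0)[:, None]  # membership-weighted means
            d = np.abs(x - centres.T) + 1e-12             # point-to-centre distances
            un = d ** (-2.0 / (m - 1.0))                  # standard FCM update
            u = un / un.sum(axis=1, keepdims=True)
        return np.sort(centres.ravel()), u

    incidence = 0.2 + 0.1 * np.sin(np.arange(120) * 2 * np.pi / 12)  # fake data
    centres, memberships = fuzzy_c_means(incidence)
    print("interval centres:", np.round(centres, 3))
    ```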

  18. Detection and modelling of time-dependent QTL in animal populations

    DEFF Research Database (Denmark)

    Lund, Mogens S; Sørensen, Peter; Madsen, Per

    2008-01-01

    A longitudinal approach is proposed to map QTL affecting function-valued traits and to estimate their effect over time. The method is based on fitting mixed random regression models. The QTL allelic effects are modelled with random coefficient parametric curves and a gametic relationship matrix. A simulation study was conducted in order to assess the ability of the approach to fit different patterns of QTL over time. It was found that this longitudinal approach was able to adequately fit the simulated variance functions and considerably improved the power of detection of time-varying QTL effects compared to the traditional univariate model. This was confirmed by an analysis of protein yield data in dairy cattle, where the model was able to detect QTL with a large effect either at the beginning or the end of the lactation, which were not detected with a simple 305-day model.

  19. Time domain series system definition and gear set reliability modeling

    International Nuclear Information System (INIS)

    Xie, Liyang; Wu, Ningxiang; Qian, Wenxue

    2016-01-01

    Time-dependent multi-configuration is a typical feature of mechanical systems such as gear trains and chain drives. As a series system, a gear train is distinct from a traditional series system, such as a chain, in its load transmission path, system-component relationship, system functioning manner, and time-dependent system configuration. First, the present paper defines the time-domain series system, for which the traditional series system reliability model is not adequate. Then, a system-specific reliability modeling technique is proposed for gear sets, including component (tooth) and subsystem (tooth-pair) load history description, material prior/posterior strength expression, time-dependent and system-specific load-strength interference analysis, and the treatment of statistically dependent failure events. Consequently, several system reliability models are developed for gear sets with different tooth numbers in the scenario of tooth-root material ultimate tensile strength failure. The application of the models is discussed in the last part, and the differences between the system-specific reliability model and the traditional series system reliability model are illustrated by several numerical examples. - Highlights: • A new type of series system, i.e. the time-domain multi-configuration series system, is defined, which is of great significance to reliability modeling. • A multi-level statistical analysis based reliability modeling method is presented for gear transmission systems. • Several system-specific reliability models are established for gear set reliability estimation. • The differences between the traditional series system reliability model and the new model are illustrated.

  20. Estimating marginal properties of quantitative real-time PCR data using nonlinear mixed models

    DEFF Research Database (Denmark)

    Gerhard, Daniel; Bremer, Melanie; Ritz, Christian

    2014-01-01

    A unified modeling framework based on a set of nonlinear mixed models is proposed for flexible modeling of gene expression in real-time PCR experiments. Focus is on estimating the marginal or population-based derived parameters: cycle thresholds and ΔΔc(t), but retaining the conditional mixed mod...

  1. Time-Elastic Generative Model for Acceleration Time Series in Human Activity Recognition

    Directory of Open Access Journals (Sweden)

    Mario Munoz-Organero

    2017-02-01

    Full Text Available Body-worn sensors in general, and accelerometers in particular, have been widely used to detect human movements and activities. The execution of each type of movement by each particular individual generates sequences of time series of sensed data from which specific movement-related patterns can be assessed. Several machine learning algorithms have been used over windowed segments of sensed data to detect such patterns in activity recognition, based on intermediate features (either hand-crafted or automatically learned from data). The underlying assumption is that the computed features will capture statistical differences that can properly classify different movements and activities after a training phase based on sensed data. In order to achieve high accuracy and recall rates (and guarantee the generalization of the system to new users), the training data have to contain enough information to characterize all possible ways of executing the activity or movement to be detected. This could imply large amounts of data and a complex and time-consuming training phase, which has been shown to be even more relevant when automatically learning the optimal features to be used. In this paper, we present a novel generative model that is able to generate sequences of time series for characterizing a particular movement based on the time elasticity properties of the sensed data. The model is used to train a stack of auto-encoders in order to learn the particular features able to detect human movements. The results of movement detection using a newly generated database with information on five users performing six different movements are presented. The generalization of results using an existing database is also presented in the paper. The results show that the proposed mechanism is able to obtain acceptable recognition rates (F = 0.77) even in the case of using different people executing a different sequence of movements and using different

  2. Sensitivity analysis of machine-learning models of hydrologic time series

    Science.gov (United States)

    O'Reilly, A. M.

    2017-12-01

    Sensitivity analysis traditionally has been applied to assessing model response to perturbations in model parameters, where the parameters are those model input variables adjusted during calibration. Unlike physics-based models, where parameters represent real phenomena, the equivalent of parameters for machine-learning models are simply mathematical "knobs" that are automatically adjusted during training/testing/verification procedures. Thus the challenge of extracting knowledge of hydrologic system functionality from machine-learning models lies in their very nature, leading to the label "black box." Sensitivity analysis of the forcing-response behavior of machine-learning models, however, can provide understanding of how the physical phenomena represented by model inputs affect the physical phenomena represented by model outputs. As part of a previous study, hybrid spectral-decomposition artificial neural network (ANN) models were developed to simulate the observed behavior of hydrologic response contained in multidecadal datasets of lake water level, groundwater level, and spring flow. Model inputs used moving window averages (MWA) to represent various frequencies and frequency-band components of time series of rainfall and groundwater use. Using these forcing time series, the MWA-ANN models were trained to predict time series of lake water level, groundwater level, and spring flow at 51 sites in central Florida, USA. A time series of sensitivities for each MWA-ANN model was produced by perturbing the forcing time series and computing the change in the response time series per unit change in perturbation. Variations in forcing-response sensitivities are evident between types (lake, groundwater level, or spring), spatially (among sites of the same type), and temporally. Two common characteristics among sites are more uniform sensitivities to rainfall over time and notable increases in sensitivities to groundwater usage during significant drought periods.
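
    The forcing-response idea can be sketched in a few lines: build moving-window-average forcings, perturb one, and record the response change per unit perturbation. The model f below is a hypothetical linear stand-in for a trained MWA-ANN, and the windows and forcings are invented.

    ```python
    import numpy as np

    def mwa(series, window):
        """Trailing moving-window average."""
        return np.convolve(series, np.ones(window) / window, mode="valid")

    rng = np.random.default_rng(6)
    rain = rng.gamma(2.0, 1.5, size=2000)                 # daily rainfall forcing
    pumping = 5 + 0.01 * rng.normal(size=2000).cumsum()   # groundwater-use forcing

    def f(r30, r365, q30):
        """Hypothetical trained model mapping MWA forcings to groundwater level."""
        return 10 + 0.8 * r365 + 0.3 * r30 - 1.2 * q30

    n = 1000                                  # common trailing evaluation span
    r30, r365 = mwa(rain, 30)[-n:], mwa(rain, 365)[-n:]
    q30 = mwa(pumping, 30)[-n:]
    base = f(r30, r365, q30)
    eps = 0.1                                 # perturbation size
    sens_rain = (f(r30 + eps, r365 + eps, q30) - base) / eps
    sens_pump = (f(r30, r365, q30 + eps) - base) / eps
    print(f"rainfall sensitivity {sens_rain.mean():+.2f}, "
          f"pumping sensitivity {sens_pump.mean():+.2f}")
    ```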

  3. Variable selection for mixture and promotion time cure rate models.

    Science.gov (United States)

    Masud, Abdullah; Tu, Wanzhu; Yu, Zhangsheng

    2016-11-16

    Failure-time data with cured patients are common in clinical studies. Data from these studies are typically analyzed with cure rate models. Variable selection methods have not been well developed for cure rate models. In this research, we propose two least absolute shrinkage and selection operator (LASSO) based methods for variable selection in mixture and promotion time cure models with parametric or nonparametric baseline hazards. We conduct an extensive simulation study to assess the operating characteristics of the proposed methods. We illustrate the use of the methods using data from a study of childhood wheezing. © The Author(s) 2016.

  4. Development of an Agent Based Model to Estimate and Reduce Time to Restoration of Storm Induced Power Outages

    Science.gov (United States)

    Walsh, T.; Layton, T.; Mellor, J. E.

    2017-12-01

    Storm damage to the electric grid impacts 23 million electric utility customers and costs US consumers $119 billion annually. Current restoration techniques rely on the past experience of emergency managers. There are few analytical simulation and prediction tools available for utility managers to optimize storm recovery and decrease consumer cost, lost revenue and restoration time. We developed an agent based model (ABM) for storm recovery in Connecticut. An ABM is a computer modeling technique composed of agents that are given certain behavioral rules and operate in a given environment. It allows the user to simulate complex systems by varying user-defined parameters to study emergent, unpredicted behavior. The ABM incorporates the road network and electric utility grid for the state, is validated using actual storm event recoveries, and utilizes the Dijkstra routing algorithm to determine the best path for repair crews to travel between outages. The ABM has benefits for both researchers and utility managers. It can simulate complex system dynamics, rank variable importance, find tipping points that could significantly reduce restoration time or costs, and test a broad range of scenarios. It is a modular, scalable and adaptable technique that can simulate scenarios in silico to inform emergency managers before and during storm events, to optimize restoration strategies and better manage expectations of when power will be restored. Results indicate that total restoration time is strongly dependent on the number of crews. However, there is a threshold beyond which more crews will not decrease the restoration time, which depends on the total number of outages. The addition of outside crews is more beneficial for storms with a higher number of outages. The time to restoration increases linearly with increasing repair time, while the travel speed has little overall effect on total restoration time. Having crews travel to the nearest outage reduces the total restoration time.
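
    A hedged sketch of the nearest-outage dispatch rule on a toy road network, using Dijkstra distances from networkx; the grid network, outage locations, crew count and repair time are all invented, not taken from the Connecticut model.

    ```python
    import heapq
    import networkx as nx

    G = nx.grid_2d_graph(10, 10)                       # stand-in road network
    nx.set_edge_attributes(G, 1.0, "travel_time")

    outages = {(2, 3), (7, 8), (5, 5), (9, 1), (0, 9)}
    crews = {0: (0, 0), 1: (9, 9)}                     # crew id -> current node
    repair_time = 2.0
    events = [(0.0, cid) for cid in crews]             # (time crew is free, crew id)
    heapq.heapify(events)

    finish = 0.0
    while outages:
        t, cid = heapq.heappop(events)                 # next crew to become free
        dist = nx.single_source_dijkstra_path_length(
            G, crews[cid], weight="travel_time")       # Dijkstra from crew location
        target = min(outages, key=lambda o: dist[o])   # nearest-outage heuristic
        outages.discard(target)
        done = t + dist[target] + repair_time
        crews[cid] = target
        finish = max(finish, done)
        heapq.heappush(events, (done, cid))

    print(f"all outages restored at t={finish:.1f}")
    ```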

  5. Models for dependent time series

    CERN Document Server

    Tunnicliffe Wilson, Granville; Haywood, John

    2015-01-01

    Models for Dependent Time Series addresses the issues that arise and the methodology that can be applied when the dependence between time series is described and modeled. Whether you work in the economic, physical, or life sciences, the book shows you how to draw meaningful, applicable, and statistically valid conclusions from multivariate (or vector) time series data.The first four chapters discuss the two main pillars of the subject that have been developed over the last 60 years: vector autoregressive modeling and multivariate spectral analysis. These chapters provide the foundational mater

  6. Expediting model-based optoacoustic reconstructions with tomographic symmetries

    International Nuclear Information System (INIS)

    Lutzweiler, Christian; Deán-Ben, Xosé Luís; Razansky, Daniel

    2014-01-01

    Purpose: Image quantification in optoacoustic tomography implies the use of accurate forward models of the excitation, propagation, and detection of optoacoustic signals, while inversions with high spatial resolution usually involve very large matrices, leading to unreasonably long computation times. The development of fast and memory-efficient model-based approaches therefore represents an important challenge for advancing the quantitative and dynamic imaging capabilities of tomographic optoacoustic imaging. Methods: Herein, a method for the simplification and acceleration of model-based inversions, relying on inherent symmetries present in common tomographic acquisition geometries, has been introduced. The method is showcased for the case of cylindrical symmetries by using a polar image discretization of the time-domain optoacoustic forward model combined with efficient storage and inversion strategies. Results: The suggested methodology is shown to render fast and accurate model-based inversions in both numerical simulations and post mortem small animal experiments. In the case of a full-view detection scheme, the memory requirements are reduced by one order of magnitude while high-resolution reconstructions are achieved at video rate. Conclusions: By considering the rotational symmetry present in many tomographic optoacoustic imaging systems, the proposed methodology allows exploiting the advantages of model-based algorithms with feasible computational requirements and fast reconstruction times, so that its convenience and general applicability in optoacoustic imaging systems with tomographic symmetries is anticipated

  7. Time-dependent Networks as Models to Achieve Fast Exact Time-table Queries

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Jacob, Rico

    2001-01-01

    We consider efficient algorithms for exact time-table queries, i.e. algorithms that find optimal itineraries. We propose to use time-dependent networks as a model and show the advantages of this approach over space-time networks as models.

  8. Modeling Non-Gaussian Time Series with Nonparametric Bayesian Model.

    Science.gov (United States)

    Xu, Zhiguang; MacEachern, Steven; Xu, Xinyi

    2015-02-01

    We present a class of Bayesian copula models whose major components are the marginal (limiting) distribution of a stationary time series and the internal dynamics of the series. We argue that these are the two features with which an analyst is typically most familiar, and hence that these are natural components with which to work. For the marginal distribution, we use a nonparametric Bayesian prior distribution along with a cdf-inverse cdf transformation to obtain large support. For the internal dynamics, we rely on the traditionally successful techniques of normal-theory time series. Coupling the two components gives us a family of (Gaussian) copula transformed autoregressive models. The models provide coherent adjustments of time scales and are compatible with many extensions, including changes in volatility of the series. We describe basic properties of the models, show their ability to recover non-Gaussian marginal distributions, and use a GARCH modification of the basic model to analyze stock index return series. The models are found to provide better fit and improved short-range and long-range predictions than Gaussian competitors. The models are extensible to a large variety of fields, including continuous time models, spatial models, models for multiple series, models driven by external covariate streams, and non-stationary models.
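
    A minimal sketch of the copula construction itself: a latent Gaussian AR(1) pushed through the cdf / inverse-cdf transformation, so the dynamics stay normal-theory while the margin does not. The heavy-tailed Student-t marginal below stands in for the paper's nonparametric Bayesian marginal, and the parameters are invented.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)

    # 1. Latent Gaussian AR(1) with stationary unit variance.
    phi, n = 0.8, 2000
    z = np.empty(n)
    z[0] = rng.normal()
    for t in range(1, n):
        z[t] = phi * z[t - 1] + rng.normal(scale=np.sqrt(1 - phi ** 2))

    # 2. Gaussian copula transform: normal cdf, then the target marginal's
    #    inverse cdf.
    u = stats.norm.cdf(z)
    x = stats.t(df=3).ppf(u)

    print(f"marginal excess kurtosis: {stats.kurtosis(x):.2f}")   # heavy tails kept
    print(f"lag-1 autocorrelation:   {np.corrcoef(x[:-1], x[1:])[0, 1]:.2f}")
    ```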

  9. Breaking Computational Barriers: Real-time Analysis and Optimization with Large-scale Nonlinear Models via Model Reduction

    Energy Technology Data Exchange (ETDEWEB)

    Carlberg, Kevin Thomas [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Quantitative Modeling and Analysis; Drohmann, Martin [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Quantitative Modeling and Analysis; Tuminaro, Raymond S. [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Computational Mathematics; Boggs, Paul T. [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Quantitative Modeling and Analysis; Ray, Jaideep [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Quantitative Modeling and Analysis; van Bloemen Waanders, Bart Gustaaf [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Optimization and Uncertainty Estimation

    2014-10-01

    Model reduction for dynamical systems is a promising approach for reducing the computational cost of large-scale physics-based simulations to enable high-fidelity models to be used in many-query (e.g., Bayesian inference) and near-real-time (e.g., fast-turnaround simulation) contexts. While model reduction works well for specialized problems such as linear time-invariant systems, it is much more difficult to obtain accurate, stable, and efficient reduced-order models (ROMs) for systems with general nonlinearities. This report describes several advances that enable nonlinear ROMs to be deployed in a variety of time-critical settings. First, we present an error bound for the Gauss-Newton with Approximated Tensors (GNAT) nonlinear model reduction technique. This bound allows the state-space error for the GNAT method to be quantified when applied with the backward Euler time-integration scheme. Second, we present a methodology for preserving classical Lagrangian structure in nonlinear model reduction. This technique guarantees that important properties, such as energy conservation and symplectic time-evolution maps, are preserved when performing model reduction for models described by a Lagrangian formalism (e.g., molecular dynamics, structural dynamics). Third, we present a novel technique for decreasing the temporal complexity, defined as the number of Newton-like iterations performed over the course of the simulation, by exploiting time-domain data. Fourth, we describe a novel method for refining projection-based reduced-order models a posteriori using a goal-oriented framework similar to mesh-adaptive h-refinement in finite elements. The technique allows the ROM to generate arbitrarily accurate solutions, thereby providing the ROM with a 'failsafe' mechanism in the event of insufficient training data. Finally, we present the reduced-order model error surrogate (ROMES) method for statistically quantifying reduced-order-model

  10. Real-time model for simulating a tracked vehicle on deformable soils

    Directory of Open Access Journals (Sweden)

    Martin Meywerk

    2016-05-01

    Full Text Available Simulation is one way to gain insight into the behaviour of tracked vehicles on deformable soils. Many publications exist on this topic, but most of the simulations described there cannot be run in real time. The ability to run a simulation in real time is necessary for driving simulators. This article describes an approach for real-time simulation of a tracked vehicle on deformable soils. The components of the real-time model are as follows: a conventional wheeled vehicle simulated in the Multi Body System software TRUCKSim, a geometric description of the landscape, a track model, and an interaction model based on Bekker theory and the Janosi–Hanamoto shear relation between the track and the deformable soil, on one hand, and between the track and the vehicle wheels, on the other. The landscape, track model, soil model and the interaction are implemented in MATLAB/Simulink. The details of the real-time model are described in this article; a detailed description of the Multi Body System part is omitted. Simulations with the real-time model are compared to measurements and to a detailed Multi Body System–finite element method model of a tracked vehicle. An application of the real-time model in a driving simulator is presented, in which 13 drivers assess the comfort of a passive and an active suspension of a tracked vehicle.
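
    For context, track–soil interaction models of the Bekker/Janosi–Hanamoto type typically combine the Bekker pressure–sinkage relation with the Janosi–Hanamoto shear-stress relation (here \(z\) is sinkage, \(b\) contact width, \(j\) shear displacement, \(K\) the shear deformation modulus; the article may use variants):

    $$ p = \left(\frac{k_c}{b} + k_\phi\right) z^{\,n}, \qquad \tau = \left(c + p\tan\phi\right)\left(1 - e^{-j/K}\right). $$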

  11. Output-only modal parameter estimator of linear time-varying structural systems based on vector TAR model and least squares support vector machine

    Science.gov (United States)

    Zhou, Si-Da; Ma, Yuan-Chen; Liu, Li; Kang, Jie; Ma, Zhi-Sai; Yu, Lei

    2018-01-01

    Identification of time-varying modal parameters contributes to structural health monitoring, fault detection, vibration control, etc., of operational time-varying structural systems. It is a challenging task, however, because no more information is available for identifying time-varying systems than for time-invariant ones. This paper presents a vector time-dependent autoregressive model and least squares support vector machine based modal parameter estimator for linear time-varying structural systems in the case of output-only measurements. To reduce the computational cost, a Wendland compactly supported radial basis function is used to achieve sparsity of the Gram matrix. A Gamma-test-based non-parametric approach to selecting the regularization factor is adapted for the proposed estimator to replace time-consuming n-fold cross validation. A series of numerical examples illustrates the advantages of the proposed modal parameter estimator in suppressing overestimation and in handling short data records. A laboratory experiment further validates the proposed estimator.

  12. Hidden discriminative features extraction for supervised high-order time series modeling.

    Science.gov (United States)

    Nguyen, Ngoc Anh Thi; Yang, Hyung-Jeong; Kim, Sunhee

    2016-11-01

    In this paper, an orthogonal Tucker-decomposition-based extraction of high-order discriminative subspaces from a tensor-based time series data structure is presented, named Tensor Discriminative Feature Extraction (TDFE). TDFE uses category information to maximize the between-class scatter and minimize the within-class scatter, extracting optimal hidden discriminative feature subspaces that are simultaneously spanned by every modality for supervised tensor modeling. In this context, the proposed tensor-decomposition method provides the following benefits: i) it reduces dimensionality while robustly mining the underlying discriminative features; ii) it yields effective, interpretable features that lead to improved classification and visualization; and iii) it reduces the processing time during the training stage and the filtering of the projection by solving a generalized eigenvalue problem at each alternation step. Two real third-order tensor structures of time series datasets (an epilepsy electroencephalogram (EEG) modeled as channel × frequency bin × time frame, and microarray data modeled as gene × sample × time) were used to evaluate TDFE. The experimental results corroborate the advantages of the proposed method, with average classification accuracies of 98.26% and 89.63% for the epilepsy dataset and the microarray dataset, respectively. These averages represent an improvement on those of matrix-based algorithms and recent tensor-based discriminant-decomposition approaches, especially considering the small number of samples used in practice. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Accounting for Uncertainty in Decision Analytic Models Using Rank Preserving Structural Failure Time Modeling: Application to Parametric Survival Models.

    Science.gov (United States)

    Bennett, Iain; Paracha, Noman; Abrams, Keith; Ray, Joshua

    2018-01-01

    Rank Preserving Structural Failure Time models are one of the most commonly used statistical methods to adjust for treatment switching in oncology clinical trials. The method is often applied in a decision analytic model without appropriately accounting for additional uncertainty when determining the allocation of health care resources. The aim of this study is to describe novel approaches to adequately account for uncertainty when using a Rank Preserving Structural Failure Time model in a decision analytic model. Using two examples, we tested and compared the performance of the novel test-based method with the resampling bootstrap method and with the conventional approach of no adjustment. In the first example, we simulated life expectancy using a simple decision analytic model based on a hypothetical oncology trial with treatment switching. In the second example, we applied the adjustment method to published data when no individual patient data were available. Mean estimates of overall and incremental life expectancy were similar across methods. However, the bootstrapped and test-based estimates consistently produced greater estimates of uncertainty compared with the estimate without any adjustment applied. Similar results were observed when using the test-based approach on published data, showing that failing to adjust for uncertainty led to smaller confidence intervals. Both the bootstrapping and test-based approaches provide a solution to appropriately incorporate uncertainty, with the benefit that the latter can be implemented by researchers in the absence of individual patient data. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  14. Improving the Prediction of Total Surgical Procedure Time Using Linear Regression Modeling

    Directory of Open Access Journals (Sweden)

    Eric R. Edelman

    2017-06-01

    Full Text Available For efficient utilization of operating rooms (ORs), accurate schedules of assigned block time and sequences of patient cases need to be made. The quality of these planning tools is dependent on the accurate prediction of total procedure time (TPT) per case. In this paper, we attempt to improve the accuracy of TPT predictions by using linear regression models based on estimated surgeon-controlled time (eSCT) and other variables relevant to TPT. We extracted data from a Dutch benchmarking database of all surgeries performed in six academic hospitals in The Netherlands from 2012 till 2016. The final dataset consisted of 79,983 records, describing 199,772 h of total OR time. Potential predictors of TPT that were included in the subsequent analysis were eSCT, patient age, type of operation, American Society of Anesthesiologists (ASA) physical status classification, and type of anesthesia used. First, we computed the predicted TPT based on a previously described fixed ratio model for each record, multiplying eSCT by 1.33. This number is based on the research performed by van Veen-Berkx et al., which showed that 33% of SCT is generally a good approximation of anesthesia-controlled time (ACT). We then systematically tested all possible linear regression models to predict TPT using eSCT in combination with the other available independent variables. In addition, all regression models were again tested without eSCT as a predictor to predict ACT separately (which leads to TPT by adding SCT). TPT was most accurately predicted using a linear regression model based on the independent variables eSCT, type of operation, ASA classification, and type of anesthesia. This model performed significantly better than the fixed ratio model and the method of predicting ACT separately. Making use of these more accurate predictions in planning and sequencing algorithms may enable an increase in utilization of ORs, leading to significant financial and productivity related benefits.

  15. Improving the Prediction of Total Surgical Procedure Time Using Linear Regression Modeling.

    Science.gov (United States)

    Edelman, Eric R; van Kuijk, Sander M J; Hamaekers, Ankie E W; de Korte, Marcel J M; van Merode, Godefridus G; Buhre, Wolfgang F F A

    2017-01-01

    For efficient utilization of operating rooms (ORs), accurate schedules of assigned block time and sequences of patient cases need to be made. The quality of these planning tools is dependent on the accurate prediction of total procedure time (TPT) per case. In this paper, we attempt to improve the accuracy of TPT predictions by using linear regression models based on estimated surgeon-controlled time (eSCT) and other variables relevant to TPT. We extracted data from a Dutch benchmarking database of all surgeries performed in six academic hospitals in The Netherlands from 2012 till 2016. The final dataset consisted of 79,983 records, describing 199,772 h of total OR time. Potential predictors of TPT that were included in the subsequent analysis were eSCT, patient age, type of operation, American Society of Anesthesiologists (ASA) physical status classification, and type of anesthesia used. First, we computed the predicted TPT based on a previously described fixed ratio model for each record, multiplying eSCT by 1.33. This number is based on the research performed by van Veen-Berkx et al., which showed that 33% of SCT is generally a good approximation of anesthesia-controlled time (ACT). We then systematically tested all possible linear regression models to predict TPT using eSCT in combination with the other available independent variables. In addition, all regression models were again tested without eSCT as a predictor to predict ACT separately (which leads to TPT by adding SCT). TPT was most accurately predicted using a linear regression model based on the independent variables eSCT, type of operation, ASA classification, and type of anesthesia. This model performed significantly better than the fixed ratio model and the method of predicting ACT separately. Making use of these more accurate predictions in planning and sequencing algorithms may enable an increase in utilization of ORs, leading to significant financial and productivity related benefits.
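
    A minimal sketch of the comparison described above, using synthetic stand-in data (all column names and parameter values are hypothetical):

    ```python
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_absolute_error

    # Synthetic stand-in for the benchmarking data
    rng = np.random.default_rng(0)
    n = 5000
    df = pd.DataFrame({
        "eSCT": rng.gamma(4.0, 20.0, n),                     # minutes
        "operation_type": rng.choice(["A", "B", "C"], n),
        "asa_class": rng.choice([1, 2, 3, 4], n),
        "anesthesia_type": rng.choice(["general", "regional"], n),
    })
    act = 0.25 * df["eSCT"] + rng.normal(0, 5, n)            # anesthesia-controlled time
    df["tpt"] = df["eSCT"] + act

    # Baseline fixed-ratio model: TPT ~ 1.33 * eSCT
    baseline = 1.33 * df["eSCT"]

    # Linear regression on eSCT plus one-hot-encoded categorical predictors
    X = pd.get_dummies(df[["eSCT", "operation_type", "asa_class", "anesthesia_type"]],
                       columns=["operation_type", "asa_class", "anesthesia_type"])
    reg = LinearRegression().fit(X, df["tpt"])

    print("fixed-ratio MAE:", mean_absolute_error(df["tpt"], baseline))
    print("regression  MAE:", mean_absolute_error(df["tpt"], reg.predict(X)))
    ```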

  16. Compositional schedulability analysis of real-time actor-based systems.

    Science.gov (United States)

    Jaghoori, Mohammad Mahdi; de Boer, Frank; Longuet, Delphine; Chothia, Tom; Sirjani, Marjan

    2017-01-01

    We present an extension of the actor model with real time, including deadlines associated with messages and explicit application-level scheduling policies, e.g., "earliest deadline first", which can be associated with individual actors. Schedulability analysis in this setting amounts to checking whether, given a scheduling policy for each actor, every task is processed within its designated deadline. To check schedulability, we introduce a compositional automata-theoretic approach based on maximal use of model checking combined with testing. Behavioral interfaces define what an actor expects from the environment and the deadlines for messages given these assumptions. We use model checking to verify that actors match their behavioral interfaces. We extend timed automata refinement with the notion of deadlines and use it to define compatibility of actor environments with the behavioral interfaces. Model checking of compatibility is computationally hard, so we propose a special testing process. We show that the analyses are decidable and automate the process using the Uppaal model checker.

  17. An estimation model of population in China using time series DMSP night-time satellite imagery from 2002-2010

    Science.gov (United States)

    Zhang, Xiaoyong; Zhang, Zhijie; Chang, Yuguang; Chen, Zhengchao

    2015-12-01

    Accurate data on the spatial distribution of human population, and estimates of its potential growth, play a pivotal role in addressing and mitigating the heavy losses caused by earthquakes. Traditional demographic data are limited in spatial resolution and extremely hard to update. With the accessibility of massive DMSP/OLS night-time imagery, it is possible to model population distribution at the county level across China. To improve the continuity and consistency of time-series DMSP night-time satellite imagery obtained by different satellites in the same year, or by the same satellite in different years, over 2002-2010, a normalization method was applied for inter-calibration among the images, using image F162007 over Jixi city, whose socio-economic conditions have been relatively stable, as the reference. Through a binomial model, with an average R2 of 0.90, the correction factor for each year was then derived. The normalization clearly improved consistency compared with the previous data, which enhanced the corresponding accuracy of the model. A model relating population density to average night-time light intensity was then constructed for eight economic districts. Based on the year-to-year variation of the two model parameters, a prediction model for the following years was established, with R2 values for the slope and constant typically between 0.85 and 0.95 across regions. To validate the model, taking 2005 as an example, the population distribution per square kilometre was retrieved quantitatively from the model and compared with census-based statistical data; the differences were acceptable. In summary, the estimation model facilitates quick estimation and prediction of the population exposed, which is significant for decision-making in disaster relief.
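
    DMSP inter-annual calibration of this kind is commonly implemented as a second-order polynomial regression of each image's digital numbers (DN) against the chosen reference image, which is a hedged reading of the abstract's "binomial model":

    $$ DN_{\mathrm{cal}} = C_0 + C_1\,DN + C_2\,DN^{2}, $$

    with the coefficients \(C_0, C_1, C_2\) estimated separately for each satellite-year against the reference composite (here F162007).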

  18. Data-based Non-Markovian Model Inference

    Science.gov (United States)

    Ghil, Michael

    2015-04-01

    This talk concentrates on obtaining stable and efficient data-based models for simulation and prediction in the geosciences and life sciences. The proposed model derivation relies on using a multivariate time series of partial observations from a large-dimensional system, and the resulting low-order models are compared with the optimal closures predicted by the non-Markovian Mori-Zwanzig formalism of statistical physics. Multilayer stochastic models (MSMs) are introduced as both a very broad generalization and a time-continuous limit of existing multilevel, regression-based approaches to data-based closure, in particular of empirical model reduction (EMR). We show that the multilayer structure of MSMs can provide a natural Markov approximation to the generalized Langevin equation (GLE) of the Mori-Zwanzig formalism. A simple correlation-based stopping criterion for an EMR-MSM model is derived to assess how well it approximates the GLE solution. Sufficient conditions are given for the nonlinear cross-interactions between the constitutive layers of a given MSM to guarantee the existence of a global random attractor. This existence ensures that no blow-up can occur for a very broad class of MSM applications. The EMR-MSM methodology is first applied to a conceptual, nonlinear, stochastic climate model of coupled slow and fast variables, in which only slow variables are observed. The resulting reduced model with energy-conserving nonlinearities captures the main statistical features of the slow variables, even when there is no formal scale separation and the fast variables are quite energetic. Second, an MSM is shown to successfully reproduce the statistics of a partially observed, generalized Lotka-Volterra model of population dynamics in its chaotic regime. The positivity constraint on the solutions' components replaces here the quadratic-energy-preserving constraint of fluid-flow problems and it successfully prevents blow-up. This work is based on a close

  19. Markov Chain Modelling for Short-Term NDVI Time Series Forecasting

    Directory of Open Access Journals (Sweden)

    Stepčenko Artūrs

    2016-12-01

    Full Text Available In this paper, an NDVI time series forecasting model is developed based on a discrete-time, continuous-state Markov chain of suitable order. The normalised difference vegetation index (NDVI) is an indicator that describes the amount of chlorophyll (the green mass) and shows the relative density and health of vegetation; therefore, it is an important variable for vegetation forecasting. A Markov chain is a stochastic process defined on a state space, undergoing transitions from one state to another with certain probabilities. A Markov chain forecast model is flexible in accommodating various forecast assumptions and structures. The present paper discusses the considerations and techniques involved in building a Markov chain forecast model at each step. The continuous-state Markov chain model is described analytically. Finally, the application of the proposed Markov chain model is illustrated with reference to a set of NDVI time series data.
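
    A minimal sketch of a first-order Markov chain forecaster follows; here the continuous NDVI range is simply binned into discrete states, which is a simplification of the continuous-state chain used in the paper:

    ```python
    import numpy as np

    def fit_transition_matrix(series, n_states=10):
        """Estimate a first-order Markov transition matrix by binning the
        series into quantile-based discrete states (illustrative sketch)."""
        edges = np.quantile(series, np.linspace(0, 1, n_states + 1))
        states = np.clip(np.searchsorted(edges, series, side="right") - 1,
                         0, n_states - 1)
        P = np.zeros((n_states, n_states))
        for s, s_next in zip(states[:-1], states[1:]):
            P[s, s_next] += 1
        P /= np.maximum(P.sum(axis=1, keepdims=True), 1)   # row-normalize
        centers = 0.5 * (edges[:-1] + edges[1:])
        return P, centers, states

    def forecast(P, centers, state, steps=1):
        """Expected value after `steps` transitions from the current state."""
        p = np.eye(len(centers))[state]
        for _ in range(steps):
            p = p @ P
        return p @ centers

    rng = np.random.default_rng(1)
    ndvi = 0.5 + 0.2 * np.sin(np.linspace(0, 20, 300)) \
               + 0.02 * rng.standard_normal(300)          # toy NDVI series
    P, centers, states = fit_transition_matrix(ndvi)
    print(forecast(P, centers, states[-1], steps=1))
    ```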

  20. From Safety Critical Java Programs to Timed Process Models

    DEFF Research Database (Denmark)

    Thomsen, Bent; Luckow, Kasper Søe; Thomsen, Lone Leth

    2015-01-01

    frameworks, we have in recent years pursued an agenda of translating hard-real-time embedded safety critical programs written in the Safety Critical Java Profile [33] into networks of timed automata [4] and subjecting those to automated analysis using the UPPAAL model checker [10]. Several tools have been...... built and the tools have been used to analyse a number of systems for properties such as worst case execution time, schedulability and energy optimization [12–14,19,34,36,38]. In this paper we will elaborate on the theoretical underpinning of the translation from Java programs to timed automata models...... and briefly summarize some of the results based on this translation. Furthermore, we discuss future work, especially relations to the work in [16,24] as Java recently has adopted first class higher order functions in the form of lambda abstractions....

  1. Agent-based Modeling Automated: Data-driven Generation of Innovation Diffusion Models

    NARCIS (Netherlands)

    Jensen, T.; Chappin, E.J.L.

    2016-01-01

    Simulation modeling is useful to gain insights into driving mechanisms of diffusion of innovations. This study aims to introduce automation to make identification of such mechanisms with agent-based simulation modeling less costly in time and labor. We present a novel automation procedure in which

  2. Lower and Upper Bounds in Zone Based Abstractions of Timed Automata

    DEFF Research Database (Denmark)

    Behrmann, Gerd; Bouyer, Patricia; Larsen, Kim Guldstrand

    2004-01-01

    Timed automata have an infinite semantics. For verification purposes, one usually uses zone based abstractions w.r.t. the maximal constants to which clocks of the timed automaton are compared. We show that by distinguishing maximal lower and upper bounds, significantly coarser abstractions can...... dramatically increases the scalability of the real-time model checker Uppaal....

  3. Chemistry resolved kinetic flow modeling of TATB based explosives

    Science.gov (United States)

    Vitello, Peter; Fried, Laurence E.; William, Howard; Levesque, George; Souers, P. Clark

    2012-03-01

    Detonation waves in insensitive, TATB-based explosives are believed to have multiple time-scale regimes. The initial burn rate of such explosives has a sub-microsecond time scale. However, significant late-time slow energy release is believed to occur due to diffusion-limited growth of carbon. On intermediate time scales, concentrations of product species likely change from being in equilibrium to being kinetic-rate controlled. We use the thermo-chemical code CHEETAH linked to an ALE hydrodynamics code to model detonations. We term our model chemistry resolved kinetic flow, since CHEETAH tracks the time-dependent concentrations of individual species in the detonation wave and calculates EOS values based on those concentrations. We present here two variants of our new rate model and comparisons with hot, ambient, and cold experimental data for PBX 9502.

  4. Time-Dependent Networks as Models to Achieve Fast Exact Time-Table Queries

    DEFF Research Database (Denmark)

    Brodal, Gert Stølting; Jacob, Rico

    2003-01-01

    We consider efficient algorithms for exact time-table queries, i.e. algorithms that find optimal itineraries for travelers using a train system. We propose to use time-dependent networks as a model and show advantages of this approach over space-time networks as models.
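
    The time-dependent network idea can be sketched as a Dijkstra variant in which each edge carries an arrival-time function rather than a fixed weight; assuming FIFO edges (waiting never helps), the usual label-setting argument still applies. This is an illustration of the modeling approach, not the authors' algorithm:

    ```python
    import heapq

    def earliest_arrival(graph, source, target, depart_time):
        """Dijkstra on a time-dependent network: each edge carries a function
        f(t) -> arrival time at the head node when leaving the tail at t."""
        best = {source: depart_time}
        heap = [(depart_time, source)]
        while heap:
            t, u = heapq.heappop(heap)
            if u == target:
                return t
            if t > best.get(u, float("inf")):
                continue  # stale heap entry
            for v, f in graph.get(u, []):
                t_arr = f(t)  # e.g., next train from u to v at or after t
                if t_arr < best.get(v, float("inf")):
                    best[v] = t_arr
                    heapq.heappush(heap, (t_arr, v))
        return float("inf")

    # Toy timetable: trains leave A->B every 10 min (8 min ride),
    # and B->C every 15 min (12 min ride).
    graph = {
        "A": [("B", lambda t: (t // 10 + 1) * 10 + 8)],
        "B": [("C", lambda t: (t // 15 + 1) * 15 + 12)],
    }
    print(earliest_arrival(graph, "A", "C", depart_time=0))  # -> 42
    ```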

  5. Time-aggregation effects on the baseline of continuous-time and discrete-time hazard models

    NARCIS (Netherlands)

    ter Hofstede, F.; Wedel, M.

    In this study we reinvestigate the effect of time-aggregation for discrete- and continuous-time hazard models. We reanalyze the results of a previous Monte Carlo study by ter Hofstede and Wedel (1998), in which the effects of time-aggregation on the parameter estimates of hazard models were

  6. The Drift Diffusion Model can account for the accuracy and reaction time of value-based choices under high and low time pressure

    Directory of Open Access Journals (Sweden)

    Milica Milosavljevic

    2010-10-01

    Full Text Available An important open problem is how values are compared to make simple choices. A natural hypothesis is that the brain carries out the computations associated with the value comparisons in a manner consistent with the Drift Diffusion Model (DDM), since this model has been able to account for a large amount of data in other domains. We investigated the ability of four different versions of the DDM to explain the data in a real binary food choice task under conditions of high and low time pressure. We found that a seven-parameter version of the DDM can account for the choice and reaction time data with high accuracy, in both the high and low time pressure conditions. The changes associated with the introduction of time pressure could be traced to changes in two key model parameters: the barrier height and the noise in the slope of the drift process.
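
    A minimal simulation sketch of the basic DDM follows (Euler-Maruyama accumulation to a symmetric barrier, plus a non-decision time); the seven-parameter version in the paper adds further components, and modeling time pressure as a lowered barrier follows the abstract's account. All parameter values are illustrative:

    ```python
    import numpy as np

    def simulate_ddm(drift, barrier, noise=1.0, dt=0.001, ndt=0.3, rng=None):
        """One drift-diffusion trial: evidence accumulates with the given
        drift until it hits +barrier (choice 1) or -barrier (choice 0)."""
        rng = rng or np.random.default_rng()
        x, t = 0.0, 0.0
        while abs(x) < barrier:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        return (x > 0), t + ndt

    rng = np.random.default_rng(0)
    for barrier in (1.5, 0.8):   # low vs. high time pressure
        trials = [simulate_ddm(drift=0.5, barrier=barrier, rng=rng)
                  for _ in range(2000)]
        acc = np.mean([c for c, _ in trials])
        rt = np.mean([t for _, t in trials])
        print(f"barrier={barrier}: accuracy={acc:.2f}, mean RT={rt:.2f}s")
    ```

    Lowering the barrier produces faster but less accurate choices, which is the qualitative speed-accuracy trade-off the time-pressure manipulation exploits.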

  7. Forecasting Hourly Water Demands With Seasonal Autoregressive Models for Real-Time Application

    Science.gov (United States)

    Chen, Jinduan; Boccelli, Dominic L.

    2018-02-01

    Consumer water demands are not typically measured at temporal or spatial scales adequate to support real-time decision making, and recent approaches for estimating unobserved demands using observed hydraulic measurements are generally not capable of forecasting demands and uncertainty information. While time series modeling has shown promise for representing total system demands, these models have generally not been evaluated at spatial scales appropriate for representative real-time modeling. This study investigates the use of a double-seasonal time series model to capture daily and weekly autocorrelations to both total system demands and regional aggregated demands at a scale that would capture demand variability across a distribution system. Emphasis was placed on the ability to forecast demands and quantify uncertainties with results compared to traditional time series pattern-based demand models as well as nonseasonal and single-seasonal time series models. Additional research included the implementation of an adaptive-parameter estimation scheme to update the time series model when unobserved changes occurred in the system. For two case studies, results showed that (1) for the smaller-scale aggregated water demands, the log-transformed time series model resulted in improved forecasts, (2) the double-seasonal model outperformed other models in terms of forecasting errors, and (3) the adaptive adjustment of parameters during forecasting improved the accuracy of the generated prediction intervals. These results illustrate the capabilities of time series modeling to forecast both water demands and uncertainty estimates at spatial scales commensurate for real-time modeling applications and provide a foundation for developing a real-time integrated demand-hydraulic model.
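
    A rough stand-in for a double-seasonal model is to fit lag-1, daily, and weekly seasonal lags by least squares on the log scale; the study's model is a seasonal time series formulation, so this only sketches the idea (all values synthetic):

    ```python
    import numpy as np

    def fit_double_seasonal_ar(y, daily=24, weekly=168):
        """Least-squares AR with lag-1, daily, and weekly seasonal lags on
        log demand (rough stand-in for a double-seasonal model)."""
        z = np.log(y)
        t = np.arange(weekly, len(z))
        X = np.column_stack([np.ones(t.size),
                             z[t - 1], z[t - daily], z[t - weekly]])
        beta, *_ = np.linalg.lstsq(X, z[t], rcond=None)
        return beta

    def forecast_next(y, beta, daily=24, weekly=168):
        z = np.log(y)
        x = np.array([1.0, z[-1], z[-daily], z[-weekly]])
        return np.exp(x @ beta)

    rng = np.random.default_rng(2)
    hours = np.arange(24 * 7 * 8)  # eight weeks of hourly demand
    y = (100 + 20 * np.sin(2 * np.pi * hours / 24)
             + 10 * np.sin(2 * np.pi * hours / 168)
             + rng.normal(0, 3, hours.size))
    beta = fit_double_seasonal_ar(y)
    print(forecast_next(y, beta))
    ```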

  8. Integral definition of transition time in the Landau-Zener model

    International Nuclear Information System (INIS)

    Yan Yue; Wu Biao

    2010-01-01

    We give a general definition for the transition time in the Landau-Zener model. This definition allows us to compute numerically the Landau-Zener transition time at any sweeping rate without ambiguity in both diabatic and adiabatic bases. With this new definition, analytical results are obtained in both the adiabatic limit and the sudden limit.
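
    For context, in the standard two-level Landau-Zener problem with diabatic energies \(\pm\lambda t\) and constant coupling \(\Delta\), the probability of a diabatic transition is

    $$ P_{\mathrm{LZ}} = \exp\!\left(-\frac{\pi\Delta^{2}}{\hbar\lambda}\right), $$

    and the transition time discussed in the article is, loosely, the interval around \(t=0\) over which the populations settle to this asymptotic value; the integral definition makes that interval precise in either basis.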

  9. A time-dependent stop operator for modeling a class of singular hysteresis loops in a piezoceramic actuator

    International Nuclear Information System (INIS)

    Al Janaideh, Mohammad

    2013-01-01

    We present a time-dependent stop operator-based Prandtl–Ishlinskii model to characterize singular hysteresis loops in a piezoceramic actuator. The model is constructed based on the time-dependent threshold. The inverse time-dependent stop operator-based Prandtl–Ishlinskii model is obtained analytically and it can be applied as a feedforward compensator to compensate for singular hysteresis loops in a class of smart-material-based actuators. The objective of this study is to present an invertible Prandtl–Ishlinskii model that can be applied as a feedforward compensator to compensate for singular hysteresis loops without inserting a feedback control system.

  10. A time-dependent stop operator for modeling a class of singular hysteresis loops in a piezoceramic actuator

    Energy Technology Data Exchange (ETDEWEB)

    Al Janaideh, Mohammad, E-mail: aljanaideh@gmail.com [Department of Mechatronics Engineering, The University of Jordan, 11942 Amman (Jordan)

    2013-03-15

    We present a time-dependent stop operator-based Prandtl–Ishlinskii model to characterize singular hysteresis loops in a piezoceramic actuator. The model is constructed based on the time-dependent threshold. The inverse time-dependent stop operator-based Prandtl–Ishlinskii model is obtained analytically and it can be applied as a feedforward compensator to compensate for singular hysteresis loops in a class of smart-material-based actuators. The objective of this study is to present an invertible Prandtl–Ishlinskii model that can be applied as a feedforward compensator to compensate for singular hysteresis loops without inserting a feedback control system.
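
    A minimal discretized sketch of a time-dependent stop operator, and a Prandtl–Ishlinskii superposition built from it, is shown below; thresholds, weights, and the threshold's time dependence are illustrative, not the paper's formulation:

    ```python
    import numpy as np

    def stop_operator(u, r):
        """Discretized stop operator with (possibly time-varying) threshold
        r[k]:  e[k] = clip(e[k-1] + u[k] - u[k-1], -r[k], r[k])."""
        e = np.zeros_like(u)
        for k in range(1, len(u)):
            e[k] = np.clip(e[k - 1] + u[k] - u[k - 1], -r[k], r[k])
        return e

    t = np.linspace(0, 4 * np.pi, 1000)
    u = np.sin(t)                        # input to the actuator model
    r = 0.5 + 0.1 * np.exp(-0.2 * t)     # time-dependent threshold (toy)

    # A Prandtl-Ishlinskii model is a weighted sum of stop operators
    # over several threshold levels.
    thresholds = [0.2, 0.5, 0.8]
    weights = [0.5, 0.3, 0.2]
    y = sum(w * stop_operator(u, scale * r)
            for w, scale in zip(weights, thresholds))
    print(y[:5])
    ```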

  11. Advantage of make-to-stock strategy based on linear mixed-effect model: a comparison with regression, autoregressive, time series, and exponential smoothing models

    Directory of Open Access Journals (Sweden)

    Yu-Pin Liao

    2017-11-01

    Full Text Available In the past few decades, demand forecasting has become relatively difficult due to rapid changes in the global environment. This research illustrates the use of the make-to-stock (MTS) production strategy in order to explain how forecasting plays an essential role in business management. The linear mixed-effect (LME) model has been extensively developed and is widely applied in various fields; however, no study has used the LME model for business forecasting. We suggest that the LME model be used as a tool for prediction and for coping with environmental complexity. The data analysis is based on real data from an international display company, where the company needs accurate demand forecasting before adopting an MTS strategy. The forecasting result from the LME model is compared to commonly used approaches, including the regression model, autoregressive model, time series model, and exponential smoothing model, with the results revealing that the prediction performance of the LME model is more stable than that of the other methods. Furthermore, product types in the data are regarded as a random effect in the LME model, hence the demands of all types can be predicted simultaneously using a single LME model. Some approaches, by contrast, require splitting the data into different type categories and then predicting the demand of each type by establishing a separate model for it. This feature also demonstrates the practicability of the LME model in real business operations.
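
    In practice such a model can be fitted in one call, with product type as the grouping (random) effect; a sketch using statsmodels on synthetic data (all names and values hypothetical):

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic monthly demand for several product types
    rng = np.random.default_rng(1)
    rows = []
    for ptype in ["panel_A", "panel_B", "panel_C"]:
        base, trend = rng.uniform(80, 120), rng.uniform(-1, 2)
        for month in range(36):
            rows.append({"product_type": ptype, "month": month,
                         "demand": base + trend * month + rng.normal(0, 5)})
    df = pd.DataFrame(rows)

    # Random intercept and trend per product type:
    # one model predicts all types simultaneously.
    fit = smf.mixedlm("demand ~ month", df, groups=df["product_type"],
                      re_formula="~month").fit()
    print(fit.params)
    ```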

  12. Time-varying parameter models for catchments with land use change: the importance of model structure

    Science.gov (United States)

    Pathiraja, Sahani; Anghileri, Daniela; Burlando, Paolo; Sharma, Ashish; Marshall, Lucy; Moradkhani, Hamid

    2018-05-01

    Rapid population and economic growth in Southeast Asia has been accompanied by extensive land use change with consequent impacts on catchment hydrology. Modeling methodologies capable of handling changing land use conditions are therefore becoming ever more important and are receiving increasing attention from hydrologists. A recently developed data-assimilation-based framework that allows model parameters to vary through time in response to signals of change in observations is considered for a medium-sized catchment (2880 km2) in northern Vietnam experiencing substantial but gradual land cover change. We investigate the efficacy of the method as well as the importance of the chosen model structure in ensuring the success of a time-varying parameter method. The method was used with two lumped daily conceptual models (HBV and HyMOD) that gave good-quality streamflow predictions during pre-change conditions. Although both time-varying parameter models gave improved streamflow predictions under changed conditions compared to the time-invariant parameter model, persistent biases for low flows were apparent in the HyMOD case. It was found that HyMOD was not suited to representing the modified baseflow conditions, resulting in extreme and unrealistic time-varying parameter estimates. This work shows that the chosen model can be critical for ensuring the time-varying parameter framework successfully models streamflow under changing land cover conditions. It can also be used to determine whether land cover changes (and not just meteorological factors) contribute to the observed hydrologic changes in retrospective studies where the lack of a paired control catchment precludes such an assessment.

  13. Time-varying parameter models for catchments with land use change: the importance of model structure

    Directory of Open Access Journals (Sweden)

    S. Pathiraja

    2018-05-01

    Full Text Available Rapid population and economic growth in Southeast Asia has been accompanied by extensive land use change with consequent impacts on catchment hydrology. Modeling methodologies capable of handling changing land use conditions are therefore becoming ever more important and are receiving increasing attention from hydrologists. A recently developed data-assimilation-based framework that allows model parameters to vary through time in response to signals of change in observations is considered for a medium-sized catchment (2880 km2) in northern Vietnam experiencing substantial but gradual land cover change. We investigate the efficacy of the method as well as the importance of the chosen model structure in ensuring the success of a time-varying parameter method. The method was used with two lumped daily conceptual models (HBV and HyMOD) that gave good-quality streamflow predictions during pre-change conditions. Although both time-varying parameter models gave improved streamflow predictions under changed conditions compared to the time-invariant parameter model, persistent biases for low flows were apparent in the HyMOD case. It was found that HyMOD was not suited to representing the modified baseflow conditions, resulting in extreme and unrealistic time-varying parameter estimates. This work shows that the chosen model can be critical for ensuring the time-varying parameter framework successfully models streamflow under changing land cover conditions. It can also be used to determine whether land cover changes (and not just meteorological factors) contribute to the observed hydrologic changes in retrospective studies where the lack of a paired control catchment precludes such an assessment.

  14. Towards longitudinal activity-based models of travel demand

    NARCIS (Netherlands)

    Arentze, T.A.; Timmermans, H.J.P.; Lo, H.P.; Leung, Stephen C.H.; Tan, Susanna M.L.

    2008-01-01

    Existing activity-based models of travel demand consider a day as the time unit of observation and predict activity patterns of individuals for a typical or average day. In this study we argue that the use of a time span of one day severely limits the ability of the models to predict responsive

  15. Semi-active control of magnetorheological elastomer base isolation system utilising learning-based inverse model

    Science.gov (United States)

    Gu, Xiaoyu; Yu, Yang; Li, Jianchun; Li, Yancheng

    2017-10-01

    Magnetorheological elastomer (MRE) base isolation has attracted considerable attention over the last two decades thanks to its self-adaptability and high-authority controllability in the semi-active control realm. Due to the inherent nonlinearity and hysteresis of the devices, it is challenging to obtain a mathematical model that adequately describes the inverse dynamics of MRE base isolators and hence to realise control synthesis of the MRE base isolation system. Two aims are achieved in this paper: i) development of an inverse model for the MRE base isolator based on an optimal general regression neural network (GRNN); ii) numerical and experimental validation of a real-time semi-active controlled MRE base isolation system utilising an LQR controller and the GRNN inverse model. The superiority of the GRNN inverse model lies in its need for fewer input variables, its faster training process, and its prompt calculation response, which make it suitable for online training and real-time control. The control system is integrated with a three-storey shear building model, and the control performance of the MRE base isolation system is compared with the bare building, a passive-on isolation system, and a passive-off isolation system. Testing results show that the proposed GRNN inverse model is able to reproduce the desired control force accurately and that the MRE base isolation system can effectively suppress the structural responses compared to the passive isolation systems.
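
    A GRNN is essentially Gaussian-kernel-weighted regression (a normalized radial-basis average of the training targets), which makes the inverse-model idea easy to sketch; the training pairs and variables below are illustrative stand-ins, not the paper's data:

    ```python
    import numpy as np

    def grnn_predict(X_train, y_train, X_query, sigma=0.5):
        """General regression neural network: Gaussian-kernel-weighted
        average of training targets (Nadaraya-Watson form). Inverse-model
        use: X = (desired force, measured responses), y = control input."""
        d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2.0 * sigma ** 2))
        return (w @ y_train) / np.maximum(w.sum(axis=1), 1e-12)

    rng = np.random.default_rng(3)
    X = rng.uniform(-1, 1, size=(200, 2))   # e.g., [desired force, displacement]
    y = np.tanh(X[:, 0]) + 0.3 * X[:, 1]    # stand-in inverse dynamics
    Xq = rng.uniform(-1, 1, size=(5, 2))
    print(grnn_predict(X, y, Xq))
    ```

    The absence of iterative weight training (prediction is a closed-form kernel average over stored samples) is what makes the GRNN attractive for the online, real-time setting the abstract describes.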

  16. Time series modeling and forecasting using memetic algorithms for regime-switching models.

    Science.gov (United States)

    Bergmeir, Christoph; Triguero, Isaac; Molina, Daniel; Aznarte, José Luis; Benitez, José Manuel

    2012-11-01

    In this brief, we present a novel model fitting procedure for the neuro-coefficient smooth transition autoregressive model (NCSTAR), as presented by Medeiros and Veiga. The model is endowed with a statistically founded iterative building procedure and can be interpreted in terms of fuzzy rule-based systems. The interpretability of the generated models and a mathematically sound building procedure are two very important properties of forecasting models. The model fitting procedure employed by the original NCSTAR is a combination of initial parameter estimation by a grid search procedure with a traditional local search algorithm. We propose a different fitting procedure, using a memetic algorithm, in order to obtain more accurate models. An empirical evaluation of the method is performed, applying it to various real-world time series originating from three forecasting competitions. The results indicate that we can significantly enhance the accuracy of the models, making them competitive with models commonly used in the field.

  17. Real-Time Management of Groundwater Resources Based on Wireless Sensors Networks

    Directory of Open Access Journals (Sweden)

    Qingguo Zhou

    2018-01-01

    Full Text Available Groundwater plays a vital role in arid inland river basins, where groundwater management is critical to the sustainable development of the area's economy and ecology. Traditional sustainable management approaches analyze different scenarios subject to assumptions or construct simulation–optimization models to obtain optimal strategies. However, the groundwater system is time-varying due to exogenous inputs, so groundwater management based on static data quickly becomes outdated. As part of the Heihe River Basin (HRB), a typical arid river basin in Northwestern China, the Daman irrigation district was selected as the study area in this paper. First, a simulation–optimization model was constructed to optimize the pumping rates of the study area according to groundwater level constraints. Three different groundwater level constraints were assigned to explore sustainable strategies for groundwater resources. The results indicated that the simulation–optimization model was capable of identifying the optimal pumping yields and satisfying the given constraints. Second, the simulation–optimization model was integrated with wireless sensor network (WSN) technology to give the management real-time capability. The results demonstrated this time-varying feature: observations, constraints, and decision variables could all be updated in real time. Furthermore, a web-based platform was developed to facilitate the decision-making process. This study combined a simulation–optimization model with WSN techniques to monitor and manage the scarce groundwater resource in real time, which can support decision-making related to sustainable management.

  18. The Development and Evaluation of a Time Based Network Model of the Industrial Engineering Technology Curriculum at the Southern Technical Institute.

    Science.gov (United States)

    Bannerman, James W.

    A practicum was conducted to develop a scientific management tool that would assist students in obtaining a systems view of their college curriculum and to coordinate planning with curriculum requirements. A modification of the critical path method was employed and the result was a time-based network model of the Industrial Engineering Technology…

  19. Real-Time Agent-Based Modeling Simulation with in-situ Visualization of Complex Biological Systems: A Case Study on Vocal Fold Inflammation and Healing.

    Science.gov (United States)

    Seekhao, Nuttiiya; Shung, Caroline; JaJa, Joseph; Mongeau, Luc; Li-Jessen, Nicole Y K

    2016-05-01

    We present an efficient and scalable scheme for implementing agent-based modeling (ABM) simulation with in situ visualization of large complex systems on heterogeneous computing platforms. The scheme is designed to make optimal use of the resources available on a heterogeneous platform consisting of a multicore CPU and a GPU, resulting in minimal to no resource idle time. Furthermore, the scheme was implemented under a client-server paradigm that enables remote users to visualize and analyze simulation data as they are generated at each time step of the model. Performance on a simulation case study of vocal fold inflammation and wound healing with 3.8 million agents shows 35× and 7× speedups in execution time over single-core and multi-core CPU implementations, respectively. Each iteration of the model took less than 200 ms to simulate, visualize and send the results to the client. This enables users to monitor the simulation in real time and modify its course as needed.

  20. Development of constitutive model for composites exhibiting time dependent properties

    International Nuclear Information System (INIS)

    Pupure, L; Joffe, R; Varna, J; Nyström, B

    2013-01-01

    Regenerated cellulose fibres and their composites exhibit highly nonlinear behaviour. The mechanical response of these materials can be successfully described by the model developed by Schapery for time-dependent materials. However, this model requires input parameters that are determined experimentally via a large number of time-consuming tests on the studied composite material. If, for example, the volume fraction of fibres is changed, we have a different material, and a new series of experiments on this new material is required. The ultimate objective of our studies is therefore to develop a model that determines the composite behaviour from the behaviour of the composite's constituents. This paper gives an overview of the problems and difficulties associated with the development, implementation and verification of such a model.
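
    For reference, the Schapery model mentioned above is commonly written in its uniaxial single-integral form, with stress-dependent nonlinearizing parameters \(g_0, g_1, g_2\) and shift factor \(a_\sigma\) (all equal to 1 in the linear viscoelastic limit; the paper may use a variant):

    $$ \varepsilon(t) = g_0\, D_0\, \sigma(t) + g_1 \int_0^t \Delta D(\psi - \psi')\, \frac{\mathrm{d}\,(g_2\sigma)}{\mathrm{d}\tau}\, \mathrm{d}\tau, \qquad \psi(t) = \int_0^t \frac{\mathrm{d}t'}{a_\sigma}, $$

    where \(D_0\) and \(\Delta D\) are the instantaneous and transient creep compliances and \(\psi\) is the reduced time.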

  1. Modeling of Volatility with Non-linear Time Series Model

    OpenAIRE

    Kim Song Yon; Kim Mun Chol

    2013-01-01

    In this paper, non-linear time series models are used to describe volatility in financial time series data. To describe volatility, two non-linear time series models are combined to form a TAR (threshold autoregressive) model with an AARCH (asymmetric autoregressive conditional heteroskedasticity) error term, and its parameter estimation is studied.
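
    A two-regime version of such a model can be written as follows (one common formulation; the paper's exact specification, including its AARCH form, may differ):

    $$ y_t = \begin{cases} \phi_0^{(1)} + \phi_1^{(1)} y_{t-1} + \varepsilon_t, & y_{t-d} \le r,\\[2pt] \phi_0^{(2)} + \phi_1^{(2)} y_{t-1} + \varepsilon_t, & y_{t-d} > r, \end{cases} \qquad \varepsilon_t = \sigma_t \eta_t, \quad \sigma_t^2 = \omega + \alpha\,(\varepsilon_{t-1} - \gamma)^2, $$

    where the threshold \(r\) and delay \(d\) switch the autoregressive regime, and the shift \(\gamma\) makes the conditional variance respond asymmetrically to positive and negative shocks.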

  2. Time-based prospective memory in young children-Exploring executive functions as a developmental mechanism.

    Science.gov (United States)

    Kretschmer, Anett; Voigt, Babett; Friedrich, Sylva; Pfeiffer, Kathrin; Kliegel, Matthias

    2014-01-01

    The present study investigated time-based prospective memory (PM) during the transition from kindergarten/preschool to school age and applied mediation models to test the impact of executive functions (working memory, inhibitory control) and time monitoring on time-based PM development. Twenty-five preschool (age: M = 5.75, SD = 0.28) and 22 primary school children (age: M = 7.83, SD = 0.39) participated. To examine time-based PM, children had to play a computer-based driving game requiring them to drive a car on a road without hitting other cars (ongoing task) and to refill the car regularly according to a fuel gauge, which served as a clock equivalent (PM task). The level of gas left in the fuel gauge was not displayed on the screen, and children had to monitor it via a button press (time monitoring). Results revealed a developmental increase in time-based PM performance from preschool to school age. The mediation models revealed that only working memory influenced PM development. Neither inhibitory control alone nor the mediation paths leading from both executive functions to time monitoring could explain the link between age and time-based PM. Thus, the results of the present study suggest that working memory may be one key cognitive process driving the developmental growth of time-based PM during the transition from preschool to school age.

  3. Energy-based method for near-real time modeling of sound field in complex urban environments.

    Science.gov (United States)

    Pasareanu, Stephanie M; Remillieux, Marcel C; Burdisso, Ricardo A

    2012-12-01

    Prediction of the sound field in large urban environments has been limited thus far by the heavy computational requirements of conventional numerical methods such as boundary element (BE) or finite-difference time-domain (FDTD) methods. Recently, a considerable amount of work has been devoted to developing energy-based methods for this application, and results have shown the potential to compete with conventional methods. However, these developments have been limited to two-dimensional (2-D) studies (along street axes), and no real description of the phenomena at issue has been given. Here the mathematical theory of diffusion is used to predict the sound field in 3-D complex urban environments. A 3-D diffusion equation is implemented by means of a simple finite-difference scheme and applied to two different types of urban configurations. This modeling approach is validated against FDTD and geometrical acoustics (GA) solutions, showing good overall agreement. The role played by diffraction near building edges close to the source is discussed, and suggestions are made on the possibility of accurately predicting the sound field in complex urban environments in near-real-time simulations.
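
    For reference, acoustic diffusion models of this kind are typically written for the sound energy density \(w\) as (standard form, possibly differing in detail from the paper's):

    $$ \frac{\partial w(\mathbf{r},t)}{\partial t} - D\,\nabla^2 w(\mathbf{r},t) + c\,m\,w(\mathbf{r},t) = q(\mathbf{r},t), \qquad D = \frac{\lambda c}{3}, $$

    where \(c\) is the speed of sound, \(\lambda\) the mean free path of the environment, \(m\) the atmospheric attenuation coefficient, and \(q\) the source term; this is the equation a simple finite-difference scheme can march in time at a small fraction of the cost of FDTD.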

  4. Time series modeling, computation, and inference

    CERN Document Server

    Prado, Raquel

    2010-01-01

    The authors systematically develop a state-of-the-art analysis and modeling of time series. … this book is well organized and well written. The authors present various statistical models for engineers to solve problems in time series analysis. Readers no doubt will learn state-of-the-art techniques from this book.-Hsun-Hsien Chang, Computing Reviews, March 2012My favorite chapters were on dynamic linear models and vector AR and vector ARMA models.-William Seaver, Technometrics, August 2011… a very modern entry to the field of time-series modelling, with a rich reference list of the current lit

  5. Transient response of a five-region nonequilibrium real-time pressurizer model

    International Nuclear Information System (INIS)

    Fakory, M.R.; Seifaee, F.

    1987-01-01

    Recent accidents at nuclear power plants in the US and abroad have prompted accurate analysis and simulation of plant systems and the training of reactor operators on plant-specific simulators equipped with the corresponding simulation models. Consequently, several models for real-time and off-line simulation of nuclear reactor systems, with various levels of accuracy and simulation fidelity, have been introduced. Experience with power plant simulation demonstrates that in order to realistically predict and simulate reactor responses during unanticipated transients, it is necessary to equip the simulation model with a multielement pressurizer model. The objective of this paper is to present the results of a five-region drift-flux-based pressurizer model, which has been developed for integration with real-time training simulators. A comparison between the plant data and the results of the nonequilibrium pressurizer model demonstrates that the model is well capable of closely simulating the dynamic behavior of the pressurizer system.

  6. Modeling sports highlights using a time-series clustering framework and model interpretation

    Science.gov (United States)

    Radhakrishnan, Regunathan; Otsuka, Isao; Xiong, Ziyou; Divakaran, Ajay

    2005-01-01

    In our past work on sports highlights extraction, we have shown the utility of detecting audience reaction using an audio classification framework. The audio classes in the framework were chosen based on intuition. In this paper, we present a systematic way of identifying the key audio classes for sports highlights extraction using a time series clustering framework. We treat the low-level audio features as a time series and model the highlight segments as "unusual" events against a background of a "usual" process. The set of audio classes to characterize the sports domain is then identified by analyzing the consistent patterns in each of the clusters output from the time series clustering framework. The distribution of features from the training data so obtained for each of the key audio classes is parameterized by a Minimum Description Length Gaussian Mixture Model (MDL-GMM). We also interpret the meaning of each of the mixture components of the MDL-GMM for the key audio class (the "highlight" class) that is correlated with highlight moments. Our results show that the "highlight" class is a mixture of audience cheering and the commentator's excited speech. Furthermore, we show that the precision-recall performance for highlights extraction based on this "highlight" class is better than that of our previous approach, which uses only audience cheering as the key highlight class.
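
    A minimal sketch of order selection for such a class model, using BIC as a readily available proxy for the MDL criterion (the features below are random stand-ins for low-level audio frames):

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def fit_mdl_gmm(features, max_components=8):
        """Select the GMM order by minimum BIC (closely related to MDL)
        and return the best-scoring model."""
        best, best_bic = None, np.inf
        for k in range(1, max_components + 1):
            gmm = GaussianMixture(n_components=k, covariance_type="full",
                                  random_state=0).fit(features)
            bic = gmm.bic(features)
            if bic < best_bic:
                best, best_bic = gmm, bic
        return best

    rng = np.random.default_rng(4)
    # Stand-in for MFCC-like frames drawn from two underlying sources,
    # e.g., cheering vs. excited speech within the "highlight" class.
    frames = np.vstack([rng.normal(0, 1, (300, 12)),
                        rng.normal(3, 1, (200, 12))])
    gmm = fit_mdl_gmm(frames)
    print(gmm.n_components)
    ```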

  7. Lead-Time Models Should Not Be Used to Estimate Overdiagnosis in Cancer Screening

    DEFF Research Database (Denmark)

    Zahl, Per-Henrik; Jørgensen, Karsten Juhl; Gøtzsche, Peter C

    2014-01-01

    Lead-time can mean two different things: Clinical lead-time is the lead-time for clinically relevant tumors; that is, those that are not overdiagnosed. Model-based lead-time is a theoretical construct where the time when the tumor would have caused symptoms is not limited by the person's death. It is the average time by which the diagnosis is brought forward for both clinically relevant and overdiagnosed cancers. When screening for breast cancer, clinical lead-time is about 1 year, while model-based lead-time varies from 2 to 7 years. There are two different methods to calculate overdiagnosis in cancer screening (the excess-incidence approach and the lead-time approach) that rely on these two different lead-time definitions. Overdiagnosis when screening with mammography has varied from 0 to 75%. We have explained that these differences are mainly caused by using different definitions and methods.

  8. Integration of Simulink Models with Component-based Software Models

    Directory of Open Access Journals (Sweden)

    MARIAN, N.

    2008-06-01

    Full Text Available Model based development aims to facilitate the development of embedded control systems by emphasizing the separation of the design level from the implementation level. Model based design involves the use of multiple models that represent different views of a system, having different semantics of abstract system descriptions. Usually, in mechatronics systems, design proceeds by iterating model construction, model analysis, and model transformation. Constructing a MATLAB/Simulink model, plant and controller behavior is simulated using graphical blocks to represent mathematical and logical constructs and process flow; software code is then generated. A Simulink model is a representation of the design or implementation of a physical system that satisfies a set of requirements. A software component-based system aims to organize system architecture and behavior in terms of computation, communication and constraints, using computational blocks and aggregates for both discrete and continuous behavior, and different interconnection and execution disciplines for event-based and time-based controllers, in order to meet demands for more functionality at even lower prices and under opposing constraints. COMDES (Component-based Design of Software for Distributed Embedded Systems) is such a component-based system framework developed by the software engineering group of the Mads Clausen Institute for Product Innovation (MCI), University of Southern Denmark. Once specified, the software model has to be analyzed. One way of doing that is to integrate the model, via wrapper files, back into Simulink S-functions and use its extensive simulation features, thus allowing an early exploration of the possible design choices over multiple disciplines. The paper describes a safe translation of a restricted set of MATLAB/Simulink blocks to COMDES software components, both for continuous and discrete behavior, and the transformation of the software system back into Simulink S-functions.

  9. A computer-based time study system for timber harvesting operations

    Science.gov (United States)

    Jingxin Wang; Joe McNeel; John Baumgras

    2003-01-01

    A computer-based time study system was developed for timber harvesting operations. Object-oriented techniques were used to model and design the system. The front-end of the time study system resides on MS Windows CE and the back-end is supported by MS Access. The system consists of three major components: a handheld system, data transfer interface, and data storage...

  10. Monitoring Murder Crime in Namibia Using Bayesian Space-Time Models

    Directory of Open Access Journals (Sweden)

    Isak Neema

    2012-01-01

    Full Text Available This paper focuses on the analysis of murder in Namibia using a Bayesian spatial smoothing approach with temporal trends. The analysis was based on cases reported from the 13 regions of Namibia for the period 2002–2006, complemented with regional population sizes. The evaluated random effects include space-time structured heterogeneity measuring the effect of regional clustering, unstructured heterogeneity, time, a space-time interaction, and population density. The model consists of carefully chosen prior and hyper-prior distributions for parameters and hyper-parameters, with inference conducted using a Gibbs sampling algorithm and a sensitivity test for model validation. The posterior mean estimates of the parameters from the model, using DIC as the model selection criterion, show that most of the variation in the relative risk of murder is due to regional clustering, while the effects of population density and time were insignificant. The sensitivity analysis indicates that both the intrinsic and the Laplace CAR prior can be adopted as the prior distribution for the space-time heterogeneity. In addition, the relative risk map shows a risk structure with an increasing north-south gradient, pointing to low risk in the northern regions of Namibia, while the Karas and Khomas regions experience a long-term increase in murder risk.

  11. Finite difference time domain modeling of spiral antennas

    Science.gov (United States)

    Penney, Christopher W.; Beggs, John H.; Luebbers, Raymond J.

    1992-01-01

    The objectives outlined in the original proposal for this project were to create a well-documented computer analysis model based on the finite-difference, time-domain (FDTD) method that would be capable of computing antenna impedance, far-zone radiation patterns, and radar cross-section (RCS). The ability to model a variety of penetrable materials in addition to conductors is also desired. The spiral antennas under study by this project meet these requirements since they are constructed of slots cut into conducting surfaces which are backed by dielectric materials.

  12. Matrix-algebra-based calculations of the time evolution of the binary spin-bath model for magnetization transfer.

    Science.gov (United States)

    Müller, Dirk K; Pampel, André; Möller, Harald E

    2013-05-01

    Quantification of magnetization-transfer (MT) experiments is typically based on the assumption of the binary spin-bath model. This model allows for the extraction of up to six parameters (relative pool sizes, relaxation times, and exchange rate constants) for the characterization of macromolecules, which are coupled via exchange processes to the water in tissues. Here, an approach is presented for estimating MT parameters acquired with arbitrary saturation schemes and imaging pulse sequences. It uses matrix algebra to solve the Bloch-McConnell equations without unwarranted simplifications, such as assuming steady-state conditions for pulsed saturation schemes or neglecting imaging pulses. The algorithm achieves sufficient efficiency for voxel-by-voxel MT parameter estimation by using a polynomial interpolation technique. Simulations, as well as experiments in agar gels with continuous-wave and pulsed MT preparation, were performed for validation and for assessing approximations in previous modeling approaches. In vivo experiments in the normal human brain yielded results that were consistent with published data. Copyright © 2013 Elsevier Inc. All rights reserved.
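
    The matrix-algebra solution strategy can be sketched on a reduced two-pool system with longitudinal components only; the paper's model is larger (and includes saturation and imaging pulses), but the exact-evolution step via the matrix exponential is the same. Parameter values are illustrative:

    ```python
    import numpy as np
    from scipy.linalg import expm

    # Two-pool Bloch-McConnell sketch (longitudinal components only):
    # free water (a) exchanging with a macromolecular pool (b).
    R1a, R1b = 1.0, 1.0           # 1/s, longitudinal relaxation rates
    kab, kba = 4.0, 60.0          # 1/s, exchange rates (kab*M0a = kba*M0b at eq.)
    M0a, M0b = 1.0, kab / kba     # equilibrium pool sizes

    # dM/dt = A M + b, with M = [Mza, Mzb]
    A = np.array([[-(R1a + kab), kba],
                  [kab, -(R1b + kba)]])
    b = np.array([R1a * M0a, R1b * M0b])

    def evolve(M, t):
        """Exact evolution over time t via the matrix exponential."""
        Mss = -np.linalg.solve(A, b)              # steady state of dM/dt = 0
        return Mss + expm(A * t) @ (M - Mss)

    # Recovery after the free pool has been saturated (Mza = 0)
    print(evolve(np.array([0.0, M0b]), 3.0))
    ```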

  13. The predictive model on the user reaction time using the information similarity

    International Nuclear Information System (INIS)

    Lee, Sung Jin; Heo, Gyun Young; Chang, Soon Heung

    2005-01-01

    Human performance is frequently degraded because people forget. Memory is one of the brain processes that are important when trying to understand how people process information. Although a large number of studies have examined human performance, little is known about the effect of similarity on it. The purpose of this paper is to propose and validate a quantitative, predictive model of human response time in the user interface based on the concept of similarity. It is not easy, however, to explain human performance with similarity or information amount alone. We are confronted by two difficulties: building a quantitative model of human response time that incorporates similarity, and validating the proposed model experimentally. We built the quantitative model based on Hick's law and the law of practice. In addition, we validated the model under various experimental conditions by measuring participants' response times in a computer-based display environment. Experimental results reveal that human performance is improved by similarity in the user interface. We think that the proposed model is useful for the user interface design and evaluation phases.
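
    For reference, the two ingredients named above are usually written as Hick's law for a choice among \(n\) equally likely alternatives and the power law of practice after \(N\) trials (the constants \(a, b, \beta\) are task-dependent):

    $$ RT = a + b\,\log_2(n+1), \qquad RT_N = RT_1\, N^{-\beta}. $$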

  14. Characteristic Model-Based Robust Model Predictive Control for Hypersonic Vehicles with Constraints

    Directory of Open Access Journals (Sweden)

    Jun Zhang

    2017-06-01

    Full Text Available Designing robust control for hypersonic vehicles in reentry is difficult, due to the features of the vehicles including strong coupling, non-linearity, and multiple constraints. This paper proposes a characteristic model-based robust model predictive control (MPC) for hypersonic vehicles with reentry constraints. First, the hypersonic vehicle is modeled by a characteristic model composed of a linear time-varying system and a lumped disturbance. Then, the identification data are regenerated by the accumulative sum idea in the gray theory, which weakens the effects of the random noises and strengthens the regularity of the identification data. Based on the regenerated data, the time-varying parameters and the disturbance are estimated online according to the gray identification. Finally, the mixed H2/H∞ robust predictive control law is proposed based on linear matrix inequalities (LMIs) and receding horizon optimization techniques. By exploiting MPC's capability of actively handling system constraints, the input and state constraints are satisfied in the closed-loop control system. The validity of the proposed control is verified theoretically according to Lyapunov theory and illustrated by simulation results.

  15. Simulation-Based Dynamic Passenger Flow Assignment Modelling for a Schedule-Based Transit Network

    Directory of Open Access Journals (Sweden)

    Xiangming Yao

    2017-01-01

    Full Text Available The online operation management and the offline policy evaluation in complex transit networks require an effective dynamic traffic assignment (DTA) method that can capture the temporal-spatial nature of traffic flows. The objective of this work is to propose a simulation-based dynamic passenger assignment framework and models for such applications in the context of schedule-based rail transit systems. In the simulation framework, travellers are regarded as individual agents who are able to obtain complete information on the current traffic conditions. A combined route selection model integrating pre-trip route selection and en-route route switching is established for achieving the dynamic network flow equilibrium status. The train agent is operated strictly according to the timetable and its capacity limitation is considered. A continuous time-driven simulator based on the proposed framework and models is developed, whose performance is illustrated through the large-scale network of the Beijing subway. The results indicate that more than 0.8 million individual passengers and thousands of trains can be simulated simultaneously at a speed ten times faster than real time. This study provides an efficient approach to analyze the dynamic demand-supply relationship for large schedule-based transit networks.

  16. Approaching near real-time biosensing: microfluidic microsphere based biosensor for real-time analyte detection.

    Science.gov (United States)

    Cohen, Noa; Sabhachandani, Pooja; Golberg, Alexander; Konry, Tania

    2015-04-15

    In this study we describe a simple lab-on-a-chip (LOC) biosensor approach utilizing a well-mixed microfluidic device and a microsphere-based assay capable of performing near real-time diagnostics of clinically relevant analytes such as cytokines and antibodies. We were able to overcome the adsorption kinetics reaction rate-limiting mechanism, which is diffusion-controlled in standard immunoassays, by introducing the microsphere-based assay into a well-mixed yet simple microfluidic device with turbulent flow profiles in the reaction regions. The integrated microsphere-based LOC device performs dynamic detection of the analyte in a minimal amount of biological specimen by continuously sampling micro-liter volumes of sample per minute to detect dynamic changes in target analyte concentration. Furthermore, we developed a mathematical model for the well-mixed reaction to describe the near real-time detection mechanism observed in the developed LOC method. To demonstrate the specificity and sensitivity of the developed real-time monitoring LOC approach, we applied the device to clinically relevant analytes: Tumor Necrosis Factor (TNF)-α cytokine and its clinically used inhibitor, anti-TNF-α antibody. Based on the results reported herein, the developed LOC device provides a continuous, sensitive and specific near real-time monitoring method for analytes such as cytokines and antibodies, reduces reagent volumes by nearly three orders of magnitude and eliminates the washing steps required by standard immunoassays. Copyright © 2014 Elsevier B.V. All rights reserved.
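
    A minimal sketch of a well-mixed binding model of the kind the abstract alludes to: free analyte L binds microsphere receptor sites R to form complex C under mass-action kinetics, integrated with an ODE solver. Rate constants and concentrations are invented, and this is not the paper's actual model.

    ```python
    from scipy.integrate import solve_ivp

    # Well-mixed bead-capture kinetics: L + R <-> C. Values illustrative.
    kon, koff = 1e5, 1e-3        # 1/(M*s), 1/s
    L0, R0 = 1e-9, 5e-9          # initial analyte and binding-site conc. (M)

    def rhs(t, y):
        L, R, C = y
        v = kon * L * R - koff * C   # net binding rate in a well-mixed volume
        return [-v, -v, v]

    sol = solve_ivp(rhs, (0.0, 600.0), [L0, R0, 0.0])
    print(sol.y[2, -1])              # bound complex after 10 minutes
    ```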

  17. Assimilation of LAI time-series in crop production models

    Science.gov (United States)

    Kooistra, Lammert; Rijk, Bert; Nannes, Louis

    2014-05-01

    Agriculture is worldwide a large consumer of freshwater, nutrients and land. Spatial explicit agricultural management activities (e.g., fertilization, irrigation) could significantly improve efficiency in resource use. In previous studies and operational applications, remote sensing has been shown to be a powerful method for spatio-temporal monitoring of actual crop status. As a next step, yield forecasting by assimilating remote sensing based plant variables in crop production models would improve agricultural decision support both at the farm and field level. In this study we investigated the potential of remote sensing based Leaf Area Index (LAI) time-series assimilated in the crop production model LINTUL to improve yield forecasting at field level. The effect of assimilation method and amount of assimilated observations was evaluated. The LINTUL-3 crop production model was calibrated and validated for a potato crop on two experimental fields in the south of the Netherlands. A range of data sources (e.g., in-situ soil moisture and weather sensors, destructive crop measurements) was used for calibration of the model for the experimental field in 2010. LAI from Cropscan field radiometer measurements and actual LAI measured with the LAI-2000 instrument were used as input for the LAI time-series. The LAI time-series were assimilated in the LINTUL model and validated for a second experimental field on which potatoes were grown in 2011. Yield in 2011 was simulated with an R2 of 0.82 when compared with field measured yield. Furthermore, we analysed the potential of assimilation of LAI into the LINTUL-3 model through the 'updating' assimilation technique. The deviation between measured and simulated yield decreased from 9371 kg/ha to 8729 kg/ha when assimilating weekly LAI measurements in the LINTUL model over the season of 2011. LINTUL-3 furthermore shows the main growth reducing factors, which are useful for farm decision support. The combination of crop models and sensor
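
    A hedged sketch of the 'updating' (direct insertion) assimilation idea: whenever an LAI observation is available, the simulated LAI state is overwritten and growth continues from the corrected value. The toy logistic canopy model and the observation values below are invented and stand in for LINTUL-3 and the field data.

    ```python
    import numpy as np

    # Direct-insertion assimilation of LAI observations into a toy model.
    days = np.arange(120)
    obs = {20: 0.8, 40: 2.1, 60: 3.9, 80: 4.5}   # measured LAI by day (invented)

    lai, rgr = 0.1, 0.08                         # initial LAI, growth rate
    trajectory = []
    for d in days:
        lai += rgr * lai * max(0.0, 1.0 - lai / 5.0)  # logistic canopy growth
        if d in obs:
            lai = obs[d]                         # overwrite state with measurement
        trajectory.append(lai)
    ```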

  18. Time-sensitive Customer Churn Prediction based on PU Learning

    OpenAIRE

    Wang, Li; Chen, Chaochao; Zhou, Jun; Li, Xiaolong

    2018-01-01

    With the fast development of Internet companies throughout the world, customer churn has become a serious concern. To better help the companies retain their customers, it is important to build a customer churn prediction model to identify the customers who are most likely to churn ahead of time. In this paper, we propose a Time-sensitive Customer Churn Prediction (TCCP) framework based on Positive and Unlabeled (PU) learning technique. Specifically, we obtain the recent data by shortening the...

  19. Syndromic surveillance system based on near real-time cattle mortality monitoring.

    Science.gov (United States)

    Torres, G; Ciaravino, V; Ascaso, S; Flores, V; Romero, L; Simón, F

    2015-05-01

    Early detection of an infectious disease incursion will minimize the impact of outbreaks in livestock. Syndromic surveillance based on the analysis of readily available data can enhance traditional surveillance systems and allow veterinary authorities to react in a timely manner. This study was based on monitoring the number of cattle carcasses sent for rendering in the veterinary unit of Talavera de la Reina (Spain). The aim was to develop a system to detect deviations from expected values which would signal unexpected health events. Historical weekly collected dead cattle (WCDC) time series, stabilized by the Box-Cox transformation and fitted by the least squares method, were used to build a univariate cyclical regression model based on a Fourier transformation. Three different models, according to type of production system, were built to estimate the baseline expected number of WCDC. Two types of risk signals were generated: point risk signals, when the observed value was greater than the upper 95% confidence interval of the expected baseline, and cumulative risk signals, generated by a modified cumulative sum algorithm, when the cumulative sums of reported deaths were above the cumulative sum of expected deaths. Data from 2011 were used to prospectively validate the model, generating seven risk signals. None of them was correlated to infectious disease events but some coincided, in time, with very high climatic temperatures recorded in the region. The harvest effect was also observed during the first week of the study year. Establishing appropriate risk signal thresholds is a limiting factor of predictive models; they need to be adjusted based on experience gained during the use of the models. To increase the sensitivity and specificity of the predictions, epidemiological interpretation of non-specific risk signals should be complemented by other sources of information. The methodology developed in this study can enhance other existing early detection
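
    The two signal types can be sketched as follows: a Fourier (cyclical regression) baseline of expected weekly deaths fitted by least squares, a 95% point alarm, and a one-sided CUSUM on the deviations. The simulated counts, the two-harmonic basis and the CUSUM allowance/threshold are assumptions, not the study's tuned values.

    ```python
    import numpy as np

    # Simulated weekly mortality counts with an annual cycle.
    rng = np.random.default_rng(0)
    weeks = np.arange(156)
    deaths = 20 + 6 * np.sin(2 * np.pi * weeks / 52) + rng.normal(0, 2, weeks.size)

    # Fourier baseline: least-squares fit of the first two annual harmonics.
    X = np.column_stack([np.ones_like(weeks, dtype=float)] +
                        [f(2 * np.pi * k * weeks / 52)
                         for k in (1, 2) for f in (np.sin, np.cos)])
    beta, *_ = np.linalg.lstsq(X, deaths, rcond=None)
    expected = X @ beta
    resid_sd = np.std(deaths - expected)

    point_alarm = deaths > expected + 1.96 * resid_sd   # point risk signals

    # One-sided CUSUM of positive deviations (allowance k, threshold h).
    k, h, s = 0.5 * resid_sd, 4 * resid_sd, 0.0
    cusum_alarm = np.zeros(weeks.size, dtype=bool)
    for i, dev in enumerate(deaths - expected):
        s = max(0.0, s + dev - k)
        cusum_alarm[i] = s > h
    ```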

  20. Modelling of Attentional Dwell Time

    DEFF Research Database (Denmark)

    Petersen, Anders; Kyllingsbæk, Søren; Bundesen, Claus

    2009-01-01

    This confinement of attentional resources leads to the impairment in identifying the second target. With the model, we are able to produce close fits to data from the traditional two-target dwell time paradigm. A dwell-time experiment with three targets has also been carried out for individual subjects, and the model has been extended to fit these data.

  1. Finite Time Control for Fractional Order Nonlinear Hydroturbine Governing System via Frequency Distributed Model

    Directory of Open Access Journals (Sweden)

    Bin Wang

    2016-01-01

    Full Text Available This paper studies the application of the frequency distributed model for finite time control of a fractional order nonlinear hydroturbine governing system (HGS). Firstly, the mathematical model of the HGS with external random disturbances is introduced. Secondly, a novel terminal sliding surface is proposed and its stability about the origin is proved based on the frequency distributed model and Lyapunov stability theory. Furthermore, based on finite time stability and sliding mode control theory, a robust control law that ensures the occurrence of the sliding motion in a finite time is designed for stabilization of the fractional order HGS. Finally, simulation results show the effectiveness and robustness of the proposed scheme.

  2. A Model for Learning Over Time: The Big Picture

    Science.gov (United States)

    Amato, Herbert K.; Konin, Jeff G.; Brader, Holly

    2002-01-01

    Objective: To present a method of describing the concept of “learning over time” with respect to its implementation into an athletic training education program curriculum. Background: The formal process of learning over time has recently been introduced as a required way for athletic training educational competencies and clinical proficiencies to be delivered and mastered. Learning over time incorporates the documented cognitive, psychomotor, and affective skills associated with the acquisition, progression, and reflection of information. This method of academic preparation represents a move away from a quantitative-based learning module toward a proficiency-based mastery of learning. Little research or documentation can be found demonstrating either the specificity of this concept or suggestions for its application. Description: We present a model for learning over time that encompasses multiple indicators for assessment in a successive format. Based on a continuum approach, cognitive, psychomotor, and affective characteristics are assessed at different levels in classroom and clinical environments. Clinical proficiencies are a common set of entry-level skills that need to be integrated into the athletic training educational domains. Objective documentation is presented, including the skill breakdown of a task and a matrix to identify a timeline of competency and proficiency delivery. Clinical Advantages: The advantages of learning over time pertain to the integration of cognitive knowledge into clinical skill acquisition. Given the fact that learning over time has been implemented as a required concept for athletic training education programs, this model may serve to assist those program faculty who have not yet developed, or are in the process of developing, a method of administering this approach to learning. PMID:12937551

  3. Three-factor models versus time series models: quantifying time-dependencies of interactions between stimuli in cell biology and psychobiology for short longitudinal data.

    Science.gov (United States)

    Frank, Till D; Kiyatkin, Anatoly; Cheong, Alex; Kholodenko, Boris N

    2017-06-01

    Signal integration determines cell fate on the cellular level, affects cognitive processes and affective responses on the behavioural level, and is likely to be involved in psychoneurobiological processes underlying mood disorders. Interactions between stimuli may be subject to time effects. Time-dependencies of interactions between stimuli typically lead to complex cell responses and complex responses on the behavioural level. We show that both three-factor models and time series models can be used to uncover such time-dependencies. However, we argue that for short longitudinal data the three-factor modelling approach is more suitable. In order to illustrate both approaches, we re-analysed previously published short longitudinal data sets. We found that in human embryonic kidney 293 (HEK293) cells the interaction effect in the regulation of extracellular signal-regulated kinase (ERK) 1 signalling activation by insulin and epidermal growth factor is subject to a time effect and dramatically decays at peak values of ERK activation. In contrast, we found that the interaction effect induced by hypoxia and tumour necrosis factor-alpha for the transcriptional activity of the human cyclo-oxygenase-2 promoter in HEK293 cells is time invariant at least in the first 12-h time window after stimulation. Furthermore, we applied the three-factor model to previously reported animal studies. In these studies, memory storage was found to be subject to an interaction effect of the beta-adrenoceptor agonist clenbuterol and certain antagonists acting on the alpha-1-adrenoceptor / glucocorticoid-receptor system. Our model-based analysis suggests that the interaction effect is relevant only if the antagonist drug is administered in a critical time window. © The authors 2016. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.

  4. An Improved Global Harmony Search Algorithm for the Identification of Nonlinear Discrete-Time Systems Based on Volterra Filter Modeling

    Directory of Open Access Journals (Sweden)

    Zongyan Li

    2016-01-01

    Full Text Available This paper describes an improved global harmony search (IGHS) algorithm for identifying nonlinear discrete-time systems based on a second-order Volterra model. The IGHS is an improved version of the novel global harmony search (NGHS) algorithm, and it makes two significant improvements on the NGHS. First, the genetic mutation operation is modified by combining normal distribution and Cauchy distribution, which enables the IGHS to fully explore and exploit the solution space. Second, an opposition-based learning (OBL) scheme is introduced and modified to improve the quality of harmony vectors. The IGHS algorithm is implemented on two numerical examples: a nonlinear discrete-time rational system and a real heat exchanger. The results of the IGHS are compared with those of three other methods, and it has been verified to be more effective than the other three methods at solving the above two problems with different input signals and system memory sizes.
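
    For orientation, a plain harmony search loop is sketched below on a stand-in objective; the IGHS-specific ingredients (the normal/Cauchy mutation mix, opposition-based learning, and the Volterra regressors) are deliberately omitted, and all algorithm constants are illustrative.

    ```python
    import numpy as np

    # Basic harmony search minimizing a stand-in objective (sphere function).
    rng = np.random.default_rng(1)
    dim, hms, hmcr, par, bw, iters = 5, 20, 0.9, 0.3, 0.05, 2000
    lo, hi = -1.0, 1.0
    f = lambda x: np.sum(x ** 2)

    hm = rng.uniform(lo, hi, (hms, dim))         # harmony memory
    fit = np.apply_along_axis(f, 1, hm)

    for _ in range(iters):
        new = np.empty(dim)
        for j in range(dim):
            if rng.random() < hmcr:              # memory consideration
                new[j] = hm[rng.integers(hms), j]
                if rng.random() < par:           # pitch adjustment
                    new[j] += bw * rng.uniform(-1, 1)
            else:                                # random selection
                new[j] = rng.uniform(lo, hi)
        new = np.clip(new, lo, hi)
        worst = np.argmax(fit)
        if f(new) < fit[worst]:                  # replace worst harmony
            hm[worst], fit[worst] = new, f(new)

    print(hm[np.argmin(fit)], fit.min())
    ```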

  5. Power Forecasting of Combined Heating and Cooling Systems Based on Chaotic Time Series

    Directory of Open Access Journals (Sweden)

    Liu Hai

    2015-01-01

    Full Text Available Theoretical analysis shows that the output power of a distributed generation system is nonlinear and chaotic, and that it is coupled with the microenvironment meteorological data. Chaos is an inherent property of a nonlinear dynamic system. A predictor of the output power of the distributed generation system can be built by establishing a nonlinear model of the dynamic system from real time series in the reconstructed phase space. Firstly, chaos should be detected and quantified for the intensive study of nonlinear systems: if the largest Lyapunov exponent is positive, the dynamical system must be chaotic. Then, the embedding dimension and the delay time are chosen based on the improved C-C method, and the attractor of the chaotic power time series can be reconstructed from them in the phase space. The neural network can then be trained on samples observed from the distributed generation system, and the resulting model approximates the curve of output power adequately. Experimental results show that the maximum power point of the distributed generation system can be predicted based on the meteorological data, and the system can be controlled effectively based on the prediction.
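
    The reconstruction step that precedes network training can be sketched as a time-delay embedding: each row pairs a delay vector with the next value to predict. Fixed m and tau are used here for brevity; in the paper they come from the improved C-C method, and the noisy sine wave below stands in for measured output power.

    ```python
    import numpy as np

    # Phase-space reconstruction by time-delay embedding.
    def embed(series, m=4, tau=3):
        n = len(series) - (m - 1) * tau - 1        # rows that also have a target
        X = np.column_stack([series[i * tau: i * tau + n] for i in range(m)])
        y = series[(m - 1) * tau + 1:(m - 1) * tau + 1 + n]
        return X, y                                 # delay vectors, next values

    x = np.sin(0.3 * np.arange(500)) + 0.05 * np.random.default_rng(2).normal(size=500)
    X, y = embed(x)                                 # training pairs for a predictor
    ```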

  6. Dual-EKF-Based Real-Time Celestial Navigation for Lunar Rover

    Directory of Open Access Journals (Sweden)

    Li Xie

    2012-01-01

    Full Text Available A key requirement of lunar rover autonomous navigation is to acquire state information accurately in real time during motion and to set up a gradual parameter-based nonlinear kinematics model for the rover. In this paper, we propose a dual-extended-Kalman-filter (dual-EKF) based real-time celestial navigation (RCN) method. The proposed method considers the rover position and velocity on the lunar surface as the system parameters and establishes a constant velocity (CV) model. In addition, the attitude quaternion is considered as the system state, and the quaternion differential equation is established as the state equation, which incorporates the output of the angular rate gyroscope. Therefore, the measurement equation can be established with the sun direction vector from the sun sensor and the speed observation from the speedometer. The continuous gyro output ensures real-time operation of the algorithm. Finally, we use the dual-EKF method to solve the system equations. Simulation results show that the proposed method can acquire the rover position and heading information in real time and greatly improve the navigation accuracy. Our method overcomes the disadvantage of cumulative error in inertial navigation.

  7. Performance evaluation and modeling of a conformal filter (CF) based real-time standoff hazardous material detection sensor

    Science.gov (United States)

    Nelson, Matthew P.; Tazik, Shawna K.; Bangalore, Arjun S.; Treado, Patrick J.; Klem, Ethan; Temple, Dorota

    2017-05-01

    Hyperspectral imaging (HSI) systems can provide detection and identification of a variety of targets in the presence of complex backgrounds. However, current generation sensors are typically large, costly to field, do not usually operate in real time and have limited sensitivity and specificity. Despite these shortcomings, HSI-based intelligence has proven to be a valuable tool, thus resulting in increased demand for this type of technology. By moving the next generation of HSI technology into a more adaptive configuration, and a smaller and more cost effective form factor, HSI technologies can help maintain a competitive advantage for the U.S. armed forces as well as local, state and federal law enforcement agencies. Operating near the physical limits of HSI system capability is often necessary and very challenging, but is often enabled by rigorous modeling of detection performance. Specific performance envelopes we consistently strive to improve include: operating under low signal-to-background conditions; at higher and higher frame rates; and under less than ideal motion control scenarios. An adaptable, low cost, low footprint, standoff sensor architecture we have been maturing includes the use of conformal liquid crystal tunable filters (LCTFs). These Conformal Filters (CFs) are electro-optically tunable, multivariate HSI spectrometers that, when combined with Dual Polarization (DP) optics, produce optimized spectral passbands on demand, which can readily be reconfigured, to discriminate targets from complex backgrounds in real-time. With DARPA support, ChemImage Sensor Systems (CISS™), in collaboration with Research Triangle Institute (RTI) International, is developing a novel, real-time, adaptable, compressive sensing short-wave infrared (SWIR) hyperspectral imaging technology called the Reconfigurable Conformal Imaging Sensor (RCIS) based on DP-CF technology. RCIS will address many shortcomings of current generation systems and offer improvements in

  8. Identification of human operator performance models utilizing time series analysis

    Science.gov (United States)

    Holden, F. M.; Shinners, S. M.

    1973-01-01

    The results of an effort performed by Sperry Systems Management Division for AMRL in applying time series analysis as a tool for modeling the human operator are presented. This technique is utilized for determining the variation of the human transfer function under various levels of stress. The human operator's model is determined based on actual input and output data from a tracking experiment.

  9. Incorporating time and income constraints in dynamic agent-based models of activity generation and time use : Approach and illustration

    NARCIS (Netherlands)

    Arentze, Theo; Ettema, D.F.; Timmermans, Harry

    Existing theories and models in economics and transportation treat households’ decisions regarding allocation of time and income to activities as a resource-allocation optimization problem. This stands in contrast with the dynamic nature of day-by-day activity-travel choices. Therefore, in the

  10. Cure modeling in real-time prediction: How much does it help?

    Science.gov (United States)

    Ying, Gui-Shuang; Zhang, Qiang; Lan, Yu; Li, Yimei; Heitjan, Daniel F

    2017-08-01

    Various parametric and nonparametric modeling approaches exist for real-time prediction in time-to-event clinical trials. Recently, Chen (2016, BMC Medical Research Methodology 16) proposed a prediction method based on parametric cure-mixture modeling, intended to cover those situations where it appears that a non-negligible fraction of subjects is cured. In this article we apply a Weibull cure-mixture model to create predictions, demonstrating the approach in RTOG 0129, a randomized trial in head-and-neck cancer. We compare the ultimate realized data in RTOG 0129 to interim predictions from a Weibull cure-mixture model, a standard Weibull model without a cure component, and a nonparametric model based on the Bayesian bootstrap. The standard Weibull model predicted that events would occur earlier than the Weibull cure-mixture model, but the difference was unremarkable until late in the trial when evidence for a cure became clear. Nonparametric predictions often gave undefined predictions or infinite prediction intervals, particularly at early stages of the trial. Simulations suggest that cure modeling can yield better-calibrated prediction intervals when there is a cured component, or the appearance of a cured component, but at a substantial cost in the average width of the intervals. Copyright © 2017 Elsevier Inc. All rights reserved.
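
    A hedged sketch of the prediction arithmetic behind a Weibull cure-mixture model: survival is a cured fraction plus a Weibull tail, and the expected number of additional events among at-risk subjects follows from conditional survival. The parameters below are invented; in practice they are estimated from interim trial data.

    ```python
    import numpy as np

    # Cure-mixture survival: fraction pi cured, remainder Weibull-distributed.
    def surv(t, pi=0.3, shape=1.2, scale=24.0):
        return pi + (1 - pi) * np.exp(-(t / scale) ** shape)

    # Expected additional events by horizon T among n_at_risk subjects who
    # are event-free at time t0, via conditional survival S(T)/S(t0).
    def expected_events(n_at_risk, t0, T, **kw):
        return n_at_risk * (1 - surv(T, **kw) / surv(t0, **kw))

    print(expected_events(n_at_risk=120, t0=6.0, T=18.0))  # months, illustrative
    ```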

  11. Adaptive MPC based on MIMO ARX-Laguerre model.

    Science.gov (United States)

    Ben Abdelwahed, Imen; Mbarek, Abdelkader; Bouzrara, Kais

    2017-03-01

    This paper proposes a method for synthesizing an adaptive predictive controller using a reduced-complexity model. The latter is given by the projection of the ARX model on Laguerre bases. The resulting model is entitled MIMO ARX-Laguerre and is characterized by an easy recursive representation. The adaptive predictive control law is computed based on multi-step-ahead finite-element predictors, identified directly from experimental input/output data. The model is tuned in each iteration by an online identification algorithm for both the model parameters and the Laguerre poles. The proposed approach avoids the time-consuming numerical optimization algorithms associated with most common linear predictive control strategies, which makes it suitable for real-time implementation. The method is used to synthesize and test, in numerical simulations, adaptive predictive controllers for the CSTR process benchmark. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
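
    The projection onto Laguerre bases can be sketched as a filter bank: the input is passed through a first-order Laguerre filter followed by a chain of all-pass sections, and the filter outputs become regressors for least squares. The pole value, bank size and the stand-in first-order 'plant' below are assumptions for illustration only.

    ```python
    import numpy as np
    from scipy.signal import lfilter

    # Discrete Laguerre filter bank as ARX-Laguerre regressors.
    a = 0.6                                  # Laguerre pole, |a| < 1
    N = 4                                    # number of basis filters
    u = np.random.default_rng(7).normal(size=300)   # plant input sequence

    gain = np.sqrt(1 - a ** 2)
    bank = [lfilter([gain], [1, -a], u)]     # L1(z) = sqrt(1-a^2)/(1 - a z^-1)
    for _ in range(N - 1):                   # L_{k+1} = L_k * (z^-1 - a)/(1 - a z^-1)
        bank.append(lfilter([-a, 1], [1, -a], bank[-1]))

    Phi = np.column_stack(bank)              # regressor matrix
    y = lfilter([0.05], [1, -0.9], u)        # stand-in plant output
    theta = np.linalg.lstsq(Phi, y, rcond=None)[0]  # Laguerre coefficients
    ```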

  12. Time Prediction Models for Echinococcosis Based on Gray System Theory and Epidemic Dynamics.

    Science.gov (United States)

    Zhang, Liping; Wang, Li; Zheng, Yanling; Wang, Kai; Zhang, Xueliang; Zheng, Yujian

    2017-03-04

    Echinococcosis, which can seriously harm human health and animal husbandry production, has become endemic in the Xinjiang Uygur Autonomous Region of China. In order to explore an effective human Echinococcosis forecasting model in Xinjiang, three grey models, namely, the traditional grey GM(1,1) model, the Grey-Periodic Extensional Combinatorial Model (PECGM(1,1)), and the Modified Grey Model using Fourier Series (FGM(1,1)), in addition to a multiplicative seasonal ARIMA(1,0,1)(1,1,0)₄ model, are applied in this study for short-term predictions. The accuracy of the different grey models is also investigated. The simulation results show that the FGM(1,1) model has a higher performance ability, not only for model fitting, but also for forecasting. Furthermore, considering the stability and the modeling precision in the long run, a dynamic epidemic prediction model based on the transmission mechanism of Echinococcosis is also established for long-term predictions. Results demonstrate that the dynamic epidemic prediction model is capable of identifying the future tendency. The number of human Echinococcosis cases will increase steadily over the next 25 years, reaching a peak of about 1250 cases, before eventually witnessing a slow decline, until it finally ends.
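
    A compact sketch of the traditional grey GM(1,1) forecaster named in the record: the accumulated (AGO) series is fitted with the grey differential equation dx/dt + ax = b, and forecasts are recovered by differencing the fitted accumulation. The case numbers below are illustrative, not the Xinjiang data.

    ```python
    import numpy as np

    # Grey GM(1,1) model: fit on the accumulated series, forecast, difference back.
    def gm11(x0, n_ahead=4):
        x1 = np.cumsum(x0)                             # accumulated generating op.
        z1 = 0.5 * (x1[1:] + x1[:-1])                  # background values
        B = np.column_stack([-z1, np.ones_like(z1)])
        a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
        k = np.arange(len(x0) + n_ahead)
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
        return np.diff(np.concatenate([[0.0], x1_hat]))  # back to original scale

    cases = np.array([560., 603., 655., 702., 760., 815.])  # invented counts
    print(gm11(cases)[-4:])                            # four-step-ahead forecast
    ```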

  13. Time Prediction Models for Echinococcosis Based on Gray System Theory and Epidemic Dynamics

    Directory of Open Access Journals (Sweden)

    Liping Zhang

    2017-03-01

    Full Text Available Echinococcosis, which can seriously harm human health and animal husbandry production, has become endemic in the Xinjiang Uygur Autonomous Region of China. In order to explore an effective human Echinococcosis forecasting model in Xinjiang, three grey models, namely, the traditional grey GM(1,1) model, the Grey-Periodic Extensional Combinatorial Model (PECGM(1,1)), and the Modified Grey Model using Fourier Series (FGM(1,1)), in addition to a multiplicative seasonal ARIMA(1,0,1)(1,1,0)₄ model, are applied in this study for short-term predictions. The accuracy of the different grey models is also investigated. The simulation results show that the FGM(1,1) model has a higher performance ability, not only for model fitting, but also for forecasting. Furthermore, considering the stability and the modeling precision in the long run, a dynamic epidemic prediction model based on the transmission mechanism of Echinococcosis is also established for long-term predictions. Results demonstrate that the dynamic epidemic prediction model is capable of identifying the future tendency. The number of human Echinococcosis cases will increase steadily over the next 25 years, reaching a peak of about 1250 cases, before eventually witnessing a slow decline, until it finally ends.

  14. Self-organising mixture autoregressive model for non-stationary time series modelling.

    Science.gov (United States)

    Ni, He; Yin, Hujun

    2008-12-01

    Modelling non-stationary time series has been a difficult task for both parametric and nonparametric methods. One promising solution is to combine the flexibility of nonparametric models with the simplicity of parametric models. In this paper, the self-organising mixture autoregressive (SOMAR) network is adopted as such a mixture model. It breaks time series into underlying segments and at the same time fits local linear regressive models to the clusters of segments. In this way, a global non-stationary time series is represented by a dynamic set of local linear regressive models. Neural gas is used for a more flexible structure of the mixture model. Furthermore, a new similarity measure has been introduced in the self-organising network to better quantify the similarity of time series segments. The network can be used naturally in modelling and forecasting non-stationary time series. Experiments on artificial, benchmark time series (e.g. Mackey-Glass) and real-world data (e.g. numbers of sunspots and Forex rates) are presented and the results show that the proposed SOMAR network is effective and superior to other similar approaches.

  15. Application of Real Time Models Updating in ABO Central Field

    International Nuclear Information System (INIS)

    Heikal, S.; Adewale, D.; Doghmi, A.; Augustine, U.

    2003-01-01

    ABO central field is the first deep offshore oil production in Nigeria, located in OML 125 (ex-OPL316). The field was developed in a water depth of between 500 and 800 meters. Deep-water development requires much faster data handling and model updates in order to make the best possible technical decisions. This required an easy way to incorporate the latest information and dynamically update the reservoir model, enabling real time reservoir management. The paper aims at discussing the benefits of real time static and dynamic model updates and illustrates with a horizontal well example how these updates were beneficial prior to and during the drilling operation, minimizing the project CAPEX. Prior to drilling, a 3D geological model was built based on seismic and offset wells' data. The geological model was updated twice, once after the pilot hole drilling and then after reaching the landing point, prior to drilling the horizontal section. Forward modeling was made along the planned trajectory. During the drilling process both geo-steering and LWD data were loaded in real time into the 3D modeling software. The data were analyzed and compared with the predicted model. The location of markers was changed as drilling progressed and the entire 3D geological model was rapidly updated. The target zones were re-evaluated in the light of the new model updates. Recommendations were communicated to the field, and the well trajectory was modified to take into account the new information. The combination of speed, flexibility and update-ability of the 3D modeling software enabled continuous geological model updates on which the asset team based their trajectory modification decisions throughout the drilling phase. The well was geo-steered through 7 meters thickness of sand. After the drilling, testing showed excellent results; productivity and fluid properties data were used to update the dynamic model, reviewing the well production plateau and providing optimum reservoir

  16. Modelling fourier regression for time series data- a case study: modelling inflation in foods sector in Indonesia

    Science.gov (United States)

    Prahutama, Alan; Suparti; Wahyu Utami, Tiani

    2018-03-01

    Regression analysis models the relationship between response variables and predictor variables. The parametric approach to regression is very strict in its assumptions, whereas a nonparametric regression model needs no assumption about the model form. Time series data are observations of a variable recorded over time, so if time series data are to be modelled by regression, the response and predictor variables must be determined first. The response variable is the value at time t (yt), while the predictor variables are its significant lags. In nonparametric regression modelling, one developing approach is to use a Fourier series. One of the advantages of the nonparametric regression approach using Fourier series is its ability to handle data having a trigonometric (periodic) pattern. Modelling with Fourier series requires the parameter K; the number K can be determined using the Generalized Cross-Validation method. Modelling inflation for the transportation, communication and financial services sector using Fourier series yields an optimal K of 120 parameters with an R-square of 99%, whereas modelling it by multiple linear regression yields an R-square of 90%.
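
    A hedged sketch of choosing K by GCV for a Fourier-series regression on a lag predictor; the simulated random-walk series, the cosine-only basis and the rescaling of the predictor to [0, pi] are assumptions, not the paper's exact specification.

    ```python
    import numpy as np

    # Fourier-series regression of y_t on y_{t-1}, with K chosen by GCV.
    rng = np.random.default_rng(3)
    y = np.cumsum(rng.normal(0, 1, 200))            # stand-in time series
    x_raw, resp = y[:-1], y[1:]                     # lag-1 predictor / response
    x = np.pi * (x_raw - x_raw.min()) / (x_raw.max() - x_raw.min())

    def gcv(K):
        X = np.column_stack([np.ones_like(x), x] +
                            [np.cos(k * x) for k in range(1, K + 1)])
        H = X @ np.linalg.pinv(X)                   # hat matrix
        resid = resp - H @ resp
        n = len(resp)
        return (resid @ resid / n) / (1 - np.trace(H) / n) ** 2

    best_K = min(range(1, 16), key=gcv)             # smallest GCV wins
    print(best_K)
    ```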

  17. Temporal validation for landsat-based volume estimation model

    Science.gov (United States)

    Renaldo J. Arroyo; Emily B. Schultz; Thomas G. Matney; David L. Evans; Zhaofei Fan

    2015-01-01

    Satellite imagery can potentially reduce the costs and time associated with ground-based forest inventories; however, for satellite imagery to provide reliable forest inventory data, it must produce consistent results from one time period to the next. The objective of this study was to temporally validate a Landsat-based volume estimation model in a four county study...

  18. Modelling of an intermediate pressure microwave oxygen discharge reactor: from stationary two-dimensional to time-dependent global (volume-averaged) plasma models

    International Nuclear Information System (INIS)

    Kemaneci, Efe; Graef, Wouter; Rahimi, Sara; Van Dijk, Jan; Kroesen, Gerrit; Carbone, Emile; Jimenez-Diaz, Manuel

    2015-01-01

    A microwave-induced oxygen plasma is simulated using both stationary and time-resolved modelling strategies. The stationary model is spatially resolved and it is self-consistently coupled to the microwaves (Jimenez-Diaz et al 2012 J. Phys. D: Appl. Phys. 45 335204), whereas the time-resolved description is based on a global (volume-averaged) model (Kemaneci et al 2014 Plasma Sources Sci. Technol. 23 045002). We observe agreement of the global model data with several published measurements of microwave-induced oxygen plasmas in both continuous and modulated power inputs. Properties of the microwave plasma reactor are investigated and corresponding simulation data based on two distinct models shows agreement on the common parameters. The role of the square wave modulated power input is also investigated within the time-resolved description. (paper)

  19. A LAN with real-time facilities based on OSI concepts

    International Nuclear Information System (INIS)

    Raaf, A.J. de; Dijkstra, A.; Swierstra, S.D.

    1986-01-01

    Research is being done into structured design and realization methods for Local Area Networks (LAN's). The main aim is to develop a LAN (ZWOLAN) with real-time facilities for use in laboratories and based on ISO-OSI standards. Twentenet will be used for the physical and the data link layer of ZWOLAN. Twentenet is based on a Priority based CSMA/CD data link access mechanism with guaranteed access times. An implementation model has been constructed from an FSM decomposition analysis of OSI protocols. Modular Pascal will be used as language for the realization of the network software. The emphasis is on the software architecture and the reduction of the OSI protocol overhead. (Auth.)

  20. On the upscaling of process-based models in deltaic applications

    Science.gov (United States)

    Li, L.; Storms, J. E. A.; Walstra, D. J. R.

    2018-03-01

    Process-based numerical models are increasingly used to study the evolution of marine and terrestrial depositional environments. Whilst a detailed description of small-scale processes provides an accurate representation of reality, application on geological timescales is restrained by the associated increase in computational time. In order to reduce the computational time, a number of acceleration methods are combined and evaluated for a schematic supply-driven delta (static base level) and an accommodation-driven delta (variable base level). The performance of the combined acceleration methods is evaluated by comparing the morphological indicators such as distributary channel networking and delta volumes derived from the model predictions for various levels of acceleration. The results of the accelerated models are compared to the outcomes from a series of simulations to capture autogenic variability. Autogenic variability is quantified by re-running identical models on an initial bathymetry with 1 cm added noise. The overall results show that the variability of the accelerated models fall within the autogenic variability range, suggesting that the application of acceleration methods does not significantly affect the simulated delta evolution. The Time-scale compression method (the acceleration method introduced in this paper) results in an increased computational efficiency of 75% without adversely affecting the simulated delta evolution compared to a base case. The combination of the Time-scale compression method with the existing acceleration methods has the potential to extend the application range of process-based models towards geologic timescales.

  1. Remote sensing-based time series models for malaria early warning in the highlands of Ethiopia

    Directory of Open Access Journals (Sweden)

    Midekisa Alemayehu

    2012-05-01

    Full Text Available Abstract Background Malaria is one of the leading public health problems in most of sub-Saharan Africa, particularly in Ethiopia. Almost all demographic groups are at risk of malaria because of seasonal and unstable transmission of the disease. Therefore, there is a need to develop malaria early-warning systems to enhance public health decision making for control and prevention of malaria epidemics. Data from orbiting earth-observing sensors can monitor environmental risk factors that trigger malaria epidemics. Remotely sensed environmental indicators were used to examine the influences of climatic and environmental variability on temporal patterns of malaria cases in the Amhara region of Ethiopia. Methods In this study seasonal autoregressive integrated moving average (SARIMA) models were used to quantify the relationship between malaria cases and remotely sensed environmental variables, including rainfall, land-surface temperature (LST), vegetation indices (NDVI and EVI), and actual evapotranspiration (ETa), with lags ranging from one to three months. Predictions from the best model with environmental variables were compared to the actual observations from the last 12 months of the time series. Results Malaria cases exhibited positive associations with LST at a lag of one month and positive associations with indicators of moisture (rainfall, EVI and ETa) at lags from one to three months. SARIMA models that included these environmental covariates had better fits and more accurate predictions, as evidenced by lower AIC and RMSE values, than models without environmental covariates. Conclusions Malaria risk indicators such as satellite-based rainfall estimates, LST, EVI, and ETa exhibited significant lagged associations with malaria cases in the Amhara region and improved model fit and prediction accuracy. These variables can be monitored frequently and extensively across large geographic areas using data from earth-observing sensors to support public
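
    In the same spirit, a seasonal ARIMA with lagged environmental covariates can be set up with statsmodels as below; the simulated data frame, the log transform and the chosen orders are placeholders, not the Amhara models.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Simulated monthly series standing in for cases and environmental data.
    rng = np.random.default_rng(4)
    n = 120
    df = pd.DataFrame({
        "cases": rng.poisson(50, n).astype(float),
        "rain": rng.gamma(2, 30, n),
        "lst": 25 + 3 * np.sin(2 * np.pi * np.arange(n) / 12),
    })

    # Covariates at lags of one to three months.
    exog = pd.concat({f"{c}_lag{l}": df[c].shift(l)
                      for c in ("rain", "lst") for l in (1, 2, 3)}, axis=1).dropna()
    endog = np.log(df["cases"] + 1).loc[exog.index]

    model = sm.tsa.SARIMAX(endog, exog=exog, order=(1, 0, 1),
                           seasonal_order=(1, 0, 0, 12)).fit(disp=False)
    print(model.aic)
    ```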

  2. A Full-Body Layered Deformable Model for Automatic Model-Based Gait Recognition

    Science.gov (United States)

    Lu, Haiping; Plataniotis, Konstantinos N.; Venetsanopoulos, Anastasios N.

    2007-12-01

    This paper proposes a full-body layered deformable model (LDM) inspired by manually labeled silhouettes for automatic model-based gait recognition from part-level gait dynamics in monocular video sequences. The LDM is defined for the fronto-parallel gait with 22 parameters describing the human body part shapes (widths and lengths) and dynamics (positions and orientations). There are four layers in the LDM and the limbs are deformable. Algorithms for LDM-based human body pose recovery are then developed to estimate the LDM parameters from both manually labeled and automatically extracted silhouettes, where the automatic silhouette extraction is through a coarse-to-fine localization and extraction procedure. The estimated LDM parameters are used for model-based gait recognition by employing the dynamic time warping for matching and adopting the combination scheme in AdaBoost.M2. While the existing model-based gait recognition approaches focus primarily on the lower limbs, the estimated LDM parameters enable us to study full-body model-based gait recognition by utilizing the dynamics of the upper limbs, the shoulders and the head as well. In the experiments, the LDM-based gait recognition is tested on gait sequences with differences in shoe-type, surface, carrying condition and time. The results demonstrate that the recognition performance benefits from not only the lower limb dynamics, but also the dynamics of the upper limbs, the shoulders and the head. In addition, the LDM can serve as an analysis tool for studying factors affecting the gait under various conditions.
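
    The matching step can be sketched with a textbook dynamic time warping distance on 1-D feature trajectories (for instance, one LDM joint-angle parameter over a gait cycle); the sine-wave 'gaits' below are stand-ins, not LDM outputs.

    ```python
    import numpy as np

    # Classic O(n*m) dynamic time warping distance between two sequences.
    def dtw(a, b):
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    gait_a = np.sin(np.linspace(0, 2 * np.pi, 60))        # e.g. limb angle, gait A
    gait_b = np.sin(np.linspace(0, 2 * np.pi, 75) + 0.2)  # same feature, gait B
    print(dtw(gait_a, gait_b))
    ```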

  3. Expression for travel time based on diffusive wave theory: applicability and considerations

    Science.gov (United States)

    Aguilera, J. C.; Escauriaza, C. R.; Passalacqua, P.; Gironas, J. A.

    2017-12-01

    Prediction of hydrological response is of utmost importance when dealing with urban planning, risk assessment, or water resources management issues. With the advent of climate change, special care must be taken with respect to variations in rainfall and runoff due to rising temperature averages. Nowadays, while typical workstations have adequate power to run distributed routing hydrological models, it is still not enough for modeling on-the-fly, a crucial ability in a natural disaster context, where rapid decisions must be made. Semi-distributed travel time models, which compute a watershed's hydrograph without explicitly solving the full shallow water equations, appear as an attractive approach to rainfall-runoff modeling since, like fully distributed models, they also superimpose a grid on the watershed and compute runoff based on cell parameter values. These models are heavily dependent on the travel time expression for an individual cell. Many models make use of expressions based on kinematic wave theory, which is not applicable in cases where watershed storage is important, such as mild slopes. This work presents a new expression for concentration times in overland flow, based on diffusive wave theory, which considers not only the effects of storage but also the effects of upstream contribution. Setting upstream contribution equal to zero gives an expression consistent with previous work on diffusive wave theory; on the other hand, neglecting storage effects (i.e., diffusion) is shown to be equivalent to kinematic wave theory, currently used in many spatially distributed travel time models. The newly found expression is shown to be dependent on plane discretization, particularly when dealing with very non-kinematic cases. This is shown to be the result of upstream contribution, which gets larger downstream, versus plane length. This result also sheds some light on the limits of applicability of the expression: when a certain kinematic threshold is reached, the

  4. Control-Oriented Models for Real-Time Simulation of Automotive Transmission Systems

    Directory of Open Access Journals (Sweden)

    Cavina N.

    2015-01-01

    Full Text Available A control-oriented model of a Dual Clutch Transmission (DCT) was developed for real-time Hardware In the Loop (HIL) applications, to support model-based development of the DCT controller and to systematically test its performance. The model is an innovative attempt to reproduce the fast dynamics of the actuation system while maintaining a simulation step size large enough for real-time applications. The model comprises a detailed physical description of the hydraulic circuit, clutches, synchronizers and gears, and simplified vehicle and internal combustion engine sub-models. As the oil circulating in the system has a large bulk modulus, the pressure dynamics are very fast, possibly causing instability in a real-time simulation; the same challenge involves the servo valve dynamics, due to the very small masses of the moving elements. Therefore, the hydraulic circuit model has been modified and simplified without losing physical validity, in order to adapt it to the real-time simulation requirements. The results of offline simulations have been compared to on-board measurements to verify the validity of the developed model, which was then implemented in a HIL system and connected to the Transmission Control Unit (TCU). Several tests have been performed on the HIL simulator to verify the TCU performance: electrical failure tests on sensors and actuators, hydraulic and mechanical failure tests on hydraulic valves, clutches and synchronizers, and application tests covering all the main features of the control actions performed by the TCU. Being based on physical laws, in every condition the model simulates a plausible reaction of the system. A test automation procedure has finally been developed to permit the execution of a pattern of tests without user interaction; perfectly repeatable tests can be performed for non-regression verification, allowing the testing of new software releases in fully automatic mode.

  5. System Identification Based Proxy Model of a Reservoir under Water Injection

    Directory of Open Access Journals (Sweden)

    Berihun M. Negash

    2017-01-01

    Full Text Available Simulation of numerical reservoir models with thousands or millions of grid blocks may consume a significant amount of time and effort, even when high performance processors are used. In cases where the simulation runs are required for sensitivity analysis, dynamic control, and optimization, they need to be repeated several times with continuously changing parameters, making the process even more time-consuming. Currently, proxy models based on response surfaces are used to lessen the time required for running simulations during sensitivity analysis and optimization. Proxy models are lighter mathematical models that run faster and perform in place of heavier models that require large computations. Nevertheless, to acquire data for modeling and validation and to develop the proxy model itself, hundreds of simulation runs are required. In this paper, a system identification based proxy model that requires only a single simulation run and a properly designed excitation signal is proposed and evaluated using a benchmark case study. The results show that, with proper design of the excitation signal and proper selection of the model structure, system identification based proxy models are practical and efficient alternatives for mimicking the performance of numerical reservoir models. The resulting proxy models have potential applications in dynamic well control and optimization.

  6. Testing the time-of-flight model for flagellar length sensing.

    Science.gov (United States)

    Ishikawa, Hiroaki; Marshall, Wallace F

    2017-11-07

    Cilia and flagella are microtubule-based organelles that protrude from the surface of most cells, are important to the sensing of extracellular signals, and generate a driving force for fluid flow. Maintenance of flagellar length requires an active transport process known as intraflagellar transport (IFT). Recent studies reveal that the amount of IFT injection negatively correlates with the length of flagella. These observations suggest that a length-dependent feedback regulates IFT. However, it is unknown how cells recognize the length of flagella and control IFT. Several theoretical models try to explain this feedback system. We focused on one of the models, the "time-of-flight" model, which measures the length of flagella on the basis of the travel time of IFT protein in the flagellar compartment. We tested the time-of-flight model using Chlamydomonas dynein mutant cells, which show slower retrograde transport speed. The amount of IFT injection in dynein mutant cells was higher than that in control cells. This observation does not support the prediction of the time-of-flight model and suggests that Chlamydomonas uses another length-control feedback system rather than that described by the time-of-flight model. © 2017 Ishikawa and Marshall. This article is distributed by The American Society for Cell Biology under license from the author(s). Two months after publication it is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).

  7. SPARSE: quadratic time simultaneous alignment and folding of RNAs without sequence-based heuristics.

    Science.gov (United States)

    Will, Sebastian; Otto, Christina; Miladi, Milad; Möhl, Mathias; Backofen, Rolf

    2015-08-01

    RNA-Seq experiments have revealed a multitude of novel ncRNAs. The gold standard for their analysis based on simultaneous alignment and folding suffers from extreme time complexity of O(n⁶). Subsequently, numerous faster 'Sankoff-style' approaches have been suggested. Commonly, the performance of such methods relies on sequence-based heuristics that restrict the search space to optimal or near-optimal sequence alignments; however, the accuracy of sequence-based methods breaks down for RNAs with sequence identities below 60%. Alignment approaches like LocARNA that do not require sequence-based heuristics have been limited to high complexity (O(n⁴), i.e. quartic time). Breaking this barrier, we introduce the novel Sankoff-style algorithm 'sparsified prediction and alignment of RNAs based on their structure ensembles (SPARSE)', which runs in quadratic time without sequence-based heuristics. To achieve this low complexity, on par with sequence alignment algorithms, SPARSE features strong sparsification based on structural properties of the RNA ensembles. Following PMcomp, SPARSE gains further speed-up from lightweight energy computation. Although all existing lightweight Sankoff-style methods restrict Sankoff's original model by disallowing loop deletions and insertions, SPARSE transfers the Sankoff algorithm to the lightweight energy model completely for the first time. Compared with LocARNA, SPARSE achieves similar alignment and better folding quality in significantly less time (speedup: 3.7). At similar run-time, it aligns low sequence identity instances substantially more accurately than RAF, which uses sequence-based heuristics. © The Author 2015. Published by Oxford University Press.

  8. Real-time cerebellar neuroprosthetic system based on a spiking neural network model of motor learning.

    Science.gov (United States)

    Xu, Tao; Xiao, Na; Zhai, Xiaolong; Kwan Chan, Pak; Tin, Chung

    2018-02-01

    Damage to the brain, as a result of various medical conditions, impacts the everyday life of patients, and there is still no complete cure for neurological disorders. Neuroprostheses that can functionally replace the damaged neural circuit have recently emerged as a possible solution to these problems. Here we describe the development of a real-time cerebellar neuroprosthetic system to substitute neural function in cerebellar circuitry for learning delay eyeblink conditioning (DEC). The system was empowered by a biologically realistic spiking neural network (SNN) model of the cerebellar neural circuit, which considers the neuronal population and anatomical connectivity of the network. The model simulated synaptic plasticity critical for learning DEC. This SNN model was carefully implemented on a field programmable gate array (FPGA) platform for real-time simulation. This hardware system was interfaced in in vivo experiments with anesthetized rats and it used neural spikes recorded online from the animal to learn and trigger conditioned eyeblink in the animal during training. This rat-FPGA hybrid system was able to process neuronal spikes in real-time with an embedded cerebellum model of ~10 000 neurons and reproduce learning of DEC with different inter-stimulus intervals. Our results validated that the system performance is physiologically relevant at both the neural (firing pattern) and behavioral (eyeblink pattern) levels. This integrated system provides the sufficient computation power for mimicking the cerebellar circuit in real-time. The system interacts with the biological system naturally at the spike level and can be generalized for including other neural components (neuron types and plasticity) and neural functions for potential neuroprosthetic applications.

  9. Research on Healthy Anomaly Detection Model Based on Deep Learning from Multiple Time-Series Physiological Signals

    Directory of Open Access Journals (Sweden)

    Kai Wang

    2016-01-01

    Full Text Available Health is vital to every human being. To further improve its already respectable medical technology, the medical community is transitioning towards a proactive approach that anticipates and mitigates risks before illness occurs. This approach requires measuring human physiological signals and analyzing these data at regular intervals. In this paper, we present a novel approach to applying deep learning in physiological signal analysis that allows doctors to identify latent risks. However, extracting high-level information from physiological time-series data is a hard problem faced by the machine learning community. Therefore, in this approach, we apply a model based on a convolutional neural network that automatically learns features from raw physiological signals in an unsupervised manner, and then, based on the learned features, use a multivariate Gaussian anomaly detection method to detect anomalous data. Our experiments show significant performance in physiological signal anomaly detection, making this a promising tool for doctors to identify early signs of illness even if the criteria are unknown a priori.
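
    The final detection stage can be sketched independently of the CNN: fit a multivariate Gaussian to feature vectors of healthy records and flag test points whose squared Mahalanobis distance exceeds a quantile threshold. The random 'features' below stand in for learned CNN representations.

    ```python
    import numpy as np

    # Multivariate Gaussian anomaly detector on (stand-in) feature vectors.
    rng = np.random.default_rng(5)
    train = rng.normal(0, 1, (1000, 8))        # features of healthy records
    mu = train.mean(axis=0)
    cov = np.cov(train, rowvar=False)
    cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(8))   # regularized inverse

    def mahalanobis2(x):
        d = x - mu
        return d @ cov_inv @ d

    # Threshold at the 99th percentile of healthy training distances.
    threshold = np.quantile([mahalanobis2(x) for x in train], 0.99)

    test = rng.normal(0.0, 1.0, 8) + np.array([3, 0, 0, 0, 0, 0, 0, 0])
    print(mahalanobis2(test) > threshold)      # True flags an anomaly
    ```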

  10. Modelling, simulation and dynamic analysis of the time delay model of the recuperative heat exchanger

    Directory of Open Access Journals (Sweden)

    Debeljković Dragutin Lj.

    2016-01-01

    Full Text Available Heat exchangers are frequently used as constructive elements in various plants and their dynamics are very important. Their operation is usually controlled by manipulating inlet fluid temperatures or mass flow rates. On the basis of the accepted and critically clarified assumptions, a linearized mathematical model of the cross-flow heat exchanger has been derived, taking into account the wall dynamics. The model is based on the fundamental law of energy conservation, covers all heat accumulation storages in the process, and leads to a set of partial differential equations (PDE), whose solution is not possible in closed form. In order to overcome this problem, an approach based on physical discretization was applied, with associated time delay at the positions where it was necessary and unavoidable. This is a quite new approach, representing a further extension of previous results which did not include the significant time delay existing in the working media. Simulation results were derived, showing progress in building a model suitable for further treatment, both for analysis and for the control synthesis problem.

  11. Time-dependent Hartree approximation and time-dependent harmonic oscillator model

    International Nuclear Information System (INIS)

    Blaizot, J.P.

    1982-01-01

    We present an analytically soluble model for studying nuclear collective motion within the framework of the time-dependent Hartree (TDH) approximation. The model reduces the TDH equations to the Schroedinger equation of a time-dependent harmonic oscillator. Using canonical transformations and coherent states we derive a few properties of the time-dependent harmonic oscillator which are relevant for applications. We analyse the role of the normal modes in the time evolution of a system governed by TDH equations. We show how these modes couple together due to the anharmonic terms generated by the non-linearity of the theory. (orig.)

  12. Modelling and Comparative Performance Analysis of a Time-Reversed UWB System

    Directory of Open Access Journals (Sweden)

    Popovski K

    2007-01-01

    Full Text Available The effects of multipath propagation lead to a significant decrease in system performance in most of the proposed ultra-wideband communication systems. A time-reversed system utilises the multipath channel impulse response to decrease receiver complexity, through prefiltering at the transmitter. This paper discusses the modelling and comparative performance of a UWB system utilising time-reversed communications. System equations are presented, together with a semianalytical formulation of the level of intersymbol interference and multiuser interference. The standardised IEEE 802.15.3a channel model is applied, and the estimated error performance is compared through simulation with the performance of both time-hopped time-reversed and RAKE-based UWB systems.
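    The core prefiltering idea is easy to demonstrate: transmitting through the time-reversed channel impulse response (CIR) turns the effective channel into the CIR autocorrelation, which concentrates energy in one dominant tap. A minimal baseband sketch, with a hypothetical real-valued CIR (a complex channel would use the conjugated reverse):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical multipath channel impulse response; a real system would
    # estimate it, e.g. from a training sequence.
    h = rng.normal(size=16) * np.exp(-0.3 * np.arange(16))

    symbols = rng.choice([-1.0, 1.0], size=64)        # BPSK data

    # Prefilter at the transmitter with the time-reversed CIR.
    tx = np.convolve(symbols, h[::-1])

    # The effective channel seen by the receiver is the CIR autocorrelation,
    # which concentrates energy in one dominant tap (temporal focusing).
    effective = np.convolve(h[::-1], h)
    print("dominant-tap energy fraction:",
          effective.max() / np.abs(effective).sum())
    ```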

  13. Improvement of a Robotic Manipulator Model Based on Multivariate Residual Modeling

    Directory of Open Access Journals (Sweden)

    Serge Gale

    2017-07-01

    Full Text Available A new method is presented for extending a dynamic model of a six degrees of freedom robotic manipulator. A non-linear multivariate calibration of input–output training data from several typical motion trajectories is carried out with the aim of predicting the model's systematic output error at time (t + 1) from known input references up to and including time (t). A new partial least squares regression (PLSR) based method, nominal PLSR with interactions, was developed and used to handle unmodelled non-linearities. The performance of the new method is compared with least squares (LS). Different cross-validation schemes were compared in order to assess the sampling of the state space based on conventional trajectories. The method developed in the paper can be used as a fault monitoring mechanism and early warning system for sensor failure. The results show that the suggested method improves the trajectory tracking performance of the robotic manipulator by extending the initial dynamic model of the manipulator.
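    A minimal sketch of the calibration step, on hypothetical data: the regressors are the reference inputs up to time t augmented with pairwise interaction terms (a stand-in for the paper's "nominal PLSR with interactions"), and the response is the systematic output error at t + 1.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(2)

    # Hypothetical training data: X holds reference inputs at time t,
    # Y the model's systematic output error observed at time t + 1.
    X = rng.normal(size=(200, 6))
    interactions = np.column_stack([X[:, i] * X[:, j]
                                    for i in range(6) for j in range(i + 1, 6)])
    X_aug = np.hstack([X, interactions])       # "PLSR with interactions"
    Y = X[:, :1] * X[:, 1:2] + 0.1 * rng.normal(size=(200, 1))

    pls = PLSRegression(n_components=5).fit(X_aug, Y)
    correction = pls.predict(X_aug)            # predicted systematic error
    print("residual std before/after:", Y.std(), (Y - correction).std())
    ```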

  14. Presenting a Model Based on Fuzzy Application to Optimize the Time of IBS Projects in Gas Refineries

    Directory of Open Access Journals (Sweden)

    Naderpour Abbas

    2017-01-01

    Full Text Available Nowadays, the construction industry has started to embrace IBS as a method of attaining better construction quality and productivity while reducing risks related to occupational safety and health. Building prefabricated components in factories reduces many problems related to uncertainty in scheduling calculations and project time management. Regarding the use of the IBS method for managing time in projects, former studies, such as Allan Tay's research, indicate that this method can save at least 29% of the overall completion period compared with the conventional method. But besides the mentioned advantages of this technical method, projects could be optimized further in their scheduling calculations. This issue is critical in gas refineries, since special factors, such as the risk of spreading poisonous H2S gas and the requirement to perform projects within short time windows during events such as maintenance overhauls, demand that projects be completed in optimum time. Customary scheduling calculations in project planning use the Critical Path Method (CPM) as a tool for planning project activities. The research of this paper's authors indicated that the Fuzzy Critical Path Method (FCPM) is the best technique to manage uncertainty in project scheduling and can save construction project time compared with the customary methods. This paper aims to present a model based on fuzzy application in CPM calculations to optimize the time of Industrial Building System projects.
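    To illustrate the FCPM idea, the sketch below represents activity durations as triangular fuzzy numbers, adds them along each path of a small hypothetical activity network, and ranks paths by the centroid of the resulting fuzzy duration. This is a generic illustration of fuzzy critical path computation, not the authors' specific model.

    ```python
    # Triangular fuzzy numbers (a, m, b): optimistic, most likely, pessimistic.
    def f_add(x, y):
        return tuple(xi + yi for xi, yi in zip(x, y))

    def centroid(x):
        return sum(x) / 3.0          # simple defuzzification

    # Hypothetical activity network: activity -> (fuzzy duration, successors).
    activities = {
        "A": ((2, 3, 5), ["B", "C"]),
        "B": ((4, 6, 9), ["D"]),
        "C": ((3, 4, 6), ["D"]),
        "D": ((1, 2, 3), []),
    }

    def fuzzy_paths(node, acc=(0, 0, 0), path=()):
        dur, succ = activities[node]
        acc, path = f_add(acc, dur), path + (node,)
        if not succ:
            yield path, acc
        for s in succ:
            yield from fuzzy_paths(s, acc, path)

    # The fuzzy critical path has the largest defuzzified total duration.
    crit = max(fuzzy_paths("A"), key=lambda p: centroid(p[1]))
    print("critical path:", crit[0], "fuzzy duration:", crit[1])
    ```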

  15. Parallel Motion Simulation of Large-Scale Real-Time Crowd in a Hierarchical Environmental Model

    Directory of Open Access Journals (Sweden)

    Xin Wang

    2012-01-01

    Full Text Available This paper presents a parallel real-time crowd simulation method based on a hierarchical environmental model. A dynamical model of the complex environment is constructed to simulate the state transition and propagation of individual motions. By modelling the virtual environment in which the virtual crowds reside, we employ different parallel methods on a topological layer, a path layer and a perceptual layer. We propose a parallel motion path matching method based on the path layer and a parallel crowd simulation method based on the perceptual layer. Large-scale real-time crowd simulation becomes possible with these methods. Numerical experiments are carried out to demonstrate the methods and results.

  16. Real-Time Robust Adaptive Modeling and Scheduling for an Electronic Commerce Server

    Science.gov (United States)

    Du, Bing; Ruan, Chun

    With the increasing importance and pervasiveness of Internet services, it is becoming a challenge for the proliferation of electronic commerce services to provide performance guarantees under extreme overload. This paper describes a real-time optimization modeling and scheduling approach for performance guarantee of electronic commerce servers. We show that an electronic commerce server may be simulated as a multi-tank system. A robust adaptive server model is subject to unknown additive load disturbances and uncertain model matching. Overload control techniques are based on adaptive admission control to achieve timing guarantees. We evaluate the performance of the model using a complex simulation that is subjected to varying model parameters and massive overload.

  17. Hidden Markov Item Response Theory Models for Responses and Response Times.

    Science.gov (United States)

    Molenaar, Dylan; Oberski, Daniel; Vermunt, Jeroen; De Boeck, Paul

    2016-01-01

    Current approaches to model responses and response times to psychometric tests solely focus on between-subject differences in speed and ability. Within subjects, speed and ability are assumed to be constants. Violations of this assumption are generally absorbed in the residual of the model. As a result, within-subject departures from the between-subject speed and ability level remain undetected. These departures may be of interest to the researcher as they reflect differences in the response processes adopted on the items of a test. In this article, we propose a dynamic approach for responses and response times based on hidden Markov modeling to account for within-subject differences in responses and response times. A simulation study is conducted to demonstrate acceptable parameter recovery and acceptable performance of various fit indices in distinguishing between different models. In addition, both a confirmatory and an exploratory application are presented to demonstrate the practical value of the modeling approach.
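    A heavily simplified sketch of the dynamic idea, modelling only log response times with a two-state Gaussian hidden Markov model (the article's model is a joint IRT model of responses and response times); it assumes the hmmlearn package:

    ```python
    import numpy as np
    from hmmlearn.hmm import GaussianHMM   # assumed dependency

    rng = np.random.default_rng(3)

    # Simulated log response times of a respondent who switches between a
    # "fast" and a "slow" latent state over the items of a test.
    states = (np.sin(np.linspace(0, 3 * np.pi, 60)) > 0).astype(int)
    log_rt = np.where(states == 0,
                      rng.normal(-0.5, 0.2, 60),   # fast state
                      rng.normal(0.5, 0.2, 60))    # slow state

    hmm = GaussianHMM(n_components=2, covariance_type="diag", random_state=0)
    hmm.fit(log_rt.reshape(-1, 1))
    print("state means:", hmm.means_.ravel())
    print("decoded states:", hmm.predict(log_rt.reshape(-1, 1)))
    ```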

  18. A Memory-Based Model of Hick's Law

    Science.gov (United States)

    Schneider, Darryl W.; Anderson, John R.

    2011-01-01

    We propose and evaluate a memory-based model of Hick's law, the approximately linear increase in choice reaction time with the logarithm of set size (the number of stimulus-response alternatives). According to the model, Hick's law reflects a combination of associative interference during retrieval from declarative memory and occasional savings…
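    Hick's law itself is compactly stated; with N stimulus-response alternatives and empirical constants a and b:

    ```latex
    \mathrm{RT} \;=\; a + b\,\log_2 N
    ```

    The memory-based account summarized above seeks to explain why retrieval interference produces this approximately logarithmic growth.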

  19. Multidimensional k-nearest neighbor model based on EEMD for financial time series forecasting

    Science.gov (United States)

    Zhang, Ningning; Lin, Aijing; Shang, Pengjian

    2017-07-01

    In this paper, we propose a new two-stage methodology that combines ensemble empirical mode decomposition (EEMD) with a multidimensional k-nearest neighbor model (MKNN) in order to forecast the closing price and high price of stocks simultaneously. Modified k-nearest neighbor (KNN) algorithms are finding increasingly wide application in prediction across many fields. Empirical mode decomposition (EMD) decomposes a nonlinear and non-stationary signal into a series of intrinsic mode functions (IMFs); however, it cannot reveal characteristic information of the signal with much accuracy, as a result of mode mixing. Ensemble empirical mode decomposition (EEMD), an improved version of EMD, resolves this weakness by adding white noise to the original data. With EEMD, components with true physical meaning can be extracted from the time series. Utilizing the advantages of EEMD and MKNN, the proposed ensemble empirical mode decomposition combined with multidimensional k-nearest neighbor model (EEMD-MKNN) has high predictive precision for short-term forecasting. Moreover, we extend this methodology to the two-dimensional case to forecast the closing price and high price of four stock indices (NAS, S&P500, DJI and STI) at the same time. The results indicate that the proposed EEMD-MKNN model has higher forecast precision than EMD-KNN, the KNN method and ARIMA.
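    A one-dimensional sketch of the two-stage idea, assuming the PyEMD package for EEMD and scikit-learn for the KNN regressor; the data, lag length and k are illustrative:

    ```python
    import numpy as np
    from PyEMD import EEMD                            # assumed dependency
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(4)
    price = np.cumsum(rng.normal(size=400)) + 50.0    # stand-in closing prices

    imfs = EEMD().eemd(price)                         # noise-assisted decomposition

    def knn_forecast(x, lags=5, k=5):
        """One-step-ahead KNN forecast from lagged windows of a single IMF."""
        X = np.column_stack([x[i:len(x) - lags + i] for i in range(lags)])
        y = x[lags:]
        model = KNeighborsRegressor(n_neighbors=k).fit(X, y)
        return model.predict(x[-lags:].reshape(1, -1))[0]

    # Forecast each IMF separately, then sum the component forecasts.
    print("one-step-ahead forecast:", sum(knn_forecast(imf) for imf in imfs))
    ```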

  20. Big Data-Driven Based Real-Time Traffic Flow State Identification and Prediction

    Directory of Open Access Journals (Sweden)

    Hua-pu Lu

    2015-01-01

    Full Text Available With the rapid development of urban informatization, the era of big data is coming. To satisfy the demand for traffic congestion early warning, this paper studies methods of real-time traffic flow state identification and prediction based on big-data-driven theory. Traffic big data holds several characteristics, such as temporal correlation, spatial correlation, historical correlation, and multistate behaviour. Traffic flow state quantification, the basis of traffic flow state identification, is achieved by a SAGA-FCM (simulated annealing genetic algorithm based fuzzy c-means) traffic clustering model. Considering computational simplicity and predictive accuracy, a bilevel optimization model for regional traffic flow correlation analysis is established to predict traffic flow parameters based on temporal-spatial-historical correlation. A two-stage model for correction coefficient optimization is put forward to simplify the bilevel optimization model. The first-stage model calculates the number of temporal-spatial-historical correlation variables. The second-stage model calculates the basic model formulation of the regional traffic flow correlation. A case study based on a real-world road network in Beijing, China, is implemented to test the efficiency and applicability of the proposed modeling and computing methods.
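    The clustering core is ordinary fuzzy c-means; the SAGA part of the paper (simulated annealing plus genetic algorithm) tunes the clustering and is omitted here. A plain-numpy sketch on toy traffic observations (speed, volume, occupancy):

    ```python
    import numpy as np

    def fuzzy_c_means(X, c=3, m=2.0, iters=100, seed=0):
        """Plain fuzzy c-means: alternate centroid and membership updates."""
        rng = np.random.default_rng(seed)
        U = rng.random((len(X), c))
        U /= U.sum(axis=1, keepdims=True)            # initial fuzzy memberships
        for _ in range(iters):
            W = U ** m
            centers = (W.T @ X) / W.sum(axis=0)[:, None]
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
            inv = d ** (-2.0 / (m - 1.0))
            U = inv / inv.sum(axis=1, keepdims=True)
        return centers, U

    # Toy observations: columns are speed (km/h), volume (veh/h), occupancy.
    X = np.array([[80, 300, 0.10], [75, 320, 0.12], [30, 900, 0.60],
                  [28, 880, 0.65], [55, 600, 0.35], [52, 640, 0.33]])
    centers, U = fuzzy_c_means(X, c=3)
    print(np.round(centers, 2))     # one centre per traffic flow state
    ```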

  1. Distribution Locational Real-Time Pricing Based Smart Building Control and Management

    Energy Technology Data Exchange (ETDEWEB)

    Hao, Jun; Dai, Xiaoxiao; Zhang, Yingchen; Zhang, Jun; Gao, Wenzhong

    2016-11-21

    This paper proposes a real-virtual parallel computing scheme for smart building operations aimed at augmenting overall social welfare. The University of Denver's campus power grid and Ritchie fitness center are used to demonstrate the proposed approach. An artificial virtual system is built in parallel to the real physical system to evaluate the overall social cost of building operation, based on a social-science-based working productivity model, a numerical-experiment-based building energy consumption model and a power-system-based real-time pricing mechanism. Through interactive feedback exchanged between the real and virtual systems, enlarged social welfare, including monetary cost reduction and energy saving, as well as working productivity improvements, can be achieved.

  2. Comparison of interplanetary CME arrival times and solar wind parameters based on the WSA-ENLIL model with three cone types and observations

    Science.gov (United States)

    Jang, Soojeong; Moon, Y.-J.; Lee, Jae-Ok; Na, Hyeonock

    2014-09-01

    We have made a comparison between coronal mass ejection (CME)-associated shock propagations based on the Wang-Sheeley-Arge (WSA)-ENLIL model using three cone types and in situ observations. For this we use 28 full-halo CMEs, whose cone parameters are determined and their corresponding interplanetary shocks were observed at the Earth, from 2001 to 2002. We consider three different cone types (an asymmetric cone model, an ice cream cone model, and an elliptical cone model) to determine 3-D CME cone parameters (radial velocity, angular width, and source location), which are the input values of the WSA-ENLIL model. The mean absolute error of the CME-associated shock travel times for the WSA-ENLIL model using the ice cream cone model is 9.9 h, which is about 1 h smaller than those of the other models. We compare the peak values and profiles of solar wind parameters (speed and density) with in situ observations. We find that the root-mean-square errors of solar wind peak speed and density for the ice cream and asymmetric cone model are about 190 km/s and 24/cm³, respectively. We estimate the cross correlations between the models and observations within the time lag of ± 2 days from the shock travel time. The correlation coefficients between the solar wind speeds from the WSA-ENLIL model using three cone types and in situ observations are approximately 0.7, which is larger than those of solar wind density (cc ≈ 0.6). Our preliminary investigations show that the ice cream cone model seems to be better than the other cone models in terms of the input parameters of the WSA-ENLIL model.

  3. An eigenvalue approach for the automatic scaling of unknowns in model-based reconstructions: Application to real-time phase-contrast flow MRI.

    Science.gov (United States)

    Tan, Zhengguo; Hohage, Thorsten; Kalentev, Oleksandr; Joseph, Arun A; Wang, Xiaoqing; Voit, Dirk; Merboldt, K Dietmar; Frahm, Jens

    2017-12-01

    The purpose of this work is to develop an automatic method for the scaling of unknowns in model-based nonlinear inverse reconstructions and to evaluate its application to real-time phase-contrast (RT-PC) flow magnetic resonance imaging (MRI). Model-based MRI reconstructions of parametric maps which describe a physical or physiological function require the solution of a nonlinear inverse problem, because the list of unknowns in the extended MRI signal equation comprises multiple functional parameters and all coil sensitivity profiles. Iterative solutions therefore rely on an appropriate scaling of unknowns to numerically balance partial derivatives and regularization terms. The scaling of unknowns emerges as a self-adjoint and positive-definite matrix which is expressible by its maximal eigenvalue and solved by power iterations. The proposed method is applied to RT-PC flow MRI based on highly undersampled acquisitions. Experimental validations include numerical phantoms providing ground truth and a wide range of human studies in the ascending aorta, carotid arteries, deep veins during muscular exercise and cerebrospinal fluid during deep respiration. For RT-PC flow MRI, model-based reconstructions with automatic scaling not only offer velocity maps with high spatiotemporal acuity and much reduced phase noise, but also ensure fast convergence as well as accurate and precise velocities for all conditions tested, i.e. for different velocity ranges, vessel sizes and the simultaneous presence of signals with velocity aliasing. In summary, the proposed automatic scaling of unknowns in model-based MRI reconstructions yields quantitatively reliable velocities for RT-PC flow MRI in various experimental scenarios. Copyright © 2017 John Wiley & Sons, Ltd.
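    The computational core, finding the maximal eigenvalue of a self-adjoint positive-definite operator by power iterations, can be sketched generically; the toy operator below stands in for the (much larger) operator arising from the MRI signal model:

    ```python
    import numpy as np

    def max_eigenvalue(apply_A, dim, iters=50, seed=0):
        """Power iteration for the maximal eigenvalue of a self-adjoint,
        positive-definite operator given only as a matrix-vector product."""
        rng = np.random.default_rng(seed)
        v = rng.normal(size=dim)
        v /= np.linalg.norm(v)
        lam = 0.0
        for _ in range(iters):
            w = apply_A(v)
            lam = v @ w                 # Rayleigh quotient estimate
            v = w / np.linalg.norm(w)
        return lam

    # Toy stand-in for the operator whose maximal eigenvalue sets the scaling.
    A = np.diag([4.0, 1.0, 0.25])
    print(max_eigenvalue(lambda v: A @ v, dim=3))   # ~4.0
    ```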

  4. Real-time sensor failure detection by dynamic modelling of a PWR plant

    International Nuclear Information System (INIS)

    Turkcan, E.; Ciftcioglu, O.

    1992-06-01

    Signal validation and sensor failure detection is an important problem in real-time nuclear power plant (NPP) surveillance. Although conventional sensor redundancy is, in a way, a solution, identification of the faulty sensor is necessary before further preventive actions can be taken. A comprehensive solution is one in which any sensor reading is verified against its model-based estimated counterpart in real time. Such a realization is accomplished by means of dynamic state estimation using a Kalman filter modelling technique. The method is investigated by means of real-time data from the steam generator of the Borssele nuclear power plant, and it has proved satisfactory for real-time sensor failure detection as well as for model verification. (author). 5 refs.; 6 figs.; 1 tab
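    A minimal scalar sketch of the idea, assuming a slowly varying process variable and Gaussian noise: a Kalman filter produces a model-based estimate, and a sample is flagged when the normalized innovation (measurement minus prediction) exceeds a gate. The rates, gate and injected fault below are illustrative only.

    ```python
    import numpy as np

    def kalman_fault_monitor(z, q=1e-4, r=0.05, gate=4.0):
        """Scalar Kalman filter tracking a slowly varying variable; flags
        samples whose normalized innovation exceeds the gate."""
        x, p = z[0], 1.0
        flags = []
        for zk in z:
            p = p + q                       # predict
            s = p + r                       # innovation variance
            nu = zk - x                     # innovation (measured - estimated)
            flags.append(abs(nu) / np.sqrt(s) > gate)
            k = p / s                       # update
            x, p = x + k * nu, (1 - k) * p
        return np.array(flags)

    rng = np.random.default_rng(5)
    signal = 10.0 + 0.2 * rng.normal(size=200)
    signal[120] += 5.0                      # injected sensor fault
    print(np.nonzero(kalman_fault_monitor(signal, r=0.04))[0])
    ```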

  5. Discrete random walk models for space-time fractional diffusion

    International Nuclear Information System (INIS)

    Gorenflo, Rudolf; Mainardi, Francesco; Moretti, Daniele; Pagnini, Gianni; Paradisi, Paolo

    2002-01-01

    A physical-mathematical approach to anomalous diffusion may be based on generalized diffusion equations (containing derivatives of fractional order in space and/or time) and related random walk models. By the space-time fractional diffusion equation we mean an evolution equation obtained from the standard linear diffusion equation by replacing the second-order space derivative with a Riesz-Feller derivative of order α ∈ (0,2] and skewness θ (|θ| ≤ min{α, 2-α}), and the first-order time derivative with a Caputo derivative of order β ∈ (0,1]. Such an evolution equation implies for the flux a fractional Fick's law which accounts for spatial and temporal non-locality. The fundamental solution (for the Cauchy problem) of the fractional diffusion equation can be interpreted as a probability density evolving in time of a peculiar self-similar stochastic process that we view as a generalized diffusion process. By adopting appropriate finite-difference schemes of solution, we generate models of random walk discrete in space and time suitable for simulating random variables whose spatial probability density evolves in time according to this fractional diffusion equation.
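    Written out, the space-time fractional diffusion equation described above reads, with a Caputo time derivative of order β and a Riesz-Feller space derivative of order α and skewness θ:

    ```latex
    {}^{C}D_{t}^{\beta}\,u(x,t) \;=\; {}_{x}D_{\theta}^{\alpha}\,u(x,t),
    \qquad 0<\alpha\le 2,\quad |\theta|\le\min\{\alpha,\,2-\alpha\},\quad 0<\beta\le 1
    ```

    Setting α = 2, θ = 0 and β = 1 recovers the standard linear diffusion equation.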

  6. Model-based testing for embedded systems

    CERN Document Server

    Zander, Justyna; Mosterman, Pieter J

    2011-01-01

    What the experts have to say about Model-Based Testing for Embedded Systems: "This book is exactly what is needed at the exact right time in this fast-growing area. From its beginnings over 10 years ago of deriving tests from UML statecharts, model-based testing has matured into a topic with both breadth and depth. Testing embedded systems is a natural application of MBT, and this book hits the nail exactly on the head. Numerous topics are presented clearly, thoroughly, and concisely in this cutting-edge book. The authors are world-class leading experts in this area and teach us well-used

  7. Forecasting business cycle with chaotic time series based on neural network with weighted fuzzy membership functions

    International Nuclear Information System (INIS)

    Chai, Soo H.; Lim, Joon S.

    2016-01-01

    This study presents a forecasting model of cyclical fluctuations of the economy based on the time delay coordinate embedding method. The model uses a neuro-fuzzy network called a neural network with weighted fuzzy membership functions (NEWFM). The preprocessed time series of the leading composite index, obtained using the time delay coordinate embedding method, is used as input data to the NEWFM to forecast the business cycle. A comparative study is conducted using other methods based on the wavelet transform and principal component analysis. The forecasting results are tested using a linear regression analysis to compare the approximation of the input data against the target class, gross domestic product (GDP). The chaos-based model captures nonlinear dynamics and interactions within the system, which the other two models ignore. The test results demonstrate that the chaos-based method significantly improves the prediction capability, thereby showing superior performance over the other methods.
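    The time delay coordinate embedding step can be sketched directly; the toy series below stands in for the preprocessed leading composite index, and the embedding dimension and lag are illustrative choices:

    ```python
    import numpy as np

    def delay_embed(x, dim=3, tau=2):
        """Time-delay coordinate embedding: rows are
        [x(t), x(t - tau), ..., x(t - (dim-1)*tau)]."""
        n = len(x) - (dim - 1) * tau
        return np.column_stack([x[(dim - 1 - i) * tau:(dim - 1 - i) * tau + n]
                                for i in range(dim)])

    # Toy stand-in for a preprocessed leading composite index.
    t = np.arange(300)
    index = np.sin(0.1 * t) + 0.05 * np.random.default_rng(6).normal(size=300)
    X = delay_embed(index, dim=3, tau=2)
    print(X.shape)          # (296, 3) -> input vectors for the forecaster
    ```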

  8. Time-varying coefficient vector autoregressions model based on dynamic correlation with an application to crude oil and stock markets

    International Nuclear Information System (INIS)

    Lu, Fengbin; Qiao, Han; Wang, Shouyang; Lai, Kin Keung; Li, Yuze

    2017-01-01

    This paper proposes a new time-varying coefficient vector autoregressions (VAR) model, in which the coefficient is a linear function of dynamic lagged correlation. The proposed model allows for flexibility in choices of dynamic correlation models (e.g. dynamic conditional correlation generalized autoregressive conditional heteroskedasticity (GARCH) models, Markov-switching GARCH models and multivariate stochastic volatility models), which indicates that it can describe many types of time-varying causal effects. Time-varying causal relations between West Texas Intermediate (WTI) crude oil and the US Standard and Poor’s 500 (S&P 500) stock markets are examined by the proposed model. The empirical results show that their causal relations evolve with time and display complex characters. Both positive and negative causal effects of the WTI on the S&P 500 in the subperiods have been found and confirmed by the traditional VAR models. Similar results have been obtained in the causal effects of S&P 500 on WTI. In addition, the proposed model outperforms the traditional VAR model.
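    The structure of the proposed model can be written compactly. In a hedged bivariate sketch, with ρ_{t-1} the lagged dynamic correlation (from, e.g., a DCC-GARCH model), the VAR coefficient matrix is a linear function of ρ_{t-1}; this illustrates the structure rather than the paper's exact specification:

    ```latex
    y_t \;=\; c + \bigl(A_0 + A_1\,\rho_{t-1}\bigr)\,y_{t-1} + \varepsilon_t,
    \qquad \varepsilon_t \sim \mathcal{N}(0,\Sigma_t)
    ```

    Setting A_1 = 0 recovers a constant-coefficient VAR of the kind used for comparison.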

  9. Time-varying coefficient vector autoregressions model based on dynamic correlation with an application to crude oil and stock markets

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Fengbin, E-mail: fblu@amss.ac.cn [Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190 (China); Qiao, Han, E-mail: qiaohan@ucas.ac.cn [School of Economics and Management, University of Chinese Academy of Sciences, Beijing 100190 (China); Wang, Shouyang, E-mail: sywang@amss.ac.cn [School of Economics and Management, University of Chinese Academy of Sciences, Beijing 100190 (China); Lai, Kin Keung, E-mail: mskklai@cityu.edu.hk [Department of Management Sciences, City University of Hong Kong (Hong Kong); Li, Yuze, E-mail: richardyz.li@mail.utoronto.ca [Department of Industrial Engineering, University of Toronto (Canada)

    2017-01-15

    This paper proposes a new time-varying coefficient vector autoregressions (VAR) model, in which the coefficient is a linear function of dynamic lagged correlation. The proposed model allows for flexibility in choices of dynamic correlation models (e.g. dynamic conditional correlation generalized autoregressive conditional heteroskedasticity (GARCH) models, Markov-switching GARCH models and multivariate stochastic volatility models), which indicates that it can describe many types of time-varying causal effects. Time-varying causal relations between West Texas Intermediate (WTI) crude oil and the US Standard and Poor’s 500 (S&P 500) stock markets are examined by the proposed model. The empirical results show that their causal relations evolve with time and display complex characters. Both positive and negative causal effects of the WTI on the S&P 500 in the subperiods have been found and confirmed by the traditional VAR models. Similar results have been obtained in the causal effects of S&P 500 on WTI. In addition, the proposed model outperforms the traditional VAR model.

  10. Time series analysis as input for clinical predictive modeling: modeling cardiac arrest in a pediatric ICU.

    Science.gov (United States)

    Kennedy, Curtis E; Turley, James P

    2011-10-24

    Thousands of children experience cardiac arrest events every year in pediatric intensive care units. Most of these children die. Cardiac arrest prediction tools are used as part of medical emergency team evaluations to identify patients in standard hospital beds that are at high risk for cardiac arrest. There are no models to predict cardiac arrest in pediatric intensive care units though, where the risk of an arrest is 10 times higher than for standard hospital beds. Current tools are based on a multivariable approach that does not characterize deterioration, which often precedes cardiac arrests. Characterizing deterioration requires a time series approach. The purpose of this study is to propose a method that will allow for time series data to be used in clinical prediction models. Successful implementation of these methods has the potential to bring arrest prediction to the pediatric intensive care environment, possibly allowing for interventions that can save lives and prevent disabilities. We reviewed prediction models from nonclinical domains that employ time series data, and identified the steps that are necessary for building predictive models using time series clinical data. We illustrate the method by applying it to the specific case of building a predictive model for cardiac arrest in a pediatric intensive care unit. Time course analysis studies from genomic analysis provided a modeling template that was compatible with the steps required to develop a model from clinical time series data. The steps include: 1) selecting candidate variables; 2) specifying measurement parameters; 3) defining data format; 4) defining time window duration and resolution; 5) calculating latent variables for candidate variables not directly measured; 6) calculating time series features as latent variables; 7) creating data subsets to measure model performance effects attributable to various classes of candidate variables; 8) reducing the number of candidate features; 9
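    Several of the listed steps amount to cutting each vital-sign stream into windows and computing per-window features (latent variables) such as trend. A minimal sketch, with a hypothetical drifting heart-rate series:

    ```python
    import numpy as np

    def window_features(series, width, step):
        """Cut a vital-sign series into windows and compute simple time
        series features per window: mean, variability and trend."""
        feats = []
        for start in range(0, len(series) - width + 1, step):
            w = series[start:start + width]
            slope = np.polyfit(np.arange(width), w, 1)[0]   # deterioration trend
            feats.append([w.mean(), w.std(), slope])
        return np.array(feats)

    rng = np.random.default_rng(7)
    heart_rate = 120 + np.linspace(0, 25, 240) + rng.normal(0, 3, 240)
    F = window_features(heart_rate, width=60, step=30)
    print(F.round(2))     # columns: mean, variability, trend per window
    ```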

  11. Quadratic Term Structure Models in Discrete Time

    OpenAIRE

    Marco Realdon

    2006-01-01

    This paper extends the results on quadratic term structure models in continuous time to the discrete time setting. The continuous time setting can be seen as a special case of the discrete time one. Recursive closed form solutions for zero coupon bonds are provided even in the presence of multiple correlated underlying factors. Pricing bond options requires simple integration. Model parameters may well be time dependent without scuppering such tractability. Model estimation does not require a r...

  12. A simple analytical model for dynamics of time-varying target leverage ratios

    Science.gov (United States)

    Lo, C. F.; Hui, C. H.

    2012-03-01

    In this paper we have formulated a simple theoretical model for the dynamics of the time-varying target leverage ratio of a firm under some assumptions based upon empirical observations. In our theoretical model the time evolution of the target leverage ratio of a firm can be derived self-consistently from a set of coupled Ito's stochastic differential equations governing the leverage ratios of an ensemble of firms by the nonlinear Fokker-Planck equation approach. The theoretically derived time paths of the target leverage ratio bear great resemblance to those used in the time-dependent stationary-leverage (TDSL) model [Hui et al., Int. Rev. Financ. Analy. 15, 220 (2006)]. Thus, our simple model is able to provide a theoretical foundation for the selected time paths of the target leverage ratio in the TDSL model. We also examine how the pace of the adjustment of a firm's target ratio, the volatility of the leverage ratio and the current leverage ratio affect the dynamics of the time-varying target leverage ratio. Hence, with the proposed dynamics of the time-dependent target leverage ratio, the TDSL model can be readily applied to generate the default probabilities of individual firms and to assess the default risk of the firms.

  13. The application of time series models to cloud field morphology analysis

    Science.gov (United States)

    Chin, Roland T.; Jau, Jack Y. C.; Weinman, James A.

    1987-01-01

    A modeling method for the quantitative description of remotely sensed cloud field images is presented. A two-dimensional texture modeling scheme based on one-dimensional time series procedures is adopted for this purpose. The time series procedure used is the seasonal autoregressive moving average (ARMA) process of Box and Jenkins. Cloud field properties such as directionality, clustering and cloud coverage can be retrieved by this method. It has been demonstrated that a cloud field image can be quantitatively defined by a small set of parameters and that synthesized surrogates can be reconstructed from these model parameters. This method enables cloud climatology to be studied quantitatively.

  14. A practical MGA-ARIMA model for forecasting real-time dynamic rain-induced attenuation

    Science.gov (United States)

    Gong, Shuhong; Gao, Yifeng; Shi, Houbao; Zhao, Ge

    2013-05-01

    A novel and practical modified genetic algorithm (MGA)-autoregressive integrated moving average (ARIMA) model for forecasting real-time dynamic rain-induced attenuation has been established by combining genetic algorithm ideas with the ARIMA model. Owing to the introduction of the MGA into the ARIMA(1,1,7) model, the MGA-ARIMA model has the potential to be conveniently applied in any country or area by creating a parameter database for the ARIMA(1,1,7) model. Such a parameter database is given in this paper, based on attenuation data measured in Xi'an, China, and methods to create parameter databases for other countries or areas are offered as well. Based on the experimental results, the MGA-ARIMA model has proved practical for forecasting dynamic rain-induced attenuation in real time. The novel model given in this paper is significant for developing adaptive fade mitigation technologies in the millimeter wave bands.
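    The ARIMA(1,1,7) backbone of the method is available off the shelf; a minimal sketch with statsmodels, using a synthetic stand-in for measured attenuation (the paper's MGA step, which tunes the model parameters, is omitted):

    ```python
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(8)
    # Synthetic stand-in for measured rain attenuation (dB); real data would
    # come from a beacon receiver or radiometer.
    atten = np.abs(np.cumsum(rng.normal(0.0, 0.05, 600))) + 1.0

    model = ARIMA(atten, order=(1, 1, 7)).fit()
    print(model.forecast(steps=10).round(3))   # next attenuation samples
    ```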

  15. The influence of noise on nonlinear time series detection based on Volterra-Wiener-Korenberg model

    Energy Technology Data Exchange (ETDEWEB)

    Lei Min [State Key Laboratory of Vibration, Shock and Noise, Shanghai Jiao Tong University, Shanghai 200030 (China)], E-mail: leimin@sjtu.edu.cn; Meng Guang [State Key Laboratory of Vibration, Shock and Noise, Shanghai Jiao Tong University, Shanghai 200030 (China)

    2008-04-15

    This paper studies the influence of noise on the Volterra-Wiener-Korenberg (VWK) nonlinear test model. Our numerical results reveal that different types of noise lead to different behavior of VWK model detection. Under dynamic noise, it is difficult to distinguish chaos from nonchaotic but nonlinear determinism. For time series, measurement noise has no impact on chaos determinism detection. This paper also discusses the behavior of VWK model detection with surrogate data for different noises.

  16. Real-time cerebellar neuroprosthetic system based on a spiking neural network model of motor learning

    Science.gov (United States)

    Xu, Tao; Xiao, Na; Zhai, Xiaolong; Chan, Pak Kwan; Tin, Chung

    2018-02-01

    Objective. Damage to the brain, as a result of various medical conditions, impacts the everyday life of patients and there is still no complete cure to neurological disorders. Neuroprostheses that can functionally replace the damaged neural circuit have recently emerged as a possible solution to these problems. Here we describe the development of a real-time cerebellar neuroprosthetic system to substitute neural function in cerebellar circuitry for learning delay eyeblink conditioning (DEC). Approach. The system was empowered by a biologically realistic spiking neural network (SNN) model of the cerebellar neural circuit, which considers the neuronal population and anatomical connectivity of the network. The model simulated synaptic plasticity critical for learning DEC. This SNN model was carefully implemented on a field programmable gate array (FPGA) platform for real-time simulation. This hardware system was interfaced in in vivo experiments with anesthetized rats and it used neural spikes recorded online from the animal to learn and trigger conditioned eyeblink in the animal during training. Main results. This rat-FPGA hybrid system was able to process neuronal spikes in real-time with an embedded cerebellum model of ~10 000 neurons and reproduce learning of DEC with different inter-stimulus intervals. Our results validated that the system performance is physiologically relevant at both the neural (firing pattern) and behavioral (eyeblink pattern) levels. Significance. This integrated system provides the sufficient computation power for mimicking the cerebellar circuit in real-time. The system interacts with the biological system naturally at the spike level and can be generalized for including other neural components (neuron types and plasticity) and neural functions for potential neuroprosthetic applications.

  17. Model-based testing for space-time interaction using point processes: An application to psychiatric hospital admissions in an urban area.

    Science.gov (United States)

    Meyer, Sebastian; Warnke, Ingeborg; Rössler, Wulf; Held, Leonhard

    2016-05-01

    Spatio-temporal interaction is inherent to cases of infectious diseases and occurrences of earthquakes, whereas the spread of other events, such as cancer or crime, is less evident. Statistical significance tests of space-time clustering usually assess the correlation between the spatial and temporal (transformed) distances of the events. Although appealing through simplicity, these classical tests do not adjust for the underlying population nor can they account for a distance decay of interaction. We propose to use the framework of an endemic-epidemic point process model to jointly estimate a background event rate explained by seasonal and areal characteristics, as well as a superposed epidemic component representing the hypothesis of interest. We illustrate this new model-based test for space-time interaction by analysing psychiatric inpatient admissions in Zurich, Switzerland (2007-2012). Several socio-economic factors were found to be associated with the admission rate, but there was no evidence of general clustering of the cases. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Multivariate statistical modelling based on generalized linear models

    CERN Document Server

    Fahrmeir, Ludwig

    1994-01-01

    This book is concerned with the use of generalized linear models for univariate and multivariate regression analysis. Its emphasis is to provide a detailed introductory survey of the subject based on the analysis of real data drawn from a variety of subjects including the biological sciences, economics, and the social sciences. Where possible, technical details and proofs are deferred to an appendix in order to provide an accessible account for non-experts. Topics covered include: models for multi-categorical responses, model checking, time series and longitudinal data, random effects models, and state-space models. Throughout, the authors have taken great pains to discuss the underlying theoretical ideas in ways that relate well to the data at hand. As a result, numerous researchers whose work relies on the use of these models will find this an invaluable account to have on their desks. "The basic aim of the authors is to bring together and review a large part of recent advances in statistical modelling of m...

  19. Comments on a time-dependent version of the linear-quadratic model

    International Nuclear Information System (INIS)

    Tucker, S.L.; Travis, E.L.

    1990-01-01

    The accuracy and interpretation of the 'LQ + time' model are discussed. Evidence is presented, based on data in the literature, that this model does not accurately describe the changes in isoeffect dose occurring with protraction of the overall treatment time during fractionated irradiation of the lung. This lack of fit of the model explains, in part, the surprisingly large values of γ/α that have been derived from experimental lung data. The large apparent time factors for lung suggested by the model are also partly explained by the fact that γT/α, despite having units of dose, actually measures the influence of treatment time on the effect scale, not the dose scale, and is shown to consistently overestimate the change in total dose. The unusually high values of α/β that have been derived for lung using the model are shown to be influenced by the method by which the model was fitted to data. Reanalyses of the data using a more statistically valid regression procedure produce estimates of α/β more typical of those usually cited for lung. Most importantly, published isoeffect data from lung indicate that the true deviation from the linear-quadratic (LQ) model is nonlinear in time, instead of linear, and also depends on other factors such as the effect level and the size of dose per fraction. Thus, the authors do not advocate the use of the 'LQ + time' expression as a general isoeffect model. (author). 32 refs.; 3 figs.; 1 tab
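    For reference, one common way to write the 'LQ + time' isoeffect relation, ignoring any lag (kick-off) time, uses total dose D = nd delivered in n fractions of size d over overall time T; the quantity (γ/α)T is the dose-unit term discussed above:

    ```latex
    \frac{E}{\alpha} \;=\; D\left(1 + \frac{d}{\alpha/\beta}\right) \;-\; \frac{\gamma}{\alpha}\,T
    ```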

  20. Real-time GIS data model and sensor web service platform for environmental data management.

    Science.gov (United States)

    Gong, Jianya; Geng, Jing; Chen, Zeqiang

    2015-01-09

    Effective environmental data management is meaningful for human health. In the past, environmental data management involved developing a specific environmental data management system, but this method often lacked real-time data retrieval and sharing/interoperation capabilities. With the development of information technology, a Geospatial Service Web method is proposed that can be employed for environmental data management. The purpose of this study is to determine a method to realize environmental data management under the Geospatial Service Web framework. A real-time GIS (Geographic Information System) data model and a Sensor Web service platform to realize environmental data management under this framework are proposed. The real-time GIS data model manages real-time data, and the Sensor Web service platform, implemented on Sensor Web technologies, supports the realization of that data model. Real-time environmental data, such as meteorological data, air quality data, soil moisture data, soil temperature data, and landslide data, are managed in the Sensor Web service platform. In addition, two use cases, real-time air quality monitoring and real-time soil moisture monitoring, based on the real-time GIS data model in the Sensor Web service platform are realized and demonstrated. The total times taken by the two experiments are 3.7 s and 9.2 s, respectively. The experimental results show that the method integrating the real-time GIS data model and the Sensor Web service platform is an effective way to manage environmental data under the Geospatial Service Web framework.

  1. Evaluating Emulation-based Models of Distributed Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Stephen T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Cyber Initiatives; Gabert, Kasimir G. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Cyber Initiatives; Tarman, Thomas D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Emulytics Initiatives

    2017-08-01

    Emulation-based models of distributed computing systems are collections of virtual machines, virtual networks, and other emulation components configured to stand in for operational systems when performing experimental science, training, analysis of design alternatives, test and evaluation, or idea generation. As with any tool, we should carefully evaluate whether our uses of emulation-based models are appropriate and justified. Otherwise, we run the risk of using a model incorrectly and creating meaningless results. The various uses of emulation-based models each have their own goals and deserve thoughtful evaluation. In this paper, we enumerate some of these uses and describe approaches that one can take to build an evidence-based case that a use of an emulation-based model is credible. Predictive uses of emulation-based models, where we expect a model to tell us something true about the real world, set the bar especially high, and the principal evaluation method, called validation, is commensurately rigorous. We spend the majority of our time describing and demonstrating the validation of a simple predictive model using a well-established methodology inherited from decades of development in the computational science and engineering community.

  2. TRSM-a thermal-hydraulic real-time simulation model for PWR

    International Nuclear Information System (INIS)

    Zhou Weichang

    1997-01-01

    TRSM (a Thermal-hydraulic Real-time Simulation Model) has been developed for PWR real-time simulation and best-estimate prediction of normal operating and abnormal accident conditions. It is a non-equilibrium two-phase flow thermal-hydraulic model based on five basic conservation equations. A drift flux model is used to account for the unequal velocities of the liquid and gaseous mixture, with or without the presence of noncondensibles. Critical flow models are applied for break flow and valve flow calculations. A five-regime two-phase heat convection model is applied for clad-to-coolant as well as fluid-to-tubing heat transfer. A rigorous reactor coolant pump model is used to calculate the pressure drop and rise at the suction and discharge ends, with complete pump characteristic curves included. The TRSM model has been adopted in the full-scale training simulator of the Qinshan Nuclear Power Plant 300 MW unit to simulate the thermal-hydraulic performance of the NSSS. Simulation results for a cold leg LOCA and a steam generator tube rupture (SGTR) accident are presented.

  3. A delay time model for a mission-based system subject to periodic and random inspection and postponed replacement

    International Nuclear Information System (INIS)

    Yang, Li; Ma, Xiaobing; Zhai, Qingqing; Zhao, Yu

    2016-01-01

    We propose an inspection and replacement policy for a single component system that successively executes missions with random durations. The failure process of the system can be divided into two states, namely, normal and defective, following the delay time concept. Inspections are carried out periodically and immediately after the completion of each mission (random inspections). The failed state is always identified immediately, whereas the defective state can only be revealed by an inspection. If the system fails or is defective at a periodic inspection, then replacement is immediate. If, however, the system is defective at a random inspection, then replacement will be postponed if the time to the subsequent periodic inspection is shorter than a pre-determined threshold, and immediate otherwise. We derive the long run expected cost per unit time and then investigate the optimal periodic inspection interval and postponement threshold. A numerical example is presented to demonstrate the applicability of the proposed maintenance policy. - Highlights: • A delay time model of inspection is introduced for mission-based systems. • Periodic and random inspections are performed to check the state. • Replacement of the defective system at a random inspection can be postponed.
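    Because the policy mixes periodic checks, mission-end (random) checks and a postponement rule, its long-run cost rate is easiest to see by renewal-reward simulation. The following simplified Monte Carlo sketch, with exponential defect arrival and delay times and hypothetical cost figures, estimates E[cycle cost]/E[cycle length]; it illustrates the policy's logic, not the paper's analytical model (for instance, it charges no inspection cost at the periodic check that ends a postponed cycle).

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    def simulate_cycle(tau=10.0, nu=2.0, lam=0.05, mu=0.2, mission=3.0,
                       c_insp=1.0, c_prev=20.0, c_fail=100.0):
        """One renewal cycle: tau = periodic inspection interval, nu =
        postponement threshold, lam = defect arrival rate, mu = defect-to-
        failure rate, mission = mean mission duration (all hypothetical)."""
        t_defect = rng.exponential(1 / lam)           # normal -> defective
        t_fail = t_defect + rng.exponential(1 / mu)   # defective -> failed
        cost = 0.0
        next_periodic, next_random = tau, rng.exponential(mission)
        while True:
            t_insp = min(next_periodic, next_random)
            if t_fail <= t_insp:                      # failure is self-announcing
                return cost + c_fail, t_fail
            cost += c_insp
            if t_insp == next_periodic:
                if t_defect <= t_insp:                # defect found: replace now
                    return cost + c_prev, t_insp
                next_periodic += tau
            else:
                if t_defect <= t_insp:                # defect found at random check
                    if next_periodic - t_insp < nu:   # postpone to periodic check
                        if t_fail < next_periodic:    # fails while postponed
                            return cost + c_fail, t_fail
                        return cost + c_prev, next_periodic
                    return cost + c_prev, t_insp      # replace immediately
                next_random = t_insp + rng.exponential(mission)

    costs, lengths = zip(*(simulate_cycle() for _ in range(20000)))
    print("long-run cost rate:", sum(costs) / sum(lengths))
    ```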

  4. Analytical model for real time, noninvasive estimation of blood glucose level.

    Science.gov (United States)

    Adhyapak, Anoop; Sidley, Matthew; Venkataraman, Jayanti

    2014-01-01

    The paper presents an analytical model to estimate blood glucose level from measurements made noninvasively and in real time by an antenna strapped to a patient's wrist. The RIT ETA Lab research group has shown promising evidence that an antenna's resonant frequency can track, in real time, changes in glucose concentration. Based on an in-vitro study of blood samples from diabetic patients, the paper presents a modified Cole-Cole model that incorporates a factor representing the change in glucose level. A calibration technique based on the input impedance is discussed, and the results show good estimation compared with glucose meter readings. An alternative calibration methodology has been developed that is based on the shift in the antenna's resonant frequency, using an equivalent circuit model containing a shunt capacitor to represent the shift in resonant frequency with changing glucose levels. Work in progress is the optimization of the technique with a larger sample of patients.
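    For reference, the standard single-dispersion Cole-Cole relation for complex permittivity, which the paper modifies with a glucose-dependent factor (the exact form of that factor is not reproduced here), is usually written with high-frequency permittivity ε∞, dispersion magnitude Δε, relaxation time τ, distribution parameter α and static conductivity σs:

    ```latex
    \varepsilon^{*}(\omega) \;=\; \varepsilon_{\infty}
      + \frac{\Delta\varepsilon}{1 + (j\omega\tau)^{1-\alpha}}
      + \frac{\sigma_{s}}{j\omega\varepsilon_{0}}
    ```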

  5. SPARSE: quadratic time simultaneous alignment and folding of RNAs without sequence-based heuristics

    Science.gov (United States)

    Will, Sebastian; Otto, Christina; Miladi, Milad; Möhl, Mathias; Backofen, Rolf

    2015-01-01

    Motivation: RNA-Seq experiments have revealed a multitude of novel ncRNAs. The gold standard for their analysis based on simultaneous alignment and folding suffers from extreme time complexity of O(n^6). Subsequently, numerous faster ‘Sankoff-style’ approaches have been suggested. Commonly, the performance of such methods relies on sequence-based heuristics that restrict the search space to optimal or near-optimal sequence alignments; however, the accuracy of sequence-based methods breaks down for RNAs with sequence identities below 60%. Alignment approaches like LocARNA, which do not require sequence-based heuristics, have been limited to high complexity (≥ quartic time). Results: Breaking this barrier, we introduce the novel Sankoff-style algorithm ‘sparsified prediction and alignment of RNAs based on their structure ensembles (SPARSE)’, which runs in quadratic time without sequence-based heuristics. To achieve this low complexity, on par with sequence alignment algorithms, SPARSE features strong sparsification based on structural properties of the RNA ensembles. Following PMcomp, SPARSE gains further speed-up from lightweight energy computation. Although all existing lightweight Sankoff-style methods restrict Sankoff’s original model by disallowing loop deletions and insertions, SPARSE transfers the Sankoff algorithm to the lightweight energy model completely for the first time. Compared with LocARNA, SPARSE achieves similar alignment and better folding quality in significantly less time (speedup: 3.7). At similar run-time, it aligns low sequence identity instances substantially more accurately than RAF, which uses sequence-based heuristics. Availability and implementation: SPARSE is freely available at http://www.bioinf.uni-freiburg.de/Software/SPARSE. Contact: backofen@informatik.uni-freiburg.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25838465

  6. CHIMERA II - A real-time multiprocessing environment for sensor-based robot control

    Science.gov (United States)

    Stewart, David B.; Schmitz, Donald E.; Khosla, Pradeep K.

    1989-01-01

    A multiprocessing environment for a wide variety of sensor-based robot systems, providing the flexibility, performance, and UNIX-compatible interface needed for fast development of real-time code, is addressed. The requirements imposed on the design of a programming environment for sensor-based robotic control are outlined. The details of the current hardware configuration are presented, along with the details of the CHIMERA II software. Emphasis is placed on the kernel, low-level interboard communication, the user interface, the extended file system, user-definable and dynamically selectable real-time schedulers, remote process synchronization, and generalized interprocess communication. A possible implementation of a hierarchical control model, the NASA/NBS standard reference model for telerobot control systems, is demonstrated.

  7. Modeling and real time simulation of an HVDC inverter feeding a weak AC system based on commutation failure study.

    Science.gov (United States)

    Mankour, Mohamed; Khiat, Mounir; Ghomri, Leila; Chaker, Abdelkader; Bessalah, Mourad

    2018-06-01

    This paper presents the modeling and study of a 12-pulse HVDC (High Voltage Direct Current) link based on real-time simulation, where the HVDC inverter is connected to a weak AC system. To study the dynamic performance of the HVDC link, two serious kinds of disturbance are applied at the HVDC converters: the first is a single-phase-to-ground AC fault and the second is a DC-link-to-ground fault. The study is based on two different modes of analysis: the first tests the performance of the DC control, and the second focuses on the effect of the protection function on system behavior. This real-time simulation considers the strength of the AC system to which the inverter is connected and its relation to the capacity of the DC link. The results obtained are validated by means of the RT-LAB platform using the digital real-time simulator Hypersim (OP-5600). The results show the effect of the DC control and the influence of the protection function in reducing the probability of commutation failures and in helping the inverter to recover from commutation failure even when the DC control fails to eliminate it. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  8. An EOQ model for time-dependent deteriorating items with alternating demand rates allowing shortages by considering time value of money

    Directory of Open Access Journals (Sweden)

    Kundu Antara

    2013-01-01

    Full Text Available The present paper deals with an economic order quantity (EOQ) model of an inventory problem with an alternating demand rate: (i) for a certain period, the demand rate is a non-linear function of the instantaneous inventory level; (ii) for the rest of the cycle, the demand rate is time dependent. The time at which the demand rate changes may be deterministic or uncertain. The deterioration rate of the item is time dependent. The holding cost and shortage cost are taken as linear functions of time. The total cost function per unit time is obtained. Finally, the model is solved using a gradient-based non-linear optimization technique (LINGO) and is illustrated by a numerical example.

  9. Real-time characterization of partially observed epidemics using surrogate models.

    Energy Technology Data Exchange (ETDEWEB)

    Safta, Cosmin; Ray, Jaideep; Lefantzi, Sophia; Crary, David (Applied Research Associates, Arlington, VA); Sargsyan, Khachik; Cheng, Karen (Applied Research Associates, Arlington, VA)

    2011-09-01

    We present a statistical method, predicated on the use of surrogate models, for the 'real-time' characterization of partially observed epidemics. Observations consist of counts of symptomatic patients, diagnosed with the disease, that may be available in the early epoch of an ongoing outbreak. Characterization, in this context, refers to estimation of epidemiological parameters that can be used to provide short-term forecasts of the ongoing epidemic, as well as to provide gross information on the dynamics of the etiologic agent in the affected population, e.g., the time-dependent infection rate. The characterization problem is formulated as a Bayesian inverse problem, and epidemiological parameters are estimated as distributions using a Markov chain Monte Carlo (MCMC) method, thus quantifying the uncertainty in the estimates. In some cases, the inverse problem can be computationally expensive, primarily due to the epidemic simulator used inside the inversion algorithm. We present a method, based on replacing the epidemiological model with computationally inexpensive surrogates, that can reduce the computational time to minutes, without a significant loss of accuracy. The surrogates are created by projecting the output of an epidemiological model on a set of polynomial chaos bases; thereafter, computations involving the surrogate model reduce to evaluations of a polynomial. We find that the epidemic characterizations obtained with the surrogate models are very close to those obtained with the original model. We also find that the number of projections required to construct a surrogate model is O(10)-O(10²) less than the number of samples required by the MCMC to construct a stationary posterior distribution; thus, depending upon the epidemiological models in question, it may be possible to omit the offline creation and caching of surrogate models, prior to their use in an inverse problem. The technique is demonstrated on synthetic data as well as
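    The surrogate idea can be sketched in one dimension with Legendre polynomials (orthogonal under a uniform prior): fit the projection coefficients once from a modest number of model runs, then let the MCMC evaluate only the polynomial. The model, degree and sample counts below are illustrative:

    ```python
    import numpy as np
    from numpy.polynomial import legendre

    # Expensive-model stand-in: maps a parameter xi in [-1, 1] (e.g. a
    # scaled infection rate) to an observable (e.g. a log case count).
    def expensive_model(xi):
        return np.exp(1.5 + 0.8 * xi + 0.3 * xi ** 2)

    # Build the surrogate: least-squares projection onto Legendre bases.
    train = np.linspace(-1, 1, 50)
    coeffs = legendre.legfit(train, expensive_model(train), deg=6)

    # Inside MCMC, evaluating the surrogate is just polynomial evaluation.
    xi_new = np.array([-0.7, 0.0, 0.4])
    print(legendre.legval(xi_new, coeffs))
    print(expensive_model(xi_new))          # close agreement
    ```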

  10. On discrete models of space-time

    International Nuclear Information System (INIS)

    Horzela, A.; Kempczynski, J.; Kapuscik, E.; Georgia Univ., Athens, GA; Uzes, Ch.

    1992-02-01

    Analyzing the Einstein radiolocation method we come to the conclusion that results of any measurement of space-time coordinates should be expressed in terms of rational numbers. We show that this property is Lorentz invariant and may be used in the construction of discrete models of space-time different from the models of the lattice type constructed in the process of discretization of continuous models. (author)

  11. Using a thermal-based two source energy balance model with time-differencing to estimate surface energy fluxes with day-night MODIS observations

    DEFF Research Database (Denmark)

    Guzinski, Radoslaw; Anderson, M.C.; Kustas, W.P.

    2013-01-01

    The Dual Temperature Difference (DTD) model, introduced by Norman et al. (2000), uses a two source energy balance modelling scheme driven by remotely sensed observations of diurnal changes in land surface temperature (LST) to estimate surface energy fluxes. By using a time-differential temperature...... agreement with field measurements is obtained for a number of ecosystems in Denmark and the United States. Finally, regional maps of energy fluxes are produced for the Danish Hydrological ObsErvatory (HOBE) in western Denmark, indicating realistic patterns based on land use....

  12. Model-predictive control based on Takagi-Sugeno fuzzy model for electrical vehicles delayed model

    DEFF Research Database (Denmark)

    Khooban, Mohammad-Hassan; Vafamand, Navid; Niknam, Taher

    2017-01-01

    Electric vehicles (EVs) play a significant role in different applications, such as commuter vehicles and short distance transport applications. This study presents a new structure of model-predictive control based on the Takagi-Sugeno fuzzy model, linear matrix inequalities, and a non-quadratic Lyapunov function for the speed control of EVs including time-delay states and parameter uncertainty. Experimental data, using the Federal Test Procedure (FTP-75), is applied to test the performance and robustness of the suggested controller in the presence of time-varying parameters. Besides, a comparison is made between the results of the suggested robust strategy and those obtained from some of the most recent studies on the same topic, to assess the efficiency of the suggested controller. Finally, the experimental results based on a TMS320F28335 DSP are performed on a direct current motor. Simulation

  13. Identification of walking human model using agent-based modelling

    Science.gov (United States)

    Shahabpoor, Erfan; Pavic, Aleksandar; Racic, Vitomir

    2018-03-01

    The interaction of walking people with large vibrating structures, such as footbridges and floors, in the vertical direction is an important yet challenging phenomenon to describe mathematically. Several different models have been proposed in the literature to simulate interaction of stationary people with vibrating structures. However, the research on moving (walking) human models, explicitly identified for vibration serviceability assessment of civil structures, is still sparse. In this study, the results of a comprehensive set of FRF-based modal tests were used, in which, over a hundred test subjects walked in different group sizes and walking patterns on a test structure. An agent-based model was used to simulate discrete traffic-structure interactions. The occupied structure modal parameters found in tests were used to identify the parameters of the walking individual's single-degree-of-freedom (SDOF) mass-spring-damper model using 'reverse engineering' methodology. The analysis of the results suggested that the normal distribution with the average of μ = 2.85Hz and standard deviation of σ = 0.34Hz can describe human SDOF model natural frequency. Similarly, the normal distribution with μ = 0.295 and σ = 0.047 can describe the human model damping ratio. Compared to the previous studies, the agent-based modelling methodology proposed in this paper offers significant flexibility in simulating multi-pedestrian walking traffics, external forces and simulating different mechanisms of human-structure and human-environment interaction at the same time.
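    The identified distributions make it straightforward to generate walking-human SDOF models for Monte Carlo serviceability checks. In the sketch below, the frequency and damping statistics are the ones reported above, while the 75 kg modal mass is a hypothetical stand-in (the paper identifies parameters per test subject):

    ```python
    import numpy as np

    rng = np.random.default_rng(10)

    def sample_sdof(n, mass=75.0):
        """Draw walking-human SDOF models using the identified distributions:
        natural frequency ~ N(2.85, 0.34) Hz, damping ratio ~ N(0.295, 0.047).
        The modal mass (75 kg) is a hypothetical assumption."""
        f = rng.normal(2.85, 0.34, n)          # natural frequency, Hz
        zeta = rng.normal(0.295, 0.047, n)     # damping ratio
        k = mass * (2 * np.pi * f) ** 2        # stiffness, N/m
        c = 2 * zeta * np.sqrt(k * mass)       # damping coefficient, N s/m
        return k, c

    k, c = sample_sdof(5)
    print(np.round(k), np.round(c))
    ```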

  14. Local Stability of AIDS Epidemic Model Through Treatment and Vertical Transmission with Time Delay

    Science.gov (United States)

    Novi W, Cascarilla; Lestari, Dwi

    2016-02-01

    This study analyzes the stability of a model of the spread of AIDS with treatment and vertical transmission. A person infected with HIV needs time before developing AIDS; because this progression is delayed, the resulting model is one with time delay. The model takes the form of a system of nonlinear differential equations with time delay, SIPTA (susceptible-infected-pre-AIDS-treatment-AIDS). Analysis of the SIPTA model yields a disease-free equilibrium point and an endemic equilibrium point. The disease-free equilibrium point, with and without time delay, is locally asymptotically stable if the basic reproduction number is less than one. The endemic equilibrium point is locally asymptotically stable if the time delay is less than a critical delay value, unstable if the time delay exceeds the critical value, and a bifurcation occurs when the time delay equals the critical value, as summarized below.
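    Writing R_0 for the basic reproduction number, τ for the time delay, and τ* for its critical value (notation assumed here for readability, not taken from the paper), the stated results can be summarized compactly in LaTeX:

        % Stability conditions as stated in the abstract; the symbols
        % R_0, \tau, and \tau^* are notational assumptions.
        \begin{align*}
          R_0 < 1 &\;\Rightarrow\; \text{disease-free equilibrium locally asymptotically stable (with or without delay)},\\
          \tau < \tau^* &\;\Rightarrow\; \text{endemic equilibrium locally asymptotically stable},\\
          \tau > \tau^* &\;\Rightarrow\; \text{endemic equilibrium unstable},\\
          \tau = \tau^* &\;\Rightarrow\; \text{bifurcation occurs}.
        \end{align*}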

  15. Multiband Prediction Model for Financial Time Series with Multivariate Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    Md. Rabiul Islam

    2012-01-01

    Full Text Available This paper presents a subband approach to financial time series prediction. Multivariate empirical mode decomposition (MEMD) is employed for a joint multiband representation of multichannel financial time series. An autoregressive moving average (ARMA) model is used to predict each individual subband of a time series, and the predicted subband signals are then summed to obtain the overall prediction. The ARMA model works best for stationary signals; with the multiband representation, each subband becomes a band-limited (narrow-band) signal, and hence better prediction is achieved. The performance of the proposed MEMD-ARMA model is compared with classical EMD, the discrete wavelet transform (DWT), and a full-band ARMA model in terms of the signal-to-noise ratio (SNR) and mean square error (MSE) between the original and predicted time series. The simulation results show that the MEMD-ARMA-based method performs better than the other methods.
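    The Python sketch below illustrates the subband idea on a single channel: decompose the series into intrinsic mode functions (IMFs), forecast each narrow-band IMF with a low-order ARMA model, and sum the subband forecasts. It uses single-channel EMD (via the PyEMD package) rather than the multivariate MEMD of the paper, and the toy series and ARMA order (2, 1) are assumptions.

        import numpy as np
        from PyEMD import EMD                       # pip install EMD-signal
        from statsmodels.tsa.arima.model import ARIMA

        # Subband prediction sketch: EMD decomposition plus per-IMF ARMA
        # forecasts, summed to give the overall prediction. Single-channel
        # simplification of the paper's MEMD-ARMA scheme.

        rng = np.random.default_rng(1)
        t = np.linspace(0.0, 20.0, 400)
        series = np.sin(t) + 0.5 * np.sin(3.3 * t) + 0.1 * rng.standard_normal(t.size)

        horizon = 10
        imfs = EMD()(series)                        # rows: IMFs plus residue

        forecast = np.zeros(horizon)
        for imf in imfs:
            fit = ARIMA(imf, order=(2, 0, 1)).fit() # ARMA(2,1) per subband
            forecast += fit.forecast(steps=horizon) # sum subband predictions

        print(forecast)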

  16. Research on light rail electric load forecasting based on ARMA model

    Science.gov (United States)

    Huang, Yifan

    2018-04-01

    The article compares a variety of time series models in light of the characteristics of power load forecasting. On this basis, a light rail load forecasting model based on the ARMA model is established and applied to forecast the electric load of a light rail system. The prediction results show that the model achieves high accuracy.

  17. Time-varying coefficient vector autoregressions model based on dynamic correlation with an application to crude oil and stock markets.

    Science.gov (United States)

    Lu, Fengbin; Qiao, Han; Wang, Shouyang; Lai, Kin Keung; Li, Yuze

    2017-01-01

    This paper proposes a new time-varying coefficient vector autoregression (VAR) model, in which the coefficient is a linear function of dynamic lagged correlation. The proposed model allows flexibility in the choice of dynamic correlation model (e.g. dynamic conditional correlation generalized autoregressive conditional heteroskedasticity (GARCH) models, Markov-switching GARCH models, and multivariate stochastic volatility models), which means it can describe many types of time-varying causal effects. Time-varying causal relations between West Texas Intermediate (WTI) crude oil and the US Standard and Poor's 500 (S&P 500) stock markets are examined with the proposed model. The empirical results show that their causal relations evolve with time and display complex characteristics. Both positive and negative causal effects of the WTI on the S&P 500 are found in subperiods and confirmed by traditional VAR models; similar results are obtained for the causal effects of the S&P 500 on the WTI. In addition, the proposed model outperforms the traditional VAR model.
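    The model's key feature, a VAR coefficient that is a linear function of the dynamic lagged correlation, can be written in LaTeX as below; the bivariate first-order form and the symbols are illustrative assumptions, not the paper's exact notation.

        % Illustrative statement of the time-varying coefficient VAR; the
        % symbols and the bivariate first-order form are assumptions.
        \begin{align*}
          \mathbf{y}_t &= \mathbf{c} + A_t\,\mathbf{y}_{t-1} + \boldsymbol{\varepsilon}_t,\\
          A_t &= B_0 + B_1\,\rho_{t-1},
        \end{align*}
        % where y_t stacks the WTI and S&P 500 returns, rho_{t-1} is the
        % lagged dynamic correlation (e.g. from a DCC-GARCH model), and
        % B_0, B_1 are constant coefficient matrices.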

  18. Model-Checking Real-Time Control Programs

    DEFF Research Database (Denmark)

    Iversen, T. K.; Kristoffersen, K. J.; Larsen, Kim Guldstrand

    2000-01-01

    In this paper, we present a method for automatic verification of real-time control programs running on LEGO(R) RCX(TM) bricks using the verification tool UPPAAL. The control programs, consisting of a number of tasks running concurrently, are automatically translated into the timed automata model of UPPAAL. The fixed scheduling algorithm used by the LEGO(R) RCX(TM) processor is modeled in UPPAAL as well, and supplying suitable (sufficient) timed automata models for the environment allows analysis of the overall real-time system with the UPPAAL tools. The technique is illustrated on a control program for sorting LEGO(R) bricks.

  19. Particle Swarm Based Approach of a Real-Time Discrete Neural Identifier for Linear Induction Motors

    Directory of Open Access Journals (Sweden)

    Alma Y. Alanis

    2013-01-01

    Full Text Available This paper focuses on a discrete-time neural identifier applied to a linear induction motor (LIM) whose model is assumed to be unknown. The neural identifier is robust in the presence of external and internal uncertainties. The proposed scheme is based on a discrete-time recurrent high-order neural network (RHONN) trained with a novel algorithm based on the extended Kalman filter (EKF) and particle swarm optimization (PSO), using an online series-parallel configuration. Real-time results are included to illustrate the applicability of the proposed scheme.
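    The Python sketch below shows the core of an EKF-trained high-order neural identifier in a series-parallel configuration on a scalar toy plant. The plant, the choice of high-order regressor terms, and the EKF covariances are illustrative assumptions; in the paper such design parameters are tuned with PSO, which is omitted here.

        import numpy as np

        # Sketch of a discrete-time high-order neural identifier trained with
        # an extended Kalman filter (EKF), in the spirit of the RHONN scheme.
        # Toy plant, regressor terms, and covariances are assumptions.

        def z(x, u):
            s = np.tanh(x)
            return np.array([s, s * u, u, 1.0])    # high-order regressor terms

        w = np.zeros(4)                            # neural weights (EKF state)
        P = np.eye(4) * 10.0                       # weight covariance
        Q = np.eye(4) * 1e-4                       # process noise (assumed)
        R = 1e-2                                   # measurement noise (assumed)

        rng = np.random.default_rng(2)
        x = 0.0
        for k in range(500):
            u = np.sin(0.05 * k)                   # excitation input
            x_next = 0.8 * x + 0.2 * np.tanh(u) + 0.01 * rng.standard_normal()
            h = z(x, u)
            # EKF update: the prediction x_next = w^T z(x, u) is linear in
            # the weights, so the measurement Jacobian is simply h.
            K = P @ h / (h @ P @ h + R)
            w = w + K * (x_next - w @ h)
            P = P - np.outer(K, h @ P) + Q
            x = x_next

        print("identified weights:", w)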

  20. MAC-Level Communication Time Modeling and Analysis for Real-Time WSNs

    Directory of Open Access Journals (Sweden)

    STANGACIU, V.

    2016-02-01

    Full Text Available Low-level communication protocols and their timing behavior are essential to developing wireless sensor networks (WSNs) able to provide the support and operating guarantees required by many current real-time applications. Nevertheless, this aspect remains an open issue in the state of the art. In this paper we provide a detailed analysis of a recently proposed MAC-level communication timing model and demonstrate its usability in designing real-time protocols. The results of a large set of measurements are also presented and discussed, in direct relation to the main time parameters of the analyzed model.
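    As a flavor of the kind of time parameter such a model quantifies, the Python sketch below computes the on-air transmission time of a single frame from its length and the radio bit rate. The IEEE 802.15.4-style figures (250 kbit/s, 6 bytes of PHY overhead) are illustrative assumptions, not values from the paper, whose model covers further parameters (e.g. turnaround and back-off times) not shown here.

        # Sketch of one elementary MAC-level time parameter: frame airtime.
        # The bit rate and overhead figures below are assumptions in the
        # style of IEEE 802.15.4, not values taken from the analyzed model.

        def frame_airtime_us(payload_bytes, mac_overhead_bytes=11,
                             phy_overhead_bytes=6, bitrate_bps=250_000):
            total_bits = 8 * (payload_bytes + mac_overhead_bytes + phy_overhead_bytes)
            return 1e6 * total_bits / bitrate_bps

        # A 50-byte payload takes roughly 2.1 ms on air under these assumptions.
        print(f"{frame_airtime_us(50):.0f} us")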