WorldWideScience

Sample records for time model based

  1. A Dynamic Travel Time Estimation Model Based on Connected Vehicles

    Directory of Open Access Journals (Sweden)

    Daxin Tian

    2015-01-01

    With advances in connected vehicle technology, dynamic vehicle route guidance models are gradually becoming indispensable for drivers. Traditional route guidance models are designed to direct a vehicle along the shortest path from the origin to the destination without considering dynamic traffic information. In this paper, a dynamic travel time estimation model is presented which can collect and distribute traffic data based on connected vehicles. To estimate real-time travel time more accurately, a road link dynamic dividing algorithm is proposed. The efficiency of the model is confirmed by simulations, and the experimental results demonstrate the effectiveness of the travel time estimation method.
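
    As a rough illustration of the idea (not the paper's algorithm), the sketch below estimates link travel time from connected-vehicle probe reports by splitting the link into sub-links and averaging probe speeds per sub-link; the paper's dynamic dividing algorithm is replaced by a fixed uniform split, and all numbers are made up.

```python
import numpy as np

def estimate_link_travel_time(positions_m, speeds_mps, link_len_m, n_sublinks=4):
    """Estimate travel time by averaging probe speeds per sub-link."""
    edges = np.linspace(0.0, link_len_m, n_sublinks + 1)
    sub_len = link_len_m / n_sublinks
    total = 0.0
    for i in range(n_sublinks):
        mask = (positions_m >= edges[i]) & (positions_m < edges[i + 1])
        # Fall back to the link-wide mean speed if a sub-link has no probes.
        v = speeds_mps[mask].mean() if mask.any() else speeds_mps.mean()
        total += sub_len / v
    return total

rng = np.random.default_rng(0)
pos = rng.uniform(0, 1000, 50)                # probe positions on a 1 km link
spd = np.where(pos > 600, 5.0, 12.0) + rng.normal(0, 0.5, 50)  # queue near the end
print(f"estimated travel time: {estimate_link_travel_time(pos, spd, 1000.0):.1f} s")
```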

  2. Household time allocation model based on a group utility function

    NARCIS (Netherlands)

    Zhang, J.; Borgers, A.W.J.; Timmermans, H.J.P.

    2002-01-01

    Existing activity-based models typically assume an individual decision-making process. In household decision-making, however, interaction exists among household members and their activities during the allocation of the members' limited time. This paper, therefore, attempts to develop a new household time allocation model based on a group utility function.

  3. Hybrid perturbation methods based on statistical time series models

    Science.gov (United States)

    San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario

    2016-04-01

    In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies derived from the fact that, in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered, not to mention the fact that mathematical models of perturbations do not always reproduce physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing the missing dynamics in the previously integrated approximation. This combination results in improved precision of conventional numerical, analytical and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators formed by the combination of three different orders of approximation of an analytical theory and a statistical time series model, and analyse their capability to process the effect produced by the flattening of the Earth. The three considered analytical components are the integration of the Kepler problem, a first-order analytical theory and a second-order analytical theory, whereas the prediction technique is the same in all three cases, namely an additive Holt-Winters method.
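
    The additive Holt-Winters method named as the prediction technique is standard; a minimal hand-rolled sketch of its recursions is below, applied to a synthetic residual series standing in for the difference between an analytical propagator and the truth. Parameters and data are illustrative, not the paper's.

```python
import numpy as np

def holt_winters_additive(y, m, alpha=0.3, beta=0.05, gamma=0.2, h=10):
    """Additive Holt-Winters: level, trend and m-periodic seasonal terms."""
    level, trend = y[0], y[1] - y[0]
    season = list(y[:m] - y[:m].mean())        # crude seasonal initialisation
    for t in range(m, len(y)):
        s_old = season[t - m]
        level_old = level
        level = alpha * (y[t] - s_old) + (1 - alpha) * (level + trend)
        trend = beta * (level - level_old) + (1 - beta) * trend
        season.append(gamma * (y[t] - level) + (1 - gamma) * s_old)
    # h-step-ahead forecasts reuse the last fitted seasonal cycle
    return np.array([level + (k + 1) * trend + season[len(y) - m + (k % m)]
                     for k in range(h)])

# Synthetic "propagator residuals": periodic signal + slow drift + noise.
t = np.arange(200)
resid = (0.5 * np.sin(2 * np.pi * t / 20) + 0.002 * t
         + np.random.default_rng(1).normal(0, 0.05, 200))
print(holt_winters_additive(resid, m=20, h=5).round(3))
```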

  4. Time-based collision risk modeling for air traffic management

    Science.gov (United States)

    Bell, Alan E.

    Since the emergence of commercial aviation in the early part of the last century, economic forces have driven a steadily increasing demand for air transportation. Increasing density of aircraft operating in a finite volume of airspace is accompanied by a corresponding increase in the risk of collision, and in response to a growing number of incidents and accidents involving collisions between aircraft, governments worldwide have developed air traffic control systems and procedures to mitigate this risk. The objective of any collision risk management system is to project conflicts and provide operators with sufficient opportunity to recognize potential collisions and take necessary actions to avoid them. It is therefore the assertion of this research that the currency of collision risk management is time. Future Air Traffic Management Systems are being designed around the foundational principle of four-dimensional trajectory-based operations, a method that replaces legacy first-come, first-served sequencing priorities with time-based reservations throughout the airspace system. This research will demonstrate that if aircraft are to be sequenced in four dimensions, they must also be separated in four dimensions. In order to separate aircraft in four dimensions, time must emerge as the primary tool by which air traffic is managed. A functional relationship exists between the time-based performance of aircraft, the interval between aircraft scheduled to cross some three-dimensional point in space, and the risk of collision. This research models that relationship and presents two key findings. First, a method is developed by which the ability of an aircraft to meet a required time of arrival may be expressed as a robust standard for both industry and operations. Second, a method by which airspace system capacity may be increased while maintaining an acceptable level of collision risk is presented and demonstrated for the purpose of formulating recommendations for procedures.

  5. Real-time traffic signal optimization model based on average delay time per person

    Directory of Open Access Journals (Sweden)

    Pengpeng Jiao

    2015-10-01

    Real-time traffic signal control is very important for relieving urban traffic congestion. Many existing traffic control models are formulated as optimization problems with objective functions that minimize vehicle delay time. To improve people's trip efficiency, this article aims instead to minimize the delay time per person. Based on time-varying traffic flow data at intersections, the article first fits curves of cumulative arriving and departing vehicles, together with the corresponding functions. It then converts vehicle delay time to personal delay time using the average passenger loads of cars and buses, employs such time as the objective function, and proposes a signal timing optimization model for intersections to obtain real-time signal parameters, including cycle length and green time. The research further implements a case study based on field data collected at an intersection in Beijing, China. The average delay time per person and the queue length are employed as evaluation indices to show the performance of the model. The results show that the proposed methodology is capable of improving traffic efficiency and is very effective for real-world applications.
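
    A minimal sketch of the person-delay idea: convert vehicle delay to person delay with assumed occupancies and search for the green split that minimizes it. Only the uniform (first) term of the standard signalized-intersection delay formula is used, and all flows, occupancies and lost times below are assumptions, not the paper's fitted curves.

```python
import numpy as np

def uniform_delay(c, g, v, s):
    """Uniform (first) delay term per vehicle [s], HCM-style; X capped at 1."""
    lam = g / c
    X = min(v / (s * lam), 1.0)
    return 0.5 * c * (1 - lam) ** 2 / (1 - lam * X)

def person_delay(c, g_ns, flows, sat, occ):
    """Flow-weighted average delay per person for a two-phase intersection."""
    g = {"NS": g_ns, "EW": c - g_ns - 8}      # 8 s total lost time (assumed)
    d = {a: uniform_delay(c, g[a], flows[a], sat[a]) for a in g}
    persons = {a: flows[a] * occ[a] for a in g}
    return sum(d[a] * persons[a] for a in g) / sum(persons.values())

flows = {"NS": 900, "EW": 500}     # veh/h
sat = {"NS": 1800, "EW": 1800}     # saturation flow, veh/h
occ = {"NS": 4.0, "EW": 1.5}       # persons/vehicle (bus-heavy vs car-only; assumed)
c = 90
best = min(range(15, c - 15), key=lambda g: person_delay(c, g, flows, sat, occ))
print(f"person-delay-optimal NS green = {best} s of c = {c} s")
```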

  6. Traffic Incident Clearance Time and Arrival Time Prediction Based on Hazard Models

    Directory of Open Access Journals (Sweden)

    Yang beibei Ji

    2014-01-01

    Accurate prediction of incident duration not only provides important information for Traffic Incident Management Systems, but also serves as an effective input for travel time prediction. In this paper, hazard-based prediction models are developed for both incident clearance time and arrival time. The data are obtained from the Queensland Department of Transport and Main Roads' STREAMS Incident Management System (SIMS) for one year ending in November 2010. The best-fitting distributions are found for both clearance and arrival time for three types of incident: crash, stationary vehicle, and hazard. The results show that Gamma, Log-logistic, and Weibull are the best fits for crash, stationary vehicle, and hazard incidents, respectively. Significant impact factors are identified for crash clearance time and arrival time, and their quantitative influences are presented for both clearance and arrival of crash and hazard incidents. The model accuracy is analyzed at the end.
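
    The distribution-selection step can be reproduced with standard tools. The sketch below fits the same three candidate families to synthetic clearance times and compares them by AIC; scipy's fisk distribution is the log-logistic, and the data are simulated, not the SIMS records.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
clearance = rng.gamma(shape=2.0, scale=25.0, size=500)   # minutes, synthetic

candidates = {
    "Weibull": stats.weibull_min,
    "Gamma": stats.gamma,
    "Log-logistic": stats.fisk,   # scipy's name for the log-logistic
}
for name, dist in candidates.items():
    params = dist.fit(clearance, floc=0)          # fix the location at zero
    loglik = dist.logpdf(clearance, *params).sum()
    aic = 2 * (len(params) - 1) - 2 * loglik      # floc fixed, not estimated
    print(f"{name:13s} AIC = {aic:.1f}")
```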

  7. SEM based CARMA time series modeling for arbitrary N

    NARCIS (Netherlands)

    Oud, J.H.L.; Völkle, M.C.; Driver, C.C.

    2018-01-01

    This article explains in detail the state space specification and estimation of first and higher-order autoregressive moving-average models in continuous time (CARMA) in an extended structural equation modeling (SEM) context for N = 1 as well as N > 1. To illustrate the approach, simulations will be presented in which a single panel model (T = 41 time points) is estimated for a sample of N = 1,000 individuals as well as for samples of N = 100 and N = 50 individuals.

  8. SEM Based CARMA Time Series Modeling for Arbitrary N.

    Science.gov (United States)

    Oud, Johan H L; Voelkle, Manuel C; Driver, Charles C

    2018-01-01

    This article explains in detail the state space specification and estimation of first and higher-order autoregressive moving-average models in continuous time (CARMA) in an extended structural equation modeling (SEM) context for N = 1 as well as N > 1. To illustrate the approach, simulations will be presented in which a single panel model (T = 41 time points) is estimated for a sample of N = 1,000 individuals as well as for samples of N = 100 and N = 50 individuals, followed by estimating 100 separate models for each of the one-hundred N = 1 cases in the N = 100 sample. Furthermore, we will demonstrate how to test the difference between the full panel model and each N = 1 model by means of a subject-group-reproducibility test. Finally, the proposed analyses will be applied in an empirical example, in which the relationships between mood at work and mood at home are studied in a sample of N = 55 women. All analyses are carried out by ctsem, an R-package for continuous time modeling, interfacing to OpenMx.

  9. A unified model of time perception accounts for duration-based and beat-based timing mechanisms

    Directory of Open Access Journals (Sweden)

    Sundeep Teki

    2012-01-01

    Accurate timing is an integral aspect of sensory and motor processes such as the perception of speech and music and the execution of skilled movement. Neuropsychological studies of time perception in patient groups and functional neuroimaging studies of timing in normal participants suggest common neural substrates for perceptual and motor timing. A timing system is implicated in core regions of the motor network such as the cerebellum, inferior olive, basal ganglia, pre-supplementary and supplementary motor area, pre-motor cortex, and higher regions such as the prefrontal cortex. In this article, we assess how distinct parts of the timing system subserve different aspects of perceptual timing. We previously established brain bases for absolute, duration-based timing and relative, beat-based timing in the olivocerebellar and striato-thalamo-cortical circuits, respectively (Teki et al., 2011). However, neurophysiological and neuroanatomical studies provide a basis to suggest that the timing functions of these circuits may not be independent. Here, we propose a unified model of time perception based on coordinated activity in the core striatal and olivocerebellar networks that are interconnected with each other and the cerebral cortex.

  10. SCS-CN based time-distributed sediment yield model

    Science.gov (United States)

    Tyagi, J. V.; Mishra, S. K.; Singh, Ranvir; Singh, V. P.

    2008-05-01

    A sediment yield model is developed to estimate the temporal rates of sediment yield from rainfall events on natural watersheds. The model utilizes the SCS-CN based infiltration model for computation of the rainfall-excess rate, and the SCS-CN-inspired proportionality concept for computation of sediment-excess. For computation of sedimentographs, the sediment-excess is routed to the watershed outlet using a single linear reservoir technique. Analytical development of the model shows that the ratio of the potential maximum erosion (A) to the potential maximum retention (S) of the SCS-CN method is constant for a watershed. The model is calibrated and validated on a number of events using the data of seven watersheds from India and the USA. Representative values of the A/S ratio computed for the watersheds from calibration are used for the validation of the model. The encouraging results of the proposed simple four-parameter model exhibit its potential for field application.
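
    A compact sketch of the two computational ingredients, the SCS-CN rainfall-excess calculation and single linear reservoir routing, is below. The sediment-excess step is reduced to scaling by an assumed A/S ratio, which is a simplification of the paper's proportionality concept; all parameter values are illustrative.

```python
import numpy as np

def scs_cn_excess(cum_rain_mm, cn):
    """Per-step rainfall excess (mm) from the cumulative SCS-CN runoff curve,
    with the usual initial abstraction Ia = 0.2*S."""
    S = 25400.0 / cn - 254.0
    Ia = 0.2 * S
    P = np.asarray(cum_rain_mm, dtype=float)
    Q = np.where(P > Ia, (P - Ia) ** 2 / (P - Ia + S), 0.0)
    return np.diff(Q, prepend=0.0)

def linear_reservoir(inflow, k, dt=1.0):
    """Route an excess series through one linear reservoir (storage constant k)."""
    out, o = [], 0.0
    decay = np.exp(-dt / k)
    for i in inflow:
        o = o * decay + i * (1.0 - decay)
        out.append(o)
    return np.array(out)

rain = np.array([0, 5, 15, 30, 10, 5, 0, 0], dtype=float)   # mm per time step
excess = scs_cn_excess(np.cumsum(rain), cn=75)
a_over_s = 0.01        # assumed A/S ratio; calibrated per watershed in the paper
sedigraph = linear_reservoir(a_over_s * excess, k=3.0)       # arbitrary units
print(np.round(sedigraph, 4))
```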

  11. Model based Computerized Ionospheric Tomography in space and time

    Science.gov (United States)

    Tuna, Hakan; Arikan, Orhan; Arikan, Feza

    2018-04-01

    Reconstruction of the ionospheric electron density distribution in space and time not only provides a basis for better understanding the physical nature of the ionosphere, but also provides improvements in various applications including HF communication. The recently developed IONOLAB-CIT technique provides a physically admissible 3D model of the ionosphere by using both Slant Total Electron Content (STEC) measurements obtained from a GPS satellite - receiver network and the IRI-Plas model. The IONOLAB-CIT technique optimizes IRI-Plas model parameters in the region of interest such that the synthetic STEC computations obtained from the IRI-Plas model are in accordance with the actual STEC measurements. In this work, the IONOLAB-CIT technique is extended to provide reconstructions both in space and time. This extension exploits the temporal continuity of the ionosphere to provide more reliable reconstructions with a reduced computational load. The proposed 4D-IONOLAB-CIT technique is validated on real measurement data obtained from the TNPGN-Active GPS receiver network in Turkey.

  12. Time dependent mechanical modeling for polymers based on network theory

    Energy Technology Data Exchange (ETDEWEB)

    Billon, Noëlle [MINES ParisTech, PSL-Research University, CEMEF – Centre de mise en forme des matériaux, CNRS UMR 7635, CS 10207 rue Claude Daunesse 06904 Sophia Antipolis Cedex (France)

    2016-05-18

    Despite many attempts in recent years, the complex mechanical behaviour of polymers remains incompletely modelled, making industrial design of structures under complex, cyclic and severe loadings not fully reliable. The nonlinear and dissipative viscoelastic, viscoplastic behaviour of these materials makes it necessary to take into account nonlinear and combined effects of mechanical and thermal phenomena. With this in view, a visco-hyperelastic, viscoplastic model based on a network description of the material has recently been developed and formulated in a complete thermodynamic framework in order to take those main thermo-mechanical couplings into account. A way to account for the coupled effects of strain rate and temperature was also suggested. First experimental validations, conducted in the 1D limit on an amorphous rubbery PMMA under isothermal conditions, led to fairly good results. In this paper a more complete formalism is presented and validated for semi-crystalline polymers; a PA66 and a PET (either amorphous or semi-crystalline) are used. The protocol for identification of the constitutive parameters is described. It is concluded that this new approach should be the route to accurately model the thermo-mechanical behaviour of polymers using a reduced number of parameters with some physical meaning.

  13. A Feature Fusion Based Forecasting Model for Financial Time Series

    Science.gov (United States)

    Guo, Zhiqiang; Wang, Huaiqing; Liu, Quan; Yang, Jie

    2014-01-01

    Predicting the stock market has become an increasingly interesting research area for both researchers and investors, and many prediction models have been proposed. In these models, feature selection techniques are used to pre-process the raw data and remove noise. In this paper, a prediction model is constructed to forecast stock market behavior with the aid of independent component analysis, canonical correlation analysis, and a support vector machine. First, two types of features are extracted from the historical closing prices and 39 technical variables obtained by independent component analysis. Second, a canonical correlation analysis method is utilized to combine the two types of features and extract intrinsic features to improve the performance of the prediction model. Finally, a support vector machine is applied to forecast the next day's closing price. The proposed model is applied to the Shanghai stock market index and the Dow Jones index, and experimental results show that the proposed model performs better in prediction than the two other similar models. PMID:24971455
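
    A sketch of the ICA-CCA-SVM pipeline on synthetic prices is below, using scikit-learn. The lagged-price features and toy technical indicators are illustrative stand-ins for the paper's 39 technical variables, and the data are a random walk, not a real index.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.cross_decomposition import CCA
from sklearn.svm import SVR

rng = np.random.default_rng(0)
close = np.cumsum(rng.normal(0, 1, 600)) + 100          # synthetic index
T = len(close)

y = close[6:]                                            # next-day close
price_feats = np.column_stack([close[5 - k:T - 1 - k] for k in range(5)])

# Toy technical indicators aligned with the same rows (assumed, not the paper's).
sma3 = np.convolve(close, np.ones(3) / 3, "valid")[3:3 + len(y)]
sma5 = np.convolve(close, np.ones(5) / 5, "valid")[1:1 + len(y)]
mom5 = close[5:5 + len(y)] - close[0:len(y)]
tech = np.column_stack([sma3, sma5, mom5])
tech_ic = FastICA(n_components=3, random_state=0).fit_transform(tech)

# CCA combines the two feature sets into intrinsic features.
z1, z2 = CCA(n_components=2).fit_transform(price_feats, tech_ic)
features = np.hstack([z1, z2])

split = 500
model = SVR(C=10.0, epsilon=0.1).fit(features[:split], y[:split])
pred = model.predict(features[split:])
print("test RMSE:", np.sqrt(np.mean((pred - y[split:]) ** 2)))
```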

  14. A Model-free Approach to Fault Detection of Continuous-time Systems Based on Time Domain Data

    Institute of Scientific and Technical Information of China (English)

    Ping Zhang; Steven X. Ding

    2007-01-01

    In this paper, a model-free approach is presented to design an observer-based fault detection system of linear continuous-time systems based on input and output data in the time domain. The core of the approach is to directly identify parameters of the observer-based residual generator based on a numerically reliable data equation obtained by filtering and sampling the input and output signals.

  15. A travel time forecasting model based on change-point detection method

    Science.gov (United States)

    LI, Shupeng; GUANG, Xiaoping; QIAN, Yongsheng; ZENG, Junwei

    2017-06-01

    Travel time parameters obtained from road traffic sensor data play an important role in traffic management practice. In this paper, a travel time forecasting model based on the method of change-point detection is proposed for urban road traffic sensor data. A first-order differencing operation is used to preprocess the actual loop data; a change-point detection algorithm is designed to classify the sequence of a large number of travel time data items into several patterns; then a travel time forecasting model is established based on the autoregressive integrated moving average (ARIMA) model. By computer simulation, different control parameters are chosen for the adaptive change-point search over the travel time series, which is divided into several sections of similar state. Then a linear weight function is used to fit the travel time sequence and to forecast travel time. The results show that the model has high accuracy in travel time forecasting.
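
    A minimal sketch of the pipeline is below: difference the travel-time series, flag a change point where the rolling mean of the differences shifts, and fit an ARIMA model to the latest homogeneous segment. The thresholded rolling-mean rule is a stand-in for the paper's adaptive change-point search, and the data are synthetic.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(7)
tt = np.concatenate([rng.normal(60, 2, 150),      # free-flow regime (seconds)
                     rng.normal(95, 4, 100)])     # congested regime

diff = np.diff(tt)                                # first-order differencing
w, thr = 10, 3.0                                  # window and threshold (assumed)
cps = [i for i in range(w, len(diff) - w)
       if abs(diff[i - w:i].mean() - diff[i:i + w].mean()) > thr]
start = (cps[-1] + 1) if cps else 0               # latest homogeneous segment

res = ARIMA(tt[start:], order=(1, 1, 1)).fit()
print("next 5 steps:", res.forecast(5).round(1))
```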

  16. A prediction method based on wavelet transform and multiple models fusion for chaotic time series

    International Nuclear Information System (INIS)

    Zhongda, Tian; Shujiang, Li; Yanhong, Wang; Yi, Sha

    2017-01-01

    In order to improve the prediction accuracy of chaotic time series, a prediction method based on wavelet transform and multiple-model fusion is proposed. The chaotic time series is decomposed and reconstructed by wavelet transform, yielding approximation components and detail components. According to the different characteristics of each component, a least squares support vector machine (LSSVM) is used as the predictive model for the approximation components, with an improved free search algorithm utilized for predictive model parameter optimization, while an autoregressive integrated moving average (ARIMA) model is used as the predictive model for the detail components. The predictive values of the multiple models are fused by the Gauss–Markov algorithm; the error variance of the fused results is smaller than that of any single model, and the prediction accuracy is improved. The method is evaluated on two typical chaotic time series, the Lorenz and Mackey–Glass time series. The simulation results show that the prediction method in this paper achieves better predictions.
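
    The fusion step is the easiest part to make concrete. Assuming unbiased component predictors with known error variances, the Gauss–Markov (BLUE) combination weights them by inverse variance, and the fused variance is necessarily below that of either component; the numbers below are made up.

```python
import numpy as np

def gauss_markov_fuse(preds, variances):
    """BLUE combination of unbiased predictors with known error variances."""
    v = np.asarray(variances, dtype=float)
    w = (1.0 / v) / (1.0 / v).sum()           # inverse-variance weights
    fused = (w * np.asarray(preds)).sum()
    fused_var = 1.0 / (1.0 / v).sum()
    return fused, fused_var

# e.g. LSSVM forecast 1.32 (err var 0.04) and ARIMA forecast 1.25 (err var 0.09)
x, v = gauss_markov_fuse([1.32, 1.25], [0.04, 0.09])
print(f"fused = {x:.3f}, variance = {v:.4f} (< smallest component variance 0.04)")
```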

  17. Timing-based business models for flexibility creation in the electric power sector

    International Nuclear Information System (INIS)

    Helms, Thorsten; Loock, Moritz; Bohnsack, René

    2016-01-01

    Energy policies in many countries push for an increase in the generation of wind and solar power. Along these developments, the balance between supply and demand becomes more challenging as the generation of wind and solar power is volatile, and flexibility of supply and demand becomes valuable. As a consequence, companies in the electric power sector develop new business models that create flexibility through activities of timing supply and demand. Based on an extensive qualitative analysis of interviews and industry research in the energy industry, the paper at hand explores the role of timing-based business models in the power sector and sheds light on the mechanisms of flexibility creation through timing. In particular we distill four ideal-type business models of flexibility creation with timing and reveal how they can be classified along two dimensions, namely costs of multiplicity and intervention costs. We put forward that these business models offer ‘coupled services’, combining resource-centered and service-centered perspectives. This complementary character has important implications for energy policy. - Highlights: •Explores timing-based business models providing flexibility in the energy industry. •Timing-based business models can be classified on two dimensions. •Timing-based business models offer ‘coupled services’. • ‘Coupled services’ couple timing as a service with supply- or demand side valuables. •Policy and managerial implications for energy market design.

  18. Ergodicity of forward times of the renewal process in a block-based inspection model using the delay time concept

    International Nuclear Information System (INIS)

    Wang Wenbin; Banjevic, Dragan

    2012-01-01

    The delay time concept and the techniques developed for modelling and optimising plant inspection practice have been reported in many papers and case studies. For a system subject to a few major failure modes, component-based delay time models have been developed under the assumptions of an age-based inspection policy. An age-based inspection assumes that an inspection is scheduled according to the age of the component, and if there is a failure renewal, the next inspection is always a fixed interval, say τ, after the time of the failure renewal. This applies to certain cases, particularly important plant items where the time since the last renewal or inspection is key to scheduling the next inspection service. However, in most cases, the inspection service is not scheduled according to the need of a particular component; rather, it is scheduled according to a fixed calendar time regardless of whether the component being inspected was just renewed or not. This policy is called a block-based inspection, which has the advantage of easy planning and is particularly useful for plant items which are part of a larger system to be inspected. If a block-based inspection policy is used, the time to failure since the last inspection prior to the failure for a particular item is a random variable. This time is called the forward time in this paper. To optimise the inspection interval for block-based inspections, the usual criterion functions such as expected cost or downtime per unit time depend on the distribution of this forward time. We report in this paper the development of a theoretical proof that a limiting distribution for such a forward time exists if certain conditions are met. We also propose a recursive algorithm for determining such a limiting distribution. A numerical example is presented to demonstrate the existence of the limiting distribution.
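
    For a renewal process, the limiting density of the forward time is the classical f(x) = (1 - F(x))/μ, with limiting mean E[X²]/(2μ). The Monte Carlo sketch below (gamma inter-renewal times with illustrative parameters, standing in for times between renewals) verifies that mean numerically.

```python
import numpy as np

rng = np.random.default_rng(3)
shape, scale = 2.0, 1.5                   # gamma inter-renewal times (assumed)
mu = shape * scale
ex2 = shape * (shape + 1) * scale ** 2    # E[X^2] for the gamma law

def forward_time(t_obs):
    """Time from a fixed observation instant t_obs to the next renewal."""
    t = 0.0
    while t <= t_obs:
        t += rng.gamma(shape, scale)
    return t - t_obs

samples = np.array([forward_time(100.0) for _ in range(20000)])
print(f"simulated mean {samples.mean():.3f} vs theory {ex2 / (2 * mu):.3f}")
```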

  19. An overview of the recent advances in delay-time-based maintenance modelling

    International Nuclear Information System (INIS)

    Wang, Wenbin

    2012-01-01

    Industrial plant maintenance is an area with enormous potential for improvement. It is also an area that has attracted significant attention from mathematical modellers because of the random nature of plant failures. This paper reviews the recent advances in delay-time-based maintenance modelling, which is one of the mathematical techniques for optimising inspection planning and related problems. The delay-time concept divides a plant failure process into two stages: from new until the point of an identifiable defect, and then from this point to failure. The first stage is called the normal working stage and the second stage is called the failure delay-time stage. If the distributions of the two stages can be quantified, the relationship between the number of failures and the inspection interval can be readily established. This can then be used for optimising the inspection interval and other related decision variables. In this review, we pay particular attention to new methodological developments and industrial applications of delay-time-based models over the last few decades. The use of the delay-time concept and modelling techniques in areas other than maintenance is also reviewed. Future research directions are also highlighted. - Highlights: ► Reviewed the recent advances in delay-time-based maintenance models and applications. ► Compared the delay-time-based models with other models. ► Focused on methodologies and applications. ► Pointed out future research directions.

  20. Modeling Financial Time Series Based on a Market Microstructure Model with Leverage Effect

    OpenAIRE

    Yanhui Xi; Hui Peng; Yemei Qin

    2016-01-01

    The basic market microstructure model specifies that the price/return innovation and the volatility innovation are independent Gaussian white noise processes. However, the financial leverage effect has been found to be statistically significant in many financial time series. In this paper, a novel market microstructure model with leverage effects is proposed. The model specification assumed a negative correlation in the errors between the price/return innovation and the volatility innovation....

  1. Reaction time for trimolecular reactions in compartment-based reaction-diffusion models

    Science.gov (United States)

    Li, Fei; Chen, Minghan; Erban, Radek; Cao, Yang

    2018-05-01

    Trimolecular reaction models are investigated in the compartment-based (lattice-based) framework for stochastic reaction-diffusion modeling. The formulae for the first collision time and the mean reaction time are derived for the case where three molecules are present in the solution under periodic boundary conditions. For the case of reflecting boundary conditions, similar formulae are obtained using a computer-assisted approach. The accuracy of these formulae is further verified through comparison with numerical results. The presented derivation is based on the first passage time analysis of Montroll [J. Math. Phys. 10, 753 (1969)]. Montroll's results for two-dimensional lattice-based random walks are adapted and applied to compartment-based models of trimolecular reactions, which are studied in one-dimensional or pseudo one-dimensional domains.

  2. Lapse of time effects on tax evasion in an agent-based econophysics model

    Science.gov (United States)

    Seibold, Götz; Pickhardt, Michael

    2013-05-01

    We investigate an inhomogeneous Ising model in the context of tax evasion dynamics where different types of agents are parameterized via local temperatures and magnetic fields. In particular, we analyze the impact of lapse of time effects (i.e. backauditing) and endogenously determined penalty rates on tax compliance. Both features contribute to a microfoundation of agent-based econophysics models of tax evasion.
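
    A minimal Metropolis sketch of such an inhomogeneous Ising model is below: spin +1 is read as a compliant taxpayer, -1 as an evader, and agent heterogeneity enters through per-site temperatures and fields. The lattice size, coupling and the type mix are illustrative, not the paper's calibration.

```python
import numpy as np

rng = np.random.default_rng(1)
L, J, steps = 32, 1.0, 200_000
spins = rng.choice([-1, 1], size=(L, L))
T = rng.choice([0.5, 3.0], size=(L, L))      # "copying" vs near-random agents
B = rng.choice([0.0, 1.0], size=(L, L))      # e.g. audit/penalty pressure

for _ in range(steps):
    i, j = rng.integers(L), rng.integers(L)
    nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
          + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
    dE = 2 * spins[i, j] * (J * nb + B[i, j])    # energy cost of flipping
    if dE <= 0 or rng.random() < np.exp(-dE / T[i, j]):
        spins[i, j] *= -1

print(f"compliance share: {(spins == 1).mean():.2f}")
```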

  3. Efficient model checking for duration calculus based on branching-time approximations

    DEFF Research Database (Denmark)

    Fränzle, Martin; Hansen, Michael Reichhardt

    2008-01-01

    Duration Calculus (abbreviated to DC) is an interval-based, metric-time temporal logic designed for reasoning about embedded real-time systems at a high level of abstraction. But the complexity of model checking any decidable fragment featuring both negation and chop, DC's only modality, is non-elementary.

  4. Comparing and Contrasting Traditional Membrane Bioreactor Models with Novel Ones Based on Time Series Analysis

    Directory of Open Access Journals (Sweden)

    Parneet Paul

    2013-02-01

    The computer modelling and simulation of wastewater treatment plants and their specific technologies, such as membrane bioreactors (MBRs), are becoming increasingly useful to consultant engineers when designing, upgrading, retrofitting, operating and controlling these plants. This research uses traditional phenomenological mechanistic models based on MBR filtration and biochemical processes to measure the effectiveness of alternative and novel time series models based upon input-output system identification methods. Both model types are calibrated and validated using similar plant layouts and data sets derived for this purpose. Results prove that although both approaches have their advantages, they also have specific disadvantages. In conclusion, the MBR plant designer and/or operator who wishes to use good-quality, calibrated models to gain a better understanding of their process should carefully consider which model type to select based upon what their initial modelling objectives are. Each situation usually proves unique.

  5. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method

    OpenAIRE

    Jun-He Yang; Ching-Hsue Cheng; Chia-Pan Chan

    2017-01-01

    Reservoirs are important for households and impact the national economy. This paper proposed a time-series forecasting model based on estimating a missing value followed by variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated dataset based on ordering of the data as a research dataset. The proposed time-series forecasting m...

  6. A bootstrap based space-time surveillance model with an application to crime occurrences

    Science.gov (United States)

    Kim, Youngho; O'Kelly, Morton

    2008-06-01

    This study proposes a bootstrap-based space-time surveillance model. Designed to find emerging hotspots in near-real time, the bootstrap based model is characterized by its use of past occurrence information and bootstrap permutations. Many existing space-time surveillance methods, using population at risk data to generate expected values, have resulting hotspots bounded by administrative area units and are of limited use for near-real time applications because of the population data needed. However, this study generates expected values for local hotspots from past occurrences rather than population at risk. Also, bootstrap permutations of previous occurrences are used for significant tests. Consequently, the bootstrap-based model, without the requirement of population at risk data, (1) is free from administrative area restriction, (2) enables more frequent surveillance for continuously updated registry database, and (3) is readily applicable to criminology and epidemiology surveillance. The bootstrap-based model performs better for space-time surveillance than the space-time scan statistic. This is shown by means of simulations and an application to residential crime occurrences in Columbus, OH, year 2000.
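
    The core test can be sketched for a single cell: the expected level comes from the cell's own past counts, and significance is judged against bootstrap resamples of that history, with no population-at-risk data involved. The counts and threshold below are made up.

```python
import numpy as np

def is_emerging_hotspot(history, current, n_boot=5000, alpha=0.01, seed=0):
    """Flag a cell if the current count exceeds the bootstrap (1 - alpha) quantile
    of mean counts resampled from the cell's own history."""
    rng = np.random.default_rng(seed)
    boot = rng.choice(history, size=(n_boot, len(history)), replace=True)
    return current > np.quantile(boot.mean(axis=1), 1 - alpha)

weekly_counts = np.array([3, 2, 4, 1, 3, 2, 5, 3, 2, 4])   # past occurrences
print(is_emerging_hotspot(weekly_counts, current=9))        # -> True
```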

  7. Modeling Financial Time Series Based on a Market Microstructure Model with Leverage Effect

    Directory of Open Access Journals (Sweden)

    Yanhui Xi

    2016-01-01

    The basic market microstructure model specifies that the price/return innovation and the volatility innovation are independent Gaussian white noise processes. However, the financial leverage effect has been found to be statistically significant in many financial time series. In this paper, a novel market microstructure model with leverage effects is proposed. The model specification assumes a negative correlation in the errors between the price/return innovation and the volatility innovation. With the new representation, a theoretical explanation of the leverage effect is provided. Simulated data and daily stock market indices (Shanghai composite index, Shenzhen component index, and Standard and Poor's 500 Composite index) are used to estimate the leverage market microstructure model via the Bayesian Markov chain Monte Carlo (MCMC) method. The results verify the effectiveness of the model and its estimation approach and also indicate that the stock markets have strong leverage effects. Compared with the classical leverage stochastic volatility (SV) model in terms of DIC (Deviance Information Criterion), the leverage market microstructure model fits the data better.
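
    A simulation sketch of the leverage mechanism is below: a stochastic-volatility-style recursion in which the return innovation and the volatility innovation are negatively correlated. This illustrates the correlation structure the paper builds in, not the authors' exact microstructure specification or its MCMC estimation; all parameters are made up.

```python
import numpy as np

rng = np.random.default_rng(5)
T, mu, phi, sig_eta, rho = 2000, -0.5, 0.97, 0.15, -0.6

# Draw correlated (eps_t, eta_t) innovation pairs.
cov = np.array([[1.0, rho * sig_eta], [rho * sig_eta, sig_eta ** 2]])
eps, eta = rng.multivariate_normal([0, 0], cov, size=T).T

h = np.empty(T)
h[0] = mu
for t in range(1, T):
    h[t] = mu + phi * (h[t - 1] - mu) + eta[t]   # log-volatility recursion
r = np.exp(h / 2) * eps                           # returns

# A negative corr(r_t, |r_{t+1}|) is the leverage-effect signature.
print(np.corrcoef(r[:-1], np.abs(r[1:]))[0, 1])
```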

  8. Automatic CT-based finite element model generation for temperature-based death time estimation: feasibility study and sensitivity analysis.

    Science.gov (United States)

    Schenkl, Sebastian; Muggenthaler, Holger; Hubig, Michael; Erdmann, Bodo; Weiser, Martin; Zachow, Stefan; Heinrich, Andreas; Güttler, Felix Victor; Teichgräber, Ulf; Mall, Gita

    2017-05-01

    Temperature-based death time estimation is based either on simple phenomenological models of corpse cooling or on detailed physical heat transfer models. The latter are much more complex but allow a higher accuracy of death time estimation, as in principle, all relevant cooling mechanisms can be taken into account. Here, a complete workflow for finite element-based cooling simulation is presented. The following steps are demonstrated on a CT phantom: (1) computed tomography (CT) scan; (2) segmentation of the CT images for thermodynamically relevant features of individual geometries and compilation in a geometric computer-aided design (CAD) model; (3) conversion of the segmentation result into a finite element (FE) simulation model; (4) computation of the model cooling curve (MOD); (5) calculation of the cooling time (CTE). For the first time in FE-based cooling time estimation, the steps from the CT image over segmentation to FE model generation are performed semi-automatically. The cooling time calculation results are compared to cooling measurements performed on the phantoms under controlled conditions. In this context, the method is validated using a CT phantom. Some of the phantoms' thermodynamic material parameters had to be determined via independent experiments. Moreover, the impact of geometry and material parameter uncertainties on the estimated cooling time is investigated by a sensitivity analysis.

  9. Nonlinear Prediction Model for Hydrologic Time Series Based on Wavelet Decomposition

    Science.gov (United States)

    Kwon, H.; Khalil, A.; Brown, C.; Lall, U.; Ahn, H.; Moon, Y.

    2005-12-01

    Traditionally, forecasting and characterization of hydrologic systems are performed using many techniques. Stochastic linear methods such as AR and ARIMA, and nonlinear ones such as tools based on statistical learning theory, have been extensively used. The difficulty common to all methods is the determination of sufficient and necessary information and predictors for a successful prediction. Relationships between hydrologic variables are often highly nonlinear and interrelated across temporal scales. A new hybrid approach is proposed for the simulation of hydrologic time series, combining the wavelet transform and a nonlinear model, and thus drawing on the merits of both. The wavelet transform is adopted to decompose a hydrologic nonlinear process into a set of mono-component signals, which are then simulated by the nonlinear model. The hybrid methodology is formulated in a manner to improve the accuracy of long-term forecasting. The proposed hybrid model yields much better results in terms of capturing and reproducing the time-frequency properties of the system at hand, and prediction results are promising when compared to traditional univariate time series models. An application demonstrating the plausibility of the proposed methodology is provided, and the results indicate that the wavelet-based time series model can simulate and forecast hydrologic variables reasonably well. This will ultimately serve the purpose of integrated water resources planning and management.

  10. Nonlinear dynamic modeling of a simple flexible rotor system subjected to time-variable base motions

    Science.gov (United States)

    Chen, Liqiang; Wang, Jianjun; Han, Qinkai; Chu, Fulei

    2017-09-01

    Rotor systems carried in transportation systems or under seismic excitations are considered to have a moving base. To study the dynamic behavior of flexible rotor systems subjected to time-variable base motions, a general model is developed based on the finite element method and Lagrange's equation. Two groups of Euler angles are defined to describe the rotation of the rotor with respect to the base and that of the base with respect to the ground. It is found that the base rotations would cause nonlinearities in the model. To verify the proposed model, a novel test rig which could simulate the base angular movement is designed. Dynamic experiments on a flexible rotor-bearing system with base angular motions are carried out. Based upon these, numerical simulations are conducted to further study the dynamic response of the flexible rotor under harmonic angular base motions. The effects of base angular amplitude, rotating speed and base frequency on response behaviors are discussed by means of FFT, waterfall, frequency response curves and orbits of the rotor. The FFT and waterfall plots of the disk horizontal and vertical vibrations are marked with multiples of the base frequency and sum and difference tones of the rotating frequency and the base frequency. Their amplitudes will increase remarkably when they meet the whirling frequencies of the rotor system.

  11. A Sarsa(λ)-based control model for real-time traffic light coordination.

    Science.gov (United States)

    Zhou, Xiaoke; Zhu, Fei; Liu, Quan; Fu, Yuchen; Huang, Wei

    2014-01-01

    Traffic problems often occur because traffic demand exceeds road capacity. Maximizing traffic flow and minimizing the average waiting time are the goals of intelligent traffic control. Each junction wants to obtain a larger traffic flow. During the course, junctions form a policy of coordination as well as constraints for adjacent junctions to maximize their own interests. A good traffic signal timing policy is helpful to solve the problem. However, as there are so many factors that can affect the traffic control model, it is difficult to find the optimal solution. The inability of traffic light controllers to learn from past experience leaves them unable to adapt to dynamic changes in traffic flow. Considering the dynamic characteristics of the actual traffic environment, a traffic control approach based on a reinforcement learning algorithm can be applied to obtain an optimal scheduling policy. The proposed Sarsa(λ)-based real-time traffic control optimization model can maintain the traffic signal timing policy more effectively. The Sarsa(λ)-based model learns from experience a traffic cost that considers delay time, the number of waiting vehicles, and the integrated saturation, and uses it to determine the optimal actions. The experimental results show an inspiring improvement in traffic control, indicating the proposed model is capable of facilitating real-time dynamic traffic control.

  12. A Sarsa(λ)-Based Control Model for Real-Time Traffic Light Coordination

    Directory of Open Access Journals (Sweden)

    Xiaoke Zhou

    2014-01-01

    Traffic problems often occur because traffic demand exceeds road capacity. Maximizing traffic flow and minimizing the average waiting time are the goals of intelligent traffic control. Each junction wants to obtain a larger traffic flow. During the course, junctions form a policy of coordination as well as constraints for adjacent junctions to maximize their own interests. A good traffic signal timing policy is helpful to solve the problem. However, as there are so many factors that can affect the traffic control model, it is difficult to find the optimal solution. The inability of traffic light controllers to learn from past experience leaves them unable to adapt to dynamic changes in traffic flow. Considering the dynamic characteristics of the actual traffic environment, a traffic control approach based on a reinforcement learning algorithm can be applied to obtain an optimal scheduling policy. The proposed Sarsa(λ)-based real-time traffic control optimization model can maintain the traffic signal timing policy more effectively. The Sarsa(λ)-based model learns from experience a traffic cost that considers delay time, the number of waiting vehicles, and the integrated saturation, and uses it to determine the optimal actions. The experimental results show an inspiring improvement in traffic control, indicating the proposed model is capable of facilitating real-time dynamic traffic control.
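
    A tabular Sarsa(λ) sketch on a toy single-intersection environment is below. The state discretization, arrival rates and the reward (negative total queue, a crude proxy for the paper's traffic cost) are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
CAP = 8                                     # queue bucket cap per approach

def step(q_ns, q_ew, phase, action):
    """action 0 keeps the current green phase, 1 switches it."""
    phase = phase ^ action
    q_ns += rng.poisson(0.7) - (3 if phase == 0 else 0)   # green serves 3 cars
    q_ew += rng.poisson(0.5) - (3 if phase == 1 else 0)
    q_ns, q_ew = max(q_ns, 0), max(q_ew, 0)
    reward = -(q_ns + q_ew)                                # negative total queue
    return min(q_ns, CAP), min(q_ew, CAP), phase, reward

Q = np.zeros((CAP + 1, CAP + 1, 2, 2))      # (q_ns, q_ew, phase) x action
alpha, gamma, lam, eps = 0.1, 0.95, 0.9, 0.1

for episode in range(300):
    E = np.zeros_like(Q)                    # eligibility traces
    s, a = (0, 0, 0), rng.integers(2)
    for t in range(200):
        q_ns, q_ew, phase, r = step(*s, a)
        s2 = (q_ns, q_ew, phase)
        a2 = rng.integers(2) if rng.random() < eps else int(np.argmax(Q[s2]))
        delta = r + gamma * Q[s2][a2] - Q[s][a]
        E[s][a] += 1.0                      # accumulating trace
        Q += alpha * delta * E
        E *= gamma * lam
        s, a = s2, a2

print("learned action (0=keep, 1=switch) during phase 0, by NS x EW queue:")
print(np.argmax(Q[:, :, 0, :], axis=-1))
```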

  13. Model-based Clustering of Categorical Time Series with Multinomial Logit Classification

    Science.gov (United States)

    Frühwirth-Schnatter, Sylvia; Pamminger, Christoph; Winter-Ebmer, Rudolf; Weber, Andrea

    2010-09-01

    A common problem in many areas of applied statistics is to identify groups of similar time series in a panel of time series. However, distance-based clustering methods cannot easily be extended to time series data, where an appropriate distance measure is rather difficult to define, particularly for discrete-valued time series. Markov chain clustering, proposed by Pamminger and Frühwirth-Schnatter [6], is an approach for clustering discrete-valued time series obtained by observing a categorical variable with several states. This model-based clustering method is based on finite mixtures of first-order time-homogeneous Markov chain models. In order to further explain group membership, we present an extension to the approach of Pamminger and Frühwirth-Schnatter [6] by formulating a probabilistic model for the latent group indicators within the Bayesian classification rule using a multinomial logit model. The parameters are estimated for a fixed number of clusters within a Bayesian framework using a Markov chain Monte Carlo (MCMC) sampling scheme representing a (full) Gibbs-type sampler which involves only draws from standard distributions. Finally, an application to a panel of Austrian wage mobility data is presented which leads to an interesting segmentation of the Austrian labour market.

  14. Modeling Complex Time Limits

    Directory of Open Access Journals (Sweden)

    Oleg Svatos

    2013-01-01

    In this paper we analyze the complexity of time limits found especially in the regulated processes of public administration. First, we review the most popular process modeling languages. An example scenario based on current Czech legislation is defined and then captured in the discussed process modeling languages. The analysis shows that contemporary process modeling languages support capturing of time limits only partially. This causes trouble for analysts and unnecessary complexity in the models. Given the unsatisfactory results of the contemporary process modeling languages, we analyze the complexity of time limits in greater detail and outline the lifecycles of a time limit using the multiple dynamic generalizations pattern. As an alternative to the popular process modeling languages, we present the PSD process modeling language, which supports the defined lifecycles of a time limit natively and therefore keeps the models simple and easy to understand.

  15. Informing hydrological models with ground-based time-lapse relative gravimetry: potential and limitations

    DEFF Research Database (Denmark)

    Bauer-Gottwein, Peter; Christiansen, Lars; Rosbjerg, Dan

    2011-01-01

    Coupled hydrogeophysical inversion emerges as an attractive option to improve the calibration and predictive capability of hydrological models. Recently, ground-based time-lapse relative gravity (TLRG) measurements have attracted increasing interest because there is a direct relationship between … parameter uncertainty decreased significantly when TLRG data was included in the inversion. The forced infiltration experiment caused changes in unsaturated zone storage, which were monitored using TLRG and ground-penetrating radar. A numerical unsaturated zone model was subsequently conditioned on both …

  16. Working Memory Span Development: A Time-Based Resource-Sharing Model Account

    Science.gov (United States)

    Barrouillet, Pierre; Gavens, Nathalie; Vergauwe, Evie; Gaillard, Vinciane; Camos, Valerie

    2009-01-01

    The time-based resource-sharing model (P. Barrouillet, S. Bernardin, & V. Camos, 2004) assumes that during complex working memory span tasks, attention is frequently and surreptitiously switched from processing to reactivate decaying memory traces before their complete loss. Three experiments involving children from 5 to 14 years of age…

  17. A model for food and stimulus changes that signal time-based contingency changes.

    Science.gov (United States)

    Cowie, Sarah; Davison, Michael; Elliffe, Douglas

    2014-11-01

    When the availability of reinforcers depends on time since an event, time functions as a discriminative stimulus. Behavioral control by elapsed time is generally weak, but may be enhanced by added stimuli that act as additional time markers. The present paper assessed the effect of brief and continuous added stimuli on control by time-based changes in the reinforcer differential, using a procedure in which the local reinforcer ratio reversed at a fixed time after the most recent reinforcer delivery. Local choice was enhanced by the presentation of the brief stimuli, even when the stimulus change signalled only elapsed time, but not the local reinforcer ratio. The effect of the brief stimulus presentations on choice decreased as a function of time since the most recent stimulus change. We compared the ability of several versions of a model of local choice to describe these data. The data were best described by a model which assumed that error in discriminating the local reinforcer ratio arose from imprecise discrimination of reinforcers in both time and space, suggesting that timing behavior is controlled not only by discrimination of elapsed time, but also by discrimination of the reinforcer differential in time. © Society for the Experimental Analysis of Behavior.

  18. A Perspective for Time-Varying Channel Compensation with Model-Based Adaptive Passive Time-Reversal

    Directory of Open Access Journals (Sweden)

    Lussac P. MAIA

    2015-06-01

    Underwater communications mainly rely on acoustic propagation, which is strongly affected by frequency-dependent attenuation, shallow-water multipath propagation and significant Doppler spread/shift induced by source-receiver-surface motion. Time-reversal based techniques offer a low-complexity solution to decrease interference caused by multipath, but complete equalization cannot be reached (performance saturates as the signal-to-noise ratio is maximized), and these techniques in their conventional form are quite sensitive to channel variations along the transmission. Acoustic propagation modeling in the high-frequency regime can yield physics-based information that is potentially useful to channel compensation methods such as passive time-reversal (pTR), which is often employed in Digital Acoustic Underwater Communications (DAUC) systems because of its low computational cost. Aiming to overcome the difficulties of pTR with time-varying underwater channels, we intend to insert physical knowledge from acoustic propagation modeling into the pTR filtering. The authors are investigating the influence of channel physical parameters on the propagation of coherent acoustic signals transmitted through shallow-water waveguides and received on a vertical line array of sensors. A time-variant approach is used, as required to model high-frequency acoustic propagation in realistic scenarios, and applied to a DAUC simulator containing an adaptive passive time-reversal receiver (ApTR). Understanding the effects of changes in the physical features of the channel on propagation can lead to the design of ApTR filters which could help to improve communication system performance. This work presents a short extension and review of paper 12, which tested Doppler distortion induced by source-surface motion and ApTR compensation for a DAUC system on a simulated time-variant channel, in the scope of model-based equalization. Environmental focusing approach

  19. Seismic wavefield modeling based on time-domain symplectic and Fourier finite-difference method

    Science.gov (United States)

    Fang, Gang; Ba, Jing; Liu, Xin-xin; Zhu, Kun; Liu, Guo-Chang

    2017-06-01

    Seismic wavefield modeling is important for improving seismic data processing and interpretation. Calculations of wavefield propagation are sometimes not stable when forward modeling of seismic waves uses large time steps over long times. Based on the Hamiltonian expression of the acoustic wave equation, we propose a structure-preserving method for seismic wavefield modeling by applying the symplectic finite-difference method on time grids and the Fourier finite-difference method on space grids to solve the acoustic wave equation. The proposed method, called the symplectic Fourier finite-difference (symplectic FFD) method, offers high computational accuracy and improved computational stability. Using the acoustic approximation, we extend the method to anisotropic media. We discuss the calculations in the symplectic FFD method for seismic wavefield modeling of isotropic and anisotropic media, and use the BP salt model and BP TTI model to test the proposed method. The numerical examples suggest that the proposed method can be used in seismic modeling with strongly variable velocities, offering high computational accuracy and low numerical dispersion. The symplectic FFD method suppresses the residual qSV wave in seismic modeling of anisotropic media and maintains the stability of the wavefield propagation for large time steps.

  20. Time series modeling by a regression approach based on a latent process.

    Science.gov (United States)

    Chamroukhi, Faicel; Samé, Allou; Govaert, Gérard; Aknin, Patrice

    2009-01-01

    Time series are used in many domains, including finance, engineering, economics and bioinformatics, generally to represent the change of a measurement over time. Modeling techniques may then be used to give a synthetic representation of such data. A new approach for time series modeling is proposed in this paper. It consists of a regression model incorporating a discrete hidden logistic process that allows different polynomial regression models to be activated smoothly or abruptly. The model parameters are estimated by the maximum likelihood method performed by a dedicated Expectation Maximization (EM) algorithm. The M step of the EM algorithm uses a multi-class Iterative Reweighted Least-Squares (IRLS) algorithm to estimate the hidden process parameters. To evaluate the proposed approach, an experimental study on simulated data and real-world data was performed using two alternative approaches: a heteroskedastic piecewise regression model using a global optimization algorithm based on dynamic programming, and a Hidden Markov Regression Model whose parameters are estimated by the Baum-Welch algorithm. Finally, in the context of the remote monitoring of components of the French railway infrastructure, and more particularly the switch mechanism, the proposed approach has been applied to modeling and classifying time series representing the condition measurements acquired during switch operations.

  1. RB Particle Filter Time Synchronization Algorithm Based on the DPM Model.

    Science.gov (United States)

    Guo, Chunsheng; Shen, Jia; Sun, Yao; Ying, Na

    2015-09-03

    Time synchronization is essential for node localization, target tracking, data fusion, and various other Wireless Sensor Network (WSN) applications. To improve the estimation accuracy of the continuous clock offset and skew of mobile nodes in WSNs, we propose a novel time synchronization algorithm, the Rao-Blackwellised (RB) particle filter time synchronization algorithm based on the Dirichlet process mixture (DPM) model. In a state-space equation with a linear substructure, state variables are divided into linear and non-linear variables by the RB particle filter algorithm. These two variables can be estimated using a Kalman filter and a particle filter, respectively, which improves the computational efficiency compared with using only the particle filter. In addition, the DPM model is used to describe the distribution of non-deterministic delays and to automatically adjust the number of Gaussian mixture model components based on the observational data. This improves the estimation accuracy of clock offset and skew, which achieves the time synchronization. The time synchronization performance of this algorithm is also validated by computer simulations and experimental measurements. The results show that the proposed algorithm has a higher time synchronization precision than traditional time synchronization algorithms.
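
    The linear substructure is easy to make concrete: between timestamp exchanges the (offset, skew) state evolves linearly, so a plain Kalman filter tracks it. The sketch below simplifies the measurement noise to Gaussian, which is exactly the part the paper's DPM-based particle filter generalizes; all noise levels and true values are assumed.

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])    # offset grows by skew * dt
H = np.array([[1.0, 0.0]])               # we observe the offset only
Qn = np.diag([1e-6, 1e-8])               # process noise (assumed)
R = np.array([[1e-3]])                   # measurement noise (assumed)

x = np.zeros((2, 1))                     # initial offset / skew estimate
P = np.eye(2)

rng = np.random.default_rng(0)
true_offset, true_skew = 0.05, 2e-4
for k in range(100):
    z = true_offset + true_skew * k * dt + rng.normal(0, np.sqrt(R[0, 0]))
    # predict
    x = F @ x
    P = F @ P @ F.T + Qn
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([[z]]) - H @ x)
    P = (np.eye(2) - K @ H) @ P

print(f"offset ~ {x[0, 0]:.4f}, skew ~ {x[1, 0]:.2e}")
```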

  2. Nonlinear System Identification via Basis Functions Based Time Domain Volterra Model

    Directory of Open Access Journals (Sweden)

    Yazid Edwar

    2014-07-01

    This paper proposes a basis functions based time domain Volterra model for nonlinear system identification. The Volterra kernels are expanded using complex exponential basis functions and estimated via a genetic algorithm (GA). The accuracy and practicability of the proposed method are then assessed experimentally from a scaled 1:100 model of a prototype truss spar platform. Identification results in the time and frequency domains are presented, and coherence functions are computed to check the quality of the identification results. It is shown that the experimental data and the results of the proposed method are in good agreement.

  3. Time-driven activity-based costing: A dynamic value assessment model in pediatric appendicitis.

    Science.gov (United States)

    Yu, Yangyang R; Abbas, Paulette I; Smith, Carolyn M; Carberry, Kathleen E; Ren, Hui; Patel, Binita; Nuchtern, Jed G; Lopez, Monica E

    2017-06-01

    Healthcare reform policies are emphasizing value-based healthcare delivery. We hypothesize that time-driven activity-based costing (TDABC) can be used to appraise healthcare interventions in pediatric appendicitis. Triage-based standing delegation orders, surgical advanced practice providers, and a same-day discharge protocol were implemented to target deficiencies identified in our initial TDABC model. Post-intervention process maps for a hospital episode were created using electronic time stamp data for simple appendicitis cases during February to March 2016. Total personnel and consumable costs were determined using TDABC methodology. The post-intervention TDABC model featured 6 phases of care, 33 processes, and 19 personnel types. Our interventions reduced duration and costs in the emergency department (-41 min, -$23) and pre-operative floor (-57 min, -$18). While post-anesthesia care unit duration and costs increased (+224 min, +$41), the same-day discharge protocol eliminated post-operative floor costs (-$306). Our model incorporating all three interventions reduced total direct costs by 11% ($2753.39 to $2447.68) and duration of hospitalization by 51% (1984 min to 966 min). Time-driven activity-based costing can dynamically model changes in our healthcare delivery as a result of process improvement interventions. It is an effective tool to continuously assess the impact of these interventions on the value of appendicitis care. Level of evidence: II. Type of study: Economic Analysis. Copyright © 2017 Elsevier Inc. All rights reserved.
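
    TDABC itself reduces to two arithmetic steps, sketched below with made-up figures (not the paper's data): compute a capacity cost rate per resource (resource cost divided by practical capacity in minutes), then price an episode as the sum of rate times process time, plus consumables.

```python
# Step 1: capacity cost rates ($/min), e.g. monthly cost over practical minutes.
rates = {
    "triage nurse": 4770 / 5300,    # $0.90/min (assumed cost and capacity)
    "surgical APP": 7420 / 5300,    # $1.40/min
    "surgeon":     22260 / 5300,    # $4.20/min
}

# Step 2: minutes each personnel type spends on one episode (assumed).
minutes = {"triage nurse": 15, "surgical APP": 25, "surgeon": 45}
consumables = 310.0                 # $ per episode (assumed)

episode_cost = consumables + sum(rates[p] * minutes[p] for p in rates)
print(f"episode cost: ${episode_cost:,.2f}")
```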

  4. Intuitionistic Fuzzy Time Series Forecasting Model Based on Intuitionistic Fuzzy Reasoning

    Directory of Open Access Journals (Sweden)

    Ya’nan Wang

    2016-01-01

    Fuzzy set theory cannot describe data comprehensively, which has greatly limited the objectivity of fuzzy time series in uncertain data forecasting. In this regard, an intuitionistic fuzzy time series forecasting model is built. In the new model, a fuzzy clustering algorithm is used to divide the universe of discourse into unequal intervals, and a more objective technique for ascertaining the membership function and nonmembership function of the intuitionistic fuzzy set is proposed. On these bases, forecast rules based on intuitionistic fuzzy approximate reasoning are established. Finally, comparative experiments on the enrollments of the University of Alabama and the Taiwan Stock Exchange Capitalization Weighted Stock Index are carried out. The results show that the new model has a clear advantage in improving forecast accuracy.

  5. Detection of Common Problems in Real-Time and Multicore Systems Using Model-Based Constraints

    Directory of Open Access Journals (Sweden)

    Raphaël Beamonte

    2016-01-01

    Multicore systems are complex in that multiple processes run concurrently and can interfere with each other. Real-time systems add time constraints on top of that, making results invalid as soon as a deadline is missed. Tracing is often the most reliable and accurate tool available to study and understand those systems. However, tracing requires that users understand the kernel events and their meaning, and it is therefore not very accessible. Using modeling to generate source code or to represent an application's workflow is handy for developers and has emerged as part of the model-driven development methodology. In this paper, we propose a new approach to system analysis using model-based constraints on top of userspace and kernel traces. We introduce the constraint representation and show how traces can be used to follow the application's workflow and check the constraints set on the model. We then present a number of common problems that we encountered in real-time and multicore systems and describe how our model-based constraints could have helped save time by automatically identifying the unwanted behavior.

  6. Conceptual framework for model-based analysis of residence time distribution in twin-screw granulation

    DEFF Research Database (Denmark)

    Kumar, Ashish; Vercruysse, Jurgen; Vanhoorne, Valerie

    2015-01-01

    Twin-screw granulation is a promising continuous alternative for traditional batchwise wet granulation processes. The twin-screw granulator (TSG) screws consist of transport and kneading element modules. Therefore, the granulation to a large extent is governed by the residence time distribution...... within each module where different granulation rate processes dominate over others. Currently, experimental data is used to determine the residence time distributions. In this study, a conceptual model based on classical chemical engineering methods is proposed to better understand and simulate...... the residence time distribution in a TSG. The experimental data were compared with the proposed most suitable conceptual model to estimate the parameters of the model and to analyse and predict the effects of changes in number of kneading discs and their stagger angle, screw speed and powder feed rate...
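
    A minimal sketch of the kind of classical chemical-engineering construct such a conceptual model builds on is the tanks-in-series residence time distribution, where a module is approximated by N ideal mixers; N and the mean residence time below are illustrative, not fitted TSG parameters.

    ```python
    # Tanks-in-series RTD: E(t) for N equal ideal stirred tanks in series.
    # Increasing N at fixed mean residence time makes the RTD narrower (more
    # plug-flow-like), the qualitative effect of adding mixing elements.
    import numpy as np
    from math import factorial

    def tanks_in_series_rtd(t, N, tau):
        """E(t) for N equal tanks, each with mean time tau (overall mean N*tau)."""
        return t ** (N - 1) / (factorial(N - 1) * tau ** N) * np.exp(-t / tau)

    t = np.linspace(0.0, 60.0, 601)               # seconds
    dt = t[1] - t[0]
    for N in (2, 5, 10):
        E = tanks_in_series_rtd(t, N, tau=20.0 / N)   # keep the mean at 20 s
        print(f"N={N:2d}: mean = {np.sum(t * E) * dt:.1f} s, "
              f"peak at t = {t[np.argmax(E)]:.1f} s")
    ```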

  7. 3D airborne EM modeling based on the spectral-element time-domain (SETD) method

    Science.gov (United States)

    Cao, X.; Yin, C.; Huang, X.; Liu, Y.; Zhang, B., Sr.; Cai, J.; Liu, L.

    2017-12-01

    In the field of 3D airborne electromagnetic (AEM) modeling, both the finite-difference time-domain (FDTD) method and the finite-element time-domain (FETD) method have limitations: FDTD depends too heavily on the grids and time steps, while FETD requires a large number of grids for complex structures. We propose a spectral-element time-domain (SETD) method based on GLL interpolation basis functions for spatial discretization and the Backward Euler (BE) technique for time discretization. The spectral-element method is based on a weighted-residual technique with polynomials as vector basis functions. It can contribute to an accurate result by increasing the order of the polynomials and suppressing spurious solutions. The BE method is a stable time-discretization technique that has no limitation on time steps and can guarantee higher accuracy during the iteration process. To minimize the number of non-zeros in the sparse matrix and obtain a diagonal mass matrix, we apply a reduced-order integration technique. A direct solver, whose speed is independent of the condition number, is adopted for quickly solving the large-scale sparse linear equation system. To check the accuracy of our SETD algorithm, we compare our results with semi-analytical solutions for a three-layered earth model within the time range 10^-6 to 10^-2 s for different physical meshes and SE orders. The results show that the relative errors for the magnetic field B and the magnetic induction are both around 3-5%. Further, we calculate AEM responses for an AEM system over a 3D earth model in Figure 1. From numerical experiments for both 1D and 3D models, we draw the conclusions that: 1) SETD can deliver accurate results for both dB/dt and B; 2) increasing the SE order improves the modeling accuracy for early to middle time channels, when the EM field diffuses fast, so the high-order SE can model the detailed variation; 3) at very late time channels, increasing the SE order brings little improvement in modeling accuracy, but the time interval plays

  8. A stochastic HMM-based forecasting model for fuzzy time series.

    Science.gov (United States)

    Li, Sheng-Tun; Cheng, Yi-Chung

    2010-10-01

    Recently, fuzzy time series have attracted more academic attention than traditional time series due to their capability of dealing with the uncertainty and vagueness inherent in the data collected. The formulation of fuzzy relations is one of the key issues affecting forecasting results. Most of the present works adopt IF-THEN rules for relationship representation, which leads to higher computational overhead and rule redundancy. Sullivan and Woodall proposed a Markov-based formulation and a forecasting model to reduce computational overhead; however, its applicability is limited to handling one-factor problems. In this paper, we propose a novel forecasting model based on the hidden Markov model by enhancing Sullivan and Woodall's work to allow handling of two-factor forecasting problems. Moreover, in order to make the nature of conjecture and randomness of forecasting more realistic, the Monte Carlo method is adopted to estimate the outcome. To test the effectiveness of the resulting stochastic model, we conduct two experiments and compare the results with those from other models. The first experiment consists of forecasting the daily average temperature and cloud density in Taipei, Taiwan, and the second experiment is based on the Taiwan Weighted Stock Index by forecasting the exchange rate of the New Taiwan dollar against the U.S. dollar. In addition to improving forecasting accuracy, the proposed model adheres to the central limit theorem, and thus, the result statistically approximates to the real mean of the target value being forecast.
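
    The Markov-plus-Monte-Carlo idea can be sketched compactly for a single factor (the paper's model is a two-factor hidden Markov model): states are intervals of the observable, transitions are counted from the state sequence, and the forecast is the Monte Carlo mean of sampled next-state midpoints. The data and intervals below are invented.

    ```python
    # A one-factor Markov + Monte Carlo forecasting sketch, not the paper's HMM.
    import numpy as np

    rng = np.random.default_rng(1)
    series = rng.normal(25, 3, 500)                       # e.g. daily temperatures
    edges = np.linspace(series.min(), series.max(), 6)    # 5 state intervals
    mid = (edges[:-1] + edges[1:]) / 2
    states = np.clip(np.digitize(series, edges) - 1, 0, 4)

    P = np.full((5, 5), 1e-9)                             # tiny floor avoids empty rows
    for a, b in zip(states[:-1], states[1:]):             # empirical transition counts
        P[a, b] += 1
    P /= P.sum(axis=1, keepdims=True)

    current = states[-1]
    draws = rng.choice(5, size=10000, p=P[current])       # Monte Carlo next states
    print(f"forecast = {mid[draws].mean():.2f}  "
          f"(mc std of mean = {mid[draws].std() / np.sqrt(len(draws)):.3f})")
    ```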

  9. Modelling of the acid base properties of two thermophilic bacteria at different growth times

    Science.gov (United States)

    Heinrich, Hannah T. M.; Bremer, Phil J.; McQuillan, A. James; Daughney, Christopher J.

    2008-09-01

    Acid-base titrations and electrophoretic mobility measurements were conducted on the thermophilic bacteria Anoxybacillus flavithermus and Geobacillus stearothermophilus at two different growth times corresponding to exponential and stationary/death phase. The data showed significant differences between the two investigated growth times for both bacterial species. In stationary/death phase samples, cells were disrupted and their buffering capacity was lower than that of exponential phase cells. For G. stearothermophilus the electrophoretic mobility profiles changed dramatically. Chemical equilibrium models were developed to simultaneously describe the data from the titrations and the electrophoretic mobility measurements. A simple approach was developed to determine confidence intervals for the overall variance between the model and the experimental data, in order to identify statistically significant changes in model fit and thereby select the simplest model that was able to adequately describe each data set. Exponential phase cells of the investigated thermophiles had a higher total site concentration than the average found for mesophilic bacteria (based on a previously published generalised model for the acid-base behaviour of mesophiles), whereas the opposite was true for cells in stationary/death phase. The results of this study indicate that growth phase is an important parameter that can affect ion binding by bacteria, that growth phase should be considered when developing or employing chemical models for bacteria-bearing systems.

  10. Deformation analysis of polymers composites: rheological model involving time-based fractional derivative

    DEFF Research Database (Denmark)

    Zhou, H. W.; Yi, H. Y.; Mishnaevsky, Leon

    2017-01-01

    A modeling approach to the time-dependent properties of Glass Fiber Reinforced Polymer (GFRP) composites is of special interest for the quantitative description of long-term behavior. An electronic creep machine is employed to investigate the time-dependent deformation of four specimens of dog-bone-shaped GFRP composites at various stress levels. A negative exponent function based on structural changes is introduced to describe the damage evolution of material properties during the creep test. Accordingly, a new creep constitutive equation, referred to as the fractional derivative Maxwell model... Results predicted by the fractional derivative Maxwell model proposed in the paper are in good agreement with the experimental data. It is shown that the new creep constitutive model needs few parameters to represent various time-dependent behaviors.
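
    A hedged sketch of the fitting task described above: creep strain under constant stress for a fractional-derivative Maxwell element (spring in series with a springpot), eps(t) = s0/E + s0*t^alpha/(eta*Gamma(1+alpha)), fitted to simulated creep data. The stress level and parameter values are assumptions, not the paper's measurements.

    ```python
    # Fractional Maxwell creep fit on synthetic data; note only three parameters
    # (E, eta, alpha) are needed, echoing the abstract's point about parsimony.
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import gamma

    s0 = 40.0                                     # applied stress, MPa (assumed)

    def creep_strain(t, E, eta, a):
        return s0 / E + s0 * t ** a / (eta * gamma(1 + a))

    t = np.linspace(0.1, 100, 200)                # hours
    eps_meas = creep_strain(t, 3000.0, 8.0e4, 0.35) \
               + np.random.default_rng(2).normal(0, 2e-4, t.size)

    (E, eta, a), _ = curve_fit(creep_strain, t, eps_meas, p0=(1000.0, 1e4, 0.5))
    print(f"E = {E:.0f} MPa, eta = {eta:.3g}, alpha = {a:.2f}")
    ```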

  11. Validation of Energy Expenditure Prediction Models Using Real-Time Shoe-Based Motion Detectors.

    Science.gov (United States)

    Lin, Shih-Yun; Lai, Ying-Chih; Hsia, Chi-Chun; Su, Pei-Fang; Chang, Chih-Han

    2017-09-01

    This study aimed to verify and compare the accuracy of energy expenditure (EE) prediction models using shoe-based motion detectors with embedded accelerometers. Three physical activity (PA) datasets (unclassified, recognition, and intensity segmentation) were used to develop three prediction models. A multiple classification flow and these models were used to estimate EE. The "unclassified" dataset was defined as the data without PA recognition, the "recognition" as the data classified with PA recognition, and the "intensity segmentation" as the data with intensity segmentation. The three datasets contained accelerometer signals (quantified as signal magnitude area, SMA) and net heart rate (HRnet). The accuracy of these models was assessed according to the deviation between physically measured EE and model-estimated EE. The variance between physically measured EE and model-estimated EE expressed by simple linear regressions was increased by 63% and 13% using SMA and HRnet, respectively. The accuracy of the EE predicted from accelerometer signals is influenced by the different activities that exhibit different count-EE relationships within the same prediction model. The recognition model provides a better estimation and lower variability of EE compared with the unclassified and intensity segmentation models. The proposed shoe-based motion detectors can improve the accuracy of EE estimation and have great potential to be used to manage everyday exercise in real time.

  12. Connectivity-based neurofeedback: Dynamic causal modeling for real-time fMRI☆

    Science.gov (United States)

    Koush, Yury; Rosa, Maria Joao; Robineau, Fabien; Heinen, Klaartje; W. Rieger, Sebastian; Weiskopf, Nikolaus; Vuilleumier, Patrik; Van De Ville, Dimitri; Scharnowski, Frank

    2013-01-01

    Neurofeedback based on real-time fMRI is an emerging technique that can be used to train voluntary control of brain activity. Such brain training has been shown to lead to behavioral effects that are specific to the functional role of the targeted brain area. However, real-time fMRI-based neurofeedback so far was limited to mainly training localized brain activity within a region of interest. Here, we overcome this limitation by presenting near real-time dynamic causal modeling in order to provide feedback information based on connectivity between brain areas rather than activity within a single brain area. Using a visual–spatial attention paradigm, we show that participants can voluntarily control a feedback signal that is based on the Bayesian model comparison between two predefined model alternatives, i.e. the connectivity between left visual cortex and left parietal cortex vs. the connectivity between right visual cortex and right parietal cortex. Our new approach thus allows for training voluntary control over specific functional brain networks. Because most mental functions and most neurological disorders are associated with network activity rather than with activity in a single brain region, this novel approach is an important methodological innovation in order to more directly target functionally relevant brain networks. PMID:23668967

  13. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method.

    Science.gov (United States)

    Yang, Jun-He; Cheng, Ching-Hsue; Chan, Chia-Pan

    2017-01-01

    Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model based on estimating missing values followed by variable selection to forecast a reservoir's water level. The study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated research dataset based on the ordering of the data. The proposed time-series forecasting model has three foci. First, the study uses five imputation methods to estimate missing values. Second, the key variables are identified via factor analysis, and the unimportant variables are then deleted sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level, which is compared with listed methods in terms of forecasting error. The experimental results indicate that the Random Forest forecasting model, when applied to variable selection with full variables, has better forecasting performance than the listed models. In addition, the experiments show that the proposed variable selection can help the five forecasting methods used here improve their forecasting capability.

  14. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method

    Directory of Open Access Journals (Sweden)

    Jun-He Yang

    2017-01-01

    Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model based on estimating missing values followed by variable selection to forecast a reservoir's water level. The study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated research dataset based on the ordering of the data. The proposed time-series forecasting model has three foci. First, the study uses five imputation methods to estimate missing values. Second, the key variables are identified via factor analysis, and the unimportant variables are then deleted sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level, which is compared with listed methods in terms of forecasting error. The experimental results indicate that the Random Forest forecasting model, when applied to variable selection with full variables, has better forecasting performance than the listed models. In addition, the experiments show that the proposed variable selection can help the five forecasting methods used here improve their forecasting capability.
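
    A minimal scikit-learn sketch of the three-step pipeline (impute, select variables, fit a Random Forest); the column names and synthetic data are placeholders, not the Shimen Reservoir dataset.

    ```python
    # Impute -> crude importance-based variable selection -> Random Forest.
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.impute import SimpleImputer

    rng = np.random.default_rng(3)
    n = 400
    df = pd.DataFrame({
        "rainfall": rng.gamma(2.0, 5.0, n),
        "inflow":   rng.gamma(2.0, 8.0, n),
        "temp":     rng.normal(22, 4, n),
    })
    df["level"] = 0.4 * df["rainfall"] + 0.5 * df["inflow"] + rng.normal(0, 2, n)
    df.loc[rng.choice(n, 40, replace=False), "inflow"] = np.nan   # inject gaps

    X = SimpleImputer(strategy="median").fit_transform(df.drop(columns="level"))
    y = df["level"].to_numpy()

    rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:-50], y[:-50])
    keep = rf.feature_importances_ > 0.1          # keep only influential variables
    rf2 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:-50][:, keep], y[:-50])
    rmse = np.sqrt(np.mean((rf2.predict(X[-50:][:, keep]) - y[-50:]) ** 2))
    print("kept:", df.columns[:-1][keep].tolist(), f"RMSE = {rmse:.2f}")
    ```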

  15. Time-varying metamaterials based on graphene-wrapped microwires: Modeling and potential applications

    Science.gov (United States)

    Salary, Mohammad Mahdi; Jafar-Zanjani, Samad; Mosallaei, Hossein

    2018-03-01

    The successful realization of metamaterials and metasurfaces requires the judicious choice of constituent elements. In this paper, we demonstrate the implementation of time-varying metamaterials in the terahertz frequency regime by utilizing graphene-wrapped microwires as building blocks and modulation of graphene conductivity through exterior electrical gating. These elements enable enhancement of light-graphene interaction by utilizing optical resonances associated with Mie scattering, yielding a large tunability and modulation depth. We develop a semianalytical framework based on transition-matrix formulation for modeling and analysis of periodic and aperiodic arrays of such time-varying building blocks. The proposed method is validated against full-wave numerical results obtained using the finite-difference time-domain method. It provides an ideal tool for mathematical synthesis and analysis of space-time gradient metamaterials, eliminating the need for computationally expensive numerical models. Moreover, it allows for a wider exploration of exotic space-time scattering phenomena in time-modulated metamaterials. We apply the method to explore the role of modulation parameters in the generation of frequency harmonics and their emerging wavefronts. Several potential applications of such platforms are demonstrated, including frequency conversion, holographic generation of frequency harmonics, and spatiotemporal manipulation of light. The presented results provide key physical insights to design time-modulated functional metadevices using various building blocks and open up new directions in the emerging paradigm of time-modulated metamaterials.

  16. Short-Term Bus Passenger Demand Prediction Based on Time Series Model and Interactive Multiple Model Approach

    Directory of Open Access Journals (Sweden)

    Rui Xue

    2015-01-01

    Although bus passenger demand prediction has attracted increased attention in recent years, limited research has been conducted in the context of short-term passenger demand forecasting. This paper proposes an interactive multiple model (IMM) filter algorithm-based model to predict short-term passenger demand. After being aggregated into 15-min intervals, passenger demand data collected from a busy bus route over four months were used to generate time series. Considering that passenger demand exhibits different characteristics on different time scales, three time series were developed: weekly, daily, and 15-min. After correlation, periodicity, and stationarity analyses, time series models were constructed. In particular, the heteroscedasticity of the time series was explored to achieve better prediction performance. Finally, the IMM filter algorithm was applied to combine the individual forecasting models and dynamically predict passenger demand for the next interval. Different error indices were adopted for the analyses of the individual and hybrid models. The performance comparison indicates that the hybrid model's forecasts are superior to the individual ones in accuracy. The findings of this study are of theoretical and practical significance for bus scheduling.

  17. Adaptation to shift work: physiologically based modeling of the effects of lighting and shifts' start time.

    Directory of Open Access Journals (Sweden)

    Svetlana Postnova

    Shift work has become an integral part of our life with almost 20% of the population being involved in different shift schedules in developed countries. However, the atypical work times, especially the night shifts, are associated with reduced quality and quantity of sleep that leads to increase of sleepiness often culminating in accidents. It has been demonstrated that shift workers' sleepiness can be improved by a proper scheduling of light exposure and optimizing shifts timing. Here, an integrated physiologically-based model of sleep-wake cycles is used to predict adaptation to shift work in different light conditions and for different shift start times for a schedule of four consecutive days of work. The integrated model combines a model of the ascending arousal system in the brain that controls the sleep-wake switch and a human circadian pacemaker model. To validate the application of the integrated model and demonstrate its utility, its dynamics are adjusted to achieve a fit to published experimental results showing adaptation of night shift workers (n = 8) in conditions of either bright or regular lighting. Further, the model is used to predict the shift workers' adaptation to the same shift schedule, but for conditions not considered in the experiment. The model demonstrates that the intensity of shift light can be reduced fourfold from that used in the experiment and still produce good adaptation to night work. The model predicts that sleepiness of the workers during night shifts on a protocol with either bright or regular lighting can be significantly improved by starting the shift earlier in the night, e.g., at 21:00 instead of 00:00. Finally, the study predicts that people of the same chronotype, i.e. with identical sleep times in normal conditions, can have drastically different responses to shift work depending on their intrinsic circadian and homeostatic parameters.

  18. Adaptation to shift work: physiologically based modeling of the effects of lighting and shifts' start time.

    Science.gov (United States)

    Postnova, Svetlana; Robinson, Peter A; Postnov, Dmitry D

    2013-01-01

    Shift work has become an integral part of our life with almost 20% of the population being involved in different shift schedules in developed countries. However, the atypical work times, especially the night shifts, are associated with reduced quality and quantity of sleep that leads to increase of sleepiness often culminating in accidents. It has been demonstrated that shift workers' sleepiness can be improved by a proper scheduling of light exposure and optimizing shifts timing. Here, an integrated physiologically-based model of sleep-wake cycles is used to predict adaptation to shift work in different light conditions and for different shift start times for a schedule of four consecutive days of work. The integrated model combines a model of the ascending arousal system in the brain that controls the sleep-wake switch and a human circadian pacemaker model. To validate the application of the integrated model and demonstrate its utility, its dynamics are adjusted to achieve a fit to published experimental results showing adaptation of night shift workers (n = 8) in conditions of either bright or regular lighting. Further, the model is used to predict the shift workers' adaptation to the same shift schedule, but for conditions not considered in the experiment. The model demonstrates that the intensity of shift light can be reduced fourfold from that used in the experiment and still produce good adaptation to night work. The model predicts that sleepiness of the workers during night shifts on a protocol with either bright or regular lighting can be significantly improved by starting the shift earlier in the night, e.g., at 21:00 instead of 00:00. Finally, the study predicts that people of the same chronotype, i.e. with identical sleep times in normal conditions, can have drastically different responses to shift work depending on their intrinsic circadian and homeostatic parameters.

  19. Adaptation to Shift Work: Physiologically Based Modeling of the Effects of Lighting and Shifts’ Start Time

    Science.gov (United States)

    Postnova, Svetlana; Robinson, Peter A.; Postnov, Dmitry D.

    2013-01-01

    Shift work has become an integral part of our life with almost 20% of the population being involved in different shift schedules in developed countries. However, the atypical work times, especially the night shifts, are associated with reduced quality and quantity of sleep that leads to increase of sleepiness often culminating in accidents. It has been demonstrated that shift workers' sleepiness can be improved by a proper scheduling of light exposure and optimizing shifts timing. Here, an integrated physiologically-based model of sleep-wake cycles is used to predict adaptation to shift work in different light conditions and for different shift start times for a schedule of four consecutive days of work. The integrated model combines a model of the ascending arousal system in the brain that controls the sleep-wake switch and a human circadian pacemaker model. To validate the application of the integrated model and demonstrate its utility, its dynamics are adjusted to achieve a fit to published experimental results showing adaptation of night shift workers (n = 8) in conditions of either bright or regular lighting. Further, the model is used to predict the shift workers' adaptation to the same shift schedule, but for conditions not considered in the experiment. The model demonstrates that the intensity of shift light can be reduced fourfold from that used in the experiment and still produce good adaptation to night work. The model predicts that sleepiness of the workers during night shifts on a protocol with either bright or regular lighting can be significantly improved by starting the shift earlier in the night, e.g., at 21:00 instead of 00:00. Finally, the study predicts that people of the same chronotype, i.e. with identical sleep times in normal conditions, can have drastically different responses to shift work depending on their intrinsic circadian and homeostatic parameters. PMID:23308206

  20. CD-SEM real time bias correction using reference metrology based modeling

    Science.gov (United States)

    Ukraintsev, V.; Banke, W.; Zagorodnev, G.; Archie, C.; Rana, N.; Pavlovsky, V.; Smirnov, V.; Briginas, I.; Katnani, A.; Vaid, A.

    2018-03-01

    Accuracy of patterning impacts yield, IC performance, and technology time to market. Accuracy of patterning relies on optical proximity correction (OPC) models built using CD-SEM inputs, and on intra-die critical dimension (CD) control based on CD-SEM. Sub-nanometer measurement uncertainty (MU) of CD-SEM is required for current technologies. The reported design- and process-related bias variation of CD-SEM is in the range of several nanometers. Reference metrology and numerical modeling are used to correct SEM, but both methods are too slow for real-time bias correction. We report on real-time CD-SEM bias correction using empirical models based on reference metrology (RM) data. A significant amount of currently untapped information (sidewall angle, corner rounding, etc.) is obtainable from SEM waveforms. Using additional RM information provided for a specific technology (design rules, materials, processes), CD extraction algorithms can be pre-built and then used in real time for accurate CD extraction from regular CD-SEM images. The art and challenge of SEM modeling lie in finding a robust correlation between SEM waveform features and CD-SEM bias, and in minimizing the RM inputs needed to create a model that is accurate within the design and process space. The new approach was applied to improve the CD-SEM accuracy of 45 nm GATE and 32 nm MET1 OPC 1D models. In both cases, the MU of the state-of-the-art CD-SEM was improved by 3x and reduced to the nanometer level. A similar approach can be applied to 2D (end of line, contours, etc.) and 3D (sidewall angle, corner rounding, etc.) cases.

  1. Real time polymer nanocomposites-based physical nanosensors: theory and modeling

    Science.gov (United States)

    Bellucci, Stefano; Shunin, Yuri; Gopeyenko, Victor; Lobanova-Shunina, Tamara; Burlutskaya, Nataly; Zhukovskii, Yuri

    2017-09-01

    Functionalized carbon nanotube and graphene nanoribbon nanostructures, serving as the basis for creating physical pressure and temperature nanosensors, are considered as tools for ecological monitoring and medical applications. Fragments of nanocarbon inclusions with different morphologies, presenting a disordered system, are regarded as models for nanocomposite materials based on carbon nanocluster suspensions in dielectric polymer environments (e.g., epoxy resins). We formulate an approach to conductivity calculations for carbon-based polymer nanocomposites using the effective-media cluster approach, disordered systems theory, and conductivity mechanism analysis, and obtain the calibration dependences. Providing a proper description of electric responses in nanosensing systems, we demonstrate the implementation of advanced simulation models suitable for real-time control of nanosystems. We also consider the prospects and prototypes of the proposed physical nanosensor models, providing comparisons with experimental calibration dependences.

  2. Parameter Estimation of a Delay Time Model of Wearing Parts Based on Objective Data

    Directory of Open Access Journals (Sweden)

    Y. Tang

    2015-01-01

    The wearing parts of a system have a very high failure frequency, making it necessary to carry out continual functional inspections and maintenance to protect the system from unscheduled downtime. This allows a large amount of maintenance data to be collected. Taking the unique characteristics of the wearing parts into consideration, we establish their respective delay time models for ideal inspection cases and nonideal inspection cases. The model parameters are estimated entirely from the collected maintenance data. A likelihood function of all renewal events is then derived based on their occurrence probability functions, and the model parameters are calculated with the maximum likelihood method, which is solved by the CRM. Finally, using two wearing parts from the oil and gas drilling industry as examples, the filter element and the blowout preventer rubber core, the parameters of the distribution functions of the initial failure time and the delay time are estimated for each example, and their distribution functions are obtained. Such parameter estimation based on objective data will contribute to optimizing a reasonable functional inspection interval and will also provide theoretical models to support the integrity management of equipment and systems.
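
    The likelihood construction can be illustrated on the simplest delay-time setup: a defect initiates after an exponential time U and fails after a further exponential delay H, so the observed failure time T = U + H is hypoexponential. The sketch below fits the two rates by maximum likelihood on simulated data, with scipy's generic minimizer standing in for the paper's CRM solver; all rates are invented.

    ```python
    # Toy delay-time MLE: T = U + H, U ~ Exp(lam) (defect initiation),
    # H ~ Exp(mu) (delay time); the density of T is hypoexponential.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(4)
    lam_true, mu_true = 0.02, 0.10                 # per-hour rates (assumed)
    t_obs = rng.exponential(1 / lam_true, 300) + rng.exponential(1 / mu_true, 300)

    def neg_loglik(theta):
        lam, mu = np.exp(theta)                    # log-parameters keep rates > 0
        f = lam * mu / (mu - lam) * (np.exp(-lam * t_obs) - np.exp(-mu * t_obs))
        return -np.sum(np.log(f))

    res = minimize(neg_loglik, x0=np.log([0.05, 0.15]), method="Nelder-Mead")
    lam_hat, mu_hat = np.exp(res.x)
    print(f"initiation rate = {lam_hat:.3f}/h, delay rate = {mu_hat:.3f}/h")
    ```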

  3. Creating wavelet-based models for real-time synthesis of perceptually convincing environmental sounds

    Science.gov (United States)

    Miner, Nadine Elizabeth

    1998-09-01

    This dissertation presents a new wavelet-based method for synthesizing perceptually convincing, dynamic sounds using parameterized sound models. The sound synthesis method is applicable to a variety of applications including Virtual Reality (VR), multi-media, entertainment, and the World Wide Web (WWW). A unique contribution of this research is the modeling of the stochastic, or non-pitched, sound components. This stochastic-based modeling approach leads to perceptually compelling sound synthesis. Two preliminary studies conducted provide data on multi-sensory interaction and audio-visual synchronization timing. These results contributed to the design of the new sound synthesis method. The method uses a four-phase development process, including analysis, parameterization, synthesis and validation, to create the wavelet-based sound models. A patent is pending for this dynamic sound synthesis method, which provides perceptually-realistic, real-time sound generation. This dissertation also presents a battery of perceptual experiments developed to verify the sound synthesis results. These experiments are applicable for validation of any sound synthesis technique.

  4. GOODNESS-OF-FIT TEST FOR THE ACCELERATED FAILURE TIME MODEL BASED ON MARTINGALE RESIDUALS

    Czech Academy of Sciences Publication Activity Database

    Novák, Petr

    2013-01-01

    Vol. 49, No. 1 (2013), pp. 40-59. ISSN 0023-5954. R&D Projects: GA MŠk(CZ) 1M06047. Grant - others: GA MŠk(CZ) SVV 261315/2011. Keywords: accelerated failure time model * survival analysis * goodness-of-fit. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 0.563, year: 2013. http://library.utia.cas.cz/separaty/2013/SI/novak-goodness-of-fit test for the aft model based on martingale residuals.pdf

  5. Study of Railway Track Irregularity Standard Deviation Time Series Based on Data Mining and Linear Model

    Directory of Open Access Journals (Sweden)

    Jia Chaolong

    2013-01-01

    A good track geometry state ensures the safe operation of railway passenger and freight services. Railway transportation plays an important role in Chinese economic and social development. This paper studies track irregularity standard deviation time series data and focuses on the characteristics and trend changes of track state by applying clustering analysis. A linear recursive model and a linear ARMA model based on wavelet decomposition and reconstruction are proposed, both of which offer support for the safe management of railway transportation.

  6. A GIS-based time-dependent seismic source modeling of Northern Iran

    Science.gov (United States)

    Hashemi, Mahdi; Alesheikh, Ali Asghar; Zolfaghari, Mohammad Reza

    2017-01-01

    The first step in any seismic hazard study is the definition of seismogenic sources and the estimation of magnitude-frequency relationships for each source. There is as yet no standard methodology for source modeling, and many researchers have worked on this topic. This study is an effort to define linear and area seismic sources for Northern Iran. The linear, or fault, sources are developed based on tectonic features and characteristic earthquakes, while the area sources are developed based on the spatial distribution of small to moderate earthquakes. Time-dependent recurrence relationships are developed for fault sources using a renewal approach, while time-independent frequency-magnitude relationships are proposed for area sources based on a Poisson process. GIS functionalities are used in this study to introduce and incorporate spatial-temporal and geostatistical indices in delineating area seismic sources. The proposed methodology is used to model seismic sources for an area of about 500 by 400 kilometers around Tehran. Previous researches and reports are studied to compile an earthquake/fault catalog that is as complete as possible. All events are transformed to a uniform magnitude scale; duplicate events and dependent shocks are removed. The completeness and time distribution of the compiled catalog are taken into account. The proposed area and linear seismic sources, in conjunction with the defined recurrence relationships, can be used to develop a time-dependent probabilistic seismic hazard analysis of Northern Iran.
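
    The renewal (time-dependent) recurrence idea can be sketched in a few lines: given a lognormal inter-event model and the elapsed time since the last characteristic earthquake, the conditional probability of rupture in the next 50 years is P = [F(te+50) - F(te)] / [1 - F(te)]. The mean recurrence, aperiodicity, and elapsed times below are illustrative, not values for any Iranian fault.

    ```python
    # Conditional rupture probability under a lognormal renewal model.
    import numpy as np
    from scipy.stats import lognorm

    mean_recurrence, cov = 300.0, 0.5            # years, coefficient of variation
    sigma = np.sqrt(np.log(1 + cov ** 2))        # lognormal shape from the COV
    scale = mean_recurrence / np.exp(sigma ** 2 / 2)   # so the mean is 300 yr

    F = lambda t: lognorm.cdf(t, s=sigma, scale=scale)
    for te in (100.0, 250.0, 400.0):             # years since the last event
        p50 = (F(te + 50) - F(te)) / (1 - F(te))
        print(f"elapsed {te:4.0f} yr -> P(event in next 50 yr) = {p50:.2f}")
    ```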

  7. Autoregressive-model-based missing value estimation for DNA microarray time series data.

    Science.gov (United States)

    Choong, Miew Keen; Charbit, Maurice; Yan, Hong

    2009-01-01

    Missing value estimation is important in DNA microarray data analysis. A number of algorithms have been developed to solve this problem, but they have several limitations. Most existing algorithms are not able to deal with the situation where a particular time point (column) of the data is missing entirely. In this paper, we present an autoregressive-model-based missing value estimation method (ARLSimpute) that takes into account the dynamic property of microarray temporal data and the local similarity structures in the data. ARLSimpute is especially effective for the situation where a particular time point contains many missing values or where the entire time point is missing. Experiment results suggest that our proposed algorithm is an accurate missing value estimator in comparison with other imputation methods on simulated as well as real microarray time series datasets.
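
    A toy version of the autoregressive step (real ARLSimpute also pools similar genes, which this single-series sketch omits): fit an AR(p) model by least squares on the observed part of a profile, then fill a missing time point from its AR prediction.

    ```python
    # Least-squares AR(p) fit on the observed samples, then AR-based imputation.
    import numpy as np

    rng = np.random.default_rng(5)
    p, n = 3, 60
    x = np.zeros(n)
    for t in range(p, n):                          # simulate an AR(3) profile
        x[t] = 0.6 * x[t-1] - 0.2 * x[t-2] + 0.1 * x[t-3] + rng.normal(0, 0.3)

    missing = 40                                   # suppose x[40] was not measured
    x_obs = x.copy(); x_obs[missing] = np.nan

    rows = [t for t in range(p, n) if not np.isnan(x_obs[t])
            and not np.isnan(x_obs[t-p:t]).any()]
    A = np.array([x_obs[t-p:t][::-1] for t in rows])   # [x_{t-1}, x_{t-2}, x_{t-3}]
    phi, *_ = np.linalg.lstsq(A, x_obs[rows], rcond=None)

    x_hat = phi @ x_obs[missing-p:missing][::-1]       # one-step AR prediction
    print(f"true {x[missing]:+.3f}  imputed {x_hat:+.3f}")
    ```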

  8. A multiscale MDCT image-based breathing lung model with time-varying regional ventilation

    Energy Technology Data Exchange (ETDEWEB)

    Yin, Youbing, E-mail: youbing-yin@uiowa.edu [Department of Mechanical and Industrial Engineering, The University of Iowa, Iowa City, IA 52242 (United States); IIHR-Hydroscience and Engineering, The University of Iowa, Iowa City, IA 52242 (United States); Department of Radiology, The University of Iowa, Iowa City, IA 52242 (United States); Choi, Jiwoong, E-mail: jiwoong-choi@uiowa.edu [Department of Mechanical and Industrial Engineering, The University of Iowa, Iowa City, IA 52242 (United States); IIHR-Hydroscience and Engineering, The University of Iowa, Iowa City, IA 52242 (United States); Hoffman, Eric A., E-mail: eric-hoffman@uiowa.edu [Department of Radiology, The University of Iowa, Iowa City, IA 52242 (United States); Department of Biomedical Engineering, The University of Iowa, Iowa City, IA 52242 (United States); Department of Internal Medicine, The University of Iowa, Iowa City, IA 52242 (United States); Tawhai, Merryn H., E-mail: m.tawhai@auckland.ac.nz [Auckland Bioengineering Institute, The University of Auckland, Auckland (New Zealand); Lin, Ching-Long, E-mail: ching-long-lin@uiowa.edu [Department of Mechanical and Industrial Engineering, The University of Iowa, Iowa City, IA 52242 (United States); IIHR-Hydroscience and Engineering, The University of Iowa, Iowa City, IA 52242 (United States)

    2013-07-01

    A novel algorithm is presented that links local structural variables (regional ventilation and deforming central airways) to global function (total lung volume) in the lung over three imaged lung volumes, to derive a breathing lung model for computational fluid dynamics simulation. The algorithm constitutes the core of an integrative, image-based computational framework for subject-specific simulation of the breathing lung. For the first time, the algorithm is applied to three multi-detector row computed tomography (MDCT) volumetric lung images of the same individual. A key technique in linking global and local variables over multiple images is an in-house mass-preserving image registration method. Throughout breathing cycles, cubic interpolation is employed to ensure C1 continuity in constructing time-varying regional ventilation at the whole lung level, flow rate fractions exiting the terminal airways, and airway deformation. The imaged exit airway flow rate fractions are derived from regional ventilation with the aid of a three-dimensional (3D) and one-dimensional (1D) coupled airway tree that connects the airways to the alveolar tissue. An in-house parallel large-eddy simulation (LES) technique is adopted to capture turbulent-transitional-laminar flows in both normal and deep breathing conditions. The results obtained by the proposed algorithm when using three lung volume images are compared with those using only one or two volume images. The three-volume-based lung model produces physiologically-consistent time-varying pressure and ventilation distribution. The one-volume-based lung model under-predicts pressure drop and yields un-physiological lobar ventilation. The two-volume-based model can account for airway deformation and non-uniform regional ventilation to some extent, but does not capture the non-linear features of the lung.

  9. A multiscale MDCT image-based breathing lung model with time-varying regional ventilation

    Science.gov (United States)

    Yin, Youbing; Choi, Jiwoong; Hoffman, Eric A.; Tawhai, Merryn H.; Lin, Ching-Long

    2012-01-01

    A novel algorithm is presented that links local structural variables (regional ventilation and deforming central airways) to global function (total lung volume) in the lung over three imaged lung volumes, to derive a breathing lung model for computational fluid dynamics simulation. The algorithm constitutes the core of an integrative, image-based computational framework for subject-specific simulation of the breathing lung. For the first time, the algorithm is applied to three multi-detector row computed tomography (MDCT) volumetric lung images of the same individual. A key technique in linking global and local variables over multiple images is an in-house mass-preserving image registration method. Throughout breathing cycles, cubic interpolation is employed to ensure C1 continuity in constructing time-varying regional ventilation at the whole lung level, flow rate fractions exiting the terminal airways, and airway deformation. The imaged exit airway flow rate fractions are derived from regional ventilation with the aid of a three-dimensional (3D) and one-dimensional (1D) coupled airway tree that connects the airways to the alveolar tissue. An in-house parallel large-eddy simulation (LES) technique is adopted to capture turbulent-transitional-laminar flows in both normal and deep breathing conditions. The results obtained by the proposed algorithm when using three lung volume images are compared with those using only one or two volume images. The three-volume-based lung model produces physiologically-consistent time-varying pressure and ventilation distribution. The one-volume-based lung model under-predicts pressure drop and yields un-physiological lobar ventilation. The two-volume-based model can account for airway deformation and non-uniform regional ventilation to some extent, but does not capture the non-linear features of the lung. PMID:23794749

  10. Real-time process optimization based on grey-box neural models

    Directory of Open Access Journals (Sweden)

    F. A. Cubillos

    2007-09-01

    This paper investigates the feasibility of using grey-box neural models (GNM) in Real Time Optimization (RTO). These models are based on a suitable combination of fundamental conservation laws and neural networks, and are used in at least two different ways: to complement available phenomenological knowledge with empirical information, or to reduce the dimensionality of complex rigorous physical models. We have observed that the benefits of using these simple adaptable models are counteracted by some difficulties associated with the solution of the optimization problem. Nonlinear Programming (NLP) algorithms failed to find the global optimum because neural networks can introduce multimodal objective functions. One alternative considered to solve this problem was the use of evolutionary algorithms, such as Genetic Algorithms (GA). Although these algorithms produced better results in terms of finding the appropriate region, they took long periods of time to reach the global optimum. It was found that a combination of genetic and nonlinear programming algorithms can be used to obtain the optimum solution quickly. The proposed approach was applied to the Williams-Otto reactor, considering three different GNM models of increasing complexity. Results demonstrated that the use of GNM models and mixed GA/NLP optimization algorithms is a promising approach for solving dynamic RTO problems.
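
    The hybrid strategy can be illustrated on a generic multimodal objective standing in for the GNM-based Williams-Otto profit: an evolutionary global search locates the right basin, then a gradient-based NLP solver polishes the optimum.

    ```python
    # Evolutionary global stage followed by a local NLP polish, on a
    # Rastrigin-like test function (global minimum at the origin).
    import numpy as np
    from scipy.optimize import differential_evolution, minimize

    def objective(x):                      # many local minima, one global basin
        return np.sum(x ** 2) + 2.0 * np.sum(1 - np.cos(3 * np.pi * x))

    bounds = [(-3, 3), (-3, 3)]
    coarse = differential_evolution(objective, bounds, maxiter=50, seed=0,
                                    polish=False)            # evolutionary stage
    fine = minimize(objective, coarse.x, method="L-BFGS-B", bounds=bounds)
    print("evolutionary stage:", np.round(coarse.x, 3), f"f = {coarse.fun:.4f}")
    print("after local NLP:   ", np.round(fine.x, 5), f"f = {fine.fun:.6f}")
    ```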

  11. Time delay and profit accumulation effect on a mine-based uranium market clearing model

    International Nuclear Information System (INIS)

    Auzans, Aris; Teder, Allan; Tkaczyk, Alan H.

    2016-01-01

    Highlights: • An improved version of a mine-based uranium market clearing model for the front-end uranium market and enrichment industries is proposed. • A profit accumulation algorithm and a time-delay function provide a more realistic uranium mine decision-making process. • Operational decision delay increases uranium market price volatility. - Abstract: The mining industry faces a number of challenges, such as market volatility, investment safety, and issues surrounding employment and productivity. Computer simulations are therefore highly relevant for reducing the financial risks associated with these challenges. In the mining industry, each firm must compete with other mines, and the basic target is profit maximization. The aim of this paper is to evaluate the world uranium (U) supply by simulating the financial management challenges faced by an individual U mine that are caused by a variety of regulation issues. In this paper, a front-end nuclear fuel cycle tool is used to simulate market conditions and the effects they have on the stability of the U supply. An individual U mine's exit from or entry into the market might cause changes on the U supply side, which can increase or decrease the market price. This paper offers a more advanced version of a mine-based U market clearing model. The existing U market model incorporates the market for primary U from uranium mines with secondary uranium (depleted uranium, DU), highly enriched uranium (HEU) and enrichment services. In the model, each uranium mine acts as an independent agent that is able to make operational decisions based on the market price. This paper introduces a more realistic decision-making algorithm for an individual U mine that adds constraints to production decisions. The authors added an accumulated profit model, which allows the accumulated profits to cover possible future economic losses, and a time-delay algorithm to simulate the delayed process of reopening a U mine. The U market simulation covers time period 2010

  12. Time delay and profit accumulation effect on a mine-based uranium market clearing model

    Energy Technology Data Exchange (ETDEWEB)

    Auzans, Aris [Institute of Physics, University of Tartu, Ostwaldi 1, EE-50411 Tartu (Estonia); Teder, Allan [School of Economics and Business Administration, University of Tartu, Narva mnt 4, EE-51009 Tartu (Estonia); Tkaczyk, Alan H., E-mail: alan@ut.ee [Institute of Physics, University of Tartu, Ostwaldi 1, EE-50411 Tartu (Estonia)

    2016-12-15

    Highlights: • An improved version of a mine-based uranium market clearing model for the front-end uranium market and enrichment industries is proposed. • A profit accumulation algorithm and a time-delay function provide a more realistic uranium mine decision-making process. • Operational decision delay increases uranium market price volatility. - Abstract: The mining industry faces a number of challenges, such as market volatility, investment safety, and issues surrounding employment and productivity. Computer simulations are therefore highly relevant for reducing the financial risks associated with these challenges. In the mining industry, each firm must compete with other mines, and the basic target is profit maximization. The aim of this paper is to evaluate the world uranium (U) supply by simulating the financial management challenges faced by an individual U mine that are caused by a variety of regulation issues. In this paper, a front-end nuclear fuel cycle tool is used to simulate market conditions and the effects they have on the stability of the U supply. An individual U mine's exit from or entry into the market might cause changes on the U supply side, which can increase or decrease the market price. This paper offers a more advanced version of a mine-based U market clearing model. The existing U market model incorporates the market for primary U from uranium mines with secondary uranium (depleted uranium, DU), highly enriched uranium (HEU) and enrichment services. In the model, each uranium mine acts as an independent agent that is able to make operational decisions based on the market price. This paper introduces a more realistic decision-making algorithm for an individual U mine that adds constraints to production decisions. The authors added an accumulated profit model, which allows the accumulated profits to cover possible future economic losses, and a time-delay algorithm to simulate the delayed process of reopening a U mine. The U market simulation covers time period 2010
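
    A schematic of the decision constraints described above (not the authors' model): each mine agent compares price with its unit cost, banks profit while operating, draws on the accumulated profit to survive loss-making periods, and must wait a fixed delay before reopening. All numbers are invented.

    ```python
    # Toy mine agent with profit accumulation and a reopening time delay.
    class MineAgent:
        def __init__(self, capacity, unit_cost, reopen_delay):
            self.capacity, self.unit_cost = capacity, unit_cost
            self.reopen_delay, self.wait = reopen_delay, 0
            self.open, self.profit_bank = True, 0.0

        def step(self, price):
            if not self.open:                      # closed: count down the delay
                self.wait -= 1
                if self.wait <= 0 and price > self.unit_cost:
                    self.open = True
                return 0.0
            margin = (price - self.unit_cost) * self.capacity
            self.profit_bank += margin             # losses are paid from the bank
            if self.profit_bank < 0:               # bank exhausted -> exit market
                self.open, self.wait, self.profit_bank = False, self.reopen_delay, 0.0
                return 0.0
            return self.capacity

    mine = MineAgent(capacity=1000, unit_cost=40.0, reopen_delay=3)
    for year, price in enumerate([50, 35, 30, 30, 55, 60], start=2010):
        supplied = mine.step(price)
        print(year, f"price {price:>2}  supplied {supplied:>6.0f}  "
                    f"bank {mine.profit_bank:>8.0f}  open={mine.open}")
    ```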

  13. Accounting for detectability in fish distribution models: an approach based on time-to-first-detection

    Directory of Open Access Journals (Sweden)

    Mário Ferreira

    2015-12-01

    Imperfect detection (i.e., failure to detect a species when it is present) is increasingly recognized as an important source of uncertainty and bias in species distribution modeling. Although methods have been developed to solve this problem by explicitly incorporating variation in detectability into the modeling procedure, their use in freshwater systems remains limited. This is probably because most methods imply repeated sampling (≥2) of each location within a short time frame, which may be impractical or too expensive in most studies. Here we explore a novel approach to control for detectability based on time-to-first-detection, which requires only a single sampling occasion and so may find more general applicability in freshwaters. The approach uses a Bayesian framework to combine conventional occupancy modeling with techniques borrowed from parametric survival analysis, jointly modeling the factors affecting the probability of occupancy and the time required to detect a species. To illustrate the method, we modeled large-scale factors (elevation, stream order and precipitation) affecting the distribution of six fish species in a catchment located in north-eastern Portugal, while accounting for factors potentially affecting detectability at sampling points (stream depth and width). Species detectability was most influenced by depth, and to a lesser extent by stream width, and tended to increase over time for most species. Occupancy was consistently affected by stream order, elevation, and annual precipitation. The species presented widespread distributions, with higher uncertainty in tributaries and upper stream reaches. This approach can be used to estimate sampling efficiency and provides a practical framework to incorporate variation in detection rates into fish distribution models.
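
    A single-visit sketch of the time-to-first-detection likelihood, with covariates omitted for brevity: occupancy psi and an exponential detection rate lam are estimated jointly from one survey of length Tmax per site; detected sites contribute psi*lam*exp(-lam*t), undetected sites psi*exp(-lam*Tmax) + (1 - psi). Data are simulated, and maximum likelihood stands in for the paper's Bayesian fit.

    ```python
    # Joint MLE of occupancy (psi) and detection rate (lam) from one visit/site.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import expit

    rng = np.random.default_rng(6)
    n_sites, Tmax = 200, 30.0                       # e.g. minutes of sampling
    psi_true, lam_true = 0.6, 0.08
    occupied = rng.random(n_sites) < psi_true
    t = np.where(occupied, rng.exponential(1 / lam_true, n_sites), np.inf)
    detected = t <= Tmax                            # first detection within survey

    def neg_loglik(theta):
        psi, lam = expit(theta[0]), np.exp(theta[1])   # psi in (0,1), lam > 0
        ll_det = np.log(psi * lam) - lam * t[detected]
        ll_non = np.log(psi * np.exp(-lam * Tmax) + 1 - psi)
        return -(ll_det.sum() + (~detected).sum() * ll_non)

    res = minimize(neg_loglik, x0=[0.0, np.log(0.05)], method="Nelder-Mead")
    print(f"psi_hat = {expit(res.x[0]):.2f}, lambda_hat = {np.exp(res.x[1]):.3f}")
    ```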

  14. Time-domain modeling for shielding effectiveness of materials against electromagnetic pulse based on system identification

    International Nuclear Information System (INIS)

    Chen, Xiang; Chen, Yong Guang; Wei, Ming; Hu, Xiao Feng

    2013-01-01

    The shielding effectiveness (SE) of materials against electromagnetic pulse (EMP) cannot be well estimated by traditional SE test methods, which consider only the amplitude-frequency characteristics of materials and ignore the phase-frequency ones. To solve this problem, a model of the SE of materials against EMP was established based on a system identification (SI) method with a time-domain linear cosine frequency-sweep signal. The feasibility of the method was examined using an infinite planar material and simulation studies of the coaxial test method and a windowed semi-anechoic box. The results show that the amplitude-frequency and phase-frequency information at each frequency can be fully extracted with this method. The SE of materials against a strong EMP can be evaluated with a low-field-strength (voltage) time-domain cosine frequency-sweep signal, and the SE of materials against a variety of EMPs can be predicted by the model.

  15. Agent-Based Modeling of Day-Ahead Real Time Pricing in a Pool-Based Electricity Market

    Directory of Open Access Journals (Sweden)

    Sh. Yousefi

    2011-09-01

    In this paper, an agent-based structure of the electricity retail market is presented, based on which day-ahead (DA) energy procurement for customers is modeled. Here, we focus on the operation of a single Retail Energy Provider (REP) agent, which purchases energy from the DA pool-based wholesale market and offers DA real-time tariffs to a group of its customers. As a model of customer response to the offered real-time prices, an hourly acceptance function is proposed to represent the hourly changes in the customers' effective demand according to the prices. Here, a Q-learning (QL) approach is applied to day-ahead real-time pricing for the customers, enabling the REP agent to discover which price yields the most benefit through a trial-and-error search. Numerical studies are presented based on New England day-ahead market data, comparing the results of QL-based RTP with those of genetic-based pricing.
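
    A stripped-down sketch of the learning loop: tabular Q-learning over a discrete set of retail price levels, with reward equal to the retail margin scaled by a price-sensitive acceptance function. The acceptance curve, wholesale price, and learning constants are assumptions, not the paper's parameters.

    ```python
    # Single-state (bandit-style) Q-learning of the most profitable retail offer.
    import numpy as np

    rng = np.random.default_rng(7)
    prices = np.array([60, 80, 100, 120, 140])    # $/MWh retail offers (actions)
    wholesale, base_load = 70.0, 50.0             # DA purchase cost, MWh

    def acceptance(p):                            # customers' response to the price
        return 1.0 / (1.0 + np.exp((p - 100.0) / 15.0))

    Q, eps, alpha = np.zeros(len(prices)), 0.2, 0.1
    for episode in range(5000):
        a = rng.integers(len(prices)) if rng.random() < eps else int(np.argmax(Q))
        load = base_load * acceptance(prices[a]) * rng.uniform(0.9, 1.1)
        reward = (prices[a] - wholesale) * load   # retail margin on accepted demand
        Q[a] += alpha * (reward - Q[a])           # incremental value update
    print("learned values:", np.round(Q, 1),
          "-> best offer:", prices[int(np.argmax(Q))], "$/MWh")
    ```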

  16. Impedance based time-domain modeling of lithium-ion batteries: Part I

    Science.gov (United States)

    Gantenbein, Sophia; Weiss, Michael; Ivers-Tiffée, Ellen

    2018-03-01

    This paper presents a novel lithium-ion cell model, which simulates the current-voltage characteristic as a function of state of charge (0%-100%) and temperature (0-30 °C). It predicts the cell voltage at each operating point by calculating the total overvoltage from the individual contributions of (i) the ohmic loss η0, (ii) the charge transfer loss of the cathode ηCT,C, (iii) the charge transfer loss and the solid electrolyte interface loss of the anode ηSEI/CT,A, and (iv) the solid-state and electrolyte diffusion loss ηDiff,A/C/E. This approach is based on a physically meaningful equivalent circuit model, which is parametrized by electrochemical impedance spectroscopy and time-domain measurements, covering a wide frequency range from MHz to μHz. As an example, the model is parametrized to a commercial, high-power 350 mAh graphite/LiNiCoAlO2-LiCoO2 pouch cell and validated by continuous discharge and charge curves at varying temperature. For the first time, the physical background of the model allows the operator to draw conclusions about the performance-limiting factor at various operating conditions. Not only can the model help to choose application-optimized cell characteristics, but it can also support the battery management system when taking corrective actions during operation.
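
    A generic equivalent-circuit sketch of the "sum of overvoltages" idea: an ohmic drop plus two RC branches (charge transfer, diffusion) subtracted from an OCV curve during a constant-current discharge. The parameter values and linear OCV curve are placeholders, not the identified pouch-cell model.

    ```python
    # Minimal ECM time-domain simulation: V = OCV(SoC) - I*R0 - sum of RC drops.
    Q = 0.35 * 3600                 # capacity, As (350 mAh)
    R0 = 0.05                       # ohmic resistance, ohm (assumed)
    branches = [(0.03, 50.0), (0.08, 2000.0)]   # (R, tau) for CT and diffusion
    ocv = lambda soc: 3.0 + 1.2 * soc           # toy open-circuit voltage curve

    I, dt = 0.35, 1.0               # 1C discharge current (A), 1 s time steps
    soc, v_rc = 1.0, [0.0, 0.0]
    for step in range(int(Q / I / dt)):
        soc -= I * dt / Q                                   # coulomb counting
        for j, (R, tau) in enumerate(branches):             # RC branch relaxation
            v_rc[j] += dt * (I * R - v_rc[j]) / tau
        v = ocv(soc) - I * R0 - sum(v_rc)                   # total cell voltage
        if step % 900 == 0:
            print(f"t={step:5d}s  SoC={soc:5.2f}  V={v:.3f} V")
    ```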

  17. Non-linear time variant model intended for polypyrrole-based actuators

    Science.gov (United States)

    Farajollahi, Meisam; Madden, John D. W.; Sassani, Farrokh

    2014-03-01

    Polypyrrole-based actuators are of interest due to their biocompatibility, low operating voltage, and relatively high strain and force. Modeling and simulation are very important for predicting the behaviour of each actuator. To develop an accurate model, we need to know the electro-chemo-mechanical properties of polypyrrole. In this paper, a non-linear time-variant model of polypyrrole film is derived and proposed using a combination of an RC transmission line model and a state-space representation. The model incorporates potential-dependent ionic conductivity. A function of the ionic conductivity of polypyrrole vs. local charge is proposed and implemented in the non-linear model. Matching of the measured and simulated electrical responses suggests that the ionic conductivity of polypyrrole decreases significantly at negative potentials vs. silver/silver chloride, which leads to reduced current in cyclic voltammetry (CV) tests. The next stage is to relate the distributed charging of the polymer to actuation via the strain-to-charge ratio. Further work is also needed to identify the ionic and electronic conductivities, as well as the capacitance, as functions of oxidation state so that a fully predictive model can be created.

  18. A Fault Prognosis Strategy Based on Time-Delayed Digraph Model and Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Ningyun Lu

    2012-01-01

    Because of the interlinking of process equipment in the process industry, event information may propagate through the plant and affect many downstream process variables. Specifying causality and estimating time delays among process variables are critically important for data-driven fault prognosis. They are helpful not only for finding the root cause when a plant-wide disturbance occurs, but also for revealing the evolution of an abnormal event propagating through the plant. This paper addresses the information flow directionality and time-delay estimation problems in the process industry and presents an information synchronization technique to assist fault prognosis. Time-delayed mutual information (TDMI) is used for both causality analysis and time-delay estimation. To represent the causality structure of high-dimensional process variables, a time-delayed signed digraph (TD-SDG) model is developed. Then, a general fault prognosis strategy is developed based on the TD-SDG model and principal component analysis (PCA). The proposed method is applied to an air separation unit and achieves satisfying results in predicting the frequently occurring "nitrogen-block" fault.
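
    The TDMI step is easy to sketch: mutual information between x(t) and y(t + d) is computed from a two-dimensional histogram for each candidate delay d, and the delay maximizing MI is taken as the propagation time from x to y. The coupled signals below are synthetic.

    ```python
    # Time-delayed mutual information (TDMI) for lag estimation between signals.
    import numpy as np

    rng = np.random.default_rng(8)
    n, true_delay = 5000, 12
    x = rng.standard_normal(n).cumsum()           # slow "upstream" variable
    y = np.roll(x, true_delay) + 0.5 * rng.standard_normal(n)   # delayed + noisy

    def mutual_info(a, b, bins=16):
        pxy, _, _ = np.histogram2d(a, b, bins=bins)
        pxy = pxy / pxy.sum()
        px, py = pxy.sum(axis=1, keepdims=True), pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

    delays = np.arange(0, 40)
    tdmi = [mutual_info(x[:n - d], y[d:]) for d in delays]
    print("estimated delay:", delays[int(np.argmax(tdmi))],
          "samples (true:", true_delay, ")")
    ```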

  19. Time Domain Analysis of Graphene Nanoribbon Interconnects Based on Transmission Line Model

    Directory of Open Access Journals (Sweden)

    S. Haji Nasiri

    2012-03-01

    Time-domain analysis of multilayer graphene nanoribbon (MLGNR) interconnects, based on transmission line modeling (TLM) using a sixth-order linear parametric expression, is presented for the first time. We study the effects of interconnect geometry, along with its contact resistance, on its step response and Nyquist stability. It is shown that by increasing interconnect dimensions, propagation delays increase and the system accordingly becomes relatively more stable. In addition, we compare the time responses and Nyquist stabilities of MLGNR and SWCNT bundle interconnects with the same external dimensions. The results show that, under the same conditions, the propagation delays of MLGNR interconnects are smaller than those of SWCNT bundle interconnects. Hence, SWCNT bundle interconnects are relatively more stable than their MLGNR rivals.

  20. Real-Time Model-Based Leak-Through Detection within Cryogenic Flow Systems

    Science.gov (United States)

    Walker, M.; Figueroa, F.

    2015-01-01

    The timely detection of leaks within cryogenic fuel replenishment systems is of significant importance to operators on account of the safety and economic impacts associated with material loss and operational inefficiencies. The associated loss of pressure control also affects the stability and the ability to control the phase of cryogenic fluids during replenishment operations. Current research dedicated to providing Prognostics and Health Management (PHM) coverage of such cryogenic replenishment systems has focused on the detection of leaks to atmosphere involving relatively simple model-based diagnostic approaches that, while effective, are unable to isolate the fault to specific piping system components. The authors have extended this research to focus on the detection of leaks through closed valves that are intended to isolate sections of the piping system from the flow and pressurization of cryogenic fluids. The described approach employs model-based detection of leak-through conditions based on correlations of pressure changes across isolation valves and attempts to isolate the faults to specific valves. Implementation of this capability is enabled by knowledge and information embedded in the domain model of the system. The approach has been used effectively to detect such leak-through faults during cryogenic operational testing at the Cryogenic Testbed at NASA's Kennedy Space Center.

  1. The Trick Simulation Toolkit: A NASA/Opensource Framework for Running Time Based Physics Models

    Science.gov (United States)

    Penn, John M.

    2016-01-01

    The Trick Simulation Toolkit is a simulation development environment used to create high fidelity training and engineering simulations at the NASA Johnson Space Center and many other NASA facilities. Its purpose is to generate a simulation executable from a collection of user-supplied models and a simulation definition file. For each Trick-based simulation, Trick automatically provides job scheduling, numerical integration, the ability to write and restore human readable checkpoints, data recording, interactive variable manipulation, a run-time interpreter, and many other commonly needed capabilities. This allows simulation developers to concentrate on their domain expertise and the algorithms and equations of their models. Also included in Trick are tools for plotting recorded data and various other supporting utilities and libraries. Trick is written in C/C++ and Java and supports both Linux and MacOSX computer operating systems. This paper describes Trick's design and use at NASA Johnson Space Center.

  2. An approach to the drone fleet survivability assessment based on a stochastic continuous-time model

    Science.gov (United States)

    Kharchenko, Vyacheslav; Fesenko, Herman; Doukas, Nikos

    2017-09-01

    An approach and algorithm for drone fleet survivability assessment based on a stochastic continuous-time model are proposed. The input data are the number of drones, the drone fleet redundancy coefficient, the drone stability and restoration rate, the limit deviation from the norms of drone fleet recovery, the drone fleet operational availability coefficient, the probability of failure-free drone operation, and the time needed for the drone fleet to perform the required tasks. Ways of improving the survivability of a recoverable drone fleet, taking into account adverse factors of system accidents, are suggested. Dependencies of the drone fleet survivability rate on both the drone stability and the number of drones are analysed.

  3. Kernel based methods for accelerated failure time model with ultra-high dimensional data

    Directory of Open Access Journals (Sweden)

    Jiang Feng

    2010-12-01

    Full Text Available Abstract Background Most genomic data have ultra-high dimensions, with more than 10,000 genes (probes). Regularization methods with L1 and Lp penalties have been extensively studied in survival analysis with high-dimensional genomic data. However, when the sample size n ≪ m (the number of genes), directly identifying a small subset of genes from ultra-high (m > 10,000) dimensional data is time-consuming and not computationally efficient. In current microarray analysis, the usual practice is to select a couple of thousand (or hundred) genes using univariate analysis or statistical tests, and then apply a LASSO-type penalty to further reduce the number of disease-associated genes. This two-step procedure may introduce bias and inaccuracy and cause biologically important genes to be missed. Results The accelerated failure time (AFT) model is a linear regression model and a useful alternative to the Cox model for survival analysis. In this paper, we propose a nonlinear kernel-based AFT model and an efficient variable selection method with adaptive kernel ridge regression. Our proposed variable selection method is based on the kernel matrix and the dual problem, with a much smaller n × n matrix. It is very efficient when the number of unknown variables (genes) is much larger than the number of samples. Moreover, the primal variables are explicitly updated and the sparsity in the solution is exploited. Conclusions Our proposed methods can simultaneously identify survival-associated prognostic factors and predict survival outcomes with ultra-high dimensional genomic data. We have demonstrated the performance of our methods with both simulated and real data. In our computational studies, though limited in scope, the proposed method performed superbly.
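
    The computational point, that the dual (kernel) formulation works through an n × n Gram matrix no matter how many genes there are, can be illustrated with off-the-shelf kernel ridge regression on log survival times (the AFT model is linear in log time). This sketch uses synthetic data and ignores censoring, which the paper's method handles:

        import numpy as np
        from sklearn.kernel_ridge import KernelRidge

        rng = np.random.default_rng(0)
        n, m = 100, 20000                       # n samples, m >> n genes
        X = rng.standard_normal((n, m))
        t = np.exp(X[:, :5].sum(axis=1) + 0.1 * rng.standard_normal(n))

        # All heavy computation goes through the n x n kernel matrix, so the
        # cost is governed by the sample size, not the 20,000 input dimensions.
        model = KernelRidge(alpha=1.0, kernel="rbf", gamma=1.0 / m)
        model.fit(X, np.log(t))
        predicted_log_time = model.predict(X)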

  4. Real-time physics-based 3D biped character animation using an inverted pendulum model.

    Science.gov (United States)

    Tsai, Yao-Yang; Lin, Wen-Chieh; Cheng, Kuangyou B; Lee, Jehee; Lee, Tong-Yee

    2010-01-01

    We present a physics-based approach to generate 3D biped character animation that can react to dynamical environments in real time. Our approach utilizes an inverted pendulum model to online adjust the desired motion trajectory from the input motion capture data. This online adjustment produces a physically plausible motion trajectory adapted to dynamic environments, which is then used as the desired motion for the motion controllers to track in dynamics simulation. Rather than using Proportional-Derivative controllers whose parameters usually cannot be easily set, our motion tracking adopts a velocity-driven method which computes joint torques based on the desired joint angular velocities. Physically correct full-body motion of the 3D character is computed in dynamics simulation using the computed torques and dynamical model of the character. Our experiments demonstrate that tracking motion capture data with real-time response animation can be achieved easily. In addition, physically plausible motion style editing, automatic motion transition, and motion adaptation to different limb sizes can also be generated without difficulty.
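
    A minimal sketch of the two control ideas named above, online trajectory adjustment toward an upright reference and velocity-driven torque computation, on a one-degree-of-freedom inverted pendulum (all gains and parameters invented for illustration; the paper's character model and controllers are far richer):

        import numpy as np

        def velocity_driven_torque(omega, omega_des, kd=40.0):
            # Torque computed from the desired joint angular velocity: no
            # stiffness/damping pair to tune, unlike a PD position servo.
            return kd * (omega_des - omega)

        def simulate(theta0=0.3, steps=2000, dt=1e-3, g=9.81, length=0.9):
            theta, omega = theta0, 0.0          # lean angle and angular velocity
            for _ in range(steps):
                omega_des = -5.0 * theta        # adjusted trajectory: drive lean to zero
                tau = velocity_driven_torque(omega, omega_des)
                alpha = (g / length) * np.sin(theta) + tau   # unit rotational inertia
                omega += dt * alpha
                theta += dt * omega
            return theta, omega                 # approaches (0, 0) when balanced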

  5. Grey-Theory-Based Optimization Model of Emergency Logistics Considering Time Uncertainty.

    Science.gov (United States)

    Qiu, Bao-Jian; Zhang, Jiang-Hua; Qi, Yuan-Tao; Liu, Yang

    2015-01-01

    Natural disasters occur frequently in recent years, causing huge casualties and property losses. Nowadays, people pay more and more attention to the emergency logistics problems. This paper studies the emergency logistics problem with multi-center, multi-commodity, and single-affected-point. Considering that the path near the disaster point may be damaged, the information of the state of the paths is not complete, and the travel time is uncertainty, we establish the nonlinear programming model that objective function is the maximization of time-satisfaction degree. To overcome these drawbacks: the incomplete information and uncertain time, this paper firstly evaluates the multiple roads of transportation network based on grey theory and selects the reliable and optimal path. Then simplify the original model under the scenario that the vehicle only follows the optimal path from the emergency logistics center to the affected point, and use Lingo software to solve it. The numerical experiments are presented to show the feasibility and effectiveness of the proposed method.

  6. Time series segmentation: a new approach based on Genetic Algorithm and Hidden Markov Model

    Science.gov (United States)

    Toreti, A.; Kuglitsch, F. G.; Xoplaki, E.; Luterbacher, J.

    2009-04-01

    The subdivision of a time series into homogeneous segments has been performed using various methods applied in different disciplines. In climatology, for example, it is tied to the well-known homogenization problem and the detection of artificial change points. In this context, we present a new method (GAMM) based on a Hidden Markov Model (HMM) and a Genetic Algorithm (GA), applicable to series of independent observations (and easily adaptable to autoregressive processes). A left-to-right hidden Markov model was applied, estimating the parameters and the best-state sequence with the Baum-Welch and Viterbi algorithms, respectively. To avoid the well-known dependence of the Baum-Welch algorithm on the initial condition, a Genetic Algorithm was developed, characterized by mutation, elitism and a crossover procedure implemented with some restrictive rules. Moreover, the function to be minimized was derived following the approach of Kehagias (2004), i.e. the so-called complete log-likelihood. The number of states was determined by applying a two-fold cross-validation procedure (Celeux and Durand, 2008). Since this choice is complex and influences the whole analysis, a Multi-Response Permutation Procedure (MRPP; Mielke et al., 1981) was added: it tests the model with K+1 states (where K is the number of states of the best model) whenever its likelihood is close to that of the K-state model. Finally, an evaluation of GAMM's performance, applied as a break detection method in the homogenization of climate time series, is shown. References: G. Celeux and J.B. Durand, Comput Stat, 2008; A. Kehagias, Stoch Envir Res, 2004; P.W. Mielke, K.J. Berry and G.W. Brier, Monthly Wea Rev, 1981.
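
    The authors' GA exists to escape the Baum-Welch algorithm's sensitivity to initialization. The sketch below substitutes plain random restarts for that GA (and omits the left-to-right constraint), using the hmmlearn library, purely to make the segmentation workflow concrete; it is not the GAMM method itself:

        import numpy as np
        from hmmlearn.hmm import GaussianHMM

        def segment(series, n_states, n_restarts=20):
            """Best-of-restarts Baum-Welch fit; Viterbi decoding yields segments."""
            X = series.reshape(-1, 1)
            best_model, best_ll = None, -np.inf
            for seed in range(n_restarts):
                m = GaussianHMM(n_components=n_states, n_iter=200, random_state=seed)
                m.fit(X)
                ll = m.score(X)
                if ll > best_ll:
                    best_model, best_ll = m, ll
            return best_model.predict(X)   # state sequence; jumps mark change points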

  7. Model-based framework for multi-axial real-time hybrid simulation testing

    Science.gov (United States)

    Fermandois, Gaston A.; Spencer, Billie F.

    2017-10-01

    Real-time hybrid simulation is an efficient and cost-effective dynamic testing technique for performance evaluation of structural systems subjected to earthquake loading with rate-dependent behavior. A loading assembly with multiple actuators is required to impose realistic boundary conditions on physical specimens. However, such a testing system is expected to exhibit significant dynamic coupling of the actuators and suffer from time lags that are associated with the dynamics of the servo-hydraulic system, as well as control-structure interaction (CSI). One approach to reducing experimental errors considers a multi-input, multi-output (MIMO) controller design, yielding accurate reference tracking and noise rejection. In this paper, a framework for multi-axial real-time hybrid simulation (maRTHS) testing is presented. The methodology employs a real-time feedback-feedforward controller for multiple actuators commanded in Cartesian coordinates. Kinematic transformations between actuator space and Cartesian space are derived for all six degrees of freedom of the moving platform. Then, a frequency domain identification technique is used to develop an accurate MIMO transfer function of the system. Further, a Cartesian-domain model-based feedforward-feedback controller is implemented for time lag compensation and to increase the robustness of the reference tracking for given model uncertainty. The framework is implemented using the 1/5th-scale Load and Boundary Condition Box (LBCB) located at the University of Illinois at Urbana-Champaign. To demonstrate the efficacy of the proposed methodology, a single-story frame subjected to earthquake loading is tested. One of the columns in the frame is represented physically in the laboratory as a cantilevered steel column. For real-time execution, the numerical substructure, kinematic transformations, and controllers are implemented on a digital signal processor. Results show excellent performance of the maRTHS framework when six

  8. An advection-based model to increase the temporal resolution of PIV time series.

    Science.gov (United States)

    Scarano, Fulvio; Moore, Peter

    A numerical implementation of the advection equation is proposed to increase the temporal resolution of PIV time series. The method is based on the principle that velocity fluctuations are transported passively, similar to Taylor's hypothesis of frozen turbulence. In the present work, the advection model is extended to unsteady three-dimensional flows. The main objective of the method is to lower the requirement on the PIV repetition rate from the Eulerian frequency toward the Lagrangian one. The local trajectory of a fluid parcel is obtained by forward projection of the instantaneous velocity at the preceding time instant and backward projection from the subsequent time step. The trajectories are approximated by the instantaneous streamlines, which yields accurate results when the amplitude of the velocity fluctuations is small with respect to the convective motion. Verification is performed with two experiments conducted at temporal resolutions significantly higher than that dictated by the Nyquist criterion. The flow past the trailing edge of a NACA0012 airfoil closely approximates frozen turbulence, where the largest ratio between the Lagrangian and Eulerian temporal scales is expected. An order-of-magnitude reduction of the needed acquisition frequency is demonstrated by the velocity spectra of super-sampled series. The application to three-dimensional data is made with time-resolved tomographic PIV measurements of a transitional jet. Here, the 3D advection equation is implemented to estimate the fluid trajectories. The reduction in the minimum sampling rate by the use of super-sampling is smaller in this case, because vortices occurring in the jet shear layer are not well approximated by sole advection at large time separations. Both cases reveal that the current requirements for time-resolved PIV experiments can be revised when information is poured from space to time. An additional favorable effect is observed by the analysis in the
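
    The forward-projection half of the scheme is easy to sketch: under the frozen-turbulence assumption, the field a fraction of an interval after a snapshot is the snapshot advected along its own instantaneous streamlines. A first-order 2-D illustration (velocities in grid units per frame; the paper additionally blends a backward projection from the next snapshot):

        import numpy as np
        from scipy.ndimage import map_coordinates

        def supersample(u, v, frac):
            """Advect a 2-D velocity snapshot a fraction `frac` of the
            inter-frame interval forward: u(x, t + dt) ~ u(x - u*dt, t)."""
            ny, nx = u.shape
            yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
            coords = np.array([yy - frac * v, xx - frac * u])   # back-traced origins
            u_new = map_coordinates(u, coords, order=1, mode="nearest")
            v_new = map_coordinates(v, coords, order=1, mode="nearest")
            return u_new, v_new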

  9. The "Carbon Data Explorer": Web-Based Space-Time Visualization of Modeled Carbon Fluxes

    Science.gov (United States)

    Billmire, M.; Endsley, K. A.

    2014-12-01

    The visualization of and scientific "sense-making" from large datasets varying in both space and time is a challenge, one that is still being addressed in a number of different fields. The approaches taken thus far are often specific to a given academic field due to the unique questions that arise in different disciplines; however, basic approaches such as geographic maps and time series plots are still widely useful. The proliferation of model estimates of increasing size and resolution further complicates what ought to be a simple workflow: model some geophysical phenomenon, obtain results and measure uncertainty, organize and display the data, make comparisons across trials, and share findings. A new tool is in development that is intended to help scientists with the latter parts of that workflow. The tentatively titled "Carbon Data Explorer" (http://spatial.mtri.org/flux-client/) enables users to access carbon science and related spatio-temporal science datasets over the web. All that is required to access multiple interactive visualizations of carbon science datasets is a compatible web browser and an internet connection. While the application targets atmospheric and climate science datasets, particularly spatio-temporal model estimates of carbon products, the software architecture takes an agnostic approach to the data to be visualized. Any atmospheric, biophysical, or geophysical quantity that varies in space and time, including one or more measures of uncertainty, can be visualized within the application. Within the web application, users have seamless control over a flexible and consistent symbology for map-based visualizations and plots. Where time series data are represented by one or more data "frames" (e.g., a map), users can animate the data. In the "coordinated view," users can make direct comparisons between different frames and different models or model runs, facilitating intermodel comparisons and assessments of spatio-temporal variability. Map

  10. Applications For Real Time NOMADS At NCEP To Disseminate NOAA's Operational Model Data Base

    Science.gov (United States)

    Alpert, J. C.; Wang, J.; Rutledge, G.

    2007-05-01

    A wide range of environmental information, in digital form, with metadata descriptions and supporting infrastructure, is contained in the NOAA Operational Modeling Archive Distribution System (NOMADS) and its Real Time (RT) project prototype at the National Centers for Environmental Prediction (NCEP). NOMADS is now delivering on its goal of a seamless framework, from archival to real-time data dissemination, for NOAA's operational model data holdings. A process is under way to make NOMADS part of NCEP's operational production of products. A goal is to foster collaborations among the research and education communities, value-added retailers, and public access for science and development. In the National Research Council's "Completing the Forecast", Recommendation 3.4 states: "NOMADS should be maintained and extended to include (a) long-term archives of the global and regional ensemble forecasting systems at their native resolution, and (b) re-forecast datasets to facilitate post-processing." As one of many participants in NOMADS, NCEP serves the operational model data base using the Data Access Protocol (OPeNDAP, also known as DODS) and other services that let participants serve their data sets and users obtain them. Using the NCEP global ensemble data as an example, we show an OPeNDAP client application that provides a request-and-fulfill mechanism for access to the complex ensemble matrix of holdings. As an example of the DAP service, we show a client application which accesses the Global or Regional Ensemble data set to produce user-selected weather element event probabilities. The event probabilities are easily extended over model forecast time to show probability histograms defining the future trend of user-selected events. This approach ensures an efficient use of computer resources because users transmit only the data necessary for their tasks. Data sets are served by OPeNDAP, allowing commercial clients such as MATLAB or IDL as well as freeware clients
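
    For concreteness, this is what request-and-fulfill OPeNDAP access looks like from a Python client using the pydap library; the dataset URL and variable names below are illustrative placeholders, not a guaranteed live endpoint:

        from pydap.client import open_url

        # Opening a dataset exchanges only metadata; no bulk download happens.
        ds = open_url("https://nomads.ncep.noaa.gov/dods/gefs/...")  # placeholder path

        tmp = ds["tmp2m"]                 # hypothetical 2-m temperature variable
        # Slicing is fulfilled server-side, so only the requested ensemble/time/
        # space slab crosses the network -- the efficiency point made above.
        slab = tmp[:, 0, 100:110, 200:210]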

  11. Modeling pollen time series using seasonal-trend decomposition procedure based on LOESS smoothing.

    Science.gov (United States)

    Rojo, Jesús; Rivero, Rosario; Romero-Morte, Jorge; Fernández-González, Federico; Pérez-Badia, Rosa

    2017-02-01

    Analysis of airborne pollen concentrations provides valuable information on plant phenology and is thus a useful tool in agriculture (for predicting harvests in crops such as the olive, and for deciding when to apply phytosanitary treatments) as well as in medicine and the environmental sciences. Variations in airborne pollen concentrations, moreover, are indicators of changing plant life cycles. By modeling pollen time series, we can not only identify the variables influencing pollen levels but also predict future pollen concentrations. In this study, airborne pollen time series were modeled using a seasonal-trend decomposition procedure based on LOcally wEighted Scatterplot Smoothing (LOESS), known as STL. The data series, daily Poaceae pollen concentrations over the period 2006-2014, was broken up into seasonal and residual (stochastic) components. The seasonal component was compared with data on Poaceae flowering phenology obtained by field sampling. Residuals were fitted to a model generated from daily temperature and rainfall values, and daily pollen concentrations, using partial least squares regression (PLSR). This method was then applied to predict daily pollen concentrations for 2014 (independent validation data) using results for the seasonal component of the time series and estimates of the residual component for the period 2006-2013. The correlation between predicted and observed values was r = 0.79 for the pre-peak period (i.e., the period prior to the peak pollen concentration) and r = 0.63 for the post-peak period. Separate analysis of each component of the pollen data series enables the sources of variability to be identified more accurately than analysis of the original non-decomposed series, and for this reason the procedure has proved to be a suitable technique for analyzing the main environmental factors influencing airborne pollen concentrations.
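
    The decomposition-plus-regression pipeline maps directly onto standard tooling. A sketch (file names and the number of PLS components are placeholders; the paper's exact predictor set and validation split are described above):

        import pandas as pd
        from statsmodels.tsa.seasonal import STL
        from sklearn.cross_decomposition import PLSRegression

        pollen = pd.read_csv("poaceae.csv", index_col=0, parse_dates=True)["conc"]
        met = pd.read_csv("meteo.csv", index_col=0, parse_dates=True)  # temp, rain, ...

        decomp = STL(pollen, period=365, robust=True).fit()   # seasonal/trend/resid
        pls = PLSRegression(n_components=3)
        pls.fit(met.loc[pollen.index], decomp.resid)          # model the stochastic part

        # predicted concentration = repeated seasonal component + modelled residual
        predicted = decomp.seasonal + pls.predict(met.loc[pollen.index]).ravel()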

  12. ARIMA-Based Time Series Model of Stochastic Wind Power Generation

    DEFF Research Database (Denmark)

    Chen, Peiyuan; Pedersen, Troels; Bak-Jensen, Birgitte

    2010-01-01

    This paper proposes a stochastic wind power model based on an autoregressive integrated moving average (ARIMA) process. The model takes into account the nonstationarity and physical limits of stochastic wind power generation, and is constructed from one year of wind power measurements from the Nysted offshore wind farm in Denmark. The proposed limited-ARIMA (LARIMA) model introduces a limiter and characterizes the stochastic wind power generation by mean level, temporal correlation and driving noise. The model is validated against the measurements in terms of temporal correlation and probability distribution. The LARIMA model outperforms a first-order transition-matrix-based discrete Markov model in terms of temporal correlation, probability distribution and number of model parameters. The proposed LARIMA model is further extended to include the monthly variation of the stochastic wind power generation.
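
    The two ingredients, an ARIMA process and a hard limiter enforcing the physical power range, can be sketched with statsmodels; the model order and the normalisation to rated capacity below are assumptions for illustration, with `p` standing for a measured, normalised power series:

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        # p: hourly wind power normalised to [0, 1] of rated capacity
        fit = ARIMA(p, order=(1, 1, 1)).fit()

        # Simulate one week ahead, then apply the "limited" part of LARIMA:
        sim = fit.simulate(nsimulations=24 * 7, anchor="end", repetitions=100)
        sim_limited = np.clip(sim, 0.0, 1.0)   # keep power inside physical limits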

  13. Model-based Integration of Past & Future in TimeTravel

    DEFF Research Database (Denmark)

    Khalefa, Mohamed E.; Fischer, Ulrike; Pedersen, Torben Bach

    2012-01-01

    We demonstrate TimeTravel, an efficient DBMS system for seamless integrated querying of past and (forecasted) future values of time series, allowing the user to view past and future values as one joint time series. This functionality is important for advanced application domains like energy. The main idea is to represent time series compactly as models. By using models, the TimeTravel system answers queries approximately on past and future data, with error guarantees (absolute error and confidence), one order of magnitude faster than when accessing the time series directly. In addition, a hierarchical model index enables it to answer both approximate and exact queries. TimeTravel is implemented in PostgreSQL, thus achieving complete user transparency at the query level. In the demo, we show the easy building of a hierarchical model index for a real-world time series and the effect of varying the error guarantees on the speed-up.

  14. Incorporating time and income constraints in dynamic agent-based models of activity generation and time use : Approach and illustration

    NARCIS (Netherlands)

    Arentze, Theo; Ettema, D.F.; Timmermans, Harry

    Existing theories and models in economics and transportation treat households’ decisions regarding allocation of time and income to activities as a resource-allocation optimization problem. This stands in contrast with the dynamic nature of day-by-day activity-travel choices. Therefore, in the

  15. Modelling tourism demand in Madeira since 1946: an historical overview based on a time series approach

    Directory of Open Access Journals (Sweden)

    António Manuel Martins de Almeida

    2016-06-01

    Full Text Available Tourism is the leading economic sector in most islands, and for that reason market trends are closely monitored, given the huge impacts of relatively minor changes in demand patterns. An interesting line of research in the analysis of market trends is the examination of time series to get a historical overview of the data patterns. The modelling of demand patterns is obviously dependent on data availability, and the measurement of changes in demand patterns is quite often focused on a few decades. In this paper, we use long-term time-series data, available from 1946 onwards, to analyse the evolution of the main markets in Madeira by country of origin and to re-examine the Butler life cycle model. This study is an opportunity to document the historical development of the industry in Madeira and to open the discussion about the rejuvenation of a mature destination. Tourism development in Madeira experienced rapid growth until the late 1990s, making the island one of the leading destinations in the European context. However, annual growth rates are no longer within acceptable ranges, which has led policy-makers and experts to recommend a thorough assessment of the industry's prospects.

  16. A Virtual Machine Migration Strategy Based on Time Series Workload Prediction Using Cloud Model

    Directory of Open Access Journals (Sweden)

    Yanbing Liu

    2014-01-01

    Full Text Available Aimed at resolving the imbalance of resources and workloads at data centers and the overhead and high cost of virtual machine (VM) migrations, this paper proposes a new VM migration strategy based on a cloud-model time-series workload prediction algorithm. The strategy sets upper and lower workload bounds for host machines, forecasts the tendency of their subsequent workloads by building a workload time series using the cloud model, and stipulates a general VM migration criterion, workload-aware migration (WAM); on this basis it selects a source host machine, a destination host machine, and a VM on the source host to carry out the migration. Experimental results and comparisons with other peer research works show that the proposed method can effectively avoid VM migrations caused by momentary peak workload values, significantly lower the number of VM migrations, and dynamically reach and maintain a resource and workload balance for virtual machines, promoting improved utilization of resources in the entire data center.

  17. Comprehensive model of annual plankton succession based on the whole-plankton time series approach.

    Directory of Open Access Journals (Sweden)

    Jean-Baptiste Romagnan

    Full Text Available Ecological succession provides a widely accepted description of seasonal changes in phytoplankton and mesozooplankton assemblages in the natural environment, but concurrent changes in smaller (i.e., microbes) and larger (i.e., macroplankton) organisms are not included in the model because plankton ranging from bacteria to jellies are seldom sampled and analyzed simultaneously. Here we studied, for the first time in the aquatic literature, succession in a whole-plankton assemblage spanning five orders of magnitude in size, from microbes to macroplankton predators (not including fish or fish larvae, for which no consistent data were available). Samples were collected in the northwestern Mediterranean Sea (Bay of Villefranche) weekly for 10 months. Simultaneously collected samples were analyzed by flow cytometry, inverse microscopy, FlowCam, and ZooScan. The whole-plankton assemblage underwent sharp reorganizations that corresponded to bottom-up events of vertical mixing in the water column, and its development was top-down controlled by large gelatinous filter feeders and predators. Based on the results provided by our novel whole-plankton assemblage approach, we propose a new comprehensive conceptual model of the annual plankton succession (i.e., a whole-plankton model) characterized both by stepwise stacking of four broad trophic communities from early spring through summer, which is a new concept, and by progressive replacement of ecological plankton categories within the different trophic communities, as recognised traditionally.

  18. Speech Silicon: An FPGA Architecture for Real-Time Hidden Markov-Model-Based Speech Recognition

    Directory of Open Access Journals (Sweden)

    Schuster Jeffrey

    2006-01-01

    Full Text Available This paper examines the design of an FPGA-based system-on-a-chip capable of performing continuous speech recognition on medium sized vocabularies in real time. Through the creation of three dedicated pipelines, one for each of the major operations in the system, we were able to maximize the throughput of the system while simultaneously minimizing the number of pipeline stalls in the system. Further, by implementing a token-passing scheme between the later stages of the system, the complexity of the control was greatly reduced and the amount of active data present in the system at any time was minimized. Additionally, through in-depth analysis of the SPHINX 3 large vocabulary continuous speech recognition engine, we were able to design models that could be efficiently benchmarked against a known software platform. These results, combined with the ability to reprogram the system for different recognition tasks, serve to create a system capable of performing real-time speech recognition in a vast array of environments.

  20. Remote sensing-based time series models for malaria early warning in the highlands of Ethiopia

    Directory of Open Access Journals (Sweden)

    Midekisa Alemayehu

    2012-05-01

    Full Text Available Abstract Background Malaria is one of the leading public health problems in most of sub-Saharan Africa, particularly in Ethiopia. Almost all demographic groups are at risk of malaria because of seasonal and unstable transmission of the disease. Therefore, there is a need to develop malaria early-warning systems to enhance public health decision making for control and prevention of malaria epidemics. Data from orbiting earth-observing sensors can monitor environmental risk factors that trigger malaria epidemics. Remotely sensed environmental indicators were used to examine the influences of climatic and environmental variability on temporal patterns of malaria cases in the Amhara region of Ethiopia. Methods In this study seasonal autoregressive integrated moving average (SARIMA models were used to quantify the relationship between malaria cases and remotely sensed environmental variables, including rainfall, land-surface temperature (LST, vegetation indices (NDVI and EVI, and actual evapotranspiration (ETa with lags ranging from one to three months. Predictions from the best model with environmental variables were compared to the actual observations from the last 12 months of the time series. Results Malaria cases exhibited positive associations with LST at a lag of one month and positive associations with indicators of moisture (rainfall, EVI and ETa at lags from one to three months. SARIMA models that included these environmental covariates had better fits and more accurate predictions, as evidenced by lower AIC and RMSE values, than models without environmental covariates. Conclusions Malaria risk indicators such as satellite-based rainfall estimates, LST, EVI, and ETa exhibited significant lagged associations with malaria cases in the Amhara region and improved model fit and prediction accuracy. These variables can be monitored frequently and extensively across large geographic areas using data from earth-observing sensors to support public
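
    A model of the kind described, a seasonal ARIMA with lagged remotely sensed covariates, can be set up directly in statsmodels. The orders, the one-month lag, and the column names below are illustrative (the paper tested lags of one to three months), and `df` and `future_exog` are assumed inputs:

        import pandas as pd
        from statsmodels.tsa.statespace.sarimax import SARIMAX

        # df: monthly frame with columns 'cases', 'rain', 'lst', 'evi', 'eta'
        exog = df[["rain", "lst", "evi", "eta"]].shift(1).dropna()  # one-month lag
        y = df["cases"].loc[exog.index]

        fit = SARIMAX(y, exog=exog, order=(1, 0, 0),
                      seasonal_order=(1, 1, 0, 12)).fit(disp=False)

        # hold-out check in the spirit of the paper: forecast the final 12 months
        forecast = fit.get_forecast(steps=12, exog=future_exog)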

  1. Interevent times in a new alarm-based earthquake forecasting model

    Science.gov (United States)

    Talbi, Abdelhak; Nanjo, Kazuyoshi; Zhuang, Jiancang; Satake, Kenji; Hamdache, Mohamed

    2013-09-01

    This study introduces a new earthquake forecasting model that uses the moment ratio (MR) of the first to second order moments of earthquake interevent times as a precursory alarm index to forecast large earthquake events. The MR model is based on the idea that the MR is associated with anomalous long-term changes in background seismicity prior to large earthquake events. In a given region, the MR statistic is defined as the inverse of the index of dispersion, or Fano factor, with MR values (or scores) providing a biased estimate of the relative regional frequency of background events, here termed the background fraction. To test the forecasting performance of the proposed MR model, a composite Japan-wide earthquake catalogue for the years 679 to 2012 was compiled from the Japan Meteorological Agency catalogue for the period 1923-2012 and the Utsu historical seismicity records for 679-1922. MR values were estimated by sampling interevent times from events with magnitude M ≥ 6 using an earthquake random sampling (ERS) algorithm developed during previous research. Three retrospective tests of M ≥ 7 target earthquakes were undertaken to evaluate the long-, intermediate- and short-term performance of MR forecasting, using mainly Molchan diagrams and optimal spatial maps obtained by minimizing a forecasting error defined as the sum of the miss and alarm rates. The testing indicates that the MR forecasting technique performs well at long, intermediate and short terms. The MR maps produced during long-term testing indicate significant alarm levels before 15 of the 18 shallow earthquakes within the testing region during the past two decades, with an alarm region covering about 20 per cent (alarm rate) of the testing region. The number of shallow events missed by the forecasting was reduced by about 60 per cent using the MR method instead of the relative intensity (RI) forecasting method. At short term, our model succeeded in forecasting the
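
    The alarm index itself is a one-liner once interevent times have been sampled; following the definition quoted above (inverse of the index of dispersion, i.e. the inverse Fano factor), it can be computed as:

        import numpy as np

        def moment_ratio(event_times):
            """MR score of a region: inverse index of dispersion of the
            interevent times of its sampled M >= 6 events."""
            dt = np.diff(np.sort(event_times))
            return dt.mean() / dt.var()   # mean/variance = 1 / Fano factor

    In the MR model, the map of such scores (estimated via the ERS sampling of the catalogue) is thresholded to declare alarm regions, whose miss and alarm rates are then evaluated on Molchan diagrams.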

  2. Robust self-triggered model predictive control for constrained discrete-time LTI systems based on homothetic tubes

    NARCIS (Netherlands)

    Aydiner, E.; Brunner, F.D.; Heemels, W.P.M.H.; Allgower, F.

    2015-01-01

    In this paper we present a robust self-triggered model predictive control (MPC) scheme for discrete-time linear time-invariant systems subject to input and state constraints and additive disturbances. In self-triggered model predictive control, at every sampling instant an optimization problem based

  3. Modelling the thermal quenching mechanism in quartz based on time-resolved optically stimulated luminescence

    International Nuclear Information System (INIS)

    Pagonis, V.; Ankjaergaard, C.; Murray, A.S.; Jain, M.; Chen, R.; Lawless, J.; Greilich, S.

    2010-01-01

    This paper presents a new numerical model for thermal quenching in quartz, based on the previously suggested Mott-Seitz mechanism. In the model electrons from a dosimetric trap are raised by optical or thermal stimulation into the conduction band, followed by an electronic transition from the conduction band into an excited state of the recombination center. Subsequently electrons in this excited state undergo either a direct radiative transition into a recombination center, or a competing thermally assisted non-radiative process into the ground state of the recombination center. As the temperature of the sample is increased, more electrons are removed from the excited state via the non-radiative pathway. This reduction in the number of available electrons leads to both a decrease of the intensity of the luminescence signal and to a simultaneous decrease of the luminescence lifetime. Several simulations are carried out of time-resolved optically stimulated luminescence (TR-OSL) experiments, in which the temperature dependence of luminescence lifetimes in quartz is studied as a function of the stimulation temperature. Good quantitative agreement is found between the simulation results and new experimental data obtained using a single-aliquot procedure on a sedimentary quartz sample.
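
    The competition described above yields the standard Mott-Seitz expression, in which both the luminescence efficiency and the measured lifetime are scaled by 1 / (1 + C exp(-W/kT)). A small numerical illustration (the activation energy W, pre-exponential C, and low-temperature lifetime are placeholder values of the order reported for quartz in the literature, not the paper's fitted parameters):

        import numpy as np

        K_B = 8.617e-5   # Boltzmann constant in eV/K

        def quenched_lifetime(tau_radiative, temp_k, C=2.8e7, W=0.64):
            """Effective TR-OSL lifetime under Mott-Seitz thermal quenching:
            tau(T) = tau_r / (1 + C * exp(-W / (k_B * T)))."""
            return tau_radiative / (1.0 + C * np.exp(-W / (K_B * temp_k)))

        for T in (300.0, 350.0, 400.0, 450.0):
            print(T, quenched_lifetime(42e-6, T))   # lifetime falls as T rises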

  4. Examining School-Based Bullying Interventions Using Multilevel Discrete Time Hazard Modeling

    Science.gov (United States)

    Wagaman, M. Alex; Geiger, Jennifer Mullins; Bermudez-Parsai, Monica; Hedberg, E. C.

    2014-01-01

    Although schools have been trying to address bullying by utilizing different approaches that stop or reduce its incidence, little is known about which specific intervention strategies are most successful in reducing bullying in the school setting. Using the social-ecological framework, this paper examines school-based disciplinary interventions often used to deliver consequences to deter the reoccurrence of bullying and aggressive behaviors among school-aged children. Data for this study are drawn from the School-Wide Information System (SWIS), with the final analytic sample consisting of 1,221 students in grades K-12 who received an office disciplinary referral for bullying during the first semester. Using Kaplan-Meier failure functions and multilevel discrete-time hazard models, determinants of the probability of a student receiving a second referral over time were examined. Of the seven interventions tested, only Parent-Teacher Conference (AOR = 0.65, p < .05) was significantly associated with a reduced likelihood of a second referral for bullying and aggressive behaviors. By using a social-ecological framework, schools can develop strategies that deter the reoccurrence of bullying by identifying key factors that enhance a sense of connection between the students' mesosystems as well as utilizing disciplinary strategies that take into consideration students' microsystem roles. PMID:22878779

  5. A residence-time-based transport approach for the groundwater pathway in performance assessment models

    Science.gov (United States)

    Robinson, Bruce A.; Chu, Shaoping

    2013-03-01

    This paper presents the theoretical development and numerical implementation of a new modeling approach for representing the groundwater pathway in risk assessment or performance assessment models of contaminant transport systems. The model developed in the present study, called the Residence Time Distribution (RTD) Mixing Model (RTDMM), allows an arbitrary distribution of fluid travel times to be represented, capturing the effects on the breakthrough curve of flow processes such as channelized flow, fast pathways and complex three-dimensional dispersion. Mathematical methods for constructing the model for a given RTD are derived directly from the theory of residence time distributions in flowing systems. A simple mixing model is presented, along with the basic equations required to reproduce an arbitrary RTD with the model. The practical advantages of the RTDMM include easy incorporation into a multi-realization probabilistic simulation; a computational burden no more onerous than a one-dimensional model with the same number of grid cells; and straightforward implementation in available flow and transport modeling codes, enabling one to utilize the advanced transport features of those codes. For example, in this study we incorporated diffusion into the stagnant fluid in the rock matrix away from the flowing fractures, using a generalized dual porosity model formulation. A suite of example calculations presented herein shows the utility of the RTDMM for the case of a radioactive decay chain, dual porosity transport and sorption.
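
    At its core, routing a conservative tracer through a system characterized only by its RTD is a convolution of the inlet history with that distribution; the RTDMM's mixing cells are constructed to reproduce a prescribed RTD, after which features such as matrix diffusion and decay chains are layered on. A bare-bones illustration of the convolution step (the RTD shape is invented to mimic a fast channelized path plus a slow dispersed tail):

        import numpy as np

        def outlet_history(rtd, c_in, dt):
            """c_out(t) = (rtd * c_in)(t) for a conservative, nondecaying solute."""
            return np.convolve(rtd, c_in)[: len(c_in)] * dt

        dt = 0.1
        t = np.arange(0.0, 100.0, dt)
        rtd = 0.4 * np.exp(-((t - 5.0) ** 2) / 4.0) + 0.6 * np.exp(-t / 30.0) / 30.0
        rtd /= np.trapz(rtd, t)                  # an RTD must integrate to one
        c_in = (t < 10.0).astype(float)          # 10-time-unit inlet pulse
        c_out = outlet_history(rtd, c_in, dt)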

  6. Numerical modelling of softwood time-dependent behaviour based on microstructure

    DEFF Research Database (Denmark)

    Engelund, Emil Tang

    2010-01-01

    The time-dependent mechanical behaviour of softwood, such as creep or relaxation, can be predicted from knowledge of the microstructural arrangement of the cell wall by applying deformation kinetics. This has been done several times before, however often without considering the constraints defined by the basic physical mechanism behind the time-dependent behaviour. The mechanism causing time-dependency is thought to be sliding of the microfibrils past each other as a result of the breaking and re-bonding of hydrogen bonds. This can be incorporated in a numerical model by only allowing time-dependency in shear.

  7. Predicting the Risk of Attrition for Undergraduate Students with Time Based Modelling

    Science.gov (United States)

    Chai, Kevin E. K.; Gibson, David

    2015-01-01

    Improving student retention is an important and challenging problem for universities. This paper reports on the development of a student attrition model for predicting which first year students are most at-risk of leaving at various points in time during their first semester of study. The objective of developing such a model is to assist…

  8. The influence of noise on nonlinear time series detection based on Volterra-Wiener-Korenberg model

    Energy Technology Data Exchange (ETDEWEB)

    Lei Min [State Key Laboratory of Vibration, Shock and Noise, Shanghai Jiao Tong University, Shanghai 200030 (China)], E-mail: leimin@sjtu.edu.cn; Meng Guang [State Key Laboratory of Vibration, Shock and Noise, Shanghai Jiao Tong University, Shanghai 200030 (China)

    2008-04-15

    This paper studies the influence of noise on the Volterra-Wiener-Korenberg (VWK) nonlinear test model. Our numerical results reveal that different types of noise lead to different behavior of VWK model detection. Under dynamic noise, it is difficult to distinguish chaos from nonchaotic but nonlinear determinism. For time series, measurement noise has no impact on the detection of chaotic determinism. This paper also discusses the behavior of VWK model detection with surrogate data under different types of noise.

  9. Design base transient analysis using the real-time nuclear reactor simulator model

    International Nuclear Information System (INIS)

    Tien, K.K.; Yakura, S.J.; Morin, J.P.; Gregory, M.V.

    1987-01-01

    A real-time simulation model has been developed to describe the dynamic response of all major systems in a nuclear process reactor. The model consists of a detailed representation of all hydraulic components in the external coolant circulating loops consisting of piping, valves, pumps and heat exchangers. The reactor core is described by a three-dimensional neutron kinetics model with detailed representation of assembly coolant and moderator thermal hydraulics. The models have been developed to support a real-time training simulator, therefore, they reproduce system parameters characteristic of steady state normal operation with high precision. The system responses for postulated severe transients such as large pipe breaks, loss of pumping power, piping leaks, malfunctions in control rod insertion, and emergency injection of neutron absorber are calculated to be in good agreement with reference safety analyses. Restrictions were imposed by the requirement that the resulting code be able to run in real-time with sufficient spare time to allow interfacing with secondary systems and simulator hardware. Due to hardware set-up and real plant instrumentation, simplifications due to symmetry were not allowed. The resulting code represents a coarse-node engineering model in which the level of detail has been tailored to the available computing power of a present generation super-minicomputer. Results for several significant transients, as calculated by the real-time model, are compared both to actual plant data and to results generated by fine-mesh analysis codes

  11. A Seasonal Time-Series Model Based on Gene Expression Programming for Predicting Financial Distress

    Directory of Open Access Journals (Sweden)

    Ching-Hsue Cheng

    2018-01-01

    Full Text Available The prediction of financial distress is an important and challenging research topic in the financial field. Many methods currently exist for predicting firm bankruptcy and financial crises, including artificial intelligence and traditional statistical methods, and past studies have shown that artificial intelligence methods predict better than traditional statistical ones. Financial statements are quarterly reports; hence, corporate financial crises form seasonal time-series data, and the attribute data affecting the financial distress of companies are nonlinear and nonstationary time-series data with fluctuations. Therefore, this study employed a nonlinear attribute selection method to build a nonlinear financial distress prediction model: that is, this paper proposes a novel seasonal time-series gene expression programming model for predicting the financial distress of companies. The proposed model has several advantages: (i) it differs from previous models, which lacked the concept of time series; (ii) the proposed integrated attribute selection method can find the core attributes and reduce high-dimensional data; and (iii) the proposed model can generate the rules and mathematical formulas of financial distress, providing references to investors and decision makers. The results show that the proposed method outperforms the listed classifiers under three criteria; hence, the proposed model has competitive advantages in predicting the financial distress of companies.

  14. New time scale based k-epsilon model for near-wall turbulence

    Science.gov (United States)

    Yang, Z.; Shih, T. H.

    1993-01-01

    A k-epsilon model is proposed for wall-bounded turbulent flows. In this model, the eddy viscosity is characterized by a turbulent velocity scale and a turbulent time scale. The time scale is bounded from below by the Kolmogorov time scale. The dissipation equation is reformulated using this time scale, and no singularity exists at the wall. The damping function used in the eddy viscosity is chosen to be a function of R_y = k^(1/2) y / ν instead of y+. Hence, the model can be used for flows with separation. The model constants used are the same as in the high-Reynolds-number standard k-epsilon model; thus, the proposed model is also suitable for flows far from the wall. Turbulent channel flows at different Reynolds numbers and turbulent boundary layer flows with and without pressure gradient are calculated. Results show that the model predictions are in good agreement with direct numerical simulation and experimental data.
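
    The key regularization idea, a turbulent time scale floored at the Kolmogorov scale so the eddy viscosity stays finite as k -> 0 near the wall, can be written in a few lines. The exact blending and the damping function of R_y used in the published model are omitted here; the form below is illustrative only:

        import numpy as np

        def eddy_viscosity(k, eps, nu, c_mu=0.09):
            """nu_t = c_mu * k * T, with T bounded below by sqrt(nu/eps)."""
            eps = np.maximum(eps, 1e-30)         # guard the division
            t_turb = k / eps                     # conventional time scale k/eps
            t_kolmogorov = np.sqrt(nu / eps)     # Kolmogorov lower bound
            return c_mu * k * np.maximum(t_turb, t_kolmogorov)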

  15. Model-based schedulability analysis of safety critical hard real-time Java programs

    DEFF Research Database (Denmark)

    Bøgholm, Thomas; Kragh-Hansen, Henrik; Olsen, Petur

    2008-01-01

    verifiable by the Uppaal model checker [23]. Schedulability analysis is reduced to a simple reachability question, checking for deadlock freedom. Model-based schedulability analysis has been developed by Amnell et al. [2], but has so far only been applied to high level specifications, not actual...

  16. T-UPPAAL: Online Model-based Testing of Real-Time Systems

    DEFF Research Database (Denmark)

    Mikucionis, Marius; Larsen, Kim Guldstrand; Nielsen, Brian

    2004-01-01

    The goal of testing is to gain confidence in a physical computer based system by means of executing it. More than one third of typical project resources is spent on testing embedded and real-time systems, but still it remains ad-hoc, based on heuristics, and error-prone. Therefore systematic...

  17. Application of WRF - SWAT OpenMI 2.0 based models integration for real time hydrological modelling and forecasting

    Science.gov (United States)

    Bugaets, Andrey; Gonchukov, Leonid

    2014-05-01

    Bringing deterministic distributed hydrological models into operational water management requires intensive, timely collection and input of spatially distributed climatic information, which is both time-consuming and laborious. The lead time of the data pre-processing stage can be essentially reduced by coupling hydrological and numerical weather prediction models. This is especially important for regions such as the south of the Russian Far East, where geographical position combined with a monsoon climate affected by typhoons and extremely heavy rains causes rapid rises in mountain river water levels, leading to flash flooding and enormous damage. The objective of this study is the development of an end-to-end workflow that executes, in a loosely coupled mode, an integrated modeling system comprising the Weather Research and Forecast (WRF) atmospheric model and the Soil and Water Assessment Tool (SWAT 2012) hydrological model, using OpenMI 2.0 and web-service technologies. Migrating SWAT into an OpenMI-compliant component involves reorganizing the model into separate initialization, perform-timestep and finalization functions that can be accessed from outside. To preserve SWAT's normal behavior, the source code was separated from the OpenMI-specific implementation into a static library. The modified code was assembled into a dynamic library and wrapped in a C# class implementing the OpenMI ILinkableComponent interface. The WRF OpenMI-compliant component is based on the idea of wrapping web-service clients into a linkable component, with seamless access to output netCDF files and no actual connection between the models. The weather state variables (precipitation, wind, solar radiation, air temperature and relative humidity) are processed by an automatic input selection algorithm to single out the most relevant values, which the SWAT model uses to yield climatic data at the subbasin scale. Spatial interpolation between the WRF regular grid and SWAT subbasins centroid (which are
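
    The reorganization described for SWAT, splitting a monolithic model loop into separately callable initialization, perform-timestep and finalization functions, is the essence of making a model OpenMI-linkable. The actual implementation here is a C# wrapper over the OpenMI 2.0 ILinkableComponent interface; the schematic below is a Python rendering of the same control flow, with all helper names hypothetical:

        class SwatLikeComponent:
            """Schematic OpenMI-style component: an external driver (here fed
            by WRF output) advances the model one exchange step at a time."""

            def initialize(self, config_path):
                self.state = load_watershed(config_path)      # hypothetical helper

            def perform_time_step(self, forcing):
                # forcing: precipitation, temperature, wind, radiation, humidity
                # interpolated from the WRF grid to each subbasin centroid
                self.state = advance_one_step(self.state, forcing)
                return self.state.discharge

            def finalize(self):
                write_outputs(self.state)                     # hypothetical helper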

  18. DeepTravel: a Neural Network Based Travel Time Estimation Model with Auxiliary Supervision

    OpenAIRE

    Zhang, Hanyuan; Wu, Hao; Sun, Weiwei; Zheng, Baihua

    2018-01-01

    Estimating the travel time of a path is of great importance to smart urban mobility. Existing approaches either estimate the time cost of each road segment, and so cannot capture many complex cross-segment factors, or are designed heuristically in a non-learning-based way that fails to utilize the existing abundant temporal labels of the data, i.e., the time stamp of each trajectory point. In this paper, we leverage new developments in deep neural networks and propose a no...

  19. Flatness-based control and Kalman filtering for a continuous-time macroeconomic model

    Science.gov (United States)

    Rigatos, G.; Siano, P.; Ghosh, T.; Busawon, K.; Binns, R.

    2017-11-01

    The article proposes flatness-based control for a nonlinear macro-economic model of the UK economy. The differential flatness properties of the model are proven. This enables the introduction of a transformation (diffeomorphism) of the system's state variables, expressing the state-space description of the model in the linear canonical (Brunovsky) form, in which both the feedback control and the state estimation problems can be solved. For the linearized equivalent model of the macroeconomic system, stabilizing feedback control can be achieved using pole placement methods. Moreover, to implement stabilizing feedback control of the system while measuring only a subset of its state vector elements, the Derivative-free nonlinear Kalman Filter is used. This consists of the Kalman Filter recursion applied to the linearized equivalent model of the financial system, together with an inverse transformation based again on differential flatness theory. The asymptotic stability properties of the control scheme are confirmed.
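
    For illustration, a minimal pole-placement sketch on a Brunovsky-type chain of integrators follows, using SciPy; the matrices are toy values, not the UK macroeconomic model from the paper.

        import numpy as np
        from scipy.signal import place_poles

        A = np.array([[0.0, 1.0],
                      [0.0, 0.0]])      # x1' = x2, x2' = u (canonical form)
        B = np.array([[0.0],
                      [1.0]])

        K = place_poles(A, B, poles=[-1.0, -2.0]).gain_matrix  # u = -K x
        print(np.linalg.eigvals(A - B @ K))                    # ~ [-1., -2.]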

  20. Stability Analysis of Positive Polynomial Fuzzy-Model-Based Control Systems with Time Delay under Imperfect Premise Matching

    OpenAIRE

    Li, Xiaomiao; Lam, Hak Keung; Song, Ge; Liu, Fucai

    2017-01-01

    This paper deals with the stability and positivity analysis of polynomial-fuzzy-model-based (PFMB) control systems with time delay, which are formed by a polynomial fuzzy model and a polynomial fuzzy controller connected in a closed loop, under imperfect premise matching. To improve the design and realization flexibility, the polynomial fuzzy model and the polynomial fuzzy controller are allowed to have their own sets of premise membership functions. A sum-of-squares (SOS)-based stability ana...

  1. Time Prediction Models for Echinococcosis Based on Gray System Theory and Epidemic Dynamics.

    Science.gov (United States)

    Zhang, Liping; Wang, Li; Zheng, Yanling; Wang, Kai; Zhang, Xueliang; Zheng, Yujian

    2017-03-04

    Echinococcosis, which can seriously harm human health and animal husbandry production, has become endemic in the Xinjiang Uygur Autonomous Region of China. In order to explore an effective human Echinococcosis forecasting model for Xinjiang, three grey models, namely, the traditional grey GM(1,1) model, the Grey-Periodic Extensional Combinatorial Model (PECGM(1,1)), and the Modified Grey Model using Fourier Series (FGM(1,1)), in addition to a multiplicative seasonal ARIMA(1,0,1)(1,1,0)₄ model, are applied in this study for short-term predictions. The accuracy of the different grey models is also investigated. The simulation results show that the FGM(1,1) model performs better, not only in model fitting, but also in forecasting. Furthermore, considering stability and modeling precision in the long run, a dynamic epidemic prediction model based on the transmission mechanism of Echinococcosis is also established for long-term predictions. Results demonstrate that the dynamic epidemic prediction model is capable of identifying the future tendency: the number of human Echinococcosis cases will increase steadily over the next 25 years, reaching a peak of about 1250 cases, before slowly declining until it finally dies out.

  2. Time Prediction Models for Echinococcosis Based on Gray System Theory and Epidemic Dynamics

    Directory of Open Access Journals (Sweden)

    Liping Zhang

    2017-03-01

    Echinococcosis, which can seriously harm human health and animal husbandry production, has become endemic in the Xinjiang Uygur Autonomous Region of China. In order to explore an effective human Echinococcosis forecasting model for Xinjiang, three grey models, namely, the traditional grey GM(1,1) model, the Grey-Periodic Extensional Combinatorial Model (PECGM(1,1)), and the Modified Grey Model using Fourier Series (FGM(1,1)), in addition to a multiplicative seasonal ARIMA(1,0,1)(1,1,0)₄ model, are applied in this study for short-term predictions. The accuracy of the different grey models is also investigated. The simulation results show that the FGM(1,1) model performs better, not only in model fitting, but also in forecasting. Furthermore, considering stability and modeling precision in the long run, a dynamic epidemic prediction model based on the transmission mechanism of Echinococcosis is also established for long-term predictions. Results demonstrate that the dynamic epidemic prediction model is capable of identifying the future tendency: the number of human Echinococcosis cases will increase steadily over the next 25 years, reaching a peak of about 1250 cases, before slowly declining until it finally dies out.
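
    For readers unfamiliar with grey models, a minimal GM(1,1) sketch follows (toy case counts, not the Xinjiang series): the series is accumulated (AGO), the two grey parameters are fitted by least squares, and forecasts are recovered by differencing the time-response function.

        import numpy as np

        def gm11_forecast(x0, horizon):
            """Fit GM(1,1) to a positive series x0 and forecast `horizon` steps."""
            n = len(x0)
            x1 = np.cumsum(x0)                        # accumulated generating operation
            z1 = 0.5 * (x1[1:] + x1[:-1])             # background values
            B = np.column_stack([-z1, np.ones(n - 1)])
            a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]    # grey parameters
            k = np.arange(n + horizon)
            x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # time response
            return np.diff(x1_hat)[n - 1:]            # inverse AGO -> forecasts

        cases = np.array([512.0, 530.0, 561.0, 598.0, 640.0])  # illustrative data
        print(gm11_forecast(cases, horizon=3))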

  3. Study on Apparent Kinetic Prediction Model of the Smelting Reduction Based on the Time-Series

    Directory of Open Access Journals (Sweden)

    Guo-feng Fan

    2012-01-01

    A series of direct smelting reduction experiments was carried out on high-phosphorus iron ores of different basicities using a thermogravimetric analyzer. The derivative thermogravimetric (DTG) data were obtained from the experiments. The one-step-forward local weighted linear (LWL) method, one of the most suitable chaotic time-series prediction methods focusing on the errors, is used to predict the DTG. Meanwhile, empirical mode decomposition-autoregressive (EMD-AR) modeling, a data mining technique in signal processing, is also used to predict the DTG. The results show that (1) EMD-AR(4) is the most appropriate, with a smaller error than the former; (2) the root mean square error (RMSE) decreased by about two-thirds; (3) the standardized root mean square error (NMSE) decreased by an order of magnitude. Finally, in this paper the EMD-AR method is improved by golden section weighting, reducing its error further. Therefore, the improved EMD-AR model is a promising alternative for predicting the apparent reaction rate (DTG). The analytical results provide an important reference in the field of industrial control.

  4. Value of time determination for the city of Alexandria based on a disaggregate binary mode choice model

    Directory of Open Access Journals (Sweden)

    Mounir Mahmoud Moghazy Abdel-Aal

    2017-12-01

    In the travel demand modeling field, mode choice is the most important decision affecting the resulting road congestion. The behavioral nature of disaggregate models and their advantages over aggregate models have led to their extensive use. This paper proposes a framework to determine the value of time (VoT) for the city of Alexandria by calibrating a disaggregate linear-in-parameters utility-based binary logit mode choice model of the city. The mode attributes (travel time and travel cost) along with traveler attributes (car ownership and income) were selected as the utility attributes of the basic model formulation, which included 5 models. Three additional alternative utility formulations based on transformations of the mode attributes, including relative travel cost (cost divided by income), log(travel time), and the combination of the two transformations together, were introduced. The parameter estimation procedure was based on the likelihood maximization technique and was performed in EXCEL. Out of 20 models estimated, only 2 models are considered successful in terms of the correct signs of the parameter estimates and the magnitude of their significance (t-statistic values). The determination of the VoT also serves in model validation. The best two models estimated the value of time at LE 11.30/hr and LE 14.50/hr, with relative errors of +3.7% and +33.0%, respectively, against the hourly salary of LE 10.9/hr. The two proposed models prove sensitive to trip time and income levels as factors affecting the choice mechanism. A sensitivity analysis was performed and showed that the model with the higher relative error is marginally more robust. Keywords: Transportation modeling, Binary mode choice, Parameter estimation, Value of time, Likelihood maximization, Sensitivity analysis
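
    A minimal sketch of the underlying computation, on synthetic data rather than the Alexandria survey: the binary logit log-likelihood is maximized numerically, and the VoT is recovered as the ratio of the time and cost coefficients. All numbers are illustrative.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        n = 2000
        dt = rng.normal(10.0, 3.0, n)    # travel-time difference, minutes
        dc = rng.normal(2.0, 0.5, n)     # travel-cost difference, currency units
        u = -0.15 * dt - 0.60 * dc       # "true" utility difference
        y = (rng.random(n) < 1.0 / (1.0 + np.exp(-u))).astype(float)

        def neg_loglik(beta):
            v = beta[0] * dt + beta[1] * dc
            return -np.sum(y * v - np.log1p(np.exp(v)))   # binary logit likelihood

        beta_hat = minimize(neg_loglik, x0=np.zeros(2)).x
        print(beta_hat, "VoT per hour:", 60.0 * beta_hat[0] / beta_hat[1])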

  5. Modeling the impact of forecast-based regime switches on macroeconomic time series

    NARCIS (Netherlands)

    K. Bel (Koen); R. Paap (Richard)

    2013-01-01

    textabstractForecasts of key macroeconomic variables may lead to policy changes of governments, central banks and other economic agents. Policy changes in turn lead to structural changes in macroeconomic time series models. To describe this phenomenon we introduce a logistic smooth transition

  6. The Effect of Inquiry Training Learning Model Based on Just in Time Teaching for Problem Solving Skill

    Science.gov (United States)

    Turnip, Betty; Wahyuni, Ida; Tanjung, Yul Ifda

    2016-01-01

    One of the factors that can support successful learning activity is the use of learning models suited to the objectives to be achieved. This study aimed to analyze the differences in the physics problem-solving ability of students taught with the Inquiry Training learning model based on Just-In-Time Teaching (JITT) and with conventional learning taught by the cooperative model…

  7. Extracting a robust U.S. business cycle using a time-varying multivariate model-based bandpass filter

    NARCIS (Netherlands)

    Koopman, S.J.; Creal, D.D.

    2010-01-01

    We develop a flexible business cycle indicator that accounts for potential time variation in macroeconomic variables. The coincident economic indicator is based on a multivariate trend cycle decomposition model and is constructed from a moderate set of US macroeconomic time series. In particular, we

  8. Targeting and timing promotional activities : An agent-based model for the takeoff of new products

    NARCIS (Netherlands)

    Delre, S. A.; Jager, W.; Bijmolt, T. H. A.; Janssen, M. A.

    Many marketing efforts focus on promotional activities that support the launch of new products. Promotional strategies may play a crucial role in the early stages of the product life cycle, and determine to a large extent the diffusion of a new product. This paper proposes an agent-based model to

  9. A new costing model in hospital management: time-driven activity-based costing system.

    Science.gov (United States)

    Öker, Figen; Özyapıcı, Hasan

    2013-01-01

    Traditional cost systems cause cost distortions because they cannot meet the requirements of today's businesses. Therefore, a new and more effective cost system is needed. Consequently, the time-driven activity-based costing system has emerged. The unit cost of supplying capacity and the time needed to perform an activity are the only two factors considered by the system. Furthermore, this system determines unused capacity by considering practical capacity. The purpose of this article is to emphasize the efficiency of the time-driven activity-based costing system and to show how it can be applied in a health care institution. A case study was conducted in a private hospital in Cyprus. Interviews and direct observations were used to collect the data. The case study revealed that the cost of unused capacity is allocated to both open and laparoscopic (closed) surgeries. Thus, by using the time-driven activity-based costing system, managers should eliminate the cost of unused capacity so as to obtain better results. Based on the results of the study, hospital management is better able to understand the costs of different surgeries. In addition, managers can easily notice the cost of unused capacity and decide how many employees should be dismissed or redirected to other productive areas.
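
    The core arithmetic of time-driven ABC is compact enough to sketch: a capacity cost rate (cost of capacity supplied divided by practical capacity) is multiplied by per-activity time estimates, and whatever practical capacity is left unused is costed separately. The figures below are illustrative, not the case hospital's.

        resources_cost = 600_000.0           # annual cost of capacity supplied
        practical_capacity_min = 120_000.0   # practical capacity, minutes/year
        rate = resources_cost / practical_capacity_min      # cost per minute

        minutes_per_case = {"open surgery": 150.0, "laparoscopic surgery": 90.0}
        cases_per_year = 400                                # of each type
        used = sum(minutes_per_case.values()) * cases_per_year
        for name, minutes in minutes_per_case.items():
            print(name, "costs", minutes * rate, "per case")
        print("unused capacity cost:", (practical_capacity_min - used) * rate)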

  10. A Time-Space Symmetry Based Cylindrical Model for Quantum Mechanical Interpretations

    Science.gov (United States)

    Vo Van, Thuan

    2017-12-01

    Following a bi-cylindrical model of geometrical dynamics, our study shows that a 6D gravitational equation leads to a geodesic description in an extended symmetrical time-space, which fits Hubble-like expansion on a microscopic scale. As a duality, the geodesic solution is mathematically equivalent to the basic Klein-Gordon-Fock equations of free massive elementary particles, in particular the squared Dirac equations of leptons. The quantum indeterminism is shown to originate from space-time curvatures. Interpretation of some important issues of quantum mechanical reality is carried out in comparison with the 5D space-time-matter theory. A solution to the lepton mass hierarchy is proposed by extending to curvatures of time-like hyper-spherical surfaces of higher dimension than those of the cylindrical dynamical geometry. As a result, reasonable charged lepton mass ratios have been calculated, which could be tested experimentally.

  11. Rule-based approach to cognitive modeling of real-time decision making

    International Nuclear Information System (INIS)

    Thorndyke, P.W.

    1982-01-01

    Recent developments in the fields of cognitive science and artificial intelligence have made possible the creation of a new class of models of complex human behavior. These models, referred to as either expert or knowledge-based systems, describe the high-level cognitive processing undertaken by a skilled human to perform a complex, largely mental, task. Expert systems have been developed to provide simulations of skilled performance of a variety of tasks. These include problems of data interpretation, system monitoring and fault isolation, prediction, planning, diagnosis, and design. In general, such systems strive to produce prescriptive (error-free) behavior, rather than descriptively model the typical human's error-prone behavior. However, some research has sought to develop descriptive models of human behavior using the same theoretical frameworks adopted by expert systems builders. This paper presents an overview of this theoretical framework and modeling approach, and indicates the applicability of such models to the development of a model of control room operators in a nuclear power plant. Such a model could serve several beneficial functions in plant design, licensing, and operation.

  12. The Research of Car-Following Model Based on Real-Time Maximum Deceleration

    Directory of Open Access Journals (Sweden)

    Longhai Yang

    2015-01-01

    This paper is concerned with the effect of real-time maximum deceleration in car-following. The real-time maximum deceleration is estimated from vehicle dynamics. It is known that the intelligent driver model (IDM) can control adaptive cruise control (ACC) well. The disadvantages of the IDM at high and constant speed are analyzed. A new car-following model applicable to ACC is established accordingly, modifying the desired minimum gap and structure of the IDM. We simulated the new car-following model and the IDM under two different road conditions. In the first, the vehicles drive on a single road surface, taking dry asphalt as the example in this paper. In the second, vehicles drive onto a different surface; this paper analyzes the situation in which vehicles drive from a dry asphalt road onto an icy road. From the simulation, we found that the new car-following model can not only ensure driving safety and comfort but also control the steady driving of the vehicle with a smaller time headway than the IDM.
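
    For reference, the standard IDM acceleration law that the paper modifies is sketched below with illustrative parameter values; the paper's modified desired-gap term and real-time maximum-deceleration estimate are not reproduced here.

        import math

        def idm_acceleration(v, gap, dv, v0=33.3, T=1.5, a=1.0, b=1.5, s0=2.0, delta=4):
            """v: speed; gap: bumper-to-bumper distance; dv: v - v_leader."""
            s_star = s0 + v * T + v * dv / (2.0 * math.sqrt(a * b))  # desired gap
            return a * (1.0 - (v / v0) ** delta - (s_star / gap) ** 2)

        print(idm_acceleration(v=25.0, gap=30.0, dv=2.0))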

  13. Estimating Travel Time in Bank Filtration Systems from a Numerical Model Based on DTS Measurements.

    Science.gov (United States)

    des Tombe, Bas F; Bakker, Mark; Schaars, Frans; van der Made, Kees-Jan

    2018-03-01

    An approach is presented to determine the seasonal variations in travel time in a bank filtration system using a passive heat tracer test. The temperature in the aquifer varies seasonally because of temperature variations of the infiltrating surface water and at the soil surface. Temperature was measured with distributed temperature sensing along fiber optic cables that were inserted vertically into the aquifer with direct push equipment. The approach was applied to a bank filtration system consisting of a sequence of alternating, elongated recharge basins and rows of recovery wells. A SEAWAT model was developed to simulate coupled flow and heat transport. The model of a two-dimensional vertical cross section is able to simulate the temperature of the water at the well and the measured vertical temperature profiles reasonably well. MODPATH was used to compute flowpaths and the travel time distribution. At the study site, temporal variation of the pumping discharge was the dominant factor influencing the travel time distribution. For an equivalent system with a constant pumping rate, variations in the travel time distribution are caused by variations in the temperature-dependent viscosity. As a result, travel times increase in the winter, when a larger fraction of the water travels through the warmer, lower part of the aquifer, and decrease in the summer, when the upper part of the aquifer is warmer. © 2017 The Authors. Groundwater published by Wiley Periodicals, Inc. on behalf of National Ground Water Association.

  14. Time Series Model of Wind Speed for Multi Wind Turbines based on Mixed Copula

    Directory of Open Access Journals (Sweden)

    Nie Dan

    2016-01-01

    Because wind power is intermittent and random, its large-scale grid integration directly affects the safe and stable operation of the power grid. In order to study the wind speed characteristics of wind turbines quantitatively, a wind speed time series model for multiple wind turbine generators is constructed using the mixed Copula-ARMA function in this paper, and a numerical example is also given. The research results show that the model can effectively predict the wind speed, ensure the efficient operation of the wind turbines, and provide a theoretical basis for the stability of grid-connected wind power operation.

  15. Real-time cerebellar neuroprosthetic system based on a spiking neural network model of motor learning

    Science.gov (United States)

    Xu, Tao; Xiao, Na; Zhai, Xiaolong; Chan, Pak Kwan; Tin, Chung

    2018-02-01

    Objective. Damage to the brain, as a result of various medical conditions, impacts the everyday life of patients and there is still no complete cure to neurological disorders. Neuroprostheses that can functionally replace the damaged neural circuit have recently emerged as a possible solution to these problems. Here we describe the development of a real-time cerebellar neuroprosthetic system to substitute neural function in cerebellar circuitry for learning delay eyeblink conditioning (DEC). Approach. The system was empowered by a biologically realistic spiking neural network (SNN) model of the cerebellar neural circuit, which considers the neuronal population and anatomical connectivity of the network. The model simulated synaptic plasticity critical for learning DEC. This SNN model was carefully implemented on a field programmable gate array (FPGA) platform for real-time simulation. This hardware system was interfaced in in vivo experiments with anesthetized rats and it used neural spikes recorded online from the animal to learn and trigger conditioned eyeblink in the animal during training. Main results. This rat-FPGA hybrid system was able to process neuronal spikes in real time with an embedded cerebellum model of ~10 000 neurons and reproduce learning of DEC with different inter-stimulus intervals. Our results validated that the system performance is physiologically relevant at both the neural (firing pattern) and behavioral (eyeblink pattern) levels. Significance. This integrated system provides sufficient computational power for mimicking the cerebellar circuit in real time. The system interacts with the biological system naturally at the spike level and can be generalized for including other neural components (neuron types and plasticity) and neural functions for potential neuroprosthetic applications.

  16. Performance of joint modelling of time-to-event data with time-dependent predictors: an assessment based on transition to psychosis data

    Directory of Open Access Journals (Sweden)

    Hok Pan Yuen

    2016-10-01

    Joint modelling has emerged as a potential tool to analyse data with a time-to-event outcome and longitudinal measurements collected over a series of time points. Joint modelling involves the simultaneous modelling of the two components, namely the time-to-event component and the longitudinal component. The main challenges of joint modelling are its mathematical and computational complexity. Recent advances in joint modelling have seen the emergence of several software packages which implement some of the computational requirements to run joint models. These packages have opened the door to more routine use of joint modelling. Through simulations and real data based on transition-to-psychosis research, we compared joint model analysis of time-to-event outcomes with conventional Cox regression analysis. We also compared a number of packages for fitting joint models. Our results suggest that joint modelling does have advantages over conventional analysis despite its potential complexity. Our results also suggest that the results of analyses may depend on how the methodology is implemented.

  17. Fuzzy model-based adaptive synchronization of time-delayed chaotic systems

    International Nuclear Information System (INIS)

    Vasegh, Nastaran; Majd, Vahid Johari

    2009-01-01

    In this paper, fuzzy model-based synchronization of a class of first order chaotic systems described by delayed-differential equations is addressed. To design the fuzzy controller, the chaotic system is modeled by Takagi-Sugeno fuzzy system considering the properties of the nonlinear part of the system. Assuming that the parameters of the chaotic system are unknown, an adaptive law is derived to estimate these unknown parameters, and the stability of error dynamics is guaranteed by Lyapunov theory. Numerical examples are given to demonstrate the validity of the proposed adaptive synchronization approach.

  18. Model based analysis of the time scales associated to pump start-ups

    Energy Technology Data Exchange (ETDEWEB)

    Dazin, Antoine, E-mail: antoine.dazin@lille.ensam.fr [Arts et métiers ParisTech/LML Laboratory UMR CNRS 8107, 8 bld Louis XIV, 59046 Lille cedex (France); Caignaert, Guy [Arts et métiers ParisTech/LML Laboratory UMR CNRS 8107, 8 bld Louis XIV, 59046 Lille cedex (France); Dauphin-Tanguy, Geneviève, E-mail: genevieve.dauphin-tanguy@ec-lille.fr [Univ Lille Nord de France, Ecole Centrale de Lille/CRISTAL UMR CNRS 9189, BP 48, 59651, Villeneuve d’Ascq cedex F 59000 (France)

    2015-11-15

    Highlights: • A dynamic model of a hydraulic system has been built. • Three periods in a pump start-up have been identified. • The time scales of each period have been estimated. • The parameters affecting the rapidity of a pump start-up have been explored. - Abstract: The paper presents a non-dimensional analysis of the behaviour of a hydraulic system during pump fast start-ups. The system is composed of a radial flow pump and its suction and delivery pipes. It is modelled using the bond graph methodology. The prediction of the model is validated by comparison to experimental results. An analysis of the time evolution of the terms acting on the total pump pressure is proposed. It allows for a decomposition of the start-up into three consecutive periods. The time scales associated with these periods are estimated. The effects of parameters (angular acceleration, final rotation speed, pipe length and resistance) affecting the start-up rapidity are then explored.

  19. Time series modeling of live-cell shape dynamics for image-based phenotypic profiling.

    Science.gov (United States)

    Gordonov, Simon; Hwang, Mun Kyung; Wells, Alan; Gertler, Frank B; Lauffenburger, Douglas A; Bathe, Mark

    2016-01-01

    Live-cell imaging can be used to capture spatio-temporal aspects of cellular responses that are not accessible to fixed-cell imaging. As the use of live-cell imaging continues to increase, new computational procedures are needed to characterize and classify the temporal dynamics of individual cells. For this purpose, here we present the general experimental-computational framework SAPHIRE (Stochastic Annotation of Phenotypic Individual-cell Responses) to characterize phenotypic cellular responses from time series imaging datasets. Hidden Markov modeling is used to infer and annotate morphological state and state-switching properties from image-derived cell shape measurements. Time series modeling is performed on each cell individually, making the approach broadly useful for analyzing asynchronous cell populations. Two-color fluorescent cells simultaneously expressing actin and nuclear reporters enabled us to profile temporal changes in cell shape following pharmacological inhibition of cytoskeleton-regulatory signaling pathways. Results are compared with existing approaches conventionally applied to fixed-cell imaging datasets, and indicate that time series modeling captures heterogeneous dynamic cellular responses that can improve drug classification and offer additional important insight into mechanisms of drug action. The software is available at http://saphire-hcs.org.
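
    A minimal sketch of the hidden-Markov step on one synthetic cell track, using the third-party hmmlearn package (an assumption on our part; the paper's own software is at http://saphire-hcs.org): a Gaussian HMM is fitted per cell and each frame is annotated with an inferred morphological state.

        import numpy as np
        from hmmlearn.hmm import GaussianHMM   # third-party, assumed installed

        rng = np.random.default_rng(1)
        # One cell's time series: 100 frames x 2 shape features, switching midway.
        track = np.vstack([rng.normal(0.0, 0.3, (50, 2)),
                           rng.normal(2.0, 0.3, (50, 2))])

        model = GaussianHMM(n_components=2, covariance_type="diag", n_iter=100)
        model.fit(track)                 # fitted per cell, as in the framework above
        states = model.predict(track)    # per-frame morphological state annotation
        print(states[:5], states[-5:])
        print(model.transmat_.round(2))  # inferred state-switching propensities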

  1. FPGA-Based Real Time, Multichannel Emulated-Digital Retina Model Implementation

    Directory of Open Access Journals (Sweden)

    Zsolt Vörösházi

    2009-01-01

    The function of the low-level image processing that takes place in the biological retina is to compress only the relevant visual information to a manageable size. The behavior of the layers and different channels of the neuromorphic retina has been successfully modeled by cellular neural/nonlinear networks (CNNs). In this paper, we present an extended, application-specific emulated-digital CNN-Universal Machine (CNN-UM) architecture to compute the complex dynamics of this mammalian retina in video real time. The proposed emulated-digital implementation of the multichannel retina model is compared to previously developed models in three key aspects: processing speed, number of physical cells, and accuracy. Our primary aim was to build a simple, real-time test environment with camera input and display output in order to mimic the behavior of the retina model implemented on an emulated-digital CNN using low-cost, moderate-sized field-programmable gate array (FPGA) architectures.

  2. Free terminal time optimal control problem of an HIV model based on a conjugate gradient method.

    Science.gov (United States)

    Jang, Taesoo; Kwon, Hee-Dae; Lee, Jeehyun

    2011-10-01

    The minimum duration of treatment periods and the optimal multidrug therapy for human immunodeficiency virus (HIV) type 1 infection are considered. We formulate an optimal tracking problem, attempting to drive the states of the model to a "healthy" steady state in which the viral load is low and the immune response is strong. We study an optimal time frame as well as HIV therapeutic strategies by analyzing the free terminal time optimal tracking control problem. The minimum duration of treatment periods and the optimal multidrug therapy are found by solving the corresponding optimality systems with the additional transversality condition for the terminal time. We demonstrate by numerical simulations that the optimal dynamic multidrug therapy can lead to the long-term control of HIV by the strong immune response after discontinuation of therapy.

  3. Modeling and simulation of tumor-influenced high resolution real-time physics-based breast models for model-guided robotic interventions

    Science.gov (United States)

    Neylon, John; Hasse, Katelyn; Sheng, Ke; Santhanam, Anand P.

    2016-03-01

    Breast radiation therapy is typically delivered to the patient in either supine or prone position. Each of these positioning systems has its limitations in terms of tumor localization, dose to the surrounding normal structures, and patient comfort. We envision developing a pneumatically controlled breast immobilization device that will enable the benefits of both supine and prone positioning. In this paper, we present a physics-based breast deformable model that aids in both the design of the breast immobilization device as well as a control module for the device during every day positioning. The model geometry is generated from a subject's CT scan acquired during the treatment planning stage. A GPU based deformable model is then generated for the breast. A mass-spring-damper approach is then employed for the deformable model, with the spring modeled to represent a hyperelastic tissue behavior. Each voxel of the CT scan is then associated with a mass element, which gives the model its high resolution nature. The subject specific elasticity is then estimated from a CT scan in prone position. Our results show that the model can deform at >60 deformations per second, which satisfies the real-time requirement for robotic positioning. The model interacts with a computer designed immobilization device to position the breast and tumor anatomy in a reproducible location. The design of the immobilization device was also systematically varied based on the breast geometry, tumor location, elasticity distribution and the reproducibility of the desired tumor location.

  4. Passenger Flow Forecasting Research for Airport Terminal Based on SARIMA Time Series Model

    Science.gov (United States)

    Li, Ziyu; Bi, Jun; Li, Zhiyin

    2017-12-01

    Based on operational data from Kunming Changshui International Airport during 2016, this paper proposes a Seasonal Autoregressive Integrated Moving Average (SARIMA) model to predict passenger flow. The model considers not only the non-stationarity and autocorrelation of the sequence, but also its daily periodicity. The prediction results can accurately describe the trend of airport passenger flow and provide scientific decision support for the optimal allocation of airport resources and the optimization of the departure process. The results show that this model is applicable to the short-term prediction of airport terminal departure passenger traffic, with an average error ranging from 1% to 3%. The difference between the predicted and true values of passenger traffic flow is quite small, which indicates that the model has fairly good prediction ability.
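
    A minimal SARIMA sketch with statsmodels on a synthetic hourly series follows; the (1,0,1)(1,1,0) order and the daily seasonal period of 24 are illustrative placeholders, not the specification fitted to the Changshui data.

        import numpy as np
        from statsmodels.tsa.statespace.sarimax import SARIMAX

        rng = np.random.default_rng(2)
        hours = np.arange(24 * 30)
        y = 1000.0 + 300.0 * np.sin(2.0 * np.pi * hours / 24.0) \
            + rng.normal(0.0, 30.0, hours.size)

        result = SARIMAX(y, order=(1, 0, 1),
                         seasonal_order=(1, 1, 0, 24)).fit(disp=False)
        forecast = result.forecast(steps=24)   # next day's hourly passenger flow
        print(forecast[:5].round(1))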

  5. Wave propagation numerical models in damage detection based on the time domain spectral element method

    International Nuclear Information System (INIS)

    Ostachowicz, W; Kudela, P

    2010-01-01

    A Spectral Element Method is used for wave propagation modelling. A 3D solid spectral element is derived with shape functions based on Lagrange interpolation and Gauss-Lobatto-Legendre points. This approach is applied for displacement approximation suited for fundamental modes of Lamb waves as well as potential distribution in piezoelectric transducers. The novelty is the model geometry extension from flat to curved elements for application in shell-like structures. Exemplary visualisations of waves excited by the piezoelectric transducers in curved shell structure made of aluminium alloy are presented. Simple signal analysis of wave interaction with crack is performed. The crack is modelled by separation of appropriate nodes between elements. An investigation of influence of the crack length on wave propagation signals is performed. Additionally, some aspects of the spectral element method implementation are discussed.

  6. DNA DSB measurements and modelling approaches based on gamma-H2AX foci time evolution

    Science.gov (United States)

    Esposito, Giuseppe; Campa, Alessandro; Antonelli, Francesca; Mariotti, Luca; Belli, Mauro; Giardullo, Paola; Simone, Giustina; Antonella Tabocchini, Maria; Ottolenghi, Andrea

    DNA double strand breaks (DSBs) induced by ionising radiation are considered the main damage related to deleterious consequences in cells. Unrepaired or mis-repaired DSBs can cause mutations or loss of chromosome regions which can eventually lead to cell death or neoplastic transformation. Quantification of the number and complexity of DSBs induced by low doses of radiation remains a complex problem. About ten years ago Rogakou et al. proposed an immunofluorescent technique able to detect even a single DSB per cell. This approach is based on the serine 139 phosphorylation of many molecules (up to 2000) of histone H2AX (γ-H2AX) following the induction of a DSB in the DNA. DSBs can be visualized as foci by immunofluorescence using phospho-specific antibodies, so that enumeration of foci can be used to measure DSB induction and processing. It is still not completely clear how γ-H2AX dephosphorylation takes place; however, it has been related to DSB repair, in particular to the efficiency of DSB repair. In this work we analyse the H2AX phosphorylation-dephosphorylation kinetics after irradiation of primary human fibroblasts (AG1522 cell line) with radiation of differing quality, that is, γ-rays and α-particles (125 keV/µm), with the aim of comparing the time evolution of γ-H2AX foci. Our results show that, after a dose of 0.5 Gy, both γ-rays and α-particles induce the maximum number of γ-H2AX foci within 30 minutes from irradiation, that this number depends on the radiation type and is consistent with the number of track traversals in α-irradiated nuclei, and that the dephosphorylation kinetics are very different, with α-induced foci disappearing more slowly than γ-induced foci. In this work a modelling approach to estimate the number of γ-ray-induced DSBs detectable by the γ-H2AX assay is presented. The competing processes of appearance and disappearance of visible foci will be modeled taking into account the

  7. Hydrological real-time modelling in the Zambezi river basin using satellite-based soil moisture and rainfall data

    Directory of Open Access Journals (Sweden)

    P. Meier

    2011-03-01

    Reliable real-time forecasts of the discharge can provide valuable information for the management of a river basin system. For the management of ecological releases even discharge forecasts with moderate accuracy can be beneficial. Sequential data assimilation using the Ensemble Kalman Filter provides a tool that is both efficient and robust for a real-time modelling framework. One key parameter in a hydrological system is the soil moisture, which recently can be characterized by satellite based measurements. A forecasting framework for the prediction of discharges is developed and applied to three different sub-basins of the Zambezi River Basin. The model is solely based on remote sensing data providing soil moisture and rainfall estimates. The soil moisture product used is based on the back-scattering intensity of a radar signal measured by a radar scatterometer. These soil moisture data correlate well with the measured discharge of the corresponding watershed if the data are shifted by a time lag which is dependent on the size and the dominant runoff process in the catchment. This time lag is the basis for the applicability of the soil moisture data for hydrological forecasts. The conceptual model developed is based on two storage compartments. The processes modeled include evaporation losses, infiltration and percolation. The application of this model in a real-time modelling framework yields good results in watersheds where soil storage is an important factor. The lead time of the forecast is dependent on the size and the retention capacity of the watershed. For the largest watershed a forecast over 40 days can be provided. However, the quality of the forecast increases significantly with decreasing prediction time. In a watershed with little soil storage and a quick response to rainfall events, the performance is relatively poor and the lead time is as short as 10 days only.

  8. Multidimensional k-nearest neighbor model based on EEMD for financial time series forecasting

    Science.gov (United States)

    Zhang, Ningning; Lin, Aijing; Shang, Pengjian

    2017-07-01

    In this paper, we propose a new two-stage methodology that combines the ensemble empirical mode decomposition (EEMD) with a multidimensional k-nearest neighbor model (MKNN) in order to forecast the closing price and high price of stocks simultaneously. Modified k-nearest neighbor (KNN) algorithms are finding increasingly wide application in prediction across many fields. Empirical mode decomposition (EMD) decomposes a nonlinear and non-stationary signal into a series of intrinsic mode functions (IMFs); however, it cannot reveal characteristic information of the signal with much accuracy as a result of mode mixing. So ensemble empirical mode decomposition (EEMD), an improved method of EMD, is presented to resolve the weaknesses of EMD by adding white noise to the original data. With EEMD, the components with true physical meaning can be extracted from the time series. Utilizing the advantages of EEMD and MKNN, the newly proposed ensemble empirical mode decomposition combined with the multidimensional k-nearest neighbor model (EEMD-MKNN) has high predictive precision for short-term forecasting. Moreover, we extend this methodology to the case of two dimensions to forecast the closing price and high price of four stocks (NAS, S&P500, DJI and STI stock indices) at the same time. The results indicate that the proposed EEMD-MKNN model has a higher forecast precision than the EMD-KNN, KNN and ARIMA methods.
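
    A minimal one-dimensional sketch of the two-stage idea, assuming the third-party PyEMD (EMD-signal) and scikit-learn packages: the series is decomposed by EEMD, a KNN regressor forecasts each component from its own lagged values, and the component forecasts are summed. The data are synthetic, not the stock indices above.

        import numpy as np
        from PyEMD import EEMD                       # third-party, assumed installed
        from sklearn.neighbors import KNeighborsRegressor

        rng = np.random.default_rng(3)
        t = np.arange(500)
        price = np.sin(t / 20.0) + 0.1 * rng.standard_normal(t.size)

        imfs = EEMD().eemd(price)        # intrinsic mode functions of the series
        lag, pred = 5, 0.0
        for comp in imfs:                # one KNN per decomposed component
            X = np.column_stack([comp[i:i + len(comp) - lag] for i in range(lag)])
            y = comp[lag:]
            knn = KNeighborsRegressor(n_neighbors=5).fit(X, y)
            pred += knn.predict(comp[-lag:].reshape(1, -1))[0]
        print("one-step-ahead forecast:", round(pred, 4))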

  9. The research and practice based on the full-time visitation model in clinical medical education

    Directory of Open Access Journals (Sweden)

    Hong Zhang

    2015-01-01

    Most teaching hospitals of higher medical colleges and universities carry certain clinical teaching tasks, but the traditional "two-stage" teaching pattern, with theoretical teaching in the early stage and clinical practice arranged late, has drawbacks: practice time is too concentrated and there is a chasm between students' theory and practice. It has been suggested that students should have contact with clinical diagnosis and treatment earlier, visit more patients, and increase the ratio of visitation to coursework. But as more and more students flood into universities, clinical visitation has become a bottleneck for improving students' ability. To resolve this problem, we carried out practice and exploration at Rizhao City People's Hospital from September 2005 to July 2014. The students were divided randomly into a full-time visitation model group and a "two-stage" pattern group. Individual factors differed considerably between the two groups. The full-time visitation model in clinical medical education builds a new mode of clinical practice teaching: the medical students' concept of doctor-patient communication, humanistic care for patients, basic theoretical knowledge, clinical practice skills, and graduate admission rate all improved significantly. Continuous improvement of the OSCE examination is needed to make evaluation more scientific, objective and fair.

  10. Research on Adaptive Neural Network Control System Based on Nonlinear U-Model with Time-Varying Delay

    Directory of Open Access Journals (Sweden)

    Fengxia Xu

    2014-01-01

    The U-model can approximate a large class of smooth nonlinear time-varying delay systems to any accuracy by using a polynomial in time-varying delay parameters. This paper proposes a new approach, namely the U-model approach, to solving the problems of analysis and synthesis for nonlinear systems. Based on the idea of a discrete-time U-model with time-varying delay, an adaptive neural network identification algorithm is given for the nonlinear model. Then, the controller is designed using the Newton-Raphson formula, and a stability analysis is given for the closed-loop nonlinear systems. Finally, illustrative examples are given to show the validity and applicability of the obtained results.

  11. Accounting for large deformations in real-time simulations of soft tissues based on reduced-order models.

    Science.gov (United States)

    Niroomandi, S; Alfaro, I; Cueto, E; Chinesta, F

    2012-01-01

    Model reduction techniques have been shown to constitute a valuable tool for real-time simulation in surgical environments and other fields. However, some limitations imposed by real-time constraints have not yet been overcome. One such limitation is the severe time constraint (a resolution frequency of 500 Hz) that precludes the use of Newton-like schemes for solving nonlinear models such as those usually employed for modeling biological tissues. In this work we present a technique able to deal with geometrically nonlinear models, based on the use of model reduction techniques together with an efficient nonlinear solver. The performance of the technique is illustrated with several examples. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  12. Nuclear grade cable thermal life model by time temperature superposition algorithm based on Matlab GUI

    International Nuclear Information System (INIS)

    Lu Yanyun; Gu Shenjie; Lou Tianyang

    2014-01-01

    Background: As nuclear grade cable must endure a harsh environment within its design life, it is critical to predict cable thermal life accurately, since thermal aging is one of the dominant aging mechanisms. Purpose: Using the time temperature superposition (TTS) method, the aim is to construct a nuclear grade cable thermal life model, predict cable residual life, and develop an interactive life-model interface under the Matlab GUI. Methods: According to TTS, the cable thermal life model can be constructed by shifting data groups at various temperatures to a preset reference temperature with a translation factor determined by nonlinear programming optimization. The interactive interface developed under the Matlab GUI consists of a superposition mode and a standard mode, which include features such as optimization of the translation factor, calculation of activation energy, construction of the thermal aging curve, and analysis of the aging mechanism. Results: Comparing calculation results between the superposition and standard methods, the TTS result has better accuracy than the standard method. Furthermore, the confidence level of the nuclear grade cable thermal life obtained with TTS is higher than with the standard method. Conclusion: The results show that the TTS methodology is applicable to thermal life prediction of nuclear grade cable. The interactive interface under the Matlab GUI achieves the anticipated functionality. (authors)
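
    The shift underlying TTS can be sketched in a few lines, assuming an Arrhenius translation factor; the activation energy and temperatures below are illustrative, not the cable data from the study.

        import numpy as np

        R = 8.314                 # J/(mol K)
        Ea = 1.1e5                # activation energy, J/mol (illustrative)
        T_ref = 90.0 + 273.15     # reference temperature, K

        def shift_factor(T_celsius):
            T = T_celsius + 273.15
            return np.exp((Ea / R) * (1.0 / T - 1.0 / T_ref))  # a_T < 1 for T > T_ref

        t_120 = np.array([10.0, 100.0, 1000.0])   # aging times at 120 C, hours
        print(t_120 / shift_factor(120.0))        # equivalent times at T_ref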

  13. The role of residence time in diagnostic models of global carbon storage capacity: model decomposition based on a traceable scheme.

    Science.gov (United States)

    Yizhao, Chen; Jianyang, Xia; Zhengguo, Sun; Jianlong, Li; Yiqi, Luo; Chengcheng, Gang; Zhaoqi, Wang

    2015-11-06

    As a key factor that determines carbon storage capacity, residence time (τE) is not well constrained in terrestrial biosphere models. This factor is recognized as an important source of model uncertainty. In this study, to understand how τE influences terrestrial carbon storage prediction in diagnostic models, we introduced a model decomposition scheme in the Boreal Ecosystem Productivity Simulator (BEPS) and then compared it with a prognostic model. The result showed that τE ranged from 32.7 to 158.2 years. The baseline residence time (τ'E) was stable for each biome, ranging from 12 to 53.7 years for forest biomes and 4.2 to 5.3 years for non-forest biomes. The spatiotemporal variations in τE were mainly determined by the environmental scalar (ξ). By comparing models, we found that the BEPS uses a more detailed pool construction but rougher parameterization for carbon allocation and decomposition. With respect to ξ comparison, the global difference in the temperature scalar (ξt) averaged 0.045, whereas the moisture scalar (ξw) had a much larger variation, with an average of 0.312. We propose that further evaluations and improvements in τ'E and ξw predictions are essential to reduce the uncertainties in predicting carbon storage by the BEPS and similar diagnostic models.

  14. Gaussian Mixture Random Coefficient model based framework for SHM in structures with time-dependent dynamics under uncertainty

    Science.gov (United States)

    Avendaño-Valencia, Luis David; Fassois, Spilios D.

    2017-12-01

    The problem of vibration-based damage diagnosis in structures characterized by time-dependent dynamics under significant environmental and/or operational uncertainty is considered. A stochastic framework consisting of a Gaussian Mixture Random Coefficient model of the uncertain time-dependent dynamics under each structural health state, proper estimation methods, and Bayesian or minimum distance type decision making, is postulated. The Random Coefficient (RC) time-dependent stochastic model with coefficients following a multivariate Gaussian Mixture Model (GMM) allows for significant flexibility in uncertainty representation. Certain of the model parameters are estimated via a simple procedure which is founded on the related Multiple Model (MM) concept, while the GMM weights are explicitly estimated for optimizing damage diagnostic performance. The postulated framework is demonstrated via damage detection in a simple simulated model of a quarter-car active suspension with time-dependent dynamics and considerable uncertainty on the payload. Comparisons with a simpler Gaussian RC model based method are also presented, with the postulated framework shown to be capable of offering considerable improvement in diagnostic performance.

  15. Predicting the Water Level Fluctuation in an Alpine Lake Using Physically Based, Artificial Neural Network, and Time Series Forecasting Models

    Directory of Open Access Journals (Sweden)

    Chih-Chieh Young

    2015-01-01

    Accurate prediction of water level fluctuation is important in lake management due to its significant impacts in various aspects. This study utilizes four model approaches to predict water levels in the Yuan-Yang Lake (YYL) in Taiwan: a three-dimensional hydrodynamic model, an artificial neural network (ANN) model (back-propagation neural network, BPNN), a time series forecasting model (autoregressive moving average with exogenous inputs, ARMAX), and a combined hydrodynamic and ANN model. In particular, the black-box ANN model and the physically based hydrodynamic model are coupled to more accurately predict water level fluctuation. Hourly water level data (a total of 7296 observations) were collected for model calibration (training) and validation. Three statistical indicators (mean absolute error, root mean square error, and coefficient of correlation) were adopted to evaluate model performance. Overall, the results demonstrate that the hydrodynamic model can satisfactorily predict hourly water level changes during the calibration stage but not during the validation stage. The ANN and ARMAX models predict the water level better than the hydrodynamic model does. Meanwhile, the results from the ANN model are superior to those of the ARMAX model in both training and validation phases. The novel proposed concept of using a three-dimensional hydrodynamic model in conjunction with an ANN model clearly shows improved prediction accuracy for water level fluctuation.

  16. An accurate real-time model of maglev planar motor based on compound Simpson numerical integration

    Directory of Open Access Journals (Sweden)

    Baoquan Kou

    2017-05-01

    To realize high-speed and precise control of the maglev planar motor, a more accurate real-time electromagnetic model, which considers the influence of the coil corners, is proposed in this paper. Three coordinate systems, for the stator, the mover and the corner coil, are established. The coil is divided into two segments, the straight coil segment and the corner coil segment, in order to obtain a complete electromagnetic model. When only the first harmonic of the flux density distribution of a Halbach magnet array is taken into account, the integration can be carried out over the two segments according to the Lorentz force law. The force and torque formulas of the straight coil segment can be derived directly from the Newton-Leibniz formula; however, this is not applicable to the corner coil segment. Therefore, the compound Simpson numerical integration method is used in this paper to solve the corner segment. Validated by simulation and experiment, the proposed model has high accuracy and can easily be applied in practice.

  17. An accurate real-time model of maglev planar motor based on compound Simpson numerical integration

    Science.gov (United States)

    Kou, Baoquan; Xing, Feng; Zhang, Lu; Zhou, Yiheng; Liu, Jiaqi

    2017-05-01

    To realize high-speed and precise control of the maglev planar motor, a more accurate real-time electromagnetic model, which considers the influence of the coil corners, is proposed in this paper. Three coordinate systems, for the stator, the mover and the corner coil, are established. The coil is divided into two segments, the straight coil segment and the corner coil segment, in order to obtain a complete electromagnetic model. When only the first harmonic of the flux density distribution of a Halbach magnet array is taken into account, the integration can be carried out over the two segments according to the Lorentz force law. The force and torque formulas of the straight coil segment can be derived directly from the Newton-Leibniz formula; however, this is not applicable to the corner coil segment. Therefore, the compound Simpson numerical integration method is used in this paper to solve the corner segment. Validated by simulation and experiment, the proposed model has high accuracy and can easily be applied in practice.
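
    For reference, a minimal compound (composite) Simpson quadrature of the kind applied to the corner-segment integrals; the integrand here is a generic placeholder rather than the Lorentz-force expression.

        import math

        def composite_simpson(f, a, b, n):
            """Integrate f over [a, b] with n subintervals (n must be even)."""
            if n % 2:
                raise ValueError("n must be even")
            h = (b - a) / n
            s = f(a) + f(b)
            s += 4.0 * sum(f(a + i * h) for i in range(1, n, 2))
            s += 2.0 * sum(f(a + i * h) for i in range(2, n, 2))
            return s * h / 3.0

        print(composite_simpson(math.sin, 0.0, math.pi, 64))   # ~2.0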

  18. Time dependent approach of TeV blazars based on a model of inhomogeneous stratified jet

    International Nuclear Information System (INIS)

    Boutelier, T.

    2009-05-01

    The study of the emission and variability mechanisms of TeV blazars has been the subject of intensive research for years. The commonly used homogeneous one-zone model is puzzling since it yields very high Lorentz factors, in contradiction with other observational evidence. In this work, I describe a new time-dependent multi-zone approach in the framework of the two-flow model. I compute the emission of a full jet in which relativistic electron-positron pairs with a pileup distribution propagate. The evolution and emission of the plasma are computed taking into account a turbulent heating term, radiative cooling, and a pair production term due to the photo-annihilation process. Applied to PKS 2155-304, the model allows the reproduction of the full spectra, as well as the simultaneous multi-wavelength variability, with a relatively small Lorentz factor. The variability is explained by the instability of the pair creation process. Nonetheless, the value is still too high to agree with other observational evidence in the radio band. Hence, I show in the last part of this work how to reconcile high Lorentz factors with the absence of apparent superluminal motion in radio, by taking into account the effect of the opening angle on the appearance of relativistic jets. (author)

  19. Application of wavelet-based multi-model Kalman filters to real-time flood forecasting

    Science.gov (United States)

    Chou, Chien-Ming; Wang, Ru-Yih

    2004-04-01

    This paper presents the application of a multimodel method using a wavelet-based Kalman filter (WKF) bank to simultaneously estimate decomposed state variables and unknown parameters for real-time flood forecasting. Applying the Haar wavelet transform alters the state vector and input vector of the state space. In this way, an overall detail plus approximation describes each new state vector and input vector, which allows the WKF to simultaneously estimate and decompose state variables. The wavelet-based multimodel Kalman filter (WMKF) is a multimodel Kalman filter (MKF) in which each Kalman filter has been replaced by a WKF. The WMKF then obtains M estimated state vectors. Next, the M state estimates, each weighted by its probability, which is also determined on-line, are combined to form an optimal estimate. Validations conducted for the Wu-Tu watershed, a small watershed in Taiwan, have demonstrated that the method is effective because of the decomposition of the wavelet transform, the adaptation of the time-varying Kalman filter and the characteristics of the multimodel method. Validation results also reveal that the resulting method enhances the accuracy of runoff prediction of the rainfall-runoff process in the Wu-Tu watershed.

  20. Time Series Analysis of Non-Gaussian Observations Based on State Space Models from Both Classical and Bayesian Perspectives

    NARCIS (Netherlands)

    Durbin, J.; Koopman, S.J.M.

    1998-01-01

    The analysis of non-Gaussian time series using state space models is considered from both classical and Bayesian perspectives. The treatment in both cases is based on simulation using importance sampling and antithetic variables; Monte Carlo Markov chain methods are not employed. Non-Gaussian

  1. Multiobjective optimization model of intersection signal timing considering emissions based on field data: A case study of Beijing.

    Science.gov (United States)

    Kou, Weibin; Chen, Xumei; Yu, Lei; Gong, Huibo

    2018-04-18

    Most existing signal timing models aim to minimize the total delay and stops at intersections, without considering environmental factors. This paper analyzes the trade-off between vehicle emissions and traffic efficiency on the basis of field data. First, considering the different operating modes of cruising, acceleration, deceleration, and idling, field emissions and Global Positioning System (GPS) data are collected to estimate emission rates for heavy-duty and light-duty vehicles. Second, a multiobjective signal timing optimization model is established, based on a genetic algorithm, to minimize delay, stops, and emissions. Finally, a case study is conducted in Beijing. Nine scenarios are designed considering different weights of emissions and traffic efficiency. Compared with results using the Highway Capacity Manual (HCM) 2010, signal timing optimized by the model proposed in this paper decreases vehicle delays and emissions more significantly. The optimization model can be applied in different cities, providing support for eco-signal design and development. Vehicle emissions are heavy at signalized intersections in urban areas. The multiobjective signal timing optimization model is proposed considering the trade-off between vehicle emissions and traffic efficiency on the basis of field data. The results indicate that signal timing optimized by the model proposed in this paper decreases vehicle emissions and delays more significantly. The optimization model can be applied in different cities, providing support for eco-signal design and development.

  2. Thermal erosion of a permafrost coastline: Improving process-based models using time-lapse photography

    Science.gov (United States)

    Wobus, C.; Anderson, R.; Overeem, I.; Matell, N.; Clow, G.; Urban, F.

    2011-01-01

    Coastal erosion rates locally exceeding 30 m y⁻¹ have been documented along Alaska's Beaufort Sea coastline, and a number of studies suggest that these erosion rates have accelerated as a result of climate change. However, a lack of direct observational evidence has limited our progress in quantifying the specific processes that connect climate change to coastal erosion rates in the Arctic. In particular, while longer ice-free periods are likely to lead to both warmer surface waters and longer fetch, the relative roles of thermal and mechanical (wave) erosion in driving coastal retreat have not been comprehensively quantified. We focus on a permafrost coastline in the northern National Petroleum Reserve-Alaska (NPR-A), where coastal erosion rates have averaged 10-15 m y⁻¹ over two years of direct monitoring. We take advantage of these extraordinary rates of coastal erosion to observe and quantify coastal erosion directly via time-lapse photography in combination with meteorological observations. Our observations indicate that the erosion of these bluffs is largely thermally driven, but that surface winds play a crucial role in exposing the frozen bluffs to the radiatively warmed seawater that drives melting of interstitial ice. To first order, erosion in this setting can be modeled using formulations developed to describe iceberg deterioration in the open ocean. These simple models provide a conceptual framework for evaluating how climate-induced changes in thermal and wave energy might influence future erosion rates in this setting.

  3. The application of convolution-based statistical model on the electrical breakdown time delay distributions in neon

    International Nuclear Information System (INIS)

    Maluckov, Cedomir A.; Karamarkovic, Jugoslav P.; Radovic, Miodrag K.; Pejovic, Momcilo M.

    2004-01-01

    The convolution-based model of the electrical breakdown time delay distribution is applied for the statistical analysis of experimental results obtained in a neon-filled diode tube at 6.5 mbar. First, the numerical breakdown time delay density distributions are obtained by stochastic modeling as the sum of two independent random variables: the electrical breakdown statistical time delay with an exponential distribution, and the discharge formative time with a Gaussian distribution. Then, the single characteristic breakdown time delay distribution is obtained as the convolution of these two random variables with previously determined parameters. These distributions show good correspondence with the experimental distributions, obtained on the basis of 1000 successive and independent measurements. The shape of the distributions is investigated, and the corresponding skewness and kurtosis are plotted, in order to follow the transition from a Gaussian to an exponential distribution.
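
    The sum of an exponential statistical delay and a Gaussian formative time is the exponentially modified Gaussian, so the model can be checked numerically in a few lines (the parameter values below are illustrative, not the paper's fitted values):

```python
import numpy as np
from scipy import stats

# hypothetical parameters: statistical delay ~ Exp(mean tau_s), formative ~ N(mu_f, sigma_f)
tau_s, mu_f, sigma_f = 50.0, 120.0, 8.0      # microseconds, illustrative only

samples = (np.random.exponential(tau_s, 100_000)
           + np.random.normal(mu_f, sigma_f, 100_000))

# the convolution of the two densities is the exponentially modified Gaussian,
# available in SciPy as exponnorm with shape K = tau_s / sigma_f
K = tau_s / sigma_f
emg = stats.exponnorm(K, loc=mu_f, scale=sigma_f)

t = np.linspace(samples.min(), samples.max(), 400)
print("sample skewness:", stats.skew(samples))   # > 0: between Gaussian and exponential
print("peak of analytic convolution density:", emg.pdf(t).max())
```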

  4. Evaluation of different time domain peak models using extreme learning machine-based peak detection for EEG signal.

    Science.gov (United States)

    Adam, Asrul; Ibrahim, Zuwairie; Mokhtar, Norrima; Shapiai, Mohd Ibrahim; Cumming, Paul; Mubin, Marizan

    2016-01-01

    Various peak models have been introduced to detect and analyze peaks in the time domain analysis of electroencephalogram (EEG) signals. In general, a peak model in the time domain analysis consists of a set of signal parameters, such as amplitude, width, and slope. Models including those proposed by Dumpala, Acir, Liu, and Dingle are routinely used to detect peaks in EEG signals acquired in clinical studies of epilepsy or eye blink. The optimal peak model is the one giving the most reliable peak detection performance in a particular application. A fair measure of the performance of different models requires a common and unbiased platform. In this study, we evaluate the performance of the four peak models using the extreme learning machine (ELM)-based peak detection algorithm. We found that the Dingle model gave the best performance, with 72 % accuracy in the analysis of real EEG data. Statistical analysis confirmed that the Dingle model afforded significantly better mean testing accuracy than the Acir and Liu models, which were in the range 37-52 %, while showing no significant difference from the Dumpala model.
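
    The ELM classifier at the heart of the comparison is simple enough to sketch: a fixed random hidden layer followed by closed-form least-squares output weights (the three-feature data below are synthetic stand-ins for the amplitude/width/slope parameters of a peak model):

```python
import numpy as np

def elm_train(X, y, n_hidden=40, seed=0):
    """Basic ELM: random hidden layer, least-squares output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # fixed random input weights
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                  # output weights in closed form
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# hypothetical peak-candidate features (amplitude, width, slope) and labels
X = np.random.randn(200, 3)
y = (X[:, 0] - 0.5 * X[:, 1] + 0.2 * X[:, 2] > 0).astype(float)
W, b, beta = elm_train(X, y)
acc = ((elm_predict(X, W, b, beta) > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")
```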

  5. Time-Frequency Analysis Using Warped-Based High-Order Phase Modeling

    Directory of Open Access Journals (Sweden)

    Ioana Cornel

    2005-01-01

    Full Text Available The high-order ambiguity function (HAF) was introduced for the estimation of polynomial-phase signals (PPS) embedded in noise. Since the HAF is a nonlinear operator, it suffers from noise-masking effects and from the appearance of undesired cross-terms when multicomponent PPS are analyzed. In order to improve the performance of the HAF, the multi-lag HAF concept was proposed. Based on this approach, several advanced methods (e.g., the product high-order ambiguity function (PHAF)) have recently been proposed. Nevertheless, the performance of these new methods is affected by the error-propagation effect, which drastically limits the order of the polynomial approximation. This phenomenon is especially pronounced when high-order polynomial modeling is needed, as in the representation of digital modulation signals or acoustic transient signals. The effect is caused by the technique used for polynomial order reduction, common to existing approaches: multiplication of the signal with the complex conjugated exponentials formed from the estimated coefficients. In this paper, we introduce an alternative method to reduce the polynomial order, based on successive unitary signal transformations, one for each polynomial order. We prove that this method considerably reduces the effect of error propagation; namely, with this order-reduction method, the estimation error at a given order depends only on the performance of the estimation method at that order.

  6. Downsizer - A Graphical User Interface-Based Application for Browsing, Acquiring, and Formatting Time-Series Data for Hydrologic Modeling

    Science.gov (United States)

    Ward-Garrison, Christian; Markstrom, Steven L.; Hay, Lauren E.

    2009-01-01

    The U.S. Geological Survey Downsizer is a computer application that selects, downloads, verifies, and formats station-based time-series data for environmental-resource models, particularly the Precipitation-Runoff Modeling System. Downsizer implements a client-server software architecture. The client presents a map-based, graphical user interface that is intuitive to modelers; the server provides streamflow and climate time-series data from over 40,000 measurement stations across the United States. This report is the Downsizer user's manual and provides (1) an overview of the software design, (2) installation instructions, (3) a description of the graphical user interface, (4) a description of selected output files, and (5) troubleshooting information.

  7. The Abridgment and Relaxation Time for a Linear Multi-Scale Model Based on Multiple Site Phosphorylation.

    Directory of Open Access Journals (Sweden)

    Shuo Wang

    Full Text Available Random effects in cellular systems are an important topic in systems biology and are often simulated with Gillespie's stochastic simulation algorithm (SSA). Abridgment refers to model reduction that approximates a group of reactions by a smaller group with fewer species and reactions. This paper presents a theoretical analysis, based on comparison of the first exit time, of the abridgment of a linear chain reaction model motivated by systems with multiple phosphorylation sites. The analysis shows that if the relaxation time of the fast subsystem is much smaller than the mean firing time of the slow reactions, the abridgment can be applied with little error. This analysis is further verified with numerical experiments for models of bistable switches and oscillations in which the linear chain system plays a critical role.
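
    The linear chain at the center of the analysis is easy to simulate exactly with Gillespie's SSA; a minimal sketch (rates, populations and end time are arbitrary illustrative values):

```python
import numpy as np

def ssa_chain(k_rates, x0, t_end):
    """Gillespie SSA for a linear chain S1 -> S2 -> ... -> Sn."""
    x = np.array(x0, dtype=int)
    t = 0.0
    history = [(t, x.copy())]
    while t < t_end:
        props = k_rates * x[:-1]                 # propensity of each conversion step
        a0 = props.sum()
        if a0 == 0:
            break
        t += np.random.exponential(1.0 / a0)     # time to next reaction
        j = np.searchsorted(np.cumsum(props), np.random.rand() * a0)
        x[j] -= 1                                # S_j -> S_{j+1} fires
        x[j + 1] += 1
        history.append((t, x.copy()))
    return history

# four-site chain, e.g. successive phosphorylation states (rates hypothetical)
hist = ssa_chain(k_rates=np.array([5.0, 5.0, 1.0]), x0=[100, 0, 0, 0], t_end=10.0)
print("final state:", hist[-1])
```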

  8. A model of negotiation scenarios based on time, relevance and control used to define advantageous positions in a negotiation

    Directory of Open Access Journals (Sweden)

    Omar Guillermo Rojas Altamirano

    2016-04-01

    Full Text Available Models that apply to negotiation are based on different perspectives, ranging from the relationship between the actors to game theory or the steps in a procedure. This research proposes a model of negotiation scenarios that considers three factors (time, relevance and control), which appear to be the most important in a negotiation. These factors interact with each other and create different scenarios for each of the actors involved in a negotiation. The proposed model facilitates not only the creation of a negotiation strategy but also an ideal choice of effective tactics.

  9. Limited information estimation of the diffusion-based item response theory model for responses and response times.

    Science.gov (United States)

    Ranger, Jochen; Kuhn, Jörg-Tobias; Szardenings, Carsten

    2016-05-01

    Psychological tests are usually analysed with item response models. Recently, some alternative measurement models have been proposed that were derived from cognitive process models developed in experimental psychology. These models consider the responses but also the response times of the test takers. Two such models are the Q-diffusion model and the D-diffusion model. Both models can be calibrated with the diffIRT package of the R statistical environment via marginal maximum likelihood (MML) estimation. In this manuscript, an alternative approach to model calibration is proposed. The approach is based on weighted least squares estimation and parallels the standard estimation approach in structural equation modelling. Estimates are determined by minimizing the discrepancy between the observed and the implied covariance matrix. The estimator is simple to implement, consistent, and asymptotically normally distributed. Least squares estimation also provides a test of model fit by comparing the observed and implied covariance matrix. The estimator and the test of model fit are evaluated in a simulation study. Although parameter recovery is good, the estimator is less efficient than the MML estimator. © 2016 The British Psychological Society.

  10. Modelling bursty time series

    International Nuclear Information System (INIS)

    Vajna, Szabolcs; Kertész, János; Tóth, Bálint

    2013-01-01

    Many human-related activities show power-law decaying interevent time distribution with exponents usually varying between 1 and 2. We study a simple task-queuing model, which produces bursty time series due to the non-trivial dynamics of the task list. The model is characterized by a priority distribution as an input parameter, which describes the choice procedure from the list. We give exact results on the asymptotic behaviour of the model and we show that the interevent time distribution is power-law decaying for any kind of input distributions that remain normalizable in the infinite list limit, with exponents tunable between 1 and 2. The model satisfies a scaling law between the exponents of interevent time distribution (β) and autocorrelation function (α): α + β = 2. This law is general for renewal processes with power-law decaying interevent time distribution. We conclude that slowly decaying autocorrelation function indicates long-range dependence only if the scaling law is violated. (paper)
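
    A queue of this kind takes only a few lines of code. The sketch below implements a closely related execute-the-highest-priority protocol (a Barabási-style variant, not the paper's exact model) and records task waiting times, whose histogram is heavy-tailed:

```python
import numpy as np

def priority_queue_waits(list_len=2, steps=200_000):
    """Execute-highest-priority queue; returns task waiting times (in steps)."""
    prio = np.random.rand(list_len)    # priorities drawn from a uniform input distribution
    age = np.zeros(list_len, dtype=int)
    waits = []
    for _ in range(steps):
        j = int(np.argmax(prio))       # deterministic highest-priority protocol
        waits.append(age[j])           # how long the executed task waited
        prio[j] = np.random.rand()     # replace executed task with a fresh one
        age += 1
        age[j] = 0
    return np.array(waits)

w = priority_queue_waits()
w = w[w > 0]
# a log-log histogram of w should be close to a power law
counts, edges = np.histogram(w, bins=np.logspace(0, 4, 25))
print(counts[:10])
```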

  11. Impedance models in time domain

    NARCIS (Netherlands)

    Rienstra, S.W.

    2005-01-01

    Necessary conditions for an impedance function are derived. Methods available in the literature are discussed. A format with recipe is proposed for an exact impedance condition in time domain on a time grid, based on the Helmholtz resonator model. An explicit solution is given of a pulse reflecting

  12. Ship-Track Models Based on Poisson-Distributed Port-Departure Times

    National Research Council Canada - National Science Library

    Heitmeyer, Richard

    2006-01-01

    ... of those ships, and their nominal speeds. The probability law assumes that the ship departure times are Poisson-distributed with a time-varying departure rate and that the ship speeds and ship routes are statistically independent...

  13. Simulating Serious Games: A Discrete-Time Computational Model Based on Cognitive Flow Theory

    Science.gov (United States)

    Westera, Wim

    2018-01-01

    This paper presents a computational model for simulating how people learn from serious games. While avoiding the combinatorial explosion of a game's micro-states, the model offers a meso-level pathfinding approach, which is guided by cognitive flow theory and various concepts from learning sciences. It extends a basic, existing model by exposing…

  14. Travel time reliability modeling.

    Science.gov (United States)

    2011-07-01

    This report includes three papers as follows: : 1. Guo F., Rakha H., and Park S. (2010), "A Multi-state Travel Time Reliability Model," : Transportation Research Record: Journal of the Transportation Research Board, n 2188, : pp. 46-54. : 2. Park S.,...

  15. Modelling mean transit time of stream base flow during tropical cyclone rainstorm in a steep relief forested catchment

    Science.gov (United States)

    Lee, Jun-Yi; Huang, Jr-Chuan

    2017-04-01

    Mean transit time (MTT) is one of the fundamental catchment descriptors used to advance understanding of hydrological, ecological, and biogeochemical processes and to improve water resources management. However, few studies have documented the base flow partitioning (BFP) and mean transit time within mountainous catchments in typhoon alley. We used a unique data set of 18O isotope and conductivity composition of rainfall (136 mm to 778 mm) and streamflow water samples collected for 14 tropical cyclone events (2011 to 2015) in a steep-relief forested catchment (Pinglin, in northern Taiwan). A lumped hydrological model, HBV, with a dispersion-model transit time distribution was used to estimate total flow, base flow, and the MTT of stream base flow. Linear regression between MTT and hydrometric variables (precipitation intensity and antecedent precipitation index) was used to explore controls on MTT variation. Results revealed that the simulation performance for both total flow and base flow was satisfactory; the Nash-Sutcliffe model efficiency coefficient was 0.848 for total flow and 0.732 for base flow. The estimated MTTs decreased with increasing event magnitude, varying from 4 to 21 days as the BFP increased from 63% to 92%. The negative correlation of event magnitude with MTT and BFP shows that the event forcing controls the MTT and BFP. A negative relationship between MTT and the antecedent precipitation index was also found; in other words, wetter antecedent moisture conditions activate the fast flow paths more rapidly. This approach is well suited for constraining process-based modeling in a range of high-precipitation-intensity, steep-relief forested environments.

  16. A Distributed Web-based Solution for Ionospheric Model Real-time Management, Monitoring, and Short-term Prediction

    Science.gov (United States)

    Kulchitsky, A.; Maurits, S.; Watkins, B.

    2006-12-01

    With the widespread availability of the Internet today, many people can monitor various scientific research activities. It is important to accommodate this interest by providing on-line access to dynamic and illustrative Web resources that demonstrate different aspects of ongoing research. It is especially important to explain these research activities to high school and undergraduate students, thereby providing more information for decisions concerning their future studies. Such Web resources are also important for clarifying scientific research to the general public, in order to achieve better awareness of research progress in various fields. Particularly rewarding is the dissemination of information about ongoing projects within universities and research centers to their local communities. The benefits of this type of scientific outreach are mutual, since the development of Web-based automatic systems is a prerequisite for many research projects targeting real-time monitoring and/or modeling of natural conditions. Continuous operation of such systems provides ongoing research opportunities for statistically massive validation of the models as well. We have developed a Web-based system to run the University of Alaska Fairbanks Polar Ionospheric Model in real time. This model makes use of networking and computational resources at the Arctic Region Supercomputing Center. The system was designed to be portable among various operating systems and computational resources; its components can be installed across different computers, separating Web servers and computational engines. The core of the system is a Real-Time Management module (RMM) written in Python, which coordinates remote input data transfers, the ionospheric model runs, MySQL database population, and the PHP scripts for Web-page preparation. The RMM downloads current geophysical inputs as soon as they become available at different on-line depositories. This information is processed to

  17. Models and synchronization of time-delayed complex dynamical networks with multi-links based on adaptive control

    International Nuclear Information System (INIS)

    Peng Haipeng; Wei Nan; Li Lixiang; Xie Weisheng; Yang Yixian

    2010-01-01

    In this Letter, time-delay has been introduced to split the networks, upon which a model of complex dynamical networks with multi-links is constructed. Moreover, based on Lyapunov stability theory and some hypotheses, we achieve synchronization between two complex networks with different structures by designing effective controllers. The validity of the results is demonstrated through numerical simulations.

  18. Decoupling of modeling and measuring interval in groundwater time series analysis based on response characteristics

    NARCIS (Netherlands)

    Berendrecht, W.L.; Heemink, A.W.; Geer, F.C. van; Gehrels, J.C.

    2003-01-01

    A state-space representation of the transfer function-noise (TFN) model allows the choice of a modeling (input) interval that is smaller than the measuring interval of the output variable. Since in geohydrological applications the interval of the available input series (precipitation excess) is

  19. A Procedure for Identification of Appropriate State Space and ARIMA Models Based on Time-Series Cross-Validation

    Directory of Open Access Journals (Sweden)

    Patrícia Ramos

    2016-11-01

    Full Text Available In this work, a cross-validation procedure is used to identify an appropriate Autoregressive Integrated Moving Average (ARIMA) model and an appropriate state space model for a time series. A minimum size for the training set is specified. The procedure is based on one-step forecasts and uses different training sets, each containing one more observation than the previous one. All possible state space models and all ARIMA models whose orders are allowed to range reasonably are fitted, considering raw data and log-transformed data with regular differencing (up to second-order differences) and, if the time series is seasonal, seasonal differencing (up to first-order differences). The root mean squared error for each model is calculated from the one-step forecasts obtained. The model which has the lowest root mean squared error and passes the Ljung–Box test using all of the available data at a reasonable significance level is selected among all the ARIMA and state space models considered. The procedure is exemplified in this paper with a case study of retail sales of different categories of women’s footwear from a Portuguese retailer, and its accuracy is compared with three reliable forecasting approaches. The results show that our procedure consistently forecasts more accurately than the other approaches and that the improvements in accuracy are significant.
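
    The expanding-window, one-step-ahead scheme is straightforward to reproduce; a minimal sketch of the ARIMA side using statsmodels (the series, candidate orders and training-set minimum are illustrative, and a full run would sweep state space models as well):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

def rolling_one_step_rmse(y, order, min_train=30):
    """Expanding-window cross-validation with one-step-ahead forecasts."""
    errors = []
    for i in range(min_train, len(y)):
        fit = ARIMA(y[:i], order=order).fit()   # refit on the enlarged training set
        fc = fit.forecast(steps=1)              # one-step forecast
        errors.append(y[i] - fc[0])
    return np.sqrt(np.mean(np.square(errors)))

# hypothetical monthly sales series
rng = np.random.default_rng(1)
y = np.cumsum(rng.normal(0.2, 1.0, 80)) + 50

candidates = [(1, 1, 0), (0, 1, 1), (1, 1, 1)]
scores = {o: rolling_one_step_rmse(y, o) for o in candidates}
best = min(scores, key=scores.get)

# residual whiteness check (Ljung-Box) on the full-sample fit of the winner
resid = ARIMA(y, order=best).fit().resid
print(best, scores[best], acorr_ljungbox(resid, lags=[10])["lb_pvalue"].values)
```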

  20. Real-Time Model Based Process Monitoring of Enzymatic Biodiesel Production

    DEFF Research Database (Denmark)

    Price, Jason Anthony; Nordblad, Mathias; Woodley, John

    2015-01-01

    In this contribution we extend our modelling work on the enzymatic production of biodiesel where we demonstrate the application of a Continuous-Discrete Extended Kalman Filter (a state estimator). The state estimator is used to correct for mismatch between the process data and the process model... for fed-batch production of biodiesel. For the three process runs investigated, using a single tuning parameter, qx = 2 × 10⁻², which represents the uncertainty in the process model, it was possible over the entire course of the reaction to reduce the overall mean and standard deviation of the error between..., there was over a ten-fold decrease in the overall mean error for the state estimator prediction compared with the predictions from the pure model simulations. It is also shown that the state estimator can be used as a tool for detection of outliers in the measurement data. For the enzymatic biodiesel process...

  1. Supervised learning based model for predicting variability-induced timing errors

    NARCIS (Netherlands)

    Jiao, X.; Rahimi, A.; Narayanaswamy, B.; Fatemi, H.; Pineda de Gyvez, J.; Gupta, R.K.

    2015-01-01

    Circuit designers typically combat variations in hardware and workload by increasing conservative guardbanding that leads to operational inefficiency. Reducing this excessive guardband is highly desirable, but causes timing errors in synchronous circuits. We propose a methodology for supervised

  2. A turbulent time scale based k–ε model for probability density function modeling of turbulence/chemistry interactions: Application to HCCI combustion

    International Nuclear Information System (INIS)

    Maroteaux, Fadila; Pommier, Pierre-Lin

    2013-01-01

    Highlights: ► A turbulent time evolution is introduced into the stochastic modeling approach. ► The number of particles is optimized through a restricted initial distribution. ► The amplitude of the initial distribution is modeled by the magnitude of the turbulence field. -- Abstract: Homogeneous Charge Compression Ignition (HCCI) engine technology is known as an alternative to reduce NOx and particulate matter (PM) emissions. As shown by several experimental studies published in the literature, the ideally homogeneous mixture charge becomes stratified in composition and temperature, and turbulent mixing is found to play an important role in controlling the combustion progress. In a previous study, an IEM model (Interaction by Exchange with the Mean) was used to describe the micromixing in a stochastic reactor model that simulates the HCCI process. The IEM model is a deterministic model, based on the principle that the scalar value approaches the mean value over the entire volume with a characteristic mixing time. In that previous model, the turbulent time scale was treated as a fixed parameter. The present study focuses on the development of a micro-mixing time model that accounts for the physical phenomena it stands for. For that purpose, a (k–ε) model is used to express this micro-mixing time. The turbulence model used here is based on a zero-dimensional energy cascade applied during the compression and expansion strokes: mean kinetic energy is converted to turbulent kinetic energy, which is in turn dissipated to heat through viscous dissipation. In addition, a relation for calculating the amplitude of the initial heterogeneities is proposed. The comparison of simulation results against experimental data shows overall satisfactory agreement with a variable turbulent time scale.
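
    The IEM relaxation with a k–ε-based mixing time can be sketched in a few lines (the constant C_phi, the turbulence values and the particle ensemble are illustrative assumptions, not the paper's engine model):

```python
import numpy as np

def iem_step(phi, k, eps, dt, c_phi=2.0):
    """IEM micromixing: each particle relaxes to the mean with rate ~ eps/k."""
    omega = c_phi * eps / k            # inverse micro-mixing time from k-eps
    return phi - 0.5 * omega * (phi - phi.mean()) * dt

# hypothetical stochastic-reactor particles: initial temperature heterogeneity
phi = np.random.normal(1000.0, 30.0, 500)      # K
k, eps = 20.0, 400.0                           # m^2/s^2, m^2/s^3 (illustrative)
for _ in range(200):
    phi = iem_step(phi, k, eps, dt=1e-4)
print("std after mixing:", phi.std())          # scalar variance decays toward zero
```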

  3. The Time Division Multi-Channel Communication Model and the Correlative Protocol Based on Quantum Time Division Multi-Channel Communication

    International Nuclear Information System (INIS)

    Liu Xiao-Hui; Pei Chang-Xing; Nie Min

    2010-01-01

    Based on classical time division multi-channel communication theory, we present a scheme for quantum time-division multi-channel communication (QTDMC). Moreover, a model of a quantum time division switch (QTDS) and a correlative protocol for QTDMC are proposed. The quantum bit error rate (QBER) is analyzed and a QBER simulation test is performed. The scheme shows that the QTDS can carry out multi-user communication through a quantum channel, that the QBER can meet the reliability requirements of communication, and that the QTDMC protocol is highly practicable and portable. The QTDS scheme may play an important role in the establishment of large-scale quantum communication in the future. (general)

  4. Assessment of surface solar irradiance derived from real-time modelling techniques and verification with ground-based measurements

    Science.gov (United States)

    Kosmopoulos, Panagiotis G.; Kazadzis, Stelios; Taylor, Michael; Raptis, Panagiotis I.; Keramitsoglou, Iphigenia; Kiranoudis, Chris; Bais, Alkiviadis F.

    2018-02-01

    This study focuses on the assessment of surface solar radiation (SSR) based on operational neural network (NN) and multi-regression function (MRF) modelling techniques that produce instantaneous (in less than 1 min) outputs. Using real-time cloud and aerosol optical properties inputs from the Spinning Enhanced Visible and Infrared Imager (SEVIRI) on board the Meteosat Second Generation (MSG) satellite and the Copernicus Atmosphere Monitoring Service (CAMS), respectively, these models are capable of calculating SSR in high resolution (1 nm, 0.05°, 15 min) that can be used for spectrally integrated irradiance maps, databases and various applications related to energy exploitation. The real-time models are validated against ground-based measurements of the Baseline Surface Radiation Network (BSRN) in a temporal range varying from 15 min to monthly means, while a sensitivity analysis of the cloud and aerosol effects on SSR is performed to ensure reliability under different sky and climatological conditions. The simulated outputs, compared to their common training dataset created by the radiative transfer model (RTM) libRadtran, showed median error values in the range -15 to 15 % for the NN that produces spectral irradiances (NNS), 5-6 % underestimation for the integrated NN and close to zero errors for the MRF technique. The verification against BSRN revealed that the real-time calculation uncertainty ranges from -100 to 40 and -20 to 20 W m⁻², for the 15 min and monthly mean global horizontal irradiance (GHI) averages, respectively, while the accuracy of the input parameters, in terms of aerosol and cloud optical thickness (AOD and COT), and their impact on GHI, was of the order of 10 % as compared to the ground-based measurements. The proposed system aims to be utilized through studies and real-time applications which are related to solar energy production planning and use.

  5. Modeling and control for a magnetic levitation system based on SIMLAB platform in real time

    Directory of Open Access Journals (Sweden)

    Mundher H.A. Yaseen

    2018-03-01

    Full Text Available Magnetic levitation systems have become a hot topic of study due to their minimal friction and low energy consumption, which are regarded as very important issues. This paper proposes a new magnetic levitation system using the real-time control Simulink feature of the SIMLAB microcontroller. The control system of the maglev transportation system is verified by simulations with experimental results, and its superiority is indicated in comparison with previous literature and conventional control strategies. In addition, the proposed system was implemented under the effect of three controller types: the linear-quadratic regulator (LQR), the proportional-integral-derivative (PID) controller, and lead compensation. The controller performance was compared in terms of three parameters: peak overshoot, settling time, and rise time. The findings show good agreement between the simulation and the experimental results. Moreover, the LQR controller produced greater stability and a more homogeneous response than the other controllers. Experimentally, the LQR yielded values of 14.6%, 0.199, and 0.064 for peak overshoot, settling time, and rise time, respectively. Keywords: Magnetic levitation system, Linear Quadratic Regulator (LQR), PID control, Lead compensation

  6. Modeling and control for a magnetic levitation system based on SIMLAB platform in real time

    Science.gov (United States)

    Yaseen, Mundher H. A.; Abd, Haider J.

    2018-03-01

    Magnetic levitation systems have become a hot topic of study due to their minimal friction and low energy consumption, which are regarded as very important issues. This paper proposes a new magnetic levitation system using the real-time control Simulink feature of the SIMLAB microcontroller. The control system of the maglev transportation system is verified by simulations with experimental results, and its superiority is indicated in comparison with previous literature and conventional control strategies. In addition, the proposed system was implemented under the effect of three controller types: the linear-quadratic regulator (LQR), the proportional-integral-derivative (PID) controller, and lead compensation. The controller performance was compared in terms of three parameters: peak overshoot, settling time, and rise time. The findings show good agreement between the simulation and the experimental results. Moreover, the LQR controller produced greater stability and a more homogeneous response than the other controllers. Experimentally, the LQR yielded values of 14.6%, 0.199, and 0.064 for peak overshoot, settling time, and rise time, respectively.
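
    For reference, an LQR gain for a linearized levitation model can be computed directly with SciPy; the plant matrices and weights below are illustrative placeholders, not the paper's identified model:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# linearized levitation dynamics around the operating point (values illustrative):
# states x = [position error, velocity]; the positive stiffness term makes the
# open loop unstable, as in a maglev plant
A = np.array([[0.0, 1.0],
              [980.0, 0.0]])
B = np.array([[0.0],
              [20.0]])

Q = np.diag([100.0, 1.0])        # penalize position error most
R = np.array([[0.1]])            # control-effort weight

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P   # optimal state-feedback gain, u = -K x

closed_loop = A - B @ K
print("gain K:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(closed_loop))  # all in LHP
```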

  7. Time series forecasting using ERNN and QR based on Bayesian model averaging

    Science.gov (United States)

    Pwasong, Augustine; Sathasivam, Saratha

    2017-08-01

    The Bayesian model averaging technique is a multi-model combination technique. Here it was employed to amalgamate the Elman recurrent neural network (ERNN) technique with the quadratic regression (QR) technique, producing a hybrid known as the ERNN-QR technique. The forecasting potential of the hybrid technique is compared with the forecasting capabilities of the individual ERNN and QR techniques. The outcome revealed that the hybrid technique is superior to the individual techniques in the mean square error sense.

  8. On the validity of travel-time based nonlinear bioreactive transport models in steady-state flow.

    Science.gov (United States)

    Sanz-Prat, Alicia; Lu, Chuanhe; Finkel, Michael; Cirpka, Olaf A

    2015-01-01

    Travel-time based models simplify the description of reactive transport by replacing the spatial coordinates with the groundwater travel time, posing a quasi one-dimensional (1-D) problem and potentially rendering the determination of multidimensional parameter fields unnecessary. While the approach is exact for strictly advective transport in steady-state flow if the reactive properties of the porous medium are uniform, its validity is unclear when local-scale mixing affects the reactive behavior. We compare a two-dimensional (2-D), spatially explicit, bioreactive, advective-dispersive transport model, considered as "virtual truth", with three 1-D travel-time based models which differ in the conceptualization of longitudinal dispersion: (i) neglecting dispersive mixing altogether, (ii) introducing a local-scale longitudinal dispersivity constant in time and space, and (iii) using an effective longitudinal dispersivity that increases linearly with distance. The reactive system considers biodegradation of dissolved organic carbon, which is introduced into a hydraulically heterogeneous domain together with oxygen and nitrate. Aerobic and denitrifying bacteria use the energy of the microbial transformations for growth. We analyze six scenarios differing in the variance of log-hydraulic conductivity and in the inflow boundary conditions (constant versus time-varying concentration). The concentrations of the 1-D models are mapped to the 2-D domain by means of the kinematic (for case i), and mean groundwater age (for cases ii & iii), respectively. The comparison between concentrations of the "virtual truth" and the 1-D approaches indicates extremely good agreement when using an effective, linearly increasing longitudinal dispersivity in the majority of the scenarios, while the other two 1-D approaches reproduce at least the concentration tendencies well. At late times, all 1-D models give valid approximations of two-dimensional transport. We conclude that the

  9. Modeling of the attenuation of stress waves in concrete based on the Rayleigh damping model using time-reversal and PZT transducers

    Science.gov (United States)

    Tian, Zhen; Huo, Linsheng; Gao, Weihang; Li, Hongnan; Song, Gangbing

    2017-10-01

    Wave-based concrete structural health monitoring has attracted much attention. A stress wave experiences significant attenuation in concrete; however, there is a lack of a unified method for predicting the attenuation coefficient of the stress wave. In this paper, a simple and effective absorption attenuation model of stress waves in concrete is developed based on the Rayleigh damping model, which indicates that the absorption attenuation coefficient of stress waves in concrete is directly proportional to the square of the stress wave frequency when the damping ratio is small. In order to verify the theoretical model, related experiments were carried out. During the experiments, a concrete beam was designed in which d33-mode piezoelectric smart aggregates were embedded to detect the propagation of stress waves. It is difficult to distinguish direct stress waves due to the complex propagation paths and the reflection and scattering of stress waves in concrete. Hence, as another innovation of this paper, a new method for computing the absorption attenuation coefficient based on the time-reversal method is developed. Due to the self-adaptive focusing properties of the time-reversal method, the time-reversed stress wave focuses and generates a peak value, which eliminates the adverse effects of multipath propagation, reflection, and scattering. The absorption attenuation coefficient is computed by analyzing the peak value changes of the time-reversal focused signal. Finally, the experimental results are found to be in good agreement with the theoretical model.
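
    The frequency-squared dependence follows in two lines from the Rayleigh form of the damping matrix; a sketch of the argument using standard modal-damping relations (not the paper's full derivation):

```latex
% Rayleigh damping C = aM + bK gives the modal damping ratio
\zeta(\omega) = \frac{a}{2\omega} + \frac{b\,\omega}{2}.
% For negligible mass-proportional damping (a \approx 0) and small \zeta,
% a wave at angular frequency \omega travelling at speed c attenuates as
\alpha(\omega) = \frac{\zeta(\omega)\,\omega}{c}
\;\approx\; \frac{b\,\omega^{2}}{2c} \;\propto\; f^{2},
% i.e. the absorption coefficient grows with the square of the frequency.
```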

  10. Crystal plasticity based modeling of time and scale dependent behavior of thin films

    NARCIS (Netherlands)

    Erturk, I.; Gao, K.; Bielen, J.A.; Dommelen, van J.A.W.; Geers, M.G.D.

    2013-01-01

    The micro- and sub-micro-scale dimensions of the components of modern high-tech products pose challenging engineering problems that require advanced tools to tackle them. An example hereof is time-dependent strain recovery, here referred to as anelasticity, which is observed in metallic thin film

  11. Towards real-time communication between in vivo neurophysiological data sources and simulator-based brain biomimetic models.

    Science.gov (United States)

    Lee, Giljae; Matsunaga, Andréa; Dura-Bernal, Salvador; Zhang, Wenjie; Lytton, William W; Francis, Joseph T; Fortes, José Ab

    2014-11-01

    Development of more sophisticated implantable brain-machine interfaces (BMIs) will require both interpretation of the neurophysiological data being measured and subsequent determination of signals to be delivered back to the brain. Computational models are at the heart of the BMI machinery and are therefore an essential tool in both of these processes. One approach is to utilize brain biomimetic models (BMMs) to develop and instantiate these algorithms. These must then be connected as hybrid systems in order to interface the BMM with in vivo data acquisition devices and prosthetic devices. The combined system then provides a test bed for neuroprosthetic rehabilitative solutions and medical devices for the repair and enhancement of the damaged brain. We propose here a computer network-based design for this purpose, detailing its internal modules and data flows. We describe a prototype implementation of the design, enabling interaction between the Plexon Multichannel Acquisition Processor (MAP) server, a commercial tool to collect signals from microelectrodes implanted in a live subject, and a BMM, a NEURON-based model of sensorimotor cortex capable of controlling a virtual arm. The prototype implementation supports an online mode for real-time simulations, as well as an offline mode for data analysis and simulations without real-time constraints, and provides binning operations to discretize continuous input to the BMM and filtering operations for dealing with noise. Evaluation demonstrated that the implementation successfully delivered monkey spiking activity to the BMM through LAN environments while respecting real-time constraints.

  12. A Time Scheduling Model of Logistics Service Supply Chain Based on the Customer Order Decoupling Point: A Perspective from the Constant Service Operation Time

    Science.gov (United States)

    Yang, Yi; Xu, Haitao; Liu, Xiaoyan; Wang, Yijia; Liang, Zhicheng

    2014-01-01

    In mass customization logistics service, reasonable scheduling of the logistics service supply chain (LSSC), especially time scheduling, is beneficial to increasing its competitiveness. Therefore, the effect of a customer order decoupling point (CODP) on time scheduling performance should be considered. To minimize the total order operation cost of the LSSC, minimize the difference between the expected and actual times of completing the service orders, and maximize the satisfaction of functional logistics service providers, this study establishes an LSSC time scheduling model based on the CODP. Matlab 7.8 software is used in the numerical analysis of a specific example. Results show that the order completion time of the LSSC can be delayed or brought forward, but cannot be infinitely advanced or infinitely delayed. The optimal comprehensive performance can be obtained if the expected order completion time is appropriately delayed. The increase in supply chain comprehensive performance caused by an increase in the relationship coefficient of the logistics service integrator (LSI) is limited. The LSI's relative degree of concern for cost and for service delivery punctuality changes not only the CODP but also the scheduling performance of the LSSC. PMID:24715818

  13. A time scheduling model of logistics service supply chain based on the customer order decoupling point: a perspective from the constant service operation time.

    Science.gov (United States)

    Liu, Weihua; Yang, Yi; Xu, Haitao; Liu, Xiaoyan; Wang, Yijia; Liang, Zhicheng

    2014-01-01

    In mass customization logistics service, reasonable scheduling of the logistics service supply chain (LSSC), especially time scheduling, is beneficial to increasing its competitiveness. Therefore, the effect of a customer order decoupling point (CODP) on time scheduling performance should be considered. To minimize the total order operation cost of the LSSC, minimize the difference between the expected and actual times of completing the service orders, and maximize the satisfaction of functional logistics service providers, this study establishes an LSSC time scheduling model based on the CODP. Matlab 7.8 software is used in the numerical analysis of a specific example. Results show that the order completion time of the LSSC can be delayed or brought forward, but cannot be infinitely advanced or infinitely delayed. The optimal comprehensive performance can be obtained if the expected order completion time is appropriately delayed. The increase in supply chain comprehensive performance caused by an increase in the relationship coefficient of the logistics service integrator (LSI) is limited. The LSI's relative degree of concern for cost and for service delivery punctuality changes not only the CODP but also the scheduling performance of the LSSC.

  14. A time-dependent Green's function-based model for stream ...

    African Journals Online (AJOL)

    DRINIE

    2003-07-03

    Jul 3, 2003 ... applications, this Green's function has found use primarily in linear heat transfer and flow ... based on the mathematical description of the flow with the nonlinear ... i ∂/∂x + j ∂/∂y is the two-dimensional gradient operator.

  15. Using ground-based time-lapse gravity observations for hydrological model calibration

    DEFF Research Database (Denmark)

    Christiansen, Lars

    places will worsen the situation. Hydrological models are computer programs that help us understand the path water takes from the time it falls as precipitation until it evaporates back into the atmosphere. The models are likewise used to predict the consequences of changes on all scales. This can be anything from... Water in the soil is difficult to measure, but at the same time this type of data is very effective when calibrating models. It is information that we cannot get from wells, which merely give us the extent of and the pressure in the groundwater. In my research project I show that we can measure changes in the amount of water in the soil as local... changes in gravity and use them to calibrate hydrological models. When the air in the soil's pores is replaced by water, the density of the soil increases, and with it the gravitational pull; or, to put it popularly: you get heavier when it rains! In my research project I have investigated how...

  16. Land use and land cover change based on historical space-time model

    Science.gov (United States)

    Sun, Qiong; Zhang, Chi; Liu, Min; Zhang, Yongjing

    2016-09-01

    Land use and cover change is a leading-edge topic in current research on global environmental change, and case studies of typical areas are an important approach to understanding such change. Taking the Qiantang River (Zhejiang, China) as an example, this study explores automatic classification of land use using remote sensing technology and analyzes historical space-time change by remote sensing monitoring. The study combines spectral angle mapping (SAM) with multi-source information to create a convenient, efficient, and high-precision automatic land use classification method that meets the application requirements and suits the complex landforms of the study area. This work analyzes the historical space-time characteristics of land use and cover change in the Qiantang River basin in 2001, 2007 and 2014, in order to (i) verify the feasibility of studying land use change with remote sensing technology, (ii) accurately understand the change of land use and cover as well as the historical space-time evolution trend, (iii) provide a realistic basis for the sustainable development of the Qiantang River basin and (iv) provide strong information support and a new research method for optimizing the Qiantang River land use structure and achieving optimal allocation of land resources and scientific management.
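
    The SAM classifier itself reduces to an arccosine of normalized dot products; a minimal sketch (band counts, reference spectra and noise levels are invented for illustration):

```python
import numpy as np

def sam_classify(pixels, references):
    """Spectral angle mapper: assign each pixel to the closest reference spectrum."""
    p = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
    r = references / np.linalg.norm(references, axis=1, keepdims=True)
    cos = np.clip(p @ r.T, -1.0, 1.0)
    angles = np.arccos(cos)                  # spectral angle in radians
    return angles.argmin(axis=1), angles.min(axis=1)

# hypothetical 4-band reference spectra: water, forest, built-up
refs = np.array([[0.02, 0.05, 0.04, 0.01],
                 [0.03, 0.06, 0.05, 0.40],
                 [0.15, 0.18, 0.20, 0.25]])
pixels = refs[np.random.randint(0, 3, 1000)] + np.random.normal(0, 0.01, (1000, 4))
labels, angle = sam_classify(pixels, refs)
print(np.bincount(labels))                   # class counts recovered from noisy pixels
```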

  17. Parameterizing Spatial Models of Infectious Disease Transmission that Incorporate Infection Time Uncertainty Using Sampling-Based Likelihood Approximations.

    Directory of Open Access Journals (Sweden)

    Rajat Malik

    Full Text Available A class of discrete-time models of infectious disease spread, referred to as individual-level models (ILMs), are typically fitted in a Bayesian Markov chain Monte Carlo (MCMC) framework. These models quantify probabilistic outcomes regarding the risk of infection of susceptible individuals due to various susceptibility and transmissibility factors, including their spatial distance from infectious individuals. The infectious pressure from infected individuals exerted on susceptible individuals is intrinsic to these ILMs. Unfortunately, quantifying this infectious pressure for data sets containing many individuals can be computationally burdensome, leading to a time-consuming likelihood calculation and, thus, computationally prohibitive MCMC-based analysis. This problem worsens when using data augmentation to allow for uncertainty in infection times. In this paper, we develop sampling methods that can be used to calculate a fast, approximate likelihood when fitting such disease models. A simple random sampling approach is initially considered, followed by various spatially stratified schemes. We test and compare the performance of our methods with both simulated data and data from the 2001 foot-and-mouth disease (FMD) epidemic in the U.K. Our results indicate that substantial computational savings can be obtained (albeit, of course, with some information loss), suggesting that such techniques may be of use in the analysis of very large epidemic data sets.
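
    The simple-random-sampling idea can be sketched directly: estimate each susceptible's infectious pressure from a random subset of infectives and rescale by the sampling fraction (the power-law spatial kernel and all parameter values below are illustrative assumptions, not the paper's fitted FMD model):

```python
import numpy as np

def infection_prob(sus_xy, inf_xy, beta=0.5, alpha=2.0, sample=None, rng=None):
    """P(susceptible infected this step) with optional sampling of infectives."""
    rng = rng or np.random.default_rng()
    scale = 1.0
    if sample is not None and sample < len(inf_xy):
        idx = rng.choice(len(inf_xy), size=sample, replace=False)
        scale = len(inf_xy) / sample          # rescale the sampled pressure
        inf_xy = inf_xy[idx]
    d = np.linalg.norm(sus_xy[:, None, :] - inf_xy[None, :, :], axis=2)
    pressure = scale * beta * np.power(np.maximum(d, 1e-6), -alpha).sum(axis=1)
    return 1.0 - np.exp(-pressure)            # ILM-style infection probability

rng = np.random.default_rng(0)
sus = rng.uniform(0, 10, (500, 2))
inf = rng.uniform(0, 10, (2000, 2))
full = infection_prob(sus, inf, rng=rng)
approx = infection_prob(sus, inf, sample=200, rng=rng)   # 10x fewer distance terms
print("mean abs error of sampled pressure:", np.abs(full - approx).mean())
```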

  18. A Space-Time Network-Based Modeling Framework for Dynamic Unmanned Aerial Vehicle Routing in Traffic Incident Monitoring Applications

    Directory of Open Access Journals (Sweden)

    Jisheng Zhang

    2015-06-01

    Full Text Available It is essential for transportation management centers to equip and manage a network of fixed and mobile sensors in order to quickly detect traffic incidents and further monitor the related impact areas, especially for high-impact accidents with dramatic traffic congestion propagation. As emerging small Unmanned Aerial Vehicles (UAVs) start to enjoy a more flexible regulatory environment, it is critically important to fully explore the potential of using UAVs for monitoring recurring and non-recurring traffic conditions and special events on transportation networks. This paper presents a space-time network-based modeling framework for integrated fixed and mobile sensor networks, in order to provide a rapid and systematic road traffic monitoring mechanism. By constructing a discretized space-time network to characterize not only the speed of UAVs but also the time-sensitive impact areas of traffic congestion, we formulate the problem as a linear integer programming model that minimizes detection delay cost and operational cost, subject to feasible flying route constraints. A Lagrangian relaxation solution framework is developed to decompose the original complex problem into a series of computationally efficient time-dependent, least-cost path-finding sub-problems. Several examples demonstrate the results of the proposed models in UAV route planning for small and medium-scale networks.

  19. A new incomplete-repair model based on a 'reciprocal-time' pattern of sublethal damage repair

    International Nuclear Information System (INIS)

    Dale, R.G.; Fowler, J.F.

    1999-01-01

    A radiobiological model for closely spaced non-instantaneous radiation fractions is presented, based on the premise that the time process of sublethal damage (SLD) repair is 'reciprocal-time' (second order), rather than exponential (first order), in form. The initial clinical implications of such an incomplete-repair model are assessed. A previously derived linear-quadratic-based model was revised to take account of the possibility that SLD may repair with time such that the fraction of an element of initial damage remaining at time t is given as 1/(1+zt), where z is an appropriate rate constant; z is the reciprocal of the first half-time (τ) of repair. The general equation so derived for incomplete repair is applicable to all types of radiotherapy delivered at high, low and medium dose-rate in fractions delivered at regular time intervals. The model allows both the fraction duration and interfraction intervals to vary between zero and infinity. For any given value of z, reciprocal repair is associated with an apparent 'slowing-down' in the SLD repair rate as treatment proceeds. The instantaneous repair rates are not directly governed by total dose or dose per fraction, but are influenced by the treatment duration and individual fraction duration. Instantaneous repair rates of SLD appear to be slower towards the end of a continuous treatment, and are also slower following 'long' fractions than they are following 'short' fractions. The new model, with its single repair-rate parameter, is shown to be capable of providing a degree of quantitative explanation for some enigmas that have been encountered in clinical studies. A single-component reciprocal repair process provides an alternative explanation for the apparent existence of a range of repair rates in human tissues, and which have hitherto been explained by postulating the existence of a multi-exponential repair process. The build-up of SLD over extended treatments is greater than would be inferred using a
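
    A short worked comparison makes the apparent 'slowing-down' concrete (standard algebra on the stated formula, not additional material from the paper):

```latex
f_{\exp}(t) = e^{-\lambda t}, \qquad
f_{\mathrm{rec}}(t) = \frac{1}{1 + zt}, \quad z = \frac{1}{\tau}.
% Under reciprocal-time repair, successive halvings of the remaining damage
% occur at
%   f = 1/2 \text{ at } t = \tau, \quad
%   f = 1/4 \text{ at } t = 3\tau, \quad
%   f = 1/8 \text{ at } t = 7\tau,
% so each halving takes twice as long as the previous one: a single-parameter
% process that mimics a spectrum of exponential repair rates.
```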

  20. Modeling real time CBTC operation in mixed traffic networks: A simulation-based approach

    OpenAIRE

    De Martinis, Valerio; Toletti, Ambra; Weidmann, Ulrich; Nash, Andrew

    2017-01-01

    Vehicle automation and continuous connection with communication networks are the key innovations currently redefining transport systems. Just as autonomous cars are rapidly changing road transport, increasing railway automation will help to maximize the use of infrastructure, increase schedule reliability, improve safety, and increase energy efficiency. However, railway operations are fundamentally different from road-based transport systems and automation must be specifically tailored to rai...

  1. An Improved Global Harmony Search Algorithm for the Identification of Nonlinear Discrete-Time Systems Based on Volterra Filter Modeling

    Directory of Open Access Journals (Sweden)

    Zongyan Li

    2016-01-01

    Full Text Available This paper describes an improved global harmony search (IGHS) algorithm for identifying nonlinear discrete-time systems based on a second-order Volterra model. The IGHS is an improved version of the novel global harmony search (NGHS) algorithm, and it makes two significant improvements on the NGHS. First, the genetic mutation operation is modified by combining the normal and Cauchy distributions, which enables the IGHS to fully explore and exploit the solution space. Second, opposition-based learning (OBL) is introduced and modified to improve the quality of harmony vectors. The IGHS algorithm is applied to two numerical examples: a nonlinear discrete-time rational system and a real heat exchanger. The results of the IGHS are compared with those of three other methods, and it is verified to be more effective in solving the above two problems with different input signals and system memory sizes.
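
    The model being identified is a second-order Volterra filter; computing its output for candidate kernels (the quantity the IGHS searches over) is a short exercise, with the kernel values below purely illustrative:

```python
import numpy as np

def volterra2_output(x, h1, h2):
    """Second-order Volterra filter: linear kernel h1 plus quadratic kernel h2."""
    M = len(h1)                       # memory size; h2 is M x M
    y = np.zeros(len(x))
    for n in range(len(x)):
        # past samples x(n), x(n-1), ..., x(n-M+1), zero before the record starts
        past = np.array([x[n - i] if n - i >= 0 else 0.0 for i in range(M)])
        y[n] = h1 @ past + past @ h2 @ past
    return y

# hypothetical identified kernels for memory size M = 3
h1 = np.array([1.0, 0.5, -0.2])
h2 = 0.1 * np.eye(3)
x = np.random.randn(50)
print(volterra2_output(x, h1, h2)[:5])
```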

  2. Modelling systematics of ground-based transit photometry I. Implications on transit timing variations

    DEFF Research Database (Denmark)

    von Essen, C.; Cellone, S.; Mallonn, M.

    2016-01-01

    introduced a perturbation in the mid-transit times of the hot Jupiter, caused by an Earth-sized planet in a 3:2 mean motion resonance. Analyzing the synthetic light curves produced after certain epochs, we attempt to recover the synthetically added TTV signal by means of usual primary transit fitting...... we attempt to reproduce, by means of physically and empirically motivated relationships, the effects caused by the Earth's atmosphere and the instrumental setup on the synthetic light curves. Therefore, the synthetic data present different photometric quality and transit coverage. In addition, we...

  3. FPGA implementation for real-time background subtraction based on Horprasert model.

    Science.gov (United States)

    Rodriguez-Gomez, Rafael; Fernandez-Sanchez, Enrique J; Diaz, Javier; Ros, Eduardo

    2012-01-01

    Background subtraction is considered the first processing stage in video surveillance systems, and consists of determining objects in movement in a scene captured by a static camera. It is an intensive task with a high computational cost. This work proposes a novel embedded architecture on FPGA which is able to extract the background in resource-limited environments, with low degradation (produced by the hardware-friendly model modification). In addition, the original model is extended in order to detect shadows and improve the quality of the segmentation of the moving objects. We have analyzed the resource consumption and performance on Spartan3 Xilinx FPGAs and compared to other works available in the literature, showing that the current architecture is a good trade-off in terms of accuracy, performance and resource utilization. Using less than 65% of the resources of a XC3SD3400 Spartan-3A low-cost family FPGA, the system achieves a frequency of 66.5 MHz, reaching 32.8 fps at a resolution of 1,024 × 1,024 pixels, with an estimated power consumption of 5.76 W.

  4. FPGA Implementation for Real-Time Background Subtraction Based on Horprasert Model

    Directory of Open Access Journals (Sweden)

    Eduardo Ros

    2012-01-01

    Full Text Available Background subtraction is considered the first processing stage in video surveillance systems, and consists of determining objects in movement in a scene captured by a static camera. It is an intensive task with a high computational cost. This work proposes a novel embedded architecture on FPGA which is able to extract the background in resource-limited environments, with low degradation (produced by the hardware-friendly model modification). In addition, the original model is extended in order to detect shadows and improve the quality of the segmentation of the moving objects. We have analyzed the resource consumption and performance on Spartan3 Xilinx FPGAs and compared to other works available in the literature, showing that the current architecture is a good trade-off in terms of accuracy, performance and resource utilization. Using less than 65% of the resources of a XC3SD3400 Spartan-3A low-cost family FPGA, the system achieves a frequency of 66.5 MHz, reaching 32.8 fps at a resolution of 1,024 x 1,024 pixels, with an estimated power consumption of 5.76 W.
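
    The Horprasert model separates per-pixel brightness and chromaticity distortion; a minimal NumPy sketch of that decision rule (thresholds and background statistics are invented for illustration, and the shadow class is not shown):

```python
import numpy as np

def horprasert_distortions(I, mu, sigma):
    """Brightness (alpha) and chromaticity (CD) distortion per RGB pixel."""
    s = np.maximum(sigma, 1e-6)
    alpha = ((I * mu) / s**2).sum(-1) / ((mu / s) ** 2).sum(-1)       # brightness
    cd = np.sqrt((((I - alpha[..., None] * mu) / s) ** 2).sum(-1))    # chromaticity
    return alpha, cd

# hypothetical per-pixel background statistics for a 4x4 RGB frame
mu = np.full((4, 4, 3), 120.0)          # expected background color
sigma = np.full((4, 4, 3), 5.0)         # per-channel standard deviation
frame = mu + np.random.normal(0, 5, mu.shape)
frame[0, 0] = [200, 40, 40]             # inject a foreground pixel

alpha, cd = horprasert_distortions(frame, mu, sigma)
foreground = (cd > 10.0) | (alpha < 0.6) | (alpha > 1.4)   # illustrative thresholds
print(foreground.astype(int))
```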

  5. Perspective: Web-based machine learning models for real-time screening of thermoelectric materials properties

    Science.gov (United States)

    Gaultois, Michael W.; Oliynyk, Anton O.; Mar, Arthur; Sparks, Taylor D.; Mulholland, Gregory J.; Meredig, Bryce

    2016-05-01

    The experimental search for new thermoelectric materials remains largely confined to a limited set of successful chemical and structural families, such as chalcogenides, skutterudites, and Zintl phases. In principle, computational tools such as density functional theory (DFT) offer the possibility of rationally guiding experimental synthesis efforts toward very different chemistries. However, in practice, predicting thermoelectric properties from first principles remains a challenging endeavor [J. Carrete et al., Phys. Rev. X 4, 011019 (2014)], and experimental researchers generally do not directly use computation to drive their own synthesis efforts. To bridge this practical gap between experimental needs and computational tools, we report an open machine learning-based recommendation engine (http://thermoelectrics.citrination.com) for materials researchers that suggests promising new thermoelectric compositions based on pre-screening about 25 000 known materials and also evaluates the feasibility of user-designed compounds. We show this engine can identify interesting chemistries very different from known thermoelectrics. Specifically, we describe the experimental characterization of one example set of compounds derived from our engine, RE12Co5Bi (RE = Gd, Er), which exhibits surprising thermoelectric performance given its unprecedentedly high loading with metallic d and f block elements and warrants further investigation as a new thermoelectric material platform. We show that our engine predicts this family of materials to have low thermal and high electrical conductivities, but modest Seebeck coefficient, all of which are confirmed experimentally. We note that the engine also predicts materials that may simultaneously optimize all three properties entering into zT; we selected RE12Co5Bi for this study due to its interesting chemical composition and known facile synthesis.

  6. A GIS-based groundwater travel time model to evaluate stream nitrate concentration reductions from land use change

    Science.gov (United States)

    Schilling, K.E.; Wolter, C.F.

    2007-01-01

Excessive nitrate-nitrogen (nitrate) loss from agricultural watersheds is an environmental concern. A common conservation practice to improve stream water quality is to retire vulnerable row croplands to grass. In this paper, a groundwater travel time model based on a geographic information system (GIS) analysis of readily available soil and topographic variables was used to evaluate the time needed to observe stream nitrate concentration reductions from conversion of row crop land to native prairie in Walnut Creek watershed, Iowa. Average linear groundwater velocity in 5-m cells was estimated by overlaying GIS layers of soil permeability, land slope (surrogates for hydraulic conductivity and gradient, respectively) and porosity. Cells were summed backwards from the stream network to the watershed divide to develop a travel time distribution map. Results suggested that groundwater from half of the land planted in prairie has reached the stream network during the 10 years of ongoing water quality monitoring. The mean travel time for the watershed was estimated to be 10.1 years, consistent with results from a simple analytical model. The proportion of land in the watershed and subbasins with prairie groundwater reaching the stream (10-22%) was similar to the measured reduction of stream nitrate (11-36%). Results provide encouragement that additional nitrate reductions in Walnut Creek are probable in the future as lower-nitrate groundwater from distal locations discharges to the stream network in the coming years. The high spatial resolution of the model (5-m cells) and its simplicity make it potentially applicable for land managers interested in communicating lag time issues to the public, particularly related to nitrate concentration reductions over time. © 2007 Springer-Verlag.
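
    The velocity calculation described here is simple enough to sketch: Darcy velocity divided by porosity gives the average linear velocity in each cell, and summing cell-crossing times along a flow path gives the travel time. The grids and the flow path below are invented; in the paper the path would come from the flow-direction network.

```python
import numpy as np

cell = 5.0                                   # 5-m grid cells, as in the paper
rng = np.random.default_rng(1)
K = rng.uniform(10.0, 200.0, (100, 100))     # conductivity surrogate from permeability (m/yr)
slope = rng.uniform(0.005, 0.05, (100, 100)) # land slope as hydraulic-gradient surrogate
n = np.full((100, 100), 0.3)                 # effective porosity

v = K * slope / n                            # average linear groundwater velocity (m/yr)

# Travel time along one (made-up) flow path of (row, col) cells to the stream.
path = [(10, 10), (11, 10), (12, 11), (13, 11)]
t = sum(cell / v[r, c] for r, c in path)
print(f"travel time along path: {t:.1f} years")
```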

  7. Gap timing and the spectral timing model.

    Science.gov (United States)

    Hopson, J W

    1999-04-01

A hypothesized mechanism underlying gap timing was implemented in the Spectral Timing Model [Grossberg, S., Schmajuk, N., 1989. Neural dynamics of adaptive timing and temporal discrimination during associative learning. Neural Netw. 2, 79-102], a neural network timing model. The activation of the network nodes was made to decay in the absence of the timed signal, causing the model to shift its peak response time in a fashion similar to that shown in animal subjects. The model was then able to accurately simulate a parametric study of gap timing [Cabeza de Vaca, S., Brown, B., Hemmes, N., 1994. Internal clock and memory processes in animal timing. J. Exp. Psychol.: Anim. Behav. Process. 20 (2), 184-198]. The addition of a memory decay process appears to produce the correct pattern of results in both Scalar Expectancy Theory models and in the Spectral Timing Model, and the fact that the same process should be effective in two such disparate models argues strongly that the process reflects a true aspect of animal cognition.

  8. Real-time prediction of respiratory motion based on a local dynamic model in an augmented space.

    Science.gov (United States)

    Hong, S-M; Jung, B-H; Ruan, D

    2011-03-21

Motion-adaptive radiotherapy aims to deliver ablative radiation dose to the tumor target with minimal normal tissue exposure, by accounting for real-time target movement. In practice, prediction is usually necessary to compensate for system latency induced by measurement, communication and control. This work focuses on predicting respiratory motion, which is most dominant for thoracic and abdominal tumors. We develop and investigate the use of a local dynamic model in an augmented space, motivated by the observation that respiratory movement exhibits a locally circular pattern in a plane augmented with a delayed axis. By including the angular velocity as part of the system state, the proposed dynamic model effectively captures the natural evolution of respiratory motion. The first-order extended Kalman filter is used to propagate and update the state estimate. The target location is predicted by evaluating the local dynamic model equations at the required prediction length. This method is complementary to existing work in that (1) the local circular motion model characterizes 'turning', overcoming the limitation of linear motion models; (2) it uses a natural state representation including the local angular velocity and updates the state estimate systematically, offering explicit physical interpretations; (3) it relies on a parametric model and is much less data-hungry than typical adaptive semiparametric or nonparametric methods. We tested the performance of the proposed method with ten RPM traces, using the normalized root mean squared difference between the predicted value and the retrospective observation as the error metric. Its performance was compared with predictors based on the linear model, the interacting multiple linear models and the kernel density estimator for various combinations of prediction lengths and observation rates. The local dynamic model based approach provides the best performance for short to medium prediction lengths under relatively
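
    The abstract does not give the full augmented-space equations, so the following is only a toy analogue: a first-order extended Kalman filter on a sinusoidal surrogate of respiratory motion whose state includes the angular velocity, showing the propagate/update pattern and the model-based lookahead used for prediction. All parameters are invented.

```python
import numpy as np

def ekf_respiration(z, dt, R=0.02, q=(1e-6, 1e-5, 1e-6)):
    """Toy EKF with state s = [amplitude a, phase phi, angular velocity w],
    measurement z = a*sin(phi) + noise."""
    s = np.array([1.0, 0.0, 2 * np.pi * 0.25])        # ~0.25 Hz breathing (assumed)
    P, Q = np.eye(3), np.diag(q)
    estimates = []
    for zk in z:
        # Predict: phase advances by w*dt; a and w follow random walks.
        F = np.array([[1.0, 0, 0], [0, 1.0, dt], [0, 0, 1.0]])
        s, P = F @ s, F @ P @ F.T + Q
        # Update with the Jacobian of h(s) = a*sin(phi).
        a, phi, _ = s
        H = np.array([[np.sin(phi), a * np.cos(phi), 0.0]])
        S = H @ P @ H.T + R
        K = P @ H.T / S
        s = s + (K * (zk - a * np.sin(phi))).ravel()
        P = (np.eye(3) - K @ H) @ P
        estimates.append(s.copy())
    return np.array(estimates)

# Prediction at horizon tau is the model evaluated forward: a*sin(phi + w*tau).
```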

  9. Use of Just in Time Maintenance of Reinforced Concrete Bridge Structures based on Real Historical Data Deterioration Models

    Directory of Open Access Journals (Sweden)

    Abu-Tair A.

    2016-01-01

Full Text Available Concrete is the backbone of any developed economy. Concrete can suffer from a large number of deleterious effects, including physical, chemical and biological causes. Organizations that own large portfolios of bridge structures face serious questions when asking for maintenance budgets: they need to justify the need for the work and its urgency, and also to predict the consequences of delayed rehabilitation of a particular structure. There is therefore a need for a probabilistic model that can estimate the range of service lives of bridge populations and the level of deterioration likely to be reached in each incremental time interval. A model was developed for such estimation based on statistical data from actual inspection records of a large reinforced concrete bridge portfolio. The method used both deterministic and stochastic approaches to predict the service life of a bridge; using these service lives in combination with the just in time (JIT) principle of management would enable maintenance managers to justify the need for action and the budgets needed, and to intervene at the optimum time in the life of the structure and that of the deterioration. The paper will report on the model, which is based on a large database of deterioration records of concrete bridges covering a period of over 60 years and including data from over 400 bridge structures. The paper will also illustrate how the service life model was developed and how these service lives, combined with JIT, can be used to allocate resources effectively and keep a major infrastructure asset moving with little disruption to the transport system and its users.

  10. Real-time deformation of human soft tissues: A radial basis meshless 3D model based on Marquardt's algorithm.

    Science.gov (United States)

    Zhou, Jianyong; Luo, Zu; Li, Chunquan; Deng, Mi

    2018-01-01

When the meshless method is used to establish the mathematical-mechanical model of human soft tissues, it is necessary to define the space occupied by human tissues as the problem domain and the boundary of the domain as the surface of those tissues. Nodes should be distributed both in the problem domain and on its boundaries. Under external force, the displacement of each node is computed by the meshless method to represent the deformation of biological soft tissues. However, computation by the meshless method consumes too much time, which affects the simulation of real-time deformation of human tissues in virtual surgery. In this article, Marquardt's algorithm is proposed to fit the nodal displacement at the problem domain's boundary and obtain the relationship between surface deformation and force. When different external forces are applied, the deformation of soft tissues can be quickly obtained based on this relationship. The analysis and discussion show that the improved model equations with Marquardt's algorithm not only simulate the deformation in real time but also preserve the authenticity of the deformation model's physical properties. Copyright © 2017 Elsevier B.V. All rights reserved.
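
    The core idea, fitting an inexpensive parametric force-displacement relationship offline with the Levenberg-Marquardt algorithm and evaluating it in closed form at runtime, can be sketched with scipy (method="lm"). The saturating law and the "meshless solver" data below are placeholders for illustration only.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical offline data: boundary-node displacement computed by the
# (slow) meshless solver for a sweep of applied forces.
F = np.linspace(0.0, 5.0, 20)
u = 2.0 * (1 - np.exp(-0.8 * F)) + 0.01 * np.random.default_rng(0).standard_normal(20)

def residuals(p, F, u):
    a, b = p
    return a * (1 - np.exp(-b * F)) - u       # assumed saturating displacement law

fit = least_squares(residuals, x0=[1.0, 1.0], args=(F, u), method="lm")
a, b = fit.x
# Online: deformation for a new force is now a cheap closed-form evaluation.
print("displacement at F = 3.3 N:", a * (1 - np.exp(-b * 3.3)))
```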

  11. Real-time interferometric monitoring and measuring of photopolymerization based stereolithographic additive manufacturing process: sensor model and algorithm

    International Nuclear Information System (INIS)

    Zhao, X; Rosen, D W

    2017-01-01

As additive manufacturing is poised for growth and innovation, it faces the barrier of a lack of in-process metrology and control needed to advance into wider industry applications. Exposure controlled projection lithography (ECPL) is a layerless mask-projection stereolithographic additive manufacturing process, in which parts are fabricated from photopolymers on a stationary transparent substrate. To improve the process accuracy with closed-loop control for ECPL, this paper develops an interferometric curing monitoring and measuring (ICM and M) method which addresses the sensor modeling and algorithm issues. A physical sensor model for ICM and M is derived based on interference optics utilizing the concept of instantaneous frequency. The associated calibration procedure is outlined to ensure ICM and M measurement accuracy. To solve the sensor model, particularly in real time, an online evolutionary parameter estimation algorithm is developed adopting moving-horizon exponentially weighted Fourier curve fitting and numerical integration. As a preliminary validation, simulated real-time measurement by offline analysis of a video of interferograms acquired in the ECPL process is presented. The agreement between the cured height estimated by ICM and M and that measured by microscope indicates that the measurement principle is promising as real-time metrology for global measurement and control of the ECPL process. (paper)
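
    The measurement principle can be illustrated compactly: two-beam interference makes the interferogram intensity oscillate as the cured height grows, so the unwrapped phase of the fringe signal tracks height. The sketch below recovers the phase with a Hilbert transform instead of the paper's moving-horizon Fourier fitting; wavelength, refractive index and growth rate are assumed values.

```python
import numpy as np
from scipy.signal import hilbert

lam, n = 532e-9, 1.5                    # laser wavelength (m), resin index (assumed)
t = np.linspace(0, 10, 2000)            # s
h_true = 2e-6 * t                       # synthetic cure growth, 2 um/s
I = 1.0 + np.cos(4 * np.pi * n * h_true / lam)       # ideal interference intensity

phase = np.unwrap(np.angle(hilbert(I - I.mean())))   # analytic-signal phase
h_est = np.abs(phase - phase[0]) * lam / (4 * np.pi * n)
print("final height estimate (um):", 1e6 * h_est[-1])   # ~20 um
```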

  12. Presenting a Model Based on Fuzzy Application to Optimize the Time of IBS Projects in Gas Refineries

    Directory of Open Access Journals (Sweden)

    Naderpour Abbas

    2017-01-01

Full Text Available Nowadays, the construction industry has started to embrace IBS as a method of attaining better construction quality and productivity while reducing risks related to occupational safety and health. Building pre-fabricated components in factories reduces many of the problems caused by unaccounted-for uncertainty in the scheduling calculations and time management of projects. Regarding the use of the IBS method for managing time in projects, former studies such as Allan Tay's research indicate that this method can save at least 29% of the overall completion period compared with the conventional method. But beyond the advantages of this technical method, projects can be optimized further in their scheduling calculations. This issue is critical in gas refineries, since special parameters such as the risk of releasing poisonous H2S gas, and the need to perform projects in short time periods during events such as maintenance overhauls, demand that projects be performed in optimum time. Conventional project scheduling uses the Critical Path Method (CPM) as a tool for planning project activities. The research of this paper's authors indicated that the Fuzzy Critical Path Method (FCPM) is the best technique to manage uncertainty in project scheduling and can reduce a construction project's time compared with conventional methods. This paper aims to present a model based on applying fuzzy numbers in CPM calculations to optimize the time of Industrial Building System projects.
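
    A minimal sketch of fuzzy CPM with triangular fuzzy durations shows the mechanics: fuzzy addition propagates (low, mode, high) triples through the network, and a centroid ranking stands in for the fuzzy maximum at merge points (a common simplification). The four-activity network and durations are made up.

```python
def f_add(a, b):                 # triangular fuzzy numbers as (low, mode, high)
    return tuple(x + y for x, y in zip(a, b))

def centroid(a):                 # defuzzified value used to rank fuzzy numbers
    return sum(a) / 3.0

def f_max(a, b):                 # centroid-ranking stand-in for fuzzy max
    return a if centroid(a) >= centroid(b) else b

# Activity -> (predecessors, fuzzy duration in days); insertion order is topological.
acts = {
    "A": ([], (2, 3, 5)),
    "B": (["A"], (4, 6, 9)),
    "C": (["A"], (3, 4, 6)),
    "D": (["B", "C"], (1, 2, 3)),
}

earliest = {}
for name, (preds, dur) in acts.items():   # forward pass
    start = (0, 0, 0)
    for p in preds:
        start = f_max(start, earliest[p])
    earliest[name] = f_add(start, dur)

print("fuzzy completion time (low, mode, high):", earliest["D"])   # (7, 11, 17)
```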

  13. Development of an Agent Based Model to Estimate and Reduce Time to Restoration of Storm Induced Power Outages

    Science.gov (United States)

    Walsh, T.; Layton, T.; Mellor, J. E.

    2017-12-01

    Storm damage to the electric grid impacts 23 million electric utility customers and costs US consumers $119 billion annually. Current restoration techniques rely on the past experiences of emergency managers. There are few analytical simulation and prediction tools available for utility managers to optimize storm recovery and decrease consumer cost, lost revenue and restoration time. We developed an agent based model (ABM) for storm recovery in Connecticut. An ABM is a computer modeling technique comprised of agents who are given certain behavioral rules and operate in a given environment. It allows the user to simulate complex systems by varying user-defined parameters to study emergent, unpredicted behavior. The ABM incorporates the road network and electric utility grid for the state, is validated using actual storm event recoveries and utilizes the Dijkstra routing algorithm to determine the best path for repair crews to travel between outages. The ABM has benefits for both researchers and utility managers. It can simulate complex system dynamics, rank variable importance, find tipping points that could significantly reduce restoration time or costs and test a broad range of scenarios. It is a modular, scalable and adaptable technique that can simulate scenarios in silico to inform emergency managers before and during storm events to optimize restoration strategies and better manage expectations of when power will be restored. Results indicate that total restoration time is strongly dependent on the number of crews. However, there is a threshold whereby more crews will not decrease the restoration time, which depends on the total number of outages. The addition of outside crews is more beneficial for storms with a higher number of outages. The time to restoration increases linearly with increasing repair time, while the travel speed has little overall effect on total restoration time. Crews traveling to the nearest outage reduces the total restoration time
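
    The dispatch rule whose effect the model measures (crews travel to the nearest unrepaired outage over the road network) reduces to repeated Dijkstra queries, as in the toy single-crew version below; the graph, repair times and travel weights are synthetic.

```python
import networkx as nx

# Synthetic road network: nodes are intersections, weights are travel minutes.
G = nx.Graph()
G.add_weighted_edges_from([(0, 1, 5), (1, 2, 7), (0, 3, 4), (3, 2, 6), (2, 4, 3)])

outages = {2: 60, 4: 45, 3: 30}          # node -> repair time (minutes)
crew_pos, clock = 0, 0.0

while outages:
    dist = nx.single_source_dijkstra_path_length(G, crew_pos, weight="weight")
    nearest = min(outages, key=lambda node: dist[node])   # nearest-outage heuristic
    clock += dist[nearest] + outages.pop(nearest)         # travel, then repair
    crew_pos = nearest

print(f"single-crew restoration time: {clock:.0f} minutes")
```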

  14. The forecasting of menstruation based on a state-space modeling of basal body temperature time series.

    Science.gov (United States)

    Fukaya, Keiichi; Kawamori, Ai; Osada, Yutaka; Kitazawa, Masumi; Ishiguro, Makio

    2017-09-20

Women's basal body temperature (BBT) shows a periodic pattern that is associated with the menstrual cycle. Although this fact suggests that daily BBT time series can be useful for estimating the underlying phase state as well as for predicting the length of the current menstrual cycle, little attention has been paid to modeling BBT time series. In this study, we propose a state-space model that involves the menstrual phase as a latent state variable to explain the daily fluctuation of BBT and the menstrual cycle length. Conditional distributions of the phase are obtained by using sequential Bayesian filtering techniques. A predictive distribution of the next menstruation day can be derived based on this conditional distribution and the model, leading to a novel statistical framework that provides a sequentially updated prediction of the upcoming menstruation day. We applied this framework to a real data set of women's BBT and menstruation days and compared the prediction accuracy of the proposed method with that of previous methods, showing that the proposed method generally provides a better prediction. Because BBT can be obtained with relatively small cost and effort, the proposed method can be useful for women's health management. Potential extensions of this framework as the basis for modeling and predicting events that are associated with the menstrual cycle are discussed. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
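
    The sequential Bayesian filtering step can be illustrated with a toy bootstrap particle filter for the latent phase: particles are advanced around the cycle, reweighted by each day's BBT reading, and the predictive distribution of days to the next menstruation follows from the filtered phase. The phase-to-temperature link and all parameters are invented, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5000
phase = rng.uniform(0, 2 * np.pi, N)          # particles for the latent phase
omega = 2 * np.pi / 28.0                      # nominal 28-day cycle (assumed)

def bbt_mean(phi):                            # invented link: luteal plateau (deg C)
    return 36.4 + 0.25 * (np.sin(phi - np.pi / 2) > 0)

for bbt_obs in [36.4, 36.45, 36.7, 36.68, 36.72]:        # one reading per day
    phase = (phase + omega + 0.05 * rng.standard_normal(N)) % (2 * np.pi)
    w = np.exp(-0.5 * ((bbt_obs - bbt_mean(phase)) / 0.1) ** 2)   # likelihood
    phase = phase[rng.choice(N, N, p=w / w.sum())]                # resample

# Predictive days to menstruation: days until the phase wraps past 2*pi.
days_left = ((2 * np.pi - phase) % (2 * np.pi)) / omega
print("predicted days to next menstruation:", np.percentile(days_left, [25, 50, 75]))
```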

  15. A Data-Driven Modeling Strategy for Smart Grid Power Quality Coupling Assessment Based on Time Series Pattern Matching

    Directory of Open Access Journals (Sweden)

    Hao Yu

    2018-01-01

Full Text Available This study introduces a data-driven modeling strategy for smart grid power quality (PQ) coupling assessment based on time series pattern matching to quantify the influence of single and integrated disturbances among nodes in different pollution patterns. Periodic and random PQ patterns are constructed by using multidimensional frequency-domain decomposition for all disturbances. A multidimensional piecewise linear representation based on local extreme points is proposed to extract the pattern features of single and integrated disturbances, in consideration of disturbance variation trend and severity. A feature distance of pattern (FDP) is developed to implement pattern matching on univariate PQ time series (UPQTS) and multivariate PQ time series (MPQTS) to quantify the influence of single and integrated disturbances among nodes in the pollution patterns. Case studies on a 14-bus distribution system are performed and analyzed; the accuracy and applicability of the FDP in smart grid PQ coupling assessment are verified by comparison with other time series pattern matching methods.
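
    A minimal version of the extremum-based piecewise linear representation makes the idea concrete: local extrema become segment breakpoints, each segment contributes trend and severity features, and a distance over those features compares two series. The feature choice and the distance below are illustrative stand-ins for the paper's FDP.

```python
import numpy as np
from scipy.signal import argrelextrema

def plr_breakpoints(x):
    """Indices of local extrema (plus endpoints) as PLR breakpoints."""
    hi = argrelextrema(x, np.greater)[0]
    lo = argrelextrema(x, np.less)[0]
    return np.unique(np.concatenate(([0], hi, lo, [len(x) - 1])))

def segment_features(x, bp):
    """(slope, mean) per linear segment: variation trend and severity."""
    return np.array([[(x[b] - x[a]) / (b - a), x[a:b + 1].mean()]
                     for a, b in zip(bp[:-1], bp[1:])])

t = np.linspace(0, 4 * np.pi, 200)
x1, x2 = np.sin(t), 1.1 * np.sin(t + 0.3)     # two synthetic disturbance series
bp = plr_breakpoints(x1)
fdp = np.abs(segment_features(x1, bp) - segment_features(x2, bp)).mean()
print("pattern feature distance:", fdp)
```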

16. Advantage of make-to-stock strategy based on linear mixed-effect model: a comparison with regression, autoregressive, time series, and exponential smoothing models

    Directory of Open Access Journals (Sweden)

    Yu-Pin Liao

    2017-11-01

Full Text Available In the past few decades, demand forecasting has become relatively difficult due to rapid changes in the global environment. This research illustrates the use of the make-to-stock (MTS) production strategy in order to explain how forecasting plays an essential role in business management. The linear mixed-effect (LME) model has been extensively developed and is widely applied in various fields. However, no study has used the LME model for business forecasting. We suggest that the LME model be used as a tool for prediction and to cope with environmental complexity. The data analysis is based on real data from an international display company, where the company needs accurate demand forecasting before adopting an MTS strategy. The forecasting results from the LME model are compared to those of commonly used approaches, including the regression model, autoregressive model, time series model, and exponential smoothing model, with the results revealing that the prediction performance provided by the LME model is more stable than that of the other methods. Furthermore, product types in the data are regarded as a random effect in the LME model, hence demands of all types can be predicted simultaneously using a single LME model. In contrast, some approaches require splitting the data into different type categories and then predicting each type's demand by establishing a separate model for each type. This feature further demonstrates the practicability of the LME model in real business operations.
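
    The paper's key modeling point (product type as a random effect, so one model forecasts every type) maps directly onto statsmodels' mixed-LM interface. The demand data below are synthetic, and the one-step forecast uses the fixed-effects part only; the estimated random intercept of each type would be added on top.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for ptype, level in zip(["panel_A", "panel_B", "panel_C"], [100, 140, 90]):
    for t in range(36):                        # 36 months of synthetic demand
        rows.append({"ptype": ptype, "t": t,
                     "demand": level + 1.5 * t + rng.normal(0, 5)})
df = pd.DataFrame(rows)

# One LME for all product types: fixed time trend, random intercept per type.
m = smf.mixedlm("demand ~ t", df, groups=df["ptype"]).fit()
print(m.summary())

new = pd.DataFrame({"t": [36, 36, 36]})
print(m.predict(new))                          # fixed-effects one-step forecast
```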

  17. A hybrid feature selection and health indicator construction scheme for delay-time-based degradation modelling of rolling element bearings

    Science.gov (United States)

    Zhang, Bin; Deng, Congying; Zhang, Yi

    2018-03-01

Rolling element bearings are mechanical components used frequently in most rotating machinery, and they are also vulnerable links representing the main source of failures in such systems. Thus, health condition monitoring and fault diagnosis of rolling element bearings have long been studied to improve the operational reliability and maintenance efficiency of rotatory machines. Over the past decade, prognosis that enables forewarning of failure and estimation of residual life has attracted increasing attention. To accurately and efficiently predict failure of the rolling element bearing, the degradation must be well represented and modelled. For this purpose, degradation of the rolling element bearing is analysed with the delay-time-based model in this paper. Also, a hybrid feature selection and health indicator construction scheme is proposed for extraction of the bearing health-relevant information from condition monitoring sensor data. The effectiveness of the presented approach is validated through case studies on rolling element bearing run-to-failure experiments.

  18. Self-calibration for lab-μCT using space-time regularized projection-based DVC and model reduction

    Science.gov (United States)

    Jailin, C.; Buljac, A.; Bouterf, A.; Poncelet, M.; Hild, F.; Roux, S.

    2018-02-01

    An online calibration procedure for x-ray lab-CT is developed using projection-based digital volume correlation. An initial reconstruction of the sample is positioned in the 3D space for every angle so that its projection matches the initial one. This procedure allows a space-time displacement field to be estimated for the scanned sample, which is regularized with (i) rigid body motions in space and (ii) modal time shape functions computed using model reduction techniques (i.e. proper generalized decomposition). The result is an accurate identification of the position of the sample adapted for each angle, which may deviate from the desired perfect rotation required for standard reconstructions. An application of this procedure to a 4D in situ mechanical test is shown. The proposed correction leads to a much improved tomographic reconstruction quality.

  19. A new wind speed forecasting strategy based on the chaotic time series modelling technique and the Apriori algorithm

    International Nuclear Information System (INIS)

    Guo, Zhenhai; Chi, Dezhong; Wu, Jie; Zhang, Wenyu

    2014-01-01

Highlights: • Impact of meteorological factors on wind speed forecasting is taken into account. • Forecasted wind speed results are corrected by the association rules. • Forecasting accuracy is improved by the new wind speed forecasting strategy. • Robustness of the proposed model is validated with data sampled from different sites. - Abstract: Wind energy has been the fastest growing renewable energy resource in recent years. Because of the intermittent nature of wind, wind power is a fluctuating source of electrical energy. Therefore, to minimize the impact of wind power on the electrical grid, accurate and reliable wind power forecasting is mandatory. In this paper, a new wind speed forecasting approach based on the chaotic time series modelling technique and the Apriori algorithm has been developed. The new approach consists of four procedures: (I) Clustering by using the k-means clustering approach; (II) Employing the Apriori algorithm to discover the association rules; (III) Forecasting the wind speed according to the chaotic time series forecasting model; and (IV) Correcting the forecasted wind speed data using the association rules discovered previously. This procedure has been verified by 31-day-ahead daily average wind speed forecasting case studies, which employed the wind speed and other meteorological data collected from four meteorological stations located in the Hexi Corridor area of China. The results of these case studies reveal that the chaotic forecasting model can efficiently improve the accuracy of the wind speed forecasting, and the Apriori algorithm can effectively discover the association rules between the wind speed and other meteorological factors. In addition, the correction results demonstrate that the association rules discovered by the Apriori algorithm have powerful capacities in correcting the forecasted wind speed values when the forecasted values do not match the classification discovered by the association rules
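
    Steps (I), (II) and (IV) can be sketched with mlxtend's Apriori implementation: discretize the meteorological factors and wind speed into classes, mine the rules, and flag forecasts that contradict the class the observed factors imply. The binning and the synthetic coupling below are assumptions for illustration.

```python
import numpy as np
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

rng = np.random.default_rng(0)
n = 300
temp = rng.choice(["T_low", "T_high"], n)
# Synthetic coupling: high-temperature days tend to be windy.
wind = np.where((temp == "T_high") & (rng.random(n) < 0.8), "W_high", "W_low")

onehot = pd.get_dummies(pd.DataFrame({"temp": temp, "wind": wind})).astype(bool)
freq = apriori(onehot, min_support=0.1, use_colnames=True)
rules = association_rules(freq, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "confidence"]])

# Correction idea (step IV): if today's factors imply W_high with high
# confidence but the chaotic-model forecast falls in the W_low class,
# adjust the forecast toward the class the rule predicts.
```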

  20. Finite-time adaptive sliding mode force control for electro-hydraulic load simulator based on improved GMS friction model

    Science.gov (United States)

    Kang, Shuo; Yan, Hao; Dong, Lijing; Li, Changchun

    2018-03-01

This paper addresses the force tracking problem of an electro-hydraulic load simulator under the influence of nonlinear friction and uncertain disturbance. A nonlinear system model combined with the improved generalized Maxwell-slip (GMS) friction model is first derived to describe the characteristics of the load simulator system more accurately. Then, by using the particle swarm optimization (PSO) algorithm combined with an analysis of the system's hysteresis characteristics, the GMS friction parameters are identified. To compensate for nonlinear friction and uncertain disturbance, a finite-time adaptive sliding mode control method is proposed based on the accurate system model. This controller ensures that the system state moves along the nonlinear sliding surface to steady state in a short time, with good dynamic properties under the influence of parametric uncertainties and disturbance, which further improves the force loading accuracy and rapidity. At the end of this work, simulation and experimental results are employed to demonstrate the effectiveness of the proposed sliding mode control strategy.
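
    The PSO identification step admits a compact sketch: each particle is a candidate friction-parameter vector scored against measured force data. The friction model below is a simple Coulomb-plus-viscous stand-in rather than the full GMS model, and all coefficients are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
v = np.linspace(-1, 1, 200)                          # velocity samples
F_meas = 0.4 * np.sign(v) + 0.9 * v + 0.02 * rng.standard_normal(200)

def cost(p):                                         # p = [coulomb, viscous]
    return np.mean((p[0] * np.sign(v) + p[1] * v - F_meas) ** 2)

n, dims = 30, 2                                      # swarm size, parameter count
x = rng.uniform(0, 2, (n, dims))
vel = np.zeros((n, dims))
pbest, pbest_c = x.copy(), np.array([cost(p) for p in x])
gbest = pbest[pbest_c.argmin()].copy()

for _ in range(100):
    r1, r2 = rng.random((2, n, dims))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = x + vel
    c = np.array([cost(p) for p in x])
    better = c < pbest_c
    pbest[better], pbest_c[better] = x[better], c[better]
    gbest = pbest[pbest_c.argmin()].copy()

print("identified [coulomb, viscous]:", gbest)       # ~[0.4, 0.9]
```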

  1. Modeling and real time simulation of an HVDC inverter feeding a weak AC system based on commutation failure study.

    Science.gov (United States)

    Mankour, Mohamed; Khiat, Mounir; Ghomri, Leila; Chaker, Abdelkader; Bessalah, Mourad

    2018-06-01

This paper presents the modeling and study of a 12-pulse HVDC (High Voltage Direct Current) link based on real-time simulation, where the HVDC inverter is connected to a weak AC system. To study the dynamic performance of the HVDC link, two serious kinds of disturbance are applied at the HVDC converters: the first is a single-phase-to-ground AC fault and the second is a DC-link-to-ground fault. The study is based on two different modes of analysis; the first tests the performance of the DC control, and the second focuses on the effect of the protection function on system behavior. The real-time simulation considers the strength of the AC system to which the link is connected and its relation to the capacity of the DC link. The results obtained are validated by means of the RT-lab platform using the digital real-time simulator Hypersim (OP-5600). The results show the effect of the DC control and the influence of the protection function in reducing the probability of commutation failures and in helping the inverter recover from commutation failure even when the DC control fails to eliminate it. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  2. Damage-Based Time-Dependent Modeling of Paraglacial to Postglacial Progressive Failure of Large Rock Slopes

    Science.gov (United States)

    Riva, Federico; Agliardi, Federico; Amitrano, David; Crosta, Giovanni B.

    2018-01-01

    Large alpine rock slopes undergo long-term evolution in paraglacial to postglacial environments. Rock mass weakening and increased permeability associated with the progressive failure of deglaciated slopes promote the development of potentially catastrophic rockslides. We captured the entire life cycle of alpine slopes in one damage-based, time-dependent 2-D model of brittle creep, including deglaciation, damage-dependent fluid occurrence, and rock mass property upscaling. We applied the model to the Spriana rock slope (Central Alps), affected by long-term instability after Last Glacial Maximum and representing an active threat. We simulated the evolution of the slope from glaciated conditions to present day and calibrated the model using site investigation data and available temporal constraints. The model tracks the entire progressive failure path of the slope from deglaciation to rockslide development, without a priori assumptions on shear zone geometry and hydraulic conditions. Complete rockslide differentiation occurs through the transition from dilatant damage to a compacting basal shear zone, accounting for observed hydraulic barrier effects and perched aquifer formation. Our model investigates the mechanical role of deglaciation and damage-controlled fluid distribution in the development of alpine rockslides. The absolute simulated timing of rock slope instability development supports a very long "paraglacial" period of subcritical rock mass damage. After initial damage localization during the Lateglacial, rockslide nucleation initiates soon after the onset of Holocene, whereas full mechanical and hydraulic rockslide differentiation occurs during Mid-Holocene, supporting a key role of long-term damage in the reported occurrence of widespread rockslide clusters of these ages.

  3. A model of human motor sequence learning explains facilitation and interference effects based on spike-timing dependent plasticity.

    Directory of Open Access Journals (Sweden)

    Quan Wang

    2017-08-01

Full Text Available The ability to learn sequential behaviors is a fundamental property of our brains. Yet a long stream of studies, including recent experiments investigating motor sequence learning in adult human subjects, has produced a number of puzzling and seemingly contradictory results. In particular, when subjects have to learn multiple action sequences, learning is sometimes impaired by proactive and retroactive interference effects. In other situations, however, learning is accelerated, as reflected in facilitation and transfer effects. At present it is unclear what the underlying neural mechanisms are that give rise to these diverse findings. Here we show that a recently developed recurrent neural network model readily reproduces this diverse set of findings. The self-organizing recurrent neural network (SORN) model is a network of recurrently connected threshold units that combines a simplified form of spike-timing dependent plasticity (STDP) with homeostatic plasticity mechanisms ensuring network stability, namely intrinsic plasticity (IP) and synaptic normalization (SN). When trained on sequence learning tasks modeled after recent experiments, we find that it reproduces the full range of interference, facilitation, and transfer effects. We show how these effects are rooted in the network's changing internal representation of the different sequences across learning and how they depend on an interaction of training schedule and task similarity. Furthermore, since learning in the model is based on fundamental neuronal plasticity mechanisms, the model reveals how these plasticity mechanisms are ultimately responsible for the network's sequence learning abilities. In particular, we find that all three plasticity mechanisms are essential for the network to learn effective internal models of the different training sequences. This ability to form effective internal models is also the basis for the observed interference and facilitation effects. This suggests that
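
    Two of the three plasticity mechanisms named above fit in a few lines; the sketch below shows a pair-based STDP update followed by synaptic normalization on a toy binary network (intrinsic plasticity would additionally adapt each unit's threshold toward a target firing rate). Sizes and learning rates are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
N, eta = 50, 0.004
W = rng.random((N, N)) * 0.1
np.fill_diagonal(W, 0.0)
x_prev = (rng.random(N) < 0.1).astype(float)   # spikes at t-1
x_now = (rng.random(N) < 0.1).astype(float)    # spikes at t

# Pair-based STDP: potentiate j(t-1) -> i(t), depress j(t) -> i(t-1).
W += eta * (np.outer(x_now, x_prev) - np.outer(x_prev, x_now))
W = np.clip(W, 0.0, None)

# Synaptic normalization: incoming weights of each unit sum to a constant.
W /= W.sum(axis=1, keepdims=True) + 1e-12
```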

  4. A Model-Based Approach to Infer Shifts in Regional Fire Regimes Over Time Using Sediment Charcoal Records

    Science.gov (United States)

    Itter, M.; Finley, A. O.; Hooten, M.; Higuera, P. E.; Marlon, J. R.; McLachlan, J. S.; Kelly, R.

    2016-12-01

Sediment charcoal records are used in paleoecological analyses to identify individual local fire events and to estimate fire frequency and regional biomass burned at centennial to millennial time scales. Methods to identify local fire events based on sediment charcoal records have been well developed over the past 30 years; however, an integrated statistical framework for fire identification is still lacking. We build upon existing paleoecological methods to develop a hierarchical Bayesian point process model for local fire identification and estimation of fire return intervals. The model is unique in that it combines sediment charcoal records from multiple lakes across a region in a spatially explicit fashion, leading to estimation of a joint, regional fire return interval in addition to lake-specific local fire frequencies. Further, the model estimates a joint regional charcoal deposition rate free from the effects of local fires that can be used as a measure of regional biomass burned over time. Finally, the hierarchical Bayesian approach allows for tractable error propagation such that estimates of fire return intervals reflect the full range of uncertainty in sediment charcoal records. Specific sources of uncertainty addressed include sediment age models, the separation of local versus regional charcoal sources, and generation of a composite charcoal record. The model is applied to sediment charcoal records from a dense network of lakes in the Yukon Flats region of Alaska. The multivariate joint modeling approach results in improved estimates of regional charcoal deposition, with reduced uncertainty in the identification of individual fire events and local fire return intervals compared to individual-lake approaches. Modeled individual-lake fire return intervals range from 100 to 500 years with a regional interval of roughly 200 years. Regional charcoal deposition to the network of lakes is correlated up to 50 kilometers. Finally, the joint regional charcoal

  5. A delay time model for a mission-based system subject to periodic and random inspection and postponed replacement

    International Nuclear Information System (INIS)

    Yang, Li; Ma, Xiaobing; Zhai, Qingqing; Zhao, Yu

    2016-01-01

    We propose an inspection and replacement policy for a single component system that successively executes missions with random durations. The failure process of the system can be divided into two states, namely, normal and defective, following the delay time concept. Inspections are carried out periodically and immediately after the completion of each mission (random inspections). The failed state is always identified immediately, whereas the defective state can only be revealed by an inspection. If the system fails or is defective at a periodic inspection, then replacement is immediate. If, however, the system is defective at a random inspection, then replacement will be postponed if the time to the subsequent periodic inspection is shorter than a pre-determined threshold, and immediate otherwise. We derive the long run expected cost per unit time and then investigate the optimal periodic inspection interval and postponement threshold. A numerical example is presented to demonstrate the applicability of the proposed maintenance policy. - Highlights: • A delay time model of inspection is introduced for mission-based systems. • Periodic and random inspections are performed to check the state. • Replacement of the defective system at a random inspection can be postponed.
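
    The long-run expected cost per unit time in such models is the ratio of expected cycle cost to expected cycle length (a renewal-reward argument). A Monte Carlo sketch under invented assumptions (exponential defect arrivals and delay times, periodic inspections only, no random mission inspections or postponement) shows how the inspection interval T would be scanned:

```python
import numpy as np

rng = np.random.default_rng(0)

def cost_rate(T, runs=20000, lam=0.1, mean_delay=4.0,
              c_insp=1.0, c_repl=10.0, c_fail=50.0):
    """Estimated long-run cost per unit time for inspections every T."""
    total_cost = total_time = 0.0
    for _ in range(runs):
        arrive = rng.exponential(1 / lam)             # defect arrival
        fail = arrive + rng.exponential(mean_delay)   # failure time if undetected
        k = np.floor(arrive / T) + 1                  # first inspection after defect
        if k * T < fail:                              # caught defective: replace
            total_cost += c_insp * k + c_repl
            total_time += k * T
        else:                                         # failed first (cost incl. replacement)
            total_cost += c_insp * np.floor(fail / T) + c_fail
            total_time += fail
    return total_cost / total_time

for T in [1, 2, 4, 8]:
    print(T, round(cost_rate(T), 3))
```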

  6. Using Agent-Based Modeling to Enhance System-Level Real-time Control of Urban Stormwater Systems

    Science.gov (United States)

    Rimer, S.; Mullapudi, A. M.; Kerkez, B.

    2017-12-01

    The ability to reduce combined-sewer overflow (CSO) events is an issue that challenges over 800 U.S. municipalities. When the volume of a combined sewer system or wastewater treatment plant is exceeded, untreated wastewater then overflows (a CSO event) into nearby streams, rivers, or other water bodies causing localized urban flooding and pollution. The likelihood and impact of CSO events has only exacerbated due to urbanization, population growth, climate change, aging infrastructure, and system complexity. Thus, there is an urgent need for urban areas to manage CSO events. Traditionally, mitigating CSO events has been carried out via time-intensive and expensive structural interventions such as retention basins or sewer separation, which are able to reduce CSO events, but are costly, arduous, and only provide a fixed solution to a dynamic problem. Real-time control (RTC) of urban drainage systems using sensor and actuator networks has served as an inexpensive and versatile alternative to traditional CSO intervention. In particular, retrofitting individual stormwater elements for sensing and automated active distributed control has been shown to significantly reduce the volume of discharge during CSO events, with some RTC models demonstrating a reduction upwards of 90% when compared to traditional passive systems. As more stormwater elements become retrofitted for RTC, system-level RTC across complete watersheds is an attainable possibility. However, when considering the diverse set of control needs of each of these individual stormwater elements, such system-level RTC becomes a far more complex problem. To address such diverse control needs, agent-based modeling is employed such that each individual stormwater element is treated as an autonomous agent with a diverse decision making capabilities. We present preliminary results and limitations of utilizing the agent-based modeling computational framework for the system-level control of diverse, interacting

  7. Performance evaluation and modeling of a conformal filter (CF) based real-time standoff hazardous material detection sensor

    Science.gov (United States)

    Nelson, Matthew P.; Tazik, Shawna K.; Bangalore, Arjun S.; Treado, Patrick J.; Klem, Ethan; Temple, Dorota

    2017-05-01

Hyperspectral imaging (HSI) systems can provide detection and identification of a variety of targets in the presence of complex backgrounds. However, current generation sensors are typically large, costly to field, do not usually operate in real time and have limited sensitivity and specificity. Despite these shortcomings, HSI-based intelligence has proven to be a valuable tool, thus resulting in increased demand for this type of technology. By moving the next generation of HSI technology into a more adaptive configuration, and a smaller and more cost-effective form factor, HSI technologies can help maintain a competitive advantage for the U.S. armed forces as well as local, state and federal law enforcement agencies. Operating near the physical limits of HSI system capability is often necessary and very challenging, but is often enabled by rigorous modeling of detection performance. Specific performance envelopes we consistently strive to improve include: operating under low signal-to-background conditions; at higher and higher frame rates; and under less than ideal motion control scenarios. An adaptable, low-cost, low-footprint, standoff sensor architecture we have been maturing includes the use of conformal liquid crystal tunable filters (LCTFs). These Conformal Filters (CFs) are electro-optically tunable, multivariate HSI spectrometers that, when combined with Dual Polarization (DP) optics, produce optimized spectral passbands on demand, which can readily be reconfigured to discriminate targets from complex backgrounds in real time. With DARPA support, ChemImage Sensor Systems (CISS™), in collaboration with Research Triangle Institute (RTI) International, is developing a novel, real-time, adaptable, compressive-sensing short-wave infrared (SWIR) hyperspectral imaging technology called the Reconfigurable Conformal Imaging Sensor (RCIS) based on DP-CF technology. RCIS will address many shortcomings of current generation systems and offer improvements in

  8. An actor-based model of social network influence on adolescent body size, screen time, and playing sports.

    Directory of Open Access Journals (Sweden)

    David A Shoham

Full Text Available Recent studies suggest that obesity may be "contagious" between individuals in social networks. Social contagion (influence), however, may not be identifiable using traditional statistical approaches because they cannot distinguish contagion from homophily (the propensity for individuals to select friends who are similar to themselves) or from shared environmental influences. In this paper, we apply the stochastic actor-based model (SABM) framework developed by Snijders and colleagues to data on adolescent body mass index (BMI), screen time, and playing active sports. Our primary hypothesis was that social influences on adolescent body size and related behaviors are independent of friend selection. Employing the SABM, we simultaneously modeled network dynamics (friendship selection based on homophily and structural characteristics of the network) and social influence. We focused on the 2 largest schools in the National Longitudinal Study of Adolescent Health (Add Health) and held the school environment constant by examining the 2 school networks separately (N = 624 and 1151). Results show support in both schools for homophily on BMI, but also for social influence on BMI. There was no evidence of homophily on screen time in either school, while only one of the schools showed homophily on playing active sports. There was, however, evidence of social influence on screen time in one of the schools, and playing active sports in both schools. These results suggest that both homophily and social influence are important in understanding patterns of adolescent obesity. Intervention efforts should take into consideration peers' influence on one another, rather than treating "high risk" adolescents in isolation.

  9. GIS model-based real-time hydrological forecasting and operation management system for the Lake Balaton and its watershed

    Science.gov (United States)

    Adolf Szabó, János; Zoltán Réti, Gábor; Tóth, Tünde

    2017-04-01

Today, the most significant mission of decision makers on integrated water management issues is to carry out sustainable management, sharing the resources between a variety of users and the environment under conditions of considerable uncertainty (such as climate/land-use/population/etc. change). In light of this increasing water management complexity, we consider that the most pressing need is to develop and implement up-to-date GIS model-based real-time hydrological forecasting and operation management systems for aiding decision-making processes to improve water management. After years of research and development, HYDROInform Ltd. has developed an integrated, on-line IT system (DIWA-HFMS: DIstributed WAtershed - Hydrologic Forecasting & Modelling System) which is able to support a wide range of operational tasks in water resources management, such as: forecasting, operation of lakes and reservoirs, water control and management, etc. Following a test period, the DIWA-HFMS has been implemented for Lake Balaton and its watershed (at 500 m resolution) at the Central-Transdanubian Water Directorate (KDTVIZIG). The significant pillars of the system are: - The DIWA (DIstributed WAtershed) hydrologic model, which is a 3D dynamic water-balance model, distributed both in space and in its parameters, developed along combined principles but mostly based on physical foundations. The DIWA integrates 3D soil-, 2D surface-, and 1D channel-hydraulic components as well. - A lakes-and-reservoirs operating component; - A radar-data integration module; - Fully online data collection tools; - A scenario manager tool to create alternative scenarios; - An interactive, intuitive, highly graphical user interface. In Vienna, the main functions, operations and results-management of the system will be presented.

  10. A meshless EFG-based algorithm for 3D deformable modeling of soft tissue in real-time.

    Science.gov (United States)

    Abdi, Elahe; Farahmand, Farzam; Durali, Mohammad

    2012-01-01

    The meshless element-free Galerkin method was generalized and an algorithm was developed for 3D dynamic modeling of deformable bodies in real time. The efficacy of the algorithm was investigated in a 3D linear viscoelastic model of human spleen subjected to a time-varying compressive force exerted by a surgical grasper. The model remained stable in spite of the considerably large deformations occurred. There was a good agreement between the results and those of an equivalent finite element model. The computational cost, however, was much lower, enabling the proposed algorithm to be effectively used in real-time applications.

  11. Modelling urban travel times

    NARCIS (Netherlands)

    Zheng, F.

    2011-01-01

    Urban travel times are intrinsically uncertain due to a lot of stochastic characteristics of traffic, especially at signalized intersections. A single travel time does not have much meaning and is not informative to drivers or traffic managers. The range of travel times is large such that certain

  12. Matrix-algebra-based calculations of the time evolution of the binary spin-bath model for magnetization transfer.

    Science.gov (United States)

    Müller, Dirk K; Pampel, André; Möller, Harald E

    2013-05-01

Quantification of magnetization-transfer (MT) experiments is typically based on the assumption of the binary spin-bath model. This model allows for the extraction of up to six parameters (relative pool sizes, relaxation times, and exchange rate constants) for the characterization of macromolecules, which are coupled via exchange processes to the water in tissues. Here, an approach is presented for estimating MT parameters acquired with arbitrary saturation schemes and imaging pulse sequences. It uses matrix algebra to solve the Bloch-McConnell equations without unwarranted simplifications, such as assuming steady-state conditions for pulsed saturation schemes or neglecting imaging pulses. The algorithm achieves sufficient efficiency for voxel-by-voxel MT parameter estimation by using a polynomial interpolation technique. Simulations, as well as experiments in agar gels with continuous-wave and pulsed MT preparation, were performed for validation and for assessing approximations in previous modeling approaches. In vivo experiments in the normal human brain yielded results that were consistent with published data. Copyright © 2013 Elsevier Inc. All rights reserved.
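
    The matrix-algebra step itself is compact: for piecewise-constant conditions, the coupled Bloch-McConnell system evolves by a matrix exponential over each interval. The sketch below propagates only the two longitudinal components of a binary spin-bath (an RF saturation term would be added during pulses); all rates and pool sizes are illustrative.

```python
import numpy as np
from scipy.linalg import expm

R1a, R1b = 1.0, 1.5            # 1/s, free-water and macromolecular pools (assumed)
M0a, M0b = 1.0, 0.15           # relative pool sizes
kab = 2.0                      # a -> b exchange rate (1/s)
kba = kab * M0a / M0b          # detailed balance

# Augmented system d/dt [Mza, Mzb, 1] = A @ [Mza, Mzb, 1] handles the constant term.
A = np.array([[-R1a - kab, kba,        R1a * M0a],
              [kab,        -R1b - kba, R1b * M0b],
              [0.0,        0.0,        0.0      ]])

E = expm(A * 0.1)              # evolution over one 100-ms interval
M = np.array([0.0, 0.0, 1.0])  # both pools saturated at t = 0
for _ in range(10):            # 1 s of recovery in ten intervals
    M = E @ M
print("Mza, Mzb after 1 s:", M[:2])
```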

  13. An Analysis of Delay and Travel Times at Sao Paulo International Airport (AISP/GRU): Planning Based on Simulation Model

    Science.gov (United States)

    Santana, Erico Soriano Martins; Mueller, Carlos

    2003-01-01

The occurrence of flight delays in Brazil, mostly arising on the ground (airfield), is responsible for serious disruptions at the airport level and for a chain of problems across the whole airport system, also affecting the airspace. The present study develops an analysis of delay and travel times at Sao Paulo International Airport/Guarulhos (AISP/GRU) airfield based on a simulation model. Different airport physical and operational scenarios were analyzed by means of simulation. SIMMOD Plus 4.0, a computational tool developed to represent aircraft operation in the airspace and airside of airports, was used to perform these analyses. The study was mainly focused on aircraft operations on the ground: at the airport runway, taxi-lanes and aprons. The visualization of the operations with increasing demand facilitated the analyses. The results generated in this work certify the viability of the methodology; they also indicate solutions capable of solving the delay problem through travel time analysis, thus diminishing costs for users, mainly the airport authority. The study also indicates alternatives for airport operations, assisting the decision-making process and the appropriate timing of the proposed changes to the existing infrastructure.

  14. Introduction to Time Series Modeling

    CERN Document Server

    Kitagawa, Genshiro

    2010-01-01

    In time series modeling, the behavior of a certain phenomenon is expressed in relation to the past values of itself and other covariates. Since many important phenomena in statistical analysis are actually time series and the identification of conditional distribution of the phenomenon is an essential part of the statistical modeling, it is very important and useful to learn fundamental methods of time series modeling. Illustrating how to build models for time series using basic methods, "Introduction to Time Series Modeling" covers numerous time series models and the various tools f

  15. Time-varying coefficient vector autoregressions model based on dynamic correlation with an application to crude oil and stock markets

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Fengbin, E-mail: fblu@amss.ac.cn [Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190 (China); Qiao, Han, E-mail: qiaohan@ucas.ac.cn [School of Economics and Management, University of Chinese Academy of Sciences, Beijing 100190 (China); Wang, Shouyang, E-mail: sywang@amss.ac.cn [School of Economics and Management, University of Chinese Academy of Sciences, Beijing 100190 (China); Lai, Kin Keung, E-mail: mskklai@cityu.edu.hk [Department of Management Sciences, City University of Hong Kong (Hong Kong); Li, Yuze, E-mail: richardyz.li@mail.utoronto.ca [Department of Industrial Engineering, University of Toronto (Canada)

    2017-01-15

    This paper proposes a new time-varying coefficient vector autoregressions (VAR) model, in which the coefficient is a linear function of dynamic lagged correlation. The proposed model allows for flexibility in choices of dynamic correlation models (e.g. dynamic conditional correlation generalized autoregressive conditional heteroskedasticity (GARCH) models, Markov-switching GARCH models and multivariate stochastic volatility models), which indicates that it can describe many types of time-varying causal effects. Time-varying causal relations between West Texas Intermediate (WTI) crude oil and the US Standard and Poor’s 500 (S&P 500) stock markets are examined by the proposed model. The empirical results show that their causal relations evolve with time and display complex characters. Both positive and negative causal effects of the WTI on the S&P 500 in the subperiods have been found and confirmed by the traditional VAR models. Similar results have been obtained in the causal effects of S&P 500 on WTI. In addition, the proposed model outperforms the traditional VAR model.

  16. Time-varying coefficient vector autoregressions model based on dynamic correlation with an application to crude oil and stock markets

    International Nuclear Information System (INIS)

    Lu, Fengbin; Qiao, Han; Wang, Shouyang; Lai, Kin Keung; Li, Yuze

    2017-01-01

    This paper proposes a new time-varying coefficient vector autoregressions (VAR) model, in which the coefficient is a linear function of dynamic lagged correlation. The proposed model allows for flexibility in choices of dynamic correlation models (e.g. dynamic conditional correlation generalized autoregressive conditional heteroskedasticity (GARCH) models, Markov-switching GARCH models and multivariate stochastic volatility models), which indicates that it can describe many types of time-varying causal effects. Time-varying causal relations between West Texas Intermediate (WTI) crude oil and the US Standard and Poor’s 500 (S&P 500) stock markets are examined by the proposed model. The empirical results show that their causal relations evolve with time and display complex characters. Both positive and negative causal effects of the WTI on the S&P 500 in the subperiods have been found and confirmed by the traditional VAR models. Similar results have been obtained in the causal effects of S&P 500 on WTI. In addition, the proposed model outperforms the traditional VAR model.

  17. Time-varying coefficient vector autoregressions model based on dynamic correlation with an application to crude oil and stock markets.

    Science.gov (United States)

    Lu, Fengbin; Qiao, Han; Wang, Shouyang; Lai, Kin Keung; Li, Yuze

    2017-01-01

    This paper proposes a new time-varying coefficient vector autoregressions (VAR) model, in which the coefficient is a linear function of dynamic lagged correlation. The proposed model allows for flexibility in choices of dynamic correlation models (e.g. dynamic conditional correlation generalized autoregressive conditional heteroskedasticity (GARCH) models, Markov-switching GARCH models and multivariate stochastic volatility models), which indicates that it can describe many types of time-varying causal effects. Time-varying causal relations between West Texas Intermediate (WTI) crude oil and the US Standard and Poor's 500 (S&P 500) stock markets are examined by the proposed model. The empirical results show that their causal relations evolve with time and display complex characters. Both positive and negative causal effects of the WTI on the S&P 500 in the subperiods have been found and confirmed by the traditional VAR models. Similar results have been obtained in the causal effects of S&P 500 on WTI. In addition, the proposed model outperforms the traditional VAR model. Copyright © 2016 Elsevier Ltd. All rights reserved.
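
    Since the same record appears three times above, one sketch suffices: the proposal amounts to letting the VAR(1) coefficient be linear in a dynamic lagged correlation. The toy below uses a rolling-window correlation where the paper would plug in a DCC-GARCH or stochastic-volatility estimate, and estimates the coefficients by OLS with an interaction regressor; the data are synthetic stand-ins for WTI and S&P 500 returns.

```python
import numpy as np

rng = np.random.default_rng(0)
T, w = 600, 60
y = rng.standard_normal((T, 2))               # columns: "WTI", "S&P 500" returns
y[1:, 1] += 0.2 * y[:-1, 0]                   # planted lagged effect to recover

rho = np.full(T, np.nan)                      # rolling correlation (DCC stand-in)
for t in range(w, T):
    rho[t] = np.corrcoef(y[t - w:t, 0], y[t - w:t, 1])[0, 1]

# y_t = c + (A0 + A1 * rho_{t-1}) y_{t-1} + e, estimated per equation by OLS.
mask = ~np.isnan(rho[:-1])
X = np.column_stack([np.ones(T - 1), y[:-1], rho[:-1, None] * y[:-1]])[mask]
Y = y[1:][mask]
B, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Time-varying effect of WTI on S&P 500 at time t: B[1, 1] + B[3, 1] * rho[t].
print("A0, A1 for WTI -> S&P 500:", B[1, 1], B[3, 1])
```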

  18. The Drift Diffusion Model can account for the accuracy and reaction time of value-based choices under high and low time pressure

    Directory of Open Access Journals (Sweden)

    Milica Milosavljevic

    2010-10-01

Full Text Available An important open problem is how values are compared to make simple choices. A natural hypothesis is that the brain carries out the computations associated with the value comparisons in a manner consistent with the Drift Diffusion Model (DDM), since this model has been able to account for a large amount of data in other domains. We investigated the ability of four different versions of the DDM to explain the data in a real binary food choice task under conditions of high and low time pressure. We found that a seven-parameter version of the DDM can account for the choice and reaction time data with high accuracy in both the high and low time pressure conditions. The changes associated with the introduction of time pressure could be traced to changes in two key model parameters: the barrier height and the noise in the slope of the drift process.
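
    A minimal simulation of the basic DDM clarifies the mechanics (the seven-parameter version adds components such as non-decision-time and starting-point variability): relative evidence drifts at a rate proportional to the value difference until it hits a barrier, and lowering the barrier mimics time pressure. Parameter values below are illustrative.

```python
import numpy as np

def ddm_trial(v_left, v_right, d=0.001, sigma=0.05, barrier=1.0,
              ndt=0.3, rng=None):
    """One value-based trial: evidence x accumulates in 1-ms steps until it
    hits +barrier (choose left) or -barrier (choose right)."""
    rng = rng or np.random.default_rng()
    drift = d * (v_left - v_right)      # drift scales with the value difference
    x, ms = 0.0, 0
    while abs(x) < barrier:
        x += drift + sigma * rng.standard_normal()
        ms += 1
    return ("left" if x > 0 else "right"), ndt + ms / 1000.0

rng = np.random.default_rng(0)
trials = [ddm_trial(4, 2, rng=rng) for _ in range(1000)]
acc = np.mean([choice == "left" for choice, _ in trials])
rt = np.mean([t for _, t in trials])
print(f"accuracy {acc:.2f}, mean RT {rt:.2f} s")   # lower barrier: faster, less accurate
```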

  19. Multi-Scale Particle Size Distributions of Mars, Moon and Itokawa based on a time-maturation dependent fragmentation model

    Science.gov (United States)

    Charalambous, C. A.; Pike, W. T.

    2013-12-01

We present the development of a soil evolution framework and multiscale modelling of the surfaces of Mars, the Moon and Itokawa, thus providing an atlas of extra-terrestrial Particle Size Distributions (PSD). These PSDs are based on a tailoring method which interconnects several datasets from different sites captured by the various missions. The final integrated product is then fully justified through a soil evolution analysis model mathematically constructed via fundamental physical principles (Charalambous, 2013). The construction of the PSD takes into account the macroscale fresh primary impacts and their products, the mesoscale distributions obtained by the in-situ data of surface missions (Golombek et al., 1997, 2012) and finally the microscopic-scale distributions provided by Curiosity and the Phoenix Lander (Pike, 2011). The distribution naturally extends to the magnitude scales at which current data do not exist due to the lack of scientific instruments capturing the populations at these data-absent scales. The extension is based on the model distribution (Charalambous, 2013), which takes as parameters known values of material-specific probabilities of fragmentation and grinding limits. Additionally, the establishment of a closed-form statistical distribution provides a quantitative description of the soil's structure. Consequently, reverse engineering of the model distribution allows the synthesis of soil that faithfully represents the particle population at the studied sites (Charalambous, 2011). Such representation essentially delivers a virtual soil environment to work with for numerous applications. A specific application demonstrated here is the information that can directly be extracted on the probability of successful drilling as a function of distance, in an effort to aid the HP3 instrument of the 2016 InSight Mission to Mars. Pike, W. T., et al. "Quantification of the dry history of the Martian soil inferred from in situ microscopy

  20. Energy-based method for near-real time modeling of sound field in complex urban environments.

    Science.gov (United States)

    Pasareanu, Stephanie M; Remillieux, Marcel C; Burdisso, Ricardo A

    2012-12-01

Prediction of the sound field in large urban environments has been limited thus far by the heavy computational requirements of conventional numerical methods such as boundary element (BE) or finite-difference time-domain (FDTD) methods. Recently, a considerable amount of work has been devoted to developing energy-based methods for this application, and results have shown the potential to compete with conventional methods. However, these developments have been limited to two-dimensional (2-D) studies (along street axes), and no complete description of the phenomena at issue has been given. Here the mathematical theory of diffusion is used to predict the sound field in 3-D complex urban environments. A 3-D diffusion equation is implemented by means of a simple finite-difference scheme and applied to two different types of urban configurations. This modeling approach is validated against FDTD and geometrical acoustics (GA) solutions, showing good overall agreement. The role played by diffraction near building edges close to the source is discussed, and suggestions are made on the possibility of predicting the sound field accurately in complex urban environments in near-real-time simulations.
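
    The diffusion model referred to propagates acoustic energy density w roughly as dw/dt = D lap(w) - sigma*w + q, with D set by the mean free path of the propagation volume. A minimal explicit finite-difference step (periodic boundaries via np.roll; real implementations impose wall conditions at facades) illustrates the scheme; every coefficient below is an assumed, illustrative value.

```python
import numpy as np

nx = ny = nz = 40
dx = 1.0                            # m
c, lam = 343.0, 5.0                 # speed of sound, assumed mean free path (m)
D = lam * c / 3.0                   # diffusion coefficient of the acoustic model
sigma = 0.5                         # illustrative absorption/loss term (1/s)
dt = dx**2 / (8 * D)                # explicit-scheme stability margin

w = np.zeros((nx, ny, nz))          # acoustic energy density
src = (20, 20, 2)                   # source cell near the ground

for _ in range(500):
    lap = (-6 * w
           + np.roll(w, 1, 0) + np.roll(w, -1, 0)
           + np.roll(w, 1, 1) + np.roll(w, -1, 1)
           + np.roll(w, 1, 2) + np.roll(w, -1, 2)) / dx**2
    w += dt * (D * lap - sigma * w)
    w[src] += dt * 1.0              # constant source injection
print("energy density at a receiver:", w[30, 30, 2])
```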

  1. GPU-based parallel computing in real-time modeling of atmospheric transport and diffusion of radioactive material

    International Nuclear Information System (INIS)

    Santos, Marcelo C. dos; Pereira, Claudio M.N.A.; Schirru, Roberto; Pinheiro, André; Coordenacao de Pos-Graduacao e Pesquisa de Engenharia

    2017-01-01

    Atmospheric radionuclide dispersion systems (ARDS) are essential mechanisms for predicting the consequences of unexpected radioactive releases from nuclear power plants: during an accident involving a radioactive material release, an accurate forecast is vital to guide the evacuation plan for the possibly affected areas. In order to predict the dispersion of the radioactive material and its impact on the environment, the model must process information about the source term (radioactive materials released, activities and location), weather conditions (wind, humidity and precipitation) and geographical characteristics (topography). An ARDS is basically composed of four main modules: Source Term, Wind Field, Plume Dispersion and Dose Calculations. The Wind Field and Plume Dispersion modules are the ones that require high computational performance to achieve accurate results within an acceptable time. Taking this into account, this work focuses on the development of a GPU-based parallel Plume Dispersion module, focusing on the radionuclide transport and diffusion calculations, which use a given wind field and a released source term as parameters. The program is being developed in the C++ programming language with CUDA libraries. In a comparative case study between parallel and sequential versions of the slowest function of the Plume Dispersion module, a speedup of 11.63 times was observed. (author)
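
    A hedged sketch of the per-cell update such a Plume Dispersion module parallelizes: one explicit advection-diffusion step for a concentration field given a wind field and a released source term. The paper's module is C++/CUDA; this NumPy version only illustrates the numerics, and all grid and physical values are illustrative.

```python
import numpy as np

n, dx, dt, K = 200, 100.0, 1.0, 50.0        # grid, spacing (m), step (s), eddy diffusivity
c = np.zeros((n, n)); c[100, 100] = 1e3     # assumed released source term
u = np.full((n, n), 5.0); v = np.full((n, n), 1.0)  # given wind field, m/s

def step(c):
    # upwind advection (valid for u, v > 0) plus centred diffusion
    adv_x = u * (c - np.roll(c, 1, axis=1)) / dx
    adv_y = v * (c - np.roll(c, 1, axis=0)) / dx
    lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
           np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4 * c) / dx**2
    return c + dt * (K * lap - adv_x - adv_y)

for _ in range(600):                        # every cell updates independently,
    c = step(c)                             # which is what makes GPU mapping natural
```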

  2. GPU-based parallel computing in real-time modeling of atmospheric transport and diffusion of radioactive material

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Marcelo C. dos; Pereira, Claudio M.N.A.; Schirru, Roberto; Pinheiro, André, E-mail: jovitamarcelo@gmail.com, E-mail: cmnap@ien.gov.br, E-mail: schirru@lmp.ufrj.br, E-mail: apinheiro99@gmail.com [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil); Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear

    2017-07-01

    Atmospheric radionuclide dispersion systems (ARDS) are essential mechanisms for predicting the consequences of unexpected radioactive releases from nuclear power plants: during an accident involving a radioactive material release, an accurate forecast is vital to guide the evacuation plan for the possibly affected areas. In order to predict the dispersion of the radioactive material and its impact on the environment, the model must process information about the source term (radioactive materials released, activities and location), weather conditions (wind, humidity and precipitation) and geographical characteristics (topography). An ARDS is basically composed of four main modules: Source Term, Wind Field, Plume Dispersion and Dose Calculations. The Wind Field and Plume Dispersion modules are the ones that require high computational performance to achieve accurate results within an acceptable time. Taking this into account, this work focuses on the development of a GPU-based parallel Plume Dispersion module, focusing on the radionuclide transport and diffusion calculations, which use a given wind field and a released source term as parameters. The program is being developed in the C++ programming language with CUDA libraries. In a comparative case study between parallel and sequential versions of the slowest function of the Plume Dispersion module, a speedup of 11.63 times was observed. (author)

  3. GPS-based microenvironment tracker (MicroTrac) model to estimate time-location of individuals for air pollution exposure assessments: model evaluation in central North Carolina.

    Science.gov (United States)

    Breen, Michael S; Long, Thomas C; Schultz, Bradley D; Crooks, James; Breen, Miyuki; Langstaff, John E; Isaacs, Kristin K; Tan, Yu-Mei; Williams, Ronald W; Cao, Ye; Geller, Andrew M; Devlin, Robert B; Batterman, Stuart A; Buckley, Timothy J

    2014-07-01

    A critical aspect of air pollution exposure assessment is the estimation of the time spent by individuals in various microenvironments (ME). Accounting for the time spent in different ME with different pollutant concentrations can reduce exposure misclassifications, while failure to do so can add uncertainty and bias to risk estimates. In this study, a classification model, called MicroTrac, was developed to estimate time of day and duration spent in eight ME (indoors and outdoors at home, work, school; inside vehicles; other locations) from global positioning system (GPS) data and geocoded building boundaries. Based on a panel study, MicroTrac estimates were compared with 24-h diary data from nine participants, with corresponding GPS data and building boundaries of home, school, and work. MicroTrac correctly classified the ME for 99.5% of the daily time spent by the participants. The capability of MicroTrac could help to reduce the time-location uncertainty in air pollution exposure models and exposure metrics for individuals in health studies.
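
    The core geometric test in a MicroTrac-style classifier is assigning each GPS fix to a microenvironment by point-in-polygon checks against geocoded building boundaries. A hedged sketch follows; the boundary coordinates, buffer width, and speed threshold are illustrative, and the published model also uses additional rules (e.g., dwell time) not shown here.

```python
from shapely.geometry import Point, Polygon

# hypothetical geocoded building boundaries (planar coordinates, m)
buildings = {
    "home": Polygon([(0, 0), (0, 20), (15, 20), (15, 0)]),
    "work": Polygon([(100, 50), (100, 90), (140, 90), (140, 50)]),
}

def classify_fix(x, y, speed_ms):
    p = Point(x, y)
    for name, poly in buildings.items():
        if poly.contains(p):
            return f"indoor_{name}"          # fix inside a known boundary
        if poly.buffer(10).contains(p):      # within 10 m of the boundary
            return f"outdoor_{name}"
    return "in_vehicle" if speed_ms > 5 else "other_location"

print(classify_fix(5, 5, 0.2))   # -> indoor_home
```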

  4. Draft Forecasts from Real-Time Runs of Physics-Based Models - A Road to the Future

    Science.gov (United States)

    Hesse, Michael; Rastatter, Lutz; MacNeice, Peter; Kuznetsova, Masha

    2008-01-01

    The Community Coordinated Modeling Center (CCMC) is a US inter-agency activity aiming at research in support of the generation of advanced space weather models. As one of its main functions, the CCMC provides to researchers the use of space science models, even if they are not model owners themselves. The second focus of CCMC activities is on validation and verification of space weather models, and on the transition of appropriate models to space weather forecast centers. As part of the latter activity, the CCMC develops real-time simulation systems that stress models through routine execution. A by-product of these real-time calculations is the ability to derive model products, which may be useful for space weather operators. After consultations with NOAA/SEC and with AFWA, CCMC has developed a set of tools as a first step to make real-time model output useful to forecast centers. In this presentation, we will discuss the motivation for this activity, the actions taken so far, and options for future tools from model output.

  5. Research on Healthy Anomaly Detection Model Based on Deep Learning from Multiple Time-Series Physiological Signals

    Directory of Open Access Journals (Sweden)

    Kai Wang

    2016-01-01

    Health is vital to every human being. To further improve its already respectable medical technology, the medical community is transitioning towards a proactive approach that anticipates and mitigates risks before people become ill. This approach requires measuring human physiological signals and analyzing these data at regular intervals. In this paper, we present a novel approach that applies deep learning to physiological signal analysis, allowing doctors to identify latent risks. However, extracting high-level information from physiological time-series data is a hard problem for the machine learning community. Therefore, in this approach, we apply a model based on a convolutional neural network that can automatically learn features from raw physiological signals in an unsupervised manner, and then, based on the learned features, use a multivariate Gaussian distribution anomaly detection method to detect anomalous data. Our experiments show significant performance in physiological signal anomaly detection, so this is a promising tool for doctors to identify early signs of illness even if the criteria are unknown a priori.
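
    A minimal sketch of the anomaly-detection stage described above: fit a multivariate Gaussian to feature vectors (here random stand-ins for the CNN-learned features) and flag low-density points. The feature dimension and density threshold are illustrative assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
train = rng.normal(size=(1000, 8))    # features extracted from healthy records (stand-in)
mu = train.mean(axis=0)
cov = np.cov(train, rowvar=False)
model = multivariate_normal(mean=mu, cov=cov)

def is_anomaly(x, eps=1e-6):
    """Flag a feature vector whose density falls below the threshold eps."""
    return model.pdf(x) < eps

print(is_anomaly(np.zeros(8)), is_anomaly(np.full(8, 6.0)))  # -> False True
```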

  6. Mixed Hitting-Time Models

    NARCIS (Netherlands)

    Abbring, J.H.

    2009-01-01

    We study mixed hitting-time models, which specify durations as the first time a Levy process (a continuous-time process with stationary and independent increments) crosses a heterogeneous threshold. Such models are of substantial interest because they can be reduced from optimal-stopping models with

  7. Developing a Time Series Predictive Model for Dengue in Zhongshan, China Based on Weather and Guangzhou Dengue Surveillance Data.

    Science.gov (United States)

    Zhang, Yingtao; Wang, Tao; Liu, Kangkang; Xia, Yao; Lu, Yi; Jing, Qinlong; Yang, Zhicong; Hu, Wenbiao; Lu, Jiahai

    2016-02-01

    Dengue is a re-emerging infectious disease of humans, rapidly spreading from endemic areas into dengue-free regions under favorable conditions. In recent decades, Guangzhou has again suffered from several big outbreaks of dengue, as have its neighboring cities. This study aims to examine the impact of dengue epidemics in Guangzhou, China, and to develop a predictive model for Zhongshan based on local weather conditions and Guangzhou dengue surveillance information. We obtained weekly dengue case data from 1st January, 2005 to 31st December, 2014 for Guangzhou and Zhongshan city from the Chinese National Disease Surveillance Reporting System. Meteorological data were collected from the Zhongshan Weather Bureau and demographic data were collected from the Zhongshan Statistical Bureau. A negative binomial regression model with a log link function was used to analyze the relationship between weekly dengue cases in Guangzhou and Zhongshan, controlling for meteorological factors. Cross-correlation functions were applied to identify the time lags of the effect of each weather factor on weekly dengue cases. Models were validated using receiver operating characteristic (ROC) curves and k-fold cross-validation. Our results showed that weekly dengue cases in Zhongshan were significantly associated with dengue cases in Guangzhou after applying a 5-week prior moving average (Relative Risk (RR) = 2.016, 95% Confidence Interval (CI): 1.845-2.203), controlling for weather factors including minimum temperature, relative humidity, and rainfall. ROC curve analysis indicated our forecasting model performed well at different prediction thresholds, with 0.969 area under the receiver operating characteristic curve (AUC) for a threshold of 3 cases per week, 0.957 AUC for a threshold of 2 cases per week, and 0.938 AUC for a threshold of 1 case per week. Models established during k-fold cross-validation also had considerable AUC (average 0.938-0.967). The sensitivity and specificity
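
    The core regression is straightforward to reproduce with statsmodels: a negative binomial GLM with log link relating weekly Zhongshan cases to lagged, 5-week-averaged Guangzhou cases plus weather covariates. The data file and column names below are hypothetical stand-ins; only the model family and the moving-average treatment come from the abstract.

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("zhongshan_weekly.csv")        # hypothetical weekly dataset
# 5-week moving average of prior Guangzhou cases, as in the abstract
df["gz_ma5"] = df["guangzhou_cases"].shift(1).rolling(5).mean()

X = sm.add_constant(df[["gz_ma5", "min_temp", "rel_humidity", "rainfall"]].dropna())
y = df.loc[X.index, "zhongshan_cases"]

model = sm.GLM(y, X, family=sm.families.NegativeBinomial()).fit()
print(model.summary())                          # exp(coef) gives relative risks
```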

  8. Developing a Time Series Predictive Model for Dengue in Zhongshan, China Based on Weather and Guangzhou Dengue Surveillance Data.

    Directory of Open Access Journals (Sweden)

    Yingtao Zhang

    2016-02-01

    Dengue is a re-emerging infectious disease of humans, rapidly spreading from endemic areas into dengue-free regions under favorable conditions. In recent decades, Guangzhou has again suffered from several big outbreaks of dengue, as have its neighboring cities. This study aims to examine the impact of dengue epidemics in Guangzhou, China, and to develop a predictive model for Zhongshan based on local weather conditions and Guangzhou dengue surveillance information. We obtained weekly dengue case data from 1st January, 2005 to 31st December, 2014 for Guangzhou and Zhongshan city from the Chinese National Disease Surveillance Reporting System. Meteorological data were collected from the Zhongshan Weather Bureau and demographic data were collected from the Zhongshan Statistical Bureau. A negative binomial regression model with a log link function was used to analyze the relationship between weekly dengue cases in Guangzhou and Zhongshan, controlling for meteorological factors. Cross-correlation functions were applied to identify the time lags of the effect of each weather factor on weekly dengue cases. Models were validated using receiver operating characteristic (ROC) curves and k-fold cross-validation. Our results showed that weekly dengue cases in Zhongshan were significantly associated with dengue cases in Guangzhou after applying a 5-week prior moving average (Relative Risk (RR) = 2.016, 95% Confidence Interval (CI): 1.845-2.203), controlling for weather factors including minimum temperature, relative humidity, and rainfall. ROC curve analysis indicated our forecasting model performed well at different prediction thresholds, with 0.969 area under the receiver operating characteristic curve (AUC) for a threshold of 3 cases per week, 0.957 AUC for a threshold of 2 cases per week, and 0.938 AUC for a threshold of 1 case per week. Models established during k-fold cross-validation also had considerable AUC (average 0.938-0.967). The sensitivity and

  9. A field and glacier modelling based approach to determine the timing and extent of glaciation in southern Africa

    Science.gov (United States)

    Mills, Stephanie C.; Rowan, Ann V.; Barrow, Timothy T.; Plummer, Mitchell A.; Smith, Michael; Grab, Stefan W.; Carr, Simon J.; Fifield, L. Keith

    2014-05-01

    Moraines identified at high-altitude sites in southern Africa and dated to the last glacial maximum (LGM) indicate that the climate in this region was cold enough to support glaciers. Small glaciers are very sensitive to changes in temperature and precipitation and the identification of LGM moraines in southern Africa has important palaeoclimatic implications concerning the magnitude of temperature change and the seasonality of precipitation during the last glacial cycle. This paper presents a refined time-frame for likely glaciations based on surface exposure dating using Cl-36 at sites in Lesotho and reports results of a 2D glacier energy balance and ice flow modelling approach (Plummer and Phillips, 2003) to evaluate the most likely climatic scenarios associated with mapped moraine limits. Samples for surface exposure dating were collected from glacially eroded bedrock at several locations and yield ages within the timescale of the LGM. Scatter in the ages may be due to insufficient erosion of the bedrock surface due to the small and relatively thin nature of the glaciers. To determine the most likely climatic conditions that may have caused the glaciers to reach their mapped extent, we use a glacier-climate model, driven by data from local weather stations and a 30m (ASTER) DEM (sub-sampled to 10m) representation of the topographic surface. The model is forced using modern climate data for primary climatic controls (temperature and precipitation) and for secondary climatic parameters (relative humidity, cloudiness, wind speed). Various sensitivity tests were run by dropping temperature by small increments and by varying the amount of precipitation and its seasonality relative to present-day values. Results suggest that glaciers could have existed in the Lesotho highlands with a temperature depression of ~5-6 ºC and that the glaciers were highly sensitive to small changes in temperature. The additional accumulation of mass through wind redistribution appears to

  10. A multi-component and multi-failure mode inspection model based on the delay time concept

    International Nuclear Information System (INIS)

    Wang Wenbin; Banjevic, Dragan; Pecht, Michael

    2010-01-01

    The delay time concept and the techniques developed for modelling and optimising plant inspection practices have been reported in many papers and case studies. For a system comprised of many components and subject to many different failure modes, one of the most convenient ways to model the inspection and failure processes is to use a stochastic point process for defect arrivals and a common delay time distribution for the duration between defect arrival and failure, applied to all defects. This is an approximation, but it has been proven to be valid when the number of components is large. However, for a system with just a few key components subject to few major failure modes, the approximation may be poor. In this paper, a model is developed to address this situation, where each component and failure mode is modelled individually and then pooled together to form the system inspection model. Since inspections are usually scheduled for the whole system rather than individual components, we then formulate the inspection model for the case where the time to the next inspection from the point of a component failure renewal is random. This introduces some complication into the model, and an asymptotic solution was found. Simulation algorithms have also been proposed as a comparison with the analytical results. A numerical example is presented to demonstrate the model.
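
    The delay time concept is easy to illustrate by Monte Carlo for a single component: defects arrive as a Poisson process and each turns into a failure after a random delay unless an inspection falls inside the delay window. The arrival rate, mean delay, and horizon below are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, mean_delay, horizon = 0.1, 20.0, 1000.0   # defects/day, days, days (assumed)

def expected_failures(T, n_runs=2000):
    """Average number of failures when inspecting every T days."""
    fails = 0
    for _ in range(n_runs):
        t = 0.0
        while True:
            t += rng.exponential(1 / lam)       # next defect arrival
            if t > horizon:
                break
            fail_time = t + rng.exponential(mean_delay)   # delay time
            next_insp = np.ceil(t / T) * T      # first inspection after arrival
            if fail_time < next_insp:           # failure precedes detection
                fails += 1
    return fails / n_runs

for T in (10, 30, 90):                          # shorter intervals catch more defects
    print(T, round(expected_failures(T), 1))
```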

  11. High-order fuzzy time-series based on multi-period adaptation model for forecasting stock markets

    Science.gov (United States)

    Chen, Tai-Liang; Cheng, Ching-Hsue; Teoh, Hia-Jong

    2008-02-01

    Stock investors usually make their short-term investment decisions according to recent stock information such as the latest market news, technical analysis reports, and price fluctuations. To reflect these short-term factors which impact stock price, this paper proposes a comprehensive fuzzy time-series model, which factors both linear relationships between recent periods of stock prices and fuzzy logical relationships (nonlinear relationships) mined from the time series into the forecasting process. In the empirical analysis, the TAIEX (Taiwan Stock Exchange Capitalization Weighted Stock Index) and HSI (Hang Seng Index) are employed as experimental datasets, and four recent fuzzy time-series models, Chen's (1996), Yu's (2005), Cheng's (2006) and Chen's (2007), are used as comparison models. In addition, to compare with a conventional statistical method, the method of least squares is utilized to estimate auto-regressive models for the testing periods within the databases. The performance comparisons indicate that the multi-period adaptation model proposed in this paper can effectively improve the forecasting performance of conventional fuzzy time-series models which only factor fuzzy logical relationships into the forecasting process. From the empirical study, both the traditional statistical method and the proposed model reveal that stock price patterns in the Taiwan and Hong Kong stock markets are short-term.

  12. Accounting for the Decreasing Denitrification Potential of Aquifers in Travel-Time Based Reactive-Transport Models of Nitrate

    Science.gov (United States)

    Cirpka, O. A.; Loschko, M.; Wöhling, T.; Rudolph, D. L.

    2017-12-01

    Excess nitrate concentrations pose a threat to drinking-water production from groundwater in all regions of intensive agriculture worldwide. Natural organic matter, pyrite, and other reduced constituents of the aquifer matrix can be oxidized by aerobic and denitrifying bacteria, leading to self-cleaning of groundwater. Various studies have shown that the heterogeneity of both hydraulic and chemical aquifer properties influences the reactive behavior. Since the exact spatial distributions of these properties are not known, predictions of the temporal evolution of nitrate should be probabilistic. However, the computational effort of PDE-based, spatially explicit multi-component reactive-transport simulations is so high that multiple model runs become impossible. Conversely, simplistic models that treat denitrification as a first-order decay process miss important controls on denitrification. We have proposed a Lagrangian framework of nonlinear reactive transport, in which the electron-donor supply by the aquifer matrix is parameterized by a relative reactivity, that is, the reaction rate relative to a standard reaction rate for identical solute concentrations (Loschko et al., 2016). We could show that reactive transport simplifies to solving a single ordinary differential equation in terms of the cumulative relative reactivity for a given combination of inflow concentrations. Simulating 3-D flow and reactive transport then becomes computationally so inexpensive that Monte Carlo simulations become feasible. The original scheme did not consider a change of the relative reactivity over time, implying that the electron-donor pool in the matrix is infinite. We have modified the scheme to address the consumption of the reducing aquifer constituents by the reactions. We also analyzed what a minimally complex model of aerobic respiration and denitrification could look like. With the revised scheme, we performed Monte Carlo simulations in 3-D domains, confirming that the uncertainty in
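
    A hedged sketch of the central simplification: along a trajectory, oxygen and nitrate consumption reduce to an ODE in the cumulative relative reactivity τ, with aerobic respiration suppressing denitrification. The rate constants, inhibition scale, and inflow concentrations below are illustrative, not the minimally complex model of the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

k_o2, k_no3, o2_inhib = 1.0, 0.5, 0.3   # standard rates and O2 inhibition scale (assumed)

def rhs(tau, y):
    """Concentration changes per unit cumulative relative reactivity."""
    o2, no3 = y
    r_o2 = k_o2 * o2
    r_no3 = k_no3 * no3 / (1 + o2 / o2_inhib)   # denitrification inhibited by O2
    return [-r_o2, -r_no3]

sol = solve_ivp(rhs, (0, 20), [8.0, 50.0], dense_output=True)  # mg/L inflow (assumed)
print(sol.sol(10.0))   # concentrations at cumulative reactivity tau = 10
```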

  13. Evaluating the Generalization Value of Process-based Models in a Deep-in-time Machine Learning framework

    Science.gov (United States)

    Shen, C.; Fang, K.

    2017-12-01

    Deep Learning (DL) methods have made revolutionary strides in recent years. A core value proposition of DL is that abstract notions and patterns can be extracted purely from data, without the need for domain expertise. Process-based models (PBM), on the other hand, can be regarded as repositories of human knowledge or hypotheses about how systems function. Here, through computational examples, we argue that there is merit in integrating PBMs with DL due to the imbalance and lack of data in many situations, especially in hydrology. We trained a deep-in-time neural network, the Long Short-Term Memory (LSTM), to learn soil moisture dynamics from the Soil Moisture Active Passive (SMAP) Level 3 product. We show that when PBM solutions are integrated into the LSTM, the network is able to generalize better across regions. The LSTM is able to utilize PBM solutions better than simpler statistical methods can. Our results suggest PBMs have generalization value which should be carefully assessed and utilized. We also emphasize that when properly regularized, the deep network is robust and achieves superior testing performance compared to simpler methods.
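
    A minimal PyTorch sketch of the setup described: an LSTM predicting soil moisture from forcing features, with the process-based model's output appended as one extra input channel. All sizes and shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SoilMoistureLSTM(nn.Module):
    def __init__(self, n_forcings=5, hidden=64):
        super().__init__()
        # +1 input: the PBM-simulated soil moisture for the same time step
        self.lstm = nn.LSTM(n_forcings + 1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, forcings, pbm_out):
        x = torch.cat([forcings, pbm_out], dim=-1)   # (batch, time, feat + 1)
        h, _ = self.lstm(x)
        return self.head(h).squeeze(-1)              # (batch, time)

net = SoilMoistureLSTM()
y = net(torch.randn(8, 30, 5), torch.randn(8, 30, 1))  # dummy batch
```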

  14. Processing of recognition information and additional cues: A model-based analysis of choice, confidence, and response time

    Directory of Open Access Journals (Sweden)

    Andreas Glockner

    2011-02-01

    Research on the processing of recognition information has focused on testing the recognition heuristic (RH). At the aggregate level, the noncompensatory use of recognition information postulated by the RH was rejected in several studies, while the RH could still account for a considerable proportion of choices. These results can be explained if either (a) a part of the subjects used the RH or (b) nobody used it but its choice predictions were accidentally in line with predictions of the strategy actually used. In the current study, which exemplifies a new approach to model testing, we determined individuals' decision strategies based on a maximum-likelihood classification method, taking into account choices, response times and confidence ratings simultaneously. Unlike most previous studies of the RH, our study tested the RH under conditions in which we provided information about cue values of unrecognized objects (which we argue is fairly common and thus of some interest). For 77.5% of the subjects, overall behavior was best explained by a compensatory parallel constraint satisfaction (PCS) strategy. The proportion of subjects using an enhanced RH heuristic (RHe) was negligible (up to 7.5%); 15% of the subjects seemed to use a take-the-best strategy (TTB). A more fine-grained analysis of the supplemental behavioral parameters conditional on strategy use supports PCS but calls into question process assumptions for apparent users of RH, RHe, and TTB within our experimental context. Our results are consistent with previous literature highlighting the importance of individual strategy classification as compared to aggregated analyses.

  15. Multi-mutational model for cancer based on age-time patterns of radiation effects: 2. Biological aspects

    Energy Technology Data Exchange (ETDEWEB)

    Mendelsohn, M.L.; Pierce, P.A.

    1997-09-04

    Biological properties of relevance when modeling cancers induced in the atom bomb survivors include the wide distribution of the induced cancers across all organs, their biological indistinguishability from background cancers, their rates being proportional to background cancer rates, their rates steadily increasing over at least 50 years as the survivors age, and their radiation dose response being linear. We have successfully described this array of properties with a modified Armitage-Doll model using 5 to 6 somatic mutations, no intermediate growth, and the dose-related replacement of any one of these time-driven mutations by a radiation-induced mutation. Such a model is contrasted with prevailing models that use fewer mutations combined with intervening growth. While the rationale and effectiveness of our model are compelling for carcinogenesis in the atom bomb survivors, the lack of a promotional component may limit the generality of the model for other types of human carcinogenesis.
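
    The multistage structure lends itself to a compact schematic. The following hedged LaTeX sketch shows the classical k-stage Armitage-Doll hazard the record builds on, and the dose-linear term that arises when a radiation-induced mutation substitutes for one of the k time-driven mutations; the constants are schematic, not fitted values from the study.

```latex
% Schematic only: classical Armitage-Doll hazard, plus the dose-linear
% term obtained when radiation replaces one of the k time-driven
% mutations (k = 5 or 6 in the record).
\begin{align*}
  h_0(t) &\propto \mu_1 \mu_2 \cdots \mu_k \, t^{k-1}
      && \text{background: all $k$ mutations time-driven} \\
  h(t, D) &\approx h_0(t) + c \, D \, \mu_1 \cdots \mu_{k-1} \, t^{k-2}
      && \text{one stage replaced; linear in dose } D
\end{align*}
```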

  16. Time-Weighted Balanced Stochastic Model Reduction

    DEFF Research Database (Denmark)

    Tahavori, Maryamsadat; Shaker, Hamid Reza

    2011-01-01

    A new relative error model reduction technique for linear time invariant (LTI) systems is proposed in this paper. Both continuous and discrete time systems can be reduced within this framework. The proposed model reduction method is mainly based upon time-weighted balanced truncation and a recently...
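
    For context, a sketch of plain (unweighted) balanced truncation, the base method a time-weighted technique generalizes: balance the controllability and observability Gramians, then keep the r states with the largest Hankel singular values. The toy system is an illustrative assumption.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, svd, cholesky

def balanced_truncation(A, B, C, r):
    P = solve_continuous_lyapunov(A, -B @ B.T)     # controllability Gramian
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)   # observability Gramian
    Lc = cholesky(P, lower=True)
    Lo = cholesky(Q, lower=True)
    U, s, Vt = svd(Lo.T @ Lc)                      # s = Hankel singular values
    S = np.diag(s[:r] ** -0.5)
    T = Lc @ Vt[:r].T @ S                          # balancing projection
    Ti = S @ U[:, :r].T @ Lo.T
    return Ti @ A @ T, Ti @ B, C @ T, s

# toy stable two-state system, reduced to one state
A = np.array([[-1.0, 0.2], [0.0, -5.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.5]])
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=1)
print(hsv)   # discarded singular values bound the reduction error
```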

  17. A discrete time-varying internal model-based approach for high precision tracking of a multi-axis servo gantry.

    Science.gov (United States)

    Zhang, Zhen; Yan, Peng; Jiang, Huan; Ye, Peiqing

    2014-09-01

    In this paper, we consider the discrete time-varying internal model-based control design for high precision tracking of complicated reference trajectories generated by time-varying systems. Based on a novel parallel time-varying internal model structure, asymptotic tracking conditions for the design of internal model units are developed, and a low order robust time-varying stabilizer is further synthesized. In a discrete time setting, the high precision tracking control architecture is deployed on a Voice Coil Motor (VCM) actuated servo gantry system, where numerical simulations and real time experimental results are provided, achieving tracking errors of around 3.5‰ for frequency-varying signals. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  18. On the more accurate channel model and positioning based on time-of-arrival for visible light localization

    Science.gov (United States)

    Amini, Changeez; Taherpour, Abbas; Khattab, Tamer; Gazor, Saeed

    2017-01-01

    This paper presents an improved propagation channel model for visible light in indoor environments. We employ this model to derive an enhanced positioning algorithm using the relation between the time-of-arrivals (TOAs) and the distances, for two cases: known or unknown transmitter-receiver vertical distance. We propose two estimators, namely the maximum likelihood estimator and an estimator employing the method of moments. To have an evaluation basis for these methods, we calculate the Cramer-Rao lower bound (CRLB) on the performance of the estimators. We show that the proposed model and estimators result in superior positioning performance when the transmitter and receiver are perfectly synchronized, in comparison with the existing state-of-the-art counterparts. Moreover, the corresponding CRLB of the proposed model represents about a 20 dB reduction in the localization error bound in comparison with the previous model for some practical scenarios.
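
    A hedged sketch of the geometric core of TOA-based positioning: with synchronized transmitter and receiver, each LED's TOA gives a range (c x TOA), and the receiver position follows from nonlinear least squares. The LED layout, receiver position, and noise level are illustrative; the paper's ML estimator additionally accounts for its refined channel model.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(7)
leds = np.array([[0, 0, 3], [4, 0, 3], [0, 4, 3], [4, 4, 3]])  # fixtures, m (assumed)
true_rx = np.array([1.5, 2.0, 0.8])
# ranges = c * TOA; 0.02 m noise corresponds to ~0.07 ns timing error
ranges = np.linalg.norm(leds - true_rx, axis=1) + rng.normal(0, 0.02, 4)

def residuals(p):
    return np.linalg.norm(leds - p, axis=1) - ranges

est = least_squares(residuals, x0=np.array([2.0, 2.0, 1.0])).x
print(est)   # close to true_rx
```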

  19. Scalability of Semi-Implicit Time Integrators for Nonhydrostatic Galerkin-based Atmospheric Models on Large Scale Cluster

    Science.gov (United States)

    2011-01-01

    … present performance statistics to explain the scalability behavior. Keywords: atmospheric models, time integrators, MPI, scalability, performance. … across inter-element boundaries. Basis functions are constructed as tensor products of Lagrange polynomials, ψ_i(x) = h_α(ξ) ⊗ h_β(η) ⊗ h_γ(ζ), where h_α

  20. Nonlinear detection of disordered voice productions from short time series based on a Volterra-Wiener-Korenberg model

    Energy Technology Data Exchange (ETDEWEB)

    Zhang Yu, E-mail: yuzhang@xmu.edu.cn [Key Laboratory of Underwater Acoustic Communication and Marine Information Technology of the Ministry of Education, Xiamen University, Xiamen Fujian 361005 (China); Sprecher, Alicia J. [Department of Surgery, Division of Otolaryngology - Head and Neck Surgery, University of Wisconsin School of Medicine and Public Health, Madison, WI 53792-7375 (United States); Zhao Zongxi [Key Laboratory of Underwater Acoustic Communication and Marine Information Technology of the Ministry of Education, Xiamen University, Xiamen Fujian 361005 (China); Jiang, Jack J. [Department of Surgery, Division of Otolaryngology - Head and Neck Surgery, University of Wisconsin School of Medicine and Public Health, Madison, WI 53792-7375 (United States)

    2011-09-15

    Highlights: > The VWK method effectively detects the nonlinearity of a discrete map. > The method describes the chaotic time series of a biomechanical vocal fold model. > Nonlinearity in laryngeal pathology is detected from short and noisy time series. - Abstract: In this paper, we apply the Volterra-Wiener-Korenberg (VWK) model method to detect nonlinearity in disordered voice productions. The VWK method effectively describes the nonlinearity of a third-order nonlinear map. It allows for the analysis of short and noisy data sets. The extracted VWK model parameters show agreement with the original nonlinear map parameters. Furthermore, the VWK model method is applied to successfully assess the nonlinearity of a biomechanical voice production model simulating irregular vibratory dynamics of vocal folds with a unilateral vocal polyp. Finally, we show the clinical applicability of this nonlinear detection method to analyze the electroglottographic data generated by 14 patients with vocal nodules or polyps. The VWK model method shows potential in describing the nonlinearity inherent in disordered voice productions from short and noisy time series that are common in the clinical setting.
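
    A simplified sketch in the spirit of the VWK test: fit linear and polynomial (second-order, Volterra-like) autoregressive models to a short series and compare an AIC-style cost; a clearly lower nonlinear cost flags nonlinearity. The logistic-map test signal and the fixed memory are illustrative choices, not the paper's exact procedure.

```python
import numpy as np
from itertools import combinations_with_replacement

def design(x, memory, degree):
    """Polynomial regression matrix over lagged values (Volterra-style terms)."""
    rows, targets = [], x[memory:]
    for t in range(memory, len(x)):
        lags = x[t - memory:t]
        feats = [1.0]
        for d in range(1, degree + 1):
            for idx in combinations_with_replacement(range(memory), d):
                feats.append(np.prod(lags[list(idx)]))
        rows.append(feats)
    return np.array(rows), targets

def aic(x, memory, degree):
    X, y = design(x, memory, degree)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    mse = np.mean((y - X @ coef) ** 2)
    return len(y) * np.log(mse) + 2 * X.shape[1]   # penalize extra terms

x = [0.3]
for _ in range(400):               # logistic map: a known-nonlinear test signal
    x.append(3.8 * x[-1] * (1 - x[-1]))
x = np.array(x)
print(aic(x, 3, 1), aic(x, 3, 2))  # the degree-2 model should win by a wide margin
```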

  1. Nonlinear detection of disordered voice productions from short time series based on a Volterra-Wiener-Korenberg model

    International Nuclear Information System (INIS)

    Zhang Yu; Sprecher, Alicia J.; Zhao Zongxi; Jiang, Jack J.

    2011-01-01

    Highlights: → The VWK method effectively detects the nonlinearity of a discrete map. → The method describes the chaotic time series of a biomechanical vocal fold model. → Nonlinearity in laryngeal pathology is detected from short and noisy time series. - Abstract: In this paper, we apply the Volterra-Wiener-Korenberg (VWK) model method to detect nonlinearity in disordered voice productions. The VWK method effectively describes the nonlinearity of a third-order nonlinear map. It allows for the analysis of short and noisy data sets. The extracted VWK model parameters show agreement with the original nonlinear map parameters. Furthermore, the VWK model method is applied to successfully assess the nonlinearity of a biomechanical voice production model simulating irregular vibratory dynamics of vocal folds with a unilateral vocal polyp. Finally, we show the clinical applicability of this nonlinear detection method to analyze the electroglottographic data generated by 14 patients with vocal nodules or polyps. The VWK model method shows potential in describing the nonlinearity inherent in disordered voice productions from short and noisy time series that are common in the clinical setting.

  2. Time-series-based hybrid mathematical modelling method adapted to forecast automotive and medical waste generation: Case study of Lithuania.

    Science.gov (United States)

    Karpušenkaitė, Aistė; Ruzgas, Tomas; Denafas, Gintaras

    2018-05-01

    The aim of the study was to create a hybrid forecasting method that could produce more accurate forecasts than previously used 'pure' time series methods. Those methods had already been tested on total automotive waste, hazardous automotive waste, and total medical waste generation, but demonstrated at least a 6% error rate in different cases, and efforts were made to decrease it further. The newly developed hybrid models used a random start generation method to combine the advantages of different time-series methods, which helped to increase the accuracy of forecasts by 3%-4% in the hazardous automotive waste and total medical waste generation cases; the new model did not increase the accuracy of total automotive waste generation forecasts. The developed models' abilities to produce short- and mid-term forecasts were tested using different prediction horizons.

  3. A rainfall disaggregation scheme for sub-hourly time scales: Coupling a Bartlett-Lewis based model with adjusting procedures

    Science.gov (United States)

    Kossieris, Panagiotis; Makropoulos, Christos; Onof, Christian; Koutsoyiannis, Demetris

    2018-01-01

    Many hydrological applications, such as flood studies, require long rainfall records at fine time scales varying from daily down to a 1-min time step. However, in the real world there is limited availability of data at sub-hourly scales. To cope with this issue, stochastic disaggregation techniques are typically employed to produce possible, statistically consistent rainfall events that aggregate up to the field data collected at coarser scales. A methodology for the stochastic disaggregation of rainfall at fine time scales was recently introduced, combining the Bartlett-Lewis process to generate rainfall events with adjusting procedures that modify the lower-level variables (i.e., hourly) so as to be consistent with the higher-level one (i.e., daily). In the present paper, we extend the aforementioned scheme, initially designed and tested for the disaggregation of daily rainfall into hourly depths, to any sub-hourly time scale. In addition, we take advantage of recent developments in Poisson-cluster processes, incorporating into the methodology a Bartlett-Lewis model variant that introduces dependence between cell intensity and duration in order to capture the variability of rainfall at sub-hourly time scales. The disaggregation scheme is implemented in an R package, named HyetosMinute, to support disaggregation from the daily down to the 1-min time scale. The applicability of the methodology was assessed on 5-min rainfall records collected in Bochum, Germany, comparing the performance of the above-mentioned model variant against the original Bartlett-Lewis process (non-random, with 5 parameters). The analysis shows that the disaggregation process adequately reproduces the most important statistical characteristics of rainfall at a wide range of time scales, while the introduction of the model with dependent intensity-duration results in better performance in terms of skewness, rainfall extremes and dry proportions.
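
    A compact sketch of the original Bartlett-Lewis rectangular pulse process that underlies the scheme: Poisson storm arrivals, Poisson cell arrivals within an exponentially distributed storm-activity window, and rectangular cells with exponential duration and intensity. The parameter values below are illustrative, not fitted (fitting is what a package like HyetosMinute handles).

```python
import numpy as np

rng = np.random.default_rng(42)
lam, beta, gamma, eta, mux = 0.02, 0.5, 0.1, 2.0, 5.0  # per-hour rates, mean mm/h (assumed)
T, dt = 24 * 30, 1 / 12                                 # one month at 5-min resolution

t_axis = np.arange(0, T, dt)
rain = np.zeros_like(t_axis)

t = rng.exponential(1 / lam)
while t < T:                                   # storm origins (Poisson, rate lam)
    active = rng.exponential(1 / gamma)        # cell-generation window of this storm
    s = 0.0
    while s < active:                          # cells within the storm (rate beta)
        start = t + s
        dur = rng.exponential(1 / eta)         # cell duration
        inten = rng.exponential(mux)           # cell intensity
        mask = (t_axis >= start) & (t_axis < start + dur)
        rain[mask] += inten                    # rectangular pulse
        s += rng.exponential(1 / beta)
    t += rng.exponential(1 / lam)

print(f"wet fraction: {np.mean(rain > 0):.2f}")
```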

  4. Modelling of Attentional Dwell Time

    DEFF Research Database (Denmark)

    Petersen, Anders; Kyllingsbæk, Søren; Bundesen, Claus

    2009-01-01

    This confinement of attentional resources leads to the impairment in identifying the second target. With the model, we are able to produce close fits to data from the traditional two-target dwell time paradigm. A dwell-time experiment with three targets has also been carried out for individual subjects, and the model has been extended to fit these data.

  5. Risk-based technical specifications: Development and application of an approach to the generation of a plant specific real-time risk model

    International Nuclear Information System (INIS)

    Puglia, B.; Gallagher, D.; Amico, P.; Atefi, B.

    1992-10-01

    This report describes a process developed to convert an existing PRA into a model amenable to real-time, risk-based technical specification calculations. In earlier studies (culminating in NUREG/CR-5742), several risk-based approaches to technical specifications were evaluated. A real-time approach using a plant-specific PRA capable of modeling plant configurations as they change was identified as the most comprehensive approach to control plant risk. A master fault tree logic model representative of all of the core damage sequences was developed. Portions of the system fault trees were modularized, and supercomponents comprised of component failures with similar effects were developed to reduce the size of the model and quantification times. Modifications to the master fault tree logic were made to properly model the effect of maintenance and recovery actions. Fault trees representing several actuation systems not modeled in detail in the existing PRA were added to the master fault tree logic. This process was applied to the Surry NUREG-1150 Level 1 PRA. The master logic model was confirmed. The model was then used to evaluate the frequency associated with several plant configurations using the IRRAS code. For all cases analyzed, computational time was less than three minutes. This document, Volume 2, contains Appendices A, B, and C. These provide, respectively: the Surry Technical Specifications Model Database, the Surry Technical Specifications Model, and a list of supercomponents used in the Surry Technical Specifications Model.
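
    A toy sketch of the quantification step behind such a real-time risk model: top-event probability from minimal cut sets of basic events (assumed independent), via inclusion-exclusion, with a configuration change modeled by setting a component's unavailability to 1. The event names and probabilities are invented for illustration, not taken from the Surry model.

```python
from itertools import combinations

cut_sets = [{"pump_A", "pump_B"}, {"pump_A", "valve_C"}, {"dg_1", "dg_2"}]
p = {"pump_A": 1e-2, "pump_B": 2e-2, "valve_C": 5e-3, "dg_1": 3e-2, "dg_2": 3e-2}

def top_event_probability(cut_sets, p):
    """Exact union probability of the cut sets via inclusion-exclusion."""
    total = 0.0
    for r in range(1, len(cut_sets) + 1):
        for combo in combinations(cut_sets, r):
            events = set().union(*combo)
            prob = 1.0
            for e in events:
                prob *= p[e]
            total += (-1) ** (r + 1) * prob
    return total

print(top_event_probability(cut_sets, p))
# pump_A taken out for maintenance: unavailability 1, re-quantify in real time
print(top_event_probability(cut_sets, {**p, "pump_A": 1.0}))
```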

  6. ROSMOD: A Toolsuite for Modeling, Generating, Deploying, and Managing Distributed Real-time Component-based Software using ROS

    Directory of Open Access Journals (Sweden)

    Pranav Srinivas Kumar

    2016-09-01

    This paper presents the Robot Operating System Model-driven development tool suite (ROSMOD), an integrated development environment for rapid prototyping of component-based software for the Robot Operating System (ROS) middleware. ROSMOD is well suited for the design, development and deployment of large-scale distributed applications on embedded devices. We present the various features of ROSMOD, including the modeling language, the graphical user interface, code generators, and deployment infrastructure. We demonstrate the utility of this tool with a real-world case study: an Autonomous Ground Support Equipment (AGSE) robot that was designed and prototyped using ROSMOD for the NASA Student Launch competition, 2014-2015.

  7. A Hybrid Fuzzy Time Series Approach Based on Fuzzy Clustering and Artificial Neural Network with Single Multiplicative Neuron Model

    Directory of Open Access Journals (Sweden)

    Ozge Cagcag Yolcu

    2013-01-01

    Particularly in recent years, artificial intelligence optimization techniques have been used to make fuzzy time series approaches more systematic and to improve their forecasting performance. In addition, fuzzy clustering methods and artificial neural networks with different structures are used in the fuzzification of observations and in the determination of fuzzy relationships, respectively. In approaches that consider membership values, these values are either determined subjectively or the fuzzy outputs of the system are obtained by assuming a relation between membership values in the identification of the relation; this necessitates a defuzzification step and increases the model error. In this study, membership values were obtained more systematically by using the Gustafson-Kessel fuzzy clustering technique. The use of an artificial neural network with a single multiplicative neuron model in the identification of the fuzzy relation eliminated the architecture selection problem as well as the need for a defuzzification step, by constituting target values from the real observations of the time series. The training of the single-multiplicative-neuron network used in the fuzzy relation identification step is carried out with particle swarm optimization. The proposed method is implemented on various time series and the results are compared with those of previous studies to demonstrate the performance of the proposed method.

  8. A Unified Trading Model Based on Robust Optimization for Day-Ahead and Real-Time Markets with Wind Power Integration

    DEFF Research Database (Denmark)

    Jiang, Yuewen; Chen, Meisen; You, Shi

    2017-01-01

    In a conventional electricity market, trading is conducted based on power forecasts in the day-ahead market, while the power imbalance is regulated in the real-time market, which is a separate trading scheme. With large-scale wind power connected to the power grid, power forecast errors increase in the day-ahead market, which lowers the economic efficiency of the separate trading scheme. This paper proposes a robust unified trading model that includes forecasts of real-time prices and imbalance power in the day-ahead trading scheme. The model is developed based on robust optimization in view of the undefined probability distribution of clearing prices in the real-time market. For the model to be used efficiently, an improved quantum-behaved particle swarm optimization algorithm (IQPSO) is presented in the paper, based on an in-depth analysis of the limitations of the static character of quantum-behaved particle

  9. A Predictive Model for Time-to-Flowering in the Common Bean Based on QTL and Environmental Variables

    Directory of Open Access Journals (Sweden)

    Mehul S. Bhakta

    2017-12-01

    The common bean is a tropical facultative short-day legume that is now grown in both tropical and temperate zones. This observation underscores how domestication and modern breeding can change the adaptive phenology of a species. A key adaptive trait is the optimal timing of the transition from the vegetative to the reproductive stage. This trait is responsive to genetically controlled signal transduction pathways and local climatic cues. A comprehensive characterization of this trait can be started by assessing the quantitative contributions of the genetic and environmental factors, and their interactions. This study aimed to locate significant QTL (G) and environmental (E) factors controlling time-to-flower in the common bean, and to identify and measure G × E interactions. Phenotypic data were collected from a biparental [Andean × Mesoamerican] recombinant inbred population (F11:14; 188 genotypes) grown at five environmentally distinct sites. QTL analysis using a dense linkage map revealed 12 QTL, five of which showed significant interactions with the environment. Dissection of G × E interactions using a linear mixed-effect model revealed that temperature, solar radiation, and photoperiod play major roles in controlling common bean flowering time directly, and indirectly by modifying the effect of certain QTL. The model predicts flowering time across the five sites with an adjusted r-squared of 0.89 and a root-mean-square error of 2.52 d. The model provides the means to disentangle the environmental dependencies of complex traits, and presents an opportunity to identify in silico QTL allele combinations that could yield desired phenotypes under different climatic conditions.

  10. Time-dependent pharmacokinetics of dexamethasone and its efficacy in human breast cancer xenograft mice: a semi-mechanism-based pharmacokinetic/pharmacodynamic model.

    Science.gov (United States)

    Li, Jian; Chen, Rong; Yao, Qing-Yu; Liu, Sheng-Jun; Tian, Xiu-Yun; Hao, Chun-Yi; Lu, Wei; Zhou, Tian-Yan

    2018-03-01

    Dexamethasone (DEX) is a substrate of CYP3A. However, the activity of CYP3A can be induced by DEX when DEX is administered persistently, resulting in auto-induction and time-dependent pharmacokinetics (pharmacokinetics with time-dependent clearance) of DEX. In this study we investigated the pharmacokinetic profiles of DEX after single or multiple doses in human breast cancer xenograft nude mice and established a semi-mechanism-based pharmacokinetic/pharmacodynamic (PK/PD) model characterizing the time-dependent PK of DEX as well as its anti-cancer effect. The mice were orally given a single or multiple doses (8 mg/kg) of DEX, and the plasma concentrations of DEX were assessed using LC-MS/MS. Tumor volumes were recorded daily. Based on the experimental data, a two-compartment model with first-order absorption and time-dependent clearance was established, and the time-dependence of clearance was modeled by a sigmoid E max equation. Moreover, a semi-mechanism-based PK/PD model was developed, in which the auto-induction effect of DEX on its metabolizing enzyme CYP3A was integrated and drug potency was described using an E max equation. The PK/PD model was further used to predict the drug efficacy when the auto-induction effect was or was not considered, which further revealed the necessity of including the auto-induction effect in the final PK/PD model. This study established a semi-mechanism-based PK/PD model characterizing the time-dependent pharmacokinetics of DEX and its anti-cancer effect in breast cancer xenograft mice. The model may serve as a reference for DEX dose adjustments or optimization in future preclinical or clinical studies.
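
    A sketch of the auto-induction idea: a one-compartment oral PK model whose clearance rises with time on drug following a sigmoid Emax curve, so repeated doses yield falling trough exposure. The paper fits a two-compartment model; all constants below are illustrative assumptions, not the fitted values.

```python
import numpy as np
from scipy.integrate import solve_ivp

V, CL0, Emax, T50, gam = 2.0, 0.3, 2.0, 48.0, 3.0     # L, L/h, -, h, - (assumed)
ka, F, dose, tau, n_doses = 1.0, 0.8, 8.0, 24.0, 7     # 1/h, -, mg, h, doses

def cl(t):
    """Time-dependent clearance: sigmoid Emax rise from CYP3A auto-induction."""
    return CL0 * (1 + Emax * t**gam / (T50**gam + t**gam))

def rhs(t, y):
    gut, central = y
    return [-ka * gut, ka * gut - cl(t) / V * central]

y0, troughs = [0.0, 0.0], []
for i in range(n_doses):
    y0[0] += F * dose                                   # oral dose enters the gut
    sol = solve_ivp(rhs, (i * tau, (i + 1) * tau), y0, max_step=0.5)
    troughs.append(sol.y[1, -1] / V)                    # trough concentration, mg/L
    y0 = sol.y[:, -1].tolist()

print(np.round(troughs, 3))   # troughs decline as induction raises clearance
```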

  11. Models for dependent time series

    CERN Document Server

    Tunnicliffe Wilson, Granville; Haywood, John

    2015-01-01

    Models for Dependent Time Series addresses the issues that arise and the methodology that can be applied when the dependence between time series is described and modeled. Whether you work in the economic, physical, or life sciences, the book shows you how to draw meaningful, applicable, and statistically valid conclusions from multivariate (or vector) time series data.The first four chapters discuss the two main pillars of the subject that have been developed over the last 60 years: vector autoregressive modeling and multivariate spectral analysis. These chapters provide the foundational mater

  12. A ‘post-honeymoon’ measles epidemic in Burundi: mathematical model-based analysis and implications for vaccination timing

    Directory of Open Access Journals (Sweden)

    Katelyn C. Corey

    2016-09-01

    Using a mathematical model with realistic demography, we analyze a large outbreak of measles in the Muyinga sector of rural Burundi in 1988-1989. We generate simulated epidemic curves and age × time epidemic surfaces, which we compare qualitatively and quantitatively with the data. Our findings suggest that supplementary immunization activities (SIAs) should be used in places where routine vaccination cannot keep up with the increasing number of susceptible individuals resulting from population growth or from logistical problems such as cold chain maintenance. We use the model to characterize the relationship between SIA frequency and the SIA age range necessary to suppress measles outbreaks. If SIAs are less frequent, they must expand their target age range.
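
    A hedged sketch of the modelling idea: an SIR model with birth-driven susceptible recruitment, routine vaccination at birth, and periodic SIA pulses that immunize a fraction of the remaining susceptibles. All rates and coverages below are illustrative, not the Muyinga fit.

```python
from scipy.integrate import solve_ivp

beta, gamma_rec = 500 / 365, 1 / 14          # transmission, recovery (1/day), assumed
mu, cov_routine = 1 / (50 * 365), 0.6        # birth/death rate, routine coverage

def sir(t, y):
    S, I, R = y
    N = S + I + R
    inf = beta * S * I / N
    return [mu * N * (1 - cov_routine) - inf - mu * S,   # unvaccinated births in
            inf - gamma_rec * I - mu * I,
            mu * N * cov_routine + gamma_rec * I - mu * R]

y, t0 = [0.10, 1e-4, 0.8999], 0.0            # fractions of the population
for year in range(12):
    sol = solve_ivp(sir, (t0, t0 + 365), y, max_step=1.0)
    y, t0 = sol.y[:, -1], t0 + 365
    if year % 4 == 3:                         # SIA pulse every 4 years, 90% coverage
        moved = 0.9 * y[0]
        y = [y[0] - moved, y[1], y[2] + moved]
    print(year, f"peak infectious fraction: {sol.y[1].max():.4f}")
```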

  13. Real-Time Measurements and Modelling on Dynamic Behaviour of SonoVue Bubbles Based on Light Scattering Technology

    International Nuclear Information System (INIS)

    Juan, Tu; Rongjue, Wei; Guan, J. F.; Matula, T. J.; Crum, L. A.

    2008-01-01

    The dynamic behaviour of SonoVue microbubbles, a new-generation ultrasound contrast agent, is investigated in real time with the light scattering method. Highly diluted SonoVue microbubbles are injected into a dilute gel made of xanthan gum and water. The responses of individual SonoVue bubbles to driving ultrasound pulses are measured. Both linear and nonlinear bubble oscillations are observed, and the results suggest that SonoVue microbubbles can generate strong nonlinear responses. By fitting the experimental data for individual bubble responses with Sarkar's model, the shell parameter of the bubble coating, the dilatational viscosity, is estimated to be 7.0 nm·s·Pa

  14. Development and validation of a local time stepping-based PaSR solver for combustion and radiation modeling

    DEFF Research Database (Denmark)

    Pang, Kar Mun; Ivarsson, Anders; Haider, Sajjad

    2013-01-01

    In the current work, a local time stepping (LTS) solver for the modeling of combustion, radiative heat transfer and soot formation is developed and validated. This is achieved using an open source computational fluid dynamics code, OpenFOAM. Akin to the solver provided in the default assembly … library in the edcSimpleFoam solver, which was introduced during the 6th OpenFOAM workshop, is modified and coupled with the current solver. One of the main amendments made is the integration of a soot radiation submodel, since this is significant in rich flames where soot particles are formed. The new solver

  15. Efficient Computation of Multiscale Entropy over Short Biomedical Time Series Based on Linear State-Space Models

    Directory of Open Access Journals (Sweden)

    Luca Faes

    2017-01-01

    The most common approach to assessing the dynamical complexity of a time series across multiple temporal scales makes use of the multiscale entropy (MSE) and refined MSE (RMSE) measures. In spite of their popularity, MSE and RMSE lack an analytical framework allowing their calculation for known dynamic processes and cannot be reliably computed over short time series. To overcome these limitations, we propose a method to assess RMSE for autoregressive (AR) stochastic processes. The method makes use of linear state-space (SS) models to provide the multiscale parametric representation of an AR process observed at different time scales and exploits the SS parameters to quantify analytically the complexity of the process. The resulting linear MSE (LMSE) measure is first tested in simulations, both theoretically, to relate the multiscale complexity of AR processes to their dynamical properties, and over short process realizations, to assess its computational reliability in comparison with RMSE. Then, it is applied to time series of heart period, arterial pressure, and respiration measured from healthy subjects monitored in resting conditions and during physiological stress. This application to short-term cardiovascular variability documents that LMSE can describe the activity of physiological mechanisms producing biological oscillations at different temporal scales better than RMSE.
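
    For reference, a compact implementation of the standard sample-entropy-based MSE that the paper's state-space variant (LMSE) improves upon: coarse-grain the series at each scale, then compute sample entropy. The embedding m = 2 and tolerance r = 0.2·SD follow common conventions; this is a baseline sketch, not the paper's method.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    x = np.asarray(x)
    tol = r * np.std(x)
    def matches(mm):
        emb = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        d = np.max(np.abs(emb[:, None] - emb[None, :]), axis=2)  # Chebyshev distance
        return np.sum(d <= tol) - len(emb)       # exclude self-matches
    B, A = matches(m), matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def mse(x, max_scale=5):
    out = []
    for s in range(1, max_scale + 1):
        n = len(x) // s
        coarse = np.asarray(x)[:n * s].reshape(n, s).mean(axis=1)  # coarse-graining
        out.append(sample_entropy(coarse))
    return out

print(mse(np.random.default_rng(0).normal(size=600)))  # decreases with scale for white noise
```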

  16. Developing a local least-squares support vector machines-based neuro-fuzzy model for nonlinear and chaotic time series prediction.

    Science.gov (United States)

    Miranian, A; Abdollahzade, M

    2013-02-01

    Local modeling approaches, owing to their ability to model different operating regimes of nonlinear systems and processes by independent local models, seem appealing for modeling, identification, and prediction applications. In this paper, we propose a local neuro-fuzzy (LNF) approach based on the least-squares support vector machines (LSSVMs). The proposed LNF approach employs LSSVMs, which are powerful in modeling and predicting time series, as local models and uses hierarchical binary tree (HBT) learning algorithm for fast and efficient estimation of its parameters. The HBT algorithm heuristically partitions the input space into smaller subdomains by axis-orthogonal splits. In each partitioning, the validity functions automatically form a unity partition and therefore normalization side effects, e.g., reactivation, are prevented. Integration of LSSVMs into the LNF network as local models, along with the HBT learning algorithm, yield a high-performance approach for modeling and prediction of complex nonlinear time series. The proposed approach is applied to modeling and predictions of different nonlinear and chaotic real-world and hand-designed systems and time series. Analysis of the prediction results and comparisons with recent and old studies demonstrate the promising performance of the proposed LNF approach with the HBT learning algorithm for modeling and prediction of nonlinear and chaotic systems and time series.
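
    A bare-bones sketch of LSSVM regression with an RBF kernel, the local model type used above, implemented from its standard dual linear system. The HBT partitioning that makes the approach "local" is omitted, and gamma and sigma are illustrative choices.

```python
import numpy as np

def rbf(X1, X2, sigma=1.0):
    d2 = ((X1[:, None] - X2[None, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def lssvm_fit(X, y, gamma=100.0, sigma=1.0):
    """Solve the LSSVM dual system [[0, 1^T], [1, K + I/gamma]] [b; a] = [0; y]."""
    n = len(y)
    K = rbf(X, X, sigma) + np.eye(n) / gamma
    M = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                  [np.ones((n, 1)), K]])
    sol = np.linalg.solve(M, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]                      # bias b, support values alpha

def lssvm_predict(Xq, X, b, alpha, sigma=1.0):
    return rbf(Xq, X, sigma) @ alpha + b

X = np.linspace(0, 6, 80)[:, None]
y = np.sin(X[:, 0]) + 0.05 * np.random.default_rng(0).normal(size=80)
b, alpha = lssvm_fit(X, y)
print(lssvm_predict(np.array([[1.5]]), X, b, alpha))   # ~ sin(1.5)
```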

  17. USE OF TRANS-CONTEXTUAL MODEL-BASED PHYSICAL ACTIVITY COURSE IN DEVELOPING LEISURE-TIME PHYSICAL ACTIVITY BEHAVIOR OF UNIVERSITY STUDENTS.

    Science.gov (United States)

    Müftüler, Mine; İnce, Mustafa Levent

    2015-08-01

    This study examined how a physical activity course based on the Trans-Contextual Model affected the variables of perceived autonomy support, autonomous motivation, determinants of leisure-time physical activity behavior, basic psychological needs satisfaction, and leisure-time physical activity behaviors. The participants were 70 Turkish university students (M age=23.3 yr., SD=3.2). A pre-test-post-test control group design was constructed. Initially, the participants were randomly assigned into an experimental (n=35) and a control (n=35) group. The experimental group followed a 12 wk. trans-contextual model-based intervention. The participants were pre- and post-tested in terms of Trans-Contextual Model constructs and of self-reported leisure-time physical activity behaviors. Multivariate analyses showed significant increases over the 12 wk. period for perceived autonomy support from instructor and peers, autonomous motivation in leisure-time physical activity setting, positive intention and perceived behavioral control over leisure-time physical activity behavior, more fulfillment of psychological needs, and more engagement in leisure-time physical activity behavior in the experimental group. These results indicated that the intervention was effective in developing leisure-time physical activity and indicated that the Trans-Contextual Model is a useful way to conceptualize these relationships.

  18. Processor core for real time background identification of HD video based on OpenCV Gaussian mixture model algorithm

    Science.gov (United States)

    Genovese, Mariangela; Napoli, Ettore

    2013-05-01

    The identification of moving objects is a fundamental step in computer vision processing chains. The development of low-cost and lightweight smart cameras steadily increases the demand for efficient, high-performance circuits able to process high definition video in real time. This paper proposes two processor cores aimed at performing real-time background identification on High Definition (HD, 1920×1080 pixel) video streams. The implemented algorithm is the OpenCV version of the Gaussian Mixture Model (GMM), a high-performance probabilistic algorithm for the segmentation of the background that is, however, computationally intensive and impossible to run on a general-purpose CPU under the constraint of real-time processing. In the proposed paper, the equations of the OpenCV GMM algorithm are optimized in such a way that a lightweight and low-power implementation of the algorithm is obtained. The reported performances are also the result of using state-of-the-art truncated binary multipliers and ROM compression techniques for the implementation of the non-linear functions. The first circuit targets commercial FPGA devices and provides speed and logic resource occupation that surpass previously proposed implementations. The second circuit is oriented to an ASIC (UMC 90 nm) standard cell implementation. Both implementations are able to process more than 60 frames per second in 1080p format, a frame rate compatible with HD television.
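
    The OpenCV GMM background model that the hardware cores implement is available in software as BackgroundSubtractorMOG2; this sketch runs it on an HD stream for functional comparison (pure software typically runs well below the 60 fps the circuits achieve, which is the motivation for the FPGA/ASIC designs). The video path is a hypothetical stand-in, and the parameter values shown are OpenCV's documented defaults.

```python
import cv2

cap = cv2.VideoCapture("street_1080p.mp4")          # hypothetical input stream
mog2 = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                          detectShadows=True)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = mog2.apply(frame)                     # per-pixel GMM update + segmentation
    cv2.imshow("foreground", fg_mask)
    if cv2.waitKey(1) == 27:                        # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```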

  19. Bayesian Model Selection under Time Constraints

    Science.gov (United States)

    Hoege, M.; Nowak, W.; Illman, W. A.

    2017-12-01

    Bayesian model selection (BMS) provides a consistent framework for rating and comparing models in multi-model inference. In cases where models of vastly different complexity compete with each other, we also face vastly different computational runtimes for those models. For instance, a time series of a quantity of interest can be simulated by an autoregressive process model that takes less than a second per run, or by a partial differential equations-based model with runtimes up to several hours or even days. Classical BMS is based on a quantity called Bayesian model evidence (BME). It determines the model weights in the selection process and represents a trade-off between the bias of a model and its complexity. However, in practice, the runtime of models is another factor relevant to model weighting. Hence, we believe that it should be included, leading to an overall trade-off problem between bias, variance and computing effort. We approach this triple trade-off from the viewpoint of our ability to generate realizations of the models under a given computational budget. One way to obtain BME values is through sampling-based integration techniques. We argue from the fact that, under time constraints, more expensive models can be sampled far less than faster models (the number of affordable runs being inversely proportional to runtime). The computed evidence in favor of a more expensive model is statistically less significant than the evidence computed in favor of a faster model, since sampling-based strategies are always subject to statistical sampling error. We present a straightforward way to include this imbalance in the model weights that are the basis for model selection. Our approach follows directly from the idea of insufficient significance. It is based on a computationally cheap bootstrap error estimate of the model evidence and is easy to implement. The approach is illustrated in a small synthetic modeling study.
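
    A sketch of the proposed ingredient: estimate BME by brute-force prior sampling, with a bootstrap standard error that shrinks as more samples fit into a model's runtime budget. The Gaussian likelihood and prior below are illustrative stand-ins for an actual model.

```python
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(1.0, 0.5, size=20)           # synthetic observations

def likelihood(theta):
    """Gaussian likelihood of the data given one parameter draw (stand-in model)."""
    return np.prod(np.exp(-0.5 * ((data - theta) / 0.5) ** 2) /
                   (0.5 * np.sqrt(2 * np.pi)))

def bme_with_error(n_samples, n_boot=500):
    L = np.array([likelihood(t) for t in rng.normal(0, 2, n_samples)])  # prior draws
    boots = [np.mean(rng.choice(L, size=n_samples)) for _ in range(n_boot)]
    return L.mean(), np.std(boots)              # BME estimate, bootstrap error

# one time budget: a fast model affords many runs, a slow model only a few
for n in (10000, 100):
    bme, err = bme_with_error(n)
    print(f"n={n}: BME={bme:.3e} +/- {err:.1e}")  # fewer runs -> larger error bar
```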

  20. Stochastic models for time series

    CERN Document Server

    Doukhan, Paul

    2018-01-01

    This book presents essential tools for modelling non-linear time series. The first part of the book describes the main standard tools of probability and statistics that directly apply to the time series context to obtain a wide range of modelling possibilities. Functional estimation and bootstrap are discussed, and stationarity is reviewed. The second part describes a number of tools from Gaussian chaos and proposes a tour of linear time series models. It goes on to address nonlinearity from polynomial or chaotic models for which explicit expansions are available, then turns to Markov and non-Markov linear models and discusses Bernoulli shifts time series models. Finally, the volume focuses on the limit theory, starting with the ergodic theorem, which is seen as the first step for statistics of time series. It defines the distributional range to obtain generic tools for limit theory under long or short-range dependences (LRD/SRD) and explains examples of LRD behaviours. More general techniques (central limit ...

  1. Using a Time-Driven Activity-Based Costing Model To Determine the Actual Cost of Services Provided by a Transgenic Core.

    Science.gov (United States)

    Gerwin, Philip M; Norinsky, Rada M; Tolwani, Ravi J

    2018-03-01

    Laboratory animal programs and core laboratories often set service rates based on cost estimates. However, actual costs may be unknown, and service rates may not reflect the actual cost of services. Accurately evaluating the actual costs of services can be challenging and time-consuming. We used a time-driven activity-based costing (ABC) model to determine the cost of services provided by a resource laboratory at our institution. The time-driven approach is a more efficient approach to calculating costs than using a traditional ABC model. We calculated only 2 parameters: the time required to perform an activity and the unit cost of the activity based on employee cost. This method allowed us to rapidly and accurately calculate the actual cost of services provided, including microinjection of a DNA construct, microinjection of embryonic stem cells, embryo transfer, and in vitro fertilization. We successfully implemented a time-driven ABC model to evaluate the cost of these services and the capacity of labor used to deliver them. We determined how actual costs compared with current service rates. In addition, we determined that the labor supplied to conduct all services (10,645 min/wk) exceeded the practical labor capacity (8400 min/wk), indicating that the laboratory team was highly efficient and that additional labor capacity was needed to prevent overloading of the current team. Importantly, this time-driven ABC approach allowed us to establish a baseline model that can easily be updated to reflect operational changes or changes in labor costs. We demonstrated that a time-driven ABC model is a powerful management tool that can be applied to other core facilities as well as to entire animal programs, providing valuable information that can be used to set rates based on the actual cost of services and to improve operating efficiency.
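
    The two-parameter calculation reduces to simple arithmetic; a sketch in Python, where the weekly labor cost and the per-service minutes are invented placeholders (only the 8400 min/wk practical capacity is taken from the abstract):

        labor_cost_per_week = 4200.00      # USD, fully loaded labor cost (assumed)
        practical_capacity = 8400          # productive minutes per week (from the text)
        cost_per_minute = labor_cost_per_week / practical_capacity

        services = {                       # minutes of labor per unit (illustrative)
            "DNA microinjection": 240,
            "ES-cell microinjection": 300,
            "embryo transfer": 90,
        }
        for name, minutes in services.items():
            print(f"{name}: {minutes} min -> ${minutes * cost_per_minute:.2f}")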

  2. A time-driven activity-based costing model to improve health-care resource use in Mirebalais, Haiti.

    Science.gov (United States)

    Mandigo, Morgan; O'Neill, Kathleen; Mistry, Bipin; Mundy, Bryan; Millien, Christophe; Nazaire, Yolande; Damuse, Ruth; Pierre, Claire; Mugunga, Jean Claude; Gillies, Rowan; Lucien, Franciscka; Bertrand, Karla; Luo, Eva; Costas, Ainhoa; Greenberg, Sarah L M; Meara, John G; Kaplan, Robert

    2015-04-27

    In resource-limited settings, efficiency is crucial to maximise resources available for patient care. Time driven activity-based costing (TDABC) estimates costs directly from clinical and administrative processes used in patient care, thereby providing valuable information for process improvements. TDABC is more accurate and simpler than traditional activity-based costing because it assigns resource costs to patients based on the amount of time clinical and staff resources are used in patient encounters. Other costing approaches use somewhat arbitrary allocations that provide little transparency into the actual clinical processes used to treat medical conditions. TDABC has been successfully applied in European and US health-care settings to facilitate process improvements and new reimbursement approaches, but it has not been used in resource-limited settings. We aimed to optimise TDABC for use in a resource-limited setting to provide accurate procedure and service costs, reliably predict financing needs, inform quality improvement initiatives, and maximise efficiency. A multidisciplinary team used TDABC to map clinical processes for obstetric care (vaginal and caesarean deliveries, from triage to post-partum discharge) and breast cancer care (diagnosis, chemotherapy, surgery, and support services, such as pharmacy, radiology, laboratory, and counselling) at Hôpital Universitaire de Mirebalais (HUM) in Haiti. The team estimated the direct costs of personnel, equipment, and facilities used in patient care based on the amount of time each of these resources was used. We calculated inpatient personnel costs by allocating provider costs per staffed bed, and assigned indirect costs (administration, facility maintenance and operations, education, procurement and warehouse, bloodbank, and morgue) to various subgroups of the patient population. This study was approved by the Partners in Health/Zanmi Lasante Research Committee. The direct cost of an uncomplicated vaginal

  3. Multiple Model-Based Synchronization Approaches for Time Delayed Slaving Data in a Space Launch Vehicle Tracking System

    Directory of Open Access Journals (Sweden)

    Haryong Song

    2016-01-01

    Full Text Available Due to the inherent characteristics of the flight mission of a space launch vehicle (SLV), which is required to fly over very large distances and have very high fault tolerances, in general, SLV tracking systems (TSs) comprise multiple heterogeneous sensors such as radars, GPS, INS, and electrooptical targeting systems installed over widespread areas. To track an SLV without interruption and to hand over the measurement coverage between TSs properly, the mission control system (MCS) transfers slaving data to each TS through mission networks. When serious network delays occur, however, the slaving data from the MCS can lead to the failure of the TS. To address this problem, in this paper, we propose multiple model-based synchronization (MMS) approaches, which take advantage of the multiple motion models of an SLV. Cubic spline extrapolation, prediction through an α-β-γ filter, and a single model Kalman filter are presented as benchmark approaches. We demonstrate the synchronization accuracy and effectiveness of the proposed MMS approaches using Monte Carlo simulation with the nominal trajectory data of Korea Space Launch Vehicle-I.
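
    Of the benchmark approaches listed above, the α-β-γ filter is compact enough to sketch; a minimal, illustrative implementation in which the gains, measurements, and delay value are all made up:

        def abg_filter(measurements, dt, alpha=0.5, beta=0.4, gamma=0.1):
            """Track position/velocity/acceleration from noisy position readings."""
            x = v = a = 0.0
            for z in measurements:
                # predict one step ahead
                x_p = x + v * dt + 0.5 * a * dt ** 2
                v_p = v + a * dt
                # update with the measurement residual
                r = z - x_p
                x = x_p + alpha * r
                v = v_p + beta * r / dt
                a = a + gamma * r / (0.5 * dt ** 2)
            return x, v, a

        # extrapolate the delayed slaving data 'delay' seconds ahead
        x, v, a = abg_filter([0.0, 1.1, 4.1, 9.2, 15.8], dt=1.0)
        delay = 0.3
        print(x + v * delay + 0.5 * a * delay ** 2)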

  4. Model-based, semiquantitative and time intensity curve shape analysis of dynamic contrast-enhanced MRI: a comparison in patients undergoing antiangiogenic treatment for recurrent glioma

    NARCIS (Netherlands)

    Lavini, Cristina; Verhoeff, Joost J. C.; Majoie, Charles B.; Stalpers, Lukas J. A.; Richel, Dick J.; Maas, Mario

    2011-01-01

    To compare time intensity curve (TIC)-shape analysis of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) data with model-based analysis and semiquantitative analysis in patients with high-grade glioma treated with the antiangiogenic drug bevacizumab. Fifteen patients had a pretreatment

  5. The Development and Evaluation of a Time Based Network Model of the Industrial Engineering Technology Curriculum at the Southern Technical Institute.

    Science.gov (United States)

    Bannerman, James W.

    A practicum was conducted to develop a scientific management tool that would assist students in obtaining a systems view of their college curriculum and to coordinate planning with curriculum requirements. A modification of the critical path method was employed and the result was a time-based network model of the Industrial Engineering Technology…

  6. Data-constrained models of quiet and storm-time geosynchronous magnetic field based on observations in the near geospace

    Science.gov (United States)

    Andreeva, V. A.; Tsyganenko, N. A.

    2017-12-01

    The geosynchronous orbit is unique in that its nightside segment skims along the boundary separating the inner magnetosphere, with a predominantly dipolar configuration, from the magnetotail, where the Earth's magnetic field becomes small relative to the contribution from external sources. The ability to accurately reconstruct the magnetospheric configuration at GEO is important to understand the behavior of plasma and energetic particles, which critically affect space weather in an area densely populated by a host of satellites. To that end, we have developed a dynamical empirical model of the geosynchronous magnetic field with forecasting capability, based on a multi-year set of data taken by the THEMIS, Polar, Cluster, Geotail, and Van Allen missions. The model's mathematical structure is devised using a new approach [Andreeva and Tsyganenko, 2016, doi:10.1002/2015JA022242], in which the toroidal/poloidal components of the field are represented using radial and azimuthal basis functions. The model describes the field as a function of solar-magnetic coordinates, geodipole tilt angle, solar wind pressure, and a set of dynamic variables quantifying the magnetosphere's response to external driving/loading and internal relaxation/dissipation during the disturbance recovery. The response variables are introduced following the approach by Tsyganenko and Sitnov [2005, doi:10.1029/2004JA010798], in which the electric current dynamics was described as a result of competition between the external energy input and the subsequent internal losses of the injected energy. The model's applicability range extends from quiet to moderately disturbed conditions, with peak Sym-H values down to -150 nT. The obtained results have been validated using independent GOES magnetometer data, taken during the maximum of the 23rd solar cycle and its declining phase.

  7. Model-based deconvolution of cell cycle time-series data reveals gene expression details at high resolution.

    Directory of Open Access Journals (Sweden)

    Dan Siegal-Gaskins

    2009-08-01

    Full Text Available In both prokaryotic and eukaryotic cells, gene expression is regulated across the cell cycle to ensure "just-in-time" assembly of select cellular structures and molecular machines. However, present in all time-series gene expression measurements is variability that arises from both systematic error in the cell synchrony process and variance in the timing of cell division at the level of the single cell. Thus, gene or protein expression data collected from a population of synchronized cells is an inaccurate measure of what occurs in the average single-cell across a cell cycle. Here, we present a general computational method to extract "single-cell"-like information from population-level time-series expression data. This method removes the effects of (1) variance in growth rate and (2) variance in the physiological and developmental state of the cell. Moreover, this method represents an advance in the deconvolution of molecular expression data in its flexibility, minimal assumptions, and the use of a cross-validation analysis to determine the appropriate level of regularization. Applying our deconvolution algorithm to cell cycle gene expression data from the dimorphic bacterium Caulobacter crescentus, we recovered critical features of cell cycle regulation in essential genes, including ctrA and ftsZ, that were obscured in population-based measurements. In doing so, we highlight the problem with using population data alone to decipher cellular regulatory mechanisms and demonstrate how our deconvolution algorithm can be applied to produce a more realistic picture of temporal regulation in a cell.

  8. A practical procedure for the selection of time-to-failure models based on the assessment of trends in maintenance data

    International Nuclear Information System (INIS)

    Louit, D.M.; Pascual, R.; Jardine, A.K.S.

    2009-01-01

    Reliability studies often rely on false premises such as the assumption of independent and identically distributed times between failures (a renewal process). This can lead to erroneous model selection for the time to failure of a particular component or system, which can in turn lead to wrong conclusions and decisions. A strong statistical focus, a lack of a systematic approach and sometimes inadequate theoretical background seem to have made it difficult for maintenance analysts to adopt the necessary stage of data testing before the selection of a suitable model. In this paper, a framework for model selection to represent the failure process for a component or system is presented, based on a review of available trend tests. The paper focuses only on single-time-variable models and is primarily directed to analysts responsible for reliability analyses in an industrial maintenance environment. The model selection framework is directed towards the discrimination between the use of statistical distributions to represent the time to failure (the 'renewal approach') and the use of stochastic point processes (the 'repairable systems approach'), when there may be the presence of system ageing or reliability growth. An illustrative example based on failure data from a fleet of backhoes is included.
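
    As a concrete example of the trend tests such a framework reviews, here is the classical Laplace test for a time-truncated observation window (a standard formula; the failure times below are invented). A strongly positive statistic suggests deterioration, i.e. the renewal assumption should be rejected:

        import math

        def laplace_trend(failure_times, observation_end):
            """Laplace trend statistic; approx. N(0,1) under a homogeneous Poisson process."""
            n = len(failure_times)
            return (sum(failure_times) - n * observation_end / 2.0) \
                   / (observation_end * math.sqrt(n / 12.0))

        times = [210.0, 350.0, 640.0, 700.0, 810.0, 880.0]  # cumulative hours (made up)
        print(laplace_trend(times, observation_end=900.0))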

  9. A practical procedure for the selection of time-to-failure models based on the assessment of trends in maintenance data

    Energy Technology Data Exchange (ETDEWEB)

    Louit, D.M. [Komatsu Chile, Av. Americo Vespucio 0631, Quilicura, Santiago (Chile)], E-mail: rpascual@ing.puc.cl; Pascual, R. [Centro de Mineria, Pontificia Universidad Catolica de Chile, Av. Vicuna Mackenna 4860, Santiago (Chile); Jardine, A.K.S. [Department of Mechanical and Industrial Engineering, University of Toronto, 5 King's College Road, Toronto, Ont., M5S 3G8 (Canada)

    2009-10-15

    Reliability studies often rely on false premises such as the assumption of independent and identically distributed times between failures (a renewal process). This can lead to erroneous model selection for the time to failure of a particular component or system, which can in turn lead to wrong conclusions and decisions. A strong statistical focus, a lack of a systematic approach and sometimes inadequate theoretical background seem to have made it difficult for maintenance analysts to adopt the necessary stage of data testing before the selection of a suitable model. In this paper, a framework for model selection to represent the failure process for a component or system is presented, based on a review of available trend tests. The paper focuses only on single-time-variable models and is primarily directed to analysts responsible for reliability analyses in an industrial maintenance environment. The model selection framework is directed towards the discrimination between the use of statistical distributions to represent the time to failure (the 'renewal approach') and the use of stochastic point processes (the 'repairable systems approach'), when there may be the presence of system ageing or reliability growth. An illustrative example based on failure data from a fleet of backhoes is included.

  10. A novel model for Time-Series Data Clustering Based on piecewise SVD and BIRCH for Stock Data Analysis on Hadoop Platform

    Directory of Open Access Journals (Sweden)

    Ibgtc Bowala

    2017-06-01

    Full Text Available With the rapid growth of financial markets, analysts are paying more attention to prediction. Stock data are time series data of huge volume. A feasible solution for handling the increasing amount of data is to use a cluster for parallel processing, and the Hadoop parallel computing platform is a typical representative. There are various statistical models for forecasting time series data, but accurate clusters are a prerequisite. Clustering analysis for time series data is one of the main methods for mining time series data for many other analysis processes. However, general clustering algorithms cannot perform well on time series data, because such data have a special structure, high dimensionality, and highly correlated values due to high noise levels. A novel model for time series clustering is presented using BIRCH, based on piecewise SVD, leading to a novel dimension reduction approach. Highly correlated features are handled using SVD with a novel approach for dimensionality reduction in order to keep the correlated behavior optimal, and BIRCH is then used for clustering. The algorithm is a novel model that can handle massive time series data. Finally, this new model is successfully applied to real stock time series data from Yahoo Finance with satisfactory results.
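
    A single-machine sketch of the pipeline, with sklearn standing in for the Hadoop deployment and a piecewise-averaging-then-TruncatedSVD step standing in for the paper's piecewise SVD (all sizes and data are synthetic):

        import numpy as np
        from sklearn.decomposition import TruncatedSVD
        from sklearn.cluster import Birch

        rng = np.random.default_rng(1)
        prices = rng.normal(size=(200, 256)).cumsum(axis=1)    # 200 synthetic "stocks"

        # piecewise reduction: 16 windows per series, then a truncated SVD
        segments = prices.reshape(200, 16, 16).mean(axis=2)
        reduced = TruncatedSVD(n_components=8).fit_transform(segments)

        labels = Birch(n_clusters=5).fit_predict(reduced)
        print(np.bincount(labels))                             # cluster sizes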

  11. A new Caputo time fractional model for heat transfer enhancement of water based graphene nanofluid: An application to solar energy

    Science.gov (United States)

    Aman, Sidra; Khan, Ilyas; Ismail, Zulkhibri; Salleh, Mohd Zuki; Tlili, I.

    2018-06-01

    In this article the idea of Caputo time-fractional derivatives is applied to MHD mixed-convection Poiseuille flow of nanofluids with graphene nanoparticles in a vertical channel. The applications of nanofluids in solar energy are discussed for various solar thermal systems, and it is argued that using nanofluids is an alternative way to produce solar energy in thermal engineering and in solar-energy devices in industry. The problem is modelled in terms of PDEs with initial and boundary conditions and solved analytically via the Laplace transform method. The obtained solutions for velocity, temperature and concentration are expressed in terms of Wright's function. These solutions are significantly controlled by the variations of parameters including the thermal Grashof number, solutal Grashof number and nanoparticle volume fraction. Expressions for the skin friction, Nusselt and Sherwood numbers are also determined on the left and right walls of the vertical channel, with important numerical results in tabular form. It is found that the rate of heat transfer increases with increasing nanoparticle volume fraction and Caputo time-fractional parameters.
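
    For reference, the Caputo time-fractional derivative of order 0 < α < 1 on which this class of models is built is defined (a standard definition, not quoted from the paper) by

        {}^{C}D_t^{\alpha} f(t) = \frac{1}{\Gamma(1-\alpha)} \int_0^t (t-s)^{-\alpha} \, f'(s) \, ds, \qquad 0 < \alpha < 1,

    and it reduces to the ordinary first derivative in the limit α → 1.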

  12. Compiling models into real-time systems

    International Nuclear Information System (INIS)

    Dormoy, J.L.; Cherriaux, F.; Ancelin, J.

    1992-08-01

    This paper presents an architecture for building real-time systems from models, together with model-compiling techniques. This has been applied to building a real-time model-based monitoring system for nuclear plants, called KSE, which is currently being used in two plants in France. We describe how we used various artificial intelligence techniques in building it: a model-based approach, a logical model of its operation, a declarative implementation of these models, and original knowledge-compiling techniques for automatically generating the real-time expert system from those models. Some of those techniques were borrowed from the literature, but we had to modify or invent others which simply did not exist. We also discuss two important problems which are often underestimated in the artificial intelligence literature: size and errors. Our architecture, which could be used in other applications, combines the advantages of the model-based approach with the efficiency requirements of real-time applications, whereas model-based approaches in general present serious drawbacks on this point.

  13. Compiling models into real-time systems

    International Nuclear Information System (INIS)

    Dormoy, J.L.; Cherriaux, F.; Ancelin, J.

    1992-08-01

    This paper presents an architecture for building real-time systems from models, together with model-compiling techniques. This has been applied to building a real-time model-based monitoring system for nuclear plants, called KSE, which is currently being used in two plants in France. We describe how we used various artificial intelligence techniques in building it: a model-based approach, a logical model of its operation, a declarative implementation of these models, and original knowledge-compiling techniques for automatically generating the real-time expert system from those models. Some of those techniques were borrowed from the literature, but we had to modify or invent others which simply did not exist. We also discuss two important problems which are often underestimated in the artificial intelligence literature: size and errors. Our architecture, which could be used in other applications, combines the advantages of the model-based approach with the efficiency requirements of real-time applications, whereas model-based approaches in general present serious drawbacks on this point.

  14. Probabilistic Survivability Versus Time Modeling

    Science.gov (United States)

    Joyner, James J., Sr.

    2016-01-01

    This presentation documents Kennedy Space Center's Independent Assessment work completed on three assessments for the Ground Systems Development and Operations (GSDO) Program, carried out to assist the Chief Safety and Mission Assurance Officer during key programmatic reviews. The assessments provided the GSDO Program with analyses of how egress time affects the likelihood of astronaut and ground worker survival during an emergency. For each assessment, a team developed probability distributions for hazard scenarios to address statistical uncertainty, resulting in survivability plots over time. The first assessment developed a mathematical model of probabilistic survivability versus time to reach a safe location using an ideal Emergency Egress System at Launch Complex 39B (LC-39B); the second used the first model to evaluate and compare various egress systems under consideration at LC-39B. The third used a modified LC-39B model to determine whether a specific hazard decreased survivability more rapidly than other events during flight hardware processing in Kennedy's Vehicle Assembly Building.

  15. From discrete-time models to continuous-time, asynchronous modeling of financial markets

    NARCIS (Netherlands)

    Boer, Katalin; Kaymak, Uzay; Spiering, Jaap

    2007-01-01

    Most agent-based simulation models of financial markets are discrete-time in nature. In this paper, we investigate to what degree such models are extensible to continuous-time, asynchronous modeling of financial markets. We study the behavior of a learning market maker in a market with information

  16. From Discrete-Time Models to Continuous-Time, Asynchronous Models of Financial Markets

    NARCIS (Netherlands)

    K. Boer-Sorban (Katalin); U. Kaymak (Uzay); J. Spiering (Jaap)

    2006-01-01

    Most agent-based simulation models of financial markets are discrete-time in nature. In this paper, we investigate to what degree such models are extensible to continuous-time, asynchronous modelling of financial markets. We study the behaviour of a learning market maker in a market with

  17. Comparison of interplanetary CME arrival times and solar wind parameters based on the WSA-ENLIL model with three cone types and observations

    Science.gov (United States)

    Jang, Soojeong; Moon, Y.-J.; Lee, Jae-Ok; Na, Hyeonock

    2014-09-01

    We have made a comparison between coronal mass ejection (CME)-associated shock propagations based on the Wang-Sheeley-Arge (WSA)-ENLIL model using three cone types and in situ observations. For this we use 28 full-halo CMEs from 2001 to 2002 whose cone parameters are determined and whose corresponding interplanetary shocks were observed at the Earth. We consider three different cone types (an asymmetric cone model, an ice cream cone model, and an elliptical cone model) to determine 3-D CME cone parameters (radial velocity, angular width, and source location), which are the input values of the WSA-ENLIL model. The mean absolute error of the CME-associated shock travel times for the WSA-ENLIL model using the ice cream cone model is 9.9 h, which is about 1 h smaller than those of the other models. We compare the peak values and profiles of solar wind parameters (speed and density) with in situ observations. We find that the root-mean-square errors of solar wind peak speed and density for the ice cream and asymmetric cone models are about 190 km/s and 24/cm³, respectively. We estimate the cross correlations between the models and observations within a time lag of ±2 days from the shock travel time. The correlation coefficients between the solar wind speeds from the WSA-ENLIL model using the three cone types and in situ observations are approximately 0.7, which is larger than those for solar wind density (cc ≈ 0.6). Our preliminary investigations show that the ice cream cone model seems to be better than the other cone models in terms of the input parameters of the WSA-ENLIL model.
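
    The comparison metrics themselves are easy to reproduce; a sketch with placeholder hourly time series, where the 6 h model delay and the noise level are invented:

        import numpy as np

        rng = np.random.default_rng(2)
        t = np.arange(240)                                   # 10 days, hourly cadence
        obs = 400 + 100 * np.exp(-((t - 120) / 24.0) ** 2) \
              + rng.normal(0, 10, t.size)                   # observed speed [km/s]
        model = np.roll(obs, 6)                              # toy model, 6 h late

        def lagged_cc(a, b, max_lag=48):
            """Peak Pearson correlation over integer lags in [-max_lag, max_lag]."""
            best = (-2.0, 0)
            for k in range(-max_lag, max_lag + 1):
                aa = a[max(0, -k):len(a) - max(0, k)]
                bb = b[max(0, k):len(b) - max(0, -k)]
                c = np.corrcoef(aa, bb)[0, 1]
                if c > best[0]:
                    best = (c, k)
            return best                                      # (correlation, lag in h)

        print("MAE [km/s]:", np.abs(model - obs).mean())
        print("peak cc, lag:", lagged_cc(model, obs))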

  18. A finite element-based machine learning approach for modeling the mechanical behavior of the breast tissues under compression in real-time.

    Science.gov (United States)

    Martínez-Martínez, F; Rupérez-Moreno, M J; Martínez-Sober, M; Solves-Llorens, J A; Lorente, D; Serrano-López, A J; Martínez-Sanchis, S; Monserrat, C; Martín-Guerrero, J D

    2017-11-01

    This work presents a data-driven method to simulate, in real time, the biomechanical behavior of breast tissues during some image-guided interventions such as biopsies or radiotherapy dose delivery, as well as to speed up multimodal registration algorithms. Ten real breasts were used for this work. Their deformation due to the displacement of two compression plates was simulated off-line using the finite element (FE) method. Three machine learning models were trained with the data from those simulations and then used to predict, in real time, the deformation of the breast tissues during compression. The models were a decision tree and two tree-based ensemble methods (extremely randomized trees and random forest). Two different experimental setups were designed to validate and study the performance of these models under different conditions. The mean 3D Euclidean distance between nodes predicted by the models and those extracted from the FE simulations was calculated to assess the performance of the models on the validation set. The experiments proved that extremely randomized trees performed better than the other two models. The mean error committed by the three models in the prediction of the nodal displacements was under 2 mm, a threshold usually set for clinical applications. The time needed for breast compression prediction is sufficiently short to allow its use in real time (<0.2 s). Copyright © 2017 Elsevier Ltd. All rights reserved.
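
    A hedged sketch of the surrogate idea: train extremely randomized trees offline on finite-element input-output pairs, then query them in real time. The data shapes below are synthetic stand-ins for the FE simulations:

        import numpy as np
        from sklearn.ensemble import ExtraTreesRegressor

        rng = np.random.default_rng(3)
        X = rng.uniform(size=(5000, 4))          # e.g. plate displacement + node coords
        Y = np.sin(X @ rng.normal(size=(4, 3)))  # e.g. 3-D nodal displacement targets

        model = ExtraTreesRegressor(n_estimators=100, n_jobs=-1).fit(X[:4000], Y[:4000])
        pred = model.predict(X[4000:])           # fast enough for interactive use
        err = np.linalg.norm(pred - Y[4000:], axis=1).mean()
        print(f"mean 3-D error on held-out samples: {err:.4f}")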

  19. A Unified Trading Model Based on Robust Optimization for Day-Ahead and Real-Time Markets with Wind Power Integration

    Directory of Open Access Journals (Sweden)

    Yuewen Jiang

    2017-04-01

    Full Text Available In a conventional electricity market, trading is conducted based on power forecasts in the day-ahead market, while the power imbalance is regulated in the real-time market, which constitutes a separate trading scheme. With large-scale wind power connected to the power grid, power forecast errors increase in the day-ahead market, which lowers the economic efficiency of the separate trading scheme. This paper proposes a robust unified trading model that includes forecasts of real-time prices and imbalance power in the day-ahead trading scheme. The model is developed based on robust optimization, in view of the undefined probability distribution of clearing prices in the real-time market. For the model to be used efficiently, an improved quantum-behaved particle swarm algorithm (IQPSO) is presented in the paper, based on an in-depth analysis of the limitations arising from the static character of the quantum-behaved particle swarm algorithm (QPSO). Finally, the impacts of the associated parameters on the separate trading scheme and the unified trading model are analyzed to verify the superiority of the proposed model and algorithm.

  20. An Agent-Based Model for Analyzing Control Policies and the Dynamic Service-Time Performance of a Capacity-Constrained Air Traffic Management Facility

    Science.gov (United States)

    Conway, Sheila R.

    2006-01-01

    Simple agent-based models may be useful for investigating air traffic control strategies as a precursory screening for more costly, higher-fidelity simulation. Of concern is the ability of the models to capture the essence of the system and provide insight into system behavior in a timely manner and without breaking the bank. The method is put to the test with the development of a model to address situations where capacity is overburdened and the resultant delay can propagate through later flights via flight dependencies. The resultant model includes primitive representations of the principal air traffic system attributes, namely system capacity, demand, airline schedules and strategy, and aircraft capability. It affords a venue to explore their interdependence in a time-dependent, dynamic system simulation. The scope of the research question and the carefully chosen modeling fidelity allowed the development of an agent-based model in short order. The model predicted non-linear behavior given certain initial conditions and system control strategies. Additionally, a combination of the model and dimensionless techniques borrowed from fluid systems was demonstrated that can predict the system's dynamic behavior across a wide range of parametric settings.

  1. Optimal Real-Time Scheduling for Hybrid Energy Storage Systems and Wind Farms Based on Model Predictive Control

    Directory of Open Access Journals (Sweden)

    Meng Xiong

    2015-08-01

    Full Text Available Energy storage devices are expected to be implemented in wind farms more frequently in the near future. In this paper, both pumped-hydro and flywheel storage systems are used to help a wind farm smooth its power fluctuations. Because of the significant difference in the response speeds of the two storage types, coordinating the wind farm with the two types of energy storage is a challenging problem. This paper presents two methods for the coordination problem: a two-level hierarchical model predictive control (MPC) method and a single-level MPC method. In the single-level MPC method, only one MPC controller coordinates the wind farm and the two storage systems to follow the grid schedule. Alternatively, in the two-level MPC method, two MPC controllers are used to coordinate the wind farm and the two storage systems. The two-level MPC consists of an outer-level and an inner-level MPC. They run alternately to perform real-time scheduling: the outer level obtains long-term scheduling results and sends some of them to the inner level as input. The single-level MPC method performs both long- and short-term scheduling tasks in each interval. The simulation results show that the proposed methods can improve the utilization of wind power and reduce wind power spillage. In addition, the single-level MPC and the two-level MPC are not interchangeable: the single-level MPC has the advantage of following the grid schedule, while the two-level MPC can reduce the optimization time by 60%.

  2. H∞ Filtering for Discrete Markov Jump Singular Systems with Mode-Dependent Time Delay Based on T-S Fuzzy Model

    Directory of Open Access Journals (Sweden)

    Cheng Gong

    2014-01-01

    Full Text Available This paper investigates the H∞ filtering problem for discrete singular Markov jump systems (SMJSs) with mode-dependent time delay based on the T-S fuzzy model. First, by the Lyapunov-Krasovskii functional approach, a delay-dependent sufficient condition for H∞ disturbance attenuation is presented, in which both stability and a prescribed H∞ performance are required to be achieved for the filtering-error systems. Then, based on this condition, a delay-dependent H∞ filter design scheme for SMJSs with mode-dependent time delay based on the T-S fuzzy model is developed in terms of a linear matrix inequality (LMI). Finally, an example is given to illustrate the effectiveness of the result.

  3. Model Based Temporal Reasoning

    Science.gov (United States)

    Rabin, Marla J.; Spinrad, Paul R.; Fall, Thomas C.

    1988-03-01

    Systems that assess the real world must cope with evidence that is uncertain, ambiguous, and spread over time. Typically, the most important function of an assessment system is to identify when activities are occurring that are unusual or unanticipated. Model-based temporal reasoning addresses both of these requirements. The differences among temporal reasoning schemes lie in the methods used to avoid computational intractability. If we had n pieces of data and we wanted to examine how they were related, the worst case would be where we had to examine every subset of these points to see if that subset satisfied the relations. This would be 2^n subsets, which is intractable. Models compress this: if several data points are all compatible with a model, then that model represents all those data points. Data points are then considered related if they lie within the same model or if they lie in models that are related. Models thus address the intractability problem. They also address the problem of determining unusual activities: if the data do not agree with the models indicated by earlier data, then something out of the norm is taking place. The models can summarize what we know up to that time, so when they are not predicting correctly, either something unusual is happening or we need to revise our models. The model-based reasoner developed at Advanced Decision Systems is thus both intuitive and powerful. It is currently being used on one operational system and several prototype systems. It has enough power to be used in domains spanning the spectrum from manufacturing engineering and project management to low-intensity conflict and strategic assessment.

  4. Model-Assisted Control of Flow Front in Resin Transfer Molding Based on Real-Time Estimation of Permeability/Porosity Ratio

    Directory of Open Access Journals (Sweden)

    Bai-Jian Wei

    2016-09-01

    Full Text Available Resin transfer molding (RTM) is a popular manufacturing technique for producing fiber-reinforced polymer (FRP) composites. In this paper, a model-assisted flow front control system is developed based on real-time estimation of the permeability/porosity ratio using the information acquired by a visualization system. In the proposed control system, a radial basis function (RBF) network meta-model is utilized to predict the position of the future flow front from the injection pressure, the current position of the flow front, and the estimated ratio. By conducting optimization based on the meta-model, the value of injection pressure to be implemented at each step is obtained. Moreover, a cascade control structure is established to further improve the control performance. Experiments show that the developed system successfully enhances the performance of flow front control in RTM. In particular, the cascade structure makes the control system robust to model mismatch.

  5. Forecasting with nonlinear time series models

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Teräsvirta, Timo

    In this paper, nonlinear models are restricted to mean nonlinear parametric models. Several such models popular in time series econometrics are presented and some of their properties discussed. This includes two models based on universal approximators: the Kolmogorov-Gabor polynomial model and two versions of a simple artificial neural network model. Techniques for generating multi-period forecasts from nonlinear models recursively are considered, and the direct (non-recursive) method for this purpose is mentioned as well. Forecasting with complex dynamic systems, albeit less frequently applied to economic forecasting problems, is briefly highlighted. A number of large published studies comparing macroeconomic forecasts obtained using different time series models are discussed, and the paper also contains a small simulation study comparing recursive and direct forecasts in a partic...

  6. Electricity price modeling with stochastic time change

    International Nuclear Information System (INIS)

    Borovkova, Svetlana; Schmeck, Maren Diane

    2017-01-01

    In this paper, we develop a novel approach to electricity price modeling, based on the powerful technique of stochastic time change. This technique allows us to incorporate the characteristic features of electricity prices (such as seasonal volatility, time-varying mean reversion and seasonally occurring price spikes) into the model in an elegant and economically justifiable way. The stochastic time change introduces stochastic as well as deterministic (e.g., seasonal) features in the price process' volatility and in the jump component. We specify the base process as a mean-reverting jump diffusion and the time change as an absolutely continuous stochastic process with a seasonal component. The activity rate of the stochastic time change can be related to the factors that influence supply and demand. Here we use the temperature as a proxy for the demand and, hence, as the driving factor of the stochastic time change, and show that this choice leads to realistic price paths. We derive properties of the resulting price process and develop the model calibration procedure. We calibrate the model to the historical EEX power prices and apply it to generating realistic price paths by Monte Carlo simulations. We show that the simulated price process matches the distributional characteristics of the observed electricity prices in periods of both high and low demand. - Highlights: • We develop a novel approach to electricity price modeling, based on the powerful technique of stochastic time change. • We incorporate the characteristic features of electricity prices, such as seasonal volatility and spikes, into the model. • We use the temperature as a proxy for the demand and, hence, as the driving factor of the stochastic time change. • We derive properties of the resulting price process and develop the model calibration procedure. • We calibrate the model to the historical EEX power prices and apply it to generating realistic price paths.
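
    A toy Euler discretization in the spirit of the model: a mean-reverting jump diffusion whose clock runs on a seasonal activity rate (a temperature-like proxy). All parameters are illustrative, not calibrated EEX values:

        import numpy as np

        rng = np.random.default_rng(4)
        n_days, dt = 365, 1.0
        kappa, mu, sigma = 5.0 / 365, 40.0, 1.5        # slow reversion to 40 EUR/MWh
        jump_prob, jump_scale = 0.01, 15.0

        t = np.arange(n_days)
        activity = 1.0 + 0.6 * np.cos(2 * np.pi * (t - 20) / 365) ** 2  # seasonal rate

        x = np.empty(n_days); x[0] = mu
        for k in range(1, n_days):
            dtau = activity[k] * dt                    # stochastic-clock increment
            jump = jump_scale * rng.exponential() * (rng.random() < jump_prob * dtau)
            x[k] = x[k-1] + kappa * (mu - x[k-1]) * dtau \
                   + sigma * np.sqrt(dtau) * rng.normal() + jump
        print(x.min(), x.max())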

  7. Comparative studies of the ITU-T prediction model for radiofrequency radiation emission and real time measurements at some selected mobile base transceiver stations in Accra, Ghana

    International Nuclear Information System (INIS)

    Obeng, S. O

    2014-07-01

    Recent developments in the electronics industry have led to the widespread use of radiofrequency (RF) devices in various areas, including telecommunications. The increasing number of mobile base stations (BTS), as well as their proximity to residential areas, has been accompanied by public health concerns due to the radiation exposure. The main objective of this research was to compare and modify the ITU-T predictive model for radiofrequency radiation emission from BTS against measured data at some selected cell sites in Accra, Ghana. Theoretical and experimental assessments of radiofrequency exposure due to mobile base station antennas have been analysed. The maximum and minimum average power densities measured from an individual base station in the town were 1.86 µW/m² and 0.00961 µW/m², respectively. The ITU-T predictive model power density ranged between 6.40 mW/m² and 0.344 W/m². The results obtained showed a variation between measured power density levels and the ITU-T predictive model. The ITU-T model power density levels decrease with increasing radial distance, while the real-time measurements do not, owing to fluctuations during measurement. The ITU-T model overestimated the power density levels by a factor of 10⁵ compared with the real-time measurements, and the model was modified to reduce the level of overestimation. The results also show that the radiation intensity varies from one base station to another, even at the same distance. The occupational exposure quotient ranged between 5.43E-10 and 1.89E-08, whilst the general public exposure quotient ranged between 2.72E-09 and 9.44E-08. These results show that the RF exposure levels in Accra from these mobile phone base station antennas are below the RF exposure limit for the general public recommended by the International Commission on Non-Ionizing Radiation Protection. (au)
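
    For context, the far-field free-space estimate from which predictive models of this kind start is S = P·G/(4πd²), which decreases with radial distance as the abstract describes; a sketch in which the transmit power and antenna gain are assumptions, not values from the study:

        import math

        def power_density(eirp_watts, distance_m):
            """Free-space power density S = EIRP / (4*pi*d^2), in W/m^2."""
            return eirp_watts / (4.0 * math.pi * distance_m ** 2)

        eirp = 20.0 * 10 ** (17.0 / 10.0)   # 20 W transmit power, 17 dBi gain (assumed)
        for d in (50, 100, 200):            # radial distance in metres
            print(d, "m:", power_density(eirp, d), "W/m^2")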

  8. Temporal Drivers of Liking Based on Functional Data Analysis and Non-Additive Models for Multi-Attribute Time-Intensity Data of Fruit Chews.

    Science.gov (United States)

    Kuesten, Carla; Bi, Jian

    2018-06-03

    Conventional drivers-of-liking analysis was extended with a time dimension into temporal drivers of liking (TDOL), based on functional data analysis methodology and non-additive models for multiple-attribute time-intensity (MATI) data. The non-additive models, which consider both the direct effects and the interaction effects of attributes on consumer overall liking, include the Choquet integral and fuzzy measures from multi-criteria decision making, and linear regression based on variance decomposition. The dynamics of TDOL, i.e., the derivatives of the relative-importance functional curves, were also explored. The well-established R packages 'fda', 'kappalab' and 'relaimpo' were used in the paper for developing TDOL. Applied use of these methods shows that the relative importance of MATI curves offers insights for understanding the temporal aspects of consumer liking for fruit chews.

  9. Analog computing for a new nuclear reactor dynamic model based on a time-dependent second order form of the neutron transport equation

    International Nuclear Information System (INIS)

    Pirouzmand, Ahmad; Hadad, Kamal; Suh, Kune Y.

    2011-01-01

    This paper considers the concept of analog computing based on a cellular neural network (CNN) paradigm to simulate nuclear reactor dynamics using a time-dependent second order form of the neutron transport equation. Instead of solving nuclear reactor dynamic equations numerically, which is time-consuming and suffers from such weaknesses as vulnerability to transient phenomena, accumulation of round-off errors and floating-point overflows, use is made of a new method based on a cellular neural network. The state of the art shows the CNN to be an alternative to conventional numerical computation: it is an analog computing paradigm that performs ultra-fast calculations and provides accurate results. In this study the CNN model is used to simulate the space-time response of the scalar flux distribution in steady-state and transient conditions, as well as a step perturbation in the core. The accuracy and capability of the CNN model are examined in 2D Cartesian geometry for two fixed-source problems, a mini-BWR assembly, and a TWIGL Seed/Blanket problem. We also use the CNN model for a typical small PWR assembly to simulate the effect of temperature feedback, poisons, and control rods on the scalar flux distribution

  10. Survey of time preference, delay discounting models

    Directory of Open Access Journals (Sweden)

    John R. Doyle

    2013-03-01

    Full Text Available The paper surveys over twenty models of delay discounting (also known as temporal discounting, time preference, or time discounting) that psychologists and economists have put forward to explain the way people actually trade off time and money. Using little more than the basic algebra of powers and logarithms, I show how the models are derived, what assumptions they are based upon, and how different models relate to each other. Rather than concentrate only on discount functions themselves, I show how discount functions may be manipulated to isolate rate parameters for each model. This approach, consistently applied, helps focus attention on the three main components in any discounting model: subjectively perceived money; subjectively perceived time; and how these elements are combined. We group models by the number of parameters that have to be estimated, which means our exposition follows a trajectory of increasing complexity to the models. However, as the story unfolds it becomes clear that most models fall into a smaller number of families. We also show how new models may be constructed by combining elements of different models. The surveyed models are: Exponential; Hyperbolic; Arithmetic; Hyperboloid (Green and Myerson; Rachlin); Loewenstein and Prelec's Generalized Hyperboloid; quasi-Hyperbolic (also known as beta-delta discounting); Benhabib et al.'s fixed cost; Benhabib et al.'s Exponential / Hyperbolic / quasi-Hyperbolic; Read's discounting fractions; Roelofsma's exponential time; Scholten and Read's discounting-by-intervals (DBI); Ebert and Prelec's constant sensitivity (CS); Bleichrodt et al.'s constant absolute decreasing impatience (CADI); Bleichrodt et al.'s constant relative decreasing impatience (CRDI); Green, Myerson, and Macaux's hyperboloid-over-intervals models; Killeen's additive utility; size-sensitive additive utility; Yi, Landes, and Bickel's memory trace models; McClure et al.'s two exponentials; and Scholten and Read's trade
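
    The first two models in the survey are easy to state concretely: exponential discounting V(t) = e^(-kt) and Mazur's hyperbolic discounting V(t) = 1/(1 + kt), for a reward of value 1 delayed by t. A sketch with an illustrative rate parameter:

        import math

        def exponential(t, k=0.05):
            return math.exp(-k * t)

        def hyperbolic(t, k=0.05):
            return 1.0 / (1.0 + k * t)

        for t in (0, 10, 50, 100):   # delay in, say, days
            print(f"t={t:3d}  exp={exponential(t):.3f}  hyp={hyperbolic(t):.3f}")

    The printout makes the families' signature difference visible: the hyperbolic curve falls faster than the exponential at short delays but more slowly at long ones.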

  11. Using a thermal-based two source energy balance model with time-differencing to estimate surface energy fluxes with day-night MODIS observations

    DEFF Research Database (Denmark)

    Guzinski, Radoslaw; Anderson, M.C.; Kustas, W.P.

    2013-01-01

    The Dual Temperature Difference (DTD) model, introduced by Norman et al. (2000), uses a two-source energy balance modelling scheme driven by remotely sensed observations of diurnal changes in land surface temperature (LST) to estimate surface energy fluxes. By using a time-differential temperature ... agreement with field measurements is obtained for a number of ecosystems in Denmark and the United States. Finally, regional maps of energy fluxes are produced for the Danish Hydrological ObsErvatory (HOBE) in western Denmark, indicating realistic patterns based on land use.

  12. ToTCompute: A Novel EEG-Based TimeOnTask Threshold Computation Mechanism for Engagement Modelling and Monitoring

    Science.gov (United States)

    Ghergulescu, Ioana; Muntean, Cristina Hava

    2016-01-01

    Engagement influences participation, progression and retention in game-based e-learning (GBeL). Therefore, GBeL systems should engage the players in order to support them to maximize their learning outcomes, and provide the players with adequate feedback to maintain their motivation. Innovative engagement monitoring solutions based on players'…

  13. A Time-Variant Reliability Model for Copper Bending Pipe under Seawater-Active Corrosion Based on the Stochastic Degradation Process

    Directory of Open Access Journals (Sweden)

    Bo Sun

    2018-03-01

    Full Text Available In the degradation process, the randomness and multiplicity of variables are difficult to describe with mathematical models. However, they are common in engineering and cannot be neglected, so it is necessary to study this issue in depth. In this paper, the copper bending pipe in seawater piping systems is taken as the analysis object, and the time-variant reliability is calculated by solving the interference of limit strength and maximum stress. We performed degradation and tensile experiments on the copper material and obtained the limit strength at each time; we also performed degradation experiments on the copper bending pipe, obtained the thickness at each time, and then calculated the response of the maximum stress by simulation. Further, with the help of a Monte Carlo method we propose, the time-variant reliability of the copper bending pipe was calculated based on the stochastic degradation process and interference theory. Compared with traditional methods and verified against maintenance records, the results show that the time-variant reliability model based on the stochastic degradation process proposed in this paper has better applicability in reliability analysis, and it can more conveniently and accurately predict the replacement cycle of copper bending pipe under seawater-active corrosion.
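
    A sketch of the stress-strength interference computation: at each time step, sample the degraded limit strength and the maximum stress, and take the fraction of samples with strength above stress as R(t). The distributions below are invented, not the paper's measured ones:

        import numpy as np

        rng = np.random.default_rng(5)
        n_mc = 100_000

        for t in range(0, 21, 5):                            # service years
            strength = rng.normal(300 - 4.0 * t, 15, n_mc)   # MPa, degrades with time
            stress = rng.normal(180, 20, n_mc)               # MPa, from simulation
            print(f"year {t:2d}: R(t) = {(strength > stress).mean():.4f}")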

  14. Output-only modal parameter estimator of linear time-varying structural systems based on vector TAR model and least squares support vector machine

    Science.gov (United States)

    Zhou, Si-Da; Ma, Yuan-Chen; Liu, Li; Kang, Jie; Ma, Zhi-Sai; Yu, Lei

    2018-01-01

    Identification of time-varying modal parameters contributes to structural health monitoring, fault detection, vibration control, etc., of operational time-varying structural systems. However, it is a challenging task, because no more information is available for identifying time-varying systems than for time-invariant ones. This paper presents a modal parameter estimator for linear time-varying structural systems based on a vector time-dependent autoregressive model and least squares support vector machine, for the case of output-only measurements. To reduce the computational cost, Wendland's compactly supported radial basis function is used to achieve sparsity of the Gram matrix. A Gamma-test-based non-parametric approach for selecting the regularization factor is adopted for the proposed estimator to replace the time-consuming n-fold cross validation. A series of numerical examples illustrates the advantages of the proposed modal parameter estimator in suppressing overestimation and coping with short data records. A laboratory experiment further validates the proposed estimator.

  15. A 2D multi-term time and space fractional Bloch-Torrey model based on bilinear rectangular finite elements

    Science.gov (United States)

    Qin, Shanlin; Liu, Fawang; Turner, Ian W.

    2018-03-01

    The consideration of diffusion processes in magnetic resonance imaging (MRI) signal attenuation is classically described by the Bloch-Torrey equation. However, many recent works highlight the distinct deviation in MRI signal decay due to anomalous diffusion, which motivates the fractional order generalization of the Bloch-Torrey equation. In this work, we study the two-dimensional multi-term time and space fractional diffusion equation generalized from the time and space fractional Bloch-Torrey equation. By using the Galerkin finite element method with a structured mesh consisting of rectangular elements to discretize in space and the L1 approximation of the Caputo fractional derivative in time, a fully discrete numerical scheme is derived. A rigorous analysis of stability and error estimation is provided. Numerical experiments in the square and L-shaped domains are performed to give an insight into the efficiency and reliability of our method. Then the scheme is applied to solve the multi-term time and space fractional Bloch-Torrey equation, which shows that the extra time derivative terms impact the relaxation process.
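
    For reference, the standard L1 approximation of the Caputo derivative of order 0 < α < 1 on a uniform time grid t_n = nτ, of the kind used above, reads

        {}^{C}D_t^{\alpha} u(t_n) \approx \frac{\tau^{-\alpha}}{\Gamma(2-\alpha)} \sum_{k=0}^{n-1} b_k \left( u^{n-k} - u^{n-k-1} \right), \qquad b_k = (k+1)^{1-\alpha} - k^{1-\alpha},

    where the weights b_k decrease monotonically in k, a property on which stability and error analyses of schemes of this type rely.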

  16. A metabolism-based whole lake eutrophication model to estimate the magnitude and time scales of the effects of restoration in Upper Klamath Lake, south-central Oregon

    Science.gov (United States)

    Wherry, Susan A.; Wood, Tamara M.

    2018-04-27

    A whole lake eutrophication (WLE) model approach for phosphorus and cyanobacterial biomass in Upper Klamath Lake, south-central Oregon, is presented here. The model is a successor to a previous model developed to inform a Total Maximum Daily Load (TMDL) for phosphorus in the lake, but is based on net primary production (NPP), which can be calculated from dissolved oxygen, rather than scaling up a small-scale description of cyanobacterial growth and respiration rates. This phase 3 WLE model is a refinement of the proof-of-concept developed in phase 2, which was the first attempt to use NPP to simulate cyanobacteria in the TMDL model. The calibration of the calculated NPP WLE model was successful, with performance metrics indicating a good fit to calibration data, and the calculated NPP WLE model was able to simulate mid-season bloom decreases, a feature that previous models could not reproduce. In order to use the model to simulate future scenarios based on phosphorus load reduction, a multivariate regression model was created to simulate NPP as a function of the model state variables (phosphorus and chlorophyll a) and measured meteorological and temperature model inputs. The NPP time series was split into a low- and high-frequency component using wavelet analysis, and regression models were fit to the components separately, with moderate success. The regression models for NPP were incorporated in the WLE model, referred to as the “scenario” WLE (SWLE), and the fit statistics for phosphorus during the calibration period were mostly unchanged. The fit statistics for chlorophyll a, however, were degraded. These statistics are still an improvement over prior models, and indicate that the SWLE is appropriate for long-term predictions even though it misses some of the seasonal variations in chlorophyll a. The complete whole lake SWLE model, with multivariate regression to predict NPP, was used to make long-term simulations of the response to 10-, 20-, and 40-percent

  17. EFFECTS OF COOPERATIVE LEARNING MODEL TYPE STAD JUST-IN TIME BASED ON THE RESULTS OF LEARNING TEACHING PHYSICS COURSE IN PHYSICS SCHOOL IN PHYSICS PROGRAM FACULTY UNIMED

    Directory of Open Access Journals (Sweden)

    Teguh Febri Sudarma

    2013-06-01

    Full Text Available The research aimed to determine: (1) the learning outcomes of students taught with the just-in-time-teaching-based STAD cooperative learning method versus the STAD cooperative learning method; and (2) the physics learning outcomes of students with high learning activity compared with low learning activity. The research sample was drawn at random by raffling four classes to obtain two. The first class was taught with the just-in-time-teaching-based STAD cooperative learning method, while the second class was taught with the STAD cooperative learning method. The instrument used was a conceptual understanding test with 7 essay questions that had been validated. The average gain in learning results of students taught with the just-in-time-teaching-based STAD method was 0.47, higher than the average gain of students taught with the STAD method alone. High and low learning activity also gave different learning results: in this case, the average gain of students taught with the just-in-time-teaching-based STAD method was 0.48, higher than that of students taught with the STAD method. There was an interaction between the learning model and learning activity on the students' physics learning result test.

  18. Time dependent policy-based access control

    DEFF Research Database (Denmark)

    Vasilikos, Panagiotis; Nielson, Flemming; Nielson, Hanne Riis

    2017-01-01

    Access control policies are essential to determine who is allowed to access data in a system without compromising the data's security. However, applications inside a distributed environment may require those policies to depend on the actual content of the data and the flow of information, as well as on other attributes of the environment such as the time. In this paper, we use systems of Timed Automata to model distributed systems and we present a logic in which one can express time-dependent policies for access control. We show how a fragment of our logic can be reduced to a logic that current model checkers for Timed Automata such as UPPAAL can handle, and we present a translator that performs this reduction. We then use our translator and UPPAAL to enforce time-dependent policy-based access control on an example application from the aerospace industry.

  19. TIME SERIES CHARACTERISTIC ANALYSIS OF RAINFALL, LAND USE AND FLOOD DISCHARGE BASED ON ARIMA BOX-JENKINS MODEL

    Directory of Open Access Journals (Sweden)

    Abror Abror

    2014-01-01

    Full Text Available Indonesia, located in the tropics, has a wet season and a dry season. In the last few years, however, river discharge in the dry season has been very low, while in the wet season the frequency of floods has increased, with sharp peaks and increasingly high water elevations. The increased flood discharge may be due to changes in land use or to changes in rainfall characteristics, and both possibilities need clarification. Therefore, a study was carried out to quantitatively analyze the characteristics of rainfall, land use and flood discharge in several watersheds (DAS) from time series data. The research was conducted in DAS Gintung in Parakankidang, DAS Gung in Danawarih, DAS Rambut in Cipero, DAS Kemiri in Sidapurna and DAS Comal in Nambo, located in Tegal Regency and Pemalang Regency in Central Java Province. The research activity consisted of three main steps: input, DAS system and output. Input comprises DAS determination and selection and the collection of secondary data. The DAS system is the initial secondary-data processing, consisting of rainfall analysis, HSS GAMA I parameters, land-type analysis and DAS land use. Output is the final processing step, consisting of the calculation of Tadashi Tanimoto and USSCS effective rainfall, flood discharge, ARIMA analysis, result analysis and conclusions. The ARIMA Box-Jenkins time series calculations used the software Number Cruncher Statistical Systems and Power Analysis Sample Size (NCSS-PASS version 2000), which yields time series characteristics in the form of time series patterns, mean square error (MSE), root mean square (RMS), residual autocorrelation and trend. The results of this research indicate that composite CN and flood discharge are proportional: when the composite CN trend increases, the flood discharge trend also increases, and vice versa. Meanwhile, a decrease in the rainfall trend is not always followed by a decrease in the flood discharge trend. The main cause of the flood discharge characteristic is the DAS management characteristic, not change in
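
    A minimal Box-Jenkins fit of the kind described, using statsmodels in place of NCSS-PASS and a synthetic monthly series with trend and seasonality:

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(6)
        n = 120                                   # ten years of a monthly rainfall proxy
        series = 100 + 0.05 * np.arange(n) \
                 + 20 * np.sin(2 * np.pi * np.arange(n) / 12) \
                 + rng.normal(0, 5, n)

        fit = ARIMA(series, order=(1, 1, 1)).fit()
        print(fit.summary().tables[1])            # AR/MA coefficient estimates
        print("next 12 months:", fit.forecast(steps=12).round(1))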

  20. Laboratory Load Model Based on 150 kVA Power Frequency Converter and Simulink Real-Time – Concept, Implementation, Experiments

    Directory of Open Access Journals (Sweden)

    Robert Małkowski

    2016-09-01

    Full Text Available The first section of the paper provides the technical specification of a laboratory load model based on a 150 kVA power frequency converter and the Simulink Real-Time platform. Assumptions and the control algorithm structure are presented. Theoretical considerations on which load types may be simulated using the discussed laboratory setup are described. As the model contains a transformer with a thyristor-controlled tap changer, the wider scope of the device's capabilities is presented. The paper lists and describes tunable parameters, both those tunable during device operation and those changed only before starting an experiment. Implementation details are given in the second section. The hardware structure is presented and described, along with information about the communication interface used, the data maintenance and storage solution, and the Simulink Real-Time features employed. A list and description of all measurements is provided, and the potential for modifying the laboratory setup is evaluated. The third section describes the laboratory tests performed. Different load configurations are described and experimental results are presented, including simulation of under-frequency load shedding, frequency- and voltage-dependent characteristics of groups of load units, time characteristics of a group of different load units in a chosen area, and arbitrary active and reactive power regulation based on a defined schedule. Different operation modes of the control algorithm are described: apparent power control, active and reactive power control, and active and reactive current RMS value control.

  1. Convergent Time-Varying Regression Models for Data Streams: Tracking Concept Drift by the Recursive Parzen-Based Generalized Regression Neural Networks.

    Science.gov (United States)

    Duda, Piotr; Jaworski, Maciej; Rutkowski, Leszek

    2018-03-01

    One of the greatest challenges in data mining is related to the processing and analysis of massive data streams. Contrary to traditional static data mining problems, data streams require that each element is processed only once, that the amount of allocated memory is constant, and that the models incorporate changes in the investigated streams. A vast majority of available methods have been developed for data stream classification, and only a few of them attempt to solve regression problems, using various heuristic approaches. In this paper, we develop mathematically justified regression models working in a time-varying environment. More specifically, we study incremental versions of generalized regression neural networks, called IGRNNs, and we prove their tracking properties: weak (in probability) and strong (with probability one) convergence under various concept drift scenarios. First, we present the IGRNNs, based on the Parzen kernels, for modeling stationary systems under nonstationary noise. Next, we extend our approach to modeling time-varying systems under nonstationary noise. We present several types of concept drift to be handled by our approach in such a way that weak and strong convergence holds under certain conditions. In a series of simulations, we compare our method with commonly used heuristic approaches, based on a forgetting mechanism or sliding windows, to deal with concept drift. Finally, we apply our concept in a real-life scenario, solving the problem of currency exchange rate prediction.
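    A minimal sketch of the kernel regression core, assuming a Gaussian Parzen kernel: a generalized regression estimate accumulates kernel-weighted targets, updated one sample at a time as the stream arrives. For clarity this sketch stores all prototypes; the recursive IGRNN formulation in the paper is designed precisely to avoid that and to track drift, so treat this only as an illustration.

```python
import numpy as np

class IncrementalGRNN:
    """One-pass Parzen-kernel (Nadaraya-Watson) regression sketch."""
    def __init__(self, bandwidth=0.3):
        self.h = bandwidth
        self.xs, self.ys = [], []   # prototypes collected from the stream

    def update(self, x, y):
        self.xs.append(x)
        self.ys.append(y)

    def predict(self, x):
        xs, ys = np.asarray(self.xs), np.asarray(self.ys)
        w = np.exp(-0.5 * ((x - xs) / self.h) ** 2)  # Gaussian Parzen kernel
        return float(np.dot(w, ys) / w.sum())

rng = np.random.default_rng(0)
model = IncrementalGRNN()
for _ in range(200):                      # simulated data stream
    x = rng.random()
    model.update(x, np.sin(2 * np.pi * x) + 0.1 * rng.normal())
print(model.predict(0.25))                # close to sin(pi/2) = 1
```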

  2. Time series analysis based on two-part models for excessive zero count data to detect farm-level outbreaks of swine echinococcosis during meat inspections.

    Science.gov (United States)

    Adachi, Yasumoto; Makita, Kohei

    2017-12-01

    Echinococcus multilocularis is a parasite that causes highly pathogenic zoonoses and is maintained in foxes and rodents on Hokkaido Island, Japan. Detection of E. multilocularis infections in swine is epidemiologically important. In Hokkaido, administrative information is provided to swine producers based on the results of meat inspections. However, as the current criteria for providing administrative information often result in delays in informing producers, novel criteria are needed. Time series models were developed to monitor autocorrelations between data and lags, using data collected from 84 producers at the Higashi-Mokoto Meat Inspection Center between April 2003 and November 2015. The two criteria were quantitatively compared using the sign test for the ability to rapidly detect farm-level outbreaks. Overall, the time series models based on an autoexponentially regressed zero-inflated negative binomial distribution with the 60th percentile cumulative distribution function of the model detected outbreaks earlier more frequently than the current criteria (90.5%, 276/305). The results suggest that the two-part model with autoexponential regression can adequately deal with data involving an excessive number of zeros and that the novel criteria overcome the disadvantages of the current criteria to provide an earlier indication of increases in the rate of echinococcosis. Copyright © 2017 Elsevier B.V. All rights reserved.
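    For readers unfamiliar with two-part count models, the sketch below fits a zero-inflated negative binomial regression with statsmodels on synthetic data, using a lagged count as a plain regressor; the paper's autoexponential regression and percentile-based alarm criteria are more elaborate, and all data here are simulated.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(1)
n = 300
lag = rng.poisson(0.3, n)                        # previous inspection's count
mu = np.exp(-1.0 + 0.8 * lag)                    # count mean given the lag
counts = np.where(rng.random(n) < 0.7, 0,        # structural excess zeros
                  rng.poisson(mu))

X = sm.add_constant(lag.astype(float))
model = ZeroInflatedNegativeBinomialP(counts, X, exog_infl=X, p=2)
fit = model.fit(method="bfgs", maxiter=500, disp=False)
print(fit.params)   # inflation and count-part coefficients
```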

  3. A complex bid model and strategy for dispatchable loads in real-time market-based demand response

    NARCIS (Netherlands)

    Babar, M.; Nguyen, P.H.; Cuk, V.; Kamphuis, I.G.; Kling, W.L.

    2014-01-01

    The power system is moving into the new era of the Smart Grid with the help of advanced ICT and other developed technologies. These advancements have made demand response an integral part of power and energy systems. Nowadays, the concept of energy bidding is emerging in the Market-based Demand

  4. Real-time stereo matching architecture based on 2D MRF model: a memory-efficient systolic array

    Directory of Open Access Journals (Sweden)

    Park Sungchan

    2011-01-01

    Full Text Available There is a growing need in computer vision applications for stereopsis, requiring not only accurate distance but also a fast and compact physical implementation. Global energy minimization techniques provide remarkably precise results, but they suffer from huge computational complexity. One of the main challenges is to parallelize the iterative computation, solving the memory access problem between the large external memory and the massive processors. Remarkable memory saving can be obtained with our memory reduction scheme, and our new architecture is a systolic array. If we expand it to multiple chips in a cascaded manner, we can cope with various ranges of image resolution. We have realized it using FPGA technology. Our architecture requires 19 times less memory than the global minimization technique, which is a principal step toward real-time chip implementation of various iterative image processing algorithms with tiny and distributed memory resources, such as optical flow, image restoration, etc.
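    The hardware above accelerates global MRF energy minimization; as a point of contrast, the snippet below shows the much simpler local block-matching cost (sum of absolute differences) that such global methods improve upon. It is a plain numpy sketch with illustrative sizes, not the paper's architecture.

```python
import numpy as np

def disparity_sad(left, right, max_disp=16, win=3):
    """Per-pixel disparity by minimizing a window SAD cost (local method)."""
    h, w = left.shape
    pad = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(pad, h - pad):
        for x in range(pad + max_disp, w - pad):
            patch = left[y-pad:y+pad+1, x-pad:x+pad+1]
            costs = [np.abs(patch - right[y-pad:y+pad+1,
                                          x-d-pad:x-d+pad+1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp

left = np.random.rand(32, 64)
right = np.roll(left, -4, axis=1)          # right view shifted by 4 pixels
print(disparity_sad(left, right)[16, 40])  # ~4 for this shifted pair
```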

  5. Model-Based Generation of Synthetic 3D Time-Lapse Sequences of Motile Cells with Growing Filopodia

    OpenAIRE

    Sorokin, Dmitry; Peterlik, Igor; Ulman, Vladimír; Svoboda, David; Maška, Martin

    2017-01-01

    The existence of benchmark datasets is essential to objectively evaluate various image analysis methods. Nevertheless, manual annotations of fluorescence microscopy image data are very laborious and not often practicable, especially in the case of 3D+t experiments. In this work, we propose a simulation system capable of generating 3D time-lapse sequences of single motile cells with filopodial protrusions, accompanied by inherently generated ground truth. The system con...

  6. Real-Time Agent-Based Modeling Simulation with in-situ Visualization of Complex Biological Systems: A Case Study on Vocal Fold Inflammation and Healing.

    Science.gov (United States)

    Seekhao, Nuttiiya; Shung, Caroline; JaJa, Joseph; Mongeau, Luc; Li-Jessen, Nicole Y K

    2016-05-01

    We present an efficient and scalable scheme for implementing agent-based modeling (ABM) simulation with In Situ visualization of large complex systems on heterogeneous computing platforms. The scheme is designed to make optimal use of the resources available on a heterogeneous platform consisting of a multicore CPU and a GPU, resulting in minimal to no resource idle time. Furthermore, the scheme was implemented under a client-server paradigm that enables remote users to visualize and analyze simulation data as it is being generated at each time step of the model. Performance of a simulation case study of vocal fold inflammation and wound healing with 3.8 million agents shows 35× and 7× speedup in execution time over single-core and multi-core CPU respectively. Each iteration of the model took less than 200 ms to simulate, visualize and send the results to the client. This enables users to monitor the simulation in real-time and modify its course as needed.

  7. Building Chaotic Model From Incomplete Time Series

    Science.gov (United States)

    Siek, Michael; Solomatine, Dimitri

    2010-05-01

    This paper presents a number of novel techniques for building a predictive chaotic model from incomplete time series. A predictive chaotic model is built by reconstructing the time-delayed phase space from an observed time series, and the prediction is made by a global model or by adaptive local models based on the dynamical neighbors found in the reconstructed phase space. In general, the building of any data-driven model depends on the completeness and quality of the data itself. However, data completeness cannot always be guaranteed, since measurement or data transmission may intermittently fail for various reasons. We propose two main solutions for dealing with incomplete time series: imputing and non-imputing methods. For imputing methods, we utilized interpolation methods (weighted sum of linear interpolations, Bayesian principal component analysis and cubic spline interpolation) and predictive models (neural network, kernel machine, chaotic model) for estimating the missing values. After imputing the missing values, the phase space reconstruction and chaotic model prediction are executed as a standard procedure. For non-imputing methods, we reconstructed the time-delayed phase space from the observed time series with missing values. This reconstruction results in non-continuous trajectories. However, the local model prediction can still be made from the other dynamical neighbors reconstructed from non-missing values. We implemented and tested these methods to construct a chaotic model for predicting storm surges at Hoek van Holland, the entrance of Rotterdam Port. The hourly surge time series is available for the period 1990-1996. For measuring the performance of the proposed methods, a synthetic time series with missing values, generated by applying a particular random variable to the original (complete) time series, is utilized. There are two main performance measures used in this work: (1) error measures between the actual
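    The core of the approach, time-delay phase-space reconstruction followed by local prediction from dynamical neighbors, can be sketched compactly; embedding dimension, delay and neighbor count below are illustrative choices, not the paper's settings.

```python
import numpy as np

def embed(series, dim=3, tau=2):
    """Takens time-delay embedding: rows are reconstructed state vectors."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i*tau : i*tau + n] for i in range(dim)])

def predict_next(series, dim=3, tau=2, k=5):
    X = embed(series, dim, tau)
    query = X[-1]
    # find the k dynamical neighbors of the current state (excluding itself)
    dists = np.linalg.norm(X[:-1] - query, axis=1)
    idx = np.argsort(dists)[:k]
    # each neighbor's successor in the series votes for the next value
    successors = series[idx + (dim - 1) * tau + 1]
    return successors.mean()

t = np.arange(500)
surge = np.sin(0.3 * t) + 0.05 * np.random.randn(500)  # toy surge series
print(predict_next(surge))
```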

  8. Random walk-percolation-based modeling of two-phase flow in porous media: Breakthrough time and net to gross ratio estimation

    Science.gov (United States)

    Ganjeh-Ghazvini, Mostafa; Masihi, Mohsen; Ghaedi, Mojtaba

    2014-07-01

    Fluid flow modeling in porous media has many applications in waste treatment, hydrology and petroleum engineering. In any geological model, flow behavior is controlled by multiple properties. These properties must be known in advance of common flow simulations. When uncertainties are present, deterministic modeling often produces poor results. Percolation and Random Walk (RW) methods have recently been used in flow modeling. Their stochastic basis is useful in dealing with uncertainty problems. They are also useful in finding the relationship between porous media descriptions and flow behavior. This paper employs a simple methodology based on random walk and percolation techniques. The method is applied to a well-defined model reservoir in which the breakthrough time distributions are estimated. The results of this method and the conventional simulation are then compared. The effect of the net to gross ratio on the breakthrough time distribution is studied in terms of Shannon entropy. Use of the entropy plot allows one to assign the appropriate net to gross ratio to any porous medium.
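    A compact illustration of the random-walk idea: sample breakthrough times of a walker crossing a 2D site-percolation lattice by Monte Carlo. Lattice size, occupation probability and boundary handling are arbitrary choices for the sketch, not the paper's reservoir model.

```python
import numpy as np

def breakthrough_time(n=30, p=0.7, max_steps=50_000, rng=None):
    """Steps until a random walker crosses an n x n percolation lattice."""
    rng = rng or np.random.default_rng()
    open_site = rng.random((n, n)) < p     # site occupation (open = True)
    open_site[:, 0] = True                 # keep the injection column open
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    y, x = n // 2, 0
    for step in range(1, max_steps):
        dy, dx = moves[rng.integers(4)]
        ny, nx = (y + dy) % n, x + dx      # periodic in y, bounded in x
        if 0 <= nx < n and open_site[ny, nx]:
            y, x = ny, nx
            if x == n - 1:                 # reached the production side
                return step
    return None                            # trapped: no breakthrough

times = [breakthrough_time() for _ in range(100)]
times = [t for t in times if t is not None]
print(np.mean(times), np.percentile(times, [10, 50, 90]))
```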

  9. Modeling the Time-Course of Responses for the Border Ownership Selectivity Based on the Integration of Feedforward Signals and Visual Cortical Interactions.

    Science.gov (United States)

    Wagatsuma, Nobuhiko; Sakai, Ko

    2016-01-01

    modulations for time-courses were induced by selective enhancement of early-level features due to interactions between V1 and PP. Our proposed model suggests fundamental roles of surrounding suppression/facilitation based on feedforward inputs as well as the interactions between early and parietal visual areas with respect to the ambiguity dependence of the neural dynamics in intermediate-level vision.

  10. Modeling the Time-Course of Responses for the Border Ownership Selectivity Based on the Integration of Feedforward Signals and Visual Cortical Interactions

    Science.gov (United States)

    Wagatsuma, Nobuhiko; Sakai, Ko

    2017-01-01

    modulations for time-courses were induced by selective enhancement of early-level features due to interactions between V1 and PP. Our proposed model suggests fundamental roles of surrounding suppression/facilitation based on feedforward inputs as well as the interactions between early and parietal visual areas with respect to the ambiguity dependence of the neural dynamics in intermediate-level vision. PMID:28163688

  11. Discrete-time modelling of musical instruments

    International Nuclear Information System (INIS)

    Vaelimaeki, Vesa; Pakarinen, Jyri; Erkut, Cumhur; Karjalainen, Matti

    2006-01-01

    This article describes physical modelling techniques that can be used for simulating musical instruments. The methods are closely related to digital signal processing. They discretize the system with respect to time, because the aim is to run the simulation using a computer. The physics-based modelling methods can be classified as mass-spring, modal, wave digital, finite difference, digital waveguide and source-filter models. We present the basic theory and a discussion on possible extensions for each modelling technique. For some methods, a simple model example is chosen from the existing literature demonstrating a typical use of the method. For instance, in the case of the digital waveguide modelling technique a vibrating string model is discussed, and in the case of the wave digital filter technique we present a classical piano hammer model. We tackle some nonlinear and time-varying models and include new results on the digital waveguide modelling of a nonlinear string. Current trends and future directions in physical modelling of musical instruments are discussed
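    As a small taste of digital waveguide synthesis, the sketch below implements the classic Karplus-Strong plucked-string model: a delay line whose length sets the pitch, with a two-point averaging lowpass in the feedback loop. Parameter values are conventional defaults, not taken from the article.

```python
import numpy as np

def pluck(freq=440.0, fs=44100, dur=1.0, damping=0.996):
    """Karplus-Strong plucked string: delay line + damped lowpass feedback."""
    n = max(2, int(fs / freq))           # delay-line length sets the pitch
    delay = np.random.uniform(-1, 1, n)  # initial excitation: noise burst
    out = np.empty(int(fs * dur))
    for i in range(len(out)):
        out[i] = delay[0]
        # loop filter: damped average of two successive samples
        new = damping * 0.5 * (delay[0] + delay[1])
        delay = np.roll(delay, -1)
        delay[-1] = new
    return out

samples = pluck()
print(samples[:5])   # write to a WAV file to hear the decaying tone
```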

  12. The time course of syntactic activation during language processing: a model based on neuropsychological and neurophysiological data.

    Science.gov (United States)

    Friederici, A D

    1995-09-01

    This paper presents a model describing the temporal and neurotopological structure of syntactic processes during comprehension. It postulates three distinct phases of language comprehension, two of which are primarily syntactic in nature. During the first phase the parser assigns the initial syntactic structure on the basis of word category information. These early structural processes are assumed to be subserved by the anterior parts of the left hemisphere, as event-related brain potentials show this area to be maximally activated when phrase structure violations are processed and as circumscribed lesions in this area lead to an impairment of the on-line structural assignment. During the second phase lexical-semantic and verb-argument structure information is processed. This phase is neurophysiologically manifest in a negative component in the event-related brain potential around 400 ms after stimulus onset which is distributed over the left and right temporo-parietal areas when lexical-semantic information is processed and over left anterior areas when verb-argument structure information is processed. During the third phase the parser tries to map the initial syntactic structure onto the available lexical-semantic and verb-argument structure information. In case of an unsuccessful match between the two types of information reanalyses may become necessary. These processes of structural reanalysis are correlated with a centroparietally distributed late positive component in the event-related brain potential.(ABSTRACT TRUNCATED AT 250 WORDS)

  13. An eigenvalue approach for the automatic scaling of unknowns in model-based reconstructions: Application to real-time phase-contrast flow MRI.

    Science.gov (United States)

    Tan, Zhengguo; Hohage, Thorsten; Kalentev, Oleksandr; Joseph, Arun A; Wang, Xiaoqing; Voit, Dirk; Merboldt, K Dietmar; Frahm, Jens

    2017-12-01

    The purpose of this work is to develop an automatic method for the scaling of unknowns in model-based nonlinear inverse reconstructions and to evaluate its application to real-time phase-contrast (RT-PC) flow magnetic resonance imaging (MRI). Model-based MRI reconstructions of parametric maps which describe a physical or physiological function require the solution of a nonlinear inverse problem, because the list of unknowns in the extended MRI signal equation comprises multiple functional parameters and all coil sensitivity profiles. Iterative solutions therefore rely on an appropriate scaling of unknowns to numerically balance partial derivatives and regularization terms. The scaling of unknowns emerges as a self-adjoint and positive-definite matrix which is expressible by its maximal eigenvalue and solved by power iterations. The proposed method is applied to RT-PC flow MRI based on highly undersampled acquisitions. Experimental validations include numerical phantoms providing ground truth and a wide range of human studies in the ascending aorta, carotid arteries, deep veins during muscular exercise and cerebrospinal fluid during deep respiration. For RT-PC flow MRI, model-based reconstructions with automatic scaling not only offer velocity maps with high spatiotemporal acuity and much reduced phase noise, but also ensure fast convergence as well as accurate and precise velocities for all conditions tested, i.e. for different velocity ranges, vessel sizes and the simultaneous presence of signals with velocity aliasing. In summary, the proposed automatic scaling of unknowns in model-based MRI reconstructions yields quantitatively reliable velocities for RT-PC flow MRI in various experimental scenarios. Copyright © 2017 John Wiley & Sons, Ltd.
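    The power iteration mentioned above is simple enough to sketch: for a self-adjoint, positive-definite matrix, repeatedly applying the matrix to a vector and renormalizing converges to the maximal eigenvalue. The matrix here is a random stand-in for the scaling operator, not the MRI reconstruction Jacobian.

```python
import numpy as np

def max_eigenvalue(A, iters=100, tol=1e-10):
    """Power iteration: largest eigenvalue of a positive-definite matrix."""
    v = np.random.rand(A.shape[0])
    lam = 0.0
    for _ in range(iters):
        w = A @ v
        lam_new = np.linalg.norm(w)   # Rayleigh-style estimate (A is PD)
        v = w / lam_new
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam, v

B = np.random.rand(6, 6)
A = B @ B.T + 6 * np.eye(6)           # symmetric positive definite
lam, v = max_eigenvalue(A)
print(lam, np.allclose(A @ v, lam * v, atol=1e-6))
```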

  14. P-wave velocity changes in freezing hard low-porosity rocks: a laboratory-based time-average model

    Directory of Open Access Journals (Sweden)

    D. Draebing

    2012-10-01

    Full Text Available P-wave refraction seismics is a key method in permafrost research, but its applicability to low-porosity rocks, which constitute alpine rock walls, has been denied in prior studies. These studies explain p-wave velocity changes in freezing rocks exclusively by the changing velocities of the pore infill, i.e. water, air and ice. In existing models, no significant velocity increase is expected for low-porosity bedrock. We postulate that mixing laws apply for high-porosity rocks, but that freezing in confined space in low-porosity bedrock also alters the physical properties of the rock matrix. In the laboratory, we measured p-wave velocities of 22 decimetre-large low-porosity (< 10%) metamorphic, magmatic and sedimentary rock samples from permafrost sites with a natural texture (> 100 micro-fissures), from 25 °C to −15 °C in 0.3 °C increments close to the freezing point. When freezing, p-wave velocity increases by 11–166% perpendicular to cleavage/bedding, equivalent to a matrix velocity increase of 11–200%, coincident with an anisotropy decrease in most samples. The expansion of rigid bedrock upon freezing is restricted, and ice pressure will increase matrix velocity and decrease anisotropy, while the changing velocities of the pore infill are insignificant. Here, we present a modified Timur's two-phase equation implementing changes in matrix velocity dependent on lithology, and demonstrate the general applicability of refraction seismics to differentiate frozen and unfrozen low-porosity bedrock.
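    For orientation, a classical two-phase time-average relation of the kind the authors modify treats bulk slowness as a porosity-weighted average of pore-infill and matrix slowness, 1/V = phi/V_infill + (1 - phi)/V_matrix. The sketch below, with assumed velocities, reproduces the paper's starting point: at low porosity, swapping water for ice in the pores changes the bulk velocity only modestly, so the measured 11–166% increases require a change in matrix velocity.

```python
# Two-phase time-average relation (classical form the paper modifies):
#   1/V = phi/V_infill + (1 - phi)/V_matrix
# All velocity values below are illustrative assumptions.
def bulk_velocity(phi, v_infill, v_matrix):
    return 1.0 / (phi / v_infill + (1.0 - phi) / v_matrix)

V_WATER, V_ICE, V_MATRIX = 1500.0, 3500.0, 5500.0   # m/s, assumed
for phi in (0.02, 0.05, 0.10):
    unfrozen = bulk_velocity(phi, V_WATER, V_MATRIX)
    frozen = bulk_velocity(phi, V_ICE, V_MATRIX)
    print(f"phi={phi:.2f}: {unfrozen:.0f} -> {frozen:.0f} m/s "
          f"(+{100 * (frozen / unfrozen - 1):.1f}%)")
```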

  15. The effects of residential real-time pricing contracts on transco loads, pricing, and profitability: Simulations using the N-ABLE trademark agent-based model

    International Nuclear Information System (INIS)

    Ehlen, Mark A.; Scholand, Andrew J.; Stamber, Kevin L.

    2007-01-01

    An agent-based model is constructed in which a demand aggregator sells both uniform-price and real-time price (RTP) contracts to households as a means of adding price elasticity in residential power-use sectors, particularly during peak-price hours of the day. Simulations suggest that RTP contracts help a demand aggregator (1) shift its long-term contracts toward off-peak hours, thereby reducing its cost of power, and (2) increase its short-run profits if it is one of the first aggregators to have large numbers of RTP contracts; but (3) create susceptibilities to short-term market demand and price volatilities. (author)

  16. Using a thermal-based two source energy balance model with time-differencing to estimate surface energy fluxes with day-night MODIS observations

    Science.gov (United States)

    Guzinski, R.; Anderson, M. C.; Kustas, W. P.; Nieto, H.; Sandholt, I.

    2013-07-01

    The Dual Temperature Difference (DTD) model, introduced by Norman et al. (2000), uses a two source energy balance modelling scheme driven by remotely sensed observations of diurnal changes in land surface temperature (LST) to estimate surface energy fluxes. By using a time-differential temperature measurement as input, the approach reduces model sensitivity to errors in absolute temperature retrieval. The original formulation of the DTD required an early morning LST observation (approximately 1 h after sunrise) when surface fluxes are minimal, limiting application to data provided by geostationary satellites at sub-hourly temporal resolution. The DTD model has been applied primarily during the active growth phase of agricultural crops and rangeland vegetation grasses, and has not been rigorously evaluated during senescence or in forested ecosystems. In this paper we present modifications to the DTD model that enable applications using thermal observations from polar orbiting satellites, such as Terra and Aqua, with day and night overpass times over the area of interest. This allows the application of the DTD model in high latitude regions where large viewing angles preclude the use of geostationary satellites, and also exploits the higher spatial resolution provided by polar orbiting satellites. A method for estimating nocturnal surface fluxes and a scheme for estimating the fraction of green vegetation are developed and evaluated. Modification for green vegetation fraction leads to significantly improved estimation of the heat fluxes from the vegetation canopy during senescence and in forests. When the modified DTD model is run with LST measurements acquired with the Moderate Resolution Imaging Spectroradiometer (MODIS) on board the Terra and Aqua satellites, generally satisfactory agreement with field measurements is obtained for a number of ecosystems in Denmark and the United States. Finally, regional maps of energy fluxes are produced for the Danish

  17. Using a thermal-based two source energy balance model with time-differencing to estimate surface energy fluxes with day–night MODIS observations

    Directory of Open Access Journals (Sweden)

    R. Guzinski

    2013-07-01

    Full Text Available The Dual Temperature Difference (DTD model, introduced by Norman et al. (2000, uses a two source energy balance modelling scheme driven by remotely sensed observations of diurnal changes in land surface temperature (LST to estimate surface energy fluxes. By using a time-differential temperature measurement as input, the approach reduces model sensitivity to errors in absolute temperature retrieval. The original formulation of the DTD required an early morning LST observation (approximately 1 h after sunrise when surface fluxes are minimal, limiting application to data provided by geostationary satellites at sub-hourly temporal resolution. The DTD model has been applied primarily during the active growth phase of agricultural crops and rangeland vegetation grasses, and has not been rigorously evaluated during senescence or in forested ecosystems. In this paper we present modifications to the DTD model that enable applications using thermal observations from polar orbiting satellites, such as Terra and Aqua, with day and night overpass times over the area of interest. This allows the application of the DTD model in high latitude regions where large viewing angles preclude the use of geostationary satellites, and also exploits the higher spatial resolution provided by polar orbiting satellites. A method for estimating nocturnal surface fluxes and a scheme for estimating the fraction of green vegetation are developed and evaluated. Modification for green vegetation fraction leads to significantly improved estimation of the heat fluxes from the vegetation canopy during senescence and in forests. When the modified DTD model is run with LST measurements acquired with the Moderate Resolution Imaging Spectroradiometer (MODIS on board the Terra and Aqua satellites, generally satisfactory agreement with field measurements is obtained for a number of ecosystems in Denmark and the United States. Finally, regional maps of energy fluxes are produced for the

  18. Using Virtual Reality to Assess Ethical Decisions in Road Traffic Scenarios: Applicability of Value-of-Life-Based Models and Influences of Time Pressure

    Directory of Open Access Journals (Sweden)

    Leon R. Sütfeld

    2017-07-01

    Full Text Available Self-driving cars are posing a new challenge to our ethics. By using algorithms to make decisions in situations where harming humans is possible, probable, or even unavoidable, a self-driving car's ethical behavior comes pre-defined. Ad hoc decisions are made in milliseconds, but can be based on extensive research and debates. The same algorithms are also likely to be used in millions of cars at a time, increasing the impact of any inherent biases, and increasing the importance of getting it right. Previous research has shown that moral judgment and behavior are highly context-dependent, and comprehensive and nuanced models of the underlying cognitive processes are out of reach to date. Models of ethics for self-driving cars should thus aim to match human decisions made in the same context. We employed immersive virtual reality to assess ethical behavior in simulated road traffic scenarios, and used the collected data to train and evaluate a range of decision models. In the study, participants controlled a virtual car and had to choose which of two given obstacles they would sacrifice in order to spare the other. We randomly sampled obstacles from a variety of inanimate objects, animals and humans. Our model comparison shows that simple models based on one-dimensional value-of-life scales are suited to describe human ethical behavior in these situations. Furthermore, we examined the influence of severe time pressure on the decision-making process. We found that it decreases consistency in the decision patterns, thus providing an argument for algorithmic decision-making in road traffic. This study demonstrates the suitability of virtual reality for the assessment of ethical behavior in humans, delivering consistent results across subjects, while closely matching the experimental settings to the real world scenarios in question.

  19. Using Virtual Reality to Assess Ethical Decisions in Road Traffic Scenarios: Applicability of Value-of-Life-Based Models and Influences of Time Pressure.

    Science.gov (United States)

    Sütfeld, Leon R; Gast, Richard; König, Peter; Pipa, Gordon

    2017-01-01

    Self-driving cars are posing a new challenge to our ethics. By using algorithms to make decisions in situations where harming humans is possible, probable, or even unavoidable, a self-driving car's ethical behavior comes pre-defined. Ad hoc decisions are made in milliseconds, but can be based on extensive research and debates. The same algorithms are also likely to be used in millions of cars at a time, increasing the impact of any inherent biases, and increasing the importance of getting it right. Previous research has shown that moral judgment and behavior are highly context-dependent, and comprehensive and nuanced models of the underlying cognitive processes are out of reach to date. Models of ethics for self-driving cars should thus aim to match human decisions made in the same context. We employed immersive virtual reality to assess ethical behavior in simulated road traffic scenarios, and used the collected data to train and evaluate a range of decision models. In the study, participants controlled a virtual car and had to choose which of two given obstacles they would sacrifice in order to spare the other. We randomly sampled obstacles from a variety of inanimate objects, animals and humans. Our model comparison shows that simple models based on one-dimensional value-of-life scales are suited to describe human ethical behavior in these situations. Furthermore, we examined the influence of severe time pressure on the decision-making process. We found that it decreases consistency in the decision patterns, thus providing an argument for algorithmic decision-making in road traffic. This study demonstrates the suitability of virtual reality for the assessment of ethical behavior in humans, delivering consistent results across subjects, while closely matching the experimental settings to the real world scenarios in question.

  20. Real-time modeling of heat distributions

    Science.gov (United States)

    Hamann, Hendrik F.; Li, Hongfei; Yarlanki, Srinivas

    2018-01-02

    Techniques for real-time modeling of temperature distributions based on streaming sensor data are provided. In one aspect, a method for creating a three-dimensional temperature distribution model for a room having a floor and a ceiling is provided. The method includes the following steps. A ceiling temperature distribution in the room is determined. A floor temperature distribution in the room is determined. An interpolation between the ceiling temperature distribution and the floor temperature distribution is used to obtain the three-dimensional temperature distribution model for the room.
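    The interpolation step is straightforward to sketch: with measured ceiling and floor temperature maps, the temperature at any intermediate height can be taken as a height-weighted blend of the two. Linear interpolation and the sensor values below are assumptions for illustration.

```python
import numpy as np

ceiling = np.array([[24.0, 25.5], [26.0, 27.5]])  # °C at ceiling sensors
floor   = np.array([[18.0, 18.5], [19.0, 19.5]])  # °C at floor sensors
room_height = 3.0                                  # metres (assumed)

def temperature_at(z):
    """Temperature map at height z (0 = floor, room_height = ceiling)."""
    alpha = z / room_height
    return (1 - alpha) * floor + alpha * ceiling

print(temperature_at(1.5))   # mid-height temperature distribution
```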

  1. A real-time, dynamic early-warning model based on uncertainty analysis and risk assessment for sudden water pollution accidents.

    Science.gov (United States)

    Hou, Dibo; Ge, Xiaofan; Huang, Pingjie; Zhang, Guangxin; Loáiciga, Hugo

    2014-01-01

    A real-time, dynamic, early-warning model (EP-risk model) is proposed to cope with sudden water quality pollution accidents affecting downstream areas with raw-water intakes (denoted as EPs). The EP-risk model outputs the risk level of water pollution at the EP by calculating the likelihood of pollution and evaluating the impact of pollution. A generalized form of the EP-risk model for river pollution accidents based on Monte Carlo simulation, the analytic hierarchy process (AHP) method, and the risk matrix method is proposed. The likelihood of water pollution at the EP is calculated by the Monte Carlo method, which is used for uncertainty analysis of pollutants' transport in rivers. The impact of water pollution at the EP is evaluated by expert knowledge and the results of Monte Carlo simulation based on the analytic hierarchy process. The final risk level of water pollution at the EP is determined by the risk matrix method. A case study of the proposed method is illustrated with a phenol spill accident in China.
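    A hedged sketch of the likelihood-of-pollution step: propagate uncertain river parameters through a one-dimensional advection-dispersion solution by Monte Carlo and count how often the concentration at the intake (EP) exceeds a threshold. All parameter values, distributions and the threshold are illustrative assumptions, and the paper's AHP-based impact evaluation and risk matrix are not shown.

```python
import numpy as np

rng = np.random.default_rng(42)
M, A = 500.0, 200.0          # spill mass (kg), channel cross-section (m^2)
x, t = 10_000.0, 3600.0 * 6  # distance to the EP (m), time after spill (s)
threshold = 5e-4             # concentration limit (kg/m^3), assumed

def concentration(u, D):
    # instantaneous point-source solution of 1-D advection-dispersion
    return (M / A) / np.sqrt(4 * np.pi * D * t) * \
           np.exp(-(x - u * t) ** 2 / (4 * D * t))

u = rng.normal(0.5, 0.1, 10_000)              # uncertain velocity (m/s)
D = rng.lognormal(np.log(30), 0.3, 10_000)    # uncertain dispersion (m^2/s)
likelihood = np.mean(concentration(u, D) > threshold)
print(f"P(exceed threshold at EP) ~ {likelihood:.2f}")
```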

  2. Skull base tumor model.

    Science.gov (United States)

    Gragnaniello, Cristian; Nader, Remi; van Doormaal, Tristan; Kamel, Mahmoud; Voormolen, Eduard H J; Lasio, Giovanni; Aboud, Emad; Regli, Luca; Tulleken, Cornelius A F; Al-Mefty, Ossama

    2010-11-01

    Resident duty-hours restrictions have now been instituted in many countries worldwide. Shortened training times and increased public scrutiny of surgical competency have led to a move away from the traditional apprenticeship model of training. The development of educational models for brain anatomy is a fascinating innovation allowing neurosurgeons to train without the need to practice on real patients and it may be a solution to achieve competency within a shortened training period. The authors describe the use of Stratathane resin ST-504 polymer (SRSP), which is inserted at different intracranial locations to closely mimic meningiomas and other pathological entities of the skull base, in a cadaveric model, for use in neurosurgical training. Silicone-injected and pressurized cadaveric heads were used for studying the SRSP model. The SRSP presents unique intrinsic metamorphic characteristics: liquid at first, it expands and foams when injected into the desired area of the brain, forming a solid tumorlike structure. The authors injected SRSP via different passages that did not influence routes used for the surgical approach for resection of the simulated lesion. For example, SRSP injection routes included endonasal transsphenoidal or transoral approaches if lesions were to be removed through standard skull base approach, or, alternatively, SRSP was injected via a cranial approach if the removal was planned to be via the transsphenoidal or transoral route. The model was set in place in 3 countries (US, Italy, and The Netherlands), and a pool of 13 physicians from 4 different institutions (all surgeons and surgeons in training) participated in evaluating it and provided feedback. All 13 evaluating physicians had overall positive impressions of the model. The overall score on 9 components evaluated--including comparison between the tumor model and real tumor cases, perioperative requirements, general impression, and applicability--was 88% (100% being the best possible

  3. Predicting the mean cycle time as a function of throughput and product mix for cluster tool workstations using EPT-based aggregate modeling

    NARCIS (Netherlands)

    Veeger, C.P.L.; Etman, L.F.P.; Herk, van J.; Rooda, J.E.

    2009-01-01

    Predicting the mean cycle time as a function of throughput and product mix is helpful in making the production planning for cluster tools. To predict the mean cycle time, detailed simulation models may be used. However, detailed models require much development time, and it may not be possible to

  4. Modeling biological pathway dynamics with timed automata.

    Science.gov (United States)

    Schivo, Stefano; Scholma, Jetse; Wanders, Brend; Urquidi Camacho, Ricardo A; van der Vet, Paul E; Karperien, Marcel; Langerak, Rom; van de Pol, Jaco; Post, Janine N

    2014-05-01

    Living cells are constantly subjected to a plethora of environmental stimuli that require integration into an appropriate cellular response. This integration takes place through signal transduction events that form tightly interconnected networks. The understanding of these networks requires capturing their dynamics through computational support and models. ANIMO (analysis of Networks with Interactive Modeling) is a tool that enables the construction and exploration of executable models of biological networks, helping to derive hypotheses and to plan wet-lab experiments. The tool is based on the formalism of Timed Automata, which can be analyzed via the UPPAAL model checker. Thanks to Timed Automata, we can provide a formal semantics for the domain-specific language used to represent signaling networks. This enforces precision and uniformity in the definition of signaling pathways, contributing to the integration of isolated signaling events into complex network models. We propose an approach to discretization of reaction kinetics that allows us to efficiently use UPPAAL as the computational engine to explore the dynamic behavior of the network of interest. A user-friendly interface hides the use of Timed Automata from the user, while keeping the expressive power intact. Abstraction to single-parameter kinetics speeds up construction of models that remain faithful enough to provide meaningful insight. The resulting dynamic behavior of the network components is displayed graphically, allowing for an intuitive and interactive modeling experience.
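    To give a flavor of discretized, single-parameter kinetics of the kind ANIMO exposes to the user, the toy sketch below steps a two-node network whose activity levels are integers between 0 and N, with each interaction firing after a delay set by one rate parameter. This is a plain discrete-event loop, not Timed Automata semantics or the ANIMO tool itself.

```python
import heapq

N = 10                               # number of discrete activity levels
levels = {"EGFR": N, "ERK": 0}       # initial activities (toy network)
# events: (fire time ~ 1/rate, source, target, effect on target level)
events = [(1.0 / 2.0, "EGFR", "ERK", +1)]

heapq.heapify(events)
t = 0.0
while events and t < 10.0:
    t, src, dst, effect = heapq.heappop(events)
    if levels[src] > 0:              # an active source drives the reaction
        levels[dst] = max(0, min(N, levels[dst] + effect))
        if 0 < levels[dst] < N:      # re-fire while the target can change
            heapq.heappush(events, (t + 0.5, src, dst, effect))
print(t, levels)                     # ERK saturates at level N
```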

  5. Joint model-based clustering of nonlinear longitudinal trajectories and associated time-to-event data analysis, linked by latent class membership: with application to AIDS clinical studies.

    Science.gov (United States)

    Huang, Yangxin; Lu, Xiaosun; Chen, Jiaqing; Liang, Juan; Zangmeister, Miriam

    2017-10-27

    Longitudinal and time-to-event data are often observed together. Finite mixture models are currently used to analyze nonlinear heterogeneous longitudinal data, which, by releasing the homogeneity restriction of nonlinear mixed-effects (NLME) models, can cluster individuals into one of the pre-specified classes with class membership probabilities. This clustering may have clinical significance, and be associated with clinically important time-to-event data. This article develops a joint modeling approach to a finite mixture of NLME models for longitudinal data and proportional hazard Cox model for time-to-event data, linked by individual latent class indicators, under a Bayesian framework. The proposed joint models and method are applied to a real AIDS clinical trial data set, followed by simulation studies to assess the performance of the proposed joint model and a naive two-step model, in which finite mixture model and Cox model are fitted separately.

  6. Modelling of the concentration-time relationship based on global diffusion-charge transfer parameters in a flow-by reactor with a 3D electrode

    International Nuclear Information System (INIS)

    Nava, J.L.; Sosa, E.; Carreno, G.; Ponce-de-Leon, C.; Oropeza, M.T.

    2006-01-01

    A concentration versus time relationship model based on the isothermal diffusion-charge transfer mechanism was developed for a flow-by reactor with a three-dimensional (3D) reticulated vitreous carbon (RVC) electrode. The relationship was based on the effectiveness factor (η), which led to the simulation of the concentration decay at different electrode polarisation conditions, i.e. -0.1, -0.3 and -0.59 V versus SCE; charge transfer control was assumed for the former two potentials, and mixed and mass-transport control for the latter. Charge transfer and mass transport parameters were estimated from experimental data using Electrochemical Impedance Spectroscopy (EIS) and Linear Voltammetry (LV) techniques, respectively

  7. Modelling of the concentration-time relationship based on global diffusion-charge transfer parameters in a flow-by reactor with a 3D electrode

    Energy Technology Data Exchange (ETDEWEB)

    Nava, J.L. [Universidad Autonoma Metropolitana-Iztapalapa, Departamento de Quimica, Av. San Rafael Atlixco 186, A.P. 55-534, C.P. 09340, Mexico D.F. (Mexico); Sosa, E. [Instituto Mexicano del Petroleo, Programa de Investigacion en Ingenieria Molecular, Eje Central 152, C.P. 07730, Mexico D.F. (Mexico); Carreno, G. [Universidad de Guanajuato, Facultad de Ingenieria en Geomatica e Hidraulica, Av. Juarez 77, C.P. 36000, Guanajuato, Gto. (Mexico); Ponce-de-Leon, C. [Electrochemical Engineering Group, School of Engineering Sciences, University of Southampton, Highfield, Southampton SO17 1BJ (United Kingdom)]. E-mail: capla@soton.ac.uk; Oropeza, M.T. [Centro de Graduados e Investigacion del Instituto Tecnologico de Tijuana, Blvd. Industrial, s/n, C.P. 22500, Tijuana B.C. (Mexico)

    2006-05-25

    A concentration versus time relationship model based on the isothermal diffusion-charge transfer mechanism was developed for a flow-by reactor with a three-dimensional (3D) reticulated vitreous carbon (RVC) electrode. The relationship was based on the effectiveness factor (η), which led to the simulation of the concentration decay at different electrode polarisation conditions, i.e. -0.1, -0.3 and -0.59 V versus SCE; charge transfer control was assumed for the former two potentials, and mixed and mass-transport control for the latter. Charge transfer and mass transport parameters were estimated from experimental data using Electrochemical Impedance Spectroscopy (EIS) and Linear Voltammetry (LV) techniques, respectively.
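    Under mass-transport control, the concentration decay in a batch-recycle reactor of this kind is commonly first-order, C(t) = C0 exp(-(km·Ae·η/V)·t). The sketch below evaluates that decay with assumed values for the mass-transport coefficient, electrode area, electrolyte volume and effectiveness factor; none are taken from the papers above.

```python
import numpy as np

C0 = 1.0e-3        # initial concentration (mol/L), assumed
km = 1.5e-5        # mass-transport coefficient (m/s), assumed
Ae = 0.5           # 3D electrode area (m^2), assumed
V = 2.0e-3         # electrolyte volume (m^3), assumed
eta = 0.8          # effectiveness factor of the porous electrode, assumed

t = np.linspace(0, 3600, 7)                   # one hour of electrolysis
C = C0 * np.exp(-(km * Ae * eta / V) * t)     # first-order decay
for ti, ci in zip(t, C):
    print(f"t={ti:5.0f} s  C={ci:.2e} mol/L")
```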

  8. GPS-based Microenvironment Tracker (MicroTrac) Model to Estimate Time-Location of Individuals for Air Pollution Exposure Assessments: Model Evaluation in Central North Carolina

    Science.gov (United States)

    A critical aspect of air pollution exposure assessment is the estimation of the time spent by individuals in various microenvironments (ME). Accounting for the time spent in different ME with different pollutant concentrations can reduce exposure misclassifications, while failure...

  9. A GPU-based framework for modeling real-time 3D lung tumor conformal dosimetry with subject-specific lung tumor motion

    International Nuclear Information System (INIS)

    Min Yugang; Santhanam, Anand; Ruddy, Bari H; Neelakkantan, Harini; Meeks, Sanford L; Kupelian, Patrick A

    2010-01-01

    In this paper, we present a graphics processing unit (GPU)-based simulation framework to calculate the delivered dose to a 3D moving lung tumor and its surrounding normal tissues, which are undergoing subject-specific lung deformations. The GPU-based simulation framework models the motion of the 3D volumetric lung tumor and its surrounding tissues, simulates the dose delivery using the dose extracted from a treatment plan using Pinnacle Treatment Planning System, Phillips, for one of the 3DCTs of the 4DCT and predicts the amount and location of radiation doses deposited inside the lung. The 4DCT lung datasets were registered with each other using a modified optical flow algorithm. The motion of the tumor and the motion of the surrounding tissues were simulated by measuring the changes in lung volume during the radiotherapy treatment using spirometry. The real-time dose delivered to the tumor for each beam is generated by summing the dose delivered to the target volume at each increase in lung volume during the beam delivery time period. The simulation results showed the real-time capability of the framework at 20 discrete tumor motion steps per breath, which is higher than the number of 4DCT steps (approximately 12) reconstructed during multiple breathing cycles.

  10. A GPU-based framework for modeling real-time 3D lung tumor conformal dosimetry with subject-specific lung tumor motion

    Energy Technology Data Exchange (ETDEWEB)

    Min Yugang; Santhanam, Anand; Ruddy, Bari H [University of Central Florida, FL (United States); Neelakkantan, Harini; Meeks, Sanford L [M D Anderson Cancer Center Orlando, FL (United States); Kupelian, Patrick A, E-mail: anand.santhanam@orlandohealth.co [Department of Radiation Oncology, University of California, Los Angeles, CA (United States)

    2010-09-07

    In this paper, we present a graphics processing unit (GPU)-based simulation framework to calculate the delivered dose to a 3D moving lung tumor and its surrounding normal tissues, which are undergoing subject-specific lung deformations. The GPU-based simulation framework models the motion of the 3D volumetric lung tumor and its surrounding tissues, simulates the dose delivery using the dose extracted from a treatment plan using Pinnacle Treatment Planning System, Phillips, for one of the 3DCTs of the 4DCT and predicts the amount and location of radiation doses deposited inside the lung. The 4DCT lung datasets were registered with each other using a modified optical flow algorithm. The motion of the tumor and the motion of the surrounding tissues were simulated by measuring the changes in lung volume during the radiotherapy treatment using spirometry. The real-time dose delivered to the tumor for each beam is generated by summing the dose delivered to the target volume at each increase in lung volume during the beam delivery time period. The simulation results showed the real-time capability of the framework at 20 discrete tumor motion steps per breath, which is higher than the number of 4DCT steps (approximately 12) reconstructed during multiple breathing cycles.

  11. A GPU-based framework for modeling real-time 3D lung tumor conformal dosimetry with subject-specific lung tumor motion.

    Science.gov (United States)

    Min, Yugang; Santhanam, Anand; Neelakkantan, Harini; Ruddy, Bari H; Meeks, Sanford L; Kupelian, Patrick A

    2010-09-07

    In this paper, we present a graphics processing unit (GPU)-based simulation framework to calculate the delivered dose to a 3D moving lung tumor and its surrounding normal tissues, which are undergoing subject-specific lung deformations. The GPU-based simulation framework models the motion of the 3D volumetric lung tumor and its surrounding tissues, simulates the dose delivery using the dose extracted from a treatment plan using Pinnacle Treatment Planning System, Phillips, for one of the 3DCTs of the 4DCT and predicts the amount and location of radiation doses deposited inside the lung. The 4DCT lung datasets were registered with each other using a modified optical flow algorithm. The motion of the tumor and the motion of the surrounding tissues were simulated by measuring the changes in lung volume during the radiotherapy treatment using spirometry. The real-time dose delivered to the tumor for each beam is generated by summing the dose delivered to the target volume at each increase in lung volume during the beam delivery time period. The simulation results showed the real-time capability of the framework at 20 discrete tumor motion steps per breath, which is higher than the number of 4DCT steps (approximately 12) reconstructed during multiple breathing cycles.

  12. Bootstrapping a time series model

    International Nuclear Information System (INIS)

    Son, M.S.

    1984-01-01

    The bootstrap is a methodology for estimating standard errors. The idea is to use a Monte Carlo simulation experiment based on a nonparametric estimate of the error distribution. The main objective of this dissertation was to demonstrate the use of the bootstrap to attach standard errors to coefficient estimates and multi-period forecasts in a second-order autoregressive model fitted by least squares and maximum likelihood estimation. A secondary objective of this article was to present the bootstrap in the context of two econometric equations describing the unemployment rate and individual income tax in the state of Oklahoma. As it turns out, the conventional asymptotic formulae (both the least squares and maximum likelihood estimates) for estimating standard errors appear to overestimate the true standard errors. But there are two problems: 1) the first two observations y_1 and y_2 have been fixed, and 2) the residuals have not been inflated. After these two factors are considered in the trial and bootstrap experiment, both the conventional maximum likelihood and bootstrap estimates of the standard errors appear to be performing quite well.
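    A minimal residual-bootstrap sketch for an AR(2) model: fit by least squares, resample centered residuals, rebuild the series while holding the first two observations fixed (one of the issues noted above), and refit to collect the sampling spread of the coefficients. The simulated data and 500 replications are illustrative.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(7)
n = 200
y = np.zeros(n)
for t in range(2, n):                       # simulate an AR(2) ground truth
    y[t] = 0.6 * y[t-1] - 0.3 * y[t-2] + rng.normal()

fit = AutoReg(y, lags=2, trend="c").fit()
resid = fit.resid - fit.resid.mean()        # center (could also inflate)

boot = []
for _ in range(500):
    e = rng.choice(resid, size=n, replace=True)
    yb = y.copy()                           # keeps y[0], y[1] fixed
    for t in range(2, n):
        yb[t] = (fit.params[0] + fit.params[1] * yb[t-1]
                 + fit.params[2] * yb[t-2] + e[t])
    boot.append(AutoReg(yb, lags=2, trend="c").fit().params)
print(np.std(np.array(boot), axis=0))       # bootstrap SEs: (const, ar1, ar2)
```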

  13. THE EFFECT OF PROBLEM SOLVING LEARNING MODEL BASED JUST IN TIME TEACHING (JiTT) ON SCIENCE PROCESS SKILLS (SPS) ON STRUCTURE AND FUNCTION OF PLANT TISSUE CONCEPT

    Directory of Open Access Journals (Sweden)

    Resha Maulida

    2017-11-01

    Full Text Available The purpose of this study was to determine the effect of the Problem Solving learning model based on Just in Time Teaching (JiTT) on students' science process skills (SPS) on the structure and function of plant tissue concept. This research was conducted at a State Senior High School in South Tangerang. The research used a quasi-experimental, nonequivalent pretest-posttest control group design. The samples of this study were 34 students for the experimental group and 34 students for the control group. Data were obtained using a process skill test instrument (essay type) that had been tested for validity and reliability. Analysis of the data by ANCOVA shows that there was a significant difference in posttest scores between the experimental and control groups, controlling for the pretest score (F = 4.958; p < 0.05). Thus, problem-solving learning based on JiTT proved to improve students' SPS. The contribution of this treatment to improving the students' SPS was 7.2%. This shows that there was an effect of the problem solving model based on JiTT on students' SPS on the structure and function of plant tissue concept.

  14. Is there any correlation between model-based perfusion parameters and model-free parameters of time-signal intensity curve on dynamic contrast enhanced MRI in breast cancer patients?

    Energy Technology Data Exchange (ETDEWEB)

    Yi, Boram; Kang, Doo Kyoung; Kim, Tae Hee [Ajou University School of Medicine, Department of Radiology, Suwon, Gyeonggi-do (Korea, Republic of); Yoon, Dukyong [Ajou University School of Medicine, Department of Biomedical Informatics, Suwon (Korea, Republic of); Jung, Yong Sik; Kim, Ku Sang [Ajou University School of Medicine, Department of Surgery, Suwon (Korea, Republic of); Yim, Hyunee [Ajou University School of Medicine, Department of Pathology, Suwon (Korea, Republic of)

    2014-05-15

    To find out any correlation between dynamic contrast-enhanced (DCE) model-based parameters and model-free parameters, and evaluate correlations between perfusion parameters with histologic prognostic factors. Model-based parameters (Ktrans, Kep and Ve) of 102 invasive ductal carcinomas were obtained using DCE-MRI and post-processing software. Correlations between model-based and model-free parameters and between perfusion parameters and histologic prognostic factors were analysed. Mean Kep was significantly higher in cancers showing initial rapid enhancement (P = 0.002) and a delayed washout pattern (P = 0.001). Ve was significantly lower in cancers showing a delayed washout pattern (P = 0.015). Kep significantly correlated with time to peak enhancement (TTP) (ρ = -0.33, P < 0.001) and washout slope (ρ = 0.39, P = 0.002). Ve was significantly correlated with TTP (ρ = 0.33, P = 0.002). Mean Kep was higher in tumours with high nuclear grade (P = 0.017). Mean Ve was lower in tumours with high histologic grade (P = 0.005) and in tumours with negative oestrogen receptor status (P = 0.047). TTP was shorter in tumours with negative oestrogen receptor status (P = 0.037). We could acquire general information about the tumour vascular physiology, interstitial space volume and pathologic prognostic factors by analyzing time-signal intensity curve without a complicated acquisition process for the model-based parameters. (orig.)

  15. Modeling of the time sharing for lecturers

    Directory of Open Access Journals (Sweden)

    E. Yu. Shakhova

    2017-01-01

    Full Text Available In the context of the modernization of the Russian system of higher education, it is necessary to analyze the working time of university lecturers, taking into account both the basic job functions of a university lecturer and other duties. A mathematical problem is presented for optimal working time planning for university lecturers. A review of the documents and of native and foreign works on the subject is made. Simulation conditions, based on an analysis of the subject area, are defined. Models of optimal working time sharing for university lecturers («the second half of the day») are developed and implemented in the MathCAD system, and optimal solutions have been obtained. Three problems have been solved: (1) to find the optimal time sharing of «the second half of the day» for a certain position of university lecturer; (2) to find the optimal time sharing of «the second half of the day» for all positions of university lecturers in view of the established model of academic load differentiation; (3) to find the volume of the non-standardized part of work time in the department for the academic year, taking into account the established model of academic load differentiation, the distribution of faculty numbers across positions, and the optimal time sharing of «the second half of the day» for the lecturers of the department. Examples of the analysis results are given. Practical application of the research: the developed models can be used when planning the working time of an individual lecturer in the preparation of the department's work plan for the academic year, as well as to conduct a comprehensive analysis of administrative decisions in the development of local university regulations.
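    One way to make the optimal time-sharing problem concrete is as a small linear program; the sketch below allocates weekly «second half of the day» hours among three activity types with scipy. The objective weights, the 20-hour budget and the bounds are invented for illustration and are not the article's MathCAD model.

```python
from scipy.optimize import linprog

# maximize 2*methodical + 3*research + 1*advising hours of "value"
# (linprog minimizes, so the objective is negated)
c = [-2.0, -3.0, -1.0]                 # methodical, research, advising
A_ub = [[1.0, 1.0, 1.0]]               # total weekly hour budget
b_ub = [20.0]
bounds = [(4.0, None), (0.0, None), (2.0, 6.0)]  # minimum/maximum duties

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x)   # -> [4., 14., 2.]: research absorbs the free hours
```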

  16. Optimal time points sampling in pathway modelling.

    Science.gov (United States)

    Hu, Shiyan

    2004-01-01

    Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling and the related parameter estimation, but few studies give consideration to the issue of optimal sampling time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time-consuming and expensive. Approximating parameters for models from only a few available samples is therefore of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the selection of time points in an optimal way, minimizing the variance of parameter estimates. We first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the morass of selecting good initial values or getting stuck in local optima that usually accompanies conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.

  17. Multiple Indicator Stationary Time Series Models.

    Science.gov (United States)

    Sivo, Stephen A.

    2001-01-01

    Discusses the propriety and practical advantages of specifying multivariate time series models in the context of structural equation modeling for time series and longitudinal panel data. For time series data, the multiple indicator model specification improves on classical time series analysis. For panel data, the multiple indicator model…

  18. Modeling and Understanding Time-Evolving Scenarios

    Directory of Open Access Journals (Sweden)

    Riccardo Melen

    2015-08-01

    Full Text Available In this paper, we consider the problem of modeling application scenarios characterized by variability over time and involving heterogeneous kinds of knowledge. The evolution of distributed technologies creates new and challenging possibilities of integrating different kinds of problem solving methods, obtaining many benefits from the user point of view. In particular, we propose here a multilayer modeling system and adopt the Knowledge Artifact concept to tie together statistical and Artificial Intelligence rule-based methods to tackle problems in ubiquitous and distributed scenarios.

  19. Verifying real-time systems against scenario-based requirements

    DEFF Research Database (Denmark)

    Larsen, Kim Guldstrand; Li, Shuhao; Nielsen, Brian

    2009-01-01

    We propose an approach to automatic verification of real-time systems against scenario-based requirements. A real-time system is modeled as a network of Timed Automata (TA), and a scenario-based requirement is specified as a Live Sequence Chart (LSC). We define a trace-based semantics for a kernel subset of the LSC language. By equivalently translating an LSC chart into an observer TA and then non-intrusively composing this observer with the original system model, the problem of verifying a real-time system against a scenario-based requirement reduces to a classical real-time model checking problem.

  20. Fisher information framework for time series modeling

    Science.gov (United States)

    Venkatesan, R. C.; Plastino, A.

    2017-08-01

    A robust prediction model invoking the Takens embedding theorem, whose working hypothesis is obtained via an inference procedure based on the minimum Fisher information principle, is presented. The coefficients of the ansatz, central to the working hypothesis, satisfy a time-independent Schrödinger-like equation in a vector setting. The inference of (i) the probability density function of the coefficients of the working hypothesis and (ii) the constraint-driven pseudo-inverse condition for the modeling phase of the prediction scheme is made, for the case of normal distributions, with the aid of the quantum mechanical virial theorem. The well-known reciprocity relations and the associated Legendre transform structure for the Fisher information measure (FIM, hereafter)-based model in a vector setting (with least square constraints) are self-consistently derived. These relations are demonstrated to yield an intriguing form of the FIM for the modeling phase, which defines the working hypothesis solely in terms of the observed data. Prediction is demonstrated on time series obtained from (i) the Mackey-Glass delay-differential equation, (ii) an ECG signal from the MIT-Beth Israel Deaconess Hospital (MIT-BIH) cardiac arrhythmia database, and (iii) an ECG signal from the Creighton University ventricular tachyarrhythmia database; the ECG samples were obtained from the Physionet online repository. These numerical examples demonstrate the efficiency of the prediction model.
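
    The Takens-embedding ingredient of such a predictor can be sketched independently of the FIM machinery. The following minimal example (not the authors' inference scheme) builds delay vectors from a toy series and fits ridge-regularized least-squares coefficients for one-step prediction; the embedding dimension, lag, and regularization strength are assumed values.

        import numpy as np

        def delay_embed(x, dim=3, tau=2):
            """Build Takens delay vectors X[i] = (x[i], x[i+tau], ..., x[i+(dim-1)tau])."""
            n = len(x) - (dim - 1) * tau - 1
            X = np.column_stack([x[i:i + n] for i in range(0, dim * tau, tau)])
            y = x[(dim - 1) * tau + 1:(dim - 1) * tau + 1 + n]   # next value to predict
            return X, y

        # Toy series: a noisy sine standing in for Mackey-Glass or ECG data.
        t = np.linspace(0, 40, 1000)
        x = np.sin(t) + 0.05 * np.random.default_rng(1).standard_normal(t.size)

        X, y = delay_embed(x)
        # Ridge-regularized least squares for the prediction coefficients (the "ansatz").
        lam = 1e-3
        w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
        print("one-step prediction RMSE:", np.sqrt(np.mean((X @ w - y) ** 2)))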

  1. Evaluation of NWP-based Satellite Precipitation Error Correction with Near-Real-Time Model Products and Flood-inducing Storms

    Science.gov (United States)

    Zhang, X.; Anagnostou, E. N.; Schwartz, C. S.

    2017-12-01

    Satellite precipitation products tend to have significant biases over complex terrain. Our research investigates a statistical approach for satellite precipitation adjustment based solely on numerical weather simulations. This approach has been evaluated in two mid-latitude (Zhang et al. 2013*1, Zhang et al. 2016*2) and three tropical mountainous regions by using the WRF model to adjust two high-resolution satellite products: (i) the National Oceanic and Atmospheric Administration (NOAA) Climate Prediction Center morphing technique (CMORPH) and (ii) the Global Satellite Mapping of Precipitation (GSMaP). Results show the adjustment effectively reduces the satellite underestimation of high rain rates, which provides a solid proof-of-concept for continuing research on NWP-based satellite correction. In this study we investigate the feasibility of using NCAR Real-time Ensemble Forecasts*3 for adjusting near-real-time satellite precipitation datasets over complex terrain areas in the Continental United States (CONUS), such as the Olympic Peninsula, the California coastal mountain ranges, the Rocky Mountains and the South Appalachians. The research focuses on flood-inducing storms that occurred from May 2015 to December 2016 and four satellite precipitation products (CMORPH, GSMaP, PERSIANN-CCS and IMERG). The error correction performance evaluation is based on comparisons against the gauge-adjusted Stage IV precipitation data. *1 Zhang, Xinxuan, et al. "Using NWP simulations in satellite rainfall estimation of heavy precipitation events over mountainous areas." Journal of Hydrometeorology 14.6 (2013): 1844-1858. *2 Zhang, Xinxuan, et al. "Hydrologic Evaluation of NWP-Adjusted CMORPH Estimates of Hurricane-Induced Precipitation in the Southern Appalachians." Journal of Hydrometeorology 17.4 (2016): 1087-1099. *3 Schwartz, Craig S., et al. "NCAR's experimental real-time convection-allowing ensemble prediction system." Weather and Forecasting 30.6 (2015): 1645-1654.

  2. Model-based testing for space-time interaction using point processes: An application to psychiatric hospital admissions in an urban area.

    Science.gov (United States)

    Meyer, Sebastian; Warnke, Ingeborg; Rössler, Wulf; Held, Leonhard

    2016-05-01

    Spatio-temporal interaction is inherent to cases of infectious diseases and occurrences of earthquakes, whereas the spread of other events, such as cancer or crime, is less evident. Statistical significance tests of space-time clustering usually assess the correlation between the spatial and temporal (transformed) distances of the events. Although appealingly simple, these classical tests neither adjust for the underlying population nor account for a distance decay of interaction. We propose to use the framework of an endemic-epidemic point process model to jointly estimate a background event rate explained by seasonal and areal characteristics, as well as a superposed epidemic component representing the hypothesis of interest. We illustrate this new model-based test for space-time interaction by analysing psychiatric inpatient admissions in Zurich, Switzerland (2007-2012). Several socio-economic factors were found to be associated with the admission rate, but there was no evidence of general clustering of the cases. Copyright © 2016 Elsevier Ltd. All rights reserved.
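
    For contrast with the model-based test, a classical space-time clustering test of the kind criticized above can be sketched in a few lines. This hypothetical Knox-type permutation test counts event pairs that are close in both space and time; the closeness thresholds and the synthetic events are assumptions.

        import numpy as np
        from scipy.spatial.distance import pdist

        rng = np.random.default_rng(42)
        # Hypothetical events: (x, y) locations in km and occurrence times in days.
        xy = rng.uniform(0, 10, size=(200, 2))
        t = rng.uniform(0, 365, size=200)

        def knox_stat(xy, t, ds=1.0, dt=7.0):
            """Number of event pairs close in both space (<ds) and time (<dt)."""
            return np.sum((pdist(xy) < ds) & (pdist(t[:, None]) < dt))

        obs = knox_stat(xy, t)
        # Permutation null: shuffle times to break any space-time linkage while
        # keeping the spatial and temporal marginals fixed.
        null = np.array([knox_stat(xy, rng.permutation(t)) for _ in range(999)])
        p = (1 + np.sum(null >= obs)) / (1 + null.size)
        print(f"Knox statistic = {obs}, permutation p-value = {p:.3f}")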

  3. Intra-individual gait patterns across different time-scales as revealed by means of a supervised learning model using kernel-based discriminant regression.

    Directory of Open Access Journals (Sweden)

    Fabian Horst

    Full Text Available Traditionally, gait analysis has been centered on the idea of average behavior and normality. On one hand, clinical diagnoses and therapeutic interventions typically assume that average gait patterns remain constant over time. On the other hand, it is well known that all our movements are accompanied by a certain amount of variability, which does not allow us to make two identical steps. The purpose of this study was to examine changes in the intra-individual gait patterns across different time-scales (i.e., tens-of-mins, tens-of-hours). Nine healthy subjects performed 15 gait trials at a self-selected speed on 6 sessions within one day (duration between two subsequent sessions from 10 to 90 mins). For each trial, time-continuous ground reaction forces and lower body joint angles were measured. A supervised learning model using a kernel-based discriminant regression was applied for classifying sessions within individual gait patterns. Discernable characteristics of intra-individual gait patterns could be distinguished between repeated sessions by classification rates of 67.8 ± 8.8% and 86.3 ± 7.9% for the six-session-classification of ground reaction forces and lower body joint angles, respectively. Furthermore, the one-on-one-classification showed that increasing classification rates go along with increasing time durations between two sessions and indicate that changes of gait patterns appear at different time-scales. Discernable characteristics between repeated sessions indicate continuous intrinsic changes in intra-individual gait patterns and suggest a predominant role of deterministic processes in human motor control and learning. Natural changes of gait patterns without any externally induced injury or intervention may reflect continuous adaptations of the motor system over several time-scales. Accordingly, the modelling of walking by means of average gait patterns that are assumed to be near constant over time needs to be reconsidered in the

  4. Intra-individual gait patterns across different time-scales as revealed by means of a supervised learning model using kernel-based discriminant regression.

    Science.gov (United States)

    Horst, Fabian; Eekhoff, Alexander; Newell, Karl M; Schöllhorn, Wolfgang I

    2017-01-01

    Traditionally, gait analysis has been centered on the idea of average behavior and normality. On one hand, clinical diagnoses and therapeutic interventions typically assume that average gait patterns remain constant over time. On the other hand, it is well known that all our movements are accompanied by a certain amount of variability, which does not allow us to make two identical steps. The purpose of this study was to examine changes in the intra-individual gait patterns across different time-scales (i.e., tens-of-mins, tens-of-hours). Nine healthy subjects performed 15 gait trials at a self-selected speed on 6 sessions within one day (duration between two subsequent sessions from 10 to 90 mins). For each trial, time-continuous ground reaction forces and lower body joint angles were measured. A supervised learning model using a kernel-based discriminant regression was applied for classifying sessions within individual gait patterns. Discernable characteristics of intra-individual gait patterns could be distinguished between repeated sessions by classification rates of 67.8 ± 8.8% and 86.3 ± 7.9% for the six-session-classification of ground reaction forces and lower body joint angles, respectively. Furthermore, the one-on-one-classification showed that increasing classification rates go along with increasing time durations between two sessions and indicate that changes of gait patterns appear at different time-scales. Discernable characteristics between repeated sessions indicate continuous intrinsic changes in intra-individual gait patterns and suggest a predominant role of deterministic processes in human motor control and learning. Natural changes of gait patterns without any externally induced injury or intervention may reflect continuous adaptations of the motor system over several time-scales. Accordingly, the modelling of walking by means of average gait patterns that are assumed to be near constant over time needs to be reconsidered in the context of
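
    A minimal stand-in for the session classification step, assuming kernel ridge regression onto one-hot session indicators in place of the paper's kernel-based discriminant regression (the data below are synthetic, not gait measurements):

        import numpy as np
        from sklearn.kernel_ridge import KernelRidge
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(0)
        # Hypothetical data: 6 sessions x 15 trials, each trial a flattened
        # time-continuous signal (e.g., ground reaction forces), 100 samples long.
        n_sessions, n_trials, n_feat = 6, 15, 100
        session = np.repeat(np.arange(n_sessions), n_trials)
        X = rng.standard_normal((n_sessions * n_trials, n_feat))
        X += session[:, None] * 0.3        # inject a small session-specific shift

        Y = np.eye(n_sessions)[session]    # one-hot session labels as regression targets
        model = KernelRidge(kernel="rbf", alpha=1.0, gamma=1e-3)
        # Cross-validated regression onto the label indicators; the predicted
        # session is the column with the largest regressed score.
        pred = cross_val_predict(model, X, Y, cv=5).argmax(axis=1)
        print("six-session classification rate:", np.mean(pred == session))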

  5. Physical models on discrete space and time

    International Nuclear Information System (INIS)

    Lorente, M.

    1986-01-01

    The idea of space and time quantum operators with a discrete spectrum has been proposed frequently since the discovery that some physical quantities exhibit measured values that are multiples of fundamental units. This paper first reviews a number of these physical models. They are: the method of finite elements proposed by Bender et al; the quantum field theory model on discrete space-time proposed by Yamamoto; the finite dimensional quantum mechanics approach proposed by Santhanam et al; the idea of space-time as lattices of n-simplices proposed by Kaplunovsky et al; and the theory of elementary processes proposed by Weizsaecker and his colleagues. The paper then presents a model proposed by the authors and based on the (n+1)-dimensional space-time lattice where fundamental entities interact among themselves 1 to 2n in order to build up a n-dimensional cubic lattice as a ground field where the physical interactions take place. The space-time coordinates are nothing more than the labelling of the ground field and take only discrete values. 11 references

  6. Estimating daily time series of streamflow using hydrological model calibrated based on satellite observations of river water surface width: Toward real world applications.

    Science.gov (United States)

    Sun, Wenchao; Ishidaira, Hiroshi; Bastola, Satish; Yu, Jingshan

    2015-05-01

    Lacking observation data for calibration constrains applications of hydrological models to estimate daily time series of streamflow. Recent improvements in remote sensing enable detection of river water-surface width from satellite observations, making possible the tracking of streamflow from space. In this study, a method calibrating hydrological models using river width derived from remote sensing is demonstrated through application to the ungauged Irrawaddy Basin in Myanmar. Generalized likelihood uncertainty estimation (GLUE) is selected as a tool for automatic calibration and uncertainty analysis. Of 50,000 randomly generated parameter sets, 997 are identified as behavioral, based on comparing model simulation with satellite observations. The uncertainty band of streamflow simulation can span most of 10-year average monthly observed streamflow for moderate and high flow conditions. Nash-Sutcliffe efficiency is 95.7% for the simulated streamflow at the 50% quantile. These results indicate that application to the target basin is generally successful. Beyond evaluating the method in a basin lacking streamflow data, difficulties and possible solutions for applications in the real world are addressed to promote future use of the proposed method in more ungauged basins. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
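
    The GLUE step described above can be sketched as follows. The rating-type width relation, the parameter ranges, and the behavioral threshold are illustrative assumptions, not the paper's model:

        import numpy as np

        rng = np.random.default_rng(7)

        def simulate_width(a, b):
            """Hypothetical stand-in for the hydrological model plus width relation."""
            q = a * (1 + np.sin(np.linspace(0, 4 * np.pi, 120)))   # streamflow proxy
            return b * np.sqrt(q)                                  # width ~ b*sqrt(Q)

        # Synthetic "satellite-observed" river widths from known parameters plus noise.
        obs_width = simulate_width(100.0, 5.0) + rng.normal(0, 2, 120)

        # GLUE: sample many parameter sets, keep the "behavioral" ones.
        n = 50_000
        a, b = rng.uniform(10, 200, n), rng.uniform(1, 20, n)
        denom = np.sum((obs_width - obs_width.mean()) ** 2)
        nse = np.array([1 - np.sum((simulate_width(ai, bi) - obs_width) ** 2) / denom
                        for ai, bi in zip(a, b)])

        behavioral = nse > 0.5              # informal likelihood cutoff
        sims = np.array([simulate_width(ai, bi)
                         for ai, bi in zip(a[behavioral], b[behavioral])])
        lo, hi = np.quantile(sims, [0.05, 0.95], axis=0)   # uncertainty band
        print(f"{behavioral.sum()} behavioral sets; mean band width {np.mean(hi - lo):.1f} m")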

  7. Real-Time Flood Control by Tree-Based Model Predictive Control Including Forecast Uncertainty: A Case Study Reservoir in Turkey

    Directory of Open Access Journals (Sweden)

    Gökçen Uysal

    2018-03-01

    Full Text Available Optimal control of reservoirs is a challenging task due to conflicting objectives, complex system structure, and uncertainties in the system. Real-time control decisions suffer from streamflow forecast uncertainty. This study aims to use Probabilistic Streamflow Forecasts (PSFs) with lead times up to 48 h as input for the recurrent reservoir operation problem. A related technique for decision making is multi-stage stochastic optimization using scenario trees, referred to as Tree-based Model Predictive Control (TB-MPC). Deterministic Streamflow Forecasts (DSFs) are provided by applying random perturbations on perfect data. PSFs are synthetically generated from DSFs by a new approach which explicitly represents dynamic uncertainty evolution. We assessed different variables in the generation of stochasticity and compared the results using different scenarios. The developed real-time hourly flood control was applied to a test case with limited reservoir storage and restricted downstream conditions. According to hindcasting closed-loop experiment results, TB-MPC outperforms its deterministic counterpart in terms of decreased downstream flood risk under different independent forecast scenarios. TB-MPC was also tested with different numbers of tree branches, forecast horizons, and inflow conditions. We conclude that using synthetic PSFs in TB-MPC can provide more robust solutions against forecast uncertainty by resolving uncertainty in trees.

  8. T-S Fuzzy Model-Based Approximation and Filter Design for Stochastic Time-Delay Systems with Hankel Norm Criterion

    Directory of Open Access Journals (Sweden)

    Yanhui Li

    2014-01-01

    Full Text Available This paper investigates the Hankel norm filter design problem for stochastic time-delay systems represented by a Takagi-Sugeno (T-S) fuzzy model. Motivated by the parallel distributed compensation (PDC) technique, a novel filtering error system is established. The objective is to design a suitable filter that guarantees the corresponding filtering error system to be mean-square asymptotically stable and to have a specified Hankel norm performance level γ. Based on Lyapunov stability theory and the Itô differential rule, the Hankel norm criterion is first established by adopting the integral inequality method, which helps reduce conservativeness. The Hankel norm filtering problem is cast into a convex optimization problem with a convex linearization approach, which expresses all conditions for the existence of an admissible Hankel norm filter as standard linear matrix inequalities (LMIs). The effectiveness of the proposed method is demonstrated via a numerical example.
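
    The final step above, checking feasibility of the LMIs, can be illustrated with a generic solver. The sketch below is not the paper's Hankel-norm filter LMIs; it solves the textbook Lyapunov LMI (find P = P' > 0 with A'P + PA < 0) with cvxpy, to show how such existence conditions are posed and solved:

        import cvxpy as cp
        import numpy as np

        # Minimal LMI feasibility problem of the kind such conditions reduce to:
        # find P symmetric positive definite with A'P + PA negative definite,
        # certifying stability of dx/dt = A x.
        A = np.array([[0.0, 1.0],
                      [-2.0, -3.0]])
        n = A.shape[0]
        P = cp.Variable((n, n), symmetric=True)
        eps = 1e-6
        constraints = [P >> eps * np.eye(n),
                       A.T @ P + P @ A << -eps * np.eye(n)]
        prob = cp.Problem(cp.Minimize(0), constraints)
        prob.solve()
        print("LMI feasible:", prob.status == cp.OPTIMAL)
        print("P =", P.value)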

  9. Activity-based DEVS modeling

    DEFF Research Database (Denmark)

    Alshareef, Abdurrahman; Sarjoughian, Hessam S.; Zarrin, Bahram

    2018-01-01

    Use of model-driven approaches has been increasing to significantly benefit the process of building complex systems. Recently, an approach for specifying model behavior using UML activities has been devised to support the creation of DEVS models in a disciplined manner based on the model-driven architecture and the UML concepts. In this paper, we further this work by grounding Activity-based DEVS modeling and developing a fully-fledged modeling engine to demonstrate applicability. We also detail the relevant aspects of the created metamodel in terms of modeling and simulation. A significant number of the artifacts of the UML 2.5 activities and actions, from the vantage point of DEVS behavioral modeling, is covered in detail. Their semantics are discussed to the extent of time-accurate requirements for simulation. We characterize them in correspondence with the specification of the atomic model behavior...

  10. Characterization of Models for Time-Dependent Behavior of Soils

    DEFF Research Database (Denmark)

    Liingaard, Morten; Augustesen, Anders; Lade, Poul V.

    2004-01-01

    Different classes of constitutive models have been developed to capture the time-dependent viscous phenomena (creep, stress relaxation, and rate effects) observed in soils. Models based on empirical, rheological, and general stress-strain-time concepts have been studied. The first part… Special attention is paid to elastoviscoplastic models that combine inviscid elastic and time-dependent plastic behavior. Various general elastoviscoplastic models can roughly be divided into two categories: models based on the concept of overstress and models based on nonstationary flow surface theory…

  11. Base Station Performance Model

    OpenAIRE

    Walsh, Barbara; Farrell, Ronan

    2005-01-01

    At present the testing of power amplifiers within base station transmitters is limited to testing at component level as opposed to testing at the system level. While the detection of catastrophic failure is possible, that of performance degradation is not. This paper proposes a base station model with respect to transmitter output power with the aim of introducing system level monitoring of the power amplifier behaviour within the base station. Our model reflects the expe...

  12. Urban Adolescents’ Physical Activity Experience, Physical Activity Levels, and Use of Screen-Based Media during Leisure Time: A Structural Model

    Directory of Open Access Journals (Sweden)

    Hui Xie

    2018-01-01

    Full Text Available There is limited understanding of the relationship between physical activity and use of screen-based media, two important behaviors associated with adolescents’ health outcomes. To understand this relationship, researchers may need to consider not only physical activity level but also physical activity experience (i.e., affective experience obtained from doing physical activity. Using a sample predominantly consisting of African and Latino American urban adolescents, this study examined the interrelationships between physical activity experience, physical activity level, and use of screen-based media during leisure time. Data collected using self-report, paper and pencil surveys was analyzed using structural equation modeling. Results showed that physical activity experience was positively associated with physical activity level and had a direct negative relationship with use of non-active video games for males and a direct negative relationship with use of computer/Internet for both genders, after controlling for physical activity level. Physical activity level did not have a direct relationship with use of non-active video games or computer/Internet. However, physical activity level had a direct negative association with use of TV/movies. This study suggests that physical activity experience may play an important role in promoting physical activity and thwarting use of screen-based media among adolescents.

  13. Urban Adolescents’ Physical Activity Experience, Physical Activity Levels, and Use of Screen-Based Media during Leisure Time: A Structural Model

    Science.gov (United States)

    Xie, Hui; Scott, Jason L.; Caldwell, Linda L.

    2018-01-01

    There is limited understanding of the relationship between physical activity and use of screen-based media, two important behaviors associated with adolescents’ health outcomes. To understand this relationship, researchers may need to consider not only physical activity level but also physical activity experience (i.e., affective experience obtained from doing physical activity). Using a sample predominantly consisting of African and Latino American urban adolescents, this study examined the interrelationships between physical activity experience, physical activity level, and use of screen-based media during leisure time. Data collected using self-report, paper and pencil surveys was analyzed using structural equation modeling. Results showed that physical activity experience was positively associated with physical activity level and had a direct negative relationship with use of non-active video games for males and a direct negative relationship with use of computer/Internet for both genders, after controlling for physical activity level. Physical activity level did not have a direct relationship with use of non-active video games or computer/Internet. However, physical activity level had a direct negative association with use of TV/movies. This study suggests that physical activity experience may play an important role in promoting physical activity and thwarting use of screen-based media among adolescents. PMID:29410634

  14. Quadratic Term Structure Models in Discrete Time

    OpenAIRE

    Marco Realdon

    2006-01-01

    This paper extends the results on quadratic term structure models in continuous time to the discrete time setting. The continuous time setting can be seen as a special case of the discrete time one. Recursive closed form solutions for zero coupon bonds are provided even in the presence of multiple correlated underlying factors. Pricing bond options requires simple integration. Model parameters may well be time dependent without scuppering such tractability. Model estimation does not require a r...
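
    The closed-form recursions themselves are in the paper; as an independent cross-check, a zero-coupon bond under a discrete-time quadratic short rate can be priced by Monte Carlo. All parameter values below are illustrative assumptions:

        import numpy as np

        rng = np.random.default_rng(3)
        # Discrete-time quadratic short rate r_t = x_t' Psi x_t + psi' x_t + omega,
        # with a Gaussian AR(1) factor x_{t+1} = Phi x_t + eps.
        Phi = np.array([[0.95, 0.0], [0.1, 0.9]])
        Psi = np.array([[0.02, 0.0], [0.0, 0.01]])
        psi = np.array([0.01, 0.005])
        omega = 0.01

        def bond_price_mc(x0, n_periods, n_paths=20_000):
            """Monte Carlo price of a zero-coupon bond: E[exp(-sum_t r_t)]."""
            x = np.tile(x0, (n_paths, 1))
            acc = np.zeros(n_paths)
            for _ in range(n_periods):
                r = np.einsum('ij,jk,ik->i', x, Psi, x) + x @ psi + omega
                acc += r
                x = x @ Phi.T + 0.05 * rng.standard_normal(x.shape)
            return np.exp(-acc).mean()

        print("5-period zero-coupon bond price:", bond_price_mc(np.zeros(2), 5))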

  15. Model Checking Real-Time Systems

    DEFF Research Database (Denmark)

    Bouyer, Patricia; Fahrenberg, Uli; Larsen, Kim Guldstrand

    2018-01-01

    This chapter surveys timed automata as a formalism for model checking real-time systems. We begin with introducing the model, as an extension of finite-state automata with real-valued variables for measuring time. We then present the main model-checking results in this framework, and give a hint...

  16. Time series modeling for syndromic surveillance

    Directory of Open Access Journals (Sweden)

    Mandl Kenneth D

    2003-01-01

    Full Text Available Abstract Background Emergency department (ED)-based syndromic surveillance systems identify abnormally high visit rates that may be an early signal of a bioterrorist attack. For example, an anthrax outbreak might first be detectable as an unusual increase in the number of patients reporting to the ED with respiratory symptoms. Reliably identifying these abnormal visit patterns requires a good understanding of the normal patterns of healthcare usage. Unfortunately, systematic methods for determining the expected number of ED visits on a particular day have not yet been well established. We present here a generalized methodology for developing models of expected ED visit rates. Methods Using time-series methods, we developed robust models of ED utilization for the purpose of defining expected visit rates. The models were based on nearly a decade of historical data at a major metropolitan academic, tertiary care pediatric emergency department. The historical data were fit using trimmed-mean seasonal models, and additional models were fit with autoregressive integrated moving average (ARIMA) residuals to account for recent trends in the data. The detection capabilities of the model were tested with simulated outbreaks. Results Models were built both for overall visits and for respiratory-related visits, classified according to the chief complaint recorded at the beginning of each visit. The mean absolute percentage error of the ARIMA models was 9.37% for overall visits and 27.54% for respiratory visits. A simple detection system based on the ARIMA model of overall visits was able to detect 7-day-long simulated outbreaks of 30 visits per day with 100% sensitivity and 97% specificity. Sensitivity decreased with outbreak size, dropping to 94% for outbreaks of 20 visits per day, and 57% for 10 visits per day, all while maintaining a 97% benchmark specificity. Conclusions Time series methods applied to historical ED utilization data are an important tool
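
    A miniature version of such an ARIMA-based detector can be sketched as follows; the synthetic visit counts, model order, and two-sigma alarm threshold are assumptions rather than the paper's fitted specification:

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(5)
        # Hypothetical daily ED visit counts with weekly seasonality.
        days = pd.date_range("2020-01-01", periods=730, freq="D")
        base = 100 + 15 * np.sin(2 * np.pi * days.dayofweek / 7)
        visits = pd.Series(rng.poisson(base), index=days, dtype=float)

        train, test = visits[:-30], visits[-30:].copy()
        test.iloc[10:17] += 30      # inject a 7-day, 30-visit/day simulated outbreak

        fit = ARIMA(train, order=(1, 0, 1), seasonal_order=(1, 0, 1, 7)).fit()
        fc = fit.get_forecast(steps=30)
        upper = fc.predicted_mean + 2 * fc.se_mean   # upper alarm threshold
        alarms = test[test > upper]
        print("alarm days:", list(alarms.index.date))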

  17. TH-E-BRF-05: Comparison of Survival-Time Prediction Models After Radiotherapy for High-Grade Glioma Patients Based On Clinical and DVH Features

    International Nuclear Information System (INIS)

    Magome, T; Haga, A; Igaki, H; Sekiya, N; Masutani, Y; Sakumi, A; Mukasa, A; Nakagawa, K

    2014-01-01

    Purpose: Although many outcome prediction models based on dose-volume information have been proposed, it is well known that the prognosis may also be affected by multiple clinical factors. The purpose of this study is to predict the survival time after radiotherapy for high-grade glioma patients based on features including clinical and dose-volume histogram (DVH) information. Methods: A total of 35 patients with high-grade glioma (oligodendroglioma: 2, anaplastic astrocytoma: 3, glioblastoma: 30) were selected for this study. All patients were treated with a prescribed dose of 30–80 Gy after surgical resection or biopsy from 2006 to 2013 at The University of Tokyo Hospital. All cases were randomly separated into a training dataset (30 cases) and a test dataset (5 cases). The survival time after radiotherapy was predicted based on multiple linear regression analysis and an artificial neural network (ANN) using 204 candidate features. The candidate features included 12 clinical features (tumor location, extent of surgical resection, treatment duration of radiotherapy, etc.) and 192 DVH features (maximum dose, minimum dose, D95, V60, etc.). The effective features for the prediction were selected by a step-wise method using the 30 training cases. The prediction accuracy was evaluated by the coefficient of determination (R²) between the predicted and actual survival time for the training and test datasets. Results: In the multiple regression analysis, the value of R² between the predicted and actual survival time was 0.460 for the training dataset and 0.375 for the test dataset. In the ANN analysis, the value of R² was 0.806 for the training dataset and 0.811 for the test dataset. Conclusion: Although a larger number of patients would be needed for more accurate and robust prediction, our preliminary results showed the potential to predict the outcome in patients with high-grade glioma. This work was partly supported by the JSPS Core
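
    A minimal sketch of the feature-selection-plus-ANN pipeline, with univariate selection standing in for the paper's step-wise method and synthetic data standing in for the clinical/DVH features:

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.feature_selection import SelectKBest, f_regression
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.metrics import r2_score

        rng = np.random.default_rng(1)
        # Hypothetical stand-in data: 35 patients x 204 clinical/DVH features.
        X = rng.standard_normal((35, 204))
        y = 24 + 6 * X[:, 0] - 4 * X[:, 10] + rng.normal(0, 3, 35)  # survival (months)

        X_tr, X_te, y_tr, y_te = X[:30], X[30:], y[:30], y[30:]
        model = make_pipeline(StandardScaler(),
                              SelectKBest(f_regression, k=10),
                              MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                           random_state=0))
        model.fit(X_tr, y_tr)
        print("R2 train:", r2_score(y_tr, model.predict(X_tr)),
              "R2 test:", r2_score(y_te, model.predict(X_te)))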

  18. Time lags in biological models

    CERN Document Server

    MacDonald, Norman

    1978-01-01

    In many biological models it is necessary to allow the rates of change of the variables to depend on the past history, rather than only the current values, of the variables. The models may require discrete lags, with the use of delay-differential equations, or distributed lags, with the use of integro-differential equations. In these lecture notes I discuss the reasons for including lags, especially distributed lags, in biological models. These reasons may be inherent in the system studied, or may be the result of simplifying assumptions made in the model used. I examine some of the techniques available for studying the solution of the equations. A large proportion of the material presented relates to a special method that can be applied to a particular class of distributed lags. This method uses an extended set of ordinary differential equations. I examine the local stability of equilibrium points, and the existence and frequency of periodic solutions. I discuss the qualitative effects of lags, and how these...
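
    The "special method" for a particular class of distributed lags, using an extended set of ordinary differential equations, is commonly known as the linear chain trick. A minimal sketch for logistic growth with a gamma-distributed delayed density dependence, with all parameter values assumed:

        import numpy as np
        from scipy.integrate import solve_ivp

        # Logistic growth with a gamma-distributed delayed density dependence:
        #   x'(t) = r x(t) (1 - y_n(t)/K),  y_n = x convolved with a gamma kernel.
        # A gamma kernel with integer shape n is replaced by a chain of n ODEs.
        r, K, a, n = 1.0, 10.0, 2.0, 3   # a = chain rate, n = chain length (gamma shape)

        def rhs(t, z):
            x, y = z[0], z[1:]
            dx = r * x * (1 - y[-1] / K)
            # y1' = a(x - y1), yk' = a(y_{k-1} - yk) for k = 2..n
            dy = a * (np.concatenate(([x], y[:-1])) - y)
            return np.concatenate(([dx], dy))

        sol = solve_ivp(rhs, (0, 40), [0.1] + [0.1] * n, dense_output=True)
        print("final population:", sol.y[0, -1])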

  19. Real-time simulation of response to load variation for a ship reactor based on point-reactor double regions and lumped parameter model

    Energy Technology Data Exchange (ETDEWEB)

    Wang Qiao; Zhang De [Department of Nuclear Energy Science and Engineering, Naval University of Engineering, Wuhan 430033 (China); Chen Wenzhen, E-mail: Cwz2@21cn.com [Department of Nuclear Energy Science and Engineering, Naval University of Engineering, Wuhan 430033 (China); Chen Zhiyun [Department of Nuclear Energy Science and Engineering, Naval University of Engineering, Wuhan 430033 (China)

    2011-05-15

    Research highlights: > We calculate the variation of the main parameters of the reactor core using Simulink. > The Simulink calculation software (SCS) deals well with the stiff problem. > High calculation precision is reached in less time, and the results can be easily displayed. > Quick calculation of ship reactor transients can be achieved by this method. - Abstract: Based on the point-reactor double-region and lumped parameter model, when the nuclear power plant secondary-loop load is increased or decreased quickly, the Simulink calculation software (SCS) is adopted to calculate the variation of the main physical and thermal-hydraulic parameters of the reactor core. The calculation results are compared with those of a three-dimensional simulation program. It is indicated that the SCS deals well with the stiff problem of the point-reactor kinetics equations and the coupled problem of neutronics and thermal-hydraulics. High calculation precision can be reached in less time, and quick calculation of the parameters of response to load disturbance for the ship reactor can be achieved. A clear image of the calculation results can also be displayed quickly by the SCS, which is very significant and important for guaranteeing safe reactor operation.
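
    The stiffness the abstract refers to comes from the point-reactor kinetics equations themselves. A minimal sketch, standard six-group point kinetics with illustrative U-235 data rather than the paper's double-region Simulink model, solved with a stiff BDF integrator:

        import numpy as np
        from scipy.integrate import solve_ivp

        # Point-reactor kinetics with six delayed-neutron groups (illustrative data).
        beta_i = np.array([2.1e-4, 1.4e-3, 1.3e-3, 2.6e-3, 7.5e-4, 2.7e-4])
        lam = np.array([0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01])  # decay consts (1/s)
        beta, Lambda = beta_i.sum(), 1e-4      # total delayed fraction, generation time (s)
        rho = 0.5 * beta                       # step reactivity insertion

        def rhs(t, y):
            n, c = y[0], y[1:]
            dn = (rho - beta) / Lambda * n + np.dot(lam, c)
            dc = beta_i / Lambda * n - lam * c
            return np.concatenate(([dn], dc))

        # Start from equilibrium precursor concentrations at unit power.
        y0 = np.concatenate(([1.0], beta_i / (lam * Lambda)))
        sol = solve_ivp(rhs, (0, 10), y0, method="BDF", rtol=1e-8, atol=1e-10)
        print("relative power after 10 s:", sol.y[0, -1])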

  20. Modeling nonstationarity in space and time.

    Science.gov (United States)

    Shand, Lyndsay; Li, Bo

    2017-09-01

    We propose to model a spatio-temporal random field that has nonstationary covariance structure in both space and time domains by applying the concept of the dimension expansion method in Bornn et al. (2012). Simulations are conducted for both separable and nonseparable space-time covariance models, and the model is also illustrated with a streamflow dataset. Both simulation and data analyses show that modeling nonstationarity in both space and time can improve the predictive performance over stationary covariance models or models that are nonstationary in space but stationary in time. © 2017, The International Biometric Society.

  1. Evaluating Site-Specific and Generic Spatial Models of Aboveground Forest Biomass Based on Landsat Time-Series and LiDAR Strip Samples in the Eastern USA

    Science.gov (United States)

    Ram Deo; Matthew Russell; Grant Domke; Hans-Erik Andersen; Warren Cohen; Christopher Woodall

    2017-01-01

    Large-area assessment of aboveground tree biomass (AGB) to inform regional or national forest monitoring programs can be efficiently carried out by combining remotely sensed data and field sample measurements through a generic statistical model, in contrast to site-specific models. We integrated forest inventory plot data with spatial predictors from Landsat time-...

  2. Taking a closer look: disentangling effects of functional diversity on ecosystem functions with a trait-based model across hierarchy and time

    Science.gov (United States)

    Holzwarth, Frédéric; Rüger, Nadja; Wirth, Christian

    2015-01-01

    Biodiversity and ecosystem functioning (BEF) research has progressed from the detection of relationships to elucidating their drivers and underlying mechanisms. In this context, replacing taxonomic predictors by trait-based measures of functional composition (FC)—bridging functions of species and of ecosystems—is a widely used approach. The inherent challenge of trait-based approaches is the multi-faceted, dynamic and hierarchical nature of trait influence: (i) traits may act via different facets of their distribution in a community, (ii) their influence may change over time and (iii) traits may influence processes at different levels of the natural hierarchy of organization. Here, we made use of the forest ecosystem model ‘LPJ-GUESS’ parametrized with empirical trait data, which creates output of individual performance, community assembly, stand-level states and processes. To address the three challenges, we resolved the dynamics of the top-level ecosystem function ‘annual biomass change’ hierarchically into its various component processes (growth, leaf and root turnover, recruitment and mortality) and states (stand structures, water stress) and traced the influence of different facets of FC along this hierarchy in a path analysis. We found an independent influence of functional richness, dissimilarity and identity on ecosystem states and processes and hence biomass change. Biodiversity effects were only positive during early succession and later turned negative. Unexpectedly, resource acquisition (growth, recruitment) and conservation (mortality, turnover) played an equally important role throughout the succession. These results add to a mechanistic understanding of biodiversity effects and place a caveat on simplistic approaches omitting hierarchical levels when analysing BEF relationships. They support the view that BEF relationships experience dramatic shifts over successional time that should be acknowledged in mechanistic theories. PMID:26064620

  3. Taking a closer look: disentangling effects of functional diversity on ecosystem functions with a trait-based model across hierarchy and time.

    Science.gov (United States)

    Holzwarth, Frédéric; Rüger, Nadja; Wirth, Christian

    2015-03-01

    Biodiversity and ecosystem functioning (BEF) research has progressed from the detection of relationships to elucidating their drivers and underlying mechanisms. In this context, replacing taxonomic predictors by trait-based measures of functional composition (FC)-bridging functions of species and of ecosystems-is a widely used approach. The inherent challenge of trait-based approaches is the multi-faceted, dynamic and hierarchical nature of trait influence: (i) traits may act via different facets of their distribution in a community, (ii) their influence may change over time and (iii) traits may influence processes at different levels of the natural hierarchy of organization. Here, we made use of the forest ecosystem model 'LPJ-GUESS' parametrized with empirical trait data, which creates output of individual performance, community assembly, stand-level states and processes. To address the three challenges, we resolved the dynamics of the top-level ecosystem function 'annual biomass change' hierarchically into its various component processes (growth, leaf and root turnover, recruitment and mortality) and states (stand structures, water stress) and traced the influence of different facets of FC along this hierarchy in a path analysis. We found an independent influence of functional richness, dissimilarity and identity on ecosystem states and processes and hence biomass change. Biodiversity effects were only positive during early succession and later turned negative. Unexpectedly, resource acquisition (growth, recruitment) and conservation (mortality, turnover) played an equally important role throughout the succession. These results add to a mechanistic understanding of biodiversity effects and place a caveat on simplistic approaches omitting hierarchical levels when analysing BEF relationships. They support the view that BEF relationships experience dramatic shifts over successional time that should be acknowledged in mechanistic theories.

  4. A likelihood-based time series modeling approach for application in dendrochronology to examine the growth-climate relations and forest disturbance history

    Science.gov (United States)

    A time series intervention analysis (TSIA) of dendrochronological data to infer the tree growth-climate-disturbance relations and forest disturbance history is described. Maximum likelihood is used to estimate the parameters of a structural time series model with components for ...

  5. Web Based VRML Modelling

    NARCIS (Netherlands)

    Kiss, S.; Sarfraz, M.

    2004-01-01

    Presents a method to connect VRML (Virtual Reality Modeling Language) and Java components in a Web page using EAI (External Authoring Interface), which makes it possible to interactively generate and edit VRML meshes. The meshes used are based on regular grids, to provide an interaction and modeling

  6. RTMOD: Real-Time MODel evaluation

    International Nuclear Information System (INIS)

    Graziani, G; Galmarini, S.; Mikkelsen, T.

    2000-01-01

    The 1998–1999 RTMOD project was a system based on an automated statistical evaluation for the inter-comparison of real-time forecasts produced by long-range atmospheric dispersion models for national nuclear emergency predictions of cross-boundary consequences. The background of RTMOD was the 1994 ETEX project, which involved about 50 models run in several institutes around the world to simulate two real tracer releases covering a large part of the European territory. In the preliminary phase of ETEX, three dry runs (i.e., real-time simulations of fictitious releases) were carried out. At that time, the World Wide Web was not available to all the exercise participants, and plume predictions were therefore submitted to JRC-Ispra by fax and regular mail for subsequent processing. The rapid development of the World Wide Web in the second half of the nineties, together with the experience gained during the ETEX exercises, suggested the development of this project. RTMOD featured a web-based user-friendly interface for data submission and an interactive program module for displaying, intercomparison and analysis of the forecasts. RTMOD focussed on model intercomparison of concentration predictions at the nodes of a regular grid with 0.5 degrees of resolution both in latitude and in longitude, the domain grid extending from 5W to 40E and 40N to 65N. Hypothetical releases were notified around the world to the 28 model forecasters via the web with one day's advance warning. They then accessed the RTMOD web page for detailed information on the actual release, uploaded their predictions to the RTMOD server as soon as possible, and could soon after start their inter-comparison analysis with other modelers. When additional forecast data arrived, already existing statistical results were recalculated to include the influence of all available predictions. The new web-based RTMOD concept has proven useful as a practical decision-making tool for realtime

  7. Travel Time Reliability for Urban Networks : Modelling and Empirics

    NARCIS (Netherlands)

    Zheng, F.; Liu, Xiaobo; van Zuylen, H.J.; Li, Jie; Lu, Chao

    2017-01-01

    The importance of travel time reliability in traffic management, control, and network design has received a lot of attention in the past decade. In this paper, a network travel time distribution model based on the Johnson curve system is proposed. The model is applied to field travel time data

  8. Evidence-based guidelines, time-based health outcomes, and the Matthew effect

    NARCIS (Netherlands)

    M.L.E. Essink-Bot (Marie-Louise); M.E. Kruijshaar (Michelle); J.J.M. Barendregt (Jan); L.G.A. Bonneux (Luc)

    2007-01-01

    textabstractBackground: Cardiovascular risk management guidelines are 'risk based'; health economists' practice is 'time based'. The 'medical' risk-based allocation model maximises numbers of deaths prevented by targeting subjects at high risk, for example, elderly and smokers. The time-based model

  9. Evidence-based guidelines, time-based health outcomes, and the Matthew effect

    NARCIS (Netherlands)

    Essink-Bot, Marie-Louise; Kruijshaar, Michelle E.; Barendregt, Jan J.; Bonneux, Luc G. A.

    2007-01-01

    BACKGROUND: Cardiovascular risk management guidelines are 'risk based'; health economists' practice is 'time based'. The 'medical' risk-based allocation model maximises numbers of deaths prevented by targeting subjects at high risk, for example, elderly and smokers. The time-based model maximises

  10. Lag space estimation in time series modelling

    DEFF Research Database (Denmark)

    Goutte, Cyril

    1997-01-01

    The purpose of this article is to investigate some techniques for finding the relevant lag-space, i.e. input information, for time series modelling. This is an important aspect of time series modelling, as it conditions the design of the model through the regressor vector a.k.a. the input layer...

  11. Computer Aided Continuous Time Stochastic Process Modelling

    DEFF Research Database (Denmark)

    Kristensen, N.R.; Madsen, Henrik; Jørgensen, Sten Bay

    2001-01-01

    A grey-box approach to process modelling that combines deterministic and stochastic modelling is advocated for identification of models for model-based control of batch and semi-batch processes. A computer-aided tool designed for supporting decision-making within the corresponding modelling cycle...

  12. Underwater Time Service and Synchronization Based on Time Reversal Technique

    Science.gov (United States)

    Lu, Hao; Wang, Hai-bin; Aissa-El-Bey, Abdeldjalil; Pyndiah, Ramesh

    2010-09-01

    Real-time service and synchronization are very important to many underwater systems. But existing time service and synchronization methods do not work well due to the multi-path propagation and random phase fluctuation of signals in the ocean channel. The time reversal mirror (TRM) technique can realize energy concentration through self-matching of the ocean channel and has very good spatial and temporal focusing properties. Based on the TRM technique, we present the Time Reversal Mirror Real Time service and synchronization (TRMRT) method, which can bypass the processing of multi-path on the server side and reduce multi-path contamination on the client side. TRMRT can therefore improve the accuracy of time service. Furthermore, as an efficient and precise method of time service, TRMRT could be widely used in underwater exploration activities and underwater navigation and positioning systems.

  13. Cluster Based Text Classification Model

    DEFF Research Database (Denmark)

    Nizamani, Sarwat; Memon, Nasrullah; Wiil, Uffe Kock

    2011-01-01

    We propose a cluster based classification model for suspicious email detection and other text classification tasks. The text classification tasks comprise many training examples that require a complex classification model. Using clusters for classification makes the model simpler and increases the accuracy at the same time. The test example is classified using a simpler and smaller model. The training examples in a particular cluster share a common vocabulary. At the time of clustering, we do not take into account the labels of the training examples. After the clusters have been created, the classifier is trained on each cluster with reduced dimensionality and a smaller number of examples. The experimental results show that the proposed model outperforms the existing classification models for the task of suspicious email detection and topic categorization on the Reuters-21578 and 20 Newsgroups...
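
    A minimal sketch of the cluster-then-classify idea on one of the cited corpora; the cluster count, feature set, and base classifier are assumptions, not the paper's configuration:

        import numpy as np
        from sklearn.datasets import fetch_20newsgroups
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.cluster import KMeans
        from sklearn.linear_model import LogisticRegression

        cats = ["sci.space", "rec.autos"]
        train = fetch_20newsgroups(subset="train", categories=cats)
        test = fetch_20newsgroups(subset="test", categories=cats)
        vec = TfidfVectorizer(max_features=5000)
        X, Xt = vec.fit_transform(train.data), vec.transform(test.data)

        # Step 1: cluster without looking at the labels.
        km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

        # Step 2: one small classifier per cluster (fall back to the single
        # label if a cluster happens to contain only one class).
        models = {}
        for c in range(4):
            idx = km.labels_ == c
            yc = train.target[idx]
            if np.unique(yc).size == 1:
                models[c] = int(yc[0])
            else:
                models[c] = LogisticRegression(max_iter=1000).fit(X[idx], yc)

        # Step 3: route each test example to its cluster's classifier.
        pred = np.array([models[c] if isinstance(models[c], int)
                         else models[c].predict(Xt[i])[0]
                         for i, c in enumerate(km.predict(Xt))])
        print("accuracy:", (pred == test.target).mean())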

  14. Formal Modeling and Analysis of Timed Systems

    DEFF Research Database (Denmark)

    Larsen, Kim Guldstrand; Niebert, Peter

    This book constitutes the thoroughly refereed post-proceedings of the First International Workshop on Formal Modeling and Analysis of Timed Systems, FORMATS 2003, held in Marseille, France in September 2003. The 19 revised full papers presented together with an invited paper and the abstracts of two invited talks were carefully selected from 36 submissions during two rounds of reviewing and improvement. All current aspects of formal methods for modeling and analyzing timed systems are addressed; among the timed systems dealt with are timed automata, timed Petri nets, max-plus algebras, real-time systems, discrete time systems, timed languages, and real-time operating systems.

  15. Long Term Subsidence Analysis and Soil Fracturing Zonation Based on InSAR Time Series Modelling in Northern Zona Metropolitana del Valle de Mexico

    Directory of Open Access Journals (Sweden)

    Gabriela Llanet Siles

    2015-05-01

    Full Text Available In this study deformation processes in northern Zona Metropolitana del Valle de Mexico (ZMVM) are evaluated by means of advanced multi-temporal interferometry. ERS and ENVISAT time series, covering approximately an 11-year period (between 1999 and 2010), were produced showing mainly linear subsidence behaviour for almost the entire area under study, but increasing rates that reach up to 285 mm/yr. Important non-linear deformation was identified in certain areas, presumably suggesting interaction between subsidence and other processes. Thus, a methodology for identification of probable fracturing zones based on discrimination and modelling of the non-linear (quadratic function) component is presented. This component was mapped and temporal subsidence evolution profiles were constructed across areas where notable acceleration (maximum of 8 mm/yr²) or deceleration (maximum of −9 mm/yr²) is found. This methodology enables location of potential soil fractures that could impact relevant infrastructure such as the Tunel Emisor Oriente (TEO), along which rates exceed 200 mm/yr. Additionally, subsidence behaviour during wet and dry seasons is tackled in partially urbanized areas. This paper provides useful information for geological risk assessment in the area.

  16. The Scrap Collection per Industry Sector and the Circulation Times of Steel in the U.S. between 1900 and 2016, Calculated Based on the Volume Correlation Model

    Directory of Open Access Journals (Sweden)

    Alicia Gauffin

    2018-05-01

    Full Text Available On the basis of the Volume Correlation Model (VCM) as well as data on steel consumption and scrap collection per industry sector (construction, automotive, industrial goods, and consumer goods), it was possible to estimate service lifetimes of steel in the United States between 1900 and 2016. Input data on scrap collection per industry sector was based on a scrap survey conducted by the World Steel Association for a static year in 2014 in the United States. The lifetimes of steel calculated with the VCM method were within the range of previously reported measured lifetimes of products and applications for all industry sectors. Scrapped (and apparent) lifetimes of steel compared with measured lifetimes were calculated to be as follows: a scrapped lifetime of 29 years for the construction sector (apparent lifetime: 52 years) compared with 44 years measured in 2014; industrial goods: 16 (27) years compared with 19 years measured in 2010; consumer goods: 12 (14) years compared with 13 years measured in 2014; automotive sector: 14 (19) years compared with 17 years measured in 2011. Results show that the VCM can estimate reasonable values of scrap collection and availability per industry sector over time.

  17. Modelling road accidents: An approach using structural time series

    Science.gov (United States)

    Junus, Noor Wahida Md; Ismail, Mohd Tahir

    2014-09-01

    In this paper, the trend of road accidents in Malaysia for the years 2001 until 2012 was modelled using a structural time series approach. The structural time series model was identified using a stepwise method, and the residuals for each model were tested. The best-fitted model was chosen based on the smallest Akaike Information Criterion (AIC) and prediction error variance. In order to check the quality of the model, a data validation procedure was performed by predicting the monthly number of road accidents for the year 2012. Results indicate that the best specification of the structural time series model to represent road accidents is the local level with a seasonal model.
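
    The selected specification, a local level plus a seasonal component, can be reproduced with a standard structural time series implementation. The sketch below uses synthetic monthly counts in place of the Malaysian accident data:

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        # Hypothetical monthly road-accident counts, 2001-2012, with seasonality
        # and a slowly drifting level.
        idx = pd.date_range("2001-01", periods=144, freq="MS")
        y = pd.Series(3500 + 200 * np.sin(2 * np.pi * idx.month / 12)
                      + np.cumsum(rng.normal(0, 20, 144)), index=idx)

        # Local level + seasonal specification, as selected in the paper;
        # hold out the last year for validation.
        train, holdout = y[:-12], y[-12:]
        model = sm.tsa.UnobservedComponents(train, level="local level", seasonal=12)
        res = model.fit(disp=False)
        print("AIC:", res.aic)
        forecast = res.forecast(steps=12)
        print("holdout RMSE:", np.sqrt(((forecast - holdout) ** 2).mean()))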

  18. Electric cell-substrate impedance sensing (ECIS) based real-time measurement of titer dependent cytotoxicity induced by adenoviral vectors in an IPI-2I cell culture model.

    Science.gov (United States)

    Müller, Jakob; Thirion, Christian; Pfaffl, Michael W

    2011-01-15

    Recombinant viral vectors are widespread tools for transfer of genetic material in various modern biotechnological applications, for example RNA interference (RNAi). However, accurate and reproducible titer assignment is the basic step for most downstream applications regarding a precise multiplicity of infection (MOI) adjustment. As a necessary scaffold for the studies described in this work, we introduce a quantitative real-time PCR (qPCR) based approach for viral particle measurement. A further problem concerning physiological effects is that the application of viral vectors is often attended by toxic effects on the individual target. To determine the critical viral dose leading to cell death, we developed an electric cell-substrate impedance sensing (ECIS) based assay. With ECIS technology, the impedance change of a current flow through the cell culture medium in an array plate is measured in a non-invasive manner, visualizing effects like cell attachment, cell-cell contacts or proliferation. Here we describe the potential of this online measurement technique in an in vitro model using the porcine ileal epithelial cell line IPI-2I in combination with an adenoviral transfection vector (Ad5-derivate). This approach shows a clear dose-dependent toxic effect, as the amount of applied virus highly correlates (p<0.001) with the level of cell death. Thus this assay offers the possibility to discriminate the minimal non-toxic dose of the individual transfection method. In addition, this work suggests that the ECIS device bears the feasibility to transfer this assay to multiple other cytotoxicological questions. Copyright © 2010 Elsevier B.V. All rights reserved.

  19. TIME SERIES ANALYSIS USING A UNIQUE MODEL OF TRANSFORMATION

    Directory of Open Access Journals (Sweden)

    Goran Klepac

    2007-12-01

    Full Text Available The REFII model is an original mathematical model for time series data mining. Its main purpose is to automate time series analysis through a unique transformation model of the time series. An advantage of this approach is that it links different methods for time series analysis, connects traditional data mining tools with time series, and supports the construction of new algorithms for analyzing time series. REFII is not a closed system with a finite set of methods. It is, first of all, a model for transforming the values of a time series, which prepares data for different sets of methods in a problem-space domain based on the same transformation. REFII thus offers a new approach to time series analysis based on a unique transformation model that can serve as a basis for all kinds of time series analysis, with possible applications in many different areas such as finance, medicine, voice recognition, face recognition and text mining.

  20. Time series modeling, computation, and inference

    CERN Document Server

    Prado, Raquel

    2010-01-01

    The authors systematically develop a state-of-the-art analysis and modeling of time series. … this book is well organized and well written. The authors present various statistical models for engineers to solve problems in time series analysis. Readers no doubt will learn state-of-the-art techniques from this book.-Hsun-Hsien Chang, Computing Reviews, March 2012My favorite chapters were on dynamic linear models and vector AR and vector ARMA models.-William Seaver, Technometrics, August 2011… a very modern entry to the field of time-series modelling, with a rich reference list of the current lit

  1. Comparison of Timed Automata with Discrete Event Simulation for Modeling of Biomarker-Based Treatment Decisions: An Illustration for Metastatic Castration-Resistant Prostate Cancer.

    Science.gov (United States)

    Degeling, Koen; Schivo, Stefano; Mehra, Niven; Koffijberg, Hendrik; Langerak, Rom; de Bono, Johann S; IJzerman, Maarten J

    2017-12-01

    With the advent of personalized medicine, the field of health economic modeling is being challenged and the use of patient-level dynamic modeling techniques might be required. To illustrate the usability of two such techniques, timed automata (TA) and discrete event simulation (DES), for modeling personalized treatment decisions. An early health technology assessment on the use of circulating tumor cells, compared with prostate-specific antigen and bone scintigraphy, to inform treatment decisions in metastatic castration-resistant prostate cancer was performed. Both modeling techniques were assessed quantitatively, in terms of intermediate outcomes (e.g., overtreatment) and health economic outcomes (e.g., net monetary benefit). Qualitatively, among others, model structure, agent interactions, data management (i.e., importing and exporting data), and model transparency were assessed. Both models yielded realistic and similar intermediate and health economic outcomes. Overtreatment was reduced by 6.99 and 7.02 weeks by applying circulating tumor cell as a response marker at a net monetary benefit of -€1033 and -€1104 for the TA model and the DES model, respectively. Software-specific differences were observed regarding data management features and the support for statistical distributions, which were considered better for the DES software. Regarding method-specific differences, interactions were modeled more straightforward using TA, benefiting from its compositional model structure. Both techniques prove suitable for modeling personalized treatment decisions, although DES would be preferred given the current software-specific limitations of TA. When these limitations are resolved, TA would be an interesting modeling alternative if interactions are key or its compositional structure is useful to manage multi-agent complex problems. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights
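
    The intermediate outcome discussed above, overtreatment, illustrates the patient-level logic that both TA and DES models must capture. A stylized stand-in (not either of the paper's models; the progression-time distribution and evaluation intervals are assumptions):

        import numpy as np

        rng = np.random.default_rng(11)
        n = 100_000
        # Hypothetical setup: true time-to-progression (weeks) is Weibull-distributed;
        # progression is only detected at the next scheduled response evaluation.
        progression = rng.weibull(1.5, n) * 30.0

        def mean_overtreatment(eval_interval):
            """Weeks of continued (futile) treatment until the next evaluation."""
            next_eval = np.ceil(progression / eval_interval) * eval_interval
            return (next_eval - progression).mean()

        # Compare a sparse evaluation schedule with a more frequent marker-based one.
        for interval in (12.0, 4.0):
            print(f"evaluation every {interval:4.1f} wks -> "
                  f"mean overtreatment {mean_overtreatment(interval):.2f} wks")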

  2. Hard- and software of real time simulation tools of Electric Power System for adequate modeling power semiconductors in voltage source convertor based HVDC and FACTS

    Directory of Open Access Journals (Sweden)

    Ufa Ruslan A.

    2014-01-01

    Full Text Available The motivation of the presented research is based on the needs for development of new methods and tools for adequate simulation of Flexible Alternating Current Transmission System (FACTS devices and High Voltage Direct Current Transmission (HVDC system as part of real electric power systems (EPS. For that, a hybrid approach for advanced simulation of the FACTS and HVDC based on Voltage Source is proposed. The presented simulation results of the developed hybrid model of VSC confirm the achievement of the desired properties of the model and the effectiveness of the proposed solutions.

  3. An Efficient Explicit-time Description Method for Timed Model Checking

    Directory of Open Access Journals (Sweden)

    Hao Wang

    2009-12-01

    Full Text Available Timed model checking, the method to formally verify real-time systems, is attracting increasing attention from both the model checking community and the real-time community. Explicit-time description methods verify real-time systems using general model constructs found in standard un-timed model checkers. Lamport proposed an explicit-time description method using a clock-ticking process (Tick to simulate the passage of time together with a group of global variables to model time requirements. Two methods, the Sync-based Explicit-time Description Method using rendezvous synchronization steps and the Semaphore-based Explicit-time Description Method using only one global variable were proposed; they both achieve better modularity than Lamport's method in modeling the real-time systems. In contrast to timed automata based model checkers like UPPAAL, explicit-time description methods can access and store the current time instant for future calculations necessary for many real-time systems, especially those with pre-emptive scheduling. However, the Tick process in the above three methods increments the time by one unit in each tick; the state spaces therefore grow relatively fast as the time parameters increase, a problem when the system's time period is relatively long. In this paper, we propose a more efficient method which enables the Tick process to leap multiple time units in one tick. Preliminary experimental results in a high performance computing environment show that this new method significantly reduces the state space and improves both the time and memory efficiency.
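
    The leaping-Tick idea can be illustrated outside any particular model checker: instead of incrementing time by one unit per transition, the Tick step jumps straight to the next pending deadline, so the number of explored transitions depends on the number of events rather than on the length of the time period. A minimal sketch (not the paper's formal encoding):

        import heapq

        # Minimal explicit-time exploration in which the Tick step leaps directly
        # to the next pending deadline instead of advancing one unit at a time.
        # Each timer is (deadline, name); its action fires when now reaches the deadline.
        def run(timers, horizon):
            now, heap = 0, sorted(timers)
            heapq.heapify(heap)
            while heap and now < horizon:
                deadline, name = heapq.heappop(heap)
                # Tick: one transition jumps over (deadline - now) time units.
                now = deadline
                print(f"t={now}: fire {name}")

        # Three transitions suffice here, versus 100000 unit ticks.
        run([(5, "sample sensor"), (500, "send report"), (100000, "rotate log")],
            horizon=10**6)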

  4. Discounting Models for Outcomes over Continuous Time

    DEFF Research Database (Denmark)

    Harvey, Charles M.; Østerdal, Lars Peter

    Events that occur over a period of time can be described either as sequences of outcomes at discrete times or as functions of outcomes in an interval of time. This paper presents discounting models for events of the latter type. Conditions on preferences are shown to be satisfied if and only if the preferences are represented by a function that is an integral of a discounting function times a scale defined on outcomes at instants of time.

  5. Average inactivity time model, associated orderings and reliability properties

    Science.gov (United States)

    Kayid, M.; Izadkhah, S.; Abouammoh, A. M.

    2018-02-01

    In this paper, we introduce and study a new model called the 'average inactivity time model'. This new model is specifically applicable to handling the heterogeneity of the failure time of a system in which some inactive items exist. We provide some bounds for the mean average inactivity time of a lifespan unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when the original random variable possesses some aging behaviors. Based on the conception of the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are examined.

  6. An USB-based time measurement system

    International Nuclear Information System (INIS)

    Qin Xi; Liu Shubin; An Qi

    2010-01-01

    In this paper, we report the electronics of a timing measurement system of PTB (portable TDC board), which is a handy tool based on a USB interface, customized for high precision time measurements without any crates. The time digitization is based on the High Performance TDC Chip (HPTDC). The real-time compensation for HPTDC outputs and the USB master logic are implemented in an Altera Cyclone FPGA. The architecture design and logic design are described in detail. Test of the system showed a time resolution of 13.3 ps. (authors)

  7. Stochastic approach to municipal solid waste landfill life based on the contaminant transit time modeling using the Monte Carlo (MC) simulation

    International Nuclear Information System (INIS)

    Bieda, Bogusław

    2013-01-01

    The paper is concerned with the application and benefits of MC simulation proposed for estimating the life of a modern municipal solid waste (MSW) landfill. The software Crystal Ball® (CB), a simulation program that helps analyze the uncertainties associated with Microsoft® Excel models by MC simulation, was proposed to calculate the transit time of contaminants in porous media. The transport of contaminants in soil is represented by the one-dimensional (1D) form of the advection–dispersion equation (ADE). The computer program CONTRANS, written in MATLAB, is the foundation for simulating and estimating the thickness of the landfill compacted clay liner. In order to simplify the task of determining the uncertainty of parameters by the MC simulation, the parameters corresponding to the expression Z2 taken from this program were used for the study. The tested parameters are: hydraulic gradient (HG), hydraulic conductivity (HC), porosity (POROS), linear thickness (TH) and diffusion coefficient (EDC). The principal output report provided by CB and presented in the study consists of the frequency chart, percentiles summary and statistics summary. Additional CB options provide a sensitivity analysis with tornado diagrams. The data that was used include available published figures as well as data concerning the Mittal Steel Poland (MSP) S.A. in Kraków, Poland. This paper discusses the results and shows that the presented approach is applicable for any MSW landfill compacted clay liner thickness design. -- Highlights: ► Numerical simulation of waste in porous media is proposed. ► Statistic outputs based on correct assumptions about probability distribution are presented. ► The benefits of a MC simulation are examined. ► The uniform probability distribution is studied. ► I report a useful tool applied to determine the life of a modern MSW landfill.
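
    A minimal sketch of the Monte Carlo idea described above. It is not the actual CONTRANS/Crystal Ball Z2 expression, which the record does not spell out; instead it assumes a purely advective transit time t = (TH × POROS) / (HC × HG) through a clay liner, with uniform input distributions (the parameter ranges are illustrative).

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

HG = rng.uniform(0.1, 0.3, n)        # hydraulic gradient (-), assumed range
HC = rng.uniform(1e-10, 1e-9, n)     # hydraulic conductivity (m/s), assumed
POROS = rng.uniform(0.3, 0.5, n)     # effective porosity (-), assumed
TH = rng.uniform(0.9, 1.2, n)        # liner thickness (m), assumed

seconds_per_year = 365.25 * 24 * 3600
# advective transit time through the liner, converted to years
transit_years = TH * POROS / (HC * HG) / seconds_per_year

for q in (5, 50, 95):
    print(f"{q}th percentile: {np.percentile(transit_years, q):,.0f} years")
```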

  8. Stochastic approach to municipal solid waste landfill life based on the contaminant transit time modeling using the Monte Carlo (MC) simulation

    Energy Technology Data Exchange (ETDEWEB)

    Bieda, Boguslaw, E-mail: bbieda@zarz.agh.edu.pl

    2013-01-01

    The paper is concerned with the application and benefits of MC simulation proposed for estimating the life of a modern municipal solid waste (MSW) landfill. The software Crystal Ball® (CB), a simulation program that helps analyze the uncertainties associated with Microsoft® Excel models by MC simulation, was proposed to calculate the transit time of contaminants in porous media. The transport of contaminants in soil is represented by the one-dimensional (1D) form of the advection-dispersion equation (ADE). The computer program CONTRANS, written in MATLAB, is the foundation for simulating and estimating the thickness of the landfill compacted clay liner. In order to simplify the task of determining the uncertainty of parameters by the MC simulation, the parameters corresponding to the expression Z2 taken from this program were used for the study. The tested parameters are: hydraulic gradient (HG), hydraulic conductivity (HC), porosity (POROS), linear thickness (TH) and diffusion coefficient (EDC). The principal output report provided by CB and presented in the study consists of the frequency chart, percentiles summary and statistics summary. Additional CB options provide a sensitivity analysis with tornado diagrams. The data that was used include available published figures as well as data concerning the Mittal Steel Poland (MSP) S.A. in Kraków, Poland. This paper discusses the results and shows that the presented approach is applicable for any MSW landfill compacted clay liner thickness design. -- Highlights: ► Numerical simulation of waste in porous media is proposed. ► Statistic outputs based on correct assumptions about probability distribution are presented. ► The benefits of a MC simulation are examined. ► The uniform probability distribution is studied. ► I report a useful tool applied to determine the life of a modern MSW landfill.

  9. On discrete models of space-time

    International Nuclear Information System (INIS)

    Horzela, A.; Kempczynski, J.; Kapuscik, E.; Georgia Univ., Athens, GA; Uzes, Ch.

    1992-02-01

    Analyzing the Einstein radiolocation method we come to the conclusion that results of any measurement of space-time coordinates should be expressed in terms of rational numbers. We show that this property is Lorentz invariant and may be used in the construction of discrete models of space-time different from the models of the lattice type constructed in the process of discretization of continuous models. (author)

  10. Modeling seasonality in bimonthly time series

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans)

    1992-01-01

    A recurring issue in modeling seasonal time series variables is the choice of the most adequate model for the seasonal movements. One selection method for quarterly data is proposed in Hylleberg et al. (1990). Market response models are often constructed for bimonthly variables, and

  11. Review the number of accidents in Tehran over a two-year period and prediction of the number of events based on a time-series model

    Science.gov (United States)

    Teymuri, Ghulam Heidar; Sadeghian, Marzieh; Kangavari, Mehdi; Asghari, Mehdi; Madrese, Elham; Abbasinia, Marzieh; Ahmadnezhad, Iman; Gholizadeh, Yavar

    2013-01-01

    Background: One of the significant dangers that threaten people’s lives is the increased risk of accidents. Annually, more than 1.3 million people die around the world as a result of accidents, and it has been estimated that approximately 300 deaths occur daily due to traffic accidents in the world, with more than 50% of that number being people who were not even passengers in the cars. The aim of this study was to examine traffic accidents in Tehran and forecast the number of future accidents using a time-series model. Methods: The study was a cross-sectional study that was conducted in 2011. The sample population was all traffic accidents that caused death and physical injuries in Tehran in 2010 and 2011, as registered in the Tehran Emergency ward. The present study used Minitab 15 software to provide a description of accidents in Tehran for the specified time period as well as those that occurred during April 2012. Results: The results indicated that the average number of daily traffic accidents in Tehran in 2010 was 187 with a standard deviation of 83.6. In 2011, there was an average of 180 daily traffic accidents with a standard deviation of 39.5. One-way analysis of variance indicated that the average number of accidents in the city was different for different months of the year (P < 0.05); more accidents occurred in March, July, August, and September. Thus, more accidents occurred in the summer than in the other seasons. The number of accidents was predicted based on an autoregressive moving average (ARMA) model for April 2012. The number of accidents displayed a seasonal trend. The prediction of the number of accidents in the city during April of 2012 indicated that a total of 4,459 accidents would occur, with a mean of 149 accidents per day during this period. Conclusion: The number of accidents in Tehran displayed a seasonal trend, and the number of accidents was different for different seasons of the year. PMID:26120405
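
    A hedged sketch of the kind of ARMA forecast described above, using statsmodels on synthetic daily accident counts; the Tehran data and the exact model order used in the study are not reproduced here.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
days = pd.date_range("2010-01-01", "2011-12-31", freq="D")
# synthetic series: mean ~185 daily accidents with a mild yearly cycle
counts = 185 + 20 * np.sin(2 * np.pi * days.dayofyear / 365) \
    + rng.normal(0, 15, len(days))
series = pd.Series(counts, index=days)

model = ARIMA(series, order=(2, 0, 1)).fit()   # ARMA(2,1); order is illustrative
forecast = model.forecast(steps=30)            # roughly one month ahead
print(round(forecast.sum()), round(forecast.mean()))
```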

  12. A kinetic-based sigmoidal model for the polymerase chain reaction and its application to high-capacity absolute quantitative real-time PCR

    Directory of Open Access Journals (Sweden)

    Stewart Don

    2008-05-01

    Full Text Available Abstract Background Based upon defining a common reference point, current real-time quantitative PCR technologies compare relative differences in amplification profile position. As such, absolute quantification requires construction of target-specific standard curves that are highly resource intensive and prone to introducing quantitative errors. Sigmoidal modeling using nonlinear regression has previously demonstrated that absolute quantification can be accomplished without standard curves; however, quantitative errors caused by distortions within the plateau phase have impeded effective implementation of this alternative approach. Results Recognition that amplification rate is linearly correlated to amplicon quantity led to the derivation of two sigmoid functions that allow target quantification via linear regression analysis. In addition to circumventing quantitative errors produced by plateau distortions, this approach allows the amplification efficiency within individual amplification reactions to be determined. Absolute quantification is accomplished by first converting individual fluorescence readings into target quantity expressed in fluorescence units, followed by conversion into the number of target molecules via optical calibration. Founded upon expressing reaction fluorescence in relation to amplicon DNA mass, a seminal element of this study was to implement optical calibration using lambda gDNA as a universal quantitative standard. Not only does this eliminate the need to prepare target-specific quantitative standards, it relegates establishment of quantitative scale to a single, highly defined entity. The quantitative competency of this approach was assessed by exploiting "limiting dilution assay" for absolute quantification, which provided an independent gold standard from which to verify quantitative accuracy. This yielded substantive corroborating evidence that absolute accuracies of ± 25% can be routinely achieved. Comparison
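
    A minimal sketch of sigmoidal (logistic) modeling of a qPCR amplification profile, in the spirit described above; the paper's own two derived sigmoid functions and its lambda-gDNA optical calibration are not reproduced. The fluorescence data are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(c, f_max, c_half, k):
    """Three-parameter logistic: fluorescence vs. cycle number."""
    return f_max / (1.0 + np.exp(-(c - c_half) / k))

cycles = np.arange(1, 41)
rng = np.random.default_rng(1)
observed = logistic(cycles, 100.0, 24.0, 1.6) + rng.normal(0, 0.5, cycles.size)

(f_max, c_half, k), _ = curve_fit(logistic, cycles, observed, p0=[90, 20, 2])
f0 = logistic(0.0, f_max, c_half, k)  # extrapolated target fluorescence at cycle 0
print(f"Fmax={f_max:.1f}, C1/2={c_half:.2f}, F0={f0:.3g} fluorescence units")
```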

  13. A comparison of the conditional inference survival forest model to random survival forests based on a simulation study as well as on two applications with time-to-event data.

    Science.gov (United States)

    Nasejje, Justine B; Mwambi, Henry; Dheda, Keertan; Lesosky, Maia

    2017-07-28

    Random survival forest (RSF) models have been identified as alternative methods to the Cox proportional hazards model in analysing time-to-event data. These methods, however, have been criticised for the bias that results from favouring covariates with many split-points and hence conditional inference forests for time-to-event data have been suggested. Conditional inference forests (CIF) are known to correct the bias in RSF models by separating the procedure for the best covariate to split on from that of the best split point search for the selected covariate. In this study, we compare the random survival forest model to the conditional inference model (CIF) using twenty-two simulated time-to-event datasets. We also analysed two real time-to-event datasets. The first dataset is based on the survival of children under-five years of age in Uganda and it consists of categorical covariates with most of them having more than two levels (many split-points). The second dataset is based on the survival of patients with extremely drug resistant tuberculosis (XDR TB) which consists of mainly categorical covariates with two levels (few split-points). The study findings indicate that the conditional inference forest model is superior to random survival forest models in analysing time-to-event data that consists of covariates with many split-points based on the values of the bootstrap cross-validated estimates for integrated Brier scores. However, conditional inference forests perform comparably similar to random survival forests models in analysing time-to-event data consisting of covariates with fewer split-points. Although survival forests are promising methods in analysing time-to-event data, it is important to identify the best forest model for analysis based on the nature of covariates of the dataset in question.
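
    A hedged sketch of the random-survival-forest side of this comparison, assuming the scikit-survival package is available (conditional inference forests are typically fitted with R's party/partykit and are not shown). The data are synthetic, not the Uganda or XDR TB datasets.

```python
import numpy as np
from sksurv.ensemble import RandomSurvivalForest
from sksurv.util import Surv

rng = np.random.default_rng(9)
n = 400
X = rng.normal(size=(n, 5))
# survival times depend on the first two covariates; random censoring
time = rng.exponential(np.exp(0.5 * X[:, 0] - 0.5 * X[:, 1]))
censor = rng.exponential(2.0, n)
event = time <= censor
y = Surv.from_arrays(event=event, time=np.minimum(time, censor))

rsf = RandomSurvivalForest(n_estimators=200, min_samples_leaf=10, random_state=0)
rsf.fit(X[:300], y[:300])
# score() reports Harrell's concordance index; the paper instead compared
# models via bootstrap cross-validated integrated Brier scores.
print(f"held-out C-index: {rsf.score(X[300:], y[300:]):.3f}")
```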

  14. A comparison of the conditional inference survival forest model to random survival forests based on a simulation study as well as on two applications with time-to-event data

    Directory of Open Access Journals (Sweden)

    Justine B. Nasejje

    2017-07-01

    Full Text Available Abstract Background Random survival forest (RSF) models have been identified as alternative methods to the Cox proportional hazards model in analysing time-to-event data. These methods, however, have been criticised for the bias that results from favouring covariates with many split-points and hence conditional inference forests for time-to-event data have been suggested. Conditional inference forests (CIF) are known to correct the bias in RSF models by separating the procedure for the best covariate to split on from that of the best split point search for the selected covariate. Methods In this study, we compare the random survival forest model to the conditional inference model (CIF) using twenty-two simulated time-to-event datasets. We also analysed two real time-to-event datasets. The first dataset is based on the survival of children under-five years of age in Uganda and it consists of categorical covariates with most of them having more than two levels (many split-points). The second dataset is based on the survival of patients with extremely drug resistant tuberculosis (XDR TB) which consists of mainly categorical covariates with two levels (few split-points). Results The study findings indicate that the conditional inference forest model is superior to random survival forest models in analysing time-to-event data that consists of covariates with many split-points based on the values of the bootstrap cross-validated estimates for integrated Brier scores. However, conditional inference forests perform comparably similar to random survival forests models in analysing time-to-event data consisting of covariates with fewer split-points. Conclusion Although survival forests are promising methods in analysing time-to-event data, it is important to identify the best forest model for analysis based on the nature of covariates of the dataset in question.

  15. Small Sample Properties of Bayesian Multivariate Autoregressive Time Series Models

    Science.gov (United States)

    Price, Larry R.

    2012-01-01

    The aim of this study was to compare the small sample (N = 1, 3, 5, 10, 15) performance of a Bayesian multivariate vector autoregressive (BVAR-SEM) time series model relative to frequentist power and parameter estimation bias. A multivariate autoregressive model was developed based on correlated autoregressive time series vectors of varying…

  16. Partition-based discrete-time quantum walks

    Science.gov (United States)

    Konno, Norio; Portugal, Renato; Sato, Iwao; Segawa, Etsuo

    2018-04-01

    We introduce a family of discrete-time quantum walks, called the two-partition model, based on two equivalence-class partitions of the computational basis, which establish the notion of local dynamics. This family encompasses most versions of unitary discrete-time quantum walks driven by two local operators studied in the literature, such as the coined model, Szegedy's model, and the 2-tessellable staggered model. We also analyze the connection of those models with the two-step coined model, which is driven by the square of the evolution operator of the standard discrete-time coined walk. We prove formally that the two-step coined model, an extension of Szegedy's model for multigraphs, and the two-tessellable staggered model are unitarily equivalent. Then, selecting one specific model among those families is a matter of taste, not generality.
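
    A minimal sketch of the standard coined discrete-time quantum walk on a cycle of N sites, one member of the family discussed above, with a Hadamard coin; it is illustrative and does not implement the paper's two-partition formalism.

```python
import numpy as np

N, steps = 16, 20
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard coin

# state[c, x]: amplitude for coin state c at site x
state = np.zeros((2, N), dtype=complex)
state[:, 0] = H @ np.array([1, 0])             # walker starts at site 0

for _ in range(steps):
    state = np.einsum("ab,bx->ax", H, state)   # apply the coin at every site
    state[0] = np.roll(state[0], 1)            # coin 0 shifts right
    state[1] = np.roll(state[1], -1)           # coin 1 shifts left

prob = (np.abs(state) ** 2).sum(axis=0)
print(np.round(prob, 3), prob.sum())           # spread distribution, total 1.0
```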

  17. A splitting scheme based on the space-time CE/SE method for solving multi-dimensional hydrodynamical models of semiconductor devices

    Science.gov (United States)

    Nisar, Ubaid Ahmed; Ashraf, Waqas; Qamar, Shamsul

    2016-08-01

    Numerical solutions of the hydrodynamical model of semiconductor devices are presented in one and two-space dimension. The model describes the charge transport in semiconductor devices. Mathematically, the models can be written as a convection-diffusion type system with a right hand side describing the relaxation effects and interaction with a self consistent electric field. The proposed numerical scheme is a splitting scheme based on the conservation element and solution element (CE/SE) method for hyperbolic step, and a semi-implicit scheme for the relaxation step. The numerical results of the suggested scheme are compared with the splitting scheme based on Nessyahu-Tadmor (NT) central scheme for convection step and the same semi-implicit scheme for the relaxation step. The effects of various parameters such as low field mobility, device length, lattice temperature and voltages for one-space dimensional hydrodynamic model are explored to further validate the generic applicability of the CE/SE method for the current model equations. A two dimensional simulation is also performed by CE/SE method for a MESFET device, producing results in good agreement with those obtained by NT-central scheme.

  18. Degeneracy of time series models: The best model is not always the correct model

    International Nuclear Information System (INIS)

    Judd, Kevin; Nakamura, Tomomichi

    2006-01-01

    There are a number of good techniques for finding, in some sense, the best model of a deterministic system given a time series of observations. We examine a problem called model degeneracy, which has the consequence that even when a perfect model of a system exists, one does not find it using the best techniques currently available. The problem is illustrated using global polynomial models and the theory of Groebner bases

  19. Conceptual Modeling of Time-Varying Information

    DEFF Research Database (Denmark)

    Gregersen, Heidi; Jensen, Christian S.

    2004-01-01

    A wide range of database applications manage information that varies over time. Many of the underlying database schemas of these were designed using the Entity-Relationship (ER) model. In the research community as well as in industry, it is common knowledge that the temporal aspects of the mini-world are important, but difficult to capture using the ER model. Several enhancements to the ER model have been proposed in an attempt to support the modeling of temporal aspects of information. Common to the existing temporally extended ER models, few or no specific requirements to the models were given...

  20. Modeling of Volatility with Non-linear Time Series Model

    OpenAIRE

    Kim Song Yon; Kim Mun Chol

    2013-01-01

    In this paper, non-linear time series models are used to describe volatility in financial time series data. To describe volatility, two non-linear time series models are combined to form a TAR (threshold autoregressive) model with an AARCH (asymmetric autoregressive conditional heteroskedasticity) error term, and its parameter estimation is studied.
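
    A minimal sketch of a two-regime TAR(1) process, without the AARCH error structure the paper adds; the coefficients and threshold are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
n, threshold = 500, 0.0
y = np.zeros(n)

for t in range(1, n):
    if y[t - 1] <= threshold:      # regime 1: positive persistence
        y[t] = 0.6 * y[t - 1] + rng.normal(0, 1.0)
    else:                          # regime 2: mean reversion, larger shocks
        y[t] = -0.4 * y[t - 1] + rng.normal(0, 1.5)

# regime-dependent dynamics show up as asymmetric volatility
print(y[y <= threshold].std(), y[y > threshold].std())
```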

  1. Conversion and Validation of Distribution System Model from a QSTS-Based Tool to a Real-Time Dynamic Phasor Simulator: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Chamana, Manohar; Prabakar, Kumaraguru; Palmintier, Bryan; Baggu, Murali M.

    2017-04-11

    A software process is developed to convert distribution network models from a quasi-static time-series tool (OpenDSS) to a real-time dynamic phasor simulator (ePHASORSIM). The description of this process in this paper would be helpful for researchers who intend to perform similar conversions. The converter could be utilized directly by users of real-time simulators who intend to perform software-in-the-loop or hardware-in-the-loop tests on large distribution test feeders for a range of use cases, including testing functions of advanced distribution management systems against a simulated distribution system. In the future, the developers intend to release the conversion tool as open source to enable use by others.

  2. Conversion and Validation of Distribution System Model from a QSTS-Based Tool to a Real-Time Dynamic Phasor Simulator

    Energy Technology Data Exchange (ETDEWEB)

    Chamana, Manohar; Prabakar, Kumaraguru; Palmintier, Bryan; Baggu, Murali M.

    2017-05-11

    A software process is developed to convert distribution network models from a quasi-static time-series tool (OpenDSS) to a real-time dynamic phasor simulator (ePHASORSIM). The description of this process in this paper would be helpful for researchers who intend to perform similar conversions. The converter could be utilized directly by users of real-time simulators who intend to perform software-in-the-loop or hardware-in-the-loop tests on large distribution test feeders for a range of use cases, including testing functions of advanced distribution management systems against a simulated distribution system. In the future, the developers intend to release the conversion tool as open source to enable use by others.

  3. Constitutive model with time-dependent deformations

    DEFF Research Database (Denmark)

    Krogsbøll, Anette

    1998-01-01

    are common in time as well as size. This problem is addressed by means of a new constitutive model for soils. It is able to describe the behavior of soils at different deformation rates. The model defines time-dependent and stress-related deformations separately. They are related to each other and they occur...... was the difference in time scale between the geological process of deposition (millions of years) and the laboratory measurements of mechanical properties (minutes or hours). In addition, the time scale relevant to the production history of the oil field was interesting (days or years)....

  4. Observer-Based Controller Design for a Class of Nonlinear Networked Control Systems with Random Time-Delays Modeled by Markov Chains

    Directory of Open Access Journals (Sweden)

    Yanfeng Wang

    2017-01-01

    Full Text Available This paper investigates the observer-based controller design problem for a class of nonlinear networked control systems with random time-delays. The nonlinearity is assumed to satisfy a global Lipschitz condition, and two dependent Markov chains are employed to describe the time-delay from sensor to controller (S-C delay) and the time-delay from controller to actuator (C-A delay), respectively. The transition probabilities of the S-C delay and the C-A delay are both assumed to be partly inaccessible. Sufficient conditions on the stochastic stability of the closed-loop systems are obtained by constructing a proper Lyapunov functional. The methods of calculating the controller and the observer gain matrix are also given. Two numerical examples are used to illustrate the effectiveness of the proposed method.
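
    A minimal sketch of the delay model described above: two Markov chains generating random S-C and C-A delays. The transition probabilities and delay alphabets are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(3)

P_sc = np.array([[0.7, 0.3],    # S-C delay chain over delay states {0, 1}
                 [0.4, 0.6]])
P_ca = np.array([[0.8, 0.2],    # C-A delay chain over delay states {0, 1}
                 [0.5, 0.5]])

def simulate(P, steps, state=0):
    """Sample a path of delay states from transition matrix P."""
    path = [state]
    for _ in range(steps - 1):
        state = rng.choice(len(P), p=P[state])
        path.append(state)
    return np.array(path)

sc = simulate(P_sc, 20)
ca = simulate(P_ca, 20)
print("S-C delays:", sc)
print("round-trip:", sc + ca)    # total network-induced delay per sample
```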

  5. Predicting the outbreak of hand, foot, and mouth disease in Nanjing, China: a time-series model based on weather variability

    Science.gov (United States)

    Liu, Sijun; Chen, Jiaping; Wang, Jianming; Wu, Zhuchao; Wu, Weihua; Xu, Zhiwei; Hu, Wenbiao; Xu, Fei; Tong, Shilu; Shen, Hongbing

    2017-10-01

    Hand, foot, and mouth disease (HFMD) is a significant public health issue in China, and accurate prediction of epidemics can improve the effectiveness of HFMD control. This study aims to develop a weather-based forecasting model for HFMD using information on climatic variables and HFMD surveillance in Nanjing, China. Daily data on HFMD cases and meteorological variables between 2010 and 2015 were acquired from the Nanjing Center for Disease Control and Prevention and the China Meteorological Data Sharing Service System, respectively. A multivariate seasonal autoregressive integrated moving average (SARIMA) model was developed and validated by dividing the HFMD infection data into two datasets: the data from 2010 to 2013 were used to construct a model and those from 2014 to 2015 were used to validate it. Moreover, weekly predictions were made for the data between 1 January 2014 and 31 December 2015, and leave-one-week-out prediction was used to validate the predictive performance of the model. SARIMA (2,0,0)52 with the average temperature at a lag of 1 week appeared to be the best model (R² = 0.936, BIC = 8.465), which also showed non-significant autocorrelations in the residuals of the model. In the validation of the constructed model, the predicted values matched the observed values reasonably well between 2014 and 2015. There was a high agreement rate between the predicted values and the observed values (sensitivity 80%, specificity 96.63%). This study suggests that the SARIMA model with average temperature could be used as an important tool for early detection and prediction of HFMD outbreaks in Nanjing, China.
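
    A hedged sketch of a weekly SARIMA model with a lagged-temperature covariate, mirroring the approach above on synthetic data; the study's surveillance data and exact seasonal order are not reproduced.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(11)
weeks = pd.date_range("2010-01-03", periods=6 * 52, freq="W")
temp = 15 + 10 * np.sin(2 * np.pi * np.arange(len(weeks)) / 52)
cases = 200 + 8 * temp + rng.normal(0, 40, len(weeks))

y = pd.Series(cases, index=weeks)
x = pd.Series(temp, index=weeks).shift(1).bfill()   # temperature at lag 1 week

model = SARIMAX(y, exog=x, order=(2, 0, 0),
                seasonal_order=(1, 0, 0, 52)).fit(disp=False)
# four-week-ahead forecast; recent temperatures stand in for future exog values
print(model.forecast(steps=4, exog=x.iloc[-4:]))
```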

  6. Research of Manufacture Time Management System Based on PLM

    Science.gov (United States)

    Jing, Ni; Juan, Zhu; Liangwei, Zhong

    This system targets the machine shops of manufacturing enterprises: it analyzes their business needs and builds a plant management information system for manufacture-time information covering the manufacturing process. Combining WEB technology with an EXCEL VBA-based development method, it constructs a hybrid, PLM-based framework for a workshop manufacture-time management information system, and discusses the functionality of the system architecture and the database structure.

  7. A distributed model predictive control based load frequency control scheme for multi-area interconnected power system using discrete-time Laguerre functions.

    Science.gov (United States)

    Zheng, Yang; Zhou, Jianzhong; Xu, Yanhe; Zhang, Yuncheng; Qian, Zhongdong

    2017-05-01

    This paper proposes a distributed model predictive control based load frequency control (MPC-LFC) scheme to improve control performances in the frequency regulation of power system. In order to reduce the computational burden in the rolling optimization with a sufficiently large prediction horizon, the orthonormal Laguerre functions are utilized to approximate the predicted control trajectory. The closed-loop stability of the proposed MPC scheme is achieved by adding a terminal equality constraint to the online quadratic optimization and taking the cost function as the Lyapunov function. Furthermore, the treatments of some typical constraints in load frequency control have been studied based on the specific Laguerre-based formulations. Simulations have been conducted in two different interconnected power systems to validate the effectiveness of the proposed distributed MPC-LFC as well as its superiority over the comparative methods. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
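
    A hedged sketch of the discrete-time Laguerre basis used above to parameterize the predicted control trajectory. This is the standard Laguerre network construction; the pole a and basis size N are illustrative, not the paper's tuning.

```python
import numpy as np

def laguerre_basis(a, N, horizon):
    """Return a (horizon x N) matrix whose columns are the first N discrete
    Laguerre functions with pole a, generated by the recursion L(k+1) = A L(k)."""
    beta = 1.0 - a ** 2
    A = np.zeros((N, N))
    for i in range(N):
        A[i, i] = a
        for j in range(i):
            A[i, j] = ((-a) ** (i - j - 1)) * beta
    L = np.sqrt(beta) * np.array([(-a) ** i for i in range(N)])
    out = np.zeros((horizon, N))
    for k in range(horizon):
        out[k] = L
        L = A @ L
    return out

Phi = laguerre_basis(a=0.5, N=4, horizon=200)
print(np.round(Phi.T @ Phi, 3))   # ~identity: the basis is orthonormal
# The predicted control move is then u(k) ~= Phi[k] @ eta, where eta is the
# low-dimensional coefficient vector found by the online quadratic program.
```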

  8. Aerodynamic Modeling of NREL 5-MW Wind Turbine for Nonlinear Control System Design: A Case Study Based on Real-Time Nonlinear Receding Horizon Control

    Directory of Open Access Journals (Sweden)

    Pedro A. Galvani

    2016-08-01

    Full Text Available The work presented in this paper has two major aspects: (i) investigation of a simple, yet efficient model of the NREL (National Renewable Energy Laboratory) 5-MW reference wind turbine; (ii) nonlinear control system development through a real-time nonlinear receding horizon control methodology with application to wind turbine control dynamics. In this paper, the results of our simple wind turbine model and a real-time nonlinear control system implementation are shown in comparison with conventional control methods. For this purpose, the wind turbine control problem is converted into an optimization problem and is directly solved by the nonlinear backwards sweep Riccati method to generate the control protocol, which results in a non-iterative algorithm. One main contribution of this paper is that we provide evidence through simulations that such an advanced control strategy can be used for real-time control of wind turbine dynamics. Examples are provided to validate and demonstrate the effectiveness of the presented scheme.
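
    A minimal sketch of a finite-horizon backward Riccati sweep for a discrete-time LQR subproblem, the non-iterative backward-sweep flavor the paper exploits, shown on an illustrative 2-state linear model rather than the actual nonlinear 5-MW turbine dynamics.

```python
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 0.95]])   # illustrative discretized dynamics
B = np.array([[0.0], [0.1]])
Q = np.diag([10.0, 1.0])                  # state weights
R = np.array([[0.1]])                     # control weight
horizon = 50

P = Q.copy()                              # terminal cost
gains = []
for _ in range(horizon):                  # sweep backwards in time
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)
    gains.append(K)
gains.reverse()                           # gains[k] now applies at step k

x = np.array([[1.0], [0.0]])
for k in range(horizon):                  # forward pass with u = -K x
    x = (A - B @ gains[k]) @ x
print("terminal state:", x.ravel())
```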

  9. Time series modeling in traffic safety research.

    Science.gov (United States)

    Lavrenz, Steven M; Vlahogianni, Eleni I; Gkritza, Konstantina; Ke, Yue

    2018-08-01

    The use of statistical models for analyzing traffic safety (crash) data has been well-established. However, time series techniques have traditionally been underrepresented in the corresponding literature, due to challenges in data collection, along with a limited knowledge of proper methodology. In recent years, new types of high-resolution traffic safety data, especially in measuring driver behavior, have made time series modeling techniques an increasingly salient topic of study. Yet there remains a dearth of information to guide analysts in their use. This paper provides an overview of the state of the art in using time series models in traffic safety research, and discusses some of the fundamental techniques and considerations in classic time series modeling. It also presents ongoing and future opportunities for expanding the use of time series models, and explores newer modeling techniques, including computational intelligence models, which hold promise in effectively handling ever-larger data sets. The information contained herein is meant to guide safety researchers in understanding this broad area of transportation data analysis, and provide a framework for understanding safety trends that can influence policy-making. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Discrete-time rewards model-checked

    NARCIS (Netherlands)

    Larsen, K.G.; Andova, S.; Niebert, Peter; Hermanns, H.; Katoen, Joost P.

    2003-01-01

    This paper presents a model-checking approach for analyzing discrete-time Markov reward models. For this purpose, the temporal logic probabilistic CTL is extended with reward constraints. This allows one to formulate complex measures – involving expected as well as accumulated rewards – in a precise and

  11. Modeling vector nonlinear time series using POLYMARS

    NARCIS (Netherlands)

    de Gooijer, J.G.; Ray, B.K.

    2003-01-01

    A modified multivariate adaptive regression splines method for modeling vector nonlinear time series is investigated. The method results in models that can capture certain types of vector self-exciting threshold autoregressive behavior, as well as provide good predictions for more general vector

  12. Forecasting with periodic autoregressive time series models

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans); R. Paap (Richard)

    1999-01-01

    This paper is concerned with forecasting univariate seasonal time series data using periodic autoregressive models. We show how one should account for unit roots and deterministic terms when generating out-of-sample forecasts. We illustrate the models for various quarterly UK consumption

  13. Time versus frequency domain measurements: layered model ...

    African Journals Online (AJOL)

    ... their high frequency content while among TEM data sets with low frequency content, the averaging times for the FEM ellipticity were shorter than the TEM quality. Keywords: ellipticity, frequency domain, frequency electromagnetic method, model parameter, orientation error, time domain, transient electromagnetic method

  14. Modeling nonhomogeneous Markov processes via time transformation.

    Science.gov (United States)

    Hubbard, R A; Inoue, L Y T; Fann, J R

    2008-09-01

    Longitudinal studies are a powerful tool for characterizing the course of chronic disease. These studies are usually carried out with subjects observed at periodic visits giving rise to panel data. Under this observation scheme the exact times of disease state transitions and sequence of disease states visited are unknown and Markov process models are often used to describe disease progression. Most applications of Markov process models rely on the assumption of time homogeneity, that is, that the transition rates are constant over time. This assumption is not satisfied when transition rates depend on time from the process origin. However, limited statistical tools are available for dealing with nonhomogeneity. We propose models in which the time scale of a nonhomogeneous Markov process is transformed to an operational time scale on which the process is homogeneous. We develop a method for jointly estimating the time transformation and the transition intensity matrix for the time transformed homogeneous process. We assess maximum likelihood estimation using the Fisher scoring algorithm via simulation studies and compare performance of our method to homogeneous and piecewise homogeneous models. We apply our methodology to a study of delirium progression in a cohort of stem cell transplantation recipients and show that our method identifies temporal trends in delirium incidence and recovery.
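
    A minimal sketch of the time-transformation idea above: if operational time s = t**alpha makes the process homogeneous with rate lam, then calendar-time sojourn times follow t = s**(1/alpha) and exhibit a time-varying hazard. The values of alpha and lam are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
alpha, lam = 0.5, 1.0

s = rng.exponential(1 / lam, size=200_000)   # homogeneous on operational scale
t = s ** (1 / alpha)                         # map back to calendar time

# empirical discrete hazard on the calendar scale: it declines with time,
# which a time-homogeneous Markov model could not capture
edges = np.arange(6)
for lo, hi in zip(edges[:-1], edges[1:]):
    at_risk = (t >= lo).sum()
    events = ((t >= lo) & (t < hi)).sum()
    print(f"[{lo},{hi}): hazard ~= {events / at_risk:.3f}")
```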

  15. Modeling discrete time-to-event data

    CERN Document Server

    Tutz, Gerhard

    2016-01-01

    This book focuses on statistical methods for the analysis of discrete failure times. Failure time analysis is one of the most important fields in statistical research, with applications affecting a wide range of disciplines, in particular, demography, econometrics, epidemiology and clinical research. Although there are a large variety of statistical methods for failure time analysis, many techniques are designed for failure times that are measured on a continuous scale. In empirical studies, however, failure times are often discrete, either because they have been measured in intervals (e.g., quarterly or yearly) or because they have been rounded or grouped. The book covers well-established methods like life-table analysis and discrete hazard regression models, but also introduces state-of-the art techniques for model evaluation, nonparametric estimation and variable selection. Throughout, the methods are illustrated by real life applications, and relationships to survival analysis in continuous time are expla...

  16. Continuous time modelling of dynamical spatial lattice data observed at sparsely distributed times

    DEFF Research Database (Denmark)

    Rasmussen, Jakob Gulddahl; Møller, Jesper

    2007-01-01

    Summary. We consider statistical and computational aspects of simulation-based Bayesian inference for a spatial-temporal model based on a multivariate point process which is only observed at sparsely distributed times. The point processes are indexed by the sites of a spatial lattice......, and they exhibit spatial interaction. For specificity we consider a particular dynamical spatial lattice data set which has previously been analysed by a discrete time model involving unknown normalizing constants. We discuss the advantages and disadvantages of using continuous time processes compared...... with discrete time processes in the setting of the present paper as well as other spatial-temporal situations....

  17. Accident diagnosis of the Angra-2 nuclear power plant based on intelligent real-time acquisition agents and a logical tree model

    Energy Technology Data Exchange (ETDEWEB)

    Paiva, Gustavo V.; Schirru, Roberto, E-mail: gustavopaiva@poli.ufrj.br, E-mail: schirru@lmp.ufrj.br [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Departamento de Engenharia Nuclear

    2017-07-01

    This work aims to create a model and a prototype, using the Python language, which with the application of an Expert System uses production rules to analyze the data obtained in real time from the plant and help the operator to identify the occurrence of transients/accidents. In the event of a transient, the program alerts the operator and indicates which section of the Operation Manual should be consulted to bring the plant back to its normal state. The generic structure used to represent the knowledge of the Expert System was a Fault Tree, and data acquisition from the plant was done through intelligent agents that transform the data obtained from the plant into Boolean values used in the Fault Tree, including the use of Fuzzy Logic. In order to test the program, a simplified model of the Almirante Alvaro Alberto 2 Nuclear Power Plant (Angra-2) manuals was used and, with this model, simulations were performed to analyze the program's operation and whether it leads to the expected results. The test results showed quick identification of the events and great accuracy, demonstrating the applicability of the model to the problem. (author)

  18. Accident diagnosis of the Angra-2 nuclear power plant based on intelligent real-time acquisition agents and a logical tree model

    International Nuclear Information System (INIS)

    Paiva, Gustavo V.; Schirru, Roberto

    2017-01-01

    This work aims to create a model and a prototype, using the Python language, which with the application of an Expert System uses production rules to analyze the data obtained in real time from the plant and help the operator to identify the occurrence of transients/accidents. In the event of a transient, the program alerts the operator and indicates which section of the Operation Manual should be consulted to bring the plant back to its normal state. The generic structure used to represent the knowledge of the Expert System was a Fault Tree, and data acquisition from the plant was done through intelligent agents that transform the data obtained from the plant into Boolean values used in the Fault Tree, including the use of Fuzzy Logic. In order to test the program, a simplified model of the Almirante Alvaro Alberto 2 Nuclear Power Plant (Angra-2) manuals was used and, with this model, simulations were performed to analyze the program's operation and whether it leads to the expected results. The test results showed quick identification of the events and great accuracy, demonstrating the applicability of the model to the problem. (author)

  19. Computationally efficient dynamic modeling of robot manipulators with multiple flexible-links using acceleration-based discrete time transfer matrix method

    DEFF Research Database (Denmark)

    Zhang, Xuping; Sørensen, Rasmus; RahbekIversen, Mathias

    2018-01-01

    This paper presents a novel and computationally efficient modeling method for the dynamics of flexible-link robot manipulators. In this method, a robot manipulator is decomposed into components/elements. The component/element dynamics is established using Newton–Euler equations, and then is linearized based on the acceleration-based state vector. The transfer matrices for each type of components/elements are developed, and used to establish the system equations of a flexible robot manipulator by concatenating the state vector from the base to the end-effector. With this strategy, the size... manipulators, and only involves calculating and transferring component/element dynamic equations that have small size. The numerical simulations and experimental testing of flexible-link manipulators are conducted to validate the proposed methodologies....

  20. Time series modelling of overflow structures

    DEFF Research Database (Denmark)

    Carstensen, J.; Harremoës, P.

    1997-01-01

    The dynamics of a storage pipe is examined using a grey-box model based on on-line measured data. The grey-box modelling approach uses a combination of physically-based and empirical terms in the model formulation. The model provides an on-line state estimate of the overflows, pumping capacities...... and available storage capacity in the pipe as well as predictions of future states. A linear overflow relation is found, differing significantly from the traditional modelling approach. This is due to complicated overflow structures in a hydraulic sense where the overflow is governed by inertia from the inflow...... to the overflow structures. The capacity of a pump draining the storage pipe has been estimated for two rain events, revealing that the pump was malfunctioning during the first rain event. The grey-box modelling approach is applicable for automated on-line surveillance and control. (C) 1997 IAWQ. Published...

  1. Time series sightability modeling of animal populations.

    Science.gov (United States)

    ArchMiller, Althea A; Dorazio, Robert M; St Clair, Katherine; Fieberg, John R

    2018-01-01

    Logistic regression models-or "sightability models"-fit to detection/non-detection data from marked individuals are often used to adjust for visibility bias in later detection-only surveys, with population abundance estimated using a modified Horvitz-Thompson (mHT) estimator. More recently, a model-based alternative for analyzing combined detection/non-detection and detection-only data was developed. This approach seemed promising, since it resulted in similar estimates as the mHT when applied to data from moose (Alces alces) surveys in Minnesota. More importantly, it provided a framework for developing flexible models for analyzing multiyear detection-only survey data in combination with detection/non-detection data. During initial attempts to extend the model-based approach to multiple years of detection-only data, we found that estimates of detection probabilities and population abundance were sensitive to the amount of detection-only data included in the combined (detection/non-detection and detection-only) analysis. Subsequently, we developed a robust hierarchical modeling approach where sightability model parameters are informed only by the detection/non-detection data, and we used this approach to fit a fixed-effects model (FE model) with year-specific parameters and a temporally-smoothed model (TS model) that shares information across years via random effects and a temporal spline. The abundance estimates from the TS model were more precise, with decreased interannual variability relative to the FE model and mHT abundance estimates, illustrating the potential benefits from model-based approaches that allow information to be shared across years.
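
    A minimal sketch of the classical sightability-model workflow the record builds on: fit a logistic detection model to detection/non-detection data from marked animals, then apply a modified Horvitz-Thompson correction to a detection-only survey. The data and the single covariate (group size) are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(8)

# 1) marked-animal trials: detection probability increases with group size
group = rng.integers(1, 10, size=300)
p_true = 1 / (1 + np.exp(-(-1.0 + 0.5 * group)))
detected = rng.random(300) < p_true

sight = LogisticRegression().fit(group.reshape(-1, 1), detected)

# 2) detection-only survey: correct each observed group for visibility bias
obs_groups = rng.integers(1, 10, size=120)
p_hat = sight.predict_proba(obs_groups.reshape(-1, 1))[:, 1]
abundance = np.sum(obs_groups / p_hat)   # mHT: animals per group / detection prob
print(f"bias-corrected abundance estimate: {abundance:,.0f}")
```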

  2. Using Virtual Reality to Assess Ethical Decisions in Road Traffic Scenarios: Applicability of Value-of-Life-Based Models and Influences of Time Pressure

    OpenAIRE

    Sütfeld, Leon R.; Gast, Richard; König, Peter; Pipa, Gordon

    2017-01-01

    Self-driving cars are posing a new challenge to our ethics. By using algorithms to make decisions in situations where harming humans is possible, probable, or even unavoidable, a self-driving car's ethical behavior comes pre-defined. Ad hoc decisions are made in milliseconds, but can be based on extensive research and debates. The same algorithms are also likely to be used in millions of cars at a time, increasing the impact of any inherent biases, and increasing the importance of getting it ...

  3. Using the Time-Driven Activity-Based Costing Model in the Eye Clinic at The Hospital for Sick Children: A Case Study and Lessons Learned.

    Science.gov (United States)

    Gulati, Sanchita; During, David; Mainland, Jeff; Wong, Agnes M F

    2018-01-01

    One of the key challenges to healthcare organizations is the development of relevant and accurate cost information. In this paper, we used the time-driven activity-based costing (TDABC) method to calculate the costs of treating individual patients with specific medical conditions over their full cycle of care. We discussed how TDABC provides a critical, systematic and data-driven approach to estimate costs accurately and dynamically, as well as its potential to enable structural and rational cost reduction to bring about a sustainable healthcare system. © 2018 Longwoods Publishing.

  4. Linear time relational prototype based learning.

    Science.gov (United States)

    Gisbrecht, Andrej; Mokbel, Bassam; Schleif, Frank-Michael; Zhu, Xibin; Hammer, Barbara

    2012-10-01

    Prototype based learning offers an intuitive interface to inspect large quantities of electronic data in supervised or unsupervised settings. Recently, many techniques have been extended to data described by general dissimilarities rather than Euclidean vectors, so-called relational data settings. Unlike the Euclidean counterparts, the techniques have quadratic time complexity due to the underlying quadratic dissimilarity matrix. Thus, they are infeasible already for medium sized data sets. The contribution of this article is twofold: On the one hand we propose a novel supervised prototype based classification technique for dissimilarity data based on popular learning vector quantization (LVQ), on the other hand we transfer a linear time approximation technique, the Nyström approximation, to this algorithm and an unsupervised counterpart, the relational generative topographic mapping (GTM). This way, linear time and space methods result. We evaluate the techniques on three examples from the biomedical domain.
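
    A minimal sketch of the Nyström approximation used above to bring an n x n (dis)similarity matrix down to linear complexity: approximate K from a random subset of m landmark columns as K ≈ C W⁺ Cᵀ. The RBF similarity and parameters are illustrative; the full approximation is materialized here only to check the error.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 5))
gamma = 0.5

def rbf(A, B):
    """Gaussian (RBF) similarity between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

m = 50                                   # number of landmark points
idx = rng.choice(len(X), m, replace=False)
C = rbf(X, X[idx])                       # n x m slice of the full matrix
W = C[idx]                               # m x m block among the landmarks

K_approx = C @ np.linalg.pinv(W) @ C.T   # rank-m Nystroem approximation
K_exact = rbf(X[:100], X[:100])          # small exact block for comparison
err = np.abs(K_approx[:100, :100] - K_exact).max()
print(f"max abs error on a 100x100 block: {err:.3f}")
```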

  5. Time series sightability modeling of animal populations

    Science.gov (United States)

    ArchMiller, Althea A.; Dorazio, Robert; St. Clair, Katherine; Fieberg, John R.

    2018-01-01

    Logistic regression models—or “sightability models”—fit to detection/non-detection data from marked individuals are often used to adjust for visibility bias in later detection-only surveys, with population abundance estimated using a modified Horvitz-Thompson (mHT) estimator. More recently, a model-based alternative for analyzing combined detection/non-detection and detection-only data was developed. This approach seemed promising, since it resulted in similar estimates as the mHT when applied to data from moose (Alces alces) surveys in Minnesota. More importantly, it provided a framework for developing flexible models for analyzing multiyear detection-only survey data in combination with detection/non-detection data. During initial attempts to extend the model-based approach to multiple years of detection-only data, we found that estimates of detection probabilities and population abundance were sensitive to the amount of detection-only data included in the combined (detection/non-detection and detection-only) analysis. Subsequently, we developed a robust hierarchical modeling approach where sightability model parameters are informed only by the detection/non-detection data, and we used this approach to fit a fixed-effects model (FE model) with year-specific parameters and a temporally-smoothed model (TS model) that shares information across years via random effects and a temporal spline. The abundance estimates from the TS model were more precise, with decreased interannual variability relative to the FE model and mHT abundance estimates, illustrating the potential benefits from model-based approaches that allow information to be shared across years.

  6. A model for quantification of temperature profiles via germination times

    DEFF Research Database (Denmark)

    Pipper, Christian Bressen; Adolf, Verena Isabelle; Jacobsen, Sven-Erik

    2013-01-01

    Current methodology to quantify temperature characteristics in germination of seeds is predominantly based on analysis of the time to reach a given germination fraction, that is, the quantiles in the distribution of the germination time of a seed. In practice, interpolation between observed germination fractions at given monitoring times is used to obtain the time to reach a given germination fraction. As a consequence, the obtained value will be highly dependent on the actual monitoring scheme used in the experiment. In this paper a link between currently used quantile models for the germination time and a specific type of accelerated failure time models is provided. As a consequence, the observed number of germinated seeds at given monitoring times may be analysed directly by a grouped time-to-event model from which characteristics of the temperature profile may be identified and estimated...

  7. Time to Loosen the Apron Strings: Cohort-based Evaluation of a Learner-driven Remediation Model at One Medical School.

    Science.gov (United States)

    Bierer, S Beth; Dannefer, Elaine F; Tetzlaff, John E

    2015-09-01

    Remediation in the era of competency-based assessment demands a model that empowers students to improve performance. To examine a remediation model where students, rather than faculty, develop remedial plans to improve performance. Private medical school, 177 medical students. A promotion committee uses student-generated portfolios and faculty referrals to identify struggling students, and has them develop formal remediation plans with personal reflections, improvement strategies, and performance evidence. Students submit reports to document progress until formally released from remediation by the promotion committee. Participants included 177 students from six classes (2009-2014). Twenty-six were placed in remediation, with more referrals occurring during Years 1 or 2 (n = 20, 76%). Unprofessional behavior represented the most common reason for referral in Years 3-5. Remedial students did not differ from classmates (n = 151) on baseline characteristics (age, gender, US citizenship, MCAT) or willingness to recommend their medical school to future students (p > 0.05). Two remedial students did not graduate and three did not pass USMLE licensure exams on the first attempt. Most remedial students (92%) generated appropriate plans to address performance deficits. Students can successfully design remedial interventions. This learner-driven remediation model promotes greater autonomy and reinforces self-regulated learning.

  8. Study on determination of planting time for some cauliflower cultivars (Brassica oleracea var. botrytis) under Samsun ecological conditions by using plant growth and developmental models based on thermal time

    International Nuclear Information System (INIS)

    Uzun, S.; Peksen, A.

    2000-01-01

    In this study, it was aimed to determine the effects of different planting times (01 July, 15 July and 01 August) on the growth and developmental components of some cauliflower cultivars (Snow King, White Cliff, White Rock, White Latin, Me & Carillon, SG 4004 F1 and Serrano) by using plant growth and developmental models. From the results of the present study, it was revealed that the thermal time elapsing from planting to curd initiation should be high (about 1200 degree centigrade days) to stimulate vegetative growth, while the thermal time elapsing from curd initiation to harvest should be low (around 200 degree centigrade days) in terms of curd weight. The highest curd weight and yield were obtained from the plants of the first planting time, namely 01 July, compared to the other planting times (15 July and 01 August). Although there were no significant differences between the cultivars, the highest yields were obtained from cv. Me & Carillon (13.25 t ha-1), SG 4004 F1 (13.14 t ha-1) and White Rock (11.51 t ha-1), respectively.

  9. Modeling Non-Gaussian Time Series with Nonparametric Bayesian Model.

    Science.gov (United States)

    Xu, Zhiguang; MacEachern, Steven; Xu, Xinyi

    2015-02-01

    We present a class of Bayesian copula models whose major components are the marginal (limiting) distribution of a stationary time series and the internal dynamics of the series. We argue that these are the two features with which an analyst is typically most familiar, and hence that these are natural components with which to work. For the marginal distribution, we use a nonparametric Bayesian prior distribution along with a cdf-inverse cdf transformation to obtain large support. For the internal dynamics, we rely on the traditionally successful techniques of normal-theory time series. Coupling the two components gives us a family of (Gaussian) copula transformed autoregressive models. The models provide coherent adjustments of time scales and are compatible with many extensions, including changes in volatility of the series. We describe basic properties of the models, show their ability to recover non-Gaussian marginal distributions, and use a GARCH modification of the basic model to analyze stock index return series. The models are found to provide better fit and improved short-range and long-range predictions than Gaussian competitors. The models are extensible to a large variety of fields, including continuous time models, spatial models, models for multiple series, models driven by external covariate streams, and non-stationary models.
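
    A minimal sketch of the copula-transformed autoregressive idea: push a non-Gaussian marginal through a cdf / inverse-cdf pair so the internal dynamics can be handled with normal-theory AR machinery. The gamma marginal and AR(1) coefficient are illustrative, and the nonparametric Bayesian prior on the marginal is not reproduced.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# latent Gaussian AR(1) supplies the internal dynamics (unit marginal variance)
phi, n = 0.8, 2000
z = np.zeros(n)
for t in range(1, n):
    z[t] = phi * z[t - 1] + rng.normal(0, np.sqrt(1 - phi ** 2))

# copula step: uniform scores -> any desired marginal (here a skewed gamma)
u = stats.norm.cdf(z)
y = stats.gamma.ppf(u, a=2.0, scale=1.5)     # observed non-Gaussian series

# recover the dynamics: transform back and estimate phi on the normal scale
z_back = stats.norm.ppf(stats.gamma.cdf(y, a=2.0, scale=1.5))
phi_hat = np.corrcoef(z_back[:-1], z_back[1:])[0, 1]
print(f"true phi = {phi}, recovered phi = {phi_hat:.3f}")
```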

  10. A Model-Based Evaluation of the Inverse Gaussian Transit-Time Distribution Method for Inferring Anthropogenic Carbon Storage in the Ocean

    Science.gov (United States)

    He, Yan-Chun; Tjiputra, Jerry; Langehaug, Helene R.; Jeansson, Emil; Gao, Yongqi; Schwinger, Jörg; Olsen, Are

    2018-03-01

    The Inverse Gaussian approximation of transit time distribution method (IG-TTD) is widely used to infer the anthropogenic carbon (Cant) concentration in the ocean from measurements of transient tracers such as chlorofluorocarbons (CFCs) and sulfur hexafluoride (SF6). Its accuracy relies on the validity of several assumptions, notably (i) a steady state ocean circulation, (ii) a prescribed age tracer saturation history, e.g., a constant 100% saturation, (iii) a prescribed constant degree of mixing in the ocean, (iv) a constant surface ocean air-sea CO2 disequilibrium with time, and (v) that preformed alkalinity can be sufficiently estimated by salinity or salinity and temperature. Here, these assumptions are evaluated using simulated "model-truth" of Cant. The results give the IG-TTD method a range of uncertainty from 7.8% to 13.6% (11.4 Pg C to 19.8 Pg C) due to above assumptions, which is about half of the uncertainty derived in previous model studies. Assumptions (ii), (iv) and (iii) are the three largest sources of uncertainties, accounting for 5.5%, 3.8% and 3.0%, respectively, while assumptions (i) and (v) only contribute about 0.6% and 0.7%. Regionally, the Southern Ocean contributes the largest uncertainty, of 7.8%, while the North Atlantic contributes about 1.3%. Our findings demonstrate that spatial-dependency of Δ/Γ, and temporal changes in tracer saturation and air-sea CO2 disequilibrium have strong compensating effect on the estimated Cant. The values of these parameters should be quantified to reduce the uncertainty of IG-TTD; this is increasingly important under a changing ocean climate.
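
    A hedged numerical sketch of the IG-TTD convolution: interior anthropogenic carbon obtained as the surface history convolved with an inverse-Gaussian transit time distribution of mean age Γ and width Δ. A unit Δ/Γ ratio and an idealized exponentially growing surface Cant history are assumed; neither the magnitudes nor the tracer-based constraint of Γ from the paper are reproduced.

```python
import numpy as np

def ig_ttd(t, gamma, delta):
    """Inverse-Gaussian transit time distribution (Waugh et al. form)."""
    t = np.maximum(t, 1e-9)
    return np.sqrt(gamma**3 / (4 * np.pi * delta**2 * t**3)) * \
        np.exp(-gamma * (t - gamma) ** 2 / (4 * delta**2 * t))

years = np.arange(0.5, 300, 1.0)            # transit times (yr)
gamma = 50.0
G = ig_ttd(years, gamma, delta=gamma)       # unit Delta/Gamma ratio assumed
G /= G.sum()                                # normalize the discrete kernel

# idealized exponentially growing surface anthropogenic carbon history
def c_surf(year):
    return 60.0 * np.exp((year - 2000.0) / 50.0)

sample_year = 2000.0
c_interior = np.sum(c_surf(sample_year - years) * G)
print(f"interior Cant ~ {c_interior:.1f} vs surface {c_surf(sample_year):.1f}")
```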

  11. Analysis of the impact of crude oil price fluctuations on China's stock market in different periods-Based on time series network model

    Science.gov (United States)

    An, Yang; Sun, Mei; Gao, Cuixia; Han, Dun; Li, Xiuming

    2018-02-01

    This paper studies the influence of Brent oil price fluctuations on the stock prices of two distinct blocks in China, namely the petrochemical block and the electric equipment and new energy block, applying the Shannon entropy of information theory. The co-movement of the crude oil price and the stock prices is divided into different fluctuation patterns with a coarse-graining method. A bivariate time series network model is then established for the two blocks' stocks in five different periods. Joint analysis of the network-oriented metrics identifies the key modes and the underlying evolutionary mechanisms. The results show that the two networks have different fluctuation characteristics in different periods; their co-movement patterns cluster around a few key modes and conversion intermediaries. The study not only reveals the lag effect of crude oil price fluctuations on the stocks of Chinese industry blocks but also verifies the necessity of studying special periods separately, and it suggests that the government should use different energy policies to stabilize market volatility in different periods. A new way is provided to study the unidirectional influence between multiple variables or complex time series.
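
    The coarse-graining step described above can be sketched as follows (an illustrative reconstruction, not the authors' code: the rise/flat/decline alphabet, the threshold, and the synthetic return series are assumptions). Joint oil/stock fluctuations are mapped to symbolic patterns, and successive patterns are linked into a directed, weighted transition network:

```python
# Sketch: coarse-grained fluctuation patterns -> transition network.
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)
oil = rng.normal(size=500)                # placeholder return series
stock = 0.4 * oil + rng.normal(size=500)  # toy correlated stock returns

def symbol(x, eps=0.3):
    """Coarse-grain a return into rise (R), flat (F) or decline (D)."""
    return "R" if x > eps else ("D" if x < -eps else "F")

# Bivariate pattern for each day, e.g. "RD" = oil up, stock down.
patterns = [symbol(o) + symbol(s) for o, s in zip(oil, stock)]

# Directed, weighted edges between successive patterns (the network).
edges = Counter(zip(patterns[:-1], patterns[1:]))

# Key modes: the most frequent nodes and transitions.
print(Counter(patterns).most_common(3))
print(edges.most_common(3))
```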

  12. A Multi Time Scale Wind Power Forecasting Model of a Chaotic Echo State Network Based on a Hybrid Algorithm of Particle Swarm Optimization and Tabu Search

    Directory of Open Access Journals (Sweden)

    Xiaomin Xu

    2015-11-01

    Full Text Available The uncertainty and irregularity of wind power generation are caused by the intermittency and randomness of wind resources. Such volatility brings severe challenges to the wind power grid. Ultrashort-term and short-term wind power forecasts with high prediction accuracy therefore have great significance for reducing the phenomenon of abandoned wind power, optimizing the conventional power generation plan, adjusting maintenance schedules and developing real-time monitoring systems; accurate forecasting of wind power generation is thus important in electric load forecasting. The echo state network (ESN) is a recurrent neural network composed of input, hidden and output layers. It approximates nonlinear systems well and achieves good results in forecasting nonlinear chaotic time series. Moreover, the ESN is simpler and less computationally demanding to train than traditional neural networks, which yields more accurate training results. Aiming to address the disadvantages of the standard ESN, this paper makes some improvements: by combining the complementary advantages of particle swarm optimization and tabu search, the generalization ability of the ESN is improved. To verify the validity and applicability of this method, case studies of multi-time-scale forecasting of wind power output are carried out, in which the chaotic time series of actual wind power generation data in a certain region are reconstructed to predict wind power generation; the influence of seasonal factors on wind power is also taken into consideration. Compared with the classical ESN and a conventional back propagation (BP) neural network, the results verify the superiority of the proposed method.
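
    For readers unfamiliar with echo state networks, the sketch below shows the core mechanic (a minimal generic ESN, not the paper's PSO/tabu-search-optimized variant; the reservoir size, spectral radius, and ridge penalty are assumed placeholders): a fixed random reservoir is driven by the input, and only the linear readout is trained.

```python
# Minimal echo state network sketch (generic, illustrative only).
import numpy as np

rng = np.random.default_rng(2)
n_res, rho, ridge = 200, 0.9, 1e-6       # assumed hyperparameters

# Fixed random input and reservoir weights; rescale to spectral radius rho.
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))

def run_reservoir(u):
    """Collect reservoir states for a scalar input sequence u."""
    x = np.zeros(n_res)
    states = []
    for ut in u:
        x = np.tanh(W_in[:, 0] * ut + W @ x)
        states.append(x.copy())
    return np.array(states)

# One-step-ahead prediction on a toy sine series.
u = np.sin(np.linspace(0, 20 * np.pi, 1000))
X, y = run_reservoir(u[:-1]), u[1:]
# Train only the readout, by ridge regression.
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
print("train MSE:", np.mean((X @ W_out - y) ** 2))
```

    The hybrid algorithm in the paper would tune quantities such as the spectral radius and reservoir weights; here they are simply fixed.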

  13. Identification of human operator performance models utilizing time series analysis

    Science.gov (United States)

    Holden, F. M.; Shinners, S. M.

    1973-01-01

    The results of an effort performed by Sperry Systems Management Division for AMRL in applying time series analysis as a tool for modeling the human operator are presented. This technique is utilized for determining the variation of the human transfer function under various levels of stress. The human operator's model is determined based on actual input and output data from a tracking experiment.

  14. Estimating High-Dimensional Time Series Models

    DEFF Research Database (Denmark)

    Medeiros, Marcelo C.; Mendes, Eduardo F.

    We study the asymptotic properties of the Adaptive LASSO (adaLASSO) in sparse, high-dimensional, linear time-series models. We assume that both the number of covariates in the model and the number of candidate variables can increase with the number of observations, and that the number of candidate variables is possibly larger than the number of observations. We show that the adaLASSO consistently chooses the relevant variables as the number of observations increases (model selection consistency) and has the oracle property, even when the errors are non-Gaussian and conditionally heteroskedastic. A simulation study shows...
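
    As an illustration of the estimator under study, the sketch below implements a common two-step adaptive LASSO for a sparse autoregression (a generic recipe, not the authors' procedure: the ridge first stage, the reciprocal weights, and the penalty level are assumptions). First-stage estimates provide weights that rescale the penalty, which is what yields the oracle property:

```python
# Sketch: two-step adaptive LASSO for a sparse AR model (illustrative).
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(3)
n, p = 300, 40                            # observations, candidate lags
y = np.zeros(n + p)
for t in range(p, n + p):                 # true model uses lags 1 and 3 only
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 3] + rng.normal()
X = np.column_stack([y[p - k:-k] for k in range(1, p + 1)])
yt = y[p:]

# Step 1: ridge pilot estimates -> adaptive weights w_j = 1/|b_j|.
b0 = Ridge(alpha=1.0).fit(X, yt).coef_
w = 1.0 / (np.abs(b0) + 1e-8)

# Step 2: LASSO on weight-rescaled columns = weighted-penalty LASSO.
lasso = Lasso(alpha=0.05, max_iter=10000).fit(X / w, yt)
coef = lasso.coef_ / w                    # undo the rescaling
print("selected lags:", np.nonzero(np.abs(coef) > 1e-6)[0] + 1)
```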

  15. Nonparametric volatility density estimation for discrete time models

    NARCIS (Netherlands)

    Es, van Bert; Spreij, P.J.C.; Zanten, van J.H.

    2005-01-01

    We consider discrete time models for asset prices with a stationary volatility process. We aim at estimating the multivariate density of this process at a set of consecutive time instants. A Fourier-type deconvolution kernel density estimator based on the logarithm of the squared process is proposed.

  16. Finite Time Blowup in a Realistic Food-Chain Model

    KAUST Repository

    Parshad, Rana; Ait Abderrahmane, Hamid; Upadhyay, Ranjit Kumar; Kumari, Nitu

    2013-01-01

    We investigate a realistic three-species food-chain model with a generalist top predator. The model, based on a modified version of the Leslie-Gower scheme, incorporates mutual interference in all three populations and generalizes several other models known in the ecological literature. We show that the model exhibits finite time blowup in a certain parameter range and for large enough initial data. This result implies that finite time blowup is possible in a large class of such three-species food-chain models. We propose a modification to the model and prove that the modified model has globally existing classical solutions, as well as a global attractor. We reconstruct the attractor using nonlinear time series analysis and show that it possesses rich dynamics, including chaos in a certain parameter regime, whilst avoiding blowup in every parameter regime. We also provide estimates of its fractal dimension, as well as numerical simulations to visualise the spatiotemporal chaos.
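
    The general shape of such a model can be sketched as follows (a generic modified Leslie-Gower three-species chain assembled for illustration; the functional forms and parameter values are assumptions, not necessarily the authors' exact system):

```python
# Sketch: a generic modified Leslie-Gower type three-species food chain.
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder parameters (illustrative, not tuned to the paper).
a1, a2, b1, w0, w1, w2, w3, D0, D1, D2, D3, c = \
    2.0, 1.5, 0.05, 1.0, 1.0, 0.5, 1.0, 10.0, 10.0, 10.0, 20.0, 0.03

def rhs(t, y):
    u, v, w = y                           # prey, middle predator, top predator
    du = a1 * u - b1 * u**2 - w0 * u * v / (u + D0)
    dv = -a2 * v + w1 * u * v / (u + D1) - w2 * v * w / (v + D2)
    dw = c * w**2 - w3 * w**2 / (v + D3)  # generalist top predator (L-G term)
    return [du, dv, dw]

sol = solve_ivp(rhs, (0.0, 200.0), [1.0, 0.5, 0.3], dense_output=True)
print("final state:", sol.y[:, -1])
```

    The quadratic growth term of the top predator is the source of potential finite time blowup; the modification discussed in the record is designed to prevent it.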

  17. Visibility Graph Based Time Series Analysis.

    Science.gov (United States)

    Stephen, Mutua; Gu, Changgui; Yang, Huijie

    2015-01-01

    Network based time series analysis has made considerable achievements in recent years. By mapping mono/multivariate time series into networks, one can investigate both their microscopic and macroscopic behaviors. However, most proposed approaches lead to the construction of static networks and consequently provide limited information on evolutionary behaviors. In the present paper we propose a method called visibility graph based time series analysis, in which series segments are mapped to visibility graphs as descriptions of the corresponding states, and the successively occurring states are linked. This procedure converts a time series into a temporal network and at the same time a network of networks. Findings from empirical records for stock markets in the USA (S&P500 and Nasdaq) and from artificial series generated by means of fractional Gaussian motions show that the method can provide rich information benefiting short-term and long-term predictions. Theoretically, we propose a method to investigate time series from the viewpoint of a network of networks.

  18. Visibility Graph Based Time Series Analysis.

    Directory of Open Access Journals (Sweden)

    Mutua Stephen

    Full Text Available Network based time series analysis has made considerable achievements in recent years. By mapping mono/multivariate time series into networks, one can investigate both their microscopic and macroscopic behaviors. However, most proposed approaches lead to the construction of static networks and consequently provide limited information on evolutionary behaviors. In the present paper we propose a method called visibility graph based time series analysis, in which series segments are mapped to visibility graphs as descriptions of the corresponding states, and the successively occurring states are linked. This procedure converts a time series into a temporal network and at the same time a network of networks. Findings from empirical records for stock markets in the USA (S&P500 and Nasdaq) and from artificial series generated by means of fractional Gaussian motions show that the method can provide rich information benefiting short-term and long-term predictions. Theoretically, we propose a method to investigate time series from the viewpoint of a network of networks.
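
    The natural visibility criterion underlying both records above is simple: two points of the series are connected if every intermediate point lies strictly below the straight line joining them. A minimal brute-force sketch (illustrative; optimized algorithms avoid this quadratic-or-worse scan):

```python
# Sketch: natural visibility graph from a scalar time series (brute force).
import numpy as np

def visibility_edges(y):
    """Return edges (a, b) where points a and b 'see' each other."""
    n, edges = len(y), []
    for a in range(n):
        for b in range(a + 1, n):
            # Point c blocks the view if it reaches the chord from a to b.
            blocked = any(
                y[c] >= y[a] + (y[b] - y[a]) * (c - a) / (b - a)
                for c in range(a + 1, b)
            )
            if not blocked:
                edges.append((a, b))
    return edges

series = np.random.default_rng(4).normal(size=50).cumsum()
print(len(visibility_edges(series)), "edges")
```

    In the method described above, each sliding segment of the series is converted to such a graph, and the graphs of successive segments are then linked into a temporal network.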

  19. Space-time modeling of timber prices

    Science.gov (United States)

    Mo Zhou; Joseph Buongiorno

    2006-01-01

    A space-time econometric model was developed for pine sawtimber prices in 21 geographically contiguous regions of the southern United States. The correlations between prices in neighboring regions helped predict future prices. The impulse response analysis showed that although southern pine sawtimber markets were not globally integrated, local supply and demand...

  1. On modeling panels of time series

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans)

    2002-01-01

    This paper reviews research issues in modeling panels of time series. Examples of this type of data are annually observed macroeconomic indicators for all countries in the world, daily returns on the individual stocks listed in the S&P500, and the sales records of all items in a

  2. Time Series Modelling using Proc Varmax

    DEFF Research Database (Denmark)

    Milhøj, Anders

    2007-01-01

    In this paper it is demonstrated how various time series problems can be addressed using Proc Varmax. The procedure is rather new, and hence newer features such as cointegration and testing for Granger causality are included; it also means that more traditional ARIMA modelling as outlined by Box...

  3. Theory of Time beyond the standard model

    International Nuclear Information System (INIS)

    Poliakov, Eugene S.

    2008-01-01

    A frame of non-uniform time is discussed and a concept of 'flow of time' is presented. A principle of time relativity, in analogy with the Galilean principle of relativity, is set out. An equivalence principle is stated: the outcome of non-uniform time in an inertial frame of reference is equivalent to the outcome of a fictitious gravity force external to the frame of reference. Thus it is the flow of time that causes gravity rather than mass. The latter claim is compared to experimental data, achieving a precision of up to 0.0003%. It is shown that the law of energy conservation is inapplicable to frames of non-uniform time. A theoretical model of a physical entity (point mass, photon) travelling in a field of non-uniform time is considered. A generalized law that allows the flow of time to replace classical energy conservation is introduced on the basis of the Pound-Rebka experiment. It is shown that a linear dependence of the flow of time on the spatial coordinate conforms to the inverse square law of universal gravitation and to Keplerian mechanics. Momentum is shown to still be conserved.

  4. Bayesian estimation of the dynamics of pandemic (H1N1) 2009 influenza transmission in Queensland: A space-time SIR-based model.

    Science.gov (United States)

    Huang, Xiaodong; Clements, Archie C A; Williams, Gail; Mengersen, Kerrie; Tong, Shilu; Hu, Wenbiao

    2016-04-01

    A pandemic strain of influenza A spread rapidly around the world in 2009, now referred to as pandemic (H1N1) 2009. This study aimed to examine the spatiotemporal variation in the transmission rate of pandemic (H1N1) 2009 associated with changes in local socio-environmental conditions from May 7 to December 31, 2009, at the postal area level in Queensland, Australia. We used data on laboratory-confirmed H1N1 cases to examine the spatiotemporal dynamics of transmission with a flexible Bayesian, space-time, Susceptible-Infected-Recovered (SIR) modelling approach. The model incorporated parameters describing spatiotemporal variation in H1N1 infection and local socio-environmental factors. The weekly transmission rate of pandemic (H1N1) 2009 was negatively associated with the weekly area-mean maximum temperature at a lag of 1 week (LMXT) (posterior mean: -0.341; 95% credible interval (CI): -0.370 to -0.311) and with the socio-economic index for areas (SEIFA) (posterior mean: -0.003; 95% CI: -0.004 to -0.001), and was positively associated with the product of LMXT and the weekly area-mean vapour pressure at a lag of 1 week (LVAP) (posterior mean: 0.008; 95% CI: 0.007 to 0.009). There was substantial spatiotemporal variation in the transmission rate of pandemic (H1N1) 2009 across Queensland over the epidemic period. High random effects on estimated transmission rates were apparent in remote areas and in some postal areas with higher proportions of indigenous populations and smaller overall populations. The local SEIFA and local atmospheric conditions were associated with the transmission rate of pandemic (H1N1) 2009. The more populated regions displayed consistent and synchronized epidemics with low average transmission rates. The less populated regions had high average transmission rates with more variation during the H1N1 epidemic period.
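
    The modelling idea, an SIR process whose transmission rate varies with local covariates, can be sketched in a simplified, non-spatial form (illustrative only: the covariate signs mirror those reported above, but the scalings, recovery rate, population size, and initial conditions are placeholders):

```python
# Sketch: discrete-time SIR with a covariate-driven transmission rate.
import numpy as np

rng = np.random.default_rng(5)
weeks, N = 34, 100_000                    # epidemic length, population
S, I, R = N - 10, 10, 0
gamma = 0.7                               # weekly recovery rate (assumed)

for week in range(weeks):
    lmxt = 22 + 6 * np.sin(2 * np.pi * week / 52)   # lagged max temp (toy)
    lvap = 15 + 4 * np.sin(2 * np.pi * week / 52)   # lagged vapour pressure
    seifa = 0.0                                      # centred SES index
    # Log-linear transmission rate with the signs from the fitted model,
    # applied here to rescaled (standardized) toy covariates.
    log_beta = 0.5 - 0.341 * (lmxt - 22) / 6 - 0.003 * seifa \
               + 0.008 * ((lmxt - 22) / 6) * ((lvap - 15) / 4)
    beta = np.exp(log_beta)
    new_inf = min(S, rng.poisson(beta * S * I / N))
    new_rec = rng.binomial(I, gamma)
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec

print(f"final attack rate: {R / N:.1%}")
```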

  5. Koopman Operator Framework for Time Series Modeling and Analysis

    Science.gov (United States)

    Surana, Amit

    2018-01-01

    We propose an interdisciplinary framework for time series classification, forecasting, and anomaly detection by combining concepts from Koopman operator theory, machine learning, and linear systems and control theory. At the core of this framework is nonlinear dynamic generative modeling of time series using the Koopman operator, an infinite-dimensional but linear operator. Rather than working with the underlying nonlinear model, we propose two simpler linear representations, or model forms, based on Koopman spectral properties. We show that these model forms are invariants of the generative model and can be identified directly from data using techniques for computing Koopman spectral properties, without requiring explicit knowledge of the generative model. We also introduce different notions of distance on the space of such model forms, which are essential for model comparison and clustering. We employ the space of Koopman model forms equipped with distance, in conjunction with classical machine learning techniques, to develop a framework for automatic feature generation for time series classification. The forecasting and anomaly detection framework is based on using Koopman model forms together with classical linear systems and control approaches. We demonstrate the proposed framework on human activity classification and on time series forecasting and anomaly detection in a power grid application.
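
    Koopman spectral properties can be estimated directly from data; one standard route (used here purely as an illustration, not necessarily the authors' exact algorithm) is dynamic mode decomposition, which fits the best linear map between time-shifted snapshot matrices and examines its eigendecomposition:

```python
# Sketch: dynamic mode decomposition (DMD) as a data-driven estimate
# of Koopman spectral properties. Illustrative, not the paper's method.
import numpy as np

def dmd(X, Y, r=4):
    """Fit Y ~ A X in a rank-r subspace; return eigenvalues and modes."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, V = U[:, :r], s[:r], Vh[:r].conj().T
    A_tilde = U.conj().T @ Y @ V / s      # reduced operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ V / s @ W                 # exact DMD modes
    return eigvals, modes

# Toy data: two damped oscillators observed through 20 mixed channels.
t = np.linspace(0, 10, 201)
dt = t[1] - t[0]
latent = np.vstack([np.exp(-0.05 * t) * np.cos(2 * t),
                    np.exp(-0.05 * t) * np.sin(2 * t),
                    np.exp(-0.10 * t) * np.cos(3 * t),
                    np.exp(-0.10 * t) * np.sin(3 * t)])
C = np.random.default_rng(6).normal(size=(20, 4))
data = C @ latent
eigvals, _ = dmd(data[:, :-1], data[:, 1:], r=4)
print("recovered frequencies:", np.sort(np.log(eigvals).imag / dt))
```

    The recovered eigenvalue phases give the oscillation frequencies (about 2 and 3 rad per unit time here), which is the kind of spectral invariant the framework above builds its distances on.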

  6. State-space prediction model for chaotic time series

    Science.gov (United States)

    Alparslan, A. K.; Sayar, M.; Atilgan, A. R.

    1998-08-01

    A simple method for predicting the continuation of a scalar chaotic time series ahead in time is proposed. The false nearest neighbors technique, in connection with time-delayed embedding, is employed to reconstruct the state space. A local forecasting model based upon the time evolution of the topological neighborhood in the reconstructed phase space is suggested. A moving root-mean-square error is utilized to monitor the error along the prediction horizon. The model is tested on the convection amplitude of the Lorenz model. The results indicate that, for approximately 100 cycles of training data, the prediction follows the actual continuation very closely for about six cycles. The proposed model, like other state-space forecasting models, captures the long-term behavior of the system due to the use of spatial neighbors in the state space.
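
    The core of such local state-space predictors is easy to sketch (a generic version under assumed embedding dimension, delay, and neighbor count, not the authors' exact scheme): embed the series with time delays, find the nearest neighbors of the current state, and forecast by averaging where those neighbors went next.

```python
# Sketch: local nearest-neighbor forecasting in a delay-embedded state space.
import numpy as np

def embed(x, dim=3, tau=2):
    """Time-delay embedding: rows are states [x_t, x_{t-tau}, ...]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[(dim - 1 - k) * tau : (dim - 1 - k) * tau + n]
                            for k in range(dim)])

def predict_next(x, dim=3, tau=2, k=5):
    """Average the one-step futures of the k nearest embedded neighbors."""
    states = embed(x, dim, tau)
    current, history = states[-1], states[:-1]
    dists = np.linalg.norm(history - current, axis=1)
    idx = np.argsort(dists)[:k]
    # State i ends at series index i + (dim - 1) * tau; its future is one step on.
    futures = x[idx + (dim - 1) * tau + 1]
    return futures.mean()

x = np.sin(np.linspace(0, 30 * np.pi, 2000)) ** 3   # toy nonlinear series
print("forecast:", predict_next(x), "last observed:", x[-1])
```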

  7. Real time wave forecasting using wind time history and numerical model

    Science.gov (United States)

    Jain, Pooja; Deo, M. C.; Latha, G.; Rajendran, V.

    Operational activities in the ocean, such as planning for structural repairs or fishing expeditions, require real time prediction of waves over typical durations of, say, a few hours. Such predictions can be made using a numerical model or a time series model employing continuously recorded waves. This paper presents another option, based on a different time series approach in which the input is in the form of preceding wind speed and wind direction observations. This is useful for stations where costly wave buoys are not deployed and only meteorological buoys measuring wind are moored. The technique employs the alternative artificial intelligence approaches of artificial neural networks (ANN), genetic programming (GP) and model trees (MT) to carry out the time series modeling of wind to obtain waves. Wind observations at four offshore sites along the east coast of India were used. For calibration purposes the wave data were generated using a numerical model. The waves predicted by the proposed time series models, when compared with the numerically generated waves, showed good resemblance in terms of the selected error criteria. Large differences across the chosen techniques of ANN, GP and MT were not noticed. Wave hindcasting at the same time step and predictions over shorter lead times were better than predictions over longer lead times. The proposed method is a cost-effective and convenient option when site-specific information is desired.
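
    The mapping described here, preceding wind observations in and waves out, can be sketched with any regressor; below is a toy ANN variant (entirely illustrative: the synthetic wind/wave data, the six-lag input, and scikit-learn's MLPRegressor standing in for the paper's models are all assumptions):

```python
# Sketch: predicting wave height from lagged wind observations (toy ANN).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
n, lags = 2000, 6
wind = 8 + 3 * np.sin(np.linspace(0, 40, n)) + rng.normal(0, 0.5, n)
# Toy 'truth': waves respond nonlinearly to a weighted history of the wind.
wave = 0.02 * sum((0.7 ** k) * np.roll(wind, k) for k in range(lags)) ** 1.5

# Each row holds the previous `lags` wind values; target is the next wave.
X = np.column_stack([wind[lags - 1 - k : n - 1 - k] for k in range(lags)])
y = wave[lags:]

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                     random_state=0).fit(X[:-200], y[:-200])
rmse = np.sqrt(np.mean((model.predict(X[-200:]) - y[-200:]) ** 2))
print("holdout RMSE:", rmse)
```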

  8. Thermally-aware composite run-time CPU power models

    OpenAIRE

    Walker, Matthew J.; Diestelhorst, Stephan; Hansson, Andreas; Balsamo, Domenico; Merrett, Geoff V.; Al-Hashimi, Bashir M.

    2016-01-01

    Accurate and stable CPU power modelling is fundamental in modern system-on-chips (SoCs) for two main reasons: 1) they enable significant online energy savings by providing a run-time manager with reliable power consumption data for controlling CPU energy-saving techniques; 2) they can be used as accurate and trusted reference models for system design and exploration. We begin by showing the limitations in typical performance monitoring counter (PMC) based power modelling approaches and illust...

  9. NASA AVOSS Fast-Time Wake Prediction Models: User's Guide

    Science.gov (United States)

    Ahmad, Nash'at N.; VanValkenburg, Randal L.; Pruis, Matthew

    2014-01-01

    The National Aeronautics and Space Administration (NASA) is developing and testing fast-time wake transport and decay models to safely enhance the capacity of the National Airspace System (NAS). The fast-time wake models are empirical algorithms used for real-time predictions of wake transport and decay based on aircraft parameters and ambient weather conditions. The aircraft dependent parameters include the initial vortex descent velocity and the vortex pair separation distance. The atmospheric initial conditions include vertical profiles of temperature or potential temperature, eddy dissipation rate, and crosswind. The current distribution includes the latest versions of the APA (3.4) and the TDP (2.1) models. This User's Guide provides detailed information on the model inputs, file formats, and the model output. An example of a model run and a brief description of the Memphis 1995 Wake Vortex Dataset is also provided.

  10. Manufacturing strategies for time based competitive advantages

    OpenAIRE

    Lin, Yong; Ma, Shihua; Zhou, Li

    2012-01-01

    Purpose – The main purpose of this paper is to investigate the current manufacturing strategies and practices of bus manufacturers in China, and to propose a framework of manufacturing strategies for time-based competitive advantages. Design/methodology/approach – The conceptual research framework is devised from a review of the literature, and case studies are used to investigate the manufacturing strategies and practices in place in the case companies. Data is collected through semi-stru...

  11. Discrete time modelization of human pilot behavior

    Science.gov (United States)

    Cavalli, D.; Soulatges, D.

    1975-01-01

    This modelization starts from the following hypotheses: the pilot's behavior is a time-discrete process, he can perform only one task at a time, and his operating mode depends on the flight subphase under consideration. The pilot's behavior was observed using an electro-oculometer and a simulator cockpit. A FORTRAN program was developed using two strategies. In the first, a Markovian process governs the successive instrument readings through a matrix of conditional probabilities. In the second, the strategy is a heuristic process in which the concepts of mental load and performance are described. The results of the two approaches were compared with simulation data.
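
    The first strategy, instrument readings governed by a matrix of conditional probabilities, amounts to simulating a Markov chain; a minimal sketch (the instrument set and transition probabilities are invented for illustration):

```python
# Sketch: Markov-chain model of successive instrument readings.
import numpy as np

instruments = ["attitude", "airspeed", "altimeter", "heading"]
# Hypothetical conditional probabilities P(next | current); rows sum to 1.
P = np.array([[0.10, 0.40, 0.30, 0.20],
              [0.50, 0.10, 0.20, 0.20],
              [0.45, 0.25, 0.10, 0.20],
              [0.40, 0.30, 0.20, 0.10]])

rng = np.random.default_rng(8)
state, scanpath = 0, []
for _ in range(20):                       # simulate 20 successive fixations
    state = rng.choice(len(instruments), p=P[state])
    scanpath.append(instruments[state])
print(" -> ".join(scanpath))
```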

  12. Linear Parametric Model Checking of Timed Automata

    DEFF Research Database (Denmark)

    Hune, Tohmas Seidelin; Romijn, Judi; Stoelinga, Mariëlle

    2001-01-01

    We present an extension of the model checker Uppaal capable of synthesizing linear parameter constraints for the correctness of parametric timed automata. The symbolic representation of the (parametric) state space is shown to be correct. A second contribution of this paper is the identification of a subclass of parametric timed automata (L/U automata) for which the emptiness problem is decidable, contrary to the full class, where it is known to be undecidable. We also present a number of lemmas enabling the verification effort to be reduced for L/U automata in some cases. We illustrate our approach...

  13. Opinion dynamics model based on quantum formalism

    Energy Technology Data Exchange (ETDEWEB)

    Artawan, I. Nengah, E-mail: nengahartawan@gmail.com [Theoretical Physics Division, Department of Physics, Udayana University (Indonesia); Trisnawati, N. L. P., E-mail: nlptrisnawati@gmail.com [Biophysics, Department of Physics, Udayana University (Indonesia)

    2016-03-11

    An opinion dynamics model based on the quantum formalism is proposed. The core of the quantum formalism is the spin-1/2 dynamics system. In this research the implicit time evolution operators are derived. The analogy between this model and the Deffuant and Sznajd models is discussed.

  14. Fragmentation of nest and foraging habitat affects time budgets of solitary bees, their fitness and pollination services, depending on traits: Results from an individual-based model.

    Science.gov (United States)

    Everaars, Jeroen; Settele, Josef; Dormann, Carsten F

    2018-01-01

    Solitary bees are important but declining wild pollinators. During daily foraging in agricultural landscapes, they encounter a mosaic of patches with nest and foraging habitat and unsuitable matrix. It is insufficiently clear how the spatial allocation of nesting and foraging resources and the foraging traits of bees affect their daily foraging performance. We investigated potential brood cell construction (as a proxy of fitness), number of visited flowers, foraging habitat visitation and foraging distance (pollination proxies) with the model SOLBEE (simulating pollen transport by solitary bees, tested and validated in an earlier study), for landscapes varying in landscape fragmentation and spatial allocation of nesting and foraging resources. Simulated bees varied in body size and nesting preference. We aimed to understand the effects of landscape fragmentation and bee traits on bee fitness and the pollination services bees provide, as well as interactions between them, and the general consequences for our understanding of the system. This broad scope gives multiple key results. 1) Body size determines fitness more than landscape fragmentation, with large bees building fewer brood cells. High pollen requirements for large bees and the related high time budgets for visiting many flowers may not compensate for faster flight speeds and short handling times on flowers, giving them overall a disadvantage compared to small bees. 2) Nest preference does affect the distribution of bees over the landscape, with cavity-nesting bees being restricted to nesting along field edges, which inevitably leads to performance reductions. Fragmentation mitigates this for cavity-nesting bees through increased edge habitat. 3) Landscape fragmentation alone had a relatively small effect on all responses. Instead, the local ratio of nest to foraging habitat affected bee fitness positively through reduced local competition. The spatial coverage of pollination increases steeply in response to this ratio

  15. Fragmentation of nest and foraging habitat affects time budgets of solitary bees, their fitness and pollination services, depending on traits: Results from an individual-based model.

    Directory of Open Access Journals (Sweden)

    Jeroen Everaars

    Full Text Available Solitary bees are important but declining wild pollinators. During daily foraging in agricultural landscapes, they encounter a mosaic of patches with nest and foraging habitat and unsuitable matrix. It is insufficiently clear how the spatial allocation of nesting and foraging resources and the foraging traits of bees affect their daily foraging performance. We investigated potential brood cell construction (as a proxy of fitness), number of visited flowers, foraging habitat visitation and foraging distance (pollination proxies) with the model SOLBEE (simulating pollen transport by solitary bees, tested and validated in an earlier study), for landscapes varying in landscape fragmentation and spatial allocation of nesting and foraging resources. Simulated bees varied in body size and nesting preference. We aimed to understand the effects of landscape fragmentation and bee traits on bee fitness and the pollination services bees provide, as well as interactions between them, and the general consequences for our understanding of the system. This broad scope gives multiple key results. 1) Body size determines fitness more than landscape fragmentation, with large bees building fewer brood cells. High pollen requirements for large bees and the related high time budgets for visiting many flowers may not compensate for faster flight speeds and short handling times on flowers, giving them overall a disadvantage compared to small bees. 2) Nest preference does affect the distribution of bees over the landscape, with cavity-nesting bees being restricted to nesting along field edges, which inevitably leads to performance reductions. Fragmentation mitigates this for cavity-nesting bees through increased edge habitat. 3) Landscape fragmentation alone had a relatively small effect on all responses. Instead, the local ratio of nest to foraging habitat affected bee fitness positively through reduced local competition. The spatial coverage of pollination increases steeply in response to this ratio

  16. Modelling of Patterns in Space and Time

    CERN Document Server

    Murray, James

    1984-01-01

    This volume contains a selection of papers presented at the workshop "Modelling of Patterns in Space and Time", organized by the Sonderforschungsbereich 123, "Stochastische Mathematische Modelle", in Heidelberg, July 4-8, 1983. The main aim of this workshop was to bring together physicists, chemists, biologists and mathematicians for an exchange of ideas and results in modelling patterns. Since the mathematical problems arising depend only partially on the particular field of application, the interdisciplinary cooperation proved very useful. The workshop mainly treated phenomena showing spatial structures. The special areas covered were morphogenesis, growth in cell cultures, competition systems, structured populations, chemotaxis, chemical precipitation, space-time oscillations in chemical reactors, patterns in flames and fluids, and mathematical methods. The discussions between experimentalists and theoreticians were especially interesting and effective. The editors hope that these proceedings reflect ...

  17. Extended Cellular Automata Models of Particles and Space-Time

    Science.gov (United States)

    Beedle, Michael

    2005-04-01

    Models of particles and space-time are explored through simulations and theoretical model