A Dynamic Travel Time Estimation Model Based on Connected Vehicles
Directory of Open Access Journals (Sweden)
Daxin Tian
2015-01-01
Full Text Available With advances in connected vehicle technology, dynamic vehicle route guidance models are gradually becoming indispensable aids for drivers. Traditional route guidance models direct a vehicle along the shortest path from origin to destination without considering dynamic traffic information. In this paper a dynamic travel time estimation model is presented which can collect and distribute traffic data based on connected vehicles. To estimate real-time travel time more accurately, a road link dynamic dividing algorithm is proposed. The efficiency of the model is confirmed by simulations, and the experimental results prove the effectiveness of the travel time estimation method.
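The core idea, estimating per-link travel times from reports streamed by connected vehicles, can be sketched as follows. This is an illustrative Python toy, not the paper's model: the class name, the recency weighting, and the fixed report window are all assumptions, and the road link dividing algorithm itself is not reproduced.

```python
from collections import defaultdict, deque

class LinkTravelTimeEstimator:
    """Toy sketch: estimate per-link travel time from connected-vehicle
    reports, weighting recent reports more heavily (exponential decay).
    Hypothetical stand-in for the paper's dynamic estimation model."""

    def __init__(self, decay=0.5, window=10):
        self.decay = decay
        self.reports = defaultdict(lambda: deque(maxlen=window))

    def report(self, link_id, travel_time):
        self.reports[link_id].append(travel_time)

    def estimate(self, link_id):
        times = self.reports[link_id]
        if not times:
            return None
        # newest report gets weight 1, older ones decay geometrically
        weights = [self.decay ** i for i in range(len(times) - 1, -1, -1)]
        return sum(w * t for w, t in zip(weights, times)) / sum(weights)

est = LinkTravelTimeEstimator()
for t in [60.0, 62.0, 80.0]:   # congestion building on link 7
    est.report(7, t)
print(round(est.estimate(7), 1))  # weighted toward the latest 80 s report
```

The recency weighting is what makes the estimate "dynamic": a plain mean would report 67.3 s here, while the decayed average leans toward the most recent, congested observation.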
Model-Based Real-Time Head Tracking
Directory of Open Access Journals (Sweden)
Ström Jacob
2002-01-01
Full Text Available This paper treats real-time tracking of a human head using an analysis-by-synthesis approach. The work is based on the Structure from Motion (SfM) algorithm from Azarbayejani and Pentland (1995). We analyze the convergence properties of the SfM algorithm for planar objects, and extend it to handle new points. The extended algorithm is then used for head tracking. The system tracks feature points in the image using a texture-mapped three-dimensional model of the head. The texture is updated adaptively so that points in the ear region can be tracked when the user's head is rotated far, allowing out-of-plane rotation without losing track. The covariances of the feature-point image coordinates are estimated and forwarded to the Kalman filter, making the tracker robust to occlusion. The system automatically detects tracking failure and reinitializes the algorithm using information gathered in the original initialization process.
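The occlusion robustness comes from forwarding per-feature measurement covariances to the Kalman filter. A minimal scalar sketch (illustrative Python, not the paper's implementation; all numbers are made up) shows the mechanism: a large measurement variance, as for an occluded feature point, yields a small Kalman gain, so the unreliable measurement barely moves the estimate.

```python
def kalman_update(x, p, z, r):
    """One scalar Kalman measurement update.

    x, p : prior state estimate and its variance
    z, r : measurement and its variance (the per-feature covariance the
           tracker forwards; large r ~ occluded/unreliable feature point)
    """
    k = p / (p + r)          # Kalman gain
    x_new = x + k * (z - x)  # blend prior and measurement
    p_new = (1 - k) * p      # reduced uncertainty
    return x_new, p_new

# a reliable measurement pulls the estimate strongly...
print(kalman_update(0.0, 1.0, 10.0, 0.1))    # gain ~ 0.91
# ...an "occluded" one (huge variance) barely moves it
print(kalman_update(0.0, 1.0, 10.0, 100.0))  # gain ~ 0.01
```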
SEM Based CARMA Time Series Modeling for Arbitrary N.
Oud, Johan H L; Voelkle, Manuel C; Driver, Charles C
2018-01-01
This article explains in detail the state space specification and estimation of first and higher-order autoregressive moving-average models in continuous time (CARMA) in an extended structural equation modeling (SEM) context for N = 1 as well as N > 1. To illustrate the approach, simulations will be presented in which a single panel model (T = 41 time points) is estimated for a sample of N = 1,000 individuals as well as for samples of N = 100 and N = 50 individuals, followed by estimating 100 separate models for each of the one-hundred N = 1 cases in the N = 100 sample. Furthermore, we will demonstrate how to test the difference between the full panel model and each N = 1 model by means of a subject-group-reproducibility test. Finally, the proposed analyses will be applied in an empirical example, in which the relationships between mood at work and mood at home are studied in a sample of N = 55 women. All analyses are carried out by ctsem, an R-package for continuous time modeling, interfacing to OpenMx.
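The key identity behind continuous-time modeling of this kind is that a first-order continuous-time autoregressive (CAR(1)) process observed at an interval dt behaves as a discrete AR(1) with coefficient exp(a*dt), so irregular observation intervals are handled exactly. The sketch below (illustrative Python, not ctsem or its SEM machinery; parameter values are arbitrary) simulates such a process at irregular time points.

```python
import math, random

def simulate_car1(a, sigma, times, seed=1):
    """Exact discrete-time simulation of a CAR(1) process
    dx = a*x dt + sigma dW, observed at irregular times (a < 0 for
    stability). The AR coefficient over a gap dt is exp(a*dt), the
    identity that continuous-time (CARMA/SEM) estimation exploits."""
    rng = random.Random(seed)
    x = 0.0
    out = [x]
    for t0, t1 in zip(times, times[1:]):
        dt = t1 - t0
        phi = math.exp(a * dt)                      # AR(1) coefficient
        q = sigma ** 2 * (1 - phi ** 2) / (-2 * a)  # exact innovation variance
        x = phi * x + math.sqrt(q) * rng.gauss(0, 1)
        out.append(x)
    return out

xs = simulate_car1(a=-0.5, sigma=1.0, times=[0, 1, 2.5, 3, 10])
print(len(xs))  # 5
```

Estimation then amounts to recovering a (the drift) from data like this, which CARMA state-space software does by maximum likelihood over exactly these transition densities.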
Time-of-flight estimation based on covariance models
van der Heijden, Ferdinand; Tuquerres, G.; Regtien, Paulus P.L.
We address the problem of estimating the time-of-flight (ToF) of a waveform that is disturbed heavily by additional reflections from nearby objects. These additional reflections cause interference patterns that are difficult to predict. The introduction of a model for the reflection in terms of a
A unified model of time perception accounts for duration-based and beat-based timing mechanisms
Directory of Open Access Journals (Sweden)
Sundeep eTeki
2012-01-01
Full Text Available Accurate timing is an integral aspect of sensory and motor processes such as the perception of speech and music and the execution of skilled movement. Neuropsychological studies of time perception in patient groups and functional neuroimaging studies of timing in normal participants suggest common neural substrates for perceptual and motor timing. A timing system is implicated in core regions of the motor network such as the cerebellum, inferior olive, basal ganglia, pre-supplementary and supplementary motor area, pre-motor cortex, and higher regions such as the prefrontal cortex. In this article, we assess how distinct parts of the timing system subserve different aspects of perceptual timing. We previously established brain bases for absolute, duration-based timing and relative, beat-based timing in the olivocerebellar and striato-thalamo-cortical circuits, respectively (Teki et al., 2011). However, neurophysiological and neuroanatomical studies provide a basis to suggest that the timing functions of these circuits may not be independent. Here, we propose a unified model of time perception based on coordinated activity in the core striatal and olivocerebellar networks that are interconnected with each other and the cerebral cortex.
Real-time GPS Satellite Clock Error Prediction Based On Non-stationary Time Series Model
Wang, Q.; Xu, G.; Wang, F.
2009-04-01
Analysis Centers of the IGS provide precise satellite ephemeris for GPS data post-processing. The accuracy of the orbit products is better than 5 cm, and that of the satellite clock errors (SCE) approaches 0.1 ns (igscb.jpl.nasa.gov), which meets the requirements of precise point positioning (PPP). Due to the 13-day latency of the IGS final products, only the broadcast ephemeris and the (predicted) IGS ultra-rapid products are applicable for real-time PPP (RT-PPP). Therefore, developing an approach to estimate high-precision GPS SCE in real time is of particular importance for RT-PPP. Many studies have been carried out on forecasting the corrections using models such as the Linear Model (LM), Quadratic Polynomial Model (QPM), Quadratic Polynomial Model with Cyclic corrected Terms (QPM+CT), Grey Model (GM) and Kalman Filter Model (KFM). However, the precision of these models is generally at the nanosecond level. The purpose of this study is to develop a method with which SCE forecasting for RT-PPP can reach sub-nanosecond precision. Analysis of the last 8 years of IGS SCE data showed that prediction precision depends on the stability of the individual satellite clock. The clocks of the most recent GPS satellites (BLOCK IIR and BLOCK IIR-M) are more stable than those of the former GPS satellites (BLOCK IIA). For a stable satellite clock, the next 6 hours of SCE can easily be predicted with the LM. The residuals of unstable satellite clocks are periodic with noise components. Dominant periods of the residuals are found by using the Fourier transform and spectrum analysis. For the remaining part of the residuals, an auto-regression model is used to determine their systematic trends. Summarizing, a non-stationary time series model is proposed to predict GPS SCE in real time. This prediction model includes a linear term, cyclic corrected terms and an auto-regression term, which represent the SCE trend, the cyclic parts and the rest of the errors, respectively.
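The linear-plus-cyclic structure of such a predictor can be sketched as below (illustrative Python, not the authors' code). The dominant period is assumed already known from spectrum analysis, the AR term for the remaining residual is omitted for brevity, and all data values are synthetic.

```python
import math

def fit_trend_plus_cycle(t, y, period):
    """Least-squares fit of y ~ c0 + c1*t + A*sin(2*pi*t/T) + B*cos(2*pi*t/T)
    by sequential fitting: line first, then sin/cos projection of the
    residual (adequate here because t is a long, regular grid covering
    whole cycles). Mirrors the abstract's linear + cyclic-corrected terms."""
    n = len(t)
    tm = sum(t) / n
    ym = sum(y) / n
    c1 = sum((ti - tm) * (yi - ym) for ti, yi in zip(t, y)) / \
         sum((ti - tm) ** 2 for ti in t)
    c0 = ym - c1 * tm
    resid = [yi - (c0 + c1 * ti) for ti, yi in zip(t, y)]
    w = 2 * math.pi / period
    A = 2 / n * sum(r * math.sin(w * ti) for ti, r in zip(t, resid))
    B = 2 / n * sum(r * math.cos(w * ti) for ti, r in zip(t, resid))
    return lambda tt: c0 + c1 * tt + A * math.sin(w * tt) + B * math.cos(w * tt)

# synthetic "clock error": trend + one dominant cycle
t = list(range(200))
y = [3.0 + 0.01 * ti + 0.5 * math.sin(2 * math.pi * ti / 20) for ti in t]
model = fit_trend_plus_cycle(t, y, period=20)
print(abs(model(205) - (3.0 + 0.01 * 205 + 0.5 * math.sin(2 * math.pi * 205 / 20))))
```

On this noise-free toy the short-horizon extrapolation error stays small, which is the role the cyclic correction plays before the AR term mops up what is left.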
Model based Computerized Ionospheric Tomography in space and time
Tuna, Hakan; Arikan, Orhan; Arikan, Feza
2018-04-01
Reconstruction of the ionospheric electron density distribution in space and time not only provides a basis for better understanding the physical nature of the ionosphere, but also provides improvements in various applications, including HF communication. The recently developed IONOLAB-CIT technique provides a physically admissible 3D model of the ionosphere by using both Slant Total Electron Content (STEC) measurements obtained from a GPS satellite - receiver network and the IRI-Plas model. The IONOLAB-CIT technique optimizes IRI-Plas model parameters in the region of interest such that the synthetic STEC computations obtained from the IRI-Plas model are in accordance with the actual STEC measurements. In this work, the IONOLAB-CIT technique is extended to provide reconstructions both in space and time. This extension exploits the temporal continuity of the ionosphere to provide more reliable reconstructions with a reduced computational load. The proposed 4D-IONOLAB-CIT technique is validated on real measurement data obtained from the TNPGN-Active GPS receiver network in Turkey.
A Feature Fusion Based Forecasting Model for Financial Time Series
Guo, Zhiqiang; Wang, Huaiqing; Liu, Quan; Yang, Jie
2014-01-01
Predicting the stock market has become an increasingly interesting research area for both researchers and investors, and many prediction models have been proposed. In these models, feature selection techniques are used to pre-process the raw data and remove noise. In this paper, a prediction model is constructed to forecast stock market behavior with the aid of independent component analysis, canonical correlation analysis, and a support vector machine. First, two types of features are extracted from the historical closing prices and 39 technical variables obtained by independent component analysis. Second, a canonical correlation analysis method is utilized to combine the two types of features and extract intrinsic features to improve the performance of the prediction model. Finally, a support vector machine is applied to forecast the next day's closing price. The proposed model is applied to the Shanghai stock market index and the Dow Jones index, and experimental results show that it outperforms two similar models in prediction accuracy. PMID:24971455
Model-Checking of Component-Based Event-Driven Real-Time Embedded Software
National Research Council Canada - National Science Library
Gu, Zonghua; Shin, Kang G
2005-01-01
We discuss the application of model-checking to verify system-level concurrency properties of component-based real-time embedded software based on the CORBA Event Service, using Avionics Mission Computing...
A prediction method based on wavelet transform and multiple models fusion for chaotic time series
International Nuclear Information System (INIS)
Zhongda, Tian; Shujiang, Li; Yanhong, Wang; Yi, Sha
2017-01-01
In order to improve the prediction accuracy of chaotic time series, a prediction method based on wavelet transform and multiple-model fusion is proposed. The chaotic time series is decomposed and reconstructed by wavelet transform, yielding approximation components and detail components. According to the different characteristics of each component, a least squares support vector machine (LSSVM) is used as the predictive model for the approximation components, with an improved free search algorithm utilized to optimize the predictive model parameters, while an auto-regressive integrated moving average (ARIMA) model is used as the predictive model for the detail components. The predictions of the multiple models are fused by the Gauss–Markov algorithm; the error variance of the fused result is less than that of any single model, so the prediction accuracy is improved. The method is evaluated on two typical chaotic time series, the Lorenz and Mackey–Glass time series. The simulation results show that the proposed method achieves better prediction accuracy.
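The fusion step is the part that guarantees improvement: for unbiased predictors, the minimum-variance (Gauss–Markov) combination is inverse-variance weighting, and the fused variance is below every input variance. A small sketch (illustrative Python; the predictor values and variances are made-up numbers, not LSSVM/ARIMA outputs):

```python
def gauss_markov_fuse(preds, variances):
    """Minimum-variance (Gauss-Markov) fusion of unbiased predictors via
    inverse-variance weighting. The fused variance 1/sum(1/v_i) is smaller
    than every input variance, the benefit the abstract reports."""
    inv = [1.0 / v for v in variances]
    s = sum(inv)
    fused = sum(p * w for p, w in zip(preds, inv)) / s
    return fused, 1.0 / s

# two hypothetical forecasts of the same next value, e.g. from two models
fused, var = gauss_markov_fuse([1.20, 1.05], [0.04, 0.09])
print(round(fused, 3), round(var, 3))  # estimate between the two, variance below both
```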
Timing-based business models for flexibility creation in the electric power sector
International Nuclear Information System (INIS)
Helms, Thorsten; Loock, Moritz; Bohnsack, René
2016-01-01
Energy policies in many countries push for an increase in the generation of wind and solar power. Along with these developments, the balance between supply and demand becomes more challenging, as the generation of wind and solar power is volatile, and flexibility of supply and demand becomes valuable. As a consequence, companies in the electric power sector develop new business models that create flexibility through activities of timing supply and demand. Based on an extensive qualitative analysis of interviews and industry research in the energy industry, this paper explores the role of timing-based business models in the power sector and sheds light on the mechanisms of flexibility creation through timing. In particular, we distill four ideal-type business models of flexibility creation through timing and reveal how they can be classified along two dimensions, namely costs of multiplicity and intervention costs. We put forward that these business models offer 'coupled services', combining resource-centered and service-centered perspectives. This complementary character has important implications for energy policy. - Highlights: • Explores timing-based business models providing flexibility in the energy industry. • Timing-based business models can be classified on two dimensions. • Timing-based business models offer 'coupled services'. • 'Coupled services' couple timing as a service with supply- or demand-side valuables. • Policy and managerial implications for energy market design.
Model-Based Real Time Assessment of Capability Left for Spacecraft Under Failure Mode, Phase I
National Aeronautics and Space Administration — The proposed project is aimed at developing a model based diagnostics system for spacecraft that will allow real time assessment of its state, while it is impacted...
Solution algorithm of dwell time in slope-based figuring model
Li, Yong; Zhou, Lin
2017-10-01
Surface slope profile is commonly used to evaluate X-ray reflective optics, which are used in synchrotron radiation beamlines. Moreover, the measurement result of measuring instruments for X-ray reflective optics is usually the surface slope profile rather than the surface height profile. To avoid the conversion error that arises when processing X-ray reflective optics with the surface height-based model, a slope-based figuring model is introduced. However, the pulse iteration method, which can quickly obtain the dwell time solution of the traditional height-based figuring model, is not applicable to the slope-based figuring model, because the slope removal function has both positive and negative values and a complex asymmetric structure. To overcome this problem, we established an optimization model for the dwell time solution by introducing upper and lower limits on the dwell time and a time-gradient constraint, and used a constrained least-squares algorithm to solve for the dwell time in the slope-based figuring model. To validate the proposed algorithm, simulations and experiments were conducted. A flat mirror with an effective aperture of 80 mm was polished on an ion beam machine. After three iterations of polishing, the surface slope profile error of the workpiece converged from 5.65 μrad RMS to 1.12 μrad RMS.
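The shape of the bounded least-squares dwell-time problem can be sketched as below. This is an illustrative Python toy, not the authors' solver: the removal matrix and limits are invented, the time-gradient constraint is omitted, and a crude projected-gradient iteration stands in for a proper constrained least-squares routine.

```python
def bounded_lsq(A, b, lo, hi, iters=2000, lr=None):
    """Tiny projected-gradient solver for min ||A t - b||^2 s.t. lo <= t <= hi,
    a stand-in for the constrained least-squares dwell-time solution.
    A is a list of rows; t is the dwell-time vector."""
    m, n = len(A), len(A[0])
    t = [(lo + hi) / 2.0] * n
    if lr is None:  # crude step size from the largest row norm
        lr = 1.0 / max(sum(a * a for a in row) for row in A) / m
    for _ in range(iters):
        r = [sum(A[i][j] * t[j] for j in range(n)) - b[i] for i in range(m)]
        g = [2 * sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        t = [min(hi, max(lo, tj - lr * gj)) for tj, gj in zip(t, g)]
    return t

# toy removal matrix: each column is the removal footprint of one dwell spot
A = [[1.0, 0.3, 0.0],
     [0.3, 1.0, 0.3],
     [0.0, 0.3, 1.0]]
b = [0.8, 1.2, 0.6]   # desired material removal
t = bounded_lsq(A, b, lo=0.0, hi=5.0)
print([round(x, 2) for x in t])  # non-negative dwell times within limits
```

In practice a dedicated bounded least-squares solver would replace the hand-rolled iteration; the point is that box constraints keep every dwell time physically realizable while the residual removal error is minimized.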
An assembly process model based on object-oriented hierarchical time Petri Nets
Wang, Jiapeng; Liu, Shaoli; Liu, Jianhua; Du, Zenghui
2017-04-01
In order to improve the versatility, accuracy and integrity of assembly process models of complex products, an assembly process model based on object-oriented hierarchical time Petri Nets is presented. A complete assembly process information model including assembly resources, assembly inspection, time, structure and flexible parts is established, and this model describes the static and dynamic data involved in the assembly process. Through the analysis of three-dimensional assembly process information, the assembly information is hierarchically divided from the whole, through the local, down to the details, and subnet models of the different levels of object-oriented Petri Nets are established. The communication problem between Petri subnets is solved by using a message database, which effectively reduces the complexity of system modeling. Finally, the modeling process is presented, and a five-layer Petri Nets model is established based on the hoisting process of the engine compartment of a wheeled armored vehicle.
Research on power grid loss prediction model based on Granger causality property of time series
Energy Technology Data Exchange (ETDEWEB)
Wang, J. [North China Electric Power Univ., Beijing (China); State Grid Corp., Beijing (China); Yan, W.P.; Yuan, J. [North China Electric Power Univ., Beijing (China); Xu, H.M.; Wang, X.L. [State Grid Information and Telecommunications Corp., Beijing (China)
2009-03-11
This paper described a method of predicting power transmission line losses using the Granger causality property of time series. The stable property of the time series was investigated using unit root tests. The Granger causality relationship between line losses and other variables was then determined. Granger-caused time series were then used to create the following 3 prediction models: (1) a model based on line loss binomials that used electricity sales to predict variables, (2) a model that considered both power sales and grid capacity, and (3) a model based on autoregressive distributed lag (ARDL) approaches that incorporated both power sales and the square of power sales as variables. A case study of data from China's electric power grid between 1980 and 2008 was used to evaluate model performance. Results of the study showed that the model error rates ranged between 2.7 and 3.9 percent. 6 refs., 3 tabs., 1 fig.
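The pre-modelling step, testing whether a candidate variable Granger-causes line losses, can be sketched as a one-lag F-test (illustrative Python on synthetic data, not the paper's data or full procedure; real use should select the lag order and run stationarity tests first, as the abstract describes):

```python
import random

def rss(y, X):
    """Residual sum of squares of OLS y ~ X (X: list of columns, incl.
    constant), via normal equations and Gauss-Jordan elimination."""
    k, n = len(X), len(y)
    G = [[sum(X[i][t] * X[j][t] for t in range(n)) for j in range(k)] +
         [sum(X[i][t] * y[t] for t in range(n))] for i in range(k)]
    for c in range(k):
        p = max(range(c, k), key=lambda r_: abs(G[r_][c]))  # partial pivot
        G[c], G[p] = G[p], G[c]
        for r_ in range(k):
            if r_ != c:
                f = G[r_][c] / G[c][c]
                G[r_] = [a - f * b for a, b in zip(G[r_], G[c])]
    beta = [G[i][k] / G[i][i] for i in range(k)]
    return sum((y[t] - sum(beta[i] * X[i][t] for i in range(k))) ** 2
               for t in range(n))

def granger_f(y, x):
    """1-lag Granger F-statistic: does lagged x improve prediction of y
    beyond y's own lag?"""
    yt, y1, x1 = y[1:], y[:-1], x[:-1]
    const = [1.0] * len(yt)
    rss_r = rss(yt, [const, y1])        # restricted: y on its own lag
    rss_u = rss(yt, [const, y1, x1])    # unrestricted: plus lagged x
    n = len(yt)
    return (rss_r - rss_u) / (rss_u / (n - 3))

rng = random.Random(0)
x = [rng.gauss(0, 1) for _ in range(300)]
y = [0.0]
for t in range(1, 300):                 # x genuinely drives y
    y.append(0.5 * y[-1] + 0.8 * x[t - 1] + 0.1 * rng.gauss(0, 1))
print(granger_f(y, x) > 4.0)  # True: strongly significant
```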
Efficient model checking for duration calculus based on branching-time approximations
DEFF Research Database (Denmark)
Fränzle, Martin; Hansen, Michael Reichhardt
2008-01-01
Duration Calculus (abbreviated to DC) is an interval-based, metric-time temporal logic designed for reasoning about embedded real-time systems at a high level of abstraction. But the complexity of model checking any decidable fragment featuring both negation and chop, DC's only modality, is non...
Directory of Open Access Journals (Sweden)
Parneet Paul
2013-02-01
Full Text Available The computer modelling and simulation of wastewater treatment plants and their specific technologies, such as membrane bioreactors (MBRs), are becoming increasingly useful to consultant engineers when designing, upgrading, retrofitting, operating and controlling these plants. This research uses traditional phenomenological mechanistic models based on MBR filtration and biochemical processes to measure the effectiveness of alternative and novel time series models based upon input–output system identification methods. Both model types are calibrated and validated using similar plant layouts and data sets derived for this purpose. Results prove that although both approaches have their advantages, they also have specific disadvantages. In conclusion, the MBR plant designer and/or operator who wishes to use good-quality, calibrated models to gain a better understanding of their process should carefully consider which model type to select based upon their initial modelling objectives. Each situation usually proves unique.
Zhang, Baohui
Modeling has been promoted by major policy organizations as important for science learning. The purpose of this dissertation is to describe and explore middle school science students' computer-based modeling practices and their changes over time using a scaffolded modeling program. Following a "design-based research" approach, this study was conducted at an independent school. Seventh graders from three classes taught by two experienced teachers participated. Two pairs of target students were chosen from each class for observation. Students created computer-based models after their investigations in a water quality unit and a decomposition unit. The initial modeling cycle for water quality lasted for four days in the fall season, the second cycle for water quality lasted three days in the winter season, and the third cycle for decomposition lasted two days in the spring season. The major data source is video that captured student pairs' computer screen activities and their conversations. Supplementary data include classroom videos of those modeling cycles, replicated students' final models, and models in production. The data were analyzed in terms of the efficiency, meaningfulness, and purposefulness of students' modeling practices. Students' understanding of content, models and modeling, metacognition, and collaboration and their changes were analyzed as secondary learning outcomes. This dissertation shows that with appropriate scaffolding from the modeling program and the teachers, students performed a variety of modeling practices that are valued by science educators, such as planning, analyzing, synthesizing, evaluating, and publicizing. In general, student modeling practices became more efficient, meaningful, and purposeful over time. During their modeling practices, students also made use of and improved content knowledge, understanding of models and modeling, metacognition, and collaboration. Suggestions for improving the modeling program and the learning
Zhang, Jian; Yang, Xiao-hua; Chen, Xiao-juan
2015-01-01
Due to nonlinear and multiscale characteristics of temperature time series, a new model called wavelet network model based on multiple criteria decision making (WNMCDM) has been proposed, which combines the advantage of wavelet analysis, multiple criteria decision making, and artificial neural network. One case for forecasting extreme monthly maximum temperature of Miyun Reservoir has been conducted to examine the performance of WNMCDM model. Compared with nearest neighbor bootstrapping regr...
RB Particle Filter Time Synchronization Algorithm Based on the DPM Model
Guo, Chunsheng; Shen, Jia; Sun, Yao; Ying, Na
2015-01-01
Time synchronization is essential for node localization, target tracking, data fusion, and various other Wireless Sensor Network (WSN) applications. To improve the estimation accuracy of continuous clock offset and skew of mobile nodes in WSNs, we propose a novel time synchronization algorithm, the Rao-Blackwellised (RB) particle filter time synchronization algorithm based on the Dirichlet process mixture (DPM) model. In a state-space equation with a linear substructure, state variables are d...
Individual-based modelling of population growth and diffusion in discrete time.
Directory of Open Access Journals (Sweden)
Natalie Tkachenko
Full Text Available Individual-based models (IBMs) of human populations capture spatio-temporal dynamics using rules that govern the birth, behavior, and death of individuals. We explore a stochastic IBM of logistic growth-diffusion with constant time steps and independent, simultaneous actions of birth, death, and movement that approaches the Fisher-Kolmogorov model in the continuum limit. This model is well-suited to parallelization on high-performance computers. We explore its emergent properties with analytical approximations and numerical simulations in parameter ranges relevant to human population dynamics and ecology, and reproduce continuous-time results in the limit of small transition probabilities. Our model prediction indicates that the population density and dispersal speed are affected by fluctuations in the number of individuals. The discrete-time model displays novel properties owing to the binomial character of the fluctuations: in certain regimes of the growth model, a decrease in time step size drives the system away from the continuum limit. These effects are especially important at local population sizes of <50 individuals, which largely correspond to group sizes of hunter-gatherers. As an application scenario, we model the late Pleistocene dispersal of Homo sapiens into the Americas, and discuss the agreement of model-based estimates of first-arrival dates with archaeological dates as a function of IBM parameter settings.
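A minimal 1-D version of such a model can be sketched as follows (illustrative Python; the parameter values, ring geometry, and update rule details are assumptions, not the paper's specification). Each individual independently dies, hops to a neighbour, or stays, and surviving individuals reproduce with a density-regulated probability, so the demographic fluctuations are explicitly binomial.

```python
import random

def step(grid, K, b, d, m, rng):
    """One discrete time step of a 1-D individual-based logistic
    growth-diffusion model on a ring of cells."""
    n = len(grid)
    new = [0] * n
    for i, pop in enumerate(grid):
        for _ in range(pop):
            u = rng.random()
            if u < d:
                continue                              # death
            if u < d + m:                             # unbiased hop
                new[(i + rng.choice([-1, 1])) % n] += 1
            else:
                new[i] += 1                           # stay
            # density-regulated birth; offspring stays in the natal cell
            if rng.random() < b * max(0.0, 1 - pop / K):
                new[i] += 1
    return new

rng = random.Random(42)
grid = [0] * 50
grid[0] = 20                       # founding population at one cell
for _ in range(200):
    grid = step(grid, K=50, b=0.3, d=0.1, m=0.2, rng=rng)
print(sum(grid), grid[25])         # growth toward capacity, spread to cell 25
```

In the continuum limit of small per-step probabilities this converges toward Fisher-Kolmogorov growth-diffusion; at small local populations the binomial noise visible here is exactly what drives the deviations the abstract describes.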
Fluctuation complexity of agent-based financial time series model by stochastic Potts system
Hong, Weijia; Wang, Jun
2015-03-01
A financial market is a complex, evolving dynamic system with high volatility and noise, and the modeling and analysis of financial time series are regarded as rather challenging tasks in financial research. In this work, by applying the Potts dynamic system, a random agent-based financial time series model is developed in an attempt to uncover empirical laws in finance, where the Potts model is introduced to imitate the trading interactions among investing agents. Based on computer simulation in conjunction with statistical and nonlinear analysis, we present numerical research to investigate the fluctuation behaviors of the proposed time series model. Furthermore, in order to reach a robust conclusion, we consider the daily returns of the Shanghai Composite Index and the Shenzhen Component Index, and a comparison of return behaviors between the simulated data and the actual data is exhibited.
A bootstrap based space-time surveillance model with an application to crime occurrences
Kim, Youngho; O'Kelly, Morton
2008-06-01
This study proposes a bootstrap-based space-time surveillance model. Designed to find emerging hotspots in near-real time, the bootstrap-based model is characterized by its use of past occurrence information and bootstrap permutations. Many existing space-time surveillance methods, which use population-at-risk data to generate expected values, produce hotspots bounded by administrative area units and are of limited use for near-real-time applications because of the population data needed. In contrast, this study generates expected values for local hotspots from past occurrences rather than population at risk, and bootstrap permutations of previous occurrences are used for significance tests. Consequently, the bootstrap-based model, without the requirement of population-at-risk data, (1) is free from administrative area restrictions, (2) enables more frequent surveillance of continuously updated registry databases, and (3) is readily applicable to criminology and epidemiology surveillance. The bootstrap-based model performs better for space-time surveillance than the space-time scan statistic, as shown by means of simulations and an application to residential crime occurrences in Columbus, OH, in the year 2000.
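The core idea, judging today's count against resamples of the cell's own history instead of against population-at-risk data, can be sketched as follows (illustrative Python; the counts, the single-cell statistic, and the resampling scheme are simplifications of the paper's permutation procedure):

```python
import random

def bootstrap_pvalue(history, observed, n_boot=2000, seed=7):
    """Surveillance sketch: is today's count in one grid cell unusually
    high given its past daily counts? Expected behaviour comes from past
    occurrences; significance from bootstrap resampling of that history."""
    rng = random.Random(seed)
    exceed = sum(rng.choice(history) >= observed for _ in range(n_boot))
    return (exceed + 1) / (n_boot + 1)      # add-one to avoid p = 0

past = [2, 3, 1, 4, 2, 3, 2, 1, 3, 2]       # daily counts in one grid cell
print(bootstrap_pvalue(past, observed=3))   # ordinary day: large p
print(bootstrap_pvalue(past, observed=8))   # emerging hotspot: tiny p
```

Because no census denominator is involved, the same test can run on a free-form grid and re-run every time the registry database updates, which is the practical advantage the abstract claims.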
Model-based schedulability analysis of safety critical hard real-time Java programs
DEFF Research Database (Denmark)
Bøgholm, Thomas; Kragh-Hansen, Henrik; Olsen, Petur
2008-01-01
In this paper, we present a novel approach to schedulability analysis of Safety Critical Hard Real-Time Java programs. The approach is based on a translation of programs, written in the Safety Critical Java profile introduced in [21] for the Java Optimized Processor [18], to timed automata models verifiable by the Uppaal model checker [23]. Schedulability analysis is reduced to a simple reachability question, checking for deadlock freedom. Model-based schedulability analysis has been developed by Amnell et al. [2], but has so far only been applied to high-level specifications, not actual implementations in a programming language. Experiments show that model-based schedulability analysis can result in a more accurate analysis than possible with traditional approaches; thus systems deemed non-schedulable by traditional approaches may in fact be schedulable, as detected by our analysis. Our approach...
Modeling Financial Time Series Based on a Market Microstructure Model with Leverage Effect
Directory of Open Access Journals (Sweden)
Yanhui Xi
2016-01-01
Full Text Available The basic market microstructure model specifies that the price/return innovation and the volatility innovation are independent Gaussian white noise processes. However, the financial leverage effect has been found to be statistically significant in many financial time series. In this paper, a novel market microstructure model with leverage effects is proposed. The model specification assumes a negative correlation in the errors between the price/return innovation and the volatility innovation. With the new representation, a theoretical explanation of the leverage effect is provided. Simulated data and daily stock market indices (the Shanghai composite index, the Shenzhen component index, and the Standard and Poor's 500 Composite index) are used to estimate the leverage market microstructure model via the Bayesian Markov Chain Monte Carlo (MCMC) method. The results verify the effectiveness of the model and the proposed estimation approach, and also indicate that the stock markets have strong leverage effects. Compared with the classical leverage stochastic volatility (SV) model in terms of DIC (Deviance Information Criterion), the leverage market microstructure model fits the data better.
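The mechanism can be sketched by simulating correlated innovations and checking the resulting return-volatility correlation (illustrative Python; the state equation and parameter values are a generic stochastic-volatility-style stand-in, not the paper's exact microstructure specification):

```python
import math, random

def simulate_leverage(n=20000, rho=-0.7, seed=3):
    """Correlated return and volatility innovations (rho < 0 = leverage).
    Returns the sample correlation between today's return and the change
    in log-volatility, which should come out negative."""
    rng = random.Random(seed)
    h = 0.0                                  # log-volatility state
    rets, dhs = [], []
    for _ in range(n):
        e_p = rng.gauss(0, 1)
        e_v = rho * e_p + math.sqrt(1 - rho ** 2) * rng.gauss(0, 1)
        r = math.exp(h / 2) * e_p            # return under current volatility
        h_new = 0.95 * h + 0.2 * e_v         # persistent volatility
        rets.append(r)
        dhs.append(h_new - h)
        h = h_new
    mr, md = sum(rets) / n, sum(dhs) / n
    cov = sum((a - mr) * (b - md) for a, b in zip(rets, dhs)) / n
    vr = sum((a - mr) ** 2 for a in rets) / n
    vd = sum((b - md) ** 2 for b in dhs) / n
    return cov / math.sqrt(vr * vd)

print(simulate_leverage() < 0)  # True: negative return-volatility correlation
```

A negative shock to returns thus coincides with rising volatility, which is the leverage effect the Bayesian MCMC estimation detects in the real index data.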
Schenkl, Sebastian; Muggenthaler, Holger; Hubig, Michael; Erdmann, Bodo; Weiser, Martin; Zachow, Stefan; Heinrich, Andreas; Güttler, Felix Victor; Teichgräber, Ulf; Mall, Gita
2017-05-01
Temperature-based death time estimation is based either on simple phenomenological models of corpse cooling or on detailed physical heat transfer models. The latter are much more complex but allow a higher accuracy of death time estimation, as in principle all relevant cooling mechanisms can be taken into account. Here, a complete workflow for finite element-based cooling simulation is presented. The following steps are demonstrated on a CT phantom: (1) computer tomography (CT) scan; (2) segmentation of the CT images for thermodynamically relevant features of individual geometries and compilation into a geometric computer-aided design (CAD) model; (3) conversion of the segmentation result into a finite element (FE) simulation model; (4) computation of the model cooling curve (MOD); (5) calculation of the cooling time (CTE). For the first time in FE-based cooling time estimation, the steps from the CT image over segmentation to FE model generation are performed semi-automatically. The cooling time calculation results are compared to cooling measurements performed on the phantoms under controlled conditions, and in this context the method is validated using a CT phantom. Some of the phantoms' thermodynamic material parameters had to be determined via independent experiments. Moreover, the impact of geometry and material parameter uncertainties on the estimated cooling time is investigated by a sensitivity analysis.
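The final inversion step, computing the cooling time from a measured temperature and a model cooling curve, can be sketched as below. This is illustrative Python with a simple Newtonian cooling curve standing in for the FE-computed curve (MOD); real corpse cooling requires the detailed heat-transfer treatment described above, and all parameter values here are invented.

```python
import math

def model_temp(t, T0=37.2, T_env=18.0, k=0.08):
    """Toy Newtonian cooling curve (hours -> deg C), a stand-in for the
    FE model cooling curve."""
    return T_env + (T0 - T_env) * math.exp(-k * t)

def estimate_cooling_time(T_meas, lo=0.0, hi=100.0):
    """Invert the monotone model curve by bisection to get the cooling
    time (CTE) matching the measured temperature."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if model_temp(mid) > T_meas:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

t_true = 7.5                               # hours since death (synthetic)
T = model_temp(t_true)
print(round(estimate_cooling_time(T), 3))  # recovers ~7.5
```

The same bisection works unchanged when `model_temp` is replaced by an interpolated FE cooling curve, since that curve is also monotonically decreasing.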
DEFF Research Database (Denmark)
Kumar, Ashish; Vercruysse, Jurgen; Vanhoorne, Valerie
2015-01-01
within each module, where different granulation rate processes dominate over others. Currently, experimental data are used to determine the residence time distributions. In this study, a conceptual model based on classical chemical engineering methods is proposed to better understand and simulate the residence time distribution in a TSG. The experimental data were compared with the most suitable conceptual model to estimate the parameters of the model and to analyse and predict the effects of changes in the number of kneading discs and their stagger angle, screw speed and powder feed rate...
Directory of Open Access Journals (Sweden)
Oleg Svatos
2013-01-01
Full Text Available In this paper we analyze the complexity of time limits found especially in regulated processes of public administration. First we review the most popular process modeling languages. An example scenario based on current Czech legislation is defined and then captured in the discussed process modeling languages. The analysis shows that contemporary process modeling languages support capturing of time limits only partially. This causes trouble for analysts and unnecessary complexity in the models. Given these unsatisfying results, we analyze the complexity of time limits in greater detail and outline the lifecycle of a time limit using the multiple dynamic generalizations pattern. As an alternative to the popular process modeling languages, we present the PSD process modeling language, which supports the defined time limit lifecycles natively and therefore allows keeping the models simple and easy to understand.
Adjoint-based sensitivities and data assimilation with a time-dependent marine ice sheet model
Goldberg, Dan; Heimbach, Patrick
2013-04-01
To date, assimilation of observational data using large-scale ice models has consisted only of time-dependent inversions of surface velocities for basal traction, bed elevation, or ice stiffness. These inversions are for the most part based on control methods (Macayeal D R, 1992, A tutorial on the use of control methods in ice sheet modeling), which involve generating and solving the adjoint of the ice model. Quite a lot has been learned about the fast-flowing parts of the Antarctic Ice Sheet from such inversions. Still, there are limitations to these "snapshot" inversions. For instance, they cannot capture time-dependent dynamics, such as propagation of perturbations through the ice sheet. They cannot assimilate time-dependent observations, such as surface elevation changes. And they are problematic for initializing time-dependent ice sheet models, as such initializations may contain considerable model drift. We have developed an adjoint for a time-dependent land ice model, with which we will address such issues. The land ice model implements a hybrid shallow shelf-shallow ice stress balance and can represent the floating, fast-sliding, and frozen bed regimes of a marine ice sheet. The adjoint is generated by a combination of analytic methods and the use of automated differentiation (AD) software. Experiments with idealized geometries have been carried out; adjoint sensitivities reveal the "vulnerable" regions of ice shelves, and preliminary inversions of "synthetic" observations (e.g. simultaneous inversion of basal traction and topography) yield encouraging results.
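The adjoint machinery behind such inversions can be illustrated on a toy time-dependent model (illustrative Python, not the ice model or its AD-generated adjoint; the scalar recurrence and cost are invented). One backward sweep yields the exact sensitivity of the misfit to a model parameter, which a finite-difference check confirms:

```python
def forward(a, x0, n):
    """Toy time-dependent model: x_{k+1} = a * x_k (a stand-in for one
    ice-model time step with parameter a)."""
    xs = [x0]
    for _ in range(n):
        xs.append(a * xs[-1])
    return xs

def cost(a, x0, n, obs):
    """Misfit between the model trajectory and observations."""
    return 0.5 * sum((x - o) ** 2 for x, o in zip(forward(a, x0, n), obs))

def adjoint_grad(a, x0, n, obs):
    """Reverse (adjoint) sweep: a single backward pass gives dJ/da
    exactly, which is what AD tools produce for the real model."""
    xs = forward(a, x0, n)
    lam, grad = 0.0, 0.0
    for k in range(n, 0, -1):
        lam += xs[k] - obs[k]     # dJ/dx_k contribution
        grad += lam * xs[k - 1]   # since d(a*x_{k-1})/da = x_{k-1}
        lam *= a                  # propagate the adjoint backwards in time
    return grad

obs = [1.0, 0.8, 0.7, 0.6, 0.5]
g = adjoint_grad(0.9, 1.0, 4, obs)
eps = 1e-6                        # finite-difference check
fd = (cost(0.9 + eps, 1.0, 4, obs) - cost(0.9 - eps, 1.0, 4, obs)) / (2 * eps)
print(abs(g - fd) < 1e-4)  # True: adjoint matches finite differences
```

The advantage scales with dimension: the adjoint delivers sensitivities to millions of control parameters (basal traction, topography) at the cost of roughly one extra model run, where finite differences would need one run per parameter.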
Paper-Based Assessment of the Effects of Aging on Response Time: A Diffusion Model Analysis
Directory of Open Access Journals (Sweden)
Judith Dirk
2017-04-01
Full Text Available The effects of aging on response time were examined in a paper-based lexical-decision experiment with younger (age 18–36) and older (age 64–75) adults, applying Ratcliff’s diffusion model. Using digital pens allowed the paper-based assessment of response times for single items. Age differences previously reported by Ratcliff and colleagues in computer-based experiments were partly replicated: older adults responded more conservatively than younger adults and showed a slowing of their nondecision components of RT by 53 ms. The rates of evidence accumulation (drift rate) showed no age-related differences. Participants with a higher score in a vocabulary test also had higher drift rates. The experiment demonstrates the possibility to use formal processing models with paper-based tests.
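Ratcliff's full diffusion model is fit numerically, but the simplified EZ-diffusion method (Wagenmakers and colleagues) recovers the same three core parameters, drift rate, boundary separation, and nondecision time, in closed form from accuracy, mean RT, and RT variance. A minimal sketch; the example numbers are illustrative, not data from this study:

```python
import math

def ez_diffusion(pc, vrt, mrt, s=0.1):
    """Closed-form EZ-diffusion estimates from proportion correct (pc),
    correct-RT variance (vrt, s^2) and mean correct RT (mrt, s);
    s is the conventional scaling constant."""
    if pc in (0.0, 0.5, 1.0):
        raise ValueError("pc must not be 0, 0.5 or 1 (apply an edge correction)")
    L = math.log(pc / (1.0 - pc))                    # logit of accuracy
    x = L * (L * pc**2 - L * pc + pc - 0.5) / vrt
    v = math.copysign(s * abs(x) ** 0.25, pc - 0.5)  # drift rate
    a = s**2 * L / v                                 # boundary separation
    y = -v * a / s**2
    mdt = (a / (2.0 * v)) * (1.0 - math.exp(y)) / (1.0 + math.exp(y))
    ter = mrt - mdt                                  # nondecision time
    return v, a, ter

v, a, ter = ez_diffusion(pc=0.8, vrt=0.112, mrt=0.723)
```

For accuracy near chance or ceiling the closed form degenerates; the published method applies edge corrections before taking the logit.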
RB Particle Filter Time Synchronization Algorithm Based on the DPM Model
Directory of Open Access Journals (Sweden)
Chunsheng Guo
2015-09-01
Full Text Available Time synchronization is essential for node localization, target tracking, data fusion, and various other Wireless Sensor Network (WSN) applications. To improve the estimation accuracy of continuous clock offset and skew of mobile nodes in WSNs, we propose a novel time synchronization algorithm, the Rao-Blackwellised (RB) particle filter time synchronization algorithm based on the Dirichlet process mixture (DPM) model. In a state-space equation with a linear substructure, state variables are divided into linear and non-linear variables by the RB particle filter algorithm. These two variables can be estimated using Kalman filter and particle filter, respectively, which improves the computational efficiency more so than if only the particle filter was used. In addition, the DPM model is used to describe the distribution of non-deterministic delays and to automatically adjust the number of Gaussian mixture model components based on the observational data. This improves the estimation accuracy of clock offset and skew, which allows achieving the time synchronization. The time synchronization performance of this algorithm is also validated by computer simulations and experimental measurements. The results show that the proposed algorithm has a higher time synchronization precision than traditional time synchronization algorithms.
RB Particle Filter Time Synchronization Algorithm Based on the DPM Model.
Guo, Chunsheng; Shen, Jia; Sun, Yao; Ying, Na
2015-09-03
Time synchronization is essential for node localization, target tracking, data fusion, and various other Wireless Sensor Network (WSN) applications. To improve the estimation accuracy of continuous clock offset and skew of mobile nodes in WSNs, we propose a novel time synchronization algorithm, the Rao-Blackwellised (RB) particle filter time synchronization algorithm based on the Dirichlet process mixture (DPM) model. In a state-space equation with a linear substructure, state variables are divided into linear and non-linear variables by the RB particle filter algorithm. These two variables can be estimated using Kalman filter and particle filter, respectively, which improves the computational efficiency more so than if only the particle filter was used. In addition, the DPM model is used to describe the distribution of non-deterministic delays and to automatically adjust the number of Gaussian mixture model components based on the observational data. This improves the estimation accuracy of clock offset and skew, which allows achieving the time synchronization. The time synchronization performance of this algorithm is also validated by computer simulations and experimental measurements. The results show that the proposed algorithm has a higher time synchronization precision than traditional time synchronization algorithms.
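The linear substructure that the RB particle filter handles with a Kalman filter is the clock state itself: offset and skew evolve linearly and are observed through timestamp exchanges. The sketch below runs just that Kalman core under a Gaussian delay assumption; the DPM mixture for non-Gaussian delays, which the particle-filter part handles, is omitted, and all constants are illustrative.

```python
import random

def kalman_clock(measurements, dt, r, q=1e-12):
    """2-state Kalman filter for clock offset b and skew d:
    b_{k+1} = b_k + d_k*dt + noise,  measurement z_k = b_k + delay noise."""
    b, d = 0.0, 0.0                              # state estimate [offset, skew]
    p00, p01, p10, p11 = 1.0, 0.0, 0.0, 1e-6     # covariance entries
    for z in measurements:
        # predict: x = F x,  P = F P F^T + Q  with F = [[1, dt], [0, 1]]
        b += d * dt
        p00 = p00 + dt * (p01 + p10) + dt * dt * p11 + q
        p01 += dt * p11
        p10 += dt * p11
        p11 += q
        # update with offset measurement z (H = [1, 0])
        s = p00 + r
        k0, k1 = p00 / s, p10 / s
        innov = z - b
        b += k0 * innov
        d += k1 * innov
        p00, p01, p10, p11 = (p00 - k0 * p00, p01 - k0 * p01,
                              p10 - k1 * p00, p11 - k1 * p01)
    return b, d

random.seed(7)
true_skew, dt, sigma = 50e-6, 1.0, 1e-4          # 50 ppm skew, 0.1 ms delay noise
true_offset = 1e-3
zs = []
for _ in range(300):
    true_offset += true_skew * dt
    zs.append(true_offset + random.gauss(0.0, sigma))
b_est, d_est = kalman_clock(zs, dt, r=sigma**2)
```

In the full algorithm each particle carries such a Kalman filter conditioned on its sampled non-linear (delay) variables, which is exactly the Rao-Blackwellisation the abstract credits for the efficiency gain.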
Nonlinear System Identification via Basis Functions Based Time Domain Volterra Model
Directory of Open Access Journals (Sweden)
Yazid Edwar
2014-07-01
Full Text Available This paper proposes a basis-function-based time domain Volterra model for nonlinear system identification. The Volterra kernels are expanded using complex exponential basis functions and estimated via a genetic algorithm (GA). The accuracy and practicability of the proposed method are then assessed experimentally on a 1:100 scaled model of a prototype truss spar platform. Identification results in the time and frequency domains are presented, and coherence functions are computed to check the quality of the identification results. It is shown that the results from the experimental data and the proposed method are in good agreement.
Time-driven activity-based costing: A dynamic value assessment model in pediatric appendicitis.
Yu, Yangyang R; Abbas, Paulette I; Smith, Carolyn M; Carberry, Kathleen E; Ren, Hui; Patel, Binita; Nuchtern, Jed G; Lopez, Monica E
2017-06-01
Healthcare reform policies are emphasizing value-based healthcare delivery. We hypothesize that time-driven activity-based costing (TDABC) can be used to appraise healthcare interventions in pediatric appendicitis. Triage-based standing delegation orders, surgical advanced practice providers, and a same-day discharge protocol were implemented to target deficiencies identified in our initial TDABC model. Post-intervention process maps for a hospital episode were created using electronic time stamp data for simple appendicitis cases during February to March 2016. Total personnel and consumable costs were determined using TDABC methodology. The post-intervention TDABC model featured 6 phases of care, 33 processes, and 19 personnel types. Our interventions reduced duration and costs in the emergency department (-41 min, -$23) and pre-operative floor (-57 min, -$18). While post-anesthesia care unit duration and costs increased (+224 min, +$41), the same-day discharge protocol eliminated post-operative floor costs (-$306). Our model incorporating all three interventions reduced total direct costs by 11% ($2753.39 to $2447.68) and duration of hospitalization by 51% (1984 min to 966 min). Time-driven activity-based costing can dynamically model changes in our healthcare delivery as a result of process improvement interventions. It is an effective tool to continuously assess the impact of these interventions on the value of appendicitis care. Level of Evidence: II. Type of study: Economic Analysis. Copyright © 2017 Elsevier Inc. All rights reserved.
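The TDABC calculation itself is simple: each process step costs (minutes consumed) times (per-minute capacity cost rate of the personnel involved), plus consumables. A minimal sketch with made-up roles, rates, and durations, not the study's actual figures:

```python
# hypothetical per-minute capacity cost rates ($/min) -- illustrative only
RATES = {"triage_nurse": 1.00, "surgeon": 5.00, "pacu_nurse": 1.25}

def episode_cost(steps, consumables):
    """steps: list of (process, personnel role, minutes).
    Returns total direct cost = personnel time costs + consumables."""
    personnel = sum(minutes * RATES[role] for _, role, minutes in steps)
    return personnel + consumables

steps = [
    ("ED evaluation", "triage_nurse", 41),
    ("appendectomy",  "surgeon",      60),
    ("PACU recovery", "pacu_nurse",  224),
]
total = episode_cost(steps, consumables=120.00)
```

Because costs are driven directly by process-map durations, re-running the same calculation on post-intervention time stamps immediately quantifies the effect of a workflow change, which is the "dynamic" property the abstract emphasizes.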
Intuitionistic Fuzzy Time Series Forecasting Model Based on Intuitionistic Fuzzy Reasoning
Directory of Open Access Journals (Sweden)
Ya’nan Wang
2016-01-01
Full Text Available Fuzzy sets theory cannot describe the data comprehensively, which has greatly limited the objectivity of fuzzy time series in uncertain data forecasting. In this regard, an intuitionistic fuzzy time series forecasting model is built. In the new model, a fuzzy clustering algorithm is used to divide the universe of discourse into unequal intervals, and a more objective technique for ascertaining the membership function and nonmembership function of the intuitionistic fuzzy set is proposed. On this basis, forecast rules based on intuitionistic fuzzy approximate reasoning are established. Finally, comparative experiments on the enrollments of the University of Alabama and the Taiwan Stock Exchange Capitalization Weighted Stock Index are carried out. The results show that the new model has a clear advantage in improving the forecast accuracy.
Detection of Common Problems in Real-Time and Multicore Systems Using Model-Based Constraints
Directory of Open Access Journals (Sweden)
Raphaël Beamonte
2016-01-01
Full Text Available Multicore systems are complex in that multiple processes are running concurrently and can interfere with each other. Real-time systems add time constraints on top of that, making results invalid as soon as a deadline has been missed. Tracing is often the most reliable and accurate tool available to study and understand those systems. However, tracing requires that users understand the kernel events and their meaning. It is therefore not very accessible. Using modeling to generate source code or represent applications’ workflow is handy for developers and has emerged as part of the model-driven development methodology. In this paper, we propose a new approach to system analysis using model-based constraints, on top of userspace and kernel traces. We introduce the constraints representation and how traces can be used to follow the application’s workflow and check the constraints we set on the model. We then present a number of common problems that we encountered in real-time and multicore systems and describe how our model-based constraints could have helped to save time by automatically identifying the unwanted behavior.
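A model-based constraint of the kind described, e.g. "every wakeup must be followed by the task running within a deadline", can be checked by replaying a trace. The event names, deadline, and trace below are illustrative, not the authors' tooling:

```python
def check_deadline(trace, start, end, deadline_us):
    """trace: chronological list of (timestamp_us, event).
    Returns (start_ts, latency) for every start->end span that
    violates the deadline constraint of the model."""
    violations, pending = [], None
    for ts, event in trace:
        if event == start:
            pending = ts
        elif event == end and pending is not None:
            latency = ts - pending
            if latency > deadline_us:
                violations.append((pending, latency))
            pending = None
    return violations

# a hand-made trace: one of the three wakeup->run spans misses its deadline
trace = [
    (100, "irq_wakeup"), (150, "task_run"),      # 50 us: OK
    (900, "irq_wakeup"), (1300, "task_run"),     # 400 us: deadline miss
    (2000, "irq_wakeup"), (2120, "task_run"),    # 120 us: OK
]
misses = check_deadline(trace, "irq_wakeup", "task_run", deadline_us=200)
```

Running such checkers over kernel and userspace traces automates exactly the tedious part the abstract mentions: the analyst states the constraint once instead of reading raw kernel events.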
3D airborne EM modeling based on the spectral-element time-domain (SETD) method
Cao, X.; Yin, C.; Huang, X.; Liu, Y.; Zhang, B., Sr.; Cai, J.; Liu, L.
2017-12-01
In the field of 3D airborne electromagnetic (AEM) modeling, both the finite-difference time-domain (FDTD) method and the finite-element time-domain (FETD) method have limitations: the FDTD method depends strongly on the grids and time steps, while the FETD method requires a large number of grids for complex structures. We propose a time-domain spectral-element (SETD) method based on GLL interpolation basis functions for spatial discretization and the Backward Euler (BE) technique for time discretization. The spectral-element method is based on a weighted residual technique with polynomials as vector basis functions. It can deliver accurate results by increasing the order of the polynomials while suppressing spurious solutions. The BE method is a stable time discretization technique that has no limitation on time steps and can guarantee higher accuracy during the iteration process. To minimize the number of non-zero entries in the sparse matrix and obtain a diagonal mass matrix, we apply the reduced-order integration technique. A direct solver, with speed independent of the condition number, is adopted for quickly solving the large-scale sparse linear equation system. To check the accuracy of our SETD algorithm, we compare our results with semi-analytical solutions for a three-layered earth model within the time range 10^-6 to 10^-2 s for different physical meshes and SE orders. The results show that the relative errors for the magnetic field B and its time derivative dB/dt are both around 3-5%. Further, we calculate AEM responses for an AEM system over a 3D earth model in Figure 1. From numerical experiments for both the 1D and 3D models, we draw the conclusions that: 1) SETD can deliver accurate results for both dB/dt and B; 2) increasing the SE order improves the modeling accuracy for early to middle time channels, when the EM field diffuses fast, so the high-order SE can model the detailed variation; 3) at very late time channels, increasing the SE order has little improvement on modeling accuracy, but the time interval plays
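The property the abstract claims for Backward Euler, no limitation on time steps, is unconditional stability. For the stiff decay equation dy/dt = -λy the implicit update contracts for any step size, while Forward Euler diverges once λ·Δt > 2. A minimal check on the scalar test equation (not the SETD discretization itself):

```python
def backward_euler(lam, dt, y0, n):
    # implicit update: y_{k+1} = y_k / (1 + lam*dt) -- stable for all dt > 0
    y = y0
    for _ in range(n):
        y = y / (1.0 + lam * dt)
    return y

def forward_euler(lam, dt, y0, n):
    # explicit update: y_{k+1} = (1 - lam*dt) * y_k -- diverges if lam*dt > 2
    y = y0
    for _ in range(n):
        y = (1.0 - lam * dt) * y
    return y

lam, dt, y0, n = 10.0, 0.5, 1.0, 20   # lam*dt = 5: far outside FE stability
y_be = backward_euler(lam, dt, y0, n)
y_fe = forward_euler(lam, dt, y0, n)
```

The exact solution decays to zero; Backward Euler follows it at any step size (at first-order accuracy), which is why the diffusive late-time EM field can be stepped with large Δt.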
Modelling of the acid base properties of two thermophilic bacteria at different growth times
Heinrich, Hannah T. M.; Bremer, Phil J.; McQuillan, A. James; Daughney, Christopher J.
2008-09-01
Acid-base titrations and electrophoretic mobility measurements were conducted on the thermophilic bacteria Anoxybacillus flavithermus and Geobacillus stearothermophilus at two different growth times corresponding to exponential and stationary/death phase. The data showed significant differences between the two investigated growth times for both bacterial species. In stationary/death phase samples, cells were disrupted and their buffering capacity was lower than that of exponential phase cells. For G. stearothermophilus the electrophoretic mobility profiles changed dramatically. Chemical equilibrium models were developed to simultaneously describe the data from the titrations and the electrophoretic mobility measurements. A simple approach was developed to determine confidence intervals for the overall variance between the model and the experimental data, in order to identify statistically significant changes in model fit and thereby select the simplest model that was able to adequately describe each data set. Exponential phase cells of the investigated thermophiles had a higher total site concentration than the average found for mesophilic bacteria (based on a previously published generalised model for the acid-base behaviour of mesophiles), whereas the opposite was true for cells in stationary/death phase. The results of this study indicate that growth phase is an important parameter that can affect ion binding by bacteria, that growth phase should be considered when developing or employing chemical models for bacteria-bearing systems.
Underwater Noise Modeling and Direction-Finding Based on Heteroscedastic Time Series
Directory of Open Access Journals (Sweden)
Kamarei Mahmoud
2007-01-01
Full Text Available We propose a new method for practical non-Gaussian and nonstationary underwater noise modeling. This model is very useful for passive sonar in shallow waters. In this application, measurements of additive noise in the natural environment show that the noise can be significantly non-Gaussian and can exhibit time-varying behavior, especially in the variance. Therefore, signal processing algorithms such as direction-finding that are optimized for Gaussian noise may degrade significantly in this environment. Generalized autoregressive conditional heteroscedasticity (GARCH) models are suitable for heavy-tailed PDFs and time-varying variances of stochastic processes. We use a more realistic GARCH-based noise model in the maximum-likelihood approach for the estimation of directions-of-arrival (DOAs) of sources impinging on a linear array, and demonstrate using measured noise that this approach is feasible for additive noise and direction finding in an underwater environment.
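A GARCH(1,1) process makes the conditional variance itself autoregressive, which produces exactly the heavy tails and time-varying variance the abstract exploits; its unconditional variance is ω/(1−α−β). A minimal simulator with illustrative parameter values:

```python
import random

def simulate_garch11(n, omega, alpha, beta, seed=0):
    """eps_t = sigma_t * z_t,  sigma_t^2 = omega + alpha*eps_{t-1}^2 + beta*sigma_{t-1}^2,
    with z_t standard normal. Requires alpha + beta < 1 for stationarity."""
    rng = random.Random(seed)
    var = omega / (1.0 - alpha - beta)   # start at the unconditional variance
    eps_prev = 0.0
    out = []
    for _ in range(n):
        var = omega + alpha * eps_prev ** 2 + beta * var
        eps_prev = var ** 0.5 * rng.gauss(0.0, 1.0)
        out.append(eps_prev)
    return out

xs = simulate_garch11(20000, omega=0.1, alpha=0.1, beta=0.8)
sample_var = sum(x * x for x in xs) / len(xs)   # should hover near 0.1/(1-0.9) = 1.0
```

Conditionally the process is Gaussian, so the ML direction-finding machinery stays tractable, while marginally it is heavy-tailed, matching the measured shallow-water noise.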
Zhou, H. W.; Yi, H. Y.; Mishnaevsky, L.; Wang, R.; Duan, Z. Q.; Chen, Q.
2017-05-01
A modeling approach to the time-dependent properties of Glass Fiber Reinforced Polymer (GFRP) composites is of special interest for the quantitative description of long-term behavior. An electronic creep machine is employed to investigate the time-dependent deformation of four specimens of dog-bone-shaped GFRP composites at various stress levels. A negative exponent function based on structural changes is introduced to describe the damage evolution of the material properties during the creep test. Accordingly, a new creep constitutive equation, referred to as the fractional derivative Maxwell model, is suggested to characterize the time-dependent behavior of GFRP composites by replacing the Newtonian dashpot with the Abel dashpot in the classical Maxwell model. The analytic solution for the fractional derivative Maxwell model is given and the relevant parameters are determined. The results estimated by the fractional derivative Maxwell model proposed in the paper are in good agreement with the experimental data. It is shown that the new creep constitutive model needs few parameters to represent various time-dependent behaviors.
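Replacing the Newtonian dashpot with an Abel (fractional) dashpot changes the creep strain under constant stress σ0 from the classical Maxwell form ε(t) = σ0(1/E + t/η) to ε(t) = σ0(1/E + t^α/(η·Γ(1+α))), with α = 1 recovering the classical model. A sketch of that creep law with illustrative parameter values (not the paper's fitted ones):

```python
import math

def creep_fractional_maxwell(t, sigma0, E, eta, alpha):
    """Creep strain of a fractional Maxwell model under constant stress sigma0:
    elastic spring E in series with an Abel dashpot (eta, fractional order alpha)."""
    return sigma0 * (1.0 / E + t ** alpha / (eta * math.gamma(1.0 + alpha)))

sigma0, E, eta = 50.0, 2.0e4, 1.0e6        # stress, modulus, viscosity (illustrative units)
classical  = creep_fractional_maxwell(10.0, sigma0, E, eta, alpha=1.0)
fractional = creep_fractional_maxwell(10.0, sigma0, E, eta, alpha=0.3)
```

The single extra parameter α interpolates between solid-like (small α) and fluid-like (α near 1) creep, which is how the model covers "various time-dependent behaviors" with few parameters.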
Er, Li; Xiangying, Zeng
2014-01-01
To simulate the variation of biochemical oxygen demand (BOD) in the tidal Foshan River, inverse calculations based on time domain are applied to the longitudinal dispersion coefficient (E(x)) and BOD decay rate (K(x)) in the BOD model for the tidal Foshan River. The derivatives of the inverse calculation have been respectively established on the basis of different flow directions in the tidal river. The results of this paper indicate that the calculated values of BOD based on the inverse calculation developed for the tidal Foshan River match the measured ones well. According to the calibration and verification of the inversely calculated BOD models, K(x) is more sensitive to the models than E(x) and different data sets of E(x) and K(x) hardly affect the precision of the models.
Connectivity-based neurofeedback: Dynamic causal modeling for real-time fMRI☆
Koush, Yury; Rosa, Maria Joao; Robineau, Fabien; Heinen, Klaartje; W. Rieger, Sebastian; Weiskopf, Nikolaus; Vuilleumier, Patrik; Van De Ville, Dimitri; Scharnowski, Frank
2013-01-01
Neurofeedback based on real-time fMRI is an emerging technique that can be used to train voluntary control of brain activity. Such brain training has been shown to lead to behavioral effects that are specific to the functional role of the targeted brain area. However, real-time fMRI-based neurofeedback so far was limited to mainly training localized brain activity within a region of interest. Here, we overcome this limitation by presenting near real-time dynamic causal modeling in order to provide feedback information based on connectivity between brain areas rather than activity within a single brain area. Using a visual–spatial attention paradigm, we show that participants can voluntarily control a feedback signal that is based on the Bayesian model comparison between two predefined model alternatives, i.e. the connectivity between left visual cortex and left parietal cortex vs. the connectivity between right visual cortex and right parietal cortex. Our new approach thus allows for training voluntary control over specific functional brain networks. Because most mental functions and most neurological disorders are associated with network activity rather than with activity in a single brain region, this novel approach is an important methodological innovation in order to more directly target functionally relevant brain networks. PMID:23668967
Benachour, Hamanou; Bastogne, Thierry; Toussaint, Magali; Chemli, Yosra; Sève, Aymeric; Frochot, Céline; Lux, François; Tillement, Olivier; Vanderesse, Régis; Barberi-Heyob, Muriel
2012-01-01
Nanoparticles are widely suggested as targeted drug-delivery systems. In photodynamic therapy (PDT), the use of multifunctional nanoparticles as photoactivatable drug carriers is a promising approach for improving treatment efficiency and selectivity. However, the conventional cytotoxicity assays are not well adapted to characterize nanoparticles cytotoxic effects and to discriminate early and late cell responses. In this work, we evaluated a real-time label-free cell analysis system as a tool to investigate in vitro cyto- and photocyto-toxicity of nanoparticles-based photosensitizers compared with classical metabolic assays. To do so, we introduced a dynamic approach based on real-time cell impedance monitoring and a mathematical model-based analysis to characterize the measured dynamic cell response. Analysis of real-time cell responses requires indeed new modeling approaches able to describe suited use of dynamic models. In a first step, a multivariate analysis of variance associated with a canonical analysis of the obtained normalized cell index (NCI) values allowed us to identify different relevant time periods following nanoparticles exposure. After light irradiation, we evidenced discriminant profiles of cell index (CI) kinetics in a concentration- and light dose-dependent manner. In a second step, we proposed a full factorial design of experiments associated with a mixed effect kinetic model of the CI time responses. The estimated model parameters led to a new characterization of the dynamic cell responses such as the magnitude and the time constant of the transient phase in response to the photo-induced dynamic effects. These parameters allowed us to characterize totally the in vitro photodynamic response according to nanoparticle-grafted photosensitizer concentration and light dose. They also let us estimate the strength of the synergic photodynamic effect. This dynamic approach based on statistical modeling furnishes new insights for in vitro
Directory of Open Access Journals (Sweden)
Hamanou Benachour
Full Text Available Nanoparticles are widely suggested as targeted drug-delivery systems. In photodynamic therapy (PDT), the use of multifunctional nanoparticles as photoactivatable drug carriers is a promising approach for improving treatment efficiency and selectivity. However, the conventional cytotoxicity assays are not well adapted to characterize nanoparticles cytotoxic effects and to discriminate early and late cell responses. In this work, we evaluated a real-time label-free cell analysis system as a tool to investigate in vitro cyto- and photocyto-toxicity of nanoparticles-based photosensitizers compared with classical metabolic assays. To do so, we introduced a dynamic approach based on real-time cell impedance monitoring and a mathematical model-based analysis to characterize the measured dynamic cell response. Analysis of real-time cell responses requires indeed new modeling approaches able to describe suited use of dynamic models. In a first step, a multivariate analysis of variance associated with a canonical analysis of the obtained normalized cell index (NCI) values allowed us to identify different relevant time periods following nanoparticles exposure. After light irradiation, we evidenced discriminant profiles of cell index (CI) kinetics in a concentration- and light dose-dependent manner. In a second step, we proposed a full factorial design of experiments associated with a mixed effect kinetic model of the CI time responses. The estimated model parameters led to a new characterization of the dynamic cell responses such as the magnitude and the time constant of the transient phase in response to the photo-induced dynamic effects. These parameters allowed us to characterize totally the in vitro photodynamic response according to nanoparticle-grafted photosensitizer concentration and light dose. They also let us estimate the strength of the synergic photodynamic effect. This dynamic approach based on statistical modeling furnishes new insights for in
A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method
Directory of Open Access Journals (Sweden)
Jun-He Yang
2017-01-01
Full Text Available Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model based on estimating missing values followed by variable selection to forecast the reservoir’s water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated dataset based on the ordering of the data as a research dataset. The proposed time-series forecasting model has three main components. First, this study evaluates five imputation methods against directly deleting the missing values. Second, it identifies the key variables via factor analysis and then deletes the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir’s water level, which is compared against the listed methods in terms of forecasting error. The experimental results indicate that the Random Forest forecasting model combined with variable selection has better forecasting performance than the compared models using full variables. In addition, the experiment shows that the proposed variable selection can improve the forecasting capability of the five forecast methods used here.
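The three-stage pipeline, impute, select variables, forecast, can be sketched in miniature. Here mean imputation stands in for the five methods compared, a simple correlation ranking stands in for the factor-analysis-based selection, and the Random Forest stage is omitted; all data are made up.

```python
def mean_impute(series):
    """Replace None entries by the mean of the observed values."""
    obs = [x for x in series if x is not None]
    m = sum(obs) / len(obs)
    return [m if x is None else x for x in series]

def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    na = sum((x - ma) ** 2 for x in a) ** 0.5
    nb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (na * nb)

def select_variables(target, candidates, k=1):
    """Keep the k candidate series most correlated (absolute value) with the target."""
    ranked = sorted(candidates, key=lambda name: -abs(corr(candidates[name], target)))
    return ranked[:k]

level    = [245.0, 246.5, None, 247.8, 248.0, 247.1]   # water level with a gap
rainfall = [12.0, 14.0, 15.0, 16.0, 17.0, 13.0]
humidity = [60.0, 61.0, 59.0, 62.0, 60.0, 61.0]

level = mean_impute(level)
chosen = select_variables(level, {"rainfall": rainfall, "humidity": humidity}, k=1)
```

With the gap filled and the uninformative variable dropped, any regressor (the paper's Random Forest, or a simpler model) is then trained on the reduced dataset.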
Time-varying metamaterials based on graphene-wrapped microwires: Modeling and potential applications
Salary, Mohammad Mahdi; Jafar-Zanjani, Samad; Mosallaei, Hossein
2018-03-01
The successful realization of metamaterials and metasurfaces requires the judicious choice of constituent elements. In this paper, we demonstrate the implementation of time-varying metamaterials in the terahertz frequency regime by utilizing graphene-wrapped microwires as building blocks and modulation of graphene conductivity through exterior electrical gating. These elements enable enhancement of light-graphene interaction by utilizing optical resonances associated with Mie scattering, yielding a large tunability and modulation depth. We develop a semianalytical framework based on transition-matrix formulation for modeling and analysis of periodic and aperiodic arrays of such time-varying building blocks. The proposed method is validated against full-wave numerical results obtained using the finite-difference time-domain method. It provides an ideal tool for mathematical synthesis and analysis of space-time gradient metamaterials, eliminating the need for computationally expensive numerical models. Moreover, it allows for a wider exploration of exotic space-time scattering phenomena in time-modulated metamaterials. We apply the method to explore the role of modulation parameters in the generation of frequency harmonics and their emerging wavefronts. Several potential applications of such platforms are demonstrated, including frequency conversion, holographic generation of frequency harmonics, and spatiotemporal manipulation of light. The presented results provide key physical insights to design time-modulated functional metadevices using various building blocks and open up new directions in the emerging paradigm of time-modulated metamaterials.
Biomedical time series clustering based on non-negative sparse coding and probabilistic topic model.
Wang, Jin; Liu, Ping; F H She, Mary; Nahavandi, Saeid; Kouzani, Abbas
2013-09-01
Biomedical time series clustering that groups a set of unlabelled temporal signals according to their underlying similarity is very useful for biomedical records management and analysis, such as biosignal archiving and diagnosis. In this paper, a new framework for clustering of long-term biomedical time series such as electrocardiography (ECG) and electroencephalography (EEG) signals is proposed. Specifically, local segments extracted from the time series are projected as a combination of a small number of basis elements in a trained dictionary by non-negative sparse coding. A Bag-of-Words (BoW) representation is then constructed by summing up all the sparse coefficients of local segments in a time series. Based on the BoW representation, a probabilistic topic model that was originally developed for text document analysis is extended to discover the underlying similarity of a collection of time series. The underlying similarity of biomedical time series is well captured owing to the statistical nature of the probabilistic topic model. Experiments on three datasets constructed from publicly available EEG and ECG signals demonstrate that the proposed approach achieves better accuracy than existing state-of-the-art methods, and is insensitive to model parameters such as the length of local segments and the dictionary size. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
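The Bag-of-Words step described above is an accumulation: each local segment's non-negative sparse code is a weighting over dictionary atoms, and the codes of all segments in one recording are summed (and here L1-normalized) into a single fixed-length descriptor. A sketch with made-up codes; the dictionary training and sparse coding themselves are omitted:

```python
def extract_segments(signal, width, step):
    """Overlapping local segments of a 1-D time series."""
    return [signal[i:i + width] for i in range(0, len(signal) - width + 1, step)]

def bag_of_words(sparse_codes):
    """Sum per-segment non-negative sparse codes into one
    L1-normalized histogram over dictionary atoms."""
    k = len(sparse_codes[0])
    bow = [sum(code[j] for code in sparse_codes) for j in range(k)]
    total = sum(bow)
    return [b / total for b in bow]

# made-up sparse codes over a 4-atom dictionary for three segments
codes = [
    [0.0, 2.0, 0.0, 1.0],
    [1.0, 0.0, 0.0, 1.0],
    [0.0, 3.0, 0.0, 0.0],
]
bow = bag_of_words(codes)
```

The resulting histogram plays the role of a document's word counts, which is what lets a text-style topic model be applied to signals.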
Directory of Open Access Journals (Sweden)
Zhibin Jiang
2015-04-01
Full Text Available Understanding the nature of rail transit dwell time has potential benefits for both the users and the operators. Crowded passenger trains cause longer dwell times and may prevent some passengers from boarding the first available train that arrives. Actual dwell time and the process of passenger alighting and boarding are interdependent through the sequence of train stops and propagated delays. A comprehensive and feasible dwell time simulation model was developed and optimized to address the problems associated with scheduled timetables. The paper introduces the factors that affect dwell time in urban rail transit systems, including train headway, the process and number of passengers alighting and boarding the train, and the inability of train doors to properly close the first time because of overcrowded vehicles. Finally, based on a time-driven micro-simulation system, Shanghai rail transit Line 8 is used as an example to quantify the feasibility of scheduled dwell times for different stations, directions of travel and time periods, and a proposed dwell time during peak hours in several crowded stations is presented according to the simulation results.
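The dwell-time factors listed above can be combined in a simple per-stop model: a fixed door open/close overhead, passenger service time per door, and a reopen penalty when the vehicle is overcrowded. The coefficients below are illustrative, not calibrated to Shanghai Line 8:

```python
def dwell_time(alighting, boarding, doors, load_factor,
               t_alight=1.5, t_board=2.0, overhead=10.0, reopen_penalty=15.0):
    """Estimated dwell time (s) at one stop. Passengers are assumed evenly
    split across doors; alighting happens before boarding at each door."""
    per_door = (alighting / doors) * t_alight + (boarding / doors) * t_board
    dwell = overhead + per_door
    if load_factor > 0.9:     # overcrowding: doors often fail to close first time
        dwell += reopen_penalty
    return dwell

offpeak = dwell_time(alighting=30, boarding=40, doors=4, load_factor=0.6)
peak    = dwell_time(alighting=80, boarding=120, doors=4, load_factor=1.1)
```

Feeding station-specific passenger volumes through such a function per stop, direction, and time period is the core loop of the micro-simulation used to test whether the scheduled dwell times are feasible.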
Directory of Open Access Journals (Sweden)
Rui Xue
2015-01-01
Full Text Available Although bus passenger demand prediction has attracted increased attention during recent years, limited research has been conducted in the context of short-term passenger demand forecasting. This paper proposes an interactive multiple model (IMM) filter algorithm-based model to predict short-term passenger demand. After being aggregated into 15 min intervals, passenger demand data collected from a busy bus route over four months were used to generate time series. Considering that passenger demand exhibits different characteristics on different time scales, three time series were developed, named the weekly, daily, and 15 min time series. After correlation, periodicity, and stationarity analyses, time series models were constructed. In particular, the heteroscedasticity of the time series was explored to achieve better prediction performance. Finally, the IMM filter algorithm was applied to combine the individual forecasting models and dynamically predict passenger demand for the next interval. Different error indices were adopted for the analyses of the individual and hybrid models. The performance comparison indicates that the hybrid model forecasts are superior to the individual ones in accuracy. The findings of this study are of theoretical and practical significance for bus scheduling.
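The IMM combination step reweights the candidate time-series models every interval: prior model weights are mixed through a Markov transition matrix, then rescaled by each model's likelihood of the newly observed demand, and the combined forecast is the weight-averaged prediction. A minimal sketch of one such update (two models, Gaussian likelihoods, made-up numbers):

```python
import math

def gaussian_pdf(x, mean, sd):
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def imm_step(weights, trans, preds, sds, observed):
    """One IMM update. trans[i][j] is the probability of switching from model i
    to model j. Returns (posterior weights, combined forecast)."""
    m = len(weights)
    mixed = [sum(trans[i][j] * weights[i] for i in range(m)) for j in range(m)]
    liks = [gaussian_pdf(observed, p, s) for p, s in zip(preds, sds)]
    post = [mx * lk for mx, lk in zip(mixed, liks)]
    total = sum(post)
    post = [p / total for p in post]
    combined = sum(w * p for w, p in zip(post, preds))
    return post, combined

trans = [[0.9, 0.1], [0.1, 0.9]]            # sticky model-switching matrix
weights = [0.5, 0.5]
preds, sds = [120.0, 150.0], [10.0, 10.0]   # per-model demand forecasts
weights, forecast = imm_step(weights, trans, preds, sds, observed=148.0)
```

The model whose prediction tracks the observations gains weight automatically, which is how the hybrid adapts as the demand pattern shifts between regimes.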
Directory of Open Access Journals (Sweden)
Svetlana Postnova
Full Text Available Shift work has become an integral part of our life, with almost 20% of the population in developed countries being involved in different shift schedules. However, the atypical work times, especially the night shifts, are associated with reduced quality and quantity of sleep, leading to increased sleepiness that often culminates in accidents. It has been demonstrated that shift workers' sleepiness can be improved by proper scheduling of light exposure and optimized shift timing. Here, an integrated physiologically-based model of sleep-wake cycles is used to predict adaptation to shift work in different light conditions and for different shift start times for a schedule of four consecutive days of work. The integrated model combines a model of the ascending arousal system in the brain that controls the sleep-wake switch and a human circadian pacemaker model. To validate the application of the integrated model and demonstrate its utility, its dynamics are adjusted to achieve a fit to published experimental results showing adaptation of night shift workers (n = 8) in conditions of either bright or regular lighting. Further, the model is used to predict the shift workers' adaptation to the same shift schedule, but for conditions not considered in the experiment. The model demonstrates that the intensity of shift light can be reduced fourfold from that used in the experiment and still produce good adaptation to night work. The model predicts that sleepiness of the workers during night shifts on a protocol with either bright or regular lighting can be significantly improved by starting the shift earlier in the night, e.g., at 21:00 instead of 00:00. Finally, the study predicts that people of the same chronotype, i.e. with identical sleep times in normal conditions, can have drastically different responses to shift work depending on their intrinsic circadian and homeostatic parameters.
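The homeostatic half of such sleep-wake models can be sketched with a Borbély-style Process S: sleep pressure relaxes exponentially toward an upper asymptote during wake and toward zero during sleep, with different time constants. This is a simplified stand-in for the paper's arousal-system model, and the time constants below are textbook-style illustrative values, not its fitted parameters:

```python
import math

def process_s(s0, hours, awake, tau_wake=18.2, tau_sleep=4.2):
    """Homeostatic sleep pressure S after `hours`, starting from s0.
    Exponential relaxation toward 1 while awake, toward 0 while asleep."""
    target, tau = (1.0, tau_wake) if awake else (0.0, tau_sleep)
    return target + (s0 - target) * math.exp(-hours / tau)

s = 0.2                                                 # pressure at waking
after_wake  = process_s(s, 16.0, awake=True)            # a 16 h waking day
after_sleep = process_s(after_wake, 8.0, awake=False)   # followed by 8 h of sleep
```

In the full model this homeostatic drive is gated against the circadian pacemaker; shifting work into the night forces sleep attempts at circadian phases where S cannot discharge properly, which is the mechanism behind the predicted sleepiness differences.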
Real time polymer nanocomposites-based physical nanosensors: theory and modeling
Bellucci, Stefano; Shunin, Yuri; Gopeyenko, Victor; Lobanova-Shunina, Tamara; Burlutskaya, Nataly; Zhukovskii, Yuri
2017-09-01
Functionalized carbon nanotube and graphene nanoribbon nanostructures, serving as the basis for the creation of physical pressure and temperature nanosensors, are considered as tools for ecological monitoring and medical applications. Fragments of nanocarbon inclusions with different morphologies, presenting a disordered system, are regarded as models for nanocomposite materials based on carbon nanocluster suspensions in dielectric polymer environments (e.g., epoxy resins). We have formulated an approach to conductivity calculations for carbon-based polymer nanocomposites using the effective-media cluster approach, disordered systems theory, and conductivity mechanism analysis, and obtained the calibration dependences. Providing a proper description of electric responses in nanosensing systems, we demonstrate the implementation of advanced simulation models suitable for real-time control nanosystems. We also consider the prospects and prototypes of the proposed physical nanosensor models, providing comparisons with experimental calibration dependences.
Parameter Estimation of a Delay Time Model of Wearing Parts Based on Objective Data
Directory of Open Access Journals (Sweden)
Y. Tang
2015-01-01
Full Text Available The wearing parts of a system have a very high failure frequency, making it necessary to carry out continual functional inspections and maintenance to protect the system from unscheduled downtime. This allows for the collection of a large amount of maintenance data. Taking the unique characteristics of the wearing parts into consideration, we establish their respective delay time models for ideal inspection cases and nonideal inspection cases. The model parameters are estimated entirely from the collected maintenance data. A likelihood function of all renewal events is then derived based on their occurrence probability functions, and the model parameters are calculated with the maximum likelihood method, which is solved by the CRM. Finally, using two wearing parts from the oil and gas drilling industry as examples (the filter element and the blowout preventer rubber core), the parameters of the distribution functions of the initial failure time and the delay time for each example are estimated, and their distribution functions are obtained. Such parameter estimation based on objective data will contribute to the optimization of a reasonable functional inspection interval and will also provide theoretical models to support the integrity management of equipment and systems.
GOODNESS-OF-FIT TEST FOR THE ACCELERATED FAILURE TIME MODEL BASED ON MARTINGALE RESIDUALS
Czech Academy of Sciences Publication Activity Database
Novák, Petr
2013-01-01
Roč. 49, č. 1 (2013), s. 40-59 ISSN 0023-5954 R&D Projects: GA MŠk(CZ) 1M06047 Grant - others:GA MŠk(CZ) SVV 261315/2011 Keywords : accelerated failure time model * survival analysis * goodness-of-fit Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.563, year: 2013 http://library.utia.cas.cz/separaty/2013/SI/novak-goodness-of-fit test for the aft model based on martingale residuals.pdf
A GIS-based time-dependent seismic source modeling of Northern Iran
Hashemi, Mahdi; Alesheikh, Ali Asghar; Zolfaghari, Mohammad Reza
2017-01-01
The first step in any seismic hazard study is the definition of seismogenic sources and the estimation of magnitude-frequency relationships for each source. There is as yet no standard methodology for source modeling, and many researchers have worked on this topic. This study is an effort to define linear and area seismic sources for Northern Iran. The linear (fault) sources are developed based on tectonic features and characteristic earthquakes, while the area sources are developed based on the spatial distribution of small to moderate earthquakes. Time-dependent recurrence relationships are developed for fault sources using a renewal approach, while time-independent frequency-magnitude relationships are proposed for area sources based on a Poisson process. GIS functionalities are used in this study to introduce and incorporate spatial-temporal and geostatistical indices in delineating area seismic sources. The proposed methodology is used to model seismic sources for an area of about 500 by 400 kilometers around Tehran. Previous research and reports are studied to compile an earthquake/fault catalog that is as complete as possible. All events are transformed to a uniform magnitude scale; duplicate events and dependent shocks are removed. The completeness and time distribution of the compiled catalog are taken into account. The proposed area and linear seismic sources, in conjunction with the defined recurrence relationships, can be used to develop a time-dependent probabilistic seismic hazard analysis of Northern Iran.
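The time-independent (Poisson) branch of such a model reduces to a Gutenberg-Richter frequency-magnitude relationship, log10 N(M ≥ m) = a − b·m, with the probability of at least one exceedance in t years given by 1 − exp(−N·t). A minimal sketch, with a and b values chosen purely for illustration:

```python
import math

def gr_rate(m, a=4.0, b=1.0):
    """Annual rate of events with magnitude >= m from the
    Gutenberg-Richter law log10 N = a - b*m (a, b illustrative)."""
    return 10.0 ** (a - b * m)

def exceedance_prob(m, years, a=4.0, b=1.0):
    """Probability of at least one M >= m event in `years` under a
    time-independent Poisson occurrence model."""
    return 1.0 - math.exp(-gr_rate(m, a, b) * years)

print(gr_rate(6.0))             # 0.01 events per year
print(exceedance_prob(6.0, 50)) # ~0.39
```

The time-dependent fault-source branch replaces the Poisson assumption with a renewal distribution conditioned on the time since the last characteristic event.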
Oliveira, R.; Bijeljic, B.; Blunt, M. J.; Colbourne, A.; Sederman, A. J.; Mantle, M. D.; Gladden, L. F.
2017-12-01
Mixing and reactive processes have a large impact on the viability of enhanced oil and gas recovery projects that involve acid stimulation and CO2 injection. To achieve a successful design of the injection schemes, an accurate understanding of the interplay between pore structure, flow and reactive transport is necessary. Depending on transport and reaction conditions, this complex coupling can also depend on initial rock heterogeneity across a variety of scales. To address these issues, we devise a new method to study transport and reactive flow in porous media at multiple scales. The transport model is based on an efficient Particle Tracking Method based on Continuous Time Random Walks (CTRW-PTM) on a lattice. Transport is modelled using an algorithm described in Rhodes and Blunt (2006) and Srinivasan et al. (2010); this model is expanded to enable reactive-flow predictions in subsurface rock undergoing a first-order fluid/solid chemical reaction. The reaction-induced alteration of the fluid/solid interface is accommodated in the model through changes in porosity and flow field, leading to time-dependent transport characteristics in the form of transit time distributions which account for the change in rock heterogeneity. This also enables the study of concentration profiles at the scale of interest. First, we validate the transport model by comparing the probability of molecular displacement (propagators) measured by Nuclear Magnetic Resonance (NMR) with our modelled predictions for concentration profiles. The experimental propagators for three different porous media of increasing complexity, a beadpack, a Bentheimer sandstone and a Portland carbonate, show good agreement with the model. Next, we capture the time evolution of the propagator distributions in a reactive flow experiment, where hydrochloric acid is injected into a limestone rock. We analyse the time-evolving non-Fickian signatures of the transport during reactive flow and observe an increase in
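The CTRW idea behind such a particle-tracking model can be sketched in a few lines: particles make lattice hops separated by heavy-tailed waiting times, and the histogram of displacements at a fixed observation time approximates the propagator that is compared with NMR measurements. This is a generic 1-D illustration, not the Rhodes-Blunt/Srinivasan algorithm itself, and the drift and waiting-time parameters are assumptions.

```python
import random

def ctrw_propagator(n_particles=2000, t_max=100.0, beta=1.5, seed=1):
    """Minimal 1-D continuous-time random walk: unit lattice hops,
    biased downstream, separated by Pareto(beta) waiting times.
    Returns final displacements; their histogram approximates the
    propagator P(x, t_max).  Parameters are illustrative only."""
    rng = random.Random(seed)
    displacements = []
    for _ in range(n_particles):
        t, x = 0.0, 0
        while True:
            u = 1.0 - rng.random()        # u in (0, 1]
            wait = u ** (-1.0 / beta)     # Pareto waiting time, >= 1
            if t + wait > t_max:
                break                     # next hop would overshoot t_max
            t += wait
            x += 1 if rng.random() < 0.8 else -1  # 80 % downstream drift
        displacements.append(x)
    return displacements

disp = ctrw_propagator()
mean = sum(disp) / len(disp)
print(mean)
```

Reactive flow enters such a model by letting porosity and the hop/waiting-time statistics evolve as the solid dissolves, which is what makes the transit time distributions time-dependent.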
Kumar, Ashish; Vercruysse, Jurgen; Vanhoorne, Valérie; Toiviainen, Maunu; Panouillot, Pierre-Emmanuel; Juuti, Mikko; Vervaet, Chris; Remon, Jean Paul; Gernaey, Krist V; De Beer, Thomas; Nopens, Ingmar
2015-04-25
Twin-screw granulation is a promising continuous alternative to traditional batchwise wet granulation processes. The twin-screw granulator (TSG) screws consist of transport and kneading element modules. Granulation is therefore governed to a large extent by the residence time distribution within each module, where different granulation rate processes dominate over others. Currently, experimental data are used to determine the residence time distributions. In this study, a conceptual model based on classical chemical engineering methods is proposed to better understand and simulate the residence time distribution in a TSG. The experimental data were compared with the most suitable of the proposed conceptual models to estimate its parameters and to analyse and predict the effects of changes in the number of kneading discs and their stagger angle, screw speed and powder feed rate on residence time. The study established that the kneading block in the screw configuration acts as a plug-flow zone inside the granulator. Furthermore, it was found that a balance between the throughput force and conveying rate is required to obtain good axial mixing inside the twin-screw granulator. Although the granulation behaviour is different for other excipients, the experimental data collection and modelling methods applied in this study are generic and can be adapted to other excipients. Copyright © 2015 Elsevier B.V. All rights reserved.
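A common classical-chemical-engineering representation of such a conceptual model is a plug-flow delay (the kneading block) in series with a chain of ideal mixers (tanks-in-series) for the conveying sections. The sketch below assumes illustrative values of the delay tau_p, the per-tank mean time tau and the number of tanks n; the mean residence time of this RTD is tau_p + n*tau.

```python
import math

def rtd(t, n=4, tau=2.0, tau_p=3.0):
    """E(t) for a plug-flow delay tau_p followed by n equal ideal
    mixers of mean time tau each (tanks-in-series); all values are
    illustrative assumptions, not fitted TSG parameters."""
    if t <= tau_p:
        return 0.0
    s = t - tau_p
    return s ** (n - 1) * math.exp(-s / tau) / (math.factorial(n - 1) * tau ** n)

# numerical check: E(t) integrates to ~1, mean residence time ~ tau_p + n*tau
dt = 0.001
area = mean = 0.0
for i in range(200000):
    t = i * dt
    e = rtd(t)
    area += e * dt
    mean += t * e * dt
print(area, mean)   # ~1.0 and ~11.0
```

Fitting n, tau and tau_p to measured RTD curves is what lets such a model predict the effect of screw-configuration changes without a new experiment per configuration.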
Impact of sensor and measurement timing errors on model-based insulin sensitivity.
Pretty, Christopher G; Signal, Matthew; Fisk, Liam; Penning, Sophie; Le Compte, Aaron; Shaw, Geoffrey M; Desaive, Thomas; Chase, J Geoffrey
2014-05-01
A model-based insulin sensitivity parameter (SI) is often used in glucose-insulin system models to define the glycaemic response to insulin. As a parameter identified from clinical data, insulin sensitivity can be affected by blood glucose (BG) sensor error and measurement timing error, which can subsequently impact analyses or glycaemic variability during control. This study assessed the impact of both measurement timing and BG sensor errors on identified values of SI and its hour-to-hour variability within a common type of glucose-insulin system model. Retrospective clinical data were used from 270 patients admitted to the Christchurch Hospital ICU between 2005 and 2007 to identify insulin sensitivity profiles. We developed error models for the Abbott Optium Xceed glucometer and measurement timing from clinical data. The effect of these errors on the re-identified insulin sensitivity was investigated by Monte-Carlo analysis. The results of the study show that timing errors in isolation have little clinically significant impact on identified SI level or variability. The clinical impact of changes to SI level induced by combined sensor and timing errors is likely to be significant during glycaemic control. Identified values of SI were mostly (90th percentile) within 29% of the true value when influenced by both sources of error. However, these effects may be overshadowed by physiological factors arising from the critical condition of the patients or other under-modelled or un-modelled dynamics. Thus, glycaemic control protocols that are designed to work with data from glucometers need to be robust to these errors and not be too aggressive in dosing insulin. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
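The Monte Carlo idea used in such a study, re-identifying a model parameter many times from error-corrupted measurements, can be illustrated with a toy exponential-decay identification problem. The decay model, error magnitudes and parameter values below are illustrative assumptions, not the clinical glucose-insulin model or the published glucometer error model.

```python
import math, random

def identify_k(g0, g1, dt):
    """Identify an exponential decay rate from two samples of
    G(t) = G0 * exp(-k t)."""
    return math.log(g0 / g1) / dt

def monte_carlo(n=5000, k_true=0.05, g0=8.0, dt_nominal=60.0, seed=7):
    """Re-identify k when both samples carry ~5 % multiplicative
    sensor error and the second sample time has +/-5 min timing
    error that the identification ignores.  All error magnitudes
    are illustrative assumptions."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n):
        m0 = g0 * (1 + rng.gauss(0, 0.05))
        dt_actual = dt_nominal + rng.uniform(-5, 5)
        m1 = g0 * math.exp(-k_true * dt_actual) * (1 + rng.gauss(0, 0.05))
        estimates.append(identify_k(m0, m1, dt_nominal))
    estimates.sort()
    return estimates

ks = monte_carlo()
median_k = ks[len(ks) // 2]
spread_90 = ks[int(0.95 * len(ks))] - ks[int(0.05 * len(ks))]
print(median_k, spread_90)
```

The percentile spread of the re-identified parameter is the kind of quantity the study reports (e.g. identified values mostly within a stated percentage of the true value).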
A multiscale MDCT image-based breathing lung model with time-varying regional ventilation
Yin, Youbing; Choi, Jiwoong; Hoffman, Eric A.; Tawhai, Merryn H.; Lin, Ching-Long
2013-07-01
A novel algorithm is presented that links local structural variables (regional ventilation and deforming central airways) to global function (total lung volume) in the lung over three imaged lung volumes, to derive a breathing lung model for computational fluid dynamics simulation. The algorithm constitutes the core of an integrative, image-based computational framework for subject-specific simulation of the breathing lung. For the first time, the algorithm is applied to three multi-detector row computed tomography (MDCT) volumetric lung images of the same individual. A key technique in linking global and local variables over multiple images is an in-house mass-preserving image registration method. Throughout breathing cycles, cubic interpolation is employed to ensure C1 continuity in constructing time-varying regional ventilation at the whole lung level, flow rate fractions exiting the terminal airways, and airway deformation. The imaged exit airway flow rate fractions are derived from regional ventilation with the aid of a three-dimensional (3D) and one-dimensional (1D) coupled airway tree that connects the airways to the alveolar tissue. An in-house parallel large-eddy simulation (LES) technique is adopted to capture turbulent-transitional-laminar flows in both normal and deep breathing conditions. The results obtained by the proposed algorithm when using three lung volume images are compared with those using only one or two volume images. The three-volume-based lung model produces physiologically-consistent time-varying pressure and ventilation distribution. The one-volume-based lung model under-predicts pressure drop and yields un-physiological lobar ventilation. The two-volume-based model can account for airway deformation and non-uniform regional ventilation to some extent, but does not capture the non-linear features of the lung.
Real-time process optimization based on grey-box neural models
Directory of Open Access Journals (Sweden)
F. A. Cubillos
2007-09-01
Full Text Available This paper investigates the feasibility of using grey-box neural models (GNM) in Real Time Optimization (RTO). These models are based on a suitable combination of fundamental conservation laws and neural networks, and are used in at least two different ways: to complement available phenomenological knowledge with empirical information, or to reduce the dimensionality of complex rigorous physical models. We have observed that the benefits of using these simple adaptable models are counteracted by some difficulties associated with the solution of the optimization problem. Nonlinear Programming (NLP) algorithms failed to find the global optimum because neural networks can introduce multimodal objective functions. One alternative considered to solve this problem was the use of evolutionary algorithms, such as Genetic Algorithms (GA). Although these algorithms produced better results in terms of finding the appropriate region, they took long periods of time to reach the global optimum. It was found that a combination of genetic and nonlinear programming algorithms can be used to obtain the optimal solution quickly. The proposed approach was applied to the Williams-Otto reactor, considering three different GNM models of increasing complexity. Results demonstrated that the use of GNM models and mixed GA/NLP optimization algorithms is a promising approach for solving dynamic RTO problems.
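The mixed GA/NLP idea, a coarse evolutionary search to escape local minima followed by fast deterministic local refinement, can be sketched on a small multimodal objective. This is a generic illustration with assumed population sizes and mutation parameters, not the configuration used for the Williams-Otto study; the local stage here is a simple pattern search standing in for an NLP solver.

```python
import math, random

def f(x):
    """Multimodal objective: global minimum at x = 0, f(0) = -1,
    plus shallower local minima near x = +/-2.05."""
    return 0.1 * x * x - math.cos(3 * x)

def ga_then_local(seed=3):
    rng = random.Random(seed)
    # stage 1: tiny genetic algorithm to locate the right basin
    pop = [rng.uniform(-10, 10) for _ in range(30)]
    for _ in range(40):
        pop.sort(key=f)
        parents = pop[:10]                      # truncation selection
        pop = parents + [p + rng.gauss(0, 1.0)  # Gaussian mutation
                         for p in parents for _ in range(2)]
    best = min(pop, key=f)
    # stage 2: deterministic pattern search refines the GA solution
    step = 0.5
    while step > 1e-6:
        improved = [c for c in (best - step, best + step) if f(c) < f(best)]
        if improved:
            best = min(improved, key=f)
        else:
            step *= 0.5
    return best

x_star = ga_then_local()
print(x_star, f(x_star))
```

The division of labour mirrors the paper's finding: the stochastic stage supplies a point in the correct basin quickly, and the deterministic stage converges to high accuracy from there.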
Time delay and profit accumulation effect on a mine-based uranium market clearing model
International Nuclear Information System (INIS)
Auzans, Aris; Teder, Allan; Tkaczyk, Alan H.
2016-01-01
Highlights: • An improved version of a mine-based uranium market clearing model for the front-end uranium market and enrichment industries is proposed. • A profit accumulation algorithm and a time delay function provide a more realistic uranium mine decision-making process. • Operational decision delay increased uranium market price volatility. - Abstract: The mining industry faces a number of challenges such as market volatility, investment safety, and issues surrounding employment and productivity. Therefore, computer simulations are highly relevant in order to reduce the financial risks associated with these challenges. In the mining industry, each firm must compete with other mines, and the basic target is profit maximization. The aim of this paper is to evaluate the world uranium (U) supply by simulating the financial management challenges faced by an individual U mine that are caused by a variety of regulation issues. In this paper a front-end nuclear fuel cycle tool is used to simulate market conditions and the effects they have on the stability of U supply. An individual U mine's exit from or entry into the market might cause changes on the U supply side which can increase or decrease the market price. In this paper we offer a more advanced version of a mine-based U market clearing model. The existing U market model incorporates the market of primary U from uranium mines with secondary uranium (depleted uranium, DU), highly enriched uranium (HEU) and enrichment services. In the model each uranium mine acts as an independent agent that is able to make operational decisions based on the market price. This paper introduces a more realistic decision-making algorithm for an individual U mine that adds constraints to production decisions. The authors added an accumulated profit model, which allows the profits accumulated to cover possible future economic losses, and a time-delay algorithm to simulate the delayed process of reopening a U mine. The U market simulation covers time period 2010
Time delay and profit accumulation effect on a mine-based uranium market clearing model
Energy Technology Data Exchange (ETDEWEB)
Auzans, Aris [Institute of Physics, University of Tartu, Ostwaldi 1, EE-50411 Tartu (Estonia); Teder, Allan [School of Economics and Business Administration, University of Tartu, Narva mnt 4, EE-51009 Tartu (Estonia); Tkaczyk, Alan H., E-mail: alan@ut.ee [Institute of Physics, University of Tartu, Ostwaldi 1, EE-50411 Tartu (Estonia)
2016-12-15
Directory of Open Access Journals (Sweden)
Mário Ferreira
2015-12-01
Full Text Available Imperfect detection (i.e., failure to detect a species when the species is present) is increasingly recognized as an important source of uncertainty and bias in species distribution modeling. Although methods have been developed to solve this problem by explicitly incorporating variation in detectability in the modeling procedure, their use in freshwater systems remains limited. This is probably because most methods imply repeated sampling (≥ 2) of each location within a short time frame, which may be impractical or too expensive in most studies. Here we explore a novel approach to control for detectability based on the time-to-first-detection, which requires only a single sampling occasion and so may find more general applicability in freshwaters. The approach uses a Bayesian framework to combine conventional occupancy modeling with techniques borrowed from parametric survival analysis, jointly modeling factors affecting the probability of occupancy and the time required to detect a species. To illustrate the method, we modeled large-scale factors (elevation, stream order and precipitation) affecting the distribution of six fish species in a catchment located in north-eastern Portugal, while accounting for factors potentially affecting detectability at sampling points (stream depth and width). Species detectability was most influenced by depth, and to a lesser extent by stream width, and tended to increase over time for most species. Occupancy was consistently affected by stream order, elevation and annual precipitation. These species presented a widespread distribution, with higher uncertainty in tributaries and upper stream reaches. This approach can be used to estimate sampling efficiency and provides a practical framework to incorporate variation in the detection rate in fish distribution models.
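The core of the time-to-first-detection idea is a joint likelihood in which a site is occupied with probability psi and, if occupied, yields a first detection after an exponential waiting time with rate lam, right-censored at the survey length T. The sketch below fits psi and lam by grid-search maximum likelihood on simulated data; the exponential detection model, covariate-free parameterization and parameter values are simplifying assumptions relative to the paper's Bayesian covariate model.

```python
import math, random

def simulate(n_sites=400, psi=0.6, lam=0.8, T=3.0, seed=11):
    """Each site is occupied with probability psi; at occupied sites
    the first detection time is Exp(lam), right-censored at survey
    length T (None = no detection).  Values are assumptions."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_sites):
        if rng.random() < psi:
            t = rng.expovariate(lam)
            data.append(t if t <= T else None)
        else:
            data.append(None)
    return data

def log_lik(psi, lam, data, T=3.0):
    ll = 0.0
    for t in data:
        if t is None:   # absent, or occupied but missed within T
            ll += math.log(psi * math.exp(-lam * T) + (1.0 - psi))
        else:           # occupied and first detected at time t
            ll += math.log(psi * lam) - lam * t
    return ll

data = simulate()
grid = [(p / 20.0, l / 20.0) for p in range(1, 20) for l in range(1, 41)]
psi_hat, lam_hat = max(grid, key=lambda g: log_lik(g[0], g[1], data))
print(psi_hat, lam_hat)
```

The censored-likelihood term is what separates "truly absent" from "present but not yet detected", which is exactly the information a single timed visit can still provide.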
Agent-Based Modeling of Day-Ahead Real Time Pricing in a Pool-Based Electricity Market
Directory of Open Access Journals (Sweden)
Sh. Yousefi
2011-09-01
Full Text Available In this paper, an agent-based structure of the electricity retail market is presented, based on which day-ahead (DA) energy procurement for customers is modeled. Here, we focus on the operation of a single Retail Energy Provider (REP) agent, who purchases energy from the DA pool-based wholesale market and offers DA real time tariffs to a group of its customers. As a model of customer response to the offered real time prices, an hourly acceptance function is proposed to represent the hourly changes in the customer's effective demand according to the prices. A Q-learning (QL) approach is applied to day-ahead real time pricing for the customers, enabling the REP agent to discover which price yields the most benefit through a trial-and-error search. Numerical studies are presented based on New England day-ahead market data, including a comparison of the results of RTP based on the QL approach with those of genetic-based pricing.
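The pricing mechanism can be illustrated with a single-state (bandit-style) Q-learning loop: the REP agent repeatedly posts a price from a discrete set, observes the profit implied by the customers' acceptance of that price, and nudges its action-value estimate toward the observed reward. The linear acceptance function, cost and price grid below are assumptions standing in for the paper's hourly acceptance model and market data.

```python
import random

def learn_price(prices=(30, 40, 50, 60, 70), cost=25.0,
                episodes=4000, eps=0.1, alpha=0.05, seed=5):
    """Bandit-style Q-learning for a retail energy provider's tariff.
    The linear acceptance function is an assumed stand-in for the
    paper's hourly acceptance model."""
    rng = random.Random(seed)

    def accepted_demand(price):
        base = max(0.0, 100.0 - price)            # demand falls with price
        return base * (1.0 + rng.gauss(0, 0.05))  # noisy customer response

    q = {p: 0.0 for p in prices}
    for _ in range(episodes):
        if rng.random() < eps:                    # explore
            p = rng.choice(prices)
        else:                                     # exploit current estimate
            p = max(q, key=q.get)
        profit = (p - cost) * accepted_demand(p)
        q[p] += alpha * (profit - q[p])           # running value estimate
    return max(q, key=q.get), q

best_price, q = learn_price()
print(best_price, {p: round(v) for p, v in q.items()})
```

With the assumed demand curve, the profit-maximizing price is 60, and the epsilon-greedy search converges to it without ever being given the demand function explicitly, which is the point of the trial-and-error formulation.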
DEFF Research Database (Denmark)
von Essen, C.; Cellone, S.; Mallonn, M.
2016-01-01
The transit timing variation technique (TTV) has been widely used to detect and characterize multiple planetary systems. Due to the observational biases imposed mainly by the photometric conditions and instrumentation and the high signal-to-noise required to produce primary transit observations...... the observing time at hand carrying out such follow-ups, or if the use of medium-to-low quality transit light curves, combined with current standard techniques of data analysis, could be playing a main role against exoplanetary search via TTVs. The purpose of this work is to investigate to what extent ground......-based observations treated with current modelling techniques are reliable to detect and characterize additional planets in already known planetary systems. To meet this goal, we simulated typical primary transit observations of a hot Jupiter mimicking an existing system, Qatar-1. To resemble ground-based observations
Non-linear time variant model intended for polypyrrole-based actuators
Farajollahi, Meisam; Madden, John D. W.; Sassani, Farrokh
2014-03-01
Polypyrrole-based actuators are of interest due to their biocompatibility, low operation voltage and relatively high strain and force. Modeling and simulation are very important for predicting the behaviour of each actuator. To develop an accurate model, we need to know the electro-chemo-mechanical specifications of the polypyrrole. In this paper, a non-linear time-variant model of polypyrrole film is derived and proposed using a combination of an RC transmission line model and a state-space representation. The model incorporates the potential-dependent ionic conductivity. A function of the ionic conductivity of polypyrrole vs. local charge is proposed and implemented in the non-linear model. Matching of the measured and simulated electrical responses suggests that the ionic conductivity of polypyrrole decreases significantly at negative potentials vs. silver/silver chloride, leading to reduced current in cyclic voltammetry (CV) tests. The next stage is to relate the distributed charging of the polymer to actuation via the strain-to-charge ratio. Further work is also needed to identify the ionic and electronic conductivities as well as the capacitance as a function of oxidation state so that a fully predictive model can be created.
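The RC transmission line component of such a model can be sketched as a lumped ladder: series resistances represent ionic transport resistance through the film thickness and shunt capacitances represent local charge storage, with the far end blocked. The sketch below uses simple forward-Euler integration and illustrative, potential-independent R and C values; the paper's key refinement, the charge-dependent ionic conductivity, is deliberately omitted for brevity.

```python
def rc_line_step(n=10, R=100.0, C=1e-3, v_in=1.0, dt=1e-3, steps=20000):
    """Forward-Euler simulation of an n-segment RC ladder driven by a
    voltage step, a lumped stand-in for ion transport into a film;
    the far end is blocked.  R, C values are illustrative only."""
    v = [0.0] * n                     # capacitor voltages along the film
    for _ in range(steps):
        new = v[:]
        for i in range(n):
            left = v_in if i == 0 else v[i - 1]
            i_in = (left - v[i]) / R
            i_out = (v[i] - v[i + 1]) / R if i < n - 1 else 0.0
            new[i] = v[i] + dt * (i_in - i_out) / C
        v = new
    return v

v = rc_line_step()
print(v[0], v[-1])
```

Making R a function of the local node voltage (charge state) would reproduce the paper's non-linear, time-variant behaviour within the same ladder structure.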
Impedance based time-domain modeling of lithium-ion batteries: Part I
Gantenbein, Sophia; Weiss, Michael; Ivers-Tiffée, Ellen
2018-03-01
This paper presents a novel lithium-ion cell model, which simulates the current-voltage characteristic as a function of state of charge (0%-100%) and temperature (0-30 °C). It predicts the cell voltage at each operating point by calculating the total overvoltage from the individual contributions of (i) the ohmic loss η0, (ii) the charge transfer loss of the cathode ηCT,C, (iii) the charge transfer loss and the solid electrolyte interface loss of the anode ηSEI/CT,A, and (iv) the solid-state and electrolyte diffusion loss ηDiff,A/C/E. This approach is based on a physically meaningful equivalent circuit model, which is parametrized by electrochemical impedance spectroscopy and time-domain measurements covering a wide frequency range from MHz to μHz. The model is parametrized, as an example, for a commercial, high-power 350 mAh graphite/LiNiCoAlO2-LiCoO2 pouch cell and validated against continuous discharge and charge curves at varying temperatures. For the first time, the physical background of the model allows the operator to draw conclusions about the performance-limiting factor at various operating conditions. Not only can the model help to choose application-optimized cell characteristics, but it can also support the battery management system when taking corrective actions during operation.
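The structure of the voltage prediction, open-circuit voltage minus a sum of named overvoltage contributions, can be sketched by treating each loss as a pure resistance or an RC step response. All resistances, time constants and the OCV below are illustrative placeholders, not the impedance-fitted parameters of the pouch cell.

```python
import math

def cell_voltage(i_load, t, ocv=3.7):
    """Terminal voltage under a constant-current pulse as OCV minus a
    sum of named overvoltages.  Each RC term follows
    v = I*R*(1 - exp(-t/tau)); all R, tau and OCV values are
    illustrative placeholders, not fitted cell parameters."""
    losses = {                      # name: (R [ohm], tau [s] or None)
        "ohmic":        (0.020, None),
        "ct_cathode":   (0.015, 0.5),
        "sei_ct_anode": (0.010, 0.05),
        "diffusion":    (0.030, 50.0),
    }
    etas = {}
    for name, (r, tau) in losses.items():
        if tau is None:             # purely ohmic, instantaneous
            etas[name] = i_load * r
        else:                       # RC step response
            etas[name] = i_load * r * (1.0 - math.exp(-t / tau))
    return ocv - sum(etas.values()), etas

v, etas = cell_voltage(i_load=1.0, t=10.0)
print(v, etas)
```

Keeping the contributions separate is what lets an operator ask which process limits performance at a given state of charge and temperature, rather than seeing only a single lumped internal resistance.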
A Fault Prognosis Strategy Based on Time-Delayed Digraph Model and Principal Component Analysis
Directory of Open Access Journals (Sweden)
Ningyun Lu
2012-01-01
Full Text Available Because of the interlinking of process equipment in the process industry, event information may propagate through the plant and affect many downstream process variables. Specifying the causality and estimating the time delays among process variables are critically important for data-driven fault prognosis. They are not only helpful for finding the root cause when a plant-wide disturbance occurs, but also for revealing the evolution of an abnormal event propagating through the plant. This paper addresses the information flow directionality and time-delay estimation problems in the process industry and presents an information synchronization technique to assist fault prognosis. Time-delayed mutual information (TDMI) is used for both causality analysis and time-delay estimation. To represent the causality structure of high-dimensional process variables, a time-delayed signed digraph (TD-SDG) model is developed. Then, a general fault prognosis strategy is developed based on the TD-SDG model and principal component analysis (PCA). The proposed method is applied to an air separation unit and has achieved satisfying results in predicting the frequently occurring “nitrogen-block” fault.
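Time-delayed mutual information can be sketched directly: discretize two signals into bins, estimate MI from the joint histogram at each candidate lag, and take the lag that maximizes it as the estimated propagation delay. The signal model and bin count below are illustrative assumptions.

```python
import math, random
from collections import Counter

def digitized(vals, bins=8):
    lo, hi = min(vals), max(vals)
    return [min(bins - 1, int((v - lo) / (hi - lo) * bins)) for v in vals]

def mutual_information(xs, ys, bins=8):
    """Histogram estimate of MI (in nats) between two series."""
    dx, dy = digitized(xs, bins), digitized(ys, bins)
    n = len(dx)
    pxy, px, py = Counter(zip(dx, dy)), Counter(dx), Counter(dy)
    return sum(c / n * math.log(c * n / (px[a] * py[b]))
               for (a, b), c in pxy.items())

def estimate_delay(x, y, max_lag=20):
    """Time-delayed MI: return the lag maximizing MI(x[t], y[t+lag])."""
    scores = {lag: mutual_information(x[:len(x) - lag], y[lag:])
              for lag in range(max_lag + 1)}
    return max(scores, key=scores.get)

rng = random.Random(2)
x = [math.sin(0.07 * t) + 0.2 * rng.gauss(0, 1) for t in range(2000)]
y = [x[t - 7] if t >= 7 else 0.0 for t in range(2000)]  # x delayed by 7
print(estimate_delay(x, y))  # 7
```

Unlike cross-correlation, MI also captures non-linear dependence between variables, which is why it suits causality analysis in process data.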
Fedorov, A K; Anufriev, M N; Zhirnov, A A; Stepanov, K V; Nesterov, E T; Namiot, D E; Karasik, V E; Pnev, A B
2016-03-01
We propose a novel approach to the recognition of particular classes of non-conventional events in signals from phase-sensitive optical time-domain-reflectometry-based sensors. Our algorithmic solution has two main features: filtering aimed at the de-noising of signals and a Gaussian mixture model to cluster them. We test the proposed algorithm using experimentally measured signals. The results show that two classes of events can be distinguished with a best-case recognition probability close to 0.9 given sufficient numbers of training samples.
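The clustering stage can be illustrated with a plain two-component Gaussian mixture fitted by expectation-maximization on a one-dimensional feature; the filtering and feature-extraction steps of the paper are not reproduced, and the initialization and toy data below are assumptions.

```python
import math, random

def em_gmm_1d(data, iters=60):
    """Two-component 1-D Gaussian mixture fitted with plain EM."""
    mu = [min(data), max(data)]      # crude but deterministic init
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        resp = []                    # E-step: posterior responsibilities
        for x in data:
            p = [w[k] * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                 / math.sqrt(2 * math.pi * var[k]) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append((p[0] / s, p[1] / s))
        for k in (0, 1):             # M-step: reweighted moments
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, data)) / nk, 1e-6)
    return mu, var, w

rng = random.Random(0)
data = ([rng.gauss(0.0, 0.5) for _ in range(300)]
        + [rng.gauss(5.0, 0.5) for _ in range(300)])
mu, var, w = em_gmm_1d(data)
print(sorted(mu), w)
```

A new event is then assigned to whichever fitted component gives it the higher posterior responsibility, which is the recognition decision the paper evaluates.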
Real-Time Model-Based Leak-Through Detection within Cryogenic Flow Systems
Walker, M.; Figueroa, F.
2015-01-01
The timely detection of leaks within cryogenic fuel replenishment systems is of significant importance to operators on account of the safety and economic impacts associated with material loss and operational inefficiencies. The associated loss of pressure control also affects the stability and the ability to control the phase of cryogenic fluids during replenishment operations. Current research dedicated to providing Prognostics and Health Management (PHM) coverage of such cryogenic replenishment systems has focused on the detection of leaks to atmosphere, involving relatively simple model-based diagnostic approaches that, while effective, are unable to isolate the fault to specific piping system components. The authors have extended this research to focus on the detection of leaks through closed valves that are intended to isolate sections of the piping system from the flow and pressurization of cryogenic fluids. The described approach employs model-based detection of leak-through conditions based on correlations of pressure changes across isolation valves and attempts to isolate the faults to specific valves. Implementation of this capability is enabled by knowledge and information embedded in the domain model of the system. The approach has been used effectively to detect such leak-through faults during cryogenic operational testing at the Cryogenic Testbed at NASA's Kennedy Space Center.
Li, Yi; Chen, Yuren
2016-12-30
To make driving assistance systems more humanized, this study focused on the prediction and assistance of drivers' perception-response time on mountain highway curves. Field tests were conducted to collect real-time driving data and driver vision information. A driver-vision lane model quantified curve elements in drivers' vision. A multinomial log-linear model was established to predict perception-response time from traffic/road environment information, the driver-vision lane model, and mechanical status (last second). A corresponding assistance model showed a positive impact on drivers' perception-response times on mountain highway curves. Model results revealed that the driver-vision lane model and visual elements did have an important influence on drivers' perception-response time. Compared with roadside passive road safety infrastructure, proper visual geometry design, timely visual guidance, and visual information integrality of a curve are significant factors for drivers' perception-response time.
The Trick Simulation Toolkit: A NASA/Opensource Framework for Running Time Based Physics Models
Penn, John M.
2016-01-01
The Trick Simulation Toolkit is a simulation development environment used to create high fidelity training and engineering simulations at the NASA Johnson Space Center and many other NASA facilities. Its purpose is to generate a simulation executable from a collection of user-supplied models and a simulation definition file. For each Trick-based simulation, Trick automatically provides job scheduling, numerical integration, the ability to write and restore human readable checkpoints, data recording, interactive variable manipulation, a run-time interpreter, and many other commonly needed capabilities. This allows simulation developers to concentrate on their domain expertise and the algorithms and equations of their models. Also included in Trick are tools for plotting recorded data and various other supporting utilities and libraries. Trick is written in C/C++ and Java and supports both Linux and MacOSX computer operating systems. This paper describes Trick's design and use at NASA Johnson Space Center.
An approach to the drone fleet survivability assessment based on a stochastic continuous-time model
Kharchenko, Vyacheslav; Fesenko, Herman; Doukas, Nikos
2017-09-01
An approach and an algorithm for drone fleet survivability assessment based on a stochastic continuous-time model are proposed. The input data are the number of drones, the drone fleet redundancy coefficient, the drone stability and restoration rate, the limit deviation from the norms of the drone fleet recovery, the drone fleet operational availability coefficient, the probability of failure-free drone operation, and the time needed for the drone fleet to perform the required tasks. Ways of improving the survivability of a recoverable drone fleet, taking into account factors contributing to system accidents, are suggested. Dependencies of the drone fleet survivability rate on both the drone stability and the number of drones are analysed.
Fontes, Fernando A. C. C.; Paiva, Luís T.
2016-10-01
We address optimal control problems for nonlinear systems with pathwise state constraints. These are challenging nonlinear problems for which the number of discretization points is a major factor determining the computational time. The location of these points also has a major impact on the accuracy of the solutions. We propose an algorithm that iteratively finds an adequate time-grid to satisfy some predefined error estimate on the obtained trajectories, guided by information on the adjoint multipliers. The obtained results compare highly favorably with traditional equidistant-spaced time-grid methods, including those using discrete-time models. This way, continuous-time plant models can be used directly. The discretization procedure can be automated and there is no need to select an adequate time step a priori. Even if the optimization procedure is forced to stop at an early stage, as might be the case in real-time problems, we can still obtain a meaningful solution, although it might be a less accurate one. The extension of the procedure to a Model Predictive Control (MPC) context is proposed here. By defining a time-dependent accuracy threshold, we can generate solutions that are more accurate in the initial parts of the receding horizon, which are the most relevant for MPC.
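The flavor of the method, refining the time-grid only where the trajectory demands it, can be sketched with a simple bisection rule: an interval is split whenever the linear interpolant misses the trajectory midpoint by more than a tolerance. In the paper the refinement is guided by adjoint-multiplier information; the midpoint error estimate below is a simplified stand-in.

```python
import math

def refine_grid(f, t0, t1, tol=1e-3, max_pass=30):
    """Bisect any interval whose linear interpolant misses the
    trajectory midpoint by more than tol.  A plain midpoint error
    estimate stands in for the paper's adjoint-based criterion."""
    grid = [t0, t1]
    for _ in range(max_pass):
        new_grid, refined = [grid[0]], False
        for a, b in zip(grid, grid[1:]):
            m = 0.5 * (a + b)
            if abs(f(m) - 0.5 * (f(a) + f(b))) > tol:
                new_grid.append(m)      # split this interval
                refined = True
            new_grid.append(b)
        grid = new_grid
        if not refined:
            break
    return grid

def traj(t):                            # fast transient, then slow ripple
    return math.exp(-5.0 * t) + 0.1 * math.sin(2.0 * t)

grid = refine_grid(traj, 0.0, 4.0)
print(len(grid), grid[:4])
```

The resulting grid is dense over the fast transient near t = 0 and coarse over the smooth tail, which is exactly the behaviour that beats an equidistant grid with the same number of points.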
Kernel based methods for accelerated failure time model with ultra-high dimensional data
Directory of Open Access Journals (Sweden)
Jiang Feng
2010-12-01
Full Text Available Abstract Background Most genomic data have ultra-high dimensions with more than 10,000 genes (probes). Regularization methods with L1 and Lp penalties have been extensively studied in survival analysis with high-dimensional genomic data. However, when the sample size n ≪ m (the number of genes), directly identifying a small subset of genes from ultra-high-dimensional (m > 10,000) data is time-consuming and not computationally efficient. In current microarray analysis, what people really do is select a couple of thousands (or hundreds) of genes using univariate analysis or statistical tests, and then apply the LASSO-type penalty to further reduce the number of disease-associated genes. This two-step procedure may introduce bias and inaccuracy and lead us to miss biologically important genes. Results The accelerated failure time (AFT) model is a linear regression model and a useful alternative to the Cox model for survival analysis. In this paper, we propose a nonlinear kernel-based AFT model and an efficient variable selection method with adaptive kernel ridge regression. Our proposed variable selection method is based on the kernel matrix and the dual problem with a much smaller n × n matrix. It is very efficient when the number of unknown variables (genes) is much larger than the number of samples. Moreover, the primal variables are explicitly updated and the sparsity in the solution is exploited. Conclusions Our proposed methods can simultaneously identify survival-associated prognostic factors and predict survival outcomes with ultra-high-dimensional genomic data. We have demonstrated the performance of our methods with both simulation and real data. The proposed method performs superbly in limited computational studies.
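The key computational point, that the dual problem only involves an n × n kernel matrix however large m is, can be shown with plain kernel ridge regression. This is a generic sketch, not the paper's adaptive variable-selection method; the RBF kernel, the bandwidth 1/m, and the synthetic data are assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma=None):
    """Gaussian (RBF) kernel matrix between row-sample matrices A and B."""
    if gamma is None:
        gamma = 1.0 / A.shape[1]
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_ridge_fit(X, y, lam=1e-2):
    """Dual solution alpha = (K + lam*I)^{-1} y: an n x n system,
    independent of the number of features m (here m >> n)."""
    K = rbf_kernel(X, X)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def kernel_ridge_predict(X_train, alpha, X_new):
    return rbf_kernel(X_new, X_train) @ alpha

rng = np.random.default_rng(0)
n, m = 30, 2000                            # n samples << m "genes"
X = rng.normal(size=(n, m))
y = X[:, 0] + 0.1 * rng.normal(size=n)     # e.g. log survival times
alpha = kernel_ridge_fit(X, y)
pred = kernel_ridge_predict(X, alpha, X)
print(pred.shape)
```

Fitting cost is dominated by solving a 30 × 30 system, whereas a primal ridge solve would involve a 2000 × 2000 matrix.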
DEFF Research Database (Denmark)
Bauer-Gottwein, Peter; Christiansen, Lars; Rosbjerg, Dan
2011-01-01
Coupled hydrogeophysical inversion emerges as an attractive option to improve the calibration and predictive capability of hydrological models. Recently, ground-based time-lapse relative gravity (TLRG) measurements have attracted increasing interest because there is a direct relationship between the signal and the change in water mass stored in the subsurface. Thus, no petrophysical relationship is required for coupled hydrogeophysical inversion. Two hydrological events were monitored with TLRG. One was a natural flooding event in the periphery of the Okavango Delta, Botswana, and one was a forced … Limitations include changes in gravity due to unmonitored non-hydrological effects, and the requirement of a gravitationally stable reference station. Application of TLRG in hydrology should be combined with other geophysical and/or traditional monitoring methods.
Li, Yi; Chen, Yuren
2016-01-01
To make driving assistance systems more humanized, this study focused on the prediction and assistance of drivers' perception-response time on mountain highway curves. Field tests were conducted to collect real-time driving data and driver vision information. A driver-vision lane model quantified curve elements in drivers' vision. A multinomial log-linear model was established to predict perception-response time with traffic/road environment information, the driver-vision lane model, and mechanical…
A CTRW-based model of time-resolved fluorescence lifetime imaging in a turbid medium.
Chernomordik, Victor; Gandjbakhche, Amir H; Hassan, Moinuddin; Pajevic, Sinisa; Weiss, George H
2010-12-01
We develop an analytic model of time-resolved fluorescent imaging of photons migrating through a semi-infinite turbid medium bounded by an infinite plane in the presence of a single stationary point fluorophore embedded in the medium. In contrast to earlier models of fluorescent imaging in which photon motion is assumed to be some form of continuous diffusion process, the present analysis is based on a continuous-time random walk (CTRW) on a simple cubic lattice, the object being to estimate the position and lifetime of the fluorophore. Such information can provide information related to local variations in pH and temperature with potential medical significance. Aspects of the theory were tested using time-resolved measurements of the fluorescence from small inclusions inside tissue-like phantoms. The experimental results were found to be in good agreement with theoretical predictions provided that the fluorophore was not located too close to the planar boundary, a common problem in many diffusive systems.
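The lattice-walk skeleton underlying the CTRW picture can be simulated directly: a walker on a simple cubic lattice started at the fluorophore depth, absorbed when it crosses the bounding plane. This is an illustrative toy, not the paper's analytic model; the depth, step cap, and walker count are assumptions, and attaching random pause times between steps is what would turn the walk into a CTRW in physical time.

```python
import random

def lattice_exit_steps(z_source, rng, max_steps=10000):
    """Number of simple-cubic-lattice steps until a walker started at
    depth z_source exits through the plane z = 0 (photon re-emission);
    None if it never exits within max_steps (lost/absorbed)."""
    x = y = 0
    z = z_source
    for step in range(1, max_steps + 1):
        axis = rng.randrange(3)
        move = rng.choice((-1, 1))
        if axis == 0:
            x += move
        elif axis == 1:
            y += move
        else:
            z += move
        if z < 0:
            return step
    return None

rng = random.Random(42)
times = [t for t in (lattice_exit_steps(5, rng) for _ in range(500)) if t]
print(len(times), min(times))
```

The histogram of such exit-step counts (weighted by the pause-time distribution) is the kind of time-resolved re-emission curve the analytic model predicts.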
2011-01-01
Scalability of Semi-Implicit Time Integrators for Nonhydrostatic Galerkin-based Atmospheric Models on Large Scale Clusters. James F. Kelly and Francis … present performance statistics to explain the scalability behavior. Keywords: atmospheric models, time integrators, MPI, scalability, performance. … moving toward the nonhydrostatic regime. The nonhydrostatic atmospheric models, which run at resolutions finer than 10 km, possess fast-moving
Model-based framework for multi-axial real-time hybrid simulation testing
Fermandois, Gaston A.; Spencer, Billie F.
2017-10-01
Real-time hybrid simulation is an efficient and cost-effective dynamic testing technique for performance evaluation of structural systems subjected to earthquake loading with rate-dependent behavior. A loading assembly with multiple actuators is required to impose realistic boundary conditions on physical specimens. However, such a testing system is expected to exhibit significant dynamic coupling of the actuators and suffer from time lags that are associated with the dynamics of the servo-hydraulic system, as well as control-structure interaction (CSI). One approach to reducing experimental errors considers a multi-input, multi-output (MIMO) controller design, yielding accurate reference tracking and noise rejection. In this paper, a framework for multi-axial real-time hybrid simulation (maRTHS) testing is presented. The methodology employs a real-time feedback-feedforward controller for multiple actuators commanded in Cartesian coordinates. Kinematic transformations between actuator space and Cartesian space are derived for all six degrees of freedom of the moving platform. Then, a frequency domain identification technique is used to develop an accurate MIMO transfer function of the system. Further, a Cartesian-domain model-based feedforward-feedback controller is implemented for time lag compensation and to increase the robustness of the reference tracking for given model uncertainty. The framework is implemented using the 1/5th-scale Load and Boundary Condition Box (LBCB) located at the University of Illinois at Urbana-Champaign. To demonstrate the efficacy of the proposed methodology, a single-story frame subjected to earthquake loading is tested. One of the columns in the frame is represented physically in the laboratory as a cantilevered steel column. For real-time execution, the numerical substructure, kinematic transformations, and controllers are implemented on a digital signal processor. Results show excellent performance of the maRTHS framework when six…
An advection-based model to increase the temporal resolution of PIV time series.
Scarano, Fulvio; Moore, Peter
A numerical implementation of the advection equation is proposed to increase the temporal resolution of PIV time series. The method is based on the principle that velocity fluctuations are transported passively, similar to Taylor's hypothesis of frozen turbulence. In the present work, the advection model is extended to unsteady three-dimensional flows. The main objective of the method is that of lowering the requirement on the PIV repetition rate from the Eulerian frequency toward the Lagrangian one. The local trajectory of the fluid parcel is obtained by forward projection of the instantaneous velocity at the preceding time instant and backward projection from the subsequent time step. The trajectories are approximated by the instantaneous streamlines, which yields accurate results when the amplitude of velocity fluctuations is small with respect to the convective motion. The verification is performed with two experiments conducted at temporal resolutions significantly higher than that dictated by the Nyquist criterion. The flow past the trailing edge of a NACA0012 airfoil closely approximates frozen turbulence, where the largest ratio between the Lagrangian and Eulerian temporal scales is expected. An order of magnitude reduction of the needed acquisition frequency is demonstrated by the velocity spectra of super-sampled series. The application to three-dimensional data is made with time-resolved tomographic PIV measurements of a transitional jet. Here, the 3D advection equation is implemented to estimate the fluid trajectories. The reduction in the minimum sampling rate by the use of super-sampling in this case is less, due to the fact that vortices occurring in the jet shear layer are not well approximated by sole advection at large time separation. Both cases reveal that the current requirements for time-resolved PIV experiments can be revised when information is poured from space to time. An additional favorable effect is observed by the analysis in the…
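The frozen-turbulence principle behind super-sampling reduces, in its simplest 1D periodic form, to sampling the earlier snapshot upstream: u(x, t + Δt) ≈ u(x − c Δt, t). The sketch below is a minimal 1D illustration with linear interpolation, not the paper's unsteady 3D implementation; the grid, convection speed, and sine-wave field are assumptions.

```python
import math

def supersample(u_samples, dx, conv_speed, dt_sub):
    """Estimate the field a sub-interval dt_sub after a PIV snapshot by
    advecting it with the mean convection speed (Taylor's hypothesis):
    u(x, t + dt) ~ u(x - c*dt, t), linear interpolation, periodic domain."""
    n = len(u_samples)
    shift = conv_speed * dt_sub / dx            # shift in grid cells
    out = []
    for i in range(n):
        pos = (i - shift) % n
        j = int(pos)
        frac = pos - j
        out.append((1 - frac) * u_samples[j] + frac * u_samples[(j + 1) % n])
    return out

# frozen sine wave convecting at speed c: the advected snapshot should
# match the analytically shifted wave
n, dx, c, dt = 64, 1.0, 2.0, 0.25
u0 = [math.sin(2 * math.pi * i / n) for i in range(n)]
u_pred = supersample(u0, dx, c, dt)
u_true = [math.sin(2 * math.pi * (i * dx - c * dt) / (n * dx)) for i in range(n)]
err = max(abs(a - b) for a, b in zip(u_pred, u_true))
print(err < 1e-2)
```

For genuinely frozen convection the residual is pure interpolation error; the abstract's jet case shows where this assumption degrades at large time separations.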
Kalman filtering and smoothing for model-based signal extraction that depend on time-varying spectra
Koopman, S.J.; Wong, S.Y.
2011-01-01
We develop a flexible semi-parametric method for the introduction of time-varying parameters in a model-based signal extraction procedure. Dynamic model specifications for the parameters in the model are not required. We show that signal extraction based on Kalman filtering and smoothing can be made
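The Kalman filtering step at the heart of such signal extraction can be shown with the scalar local-level model. This is a textbook sketch, not the authors' semi-parametric procedure; in their time-varying setting the noise variances q and r would themselves change over time, and all numbers below are illustrative.

```python
def kalman_filter(y, q, r, a0=0.0, p0=1e6):
    """Scalar local-level model: random-walk state a_t observed as
    y_t = a_t + noise. q and r are state and observation noise variances."""
    a, p, filtered = a0, p0, []
    for obs in y:
        p = p + q                    # predict: variance grows by q
        k = p / (p + r)              # Kalman gain
        a = a + k * (obs - a)        # update toward the observation
        p = (1 - k) * p
        filtered.append(a)
    return filtered

# a level near 1.0 with one outlier; a diffuse prior (large p0) makes
# the first estimate follow the first observation
y = [1.0, 1.2, 0.9, 1.1, 5.0, 1.0, 1.05]
est = kalman_filter(y, q=0.01, r=0.5)
print(round(est[-1], 3))
```

With small q relative to r the filter smooths heavily, so the outlier at 5.0 only partially pulls the extracted signal; smoothing (a backward pass) would spread that correction over neighboring estimates.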
The "Carbon Data Explorer": Web-Based Space-Time Visualization of Modeled Carbon Fluxes
Billmire, M.; Endsley, K. A.
2014-12-01
The visualization of and scientific "sense-making" from large datasets varying in both space and time is a challenge; one that is still being addressed in a number of different fields. The approaches taken thus far are often specific to a given academic field due to the unique questions that arise in different disciplines; however, basic approaches such as geographic maps and time series plots are still widely useful. The proliferation of model estimates of increasing size and resolution further complicates what ought to be a simple workflow: model some geophysical phenomenon, obtain results and measure uncertainty, organize and display the data, make comparisons across trials, and share findings. A new tool is in development that is intended to help scientists with the latter parts of that workflow. The tentatively-titled "Carbon Data Explorer" (http://spatial.mtri.org/flux-client/) enables users to access carbon science and related spatio-temporal science datasets over the web. All that is required to access multiple interactive visualizations of carbon science datasets is a compatible web browser and an internet connection. While the application targets atmospheric and climate science datasets, particularly spatio-temporal model estimates of carbon products, the software architecture takes an agnostic approach to the data to be visualized. Any atmospheric, biophysical, or geophysical quantity that varies in space and time, including one or more measures of uncertainty, can be visualized within the application. Within the web application, users have seamless control over a flexible and consistent symbology for map-based visualizations and plots. Where time series data are represented by one or more data "frames" (e.g. a map), users can animate the data. In the "coordinated view," users can make direct comparisons between different frames and different models or model runs, facilitating intermodel comparisons and assessments of spatio-temporal variability. Map…
ARIMA-Based Time Series Model of Stochastic Wind Power Generation
DEFF Research Database (Denmark)
Chen, Peiyuan; Pedersen, Troels; Bak-Jensen, Birgitte
2010-01-01
This paper proposes a stochastic wind power model based on an autoregressive integrated moving average (ARIMA) process. The model takes into account the nonstationarity and physical limits of stochastic wind power generation. The model is constructed based on one year of wind power measurements from the Nysted offshore wind farm in Denmark. The proposed limited-ARIMA (LARIMA) model introduces a limiter and characterizes the stochastic wind power generation by mean level, temporal correlation and driving noise. The model is validated against the measurements in terms of temporal correlation and probability distribution. The LARIMA model outperforms a first-order transition-matrix-based discrete Markov model in terms of temporal correlation, probability distribution and number of model parameters. The proposed LARIMA model is further extended to include the monthly variation of the stochastic wind power…
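The "limiter" idea, an autoregressive process hard-clipped to the physical output range of the farm, can be sketched in a few lines. This is a simplified AR(1) stand-in for the paper's LARIMA model; the coefficient, noise level, and 20 MW capacity are invented for illustration.

```python
import random

def limited_ar1(n, phi, sigma, cap, x0, seed=1):
    """Limited AR(1) sketch of the LARIMA idea: an autoregressive
    process driven by Gaussian noise, hard-limited to [0, cap],
    the physical range of the wind farm output."""
    rng = random.Random(seed)
    x, out = x0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, sigma)
        x = min(max(x, 0.0), cap)      # the limiter
        out.append(x)
    return out

series = limited_ar1(1000, phi=0.95, sigma=2.0, cap=20.0, x0=10.0)
print(min(series) >= 0.0, max(series) <= 20.0)
```

Without the limiter an unbounded Gaussian AR process would occasionally emit negative or above-capacity power values, which is exactly the physical inconsistency the LARIMA construction removes.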
Speed-Accuracy Response Models: Scoring Rules Based on Response Time and Accuracy
Maris, Gunter; van der Maas, Han
2012-01-01
Starting from an explicit scoring rule for time limit tasks incorporating both response time and accuracy, and a definite trade-off between speed and accuracy, a response model is derived. Since the scoring rule is interpreted as a sufficient statistic, the model belongs to the exponential family. The various marginal and conditional distributions…
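One concrete scoring rule with exactly this speed-accuracy trade-off is the signed residual time rule: a correct answer earns the time remaining before the limit, an incorrect one loses it. Whether this is the paper's exact rule is an assumption here; the sketch just illustrates the trade-off the abstract describes.

```python
def signed_residual_time(correct: bool, rt: float, limit: float) -> float:
    """Signed residual time score for a time-limit task: fast correct
    answers gain the remaining time, fast wrong answers lose it, so
    rapid guessing is penalised rather than rewarded."""
    residual = limit - rt
    return residual if correct else -residual

print(signed_residual_time(True, 3.0, 10.0))   # quick and correct
print(signed_residual_time(False, 3.0, 10.0))  # quick and wrong
print(signed_residual_time(True, 9.5, 10.0))   # slow and correct
```

Treating such a score as a sufficient statistic is what lets the derived response model fall in the exponential family, as the abstract notes.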
Arentze, Theo; Ettema, D.F.; Timmermans, Harry
Existing theories and models in economics and transportation treat households’ decisions regarding allocation of time and income to activities as a resource-allocation optimization problem. This stands in contrast with the dynamic nature of day-by-day activity-travel choices. Therefore, in the
Real-Time, Model-Based Spray-Cooling Control System for Steel Continuous Casting
Petrus, Bryan; Zheng, Kai; Zhou, X.; Thomas, Brian G.; Bentsman, Joseph
2011-02-01
This article presents a new system to control secondary cooling water sprays in continuous casting of thin steel slabs (CONONLINE). It uses real-time numerical simulation of heat transfer and solidification within the strand as a software sensor in place of unreliable temperature measurements. The one-dimensional finite-difference model, CON1D, is adapted to create the real-time predictor of the slab temperature and solidification state. During operation, the model is updated with data collected by the caster automation systems. A decentralized controller configuration based on a bank of proportional-integral controllers with antiwindup is developed to maintain the shell surface-temperature profile at a desired set point. A new method of set-point generation is proposed to account for measured mold heat flux variations. A user-friendly monitor visualizes the results and accepts set-point changes from the caster operator. Example simulations demonstrate how a significantly better shell surface-temperature control is achieved.
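The control layer described, a bank of proportional-integral controllers with anti-windup, can be sketched for one spray zone. This is a generic PI anti-windup illustration, not the CONONLINE controller; the sign convention (more spray when the measured surface temperature exceeds the set point), gains, limits, and temperatures are all assumptions.

```python
def pi_antiwindup(setpoints, measurements, kp, ki, dt, u_min, u_max):
    """PI controller with conditional-integration anti-windup: the
    integrator is frozen while the actuator (spray valve) is saturated
    and the error would push it further into saturation."""
    integ, outputs = 0.0, []
    for sp, y in zip(setpoints, measurements):
        e = y - sp                       # hotter than set point -> more spray
        u = kp * e + ki * integ
        u_sat = min(max(u, u_min), u_max)
        if u == u_sat or e * u < 0:      # integrate only when not winding up
            integ += e * dt
        outputs.append(u_sat)
    return outputs

sp = [1000.0] * 5                        # target surface temperature, degC
meas = [1100.0, 1080.0, 1050.0, 1020.0, 1005.0]
u = pi_antiwindup(sp, meas, kp=0.01, ki=0.002, dt=1.0, u_min=0.0, u_max=0.9)
print([round(v, 2) for v in u])
```

On the first sample the valve saturates at `u_max` and the integrator is frozen, so the controller recovers immediately once the error shrinks instead of overshooting from a wound-up integral.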
A Virtual Machine Migration Strategy Based on Time Series Workload Prediction Using Cloud Model
Directory of Open Access Journals (Sweden)
Yanbing Liu
2014-01-01
Full Text Available Aimed at resolving the issues of the imbalance of resources and workloads at data centers and the overhead together with the high cost of virtual machine (VM) migrations, this paper proposes a new VM migration strategy based on a cloud-model time-series workload prediction algorithm. By setting upper and lower workload bounds for host machines, forecasting the tendency of their subsequent workloads by creating a workload time series using the cloud model, and stipulating a general VM migration criterion, workload-aware migration (WAM), the proposed strategy selects a source host machine, a destination host machine, and a VM on the source host machine, carrying out the task of VM migration. Experimental results and analyses show, through comparison with other peer research works, that the proposed method can effectively avoid VM migrations caused by momentary peak workload values, significantly lower the number of VM migrations, and dynamically reach and maintain a resource and workload balance for virtual machines, promoting improved utilization of resources in the entire data center.
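The bound-based selection step can be sketched as follows: hosts above the upper bound are migration sources, hosts below the lower bound are destinations, and a VM is chosen to bring the source back under the bound. This is a static simplification; the paper's strategy additionally forecasts the workload time series with the cloud model before deciding, and the host names, loads, and bounds here are invented.

```python
def plan_migration(hosts, upper, lower):
    """Workload-aware migration (WAM) sketch: pick the most loaded host
    above `upper` as source, the least loaded host below `lower` as
    destination, and move the smallest VM that brings the source back
    under the upper bound."""
    load = {h: sum(vms) for h, vms in hosts.items()}
    overloaded = [h for h in hosts if load[h] > upper]
    underloaded = [h for h in hosts if load[h] < lower]
    if not overloaded or not underloaded:
        return None                        # nothing to migrate
    src = max(overloaded, key=load.get)
    dst = min(underloaded, key=load.get)
    for vm in sorted(hosts[src]):          # smallest sufficient VM first
        if load[src] - vm <= upper:
            return src, dst, vm
    return src, dst, max(hosts[src])

hosts = {"h1": [30, 25, 40], "h2": [10, 5], "h3": [20, 20]}
print(plan_migration(hosts, upper=80, lower=30))
```

Gating this decision on the *predicted* rather than the instantaneous load is what lets the full strategy ignore momentary workload peaks.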
Schindewolf, Marcus; Kaiser, Andreas; Buchholtz, Arno; Schmidt, Jürgen
2017-04-01
Extreme rainfall events and resulting flash floods led to massive devastation in Germany during spring 2016. The study presented aims at the development of an early warning system that allows the simulation and assessment of negative effects on infrastructure from radar-based heavy rainfall predictions, which serve as input data for the process-based soil loss and deposition model EROSION 3D. Our approach enables a detailed identification of runoff and sediment fluxes in agriculturally used landscapes. In a first step, documented historical events were analyzed concerning the accordance of measured radar rainfall and large-scale erosion risk maps. A second step focused on small-scale erosion monitoring via UAV of source areas of heavy flooding events and a model reconstruction of the processes involved. In all examples, damage was caused to local infrastructure. Both analyses are promising for detecting runoff and sediment-delivering areas even at high temporal and spatial resolution. Results prove the important role of late-covering crops such as maize, sugar beet or potatoes in runoff generation. While e.g. winter wheat positively affects extensive runoff generation on undulating landscapes, massive soil loss and thus muddy flows are observed and depicted in model results. Future research aims at large-scale model parameterization and application in real time, uncertainty estimation of precipitation forecasts and interface developments.
DEFF Research Database (Denmark)
Zhou, H. W.; Yi, H. Y.; Mishnaevsky, Leon
2017-01-01
A modeling approach to the time-dependent properties of Glass Fiber Reinforced Polymer (GFRP) composites is of special interest for the quantitative description of long-term behavior. An electronic creep machine is employed to investigate the time-dependent deformation of four dog-bone-shaped specimens… A fractional derivative Maxwell model is suggested to characterize the time-dependent behavior of GFRP composites, replacing the Newtonian dashpot with the Abel dashpot in the classical Maxwell model. The analytic solution for the fractional derivative Maxwell model is given and the relevant parameters are determined. The results estimated…
A HPC based cloud model for real-time energy optimisation
Petri, Ioan; Li, Haijiang; Rezgui, Yacine; Chunfeng, Yang; Yuce, Baris; Jayan, Bejay
2016-01-01
Recent research has emphasised that an increasing number of enterprises need computation environments for executing HPC (High Performance Computing) applications. Rather than paying the cost of ownership and possessing physical, fixed-capacity clusters, enterprises can reserve or rent resources for undertaking the required tasks. With the emergence of new computation paradigms such as cloud computing, it has become possible to solve a wider range of problems due to their capability to handle and process massive amounts of data. On the other hand, given the pressing regulatory requirement to reduce the carbon footprint of our built environment, significant research efforts have recently been directed towards simulation-based building energy optimisation with the overall objective of reducing energy consumption. Energy optimisation in buildings represents a class of problems that requires significant computation resources and generally is a time-consuming process, especially when undertaken with building simulation software such as EnergyPlus. In this paper we present how a HPC-based cloud model can be efficiently used for running and deploying EnergyPlus simulation-based optimisation in order to fulfil a number of objectives related to energy consumption. We describe and evaluate the establishment of such an application-based environment, and consider a cost perspective to determine the efficiency over several cases we explore. This study makes the following contributions: (i) a comprehensive examination of issues relevant to the HPC community, including performance, cost, user perspectives and range of user activities; (ii) a comparison of two different execution environments, HTCondor and CometCloud, to determine their effectiveness in supporting simulation-based optimisation; and (iii) a detailed performance analysis to locate the limiting factors of these execution environments.
Directory of Open Access Journals (Sweden)
Naoki Kawamura
2017-11-01
Full Text Available It is known that the process of reconstruction of a Positron Emission Tomography (PET) image from sinogram data is very sensitive to measurement noise; it is still an important research topic to reconstruct PET images with high signal-to-noise ratios. In this paper, we propose a new reconstruction method for a temporal series of PET images from a temporal series of sinogram data. In the proposed method, PET images are reconstructed by minimizing the Kullback–Leibler divergence between the observed sinogram data and sinogram data derived from a parametric model of PET images. The contributions of the proposition include the following: (1) regions of targets in images are explicitly expressed using a set of spatial bases in order to ignore the noise in the background; (2) a parametric time activity model of PET images is explicitly introduced as a constraint; and (3) an algorithm for solving the optimization problem is clearly described. To demonstrate the advantages of the proposed method, quantitative evaluations are performed using both synthetic and clinical data of human brains.
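Minimizing the Kullback–Leibler divergence between observed and model-predicted sinograms is exactly what the classic MLEM iteration does, so it serves as a minimal baseline sketch for this idea (the paper adds spatial bases and a time-activity model on top). The 2-pixel, 3-detector system below is a toy assumption.

```python
def mlem(A, y, n_iter=50):
    """Classic MLEM iteration, which minimises the Kullback-Leibler
    divergence between observed sinogram counts y and the forward
    projection A @ lam of the image lam."""
    n_det, n_pix = len(A), len(A[0])
    lam = [1.0] * n_pix
    sens = [sum(A[i][j] for i in range(n_det)) for j in range(n_pix)]
    for _ in range(n_iter):
        proj = [sum(A[i][j] * lam[j] for j in range(n_pix)) for i in range(n_det)]
        ratio = [y[i] / p if p > 0 else 0.0 for i, p in enumerate(proj)]
        back = [sum(A[i][j] * ratio[i] for i in range(n_det)) for j in range(n_pix)]
        lam = [lam[j] * back[j] / sens[j] for j in range(n_pix)]
    return lam

# toy 2-pixel image with noise-free data generated from [2, 3]
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
y = [2.0, 3.0, 5.0]
est = mlem(A, y)
print([round(v, 3) for v in est])   # converges to [2.0, 3.0]
```

With consistent noise-free data the iteration recovers the true image; with noisy counts it is the background noise amplification of this baseline that motivates the paper's explicit spatial and temporal constraints.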
Examining School-Based Bullying Interventions Using Multilevel Discrete Time Hazard Modeling
Wagaman, M. Alex; Geiger, Jennifer Mullins; Bermudez-Parsai, Monica; Hedberg, E. C.
2014-01-01
Although schools have been trying to address bullying by utilizing different approaches that stop or reduce the incidence of bullying, little remains known about which specific intervention strategies are most successful in reducing bullying in the school setting. Using the social-ecological framework, this paper examines school-based disciplinary interventions often used to deliver consequences to deter the reoccurrence of bullying and aggressive behaviors among school-aged children. Data for this study are drawn from the School-Wide Information System (SWIS), with the final analytic sample consisting of 1,221 students in grades K–12 who received an office disciplinary referral for bullying during the first semester. Using Kaplan-Meier failure functions and multilevel discrete-time hazard models, determinants of the probability of a student receiving a second referral over time were examined. Of the seven interventions tested, only Parent-Teacher Conference (AOR=0.65) was significantly associated with a reduced likelihood of the reoccurrence of bullying and aggressive behaviors. By using a social-ecological framework, schools can develop strategies that deter the reoccurrence of bullying by identifying key factors that enhance a sense of connection between the students' mesosystems as well as utilizing disciplinary strategies that take into consideration students' microsystem roles. PMID:22878779
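A discrete-time hazard model of this kind treats each period as a Bernoulli trial with a logistic-link hazard, and the survival curve is the cumulative product of (1 − hazard). The coefficients below are illustrative assumptions, not the paper's estimates; `x` stands in for a binary intervention covariate such as a parent-teacher conference.

```python
import math

def hazard(t, x, b0=-2.0, b1=0.3, b2=-0.4):
    """Discrete-time hazard via a logistic link: probability of a second
    referral in period t given covariate x (1 = intervention received).
    Coefficients are made up for illustration."""
    eta = b0 + b1 * t + b2 * x
    return 1.0 / (1.0 + math.exp(-eta))

def survival_curve(periods, x):
    """P(no second referral through each period): cumulative product
    of the per-period survival probabilities (1 - hazard)."""
    s, curve = 1.0, []
    for t in range(1, periods + 1):
        s *= 1.0 - hazard(t, x)
        curve.append(s)
    return curve

no_conf = survival_curve(6, x=0)
conf = survival_curve(6, x=1)
print(all(c > n for c, n in zip(conf, no_conf)))   # intervention helps
```

A negative coefficient on the intervention (here b2) translates into a uniformly higher survival curve, which is the pattern reported for Parent-Teacher Conference in the abstract.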
Scheduling of high-speed rail traffic based on discrete-time movement model
International Nuclear Information System (INIS)
Sun Ya-Hua; Cao Cheng-Xuan; Xu Yan; Wu Chao
2013-01-01
In this paper, a new simulation approach for solving the mixed train scheduling problem on the high-speed double-track rail line is presented. Based on the discrete-time movement model, we propose control strategies for mixed train movement with different speeds on a high-speed double-track rail line, including braking strategy, priority rule, travelling strategy, and departing rule. A new detailed algorithm is also presented based on the proposed control strategies for mixed train movement. Moreover, we analyze the dynamic properties of rail traffic flow on a high-speed rail line. Using our proposed method, we can effectively simulate the mixed train schedule on a rail line. The numerical results demonstrate that an appropriate decrease of the departure interval can enhance the capacity, and a suitable increase of the distance between two adjacent stations can enhance the average speed. Meanwhile, the capacity and the average speed will be increased by appropriately enhancing the ratio of faster train number to slower train number from 1. (general)
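The discrete-time movement model with a braking strategy can be sketched for a single following train: at each time step the train accelerates toward the line speed unless doing so would leave it unable to stop within the gap to the train ahead. This is a heavily simplified stand-in for the paper's control strategies; the speeds, accelerations, and distances are assumptions.

```python
def step(v, gap, v_max, a_acc, a_brk, dt):
    """One discrete-time update for a following train. Brake if, after
    accelerating, the next step's travel plus the braking distance
    v^2 / (2*a_brk) would reach the train (or stop signal) ahead."""
    v_acc = min(v_max, v + a_acc * dt)
    if v_acc * dt + v_acc * v_acc / (2.0 * a_brk) >= gap:
        v = max(0.0, v - a_brk * dt)       # braking strategy
    else:
        v = v_acc
    return v, v * dt                       # new speed, distance advanced

v, pos, leader = 0.0, 0.0, 2000.0          # stopped leader 2 km ahead
for _ in range(600):                       # 10 minutes at dt = 1 s
    v, d = step(v, leader - pos, v_max=83.3, a_acc=0.5, a_brk=1.0, dt=1.0)
    pos += d
print(pos < leader, v)
```

The anticipatory check (using the *post-acceleration* speed) guarantees the follower never overruns the leader, which is the safety invariant a scheduling simulation of mixed fast and slow trains has to preserve.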
Model-based Integration of Past & Future in TimeTravel
DEFF Research Database (Denmark)
Khalefa, Mohamed E.; Fischer, Ulrike; Pedersen, Torben Bach
2012-01-01
TimeTravel builds a hierarchical model index over a time series and uses it to answer approximate and exact queries. TimeTravel is implemented in PostgreSQL, thus achieving complete user transparency at the query level. In the demo, we show the easy building of a hierarchical model index for a real-world time series and the effect of varying the error guarantees on the speed-up…
Directory of Open Access Journals (Sweden)
Seunghwan Hong
2017-01-01
Full Text Available Geometric correction of SAR satellite imagery is the process of adjusting the model parameters that define the relationship between ground and image coordinates. To achieve sub-pixel geolocation accuracy, the adoption of an appropriate geometric correction model and parameters is important. Until now, various geometric correction models have been developed and applied. However, it is still difficult for general users to adopt a suitable geometric correction model with sufficient precision. In this regard, this paper evaluated the orbit-based and time-offset-based models with an error simulation. To evaluate the geometric correction models, Radarsat-1 images, which have large errors in satellite orbit information, and TerraSAR-X images, which have a reportedly high accuracy in satellite orbit and sensor information, were utilized. For the Radarsat-1 imagery, the geometric correction model based on the satellite position parameters performed better than the model based on time-offset parameters. In the case of the TerraSAR-X imagery, the two geometric correction models had similar performance and could ensure sub-pixel geolocation accuracy.
T-UPPAAL: Online Model-based Testing of Real-Time Systems
DEFF Research Database (Denmark)
Mikucionis, Marius; Larsen, Kim Guldstrand; Nielsen, Brian
2004-01-01
The goal of testing is to gain confidence in a physical computer-based system by means of executing it. More than one third of typical project resources is spent on testing embedded and real-time systems, but testing still remains ad hoc, based on heuristics, and error-prone. Therefore, systematic, theoretically well-founded and effective automated real-time testing techniques are of great practical value. We present an online conformance testing tool for timed systems.
Design base transient analysis using the real-time nuclear reactor simulator model
International Nuclear Information System (INIS)
Tien, K.K.; Yakura, S.J.; Morin, J.P.; Gregory, M.V.
1987-01-01
A real-time simulation model has been developed to describe the dynamic response of all major systems in a nuclear process reactor. The model consists of a detailed representation of all hydraulic components in the external coolant circulating loops, consisting of piping, valves, pumps and heat exchangers. The reactor core is described by a three-dimensional neutron kinetics model with detailed representation of assembly coolant and moderator thermal hydraulics. The models have been developed to support a real-time training simulator; therefore, they reproduce system parameters characteristic of steady-state normal operation with high precision. The system responses for postulated severe transients such as large pipe breaks, loss of pumping power, piping leaks, malfunctions in control rod insertion, and emergency injection of neutron absorber are calculated to be in good agreement with reference safety analyses. Restrictions were imposed by the requirement that the resulting code be able to run in real time with sufficient spare time to allow interfacing with secondary systems and simulator hardware. Due to the hardware set-up and real plant instrumentation, simplifications due to symmetry were not allowed. The resulting code represents a coarse-node engineering model in which the level of detail has been tailored to the available computing power of a present-generation super-minicomputer. Results for several significant transients, as calculated by the real-time model, are compared both to actual plant data and to results generated by fine-mesh analysis codes.
A Seasonal Time-Series Model Based on Gene Expression Programming for Predicting Financial Distress
Directory of Open Access Journals (Sweden)
Ching-Hsue Cheng
2018-01-01
Full Text Available The issue of financial distress prediction is an important and challenging research topic in the financial field. Currently, there have been many methods for predicting firm bankruptcy and financial crisis, including artificial intelligence and traditional statistical methods, and past studies have shown that the prediction results of artificial intelligence methods are better than those of traditional statistical methods. Financial statements are quarterly reports; hence, the financial crisis of companies is seasonal time-series data, and the attribute data affecting the financial distress of companies is nonlinear and nonstationary time-series data with fluctuations. Therefore, this study employed a nonlinear attribute selection method to build a nonlinear financial distress prediction model: that is, this paper proposed a novel seasonal time-series gene expression programming model for predicting the financial distress of companies. The proposed model has several advantages, including the following: (i) the proposed model is different from previous models lacking the concept of time series; (ii) the proposed integrated attribute selection method can find the core attributes and reduce high-dimensional data; and (iii) the proposed model can generate the rules and mathematical formulas of financial distress, providing references to investors and decision makers. The results show that the proposed method is better than the listed classifiers under three criteria; hence, the proposed model has competitive advantages in predicting the financial distress of companies.
DEFF Research Database (Denmark)
Pagonis, V.; Ankjærgaard, Christina; Murray, Andrew
2010-01-01
This paper presents a new numerical model for thermal quenching in quartz, based on the previously suggested Mott–Seitz mechanism. In the model, electrons from a dosimetric trap are raised by optical or thermal stimulation into the conduction band, followed by an electronic transition from the con… The model is compared with experimental data obtained using a single-aliquot procedure on a sedimentary quartz sample.
A Simulation-Based Geostatistical Approach to Real-Time Reconciliation of the Grade Control Model
Wambeke, T.; Benndorf, J.
2017-01-01
One of the main challenges of the mining industry is to ensure that produced tonnages and grades are aligned with targets derived from model-based expectations. Unexpected deviations, resulting from large uncertainties in the grade control model, often occur and strongly impact resource recovery
A novel real-time non-linear wavelet-based model predictive controller for a coupled tank system
Owa, K; Sharma, S; Sutton, R
2014-01-01
This article presents the design, simulation and real-time implementation of a constrained non-linear model predictive controller for a coupled tank system. A novel wavelet-based function neural network model and a genetic algorithm online non-linear real-time optimisation approach were used in the non-linear model predictive controller strategy. A coupled tank system, which resembles operations in many chemical processes, is complex and has inherent non-linearity, and hence, controlling such...
Flatness-based control and Kalman filtering for a continuous-time macroeconomic model
Rigatos, G.; Siano, P.; Ghosh, T.; Busawon, K.; Binns, R.
2017-11-01
The article proposes flatness-based control for a nonlinear macroeconomic model of the UK economy. The differential flatness properties of the model are proven. This makes it possible to introduce a transformation (diffeomorphism) of the system's state variables and to express the state-space description of the model in the linear canonical (Brunovsky) form, in which both the feedback control and the state estimation problem can be solved. For the linearized equivalent model of the macroeconomic system, stabilizing feedback control can be achieved using pole placement methods. Moreover, to implement stabilizing feedback control of the system by measuring only a subset of its state vector elements, the Derivative-free nonlinear Kalman Filter is used. This consists of the Kalman Filter recursion applied to the linearized equivalent model of the financial system and of an inverse transformation that is again based on differential flatness theory. The asymptotic stability properties of the control scheme are confirmed.
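The pole placement step described above becomes elementary once the system is in Brunovsky (controllable canonical) form. The sketch below, a minimal illustration rather than the paper's actual UK-economy model, places poles for a two-state chain x1' = x2, x2' = u and checks that the closed loop settles:

```python
# Hypothetical sketch: pole placement for a flat system already transformed
# into Brunovsky form x1' = x2, x2' = u (a double integrator chain).
# Gains and pole locations are illustrative, not taken from the cited paper.

def brunovsky_gains(p1, p2):
    """Feedback gains u = -k0*x1 - k1*x2 placing closed-loop poles at p1, p2.

    Closed-loop characteristic polynomial: s^2 + k1*s + k0.
    Desired polynomial: (s - p1)(s - p2) = s^2 - (p1 + p2)*s + p1*p2.
    """
    k0 = p1 * p2          # constant coefficient
    k1 = -(p1 + p2)       # coefficient of s
    return k0, k1

def closed_loop_run(x1, x2, k0, k1, dt=0.001, steps=10000):
    """Euler-integrate the stabilized double integrator from (x1, x2)."""
    for _ in range(steps):
        u = -k0 * x1 - k1 * x2
        x1, x2 = x1 + dt * x2, x2 + dt * u
    return x1, x2

k0, k1 = brunovsky_gains(-2.0, -3.0)      # stable poles at -2 and -3
x1, x2 = closed_loop_run(1.0, 0.0, k0, k1)  # state decays toward the origin
```

With both poles in the left half-plane, the state is driven essentially to zero after ten seconds of simulated time.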
FPGA-Based Real Time, Multichannel Emulated-Digital Retina Model Implementation
Vörösházi, Zsolt; Nagy, Zoltán; Szolgay, Péter
2009-12-01
The function of the low-level image processing that takes place in the biological retina is to compress only the relevant visual information to a manageable size. The behavior of the layers and different channels of the neuromorphic retina has been successfully modeled by cellular neural/nonlinear networks (CNNs). In this paper, we present an extended, application-specific emulated-digital CNN-universal machine (UM) architecture to compute the complex dynamic of this mammalian retina in video real time. The proposed emulated-digital implementation of multichannel retina model is compared to the previously developed models from three key aspects, which are processing speed, number of physical cells, and accuracy. Our primary aim was to build up a simple, real-time test environment with camera input and display output in order to mimic the behavior of retina model implementation on emulated digital CNN by using low-cost, moderate-sized field-programmable gate array (FPGA) architectures.
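The CNN dynamics that such emulated-digital architectures compute can be sketched in a few lines. The example below uses the generic Chua-Yang cell equation with made-up one-dimensional templates, not the retina channel templates of the cited paper:

```python
# Minimal sketch of a 1D cellular neural network (CNN) cell array in the
# Chua-Yang form computed by CNN-UM architectures. The templates
# A = [0, 2, 0], B = [0, 1, 0], z = 0 are illustrative only.

def cnn_output(x):
    """Standard CNN piecewise-linear output y = 0.5*(|x + 1| - |x - 1|)."""
    return 0.5 * (abs(x + 1.0) - abs(x - 1.0))

def run_cnn(u, a=(0.0, 2.0, 0.0), b=(0.0, 1.0, 0.0), z=0.0,
            dt=0.01, steps=5000):
    """Euler-integrate x' = -x + A*y + B*u + z with zero boundary cells."""
    n = len(u)
    x = [0.0] * n
    for _ in range(steps):
        y = [cnn_output(v) for v in x]
        x_next = []
        for i in range(n):
            ym = y[i - 1] if i > 0 else 0.0        # left neighbour output
            yp = y[i + 1] if i < n - 1 else 0.0    # right neighbour output
            um = u[i - 1] if i > 0 else 0.0
            up = u[i + 1] if i < n - 1 else 0.0
            feed = (a[0] * ym + a[1] * y[i] + a[2] * yp
                    + b[0] * um + b[1] * u[i] + b[2] * up + z)
            x_next.append(x[i] + dt * (-x[i] + feed))
        x = x_next
    return [cnn_output(v) for v in x]

y = run_cnn([0.5, -0.5, 0.5])   # each cell settles to a binary output
```

With self-feedback greater than 1, each cell is bistable and latches onto the sign of its input, which is the basic mechanism the templates of a retina model shape into channel-specific spatiotemporal filtering.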
Modeling the impact of forecast-based regime switches on macroeconomic time series
K. Bel (Koen); R. Paap (Richard)
2013-01-01
textabstractForecasts of key macroeconomic variables may lead to policy changes of governments, central banks and other economic agents. Policy changes in turn lead to structural changes in macroeconomic time series models. To describe this phenomenon we introduce a logistic smooth transition
Turnip, Betty; Wahyuni, Ida; Tanjung, Yul Ifda
2016-01-01
One of the factors that can support successful learning activity is the use of learning models suited to the objectives to be achieved. This study aimed to analyze the differences in the physics problem-solving ability of students taught with the Inquiry Training learning model based on Just In Time Teaching (JITT) and students taught with conventional learning using a cooperative model…
Targeting and timing promotional activities : An agent-based model for the takeoff of new products
Delre, S. A.; Jager, W.; Bijmolt, T. H. A.; Janssen, M. A.
Many marketing efforts focus on promotional activities that support the launch of new products. Promotional strategies may play a crucial role in the early stages of the product life cycle, and determine to a large extent the diffusion of a new product. This paper proposes an agent-based model to
A new costing model in hospital management: time-driven activity-based costing system.
Öker, Figen; Özyapıcı, Hasan
2013-01-01
Traditional cost systems cause cost distortions because they cannot meet the requirements of today's businesses. Therefore, a new and more effective cost system is needed. Consequently, the time-driven activity-based costing system has emerged. The unit cost of supplying capacity and the time needed to perform an activity are the only two factors considered by the system. Furthermore, this system determines unused capacity by considering practical capacity. The purpose of this article is to emphasize the efficiency of the time-driven activity-based costing system and to display how it can be applied in a health care institution. A case study was conducted in a private hospital in Cyprus. Interviews and direct observations were used to collect the data. The case study revealed that the cost of unused capacity is allocated to both open and laparoscopic (closed) surgeries. Thus, by using the time-driven activity-based costing system, managers should eliminate the cost of unused capacity so as to obtain better results. Based on the results of the study, hospital management is better able to understand the costs of different surgeries. In addition, managers can easily notice the cost of unused capacity and decide how many employees should be dismissed or redirected to other productive areas.
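The two-factor bookkeeping behind time-driven ABC (a capacity cost rate times the minutes each activity consumes, with the remainder priced as unused capacity) can be sketched directly. All figures below are invented for illustration, not the Cyprus case-study data:

```python
# Illustrative time-driven activity-based costing (TDABC) calculation.
# Capacity cost, practical minutes, and activity times are made-up numbers.

def tdabc(capacity_cost, practical_minutes, activities):
    """activities: dict of activity name -> minutes consumed.
    Returns (cost rate per minute, per-activity costs, unused-capacity cost)."""
    rate = capacity_cost / practical_minutes            # cost of one minute
    costs = {name: rate * minutes for name, minutes in activities.items()}
    used = sum(activities.values())
    unused_cost = rate * (practical_minutes - used)     # idle capacity, priced
    return rate, costs, unused_cost

rate, costs, unused = tdabc(
    capacity_cost=120000.0,          # e.g. quarterly cost of a surgical team
    practical_minutes=24000.0,       # practical (not theoretical) capacity
    activities={"open_surgery": 9000.0, "laparoscopic_surgery": 12000.0},
)
```

Making the unused-capacity cost an explicit line item, rather than smearing it across activities, is precisely what lets managers see how much idle capacity they are paying for.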
Real-Time Model Based Process Monitoring of Enzymatic Biodiesel Production
DEFF Research Database (Denmark)
Price, Jason Anthony; Nordblad, Mathias; Woodley, John
2015-01-01
In this contribution we extend our modelling work on the enzymatic production of biodiesel by demonstrating the application of a Continuous-Discrete Extended Kalman Filter (a state estimator). The state estimator is used to correct for mismatch between the process data and the process model. Given the infrequent and sometimes uncertain measurements obtained, we see the Continuous-Discrete Extended Kalman Filter as a viable tool for real-time process monitoring.
Extension of a Kolmogorov Atmospheric Turbulence Model for Time-Based Simulation Implementation
McMinn, John D.
1997-01-01
The development of any super/hypersonic aircraft requires the interaction of a wide variety of technical disciplines to maximize vehicle performance. For flight and engine control system design and development on this class of vehicle, realistic mathematical simulation models of atmospheric turbulence, including winds and the varying thermodynamic properties of the atmosphere, are needed. A model which has been tentatively selected by a government/industry group of flight and engine/inlet controls representatives working on the High Speed Civil Transport is one based on the Kolmogorov spectrum function. This report compares the Dryden and Kolmogorov turbulence forms, and describes enhancements that add functionality to the selected Kolmogorov model. These added features are: an altitude variation of the eddy dissipation rate based on Dryden data, the mapping of the eddy dissipation rate database onto a regular latitude and longitude grid, a method to account for flight at large vehicle attitude angles, and a procedure for transitioning smoothly across turbulence segments.
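The inertial-range shape of the Kolmogorov spectrum mentioned above can be written down compactly. The sketch below uses the generic one-dimensional form with the multiplicative constant left symbolic, and omits the report's altitude-dependent eddy dissipation rate:

```python
# Sketch of the Kolmogorov inertial-range spectrum shape,
# Phi(omega) = a * eps**(2/3) * omega**(-5/3).
# The constant `a` is set to 1 here and `eps` (eddy dissipation rate) is an
# illustrative value; the report's model varies eps with altitude.

def kolmogorov_psd(omega, eps, a=1.0):
    """Inertial-range power spectral density at spatial frequency omega."""
    return a * eps ** (2.0 / 3.0) * omega ** (-5.0 / 3.0)

# The -5/3 power law means doubling the spatial frequency scales the PSD
# by 2**(-5/3), regardless of the dissipation rate.
ratio = kolmogorov_psd(2.0, eps=1e-4) / kolmogorov_psd(1.0, eps=1e-4)
```

This fixed spectral slope is what distinguishes the Kolmogorov form from the Dryden form, whose spectrum rolls off with a different exponent set by its scale-length parameters.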
Rule-based approach to cognitive modeling of real-time decision making
International Nuclear Information System (INIS)
Thorndyke, P.W.
1982-01-01
Recent developments in the fields of cognitive science and artificial intelligence have made possible the creation of a new class of models of complex human behavior. These models, referred to as either expert or knowledge-based systems, describe the high-level cognitive processing undertaken by a skilled human to perform a complex, largely mental, task. Expert systems have been developed to provide simulations of skilled performance of a variety of tasks. These include problems of data interpretation, system monitoring and fault isolation, prediction, planning, diagnosis, and design. In general, such systems strive to produce prescriptive (error-free) behavior, rather than model descriptively the typical human's errorful behavior. However, some research has sought to develop descriptive models of human behavior using the same theoretical frameworks adopted by expert systems builders. This paper presents an overview of this theoretical framework and modeling approach, and indicates the applicability of such models to the development of a model of control room operators in a nuclear power plant. Such a model could serve several beneficial functions in plant design, licensing, and operation
A Time-Space Symmetry Based Cylindrical Model for Quantum Mechanical Interpretations
Vo Van, Thuan
2017-12-01
Following a bi-cylindrical model of geometrical dynamics, our study shows that a 6D-gravitational equation leads to geodesic description in an extended symmetrical time-space, which fits Hubble-like expansion on a microscopic scale. As a duality, the geodesic solution is mathematically equivalent to the basic Klein-Gordon-Fock equations of free massive elementary particles, in particular, the squared Dirac equations of leptons. The quantum indeterminism is proved to have originated from space-time curvatures. Interpretation of some important issues of quantum mechanical reality is carried out in comparison with the 5D space-time-matter theory. A solution of lepton mass hierarchy is proposed by extending to higher dimensional curvatures of time-like hyper-spherical surfaces than one of the cylindrical dynamical geometry. In a result, the reasonable charged lepton mass ratios have been calculated, which would be tested experimentally.
The Research of Car-Following Model Based on Real-Time Maximum Deceleration
Directory of Open Access Journals (Sweden)
Longhai Yang
2015-01-01
Full Text Available This paper is concerned with the effect of real-time maximum deceleration in car-following. The real-time maximum deceleration is estimated using vehicle dynamics. It is known that an intelligent driver model (IDM) can control adaptive cruise control (ACC) well. The disadvantages of the IDM at high and constant speed are analyzed. A new car-following model for ACC is accordingly established by modifying the desired minimum gap and the structure of the IDM. We simulated the new car-following model and the IDM under two different kinds of road conditions. In the first, the vehicles drive on a single road surface, taking a dry asphalt road as the example in this paper. In the second, vehicles drive onto a different road surface; this paper analyzes the situation in which vehicles drive from a dry asphalt road onto an icy road. From the simulation, we found that the new car-following model can not only ensure driving security and comfort but also control the steady driving of the vehicle with a smaller time headway than the IDM.
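The baseline IDM that the paper modifies computes acceleration from the speed, the approach rate to the leader, and the gap. The sketch below is the standard textbook IDM with typical parameter values, not the modified model of the paper:

```python
import math

# Standard intelligent driver model (IDM) acceleration. Parameter values
# are typical textbook numbers, not those of the cited modified model.
def idm_accel(v, dv, s, v0=30.0, T=1.5, a=1.0, b=1.5, s0=2.0, delta=4):
    """v: own speed [m/s], dv: approach rate (v - v_leader) [m/s], s: gap [m].
    Returns acceleration from the desired-minimum-gap term s_star."""
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * math.sqrt(a * b)))
    return a * (1.0 - (v / v0) ** delta - (s_star / s) ** 2)

free_road = idm_accel(v=0.0, dv=0.0, s=100.0)   # near maximum acceleration
closing   = idm_accel(v=30.0, dv=5.0, s=10.0)   # strong braking demanded
```

The modification discussed above targets exactly the desired-minimum-gap term `s_star` and the model structure, so that the commanded deceleration respects the real-time maximum deceleration available on the current road surface.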
Estimating Travel Time in Bank Filtration Systems from a Numerical Model Based on DTS Measurements.
des Tombe, Bas F; Bakker, Mark; Schaars, Frans; van der Made, Kees-Jan
2018-03-01
An approach is presented to determine the seasonal variations in travel time in a bank filtration system using a passive heat tracer test. The temperature in the aquifer varies seasonally because of temperature variations of the infiltrating surface water and at the soil surface. Temperature was measured with distributed temperature sensing along fiber optic cables that were inserted vertically into the aquifer with direct push equipment. The approach was applied to a bank filtration system consisting of a sequence of alternating, elongated recharge basins and rows of recovery wells. A SEAWAT model was developed to simulate coupled flow and heat transport. The model of a two-dimensional vertical cross section is able to simulate the temperature of the water at the well and the measured vertical temperature profiles reasonably well. MODPATH was used to compute flowpaths and the travel time distribution. At the study site, temporal variation of the pumping discharge was the dominant factor influencing the travel time distribution. For an equivalent system with a constant pumping rate, variations in the travel time distribution are caused by variations in the temperature-dependent viscosity. As a result, travel times increase in the winter, when a larger fraction of the water travels through the warmer, lower part of the aquifer, and decrease in the summer, when the upper part of the aquifer is warmer. © 2017 The Authors. Groundwater published by Wiley Periodicals, Inc. on behalf of National Ground Water Association.
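The seasonal travel-time effect described above follows from hydraulic conductivity scaling inversely with water viscosity. The sketch below uses a standard empirical (Vogel-type) viscosity fit for water, which is not taken from the cited paper, to show the direction and rough size of the effect:

```python
# Sketch of the temperature effect on bank-filtration travel time:
# conductivity K ~ 1/viscosity, so travel time t ~ viscosity.
# The viscosity fit is a common empirical formula for water, used here
# purely for illustration.

def water_viscosity(temp_c):
    """Dynamic viscosity of water [Pa s], empirical fit (roughly 0-100 C)."""
    temp_k = temp_c + 273.15
    return 2.414e-5 * 10.0 ** (247.8 / (temp_k - 140.0))

def relative_travel_time(temp_c, ref_temp_c=10.0):
    """Travel time relative to the reference temperature, all else equal."""
    return water_viscosity(temp_c) / water_viscosity(ref_temp_c)

winter = relative_travel_time(5.0)    # colder water, higher viscosity
summer = relative_travel_time(20.0)   # warmer water, lower viscosity
```

Colder water is more viscous, so winter travel times come out longer and summer travel times shorter, matching the seasonal pattern reported for the equivalent constant-pumping system.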
Time Series Model of Wind Speed for Multi Wind Turbines based on Mixed Copula
Directory of Open Access Journals (Sweden)
Nie Dan
2016-01-01
Full Text Available Because wind power is intermittent and random, large-scale grid integration directly affects the safe and stable operation of the power grid. In order to study quantitatively the wind speed characteristics of wind turbines, a wind speed time series model for multiple wind turbine generators is constructed using the mixed Copula-ARMA function in this paper, and a numerical example is also given. The results show that the model can effectively predict wind speed, ensure efficient operation of the wind turbines, and provide a theoretical basis for the stability of grid-connected wind power operation.
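The ARMA half of a Copula-ARMA construction can be sketched on its own. The toy simulator below generates one turbine's wind speed series around a mean; the mixed-Copula coupling between turbines, which is the paper's contribution, is omitted, and all parameters are invented:

```python
import random

# Toy ARMA(1,1) wind speed simulator for a single turbine (marginal series
# only; the mixed-Copula dependence between turbines is not modelled here).
def simulate_arma11(n, phi=0.8, theta=0.3, mean=8.0, sigma=1.0, seed=42):
    """x_t = phi*x_{t-1} + e_t + theta*e_{t-1}, returned as mean + x_t."""
    rng = random.Random(seed)
    x, e_prev, out = 0.0, 0.0, []
    for _ in range(n):
        e = rng.gauss(0.0, sigma)
        x = phi * x + e + theta * e_prev   # AR memory plus MA shock carryover
        e_prev = e
        out.append(mean + x)
    return out

speeds = simulate_arma11(500)   # autocorrelated series around 8 m/s
```

In the full construction, each turbine gets such a marginal model and a mixed copula ties the innovations together so that spatially close turbines see correlated gusts.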
Directory of Open Access Journals (Sweden)
Hok Pan Yuen
2016-10-01
Full Text Available Joint modelling has emerged as a potential tool for analysing data with a time-to-event outcome and longitudinal measurements collected over a series of time points. Joint modelling involves the simultaneous modelling of two components, namely the time-to-event component and the longitudinal component. The main challenges of joint modelling are its mathematical and computational complexity. Recent advances have seen the emergence of several software packages that implement some of the computational requirements for running joint models. These packages have opened the door to more routine use of joint modelling. Through simulations and real data based on transition-to-psychosis research, we compared joint model analysis of a time-to-event outcome with conventional Cox regression analysis. We also compared a number of packages for fitting joint models. Our results suggest that joint modelling does have advantages over conventional analysis despite its potential complexity. Our results also suggest that the results of analyses may depend on how the methodology is implemented.
Xu, Tao; Xiao, Na; Zhai, Xiaolong; Chan, Pak Kwan; Tin, Chung
2018-02-01
Objective. Damage to the brain, as a result of various medical conditions, impacts the everyday life of patients and there is still no complete cure to neurological disorders. Neuroprostheses that can functionally replace the damaged neural circuit have recently emerged as a possible solution to these problems. Here we describe the development of a real-time cerebellar neuroprosthetic system to substitute neural function in cerebellar circuitry for learning delay eyeblink conditioning (DEC). Approach. The system was empowered by a biologically realistic spiking neural network (SNN) model of the cerebellar neural circuit, which considers the neuronal population and anatomical connectivity of the network. The model simulated synaptic plasticity critical for learning DEC. This SNN model was carefully implemented on a field programmable gate array (FPGA) platform for real-time simulation. This hardware system was interfaced in in vivo experiments with anesthetized rats and it used neural spikes recorded online from the animal to learn and trigger conditioned eyeblink in the animal during training. Main results. This rat-FPGA hybrid system was able to process neuronal spikes in real-time with an embedded cerebellum model of ~10 000 neurons and reproduce learning of DEC with different inter-stimulus intervals. Our results validated that the system performance is physiologically relevant at both the neural (firing pattern) and behavioral (eyeblink pattern) levels. Significance. This integrated system provides the sufficient computation power for mimicking the cerebellar circuit in real-time. The system interacts with the biological system naturally at the spike level and can be generalized for including other neural components (neuron types and plasticity) and neural functions for potential neuroprosthetic applications.
Fuzzy model-based adaptive synchronization of time-delayed chaotic systems
International Nuclear Information System (INIS)
Vasegh, Nastaran; Majd, Vahid Johari
2009-01-01
In this paper, fuzzy model-based synchronization of a class of first order chaotic systems described by delayed-differential equations is addressed. To design the fuzzy controller, the chaotic system is modeled by Takagi-Sugeno fuzzy system considering the properties of the nonlinear part of the system. Assuming that the parameters of the chaotic system are unknown, an adaptive law is derived to estimate these unknown parameters, and the stability of error dynamics is guaranteed by Lyapunov theory. Numerical examples are given to demonstrate the validity of the proposed adaptive synchronization approach.
Model based analysis of the time scales associated to pump start-ups
Energy Technology Data Exchange (ETDEWEB)
Dazin, Antoine, E-mail: antoine.dazin@lille.ensam.fr [Arts et métiers ParisTech/LML Laboratory UMR CNRS 8107, 8 bld Louis XIV, 59046 Lille cedex (France); Caignaert, Guy [Arts et métiers ParisTech/LML Laboratory UMR CNRS 8107, 8 bld Louis XIV, 59046 Lille cedex (France); Dauphin-Tanguy, Geneviève, E-mail: genevieve.dauphin-tanguy@ec-lille.fr [Univ Lille Nord de France, Ecole Centrale de Lille/CRISTAL UMR CNRS 9189, BP 48, 59651, Villeneuve d’Ascq cedex F 59000 (France)
2015-11-15
Highlights: • A dynamic model of a hydraulic system has been built. • Three periods in a pump start-up have been identified. • The time scales of each period have been estimated. • The parameters affecting the rapidity of a pump start-up have been explored. - Abstract: The paper refers to a non dimensional analysis of the behaviour of a hydraulic system during pump fast start-ups. The system is composed of a radial flow pump and its suction and delivery pipes. It is modelled using the bond graph methodology. The prediction of the model is validated by comparison to experimental results. An analysis of the time evolution of the terms acting on the total pump pressure is proposed. It allows for a decomposition of the start-up into three consecutive periods. The time scales associated with these periods are estimated. The effects of parameters (angular acceleration, final rotation speed, pipe length and resistance) affecting the start-up rapidity are then explored.
Neylon, John; Hasse, Katelyn; Sheng, Ke; Santhanam, Anand P.
2016-03-01
Breast radiation therapy is typically delivered to the patient in either supine or prone position. Each of these positioning systems has its limitations in terms of tumor localization, dose to the surrounding normal structures, and patient comfort. We envision developing a pneumatically controlled breast immobilization device that will enable the benefits of both supine and prone positioning. In this paper, we present a physics-based breast deformable model that aids in both the design of the breast immobilization device as well as a control module for the device during every day positioning. The model geometry is generated from a subject's CT scan acquired during the treatment planning stage. A GPU based deformable model is then generated for the breast. A mass-spring-damper approach is then employed for the deformable model, with the spring modeled to represent a hyperelastic tissue behavior. Each voxel of the CT scan is then associated with a mass element, which gives the model its high resolution nature. The subject specific elasticity is then estimated from a CT scan in prone position. Our results show that the model can deform at >60 deformations per second, which satisfies the real-time requirement for robotic positioning. The model interacts with a computer designed immobilization device to position the breast and tumor anatomy in a reproducible location. The design of the immobilization device was also systematically varied based on the breast geometry, tumor location, elasticity distribution and the reproducibility of the desired tumor location.
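A single voxel element of the mass-spring-damper model described above can be sketched in a few lines. The example simplifies the paper's hyperelastic spring law to a linear spring and uses invented parameters, but shows the semi-implicit integration loop that makes per-voxel real-time simulation feasible:

```python
# Minimal mass-spring-damper voxel element, integrated with semi-implicit
# Euler. Parameters are illustrative; the cited model uses a hyperelastic
# spring law and GPU-parallel per-voxel elements.
def settle(x0, v0=0.0, m=1.0, k=50.0, c=5.0, rest=0.0, dt=1e-3, steps=20000):
    """Return displacement after integrating m*x'' = -k*(x - rest) - c*x'."""
    x, v = x0, v0
    for _ in range(steps):
        force = -k * (x - rest) - c * v
        v += dt * force / m        # update velocity first (semi-implicit)
        x += dt * v                # then position, using the new velocity
    return x

final = settle(x0=0.1)   # a perturbed voxel relaxes back toward rest
```

Semi-implicit (symplectic) Euler is the usual choice here because it stays stable at the large time steps needed to hit tens of whole-model deformations per second.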
Passenger Flow Forecasting Research for Airport Terminal Based on SARIMA Time Series Model
Li, Ziyu; Bi, Jun; Li, Zhiyin
2017-12-01
Based on practical operating data from Kunming Changshui International Airport during 2016, this paper proposes a Seasonal Autoregressive Integrated Moving Average (SARIMA) model to predict passenger flow. The model considers not only the non-stationarity and autocorrelation of the sequence but also its daily periodicity. The prediction results accurately describe the trend of airport passenger flow and provide scientific decision support for the optimal allocation of airport resources and optimization of the departure process. The results show that the model is applicable to short-term prediction of airport terminal departure passenger traffic, with an average error ranging from 1% to 3%. The difference between the predicted and true values of passenger traffic flow is quite small, which indicates that the model has fairly good passenger traffic flow prediction ability.
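The seasonal-differencing idea at the core of SARIMA can be sketched without a statistics package: remove the daily period by differencing at the seasonal lag, fit a simple AR(1) on the differenced series, and invert the differencing to forecast. This toy version (invented passenger profile, no MA terms or order selection) is only the skeleton of a full SARIMA fit:

```python
# Hedged sketch of the seasonal-differencing step behind a SARIMA model.
# A real SARIMA fit (MA terms, integration orders, likelihood estimation)
# belongs in a statistics package; this shows only the core mechanism.
def sarima_like_forecast(y, period):
    d = [y[t] - y[t - period] for t in range(period, len(y))]
    num = sum(d[t] * d[t - 1] for t in range(1, len(d)))
    den = sum(d[t - 1] ** 2 for t in range(1, len(d)))
    phi = num / den if den else 0.0       # AR(1) coefficient on differences
    d_next = phi * d[-1]                  # one-step forecast of the difference
    return y[len(y) - period] + d_next    # undo the seasonal differencing

# A perfectly seasonal series with a linear trend is forecast exactly.
pattern = [120, 300, 260, 180]            # toy within-"day" passenger profile
y = [pattern[t % 4] + 10 * t for t in range(24)]
next_val = sarima_like_forecast(y, period=4)
```

Seasonal differencing turns the periodic, trending series into a stationary one, which is exactly why the full model can handle the non-stationarity, autocorrelation, and daily periodicity mentioned above at once.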
Directory of Open Access Journals (Sweden)
P. Meier
2011-03-01
Full Text Available Reliable real-time forecasts of the discharge can provide valuable information for the management of a river basin system. For the management of ecological releases even discharge forecasts with moderate accuracy can be beneficial. Sequential data assimilation using the Ensemble Kalman Filter provides a tool that is both efficient and robust for a real-time modelling framework. One key parameter in a hydrological system is the soil moisture, which recently can be characterized by satellite based measurements. A forecasting framework for the prediction of discharges is developed and applied to three different sub-basins of the Zambezi River Basin. The model is solely based on remote sensing data providing soil moisture and rainfall estimates. The soil moisture product used is based on the back-scattering intensity of a radar signal measured by a radar scatterometer. These soil moisture data correlate well with the measured discharge of the corresponding watershed if the data are shifted by a time lag which is dependent on the size and the dominant runoff process in the catchment. This time lag is the basis for the applicability of the soil moisture data for hydrological forecasts. The conceptual model developed is based on two storage compartments. The processes modeled include evaporation losses, infiltration and percolation. The application of this model in a real-time modelling framework yields good results in watersheds where soil storage is an important factor. The lead time of the forecast is dependent on the size and the retention capacity of the watershed. For the largest watershed a forecast over 40 days can be provided. However, the quality of the forecast increases significantly with decreasing prediction time. In a watershed with little soil storage and a quick response to rainfall events, the performance is relatively poor and the lead time is as short as 10 days only.
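A two-storage conceptual model of the kind described (infiltration, evaporation, percolation, linear outflow) fits in a short routine. All rate coefficients below are invented for illustration; the operational model is calibrated against the satellite soil moisture and rainfall products:

```python
# Toy two-storage rainfall-runoff model in the spirit described above.
# Coefficients (et, k_perc, k_q, s_max) are illustrative placeholders.
def run_model(rain, et=0.5, k_perc=0.2, k_q=0.1, s_max=100.0):
    soil, ground, discharge = 0.0, 0.0, []
    for p in rain:
        infil = min(p, s_max - soil)       # infiltration limited by storage
        runoff = p - infil                 # saturation-excess overland flow
        soil += infil
        evap = min(et, soil)               # evaporation from the soil store
        soil -= evap
        perc = k_perc * soil               # percolation to groundwater
        soil -= perc
        ground += perc
        q = k_q * ground + runoff          # linear-reservoir discharge
        ground -= k_q * ground
        discharge.append(q)
    return discharge

hydrograph = run_model([10.0] * 5 + [0.0] * 20)   # rain pulse, then recession
```

The soil store is what gives the model its forecast lead time: a rain pulse drains through the two storages gradually, so discharge peaks after the rain and recedes slowly, exactly the behaviour that makes soil-moisture-rich catchments forecastable tens of days ahead.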
The research and practice based on the full-time visitation model in clinical medical education
Directory of Open Access Journals (Sweden)
Hong Zhang
2015-01-01
Full Text Available Most teaching hospitals affiliated with higher medical colleges and universities carry out clinical teaching tasks, but the traditional "two-stage" teaching pattern, in which theory is taught first and clinical practice is arranged later, has drawbacks: practice time is overly concentrated, and a gap opens between students' theory and their practice. It has been suggested that students should encounter clinical diagnosis and treatment earlier, visit more patients, and increase the ratio of visitation to coursework. However, as more and more students enter university, clinical visitation has become a bottleneck for improving students' ability. To address this problem, we carried out practical exploration in Rizhao City People's Hospital from September 2005 to July 2014. Students were randomly divided into a full-time visitation model group and a "two-stage" pattern group, and the two groups differed greatly on the single factors measured. The full-time visitation model builds a new mode of clinical practice teaching: the medical students' doctor-patient communication, humanistic care for patients, basic theoretical knowledge, and clinical practice skills, as well as the graduate admission rate, increased significantly. Continuous improvement of the OSCE examination is needed to make evaluation more scientific, objective and fair.
Directory of Open Access Journals (Sweden)
Fengxia Xu
2014-01-01
Full Text Available The U-model can approximate a large class of smooth nonlinear time-varying delay systems to any accuracy by using a time-varying delay parameter polynomial. This paper proposes a new approach, namely the U-model approach, to solving the problems of analysis and synthesis for nonlinear systems. Based on the idea of a discrete-time U-model with time-varying delay, an adaptive neural network identification algorithm is given for the nonlinear model. The controller is then designed using the Newton-Raphson formula, and a stability analysis is given for the closed-loop nonlinear systems. Finally, illustrative examples are given to show the validity and applicability of the obtained results.
Yizhao, Chen; Jianyang, Xia; Zhengguo, Sun; Jianlong, Li; Yiqi, Luo; Chengcheng, Gang; Zhaoqi, Wang
2015-11-06
As a key factor that determines carbon storage capacity, residence time (τE) is not well constrained in terrestrial biosphere models. This factor is recognized as an important source of model uncertainty. In this study, to understand how τE influences terrestrial carbon storage prediction in diagnostic models, we introduced a model decomposition scheme in the Boreal Ecosystem Productivity Simulator (BEPS) and then compared it with a prognostic model. The result showed that τE ranged from 32.7 to 158.2 years. The baseline residence time (τ'E) was stable for each biome, ranging from 12 to 53.7 years for forest biomes and 4.2 to 5.3 years for non-forest biomes. The spatiotemporal variations in τE were mainly determined by the environmental scalar (ξ). By comparing models, we found that the BEPS uses a more detailed pool construction but rougher parameterization for carbon allocation and decomposition. With respect to ξ comparison, the global difference in the temperature scalar (ξt) averaged 0.045, whereas the moisture scalar (ξw) had a much larger variation, with an average of 0.312. We propose that further evaluations and improvements in τ'E and ξw predictions are essential to reduce the uncertainties in predicting carbon storage by the BEPS and similar diagnostic models.
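The residence-time bookkeeping described above is simple arithmetic: the actual residence time is the biome's baseline residence time divided by the environmental scalar, itself the product of a temperature scalar and a moisture scalar. The numbers below are illustrative, not BEPS output:

```python
# Sketch of the residence-time decomposition tau_E = tau'_E / xi,
# with environmental scalar xi = xi_t * xi_w. Values are invented.
def residence_time(tau_baseline, xi_t, xi_w):
    """Actual residence time from baseline residence time and the
    temperature (xi_t) and moisture (xi_w) scalars, each in (0, 1]."""
    xi = xi_t * xi_w                 # combined environmental scalar
    return tau_baseline / xi

tau_wet = residence_time(tau_baseline=30.0, xi_t=0.6, xi_w=0.5)   # forest-like
tau_dry = residence_time(tau_baseline=30.0, xi_t=0.6, xi_w=0.25)  # drier site
```

Because the moisture scalar varies far more than the temperature scalar (0.312 versus 0.045 on average in the comparison above), it dominates the spatiotemporal spread of the actual residence time, as the final sentence argues.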
Directory of Open Access Journals (Sweden)
Lukas Falat
2014-01-01
Full Text Available In this paper, the authors apply a feed-forward artificial neural network (ANN) of RBF type to modelling and forecasting the future values of the USD/CAD time series. The authors test a customized version of the RBF network and add an evolutionary approach to it. They also combine the standard weight-adaptation algorithm for neural networks with the unsupervised clustering algorithm K-means. Finally, the authors propose a new hybrid model that combines a standard ANN with a moving average for error modelling, used to enhance the outputs of the network using the error part of the original RBF. Using high-frequency data, they examine the ability to forecast exchange rate values one day ahead. To determine forecasting efficiency, the authors perform a comparative out-of-sample analysis of the suggested hybrid model against statistical models and a standard neural network.
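The hybrid described above can be sketched numerically. The code below is a minimal illustration under assumed settings (synthetic series, K-means centres, linear output weights, centred moving average of residuals), not the authors' implementation:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain K-means; the resulting cluster centres serve as RBF centres."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centres[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return centres

def rbf_features(X, centres, gamma=1.0):
    """Gaussian RBF activations of each sample w.r.t. each centre."""
    d2 = ((X[:, None, :] - centres[None]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_rbf(X, y, k=8, gamma=5.0):
    """Centres via K-means, output weights via linear least squares."""
    centres = kmeans(X, k)
    Phi = rbf_features(X, centres, gamma)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return centres, w

# synthetic exchange-rate-like series (random walk), one-step-ahead setup
rng = np.random.default_rng(1)
series = np.cumsum(rng.normal(0.0, 0.01, 300)) + 1.3
lag = 3
X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
y = series[lag:]

centres, w = fit_rbf(X, y)
pred = rbf_features(X, centres, 5.0) @ w
resid = y - pred
ma = np.convolve(resid, np.ones(5) / 5, mode="same")  # moving-average error model
hybrid = pred + ma                                    # network output + error part
```

The moving-average correction plays the role the abstract assigns to the error-modelling component: it feeds a smoothed version of the network's own error back into the forecast.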
Fletcher, Stephen; Kirkpatrick, Iain; Dring, Roderick; Puttock, Robert; Thring, Rob; Howroyd, Simon
2017-03-01
Supercapacitors are an emerging technology with applications in pulse power, motive power, and energy storage. However, their carbon electrodes show a variety of non-ideal behaviours that have so far eluded explanation. These include Voltage Decay after charging, Voltage Rebound after discharging, and Dispersed Kinetics at long times. In the present work, we establish that a vertical ladder network of RC components can reproduce all these puzzling phenomena. Both software and hardware realizations of the network are described. In general, porous carbon electrodes contain random distributions of resistance R and capacitance C, with a wider spread of log R values than log C values. To understand what this implies, a simplified model is developed in which log R is treated as a Gaussian random variable while log C is treated as a constant. From this model, a new family of equivalent circuits is developed in which the continuous distribution of log R values is replaced by a discrete set of log R values drawn from a geometric series. We call these Pascal Equivalent Circuits. Their behaviour is shown to resemble closely that of real supercapacitors. The results confirm that distributions of RC time constants dominate the behaviour of real supercapacitors.
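The dispersed kinetics attributed to distributed RC time constants can be illustrated with a short numerical sketch. All component values below are hypothetical; the geometric series of resistances loosely mirrors the paper's discrete log R values, but this is not the authors' circuit:

```python
import numpy as np

# Hypothetical values, not taken from the paper.
C = 1.0                            # identical capacitance in every branch (F)
R = 10.0 * 3.0 ** np.arange(6)     # branch resistances from a geometric series (ohm)
V = 1.0                            # step voltage applied across the parallel branches (V)

t = np.logspace(-1, 4, 200)        # log-spaced times spanning five decades (s)

# A series-RC branch under a voltage step draws i(t) = (V/R) exp(-t/(R*C));
# the terminal current is the sum over all branches.
i_total = sum((V / r) * np.exp(-t / (r * C)) for r in R)
```

A single RC branch relaxes within a few time constants, whereas the summed current keeps decaying over many decades, which is the kind of long-time dispersed behaviour the abstract describes.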
Directory of Open Access Journals (Sweden)
Chih-Chieh Young
2015-01-01
Full Text Available Accurate prediction of water level fluctuation is important in lake management due to its significant impacts on various aspects of lake function. This study utilizes four model approaches to predict water levels in the Yuan-Yang Lake (YYL) in Taiwan: a three-dimensional hydrodynamic model, an artificial neural network (ANN) model (back-propagation neural network, BPNN), a time series forecasting model (autoregressive moving average with exogenous inputs, ARMAX), and a combined hydrodynamic and ANN model. In particular, the black-box ANN model and the physically based hydrodynamic model are coupled to more accurately predict water level fluctuation. Hourly water level data (a total of 7296 observations) were collected for model calibration (training) and validation. Three statistical indicators (mean absolute error, root mean square error, and coefficient of correlation) were adopted to evaluate model performance. Overall, the results demonstrate that the hydrodynamic model can satisfactorily predict hourly water level changes during the calibration stage but not during the validation stage. The ANN and ARMAX models predict the water level better than the hydrodynamic model does. Meanwhile, the results from the ANN model are superior to those of the ARMAX model in both training and validation phases. The proposed concept of using a three-dimensional hydrodynamic model in conjunction with an ANN model clearly improves prediction accuracy for water level fluctuation.
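The three statistical indicators used above are standard; a minimal sketch with hypothetical water level values:

```python
import numpy as np

def mae(obs, sim):
    """Mean absolute error."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.mean(np.abs(obs - sim)))

def rmse(obs, sim):
    """Root mean square error."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

def corr(obs, sim):
    """Pearson coefficient of correlation."""
    return float(np.corrcoef(obs, sim)[0, 1])

# hypothetical hourly water levels (m): observed vs. simulated
obs = [1.0, 2.0, 3.0, 4.0]
sim = [1.1, 1.9, 3.2, 3.8]
```

For this toy pair, mae is 0.15 and the correlation is above 0.98, which would count as a good fit by any of the three criteria.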
Avendaño-Valencia, Luis David; Fassois, Spilios D.
2017-12-01
The problem of vibration-based damage diagnosis in structures characterized by time-dependent dynamics under significant environmental and/or operational uncertainty is considered. A stochastic framework consisting of a Gaussian Mixture Random Coefficient model of the uncertain time-dependent dynamics under each structural health state, proper estimation methods, and Bayesian or minimum distance type decision making, is postulated. The Random Coefficient (RC) time-dependent stochastic model with coefficients following a multivariate Gaussian Mixture Model (GMM) allows for significant flexibility in uncertainty representation. Certain of the model parameters are estimated via a simple procedure which is founded on the related Multiple Model (MM) concept, while the GMM weights are explicitly estimated for optimizing damage diagnostic performance. The postulated framework is demonstrated via damage detection in a simple simulated model of a quarter-car active suspension with time-dependent dynamics and considerable uncertainty on the payload. Comparisons with a simpler Gaussian RC model based method are also presented, with the postulated framework shown to be capable of offering considerable improvement in diagnostic performance.
Smolders, K.; Volckaert, M.; Swevers, J.
2008-11-01
This paper presents a nonlinear model-based iterative learning control procedure to achieve accurate tracking control for nonlinear lumped mechanical continuous-time systems. The model structure used in this iterative learning control procedure is new and combines a linear state space model and a nonlinear feature space transformation. An intuitive two-step iterative algorithm to identify the model parameters is presented. It alternates between the estimation of the linear and the nonlinear model part. It is assumed that, in addition to the input and output signals, the full state vector of the system is available for identification. A measurement and signal processing procedure to estimate these signals for lumped mechanical systems is presented. The iterative learning control procedure relies on the calculation of the input that generates a given model output, so-called offline model inversion. A new offline nonlinear model inversion method for continuous-time, nonlinear time-invariant, state space models based on Newton's method is presented and applied to the new model structure. This model inversion method is not restricted to minimum phase models. It requires only calculation of the first order derivatives of the state space model and is applicable to multivariable models. For periodic reference signals the method yields a compact implementation in the frequency domain. Moreover, it is shown that a bandwidth can be specified up to which learning is allowed when using this inversion method in the iterative learning control procedure. Experimental results for a nonlinear single-input-single-output system corresponding to a quarter car on a hydraulic test rig are presented. It is shown that the new nonlinear approach outperforms the linear iterative learning control approach which is currently used in the automotive industry on durability test rigs.
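The role Newton's method plays in offline model inversion, computing the input that generates a given model output, can be sketched for a static scalar nonlinearity. This stand-in is far simpler than the paper's state space models; the map f and its derivative are assumptions for illustration only:

```python
def newton_invert(f, df, y_ref, u0=0.0, iters=20):
    """Newton iteration solving f(u) = y_ref for the input u.
    f is the model output map, df its first derivative w.r.t. the input,
    matching the paper's requirement of first-order derivatives only."""
    u = u0
    for _ in range(iters):
        u -= (f(u) - y_ref) / df(u)
    return u

# toy static nonlinearity standing in for the model output map
f = lambda u: u + 0.1 * u ** 3
df = lambda u: 1.0 + 0.3 * u ** 2
u = newton_invert(f, df, y_ref=2.0)   # input that reproduces the reference output
```

In the paper this idea is applied pointwise to a full nonlinear state space model; the sketch keeps only the core Newton update.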
An accurate real-time model of maglev planar motor based on compound Simpson numerical integration
Kou, Baoquan; Xing, Feng; Zhang, Lu; Zhou, Yiheng; Liu, Jiaqi
2017-05-01
To realize high-speed and precise control of the maglev planar motor, a more accurate real-time electromagnetic model, which considers the influence of the coil corners, is proposed in this paper. Three coordinate systems are established for the stator, the mover and the corner coil. To obtain a complete electromagnetic model, the coil is divided into two segments: the straight coil segment and the corner coil segment. When only the first harmonic of the flux density distribution of the Halbach magnet array is taken into account, the integration over the two segments can be carried out according to the Lorentz force law. The force and torque formulas for the straight coil segment can be derived directly from the Newton-Leibniz formula; however, this is not applicable to the corner coil segment. Therefore, the compound Simpson numerical integration method is proposed in this paper to handle the corner segment. Simulation and experiment validate that the proposed model has high accuracy and can easily be applied in practice.
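The compound (composite) Simpson rule used for the corner segment is a standard quadrature; a minimal sketch, with an arbitrary test integrand rather than the motor's force integrand:

```python
import numpy as np

def composite_simpson(f, a, b, n=100):
    """Compound Simpson's rule on [a, b] with n subintervals."""
    if n % 2:              # Simpson's rule needs an even number of panels
        n += 1
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h / 3.0 * (y[0] + y[-1] + 4.0 * y[1:-1:2].sum() + 2.0 * y[2:-1:2].sum())
```

For smooth integrands the error shrinks as h^4, which is why a modest number of panels already gives real-time-compatible accuracy; for example, integrating sin over [0, pi] with 100 panels reproduces the exact value 2 to about eight digits.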
Durbin, J.; Koopman, S.J.M.
1998-01-01
The analysis of non-Gaussian time series using state space models is considered from both classical and Bayesian perspectives. The treatment in both cases is based on simulation using importance sampling and antithetic variables; Monte Carlo Markov chain methods are not employed. Non-Gaussian
Real-time inversions for finite fault slip models and rupture geometry based on high-rate GPS data
Minson, Sarah E.; Murray, Jessica R.; Langbein, John O.; Gomberg, Joan S.
2015-01-01
We present an inversion strategy capable of using real-time high-rate GPS data to simultaneously solve for a distributed slip model and fault geometry in real time as a rupture unfolds. We employ Bayesian inference to find the optimal fault geometry and the distribution of possible slip models for that geometry using a simple analytical solution. By adopting an analytical Bayesian approach, we can solve this complex inversion problem (including calculating the uncertainties on our results) in real time. Furthermore, since the joint inversion for distributed slip and fault geometry can be computed in real time, the time required to obtain a source model of the earthquake does not depend on the computational cost. Instead, the time required is controlled by the duration of the rupture and the time required for information to propagate from the source to the receivers. We apply our modeling approach, called Bayesian Evidence-based Fault Orientation and Real-time Earthquake Slip, to the 2011 Tohoku-oki earthquake, 2003 Tokachi-oki earthquake, and a simulated Hayward fault earthquake. In all three cases, the inversion recovers the magnitude, spatial distribution of slip, and fault geometry in real time. Since our inversion relies on static offsets estimated from real-time high-rate GPS data, we also present performance tests of various approaches to estimating quasi-static offsets in real time. We find that the raw high-rate time series are the best data to use for determining the moment magnitude of the event, but slightly smoothing the raw time series helps stabilize the inversion for fault geometry.
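The closed-form Gaussian posterior that makes such a real-time Bayesian inversion feasible can be sketched for a generic linear slip problem. The matrix G, noise level and prior below are stand-ins, not a real fault geometry or the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 20 GPS offsets, 5 slip parameters; G stands in for
# the elastic Green's-function matrix of an assumed fault geometry.
G = rng.normal(size=(20, 5))
m_true = rng.normal(size=5)
sigma_d, sigma_m = 0.05, 1.0                   # data noise std, prior std
d = G @ m_true + rng.normal(0.0, sigma_d, 20)  # noisy static offsets

# Gaussian prior + Gaussian likelihood -> closed-form Gaussian posterior
post_cov = np.linalg.inv(G.T @ G / sigma_d**2 + np.eye(5) / sigma_m**2)
post_mean = post_cov @ (G.T @ d) / sigma_d**2
```

Because the posterior mean and covariance come from a single linear-algebra step, with no sampling, uncertainties arrive essentially for free, which is the property the abstract exploits for real-time use.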
Hitchcock, Andrew; Hunter, C Neil; Sener, Melih
2017-04-20
Cell doubling times of the purple bacterium Rhodobacter sphaeroides during photosynthetic growth are determined experimentally and computationally as a function of illumination. For this purpose, energy conversion processes in an intracytoplasmic membrane vesicle, the chromatophore, are described based on an atomic detail structural model. The cell doubling time and its illumination dependence are computed in terms of the return-on-investment (ROI) time of the chromatophore, determined computationally from the ATP production rate, and the mass ratio of chromatophores in the cell, determined experimentally from whole cell absorbance spectra. The ROI time is defined as the time it takes to produce enough ATP to pay for the construction of another chromatophore. The ROI time of the low-light-growth chromatophore is 4.5-2.6 h for a typical illumination range of 10-100 μmol photons m−2 s−1, respectively, with corresponding cell doubling times of 8.2-3.9 h. When energy expenditure is considered as a currency, the benefit-to-cost ratio computed for the chromatophore as an energy harvesting device is 2-8 times greater than for photovoltaic and fossil fuel-based energy solutions, and the corresponding ROI times are approximately 3-4 orders of magnitude shorter for the chromatophore than for synthetic systems.
Time-Frequency Analysis Using Warped-Based High-Order Phase Modeling
Directory of Open Access Journals (Sweden)
Ioana Cornel
2005-01-01
Full Text Available The high-order ambiguity function (HAF) was introduced for the estimation of polynomial-phase signals (PPS) embedded in noise. Since the HAF is a nonlinear operator, it suffers from noise-masking effects and from the appearance of undesired cross-terms when multicomponent PPS are analyzed. In order to improve the performance of the HAF, the multi-lag HAF concept was proposed. Based on this approach, several advanced methods (e.g., the product high-order ambiguity function, PHAF) have recently been proposed. Nevertheless, the performance of these new methods is affected by the error-propagation effect, which drastically limits the order of the polynomial approximation. This phenomenon is especially pronounced when high-order polynomial modeling is needed, as in the representation of digital modulation signals or acoustic transient signals. The effect is caused by the technique used for polynomial order reduction, common to existing approaches: multiplication of the signal by the complex conjugated exponentials formed with the estimated coefficients. In this paper, we introduce an alternative method to reduce the polynomial order, based on successive unitary signal transformations, one for each polynomial order. We prove that this method considerably reduces the effect of error propagation: with this order-reduction method, the estimation error at a given order depends only on the performance of the estimation method at that order.
International Nuclear Information System (INIS)
Chetoui, Manel; Malti, Rachid; Thomassin, Magalie; Aoun, Mohamed; Najar, Slah; Abdelkrim, Naceur
2011-01-01
This paper deals with continuous-time system identification using fractional models in a noisy input/output context. The third-order cumulants based least squares method (tocls) is extended here to fractional models. The derivatives of the third-order cumulants are computed using a new fractional state variable filter. A numerical example is used to demonstrate the performance of the proposed method called ftocls (fractional third-order cumulants based least squares). The effect of the signal-to-noise ratio and the hyperparameter is studied.
Ward-Garrison, Christian; Markstrom, Steven L.; Hay, Lauren E.
2009-01-01
The U.S. Geological Survey Downsizer is a computer application that selects, downloads, verifies, and formats station-based time-series data for environmental-resource models, particularly the Precipitation-Runoff Modeling System. Downsizer implements the client-server software architecture. The client presents a map-based, graphical user interface that is intuitive to modelers; the server provides streamflow and climate time-series data from over 40,000 measurement stations across the United States. This report is the Downsizer user's manual and provides (1) an overview of the software design, (2) installation instructions, (3) a description of the graphical user interface, (4) a description of selected output files, and (5) troubleshooting information.
A generic probability based model to derive regional patterns of crops in time and space
Wattenbach, Martin; Luedtke, Stefan; Redweik, Richard; van Oijen, Marcel; Balkovic, Juraj; Reinds, Gert Jan
2015-04-01
Croplands are not only the key to human food supply; they also change the biophysical and biogeochemical properties of the land surface, leading to changes in the water cycle and energy partitioning, influence soil erosion, and contribute substantially to the amount of greenhouse gases entering the atmosphere. The effects of croplands on the environment depend on the type of crop and the associated management, which are both related to site conditions, economic boundary settings, and the preferences of individual farmers. The method described here is designed to predict the most probable crop to appear at a given location and time. The method uses statistical crop area information at NUTS2 level from EUROSTAT and the Common Agricultural Policy Regionalized Impact Model (CAPRI) as observations. These crops are then spatially disaggregated to the 1 x 1 km grid scale within the region, using the assumption that the probability of a crop appearing at a given location and in a given year depends on (a) the suitability of the land for the cultivation of the crop, derived from the MARS Crop Yield Forecast System (MCYFS), and (b) expert knowledge of agricultural practices. The latter includes knowledge concerning the feasibility of one crop following another (e.g. a late-maturing crop might leave too little time for the establishment of a winter cereal crop) and the need to combat weed infestations or crop diseases. The model is implemented in R and PostGIS. The quality of the generated crop sequences per grid cell is evaluated against the statistics reported by the joint EU/CAPRI database. The assessment is made at NUTS2 level using per cent bias as a measure, with a threshold of 15% as minimum quality. The results clearly indicate that crops with a large relative share within the administrative unit are not as error-prone as crops that occupy only minor parts of the unit. However, roughly 40% still show an absolute per cent bias above the 15% threshold. This
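The per cent bias measure used for the quality assessment is a one-line formula; a minimal sketch with hypothetical crop areas (the model itself is implemented in R and PostGIS, so this Python fragment is illustrative only):

```python
def percent_bias(observed, simulated):
    """Per cent bias: 100 * sum(sim - obs) / sum(obs)."""
    return 100.0 * sum(s - o for o, s in zip(observed, simulated)) / sum(observed)

# hypothetical crop areas (kha) in one region: reported vs. disaggregated
reported = [10.0, 20.0, 30.0]
modelled = [11.0, 19.0, 33.0]
bias = percent_bias(reported, modelled)   # -> 5.0
passes = abs(bias) < 15.0                 # the 15% minimum-quality threshold
```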
Directory of Open Access Journals (Sweden)
Shuo Wang
Full Text Available Random effects in cellular systems are an important topic in systems biology and are often simulated with Gillespie's stochastic simulation algorithm (SSA). Abridgment refers to model reduction that approximates a group of reactions by a smaller group with fewer species and reactions. This paper presents a theoretical analysis, based on comparison of the first exit time, of the abridgment of a linear chain reaction model motivated by systems with multiple phosphorylation sites. The analysis shows that if the relaxation time of the fast subsystem is much smaller than the mean firing time of the slow reactions, the abridgment can be applied with little error. This analysis is further verified with numerical experiments on models of the bistable switch and of oscillations in which a linear chain system plays a critical role.
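For a single molecule moving along a linear chain, Gillespie's SSA reduces to drawing one exponential waiting time per reaction step, which makes the first exit time easy to estimate numerically. A minimal sketch with hypothetical rate constants:

```python
import random

def chain_exit_time(rates, seed=None):
    """Gillespie SSA for one molecule on a linear chain S1 -> S2 -> ... -> exit.
    With a single molecule only one reaction is possible in each state, so
    the SSA step is a single exponential waiting-time draw."""
    rng = random.Random(seed)
    t, state = 0.0, 0
    while state < len(rates):
        t += rng.expovariate(rates[state])   # waiting time for the next firing
        state += 1
    return t

# The mean first exit time should approach sum(1/k) over the chain rates:
rates = [2.0, 1.0, 4.0]                      # hypothetical rate constants
runs = 2000
mean_t = sum(chain_exit_time(rates, seed=s) for s in range(runs)) / runs
# analytic mean: 1/2 + 1/1 + 1/4 = 1.75
```

Comparing such simulated exit times between the full chain and its abridged version is exactly the kind of check the abstract's analysis formalizes.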
A Real-Time Model-Based Human Motion Tracking and Analysis for Human-Computer Interface Systems
Directory of Open Access Journals (Sweden)
Chung-Lin Huang
2004-09-01
Full Text Available This paper introduces a real-time model-based human motion tracking and analysis method for human-computer interface (HCI) systems. This method tracks and analyzes human motion from two orthogonal views without using any markers. The motion parameters are estimated by pattern matching between the extracted human silhouette and the human model. First, the human silhouette is extracted, and then the body definition parameters (BDPs) can be obtained. Second, the body animation parameters (BAPs) are estimated by a hierarchical tritree overlapping searching algorithm. To verify the performance of our method, we demonstrate different human posture sequences and use a hidden Markov model (HMM) for posture recognition testing.
Ranger, Jochen; Kuhn, Jörg-Tobias; Szardenings, Carsten
2016-05-01
Psychological tests are usually analysed with item response models. Recently, some alternative measurement models have been proposed that were derived from cognitive process models developed in experimental psychology. These models consider the responses but also the response times of the test takers. Two such models are the Q-diffusion model and the D-diffusion model. Both models can be calibrated with the diffIRT package of the R statistical environment via marginal maximum likelihood (MML) estimation. In this manuscript, an alternative approach to model calibration is proposed. The approach is based on weighted least squares estimation and parallels the standard estimation approach in structural equation modelling. Estimates are determined by minimizing the discrepancy between the observed and the implied covariance matrix. The estimator is simple to implement, consistent, and asymptotically normally distributed. Least squares estimation also provides a test of model fit by comparing the observed and implied covariance matrix. The estimator and the test of model fit are evaluated in a simulation study. Although parameter recovery is good, the estimator is less efficient than the MML estimator. © 2016 The British Psychological Society.
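The idea of estimating parameters by minimizing the discrepancy between the observed and implied covariance matrices can be sketched with a deliberately simple implied structure. Here the implied covariance is compound-symmetric, which is linear in its parameters and therefore solvable by least squares in one step; the diffusion-model covariance structure in the paper is more involved, so this is an illustration of the estimation principle only:

```python
import numpy as np

rng = np.random.default_rng(0)

# simulated 4-variable data with one shared factor (hypothetical test scores)
data = rng.normal(size=(500, 4)) + 0.8 * rng.normal(size=(500, 1))
S = np.cov(data, rowvar=False)          # observed covariance matrix

# implied covariance: Sigma(c, v) = c * J + v * I  (compound symmetry)
J = np.ones((4, 4))
I = np.eye(4)
A = np.column_stack([J.ravel(), I.ravel()])

# minimize ||S - Sigma(c, v)||_F^2 over (c, v): a linear least-squares problem
theta, *_ = np.linalg.lstsq(A, S.ravel(), rcond=None)
c_hat, v_hat = theta
implied = c_hat * J + v_hat * I
```

The residual discrepancy between S and the implied matrix is what the abstract's test of model fit evaluates: a well-fitting structure leaves only sampling noise behind.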
International Nuclear Information System (INIS)
Vajna, Szabolcs; Kertész, János; Tóth, Bálint
2013-01-01
Many human-related activities show power-law decaying interevent time distribution with exponents usually varying between 1 and 2. We study a simple task-queuing model, which produces bursty time series due to the non-trivial dynamics of the task list. The model is characterized by a priority distribution as an input parameter, which describes the choice procedure from the list. We give exact results on the asymptotic behaviour of the model and we show that the interevent time distribution is power-law decaying for any kind of input distributions that remain normalizable in the infinite list limit, with exponents tunable between 1 and 2. The model satisfies a scaling law between the exponents of interevent time distribution (β) and autocorrelation function (α): α + β = 2. This law is general for renewal processes with power-law decaying interevent time distribution. We conclude that slowly decaying autocorrelation function indicates long-range dependence only if the scaling law is violated. (paper)
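A minimal simulation of such a task-queuing model illustrates the bursty waiting-time statistics; the uniform priority distribution and list length below are assumed for illustration, not taken from the paper:

```python
import random

def waiting_times(list_len=10, steps=20000, seed=0):
    """Task-list model: every step the highest-priority task is executed and
    replaced by a fresh task with a uniform random priority. Returns how many
    steps each executed task had spent on the list."""
    rng = random.Random(seed)
    tasks = [(rng.random(), 0) for _ in range(list_len)]   # (priority, arrival step)
    waits = []
    for step in range(steps):
        i = max(range(list_len), key=lambda j: tasks[j][0])
        waits.append(step - tasks[i][1])
        tasks[i] = (rng.random(), step)                    # replace executed task
    return waits

waits = waiting_times()
```

Most tasks execute almost immediately while a few low-priority tasks wait very long, producing the heavy-tailed interevent statistics the abstract analyzes exactly.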
Travel time reliability modeling.
2011-07-01
This report includes three papers as follows: 1. Guo F., Rakha H., and Park S. (2010), "A Multi-state Travel Time Reliability Model," Transportation Research Record: Journal of the Transportation Research Board, n 2188, pp. 46-54. 2. Park S.,...
Ship-Track Models Based on Poisson-Distributed Port-Departure Times
National Research Council Canada - National Science Library
Heitmeyer, Richard
2006-01-01
... of those ships, and their nominal speeds. The probability law assumes that the ship departure times are Poisson-distributed with a time-varying departure rate and that the ship speeds and ship routes are statistically independent...
Simulating Serious Games: A Discrete-Time Computational Model Based on Cognitive Flow Theory
Westera, Wim
2018-01-01
This paper presents a computational model for simulating how people learn from serious games. While avoiding the combinatorial explosion of a game's micro-states, the model offers a meso-level pathfinding approach, which is guided by cognitive flow theory and various concepts from learning sciences. It extends a basic, existing model by exposing…
Wu, Huan; Adler, Robert F.; Tian, Yudong; Huffman, George J.; Li, Hongyi; Wang, JianJian
2014-01-01
A widely used land surface model, the Variable Infiltration Capacity (VIC) model, is coupled with a newly developed hierarchical dominant river tracing-based runoff-routing model to form the Dominant river tracing-Routing Integrated with VIC Environment (DRIVE) model, which serves as the new core of the real-time Global Flood Monitoring System (GFMS). The GFMS uses real-time satellite-based precipitation to derive flood monitoring parameters for the latitude band 50 deg. N - 50 deg. S at relatively high spatial (approximately 12 km) and temporal (3 hourly) resolution. Examples of model results for recent flood events are computed using the real-time GFMS (http://flood.umd.edu). To evaluate the accuracy of the new GFMS, the DRIVE model is run retrospectively for 15 years using both research-quality and real-time satellite precipitation products. Evaluation results are slightly better for the research-quality input and significantly better for longer duration events (3 day events versus 1 day events). Basins with fewer dams tend to provide lower false alarm ratios. For events longer than three days in areas with few dams, the probability of detection is approximately 0.9 and the false alarm ratio is approximately 0.6. In general, these statistical results are better than those of the previous system. Streamflow was evaluated at 1121 river gauges across the quasi-global domain. Validation using real-time precipitation across the tropics (30 deg. S - 30 deg. N) gives positive daily Nash-Sutcliffe Coefficients for 107 out of 375 (28%) stations with a mean of 0.19 and 51% of the same gauges at monthly scale with a mean of 0.33. There were poorer results in higher latitudes, probably due to larger errors in the satellite precipitation input.
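The daily and monthly Nash-Sutcliffe coefficients quoted above follow a standard formula; a minimal sketch with hypothetical streamflow values:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; values <= 0 mean the
    model is no better than predicting the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2))

# hypothetical daily streamflow (m3/s): gauge observations vs. model
obs = [3.0, 5.0, 9.0, 4.0, 6.0]
sim = [3.5, 4.5, 8.0, 4.0, 6.5]
```

By this measure, the mean coefficients of 0.19 (daily) and 0.33 (monthly) reported above indicate modest but positive skill relative to a climatological-mean prediction.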
Guarnaccia, Claudio; Quartieri, Joseph; Tepedino, Carmine
2017-06-01
The dangerous effect of noise on human health is well known. Both the auditory and non-auditory effects are largely documented in the literature and represent an important hazard in human activities. Particular care is devoted to road traffic noise, which grows with the growth of residential, industrial and commercial areas. For these reasons, it is important to develop effective models able to predict the noise in a certain area. In this paper, a hybrid predictive model is presented. The model is based on mixing two different approaches: Time Series Analysis (TSA) and Artificial Neural Networks (ANN). The TSA model is based on the evaluation of trend and seasonality in the data, while the ANN model is based on the capacity of the network to "learn" the behavior of the data. The mixed approach consists of estimating noise levels by means of TSA and, once the differences (residuals) between the TSA estimates and the observed data have been calculated, training an ANN on the residuals. This hybrid model exhibits interesting features, with performance varying significantly with the number of steps forward in the prediction. It will be shown that the best predictions are achieved one step ahead in the future. A 7-day prediction can still be performed with a slightly larger error, offering a longer prediction horizon than the single-day-ahead model.
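The two-step hybrid (a TSA fit, then a second model trained on the residuals) can be sketched as follows. For brevity the residual model here is a one-lag autoregression standing in for the paper's ANN, and the noise series is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
t = np.arange(n)
# synthetic noise-level-like series: trend + weekly cycle + noise (dB)
y = 60.0 + 0.01 * t + 3.0 * np.sin(2 * np.pi * t / 7) + rng.normal(0.0, 0.5, n)

# Step 1 (TSA part): least-squares fit of a linear trend plus weekly seasonality
X = np.column_stack([np.ones(n), t,
                     np.sin(2 * np.pi * t / 7), np.cos(2 * np.pi * t / 7)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
tsa_fit = X @ beta
resid = y - tsa_fit

# Step 2 (residual part): the paper trains an ANN on the residuals; as a
# minimal stand-in we fit a one-lag autoregression by least squares.
phi = resid[:-1] @ resid[1:] / (resid[:-1] @ resid[:-1])
hybrid = tsa_fit[1:] + phi * resid[:-1]     # one-step-ahead hybrid estimate
```

Because the residual coefficient is fitted by least squares, the in-sample error of the hybrid can never exceed that of the TSA fit alone, which mirrors the paper's finding that the one-step-ahead hybrid gives the best results.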
Carreiro, André V; Amaral, Pedro M T; Pinto, Susana; Tomás, Pedro; de Carvalho, Mamede; Madeira, Sara C
2015-12-01
Amyotrophic Lateral Sclerosis (ALS) is a devastating disease and the most common neurodegenerative disorder of young adults. ALS patients present a rapidly progressive motor weakness. This usually leads to death in a few years by respiratory failure. The correct prediction of respiratory insufficiency is thus key for patient management. In this context, we propose an innovative approach for prognostic prediction based on patient snapshots and time windows. We first cluster temporally-related tests to obtain snapshots of the patient's condition at a given time (patient snapshots). Then we use the snapshots to predict the probability of an ALS patient to require assisted ventilation after k days from the time of clinical evaluation (time window). This probability is based on the patient's current condition, evaluated using clinical features, including functional impairment assessments and a complete set of respiratory tests. The prognostic models include three temporal windows allowing to perform short, medium and long term prognosis regarding progression to assisted ventilation. Experimental results show an area under the receiver operating characteristics curve (AUC) in the test set of approximately 79% for time windows of 90, 180 and 365 days. Creating patient snapshots using hierarchical clustering with constraints outperforms the state of the art, and the proposed prognostic model becomes the first non population-based approach for prognostic prediction in ALS. The results are promising and should enhance the current clinical practice, largely supported by non-standardized tests and clinicians' experience. Copyright © 2015 Elsevier Inc. All rights reserved.
Lee, Jun-Yi; Huang, Jr-Chuan
2017-04-01
Mean transit time (MTT) is one of the fundamental catchment descriptors used to advance understanding of hydrological, ecological, and biogeochemical processes and to improve water resources management. However, few studies have documented base flow partitioning (BFP) and mean transit time within mountainous catchments in typhoon alley. We used a unique data set of 18O isotope and conductivity composition of rainfall (136 mm to 778 mm) and streamflow water samples collected for 14 tropical cyclone events (during 2011 to 2015) in a steep-relief forested catchment (Pinglin, in northern Taiwan). A lumped hydrological model, HBV, with a dispersion-model transit time distribution was used to estimate total flow, base flow, and the MTT of stream base flow. Linear regression between MTT and hydrometric variables (precipitation intensity and antecedent precipitation index) was used to explore controls on MTT variation. Results revealed that the simulation performance for both total flow and base flow was satisfactory; the Nash-Sutcliffe model efficiency coefficient was 0.848 for total flow and 0.732 for base flow. Estimated MTTs decreased as event magnitude increased. Meanwhile, the estimated MTTs varied from 4 to 21 days as BFP increased from 63% to 92%. The negative correlation of event magnitude with both MTT and BFP shows that the forcing controls the MTT and BFP. Besides, a negative relationship between MTT and the antecedent precipitation index was also found. In other words, wetter antecedent moisture conditions activate the fast flow paths more rapidly. This approach is well suited to constraining process-based modeling in high-precipitation-intensity, steep-relief forested environments.
Kulchitsky, A.; Maurits, S.; Watkins, B.
2006-12-01
With the widespread availability of the Internet today, many people can monitor various scientific research activities. It is important to accommodate this interest by providing on-line access to dynamic and illustrative Web resources that demonstrate different aspects of ongoing research. It is especially important to explain these research activities to high school and undergraduate students, thereby providing more information for decisions concerning their future studies. Such Web resources are also important for clarifying scientific research to the general public, in order to achieve better awareness of research progress in various fields. Particularly rewarding is the dissemination of information about ongoing projects within universities and research centers to their local communities. The benefits of this type of scientific outreach are mutual, since the development of Web-based automatic systems is a prerequisite for many research projects targeting real-time monitoring and/or modeling of natural conditions. Continuous operation of such systems also provides ongoing opportunities for statistically massive validation of the models. We have developed a Web-based system to run the University of Alaska Fairbanks Polar Ionospheric Model in real time. This model makes use of networking and computational resources at the Arctic Region Supercomputing Center. The system was designed to be portable among various operating systems and computational resources; its components can be installed across different computers, separating Web servers and computational engines. The core of the system is a Real-Time Management module (RMM) written in Python, which coordinates remote input data transfers, the ionospheric model runs, MySQL database filling, and the PHP scripts for Web-page preparation. The RMM downloads current geophysical inputs as soon as they become available at different on-line depositories. This information is processed to
International Nuclear Information System (INIS)
Peng Haipeng; Wei Nan; Li Lixiang; Xie Weisheng; Yang Yixian
2010-01-01
In this Letter, time delays are introduced to split the network links, upon which a model of complex dynamical networks with multi-links is constructed. Moreover, based on Lyapunov stability theory and some hypotheses, synchronization between two complex networks with different structures is achieved by designing effective controllers. The validity of the results is confirmed by numerical simulations.
Rodriguez Frias, Marco A.; Yang, Wuqiang
2017-04-01
Image reconstruction for electrical capacitance tomography is a challenging task due to the severely underdetermined nature of the inverse problem. A model-based algorithm tackles this problem by reducing the number of unknowns to be calculated from the limited number of independent measurements. The conventional model-based algorithm is implemented with a finite element method to solve the forward problem at each iteration and can produce good results. However, it is time-consuming and hence can be used for off-line image reconstruction only. In this paper, a solution to this limitation is proposed. The model-based algorithm is implemented with a database containing a set of pre-solved forward problems. In this way, the time required to perform image reconstruction is drastically reduced without sacrificing accuracy, and real-time image reconstruction is achieved at up to 100 frames s⁻¹. Further enhancement in speed may be accomplished by implementing the reconstruction algorithm on a parallel general-purpose graphics processing unit (GPGPU).
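The database idea can be illustrated with a toy stand-in for the forward problem. The linear "sensitivity matrix" model and the nearest-neighbour lookup below are hypothetical simplifications; the paper's FEM solver and matching strategy are not reproduced here.

```python
import numpy as np

def build_database(solver, samples):
    """Pre-solve the forward problem for a set of candidate
    permittivity distributions (done once, off-line)."""
    return [(eps, solver(eps)) for eps in samples]

def lookup_forward(database, eps):
    """Replace the per-iteration forward solve with a nearest-neighbour
    lookup over the pre-solved distributions."""
    best = min(database, key=lambda rec: np.linalg.norm(rec[0] - eps))
    return best[1]

# Toy linear 'forward problem' standing in for the FEM model:
# capacitance vector c = S @ eps, with S a fixed sensitivity matrix.
rng = np.random.default_rng(0)
S = rng.normal(size=(6, 4))               # 6 measurements, 4 pixels
fem_solve = lambda eps: S @ eps

db = build_database(fem_solve, [rng.uniform(1, 3, size=4) for _ in range(200)])
eps_true = rng.uniform(1, 3, size=4)
c_db = lookup_forward(db, eps_true)        # fast database answer
c_fem = fem_solve(eps_true)                # slow 'FEM' answer
```

The off-line cost of filling the database is paid once; at reconstruction time each iteration becomes a lookup instead of a solve.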
Hendriks, C.; Kranenburg, R.; Kuenen, J. J. P.; Van den Bril, B.; Verguts, V.; Schaap, M.
2016-04-01
Accurate modelling of mitigation measures for nitrogen deposition and secondary inorganic aerosol (SIA) episodes requires a detailed representation of emission patterns from agriculture. In this study the meteorological influence on the temporal variability of ammonia emissions from livestock housing and application of manure and fertilizer are included in the chemistry transport model LOTOS-EUROS. For manure application, manure transport data from Flanders (Belgium) were used as a proxy to derive the emission variability. Using the improved ammonia emission variability strongly improves model performance for ammonia, mainly through a better representation of the spring maximum. The impact on model performance for SIA was negligible, which is explained by the limited, ammonia-rich region in which the emission variability was updated. The contribution of Flemish agriculture to modelled annual mean ammonia and SIA concentrations in Flanders was quantified at 7-8 and 1-2 μg/m³, respectively. A scenario study was performed to investigate the effects of reducing ammonia emissions from manure application during PM episodes by 75%, yielding a maximum reduction in modelled SIA levels of 1-3 μg/m³ during episodes. Year-to-year emission variability and a soil module to explicitly model the emission process from manure and fertilizer application are needed to further improve the modelling of the ammonia budget.
Numerical modelling of softwood time-dependent behaviour based on microstructure
DEFF Research Database (Denmark)
Engelund, Emil Tang
2010-01-01
The time-dependent mechanical behaviour of softwood such as creep or relaxation can be predicted, from knowledge of the microstructural arrangement of the cell wall, by applying deformation kinetics. This has been done several times before; however, often without considering the constraints defined...
Directory of Open Access Journals (Sweden)
Patrícia Ramos
2016-11-01
Full Text Available In this work, a cross-validation procedure is used to identify an appropriate Autoregressive Integrated Moving Average model and an appropriate state space model for a time series. A minimum size for the training set is specified. The procedure is based on one-step forecasts and uses different training sets, each containing one more observation than the previous one. All possible state space models and all ARIMA models whose orders are allowed to range reasonably are fitted, considering raw data and log-transformed data with regular differencing (up to second order differences) and, if the time series is seasonal, seasonal differencing (up to first order differences). The root mean squared error of each model is calculated by averaging over the one-step forecast errors obtained. The model which has the lowest root mean squared error and passes the Ljung–Box test using all of the available data at a reasonable significance level is selected among all the ARIMA and state space models considered. The procedure is exemplified in this paper with a case study of retail sales of different categories of women’s footwear from a Portuguese retailer, and its accuracy is compared with three reliable forecasting approaches. The results show that our procedure consistently forecasts more accurately than the other approaches and that the improvements in accuracy are significant.
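A minimal sketch of the expanding-window, one-step-ahead evaluation described above, with two trivial stand-in forecasters in place of fitted ARIMA and state space models (the Ljung–Box screening step is omitted):

```python
import math

def one_step_cv_rmse(fit_predict, series, min_train):
    """Expanding-window cross-validation: each training set contains one
    more observation than the previous one; forecast one step ahead."""
    errors = []
    for t in range(min_train, len(series)):
        forecast = fit_predict(series[:t])
        errors.append((series[t] - forecast) ** 2)
    return math.sqrt(sum(errors) / len(errors))

# Two stand-in candidate models (real use: fitted ARIMA / state space).
naive = lambda hist: hist[-1]                    # random-walk forecast
mean_model = lambda hist: sum(hist) / len(hist)  # global-mean forecast

series = [10, 12, 11, 13, 12, 14, 13, 15, 14, 16]
scores = {"naive": one_step_cv_rmse(naive, series, 5),
          "mean": one_step_cv_rmse(mean_model, series, 5)}
best = min(scores, key=scores.get)               # lowest out-of-sample RMSE
```

With a real candidate set, `fit_predict` would refit the model on each training window before producing its one-step forecast.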
Model-Based Fault Diagnosis: Performing Root Cause and Impact Analyses in Real Time
Figueroa, Jorge F.; Walker, Mark G.; Kapadia, Ravi; Morris, Jonathan
2012-01-01
Generic, object-oriented fault models, built according to causal-directed graph theory, have been integrated into an overall software architecture dedicated to monitoring and predicting the health of mission-critical systems. Processing over the generic fault models is triggered by event detection logic that is defined according to the specific functional requirements of the system and its components. Once triggered, the fault models provide an automated way for performing both upstream root cause analysis (RCA), and for predicting downstream effects or impact analysis. The methodology has been applied to integrated system health management (ISHM) implementations at NASA SSC's Rocket Engine Test Stands (RETS).
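Over a causal-directed graph, root cause analysis and impact analysis reduce to reachability queries: upstream over reversed edges, downstream over forward edges. The fault graph below is a made-up example, not NASA's fault model:

```python
from collections import deque

# Hypothetical causal-directed graph: edge cause -> effect.
edges = {
    "valve_stuck":     ["low_flow"],
    "pump_degraded":   ["low_flow"],
    "low_flow":        ["tank_level_drop"],
    "tank_level_drop": ["engine_starve"],
}

def _reach(graph, start):
    """Breadth-first reachability from one node (start excluded)."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def impact_analysis(edges, event):
    """Downstream effects: everything reachable from the event."""
    return _reach(edges, event)

def root_cause_analysis(edges, event):
    """Upstream candidate causes: reachability over the reversed graph."""
    reverse = {}
    for cause, effects in edges.items():
        for eff in effects:
            reverse.setdefault(eff, []).append(cause)
    return _reach(reverse, event)
```

Event detection logic would call these with the triggering event to report candidate causes upstream and predicted impacts downstream.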
International Nuclear Information System (INIS)
Maroteaux, Fadila; Pommier, Pierre-Lin
2013-01-01
Highlights: ► Turbulent time evolution is introduced in the stochastic modeling approach. ► The particle number is optimized through a restricted initial distribution. ► The initial distribution amplitude is modeled by the magnitude of the turbulence field. -- Abstract: Homogeneous Charge Compression Ignition (HCCI) engine technology is known as an alternative for reducing NOx and particulate matter (PM) emissions. As shown by several experimental studies published in the literature, the ideally homogeneous mixture charge becomes stratified in composition and temperature, and turbulent mixing is found to play an important role in controlling the combustion progress. In a previous study, an IEM model (Interaction by Exchange with the Mean) was used to describe the micro-mixing in a stochastic reactor model that simulates the HCCI process. The IEM model is a deterministic model, based on the principle that the scalar value approaches the mean value over the entire volume with a characteristic mixing time. In that previous model, the turbulent time scale was treated as a fixed parameter. The present study focuses on the development of a micro-mixing time model, in order to take into account the physical phenomena it stands for. For that purpose, a (k–ε) model is used to express this micro-mixing time. The turbulence model used here is based on a zero-dimensional energy cascade applied during the compression and expansion strokes: mean kinetic energy is converted to turbulent kinetic energy, and turbulent kinetic energy is converted to heat through viscous dissipation. In addition, a relation to calculate the amplitude of the initial heterogeneities is proposed. The comparison of simulation results against experimental data shows overall satisfactory agreement with a variable turbulent time scale.
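The IEM closure with a (k–ε)-based mixing time can be sketched for an ensemble of notional particles. The constants, particle count, and scalar values below are purely illustrative; only the standard IEM form is assumed: each particle's scalar relaxes toward the ensemble mean at a rate set by the inverse turbulent mixing time τ = k/ε.

```python
import random

def iem_step(phis, dt, k, eps, c_phi=2.0):
    """One explicit-Euler IEM micro-mixing step: each particle scalar
    relaxes toward the ensemble mean at rate (c_phi / 2) * (eps / k),
    i.e. the inverse of the turbulent mixing time tau = k / eps."""
    mean = sum(phis) / len(phis)
    rate = 0.5 * c_phi * eps / k
    return [phi - rate * (phi - mean) * dt for phi in phis]

random.seed(1)
phis = [random.gauss(1000.0, 50.0) for _ in range(500)]  # e.g. temperatures
mean0 = sum(phis) / len(phis)
var0 = sum((p - mean0) ** 2 for p in phis) / len(phis)

k, eps = 10.0, 200.0          # illustrative turbulent kinetic energy, dissipation
for _ in range(100):          # integrate 0.1 s with dt = 1 ms
    phis = iem_step(phis, 1e-3, k, eps)

mean1 = sum(phis) / len(phis)
var1 = sum((p - mean1) ** 2 for p in phis) / len(phis)
```

The two defining properties of IEM are visible numerically: the ensemble mean is conserved while the scalar variance decays toward zero.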
Hendriks, C.; Kranenburg, R.; Kuenen, J.J.P.; Bril, B. van den; Verguts, V.; Schaap, M.
2016-01-01
Accurate modelling of mitigation measures for nitrogen deposition and secondary inorganic aerosol (SIA) episodes requires a detailed representation of emission patterns from agriculture. In this study the meteorological influence on the temporal variability of ammonia emissions from livestock
International Nuclear Information System (INIS)
Liu Xiao-Hui; Pei Chang-Xing; Nie Min
2010-01-01
Based on classical time-division multi-channel communication theory, we present a scheme of quantum time-division multi-channel communication (QTDMC). Moreover, the model of a quantum time-division switch (QTDS) and a correlative protocol for QTDMC are proposed. The quantum bit error rate (QBER) is analyzed and a QBER simulation test is performed. The scheme shows that the QTDS can carry out multi-user communication through a quantum channel, that the QBER can meet the reliability requirements of communication, and that the QTDMC protocol has high practicability and transplantability. The QTDS scheme may play an important role in the establishment of large-scale quantum communication in the future. (general)
Kosmopoulos, Panagiotis G.; Kazadzis, Stelios; Taylor, Michael; Raptis, Panagiotis I.; Keramitsoglou, Iphigenia; Kiranoudis, Chris; Bais, Alkiviadis F.
2018-02-01
This study focuses on the assessment of surface solar radiation (SSR) based on operational neural network (NN) and multi-regression function (MRF) modelling techniques that produce instantaneous (in less than 1 min) outputs. Using real-time cloud and aerosol optical properties inputs from the Spinning Enhanced Visible and Infrared Imager (SEVIRI) on board the Meteosat Second Generation (MSG) satellite and the Copernicus Atmosphere Monitoring Service (CAMS), respectively, these models are capable of calculating SSR in high resolution (1 nm, 0.05°, 15 min) that can be used for spectrally integrated irradiance maps, databases and various applications related to energy exploitation. The real-time models are validated against ground-based measurements of the Baseline Surface Radiation Network (BSRN) in a temporal range varying from 15 min to monthly means, while a sensitivity analysis of the cloud and aerosol effects on SSR is performed to ensure reliability under different sky and climatological conditions. The simulated outputs, compared to their common training dataset created by the radiative transfer model (RTM) libRadtran, showed median error values in the range -15 to 15 % for the NN that produces spectral irradiances (NNS), 5-6 % underestimation for the integrated NN and close to zero errors for the MRF technique. The verification against BSRN revealed that the real-time calculation uncertainty ranges from -100 to 40 and -20 to 20 W m-2, for the 15 min and monthly mean global horizontal irradiance (GHI) averages, respectively, while the accuracy of the input parameters, in terms of aerosol and cloud optical thickness (AOD and COT), and their impact on GHI, was of the order of 10 % as compared to the ground-based measurements. The proposed system aims to be utilized through studies and real-time applications which are related to solar energy production planning and use.
Modeling and control for a magnetic levitation system based on SIMLAB platform in real time
Yaseen, Mundher H. A.; Abd, Haider J.
2018-03-01
Magnetic levitation systems have become a hot topic of study due to their minimal friction and low energy consumption, which are regarded as very important issues. This paper proposes a new magnetic levitation system using the real-time control Simulink feature of the SIMLAB microcontroller. The control system of the maglev transportation system is verified by simulations with experimental results, and its superiority is indicated in comparison with previous literature and conventional control strategies. In addition, the proposed system was implemented under the effect of three controller types: the linear-quadratic regulator (LQR), the proportional-integral-derivative (PID) controller and lead compensation. The controller performance was compared in terms of three parameters: peak overshoot, settling time and rise time. The findings show good agreement between the simulation and experimental results. Moreover, the LQR controller produced greater stability and a more homogeneous response than the other controllers. In the experimental results, the LQR achieved 14.6% peak overshoot, 0.199 settling time and 0.064 rise time.
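For the LQR part, a discrete-time gain can be computed by iterating the Riccati recursion. The second-order plant below is a generic unstable toy system standing in for linearized maglev ball dynamics; its coefficients and the weights Q and R are assumptions, not the paper's identified model.

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain via fixed-point iteration of the Riccati
    equation: P = Q + A'PA - A'PB (R + B'PB)^-1 B'PA."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Toy linearized ball dynamics (unstable open loop), dt = 1 ms.
dt = 1e-3
A = np.array([[1.0, dt],
              [980.0 * dt, 1.0]])   # gravity-like destabilizing term
B = np.array([[0.0],
              [dt]])
Q = np.diag([1.0, 0.1])             # penalize position, then velocity
R = np.array([[1e-4]])              # cheap control effort

K = dlqr(A, B, Q, R)
rho = max(abs(np.linalg.eigvals(A - B @ K)))   # closed-loop spectral radius
```

A spectral radius below 1 confirms the state feedback u = -Kx stabilizes the discrete-time plant that is unstable in open loop.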
DEFF Research Database (Denmark)
Bauer-Gottwein, Peter; Christiansen, Lars; Rosbjerg, Dan
2011-01-01
... the signal and the change in water mass stored in the subsurface. Thus, no petrophysical relationship is required for coupled hydrogeophysical inversion. Two hydrological events were monitored with TLRG. One was a natural flooding event in the periphery of the Okavango Delta, Botswana, and one was a forced infiltration experiment in Denmark. The natural flooding event caused a spatio-temporally distributed increase in bank storage in an alluvial aquifer. The storage change was measured using both TLRG and traditional piezometers. A groundwater model was conditioned on both the TLRG and piezometer data. Model parameter uncertainty decreased significantly when TLRG data was included in the inversion. The forced infiltration experiment caused changes in unsaturated zone storage, which were monitored using TLRG and ground-penetrating radar. A numerical unsaturated zone model was subsequently conditioned on both ...
On the validity of travel-time based nonlinear bioreactive transport models in steady-state flow
Sanz-Prat, Alicia; Lu, Chuanhe; Finkel, Michael; Cirpka, Olaf A.
2015-04-01
Travel-time based models simplify the description of reactive transport by replacing the spatial coordinates with the groundwater travel time, posing a quasi one-dimensional (1-D) problem and potentially rendering the determination of multidimensional parameter fields unnecessary. While the approach is exact for strictly advective transport in steady-state flow if the reactive properties of the porous medium are uniform, its validity is unclear when local-scale mixing affects the reactive behavior. We compare a two-dimensional (2-D), spatially explicit, bioreactive, advective-dispersive transport model, considered as "virtual truth", with three 1-D travel-time based models which differ in the conceptualization of longitudinal dispersion: (i) neglecting dispersive mixing altogether, (ii) introducing a local-scale longitudinal dispersivity constant in time and space, and (iii) using an effective longitudinal dispersivity that increases linearly with distance. The reactive system considers biodegradation of dissolved organic carbon, which is introduced into a hydraulically heterogeneous domain together with oxygen and nitrate. Aerobic and denitrifying bacteria use the energy of the microbial transformations for growth. We analyze six scenarios differing in the variance of log-hydraulic conductivity and in the inflow boundary conditions (constant versus time-varying concentration). The concentrations of the 1-D models are mapped to the 2-D domain by means of the kinematic (for case i) and the mean (for cases ii and iii) groundwater age, respectively. The comparison between concentrations of the "virtual truth" and the 1-D approaches indicates extremely good agreement when using an effective, linearly increasing longitudinal dispersivity in the majority of the scenarios, while the other two 1-D approaches reproduce at least the concentration tendencies well. At late times, all 1-D models give valid approximations of two-dimensional transport. We conclude that the
Time series forecasting using ERNN and QR based on Bayesian model averaging
Pwasong, Augustine; Sathasivam, Saratha
2017-08-01
The Bayesian model averaging technique is a multi-model combination technique. It was employed to amalgamate the Elman recurrent neural network (ERNN) technique with the quadratic regression (QR) technique, producing a hybrid known as the ERNN-QR technique. The forecasting potential of the hybrid technique is compared with the forecasting capabilities of the individual ERNN and QR techniques. The outcome revealed that the hybrid technique is superior to the individual techniques in the mean square error sense.
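The combination step can be illustrated generically. Weighting component models by inverse in-sample MSE is a simplified stand-in for the Bayesian model averaging posterior weights used in the paper; all data below are made up.

```python
def combine_forecasts(preds_a, preds_b, actual, new_a, new_b):
    """Weight two component models by inverse in-sample MSE (a simple
    stand-in for Bayesian posterior model weights), normalize the
    weights, and combine the models' next forecasts."""
    mse = lambda p: sum((y - f) ** 2 for y, f in zip(actual, p)) / len(actual)
    wa, wb = 1.0 / mse(preds_a), 1.0 / mse(preds_b)
    total = wa + wb
    wa, wb = wa / total, wb / total
    return wa * new_a + wb * new_b, (wa, wb)

actual  = [1.0, 2.0, 3.0, 4.0]
model_a = [1.1, 2.1, 2.9, 4.1]   # small errors -> large weight
model_b = [0.0, 3.0, 2.0, 5.0]   # large errors -> small weight
combined, (wa, wb) = combine_forecasts(model_a, model_b, actual, 5.05, 5.5)
```

The combined forecast is pulled strongly toward the historically more accurate component, which is the qualitative behaviour the hybrid ERNN-QR technique relies on.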
A time-dependent Green's function-based model for stream ...
African Journals Online (AJOL)
DRINIE
2003-07-03
Jul 3, 2003 ... rainfall is low and erratic, and droughts are common. Modelling of flow when there is interaction between an unconfined aquifer and a stream has .... those quantities are evaluated at the centroid of the element. The fundamental solution of the auxiliary equation ∇²G − (1/D) ∂G/∂t = δ(r − rᵢ; t − τ), given by ...
A time-dependent Green's function-based model for stream ...
African Journals Online (AJOL)
DRINIE
2003-07-03
Jul 3, 2003 ... Because the ratio of the depth to lateral dimensions of most aquifers is extremely small, this assumption is ... problem in a novel way that accommodates medium heterogeneity, varying bedrock profile, and point .... has been developed on the basis of Eq. (11), incorporating the time-dependent fundamental ...
Liu, Weihua; Yang, Yi; Xu, Haitao; Liu, Xiaoyan; Wang, Yijia; Liang, Zhicheng
2014-01-01
In mass customization logistics service, reasonable scheduling of the logistics service supply chain (LSSC), especially time scheduling, helps to increase its competitiveness. Therefore, the effect of a customer order decoupling point (CODP) on the time scheduling performance should be considered. To minimize the total order operation cost of the LSSC, minimize the difference between the expected and actual times of completing the service orders, and maximize the satisfaction of functional logistics service providers, this study establishes an LSSC time scheduling model based on the CODP. Matlab 7.8 software is used in the numerical analysis for a specific example. Results show that the order completion time of the LSSC can be delayed or be ahead of schedule, but cannot be infinitely advanced or infinitely delayed. The optimal comprehensive performance can be obtained if the expected order completion time is appropriately delayed. The increase in supply chain comprehensive performance caused by the increase in the relationship coefficient of the logistics service integrator (LSI) is limited. The relative concern of the LSI with cost and service delivery punctuality leads not only to changes in the CODP but also to changes in the scheduling performance of the LSSC.
Design issues of time-based phenomena and the notion of a persistent model
DEFF Research Database (Denmark)
Peters, Brady
2012-01-01
This chapter reflects on how sound can become part of the architectural design process. Sound is a complex phenomenon that traditional architectural drawing tools do not capture well. Parametric tools allow for the encoding of relationships between material, geometry, and acoustic performance...... in a digital model. Computational simulation tools can give visual and aural feedback on how designs perform. These tools give architects the ability to contemplate the sound of architectural propositions. Different sounds, sound positions, and listener positions can be tested, as can different geometric...... and material configurations. Using these tools, architects can design for sound. Sound should be a part of the architectural design process, and in order for it to be a useful design parameter, it must be able to be considered in the digital modeling environment. We form a spatial impression of our surroundings...
Adaptive waveform interpretation with Gaussian filtering (AWIGF) and the second-order bounded mean oscillation operator Z₂(u,t,r) are TDR analysis methods based on second-order differentiation. AWIGF was originally designed for relatively long-probe (greater than 150 mm) TDR waveforms, while Z s...
Modeling Zombie Outbreaks: A Problem-Based Approach to Improving Mathematics One Brain at a Time
Lewis, Matthew; Powell, James A.
2016-01-01
A great deal of educational literature has focused on problem-based learning (PBL) in mathematics at the primary and secondary level, but arguably there is an even greater need for PBL in college math courses. We present a project centered around the Humans versus Zombies moderated tag game played on the Utah State University campus. We discuss…
Model-based testing of real-time embedded systems in the automotive domain
Zander-Nowicka, Justyna
2009-01-01
Research on the software aspects of embedded systems will have a decisive influence on industry, markets and everyday life in the near future, which motivates investigation of this application domain. Furthermore, the creation of a consistent, reusable and well-documented model is becoming the most important task in the development of embedded systems. Design decisions that were previously made at the code level are today increasingly made at a higher ...
Using ground-based time-lapse gravity observations for hydrological model calibration
DEFF Research Database (Denmark)
Christiansen, Lars
places will worsen the situation. Hydrological models are computer programs that help us understand the path water takes from when it falls as precipitation until it evaporates back into the atmosphere. The models are likewise used to predict the consequences of changes at all scales; this can be anything from...... water in the soil is difficult to measure, but this data type is at the same time very effective when calibrating models. It is information we cannot obtain from wells, which merely give us the extent of, and the pressure in, the groundwater. In my research project I show that we can measure changes in the amount of water stored in the soil as local...... changes in gravity and use them to calibrate hydrological models. When the air in the soil's pores is replaced by water, the density of the soil increases, and with it the gravitational attraction; put popularly, you get heavier when it rains! In my research project I have investigated how...
Directory of Open Access Journals (Sweden)
Rajat Malik
Full Text Available A class of discrete-time models of infectious disease spread, referred to as individual-level models (ILMs, are typically fitted in a Bayesian Markov chain Monte Carlo (MCMC framework. These models quantify probabilistic outcomes regarding the risk of infection of susceptible individuals due to various susceptibility and transmissibility factors, including their spatial distance from infectious individuals. The infectious pressure from infected individuals exerted on susceptible individuals is intrinsic to these ILMs. Unfortunately, quantifying this infectious pressure for data sets containing many individuals can be computationally burdensome, leading to a time-consuming likelihood calculation and, thus, computationally prohibitive MCMC-based analysis. This problem worsens when using data augmentation to allow for uncertainty in infection times. In this paper, we develop sampling methods that can be used to calculate a fast, approximate likelihood when fitting such disease models. A simple random sampling approach is initially considered followed by various spatially-stratified schemes. We test and compare the performance of our methods with both simulated data and data from the 2001 foot-and-mouth disease (FMD epidemic in the U.K. Our results indicate that substantial computation savings can be obtained (albeit, of course, with some information loss), suggesting that such techniques may be of use in the analysis of very large epidemic data sets.
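The simple-random-sampling approximation of infectious pressure can be sketched as follows. The distance kernel alpha*(d+1)^(-beta) and all parameter values below are hypothetical illustrations, not the paper's fitted ILM:

```python
import math
import random

def infectious_pressure(susceptible, infected, alpha=1.0, beta=2.0):
    """Exact infectious pressure on one susceptible individual: a sum of
    a spatial distance kernel over every infected individual."""
    sx, sy = susceptible
    return sum(alpha * (math.hypot(sx - ix, sy - iy) + 1.0) ** (-beta)
               for ix, iy in infected)

def sampled_pressure(susceptible, infected, m, rng):
    """Approximate the sum from a simple random sample of m infected
    individuals, scaled up by len(infected) / m."""
    sample = rng.sample(infected, m)
    return (len(infected) / m) * infectious_pressure(susceptible, sample)

rng = random.Random(7)
infected = [(rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(2000)]
sus = (5.0, 5.0)
exact = infectious_pressure(sus, infected)          # O(N) per susceptible
approx = sampled_pressure(sus, infected, m=400, rng=rng)  # O(m) per susceptible
```

Each likelihood evaluation then scales with the sample size m rather than the full number of infected individuals, at the cost of some sampling noise.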
Directory of Open Access Journals (Sweden)
Jisheng Zhang
2015-06-01
Full Text Available It is essential for transportation management centers to equip and manage a network of fixed and mobile sensors in order to quickly detect traffic incidents and further monitor the related impact areas, especially for high-impact accidents with dramatic traffic congestion propagation. As emerging small Unmanned Aerial Vehicles (UAVs) start to have a more flexible regulation environment, it is critically important to fully explore the potential of using UAVs for monitoring recurring and non-recurring traffic conditions and special events on transportation networks. This paper presents a space-time network-based modeling framework for integrated fixed and mobile sensor networks, in order to provide a rapid and systematic road traffic monitoring mechanism. By constructing a discretized space-time network to characterize not only the speed for UAVs but also the time-sensitive impact areas of traffic congestion, we formulate the problem as a linear integer programming model to minimize the detection delay cost and operational cost, subject to feasible flying route constraints. A Lagrangian relaxation solution framework is developed to decompose the original complex problem into a series of computationally efficient time-dependent and least cost path finding sub-problems. Several examples are used to demonstrate the results of proposed models in UAVs’ route planning for small and medium-scale networks.
Zhang, Jisheng; Jia, Limin; Niu, Shuyun; Zhang, Fan; Tong, Lu; Zhou, Xuesong
2015-06-12
It is essential for transportation management centers to equip and manage a network of fixed and mobile sensors in order to quickly detect traffic incidents and further monitor the related impact areas, especially for high-impact accidents with dramatic traffic congestion propagation. As emerging small Unmanned Aerial Vehicles (UAVs) start to have a more flexible regulation environment, it is critically important to fully explore the potential of using UAVs for monitoring recurring and non-recurring traffic conditions and special events on transportation networks. This paper presents a space-time network-based modeling framework for integrated fixed and mobile sensor networks, in order to provide a rapid and systematic road traffic monitoring mechanism. By constructing a discretized space-time network to characterize not only the speed for UAVs but also the time-sensitive impact areas of traffic congestion, we formulate the problem as a linear integer programming model to minimize the detection delay cost and operational cost, subject to feasible flying route constraints. A Lagrangian relaxation solution framework is developed to decompose the original complex problem into a series of computationally efficient time-dependent and least cost path finding sub-problems. Several examples are used to demonstrate the results of proposed models in UAVs' route planning for small and medium-scale networks.
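Once the Lagrangian relaxation decomposes the problem, each subproblem is a least-cost path search on the time-expanded network. The sketch below runs Dijkstra over (location, time) nodes; the tiny network, horizon, and costs are invented for illustration:

```python
import heapq

def space_time_shortest_path(arcs, origin, horizon):
    """Least-cost path in a time-expanded network whose nodes are
    (location, time) pairs; arcs[(loc, t)] yields (next_loc, t2, cost)
    tuples covering both flying and waiting moves."""
    dist = {(origin, 0): 0.0}
    heap = [(0.0, (origin, 0))]
    pred = {}
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                      # stale heap entry
        for nxt_loc, t2, cost in arcs.get(node, []):
            if t2 > horizon:
                continue                  # beyond the planning horizon
            nxt = (nxt_loc, t2)
            if d + cost < dist.get(nxt, float("inf")):
                dist[nxt] = d + cost
                pred[nxt] = node
                heapq.heappush(heap, (d + cost, nxt))
    return dist, pred

# Hypothetical 3-location network; waiting at A one step costs 1,
# flying arcs carry made-up delay/operational costs.
arcs = {
    ("A", 0): [("A", 1, 1.0), ("B", 1, 2.0)],
    ("A", 1): [("B", 2, 1.0)],
    ("B", 1): [("C", 2, 5.0)],
    ("B", 2): [("C", 3, 1.0)],
}
dist, _ = space_time_shortest_path(arcs, "A", horizon=3)
```

Here waiting one step at A and taking the later, cheaper arcs (cost 3 to reach C at t=3) beats flying immediately (cost 7 to reach C at t=2), which is exactly the trade-off a time-expanded formulation captures.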
Land use and land cover change based on historical space-time model
Sun, Qiong; Zhang, Chi; Liu, Min; Zhang, Yongjing
2016-09-01
Land use and land cover change is a leading-edge topic in current research on global environmental change, and case studies of typical areas are an important approach to understanding global environmental changes. Taking the Qiantang River (Zhejiang, China) as an example, this study explores automatic classification of land use using remote sensing technology and analyzes historical space-time change by remote sensing monitoring. The study combines spectral angle mapping (SAM) with multi-source information to create a convenient, efficient and high-precision automatic land use classification method that meets application requirements and suits the complex landform of the study area. This work analyzes the historical space-time characteristics of land use and cover change in the Qiantang River basin in 2001, 2007 and 2014, in order to (i) verify the feasibility of studying land use change with remote sensing technology, (ii) accurately understand the change of land use and cover as well as the historical space-time evolution trend, (iii) provide a realistic basis for the sustainable development of the Qiantang River basin and (iv) provide strong information support and a new research method for optimizing the Qiantang River land use structure and achieving optimal allocation of land resources and scientific management.
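Spectral angle mapping itself is compact: classify each pixel to the reference spectrum that makes the smallest angle with it, which makes the measure largely insensitive to illumination-driven brightness scaling. The 4-band reference spectra below are invented, not the study's endmembers:

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Angle (radians) between a pixel spectrum and a reference
    spectrum; a small angle means spectrally similar material."""
    cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) *
                                      np.linalg.norm(reference))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def sam_classify(pixel, references):
    """Assign the land-cover class whose reference spectrum makes the
    smallest spectral angle with the pixel."""
    return min(references, key=lambda name: spectral_angle(pixel, references[name]))

# Hypothetical 4-band reflectances (e.g. green / red / NIR / SWIR).
references = {
    "water":  np.array([0.08, 0.05, 0.02, 0.01]),
    "forest": np.array([0.06, 0.04, 0.45, 0.20]),
    "urban":  np.array([0.20, 0.22, 0.25, 0.24]),
}
pixel = np.array([0.12, 0.08, 0.90, 0.40])   # bright vegetation pixel
label = sam_classify(pixel, references)
```

Because the pixel is just a brighter (scaled) version of the forest reference, its spectral angle to "forest" is essentially zero and it is classified accordingly.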
A 3D Tomographic Model of Asia Based on Pn and P Travel Times from GT Events
Young, C. J.; Begnaud, M. L.; Ballard, S.; Phillips, W. S.; Hipp, J. R.; Steck, L. K.; Rowe, C. A.; Chang, M. C.
2008-12-01
Increasingly, nuclear explosion monitoring is focusing on detection, location, and identification of small events recorded at regional distances. Because Earth structure is highly variable on regional scales, locating events accurately at these distances requires the use of region-specific models to provide accurate travel times. Improved results have been achieved with composites of 1D models and with approximate 3D models with simplified upper mantle structures, but both approaches introduce non-physical boundaries that are problematic for operational monitoring use. Ultimately, what is needed is a true, seamless 3D model of the Earth. Towards that goal, we have developed a 3D tomographic model of the P velocity of the crust and mantle for the Asian continent. Our model is derived by an iterative least squares travel time inversion of more than one million Pn and teleseismic P picks from some 35,000 events recorded at 4,000+ stations. We invert for P velocities from the top of the crust to the core mantle boundary, along with source and receiver static time terms to account for the effects of event mislocation and unaccounted for fine-scale structure near the receiver. Because large portions of the model are under-constrained, we apply spatially varying damping, which constrains the inversion to update the starting model only where good data coverage is available. Our starting crustal model is taken from the a priori crust and upper mantle model of Asia developed through National Nuclear Security Administration laboratory collaboration, which is based on various global and regional studies, and we substantially increase the damping in the crust to discourage changes from this model. Our starting mantle model is AK135. To simplify the inversion, we fix the depths of the major mantle discontinuities (Moho, 410 km, 660 km). 3D rays are calculated using an implementation of the Um and Thurber ray pseudo-bending approach, with full enforcement of Snell's Law in 3D at
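The spatially varying damping can be illustrated on a toy linear system: each model cell gets its own damping weight, and heavily damped (poorly covered) cells stay near the starting model. The matrix sizes, values, and the simple Tikhonov form below are illustrative assumptions, not the tomography code's implementation:

```python
import numpy as np

def damped_update(G, residual, cell_damping):
    """One least-squares model update with spatially varying damping:
    minimize ||G dm - r||^2 + sum_i lambda_i * dm_i^2, i.e. solve
    (G^T G + diag(lambda)) dm = G^T r."""
    return np.linalg.solve(G.T @ G + np.diag(cell_damping), G.T @ residual)

rng = np.random.default_rng(3)
G = rng.normal(size=(40, 3))             # 40 travel-time rows, 3 slowness cells
dm_true = np.array([0.05, -0.03, 0.04])  # synthetic slowness perturbation
r = G @ dm_true                          # noise-free travel-time residuals

light = damped_update(G, r, np.array([1e-6, 1e-6, 1e-6]))
heavy = damped_update(G, r, np.array([1e-6, 1e6, 1e-6]))  # damp cell 2 hard
```

With uniformly light damping the update recovers the synthetic perturbation; cranking up the damping on one cell pins that cell's update near zero, which is the mechanism used to keep poorly constrained regions at the starting model.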
Pooley, C M; Bishop, S C; Marion, G
2015-06-06
Bayesian statistics provides a framework for the integration of dynamic models with incomplete data to enable inference of model parameters and unobserved aspects of the system under study. An important class of dynamic models is discrete state space, continuous-time Markov processes (DCTMPs). Simulated via the Doob-Gillespie algorithm, these have been used to model systems ranging from chemistry to ecology to epidemiology. A new type of proposal, termed 'model-based proposal' (MBP), is developed for the efficient implementation of Bayesian inference in DCTMPs using Markov chain Monte Carlo (MCMC). This new method, which in principle can be applied to any DCTMP, is compared (using simple epidemiological SIS and SIR models as easy to follow exemplars) to a standard MCMC approach and a recently proposed particle MCMC (PMCMC) technique. When measurements are made on a single-state variable (e.g. the number of infected individuals in a population during an epidemic), model-based proposal MCMC (MBP-MCMC) is marginally faster than PMCMC (by a factor of 2-8 for the tests performed), and significantly faster than the standard MCMC scheme (by a factor of 400 at least). However, when model complexity increases and measurements are made on more than one state variable (e.g. simultaneously on the number of infected individuals in spatially separated subpopulations), MBP-MCMC is significantly faster than PMCMC (more than 100-fold for just four subpopulations) and this difference becomes increasingly large. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
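The Doob-Gillespie algorithm mentioned above simulates a DCTMP exactly by drawing exponential waiting times between jumps and picking each jump in proportion to its rate. A minimal sketch for the SIS exemplar (population size, rates, and horizon are hypothetical, not values from the paper):

```python
import random

def gillespie_sis(N, I0, beta, gamma, t_max, seed=1):
    """Doob-Gillespie (exact stochastic) simulation of an SIS epidemic.

    The state is the number of infected I out of N; infections occur at
    rate beta*S*I/N and recoveries at rate gamma*I. Returns the list of
    (time, I) jump points."""
    rng = random.Random(seed)
    t, I = 0.0, I0
    path = [(t, I)]
    while t < t_max and I > 0:
        r_inf = beta * (N - I) * I / N     # S -> I rate
        r_rec = gamma * I                  # I -> S rate
        total = r_inf + r_rec
        t += rng.expovariate(total)        # exponential waiting time
        if t > t_max:
            break
        # pick the event in proportion to its rate
        I += 1 if rng.random() < r_inf / total else -1
        path.append((t, I))
    return path

path = gillespie_sis(N=100, I0=5, beta=0.8, gamma=0.2, t_max=50.0)
```

Inference schemes such as MBP-MCMC or PMCMC would then treat paths like this one as the latent process underlying noisy observations.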
Schilling, K.E.; Wolter, C.F.
2007-01-01
Excessive nitrate-nitrogen (nitrate) loss from agricultural watersheds is an environmental concern. A common conservation practice to improve stream water quality is to retire vulnerable row croplands to grass. In this paper, a groundwater travel time model based on a geographic information system (GIS) analysis of readily available soil and topographic variables was used to evaluate the time needed to observe stream nitrate concentration reductions from conversion of row crop land to native prairie in Walnut Creek watershed, Iowa. Average linear groundwater velocity in 5-m cells was estimated by overlaying GIS layers of soil permeability, land slope (surrogates for hydraulic conductivity and gradient, respectively) and porosity. Cells were summed backwards from the stream network to watershed divide to develop a travel time distribution map. Results suggested that groundwater from half of the land planted in prairie has reached the stream network during the 10 years of ongoing water quality monitoring. The mean travel time for the watershed was estimated to be 10.1 years, consistent with results from a simple analytical model. The proportion of land in the watershed and subbasins with prairie groundwater reaching the stream (10-22%) was similar to the measured reduction of stream nitrate (11-36%). Results provide encouragement that additional nitrate reductions in Walnut Creek are probable in the future as reduced nitrate groundwater from distal locations discharges to the stream network in the coming years. The high spatial resolution of the model (5-m cells) and its simplicity may make it potentially applicable for land managers interested in communicating lag time issues to the public, particularly related to nitrate concentration reductions over time. © 2007 Springer-Verlag.
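The travel time construction described above (permeability and slope as surrogates for conductivity and gradient, cell transit times summed backwards from the stream to the divide) can be sketched as follows; the 1-D transect and all cell values are hypothetical simplifications of the 5-m GIS grid:

```python
def cell_velocity(K, slope, porosity):
    """Average linear groundwater velocity v = K*i/n (m/yr), with K the
    hydraulic conductivity, i the gradient and n the porosity."""
    return K * slope / porosity

def travel_times(cells, cell_len=5.0):
    """Cumulative travel time from each cell to the stream, summing cell
    transit times backwards from the stream (first cell) to the divide."""
    times, total = [], 0.0
    for K, slope, n in cells:
        total += cell_len / cell_velocity(K, slope, n)
        times.append(total)
    return times

# hypothetical uniform transect of ten 5-m cells:
# K = 50 m/yr, slope = 0.02, porosity = 0.3  ->  v = 10/3 m/yr per cell
transect = [(50.0, 0.02, 0.3)] * 10
tt = travel_times(transect)
```

A histogram of such cumulative times over all cells would give the watershed's travel time distribution.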
Directory of Open Access Journals (Sweden)
Michael W. Gaultois
2016-05-01
Full Text Available The experimental search for new thermoelectric materials remains largely confined to a limited set of successful chemical and structural families, such as chalcogenides, skutterudites, and Zintl phases. In principle, computational tools such as density functional theory (DFT) offer the possibility of rationally guiding experimental synthesis efforts toward very different chemistries. However, in practice, predicting thermoelectric properties from first principles remains a challenging endeavor [J. Carrete et al., Phys. Rev. X 4, 011019 (2014)], and experimental researchers generally do not directly use computation to drive their own synthesis efforts. To bridge this practical gap between experimental needs and computational tools, we report an open machine learning-based recommendation engine (http://thermoelectrics.citrination.com) for materials researchers that suggests promising new thermoelectric compositions based on pre-screening about 25 000 known materials and also evaluates the feasibility of user-designed compounds. We show this engine can identify interesting chemistries very different from known thermoelectrics. Specifically, we describe the experimental characterization of one example set of compounds derived from our engine, RE12Co5Bi (RE = Gd, Er), which exhibits surprising thermoelectric performance given its unprecedentedly high loading with metallic d and f block elements and warrants further investigation as a new thermoelectric material platform. We show that our engine predicts this family of materials to have low thermal and high electrical conductivities, but modest Seebeck coefficient, all of which are confirmed experimentally. We note that the engine also predicts materials that may simultaneously optimize all three properties entering into zT; we selected RE12Co5Bi for this study due to its interesting chemical composition and known facile synthesis.
Real-time prediction of respiratory motion based on a local dynamic model in an augmented space.
Hong, S-M; Jung, B-H; Ruan, D
2011-03-21
Motion-adaptive radiotherapy aims to deliver ablative radiation dose to the tumor target with minimal normal tissue exposure, by accounting for real-time target movement. In practice, prediction is usually necessary to compensate for system latency induced by measurement, communication and control. This work focuses on predicting respiratory motion, which is most dominant for thoracic and abdominal tumors. We develop and investigate the use of a local dynamic model in an augmented space, motivated by the observation that respiratory movement exhibits a locally circular pattern in a plane augmented with a delayed axis. By including the angular velocity as part of the system state, the proposed dynamic model effectively captures the natural evolution of respiratory motion. The first-order extended Kalman filter is used to propagate and update the state estimate. The target location is predicted by evaluating the local dynamic model equations at the required prediction length. This method is complementary to existing work in that (1) the local circular motion model characterizes 'turning', overcoming the limitation of linear motion models; (2) it uses a natural state representation including the local angular velocity and updates the state estimate systematically, offering explicit physical interpretations; (3) it relies on a parametric model and is much less data-satiate than the typical adaptive semiparametric or nonparametric method. We tested the performance of the proposed method with ten RPM traces, using the normalized root mean squared difference between the predicted value and the retrospective observation as the error metric. Its performance was compared with predictors based on the linear model, the interacting multiple linear models and the kernel density estimator for various combinations of prediction lengths and observation rates. The local dynamic model based approach provides the best performance for short to medium prediction lengths under relatively
Kazakeviciute, Agne; Ho, Chris Jun Hui; Olivo, Malini
2016-09-01
The aim of this study is to solve a problem of denoising and artifact removal from in vivo multispectral photoacoustic imaging when the level of noise is not known a priori. The study analyzes Wiener filtering in the Fourier domain when a family of anisotropic shape filters is considered. The unknown noise and signal power spectral densities are estimated using spectral information of the images and the first-order autoregressive (AR(1)) model. Edge preservation is achieved by detecting image edges in the original and the denoised image and superimposing a weighted contribution of the two edge images onto the resulting denoised image. The method is tested on multispectral photoacoustic images from simulations, a tissue-mimicking phantom, as well as in vivo imaging of the mouse, with its performance compared against that of standard Wiener filtering in the Fourier domain. The results reveal better denoising and fine-detail preservation capabilities of the proposed method compared to standard Wiener filtering in the Fourier domain, suggesting that this could be a useful denoising technique for other multispectral photoacoustic studies.
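The core of Fourier-domain Wiener filtering is the frequency-wise gain H = S/(S+N) built from the signal and noise power spectral densities. A minimal isotropic sketch, with a crude subtraction-based signal-PSD estimate standing in for the paper's AR(1)-based one (the image and noise level are synthetic):

```python
import numpy as np

def wiener_denoise(img, noise_psd):
    """Zero-phase Wiener filter in the Fourier domain: H = S / (S + N).

    The signal PSD S is estimated naively by subtracting the known noise
    PSD from the observed periodogram (a crude stand-in for the AR(1)
    estimate used in the paper)."""
    F = np.fft.fft2(img)
    psd = np.abs(F) ** 2 / img.size          # periodogram of the noisy image
    S = np.maximum(psd - noise_psd, 0.0)     # naive signal-PSD estimate
    H = S / (S + noise_psd + 1e-12)          # Wiener gain per frequency bin
    return np.real(np.fft.ifft2(H * F))

rng = np.random.default_rng(0)
s = np.sin(np.linspace(0, np.pi, 64))
clean = np.outer(s, s)                       # smooth synthetic "image"
noisy = clean + 0.3 * rng.standard_normal((64, 64))
den = wiener_denoise(noisy, noise_psd=0.3 ** 2)
```

For white noise of variance sigma**2, the per-bin noise PSD in this normalization is simply sigma**2, which is what is passed in above.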
Directory of Open Access Journals (Sweden)
Abu-Tair A.
2016-01-01
Full Text Available Concrete is the backbone of any developed economy. Concrete can suffer from a large number of deleterious effects with physical, chemical and biological causes. Organizations that own large portfolios of bridge structures face very serious questions when asking for maintenance budgets: they must justify the need for the work and its urgency, and also predict or show the consequences of delayed rehabilitation of a particular structure. There is therefore a need for a probabilistic model that can estimate the range of service lives of bridge populations and the likelihood of the level of deterioration reached in each incremental time interval. Such a model was developed based on statistical data from actual inspection records of a large reinforced concrete bridge portfolio. The method uses both deterministic and stochastic approaches to predict the service life of a bridge; using these service lives in combination with the just-in-time (JIT) principle of management would enable maintenance managers to justify the need for action and the budgets needed, and to intervene at the optimum time in the life of the structure and that of the deterioration. The paper reports on the model, which is based on a large database of deterioration records of concrete bridges covering a period of over 60 years and including data from over 400 bridge structures. The paper also illustrates how the service life model was developed and how these service lives, combined with JIT, can be used to effectively allocate resources and use them to keep a major infrastructure asset moving with little disruption to the transport system and its users.
Zhao, X.; Rosen, D. W.
2017-01-01
As additive manufacturing is poised for growth and innovations, it faces barriers of lack of in-process metrology and control to advance into wider industry applications. The exposure controlled projection lithography (ECPL) is a layerless mask-projection stereolithographic additive manufacturing process, in which parts are fabricated from photopolymers on a stationary transparent substrate. To improve the process accuracy with closed-loop control for ECPL, this paper develops an interferometric curing monitoring and measuring (ICM&M) method which addresses the sensor modeling and algorithms issues. A physical sensor model for ICM&M is derived based on interference optics utilizing the concept of instantaneous frequency. The associated calibration procedure is outlined for ICM&M measurement accuracy. To solve the sensor model, particularly in real time, an online evolutionary parameter estimation algorithm is developed adopting moving horizon exponentially weighted Fourier curve fitting and numerical integration. As a preliminary validation, simulated real-time measurement by offline analysis of a video of interferograms acquired in the ECPL process is presented. The agreement between the cured height estimated by ICM&M and that measured by microscope indicates that the measurement principle is promising as real-time metrology for global measurement and control of the ECPL process.
Walsh, T.; Layton, T.; Mellor, J. E.
2017-12-01
Storm damage to the electric grid impacts 23 million electric utility customers and costs US consumers $119 billion annually. Current restoration techniques rely on the past experiences of emergency managers. There are few analytical simulation and prediction tools available for utility managers to optimize storm recovery and decrease consumer cost, lost revenue and restoration time. We developed an agent based model (ABM) for storm recovery in Connecticut. An ABM is a computer modeling technique comprised of agents who are given certain behavioral rules and operate in a given environment. It allows the user to simulate complex systems by varying user-defined parameters to study emergent, unpredicted behavior. The ABM incorporates the road network and electric utility grid for the state, is validated using actual storm event recoveries and utilizes the Dijkstra routing algorithm to determine the best path for repair crews to travel between outages. The ABM has benefits for both researchers and utility managers. It can simulate complex system dynamics, rank variable importance, find tipping points that could significantly reduce restoration time or costs and test a broad range of scenarios. It is a modular, scalable and adaptable technique that can simulate scenarios in silico to inform emergency managers before and during storm events to optimize restoration strategies and better manage expectations of when power will be restored. Results indicate that total restoration time is strongly dependent on the number of crews. However, there is a threshold whereby more crews will not decrease the restoration time, which depends on the total number of outages. The addition of outside crews is more beneficial for storms with a higher number of outages. The time to restoration increases linearly with increasing repair time, while the travel speed has little overall effect on total restoration time. Crews traveling to the nearest outage reduces the total restoration time
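The Dijkstra routing step used by the ABM to move repair crews between outages can be sketched as follows; the road network here is a hypothetical toy graph, not Connecticut's:

```python
import heapq

def dijkstra(graph, src):
    """Shortest travel times from src over a weighted road graph given as
    {node: [(neighbour, travel_time), ...]}."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry, skip it
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# toy road network: crew depot "D", outage sites "A"-"C" (weights = minutes)
roads = {
    "D": [("A", 4.0), ("B", 1.0)],
    "B": [("A", 2.0), ("C", 5.0)],
    "A": [("C", 1.0)],
}
dist = dijkstra(roads, "D")
```

In the ABM, each crew agent would rerun a search like this from its current position to choose its next outage.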
Fukaya, Keiichi; Kawamori, Ai; Osada, Yutaka; Kitazawa, Masumi; Ishiguro, Makio
2017-09-20
Women's basal body temperature (BBT) shows a periodic pattern associated with the menstrual cycle. Although this fact suggests that daily BBT time series can be useful for estimating the underlying phase state as well as for predicting the length of the current menstrual cycle, little attention has been paid to modelling BBT time series. In this study, we propose a state-space model that involves the menstrual phase as a latent state variable to explain the daily fluctuation of BBT and the menstrual cycle length. Conditional distributions of the phase are obtained by using sequential Bayesian filtering techniques. A predictive distribution of the next menstruation day can be derived based on this conditional distribution and the model, leading to a novel statistical framework that provides a sequentially updated prediction of the upcoming menstruation day. We applied this framework to a real data set of women's BBT and menstruation days and compared the prediction accuracy of the proposed method with that of previous methods, showing that the proposed method generally provides a better prediction. Because BBT can be obtained with relatively small cost and effort, the proposed method can be useful for women's health management. Potential extensions of this framework as the basis of modeling and predicting events that are associated with the menstrual cycle are discussed. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
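Sequential Bayesian filtering of a latent cyclic phase can be illustrated with a much-simplified discrete version of the state-space idea. The two-level BBT profile, the 28-day cycle, the deterministic one-day phase advance, and all parameter values below are assumptions of this sketch, not the paper's model:

```python
import math

def bayes_phase_filter(obs, n_phase=28, sigma=0.15):
    """Sequential Bayes filter for a latent cyclic phase given daily
    BBT-like readings. The (hypothetical) observation model: BBT is low
    (36.4) in the follicular half and high (36.7) in the luteal half of an
    n_phase-day cycle, with Gaussian noise sigma; the phase advances one
    step per day. Returns the filtered phase distribution."""
    mean = lambda p: 36.7 if p >= n_phase // 2 else 36.4
    belief = [1.0 / n_phase] * n_phase
    for y in obs:
        # predict: deterministic advance of the phase by one day
        belief = belief[-1:] + belief[:-1]
        # update: Gaussian likelihood of today's reading
        belief = [b * math.exp(-0.5 * ((y - mean(p)) / sigma) ** 2)
                  for p, b in enumerate(belief)]
        s = sum(belief)
        belief = [b / s for b in belief]
    return belief

# 14 low then 7 high readings should place the phase in the luteal half
readings = [36.4] * 14 + [36.7] * 7
belief = bayes_phase_filter(readings)
best = max(range(len(belief)), key=belief.__getitem__)
```

The predictive distribution of the next menstruation day would follow by propagating this belief forward until the phase wraps around.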
Directory of Open Access Journals (Sweden)
Yu-Pin Liao
2017-11-01
Full Text Available In the past few decades, demand forecasting has become relatively difficult due to rapid changes in the global environment. This research illustrates the use of the make-to-stock (MTS) production strategy in order to explain how forecasting plays an essential role in business management. The linear mixed-effect (LME) model has been extensively developed and is widely applied in various fields. However, no study has used the LME model for business forecasting. We suggest that the LME model be used as a tool for prediction and to overcome environment complexity. The data analysis is based on real data from an international display company, where the company needs accurate demand forecasting before adopting an MTS strategy. The forecasting result from the LME model is compared to the commonly used approaches, including the regression model, autoregressive model, time series model, and exponential smoothing model, with the results revealing that the prediction performance provided by the LME model is more stable than that of the other methods. Furthermore, product types in the data are regarded as a random effect in the LME model, hence demands of all types can be predicted simultaneously using a single LME model. However, some approaches require splitting the data into different type categories, and then predicting the type demand by establishing a model for each type. This feature also demonstrates the practicability of the LME model in real business operations.
Directory of Open Access Journals (Sweden)
Hao Yu
2018-01-01
Full Text Available This study introduces a data-driven modeling strategy for smart grid power quality (PQ) coupling assessment based on time series pattern matching to quantify the influence of single and integrated disturbance among nodes in different pollution patterns. Periodic and random PQ patterns are constructed by using multidimensional frequency-domain decomposition for all disturbances. A multidimensional piecewise linear representation based on local extreme points is proposed to extract the pattern features of single and integrated disturbance in consideration of disturbance variation trend and severity. A feature distance of pattern (FDP) is developed to implement pattern matching on univariate PQ time series (UPQTS) and multivariate PQ time series (MPQTS) to quantify the influence of single and integrated disturbance among nodes in the pollution patterns. Case studies on a 14-bus distribution system are performed and analyzed; the accuracy and applicability of the FDP in smart grid PQ coupling assessment are verified by comparison with other time series pattern matching methods.
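A piecewise linear representation anchored at local extreme points, the one-dimensional core of the multidimensional scheme above, can be sketched as follows (the input series is a hypothetical toy signal):

```python
def local_extrema(series):
    """Indices of local extreme points of a 1-D series, plus both endpoints."""
    idx = [0]
    for i in range(1, len(series) - 1):
        # a strict local max or min is where the slope changes sign
        if (series[i] - series[i - 1]) * (series[i + 1] - series[i]) < 0:
            idx.append(i)
    idx.append(len(series) - 1)
    return idx

def plr(series):
    """Piecewise linear representation: the segments joining successive
    local extrema, as (start, end, slope) triples capturing the variation
    trend (sign of slope) and severity (magnitude of slope)."""
    pts = local_extrema(series)
    return [(i, j, (series[j] - series[i]) / (j - i))
            for i, j in zip(pts, pts[1:])]

wave = [0, 1, 2, 1, 0, -1, 0, 2]
segments = plr(wave)
```

A pattern distance such as the FDP would then compare sequences of these (trend, severity) segments rather than raw samples.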
Directory of Open Access Journals (Sweden)
Bo Chen
2014-01-01
Full Text Available This paper aims to carry out a condition assessment of the solar radiation model and thermal loading of bridges. A modification factor is developed to change the distribution of solar intensities during a whole day. In addition, a new solar radiation model for civil engineering structures is proposed to consider the shelter effects induced by clouds, mountains, and surrounding structures. The heat transfer analysis of bridge components is conducted to calculate the temperature distributions based on the proposed new solar radiation model. By assuming that the temperature along the bridge longitudinal direction is constant, one typical bridge segment is specially studied. Fine finite element models of deck plates and corrugated sheets are constructed to examine the temperature distributions and thermal loading of bridge components. The feasibility and validity of the proposed solar radiation model are investigated through detailed numerical simulation and parametric study. The numerical results are compared with the field measurement data obtained from the long-term monitoring system of the bridge and they show very good agreement in terms of temperature distribution at different time instants and in different seasons. The real application verifies the effectiveness and validity of the proposed solar radiation and heat transfer analysis.
Zhang, Bin; Deng, Congying; Zhang, Yi
2018-03-01
Rolling element bearings are mechanical components used frequently in most rotating machinery and they are also vulnerable links representing the main source of failures in such systems. Thus, health condition monitoring and fault diagnosis of rolling element bearings have long been studied to improve the operational reliability and maintenance efficiency of rotating machines. Over the past decade, prognosis, which enables forewarning of failure and estimation of residual life, has attracted increasing attention. To accurately and efficiently predict failure of a rolling element bearing, the degradation must be well represented and modelled. For this purpose, degradation of the rolling element bearing is analysed with the delay-time-based model in this paper. Also, a hybrid feature selection and health indicator construction scheme is proposed for extraction of the bearing health relevant information from condition monitoring sensor data. Effectiveness of the presented approach is validated through case studies on rolling element bearing run-to-failure experiments.
Self-calibration for lab-μCT using space-time regularized projection-based DVC and model reduction
Jailin, C.; Buljac, A.; Bouterf, A.; Poncelet, M.; Hild, F.; Roux, S.
2018-02-01
An online calibration procedure for x-ray lab-CT is developed using projection-based digital volume correlation. An initial reconstruction of the sample is positioned in the 3D space for every angle so that its projection matches the initial one. This procedure allows a space-time displacement field to be estimated for the scanned sample, which is regularized with (i) rigid body motions in space and (ii) modal time shape functions computed using model reduction techniques (i.e. proper generalized decomposition). The result is an accurate identification of the position of the sample adapted for each angle, which may deviate from the desired perfect rotation required for standard reconstructions. An application of this procedure to a 4D in situ mechanical test is shown. The proposed correction leads to a much improved tomographic reconstruction quality.
International Nuclear Information System (INIS)
Guo, Zhenhai; Chi, Dezhong; Wu, Jie; Zhang, Wenyu
2014-01-01
Highlights: • Impact of meteorological factors on wind speed forecasting is taken into account. • Forecasted wind speed results are corrected by the association rules. • Forecasting accuracy is improved by the new wind speed forecasting strategy. • Robustness of the proposed model is validated by data sampled from different sites. - Abstract: Wind energy has been the fastest growing renewable energy resource in recent years. Because of the intermittent nature of wind, wind power is a fluctuating source of electrical energy. Therefore, to minimize the impact of wind power on the electrical grid, accurate and reliable wind power forecasting is mandatory. In this paper, a new wind speed forecasting approach based on the chaotic time series modelling technique and the Apriori algorithm has been developed. The new approach consists of four procedures: (I) Clustering by using the k-means clustering approach; (II) Employing the Apriori algorithm to discover the association rules; (III) Forecasting the wind speed according to the chaotic time series forecasting model; and (IV) Correcting the forecasted wind speed data using the association rules discovered previously. This procedure has been verified by 31-day-ahead daily average wind speed forecasting case studies, which employed the wind speed and other meteorological data collected from four meteorological stations located in the Hexi Corridor area of China. The results of these case studies reveal that the chaotic forecasting model can efficiently improve the accuracy of the wind speed forecasting, and the Apriori algorithm can effectively discover the association rules between the wind speed and other meteorological factors. In addition, the correction results demonstrate that the association rules discovered by the Apriori algorithm are effective in correcting the forecasted wind speed values when they do not match the classification discovered by the association rules.
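The Apriori step (procedure II above) mines association rules from frequent itemsets; the level-wise frequent-itemset search itself can be sketched as follows, with hypothetical weather-condition "transactions" standing in for the clustered meteorological data:

```python
from itertools import combinations  # not strictly needed; unions suffice here

def apriori(transactions, min_support):
    """Frequent itemsets by level-wise Apriori search: a k-itemset can be
    frequent only if all its (k-1)-subsets are, so each level is generated
    from the previous one and pruned by a support threshold."""
    n = len(transactions)
    sets = [frozenset(t) for t in transactions]
    support = lambda items: sum(items <= t for t in sets) / n
    # level 1: frequent single items
    items = {i for t in sets for i in t}
    level = {frozenset([i]) for i in items if support(frozenset([i])) >= min_support}
    frequent = set(level)
    k = 2
    while level:
        # candidate k-itemsets as unions of frequent (k-1)-itemsets
        cands = {a | b for a in level for b in level if len(a | b) == k}
        level = {c for c in cands if support(c) >= min_support}
        frequent |= level
        k += 1
    return frequent

baskets = [{"windy", "cold"}, {"windy", "cold", "dry"},
           {"windy", "dry"}, {"cold", "dry"}]
freq = apriori(baskets, min_support=0.5)
```

Association rules are then read off each frequent itemset by splitting it into an antecedent and a consequent and checking the rule's confidence.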
Galvín, P.; Romero, A.
2014-05-01
This paper presents a numerical method based on a three dimensional boundary element-finite element (BEM-FEM) coupled formulation in the time domain. The proposed model allows studying soil-structure interaction problems. The soil is modelled with the BEM, where the radiation condition is implicitly satisfied in the fundamental solution. Half-space Green's function including internal soil damping is considered as the fundamental solution. An effective treatment based on the integration into a complex Jordan path is proposed to avoid the singularities at the arrival time of the Rayleigh waves. The efficiency of the BEM is improved taking into account the spatial symmetry and the invariance of the fundamental solution when it is expressed in a dimensionless form. The FEM is used to represent the structure. The proposed method is validated by comparison with analytical solutions and numerical results presented in the literature. Finally, a soil-structure interaction problem concerning with a building subjected to different incident wave fields is studied.
Time-frequency representation based on time-varying ...
Indian Academy of Sciences (India)
A parametric time-frequency representation is presented based on the time-varying autoregressive model (TVAR), followed by applications to non-stationary vibration signal processing. The identification of time-varying model coefficients and the determination of model order are addressed by means of neural networks and ...
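The TVAR idea can be illustrated with windowed least squares standing in for the neural-network coefficient identification used in the paper; the chirp signal and all window parameters below are synthetic assumptions of this sketch:

```python
import numpy as np

def tvar_coeffs(x, order=2, win=64, step=16):
    """Track time-varying AR coefficients by least squares on sliding
    windows (a simple stand-in for neural-network identification).
    Returns one coefficient vector [a1, ..., a_order] per window, where
    x[t] ~ a1*x[t-1] + ... + a_order*x[t-order]."""
    coeffs = []
    for s in range(0, len(x) - win, step):
        seg = x[s:s + win]
        # regress seg[t] on its `order` previous samples
        A = np.column_stack([seg[order - k:len(seg) - k]
                             for k in range(1, order + 1)])
        y = seg[order:]
        a, *_ = np.linalg.lstsq(A, y, rcond=None)
        coeffs.append(a)
    return coeffs

# synthetic chirp: for a local angular frequency w, an AR(2) fit gives
# a1 ~ 2*cos(w), so a1 should decrease as the chirp sweeps upward
t = np.arange(512)
f = 0.05 + 0.15 * t / 512                  # cycles per sample
x = np.cos(2 * np.pi * np.cumsum(f))
coeffs = tvar_coeffs(x, order=2, win=64, step=16)
```

The time-frequency representation then follows by evaluating the AR spectrum of each window's coefficients.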
Kang, Shuo; Yan, Hao; Dong, Lijing; Li, Changchun
2018-03-01
This paper addresses the force tracking problem of an electro-hydraulic load simulator under the influence of nonlinear friction and uncertain disturbance. A nonlinear system model combined with the improved generalized Maxwell-slip (GMS) friction model is first derived to describe the characteristics of the load simulator system more accurately. Then, by using the particle swarm optimization (PSO) algorithm combined with an analysis of the system hysteresis characteristics, the GMS friction parameters are identified. To compensate for nonlinear friction and uncertain disturbance, a finite-time adaptive sliding mode control method is proposed based on the accurate system model. This controller ensures that the system state moves along the nonlinear sliding surface to steady state in a short time, and provides good dynamic properties under the influence of parametric uncertainties and disturbance, further improving the force loading accuracy and rapidity. At the end of this work, simulation and experimental results are employed to demonstrate the effectiveness of the proposed sliding mode control strategy.
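The PSO identification step can be sketched with a basic swarm minimising a stand-in error surface; the quadratic target below is hypothetical and merely plays the role of the GMS model-fitting error:

```python
import random

def pso(f, bounds, n=20, iters=80, w=0.7, c1=1.5, c2=1.5, seed=3):
    """Minimise f over a box with a basic particle swarm: each particle is
    pulled towards its personal best and the swarm's global best."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    V = [[0.0] * len(bounds) for _ in range(n)]
    P, pbest = [x[:] for x in X], [f(x) for x in X]
    gi = min(range(n), key=pbest.__getitem__)
    g, gbest = P[gi][:], pbest[gi]
    for _ in range(iters):
        for i in range(n):
            for d, (lo, hi) in enumerate(bounds):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (g[d] - X[i][d]))
                X[i][d] = min(max(X[i][d] + V[i][d], lo), hi)
            fx = f(X[i])
            if fx < pbest[i]:
                P[i], pbest[i] = X[i][:], fx
                if fx < gbest:
                    g, gbest = X[i][:], fx
    return g, gbest

# toy identification: recover two "friction parameters" (1.0, 0.5) by
# minimising a hypothetical quadratic fitting error
err = lambda p: (p[0] - 1.0) ** 2 + (p[1] - 0.5) ** 2
best, best_err = pso(err, [(-5.0, 5.0), (-5.0, 5.0)])
```

In the paper's setting, `err` would instead compare measured and GMS-simulated friction forces over the hysteresis data.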
Directory of Open Access Journals (Sweden)
Quan Wang
2017-08-01
Full Text Available The ability to learn sequential behaviors is a fundamental property of our brains. Yet a long stream of studies including recent experiments investigating motor sequence learning in adult human subjects have produced a number of puzzling and seemingly contradictory results. In particular, when subjects have to learn multiple action sequences, learning is sometimes impaired by proactive and retroactive interference effects. In other situations, however, learning is accelerated as reflected in facilitation and transfer effects. At present it is unclear what the underlying neural mechanisms are that give rise to these diverse findings. Here we show that a recently developed recurrent neural network model readily reproduces this diverse set of findings. The self-organizing recurrent neural network (SORN) model is a network of recurrently connected threshold units that combines a simplified form of spike-timing dependent plasticity (STDP) with homeostatic plasticity mechanisms ensuring network stability, namely intrinsic plasticity (IP) and synaptic normalization (SN). When trained on sequence learning tasks modeled after recent experiments we find that it reproduces the full range of interference, facilitation, and transfer effects. We show how these effects are rooted in the network's changing internal representation of the different sequences across learning and how they depend on an interaction of training schedule and task similarity. Furthermore, since learning in the model is based on fundamental neuronal plasticity mechanisms, the model reveals how these plasticity mechanisms are ultimately responsible for the network's sequence learning abilities. In particular, we find that all three plasticity mechanisms are essential for the network to learn effective internal models of the different training sequences. This ability to form effective internal models is also the basis for the observed interference and facilitation effects. This suggests that
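The pairwise STDP rule that SORN simplifies can be sketched as the classic exponential spike-pair window; the amplitudes and time constant below are hypothetical illustrative values, not those of the SORN model:

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pairwise additive STDP window: the weight change for a spike pair
    separated by dt = t_post - t_pre (in ms). Causal pairs (dt > 0)
    potentiate, anti-causal pairs depress, both decaying exponentially
    with |dt| at time constant tau."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)

# sweep the window: potentiation for causal, depression for anti-causal pairs
curve = {dt: stdp_dw(dt) for dt in (-40, -10, 10, 40)}
```

Homeostatic mechanisms such as SN would then rescale each unit's incoming weights so that these correlation-driven changes cannot destabilise the network.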
International Nuclear Information System (INIS)
Yang, Li; Ma, Xiaobing; Zhai, Qingqing; Zhao, Yu
2016-01-01
We propose an inspection and replacement policy for a single component system that successively executes missions with random durations. The failure process of the system can be divided into two states, namely, normal and defective, following the delay time concept. Inspections are carried out periodically and immediately after the completion of each mission (random inspections). The failed state is always identified immediately, whereas the defective state can only be revealed by an inspection. If the system fails or is defective at a periodic inspection, then replacement is immediate. If, however, the system is defective at a random inspection, then replacement will be postponed if the time to the subsequent periodic inspection is shorter than a pre-determined threshold, and immediate otherwise. We derive the long run expected cost per unit time and then investigate the optimal periodic inspection interval and postponement threshold. A numerical example is presented to demonstrate the applicability of the proposed maintenance policy. - Highlights: • A delay time model of inspection is introduced for mission-based systems. • Periodic and random inspections are performed to check the state. • Replacement of the defective system at a random inspection can be postponed.
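The long-run expected cost per unit time of such a policy can be illustrated with a Monte Carlo sketch of a simplified, strictly periodic special case (no random mission-end inspections and no postponement threshold; exponential defect-arrival and delay times; all costs and rates are illustrative):

```python
import random

def cycle(tau, c_i, c_p, c_f, lam_u, lam_d, rng):
    """One renewal cycle of a periodic-inspection delay-time policy.

    The system becomes defective after an Exp(lam_u) time and fails after a
    further Exp(lam_d) delay. Inspections every tau time units detect the
    defective state; failures are self-announcing. Returns (cost, length).
    """
    u = rng.expovariate(lam_u)          # time to defect arrival
    h = rng.expovariate(lam_d)          # delay time from defect to failure
    t_fail = u + h
    k = 1
    while k * tau < t_fail:             # inspection k happens before failure
        if k * tau > u:                 # defect present -> detected
            return k * c_i + c_p, k * tau
        k += 1
    # failure occurs before a detecting inspection; k-1 inspections were done
    return (k - 1) * c_i + c_f, t_fail

rng = random.Random(1)
costs = lengths = 0.0
for _ in range(100_000):
    c, l = cycle(tau=5.0, c_i=1.0, c_p=10.0, c_f=100.0,
                 lam_u=0.1, lam_d=0.5, rng=rng)
    costs += c
    lengths += l
rate = costs / lengths     # long-run expected cost per unit time (renewal reward)
```

By the renewal-reward theorem, the ratio of expected cycle cost to expected cycle length converges to the long-run cost rate, which is the objective the paper optimizes over the inspection interval and postponement threshold.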
Using Agent-Based Modeling to Enhance System-Level Real-time Control of Urban Stormwater Systems
Rimer, S.; Mullapudi, A. M.; Kerkez, B.
2017-12-01
The ability to reduce combined-sewer overflow (CSO) events is an issue that challenges over 800 U.S. municipalities. When the volume of a combined sewer system or wastewater treatment plant is exceeded, untreated wastewater then overflows (a CSO event) into nearby streams, rivers, or other water bodies causing localized urban flooding and pollution. The likelihood and impact of CSO events has only exacerbated due to urbanization, population growth, climate change, aging infrastructure, and system complexity. Thus, there is an urgent need for urban areas to manage CSO events. Traditionally, mitigating CSO events has been carried out via time-intensive and expensive structural interventions such as retention basins or sewer separation, which are able to reduce CSO events, but are costly, arduous, and only provide a fixed solution to a dynamic problem. Real-time control (RTC) of urban drainage systems using sensor and actuator networks has served as an inexpensive and versatile alternative to traditional CSO intervention. In particular, retrofitting individual stormwater elements for sensing and automated active distributed control has been shown to significantly reduce the volume of discharge during CSO events, with some RTC models demonstrating a reduction upwards of 90% when compared to traditional passive systems. As more stormwater elements become retrofitted for RTC, system-level RTC across complete watersheds is an attainable possibility. However, when considering the diverse set of control needs of each of these individual stormwater elements, such system-level RTC becomes a far more complex problem. To address such diverse control needs, agent-based modeling is employed such that each individual stormwater element is treated as an autonomous agent with a diverse decision making capabilities. We present preliminary results and limitations of utilizing the agent-based modeling computational framework for the system-level control of diverse, interacting
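The agent-based framing can be sketched in a toy form: each storage basin is an agent that decides its valve state from local storage and the downstream load it observes. The class, thresholds, and open/close rule below are invented for illustration only; the cited work concerns real stormwater networks:

```python
# Toy agent-based sketch of distributed stormwater control. All names,
# thresholds, and decision rules are illustrative assumptions.
class BasinAgent:
    def __init__(self, capacity, release_rate):
        self.capacity = capacity
        self.release_rate = release_rate
        self.volume = 0.0
        self.valve_open = False

    def decide(self, downstream_load, downstream_limit):
        # Open only when nearly full, or when downstream has spare capacity.
        self.valve_open = (self.volume > 0.8 * self.capacity
                           or downstream_load < downstream_limit)

    def step(self, inflow):
        self.volume = min(self.capacity, self.volume + inflow)
        out = min(self.release_rate, self.volume) if self.valve_open else 0.0
        self.volume -= out
        return out

agents = [BasinAgent(capacity=100.0, release_rate=5.0) for _ in range(3)]
downstream_limit = 5.0
loads = []
for t in range(60):
    load = 0.0
    for agent in agents:        # agents decide sequentially in this sketch
        agent.decide(load, downstream_limit)
        load += agent.step(inflow=3.0)
    loads.append(load)
```

Even this toy shows the coordination problem the abstract raises: a basin that politely holds back for downstream capacity must eventually release when nearly full, so purely local rules can still produce downstream load spikes.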
Directory of Open Access Journals (Sweden)
David A Shoham
Full Text Available Recent studies suggest that obesity may be "contagious" between individuals in social networks. Social contagion (influence), however, may not be identifiable using traditional statistical approaches because they cannot distinguish contagion from homophily (the propensity for individuals to select friends who are similar to themselves) or from shared environmental influences. In this paper, we apply the stochastic actor-based model (SABM) framework developed by Snijders and colleagues to data on adolescent body mass index (BMI), screen time, and playing active sports. Our primary hypothesis was that social influences on adolescent body size and related behaviors are independent of friend selection. Employing the SABM, we simultaneously modeled network dynamics (friendship selection based on homophily and structural characteristics of the network) and social influence. We focused on the 2 largest schools in the National Longitudinal Study of Adolescent Health (Add Health) and held the school environment constant by examining the 2 school networks separately (N = 624 and 1151). Results show support in both schools for homophily on BMI, but also for social influence on BMI. There was no evidence of homophily on screen time in either school, while only one of the schools showed homophily on playing active sports. There was, however, evidence of social influence on screen time in one of the schools, and playing active sports in both schools. These results suggest that both homophily and social influence are important in understanding patterns of adolescent obesity. Intervention efforts should take into consideration peers' influence on one another, rather than treating "high risk" adolescents in isolation.
Nelson, Matthew P.; Tazik, Shawna K.; Bangalore, Arjun S.; Treado, Patrick J.; Klem, Ethan; Temple, Dorota
2017-05-01
Hyperspectral imaging (HSI) systems can provide detection and identification of a variety of targets in the presence of complex backgrounds. However, current generation sensors are typically large, costly to field, do not usually operate in real time, and have limited sensitivity and specificity. Despite these shortcomings, HSI-based intelligence has proven to be a valuable tool, thus resulting in increased demand for this type of technology. By moving the next generation of HSI technology into a more adaptive configuration, and a smaller and more cost-effective form factor, HSI technologies can help maintain a competitive advantage for the U.S. armed forces as well as local, state and federal law enforcement agencies. Operating near the physical limits of HSI system capability is often necessary and very challenging, but is often enabled by rigorous modeling of detection performance. Specific performance envelopes we consistently strive to improve include operating under low signal-to-background conditions, at ever higher frame rates, and under less-than-ideal motion control scenarios. An adaptable, low-cost, low-footprint, standoff sensor architecture we have been maturing includes the use of conformal liquid crystal tunable filters (LCTFs). These Conformal Filters (CFs) are electro-optically tunable, multivariate HSI spectrometers that, when combined with Dual Polarization (DP) optics, produce optimized spectral passbands on demand, which can readily be reconfigured to discriminate targets from complex backgrounds in real time. With DARPA support, ChemImage Sensor Systems (CISS™), in collaboration with Research Triangle Institute (RTI) International, is developing a novel, real-time, adaptable, compressive-sensing short-wave infrared (SWIR) hyperspectral imaging technology called the Reconfigurable Conformal Imaging Sensor (RCIS) based on DP-CF technology. RCIS will address many shortcomings of current generation systems and offer improvements in
Adolf Szabó, János; Zoltán Réti, Gábor; Tóth, Tünde
2017-04-01
Today, the most significant mission of decision makers on integrated water management issues is to carry out sustainable management, sharing resources between a variety of users and the environment under conditions of considerable uncertainty (such as climate, land-use, and population change). In light of this increasing water management complexity, we consider that the most pressing need is to develop and implement up-to-date GIS model-based real-time hydrological forecasting and operation management systems to aid decision-making processes and improve water management. After years of research and development, HYDROInform Ltd. has developed an integrated, on-line IT system (DIWA-HFMS: DIstributed WAtershed - Hydrologic Forecasting & Modelling System) which is able to support a wide range of operational tasks in water resources management, such as forecasting, operation of lakes and reservoirs, and water control and management. Following a test period, the DIWA-HFMS has been implemented for Lake Balaton and its watershed (at 500 m resolution) at the Central-Transdanubian Water Directorate (KDTVIZIG). The significant pillars of the system are: - The DIWA (DIstributed WAtershed) hydrologic model, a 3D dynamic water-balance model that is distributed both in space and in its parameters, developed along combined principles but mostly based on physical foundations. The DIWA integrates 3D soil-, 2D surface-, and 1D channel-hydraulic components as well. - A lakes-and-reservoirs operating component; - A radar-data integration module; - Fully online data collection tools; - A scenario manager tool to create alternative scenarios; - An interactive, intuitive, highly graphical user interface. In Vienna, the main functions, operations and results-management of the system will be presented.
Zheng, F.
2011-01-01
Urban travel times are intrinsically uncertain due to many stochastic characteristics of traffic, especially at signalized intersections. A single travel time does not have much meaning and is not informative to drivers or traffic managers. The range of travel times is large such that certain
Müller, Dirk K; Pampel, André; Möller, Harald E
2013-05-01
Quantification of magnetization-transfer (MT) experiments is typically based on the assumption of the binary spin-bath model. This model allows for the extraction of up to six parameters (relative pool sizes, relaxation times, and exchange rate constants) for the characterization of macromolecules, which are coupled via exchange processes to the water in tissues. Here, an approach is presented for estimating MT parameters acquired with arbitrary saturation schemes and imaging pulse sequences. It uses matrix algebra to solve the Bloch-McConnell equations without unwarranted simplifications, such as assuming steady-state conditions for pulsed saturation schemes or neglecting imaging pulses. The algorithm achieves sufficient efficiency for voxel-by-voxel MT parameter estimations by using a polynomial interpolation technique. Simulations, as well as experiments in agar gels with continuous-wave and pulsed MT preparation, were performed for validation and for assessing approximations in previous modeling approaches. In vivo experiments in the normal human brain yielded results that were consistent with published data. Copyright © 2013 Elsevier Inc. All rights reserved.
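The matrix-algebra solution can be sketched for the longitudinal components of a two-pool spin-bath. The pool sizes, rates, and saturation term below are illustrative placeholders, and the transverse components and pulse-sequence details of the full approach are omitted:

```python
import numpy as np
from scipy.linalg import expm

# Two-pool (free water "f", macromolecular "m") longitudinal system with
# exchange and saturation of the bound pool. Illustrative values only.
R1f, R1m = 1.0, 1.0            # 1/s, longitudinal relaxation rates
kf = 2.0                       # f -> m exchange rate (1/s)
M0f, M0m = 0.9, 0.1            # equilibrium pool sizes
km = kf * M0f / M0m            # m -> f rate from detailed balance
Wsat = 5.0                     # saturation rate on the bound pool (1/s)

A = np.array([[-(R1f + kf),          km        ],
              [      kf,      -(R1m + km + Wsat)]])
b = np.array([R1f * M0f, R1m * M0m])

def M(t, M_init):
    """Solve dM/dt = A @ M + b exactly via the matrix exponential."""
    Mss = -np.linalg.solve(A, b)               # steady-state magnetization
    return Mss + expm(A * t) @ (M_init - Mss)

m_after = M(10.0, np.array([M0f, M0m]))        # approaches the steady state
```

Because the solution is an exact matrix exponential of the piecewise-constant system, a pulse sequence can be propagated segment by segment without assuming steady state, which is the core of the approach described in the abstract.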
Santana, Erico Soriano Martins; Mueller, Carlos
2003-01-01
Flight delays in Brazil, mostly occurring on the ground (airfield), cause serious disruptions at the airport level and trigger cascading problems throughout the airport system, also affecting the airspace. The present study develops an analysis of delay and travel times at Sao Paulo International Airport/Guarulhos (AISP/GRU) airfield based on a simulation model. Different airport physical and operational scenarios were analyzed by means of simulation. SIMMOD Plus 4.0, the computational tool developed to represent aircraft operation in the airspace and airside of airports, was used to perform these analyses. The study was mainly focused on aircraft operations on the ground: at the airport runway, taxi-lanes and aprons. The visualization of the operations with increasing demand facilitated the analyses. The results certify the viability of the methodology; they also indicate solutions capable of alleviating the delay problem through travel time analysis, thus reducing costs for users, mainly the airport authority. The study also indicated alternatives for airport operations, supporting decision-making and the appropriate timing of proposed changes to the existing infrastructure.
Introduction to Time Series Modeling
Kitagawa, Genshiro
2010-01-01
In time series modeling, the behavior of a certain phenomenon is expressed in relation to the past values of itself and other covariates. Since many important phenomena in statistical analysis are actually time series and the identification of conditional distribution of the phenomenon is an essential part of the statistical modeling, it is very important and useful to learn fundamental methods of time series modeling. Illustrating how to build models for time series using basic methods, "Introduction to Time Series Modeling" covers numerous time series models and the various tools f
Lu, Fengbin; Qiao, Han; Wang, Shouyang; Lai, Kin Keung; Li, Yuze
2017-01-01
This paper proposes a new time-varying coefficient vector autoregression (VAR) model, in which the coefficient is a linear function of dynamic lagged correlation. The proposed model allows for flexibility in the choice of dynamic correlation models (e.g. dynamic conditional correlation generalized autoregressive conditional heteroskedasticity (GARCH) models, Markov-switching GARCH models and multivariate stochastic volatility models), which means that it can describe many types of time-varying causal effects. Time-varying causal relations between West Texas Intermediate (WTI) crude oil and the US Standard and Poor's 500 (S&P 500) stock markets are examined by the proposed model. The empirical results show that their causal relations evolve with time and display complex characteristics. Both positive and negative causal effects of the WTI on the S&P 500 in the subperiods have been found and confirmed by the traditional VAR models. Similar results have been obtained for the causal effects of the S&P 500 on WTI. In addition, the proposed model outperforms the traditional VAR model. Copyright © 2016 Elsevier Ltd. All rights reserved.
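The core idea, a lagged-regression coefficient that moves with a dynamic correlation, can be sketched on simulated data. Here a simple rolling-window correlation stands in for the paper's dynamic correlation models (DCC-GARCH and the like), and all series and parameters are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
T, win = 500, 60
x = rng.standard_normal(T)                          # stand-in for WTI returns
y = 0.2 * np.roll(x, 1) + rng.standard_normal(T)    # stand-in for S&P 500 returns

# Rolling correlation as a crude stand-in for a dynamic correlation model
rho = np.full(T, np.nan)
for t in range(win, T):
    rho[t] = np.corrcoef(x[t - win:t], y[t - win:t])[0, 1]

# Time-varying coefficient equation: y_t = c + (a + b * rho_{t-1}) x_{t-1} + e_t,
# estimated by OLS on the interaction term (a simplification of the paper's model).
mask = ~np.isnan(rho[:-1])
X = np.column_stack([np.ones(mask.sum()),
                     x[:-1][mask],
                     (rho[:-1] * x[:-1])[mask]])
const, a_hat, b_hat = np.linalg.lstsq(X, y[1:][mask], rcond=None)[0]
```

The fitted lagged effect of x on y at time t is then `a_hat + b_hat * rho[t-1]`, so the causal coefficient itself varies with the state of the correlation, which is the mechanism the abstract describes.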
Directory of Open Access Journals (Sweden)
Milica Milosavljevic
2010-10-01
Full Text Available An important open problem is how values are compared to make simple choices. A natural hypothesis is that the brain carries out the computations associated with the value comparisons in a manner consistent with the Drift Diffusion Model (DDM), since this model has been able to account for a large amount of data in other domains. We investigated the ability of four different versions of the DDM to explain the data in a real binary food choice task under conditions of high and low time pressure. We found that a seven-parameter version of the DDM can account for the choice and reaction time data with high accuracy, in both the high and low time pressure conditions. The changes associated with the introduction of time pressure could be traced to changes in two key model parameters: the barrier height and the noise in the slope of the drift process.
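A basic drift-diffusion simulation illustrates the mechanics shared by the model variants; the parameter names and values here are illustrative, not the seven-parameter fit of the study:

```python
import numpy as np

def ddm_trial(drift, barrier, noise, dt=0.001, ndt=0.3, rng=None):
    """One drift-diffusion trial: returns (choice, reaction time).

    choice is +1 (upper barrier) or -1 (lower); ndt is non-decision time.
    Parameter names are illustrative, not the cited paper's parametrization.
    """
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < barrier:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x > 0 else -1), t + ndt

rng = np.random.default_rng(2)
slow = [ddm_trial(drift=1.0, barrier=1.0, noise=1.0, rng=rng) for _ in range(400)]
p_upper = np.mean([c == 1 for c, _ in slow])        # choice consistency
# Lowering the barrier, one account of time pressure, trades accuracy for speed
fast = [ddm_trial(drift=1.0, barrier=0.5, noise=1.0, rng=rng) for _ in range(400)]
mean_rt_slow = np.mean([t for _, t in slow])
mean_rt_fast = np.mean([t for _, t in fast])
```

Comparing the two simulated conditions shows the qualitative pattern the paper traces to the barrier-height parameter: lower barriers produce faster but noisier responses.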
Charalambous, C. A.; Pike, W. T.
2013-12-01
We present the development of a soil evolution framework and multiscale modelling of the surfaces of Mars, the Moon and Itokawa, thus providing an atlas of extra-terrestrial Particle Size Distributions (PSDs). These PSDs are based on a tailoring method which interconnects several datasets from different sites captured by the various missions. The final integrated product is then fully justified through a soil evolution analysis model mathematically constructed via fundamental physical principles (Charalambous, 2013). The construction of the PSD takes into account the macroscale fresh primary impacts and their products, the mesoscale distributions obtained by the in-situ data of surface missions (Golombek et al., 1997, 2012) and finally the microscopic-scale distributions provided by Curiosity and the Phoenix Lander (Pike, 2011). The distribution naturally extends to the scales at which current data do not exist owing to the lack of scientific instruments capable of capturing the particle populations there. The extension is based on the model distribution (Charalambous, 2013), which takes as parameters known values of material-specific probabilities of fragmentation and grinding limits. Additionally, the establishment of a closed-form statistical distribution provides a quantitative description of the soil's structure. Consequently, reverse engineering of the model distribution allows the synthesis of soil that faithfully represents the particle population at the studied sites (Charalambous, 2011). Such representation essentially delivers a virtual soil environment to work with for numerous applications. A specific application demonstrated here is the information that can be directly extracted on the successful drilling probability as a function of distance, in an effort to aid the HP3 instrument of the 2016 InSight Mission to Mars. Pike, W. T., et al. "Quantification of the dry history of the Martian soil inferred from in situ microscopy
Ma, Fuyu; Cao, Weixing; Zhang, Lizhen; Zhu, Yan; Li, Shaokun; Zhou, Zhiguo; Li, Cundong; Xu, Lihua
2005-04-01
In this study, three cotton varieties (CRI 36, CRI 35 and CRI 41) were planted in Nanjing, Anyang, Baoding and Shihezi, respectively, in 2002, and the dynamic relationships between their development and environmental factors were analyzed. On this basis, a simulation model for cotton development stages and square and boll development was built in terms of physiological development time (PDT). In calculating relative thermal effectiveness, the effect of diurnal temperature differences in different regions on cotton development was incorporated, and the enhancement of air temperature by plastic mulching was quantified. To simulate development stages, the initial fruiting node index (IFIN), sunlight duration factor (FSH), and solar radiation index on fruiting branch (IFBR) were introduced, besides the earliness factor of a given genotype. The validation of the model with data obtained from different years, ecological zones, genotypes, and cultivation practices indicated a high goodness of fit between the simulated results and observed values. The root mean square error (RMSE) between simulated and observed days from sowing to emergence, emergence to squaring, anthesis to boll opening, and sowing to boll opening was 0.9, 2.2, 1.7, and 2.1 d, respectively, with a mean of 2.1 d, and in all planting sites, the RMSE between simulated and observed days from squaring to boll opening was 1.8-3.7 d, and that from squaring to opening was 4.6-5.8 d.
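The physiological development time (PDT) bookkeeping can be sketched with a generic temperature-response function. The cardinal temperatures and the triangular response below are illustrative assumptions, not the paper's calibrated functions (which also account for diurnal temperature range, plastic mulching, photoperiod, and radiation):

```python
def relative_thermal_effectiveness(t, t_base=15.0, t_opt=29.0, t_ceil=40.0):
    """Triangular temperature response of development rate, scaled to 0..1.
    Cardinal temperatures are illustrative placeholders."""
    if t <= t_base or t >= t_ceil:
        return 0.0
    if t <= t_opt:
        return (t - t_base) / (t_opt - t_base)
    return (t_ceil - t) / (t_ceil - t_opt)

def accumulated_pdt(daily_mean_temps):
    """PDT accumulates one 'physiological day' per day of optimal conditions."""
    return sum(relative_thermal_effectiveness(t) for t in daily_mean_temps)

# A development stage is predicted complete when accumulated PDT reaches the
# stage's (genotype-specific) requirement.
temps = [18, 22, 26, 30, 31, 28, 24]
print(round(accumulated_pdt(temps), 2))   # → 4.8
```

Because PDT is accumulated in normalized physiological days rather than calendar days, the same stage requirement transfers across sites and years with different temperature regimes, which is what allows one model to fit Nanjing, Anyang, Baoding and Shihezi together.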
Abbring, J.H.
2009-01-01
We study mixed hitting-time models, which specify durations as the first time a Levy process (a continuous-time process with stationary and independent increments) crosses a heterogeneous threshold. Such models are of substantial interest because they can be reduced from optimal-stopping models with
International Nuclear Information System (INIS)
Santos, Marcelo C. dos; Pereira, Claudio M.N.A.; Schirru, Roberto; Pinheiro, André
2017-01-01
Atmospheric radionuclide dispersion systems (ARDS) are essential mechanisms to predict the consequences of unexpected radioactive releases from nuclear power plants. During an accident involving a radioactive material release, an accurate forecast is vital to guide the evacuation plan for the possibly affected areas. However, in order to predict the dispersion of the radioactive material and its impact on the environment, the model must process information about the source term (radioactive materials released, activities and location), weather conditions (wind, humidity and precipitation) and geographical characteristics (topography). Furthermore, an ARDS is basically composed of four main modules: Source Term, Wind Field, Plume Dispersion and Dose Calculations. The Wind Field and Plume Dispersion modules are the ones that require high computational performance to achieve accurate results within an acceptable time. Taking this into account, this work focuses on the development of a GPU-based parallel Plume Dispersion module, focusing on the radionuclide transport and diffusion calculations, which use a given wind field and a released source term as parameters. The program is being developed using the C++ programming language, together with CUDA libraries. In a comparative case study between parallel and sequential versions of the slowest function of the Plume Dispersion module, a speedup of 11.63 times was observed. (author)
Energy-based method for near-real time modeling of sound field in complex urban environments.
Pasareanu, Stephanie M; Remillieux, Marcel C; Burdisso, Ricardo A
2012-12-01
Prediction of the sound field in large urban environments has been limited thus far by the heavy computational requirements of conventional numerical methods such as boundary element (BE) or finite-difference time-domain (FDTD) methods. Recently, a considerable amount of work has been devoted to developing energy-based methods for this application, and results have shown the potential to compete with conventional methods. However, these developments have been limited to two-dimensional (2-D) studies (along street axes), and no real description of the phenomena at issue has been provided. Here the mathematical theory of diffusion is used to predict the sound field in 3-D complex urban environments. A 3-D diffusion equation is implemented by means of a simple finite-difference scheme and applied to two different types of urban configurations. This modeling approach is validated against FDTD and geometrical acoustics (GA) solutions, showing good overall agreement. The role played by diffraction near building edges close to the source is discussed, and suggestions are made on the possibility of accurately predicting the sound field in complex urban environments in near-real-time simulations.
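The energy-based approach reduces to integrating a diffusion equation for the acoustic energy density. A minimal explicit finite-difference sketch is given below; the coefficients are illustrative, the boundaries are periodic for brevity, and no building geometry or source calibration is included:

```python
import numpy as np

# dw/dt = D * laplacian(w) - c*m*w + q : acoustic energy density w diffusing
# with coefficient D, atmospheric attenuation m, sound speed c, source q.
# All values below are illustrative, not calibrated to any real scene.
nx = ny = nz = 20
dx, dt = 1.0, 0.002          # dt kept below the stability limit dx**2 / (6*D)
D, c, m = 50.0, 343.0, 0.001
w = np.zeros((nx, ny, nz))
src = (10, 10, 2)            # steady point source near the ground

for _ in range(200):
    lap = (-6.0 * w
           + np.roll(w, 1, 0) + np.roll(w, -1, 0)
           + np.roll(w, 1, 1) + np.roll(w, -1, 1)
           + np.roll(w, 1, 2) + np.roll(w, -1, 2)) / dx**2
    w = w + dt * (D * lap - c * m * w)
    w[src] += dt * 1.0       # inject source energy each step
```

Because each time step is a handful of array operations rather than a full-wave solve, this is the kind of scheme that can plausibly approach the near-real-time performance the abstract claims, at the cost of discarding phase information.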
Breen, Michael S; Long, Thomas C; Schultz, Bradley D; Crooks, James; Breen, Miyuki; Langstaff, John E; Isaacs, Kristin K; Tan, Yu-Mei; Williams, Ronald W; Cao, Ye; Geller, Andrew M; Devlin, Robert B; Batterman, Stuart A; Buckley, Timothy J
2014-07-01
A critical aspect of air pollution exposure assessment is the estimation of the time spent by individuals in various microenvironments (ME). Accounting for the time spent in different ME with different pollutant concentrations can reduce exposure misclassifications, while failure to do so can add uncertainty and bias to risk estimates. In this study, a classification model, called MicroTrac, was developed to estimate time of day and duration spent in eight ME (indoors and outdoors at home, work, school; inside vehicles; other locations) from global positioning system (GPS) data and geocoded building boundaries. Based on a panel study, MicroTrac estimates were compared with 24-h diary data from nine participants, with corresponding GPS data and building boundaries of home, school, and work. MicroTrac correctly classified the ME for 99.5% of the daily time spent by the participants. The capability of MicroTrac could help to reduce the time-location uncertainty in air pollution exposure models and exposure metrics for individuals in health studies.
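The classification step can be sketched as simple rules over GPS fixes and geocoded boundaries. The boxes, speed threshold, and labels below are invented for illustration; MicroTrac itself uses real building polygons and a richer rule set:

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned stand-in for a geocoded building boundary."""
    name: str
    xmin: float
    ymin: float
    xmax: float
    ymax: float

    def contains(self, x, y):
        return self.xmin <= x <= self.xmax and self.ymin <= y <= self.ymax

buildings = [Box("home", 0, 0, 10, 10), Box("work", 50, 50, 70, 70)]

def classify(x, y, speed_mps):
    """Assign one GPS fix to a microenvironment."""
    if speed_mps > 5.0:          # illustrative in-vehicle speed threshold
        return "in_vehicle"
    for b in buildings:
        if b.contains(x, y):
            return b.name
    return "other"

track = [(5, 5, 0.1), (30, 30, 12.0), (60, 60, 0.5)]
labels = [classify(*fix) for fix in track]
print(labels)   # → ['home', 'in_vehicle', 'work']
```

Summing the dwell time of consecutive fixes with the same label then yields the time-of-day and duration estimates per microenvironment that feed the exposure model.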
Probabilistic Survivability Versus Time Modeling
Joyner, James J., Sr.
2015-01-01
This technical paper documents the Kennedy Space Center Independent Assessment team's work on three assessments for the Ground Systems Development and Operations (GSDO) Program, conducted to assist the Chief Safety and Mission Assurance Officer (CSO) and GSDO management during key programmatic reviews. The assessments provided the GSDO Program with an analysis of how egress time affects the likelihood of astronaut and worker survival during an emergency. For each assessment, the team developed probability distributions for hazard scenarios to address statistical uncertainty, resulting in survivability plots over time. The first assessment developed a mathematical model of probabilistic survivability versus time to reach a safe location using an ideal Emergency Egress System at Launch Complex 39B (LC-39B); the second used the first model to evaluate and compare various egress systems under consideration at LC-39B. The third used a modified LC-39B model to determine whether a specific hazard decreased survivability more rapidly than other events during flight hardware processing in Kennedy's Vehicle Assembly Building (VAB). Based on the composite survivability-versus-time graphs from the first two assessments, there was a soft knee in the Figure of Merit graphs at eight minutes (ten minutes after egress was ordered). Thus, the graphs illustrated to the decision makers that the final emergency egress design selected should be capable of transporting the flight crew from the top of LC-39B to a safe location in eight minutes or less. Results for the third assessment were dominated by hazards that were classified as instantaneous in nature (e.g. stacking mishaps) and therefore had no effect on survivability versus time to egress the VAB. VAB emergency scenarios that degraded over time (e.g. fire) produced survivability-versus-time graphs that were in line with aerospace industry norms.
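The survivability-versus-time construction can be sketched as a Monte Carlo mixture of hazard scenarios, each with an occurrence weight and a distribution over time until conditions become lethal. Every scenario, weight, and distribution below is an invented placeholder, not an actual GSDO hazard:

```python
import random

# Survivability(t) = P(conditions remain survivable longer than egress time t),
# mixing scenario classes including an "instantaneous" one, as in the abstract.
scenarios = [
    # (probability weight, sampler for time-to-lethality in minutes)
    (0.5, lambda r: r.lognormvariate(2.3, 0.4)),   # e.g. a growing fire
    (0.3, lambda r: r.expovariate(1 / 15.0)),      # e.g. a spreading release
    (0.2, lambda r: 0.0),                          # instantaneous event
]

def survivability(egress_time, n=50_000, seed=5):
    r = random.Random(seed)
    survived = 0
    for _ in range(n):
        u, acc = r.random(), 0.0
        for weight, sample in scenarios:   # pick a scenario by its weight
            acc += weight
            if u < acc:
                survived += sample(r) > egress_time
                break
    return survived / n

curve = [survivability(t) for t in (2.0, 8.0, 20.0)]
```

Note how the instantaneous scenario caps the curve from the start regardless of egress speed, reproducing the third assessment's finding that such hazards are insensitive to egress time, while the time-degrading scenarios produce the declining portion of the curve where an eight-minute requirement can sit on a "soft knee".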
Directory of Open Access Journals (Sweden)
Kai Wang
2016-01-01
Full Text Available Health is vital to every human being. To further improve its already respectable medical technology, the medical community is transitioning towards a proactive approach that anticipates and mitigates risks before people become ill. This approach requires measuring the physiological signals of humans and analyzing these data at regular intervals. In this paper, we present a novel approach to applying deep learning in physiological signal analysis that allows doctors to identify latent risks. However, extracting high-level information from physiological time-series data is a hard problem faced by the machine learning community. Therefore, in this approach, we apply a model based on a convolutional neural network that can automatically learn features from raw physiological signals in an unsupervised manner, and then, based on the learned features, use a multivariate Gaussian distribution anomaly detection method to detect anomalous data. Our experiments show significant performance in physiological signal anomaly detection, so this is a promising tool for doctors to identify early signs of illness even if the criteria are unknown a priori.
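The anomaly-detection stage can be sketched on its own: fit a multivariate Gaussian to feature vectors from normal recordings and flag low-density points. The random vectors below merely stand in for CNN-learned features, and the dimensions and threshold quantile are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
# Stand-ins for features a network might extract from physiological signals
normal = rng.normal(0.0, 1.0, size=(1000, 4))      # training (healthy) data
anomalous = rng.normal(4.0, 1.0, size=(10, 4))     # shifted test points

# Fit a multivariate Gaussian to the normal data
mu = normal.mean(axis=0)
cov = np.cov(normal, rowvar=False)
inv = np.linalg.inv(cov)
logdet = np.linalg.slogdet(cov)[1]

def log_density(x):
    d = x - mu
    return -0.5 * (d @ inv @ d + logdet + len(mu) * np.log(2 * np.pi))

# Threshold set from a low quantile of the normal data's own densities
threshold = np.quantile([log_density(x) for x in normal], 0.01)
flags = [log_density(x) < threshold for x in anomalous]
```

Because the detector only models what "normal" looks like, it needs no labeled examples of illness, which matches the abstract's point about finding anomalies even when the diagnostic criteria are unknown a priori.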
Toward real-time endoscopically-guided robotic navigation based on a 3D virtual surgical field model
Gong, Yuanzheng; Hu, Danying; Hannaford, Blake; Seibel, Eric J.
2015-03-01
The challenge is to accurately guide the surgical tool within the three-dimensional (3D) surgical field for robotically-assisted operations such as tumor margin removal from a debulked brain tumor cavity. The proposed technique is 3D image-guided surgical navigation based on matching intraoperative video frames to a 3D virtual model of the surgical field. A small laser-scanning endoscopic camera was attached to a mock minimally-invasive surgical tool that was manipulated toward a region of interest (residual tumor) within a phantom of a debulked brain tumor. Video frames from the endoscope provided features that were matched to the 3D virtual model, which was reconstructed earlier by raster scanning over the surgical field. Camera pose (position and orientation) is recovered by implementing a constrained bundle adjustment algorithm. Navigational error during the approach to the fluorescence target (residual tumor) is determined by comparing the calculated camera pose to the camera pose measured using a micro-positioning stage. From these preliminary results, the computational efficiency of the algorithm in MATLAB code is near real-time (2.5 s for each pose estimate), which can be improved by implementation in C++. Error analysis produced an average distance error of 3 mm and an average orientation error of 2.5 degrees. The sources of these errors are 1) inaccuracy of the 3D virtual model, generated on a calibrated RAVEN robotic platform with stereo tracking; 2) inaccuracy of endoscope intrinsic parameters, such as focal length; and 3) endoscopic image distortion from scanning irregularities. This work demonstrates the feasibility of micro-camera 3D guidance of a robotic surgical tool.
Yamamoto, Yumi; Välitalo, Pyry A; Huntjens, Dymphy R; Proost, Johannes H; Vermeulen, An; Krauwinkel, Walter; Beukers, Margot W; van den Berg, Dirk-Jan; Hartman, Robin; Wong, Yin Cheong; Danhof, Meindert; van Hasselt, John G C; de Lange, Elizabeth C M
2017-01-01
Drug development targeting the central nervous system (CNS) is challenging due to poor predictability of drug concentrations in various CNS compartments. We developed a generic physiologically based pharmacokinetic (PBPK) model for prediction of drug concentrations in physiologically relevant CNS
Directory of Open Access Journals (Sweden)
Yingtao Zhang
2016-02-01
Full Text Available Dengue is a re-emerging infectious disease of humans, rapidly growing from endemic areas to dengue-free regions due to favorable conditions. In recent decades, Guangzhou has again suffered several big outbreaks of dengue, as have its neighboring cities. This study aims to examine the impact of dengue epidemics in Guangzhou, China, and to develop a predictive model for Zhongshan based on local weather conditions and Guangzhou dengue surveillance information. We obtained weekly dengue case data from 1st January, 2005 to 31st December, 2014 for Guangzhou and Zhongshan city from the Chinese National Disease Surveillance Reporting System. Meteorological data were collected from the Zhongshan Weather Bureau and demographic data were collected from the Zhongshan Statistical Bureau. A negative binomial regression model with a log link function was used to analyze the relationship between weekly dengue cases in Guangzhou and Zhongshan, controlling for meteorological factors. Cross-correlation functions were applied to identify the time lags of the effect of each weather factor on weekly dengue cases. Models were validated using receiver operating characteristic (ROC) curves and k-fold cross-validation. Our results showed that weekly dengue cases in Zhongshan were significantly associated with dengue cases in Guangzhou after applying a moving average over the prior 5 weeks (Relative Risk (RR) = 2.016, 95% Confidence Interval (CI): 1.845-2.203), controlling for weather factors including minimum temperature, relative humidity, and rainfall. ROC curve analysis indicated our forecasting model performed well at different prediction thresholds, with an area under the receiver operating characteristic curve (AUC) of 0.969 for a threshold of 3 cases per week, 0.957 for a threshold of 2 cases per week, and 0.938 for a threshold of 1 case per week. Models established during k-fold cross-validation also had considerable AUC (average 0.938-0.967). The sensitivity and
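The regression step can be sketched on simulated data: a negative binomial (NB2) model with a log link, fit by maximum likelihood, where a smoothed upstream case series and one weather covariate predict local weekly counts. All data, coefficients, and the dispersion value below are synthetic placeholders:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(4)
n = 300
# Stand-ins: smoothed upstream case counts (e.g. a 5-week moving average,
# log1p-transformed) and one weather covariate such as minimum temperature.
upstream = np.log1p(rng.gamma(2.0, 2.0, n))
temp = rng.normal(22, 4, n)
X = np.column_stack([np.ones(n), upstream, temp])
beta_true = np.array([0.2, 0.7, 0.01])
y = rng.poisson(np.exp(X @ beta_true))        # simulated local weekly counts

def nb_nll(beta, alpha=0.5):
    """Negative log-likelihood of an NB2 regression with log link."""
    mu = np.exp(np.clip(X @ beta, -20, 20))   # clip to avoid overflow
    r = 1.0 / alpha                           # NB2 dispersion: Var = mu + alpha*mu^2
    return -np.sum(gammaln(y + r) - gammaln(r) - gammaln(y + 1)
                   + r * np.log(r / (r + mu)) + y * np.log(mu / (r + mu)))

beta_hat = minimize(nb_nll, np.zeros(3), method="BFGS").x
rr = np.exp(beta_hat[1])   # relative risk per unit of the upstream covariate
```

Exponentiating a fitted coefficient gives the relative risk reported in the abstract: the multiplicative change in expected local cases per unit increase in the (smoothed) upstream covariate, holding weather fixed.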
A multi-component and multi-failure mode inspection model based on the delay time concept
International Nuclear Information System (INIS)
Wang Wenbin; Banjevic, Dragan; Pecht, Michael
2010-01-01
The delay time concept and the techniques developed for modelling and optimising plant inspection practices have been reported in many papers and case studies. For a system comprised of many components and subject to many different failure modes, one of the most convenient ways to model the inspection and failure processes is to use a stochastic point process for defect arrivals and a common delay time distribution for the duration between defect arrival and failure for all defects. This is an approximation, but it has been proven to be valid when the number of components is large. However, for a system with just a few key components and subject to a few major failure modes, the approximation may be poor. In this paper, a model is developed to address this situation, where each component and failure mode is modelled individually and then pooled together to form the system inspection model. Since inspections are usually scheduled for the whole system rather than for individual components, we then formulate the inspection model for the case where the time to the next inspection from the point of a component failure renewal is random. This imposes some complications on the model, and an asymptotic solution was found. Simulation algorithms have also been proposed as a comparison to the analytical results. A numerical example is presented to demonstrate the model.
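A minimal numerical sketch of the delay time idea, under our own illustrative assumptions (homogeneous Poisson defect arrivals and an exponential delay-time distribution, neither taken from the paper): the expected number of failures before an inspection at time T is the integral over arrival times of the arrival rate times the probability that the delay time elapses before T.

```python
import math

def expected_failures(T, rate, delay_cdf, n=10000):
    """Expected failures in (0, T): defects arrive as a Poisson process with
    the given rate; a defect arising at u fails before T with probability
    delay_cdf(T - u). Midpoint-rule integration of rate * F(T - u) over (0, T)."""
    du = T / n
    return sum(rate * delay_cdf(T - (i + 0.5) * du) for i in range(n)) * du

def system_expected_failures(T, components):
    """Pool per-component, per-failure-mode models into a system-level figure."""
    return sum(expected_failures(T, r, F) for r, F in components)

# illustrative exponential delay-time distribution with mean 1/mu
F_exp = lambda h, mu=0.5: 1.0 - math.exp(-mu * h)
```

`system_expected_failures` adds the per-component, per-failure-mode contributions, mirroring the paper's idea of modelling each component and failure mode individually before pooling them into the system inspection model.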
Modelling of Attentional Dwell Time
DEFF Research Database (Denmark)
Petersen, Anders; Kyllingsbæk, Søren; Bundesen, Claus
2009-01-01
Studies of the time course of visual attention have identified a temporary functional blindness to the second of two spatially separated targets: attending to one visual stimulus may lead to impairments in identifying a second stimulus presented between 200 and 500 ms after the first. This phenomenon is known as attentional dwell time (e.g., Duncan, Ward, & Shapiro, 1994). All previous studies of attentional dwell time have looked at data averaged across subjects. In contrast, we have succeeded in running subjects for 3120 trials, which has given us reliable data for modelling data from individual subjects. Our new model is based on the Theory of Visual Attention (TVA; Bundesen, 1990). TVA has previously been successful in explaining results from experiments where stimuli are presented simultaneously in the spatial domain (e.g., whole report and partial report) but has not yet been extended...
Pavone, Andrea; Svensson, Jakob; Langenberg, Andreas; Pablant, Novimir; Wolf, Robert C.
2017-10-01
Artificial neural networks (ANNs) can reduce the computation time required for the application of Bayesian inference on large amounts of data by several orders of magnitude, making real-time analysis possible and, at the same time, providing a reliable alternative to more conventional inversion routines. The large-scale fusion experiment Wendelstein 7-X (W7-X) requires tens of diagnostics for plasma parameter measurements and is using the Minerva Bayesian modelling framework as its main inference engine, which can handle joint inference in complex systems made of several physics models. Conventional inversion routines are applied to measured data to infer the posterior distribution of the free parameters of the models implemented in the framework. We have trained ANNs on a training set made of samples from the prior distribution of the free parameters and the corresponding data calculated with the forward model, so that the trained ANNs constitute a surrogate model of the physics model. The ANNs have then been applied to 2D images measured by an X-ray spectrometer, representing the spectral emission from plasma impurities measured along a fan of lines of sight covering a major fraction of the plasma cross-section, to infer ion temperature profiles. The results were compared with those of the conventional inversion routines, showing that the ANNs constitute a robust and reliable alternative for real-time plasma parameter inference.
Cirpka, O. A.; Loschko, M.; Wöhling, T.; Rudolph, D. L.
2017-12-01
Excess nitrate concentrations pose a threat to drinking-water production from groundwater in all regions of intensive agriculture worldwide. Natural organic matter, pyrite, and other reduced constituents of the aquifer matrix can be oxidized by aerobic and denitrifying bacteria, leading to self-cleaning of groundwater. Various studies have shown that the heterogeneity of both hydraulic and chemical aquifer properties influences the reactive behavior. Since the exact spatial distributions of these properties are not known, predictions of the temporal evolution of nitrate should be probabilistic. However, the computational effort of PDE-based, spatially explicit multi-component reactive-transport simulations is so high that multiple model runs become impossible. Conversely, simplistic models that treat denitrification as a first-order decay process miss important controls on denitrification. We have proposed a Lagrangian framework of nonlinear reactive transport, in which the electron-donor supply by the aquifer matrix is parameterized by a relative reactivity, that is, the reaction rate relative to a standard reaction rate for identical solute concentrations (Loschko et al., 2016). We could show that reactive transport simplifies to solving a single ordinary differential equation in terms of the cumulative relative reactivity for a given combination of inflow concentrations. Simulating 3-D flow and reactive transport then becomes computationally so inexpensive that Monte Carlo simulations become feasible. The original scheme did not consider a change of the relative reactivity over time, implying that the electron-donor pool in the matrix is infinite. We have modified the scheme to address the consumption of the reducing aquifer constituents by the reactions. We also analyzed what a minimally complex model of aerobic respiration and denitrification could look like. With the revised scheme, we performed Monte Carlo simulations in 3-D domains, confirming that the uncertainty in
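The key computational trick, collapsing reactive transport to a single ODE in the cumulative relative reactivity, can be illustrated for first-order decay along a streamline. The rate constant and reactivity values below are invented for illustration:

```python
import math

def concentration_along_streamline(c0, k, reactivities, dt):
    """Integrate dc/dtau = -k * c, where tau is the cumulative relative
    reactivity accumulated along the travel path (tau_n = sum of R_i * dt).
    Spatially variable matrix reactivity enters only through tau, so one
    analytical solution c(tau) = c0 * exp(-k * tau) covers every streamline."""
    tau = 0.0
    out = []
    for R in reactivities:
        tau += R * dt
        out.append(c0 * math.exp(-k * tau))
    return out
```

Because the solution depends on the path only through the scalar tau, thousands of Monte Carlo realizations of the reactivity field reuse the same one-dimensional solution, which is what makes the probabilistic predictions affordable.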
Energy Technology Data Exchange (ETDEWEB)
Mendelsohn, M.L.; Pierce, P.A.
1997-09-04
Biological properties of relevance when modeling cancers induced in the atom bomb survivors include the wide distribution of the induced cancers across all organs, their biological indistinguishability from background cancers, their rates being proportional to background cancer rates, their rates steadily increasing over at least 50 years as the survivors age, and their radiation dose response being linear. We have successfully described this array of properties with a modified Armitage-Doll model using 5 to 6 somatic mutations, no intermediate growth, and the dose-related replacement of any one of these time-driven mutations by a radiation-induced mutation. Such a model is contrasted to prevailing models that use fewer mutations combined with intervening growth. While the rationale and effectiveness of our model are compelling for carcinogenesis in the atom bomb survivors, the lack of a promotional component may limit the generality of the model for other types of human carcinogenesis.
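A sketch of the modified Armitage-Doll hazard under the stated assumptions (k mutations, no intermediate growth, radiation able to replace any one time-driven mutation at a rate proportional to dose). The numeric mutation rates and the dose coefficient alpha are placeholders, not fitted values:

```python
import math

def background_hazard(t, mus):
    """Classical k-stage Armitage-Doll hazard:
    h(t) = (prod of mu_i) * t**(k-1) / (k-1)!."""
    k = len(mus)
    p = 1.0
    for m in mus:
        p *= m
    return p * t ** (k - 1) / math.factorial(k - 1)

def hazard_with_dose(t, mus, dose, alpha):
    """Any one of the k time-driven mutations may instead be radiation-induced,
    its rate mu_i replaced by alpha * dose; summing over the k choices yields a
    hazard linear in dose and proportional to the background rate."""
    h0 = background_hazard(t, mus)
    excess = sum(alpha * dose / m for m in mus) * h0
    return h0 + excess
```

By construction the excess term is linear in dose and proportional to the background hazard, matching the properties listed above for the atom bomb survivor data.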
Directory of Open Access Journals (Sweden)
S. C. B. Raper
2009-08-01
Full Text Available Glacier volume response time is a measure of the time taken for a glacier to adjust its geometry to a climate change. It has previously been proposed that the volume response time is given approximately by the ratio of glacier thickness to ablation at the glacier terminus. We propose a new conceptual model of glacier hypsometry (the area-altitude relation) and derive the volume response time in a form where climatic and topographic parameters are separated. The former are expressed by mass balance gradients, which we derive from glacier-climate modelling, and the latter are quantified with data from the World Glacier Inventory. Aside from the well-known scaling relation between glacier volume and area, we establish a new scaling relation between glacier altitude range and area, and evaluate it for seven regions. The presence of this scaling parameter in our response time formula accounts for the mass balance elevation feedback and leads to longer response times than given by the simple ratio of glacier thickness to ablation at the terminus. Volume response times range from decades to thousands of years for glaciers in maritime (wet-warm) and continental (dry-cold) climates, respectively. The combined effect of the volume-area and altitude-area scaling relations is such that the volume response time can increase with glacier area (Axel Heiberg Island and Svalbard), hardly change (Northern Scandinavia, Southern Norway, and the Alps), or even decrease (the Caucasus and New Zealand).
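The classical estimate that the paper starts from, response time as mean thickness over terminus ablation with thickness taken from volume-area scaling, reduces to a one-liner. The scaling constants and ablation rate below are illustrative assumptions, not the paper's regional values:

```python
def response_time(area_km2, c=0.033, gamma=1.36, ablation_m_per_yr=4.0):
    """Classical estimate: tau = mean thickness / terminus ablation, with mean
    thickness from the volume-area scaling V = c * A**gamma, i.e.
    H = V / A = c * A**(gamma - 1). Returns years."""
    H_km = c * area_km2 ** (gamma - 1.0)
    return H_km * 1000.0 / ablation_m_per_yr
```

With gamma > 1 the thickness, and hence this simple response time, grows with area; the paper's point is that the additional altitude-area scaling modifies this dependence regionally, so that the response time can also stay flat or decrease with area.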
Zhang, Zhen; Yan, Peng; Jiang, Huan; Ye, Peiqing
2014-09-01
In this paper, we consider the discrete time-varying internal model-based control design for high precision tracking of complicated reference trajectories generated by time-varying systems. Based on a novel parallel time-varying internal model structure, asymptotic tracking conditions for the design of internal model units are developed, and a low order robust time-varying stabilizer is further synthesized. In a discrete time setting, the high precision tracking control architecture is deployed on a Voice Coil Motor (VCM) actuated servo gantry system, where numerical simulations and real time experimental results are provided, achieving the tracking errors around 3.5‰ for frequency-varying signals. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
MODELLING OF ORDINAL TIME SERIES BY PROPORTIONAL ODDS MODEL
Directory of Open Access Journals (Sweden)
Serpil AKTAŞ ALTUNAY
2013-06-01
Full Text Available Categorical time series with random time-dependent covariates often arise when study variables are measured on a categorical scale. Several models have been proposed in the literature for the analysis of categorical time series; for example, Markov chain models, integer autoregressive processes, and discrete ARMA models can all be utilized. In general, the choice of model depends on the measurement level of the study variables: nominal, ordinal, or interval. A particularly successful approach for categorical time series is regression theory, based on generalized linear models and partial likelihood inference. One of the regression models for ordinal time series is the proportional odds model. In this study, the proportional odds model approach to ordinal categorical time series is investigated based on a real air pollution data set and the results are discussed.
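A proportional odds model maps one linear predictor through J-1 ordered thresholds into J category probabilities. A sketch with invented thresholds and slope:

```python
import math

def cumulative_logit_probs(x, thetas, beta):
    """Proportional odds model: logit P(Y <= j | x) = theta_j - beta * x,
    with thetas strictly increasing. Returns the J category probabilities
    obtained by differencing the cumulative probabilities."""
    expit = lambda z: 1.0 / (1.0 + math.exp(-z))
    cum = [expit(t - beta * x) for t in thetas] + [1.0]
    return [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]
```

Because a single beta shifts all cumulative logits together, the cumulative odds ratios between any two covariate values are the same at every threshold, which is the model's defining "proportional odds" constraint.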
International Nuclear Information System (INIS)
Zhang Yu; Sprecher, Alicia J.; Zhao Zongxi; Jiang, Jack J.
2011-01-01
Highlights: → The VWK method effectively detects the nonlinearity of a discrete map. → The method describes the chaotic time series of a biomechanical vocal fold model. → Nonlinearity in laryngeal pathology is detected from short and noisy time series. - Abstract: In this paper, we apply the Volterra-Wiener-Korenberg (VWK) model method to detect nonlinearity in disordered voice productions. The VWK method effectively describes the nonlinearity of a third-order nonlinear map. It allows for the analysis of short and noisy data sets. The extracted VWK model parameters show an agreement with the original nonlinear map parameters. Furthermore, the VWK model method is applied to successfully assess the nonlinearity of a biomechanical voice production model simulating irregular vibratory dynamics of vocal folds with a unilateral vocal polyp. Finally, we show the clinical applicability of this nonlinear detection method by analyzing the electroglottographic data generated by 14 patients with vocal nodules or polyps. The VWK model method shows potential in describing the nonlinearity inherent in disordered voice productions from short and noisy time series that are common in the clinical setting.
Directory of Open Access Journals (Sweden)
Cook Larry T
2008-05-01
Full Text Available Abstract Background Myocardial motion is an important observable for the assessment of heart condition. Accurate estimates of left-ventricular (LV) wall motion are required for quantifying myocardial deformation and assessing local tissue function and viability. Harmonic Phase (HARP) analysis was developed for measuring regional LV motion using tagged magnetic resonance imaging (tMRI) data. With current computer-aided postprocessing tools including HARP analysis, however, large motions experienced by myocardial tissue are often intractable to measure. This paper addresses this issue and provides a solution to make such measurements possible. Methods To improve the estimation performance for large cardiac motions while analyzing tMRI data sets, we propose a two-step solution. The first step involves constructing a model to describe the average systolic motion of the LV wall within a subject group. The second step involves time-reversal of the model, applied as a spatial coordinate transformation, to digitally relax the contracted LV wall in the experimental data of a single subject back to the beginning of systole. Cardiac tMRI scans were performed on four healthy rats and used for developing the forward LV model. Algorithms were implemented for preprocessing the tMRI data, optimizing the model parameters, and performing the HARP analysis. Slices from the midventricular level were then analyzed for all systolic phases. Results The time-reversal operation derived from the LV model accounted for the bulk portion of the myocardial motion, which was the average motion experienced within the overall subject population. In analyzing the individual tMRI data sets, removing this average with the time-reversal operation left small-magnitude residual motion unique to the case. This remaining residual portion of the motion was estimated robustly using the HARP analysis. Conclusion Utilizing a combination of the forward LV model and its time reversal improves the performance of
Kossieris, Panagiotis; Makropoulos, Christos; Onof, Christian; Koutsoyiannis, Demetris
2018-01-01
Many hydrological applications, such as flood studies, require long rainfall records at fine time scales, varying from daily down to a 1-min time step. However, in the real world there is limited availability of data at sub-hourly scales. To cope with this issue, stochastic disaggregation techniques are typically employed to produce possible, statistically consistent rainfall events that aggregate up to the field data collected at coarser scales. A methodology for the stochastic disaggregation of rainfall at fine time scales was recently introduced, combining the Bartlett-Lewis process to generate rainfall events with adjusting procedures that modify the lower-level variables (i.e., hourly) so as to be consistent with the higher-level one (i.e., daily). In the present paper, we extend the aforementioned scheme, initially designed and tested for the disaggregation of daily rainfall into hourly depths, to any sub-hourly time scale. In addition, we take advantage of recent developments in Poisson-cluster processes, incorporating into the methodology a Bartlett-Lewis model variant that introduces dependence between cell intensity and duration in order to capture the variability of rainfall at sub-hourly time scales. The disaggregation scheme is implemented in an R package, named HyetosMinute, to support disaggregation from the daily down to the 1-min time scale. The applicability of the methodology was assessed on a 5-min rainfall record collected in Bochum, Germany, comparing the performance of the above-mentioned model variant against the original Bartlett-Lewis process (non-random, with 5 parameters). The analysis shows that the disaggregation process adequately reproduces the most important statistical characteristics of rainfall at a wide range of time scales, while the introduction of the model with dependent intensity-duration results in better performance in terms of skewness, rainfall extremes, and dry proportions.
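The adjusting step of such a disaggregation scheme can be illustrated by its simplest variant: proportionally rescaling each fine-scale block so it reproduces the coarse-scale total. The published adjusting procedures are more elaborate; this sketch, with invented numbers, conveys only the core consistency idea:

```python
def proportional_adjust(fine, coarse_totals, per=24):
    """Rescale each block of `per` fine-scale depths so it sums exactly to the
    corresponding coarse (e.g., daily) total; blocks the generator left
    entirely dry receive the total spread uniformly instead."""
    out = []
    for d, target in enumerate(coarse_totals):
        block = fine[d * per:(d + 1) * per]
        s = sum(block)
        if s > 0:
            out.extend(v * target / s for v in block)
        else:
            out.extend([target / per] * per)
    return out
```

After adjustment the synthetic fine-scale series aggregates exactly to the observed coarse totals, which is the consistency requirement the Bartlett-Lewis generator alone does not guarantee.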
Directory of Open Access Journals (Sweden)
Zhilei He
2016-01-01
Full Text Available Based on mineral components and the creep experimental studies of Three Gorges granite and Beishan granite from different regions of China at various temperatures, the strength and creep property of two types of granites are compared and analyzed. Considering the damage evolution process, a new creep constitutive model is proposed to describe the creep property of granite at different temperatures based on fractional derivative. The parameters of the new creep model are determined on the basis of the experimental results of the two granites. In addition, a sensitivity study is carried out, showing effects of stress level, fractional derivative order, and the exponent m. The results indicate that the proposed creep model can describe the three creep stages of granite at different temperatures and contribute to further research on the creep property of granite.
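The building block of fractional-derivative creep models of this kind is the Abel dashpot, whose creep strain under constant stress follows a power law in time. A sketch, using the conventional symbols sigma (stress), eta (viscosity coefficient), and alpha (fractional order); the numeric values in the test are arbitrary, not fitted to the granite data:

```python
import math

def abel_creep_strain(t, sigma, eta, alpha):
    """Creep strain of a fractional (Abel) dashpot:
    eps(t) = (sigma / eta) * t**alpha / Gamma(1 + alpha).
    alpha -> 1 recovers the Newtonian dashpot eps = sigma * t / eta,
    while 0 < alpha < 1 interpolates between elastic and viscous behavior."""
    return (sigma / eta) * t ** alpha / math.gamma(1.0 + alpha)
```

Temperature and damage enter such models by making eta, alpha, or an added damage exponent depend on temperature and time, which is how the proposed model captures the three creep stages.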
Time Domain Waveform Inversion for the Q Model Based on the First-Order Viscoacoustic Wave Equations
Directory of Open Access Journals (Sweden)
Guowei Zhang
2016-01-01
Full Text Available Propagating seismic waves are dispersed and attenuated in the subsurface due to the conversion of elastic energy into heat. The absorptive property of a medium can be described by the quality factor Q. In this study, the first-order pressure-velocity viscoacoustic wave equations based on the standard linear solid model are used to incorporate the effect of Q. For the Q model inversion, an iterative procedure is then proposed by minimizing an objective function that measures the misfit energy between the observed data and the modeled data. The adjoint method is applied to derive the gradients of the objective function with respect to the model parameters, that is, bulk modulus, density, and Q-related parameter τ. Numerical tests on the crosswell recording geometry indicate the feasibility of the proposed approach for the Q anomaly estimation.
International Nuclear Information System (INIS)
Puglia, B.; Gallagher, D.; Amico, P.; Atefi, B.
1992-10-01
This report describes a process developed to convert an existing PRA into a model amenable to real-time, risk-based technical specification calculations. In earlier studies (culminating in NUREG/CR-5742), several risk-based approaches to technical specifications were evaluated. A real-time approach using a plant-specific PRA capable of modeling plant configurations as they change was identified as the most comprehensive approach to control plant risk. A master fault tree logic model representative of all of the core damage sequences was developed. Portions of the system fault trees were modularized, and supercomponents comprised of component failures with similar effects were developed to reduce the size of the model and quantification times. Modifications to the master fault tree logic were made to properly model the effect of maintenance and recovery actions. Fault trees representing several actuation systems not modeled in detail in the existing PRA were added to the master fault tree logic. This process was applied to the Surry NUREG-1150 Level 1 PRA. The master logic model was confirmed. The model was then used to evaluate the frequency associated with several plant configurations using the IRRAS code. For all cases analyzed, computational time was less than three minutes. This document, Volume 2, contains appendices A, B, and C. These provide, respectively: the Surry Technical Specifications Model Database, the Surry Technical Specifications Model, and a list of supercomponents used in the Surry Technical Specifications Model.
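The modularization described above, collapsing component failures with similar effects into supercomponents and quantifying the fault tree with independent-event gates, can be sketched as follows (all probabilities invented for illustration):

```python
def or_gate(probs):
    """Failure probability of an OR gate over independent basic events."""
    q = 1.0
    for p in probs:
        q *= 1.0 - p
    return 1.0 - q

def and_gate(probs):
    """Failure probability of an AND gate over independent inputs."""
    r = 1.0
    for p in probs:
        r *= p
    return r

def supercomponent(probs):
    """Collapse component failures with similar effects into one event."""
    return or_gate(probs)

# two redundant trains, each summarized by one supercomponent
train_a = supercomponent([1e-3, 2e-3])
train_b = supercomponent([1e-3, 2e-3])
top = and_gate([train_a, train_b])
```

A changing plant configuration is evaluated by re-quantifying with updated event probabilities; for example, taking a component out for maintenance amounts to setting its unavailability to 1.0, which is why the evaluation can run in minutes on a reduced model.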
Directory of Open Access Journals (Sweden)
Pengpeng Jiao
2016-08-01
Full Text Available Real-time traffic control is very important for urban transportation systems. Due to conflicts among different optimization objectives, existing multi-objective models are often converted into single-objective problems through the weighted sum method. To obtain real-time signal parameters and evaluation indices, this article puts forward a Pareto front-based multi-objective traffic signal control model using a particle swarm optimization algorithm. The article first formulates a control model for intersections based on detected real-time link volumes, with minimum delay time, minimum number of stops, and maximum effective capacity as the three objectives. Moreover, this article designs a step-by-step particle swarm optimization algorithm based on the Pareto front for its solution. The Pareto dominance relation and density distance are employed for ranking, tournament selection is used to select and weed out particles, and the Pareto front for the signal timing plan is then obtained, including time-varying cycle length and split. Finally, based on actual survey data, scenario analyses determine the optimal parameters of the particle swarm algorithm, comparisons with the current situation and existing models demonstrate its excellent performance, and experiments incorporating outliers in the input data or total failure of detectors further prove its robustness. Generally, the proposed methodology is effective and robust enough for real-time traffic signal control.
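The Pareto dominance relation used for ranking particles can be written in a few lines (minimization convention; a maximized objective such as effective capacity would be negated first):

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in every objective,
    strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

Keeping the whole non-dominated front, rather than a single weighted-sum optimum, is what lets the controller expose the trade-off among delay, stops, and capacity instead of committing to fixed weights in advance.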
Morteza Hatami; Mitra Mohammadi Mohammadi; Reza Esmaeli; Mandana Mohammadi
2017-01-01
Epidemiological studies conducted in the past two decades indicate that air pollution causes increase in cardiovascular, breathing and chronic bronchitis disorders and even causes cardiovascular mortality. Therefore, the aim of this study was to investigate the relationship between meteorological parameters, air pollution and cardiovascular mortality in the city of Mashhad in 2014 by a time series model. Data on mortality from cardiovascular disease, meteorological parameters and air pollutio...
Directory of Open Access Journals (Sweden)
Pranav Srinivas Kumar
2016-09-01
Full Text Available This paper presents the Robot Operating System Model-driven development tool suite (ROSMOD), an integrated development environment for rapid prototyping of component-based software for the Robot Operating System (ROS) middleware. ROSMOD is well suited for the design, development and deployment of large-scale distributed applications on embedded devices. We present the various features of ROSMOD including the modeling language, the graphical user interface, code generators, and deployment infrastructure. We demonstrate the utility of this tool with a real-world case study: an Autonomous Ground Support Equipment (AGSE) robot that was designed and prototyped using ROSMOD for the NASA Student Launch competition, 2014-2015.
A Predictive Model for Time-to-Flowering in the Common Bean Based on QTL and Environmental Variables
Directory of Open Access Journals (Sweden)
Mehul S. Bhakta
2017-12-01
Full Text Available The common bean is a tropical facultative short-day legume that is now grown in tropical and temperate zones. This observation underscores how domestication and modern breeding can change the adaptive phenology of a species. A key adaptive trait is the optimal timing of the transition from the vegetative to the reproductive stage. This trait is responsive to genetically controlled signal transduction pathways and local climatic cues. A comprehensive characterization of this trait can be started by assessing the quantitative contribution of the genetic and environmental factors, and their interactions. This study aimed to locate significant QTL (G) and environmental (E) factors controlling time-to-flower in the common bean, and to identify and measure G × E interactions. Phenotypic data were collected from a biparental [Andean × Mesoamerican] recombinant inbred population (F11:14; 188 genotypes) grown at five environmentally distinct sites. QTL analysis using a dense linkage map revealed 12 QTL, five of which showed significant interactions with the environment. Dissection of G × E interactions using a linear mixed-effect model revealed that temperature, solar radiation, and photoperiod play major roles in controlling common bean flowering time directly, and indirectly by modifying the effect of certain QTL. The model predicts flowering time across five sites with an adjusted r-square of 0.89 and root-mean-square error of 2.52 d. The model provides the means to disentangle the environmental dependencies of complex traits, and presents an opportunity to identify in silico QTL allele combinations that could yield desired phenotypes under different climatic conditions.
Models for dependent time series
Tunnicliffe Wilson, Granville; Haywood, John
2015-01-01
Models for Dependent Time Series addresses the issues that arise and the methodology that can be applied when the dependence between time series is described and modeled. Whether you work in the economic, physical, or life sciences, the book shows you how to draw meaningful, applicable, and statistically valid conclusions from multivariate (or vector) time series data. The first four chapters discuss the two main pillars of the subject that have been developed over the last 60 years: vector autoregressive modeling and multivariate spectral analysis. These chapters provide the foundational material...
Kogan, Steven M; Cho, Junhan; Simons, Leslie Gordon; Allen, Kimberly A; Beach, Steven R H; Simons, Ronald L; Gibbons, Frederick X
2015-04-01
Life History Theory (LHT), a branch of evolutionary biology, describes how organisms maximize their reproductive success in response to environmental conditions. This theory suggests that challenging environmental conditions will lead to early pubertal maturation, which in turn predicts heightened risky sexual behavior. Although largely confirmed among female adolescents, results with male youth are inconsistent. We tested a set of predictions based on LHT with a sample of 375 African American male youth assessed three times from age 11 to age 16. Harsh, unpredictable community environments and harsh, inconsistent, or unregulated parenting at age 11 were hypothesized to predict pubertal maturation at age 13; pubertal maturation was hypothesized to forecast risky sexual behavior, including early onset of intercourse, substance use during sexual activity, and lifetime numbers of sexual partners. Results were consistent with our hypotheses. Among African American male youth, community environments were a modest but significant predictor of pubertal timing. Among those youth with high negative emotionality, both parenting and community factors predicted pubertal timing. Pubertal timing at age 13 forecast risky sexual behavior at age 16. Results of analyses conducted to determine whether environmental effects on sexual risk behavior were mediated by pubertal timing were not significant. This suggests that, although evolutionary mechanisms may affect pubertal development via contextual influences for sensitive youth, the factors that predict sexual risk behavior depend less on pubertal maturation than LHT suggests.
Miranian, A; Abdollahzade, M
2013-02-01
Local modeling approaches, owing to their ability to model different operating regimes of nonlinear systems and processes by independent local models, seem appealing for modeling, identification, and prediction applications. In this paper, we propose a local neuro-fuzzy (LNF) approach based on the least-squares support vector machines (LSSVMs). The proposed LNF approach employs LSSVMs, which are powerful in modeling and predicting time series, as local models and uses hierarchical binary tree (HBT) learning algorithm for fast and efficient estimation of its parameters. The HBT algorithm heuristically partitions the input space into smaller subdomains by axis-orthogonal splits. In each partitioning, the validity functions automatically form a unity partition and therefore normalization side effects, e.g., reactivation, are prevented. Integration of LSSVMs into the LNF network as local models, along with the HBT learning algorithm, yield a high-performance approach for modeling and prediction of complex nonlinear time series. The proposed approach is applied to modeling and predictions of different nonlinear and chaotic real-world and hand-designed systems and time series. Analysis of the prediction results and comparisons with recent and old studies demonstrate the promising performance of the proposed LNF approach with the HBT learning algorithm for modeling and prediction of nonlinear and chaotic systems and time series.
Directory of Open Access Journals (Sweden)
Katelyn C. Corey
2016-09-01
Full Text Available Using a mathematical model with realistic demography, we analyze a large outbreak of measles in Muyinga sector in rural Burundi in 1988-1989. We generate simulated epidemic curves and age × time epidemic surfaces, which we qualitatively and quantitatively compare with the data. Our findings suggest that supplementary immunization activities (SIAs) should be used in places where routine vaccination cannot keep up with the increasing numbers of susceptible individuals resulting from population growth or from logistical problems such as cold chain maintenance. We use the model to characterize the relationship between SIA frequency and the SIA age range necessary to suppress measles outbreaks. If SIAs are less frequent, they must expand their target age range.
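A toy version of the SIA question can be posed with a daily-step SIR model in which births feed the susceptible pool and periodic pulses immunize a fraction of it. All parameter values below are illustrative, not fitted to the Muyinga data:

```python
def simulate_measles(years=20, beta=5e-5, gamma=1 / 14.0, births=5.0,
                     sia_every=None, sia_cov=0.9):
    """Daily-step SIR with a constant inflow of susceptibles (births); every
    `sia_every` days a supplementary immunization activity (SIA) immunizes a
    fraction sia_cov of the current susceptibles. Returns cumulative infections."""
    S, I, R = 5000.0, 10.0, 95000.0
    total = 0.0
    for day in range(1, 365 * years + 1):
        if sia_every and day % sia_every == 0:
            R += sia_cov * S
            S -= sia_cov * S
        new_inf = min(beta * S * I, S)   # clamp keeps the Euler step stable
        S += births - new_inf
        I += new_inf - gamma * I
        R += gamma * I
        total += new_inf
    return total
```

Pulsing susceptibles into the recovered class keeps S below the epidemic threshold gamma/beta most of the time, which is the mechanism by which sufficiently frequent SIAs suppress outbreaks; infrequent pulses let susceptibles accumulate between campaigns, mirroring the frequency/age-range trade-off discussed above.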
Tu, Juan; Guan, J. F.; Matula, T. J.; Crum, L. A.; Wei, Rongjue
2008-01-01
The dynamic behaviour of SonoVue microbubbles, a new-generation ultrasound contrast agent, is investigated in real time with the light scattering method. Highly diluted SonoVue microbubbles are injected into a diluted gel made of xanthan gum and water. The responses of individual SonoVue bubbles to driving ultrasound pulses are measured. Both linear and nonlinear bubble oscillations are observed, and the results suggest that SonoVue microbubbles can generate strong nonlinear responses. By fitting the experimental data of individual bubble responses with Sarkar's model, the shell dilatational viscosity of the bubble coating is estimated to be 7.0 nm·s·Pa.
Stability analysis and model-based control in EXTRAP-T2R with time-delay compensation
Olofsson, Erik; Witrant, Emmanuel; Briat, Corentin; Niculescu, Silviu-Iulian; Brunsell, Per
2008-01-01
In this paper, we investigate the stability problems and control issues that occur in a reversed-field pinch (RFP) device, EXTRAP-T2R (T2R), used for research in fusion plasma physics and general plasma (ionized gas) dynamics. The plant exhibits, among other things, magnetohydrodynamic instabilities known as resistive-wall modes (RWMs), growing on a time-scale set by a surrounding non-perfectly conducting shell. We propose a novel model that takes into account experimen...
Directory of Open Access Journals (Sweden)
Luca Faes
2017-01-01
Full Text Available The most common approach to assessing the dynamical complexity of a time series across multiple temporal scales makes use of the multiscale entropy (MSE) and refined MSE (RMSE) measures. In spite of their popularity, MSE and RMSE lack an analytical framework allowing their calculation for known dynamic processes and cannot be reliably computed over short time series. To overcome these limitations, we propose a method to assess RMSE for autoregressive (AR) stochastic processes. The method makes use of linear state-space (SS) models to provide the multiscale parametric representation of an AR process observed at different time scales and exploits the SS parameters to quantify analytically the complexity of the process. The resulting linear MSE (LMSE) measure is first tested in simulations, both theoretically, to relate the multiscale complexity of AR processes to their dynamical properties, and over short process realizations, to assess its computational reliability in comparison with RMSE. Then, it is applied to time series of heart period, arterial pressure, and respiration measured from healthy subjects monitored in resting conditions and during physiological stress. This application to short-term cardiovascular variability documents that LMSE can describe the activity of physiological mechanisms producing biological oscillations at different temporal scales better than RMSE.
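The two ingredients of MSE, coarse-graining and sample entropy, can be sketched as follows. This is the generic template-counting estimator, not the paper's analytical state-space version, and this simple variant uses all length-m windows (standard SampEn drops the last one):

```python
import math

def coarse_grain(x, scale):
    """Average non-overlapping windows of length `scale` (MSE coarse-graining)."""
    n = len(x) // scale
    return [sum(x[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r): -log of the conditional probability that sequences matching
    for m points (Chebyshev distance <= r; r is an absolute tolerance, often
    chosen as 0.2 times the series SD) also match for m + 1 points."""
    def count(mm):
        templ = [x[i:i + mm] for i in range(len(x) - mm + 1)]
        c = 0
        for i in range(len(templ)):
            for j in range(i + 1, len(templ)):
                if max(abs(a - b) for a, b in zip(templ[i], templ[j])) <= r:
                    c += 1
        return c
    B, A = count(m), count(m + 1)
    return -math.log(A / B) if A > 0 and B > 0 else float("inf")
```

The multiscale curve is obtained by applying `sample_entropy` to `coarse_grain(x, s)` for increasing s; the paper's LMSE instead computes the analogous quantity analytically from the AR/state-space parameters, avoiding the unreliability of template counting on short series.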
Forecasting with nonlinear time series models
DEFF Research Database (Denmark)
Kock, Anders Bredahl; Teräsvirta, Timo
In this paper, nonlinear models are restricted to mean nonlinear parametric models. Several such models popular in time series econometrics are presented and some of their properties discussed. This includes two models based on universal approximators: the Kolmogorov-Gabor polynomial model… …applied to economic forecasting problems, is briefly highlighted. A number of large published studies comparing macroeconomic forecasts obtained using different time series models are discussed, and the paper also contains a small simulation study comparing recursive and direct forecasts in a partic…
E. Korkmaz (Evsen); R. Kuik (Roelof); D. Fok (Dennis)
2013-01-01
textabstractThis research provides a new way to validate and compare buy-till-you-defect [BTYD] models. These models specify a customer's transaction and defection processes in a non-contractual setting. They are typically used to identify active customers in a company's customer base and to
Müftüler, Mine; İnce, Mustafa Levent
2015-08-01
This study examined how a physical activity course based on the Trans-Contextual Model affected perceived autonomy support, autonomous motivation, determinants of leisure-time physical activity behavior, basic psychological needs satisfaction, and leisure-time physical activity behaviors. The participants were 70 Turkish university students (M age=23.3 yr., SD=3.2). A pre-test-post-test control group design was constructed. Initially, the participants were randomly assigned into an experimental (n=35) and a control (n=35) group. The experimental group followed a 12 wk. trans-contextual model-based intervention. The participants were pre- and post-tested in terms of Trans-Contextual Model constructs and self-reported leisure-time physical activity behaviors. Multivariate analyses showed significant increases over the 12 wk. period for perceived autonomy support from instructor and peers, autonomous motivation in the leisure-time physical activity setting, positive intention and perceived behavioral control over leisure-time physical activity behavior, more fulfillment of psychological needs, and more engagement in leisure-time physical activity behavior in the experimental group. These results indicated that the intervention was effective in developing leisure-time physical activity and that the Trans-Contextual Model is a useful way to conceptualize these relationships.
Drew, J. E.
1989-01-01
Ab initio ionization and thermal equilibrium models are calculated for the winds of O stars using the results of steady state radiation-driven wind theory to determine the input parameters. Self-consistent methods are used for the roles of H, He, and the most abundant heavy elements in both the statistical and the thermal equilibrium. The model grid was chosen to encompass all O spectral subtypes and the full range of luminosity classes. Results of earlier modeling of O star winds by Klein and Castor (1978) are reproduced and used to motivate improvements in the treatment of the hydrogen equilibrium. The wind temperature profile is revealed to be sensitive to gross changes in the heavy element abundances, but insensitive to other factors considered such as the mass-loss rate and velocity law. The reduced wind temperatures obtained in observing the luminosity dependence of the Si IV lambda 1397 wind absorption profile are shown to eliminate any prospect of explaining the observed O VI lambda 1036 line profiles in terms of time-independent radiation-driven wind theory.
Volitional and Real-Time Control Cursor Based on Eye Movement Decoding Using a Linear Decoding Model
Directory of Open Access Journals (Sweden)
Jinhua Zhang
2016-01-01
Full Text Available The aim of this study is to build a linear decoding model that reveals the relationship between movement information and EOG (electrooculogram) data in order to control a cursor online, continuously, with blinks and eye-pursuit movements. First, a blink detection method is proposed to detect voluntary single-blink or double-blink information in the EOG. Then, a linear decoding model of the time series is developed to predict the position of gaze; the model parameters are calibrated by the RLS (Recursive Least Squares) algorithm, and decoding accuracy is assessed through a cross-validation procedure. Additionally, subsection processing, increment control, and online calibration are presented to realize online control. Finally, the technology is applied to the volitional online control of a cursor to hit multiple predefined targets. Experimental results show that the blink detection algorithm performs well, with a voluntary blink detection rate over 95%. By combining the merits of blinks and smooth-pursuit movements, the movement information of the eyes can be decoded reliably, with an average Pearson correlation coefficient of up to 0.9592, and all signal-to-noise ratios greater than 0. The novel system allows people to successfully and economically control a cursor online with a hit rate of 98%.
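The RLS calibration step mentioned above can be sketched as a standard recursive least-squares update of a linear decoder. This is a generic RLS implementation, not the authors' code; the feature and output dimensions, forgetting factor, and initialization are illustrative assumptions.

```python
import numpy as np

class RLSDecoder:
    """Linear decoder y = W @ x, calibrated sample-by-sample with
    recursive least squares (forgetting factor lam)."""
    def __init__(self, n_in, n_out, lam=0.99, delta=100.0):
        self.W = np.zeros((n_out, n_in))
        self.P = np.eye(n_in) * delta   # inverse-correlation estimate
        self.lam = lam

    def update(self, x, y):
        x = np.asarray(x, float)
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)           # RLS gain vector
        e = np.asarray(y, float) - self.W @ x  # a-priori prediction error
        self.W += np.outer(e, k)               # correct each output row
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return e

    def predict(self, x):
        return self.W @ np.asarray(x, float)
```

In the paper's setting, `x` would hold EOG-derived features and `y` the gaze/cursor coordinates; with the forgetting factor below 1, the decoder keeps adapting during online calibration.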
Stochastic models for time series
Doukhan, Paul
2018-01-01
This book presents essential tools for modelling non-linear time series. The first part of the book describes the main standard tools of probability and statistics that directly apply to the time series context to obtain a wide range of modelling possibilities. Functional estimation and bootstrap are discussed, and stationarity is reviewed. The second part describes a number of tools from Gaussian chaos and proposes a tour of linear time series models. It goes on to address nonlinearity from polynomial or chaotic models for which explicit expansions are available, then turns to Markov and non-Markov linear models and discusses Bernoulli shifts time series models. Finally, the volume focuses on the limit theory, starting with the ergodic theorem, which is seen as the first step for statistics of time series. It defines the distributional range to obtain generic tools for limit theory under long or short-range dependences (LRD/SRD) and explains examples of LRD behaviours. More general techniques (central limit ...
DEFF Research Database (Denmark)
Jiang, Yuewen; Chen, Meisen; You, Shi
2017-01-01
In a conventional electricity market, trading is conducted based on power forecasts in the day-ahead market, while the power imbalance is regulated in the real-time market, which is a separate trading scheme. With large-scale wind power connected to the power grid, power forecast errors increase… in the day-ahead market, which lowers the economic efficiency of the separate trading scheme. This paper proposes a robust unified trading model that includes the forecasts of real-time prices and imbalance power in the day-ahead trading scheme. The model is developed based on robust optimization in view… swarm algorithm (QPSO). Finally, the impacts of associated parameters on the separate trading and unified trading models are analyzed to verify the superiority of the proposed model and algorithm…
Gerwin, Philip M; Norinsky, Rada M; Tolwani, Ravi J
2018-03-01
Laboratory animal programs and core laboratories often set service rates based on cost estimates. However, actual costs may be unknown, and service rates may not reflect the actual cost of services. Accurately evaluating the actual costs of services can be challenging and time-consuming. We used a time-driven activity-based costing (ABC) model to determine the cost of services provided by a resource laboratory at our institution. The time-driven approach is a more efficient way to calculate costs than a traditional ABC model: we calculated only 2 parameters, the time required to perform an activity and the unit cost of the activity based on employee cost. This method allowed us to rapidly and accurately calculate the actual cost of services provided, including microinjection of a DNA construct, microinjection of embryonic stem cells, embryo transfer, and in vitro fertilization. We successfully implemented a time-driven ABC model to evaluate the cost of these services and the capacity of labor used to deliver them. We determined how actual costs compared with current service rates. In addition, we determined that the labor supplied to conduct all services (10,645 min/wk) exceeded the practical labor capacity (8,400 min/wk), indicating that the laboratory team was highly efficient and that additional labor capacity was needed to prevent overloading of the current team. Importantly, this time-driven ABC approach allowed us to establish a baseline model that can easily be updated to reflect operational changes or changes in labor costs. We demonstrated that a time-driven ABC model is a powerful management tool that can be applied to other core facilities as well as to entire animal programs, providing valuable information that can be used to set rates based on the actual cost of services and to improve operating efficiency.
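The two time-driven ABC parameters described above can be sketched in a few lines. The labor-cost figure and per-activity minutes below are hypothetical, while the 8,400 min/wk practical capacity and 10,645 min/wk of supplied labor come from the abstract.

```python
# Time-driven ABC needs only two parameters per activity:
# the unit cost of supplying capacity and the time the activity consumes.

def capacity_cost_rate(total_labor_cost, practical_minutes):
    """Cost per minute of practically available labor."""
    return total_labor_cost / practical_minutes

def service_cost(activity_minutes, rate):
    """Cost of a service = total minutes across activities x cost rate."""
    return sum(activity_minutes) * rate

# Hypothetical weekly labor cost; practical capacity from the abstract.
rate = capacity_cost_rate(total_labor_cost=12600.0, practical_minutes=8400)

# Hypothetical activity times for one service (prep, procedure, recovery).
embryo_transfer = service_cost([30, 45, 15], rate)

# Capacity check, using the abstract's figures: supplied labor exceeds
# practical capacity, signalling an overloaded team.
overloaded = 10645 > 8400
```

Comparing `embryo_transfer` with the current posted rate for that service is exactly the rate-versus-actual-cost comparison the paper describes.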
A time-driven activity-based costing model to improve health-care resource use in Mirebalais, Haiti.
Mandigo, Morgan; O'Neill, Kathleen; Mistry, Bipin; Mundy, Bryan; Millien, Christophe; Nazaire, Yolande; Damuse, Ruth; Pierre, Claire; Mugunga, Jean Claude; Gillies, Rowan; Lucien, Franciscka; Bertrand, Karla; Luo, Eva; Costas, Ainhoa; Greenberg, Sarah L M; Meara, John G; Kaplan, Robert
2015-04-27
In resource-limited settings, efficiency is crucial to maximise resources available for patient care. Time-driven activity-based costing (TDABC) estimates costs directly from clinical and administrative processes used in patient care, thereby providing valuable information for process improvements. TDABC is more accurate and simpler than traditional activity-based costing because it assigns resource costs to patients based on the amount of time clinical and staff resources are used in patient encounters. Other costing approaches use somewhat arbitrary allocations that provide little transparency into the actual clinical processes used to treat medical conditions. TDABC has been successfully applied in European and US health-care settings to facilitate process improvements and new reimbursement approaches, but it has not been used in resource-limited settings. We aimed to optimise TDABC for use in a resource-limited setting to provide accurate procedure and service costs, reliably predict financing needs, inform quality improvement initiatives, and maximise efficiency. A multidisciplinary team used TDABC to map clinical processes for obstetric care (vaginal and caesarean deliveries, from triage to post-partum discharge) and breast cancer care (diagnosis, chemotherapy, surgery, and support services, such as pharmacy, radiology, laboratory, and counselling) at Hôpital Universitaire de Mirebalais (HUM) in Haiti. The team estimated the direct costs of personnel, equipment, and facilities used in patient care based on the amount of time each of these resources was used. We calculated inpatient personnel costs by allocating provider costs per staffed bed, and assigned indirect costs (administration, facility maintenance and operations, education, procurement and warehouse, bloodbank, and morgue) to various subgroups of the patient population. This study was approved by the Partners in Health/Zanmi Lasante Research Committee. The direct cost of an uncomplicated vaginal
Lavini, Cristina; Verhoeff, Joost J. C.; Majoie, Charles B.; Stalpers, Lukas J. A.; Richel, Dick J.; Maas, Mario
2011-01-01
To compare time intensity curve (TIC)-shape analysis of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) data with model-based analysis and semiquantitative analysis in patients with high-grade glioma treated with the antiangiogenic drug bevacizumab. Fifteen patients had a pretreatment
Bannerman, James W.
A practicum was conducted to develop a scientific management tool that would assist students in obtaining a systems view of their college curriculum and to coordinate planning with curriculum requirements. A modification of the critical path method was employed and the result was a time-based network model of the Industrial Engineering Technology…
Modelling of Attentional Dwell Time
DEFF Research Database (Denmark)
Petersen, Anders; Kyllingsbæk, Søren; Bundesen, Claus
2009-01-01
… This phenomenon is known as attentional dwell time (e.g. Duncan, Ward, & Shapiro, 1994). Previous studies of attentional dwell time have looked at data averaged across subjects. In contrast, we have succeeded in running subjects for 3120 trials, which has given us reliable data for modelling data from individual subjects… Our model of attentional dwell time extends these mechanisms by proposing that the processing resources (cells) already engaged in a feedback loop (i.e. allocated to an object) are locked in VSTM and therefore cannot be allocated to other objects in the visual field before the encoded object has been released. This confinement of attentional resources leads to the impairment in identifying the second target. With the model, we are able to produce close fits to data from the traditional two-target dwell-time paradigm. A dwell-time experiment with three targets has also been carried out for individual subjects…
Energy Technology Data Exchange (ETDEWEB)
Louit, D.M. [Komatsu Chile, Av. Americo Vespucio 0631, Quilicura, Santiago (Chile)], E-mail: rpascual@ing.puc.cl; Pascual, R. [Centro de Mineria, Pontificia Universidad Catolica de Chile, Av. Vicuna Mackenna 4860, Santiago (Chile); Jardine, A.K.S. [Department of Mechanical and Industrial Engineering, University of Toronto, 5 King' s College Road, Toronto, Ont., M5S 3G8 (Canada)
2009-10-15
Reliability studies often rely on false premises, such as the assumption of independent and identically distributed times between failures (a renewal process). This can lead to erroneous model selection for the time to failure of a particular component or system, which can in turn lead to wrong conclusions and decisions. A strong statistical focus, the lack of a systematic approach and sometimes inadequate theoretical background seem to have made it difficult for maintenance analysts to adopt the necessary stage of data testing before the selection of a suitable model. In this paper, a framework for model selection to represent the failure process for a component or system is presented, based on a review of available trend tests. The paper focuses only on single-time-variable models and is primarily directed at analysts responsible for reliability analyses in an industrial maintenance environment. The model selection framework discriminates between the use of statistical distributions to represent the time to failure (the 'renewal approach') and the use of stochastic point processes (the 'repairable systems approach'), when system ageing or reliability growth may be present. An illustrative example based on failure data from a fleet of backhoes is included.
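Among the trend tests such a framework reviews, the Laplace test is a common first check: it distinguishes a homogeneous Poisson process (no trend, so a renewal model may be justified) from a deteriorating or improving system. A minimal sketch, not a reproduction of the paper's specific test battery:

```python
import math

def laplace_trend_test(failure_times, observation_end):
    """Laplace test statistic U for trend in a point process observed
    on (0, T]. Under the null of a homogeneous Poisson process,
    U is approximately standard normal; U >> 0 suggests deterioration
    (failures cluster late), U << 0 suggests reliability growth."""
    n = len(failure_times)
    t_bar = sum(failure_times) / n
    T = observation_end
    return (t_bar - T / 2.0) / (T * math.sqrt(1.0 / (12.0 * n)))
```

At the 5% level, |U| < 1.96 fails to reject the no-trend null (so fitting a time-to-failure distribution may be defensible), while larger |U| points toward the repairable-systems approach with a non-homogeneous point process.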
Directory of Open Access Journals (Sweden)
Dan Siegal-Gaskins
2009-08-01
Full Text Available In both prokaryotic and eukaryotic cells, gene expression is regulated across the cell cycle to ensure "just-in-time" assembly of select cellular structures and molecular machines. However, present in all time-series gene expression measurements is variability that arises from both systematic error in the cell synchrony process and variance in the timing of cell division at the level of the single cell. Thus, gene or protein expression data collected from a population of synchronized cells is an inaccurate measure of what occurs in the average single cell across a cell cycle. Here, we present a general computational method to extract "single-cell"-like information from population-level time-series expression data. This method removes the effects of (1) variance in growth rate and (2) variance in the physiological and developmental state of the cell. Moreover, this method represents an advance in the deconvolution of molecular expression data in its flexibility, minimal assumptions, and the use of a cross-validation analysis to determine the appropriate level of regularization. Applying our deconvolution algorithm to cell cycle gene expression data from the dimorphic bacterium Caulobacter crescentus, we recovered critical features of cell cycle regulation in essential genes, including ctrA and ftsZ, that were obscured in population-based measurements. In doing so, we highlight the problem with using population data alone to decipher cellular regulatory mechanisms and demonstrate how our deconvolution algorithm can be applied to produce a more realistic picture of temporal regulation in a cell.
Directory of Open Access Journals (Sweden)
Ibgtc Bowala
2017-06-01
Full Text Available With the rapid growth of financial markets, analysts are paying more attention to prediction. Stock data are time series data of huge volume. A feasible solution for handling the increasing amount of data is to use a cluster for parallel processing, and the Hadoop parallel computing platform is a typical representative. There are various statistical models for forecasting time series data, but accurate clusters are a prerequisite. Clustering analysis for time series data is one of the main methods for mining time series data for many other analysis processes. However, general clustering algorithms cannot cluster time series data well, because such data have a special structure and high dimensionality, with highly correlated values and a high noise level. A novel model for time series clustering is presented using BIRCH, based on piecewise SVD, leading to a novel dimension-reduction approach. Highly correlated features are handled using SVD, with a novel approach for dimensionality reduction that keeps the correlated behavior optimal, and BIRCH is then used for clustering. The algorithm is a novel model that can handle massive time series data. Finally, this new model is successfully applied to real stock time series data from Yahoo Finance with satisfactory results.
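The piecewise-SVD reduction step can be illustrated roughly as follows. The segment count and rank are arbitrary choices here, and this is a simplified reading of the idea, not the paper's implementation; the reduced feature vectors would then be fed to a clustering algorithm such as BIRCH.

```python
import numpy as np

def piecewise_svd_features(series_matrix, n_pieces=4, rank=2):
    """Reduce each time series (one per row) by splitting it into
    segments, stacking the segments into a small matrix, and keeping
    the leading low-rank SVD scores as a compact feature vector."""
    X = np.asarray(series_matrix, float)
    n_series, length = X.shape
    seg = length // n_pieces
    # Drop any remainder so the series splits evenly into segments.
    segs = X[:, :seg * n_pieces].reshape(n_series, n_pieces, seg)
    feats = []
    for s in segs:                       # s: (n_pieces, seg) per series
        U, S, Vt = np.linalg.svd(s, full_matrices=False)
        # Low-rank scores: segment loadings scaled by singular values.
        feats.append((U[:, :rank] * S[:rank]).ravel())
    return np.array(feats)
```

Each series is thus summarized by `n_pieces * rank` numbers instead of its full length, taming both the dimensionality and the correlated-noise problem before clustering.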
Lee, Seungyeoun; Son, Donghee; Yu, Wenbao; Park, Taesung
2016-12-01
Although a large number of genetic variants have been identified to be associated with common diseases through genome-wide association studies, there still exist limitations in explaining the missing heritability. One approach to solving this missing-heritability problem is to investigate gene-gene interactions rather than taking a single-locus approach. For gene-gene interaction analysis, the multifactor dimensionality reduction (MDR) method has been widely applied, since the constructive induction algorithm of MDR efficiently reduces high-order dimensions into one dimension by classifying multi-level genotypes into high- and low-risk groups. The MDR method has been extended to various phenotypes and has been improved to provide a significance test for gene-gene interactions. In this paper, we propose a simple method, called accelerated failure time (AFT) UM-MDR, in which the idea of a unified model-based MDR is extended to the survival phenotype by incorporating AFT-MDR into the classification step. The proposed AFT UM-MDR method is compared with AFT-MDR through simulation studies, and a short discussion is given.
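The constructive-induction step that MDR and its extensions build on can be sketched as a simple high-/low-risk labelling of multi-locus genotype combinations; the genotype table and threshold below are invented for illustration.

```python
def mdr_high_low(genotype_cases, genotype_controls, threshold=1.0):
    """Constructive-induction step of MDR: collapse a multi-locus
    genotype table into one binary attribute by labelling each
    genotype combination high-risk when its case/control ratio
    exceeds the threshold (the overall case/control ratio is a
    typical choice)."""
    labels = {}
    for g in set(genotype_cases) | set(genotype_controls):
        cases = genotype_cases.get(g, 0)
        controls = genotype_controls.get(g, 0)
        # No controls observed: treat the ratio as infinite (high risk).
        ratio = cases / controls if controls else float('inf')
        labels[g] = 'high' if ratio > threshold else 'low'
    return labels

# Toy two-locus example: counts of cases/controls per genotype pair.
cases = {('AA', 'BB'): 30, ('Aa', 'Bb'): 5}
controls = {('AA', 'BB'): 10, ('Aa', 'Bb'): 20}
labels = mdr_high_low(cases, controls)
```

In AFT UM-MDR, this same binary high/low attribute enters an accelerated failure time model so that the interaction can be tested against a survival phenotype rather than case/control status.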
Directory of Open Access Journals (Sweden)
Morteza Hatami
2017-10-01
Full Text Available Epidemiological studies conducted in the past two decades indicate that air pollution increases cardiovascular, respiratory and chronic bronchitis disorders and even causes cardiovascular mortality. Therefore, the aim of this study was to investigate the relationship between meteorological parameters, air pollution and cardiovascular mortality in the city of Mashhad in 2014 using a time series model. Data on mortality from cardiovascular disease, meteorological parameters and air pollution in 2014 were gathered from the Paradises organization, the meteorology organization and the pollutant monitoring center, respectively. The relationship between these parameters was then analyzed using correlation coefficients, generalized linear regression, time series models and comparison of means. The results showed that the highest rates of cardiovascular mortality were related to sulfur dioxide, nitrogen dioxide and then PM2.5: each unit increase in SO2, NO2 and PM2.5 adds to the rate of cardiovascular mortality by 22.5, 2.9 and 0.69, respectively. Pressure, wind speed and rainfall have a significant association with mortality: each unit decrease in pressure and wind speed increases the rate of cardiovascular mortality by 2.79 and 15.77, respectively, and a one-unit increase in rainfall raises mortality from these diseases by 3.8 units. A one-year increase in age increases mortality caused by these diseases by up to 0.57 percent. Furthermore, the highest rates of cardiovascular mortality occurred in the cold periods of the year. Therefore, considering the growing trend of air pollution and its effects on human health, effective control measures are important for reducing air pollution in Iranian metropolises, including Mashhad.
Compiling models into real-time systems
International Nuclear Information System (INIS)
Dormoy, J.L.; Cherriaux, F.; Ancelin, J.
1992-08-01
This paper presents an architecture for building real-time systems from models, together with model-compiling techniques. This has been applied to build a real-time model-based monitoring system for nuclear plants, called KSE, which is currently being used in two plants in France. We describe the artificial intelligence techniques we used to build it: a model-based approach, a logical model of its operation, a declarative implementation of these models, and original knowledge-compiling techniques for automatically generating the real-time expert system from those models. Some of these techniques were borrowed from the literature, but we had to modify or invent others that simply did not exist. We also discuss two important problems that are often underestimated in the artificial intelligence literature: size and errors. Our architecture, which could be used in other applications, combines the advantages of the model-based approach with the efficiency requirements of real-time applications, whereas model-based approaches in general have serious drawbacks on this point.
LYSO based precision timing calorimeters
Bornheim, A.; Apresyan, A.; Ronzhin, A.; Xie, S.; Duarte, J.; Spiropulu, M.; Trevor, J.; Anderson, D.; Pena, C.; Hassanshahi, M. H.
2017-11-01
In this report we outline the study of the development of calorimeter detectors using bright scintillating crystals. We discuss how timing information with a precision of a few tens of picoseconds and below can significantly improve the reconstruction of physics events under the challenging high-pileup conditions to be faced at the High-Luminosity LHC or a future hadron collider. The particular challenge in measuring the time of arrival of a high-energy photon lies in the stochastic component of the distance of initial conversion and the size of the electromagnetic shower. We present studies and measurements from test beams for calorimeter-based timing measurements to explore the ultimate timing precision achievable for high-energy photons of 10 GeV and above. We focus on techniques to measure the timing with high precision in association with the energy of the photon. We present test-beam studies and results on the timing performance and characterization of the time resolution of LYSO-based calorimeters. We demonstrate that a time resolution of 30 ps is achievable for a particular design.
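A common way to summarize such measurements is a resolution model with an energy-dependent stochastic term added in quadrature to a constant floor. The parameter values below are illustrative placeholders, not the paper's fitted numbers.

```python
import math

def timing_resolution(E_GeV, a=0.1, c=0.02):
    """Typical calorimeter timing-resolution parametrization:
    sigma_t(E) = (a / sqrt(E)) (+) c, where (+) is addition in
    quadrature. a and c are in ns here and purely illustrative;
    the stochastic term reflects photostatistics and shower
    fluctuations, the floor c the electronics/clock limit."""
    return math.hypot(a / math.sqrt(E_GeV), c)
```

Fitting `a` and `c` to the measured resolution at several beam energies is the standard way to separate the stochastic component from the constant floor that dominates for high-energy photons.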
From Discrete-Time Models to Continuous-Time, Asynchronous Models of Financial Markets
K. Boer-Sorban (Katalin); U. Kaymak (Uzay); J. Spiering (Jaap)
2006-01-01
textabstractMost agent-based simulation models of financial markets are discrete-time in nature. In this paper, we investigate to what degree such models are extensible to continuous-time, asynchronous modelling of financial markets. We study the behaviour of a learning market maker in a market with
Probabilistic Survivability Versus Time Modeling
Joyner, James J., Sr.
2016-01-01
This presentation documents Kennedy Space Center's Independent Assessment work on three assessments completed for the Ground Systems Development and Operations (GSDO) Program to assist the Chief Safety and Mission Assurance Officer during key programmatic reviews; the assessments provided the GSDO Program with analyses of how egress time affects the likelihood of astronaut and ground worker survival during an emergency. For each assessment, a team developed probability distributions for hazard scenarios to address statistical uncertainty, resulting in survivability plots over time. The first assessment developed a mathematical model of probabilistic survivability versus time to reach a safe location using an ideal Emergency Egress System at Launch Complex 39B (LC-39B); the second used the first model to evaluate and compare various egress systems under consideration at LC-39B. The third used a modified LC-39B model to determine if a specific hazard decreased survivability more rapidly than other events during flight hardware processing in Kennedy's Vehicle Assembly Building.
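The survivability-versus-time idea can be sketched as a Monte Carlo estimate: a person survives if egress completes before the hazard arrives. The distributions and parameters below are illustrative stand-ins, not the assessment's actual hazard models.

```python
import numpy as np

def survivability_curve(egress_times, hazard_onset_times, t_grid):
    """P(safe by time t) estimated by Monte Carlo: a sample survives
    if its egress completes by t and before its hazard onset."""
    egress = np.asarray(egress_times)
    hazard = np.asarray(hazard_onset_times)
    return np.array([np.mean((egress <= t) & (egress < hazard))
                     for t in t_grid])

rng = np.random.default_rng(1)
egress = rng.lognormal(mean=4.0, sigma=0.3, size=10_000)   # seconds to safety
hazard = rng.exponential(scale=120.0, size=10_000)         # time to hazard
curve = survivability_curve(egress, hazard, t_grid=[30, 60, 120, 300])
```

Comparing such curves for different egress-time distributions mirrors the second assessment's comparison of candidate egress systems: the system whose curve rises faster gives workers a better chance against the same hazard scenario.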
Martínez-Martínez, F; Rupérez-Moreno, M J; Martínez-Sober, M; Solves-Llorens, J A; Lorente, D; Serrano-López, A J; Martínez-Sanchis, S; Monserrat, C; Martín-Guerrero, J D
2017-11-01
This work presents a data-driven method to simulate, in real-time, the biomechanical behavior of the breast tissues in some image-guided interventions such as biopsies or radiotherapy dose delivery as well as to speed up multimodal registration algorithms. Ten real breasts were used for this work. Their deformation due to the displacement of two compression plates was simulated off-line using the finite element (FE) method. Three machine learning models were trained with the data from those simulations. Then, they were used to predict in real-time the deformation of the breast tissues during the compression. The models were a decision tree and two tree-based ensemble methods (extremely randomized trees and random forest). Two different experimental setups were designed to validate and study the performance of these models under different conditions. The mean 3D Euclidean distance between nodes predicted by the models and those extracted from the FE simulations was calculated to assess the performance of the models in the validation set. The experiments proved that extremely randomized trees performed better than the other two models. The mean error committed by the three models in the prediction of the nodal displacements was under 2 mm, a threshold usually set for clinical applications. The time needed for breast compression prediction is sufficiently short to allow its use in real-time (<0.2 s).
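A minimal sketch of the surrogate idea, using scikit-learn's ExtraTreesRegressor (the library form of the paper's "extremely randomized trees") on synthetic stand-in data rather than real FE simulations; the response function and noise level are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

# Synthetic stand-in for FE data: plate displacement -> nodal displacements.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 40.0, size=(2000, 1))            # compression (mm)
Y = np.hstack([0.6 * X + 0.01 * X**2,                 # toy nonlinear response
               0.3 * X]) + rng.normal(0, 0.05, (2000, 2))

# Offline: train the surrogate on the (slow) simulation outputs.
model = ExtraTreesRegressor(n_estimators=100, random_state=0).fit(X, Y)

# Online: a prediction replaces a costly FE solve during the intervention.
pred = model.predict([[20.0]])                        # fast surrogate query
```

The design choice is the usual surrogate-modelling trade: all the expensive FE solves happen offline, and at intervention time only a tree traversal is needed, which is how the sub-0.2 s real-time budget becomes feasible.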
Conway, Sheila R.
2006-01-01
Simple agent-based models may be useful for investigating air traffic control strategies as a precursory screening for more costly, higher fidelity simulation. Of concern is the ability of the models to capture the essence of the system and provide insight into system behavior in a timely manner and without breaking the bank. The method is put to the test with the development of a model to address situations where capacity is overburdened and potential for propagation of the resultant delay through later flights is possible via flight dependencies. The resultant model includes primitive representations of principal air traffic system attributes, namely system capacity, demand, airline schedules and strategy, and aircraft capability. It affords a venue to explore their interdependence in a time-dependent, dynamic system simulation. The scope of the research question and the carefully-chosen modeling fidelity did allow for the development of an agent-based model in short order. The model predicted non-linear behavior given certain initial conditions and system control strategies. Additionally, a combination of the model and dimensionless techniques borrowed from fluid systems was demonstrated that can predict the system's dynamic behavior across a wide range of parametric settings.
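The delay-propagation mechanism via flight dependencies can be sketched in a few lines: a flight inherits delay from its aircraft's previous leg whenever the scheduled turnaround cannot absorb it. The turnaround time and schedule below are invented for illustration.

```python
def propagate(flights, turnaround=30):
    """flights: list (ordered so each 'prev' index precedes its user)
    of dicts with keys 'dep', 'arr' (scheduled times, minutes),
    'delay' (the flight's own initial delay), and 'prev' (index of the
    aircraft's previous flight, or None). Returns the total departure
    delay of each flight after propagation through dependencies."""
    total = []
    for f in flights:
        d = f['delay']
        if f['prev'] is not None:
            # Aircraft is ready only after its inbound leg lands
            # (with its propagated delay) plus the turnaround time.
            ready = flights[f['prev']]['arr'] + total[f['prev']] + turnaround
            d = max(d, ready - f['dep'])
        total.append(d)
    return total

flights = [
    {'dep': 0,   'arr': 60,  'delay': 45, 'prev': None},  # delayed leg
    {'dep': 100, 'arr': 160, 'delay': 0,  'prev': 0},     # same aircraft
]
```

Even this toy version exhibits the non-linearity the abstract mentions: below a threshold, schedule slack absorbs delay entirely; above it, delay cascades down every dependent leg.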
Directory of Open Access Journals (Sweden)
Cheng Gong
2014-01-01
Full Text Available This paper investigates the H∞ filtering problem of discrete singular Markov jump systems (SMJSs) with mode-dependent time delay based on the T-S fuzzy model. First, by the Lyapunov-Krasovskii functional approach, a delay-dependent sufficient condition on H∞ disturbance attenuation is presented, in which both stability and a prescribed H∞ performance are required to be achieved for the filtering-error systems. Then, based on this condition, the delay-dependent H∞ filter design scheme for SMJSs with mode-dependent time delay based on the T-S fuzzy model is developed in terms of a linear matrix inequality (LMI). Finally, an example is given to illustrate the effectiveness of the result.
Directory of Open Access Journals (Sweden)
Meng Xiong
2015-08-01
Full Text Available Energy storage devices are expected to be implemented more frequently in wind farms in the near future. In this paper, both pumped hydro and flywheel storage systems are used to help a wind farm smooth its power fluctuations. Because of the significant difference in the response speeds of the two storage types, coordinating the wind farm with the two kinds of energy storage is a challenge. This paper presents two methods for the coordination problem: a two-level hierarchical model predictive control (MPC) method and a single-level MPC method. In the single-level MPC method, one MPC controller coordinates the wind farm and the two storage systems to follow the grid schedule. In the two-level MPC method, two MPC controllers are used to coordinate the wind farm and the two storage systems: the outer-level and inner-level MPC run alternately, with the outer level producing long-term scheduling results and passing some of them to the inner level as input for real-time scheduling. The single-level MPC method performs both long- and short-term scheduling tasks in each interval. The simulation results show that the proposed methods can improve the utilization of wind power and reduce wind power spillage. In addition, the single-level MPC and the two-level MPC are not interchangeable: the single-level MPC has the advantage of following the grid schedule, while the two-level MPC can reduce the optimization time by 60%.
Model Based Temporal Reasoning
Rabin, Marla J.; Spinrad, Paul R.; Fall, Thomas C.
1988-03-01
Systems that assess the real world must cope with evidence that is uncertain, ambiguous, and spread over time. Typically, the most important function of an assessment system is to identify when activities are occurring that are unusual or unanticipated. Model based temporal reasoning addresses both of these requirements. The differences among temporal reasoning schemes lie in the methods used to avoid computational intractability. If we had n pieces of data and we wanted to examine how they were related, the worst case would be where we had to examine every subset of these points to see if that subset satisfied the relations. This would be 2^n subsets, which is intractable. Models compress this: if several data points are all compatible with a model, then that model represents all those data points. Data points are then considered related if they lie within the same model or if they lie in models that are related. Models thus address the intractability problem. They also address the problem of detecting unusual activities: if the data do not agree with the models indicated by earlier data, then something out of the norm is taking place. The models can summarize what we know up to that time, so when they are not predicting correctly, either something unusual is happening or we need to revise our models. The model based reasoner developed at Advanced Decision Systems is thus both intuitive and powerful. It is currently being used on one operational system and several prototype systems. It has enough power to be used in domains spanning the spectrum from manufacturing engineering and project management to low-intensity conflict and strategic assessment.
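The compression idea lends itself to a short sketch. The following Python toy is not the Advanced Decision Systems reasoner; the greedy grouping rule and the tolerance are invented for illustration. It groups scalar data points into "models" and relates points through shared model membership instead of testing all 2^n subsets:

```python
def fit_models(points, tol):
    # greedily group points into "models": a point joins the first model
    # whose running mean it lies within tol of, else it starts a new model
    models = []
    for p in sorted(points):
        for m in models:
            if abs(p - sum(m) / len(m)) <= tol:
                m.append(p)
                break
        else:
            models.append([p])
    return models

def related(a, b, models):
    # two points are considered related if they lie within the same model
    return any(a in m and b in m for m in models)
```

Five points collapse into three models here, so relatedness checks touch a handful of groups rather than 2^5 subsets.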
Time-Weighted Balanced Stochastic Model Reduction
DEFF Research Database (Denmark)
Tahavori, Maryamsadat; Shaker, Hamid Reza
2011-01-01
A new relative error model reduction technique for linear time invariant (LTI) systems is proposed in this paper. Both continuous and discrete time systems can be reduced within this framework. The proposed model reduction method is mainly based upon time-weighted balanced truncation and a recently developed inner-outer factorization technique. Compared to other analogous counterparts, the proposed method is shown to provide more accurate results in terms of time-weighted norms when applied to different practical examples. The results are further illustrated by a numerical example.
International Nuclear Information System (INIS)
Obeng, S. O
2014-07-01
Recent developments in the electronics industry have led to the widespread use of radiofrequency (RF) devices in various areas including telecommunications. The increasing number of mobile base stations (BTS), as well as their proximity to residential areas, has been accompanied by public health concerns due to the radiation exposure. The main objective of this research was to compare and modify the ITU-T predictive model for radiofrequency radiation emission for BTS against measured data at selected cell sites in Accra, Ghana. Theoretical and experimental assessments of radiofrequency exposure due to mobile base station antennas have been analysed. The maximum and minimum average power densities measured from individual base stations in the town were 1.86 µW/m² and 0.00961 µW/m², respectively. The ITU-T predictive model power density ranged between 6.40 mW/m² and 0.344 W/m². The results showed a variation between measured power density levels and the ITU-T predictive model. The ITU-T model power density levels decrease with increasing radial distance, while real-time measurements do not, owing to fluctuations during measurement. The ITU-T model overestimated the power density levels by a factor of 10⁵ compared to real-time measurements, so the model was modified to reduce the level of overestimation. The results showed that radiation intensity varies from one base station to another, even at the same distance. The occupational exposure quotient ranged between 5.43E-10 and 1.89E-08, whilst the general public exposure quotient ranged between 2.72E-09 and 9.44E-08. These results show that the RF exposure levels in Accra from these mobile phone base station antennas are below the RF exposure limit for the general public recommended by the International Commission on Non-Ionizing Radiation Protection. (au)
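The abstract does not reproduce the ITU-T formula, but the qualitative claim that modeled power density falls off with radial distance can be illustrated with the standard free-space relation S = EIRP / (4πd²), sketched here in Python (the EIRP and distance values are made up for illustration):

```python
import math

def power_density(eirp_w, d_m):
    # free-space power density in W/m^2: S = EIRP / (4 * pi * d^2)
    return eirp_w / (4 * math.pi * d_m ** 2)
```

Doubling the distance quarters the predicted density, the monotone decay that the fluctuating field measurements did not follow.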
Survey of time preference, delay discounting models
Directory of Open Access Journals (Sweden)
John R. Doyle
2013-03-01
Full Text Available The paper surveys over twenty models of delay discounting (also known as temporal discounting, time preference, or time discounting) that psychologists and economists have put forward to explain the way people actually trade off time and money. Using little more than the basic algebra of powers and logarithms, I show how the models are derived, what assumptions they are based upon, and how different models relate to each other. Rather than concentrate only on discount functions themselves, I show how discount functions may be manipulated to isolate rate parameters for each model. This approach, consistently applied, helps focus attention on the three main components in any discounting model: subjectively perceived money; subjectively perceived time; and how these elements are combined. We group models by the number of parameters that have to be estimated, which means our exposition follows a trajectory of increasing complexity to the models. However, as the story unfolds it becomes clear that most models fall into a smaller number of families. We also show how new models may be constructed by combining elements of different models. The surveyed models are: Exponential; Hyperbolic; Arithmetic; Hyperboloid (Green and Myerson; Rachlin); Loewenstein and Prelec's Generalized Hyperboloid; quasi-Hyperbolic (also known as beta-delta discounting); Benhabib et al.'s fixed cost; Benhabib et al.'s Exponential / Hyperbolic / quasi-Hyperbolic; Read's discounting fractions; Roelofsma's exponential time; Scholten and Read's discounting-by-intervals (DBI); Ebert and Prelec's constant sensitivity (CS); Bleichrodt et al.'s constant absolute decreasing impatience (CADI); Bleichrodt et al.'s constant relative decreasing impatience (CRDI); Green, Myerson, and Macaux's hyperboloid over intervals models; Killeen's additive utility; size-sensitive additive utility; Yi, Landes, and Bickel's memory trace models; McClure et al.'s two exponentials; and Scholten and Read's trade
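As a small illustration of the first two families surveyed, here is a Python sketch of the Exponential and Hyperbolic discount functions in their common textbook forms, V = A·e^(−kD) and V = A/(1 + kD); the amounts, delays, and rate parameter below are arbitrary:

```python
import math

def exponential(amount, delay, k):
    # Exponential discounting: V = A * exp(-k * D)
    return amount * math.exp(-k * delay)

def hyperbolic(amount, delay, k):
    # Hyperbolic discounting: V = A / (1 + k * D)
    return amount / (1 + k * delay)
```

With the same rate parameter k, the hyperbolic value always exceeds the exponential one at positive delays (since e^(kD) > 1 + kD), the slower-than-exponential decay that motivates many of the later models in the survey.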
Directory of Open Access Journals (Sweden)
Yuhang Guo
2017-01-01
Full Text Available Relative permeability and transverse relaxation time are both important rock-physics parameters. In this paper, a new transformation model between the transverse relaxation time and the wetting phase's relative permeability is established. The data show that cores from northwestern China have continuous fractal dimension characteristics, with great differences between different pore-size scales. Therefore, a piece-wise method is used to calculate the fractal dimension in our transformation model. The transformation results are quite consistent with the relative permeability curves from laboratory measurements. Based on this new model, we put forward a new method to identify reservoirs in tight sandstone. We focus on Well M in northwestern China, where nuclear magnetic resonance (NMR) logging is used to obtain the point-by-point relative permeability curve. In addition, we identify the gas and water layers based on the new T2-Kr model, and the results show that our new method is feasible. When the price of crude oil is low, this method can save time and reduce cost.
International Nuclear Information System (INIS)
Pirouzmand, Ahmad; Hadad, Kamal; Suh, Kune Y.
2011-01-01
This paper considers the concept of analog computing based on a cellular neural network (CNN) paradigm to simulate nuclear reactor dynamics using a time-dependent second order form of the neutron transport equation. Instead of solving nuclear reactor dynamic equations numerically, which is time-consuming and suffers from such weaknesses as vulnerability to transient phenomena, accumulation of round-off errors and floating-point overflows, use is made of a new method based on a cellular neural network. The state-of-the-art shows the CNN to be an alternative solution to the conventional numerical computation method. Indeed CNN is an analog computing paradigm that performs ultra-fast calculations and provides accurate results. In this study use is made of the CNN model to simulate the space-time response of scalar flux distribution in steady state and transient conditions. The CNN model is also used to simulate a step perturbation in the core. The accuracy and capability of the CNN model are examined in 2D Cartesian geometry for two fixed source problems, a mini-BWR assembly, and a TWIGL Seed/Blanket problem. We also use the CNN model concurrently for a typical small PWR assembly to simulate the effect of temperature feedback, poisons, and control rods on the scalar flux distribution.
Ayyad, Yassid; Mittig, Wolfgang; Bazin, Daniel; Beceiro-Novo, Saul; Cortesi, Marco
2018-02-01
The three-dimensional reconstruction of particle tracks in a time projection chamber is a challenging task that requires advanced classification and fitting algorithms. In this work, we have developed and implemented a novel algorithm based on the Random Sample Consensus (RANSAC) model. The RANSAC is used to classify tracks including pile-up, to remove uncorrelated noise hits, and to reconstruct the vertex of the reaction. The algorithm, developed within the Active Target Time Projection Chamber (AT-TPC) framework, was tested and validated by analyzing the 4He+4He reaction. Results, performance and quality of the proposed algorithm are presented and discussed in detail.
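The RANSAC principle described above, repeatedly fitting a model to a random minimal sample and keeping the hypothesis with the most inliers, can be sketched in Python for the simplest case of a 2D line. This is not the AT-TPC implementation, which works on 3D tracks; the tolerance, iteration count, and data below are illustrative:

```python
import random

def ransac_line(points, n_iter=200, tol=0.1, seed=0):
    # fit y = a*x + b by RANSAC: sample 2 points, count inliers, keep the best
    rng = random.Random(seed)
    best, best_inliers = None, []
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:            # vertical pair: cannot define a slope
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < tol]
        if len(inliers) > len(best_inliers):
            best, best_inliers = (a, b), inliers
    return best, best_inliers
```

Outliers (noise hits, in the detector setting) never accumulate enough inlier support, so the consensus line is recovered despite them.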
Sun, Bo; Liao, Baopeng; Li, Mengmeng; Ren, Yi; Feng, Qiang; Yang, Dezhen
2018-03-27
In the degradation process, the randomness and multiplicity of variables are difficult to describe by mathematical models. However, they are common in engineering and cannot be neglected, so it is necessary to study this issue in depth. In this paper, the copper bending pipe in seawater piping systems is taken as the analysis object, and the time-variant reliability is calculated by solving the interference of limit strength and maximum stress. We did degradation experiments and tensile experiments on copper material, and obtained the limit strength at each time. In addition, degradation experiments on copper bending pipe were done and the thickness at each time has been obtained, then the response of maximum stress was calculated by simulation. Further, with the help of one kind of Monte Carlo method we propose, the time-variant reliability of copper bending pipe was calculated based on the stochastic degradation process and interference theory. Compared with traditional methods and verified by maintenance records, the results show that the time-variant reliability model based on the stochastic degradation process proposed in this paper has better applicability in the reliability analysis, and it can be more convenient and accurate to predict the replacement cycle of copper bending pipe under seawater-active corrosion.
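A minimal Python sketch of the stress-strength interference idea can make the time-variant reliability calculation concrete. This is not the paper's Monte Carlo method; the normal distributions and the linear strength-degradation law are invented for illustration. Reliability at time t is estimated as P(limit strength > maximum stress):

```python
import random

def reliability(strength_mu, stress_mu, strength_sd, stress_sd, n=100000, seed=1):
    # Monte Carlo estimate of P(limit strength > maximum stress)
    rng = random.Random(seed)
    hits = sum(
        rng.gauss(strength_mu, strength_sd) > rng.gauss(stress_mu, stress_sd)
        for _ in range(n)
    )
    return hits / n

def time_variant_reliability(t, s0=300.0, rate=5.0, stress=200.0):
    # assumed degradation law: mean strength decays linearly with time
    return reliability(s0 - rate * t, stress, 15.0, 10.0)
```

As the strength distribution drifts down toward the stress distribution, the interference region grows and the estimated reliability falls, which is the mechanism behind predicting a replacement cycle.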
Zhou, Si-Da; Ma, Yuan-Chen; Liu, Li; Kang, Jie; Ma, Zhi-Sai; Yu, Lei
2018-01-01
Identification of time-varying modal parameters contributes to structural health monitoring, fault detection, vibration control, etc., of operational time-varying structural systems. It is a challenging task, however, because no more information is available for identifying a time-varying system than for a time-invariant one. This paper presents a vector time-dependent autoregressive model and least-squares support-vector-machine-based modal parameter estimator for linear time-varying structural systems in the case of output-only measurements. To reduce the computational cost, a Wendland's compactly supported radial basis function is used to achieve sparsity of the Gram matrix. A Gamma-test-based non-parametric approach to selecting the regularization factor is adapted for the proposed estimator to replace the time-consuming n-fold cross-validation. A series of numerical examples illustrates the advantages of the proposed modal parameter estimator in suppressing overestimation and in handling short data records. A laboratory experiment further validates the proposed estimator.
Directory of Open Access Journals (Sweden)
Sang Woo Kim
2013-10-01
Full Text Available A second-order discrete-time sliding mode observer (DSMO)-based method is proposed to estimate the state of charge (SOC) of a Li-ion battery. Unlike the first-order sliding mode approach, the proposed method eliminates the chattering phenomenon in SOC estimation. Further, a battery model with a dynamic resistance is also proposed to improve the accuracy of the battery model. Similar to actual battery behavior, the resistance parameters in this model change with both the magnitude of the discharge current and the SOC level. Validation of the dynamic resistance model is performed through pulse-current discharge tests at two different SOC levels. Our experimental results show that the proposed estimation method not only enhances the estimation accuracy but also eliminates the chattering phenomenon. The SOC estimation performance of the second-order DSMO is compared with that of the first-order DSMO.
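For intuition about the chattering that the second-order design removes, here is a generic first-order discrete sliding mode observer in Python for a scalar linear system. This is not the paper's battery model; the system matrices and observer gain are arbitrary. The sign-function correction drives the estimate toward the measurement but then oscillates inside a band around it:

```python
def smo_estimate(a, b, us, ys, gain, x0=0.0):
    # first-order discrete sliding mode observer for x[k+1] = a*x[k] + b*u[k], y = x
    # correction term: gain * sign(measured output - current estimate)
    xh = x0
    est = []
    for u, y in zip(us, ys):
        e = y - xh
        s = (e > 0) - (e < 0)            # sign function: the source of chattering
        xh = a * xh + b * u + gain * s
        est.append(xh)
    return est
```

Driven with a constant input whose true steady state is 1.0, the estimate converges but keeps bouncing inside a small band around the true value, exactly the chattering a second-order DSMO suppresses.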
Wherry, Susan A.; Wood, Tamara M.
2018-04-27
A whole lake eutrophication (WLE) model approach for phosphorus and cyanobacterial biomass in Upper Klamath Lake, south-central Oregon, is presented here. The model is a successor to a previous model developed to inform a Total Maximum Daily Load (TMDL) for phosphorus in the lake, but is based on net primary production (NPP), which can be calculated from dissolved oxygen, rather than scaling up a small-scale description of cyanobacterial growth and respiration rates. This phase 3 WLE model is a refinement of the proof-of-concept developed in phase 2, which was the first attempt to use NPP to simulate cyanobacteria in the TMDL model. The calibration of the calculated NPP WLE model was successful, with performance metrics indicating a good fit to calibration data, and the calculated NPP WLE model was able to simulate mid-season bloom decreases, a feature that previous models could not reproduce. In order to use the model to simulate future scenarios based on phosphorus load reduction, a multivariate regression model was created to simulate NPP as a function of the model state variables (phosphorus and chlorophyll a) and measured meteorological and temperature model inputs. The NPP time series was split into a low- and high-frequency component using wavelet analysis, and regression models were fit to the components separately, with moderate success. The regression models for NPP were incorporated in the WLE model, referred to as the “scenario” WLE (SWLE), and the fit statistics for phosphorus during the calibration period were mostly unchanged. The fit statistics for chlorophyll a, however, were degraded. These statistics are still an improvement over prior models, and indicate that the SWLE is appropriate for long-term predictions even though it misses some of the seasonal variations in chlorophyll a. The complete whole lake SWLE model, with multivariate regression to predict NPP, was used to make long-term simulations of the response to 10-, 20-, and 40-percent
Qin, Shanlin; Liu, Fawang; Turner, Ian W.
2018-03-01
The consideration of diffusion processes in magnetic resonance imaging (MRI) signal attenuation is classically described by the Bloch-Torrey equation. However, many recent works highlight the distinct deviation in MRI signal decay due to anomalous diffusion, which motivates the fractional order generalization of the Bloch-Torrey equation. In this work, we study the two-dimensional multi-term time and space fractional diffusion equation generalized from the time and space fractional Bloch-Torrey equation. By using the Galerkin finite element method with a structured mesh consisting of rectangular elements to discretize in space and the L1 approximation of the Caputo fractional derivative in time, a fully discrete numerical scheme is derived. A rigorous analysis of stability and error estimation is provided. Numerical experiments in the square and L-shaped domains are performed to give an insight into the efficiency and reliability of our method. Then the scheme is applied to solve the multi-term time and space fractional Bloch-Torrey equation, which shows that the extra time derivative terms impact the relaxation process.
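The L1 approximation of the Caputo derivative mentioned above can be sketched directly. For order α ∈ (0,1) on a uniform grid of step τ it reads D^α f(t_n) ≈ τ^(−α)/Γ(2−α) · Σ_{k=0}^{n−1} b_k [f(t_{n−k}) − f(t_{n−k−1})] with weights b_k = (k+1)^(1−α) − k^(1−α). Here is a Python sketch, checked against the exact Caputo derivative of f(t) = t, for which the L1 sum telescopes exactly:

```python
import math

def caputo_l1(f, t, alpha, n=100):
    # L1 scheme for the Caputo fractional derivative of order alpha in (0, 1)
    tau = t / n
    acc = 0.0
    for k in range(n):
        b = (k + 1) ** (1 - alpha) - k ** (1 - alpha)
        acc += b * (f((n - k) * tau) - f((n - k - 1) * tau))
    return acc / (tau ** alpha * math.gamma(2 - alpha))
```

For f(t) = t the exact Caputo derivative is t^(1−α)/Γ(2−α), and the L1 scheme reproduces it to rounding error because the first differences are constant and the weights telescope to n^(1−α).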
Directory of Open Access Journals (Sweden)
Teguh Febri Sudarma
2013-06-01
Full Text Available This research aimed to determine: (1) the learning outcomes of students taught with a just-in-time-teaching-based STAD cooperative learning method versus the STAD cooperative learning method; (2) the physics learning outcomes of students with high learning activity compared with those with low learning activity. The research sample was selected randomly by raffling four classes to obtain two. The first class was taught with the just-in-time-teaching-based STAD cooperative learning method, while the second class was taught with the STAD cooperative learning method. The instrument was a validated conceptual-understanding test with 7 essay questions. The average gain of students taught with the just-in-time-teaching-based STAD cooperative learning method was 0.47, higher than the average gain of students taught with the STAD cooperative learning method. High learning activity and low learning activity produced different learning results; here the average gain of students taught with the just-in-time-teaching-based STAD cooperative learning method was 0.48, higher than the average gain of students taught with the STAD cooperative learning method. There was an interaction between the learning model and learning activity on students' physics learning results.
Time dependent policy-based access control
DEFF Research Database (Denmark)
Vasilikos, Panagiotis; Nielson, Flemming; Nielson, Hanne Riis
2017-01-01
Access control policies are essential to determine who is allowed to access data in a system without compromising the data's security. However, applications inside a distributed environment may require those policies to be dependent on the actual content of the data and the flow of information, as well as on other attributes of the environment such as the time. In this paper, we use systems of Timed Automata to model distributed systems and we present a logic in which one can express time-dependent policies for access control. We show how a fragment of our logic can be reduced to a logic that current model checkers for Timed Automata such as UPPAAL can handle, and we present a translator that performs this reduction. We then use our translator and UPPAAL to enforce time-dependent policy-based access control on an example application from the aerospace industry.
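Timed Automata and UPPAAL are well beyond a few lines, but the core notion of a time-dependent access control policy can be illustrated with a toy Python check; the roles, resources, and hour windows below are invented for illustration:

```python
def allowed(role, resource, hour, policy):
    # policy maps (role, resource) -> list of permitted [start, end) hour windows;
    # access is granted only while the current hour lies inside some window
    return any(start <= hour < end
               for start, end in policy.get((role, resource), []))
```

A request is evaluated against both the requester's attributes and the clock, so the same principal can be granted access at one time and denied at another.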
Directory of Open Access Journals (Sweden)
Abror Abror
2014-01-01
Full Text Available Indonesia, located in the tropics, has a wet season and a dry season. In the last few years, however, river discharge in the dry season has been very low, while in the wet season flood frequency has increased, with sharp peaks and increasingly high water elevations. The increased flood discharge may be due to changes in land use or in rainfall characteristics; both possibilities need clarification. Therefore, a study was conducted to analyze rainfall characteristics, land use, and flood discharge in several watershed areas (DAS) quantitatively from time-series data. The research was conducted in DAS Gintung in Parakankidang, DAS Gung in Danawarih, DAS Rambut in Cipero, DAS Kemiri in Sidapurna, and DAS Comal in Nambo, located in Tegal Regency and Pemalang Regency in Central Java Province. The research activity consisted of three main steps: input, DAS system, and output. Input is DAS determination and selection and the collection of secondary data. DAS system is the initial secondary-data processing, consisting of rainfall analysis, HSS GAMA I parameters, land-type analysis, and DAS land use. Output is the final processing step, consisting of calculation of Tadashi Tanimoto and USSCS effective rainfall, flood discharge, ARIMA analysis, result analysis, and conclusions. Analytical calculation of the ARIMA Box-Jenkins time series used the software Number Cruncher Statistical Systems and Power Analysis Sample Size (NCSS-PASS version 2000), which produces time-series characteristics in the form of time-series patterns, mean square error (MSE), root mean square (RMS), autocorrelation of residuals, and trend. The results of this research indicate that composite CN and flood discharge are proportional: when the composite CN trend increases, the flood discharge trend also increases, and vice versa. Meanwhile, a decrease in the rainfall trend is not always followed by a decrease in the flood discharge trend. The main cause of the flood discharge characteristic is the DAS management characteristic, not change in
Directory of Open Access Journals (Sweden)
Robert Małkowski
2016-09-01
Full Text Available The first section of the paper provides the technical specification of a laboratory load model based on a 150 kVA power frequency converter and the Simulink Real-Time platform. Assumptions, as well as the control algorithm structure, are presented. Theoretical considerations of which load types may be simulated using the discussed laboratory setup are described. As the described model contains a transformer with a thyristor-controlled tap changer, a wider scope of device capabilities is presented. The paper lists and describes tunable parameters, both those tunable during device operation and those changed only before starting the experiment. Implementation details are given in the second section of the paper. The hardware structure is presented and described. Information about the communication interface used, the data maintenance and storage solution, and the Simulink Real-Time features used is presented. A list and description of all measurements is provided. The potential for modifications of the laboratory setup is evaluated. The third section describes the performed laboratory tests. Different load configurations are described and experimental results are presented. This includes simulation of under-frequency load shedding, frequency- and voltage-dependent characteristics of groups of load units, time characteristics of a group of different load units in a chosen area, and arbitrary active and reactive power regulation based on a defined schedule. Different operation modes of the control algorithm are described: apparent power control, active and reactive power control, and active and reactive current RMS value control.
Real-time stereo matching architecture based on 2D MRF model: a memory-efficient systolic array
Directory of Open Access Journals (Sweden)
Park Sungchan
2011-01-01
Full Text Available There is a growing need in computer vision applications for stereopsis, requiring not only accurate distance but also a fast and compact physical implementation. Global energy minimization techniques provide remarkably precise results, but they suffer from huge computational complexity. One of the main challenges is to parallelize the iterative computation, solving the memory-access problem between the large external memory and the massive array of processors. Remarkable memory savings can be obtained with our memory reduction scheme, and our new architecture is a systolic array. By cascading it across multiple chips, it can cope with a wide range of image resolutions. We have realized it using FPGA technology. Our architecture requires 19 times less memory than the global minimization technique, which is a principal step toward real-time chip implementation of various iterative image processing algorithms with tiny, distributed memory resources, such as optical flow, image restoration, etc.
Building Chaotic Model From Incomplete Time Series
Siek, Michael; Solomatine, Dimitri
2010-05-01
This paper presents a number of novel techniques for building a predictive chaotic model from incomplete time series. A predictive chaotic model is built by reconstructing the time-delayed phase space from an observed time series, and the prediction is made by a global model or by adaptive local models based on the dynamical neighbors found in the reconstructed phase space. In general, building any data-driven model depends on the completeness and quality of the data itself. However, complete data availability cannot always be guaranteed, since measurement or data transmission intermittently fails for various reasons. We propose two main solutions for dealing with incomplete time series: imputing and non-imputing methods. For imputing methods, we utilized interpolation methods (weighted sum of linear interpolations, Bayesian principal component analysis, and cubic spline interpolation) and predictive models (neural network, kernel machine, chaotic model) for estimating the missing values. After imputing the missing values, the phase space reconstruction and chaotic model prediction are executed as a standard procedure. For non-imputing methods, we reconstructed the time-delayed phase space from the observed time series with missing values. This reconstruction results in non-continuous trajectories; however, the local model prediction can still be made from the other dynamical neighbors reconstructed from non-missing values. We implemented and tested these methods to construct a chaotic model for predicting storm surges at Hoek van Holland, the entrance of Rotterdam Port. The hourly surge time series is available for the period 1990-1996. For measuring the performance of the proposed methods, a synthetic time series, created by introducing randomly generated missing values into the original (complete) time series, is utilized. There are two main performance measures used in this work: (1) error measures between the actual
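The non-imputing variant described above, reconstructing the time-delayed phase space while simply discarding delay vectors that touch a gap, can be sketched in Python. Here None marks a missing observation, and the embedding dimension and delay are illustrative:

```python
def embed(series, dim, delay):
    # time-delay phase-space reconstruction; delay vectors that contain a
    # missing value (None) are dropped rather than imputed
    vectors = []
    for i in range(len(series) - (dim - 1) * delay):
        v = tuple(series[i + j * delay] for j in range(dim))
        if None not in v:
            vectors.append(v)
    return vectors
```

The surviving vectors form a non-continuous trajectory, but they can still serve as dynamical neighbors for local model prediction.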
Elishmereni, Moran; Kheifetz, Yuri; Shukrun, Ilan; Bevan, Graham H; Nandy, Debashis; McKenzie, Kyle M; Kohli, Manish; Agur, Zvia
2016-01-01
Prostate cancer (PCa) is a leading cause of cancer death of men worldwide. In hormone-sensitive prostate cancer (HSPC), androgen deprivation therapy (ADT) is widely used, but an eventual failure on ADT heralds the passage to the castration-resistant prostate cancer (CRPC) stage. Because predicting time to failure on ADT would allow improved planning of personal treatment strategy, we aimed to develop a predictive personalization algorithm for ADT efficacy in HSPC patients. A mathematical mechanistic model for HSPC progression and treatment was developed based on the underlying disease dynamics (represented by prostate-specific antigen; PSA) as affected by ADT. Following fine-tuning by a dataset of ADT-treated HSPC patients, the model was embedded in an algorithm, which predicts the patient's time to biochemical failure (BF) based on clinical metrics obtained before or early in-treatment. The mechanistic model, including a tumor growth law with a dynamic power and an elaborate ADT-resistance mechanism, successfully retrieved individual time-courses of PSA (R² = 0.783). Using the personal Gleason score (GS) and PSA at diagnosis, as well as PSA dynamics from 6 months after ADT onset, and given the full ADT regimen, the personalization algorithm accurately predicted the individual time to BF of ADT in 90% of patients in the retrospective cohort (R² = 0.98). The algorithm we have developed, predicting biochemical failure based on routine clinical tests, could be especially useful for patients destined for short-lived ADT responses and quick progression to CRPC. Prospective studies must validate the utility of the algorithm for clinical decision-making. © 2015 Wiley Periodicals, Inc.
Hussein, Ahmed A; May, Paul R; Ahmed, Youssef E; Saar, Matthias; Wijburg, Carl J; Richstone, Lee; Wagner, Andrew; Wilson, Timothy; Yuh, Bertram; Redorta, Joan P; Dasgupta, Prokar; Kawa, Omar; Khan, Mohammad S; Menon, Mani; Peabody, James O; Hosseini, Abolfazl; Gaboardi, Franco; Pini, Giovannalberto; Schanne, Francis; Mottrie, Alexandre; Rha, Koon-Ho; Hemal, Ashok; Stockle, Michael; Kelly, John; Tan, Wei S; Maatman, Thomas J; Poulakis, Vassilis; Kaouk, Jihad; Canda, Abdullah E; Balbay, Mevlana D; Wiklund, Peter; Guru, Khurshid A
2017-11-01
To design a methodology to predict operative times for robot-assisted radical cystectomy (RARC) based on variation in institutional, patient, and disease characteristics to help in operating room scheduling and quality control. The model included preoperative variables and therefore can be used for prediction of surgical times: institutional volume, age, gender, body mass index, American Society of Anesthesiologists score, history of prior surgery and radiation, clinical stage, neoadjuvant chemotherapy, type and technique of diversion, and the extent of lymph node dissection. A conditional inference tree method was used to fit a binary decision tree predicting operative time. Permutation tests were performed to determine the variables having the strongest association with surgical time. The data were split at the value of this variable resulting in the largest difference in means for the surgical time across the split. This process was repeated recursively on the resultant data sets until the permutation tests showed no significant association with operative time. In all, 2 134 procedures were included. The variable most strongly associated with surgical time was type of diversion, with ileal conduits being 70 min shorter. Institutional volume (> 66 RARCs) was also important, with those with a higher volume being 55 min shorter (P < 0.001). The regression tree output was in the form of box plots that show the median and ranges of surgical times according to the patient, disease, and institutional characteristics. We developed a method to estimate operative times for RARC based on patient, disease, and institutional metrics that can help operating room scheduling for RARC. © 2017 The Authors BJU International © 2017 BJU International Published by John Wiley & Sons Ltd.
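The recursive permutation-test splitting described above can be sketched in a few lines. This is a hypothetical illustration (simulated data, a single split step only), not the authors' implementation; conditional inference trees are typically fitted with tools such as R's partykit::ctree.

```python
import numpy as np

rng = np.random.default_rng(0)

def permutation_pvalue(x, y, n_perm=2000, rng=rng):
    """P-value for association between a binary covariate x and a
    continuous outcome y, via permutation of the group labels."""
    observed = abs(y[x == 1].mean() - y[x == 0].mean())
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(x)
        stat = abs(y[perm == 1].mean() - y[perm == 0].mean())
        if stat >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)

# Hypothetical data: operative time (min) driven mainly by diversion type.
n = 400
diversion_ileal = rng.integers(0, 2, n)   # 1 = ileal conduit
high_volume = rng.integers(0, 2, n)       # 1 = high-volume institution
op_time = 420 - 70 * diversion_ileal - 55 * high_volume + rng.normal(0, 30, n)

# One step of the recursive procedure: pick the covariate with the
# strongest association (smallest permutation p-value) and split on it;
# the real algorithm then recurses on each resulting subset.
pvals = {
    "diversion_ileal": permutation_pvalue(diversion_ileal, op_time),
    "high_volume": permutation_pvalue(high_volume, op_time),
}
split_var = min(pvals, key=pvals.get)
print(split_var, pvals[split_var])
```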
Vilar, Eric; Dougal, R. A.
This work describes a non-linear time-domain model of a direct methanol fuel cell (DMFC) and uses that model to show that pulsed-current loading of a direct methanol fuel cell does not improve average efficiency. Unlike previous system level models, the one presented here is capable of predicting the step response of the fuel cell over its entire voltage range. This improved model is based on bi-functional methanol oxidation reaction kinetics and is derived from a lumped, four-step reaction mechanism. In total, six states are incorporated into the model: three states for intermediate surface adsorbates on the anode electrode, two states for the anode and cathode potentials, and one state for the liquid methanol concentration in the anode compartment. Model parameters were identified using experimental data from a real DMFC. The model was applied to study the steady-state and transient performance of a DMFC with the objective to understand the possibility of improving the efficiency of the DMFC by using periodic current pulses to drive adsorbed CO from the anode catalyst. Our results indicate that the pulsed-current method does indeed boost the average potential of the DMFC by 40 mV; but on the other hand, executing that strategy reduces the overall operating efficiency and does not yield any net benefit.
Discrete-time modelling of musical instruments
International Nuclear Information System (INIS)
Vaelimaeki, Vesa; Pakarinen, Jyri; Erkut, Cumhur; Karjalainen, Matti
2006-01-01
This article describes physical modelling techniques that can be used for simulating musical instruments. The methods are closely related to digital signal processing. They discretize the system with respect to time, because the aim is to run the simulation using a computer. The physics-based modelling methods can be classified as mass-spring, modal, wave digital, finite difference, digital waveguide and source-filter models. We present the basic theory and a discussion on possible extensions for each modelling technique. For some methods, a simple model example is chosen from the existing literature demonstrating a typical use of the method. For instance, in the case of the digital waveguide modelling technique a vibrating string model is discussed, and in the case of the wave digital filter technique we present a classical piano hammer model. We tackle some nonlinear and time-varying models and include new results on the digital waveguide modelling of a nonlinear string. Current trends and future directions in physical modelling of musical instruments are discussed.
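As a minimal taste of the digital waveguide family mentioned above, the following Karplus-Strong-style plucked string is a sketch: a single delay line with an averaging loss filter in the loop. All parameter values are illustrative, not taken from the article.

```python
import numpy as np

def karplus_strong(freq, duration, fs=44100, damping=0.996):
    """Minimal digital-waveguide-style plucked string: a delay line of
    length ~fs/freq with a two-point averaging (loss) filter in the loop."""
    N = int(fs / freq)                    # delay-line length ~ one period
    line = np.random.uniform(-1, 1, N)    # pluck: noise burst in the line
    out = np.empty(int(fs * duration))
    for i in range(out.size):
        out[i] = line[0]
        # averaging acts as a low-pass loss filter; damping < 1 decays the tone
        new = damping * 0.5 * (line[0] + line[1])
        line = np.roll(line, -1)
        line[-1] = new
    return out

tone = karplus_strong(440.0, 0.5)
print(tone.shape, float(np.abs(tone).max()))
```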
Adachi, Yasumoto; Makita, Kohei
2017-12-01
Echinococcus multilocularis is a parasite that causes highly pathogenic zoonoses and is maintained in foxes and rodents on Hokkaido Island, Japan. Detection of E. multilocularis infections in swine is epidemiologically important. In Hokkaido, administrative information is provided to swine producers based on the results of meat inspections. However, as the current criteria for providing administrative information often result in delays in providing information to producers, novel criteria are needed. Time series models were developed to monitor autocorrelations between data and lags using data collected from 84 producers at the Higashi-Mokoto Meat Inspection Center between April 2003 and November 2015. The two criteria were quantitatively compared using the sign test for the ability to rapidly detect farm-level outbreaks. Overall, the time series models based on an autoexponentially regressed zero-inflated negative binomial distribution with the 60th percentile cumulative distribution function of the model detected outbreaks earlier more frequently than the current criteria (90.5%, 276/305), overcoming the disadvantages of the current criteria and providing an earlier indication of increases in the rate of echinococcosis. Copyright © 2017 Elsevier B.V. All rights reserved.
Tan, Zhengguo; Hohage, Thorsten; Kalentev, Oleksandr; Joseph, Arun A; Wang, Xiaoqing; Voit, Dirk; Merboldt, K Dietmar; Frahm, Jens
2017-12-01
The purpose of this work is to develop an automatic method for the scaling of unknowns in model-based nonlinear inverse reconstructions and to evaluate its application to real-time phase-contrast (RT-PC) flow magnetic resonance imaging (MRI). Model-based MRI reconstructions of parametric maps which describe a physical or physiological function require the solution of a nonlinear inverse problem, because the list of unknowns in the extended MRI signal equation comprises multiple functional parameters and all coil sensitivity profiles. Iterative solutions therefore rely on an appropriate scaling of unknowns to numerically balance partial derivatives and regularization terms. The scaling of unknowns emerges as a self-adjoint and positive-definite matrix which is expressible by its maximal eigenvalue and solved by power iterations. The proposed method is applied to RT-PC flow MRI based on highly undersampled acquisitions. Experimental validations include numerical phantoms providing ground truth and a wide range of human studies in the ascending aorta, carotid arteries, deep veins during muscular exercise and cerebrospinal fluid during deep respiration. For RT-PC flow MRI, model-based reconstructions with automatic scaling not only offer velocity maps with high spatiotemporal acuity and much reduced phase noise, but also ensure fast convergence as well as accurate and precise velocities for all conditions tested, i.e. for different velocity ranges, vessel sizes and the simultaneous presence of signals with velocity aliasing. In summary, the proposed automatic scaling of unknowns in model-based MRI reconstructions yields quantitatively reliable velocities for RT-PC flow MRI in various experimental scenarios. Copyright © 2017 John Wiley & Sons, Ltd.
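The power-iteration step described above, finding the maximal eigenvalue of a self-adjoint, positive-definite scaling matrix, can be sketched as follows. The matrix here is a random symmetric positive-definite stand-in, not an MRI operator.

```python
import numpy as np

def max_eigenvalue(A, n_iter=200, tol=1e-10):
    """Power iteration for the maximal eigenvalue of a self-adjoint,
    positive-definite matrix A (the scaling operator in the abstract)."""
    rng = np.random.default_rng(1)
    v = rng.normal(size=A.shape[0])
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(n_iter):
        w = A @ v
        lam_new = v @ w                   # Rayleigh quotient estimate
        v = w / np.linalg.norm(w)
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam_new

# Hypothetical SPD matrix: B.T @ B + I is symmetric positive-definite.
B = np.random.default_rng(2).normal(size=(50, 50))
A = B.T @ B + np.eye(50)
print(max_eigenvalue(A), np.linalg.eigvalsh(A).max())
```

In the model-based reconstruction setting, `A` would be applied matrix-free (as repeated operator evaluations) rather than stored explicitly.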
Wang, D.; Becker, N. C.; Weinstein, S.; Duputel, Z.; Rivera, L. A.; Hayes, G. P.; Hirshorn, B. F.; Bouchard, R. H.; Mungov, G.
2017-12-01
The Pacific Tsunami Warning Center (PTWC) began forecasting tsunamis in real-time using source parameters derived from real-time Centroid Moment Tensor (CMT) solutions in 2009. Both the USGS and PTWC typically obtain W-Phase CMT solutions for large earthquakes less than 30 minutes after earthquake origin time. Within seconds, and often before waves reach the nearest deep ocean bottom pressure sensor (DARTs), PTWC then generates a regional tsunami propagation forecast using its linear shallow water model. The model is initialized by the sea surface deformation that mimics the seafloor deformation based on Okada's (1985) dislocation model of a rectangular fault with a uniform slip. The fault length and width are empirical functions of the seismic moment. How well did this simple model perform? The DART records provide a very valuable dataset for model validation. We examine tsunami events of the last decade with earthquake magnitudes ranging from 6.5 to 9.0 including some deep events for which tsunamis were not expected. Most of the forecast results were obtained during the events. We also include events from before the implementation of the WCMT method at USGS and PTWC, 2006-2009. For these events, WCMTs were computed retrospectively (Duputel et al. 2012). We also re-ran the model with a larger domain for some events to include far-field DARTs that recorded a tsunami with identical source parameters used during the events. We conclude that our model results, in terms of maximum wave amplitude, are mostly within a factor of two of the observed at DART stations, with an average error of less than 40% for most events, including the 2010 Maule and the 2011 Tohoku tsunamis. However, the simple fault model with a uniform slip is too simplistic for the Tohoku tsunami. We note model results are sensitive to centroid location and depth, especially if the earthquake is close to land or inland. For the 2016 M7.8 New Zealand earthquake the initial forecast underestimated the
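A linear shallow water propagation step of the kind used in such forecasts can be sketched on a 1-D staggered grid. The depth, grid spacing, and initial sea-surface pulse below are illustrative assumptions, not PTWC's actual model or source parameters.

```python
import numpy as np

# Minimal 1-D linear shallow-water solver (forward-backward, staggered grid).
g, h0 = 9.81, 4000.0                 # gravity (m/s^2), uniform depth (m)
dx = 10_000.0                        # grid spacing (m)
c = np.sqrt(g * h0)                  # long-wave speed, ~198 m/s
dt = 0.5 * dx / c                    # CFL-stable time step (Courant = 0.5)

n = 200
x = np.arange(n)
eta = np.exp(-((x - 100) * dx / 5e4) ** 2)   # initial sea-surface pulse
u = np.zeros(n + 1)                  # velocities live on cell faces

for _ in range(100):
    u[1:-1] -= g * dt / dx * (eta[1:] - eta[:-1])   # momentum equation
    eta -= h0 * dt / dx * (u[1:] - u[:-1])          # continuity equation

# The initial pulse splits into two waves of ~half amplitude (d'Alembert).
print(float(eta.max()))
```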
P-wave velocity changes in freezing hard low-porosity rocks: a laboratory-based time-average model
Directory of Open Access Journals (Sweden)
D. Draebing
2012-10-01
Full Text Available P-wave refraction seismics is a key method in permafrost research but its applicability to low-porosity rocks, which constitute alpine rock walls, has been denied in prior studies. These studies explain p-wave velocity changes in freezing rocks exclusively due to changing velocities of pore infill, i.e. water, air and ice. In existing models, no significant velocity increase is expected for low-porosity bedrock. We postulate that mixing laws apply for high-porosity rocks, but freezing in confined space in low-porosity bedrock also alters physical rock matrix properties. In the laboratory, we measured p-wave velocities of 22 decimetre-large low-porosity (< 10%) metamorphic, magmatic and sedimentary rock samples from permafrost sites with a natural texture (> 100 micro-fissures) from 25 °C to −15 °C in 0.3 °C increments close to the freezing point. When freezing, p-wave velocity increases by 11–166% perpendicular to cleavage/bedding, equivalent to a matrix velocity increase of 11–200%, coincident with an anisotropy decrease in most samples. The expansion of rigid bedrock upon freezing is restricted and ice pressure will increase matrix velocity and decrease anisotropy, while changing velocities of the pore infill are insignificant. Here, we present a modified Timur's two-phase equation implementing changes in matrix velocity dependent on lithology and demonstrate the general applicability of refraction seismics to differentiate frozen and unfrozen low-porosity bedrock.
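The two-phase time-average idea can be made concrete. The velocities below are hypothetical round numbers chosen only to illustrate how an increased matrix velocity (the modification proposed above) dominates the pore-infill effect at low porosity; they are not the paper's fitted values.

```python
def time_average_velocity(phi, v_fluid, v_matrix):
    """Wyllie/Timur two-phase time-average equation: total slowness is
    the porosity-weighted sum of the phase slownesses."""
    return 1.0 / (phi / v_fluid + (1.0 - phi) / v_matrix)

phi = 0.05  # low-porosity bedrock

# Classic view: only the pore infill changes (water -> ice).
v_unfrozen = time_average_velocity(phi, v_fluid=1500.0, v_matrix=5000.0)
v_frozen_classic = time_average_velocity(phi, v_fluid=3500.0, v_matrix=5000.0)

# Modified view: the matrix velocity itself also increases on freezing.
v_frozen_modified = time_average_velocity(phi, v_fluid=3500.0, v_matrix=5600.0)

print(v_unfrozen, v_frozen_classic, v_frozen_modified)
```

At 5% porosity the classic infill swap changes the velocity only modestly, while the matrix-velocity change produces a much larger jump, which is the core of the argument above.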
International Nuclear Information System (INIS)
Ehlen, Mark A.; Scholand, Andrew J.; Stamber, Kevin L.
2007-01-01
An agent-based model is constructed in which a demand aggregator sells both uniform-price and real-time price (RTP) contracts to households as means for adding price elasticity in residential power use sectors, particularly during peak-price hours of the day. Simulations suggest that RTP contracts help a demand aggregator (1) shift its long-term contracts toward off-peak hours, thereby reducing its cost of power and (2) increase its short-run profits if it is one of the first aggregators to have large numbers of RTP contracts; but (3) create susceptibilities to short-term market demand and price volatilities. (author)
Real-time modeling of heat distributions
Energy Technology Data Exchange (ETDEWEB)
Hamann, Hendrik F.; Li, Hongfei; Yarlanki, Srinivas
2018-01-02
Techniques for real-time modeling temperature distributions based on streaming sensor data are provided. In one aspect, a method for creating a three-dimensional temperature distribution model for a room having a floor and a ceiling is provided. The method includes the following steps. A ceiling temperature distribution in the room is determined. A floor temperature distribution in the room is determined. An interpolation between the ceiling temperature distribution and the floor temperature distribution is used to obtain the three-dimensional temperature distribution model for the room.
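A minimal sketch of the interpolation step in the claimed method; the sensor-derived ceiling/floor maps and the room height are made-up values.

```python
import numpy as np

def room_temperature_model(ceiling, floor, heights, room_height):
    """Linear interpolation between a ceiling temperature map and a floor
    temperature map, giving a 3-D temperature distribution (z-major)."""
    ceiling = np.asarray(ceiling, dtype=float)
    floor = np.asarray(floor, dtype=float)
    w = np.asarray(heights, dtype=float) / room_height  # 0 at floor, 1 at ceiling
    return w[:, None, None] * ceiling + (1 - w)[:, None, None] * floor

ceiling = np.full((4, 4), 30.0)   # hypothetical sensor-derived map (deg C)
floor = np.full((4, 4), 18.0)
heights = np.array([0.0, 1.0, 2.0, 3.0])
model = room_temperature_model(ceiling, floor, heights, room_height=3.0)
print(model.shape, model[0, 0, 0], model[-1, 0, 0])
```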
Sütfeld, Leon R; Gast, Richard; König, Peter; Pipa, Gordon
2017-01-01
Self-driving cars are posing a new challenge to our ethics. By using algorithms to make decisions in situations where harming humans is possible, probable, or even unavoidable, a self-driving car's ethical behavior comes pre-defined. Ad hoc decisions are made in milliseconds, but can be based on extensive research and debates. The same algorithms are also likely to be used in millions of cars at a time, increasing the impact of any inherent biases, and increasing the importance of getting it right. Previous research has shown that moral judgment and behavior are highly context-dependent, and comprehensive and nuanced models of the underlying cognitive processes are out of reach to date. Models of ethics for self-driving cars should thus aim to match human decisions made in the same context. We employed immersive virtual reality to assess ethical behavior in simulated road traffic scenarios, and used the collected data to train and evaluate a range of decision models. In the study, participants controlled a virtual car and had to choose which of two given obstacles they would sacrifice in order to spare the other. We randomly sampled obstacles from a variety of inanimate objects, animals and humans. Our model comparison shows that simple models based on one-dimensional value-of-life scales are suited to describe human ethical behavior in these situations. Furthermore, we examined the influence of severe time pressure on the decision-making process. We found that it decreases consistency in the decision patterns, thus providing an argument for algorithmic decision-making in road traffic. This study demonstrates the suitability of virtual reality for the assessment of ethical behavior in humans, delivering consistent results across subjects, while closely matching the experimental settings to the real world scenarios in question.
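A one-dimensional value-of-life model of the kind the comparison favours can be sketched as a logistic choice rule. The obstacle values and temperature below are hypothetical, not the fitted values from the study.

```python
import math

# Hypothetical one-dimensional value-of-life scale: each obstacle class
# gets a scalar value; the probability of sparing A over B is a logistic
# function of the value difference.
VALUES = {"trash_can": 0.1, "dog": 1.2, "adult": 3.0, "child": 3.6}

def p_spare_a(a, b, temperature=1.0):
    """Probability that obstacle a is spared (b is sacrificed)."""
    return 1.0 / (1.0 + math.exp(-(VALUES[a] - VALUES[b]) / temperature))

print(p_spare_a("child", "dog"))     # child strongly favoured over dog
print(p_spare_a("adult", "adult"))   # identical values -> 0.5
```

Raising `temperature` flattens the choice probabilities toward 0.5, which is one simple way to represent the reduced consistency under time pressure reported above.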
Hou, Dibo; Ge, Xiaofan; Huang, Pingjie; Zhang, Guangxin; Loáiciga, Hugo
2014-01-01
A real-time, dynamic, early-warning model (EP-risk model) is proposed to cope with sudden water quality pollution accidents affecting downstream areas with raw-water intakes (denoted as EPs). The EP-risk model outputs the risk level of water pollution at the EP by calculating the likelihood of pollution and evaluating the impact of pollution. A generalized form of the EP-risk model for river pollution accidents based on Monte Carlo simulation, the analytic hierarchy process (AHP) method, and the risk matrix method is proposed. The likelihood of water pollution at the EP is calculated by the Monte Carlo method, which is used for uncertainty analysis of pollutants' transport in rivers. The impact of water pollution at the EP is evaluated by expert knowledge and the results of Monte Carlo simulation based on the analytic hierarchy process. The final risk level of water pollution at the EP is determined by the risk matrix method. A case study of the proposed method is illustrated with a phenol spill accident in China.
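The likelihood-plus-impact structure of the EP-risk model can be sketched as follows, with an analytic 1-D advection-dispersion solution standing in for the river transport model. All physical values, the threshold, the impact level, and the risk-matrix entries are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def concentration(mass, area, x, t, u, D):
    """Analytic 1-D advection-dispersion solution for an instantaneous
    spill: concentration at distance x (m) and time t (s)."""
    return (mass / (area * np.sqrt(4 * np.pi * D * t))
            * np.exp(-(x - u * t) ** 2 / (4 * D * t)))

# Monte Carlo over uncertain flow velocity and dispersion coefficient.
n = 10_000
u = rng.normal(0.4, 0.05, n)              # m/s
D = rng.lognormal(np.log(20.0), 0.3, n)   # m^2/s
c = concentration(mass=500.0, area=50.0, x=2000.0, t=5000.0, u=u, D=D)

threshold = 0.002                          # hypothetical quality limit (kg/m^3)
likelihood = float((c > threshold).mean()) # fraction of runs exceeding it

# Risk matrix: combine likelihood and impact levels into a risk level.
RISK = {("high", "high"): "severe", ("high", "low"): "moderate",
        ("low", "high"): "moderate", ("low", "low"): "minor"}
impact = "high"                            # e.g. from AHP-weighted expert scores
level = RISK["high" if likelihood > 0.5 else "low", impact]
print(likelihood, level)
```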
Gragnaniello, Cristian; Nader, Remi; van Doormaal, Tristan; Kamel, Mahmoud; Voormolen, Eduard H J; Lasio, Giovanni; Aboud, Emad; Regli, Luca; Tulleken, Cornelius A F; Al-Mefty, Ossama
2010-11-01
Resident duty-hours restrictions have now been instituted in many countries worldwide. Shortened training times and increased public scrutiny of surgical competency have led to a move away from the traditional apprenticeship model of training. The development of educational models for brain anatomy is a fascinating innovation allowing neurosurgeons to train without the need to practice on real patients and it may be a solution to achieve competency within a shortened training period. The authors describe the use of Stratathane resin ST-504 polymer (SRSP), which is inserted at different intracranial locations to closely mimic meningiomas and other pathological entities of the skull base, in a cadaveric model, for use in neurosurgical training. Silicone-injected and pressurized cadaveric heads were used for studying the SRSP model. The SRSP presents unique intrinsic metamorphic characteristics: liquid at first, it expands and foams when injected into the desired area of the brain, forming a solid tumorlike structure. The authors injected SRSP via different passages that did not influence routes used for the surgical approach for resection of the simulated lesion. For example, SRSP injection routes included endonasal transsphenoidal or transoral approaches if lesions were to be removed through standard skull base approach, or, alternatively, SRSP was injected via a cranial approach if the removal was planned to be via the transsphenoidal or transoral route. The model was set in place in 3 countries (US, Italy, and The Netherlands), and a pool of 13 physicians from 4 different institutions (all surgeons and surgeons in training) participated in evaluating it and provided feedback. All 13 evaluating physicians had overall positive impressions of the model. The overall score on 9 components evaluated--including comparison between the tumor model and real tumor cases, perioperative requirements, general impression, and applicability--was 88% (100% being the best possible
Modeling Information Accumulation in Psychological Tests Using Item Response Times
Ranger, Jochen; Kuhn, Jörg-Tobias
2015-01-01
In this article, a latent trait model is proposed for the response times in psychological tests. The latent trait model is based on the linear transformation model and subsumes popular models from survival analysis, like the proportional hazards model and the proportional odds model. Core of the model is the assumption that an unspecified monotone…
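The linear transformation model referred to here can be written compactly; the following is the standard survival-analysis formulation of that model class (our notation, as a sketch rather than the authors' exact specification):

```latex
h(T_i) = -\boldsymbol{\beta}^{\top}\mathbf{x}_i + \varepsilon_i ,
```

where $h$ is an unspecified monotone increasing transformation of the response time $T_i$, $\mathbf{x}_i$ collects the covariates (here, latent traits), and the assumed distribution of $\varepsilon_i$ selects the submodel: a standard extreme-value error yields the proportional hazards model, while a standard logistic error yields the proportional odds model.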
Huang, Yangxin; Lu, Xiaosun; Chen, Jiaqing; Liang, Juan; Zangmeister, Miriam
2017-10-27
Longitudinal and time-to-event data are often observed together. Finite mixture models are currently used to analyze nonlinear heterogeneous longitudinal data, which, by releasing the homogeneity restriction of nonlinear mixed-effects (NLME) models, can cluster individuals into one of the pre-specified classes with class membership probabilities. This clustering may have clinical significance, and be associated with clinically important time-to-event data. This article develops a joint modeling approach to a finite mixture of NLME models for longitudinal data and proportional hazard Cox model for time-to-event data, linked by individual latent class indicators, under a Bayesian framework. The proposed joint models and method are applied to a real AIDS clinical trial data set, followed by simulation studies to assess the performance of the proposed joint model and a naive two-step model, in which finite mixture model and Cox model are fitted separately.
Energy Technology Data Exchange (ETDEWEB)
Nava, J.L. [Universidad Autonoma Metropolitana-Iztapalapa, Departamento de Quimica, Av. San Rafael Atlixco 186, A.P. 55-534, C.P. 09340, Mexico D.F. (Mexico); Sosa, E. [Instituto Mexicano del Petroleo, Programa de Investigacion en Ingenieria Molecular, Eje Central 152, C.P. 07730, Mexico D.F. (Mexico); Carreno, G. [Universidad de Guanajuato, Facultad de Ingenieria en Geomatica e Hidraulica, Av. Juarez 77, C.P. 36000, Guanajuato, Gto. (Mexico); Ponce-de-Leon, C. [Electrochemical Engineering Group, School of Engineering Sciences, University of Southampton, Highfield, Southampton SO17 1BJ (United Kingdom)]. E-mail: capla@soton.ac.uk; Oropeza, M.T. [Centro de Graduados e Investigacion del Instituto Tecnologico de Tijuana, Blvd. Industrial, s/n, C.P. 22500, Tijuana B.C. (Mexico)
2006-05-25
A concentration versus time relationship model based on the isothermal diffusion-charge transfer mechanism was developed for a flow-by reactor with a three-dimensional (3D) reticulated vitreous carbon (RVC) electrode. The relationship was based on the effectiveness factor (η), which led to the simulation of the concentration decay at different electrode polarisation conditions, i.e. -0.1, -0.3 and -0.59 V versus SCE; charge transfer control was assumed for the former and mixed charge transfer and mass transport control for the latter. Charge transfer and mass transport parameters were estimated from experimental data using Electrochemical Impedance Spectroscopy (EIS) and Linear Voltammetry (LV) techniques, respectively.
International Nuclear Information System (INIS)
Min Yugang; Santhanam, Anand; Ruddy, Bari H; Neelakkantan, Harini; Meeks, Sanford L; Kupelian, Patrick A
2010-01-01
In this paper, we present a graphics processing unit (GPU)-based simulation framework to calculate the delivered dose to a 3D moving lung tumor and its surrounding normal tissues, which are undergoing subject-specific lung deformations. The GPU-based simulation framework models the motion of the 3D volumetric lung tumor and its surrounding tissues, simulates the dose delivery using the dose extracted from a treatment plan using Pinnacle Treatment Planning System, Phillips, for one of the 3DCTs of the 4DCT and predicts the amount and location of radiation doses deposited inside the lung. The 4DCT lung datasets were registered with each other using a modified optical flow algorithm. The motion of the tumor and the motion of the surrounding tissues were simulated by measuring the changes in lung volume during the radiotherapy treatment using spirometry. The real-time dose delivered to the tumor for each beam is generated by summing the dose delivered to the target volume at each increase in lung volume during the beam delivery time period. The simulation results showed the real-time capability of the framework at 20 discrete tumor motion steps per breath, which is higher than the number of 4DCT steps (approximately 12) reconstructed during multiple breathing cycles.
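The per-beam dose accumulation described above reduces to a sum over discrete tumor-motion (lung-volume) steps during beam-on time; a toy sketch with made-up numbers, not the GPU implementation:

```python
import numpy as np

# Dose to the target summed over discrete tumor-motion steps per breath.
n_steps = 20                                  # motion steps per breathing cycle
dose_per_step = np.full(n_steps, 0.01)        # Gy deliverable in each step
in_beam = np.array([1] * 12 + [0] * 8)        # tumor inside beam aperture?

delivered = float((dose_per_step * in_beam).sum())
print(delivered)                              # dose accumulated this breath
```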
Goulooze, Sebastiaan C; Välitalo, Pyry A J; Knibbe, Catherijne A J; Krekels, Elke H J
2017-11-27
Repeated time-to-event (RTTE) models are the preferred method to characterize the repeated occurrence of clinical events. Commonly used diagnostics for parametric RTTE models require representative simulations, which may be difficult to generate in situations with dose titration or informative dropout. Here, we present a novel simulation-free diagnostic tool for parametric RTTE models; the kernel-based visual hazard comparison (kbVHC). The kbVHC aims to evaluate whether the mean predicted hazard rate of a parametric RTTE model is an adequate approximation of the true hazard rate. Because the true hazard rate cannot be directly observed, the predicted hazard is compared to a non-parametric kernel estimator of the hazard rate. With the degree of smoothing of the kernel estimator being determined by its bandwidth, the local kernel bandwidth is set to the lowest value that results in a bootstrap coefficient of variation (CV) of the hazard rate that is equal to or lower than a user-defined target value (CVtarget). The kbVHC was evaluated in simulated scenarios with different numbers of subjects, hazard rates, CVtarget values, and hazard models (Weibull, Gompertz, and circadian-varying hazard). The kbVHC was able to distinguish between Weibull and Gompertz hazard models, even when the hazard rate was relatively low (< 2 events per subject). Additionally, it was more sensitive than the Kaplan-Meier VPC to detect circadian variation of the hazard rate. An additional useful feature of the kernel estimator is that it can be generated prior to model development to explore the shape of the hazard rate function.
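The non-parametric side of the kbVHC, a kernel-smoothed hazard estimate, can be sketched as follows. This uses a Gaussian kernel on simulated single-event survival data with a constant true hazard of 0.5, and a fixed bandwidth rather than the CV-based bandwidth tuning described above.

```python
import numpy as np

def kernel_hazard(event_times, n_at_risk, grid, bandwidth):
    """Kernel-smoothed Nelson-Aalen hazard estimate: each event contributes
    a Gaussian bump with mass 1/Y(t_i), Y being the number at risk."""
    t = np.asarray(event_times, dtype=float)
    y = np.asarray(n_at_risk, dtype=float)
    k = np.exp(-0.5 * ((grid[:, None] - t[None, :]) / bandwidth) ** 2)
    k /= bandwidth * np.sqrt(2 * np.pi)
    return (k / y[None, :]).sum(axis=1)

# Simulated data: exponential event times -> constant true hazard 0.5.
rng = np.random.default_rng(3)
times = np.sort(rng.exponential(2.0, 500))
at_risk = np.arange(500, 0, -1)     # subjects still at risk at each event
grid = np.linspace(0.5, 3.0, 26)
h = kernel_hazard(times, at_risk, grid, bandwidth=0.5)
print(float(h.mean()))              # should hover near the true hazard 0.5
```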
A critical aspect of air pollution exposure assessment is the estimation of the time spent by individuals in various microenvironments (ME). Accounting for the time spent in different ME with different pollutant concentrations can reduce exposure misclassifications, while failure...
Directory of Open Access Journals (Sweden)
Resha Maulida
2017-11-01
Full Text Available The purpose of this study was to determine the effect of Problem Solving learning model based Just in Time Teaching (JiTT on students' science process skills (SPS on structure and function of plant tissue concept. This research was conducted at State Senior High School in South Tangerang .The research conducted using the quasi-experimental with Nonequivalent pretest-Postest Control Group Design. The samples of this study were 34 students for experimental group and 34 students for the control group. Data was obtained using a process skill test instrument (essai type that has been tested for its validity and reliability. Result of data analysis by ANACOVA, show that there were significant difference of postest between experiment and control group, by controlling the pretest score (F = 4.958; p <0.05. Thus, the problem-solving learning based on JiTT proved to improve students’ SPS. The contribution of this treatment in improving the students’ SPS was 7.2%. This shows that there was effect of problem solving model based JiTT on students’ SPS on the Structure and function of plant tissue concept.
Time-driven activity-based costing.
Kaplan, Robert S; Anderson, Steven R
2004-11-01
In the classroom, activity-based costing (ABC) looks like a great way to manage a company's limited resources. But executives who have tried to implement ABC in their organizations on any significant scale have often abandoned the attempt in the face of rising costs and employee irritation. They should try again, because a new approach sidesteps the difficulties associated with large-scale ABC implementation. In the revised model, managers estimate the resource demands imposed by each transaction, product, or customer, rather than relying on time-consuming and costly employee surveys. This method is simpler since it requires, for each group of resources, estimates of only two parameters: how much it costs per time unit to supply resources to the business's activities (the total overhead expenditure of a department divided by the total number of minutes of employee time available) and how much time it takes to carry out one unit of each kind of activity (as estimated or observed by the manager). This approach also overcomes a serious technical problem associated with employee surveys: the fact that, when asked to estimate time spent on activities, employees invariably report percentages that add up to 100. Under the new system, managers take into account time that is idle or unused. Armed with the data, managers then construct time equations, a new feature that enables the model to reflect the complexity of real-world operations by showing how specific order, customer, and activity characteristics cause processing times to vary. This Tool Kit uses concrete examples to demonstrate how managers can obtain meaningful cost and profitability information, quickly and inexpensively. Rather than endlessly updating and maintaining ABC data, they can now spend their time addressing the deficiencies the model reveals: inefficient processes, unprofitable products and customers, and excess capacity.
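The two-parameter calculation and a time equation can be sketched directly; the figures below are illustrative stand-ins in the spirit of the approach, not numbers from the article.

```python
# Time-driven ABC: cost per minute of departmental capacity, times the
# minutes each activity consumes.
overhead_per_quarter = 560_000.0   # departmental overhead expenditure ($)
minutes_available = 700_000.0      # practical capacity (employee minutes)
cost_per_minute = overhead_per_quarter / minutes_available  # $/minute

# Time equation: processing minutes vary with order characteristics.
def order_minutes(n_lines, new_customer, expedited):
    return 5.0 + 2.0 * n_lines + 10.0 * new_customer + 8.0 * expedited

minutes = order_minutes(n_lines=4, new_customer=True, expedited=False)
cost = minutes * cost_per_minute
print(cost_per_minute, minutes, cost)
```

Unused capacity falls out naturally: total minutes assigned to activities can be compared against `minutes_available`, and the shortfall is idle capacity rather than being forced into the percentages.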
Energy Technology Data Exchange (ETDEWEB)
Yi, Boram; Kang, Doo Kyoung; Kim, Tae Hee [Ajou University School of Medicine, Department of Radiology, Suwon, Gyeonggi-do (Korea, Republic of); Yoon, Dukyong [Ajou University School of Medicine, Department of Biomedical Informatics, Suwon (Korea, Republic of); Jung, Yong Sik; Kim, Ku Sang [Ajou University School of Medicine, Department of Surgery, Suwon (Korea, Republic of); Yim, Hyunee [Ajou University School of Medicine, Department of Pathology, Suwon (Korea, Republic of)
2014-05-15
To find out whether dynamic contrast-enhanced (DCE) model-based parameters correlate with model-free parameters, and to evaluate correlations between perfusion parameters and histologic prognostic factors. Model-based parameters (Ktrans, Kep and Ve) of 102 invasive ductal carcinomas were obtained using DCE-MRI and post-processing software. Correlations between model-based and model-free parameters, and between perfusion parameters and histologic prognostic factors, were analysed. Mean Kep was significantly higher in cancers showing initial rapid enhancement (P = 0.002) and a delayed washout pattern (P = 0.001). Ve was significantly lower in cancers showing a delayed washout pattern (P = 0.015). Kep correlated significantly with time to peak enhancement (TTP) (ρ = -0.33, P < 0.001) and washout slope (ρ = 0.39, P = 0.002). Ve was significantly correlated with TTP (ρ = 0.33, P = 0.002). Mean Kep was higher in tumours with high nuclear grade (P = 0.017). Mean Ve was lower in tumours with high histologic grade (P = 0.005) and in tumours with negative oestrogen receptor status (P = 0.047). TTP was shorter in tumours with negative oestrogen receptor status (P = 0.037). General information about tumour vascular physiology, interstitial space volume and pathologic prognostic factors can thus be acquired by analysing the time-signal intensity curve, without the complicated acquisition process required for the model-based parameters. (orig.)
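The reported ρ values are Spearman rank correlations. A minimal pure-Python sketch of how such a coefficient is computed, as the Pearson correlation of rank vectors with average ranks assigned to ties:

```python
# Pure-Python sketch of Spearman's rank correlation, the statistic behind
# the rho values reported above (ties receive average ranks).

def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        # extend j over a run of equal values (a tie block)
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # 1-based average rank of the tie block
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Pearson correlation of the rank vectors of x and y."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5
```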
Ichiji, K; Homma, N; Sakai, M; Narita, Y; Takai, Y; Yoshizawa, M
2012-06-01
Real-time tumor position/shape measurement and dynamic beam tracking techniques allow accurate and continuous irradiation of a moving tumor, but there can be a delay of several hundred milliseconds between observation and irradiation. A time-variant seasonal autoregressive (TVSAR) model has been proposed to compensate for the delay by predicting respiratory tumor motion with sub-millimeter accuracy over a one-second latency; it is the state-of-the-art model for predicting almost-regular breathing. In this study, we propose an extended prediction method based on TVSAR that is usable for various breathing patterns. The essential idea is to also predict the residual component that TVSAR alone cannot, which involves baseline shift, amplitude variation, and so on. The time series of the residual obtained for every new sample is predicted using an autoregressive (AR) model whose order and parameters are adaptively determined for each residual component by an information criterion. Eleven data sets of 3-D lung tumor motion, observed at Georgetown University Hospital using the CyberKnife Synchrony system, were used to evaluate prediction performance. Experimental results indicate that the proposed method is superior to conventional and state-of-the-art methods for 0 to 1 s ahead prediction; the average prediction error was 0.920 ± 0.348 mm for 0.5 s forward prediction. The new method can predict various respiratory motions, including not only regular but also a variety of irregular breathing patterns, and can thus compensate for the delay in dynamic irradiation systems for moving tumor tracking. A part of this work has been financially supported by Varian
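The residual step described above can be sketched as an ordinary-least-squares AR fit followed by a one-step-ahead prediction. This is a simplified reading: the order p is fixed here, whereas the paper selects the order and parameters adaptively via an information criterion.

```python
# Sketch of the residual prediction step: fit an AR(p) model to a residual
# series by ordinary least squares and predict one step ahead. The order p
# is fixed here; the paper chooses it with an information criterion.

def fit_ar(res, p):
    """AR(p) coefficients [a1..ap] from the normal equations (X^T X) a = X^T y."""
    rows = [[res[t - k] for k in range(1, p + 1)] for t in range(p, len(res))]
    y = res[p:]
    # augmented matrix [X^T X | X^T y], solved by Gauss-Jordan elimination
    A = [[sum(r[i] * r[j] for r in rows) for j in range(p)]
         + [sum(r[i] * v for r, v in zip(rows, y))] for i in range(p)]
    for c in range(p):
        piv = max(range(c, p), key=lambda k: abs(A[k][c]))  # partial pivoting
        A[c], A[piv] = A[piv], A[c]
        for k in range(p):
            if k != c:
                f = A[k][c] / A[c][c]
                A[k] = [a - f * b for a, b in zip(A[k], A[c])]
    return [A[i][p] / A[i][i] for i in range(p)]

def predict_next(res, coeffs):
    """One-step-ahead prediction x_t = a1*x_{t-1} + ... + ap*x_{t-p}."""
    return sum(a * res[-1 - k] for k, a in enumerate(coeffs))
```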
Modeling and Understanding Time-Evolving Scenarios
Directory of Open Access Journals (Sweden)
Riccardo Melen
2015-08-01
Full Text Available In this paper, we consider the problem of modeling application scenarios characterized by variability over time and involving heterogeneous kinds of knowledge. The evolution of distributed technologies creates new and challenging possibilities of integrating different kinds of problem solving methods, obtaining many benefits from the user point of view. In particular, we propose here a multilayer modeling system and adopt the Knowledge Artifact concept to tie together statistical and Artificial Intelligence rule-based methods to tackle problems in ubiquitous and distributed scenarios.
Fisher information framework for time series modeling
Venkatesan, R. C.; Plastino, A.
2017-08-01
A robust prediction model invoking the Takens embedding theorem, whose working hypothesis is obtained via an inference procedure based on the minimum Fisher information principle, is presented. The coefficients of the ansatz, central to the working hypothesis, satisfy a time-independent Schrödinger-like equation in a vector setting. The inference of (i) the probability density function of the coefficients of the working hypothesis and (ii) the establishment of a constraint-driven pseudo-inverse condition for the modeling phase of the prediction scheme is made, for the case of normal distributions, with the aid of the quantum mechanical virial theorem. The well-known reciprocity relations and the associated Legendre transform structure for the Fisher information measure (FIM, hereafter)-based model in a vector setting (with least square constraints) are self-consistently derived. These relations are demonstrated to yield an intriguing form of the FIM for the modeling phase, which defines the working hypothesis solely in terms of the observed data. Prediction cases employ time series obtained from (i) the Mackey-Glass delay-differential equation, (ii) an ECG signal from the MIT-Beth Israel Deaconess Hospital (MIT-BIH) cardiac arrhythmia database, and (iii) an ECG signal from the Creighton University ventricular tachyarrhythmia database; the ECG samples were obtained from the PhysioNet online repository. These examples demonstrate the efficiency of the prediction model.
Zhang, X.; Anagnostou, E. N.; Schwartz, C. S.
2017-12-01
Satellite precipitation products tend to have significant biases over complex terrain. Our research investigates a statistical approach for satellite precipitation adjustment based solely on numerical weather simulations. This approach has been evaluated in two mid-latitude (Zhang et al. 2013*1, Zhang et al. 2016*2) and three tropical mountainous regions by using the WRF model to adjust two high-resolution satellite products: i) the National Oceanic and Atmospheric Administration (NOAA) Climate Prediction Center morphing technique (CMORPH) and ii) the Global Satellite Mapping of Precipitation (GSMaP). Results show the adjustment effectively reduces the satellite underestimation of high rain rates, which provides a solid proof-of-concept for continuing research on NWP-based satellite correction. In this study we investigate the feasibility of using NCAR Real-time Ensemble Forecasts*3 for adjusting near-real-time satellite precipitation datasets over complex terrain in the Continental United States (CONUS), such as the Olympic Peninsula, the California coastal mountain ranges, the Rocky Mountains and the southern Appalachians. The research focuses on flood-inducing storms that occurred from May 2015 to December 2016 and four satellite precipitation products (CMORPH, GSMaP, PERSIANN-CCS and IMERG). The error correction performance evaluation is based on comparisons against the gauge-adjusted Stage IV precipitation data. *1 Zhang, Xinxuan, et al. "Using NWP simulations in satellite rainfall estimation of heavy precipitation events over mountainous areas." Journal of Hydrometeorology 14.6 (2013): 1844-1858. *2 Zhang, Xinxuan, et al. "Hydrologic Evaluation of NWP-Adjusted CMORPH Estimates of Hurricane-Induced Precipitation in the Southern Appalachians." Journal of Hydrometeorology 17.4 (2016): 1087-1099. *3 Schwartz, Craig S., et al. "NCAR's experimental real-time convection-allowing ensemble prediction system." Weather and Forecasting 30.6 (2015): 1645-1654.
Physical models on discrete space and time
International Nuclear Information System (INIS)
Lorente, M.
1986-01-01
The idea of space and time quantum operators with a discrete spectrum has been proposed frequently since the discovery that some physical quantities exhibit measured values that are multiples of fundamental units. This paper first reviews a number of these physical models. They are: the method of finite elements proposed by Bender et al; the quantum field theory model on discrete space-time proposed by Yamamoto; the finite dimensional quantum mechanics approach proposed by Santhanam et al; the idea of space-time as lattices of n-simplices proposed by Kaplunovsky et al; and the theory of elementary processes proposed by Weizsaecker and his colleagues. The paper then presents a model proposed by the authors and based on the (n+1)-dimensional space-time lattice where fundamental entities interact among themselves 1 to 2n in order to build up an n-dimensional cubic lattice as a ground field where the physical interactions take place. The space-time coordinates are nothing more than the labelling of the ground field and take only discrete values. 11 references
Yang, Yuanyuan; Zhang, Shuwen; Liu, Yansui
2017-04-01
The HLURM (Historic Land Use Reconstruction Model) was applied to reconstruct the spatial distribution of land use during the early reclamation period of Northeast China. The HLURM model consists of four main modules: a quantity control module, a spatial conversion rule module, a probability module and a spatial allocation module. The model produces backward projections by analyzing land use and its change in recent decades. This dynamically dependent approach rests on three assumptions: that current spatial patterns of land use depend dynamically on historic ones, that the boundary of historic land use under human activities does not exceed the union of the ranges of each land use type, and that the factors determining land suitability do not change over time.
Meyer, Sebastian; Warnke, Ingeborg; Rössler, Wulf; Held, Leonhard
2016-05-01
Spatio-temporal interaction is inherent to cases of infectious diseases and occurrences of earthquakes, whereas the spread of other events, such as cancer or crime, is less evident. Statistical significance tests of space-time clustering usually assess the correlation between the spatial and temporal (transformed) distances of the events. Although appealing through simplicity, these classical tests do not adjust for the underlying population nor can they account for a distance decay of interaction. We propose to use the framework of an endemic-epidemic point process model to jointly estimate a background event rate explained by seasonal and areal characteristics, as well as a superposed epidemic component representing the hypothesis of interest. We illustrate this new model-based test for space-time interaction by analysing psychiatric inpatient admissions in Zurich, Switzerland (2007-2012). Several socio-economic factors were found to be associated with the admission rate, but there was no evidence of general clustering of the cases. Copyright © 2016 Elsevier Ltd. All rights reserved.
Horst, Fabian; Eekhoff, Alexander; Newell, Karl M; Schöllhorn, Wolfgang I
2017-01-01
Traditionally, gait analysis has been centered on the idea of average behavior and normality. On one hand, clinical diagnoses and therapeutic interventions typically assume that average gait patterns remain constant over time. On the other hand, it is well known that all our movements are accompanied by a certain amount of variability, which does not allow us to make two identical steps. The purpose of this study was to examine changes in intra-individual gait patterns across different time-scales (i.e., tens of minutes, tens of hours). Nine healthy subjects performed 15 gait trials at a self-selected speed in 6 sessions within one day (10 to 90 minutes between subsequent sessions). For each trial, time-continuous ground reaction forces and lower body joint angles were measured. A supervised learning model using a kernel-based discriminant regression was applied to classify sessions within individual gait patterns. Discernable characteristics of intra-individual gait patterns could be distinguished between repeated sessions, with classification rates of 67.8 ± 8.8% and 86.3 ± 7.9% for the six-session classification of ground reaction forces and lower body joint angles, respectively. Furthermore, one-on-one classification showed that classification rates increase with the time between two sessions, indicating that changes of gait patterns appear at different time-scales. Discernable characteristics between repeated sessions indicate continuous intrinsic changes in intra-individual gait patterns and suggest a predominant role of deterministic processes in human motor control and learning. Natural changes of gait patterns without any externally induced injury or intervention may reflect continuous adaptations of the motor system over several time-scales. Accordingly, the modelling of walking by means of average gait patterns that are assumed to be near constant over time needs to be reconsidered in the context of
Sun, Yan; Lang, Maoxiang; Wang, Danzhu
2016-07-28
The transportation of hazardous materials is always accompanied by considerable risk that impacts public and environmental security. As an efficient and reliable transportation organization, a multimodal service should participate in the transportation of hazardous materials. In this study, we focus on transporting hazardous materials through the multimodal service network and explore the hazardous materials multimodal routing problem from the operational level of network planning. To formulate this problem more practicably, minimizing the total generalized costs of transporting the hazardous materials and the social risk along the planned routes are set as the optimization objectives. Meanwhile, the following formulation characteristics are comprehensively modelled: (1) specific customer demands; (2) multiple hazardous material flows; (3) capacitated schedule-based rail service and uncapacitated time-flexible road service; and (4) an environmental risk constraint. A bi-objective mixed integer nonlinear programming model is first built to formulate the routing problem that combines the formulation characteristics above. Then linear reformulations are developed to linearize and improve the initial model so that it can be effectively solved by exact solution algorithms on standard mathematical programming software. By utilizing the normalized weighted sum method, we can generate the Pareto solutions to the bi-objective optimization problem for a specific case. Finally, a large-scale empirical case study from the Beijing-Tianjin-Hebei Region in China is presented to demonstrate the feasibility of the proposed methods in dealing with the practical problem. Various scenarios are also discussed in the case study.
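The normalized weighted sum method mentioned for generating Pareto solutions can be sketched as follows. The candidate (cost, risk) pairs here are invented placeholders; in the paper, each evaluation would come from solving the linearized routing model.

```python
# Sketch of the normalized weighted sum scalarization (hypothetical data):
# each candidate is a (generalized cost, social risk) pair; sweeping the
# weight w traces out Pareto solutions of the bi-objective problem.

def normalized_weighted_sum(solutions, w):
    """Return the solution minimizing w*cost_norm + (1-w)*risk_norm."""
    cmin = min(s[0] for s in solutions); cmax = max(s[0] for s in solutions)
    rmin = min(s[1] for s in solutions); rmax = max(s[1] for s in solutions)
    def score(s):
        cn = (s[0] - cmin) / (cmax - cmin) if cmax > cmin else 0.0
        rn = (s[1] - rmin) / (rmax - rmin) if rmax > rmin else 0.0
        return w * cn + (1 - w) * rn
    return min(solutions, key=score)

# invented candidate routes: cheaper routes carry more risk
sols = [(100.0, 9.0), (120.0, 6.0), (150.0, 4.0), (200.0, 3.0)]
pareto = {normalized_weighted_sum(sols, w / 10) for w in range(11)}
print(sorted(pareto))
```

Normalizing each objective to [0, 1] before weighting keeps the sweep meaningful when cost and risk live on very different scales.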
Xie, Hui; Scott, Jason L.; Caldwell, Linda L.
2018-01-01
There is limited understanding of the relationship between physical activity and use of screen-based media, two important behaviors associated with adolescents’ health outcomes. To understand this relationship, researchers may need to consider not only physical activity level but also physical activity experience (i.e., affective experience obtained from doing physical activity). Using a sample predominantly consisting of African and Latino American urban adolescents, this study examined the interrelationships between physical activity experience, physical activity level, and use of screen-based media during leisure time. Data collected using self-report, paper and pencil surveys was analyzed using structural equation modeling. Results showed that physical activity experience was positively associated with physical activity level and had a direct negative relationship with use of non-active video games for males and a direct negative relationship with use of computer/Internet for both genders, after controlling for physical activity level. Physical activity level did not have a direct relationship with use of non-active video games or computer/Internet. However, physical activity level had a direct negative association with use of TV/movies. This study suggests that physical activity experience may play an important role in promoting physical activity and thwarting use of screen-based media among adolescents. PMID:29410634
Quadratic Term Structure Models in Discrete Time
Marco Realdon
2006-01-01
This paper extends the results on quadratic term structure models in continuous time to the discrete time setting. The continuous time setting can be seen as a special case of the discrete time one. Recursive closed form solutions for zero coupon bonds are provided even in the presence of multiple correlated underlying factors. Pricing bond options requires simple integration. Model parameters may well be time dependent without scuppering such tractability. Model estimation does not require a r...
Time series modeling for syndromic surveillance
Directory of Open Access Journals (Sweden)
Mandl Kenneth D
2003-01-01
Full Text Available Abstract Background Emergency department (ED)-based syndromic surveillance systems identify abnormally high visit rates that may be an early signal of a bioterrorist attack. For example, an anthrax outbreak might first be detectable as an unusual increase in the number of patients reporting to the ED with respiratory symptoms. Reliably identifying these abnormal visit patterns requires a good understanding of the normal patterns of healthcare usage. Unfortunately, systematic methods for determining the expected number of ED visits on a particular day have not yet been well established. We present here a generalized methodology for developing models of expected ED visit rates. Methods Using time-series methods, we developed robust models of ED utilization for the purpose of defining expected visit rates. The models were based on nearly a decade of historical data at a major metropolitan academic, tertiary care pediatric emergency department. The historical data were fit using trimmed-mean seasonal models, and additional models were fit with autoregressive integrated moving average (ARIMA residuals to account for recent trends in the data. The detection capabilities of the model were tested with simulated outbreaks. Results Models were built both for overall visits and for respiratory-related visits, classified according to the chief complaint recorded at the beginning of each visit. The mean absolute percentage error of the ARIMA models was 9.37% for overall visits and 27.54% for respiratory visits. A simple detection system based on the ARIMA model of overall visits was able to detect 7-day-long simulated outbreaks of 30 visits per day with 100% sensitivity and 97% specificity. Sensitivity decreased with outbreak size, dropping to 94% for outbreaks of 20 visits per day, and 57% for 10 visits per day, all while maintaining a 97% benchmark specificity. Conclusions Time series methods applied to historical ED utilization data are an important tool
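A heavily simplified sketch of the expected-visit-rate idea: estimate a trimmed-mean baseline per weekday and flag days that exceed it by a margin. The published system additionally models recent trends with ARIMA residuals; all counts and the margin below are invented.

```python
# Invented-counts sketch: weekday trimmed-mean baseline plus a simple
# multiplicative alarm threshold. The real system layers ARIMA modeling
# of residuals on top of the seasonal baseline.

def trimmed_mean(xs, trim=0.1):
    """Mean after dropping the top and bottom `trim` fraction of values."""
    xs = sorted(xs)
    k = int(len(xs) * trim)
    core = xs[k:len(xs) - k] if len(xs) > 2 * k else xs
    return sum(core) / len(core)

def expected_by_weekday(history):
    """history: iterable of (weekday, visit_count) -> {weekday: expected count}."""
    byday = {}
    for wd, c in history:
        byday.setdefault(wd, []).append(c)
    return {wd: trimmed_mean(cs) for wd, cs in byday.items()}

def flag(observed, expected, margin=1.3):
    """Raise a signal when observed visits exceed expectation by the margin."""
    return observed > margin * expected
```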
Time lags in biological models
MacDonald, Norman
1978-01-01
In many biological models it is necessary to allow the rates of change of the variables to depend on the past history, rather than only the current values, of the variables. The models may require discrete lags, with the use of delay-differential equations, or distributed lags, with the use of integro-differential equations. In these lecture notes I discuss the reasons for including lags, especially distributed lags, in biological models. These reasons may be inherent in the system studied, or may be the result of simplifying assumptions made in the model used. I examine some of the techniques available for studying the solution of the equations. A large proportion of the material presented relates to a special method that can be applied to a particular class of distributed lags. This method uses an extended set of ordinary differential equations. I examine the local stability of equilibrium points, and the existence and frequency of periodic solutions. I discuss the qualitative effects of lags, and how these...
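The "extended set of ordinary differential equations" for a particular class of distributed lags is commonly known as the linear chain trick: an Erlang (gamma) delay kernel with integer shape m is equivalent to appending m auxiliary ODEs. A sketch for a delayed logistic model, with illustrative parameter values:

```python
# Linear chain trick sketch: a distributed lag with an Erlang (gamma) kernel
# of shape m and rate a is handled by m extra ODEs. Illustrative model:
# delayed logistic growth dx/dt = r*x*(1 - y_m/K), where y_m lags x.

def step(state, r, K, a, m, dt):
    x, y = state[0], state[1:]
    dx = r * x * (1 - y[-1] / K)                      # growth sees delayed x
    dy = [a * (x - y[0])] + [a * (y[i - 1] - y[i]) for i in range(1, m)]
    return [x + dt * dx] + [yi + dt * dyi for yi, dyi in zip(y, dy)]

def simulate(x0, r=1.0, K=10.0, a=4.0, m=2, dt=0.01, steps=5000):
    state = [x0] + [x0] * m          # start the chain in equilibrium with x
    for _ in range(steps):           # simple forward-Euler integration
        state = step(state, r, K, a, m, dt)
    return state[0]
```

With mean delay m/a = 0.5 the delay is short relative to 1/r, so the population settles at the carrying capacity K; lengthening the delay is what produces the oscillations the lectures analyse.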
International Nuclear Information System (INIS)
Magome, T; Haga, A; Igaki, H; Sekiya, N; Masutani, Y; Sakumi, A; Mukasa, A; Nakagawa, K
2014-01-01
Purpose: Although many outcome prediction models based on dose-volume information have been proposed, it is well known that the prognosis may also be affected by multiple clinical factors. The purpose of this study is to predict the survival time after radiotherapy for high-grade glioma patients based on features including clinical and dose-volume histogram (DVH) information. Methods: A total of 35 patients with high-grade glioma (oligodendroglioma: 2, anaplastic astrocytoma: 3, glioblastoma: 30) were selected in this study. All patients were treated with a prescribed dose of 30–80 Gy after surgical resection or biopsy from 2006 to 2013 at The University of Tokyo Hospital. All cases were randomly separated into a training dataset (30 cases) and a test dataset (5 cases). The survival time after radiotherapy was predicted based on a multiple linear regression analysis and an artificial neural network (ANN) using 204 candidate features. The candidate features included 12 clinical features (tumor location, extent of surgical resection, treatment duration of radiotherapy, etc.) and 192 DVH features (maximum dose, minimum dose, D95, V60, etc.). The effective features for the prediction were selected by a step-wise method using the 30 training cases. The prediction accuracy was evaluated by the coefficient of determination (R²) between the predicted and actual survival time for the training and test datasets. Results: In the multiple regression analysis, R² between the predicted and actual survival time was 0.460 for the training dataset and 0.375 for the test dataset. In the ANN analysis, R² was 0.806 for the training dataset and 0.811 for the test dataset. Conclusion: Although a large number of patients would be needed for more accurate and robust prediction, our preliminary result showed the potential to predict the outcome in patients with high-grade glioma. This work was partly supported by the JSPS Core
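The evaluation metric is the ordinary coefficient of determination, R² = 1 - SS_res / SS_tot. For reference, a minimal implementation:

```python
# Coefficient of determination, the metric used above to compare the
# multiple-regression and ANN predictions against actual survival times.

def r_squared(actual, predicted):
    mean = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1.0 - ss_res / ss_tot
```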
Werker, Gregory R; Sharif, Behnam; Sun, Huiying; Cooper, Curtis; Bansback, Nick; Anis, Aslam H
2014-02-03
Seasonal influenza vaccination offers one of the best population-level protections against influenza-like illness (ILI). For most people, a single dose prior to the flu season offers adequate immunogenicity. HIV+ patients, however, tend to exhibit a shorter period of clinical protection, and therefore may not retain immunogenicity for the entire season. Building on the work of Nosyk et al. (2011), which determined that a single dose is the optimal dosing strategy for HIV+ patients, we investigate the optimal time to administer this vaccination. Using data from the "single dose" treatment arm of an RCT conducted at 12 CIHR Canadian HIV Trials Network sites, we estimated semimonthly clinical seroprotection levels for a cohort (N=93) based on HAI titer levels. These estimates were combined with CDC attack rate data for the three main strains of seasonal influenza to estimate instances of ILI under different vaccination timing strategies. Using bootstrap resampling of the cohort, nine years of CDC data, and parameter distributions, we developed a Markov cohort model that included probabilistic sensitivity analysis. Cost, quality-adjusted life-years (QALYs), and net monetary benefits are presented for each timing strategy. The beginning of December is the optimal time for HIV+ patients to receive the seasonal influenza vaccine. Assuming a willingness-to-pay threshold of $50,000, the net monetary benefit associated with a Dec 1 vaccination date is $19,501.49 and the annual QALY is 0.833744. Our results support a policy of administering the seasonal influenza vaccination for this population in the middle of November or the beginning of December, assuming nothing is known about the upcoming flu season. But because the difference between this strategy and the CDC guideline is small (12 deaths averted per year and a savings of $60 million across the HIV+ population in the US), more research is needed concerning strategies for subpopulations. Copyright © 2013 Elsevier Ltd. All rights reserved.
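The timing comparison can be caricatured as follows: score each vaccination date by the expected ILI risk given a waning seroprotection curve, and compare strategies by net monetary benefit. The protection and attack-rate profiles below are invented placeholders, not the trial or CDC data, and real protection ramps up after vaccination rather than appearing instantly.

```python
# Caricature of the timing analysis (all numbers invented): protection after
# vaccination at semimonthly period s peaks and then wanes; expected ILI risk
# is the attack rate weighted by the unprotected fraction in each period.

WTP = 50_000.0  # willingness-to-pay threshold used in the study

def protection_from(s, n=12, peak=0.9, wane=0.08):
    """Seroprotection in each of n semimonthly periods if vaccinated at s."""
    return [0.0 if t < s else max(0.0, peak - wane * (t - s)) for t in range(n)]

def season_risk(protection, attack):
    """Expected ILI probability over the season."""
    return sum(a * (1 - p) for a, p in zip(attack, protection))

def nmb(qaly, cost):
    """Net monetary benefit at the given willingness-to-pay."""
    return WTP * qaly - cost

# invented attack-rate profile peaking mid-season
attack = [0.001, 0.001, 0.002, 0.004, 0.008, 0.012,
          0.015, 0.012, 0.008, 0.004, 0.002, 0.001]

best = min(range(8), key=lambda s: season_risk(protection_from(s), attack))
print("best vaccination period index:", best)
```

Even in this toy version, the trade-off the paper studies is visible: vaccinate too early and protection has waned by the seasonal peak; too late and the early part of the season is unprotected.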
Chuang, Ching-Cheng; Lee, Chia-Yen; Chen, Chung-Ming; Hsieh, Yao-Sheng; Liu, Tsan-Chi; Sun, Chia-Wei
2012-05-01
This study proposed diffuser-aided diffuse optical imaging (DADOI) as a new approach to improve the performance of conventional diffuse optical tomography (DOT) for breast imaging. The 3-D breast model for Monte Carlo simulation is remodeled from a clinical MRI image. In the DADOI approach, the modified Beer-Lambert law is adopted in place of the complex inverse-problem algorithms for mapping the spatial distribution, and depth information is obtained from a time-of-flight estimation. The simulation results demonstrate that the time-resolved Monte Carlo method is capable of performing source-detector separation analysis. The dynamics of photon migration with various source-detector separations are analyzed for the characterization of breast tissue and estimation of optode arrangement. The source-detector separations should be less than 4 cm for breast imaging in a DOT system. The feasibility of DADOI was also demonstrated in this study: the DADOI approach can provide better imaging contrast and faster imaging than conventional DOT measurement, and it possesses great potential for detecting breast tumors at an early stage and for chemotherapy monitoring, implying good feasibility for clinical application.
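The modified Beer-Lambert law that DADOI substitutes for tomographic reconstruction relates intensity changes to absorption changes. In one common (natural-log) form, ΔOD = ln(I0/I) = Δμa · L · DPF, where L is the source-detector separation and DPF the differential pathlength factor; a sketch with illustrative values:

```python
# Modified Beer-Lambert sketch (natural-log convention, invented values):
# delta_OD = ln(I0 / I) = delta_mu_a * L * DPF.

import math

def delta_od(i_baseline, i_measured):
    """Change in optical density from an intensity change."""
    return math.log(i_baseline / i_measured)

def delta_mu_a(i_baseline, i_measured, separation_cm, dpf):
    """Absorption-coefficient change (1/cm) for one source-detector pair."""
    return delta_od(i_baseline, i_measured) / (separation_cm * dpf)

# e.g. a 3 cm separation (the study suggests < 4 cm for breast), DPF ~ 5
print(delta_mu_a(1.0, 0.9, 3.0, 5.0))
```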
Formal Modeling and Analysis of Timed Systems
DEFF Research Database (Denmark)
Larsen, Kim Guldstrand; Niebert, Peter
This book constitutes the thoroughly refereed post-proceedings of the First International Workshop on Formal Modeling and Analysis of Timed Systems, FORMATS 2003, held in Marseille, France in September 2003. The 19 revised full papers presented together with an invited paper and the abstracts of two invited talks were carefully selected from 36 submissions during two rounds of reviewing and improvement. All current aspects of formal methods for modeling and analyzing timed systems are addressed; among the timed systems dealt with are timed automata, timed Petri nets, max-plus algebras, real-time systems, discrete time systems, timed languages, and real-time operating systems.
Holzwarth, Frédéric; Rüger, Nadja; Wirth, Christian
2015-03-01
Biodiversity and ecosystem functioning (BEF) research has progressed from the detection of relationships to elucidating their drivers and underlying mechanisms. In this context, replacing taxonomic predictors by trait-based measures of functional composition (FC), which bridge functions of species and of ecosystems, is a widely used approach. The inherent challenge of trait-based approaches is the multi-faceted, dynamic and hierarchical nature of trait influence: (i) traits may act via different facets of their distribution in a community, (ii) their influence may change over time and (iii) traits may influence processes at different levels of the natural hierarchy of organization. Here, we made use of the forest ecosystem model 'LPJ-GUESS', parametrized with empirical trait data, which produces output on individual performance, community assembly, and stand-level states and processes. To address the three challenges, we resolved the dynamics of the top-level ecosystem function 'annual biomass change' hierarchically into its various component processes (growth, leaf and root turnover, recruitment and mortality) and states (stand structures, water stress), and traced the influence of different facets of FC along this hierarchy in a path analysis. We found an independent influence of functional richness, dissimilarity and identity on ecosystem states and processes and hence biomass change. Biodiversity effects were only positive during early succession and later turned negative. Unexpectedly, resource acquisition (growth, recruitment) and conservation (mortality, turnover) played an equally important role throughout the succession. These results add to a mechanistic understanding of biodiversity effects and place a caveat on simplistic approaches omitting hierarchical levels when analysing BEF relationships. They support the view that BEF relationships experience dramatic shifts over successional time that should be acknowledged in mechanistic theories.
Chen, Bo; Wu, Zhongru; Liang, Jiachen; Dou, Yanhong
2017-01-01
The modeling of cracks and identification of dam behavior changes are difficult issues in dam health monitoring research. In this paper, a time-varying identification model for crack monitoring data is built using support vector regression (SVR) and the Bayesian evidence framework (BEF). First, the SVR method is adopted for better modeling of the nonlinear relationship between the crack opening displacement (COD) and its influencing factors. Second, the BEF approach is applied to determine the optimal SVR modeling parameters ...
Sassani, Sofia G; Theofani, Antonia; Tsangaris, Sokrates; Sokolis, Dimitrios P
2013-09-27
Arteriovenous fistulae have been previously created by our group, through implantation of e-PTFE grafts between the carotid artery and jugular vein in healthy pigs, to gather comprehensive data on the time-course of the adapted geometry, composition, and biomechanical properties of the venous wall exposed to chronic increases in pressure and flow. The aim of this study was to mathematically assess the biomechanical adaptation of venous wall, by characterizing our previous in vitro inflation/extension testing data obtained 2, 4, and 12 weeks post-fistula, using a microstructure-based material model. Our choice for such a model considered a quadratic function for elastin with a four-fiber family term for collagen, and permitted realistic data characterization for both overloaded and control veins. As structural validation to the hemodynamically-driven differences in the material response, computerized histology was employed to quantitate the composition and orientation of collagen and elastin-fiber networks. The parameter values optimized showed marked differences among the overloaded and control veins, namely decrease in the quadratic function parameters and increase in the four-fiber family parameters. Differences among the two vein types were highlighted with respect to the underlying microstructure, namely the reduced elastin and increased collagen contents induced by pressure and flow-overload. Explicit correlations were found of the material parameters with the two basic scleroprotein contents, substantiating the material model used and the characterization findings presented. Our results are expected to improve the current understanding of the dynamics of venous adaptation under sustained pressure- and flow-overload conditions, for which data are largely unavailable and contradictory. Copyright © 2013 Elsevier Ltd. All rights reserved.
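Four-fiber family models of this kind are typically built from a strain-energy function that combines an isotropic (elastin-dominated) part with four anisotropic collagen-fiber families. As a hedged illustration only (the symbols and the exact quadratic form below are my notation, not necessarily the authors'):

```latex
W = c_1 (I_1 - 3) + c_2 (I_1 - 3)^2
  + \sum_{k=1}^{4} \frac{b_1^k}{4\, b_2^k}
    \left\{ \exp\!\left[ b_2^k \left( \lambda_k^2 - 1 \right)^2 \right] - 1 \right\}
```

where \(I_1\) is the first invariant of the right Cauchy-Green tensor, \(\lambda_k\) the stretch of fiber family \(k\), and \(c_i\), \(b_i^k\) material parameters. The reported trends, decreasing quadratic (elastin) parameters and increasing fiber-family (collagen) parameters in overloaded veins, then map directly onto the reduced elastin and increased collagen contents found histologically.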
RTMOD: Real-Time MODel evaluation
Energy Technology Data Exchange (ETDEWEB)
Graziani, G; Galmarini, S. [Joint Research centre, Ispra (Italy); Mikkelsen, T. [Risoe National Lab., Wind Energy and Atmospheric Physics Dept. (Denmark)
2000-01-01
The 1998 - 1999 RTMOD project is a system based on an automated statistical evaluation for the inter-comparison of real-time forecasts produced by long-range atmospheric dispersion models for national nuclear emergency predictions of cross-boundary consequences. The background of RTMOD was the 1994 ETEX project that involved about 50 models run in several institutes around the world to simulate two real tracer releases involving a large part of the European territory. In the preliminary phase of ETEX, three dry runs (i.e. simulations in real-time of fictitious releases) were carried out. At that time, the World Wide Web was not available to all the exercise participants, and plume predictions were therefore submitted to JRC-Ispra by fax and regular mail for subsequent processing. The rapid development of the World Wide Web in the second half of the nineties, together with the experience gained during the ETEX exercises, suggested the development of this project. RTMOD featured a web-based user-friendly interface for data submission and an interactive program module for displaying, inter-comparison and analysis of the forecasts. RTMOD focussed on model inter-comparison of concentration predictions at the nodes of a regular grid with 0.5 degrees of resolution both in latitude and in longitude, the domain grid extending from 5W to 40E and 40N to 65N. Hypothetical releases were notified around the world to the 28 model forecasters via the web one day in advance. They then accessed the RTMOD web page for detailed information on the actual release and, as soon as possible, uploaded their predictions to the RTMOD server, after which they could start their inter-comparison analysis with other modelers. When additional forecast data arrived, the existing statistical results were recalculated to include all available predictions. The new web-based RTMOD concept has proven useful as a practical decision-making tool for real-time model evaluation.
Kiss, S.; Sarfraz, M.
2004-01-01
Presents a method to connect VRML (Virtual Reality Modeling Language) and Java components in a Web page using EAI (External Authoring Interface), which makes it possible to interactively generate and edit VRML meshes. The meshes used are based on regular grids, to provide an interaction and modeling
Kiss, S.; Banissi, E.; Khosrowshahi, F.; Sarfraz, M.; Ursyn, A.
2001-01-01
Presents a method to connect VRML (Virtual Reality Modeling Language) and Java components in a Web page using EAI (External Authoring Interface), which makes it possible to interactively generate and edit VRML meshes. The meshes used are based on regular grids, to provide an interaction and modeling
Directory of Open Access Journals (Sweden)
Bo Chen
2017-01-01
Full Text Available The modeling of cracks and identification of dam behavior changes are difficult issues in dam health monitoring research. In this paper, a time-varying identification model for crack monitoring data is built using support vector regression (SVR) and the Bayesian evidence framework (BEF). First, the SVR method is adopted for better modeling of the nonlinear relationship between the crack opening displacement (COD) and its influencing factors. Second, the BEF approach is applied to determine the optimal SVR modeling parameters, including the penalty coefficient, the loss coefficient, and the width coefficient of the radial kernel function, under the principle that the prediction errors between the monitored and the model forecasted values are as small as possible. Then, considering the predicted COD, the historical maximum COD, and the time-dependent component, forewarning criteria are proposed for identifying the time-varying behavior of cracks and the degree of abnormality of dam health. Finally, an example of modeling and forewarning analysis is presented using two monitoring subsequences from a real structural crack in the Chencun concrete arch-gravity dam. The findings indicate that the proposed time-varying model provides predictions that fit the nonlinear behavior more accurately and is suitable for use in evaluating the behavior of cracks in dams.
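As a rough sketch of the regression step, the snippet below fits a nonlinear response with an RBF kernel. Plain kernel ridge regression is used here as a stand-in for SVR (it replaces the ε-insensitive loss with a squared loss), so that the roles of the kernel width coefficient and the penalty coefficient, two of the hyperparameters the Bayesian evidence framework tunes, are visible without extra dependencies. All data and parameter values are invented for illustration.

```python
import numpy as np

def rbf_kernel(A, B, width):
    # Gaussian (RBF) kernel matrix; `width` plays the role of the
    # width coefficient of the radial kernel function.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_krr(X, y, width, penalty):
    # Kernel ridge regression: alpha = (K + penalty*I)^-1 y.
    K = rbf_kernel(X, X, width)
    return np.linalg.solve(K + penalty * np.eye(len(X)), y)

def predict_krr(X_train, alpha, X_new, width):
    return rbf_kernel(X_new, X_train, width) @ alpha

# Toy stand-in for "COD vs influencing factors": a smooth nonlinear response.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(80, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.01 * rng.standard_normal(80)

alpha = fit_krr(X, y, width=0.8, penalty=1e-3)
y_hat = predict_krr(X, alpha, X, width=0.8)
print(np.sqrt(np.mean((y - y_hat) ** 2)))  # small training RMSE
```

In the paper's setting, X would hold the influencing factors, y the monitored COD values, and the evidence framework, rather than a fixed choice, would select the width and penalty.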
Travel Time Reliability for Urban Networks : Modelling and Empirics
Zheng, F.; Liu, Xiaobo; van Zuylen, H.J.; Li, Jie; Lu, Chao
2017-01-01
The importance of travel time reliability in traffic management, control, and network design has received a lot of attention in the past decade. In this paper, a network travel time distribution model based on the Johnson curve system is proposed. The model is applied to field travel time data.
Lag space estimation in time series modelling
DEFF Research Database (Denmark)
Goutte, Cyril
1997-01-01
The purpose of this article is to investigate some techniques for finding the relevant lag-space, i.e. input information, for time series modelling. This is an important aspect of time series modelling, as it conditions the design of the model through the regressor vector a.k.a. the input layer...
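One common way to choose the lag space is to fit models of increasing lag order and compare an information criterion. The sketch below does this for linear AR models with a Gaussian AIC; the simulated data, the coefficients and the plain least-squares fit are illustrative, not the article's method.

```python
import numpy as np

def ar_aic(x, p):
    # Fit AR(p) by least squares and return a Gaussian AIC
    # (constant terms dropped): n * log(sigma^2) + 2 * p.
    n = len(x)
    X = np.column_stack([x[p - j : n - j] for j in range(1, p + 1)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma2 = np.mean((y - X @ coef) ** 2)
    return len(y) * np.log(sigma2) + 2 * p

# Simulated AR(2) series: the relevant lag space is {1, 2}.
rng = np.random.default_rng(1)
n = 3000
x = np.zeros(n)
for t in range(2, n):
    x[t] = 0.5 * x[t - 1] - 0.4 * x[t - 2] + rng.standard_normal()

aics = {p: ar_aic(x, p) for p in range(1, 7)}
best = min(aics, key=aics.get)
print(best)  # selects a lag order of at least 2
```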
Computer Aided Continuous Time Stochastic Process Modelling
DEFF Research Database (Denmark)
Kristensen, N.R.; Madsen, Henrik; Jørgensen, Sten Bay
2001-01-01
A grey-box approach to process modelling that combines deterministic and stochastic modelling is advocated for identification of models for model-based control of batch and semi-batch processes. A computer-aided tool designed for supporting decision-making within the corresponding modelling cycle...
Directory of Open Access Journals (Sweden)
Ronny Feuer
Full Text Available Gene expression analysis is an essential part of biological and medical investigations. Quantitative real-time PCR (qPCR) is characterized by excellent sensitivity, dynamic range and reproducibility, and is still regarded as the gold standard for quantifying transcript abundance. Parallelization of qPCR, such as on the microfluidic TaqMan Fluidigm Biomark platform, enables evaluation of multiple transcripts in samples treated under various conditions. Despite advanced technologies, correct evaluation of the measurements remains challenging. The most widely used methods for evaluating or calculating gene expression data include geNorm and ΔΔCt, respectively. They rely on one or several stable reference genes (RGs) for normalization, thus potentially causing biased results. We therefore applied multivariable regression with a tailored error model to overcome the necessity of stable RGs. We developed a RG-independent data normalization approach based on a tailored linear error model for parallel qPCR data, called LEMming. It uses the assumption that the mean Ct values within samples of similarly treated groups are equal. Performance of LEMming was evaluated in three data sets with different stability patterns of RGs and compared to the results of geNorm normalization. Data set 1 showed that both methods give similar results if stable RGs are available. Data set 2 included RGs which are stable according to geNorm criteria, but became differentially expressed in normalized data evaluated by a t-test. geNorm-normalized data showed an effect of a shifted mean per gene per condition whereas LEMming-normalized data did not. Comparing the decrease in standard deviation from raw data to geNorm and to LEMming normalization, the latter was superior. In data set 3, stable RGs were available according to geNorm's average expression stability and pairwise variation, but t-tests of the raw data contradicted this. Normalization with RGs resulted in distorted data contradicting ...
Underwater Time Service and Synchronization Based on Time Reversal Technique
Lu, Hao; Wang, Hai-bin; Aissa-El-Bey, Abdeldjalil; Pyndiah, Ramesh
2010-09-01
Real time service and synchronization are very important to many underwater systems, but existing time service and synchronization methods do not work well, due to the multi-path propagation and random phase fluctuation of signals in the ocean channel. The time reversal mirror (TRM) technique can realize energy concentration through self-matching of the ocean channel and has very good spatial and temporal focusing properties. Based on the TRM technique, we present the Time Reversal Mirror Real Time service and synchronization (TRMRT) method, which can bypass multi-path processing on the server side and reduce multi-path contamination on the client side, so TRMRT can improve the accuracy of time service. Furthermore, as an efficient and precise method of time service, TRMRT could be widely used in underwater exploration activities and in underwater navigation and positioning systems.
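The focusing property can be sketched numerically: retransmitting the time-reversed channel response through the same multipath channel yields the channel autocorrelation, whose energy concentrates at a single instant. The channel taps below are invented for illustration.

```python
import numpy as np

# A sparse multipath channel impulse response: several delayed taps.
h = np.zeros(64)
taps = [3, 17, 30, 44]
h[taps] = [1.0, 0.7, -0.5, 0.4]

# Probe the channel, time-reverse the recorded response, and retransmit
# it through the same channel.  The received signal is the channel
# autocorrelation, which peaks sharply at one instant (self-matching).
retransmit = h[::-1]
focused = np.convolve(retransmit, h)

peak = np.argmax(np.abs(focused))
print(peak, len(h) - 1)  # the focus lands at index len(h) - 1
```

The peak value equals the total multipath energy, which is what lets a client timestamp one sharp arrival instead of resolving each path separately.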
Directory of Open Access Journals (Sweden)
Gabriela Llanet Siles
2015-05-01
Full Text Available In this study deformation processes in the northern Zona Metropolitana del Valle de Mexico (ZMVM) are evaluated by means of advanced multi-temporal interferometry. ERS and ENVISAT time series, covering approximately an 11-year period (between 1999 and 2010), were produced, showing mainly linear subsidence behaviour for almost the entire area under study, but with increasing rates that reach up to 285 mm/yr. Important non-linear deformation was identified in certain areas, presumably suggesting interaction between subsidence and other processes. Thus, a methodology for the identification of probable fracturing zones, based on discrimination and modelling of the non-linear (quadratic function) component, is presented. This component was mapped and temporal subsidence evolution profiles were constructed across areas where notable acceleration (maximum of 8 mm/yr²) or deceleration (maximum of −9 mm/yr²) is found. This methodology enables the location of potential soil fractures that could impact relevant infrastructure such as the Tunel Emisor Oriente (TEO), along which rates exceed 200 mm/yr. Additionally, subsidence behaviour during wet and dry seasons is tackled in partially urbanized areas. This paper provides useful information for geological risk assessment in the area.
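Separating a linear rate from a quadratic (acceleration) component can be sketched with an ordinary polynomial fit; the rates and noise level below are invented, not the measured ZMVM values.

```python
import numpy as np

# Synthetic subsidence time series: linear rate plus acceleration
# (illustrative values only).
t = np.linspace(0, 11, 120)              # years
true_rate, true_accel = -40.0, -4.0      # mm/yr, mm/yr^2
d = true_rate * t + 0.5 * true_accel * t ** 2
d = d + np.random.default_rng(3).normal(0, 2.0, t.size)

# Fit d(t) = a*t^2 + b*t + c: the quadratic coefficient isolates the
# non-linear component, and the acceleration is 2*a.
a, b, c = np.polyfit(t, d, 2)
print(round(2 * a, 1))   # estimated acceleration in mm/yr^2
```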
TIME SERIES ANALYSIS USING A UNIQUE MODEL OF TRANSFORMATION
Directory of Open Access Journals (Sweden)
Goran Klepac
2007-12-01
Full Text Available REFII is an original mathematical model for time series data mining. The main purpose of the model is to automate time series analysis through a unique transformation model of time series. An advantage of this approach is that it links different methods for time series analysis, connects traditional data mining tools to time series, and supports the construction of new algorithms for analyzing time series. It is worth mentioning that the REFII model is not a closed system, i.e., it is not restricted to a fixed, finite set of methods. At its core, it is a model for transforming the values of a time series, which prepares data to be used by different sets of methods based on the same transformation model within the problem domain. The REFII model thus offers a new approach to time series analysis based on a unique model of transformation, which serves as a basis for all kinds of time series analysis. A further advantage of the REFII model is its possible application in many different areas, such as finance, medicine, voice recognition, face recognition and text mining.
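The abstract does not spell out the transformation itself, but its flavor can be sketched as a symbolic re-coding of a normalized series into rise/fall/equal segments with associated slope magnitudes. This is a simplified, hypothetical reading of the idea, not the actual REFII transformation.

```python
def symbolize(series, tol=1e-9):
    # Simplified REFII-style transformation: encode each step of a
    # normalized time series as R (rise), F (fall) or E (equal),
    # together with the magnitude of the change.
    lo, hi = min(series), max(series)
    norm = [(v - lo) / (hi - lo) for v in series]  # scale to [0, 1]
    out = []
    for prev, cur in zip(norm, norm[1:]):
        delta = cur - prev
        if abs(delta) <= tol:
            sym = "E"
        elif delta > 0:
            sym = "R"
        else:
            sym = "F"
        out.append((sym, round(abs(delta), 3)))
    return out

print(symbolize([10, 12, 12, 9, 14]))
# [('R', 0.4), ('E', 0.0), ('F', 0.6), ('R', 1.0)]
```

Once a series is in this symbolic domain, different analysis methods (clustering, rule induction, similarity search) can share the one representation, which is the point the abstract makes.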
Müller, Jakob; Thirion, Christian; Pfaffl, Michael W
2011-01-15
Recombinant viral vectors are widespread tools for the transfer of genetic material in various modern biotechnological applications, for example RNA interference (RNAi). However, accurate and reproducible titer assignment is the basic step for most downstream applications requiring a precise multiplicity of infection (MOI) adjustment. As a necessary scaffold for the studies described in this work, we introduce a quantitative real-time PCR (qPCR) based approach for viral particle measurement. A remaining problem concerning physiological effects is that the application of viral vectors is often accompanied by toxic effects on the individual target. To determine the critical viral dose leading to cell death we developed an electric cell-substrate impedance sensing (ECIS) based assay. With ECIS technology, the impedance change of a current flow through the cell culture medium in an array plate is measured in a non-invasive manner, visualizing effects like cell attachment, cell-cell contacts or proliferation. Here we describe the potential of this online measurement technique in an in vitro model using the porcine ileal epithelial cell line IPI-2I in combination with an adenoviral transfection vector (Ad5-derivate). This approach shows a clear dose-dependent toxic effect, as the amount of applied virus highly correlates (p<0.001) with the level of cell death. Thus this assay offers the possibility to discriminate the minimal non-toxic dose of the individual transfection method. In addition, this work suggests that the ECIS device makes it feasible to transfer this assay to multiple other cytotoxicological questions. Copyright © 2010 Elsevier B.V. All rights reserved.
Time series modeling, computation, and inference
Prado, Raquel
2010-01-01
The authors systematically develop a state-of-the-art analysis and modeling of time series. … this book is well organized and well written. The authors present various statistical models for engineers to solve problems in time series analysis. Readers no doubt will learn state-of-the-art techniques from this book.-Hsun-Hsien Chang, Computing Reviews, March 2012My favorite chapters were on dynamic linear models and vector AR and vector ARMA models.-William Seaver, Technometrics, August 2011… a very modern entry to the field of time-series modelling, with a rich reference list of the current literature.
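The dynamic linear models the reviewers highlight can be illustrated with the simplest member of the family, the local-level DLM, filtered by the standard Kalman recursions. This is a minimal sketch with invented noise levels, not material from the book.

```python
import numpy as np

def local_level_filter(y, sigma_obs, sigma_state, m0=0.0, c0=1e6):
    # Kalman filter for the local-level DLM:
    #   y_t     = theta_t + v_t,      v_t ~ N(0, sigma_obs^2)
    #   theta_t = theta_{t-1} + w_t,  w_t ~ N(0, sigma_state^2)
    m, c = m0, c0
    means = []
    for obs in y:
        r = c + sigma_state ** 2          # prior variance at time t
        k = r / (r + sigma_obs ** 2)      # Kalman gain
        m = m + k * (obs - m)             # posterior mean
        c = (1 - k) * r                   # posterior variance
        means.append(m)
    return np.array(means)

rng = np.random.default_rng(4)
theta = np.cumsum(rng.normal(0, 0.1, 200))   # slowly drifting true level
y = theta + rng.normal(0, 1.0, 200)          # noisy observations
m = local_level_filter(y, sigma_obs=1.0, sigma_state=0.1)
print(np.mean((m - theta) ** 2) < np.mean((y - theta) ** 2))  # True
```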
DEFF Research Database (Denmark)
Guzinski, Radoslaw; Anderson, M.C.; Kustas, W.P.
2013-01-01
model that enable applications using thermal observations from polar orbiting satellites, such as Terra and Aqua, with day and night overpass times over the area of interest. This allows the application of the DTD model in high latitude regions where large viewing angles preclude the use...... to significantly improved estimation of the heat fluxes from the vegetation canopy during senescence and in forests. When the modified DTD model is run with LST measurements acquired with the Moderate Resolution Imaging Spectroradiometer (MODIS) on board the Terra and Aqua satellites, generally satisfactory...
Model-based biosignal interpretation.
Andreassen, S
1994-03-01
Two relatively new approaches to model-based biosignal interpretation, qualitative simulation and modelling by causal probabilistic networks, are compared to modelling by differential equations. A major problem in applying a model to an individual patient is the estimation of the parameters. The available observations are unlikely to allow a proper estimation of the parameters, and even if they do, the task appears to have exponential computational complexity if the model is non-linear. Causal probabilistic networks have both differential equation models and qualitative simulation as special cases, and they can provide both Bayesian and maximum-likelihood parameter estimates, in most cases in much less than exponential time. In addition, they can calculate the probabilities required for a decision-theoretical approach to medical decision support. The practical applicability of causal probabilistic networks to real medical problems is illustrated by a model of glucose metabolism which is used to adjust insulin therapy in type I diabetic patients.
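Exact inference in a small causal probabilistic network can be done by enumeration, which is what makes Bayesian parameter and state estimates tractable for modest models. Below is a toy three-node sketch; all node names and probabilities are invented for illustration and are not taken from the glucose model described.

```python
# Prior and conditional probability tables (illustrative numbers only).
p_d = {"good": 0.7, "poor": 0.3}              # metabolic control D
p_g_given_d = {"good": 0.2, "poor": 0.8}      # P(glucose high | D)
p_s_given_g = {True: 0.9, False: 0.1}         # P(symptom | glucose high?)

def posterior_d(symptom_observed=True):
    # Exact inference by enumeration: sum out glucose (G),
    # then normalize over the control states (D).
    joint = {}
    for d, pd in p_d.items():
        total = 0.0
        for g_high in (True, False):
            pg = p_g_given_d[d] if g_high else 1 - p_g_given_d[d]
            ps = p_s_given_g[g_high]
            if not symptom_observed:
                ps = 1 - ps
            total += pg * ps
        joint[d] = pd * total
    z = sum(joint.values())
    return {d: v / z for d, v in joint.items()}

post = posterior_d(True)
print(post)  # observing the symptom raises the probability of "poor"
```

For networks too large to enumerate, the same posterior is obtained by junction-tree or sampling algorithms, but the Bayesian logic is identical.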
Degeling, Koen; Schivo, Stefano; Mehra, Niven; Koffijberg, Hendrik; Langerak, Rom; de Bono, Johann S; IJzerman, Maarten J
2017-12-01
With the advent of personalized medicine, the field of health economic modeling is being challenged and the use of patient-level dynamic modeling techniques might be required. We illustrate the usability of two such techniques, timed automata (TA) and discrete event simulation (DES), for modeling personalized treatment decisions. An early health technology assessment on the use of circulating tumor cells, compared with prostate-specific antigen and bone scintigraphy, to inform treatment decisions in metastatic castration-resistant prostate cancer was performed. Both modeling techniques were assessed quantitatively, in terms of intermediate outcomes (e.g., overtreatment) and health economic outcomes (e.g., net monetary benefit). Qualitatively, among others, model structure, agent interactions, data management (i.e., importing and exporting data), and model transparency were assessed. Both models yielded realistic and similar intermediate and health economic outcomes. Overtreatment was reduced by 6.99 and 7.02 weeks by applying circulating tumor cell as a response marker at a net monetary benefit of -€1033 and -€1104 for the TA model and the DES model, respectively. Software-specific differences were observed regarding data management features and the support for statistical distributions, which were considered better for the DES software. Regarding method-specific differences, interactions were modeled more straightforward using TA, benefiting from its compositional model structure. Both techniques prove suitable for modeling personalized treatment decisions, although DES would be preferred given the current software-specific limitations of TA. When these limitations are resolved, TA would be an interesting modeling alternative if interactions are key or its compositional structure is useful to manage multi-agent complex problems. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
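A DES model of this kind reduces to an event queue processed in time order. The sketch below uses a priority queue to estimate overtreatment as the gap between simulated disease progression and its detection at the next monitoring visit; the intervals, rates and patient counts are invented, not the study's values.

```python
import heapq, random

def simulate(monitor_interval_weeks, n_patients=1000, seed=5):
    # Minimal discrete event simulation sketch: each patient progresses
    # at a random time; progression is only detected at the next
    # scheduled monitoring visit, so ineffective treatment continues in
    # between ("overtreatment").  All numbers are illustrative.
    rng = random.Random(seed)
    events = []  # priority queue of (time, patient_id, kind)
    for p in range(n_patients):
        t_prog = rng.expovariate(1 / 30.0)   # mean 30 weeks to progression
        heapq.heappush(events, (t_prog, p, "progress"))
    overtreatment = 0.0
    while events:
        t, p, kind = heapq.heappop(events)
        if kind == "progress":
            # first monitoring visit strictly after progression
            visits_passed = int(t // monitor_interval_weeks) + 1
            detect = visits_passed * monitor_interval_weeks
            overtreatment += detect - t
    return overtreatment / n_patients

# More frequent monitoring -> less overtreatment per patient.
print(simulate(12), simulate(4))
```

A more informative biomarker effectively plays the same role as a shorter interval here: it moves the detection event closer to the progression event.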
Directory of Open Access Journals (Sweden)
Ufa Ruslan A.
2014-01-01
Full Text Available The motivation for the presented research is the need for new methods and tools for the adequate simulation of Flexible Alternating Current Transmission System (FACTS) devices and High Voltage Direct Current (HVDC) transmission systems as part of real electric power systems (EPS). To that end, a hybrid approach for advanced simulation of FACTS and HVDC based on a Voltage Source Converter (VSC) is proposed. The presented simulation results of the developed hybrid VSC model confirm that the model achieves the desired properties and that the proposed solutions are effective.
The use of models for estimating emissions from products beyond the timeframe of an emissions test is a means of managing the time and expenses associated with product emissions certification. This paper presents a discussion of (1) the impact of uncertainty in test chamber emissions ...
Real time model for public transportation management
Directory of Open Access Journals (Sweden)
Ireneusz Celiński
2014-03-01
Full Text Available Background: The article outlines managing a public transportation fleet in the dynamic aspect. There are currently many technical possibilities of identifying demand in the transportation network. It is also possible to indicate legitimate basis of estimating and steering demand. The article describes a general public transportation fleet management concept based on balancing demand and supply. Material and methods: The presented method utilizes a matrix description of demand for transportation based on telemetric and telecommunication data. Emphasis was placed mainly on a general concept and not the manner in which data was collected by other researchers. Results: The above model gave results in the form of a system for managing a fleet in real-time. The objective of the system is also to optimally utilize means of transportation at the disposal of service providers. Conclusions: The presented concept enables a new perspective on managing public transportation fleets. In case of implementation, the project would facilitate, among others, designing dynamic timetables, updated based on observed demand, and even designing dynamic points of access to public transportation lines. Further research should encompass so-called rerouting based on dynamic measurements of the characteristics of the transportation system.
Discounting Models for Outcomes over Continuous Time
DEFF Research Database (Denmark)
Harvey, Charles M.; Østerdal, Lars Peter
Events that occur over a period of time can be described either as sequences of outcomes at discrete times or as functions of outcomes in an interval of time. This paper presents discounting models for events of the latter type. Conditions on preferences are shown to be satisfied if and only if the preferences are represented by a function that is an integral of a discounting function times a scale defined on outcomes at instants of time.
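In symbols (my notation, not necessarily the authors'), such a representation evaluates an outcome stream x(t) over an interval [0, T] as

```latex
V(x) = \int_{0}^{T} d(t)\, u\big(x(t)\big)\, \mathrm{d}t
```

where d(t) is the discounting function and u the scale defined on outcomes at instants of time; the familiar discrete-time model is recovered by replacing the integral with a sum over periods.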
On discrete models of space-time
International Nuclear Information System (INIS)
Horzela, A.; Kempczynski, J.; Kapuscik, E.; Georgia Univ., Athens, GA; Uzes, Ch.
1992-02-01
Analyzing the Einstein radiolocation method we come to the conclusion that results of any measurement of space-time coordinates should be expressed in terms of rational numbers. We show that this property is Lorentz invariant and may be used in the construction of discrete models of space-time different from the models of the lattice type constructed in the process of discretization of continuous models. (author)
Characterization of Models for Time-Dependent Behavior of Soils
DEFF Research Database (Denmark)
Liingaard, Morten; Augustesen, Anders; Lade, Poul V.
2004-01-01
Different classes of constitutive models have been developed to capture the time-dependent viscous phenomena (creep, stress relaxation, and rate effects) observed in soils. Models based on empirical, rheological, and general stress-strain-time concepts have been studied. The first part ... The second part concerns models developed for metals and steel but, to some extent, used to characterize time effects in geomaterials. The third part is a review of constitutive laws that describe not only viscous effects but also the inviscid (rate-independent) behavior of soils, in principle, under any possible loading condition. Special attention is paid to elastoviscoplastic models that combine inviscid elastic and time-dependent plastic behavior. Various general elastoviscoplastic models can roughly be divided into two categories: models based on the concept of overstress and models based on nonstationary flow surface theory ...
Nasejje, Justine B; Mwambi, Henry; Dheda, Keertan; Lesosky, Maia
2017-07-28
Random survival forest (RSF) models have been identified as alternative methods to the Cox proportional hazards model in analysing time-to-event data. These methods, however, have been criticised for the bias that results from favouring covariates with many split-points, and hence conditional inference forests for time-to-event data have been suggested. Conditional inference forests (CIF) are known to correct the bias in RSF models by separating the procedure for selecting the best covariate to split on from the search for the best split point of the selected covariate. In this study, we compare the random survival forest model to the conditional inference forest (CIF) model using twenty-two simulated time-to-event datasets. We also analysed two real time-to-event datasets. The first dataset is based on the survival of children under five years of age in Uganda and consists of categorical covariates, most of which have more than two levels (many split-points). The second dataset is based on the survival of patients with extensively drug-resistant tuberculosis (XDR TB) and consists of mainly categorical covariates with two levels (few split-points). The study findings indicate that the conditional inference forest model is superior to random survival forest models in analysing time-to-event data that consist of covariates with many split-points, based on the values of the bootstrap cross-validated estimates for integrated Brier scores. However, conditional inference forests perform comparably to random survival forest models in analysing time-to-event data consisting of covariates with fewer split-points. Although survival forests are promising methods for analysing time-to-event data, it is important to identify the best forest model for analysis based on the nature of the covariates of the dataset in question.
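The split-point bias has a simple combinatorial root: an unordered categorical covariate with k levels admits 2^(k-1) - 1 distinct binary splits, so exhaustive split search gives many-level covariates far more chances to look good by luck. A quick sketch:

```python
def binary_splits(levels):
    # Number of distinct binary partitions of an unordered categorical
    # covariate with `levels` categories: 2**(levels - 1) - 1.
    # Trees that search all of these favour many-level covariates,
    # the bias that conditional inference forests correct.
    return 2 ** (levels - 1) - 1

for k in (2, 3, 5, 10):
    print(k, binary_splits(k))
# 2 1 / 3 3 / 5 15 / 10 511
```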
Directory of Open Access Journals (Sweden)
Stewart Don
2008-05-01
Full Text Available Abstract. Background: Based upon defining a common reference point, current real-time quantitative PCR technologies compare relative differences in amplification profile position. As such, absolute quantification requires construction of target-specific standard curves that are highly resource intensive and prone to introducing quantitative errors. Sigmoidal modeling using nonlinear regression has previously demonstrated that absolute quantification can be accomplished without standard curves; however, quantitative errors caused by distortions within the plateau phase have impeded effective implementation of this alternative approach. Results: Recognition that amplification rate is linearly correlated to amplicon quantity led to the derivation of two sigmoid functions that allow target quantification via linear regression analysis. In addition to circumventing quantitative errors produced by plateau distortions, this approach allows the amplification efficiency within individual amplification reactions to be determined. Absolute quantification is accomplished by first converting individual fluorescence readings into target quantity expressed in fluorescence units, followed by conversion into the number of target molecules via optical calibration. Founded upon expressing reaction fluorescence in relation to amplicon DNA mass, a seminal element of this study was to implement optical calibration using lambda gDNA as a universal quantitative standard. Not only does this eliminate the need to prepare target-specific quantitative standards, it relegates establishment of quantitative scale to a single, highly defined entity. The quantitative competency of this approach was assessed by exploiting "limiting dilution assay" for absolute quantification, which provided an independent gold standard from which to verify quantitative accuracy. This yielded substantive corroborating evidence that absolute accuracies of ± 25% can be routinely achieved. Comparison ...
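The core trick, turning a sigmoidal amplification profile into a linear regression problem, can be sketched as follows. A logistic profile is linearized by a log transform so that ordinary linear regression recovers its parameters; this specific function and its values are illustrative, not the two sigmoid functions derived in the paper.

```python
import math

# Simulated qPCR amplification profile: logistic (sigmoid) fluorescence
#   F(n) = Fmax / (1 + exp(-k * (n - n_half)))
# Linearizing via log(Fmax / F - 1) = -k * n + k * n_half means a plain
# linear regression recovers k (values below are illustrative).
Fmax, k_true, n_half = 10.0, 0.6, 25.0
cycles = list(range(10, 41))
F = [Fmax / (1 + math.exp(-k_true * (n - n_half))) for n in cycles]

xs = cycles
ys = [math.log(Fmax / f - 1) for f in F]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
k_est = -slope
print(round(k_est, 3))  # recovers k = 0.6
```

With real data the plateau value Fmax is itself uncertain, which is exactly where the paper's plateau-distortion corrections come in.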
Florea, Cristina; Tanska, Petri; Mononen, Mika E; Qu, Chengjuan; Lammi, Mikko J; Laasanen, Mikko S; Korhonen, Rami K
2017-02-01
Cellular responses to mechanical stimuli are influenced by the mechanical properties of cells and the surrounding tissue matrix. Cells exhibit viscoelastic behavior in response to an applied stress. This has been attributed to fluid flow-dependent and flow-independent mechanisms. However, the particular mechanism that controls the local time-dependent behavior of cells is unknown. Here, a combined approach of experimental AFM nanoindentation with computational modeling is proposed, taking into account complex material behavior. Three constitutive models (porohyperelastic, viscohyperelastic, poroviscohyperelastic) in tandem with optimization algorithms were employed to capture the experimental stress relaxation data of chondrocytes at 5 % strain. The poroviscohyperelastic models with and without fluid flow allowed through the cell membrane provided excellent description of the experimental time-dependent cell responses (normalized mean squared error (NMSE) of 0.003 between the model and experiments). The viscohyperelastic model without fluid could not follow the entire experimental data that well (NMSE = 0.005), while the porohyperelastic model could not capture it at all (NMSE = 0.383). We also show by parametric analysis that the fluid flow has a small, but essential effect on the loading phase and short-term cell relaxation response, while the solid viscoelasticity controls the longer-term responses. We suggest that the local time-dependent cell mechanical response is determined by the combined effects of intrinsic viscoelasticity of the cytoskeleton and fluid flow redistribution in the cells, although the contribution of fluid flow is smaller when using a nanosized probe and moderate indentation rate. The present approach provides new insights into viscoelastic responses of chondrocytes, important for further understanding cell mechanobiological mechanisms in health and disease.
Leite, Argentina; Paula Rocha, Ana; Eduarda Silva, Maria
2013-06-01
Heart Rate Variability (HRV) series exhibit long memory and time-varying conditional variance. This work considers the Fractionally Integrated AutoRegressive Moving Average (ARFIMA) models with Generalized AutoRegressive Conditional Heteroscedastic (GARCH) errors. ARFIMA-GARCH models may be used to capture and remove long memory and estimate the conditional volatility in 24 h HRV recordings. The ARFIMA-GARCH approach is applied to fifteen long term HRV series available at Physionet, leading to the discrimination among normal individuals, heart failure patients, and patients with atrial fibrillation.
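The long-memory part of an ARFIMA model rests on the fractional difference operator (1 - B)^d, whose binomial weights decay hyperbolically and can be generated by a simple recurrence:

```python
def frac_diff_weights(d, n_weights):
    # Binomial expansion of (1 - B)^d used by ARFIMA models:
    #   w_0 = 1,  w_k = w_{k-1} * (k - 1 - d) / k
    w = [1.0]
    for k in range(1, n_weights):
        w.append(w[-1] * (k - 1 - d) / k)
    return w

print([round(x, 4) for x in frac_diff_weights(0.4, 5)])
# [1.0, -0.4, -0.12, -0.064, -0.0416]
```

The slowly decaying weights are what let a single parameter d capture long memory, while the GARCH part models the time-varying conditional variance of the residuals.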
Eliminating time dispersion from seismic wave modeling
Koene, Erik F. M.; Robertsson, Johan O. A.; Broggini, Filippo; Andersson, Fredrik
2018-04-01
We derive an expression for the error introduced by the second-order accurate temporal finite-difference (FD) operator, as present in the FD, pseudospectral and spectral element methods for seismic wave modeling applied to time-invariant media. The `time-dispersion' error speeds up the signal as a function of frequency and time step only. Time dispersion is thus independent of the propagation path, medium or spatial modeling error. We derive two transforms to either add or remove time dispersion from synthetic seismograms after a simulation. The transforms are compared to previous related work and demonstrated on wave modeling in acoustic as well as elastic media. In addition, an application to imaging is shown. The transforms enable accurate computation of synthetic seismograms at reduced cost, benefitting modeling applications in both exploration and global seismology.
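That the error depends on frequency and time step only can be seen from the discrete dispersion relation of the second-order time difference applied to a single harmonic component u'' = -w^2 u. This is a standard textbook relation, not the paper's full transform:

```python
import math

# For the second-order temporal finite difference, a discrete harmonic
# satisfies  cos(w_num * dt) = 1 - (w * dt)**2 / 2,  i.e.
#   w_num = (2 / dt) * asin(w * dt / 2) > w,
# so every component is sped up by an amount fixed by w and dt alone,
# independent of the propagation path or medium -- which is why the
# error can be removed by a post-hoc frequency-domain transform.
def numerical_frequency(w, dt):
    return 2.0 / dt * math.asin(w * dt / 2.0)

w, dt = 2 * math.pi * 20.0, 1e-3        # 20 Hz component, 1 ms time step
w_num = numerical_frequency(w, dt)
print(w_num > w, (w_num - w) / w)       # True, and the relative speed-up
```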
Partition-based discrete-time quantum walks
Konno, Norio; Portugal, Renato; Sato, Iwao; Segawa, Etsuo
2018-04-01
We introduce a family of discrete-time quantum walks, called the two-partition model, based on two equivalence-class partitions of the computational basis, which establish the notion of local dynamics. This family encompasses most versions of unitary discrete-time quantum walks driven by two local operators studied in the literature, such as the coined model, Szegedy's model, and the 2-tessellable staggered model. We also analyze the connection of those models with the two-step coined model, which is driven by the square of the evolution operator of the standard discrete-time coined walk. We prove formally that the two-step coined model, an extension of Szegedy's model for multigraphs, and the two-tessellable staggered model are unitarily equivalent. Selecting one specific model among those families is then a matter of taste, not generality.
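As a concrete instance of the coined family mentioned above (not the two-partition construction itself), one step of a Hadamard-coined walk on a cycle alternates a local coin operation with a coin-conditioned shift:

```python
import numpy as np

def coined_walk(steps, n):
    """Hadamard-coined discrete-time quantum walk on a cycle of n sites.
    State psi[c, x]: coin c in {0, 1}, position x. Returns the position
    probability distribution after the given number of steps."""
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    psi = np.zeros((2, n), dtype=complex)
    psi[0, 0] = 1.0                     # start at site 0 with coin |0>
    for _ in range(steps):
        psi = H @ psi                   # coin flip (local operator)
        psi[0] = np.roll(psi[0], 1)     # coin 0 shifts right
        psi[1] = np.roll(psi[1], -1)    # coin 1 shifts left
    return (np.abs(psi) ** 2).sum(axis=0)
```

Unitarity shows up as conservation of total probability at every step.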
Degeneracy of time series models: The best model is not always the correct model
International Nuclear Information System (INIS)
Judd, Kevin; Nakamura, Tomomichi
2006-01-01
There are a number of good techniques for finding, in some sense, the best model of a deterministic system given a time series of observations. We examine a problem called model degeneracy, which has the consequence that even when a perfect model of a system exists, one does not find it using the best techniques currently available. The problem is illustrated using global polynomial models and the theory of Groebner bases.
Time series modelling of overflow structures
DEFF Research Database (Denmark)
Carstensen, J.; Harremoës, P.
1997-01-01
The dynamics of a storage pipe is examined using a grey-box model based on on-line measured data. The grey-box modelling approach uses a combination of physically-based and empirical terms in the model formulation. The model provides an on-line state estimate of the overflows, pumping capacities...... to the overflow structures. The capacity of a pump draining the storage pipe has been estimated for two rain events, revealing that the pump was malfunctioning during the first rain event. The grey-box modelling approach is applicable for automated on-line surveillance and control. (C) 1997 IAWQ. Published...
Powell, Eric N; Klinck, John M; Hofmann, Eileen E
2011-02-21
Crassostrea oysters are protandrous hermaphrodites. Sex is thought to be determined by a single gene with a dominant male allele M and a recessive protandrous allele F, such that FF animals are protandrous and MF animals are permanent males. We investigate the possibility that a reduction in generation time, brought about for example by disease, might jeopardize retention of the M allele. Simulations show that MF males have a significantly lessened lifetime fecundity when generation time declines. The allele frequency of the M allele declines and eventually the M allele is lost. The probability of loss is modulated by population abundance. As abundance increases, the probability of M allele loss declines. Simulations suggest that stabilization of the female-to-male ratio when generation time is long is the dominant function of the M allele. As generation time shortens, the raison d'être for the M allele also fades as mortality usurps the stabilizing role. Disease and exploitation have shortened oyster generation time: one consequence may be to jeopardize retention of the M allele. Two alternative genetic bases for protandry also provide stable sex ratios when generation time is long; an F-dominant protandric allele and protandry restricted to the MF heterozygote. In both cases, simulations show that FF individuals become rare in the population at high abundance and/or long generation time. Protandry restricted to the MF heterozygote maintains sex ratio stability over a wider range of generation times and abundances than the alternatives, suggesting that sex determination based on a male-dominant allele (MM/MF) may not be the optimal solution to the genetic basis for protandry in Crassostrea. Copyright © 2010 Elsevier Ltd. All rights reserved.
Masquelier, Timothée
2012-06-01
We have built a phenomenological spiking model of the cat early visual system comprising the retina, the Lateral Geniculate Nucleus (LGN) and V1's layer 4, and established four main results: (1) When exposed to videos that reproduce with high fidelity what a cat experiences under natural conditions, adjacent Retinal Ganglion Cells (RGCs) have spike-time correlations at a short timescale (~30 ms), despite neuronal noise and possible jitter accumulation. (2) In accordance with recent experimental findings, the LGN filters out some noise. It thus increases the spike reliability and temporal precision, the sparsity, and, importantly, further decreases down to ~15 ms adjacent cells' correlation timescale. (3) Downstream simple cells in V1's layer 4, if equipped with Spike Timing-Dependent Plasticity (STDP), may detect these fine-scale cross-correlations, and thus connect principally to ON- and OFF-centre cells with Receptive Fields (RF) aligned in the visual space, and thereby become orientation selective, in accordance with Hubel and Wiesel's classic model (Journal of Physiology 160:106-154, 1962). Up to this point we dealt with continuous vision, and there was no absolute time reference such as a stimulus onset, yet information was encoded and decoded in the relative spike times. (4) We then simulated saccades to a static image and benchmarked relative spike time coding and time-to-first-spike coding with respect to saccade landing in the context of orientation representation. In both the retina and the LGN, relative spike times are more precise, less affected by pre-landing history and global contrast than absolute ones, and lead to robust contrast invariant orientation representations in V1.
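The STDP rule invoked in result (3) is commonly written as a pair-based exponential window. A minimal sketch with illustrative parameter values (not those of the paper's model):

```python
import numpy as np

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight change for dt = t_post - t_pre (in ms):
    potentiation when the presynaptic spike precedes the postsynaptic one
    (dt >= 0), depression otherwise, both decaying exponentially with |dt|."""
    dt = np.asarray(dt_ms, dtype=float)
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau),
                    -a_minus * np.exp(dt / tau))
```

Under this rule, inputs whose spikes reliably precede the postsynaptic spike within the ~15-30 ms correlation window are strengthened, which is how fine-scale cross-correlations can drive the selective wiring described above.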
Conceptual Modeling of Time-Varying Information
DEFF Research Database (Denmark)
Gregersen, Heidi; Jensen, Christian Søndergaard
2004-01-01
A wide range of database applications manage information that varies over time. Many of the underlying database schemas of these were designed using the Entity-Relationship (ER) model. In the research community as well as in industry, it is common knowledge that the temporal aspects of the mini-world...... are important, but difficult to capture using the ER model. Several enhancements to the ER model have been proposed in an attempt to support the modeling of temporal aspects of information. Common to the existing temporally extended ER models, few or no specific requirements to the models were given...
Energy Technology Data Exchange (ETDEWEB)
Chamana, Manohar; Prabakar, Kumaraguru; Palmintier, Bryan; Baggu, Murali M.
2017-05-11
A software process is developed to convert distribution network models from a quasi-static time-series tool (OpenDSS) to a real-time dynamic phasor simulator (ePHASORSIM). The description of this process in this paper would be helpful for researchers who intend to perform similar conversions. The converter could be utilized directly by users of real-time simulators who intend to perform software-in-the-loop or hardware-in-the-loop tests on large distribution test feeders for a range of use cases, including testing functions of advanced distribution management systems against a simulated distribution system. In the future, the developers intend to release the conversion tool as open source to enable use by others.
Space-time with a fluctuating metric tensor model
International Nuclear Information System (INIS)
Morozov, A N
2016-01-01
The physical time model presented here is based on the assumption that time is a random Poisson process whose intensity depends on natural irreversible processes. Introducing fluctuations of the space-time metric tensor makes it possible to describe the impact of a stochastic gravitational background. The use of spectral-line broadening measurements for the registration of relic gravitational waves is suggested. (paper)
Liu, Sijun; Chen, Jiaping; Wang, Jianming; Wu, Zhuchao; Wu, Weihua; Xu, Zhiwei; Hu, Wenbiao; Xu, Fei; Tong, Shilu; Shen, Hongbing
2017-10-01
Hand, foot, and mouth disease (HFMD) is a significant public health issue in China, and an accurate prediction of epidemics can improve the effectiveness of HFMD control. This study aims to develop a weather-based forecasting model for HFMD using information on climatic variables and HFMD surveillance in Nanjing, China. Daily data on HFMD cases and meteorological variables between 2010 and 2015 were acquired from the Nanjing Center for Disease Control and Prevention and the China Meteorological Data Sharing Service System, respectively. A multivariate seasonal autoregressive integrated moving average (SARIMA) model was developed and validated by dividing the HFMD infection data into two datasets: the data from 2010 to 2013 were used to construct a model and those from 2014 to 2015 were used to validate it. Moreover, weekly predictions were made for the data between 1 January 2014 and 31 December 2015, with leave-one-week-out prediction used to validate the performance of model prediction. A SARIMA(2,0,0) model with a 52-week seasonal period, combined with the average temperature at a lag of 1 week, appeared to be the best model (R² = 0.936, BIC = 8.465), and showed non-significant autocorrelations in the residuals. In the validation of the constructed model, the predicted values matched the observed values reasonably well between 2014 and 2015, with a high agreement rate between the predicted and observed values (sensitivity 80%, specificity 96.63%). This study suggests that the SARIMA model with average temperature could be used as an important tool for early detection and prediction of HFMD outbreaks in Nanjing, China.
1986-02-04
NRL Memorandum Report 5719, "An Examination of Models of Relaxation in Complex Systems I. Continuous Time Random Walk (CTRW) Models," by K. L. Ngai and R. W. Rendell. Models of relaxation in complex systems based on the continuous time random walk (CTRW) formalism are examined.
Research of Manufacture Time Management System Based on PLM
Jing, Ni; Juan, Zhu; Liangwei, Zhong
This system targets the machine shops of manufacturing enterprises. It analyzes their business needs and builds a plant management information system for manufacture-time information covering the manufacturing process. Combining WEB technology with EXCEL VBA development methods, it constructs a hybrid, PLM-based framework for a workshop manufacture-time management information system, and discusses the functionality of the system architecture and the database structure.
Zheng, Yang; Zhou, Jianzhong; Xu, Yanhe; Zhang, Yuncheng; Qian, Zhongdong
2017-05-01
This paper proposes a distributed model predictive control based load frequency control (MPC-LFC) scheme to improve control performances in the frequency regulation of power system. In order to reduce the computational burden in the rolling optimization with a sufficiently large prediction horizon, the orthonormal Laguerre functions are utilized to approximate the predicted control trajectory. The closed-loop stability of the proposed MPC scheme is achieved by adding a terminal equality constraint to the online quadratic optimization and taking the cost function as the Lyapunov function. Furthermore, the treatments of some typical constraints in load frequency control have been studied based on the specific Laguerre-based formulations. Simulations have been conducted in two different interconnected power systems to validate the effectiveness of the proposed distributed MPC-LFC as well as its superiority over the comparative methods. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
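The Laguerre approximation of the control trajectory rests on a small orthonormal basis generated by a state-space recursion. A sketch following the common discrete-time MPC parameterization (the paper's exact formulation may differ in detail):

```python
import numpy as np

def laguerre_basis(a, N, steps):
    """Discrete orthonormal Laguerre functions l_1..l_N sampled at
    k = 0..steps-1, generated by L(k+1) = A @ L(k); a in (0, 1) is the pole.
    Rows of the returned array are time samples, columns are basis functions."""
    beta = 1.0 - a * a
    A = np.zeros((N, N))
    for i in range(N):
        A[i, i] = a
        for j in range(i):
            A[i, j] = (-a) ** (i - j - 1) * beta
    L = np.sqrt(beta) * np.array([(-a) ** i for i in range(N)])
    out = np.empty((steps, N))
    for k in range(steps):
        out[k] = L
        L = A @ L
    return out
```

A long predicted control trajectory is then represented by N coefficients instead of one value per step, which is what cuts the rolling-optimization burden.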
Discrete-time rewards model-checked
Larsen, K.G.; Andova, S.; Niebert, Peter; Hermanns, H.; Katoen, Joost P.
2003-01-01
This paper presents a model-checking approach for analyzing discrete-time Markov reward models. For this purpose, the temporal logic probabilistic CTL is extended with reward constraints. This allows to formulate complex measures – involving expected as well as accumulated rewards – in a precise and
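A basic measure of the kind such reward constraints talk about is the expected reward accumulated over a bounded number of steps of a discrete-time Markov reward model. A minimal sketch (state-based rewards, one reward earned per visited state):

```python
import numpy as np

def expected_accumulated_reward(P, r, n):
    """Expected reward accumulated over n steps of a DTMC with transition
    matrix P and per-visit state rewards r, as a vector indexed by start
    state. Recursion: v_n = r + P @ v_{n-1}, v_0 = 0."""
    v = np.zeros(len(r))
    for _ in range(n):
        v = r + P @ v
    return v
```

Model checking then compares such quantities against the bounds written into the extended logic.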
Modeling nonhomogeneous Markov processes via time transformation.
Hubbard, R A; Inoue, L Y T; Fann, J R
2008-09-01
Longitudinal studies are a powerful tool for characterizing the course of chronic disease. These studies are usually carried out with subjects observed at periodic visits giving rise to panel data. Under this observation scheme the exact times of disease state transitions and sequence of disease states visited are unknown and Markov process models are often used to describe disease progression. Most applications of Markov process models rely on the assumption of time homogeneity, that is, that the transition rates are constant over time. This assumption is not satisfied when transition rates depend on time from the process origin. However, limited statistical tools are available for dealing with nonhomogeneity. We propose models in which the time scale of a nonhomogeneous Markov process is transformed to an operational time scale on which the process is homogeneous. We develop a method for jointly estimating the time transformation and the transition intensity matrix for the time transformed homogeneous process. We assess maximum likelihood estimation using the Fisher scoring algorithm via simulation studies and compare performance of our method to homogeneous and piecewise homogeneous models. We apply our methodology to a study of delirium progression in a cohort of stem cell transplantation recipients and show that our method identifies temporal trends in delirium incidence and recovery.
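The core identity is that, after transforming calendar time t to an operational time h(t), the process is homogeneous with intensity matrix Q, so P(0, t) = exp(Q h(t)). A sketch with a simple parametric transformation (the paper estimates h jointly with Q; h(t) = t**alpha here is just one convenient illustrative choice):

```python
import numpy as np

def expm(M, terms=30):
    """Matrix exponential via truncated Taylor series (adequate for small,
    well-scaled matrices as used here)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def transition_probability(Q, t, alpha):
    """Transition matrix over [0, t] for a nonhomogeneous Markov process
    that is homogeneous on the operational time scale h(t) = t**alpha."""
    return expm(Q * t ** alpha)
```

alpha > 1 makes transitions accelerate with time since the process origin, alpha < 1 makes them decelerate, and alpha = 1 recovers the homogeneous model.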
Timed Model Checking of Security Protocols
Corin, R.J.; Etalle, Sandro; Hartel, Pieter H.; Mader, Angelika H.
We propose a method for engineering security protocols that are aware of timing aspects. We study a simplified version of the well-known Needham Schroeder protocol and the complete Yahalom protocol. Timing information allows the study of different attack scenarios. We illustrate the attacks by model
Modeling discrete time-to-event data
Tutz, Gerhard
2016-01-01
This book focuses on statistical methods for the analysis of discrete failure times. Failure time analysis is one of the most important fields in statistical research, with applications affecting a wide range of disciplines, in particular, demography, econometrics, epidemiology and clinical research. Although there are a large variety of statistical methods for failure time analysis, many techniques are designed for failure times that are measured on a continuous scale. In empirical studies, however, failure times are often discrete, either because they have been measured in intervals (e.g., quarterly or yearly) or because they have been rounded or grouped. The book covers well-established methods like life-table analysis and discrete hazard regression models, but also introduces state-of-the art techniques for model evaluation, nonparametric estimation and variable selection. Throughout, the methods are illustrated by real life applications, and relationships to survival analysis in continuous time are expla...
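The basic identities behind discrete hazard models are easy to state: with discrete hazard h_t = P(T = t | T >= t), survival is a cumulative product and the failure-time distribution follows. A minimal sketch:

```python
import numpy as np

def discrete_survival(hazard):
    """Survival function from discrete hazards h_t = P(T = t | T >= t):
    S(t) = prod_{s <= t} (1 - h_s)."""
    hazard = np.asarray(hazard, dtype=float)
    return np.cumprod(1.0 - hazard)

def pmf_from_hazard(hazard):
    """Failure-time distribution P(T = t) = h_t * S(t - 1)."""
    hazard = np.asarray(hazard, dtype=float)
    S = discrete_survival(hazard)
    S_prev = np.concatenate(([1.0], S[:-1]))
    return hazard * S_prev
```

Life-table estimates and discrete hazard regressions both reduce to estimating the h_t, from which survival and the failure-time distribution follow by these identities.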
Rowe, Sidney E.
2010-01-01
In September 2007, the Engineering Directorate at the Marshall Space Flight Center (MSFC) created the Design System Focus Team (DSFT). MSFC was responsible for the in-house design and development of the Ares 1 Upper Stage and the Engineering Directorate was preparing to deploy a new electronic Configuration Management and Data Management System with the Design Data Management System (DDMS) based upon a Commercial Off The Shelf (COTS) Product Data Management (PDM) System. The DSFT was to establish standardized CAD practices and a new data life cycle for design data. Of special interest here, the design teams were to implement Model Based Definition (MBD) in support of the Upper Stage manufacturing contract. It is noted that this MBD does use partially dimensioned drawings for auxiliary information to the model. The design data lifecycle implemented several new release states to be used prior to formal release that allowed the models to move through a flow of progressive maturity. The DSFT identified some 17 Lessons Learned as outcomes of the standards development, pathfinder deployments and initial application to the Upper Stage design completion. Some of the high value examples are reviewed.
Modeling preference time in middle distance triathlons
Fister, Iztok; Iglesias, Andres; Deb, Suash; Fister, Dušan; Fister Jr, Iztok
2017-01-01
Modeling preference time in triathlons means predicting the intermediate times of particular sports disciplines by a given overall finish time in a specific triathlon course for the athlete with the known personal best result. This is a hard task for athletes and sport trainers due to a lot of different factors that need to be taken into account, e.g., athlete's abilities, health, mental preparations and even their current sports form. So far, this process was calculated manually without any ...
Modelling and Simulation of Asynchronous Real-Time Systems using Timed Rebeca
Directory of Open Access Journals (Sweden)
Luca Aceto
2011-07-01
Full Text Available In this paper we propose an extension of the Rebeca language that can be used to model distributed and asynchronous systems with timing constraints. We provide the formal semantics of the language using Structural Operational Semantics, and show its expressiveness by means of examples. We developed a tool for automated translation from timed Rebeca to the Erlang language, which provides a first implementation of timed Rebeca. We can use the tool to set the parameters of timed Rebeca models, which represent the environment and component variables, and use McErlang to run multiple simulations for different settings. Timed Rebeca restricts the modeller to a pure asynchronous actor-based paradigm, where the structure of the model represents the service oriented architecture, while the computational model matches the network infrastructure. Simulation is shown to be an effective analysis support, especially where model checking faces almost immediate state explosion in an asynchronous setting.
International Nuclear Information System (INIS)
Paiva, Gustavo V.; Schirru, Roberto
2017-01-01
This work aims to create a model and a prototype, in the Python language, of an Expert System that uses production rules to analyze data obtained in real time from the plant and help the operator identify the occurrence of transients/accidents. In the event of a transient, the program alerts the operator and indicates which section of the Operation Manual should be consulted to bring the plant back to its normal state. The knowledge of the Expert System is represented as a Fault Tree, and data acquisition from the plant is performed by intelligent agents that transform plant data into the Boolean values used in the Fault Tree, including the use of Fuzzy Logic. To test the program, a simplified model based on the manuals of the Almirante Alvaro Alberto 2 Nuclear Power Plant (Angra-2) was used, and simulations were performed to analyze whether the program behaves as expected. The tests showed quick identification of the events and great accuracy, demonstrating the applicability of the model to the problem. (author)
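The fault-tree knowledge structure described above can be sketched as a recursive evaluation over Boolean plant signals. The tree below is a hypothetical mini-example, and the real system also handles fuzzy values; this only shows the crisp Boolean core:

```python
def evaluate(node, signals):
    """Evaluate a fault-tree node against Boolean plant signals.
    A node is either a leaf name (looked up in `signals`) or a tuple
    ('AND' | 'OR', [children])."""
    if isinstance(node, str):
        return signals[node]
    gate, children = node
    values = [evaluate(child, signals) for child in children]
    return all(values) if gate == 'AND' else any(values)

# Hypothetical mini-tree: a transient fires when pressure is high AND
# at least one feed pump has failed.
tree = ('AND', ['high_pressure', ('OR', ['pump_a_fail', 'pump_b_fail'])])
```

In the described system, the leaf values would be produced by the acquisition agents (possibly via fuzzy-logic thresholds) rather than read from a dictionary.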
Hsieh, Chi-Hsuan; Chiu, Yu-Fang; Shen, Yi-Hsiang; Chu, Ta-Shun; Huang, Yuan-Hao
2016-02-01
This paper presents an ultra-wideband (UWB) impulse-radio radar signal processing platform used to analyze human respiratory features. Conventional radar systems used in human detection only analyze human respiration rates or the response of a target. However, additional respiratory signal information is available that has not been explored using radar detection. The authors previously proposed a modified raised cosine waveform (MRCW) respiration model and an iterative correlation search algorithm that could acquire additional respiratory features such as the inspiration and expiration speeds, respiration intensity, and respiration holding ratio. To realize real-time respiratory feature extraction by using the proposed UWB signal processing platform, this paper proposes a new four-segment linear waveform (FSLW) respiration model. This model offers a superior fit to the measured respiration signal compared with the MRCW model and decreases the computational complexity of feature extraction. In addition, an early-terminated iterative correlation search algorithm is presented, substantially decreasing the computational complexity and yielding negligible performance degradation. These extracted features can be considered the compressed signals used to decrease the amount of data storage required for use in long-term medical monitoring systems and can also be used in clinical diagnosis. The proposed respiratory feature extraction algorithm was designed and implemented using the proposed UWB radar signal processing platform including a radar front-end chip and an FPGA chip. The proposed radar system can detect human respiration rates at 0.1 to 1 Hz and facilitates the real-time analysis of the respiratory features of each respiration period.
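A four-segment linear waveform can be sketched as a piecewise-linear period: linear inspiration (rise), breath hold (flat), linear expiration (fall), and rest at baseline. The segment meanings are inferred from the abstract; the paper's exact parameterization may differ:

```python
import numpy as np

def fslw(t, T, t_insp, t_hold, t_exp, amp=1.0):
    """Four-segment linear waveform (FSLW) respiration model, periodic with
    period T: rise over t_insp, hold for t_hold, fall over t_exp, then rest
    at zero for the remainder of the period."""
    t = np.asarray(t, dtype=float) % T
    y = np.zeros_like(t)
    rise = t < t_insp
    y[rise] = amp * t[rise] / t_insp
    hold = (t >= t_insp) & (t < t_insp + t_hold)
    y[hold] = amp
    fall = (t >= t_insp + t_hold) & (t < t_insp + t_hold + t_exp)
    y[fall] = amp * (1.0 - (t[fall] - t_insp - t_hold) / t_exp)
    return y
```

The slopes of the rise and fall segments map directly to the inspiration and expiration speeds, and t_hold / T to the respiration holding ratio, the features the platform extracts.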
A time fractional model to represent rainfall process
Directory of Open Access Journals (Sweden)
Jacques Golder
2014-01-01
Full Text Available This paper deals with a stochastic representation of the rainfall process. The analysis of a rainfall time series shows that the cumulative representation of a rainfall time series can be modeled as a non-Gaussian random walk with a log-normal jump distribution and a waiting-time distribution following a tempered α-stable probability law. Based on the random walk model, a fractional Fokker-Planck equation (FFPE with tempered α-stable waiting times was obtained. Through the comparison of observed data and simulated results from the random walk model and FFPE model with tempered α-stable waiting times, it can be concluded that the behavior of the rainfall process is globally reproduced, and the FFPE model with tempered α-stable waiting times is more efficient in reproducing the observed behavior.
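The random-walk representation can be sketched directly: cumulative rainfall grows by log-normal jumps separated by heavy-tailed waiting times. The Pareto waits below are a simple stand-in for the tempered α-stable law used in the paper, and all parameter values are illustrative:

```python
import numpy as np

def ctrw_rainfall(n, rng=None):
    """Continuous-time random walk sketch of cumulative rainfall:
    n events with heavy-tailed waiting times and log-normal jump sizes.
    Returns (event_times, cumulative_rainfall)."""
    rng = np.random.default_rng(rng)
    waits = rng.pareto(1.5, n) + 1.0                      # heavy-tailed waits
    jumps = rng.lognormal(mean=0.0, sigma=1.0, size=n)    # rainfall amounts
    return np.cumsum(waits), np.cumsum(jumps)
```

Tempering the waiting-time tail (multiplying its density by an exponential factor) is what distinguishes the paper's model from this plain heavy-tailed version.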
DEFF Research Database (Denmark)
Zhang, Xuping; Sørensen, Rasmus; RahbekIversen, Mathias
2018-01-01
This paper presents a novel and computationally efficient modeling method for the dynamics of flexible-link robot manipulators. In this method, a robot manipulator is decomposed into components/elements. The component/element dynamics is established using Newton–Euler equations, and then is linearized based on the acceleration-based state vector. The transfer matrices for each type of components/elements are developed, and used to establish the system equations of a flexible robot manipulator by concatenating the state vector from the base to the end-effector. With this strategy, the size...... manipulators, and only involves calculating and transferring component/element dynamic equations that have small size. The numerical simulations and experimental testing of flexible-link manipulators are conducted to validate the proposed methodologies....
Time series sightability modeling of animal populations.
Directory of Open Access Journals (Sweden)
Althea A ArchMiller
Full Text Available Logistic regression models, or "sightability models", fit to detection/non-detection data from marked individuals are often used to adjust for visibility bias in later detection-only surveys, with population abundance estimated using a modified Horvitz-Thompson (mHT) estimator. More recently, a model-based alternative for analyzing combined detection/non-detection and detection-only data was developed. This approach seemed promising, since it resulted in similar estimates as the mHT when applied to data from moose (Alces alces) surveys in Minnesota. More importantly, it provided a framework for developing flexible models for analyzing multiyear detection-only survey data in combination with detection/non-detection data. During initial attempts to extend the model-based approach to multiple years of detection-only data, we found that estimates of detection probabilities and population abundance were sensitive to the amount of detection-only data included in the combined (detection/non-detection and detection-only) analysis. Subsequently, we developed a robust hierarchical modeling approach where sightability model parameters are informed only by the detection/non-detection data, and we used this approach to fit a fixed-effects model (FE model) with year-specific parameters and a temporally-smoothed model (TS model) that shares information across years via random effects and a temporal spline. The abundance estimates from the TS model were more precise, with decreased interannual variability relative to the FE model and mHT abundance estimates, illustrating the potential benefits from model-based approaches that allow information to be shared across years.
Gulati, Sanchita; During, David; Mainland, Jeff; Wong, Agnes M F
2018-01-01
One of the key challenges to healthcare organizations is the development of relevant and accurate cost information. In this paper, we used time-driven activity-based costing (TDABC) method to calculate the costs of treating individual patients with specific medical conditions over their full cycle of care. We discussed how TDABC provides a critical, systematic and data-driven approach to estimate costs accurately and dynamically, as well as its potential to enable structural and rational cost reduction to bring about a sustainable healthcare system. © 2018 Longwoods Publishing.
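The TDABC calculation itself is simple: the cost of a care cycle is the time each resource spends on each activity multiplied by that resource's capacity cost rate. A minimal sketch with hypothetical resource names and rates:

```python
def tdabc_cost(minutes_by_resource, capacity_cost_rate):
    """Time-driven activity-based cost of one care cycle:
    sum over resources of (minutes used) x (capacity cost rate, $/min).
    Both arguments are dicts keyed by resource name."""
    return sum(minutes * capacity_cost_rate[resource]
               for resource, minutes in minutes_by_resource.items())

# hypothetical example: 30 nurse-minutes at $1.20/min, 15 physician-minutes at $4.00/min
cycle_cost = tdabc_cost({'nurse': 30, 'physician': 15},
                        {'nurse': 1.2, 'physician': 4.0})
```

The method's data demands lie in measuring the times and in deriving each rate as (cost of capacity supplied) / (practical capacity in minutes), not in the arithmetic.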
A model for quantification of temperature profiles via germination times
DEFF Research Database (Denmark)
Pipper, Christian Bressen; Adolf, Verena Isabelle; Jacobsen, Sven-Erik
2013-01-01
Current methodology to quantify temperature characteristics in germination of seeds is predominantly based on analysis of the time to reach a given germination fraction, that is, the quantiles in the distribution of the germination time of a seed. In practice interpolation between observed...... germination fractions at given monitoring times is used to obtain the time to reach a given germination fraction. As a consequence the obtained value will be highly dependent on the actual monitoring scheme used in the experiment. In this paper a link between currently used quantile models for the germination...... time and a specific type of accelerated failure time models is provided. As a consequence the observed number of germinated seeds at given monitoring times may be analysed directly by a grouped time-to-event model from which characteristics of the temperature profile may be identified and estimated...
Time series sightability modeling of animal populations
ArchMiller, Althea A.; Dorazio, Robert; St. Clair, Katherine; Fieberg, John R.
2018-01-01
Logistic regression models—or “sightability models”—fit to detection/non-detection data from marked individuals are often used to adjust for visibility bias in later detection-only surveys, with population abundance estimated using a modified Horvitz-Thompson (mHT) estimator. More recently, a model-based alternative for analyzing combined detection/non-detection and detection-only data was developed. This approach seemed promising, since it resulted in similar estimates as the mHT when applied to data from moose (Alces alces) surveys in Minnesota. More importantly, it provided a framework for developing flexible models for analyzing multiyear detection-only survey data in combination with detection/non-detection data. During initial attempts to extend the model-based approach to multiple years of detection-only data, we found that estimates of detection probabilities and population abundance were sensitive to the amount of detection-only data included in the combined (detection/non-detection and detection-only) analysis. Subsequently, we developed a robust hierarchical modeling approach where sightability model parameters are informed only by the detection/non-detection data, and we used this approach to fit a fixed-effects model (FE model) with year-specific parameters and a temporally-smoothed model (TS model) that shares information across years via random effects and a temporal spline. The abundance estimates from the TS model were more precise, with decreased interannual variability relative to the FE model and mHT abundance estimates, illustrating the potential benefits from model-based approaches that allow information to be shared across years.
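The mHT estimator referenced above inflates each detected group by the inverse of its modeled detection probability. A minimal sketch, assuming the logistic sightability coefficients have already been fitted and that X includes an intercept column:

```python
import numpy as np

def mht_abundance(X, beta, counts):
    """Modified Horvitz-Thompson abundance estimate: each detected group of
    size counts[i], with sightability covariates X[i], is weighted by 1/p_i,
    where p_i is the detection probability from a logistic model with
    coefficients beta."""
    p = 1.0 / (1.0 + np.exp(-(X @ beta)))
    return float(np.sum(counts / p))
```

The hierarchical alternative discussed in the abstract replaces these plug-in weights with detection probabilities estimated jointly, but informed only by the detection/non-detection data.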
Linear time relational prototype based learning.
Gisbrecht, Andrej; Mokbel, Bassam; Schleif, Frank-Michael; Zhu, Xibin; Hammer, Barbara
2012-10-01
Prototype based learning offers an intuitive interface to inspect large quantities of electronic data in supervised or unsupervised settings. Recently, many techniques have been extended to data described by general dissimilarities rather than Euclidean vectors, so-called relational data settings. Unlike the Euclidean counterparts, the techniques have quadratic time complexity due to the underlying quadratic dissimilarity matrix. Thus, they are infeasible already for medium sized data sets. The contribution of this article is twofold: On the one hand we propose a novel supervised prototype based classification technique for dissimilarity data based on popular learning vector quantization (LVQ), on the other hand we transfer a linear time approximation technique, the Nyström approximation, to this algorithm and an unsupervised counterpart, the relational generative topographic mapping (GTM). This way, linear time and space methods result. We evaluate the techniques on three examples from the biomedical domain.
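The Nyström approximation at the heart of the linear-time variants reconstructs the full n x n (dis)similarity-derived matrix from a thin slice against m landmark points:

```python
import numpy as np

def nystrom(K_nm, K_mm):
    """Nystrom approximation K ~ K_nm @ pinv(K_mm) @ K_nm.T, where K_nm holds
    the n x m columns against m landmarks and K_mm is the m x m landmark
    block. Downstream methods work with the factors, so cost is linear in n."""
    return K_nm @ np.linalg.pinv(K_mm) @ K_nm.T
```

When the underlying matrix has rank at most m and the landmarks span that range, the approximation is exact; otherwise its quality degrades gracefully with the spectrum's decay.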
Bierer, S Beth; Dannefer, Elaine F; Tetzlaff, John E
2015-09-01
Remediation in the era of competency-based assessment demands a model that empowers students to improve performance. To examine a remediation model where students, rather than faculty, develop remedial plans to improve performance. Private medical school, 177 medical students. A promotion committee uses student-generated portfolios and faculty referrals to identify struggling students, and has them develop formal remediation plans with personal reflections, improvement strategies, and performance evidence. Students submit reports to document progress until formally released from remediation by the promotion committee. Participants included 177 students from six classes (2009-2014). Twenty-six were placed in remediation, with more referrals occurring during Years 1 or 2 (n = 20, 76 %). Unprofessional behavior represented the most common reason for referral in Years 3-5. Remedial students did not differ from classmates (n = 151) on baseline characteristics (age, gender, US citizenship, MCAT) or willingness to recommend their medical school to future students. A learner-driven remediation model promotes greater autonomy and reinforces self-regulated learning.
International Nuclear Information System (INIS)
Uzun, S.; Peksen, A.
2000-01-01
In this study, it was aimed to determine the effects of different planting times (01 July, 15 July and 01 August) on the growth and developmental components of some cauliflower cultivars (Snow King, White Cliff, White Rock, White Latin, Me & Carillon, SG 4004 F1 and Serrano) by using plant growth and developmental models. The results revealed that the thermal time elapsing from planting to curd initiation should be high (about 1200 degree-centigrade days) to stimulate vegetative growth, while the thermal time elapsing from curd initiation to harvest should be low (around 200 degree-centigrade days) in terms of curd weight. The highest curd weight and yield were obtained from the plants of the first planting time, namely 01 July, compared to the other planting times (15 July and 01 August). Although there were no significant differences between the cultivars, the highest yields were obtained from cv. Me & Carillon (13.25 t ha-1), SG 4004 F1 (13.14 t ha-1) and White Rock (11.51 t ha-1), respectively.
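Thermal time as used above is accumulated degree-centigrade days: the daily mean temperature above a base temperature, summed over the growth period. A minimal sketch, with hypothetical temperatures and base temperature:

```python
def thermal_time(daily_min_max, t_base=0.0):
    """Accumulated thermal time (degree-C days): daily mean temperature
    above a base temperature, summed over the period."""
    total = 0.0
    for t_min, t_max in daily_min_max:
        mean = (t_min + t_max) / 2.0
        total += max(0.0, mean - t_base)  # days below base contribute nothing
    return total

# hypothetical week of (min, max) temperatures in degrees C
week = [(12, 24), (13, 25), (11, 23), (10, 22), (14, 26), (12, 24), (13, 25)]
gdd = thermal_time(week, t_base=4.0)  # 99.0 degree-C days
```

Summing such daily increments from planting to curd initiation gives the roughly 1200 degree-centigrade days the abstract refers to.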
Multi-model Cross Pollination in Time via Data Assimilation
Du, H.; Smith, L. A.
2015-12-01
Nonlinear dynamical systems are frequently used to model physical processes including fluid dynamics, weather and climate. Uncertainty in the observations makes identification of the exact state impossible for a chaotic nonlinear system; this suggests basing forecasts on an ensemble of initial conditions to reflect the inescapable uncertainty in the observations. In general, when forecasting real systems, the model class from which the particular model equations are drawn does not contain a process that is able to generate trajectories consistent with the data. Multi-model ensembles have become popular tools to account for uncertainties due to observational noise and structural model error in weather and climate simulation-based predictions on time scales from days to seasons and centuries. There have been some promising results suggesting that multi-model ensemble forecasts outperform single-model forecasts. Current multi-model ensemble forecasts focus on combining single-model ensemble forecasts by means of statistical post-processing. Assuming each model is developed independently, every single model is likely to contain local dynamical information different from that of the other models. Using statistical post-processing, such information is only carried by the simulations under a single model ensemble: no advantage is taken to influence simulations under the other models. A novel methodology, named Multi-model Cross Pollination in Time, is proposed as a multi-model ensemble scheme with the aim of integrating the dynamical information from each individual model operationally in time. The proposed method generates model states in time by applying advanced nonlinear data assimilation scheme(s) over the multi-model forecasts. The proposed approach is demonstrated to outperform traditional statistical post-processing in the 40-dimensional Lorenz96 flow. It is suggested that this illustration could form the basis for more general results.
Time dependent viscous string cloud cosmological models
Tripathy, S. K.; Nayak, S. K.; Sahu, S. K.; Routray, T. R.
2009-09-01
Bianchi type-I string cosmological models are studied in the Saez-Ballester theory of gravitation when the source for the energy momentum tensor is a viscous string cloud coupled to the gravitational field. The bulk viscosity is assumed to vary with time and is related to the scalar expansion. The relationship between the proper energy density ρ and the string tension density λ is investigated for two different cosmological models.
A real time hyperelastic tissue model.
Zhong, Hualiang; Peters, Terry
2007-06-01
Real-time soft tissue modeling has a potential application in medical training, procedure planning and image-guided therapy. This paper characterizes the mechanical properties of organ tissue using a hyperelastic material model, an approach which is then incorporated into a real-time finite element framework. While generalizable, in this paper we use the published mechanical properties of pig liver to characterize an example application. Specifically, we calibrate the parameters of an exponential model, with a least-squares method (LSM) using the assumption that the material is isotropic and incompressible in a uniaxial compression test. From the parameters obtained, the stress-strain curves generated from the LSM are compared to those from the corresponding computational model solved by ABAQUS and also to experimental data, resulting in mean errors of 1.9 and 4.8%, respectively, which are considerably better than those obtained when employing the Neo-Hookean model. We demonstrate our approach through the simulation of a biopsy procedure, employing a tetrahedral mesh representation of human liver generated from a CT image. Using the material properties along with the geometric model, we develop a nonlinear finite element framework to simulate the behaviour of liver during an interventional procedure with a real-time performance achieved through the use of an interpolation approach.
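The calibration step above, fitting an exponential stress-strain model by least squares from uniaxial test data, can be sketched as follows. The one-dimensional law sigma = a * (exp(b * eps) - 1) and the crude grid search are illustrative stand-ins for the paper's actual strain-energy function and LSM solver.

```python
import math

def stress(strain, a, b):
    """Illustrative 1-D exponential stress-strain law (a stand-in for a
    full hyperelastic strain-energy function)."""
    return a * (math.exp(b * strain) - 1.0)

# synthetic uniaxial-compression data from known parameters
a_true, b_true = 2.0, 5.0
strains = [i * 0.02 for i in range(1, 11)]
stresses = [stress(e, a_true, b_true) for e in strains]

def sse(a, b):
    """Sum of squared errors between model and measured stresses."""
    return sum((stress(e, a, b) - s) ** 2 for e, s in zip(strains, stresses))

# crude least-squares: grid search over plausible parameter ranges
best = min(((sse(a / 10, b / 10), a / 10, b / 10)
            for a in range(10, 41) for b in range(30, 71)),
           key=lambda t: t[0])
_, a_fit, b_fit = best
```

With noise-free synthetic data the grid search recovers the generating parameters exactly; on real test data a gradient-based least-squares routine would replace the grid.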
Fundamental State Space Time Series Models for JEPX Electricity Prices
Ofuji, Kenta; Kanemoto, Shigeru
Time series models are popular in attempts to model and forecast price dynamics in various markets. In this paper, we formulate two state space models and test their applicability to power price modeling and forecasting using JEPX (Japan Electric Power eXchange) data. State space models generally have a high degree of flexibility through their time-dependent state transition matrices and system equation configurations. Based on empirical data analysis and the past literature, we used calculation assumptions to a) extract a stochastic trend component to capture non-stationarity, and b) detect structural changes underlying the market. The stepwise calculation algorithm followed that of the Kalman filter. We then evaluated the two models' forecasting capabilities in comparison with ordinary AR (autoregressive) and ARCH (autoregressive conditional heteroskedasticity) models. By choosing proper explanatory variables, the latter state space model yielded as good a forecasting capability as that of the AR and the ARCH models for a short forecasting horizon.
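The stochastic-trend extraction via the Kalman filter can be illustrated with the simplest state space model, a scalar local-level model; the noise variances and the short price series below are made up for illustration.

```python
def kalman_local_level(ys, q=0.1, r=1.0):
    """Scalar Kalman filter for a local-level (stochastic trend) model:
    state x_t = x_{t-1} + w_t (var q), observation y_t = x_t + v_t (var r)."""
    x, p = ys[0], 1.0          # initialize at the first observation
    filtered = [x]
    for y in ys[1:]:
        p = p + q              # predict: variance grows by process noise
        k = p / (p + r)        # Kalman gain
        x = x + k * (y - x)    # update toward the new observation
        p = (1.0 - k) * p
        filtered.append(x)
    return filtered

prices = [10.0, 10.4, 10.2, 11.0, 10.8, 11.5]
trend = kalman_local_level(prices)
```

Each filtered value is a convex combination of past observations, so the extracted trend smooths the raw prices while tracking the level shift.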
Continuous time modelling of dynamical spatial lattice data observed at sparsely distributed times
DEFF Research Database (Denmark)
Rasmussen, Jakob Gulddahl; Møller, Jesper
2007-01-01
Summary. We consider statistical and computational aspects of simulation-based Bayesian inference for a spatial-temporal model based on a multivariate point process which is only observed at sparsely distributed times. The point processes are indexed by the sites of a spatial lattice, and they exhibit spatial interaction. For specificity we consider a particular dynamical spatial lattice data set which has previously been analysed by a discrete time model involving unknown normalizing constants. We discuss the advantages and disadvantages of using continuous time processes compared to discrete time models.
Finite Time Blowup in a Realistic Food-Chain Model
Parshad, Rana
2013-05-19
We investigate a realistic three-species food-chain model with a generalist top predator. The model, based on a modified version of the Leslie-Gower scheme, incorporates mutual interference in all three populations and generalizes several other known models in the ecological literature. We show that the model exhibits finite time blowup in a certain parameter range and for large enough initial data. This result implies that finite time blowup is possible in a large class of such three-species food-chain models. We propose a modification to the model and prove that the modified model has globally existing classical solutions, as well as a global attractor. We reconstruct the attractor using nonlinear time series analysis and show that it possesses rich dynamics, including chaos in a certain parameter regime, whilst avoiding blowup in any parameter regime. We also provide estimates on its fractal dimension as well as numerical simulations to visualise the spatiotemporal chaos.
An, Yang; Sun, Mei; Gao, Cuixia; Han, Dun; Li, Xiuming
2018-02-01
This paper studies the influence of Brent oil price fluctuations on the stock prices of two distinct blocks in China, namely the petrochemical block and the electric equipment and new energy block, applying the Shannon entropy of information theory. The co-movement trend of crude oil price and stock prices is divided into different fluctuation patterns with the coarse-graining method. Then, a bivariate time series network model is established for the stocks of the two blocks in five different periods. By joint analysis of the network-oriented metrics, the key modes and underlying evolutionary mechanisms are identified. The results show that both networks have different fluctuation characteristics in different periods. Their co-movement patterns are clustered in some key modes and conversion intermediaries. The study not only reveals the lag effect of crude oil price fluctuations on the stocks of Chinese industry blocks but also verifies the necessity of research on special periods, and suggests that the government should use different energy policies to stabilize market volatility in different periods. A new way is provided to study the unidirectional influence between multiple variables or complex time series.
Directory of Open Access Journals (Sweden)
Xiaomin Xu
2015-11-01
Full Text Available The uncertainty and irregularity of wind power generation are caused by the intermittency and randomness of wind resources. Such volatility brings severe challenges to the wind power grid. Ultrashort-term and short-term wind power forecasting with high prediction accuracy has great significance for reducing the amount of abandoned wind power, optimizing the conventional power generation plan, adjusting the maintenance schedule and developing real-time monitoring systems. Therefore, accurate forecasting of wind power generation is important in electric load forecasting. The echo state network (ESN) is a recurrent neural network composed of input, hidden and output layers. It can approximate nonlinear systems well and achieves great results in nonlinear chaotic time series forecasting. Besides, the ESN is simpler and less computationally demanding than traditional neural network training, which provides more accurate training results. Aiming at addressing the disadvantages of the standard ESN, this paper makes some improvements. Combining the complementary advantages of particle swarm optimization and tabu search, the generalization of the ESN is improved. To verify the validity and applicability of this method, case studies of multi-time-scale forecasting of wind power output are carried out to reconstruct the chaotic time series of actual wind power generation data in a certain region and to predict wind power generation. Meanwhile, the influence of seasonal factors on wind power is taken into consideration. Compared with the classical ESN and the conventional Back Propagation (BP) neural network, the results verify the superiority of the proposed method.
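The core of an echo state network, a fixed random reservoir with only the linear readout trained, can be sketched in numpy. This is a plain ESN with a ridge-regression readout on a synthetic sine series; it omits the particle swarm/tabu search improvements the paper proposes, and all sizes and scalings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
# Fixed random reservoir; only the linear readout W_out is trained.
n_res, spectral_radius, ridge = 100, 0.9, 1e-6
W_in = rng.uniform(-0.5, 0.5, size=(n_res, 1))
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W *= spectral_radius / max(abs(np.linalg.eigvals(W)))  # echo state scaling

t = np.arange(400)
series = np.sin(0.1 * t)                 # toy stand-in for a power series
states = np.zeros((len(series), n_res))
x = np.zeros(n_res)
for i, u in enumerate(series):
    x = np.tanh(W_in[:, 0] * u + W @ x)  # reservoir state update
    states[i] = x

washout = 50                             # discard the initial transient
S, y = states[washout:-1], series[washout + 1:]   # one-step-ahead targets
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ y)
pred = S @ W_out
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
```

Because only the readout is a linear least-squares fit, training is far cheaper than backpropagating through a recurrent network of the same size.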
Space-time modeling of timber prices
Mo Zhou; Joseph Buongiorno
2006-01-01
A space-time econometric model was developed for southern pine sawtimber prices in 21 geographically contiguous regions of the southern United States. The correlations between prices in neighboring regions helped predict future prices. The impulse response analysis showed that although southern pine sawtimber markets were not globally integrated, local supply and demand...
Visibility Graph Based Time Series Analysis.
Stephen, Mutua; Gu, Changgui; Yang, Huijie
2015-01-01
Network based time series analysis has made considerable achievements in recent years. By mapping mono/multivariate time series into networks, one can investigate both their microscopic and macroscopic behaviors. However, most proposed approaches lead to the construction of static networks, consequently providing limited information on evolutionary behaviors. In the present paper we propose a method called visibility graph based time series analysis, in which series segments are mapped to visibility graphs as descriptions of the corresponding states and the successively occurring states are linked. This procedure converts a time series into a temporal network and at the same time a network of networks. Findings from empirical records for stock markets in the USA (S&P500 and Nasdaq) and artificial series generated by means of fractional Gaussian motions show that the method can provide rich information benefiting short-term and long-term predictions. Theoretically, we propose a method to investigate time series from the viewpoint of a network of networks.
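The mapping from a series segment to a visibility graph uses the natural visibility criterion: two samples are linked if the straight line between them stays above every intermediate sample. A minimal sketch:

```python
def visibility_graph(series):
    """Natural visibility graph: nodes are time points; i and j are linked
    if the straight line between (i, y_i) and (j, y_j) passes above every
    intermediate sample."""
    n = len(series)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            visible = all(
                series[k] < series[j]
                + (series[i] - series[j]) * (j - k) / (j - i)
                for k in range(i + 1, j)
            )
            if visible:
                edges.add((i, j))
    return edges

edges = visibility_graph([1.0, 3.0, 2.0, 4.0, 1.5])
```

Adjacent points are always mutually visible, so the graph is connected; peaks such as the samples at indices 1 and 3 see past the dip between them, which is what lets the graph encode the series' fluctuation structure.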
Time-dependent intranuclear cascade model
International Nuclear Information System (INIS)
Barashenkov, V.S.; Kostenko, B.F.; Zadorogny, A.M.
1980-01-01
An intranuclear cascade model with explicit consideration of the time coordinate in the Monte Carlo simulation of the development of a cascade particle shower has been considered. Calculations have been performed using a diffuse nuclear boundary without any step approximation of the density distribution. Changes in the properties of the target nucleus during the cascade development have been taken into account. The results of these calculations have been compared with experiment and with the data which had been obtained by means of a time-independent cascade model. The consideration of time improved agreement between experiment and theory particularly for high-energy shower particles; however, for low-energy cascade particles (with grey and black tracks in photoemulsion) a discrepancy remains at T >= 10 GeV. (orig.)
Koopman Operator Framework for Time Series Modeling and Analysis
Surana, Amit
2018-01-01
We propose an interdisciplinary framework for time series classification, forecasting, and anomaly detection by combining concepts from Koopman operator theory, machine learning, and linear systems and control theory. At the core of this framework is nonlinear dynamic generative modeling of time series using the Koopman operator which is an infinite-dimensional but linear operator. Rather than working with the underlying nonlinear model, we propose two simpler linear representations or model forms based on Koopman spectral properties. We show that these model forms are invariants of the generative model and can be readily identified directly from data using techniques for computing Koopman spectral properties without requiring the explicit knowledge of the generative model. We also introduce different notions of distances on the space of such model forms which is essential for model comparison/clustering. We employ the space of Koopman model forms equipped with distance in conjunction with classical machine learning techniques to develop a framework for automatic feature generation for time series classification. The forecasting/anomaly detection framework is based on using Koopman model forms along with classical linear systems and control approaches. We demonstrate the proposed framework for human activity classification, and for time series forecasting/anomaly detection in power grid application.
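Identifying a linear Koopman-style model form directly from data is commonly done with dynamic mode decomposition: fit a linear operator to snapshot pairs and inspect its spectrum. The sketch below uses a synthetic rotation so the operator is recovered exactly; it is a much simpler stand-in for the model forms proposed in the paper.

```python
import numpy as np

# Generate snapshots from a known linear system (a pure rotation),
# then recover the operator from data alone.
theta = 0.2
A_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])

x = np.array([1.0, 0.0])
snapshots = [x]
for _ in range(50):
    x = A_true @ x
    snapshots.append(x)
S = np.array(snapshots).T            # 2 x 51 snapshot matrix
X, Xp = S[:, :-1], S[:, 1:]          # paired snapshots (x_t, x_{t+1})

A_est = Xp @ np.linalg.pinv(X)       # least-squares DMD operator
eigvals = np.linalg.eigvals(A_est)   # spectrum encodes the dynamics
```

The eigenvalues lie on the unit circle (modulus 1), identifying the dynamics as a pure oscillation; distances between such spectra are one way to compare or cluster time series, in the spirit of the framework above.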
Directory of Open Access Journals (Sweden)
Yu Uneno
Full Text Available We aimed to develop an adaptable prognosis prediction model that could be applied at any time point during the treatment course for patients with cancer receiving chemotherapy, by applying time-series real-world big data. Between April 2004 and September 2014, 4,997 patients with cancer who had received systemic chemotherapy were registered in a prospective cohort database at the Kyoto University Hospital. Of these, 2,693 patients with a death record were eligible for inclusion and divided into training (n = 1,341) and test (n = 1,352) cohorts. In total, 3,471,521 laboratory data at 115,738 time points, representing 40 laboratory items [e.g., white blood cell counts and albumin (Alb) levels] that were monitored for 1 year before the death event were applied for constructing prognosis prediction models. All possible prediction models comprising three different items from 40 laboratory items (40C3 = 9,880) were generated in the training cohort, and the model selection was performed in the test cohort. The fitness of the selected models was externally validated in the validation cohort from three independent settings. A prognosis prediction model utilizing Alb, lactate dehydrogenase, and neutrophils was selected based on a strong ability to predict death events within 1-6 months, and a set of six prediction models corresponding to 1, 2, 3, 4, 5, and 6 months was developed. The area under the curve (AUC) ranged from 0.852 for the 1 month model to 0.713 for the 6 month model. External validation supported the performance of these models. By applying time-series real-world big data, we successfully developed a set of six adaptable prognosis prediction models for patients with cancer receiving chemotherapy.
Uneno, Yu; Taneishi, Kei; Kanai, Masashi; Okamoto, Kazuya; Yamamoto, Yosuke; Yoshioka, Akira; Hiramoto, Shuji; Nozaki, Akira; Nishikawa, Yoshitaka; Yamaguchi, Daisuke; Tomono, Teruko; Nakatsui, Masahiko; Baba, Mika; Morita, Tatsuya; Matsumoto, Shigemi; Kuroda, Tomohiro; Okuno, Yasushi; Muto, Manabu
2017-01-01
We aimed to develop an adaptable prognosis prediction model that could be applied at any time point during the treatment course for patients with cancer receiving chemotherapy, by applying time-series real-world big data. Between April 2004 and September 2014, 4,997 patients with cancer who had received systemic chemotherapy were registered in a prospective cohort database at the Kyoto University Hospital. Of these, 2,693 patients with a death record were eligible for inclusion and divided into training (n = 1,341) and test (n = 1,352) cohorts. In total, 3,471,521 laboratory data at 115,738 time points, representing 40 laboratory items [e.g., white blood cell counts and albumin (Alb) levels] that were monitored for 1 year before the death event were applied for constructing prognosis prediction models. All possible prediction models comprising three different items from 40 laboratory items (40C3 = 9,880) were generated in the training cohort, and the model selection was performed in the test cohort. The fitness of the selected models was externally validated in the validation cohort from three independent settings. A prognosis prediction model utilizing Alb, lactate dehydrogenase, and neutrophils was selected based on a strong ability to predict death events within 1-6 months and a set of six prediction models corresponding to 1, 2, 3, 4, 5, and 6 months was developed. The area under the curve (AUC) ranged from 0.852 for the 1 month model to 0.713 for the 6 month model. External validation supported the performance of these models. By applying time-series real-world big data, we successfully developed a set of six adaptable prognosis prediction models for patients with cancer receiving chemotherapy.
Real time wave forecasting using wind time history and numerical model
Jain, Pooja; Deo, M. C.; Latha, G.; Rajendran, V.
Operational activities in the ocean, like planning for structural repairs or fishing expeditions, require real time prediction of waves over typical durations of, say, a few hours. Such predictions can be made by using a numerical model or a time series model employing continuously recorded waves. This paper presents another option based on a different time series approach, in which the input is in the form of preceding wind speed and wind direction observations. This would be useful for stations where costly wave buoys are not deployed and instead only meteorological buoys measuring wind are moored. The technique employs the alternative artificial intelligence approaches of an artificial neural network (ANN), genetic programming (GP) and model tree (MT) to carry out the time series modeling of wind to obtain waves. Wind observations at four offshore sites along the east coast of India were used. For calibration purposes the wave data was generated using a numerical model. The predicted waves obtained using the proposed time series models, when compared with the numerically generated waves, showed good resemblance in terms of the selected error criteria. Large differences across the chosen techniques of ANN, GP and MT were not noticed. Wave hindcasting at the same time step and predictions over shorter lead times were better than predictions over longer lead times. The proposed method is a cost effective and convenient option when site-specific information is desired.
NASA AVOSS Fast-Time Wake Prediction Models: User's Guide
Ahmad, Nash'at N.; VanValkenburg, Randal L.; Pruis, Matthew
2014-01-01
The National Aeronautics and Space Administration (NASA) is developing and testing fast-time wake transport and decay models to safely enhance the capacity of the National Airspace System (NAS). The fast-time wake models are empirical algorithms used for real-time predictions of wake transport and decay based on aircraft parameters and ambient weather conditions. The aircraft dependent parameters include the initial vortex descent velocity and the vortex pair separation distance. The atmospheric initial conditions include vertical profiles of temperature or potential temperature, eddy dissipation rate, and crosswind. The current distribution includes the latest versions of the APA (3.4) and the TDP (2.1) models. This User's Guide provides detailed information on the model inputs, file formats, and the model output. An example of a model run and a brief description of the Memphis 1995 Wake Vortex Dataset are also provided.
Discrete time modelization of human pilot behavior
Cavalli, D.; Soulatges, D.
1975-01-01
This modelization starts from the following hypotheses: the pilot's behavior is a discrete-time process, he can perform only one task at a time, and his operating mode depends on the considered flight subphase. The pilot's behavior was observed using an electro-oculometer and a simulator cockpit. A FORTRAN program has been elaborated using two strategies. The first one is a Markovian process in which the successive instrument readings are governed by a matrix of conditional probabilities. In the second one, the strategy is a heuristic process and the concepts of mental load and performance are described. The results of the two aspects have been compared with simulation data.
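The first strategy, successive instrument readings governed by a matrix of conditional probabilities, is an ordinary Markov chain and can be simulated directly; the instruments and transition probabilities below are invented for illustration, not taken from the study.

```python
import random

random.seed(7)
instruments = ["attitude", "airspeed", "altimeter"]
# Conditional probabilities P(next instrument | current instrument);
# rows must sum to 1. Values are illustrative only.
P = {
    "attitude":  [0.2, 0.4, 0.4],
    "airspeed":  [0.7, 0.1, 0.2],
    "altimeter": [0.7, 0.2, 0.1],
}

def scan(start, steps):
    """Simulate a sequence of instrument readings as a Markov chain."""
    seq, current = [start], start
    for _ in range(steps):
        current = random.choices(instruments, weights=P[current])[0]
        seq.append(current)
    return seq

sequence = scan("attitude", 200)
```

Estimating such a matrix from recorded eye-movement data and comparing simulated scan sequences with observed ones is the essence of the Markovian strategy described above.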
Linear Parametric Model Checking of Timed Automata
DEFF Research Database (Denmark)
Hune, Tohmas Seidelin; Romijn, Judi; Stoelinga, Mariëlle
2001-01-01
We present an extension of the model checker Uppaal capable of synthesizing linear parameter constraints for the correctness of parametric timed automata. The symbolic representation of the (parametric) state-space is shown to be correct. A second contribution of this paper is the identification of a subclass of parametric timed automata (L/U automata), for which the emptiness problem is decidable, contrary to the full class where it is known to be undecidable. Also we present a number of lemmas enabling the verification effort to be reduced for L/U automata in some cases. We illustrate our approach...
Model-Based Design for Embedded Systems
Nicolescu, Gabriela
2009-01-01
Model-based design allows teams to start the design process from a high-level model that is gradually refined through abstraction levels to ultimately yield a prototype. This book describes the main facets of heterogeneous system design. It focuses on multi-core methodological issues, real-time analysis, and modeling and validation
Modelling of Patterns in Space and Time
Murray, James
1984-01-01
This volume contains a selection of papers presented at the workshop "Modelling of Patterns in Space and Time", organized by the Sonderforschungsbereich 123, "Stochastische Mathematische Modelle", in Heidelberg, July 4-8, 1983. The main aim of this workshop was to bring together physicists, chemists, biologists and mathematicians for an exchange of ideas and results in modelling patterns. Since the mathematical problems arising depend only partially on the particular field of applications, the interdisciplinary cooperation proved very useful. The workshop mainly treated phenomena showing spatial structures. The special areas covered were morphogenesis, growth in cell cultures, competition systems, structured populations, chemotaxis, chemical precipitation, space-time oscillations in chemical reactors, patterns in flames and fluids, and mathematical methods. The discussions between experimentalists and theoreticians were especially interesting and effective. The editors hope that these proceedings reflect ...
Event-Based Conceptual Modeling
DEFF Research Database (Denmark)
Bækgaard, Lars
The paper demonstrates that a wide variety of event-based modeling approaches are based on special cases of the same general event concept, and that the general event concept can be used to unify the otherwise unrelated fields of information modeling and process modeling. A set of event-based modeling approaches are analyzed and the results are used to formulate a general event concept that can be used for unifying the seemingly unrelated event concepts. Events are characterized as short-duration processes that have participants, consequences, and properties, and that may be modeled in terms...
Extended Cellular Automata Models of Particles and Space-Time
Beedle, Michael
2005-04-01
Models of particles and space-time are explored through simulations and theoretical models that use extended cellular automata. The extended cellular automata go beyond simple scalar binary cell-fields, to discrete multi-level group representations such as SO(2), SU(2), SU(3), and Spin(3,1). The propagation and evolution of these extended cellular automata are then compared to quantum field theories based on the "harmonic paradigm", i.e., built from an infinite number of harmonic oscillators, and to gravitational models.
Axial model in curved space-time
Energy Technology Data Exchange (ETDEWEB)
Barcelos-Neto, J.; Farina, C.; Vaidya, A.N.
1986-12-11
We study the axial model in a background gravitational field. Using the zeta-function regularization, we obtain explicitly the anomalous divergence of the axial-vector current and the exact generating functional of the theory. We show that, as a consequence of a space-time-dependent metric, all differential equations involved in the theory generalize to their covariantized forms. We also comment on the finite-mass renormalization exhibited by the pseudoscalar field and the form of the fermion propagator.
Everaars, Jeroen; Settele, Josef; Dormann, Carsten F
2018-01-01
Solitary bees are important but declining wild pollinators. During daily foraging in agricultural landscapes, they encounter a mosaic of patches with nest and foraging habitat and unsuitable matrix. It is insufficiently clear how spatial allocation of nesting and foraging resources and foraging traits of bees affect their daily foraging performance. We investigated potential brood cell construction (as proxy of fitness), number of visited flowers, foraging habitat visitation and foraging distance (pollination proxies) with the model SOLBEE (simulating pollen transport by solitary bees, tested and validated in an earlier study), for landscapes varying in landscape fragmentation and spatial allocation of nesting and foraging resources. Simulated bees varied in body size and nesting preference. We aimed to understand effects of landscape fragmentation and bee traits on bee fitness and the pollination services bees provide, as well as interactions between them, and the general consequences it has to our understanding of the system. This broad scope gives multiple key results. 1) Body size determines fitness more than landscape fragmentation, with large bees building fewer brood cells. High pollen requirements for large bees and the related high time budgets for visiting many flowers may not compensate for faster flight speeds and short handling times on flowers, giving them overall a disadvantage compared to small bees. 2) Nest preference does affect distribution of bees over the landscape, with cavity-nesting bees being restricted to nesting along field edges, which inevitably leads to performance reductions. Fragmentation mitigates this for cavity-nesting bees through increased edge habitat. 3) Landscape fragmentation alone had a relatively small effect on all responses. Instead, the local ratio of nest to foraging habitat affected bee fitness positively through reduced local competition. The spatial coverage of pollination increases steeply in response to this ratio
Settele, Josef; Dormann, Carsten F.
2018-01-01
Solitary bees are important but declining wild pollinators. During daily foraging in agricultural landscapes, they encounter a mosaic of patches with nest and foraging habitat and unsuitable matrix. It is insufficiently clear how spatial allocation of nesting and foraging resources and foraging traits of bees affect their daily foraging performance. We investigated potential brood cell construction (as proxy of fitness), number of visited flowers, foraging habitat visitation and foraging distance (pollination proxies) with the model SOLBEE (simulating pollen transport by solitary bees, tested and validated in an earlier study), for landscapes varying in landscape fragmentation and spatial allocation of nesting and foraging resources. Simulated bees varied in body size and nesting preference. We aimed to understand effects of landscape fragmentation and bee traits on bee fitness and the pollination services bees provide, as well as interactions between them, and the general consequences it has to our understanding of the system. This broad scope gives multiple key results. 1) Body size determines fitness more than landscape fragmentation, with large bees building fewer brood cells. High pollen requirements for large bees and the related high time budgets for visiting many flowers may not compensate for faster flight speeds and short handling times on flowers, giving them overall a disadvantage compared to small bees. 2) Nest preference does affect distribution of bees over the landscape, with cavity-nesting bees being restricted to nesting along field edges, which inevitably leads to performance reductions. Fragmentation mitigates this for cavity-nesting bees through increased edge habitat. 3) Landscape fragmentation alone had a relatively small effect on all responses. Instead, the local ratio of nest to foraging habitat affected bee fitness positively through reduced local competition. The spatial coverage of pollination increases steeply in response to this ratio
Directory of Open Access Journals (Sweden)
Jeroen Everaars
Time-frequency representation based on time-varying ...
Indian Academy of Sciences (India)
defined in a time-frequency space and represents the evolution of signal power as a function of both time and ... the physical meaning of the intrinsic mode function (IMF) resulting from the EMD sifting process and the ... In the case of the basis function approach, each of its time-varying coefficients is expressed as a weighted ...
DEFF Research Database (Denmark)
Jónsdóttir, Kristjana Ýr; Schmiegel, Jürgen; Jensen, Eva Bjørn Vedel
2008-01-01
In the present paper, we give a condensed review, for the nonspecialist reader, of a new modelling framework for spatio-temporal processes, based on Lévy theory. We show the potential of the approach in stochastic geometry and spatial statistics by studying Lévy-based growth modelling of planar objects. The growth models considered are spatio-temporal stochastic processes on the circle. As a by-product, flexible new models for space-time covariance functions on the circle are provided. An application of the Lévy-based growth models to tumour growth is discussed.
Modeling utilization distributions in space and time
Keating, K.A.; Cherry, S.
2009-01-01
W. Van Winkle defined the utilization distribution (UD) as a probability density that gives an animal's relative frequency of occurrence in a two-dimensional (x, y) plane. We extend Van Winkle's work by redefining the UD as the relative frequency distribution of an animal's occurrence in all four dimensions of space and time. We then describe a product kernel model estimation method, devising a novel kernel from the wrapped Cauchy distribution to handle circularly distributed temporal covariates, such as day of year. Using Monte Carlo simulations of animal movements in space and time, we assess estimator performance. Although not unbiased, the product kernel method yields models highly correlated (Pearson's r = 0.975) with true probabilities of occurrence and successfully captures temporal variations in density of occurrence. In an empirical example, we estimate the expected UD in three dimensions (x, y, and t) for animals belonging to each of two distinct bighorn sheep (Ovis canadensis) social groups in Glacier National Park, Montana, USA. Results show the method can yield ecologically informative models that successfully depict temporal variations in density of occurrence for a seasonally migratory species. Some implications of this new approach to UD modeling are discussed. © 2009 by the Ecological Society of America.
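The product kernel idea above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the Gaussian spatial kernels, the bandwidths hx and hy, and the concentration rho are assumed choices; only the wrapped Cauchy kernel for the circular time covariate is taken from the abstract.

```python
import numpy as np

def wrapped_cauchy(theta, mu, rho):
    """Wrapped Cauchy density on the circle (angles in radians, 0 <= rho < 1)."""
    return (1 - rho**2) / (2 * np.pi * (1 + rho**2 - 2 * rho * np.cos(theta - mu)))

def gaussian(u, h):
    """Gaussian kernel with bandwidth h."""
    return np.exp(-0.5 * (u / h) ** 2) / (h * np.sqrt(2 * np.pi))

def ud_estimate(x, y, t, xs, ys, ts, hx=1.0, hy=1.0, rho=0.9):
    """Product-kernel UD estimate at point (x, y, t) from observed
    locations (xs, ys) and circular times ts (e.g. day of year as an angle)."""
    k = gaussian(x - xs, hx) * gaussian(y - ys, hy) * wrapped_cauchy(t, ts, rho)
    return k.mean()
```

The wrapped Cauchy factor handles the fact that 31 December and 1 January are neighbours on the time axis, which an ordinary linear kernel would not.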
Reverse time migration by Krylov subspace reduced order modeling
Basir, Hadi Mahdavi; Javaherian, Abdolrahim; Shomali, Zaher Hossein; Firouz-Abadi, Roohollah Dehghani; Gholamy, Shaban Ali
2018-04-01
Imaging is a key step in seismic data processing. To date, a myriad of advanced pre-stack depth migration approaches have been developed; however, reverse time migration (RTM) is still considered the high-end imaging algorithm. The main limitations on the performance of reverse time migration are the intensive computation of the forward and backward simulations, time consumption, and the memory allocation related to the imaging condition. Based on reduced order modeling, we proposed an algorithm which can address all the aforementioned factors. Our proposed method uses the Krylov subspace method to compute certain mode shapes of the velocity model as an orthogonal basis for reduced order modeling. Reverse time migration by reduced order modeling is amenable to highly parallel computation and strongly reduces the memory requirement of reverse time migration. The synthetic model results showed that the suggested method can decrease the computational costs of reverse time migration by several orders of magnitude compared with reverse time migration by the finite element method.
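A minimal sketch of the Krylov-subspace step, assuming an Arnoldi iteration is used to build the orthogonal basis (the abstract does not specify the exact variant). The basis Q projects a large operator A onto a small reduced-order operator H = QᵀAQ, in which the time stepping can then be done cheaply:

```python
import numpy as np

def arnoldi(A, b, m):
    """Arnoldi iteration: orthonormal basis Q of span{b, Ab, ..., A^(m-1)b}
    and the projected (reduced-order) operator H = Q^T A Q."""
    n = len(b)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        v = A @ Q[:, j]
        for i in range(j + 1):           # modified Gram-Schmidt orthogonalization
            H[i, j] = Q[:, i] @ v
            v = v - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        if H[j + 1, j] < 1e-12:          # breakdown: subspace is invariant
            return Q[:, : j + 1], H[: j + 1, : j + 1]
        Q[:, j + 1] = v / H[j + 1, j]
    return Q[:, :m], H[:m, :m]
```

Because H is tiny compared with A, storing and propagating wavefields in the reduced coordinates is where the memory and runtime savings described above would come from.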
A generalized additive regression model for survival times
DEFF Research Database (Denmark)
Scheike, Thomas H.
2001-01-01
Additive Aalen model; counting process; disability model; illness-death model; generalized additive models; multiple time-scales; non-parametric estimation; survival data; varying-coefficient models.
Real-time advanced nuclear reactor core model
International Nuclear Information System (INIS)
Koclas, J.; Friedman, F.; Paquette, C.; Vivier, P.
1990-01-01
The paper describes a multi-nodal advanced nuclear reactor core model. The model is based on the application of modern equivalence theory to the solution of the neutron diffusion equation in real time, employing the finite differences method. The use of equivalence theory allows the application of the finite differences method to cores divided into hundreds of nodes, as opposed to the much finer divisions (on the order of ten thousand nodes) where the unmodified method is currently applied. As a result the model can be used for modelling of the core kinetics for real-time full-scope training simulators. Results of benchmarks validate the basic assumptions of the model and its applicability to real-time simulation. (orig./HP)
Variable selection for mixture and promotion time cure rate models.
Masud, Abdullah; Tu, Wanzhu; Yu, Zhangsheng
2016-11-16
Failure-time data with cured patients are common in clinical studies. Data from these studies are typically analyzed with cure rate models. Variable selection methods have not been well developed for cure rate models. In this research, we propose two least absolute shrinkage and selection operator (LASSO)-based methods for variable selection in mixture and promotion time cure models with parametric or nonparametric baseline hazards. We conduct an extensive simulation study to assess the operating characteristics of the proposed methods. We illustrate the use of the methods using data from a study of childhood wheezing. © The Author(s) 2016.
Real-Time Vocal Tract Modelling
Directory of Open Access Journals (Sweden)
K. Benkrid
2008-03-01
Full Text Available To date, most speech synthesis techniques have relied upon the representation of the vocal tract by some form of filter, a typical example being linear predictive coding (LPC. This paper describes the development of a physiologically realistic model of the vocal tract using the well-established technique of transmission line modelling (TLM. This technique is based on the principle of wave scattering at transmission line segment boundaries and may be used in one, two, or three dimensions. This work uses this technique to model the vocal tract using a one-dimensional transmission line. A six-port scattering node is applied in the region separating the pharyngeal, oral, and the nasal parts of the vocal tract.
Modelling tourists arrival using time varying parameter
Suciptawati, P.; Sukarsa, K. G.; Kencana, Eka N.
2017-06-01
The importance of tourism and its related sectors to economic development and poverty reduction in many countries has increased researchers' attention to studying and modelling tourist arrivals. This work aims to demonstrate the time varying parameter (TVP) technique for modelling the arrival of Korean tourists to Bali. The number of Korean tourists visiting Bali in the period January 2010 to December 2015 was used to model the number of Korean tourists to Bali (KOR) as the dependent variable. The predictors are the exchange rate of Won to IDR (WON), the inflation rate in Korea (INFKR), and the inflation rate in Indonesia (INFID). Observing that tourist visits to Bali tend to fluctuate by nationality, the model was built by applying TVP, and its parameters were approximated using the Kalman filter algorithm. The results showed that all predictor variables (WON, INFKR, INFID) significantly affect KOR. For in-sample and out-of-sample forecasts with ARIMA-forecasted values for the predictors, the TVP model gave a mean absolute percentage error (MAPE) of 11.24 percent and 12.86 percent, respectively.
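A TVP regression with random-walk coefficients estimated by a Kalman filter can be sketched as below. This is a generic illustration of the technique, not the study's implementation; the state-noise variance q and observation-noise variance r are assumed values.

```python
import numpy as np

def tvp_kalman(y, X, q=1e-4, r=1.0):
    """Kalman filter for the TVP regression y_t = x_t' beta_t + e_t,
    where the coefficients follow a random walk beta_t = beta_{t-1} + w_t.

    q, r: assumed state-noise and observation-noise variances."""
    n, k = X.shape
    beta = np.zeros(k)
    P = np.eye(k)
    betas = np.zeros((n, k))
    for t in range(n):
        x = X[t]
        P = P + q * np.eye(k)              # predict step (random walk)
        S = x @ P @ x + r                  # innovation variance
        K = P @ x / S                      # Kalman gain
        beta = beta + K * (y[t] - x @ beta)
        P = P - np.outer(K, x) @ P         # covariance update
        betas[t] = beta
    return betas
```

The returned path of `betas` is what makes the parameters "time varying": each month's coefficient on WON, INFKR and INFID would be allowed to drift rather than being fixed over the whole sample.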
Ohta, Yusaku; Kobayashi, Tatsuya; Tsushima, Hiroaki; Miura, Satoshi; Hino, Ryota; Takasu, Tomoji; Fujimoto, Hiromi; Iinuma, Takeshi; Tachibana, Kenji; Demachi, Tomotsugu; Sato, Toshiya; Ohzono, Mako; Umino, Norihito
2012-02-01
Real-time crustal deformation monitoring is extremely important for achieving rapid understanding of actual earthquake scales, because the measured permanent displacement directly gives the true earthquake size (seismic moment, Mw), which, in turn, enables tsunami forecasting. We have developed an algorithm to detect/estimate static ground displacements due to earthquake faulting from real-time kinematic GPS (RTK-GPS) time series. The new algorithm identifies permanent displacements by monitoring the difference between a short-term average (STA) and a long-term average (LTA) of the GPS time series. We assessed the noise properties and precision of the RTK-GPS time series under various baseline length conditions and orbits and found that the real-time ephemerides based on the International GNSS Service (IGS) are sufficient for crustal deformation monitoring with long baselines up to ˜1,000 km. We applied the algorithm to data obtained in the 2011 off the Pacific coast of Tohoku earthquake (Mw 9.0) to test the possibility of coseismic displacement detection, and further, we inverted the obtained displacement fields for a fault model; the inversion estimated a fault model with Mw 8.7, which is close to the actual Mw of 9.0, within five minutes of the origin time. Once the fault model is estimated, tsunami waveforms can be immediately synthesized using pre-computed tsunami Green's functions. The calculated waveforms showed good agreement with the actual tsunami observations in both arrival times and wave heights, suggesting that the RTK-GPS data processed by our algorithm can provide reliable rapid tsunami forecasting that can complement existing tsunami forecasting systems based on seismic observations.
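The STA-minus-LTA test described above can be sketched as follows; the window lengths in the example are hypothetical, not the ones used by the authors. A sustained step (a permanent coseismic offset) makes the short-term average jump while the long-term average lags, producing a peak in the difference comparable to the offset size.

```python
import numpy as np

def sta_lta_offsets(series, n_sta, n_lta):
    """Difference between trailing short-term and long-term averages of a
    position time series. Entries before one full LTA window are NaN."""
    c = np.cumsum(np.insert(np.asarray(series, dtype=float), 0, 0.0))
    out = np.full(len(series), np.nan)
    for i in range(n_lta, len(series) + 1):
        sta = (c[i] - c[i - n_sta]) / n_sta   # mean of last n_sta samples
        lta = (c[i] - c[i - n_lta]) / n_lta   # mean of last n_lta samples
        out[i - 1] = sta - lta
    return out
```

Thresholding the returned difference flags candidate permanent displacements, which would then feed the fault-model inversion.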
The manifold model for space-time
International Nuclear Information System (INIS)
Heller, M.
1981-01-01
Physical processes happen on a space-time arena. It turns out that all contemporary macroscopic physical theories presuppose a common mathematical model for this arena, the so-called manifold model of space-time. The first part of the study is a heuristic introduction to the concept of a smooth manifold, starting with the intuitively clearer concepts of a curve and a surface in Euclidean space. In the second part the definitions of the C∞ manifold and of certain structures, which arise in a natural way from the manifold concept, are given. The role of the enveloping Euclidean space (i.e. of the Euclidean space appearing in the manifold definition) in these definitions is stressed. The Euclidean character of the enveloping space induces local Euclidean (topological and differential) properties on the manifold. A suggestion is made that replacing the enveloping Euclidean space by a discrete non-Euclidean space would be a correct way towards the quantization of space-time. (author)
Time Series Based for Online Signature Verification
Directory of Open Access Journals (Sweden)
I Ketut Gede Darma Putra
2013-11-01
Full Text Available A signature verification system matches a tested signature against a claimed signature. This paper proposes a time-series-based feature extraction method and dynamic time warping as the matching method. The system was evaluated on 900 signatures belonging to 50 participants: 3 signatures for reference and 5 test signatures each from the original user, simple impostors and trained impostors. The final system was tested with 50 participants with 3 references. The tests showed that system accuracy without impostors is 90.449% at threshold 44, with a false non-match rate (FNMR) of 5.2% and a false match rate (FMR) of 4.351%; with impostors, system accuracy is 80.136% at threshold 27, with an FNMR of 15.6% and an average FMR of 4.264%, with details as follows: the acceptance error rate is 0.392%, the acceptance error rate for simple impostors is 3.2% and the acceptance error rate for trained impostors is 9.2%.
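The dynamic time warping match step can be sketched with the textbook recurrence; this is an illustrative implementation on 1-D feature sequences, not the authors' code. A genuine signature should yield a small DTW distance to its reference, a forgery a larger one, and the threshold mentioned above separates the two.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D feature sequences,
    using the standard cumulative-cost recurrence."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # best of: insertion, deletion, match
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

DTW tolerates local speed variations in how a signature is written, which a point-by-point Euclidean comparison would penalize.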
Traceability in Model-Based Testing
Directory of Open Access Journals (Sweden)
Mathew George
2012-11-01
Full Text Available The growing complexities of software and the demand for shorter time to market are two important challenges that face today’s IT industry. These challenges demand the increase of both productivity and quality of software. Model-based testing is a promising technique for meeting these challenges. Traceability modeling is a key issue and challenge in model-based testing. Relationships between the different models will help to navigate from one model to another, and trace back to the respective requirements and the design model when the test fails. In this paper, we present an approach for bridging the gaps between the different models in model-based testing. We propose relation definition markup language (RDML for defining the relationships between models.
Linear Regression Based Real-Time Filtering
Directory of Open Access Journals (Sweden)
Misel Batmend
2013-01-01
Full Text Available This paper introduces a real-time filtering method based on a least-squares fitted line. The method can be used when the filtered signal is linear; this constraint narrows the band of potential applications. Its advantage over the Kalman filter is that it is computationally less expensive. The paper further deals with applying the introduced method to filtering data used to evaluate the position of engraved material with respect to the engraving machine. The filter was implemented in the control system of a CNC engraving machine. Experiments showing its performance are included.
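The fitted-line filter can be sketched as a sliding-window least-squares fit evaluated at the newest sample; the window length is an assumed parameter, and the paper's exact formulation may differ.

```python
import numpy as np

def line_fit_filter(samples, window=8):
    """Filter each incoming sample by least-squares fitting a line to the
    last `window` samples and returning the line value at the newest point."""
    out = []
    for i in range(len(samples)):
        y = np.asarray(samples[max(0, i - window + 1) : i + 1], dtype=float)
        if len(y) < 2:
            out.append(float(y[-1]))   # not enough points to fit a line yet
            continue
        x = np.arange(len(y), dtype=float)
        slope, intercept = np.polyfit(x, y, 1)
        out.append(slope * x[-1] + intercept)
    return out
```

Each step costs only a small fixed-size least-squares fit, which is the computational advantage over a Kalman filter claimed above.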
Model-based Software Engineering
DEFF Research Database (Denmark)
Kindler, Ekkart
2010-01-01
The vision of model-based software engineering is to make models the main focus of software development and to automatically generate software from these models. Part of that idea works already today. But, there are still difficulties when it comes to behaviour. Actually, there is no lack in models...
Model based design introduction: modeling game controllers to microprocessor architectures
Jungwirth, Patrick; Badawy, Abdel-Hameed
2017-04-01
We present an introduction to model based design. Model based design is a visual representation, generally a block diagram, used to model and incrementally develop a complex system. Model based design is a commonly used design methodology for digital signal processing, control systems, and embedded systems. Model based design's philosophy is: to solve a problem - a step at a time. The approach can be compared to a series of steps that converge to a solution. A block diagram simulation tool allows a design to be simulated with real world measurement data. For example, if an analog control system is being upgraded to a digital control system, the analog sensor input signals can be recorded. The digital control algorithm can be simulated with the real world sensor data. The output from the simulated digital control system can then be compared to the old analog based control system. Model based design can be compared to Agile software development. The Agile software development goal is to develop working software in incremental steps. Progress is measured in completed and tested code units; in model based design, progress is measured in completed and tested blocks. We present a concept for a video game controller and then use model based design to iterate the design towards a working system. We also describe a model based design effort to develop an OS Friendly Microprocessor Architecture based on RISC-V.
RTMOD: Real-Time MODel evaluation
DEFF Research Database (Denmark)
Graziani, G.; Galmarini, S.; Mikkelsen, Torben
2000-01-01
… the RTMOD web page for detailed information on the actual release, and as soon as possible they then uploaded their predictions to the RTMOD server and could soon after start their inter-comparison analysis with other modellers. When additional forecast data arrived, already existing statistical results … At that time, the World Wide Web was not available to all the exercise participants, and plume predictions were therefore submitted to JRC-Ispra by fax and regular mail for subsequent processing. The rapid development of the World Wide Web in the second half of the nineties, together with the experience gained during the ETEX exercises, suggested the development of this project. RTMOD featured a web-based user-friendly interface for data submission and an interactive program module for displaying, intercomparison and analysis of the forecasts. RTMOD has focussed on model intercomparison of concentration …
Time Modeling: Salvatore Sciarrino, Windows and Beclouding
Directory of Open Access Journals (Sweden)
Acácio Tadeu de Camargo Piedade
2017-08-01
Full Text Available In this article I intend to discuss one of the figures created by the Italian composer Salvatore Sciarrino: the windowed form. After the composer's explanation of this figure, I argue that windows in composition can open inwards and outwards of the musical discourse. On one side, they point to the composition's inner ambiences and constitute an internal remission. On the other, they instigate the audience to comprehend the external reference, thereby constructing intertextuality. After the outward window form, I will consider some techniques of distortion, particularly one that I call beclouding. To conclude, I will comment on the question of memory and of composition as time modeling.
Principles of models based engineering
Energy Technology Data Exchange (ETDEWEB)
Dolin, R.M.; Hefele, J.
1996-11-01
This report describes a Models Based Engineering (MBE) philosophy and implementation strategy that has been developed at Los Alamos National Laboratory's Center for Advanced Engineering Technology. A major theme in this discussion is that models based engineering is an information management technology enabling the development of information driven engineering. Unlike other information management technologies, models based engineering encompasses the breadth of engineering information, from design intent through product definition to consumer application.
Stochastic modeling of hourly rainfall times series in Campania (Italy)
Giorgio, M.; Greco, R.
2009-04-01
Occurrence of flowslides and floods in small catchments is difficult to predict, since it is affected by a number of variables, such as mechanical and hydraulic soil properties, slope morphology, vegetation coverage, and rainfall spatial and temporal variability. Consequently, landslide risk assessment procedures and early warning systems still rely on simple empirical models based on correlation between recorded rainfall data and observed landslides and/or river discharges. The effectiveness of such systems could be improved by reliable quantitative rainfall prediction, which allows gaining larger lead times. Analysis of on-site recorded rainfall height time series represents the most effective approach for a reliable prediction of the local temporal evolution of rainfall. Hydrological time series analysis is a widely studied field in hydrology, often carried out by means of autoregressive models, such as AR, ARMA, ARX and ARMAX (e.g. Salas [1992]). Such models gave the best results when applied to the analysis of autocorrelated hydrological time series, like river flow or level time series. Conversely, they are not able to model the behaviour of intermittent time series, like point rainfall height series usually are, especially when recorded with short sampling time intervals. More useful for this issue are the so-called DRIP (Disaggregated Rectangular Intensity Pulse) and NSRP (Neyman-Scott Rectangular Pulse) models [Heneker et al., 2001; Cowpertwait et al., 2002], usually adopted to generate synthetic point rainfall series. In this paper, the DRIP model approach is adopted, in which the sequence of rain storms and dry intervals constituting the structure of the rainfall time series is modeled as an alternating renewal process. The final aim of the study is to provide a useful tool for implementing an early warning system for hydrogeological risk management. Model calibration has been carried out with hourly rainfall height data provided by the rain gauges of Campania Region civil
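An alternating renewal process of dry spells and rectangular storm pulses, the core of the DRIP-style structure described above, can be sketched as follows. All distributions and their parameters here are assumed placeholders for illustration, not the calibrated Campania values.

```python
import numpy as np

def alternating_renewal_rain(n_storms, seed=None):
    """Synthetic hourly rainfall as an alternating renewal process of dry
    spells and rectangular storm pulses (DRIP-style sketch)."""
    rng = np.random.default_rng(seed)
    series = []
    for _ in range(n_storms):
        dry = int(rng.exponential(20.0)) + 1   # dry spell length, hours (assumed)
        dur = int(rng.exponential(5.0)) + 1    # storm duration, hours (assumed)
        depth = rng.exponential(3.0)           # storm intensity, mm/h (assumed)
        series.extend([0.0] * dry + [depth] * dur)
    return np.array(series)
```

Calibration would replace the assumed exponentials with distributions fitted to the hourly rain-gauge records, after which the generator can feed synthetic series into the early warning workflow.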
Gupta, Nidhi; Christiansen, Caroline Stordal; Hanisch, Christiana; Bay, Hans; Burr, Hermann; Holtermann, Andreas
2017-01-01
Objectives To investigate the differences between a questionnaire-based and accelerometer-based sitting time, and develop a model for improving the accuracy of questionnaire-based sitting time for predicting accelerometer-based sitting time. Methods 183 workers in a cross-sectional study reported sitting time per day using a single question during the measurement period, and wore 2 Actigraph GT3X+ accelerometers on the thigh and trunk for 1–4 working days to determine their actual sitting time per day using the validated Acti4 software. Least squares regression models were fitted with questionnaire-based sitting time and other self-reported predictors to predict accelerometer-based sitting time. Results Questionnaire-based and accelerometer-based average sitting times were ≈272 and ≈476 min/day, respectively. A low Pearson correlation (r=0.32), high mean bias (204.1 min) and wide limits of agreement (549.8 to −139.7 min) between questionnaire-based and accelerometer-based sitting time were found. The prediction model based on questionnaire-based sitting explained 10% of the variance in accelerometer-based sitting time. Inclusion of 9 self-reported predictors in the model increased the explained variance to 41%, with 10% optimism using a resampling bootstrap validation. Based on a split validation analysis, the developed prediction model on ≈75% of the workers (n=132) reduced the mean and the SD of the difference between questionnaire-based and accelerometer-based sitting time by 64% and 42%, respectively, in the remaining 25% of the workers. Conclusions This study indicates that questionnaire-based sitting time has low validity and that a prediction model can be one solution to materially improve the precision of questionnaire-based sitting time. PMID:28093433
Linear Time Invariant Models for Integrated Flight and Rotor Control
Olcer, Fahri Ersel
2011-12-01
Recent developments on individual blade control (IBC) and physics based reduced order models of various on-blade control (OBC) actuation concepts are opening up opportunities to explore innovative rotor control strategies for improved rotor aerodynamic performance, reduced vibration and BVI noise, and improved rotor stability, etc. Further, recent developments in computationally efficient algorithms for the extraction of Linear Time Invariant (LTI) models are providing a convenient framework for exploring integrated flight and rotor control, while accounting for the important couplings that exist between body and low frequency rotor response and high frequency rotor response. Formulation of linear time invariant (LTI) models of a nonlinear system about a periodic equilibrium using the harmonic domain representation of LTI model states has been studied in the literature. This thesis presents an alternative method and a computationally efficient scheme for implementation of the developed method for extraction of linear time invariant (LTI) models from a helicopter nonlinear model in forward flight. The fidelity of the extracted LTI models is evaluated using response comparisons between the extracted LTI models and the nonlinear model in both time and frequency domains. Moreover, the fidelity of stability properties is studied through the eigenvalue and eigenvector comparisons between LTI and LTP models by making use of the Floquet Transition Matrix. For time domain evaluations, individual blade control (IBC) and On-Blade Control (OBC) inputs that have been tried in the literature for vibration and noise control studies are used. For frequency domain evaluations, frequency sweep inputs are used to obtain frequency responses of fixed system hub loads to a single blade IBC input. The evaluation results demonstrate the fidelity of the extracted LTI models, and thus, establish the validity of the LTI model extraction process for use in integrated flight and rotor control
Bayesian Modelling of fMRI Time Series
DEFF Research Database (Denmark)
Højen-Sørensen, Pedro; Hansen, Lars Kai; Rasmussen, Carl Edward
2000-01-01
We present a Hidden Markov Model (HMM) for inferring the hidden psychological state (or neural activity) during single trial fMRI activation experiments with blocked task paradigms. Inference is based on Bayesian methodology, using a combination of analytical and a variety of Markov Chain Monte Carlo (MCMC) sampling techniques. The advantage of this method is that detection of short time learning effects between repeated trials is possible since inference is based only on single trial experiments.
System reliability time-dependent models
International Nuclear Information System (INIS)
Debernardo, H.D.
1991-06-01
A probabilistic methodology for safety system technical specification evaluation was developed. The method for Surveillance Test Interval (S.T.I.) evaluation basically consists of optimizing the S.T.I. of the system's most important periodically tested components. For Allowed Outage Time (A.O.T.) calculations, the method uses system reliability time-dependent models (a computer code called FRANTIC III). A new approximation for computing system unavailability, called Independent Minimal Cut Sets (A.C.I.), was also developed. This approximation is better than the Rare Event Approximation (A.E.R.) and the extra computing cost is negligible. A.C.I. was joined to FRANTIC III to replace A.E.R. in future applications. The case study evaluations verified that this methodology provides a useful probabilistic assessment of surveillance test intervals and allowed outage times for many plant components. The studied system is a typical configuration of nuclear power plant safety systems (two-out-of-three logic). Because of the good results, these procedures will be used by the Argentine nuclear regulatory authorities in the evaluation of the technical specifications of the Atucha I and Embalse nuclear power plant safety systems. (Author) [es
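The S.T.I. optimization trade-off can be illustrated with the standard first-order unavailability model for a periodically tested standby component. This is a textbook sketch, not the FRANTIC III model itself: testing more often catches latent failures sooner (term λT/2 shrinks) but adds test downtime (term τ/T grows), so an optimal interval exists.

```python
def mean_unavailability(lam, T, tau):
    """First-order mean unavailability of a periodically tested standby
    component: random-failure term lam*T/2 plus test-downtime term tau/T.
    lam: standby failure rate (1/h), T: test interval (h), tau: test
    downtime per test (h). Valid when lam*T << 1."""
    return lam * T / 2.0 + tau / T

def optimal_interval(lam, tau):
    """Interval minimizing the expression above (set dU/dT = 0)."""
    return (2.0 * tau / lam) ** 0.5
```

For example, with an assumed λ = 1e-4/h and τ = 2 h the minimum falls at T = 200 h; a full analysis such as the one described above would instead optimize over the system model with all cut sets.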
Time domain series system definition and gear set reliability modeling
International Nuclear Information System (INIS)
Xie, Liyang; Wu, Ningxiang; Qian, Wenxue
2016-01-01
Time-dependent multi-configuration is a typical feature of mechanical systems such as gear trains and chain drives. As a series system, a gear train is distinct from a traditional series system, such as a chain, in load transmission path, system-component relationship, system functioning manner, as well as time-dependent system configuration. Firstly, the present paper defines the time-domain series system, for which the traditional series system reliability model is not adequate. Then, a system-specific reliability modeling technique is proposed for gear sets, including component (tooth) and subsystem (tooth-pair) load history description, material prior/posterior strength expression, time-dependent and system-specific load-strength interference analysis, as well as treatment of statistically dependent failure events. Consequently, several system reliability models are developed for gear sets with different tooth numbers in the scenario of tooth root material ultimate tensile strength failure. The application of the models is discussed in the last part, and the differences between the system-specific reliability model and the traditional series system reliability model are illustrated by means of several numerical examples. - Highlights: • A new type of series system, i.e. the time-domain multi-configuration series system, is defined, which is of great significance to reliability modeling. • A multi-level statistical analysis based reliability modeling method is presented for gear transmission systems. • Several system-specific reliability models are established for gear set reliability estimation. • The differences between the traditional series system reliability model and the new model are illustrated.
Anderssohn, J.; Motagh, M.; Walter, T. R.; Rosenau, M.; Kaufmann, H.; Oncken, O.
2009-12-01
The variable spatio-temporal scales of Earth's surface deformation in potentially hazardous volcanic areas pose a challenge for observation and assessment. Here we used Envisat data acquired in Wide Swath Mode (WSM) and Image Mode (IM) from ascending and descending geometry, respectively, to study time-dependent ground uplift at the Lazufre volcanic system in Chile and Argentina. A least-squares adjustment was performed on 65 IM interferograms that covered the time period of 2003-2008. We obtained a clear trend of uplift reaching 15-16 cm in this 5-year interval. Using a joint inversion of ascending and descending interferograms, we evaluated the geometry and time-dependent progression of a horizontally extended pressurized source beneath the Lazufre volcanic system. Our results hence indicate that an extended magma body at a depth between 10 and 15 km would account for most of the ground uplift. The maximum inflation reached up to ~40 cm during 2003-2008. The lateral propagation velocity of the intrusion was estimated to be nearly constant at 5-10 km/yr during the observation time, which has important implications for the physical understanding of magma intrusion processes.
Gradient-based model calibration with proxy-model assistance
Burrows, Wesley; Doherty, John
2016-02-01
Use of a proxy model in gradient-based calibration and uncertainty analysis of a complex groundwater model with large run times and problematic numerical behaviour is described. The methodology is general, and can be used with models of all types. The proxy model is based on a series of analytical functions that link all model outputs used in the calibration process to all parameters requiring estimation. In enforcing history-matching constraints during the calibration and post-calibration uncertainty analysis processes, the proxy model is run for the purposes of populating the Jacobian matrix, while the original model is run when testing parameter upgrades; the latter process is readily parallelized. Use of a proxy model in this fashion dramatically reduces the computational burden of complex model calibration and uncertainty analysis. At the same time, the effect of model numerical misbehaviour on calculation of local gradients is mitigated, this allowing access to the benefits of gradient-based analysis where lack of integrity in finite-difference derivatives calculation would otherwise have impeded such access. Construction of a proxy model, and its subsequent use in calibration of a complex model, and in analysing the uncertainties of predictions made by that model, is implemented in the PEST suite.
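The division of labour described above (cheap proxy runs fill the Jacobian, the expensive model only evaluates candidate parameter upgrades) can be illustrated with a toy Gauss-Newton loop. This is a conceptual sketch, not PEST: both "models" and the data are invented.

```python
import numpy as np

def expensive_model(p):            # stand-in for the long-running simulator
    return np.array([p[0] ** 2 + p[1], np.sin(p[0]) + 2.0 * p[1]])

def proxy_model(p):                # cheap analytical surrogate (sin x ~ x)
    return np.array([p[0] ** 2 + p[1], p[0] + 2.0 * p[1]])

def jacobian(f, p, h=1e-6):
    """Finite-difference Jacobian of f at p."""
    base = f(p)
    J = np.zeros((base.size, p.size))
    for i in range(p.size):
        step = p.copy()
        step[i] += h
        J[:, i] = (f(step) - base) / h
    return J

obs = np.array([2.0, 1.5])         # synthetic observations to match
p = np.array([0.5, 0.5])
for _ in range(20):
    J = jacobian(proxy_model, p)              # derivatives from the proxy
    r = obs - expensive_model(p)              # one expensive run per upgrade test
    p = p + np.linalg.lstsq(J, r, rcond=None)[0]

residual = obs - expensive_model(p)
```

Even though the proxy's derivatives are only approximate, the iteration converges because the residual is always evaluated with the original model, which is the point the abstract makes about mitigating numerical misbehaviour in finite-difference derivatives.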
Software reliability growth models with normal failure time distributions
International Nuclear Information System (INIS)
Okamura, Hiroyuki; Dohi, Tadashi; Osaki, Shunji
2013-01-01
This paper proposes software reliability growth models (SRGMs) in which the software failure time follows a normal distribution. The proposed model is mathematically tractable and fits software failure data well. In particular, we consider the parameter estimation algorithm for the SRGM with a normal distribution. The developed algorithm is based on an EM (expectation-maximization) algorithm and is quite simple to implement as a software application. Numerical experiments investigate the fitting ability of the SRGMs with normal distributions using 16 sets of failure time data collected in real software projects.
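As a simplified illustration (not the paper's EM algorithm): when failure times are fully observed, the normal parameters have closed-form maximum-likelihood estimates, and the SRGM's mean value function is the scaled normal CDF. The failure times and the assumed total fault count `N` below are invented.

```python
import math
import statistics

# Synthetic observed failure times (e.g. days since release)
failure_times = [12.0, 25.0, 31.0, 40.0, 46.0, 55.0, 63.0, 70.0]
N = 10  # assumed total number of latent faults (hypothetical)

mu = statistics.fmean(failure_times)
sigma = statistics.pstdev(failure_times)   # MLE uses the population form

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def mean_value(t):
    """Expected cumulative number of detected faults by time t: N * Phi((t-mu)/sigma)."""
    return N * normal_cdf((t - mu) / sigma)
```

The EM machinery in the paper is needed for the harder, realistic case where the number of undetected faults is latent; here, with everything observed, `mean_value(mu)` is simply `N/2`.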
Spatio-temporal modeling for real-time ozone forecasting.
Paci, Lucia; Gelfand, Alan E; Holland, David M
2013-05-01
The accurate assessment of exposure to ambient ozone concentrations is important for informing the public and pollution monitoring agencies about ozone levels that may lead to adverse health effects. High-resolution air quality information can offer significant health benefits by leading to improved environmental decisions. A practical challenge facing the U.S. Environmental Protection Agency (USEPA) is to provide real-time forecasting of current 8-hour average ozone exposure over the entire conterminous United States. Such real-time forecasting is now provided as spatial forecast maps of current 8-hour average ozone defined as the average of the previous four hours, current hour, and predictions for the next three hours. Current 8-hour average patterns are updated hourly throughout the day on the EPA-AIRNow web site. The contribution here is to show how we can substantially improve upon current real-time forecasting systems. To enable such forecasting, we introduce a downscaler fusion model based on first differences of real-time monitoring data and numerical model output. The model has a flexible coefficient structure and uses an efficient computational strategy to fit model parameters. Our hybrid computational strategy blends continuous background updated model fitting with real-time predictions. Model validation analyses show that we are achieving very accurate and precise ozone forecasts.
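The "current 8-hour average" definition quoted above (previous four observed hours, the current hour, and three forecast hours) reduces to a simple average; a minimal sketch, with the window sizes taken directly from the abstract:

```python
def current_8hr_average(past4, current, next3):
    """Mean of the previous four hours, the current hour, and the
    next three forecast hours (8 values total)."""
    assert len(past4) == 4 and len(next3) == 3
    values = list(past4) + [current] + list(next3)
    return sum(values) / 8.0

# Example with illustrative ozone concentrations in ppb
avg = current_8hr_average([40, 42, 44, 46], 48, [50, 52, 54])
```

The modeling contribution of the paper is in producing the three forecast hours; the averaging itself is this fixed window.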
Cluster Based Text Classification Model
DEFF Research Database (Denmark)
Nizamani, Sarwat; Memon, Nasrullah; Wiil, Uffe Kock
2011-01-01
We propose a cluster based classification model for suspicious email detection and other text classification tasks. The text classification tasks comprise many training examples that require a complex classification model. Using clusters for classification makes the model simpler and increases......, the classifier is trained on each cluster having reduced dimensionality and fewer examples. The experimental results show that the proposed model outperforms the existing classification models for the task of suspicious email detection and topic categorization on the Reuters-21578 and 20 Newsgroups...... datasets. Our model also outperforms A Decision Cluster Classification (ADCC) and the Decision Cluster Forest Classification (DCFC) models on the Reuters-21578 dataset....
Graph Model Based Indoor Tracking
DEFF Research Database (Denmark)
Jensen, Christian Søndergaard; Lu, Hua; Yang, Bin
2009-01-01
The tracking of the locations of moving objects in large indoor spaces is important, as it enables a range of applications related to, e.g., security and indoor navigation and guidance. This paper presents a graph model based approach to indoor tracking that offers a uniform data management...... infrastructure for different symbolic positioning technologies, e.g., Bluetooth and RFID. More specifically, the paper proposes a model of indoor space that comprises a base graph and mappings that represent the topology of indoor space at different levels. The resulting model can be used for one or several...... indoor positioning technologies. Focusing on RFID-based positioning, an RFID specific reader deployment graph model is built from the base graph model. This model is then used in several algorithms for constructing and refining trajectories from raw RFID readings. Empirical studies with implementations...
DEFF Research Database (Denmark)
Alshareef, Abdurrahman; Sarjoughian, Hessam S.; Zarrin, Bahram
2018-01-01
architecture and the UML concepts. In this paper, we further this work by grounding Activity-based DEVS modeling and developing a fully-fledged modeling engine to demonstrate applicability. We also detail the relevant aspects of the created metamodel in terms of modeling and simulation. A significant number...
The TIME Model: Time to Make a Change to Integrate Technology
Directory of Open Access Journals (Sweden)
Debby Mitchell
2004-06-01
Full Text Available The purpose of this article is to report the successful creation and implementation of an instructional model designed to assist educators in infusing technology into the curriculum while at the same time creating opportunities for faculty to learn and become more proficient and successful at integrating technology into their own classroom curriculum. The model was successfully tested and implemented with faculty, inservice, and preservice teachers at the University of Central Florida (UCF). Faculty, inservice, and preservice teachers were successfully trained to integrate technology using a theme-based curriculum with an instructional model called the TIME model, which consists of twelve elements: Vision, Incentives, Personalization, Awareness, Learning Communities, Action Plan, Research, Development of Modules, Skills, Implementation, Evidence of Change, and Evaluation/Reflection.
Directory of Open Access Journals (Sweden)
Yuancheng Sun
2016-01-01
Full Text Available For the non-Gaussian singular time-delayed stochastic distribution control (SDC) system with unknown external disturbance, where the output probability density function (PDF) is approximated by the rational square-root B-spline basis function, a robust fault diagnosis and fault-tolerant control algorithm is presented. A full-order observer is constructed to estimate the exogenous disturbance, and an adaptive observer is used to estimate the fault size. A fault-tolerant tracking controller is designed using feedback of the distribution tracking error, fault, and disturbance estimates, so that the post-fault output PDF still tracks the desired distribution. Finally, a simulation example illustrates the effectiveness of the proposed algorithms, and encouraging results have been obtained.
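The paper's observers act on the B-spline weight dynamics of the output PDF; as a much simpler analogue, the sketch below shows the generic full-order (Luenberger) observer idea on an assumed linear discrete-time system. All matrices here are invented for illustration.

```python
import numpy as np

# Assumed system: x_{k+1} = A x_k, measured output y_k = C x_k
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.5],
              [0.3]])   # observer gain, chosen so that A - L C is stable

x = np.array([1.0, -1.0])   # true (unknown) initial state
xh = np.zeros(2)            # observer state, initialized with no knowledge

for _ in range(200):
    y = C @ x                                # measurement
    xh = A @ xh + (L @ (y - C @ xh)).ravel() # correct estimate with output error
    x = A @ x                                # true system evolves

error = np.linalg.norm(x - xh)
```

The estimation error obeys e_{k+1} = (A - LC) e_k, so a stable A - LC drives the estimate to the true state; the same feedback-of-output-error principle underlies the disturbance and fault observers in the abstract.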
Development of constitutive model for composites exhibiting time dependent properties
International Nuclear Information System (INIS)
Pupure, L; Joffe, R; Varna, J; Nyström, B
2013-01-01
Regenerated cellulose fibres and their composites exhibit highly nonlinear behaviour. The mechanical response of these materials can be successfully described by the model developed by Schapery for time-dependent materials. However, this model requires input parameters that are determined experimentally via a large number of time-consuming tests on the studied composite material. If, for example, the volume fraction of fibres is changed, we have a different material, and a new series of experiments on this new material is required. Therefore, the ultimate objective of our studies is to develop a model which determines the composite behaviour based on the behaviour of the constituents of the composite. This paper gives an overview of the problems and difficulties associated with the development, implementation and verification of such a model.
Business Process Modelling based on Petri nets
Directory of Open Access Journals (Sweden)
Qin Jianglong
2017-01-01
Full Text Available Business process modelling is the way business processes are expressed. Business process modelling is the foundation of business process analysis, reengineering, reorganization and optimization. It can not only help enterprises achieve internal information system integration and reuse, but also help them collaborate with external partners. Based on the prototype Petri net, this paper adds time and cost factors to form an extended generalized stochastic Petri net, which is a formal description of the business process. A semi-formalized business process modelling algorithm based on Petri nets is proposed. Finally, a case from a logistics company shows that the modelling algorithm is correct and effective.
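A minimal place/transition Petri net with time and cost annotations, in the spirit of the extended net described above, can be sketched as follows. The class and the toy order-handling process are invented for illustration; the paper's generalized stochastic net is considerably richer.

```python
class PetriNet:
    """Toy Petri net: places hold tokens; transitions consume and produce
    tokens and carry time/cost annotations."""

    def __init__(self, marking):
        self.marking = dict(marking)   # place -> token count
        self.transitions = {}          # name -> (inputs, outputs, time, cost)

    def add_transition(self, name, inputs, outputs, time=0.0, cost=0.0):
        self.transitions[name] = (inputs, outputs, time, cost)

    def enabled(self, name):
        inputs, *_ = self.transitions[name]
        return all(self.marking.get(p, 0) >= n for p, n in inputs.items())

    def fire(self, name):
        inputs, outputs, time, cost = self.transitions[name]
        if not self.enabled(name):
            raise RuntimeError(f"transition {name} not enabled")
        for p, n in inputs.items():
            self.marking[p] -= n
        for p, n in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + n
        return time, cost

# Hypothetical fragment of an order-handling process: receive -> pack
net = PetriNet({"received": 1})
net.add_transition("pack", {"received": 1}, {"packed": 1}, time=2.0, cost=5.0)
```

Summing the time/cost values returned by `fire` along a firing sequence gives the kind of process-level time and cost analysis the extension is meant to enable.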
Miyamoto, Yoshiyuki; Zhang, Hong; Cheng, Xinlu; Rubio, Angel
2017-09-01
We use time-dependent density functional theory to study laser-pulse induced decomposition of H2O molecules above the two-dimensional (2D) materials graphene, hexagonal boron nitride, and graphitic carbon nitride. We examine femtosecond laser pulses with a full width at half maximum of 10 or 20 fs for the laser-field intensity and wavelengths of 800 or 400 nm, varying the laser-field strength from 5 to 9 V/Å, with the corresponding fluence per pulse reaching up to 10.7 J/cm². For a H2O molecule above the graphitic sheets, the threshold for laser-field H2O decomposition is reduced by more than 20% compared with that of an isolated H2O molecule. We also show that hole doping enhances the water adsorption energy above graphene. The present results indicate that the graphitic materials should support laser-induced chemistry and that other 2D materials that can enhance laser-induced H2O decomposition should be investigated.
Model Passengers’ Travel Time for Conventional Bus Stop
Directory of Open Access Journals (Sweden)
Guangzhao Xin
2014-01-01
Full Text Available A limited number of berths can cause a subsequent bus to stop upstream of a bus stop when all berths are occupied. When this occurs, passengers waiting on the platform usually prefer walking to the stopped bus, which leads to additional walking time before boarding. Passengers' travel time consumed at a bus stop is therefore divided into waiting time, additional walking time, and boarding time. This paper proposes a mathematical model for analyzing passengers' travel time at a conventional bus stop based on the theory of stochastic service systems. Field-measured and simulated data were used to demonstrate the effectiveness of the proposed model. The results show that short headways can reduce passengers' waiting time at a bus stop. Meanwhile, the theoretical analysis explains the inefficiency of bus stops with more than three berths from the perspective of passengers' additional walking time: additional walking time increases sharply when the number of berths at a bus stop exceeds the threshold of three.
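The three-part decomposition above can be written down directly. The sketch below is a deterministic simplification of the paper's stochastic-service-system model: it assumes uniform passenger arrivals (mean wait of half a headway), and all numerical inputs are illustrative.

```python
def passenger_stop_time(headway_min, walk_dist_m, walk_speed_ms,
                        boarding_per_pax_s, pax_ahead):
    """Travel time at the stop = waiting + additional walking + boarding, in seconds."""
    waiting_s = (headway_min * 60.0) / 2.0    # mean wait under uniform arrivals
    walking_s = walk_dist_m / walk_speed_ms   # extra walk to an upstream-stopped bus
    boarding_s = boarding_per_pax_s * (pax_ahead + 1)
    return waiting_s + walking_s + boarding_s

# 10-min headway, 15 m extra walk at 1.5 m/s, 2 s boarding, 4 passengers ahead
total = passenger_stop_time(10, 15, 1.5, 2.0, 4)
```

The walking term is the one that grows with the number of berths, which is the mechanism behind the three-berth threshold noted in the abstract.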
Modeling Coastal Vulnerability through Space and Time.
Hopper, Thomas; Meixler, Marcia S
2016-01-01
Coastal ecosystems experience a wide range of stressors including wave forces, storm surge, sea-level rise, and anthropogenic modification and are thus vulnerable to erosion. Urban coastal ecosystems are especially important due to the large populations these limited ecosystems serve. However, few studies have addressed the issue of urban coastal vulnerability at the landscape scale with spatial data that are finely resolved. The purpose of this study was to model and map coastal vulnerability and the role of natural habitats in reducing vulnerability in Jamaica Bay, New York, in terms of nine coastal vulnerability metrics (relief, wave exposure, geomorphology, natural habitats, exposure, exposure with no habitat, habitat role, erodible shoreline, and surge) under past (1609), current (2015), and future (2080) scenarios using InVEST 3.2.0. We analyzed vulnerability results both spatially and across all time periods, by stakeholder (ownership) and by distance to damage from Hurricane Sandy. We found significant differences in vulnerability metrics between past, current and future scenarios for all nine metrics except relief and wave exposure. The marsh islands in the center of the bay are currently vulnerable. In the future, these islands will likely be inundated, placing additional areas of the shoreline increasingly at risk. Significant differences in vulnerability exist between stakeholders; the Breezy Point Cooperative and Gateway National Recreation Area had the largest erodible shoreline segments. Significant correlations exist for all vulnerability (exposure/surge) and storm damage combinations except for exposure and distance to artificial debris. Coastal protective features, ranging from storm surge barriers and levees to natural features (e.g. wetlands), have been promoted to decrease future flood risk to communities in coastal areas around the world. Our methods of combining coastal vulnerability results with additional data and across multiple time
Forecast of useful energy for the TIMES-Norway model
Energy Technology Data Exchange (ETDEWEB)
Rosenberg, Eva
2012-07-25
A regional forecast of useful energy demand in seven Norwegian regions is calculated based on earlier work with a national forecast. This forecast will be input to the energy system model TIMES-Norway, and analyses will result in forecasts of the use of different energy carriers under varying external conditions (not included in this report). This report describes the methodology used and the resulting forecast of useful energy. It is based on information on the long-term development of the economy from the Ministry of Finance, population projections from Statistics Norway, and several other studies. The definition of a forecast of useful energy demand is not absolute, but depends on the purpose. One has to be careful not to include elements that are part of the energy system model, such as energy efficiency measures. The forecast presented here includes the influence of new building regulations, the EU prohibition on the production of incandescent light bulbs, etc. Other energy efficiency measures, such as energy management, heat pumps and tightening of leaks, are modelled as technologies to invest in and are included in the TIMES-Norway model. Elasticity between different energy carriers is handled by the TIMES-Norway model, and some elasticity is also included as the possibility to invest in energy efficiency measures. The forecast results in an increase of total useful energy of 18% from 2006 to 2050. Growth is expected to be highest in the regions South and East. Industry remains at a constant level in the base case, and increased or reduced energy demand is analysed as different scenarios with the TIMES-Norway model. The most important driver is population growth; together with the assumptions made, it results in increased useful energy demand in the household and service sectors of 25% and 57%, respectively.
Modeling highway travel time distribution with conditional probability models
Energy Technology Data Exchange (ETDEWEB)
Oliveira Neto, Francisco Moraes [ORNL; Chin, Shih-Miao [ORNL; Hwang, Ho-Ling [ORNL; Han, Lee [University of Tennessee, Knoxville (UTK)
2014-01-01
Under the sponsorship of the Federal Highway Administration's Office of Freight Management and Operations, the American Transportation Research Institute (ATRI) has developed performance measures through the Freight Performance Measures (FPM) initiative. Under this program, travel speed information is derived from data collected using wireless-based global positioning systems. These telemetric data systems are subscribed to and used by the trucking industry as an operations management tool. More than one telemetric operator submits data dumps to ATRI on a regular basis. Each data transmission contains a truck location, its travel time, and a clock time/date stamp. Data from the FPM program provide a unique opportunity for studying upstream-downstream speed distributions at different locations, as well as at different times of day and days of the week. This research focuses on the stochastic nature of successive link travel speed data on the continental United States Interstate network. Specifically, a method to estimate route probability distributions of travel time is proposed. This method uses the concepts of convolution of probability distributions and bivariate, link-to-link, conditional probability to estimate the expected distributions for the route travel time. A major contribution of this study is the consideration of speed correlation between upstream and downstream contiguous Interstate segments through conditional probability. The established conditional probability distributions between successive segments can be used to provide travel time reliability measures. This study also suggests an adaptive method for calculating and updating route travel time distributions as new data or information are added. This methodology can be useful for estimating performance measures as required by the recent Moving Ahead for Progress in the 21st Century Act (MAP-21).
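The convolution step mentioned above has a compact independence-baseline form: the route travel-time distribution is the convolution of the link distributions. The sketch below uses invented discrete distributions on 1-minute bins; the paper's contribution goes further by conditioning the downstream link on the upstream one instead of assuming independence.

```python
import numpy as np

# Hypothetical discrete travel-time distributions for two successive links
link_a = np.array([0.2, 0.5, 0.3])   # P(travel time = 0, 1, 2 minutes)
link_b = np.array([0.1, 0.6, 0.3])

# Independence baseline: route distribution = convolution of the links,
# giving P(total = 0 .. 4 minutes)
route = np.convolve(link_a, link_b)
```

Replacing `link_b` with a conditional distribution P(B | A) per upstream bin, and mixing accordingly, yields the correlated version the study proposes.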
Event-Based Conceptual Modeling
DEFF Research Database (Denmark)
Bækgaard, Lars
2009-01-01
The purpose of the paper is to obtain insight into and provide practical advice for event-based conceptual modeling. We analyze a set of event concepts and use the results to formulate a conceptual event model that is used to identify guidelines for creation of dynamic process models and static...... information models. We characterize events as short-duration processes that have participants, consequences, and properties, and that may be modeled in terms of information structures. The conceptual event model is used to characterize a variety of event concepts and it is used to illustrate how events can...... be used to integrate dynamic modeling of processes and static modeling of information structures. The results are unique in the sense that no other general event concept has been used to unify a similar broad variety of seemingly incompatible event concepts. The general event concept can be used...
Market volatility modeling for short time window
de Mattos Neto, Paulo S. G.; Silva, David A.; Ferreira, Tiago A. E.; Cavalcanti, George D. C.
2011-10-01
The gain or loss of an investment can be defined by the movement of the market. This movement can be estimated by the difference between the magnitudes of two stock prices in distinct periods, and this difference can be used to calculate the volatility of the markets. The volatility characterizes the sensitivity of a market change in the world economy. Traditionally, the probability density function (pdf) of the movement of the markets is analyzed by using power laws. The contributions of this work are twofold: (i) an analysis of the volatility dynamics of world market indexes is performed using a two-year time window. In this case, the experiments show that the pdf of the volatility is better fitted by an exponential function than by power laws over the entire range of the pdf; (ii) we then investigate a relationship between the volatility of the markets and the coefficient of the exponential function based on the Maxwell-Boltzmann ideal gas theory. The results show an inverse relationship between the volatility and the coefficient of the exponential function. This information can be used, for example, to predict the future behavior of the markets or to cluster the markets in order to analyze economic patterns.
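Fitting an exponential law to an empirical volatility pdf, as described above, can be sketched with a log-linear least-squares fit on histogram bins. The data here are synthetic draws from a known exponential, so the recovered coefficient should match the generating scale; real index data would of course behave less cleanly.

```python
import numpy as np

# Synthetic "volatility" sample from an exponential with scale 0.5,
# i.e. density p(v) = 2 * exp(-2 v)
rng = np.random.default_rng(1)
volatility = rng.exponential(scale=0.5, size=10_000)

counts, edges = np.histogram(volatility, bins=50)
width = edges[1] - edges[0]
centers = 0.5 * (edges[:-1] + edges[1:])
density = counts / (counts.sum() * width)

# Fit log-density vs. bin center, skipping sparsely populated tail bins
mask = counts >= 10
slope, intercept = np.polyfit(centers[mask], np.log(density[mask]), 1)
coefficient = -slope   # exponential decay coefficient, ~ 1/scale = 2
```

The inverse relationship reported in the abstract corresponds to this `coefficient` shrinking as the market's volatility scale grows.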
DEFF Research Database (Denmark)
Bækgaard, Lars
2004-01-01
We present and discuss a modeling approach that supports event-based modeling of information and activity in information systems. Interacting human actors and IT-actors may carry out such activity. We use events to create meaningful relations between information structures and the related...
Stability Analysis and H∞ Model Reduction for Switched Discrete-Time Time-Delay Systems
Directory of Open Access Journals (Sweden)
Zheng-Fan Liu
2014-01-01
Full Text Available This paper is concerned with the problem of exponential stability and H∞ model reduction of a class of switched discrete-time systems with state time-varying delay. Some subsystems can be unstable. Based on the average dwell time technique and Lyapunov-Krasovskii functional (LKF approach, sufficient conditions for exponential stability with H∞ performance of such systems are derived in terms of linear matrix inequalities (LMIs. For the high-order systems, sufficient conditions for the existence of reduced-order model are derived in terms of LMIs. Moreover, the error system is guaranteed to be exponentially stable and an H∞ error performance is guaranteed. Numerical examples are also given to demonstrate the effectiveness and reduced conservatism of the obtained results.
Incident Duration Modeling Using Flexible Parametric Hazard-Based Models
Directory of Open Access Journals (Sweden)
Ruimin Li
2014-01-01
Full Text Available Assessing and prioritizing the duration and effects of traffic incidents on major roads present significant challenges for road network managers. This study examines the effect of numerous factors associated with various types of incidents on their duration and proposes an incident duration prediction model. Several parametric accelerated failure time hazard-based models were examined, including Weibull, log-logistic, log-normal, and generalized gamma, as well as all models with gamma heterogeneity and flexible parametric hazard-based models with degrees of freedom ranging from one to ten, by analyzing a traffic incident dataset obtained from the Incident Reporting and Dispatching System in Beijing in 2008. Results show that different factors significantly affect different incident time phases, whose best-fitting distributions were diverse. Given the best hazard-based models of each incident time phase, the prediction result can be reasonable for most incidents. The results of this study can aid traffic incident management agencies not only in implementing strategies that would reduce incident duration, and thus reduce congestion, secondary incidents, and the associated human and economic losses, but also in effectively predicting incident duration time.
Chaos Time Series Prediction Based on Membrane Optimization Algorithms
Directory of Open Access Journals (Sweden)
Meng Li
2015-01-01
Full Text Available This paper puts forward a prediction model for chaotic time series based on a membrane computing optimization algorithm; the model simultaneously optimizes the phase-space reconstruction parameters (τ,m) and the least squares support vector machine (LS-SVM) parameters (γ,σ) using the membrane computing optimization algorithm. Accurately predicting the trends of parameters in the electromagnetic environment is an important basis for spectrum management and can help decision makers adopt optimal actions. The model presented in this paper is then used to forecast the band occupancy rate of the frequency modulation (FM) broadcasting band and the interphone band. To show the applicability and superiority of the proposed model, it is compared with conventional similar models. The experimental results show that, for both single-step and multistep prediction, the proposed model performs best on three error measures, namely, normalized mean square error (NMSE), root mean square error (RMSE), and mean absolute percentage error (MAPE).
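The phase-space reconstruction step named above, with delay τ and embedding dimension m, maps a scalar series to m-dimensional delay vectors that the predictor is trained on. A minimal sketch (the series here is a plain sinusoid, just to exercise the shapes):

```python
import numpy as np

def delay_embed(series, tau, m):
    """Delay embedding: row i is [x(i), x(i+tau), ..., x(i+(m-1)*tau)]."""
    series = np.asarray(series)
    n = len(series) - (m - 1) * tau
    return np.array([series[i:i + (m - 1) * tau + 1:tau] for i in range(n)])

x = np.sin(0.3 * np.arange(100))
X = delay_embed(x, tau=2, m=3)   # 96 delay vectors of dimension 3
```

In the paper's setup, τ and m themselves are decision variables of the membrane optimization, alongside the LS-SVM hyperparameters (γ,σ).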
SAT-based verification for timed component connectors
S. Kemper (Stephanie)
2011-01-01
Component-based software construction relies on suitable models underlying components, and in particular the coordinators which orchestrate component behaviour. Verifying correctness and safety of such systems amounts to model checking the underlying system model. The model checking...
Magnetic-time model for seed germination
African Journals Online (AJOL)
On this basis, a new germination model, called the magnetic time model, is developed; it is incorporated into the hydrothermal model and hence named the hydrothermal magnetic time model, which is proposed to account for the effect of magnetic fields of different intensities on plants. The magnetic time constant ΘB is ...