Maximum likelihood estimation for integrated diffusion processes
DEFF Research Database (Denmark)
Baltazar-Larios, Fernando; Sørensen, Michael
We propose a method for obtaining maximum likelihood estimates of parameters in diffusion models when the data are a discrete-time sample of the integral of the process, while no direct observations of the process itself are available. The data are, moreover, assumed to be contaminated by measurement errors. Integrated volatility is an example of this type of observations. Another example is ice-core data on oxygen isotopes used to investigate paleo-temperatures. The data can be viewed as incomplete observations of a model with a tractable likelihood function. Therefore we propose a simulated EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...
Maximum Temperature Detection System for Integrated Circuits
Frankiewicz, Maciej; Kos, Andrzej
2015-03-01
The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path and a digital part designed in VHDL. Analogue parts of the circuit were designed with a full-custom technique. The system is part of a temperature-controlled oscillator circuit, a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated to thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.
Multiperiod Maximum Loss is time unit invariant.
Kovacevic, Raimund M; Breuer, Thomas
2016-01-01
Time unit invariance is introduced as an additional requirement for multiperiod risk measures: for a constant portfolio under an i.i.d. risk factor process, the multiperiod risk should equal the one period risk of the aggregated loss, for an appropriate choice of parameters and independent of the portfolio and its distribution. Multiperiod Maximum Loss over a sequence of Kullback-Leibler balls is time unit invariant. This is also the case for the entropic risk measure. On the other hand, multiperiod Value at Risk and multiperiod Expected Shortfall are not time unit invariant.
Analogue of Pontryagin's maximum principle for multiple integrals minimization problems
Mikhail, Zelikin
2016-01-01
A theorem analogous to Pontryagin's maximum principle for multiple integrals is proved. Unlike the usual maximum principle, the maximum should be taken not over all matrices, but only over matrices of rank one. Examples are given.
Maximum likelihood window for time delay estimation
International Nuclear Information System (INIS)
Lee, Young Sup; Yoon, Dong Jin; Kim, Chi Yup
2004-01-01
Time delay estimation for the detection of leak location in underground pipelines is critically important. Because the exact leak location depends upon the precision of the time delay between sensor signals due to leak noise and the speed of elastic waves, the estimation of time delay has been one of the key issues in leak locating with the time arrival difference method. In this study, an optimal maximum likelihood window is considered to obtain a better estimation of the time delay. The method has been verified in experiments, where it provided much clearer and more precise peaks in cross-correlation functions of leak signals. The leak location error has been less than 1% of the distance between sensors; for example, the error was not greater than 3 m for 300 m long underground pipelines. Apart from the experiments, an intensive theoretical analysis in terms of signal processing is described. The improved leak locating with the suggested method is due to the windowing effect in the frequency domain, which weights the significant frequencies.
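The time arrival difference method above reduces to reading off the lag of the cross-correlation peak between two sensor signals. A minimal sketch of that baseline step (plain cross-correlation, without the maximum likelihood window the paper proposes; all signal names and parameters are illustrative):

```python
import numpy as np

def estimate_delay(x, y, fs):
    """Estimate the delay of y relative to x (in seconds) from the
    lag of the cross-correlation peak."""
    x = x - np.mean(x)
    y = y - np.mean(y)
    corr = np.correlate(y, x, mode="full")       # lags -(N-1) .. N-1
    lags = np.arange(-len(x) + 1, len(y))
    return lags[np.argmax(corr)] / fs

# Synthetic leak-like pulse observed by two sensors, 25 samples apart
fs = 1000.0
t = np.arange(1024) / fs
rng = np.random.default_rng(0)
pulse = np.exp(-((t - 0.2) ** 2) / 1e-4)
x = pulse + 0.05 * rng.standard_normal(t.size)
y = np.roll(pulse, 25) + 0.05 * rng.standard_normal(t.size)
print(estimate_delay(x, y, fs))  # close to 0.025 s
```

With the elastic wave speed known, the leak position then follows from the geometry of the sensor pair, as in the abstract.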
Extending the maximum operation time of the MNSR reactor.
Dawahra, S; Khattab, K; Saba, G
2016-09-01
An effective modification to extend the maximum operation time of the Miniature Neutron Source Reactor (MNSR) and thereby enhance the utilization of the reactor has been tested using the MCNP4C code. The modification consisted of manually inserting into each of the reactor's inner irradiation tubes a chain of three connected polyethylene containers filled with water. The total height of the chain was 11.5 cm. The replacement of the actual cadmium absorber with a B(10) absorber was needed as well. The rest of the core structure materials and dimensions remained unchanged. A 3-D neutronic model with the new modifications was developed to compare the neutronic parameters of the old and modified cores. The excess reactivities (ρex) of the old and modified cores were 3.954 and 6.241 mk, the maximum reactor operation times were 428 and 1025 min, and the safety reactivity factors were 1.654 and 1.595, respectively. Therefore, a 139% increase in the maximum reactor operation time was observed for the modified core. This increase enhances the utilization of the MNSR reactor for long-time irradiation of unknown samples using the NAA technique and increases the amount of radioisotope production in the reactor. Copyright © 2016 Elsevier Ltd. All rights reserved.
Maximum time-dependent space-charge limited diode currents
Energy Technology Data Exchange (ETDEWEB)
Griswold, M. E. [Tri Alpha Energy, Inc., Rancho Santa Margarita, California 92688 (United States); Fisch, N. J. [Princeton Plasma Physics Laboratory, Princeton University, Princeton, New Jersey 08543 (United States)
2016-01-15
Recent papers claim that a one dimensional (1D) diode with a time-varying voltage drop can transmit current densities that exceed the Child-Langmuir (CL) limit on average, apparently contradicting a previous conjecture that there is a hard limit on the average current density across any 1D diode, as t → ∞, that is equal to the CL limit. However, these claims rest on a different definition of the CL limit, namely, a comparison between the time-averaged diode current and the adiabatic average of the expression for the stationary CL limit. If the current were considered as a function of the maximum applied voltage, rather than the average applied voltage, then the original conjecture would not have been refuted.
Linear Time Local Approximation Algorithm for Maximum Stable Marriage
Directory of Open Access Journals (Sweden)
Zoltán Király
2013-08-01
We consider a two-sided market under incomplete preference lists with ties, where the goal is to find a maximum size stable matching. The problem is APX-hard, and a 3/2-approximation was given by McDermid [1]. This algorithm has a non-linear running time and, more importantly, needs global knowledge of all preference lists. We present a very natural, economically reasonable, local, linear time algorithm with the same ratio, using some ideas of Paluch [2]. In this algorithm every person makes decisions using only their own list, and some information asked from members of these lists (as in the case of the famous algorithm of Gale and Shapley). Some consequences for the Hospitals/Residents problem are also discussed.
On the maximum-entropy/autoregressive modeling of time series
Chao, B. F.
1984-01-01
The autoregressive (AR) model of a random process is interpreted in the light of Prony's relation, which relates a complex conjugate pair of poles of the AR process in the z-plane (or the z domain) on the one hand, to the complex frequency of one complex harmonic function in the time domain on the other. Thus the AR model of a time series is one that models the time series as a linear combination of complex harmonic functions, which include pure sinusoids and real exponentials as special cases. An AR model is completely determined by its z-domain pole configuration. The maximum-entropy/autoregressive (ME/AR) spectrum, defined on the unit circle of the z-plane (or the frequency domain), is merely a convenient, but ambiguous, visual representation. It is asserted that the position and shape of a spectral peak are determined by the corresponding complex frequency, and that the height of the spectral peak contains little information about the complex amplitude of the complex harmonic functions.
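As a concrete illustration of the AR/ME spectrum described above, the following sketch fits AR coefficients by solving the Yule-Walker equations and evaluates the resulting spectrum on the unit circle (a generic textbook recipe, not the paper's own procedure; all names are illustrative):

```python
import numpy as np

def yule_walker_ar(x, order):
    """Fit AR coefficients a_1..a_p by solving the Yule-Walker equations."""
    x = x - np.mean(x)
    n = len(x)
    r = np.array([x[:n - k] @ x[k:] for k in range(order + 1)]) / n
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:])     # model: x[t] ~ sum_k a_k x[t-k]

def ar_spectrum(a, freqs, sigma2=1.0):
    """ME/AR spectrum evaluated on the unit circle of the z-plane."""
    w = np.exp(-2j * np.pi * freqs)
    denom = 1.0 - sum(ak * w ** (k + 1) for k, ak in enumerate(a))
    return sigma2 / np.abs(denom) ** 2

rng = np.random.default_rng(1)
t = np.arange(2048)
x = np.sin(2 * np.pi * 0.1 * t) + 0.1 * rng.standard_normal(t.size)
a = yule_walker_ar(x, order=4)
freqs = np.linspace(0.0, 0.5, 2001)
peak = freqs[np.argmax(ar_spectrum(a, freqs))]
print(peak)  # near the sinusoid frequency 0.1
```

The spectral peak sits at the pole angle, consistent with the abstract's point that the pole configuration, not the spectrum itself, carries the frequency information.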
Time at which the maximum of a random acceleration process is reached
International Nuclear Information System (INIS)
Majumdar, Satya N; Rosso, Alberto; Zoia, Andrea
2010-01-01
We study the random acceleration model, which is perhaps one of the simplest, yet nontrivial, non-Markov stochastic processes, and is key to many applications. For this non-Markov process, we present exact analytical results for the probability density p(t_m|T) of the time t_m at which the process reaches its maximum, within a fixed time interval [0, T]. We study two different boundary conditions, which correspond to the process representing respectively (i) the integral of a Brownian bridge and (ii) the integral of a free Brownian motion. Our analytical results are also verified by numerical simulations.
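A quick Monte Carlo sketch of case (ii), the integral of a free Brownian motion, shows how samples of t_m can be generated (discretization parameters are arbitrary; this only illustrates the quantity studied, not the paper's exact density):

```python
import numpy as np

rng = np.random.default_rng(2)
T, n, trials = 1.0, 500, 2000
dt = T / n
tm = np.empty(trials)
for i in range(trials):
    v = np.cumsum(rng.standard_normal(n)) * np.sqrt(dt)  # velocity: free Brownian motion
    pos = np.cumsum(v) * dt                              # position: its running integral
    tm[i] = np.argmax(pos) * dt                          # time of the maximum on [0, T]
print(tm.mean(), tm.std())
```

A histogram of `tm` approximates p(t_m|T) and can be compared against the analytical result.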
50 CFR 259.34 - Minimum and maximum deposits; maximum time to deposit.
2010-10-01
... B objective. A time longer than 10 years, either by original scheduling or by subsequent extension... OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE AID TO FISHERIES CAPITAL CONSTRUCTION FUND...) Minimum annual deposit. The minimum annual (based on each party's taxable year) deposit required by the...
Optimal control of a double integrator a primer on maximum principle
Locatelli, Arturo
2017-01-01
This book provides an introductory yet rigorous treatment of Pontryagin’s Maximum Principle and its application to optimal control problems when simple and complex constraints act on state and control variables, the two classes of variable in such problems. The achievements resulting from first-order variational methods are illustrated with reference to a large number of problems that, almost universally, relate to a particular second-order, linear and time-invariant dynamical system, referred to as the double integrator. The book is ideal for students who have some knowledge of the basics of system and control theory and possess the calculus background typically taught in undergraduate curricula in engineering. Optimal control theory, of which the Maximum Principle must be considered a cornerstone, has been very popular ever since the late 1950s. However, the possibly excessive initial enthusiasm engendered by its perceived capability to solve any kind of problem gave way to its equally unjustified rejecti...
DEFF Research Database (Denmark)
Val, Maria Rosa Rovira; Lehmann, Martin; Zinenko, Anna
From late last century there has been a great development of international standards and tools on environment, sustainability and corporate responsibility issues. This has been along with the globalization of economy and politics, as well as a shift in the social responsibilities of the private vis...... procedures, such as cases of for example ISO integrated management systems, mutual equivalences recognition of Global Compact-GRI-ISO26000, or the case of IIRC initiative to develop integrated reporting on an organization’s Financial, Environmental, Social and Governance performance. This paper focuses......-a-vis the public sectors. Internationally, organisations have implemented a collection of these standards to be in line with such development and to obtain or keep their licence to operate globally. After two decades of development and maturation, the scenario is now different: (i) the economic context has changed...
SNS Diagnostics Timing Integration
Long, Cary D; Murphy, Darryl J; Pogge, James; Purcell, John D; Sundaram, Madhan
2005-01-01
The Spallation Neutron Source (SNS) accelerator systems will deliver a 1.0 GeV, 1.4 MW proton beam to a liquid mercury target for neutron scattering research. The accelerator complex consists of a 1 GeV linear accelerator, an accumulator ring and associated transport lines. The SNS diagnostics platform is PC-based running Windows XP Embedded for its OS and LabVIEW as its programming language. Coordinating timing among the various diagnostics instruments with the generation of the beam pulse is a challenging task that we have chosen to divide into three phases. First, timing was derived from VME based systems. In the second phase, described in this paper, timing pulses are generated by an in house designed PCI timing card installed in ten diagnostics PCs. Using fan-out modules, enough triggers were generated for all instruments. This paper describes how the Timing NAD (Network Attached Device) was rapidly developed using our NAD template, LabVIEW's PCI driver wizard, and LabVIEW Channel Access library. The NAD...
DEFF Research Database (Denmark)
Ding, Tao; Kou, Yu; Yang, Yongheng
2017-01-01
As photovoltaic (PV) integration increases in distribution systems, investigating the maximum allowable PV integration capacity for a district distribution system becomes necessary in the planning phase. An optimization model is thus proposed to evaluate the maximum PV integration capacity while... However, the intermittency of solar PV energy (e.g., due to passing clouds) may affect the PV generation in the district distribution network. To address this issue, the voltage magnitude constraints under cloud shading conditions should be taken into account in the optimization model, which can...
Capelli, Riccardo; Tiana, Guido; Camilloni, Carlo
2018-05-14
Inferential methods can be used to integrate experimental information and molecular simulations. The maximum entropy principle provides a framework for using equilibrium experimental data, and it has been shown that replica-averaged simulations, restrained using a static potential, are a practical and powerful implementation of this principle. Here we show that replica-averaged simulations restrained using a time-dependent potential are equivalent to the principle of maximum caliber, the dynamic version of the principle of maximum entropy, and thus may allow us to integrate time-resolved data into molecular dynamics simulations. We provide an analytical proof of the equivalence as well as a computational validation making use of simple models and synthetic data. Some limitations and possible solutions are also discussed.
Time integration of tensor trains
Lubich, Christian; Oseledets, Ivan; Vandereycken, Bart
2014-01-01
A robust and efficient time integrator for dynamical tensor approximation in the tensor train or matrix product state format is presented. The method is based on splitting the projector onto the tangent space of the tensor manifold. The algorithm can be used for updating time-dependent tensors in the given data-sparse tensor train / matrix product state format and for computing an approximate solution to high-dimensional tensor differential equations within this data-sparse format. The formul...
Statistics of the first passage time of Brownian motion conditioned by maximum value or area
International Nuclear Information System (INIS)
Kearney, Michael J; Majumdar, Satya N
2014-01-01
We derive the moments of the first passage time for Brownian motion conditioned by either the maximum value or the area swept out by the motion. These quantities are the natural counterparts to the moments of the maximum value and area of Brownian excursions of fixed duration, which we also derive for completeness within the same mathematical framework. Various applications are indicated. (paper)
Computing the Maximum Detour of a Plane Graph in Subquadratic Time
DEFF Research Database (Denmark)
Wulff-Nilsen, Christian
Let G be a plane graph where each edge is a line segment. We consider the problem of computing the maximum detour of G, defined as the maximum over all pairs of distinct points p and q of G of the ratio between the distance between p and q in G and the distance |pq|. The fastest known algorithm for this problem has O(n^2) running time. We show how to obtain O(n^{3/2}*(log n)^3) expected running time. We also show that if G has bounded treewidth, its maximum detour can be computed in O(n*(log n)^3) expected time.
Long Pulse Integrator of Variable Integral Time Constant
International Nuclear Information System (INIS)
Wang Yong; Ji Zhenshan; Du Xiaoying; Wu Yichun; Li Shi; Luo Jiarong
2010-01-01
A new kind of long-pulse integrator was designed, based on a variable integral time constant and deduction of the integral drift using the drift slope. The integral time constant can be changed by choosing different integral resistors, in order to improve the signal-to-noise ratio and avoid output saturation; the slope of the integral drift over a certain period of time can be calculated by digital signal processing and used to deduct the drift from the original integral signal in real time, thereby reducing the integral drift. Tests show that this kind of long-pulse integrator is good at reducing integral drift and can also eliminate the effects of changing the integral time constant. In experiments, the integral time constant can be changed by remote control and manual adjustment of the integral drift is avoided, which greatly improves experimental efficiency; the integrator can be used for electromagnetic measurement in Tokamak experiments. (authors)
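The drift-slope deduction idea can be illustrated in a few lines: estimate the drift rate from a window where the true signal is known to be quiet, then subtract the extrapolated drift. This is a schematic digital sketch of the analogue scheme described above; all names and numbers are illustrative:

```python
import numpy as np

def correct_drift(signal, t, quiet_end):
    """Remove linear integrator drift estimated from an initial quiet window."""
    quiet = t < quiet_end
    slope, intercept = np.polyfit(t[quiet], signal[quiet], 1)  # fit drift in quiet period
    return signal - (slope * t + intercept)                    # deduct extrapolated drift

t = np.linspace(0.0, 10.0, 1001)
true = np.where(t > 2.0, 1.0, 0.0)      # step signal appearing after the quiet window
drift = 0.03 * t + 0.05                 # slow linear integrator drift
measured = true + drift
corrected = correct_drift(measured, t, quiet_end=2.0)
print(round(corrected[-1], 3))  # 1.0
```

The fitted slope removes the drift exactly here because the injected drift is linear; in practice the slope would be re-estimated periodically.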
49 CFR 398.6 - Hours of service of drivers; maximum driving time.
2010-10-01
... REGULATIONS TRANSPORTATION OF MIGRANT WORKERS § 398.6 Hours of service of drivers; maximum driving time. No person shall drive nor shall any motor carrier permit or require a driver employed or used by it to drive...
Local Times of Galactic Cosmic Ray Intensity Maximum and Minimum in the Diurnal Variation
Directory of Open Access Journals (Sweden)
Su Yeon Oh
2006-06-01
The diurnal variation of galactic cosmic ray (GCR) flux intensity observed by ground Neutron Monitors (NM) shows a sinusoidal pattern with an amplitude of 1∼2% of the daily mean. We carried out a statistical study of tendencies in the local times of the GCR intensity daily maximum and minimum. To test the influences of the solar activity and the location (cut-off rigidity) on the distribution of the local times of maximum and minimum GCR intensity, we examined the data of 1996 (solar minimum) and 2000 (solar maximum) at the low-latitude Haleakala (latitude: 20.72 N, cut-off rigidity: 12.91 GV) and the high-latitude Oulu (latitude: 65.05 N, cut-off rigidity: 0.81 GV) NM stations. The most frequent local times of the GCR intensity daily maximum and minimum occur about 2∼3 hours later in the solar activity maximum year 2000 than in the solar activity minimum year 1996. The Oulu NM station, whose cut-off rigidity is smaller, has its most frequent local times of GCR intensity maximum and minimum 2∼3 hours later than those of the Haleakala station. This feature is more evident at solar maximum. The phase of the daily variation in GCR depends upon the interplanetary magnetic field, which varies with the solar activity, and upon the cut-off rigidity, which varies with the geographic latitude.
A maximum principle for time dependent transport in systems with voids
International Nuclear Information System (INIS)
Schofield, S.L.; Ackroyd, R.T.
1996-01-01
A maximum principle is developed for the first-order time dependent Boltzmann equation. The maximum principle is a generalization of Schofield's κ(θ) principle for the first-order steady state Boltzmann equation, and provides a treatment of time dependent transport in systems with void regions. The formulation comprises a direct least-squares minimization allied with a suitable choice of bilinear functional, and gives rise to a maximum principle whose functional is free of terms that have previously led to difficulties in treating void regions. (Author)
Stochastic behavior of a cold standby system with maximum repair time
Directory of Open Access Journals (Sweden)
Ashish Kumar
2015-09-01
The main aim of the present paper is to analyze the stochastic behavior of a cold standby system with the concepts of preventive maintenance, priority and maximum repair time. For this purpose, a stochastic model is developed in which initially one unit is operative and the other is kept as a cold standby. There is a single server who visits the system immediately whenever required. The server takes the unit under preventive maintenance after a maximum operation time in normal mode if one standby unit is available for operation. If the repair of the failed unit is not possible within the maximum repair time, the failed unit is replaced by a new one. The failure time, maximum operation time and maximum repair time of the unit are exponentially distributed, while the repair and maintenance times follow arbitrary distributions. All random variables are statistically independent and repairs are perfect. Various measures of system effectiveness are obtained using the technique of semi-Markov processes and the regenerative point technique (RPT). To highlight the importance of the study, numerical results are also obtained for MTSF, availability and the profit function.
Maximum Likelihood Blind Channel Estimation for Space-Time Coding Systems
Directory of Open Access Journals (Sweden)
Hakan A. Çırpan
2002-05-01
Sophisticated signal processing techniques have to be developed for capacity enhancement of future wireless communication systems. In recent years, space-time coding has been proposed to provide significant capacity gains over traditional communication systems in fading wireless channels. Space-time codes are obtained by combining channel coding, modulation, transmit diversity, and optional receive diversity in order to provide diversity at the receiver and coding gain without sacrificing bandwidth. In this paper, we consider the problem of blind estimation of space-time coded signals along with the channel parameters. Both conditional and unconditional maximum likelihood approaches are developed and iterative solutions are proposed. The conditional maximum likelihood algorithm is based on iterative least squares with projection, whereas the unconditional maximum likelihood approach is developed by means of finite state Markov process modelling. The performance analysis issues of the proposed methods are studied. Finally, some simulation results are presented.
Steffensen's Integral Inequality on Time Scales
Directory of Open Access Journals (Sweden)
Ozkan Umut Mutlu
2007-01-01
We establish generalizations of Steffensen's integral inequality on time scales via the diamond-α dynamic integral, which is defined as a linear combination of the delta and nabla integrals.
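For reference, the classical (real-interval) Steffensen inequality that such time-scale results generalize: if f is nonincreasing on [a, b] and 0 ≤ g ≤ 1, then

```latex
\int_{b-\lambda}^{b} f(t)\,dt \;\le\; \int_{a}^{b} f(t)\,g(t)\,dt \;\le\; \int_{a}^{a+\lambda} f(t)\,dt,
\qquad \lambda = \int_{a}^{b} g(t)\,dt .
```

The time-scales version replaces these Riemann integrals by the diamond-α dynamic integral mentioned in the abstract.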
A polynomial time algorithm for solving the maximum flow problem in directed networks
International Nuclear Information System (INIS)
Tlas, M.
2015-01-01
An efficient polynomial time algorithm for solving maximum flow problems is proposed in this paper. The algorithm is based on the binary representation of capacities; it solves the maximum flow problem as a sequence of O(m) shortest path problems on residual networks with n nodes and m arcs. It runs in O(m^2 r) time, where r is the smallest integer greater than or equal to log B, and B is the largest arc capacity of the network. A numerical example is illustrated using the proposed algorithm. (author)
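The scheme above reduces max flow to shortest-path computations on residual networks. As a point of comparison (the classical Edmonds-Karp method, not the author's capacity-scaling algorithm), augmenting along shortest residual paths can be sketched as:

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp: repeatedly augment along a shortest (fewest-arc)
    path in the residual network until no augmenting path remains."""
    n = len(capacity)
    residual = [row[:] for row in capacity]
    flow = 0
    while True:
        parent = [-1] * n          # BFS tree for shortest augmenting path
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if residual[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:        # sink unreachable: current flow is maximum
            return flow
        bottleneck, v = float("inf"), t
        while v != s:              # bottleneck capacity along the path
            bottleneck = min(bottleneck, residual[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:              # push flow, update residual capacities
            residual[parent[v]][v] -= bottleneck
            residual[v][parent[v]] += bottleneck
            v = parent[v]
        flow += bottleneck

cap = [[0, 10, 10, 0],
       [0, 0, 2, 8],
       [0, 0, 0, 9],
       [0, 0, 0, 0]]
print(max_flow(cap, 0, 3))  # 17
```

Edmonds-Karp runs in O(n m^2); the abstract's binary-representation approach trades this for a bound depending on log B instead.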
Energy Technology Data Exchange (ETDEWEB)
Garrigos, Ausias; Blanes, Jose M.; Carrasco, Jose A. [Area de Tecnologia Electronica, Universidad Miguel Hernandez de Elche, Avda. de la Universidad s/n, 03202 Elche, Alicante (Spain); Ejea, Juan B. [Departamento de Ingenieria Electronica, Universidad de Valencia, Avda. Dr Moliner 50, 46100 Valencia, Valencia (Spain)
2007-05-15
In this paper, an approximate curve fitting method for photovoltaic modules is presented. The operation is based on solving a simple solar cell electrical model with a microcontroller in real time. Only four voltage and current coordinates are needed to obtain the solar module parameters and set its operation at maximum power under any conditions of illumination and temperature. Despite its simplicity, this method is suitable for low cost real time applications, such as a control-loop reference generator in photovoltaic maximum power point circuits. The theory that supports the estimator, together with simulations and experimental results, is presented. (author)
Adaptive double-integral-sliding-mode-maximum-power-point tracker for a photovoltaic system
Directory of Open Access Journals (Sweden)
Bidyadhar Subudhi
2015-10-01
This study proposes an adaptive double-integral-sliding-mode-controller maximum-power-point tracker (DISMC-MPPT) for maximum-power-point (MPP) tracking of a photovoltaic (PV) system. The objective of this study is to design a DISMC-MPPT with a new adaptive double-integral sliding surface such that MPP tracking is achieved with reduced chattering and steady-state error in the output voltage or current. The proposed adaptive DISMC-MPPT possesses a very simple and efficient PWM-based control structure that keeps the switching frequency constant. The controller is designed considering the reaching and stability conditions to provide robustness and stability. The performance of the proposed adaptive DISMC-MPPT is verified through both MATLAB/Simulink simulation and experiment using a 0.2 kW prototype PV system. From the obtained results, it is found that this DISMC-MPPT is more efficient compared with Tan's and Jiao's DISMC-MPPTs.
A theory of timing in scintillation counters based on maximum likelihood estimation
International Nuclear Information System (INIS)
Tomitani, Takehiro
1982-01-01
A theory of timing in scintillation counters based on the maximum likelihood estimation is presented. An optimum filter that minimizes the variance of timing is described. A simple formula to estimate the variance of timing is presented as a function of photoelectron number, scintillation decay constant and the single electron transit time spread in the photomultiplier. The present method was compared with the theory by E. Gatti and V. Svelto. The proposed method was applied to two simple models and rough estimations of potential time resolution of several scintillators are given. The proposed method is applicable to the timing in Cerenkov counters and semiconductor detectors as well. (author)
Soylu, Abdullah Ruhi; Arpinar-Avsar, Pinar
2010-08-01
The effects of fatigue on maximum voluntary contraction (MVC) parameters were examined using force and surface electromyography (sEMG) signals of the biceps brachii muscles (BBM) of 12 subjects. The purpose of the study was to find the sEMG time interval of the MVC recordings that is not affected by muscle fatigue. At least 10 s of force and sEMG signals of the BBM were recorded simultaneously during MVC. The subjects reached the maximum force level within 2 s by slightly increasing the force, and then contracted the BBM maximally. The time index of each sEMG and force signal was labeled with respect to the time index of the maximum force (i.e. after the time normalization, the 0 s time index of each sEMG or force signal corresponds to the maximum force point). Then, the first 8 s of the sEMG and force signals were divided into 0.5 s intervals. Mean force, median frequency (MF) and integrated EMG (iEMG) values were calculated for each interval. Amplitude normalization was performed by dividing the force signals by their mean values over the 0 s time interval (i.e. -0.25 to 0.25 s). A similar amplitude normalization procedure was repeated for the iEMG and MF signals. Statistical analysis (Friedman test with Dunn's post hoc test) was performed on the time- and amplitude-normalized signals (MF, iEMG). Although the ANOVA results did not give statistically significant information about the onset of muscle fatigue, linear regression (mean force vs. time) showed a decreasing slope (Pearson-r = 0.9462), indicating that fatigue starts after the 0 s time interval as the muscles cannot attain their peak force levels. This implies that the most reliable interval for MVC calculation that is not affected by muscle fatigue is from the onset of the EMG activity to the peak force time. The mean, SD, and range of this interval (excluding the 2 s gradual increase time) for the 12 subjects were 2353 ms, 1258 ms and 536-4186 ms, respectively. Exceeding this interval introduces estimation errors in the maximum amplitude calculations.
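The two per-interval quantities used above, median frequency (MF) and integrated EMG (iEMG), can be computed as follows (a generic sketch on a toy narrow-band signal; the window length, sampling rate, and signal are illustrative, not the study's data):

```python
import numpy as np

def median_frequency(window, fs):
    """MF: frequency that splits the power spectrum into halves of equal power."""
    power = np.abs(np.fft.rfft(window - np.mean(window))) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    cum = np.cumsum(power)
    return freqs[np.searchsorted(cum, cum[-1] / 2.0)]

def integrated_emg(window, fs):
    """iEMG: integral of the rectified signal over the window."""
    return np.sum(np.abs(window)) / fs

fs = 1000.0
t = np.arange(0, 0.5, 1.0 / fs)        # one 0.5 s analysis interval
emg = np.sin(2 * np.pi * 80.0 * t)     # toy "EMG" concentrated at 80 Hz
print(median_frequency(emg, fs))       # 80.0
print(round(integrated_emg(emg, fs), 3))
```

During fatigue, MF typically shifts downward while force declines, which is the pattern the interval analysis above tracks.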
FlowMax: A Computational Tool for Maximum Likelihood Deconvolution of CFSE Time Courses.
Directory of Open Access Journals (Sweden)
Maxim Nikolaievich Shokhirev
The immune response is a concerted, dynamic, multi-cellular process. Upon infection, the dynamics of lymphocyte populations are an aggregate of molecular processes that determine the activation, division, and longevity of individual cells. The timing of these single-cell processes is remarkably widely distributed, with some cells undergoing their third division while others undergo their first. High cell-to-cell variability and technical noise pose challenges for interpreting popular dye-dilution experiments objectively. It remains an unresolved challenge to avoid under- or over-interpretation of such data when phenotyping gene-targeted mouse models or patient samples. Here we develop and characterize a computational methodology to parameterize a cell population model in the context of noisy dye-dilution data. To enable objective interpretation of model fits, our method estimates fit sensitivity and redundancy by stochastically sampling the solution landscape, calculating parameter sensitivities, and clustering to determine the maximum-likelihood solution ranges. Our methodology accounts for both technical and biological variability by using a cell fluorescence model as an adaptor during population model fitting, resulting in improved fit accuracy without the need for ad hoc objective functions. We have incorporated our methodology into an integrated phenotyping tool, FlowMax, and used it to analyze B cells from two NFκB knockout mice with distinct phenotypes; we not only confirm previously published findings at a fraction of the expended effort and cost, but reveal a novel phenotype of nfkb1/p105/50 in limiting the proliferative capacity of B cells following B-cell receptor stimulation. In addition to complementing experimental work, FlowMax is suitable for high-throughput analysis of dye dilution studies within clinical and pharmacological screens, with objective and quantitative conclusions.
Directory of Open Access Journals (Sweden)
Petré Frederik
2004-01-01
In the downlink of DS-CDMA, frequency-selectivity destroys the orthogonality of the user signals and introduces multiuser interference (MUI). Space-time chip equalization is an efficient tool to restore the orthogonality of the user signals and suppress the MUI. Furthermore, multiple-input multiple-output (MIMO) communication techniques can result in a significant increase in capacity. This paper focuses on space-time block coding (STBC) techniques, and aims at combining STBC techniques with the original single-antenna DS-CDMA downlink scheme. This results in the so-called space-time block coded DS-CDMA downlink schemes, many of which have been presented in the past. We focus on a new scheme that enables both the maximum multiantenna diversity and the maximum multipath diversity. Although this maximum diversity can only be collected by maximum likelihood (ML) detection, we pursue suboptimal detection by means of space-time chip equalization, which lowers the computational complexity significantly. To design the space-time chip equalizers, we also propose efficient pilot-based methods. Simulation results show improved performance over the space-time RAKE receiver for the space-time block coded DS-CDMA downlink schemes that have been proposed for the UMTS and IS-2000 W-CDMA standards.
Integrals of Motion for Discrete-Time Optimal Control Problems
Torres, Delfim F. M.
2003-01-01
We obtain a discrete time analog of E. Noether's theorem in Optimal Control, asserting that integrals of motion associated to the discrete time Pontryagin Maximum Principle can be computed from the quasi-invariance properties of the discrete time Lagrangian and discrete time control system. As corollaries, results for first-order and higher-order discrete problems of the calculus of variations are obtained.
The Maximum Entropy Method for Optical Spectrum Analysis of Real-Time TDDFT
International Nuclear Information System (INIS)
Toogoshi, M; Kano, S S; Zempo, Y
2015-01-01
The maximum entropy method (MEM) is one of the key techniques for spectral analysis. Its major feature is that the low-frequency part of a spectrum can be described from short time-series data. We therefore applied MEM to analyse the spectrum obtained from the time-dependent dipole moment of a real-time time-dependent density functional theory (TDDFT) calculation, an approach that is intensively studied for computing optical properties. In the MEM analysis, however, the maximum lag of the autocorrelation is restricted by the total number of time-series data. As an improved MEM analysis, we proposed using a concatenated data set made from the raw data repeated several times. We have applied this technique to the spectral analysis of the TDDFT dipole moment of ethylene and oligo-fluorene with n = 8. As a result, a higher resolution can be obtained, close to that of a Fourier transform of genuinely time-evolved data with the same total number of time steps. The efficiency and the characteristic features of this technique are presented in this paper. (paper)
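The resolution gain from concatenating repeated data can be illustrated with a toy series. This sketch uses a plain periodogram rather than MEM itself, and the step size, length, and test frequency are arbitrary choices, not values from the paper:

```python
import numpy as np

dt = 0.01                      # time step of the series (illustrative)
n = 200                        # number of raw time steps
t = np.arange(n) * dt
f0 = 5.0                       # true spectral line, chosen to lie on both frequency grids
raw = np.sin(2 * np.pi * f0 * t)

reps = 5
concat = np.tile(raw, reps)    # several-times repeated raw data, as the abstract proposes

def peak_freq(x, dt):
    """Return the frequency of the largest periodogram peak (DC excluded)."""
    freqs = np.fft.rfftfreq(len(x), d=dt)
    power = np.abs(np.fft.rfft(x)) ** 2
    return freqs[1:][np.argmax(power[1:])]

# The repeated series has a 5x finer frequency grid but the same peak position.
df_raw = 1.0 / (n * dt)
df_cat = 1.0 / (reps * n * dt)
print(peak_freq(raw, dt), peak_freq(concat, dt), df_raw, df_cat)
```

The peak stays at the same frequency while the grid spacing shrinks by the repetition factor, which is the effect the concatenation trick exploits.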
Takahashi, Osamu; Nomura, Tetsuo; Tabayashi, Kiyohiko; Yamasaki, Katsuyoshi
2008-07-01
We performed spectral analysis using the maximum entropy method instead of the traditional Fourier transform technique to investigate short-time behavior in molecular systems, such as energy transfer between vibrational modes and chemical reactions. This procedure was applied to direct ab initio molecular dynamics calculations of the decomposition of formic acid. More reactive trajectories were obtained for dehydration than for decarboxylation of Z-formic acid, consistent with the predictions of previous theoretical and experimental studies. Short-time maximum entropy method analyses were performed for typical reactive and non-reactive trajectories. Spectrograms of a reactive trajectory were obtained; these clearly showed the reactant, transient, and product regions, especially for the dehydration path.
Maximum-likelihood methods for array processing based on time-frequency distributions
Zhang, Yimin; Mu, Weifeng; Amin, Moeness G.
1999-11-01
This paper proposes a novel time-frequency maximum likelihood (t-f ML) method for direction-of-arrival (DOA) estimation for non-stationary signals, and compares this method with conventional maximum likelihood DOA estimation techniques. Time-frequency distributions localize the signal power in the time-frequency domain, and as such enhance the effective SNR, leading to improved DOA estimation. The localization of signals with different t-f signatures permits the division of the time-frequency domain into smaller regions, each containing fewer signals than those incident on the array. The reduction of the number of signals within different time-frequency regions not only reduces the required number of sensors, but also decreases the computational load in multi-dimensional optimizations. Compared to the recently proposed time-frequency MUSIC (t-f MUSIC), the proposed t-f ML method can be applied in coherent environments, without the need to perform any type of preprocessing that is subject to both array geometry and array aperture.
STATIONARITY OF ANNUAL MAXIMUM DAILY STREAMFLOW TIME SERIES IN SOUTH-EAST BRAZILIAN RIVERS
Directory of Open Access Journals (Sweden)
Jorge Machado Damázio
2015-08-01
Full Text Available DOI: 10.12957/cadest.2014.18302 The paper presents a statistical analysis of annual maximum daily streamflows between 1931 and 2013 in South-East Brazil, focused on detecting and modelling non-stationarity. Flood protection for the large valleys in South-East Brazil is provided by multiple-purpose reservoir systems built during the 20th century, whose design and operation plans were prepared assuming stationarity of the historical flood time series. Land-cover changes and the rapidly increasing level of atmospheric greenhouse gases over the last century may be affecting flood regimes in these valleys, so non-stationary modelling may be needed to re-assess dam safety and flood-control operation rules for the existing reservoir systems. Six annual maximum daily streamflow time series are analysed. The time series are plotted together with fitted smooth loess functions, and non-parametric statistical tests are performed to check the significance of apparent trends shown by the plots. Non-stationarity is modelled by fitting univariate extreme value distribution functions whose location parameter varies linearly with time. Stationary and non-stationary models are compared with the likelihood ratio statistic. In four of the six analysed time series, the non-stationary model outperformed the stationary one. Keywords: Stationarity; Extreme Value Distributions; Flood Frequency Analysis; Maximum Likelihood Method.
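A minimal sketch of the stationary-versus-trend comparison described above, using a Gumbel distribution with a linearly varying location in place of the full extreme value family. The synthetic series, trend size, and starting values are illustrative assumptions, not the paper's data:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
years = np.arange(80)
# Synthetic annual-maximum series with a linear trend in location (assumed values).
x = rng.gumbel(loc=100.0 + 0.5 * years, scale=10.0)

def nll_stationary(p):
    """Negative log-likelihood of a stationary Gumbel model."""
    mu, log_beta = p
    beta = np.exp(log_beta)
    z = (x - mu) / beta
    return np.sum(z + np.exp(-z) + log_beta)

def nll_trend(p):
    """Negative log-likelihood with location varying linearly in time."""
    mu0, mu1, log_beta = p
    beta = np.exp(log_beta)
    z = (x - (mu0 + mu1 * years)) / beta
    return np.sum(z + np.exp(-z) + log_beta)

fit0 = minimize(nll_stationary, x0=[x.mean(), np.log(x.std())], method="Nelder-Mead")
fit1 = minimize(nll_trend, x0=[x.mean(), 0.0, np.log(x.std())], method="Nelder-Mead")

# Likelihood-ratio statistic; compare against the chi-square(1) critical value 3.84.
lr = 2.0 * (fit0.fun - fit1.fun)
print(lr, fit1.x[1])
```

With a strong imposed trend the likelihood ratio far exceeds the critical value, which is the style of comparison the abstract reports for four of the six rivers.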
Time-symmetric integration in astrophysics
Hernandez, David M.; Bertschinger, Edmund
2018-04-01
Calculating the long-term solution of ordinary differential equations, such as those of the N-body problem, is central to understanding a wide range of dynamics in astrophysics, from galaxy formation to planetary chaos. Because generally no analytic solution exists to these equations, researchers rely on numerical methods that are prone to various errors. In an effort to mitigate these errors, powerful symplectic integrators have been employed. But symplectic integrators can be severely limited because they are not compatible with adaptive stepping and thus they have difficulty in accommodating changing time and length scales. A promising alternative is time-reversible integration, which can handle adaptive time-stepping, but the errors due to time-reversible integration in astrophysics are less understood. The goal of this work is to study analytically and numerically the errors caused by time-reversible integration, with and without adaptive stepping. We derive the modified differential equations of these integrators to perform the error analysis. As an example, we consider the trapezoidal rule, a reversible non-symplectic integrator, and show that it gives a secular increase in energy error for a pendulum problem and for a Hénon-Heiles orbit. We conclude that using reversible integration does not guarantee good energy conservation and that, when possible, use of symplectic integrators is favoured. We also show that time-symmetry and time-reversibility are distinct properties of an integrator.
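The trapezoidal rule discussed above is implicit; a minimal sketch for the pendulum, solving each step by fixed-point iteration, is shown below. The step size, initial condition, and iteration count are illustrative choices, not the authors' setup:

```python
import numpy as np

def pendulum_rhs(y):
    """y = (theta, omega) for the pendulum theta'' = -sin(theta)."""
    theta, omega = y
    return np.array([omega, -np.sin(theta)])

def trapezoidal_step(y, h, iters=50):
    """One step of the implicit trapezoidal rule, solved by fixed-point iteration.
    The scheme is time-symmetric (reversible) but not symplectic."""
    f0 = pendulum_rhs(y)
    y_new = y + h * f0                      # explicit Euler predictor
    for _ in range(iters):
        y_new = y + 0.5 * h * (f0 + pendulum_rhs(y_new))
    return y_new

def energy(y):
    theta, omega = y
    return 0.5 * y[1] ** 2 - np.cos(y[0])

h, n_steps = 0.05, 1000
y = np.array([1.0, 0.0])                    # released from rest at 1 rad
e0 = energy(y)
errs = []
for _ in range(n_steps):
    y = trapezoidal_step(y, h)
    errs.append(abs(energy(y) - e0))
print(max(errs))
```

Tracking `errs` over much longer fixed-step or adaptive-step runs is the kind of experiment the paper uses to expose secular energy drift.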
International Nuclear Information System (INIS)
Fiebig, H. Rudolf
2002-01-01
We study various aspects of extracting spectral information from time correlation functions of lattice QCD by means of Bayesian inference with an entropic prior, the maximum entropy method (MEM). Correlator functions of a heavy-light meson-meson system serve as a repository for lattice data with diverse statistical quality. Attention is given to spectral mass density functions, inferred from the data, and their dependence on the parameters of the MEM. We propose to employ simulated annealing, or cooling, to solve the Bayesian inference problem, and discuss the practical issues of the approach.
Determination of maximum isolation times in case of internal flooding due to pipe break
International Nuclear Information System (INIS)
Varas, M. I.; Orteu, E.; Laserna, J. A.
2014-01-01
This paper describes the process followed in preparing the flooding manual of Cofrentes NPP to identify the maximum time available to the plant to isolate a moderate- or high-energy pipe break before it affects safety-related (1E) equipment involved in the safe shutdown of the reactor or in spent-fuel pool cooling, and to determine the recommended isolation mode from the point of view of the location of the break, the location of the 1E equipment, and human factors. (Author)
The Research of Car-Following Model Based on Real-Time Maximum Deceleration
Directory of Open Access Journals (Sweden)
Longhai Yang
2015-01-01
Full Text Available This paper is concerned with the effect of real-time maximum deceleration in car-following. The real-time maximum deceleration is estimated from vehicle dynamics. It is known that the intelligent driver model (IDM) can control adaptive cruise control (ACC) well; its disadvantages at high and constant speeds are analyzed here. A new car-following model for ACC is accordingly established by modifying the desired minimum gap and the structure of the IDM. We simulated the new car-following model and the IDM under two different road conditions. In the first, the vehicles drive on a single road surface, taking dry asphalt as the example in this paper. In the second, the vehicles drive onto a different surface; this paper analyzes the situation in which vehicles drive from a dry asphalt road onto an icy road. From the simulation, we found that the new car-following model can not only ensure driving security and comfort but also control the steady driving of the vehicle with a smaller time headway than the IDM.
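For reference, the standard IDM acceleration law that the paper modifies can be sketched as follows. The parameter values are typical textbook choices, not those of the paper:

```python
import numpy as np

def idm_accel(v, gap, dv, v0=33.3, T=1.6, a=0.73, b=1.67, s0=2.0, delta=4):
    """Intelligent Driver Model acceleration.
    v: own speed [m/s]; gap: bumper-to-bumper gap to the leader [m];
    dv: approach rate v - v_lead [m/s]. Parameters: desired speed v0,
    time headway T, max acceleration a, comfortable deceleration b,
    jam distance s0 (typical values, assumed for illustration)."""
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * np.sqrt(a * b)))
    return a * (1.0 - (v / v0) ** delta - (s_star / gap) ** 2)

free = idm_accel(v=20.0, gap=1e6, dv=0.0)   # nearly free road: accelerates
close = idm_accel(v=20.0, gap=5.0, dv=0.0)  # dangerously small gap: brakes hard
print(free, close)
```

The paper's modification targets exactly the desired-gap term `s_star` and the model structure, with the deceleration bound supplied in real time from vehicle dynamics.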
Optimal protocol for maximum work extraction in a feedback process with a time-varying potential
Kwon, Chulan
2017-12-01
The nonequilibrium nature of information thermodynamics is characterized by the inequality or non-negativity of the total entropy change of the system, memory, and reservoir. Mutual information change plays a crucial role in the inequality, in particular if work is extracted and the paradox of Maxwell's demon is raised. We consider the Brownian information engine where the protocol set of the harmonic potential is initially chosen by the measurement and varies in time. We confirm the inequality of the total entropy change by calculating, in detail, the entropic terms including the mutual information change. We rigorously find the optimal values of the time-dependent protocol for maximum extraction of work both for the finite-time and the quasi-static process.
Time Reversal Migration for Passive Sources Using a Maximum Variance Imaging Condition
Wang, H.; Alkhalifah, Tariq Ali
2017-01-01
The conventional time-reversal imaging approach for micro-seismic or passive source location is based on focusing the back-propagated wavefields from each recorded trace in a source image. It suffers from strong background noise and limited acquisition aperture, which may create unexpected artifacts and cause error in the source location. To overcome such a problem, we propose a new imaging condition for microseismic imaging, which is based on comparing the amplitude variance in certain windows, and use it to suppress the artifacts as well as find the right location for passive sources. Instead of simply searching for the maximum energy point in the back-propagated wavefield, we calculate the amplitude variances over a window moving along both the space and time axes to create a highly resolved passive event image. The variance operation has negligible cost compared with the forward/backward modeling operations, which reveals that the maximum variance imaging condition is efficient and effective. We test our approach numerically on a simple three-layer model and on a piece of the Marmousi model as well, both of which have shown reasonably good results.
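The windowed-variance imaging condition can be sketched on a synthetic wavefield. The field, the planted "source" blob, and the window size below are invented for illustration; in the paper the wavefields come from backward modeling of recorded traces:

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(1)
nx, nz, nt = 40, 40, 60
field = 0.1 * rng.standard_normal((nx, nz, nt))  # noisy back-propagated wavefield (synthetic)

# Plant a focused "source" event: a small high-amplitude blob at (20, 25, 30).
ix, iz, it = 20, 25, 30
xs, zs, ts = np.ogrid[:nx, :nz, :nt]
field += 5.0 * np.exp(-((xs - ix) ** 2 + (zs - iz) ** 2 + (ts - it) ** 2) / 4.0)

# Maximum-variance imaging condition: local variance over a moving window
# in space and time, via var = E[x^2] - E[x]^2.
w = 5
mean = uniform_filter(field, size=w)
var = uniform_filter(field ** 2, size=w) - mean ** 2
loc = np.unravel_index(np.argmax(var), var.shape)
print(loc)
```

The variance peak lands on the focused event rather than on diffuse noise, and the two filter passes cost far less than the modeling step, matching the efficiency claim.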
Energy Technology Data Exchange (ETDEWEB)
Syafaruddin; Hiyama, Takashi [Department of Computer Science and Electrical Engineering of Kumamoto University, 2-39-1 Kurokami, Kumamoto 860-8555 (Japan); Karatepe, Engin [Department of Electrical and Electronics Engineering of Ege University, 35100 Bornova-Izmir (Turkey)
2009-12-15
It is crucial to improve photovoltaic (PV) system efficiency and the reliability of PV generation control systems. There are two ways to increase the efficiency of a PV power generation system. The first is to develop materials offering high conversion efficiency at low cost. The second is to operate PV systems optimally. However, the PV system can be optimally operated only at a specific output voltage, and its output power fluctuates under intermittent weather conditions. Moreover, it is very difficult to test the performance of a maximum-power point tracking (MPPT) controller under the same weather condition during the development process, and field testing is costly and time consuming. This paper presents a novel real-time simulation technique for PV generation systems using a dSPACE real-time interface system. The proposed system includes an Artificial Neural Network (ANN) and a fuzzy logic controller scheme using polar information. This type of fuzzy logic rule is implemented for the first time to operate the PV module at the optimum operating point. The ANN is utilized to determine the optimum operating voltage for monocrystalline silicon, thin-film cadmium telluride and triple-junction amorphous silicon solar cells. The verification of availability and stability of the proposed system through the real-time simulator shows that the proposed system can respond accurately for different scenarios and different solar cell technologies. (author)
Age-Related Differences of Maximum Phonation Time in Patients after Cardiac Surgery
Directory of Open Access Journals (Sweden)
Kazuhiro P. Izawa
2017-12-01
Full Text Available Background and aims: Maximum phonation time (MPT), which is related to respiratory function, is widely used to evaluate maximum vocal capabilities, because its use is non-invasive, quick, and inexpensive. We aimed to examine differences in MPT by age following recovery-phase II cardiac rehabilitation (CR). Methods: This longitudinal observational study assessed 50 consecutive cardiac patients, who were divided into a middle-aged group (<65 years, n = 29) and an older-aged group (≥65 years, n = 21). MPTs were measured at 1 and 3 months after cardiac surgery and compared. Results: The duration of MPT increased more significantly from month 1 to month 3 in the middle-aged group (19.2 ± 7.8 to 27.1 ± 11.6 s, p < 0.001) than in the older-aged group (12.6 ± 3.5 to 17.9 ± 6.0 s, p < 0.001). However, there was no statistically significant difference in the % change of MPT from 1 month to 3 months after cardiac surgery between the middle-aged and older-aged groups (41.1% vs. 42.1%, respectively). In addition, there were no significant interactions of MPT in the two groups for 1 versus 3 months (F = 1.65, p = 0.20). Conclusion: Following phase II CR, MPT improved for all cardiac surgery patients.
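The percentage changes quoted in the abstract can be reproduced directly from the reported group means:

```python
# Reproduce the % change in MPT quoted for each group (values from the abstract).
mid_1, mid_3 = 19.2, 27.1      # middle-aged group, months 1 and 3 [s]
old_1, old_3 = 12.6, 17.9      # older-aged group, months 1 and 3 [s]

pct_mid = (mid_3 - mid_1) / mid_1 * 100
pct_old = (old_3 - old_1) / old_1 * 100
print(round(pct_mid, 1), round(pct_old, 1))  # → 41.1 42.1
```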
Maximum Likelihood Time-of-Arrival Estimation of Optical Pulses via Photon-Counting Photodetectors
Erkmen, Baris I.; Moision, Bruce E.
2010-01-01
Many optical imaging, ranging, and communications systems rely on the estimation of the arrival time of an optical pulse. Recently, such systems have been increasingly employing photon-counting photodetector technology, which changes the statistics of the observed photocurrent. This requires time-of-arrival estimators to be developed and their performances characterized. The statistics of the output of an ideal photodetector, which are well modeled as a Poisson point process, were considered. An analytical model was developed for the mean-square error of the maximum likelihood (ML) estimator, demonstrating two phenomena that cause deviations from the minimum achievable error at low signal power. An approximation was derived to the threshold at which the ML estimator essentially fails to provide better than a random guess of the pulse arrival time. Comparing the analytic model performance predictions to those obtained via simulations, it was verified that the model accurately predicts the ML performance over all regimes considered. There is little prior art that attempts to understand the fundamental limitations to time-of-arrival estimation from Poisson statistics. This work establishes both a simple mathematical description of the error behavior, and the associated physical processes that yield this behavior. Previous work on mean-square error characterization for ML estimators has predominantly focused on additive Gaussian noise. This work demonstrates that the discrete nature of the Poisson noise process leads to a distinctly different error behavior.
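A minimal sketch of ML time-of-arrival estimation from photon counts, assuming an ideal detector, a Gaussian pulse shape, and no background; all numerical values are illustrative. Conditioned on the count, the arrival times of a Poisson process are i.i.d. draws from the normalized intensity, which gives a simple way to simulate the data:

```python
import numpy as np

rng = np.random.default_rng(2)
tau_true, sigma = 3.0, 0.5     # true arrival time and pulse width (illustrative)
n_photons = rng.poisson(1000)  # photon count from an ideal (noiseless) detector

# Conditioned on the count, the arrival times are i.i.d. draws from the
# normalized pulse shape -- here a Gaussian centred at tau_true.
times = tau_true + sigma * rng.standard_normal(n_photons)

def log_likelihood(tau):
    """Poisson-process log-likelihood, up to terms independent of tau
    (the intensity integral is constant when the pulse fits in the window)."""
    return np.sum(-0.5 * ((times - tau) / sigma) ** 2)

taus = np.linspace(0.0, 6.0, 1201)
tau_ml = taus[np.argmax([log_likelihood(t) for t in taus])]
print(tau_ml)
```

Repeating this at low photon counts is how one would observe the threshold effect the paper analyzes, where the ML estimate degrades toward a random guess.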
Relative timing of last glacial maximum and late-glacial events in the central tropical Andes
Bromley, Gordon R. M.; Schaefer, Joerg M.; Winckler, Gisela; Hall, Brenda L.; Todd, Claire E.; Rademaker, Kurt M.
2009-11-01
Whether or not tropical climate fluctuated in synchrony with global events during the Late Pleistocene is a key problem in climate research. However, the timing of past climate changes in the tropics remains controversial, with a number of recent studies reporting that tropical ice age climate is out of phase with global events. Here, we present geomorphic evidence and an in-situ cosmogenic 3He surface-exposure chronology from Nevado Coropuna, southern Peru, showing that glaciers underwent at least two significant advances during the Late Pleistocene prior to Holocene warming. Comparison of our glacial-geomorphic map at Nevado Coropuna to mid-latitude reconstructions yields a striking similarity between Last Glacial Maximum (LGM) and Late-Glacial sequences in tropical and temperate regions. Exposure ages constraining the maximum and end of the older advance at Nevado Coropuna range between 24.5 and 25.3 ka, and between 16.7 and 21.1 ka, respectively, depending on the cosmogenic production rate scaling model used. Similarly, the mean age of the younger event ranges from 10 to 13 ka. This implies that (1) the LGM and the onset of deglaciation in southern Peru occurred no earlier than at higher latitudes and (2) that a significant Late-Glacial event occurred, most likely prior to the Holocene, coherent with the glacial record from mid and high latitudes. The time elapsed between the end of the LGM and the Late-Glacial event at Nevado Coropuna is independent of scaling model and matches the period between the LGM termination and Late-Glacial reversal in classic mid-latitude records, suggesting that these events in both tropical and temperate regions were in phase.
Numerical time integration for air pollution models
J.G. Verwer (Jan); W. Hundsdorfer (Willem); J.G. Blom (Joke)
1998-01-01
Due to the large number of chemical species and the three space dimensions, off-the-shelf stiff ODE integrators are not feasible for the numerical time integration of stiff systems of advection-diffusion-reaction equations ∂c/∂t + ∇·(uc) = ∇·(K∇c) + R(c), where c is the vector of species concentrations, u the wind field, K the eddy-diffusivity tensor, and R the (stiff) chemical reaction terms.
Liu, Jian; Miller, William H
2008-09-28
The maximum entropy analytic continuation (MEAC) method is used to extend the range of accuracy of the linearized semiclassical initial value representation (LSC-IVR)/classical Wigner approximation for real time correlation functions. LSC-IVR provides a very effective "prior" for the MEAC procedure since it is very good for short times, exact for all time and temperature for harmonic potentials (even for correlation functions of nonlinear operators), and becomes exact in the classical high temperature limit. This combined MEAC+LSC/IVR approach is applied here to two highly nonlinear dynamical systems, a pure quartic potential in one dimension and liquid para-hydrogen at two thermal state points (25 and 14 K under nearly zero external pressure). The former example shows the MEAC procedure to be a very significant enhancement of the LSC-IVR for correlation functions of both linear and nonlinear operators, and especially at low temperature where semiclassical approximations are least accurate. For liquid para-hydrogen, the LSC-IVR is seen already to be excellent at T=25 K, but the MEAC procedure produces a significant correction at the lower temperature (T=14 K). Comparisons are also made as to how the MEAC procedure is able to provide corrections for other trajectory-based dynamical approximations when used as priors.
Schaefer, Andreas; Wenzel, Friedemann
2017-04-01
technically trades time with space, considering subduction zones where we have likely not observed the maximum possible event yet. However, by identifying sources of the same class, the not-yet observed temporal behavior can be replaced by spatial similarity among different subduction zones. This database aims to enhance the research and understanding of subduction zones and to quantify their potential in producing mega earthquakes considering potential strong motion impact on nearby cities and their tsunami potential.
Wu, Yuanfeng; Gao, Lianru; Zhang, Bing; Zhao, Haina; Li, Jun
2014-01-01
We present a parallel implementation of the optimized maximum noise fraction (G-OMNF) transform algorithm for feature extraction of hyperspectral images on commodity graphics processing units (GPUs). The proposed approach exploits the algorithm's data-level concurrency and optimizes the computing flow. We first defined a three-dimensional grid, in which each thread calculates a sub-block of data, to easily facilitate the spatial and spectral neighborhood data searches in noise estimation, which is one of the most important steps involved in OMNF. Then, we optimized the processing flow and computed the noise covariance matrix before computing the image covariance matrix to reduce the original hyperspectral image data transmission. These optimization strategies can greatly improve the computing efficiency and can be applied to other feature extraction algorithms. The proposed parallel feature extraction algorithm was implemented on an Nvidia Tesla GPU using the compute unified device architecture and basic linear algebra subroutines library. Through experiments on several real hyperspectral images, our GPU parallel implementation provides a significant speedup of the algorithm compared with the CPU implementation, especially for highly data-parallelizable and arithmetically intensive algorithm parts, such as noise estimation. In order to further evaluate the effectiveness of G-OMNF, we used two different applications, spectral unmixing and classification, for evaluation. Considering the sensor scanning rate and the data acquisition time, the proposed parallel implementation met the on-board real-time feature extraction requirements.
Increasing the maximum daily operation time of MNSR reactor by modifying its cooling system
International Nuclear Information System (INIS)
Khamis, I.; Hainoun, A.; Al Halbi, W.; Al Isa, S.
2006-08-01
Thermal-hydraulic natural convection correlations have been formulated based on a thorough analysis and modelling of the MNSR reactor. The model gives a detailed description of the thermal and hydraulic aspects of cooling in the core and vessel. In addition, the pressure drop is determined by balancing the overall pressure drop in the core against the sum of the individual channel pressure drops, using an iterative scheme. With this model, an accurate estimate of the time-dependent core-averaged hydraulic parameters, such as generated power, hydraulic diameter, and flow cross-sectional area, can be made for each of the ten fuel circles in the core. Furthermore, the distributions of coolant and fuel temperatures, including the maximum fuel temperature and its location in the core, can now be determined. Correlations among core-coolant average temperature, reactor power, and core-coolant inlet temperature, in both steady-state and transient cases, have been established and verified against experimental data. Simulating various operating conditions of MNSR, good agreement is obtained at different power levels. Various cooling schemes have been investigated to assess their potential benefits for the operational characteristics of the Syrian MNSR reactor. The analysis shows that an auxiliary cooling system, for the reactor vessel or installed in the pool surrounding the lower section of the reactor vessel, will significantly offset the consumption of excess reactivity due to the negative reactivity temperature coefficient; hence, the maximum operating time of the reactor is extended. The suggested 'micro model
Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors
Langbein, John
2017-08-01
Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α, where f is frequency. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi: 10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet it is only an approximate solution for power-law indices >1.0 since it requires the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
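The filter construction alluded to above can be sketched with a Hosking-type fractional-integration recursion, a standard device for colouring white noise into power-law noise; the exact filter and noise mix used in the paper may differ:

```python
import numpy as np

def power_law_filter(alpha, n):
    """Coefficients of the fractional-integration filter that colours white
    noise into 1/f^alpha noise (Hosking-type recursion)."""
    h = np.empty(n)
    h[0] = 1.0
    for k in range(1, n):
        h[k] = h[k - 1] * (k - 1.0 + alpha / 2.0) / k
    return h

# alpha = 2 is random-walk noise: the filter is a running sum (all ones).
print(power_law_filter(2.0, 5))        # → [1. 1. 1. 1. 1.]

# The key idea: model the data as white noise *added to* filtered noise,
# rather than combining the two in quadrature in the covariance matrix.
rng = np.random.default_rng(3)
n = 500
white = rng.standard_normal(n)
colored = np.convolve(rng.standard_normal(n), power_law_filter(1.0, n))[:n]
series = white + 0.5 * colored         # synthetic white + flicker-noise series
```

Because the noise processes are added through a single filter, the covariance matrix to invert in the MLE no longer needs to be Toeplitz, which is the efficiency point made above.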
The timing of the maximum extent of the Rhone Glacier at Wangen a.d. Aare
Energy Technology Data Exchange (ETDEWEB)
Ivy-Ochs, S.; Schluechter, C. [Bern Univ. (Switzerland); Kubik, P.W. [Paul Scherrer Inst. (PSI), Villigen (Switzerland); Beer, J. [EAWAG, Duebendorf (Switzerland)
1997-09-01
Erratic blocks found in the region of Wangen a.d. Aare delineate the maximum position of the Solothurn lobe of the Rhone Glacier. ¹⁰Be and ²⁶Al exposure ages of three of these blocks show that the glacier withdrew from its maximum position at or slightly before 20,000 ± 1800 years ago. (author) 1 fig., 5 refs.
Computing the Maximum Detour of a Plane Graph in Subquadratic Time
DEFF Research Database (Denmark)
Wulff-Nilsen, Christian
2008-01-01
Let G be a plane graph where each edge is a line segment. We consider the problem of computing the maximum detour of G, defined as the maximum over all pairs of distinct points p and q of G of the ratio between the distance between p and q in G and the distance |pq|. The fastest known algorithm...
Hou, Bowen; He, Zhangming; Li, Dong; Zhou, Haiyin; Wang, Jiongqi
2018-05-27
Strap-down inertial navigation system/celestial navigation system (SINS/CNS) integrated navigation is a high-precision navigation technique for ballistic missiles. The traditional navigation method suffers from divergence in the position error. A deeply integrated mode for the SINS/CNS navigation system is proposed to improve the navigation accuracy of ballistic missiles. The deeply integrated navigation principle is described and the observability of the navigation system is analyzed. Nonlinearity, as well as large outliers and Gaussian mixture noises, often exists during the actual navigation process, leading to divergence of the navigation filter. A new nonlinear Kalman filter based on maximum correntropy theory and the unscented transformation, named the maximum correntropy unscented Kalman filter, is deduced, and its computational complexity is analyzed. The unscented transformation is used to handle the nonlinearity of the system equation, and maximum correntropy theory is used to deal with the non-Gaussian noises. Finally, numerical simulation illustrates the superiority of the proposed filter over the traditional unscented Kalman filter. The comparison results show that the influence of large outliers and non-Gaussian noises on SINS/CNS deeply integrated navigation is significantly reduced by the proposed filter.
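The correntropy mechanism at the heart of the proposed filter can be illustrated on a much simpler location-estimation problem. This is not the full unscented Kalman filter; the kernel width and data are invented to show how the Gaussian kernel down-weights outliers:

```python
import numpy as np

def correntropy_mean(x, kernel_sigma=2.0, iters=30):
    """Maximum-correntropy estimate of location: a fixed-point iteration in
    which a Gaussian kernel down-weights large residuals -- the same
    mechanism the maximum correntropy unscented Kalman filter uses to
    suppress outliers and non-Gaussian noise."""
    m = np.median(x)                       # robust starting point
    for _ in range(iters):
        w = np.exp(-((x - m) ** 2) / (2.0 * kernel_sigma ** 2))
        m = np.sum(w * x) / np.sum(w)
    return m

rng = np.random.default_rng(4)
data = np.concatenate([rng.normal(0.0, 1.0, 50), [100.0]])  # one large outlier
plain = data.mean()
robust = correntropy_mean(data)
print(plain, robust)
```

The plain mean is dragged toward the outlier while the correntropy estimate stays with the bulk of the data, which is the qualitative behaviour reported for the filter comparison above.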
Directory of Open Access Journals (Sweden)
Bowen Hou
2018-05-01
Maximum leaf conductance driven by CO2 effects on stomatal size and density over geologic time.
Franks, Peter J; Beerling, David J
2009-06-23
Stomatal pores are microscopic structures on the epidermis of leaves formed by 2 specialized guard cells that control the exchange of water vapor and CO2 between plants and the atmosphere. Stomatal size (S) and density (D) determine maximum leaf diffusive (stomatal) conductance of CO2 (g_cmax) to sites of assimilation. Although large variations in D observed in the fossil record have been correlated with atmospheric CO2, the crucial significance of similarly large variations in S has been overlooked. Here, we use physical diffusion theory to explain why large changes in S necessarily accompanied the changes in D and atmospheric CO2 over the last 400 million years. In particular, we show that high densities of small stomata are the only way to attain the highest g_cmax values required to counter CO2 "starvation" at low atmospheric CO2 concentrations. This explains cycles of increasing D and decreasing S evident in the fossil history of stomata under the CO2-impoverished atmospheres of the Permo-Carboniferous and Cenozoic glaciations. The pattern was reversed under rising atmospheric CO2 regimes. Selection for small S was crucial for attaining high g_cmax under falling atmospheric CO2 and, therefore, may represent a mechanism linking CO2 and the increasing gas-exchange capacity of land plants over geologic time.
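The diffusion-theory link between anatomy and maximum conductance can be sketched numerically. The formula below is the standard pore-diffusion form with an end correction as commonly written in the stomatal literature, and the constants are assumed typical values; neither is quoted from the abstract.

```python
import math

# Illustrative constants (assumed typical values, not from the paper)
d_w = 2.49e-5   # diffusivity of water vapour in air, m^2 s^-1
v   = 2.24e-2   # molar volume of air, m^3 mol^-1

def g_max(D, a_max, l):
    """Anatomical maximum stomatal conductance (mol m^-2 s^-1) for stomatal
    density D (m^-2), maximum pore area a_max (m^2), and pore depth l (m).
    The end-correction term (pi/2)*sqrt(a_max/pi) follows the standard
    diffusion-through-a-pore treatment."""
    return (d_w / v) * D * a_max / (l + (math.pi / 2.0) * math.sqrt(a_max / math.pi))

# Many small stomata vs. few large ones at equal total pore area:
small = g_max(D=600e6, a_max=50e-12, l=10e-6)   # high density, small pores
large = g_max(D=100e6, a_max=300e-12, l=25e-6)  # low density, large pores
```

At equal total pore area, the high-density, small-pore morphology yields the higher conductance, because smaller and shallower pores have a shorter diffusion path per unit of pore area.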
Jat, Prahlad; Serre, Marc L
2016-12-01
Widespread contamination of surface water by chloride is an emerging environmental concern. Consequently, accurate and cost-effective methods are needed to estimate chloride along all river miles of potentially contaminated watersheds. Here we introduce a Bayesian Maximum Entropy (BME) space/time geostatistical estimation framework that uses river distances, and we compare it with Euclidean BME to estimate surface water chloride from 2005 to 2014 in the Gunpowder-Patapsco, Severn, and Patuxent subbasins in Maryland. River BME improves the cross-validation R² by 23.67% over Euclidean BME, and river BME maps differ significantly from Euclidean BME maps, indicating that it is important to use river BME maps to assess water quality impairment. The river BME maps of chloride concentration show wide contamination throughout Baltimore and Columbia-Ellicott cities, the disappearance of a clean buffer separating these two large urban areas, and the emergence of multiple localized pockets of contamination in surrounding areas. The number of impaired river miles increased by 0.55% per year in 2005-2009 and by 1.23% per year in 2011-2014, corresponding to a marked acceleration of the rate of impairment. Our results support the need for control measures and increased monitoring of unassessed river miles. Copyright © 2016. Published by Elsevier Ltd.
Fast Maximum-Likelihood Decoder for Quasi-Orthogonal Space-Time Block Code
Directory of Open Access Journals (Sweden)
Adel Ahmadi
2015-01-01
Motivated by the decompositions used in sphere and QR-based methods, in this paper we present an extremely fast maximum-likelihood (ML) detection approach for quasi-orthogonal space-time block codes (QOSTBC). The proposed algorithm, with a relatively simple design, exploits the structure of quadrature amplitude modulation (QAM) constellations to achieve its goal and can be extended to any arbitrary constellation. Our decoder utilizes a new decomposition technique for the ML metric which divides the metric into independent positive parts and a positive interference part. The search spaces of the symbols are substantially reduced by employing the independent parts and the statistics of the noise. Symbols within the search spaces are successively evaluated until the metric is minimized. Simulation results confirm that the proposed decoder's performance is superior to many recently published state-of-the-art solutions in terms of complexity. More specifically, the new algorithm with 1024-QAM has lower computational complexity than the state-of-the-art solution with 16-QAM.
Novel Maximum-based Timing Acquisition for Spread-Spectrum Communications
Energy Technology Data Exchange (ETDEWEB)
Sibbetty, Taylor; Moradiz, Hussein; Farhang-Boroujeny, Behrouz
2016-12-01
This paper proposes and analyzes a new packet detection and timing acquisition method for spread-spectrum systems. The proposed method is an enhancement over the typical thresholding techniques that have been proposed for direct-sequence spread spectrum (DS-SS). Effective implementation of thresholding methods typically requires accurate knowledge of the received signal-to-noise ratio (SNR), which is particularly difficult to estimate in spread-spectrum systems. Instead, we propose a method which uses a consistency metric on the location of the maximum samples at the output of a filter matched to the spread-spectrum waveform to achieve acquisition, and which does not require knowledge of the received SNR. Through theoretical study, we show that the proposed method offers a low probability of missed detection over a large range of SNR with a corresponding probability of false alarm far lower than other methods. Computer simulations that corroborate our theoretical results are also presented. Although our work here was motivated by our previous study of a filter bank multicarrier spread-spectrum (FB-MC-SS) system, the proposed method is applicable to DS-SS systems as well.
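The consistency-of-maximum idea can be sketched as follows. The function names, the tolerance, and the toy signal model are assumptions for illustration; the paper's actual detector and its analysis are more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)

def acquire_by_max_consistency(rx, code, n_periods, tol=1):
    """Declare acquisition when the argmax of the matched-filter output lands
    at (nearly) the same offset in several consecutive code periods.
    No SNR estimate or amplitude threshold is needed."""
    L = len(code)
    # Matched-filter output at lags 0 .. len(rx)-1
    mf = np.abs(np.correlate(rx, code, mode="full"))[L - 1:]
    peaks = [int(np.argmax(mf[k * L:(k + 1) * L])) for k in range(n_periods)]
    spread = max(peaks) - min(peaks)
    return (spread <= tol), peaks

# Spreading code repeated with an unknown offset, plus noise
code = rng.choice([-1.0, 1.0], size=63)
offset = 17
rx = np.roll(np.tile(code, 4), offset) + 0.3 * rng.standard_normal(63 * 4)
ok, peaks = acquire_by_max_consistency(rx, code, n_periods=3)
```

Because only the *location* of the per-period maximum is tested, the decision rule is unchanged as the received amplitude scales, which is why no SNR knowledge is required.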
Directory of Open Access Journals (Sweden)
Guo-Jheng Yang
2013-08-01
The fragile watermarking technique is used to protect intellectual property rights while also providing security and rigorous protection. In order to protect the copyright of creators, a watermark can be embedded in some representative text or totem. Because all media on the Internet are digital, protection has become a critical issue, and determining how to use digital watermarks to protect digital media is thus the topic of our research. This paper uses the logistic map with parameter μ = 4 to generate chaotic dynamic behavior with the maximum entropy of 1. This approach increases the security and rigor of the protection. The main research target of information hiding is determining how to hide confidential data so that the naked eye cannot see the difference. Next, we introduce one method of information hiding. Generally speaking, if the image only goes through Arnold's cat map and the logistic map, it seems to lack sufficient security. Therefore, our emphasis is on making small changes to the control of Arnold's cat map and the initial value of the chaotic system in order to generate different chaotic sequences. Thus, the current time is used not only to make the encryption more stringent but also to enhance the security of the digital media.
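A minimal sketch of the chaotic keystream generation the abstract describes. The thresholding rule and the variable names are assumptions; the paper's full scheme also involves Arnold's cat map, which is not reproduced here.

```python
def logistic_sequence(x0, n, mu=4.0):
    """Chaotic orbit of the logistic map x_{k+1} = mu*x_k*(1 - x_k);
    at mu = 4 the map is fully chaotic, which the paper exploits for
    maximum entropy of the keystream."""
    xs, x = [], x0
    for _ in range(n):
        x = mu * x * (1.0 - x)
        xs.append(x)
    return xs

def keystream_bits(x0, n):
    """Threshold the chaotic orbit into bits (a common construction;
    the paper's exact bit-extraction rule is not given in the abstract)."""
    return [1 if x >= 0.5 else 0 for x in logistic_sequence(x0, n)]

# A tiny change in the initial value (e.g. derived from the current time)
# yields a completely different keystream: sensitive dependence.
a = keystream_bits(0.123456789, 64)
b = keystream_bits(0.123456790, 64)
```

Two initial values differing by 10⁻⁹ produce keystreams that diverge after a few dozen iterations, which is what makes a time-seeded initial value effective as a key.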
Directory of Open Access Journals (Sweden)
Johann A. Briffa
2014-06-01
In this study, the authors consider time-varying block (TVB) codes, which generalise a number of previous synchronisation error-correcting codes. They also consider various practical issues related to maximum a posteriori (MAP) decoding of these codes. Specifically, they give an expression for the expected distribution of drift between transmitter and receiver due to synchronisation errors. They determine an appropriate choice of state space limits based on the drift probability distribution. In turn, they obtain an expression for the decoder complexity under given channel conditions in terms of the state space limits used. For a given state space, they also give a number of optimisations that reduce the algorithm complexity with no further loss of decoder performance. They also show how the MAP decoder can be used in the absence of known frame boundaries, and demonstrate that an appropriate choice of decoder parameters allows the decoder to approach the performance achieved when frame boundaries are known, at the expense of some increase in complexity. Finally, they express some existing constructions as TVB codes, comparing performance with published results and showing that improved performance is possible by taking advantage of the flexibility of TVB codes.
The early maximum likelihood estimation model of audiovisual integration in speech perception
DEFF Research Database (Denmark)
Andersen, Tobias
2015-01-01
Speech perception is facilitated by seeing the articulatory mouth movements of the talker. This is due to perceptual audiovisual integration, which also causes the McGurk−MacDonald illusion, and for which a comprehensive computational account is still lacking. Decades of research have largely...... integration to speech perception along with three model variations. In early MLE, integration is based on a continuous internal representation before categorization, which can make the model more parsimonious by imposing constraints that reflect experimental designs. The study also shows that cross-validation can evaluate models of audiovisual integration based on typical data sets taking both goodness-of-fit and model flexibility into account. All models were tested on a published data set previously used for testing the FLMP. Cross-validation favored the early MLE while more conventional error measures......
GENERIC Integrators: Structure Preserving Time Integration for Thermodynamic Systems
Öttinger, Hans Christian
2018-04-01
Thermodynamically admissible evolution equations for non-equilibrium systems are known to possess a distinct mathematical structure. Within the GENERIC (general equation for the non-equilibrium reversible-irreversible coupling) framework of non-equilibrium thermodynamics, which is based on continuous time evolution, we investigate the possibility of preserving all the structural elements in time-discretized equations. Our approach, which follows Moser's [1] construction of symplectic integrators for Hamiltonian systems, is illustrated for the damped harmonic oscillator. Alternative approaches are sketched.
Finite mixture model: A maximum likelihood estimation approach on time series data
Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad
2014-09-01
Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation, as it provides desirable asymptotic properties. In addition, the estimator is consistent as the sample size increases to infinity, which illustrates that maximum likelihood estimation is asymptotically unbiased. Moreover, as the sample size increases, the parameter estimates obtained by maximum likelihood estimation have the smallest variance compared with other statistical methods. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines, and Indonesia. The results show that there is a negative effect between rubber price and exchange rate for all selected countries.
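Maximum likelihood estimation of a two-component mixture is typically carried out with the EM algorithm. The sketch below is a generic univariate Gaussian version on synthetic data; the paper's rubber-price/exchange-rate model is not reproduced.

```python
import math, random

def em_two_gaussians(data, iters=200):
    """Plain EM for a two-component univariate Gaussian mixture."""
    mu1, mu2 = min(data), max(data)
    s1 = s2 = (max(data) - min(data)) / 4.0 or 1.0
    w = 0.5
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point
        r = []
        for x in data:
            p1 = w * math.exp(-(x - mu1) ** 2 / (2 * s1 * s1)) / s1
            p2 = (1 - w) * math.exp(-(x - mu2) ** 2 / (2 * s2 * s2)) / s2
            r.append(p1 / (p1 + p2))
        n1 = sum(r); n2 = len(data) - n1
        # M-step: update weight, means, and standard deviations
        w = n1 / len(data)
        mu1 = sum(ri * x for ri, x in zip(r, data)) / n1
        mu2 = sum((1 - ri) * x for ri, x in zip(r, data)) / n2
        s1 = math.sqrt(sum(ri * (x - mu1) ** 2 for ri, x in zip(r, data)) / n1) or 1e-6
        s2 = math.sqrt(sum((1 - ri) * (x - mu2) ** 2 for ri, x in zip(r, data)) / n2) or 1e-6
    return w, (mu1, s1), (mu2, s2)

random.seed(1)
data = [random.gauss(0.0, 0.5) for _ in range(300)] + \
       [random.gauss(5.0, 0.5) for _ in range(300)]
w, c1, c2 = em_two_gaussians(data)
```

Each EM iteration is guaranteed not to decrease the likelihood, which is why the iteration converges to a (local) maximum likelihood estimate of the mixture parameters.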
Integrable Time-Dependent Quantum Hamiltonians
Sinitsyn, Nikolai A.; Yuzbashyan, Emil A.; Chernyak, Vladimir Y.; Patra, Aniket; Sun, Chen
2018-05-01
We formulate a set of conditions under which the nonstationary Schrödinger equation with a time-dependent Hamiltonian is exactly solvable analytically. The main requirement is the existence of a non-Abelian gauge field with zero curvature in the space of system parameters. Known solvable multistate Landau-Zener models satisfy these conditions. Our method provides a strategy to incorporate time dependence into various quantum integrable models while maintaining their integrability. We also validate some prior conjectures, including the solution of the driven generalized Tavis-Cummings model.
Liu, Yikan
2015-01-01
In this paper, we establish a strong maximum principle for fractional diffusion equations with multiple Caputo derivatives in time, and investigate a related inverse problem of practical importance. Exploiting the solution properties and the involved multinomial Mittag-Leffler functions, we improve the weak maximum principle for the multi-term time-fractional diffusion equation to a stronger one, which is parallel to that for its single-term counterpart as expected. As a direct application, w...
INTEGRAL EDUCATION, TIME AND SPACE: PROBLEMATIZING CONCEPTS
Directory of Open Access Journals (Sweden)
Ana Elisa Spaolonzi Queiroz Assis
2018-03-01
Integral education, despite being on the public policy agenda for some decades, still carries disparities related to its concept. In this sense, this article aims to problematize not only the concepts of integral education but also the categories of time and space contained in the journal Em Aberto, organized and published by the National Institute of Educational Studies Anísio Teixeira (INEP), numbers 80 (2009) and 88 (2012), respectively entitled "Educação Integral e tempo integral" and "Políticas de educação integral em jornada ampliada". The methodology is based on Bardin's content analysis, respecting the steps of pre-analysis (the research corpus formed by the texts in the journals); material exploration (reading the texts, encoding data, and choosing the registration units for categorization); and processing and interpretation of results, based on Saviani's Historical-Critical Pedagogy. The work reveals convergent and divergent conceptual multiplicity, provoking a discussion about a critical conception of integral education. Keywords: Integral Education. Historical-Critical Pedagogy. Content Analysis.
Symplectic integrators with adaptive time steps
Richardson, A. S.; Finn, J. M.
2012-01-01
In recent decades, there have been many attempts to construct symplectic integrators with variable time steps, with rather disappointing results. In this paper, we identify the causes for this lack of performance, and find that they fall into two categories. In the first, the time step is considered a function of time alone, Δ = Δ(t). In this case, backward error analysis shows that while the algorithms remain symplectic, parametric instabilities may arise because of resonance between oscillations of Δ(t) and the orbital motion. In the second category the time step is a function of phase space variables Δ = Δ(q, p). In this case, the system of equations to be solved is analyzed by introducing a new time variable τ with dt = Δ(q, p) dτ. The transformed equations are no longer in Hamiltonian form, and thus do not benefit from integration methods which would be symplectic for Hamiltonian systems. We analyze two methods for integrating the transformed equations which do, however, preserve the structure of the original equations. The first is an extended phase space method, which has been successfully used in previous studies of adaptive time step symplectic integrators. The second, novel, method is based on a non-canonical mixed-variable generating function. Numerical trials for both of these methods show good results, without parametric instabilities or spurious growth or damping. It is then shown how to adapt the time step to an error estimate found by backward error analysis, in order to optimize the time-stepping scheme. Numerical results are obtained using this formulation and compared with other time-stepping schemes for the extended phase space symplectic method.
Energy drift in reversible time integration
International Nuclear Information System (INIS)
McLachlan, R I; Perlmutter, M
2004-01-01
Energy drift is commonly observed in reversible integrations of systems of molecular dynamics. We show that this drift can be modelled as a diffusion and that the typical energy error after time T is O(√T). (letter to the editor)
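The diffusion picture can be illustrated numerically: if each integration step contributes an unbiased random energy increment, the RMS energy error after T steps grows like √T. This is a toy model of the letter's claim, not its analysis; the step size eps and the trial count are arbitrary choices.

```python
import math, random

random.seed(0)

def rms_energy_error(T, trials=2000, eps=1e-3):
    """RMS energy error after T steps when each step adds an unbiased
    random increment of magnitude eps (the diffusion model of drift)."""
    total = 0.0
    for _ in range(trials):
        e = 0.0
        for _ in range(T):
            e += random.choice((-eps, eps))
        total += e * e
    return math.sqrt(total / trials)

r100 = rms_energy_error(100)
r400 = rms_energy_error(400)
# Quadrupling T should roughly double the RMS error (sqrt(T) scaling).
```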
Interconnect rise time in superconducting integrated circuits
International Nuclear Information System (INIS)
Preis, D.; Shlager, K.
1988-01-01
The influence of resistive losses on the voltage rise time of an integrated-circuit interconnection is reported. A distributed-circuit model is used to represent the interconnect. Numerous parametric curves are presented based on numerical evaluation of the exact analytical expression for the model's transient response. For the superconducting case, in which the series resistance of the interconnect approaches zero, the step-response rise time is longer but the signal strength increases significantly.
THE ACTIVE INTEGRATED CIRCULAR PROCESS – EXPRESSION OF MAXIMUM SYNTHESIS OF SUSTAINABLE DEVELOPMENT
Directory of Open Access Journals (Sweden)
Done Ioan
2015-06-01
The accelerated pace of economic growth, prompted by the need to reduce disparities between countries, has imposed in the last two decades the adoption of sustainable development principles, particularly as a result of the Rio Declaration on Environment and Development (1992) and the UNESCO Declaration in the fall of 1997. In the literature, sustainable development is considered, in essence, "an economic and social process that is characterized by a simultaneous and concerted action at global, regional and local level. Its objective is to provide living conditions both for the present and for the future. Sustainable development encompasses the economic, ecological, social and political aspects, linked through cultural and spiritual relationships" (Coşea, 2007). In Romania, achieving sustainable development is a major, difficult objective, because it must be done in terms of convergence with the demands of the economic, social, cultural and political context of the EU, and in terms of the completion of the transition to a functioning and competitive market economy. In this context, economic competitiveness must be pursued through reindustrialization and, not least, by harnessing the active integrated circular process. Gross value added and the profit chain in the structures of the active integrated circular process must reflect the interests of the forces involved (employers, employees and the state), thereby forming the basis of respect for the correlation between sustainable development, economic growth and increasing national wealth. The elimination or marginalization of certain links in the value and profit chain causes major disruptions or bankruptcy, with direct implications for recognizing and rewarding performance. Essentially, building the active integrated circular process will determine the maximization of profit, the foundation of satisfying all economic interests.
Hippocampal “Time Cells”: Time versus Path Integration
Kraus, Benjamin J.; Robinson, Robert J.; White, John A.; Eichenbaum, Howard; Hasselmo, Michael E.
2014-01-01
Recent studies have reported the existence of hippocampal “time cells,” neurons that fire at particular moments during periods when behavior and location are relatively constant. However, an alternative explanation of apparent time coding is that hippocampal neurons “path integrate” to encode the distance an animal has traveled. Here, we examined hippocampal neuronal firing patterns as rats ran in place on a treadmill, thus “clamping” behavior and location, while we varied the treadmill speed to distinguish time elapsed from distance traveled. Hippocampal neurons were strongly influenced by time and distance, and less so by minor variations in location. Furthermore, the activity of different neurons reflected integration over time and distance to varying extents, with most neurons strongly influenced by both factors and some significantly influenced by only time or distance. Thus, hippocampal neuronal networks captured both the organization of time and distance in a situation where these dimensions dominated an ongoing experience. PMID:23707613
Surface of Maximums of AR(2) Process Spectral Densities and its Application in Time Series Statistics
Directory of Open Access Journals (Sweden)
Alexander V. Ivanov
2017-09-01
Conclusions. The obtained formula for the surface of maximums of noise spectral densities makes it possible to determine the values of the AR(2) process characteristic polynomial coefficients for which a greater rate of convergence to zero of the large-deviation probabilities of the considered estimates can be achieved.
Monte Carlo Maximum Likelihood Estimation for Generalized Long-Memory Time Series Models
Mesters, G.; Koopman, S.J.; Ooms, M.
2016-01-01
An exact maximum likelihood method is developed for the estimation of parameters in a non-Gaussian nonlinear density function that depends on a latent Gaussian dynamic process with long-memory properties. Our method relies on the method of importance sampling and on a linear Gaussian approximating
The Crank Nicolson Time Integrator for EMPHASIS.
Energy Technology Data Exchange (ETDEWEB)
McGregor, Duncan Alisdair Odum; Love, Edward; Kramer, Richard Michael Jack
2018-03-01
We investigate the use of implicit time integrators for finite element time domain approximations of Maxwell's equations in vacuum. We discretize Maxwell's equations in time using Crank-Nicolson and in 3D space using compatible finite elements. We solve the system by taking a single step of Newton's method and inverting the eddy-current Schur complement, allowing for the use of standard preconditioning techniques. This approach also generalizes to more complex material models that can include the unsplit PML. We present verification results and demonstrate performance at CFL numbers up to 1000.
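For a linear system y' = Ay, one Crank-Nicolson step solves (I - dt/2 · A) y_{n+1} = (I + dt/2 · A) y_n. The sketch below is a generic dense-matrix illustration of that update, not the EMPHASIS finite element implementation; for skew-symmetric A (the lossless case, in the spirit of Maxwell in vacuum) the scheme is unconditionally stable and exactly norm-preserving, which is why very large CFL-like steps remain usable.

```python
import numpy as np

def crank_nicolson(A, y0, dt, steps):
    """Advance y' = A y with Crank-Nicolson:
    (I - dt/2 A) y_{n+1} = (I + dt/2 A) y_n."""
    n = len(y0)
    I = np.eye(n)
    lhs = I - 0.5 * dt * A
    rhs = I + 0.5 * dt * A
    y = np.array(y0, dtype=float)
    for _ in range(steps):
        y = np.linalg.solve(lhs, rhs @ y)   # implicit solve each step
    return y

# Harmonic oscillator written as a first-order system (skew-symmetric A)
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
y = crank_nicolson(A, [1.0, 0.0], dt=0.5, steps=200)  # large step, still stable
```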
Path integration on space times with symmetry
International Nuclear Information System (INIS)
Low, S.G.
1985-01-01
Path integration on space times with symmetry is investigated using a definition of path integration of Gaussian integrators. Gaussian integrators, systematically developed using the theory of projective distributions, may be defined in terms of a Jacobi operator Green function. This definition of the path integral yields a semiclassical expansion of the propagator which is valid on caustics. The semiclassical approximation to the free particle propagator on symmetric and reductive homogeneous spaces is computed in terms of the complete solution of the Jacobi equation. The results are used to test the validity of using the Schwinger-DeWitt transform to compute an approximation to the coincidence limit of a field theory Green function from a WKB propagator. The method is found not to be valid except for certain special cases. These cases include manifolds constructed from the direct product of flat space and group manifolds, on which the free particle WKB approximation is exact, and the two-sphere. The multiple geodesic contribution to ⟨φ²⟩ on Schwarzschild in the neighborhood of ρ = 3M is computed using the transform.
International Nuclear Information System (INIS)
Zakeri, Behnam; Syri, Sanna; Rinne, Samuli
2015-01-01
Finland is to increase the share of RES (renewable energy sources) to 38% of final energy consumption by 2020. While the Finnish energy system benefits from local biomass resources and is deemed able to achieve this goal, increasing the share of other intermittent renewables, namely wind power and solar energy, is under development. Yet the maximum flexibility of the existing energy system in integrating renewable energy has not been investigated, which is an important step before undertaking new renewable energy obligations. This study aims at filling this gap by hourly analysis and comprehensive modeling of the energy system, including electricity, heat, and transportation, employing the EnergyPLAN tool. Focusing on technical and economic implications, we assess the maximum potential of different RESs separately (including bioenergy, hydropower, wind power, solar heating and PV, and heat pumps), as well as an optimal mix of different technologies. Furthermore, we propose a new index for assessing the maximum flexibility of energy systems in absorbing variable renewable energy. The results demonstrate that wind energy can be harvested at maximum levels of 18-19% of annual power demand (approx. 16 TWh/a) without major enhancements in the flexibility of the energy infrastructure. With today's energy demand, the maximum feasible renewable energy share for Finland is around 44-50%, achieved by an optimal mix of different technologies, which promises a 35% reduction in carbon emissions from the 2012 level. Moreover, the Finnish energy system is flexible enough to raise the share of renewables in gross electricity consumption to at most 69-72%. Higher shares of RES call for lower energy consumption (energy efficiency) and more flexibility in balancing energy supply and consumption (e.g. by energy storage). - Highlights: • By hourly analysis, we model the whole energy system of Finland. • With existing energy infrastructure, RES (renewable energy sources) in primary energy cannot go beyond 50%.
Aspects for Run-time Component Integration
DEFF Research Database (Denmark)
Truyen, Eddy; Jørgensen, Bo Nørregaard; Joosen, Wouter
2000-01-01
Component framework technology has become the cornerstone of building a family of systems and applications. A component framework defines a generic architecture into which specialized components can be plugged. As such, the component framework leverages the glue that connects the different inserted...... to dynamically integrate into the architecture of middleware systems new services that support non-functional aspects such as security, transactions, real-time....
Shen, Hua
2016-10-19
A maximum-principle-satisfying space-time conservation element and solution element (CE/SE) scheme is constructed to solve a reduced five-equation model coupled with the stiffened equation of state for compressible multifluids. We first derive a sufficient condition for CE/SE schemes to satisfy the maximum principle when solving a general conservation law. We then introduce a slope limiter to enforce the sufficient condition, which is applicable to both central and upwind CE/SE schemes. Finally, we implement the upwind maximum-principle-satisfying CE/SE scheme to solve the volume-fraction-based five-equation model for compressible multifluids. Several numerical examples are carried out to carefully examine the accuracy, efficiency, conservativeness and maximum-principle-satisfying property of the proposed approach.
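The role of a slope limiter can be illustrated with the classic minmod limiter, shown here purely for illustration: the paper's own limiter and its sufficient condition are specific to CE/SE schemes and are not reproduced.

```python
def minmod(a, b):
    """Classic minmod limiter: returns zero when the one-sided differences
    disagree in sign (a local extremum), otherwise the smaller slope."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def limited_slopes(u):
    """Cell slopes from one-sided differences, limited so that the linear
    reconstruction stays within the neighbouring cell averages, which is
    what keeps the reconstruction maximum-principle-satisfying."""
    return [minmod(u[i] - u[i - 1], u[i + 1] - u[i])
            for i in range(1, len(u) - 1)]

u = [0.0, 0.2, 1.0, 1.1, 0.5]    # a steep rise followed by a dip
s = limited_slopes(u)
```

At the local maximum the limited slope is exactly zero, so the reconstruction cannot overshoot the neighbouring cell averages.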
Shen, Hua; Wen, Chih-Yung; Parsani, Matteo; Shu, Chi-Wang
2016-01-01
Liu, Peng; Wang, Xiaoli
2017-01-01
A new maximum lateness scheduling model in which both cooperative games and variable processing times exist simultaneously is considered in this paper. The variable processing time of a job is described by an increasing or a decreasing function of the position of the job in the sequence. Two persons have to cooperate in order to process a set of jobs. Each of them has a single machine, and their processing cost is defined as the minimum value of the maximum lateness. All jobs have a common due ...
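The objective can be illustrated with a small sketch. The position-dependent growth model p_j · rate^(r-1) and the numbers below are assumptions for illustration only; the abstract does not specify the exact position function.

```python
def max_lateness(jobs, rate=1.05):
    """Maximum lateness L_max = max_j (C_j - d_j) of a given sequence when
    the actual processing time grows with position r: p_j(r) = p_j * rate**(r-1)
    (an illustrative deterioration model)."""
    t, worst = 0.0, float("-inf")
    for r, (p, d) in enumerate(jobs, start=1):
        t += p * rate ** (r - 1)     # position-dependent processing time
        worst = max(worst, t - d)    # lateness of the job completing at t
    return worst

jobs = [(2.0, 3.0), (1.0, 4.0), (3.0, 9.0)]   # (base time, due date)
edd = sorted(jobs, key=lambda j: j[1])        # earliest-due-date order
```

For fixed processing times the earliest-due-date order minimizes maximum lateness on a single machine; with position-dependent times that classical guarantee no longer holds in general, which is part of what makes such models non-trivial.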
Time-Lapse Measurement of Wellbore Integrity
Duguid, A.
2017-12-01
Well integrity is becoming more important as wells are used longer or repurposed. For CO2, shale gas, and other projects, it has become apparent that wells represent the most likely unintended migration pathway for fluids out of the reservoir. Comprehensive logging programs have been employed to determine the condition of legacy wells in North America. These studies provide examples of assessment technologies. Logging programs have included pulsed neutron logging, ultrasonic well mapping, and cement bond logging. While these studies provide examples of what can be measured, they have only conducted a single round of logging and cannot show whether the well has changed over time. Recent experience with time-lapse logging of three monitoring wells at a US Department of Energy sponsored CO2 project has shown the full value of similar tools. Time-lapse logging has shown that well integrity changes over time can be identified. It has also shown that the inclusion and location of monitoring technologies in the well and the choice of construction materials must be carefully considered. Two of the wells were approximately eight years old at the time of study; they were constructed with steel and fiberglass casing sections and had lines on the outside of the casing running to the surface. The third well was 68 years old when it was studied and was originally constructed as a production well. Repeat logs were collected six or eight years after initial logging. Time-lapse logging showed the evolution of the wells. The results identified locations where cement degraded over time and locations that showed little change. The ultrasonic well maps show clearly that the lines used to connect the monitoring technology to the surface are visible and have a local effect on cement isolation. Testing and sampling were conducted along with logging, providing insight into changes identified in the time-lapse log results. Point permeability testing was used to provide an in-situ point
Mixed time slicing in path integral simulations
International Nuclear Information System (INIS)
Steele, Ryan P.; Zwickl, Jill; Shushkov, Philip; Tully, John C.
2011-01-01
A simple and efficient scheme is presented for using different time slices for different degrees of freedom in path integral calculations. This method bridges the gap between full quantization and the standard mixed quantum-classical (MQC) scheme and, therefore, still provides quantum mechanical effects in the less-quantized variables. Underlying the algorithm is the notion that time slices (beads) may be 'collapsed' in a manner that preserves quantization in the less quantum mechanical degrees of freedom. The method is shown to be analogous to multiple-time step integration techniques in classical molecular dynamics. The algorithm and its associated error are demonstrated on model systems containing coupled high- and low-frequency modes; results indicate that convergence of quantum mechanical observables can be achieved with disparate bead numbers in the different modes. Cost estimates indicate that this procedure, much like the MQC method, is most efficient for only a relatively few quantum mechanical degrees of freedom, such as proton transfer. In this regime, however, the cost of a fully quantum mechanical simulation is determined by the quantization of the least quantum mechanical degrees of freedom.
Maximum Kolmogorov-Sinai Entropy Versus Minimum Mixing Time in Markov Chains
Mihelich, M.; Dubrulle, B.; Paillard, D.; Kral, Q.; Faranda, D.
2018-01-01
We establish a link between the maximization of the Kolmogorov-Sinai entropy (KSE) and the minimization of the mixing time for general Markov chains. Since maximizing the KSE is analytical and in general easier to compute than the mixing time, this link provides a new, faster method to approximate the minimum-mixing-time dynamics. It could be of interest in computer science and statistical physics for computations that use random walks on graphs that can be represented as Markov chains.
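For a concrete sense of the quantities involved, a toy two-state chain (our own illustration, not the paper's construction) shows the link: the transition matrix with the larger Kolmogorov-Sinai entropy also has the smaller relaxation time (inverse spectral gap), a standard proxy for the mixing time.

```python
import numpy as np

def ks_entropy(P, pi):
    """KSE of a stationary Markov chain: -sum_i pi_i sum_j P_ij log P_ij."""
    terms = np.zeros_like(P)
    mask = P > 0
    terms[mask] = P[mask] * np.log(P[mask])
    return -float(pi @ terms.sum(axis=1))

def relaxation_time(P):
    """1/(spectral gap); a standard proxy for the mixing time."""
    mags = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
    return 1.0 / (1.0 - mags[1])

def two_state(p):
    # Symmetric chain with switching probability p; stationary dist (1/2, 1/2).
    return np.array([[1.0 - p, p], [p, 1.0 - p]])

pi = np.array([0.5, 0.5])
slow, fast = two_state(0.1), two_state(0.4)
# The chain with the larger KSE also mixes faster (smaller relaxation time).
```

For this family the second eigenvalue is 1 - 2p, so increasing p toward 1/2 both raises the entropy rate and closes the spectral gap's complement, in line with the correspondence the paper establishes in general.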
Optimization of NANOGrav's time allocation for maximum sensitivity to single sources
International Nuclear Information System (INIS)
Christy, Brian; Anella, Ryan; Lommen, Andrea; Camuccio, Richard; Handzo, Emma; Finn, Lee Samuel
2014-01-01
Pulsar timing arrays (PTAs) are a collection of precisely timed millisecond pulsars (MSPs) that can search for gravitational waves (GWs) in the nanohertz frequency range by observing characteristic signatures in the timing residuals. The sensitivity of a PTA depends on the direction of the propagating GW source, the timing accuracy of the pulsars, and the allocation of the available observing time. The goal of this paper is to determine the optimal time allocation strategy among the MSPs in the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) for a single source of GW under a particular set of assumptions. We consider both an isotropic distribution of sources across the sky and a specific source in the Virgo cluster. This work improves on previous efforts by modeling the effect of intrinsic spin noise for each pulsar. We find that, in general, the array is optimized by maximizing time spent on the best-timed pulsars, with sensitivity improvements typically ranging from a factor of 1.5 to 4.
International Nuclear Information System (INIS)
Gutierrez, Rafael M.; Useche, Gina M.; Buitrago, Elias
2007-01-01
We present a procedure developed to detect stochastic and deterministic information contained in empirical time series, useful for characterizing and modeling different aspects of the complex phenomena represented by such data. This procedure is applied to a seismological time series to obtain new information for studying and understanding geological phenomena. We use concepts and methods from nonlinear dynamics and maximum entropy. The method allows an optimal analysis of the available information
de Nazelle, Audrey; Arunachalam, Saravanan; Serre, Marc L
2010-08-01
States in the USA are required to demonstrate future compliance with criteria air pollutant standards using both air quality monitors and model outputs. In the case of ozone, the demonstration tests are designed to rely heavily on measured values, due to their perceived objectivity and enforceable quality. Weight given to numerical models is diminished by integrating them in the calculations only in a relative sense. For unmonitored locations, the EPA has suggested the use of a spatial interpolation technique to assign current values. We demonstrate that this approach may lead to erroneous assignments of nonattainment and may make it difficult for States to establish future compliance. We propose a method that combines different sources of information to map air pollution, using the Bayesian Maximum Entropy (BME) framework. The approach gives precedence to measured values and integrates modeled data as a function of model performance. We demonstrate this approach in North Carolina, using the State's ozone monitoring network in combination with outputs from the Multiscale Air Quality Simulation Platform (MAQSIP) modeling system. We show that the BME data integration approach, compared to a spatial interpolation of measured data, improves the accuracy and the precision of ozone estimations across the state.
On Tuning PI Controllers for Integrating Plus Time Delay Systems
Directory of Open Access Journals (Sweden)
David Di Ruscio
2010-10-01
Full Text Available Some analytical results concerning PI controller tuning based on integrator plus time delay models are worked out and presented. A method is presented for obtaining PI controller parameters, Kp = alpha/(k*tau) and Ti = beta*tau, which ensures a given prescribed ratio delta = dtau_max/tau of the maximum time delay error, dtau_max, to the time delay, tau. The cornerstone of this method is the method product parameter, c = alpha*beta. Analytical relations between the PI controller parameters, Ti and Kp, and the time delay error parameter, delta, are presented, and we propose the settings beta = (c/a)*(delta+1) and alpha = a/(delta+1), which give Ti = (c/a)*(delta+1)*tau and Kp = a/((delta+1)*k*tau), where the parameter a is a constant in the method product parameter c = alpha*beta. It also turns out that the integral time, Ti, is linear in delta, and the proportional gain, Kp, is inversely proportional to delta+1. For the original Ziegler-Nichols (ZN) method this parameter is approximately c = 2.38, and the presented method may, e.g., be used to obtain new modified ZN parameters with increased robustness margins, as also documented in the paper.
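The tuning relations quoted above can be sketched directly in code. The constant a is treated here as a free method parameter (its value is not fixed by the abstract); c ≈ 2.38 is the Ziegler-Nichols value mentioned in the text, and the model is the integrator-plus-time-delay plant y' = k·u(t − tau).

```python
def pi_parameters(k, tau, delta, a, c=2.38):
    """PI parameters for an integrator-plus-delay model, per the relations
    Kp = alpha/(k*tau), Ti = beta*tau with alpha = a/(delta+1) and
    beta = (c/a)*(delta+1)."""
    alpha = a / (delta + 1.0)
    beta = (c / a) * (delta + 1.0)
    Kp = alpha / (k * tau)   # proportional gain
    Ti = beta * tau          # integral time
    return Kp, Ti

Kp, Ti = pi_parameters(k=1.0, tau=2.0, delta=1.6, a=0.79)
```

Note the invariant Kp * Ti = c/k that follows from c = alpha*beta: it is independent of delta and tau, while Ti alone grows linearly in delta and Kp shrinks as 1/(delta+1), exactly as the abstract states.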
The effects of disjunct sampling and averaging time on maximum mean wind speeds
DEFF Research Database (Denmark)
Larsén, Xiaoli Guo; Mann, J.
2006-01-01
Conventionally, the 50-year wind is calculated on the basis of the annual maxima of consecutive 10-min averages. Very often, however, the averages are saved with a temporal spacing of several hours; we call this disjunct sampling. It may also happen that the wind speeds are averaged over a longer time...
ANALYTICAL ESTIMATION OF MINIMUM AND MAXIMUM TIME EXPENDITURES OF PASSENGERS AT AN URBAN ROUTE STOP
Directory of Open Access Journals (Sweden)
Gorbachov, P.
2013-01-01
Full Text Available This paper deals with the definition of the average time spent by passengers waiting for transport vehicles at urban stops, and presents the results of analytical modeling of this value when the traffic schedule is unknown to the passengers, for two options of vehicle traffic management on the given route.
Directory of Open Access Journals (Sweden)
Peng Liu
2017-01-01
Full Text Available A new maximum lateness scheduling model in which both cooperative games and variable processing times exist simultaneously is considered in this paper. The variable processing time of a job is described by an increasing or a decreasing function of the job's position in the sequence. Two persons have to cooperate in order to process a set of jobs. Each of them has a single machine, and their processing cost is defined as the minimum value of maximum lateness. All jobs have a common due date. The objective is to maximize the product of their rational positive cooperative profits. A division of the jobs should be negotiated to yield a reasonable cooperative profit allocation scheme acceptable to both. We propose sufficient and necessary conditions for the problems to have a positive integer solution.
Timing A Pulsed Thin Film Pyroelectric Generator For Maximum Power Density
International Nuclear Information System (INIS)
Smith, A.N.; Hanrahan, B.M.; Neville, C.J.; Jankowski, N.R.
2016-01-01
Pyroelectric thermal-to-electric energy conversion is accomplished by a cyclic process of thermally inducing polarization changes in the material under an applied electric field. The pyroelectric MEMS device investigated consisted of a thin film PZT capacitor with a platinum bottom electrode and an iridium oxide top electrode. Electric fields of 1-20 kV/cm with a 30% duty cycle and frequencies from 0.1-100 Hz were tested with a modulated continuous-wave IR laser with a duty cycle of 20%, creating temperature swings of 0.15-26 °C on the pyroelectric receiver. The net output power of the device was highly sensitive to the phase delay between the laser power and the applied electric field. A thermal model was developed to predict and explain the power loss associated with finite charge and discharge times. Excellent agreement was achieved between the theoretical model and the experimental results for the measured power density versus phase delay. Limitations on the charging and discharging rates result in reduced power and lower efficiency due to a reduced net work per cycle. (paper)
The Sidereal Time Variations of the Lorentz Force and Maximum Attainable Speed of Electrons
Nowak, Gabriel; Wojtsekhowski, Bogdan; Roblin, Yves; Schmookler, Barak
2016-09-01
The Continuous Electron Beam Accelerator Facility (CEBAF) at Jefferson Lab produces electrons that orbit through a known magnetic system. The electron beam's momentum can be determined from the radius of the beam's orbit. This project compares the beam orbit's radius while travelling in a transverse magnetic field with theoretical predictions from special relativity, which predict a constant beam orbit radius. Variations in the beam orbit's radius are found by comparing the beam's momentum entering and exiting a magnetic arc. Beam position monitors (BPMs) provide the information needed to calculate the beam momentum. Multiple BPMs are included in the analysis and fitted using the method of least squares to decrease statistical uncertainty. Preliminary results from data collected over a 24-hour period show that the relative momentum change was less than 10^-4. Further study will be conducted, including larger time spans and stricter cuts applied to the BPM data. The data from this analysis will be used in a larger experiment attempting to verify special relativity. While the project is not traditionally nuclear physics, it involves the same technology (the CEBAF accelerator) and the same methods (ROOT) as a nuclear physics experiment. DOE SULI Program.
DEFF Research Database (Denmark)
Kozlowski, Dawid; Worthington, Dave
2015-01-01
chain and discrete event simulation models, to provide an insightful analysis of the public hospital performance under the policy rules. The aim of this paper is to support the enhancement of the quality of elective patient care, to be brought about by better understanding of the policy implications...... on the utilization of public hospital resources. This paper illustrates the use of a queue modelling approach in the analysis of elective patient treatment governed by the maximum waiting time policy. Drawing upon the combined strengths of analytic and simulation approaches we develop both continuous-time Markov...
Leus, G.; Petré, F.; Moonen, M.
2004-01-01
In the downlink of DS-CDMA, frequency-selectivity destroys the orthogonality of the user signals and introduces multiuser interference (MUI). Space-time chip equalization is an efficient tool to restore the orthogonality of the user signals and suppress the MUI. Furthermore, multiple-input
An integration time adaptive control method for atmospheric composition detection of occultation
Ding, Lin; Hou, Shuai; Yu, Fei; Liu, Cheng; Li, Chao; Zhe, Lin
2018-01-01
When the sun is used as the light source for atmospheric composition detection, it is necessary to image the sun for accurate identification and stable tracking. Over the roughly 180 seconds of an occultation, the intensity of sunlight transmitted through the atmosphere changes greatly: the illumination varies by a factor of nearly 1100 between the maximum and minimum atmospheric paths, and the change can be as fast as a factor of 2.9 per second. It is therefore difficult to control the integration time of the sun-imaging camera. In this paper, a novel adaptive integration-time control method for occultation is presented. The method takes the distribution of gray values in the image as the reference variable and applies integral PID control concepts to solve the integration-time adaptive control problem of high-frequency imaging. Large-dynamic-range automatic control of the integration time during occultation can thereby be achieved.
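As a rough sketch of the idea (with an assumed sensor model and gain, not the authors' algorithm): if the mean gray value of the sun image is proportional to integration_time × illumination, a multiplicative correction in the log domain can hold the gray level near a target even when the illumination grows by a large factor every frame.

```python
import math

def track_exposure(illum_per_frame, target=128.0, gain=1.0):
    """Adjust integration time frame by frame so that the mean gray value
    tracks `target` under rapidly changing illumination (toy model)."""
    t_int = 1.0e-3                       # integration time, seconds
    grays = []
    for lum in illum_per_frame:
        gray = min(255.0, t_int * lum)   # 8-bit sensor response, clipped
        # Proportional correction in the log domain: t_int *= (target/gray)^gain
        t_int *= math.exp(gain * math.log(target / gray))
        grays.append(gray)
    return grays, t_int

# Illumination growing 1.5x per frame, mimicking the occultation dynamic range.
levels = [1.0e5 * 1.5 ** i for i in range(30)]
grays, final_t = track_exposure(levels)
```

With unit gain the controller lags the illumination by exactly one frame, so the gray value settles at target × growth-factor; a smaller gain trades tracking error for smoothness, which is where the integral term of a full PID scheme would help.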
Time for creative integration in medical sociology.
Levine, S
1995-01-01
The burgeoning of medical sociology has sometimes been accompanied by unfortunate parochialism and the presence of opposing intellectual camps that ignore and even impugn each other's work. We have lost opportunities to achieve creative discourse and integration of different perspectives, methods, and findings. At this stage we should consider how we can foster creative integration within our field.
Global integration in times of crisis
DEFF Research Database (Denmark)
Jensen, Camilla
shock) from other subsidiaries downstream in the value chain. While in a comparative perspective multinational subsidiaries are found to perform relatively better than local firms that are integrated differently (arms' length) in global production networks (e.g. offshoring outsourcing). This paper tries...... to reconcile these findings by testing a number of hypothesis about global integration strategies in the context of the global financial crisis and how it affected exporting among multinational subsidiaries operating out of Turkey. Controlling for the impact that depreciations and exchange rate volatility has...... integration strategies throughout the course of the global financial crisis....
SOCIAL INTEGRATION: TESTING ANTECEDENTS OF TIME SPENT ONLINE
Directory of Open Access Journals (Sweden)
Lily Suriani Mohd Arif
2013-07-01
Full Text Available The literature on the relationship of social integration and time spent online provides conflicting evidence of the relationship of social integration with time spent online. The study identifies and highlights the controversy and attempts to clarify the relationship of social integration with time spent online by decomposing the construct social integration into its affective and behavioral dimensions. The study tests antecedents and effects of time spent online in a random sample of senior-level undergraduate students at a public university in Malaysia. The findings indicated that while self-report measures of behavioral social integration did not predict time spent online, affective social integration had an inverse relationship with time spent online.
Integrated watershed analysis: adapting to changing times
Gordon H. Reeves
2013-01-01
Resource managers are increasingly required to conduct integrated analyses of aquatic and terrestrial ecosystems before undertaking any activities. There are a number of research studies on the impacts of management actions on these ecosystems, as well as a growing body of knowledge about ecological processes that affect them, particularly aquatic ecosystems, which...
Minimizing time for test in integrated circuit
Andonova, A. S.; Dimitrov, D. G.; Atanasova, N. G.
2004-01-01
The cost of testing integrated circuits represents a growing percentage of the total production cost. The former depends strictly on the length of the test session, and its reduction has been the target of many efforts in the past. This paper proposes a new method for reducing the test length by adopting a new architecture and exploiting an evolutionary optimisation algorithm. A prototype of the proposed approach was tested on ISCAS standard benchmarks and the experimental results s...
Almog, Assaf; Garlaschelli, Diego
2014-09-01
The dynamics of complex systems, from financial markets to the brain, can be monitored in terms of multiple time series of activity of the constituent units, such as stocks or neurons, respectively. While the main focus of time series analysis is on the magnitude of temporal increments, a significant piece of information is encoded into the binary projection (i.e. the sign) of such increments. In this paper we provide further evidence of this by showing strong nonlinear relations between binary and non-binary properties of financial time series. These relations are a novel quantification of the fact that extreme price increments occur more often when most stocks move in the same direction. We then introduce an information-theoretic approach to the analysis of the binary signature of single and multiple time series. Through the definition of maximum-entropy ensembles of binary matrices and their mapping to spin models in statistical physics, we quantify the information encoded into the simplest binary properties of real time series and identify the most informative property given a set of measurements. Our formalism is able to accurately replicate, and mathematically characterize, the observed binary/non-binary relations. We also obtain a phase diagram allowing us to identify, based only on the instantaneous aggregate return of a set of multiple time series, a regime where the so-called ‘market mode’ has an optimal interpretation in terms of collective (endogenous) effects, a regime where it is parsimoniously explained by pure noise, and a regime where it can be regarded as a combination of endogenous and exogenous factors. Our approach allows us to connect spin models, simple stochastic processes, and ensembles of time series inferred from partial information.
International Nuclear Information System (INIS)
Almog, Assaf; Garlaschelli, Diego
2014-01-01
The dynamics of complex systems, from financial markets to the brain, can be monitored in terms of multiple time series of activity of the constituent units, such as stocks or neurons, respectively. While the main focus of time series analysis is on the magnitude of temporal increments, a significant piece of information is encoded into the binary projection (i.e. the sign) of such increments. In this paper we provide further evidence of this by showing strong nonlinear relations between binary and non-binary properties of financial time series. These relations are a novel quantification of the fact that extreme price increments occur more often when most stocks move in the same direction. We then introduce an information-theoretic approach to the analysis of the binary signature of single and multiple time series. Through the definition of maximum-entropy ensembles of binary matrices and their mapping to spin models in statistical physics, we quantify the information encoded into the simplest binary properties of real time series and identify the most informative property given a set of measurements. Our formalism is able to accurately replicate, and mathematically characterize, the observed binary/non-binary relations. We also obtain a phase diagram allowing us to identify, based only on the instantaneous aggregate return of a set of multiple time series, a regime where the so-called ‘market mode’ has an optimal interpretation in terms of collective (endogenous) effects, a regime where it is parsimoniously explained by pure noise, and a regime where it can be regarded as a combination of endogenous and exogenous factors. Our approach allows us to connect spin models, simple stochastic processes, and ensembles of time series inferred from partial information. (paper)
?Just-in-Time? Battery Charge Depletion Control for PHEVs and E-REVs for Maximum Battery Life
Energy Technology Data Exchange (ETDEWEB)
DeVault, Robert C [ORNL
2009-01-01
Conventional methods of operating Plug-in Hybrid Vehicles first discharge the battery to a minimum State of Charge (SOC) before switching to charge-sustaining operation. This is very demanding on the battery, maximizing the number of trips ending with a depleted battery and maximizing the distance driven on a depleted battery over the vehicle's life. Several methods have been proposed to reduce the number of trips ending with a deeply discharged battery and also eliminate the need for extended driving on a depleted battery. An optimum SOC can be maintained for long battery life before discharging the battery, so that the vehicle reaches an electric plug-in destination just as the battery reaches the minimum operating SOC. These Just-in-Time methods provide maximum effective battery life while drawing virtually the same electricity from the grid.
Monolithic integrated circuit for the strobed charge-to-time converter
International Nuclear Information System (INIS)
Bel'skij, V.I.; Bushnin, Yu.B.; Zimin, S.A.; Punzhin, Yu.N.; Sen'ko, V.A.; Soldatov, M.M.; Tokarchuk, V.P.
1985-01-01
The developed and commercially produced semiconductor circuit, the gated charge-to-time converter KR1101PD1, is described. The considered integrated circuit is a short-pulse charge-to-time converter with integration of the input current. The circuit is designed for the construction of time-to-pulse analog-to-digital converters used in multichannel detection systems for studying complex-topology processes. The input resistance of the circuit is 0.1 Ω, the permissible input current is 50 mA, and the maximum measured charge is 300-1000 pC
Ecotoxicology and macroecology – Time for integration
International Nuclear Information System (INIS)
Beketov, Mikhail A.; Liess, Matthias
2012-01-01
Despite considerable progress in ecotoxicology, it has become clear that this discipline cannot answer its central questions, such as “What are the effects of toxicants on biodiversity?” and “How are ecosystem functions and services affected by toxicants?”. We argue that if such questions are to be answered, a paradigm shift is needed. The current bottom-up approach of ecotoxicology, which uses small-scale experiments to predict effects on entire ecosystems and landscapes, should be merged with a top-down macroecological approach that is directly focused on ecological effects at large spatial scales and considers ecological systems as integral entities. Analysis of the existing methods in ecotoxicology, ecology, and environmental chemistry shows that such integration is currently possible. Therefore, we conclude that to tackle the current pressing challenges, ecotoxicology has to progress using both the bottom-up and top-down approaches, similar to digging a tunnel from both ends at once. - To tackle the current pressing challenges, ecotoxicology has to progress using both the bottom-up experimental and top-down observational approaches.
Advanced integrated real-time clinical displays.
Kruger, Grant H; Tremper, Kevin K
2011-09-01
Intelligent medical displays have the potential to improve patient outcomes by integrating multiple physiologic signals, exhibiting high sensitivity and specificity, and reducing information overload for physicians. Research findings have suggested that information overload and distractions caused by patient care activities and alarms generated by multiple monitors in acute care situations, such as the operating room and the intensive care unit, may produce situations that negatively impact the outcomes of patients under anesthesia. This can be attributed to shortcomings of human-in-the-loop monitoring and the poor specificity of existing physiologic alarms. Modern artificial intelligence techniques (ie, intelligent software agents) are demonstrating the potential to meet the challenges of next-generation patient monitoring and alerting. Copyright © 2011 Elsevier Inc. All rights reserved.
DEFF Research Database (Denmark)
Cavaliere, Giuseppe; Nielsen, Morten Ørregaard; Taylor, Robert
We consider the problem of conducting estimation and inference on the parameters of univariate heteroskedastic fractionally integrated time series models. We first extend existing results in the literature, developed for conditional sum-of squares estimators in the context of parametric fractional...... time series models driven by conditionally homoskedastic shocks, to allow for conditional and unconditional heteroskedasticity both of a quite general and unknown form. Global consistency and asymptotic normality are shown to still obtain; however, the covariance matrix of the limiting distribution...... of the estimator now depends on nuisance parameters derived both from the weak dependence and heteroskedasticity present in the shocks. We then investigate classical methods of inference based on the Wald, likelihood ratio and Lagrange multiplier tests for linear hypotheses on either or both of the long and short...
An integrated technique for developing real-time systems
Hooman, J.J.M.; Vain, J.
1995-01-01
The integration of conceptual modeling techniques, formal specification, and compositional verification is considered for real time systems within the knowledge engineering context. We define constructive transformations from a conceptual meta model to a real time specification language and give
Kim, Kyungsoo; Lim, Sung-Ho; Lee, Jaeseok; Kang, Won-Seok; Moon, Cheil; Choi, Ji-Woong
2016-01-01
Electroencephalograms (EEGs) measure a brain signal that contains abundant information about the human brain function and health. For this reason, recent clinical brain research and brain computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal to noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of event related potential (ERP) signal that represents a brain’s response to a particular stimulus or a task. The averaging, however, is very sensitive to variable delays. In this study, we propose two time delay estimation (TDE) schemes based on a joint maximum likelihood (ML) criterion to compensate the uncertain delays which may be different in each trial. We evaluate the performance for different types of signals such as random, deterministic, and real EEG signals. The results show that the proposed schemes provide better performance than other conventional schemes employing averaged signal as a reference, e.g., up to 4 dB gain at the expected delay error of 10°. PMID:27322267
Directory of Open Access Journals (Sweden)
Kyungsoo Kim
2016-06-01
Full Text Available Electroencephalograms (EEGs) measure a brain signal that contains abundant information about the human brain function and health. For this reason, recent clinical brain research and brain computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal to noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of the event related potential (ERP) signal that represents a brain's response to a particular stimulus or a task. The averaging, however, is very sensitive to variable delays. In this study, we propose two time delay estimation (TDE) schemes based on a joint maximum likelihood (ML) criterion to compensate the uncertain delays which may be different in each trial. We evaluate the performance for different types of signals such as random, deterministic, and real EEG signals. The results show that the proposed schemes provide better performance than other conventional schemes employing the averaged signal as a reference, e.g., up to 4 dB gain at the expected delay error of 10°.
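As a baseline sketch of the delay-compensation problem (the classical cross-correlation estimator, deliberately simpler than the joint-ML schemes proposed in the paper), each trial can be aligned to a reference before averaging:

```python
import numpy as np

def estimate_delay(reference, trial):
    """Integer-sample delay of `trial` relative to `reference`,
    via the peak of the full cross-correlation."""
    xcorr = np.correlate(trial, reference, mode="full")
    # Index (len(reference) - 1) corresponds to zero lag.
    return int(np.argmax(xcorr)) - (len(reference) - 1)

rng = np.random.default_rng(0)
n, true_delay = 256, 7
template = np.sin(2 * np.pi * np.arange(n) / 32.0)   # stand-in ERP waveform
trial = np.roll(template, true_delay) + 0.1 * rng.standard_normal(n)
```

Once the per-trial delays are estimated, trials are shifted back into register before averaging; the paper's contribution is estimating those delays jointly under an ML criterion rather than against a noisy averaged reference.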
Integration time for the perception of depth from motion parallax.
Nawrot, Mark; Stroyan, Keith
2012-04-15
The perception of depth from relative motion is believed to be a slow process that "builds-up" over a period of observation. However, in the case of motion parallax, the potential accuracy of the depth estimate suffers as the observer translates during the viewing period. Our recent quantitative model for the perception of depth from motion parallax proposes that relative object depth (d) can be determined from retinal image motion (dθ/dt), pursuit eye movement (dα/dt), and fixation distance (f) by the formula: d/f≈dθ/dα. Given the model's dynamics, it is important to know the integration time required by the visual system to recover dα and dθ, and then estimate d. Knowing the minimum integration time reveals the incumbent error in this process. A depth-phase discrimination task was used to determine the time necessary to perceive depth-sign from motion parallax. Observers remained stationary and viewed a briefly translating random-dot motion parallax stimulus. Stimulus duration varied between trials. Fixation on the translating stimulus was monitored and enforced with an eye-tracker. The study found that relative depth discrimination can be performed with presentations as brief as 16.6 ms, with only two stimulus frames providing both retinal image motion and the stimulus window motion for pursuit (mean range=16.6-33.2 ms). This was found for conditions in which, prior to stimulus presentation, the eye was engaged in ongoing pursuit or the eye was stationary. A large high-contrast masking stimulus disrupted depth-discrimination for stimulus presentations less than 70-75 ms in both pursuit and stationary conditions. This interval might be linked to ocular-following response eye-movement latencies. We conclude that neural mechanisms serving depth from motion parallax generate a depth estimate much more quickly than previously believed. We propose that additional sluggishness might be due to the visual system's attempt to determine the maximum dθ/dα ratio
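The model formula d/f ≈ dθ/dα can be applied directly once the retinal image motion and pursuit rates are known; the numbers below are illustrative, not taken from the study:

```python
def relative_depth(dtheta_dt, dalpha_dt, fixation_distance):
    """Depth relative to the fixation point, d ~= f * (dtheta/dt)/(dalpha/dt),
    from retinal image motion, pursuit eye movement, and fixation distance."""
    return fixation_distance * (dtheta_dt / dalpha_dt)

# 0.5 deg/s of retinal motion against 5 deg/s of pursuit at 1 m fixation:
d = relative_depth(0.5, 5.0, 1.0)
```

The sign of dθ/dα carries the depth sign, which is exactly what the brief-presentation task probes: flipping the direction of retinal motion relative to pursuit flips the recovered depth from beyond fixation to in front of it.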
Exponential integrators in time-dependent density-functional calculations
Kidd, Daniel; Covington, Cody; Varga, Kálmán
2017-12-01
The integrating factor and exponential time differencing methods are implemented and tested for solving the time-dependent Kohn-Sham equations. Popular time propagation methods used in physics, as well as other robust numerical approaches, are compared to these exponential integrator methods in order to judge the relative merit of the computational schemes. We determine an improvement in accuracy of multiple orders of magnitude when describing dynamics driven primarily by a nonlinear potential. For cases of dynamics driven by a time-dependent external potential, the accuracy gains of the exponential integrator methods are smaller, but they still match or outperform the best of the conventional methods tested.
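A minimal example of why exponential integrators help (a scalar stiff linear ODE, not the Kohn-Sham equations themselves): the exponential (integrating-factor) step treats the linear part exactly, reproducing the true decay at any step size, while forward Euler diverges for the same step.

```python
import math

# u' = lam * u with stiff decay rate lam and a deliberately large step h.
lam, h, steps = -50.0, 0.1, 20
u_exp, u_fwd = 1.0, 1.0
for _ in range(steps):
    u_exp *= math.exp(lam * h)   # exponential (integrating-factor) Euler step
    u_fwd *= 1.0 + lam * h       # forward Euler step, unstable for |1+lam*h|>1
exact = math.exp(lam * h * steps)
```

For a full ETD1 scheme with a nonlinearity N(u), the step becomes u_{n+1} = e^{λh} u_n + (e^{λh} − 1)/λ · N(u_n); the stiff linear part is still handled exactly, which is the source of the accuracy gains the abstract reports for nonlinear-potential-driven dynamics.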
Yoo, Cheolhee; Im, Jungho; Park, Seonyoung; Quackenbush, Lindi J.
2018-03-01
Urban air temperature is considered a significant variable for a variety of urban issues, and analyzing the spatial patterns of air temperature is important for urban planning and management. However, insufficient weather stations limit accurate spatial representation of temperature within a heterogeneous city. This study used a random forest machine learning approach to estimate daily maximum and minimum air temperatures (Tmax and Tmin) for two megacities with different climate characteristics: Los Angeles, USA, and Seoul, South Korea. This study used eight time series of land surface temperature (LST) data from the Moderate Resolution Imaging Spectroradiometer (MODIS), with seven auxiliary variables: elevation, solar radiation, normalized difference vegetation index, latitude, longitude, aspect, and the percentage of impervious area. We found different relationships between the eight time-series LSTs and Tmax/Tmin for the two cities, and designed eight schemes with different input LST variables. The schemes were evaluated using the coefficient of determination (R2) and Root Mean Square Error (RMSE) from 10-fold cross-validation. The best schemes produced R2 of 0.850 and 0.777 and RMSE of 1.7 °C and 1.2 °C for Tmax and Tmin in Los Angeles, and R2 of 0.728 and 0.767 and RMSE of 1.1 °C and 1.2 °C for Tmax and Tmin in Seoul, respectively. LSTs obtained the day before were crucial for estimating daily urban air temperature. Estimated air temperature patterns showed that Tmax was highly dependent on the geographic factors (e.g., sea breeze, mountains) of the two cities, while Tmin showed marginally distinct temperature differences between built-up and vegetated areas in the two cities.
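The two skill scores used in the evaluation can be written out explicitly (toy observed/estimated temperature pairs below; the study computes these over 10-fold cross-validation):

```python
import math

def rmse(obs, est):
    """Root Mean Square Error between observed and estimated values."""
    return math.sqrt(sum((o - e) ** 2 for o, e in zip(obs, est)) / len(obs))

def r2(obs, est):
    """Coefficient of determination: 1 - SS_residual / SS_total."""
    mean = sum(obs) / len(obs)
    ss_res = sum((o - e) ** 2 for o, e in zip(obs, est))
    ss_tot = sum((o - mean) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

# Illustrative daily Tmax observations vs. model estimates, in deg C:
obs = [30.1, 28.4, 31.0, 27.2, 29.5]
est = [29.0, 28.9, 30.2, 27.9, 29.1]
```

Reporting both scores is informative because RMSE is in physical units (°C) while R² is scale-free, so a scheme can look strong on one and weak on the other when the day-to-day temperature variance differs between cities.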
Improving Music Genre Classification by Short-Time Feature Integration
DEFF Research Database (Denmark)
Meng, Anders; Ahrendt, Peter; Larsen, Jan
2005-01-01
Many different short-time features, using time windows in the size of 10-30 ms, have been proposed for music segmentation, retrieval and genre classification. However, often the available time frame of the music to make the actual decision or comparison (the decision time horizon) is in the range...... of seconds instead of milliseconds. The problem of making new features on the larger time scale from the short-time features (feature integration) has only received little attention. This paper investigates different methods for feature integration and late information fusion for music genre classification...
Rama, Aarti; Kesari, Shreekant; Das, Pradeep; Kumar, Vijay
2017-07-24
Extensive application of the routine insecticide dichlorodiphenyltrichloroethane (DDT) to control Phlebotomus argentipes (Diptera: Psychodidae), the proven vector of visceral leishmaniasis in India, has evoked resistance/tolerance against DDT, eventually nullifying DDT-dependent strategies to control this vector. Because tolerating an hour-long exposure to DDT is not challenging for resistant P. argentipes, estimating susceptibility by exposing sand flies to the insecticide for just an hour becomes a trivial and futile task. Therefore, this bioassay study was carried out to investigate the maximum exposure time for which DDT-resistant P. argentipes can endure the effect of DDT and survive. The mortality rate of a laboratory-reared DDT-resistant strain of P. argentipes exposed to DDT was studied at discriminating time intervals of 60 min, and it was concluded that highly resistant sand flies could withstand up to 420 min of exposure to this insecticide. Additionally, the lethal time for female P. argentipes was observed to be higher than for males, suggesting that females are more resistant to DDT's toxicity. Our results support monitoring the tolerance limit with respect to time and hence point towards an urgent need to change the World Health Organization's protocol for susceptibility identification in resistant P. argentipes.
Time resolved spectroscopy of GRB 030501 using INTEGRAL
DEFF Research Database (Denmark)
Beckmann, V.; Borkowski, J.; Courvoisier, T.J.L.
2003-01-01
The gamma-ray instruments on-board INTEGRAL offer an unique opportunity to perform time resolved analysis on GRBs. The imager IBIS allows accurate positioning of GRBs and broad band spectral analysis, while SPI provides high resolution spectroscopy. GRB 030501 was discovered by the INTEGRAL Burst...... the Ulysses and RHESSI experiments....
Time Varying Market Integration and Expected Returns in Emerging Markets
de Jong, F.C.J.M.; de Roon, F.A.
2001-01-01
We use a simple model in which the expected returns in emerging markets depend on their systematic risk as measured by their beta relative to the world portfolio as well as on the level of integration in that market.The level of integration is a time-varying variable that depends on the market value
Space-time transformations in radial path integrals
International Nuclear Information System (INIS)
Steiner, F.
1984-09-01
Nonlinear space-time transformations in the radial path integral are discussed. A transformation formula is derived, which relates the original path integral to the Green's function of a new quantum system with an effective potential containing an observable quantum correction proportional to (h/2π)². As an example the formula is applied to spherical Brownian motion. (orig.)
Path integral solution for some time-dependent potential
International Nuclear Information System (INIS)
Storchak, S.N.
1989-12-01
The quantum-mechanical problem with a time-dependent potential is solved by the path integral method. The solution is obtained by the application of the previously derived general formula for rheonomic homogeneous point transformation and reparametrization in the path integral. (author). 4 refs
An extended Halanay inequality of integral type on time scales
Directory of Open Access Journals (Sweden)
Boqun Ou
2015-07-01
In this paper, we obtain a Halanay-type inequality of integral type on time scales which improves and extends some earlier results for both the continuous and discrete cases. Several illustrative examples are also given.
A CMOS integrated timing discriminator circuit for fast scintillation counters
International Nuclear Information System (INIS)
Jochmann, M.W.
1998-01-01
Based on a zero-crossing discriminator using a CR differentiation network for pulse shaping, a new CMOS integrated timing discriminator circuit is proposed for fast (t_r ≥ 2 ns) scintillation counters at the cooler synchrotron COSY-Juelich. By eliminating the input signal's amplitude information by means of an analog continuous-time divider, a normalized pulse shape at the zero-crossing point is obtained over a wide dynamic input amplitude range. In combination with an arming comparator and a monostable multivibrator, this yields a highly precise timing discriminator circuit that is expected to be useful in various time measurement applications. First measurement results of a CMOS integrated logarithmic amplifier, which is part of the analog continuous-time divider, agree well with the corresponding simulations. Moreover, SPICE simulations of the integrated discriminator circuit promise a time walk well below 200 ps (FWHM) over a 40 dB input amplitude dynamic range
Precise digital integration in wide time range: theory and realization
International Nuclear Information System (INIS)
Batrakov, A.M.; Pavlenko, A.V.
2017-01-01
The digital integration method based on high-speed precision analog-to-digital converters (ADCs) has come into wide use over recent years. The paper analyzes the limitations of this method that are caused by the signal properties, the ADC sampling rate, and the noise spectral density of the ADC signal path. This analysis made it possible to create digital integrators with accurate synchronization and to achieve an integration error of less than 10⁻⁵ in the time range from microseconds to tens of seconds. The structure of the integrator is described and its basic parameters are presented. The capabilities of different ADC chips in terms of their applicability to digital integrators are discussed, and a comparison with other integrating devices is presented.
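A minimal numerical illustration of the underlying idea, assuming an idealized ADC and plain trapezoidal summation (the paper's hardware integrators are far more elaborate): a half-sine pulse is sampled and its digital integral compared with the exact value.

```python
# Illustrative digital integration of a sampled signal (not the authors'
# hardware): integrate a half-sine pulse sampled by an idealized ADC and
# compare with the exact analytic integral.
import numpy as np

fs = 1_000_000.0                     # assumed sampling rate, 1 MS/s
t_end = 1e-3                         # 1 ms pulse
t = np.arange(0.0, t_end, 1.0 / fs)
v = np.sin(np.pi * t / t_end)        # half-sine; exact integral = 2*t_end/pi

# Trapezoidal sum of the samples
integral = float(np.sum(0.5 * (v[1:] + v[:-1])) / fs)
exact = 2.0 * t_end / np.pi
rel_err = abs(integral - exact) / exact
print(rel_err)
```

Even this naive sum reaches a relative error far below 10⁻⁴ for a smooth signal; the paper's point is that reaching 10⁻⁵ over microseconds to tens of seconds additionally requires accurate synchronization and control of ADC noise.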
International Nuclear Information System (INIS)
Yamamoto, Seiichi
2008-01-01
For a multi-layer depth-of-interaction (DOI) detector using scintillators with different decay times, pulse shape analysis based on two different integration times is often used to distinguish the scintillators in the DOI direction. This method measures a partial integration and a full integration and calculates their ratio to obtain the pulse shape distribution. The full integration time is usually set to cover the full width of the scintillation pulse, but the partial integration time that gives the best separation of the pulse shape distribution is not obvious. To clarify this, a theoretical analysis and experiments were conducted in which the partial integration time was varied, using a scintillation detector made of GSO crystals with different amounts of Ce. A 1-in. round photomultiplier tube (PMT) optically coupled to GSO with 1.5 mol% Ce (decay time: 35 ns) and GSO with 0.5 mol% Ce (decay time: 60 ns) was used for the experiments. The signal from the PMT was digitally integrated with partial (50-150 ns) and full (160 ns) integration times, and the ratio of the two was calculated to obtain the pulse shape distribution. In the theoretical analysis, a partial integration time of 50 ns showed the largest distance between the two peaks of the pulse shape distribution. In the experiments, the distance was largest at 70-80 ns of partial integration time, while the peak-to-valley ratio was largest at 120-130 ns. Because the separation of the two peaks is determined by the peak-to-valley ratio, we conclude that the optimum partial integration time for this combination of GSOs is around 120-130 ns, considerably longer than the expected value
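The partial/full charge-ratio method can be sketched for ideal single-exponential pulses with the quoted decay times (35 ns for 1.5 mol% Ce, 60 ns for 0.5 mol% Ce). Noise is ignored, so this reproduces only the noiseless "distance between peaks" part of the analysis, not the experimental peak-to-valley behavior.

```python
# Partial/full integration ratio for ideal exponential scintillation pulses.
import math

def charge_ratio(tau_ns, t_partial_ns, t_full_ns=160.0):
    """Ratio of the partial integral (0..t_partial) to the full integral
    (0..t_full) of a pulse proportional to exp(-t/tau)."""
    partial = 1.0 - math.exp(-t_partial_ns / tau_ns)
    full = 1.0 - math.exp(-t_full_ns / tau_ns)
    return partial / full

for t_p in (50, 80, 120, 150):
    r_fast = charge_ratio(35.0, t_p)   # GSO, 1.5 mol% Ce
    r_slow = charge_ratio(60.0, t_p)   # GSO, 0.5 mol% Ce
    print(t_p, round(r_fast - r_slow, 3))
```

In this noiseless model the peak separation is largest near the shortest partial integration time (50 ns), in line with the abstract's theoretical result; the experimentally optimal 120-130 ns arises only once noise broadens the peaks.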
Martínez-García, Roberto; Ubeda-Sansano, Maria Isabel; Díez-Domingo, Javier; Pérez-Hoyos, Santiago; Gil-Salom, Manuel
2014-09-01
There is agreement to use simple formulae (expected bladder capacity and other age-based linear formulae) as a bladder capacity benchmark, but the real normal child's bladder capacity is unknown. The aims were to offer a systematic review of children's normal bladder capacity, to measure children's normal maximum voided volumes (MVVs), to construct models of MVVs and to compare them with the usual formulae. Computerized, manual and grey literature were reviewed until February 2013. This was an epidemiological, observational, transversal, multicenter study. A consecutive sample of healthy children aged 5-14 years, attending Primary Care centres with no urologic abnormality, was selected. Participants filled in a 3-day frequency-volume chart. Variables were the MVVs (maximum of 24 hr, nocturnal, and daytime maximum voided volumes); diuresis and its daytime and nighttime fractions; body-measure data; and gender. The consecutive steps method was used in a multivariate regression model. Twelve articles met the systematic review's criteria. Five hundred and fourteen cases were analysed. Three models, one for each of the MVVs, were built. All of them were better adjusted to exponential equations. Diuresis (not age) was the most significant factor. There was poor agreement between MVVs and the usual formulae. Nocturnal and daytime maximum voided volumes depend on several factors and are different; they should be used with different meanings in the clinical setting. Diuresis is the main factor for bladder capacity. This is the first model benchmarking normal MVVs with diuresis as the main factor. Current formulae are not suitable for clinical use. © 2013 Wiley Periodicals, Inc.
Integration of MDSplus in real-time systems
International Nuclear Information System (INIS)
Luchetta, A.; Manduchi, G.; Taliercio, C.
2006-01-01
RFX-mod makes extensive usage of real-time systems for feedback control and uses MDSplus to interface them to the main Data Acquisition system. For this purpose, the core of MDSplus has been ported to VxWorks, the operating system used for real-time control in RFX. Using this approach, it is possible to integrate real-time systems, but MDSplus is used only for non-real-time tasks, i.e. those tasks which are executed before and after the pulse and whose performance does not affect the system time constraints. More extensive use of MDSplus in real-time systems is foreseen, and a real-time layer for MDSplus is under development, which will provide access to memory-mapped pulse files, shared by the tasks running on the same CPU. Real-time communication will also be integrated in the MDSplus core to provide support for distributed memory-mapped pulse files
Kuypers, Dirk R J; Vanrenterghem, Yves
2004-11-01
The aims of this study were to determine whether disposition-related pharmacokinetic parameters such as T(max) and mean residence time (MRT) could be used as predictors of clinical efficacy of tacrolimus in renal transplant recipients, and to what extent these parameters would be influenced by clinical variables. We previously demonstrated, in a prospective pharmacokinetic study in de novo renal allograft recipients, that patients who experienced early acute rejection did not differ from patients free from rejection in terms of tacrolimus pharmacokinetic exposure parameters (dose-interval AUC, preadministration trough blood concentration, C(max), dose). However, recipients with acute rejection reached mean (SD) tacrolimus T(max) significantly faster than those who were free from rejection (0.96 [0.56] hour vs 1.77 [1.06] hours). Since neither clearance nor T(1/2) could explain this unusual finding, we used data from the previous study to calculate MRT from the concentration-time curves. As part of the previous study, 100 patients (59 male, 41 female; mean [SD] age, 51.4 [13.8] years; age range, 20-75 years) were enrolled. The calculated MRT was significantly shorter in recipients with acute allograft rejection (11.32 [0.31] hours vs 11.52 [0.28] hours; P = 0.02), and, like T(max), it was an independent risk factor for acute rejection in a multivariate logistic regression model (odds ratio, 0.092 [95% CI, 0.014-0.629]; P = 0.01). Analyzing the impact of demographic, transplantation-related, and biochemical variables on MRT, we found that increasing serum albumin and hematocrit concentrations were associated with a prolonged MRT. Both a faster T(max) and a shorter calculated MRT were associated with a higher incidence of early acute graft rejection. These findings suggest that a shorter transit time of tacrolimus in certain tissue compartments, rather than failure to obtain a maximum absolute tacrolimus blood concentration, might lead to inadequate immunosuppression early after transplantation.
Makos, Michał; Dzierżek, Jan; Nitychoruk, Jerzy; Zreda, Marek
2014-07-01
During the Last Glacial Maximum (LGM), long valley glaciers developed on the northern and southern sides of the High Tatra Mountains, Poland and Slovakia. Chlorine-36 exposure dating of moraine boulders suggests two major phases of moraine stabilization, at 26-21 ka (LGM I - maximum) and at 18 ka (LGM II). The dates suggest a significantly earlier maximum advance on the southern side of the range. Reconstructing the geometry of four glaciers in the Sucha Woda, Pańszczyca, Mlynicka and Velicka valleys allowed determining their equilibrium-line altitudes (ELAs) at 1460, 1460, 1650 and 1700 m asl, respectively. Based on a positive degree-day model, the mass balance and climatic parameter anomalies (temperature and precipitation) have been constrained for the LGM I advance. Modeling results indicate slightly different conditions between the northern and southern slopes. The N-S ELA gradient is consistent with a slightly higher temperature (at least 1 °C) or lower precipitation (15%) on the south-facing glaciers during LGM I. The precipitation distribution over the High Tatra Mountains indicates a potentially different LGM atmospheric circulation than at the present day, with reduced northwesterly inflow and increased southerly and westerly inflows of moist air masses.
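A positive degree-day calculation of the kind used to constrain the mass balance can be sketched as follows. The degree-day factor, temperature amplitude, and phase below are illustrative assumptions, not the paper's calibrated values.

```python
# Minimal positive degree-day (PDD) melt sketch: annual melt is a degree-day
# factor times the sum of daily mean temperatures above 0 degrees C.
import math

def annual_melt_mm(t_mean_c, t_amp_c, ddf_mm_per_degday=4.0):
    """Melt from a sinusoidal annual temperature cycle (illustrative)."""
    melt = 0.0
    for day in range(365):
        # Peak temperature assumed in mid-July (day ~196, northern hemisphere)
        t = t_mean_c + t_amp_c * math.cos(2.0 * math.pi * (day - 196) / 365)
        melt += ddf_mm_per_degday * max(t, 0.0)
    return melt

# Cooling the annual mean by 1 degree C reduces melt, which is the kind of
# sensitivity used to translate an ELA difference into a temperature anomaly.
print(annual_melt_mm(-4.0, 10.0), annual_melt_mm(-5.0, 10.0))
```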
The Role of Memory and Integration in Early Time Concepts.
Levin, Iris; And Others
1984-01-01
A total of 630 boys and girls from kindergarten to second grade were asked to compare durations that differ in beginning times with those that differ in ending times. Possible sources of children's failure to integrate beginning and end points when comparing durations were discussed. (Author/CI)
Doi, Yasuhiro; Matsuyama, Michinobu; Ikeda, Ryuji; Hashida, Masahiro
2016-07-01
This study was conducted to measure the recognition time of a test pattern and to investigate the effect of the maximum luminance of a medical-grade liquid-crystal display (LCD) on the recognition time. Landolt rings with four random orientations were used as signals in the test pattern, one on each of eight gray-scale steps. Ten observers input the orientation of the gap in the Landolt rings using cursor keys on the keyboard. The recognition times were measured automatically, from the display of the test pattern on the medical-grade LCD to the input of the orientation of the gap in the Landolt rings. The maximum luminance was set to one of four values (100, 170, 250, and 400 cd/m²), and the corresponding recognition times were measured. The average recognition times for each observer at maximum luminances of 100, 170, 250, and 400 cd/m² were 3.96 to 7.12 s, 3.72 to 6.35 s, 3.53 to 5.97 s, and 3.37 to 5.98 s, respectively. The results indicate that the observer's recognition time decreases as the luminance of the medical-grade LCD increases. Therefore, it is evident that the maximum luminance of a medical-grade LCD affects test-pattern recognition time.
Time-of-flight depth image enhancement using variable integration time
Kim, Sun Kwon; Choi, Ouk; Kang, Byongmin; Kim, James Dokyoon; Kim, Chang-Yeong
2013-03-01
Time-of-Flight (ToF) cameras are used for a variety of applications because they deliver depth information at a high frame rate. These cameras, however, suffer from challenging problems such as noise and motion artifacts. To increase the signal-to-noise ratio (SNR), the camera should calculate distance from a large amount of infrared light, which needs to be integrated over a long time. On the other hand, the integration time should be short enough to suppress motion artifacts. We propose a ToF depth imaging method that combines the advantages of short and long integration times, exploiting an image fusion scheme proposed for color imaging. To calibrate depth differences due to the change of integration times, a depth transfer function is estimated by analyzing the joint histogram of depths in the two images with different integration times. The depth images are then transformed into wavelet domains and fused into a depth image with suppressed noise and low motion artifacts. To evaluate the proposed method, we captured a moving bar of a metronome with different integration times. The experiment shows the proposed method can effectively remove motion artifacts while preserving an SNR comparable to depth images acquired with a long integration time.
Global Format for Conservative Time Integration in Nonlinear Dynamics
DEFF Research Database (Denmark)
Krenk, Steen
2014-01-01
The widely used classic collocation-based time integration procedures like Newmark, Generalized-alpha etc. generally work well within a framework of linear problems, but typically may encounter problems, when used in connection with essentially nonlinear structures. These problems are overcome....... In the present paper a conservative time integration algorithm is developed in a format using only the internal forces and the associated tangent stiffness at the specific time integration points. Thus, the procedure is computationally very similar to a collocation method, consisting of a series of nonlinear...... equivalent static load steps, easily implemented in existing computer codes. The paper considers two aspects: representation of nonlinear internal forces in a form that implies energy conservation, and the option of an algorithmic damping with the purpose of extracting energy from undesirable high...
Explicit solution of Calderon preconditioned time domain integral equations
Ulku, Huseyin Arda
2013-07-01
An explicit marching on-in-time (MOT) scheme for solving Calderon-preconditioned time domain integral equations is proposed. The scheme uses Rao-Wilton-Glisson and Buffa-Christiansen functions to discretize the domain and range of the integral operators and a PE(CE)m type linear multistep to march on in time. Unlike its implicit counterpart, the proposed explicit solver requires the solution of an MOT system with a Gram matrix that is sparse and well-conditioned independent of the time step size. Numerical results demonstrate that the explicit solver maintains its accuracy and stability even when the time step size is chosen as large as that typically used by an implicit solver. © 2013 IEEE.
A model of interval timing by neural integration.
Simen, Patrick; Balci, Fuat; de Souza, Laura; Cohen, Jonathan D; Holmes, Philip
2011-06-22
We show that simple assumptions about neural processing lead to a model of interval timing as a temporal integration process, in which a noisy firing-rate representation of time rises linearly on average toward a response threshold over the course of an interval. Our assumptions include: that neural spike trains are approximately independent Poisson processes, that correlations among them can be largely cancelled by balancing excitation and inhibition, that neural populations can act as integrators, and that the objective of timed behavior is maximal accuracy and minimal variance. The model accounts for a variety of physiological and behavioral findings in rodents, monkeys, and humans, including ramping firing rates between the onset of reward-predicting cues and the receipt of delayed rewards, and universally scale-invariant response time distributions in interval timing tasks. It furthermore makes specific, well-supported predictions about the skewness of these distributions, a feature of timing data that is usually ignored. The model also incorporates a rapid (potentially one-shot) duration-learning procedure. Human behavioral data support the learning rule's predictions regarding learning speed in sequences of timed responses. These results suggest that simple, integration-based models should play as prominent a role in interval timing theory as they do in theories of perceptual decision making, and that a common neural mechanism may underlie both types of behavior.
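The model's core mechanism (a noisy firing-rate representation ramping linearly toward a threshold) can be sketched as a discretized drift-diffusion process. Drift, noise level, and threshold below are illustrative, and the "learned" drift is simply set to threshold/target, a stand-in for the paper's duration-learning procedure.

```python
# Noisy linear accumulator timing a target interval: the response is the
# first threshold crossing. Illustrative parameters, not the fitted model.
import numpy as np

rng = np.random.default_rng(1)

def timed_responses(target_s, n_trials=500, dt=0.001, noise=0.1):
    drift = 1.0 / target_s          # drift chosen so threshold 1 is hit at target
    times = []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while x < 1.0 and t < 10.0:  # safety cap on trial length
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        times.append(t)
    return np.asarray(times)

rt = timed_responses(1.0)
print(round(rt.mean(), 2), round(rt.std() / rt.mean(), 2))
```

The resulting first-passage times follow a right-skewed (inverse Gaussian) distribution whose mean tracks the target and whose coefficient of variation is roughly constant across targets, the scale invariance the abstract emphasizes.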
Optimal Real-time Dispatch for Integrated Energy Systems
DEFF Research Database (Denmark)
Anvari-Moghaddam, Amjad; Guerrero, Josep M.; Rahimi-Kian, Ashkan
2016-01-01
With the emerging of small-scale integrated energy systems (IESs), there are significant potentials to increase the functionality of a typical demand-side management (DSM) strategy and typical implementation of building-level distributed energy resources (DERs). By integrating DSM and DERs...... into a cohesive, networked package that fully utilizes smart energy-efficient end-use devices, advanced building control/automation systems, and integrated communications architectures, it is possible to efficiently manage energy and comfort at the end-use location. In this paper, an ontology-driven multi......-agent control system with intelligent optimizers is proposed for optimal real-time dispatch of an integrated building and microgrid system considering coordinated demand response (DR) and DERs management. The optimal dispatch problem is formulated as a mixed integer nonlinear programing problem (MINLP...
Effects of integration time on in-water radiometric profiles.
D'Alimonte, Davide; Zibordi, Giuseppe; Kajiyama, Tamito
2018-03-05
This work investigates the effects of integration time on in-water downward irradiance E_d, upward irradiance E_u and upwelling radiance L_u profile data acquired with free-fall hyperspectral systems. Analyzed quantities are the subsurface value and the diffuse attenuation coefficient derived by applying linear and non-linear regression schemes. Case studies include oligotrophic waters (Case-1), as well as waters dominated by Colored Dissolved Organic Matter (CDOM) and Non-Algal Particles (NAP). Assuming a 24-bit digitization, measurements resulting from the accumulation of photons over integration times varying between 8 and 2048 ms are evaluated at depths corresponding to: 1) the beginning of each integration interval (Fst); 2) the end of each integration interval (Lst); 3) the average of the Fst and Lst values (Avg); and finally 4) values weighted accounting for the diffuse attenuation coefficient of water (Wgt). Statistical figures show that the effects of integration time can bias results well above 5% as a function of the depth definition. Results indicate the validity of the Wgt depth definition and the fair applicability of the Avg one. Instead, both the Fst and Lst depths should not be adopted since they may introduce pronounced biases in E_u and L_u regression products for highly absorbing waters. Finally, the study reconfirms the relevance of combining multiple radiometric casts into a single profile to increase the precision of regression products.
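The competing depth definitions can be illustrated for a single integration interval of a free-fall cast. The Wgt formula below (the depth at which exp(-Kd·z) equals its mean over the interval) is an assumed reading of the attenuation-weighted definition, not necessarily the authors' exact expression.

```python
# Depth assignments for one integration interval: first depth (Fst), last
# depth (Lst), their average (Avg), and an attenuation-weighted depth (Wgt).
# Numbers are illustrative.
import math

def weighted_depth(z1, z2, kd):
    """Depth at which exp(-kd*z) equals its mean over [z1, z2] (assumed Wgt)."""
    mean_att = (math.exp(-kd * z1) - math.exp(-kd * z2)) / (kd * (z2 - z1))
    return -math.log(mean_att) / kd

z_first, z_last, kd = 5.0, 6.0, 0.5   # meters, meters, 1/m
z_avg = 0.5 * (z_first + z_last)
z_wgt = weighted_depth(z_first, z_last, kd)
print(round(z_avg, 3), round(z_wgt, 3))
```

Because the signal decays exponentially with depth, the weighted depth sits slightly above the interval midpoint; the gap between the definitions grows with Kd and with the distance fallen during the integration time, which is why Fst and Lst bias results most in highly absorbing waters.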
Spruce, Joseph P.; Hargrove, William; Gasser, Gerald; Smoot, James; Kuper, Philip D.
2012-01-01
This presentation reviews the development, integration, and testing of Near Real Time (NRT) MODIS forest % maximum NDVI change products resident in the USDA Forest Service (USFS) ForWarn System. ForWarn is an Early Warning System (EWS) tool for detecting and tracking regionally evident forest change, and includes the U.S. Forest Change Assessment Viewer (FCAV), a publicly available on-line geospatial data viewer for visualizing and assessing the context of this apparent forest change. NASA Stennis Space Center (SSC) is working collaboratively with the USFS, ORNL, and USGS to contribute MODIS forest change products to ForWarn. These change products compare current NDVI, derived from expedited eMODIS data, to historical NDVI products derived from MODIS MOD13 data. A new suite of forest change products is computed every 8 days and posted to the ForWarn system; this includes three forest change products computed using three different historical baselines: 1) the previous year; 2) the previous three years; and 3) all previous years in the MODIS record going back to 2000. The change product inputs are maximum-value NDVI composites computed across a 24-day interval and refreshed every 8 days, so that the resulting images for the conterminous U.S. are predominantly cloud-free yet still retain temporally relevant fresh information on changes in forest canopy greenness. These forest change products are computed at the native nominal resolution of the input reflectance bands, 231.66 meters, which equates to approximately 5.4 hectares or 13.3 acres per pixel. The Time Series Product Tool, a MATLAB-based software package developed at NASA SSC, is used to temporally process, fuse, reduce noise, interpolate data voids, and re-aggregate the historical NDVI into 24-day composites; custom MATLAB scripts are then used to temporally process the eMODIS NDVIs so that they are in sync with the historical NDVI products. Prior to posting, an in-house snow mask classification product
A New time Integration Scheme for Cahn-hilliard Equations
Schaefer, R.
2015-06-01
In this paper we present a new integration scheme that can be applied to solving difficult non-stationary non-linear problems. It is obtained by a successive linearization of the Crank-Nicolson scheme, which is unconditionally stable but requires solving a non-linear equation at each time step. We applied our linearized scheme to the time integration of the challenging Cahn-Hilliard equation, modeling phase separation in fluids. At each time step the resulting variational equation is solved using a higher-order isogeometric finite element method with B-spline basis functions. The method was implemented in the PETIGA framework interfaced via the PETSc toolkit. The GMRES iterative solver was utilized for the solution of the resulting linear system at every time step. We also apply a simple adaptivity rule, which increases the time step size when the number of GMRES iterations is lower than 30. We compared our method with a non-linear, two-stage predictor-multicorrector scheme utilizing a sophisticated step-length adaptivity. We controlled the stability of our simulations by monitoring the Ginzburg-Landau free energy functional. The proposed integration scheme outperforms the two-stage competitor in terms of execution time while having a similar evolution of the free energy functional.
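The linearization idea can be shown on a scalar toy problem rather than on Cahn-Hilliard itself: for u' = -u³ the Crank-Nicolson step is nonlinear in the new value u_{n+1}, but freezing part of the nonlinearity at the old value (u_{n+1}³ ≈ u_n²·u_{n+1}) turns each step into a linear solve.

```python
# Toy linearized Crank-Nicolson for u' = -u**3.
# CN: u_new = u + (dt/2) * (-u**3 - u_new**3); replacing u_new**3 by
# u*u*u_new gives the closed-form linear update below.
def linearized_cn(u0, dt, n_steps):
    u = u0
    for _ in range(n_steps):
        u = u * (1.0 - 0.5 * dt * u * u) / (1.0 + 0.5 * dt * u * u)
    return u

u_final = linearized_cn(1.0, 0.01, 100)          # integrate to t = 1
exact = 1.0 / (1.0 + 2.0 * 1.0 * 1.0) ** 0.5     # u0 / sqrt(1 + 2*u0**2*t)
print(round(u_final, 4), round(exact, 4))
```

The linearization costs some temporal accuracy relative to a fully nonlinear CN solve, but each step needs only a linear system, which is the trade-off the abstract exploits at PDE scale with GMRES.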
A New time Integration Scheme for Cahn-hilliard Equations
Schaefer, R.; Smołka, M.; Dalcin, L.; Paszyński, M.
2015-01-01
A higher order space-time Galerkin scheme for time domain integral equations
Pray, Andrew J.; Beghein, Yves; Nair, Naveen V.; Cools, Kristof; Bagci, Hakan; Shanker, Balasubramaniam
2014-01-01
Stability of time domain integral equation (TDIE) solvers has remained an elusive goal for many years. Advancement of this research has largely progressed on four fronts: 1) exact integration, 2) Lubich quadrature, 3) smooth temporal basis functions, and 4) space-time separation of convolutions with the retarded potential. The latter method's efficacy in stabilizing solutions to the time domain electric field integral equation (TD-EFIE) was previously reported for first-order surface descriptions (flat elements) and zeroth-order functions as the temporal basis. In this work, we develop the methodology necessary to extend the scheme to higher order surface descriptions as well as to enable its use with higher order basis functions in both space and time. These basis functions are then used in a space-time Galerkin framework. A number of results are presented that demonstrate convergence in time. The viability of the space-time separation method in producing stable results is demonstrated experimentally for these examples.
Orientation, Evaluation, and Integration of Part-Time Nursing Faculty.
Carlson, Joanne S
2015-07-10
This study helps to quantify and describe orientation, evaluation, and integration practices pertaining to part-time clinical nursing faculty teaching in prelicensure nursing education programs. A researcher-designed Web-based survey was used to collect information from a convenience sample of part-time clinical nursing faculty teaching in prelicensure nursing programs. Survey questions focused on the amount and type of orientation, evaluation, and integration practices. Descriptive statistics were used to analyze results. Respondents reported, on average, four hours of orientation, with close to half reporting no more than two hours. Evaluative feedback was received much more often from students than from full-time faculty. Most respondents reported receiving some degree of mentoring and that it was easy to get help from full-time faculty. Respondents reported being most informed about student evaluation procedures, grading, and the steps to take when students are not meeting course objectives, and less informed about changes to ongoing curriculum and policy.
Numerical counting ratemeter with variable time constant and integrated circuits
International Nuclear Information System (INIS)
Kaiser, J.; Fuan, J.
1967-01-01
We present here the prototype of a numerical counting ratemeter, a special version of a variable time-constant frequency meter (1). The originality of this work lies in the fact that the change of time constant is carried out automatically. Since the criterion for this change is the accuracy of the announced result, the integration time is varied as a function of the frequency. For the prototype described in this report, the time constant varies from 1 s to 1 ms for frequencies in the range 10 Hz to 10 MHz. The prototype is built entirely of MECL-type integrated circuits from Motorola and is thus contained in two relatively small boxes. (authors) [fr
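The automatic time-constant selection can be sketched as follows. The fixed-target-count criterion is an assumed stand-in for the accuracy criterion mentioned in the abstract; only the 1 s to 1 ms clamp and the 10 Hz to 10 MHz range are taken from it.

```python
# Pick an integration time that collects roughly a fixed number of counts,
# so the relative statistical accuracy of the displayed rate stays constant.
# The target-count criterion is a hypothetical illustration.
def integration_time_s(freq_hz, target_counts=1000.0,
                       t_min=1e-3, t_max=1.0):
    """Integration time for ~target_counts events, clamped to [t_min, t_max]."""
    t = target_counts / freq_hz
    return min(max(t, t_min), t_max)

for f in (10.0, 1e4, 1e7):
    print(f, integration_time_s(f))
```

At the low end the clamp dominates (1 s at 10 Hz) and at the high end likewise (1 ms at 10 MHz), reproducing the prototype's stated time-constant range.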
General new time formalism in the path integral
International Nuclear Information System (INIS)
Pak, N.K.; Sokmen, I.
1983-08-01
We describe a general method of applying point canonical transformations to the path integral followed by the corresponding new time transformations aimed at reducing an arbitrary one-dimensional problem into an exactly solvable form. Our result is independent of operator ordering ambiguities by construction. (author)
Prospects for direct measurement of time-integrated Bs mixing
International Nuclear Information System (INIS)
Siccama, I.
1994-01-01
This note investigates the prospects of measuring time-integrated B_s mixing. Three inclusive decay modes of the B_s meson are discussed. For each reconstruction mode, the expected number of events and the different background channels are discussed. Estimates are given for the uncertainty on the mixing parameter χ_s. (orig.)
A Fully Integrated Discrete-Time Superheterodyne Receiver
Tohidian, M.; Madadi, I.; Staszewski, R.B.
2017-01-01
The zero/low intermediate frequency (IF) receiver (RX) architecture has enabled full CMOS integration. As the technology scales and wireless standards become ever more challenging, the issues related to time-varying dc offsets, the second-order nonlinearity, and flicker noise become more critical.
On the solution of high order stable time integration methods
Czech Academy of Sciences Publication Activity Database
Axelsson, Owe; Blaheta, Radim; Sysala, Stanislav; Ahmad, B.
2013-01-01
Roč. 108, č. 1 (2013), s. 1-22 ISSN 1687-2770 Institutional support: RVO:68145535 Keywords : evolution equations * preconditioners for quadratic matrix polynomials * a stiffly stable time integration method Subject RIV: BA - General Mathematics Impact factor: 0.836, year: 2013 http://www.boundaryvalueproblems.com/content/2013/1/108
Integrated optical delay lines for time-division multiplexers
Stopinski, S.T.; Malinowski, M.; Piramidowicz, R.; Kleijn, E.; Smit, M.K.; Leijtens, X.J.M.
2013-01-01
In this paper, we present a study of integrated optical delay lines (DLs) for application in optical time-division multiplexers. The investigated DLs are formed by spirally folded waveguides. The components were designed in a generic approach and fabricated in multi-project wafer runs on an
A revised method to calculate the concentration time integral of atmospheric pollutants
International Nuclear Information System (INIS)
Voelz, E.; Schultz, H.
1980-01-01
It is possible to calculate the spreading of a plume in the atmosphere under nonstationary and nonhomogeneous conditions by introducing the ''particle-in-cell'' method (PIC). This is a numerical method by which the transport of and the diffusion in the plume are reproduced in such a way that particles representing the concentration are moved time-step-wise in restricted regions (cells), separately with the advection velocity and the diffusion velocity. This gives a systematic advantage over the usually used steady-state Gaussian plume model. The fixed-point concentration time integral is calculated directly instead of being substituted by the locally integrated concentration at a constant time, as is done in the Gaussian model. In this way inaccuracies due to the above-mentioned computational technique may be avoided for short-time emissions, as may be seen from the fact that the two integrals do not lead to the same results. The PIC method also makes it possible to consider the height-dependent wind speed and its variations, while the Gaussian model can be used only with averaged wind data. The concentration time integral calculated by the PIC method results in higher maximum values at shorter distances from the source, an effect often observed in measurements. (author)
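A toy version of the scheme makes the directly accumulated time integral concrete. This is a one-dimensional sketch with hypothetical parameter values, not the authors' implementation:

```python
import random

def pic_time_integral(n_particles=3000, u=1.0, D=0.5, dt=0.01, n_steps=600,
                      cell=(3.8, 4.2), seed=1):
    """1-D toy particle-in-cell run.  Particles released at x = 0 are moved
    each time step by the advection velocity u plus a Gaussian random-walk
    diffusion step.  The fixed-point concentration time integral at the
    cell is accumulated directly, step by step, rather than being derived
    from a spatial integral at a fixed time as in a Gaussian plume model."""
    rng = random.Random(seed)
    x = [0.0] * n_particles
    step_sigma = (2.0 * D * dt) ** 0.5   # std. dev. of one diffusion step
    integral = 0.0
    for _ in range(n_steps):
        x = [xi + u * dt + rng.gauss(0.0, step_sigma) for xi in x]
        inside = sum(1 for xi in x if cell[0] <= xi < cell[1])
        integral += (inside / n_particles) * dt
    return integral
```

For a plume advected at unit velocity through a cell of width 0.4, the accumulated exposure is on the order of width/u; because the particles are simply moved per time step, nothing in the accumulation assumes a stationary release.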
Alcalá, Jesus; Palacios, David; Vazquez, Lorenzo; Juan Zamorano, Jose
2015-04-01
Andean glacial deposits are key records of climate fluctuations in the southern hemisphere. During the last decades, in situ cosmogenic nuclides have provided fresh and significant dates to determine past glacier behavior in this region. But there are still many important discrepancies, such as the impact of the Last Glacial Maximum or the influence of Late Glacial climatic events on glacial mass balances. Furthermore, glacial chronologies from many sites are still missing, such as HualcaHualca (15° 43' S; 71° 52' W; 6,025 masl), a high volcano of the Peruvian Andes located 70 km northwest of Arequipa. The goal of this study is to establish the age of the Maximum Glacier Extent (MGE) and deglaciation at HualcaHualca volcano. To achieve this objective, we focused on four valleys (Huayuray, Pujro Huayjo, Mollebaya and Mucurca) characterized by a well-preserved sequence of moraines and roches moutonnées. The method is based on geomorphological analysis supported by cosmogenic 36Cl surface exposure dating. 36Cl ages have been estimated with the CHLOE calculator and were compared with other central Andean glacial chronologies as well as paleoclimatological proxies. In Huayuray valley, exposure ages indicate that the MGE occurred ~ 18 - 16 ka. Later, the ice mass gradually retreated, but this process was interrupted by at least two readvances; the last one has been dated at ~ 12 ka. On the other hand, a 36Cl result reflects an MGE age of ~ 13 ka in Mollebaya valley. Also, two samples obtained in Pujro-Huayjo and Mucurca valleys associated with the MGE have an exposure age of 10-9 ka, but these are likely moraine boulders affected by exhumation or erosion processes. Deglaciation at HualcaHualca volcano began abruptly ~ 11.5 ka ago according to a 36Cl age from a polished and striated bedrock in Pujro Huayjo valley, presumably as a result of reduced precipitation as well as a global increase of temperatures. The glacier evolution at HualcaHualca volcano presents a high correlation with
van den Broek, PLC; van Egmond, J; van Rijn, CM; Takens, F; Coenen, AML; Booij, LHDJ
2005-01-01
Background: This study assessed the feasibility of online calculation of the correlation integral (C(r)) aiming to apply C(r)-derived statistics. For real-time application it is important to reduce calculation time. It is shown how our method works for EEG time series. Methods: To achieve online
Time-Varying Market Integration and Expected Returns in Emerging Markets
Jong, F.C.J.M. de; Roon, F.A. de
2001-01-01
We use a simple model in which the expected returns in emerging markets depend on their systematic risk as measured by their beta relative to the world portfolio as well as on the level of integration in that market.The level of integration is a time-varying variable that depends on the market value of the assets that can be held by domestic investors only versus the market value of the assets that can be traded freely.Our empirical analysis for 30 emerging markets shows that there are strong...
Energy conservation in Newmark based time integration algorithms
DEFF Research Database (Denmark)
Krenk, Steen
2006-01-01
Energy balance equations are established for the Newmark time integration algorithm, and for the derived algorithms with algorithmic damping introduced via averaging, the so-called α-methods. The energy balance equations form a sequence applicable to: Newmark integration of the undamped equations...... of motion, an extended form including structural damping, and finally the generalized form including structural as well as algorithmic damping. In all three cases the expression for energy, appearing in the balance equation, is the mechanical energy plus some additional terms generated by the discretization......
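The Newmark case of the energy balance can be checked numerically. A minimal sketch for an undamped single-degree-of-freedom oscillator (the parameter values are arbitrary): with the average-acceleration choice β = 1/4, γ = 1/2, the mechanical energy in the balance equation is conserved to round-off, with no algorithmic damping terms.

```python
def newmark_sdof(m, k, u0, v0, dt, n_steps, beta=0.25, gamma=0.5):
    """Newmark integration of the undamped equation m*u'' + k*u = 0.
    beta=1/4, gamma=1/2 is the average-acceleration case, for which the
    mechanical energy E = 0.5*m*v^2 + 0.5*k*u^2 is conserved exactly
    (to round-off) for a linear undamped system."""
    u, v = u0, v0
    a = -k * u / m
    states = [(u, v)]
    for _ in range(n_steps):
        u_pred = u + dt * v + dt * dt * (0.5 - beta) * a
        v_pred = v + dt * (1.0 - gamma) * a
        # enforce m*a_new + k*u_new = 0 with u_new = u_pred + beta*dt^2*a_new
        a = -k * u_pred / (m + k * beta * dt * dt)
        u = u_pred + beta * dt * dt * a
        v = v_pred + gamma * dt * a
        states.append((u, v))
    return states

def energy(m, k, u, v):
    return 0.5 * m * v * v + 0.5 * k * u * u

states = newmark_sdof(m=1.0, k=4.0, u0=1.0, v0=0.0, dt=0.05, n_steps=1000)
e0 = energy(1.0, 4.0, *states[0])
eN = energy(1.0, 4.0, *states[-1])
# eN matches e0 to round-off; with gamma > 1/2 the algorithmic damping
# terms in the balance would instead drain energy monotonically.
```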
Multisensory integration: the case of a time window of gesture-speech integration.
Obermeier, Christian; Gunter, Thomas C
2015-02-01
This experiment investigates the integration of gesture and speech from a multisensory perspective. In a disambiguation paradigm, participants were presented with short videos of an actress uttering sentences like "She was impressed by the BALL, because the GAME/DANCE...." The ambiguous noun (BALL) was accompanied by an iconic gesture fragment containing information to disambiguate the noun toward its dominant or subordinate meaning. We used four different temporal alignments between noun and gesture fragment: the identification point (IP) of the noun was either prior to (+120 msec), synchronous with (0 msec), or lagging behind the end of the gesture fragment (-200 and -600 msec). ERPs triggered to the IP of the noun showed significant differences for the integration of dominant and subordinate gesture fragments in the -200, 0, and +120 msec conditions. The outcome of this integration was revealed at the target words. These data suggest a time window for direct semantic gesture-speech integration ranging from at least -200 up to +120 msec. Although the -600 msec condition did not show any signs of direct integration at the homonym, significant disambiguation was found at the target word. An explorative analysis suggested that gesture information was directly integrated at the verb, indicating that there are multiple positions in a sentence where direct gesture-speech integration takes place. Ultimately, this would implicate that in natural communication, where a gesture lasts for some time, several aspects of that gesture will have their specific and possibly distinct impact on different positions in an utterance.
Hybrid integrated circuit for charge-to-time interval conversion
Energy Technology Data Exchange (ETDEWEB)
Basiladze, S.G.; Dotsenko, Yu.Yu.; Man' yakov, P.K.; Fedorchenko, S.N. (Joint Inst. for Nuclear Research, Dubna (USSR))
The hybrid integrated circuit for charge-to-time interval conversion with nanosecond input fast response is described. The circuit can be used in energy-measuring channels, time-to-digital converters and, in a modified variant, in amplitude-to-digital converters. The converter described consists of a buffer amplifier, a linear transmission circuit, a direct current source and a unit of time interval separation. The buffer amplifier is a current follower providing low input and high output resistances by means of current feedback. It is concluded that the described converter excelled the analogous QT100B circuit in a number of parameters, especially in thermostability.
Integral-Value Models for Outcomes over Continuous Time
DEFF Research Database (Denmark)
Harvey, Charles M.; Østerdal, Lars Peter
Models of preferences between outcomes over continuous time are important for individual, corporate, and social decision making, e.g., medical treatment, infrastructure development, and environmental regulation. This paper presents a foundation for such models. It shows that conditions on prefere...... on preferences between real- or vector-valued outcomes over continuous time are satisfied if and only if the preferences are represented by a value function having an integral form......Models of preferences between outcomes over continuous time are important for individual, corporate, and social decision making, e.g., medical treatment, infrastructure development, and environmental regulation. This paper presents a foundation for such models. It shows that conditions...
Directory of Open Access Journals (Sweden)
Chen Poyu
2013-01-01
Full Text Available Products made overseas but sold in Taiwan are very common. Regarding the cross-border or interregional production and marketing of goods, inventory decision-makers often have to think about how to determine the amount of purchases per cycle, the number of transport vehicles, the working hours of each transport vehicle, and the delivery by ground or air transport to sales offices in order to minimize the total cost of the inventory in unit time. This model assumes that the amount of purchases for each order cycle should allow all rented vehicles to be fully loaded and the transport times to reach the upper limit within the time period. The main research findings of this study included the search for the optimal solution of the integer planning of the model and the results of sensitivity analysis.
A Digitally Programmable Differential Integrator with Enlarged Time Constant
Directory of Open Access Journals (Sweden)
S. K. Debroy
1994-12-01
Full Text Available A new Operational Amplifier (OA)-RC integrator network is described. The novelties of the design are the use of a single grounded capacitor, ideal integration-function realization with dual-input capability, and design flexibility for an extremely large time constant involving an enlargement factor (K) using a product of resistor ratios. The digital control of K through a programmable resistor array (PRA) controlled by a microprocessor has also been implemented. The effect of the OA poles has been analyzed, which indicates degradation of the integrator Q at higher frequencies. An appropriate Q-compensation design scheme exhibiting 1:|A|² order of Q-improvement has been proposed, with supporting experimental observations.
Integration and timing of basic and clinical sciences education.
Bandiera, Glen; Boucher, Andree; Neville, Alan; Kuper, Ayelet; Hodges, Brian
2013-05-01
Medical education has traditionally been compartmentalized into basic and clinical sciences, with the latter being viewed as the skillful application of the former. Over time, the relevance of basic sciences has become defined by their role in supporting clinical problem solving rather than being, of themselves, a defining knowledge base of physicians. As part of the national Future of Medical Education in Canada (FMEC MD) project, a comprehensive empirical environmental scan identified the timing and integration of basic sciences as a key pressing issue for medical education. Using the literature review, key informant interviews, stakeholder meetings, and subsequent consultation forums from the FMEC project, this paper details the empirical basis for focusing on the role of basic science, the evidentiary foundations for current practices, and the implications for medical education. Despite a dearth of definitive relevant studies, opinions about how best to integrate the sciences remain strong. Resource allocation, political power, educational philosophy, and the shift from a knowledge-based to a problem-solving profession all influence the debate. There was little disagreement that both sciences are important, that many traditional models emphasized deep understanding of limited basic science disciplines at the expense of other relevant content such as social sciences, or that teaching the sciences contemporaneously rather than sequentially has theoretical and practical merit. Innovations in integrated curriculum design have occurred internationally. Less clear are the appropriate balance of the sciences, the best integration model, and solutions to the political and practical challenges of integrated curricula. New curricula tend to emphasize integration, development of more diverse physician competencies, and preparation of physicians to adapt to evolving technology and patients' expectations. Refocusing the basic/clinical dichotomy to a foundational
Directory of Open Access Journals (Sweden)
Chandra K Pandey
2017-01-01
Interpretation & conclusions: Our results show that the cut-off value for INR ≥2.6 and K time ≥3.05 min predict bleeding and MA ≥48.8 mm predicts non-bleeding in patients with cirrhosis undergoing central venous pressure catheter cannulation.
Furbish, David; Schmeeckle, Mark; Schumer, Rina; Fathel, Siobhan
2016-01-01
We describe the most likely forms of the probability distributions of bed load particle velocities, accelerations, hop distances, and travel times, in a manner that formally appeals to inferential statistics while honoring mechanical and kinematic constraints imposed by equilibrium transport conditions. The analysis is based on E. Jaynes's elaboration of the implications of the similarity between the Gibbs entropy in statistical mechanics and the Shannon entropy in information theory. By maximizing the information entropy of a distribution subject to known constraints on its moments, our choice of the form of the distribution is unbiased. The analysis suggests that particle velocities and travel times are exponentially distributed and that particle accelerations follow a Laplace distribution with zero mean. Particle hop distances, viewed alone, ought to be distributed exponentially. However, the covariance between hop distances and travel times precludes this result. Instead, the covariance structure suggests that hop distances follow a Weibull distribution. These distributions are consistent with high-resolution measurements obtained from high-speed imaging of bed load particle motions. The analysis brings us closer to choosing distributions based on our mechanical insight.
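The maximum-entropy argument for exponential velocities and travel times can be illustrated numerically: among distributions on the nonnegative reals with a fixed mean, the exponential maximizes differential entropy. A small check (the gamma family and the parameter values are chosen here purely for illustration):

```python
import math

def exp_entropy(mean):
    # Differential entropy of an exponential distribution with the given mean
    return 1.0 + math.log(mean)

def gamma_entropy(shape, mean):
    # Differential entropy of Gamma(shape k, scale theta) with k*theta = mean:
    #   h = k + ln(theta) + ln(Gamma(k)) + (1 - k) * psi(k)
    theta = mean / shape
    eps = 1e-6  # digamma psi(k) via a central difference of lgamma
    psi = (math.lgamma(shape + eps) - math.lgamma(shape - eps)) / (2.0 * eps)
    return shape + math.log(theta) + math.lgamma(shape) + (1.0 - shape) * psi

# Every gamma with the same mean but shape != 1 has strictly lower entropy;
# shape = 1 recovers the exponential itself.
mean = 2.0
for k in (0.5, 2.0, 5.0):
    print(k, gamma_entropy(k, mean), "<", exp_entropy(mean))
```

This is the sense in which the exponential choice for velocities and travel times is "unbiased": any other shape with the same mean constraint encodes extra, unjustified information.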
HMC algorithm with multiple time scale integration and mass preconditioning
Urbach, C.; Jansen, K.; Shindler, A.; Wenger, U.
2006-01-01
We present a variant of the HMC algorithm with mass preconditioning (Hasenbusch acceleration) and multiple time scale integration. We have tested this variant for standard Wilson fermions at β=5.6 and at pion masses ranging from 380 to 680 MeV. We show that in this situation its performance is comparable to the recently proposed HMC variant with domain decomposition as preconditioner. We give an update of the "Berlin Wall" figure, comparing the performance of our variant of the HMC algorithm to other published performance data. Advantages of the HMC algorithm with mass preconditioning and multiple time scale integration are that it is straightforward to implement and can be used in combination with a wide variety of lattice Dirac operators.
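The multiple time scale integration itself is easy to sketch. Below is a toy nested leapfrog in the Sexton-Weingarten spirit, with hypothetical "slow expensive" and "stiff cheap" forces standing in for the two Hamiltonian terms; nothing here is lattice QCD:

```python
def nested_leapfrog(q, p, dV1, dV2, dt, n_outer, n_inner):
    """Multiple time scale leapfrog for H = p^2/2 + V1(q) + V2(q): the
    "expensive" force -dV1 is applied on the outer step dt, the "cheap"
    stiff force -dV2 on the finer inner step dt/n_inner."""
    for _ in range(n_outer):
        p -= 0.5 * dt * dV1(q)
        dti = dt / n_inner
        for _ in range(n_inner):           # inner leapfrog over V2 only
            p -= 0.5 * dti * dV2(q)
            q += dti * p
            p -= 0.5 * dti * dV2(q)
        p -= 0.5 * dt * dV1(q)
    return q, p

# Hypothetical split: slow force q, stiff cheap force 100*q.
dV1 = lambda q: q
dV2 = lambda q: 100.0 * q
H = lambda q, p: 0.5 * p * p + 0.5 * q * q + 50.0 * q * q
q1, p1 = nested_leapfrog(1.0, 0.0, dV1, dV2, dt=0.05, n_outer=20, n_inner=10)
```

The update is time-reversible (flipping the momentum and integrating back recovers the start), which is what the Metropolis accept/reject step of HMC requires.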
Directory of Open Access Journals (Sweden)
Rasha Al-Hujazy
2018-03-01
Full Text Available Microfluidic platforms have received much attention in recent years. In particular, there is interest in combining spectroscopy with microfluidic platforms. This work investigates the integration of microfluidic platforms and terahertz time-domain spectroscopy (THz-TDS systems. A semiclassical computational model is used to simulate the emission of THz radiation from a GaAs photoconductive THz emitter. This model incorporates white noise with increasing noise amplitude (corresponding to decreasing dynamic range values. White noise is selected over other noise due to its contributions in THz-TDS systems. The results from this semiclassical computational model, in combination with defined sample thicknesses, can provide the maximum measurable absorption coefficient for a microfluidic-based THz-TDS system. The maximum measurable frequencies for such systems can be extracted through the relationship between the maximum measurable absorption coefficient and the absorption coefficient for representative biofluids. The sample thickness of the microfluidic platform and the dynamic range of the THz-TDS system play a role in defining the maximum measurable frequency for microfluidic-based THz-TDS systems. The results of this work serve as a design tool for the development of such systems.
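The interplay of dynamic range, sample thickness, and the maximum measurable absorption coefficient can be sketched with the closed form commonly used for transmission THz-TDS; the refractive index and dynamic-range numbers below are assumptions for illustration, not values from the article:

```python
import math

def alpha_max(dynamic_range, thickness_m, n_sample=2.0):
    """Maximum measurable absorption coefficient (1/m) of a transmission
    THz-TDS measurement, in the widely used closed form
        alpha_max = (2/d) * ln( DR * 4*n / (n + 1)^2 ),
    where DR is the amplitude dynamic range at the frequency of interest
    and n is the sample refractive index (n = 2 is an assumed value)."""
    fresnel = 4.0 * n_sample / (n_sample + 1.0) ** 2
    return (2.0 / thickness_m) * math.log(dynamic_range * fresnel)

# Halving the channel thickness doubles the absorption the system can
# still measure, which pushes the usable bandwidth to higher frequency.
for d_um in (50, 100, 200):
    print(d_um, alpha_max(1000.0, d_um * 1e-6) / 100.0, "1/cm")
```

Intersecting such an alpha_max curve with the (rising) absorption coefficient of a representative biofluid yields the maximum measurable frequency the abstract describes.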
Real-time hybrid simulation using the convolution integral method
International Nuclear Information System (INIS)
Kim, Sung Jig; Christenson, Richard E; Wojtkiewicz, Steven F; Johnson, Erik A
2011-01-01
This paper proposes a real-time hybrid simulation method that will allow complex systems to be tested within the hybrid test framework by employing the convolution integral (CI) method. The proposed CI method is potentially transformative for real-time hybrid simulation. The CI method can allow real-time hybrid simulation to be conducted regardless of the size and complexity of the numerical model and for numerical stability to be ensured in the presence of high frequency responses in the simulation. This paper presents the general theory behind the proposed CI method and provides experimental verification of the proposed method by comparing the CI method to the current integration time-stepping (ITS) method. Real-time hybrid simulation is conducted in the Advanced Hazard Mitigation Laboratory at the University of Connecticut. A seismically excited two-story shear frame building with a magneto-rheological (MR) fluid damper is selected as the test structure to experimentally validate the proposed method. The building structure is numerically modeled and simulated, while the MR damper is physically tested. Real-time hybrid simulation using the proposed CI method is shown to provide accurate results.
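The core of the CI idea, replacing time stepping of the numerical substructure by a precomputed impulse response convolved with the force history, can be sketched for a single-degree-of-freedom substructure; the parameter values are illustrative, not from the experiment:

```python
import math

def sdof_impulse_response(m, c, k, dt, n):
    """Sampled unit-impulse response of an underdamped SDOF oscillator:
    h(t) = exp(-zeta*wn*t) * sin(wd*t) / (m*wd)."""
    wn = math.sqrt(k / m)
    zeta = c / (2.0 * math.sqrt(k * m))
    wd = wn * math.sqrt(1.0 - zeta * zeta)
    return [math.exp(-zeta * wn * i * dt) * math.sin(wd * i * dt) / (m * wd)
            for i in range(n)]

def convolution_step(h, force_history, dt):
    """Current-step displacement from the discrete Duhamel (convolution)
    integral u_n = dt * sum_k f_k * h_{n-k}; once h is precomputed, no
    time-stepping integration of the numerical substructure is needed."""
    n = len(force_history) - 1
    return dt * sum(f * h[n - k] for k, f in enumerate(force_history))

# Sanity check: a unit step force settles at the static deflection 1/k.
dt = 0.01
h = sdof_impulse_response(m=1.0, c=0.8, k=4.0, dt=dt, n=5000)
u = convolution_step(h, [1.0] * 5000, dt)
```

Because the convolution uses the exact (sampled) impulse response, its stability does not depend on the step size the way an explicit time-stepping scheme's does, which is the advantage claimed for high-frequency content.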
System Integration for Real-Time Mobile Manipulation
Directory of Open Access Journals (Sweden)
Reza Oftadeh
2014-03-01
Full Text Available Mobile manipulators are one of the most complicated types of mechatronics systems. The performance of these robots in performing complex manipulation tasks is highly correlated with the synchronization and integration of their low-level components. This paper discusses in detail the mechatronics design of a four wheel steered mobile manipulator. It presents the manipulator's mechanical structure and electrical interfaces, designs low-level software architecture based on embedded PC-based controls, and proposes a systematic solution based on code generation products of MATLAB and Simulink. The remote development environment described here is used to develop real-time controller software and modules for the mobile manipulator under a POSIX-compliant, real-time Linux operating system. Our approach enables developers to reliably design controller modules that meet the hard real-time constraints of the entire low-level system architecture. Moreover, it provides a systematic framework for the development and integration of hardware devices with various communication mediums and protocols, which facilitates the development and integration process of the software controller.
Adaptive time-stepping Monte Carlo integration of Coulomb collisions
Särkimäki, K.; Hirvijoki, E.; Terävä, J.
2018-01-01
We report an accessible and robust tool for evaluating the effects of Coulomb collisions on a test particle in a plasma that obeys Maxwell-Jüttner statistics. The implementation is based on the Beliaev-Budker collision integral which allows both the test particle and the background plasma to be relativistic. The integration method supports adaptive time stepping, which is shown to greatly improve the computational efficiency. The Monte Carlo method is implemented for both the three-dimensional particle momentum space and the five-dimensional guiding center phase space. A detailed description is provided for both the physics and implementation of the operator. The focus is on adaptive integration of stochastic differential equations, which is an overlooked aspect among existing Monte Carlo implementations of Coulomb collision operators. We verify that our operator converges to known analytical results and demonstrate that careless implementation of the adaptive time step can lead to severely erroneous results. The operator is provided as a self-contained Fortran 95 module and can be included into existing orbit-following tools that trace either the full Larmor motion or the guiding center dynamics. The adaptive time-stepping algorithm is expected to be useful in situations where the collision frequencies vary greatly over the course of a simulation. Examples include the slowing-down of fusion products or other fast ions, and the Dreicer generation of runaway electrons as well as the generation of fast ions or electrons with ion or electron cyclotron resonance heating.
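The caution about adaptive stepping can be illustrated on a far simpler SDE than the Beliaev-Budker operator. A sketch on an Ornstein-Uhlenbeck toy problem (all values hypothetical): the step size is chosen from the deterministic time scale before the noise is drawn, which keeps the Euler-Maruyama scheme unbiased.

```python
import math, random

def ou_adaptive(x0, a, sigma, t_end, rng, rel_tol=0.05, dt_max=0.1):
    """Euler-Maruyama for the Ornstein-Uhlenbeck SDE dx = -a*x dt + sigma dW
    with adaptive stepping: dt is limited so the deterministic decay per
    step, a*dt, stays below rel_tol, capped by dt_max and by the time
    remaining.  Crucially, dt is fixed *before* the noise is drawn;
    letting dt depend on the realized noise is the kind of careless
    adaptivity that can give severely erroneous results."""
    t, x = 0.0, x0
    while t < t_end:
        dt = min(rel_tol / a, dt_max, t_end - t)
        if dt <= 1e-12:          # guard against a vanishing final step
            break
        x += -a * x * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
    return x

# Sample mean over many paths should track the exact E[x(t)] = x0*exp(-a*t).
rng = random.Random(42)
a, sigma, t_end, n = 2.0, 0.5, 1.0, 8000
mean = sum(ou_adaptive(1.0, a, sigma, t_end, rng) for _ in range(n)) / n
```

In a plasma setting the role of rel_tol/a is played by the local collision time, so the step shrinks automatically where collision frequencies are high, the efficiency gain the abstract reports.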
Evaluation of the filtered leapfrog-trapezoidal time integration method
International Nuclear Information System (INIS)
Roache, P.J.; Dietrich, D.E.
1988-01-01
An analysis and evaluation are presented for a new method of time integration for fluid dynamics proposed by Dietrich. The method, called the filtered leapfrog-trapezoidal (FLT) scheme, is analyzed for the one-dimensional constant-coefficient advection equation and is shown to have some advantages for quasi-steady flows. A modification (FLTW) using a weighted combination of FLT and leapfrog is developed which retains the advantages for steady flows, increases accuracy for time-dependent flows, and involves little coding effort. Merits and applicability are discussed.
Integration of Real-Time Data Into Building Automation Systems
Energy Technology Data Exchange (ETDEWEB)
Mark J. Stunder; Perry Sebastian; Brenda A. Chube; Michael D. Koontz
2003-04-16
The project goal was to investigate the possibility of using predictive real-time information from the Internet as an input to building management system algorithms. The objectives were to identify the types of information most valuable to commercial and residential building owners, managers, and system designers. To comprehensively investigate and document currently available electronic real-time information suitable for use in building management systems. Verify the reliability of the information and recommend accreditation methods for data and providers. Assess methodologies to automatically retrieve and utilize the information. Characterize equipment required to implement automated integration. Demonstrate the feasibility and benefits of using the information in building management systems. Identify evolutionary control strategies.
International Nuclear Information System (INIS)
Jang, Young Ill; June, Woon Kwan; Dong, Kyeong Rae
2007-01-01
In this study, the Bolus Tracking method was used to investigate the parameters affecting the time at which contrast media reaches 100 HU (T100) and to study the relationship between the parameters and T100, because the time for contrast media injected through the antecubital vein to reach the aorta differs from person to person. Using 64 MDCT Cardiac CT, the data were obtained from 100 patients (male: 50, female: 50, age distribution: 21–81, average age: 57.5) during July and September, 2007 by injecting the contrast media at 4 ml·sec⁻¹ through their antecubital vein, excluding patients who had difficulties holding their breath or had arrhythmia. Using a Somatom Sensation Cardiac 64 (Siemens), patients' height and weight were measured to determine their mean Heart Rate and BMI. Ejection Fraction was measured using the Argus Program at a Wizard Workstation. Variances of each parameter were analyzed against T100's variation with multiple comparison, and the correlations of Heart Rate, Ejection Fraction and BMI were analyzed as well. Regarding T100's variation caused by Heart Rate, Ejection Fraction and BMI variations, the higher patients' Heart Rate and Ejection Fraction were, the faster T100's variations caused by Heart Rate and Ejection Fraction were; the lower their Heart Rate and Ejection Fraction were, the slower T100's variations were, but T100's variations were not affected by BMI. In the correlation between T100 and the parameters, Heart Rate (p<0.01) and Ejection Fraction (p<0.05) were significant, but BMI was not significant (p>0.05). In the Heart Rate, Ejection Fraction and BMI groups defined by Fast (17 sec and less), Medium (18–21 sec) and Slow (22 sec and over), Heart Rate was significant at Fast and Slow, and Ejection Fraction was significant at Fast and Slow as well as Medium and Slow (p<0.05), but BMI was not statistically significant. Of the parameters (Heart Rate, Ejection Fraction and BMI) which would affect T100, Heart
Plaque reduction over time of an integrated oral hygiene system.
Nunn, Martha E; Ruhlman, C Douglas; Mallatt, Philip R; Rodriguez, Sally M; Ortblad, Katherine M
2004-10-01
This article compares the efficacy of a prototype integrated system (the IntelliClean System from Sonicare and Crest) in the reduction of supragingival plaque to that of a manual toothbrush and conventional toothpaste. The integrated system was compared to a manual toothbrush with conventional toothpaste in a randomized, single-blinded, parallel, 4-week, controlled clinical trial with 100 subjects randomized to each treatment group. There was a low dropout rate, with 89 subjects in the manual toothbrush group (11% loss to follow-up) and 93 subjects in the integrated system group (7% loss to follow-up) completing the study. The Turesky modification of the Quigley and Hein Plaque Index was used to assess full-mouth plaque scores for each subject. Prebrushing plaque scores were obtained at baseline and at 4 weeks after 14 to 20 hours of plaque accumulation. A survey also was conducted at the conclusion of the study to determine the attitude toward the two oral hygiene systems. The integrated system was found to significantly reduce overall and interproximal prebrushing plaque scores over 4 weeks, both by 8.6%, demonstrating statistically significant superiority in overall plaque reduction (P = .002) and interproximal plaque reduction (P < .001) compared to the manual toothbrush with conventional toothpaste, which showed no significant reduction in either overall plaque or interproximal plaque. This study demonstrates that the IntelliClean System from Sonicare and Crest is superior to a manual toothbrush with conventional toothpaste in reducing overall plaque and interproximal plaque over time.
Rigorous time slicing approach to Feynman path integrals
Fujiwara, Daisuke
2017-01-01
This book proves that Feynman's original definition of the path integral actually converges to the fundamental solution of the Schrödinger equation at least in the short term if the potential is differentiable sufficiently many times and its derivatives of order equal to or higher than two are bounded. The semi-classical asymptotic formula up to the second term of the fundamental solution is also proved by a method different from that of Birkhoff. A bound of the remainder term is also proved. The Feynman path integral is a method of quantization using the Lagrangian function, whereas Schrödinger's quantization uses the Hamiltonian function. These two methods are believed to be equivalent. But equivalence is not fully proved mathematically, because, compared with Schrödinger's method, there is still much to be done concerning rigorous mathematical treatment of Feynman's method. Feynman himself defined a path integral as the limit of a sequence of integrals over finite-dimensional spaces which is obtained by...
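The time-slicing definition made rigorous in the book is the standard one: the propagator is the limit of finite-dimensional integrals over broken-line paths,

```latex
K(x_b, x_a; T) \;=\; \lim_{N\to\infty}
\left(\frac{m}{2\pi i\hbar\,\Delta t}\right)^{N/2}
\int \prod_{j=1}^{N-1} dx_j\,
\exp\!\left[\frac{i}{\hbar}\sum_{j=0}^{N-1}\Delta t\left(
\frac{m}{2}\left(\frac{x_{j+1}-x_j}{\Delta t}\right)^{2} - V(x_j)\right)\right],
\qquad \Delta t = \frac{T}{N},\quad x_0 = x_a,\; x_N = x_b,
```

and the book's convergence theorems concern precisely this limit under the stated smoothness and boundedness conditions on V.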
Directory of Open Access Journals (Sweden)
Kodner Robin B
2010-10-01
Full Text Available Abstract Background Likelihood-based phylogenetic inference is generally considered to be the most reliable classification method for unknown sequences. However, traditional likelihood-based phylogenetic methods cannot be applied to large volumes of short reads from next-generation sequencing due to computational complexity issues and lack of phylogenetic signal. "Phylogenetic placement," where a reference tree is fixed and the unknown query sequences are placed onto the tree via a reference alignment, is a way to bring the inferential power offered by likelihood-based approaches to large data sets. Results This paper introduces pplacer, a software package for phylogenetic placement and subsequent visualization. The algorithm can place twenty thousand short reads on a reference tree of one thousand taxa per hour per processor, has essentially linear time and memory complexity in the number of reference taxa, and is easy to run in parallel. Pplacer features calculation of the posterior probability of a placement on an edge, which is a statistically rigorous way of quantifying uncertainty on an edge-by-edge basis. It also can inform the user of the positional uncertainty for query sequences by calculating expected distance between placement locations, which is crucial in the estimation of uncertainty with a well-sampled reference tree. The software provides visualizations using branch thickness and color to represent number of placements and their uncertainty. A simulation study using reads generated from 631 COG alignments shows a high level of accuracy for phylogenetic placement over a wide range of alignment diversity, and the power of edge uncertainty estimates to measure placement confidence. Conclusions Pplacer enables efficient phylogenetic placement and subsequent visualization, making likelihood-based phylogenetics methodology practical for large collections of reads; it is freely available as source code, binaries, and a web service.
Ghousiya Begum, K; Seshagiri Rao, A; Radhakrishnan, T K
2017-05-01
Internal model control (IMC) with an optimal H₂ minimization framework is proposed in this paper for the design of proportional-integral-derivative (PID) controllers. The controller design is addressed for integrating and double-integrating time delay processes with right half plane (RHP) zeros. The Blaschke product is used to derive the optimal controller. There is a single adjustable closed-loop tuning parameter for controller design. Systematic guidelines are provided for selection of this tuning parameter based on maximum sensitivity. Simulation studies have been carried out on various integrating time delay processes to show the advantages of the proposed method. The proposed controller provides enhanced closed-loop performance when compared to recently reported methods in the literature. Quantitative comparative analysis has been carried out using the performance indices Integral Absolute Error (IAE) and Total Variation (TV). Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
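Maximum sensitivity, the quantity used here to pick the single tuning parameter, is straightforward to evaluate on a frequency grid. A sketch with a hypothetical integrating-plus-delay process and hypothetical PID gains; none of these numbers come from the paper:

```python
import cmath

def max_sensitivity(kp, ki, kd, process, w_grid):
    """Maximum sensitivity Ms = max over w of |1 / (1 + C(jw) P(jw))| for
    an ideal PID C(s) = kp + ki/s + kd*s.  Smaller Ms gives a more robust
    but slower closed loop; values around 1.4-2.0 are a common target."""
    ms = 0.0
    for w in w_grid:
        s = 1j * w
        c = kp + ki / s + kd * s
        ms = max(ms, abs(1.0 / (1.0 + c * process(s))))
    return ms

# Hypothetical integrating process with delay, P(s) = exp(-0.5 s) / s,
# and hypothetical PID gains -- illustrative only.
P = lambda s: cmath.exp(-0.5 * s) / s
grid = [0.01 * 1.05 ** i for i in range(200)]
ms = max_sensitivity(1.0, 0.1, 0.5, P, grid)
```

Sweeping the closed-loop tuning parameter and reading off Ms is the kind of systematic selection guideline the abstract refers to.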
Shultz, Rebecca; Jenkyn, Thomas
2012-01-01
Measuring individual foot joint motions requires a multi-segment foot model, even when the subject is wearing a shoe. Each foot segment must be tracked with at least three skin-mounted markers, but for these markers to be visible to an optical motion capture system holes or 'windows' must be cut into the structure of the shoe. The holes must be sufficiently large to avoid interfering with the markers, but small enough that they do not compromise the shoe's structural integrity. The objective of this study was to determine the maximum size of hole that could be cut into a running shoe upper without significantly compromising its structural integrity or changing the kinematics of the foot within the shoe. Three shoe designs were tested: (1) neutral cushioning, (2) motion control and (3) stability shoes. Holes were cut progressively larger, with four sizes tested in all. Foot joint motions were measured: (1) hindfoot with respect to midfoot in the frontal plane, (2) forefoot twist with respect to midfoot in the frontal plane, (3) the height-to-length ratio of the medial longitudinal arch and (4) the hallux angle with respect to first metatarsal in the sagittal plane. A single subject performed level walking at her preferred pace in each of the three shoes with ten repetitions for each hole size. The largest hole that did not disrupt shoe integrity was an oval of 1.7cm×2.5cm. The smallest shoe deformations were seen with the motion control shoe. The least change in foot joint motion was forefoot twist in both the neutral shoe and stability shoe for any size hole. This study demonstrates that for a hole smaller than this size, optical motion capture with a cluster-based multi-segment foot model is feasible for measuring foot-in-shoe kinematics in vivo. Copyright © 2011. Published by Elsevier Ltd.
Integrated intensities in inverse time-of-flight technique
International Nuclear Information System (INIS)
Dorner, Bruno
2006-01-01
In traditional data analysis a model function, convoluted with the resolution, is fitted to the measured data. In case integrated intensities of signals are of main interest, one can use an approach which requires neither a model function for the signal nor detailed knowledge of the resolution. For inverse TOF technique, this approach consists of two steps: (i) Normalisation of the measured spectrum with the help of a monitor, with 1/k sensitivity, which is positioned in front of the sample. This means at the same time a conversion of the data from time of flight to energy transfer. (ii) A Jacobian [I. Waller, P.O. Froeman, Ark. Phys. 4 (1952) 183] transforms data collected at constant scattering angle into data as if measured at constant momentum transfer Q. This Jacobian works correctly for signals which have a constant width at different Q along the trajectory of constant scattering angle. The approach has been tested on spectra of Compton scattering with neutrons of epithermal energies, obtained on the inverse TOF spectrometer VESUVIO/ISIS. In this case the width of the signal increases proportionally to Q, and in consequence the application of the Jacobian leads to integrated intensities slightly too high. The resulting integrated intensities agree very well with results derived in the traditional way. Thus this completely different approach confirms the observation that signals from recoil by H-atoms at large momentum transfers are weaker than expected.
Hybrid state-space time integration of rotating beams
DEFF Research Database (Denmark)
Krenk, Steen; Nielsen, Martin Bjerre
2012-01-01
An efficient time integration algorithm for the dynamic equations of flexible beams in a rotating frame of reference is presented. The equations of motion are formulated in a hybrid state-space format in terms of local displacements and local components of the absolute velocity. With inspiration...... of the system rotation enter via global operations with the angular velocity vector. The algorithm is based on an integrated form of the equations of motion with energy and momentum conserving properties, if a kinematically consistent non-linear formulation is used. A consistent monotonic scheme for algorithmic...... energy dissipation in terms of local displacements and velocities, typical of structural vibrations, is developed and implemented in the form of forward weighting of appropriate mean value terms in the algorithm. The algorithm is implemented for a beam theory with consistent quadratic non...
Integrating speech in time depends on temporal expectancies and attention.
Scharinger, Mathias; Steinberg, Johanna; Tavano, Alessandro
2017-08-01
Sensory information that unfolds in time, such as in speech perception, relies on efficient chunking mechanisms in order to yield optimally-sized units for further processing. Whether two successive acoustic events receive a one-unit or a two-unit interpretation seems to depend on the fit between their temporal extent and a stipulated temporal window of integration. However, there is ongoing debate on how flexible this temporal window of integration should be, especially for the processing of speech sounds. Furthermore, there is no direct evidence of whether attention may modulate the temporal constraints on the integration window. For this reason, we here examine how different word durations, which lead to different temporal separations of sound onsets, interact with attention. In an Electroencephalography (EEG) study, participants actively and passively listened to words where word-final consonants were occasionally omitted. Words had either a natural duration or were artificially prolonged in order to increase the separation of speech sound onsets. Omission responses to incomplete speech input, originating in left temporal cortex, decreased when the critical speech sound was separated from previous sounds by more than 250 msec, i.e., when the separation was larger than the stipulated temporal window of integration (125-150 msec). Attention, on the other hand, only increased omission responses for stimuli with natural durations. We complemented the event-related potential (ERP) analyses with a frequency-domain analysis at the stimulus presentation rate. Notably, the power at the stimulation frequency showed the same duration and attention effects as the omission responses. We interpret these findings against the background of existing research on temporal integration windows and further suggest that our findings may be accounted for within the framework of predictive coding. Copyright © 2017 Elsevier Ltd. All rights reserved.
Integral Time and the Varieties of Post-Mortem Survival
Directory of Open Access Journals (Sweden)
Sean M. Kelly
2008-06-01
Full Text Available While the question of survival of bodily death is usually approached by focusing on the mind/body relation (and often with the idea of the soul as a special kind of substance), this paper explores the issue in the context of our understanding of time. The argument of the paper is woven around the central intuition of time as an “ever-living present.” The development of this intuition allows for a more integral or “complex-holistic” theory of time, the soul, and the question of survival. Following the introductory matter, the first section proposes a re-interpretation of Nietzsche’s doctrine of eternal recurrence in terms of moments and lives as “eternally occurring.” The next section is a treatment of Julian Barbour’s neo-Machian model of instants of time as configurations in the n-dimensional phase-space he calls “Platonia.” While rejecting his claim to have done away with time, I do find his model suggestive of the idea of moments and lives as eternally occurring. The following section begins with Fechner’s visionary ideas of the nature of the soul and its survival of bodily death, with particular attention to the notion of holonic inclusion and the central analogy of the transition from perception to memory. I turn next to Whitehead’s equally holonic notions of prehension and the concrescence of actual occasions. From his epochal theory of time and certain ambiguities in his reflections on the “divine antinomies,” we are brought to the threshold of a potentially more integral or “complex-holistic” theory of time and survival, which is treated in the last section. This section draws from my earlier work on Hegel, Jung, and Edgar Morin, as well as from key insights of Jean Gebser, for an interpretation of Sri Aurobindo’s inspired but cryptic description of the “Supramental Time Vision.” This interpretation leads to an alternative understanding of reincarnation—and to the possibility of its reconciliation with the once-only view.
FLRW cosmology in Weyl-integrable space-time
Energy Technology Data Exchange (ETDEWEB)
Gannouji, Radouane [Department of Physics, Faculty of Science, Tokyo University of Science, 1–3, Kagurazaka, Shinjuku-ku, Tokyo 162-8601 (Japan); Nandan, Hemwati [Department of Physics, Gurukula Kangri Vishwavidayalaya, Haridwar 249404 (India); Dadhich, Naresh, E-mail: gannouji@rs.kagu.tus.ac.jp, E-mail: hntheory@yahoo.co.in, E-mail: nkd@iucaa.ernet.in [IUCAA, Post Bag 4, Ganeshkhind, Pune 411 007 (India)
2011-11-01
We investigate the Weyl space-time extension of general relativity (GR) for studying the FLRW cosmology through focusing and defocusing of the geodesic congruences. We have derived the equations of evolution for expansion, shear and rotation in the Weyl space-time. In particular, we consider the Starobinsky modification, f(R) = R + βR² − 2Λ, of gravity in the Einstein-Palatini formalism, which turns out to reduce to the Weyl integrable space-time (WIST) with the Weyl vector being a gradient. The modified Raychaudhuri equation takes the form of the Hill-type equation which is then analysed to study the formation of the caustics. In this model, it is possible to have a Big Bang singularity free cyclic Universe but unfortunately the periodicity turns out to be extremely short.
Integrated project scheduling and staff assignment with controllable processing times.
Fernandez-Viagas, Victor; Framinan, Jose M
2014-01-01
This paper addresses a decision problem related to simultaneously scheduling the tasks in a project and assigning the staff to these tasks, taking into account that a task can be performed only by employees with certain skills, and that the length of each task depends on the number of employees assigned. This type of problems usually appears in service companies, where both tasks scheduling and staff assignment are closely related. An integer programming model for the problem is proposed, together with some extensions to cope with different situations. Additionally, the advantages of the controllable processing times approach are compared with the fixed processing times. Due to the complexity of the integrated model, a simple GRASP algorithm is implemented in order to obtain good, approximate solutions in short computation times.
Directory of Open Access Journals (Sweden)
BLAGA IRINA
2014-03-01
Full Text Available The maximum amounts of rainfall are usually characterized by high intensity, and their effects on the substrate are revealed, at slope level, by the deepening of the existing forms of torrential erosion, by the formation of new ones, and by landslide processes. For the 1971-2000 period, for the weather stations in the hilly area of Cluj County (Cluj-Napoca, Dej, Huedin and Turda), the highest values of rainfall amounts fallen in 24, 48 and 72 hours were analyzed and extracted, based on which the variation and the spatial and temporal distribution of the precipitation were analyzed. The annual probability of exceedance of maximum rainfall amounts fallen in short time intervals (24, 48 and 72 hours), based on thresholds and class values, was determined using climatological practices and the Hyfran program facilities.
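The annual exceedance probability of a series of annual maxima, as determined in the study above, can be sketched with the common Weibull plotting-position formula. The rainfall values below are invented for illustration; the study used the Hyfran software, not this formula necessarily.

```python
def exceedance_probabilities(annual_maxima):
    """Empirical annual exceedance probability using the Weibull plotting
    position P = m / (n + 1), where m is the rank of the value when the
    series is sorted in descending order."""
    n = len(annual_maxima)
    ranked = sorted(annual_maxima, reverse=True)
    return [(x, (m + 1) / (n + 1)) for m, x in enumerate(ranked)]

# Hypothetical 24-hour annual maxima (mm), for illustration only
maxima = [48.2, 65.1, 39.7, 88.4, 52.0]
for value, p in exceedance_probabilities(maxima):
    print(f"{value:5.1f} mm  P(exceed) = {p:.2f}")
```

The largest observed maximum (88.4 mm here) gets P = 1/(n+1), i.e. it is expected to be exceeded roughly once every n+1 years under this empirical estimate.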
Sivak, David A; Chodera, John D; Crooks, Gavin E
2014-06-19
When simulating molecular systems using deterministic equations of motion (e.g., Newtonian dynamics), such equations are generally numerically integrated according to a well-developed set of algorithms that share commonly agreed-upon desirable properties. However, for stochastic equations of motion (e.g., Langevin dynamics), there is still broad disagreement over which integration algorithms are most appropriate. While multiple desiderata have been proposed throughout the literature, consensus on which criteria are important is absent, and no published integration scheme satisfies all desiderata simultaneously. Additional nontrivial complications stem from simulating systems driven out of equilibrium using existing stochastic integration schemes in conjunction with recently developed nonequilibrium fluctuation theorems. Here, we examine a family of discrete time integration schemes for Langevin dynamics, assessing how each member satisfies a variety of desiderata that have been enumerated in prior efforts to construct suitable Langevin integrators. We show that the incorporation of a novel time step rescaling in the deterministic updates of position and velocity can correct a number of dynamical defects in these integrators. Finally, we identify a particular splitting (related to the velocity Verlet discretization) that has essentially universally appropriate properties for the simulation of Langevin dynamics for molecular systems in equilibrium, nonequilibrium, and path sampling contexts.
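One widely used member of the family of splitting schemes discussed in this abstract is the BAOAB ordering (kick, drift, exact Ornstein-Uhlenbeck update, drift, kick). The sketch below is a generic textbook implementation for a one-dimensional harmonic well, not the authors' specific time-step-rescaled integrator; all parameters are illustrative.

```python
import math
import random

def baoab_step(x, v, dt, force, mass=1.0, gamma=1.0, kT=1.0, rng=random):
    """One BAOAB step of Langevin dynamics:
    B: half kick, A: half drift, O: exact OU velocity update, A, B."""
    v += 0.5 * dt * force(x) / mass            # B
    x += 0.5 * dt * v                          # A
    c1 = math.exp(-gamma * dt)                 # O: exact Ornstein-Uhlenbeck
    c2 = math.sqrt((1.0 - c1 * c1) * kT / mass)
    v = c1 * v + c2 * rng.gauss(0.0, 1.0)
    x += 0.5 * dt * v                          # A
    v += 0.5 * dt * force(x) / mass            # B
    return x, v

# Harmonic well F(x) = -x with kT = 1: equipartition gives <x^2> = 1
rng = random.Random(0)
x, v = 0.0, 0.0
acc = 0.0
nsteps = 100_000
for _ in range(nsteps):
    x, v = baoab_step(x, v, 0.1, lambda q: -q, rng=rng)
    acc += x * x
print(round(acc / nsteps, 2))  # should be close to 1.0
```

The O step uses the exact solution of the Ornstein-Uhlenbeck process, which is one reason splittings of this type reproduce configurational averages accurately even at fairly large time steps.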
A higher order space-time Galerkin scheme for time domain integral equations
Pray, Andrew J.
2014-12-01
Stability of time domain integral equation (TDIE) solvers has remained an elusive goal for many years. Advancement of this research has largely progressed on four fronts: 1) Exact integration, 2) Lubich quadrature, 3) smooth temporal basis functions, and 4) space-time separation of convolutions with the retarded potential. The latter method's efficacy in stabilizing solutions to the time domain electric field integral equation (TD-EFIE) was previously reported for first-order surface descriptions (flat elements) and zeroth-order functions as the temporal basis. In this work, we develop the methodology necessary to extend the scheme to higher order surface descriptions as well as to enable its use with higher order basis functions in both space and time. These basis functions are then used in a space-time Galerkin framework. A number of results are presented that demonstrate convergence in time. The viability of the space-time separation method in producing stable results is demonstrated experimentally for these examples.
Noether symmetries and integrability in time-dependent Hamiltonian mechanics
Directory of Open Access Journals (Sweden)
Jovanović Božidar
2016-01-01
Full Text Available We consider Noether symmetries within the Hamiltonian setting as transformations that preserve the Poincaré-Cartan form, i.e., as symmetries of characteristic line bundles of nondegenerate 1-forms. In the case when the Poincaré-Cartan form is contact, the explicit expression for the symmetries in the inverse Noether theorem is given. As examples, we consider natural mechanical systems, in particular the Kepler problem. Finally, we prove a variant of the theorem on complete (non-commutative) integrability in terms of Noether symmetries of time-dependent Hamiltonian systems.
Integral ceramic superstructure evaluation using time domain optical coherence tomography
Sinescu, Cosmin; Bradu, Adrian; Topala, Florin I.; Negrutiu, Meda Lavinia; Duma, Virgil-Florin; Podoleanu, Adrian G.
2014-02-01
Optical Coherence Tomography (OCT) is a non-invasive low coherence interferometry technique that includes several technologies (and the corresponding devices and components), such as illumination and detection, interferometry, scanning, adaptive optics, microscopy and endoscopy. From its large area of applications, we consider in this paper a critical aspect in dentistry - to be investigated with a Time Domain (TD) OCT system. The clinical situation of an edentulous mandible is considered; it can be solved by inserting 2 to 6 implants. On these implants a mesostructure will be manufactured and on it a superstructure is needed. This superstructure can be integral ceramic; in this case material defects could be trapped inside the ceramic layers and those defects could lead to fractures of the entire superstructure. In this paper we demonstrate that a TD-OCT imaging system has the potential to properly evaluate the presence of the defects inside the ceramic layers and those defects can be fixed before inserting the prosthesis inside the oral cavity. Three integral ceramic superstructures were developed by using a CAD/CAM technology. After the milling, the ceramic layers were applied on the core. All the three samples were evaluated by a TD-OCT system working at 1300 nm. For two of the superstructures evaluated, no defects were found in the most stressed areas. The third superstructure presented four ceramic defects in the mentioned areas. Because of those defects the superstructure may fracture. The integral ceramic prosthesis was sent back to the dental laboratory to fix the problems related to the material defects found. Thus, TD-OCT proved to be a valuable method for diagnosing ceramic defects inside integral ceramic superstructures in order to prevent fractures at this level.
Directory of Open Access Journals (Sweden)
Morten B. S. Svendsen
2016-10-01
Full Text Available Billfishes are considered to be among the fastest swimmers in the oceans. Previous studies have estimated the maximum speed of sailfish and black marlin at around 35 m s⁻¹, but theoretical work on cavitation predicts that such extreme speed is unlikely. Here we investigated the maximum speed of sailfish, and three other large marine pelagic predatory fish species, by measuring the twitch contraction time of anaerobic swimming muscle. The highest estimated maximum swimming speeds were found in sailfish (8.3±1.4 m s⁻¹), followed by barracuda (6.2±1.0 m s⁻¹), little tunny (5.6±0.2 m s⁻¹) and dorado (4.0±0.9 m s⁻¹), although size-corrected performance was highest in little tunny and lowest in sailfish. Contrary to previously reported estimates, our results suggest that sailfish are incapable of exceeding swimming speeds of 10-15 m s⁻¹, which corresponds to the speed at which cavitation is predicted to occur, with destructive consequences for fin tissues.
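The logic of bounding swim speed by twitch contraction time can be sketched as follows: one full tail-beat requires a contraction on each side of the body, capping the tail-beat frequency, and speed is frequency times the distance covered per beat. The twitch time, body length, and stride of 0.7 body lengths per beat below are illustrative assumptions, not values from the study.

```python
def max_speed_estimate(twitch_time_s, body_length_m, stride_bl=0.7):
    """Upper-bound swimming speed from muscle twitch contraction time.
    A full tail-beat needs two twitches (left and right side), so
    f_max = 1 / (2 * twitch_time); speed = f_max * stride * body length.
    The 0.7 body-lengths-per-beat stride is a hypothetical value."""
    f_max = 1.0 / (2.0 * twitch_time_s)
    return f_max * stride_bl * body_length_m

# Hypothetical sailfish: 0.1 s twitch, 2 m body length
print(round(max_speed_estimate(0.1, 2.0), 1))  # 7.0 (m/s)
```

With these invented inputs the bound lands in the same single-digit m s⁻¹ range as the speeds reported in the abstract, which is the point of the method: muscle kinetics, not cavitation, already rules out 35 m s⁻¹.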
Electric vehicle integration in a real-time market
DEFF Research Database (Denmark)
Pedersen, Anders Bro
with an externally simulated model of the power grid, it is possible, in real-time, to simulate the impact of EV charging and help to identify bottlenecks in the system. In EDISON the vehicles are aggregated using an entity called a Virtual Power Plant (VPP); a central server monitoring and controlling...... the distributed energy resources registered with it, in order to make them appear as a single producer in the eyes of the market. Although the concept of a VPP is used within the EcoGrid EU project, the idea of more individual control is introduced through a new proposed real-time electricity market, where......This project is rooted in the EDISON project, which dealt with Electrical Vehicle (EV) integration into the existing power grid, as well as with the infrastructure needed to facilitate the ever increasing penetration of fluctuating renewable energy resources like e.g. wind turbines. In the EDISON
An integrated framework for SAGD real-time monitoring
Energy Technology Data Exchange (ETDEWEB)
Mohajer, M.; Perez-Damas, C.; Berbin, A.; Al-kinani, A. [Schlumberger, Calgary, AB (Canada)
2009-07-01
This study examined the technologies and workflows for real-time optimization (RTO) of the steam assisted gravity drainage (SAGD) process. Although SAGD operators have tried to control the reservoir's steam chamber distribution to optimize bitumen recovery and minimize steam oil ratios, a true optimization can only be accomplished by implementing RTO workflows. In order for these workflows to be successful, some elements must be properly designed and introduced into the system. Most notably, well completions must ensure the integrity of downhole sensors; the appropriate measuring instruments must be selected; and surface and downhole measurements must be obtained. Operators have not been early adopters of RTO workflows for SAGD because of the numerous parameters that must be monitored, harsh operating conditions, the lack of integration between the different data acquisition systems, and the complex criteria required to optimize SAGD performance. This paper discussed the first stage in the development of a fully integrated RTO workflow for SAGD. An experimental apparatus with fiber optics distributed temperature sensing (DTS) was connected to a data acquisition system, and intra-minute data was streamed directly into an engineering desktop. The paper showed how subcool calculations can be effectively performed along the length of the horizontal well in real time and the results used to improve SAGD operation. Observations were compared against simulated predictions. In the next stage, a more complex set of criteria will be derived and additional data will be incorporated, such as surface heave, cross-well microseismic, multiphase flowmeter, and observation wells. 9 refs., 9 tabs., 13 figs.
Integrative real-time geographic visualization of energy resources
International Nuclear Information System (INIS)
Sorokine, A.; Shankar, M.; Stovall, J.; Bhaduri, B.; King, T.; Fernandez, S.; Datar, N.; Omitaomu, O.
2009-01-01
'Full text:' Several models forecast that climatic changes will increase the frequency of disastrous events like droughts, hurricanes, and snow storms. Responding to these events and also to power outages caused by system errors such as the 2003 North American blackout require an interconnect-wide real-time monitoring system for various energy resources. Such a system should be capable of providing situational awareness to its users in the government and energy utilities by dynamically visualizing the status of the elements of the energy grid infrastructure and supply chain in geographic contexts. We demonstrate an approach that relies on Google Earth and similar standard-based platforms as client-side geographic viewers with a data-dependent server component. The users of the system can view status information in spatial and temporal contexts. These data can be integrated with a wide range of geographic sources including all standard Google Earth layers and a large number of energy and environmental data feeds. In addition, we show a real-time spatio-temporal data sharing capability across the users of the system, novel methods for visualizing dynamic network data, and a fine-grain access to very large multi-resolution geographic datasets for faster delivery of the data. The system can be extended to integrate contingency analysis results and other grid models to assess recovery and repair scenarios in the case of major disruption. (author)
Time-optimal control of nuclear reactor power with adaptive proportional- integral-feedforward gains
International Nuclear Information System (INIS)
Park, Moon Ghu; Cho, Nam Zin
1993-01-01
A time-optimal control method which consists of coarse and fine control stages is described here. During the coarse control stage, the maximum control effort (time-optimal) is used to direct the system toward the switching boundary which is set near the desired power level. At this boundary, the controller is switched to the fine control stage in which an adaptive proportional-integral-feedforward (PIF) controller is used to compensate for any unmodeled reactivity feedback effects. This fine control is also introduced to obtain a constructive method for determining the (adaptive) feedback gains against the sampling effect. The feedforward control term is included to suppress the over- or undershoot. The estimation and feedback of the temperature-induced reactivity is also discussed.
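The coarse/fine switching structure described above can be sketched generically: bang-bang (maximum) effort outside a band around the setpoint, then a proportional-integral-feedforward law inside it. The gains, band, and the trivial integrator plant below are all illustrative; the paper's controller uses adaptive gains and a reactor model, neither of which is reproduced here.

```python
class CoarseFinePIF:
    """Two-stage controller sketch: time-optimal (bang-bang) effort outside
    a switching band near the setpoint, PIF law inside it. Fixed toy gains."""

    def __init__(self, target, band=0.05, u_max=10.0,
                 kp=2.0, ki=0.5, u_ff=0.0, dt=0.1):
        self.target, self.band, self.u_max = target, band, u_max
        self.kp, self.ki, self.u_ff, self.dt = kp, ki, u_ff, dt
        self.integral = 0.0

    def step(self, power):
        err = self.target - power
        if abs(err) > self.band * self.target:      # coarse stage: max effort
            return self.u_max if err > 0 else -self.u_max
        self.integral += err * self.dt              # fine stage: PIF law
        return self.kp * err + self.ki * self.integral + self.u_ff

# Drive a toy integrator "plant" d(power)/dt = u from 10% to 100% power
ctrl = CoarseFinePIF(target=100.0)
power = 10.0
for _ in range(600):
    power += ctrl.step(power) * ctrl.dt
print(round(power, 1))  # settles at the 100.0 setpoint
```

Choosing kp = u_max / (band * target) makes the control effort continuous at the switching boundary, avoiding a jump when the controller hands over from the coarse to the fine stage.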
Pneumatic oscillator circuits for timing and control of integrated microfluidics.
Duncan, Philip N; Nguyen, Transon V; Hui, Elliot E
2013-11-05
Frequency references are fundamental to most digital systems, providing the basis for process synchronization, timing of outputs, and waveform synthesis. Recently, there has been growing interest in digital logic systems that are constructed out of microfluidics rather than electronics, as a possible means toward fully integrated laboratory-on-a-chip systems that do not require any external control apparatus. However, the full realization of this goal has not been possible due to the lack of on-chip frequency references, thus requiring timing signals to be provided from off-chip. Although microfluidic oscillators have been demonstrated, there have been no reported efforts to characterize, model, or optimize timing accuracy, which is the fundamental metric of a clock. Here, we report pneumatic ring oscillator circuits built from microfluidic valves and channels. Further, we present a compressible-flow analysis that differs fundamentally from conventional circuit theory, and we show the utility of this physically based model for the optimization of oscillator stability. Finally, we leverage microfluidic clocks to demonstrate circuits for the generation of phase-shifted waveforms, self-driving peristaltic pumps, and frequency division. Thus, pneumatic oscillators can serve as on-chip frequency references for microfluidic digital logic circuits. On-chip clocks and pumps both constitute critical building blocks on the path toward achieving autonomous laboratory-on-a-chip devices.
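The ring oscillator topology mentioned in this abstract has a simple first-order timing model, identical for electronic and pneumatic implementations: an odd number of inverting stages in a loop, with each logic edge traversing every stage twice per full cycle. The stage count and delay below are invented for illustration and are not measurements from the paper.

```python
def ring_oscillator_period(n_stages, stage_delay):
    """Period of an n-stage ring oscillator: an edge must propagate
    through all stages twice (one rising, one falling pass) per cycle,
    so T = 2 * n * t_d. The stage count must be odd to sustain oscillation."""
    if n_stages % 2 == 0:
        raise ValueError("a ring oscillator needs an odd number of inverting stages")
    return 2 * n_stages * stage_delay

# Hypothetical numbers: 3 valve-inverters with 50 ms propagation delay each
period = ring_oscillator_period(3, 0.050)
print(round(period, 3), round(1.0 / period, 2))  # 0.3 s period, 3.33 Hz
```

Under this model the clock frequency is set entirely by the valve propagation delay, which is why characterizing and stabilizing that delay is the key to the timing accuracy the authors discuss.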
Marxer, C Galli; Coen, M Collaud; Bissig, H; Greber, U F; Schlapbach, L
2003-10-01
Interpretation of adsorption kinetics measured with a quartz crystal microbalance (QCM) can be difficult for adlayers undergoing modification of their mechanical properties. We have studied the behavior of the oscillation amplitude, A(0), and the decay time constant, tau, of quartz during adsorption of proteins and cells, by use of a home-made QCM. We are able to measure simultaneously the frequency, f, the dissipation factor, D, the maximum amplitude, A(0), and the transient decay time constant, tau, every 300 ms in liquid, gaseous, or vacuum environments. This analysis enables adsorption and modification of liquid/mass properties to be distinguished. Moreover the surface coverage and the stiffness of the adlayer can be estimated. These improvements promise to increase the appeal of QCM methodology for any applications measuring intimate contact of a dynamic material with a solid surface.
Integrating Real-time Earthquakes into Natural Hazard Courses
Furlong, K. P.; Benz, H. M.; Whitlock, J. S.; Bittenbinder, A. N.; Bogaert, B. B.
2001-12-01
Natural hazard courses are playing an increasingly important role in college and university earth science curricula. Students' intrinsic curiosity about the subject and the potential to make the course relevant to the interests of both science and non-science students make natural hazards courses popular additions to a department's offerings. However, one vital aspect of "real-life" natural hazard management that has not translated well into the classroom is the real-time nature of both events and response. The lack of a way to entrain students into the event/response mode has made implementing such real-time activities into classroom activities problematic. Although a variety of web sites provide near real-time postings of natural hazards, students essentially learn of the event after the fact. This is particularly true for earthquakes and other events with few precursors. As a result, the "time factor" and personal responsibility associated with natural hazard response is lost to the students. We have integrated the real-time aspects of earthquake response into two natural hazard courses at Penn State (a 'general education' course for non-science majors, and an upper-level course for science majors) by implementing a modification of the USGS Earthworm system. The Earthworm Database Management System (E-DBMS) catalogs current global seismic activity. It provides earthquake professionals with real-time email/cell phone alerts of global seismic activity and access to the data for review/revision purposes. We have modified this system so that real-time response can be used to address specific scientific, policy, and social questions in our classes. As a prototype of using the E-DBMS in courses, we have established an Earthworm server at Penn State. This server receives national and global seismic network data and, in turn, transmits the tailored alerts to "on-duty" students (e-mail, pager/cell phone notification). These students are responsible for reacting to the alarm
International Nuclear Information System (INIS)
Badea, A.F.; Brancus, I.M.; Rebel, H.; Haungs, A.; Oehlschlaeger, J.; Zazyan, M.
1999-01-01
The average depth of maximum X_m of the EAS (extensive air shower) development depends on the energy E_0 and the mass of the primary particle, and its energy dependence is traditionally expressed by the so-called elongation rate D_e, defined as the change in the average depth of maximum per decade of E_0, i.e. D_e = dX_m/d(log_10 E_0). Invoking the superposition-model approximation, i.e. assuming that a heavy primary of mass A has the same shower elongation rate as a proton but with the energy scaled to E_0/A, one can write X_m = X_init + D_e·log_10(E_0/A). In 1977 Linsley suggested an indirect approach to studying D_e. This approach can be applied to shower parameters which do not depend explicitly on the energy of the primary particle, but do depend on the depth of observation X and on the depth X_m of the shower maximum. The distribution of EAS muon arrival times, measured at a given observation level relative to the arrival time of the shower core, reflects the path-length distribution of the muon travel from the locus of production (near the axis) to the observation locus. The basic a priori assumption is that the mean value or median T of the time distribution can be associated with the height of the EAS maximum X_m, and that we can express T = f(X, X_m). In order to derive information about the elongation rate from the energy variation of the arrival-time quantities, some knowledge is required about F, i.e. F = -(∂T/∂X_m)_X / (∂T/∂X)_{X_m}, in addition to the variations with the depth of observation and the zenith-angle (θ) dependence, respectively. Thus ∂T/∂log_10 E_0|_X = -F·D_e·(1/X_v)·∂T/∂sec θ|_{E_0}. In a similar way the fluctuations σ(X_m) of X_m may be related to the fluctuations σ(T) of T, i.e. σ(T) = -σ(X_m)·F_σ·(1/X_v)·∂T/∂sec θ|_{E_0}, with F_σ being the corresponding scaling factor for the fluctuations. By simulations of the EAS development using the Monte Carlo code CORSIKA the energy and angle
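As a rough numerical illustration of the superposition-model relation for the depth of shower maximum, the sketch below uses invented values for the offset and the elongation rate (the real values depend on the hadronic interaction model):

```python
import math

def depth_of_maximum(E0_eV, A, X_init=0.0, De=85.0):
    """Mean depth of shower maximum X_m (g/cm^2) in the superposition
    model: a primary of mass A acts like A protons of energy E0/A.
    X_init and the elongation rate De (per decade) are invented values."""
    return X_init + De * math.log10(E0_eV / A)

E0 = 1e15                              # primary energy, eV (illustrative)
xm_p = depth_of_maximum(E0, A=1)       # proton primary
xm_fe = depth_of_maximum(E0, A=56)     # iron primary
assert xm_fe < xm_p                    # heavier primaries peak higher up
assert abs((xm_p - xm_fe) - 85.0 * math.log10(56)) < 1e-9
```

The mass sensitivity of X_m at fixed energy is exactly D_e·log_10(A), which is what the assertion checks.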
Parareal algorithms with local time-integrators for time fractional differential equations
Wu, Shu-Lin; Zhou, Tao
2018-04-01
It is challenging to design parareal algorithms for time-fractional differential equations due to the history effect of the fractional operator. A direct extension of the classical parareal method to such equations leads to an unbalanced computational load across processes. In this work, we present an efficient parareal iteration scheme that overcomes this issue by adopting two recently developed local time-integrators for time-fractional operators. In both approaches, auxiliary variables are introduced to localize the fractional operator. To this end, we propose a new strategy to perform the coarse-grid correction so that the auxiliary variables and the solution variable are corrected separately, in a mixed pattern. It is shown that the proposed parareal algorithm admits a robust rate of convergence. Numerical examples are presented to support our conclusions.
Energy Technology Data Exchange (ETDEWEB)
Bylaska, Eric J., E-mail: Eric.Bylaska@pnnl.gov [Environmental Molecular Sciences Laboratory, Pacific Northwest National Laboratory, P.O. Box 999, Richland, Washington 99352 (United States); Weare, Jonathan Q., E-mail: weare@uchicago.edu [Department of Mathematics, University of Chicago, Chicago, Illinois 60637 (United States); Weare, John H., E-mail: jweare@ucsd.edu [Department of Chemistry and Biochemistry, University of California, San Diego, La Jolla, California 92093 (United States)
2013-08-21
Parallel in time simulation algorithms are presented and applied to conventional molecular dynamics (MD) and ab initio molecular dynamics (AIMD) models of realistic complexity. Assuming that a forward time integrator, f (e.g., the Verlet algorithm), is available to propagate the system from time t_i (trajectory positions and velocities x_i = (r_i, v_i)) to time t_{i+1} (x_{i+1}) by x_{i+1} = f_i(x_i), the dynamics problem spanning an interval from t_0…t_M can be transformed into a root finding problem, F(X) = [x_i − f(x_{i−1})]_{i=1,M} = 0, for the trajectory variables. The root finding problem is solved using a variety of root finding techniques, including quasi-Newton and preconditioned quasi-Newton schemes that are all unconditionally convergent. The algorithms are parallelized by assigning a processor to each time-step entry in the columns of F(X). The relation of this approach to other recently proposed parallel in time methods is discussed, and the effectiveness of various approaches to solving the root finding problem is tested. We demonstrate that more efficient dynamical models based on simplified interactions or coarsening time-steps provide preconditioners for the root finding problem. However, for MD and AIMD simulations, such preconditioners are not required to obtain reasonable convergence and their cost must be considered in the performance of the algorithm. The parallel in time algorithms developed are tested by applying them to MD and AIMD simulations of size and complexity similar to those encountered in present day applications. These include a 1000 Si atom MD simulation using Stillinger-Weber potentials, and a HCl + 4H_2O AIMD simulation at the MP2 level. The maximum speedup ((serial execution time)/(parallel execution time)) obtained by parallelizing the Stillinger-Weber MD simulation was nearly 3.0. For the AIMD MP2 simulations, the algorithms achieved speedups of up
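A minimal sketch of the trajectory-as-root-finding idea: a harmonic oscillator stands in for the MD force field, and a plain fixed-point sweep stands in for the paper's quasi-Newton schemes (all values are illustrative):

```python
import numpy as np

def verlet(x, dt=0.01, k=1.0):
    """One velocity-Verlet step for a unit-mass harmonic oscillator;
    x = (position, velocity) plays the role of x_i, verlet that of f."""
    r, v = x
    a = -k * r
    r_new = r + dt * v + 0.5 * dt ** 2 * a
    v_new = v + 0.5 * dt * (a - k * r_new)
    return np.array([r_new, v_new])

def residual(X, x0):
    """F(X) = [x_i - f(x_{i-1})]_{i=1..M}; zero exactly on the trajectory."""
    prev = np.vstack([[x0], X[:-1]])
    return X - np.array([verlet(p) for p in prev])

M = 50
x0 = np.array([1.0, 0.0])
X = np.tile(x0, (M, 1))      # poor initial guess: a frozen trajectory
# Sweep X <- X - F(X): every time slice is updated at once (the parallel
# part); corrections propagate one slice per sweep, converging in M sweeps.
for _ in range(M):
    X = X - residual(X, x0)
assert np.allclose(residual(X, x0), 0.0)
```

The quasi-Newton solvers in the paper converge in far fewer sweeps than this naive iteration; the point here is only the structure of F(X) and the slice-parallel update.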
Timing sensory integration for robot simulation of autistic behavior
Barakova, E.I.; Chonnaparamutt, W.
2009-01-01
The experiments in this paper show that the impact of temporal aspects of sensory integration on the precision of movement is concordant with behavioral studies of sensory integrative dysfunction and autism. Specifically, the simulation predicts that distant grasping will be performed properly by
Optimal Real-time Dispatch for Integrated Energy Systems
Energy Technology Data Exchange (ETDEWEB)
Firestone, Ryan Michael [Univ. of California, Berkeley, CA (United States)
2007-05-31
This report describes the development and application of a dispatch optimization algorithm for integrated energy systems (IES) comprising on-site cogeneration of heat and electricity, energy storage devices, and demand response opportunities. This work is intended to aid commercial and industrial sites in making use of modern computing power and optimization algorithms to make informed, near-optimal decisions under significant uncertainty and complex objective functions. The optimization algorithm uses a finite set of randomly generated future scenarios to approximate the true, stochastic future; constraints are included that prevent solutions to this approximate problem from deviating from solutions to the actual problem. The algorithm is then expressed as a mixed integer linear program, to which a powerful commercial solver is applied. A case study of United States Postal Service Processing and Distribution Centers (P&DC) in four cities and under three different electricity tariff structures is conducted to (1) determine the added value of optimal control to a cogeneration system over current, heuristic control strategies; (2) determine the value of limited electric load curtailment opportunities, with and without cogeneration; and (3) determine the trade-off between least-cost and least-carbon operations of a cogeneration system. Key results for the P&DC sites studied include (1) in locations where the average electricity and natural gas prices suggest a marginally profitable cogeneration system, optimal control can add up to 67% to the value of the cogeneration system; optimal control adds less value in locations where cogeneration is more clearly profitable; (2) optimal control under real-time pricing is (a) more complicated than under typical time-of-use tariffs and (b) at times necessary to make cogeneration economic at all; (3) limited electric load curtailment opportunities can be more valuable as a complement to the cogeneration system than alone; and
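The core idea, committing a here-and-now decision that minimizes expected cost over a finite scenario set, can be sketched with a toy cogeneration dispatch; all prices, loads, and setpoints below are invented, and the actual report formulates this as a mixed integer linear program rather than an enumeration:

```python
# Toy scenario-based dispatch: commit a cogeneration setpoint now, before
# the uncertain grid price is revealed; a finite scenario set stands in
# for the stochastic future. All numbers are invented for illustration.
levels = [0, 50, 100, 150, 200]          # feasible cogen output, kW
demand = 180.0                           # site electric load, kW
gas_cost = 0.06                          # $/kWh to run the cogen unit
price_scenarios = [0.04, 0.09, 0.20]     # grid price scenarios, $/kWh
probs = [0.5, 0.3, 0.2]

def expected_cost(level):
    """Expected hourly cost: cogen fuel plus grid purchases of whatever
    load the cogen does not cover, averaged over the scenarios."""
    return sum(p * (gas_cost * level + price * max(demand - level, 0.0))
               for price, p in zip(price_scenarios, probs))

best = min(levels, key=expected_cost)    # the here-and-now decision
```

With these numbers the expected grid price (0.087 $/kWh) exceeds the cogen fuel cost, so the optimizer runs the unit near, but not above, the site load.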
Recent advances in marching-on-in-time schemes for solving time domain volume integral equations
Sayed, Sadeed Bin; Ulku, Huseyin Arda; Bagci, Hakan
2015-01-01
Transient electromagnetic field interactions on inhomogeneous penetrable scatterers can be analyzed by solving time domain volume integral equations (TDVIEs). TDVIEs are constructed by setting the summation of the incident and scattered field intensities to the total field intensity on the volumetric support of the scatterer. The unknown can be the field intensity or the flux/current density. Representing the total field intensity in terms of the unknown using the relevant constitutive relation, and the scattered field intensity in terms of the spatiotemporal convolution of the unknown with the Green function, yields the final form of the TDVIE. The unknown is expanded in terms of local spatial and temporal basis functions. Inserting this expansion into the TDVIE and testing the resulting equation at discrete times yields a system of equations that is solved by the marching-on-in-time (MOT) scheme. At each time step, a smaller system of equations, termed the MOT system, is solved for the coefficients of the expansion. The right-hand side of this system consists of the tested incident field and the discretized spatio-temporal convolution of the unknown samples computed at the previous time steps with the Green function.
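The MOT recursion described above can be sketched as follows; the interaction matrices here are random stand-ins with a finite memory, not an actual Green-function discretization:

```python
import numpy as np

rng = np.random.default_rng(0)
N, steps, memory = 8, 40, 5      # unknowns, time steps, kernel memory

# Z[k] couples the present unknowns to those k steps in the past
# (random stand-ins for the discretized Green-function interactions).
Z = [np.eye(N) + 0.1 * rng.standard_normal((N, N))]
Z += [0.05 / (k + 1) * rng.standard_normal((N, N)) for k in range(memory)]

V = [rng.standard_normal(N) for _ in range(steps)]   # tested incident field
I = []                                               # coefficients per step
Z0_inv = np.linalg.inv(Z[0])
for j in range(steps):
    rhs = V[j].copy()            # incident field at this step...
    for k in range(1, min(j, memory) + 1):
        rhs -= Z[k] @ I[j - k]   # ...minus the convolution with the past
    I.append(Z0_inv @ rhs)       # solve the small MOT system
```

Each step solves only the small N-by-N system with the self-term Z[0]; all history enters through the right-hand side, which is the structure the abstract describes.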
Asghari, Mohammad H; Park, Yongwoo; Azaña, José
2011-01-17
We propose and experimentally prove a novel design for implementing photonic temporal integrators simultaneously offering a high processing bandwidth and a long operation time window, namely a large time-bandwidth product. The proposed scheme is based on concatenating in series a time-limited ultrafast photonic temporal integrator, e.g. implemented using a fiber Bragg grating (FBG), with a discrete-time (bandwidth limited) optical integrator, e.g. implemented using an optical resonant cavity. This design combines the advantages of these two previously demonstrated photonic integrator solutions, providing a processing speed as high as that of the time-limited ultrafast integrator and an operation time window fixed by the discrete-time integrator. Proof-of-concept experiments are reported using a uniform fiber Bragg grating (as the original time-limited integrator) connected in series with a bulk-optics coherent interferometers' system (as a passive 4-points discrete-time photonic temporal integrator). Using this setup, we demonstrate accurate temporal integration of complex-field optical signals with time-features as fast as ~6 ps, only limited by the processing bandwidth of the FBG integrator, over time durations as long as ~200 ps, which represents a 4-fold improvement over the operation time window (~50 ps) of the original FBG integrator.
Directory of Open Access Journals (Sweden)
Chiba Shigeru
2007-09-01
Abstract Background Computer graphics and virtual reality techniques are useful for developing automatic and effective rehabilitation systems. However, virtual environments that include unstable visual images presented on a wide-field screen or a head-mounted display tend to induce motion sickness. Motion sickness induced while using a rehabilitation system not only inhibits effective training but also may harm patients' health. There are few studies that have objectively evaluated the effects of repetitive exposures to these stimuli on humans. The purpose of this study is to investigate the adaptation to visually induced motion sickness by physiological data. Methods An experiment was carried out in which the same video image was presented to human subjects three times. We evaluated changes in the intensity of motion sickness by a subjective score and the physiological index ρmax, which is defined as the maximum cross-correlation coefficient between heart rate and pulse wave transmission time and is considered to reflect the autonomic nervous activity. Results The results showed adaptation to visually induced motion sickness by the repetitive presentation of the same image in both the subjective and the objective indices. However, there were some subjects whose intensity of sickness increased. Thus, it was possible to identify the part of the video image related to motion sickness by analyzing changes in ρmax with time. Conclusion The physiological index, ρmax, will be a good index for assessing the adaptation process to visually induced motion sickness and may be useful in checking the safety of rehabilitation systems with new image technologies.
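A minimal sketch of how an index like ρmax could be computed from synchronized heart-rate and pulse-wave-transmission-time series; the lag window and the signals are invented for illustration:

```python
import numpy as np

def rho_max(hr, pwtt, max_lag=10):
    """Maximum cross-correlation coefficient between two series over a
    small window of sample lags (a simplified stand-in for rho_max)."""
    hr = (hr - hr.mean()) / hr.std()
    pwtt = (pwtt - pwtt.mean()) / pwtt.std()
    n = len(hr)
    best = -1.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = hr[lag:], pwtt[:n - lag]
        else:
            a, b = hr[:n + lag], pwtt[-lag:]
        best = max(best, float(np.mean(a * b)))
    return best

t = np.linspace(0, 60, 600)                  # 60 s at ~10 Hz
noise = 0.1 * np.random.default_rng(1).standard_normal(600)
hr = np.sin(0.5 * t) + noise                 # synthetic heart-rate rhythm
pwtt = np.sin(0.5 * (t - 0.4))               # same rhythm, slightly delayed
assert rho_max(hr, pwtt) > 0.9               # strongly coupled signals
```

A drop in this coefficient over repeated exposures would be read, as in the study, as a change in the coupling mediated by autonomic nervous activity.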
DEFF Research Database (Denmark)
Svendsen, Morten Bo Søndergaard; Domenici, Paolo; Marras, Stefano
2016-01-01
Billfishes are considered to be among the fastest swimmers in the oceans. Previous studies have estimated the maximum speed of sailfish and black marlin at around 35 m s⁻¹, but theoretical work on cavitation predicts that such extreme speed is unlikely. Here we investigated the maximum speed of sailfish...
2010-04-01
... insurance premium excluded from limitations on maximum mortgage amounts. 203.18c Section 203.18c Housing and...-front mortgage insurance premium excluded from limitations on maximum mortgage amounts. After... LOAN INSURANCE PROGRAMS UNDER NATIONAL HOUSING ACT AND OTHER AUTHORITIES SINGLE FAMILY MORTGAGE...
National Ignition Facility sub-system design requirements integrated timing system SSDR 1.5.3
International Nuclear Information System (INIS)
Wiedwald, J.; Van Aersau, P.; Bliss, E.
1996-01-01
This System Design Requirement document establishes the performance, design, development, and test requirements for the Integrated Timing System, WBS 1.5.3 which is part of the NIF Integrated Computer Control System (ICCS). The Integrated Timing System provides all temporally-critical hardware triggers to components and equipment in other NIF systems
On the relationship between supplier integration and time-to-market
Perols, J.; Zimmermann, C.; Kortmann, S.
2013-01-01
Recent operations management and innovation management research emphasizes the importance of supplier integration. However, the empirical results as to the relationship between supplier integration and time-to-market are ambivalent. To understand this important relationship, we incorporate two major
Liu, Meilin; Bagci, Hakan
2011-01-01
A discontinuous Galerkin finite element method (DG-FEM) with a highly-accurate time integration scheme is presented. The scheme achieves its high accuracy using numerically constructed predictor-corrector integration coefficients. Numerical results
Fully Integrated SAW-Less Discrete-Time Superheterodyne Receiver
Madadi, I.
2015-01-01
There are nowadays strong business and technical demands to integrate radio-frequency (RF) receivers (RX) into a complete system-on-chip (SoC) realized in scaled digital process technology. As a consequence, the RF circuitry has to function well in the face of a reduced power supply (V_DD) while the
Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.
2003-01-01
Demonstrated, through simulation, that stationary autoregressive moving average (ARMA) models may be fitted readily when T>N, using normal theory raw maximum likelihood structural equation modeling. Also provides some illustrations based on real data. (SLD)
Time integration in the code Zgoubi and external usage of PTC's structures
International Nuclear Information System (INIS)
Forest, Etienne; Meot, F.
2006-06-01
The purpose of this note is to describe Zgoubi's integrator and some pitfalls of time-based integration when used in accelerators. We show why the convergence rate of an integrator can be affected by an improper treatment at the boundary when time is used as the integration variable. We also point out how the code PTC can be used as a container by other tracking engines. This work is not yet complete as far as the incorporation of Zgoubi is concerned. (authors)
On the initial condition problem of the time domain PMCHWT surface integral equation
Uysal, Ismail Enes; Bagci, Hakan; Ergin, A. Arif; Ulku, H. Arda
2017-01-01
Non-physical, linearly increasing and constant current components are induced in marching on-in-time solution of time domain surface integral equations when initial conditions on time derivatives of (unknown) equivalent currents are not enforced
Kilcoyne, Isabelle; Nieto, Jorge E; Knych, Heather K; Dechant, Julie E
2018-03-01
OBJECTIVE To determine the maximum concentration (Cmax) of amikacin and time to Cmax (Tmax) in the distal interphalangeal (DIP) joint in horses after IV regional limb perfusion (IVRLP) by use of the cephalic vein. ANIMALS 9 adult horses. PROCEDURES Horses were sedated and restrained in a standing position and then subjected to IVRLP (2 g of amikacin sulfate diluted to 60 mL with saline [0.9% NaCl] solution) by use of the cephalic vein. A pneumatic tourniquet was placed 10 cm proximal to the accessory carpal bone. Perfusate was instilled with a peristaltic pump over a 3-minute period. Synovial fluid was collected from the DIP joint 5, 10, 15, 20, 25, and 30 minutes after IVRLP; the tourniquet was removed after the 20-minute sample was collected. Blood samples were collected from the jugular vein 5, 10, 15, 19, 21, 25, and 30 minutes after IVRLP. Amikacin was quantified with a fluorescence polarization immunoassay. Median Cmax of amikacin and Tmax in the DIP joint were determined. RESULTS 2 horses were excluded because an insufficient volume of synovial fluid was collected. Median Cmax for the DIP joint was 600 μg/mL (range, 37 to 2,420 μg/mL). Median Tmax for the DIP joint was 15 minutes. CONCLUSIONS AND CLINICAL RELEVANCE Tmax of amikacin was 15 minutes after IVRLP in horses and Cmax did not increase > 15 minutes after IVRLP despite maintenance of the tourniquet. Application of a tourniquet for 15 minutes should be sufficient for completion of IVRLP when attempting to achieve an adequate concentration of amikacin in the synovial fluid of the DIP joint.
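Determining Cmax and Tmax from sampled concentrations is straightforward; the sketch below uses made-up numbers for a single hypothetical horse, chosen to echo the reported medians:

```python
# Hypothetical synovial amikacin samples for one horse (times in minutes,
# concentrations in ug/mL); the numbers are invented for illustration.
times = [5, 10, 15, 20, 25, 30]
conc = [120.0, 410.0, 600.0, 580.0, 555.0, 540.0]

cmax = max(conc)                   # maximum observed concentration, Cmax
tmax = times[conc.index(cmax)]     # first sampling time reaching Cmax
assert (cmax, tmax) == (600.0, 15)
```

With sparse sampling like this, Tmax is only resolved to the sampling grid, which is why the study reports medians across horses rather than fitted curve maxima.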
Multigrid time-accurate integration of Navier-Stokes equations
Arnone, Andrea; Liou, Meng-Sing; Povinelli, Louis A.
1993-01-01
Efficient acceleration techniques typical of explicit steady-state solvers are extended to time-accurate calculations. Stability restrictions are greatly reduced by means of a fully implicit time discretization. A four-stage Runge-Kutta scheme with local time stepping, residual smoothing, and multigridding is used instead of traditional time-expensive factorizations. Some applications to natural and forced unsteady viscous flows show the capability of the procedure.
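A sketch of a four-stage Runge-Kutta update of the kind used in such explicit solvers; the stage coefficients are a common textbook choice and not necessarily the paper's, and local time stepping, residual smoothing, and multigrid are omitted:

```python
import math

def rk4_stage_scheme(f, u, dt, alphas=(0.25, 1.0 / 3.0, 0.5, 1.0)):
    """One step of a low-storage four-stage Runge-Kutta scheme: each
    stage re-evaluates f and restarts from the base state u0."""
    u0 = u
    for a in alphas:
        u = u0 + a * dt * f(u)
    return u

# Decay test problem u' = -u, exact solution exp(-t):
u, dt = 1.0, 0.1
for _ in range(10):
    u = rk4_stage_scheme(lambda x: -x, u, dt)
assert abs(u - math.exp(-1.0)) < 1e-4
```

For linear problems this stage sequence reproduces the fourth-order Taylor expansion of the exact amplification factor, while storing only the base state and the current stage.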
System Integration for Real-time Mobile Manipulation
Oftadeh, Reza; Aref, Mohammad M.; Ghabcheloo, Reza; Mattila, Jouni
2014-01-01
Mobile manipulators are one of the most complicated types of mechatronics systems. The performance of these robots in performing complex manipulation tasks is highly correlated with the synchronization and integration of their low-level components. This paper discusses in detail the mechatronics design of a four-wheel-steered mobile manipulator. It presents the manipulator's mechanical structure and electrical interfaces, and designs a low-level software architecture based on embedded PC-based con...
Knowledge Representation and Management, It's Time to Integrate!
Dhombres, F; Charlet, J
2017-08-01
Objectives: To select, present, and summarize the best papers published in 2016 in the field of Knowledge Representation and Management (KRM). Methods: A comprehensive and standardized review of the medical informatics literature was performed based on a PubMed query. Results: Among the 1,421 retrieved papers, the review process resulted in the selection of four best papers focused on the integration of heterogeneous data via the development and alignment of terminological resources. In the first article, the authors provide a curated and standardized version of the publicly available US FDA Adverse Event Reporting System. Such a resource will improve the quality of the underlying data and enable standardized analyses using common vocabularies. The second article describes a project developed to facilitate heterogeneous data integration in the i2b2 framework. Its originality is to allow users to integrate data described in different terminologies and to build a new repository with a unique model able to support the representation of the various data. The third paper is dedicated to modeling the association between multiple phenotypic traits described within the Human Phenotype Ontology (HPO) and the corresponding genotype in the specific context of rare diseases (rare variants). Finally, the fourth paper presents solutions to annotation-ontology mapping in genome-scale data. Of particular interest in this work are the Experimental Factor Ontology (EFO) and its generic association model, the Ontology of Biomedical AssociatioN (OBAN). Conclusion: Ontologies have started to show their efficiency in integrating medical data for various tasks in medical informatics: electronic health records data management, clinical research, and knowledge-based systems development. Georg Thieme Verlag KG Stuttgart.
Liu, Meilin
2011-07-01
A discontinuous Galerkin finite element method (DG-FEM) with a highly-accurate time integration scheme is presented. The scheme achieves its high accuracy using numerically constructed predictor-corrector integration coefficients. Numerical results show that this new time integration scheme uses considerably larger time steps than the fourth-order Runge-Kutta method when combined with a DG-FEM using higher-order spatial discretization/basis functions for high accuracy. © 2011 IEEE.
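A generic predictor-corrector time step, here a second-order Adams-Bashforth predictor with a trapezoidal corrector, as a simple stand-in for the numerically constructed coefficients used in the paper:

```python
import math

def pc_step(f, t, u, f_prev, dt):
    """Adams-Bashforth-2 predictor followed by a trapezoidal corrector;
    f_prev is f evaluated at the previous time level."""
    f_now = f(t, u)
    pred = u + dt * (1.5 * f_now - 0.5 * f_prev)       # predict
    corr = u + 0.5 * dt * (f_now + f(t + dt, pred))    # evaluate + correct
    return corr, f_now

f = lambda t, u: -u            # test problem u' = -u, u(0) = 1
dt, u, t = 0.01, 1.0, 0.0
f_prev = f(t, u)
u, t = u + dt * f_prev, dt     # one Euler step to start the history
for _ in range(99):
    u, f_prev = pc_step(f, t, u, f_prev, dt)
    t += dt
assert abs(u - math.exp(-1.0)) < 1e-3
```

The predictor extrapolates from stored history and the corrector re-evaluates once, which is the P(EC) pattern; higher-order or tuned coefficients change the accuracy and the stable step size, not this structure.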
Photonic integrated circuit as a picosecond pulse timing discriminator.
Lowery, Arthur James; Zhuang, Leimeng
2016-04-18
We report the first experimental demonstration of a compact on-chip optical pulse timing discriminator that is able to provide an output voltage proportional to the relative timing of two 60-ps input pulses on separate paths. The output voltage is intrinsically low-pass-filtered, so the discriminator forms an interface between high-speed optics and low-speed electronics. Potential applications include timing synchronization of multiple pulse trains as a precursor for optical time-division multiplexing, and compact rangefinders with millimeter dimensions.
Explicit solution of Calderon preconditioned time domain integral equations
Ulku, Huseyin Arda; Bagci, Hakan; Michielssen, Eric
2013-01-01
operators and a PE(CE)m type linear multistep to march on in time. Unlike its implicit counterpart, the proposed explicit solver requires the solution of an MOT system with a Gram matrix that is sparse and well-conditioned independent of the time step size
Kwong-Wong-type integral equation on time scales
Directory of Open Access Journals (Sweden)
Baoguo Jia
2011-09-01
Consider the second-order nonlinear dynamic equation $$[r(t)x^\Delta(\rho(t))]^\Delta + p(t)f(x(t)) = 0,$$ where $\rho(t)$ is the backward jump operator. We obtain a Kwong-Wong-type integral equation, that is: if $x(t)$ is a nonoscillatory solution of the above equation on $[T_0,\infty)$, then the integral equation $$\frac{r^\sigma(t)x^\Delta(t)}{f(x^\sigma(t))} = P^\sigma(t) + \int^\infty_{\sigma(t)} \frac{r^\sigma(s)\left[\int^1_0 f'(x_h(s))\,dh\right][x^\Delta(s)]^2}{f(x(s))f(x^\sigma(s))}\,\Delta s$$ is satisfied for $t \geq T_0$, where $P^\sigma(t) = \int^\infty_{\sigma(t)} p(s)\,\Delta s$ and $x_h(s) = x(s) + h\mu(s)x^\Delta(s)$. As an application, we show that the superlinear dynamic equation $$[r(t)x^\Delta(\rho(t))]^\Delta + p(t)f(x(t)) = 0$$ is oscillatory, under certain conditions.
Evaluation of time integration methods for transient response analysis of nonlinear structures
International Nuclear Information System (INIS)
Park, K.C.
1975-01-01
Recent developments in the evaluation of direct time integration methods for the transient response analysis of nonlinear structures are presented. These developments, which are based on local stability considerations of an integrator, show that the interaction between temporal step size and nonlinearities of structural systems has a pronounced effect on both the accuracy and the stability of a given time integration method. The resulting evaluation technique is applied to a model nonlinear problem in order to: 1) demonstrate that it eliminates the present costly process of evaluating time integrators for nonlinear structural systems via extensive numerical experiments; 2) identify the desirable characteristics of time integration methods for nonlinear structural problems; 3) develop improved stiffly-stable methods for application to nonlinear structures. Extension of the methodology to examine the interaction between a time integrator and the approximate treatment of nonlinearities (such as due to pseudo-force or incremental solution procedures) is also discussed. (Auth.)
van Dijk, Aalt D J; Molenaar, Jaap
2017-01-01
The appropriate timing of flowering is crucial for the reproductive success of plants. Hence, intricate genetic networks integrate various environmental and endogenous cues such as temperature or hormonal status. These signals integrate into a network of floral pathway integrator genes. At a quantitative level, it is currently unclear how the impact of genetic variation in signaling pathways on flowering time is mediated by floral pathway integrator genes. Here, using datasets available from the literature, we connect Arabidopsis thaliana flowering time in genetic backgrounds varying in upstream signalling components with the expression levels of floral pathway integrator genes in these genetic backgrounds. Our modelling results indicate that flowering time depends in a quite linear way on expression levels of floral pathway integrator genes. This gradual, proportional response of flowering time to upstream changes enables a gradual adaptation to changing environmental factors such as temperature and light.
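The reported near-linear dependence can be illustrated with a simple least-squares fit; the expression levels and flowering times below are invented, not the study's data:

```python
import numpy as np

# Hypothetical expression of a floral pathway integrator gene across
# genetic backgrounds, and the observed flowering time (days).
expression = np.array([0.2, 0.5, 0.9, 1.4, 2.0, 2.6])
flowering_days = np.array([38.0, 33.5, 28.0, 21.5, 13.0, 5.5])

slope, intercept = np.polyfit(expression, flowering_days, 1)
predicted = slope * expression + intercept
r2 = 1 - (np.sum((flowering_days - predicted) ** 2)
          / np.sum((flowering_days - flowering_days.mean()) ** 2))
# Higher integrator expression -> earlier flowering, in a near-linear way.
assert slope < 0 and r2 > 0.98
```

A high R² for such a one-gene linear model is the quantitative signature of the "gradual, proportional response" the abstract describes.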
Numerical Time Integration Methods for a Point Absorber Wave Energy Converter
DEFF Research Database (Denmark)
Zurkinden, Andrew Stephen; Kramer, Morten
2012-01-01
on a discretization of the convolution integral. The calculation of the convolution integral is performed at each time step regardless of the chosen numerical scheme. In the second model the convolution integral is replaced by a system of linear ordinary differential equations. The formulation of the state...
International Nuclear Information System (INIS)
Dasgupta, I.
1998-01-01
We discuss new bounce-like (but non-time-reversal-invariant) solutions to Euclidean equations of motion, which we dub boomerons. In the Euclidean path integral approach to quantum theories, boomerons make an imaginary contribution to the vacuum energy. The fake vacuum instability can be removed by cancelling boomeron contributions against contributions from time reversed boomerons (anti-boomerons). The cancellation rests on a sign choice whose significance is not completely understood in the path integral method. (orig.)
Integrating Security in Real-Time Embedded Systems
2017-04-26
Operations and Reports (0704-0188), 1215 Jefferson Davis Highway, Suite 1204, Arlington, VA 22202-4302. Respondents should be aware that... operate without impacting the timing and safety constraints of the control logic. Besides, the embedded nature of these systems limits the...only during slack times when no other real-time tasks are running. We propose to measure the security of the system by means of the achievable periodic
van Hoeij, F. B.; Keijsers, R. G. M.; Loffeld, B. C. A. J.; Dun, G.; Stadhouders, P. H. G. M.; Weusten, B. L. A. M.
2015-01-01
In patients undergoing F-18-FDG PET/CT, incidental colonic focal lesions can be indicative of inflammatory, premalignant or malignant lesions. The maximum standardized uptake value (SUVmax) of these lesions, representing the FDG uptake intensity, might be helpful in differentiating malignant from
van der Hout, C.M.; Witbaard, R.; Bergman, M.J.N.; Duineveld, G.C.A.; Rozemeijer, M.J.C.; Gerkema, T.
2017-01-01
The analysis of 1.8 years of data gives an understanding of the response to varying forcing of suspended particulate matter (SPM) and chlorophyll-a (CHL-a) in a coastal turbidity maximum zone (TMZ). Both temporal and vertical concentration variations in the near-bed layer (0–2 m) in the shallow (11
Multiple time step integrators in ab initio molecular dynamics
International Nuclear Information System (INIS)
Luehr, Nathan; Martínez, Todd J.; Markland, Thomas E.
2014-01-01
Multiple time-scale algorithms exploit the natural separation of time-scales in chemical systems to greatly accelerate the efficiency of molecular dynamics simulations. Although the utility of these methods in systems where the interactions are described by empirical potentials is now well established, their application to ab initio molecular dynamics calculations has been limited by difficulties associated with splitting the ab initio potential into fast and slowly varying components. Here we present two schemes that enable efficient time-scale separation in ab initio calculations: one based on fragment decomposition and the other on range separation of the Coulomb operator in the electronic Hamiltonian. We demonstrate for both water clusters and a solvated hydroxide ion that multiple time-scale molecular dynamics allows for outer time steps of 2.5 fs, which are as large as those obtained when such schemes are applied to empirical potentials, while still allowing for bonds to be broken and reformed throughout the dynamics. This permits computational speedups of up to 4.4x, compared to standard Born-Oppenheimer ab initio molecular dynamics with a 0.5 fs time step, while maintaining the same energy conservation and accuracy
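The fast/slow force splitting described above can be sketched with an r-RESPA-style integrator: the slow force is applied as an impulse at the outer step, while the fast force is integrated with a smaller inner step. The toy harmonic forces and parameters below are illustrative, not the ab initio potentials from the paper.

```python
# r-RESPA-style multiple time-step sketch: stiff "fast" force integrated with a
# small inner step, soft "slow" force applied at the outer step only.
def f_fast(x):       # stiff harmonic bond, k_fast = 100 (illustrative)
    return -100.0 * x

def f_slow(x):       # soft background force, k_slow = 1 (illustrative)
    return -1.0 * x

def respa_step(x, v, dt, n_inner, m=1.0):
    """One outer step: slow half-kick, n_inner velocity-Verlet fast steps, slow half-kick."""
    v += 0.5 * dt * f_slow(x) / m
    h = dt / n_inner
    for _ in range(n_inner):
        v += 0.5 * h * f_fast(x) / m
        x += h * v
        v += 0.5 * h * f_fast(x) / m
    v += 0.5 * dt * f_slow(x) / m
    return x, v

def energy(x, v, m=1.0):
    return 0.5 * m * v * v + 0.5 * 100.0 * x * x + 0.5 * 1.0 * x * x

x, v = 1.0, 0.0
e0 = energy(x, v)
for _ in range(1000):
    x, v = respa_step(x, v, dt=0.05, n_inner=10)
drift = abs(energy(x, v) - e0) / e0   # symplectic splitting keeps this small
```

Because the splitting is symplectic, the energy error stays bounded even though the outer step is ten times larger than the inner one, which is the efficiency gain the abstract describes.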
Chao, W. C.
1982-01-01
With appropriate modifications, a recently proposed explicit-multiple-time-step scheme (EMTSS) is incorporated into the UCLA model. In this scheme, the linearized terms in the governing equations that generate the gravity waves are split into different vertical modes. Each mode is integrated with an optimal time step, and at periodic intervals these modes are recombined. The other terms are integrated with a time step dictated by the CFL condition for low-frequency waves. This large time step requires a special modification of the advective terms in the polar region to maintain stability. Test runs for 72 h show that EMTSS is a stable, efficient and accurate scheme.
Time, dynamics and chaos. Integrating Poincare's "non-integrable systems"
Energy Technology Data Exchange (ETDEWEB)
Prigogine, I.
1990-01-01
This report discusses the nature of time. The author attempts to resolve the conflict between the concept of time reversibility in classical and quantum mechanics with the macroscopic world's irreversibility of time. (LSP)
Silva, Bhagya Nathali; Khan, Murad; Han, Kijun
2018-01-01
The emergence of smart devices and smart appliances has highly favored the realization of the smart home concept. Modern smart home systems handle a wide range of user requirements. Energy management and energy conservation are in the spotlight when deploying sophisticated smart homes. However, the performance of energy management systems is highly influenced by user behaviors and adopted energy management approaches. Appliance scheduling is widely accepted as an effective mechanism to manage domestic energy consumption. Hence, we propose a smart home energy management system that reduces unnecessary energy consumption by integrating an automated switching off system with load balancing and appliance scheduling algorithm. The load balancing scheme acts according to defined constraints such that the cumulative energy consumption of the household is managed below the defined maximum threshold. The scheduling of appliances adheres to the least slack time (LST) algorithm while considering user comfort during scheduling. The performance of the proposed scheme has been evaluated against an existing energy management scheme through computer simulation. The simulation results have revealed a significant improvement gained through the proposed LST-based energy management scheme in terms of cost of energy, along with reduced domestic energy consumption facilitated by an automated switching off mechanism. PMID:29495346
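The least-slack-time selection with a cumulative power cap can be sketched as follows; the appliance names, powers, runtimes, and deadlines are invented for illustration and are not taken from the paper.

```python
# Minimal least-slack-time (LST) scheduling sketch with a power threshold.
def lst_schedule(appliances, max_power, now=0):
    """Admit appliances in least-slack order until the power cap is reached."""
    def slack(a):  # slack = time to deadline minus remaining runtime
        return a["deadline"] - now - a["runtime"]
    chosen, load = [], 0.0
    for a in sorted(appliances, key=slack):
        if load + a["power"] <= max_power:
            chosen.append(a["name"])
            load += a["power"]
    return chosen

appliances = [
    {"name": "washer",     "power": 2.0, "runtime": 2, "deadline": 8},   # slack 6
    {"name": "dishwasher", "power": 1.5, "runtime": 1, "deadline": 3},   # slack 2
    {"name": "dryer",      "power": 3.0, "runtime": 3, "deadline": 10},  # slack 7
]
running = lst_schedule(appliances, max_power=4.0)  # dryer deferred: cap reached
```

The most urgent appliance (least slack) is admitted first, and the load-balancing constraint keeps the cumulative draw below the threshold, mirroring the combination of LST scheduling and load balancing described in the abstract.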
Numerical integration of the Teukolsky equation in the time domain
International Nuclear Information System (INIS)
Pazos-Avalos, Enrique; Lousto, Carlos O.
2005-01-01
We present a fourth-order convergent (2+1)-dimensional, numerical formalism to solve the Teukolsky equation in the time domain. Our approach is first to rewrite the Teukolsky equation as a system of first-order differential equations. In this way we get a system that has the form of an advection equation. This is then used in combination with a series expansion of the solution in powers of time. To obtain a fourth-order scheme we kept terms up to fourth derivative in time and use the advectionlike system of differential equations to substitute the temporal derivatives by spatial derivatives. This scheme is applied to evolve gravitational perturbations in the Schwarzschild and Kerr backgrounds. Our numerical method proved to be stable and fourth-order convergent in r* and θ directions. The correct power-law tail, ∼1/t^(2l+3), for general initial data, and ∼1/t^(2l+4), for time-symmetric data, was found in our runs. We noted that it is crucial to resolve accurately the angular dependence of the mode at late times in order to obtain these values of the exponents in the power-law decay. In other cases, when the decay was too fast and round-off error was reached before a tail was developed, then the quasinormal modes frequencies provided a test to determine the validity of our code
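Extracting a power-law tail exponent of the kind quoted above is commonly done by a least-squares fit in log-log coordinates. The sketch below uses synthetic late-time data with l = 1 (expected exponent 2l+3 = 5); it is an illustration of the fitting step, not the paper's evolution code.

```python
import math

# Fit the exponent p of phi(t) ~ t^-p from late-time samples by
# ordinary least squares on (log t, log phi). Synthetic data, p = 5.
ts = [10.0 + i for i in range(200)]
phi = [t ** -5.0 for t in ts]

xs = [math.log(t) for t in ts]
ys = [math.log(p) for p in phi]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)      # slope = -p
```

On exact power-law data the recovered slope is -5 to machine precision; with real numerical data the fit window must start late enough that quasinormal ringing has decayed, as the abstract notes.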
Approximate maximum parsimony and ancestral maximum likelihood.
Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat
2010-01-01
We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.
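For a fixed tree, the parsimony score that MP minimizes can be computed per character with Fitch's small-parsimony algorithm. The sketch below scores one character on a four-leaf tree; note this only evaluates a given tree, whereas the Steiner-tree formulation in the paper addresses the hard problem of searching over trees.

```python
# Fitch's small-parsimony algorithm on a fixed binary tree (one character).
def fitch(tree, leaf_states):
    """tree: nested 2-tuples with leaf-name strings; returns (root state set, mutation count)."""
    if isinstance(tree, str):
        return {leaf_states[tree]}, 0
    (ls, lc), (rs, rc) = fitch(tree[0], leaf_states), fitch(tree[1], leaf_states)
    inter = ls & rs
    if inter:                       # children agree: no mutation charged here
        return inter, lc + rc
    return ls | rs, lc + rc + 1     # children disagree: charge one mutation

tree = (("A", "B"), ("C", "D"))                      # illustrative topology
states = {"A": "G", "B": "G", "C": "T", "D": "G"}    # illustrative character
root_set, score = fitch(tree, states)                # one mutation suffices
```

Summing this score over all characters gives the parsimony length of the tree; MP then asks for the tree minimizing that total.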
Clark, P.U.; Dyke, A.S.; Shakun, J.D.; Carlson, A.E.; Clark, J.; Wohlfarth, B.; Mitrovica, J.X.; Hostetler, S.W.; McCabe, A.M.
2009-01-01
We used 5704 ¹⁴C, ¹⁰Be, and ³He ages that span the interval from 10,000 to 50,000 years ago (10 to 50 ka) to constrain the timing of the Last Glacial Maximum (LGM) in terms of global ice-sheet and mountain-glacier extent. Growth of the ice sheets to their maximum positions occurred between 33.0 and 26.5 ka in response to climate forcing from decreases in northern summer insolation, tropical Pacific sea surface temperatures, and atmospheric CO2. Nearly all ice sheets were at their LGM positions from 26.5 ka to 19 to 20 ka, corresponding to minima in these forcings. The onset of Northern Hemisphere deglaciation 19 to 20 ka was induced by an increase in northern summer insolation, providing the source for an abrupt rise in sea level. The onset of deglaciation of the West Antarctic Ice Sheet occurred between 14 and 15 ka, consistent with evidence that this was the primary source for an abrupt rise in sea level ∼14.5 ka.
An integrated portable hand-held analyser for real-time isothermal nucleic acid amplification
Energy Technology Data Exchange (ETDEWEB)
Smith, Matthew C. [College of Marine Science, University of South Florida, St Petersburg, FL (United States)], E-mail: msmith@marine.usf.edu; Steimle, George; Ivanov, Stan; Holly, Mark; Fries, David P. [College of Marine Science, University of South Florida, St Petersburg, FL (United States)
2007-08-29
A compact hand-held heated fluorometric instrument for performing real-time isothermal nucleic acid amplification and detection is described. The optoelectronic instrument combines a Printed Circuit Board/Micro Electro Mechanical Systems (PCB/MEMS) reaction detection/chamber containing an integrated resistive heater with attached miniature LED light source and photo-detector and a disposable glass waveguide capillary to enable a mini-fluorometer. The fluorometer is fabricated and assembled in planar geometry, rolled into a tubular format and packaged with custom control electronics to form the hand-held reactor. Positive or negative results for each reaction are displayed to the user using an LED interface. Reaction data is stored in FLASH memory for retrieval via an in-built USB connection. Operating on one disposable 3 V lithium battery, more than twelve 60 min reactions can be performed. Maximum dimensions of the system are 150 mm (h) x 48 mm (d) x 40 mm (w), and the total instrument weight (with battery) is 140 g. The system produces comparable results to laboratory instrumentation when performing a real-time nucleic acid sequence-based amplification (NASBA) reaction, and also displayed comparable precision, accuracy and resolution to laboratory-based real-time nucleic acid amplification instrumentation. A good linear response (R² = 0.948) to fluorescein gradients ranging from 0.5 to 10 μM was also obtained from the instrument, indicating that it may be utilized for other fluorometric assays. This instrument enables an inexpensive, compact approach to in-field genetic screening, providing results comparable to laboratory equipment with rapid user feedback as to the status of the reaction.
A unified approach for proportional-integral-derivative controller design for time delay processes
International Nuclear Information System (INIS)
Shamsuzzoha, Mohammad
2015-01-01
An analytical design method for PI/PID controller tuning is proposed for several types of processes with time delay. A single tuning formula gives enhanced disturbance rejection performance. The design method is based on the IMC approach, which has a single tuning parameter to adjust the performance and robustness of the controller. A simple tuning formula gives consistently better performance than several well-known methods at the same degree of robustness for stable and integrating processes. The performance for the unstable process has been compared with other recently published methods, again showing significant improvement for the proposed method. Furthermore, the robustness of the controller is investigated by inserting a perturbation uncertainty in all parameters simultaneously, again showing results comparable with other methods. An analysis has been performed of the uncertainty margin in the different process parameters for robust controller design. It gives guidelines for the M_s setting of the PI controller design based on the uncertainty of the process parameters. For the selection of the closed-loop time constant τ_c, a guideline is provided over a broad range of θ/τ ratios on the basis of the peak of maximum uncertainty (M_s). A comparison of the IAE has been conducted over a wide range of θ/τ ratios for the first-order time delay process. The proposed method shows the minimum IAE compared to SIMC, while Lee et al. shows poor disturbance rejection in the lag-dominant process. In the simulation study, the controllers were tuned to have the same degree of robustness, as measured by M_s, to obtain a reasonable comparison
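The IMC idea of one tuning parameter τ_c for a first-order-plus-time-delay (FOPDT) process can be sketched as below. The formulas are standard SIMC-style rules used for illustration; the paper's own tuning rule differs in detail.

```python
# IMC-style PI tuning sketch for G(s) = K * exp(-theta*s) / (tau*s + 1).
# SIMC-like formulas (illustrative, not the paper's exact rule).
def imc_pi(K, tau, theta, tau_c):
    """Return (Kc, tau_I); tau_c is the single closed-loop tuning parameter."""
    Kc = tau / (K * (tau_c + theta))
    # Capping the integral time improves disturbance rejection for
    # lag-dominant (large tau/theta) processes.
    tau_I = min(tau, 4.0 * (tau_c + theta))
    return Kc, tau_I

# Lag-dominant example: theta/tau = 0.1, with tau_c chosen equal to theta.
Kc, tau_I = imc_pi(K=1.0, tau=10.0, theta=1.0, tau_c=1.0)
```

Decreasing τ_c makes the loop faster but less robust (larger M_s); increasing it does the reverse, which is why a single τ_c guideline over θ/τ ratios is useful.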
Real-Time Integrated Re-scheduling for Tramway Operations
Cheung, Kam-Fung; Kuo, Yong-Hong; Lai, S.W.; Leung, Janny M.Y.
2018-01-01
Our work aims to develop practical solution approaches for real-time dispatch of crews and vehicles for disruption management. The practical motivation for our research arose from the operations of a public tramway system in Hong Kong. The tram system shares the road with other vehicular traffic in
Integrated capacity and inventory management with capacity acquisition lead times
Mincsovics, G.Z.; Tan, T.; Alp, O.
2009-01-01
We model a make-to-stock production system that utilizes permanent and contingent capacity to meet non-stationary stochastic demand, where a constant lead time is associated with the acquisition of contingent capacity. We determine the structure of the optimal solution concerning both the
The Feynman integral for time-dependent anharmonic oscillators
International Nuclear Information System (INIS)
Grothaus, M.; Khandekar, D.C.; da Silva, J.L.; Streit, L.
1997-01-01
We review some basic notions and results of white noise analysis that are used in the construction of the Feynman integrand as a generalized white noise functional. We show that the Feynman integrand for the time-dependent harmonic oscillator in an external potential is a Hida distribution. copyright 1997 American Institute of Physics
Representing real time semantics for distributed application integration
Poon, P.M.S.; Dillon, T.S.; Chang, E.; Feng, L.
Traditional real time system design and development are driven by technological requirements. With the ever growing complexity of requirements and the advances in software design, the alignment of focus has gradually been shifted to the perspective of business and industrial needs. This paper
Integration of the time-dependent heat equation in the fuel rod performance program IAMBUS
International Nuclear Information System (INIS)
West, G.
1982-01-01
An iterative numerical method for integration of the time-dependent heat equation is described. No presuppositions are made for the dependency of the thermal conductivity and heat capacity on space, time and temperature. (orig.) [de
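A minimal sketch of time-stepping the heat equation with a temperature-dependent conductivity is shown below. It uses a simple explicit update with face-averaged conductivities and is purely illustrative; it is not the IAMBUS scheme, which is implicit/iterative.

```python
# One explicit finite-difference step for dT/dt = (1/rho_c) d/dx( k(T) dT/dx )
# on a 1D grid with fixed-temperature ends; k may depend on temperature.
def heat_step(T, dx, dt, k, rho_c):
    n = len(T)
    Tn = T[:]
    for i in range(1, n - 1):
        k_r = 0.5 * (k(T[i]) + k(T[i + 1]))   # conductivity at right face
        k_l = 0.5 * (k(T[i]) + k(T[i - 1]))   # conductivity at left face
        Tn[i] = T[i] + dt / (rho_c * dx * dx) * (
            k_r * (T[i + 1] - T[i]) - k_l * (T[i] - T[i - 1]))
    return Tn

T = [0.0, 0.0, 100.0, 0.0, 0.0]               # hot spot in the middle
for _ in range(200):                           # dt chosen inside the stability limit
    T = heat_step(T, dx=1.0, dt=0.1, k=lambda t: 1.0 + 0.001 * t, rho_c=1.0)
```

The explicit step is stability-limited (roughly k·dt/(ρc·dx²) ≤ 1/2), which is one reason production fuel-rod codes prefer implicit, iterative integration as the abstract describes.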
International Nuclear Information System (INIS)
Anon.
1979-01-01
This chapter presents a historic overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed
An efficient explicit marching on in time solver for magnetic field volume integral equation
Sayed, Sadeed Bin; Ulku, H. Arda; Bagci, Hakan
2015-01-01
An efficient explicit marching on in time (MOT) scheme for solving the magnetic field volume integral equation is proposed. The MOT system is cast in the form of an ordinary differential equation and is integrated in time using a PE(CE)^m multistep
High resolution time integration for SN radiation transport
International Nuclear Information System (INIS)
Thoreson, Greg; McClarren, Ryan G.; Chang, Jae H.
2009-01-01
First-order, second-order, and high resolution time discretization schemes are implemented and studied for the discrete ordinates (S_N) equations. The high resolution method employs a rate of convergence better than first-order, but also suppresses artificial oscillations introduced by second-order schemes in hyperbolic partial differential equations. The high resolution method achieves these properties by nonlinearly adapting the time stencil to use a first-order method in regions where oscillations could be created. We employ a quasi-linear solution scheme to solve the nonlinear equations that arise from the high resolution method. All three methods were compared for accuracy and convergence rates. For non-absorbing problems, both second-order and high resolution converged to the same solution as the first-order with better convergence rates. High resolution is more accurate than first-order and matches or exceeds the second-order method
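The convergence rates being compared can be measured numerically by halving the time step and taking the log2 error ratio. The sketch below does this for a model ODE, with backward Euler standing in for the first-order scheme and Crank-Nicolson for the second-order one; the S_N transport discretizations themselves are not reproduced.

```python
import math

# Observed temporal convergence order on dy/dt = -y, y(0) = 1, t in [0, 1].
def solve(n, theta):
    """theta = 1.0: backward Euler (1st order); theta = 0.5: Crank-Nicolson (2nd order)."""
    dt, y = 1.0 / n, 1.0
    for _ in range(n):
        # theta-method update: (y_new - y)/dt = -(theta*y_new + (1-theta)*y)
        y = y * (1 - (1 - theta) * dt) / (1 + theta * dt)
    return y

def order(theta):
    exact = math.exp(-1.0)
    e1 = abs(solve(100, theta) - exact)
    e2 = abs(solve(200, theta) - exact)
    return math.log2(e1 / e2)     # ~1 for first order, ~2 for second order

p_be, p_cn = order(1.0), order(0.5)
```

A high-resolution scheme aims for the second-order rate in smooth regions while falling back to the non-oscillatory first-order stencil near sharp fronts.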
An integral time series on simulated labeling using fractal structure
International Nuclear Information System (INIS)
Djainal, D.D.
1997-01-01
This research deals with the detection of time series of vertical two-phase flow, in an attempt to develop an objective indicator of time series flow patterns. One new method is fractal analysis, which can complement conventional methods in the description of highly irregular fluctuations. In the present work, fractal analysis is applied to analyze a simulated boiling coolant signal. These simulated signals are built by summing random elements in small subchannels of the coolant channel. Two modes are defined, and both modes are characterized by their void fractions. In the case of unimodal-PDF signals, the difference between these modes is relatively small. On the other hand, bimodal-PDF signals have a relatively large range. In this research, the fractal dimension can indicate the character of these simulated signals
Staatz, Christine E; Tett, Susan E
2011-12-01
This review seeks to summarize the available data about Bayesian estimation of area under the plasma concentration-time curve (AUC) and dosage prediction for mycophenolic acid (MPA) and evaluate whether sufficient evidence is available for routine use of Bayesian dosage prediction in clinical practice. A literature search identified 14 studies that assessed the predictive performance of maximum a posteriori Bayesian estimation of MPA AUC and one report that retrospectively evaluated how closely dosage recommendations based on Bayesian forecasting achieved targeted MPA exposure. Studies to date have mostly been undertaken in renal transplant recipients, with limited investigation in patients treated with MPA for autoimmune disease or haematopoietic stem cell transplantation. All of these studies have involved use of the mycophenolate mofetil (MMF) formulation of MPA, rather than the enteric-coated mycophenolate sodium (EC-MPS) formulation. Bias associated with estimation of MPA AUC using Bayesian forecasting was generally less than 10%. However, some difficulty with imprecision was evident, with values ranging from 4% to 34% (based on estimation involving two or more concentration measurements). Evaluation of whether MPA dosing decisions based on Bayesian forecasting (by the free website service https://pharmaco.chu-limoges.fr) achieved target drug exposure has only been undertaken once. When MMF dosage recommendations were applied by clinicians, a higher proportion (72-80%) of subsequent estimated MPA AUC values were within the 30-60 mg · h/L target range, compared with when dosage recommendations were not followed (only 39-57% within target range). Such findings provide evidence that Bayesian dosage prediction is clinically useful for achieving target MPA AUC. This study, however, was retrospective and focussed only on adult renal transplant recipients. Furthermore, in this study, Bayesian-generated AUC estimations and dosage predictions were not compared
Maximum phonation time in pre-school children
Directory of Open Access Journals (Sweden)
Carla Aparecida Cielo
2008-08-01
Full Text Available Studies of maximum phonation time (MPT) in children have reported differing results, showing that this measure can reflect the neuromuscular and aerodynamic control of vocal production and can be used as an indicator for other forms of assessment, both qualitative and objective. AIM: To verify the MPT measures of 23 pre-school children aged between four years and six years and eight months. MATERIALS AND METHODS: The sampling process comprised a questionnaire sent to parents, auditory screening, and a perceptual-auditory voice assessment using the RASAT scale. Data collection consisted of the MPTs. STUDY DESIGN: Prospective cross-sectional. RESULTS: The mean MPTs for /a/, /s/ and /z/ were 7.42 s, 6.35 s and 7.19 s; the MPT for /a/ at age six was significantly longer than at age four; all MPTs increased with age; and the s/z ratio was close to one at all ages. CONCLUSIONS: The MPT values were higher than those reported in Brazilian studies and lower than those reported in international studies. Moreover, the age groups analyzed in the present study appear to lie within a period of neural and muscular maturation, with immaturity most evident at age four.
Jenkins, David R.; Basden, Alastair; Myers, Richard M.
2018-05-01
We propose a solution to the increased computational demands of Extremely Large Telescope (ELT) scale adaptive optics (AO) real-time control with the Intel Xeon Phi Knights Landing (KNL) Many Integrated Core (MIC) Architecture. The computational demands of an AO real-time controller (RTC) scale with the fourth power of telescope diameter and so the next generation ELTs require orders of magnitude more processing power for the RTC pipeline than existing systems. The Xeon Phi contains a large number (≥64) of low power x86 CPU cores and high bandwidth memory integrated into a single socketed server CPU package. The increased parallelism and memory bandwidth are crucial to providing the performance for reconstructing wavefronts with the required precision for ELT scale AO. Here, we demonstrate that the Xeon Phi KNL is capable of performing ELT scale single conjugate AO real-time control computation at over 1.0 kHz with less than 20 μs RMS jitter. We have also shown that with a wavefront sensor camera attached the KNL can process the real-time control loop at up to 966 Hz, the maximum frame-rate of the camera, with jitter remaining below 20 μs RMS. Future studies will involve exploring the use of a cluster of Xeon Phis for the real-time control of the MCAO and MOAO regimes of AO. We find that the Xeon Phi is highly suitable for ELT AO real time control.
On the initial condition problem of the time domain PMCHWT surface integral equation
Uysal, Ismail Enes
2017-05-13
Non-physical, linearly increasing and constant current components are induced in marching on-in-time solution of time domain surface integral equations when initial conditions on time derivatives of (unknown) equivalent currents are not enforced properly. This problem can be remedied by solving the time integral of the surface integral for auxiliary currents that are defined to be the time derivatives of the equivalent currents. Then the equivalent currents are obtained by numerically differentiating the auxiliary ones. In this work, this approach is applied to the marching on-in-time solution of the time domain Poggio-Miller-Chan-Harrington-Wu-Tsai surface integral equation enforced on dispersive/plasmonic scatterers. Accuracy of the proposed method is demonstrated by a numerical example.
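The integrate-then-differentiate idea behind the auxiliary currents can be sketched on a scalar signal: solve for the time integral, then recover the original quantity by finite differences. The signal below is synthetic; the marching-on-in-time solver itself is not reproduced.

```python
import math

# Suppose the "auxiliary" unknown we solved for is the time integral
# I(t) of a current j(t) = sin(t), i.e. I(t) = 1 - cos(t).
dt = 0.01
t = [i * dt for i in range(1001)]
I = [1.0 - math.cos(ti) for ti in t]

# Recover j = dI/dt by central differences (one-sided at the two ends).
j = [(I[min(i + 1, 1000)] - I[max(i - 1, 0)]) /
     (dt * (min(i + 1, 1000) - max(i - 1, 0))) for i in range(1001)]

err = max(abs(j[i] - math.sin(t[i])) for i in range(1001))
```

Because differentiation is applied after the time marching, any spurious constant component in the auxiliary unknown is annihilated, and a spurious linearly growing component becomes a constant offset, which is the remedy the abstract describes.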
Haseli, Y.; Oijen, van J.A.; Goey, de L.P.H.
2012-01-01
The main idea of this paper is to establish a simple approach for prediction of the ignition time of a wood particle assuming that the thermo-physical properties remain constant and ignition takes place at a characteristic ignition temperature. Using a time and space integral method, explicit
Integrated systems for real time data acquisition and processing
International Nuclear Information System (INIS)
Oprea, I.; Oprea, M.
1995-01-01
The continuous measuring of all the nuclear and dosimetric parameters in a nuclear facility is essential for personnel and environment protection. Considering the vast amount of information and data required to keep an updated overview of a situation, the need for an efficient acquisition, processing and information system is evident. The paper analyses the special requirements concerning real-time inter-process communication and proposes a modern concept based on the latest trends in the field of computer technology and operating systems. (author)
High resolution time integration for Sn radiation transport
International Nuclear Information System (INIS)
Thoreson, Greg; McClarren, Ryan G.; Chang, Jae H.
2008-01-01
First order, second order and high resolution time discretization schemes are implemented and studied for the S_n equations. The high resolution method employs a rate of convergence better than first order, but also suppresses artificial oscillations introduced by second order schemes in hyperbolic differential equations. All three methods were compared for accuracy and convergence rates. For non-absorbing problems, both second order and high resolution converged to the same solution as the first order with better convergence rates. High resolution is more accurate than first order and matches or exceeds the second order method. (authors)
Krasilenko, Vladimir G.; Nikolsky, Alexander I.; Lazarev, Alexander A.; Lazareva, Maria V.
2008-03-01
In the paper, the relevance of neurophysiologically motivated neuron arrays with flexibly programmable functions and operations, with the possibility to select the required accuracy and type of nonlinear transformation and learning, is shown. We consider neuron design and simulation results of multichannel spatio-temporal algebraic accumulation-integration of optical signals. Advantages for nonlinear transformation and summation-integration are shown. The offered circuits are simple and can have intellectual properties such as learning and adaptation. The integrator-neuron is based on CMOS current mirrors and comparators. The performance: power consumption 100...500 μW, signal period 0.1...1 ms, input optical signal power 0.2...20 μW, time delays less than 1 μs, number of optical signals 2...10, integration time 10...100 signal periods, accuracy (integration error) about 1%. Various modifications of the neuron-integrators with improved performance and for different applications are considered in the paper.
Integrated active sensor system for real time vibration monitoring.
Liang, Qijie; Yan, Xiaoqin; Liao, Xinqin; Cao, Shiyao; Lu, Shengnan; Zheng, Xin; Zhang, Yue
2015-11-05
We report a self-powered, lightweight and cost-effective active sensor system for vibration monitoring with multiplexed operation based on contact electrification between the sensor and detected objects. The as-fabricated sensor matrix is capable of monitoring and mapping the vibration state of a large number of units. The monitoring contents include the on-off state, vibration frequency and vibration amplitude of each unit. The active sensor system delivers a detection range of 0-60 Hz, high accuracy (relative error below 0.42%), and long-term stability (10,000 cycles). In the time dimension, the sensor can provide a memory of the vibration process by recording the outputs of the sensor system over an extended period of time. Besides, the developed sensor system can realize detection in both contact mode and non-contact mode. Its high performance is not sensitive to the shape or the conductivity of the detected object. With these features, the active sensor system has great potential in automatic control, remote operation, surveillance and security systems.
Multi-channel time-division integrator in HL-2A
International Nuclear Information System (INIS)
Yan Ji
2008-01-01
HL-2A is China's first Tokamak device with a divertor configuration (a magnetic confinement controlled nuclear fusion device). Finding out the details of the on-going fusion reaction at different times is of great significance for achieving controlled nuclear fusion. We developed a new type of multi-channel time-division integrator for HL-2A. It has functions of automatic cutting-off of negative pulses in the input signals, optional integrating time-division spacing of 0.2-1 ms, a TTL start trigger signal, automatic timed operation for 20 s, and simultaneous integration of 10 channels. (authors)
Maximum-entropy description of animal movement.
Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M
2015-03-01
We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall within this class of maximum-entropy distributions when the constraints are purely kinematic.
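The Ornstein-Uhlenbeck process named in this abstract admits an exact discrete-time update, which makes it a convenient baseline for continuous-time movement models; a minimal sketch (the function name and parameters are illustrative, not taken from the paper):

```python
import math
import random

def simulate_ou(theta, sigma, x0, dt, n, seed=0):
    """Exact discretization of dX = -theta*X dt + sigma dW.

    Conditional on X_t, X_{t+dt} is Gaussian with mean X_t*exp(-theta*dt)
    and variance sigma^2/(2*theta) * (1 - exp(-2*theta*dt)).
    """
    rng = random.Random(seed)
    a = math.exp(-theta * dt)
    sd = math.sqrt(sigma ** 2 / (2 * theta) * (1 - a * a))
    xs = [x0]
    for _ in range(n):
        xs.append(a * xs[-1] + sd * rng.gauss(0.0, 1.0))
    return xs
```

In the limit of small theta the update approaches Brownian motion, the other limiting model in the hierarchy; the long-run variance of the simulated track tends to sigma^2/(2*theta).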
Conservative fourth-order time integration of non-linear dynamic systems
DEFF Research Database (Denmark)
Krenk, Steen
2015-01-01
An energy conserving time integration algorithm with fourth-order accuracy is developed for dynamic systems with nonlinear stiffness. The discrete formulation is derived by integrating the differential state-space equations of motion over the integration time increment, and then evaluating the resulting time integrals of the inertia and stiffness terms via integration by parts. This process introduces the time derivatives of the state space variables, and these are then substituted from the original state-space differential equations. The resulting discrete form of the state-space equations is a direct fourth-order accurate representation of the original differential equations. This fourth-order form is energy conserving for systems with force potential in the form of a quartic polynomial in the displacement components. Energy conservation for a force potential of general form is obtained…
Integrated Monitoring of Mola mola Behaviour in Space and Time.
Sousa, Lara L; López-Castejón, Francisco; Gilabert, Javier; Relvas, Paulo; Couto, Ana; Queiroz, Nuno; Caldas, Renato; Dias, Paulo Sousa; Dias, Hugo; Faria, Margarida; Ferreira, Filipe; Ferreira, António Sérgio; Fortuna, João; Gomes, Ricardo Joel; Loureiro, Bruno; Martins, Ricardo; Madureira, Luis; Neiva, Jorge; Oliveira, Marina; Pereira, João; Pinto, José; Py, Frederic; Queirós, Hugo; Silva, Daniel; Sujit, P B; Zolich, Artur; Johansen, Tor Arne; de Sousa, João Borges; Rajan, Kanna
2016-01-01
Over the last decade, ocean sunfish movements have been monitored worldwide using various satellite tracking methods. This study reports the near-real time monitoring of fine-scale (< 10 m) behaviour of sunfish. The study was conducted in southern Portugal in May 2014 and involved satellite tags and underwater and surface robotic vehicles to measure both the movements and the contextual environment of the fish. A total of four individuals were tracked using custom-made GPS satellite tags providing geolocation estimates of fine-scale resolution. These accurate positions further informed sunfish areas of restricted search (ARS), which were directly correlated to steep thermal frontal zones. Simultaneously, and for two different occasions, an Autonomous Underwater Vehicle (AUV) video-recorded the path of the tracked fish and detected buoyant particles in the water column. Importantly, the densities of these particles were also directly correlated to steep thermal gradients. Thus, both sunfish foraging behaviour (ARS) and possibly prey densities were found to be influenced by analogous environmental conditions. In addition, the dynamic structure of the water transited by the tracked individuals was described by a Lagrangian modelling approach. The model informed the distribution of zooplankton in the region, both horizontally and in the water column, and the resultant simulated densities positively correlated with the sunfish ARS behaviour estimator (rs = 0.184, p < 0.001). The model also revealed that tracked fish opportunistically displace with respect to subsurface current flow. Thus, we show how physical forcing and current structure provide a rationale for a predator's fine-scale behaviour observed over two weeks in May 2014.
Integrated Monitoring of Mola mola Behaviour in Space and Time
Sousa, Lara L.; López-Castejón, Francisco; Gilabert, Javier; Relvas, Paulo; Couto, Ana; Queiroz, Nuno; Caldas, Renato; Dias, Paulo Sousa; Dias, Hugo; Faria, Margarida; Ferreira, Filipe; Ferreira, António Sérgio; Fortuna, João; Gomes, Ricardo Joel; Loureiro, Bruno; Martins, Ricardo; Madureira, Luis; Neiva, Jorge; Oliveira, Marina; Pereira, João; Pinto, José; Py, Frederic; Queirós, Hugo; Silva, Daniel; Sujit, P. B.; Zolich, Artur; Johansen, Tor Arne; de Sousa, João Borges; Rajan, Kanna
2016-01-01
Over the last decade, ocean sunfish movements have been monitored worldwide using various satellite tracking methods. This study reports the near-real time monitoring of fine-scale (< 10 m) behaviour of sunfish. The study was conducted in southern Portugal in May 2014 and involved satellite tags and underwater and surface robotic vehicles to measure both the movements and the contextual environment of the fish. A total of four individuals were tracked using custom-made GPS satellite tags providing geolocation estimates of fine-scale resolution. These accurate positions further informed sunfish areas of restricted search (ARS), which were directly correlated to steep thermal frontal zones. Simultaneously, and for two different occasions, an Autonomous Underwater Vehicle (AUV) video-recorded the path of the tracked fish and detected buoyant particles in the water column. Importantly, the densities of these particles were also directly correlated to steep thermal gradients. Thus, both sunfish foraging behaviour (ARS) and possibly prey densities were found to be influenced by analogous environmental conditions. In addition, the dynamic structure of the water transited by the tracked individuals was described by a Lagrangian modelling approach. The model informed the distribution of zooplankton in the region, both horizontally and in the water column, and the resultant simulated densities positively correlated with the sunfish ARS behaviour estimator (rs = 0.184, p < 0.001). The model also revealed that tracked fish opportunistically displace with respect to subsurface current flow. Thus, we show how physical forcing and current structure provide a rationale for a predator's fine-scale behaviour observed over two weeks in May 2014. PMID:27494028
Integrated Monitoring of Mola mola Behaviour in Space and Time.
Directory of Open Access Journals (Sweden)
Lara L Sousa
Full Text Available Over the last decade, ocean sunfish movements have been monitored worldwide using various satellite tracking methods. This study reports the near-real time monitoring of fine-scale (< 10 m) behaviour of sunfish. The study was conducted in southern Portugal in May 2014 and involved satellite tags and underwater and surface robotic vehicles to measure both the movements and the contextual environment of the fish. A total of four individuals were tracked using custom-made GPS satellite tags providing geolocation estimates of fine-scale resolution. These accurate positions further informed sunfish areas of restricted search (ARS), which were directly correlated to steep thermal frontal zones. Simultaneously, and for two different occasions, an Autonomous Underwater Vehicle (AUV) video-recorded the path of the tracked fish and detected buoyant particles in the water column. Importantly, the densities of these particles were also directly correlated to steep thermal gradients. Thus, both sunfish foraging behaviour (ARS) and possibly prey densities were found to be influenced by analogous environmental conditions. In addition, the dynamic structure of the water transited by the tracked individuals was described by a Lagrangian modelling approach. The model informed the distribution of zooplankton in the region, both horizontally and in the water column, and the resultant simulated densities positively correlated with the sunfish ARS behaviour estimator (rs = 0.184, p < 0.001). The model also revealed that tracked fish opportunistically displace with respect to subsurface current flow. Thus, we show how physical forcing and current structure provide a rationale for a predator's fine-scale behaviour observed over two weeks in May 2014.
DEFF Research Database (Denmark)
Nielsen, Lars; Boldreel, Lars Ole; Hansen, Thomas Mejer
2011-01-01
The origin of the topography of southwest Scandinavia is subject to discussion. Analysis of borehole seismic velocity has formed the basis for interpretation of several hundred metres of Neogene uplift in parts of Denmark. Here, refraction seismic data constrain a 7.5 km long P-wave velocity model of the Chalk Group below the Stevns peninsula, eastern part of the Danish Basin. The model contains four layers in the ~860 m thick Chalk Group with mean velocities of 2.2 km/s, 2.4 km/s, 3.1 km/s, and 3.9-4.3 km/s. Sonic and gamma wireline log data from two cored boreholes represent the upper ~450 m of the Chalk Group. The sonic velocities are consistent with the overall seismic layering, although they show additional fine-scale layering. Integration of gamma and sonic log with porosity data shows that seismic velocity is sensitive to clay content. In intervals near boundaries of the refraction model, moderate…
Velocity time integral for right upper pulmonary vein in VLBW infants with patent ductus arteriosus
Directory of Open Access Journals (Sweden)
Gianluca Lista
Full Text Available OBJECTIVE: Early diagnosis of significant patent ductus arteriosus reduces the risk of clinical worsening in very low birth weight infants. Echocardiographic patent ductus arteriosus shunt flow pattern can be used to predict significant patent ductus arteriosus. Pulmonary venous flow, expressed as vein velocity time integral, is correlated to ductus arteriosus closure. The aim of this study is to investigate the relationship between significant reductions in vein velocity time integral and non-significant patent ductus arteriosus in the first week of life. METHODS: A multicenter, prospective, observational study was conducted to evaluate very low birth weight infants (<1500 g on respiratory support. Echocardiography was used to evaluate vein velocity time integral on days 1 and 4 of life. The relationship between vein velocity time integral and other parameters was studied. RESULTS: In total, 98 very low birth weight infants on respiratory support were studied. On day 1 of life, vein velocity time integral was similar in patients with open or closed ductus. The mean vein velocity time integral significantly reduced in the first four days of life. On the fourth day of life, there was less of a reduction in patients with patent ductus compared to those with closed patent ductus arteriosus and the difference was significant. CONCLUSIONS: A significant reduction in vein velocity time integral in the first days of life is associated with ductus closure. This parameter correlates well with other echocardiographic parameters and may aid in the diagnosis and management of patent ductus arteriosus.
Directory of Open Access Journals (Sweden)
Prashant Jindal
2016-01-01
Full Text Available In the current critical global economic scenario, inflation plays a vital role in deciding the optimal pricing of goods in any business entity. This article presents two single-vendor single-buyer integrated supply chain inventory models with inflation and the time value of money. Shortage is allowed during the lead time and is partially backlogged. Lead time is controllable and can be reduced at a crashing cost. In the first model, we consider that the demand during the lead time follows a normal distribution, and in the second model, it is considered distribution-free. For both cases, our objective is to minimize the integrated system cost by simultaneously optimizing the order quantity, safety factor, lead time and number of lots. The discounted cash flow and classical optimization techniques are used to derive the optimal solution for both cases. Numerical examples, including a sensitivity analysis of system parameters, are provided to validate the results of the supply chain models.
Reparametrization in the path integral over finite dimensional manifold with a time-dependent metric
International Nuclear Information System (INIS)
Storchak, S.N.
1988-01-01
The path reparametrization procedure in the path integral is considered using methods of stochastic processes for diffusion on a finite-dimensional manifold with a time-dependent metric. The reparametrization Jacobian has been obtained. The formulas of reparametrization for a symbolic presentation of the path integral have been derived.
Non-integrability of time-dependent spherically symmetric Yang-Mills equations
International Nuclear Information System (INIS)
Matinyan, S.G.; Prokhorenko, E.V.; Savvidy, G.K.
1986-01-01
The integrability of time-dependent spherically symmetric Yang-Mills equations is studied using the Fermi-Pasta-Ulam method. The phase space of this system is shown to have no quasi-periodic motion specific to integrable systems. In particular, the well-known Wu-Yang static solution is unstable, so its vicinity in phase space is a region of stochasticity.
Non-integrability of time-dependent spherically symmetric Yang-Mills equations
Energy Technology Data Exchange (ETDEWEB)
Matinyan, S G; Prokhorenko, E B; Savvidy, G K
1988-03-07
The integrability of time-dependent spherically symmetric Yang-Mills equations is studied using the Fermi-Pasta-Ulam method. It is shown that the motion of this system is ergodic, while the system itself is non-integrable, i.e. manifests dynamical chaos.
DEFF Research Database (Denmark)
Steenstrup, Stig; Hove, Jens D; Kofoed, Klaus
2002-01-01
The distribution function of pulmonary transit times (fPTT) contains information on the transit time of blood through the lungs and the dispersion in transit times. Most of the previous studies have used specific functional forms with adjustable parameters to characterize the fPTT. It is the pur… …we were able to accurately identify a two-peaked transfer function, which may theoretically be seen in patients with pulmonary disease confined to one lung. Transit time values for [13N]-ammonia were produced by applying the algorithm to PET studies from normal volunteers.
Li, Xiaobo; Hu, Haofeng; Liu, Tiegen; Huang, Bingjing; Song, Zhanjie
2016-04-04
We consider the degree of linear polarization (DOLP) polarimetry system, which performs two intensity measurements at orthogonal polarization states to estimate DOLP. We show that if the total integration time of the intensity measurements is fixed, the variance of the DOLP estimator depends on the distribution of integration time between the two intensity measurements. Therefore, by optimizing the distribution of integration time, the variance of the DOLP estimator can be decreased. In this paper, we obtain a closed-form solution for the optimal distribution of integration time in an approximate way by employing the Delta method and the Lagrange multiplier method. According to the theoretical analyses and real-world experiments, it is shown that the variance of the DOLP estimator can be decreased for any value of DOLP. The method proposed in this paper can effectively decrease the measurement variance and thus statistically improve the measurement accuracy of the polarimetry system.
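The flavour of such a closed-form allocation can be illustrated with a toy shot-noise model in which each intensity estimate has variance I_i/t_i (an assumption for illustration; the paper's detector and noise model may differ). Propagating this through D = (I0 - I90)/(I0 + I90) and minimizing under t0 + t90 = T with a Lagrange multiplier yields a square-root allocation rule:

```python
import math

def dolp_variance(t0, t90, I0, I90):
    """Propagated variance of D = (I0-I90)/(I0+I90) under the assumed
    shot-noise model Var(Ihat_i) = I_i / t_i."""
    S = I0 + I90
    return (2 * I90 / S ** 2) ** 2 * (I0 / t0) + (2 * I0 / S ** 2) ** 2 * (I90 / t90)

def optimal_split(T, I0, I90):
    """Minimize a/t0 + b/t90 subject to t0 + t90 = T; the Lagrange
    condition a/t0^2 = b/t90^2 gives t0* = T*sqrt(a)/(sqrt(a)+sqrt(b))."""
    S = I0 + I90
    a = (2 * I90 / S ** 2) ** 2 * I0
    b = (2 * I0 / S ** 2) ** 2 * I90
    t0 = T * math.sqrt(a) / (math.sqrt(a) + math.sqrt(b))
    return t0, T - t0
```

The optimum splits time in proportion to the square roots of the per-channel variance coefficients, so in general the two measurements should not receive equal time.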
Optimal distribution of integration time for intensity measurements in Stokes polarimetry.
Li, Xiaobo; Liu, Tiegen; Huang, Bingjing; Song, Zhanjie; Hu, Haofeng
2015-10-19
We consider the typical Stokes polarimetry system, which performs four intensity measurements to estimate a Stokes vector. We show that if the total integration time of the intensity measurements is fixed, the variance of the Stokes vector estimator depends on the distribution of the integration time among the four intensity measurements. Therefore, by optimizing the distribution of integration time, the variance of the Stokes vector estimator can be decreased. In this paper, we obtain the closed-form solution of the optimal distribution of integration time by employing the Lagrange multiplier method. According to the theoretical analysis and a real-world experiment, it is shown that the total variance of the Stokes vector estimator can be significantly decreased, by about 40% in the case discussed in this paper. The method proposed in this paper can effectively decrease the measurement variance and thus statistically improve the measurement accuracy of the polarimetric system.
On the mixed discretization of the time domain magnetic field integral equation
Ulku, Huseyin Arda; Bogaert, Ignace; Cools, Kristof; Andriulli, Francesco P.; Bagci, Hakan
2012-01-01
Time domain magnetic field integral equation (MFIE) is discretized using divergence-conforming Rao-Wilton-Glisson (RWG) and curl-conforming Buffa-Christiansen (BC) functions as spatial basis and testing functions, respectively. The resulting mixed
Valdés, Felipe; Andriulli, Francesco P.; Bagci, Hakan; Michielssen, Eric
2013-01-01
Single-source time-domain electric-and magnetic-field integral equations for analyzing scattering from homogeneous penetrable objects are presented. Their temporal discretization is effected by using shifted piecewise polynomial temporal basis
Time-integrated CP violation measurements in the B mesons system at the LHCb experiment
Cardinale, R
2016-01-01
Time-integrated CP violation measurements in the B meson system provide information for testing the CKM picture of CP violation in the Standard Model. A review of recent results from the LHCb experiment is presented.
The effect of decaying atomic states on integral and time differential Moessbauer spectra
International Nuclear Information System (INIS)
Kankeleit, E.
1975-01-01
Moessbauer spectra for time-dependent monopole interaction have been calculated for the case in which the nuclear transition feeding the Moessbauer state excites an electronic state of the atom. This state is assumed to decay in a time comparable with the lifetime of the Moessbauer state. Spectra have been calculated for both time differential and integral experiments. (orig.) [de]
Terminal current interpolation for multirate time integration of hierarchical IC models
Verhoeven, A.; Maten, ter E.J.W.; Dohmen, J.J.; Tasic, B.; Mattheij, R.M.M.; Fitt, A.D.; Norbury, J.; Ockendon, H.; Wilson, E.
2010-01-01
Multirate time-integration methods [3–5] appear to be attractive for initial value problems for DAEs with latency or multirate behaviour. Latency means that parts of the circuit are constant or slowly time-varying during a certain time interval, while multirate behaviour means that some variables
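The latency idea can be sketched with a hypothetical two-rate forward-Euler scheme (a simplification, not the authors' method): the slow subsystem takes one macro step H while the fast subsystem takes m micro steps of size H/m, with the slow state linearly interpolated in between, which is the role that interpolation of terminal currents plays for hierarchical circuit models:

```python
def multirate_euler(f_slow, f_fast, y_slow, y_fast, H, m, n_macro):
    """Two-rate forward Euler: slow step H, fast step H/m, with the slow
    state linearly interpolated at the fast micro time points."""
    h = H / m
    for _ in range(n_macro):
        ys_new = y_slow + H * f_slow(y_slow, y_fast)   # one macro step
        for k in range(m):
            # interpolated slow value at the k-th micro time point
            ys_k = y_slow + (k / m) * (ys_new - y_slow)
            y_fast = y_fast + h * f_fast(ys_k, y_fast)  # micro step
        y_slow = ys_new
    return y_slow, y_fast
```

On the test system slow' = -0.1*slow, fast' = -10*fast + slow, the scheme tracks the slowly decaying variable with macro steps while keeping the stiff fast variable stable with micro steps.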
Ulku, Huseyin Arda; Bagci, Hakan; Michielssen, Eric
2013-01-01
An explicit marching on-in-time (MOT) scheme for solving the time-domain magnetic field integral equation (TD-MFIE) is presented. The proposed MOT-TD-MFIE solver uses Rao-Wilton-Glisson basis functions for spatial discretization and a PE(CE)m-type linear multistep method for time marching. Unlike previous explicit MOT-TD-MFIE solvers, the time step size can be chosen as large as that of the implicit MOT-TD-MFIE solvers without adversely affecting accuracy or stability. An algebraic stability analysis demonstrates the stability of the proposed explicit solver; its accuracy and efficiency are established via numerical examples. © 1963-2012 IEEE.
Ulku, Huseyin Arda
2013-08-01
An explicit marching on-in-time (MOT) scheme for solving the time-domain magnetic field integral equation (TD-MFIE) is presented. The proposed MOT-TD-MFIE solver uses Rao-Wilton-Glisson basis functions for spatial discretization and a PE(CE)m-type linear multistep method for time marching. Unlike previous explicit MOT-TD-MFIE solvers, the time step size can be chosen as large as that of the implicit MOT-TD-MFIE solvers without adversely affecting accuracy or stability. An algebraic stability analysis demonstrates the stability of the proposed explicit solver; its accuracy and efficiency are established via numerical examples. © 1963-2012 IEEE.
Design of a semi-custom integrated circuit for the SLAC SLC timing control system
International Nuclear Information System (INIS)
Linstadt, E.
1984-10-01
A semi-custom (gate array) integrated circuit has been designed for use in the SLAC Linear Collider timing and control system. The design process and SLAC's experiences during the phases of the design cycle are described. Issues concerning the partitioning of the design into semi-custom and standard components are discussed. Functional descriptions of the semi-custom integrated circuit and the timing module in which it is used are given
SEMIGROUPS N TIMES INTEGRATED AND AN APPLICATION TO A PROBLEM OF CAUCHY TYPE
Directory of Open Access Journals (Sweden)
Danessa Chirinos Fernández
2016-06-01
Full Text Available The theory of n-times integrated semigroups, developed from 1984 onwards, is a generalization of strongly continuous semigroups and is widely used to study existence and uniqueness for Cauchy-type problems in which the domain of the operator is not necessarily dense. This paper presents an application of n-times integrated semigroups to a problem in viscoelasticity, formulated as a Cauchy problem on a Banach space.
van der Hout, C. M.; Witbaard, R.; Bergman, M. J. N.; Duineveld, G. C. A.; Rozemeijer, M. J. C.; Gerkema, T.
2017-09-01
The analysis of 1.8 years of data gives an understanding of the response to varying forcing of suspended particulate matter (SPM) and chlorophyll-a (CHL-a) in a coastal turbidity maximum zone (TMZ). Both temporal and vertical concentration variations in the near-bed layer (0-2 m) in the shallow (11 m deep) coastal zone 1 km off the Dutch coast are shown. Temporal variations in the concentration of both parameters are found on tidal and seasonal scales, along with a marked response to episodic events (e.g. storms). The seasonal cycle in the near-bed CHL-a concentration is determined by the spring bloom. The role of the wave climate as the primary forcing of the SPM seasonal cycle is discussed. The tidal current provides a background signal, generated predominantly by local resuspension and settling; advection plays a minor role in the cross-shore and alongshore directions. We fitted the logarithmic Rouse profile to the vertical profiles of both the SPM and the CHL-a data, with 84% and only 2% success, respectively. The resulting large percentage of low Rouse numbers for the SPM profiles suggests that a mixed suspension is dominant in the TMZ, i.e. surface SPM concentrations are of the same order of magnitude as near-bed concentrations.
Explicit Time Integrators for Nonlinear Dynamics Derived from the Midpoint Rule
Directory of Open Access Journals (Sweden)
P. Krysl
2004-01-01
Full Text Available We address the design of time integrators for mechanical systems that are explicit in the forcing evaluations. Our starting point is the midpoint rule, either in the classical form for the vector space setting, or in the Lie form for the rotation group. By introducing discrete, concentrated impulses we can approximate the forcing impressed upon the system over the time step, and thus arrive at first-order integrators. These can then be composed to yield a second order integrator with very desirable properties: symplecticity and momentum conservation.
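For a separable mechanical system in the vector-space setting, composing a first-order impulse-then-drift map with its adjoint reproduces the familiar kick-drift-kick leapfrog, a second-order symplectic, momentum-conserving scheme; a minimal sketch of that special case (not the rotation-group version the abstract also treats):

```python
def euler_a(q, p, force, dt, m=1.0):
    # first-order map: concentrated impulse first, then drift
    p = p + dt * force(q)
    q = q + dt * p / m
    return q, p

def euler_b(q, p, force, dt, m=1.0):
    # adjoint first-order map: drift first, then concentrated impulse
    q = q + dt * p / m
    p = p + dt * force(q)
    return q, p

def second_order_step(q, p, force, dt, m=1.0):
    # composing the two adjoint half-step maps yields a second-order,
    # symplectic integrator (kick-drift-kick leapfrog)
    q, p = euler_a(q, p, force, dt / 2, m)
    q, p = euler_b(q, p, force, dt / 2, m)
    return q, p
```

On a harmonic oscillator the composed map keeps the energy bounded near its initial value over long runs, the hallmark of the symplectic property the authors cite.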
Maximum Acceleration Recording Circuit
Bozeman, Richard J., Jr.
1995-01-01
Coarsely digitized maximum levels recorded in blown fuses. Circuit feeds power to accelerometer and makes nonvolatile record of maximum level to which output of accelerometer rises during measurement interval. In comparison with inertia-type single-preset-trip-point mechanical maximum-acceleration-recording devices, circuit weighs less, occupies less space, and records accelerations within narrower bands of uncertainty. In comparison with prior electronic data-acquisition systems designed for same purpose, circuit simpler, less bulky, consumes less power, and costs less, without requiring playback and analysis of data recorded in magnetic or electronic memory devices. Circuit used, for example, to record accelerations to which commodities subjected during transportation on trucks.
A Novel Multiple-Time Scale Integrator for the Hybrid Monte Carlo Algorithm
International Nuclear Information System (INIS)
Kamleh, Waseem
2011-01-01
Hybrid Monte Carlo simulations that implement the fermion action using multiple terms are commonly used. By the nature of their formulation they involve multiple integration time scales in the evolution of the system through simulation time. These different scales are usually dealt with by the Sexton-Weingarten nested leapfrog integrator. In this scheme the choice of time scales is somewhat restricted as each time step must be an exact multiple of the next smallest scale in the sequence. A novel generalisation of the nested leapfrog integrator is introduced which allows for far greater flexibility in the choice of time scales, as each scale now must only be an exact multiple of the smallest step size.
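A standard Sexton-Weingarten nested leapfrog step, the scheme this abstract generalizes, can be sketched for a force split into an outer (expensive, smooth) term and an inner (cheap or stiff) term; the generalized integrator with more flexible scale ratios is not reproduced here:

```python
def nested_leapfrog(q, p, f_outer, f_inner, dt, n_inner, m=1.0):
    """One nested leapfrog step: the inner force is integrated with
    n_inner leapfrog substeps inside a single leapfrog step of the
    outer force, so the inner scale is an exact multiple of the outer."""
    p = p + 0.5 * dt * f_outer(q)          # outer half kick
    h = dt / n_inner
    for _ in range(n_inner):               # inner leapfrog substeps
        p = p + 0.5 * h * f_inner(q)
        q = q + h * p / m
        p = p + 0.5 * h * f_inner(q)
    p = p + 0.5 * dt * f_outer(q)          # outer half kick
    return q, p
```

The restriction the abstract relaxes is visible in the code: `h = dt / n_inner` forces each scale to be an exact integer subdivision of the next larger one.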
Time series analysis of the developed financial markets' integration using visibility graphs
Zhuang, Enyu; Small, Michael; Feng, Gang
2014-09-01
A time series representing the developed financial markets' segmentation from 1973 to 2012 is studied. The time series reveals an obvious market integration trend. To further uncover the features of this time series, we divide it into seven windows and generate seven visibility graphs. The measuring capabilities of the visibility graphs provide means to quantitatively analyze the original time series. It is found that the important historical incidents that influenced market integration coincide with variations in the measured graphical node degree. Through the measure of neighborhood span, the frequencies of the historical incidents are disclosed. Moreover, it is also found that large "cycles" and significant noise in the time series are linked to large and small communities in the generated visibility graphs. For large cycles, how historical incidents significantly affected market integration is distinguished by density and compactness of the corresponding communities.
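The natural visibility graph used in such studies links two samples whenever the straight line between them passes above every intermediate sample; a direct O(n^2) sketch of the construction (the function name is illustrative):

```python
def visibility_edges(y):
    """Natural visibility graph of a time series y (indices are nodes).

    Nodes i < j are connected if every intermediate sample k satisfies
    y[k] < y[j] + (y[i] - y[j]) * (j - k) / (j - i),
    i.e. the chord from (i, y[i]) to (j, y[j]) clears all samples between.
    """
    n = len(y)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if all(y[k] < y[j] + (y[i] - y[j]) * (j - k) / (j - i)
                   for k in range(i + 1, j)):
                edges.add((i, j))
    return edges
```

Node degrees and community structure of the resulting graph then serve as quantitative proxies for features of the original series, as in the windowed analysis described above.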
Directory of Open Access Journals (Sweden)
Fei Lin
2016-03-01
Full Text Available Because of its large capacity, total urban rail transit energy consumption is very high; thus, energy-saving operation is quite meaningful. The effective use of regenerative braking energy is the mainstream method for improving energy-saving efficiency. This paper examines the optimization of train dwell time and builds a multiple-train operation model for energy conservation of the power supply system. By changing the dwell time, the braking energy can be absorbed and utilized by other traction trains as efficiently as possible. The application of genetic algorithms is proposed for the optimization, based on the current schedule. Next, to validate the correctness and effectiveness of the optimization, a real case is studied. Actual data from the Beijing subway Yizhuang Line are employed to perform the simulation, and the results indicate that the optimization method for the dwell time is effective.
Maximum Quantum Entropy Method
Sim, Jae-Hoon; Han, Myung Joon
2018-01-01
The maximum entropy method for analytic continuation is extended by introducing quantum relative entropy. This new method is formulated in terms of matrix-valued functions and is therefore invariant under arbitrary unitary transformations of the input matrix. As a result, the continuation of off-diagonal elements becomes straightforward. Without introducing any further ambiguity, the Bayesian probabilistic interpretation is maintained just as in the conventional maximum entropy method. The applications o...
International Nuclear Information System (INIS)
Biondi, L.
1998-01-01
The charging for a service is a supplier's remuneration for the expenses incurred in providing it. There are currently two charges for electricity: consumption and maximum demand. While no problem arises about the former, the issue is more complicated for the latter, and the analysis in this article tends to show that the annual charge for maximum demand arbitrarily discriminates among consumer groups, to the disadvantage of some. [it]
Rhee, P L; Choi, M S; Kim, Y H; Son, H J; Kim, J J; Koh, K C; Paik, S W; Rhee, J C; Choi, K W
2000-10-01
Biofeedback is an effective therapy for a majority of patients with anismus. However, a significant proportion of patients still fail to respond to biofeedback, and little has been known about the factors that predict response to biofeedback. We evaluated the factors associated with poor response to biofeedback. Biofeedback therapy was offered to 45 patients with anismus with decreased bowel frequency (less than three times per week) and normal colonic transit time. Any differences in demographics, symptoms, and parameters of anorectal physiologic tests were sought between responders (in whom bowel frequency increased to three times or more per week after biofeedback) and nonresponders (in whom bowel frequency remained less than three times per week). Thirty-one patients (68.9 percent) responded to biofeedback and 14 patients (31.1 percent) did not. Anal canal length was longer in nonresponders than in responders (4.53 +/- 0.5 vs. 4.08 +/- 0.56 cm; P = 0.02), and rectal maximum tolerable volume was larger in nonresponders than in responders (361 +/- 87 vs. 302 +/- 69 ml; P = 0.02). Anal canal length and rectal maximum tolerable volume showed significant differences between responders and nonresponders on multivariate analysis (P = 0.027 and P = 0.034, respectively). This study showed that a long anal canal and increased rectal maximum tolerable volume are associated with poor short-term response to biofeedback in patients with anismus with decreased bowel frequency and normal colonic transit time.
Characterization of HBV integration patterns and timing in liver cancer and HBV-infected livers.
Furuta, Mayuko; Tanaka, Hiroko; Shiraishi, Yuichi; Unida, Takuro; Imamura, Michio; Fujimoto, Akihiro; Fujita, Masahi; Sasaki-Oku, Aya; Maejima, Kazuhiro; Nakano, Kaoru; Kawakami, Yoshiiku; Arihiro, Koji; Aikata, Hiroshi; Ueno, Masaki; Hayami, Shinya; Ariizumi, Shun-Ichi; Yamamoto, Masakazu; Gotoh, Kunihito; Ohdan, Hideki; Yamaue, Hiroki; Miyano, Satoru; Chayama, Kazuaki; Nakagawa, Hidewaki
2018-05-18
Integration of Hepatitis B virus (HBV) into the human genome can cause genetic instability, leading to selective advantages for HBV-induced liver cancer. Despite the large number of studies of HBV integration in liver cancer, little is known about the mechanism of initial HBV integration events, owing to the limitations of materials and detection methods. We conducted HBV sequence capture, followed by ultra-deep sequencing, to screen for HBV integrations in 111 liver samples from human-hepatocyte chimeric mice with HBV infection and from human clinical samples, including 42 paired samples from non-tumorous and tumorous liver tissues. The HBV infection model using chimeric mice verified the efficiency of our HBV-capture analysis and demonstrated that HBV integration could occur 23 to 49 days after HBV infection, via microhomology-mediated end joining and predominantly in mitochondrial DNA. Overall, HBV integration sites in clinical samples were significantly enriched in regions annotated as exhibiting open chromatin, a high level of gene expression, and early replication timing in liver cells. These data indicate that HBV integration in liver tissue was biased according to chromatin accessibility, with additional selection pressures in the gene promoters of tumor samples. Moreover, an integrative analysis using paired non-tumorous and tumorous samples and HBV-related transcriptional changes revealed the involvement of TERT and MLL4 in clonal selection. We also found frequent, non-tumorous-liver-specific HBV integrations in FN1 and an HBV-FN1 fusion transcript. This extensive survey of HBV integrations improves our understanding of the timing and biology of HBV integration during infection and in HBV-related hepatocarcinogenesis.
A portable storage maximum thermometer
International Nuclear Information System (INIS)
Fayart, Gerard.
1976-01-01
A clinical thermometer storing, in an analog memory, the voltage corresponding to the maximum temperature is described. The end of the measurement is indicated by a lamp switching off. The measurement time is shortened by means of a low-thermal-inertia platinum probe. This portable thermometer is fitted with a cell-test and calibration system. [fr]
International Nuclear Information System (INIS)
Pujols, Agnes
1991-01-01
We prove that the scattering operator for the wave equation in the exterior of a non-homogeneous obstacle exists. Its distribution kernel is represented by a time-dependent boundary integral equation. A space-time integral variational formulation is developed for determining the current induced by the scattering of an electromagnetic wave by a homogeneous object. The discrete approximation of the variational problem using a finite element method in both space and time leads to stable convergent schemes, yielding a numerical code for perfectly conducting cylinders. (author) [fr]
On the minima of the time integrated perturbation factor in the Scherer-Blume theory
International Nuclear Information System (INIS)
Silveira, E.F. da; Freire Junior, F.L.; Massolo, C.P.; Schaposnik, F.A.
1981-09-01
The minima in the correlation-time dependence of the Scherer-Blume time-integrated attenuation coefficients for the hyperfine perturbation of ions recoiling in gas are studied. Their positions and depths are determined for different physical situations, and a comparison with experimental data is shown. (Author) [pt]
Creating a Campus Culture of Integrity: Comparing the Perspectives of Full- and Part-Time Faculty
Hudd, Suzanne S.; Apgar, Caroline; Bronson, Eric Franklyn; Lee, Renee Gravois
2009-01-01
Part-time faculty play an important role in creating a culture of integrity on campus, yet they face a number of structural constraints. This paper seeks to improve our understanding of the potentially unique experiences of part-time faculty with academic misconduct and suggests ways to more effectively involve them in campus-wide academic…
Taatgen, Niels A.; van Rijn, Hedderik; Anderson, John
2007-01-01
A theory of prospective time perception is introduced and incorporated as a module in an integrated theory of cognition, thereby extending existing theories and allowing predictions about attention and learning. First, a time perception module is established by fitting existing datasets (interval estimation and bisection and impact of secondary…
Development of a precise long-time digital integrator for magnetic measurements in a tokamak
Energy Technology Data Exchange (ETDEWEB)
Kurihara, Kenichi; Kawamata, Youichi [Japan Atomic Energy Research Inst., Naka, Ibaraki (Japan). Naka Fusion Research Establishment
1997-10-01
Long-time D-T burning operation in a tokamak requires that a magnetic sensor work in an intense 14-MeV neutron field and that the measurement system output precise magnetic field values. Time-integration of the voltage produced in a simple pick-up coil has the preferable features of good time response, easy maintenance, and resistance to neutron irradiation. However, an inevitable signal drift makes it difficult to apply the method to long-time integral operation. To solve this problem, we have developed a new digital integrator (a voltage-to-frequency converter and an up-down counter) and tested trial boards in the JT-60 magnetic measurements. This report details the problems encountered and their solutions through the development steps, and shows how to apply this method to ITER operation. (author)
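The drift-free integration principle described above — a voltage-to-frequency converter feeding an up-down counter — can be sketched in a few lines. The following is a hypothetical simulation of the idea, not the JT-60 implementation; the gain `k` and the pulse-emission logic are illustrative assumptions:

```python
import math

def vfc_updown_integrate(voltage, t_end, dt, k=1e5):
    """Integrate voltage(t) the way a V/F converter plus up-down counter does:
    the VFC phase accumulates k*v(t)*dt; each completed cycle increments
    (v > 0) or decrements (v < 0) the counter.  The integral estimate is
    count/k, so no analogue drift accumulates over long pulses."""
    phase, count, t = 0.0, 0, 0.0
    while t < t_end:
        phase += k * voltage(t) * dt
        while phase >= 1.0:   # emit an "up" pulse
            phase -= 1.0
            count += 1
        while phase <= -1.0:  # emit a "down" pulse
            phase += 1.0
            count -= 1
        t += dt
    return count / k  # volt-seconds

# Half sine: the integral of sin(t) over [0, pi] is exactly 2 volt-seconds.
est = vfc_updown_integrate(math.sin, math.pi, 1e-5)
```

Because the integral is carried as an integer count rather than an analogue voltage, the only accumulating error is the ±1-count quantization, which is what makes the scheme attractive for long-pulse operation.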
Connection between Feynman integrals having different values of the space-time dimension
International Nuclear Information System (INIS)
Tarasov, O.V.
1996-05-01
A systematic algorithm for obtaining recurrence relations for dimensionally regularized Feynman integrals w.r.t. the space-time dimension d is proposed. The relation between d and d-2 dimensional integrals is given in terms of a differential operator for which an explicit formula can be obtained for each Feynman diagram. We show how the method works for one-, two- and three-loop integrals. The new recurrence relations w.r.t. d are complementary to the recurrence relations which derive from the method of integration by parts. We find that the problem of the irreducible numerators in Feynman integrals can be naturally solved in the framework of the proposed generalized recurrence relations. (orig.)
Li, Zhenjie; Li, Qiuju; Chang, Jinfan; Ma, Yichao; Liu, Peng; Wang, Zheng; Hu, Michael Y.; Zhao, Jiyong; Alp, E. E.; Xu, Wei; Tao, Ye; Wu, Chaoqun; Zhou, Yangfan
2017-10-01
A four-channel nanosecond time-resolved avalanche-photodiode (APD) detector system is developed at the Beijing Synchrotron Radiation Facility. It uses a single module for signal processing and readout. This integrated system provides better reliability and flexibility for custom improvement. The detector system consists of three parts: (i) four APD sensors, (ii) four fast preamplifiers and (iii) time-digital-converter (TDC) readout electronics. The C30703FH silicon APD chips fabricated by Excelitas are used as the sensors of the detectors. Each has an effective light-sensitive area of 10 × 10 mm2 and an absorption-layer thickness of 110 μm. A fast preamplifier with a gain of 59 dB and a bandwidth of 2 GHz is designed to read out the weak signal from the C30703FH APD. The TDC is realized by a Spartan-6 field-programmable gate array (FPGA) using a multiphase method with a resolution of 1 ns. The arrival time of all scattering events between two start triggers can be recorded by the TDC. The detector has been used for nuclear resonant scattering studies at both the Advanced Photon Source and the Beijing Synchrotron Radiation Facility. For an X-ray energy of 14.4 keV, the time resolution (full width at half maximum, FWHM) of the detector (APD sensor + fast amplifier) is 0.86 ns, and the whole detector system (APD sensors + fast amplifiers + TDC readout electronics) achieves a time resolution of 1.4 ns.
Shi, Yifei
2013-08-01
Internal resonant modes are always observed in the marching-on-in-time (MOT) solution of the time domain electric field integral equation (EFIE), although 'relaxed initial conditions,' which are enforced at the beginning of time marching, should in theory prevent these spurious modes from appearing. It has been conjectured that numerical errors built up during time marching establish the necessary initial conditions and induce the internal resonant modes. However, this conjecture has never been proved by systematic numerical experiments. Our numerical results in this communication demonstrate that the internal resonant modes' amplitudes are indeed dictated by the numerical errors. Additionally, it is shown that in a few cases the internal resonant modes can be made 'invisible' by significantly suppressing the numerical errors. These tests prove the conjecture that the internal resonant modes are induced by numerical errors when the time domain EFIE is solved by the MOT method. © 2013 IEEE.
Shi, Yifei; Bagci, Hakan; Lu, Mingyu
2013-01-01
Internal resonant modes are always observed in the marching-on-in-time (MOT) solution of the time domain electric field integral equation (EFIE), although 'relaxed initial conditions,' which are enforced at the beginning of time marching, should in theory prevent these spurious modes from appearing. It has been conjectured that numerical errors built up during time marching establish the necessary initial conditions and induce the internal resonant modes. However, this conjecture has never been proved by systematic numerical experiments. Our numerical results in this communication demonstrate that the internal resonant modes' amplitudes are indeed dictated by the numerical errors. Additionally, it is shown that in a few cases the internal resonant modes can be made 'invisible' by significantly suppressing the numerical errors. These tests prove the conjecture that the internal resonant modes are induced by numerical errors when the time domain EFIE is solved by the MOT method. © 2013 IEEE.
Energy Technology Data Exchange (ETDEWEB)
Lee, Moonyong [Yeungnam University, Gyeongsan (Korea, Republic of); Vu, Truong Nguyen Luan [University of Technical Education of Ho Chi Minh City, Ho Chi Minh City (Viet Nam)
2013-03-15
A unified approach for the design of proportional-integral-derivative (PID) controllers cascaded with first-order lead-lag filters is proposed for various time-delay processes. The proposed controller’s tuning rules are directly derived using the Padé approximation on the basis of internal model control (IMC) for enhanced stability against disturbances. A two-degrees-of-freedom (2DOF) control scheme is employed to cope with both regulatory and servo problems. Simulation is conducted for a broad range of stable, integrating, and unstable processes with time delays. Each simulated controller is tuned to have the same degree of robustness in terms of maximum sensitivity (Ms). The results demonstrate that the proposed controller provides superior disturbance rejection and set-point tracking when compared with recently published PID-type controllers. Controllers’ robustness is investigated through the simultaneous introduction of perturbation uncertainties to all process parameters to obtain worst-case process-model mismatch. The process-model mismatch simulation results demonstrate that the proposed method consistently affords superior robustness.
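The abstract's IMC-with-Padé derivation is not reproduced here, but the flavor of such tuning rules can be illustrated with the classic IMC-PID rule for a first-order-plus-dead-time (FOPDT) process — a textbook result, not the paper's specific PID-plus-filter rules. The speed/robustness trade-off (the maximum sensitivity Ms mentioned above) is set through the IMC filter time constant λ:

```python
def imc_pid_fopdt(K, tau, theta, lam):
    """Classic IMC-PID tuning for an FOPDT process
    G(s) = K*exp(-theta*s)/(tau*s + 1), using a first-order Pade
    approximation of the delay.  lam is the IMC filter time constant:
    larger lam -> more robust (smaller Ms), slower response."""
    Kc = (tau + theta / 2) / (K * (lam + theta / 2))  # proportional gain
    tauI = tau + theta / 2                            # integral time
    tauD = tau * theta / (2 * tau + theta)            # derivative time
    return Kc, tauI, tauD

# Illustrative process: gain 2, time constant 10, delay 2, filter lam = 3.
Kc, tauI, tauD = imc_pid_fopdt(K=2.0, tau=10.0, theta=2.0, lam=3.0)
```

Detuning is then a one-knob operation: increasing λ lowers Kc without re-deriving the rule, which is how controllers are matched to a common Ms in comparison studies like the one above.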
Optimal order and time-step criterion for Aarseth-type N-body integrators
International Nuclear Information System (INIS)
Makino, Junichiro
1991-01-01
How the selection of the time-step criterion and the order of the integrator change the efficiency of Aarseth-type N-body integrators is discussed. An alternative to Aarseth's scheme based on the direct calculation of the time derivative of the force using the Hermite interpolation is compared to Aarseth's scheme, which uses the Newton interpolation to construct the predictor and corrector. How the number of particles in the system changes the behavior of integrators is examined. The Hermite scheme allows a time step twice as large as that for the standard Aarseth scheme for the same accuracy. The calculation cost of the Hermite scheme per time step is roughly twice as much as that of the standard Aarseth scheme. The optimal order of the integrators depends on both the particle number and the accuracy required. The time-step criterion of the standard Aarseth scheme is found to be inapplicable to higher-order integrators, and a more uniformly reliable criterion is proposed. 18 refs
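The Hermite predictor-corrector scheme discussed above can be sketched for a single particle in a Kepler potential. A full Aarseth-type code also needs an adaptive per-particle time-step criterion built from higher force derivatives; the fixed step below is an illustrative simplification:

```python
import numpy as np

def acc_jerk(x, v, gm=1.0):
    """Acceleration and its time derivative (jerk) in a Kepler potential."""
    r2 = x @ x
    r3 = r2 * np.sqrt(r2)
    a = -gm * x / r3
    j = -gm * (v / r3 - 3.0 * (x @ v) * x / (r3 * r2))
    return a, j

def hermite_step(x, v, dt):
    """One 4th-order Hermite predictor-corrector step: predict with a and
    jerk, re-evaluate the force at the predicted state, then correct."""
    a0, j0 = acc_jerk(x, v)
    xp = x + v * dt + a0 * dt**2 / 2 + j0 * dt**3 / 6   # predictor
    vp = v + a0 * dt + j0 * dt**2 / 2
    a1, j1 = acc_jerk(xp, vp)                           # force at prediction
    vc = v + (a0 + a1) * dt / 2 + (j0 - j1) * dt**2 / 12  # corrector
    xc = x + (v + vc) * dt / 2 + (a0 - a1) * dt**2 / 12
    return xc, vc

# Circular orbit (GM = 1, r = 1): total energy should stay at -0.5.
x, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])
for _ in range(1000):
    x, v = hermite_step(x, v, 0.01)
energy = 0.5 * (v @ v) - 1.0 / np.sqrt(x @ x)
```

The direct jerk evaluation is what distinguishes the Hermite scheme from the Newton-interpolation predictor-corrector of the standard Aarseth code, and it is the ingredient that permits the larger time step reported above.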
Kanchan Mudgil; Deepali Kamthania
2013-01-01
This paper evaluates the energy payback time (EPBT) of a building integrated photovoltaic thermal (BISPVT) system for Srinagar, India. Three different photovoltaic (PV) module types, namely mono-crystalline silicon (m-Si), poly-crystalline silicon (p-Si), and amorphous silicon (a-Si), have been considered for calculation of the EPBT. It is found that the EPBT is lowest for m-Si. Hence, integration of m-Si PV modules on the roof of a room is economical.
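The EPBT itself is a simple ratio: energy embodied in manufacturing and installing the system divided by its annual energy output. The numbers below are illustrative placeholders, not the values computed in the paper:

```python
def energy_payback_time(embodied_energy_kwh, annual_output_kwh):
    """Energy payback time in years: how long the system must operate
    before its output repays the energy embodied in making it."""
    return embodied_energy_kwh / annual_output_kwh

# Illustrative numbers only: a PV array with 3500 kWh embodied energy
# producing 1400 kWh/year pays back in 2.5 years.
epbt = energy_payback_time(3500.0, 1400.0)
```

Comparing module types then reduces to evaluating this ratio per technology: m-Si has the highest embodied energy per m2 but also the highest output, and the paper finds its ratio comes out lowest.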
Directory of Open Access Journals (Sweden)
Quanwu Li
2016-01-01
High reliability is required for the permanent magnet brushless DC motor (PM-BLDCM) in an electrical pump of a hypersonic vehicle. The PM-BLDCM is a short-time-duty motor with high power density. Since thermal equilibrium is not reached by the PM-BLDCM, the temperature distribution is not uniform and there is a risk of local overheating. The winding is the main heat source and its insulation is thermally sensitive, so reducing the winding temperature rise is the key to improving reliability. In order to reduce the winding temperature rise, an electromagnetic-thermal integrated design optimization method is proposed. The method is based on electromagnetic analysis and thermal transient analysis. The requirements and constraints of electromagnetic and thermal design are considered in this method. The split ratio and the maximum flux density in the stator lamination, which are highly relevant to the winding temperature rise, are optimized analytically. The analytical results are verified by finite element analysis (FEA) and experiments. The maximum error between the analytical and FEA results is 4%. The errors between the analytical and measured winding temperature rises are less than 8%. It can be concluded that the method obtains the optimal design accurately and reduces the winding temperature rise.
Ulku, Huseyin Arda; Bagci, Hakan; Michielssen, Eric
2012-01-01
An explicit yet stable marching-on-in-time (MOT) scheme for solving the time domain magnetic field integral equation (TD-MFIE) is presented. The stability of the explicit scheme is achieved via (i) accurate evaluation of the MOT matrix elements using closed-form expressions and (ii) a PE(CE)^m-type linear multistep method for time marching. Numerical results demonstrate the accuracy and stability of the proposed explicit MOT-TD-MFIE solver. © 2012 IEEE.
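The PE(CE)^m idea — Predict with an explicit multistep formula, then apply m Evaluate-Correct passes of an implicit formula — can be sketched on a scalar ODE. This is a generic illustration of the time-marching ingredient, not the MOT-TD-MFIE solver itself:

```python
import math

def pece_m_step(y, f_prev, t, dt, f, m=2):
    """One PE(CE)^m step for y' = f(t, y): Predict with Adams-Bashforth-2,
    then m passes of (Evaluate, Correct) with the trapezoidal rule.  The
    repeated CE passes push the explicit update toward the implicit
    trapezoid solution, improving stability without a matrix solve."""
    f_n = f(t, y)
    y_new = y + dt * (1.5 * f_n - 0.5 * f_prev)          # Predict (AB2)
    for _ in range(m):                                   # (Evaluate, Correct)^m
        y_new = y + 0.5 * dt * (f_n + f(t + dt, y_new))  # trapezoidal correct
    return y_new, f_n

# Decay problem y' = -y, y(0) = 1; the exact solution at t = 1 is exp(-1).
f = lambda t, y: -y
y, f_prev, t, dt = 1.0, -1.0, 0.0, 0.01
for _ in range(100):
    y, f_prev = pece_m_step(y, f_prev, t, dt, f)
    t += dt
```

Each extra CE pass costs one more right-hand-side evaluation, which in the MOT setting corresponds to one more sparse matrix-vector product instead of the full-matrix inversion an implicit scheme would need.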
Integrated Oil spill detection and forecasting using MOON real time data
De Dominicis, M.; Pinardi, N.; Coppini, G.; Tonani, M.; Guarnieri, A.; Zodiatis, G.; Lardner, R.; Santoleri, R.
2009-01-01
MOON (Mediterranean Operational Oceanography Network) is an operational distributed system ready to provide quality controlled and timely marine observations (in situ and satellite) and environmental analyses and predictions for management of oil spill accidents. MOON operational systems are based upon the real time functioning of an integrated system composed of the Real Time Observing system, the regional, sub-regional and coastal forecasting systems and a products dissemination system. All...
Ulku, Huseyin Arda
2012-09-01
An explicit yet stable marching-on-in-time (MOT) scheme for solving the time domain magnetic field integral equation (TD-MFIE) is presented. The stability of the explicit scheme is achieved via (i) accurate evaluation of the MOT matrix elements using closed-form expressions and (ii) a PE(CE)^m-type linear multistep method for time marching. Numerical results demonstrate the accuracy and stability of the proposed explicit MOT-TD-MFIE solver. © 2012 IEEE.
International Nuclear Information System (INIS)
Mor, I; Vartsky, D; Bar, D; Feldman, G; Goldberg, M B; Brandis, M; Dangendorf, V; Tittelmeier, K; Bromberger, B; Weierganz, M
2013-01-01
The Time-Resolved Integrative Optical Neutron (TRION) detector was developed for Fast Neutron Resonance Radiography (FNRR), a fast-neutron transmission imaging method that exploits characteristic energy variations of the total scattering cross-section in the E_n = 1–10 MeV range to detect specific elements within a radiographed object. As opposed to classical event-counting time-of-flight (ECTOF), it integrates the detector signal during a well-defined neutron time-of-flight window corresponding to a pre-selected energy bin, e.g., the energy interval spanning a cross-section resonance of an element such as C, O or N. The integrative characteristic of the detector permits loss-free operation at very intense, pulsed neutron fluxes, at the cost, however, of degraded recorded temporal resolution. This work presents a theoretical and experimental evaluation of detector-related parameters which affect the temporal resolution of the TRION system.
Maximum likely scale estimation
DEFF Research Database (Denmark)
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...
Robust Maximum Association Estimators
A. Alfons (Andreas); C. Croux (Christophe); P. Filzmoser (Peter)
2017-01-01
textabstractThe maximum association between two multivariate variables X and Y is defined as the maximal value that a bivariate association measure between one-dimensional projections αX and αY can attain. Taking the Pearson correlation as projection index results in the first canonical correlation
A point implicit time integration technique for slow transient flow problems
Energy Technology Data Exchange (ETDEWEB)
Kadioglu, Samet Y., E-mail: kadioglu@yildiz.edu.tr [Department of Mathematical Engineering, Yildiz Technical University, 34210 Davutpasa-Esenler, Istanbul (Turkey); Berry, Ray A., E-mail: ray.berry@inl.gov [Idaho National Laboratory, P.O. Box 1625, MS 3840, Idaho Falls, ID 83415 (United States); Martineau, Richard C. [Idaho National Laboratory, P.O. Box 1625, MS 3840, Idaho Falls, ID 83415 (United States)
2015-05-15
Highlights: • This new method does not require implicit iteration; instead it time advances the solutions in a similar spirit to explicit methods. • It is unconditionally stable, as a fully implicit method would be. • It exhibits the simplicity of implementation of an explicit method. • It is specifically designed for slow transient flow problems of long duration such as can occur inside nuclear reactor coolant systems. • Our findings indicate the new method can integrate slow transient problems very efficiently, and its implementation is very robust. - Abstract: We introduce a point implicit time integration technique for slow transient flow problems. The method treats the solution variables of interest (which can be located at cell centers, cell edges, or cell nodes) implicitly, and the rest of the information related to the same or other variables is handled explicitly. The method does not require implicit iteration; instead it time advances the solutions in a similar spirit to explicit methods, except that it involves a few additional function evaluation steps. Moreover, the method is unconditionally stable, as a fully implicit method would be. This new approach exhibits the simplicity of implementation of explicit methods and the stability of implicit methods. It is specifically designed for slow transient flow problems of long duration wherein one would like to perform time integrations with very large time steps. Because the method can be time inaccurate for fast transient problems, particularly with larger time steps, an appropriate solution strategy for a problem that evolves from a fast to a slow transient would be to integrate the fast transient with an explicit or semi-implicit technique and then switch to this point implicit method as soon as the time variation slows sufficiently. We have solved several test problems that result from scalar or systems of flow equations. Our findings indicate the new method can integrate slow transient problems very efficiently, and its implementation is very robust.
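The point-implicit idea — take the solution variable itself at the new time level and everything else explicitly — reduces, for a scalar relaxation equation, to a closed-form division per step. The sketch below is an illustrative scalar example, not the paper's flow-equation implementation; it shows stability at time steps far beyond the explicit limit:

```python
def point_implicit_step(u, dt, lam, src):
    """One point-implicit step for du/dt = -lam*u + src: u is taken at the
    new time level (implicit), the source term is evaluated explicitly.
    The update is a closed-form division, with no Newton iteration, yet it
    is stable for any dt, unlike forward Euler (stable only for dt < 2/lam)."""
    return (u + dt * src) / (1.0 + dt * lam)

# Stiff relaxation toward u_inf = src/lam = 2.0, with a time step ~500x
# larger than the explicit stability limit (2/lam = 2e-3).
u, lam, src, dt = 0.0, 1000.0, 2000.0, 1.0
for _ in range(50):
    u = point_implicit_step(u, dt, lam, src)
```

This mirrors the trade-off stated above: the huge step is fine once the transient is slow, but the same step would smear any fast transient, hence the suggested hand-off from an explicit or semi-implicit scheme.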
A point implicit time integration technique for slow transient flow problems
International Nuclear Information System (INIS)
Kadioglu, Samet Y.; Berry, Ray A.; Martineau, Richard C.
2015-01-01
Highlights: • This new method does not require implicit iteration; instead it time advances the solutions in a similar spirit to explicit methods. • It is unconditionally stable, as a fully implicit method would be. • It exhibits the simplicity of implementation of an explicit method. • It is specifically designed for slow transient flow problems of long duration such as can occur inside nuclear reactor coolant systems. • Our findings indicate the new method can integrate slow transient problems very efficiently, and its implementation is very robust. - Abstract: We introduce a point implicit time integration technique for slow transient flow problems. The method treats the solution variables of interest (which can be located at cell centers, cell edges, or cell nodes) implicitly, and the rest of the information related to the same or other variables is handled explicitly. The method does not require implicit iteration; instead it time advances the solutions in a similar spirit to explicit methods, except that it involves a few additional function evaluation steps. Moreover, the method is unconditionally stable, as a fully implicit method would be. This new approach exhibits the simplicity of implementation of explicit methods and the stability of implicit methods. It is specifically designed for slow transient flow problems of long duration wherein one would like to perform time integrations with very large time steps. Because the method can be time inaccurate for fast transient problems, particularly with larger time steps, an appropriate solution strategy for a problem that evolves from a fast to a slow transient would be to integrate the fast transient with an explicit or semi-implicit technique and then switch to this point implicit method as soon as the time variation slows sufficiently. We have solved several test problems that result from scalar or systems of flow equations. Our findings indicate the new method can integrate slow transient problems very efficiently, and its implementation is very robust.
Global format for energy-momentum based time integration in nonlinear dynamics
DEFF Research Database (Denmark)
Krenk, Steen
2014-01-01
A global format is developed for momentum and energy consistent time integration of second‐order dynamic systems with general nonlinear stiffness. The algorithm is formulated by integrating the state‐space equations of motion over the time increment. The internal force is first represented...... of mean value products at the element level or explicit use of a geometric stiffness matrix. An optional monotonic algorithmic damping, increasing with response frequency, is developed in terms of a single damping parameter. In the solution procedure, the velocity is eliminated and the nonlinear...
INTEGRITY ANALYSIS OF REAL-TIME PPP TECHNIQUE WITH IGS-RTS SERVICE FOR MARITIME NAVIGATION
Directory of Open Access Journals (Sweden)
M. El-Diasty
2017-10-01
Open sea and inland waterways are the most widely used mode for transporting goods worldwide. It is the International Maritime Organization (IMO) that defines the requirements for position-fixing equipment for a worldwide radio-navigation system, in terms of accuracy, integrity, continuity, availability and coverage for the various phases of navigation. Satellite positioning systems can contribute to meeting these requirements, as well as optimize marine transportation. Marine navigation usually consists of three major phases identified as ocean/coastal/port approach/inland waterway, in-port navigation, and automatic docking, with alert limits ranging from 25 m to 0.25 m. GPS positioning is widely used for many applications and is currently recognized by the IMO for future maritime navigation. With the advancement in autonomous GPS positioning techniques such as Precise Point Positioning (PPP) and with the advent of new real-time GNSS correction services such as the IGS Real-Time Service (RTS), it is necessary to investigate the integrity of the PPP-based positioning technique along with the IGS-RTS service in terms of availability and reliability for safe navigation in maritime applications. This paper monitors the integrity of an autonomous real-time PPP-based GPS positioning system using the IGS real-time service (RTS) for maritime applications that require a minimum availability of integrity of 99.8 % to fulfil the IMO integrity standards. To examine the integrity of the real-time IGS-RTS PPP-based technique for maritime applications, kinematic data from a dual-frequency GPS receiver are collected onboard a vessel and investigated with the real-time IGS-RTS PPP-based GPS positioning technique. It is shown that the availability of integrity of the real-time IGS-RTS PPP-based GPS solution is 100 % for all navigation phases and therefore fulfils the IMO integrity standards (99.8 % availability) immediately (after 1 second), after 2 minutes, and after 42 minutes of convergence.
Integrity Analysis of Real-Time Ppp Technique with Igs-Rts Service for Maritime Navigation
El-Diasty, M.
2017-10-01
Open sea and inland waterways are the most widely used mode for transporting goods worldwide. It is the International Maritime Organization (IMO) that defines the requirements for position-fixing equipment for a worldwide radio-navigation system, in terms of accuracy, integrity, continuity, availability and coverage for the various phases of navigation. Satellite positioning systems can contribute to meeting these requirements, as well as optimize marine transportation. Marine navigation usually consists of three major phases identified as ocean/coastal/port approach/inland waterway, in-port navigation, and automatic docking, with alert limits ranging from 25 m to 0.25 m. GPS positioning is widely used for many applications and is currently recognized by the IMO for future maritime navigation. With the advancement in autonomous GPS positioning techniques such as Precise Point Positioning (PPP) and with the advent of new real-time GNSS correction services such as the IGS Real-Time Service (RTS), it is necessary to investigate the integrity of the PPP-based positioning technique along with the IGS-RTS service in terms of availability and reliability for safe navigation in maritime applications. This paper monitors the integrity of an autonomous real-time PPP-based GPS positioning system using the IGS real-time service (RTS) for maritime applications that require a minimum availability of integrity of 99.8 % to fulfil the IMO integrity standards. To examine the integrity of the real-time IGS-RTS PPP-based technique for maritime applications, kinematic data from a dual-frequency GPS receiver are collected onboard a vessel and investigated with the real-time IGS-RTS PPP-based GPS positioning technique. It is shown that the availability of integrity of the real-time IGS-RTS PPP-based GPS solution is 100 % for all navigation phases and therefore fulfils the IMO integrity standards (99.8 % availability) immediately (after 1 second), after 2 minutes, and after 42 minutes of convergence.
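The "availability of integrity" figure quoted above is simply the percentage of epochs at which the computed protection level stays below the alert limit for the navigation phase. A minimal sketch with synthetic numbers (the 25 m limit is the ocean/coastal figure from the abstract; the protection-level series is invented):

```python
def integrity_availability(protection_levels, alert_limit):
    """Availability of integrity: percentage of epochs at which the
    computed protection level is below the phase-of-navigation alert
    limit (IMO requires at least 99.8 % for maritime use)."""
    ok = sum(1 for pl in protection_levels if pl < alert_limit)
    return 100.0 * ok / len(protection_levels)

# Synthetic series: 1000 epochs with a 3 m protection level except for
# two outliers, checked against the 25 m ocean/coastal alert limit.
pls = [3.0] * 998 + [30.0, 40.0]
avail = integrity_availability(pls, alert_limit=25.0)
```

Tighter phases (in-port, automatic docking) reuse the same computation with smaller alert limits, which is why PPP convergence time matters: the protection level must shrink below the stricter limit before those phases count as available.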
DEFF Research Database (Denmark)
Bordbar, Aarash; Yurkovich, James T.; Paglia, Giuseppe
2017-01-01
The increasing availability of metabolomics data necessitates novel methods for deeper data analysis and interpretation. We present a flux balance analysis method that allows for the computation of dynamic intracellular metabolic changes at the cellular scale through integration of time-course ab......The increasing availability of metabolomics data necessitates novel methods for deeper data analysis and interpretation. We present a flux balance analysis method that allows for the computation of dynamic intracellular metabolic changes at the cellular scale through integration of time...
An explicit marching on-in-time solver for the time domain volume magnetic field integral equation
Sayed, Sadeed Bin
2014-07-01
Transient scattering from inhomogeneous dielectric objects can be modeled using time domain volume integral equations (TDVIEs). TDVIEs are oftentimes solved using marching-on-in-time (MOT) techniques. Classical MOT-TDVIE solvers expand the field induced on the scatterer using local spatio-temporal basis functions. Inserting this expansion into the TDVIE and testing the resulting equation in space and time yields a system of equations that is solved by time marching. Depending on the type of the basis and testing functions and the time step, the time marching scheme can be implicit (N. T. Gres, et al., Radio Sci., 36(3), 379-386, 2001) or explicit (A. Al-Jarro, et al., IEEE Trans. Antennas Propag., 60(11), 5203-5214, 2012). Implicit MOT schemes are known to be more stable and accurate. However, under low-frequency excitation, i.e., when the time step size is large, they call for inversion of a full matrix system at every time step.
An explicit marching on-in-time solver for the time domain volume magnetic field integral equation
Sayed, Sadeed Bin; Ulku, Huseyin Arda; Bagci, Hakan
2014-01-01
Transient scattering from inhomogeneous dielectric objects can be modeled using time domain volume integral equations (TDVIEs). TDVIEs are oftentimes solved using marching-on-in-time (MOT) techniques. Classical MOT-TDVIE solvers expand the field induced on the scatterer using local spatio-temporal basis functions. Inserting this expansion into the TDVIE and testing the resulting equation in space and time yields a system of equations that is solved by time marching. Depending on the type of the basis and testing functions and the time step, the time marching scheme can be implicit (N. T. Gres, et al., Radio Sci., 36(3), 379-386, 2001) or explicit (A. Al-Jarro, et al., IEEE Trans. Antennas Propag., 60(11), 5203-5214, 2012). Implicit MOT schemes are known to be more stable and accurate. However, under low-frequency excitation, i.e., when the time step size is large, they call for inversion of a full matrix system at every time step.
Integration of Simulink, MARTe and MDSplus for rapid development of real-time applications
Energy Technology Data Exchange (ETDEWEB)
Manduchi, G., E-mail: gabriele.manduchi@igi.cnr.it [Consorzio RFX (CNR, ENEA, INFN, Università di Padova, Acciaierie Venete SpA), Padova (Italy); Luchetta, A.; Taliercio, C. [Consorzio RFX (CNR, ENEA, INFN, Università di Padova, Acciaierie Venete SpA), Padova (Italy); Neto, A.; Sartori, F. [Fusion for Energy, Barcelona (Spain); De Tommasi, G. [Fusion for Energy, Barcelona (Spain); Consorzio CREATE/DIETI, Università degli Studi di Napoli Federico II, Via Claudio 21, 80125 Napoli (Italy)
2015-10-15
Highlights: • The integration of two frameworks for real-time control and data acquisition is described. • The integration may significantly speed up the development of system components. • The system also includes a code generator for the integration of code written in Simulink. • A real-time control system can be implemented without writing any line of code. - Abstract: Simulink is a graphical data flow programming tool for modeling and simulating dynamic systems. A component of Simulink, called Simulink Coder, generates C code from Simulink diagrams. MARTe is a framework for the implementation of real-time systems, currently in use in several fusion experiments. MDSplus is a framework widely used in the fusion community for the management of data. The three systems provide a solution to different facets of the same process, that is, real-time plasma control development. Simulink diagrams will describe the algorithms used in control, which will be implemented as MARTe GAMs and which will use parameters read from and produce results written to MDSplus pulse files. The three systems have been integrated in order to provide a tool suitable to speed up the development of real-time control applications. In particular, it will be shown how from a Simulink diagram describing a given algorithm to be used in a control system, it is possible to generate in an automated way the corresponding MARTe and MDSplus components that can be assembled to implement the target system.
Integration of Simulink, MARTe and MDSplus for rapid development of real-time applications
International Nuclear Information System (INIS)
Manduchi, G.; Luchetta, A.; Taliercio, C.; Neto, A.; Sartori, F.; De Tommasi, G.
2015-01-01
Highlights: • The integration of two frameworks for real-time control and data acquisition is described. • The integration may significantly speed up the development of system components. • The system also includes a code generator for the integration of code written in Simulink. • A real-time control system can be implemented without writing any line of code. - Abstract: Simulink is a graphical data flow programming tool for modeling and simulating dynamic systems. A component of Simulink, called Simulink Coder, generates C code from Simulink diagrams. MARTe is a framework for the implementation of real-time systems, currently in use in several fusion experiments. MDSplus is a framework widely used in the fusion community for the management of data. The three systems provide a solution to different facets of the same process, that is, real-time plasma control development. Simulink diagrams will describe the algorithms used in control, which will be implemented as MARTe GAMs and which will use parameters read from and produce results written to MDSplus pulse files. The three systems have been integrated in order to provide a tool suitable to speed up the development of real-time control applications. In particular, it will be shown how from a Simulink diagram describing a given algorithm to be used in a control system, it is possible to generate in an automated way the corresponding MARTe and MDSplus components that can be assembled to implement the target system.
Timing of Formal Phase Safety Reviews for Large-Scale Integrated Hazard Analysis
Massie, Michael J.; Morris, A. Terry
2010-01-01
Integrated hazard analysis (IHA) is a process used to identify and control unacceptable risk. As such, it does not occur in a vacuum. IHA approaches must be tailored to fit the system being analyzed. Physical, resource, organizational and temporal constraints on large-scale integrated systems impose additional direct or derived requirements on the IHA. The timing and interaction between engineering and safety organizations can provide either benefits or hindrances to the overall end product. The traditional approach for formal phase safety review timing and content, which generally works well for small- to moderate-scale systems, does not work well for very large-scale integrated systems. This paper proposes a modified approach to timing and content of formal phase safety reviews for IHA. Details of the tailoring process for IHA will describe how to avoid temporary disconnects in major milestone reviews and how to maintain a cohesive end-to-end integration story particularly for systems where the integrator inherently has little to no insight into lower level systems. The proposal has the advantage of allowing the hazard analysis development process to occur as technical data normally matures.
Stability Analysis and Variational Integrator for Real-Time Formation Based on Potential Field
Directory of Open Access Journals (Sweden)
Shengqing Yang
2014-01-01
Full Text Available This paper investigates a framework for real-time formation of autonomous vehicles using a potential field and a variational integrator. Real-time formation requires vehicles to have coordinated motion and efficient computation. Interactions described by a potential field can meet the former requirement, but they result in a nonlinear system whose stability analysis is difficult. Our stability analysis is carried out in the error dynamic system. Transforming coordinates from the inertial frame to the body frame lets the stability analysis focus on the structure instead of particular coordinates. Then, the Jacobian of the reduced system can be calculated. It can be proved that the formation is stable at the equilibrium point of the error dynamic system under the effect of a damping force. For computational efficiency, a variational integrator is introduced, which reduces the integration to solving algebraic equations. The forced Euler-Lagrange equation in discrete form is used to construct a forced variational integrator for vehicles in a potential field and obstacle environment. By applying the forced variational integrator to the computation of the vehicles' motion, real-time formation of vehicles in an obstacle environment can be implemented. An algorithm based on the forced variational integrator is designed for a leader-follower formation.
Maximum Power from a Solar Panel
Directory of Open Access Journals (Sweden)
Michael Miller
2010-01-01
Full Text Available Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current of maximum power. These quantities are determined by finding the maximum value for the equation for power using differentiation. After the maximum values are found for each time of day, each individual quantity, voltage of maximum power, current of maximum power, and maximum power is plotted as a function of the time of day.
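The maximization described above, setting dP/dV = 0 for P(V) = V * I(V), can be sketched numerically. The diode-style I-V model and all parameter values below are illustrative assumptions, not the panel characterized in the article.

```python
import numpy as np

def panel_current(v, i_sc=5.0, i_0=1e-9, v_t=1.2):
    """Illustrative diode-style I-V curve (all parameters assumed)."""
    return i_sc - i_0 * (np.exp(v / v_t) - 1.0)

def max_power_point(v_max=30.0, n=300001):
    """Locate the voltage where dP/dV = 0 by densely sampling P = V * I."""
    v = np.linspace(0.0, v_max, n)
    p = v * panel_current(v)
    k = int(np.argmax(p))
    return v[k], float(panel_current(v[k])), float(p[k])

v_mp, i_mp, p_mp = max_power_point()
```

For this model the maximum-power voltage sits just below the open-circuit voltage, as is typical of real panels.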
Directory of Open Access Journals (Sweden)
M. C. Roa-García
2010-08-01
Full Text Available We present a new modeling approach for analyzing and predicting the Transit Time Distribution (TTD) and the Response Time Distribution (RTD) from hourly to annual time scales as two distinct hydrological processes. The model integrates Isotope Hydrograph Separation (IHS) and the Instantaneous Unit Hydrograph (IUH) approach as a tool to provide a more realistic description of transit and response times of water in catchments. Individual event simulations and parameterizations were combined with long-term baseflow simulation and parameterizations; this provides a comprehensive picture of the catchment response over a long time span for the hydraulic and isotopic processes. The proposed method was tested in three Andean headwater catchments to compare the effects of land use on hydrological response and solute transport. Results show that the characteristics of events and antecedent conditions have a significant influence on TTD and RTD, but in general the RTD of the grassland-dominated catchment is concentrated in the shorter time spans and has a higher cumulative TTD, while the forest-dominated catchment has a relatively higher response distribution and lower cumulative TTD. The catchment where wetlands concentrate shows a flashier response, but wetlands also appear to prolong transit time.
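The IUH component of the approach described above is, at its core, linear routing: the response hydrograph is the discrete convolution of effective rainfall with the unit hydrograph. The sketch below shows only that generic routing step, not the paper's combined IHS/IUH model; the rainfall and unit-hydrograph values are illustrative.

```python
import numpy as np

def route(effective_rainfall, iuh):
    """Linear IUH routing: the response hydrograph is the discrete
    convolution of effective rainfall with the unit hydrograph."""
    return np.convolve(effective_rainfall, iuh)

# A unit hydrograph that sums to 1 conserves water volume.
rain = np.array([1.0, 0.0, 2.0])
iuh = np.array([0.5, 0.3, 0.2])
flow = route(rain, iuh)
```

Because the unit hydrograph ordinates sum to one, the total routed flow equals the total effective rainfall, a quick sanity check for any IUH parameterization.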
A study of pile-up in integrated time-correlated single photon counting systems.
Arlt, Jochen; Tyndall, David; Rae, Bruce R; Li, David D-U; Richardson, Justin A; Henderson, Robert K
2013-10-01
Recent demonstration of highly integrated, solid-state, time-correlated single photon counting (TCSPC) systems in CMOS technology is set to provide significant increases in performance over existing bulky, expensive hardware. Arrays of single photon avalanche diode (SPAD) detectors, timing channels, and signal processing can be integrated on a single silicon chip with a degree of parallelism and computational speed that is unattainable by discrete photomultiplier tube and photon counting card solutions. New multi-channel, multi-detector TCSPC sensor architectures with greatly enhanced throughput due to minimal detector transit (dead) time or timing channel dead time are now feasible. In this paper, we study the potential for future integrated, solid-state TCSPC sensors to exceed the photon pile-up limit through analytic formula and simulation. The results are validated using a 10% fill factor SPAD array and an 8-channel, 52 ps resolution time-to-digital conversion architecture with embedded lifetime estimation. It is demonstrated that pile-up insensitive acquisition is attainable at greater than 10 times the pulse repetition rate providing over 60 dB of extended dynamic range to the TCSPC technique. Our results predict future CMOS TCSPC sensors capable of live-cell transient observations in confocal scanning microscopy, improved resolution of near-infrared optical tomography systems, and fluorescence lifetime activated cell sorting.
International Nuclear Information System (INIS)
Xiong, Z.; Tripp, A.C.
1994-01-01
This paper presents an integral equation algorithm for 3D EM modeling at high frequencies for applications in engineering and environmental studies. The integral equation method remains the same for low and high frequencies, but the dominant role of displacement currents complicates both numerical treatment and interpretation. Using a singularity extraction technique, the authors successfully extend the application of the Hankel filtering technique to the computation of Hankel integrals occurring in high frequency EM modeling. Time domain results are calculated from frequency domain results via Fourier transforms. While frequency domain data are not obvious to interpret, time domain data show wave-like pictures that resemble seismograms. Both 1D and 3D numerical results clearly show the layer interfaces
DEFF Research Database (Denmark)
Chen, Shanshin; Tortorelli, Daniel A.; Hansen, John Michael
1999-01-01
Advances in computer hardware and improved algorithms for multibody dynamics over the past decade have generated widespread interest in real-time simulations of multibody mechanics systems. At the heart of the widely used algorithms for multibody dynamics are a choice of coordinates which define...... the kinematics of the system, and a choice of time integration algorithms. The current approach uses a non-dissipative implicit Newmark method to integrate the equations of motion defined in terms of the independent joint coordinates of the system. The reduction of the equations of motion to a minimal set...... of ordinary differential equations is employed to avoid the instabilities associated with the direct integration of differential-algebraic equations. To extend the unconditional stability of the implicit Newmark method to nonlinear dynamic systems, a discrete energy balance is enforced. This constraint......
Model-based Integration of Past & Future in TimeTravel
DEFF Research Database (Denmark)
Khalefa, Mohamed E.; Fischer, Ulrike; Pedersen, Torben Bach
2012-01-01
We demonstrate TimeTravel, an efficient DBMS system for seamless integrated querying of past and (forecasted) future values of time series, allowing the user to view past and future values as one joint time series. This functionality is important for advanced application domains such as energy...... The main idea is to compactly represent time series as models. By using models, the TimeTravel system answers queries approximately on past and future data with error guarantees (absolute error and confidence) one order of magnitude faster than when accessing the time series directly. In addition...... it to answer approximate and exact queries. TimeTravel is implemented into PostgreSQL, thus achieving complete user transparency at the query level. In the demo, we show the easy building of a hierarchical model index for a real-world time series and the effect of varying the error guarantees on the speed up......
An efficient explicit marching on in time solver for magnetic field volume integral equation
Sayed, Sadeed Bin
2015-07-25
An efficient explicit marching on in time (MOT) scheme for solving the magnetic field volume integral equation is proposed. The MOT system is cast in the form of an ordinary differential equation and is integrated in time using a PE(CE)m multistep scheme. At each time step, a system with a Gram matrix is solved for the predicted/corrected field expansion coefficients. Depending on the type of the spatial testing scheme, the Gram matrix is either sparse or consists of blocks with only diagonal entries, regardless of the time step size. Consequently, the resulting MOT scheme is more efficient than its implicit counterparts, which call for the inversion of a fuller matrix system at lower frequencies. Numerical results, which demonstrate the efficiency, accuracy, and stability of the proposed MOT scheme, are presented.
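The abstract casts the MOT system as an ordinary differential equation integrated with a PE(CE)m multistep scheme. As a scalar analogue of that time-stepping pattern (not the actual field solver; the 2-step Adams-Bashforth predictor and trapezoidal corrector below are assumed choices), a PE(CE)m integrator looks like:

```python
def pece(f, y0, t0, t1, n, m=2):
    """PE(CE)^m time marching: 2-step Adams-Bashforth predictor (P),
    then m Evaluate/Correct passes (CE) with the trapezoidal rule."""
    h = (t1 - t0) / n
    ts = [t0 + i * h for i in range(n + 1)]
    ys = [y0]
    fs = [f(t0, y0)]
    # Bootstrap the first step with Heun's method.
    yp = y0 + h * fs[0]
    y1 = y0 + 0.5 * h * (fs[0] + f(ts[1], yp))
    ys.append(y1)
    fs.append(f(ts[1], y1))
    for i in range(1, n):
        # Predict (P) with AB2, then iterate Evaluate/Correct (CE) m times.
        yc = ys[i] + h * (1.5 * fs[i] - 0.5 * fs[i - 1])
        for _ in range(m):
            yc = ys[i] + 0.5 * h * (fs[i] + f(ts[i + 1], yc))
        ys.append(yc)
        fs.append(f(ts[i + 1], yc))
    return ys[-1]
```

In the MOT setting each "evaluate" corresponds to a Gram-matrix solve for the field expansion coefficients; here it is a single function evaluation.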
Distributed finite-time containment control for double-integrator multiagent systems.
Wang, Xiangyu; Li, Shihua; Shi, Peng
2014-09-01
In this paper, the distributed finite-time containment control problem for double-integrator multiagent systems with multiple leaders and external disturbances is discussed. In the presence of multiple dynamic leaders, by utilizing the homogeneous control technique, a distributed finite-time observer is developed for the followers to estimate the weighted average of the leaders' velocities at first. Then, based on the estimates and the generalized adding a power integrator approach, distributed finite-time containment control algorithms are designed to guarantee that the states of the followers converge to the dynamic convex hull spanned by those of the leaders in finite time. Moreover, as a special case of multiple dynamic leaders with zero velocities, the proposed containment control algorithms also work for the case of multiple stationary leaders without using the distributed observer. Simulations demonstrate the effectiveness of the proposed control algorithms.
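The paper's controllers are finite-time and observer-based; as a much simpler illustration of containment itself, the sketch below simulates one double-integrator follower under a plain linear (asymptotic, not finite-time) containment law for the stationary-leader special case. The gains, topology, and leader positions are all assumed for illustration.

```python
def simulate_follower(leaders, x0=5.0, v0=0.0, k=1.0, c=2.0,
                      dt=0.01, steps=2000):
    """Double-integrator follower under a linear containment law:
    u = -k * sum_j (x - x_leader_j) - c * v.
    For stationary leaders, x converges into their convex hull."""
    x, v = x0, v0
    for _ in range(steps):
        u = -k * sum(x - xl for xl in leaders) - c * v
        # Forward Euler integration of x' = v, v' = u.
        x, v = x + dt * v, v + dt * u
    return x, v

x_f, v_f = simulate_follower(leaders=[0.0, 1.0])
```

With leaders at 0 and 1 and equal weights, the follower settles at their average, which lies inside the convex hull [0, 1].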
International Nuclear Information System (INIS)
Enslin, J.H.R.
1990-01-01
A well engineered renewable remote energy system, utilizing the principle of maximum power point tracking, can be more cost effective, more reliable, and can improve the quality of life in remote areas. This paper reports that a high-efficiency power electronic converter, for converting the output voltage of a solar panel or wind generator to the required DC battery bus voltage, has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. Maximum power point tracking for relatively small systems is achieved by maximization of the output current in a battery charging regulator, using an optimized hill-climbing, inexpensive microprocessor based algorithm. Through practical field measurements it is shown that a minimum input source saving of 15% on 3-5 kWh/day systems can easily be achieved. A total cost saving of at least 10-15% on the capital cost of these systems is achievable for relatively small Remote Area Power Supply systems. The advantages are even greater for larger temperature variations and higher power ratings. Other advantages include optimal sizing and system monitoring and control
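The paper's tracker maximizes battery charging current with an optimized hill-climbing algorithm; the sketch below shows the generic perturb-and-observe variant of hill climbing on an assumed power curve, not the paper's microprocessor implementation.

```python
def po_mppt(power_at, v0, dv=0.1, steps=200):
    """Perturb and observe: step the operating voltage, keep the
    direction while power increases, reverse it when power drops."""
    v, direction = v0, 1.0
    p_prev = power_at(v)
    for _ in range(steps):
        v += direction * dv
        p = power_at(v)
        if p < p_prev:
            direction = -direction
        p_prev = p
    return v

# Illustrative concave power curve peaking at 17 V.
v_op = po_mppt(lambda v: 100.0 - (v - 17.0) ** 2, v0=10.0)
```

With a fixed perturbation step the operating point oscillates within one step of the true maximum, which is the usual trade-off between tracking speed and steady-state ripple.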
Receiver function estimated by maximum entropy deconvolution
Institute of Scientific and Technical Information of China (English)
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule to determine the auto-correlation and cross-correlation functions. The Toeplitz equation and Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method to measure receiver functions in the time domain.
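The Toeplitz/Levinson machinery the abstract refers to is the standard Levinson-Durbin recursion for the prediction-error filter; the generic implementation below (not the authors' code) also makes visible the stability property they note: the recursion stays stable as long as each reflection coefficient has magnitude below 1.

```python
def levinson_durbin(r, order):
    """Solve the Toeplitz normal equations for the prediction-error
    filter from autocorrelations r[0..order] (Levinson recursion)."""
    a = [1.0] + [0.0] * order
    err = r[0]
    for k in range(1, order + 1):
        acc = sum(a[j] * r[k - j] for j in range(k))
        ref = -acc / err          # reflection coefficient, |ref| < 1
        new_a = a[:]
        for j in range(1, k):
            new_a[j] = a[j] + ref * a[k - j]
        new_a[k] = ref
        a = new_a
        err *= (1.0 - ref * ref)  # prediction error shrinks each order
    return a, err

# AR(1) autocorrelation r[k] = 0.5**k yields the filter [1, -0.5, 0].
coeffs, perr = levinson_durbin([1.0, 0.5, 0.25], 2)
```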
Lötters, Joost Conrad; Groenesteijn, Jarno; van der Wouden, E.J.; Sparreboom, Wouter; Lammerink, Theodorus S.J.; Wiegerink, Remco J.
2015-01-01
We have designed and realised a fully integrated microfluidic measurement system for real-time determination of both flow rate and composition of gas- and liquid mixtures. The system comprises relative permittivity sensors, pressure sensors, a Coriolis flow and density sensor, a thermal flow sensor
Veeger, C.P.L.; Etman, L.F.P.; Lefeber, A.A.J.; Adan, I.J.B.F.; Herk, van J.; Rooda, J.E.
2011-01-01
To predict cycle time distributions of integrated processing workstations, detailed simulation models are almost exclusively used; these models require considerable development and maintenance effort. As an alternative, we propose an aggregate model that is a lumped-parameter representation of the
Approximation of itô integrals arising in stochastic time-delayed systems
Bagchi, Arunabha
1984-01-01
The likelihood functional for stochastic linear time-delayed systems involves Itô integrals with respect to the observed data. Since the Wiener process appearing in the standard observation process model for such systems is not realizable and the physically observed process is smooth, one needs to study
Roelofs, A.P.A.
2012-01-01
A few studies have examined selective attention in Stroop task performance through ex-Gaussian analyses of response time (RT) distributions. It has remained unclear whether the tail of the RT distribution in vocal responding reflects spatial integration of relevant and irrelevant attributes, as
DEFF Research Database (Denmark)
Nielsen, Martin Bjerre; Krenk, Steen
2012-01-01
A conservative time integration algorithm for rigid body rotations is presented in a purely algebraic form in terms of the four quaternions components and the four conjugate momentum variables via Hamilton’s equations. The introduction of an extended mass matrix leads to a symmetric set of eight...
Time-varying market integration and expected returns in emerging mrkets
de Jong, F.C.J.M.; de Roon, F.
2001-01-01
We use a simple model in which the expected returns in emerging markets depend on their systematic risk as measured by their beta relative to the world portfolio as well as on the level of integration in that market. The level of integration is a time-varying variable that depends on the market value
Photobleaching kinetics and time-integrated emission of fluorescent probes in cellular membranes
DEFF Research Database (Denmark)
Wüstner, Daniel; Christensen, Tanja; Solanko, Lukasz Michal
2014-01-01
Since the pioneering work of Hirschfeld, it is known that time-integrated emission (TiEm) of a fluorophore is independent of fluorescence quantum yield and illumination intensity. Practical implementation of this important result for determining exact probe distribution in living cells is often h...
Analysis of time integration methods for the compressible two-fluid model for pipe flow simulations
B. Sanderse (Benjamin); I. Eskerud Smith (Ivar); M.H.W. Hendrix (Maurice)
2017-01-01
textabstractIn this paper we analyse different time integration methods for the two-fluid model and propose the BDF2 method as the preferred choice to simulate transient compressible multiphase flow in pipelines. Compared to the prevailing Backward Euler method, the BDF2 scheme has a significantly
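For the scalar linear test equation y' = lam*y, the BDF2 update 3*y_{n+1} - 4*y_n + y_{n-1} = 2*h*lam*y_{n+1} can be solved in closed form. The sketch below illustrates the scheme itself, not the paper's two-fluid pipe-flow solver; the backward Euler bootstrap for the first step is an assumed choice.

```python
def bdf2_linear(lam, y0, h, n):
    """BDF2 for y' = lam * y. The implicit step is solved in closed
    form; the first step is bootstrapped with backward Euler."""
    y_prev = y0
    y = y0 / (1.0 - h * lam)       # backward Euler start
    for _ in range(n - 1):
        y_prev, y = y, (4.0 * y - y_prev) / (3.0 - 2.0 * h * lam)
    return y

# 100 steps of size 0.01 for y' = -y, y(0) = 1, approximates exp(-1).
y_end = bdf2_linear(-1.0, 1.0, 0.01, 100)
```

BDF2 is A-stable and second-order accurate, which is why it is attractive over backward Euler for stiff transient flow problems.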
Time-integration methods for finite element discretisations of the second-order Maxwell equation
Sarmany, D.; Bochev, Mikhail A.; van der Vegt, Jacobus J.W.
This article deals with time integration for the second-order Maxwell equations with possibly non-zero conductivity in the context of the discontinuous Galerkin finite element method (DG-FEM) and the $H(\mathrm{curl})$-conforming FEM. For the spatial discretisation, hierarchic
Velocity time integral for right upper pulmonary vein in VLBW infants with patent ductus arteriosus.
Lista, Gianluca; Bianchi, Silvia; Mannarino, Savina; Schena, Federico; Castoldi, Francesca; Stronati, Mauro; Mosca, Fabio
2016-10-01
Early diagnosis of significant patent ductus arteriosus reduces the risk of clinical worsening in very low birth weight infants. Echocardiographic patent ductus arteriosus shunt flow pattern can be used to predict significant patent ductus arteriosus. Pulmonary venous flow, expressed as vein velocity time integral, is correlated to ductus arteriosus closure. The aim of this study is to investigate the relationship between significant reductions in vein velocity time integral and non-significant patent ductus arteriosus in the first week of life. A multicenter, prospective, observational study was conducted to evaluate very low birth weight infants (ductus. The mean vein velocity time integral significantly reduced in the first four days of life. On the fourth day of life, there was less of a reduction in patients with patent ductus compared to those with closed patent ductus arteriosus and the difference was significant. A significant reduction in vein velocity time integral in the first days of life is associated with ductus closure. This parameter correlates well with other echocardiographic parameters and may aid in the diagnosis and management of patent ductus arteriosus.
Physics in Design : Real-time Numerical Simulation Integrated into the CAD Environment
Zwier, Marijn P.; Wits, Wessel W.
2017-01-01
As today's markets are more susceptible to rapid changes and involve global players, a short time to market is required to keep a competitive edge. Concurrently, products are integrating an increasing number of functions and technologies, thus becoming progressively complex. Therefore, efficient and
Effects of attitude dissimilarity and time on social integration : A longitudinal panel study
Van der Vegt, G.S.
2002-01-01
A longitudinal panel study in 25 work groups of elementary school teachers examined the effect of attitudinal dissimilarity and time on social integration across a 9-month period. In line with the prediction based on both the similarity-attraction approach and social identity theory, cross-lagged
Long-time integrator for the study on plasma parameter fluctuations
International Nuclear Information System (INIS)
Zalkind, V.M.; Tarasenko, V.P.
1975-01-01
A device measuring the absolute value (x) of a fluctuating quantity x(t) averaged over a large number of realizations is described. The specific features of the device are the use of a time selector (Δt = 50 μs - 1 ms) and the large time integration constant (tau = 30 hrs). The device is meant for studying fluctuations of parameters of a pulsed plasma with a low repetition frequency
Integral transform method for solving time fractional systems and fractional heat equation
Directory of Open Access Journals (Sweden)
Arman Aghili
2014-01-01
Full Text Available In the present paper, a time fractional partial differential equation is considered, where the fractional derivative is defined in the Caputo sense. The Laplace transform method has been applied to obtain an exact solution. The authors solved certain homogeneous and nonhomogeneous time fractional heat equations using the integral transform. The transform method is a powerful tool for solving fractional singular integro-differential equations and PDEs. The result reveals that the transform method is very convenient and effective.
Directory of Open Access Journals (Sweden)
Jess Hartcher-O'Brien
Full Text Available Often multisensory information is integrated in a statistically optimal fashion where each sensory source is weighted according to its precision. This integration scheme is statistically optimal because it theoretically results in unbiased perceptual estimates with the highest precision possible. There is a current lack of consensus about how the nervous system processes multiple sensory cues to elapsed time. In order to shed light upon this, we adopt a computational approach to pinpoint the integration strategy underlying duration estimation of audio/visual stimuli. One of the assumptions of our computational approach is that the multisensory signals redundantly specify the same stimulus property. Our results clearly show that despite claims to the contrary, perceived duration is the result of an optimal weighting process, similar to that adopted for estimates of space. That is, participants weight the audio and visual information to arrive at the most precise, single duration estimate possible. The work also disentangles how different integration strategies - i.e. considering the time of onset/offset of signals - might alter the final estimate. As such we provide the first concrete evidence of an optimal integration strategy in human duration estimates.
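The precision-weighted fusion rule tested in such studies has a standard closed form: each cue is weighted by its inverse variance, and the fused estimate is never less precise than the better single cue. The numbers below are illustrative, not the study's data.

```python
def fuse(est_a, var_a, est_v, var_v):
    """Statistically optimal (precision-weighted) fusion of an auditory
    and a visual duration estimate."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
    est = w_a * est_a + (1.0 - w_a) * est_v
    var = 1.0 / (1.0 / var_a + 1.0 / var_v)  # never worse than either cue
    return est, var

# A reliable audio cue (var 0.01) dominates a noisier visual one (var 0.04).
d, dvar = fuse(1.0, 0.01, 1.2, 0.04)
```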
Beghein, Yves
2013-03-01
The time domain combined field integral equation (TD-CFIE), which is constructed from a weighted sum of the time domain electric and magnetic field integral equations (TD-EFIE and TD-MFIE) for analyzing transient scattering from closed perfect electrically conducting bodies, is free from spurious resonances. The standard marching-on-in-time technique for discretizing the TD-CFIE uses Galerkin and collocation schemes in space and time, respectively. Unfortunately, the standard scheme is theoretically not well understood: stability and convergence have been proven for only one class of space-time Galerkin discretizations. Moreover, existing discretization schemes are nonconforming, i.e., the TD-MFIE contribution is tested with divergence conforming functions instead of curl conforming functions. We therefore introduce a novel space-time mixed Galerkin discretization for the TD-CFIE. A family of temporal basis and testing functions with arbitrary order is introduced. It is explained how the corresponding interactions can be computed efficiently by existing collocation-in-time codes. The spatial mixed discretization is made fully conforming and consistent by leveraging both Rao-Wilton-Glisson and Buffa-Christiansen basis functions and by applying the appropriate bi-orthogonalization procedures. The combination of both techniques is essential when high accuracy over a broad frequency band is required. © 2012 IEEE.
Scalar one-loop vertex integrals as meromorphic functions of space-time dimension d
International Nuclear Information System (INIS)
Bluemlein, Johannes; Phan, Khiem Hong; Vietnam National Univ., Ho Chi Minh City; Riemann, Tord; Silesia Univ., Chorzow
2017-11-01
Representations are derived for the basic scalar one-loop vertex Feynman integrals as meromorphic functions of the space-time dimension d in terms of (generalized) hypergeometric functions 2F1 and F1. Values at asymptotic or exceptional kinematic points as well as expansions around the singular points at d = 4 + 2n, for non-negative integers n, may be derived from the representations easily. The Feynman integrals studied here may be used as building blocks for the calculation of one-loop and higher-loop scalar and tensor amplitudes. From the recursion relation presented, higher n-point functions may be obtained in a straightforward manner.
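The Gauss hypergeometric function 2F1 that these representations are built from is defined, for |z| < 1, by the series sum_n [(a)_n (b)_n / (c)_n] z^n / n!. The truncated-series evaluator below only illustrates the special function itself; it is not the vertex-integral representation of the paper.

```python
def hyp2f1(a, b, c, z, terms=200):
    """Truncated Gauss series for 2F1(a, b; c; z), valid for |z| < 1."""
    total, term = 1.0, 1.0
    for n in range(terms):
        # Ratio of consecutive series terms: (a+n)(b+n) z / ((c+n)(n+1)).
        term *= (a + n) * (b + n) / ((c + n) * (n + 1.0)) * z
        total += term
    return total

# Identity check: 2F1(1, 1; 2; z) = -ln(1 - z) / z.
value = hyp2f1(1.0, 1.0, 2.0, 0.5)
```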
Time-Varying Market Integration and Expected Returns in Emerging Markets
de Jong, Frank; de Roon, Frans
2001-01-01
We use a simple model in which the expected returns in emerging markets depend on their systematic risk as measured by their beta relative to the world portfolio as well as on the level of integration in that market. The level of integration is a time-varying variable that depends on the market value of the assets that can be held by domestic investors only versus the market value of the assets that can be traded freely. Our empirical analysis for 30 emerging markets shows that there are stro...
Park, K. C.; Belvin, W. Keith
1990-01-01
A general form for the first-order representation of the continuous second-order linear structural-dynamics equations is introduced to derive a corresponding form of first-order continuous Kalman filtering equations. Time integration of the resulting equations is carried out via a set of linear multistep integration formulas. It is shown that a judicious combined selection of computational paths and the undetermined matrices introduced in the general form of the first-order linear structural systems leads to a class of second-order discrete Kalman filtering equations involving only symmetric sparse N x N solution matrices.
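The starting point of the derivation, rewriting M x'' + D x' + K x = f as a first-order system z' = A z with z = [x; x'], can be written down directly. The Kalman predict/update step below is the textbook discrete form, not the paper's specialized symmetric N x N formulation; the one-DOF matrices and noise levels are assumed for illustration.

```python
import numpy as np

def first_order_form(M, D, K):
    """Companion form of M x'' + D x' + K x = f: z = [x; x'], z' = A z."""
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    return np.block([[np.zeros((n, n)), np.eye(n)],
                     [-Minv @ K, -Minv @ D]])

def kalman_step(x, P, A_d, H, Q, R, z):
    """One discrete Kalman predict/update cycle on the companion state."""
    x = A_d @ x                       # predict state
    P = A_d @ P @ A_d.T + Q           # predict covariance
    S = H @ P @ H.T + R               # innovation covariance
    Kg = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + Kg @ (z - H @ x)          # update with measurement z
    P = (np.eye(len(x)) - Kg @ H) @ P
    return x, P

# One-DOF example: unit mass, light damping, stiffness 4.
A = first_order_form(np.array([[1.0]]), np.array([[0.2]]), np.array([[4.0]]))
A_d = np.eye(2) + 0.01 * A            # crude Euler discretization
x, P = kalman_step(np.zeros(2), np.eye(2), A_d,
                   np.array([[1.0, 0.0]]), 0.01 * np.eye(2),
                   np.array([[0.1]]), np.array([1.0]))
```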
International Nuclear Information System (INIS)
Ponman, T.J.
1984-01-01
For some years now two different expressions have been in use for maximum entropy image restoration and there has been some controversy over which one is appropriate for a given problem. Here two further entropies are presented and it is argued that there is no single correct algorithm. The properties of the four different methods are compared using simple 1D simulations with a view to showing how they can be used together to gain as much information as possible about the original object. (orig.)
Moqeem, Aasia; Baig, Mirza; Gholamhosseini, Hamid; Mirza, Farhaan; Lindén, Maria
2018-01-01
This research involves the design and development of a novel Android smartphone application for real-time vital signs monitoring and decision support. The proposed application integrates market-available, wireless, Bluetooth-connected medical devices for collecting vital signs. The medical device data collected by the app include heart rate, oxygen saturation and electrocardiograph (ECG). The collated data are streamed and displayed on the smartphone in real time. The application was designed by adopting the six screens (6S) mobile development framework, following a user-centered approach that considers clinicians as the primary users. Clinical engagement, consultations, feedback and usability of the application in everyday practice were considered critical from the initial phase of the design and development. Furthermore, the proposed application is capable of delivering rich clinical decision support in real time using the integrated medical device data.
Note: Fully integrated 3.2 Gbps quantum random number generator with real-time extraction
International Nuclear Information System (INIS)
Zhang, Xiao-Guang; Nie, You-Qi; Liang, Hao; Zhang, Jun; Pan, Jian-Wei; Zhou, Hongyi; Ma, Xiongfeng
2016-01-01
We present a real-time and fully integrated quantum random number generator (QRNG) by measuring laser phase fluctuations. The QRNG scheme based on laser phase fluctuations is featured for its capability of generating ultra-high-speed random numbers. However, the speed bottleneck of a practical QRNG lies in the limited speed of randomness extraction. To close the gap between the fast randomness generation and the slow post-processing, we propose a pipeline extraction algorithm based on Toeplitz matrix hashing and implement it in a high-speed field-programmable gate array. Further, all the QRNG components are integrated into a module, including a compact and actively stabilized interferometer, high-speed data acquisition, and real-time data post-processing and transmission. The final generation rate of the QRNG module with real-time extraction can reach 3.2 Gbps.
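Toeplitz matrix hashing, the extraction core mentioned above, maps n raw bits to m nearly uniform bits using an m x n Toeplitz matrix defined by n+m-1 seed bits. The small pure-Python sketch below shows only the mapping; the paper implements it as a pipelined FPGA design.

```python
import numpy as np

def toeplitz_extract(raw_bits, seed_bits, m):
    """Randomness extraction: output = T @ raw (mod 2), where the
    m x n Toeplitz matrix T is built from n + m - 1 seed bits."""
    n = len(raw_bits)
    assert len(seed_bits) == n + m - 1
    T = np.empty((m, n), dtype=np.uint8)
    for i in range(m):
        for j in range(n):
            T[i, j] = seed_bits[i - j + n - 1]   # constant along diagonals
    return (T @ np.asarray(raw_bits, dtype=np.uint8)) % 2

out = toeplitz_extract([1, 1, 0], [1, 0, 1, 1], m=2)
```

Because each row of T is a one-position shift of its neighbor, hardware implementations compute the product with shift registers and XORs rather than storing the full matrix.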
A Scalable, Timing-Safe, Network-on-Chip Architecture with an Integrated Clock Distribution Method
DEFF Research Database (Denmark)
Bjerregaard, Tobias; Stensgaard, Mikkel Bystrup; Sparsø, Jens
2007-01-01
Growing system sizes together with increasing performance variability are making globally synchronous operation hard to realize. Mesochronous clocking constitutes a possible solution to the problems faced. The most fundamental of the problems faced when communicating between mesochronously clocked regions concerns the possibility of data corruption caused by metastability. This paper presents an integrated communication and mesochronous clocking strategy which avoids timing-related errors while maintaining a globally synchronous system perspective. The architecture is scalable, as timing integrity is based purely on local observations. It is demonstrated with a 90 nm CMOS standard cell network-on-chip design which implements completely timing-safe, global communication in a modular system.
Real-time long term measurement using integrated framework for ubiquitous smart monitoring
Heo, Gwanghee; Lee, Giu; Lee, Woosang; Jeon, Joonryong; Kim, Pil-Joong
2007-04-01
Ubiquitous monitoring, combining internet technologies and wireless communication, is one of the most promising technologies for infrastructure health monitoring against natural or man-made hazards. In this paper, an integrated framework for ubiquitous monitoring is developed for real-time, long-term measurement in an internet environment. The framework employs a wireless sensor system based on Bluetooth technology that sends measured acceleration data to the host computer through the TCP/IP protocol, and it is also designed to respond to web users' requests on a real-time basis. In order to verify this system, real-time monitoring tests were carried out on a prototype self-anchored suspension bridge. The wireless measurement system is also analyzed to estimate its sensing capacity and evaluate its performance for monitoring purposes. Based on the evaluation, this paper proposes effective strategies for the integrated framework in order to detect structural deficiencies and to design an early warning system.
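The sensor-to-host path described above (acceleration samples relayed over TCP/IP) can be sketched with length-prefixed JSON frames on a local socket pair. The wire format and field names here are hypothetical stand-ins; the paper's system uses Bluetooth on the sensor side and its own protocol.

```python
import json, socket, struct, threading

def recv_exact(sock, n):
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-frame")
        buf += chunk
    return buf

def send_samples(sock, samples):
    # Each frame: 4-byte big-endian length, then a JSON payload.
    for s in samples:
        payload = json.dumps(s).encode()
        sock.sendall(struct.pack("!I", len(payload)) + payload)
    sock.shutdown(socket.SHUT_WR)

def recv_samples(sock):
    out = []
    while True:
        header = sock.recv(4)
        if not header:
            return out
        if len(header) < 4:
            header += recv_exact(sock, 4 - len(header))
        (n,) = struct.unpack("!I", header)
        out.append(json.loads(recv_exact(sock, n)))

a, b = socket.socketpair()
samples = [{"t": round(i * 0.01, 2), "accel_g": 0.02 * i} for i in range(5)]
sender = threading.Thread(target=send_samples, args=(a, samples))
sender.start()
received = recv_samples(b)
sender.join()
a.close(); b.close()
print(len(received))
```

Length-prefixing avoids the classic TCP pitfall of assuming one `recv` returns one message.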
Directory of Open Access Journals (Sweden)
A. Becker
2003-01-01
Full Text Available In this paper a hybrid method combining the FDTD/FIT with a Time Domain Boundary-Integral Marching-on-in-Time Algorithm (TD-BIM) is presented. Inhomogeneous regions are modelled with the FIT method, an alternative formulation of the FDTD. Homogeneous regions (in the presented numerical example, the open space) are modelled using a TD-BIM with equivalent electric and magnetic currents flowing on the boundary between the inhomogeneous and the homogeneous regions. The regions are coupled by the tangential magnetic fields just outside the inhomogeneous regions. These fields are calculated by making use of a mixed-potential integral formulation for the magnetic field, which consists of equivalent electric and magnetic currents on the boundary plane between the homogeneous and the inhomogeneous region. The magnetic currents result directly from the electric fields of the Yee lattice. Electric currents in the same plane are calculated by making use of the TD-BIM, using the electric field of the Yee lattice as boundary condition. The presented hybrid method only needs the interpolations inherent in FIT and no additional interpolation. A numerical result is compared to a calculation that models both regions with FDTD.
Energy Technology Data Exchange (ETDEWEB)
Varas, M. I.; Orteu, E.; Laserna, J. A.
2014-07-01
This paper describes the process followed in preparing the flooding manual of Cofrentes NPP: to identify the maximum time available to the plant to isolate a moderate- or high-energy pipe break before it affects safety-related (1E) equipment involved in the safe shutdown of the reactor or in spent fuel pool cooling, and to determine the recommended isolation mode in view of the location of the break, the location of the 1E equipment, and human factors. (Author)
Molecular radiotherapy: The NUKFIT software for calculating the time-integrated activity coefficient
Energy Technology Data Exchange (ETDEWEB)
Kletting, P.; Schimmel, S.; Luster, M. [Klinik für Nuklearmedizin, Universität Ulm, Ulm 89081 (Germany); Kestler, H. A. [Research Group Bioinformatics and Systems Biology, Institut für Neuroinformatik, Universität Ulm, Ulm 89081 (Germany); Hänscheid, H.; Fernández, M.; Lassmann, M. [Klinik für Nuklearmedizin, Universität Würzburg, Würzburg 97080 (Germany); Bröer, J. H.; Nosske, D. [Bundesamt für Strahlenschutz, Fachbereich Strahlenschutz und Gesundheit, Oberschleißheim 85764 (Germany); Glatting, G. [Medical Radiation Physics/Radiation Protection, Medical Faculty Mannheim, Heidelberg University, Mannheim 68167 (Germany)
2013-10-15
Purpose: Calculation of the time-integrated activity coefficient (residence time) is a crucial step in dosimetry for molecular radiotherapy. However, available software is deficient in that it is either not tailored for use in molecular radiotherapy and/or does not include all required estimation methods. The aim of this work was therefore the development and programming of an algorithm which allows for an objective and reproducible determination of the time-integrated activity coefficient and its standard error. Methods: The algorithm includes the selection of a set of fitting functions from predefined sums of exponentials and the choice of an error model for the data used. To estimate the values of the adjustable parameters, an objective function, depending on the data, the parameters of the error model, the fitting function and (if required and available) Bayesian information, is minimized. To increase reproducibility and user-friendliness, the starting values are automatically determined using a combination of curve stripping and random search. Visual inspection, the coefficient of determination, the standard error of the fitted parameters, and the correlation matrix are provided to evaluate the quality of the fit. The functions which are most supported by the data are determined using the corrected Akaike information criterion. The time-integrated activity coefficient is estimated by analytically integrating the fitted functions. Its standard error is determined assuming Gaussian error propagation. The software was implemented using MATLAB. Results: To validate the proper implementation of the objective function and the fit functions, the results of NUKFIT and the commercially available software tool SAAM numerical were compared. The automatic search for starting values was successfully tested for reproducibility. The quality criteria applied in conjunction with the Akaike information criterion allowed the selection of suitable functions. Function fit
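The core pipeline above (fit a sum of exponentials, integrate it analytically, propagate the fit errors) can be illustrated for the simplest case, a mono-exponential A(t) = a*exp(-b*t) whose time integral is a/b. The sampling times, parameter values and noise level below are hypothetical; NUKFIT itself selects among several exponential sums and error models.

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, a, b):
    # A(t) = a * exp(-b t); its analytic integral over [0, inf) is a / b.
    return a * np.exp(-b * t)

t = np.array([1.0, 4.0, 24.0, 48.0, 96.0])   # measurement times in hours (assumed)
a_true, b_true = 0.4, 0.03                   # fraction of injected activity, 1/h (assumed)
rng = np.random.default_rng(1)
y = mono_exp(t, a_true, b_true) * (1 + 0.02 * rng.standard_normal(t.size))

popt, pcov = curve_fit(mono_exp, t, y, p0=(0.5, 0.05))
a_hat, b_hat = popt
tia = a_hat / b_hat                          # time-integrated activity coefficient (h)
# Gaussian error propagation: var(tia) = g^T C g with g = (d/da, d/db)(a/b)
g = np.array([1.0 / b_hat, -a_hat / b_hat**2])
tia_se = float(np.sqrt(g @ pcov @ g))
print(tia, tia_se)
```

The error propagation step mirrors the paper's "standard error assuming Gaussian error propagation"; model selection via the corrected Akaike criterion would compare this fit against richer exponential sums.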
Al Jarro, Ahmed; Salem, Mohamed; Bagci, Hakan; Benson, Trevor; Sewell, Phillip D.; Vuković, Ana
2012-01-01
An explicit marching-on-in-time (MOT) scheme for solving the time domain volume integral equation is presented. The proposed method achieves its stability by employing, at each time step, a corrector scheme, which updates/corrects fields computed by the explicit predictor scheme. The proposed method is computationally more efficient when compared to the existing filtering techniques used for the stabilization of explicit MOT schemes. Numerical results presented in this paper demonstrate that the proposed method maintains its stability even when applied to the analysis of electromagnetic wave interactions with electrically large structures meshed using approximately half a million discretization elements.
Long-time integration methods for mesoscopic models of pattern-forming systems
International Nuclear Information System (INIS)
Abukhdeir, Nasser Mohieddin; Vlachos, Dionisios G.; Katsoulakis, Markos; Plexousakis, Michael
2011-01-01
Spectral methods for simulation of a mesoscopic diffusion model of surface pattern formation are evaluated for long simulation times. Backwards-differencing time-integration, coupled with an underlying Newton-Krylov nonlinear solver (SUNDIALS-CVODE), is found to substantially accelerate simulations, without the typical requirement of preconditioning. Quasi-equilibrium simulations of patterned phases predicted by the model are shown to agree well with linear stability analysis. Simulation results of the effect of repulsive particle-particle interactions on pattern relaxation time and short/long-range order are discussed.
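The payoff of backwards-differencing (BDF) integration on stiff problems, as used above via SUNDIALS-CVODE, can be seen with SciPy's BDF integrator on a small stiff linear system (the system and tolerances are illustrative, not the paper's surface-pattern model): an explicit Runge-Kutta method is forced into many tiny, stability-limited steps while BDF is not.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    # Stiff linear system with eigenvalues -1 and -1001.
    return [-1000.0 * y[0] + 999.0 * y[1], y[0] - 2.0 * y[1]]

sol_bdf = solve_ivp(rhs, (0.0, 50.0), [1.0, 0.0], method="BDF",
                    rtol=1e-6, atol=1e-9)
sol_rk = solve_ivp(rhs, (0.0, 50.0), [1.0, 0.0], method="RK45",
                   rtol=1e-6, atol=1e-9)
print(sol_bdf.t.size, sol_rk.t.size)  # BDF takes far fewer steps
```

SciPy's BDF, like CVODE, solves the implicit step with a Newton-type iteration; the paper's point is that a Krylov linear solver makes this affordable without preconditioning for their mesoscopic model.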
Mixed-integrator-based bi-quad cell for designing a continuous time filter
International Nuclear Information System (INIS)
Chen Yong; Zhou Yumei
2010-01-01
A new mixed-integrator-based bi-quad cell is proposed. It offers an alternative mechanism for synthesizing complex poles, compared with source-follower-based bi-quad cells, which are designed using the positive-feedback technique. By applying negative feedback to combine different integrators, the proposed bi-quad cell synthesizes the complex poles needed for a continuous-time filter. It exhibits several advantages, including a compact topology, high gain, no parasitic pole, no CMFB circuit, and high capability. A fourth-order Butterworth lowpass filter using the proposed cells has been fabricated in 0.18 μm CMOS technology. The active area occupied by the filter with test buffer is only 200 × 170 μm². The proposed filter consumes a low power of 201 μW and achieves a 68.5 dB dynamic range. (semiconductor integrated circuits)
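The target response of the fabricated circuit, a fourth-order Butterworth low-pass, is easy to check numerically. The sketch below designs a digital prototype with SciPy and verifies the defining Butterworth property, -3 dB gain at the cutoff; the sample rate and cutoff frequency are arbitrary example values, not the chip's specifications.

```python
import numpy as np
from scipy import signal

fs = 10e6   # sample rate for the digital prototype, Hz (assumed)
fc = 1e6    # -3 dB cutoff, Hz (assumed)
b, a = signal.butter(4, fc, btype="low", fs=fs)  # 4th-order Butterworth

w, h = signal.freqz(b, a, worN=8192, fs=fs)
gain_db = 20 * np.log10(np.abs(h))
i_fc = np.argmin(np.abs(w - fc))   # frequency bin nearest the cutoff
print(gain_db[0], gain_db[i_fc])   # ~0 dB at DC, ~-3 dB at cutoff
```

A maximally flat passband and monotonic -80 dB/decade roll-off are what the fourth-order Butterworth choice buys in exchange for a gentler transition band than, say, an elliptic design.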
Integrated simulations for fusion research in the 2030's time frame (white paper outline)
Energy Technology Data Exchange (ETDEWEB)
Friedman, Alex [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); LoDestro, Lynda L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Parker, Jeffrey B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Xu, Xueqiao Q. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2017-11-02
This white paper presents the rationale for developing a community-wide capability for whole-device modeling, and advocates for an effort with the expectation of persistence: a long-term programmatic commitment, and support for community efforts. Statement of 2030 goal (two suggestions): (a) Robust integrated simulation tools to aid real-time experimental discharges and reactor designs by employing a hierarchy in fidelity of physics models. (b) To produce by the early 2030s a capability for validated, predictive simulation via integration of a suite of physics models from moderate through high fidelity, to understand and plan full plasma discharges, aid in data interpretation, carry out discovery science, and optimize future machine designs. We can achieve this goal via a focused effort to extend current scientific capabilities and rigorously integrate simulations of disparate physics into a comprehensive set of workflows.
Lou, Kuo-Ren; Wang, Lu
2016-05-01
The seller frequently offers the buyer trade credit to settle the purchase amount. From the seller's perspective, granting trade credit increases not only the opportunity cost (i.e., the interest loss on the buyer's purchase amount during the credit period) but also the default risk (i.e., the rate at which the buyer will be unable to pay off his/her debt obligations). On the other hand, granting trade credit increases sales volume and revenue. Consequently, trade credit is an important strategy for increasing the seller's profitability. In this paper, we assume that the seller uses trade credit and the number of shipments in a production run as decision variables to maximise his/her profit, while the buyer determines his/her replenishment cycle time and capital investment as decision variables to reduce his/her ordering cost and achieve his/her maximum profit. We then derive the non-cooperative Nash solution and the cooperative integrated solution in a just-in-time inventory system, in which granting trade credit increases not only the demand but also the opportunity cost and default risk, and the relationship between the capital investment and the ordering cost reduction is logarithmic. We then solve and compare these two distinct solutions using software. Finally, we use sensitivity analysis to obtain some managerial insights.
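The logarithmic investment/ordering-cost relationship mentioned above is commonly modelled as a per-order cost that decays exponentially in the invested capital, A(I) = A0*exp(-I/delta), so that the investment needed to reach a given cost is logarithmic in the reduction. A single-variable sketch of the buyer's trade-off (all parameter values hypothetical, and only one of the paper's several decision variables) is:

```python
import numpy as np
from scipy.optimize import minimize_scalar

A0, delta, i_rate = 200.0, 50.0, 0.1   # base order cost, decay scale, capital cost (assumed)
D_over_Q = 12.0                        # orders placed per year (assumed)

def annual_cost(I):
    # Ordering cost reduced by investment I, plus the cost of capital on I.
    return D_over_Q * A0 * np.exp(-I / delta) + i_rate * I

res = minimize_scalar(annual_cost, bounds=(0.0, 2000.0), method="bounded")
I_star = res.x
# Setting dC/dI = 0 gives the closed form I* = delta * ln(D/Q * A0 / (i * delta))
I_closed = delta * np.log(D_over_Q * A0 / (i_rate * delta))
print(I_star, I_closed)
```

The full model couples this investment with the seller's credit period and shipment count, which is what separates the Nash solution from the cooperative one.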
Institute of Scientific and Technical Information of China (English)
Esmaeil Ghaderi; Hossein Tohidi; Behnam Khosrozadeh
2017-01-01
The present study was carried out in order to track the maximum power point in a variable-speed turbine by minimizing electromechanical torque changes using a sliding mode control strategy. In this strategy, first, the rotor speed is set at an optimal point for different wind speeds. As a result, the tip speed ratio reaches an optimal point, the mechanical power coefficient is maximized, and the wind turbine produces its maximum power and mechanical torque. Then, the maximum mechanical torque is tracked using electromechanical torque. In this technique, the tracking error integral of maximum mechanical torque, the error, and the derivative of the error are used as state variables. During changes in wind speed, the sliding mode control is designed to absorb the maximum energy from the wind and minimize the response time of maximum power point tracking (MPPT). In this method, the actual control input signal is formed from a second-order integral operation of the original sliding mode control input signal. The benefits of the second-order integral in this model include control signal integrity, full chattering attenuation, and prevention of large fluctuations in the power generator output. The simulation results, obtained using MATLAB/m-file software, have shown the effectiveness of the proposed control strategy for wind energy systems based on the permanent magnet synchronous generator (PMSG).
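The "optimal tip speed ratio" the controller regulates to can be computed from a power-coefficient curve. The sketch below uses a widely cited empirical Cp(lambda, beta) model (not necessarily the one used in this study) at zero pitch and finds the lambda that maximizes Cp; the MPPT reference rotor speed then scales linearly with wind speed.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def cp(lam, beta=0.0):
    # Common empirical power-coefficient curve (Heier-type; assumed model).
    inv_li = 1.0 / (lam + 0.08 * beta) - 0.035 / (beta**3 + 1.0)
    return (0.5176 * (116.0 * inv_li - 0.4 * beta - 5.0) * np.exp(-21.0 * inv_li)
            + 0.0068 * lam)

res = minimize_scalar(lambda l: -cp(l), bounds=(2.0, 14.0), method="bounded")
lam_opt, cp_max = res.x, cp(res.x)
# MPPT then commands omega_ref = lam_opt * v_wind / R for each wind speed.
print(lam_opt, cp_max)
```

For this particular curve the optimum sits near lambda = 8.1 with Cp around 0.48, which is why fixed-tip-speed-ratio tracking is a standard MPPT baseline.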
A time use survey derived integrative human-physical household system energy performance model
Energy Technology Data Exchange (ETDEWEB)
Chiou, Y.S. [Carnegie Mellon Univ., Pittsburgh, PA (United States). School of Architecture
2009-07-01
This paper reported on a virtual experiment that extrapolated the stochastic yet patterned behaviour of the integrative model of a 4-bedroom house in Chicago with 4 different household compositions. The integrative household system theory considers the household as a combination of 2 sub-systems, notably the physical system and the human system. The physical system is the materials and devices of a dwelling, and the human system is the occupants that live within the dwelling. A third element is the environment that influences the operation of the 2 sub-systems. The human-physical integrative household energy model provided a platform to simulate the effect of sub-house energy conservation measures. The virtual experiment showed that the use of the bootstrap sampling approach on American Time Use Survey (ATUS) data to determine the occupants' stochastic energy consumption behaviour has resulted in a robust complex system model. Bell-shaped distributions were presented for annual appliance, heating and cooling load demands. The virtual experiment also pointed to the development of advanced multi-zone residential HVAC systems as a suitable strategy for major residential energy efficiency improvement. The load profiles generated from the integrative model simulation were found to be in good agreement with those from field studies. It was concluded that the behaviour of the integrative model is a good representation of the energy consumption behaviour of real households. 10 refs., 4 tabs., 12 figs.
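The bootstrap sampling step above resamples survey records with replacement to turn a finite time-use sample into a distribution of outcomes. A minimal sketch, with synthetic data standing in for ATUS activity minutes (the gamma shape is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic "survey": per-respondent daily appliance-use minutes (assumed distribution).
survey_minutes = rng.gamma(shape=2.0, scale=30.0, size=500)

n_boot = 2000
means = np.array([
    rng.choice(survey_minutes, size=survey_minutes.size, replace=True).mean()
    for _ in range(n_boot)
])
lo, hi = np.percentile(means, [2.5, 97.5])   # 95% bootstrap interval for the mean
print(survey_minutes.mean(), (lo, hi))
```

Feeding each bootstrap draw through the household model is what produces the bell-shaped annual load distributions the paper reports.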
Directory of Open Access Journals (Sweden)
F. Topsøe
2001-09-01
Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions, and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also here serve as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
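For the Mean Energy Model mentioned above, the maximum-entropy distribution on a finite alphabet with a prescribed mean energy U is the Gibbs distribution p_i proportional to exp(-beta*E_i), with beta chosen so the constraint holds. A small numerical sketch (energy levels and U are example values):

```python
import numpy as np
from scipy.optimize import brentq

E = np.array([0.0, 1.0, 2.0, 3.0])   # energy levels (example values)
U = 1.2                               # prescribed mean energy

def mean_energy(beta):
    # Mean energy under the Gibbs distribution p_i ~ exp(-beta * E_i).
    w = np.exp(-beta * E)
    return (E * w).sum() / w.sum()

# mean_energy is monotone in beta, so a bracketing root-finder suffices.
beta = brentq(lambda b: mean_energy(b) - U, -50.0, 50.0)
p = np.exp(-beta * E)
p /= p.sum()
entropy = -(p * np.log(p)).sum()
print(beta, p, entropy)
```

In the Code Length Game reading, this same p is the distribution against which the coder's minimax code length is achieved.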
Energy Technology Data Exchange (ETDEWEB)
2007-07-01
In Norway Integrated Operations (IO) is a concept which in the first phase (G1) has been used to describe how to integrate processes and people onshore and offshore using ICT solutions and facilities that improve onshore's ability to support offshore operationally. The second generation (G2) Integrated Operations aims to help operators utilize vendors' core competencies and services more efficiently. Utilizing digital services and vendor products, operators will be able to update reservoir models, drilling targets and well trajectories as wells are drilled, manage well completions remotely, optimize production from reservoir to export lines, and implement condition-based maintenance concepts. The total impact on production, recovery rates, costs and safety will be profound. When the international petroleum business moves to the Arctic region the setting is very different from what is the case on the Norwegian Continental Shelf (NCS) and new challenges will arise. The Norwegian Ministry of Environment has recently issued an Integrated Management Plan for the Barents Sea where one focus is on 'Monitoring of the Marine Environment in the North'. The Government aims to establish a new and more coordinated system for monitoring the marine ecosystems in the north. A representative group consisting of the major Operators, the Service Industry, Academia and the Authorities have developed the enclosed strategy for the OG21 Integrated Operations and Real Time Reservoir Management (IO and RTRM) Technology Target Area (TTA). Major technology and work process research and development gaps have been identified in several areas: Bandwidth down-hole to surface; Sensor development including Nano-technology; Cross discipline use of Visualisation, Simulation and model development particularly in Drilling and Reservoir management areas; Software development in terms of data handling, model updating and calculation speed; Enabling reliable and robust communications particularly for
Probable maximum flood control
International Nuclear Information System (INIS)
DeGabriele, C.E.; Wu, C.L.
1991-11-01
This study proposes preliminary design concepts to protect the waste-handling facilities and all shaft and ramp entries to the underground from the probable maximum flood (PMF) in the current design configuration for the proposed Nevada Nuclear Waste Storage Investigation (NNWSI) repository. Protection provisions were furnished by the United States Bureau of Reclamation (USBR) or developed from USBR data. Proposed flood protection provisions include site grading, drainage channels, and diversion dikes. Figures are provided to show these proposed flood protection provisions at each area investigated. These areas are the central surface facilities (including the waste-handling building and waste treatment building), tuff ramp portal, waste ramp portal, men-and-materials shaft, emplacement exhaust shaft, and exploratory shafts facility
Introduction to maximum entropy
International Nuclear Information System (INIS)
Sivia, D.S.
1988-01-01
The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. We review the need for such methods in data analysis and show, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. We conclude with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab
International Nuclear Information System (INIS)
Rust, D.M.
1984-01-01
The successful retrieval and repair of the Solar Maximum Mission (SMM) satellite by Shuttle astronauts in April 1984 permitted continuance of solar flare observations that began in 1980. The SMM carries a soft X ray polychromator, gamma ray, UV and hard X ray imaging spectrometers, a coronagraph/polarimeter and particle counters. The data gathered thus far indicated that electrical potentials of 25 MeV develop in flares within 2 sec of onset. X ray data show that flares are composed of compressed magnetic loops that have come too close together. Other data have been taken on mass ejection, impacts of electron beams and conduction fronts with the chromosphere and changes in the solar radiant flux due to sunspots. 13 references
Introduction to maximum entropy
International Nuclear Information System (INIS)
Sivia, D.S.
1989-01-01
The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. The author reviews the need for such methods in data analysis and shows, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. He concludes with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab
Functional Maximum Autocorrelation Factors
DEFF Research Database (Denmark)
Larsen, Rasmus; Nielsen, Allan Aasbjerg
2005-01-01
Purpose. We aim at data where samples of an underlying function are observed in a spatial or temporal layout. Examples of underlying functions are reflectance spectra and biological shapes. We apply functional models based on smoothing splines and generalize the functional PCA of Ramsay (1997) to functional maximum autocorrelation factors (MAF; Switzer 1985, Larsen 2001). We apply the method to biological shapes as well as reflectance spectra. Methods. MAF seeks linear combinations of the original variables that maximize autocorrelation between... MAF outperforms the functional PCA in concentrating the 'interesting' spectra/shape variation in one end of the eigenvalue spectrum and allows for easier interpretation of effects. Conclusions. Functional MAF analysis is a useful method for extracting low-dimensional models of temporally or spatially...
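The discrete core of MAF can be sketched directly (this omits the authors' smoothing-spline functional setting): with S the covariance of the data and S_d the covariance of one-step differences, MAF solves the generalized eigenproblem S_d w = rho * S w, and the eigenvectors with the smallest eigenvalues give the most autocorrelated (smoothest) factors.

```python
import numpy as np
from scipy.linalg import eigh

def maf(X):
    """Maximum autocorrelation factors for temporally ordered rows of X."""
    Xc = X - X.mean(axis=0)
    S = np.cov(Xc, rowvar=False)          # overall covariance
    Sd = np.cov(np.diff(Xc, axis=0), rowvar=False)  # difference covariance
    vals, vecs = eigh(Sd, S)              # ascending generalized eigenvalues
    return vals, vecs                     # first column = smoothest factor

# Smooth signal plus white noise: MAF should isolate the smooth component.
rng = np.random.default_rng(3)
t = np.linspace(0, 6 * np.pi, 400)
smooth = np.sin(t)
X = np.column_stack([
    smooth + 0.1 * rng.standard_normal(t.size),
    smooth + 0.1 * rng.standard_normal(t.size),
    rng.standard_normal(t.size),          # pure-noise channel
])
vals, vecs = maf(X)
factor1 = (X - X.mean(axis=0)) @ vecs[:, 0]
corr = abs(np.corrcoef(factor1, smooth)[0, 1])
print(corr)
```

Unlike PCA, which here could rank the high-variance noise channel first, MAF orders components by smoothness, which is the point the abstract makes about concentrating the "interesting" variation.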
Regularized maximum correntropy machine
Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin
2015-01-01
In this paper we investigate the usage of the regularized correntropy framework for learning classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to the noisy and outlying labels of training samples, because the traditional loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criterion (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with traditional loss functions.
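The robustness mechanism behind MCC is that the Gaussian kernel on the residual automatically down-weights outlying samples. A toy sketch for a linear predictor (plain gradient ascent on correntropy minus an L2 penalty, a regression stand-in for the paper's classification setting; all hyperparameters are assumptions):

```python
import numpy as np

def mcc_fit(X, y, sigma=1.0, lam=1e-3, lr=0.3, iters=2000):
    """Gradient ascent on mean Gaussian correntropy minus an L2 penalty."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        r = y - X @ w
        weight = np.exp(-r**2 / (2 * sigma**2))   # outliers get ~zero weight
        grad = (weight * r) @ X / (sigma**2 * len(y)) - 2 * lam * w
        w += lr * grad
    return w

rng = np.random.default_rng(7)
X = np.column_stack([np.ones(200), rng.uniform(-1, 1, 200)])
y = X @ np.array([1.0, 2.0]) + 0.05 * rng.standard_normal(200)
y[:10] += 8.0                      # gross label noise on 5% of samples
w_mcc = mcc_fit(X, y)              # correntropy fit
w_ls = np.linalg.lstsq(X, y, rcond=None)[0]   # squared-loss fit, for contrast
print(w_mcc, w_ls)
```

Because a squared loss weights every residual equally, the least-squares fit is dragged toward the corrupted labels, while the correntropy fit stays close to the clean trend.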
A Time Marching Scheme for Solving Volume Integral Equations on Nonlinear Scatterers
Bagci, Hakan
2015-01-01
Transient electromagnetic field interactions on inhomogeneous penetrable scatterers can be analyzed by solving time domain volume integral equations (TDVIEs). TDVIEs are oftentimes solved using marching-on-in-time (MOT) schemes. Unlike finite difference and finite element schemes, MOT-TDVIE solvers require discretization of only the scatterers, do not call for artificial absorbing boundary conditions, and are more robust to numerical phase dispersion. On the other hand, their computational cost is high, they suffer from late-time instabilities, and their implicit nature makes incorporation of nonlinear constitutive relations more difficult. Development of plane-wave time-domain (PWTD) and FFT-based schemes has significantly reduced the computational cost of the MOT-TDVIE solvers. Additionally, the late-time instability problem has been alleviated for all practical purposes with the development of accurate integration schemes and specially designed temporal basis functions. Addressing the third challenge is the topic of this presentation. I will talk about an explicit MOT scheme developed for solving the TDVIE on scatterers with nonlinear material properties. The proposed scheme separately discretizes the TDVIE and the nonlinear constitutive relation between electric field intensity and flux density. The unknown field intensity and flux density are expanded using half and full Schaubert-Wilton-Glisson (SWG) basis functions in space and polynomial temporal interpolators in time. The resulting coupled system of the discretized TDVIE and constitutive relation is integrated in time using an explicit PE(CE)^m scheme to yield the unknown expansion coefficients. Explicitness of time marching allows for straightforward incorporation of the nonlinearity as a function evaluation on the right hand side of the coupled system of equations. Consequently, the resulting MOT scheme does not call for a Newton-like nonlinear solver. Numerical examples, which demonstrate the applicability
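The PE(CE)^m pattern named above, Predict once, then repeat Evaluate-Correct m times, is a generic explicit time-marching template. A minimal sketch on a scalar ODE (not the TDVIE itself): predict with 2-step Adams-Bashforth, then apply m fixed-point passes of the trapezoidal corrector.

```python
import numpy as np

def pece_m(f, y0, t0, t1, n, m=2):
    """PE(CE)^m marching for y' = f(t, y): AB2 predictor, trapezoidal corrector."""
    h = (t1 - t0) / n
    t = t0 + h * np.arange(n + 1)
    y = np.empty(n + 1)
    y[0] = y0
    y[1] = y0 + h * f(t[0], y0)                     # one Euler start-up step
    for k in range(1, n):
        fk, fkm1 = f(t[k], y[k]), f(t[k - 1], y[k - 1])
        yp = y[k] + h * (1.5 * fk - 0.5 * fkm1)     # P: Adams-Bashforth-2
        for _ in range(m):                          # (E C)^m passes
            yp = y[k] + 0.5 * h * (fk + f(t[k + 1], yp))  # C: trapezoidal rule
        y[k + 1] = yp
    return t, y

t, y = pece_m(lambda t, y: -y, 1.0, 0.0, 2.0, 200)
err = abs(y[-1] - np.exp(-2.0))
print(err)
```

Each corrector pass is just a function evaluation, which is exactly why, in the MOT scheme, a nonlinear constitutive relation can be absorbed into the right-hand side without a Newton solver.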
A Time Marching Scheme for Solving Volume Integral Equations on Nonlinear Scatterers
Bagci, Hakan
2015-01-07
Transient electromagnetic field interactions on inhomogeneous penetrable scatterers can be analyzed by solving time domain volume integral equations (TDVIEs). TDVIEs are oftentimes solved using marching-on-in-time (MOT) schemes. Unlike finite difference and finite element schemes, MOT-TDVIE solvers require discretization of only the scatterers, do not call for artificial absorbing boundary conditions, and are more robust to numerical phase dispersion. On the other hand, their computational cost is high, they suffer from late-time instabilities, and their implicit nature makes incorporation of nonlinear constitutive relations more difficult. Development of plane-wave time-domain (PWTD) and FFT-based schemes has significantly reduced the computational cost of MOT-TDVIE solvers. Additionally, the late-time instability problem has been alleviated for all practical purposes with the development of accurate integration schemes and specially designed temporal basis functions. Addressing the third challenge is the topic of this presentation. I will talk about an explicit MOT scheme developed for solving the TDVIE on scatterers with nonlinear material properties. The proposed scheme separately discretizes the TDVIE and the nonlinear constitutive relation between the electric field intensity and the flux density. The unknown field intensity and flux density are expanded using half and full Schaubert-Wilton-Glisson (SWG) basis functions in space and polynomial temporal interpolators in time. The resulting coupled system of the discretized TDVIE and constitutive relation is integrated in time using an explicit PE(CE)^m scheme to yield the unknown expansion coefficients. Explicitness of time marching allows for straightforward incorporation of the nonlinearity as a function evaluation on the right-hand side of the coupled system of equations. Consequently, the resulting MOT scheme does not call for a Newton-like nonlinear solver. Numerical examples, which demonstrate the applicability
Retarded potentials and time domain boundary integral equations a road map
Sayas, Francisco-Javier
2016-01-01
This book offers a thorough and self-contained exposition of the mathematics of time-domain boundary integral equations associated with the wave equation, including applications to scattering of acoustic and elastic waves. It presents two different approaches for the analysis of these integral equations, including a systematic treatment of their numerical discretization using Galerkin (Boundary Element) methods in the space variables and Convolution Quadrature in the time variable. The first approach follows classical work started in the late eighties, based on Laplace transform estimates. This approach has been refined and made more accessible by tailoring the necessary mathematical tools, avoiding an excess of generality. A second approach contains a novel point of view that the author and some of his collaborators have been developing in recent years, using the semigroup theory of evolution equations to obtain improved results. The extension to electromagnetic waves is explained in one of the appendices...
Ulku, Huseyin Arda
2014-07-06
Effects of material nonlinearities on electromagnetic field interactions become dominant as field amplitudes increase. A typical example is observed in plasmonics, where highly localized fields "activate" Kerr nonlinearities. Naturally, time domain solvers are the method of choice when it comes to simulating these nonlinear effects. Oftentimes, the finite difference time domain (FDTD) method is used for this purpose. This is simply due to the fact that the explicitness of FDTD renders the implementation easier and the material nonlinearity can be easily accounted for using an auxiliary differential equation (J.H. Green and A. Taflove, Opt. Express, 14(18), 8305-8310, 2006). On the other hand, explicit marching on-in-time (MOT)-based time domain integral equation (TDIE) solvers have never been used for the same purpose even though they offer several advantages over FDTD (E. Michielssen, et al., ECCOMAS CFD, The Netherlands, Sep. 5-8, 2006). This is because explicit MOT solvers could not be stabilized until recently. An explicit but stable MOT scheme has been proposed for solving the time domain surface magnetic field integral equation (H.A. Ulku, et al., IEEE Trans. Antennas Propag., 61(8), 4120-4131, 2013) and was later extended to the time domain volume electric field integral equation (TDVEFIE) (S. B. Sayed, et al., Pr. Electromagn. Res. S., 378, Stockholm, 2013). This explicit MOT scheme uses predictor-corrector updates together with successive over-relaxation during time marching to stabilize the solution even when the time step is as large as that of the implicit counterpart. In this work, an explicit MOT-TDVEFIE solver is proposed for analyzing electromagnetic wave interactions on scatterers exhibiting Kerr nonlinearity. Nonlinearity is accounted for using the constitutive relation between the electric field intensity and flux density. Then, this relation and the TDVEFIE are discretized together by expanding the intensity and flux density using half
Optimal Energy Management for the Integrated Power and Gas Systems via Real-time Pricing
DEFF Research Database (Denmark)
Shu, KangAn; Ai, Xiaomeng; Wen, Jinyu
2018-01-01
This work proposes a bi-level formulation for energy management in the integrated power and natural gas system via real-time price signals. The upper-level problem minimizes the operational cost, in which a dynamic electricity price and a dynamic gas tariff are proposed. The lower-level problem...... and P2G plants follow the system operator's preferences such as wind power accommodation, mitigation of unsupplied load and relieving of network congestion....
Doubling time measurement by method of digital integration of pulses from a CFU7 detector
International Nuclear Information System (INIS)
Gauthier, Guy
1968-01-01
The author reports an experimental study which aimed at measuring the doubling time with a CFU7 fission chamber by using an integration method on the Siloette pile, with both a new core and a spent core, and at comparing the results with those obtained with a specific instrument which receives its information from an ionisation chamber. The interest of this method lies in the fact that the fission chamber is insensitive to gamma radiation
Czech Academy of Sciences Publication Activity Database
Fiala, Zdeněk
2015-01-01
Roč. 226, č. 1 (2015), s. 17-35 ISSN 0001-5970 R&D Projects: GA ČR(CZ) GA103/09/2101 Institutional support: RVO:68378297 Keywords : solid mechanics * finite deformations * evolution equation of Lie-type * time-discrete integration Subject RIV: BA - General Mathematics OBOR OECD: Statistics and probability Impact factor: 1.694, year: 2015 http://link.springer.com/article/10.1007%2Fs00707-014-1162-9#page-1
Wang, Chengbin; Ma, Xiaogang; Chen, Jianguo
2018-06-01
Initiatives of open data promote the online publication and sharing of large amounts of geologic data. How to retrieve information and discover knowledge from the big data is an ongoing challenge. In this paper, we developed an ontology-driven data integration and visualization pilot system for exploring information of regional geologic time, paleontology, and fundamental geology. The pilot system (http://www2.cs.uidaho.edu/%7Emax/gts/)
International Nuclear Information System (INIS)
Haseli, Y.; Oijen, J.A. van; Goey, L.P.H. de
2012-01-01
Highlights: ► A simple model for prediction of the ignition time of a wood particle is presented. ► The formulation is given for both thermally thin and thermally thick particles. ► Transition from thermally thin to thick regime occurs at a critical particle size. ► The model is validated against a numerical model and various experimental data. - Abstract: The main idea of this paper is to establish a simple approach for prediction of the ignition time of a wood particle assuming that the thermo-physical properties remain constant and ignition takes place at a characteristic ignition temperature. Using a time and space integral method, explicit relationships are derived for computation of the ignition time of particles of three common shapes (slab, cylinder and sphere), which may be characterized as thermally thin or thermally thick. It is shown through a dimensionless analysis that the dimensionless ignition time can be described as a function of non-dimensional ignition temperature, reactor temperature or external incident heat flux, and parameter K which represents the ratio of conduction heat transfer to the external radiation heat transfer. The numerical results reveal that for the dimensionless ignition temperature between 1.25 and 2.25 and for values of K up to 8000 (corresponding to woody materials), the variation of the ignition time of a thermally thin particle with K and the dimensionless ignition temperature is linear, whereas the dependence of the ignition time of a thermally thick particle on the above two parameters obeys a quadratic function. Furthermore, it is shown that the transition from the regime of thermally thin to the regime of thermally thick occurs at K cr (corresponding to a critical size of particle) which is found to be independent of the particle shape. The model is validated by comparing the predicted and the measured ignition time of several wood particles obtained from different sources. Good agreement is achieved which
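The thermally thin limit described in the abstract admits a closed-form estimate. The sketch below computes a lumped-capacitance ignition time for a convectively heated particle reaching a characteristic ignition temperature; the purely convective heating model, the function name, and all property values are illustrative assumptions, not the paper's exact formulation (which also includes external radiation through the parameter K).

```python
import math

def ignition_time_thin(rho, c, V_over_A, h, T0, T_reactor, T_ig):
    """Lumped-capacitance ignition time for a thermally thin particle
    heated convectively: T(t) = T_r + (T0 - T_r) * exp(-t / tau), with
    tau = rho * c * (V/A) / h. Solve T(t_ig) = T_ig for t_ig.
    Illustrative sketch only (no external radiation term)."""
    tau = rho * c * V_over_A / h  # thermal time constant [s]
    return tau * math.log((T0 - T_reactor) / (T_ig - T_reactor))

# assumed, order-of-magnitude wood properties and reactor conditions
t_ig = ignition_time_thin(rho=500.0, c=1500.0, V_over_A=1e-3,
                          h=50.0, T0=300.0, T_reactor=1200.0, T_ig=600.0)
```

With these assumed values the time constant is 15 s and the predicted ignition time is about 6 s; the point of the sketch is the functional form, not the numbers.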
Efficient Simulation of Compressible, Viscous Fluids using Multi-rate Time Integration
Mikida, Cory; Kloeckner, Andreas; Bodony, Daniel
2017-11-01
In the numerical simulation of problems of compressible, viscous fluids with single-rate time integrators, the global timestep used is limited to that of the finest mesh point or fastest physical process. This talk discusses the application of multi-rate Adams-Bashforth (MRAB) integrators to an overset mesh framework to solve compressible viscous fluid problems of varying scale with improved efficiency, with emphasis on the strategy of timescale separation and the application of the resulting numerical method to two sample problems: subsonic viscous flow over a cylinder and a viscous jet in crossflow. The results presented indicate the numerical efficacy of MRAB integrators, outline a number of outstanding code challenges, demonstrate the expected reduction in time enabled by MRAB, and emphasize the need for proper load balancing through spatial decomposition in order for parallel runs to achieve the predicted time-saving benefit. This material is based in part upon work supported by the Department of Energy, National Nuclear Security Administration, under Award Number DE-NA0002374.
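The multi-rate idea above can be illustrated with a minimal two-rate Adams-Bashforth-2 sketch on a toy ODE system: the fast component takes m substeps per macro step while the slow state is held frozen. This "fastest-first" coupling and the test problem are my own illustrative assumptions, not the MRAB machinery of the overset-mesh solver described in the abstract.

```python
import math

def mrab2(f_slow, f_fast, ys, yf, t0, t1, H, m):
    """Two-rate Adams-Bashforth-2 sketch: the slow component advances
    with macro step H; the fast component takes m substeps of h = H/m
    with the slow state frozen over each macro step."""
    h = H / m
    t = t0
    fs_prev = None  # AB2 needs one previous RHS value; bootstrap with Euler
    ff_prev = None
    while t < t1 - 1e-12:
        # advance the fast variable with m AB2 substeps (slow state frozen)
        for _ in range(m):
            ff = f_fast(ys, yf)
            if ff_prev is None:
                yf = yf + h * ff                      # forward-Euler startup
            else:
                yf = yf + h * (1.5 * ff - 0.5 * ff_prev)
            ff_prev = ff
        # advance the slow variable with one AB2 macro step
        fs = f_slow(ys, yf)
        if fs_prev is None:
            ys = ys + H * fs
        else:
            ys = ys + H * (1.5 * fs - 0.5 * fs_prev)
        fs_prev = fs
        t += H
    return ys, yf

# toy stiff coupling: the fast variable relaxes quickly toward the slow one
f_slow = lambda ys, yf: -ys
f_fast = lambda ys, yf: -50.0 * (yf - ys)
ys, yf = mrab2(f_slow, f_fast, 1.0, 0.0, 0.0, 1.0, H=0.02, m=10)
```

Only the fast variable pays for small substeps; the slow variable sees 50 macro steps instead of 500, which is the efficiency gain the abstract describes at the scale of whole mesh blocks.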
Architecture for an integrated real-time air combat and sensor network simulation
Criswell, Evans A.; Rushing, John; Lin, Hong; Graves, Sara
2007-04-01
An architecture for an integrated air combat and sensor network simulation is presented. The architecture integrates two components: a parallel real-time sensor fusion and target tracking simulation, and an air combat simulation. By integrating these two simulations, it becomes possible to experiment with scenarios in which one or both sides in a battle have very large numbers of primitive passive sensors, and to assess the likely effects of those sensors on the outcome of the battle. Modern Air Power is a real-time theater-level air combat simulation that is currently being used as a part of the USAF Air and Space Basic Course (ASBC). The simulation includes a variety of scenarios from the Vietnam war to the present day, and also includes several hypothetical future scenarios. Modern Air Power includes a scenario editor, an order of battle editor, and full AI customization features that make it possible to quickly construct scenarios for any conflict of interest. The scenario editor makes it possible to place a wide variety of sensors including both high fidelity sensors such as radars, and primitive passive sensors that provide only very limited information. The parallel real-time sensor network simulation is capable of handling very large numbers of sensors on a computing cluster of modest size. It can fuse information provided by disparate sensors to detect and track targets, and produce target tracks.
Kletting, P; Schimmel, S; Kestler, H A; Hänscheid, H; Luster, M; Fernández, M; Bröer, J H; Nosske, D; Lassmann, M; Glatting, G
2013-10-01
Calculation of the time-integrated activity coefficient (residence time) is a crucial step in dosimetry for molecular radiotherapy. However, available software is deficient in that it is either not tailored for the use in molecular radiotherapy and/or does not include all required estimation methods. The aim of this work was therefore the development and programming of an algorithm which allows for an objective and reproducible determination of the time-integrated activity coefficient and its standard error. The algorithm includes the selection of a set of fitting functions from predefined sums of exponentials and the choice of an error model for the used data. To estimate the values of the adjustable parameters an objective function, depending on the data, the parameters of the error model, the fitting function and (if required and available) Bayesian information, is minimized. To increase reproducibility and user-friendliness the starting values are automatically determined using a combination of curve stripping and random search. Visual inspection, the coefficient of determination, the standard error of the fitted parameters, and the correlation matrix are provided to evaluate the quality of the fit. The functions which are most supported by the data are determined using the corrected Akaike information criterion. The time-integrated activity coefficient is estimated by analytically integrating the fitted functions. Its standard error is determined assuming Gaussian error propagation. The software was implemented using MATLAB. To validate the proper implementation of the objective function and the fit functions, the results of NUKFIT and SAAM numerical, a commercially available software tool, were compared. The automatic search for starting values was successfully tested for reproducibility. The quality criteria applied in conjunction with the Akaike information criterion allowed the selection of suitable functions. Function fit parameters and their standard
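The core pipeline described above (fit a sum of exponentials to time-activity data, then integrate the fitted function analytically) can be sketched in its simplest mono-exponential form. This is a minimal stand-in, not the NUKFIT algorithm: the paper selects among multiple exponential sums with the corrected Akaike criterion and propagates standard errors, none of which is attempted here.

```python
import math

def fit_monoexp(times, activities):
    """Log-linear least-squares fit of A(t) = A0 * exp(-lam * t).
    Minimal stand-in for the multi-exponential fitting in the paper."""
    n = len(times)
    y = [math.log(a) for a in activities]
    tbar = sum(times) / n
    ybar = sum(y) / n
    slope = (sum((t - tbar) * (v - ybar) for t, v in zip(times, y))
             / sum((t - tbar) ** 2 for t in times))
    lam = -slope
    a0 = math.exp(ybar + lam * tbar)
    return a0, lam

def time_integrated_activity(a0, lam):
    # analytic integral of A0 * exp(-lam * t) from 0 to infinity
    return a0 / lam

# synthetic data: fraction-of-administered-activity samples (assumed values)
times = [1.0, 4.0, 24.0, 48.0, 72.0]        # hours post administration
acts = [0.8 * math.exp(-0.1 * t) for t in times]
a0, lam = fit_monoexp(times, acts)
tiac = time_integrated_activity(a0, lam)     # time-integrated activity coeff.
```

Integrating the fitted function analytically, rather than numerically integrating the sparse data, is exactly what makes the standard-error propagation in the paper tractable.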
Directory of Open Access Journals (Sweden)
Rike Steenken
Modern driver assistance systems make increasing use of auditory and tactile signals in order to reduce the driver's visual information load. This entails potential crossmodal interaction effects that need to be taken into account in designing an optimal system. Here we show that saccadic reaction times to visual targets (cockpit or outside mirror), presented in a driving simulator environment and accompanied by auditory or tactile accessory stimuli, follow some well-known spatiotemporal rules of multisensory integration, usually found under confined laboratory conditions. Auditory nontargets speed up reaction time by about 80 ms. The effect tends to be maximal when the nontarget is presented 50 ms before the target and when target and nontarget are spatially coincident. The effect of a tactile nontarget (vibrating steering wheel) was less pronounced and not spatially specific. It is shown that the average reaction times are well described by the stochastic "time window of integration" model for multisensory integration developed by the authors. This two-stage model postulates that crossmodal interaction occurs only if the peripheral processes from the different sensory modalities terminate within a fixed temporal interval, and that the amount of crossmodal interaction manifests itself in an increase or decrease of second-stage processing time. A qualitative test is consistent with the model prediction that the probability of interaction, but not the amount of crossmodal interaction, depends on target-nontarget onset asynchrony. A quantitative model fit yields estimates of individual participants' parameters, including the size of the time window. Some consequences for the design of driver assistance systems are discussed.
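The two-stage "time window of integration" logic can be made concrete with a small Monte Carlo sketch: peripheral processing times race, integration occurs only when the nontarget finishes first and within the window, and integration then shortens second-stage processing. All parameter values below are illustrative assumptions, not the fitted values from the study.

```python
import random

def twin_rt(mu_v=100.0, mu_a=60.0, soa=-50.0, window=200.0,
            second=250.0, delta=80.0, n=200_000, seed=1):
    """Monte Carlo sketch of a two-stage time-window-of-integration model.
    Peripheral stages are exponential; integration occurs when the auditory
    nontarget terminates before the visual target and within the window,
    and then shortens second-stage time by delta. Returns mean RT in ms."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        v = rng.expovariate(1.0 / mu_v)         # visual peripheral time
        a = soa + rng.expovariate(1.0 / mu_a)   # auditory, shifted by SOA
        integrate = (a < v) and (v - a <= window)
        total += v + second - (delta if integrate else 0.0)
    return total / n

rt_bimodal = twin_rt(soa=-50.0)
rt_unimodal = twin_rt(delta=0.0)   # baseline with no crossmodal facilitation
```

Note how the model separates the two quantities the abstract distinguishes: the SOA and the window size only change the *probability* of integration, while delta sets the *amount* of facilitation.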
Integral equation approach to time-dependent kinematic dynamos in finite domains
International Nuclear Information System (INIS)
Xu Mingtian; Stefani, Frank; Gerbeth, Gunter
2004-01-01
The homogeneous dynamo effect is at the root of cosmic magnetic field generation. With only a very few exceptions, the numerical treatment of homogeneous dynamos is carried out in the framework of the differential equation approach. The present paper tries to facilitate the use of integral equations in dynamo research. Apart from the pedagogical value to illustrate dynamo action within the well-known picture of the Biot-Savart law, the integral equation approach has a number of practical advantages. The first advantage is its proven numerical robustness and stability. The second and perhaps most important advantage is its applicability to dynamos in arbitrary geometries. The third advantage is its intimate connection to inverse problems relevant not only for dynamos but also for technical applications of magnetohydrodynamics. The paper provides the first general formulation and application of the integral equation approach to time-dependent kinematic dynamos, with stationary dynamo sources, in finite domains. The time dependence is restricted to the magnetic field, whereas the velocity or corresponding mean-field sources of dynamo action are supposed to be stationary. For the spherically symmetric α² dynamo model it is shown how the general formulation is reduced to a coupled system of two radial integral equations for the defining scalars of the poloidal and toroidal field components. The integral equation formulation for spherical dynamos with general stationary velocity fields is also derived. Two numerical examples - the α² dynamo model with radially varying α and the Bullard-Gellman model - illustrate the equivalence of the approach with the usual differential equation method. The main advantage of the method is exemplified by the treatment of an α² dynamo in rectangular domains
Toward the integration of European natural gas markets:A time-varying approach
International Nuclear Information System (INIS)
Renou-Maissant, Patricia
2012-01-01
Over the past fifteen years, European gas markets have radically changed. In order to build a single European gas market, a new regulatory framework has been established through three European Gas Directives. The purpose of this article is to investigate the impact of the reforms in the natural gas industry on consumer prices, with a specific focus on gas prices for industrial use. The strength of the relationship between the industrial gas prices of six western European countries is studied by testing the Law of One Price for the period 1991–2009. Estimations were carried out using both cointegration analysis and time-varying parameter models. Results highlight an emerging and on-going process of convergence between the industrial gas prices in western Europe since 2001 for the six EU member states. The strength and the level of convergence differ widely between countries. Strong integration of gas markets in continental Europe, except for the Belgian market, has been established. It appears that the convergence process between continental countries and the UK is not completed. Thus, the integration of European gas markets remains an open issue and the question of how far integration will proceed will still be widely discussed in the coming years. - Highlights: ► We investigate the integration of European natural gas markets. ► We use both cointegration analysis and time-varying parameter models. ► We show the failure of cointegration techniques to take account of evolving processes. ► An emerging and on-going process of convergence between the industrial gas prices is at work. ► Strong integration of gas markets in continental Europe has been established.
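The cointegration step underlying the price-convergence analysis can be sketched with a bare-bones Engle-Granger two-step procedure: regress one price series on the other, then check whether the residual spread is mean-reverting. This is an illustrative sketch on simulated data (the regression and AR(1) persistence measure are standard, but a proper test would compare against Dickey-Fuller critical values, and the paper additionally uses time-varying parameter models).

```python
import random

def ols_slope_intercept(x, y):
    """Ordinary least squares of y on x; returns (intercept, slope)."""
    n = len(x)
    xb = sum(x) / n
    yb = sum(y) / n
    b = (sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y))
         / sum((xi - xb) ** 2 for xi in x))
    return yb - b * xb, b

def engle_granger_resid_ar1(p1, p2):
    """Two-step Engle-Granger sketch: regress p1 on p2, then measure the
    AR(1) persistence of the residual spread. A coefficient well below 1
    suggests the spread is mean-reverting, i.e. the prices share a
    long-run equilibrium."""
    a, b = ols_slope_intercept(p2, p1)
    resid = [y - (a + b * x) for x, y in zip(p2, p1)]
    _, phi = ols_slope_intercept(resid[:-1], resid[1:])
    return phi

# simulate a cointegrated pair: common random-walk trend + stationary noise
rng = random.Random(0)
trend, p1, p2 = 0.0, [], []
for _ in range(2000):
    trend += rng.gauss(0, 1)
    p1.append(trend + rng.gauss(0, 0.5))
    p2.append(trend + rng.gauss(0, 0.5))
phi = engle_granger_resid_ar1(p1, p2)   # close to 0 for this pair
```

For two gas markets obeying the Law of One Price, the spread behaves like the residual here; for segmented markets the spread inherits the random-walk trend and phi stays near 1.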
On the mixed discretization of the time domain magnetic field integral equation
Ulku, Huseyin Arda
2012-09-01
Time domain magnetic field integral equation (MFIE) is discretized using divergence-conforming Rao-Wilton-Glisson (RWG) and curl-conforming Buffa-Christiansen (BC) functions as spatial basis and testing functions, respectively. The resulting mixed discretization scheme, unlike the classical scheme which uses RWG functions as both basis and testing functions, is proper: Testing functions belong to dual space of the basis functions. Numerical results demonstrate that the marching on-in-time (MOT) solution of the mixed discretized MFIE yields more accurate results than that of classically discretized MFIE. © 2012 IEEE.
Nonperturbative time-convolutionless quantum master equation from the path integral approach
International Nuclear Information System (INIS)
Nan Guangjun; Shi Qiang; Shuai Zhigang
2009-01-01
The time-convolutionless quantum master equation is widely used to simulate reduced dynamics of a quantum system coupled to a bath. However, except for several special cases, applications of this equation are based on perturbative calculation of the dissipative tensor, and are limited to the weak system-bath coupling regime. In this paper, we derive an exact time-convolutionless quantum master equation from the path integral approach, which provides a new way to calculate the dissipative tensor nonperturbatively. Application of the new method is demonstrated in the case of an asymmetrical two-level system linearly coupled to a harmonic bath.
Integration of image exposure time into a modified laser speckle imaging method
Energy Technology Data Exchange (ETDEWEB)
RamIrez-San-Juan, J C; Salazar-Hermenegildo, N; Ramos-Garcia, R; Munoz-Lopez, J [Optics Department, INAOE, Puebla (Mexico); Huang, Y C [Department of Electrical Engineering and Computer Science, University of California, Irvine, CA (United States); Choi, B, E-mail: jcram@inaoep.m [Beckman Laser Institute and Medical Clinic, University of California, Irvine, CA (United States)
2010-11-21
Speckle-based methods have been developed to characterize tissue blood flow and perfusion. One such method, called modified laser speckle imaging (mLSI), enables computation of blood flow maps with relatively high spatial resolution. Although it is known that the sensitivity and noise in LSI measurements depend on image exposure time, a fundamental disadvantage of mLSI is that it does not take into account this parameter. In this work, we integrate the exposure time into the mLSI method and provide experimental support of our approach with measurements from an in vitro flow phantom.
Integration of image exposure time into a modified laser speckle imaging method
International Nuclear Information System (INIS)
RamIrez-San-Juan, J C; Salazar-Hermenegildo, N; Ramos-Garcia, R; Munoz-Lopez, J; Huang, Y C; Choi, B
2010-01-01
Speckle-based methods have been developed to characterize tissue blood flow and perfusion. One such method, called modified laser speckle imaging (mLSI), enables computation of blood flow maps with relatively high spatial resolution. Although it is known that the sensitivity and noise in LSI measurements depend on image exposure time, a fundamental disadvantage of mLSI is that it does not take into account this parameter. In this work, we integrate the exposure time into the mLSI method and provide experimental support of our approach with measurements from an in vitro flow phantom.
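The exposure-time dependence described above can be illustrated with a minimal sketch: compute a local speckle-contrast map K = sigma/mean over a sliding window, then use the long-exposure approximation K² ≈ β·τc/T to form an exposure-normalized flow index proportional to 1/τc. The function name, window handling, and the synthetic two-region image are illustrative assumptions; the paper's mLSI formulation differs in detail.

```python
def speckle_flow_index(image, exposure_s, win=5):
    """Local speckle contrast K = sigma/mean over a sliding window,
    converted to a flow index 1/(T*K^2). In the long-exposure limit
    K^2 ~ beta * tau_c / T, so this index scales with flow speed.
    Border pixels (no full window) are left at zero."""
    h, w, r = len(image), len(image[0]), win // 2
    out = [[0.0] * w for _ in range(h)]
    for i in range(r, h - r):
        for j in range(r, w - r):
            vals = [image[ii][jj] for ii in range(i - r, i + r + 1)
                                   for jj in range(j - r, j + r + 1)]
            n = len(vals)
            mean = sum(vals) / n
            var = sum((v - mean) ** 2 for v in vals) / n
            k2 = var / (mean * mean) if mean > 0 else 0.0
            out[i][j] = 1.0 / (exposure_s * k2) if k2 > 0 else 0.0
    return out

# synthetic image: high-contrast (slow-flow) left half, low-contrast right half
img = [[(1.0 if (i + j) % 2 else 3.0) if j < 10
        else (1.8 if (i + j) % 2 else 2.2)
        for j in range(20)] for i in range(11)]
flow = speckle_flow_index(img, exposure_s=5e-3)
```

Dividing by the exposure time T is exactly the modification motivated in the abstract: without it, maps taken at different exposures are not comparable.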
High-Order Calderón Preconditioned Time Domain Integral Equation Solvers
Valdes, Felipe
2013-05-01
Two high-order accurate Calderón preconditioned time domain electric field integral equation (TDEFIE) solvers are presented. In contrast to existing Calderón preconditioned time domain solvers, the proposed preconditioner allows for high-order surface representations and current expansions by using a novel set of fully-localized high-order div- and quasi curl-conforming (DQCC) basis functions. Numerical results demonstrate that the linear systems of equations obtained using the proposed basis functions converge rapidly, regardless of the mesh density and of the order of the current expansion. © 1963-2012 IEEE.
High-Order Calderón Preconditioned Time Domain Integral Equation Solvers
Valdes, Felipe; Ghaffari-Miab, Mohsen; Andriulli, Francesco P.; Cools, Kristof; Michielssen,
2013-01-01
Two high-order accurate Calderón preconditioned time domain electric field integral equation (TDEFIE) solvers are presented. In contrast to existing Calderón preconditioned time domain solvers, the proposed preconditioner allows for high-order surface representations and current expansions by using a novel set of fully-localized high-order div- and quasi curl-conforming (DQCC) basis functions. Numerical results demonstrate that the linear systems of equations obtained using the proposed basis functions converge rapidly, regardless of the mesh density and of the order of the current expansion. © 1963-2012 IEEE.
Comment on 'Analytical results for a Bessel function times Legendre polynomials class integrals'
International Nuclear Information System (INIS)
Cregg, P J; Svedlindh, P
2007-01-01
A result is obtained, stemming from Gegenbauer, where the products of certain Bessel functions and exponentials are expressed in terms of an infinite series of spherical Bessel functions and products of associated Legendre functions. Closed form solutions for integrals involving Bessel functions times associated Legendre functions times exponentials, recently elucidated by Neves et al (J. Phys. A: Math. Gen. 39 L293), are then shown to result directly from the orthogonality properties of the associated Legendre functions. This result offers greater flexibility in the treatment of classical Heisenberg chains and may do so in other problems such as occur in electromagnetic diffraction theory. (comment)
Towards an integrated observing system for ocean carbon and biogeochemistry at a time of change
CSIR Research Space (South Africa)
Gruber, N
2009-09-01
The longest time-series for inorganic carbon started in the early 1980s (Keeling, 1993; Gruber et al., 2001; Bates, 1997) near Bermuda and in 1988 was joined by a second time-series near Hawaii (Sabine et al., 1995; Dore et al., 2003; Keeling et al., 2004)... century primarily as a result of the burning of fossil fuels (Sarmiento and Gruber, 2002). In response, atmospheric CO2 has increased by more than 100 ppm (30%), with today's concentration...
International Nuclear Information System (INIS)
Edwards, R.M.; Lee, K.Y.; Kumara, S.; Levine, S.H.
1989-01-01
An approach for an integrated real-time diagnostic system is being developed for inclusion as an integral part of a power plant automatic control system. In order to participate in control decisions and automatic closed loop operation, the diagnostic system must operate in real-time. Thus far, an expert system with real-time capabilities has been developed and installed on a subsystem at the Experimental Breeder Reactor (EBR-II) in Idaho, USA. Real-time simulation testing of advanced power plant concepts at the Pennsylvania State University has been developed and was used to support the expert system development and installation at EBR-II. Recently, the US National Science Foundation (NSF) and the US Department of Energy (DOE) have funded a Penn State research program to further enhance application of real-time diagnostic systems by pursuing implementation in a distributed power plant computer system including microprocessor based controllers. This paper summarizes past, current, planned, and possible future approaches to power plant diagnostic systems research at Penn State. 34 refs., 9 figs
Valdés, Felipe
2013-03-01
Single-source time-domain electric- and magnetic-field integral equations for analyzing scattering from homogeneous penetrable objects are presented. Their temporal discretization is effected by using shifted piecewise polynomial temporal basis functions and a collocation testing procedure, thus allowing for a marching-on-in-time (MOT) solution scheme. Unlike dual-source formulations, single-source equations involve space-time domain operator products, for which spatial discretization techniques developed for standalone operators do not apply. Here, the spatial discretization of the single-source time-domain integral equations is achieved by using the high-order divergence-conforming basis functions developed by Graglia alongside the high-order divergence- and quasi curl-conforming (DQCC) basis functions of Valdés. The combination of these two sets allows for a well-conditioned mapping from div- to curl-conforming function spaces that fully respects the space-mapping properties of the space-time operators involved. Numerical results corroborate the fact that the proposed procedure guarantees accuracy and stability of the MOT scheme. © 2012 IEEE.
International Nuclear Information System (INIS)
Tsuchida, Takayuki
2010-01-01
We propose a new method for discretizing the time variable in integrable lattice systems while maintaining the locality of the equations of motion. The method is based on the zero-curvature (Lax pair) representation and the lowest-order 'conservation laws'. In contrast to the pioneering work of Ablowitz and Ladik, our method allows the auxiliary dependent variables appearing in the stage of time discretization to be expressed locally in terms of the original dependent variables. The time-discretized lattice systems have the same set of conserved quantities and the same structures of the solutions as the continuous-time lattice systems; only the time evolution of the parameters in the solutions that correspond to the angle variables is discretized. The effectiveness of our method is illustrated using examples such as the Toda lattice, the Volterra lattice, the modified Volterra lattice, the Ablowitz-Ladik lattice (an integrable semi-discrete nonlinear Schroedinger system) and the lattice Heisenberg ferromagnet model. For the modified Volterra lattice, we also present its ultradiscrete analogue.
International Nuclear Information System (INIS)
Ryan, J.
1981-01-01
By understanding the sun, astrophysicists hope to expand this knowledge to understanding other stars. To study the sun, NASA launched a satellite on February 14, 1980. The project is named the Solar Maximum Mission (SMM). The satellite conducted detailed observations of the sun in collaboration with other satellites and ground-based optical and radio observations until its failure 10 months into the mission. The main objective of the SMM was to investigate one aspect of solar activity: solar flares. A brief description of the flare mechanism is given. The SMM satellite was valuable in providing information on where and how a solar flare occurs. A sequence of photographs of a solar flare taken from SMM satellite shows how a solar flare develops in a particular layer of the solar atmosphere. Two flares especially suitable for detailed observations by a joint effort occurred on April 30 and May 21 of 1980. These flares and observations of the flares are discussed. Also discussed are significant discoveries made by individual experiments
An arbitrary-order staggered time integrator for the linear acoustic wave equation
Lee, Jaejoon; Park, Hyunseo; Park, Yoonseo; Shin, Changsoo
2018-02-01
We suggest a staggered time integrator whose order of accuracy can arbitrarily be extended to solve the linear acoustic wave equation. A strategy to select the appropriate order of accuracy is also proposed based on the error analysis that quantitatively predicts the truncation error of the numerical solution. This strategy not only reduces the computational cost several times, but also allows us to flexibly set the modelling parameters such as the time step length, grid interval and P-wave speed. It is demonstrated that the proposed method can almost eliminate temporal dispersive errors during long term simulations regardless of the heterogeneity of the media and time step lengths. The method can also be successfully applied to the source problem with an absorbing boundary condition, which is frequently encountered in the practical usage for the imaging algorithms or the inverse problems.
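The staggering the abstract generalizes to arbitrary order can be shown in its standard second-order base case: for the 1-D acoustic system p_t = -c²v_x, v_t = -p_x, pressure lives at integer time levels and cell centers, velocity at half time levels and cell faces. The solver below is an illustrative leapfrog sketch under a periodic boundary (the paper's arbitrary-order extension, error estimator, and absorbing boundary are not reproduced).

```python
import math

def staggered_acoustic_1d(nx=200, nt=400, c=1.0, L=1.0, cfl=0.5):
    """Second-order staggered (leapfrog) time integration of the 1-D
    acoustic system p_t = -c^2 v_x, v_t = -p_x on a periodic domain.
    Pressure p sits at cell centers and integer time levels; velocity v
    sits at cell faces and half time levels."""
    dx = L / nx
    dt = cfl * dx / c
    # initial condition: one sine period in pressure, zero velocity
    p = [math.sin(2 * math.pi * i * dx / L) for i in range(nx)]
    v = [0.0] * (nx + 1)
    for _ in range(nt):
        # half-level update of v from p, then p from v (periodic wrap)
        for i in range(1, nx):
            v[i] -= dt / dx * (p[i] - p[i - 1])
        v[0] -= dt / dx * (p[0] - p[-1])
        v[nx] = v[0]
        for i in range(nx):
            p[i] -= c * c * dt / dx * (v[i + 1] - v[i])
    return p, dt

p, dt = staggered_acoustic_1d()
```

With nt·dt equal to one temporal period of the standing mode, the pressure profile returns almost exactly to its initial state; the residual mismatch is the temporal dispersion error that the paper's higher-order staggering is designed to suppress over long simulations.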
Integration of domain and resource-based reasoning for real-time control in dynamic environments
Morgan, Keith; Whitebread, Kenneth R.; Kendus, Michael; Cromarty, Andrew S.
1993-01-01
A real-time software controller that successfully integrates domain-based and resource-based control reasoning to perform task execution in a dynamically changing environment is described. The design of the controller is based on the concept of partitioning the process to be controlled into a set of tasks, each of which achieves some process goal. It is assumed that, in general, there are multiple ways (tasks) to achieve a goal. The controller dynamically determines current goals and their current criticality, choosing and scheduling tasks to achieve those goals in the time available. It incorporates rule-based goal reasoning, a TMS-based criticality propagation mechanism, and a real-time scheduler. The controller has been used to build a knowledge-based situation assessment system that formed a major component of a real-time, distributed, cooperative problem solving system built under DARPA contract. It is also being employed in other applications now in progress.
Directory of Open Access Journals (Sweden)
Gojko Žarić
2015-06-01
A failure to build solid letter-speech sound associations may contribute to reading impairments in developmental dyslexia. Whether this reduced neural integration of letters and speech sounds changes over time within individual children, and how this relates to behavioral gains in reading skills, remains unknown. In this research, we examined changes in event-related potential (ERP) measures of letter-speech sound integration over a 6-month period during which 9-year-old dyslexic readers (n=17) followed a training in letter-speech sound coupling next to their regular reading curriculum. We presented the Dutch spoken vowels /a/ and /o/ as standard and deviant stimuli in one auditory and two audiovisual oddball conditions. In one audiovisual condition (AV0), the letter 'a' was presented simultaneously with the vowels, while in the other (AV200) it preceded vowel onset by 200 ms. Prior to the training (T1), dyslexic readers showed the expected pattern of typical auditory mismatch responses, together with the absence of letter-speech sound effects in a late negativity (LN) window. After the training (T2), our results showed earlier (and enhanced) crossmodal effects in the LN window. Most interestingly, earlier LN latency at T2 was significantly related to higher behavioral accuracy in letter-speech sound coupling. On a more general level, the timing of the earlier mismatch negativity (MMN) in the simultaneous condition (AV0) measured at T1 was significantly related to reading fluency at both T1 and T2, as well as to reading gains. Our findings suggest that the reduced neural integration of letters and speech sounds in dyslexic children may show moderate improvement with reading instruction and training, and that behavioral improvements relate especially to individual differences in the timing of this neural integration.
Integrating and Visualizing Tropical Cyclone Data Using the Real Time Mission Monitor
Goodman, H. Michael; Blakeslee, Richard; Conover, Helen; Hall, John; He, Yubin; Regner, Kathryn
2009-01-01
The Real Time Mission Monitor (RTMM) is a visualization and information system that fuses multiple Earth science data sources to enable real time decision-making for airborne and ground validation experiments. Developed at the NASA Marshall Space Flight Center, RTMM is a situational awareness, decision-support system that integrates satellite imagery, radar, surface and airborne instrument data sets, model output parameters, lightning location observations, aircraft navigation data, soundings, and other applicable Earth science data sets. The integration and delivery of this information is made possible using data acquisition systems, network communication links, network server resources, and visualizations through the Google Earth virtual globe application. RTMM is extremely valuable for optimizing individual Earth science airborne field experiments. Flight planners, scientists, and managers appreciate the contributions that RTMM makes to their flight projects. A broad spectrum of interdisciplinary scientists used RTMM during field campaigns including the hurricane-focused 2006 NASA African Monsoon Multidisciplinary Analyses (NAMMA), 2007 NOAA-NASA Aerosonde Hurricane Noel flight, 2007 Tropical Composition, Cloud, and Climate Coupling (TC4), plus a soil moisture (SMAP-VEX) and two Arctic research experiments (ARCTAS) in 2008. Improving and evolving RTMM is a continuous process. RTMM recently integrated the Waypoint Planning Tool, a Java-based application that enables aircraft mission scientists to easily develop a pre-mission flight plan through an interactive point-and-click interface. Individual flight legs are automatically calculated "on the fly". The resultant flight plan is then immediately posted to the Google Earth-based RTMM for interested scientists to view the planned flight track and subsequently compare it to the actual real time flight progress. We are planning additional capabilities to RTMM including collaborations with the Jet Propulsion ...
Lee, Jae Young; Park, Jin Bae; Choi, Yoon Ho
2015-05-01
This paper focuses on a class of reinforcement learning (RL) algorithms, named integral RL (I-RL), that solve continuous-time (CT) nonlinear optimal control problems with input-affine system dynamics. First, we extend the concepts of exploration, integral temporal difference, and invariant admissibility to the target CT nonlinear system that is governed by a control policy plus a probing signal called an exploration. Then, we show input-to-state stability (ISS) and invariant admissibility of the closed-loop systems with the policies generated by integral policy iteration (I-PI) or the invariantly admissible PI (IA-PI) method. Based on these, three online I-RL algorithms, named explorized I-PI and integral Q-learning I and II, are proposed, all of which generate the same convergent sequences as I-PI and IA-PI under the required excitation condition on the exploration. All the proposed methods are partially or completely model free, and can simultaneously explore the state space in a stable manner during the online learning processes. ISS, invariant admissibility, and convergence properties of the proposed methods are also investigated, and, related to these, we show the design principles of the exploration for safe learning. Neural-network-based implementation methods for the proposed schemes are also presented in this paper. Finally, several numerical simulations are carried out to verify the effectiveness of the proposed methods.
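For intuition, the policy-evaluation/policy-improvement cycle that I-PI approximates from trajectory data can be sketched in its model-based (Kleinman) form for a scalar linear-quadratic problem; the dynamics (a, b), weights (q, r), and the stabilizing initial gain k0 below are illustrative assumptions, not the paper's model-free algorithms:

```python
def kleinman_scalar(a, b, q, r, k0, iters=30):
    """Policy iteration for scalar CT LQR: x' = a*x + b*u, cost q*x^2 + r*u^2.
    Each step solves the scalar Lyapunov equation for the current gain and
    then improves the gain; I-RL evaluates the same value update from data
    (integral temporal differences) instead of from the model."""
    k = k0                                            # k0 must stabilize: a - b*k0 < 0
    for _ in range(iters):
        p = (q + r * k * k) / (-2.0 * (a - b * k))    # policy evaluation (Lyapunov)
        k = b * p / r                                 # policy improvement
    return p, k

p, k = kleinman_scalar(a=1.0, b=1.0, q=1.0, r=1.0, k0=2.0)
# converges to the Riccati solution p = 1 + sqrt(2) for these weights
```

Each improved gain remains stabilizing, which is the scalar analogue of the invariant admissibility discussed in the abstract.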
Technical Note: Reducing the spin-up time of integrated surface water–groundwater models
Ajami, H.
2014-06-26
One of the main challenges in catchment scale application of coupled/integrated hydrologic models is specifying a catchment's initial conditions in terms of soil moisture and depth to water table (DTWT) distributions. One approach to reduce uncertainty in model initialization is to run the model recursively using a single or multiple years of forcing data until the system equilibrates with respect to state and diagnostic variables. However, such "spin-up" approaches often require many years of simulations, making them computationally intensive. In this study, a new hybrid approach was developed to reduce the computational burden of spin-up for an integrated groundwater-surface water-land surface model (ParFlow.CLM) by using a combination of ParFlow.CLM simulations and an empirical DTWT function. The methodology is examined in two catchments located in the temperate and semi-arid regions of Denmark and Australia, respectively. Our results illustrate that the hybrid approach reduced the spin-up time required by ParFlow.CLM by up to 50%, and we outline a methodology that is applicable to other coupled/integrated modelling frameworks when initialization from an equilibrium state is required.
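The recursive spin-up loop itself can be sketched as follows; `step_one_year` is a hypothetical stand-in for a full annual ParFlow.CLM run, and the toy relaxation model in the usage line is purely illustrative:

```python
def spin_up(step_one_year, state, tol=1e-3, max_years=100):
    """Run the model recursively over one year of forcing until the state
    change between successive years drops below `tol` (equilibrium)."""
    for year in range(1, max_years + 1):
        new_state = step_one_year(state)
        change = max(abs(a - b) for a, b in zip(new_state, state))
        state = new_state
        if change < tol:
            return state, year          # equilibrated after `year` cycles
    return state, max_years             # not converged within budget

# toy "model": each year moves storage 50% toward an equilibrium of 1.0
final, years = spin_up(lambda s: [0.5 * (x + 1.0) for x in s], [0.0, 4.0])
```

The hybrid approach in the abstract replaces many of these annual cycles with an empirical DTWT estimate, cutting the loop count roughly in half.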
FPGA-based real-time embedded system for RISS/GPS integrated navigation.
Abdelfatah, Walid Farid; Georgy, Jacques; Iqbal, Umar; Noureldin, Aboelmagd
2012-01-01
Navigation algorithms integrating measurements from multi-sensor systems overcome the problems that arise from using GPS navigation systems in standalone mode. Algorithms that integrate the data from a 2D low-cost reduced inertial sensor system (RISS), consisting of a gyroscope and an odometer or wheel encoders, along with a GPS receiver via a Kalman filter have proved worthwhile in providing a more consistent and reliable navigation solution than standalone GPS receivers, and have also been shown to be beneficial, especially in GPS-denied environments such as urban canyons and tunnels. The main objective of this paper is to narrow the idea-to-implementation gap that follows algorithm development by realizing a low-cost real-time embedded navigation system capable of computing the data-fused positioning solution. The role of the developed system is to synchronize the measurements from the three sensors, relative to the pulse-per-second signal generated by the GPS, after which the navigation algorithm is applied to the synchronized measurements to compute the navigation solution in real time. Employing a customizable soft-core processor on an FPGA in the kernel of the navigation system provided the flexibility for communicating with the various sensors and the computation capability required by the Kalman filter integration algorithm.
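As a rough illustration of the fusion step (not the paper's full RISS/GPS filter, whose state vector and models are richer), a scalar Kalman filter that corrects a dead-reckoned position with absolute fixes might look like:

```python
def kalman_step(x, p, u, z, q=0.01, r=1.0):
    """One predict/update cycle of a scalar Kalman filter. Predict with a
    dead-reckoning increment u (the RISS role), then correct with an
    absolute measurement z (the GPS role). q, r are illustrative noise
    variances for the process and measurement respectively."""
    x_pred = x + u             # prediction from inertial/odometer data
    p_pred = p + q             # process noise inflates uncertainty
    k = p_pred / (p_pred + r)  # Kalman gain
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

x, p = 0.0, 1.0
for z in [1.0, 2.1, 2.9, 4.2]:          # hypothetical noisy GPS positions
    x, p = kalman_step(x, p, u=1.0, z=z)
```

In GPS-denied stretches the update step is simply skipped and the filter coasts on the prediction, which is where the RISS data earns its keep.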
Measurement of time-integrated $D^0 \to hh$ asymmetries at LHCb
Marino, Pietro
2016-01-01
LHCb collected the world's largest sample of charm decays during LHC Run I, corresponding to an integrated luminosity of 3 fb$^{-1}$. This has permitted many precision measurements of charm mixing and CP violation parameters. One of the most precise and important observables is the so-called $\Delta A_{CP}$ parameter, corresponding to the difference between the time-integrated CP asymmetries in the singly Cabibbo-suppressed $D^{0} \rightarrow K^{+}K^{-}$ and $D^{0} \rightarrow \pi^{+}\pi^{-}$ decay modes. The flavour of the $D^{0}$ meson is inferred from the charge of the pion in $D^{*+} \rightarrow D^{0}\pi^{+}$ and $D^{*-} \rightarrow \overline{D}{}^{0}\pi^{-}$ decays. $\Delta A_{CP} \equiv A_{raw}(K^{+}K^{-}) - A_{raw}(\pi^{+}\pi^{-})$ is measured to be $\Delta A_{CP} = (-0.10 \pm 0.08 \pm 0.03)\%$, where the first uncertainty is statistical and the second systematic. The measurement is consistent with the no-CP-violation hypothesis and represents the most precise measurement of time-integrated CP asymmetry ...
Integration of real-time 3D capture, reconstruction, and light-field display
Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Pei, Renjing; Liu, Yongchun; Zhang, Xiao
2015-03-01
Effective integration of 3D acquisition, reconstruction (modeling) and display technologies into a seamless system provides an augmented experience of visualizing and analyzing real objects and scenes with realistic 3D sensation. Applications can be found in medical imaging, gaming, virtual or augmented reality and hybrid simulations. Although 3D acquisition, reconstruction, and display technologies have gained significant momentum in recent years, there seems to be a lack of attention on synergistically combining these components into an "end-to-end" 3D visualization system. We designed, built and tested an integrated 3D visualization system that is able to capture 3D light-field images in real time, perform 3D reconstruction to build a 3D model of the objects, and display the 3D model on a large autostereoscopic screen. In this article, we present our system architecture and component designs, hardware/software implementations, and experimental results. We elaborate on our recent progress on sparse camera array light-field 3D acquisition, real-time dense 3D reconstruction, and autostereoscopic multi-view 3D display. A prototype is finally presented with test results to illustrate the effectiveness of our proposed integrated 3D visualization system.
Guarantee of remaining life time. Integrity of mechanical components and control of ageing phenomena
International Nuclear Information System (INIS)
Schuler, X.; Herter, K.H.; Koenig, G.
2012-01-01
The life time of safety-relevant systems, structures and components (SSC) of Nuclear Power Plants (NPP) is determined by two main principles. First of all, the required quality has to be produced during the design and fabrication process. This means that quality has to be produced and can't be improved by excessive inspections (Basis Safety - quality through production principle). The second principle concerns the initial quality, which has to be maintained during operation. This covers safe operation during the total life time (life time management), safety against ageing phenomena (AM - ageing management) as well as proof of integrity (e.g. break preclusion or avoidance of fracture for SSC with high safety relevance). Initiated by the Fukushima Dai-ichi event in Japan in spring 2011, Long Term Operation (LTO) of German NPPs is out of the question. In June 2011 the legislature decided to phase out nuclear power by 2022. As a consequence, safe operation shall be guaranteed for the remaining life time. Within this technical framework the ageing management is a key element. Depending on the safety relevance of the SSC under observation, including preventive maintenance, various tasks are required, in particular to clarify the mechanisms which contribute system-specifically to the damage of the components and systems and to define their controlling parameters, which have to be monitored and checked. Appropriate continuous or discontinuous measures are to be considered in this connection. The approach to ensure a high standard of quality in operation for the remaining life time and the management of the technical and organizational aspects are demonstrated and explained. The basis for ageing management to be applied to NPPs is included in Nuclear Safety Standard 1403, which describes the ageing management procedures. For SSC with high safety relevance a verification analysis for rupture preclusion (proof of integrity, integrity concept) shall be performed (Nuclear Safety Standard 3206 ...
An energy-stable time-integrator for phase-field models
Vignal, Philippe
2016-12-27
We introduce a provably energy-stable time-integration method for general classes of phase-field models with polynomial potentials. We demonstrate how Taylor series expansions of the nonlinear terms present in the partial differential equations of these models can lead to expressions that guarantee energy-stability implicitly, which are second-order accurate in time. The spatial discretization relies on a mixed finite element formulation and isogeometric analysis. We also propose an adaptive time-stepping discretization that relies on a first-order backward approximation to give an error-estimator. This error estimator is accurate, robust, and does not require the computation of extra solutions to estimate the error. This methodology can be applied to any second-order accurate time-integration scheme. We present numerical examples in two and three spatial dimensions, which confirm the stability and robustness of the method. The implementation of the numerical schemes is done in PetIGA, a high-performance isogeometric analysis framework.
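A generic accept-and-rescale step controller of the kind such a first-order backward error estimator drives might be sketched as follows; the tolerance, safety factor, and clamping bounds are illustrative assumptions, not the values used in PetIGA:

```python
def adapt_dt(dt, err, tol=1e-4, safety=0.9, order=2,
             dt_min=1e-8, dt_max=1.0):
    """Standard step-size controller: the error estimate `err` (here, the
    gap between the second-order solution and a first-order backward
    approximation) drives growth or shrinkage of the time step. The
    growth factor is clamped to [0.2, 5.0] to avoid erratic stepping."""
    factor = safety * (tol / max(err, 1e-16)) ** (1.0 / (order + 1))
    return min(max(dt * min(max(factor, 0.2), 5.0), dt_min), dt_max)

# a large error shrinks the step, a small error grows it
dt_small = adapt_dt(0.1, err=1e-2)
dt_large = adapt_dt(0.1, err=1e-8)
```

The appeal of the estimator described in the abstract is that it needs no extra solves: the first-order backward approximation is available from quantities already computed.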
Equilibrium and response properties of the integrate-and-fire neuron in discrete time
Directory of Open Access Journals (Sweden)
Moritz Helias
2010-01-01
The integrate-and-fire neuron with exponential postsynaptic potentials is a frequently employed model to study neural networks. Simulations in discrete time still offer the highest performance at moderate numerical errors, which makes them the first choice for long-term simulations of plastic networks. Here we extend the population density approach to investigate how the equilibrium and response properties of the leaky integrate-and-fire neuron are affected by time discretization. We present a novel analytical treatment of the boundary condition at threshold, taking both the discretization of time and finite synaptic weights into account. We uncover an increased membrane potential density just below threshold as the decisive property that explains the deviations found between simulations and the classical diffusion approximation. Temporal discretization and finite synaptic weights both contribute to this effect. Our treatment improves the standard formula for calculating the neuron's equilibrium firing rate. Direct solution of the Markov process describing the evolution of the membrane potential density confirms our analysis and yields a method to calculate the firing rate exactly. Knowing the shape of the membrane potential distribution near threshold enables us to derive the transient response properties of the neuron model to synaptic input. We find a pronounced non-linear fast response component that has not been described by the prevailing continuous-time theory for Gaussian white noise input.
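A minimal discrete-time leaky integrate-and-fire simulation illustrates the grid-restricted threshold check the analysis addresses: the membrane potential can only be compared against threshold at grid points, which distorts the density just below threshold. All parameters here are illustrative, not those of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def lif_discrete(n_steps=10000, dt=1e-4, tau=0.02, v_th=1.0,
                 v_reset=0.0, mu=60.0, sigma=5.0):
    """Leaky integrate-and-fire neuron on a fixed time grid: exponential
    decay plus stochastic synaptic input per step (diffusion-style drive),
    with threshold-and-reset applied only at grid points."""
    decay = np.exp(-dt / tau)
    v, spikes = 0.0, 0
    for _ in range(n_steps):
        v = decay * v + mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        if v >= v_th:          # crossing detected only on the grid -- the
            v = v_reset        # source of the discretization effects studied
            spikes += 1
    return spikes / (n_steps * dt)   # firing rate in Hz

rate = lif_discrete()
```

Shrinking `dt` while keeping the input statistics fixed shows the rate drifting toward the continuous-time diffusion prediction, which is the deviation the improved formula corrects.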
An energy-stable time-integrator for phase-field models
Vignal, Philippe; Collier, N.; Dalcin, Lisandro; Brown, D.L.; Calo, V.M.
2016-01-01
We introduce a provably energy-stable time-integration method for general classes of phase-field models with polynomial potentials. We demonstrate how Taylor series expansions of the nonlinear terms present in the partial differential equations of these models can lead to expressions that guarantee energy-stability implicitly, which are second-order accurate in time. The spatial discretization relies on a mixed finite element formulation and isogeometric analysis. We also propose an adaptive time-stepping discretization that relies on a first-order backward approximation to give an error-estimator. This error estimator is accurate, robust, and does not require the computation of extra solutions to estimate the error. This methodology can be applied to any second-order accurate time-integration scheme. We present numerical examples in two and three spatial dimensions, which confirm the stability and robustness of the method. The implementation of the numerical schemes is done in PetIGA, a high-performance isogeometric analysis framework.
DEFF Research Database (Denmark)
Emerek, Ruth
2004-01-01
The contribution discusses the different conceptions of integration in Denmark, and what may be understood by successful integration.
DEFF Research Database (Denmark)
Wang, Zhenyu; Sekulovic, Andrea; Kutter, Jörg Peter
2006-01-01
A novel real-time PCR microchip platform with an integrated thermal system and polymer waveguides has been developed. The integrated polymer optical system for real-time monitoring of PCR was fabricated in the same SU-8 layer as the PCR chamber, without additional masking steps. Two suitable DNA binding dyes, SYTOX Orange and TO-PRO-3, were selected and tested for the real-time PCR processes. As a model, the cadF gene of Campylobacter jejuni was amplified on the microchip. Using the integrated optical system of the real-time PCR microchip, the measured cycle threshold values of the real-time PCR ...
Integration of RNA-Seq and RPPA data for survival time prediction in cancer patients.
Isik, Zerrin; Ercan, Muserref Ece
2017-10-01
Integration of several types of patient data in a computational framework can accelerate the identification of more reliable biomarkers, especially for prognostic purposes. This study aims to identify biomarkers that can successfully predict the potential survival time of a cancer patient by integrating transcriptomic (RNA-Seq), proteomic (RPPA), and protein-protein interaction (PPI) data. The proposed method, RPBioNet, employs a random walk-based algorithm that works on a PPI network to identify a limited number of protein biomarkers. Later, the method uses gene expression measurements of the selected biomarkers to train a classifier for the survival time prediction of patients. RPBioNet was applied to classify kidney renal clear cell carcinoma (KIRC), glioblastoma multiforme (GBM), and lung squamous cell carcinoma (LUSC) patients based on their survival time classes (long- or short-term). The RPBioNet method correctly identified the survival time classes of patients with average accuracies between 66% and 78% on the three data sets. RPBioNet operates with only 20 to 50 biomarkers and achieves on average 6% higher accuracy than the closest alternative method, which uses only RNA-Seq data in the biomarker selection. Further analysis of the most predictive biomarkers highlighted genes that are common across the cancer types, as they may be driver proteins responsible for cancer progression. The novelty of this study is the integration of a PPI network with mRNA and protein expression data to identify more accurate prognostic biomarkers that can be used for clinical purposes in the future. Copyright © 2017 Elsevier Ltd. All rights reserved.
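The random-walk step at the core of such biomarker ranking can be sketched as a standard random walk with restart; the toy adjacency matrix, seed set, and restart probability below are illustrative, not RPBioNet's actual network or settings:

```python
import numpy as np

def random_walk_with_restart(adj, seeds, restart=0.3, tol=1e-10):
    """Random walk with restart on a PPI-style network: the steady-state
    visiting probabilities rank proteins by proximity to the seed nodes."""
    w = adj / adj.sum(axis=0, keepdims=True)   # column-normalize transitions
    p0 = np.zeros(adj.shape[0])
    p0[seeds] = 1.0 / len(seeds)               # restart distribution
    p = p0.copy()
    while True:
        p_next = (1 - restart) * w @ p + restart * p0
        if np.abs(p_next - p).sum() < tol:     # converged to steady state
            return p_next
        p = p_next

# toy 4-node network: node 3 hangs off node 2, seed at node 0
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
scores = random_walk_with_restart(adj, seeds=[0])
```

Nodes closer to the seed score higher, which is the property exploited to shortlist 20 to 50 candidate biomarkers before classifier training.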
Generation of a quantum integrable class of discrete-time or relativistic periodic Toda chains
International Nuclear Information System (INIS)
Kundu, Anjan
1994-01-01
A new integrable class of quantum models representing a family of different discrete-time or relativistic generalisations of the periodic Toda chain (TC), including that of a recently proposed classical model close to TC [Lett. Math. Phys. 29 (1993) 165], is presented. All such models are shown to be obtainable from a single ancestor model at different realisations of the underlying quantised algebra. As a consequence, the 2×2 Lax operators and the associated quantum R-matrices for these models are easily derived, ensuring their quantum integrability. It is shown that the functional Bethe ansatz developed for the quantum TC is trivially generalised to achieve separation of variables also for the present models.
Integrating Satellite, Radar and Surface Observation with Time and Space Matching
Ho, Y.; Weber, J.
2015-12-01
The Integrated Data Viewer (IDV) from Unidata is a Java™-based software framework for analyzing and visualizing geoscience data. It brings together the ability to display and work with satellite imagery, gridded data, surface observations, balloon soundings, NWS WSR-88D Level II and Level III RADAR data, and NOAA National Profiler Network data, all within a unified interface. Applying time and space matching to the satellite, radar and surface observation datasets automatically synchronizes the displays from different data sources and spatially subsets them to match the display area in the view window. These features allow IDV users to effectively integrate these observations and provide three-dimensional views of a weather system to better understand the underlying dynamics and physics of weather phenomena.
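The time-matching idea can be sketched as nearest-observation selection per display frame; the timestamps below are hypothetical, and the IDV's actual matching logic may differ:

```python
from bisect import bisect_left

def match_nearest_time(display_times, obs_times):
    """For each display time, pick the index of the nearest observation
    (obs_times must be sorted), synchronizing asynchronous datasets onto
    a single animation clock."""
    matched = []
    for t in display_times:
        i = bisect_left(obs_times, t)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(obs_times)]
        matched.append(min(candidates, key=lambda j: abs(obs_times[j] - t)))
    return matched

# hypothetical clocks: radar scans every 6 min vs. satellite every 15 min
radar = [0, 6, 12, 18, 24, 30]
sat = [0, 15, 30]
idx = match_nearest_time(radar, sat)   # which satellite image to show per scan
```

Driving the animation from the densest dataset and matching the sparser ones to it keeps every layer as current as its source allows.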
New readout integrated circuit using continuous time fixed pattern noise correction
Dupont, Bertrand; Chammings, G.; Rapellin, G.; Mandier, C.; Tchagaspanian, M.; Dupont, Benoit; Peizerat, A.; Yon, J. J.
2008-04-01
LETI has been involved in IRFPA development since 1978; the design department (LETI/DCIS) has focused its work on new ROIC architectures for many years. The trend is to integrate advanced functions into the CMOS design to achieve cost-efficient sensor production. The thermal imaging market today increasingly demands systems with instant-ON capability and low power consumption. The purpose of this paper is to present the latest developments in continuous-time fixed pattern noise correction. Several architectures are proposed, some based on hardwired digital processing and some purely analog. Both use scene-based algorithms. Moreover, a new method is proposed for the simultaneous correction of pixel offsets and sensitivities. In this scope, a new readout integrated circuit architecture has been implemented; this architecture is developed with 0.18 μm CMOS technology. The specification and the application of the ROIC are discussed in detail.
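For context, the offset-and-sensitivity correction being generalized here can be illustrated with the textbook two-point scheme; the ROIC described in the abstract performs a scene-based, continuous-time variant rather than this calibration-frame version, and all numbers below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

def two_point_fpn_correction(raw, dark, bright, target_lo=0.0, target_hi=1.0):
    """Classic two-point fixed-pattern-noise correction: per-pixel offset
    (dark frame) and sensitivity (bright frame) are removed so that every
    pixel maps the two reference scenes onto the same output levels."""
    gain = (target_hi - target_lo) / (bright - dark)   # per-pixel sensitivity
    return (raw - dark) * gain + target_lo

# toy focal plane: uniform true scene 0.5, per-pixel offset/gain variations
offset = rng.normal(0.1, 0.02, (4, 4))
gain = rng.normal(1.0, 0.05, (4, 4))
dark = offset                      # reference frame at scene level 0.0
bright = offset + gain             # reference frame at scene level 1.0
raw = offset + 0.5 * gain          # observed frame of the uniform scene
corrected = two_point_fpn_correction(raw, dark, bright)
```

Scene-based correction estimates the same per-pixel offset and gain terms from the image stream itself, which is what enables the instant-ON operation mentioned above.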
Scalar one-loop vertex integrals as meromorphic functions of space-time dimension d
Energy Technology Data Exchange (ETDEWEB)
Bluemlein, Johannes [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Phan, Khiem Hong [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Vietnam National Univ., Ho Chi Minh City (Viet Nam). Univ. of Science; Riemann, Tord [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Silesia Univ., Chorzow (Poland). Inst. of Physics
2017-11-15
Representations are derived for the basic scalar one-loop vertex Feynman integrals as meromorphic functions of the space-time dimension $d$ in terms of (generalized) hypergeometric functions $_2F_1$ and $F_1$. Values at asymptotic or exceptional kinematic points, as well as expansions around the singular points at $d = 4 + 2n$, $n$ a non-negative integer, may easily be derived from the representations. The Feynman integrals studied here may be used as building blocks for the calculation of one-loop and higher-loop scalar and tensor amplitudes. From the recursion relation presented, higher $n$-point functions may be obtained in a straightforward manner.
Paciello, Rossana; Coviello, Irina; Filizzola, Carolina; Genzano, Nicola; Lisi, Mariano; Mazzeo, Giuseppe; Pergola, Nicola; Sileo, Giancanio; Tramutoli, Valerio
2014-05-01
In environmental studies, the integration of heterogeneous and time-varying data is a very common requirement for investigating, and possibly visualizing, correlations among the physical parameters underlying the dynamics of complex phenomena. Datasets used in such applications often have different spatial and temporal resolutions. In some cases, superimposition of asynchronous layers is required. Traditionally, the platforms used to perform spatio-temporal visual data analyses allow spatial data to be overlaid, managing time with a 'snapshot' data model in which each stack of layers is labeled with a different time. This kind of architecture, however, incorporates neither temporal indexing nor the third spatial dimension, which is usually given as an independent additional layer. Conversely, the full representation of a generic environmental parameter P(x,y,z,t) in the 4D space-time domain can handle asynchronous datasets as well as less traditional data products (e.g. vertical sections, punctual time series, etc.). In this paper we present the 4 Dimensions Environmental Observation Platform (4-DEOS), a system based on a Client-Broker-Server web services architecture. This platform is a new open source solution for both timely access and easy integration and visualization of heterogeneous (maps, vertical profiles or sections, punctual time series, etc.), asynchronous, geospatial products. The innovative aspect of the 4-DEOS system is that users can analyze data/products individually while moving through time, with the possibility of pausing the display of some data/products to focus on other parameters and better study their temporal evolution. The platform offers two distinct display modes, for a time interval or for a single instant. Users can choose to visualize data/products in two ways: i) showing each parameter in a dedicated window, or ii) visualizing all parameters overlapped in a single window. A sliding time bar allows ...
Time-Dependent and Time-Integrated Angular Analysis of B -> phi Ks pi0 and B -> phi K+ pi-
International Nuclear Information System (INIS)
Aubert, B.; Bona, M.; Karyotakis, Y; Lees, J.P.; Poireau, V.
2008-01-01
We perform a time-dependent and time-integrated angular analysis of the $B^{0} \rightarrow \psi K^{*}(892)^{0}$, $\psi K_{2}^{*}(1430)^{0}$, and $\psi(K\pi)_{S\text{-wave}}^{0}$ decays with the final sample of about 465 million $B\bar{B}$ pairs recorded with the BABAR detector. Overall, twelve parameters are measured for the vector-vector decay, nine parameters for the vector-tensor decay, and three parameters for the vector-scalar decay, including the branching fractions, CP-violation parameters, and parameters sensitive to final-state interaction. We use the dependence on the $K\pi$ invariant mass of the interference between the scalar and vector or tensor components to resolve discrete ambiguities of the strong and weak phases. We use the time evolution of the $B \rightarrow \psi K_{S}^{0}\pi^{0}$ channel to extract the CP-violation phase difference $\Delta\phi_{00} = 0.28 \pm 0.42 \pm 0.04$ between the $B$ and $\bar{B}$ decay amplitudes. When the $B \rightarrow \psi K^{\pm}\pi^{\mp}$ channel is included, the fractions of longitudinal polarization $f_{L}$ of the vector-vector and vector-tensor decay modes are measured to be $0.494 \pm 0.034 \pm 0.013$ and $0.901^{+0.046}_{-0.058} \pm 0.037$, respectively. This polarization pattern requires the presence of a helicity-plus amplitude in the vector-vector decay from a presently unknown source.
Time-Dependent and Time-Integrated Angular Analysis of B -> phi Ks pi0 and B -> phi K+ pi-
Energy Technology Data Exchange (ETDEWEB)
Aubert, B; Bona, M; Karyotakis, Y; Lees, J P; Poireau, V
2008-08-04
We perform a time-dependent and time-integrated angular analysis of the $B^{0} \rightarrow \psi K^{*}(892)^{0}$, $\psi K_{2}^{*}(1430)^{0}$, and $\psi(K\pi)_{S\text{-wave}}^{0}$ decays with the final sample of about 465 million $B\bar{B}$ pairs recorded with the BABAR detector. Overall, twelve parameters are measured for the vector-vector decay, nine parameters for the vector-tensor decay, and three parameters for the vector-scalar decay, including the branching fractions, CP-violation parameters, and parameters sensitive to final-state interaction. We use the dependence on the $K\pi$ invariant mass of the interference between the scalar and vector or tensor components to resolve discrete ambiguities of the strong and weak phases. We use the time evolution of the $B \rightarrow \psi K_{S}^{0}\pi^{0}$ channel to extract the CP-violation phase difference $\Delta\phi_{00} = 0.28 \pm 0.42 \pm 0.04$ between the $B$ and $\bar{B}$ decay amplitudes. When the $B \rightarrow \psi K^{\pm}\pi^{\mp}$ channel is included, the fractions of longitudinal polarization $f_{L}$ of the vector-vector and vector-tensor decay modes are measured to be $0.494 \pm 0.034 \pm 0.013$ and $0.901^{+0.046}_{-0.058} \pm 0.037$, respectively. This polarization pattern requires the presence of a helicity-plus amplitude in the vector-vector decay from a presently unknown source.
Mutual Information Based Dynamic Integration of Multiple Feature Streams for Robust Real-Time LVCSR
Sato, Shoei; Kobayashi, Akio; Onoe, Kazuo; Homma, Shinichi; Imai, Toru; Takagi, Tohru; Kobayashi, Tetsunori
We present a novel method of integrating the likelihoods of multiple feature streams, representing different acoustic aspects, for robust speech recognition. The integration algorithm dynamically calculates a frame-wise stream weight so that a higher weight is given to a stream that is robust to a variety of noisy environments or speaking styles. Such a robust stream is expected to show discriminative ability. A conventional method proposed for the recognition of spoken digits calculates the weights from the entropy of the whole set of HMM states. This paper extends the dynamic weighting to a real-time large-vocabulary continuous speech recognition (LVCSR) system. The proposed weight is calculated in real time from the mutual information between an input stream and the active HMM states in the search space, without an additional likelihood calculation. Furthermore, the mutual information takes the width of the search space into account by calculating the marginal entropy from the number of active states. In this paper, we integrate three features that are extracted through auditory filters by taking into account the human auditory system's ability to extract amplitude and frequency modulations. Accordingly, features representing energy, amplitude drift, and resonant frequency drift are integrated. These features are expected to provide complementary clues for speech recognition. Speech recognition experiments on field reports and spontaneous commentary from Japanese broadcast news showed that the proposed method reduced word errors by 9.2% in field reports and 4.7% in spontaneous commentaries, relative to the best result obtained from a single stream.
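A simplified frame-wise weighting rule in this spirit uses the entropy gap of each stream's posterior over active states as a stand-in for the paper's mutual-information weight; the posteriors below are illustrative numbers, not broadcast-news data:

```python
import math

def stream_weights(stream_posteriors):
    """Frame-wise weights from posterior entropy over active states: a
    peaked (low-entropy) posterior signals a discriminative stream and
    receives a higher weight; weights are normalized to sum to 1."""
    n = len(stream_posteriors[0])
    h_max = math.log(n)                      # entropy of a uniform posterior
    gains = []
    for post in stream_posteriors:
        h = -sum(p * math.log(p) for p in post if p > 0)
        gains.append(h_max - h)              # information gain over uniform
    total = sum(gains) or 1.0                # guard against all-uniform frame
    return [g / total for g in gains]

# stream 0 is confident, stream 1 is confused on this frame
w = stream_weights([[0.85, 0.05, 0.05, 0.05],
                    [0.30, 0.25, 0.25, 0.20]])
```

Because the set of active states changes every frame during decoding, the normalizer `h_max` tracks the current search-space width, echoing the marginal-entropy idea in the abstract.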
Timely integration of safeguards and security with projects at Los Alamos National Laboratory
International Nuclear Information System (INIS)
Price, R.; Blount, P.M.; Garcia, S.W.; Gonzales, R.L.; Salazar, J.B.; Campbell, C.H.
2004-01-01
The Safeguards and Security (S and S) Requirements Integration Team at Los Alamos National Laboratory (LANL) has developed and implemented an innovative management process that will be described in detail. This process systematically integrates S and S planning into construction, facility modifications or upgrades, mission changes, and operational projects. It extends and expands the opportunities provided by the DOE project management manual, DOE M 413.3-1. Through a series of LANL documents, a process is defined and implemented that formally identifies an S and S professional to oversee, coordinate, facilitate, and communicate among the identified S and S organizations and the project organizations over the life cycle of the project. The derived benefits, namely (1) elimination/reduction of re-work or costly retrofitting, (2) overall project cost savings because of timely and improved planning, (3) formal documentation, and (4) support of Integrated Safeguards and Security Management at LANL, will be discussed. How many times, during the construction of a new facility or the modification of an existing facility, have the persons responsible for the project waited until the last possible minute or until after construction is completed to approach the security organizations for their help in safeguarding and securing the facility? It's almost like, 'Oh, by the way, do we need access control and a fence around this building and just what are we going to do with our classified anyway?' Not only is it usually difficult; it's also typically expensive to retrofit or plan for safeguards and security after the fact. Safeguards and security organizations are often blamed for budget overruns and delays in facility occupancy and program startup, but these problems are usually due to poor front-end planning. In an effort to help projects engage safeguards and security in the pre-conceptual or conceptual stages, we implemented a high level formality of operations. We
Meyer, F. J.; Webley, P.; Dehn, J.; Arko, S. A.; McAlpin, D. B.
2013-12-01
Volcanic eruptions are among the most significant hazards to human society, capable of triggering natural disasters on regional to global scales. In the last decade, remote sensing techniques have become established in operational forecasting, monitoring, and managing of volcanic hazards. Monitoring organizations, like the Alaska Volcano Observatory (AVO), nowadays rely heavily on remote sensing data from a variety of optical and thermal sensors to provide time-critical hazard information. Despite the high utilization of these remote sensing data to detect and monitor volcanic eruptions, the presence of clouds and a dependence on solar illumination often limit their impact on decision-making processes. Synthetic Aperture Radar (SAR) systems are widely believed to be superior to optical sensors in operational monitoring situations, due to the weather and illumination independence of their observations and the sensitivity of SAR to surface changes and deformation. Despite these benefits, the contributions of SAR to operational volcano monitoring have been limited in the past due to (1) high SAR data costs, (2) traditionally long data processing times, and (3) the low temporal sampling frequencies inherent to most SAR systems. In this study, we present improved data access, data processing, and data integration techniques that mitigate some of the above-mentioned limitations and allow, for the first time, a meaningful integration of SAR into operational volcano monitoring systems. We will introduce a new database interface that was developed in cooperation with the Alaska Satellite Facility (ASF) and allows for rapid and seamless access to all of ASF's SAR data holdings. We will also present processing techniques that improve the temporal frequency with which hazard-related products can be produced. These techniques take advantage of modern signal processing technology as well as new radiometric normalization schemes, both enabling the combination of
Parallel, explicit, and PWTD-enhanced time domain volume integral equation solver
Liu, Yang
2013-07-01
Time domain volume integral equations (TDVIEs) are useful for analyzing transient scattering from inhomogeneous dielectric objects in applications as varied as photonics, optoelectronics, and bioelectromagnetics. TDVIEs typically are solved by implicit marching-on-in-time (MOT) schemes [N. T. Gres et al., Radio Sci., 36, 379-386, 2001], requiring the solution of a system of equations at each and every time step. To reduce the computational cost associated with such schemes, [A. Al-Jarro et al., IEEE Trans. Antennas Propagat., 60, 5203-5215, 2012] introduced an explicit MOT-TDVIE method that uses a predictor-corrector technique to stably update field values throughout the scatterer. By leveraging memory-efficient nodal spatial discretization and scalable parallelization schemes [A. Al-Jarro et al., in 28th Int. Rev. Progress Appl. Computat. Electromagn., 2012], this solver has been successfully applied to the analysis of scattering phenomena involving 0.5 million spatial unknowns. © 2013 IEEE.
Time travel paradoxes, path integrals, and the many worlds interpretation of quantum mechanics
International Nuclear Information System (INIS)
Everett, Allen
2004-01-01
We consider two approaches to evading paradoxes in quantum mechanics with closed timelike curves. In a model similar to Politzer's, assuming pure states and using path integrals, we show that the problems of paradoxes and of unitarity violation are related; preserving unitarity avoids paradoxes by modifying the time evolution so that improbable events become certain. Deutsch has argued, using the density matrix, that paradoxes do not occur in the 'many worlds interpretation'. We find that in this approach account must be taken of the resolution time of the device that detects objects emerging from a wormhole or other time machine. When this is done one finds that this approach is viable only if macroscopic objects traversing a wormhole interact with it so strongly that they are broken into microscopic fragments.
Boundary-integral equation formulation for time-dependent inelastic deformation in metals
Energy Technology Data Exchange (ETDEWEB)
Kumar, V; Mukherjee, S
1977-01-01
The mathematical structure of various constitutive relations proposed in recent years for representing time-dependent inelastic deformation behavior of metals at elevated temperatures has certain features which permit a simple formulation of the three-dimensional inelasticity problem in terms of real time rates. A direct formulation of the boundary-integral equation method in terms of rates is discussed for the analysis of time-dependent inelastic deformation of arbitrarily shaped three-dimensional metallic bodies subjected to arbitrary mechanical and thermal loading histories and obeying constitutive relations of the kind mentioned above. The formulation is based on the assumption of infinitesimal deformations. Several illustrative examples involving creep of thick-walled spheres, long thick-walled cylinders, and rotating discs are discussed. The implementation of the method appears to be far easier than analogous BIE formulations that have been suggested for elastoplastic problems.
Integrated model for pricing, delivery time setting, and scheduling in make-to-order environments
Garmdare, Hamid Sattari; Lotfi, M. M.; Honarvar, Mahboobeh
2018-03-01
Usually, in make-to-order environments, which work only in response to customers' orders, manufacturers seeking to maximize profit should offer the best price and delivery time for an order, considering the existing capacity and the customer's sensitivity to both factors. In this paper, an integrated approach for pricing, delivery time setting, and scheduling of newly arriving orders is proposed, based on the existing capacity and the orders already accepted in the system. In the problem, the acquired market demand depends on the prices and delivery times of both the manufacturer and its competitors. A mixed-integer non-linear programming model is presented for the problem. After converting it to a pure non-linear model, it is validated through a case study. The efficiency of the proposed model is confirmed by comparing it to both the literature and current practice. Finally, sensitivity analysis for the key parameters is carried out.
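A toy version of this price/delivery-time trade-off can be sketched with a linear demand function and a grid search. All coefficients and the capacity figure below are invented for illustration; the paper's model is a mixed-integer non-linear program, not this simplification.

```python
# Demand falls in both the quoted price and the quoted lead time, and
# production capacity caps what can be promised. Coefficients are made up.
def best_quote(a=100.0, b=2.0, c=5.0, unit_cost=10.0, capacity=40.0):
    """Grid-search the price/lead-time pair maximizing profit subject to
    demand D = a - b*price - c*lead_time not exceeding capacity."""
    best = None
    for p10 in range(100, 501):          # price from 10.0 to 50.0, step 0.1
        price = p10 / 10.0
        for lead_time in range(1, 11):   # quoted delivery time in days
            demand = max(0.0, a - b * price - c * lead_time)
            if demand > capacity:        # cannot promise more than capacity
                continue
            profit = (price - unit_cost) * demand
            if best is None or profit > best[0]:
                best = (profit, price, lead_time)
    return best

profit, price, lead = best_quote()
```

With these coefficients the shortest feasible lead time wins, since every extra day of delay erodes demand faster than it relieves capacity.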
The Real-Time Monitoring Service Platform for Land Supervision Based on Cloud Integration
Sun, J.; Mao, M.; Xiang, H.; Wang, G.; Liang, Y.
2018-04-01
Remote sensing monitoring has become an important means for land and resources departments to strengthen supervision. Aiming at the problems of low monitoring frequency and poor data currency in current remote sensing monitoring, this paper describes the research and development of a cloud-integrated real-time monitoring service platform for land supervision, which enhances the monitoring frequency by acquiring domestic satellite image data comprehensively and accelerates remote sensing image data processing by exploiting intelligent dynamic processing technology for multi-source images. A pilot application in the Jinan Bureau of State Land Supervision proved that the real-time monitoring approach for land supervision is feasible. In addition, real-time monitoring and early-warning functions were implemented for illegal land use, permanent basic farmland protection, and boundary breaches in urban development. The application has achieved remarkable results.
Directory of Open Access Journals (Sweden)
Ming-Feng Yang
2016-01-01
Nowadays, in order to achieve advantages in supply chain management, keeping inventory at an adequate level and enhancing the customer service level are two critical practices for decision makers. Generally, uncertain lead time and defective products have much to do with inventory and service level. Therefore, this study mainly aims at developing a multiechelon integrated just-in-time inventory model with uncertain lead time and imperfect quality to enhance the benefits of the logistics model. In addition, the Ant Colony Algorithm (ACA) is employed to determine the optimal solutions. Moreover, based on our proposed model and analysis, the ACA is more efficient than Particle Swarm Optimization (PSO) and Lingo for the SMEIJI model. An example is provided to illustrate how the production run and defective rate affect system costs. Finally, the results of our research provide some managerial insights which support decision makers in real-world operations.
Integrable time-dependent Hamiltonians, solvable Landau-Zener models and Gaudin magnets
Yuzbashyan, Emil A.
2018-05-01
We solve the non-stationary Schrödinger equation for several time-dependent Hamiltonians, such as the BCS Hamiltonian with an interaction strength inversely proportional to time, periodically driven BCS and linearly driven inhomogeneous Dicke models as well as various multi-level Landau-Zener tunneling models. The latter are Demkov-Osherov, bow-tie, and generalized bow-tie models. We show that these Landau-Zener problems and their certain interacting many-body generalizations map to Gaudin magnets in a magnetic field. Moreover, we demonstrate that the time-dependent Schrödinger equation for the above models has a similar structure and is integrable with a similar technique as Knizhnik-Zamolodchikov equations. We also discuss applications of our results to the problem of molecular production in an atomic Fermi gas swept through a Feshbach resonance and to the evaluation of the Landau-Zener transition probabilities.
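The Landau-Zener problems mentioned above admit a closed-form transition probability that can be checked numerically. The sketch below (in units where ħ = 1, with made-up parameter values) integrates the two-level Schrödinger equation for H(t) = [[vt/2, Δ], [Δ, −vt/2]] with a hand-rolled RK4 stepper and compares the probability of remaining in the initial diabatic state against the textbook formula exp(−2πΔ²/v). It illustrates the standard two-level model only, not the paper's Gaudin-magnet construction.

```python
import math

def landau_zener_numeric(delta=0.25, v=1.0, t0=-60.0, t1=60.0, dt=0.002):
    """Integrate i dc/dt = H(t) c for the linear sweep
    H(t) = [[v*t/2, delta], [delta, -v*t/2]] (hbar = 1) with RK4,
    starting in the lower diabatic state; return the probability of
    remaining in that diabatic state after the sweep."""
    def deriv(t, c0, c1):
        e = 0.5 * v * t
        return (-1j * (e * c0 + delta * c1),
                -1j * (delta * c0 - e * c1))

    c0, c1 = 1.0 + 0.0j, 0.0 + 0.0j
    t = t0
    for _ in range(int(round((t1 - t0) / dt))):
        k1 = deriv(t, c0, c1)
        k2 = deriv(t + dt / 2, c0 + dt / 2 * k1[0], c1 + dt / 2 * k1[1])
        k3 = deriv(t + dt / 2, c0 + dt / 2 * k2[0], c1 + dt / 2 * k2[1])
        k4 = deriv(t + dt, c0 + dt * k3[0], c1 + dt * k3[1])
        c0 += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        c1 += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        t += dt
    return abs(c0) ** 2

def landau_zener_exact(delta=0.25, v=1.0):
    """Asymptotic Landau-Zener diabatic survival probability."""
    return math.exp(-2.0 * math.pi * delta ** 2 / v)
```

The finite sweep window leaves small residual oscillations in the population, so the numerical and asymptotic values agree only to a few percent here.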
Directory of Open Access Journals (Sweden)
Mary Hokazono
CONTEXT AND OBJECTIVE: Transcranial Doppler (TCD) detects stroke risk among children with sickle cell anemia (SCA). Our aim was to evaluate TCD findings in patients with different sickle cell disease (SCD) genotypes and correlate the time-averaged maximum mean (TAMM) velocity with hematological characteristics. DESIGN AND SETTING: Cross-sectional analytical study in the Pediatric Hematology sector, Universidade Federal de São Paulo. METHODS: 85 SCD patients of both sexes, aged 2-18 years, were evaluated, divided into: group I (62 patients with SCA/Sβ0 thalassemia) and group II (23 patients with SC hemoglobinopathy/Sβ+ thalassemia). TCD was performed and reviewed by a single investigator using Doppler ultrasonography with a 2 MHz transducer, in accordance with the Stroke Prevention Trial in Sickle Cell Anemia (STOP) protocol. The hematological parameters evaluated were: hematocrit, hemoglobin, reticulocytes, leukocytes, platelets and fetal hemoglobin. Univariate analysis was performed and Pearson's coefficient was calculated for hematological parameters and TAMM velocities (P < 0.05). RESULTS: TAMM velocities were 137 ± 28 and 103 ± 19 cm/s in groups I and II, respectively, and correlated negatively with hematocrit and hemoglobin in group I. There was one abnormal result (1.6%) and five conditional results (8.1%) in group I. All results were normal in group II. Middle cerebral arteries were the only vessels affected. CONCLUSION: There was a low prevalence of abnormal Doppler results in patients with sickle cell disease. Time-averaged maximum mean velocity differed significantly between the genotypes and correlated with hematological characteristics.
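The univariate analysis described above reduces to Pearson's correlation coefficient. The sketch below uses invented numbers (not patient data), chosen only to mimic the reported negative correlation between hematocrit and TAMM velocity.

```python
import math

def pearson_r(xs, ys):
    """Pearson's correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical values: hematocrit (%) vs TAMM velocity (cm/s), decreasing.
hematocrit = [21.0, 23.5, 25.0, 27.0, 29.5, 31.0]
tamm = [165.0, 150.0, 148.0, 135.0, 128.0, 120.0]
r = pearson_r(hematocrit, tamm)
```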
Effect of epoxy molding compound aging time on the molding process for integrated circuit packaging
Tachapitunsuk, Jirayu; Ugsornrat, Kessararat; Srisuwitthanon, Warayoot; Thonglor, Panakamon
2017-09-01
This research studied the effect of the aging time of epoxy molding compound (EMC) on the reliability performance of integrated circuit (IC) packages in the molding process. The molding process is an important part of IC packaging: it protects the IC chip (or die) from the temperature and humidity of the environment by encapsulating it in EMC. In a typical molding process, EMC is stored frozen at 5 °C and then left at room temperature (25 °C) for a shelf aging time of 24 hours before the die is molded onto the lead frame. The aging time affects the reliability performance of the IC package because of differing temperature and humidity conditions inside the package. In the experiment, the aging time of the EMC was varied from 0 to 24 hours in the molding process of SOIC-8L packages. The packages were then examined by X-ray and scanning acoustic microscopy to analyze the properties of the EMC as a function of aging time, and to assess delamination, internal voids, and wire sweep inside the packages for the different aging times. The results revealed that the aging time of the EMC affects the properties and reliability performance of the molding process.
BiGGEsTS: integrated environment for biclustering analysis of time series gene expression data
Directory of Open Access Journals (Sweden)
Madeira Sara C
2009-07-01
Abstract Background The ability to monitor changes in expression patterns over time, and to observe the emergence of coherent temporal responses using expression time series, is critical to advance our understanding of complex biological processes. Biclustering has been recognized as an effective method for discovering local temporal expression patterns and unraveling potential regulatory mechanisms. The general biclustering problem is NP-hard. In the case of time series this problem is tractable, and efficient algorithms can be used. However, there is still a need for specialized applications able to take advantage of the temporal properties inherent to expression time series, both from a computational and a biological perspective. Findings BiGGEsTS makes available state-of-the-art biclustering algorithms for analyzing expression time series. Gene Ontology (GO) annotations are used to assess the biological relevance of the biclusters. Methods for preprocessing expression time series and post-processing results are also included. The analysis is additionally supported by a visualization module capable of displaying informative representations of the data, including heatmaps, dendrograms, expression charts and graphs of enriched GO terms. Conclusion BiGGEsTS is a free open source graphical software tool for revealing local coexpression of genes in specific intervals of time, while integrating meaningful information on gene annotations. It is freely available at: http://kdbio.inesc-id.pt/software/biggests. We present a case study on the discovery of transcriptional regulatory modules in the response of Saccharomyces cerevisiae to heat stress.
Integrated survival analysis using an event-time approach in a Bayesian framework.
Walsh, Daniel P; Dreitz, Victoria J; Heisey, Dennis M
2015-02-01
Event-time or continuous-time statistical approaches have been applied throughout the biostatistical literature and have led to numerous scientific advances. However, these techniques have traditionally relied on knowing failure times. This has limited application of these analyses, particularly within the ecological field, where fates of marked animals may be unknown. To address these limitations, we developed an integrated approach within a Bayesian framework to estimate hazard rates in the face of unknown fates. We combine failure/survival times from individuals whose fates are known and times that are interval-censored with information from those whose fates are unknown, and model the process of detecting animals with unknown fates. This provides the foundation for our integrated model and permits necessary parameter estimation. We provide the Bayesian model, its derivation, and use simulation techniques to investigate the properties and performance of our approach under several scenarios. Lastly, we apply our estimation technique using a piece-wise constant hazard function to investigate the effects of year, age, chick size and sex, sex of the tending adult, and nesting habitat on mortality hazard rates of the endangered mountain plover (Charadrius montanus) chicks. Traditional models were inappropriate for this analysis because fates of some individual chicks were unknown due to failed radio transmitters. Simulations revealed biases of posterior mean estimates were minimal (≤ 4.95%), and posterior distributions behaved as expected, with RMSE of the estimates decreasing as sample sizes, detection probability, and survival increased. We determined that mortality hazard rates for plover chicks were highest at birth weights and/or whose nest was within agricultural habitats. Based on its performance, our approach greatly expands the range of problems for which event-time analyses can be used by eliminating the need for having completely known fate data.
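The piece-wise constant hazard function used in the application can be illustrated without the full Bayesian machinery. In the sketch below (all breakpoints and rates are invented), survival to time t is the exponential of minus the cumulative hazard, and an interval-censored failure in (t_low, t_high] contributes S(t_low) − S(t_high) to the likelihood.

```python
import math

def cumulative_hazard(t, breakpoints, rates):
    """Integrate a piecewise-constant hazard from 0 to t.
    breakpoints: right edges of the first len(rates)-1 pieces."""
    H, start = 0.0, 0.0
    edges = list(breakpoints) + [float("inf")]
    for edge, rate in zip(edges, rates):
        if t <= start:
            break
        H += rate * (min(t, edge) - start)
        start = edge
    return H

def survival(t, breakpoints, rates):
    """Survival function S(t) = exp(-cumulative hazard)."""
    return math.exp(-cumulative_hazard(t, breakpoints, rates))

def interval_censored_loglik(t_low, t_high, breakpoints, rates):
    """Log-likelihood of a failure known only to lie in (t_low, t_high]."""
    return math.log(survival(t_low, breakpoints, rates)
                    - survival(t_high, breakpoints, rates))

# Hypothetical hazard: 0.10/day for days 0-5, then 0.02/day afterwards.
bp, rates = [5.0], [0.10, 0.02]
s10 = survival(10.0, bp, rates)   # exp(-(0.5 + 0.1)) = exp(-0.6)
```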
Falkensammer, Peter; Soegner, Peter I.; zur Nedden, Dieter
2002-05-01
The integration of RIS-PACS systems in radiology units is intended to reduce time consumption in the radiology workflow and thus to increase radiologist productivity. Along with the RIS-PACS integration at the University Hospital Innsbruck, we analyzed the workflow from patient admission to release of final reports before implementation. The follow-up study after six months of the implementation is currently in progress. In this study we compared chest and skeletal x-ray examinations in 969 patients before the implementation. Plotting the admission-to-release-of-final-report interval showed a two-peak distribution, with the first peak corresponding to a release of final results on the same day and the second peak to a release on the following day. In the chest x-ray group, 57% were released the same day (mean value 4:02 hours) and 43% the next day (mean value 21:47 hours). For the skeletal x-rays, 40% were released the same day (mean value 3:58 hours) and 60% were released the next day (mean value 21:05 hours). In summary, the average chest x-ray requires less time than a skeletal x-ray, because a greater percentage of reports is released the same day. The most important result is that the most time-consuming work step is the exchange of data media between radiologist and secretary, taking at least 5 hours.
Time-integrated thyroid dose for accidental releases from Pakistan Research Reactor-1
International Nuclear Information System (INIS)
Raza, S Shoaib; Iqbal, M; Salahuddin, A; Avila, R; Pervez, S
2004-01-01
The two-hourly time-integrated thyroid dose due to radio-iodines released to the atmosphere through the exhaust stack of Pakistan Research Reactor-1 (PARR-1), under accident conditions, has been calculated. A computer program, PAKRAD (which was developed under an IAEA research grant, PAK/RCA/8990), was used for the dose calculations. The sensitivity of the dose results to different exhaust flow rates and atmospheric stability classes was studied. The effect of assuming a constant activity concentration (as a function of time) within the containment air volume versus an exponentially decreasing air concentration on the time-integrated dose was also studied for various flow rates (1000-50,000 m³ h⁻¹). The comparison indicated that the results were insensitive to the containment air exhaust rates at or below 2000 m³ h⁻¹, when the prediction with the constant activity concentration assumption was compared to an exponentially decreasing activity concentration model. The results also indicated that the plume touchdown distance increases with increasing atmospheric stability. (note)
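The reported insensitivity at low exhaust rates follows from the shape of the depletion integral: with exponential depletion at air-exchange rate λ = Q/V, the time-integrated stack release over T hours is (1 − e^(−λT))/(λT) times the constant-concentration prediction, which approaches 1 when QT/V is small. The sketch below uses an assumed containment volume (20,000 m³ is a round illustrative number, not a PARR-1 figure).

```python
import math

def release_ratio(flow_m3_per_h, volume_m3=20000.0, hours=2.0):
    """Ratio of the time-integrated stack release under exponential
    containment depletion to the constant-concentration assumption:
    (1 - exp(-Q*T/V)) / (Q*T/V)."""
    x = flow_m3_per_h * hours / volume_m3  # lambda * T, dimensionless
    if x == 0.0:
        return 1.0
    return (1.0 - math.exp(-x)) / x

ratio_low = release_ratio(2000.0)     # low exhaust rate: near 1
ratio_high = release_ratio(50000.0)   # high exhaust rate: strongly depleted
```

At 2000 m³ h⁻¹ the two models differ by under ten percent with this assumed volume, while at 50,000 m³ h⁻¹ the constant-concentration assumption overpredicts the release severalfold, consistent with the sensitivity pattern reported above.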
Feasibility of the integration of CRONOS, a 3-D neutronics code, into real-time simulators
International Nuclear Information System (INIS)
Ragusa, J.C.
2001-01-01
In its effort to contribute to nuclear power plant safety, CEA proposes the integration of an engineering grade 3-D neutronics code into a real-time plant analyser. This paper describes the capabilities of the neutronics code CRONOS to achieve a fast running performance. First, we will present current core models in simulators and explain their drawbacks. Secondly, the main features of CRONOS's spatial-kinetics methods will be reviewed. We will then present an optimum core representation with respect to mesh size, choice of finite elements (FE) basis and execution time, for accurate results, as well as the multi 1-D thermal-hydraulics (T/H) model developed to take into account 3-D effects in updating the cross-sections. A Main Steam Line Break (MSLB) End-of-Life (EOL) Hot-Zero-Power (HZP) accident will be used as an example, before we conclude with the perspectives of integrating CRONOS's 3-D core model into real-time simulators. (author)
Maximum permissible voltage of YBCO coated conductors
Energy Technology Data Exchange (ETDEWEB)
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights:
• We examine three kinds of tapes' maximum permissible voltage.
• We examine the relationship between quenching duration and maximum permissible voltage.
• Continuous I_c degradation under repetitive quenching when tapes reach the maximum permissible voltage.
• The relationship between maximum permissible voltage and resistance, temperature.
Abstract: Superconducting fault current limiters (SFCL) can reduce short-circuit currents in electrical power systems. One of the most important things in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (I_c) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I_c degradation or burnout occurs. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm, respectively. Based on the results for these samples, the whole length of CC used in the design of an SFCL can be determined.
Achieving maximum baryon densities
International Nuclear Information System (INIS)
Gyulassy, M.
1984-01-01
In continuing work on nuclear stopping power in the energy range E_lab ≈ 10 GeV/nucleon, calculations were made of the energy and baryon densities that could be achieved in uranium-uranium collisions. Results are shown. The energy density reached could exceed 2 GeV/fm³ and baryon densities could reach as high as ten times normal nuclear density.
Real-time functional integral approach to the quantum disordered spin systems
International Nuclear Information System (INIS)
Kopec, T.K.
1989-01-01
In this paper the effect of randomness and frustration in the quantum Ising spin glass in a transverse field is studied by using thermofield dynamics (TFD), the real-time, finite-temperature quantum field theory. It is shown that the method can be conveniently used for the averaging of the free energy of the system, completely avoiding the use of the n-replica trick. The effective dynamic Lagrangian for the disorder-averaged causal, correlation and response Green functions is derived by a functional integral approach. Furthermore, the properties of this Lagrangian are analyzed by the saddle point method, which leads to the self-consistent equation for the spin glass order parameter.
International Nuclear Information System (INIS)
Pant, P.; Kandari, T.; Ramola, R.C.; Semwal, C.P.; Prasad, M.
2018-01-01
The basic processes which influence the concentration of radon and thoron decay products are attachment, recoil and deposition, together with the room-specific parameters of radon exhalation and ventilation. The freshly formed decay products have high diffusivities (especially in air) and a strong ability to stick to surfaces. According to UNSCEAR 1977, radon daughters may be combined as the so-called equilibrium equivalent concentration, which is related to the potential alpha energy concentration. In the present study an effort has been made to observe the diurnal variation of radon and thoron progeny concentrations using a time-integrated flow-mode sampler.
Fully integrated monolithic optoelectronic transducer for real-time protein and DNA detection
DEFF Research Database (Denmark)
Misiakos, Konstatinos; S. Petrou, Panagiota; E. Kakabakos, Sotirios
2010-01-01
The development and testing of a portable bioanalytical device capable of real-time monitoring of binding assays was demonstrated. The device was based on arrays of nine optoelectronic transducers monolithically integrated on silicon chips. The optocouplers consisted of nine silicon av...... by exploiting wavelength filtering on photonic crystal engineered waveguides. The proposed miniaturized sensing device with proper packaging and accompanied by a portable instrument can find wide application as a platform for reliable and cost-effective point-of-care diagnosis....
EVALUATION OF THE POUNDING FORCES DURING EARTHQUAKE USING EXPLICIT DYNAMIC TIME INTEGRATION METHOD
Directory of Open Access Journals (Sweden)
Nica George Bogdan
2017-09-01
Pounding between adjacent structures during earthquakes is a subject of high significance for structural engineers working in urban areas. In this paper, two ways to account for structural pounding are used in a MATLAB code, namely the classical stereomechanical approach and a nonlinear viscoelastic impact element. The numerical study is performed on SDOF structures subjected to the El Centro record. While most of the studies available in the literature rely on the implicit Newmark time integration method, in this study the equations of motion are numerically integrated using the central finite difference method, an explicit method whose main advantage is that the displacement at step i+1 is calculated from the loads at step i. Thus, the collision is checked and the pounding forces are taken into account in the equation of motion more easily than in an implicit integration method. First, a comparison is made using available data in the literature. Both linear and nonlinear behavior of the structures during the earthquake are further investigated. Several layout scenarios are also investigated, in which one or more weak buildings are adjacent to a stiffer building. One of the main findings of this paper concerns the behavior of a weak structure located between two stiff structures.
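A minimal sketch of the explicit scheme described above, for one SDOF oscillator pounding against a rigid barrier: the contact force evaluated from the displacement at step i enters the central-difference update for step i+1. The linear impact spring used here is a deliberate simplification of the stereomechanical and nonlinear viscoelastic elements discussed in the paper, and all parameter values are invented.

```python
import math

def central_difference_pounding(m=1000.0, k=4.0e5, zeta=0.05,
                                gap=0.01, k_p=1.0e7,
                                v0=0.5, dt=1.0e-4, t_end=2.0):
    """Explicit central-difference integration of m*u'' + c*u' + k*u =
    -f_p for an SDOF oscillator released with initial velocity v0 toward
    a rigid barrier at distance `gap`. The pounding force f_p, from a
    linear impact spring, is evaluated at step i and enters the update
    for step i+1, as in the explicit scheme described in the abstract.
    Returns (max displacement, max pounding force)."""
    c = 2.0 * zeta * math.sqrt(k * m)          # viscous damping
    a_eff = m / dt**2 + c / (2.0 * dt)
    b_eff = k - 2.0 * m / dt**2
    c_eff = m / dt**2 - c / (2.0 * dt)

    # Central-difference startup: fictitious displacement at t = -dt.
    u_prev = -dt * v0 + 0.5 * dt**2 * (-c * v0 / m)
    u = 0.0
    max_u, max_fp = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        f_p = k_p * (u - gap) if u > gap else 0.0  # contact force at step i
        u_next = (-f_p - b_eff * u - c_eff * u_prev) / a_eff
        u_prev, u = u, u_next
        max_u = max(max_u, u)
        max_fp = max(max_fp, f_p)
    return max_u, max_fp

max_u, max_fp = central_difference_pounding()
```

With these parameters the free-vibration amplitude (v0/ω = 2.5 cm) exceeds the 1 cm gap, so contact occurs, and the stiff impact spring limits penetration to a few millimetres; the time step also satisfies the explicit-stability bound for the stiffer in-contact system.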
Signal and noise analysis in TRION-Time-Resolved Integrative Optical Fast Neutron detector
International Nuclear Information System (INIS)
Vartsky, D; Feldman, G; Mor, I; Goldberg, M B; Bar, D; Dangendorf, V
2009-01-01
TRION is a sub-mm spatial resolution fast neutron imaging detector, which employs an integrative optical time-of-flight technique. The detector was developed for fast neutron resonance radiography, a method capable of detecting a broad range of conventional and improvised explosives. In this study we have analyzed in detail, using Monte-Carlo calculations and experimentally determined parameters, all the processes that influence the signal and noise in the TRION detector. In contrast to event-counting detectors where the signal-to-noise ratio is dependent only on the number of detected events (quantum noise), in an energy-integrating detector additional factors, such as the fluctuations in imparted energy, number of photoelectrons, system gain and other factors will contribute to the noise. The excess noise factor (over the quantum noise) due to these processes was 4.3, 2.7, 2.1, 1.9 and 1.9 for incident neutron energies of 2, 4, 7.5, 10 and 14 MeV, respectively. It is shown that, even under ideal light collection conditions, a fast neutron detection system operating in an integrative mode cannot be quantum-noise-limited due to the relatively large variance in the imparted proton energy and the resulting scintillation light distributions.
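The excess-noise argument above can be illustrated numerically. For an energy-integrating detector, the excess noise factor over quantum noise due to the imparted-energy spread is F = ⟨E²⟩/⟨E⟩², equal to 1 for an ideal counting detector. The sketch below uses the approximately uniform recoil-proton spectrum from n-p elastic scattering; it captures only the imparted-energy term, while photoelectron statistics and gain fluctuations raise the factors reported above further.

```python
import random

def excess_noise_factor(samples):
    """Excess noise factor F = <E^2>/<E>^2 of an energy-integrating
    detector; F = 1 corresponds to an ideal event-counting system."""
    n = len(samples)
    mean = sum(samples) / n
    mean_sq = sum(e * e for e in samples) / n
    return mean_sq / mean**2

# Recoil-proton energy from n-p elastic scattering is approximately
# uniform between 0 and the incident neutron energy (here 2 MeV).
random.seed(0)
recoils = [random.uniform(0.0, 2.0) for _ in range(100_000)]
print(excess_noise_factor(recoils))  # -> about 4/3 for a uniform spectrum
```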
Gómez Rodríguez, Rafael Ángel
2014-01-01
To say that someone possesses integrity is to claim that the person is almost predictable in his or her responses to specific situations, and can judge prudently and act correctly. There is a close interrelationship between integrity and autonomy, and autonomy rests on the deeper moral claim of all humans to integrity of the person. Integrity has two senses of significance for medical ethics: the first refers to the integrity of the person in its bodily, psychosocial and intellectual elements; in the second sense, integrity is a virtue. Another facet of integrity of the person is the integrity of the values we cherish and espouse. The physician must be a person of integrity if the integrity of the patient is to be safeguarded. Autonomy has reduced violations in the past, but the character and virtues of the physician are the ultimate safeguard of the patient's autonomy. A very important field in medicine is scientific research. It is the character of the investigator that determines the moral quality of research. The problem arises when legitimate self-interest is replaced by selfishness, particularly when human subjects are involved. The final safeguard of the moral quality of research is the character and conscience of the investigator. Teaching must be relevant in the scientific field, but the most effective way to teach virtue ethics is through the example of a respected scientist.
Maximum entropy tokamak configurations
International Nuclear Information System (INIS)
Minardi, E.
1989-01-01
The new entropy concept for the collective magnetic equilibria is applied to the description of the states of a tokamak subject to ohmic and auxiliary heating. The condition for the existence of steady state plasma states with vanishing entropy production implies, on one hand, the resilience of specific current density profiles and, on the other, severe restrictions on the scaling of the confinement time with power and current. These restrictions are consistent with Goldston scaling and with the existence of a heat pinch. (author)
In a Time of Change: Integrating the Arts and Humanities with Climate Change Science in Alaska
Leigh, M.; Golux, S.; Franzen, K.
2011-12-01
The arts and humanities have a powerful capacity to create lines of communication between the public, policy and scientific spheres. A growing network of visual and performing artists, writers and scientists has been actively working together since 2007 to integrate scientific and artistic perspectives on climate change in interior Alaska. These efforts have involved field workshops and collaborative creative processes culminating in public performances and a visual art exhibit. The most recent multimedia event was entitled In a Time of Change: Envisioning the Future, and challenged artists and scientists to consider future scenarios of climate change. This event included a public performance featuring original theatre, modern dance, Alaska Native Dance, poetry and music that was presented concurrently with an art exhibit featuring original works by 24 Alaskan visual artists. A related effort targeted K12 students, through an early college course entitled Climate Change and Creative Expression, which was offered to high school students at a predominantly Alaska Native charter school and integrated climate change science, creative writing, theatre and dance. Our program at Bonanza Creek Long Term Ecological Research (LTER) site is just one of many successful efforts to integrate arts and humanities with science within and beyond the NSF LTER Program. The efforts of various LTER sites to engage the arts and humanities with science, the public and policymakers have successfully generated excitement, facilitated mutual understanding, and promoted meaningful dialogue on issues facing science and society. The future outlook for integration of arts and humanities with science appears promising, with increasing interest from artists, scientists and scientific funding agencies.
Wolfs, Cecile J. A.; Brás, Mariana G.; Schyns, Lotte E. J. R.; Nijsten, Sebastiaan M. J. J. G.; van Elmpt, Wouter; Scheib, Stefan G.; Baltes, Christof; Podesta, Mark; Verhaegen, Frank
2017-08-01
The aim of this work is to assess the performance of 2D time-integrated (2D-TI), 2D time-resolved (2D-TR) and 3D time-integrated (3D-TI) portal dosimetry in detecting dose discrepancies between the planned and (simulated) delivered dose caused by simulated changes in the anatomy of lung cancer patients. For six lung cancer patients, tumor shift, tumor regression and pleural effusion are simulated by modifying their CT images. Based on the modified CT images, time-integrated (TI) and time-resolved (TR) portal dose images (PDIs) are simulated and 3D-TI doses are calculated. The modified and original PDIs and 3D doses are compared by a gamma analysis with various gamma criteria. Furthermore, the difference in the D95% (ΔD95%) of the GTV is calculated and used as a gold standard. The correlation between the gamma fail rate and the ΔD95% is investigated, as well as the sensitivity and specificity of all combinations of portal dosimetry method, gamma criteria and gamma fail rate threshold. On the individual patient level, there is a correlation between the gamma fail rate and the ΔD95%, which cannot be found at the group level. The sensitivity and specificity analysis showed that there is no single combination of portal dosimetry method, gamma criteria and gamma fail rate threshold that can detect all simulated anatomical changes. This work shows that it is more beneficial to relate portal dosimetry and DVH analysis at the patient level, rather than trying to quantify a relationship for a group of patients. With regard to optimizing sensitivity and specificity, different combinations of portal dosimetry method, gamma criteria and gamma fail rate should be used to optimally detect certain types of anatomical changes.
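The gamma analysis used above combines a dose-difference criterion with a distance-to-agreement (DTA) criterion. A minimal 1-D global-gamma sketch follows; the 3%/3 mm criteria and the dose profile are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def gamma_index(dose_ref, dose_eval, spacing, dd=0.03, dta=3.0):
    """1D global gamma analysis: for each reference point, minimize the
    combined dose-difference / distance metric over all evaluated points.
    `dd` is the dose criterion (fraction of max dose), `dta` is in mm.
    gamma <= 1 counts as a pass."""
    x = np.arange(len(dose_ref)) * spacing
    d_max = dose_ref.max()
    gammas = np.empty(len(dose_ref))
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dist2 = ((x - xi) / dta) ** 2
        dose2 = ((dose_eval - di) / (dd * d_max)) ** 2
        gammas[i] = np.sqrt((dist2 + dose2).min())
    return gammas

ref = np.array([0.0, 10.0, 50.0, 100.0, 50.0, 10.0, 0.0])
gam = gamma_index(ref, ref * 1.02, spacing=1.0)  # 2% scaled "delivered" dose
fail_rate = np.mean(gam > 1.0)
```

The gamma fail rate (fraction of points with gamma above 1) is the summary statistic that the abstract correlates against ΔD95%.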
An integrated approach using high time-resolved tools to study the origin of aerosols
International Nuclear Information System (INIS)
Di Gilio, A.; Gennaro, G. de; Dambruoso, P.; Ventrella, G.
2015-01-01
Long-range transport of natural and/or anthropogenic particles can contribute significantly to PM10 and PM2.5 concentrations, and some European cities often fail to comply with PM daily limit values due to the additional impact of particles from remote sources. For this reason, reliable methodologies to identify long-range transport (LRT) events would be useful to better understand air pollution phenomena and support proper decision-making. This study explores the potential of an integrated, high time-resolved monitoring approach for the identification and characterization of local, regional and long-range transport events of high PM. In particular, a goal of this work was also the identification of time-limited events. For this purpose, a high time-resolved monitoring campaign was carried out at an urban background site in Bari (southern Italy) for about 20 days (1st–20th October 2011). The integration of the collected data, such as the hourly measurements of inorganic ions in PM2.5 and of their gas precursors and the natural radioactivity, with the analyses of aerosol maps and hourly back trajectories (BT) provided useful information for the identification and chemical characterization of local sources and trans-boundary intrusions. Non-sea-salt (nss) sulfate levels were found to increase when air masses came from northeastern Europe and higher dispersive conditions of the atmosphere were detected. Conversely, higher nitrate and lower nss-sulfate concentrations were registered during air mass stagnation and were attributed to local traffic sources. In some cases, combinations of local and trans-boundary sources were observed. Finally, statistical investigations such as principal component analysis (PCA) applied to the hourly ion concentrations, together with cluster analyses and the Potential Source Contribution Function (PSCF) and Concentration Weighted Trajectory (CWT) models computed on the hourly back-trajectories, enabled a cognitive framework to be completed
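The PSCF model mentioned above can be sketched in a few lines. The 1-degree grid and the concentration threshold below are illustrative assumptions; operational implementations also down-weight cells with few endpoints.

```python
from collections import defaultdict

def pscf(trajectories, concentrations, threshold):
    """Potential Source Contribution Function on a 1-degree grid (sketch).
    `trajectories` is one back-trajectory per hourly sample, each a list of
    (lat, lon) endpoints. PSCF_ij = m_ij / n_ij, where n_ij counts all
    endpoints falling in cell (i, j) and m_ij counts endpoints belonging to
    trajectories that arrived when the concentration exceeded `threshold`."""
    n = defaultdict(int)
    m = defaultdict(int)
    for traj, conc in zip(trajectories, concentrations):
        polluted = conc > threshold
        for lat, lon in traj:
            cell = (int(lat), int(lon))
            n[cell] += 1
            if polluted:
                m[cell] += 1
    return {cell: m[cell] / n[cell] for cell in n}
```

Cells with a PSCF value close to 1 are flagged as probable source regions for the high-concentration samples.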
DEFF Research Database (Denmark)
Casla, Soraya; Hojman, Pernille; Cubedo, Ricardo
2014-01-01
BACKGROUND: Physical activity has been demonstrated to increase survival in breast cancer patients, but few breast cancer patients meet the general recommendations for physical activity. The aim of this pilot study was to investigate if a supervised integrated counseling and group-based exercise...... program could increase leisure-time activity in women with breast cancer. METHODS: This pilot project, designed as a single-arm study with pre-post testing, consisted of 24 classes of combined aerobic and strength exercise training as well as classes on dietary and health behavior. A total of 48 women...... with breast cancer who were undergoing or had recently completed anticancer treatment completed the study. Leisure-time physical activity, grip strength, functional capacity, quality of life (QoL), and depression were assessed at baseline, after intervention, and at the 12-week follow-up after intervention...
Li, Ping
2014-07-01
This paper presents an algorithm hybridizing the discontinuous Galerkin time domain (DGTD) method and a time domain boundary integral (BI) algorithm for 3-D open-region electromagnetic scattering analysis. The computational domain of DGTD is rigorously truncated by analytically evaluating the incoming numerical flux from outside the truncation boundary through the BI method, based on Huygens' principle. The advantages of the proposed method are that it allows the truncation boundary to be conformal to arbitrary (convex/concave) scattering objects, and that well-separated scatterers can be truncated by their local meshes without losing the physics (such as coupling/multiple scattering) of the problem, thus reducing the total number of mesh elements. Furthermore, low frequency waves can be efficiently absorbed, and the field outside the truncation domain can be conveniently calculated using the same BI formulation. Numerical examples are benchmarked to demonstrate the accuracy and versatility of the proposed method.
Li, Ping; Shi, Yifei; Jiang, Lijun; Bagci, Hakan
2014-01-01
A scheme hybridizing discontinuous Galerkin time-domain (DGTD) and time-domain boundary integral (TDBI) methods for accurately analyzing transient electromagnetic scattering is proposed. Radiation condition is enforced using the numerical flux on the truncation boundary. The fields required by the flux are computed using the TDBI from equivalent currents introduced on a Huygens' surface enclosing the scatterer. The hybrid DGTDBI ensures that the radiation condition is mathematically exact and the resulting computation domain is as small as possible since the truncation boundary conforms to scatterer's shape and is located very close to its surface. Locally truncated domains can also be defined around each disconnected scatterer additionally reducing the size of the overall computation domain. Numerical examples demonstrating the accuracy and versatility of the proposed method are presented. © 2014 IEEE.
Integral blow moulding for cycle time reduction of CFR-TP aluminium contour joint processing
Barfuss, Daniel; Würfel, Veit; Grützner, Raik; Gude, Maik; Müller, Roland
2018-05-01
Integral blow moulding (IBM) as a joining technology for carbon fibre reinforced thermoplastic (CFR-TP) hollow profiles with metallic load introduction elements enables significant cycle time reduction by shortening the process chain. As the composite part is joined to the metallic part during its consolidation process, subsequent joining steps are omitted. In combination with a multi-scale structured load introduction element, its form closure function enables very high loads to be transferred and achieves high degrees of material utilization. This paper first shows the process set-up utilizing thermoplastic tape braided preforms and two-staged press and internal hydro formed load introduction elements. Second, it focuses on heating technologies and process optimization. Aiming at cycle time reduction, convection and induction heating are inspected with regard to the resulting product quality by photo micrographs and computer tomographic scans. Concluding remarks give final recommendations for the process design with regard to the structural design.
International Nuclear Information System (INIS)
Jamali, J.; Aghajafari, R.; Moini, R.; Sadeghi, H.
2002-01-01
A time-domain approach is presented to calculate electromagnetic fields inside a large Electromagnetic Pulse (EMP) simulator. This type of EMP simulator is used for studying the effect of electromagnetic pulses on electrical apparatus in various structures such as vehicles, aeroplanes, etc. The simulator consists of three planar transmission lines. To solve the problem, we first model the metallic structure of the simulator as a grid of conducting wires. The numerical solution of the governing electric field integral equation is then obtained using the method of moments in the time domain. To demonstrate the accuracy of the model, we consider a typical EMP simulator. The comparison of our results with those obtained experimentally in the literature validates the model introduced in this paper
Development of high speed integrated circuit for very high resolution timing measurements
International Nuclear Information System (INIS)
Mester, Christian
2009-10-01
A multi-channel high-precision low-power time-to-digital converter application-specific integrated circuit for high energy physics applications has been designed and implemented in a 130 nm CMOS process. To reach a target resolution of 24.4 ps, a novel delay element has been conceived. This nominal resolution has been experimentally verified with a prototype, with a minimum resolution of 19 ps. To further improve the resolution, a new interpolation scheme has been described. The ASIC has been designed to use a reference clock at the LHC bunch crossing frequency of 40 MHz and to generate all required timing signals internally, to ease its use within the framework of an LHC upgrade. Special care has been taken to minimise the power consumption. (orig.)
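A delay-line architecture, a common way to build such time-to-digital converters (not necessarily the one used in this ASIC), quantizes a time interval in units of one element delay. An idealized sketch, ignoring jitter and nonlinearity that a real chip must calibrate out:

```python
def tdc_measure(interval_ps, tap_delay_ps=24.4):
    """Idealized delay-line TDC: the start signal propagates through a chain
    of delay elements and the stop signal latches how many taps it passed.
    The 24.4 ps tap delay matches the target LSB quoted above."""
    taps = int(interval_ps // tap_delay_ps)   # number of taps traversed
    return taps * tap_delay_ps                # quantized interval

print(tdc_measure(100.0))  # -> 97.6 (4 taps of 24.4 ps)
```

The quantization error of this measurement is bounded by one LSB, which is why shrinking the element delay (or interpolating between taps) directly improves resolution.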
Komar integrals in asymptotically anti-de Sitter space-times
International Nuclear Information System (INIS)
Magnon, A.
1985-01-01
Recently, boundary conditions governing the asymptotic behavior of the gravitational field in the presence of a negative cosmological constant have been introduced using Penrose's conformal techniques. The subsequent analysis has led to expressions of conserved quantities (associated with asymptotic symmetries) involving asymptotic Weyl curvature. On the other hand, if the underlying space-time is equipped with isometries, a generalization of the Komar integral which incorporates the cosmological constant is also available. Thus, in the presence of an isometry, one is faced with two apparently unrelated definitions. It is shown that these definitions agree. This coherence supports the choice of boundary conditions for asymptotically anti-de Sitter space-times and reinforces the definitions of conserved quantities
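For reference, the Komar expression discussed above takes, for a Killing field ξ^a and a closed 2-surface S, the following standard form; sign and normalization conventions vary between authors, and the generalization mentioned in the abstract adds a term in the cosmological constant Λ:

```latex
Q[\xi] \;=\; -\frac{1}{8\pi G}\oint_{S} \nabla^{a}\xi^{b}\,\mathrm{d}S_{ab}
```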
Fragmentation function in non-equilibrium QCD using closed-time path integral formalism
International Nuclear Information System (INIS)
Nayak, Gouranga C.
2009-01-01
In this paper we implement the Schwinger-Keldysh closed-time path integral formalism in non-equilibrium QCD in accordance with the definition of the Collins-Soper fragmentation function. We consider a high-p_T parton in a QCD medium at initial time τ_0 with an arbitrary non-equilibrium (non-isotropic) distribution function f(p⃗) fragmenting to a hadron. We formulate the parton-to-hadron fragmentation function in non-equilibrium QCD in the light-cone quantization formalism. It may be possible to include final-state interactions with the medium via a modification of the Wilson lines in this definition of the non-equilibrium fragmentation function. This may be relevant to the study of hadron production from a quark-gluon plasma at RHIC and LHC. (orig.)
Directory of Open Access Journals (Sweden)
Lo Kenneth
2012-08-01
Full Text Available Abstract Background Inference about regulatory networks from high-throughput genomics data is of great interest in systems biology. We present a Bayesian approach to infer gene regulatory networks from time series expression data by integrating various types of biological knowledge. Results We formulate network construction as a series of variable selection problems and use linear regression to model the data. We extend the Bayesian model averaging (BMA) variable selection method to select regulators in the regression framework, and summarize the external biological knowledge by an informative prior probability distribution over the candidate regression models. Conclusions We demonstrate our method on simulated data and a set of time-series microarray experiments measuring the effect of a drug perturbation on gene expression levels, and show that it outperforms leading regression-based methods in the literature.
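A minimal sketch of BMA-style regulator selection follows. It uses a BIC approximation to the model posterior under a uniform model prior, where the paper instead uses an informative prior built from external knowledge; the gene names and subset-size cap are hypothetical.

```python
import itertools
import math
import numpy as np

def bma_inclusion_probs(X, y, names, max_size=2):
    """Approximate BMA over all regulator subsets of size <= max_size.
    Each OLS model is weighted by exp(-BIC/2), a common large-sample
    approximation to the model posterior; the posterior inclusion
    probability of a regulator is the summed weight of models containing it."""
    n, p = X.shape
    bics, models = [], []
    for size in range(max_size + 1):
        for subset in itertools.combinations(range(p), size):
            Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
            beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
            rss = float(np.sum((y - Xs @ beta) ** 2))
            bic = n * math.log(rss / n + 1e-12) + Xs.shape[1] * math.log(n)
            bics.append(bic)
            models.append(subset)
    b_min = min(bics)  # shift before exponentiating to avoid overflow
    weights = [math.exp(-0.5 * (b - b_min)) for b in bics]
    total = sum(weights)
    probs = {name: 0.0 for name in names}
    for w, subset in zip(weights, models):
        for j in subset:
            probs[names[j]] += w / total
    return probs
```

Run per target gene, thresholding the inclusion probabilities then yields the directed edges of the inferred network.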
Simulating variable-density flows with time-consistent integration of Navier-Stokes equations
Lu, Xiaoyi; Pantano, Carlos
2017-11-01
In this talk, we present several features of a high-order semi-implicit variable-density low-Mach Navier-Stokes solver. A new formulation to solve the pressure Poisson-like equation of variable-density flows is highlighted. With this formulation of the numerical method, we are able to solve all variables with a uniform order of accuracy in time (consistent with the time integrator being used). The solver is primarily designed to perform direct numerical simulations of turbulent premixed flames. Therefore, we also address other important elements, such as energy-stable boundary conditions, synthetic turbulence generation, and the flame anchoring method. Numerical examples include classical non-reacting constant/variable-density flows, as well as turbulent premixed flames.
Quantum-corrected plasmonic field analysis using a time domain PMCHWT integral equation
Uysal, Ismail E.
2016-03-13
When two structures are within sub-nanometer distance of each other, quantum tunneling, i.e., electrons "jumping" from one structure to another, becomes relevant. Classical electromagnetic solvers do not directly account for this additional path of current. In this work, an auxiliary tunnel made of Drude material is used to "connect" the structures as a support for this current path (R. Esteban et al., Nat. Commun., 2012). The plasmonic fields on the resulting connected structure are analyzed using a time domain surface integral equation solver. Time domain samples of the dispersive medium Green function and the dielectric permittivities are computed from the analytical inverse Fourier transform applied to the rational function representation of their frequency domain samples.
Strait, J
2009-01-01
Following the incident in sector 34, considerable effort has been made to improve the systems for detecting similar faults and to improve the safety systems to limit the damage if a similar incident should occur. Nevertheless, even after the consolidation and repairs are completed, other faults may still occur in the superconducting magnet systems, which could result in damage to the LHC. Such faults include both direct failures of a particular component or system, or an incorrect response to a “normal” upset condition, for example a quench. I will review a range of faults which could be reasonably expected to occur in the superconducting magnet systems, and which could result in substantial damage and down-time to the LHC. I will evaluate the probability and the consequences of such faults, and suggest what mitigations, if any, are possible to protect against each.
Liu, Yang
2016-03-25
A parallel plane-wave time-domain (PWTD)-accelerated explicit marching-on-in-time (MOT) scheme for solving the time domain electric field volume integral equation (TD-EFVIE) is presented. The proposed scheme leverages pulse functions and Lagrange polynomials to spatially and temporally discretize the electric flux density induced throughout the scatterers, and a finite difference scheme to compute the electric fields from the Hertz electric vector potentials radiated by the flux density. The flux density is explicitly updated during time marching by a predictor-corrector (PC) scheme and the vector potentials are efficiently computed by a scalar PWTD scheme. The memory requirement and computational complexity of the resulting explicit PWTD-PC-EFVIE solver scale as O(N_s log N_s) and O(N_s N_t), respectively. Here, N_s is the number of spatial basis functions and N_t is the number of time steps. A scalable parallelization of the proposed MOT scheme on distributed-memory CPU clusters is described. The efficiency, accuracy, and applicability of the resulting (parallelized) PWTD-PC-EFVIE solver are demonstrated via its application to the analysis of transient electromagnetic wave interactions on canonical and real-life scatterers represented with up to 25 million spatial discretization elements.
Chang, Wen-Chung; Chen, Chin-Sheng; Tai, Hung-Chi; Liu, Chia-Yuan; Chen, Yu-Jen
2014-01-01
The current practice of radiotherapy examines target coverage solely from the digitally reconstructed beam's eye view (BEV), in a way that is indirectly accessible and not in real time. We aimed to visualize treatment targets in real time from each BEV. The image data of a phantom or patients from ultrasound (US) and computed tomography (CT) scans were captured to perform image registration. We integrated US, CT, US/CT image registration, robotic manipulation of US, a radiation treatment planning system, and a linear accelerator to constitute an innovative target visualization system. The algorithm segmented the target organ in CT images, transformed and reconstructed US images to match each orientation, and generated image registration in real-time mode with acceptable accuracy. This image transformation allowed physicians to visualize the CT image-reconstructed target via a US probe outside the BEV that was non-coplanar to the beam's plane. It allowed the physicians to remotely control the US probe, mounted on a robotic arm, to dynamically trace and monitor in real time the coverage of the target within the BEV during a simulated beam-on situation. This target visualization system may provide a direct, remotely accessible and real-time way to visualize, verify, and ensure tumor targeting during radiotherapy.
Evaluating perceptual integration: uniting response-time- and accuracy-based methodologies.
Eidels, Ami; Townsend, James T; Hughes, Howard C; Perry, Lacey A
2015-02-01
This investigation brings together a response-time system identification methodology (e.g., Townsend & Wenger Psychonomic Bulletin & Review 11, 391-418, 2004a) and an accuracy methodology, intended to assess models of integration across stimulus dimensions (features, modalities, etc.) that were proposed by Shaw and colleagues (e.g., Mulligan & Shaw Perception & Psychophysics 28, 471-478, 1980). The goal was to theoretically examine these separate strategies and to apply them conjointly to the same set of participants. The empirical phases were carried out within an extension of an established experimental design called the double factorial paradigm (e.g., Townsend & Nozawa Journal of Mathematical Psychology 39, 321-359, 1995). That paradigm, based on response times, permits assessments of architecture (parallel vs. serial processing), stopping rule (exhaustive vs. minimum time), and workload capacity, all within the same blocks of trials. The paradigm introduced by Shaw and colleagues uses a statistic formally analogous to that of the double factorial paradigm, but based on accuracy rather than response times. We demonstrate that the accuracy measure cannot discriminate between parallel and serial processing. Nonetheless, the class of models supported by the accuracy data possesses a suitable interpretation within the same set of models supported by the response-time data. The supported model, consistent across individuals, is parallel and has limited capacity, with the participants employing the appropriate stopping rule for the experimental setting.
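The workload-capacity assessment in the double factorial paradigm rests on the capacity coefficient of Townsend & Nozawa (1995): for an OR (minimum-time) design, C(t) = H_AB(t) / (H_A(t) + H_B(t)), where H(t) = −log S(t) is the cumulative hazard estimated from the response-time survivor function S(t). A minimal sketch of the empirical estimator (variable names are illustrative):

```python
import numpy as np

def survivor(rts, t):
    """Empirical survivor function S(t) = P(RT > t)."""
    return float(np.mean(np.asarray(rts, float) > t))

def capacity_or(rt_double, rt_single_a, rt_single_b, t):
    """OR-design capacity coefficient C(t).

    C(t) = 1 is the unlimited-capacity parallel benchmark; C(t) < 1 indicates
    limited capacity (as found for the participants in this study).
    """
    H = lambda rts: -np.log(max(survivor(rts, t), 1e-12))  # cumulative hazard
    return H(rt_double) / (H(rt_single_a) + H(rt_single_b))
```

Here `rt_double` holds response times from redundant-target (both-dimension) trials and `rt_single_a`/`rt_single_b` from single-target trials.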
Lee, Joong Hoon; Hwang, Ji-Young; Zhu, Jia; Hwang, Ha Ryeon; Lee, Seung Min; Cheng, Huanyu; Lee, Sang-Hoon; Hwang, Suk-Won
2018-06-14
We introduce optimized elastomeric conductive electrodes using a mixture of silver nanowires (AgNWs) with carbon nanotubes/polydimethylsiloxane (CNTs/PDMS) to build a portable, earphone-type wearable system designed to record electrophysiological activities while the wearer listens to music. A custom-built plastic frame, integrated with the soft, deformable fabric-based memory foam of earmuffs, houses the essential electronic components, such as conductive elastomers, metal strips, signal transducers, and a speaker. The platform incorporates accessory cables to attain wireless, real-time monitoring of electrical potentials, whose information can be displayed on a cell phone during outdoor activities and music appreciation. Careful evaluation of the experimental results reveals that the performance of the fabricated dry electrodes is comparable to that of commercial wet electrodes, and position-dependent signal behaviors provide a route toward maximizing signal quality. This research offers a facile approach to a wearable healthcare monitor via the integration of soft electronic constituents with personal belongings.
Deep Time Data Infrastructure: Integrating Our Current Geologic and Biologic Databases
Kolankowski, S. M.; Fox, P. A.; Ma, X.; Prabhu, A.
2016-12-01
As our knowledge of Earth's geologic and mineralogical history grows, we require more efficient methods of sharing immense amounts of data. Databases across numerous disciplines offer extensive information on very specific epochs of Earth's history up to its current state, e.g., the fossil record, rock composition, and proteins. These databases could be a powerful force in identifying previously unseen correlations, such as relationships between minerals and proteins. Creating a unifying site that provides a portal to these databases will aid our ability, as a collaborative scientific community, to utilize our findings more effectively. The Deep-Time Data Infrastructure (DTDI) is currently being defined as part of a larger effort to accomplish this goal. DTDI will not be a new database but an integration of existing resources. Current geologic and related databases were identified, documentation of their schemas was established, and the work will be presented as a stage-by-stage progression. Through conceptual modeling focused on variables from their combined records, we will determine the best way to integrate these databases using common factors. The Deep-Time Data Infrastructure will allow geoscientists to bridge gaps in data and further our understanding of Earth's history.
Time-Dependent Heat Conduction Problems Solved by an Integral-Equation Approach
International Nuclear Information System (INIS)
Oberaigner, E.R.; Leindl, M.; Antretter, T.
2010-01-01
Full text: A classical task of mathematical physics is the formulation and solution of a time-dependent thermoelastic problem. In this work we develop an algorithm for solving the time-dependent heat conduction equation c_p ρ ∂_t T − k T_,ii = 0 in an analytical, exact fashion for a two-component domain. The formal solution of the problem is obtained by the Green's function approach. As an intermediate result, an integral equation for the temperature history at the domain interface is formulated, which can be solved analytically. This method is applied to a classical engineering problem, i.e. to a special case of a Stefan problem. The Green's function approach in conjunction with the integral-equation method is very useful in cases where strong discontinuities or jumps occur. The initial conditions and the system parameters of the investigated problem give rise to two jumps in the temperature field. Purely numerical solutions are obtained using the FEM (finite element method) and the FDM (finite difference method) and compared with the analytical approach. At the domain boundary the analytical solution and the FEM solution are in good agreement, but the FDM results show a significant smearing effect. (author)
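For readers comparing against the FDM runs mentioned above, a minimal explicit (FTCS) finite-difference sketch of the 1-D heat equation ∂_t T = α ∂_xx T, with diffusivity α = k/(c_p ρ). The grid and boundary handling here are illustrative assumptions, not the authors' setup:

```python
import numpy as np

def ftcs_heat(T0, alpha, dx, dt, n_steps):
    """Explicit FTCS scheme for dT/dt = alpha * d2T/dx2.

    Stable only for r = alpha*dt/dx**2 <= 0.5; Dirichlet ends are held fixed.
    """
    T = np.asarray(T0, dtype=float).copy()
    r = alpha * dt / dx**2
    assert r <= 0.5, "FTCS stability limit violated"
    for _ in range(n_steps):
        # Second-order central difference in space, forward Euler in time
        T[1:-1] = T[1:-1] + r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    return T
```

The numerical diffusion of such low-order schemes is one plausible source of the smearing at jumps that the abstract reports for the FDM results.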
Optimizing some 3-stage W-methods for the time integration of PDEs
Gonzalez-Pinto, S.; Hernandez-Abreu, D.; Perez-Rodriguez, S.
2017-07-01
The optimization of some W-methods for the time integration of time-dependent PDEs in several spatial variables is considered. In [2, Theorem 1], several three-parametric families of three-stage W-methods for the integration of IVPs in ODEs were studied. Besides, the optimization of several specific methods for PDEs when the Approximate Matrix Factorization Splitting (AMF) is used to define the approximate Jacobian matrix (W ≈ f_y(y_n)) was carried out. Some convergence and stability properties were also presented [2]. The derived methods were optimized on the basis that the underlying explicit Runge-Kutta method is the one having the largest monotonicity interval among the three-stage, order-three Runge-Kutta methods [1]. Here, we propose an optimization of the methods by imposing an additional order condition [7] to keep order three for parabolic PDE problems [6], but at the price of substantially reducing the length of the nonlinear monotonicity interval of the underlying explicit Runge-Kutta method.
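The three-stage, order-three explicit Runge-Kutta method with the largest monotonicity (SSP) interval is commonly identified with the Shu-Osher SSPRK3 scheme; assuming that is the underlying method referenced in [1], one step can be sketched as:

```python
def ssprk3_step(f, t, y, h):
    """One step of the optimal three-stage, third-order SSP Runge-Kutta (Shu-Osher) scheme."""
    k1 = y + h * f(t, y)                                    # stage 1: forward Euler
    k2 = 0.75 * y + 0.25 * (k1 + h * f(t + h, k1))          # stage 2: convex combination
    return y / 3.0 + (2.0 / 3.0) * (k2 + h * f(t + 0.5 * h, k2))  # stage 3
```

Each stage is a convex combination of forward-Euler substeps, which is exactly what yields the method's monotonicity (strong-stability-preserving) property.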
Digging Back In Time: Integrating Historical Data Into an Operational Ocean Observing System
McCammon, M.
2016-02-01
Modern technologies allow near-real-time reporting and display of data from in situ instrumentation live on the internet. This has given users fast access to critical information for scientific applications, marine safety, planning, and numerous other activities. Equally valuable is access to historical data sets. However, it is challenging to identify and access sources of historical data of interest, as the data exist in many different locations depending on the funding source and provider. Time-varying formats can also make it difficult to mine and display historical data. There is also the issue of data quality and having a systematic means of assessing the credibility of historical data sets. The Alaska Ocean Observing System (AOOS) data management system demonstrates the successful ingestion of historical data, both old and new (as recent as yesterday), and has integrated numerous historical data streams into user-friendly data portals, available for data upload and display on the AOOS website. An example is the inclusion of non-real-time (e.g. day-old) AIS (Automatic Identification System) ship tracking data, important for scientists working in marine mammal migration regions. Other examples include historical sea ice data and various data streams from previous research projects (e.g. moored time series, HF radar surface currents, weather, shipboard CTD). Most program or project websites offer access only to data specific to their own agency or project and cannot provide access to the plethora of other data that might be available for the region and useful for integration, comparison, and synthesis. AOOS offers end users a one-stop shop for data in the area they want to research, helping them identify other sources of information and access. Demonstrations of data portals using historical data illustrate these benefits.
Cendagorta, Joseph R; Bačić, Zlatko; Tuckerman, Mark E
2018-03-14
We introduce a scheme for approximating quantum time correlation functions numerically within the Feynman path integral formulation. Starting with the symmetrized version of the correlation function expressed as a discretized path integral, we introduce a change of integration variables often used in the derivation of trajectory-based semiclassical methods. In particular, we transform to sum and difference variables between forward and backward complex-time propagation paths. Once the transformation is performed, the potential energy is expanded in powers of the difference variables, which allows us to perform the integrals over these variables analytically. The manner in which this procedure is carried out results in an open-chain path integral (in the remaining sum variables) with a modified potential that is evaluated using imaginary-time path-integral sampling rather than requiring the generation of a large ensemble of trajectories. Consequently, any number of path integral sampling schemes can be employed to compute the remaining path integral, including Monte Carlo, path-integral molecular dynamics, or enhanced path-integral molecular dynamics. We believe that this approach constitutes a different perspective in semiclassical-type approximations to quantum time correlation functions. Importantly, we argue that our approximation can be systematically improved within a cumulant expansion formalism. We test this approximation on a set of one-dimensional problems that are commonly used to benchmark approximate quantum dynamical schemes. We show that the method is at least as accurate as the popular ring-polymer molecular dynamics technique and linearized semiclassical initial value representation for correlation functions of linear operators in most of these examples and improves the accuracy of correlation functions of nonlinear operators.
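The imaginary-time path-integral sampling invoked above rests on the standard primitive discretization of the action for a cyclic path of P beads. A minimal sketch of that discretized action (the harmonic potential, units, and names are illustrative assumptions, not the paper's system):

```python
import numpy as np

def primitive_action(path, beta, mass=1.0, potential=lambda x: 0.5 * x**2):
    """Primitive discretized imaginary-time action for a closed path of P beads.

    S = sum_k [ m/(2*tau) * (x_{k+1} - x_k)^2 + tau * V(x_k) ],  tau = beta / P,
    with cyclic indexing x_P = x_0 (units with hbar = 1).
    """
    path = np.asarray(path, float)
    tau = beta / len(path)
    spring = 0.5 * mass / tau * np.sum((np.roll(path, -1) - path) ** 2)
    pot = tau * np.sum(potential(path))
    return spring + pot
```

Any of the sampling schemes the abstract lists (Monte Carlo, path-integral molecular dynamics, or enhanced variants) amounts to drawing bead configurations with weight exp(−S).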
International Nuclear Information System (INIS)
Das, Sanat Kumar; Chatterjee, Abhijit; Ghosh, Sanjay K.; Raha, Sibaji
2015-01-01
An outflow of continental haze occurs from the Indo-Gangetic Basin (IGB) in the north to the Bay of Bengal (BoB) in the south. An integrated campaign was organized to investigate this continental haze during December 2013–February 2014 at source and remote regions within the IGB to quantify its radiative effects. Measurements were carried out at three locations in eastern India: 1) Kalas Island, Sundarban (21.68°N, 88.57°E), an isolated island along the north-east coast of the BoB; 2) Kolkata (22.57°N, 88.42°E), an urban metropolis; and 3) Siliguri (26.70°N, 88.35°E), an urban region at the foothills of the eastern Himalayas. Ground-based AOD (at 0.5 μm) is observed to be maximum (1.25 ± 0.18) over Kolkata, followed by Siliguri (0.60 ± 0.17), and minimum over Sundarban (0.53 ± 0.18). Black carbon concentration is found to be maximum at Kolkata (21.6 ± 6.6 μg·m^−3), with almost equal concentrations at Siliguri (12.6 ± 5.2 μg·m^−3) and Sundarban (12.3 ± 3.0 μg·m^−3). A combination of MODIS AOD and back-trajectory analysis shows an outflow of winter-time continental haze originating from the central IGB and venting out through Sundarban towards the BoB. This continental haze, with high extinction coefficient, is identified up to the central BoB using CALIPSO observations and is found to contribute ~75% to marine AOD over the central BoB. This haze produces significantly high aerosol radiative forcing within the atmosphere over Kolkata (75.4 W·m^−2) as well as over Siliguri and Sundarban (40 W·m^−2), indicating large forcing over the entire IGB, from the foothills of the Himalayas to the coastal region. This winter-time continental haze also causes similar radiative heating (1.5 K·day^−1) from Siliguri to Sundarban, which is enhanced over Kolkata (3 K·day^−1) due to large emission of local urban aerosols. This high aerosol heating over the entire IGB and coastal region of the BoB can have considerable impact on the monsoonal circulation and, more importantly, such haze