Maximum Temperature Detection System for Integrated Circuits
Frankiewicz, Maciej; Kos, Andrzej
2015-03-01
The paper describes the structure and measurement results of a system that detects the current maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature (PTAT) sensors, a temperature processing path and a digital part designed in VHDL. The analogue parts of the circuit were designed with a full-custom technique. The system is part of a temperature-controlled oscillator circuit, a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated to thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.
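The PTAT sensors mentioned above rely on a standard relation: two bipolar junctions biased at a current ratio N develop a voltage difference proportional to absolute temperature. The sketch below illustrates that generic relation only; it is an assumption for illustration and not the paper's specific UMC 0.18 μm circuit.

```python
import math

# Generic PTAT core (illustrative assumption, not the paper's circuit):
# delta_Vbe = (k*T/q) * ln(N), proportional to absolute temperature T.
K_BOLTZMANN = 1.380649e-23  # J/K
Q_ELECTRON = 1.602177e-19   # C

def ptat_voltage(temp_kelvin, current_ratio):
    """Delta-Vbe of a PTAT pair at temperature temp_kelvin (current ratio N)."""
    return (K_BOLTZMANN * temp_kelvin / Q_ELECTRON) * math.log(current_ratio)

def temperature_from_ptat(delta_vbe, current_ratio):
    """Invert the PTAT relation: read temperature back from the sensor voltage."""
    return delta_vbe * Q_ELECTRON / (K_BOLTZMANN * math.log(current_ratio))

v = ptat_voltage(300.0, 8)           # ~54 mV at room temperature for N = 8
print(temperature_from_ptat(v, 8))   # → 300.0
```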
Maximum phonation time: variability and reliability.
Speyer, Renée; Bogaardt, Hans C A; Passos, Valéria Lima; Roodenburg, Nel P H D; Zumach, Anne; Heijnen, Mariëlle A M; Baijens, Laura W J; Fleskens, Stijn J H M; Brunings, Jan W
2010-05-01
The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia versus a group of healthy control subjects matched by age and gender. Over a period of maximally 6 weeks, three video recordings were made of five subjects' maximum phonation time trials. A panel of five experts was responsible for all measurements, including a repeated measurement of the subjects' first recordings. Patients showed significantly shorter maximum phonation times compared with healthy controls (on average, 6.6 seconds shorter). The averaged intraclass correlation coefficient (ICC) over all raters per trial for the first day was 0.998. The averaged reliability coefficient per rater and per trial for repeated measurements of the first day's data was 0.997, indicating high intrarater reliability. The mean reliability coefficient per day for one trial was 0.939. When using five trials, the reliability increased to 0.987. The reliability over five trials for a single day was 0.836; for 2 days, 0.911; and for 3 days, 0.935. To conclude, the maximum phonation time has proven to be a highly reliable measure in voice assessment. A single rater is sufficient to provide highly reliable measurements.
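The trial-aggregation figures quoted above are consistent with the Spearman-Brown prophecy formula, which predicts the reliability of an average over k parallel measurements from a single-measurement reliability. The sketch below only reproduces that arithmetic; the study itself uses a fuller generalizability/ICC analysis.

```python
def spearman_brown(single_reliability, k):
    """Predicted reliability of the mean of k parallel measurements.

    Spearman-Brown prophecy: R_k = k*r / (1 + (k-1)*r).
    """
    r = single_reliability
    return k * r / (1 + (k - 1) * r)

# One trial per day: 0.939; averaging five trials matches the quoted 0.987.
print(round(spearman_brown(0.939, 5), 3))  # → 0.987
# One day: 0.836; averaging two days matches the quoted 0.911.
print(round(spearman_brown(0.836, 2), 3))  # → 0.911
```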
Maximum likelihood estimation for integrated diffusion processes
Baltazar-Larios, Fernando; Sørensen, Michael
We propose an EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...
An Integrated Modeling Framework for Probable Maximum Precipitation and Flood
Gangrade, S.; Rastogi, D.; Kao, S. C.; Ashfaq, M.; Naz, B. S.; Kabela, E.; Anantharaj, V. G.; Singh, N.; Preston, B. L.; Mei, R.
2015-12-01
With the increasing frequency and magnitude of extreme precipitation and flood events projected in the future climate, there is a strong need to enhance our modeling capabilities to assess the potential risks to critical energy-water infrastructures such as major dams and nuclear power plants. In this study, an integrated modeling framework is developed through high performance computing to investigate the climate change effects on probable maximum precipitation (PMP) and probable maximum flood (PMF). Multiple historical storms from 1981-2012 over the Alabama-Coosa-Tallapoosa River Basin near the Atlanta metropolitan area are simulated by the Weather Research and Forecasting (WRF) model using the Climate Forecast System Reanalysis (CFSR) forcings. After further WRF model tuning, these storms are used to simulate PMP through moisture maximization at initial and lateral boundaries. A high resolution hydrological model, the Distributed Hydrology-Soil-Vegetation Model, implemented at 90 m resolution and calibrated against U.S. Geological Survey streamflow observations, is then used to simulate the corresponding PMF. In addition to the control simulation that is driven by CFSR, multiple storms from the Community Climate System Model version 4 under the Representative Concentration Pathway 8.5 emission scenario are used to simulate PMP and PMF under projected future climate conditions. The multiple PMF scenarios developed through this integrated modeling framework may be utilized to evaluate the vulnerability of existing energy-water infrastructures with respect to various aspects associated with PMP and PMF.
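The moisture maximization step mentioned above is, in its classical form, a simple scaling of an observed storm by the ratio of maximum to observed precipitable water. The sketch below shows only that textbook ratio with hypothetical numbers, not the WRF boundary-condition workflow used in the study.

```python
def moisture_maximized_precip(storm_precip_mm, storm_pw_mm, max_pw_mm):
    """Classical moisture maximization of an observed storm.

    PMP_estimate = P_storm * (PW_max / PW_storm), where PW is precipitable water.
    """
    if storm_pw_mm <= 0:
        raise ValueError("storm precipitable water must be positive")
    return storm_precip_mm * (max_pw_mm / storm_pw_mm)

# Hypothetical numbers: a 250 mm storm observed with 40 mm precipitable water,
# maximized to a climatological 60 mm bound.
print(moisture_maximized_precip(250.0, 40.0, 60.0))  # → 375.0
```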
Time series analysis by the Maximum Entropy method
Kirk, B.L.; Rust, B.W.; Van Winkle, W.
1979-01-01
The principal subject of this report is the use of the Maximum Entropy method for spectral analysis of time series. The classical Fourier method is also discussed, mainly as a standard for comparison with the Maximum Entropy method. Examples are given which clearly demonstrate the superiority of the latter method over the former when the time series is short. The report also includes a chapter outlining the theory of the method, a discussion of the effects of noise in the data, a chapter on significance tests, a discussion of the problem of choosing the prediction filter length, and, most importantly, a description of a package of FORTRAN subroutines for making the various calculations. Cross-referenced program listings are given in the appendices. The report also includes a chapter demonstrating the use of the programs by means of an example. Real time series like the lynx data and sunspot numbers are also analyzed. 22 figures, 21 tables, 53 references.
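The Maximum Entropy spectrum discussed above is an all-pole spectrum determined by autoregressive coefficients. As a toy illustration (not the report's FORTRAN package), the sketch below evaluates that spectrum for an assumed AR(2) resonance and locates its peak.

```python
import cmath
import math

def maxent_spectrum(ar_coeffs, noise_var, freq):
    """Maximum-entropy (all-pole) spectral density at normalized frequency freq.

    P(f) = noise_var / |1 + sum_k a_k * exp(-2*pi*i*f*k)|^2,
    with ar_coeffs = [a_1, ..., a_p].
    """
    denom = 1.0 + sum(a * cmath.exp(-2j * cmath.pi * freq * (k + 1))
                      for k, a in enumerate(ar_coeffs))
    return noise_var / abs(denom) ** 2

# AR(2) with a resonant pole pair at radius 0.95, normalized frequency 0.1:
r, theta = 0.95, 2 * math.pi * 0.1
a = [-2 * r * math.cos(theta), r * r]
freqs = [i / 1000 for i in range(501)]          # 0 ... 0.5 (Nyquist)
peak = max(freqs, key=lambda f: maxent_spectrum(a, 1.0, f))
print(peak)  # the spectral peak lands very near 0.1
```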
Training Concept, Evolution Time, and the Maximum Entropy Production Principle
Alexey Bezryadin
2016-04-01
The maximum entropy production principle (MEPP) is a type of entropy optimization which demands that complex non-equilibrium systems should organize such that the rate of the entropy production is maximized. Our take on this principle is that to prove or disprove the validity of the MEPP and to test the scope of its applicability, it is necessary to conduct experiments in which the entropy produced per unit time is measured with a high precision. Thus we study electric-field-induced self-assembly in suspensions of carbon nanotubes and realize precise measurements of the entropy production rate (EPR). As a strong voltage is applied the suspended nanotubes merge together into a conducting cloud which produces Joule heat and, correspondingly, produces entropy. We introduce two types of EPR, which have qualitatively different significance: global EPR (g-EPR) and the entropy production rate of the dissipative cloud itself (DC-EPR). The following results are obtained: (1) As the system reaches the maximum of the DC-EPR, it becomes stable because the applied voltage acts as a stabilizing thermodynamic potential; (2) We discover metastable states characterized by high, near-maximum values of the DC-EPR. Under certain conditions, such efficient entropy-producing regimes can only be achieved if the system is allowed to initially evolve under mildly non-equilibrium conditions, namely at a reduced voltage; (3) Without such a “training” period the system typically is not able to reach the allowed maximum of the DC-EPR if the bias is high; (4) We observe that the DC-EPR maximum is achieved within a time, Te, the evolution time, which scales as a power-law function of the applied voltage; (5) Finally, we present a clear example in which the g-EPR theoretical maximum can never be achieved. Yet, under a wide range of conditions, the system can self-organize and achieve a dissipative regime in which the DC-EPR equals its theoretical maximum.
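For a purely resistive dissipator in a bath, the global entropy production rate of Joule heating is just dissipated power over temperature. The sketch below shows only that elementary relation with hypothetical numbers; the paper's g-EPR/DC-EPR distinction involves the measured cloud geometry and is not reproduced here.

```python
def joule_entropy_production_rate(voltage_v, resistance_ohm, bath_temp_k):
    """Entropy production rate of a resistor dissipating into a thermal bath.

    dS/dt = P / T = V^2 / (R * T), in W/K.
    """
    return voltage_v ** 2 / (resistance_ohm * bath_temp_k)

# Hypothetical nanotube cloud: 40 V across 10 kOhm at room temperature.
print(joule_entropy_production_rate(40.0, 1e4, 300.0))  # → ~5.33e-4 W/K
```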
Efficient maximum likelihood parameterization of continuous-time Markov processes
McGibbon, Robert T
2015-01-01
Continuous-time Markov processes over finite state-spaces are widely used to model dynamical processes in many fields of natural and social science. Here, we introduce a maximum likelihood estimator for constructing such models from data observed at a finite time interval. This estimator is drastically more efficient than prior approaches, enables the calculation of deterministic confidence intervals in all model parameters, and can easily enforce important physical constraints on the models such as detailed balance. We demonstrate and discuss the advantages of these models over existing discrete-time Markov models for the analysis of molecular dynamics simulations.
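For contrast with the discretely observed setting the paper addresses, the classical maximum likelihood estimator for a *fully observed* continuous-time Markov chain has a closed form: each rate is the jump count divided by the total holding time in the source state. The sketch below implements that baseline only (an assumption for illustration, not the paper's estimator).

```python
from collections import defaultdict

def ctmc_mle(trajectory):
    """MLE rate matrix from a fully observed continuous-time Markov chain path.

    trajectory: list of (state, holding_time) pairs in visit order.
    Returns {(i, j): rate} with q_ij = (# of i -> j jumps) / (total time in i).
    """
    time_in = defaultdict(float)
    jumps = defaultdict(int)
    for (s, dt), (s_next, _) in zip(trajectory, trajectory[1:]):
        time_in[s] += dt
        jumps[(s, s_next)] += 1
    time_in[trajectory[-1][0]] += trajectory[-1][1]  # last sojourn, no jump out
    return {(i, j): n / time_in[i] for (i, j), n in jumps.items()}

path = [("A", 2.0), ("B", 1.0), ("A", 2.0), ("B", 1.0), ("A", 1.0)]
rates = ctmc_mle(path)
print(rates[("A", "B")])  # 2 jumps during 5.0 time units in A → 0.4
```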
Minimizing Maximum Response Time and Delay Factor in Broadcast Scheduling
Chekuri, Chandra; Moseley, Benjamin
2009-01-01
We consider online algorithms for pull-based broadcast scheduling. In this setting there are n pages of information at a server and requests for pages arrive online. When the server serves (broadcasts) a page p, all outstanding requests for that page are satisfied. We study two related metrics, namely maximum response time (waiting time) and maximum delay-factor and their weighted versions. We obtain the following results in the worst-case online competitive model. - We show that FIFO (first-in first-out) is 2-competitive even when the page sizes are different. Previously this was known only for unit-sized pages [10] via a delicate argument. Our proof differs from [10] and is perhaps more intuitive. - We give an online algorithm for maximum delay-factor that is O(1/ε²)-competitive with (1+ε)-speed for unit-sized pages and with (2+ε)-speed for different sized pages. This improves on the algorithm in [12] which required (2+ε)-speed and (4+ε)-speed respectively. In addition we show that the algori...
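The FIFO policy analyzed above is easy to simulate for unit-size pages: at each broadcast the server transmits the page of the oldest outstanding request, and every outstanding request for that page completes at once. The sketch below is a minimal discrete-time simulation (an illustrative assumption, not the paper's analysis machinery).

```python
def fifo_broadcast_max_response(requests):
    """Simulate FIFO pull-based broadcast with unit-size pages.

    requests: list of (arrival_time, page) with integer arrival times.
    Each broadcast takes one time unit; the server transmits the page of the
    oldest outstanding request, satisfying all outstanding requests for it
    (requests arriving during a broadcast wait for a later one).
    Returns the maximum response time over all requests.
    """
    pending = sorted(requests)            # FIFO order: (arrival, page)
    t, worst = 0, 0
    while pending:
        t = max(t, pending[0][0]) + 1     # idle until an arrival, then broadcast
        page = pending[0][1]
        served = [r for r in pending if r[1] == page and r[0] < t]
        pending = [r for r in pending if r not in served]
        worst = max(worst, max(t - arr for arr, _ in served))
    return worst

# Three clients want page "p" early; one wants "q" slightly later.
print(fifo_broadcast_max_response([(0, "p"), (0, "p"), (1, "q"), (1, "p")]))  # → 2
```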
Improving predictability of time series using maximum entropy methods
Chliamovitch, G.; Dupuis, A.; Golub, A.; Chopard, B.
2015-04-01
We discuss how maximum entropy methods may be applied to the reconstruction of Markov processes underlying empirical time series and compare this approach to usual frequency sampling. It is shown that, in low dimension, there exists a subset of the space of stochastic matrices for which the MaxEnt method is more efficient than sampling, in the sense that shorter historical samples have to be considered to reach the same accuracy. Considering short samples is of particular interest when modelling smoothly non-stationary processes, which provides, under some conditions, a powerful forecasting tool. The method is illustrated for a discretized empirical series of exchange rates.
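The "usual frequency sampling" baseline the abstract compares against is simply counting consecutive pairs in the series and row-normalizing. The sketch below implements only that baseline (the MaxEnt reconstruction itself is not reproduced here); the "up"/"down" series is a hypothetical stand-in for a discretized exchange-rate series.

```python
from collections import Counter, defaultdict

def empirical_transition_matrix(series):
    """Frequency-sampling estimate of a first-order Markov transition matrix.

    Counts consecutive pairs and normalizes each row by its outgoing total.
    """
    pair_counts = Counter(zip(series, series[1:]))
    row_totals = defaultdict(int)
    for (i, _), n in pair_counts.items():
        row_totals[i] += n
    return {(i, j): n / row_totals[i] for (i, j), n in pair_counts.items()}

P = empirical_transition_matrix(["u", "u", "d", "u", "d", "d", "u", "u"])
print(P[("u", "u")])  # 2 of the 4 transitions out of "u" stay in "u" → 0.5
```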
Time-Reversal Acoustics and Maximum-Entropy Imaging
Berryman, J G
2001-08-22
Target location is a common problem in acoustical imaging using either passive or active data inversion. Time-reversal methods in acoustics have the important characteristic that they provide a means of determining the eigenfunctions and eigenvalues of the scattering operator for either of these problems. Each eigenfunction may often be approximately associated with an individual scatterer. The resulting decoupling of the scattered field from a collection of targets is a very useful aid to localizing the targets, and suggests a number of imaging and localization algorithms. Two of these are linear subspace methods and maximum-entropy imaging.
On the maximum-entropy/autoregressive modeling of time series
Chao, B. F.
1984-01-01
The autoregressive (AR) model of a random process is interpreted in the light of Prony's relation, which relates a complex conjugate pair of poles of the AR process in the z-plane (or the z domain) on the one hand, to the complex frequency of one complex harmonic function in the time domain on the other. Thus the AR model of a time series is one that models the time series as a linear combination of complex harmonic functions, which include pure sinusoids and real exponentials as special cases. An AR model is completely determined by its z-domain pole configuration. The maximum-entropy/autoregressive (ME/AR) spectrum, defined on the unit circle of the z-plane (or the frequency domain), is nothing but a convenient, but ambiguous, visual representation. It is asserted that the position and shape of a spectral peak is determined by the corresponding complex frequency, and the height of the spectral peak contains little information about the complex amplitude of the complex harmonic functions.
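The pole-to-complex-frequency mapping described above is z = exp((σ + iω)Δt): the pole radius gives the damping σ and its angle gives the angular frequency ω. The sketch below checks this for an assumed AR(2) model whose characteristic polynomial is solved directly (an illustrative construction, not the paper's derivation).

```python
import cmath
import math

def pole_to_complex_frequency(z, dt=1.0):
    """Map an AR pole z to (damping, angular frequency) via z = exp((s + i*w)*dt)."""
    return math.log(abs(z)) / dt, cmath.phase(z) / dt

# AR(2): x_t + a1*x_{t-1} + a2*x_{t-2} = e_t has characteristic z^2 + a1*z + a2 = 0.
r, theta = 0.95, 0.4                    # intended pole pair r * exp(+/- i*theta)
a1, a2 = -2 * r * math.cos(theta), r * r
disc = cmath.sqrt(a1 * a1 - 4 * a2)     # negative discriminant -> conjugate pair
z = (-a1 + disc) / 2                    # one pole of the pair
sigma, omega = pole_to_complex_frequency(z)
print(round(omega, 3))  # recovers the angular frequency theta → 0.4
```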
Val, Maria Rosa Rovira; Lehmann, Martin; Zinenko, Anna
-a-vis the public sectors. Internationally, organisations have implemented a collection of these standards to be in line with such development and to obtain or keep their licence to operate globally. After two decades of development and maturation, the scenario is now different: (i) the economic context has changed...... dramatically with many of the world’s economies facing downturn and a looming possible recession; and the global economic and political balance changing; (ii) most larger companies and quite a few SMEs now have a mature knowledge of these standards; and (iii) some standards are advocating for integration...... procedures, such as cases of for example ISO integrated management systems, mutual equivalences recognition of Global Compact-GRI-ISO26000, or the case of IIRC initiative to develop integrated reporting on an organization’s Financial, Environmental, Social and Governance performance. This paper focuses...
Optimal control of a double integrator a primer on maximum principle
Locatelli, Arturo
2017-01-01
This book provides an introductory yet rigorous treatment of Pontryagin’s Maximum Principle and its application to optimal control problems when simple and complex constraints act on state and control variables, the two classes of variable in such problems. The achievements resulting from first-order variational methods are illustrated with reference to a large number of problems that, almost universally, relate to a particular second-order, linear and time-invariant dynamical system, referred to as the double integrator. The book is ideal for students who have some knowledge of the basics of system and control theory and possess the calculus background typically taught in undergraduate curricula in engineering. Optimal control theory, of which the Maximum Principle must be considered a cornerstone, has been very popular ever since the late 1950s. However, the possibly excessive initial enthusiasm engendered by its perceived capability to solve any kind of problem gave way to its equally unjustified rejecti...
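For the double integrator (state x1 = position, x2 = velocity, dynamics x1' = x2, x2' = u with |u| ≤ 1), the Maximum Principle yields the classic bang-bang time-optimal law u = -sign(x1 + x2|x2|/2). The sketch below is a crude Euler check of that law (an illustrative assumption, not the book's treatment): from (1, 0) the analytic optimum reaches the origin at t = 2.

```python
def simulate_time_optimal(x1, x2, dt=1e-3, t_end=2.5):
    """Euler simulation of the bang-bang time-optimal law for the double integrator.

    Dynamics: x1' = x2, x2' = u, |u| <= 1. Pontryagin's Maximum Principle gives
    the switching-curve feedback u = -sign(x1 + x2*|x2|/2), driving the state
    to the origin (with small chattering once there, due to discretization).
    """
    for _ in range(int(t_end / dt)):
        s = x1 + 0.5 * x2 * abs(x2)     # switching function
        u = -1.0 if s > 0 else 1.0
        x1 += x2 * dt
        x2 += u * dt
    return x1, x2

x1, x2 = simulate_time_optimal(1.0, 0.0)
print(abs(x1) < 0.05 and abs(x2) < 0.05)  # near the origin after t = 2 → True
```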
SNS Diagnostics Timing Integration
Long, Cary D; Murphy, Darryl J; Pogge, James; Purcell, John D; Sundaram, Madhan
2005-01-01
The Spallation Neutron Source (SNS) accelerator systems will deliver a 1.0 GeV, 1.4 MW proton beam to a liquid mercury target for neutron scattering research. The accelerator complex consists of a 1 GeV linear accelerator, an accumulator ring and associated transport lines. The SNS diagnostics platform is PC-based running Windows XP Embedded for its OS and LabVIEW as its programming language. Coordinating timing among the various diagnostics instruments with the generation of the beam pulse is a challenging task that we have chosen to divide into three phases. First, timing was derived from VME based systems. In the second phase, described in this paper, timing pulses are generated by an in house designed PCI timing card installed in ten diagnostics PCs. Using fan-out modules, enough triggers were generated for all instruments. This paper describes how the Timing NAD (Network Attached Device) was rapidly developed using our NAD template, LabVIEW's PCI driver wizard, and LabVIEW Channel Access library. The NAD...
Maximum Time Separation of Events in Cyclic Systems with Linear and Latest Timing Constraints
Jin, Fen; Hulgaard, Henrik; Cerny, Eduard
1998-01-01
The determination of the maximum time separations of events is important in the design, synthesis, and verification of digital systems, especially in interface timing verification. Many researchers have explored solutions to the problem with various restrictions: a) on the type of constraints......, and b) on whether the events in the specification are allowed to occur repeatedly. When the events can occur only once, the problem is well solved. There are fewer concrete results for systems where the events can occur repeatedly. We extend the work by Hulgaard et al.\\ for computing the maximum...
Integrable Equations on Time Scales
Gurses, Metin; Guseinov, Gusein Sh.; Silindir, Burcu
2005-01-01
Integrable systems are usually given in terms of functions of continuous variables (on ${\\mathbb R}$), functions of discrete variables (on ${\\mathbb Z}$) and recently in terms of functions of $q$-variables (on ${\\mathbb K}_{q}$). We formulate the Gel'fand-Dikii (GD) formalism on time scales by using the delta differentiation operator and find more general integrable nonlinear evolutionary equations. In particular they yield integrable equations over integers (difference equations) and over $q...
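On a time scale, the delta derivative at a right-scattered point t is f^Δ(t) = (f(σ(t)) − f(t))/μ(t), where σ(t) is the next point and μ(t) = σ(t) − t; on ℝ it reduces to the ordinary derivative and on ℤ to the forward difference. The sketch below illustrates only that discrete case (a minimal illustration, not the Gel'fand-Dikii construction).

```python
def delta_derivative(f, points, t):
    """Delta derivative of f at a right-scattered point t of a discrete time scale.

    For a time scale given as a sorted list of points, sigma(t) is the next
    point and mu(t) = sigma(t) - t, so f^Delta(t) = (f(sigma(t)) - f(t)) / mu(t).
    On the integers this is the forward difference; as mu -> 0 it tends to f'(t).
    """
    i = points.index(t)
    sigma_t = points[i + 1]
    return (f(sigma_t) - f(t)) / (sigma_t - t)

# On T = Z, (t^2)^Delta = 2t + 1, in contrast to the continuum derivative 2t.
ts = list(range(10))
print(delta_derivative(lambda t: t * t, ts, 3))  # → 7.0
```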
Xu, Yadong; Serre, Marc L; Reyes, Jeanette; Vizuete, William
2016-04-19
To improve ozone exposure estimates for ambient concentrations at a national scale, we introduce our novel Regionalized Air Quality Model Performance (RAMP) approach to integrate chemical transport model (CTM) predictions with the available ozone observations using the Bayesian Maximum Entropy (BME) framework. The framework models the nonlinear and nonhomoscedastic relation between air pollution observations and CTM predictions and for the first time accounts for variability in CTM model performance. A validation analysis using only noncollocated data outside of a validation radius rv was performed and the R(2) between observations and re-estimated values for two daily metrics, the daily maximum 8-h average (DM8A) and the daily 24-h average (D24A) ozone concentrations, were obtained with the OBS scenario using ozone observations only in contrast with the RAMP and a Constant Air Quality Model Performance (CAMP) scenarios. We show that, by accounting for the spatial and temporal variability in model performance, our novel RAMP approach is able to extract more information in terms of R(2) increase percentage, with over 12 times for the DM8A and over 3.5 times for the D24A ozone concentrations, from CTM predictions than the CAMP approach assuming that model performance does not change across space and time.
Integral equations on time scales
Georgiev, Svetlin G
2016-01-01
This book offers the reader an overview of recent developments of integral equations on time scales. It also contains elegant analytical and numerical methods. This book is primarily intended for senior undergraduate students and beginning graduate students of engineering and science courses. The students in mathematical and physical sciences will find many sections of direct relevance. The book contains nine chapters and each chapter is pedagogically organized. This book is specially designed for those who wish to understand integral equations on time scales without having extensive mathematical background.
Wang, Tianxiao
2010-01-01
This paper formulates and studies a stochastic maximum principle for forward-backward stochastic Volterra integral equations (FBSVIEs in short), where the control domain is assumed to be convex. Then a linear quadratic (LQ in short) problem for backward stochastic Volterra integral equations (BSVIEs in short) is presented to illustrate the aforementioned optimal control problem. Motivated by the techniques used in solving the above problem, a more convenient and briefer method for the unique solvability of the M-solution for BSVIEs is proposed. Finally, we investigate a risk minimization problem by means of the maximum principle for FBSVIEs. A closed-form optimal portfolio is obtained in some special cases.
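A controlled FBSVIE of the kind discussed above can be written in the following standard form (sketched here as an assumption, since the abstract does not quote the paper's exact coefficients):

```latex
X(t) = \varphi(t) + \int_0^t b\big(t,s,X(s),u(s)\big)\,ds
     + \int_0^t \sigma\big(t,s,X(s),u(s)\big)\,dW(s), \\
Y(t) = \psi(t) + \int_t^T g\big(t,s,X(s),Y(s),Z(t,s),u(s)\big)\,ds
     - \int_t^T Z(t,s)\,dW(s),
```

where u takes values in the convex control domain and an M-solution requires the pair (Y, Z) to satisfy the backward equation in the martingale-representation sense.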
JIN Qibing; LIU Qie; WANG Qi; TIAN Yuqi; WANG Yuanfei
2013-01-01
The IMC (Internal Model Control) controller based on robust tuning can improve the robustness and dynamic performance of the system. In this paper, the robustness degree of the control system is investigated in depth based on the Maximum Sensitivity (Ms). An analytical relationship is obtained between the robustness specification and the controller parameters, which gives a clear design criterion for a robust IMC controller. Moreover, a novel and simple IMC-PID (Proportional-Integral-Derivative) tuning method is proposed by converting the IMC controller to PID form in terms of the time domain rather than the frequency domain adopted in some conventional IMC-based methods. Hence, the presented IMC-PID gives good performance with a specific robustness degree. The new IMC-PID method is compared with other classical IMC-PID rules, showing its flexibility and feasibility for a wide range of plants.
Maximum holding endurance time: Effects of load and load's center of gravity height.
Lee, Tzu-Hsien
2015-01-01
Manual holding tasks pose a potential risk for the development of musculoskeletal injuries since they are prone to induce localized muscle fatigue. Maximum holding endurance time is a significant parameter for the design of manual holding tasks. This study aimed to examine the effects of load and the load's center of gravity (COG) height on maximum holding endurance time. Fifteen young and healthy males were recruited as participants. A factorial design was used to examine the effects of load and load's COG height on maximum holding endurance time. Four levels of load (15%, 30%, 45% and 60% of the participant's maximum holding capacity) and two levels of load's COG height in the box (0 cm and 40 cm above the handle position) were examined. Maximum holding endurance time decreased with increasing load and/or increasing load's COG height. The effect of load's COG height on maximum holding endurance time decreased with increasing load. Load, load's COG height, and the interaction of load and load's COG height significantly affected maximum holding endurance time. Practitioners should account for these effects when setting the working conditions of holding tasks.
Jun He; Xin Yao
2004-01-01
Most work on the time complexity analysis of evolutionary algorithms has focused on artificial binary problems. The time complexity of the algorithms for combinatorial optimisation has not been well understood. This paper considers the time complexity of an evolutionary algorithm for a classical combinatorial optimisation problem: finding a maximum cardinality matching in a graph. It is shown that the evolutionary algorithm can produce a matching with nearly maximum cardinality in average polynomial time.
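A useful deterministic baseline for the matching problem above is greedy maximal matching: take edges in order, skipping any edge touching an already-matched vertex. A maximal matching always has at least half the cardinality of a maximum one. The sketch below shows only this baseline, not the evolutionary algorithm analyzed in the paper.

```python
def greedy_matching(edges):
    """Greedy maximal matching: scan edges, skipping covered vertices.

    Any maximal matching has >= 1/2 the edges of a maximum matching, making
    this a cheap baseline against which randomized search can be compared.
    """
    matched = set()
    matching = []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

# Path graph 1-2-3-4-5-6: the maximum matching has 3 edges, and greedy
# scanning left to right happens to find all of them here.
print(len(greedy_matching([(1, 2), (2, 3), (3, 4), (4, 5), (5, 6)])))  # → 3
```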
Zhou, Si-Da; Heylen, Ward; Sas, Paul; Liu, Li
2014-05-01
This paper investigates the problem of modal parameter estimation for time-varying structures under unknown excitation. A time-frequency-domain maximum likelihood estimator of modal parameters for linear time-varying structures is presented by adapting the frequency-domain maximum likelihood estimator to the time-frequency domain. The proposed estimator is parametric, that is, the linear time-varying structures are represented by a time-dependent common-denominator model. To adapt the existing frequency-domain estimator for time-invariant structures to the time-frequency methods for time-varying cases, an orthogonal polynomial and z-domain mapping hybrid basis function is presented, which has advantageous numerical conditioning and makes it convenient to calculate the modal parameters. A series of numerical examples has evaluated and illustrated the performance of the proposed maximum likelihood estimator, and a group of laboratory experiments has further validated the proposed estimator.
Stochastic behavior of a cold standby system with maximum repair time
Ashish Kumar
2015-09-01
The main aim of the present paper is to analyze the stochastic behavior of a cold standby system with the concepts of preventive maintenance, priority and maximum repair time. For this purpose, a stochastic model is developed in which initially one unit is operative and the other is kept as a cold standby. There is a single server who visits the system immediately as and when required. The server takes the unit under preventive maintenance after a maximum operation time at normal mode if one standby unit is available for operation. If the repair of the failed unit is not possible up to a maximum repair time, the failed unit is replaced by a new one. The failure time, maximum operation time and maximum repair time distributions of the unit are considered as exponentially distributed, while repair and maintenance time distributions are considered as arbitrary. All random variables are statistically independent and repairs are perfect. Various measures of system effectiveness are obtained by using the technique of semi-Markov processes and the regenerative point technique (RPT). To highlight the importance of the study, numerical results are also obtained for MTSF, availability and the profit function.
Takara, K. T.
2015-12-01
This paper describes a non-parametric frequency analysis method for hydrological extreme-value samples with a size larger than 100, verifying the estimation accuracy with computer intensive statistics (CIS) resampling such as the bootstrap. Probable maximum values are also incorporated into the analysis for extreme events larger than a design level of flood control. Traditional parametric frequency analysis methods for extreme values include the following steps: Step 1: Collecting and checking extreme-value data; Step 2: Enumerating probability distributions that would fit the data well; Step 3: Parameter estimation; Step 4: Testing goodness of fit; Step 5: Checking the variability of quantile (T-year event) estimates by the jackknife resampling method; and Step 6: Selection of the best distribution (final model). The non-parametric method (NPM) proposed here can skip Steps 2, 3, 4 and 6. Comparing traditional parametric methods (PM) with the NPM, this paper shows that PM often underestimates 100-year quantiles for annual maximum rainfall samples with records of more than 100 years. Overestimation examples are also demonstrated. The bootstrap resampling can correct the bias of the NPM and can also give the estimation accuracy as the bootstrap standard error. The NPM has the advantage of avoiding various difficulties in the above-mentioned steps of the traditional PM. Probable maximum events are also incorporated into the NPM as an upper bound of the hydrological variable. Probable maximum precipitation (PMP) and probable maximum flood (PMF) can provide a new parameter value combined with the NPM. An idea of how to incorporate these values into frequency analysis is proposed for better management of disasters that exceed the design level. The idea stimulates a more integrated approach by geoscientists and statisticians, as well as encouraging practitioners to consider the worst cases of disasters in their disaster management planning and practices.
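The core of the non-parametric approach above is an empirical quantile with a bootstrap standard error, skipping all distribution fitting. The sketch below is a minimal version with a hypothetical synthetic annual-maximum series (a nearest-rank quantile and a plain resampling bootstrap, not the paper's full procedure).

```python
import random

def empirical_quantile(sample, p):
    """Plain empirical p-quantile (nearest-rank, no distribution fitted)."""
    s = sorted(sample)
    idx = min(len(s) - 1, int(p * len(s)))
    return s[idx]

def bootstrap_quantile_se(sample, p, n_boot=2000, seed=42):
    """Bootstrap standard error of the empirical p-quantile."""
    rng = random.Random(seed)
    estimates = [empirical_quantile([rng.choice(sample) for _ in sample], p)
                 for _ in range(n_boot)]
    mean = sum(estimates) / n_boot
    return (sum((e - mean) ** 2 for e in estimates) / (n_boot - 1)) ** 0.5

# Hypothetical 120-year annual-maximum rainfall series (mm), heavy-tailed-ish.
rng = random.Random(0)
series = [100 * (1 + rng.expovariate(1.0)) for _ in range(120)]
q99 = empirical_quantile(series, 0.99)        # roughly a 100-year event
print(q99 > 100, bootstrap_quantile_se(series, 0.99) > 0)
```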
The early maximum likelihood estimation model of audiovisual integration in speech perception
Andersen, Tobias
2015-01-01
Speech perception is facilitated by seeing the articulatory mouth movements of the talker. This is due to perceptual audiovisual integration, which also causes the McGurk−MacDonald illusion, and for which a comprehensive computational account is still lacking. Decades of research have largely...... focused on the fuzzy logical model of perception (FLMP), which provides excellent fits to experimental observations but also has been criticized for being too flexible, post hoc and difficult to interpret. The current study introduces the early maximum likelihood estimation (MLE) model of audiovisual......-validation can evaluate models of audiovisual integration based on typical data sets taking both goodness-of-fit and model flexibility into account. All models were tested on a published data set previously used for testing the FLMP. Cross-validation favored the early MLE while more conventional error measures...
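Maximum likelihood cue integration, in its standard formulation, fuses two noisy estimates by inverse-variance weighting, which also reduces the fused variance below either input. The sketch below shows only that textbook computation (an assumption for illustration; the paper's early MLE model of audiovisual speech is more specific).

```python
def mle_fuse(est_a, var_a, est_v, var_v):
    """Maximum-likelihood fusion of an auditory and a visual estimate.

    Inverse-variance weighting:
        w_a    = (1/var_a) / (1/var_a + 1/var_v)
        fused  = w_a*est_a + (1 - w_a)*est_v
        var_av = var_a*var_v / (var_a + var_v)   (below either input variance)
    """
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
    fused = w_a * est_a + (1 - w_a) * est_v
    fused_var = var_a * var_v / (var_a + var_v)
    return fused, fused_var

# A reliable visual cue (variance 1) dominates a noisy auditory one (variance 4).
est, var = mle_fuse(0.0, 4.0, 1.0, 1.0)
print(round(est, 2), round(var, 2))  # → 0.8 0.8
```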
A Maximum Time Difference Pipelined Arithmetic Unit Based on CMOS Gate Array
唐志敏; 夏培肃
1995-01-01
This paper describes a maximum time difference pipelined arithmetic chip, a 36-bit adder and subtractor based on a 1.5 μm CMOS gate array. The chip can operate at 60 MHz and consumes less than 0.5 W. The results are also studied, and a more precise model of the delay time difference is proposed.
Observation of SN2011fe with INTEGRAL. I. Pre--maximum phase
Isern, J; Bravo, E; Diehl, R; Knödlseder, J; Domingo, A; Hirschmann, A; Hoeflich, P; Lebrun, F; Renaud, M; Soldi, S; Elias-Rosa, N; Hernanz, M; Kulebi, B; Zhang, X; Badenes, C; Domínguez, I; Garcia-Senz, D; Jordi, C; Lichti, G; Vedrenne, G; Von Ballmoos, P
2013-01-01
SN2011fe was detected by the Palomar Transient Factory on August 24th 2011 in M101, a few hours after the explosion. From the early optical spectra it was immediately realized that it was a Type Ia supernova, thus making this event the brightest one discovered in the last twenty years. The distance of the event offered the rare opportunity to perform a detailed observation with the instruments on board INTEGRAL to detect the gamma-ray emission expected from the decay chains of $^{56}$Ni. The observations were performed in two runs, one before and around the optical maximum, aimed at detecting the early emission from the decay of $^{56}$Ni, and another after this maximum, aimed at detecting the emission of $^{56}$Co. The observations performed with the instruments on board INTEGRAL (SPI, IBIS/ISGRI, JEMX and OMC) have been analyzed and compared with the existing models of gamma-ray emission from such kind of supernovae. In this paper, the analysis of the gamma-ray emission has been restricted to the first epoch. B...
Maximum Likelihood Blind Channel Estimation for Space-Time Coding Systems
Hakan A. Çırpan
2002-05-01
Sophisticated signal processing techniques have to be developed for capacity enhancement of future wireless communication systems. In recent years, space-time coding has been proposed to provide significant capacity gains over traditional communication systems in fading wireless channels. Space-time codes are obtained by combining channel coding, modulation, transmit diversity, and optional receive diversity in order to provide diversity at the receiver and coding gain without sacrificing the bandwidth. In this paper, we consider the problem of blind estimation of space-time coded signals along with the channel parameters. Both conditional and unconditional maximum likelihood approaches are developed and iterative solutions are proposed. The conditional maximum likelihood algorithm is based on iterative least squares with projection, whereas the unconditional maximum likelihood approach is developed by means of finite state Markov process modelling. The performance analysis issues of the proposed methods are studied. Finally, some simulation results are presented.
Estimating the Size and Timing of the Maximum Amplitude of Solar Cycle 24
Ke-Jun Li; Peng-Xin Gao; Tong-Wei Su
2005-01-01
A simple statistical method is used to estimate the size and timing of the maximum amplitude of the next solar cycle (cycle 24). Presuming cycle 23 to be a short cycle (as is more likely), the minimum of cycle 24 should occur about December 2006 (±2 months) and the maximum around March 2011 (±9 months), and the amplitude is 189.9 ± 15.5 if it is a fast riser, or about 136 if it is a slow riser. If we presume cycle 23 to be a long cycle (as is less likely), the minimum of cycle 24 should occur about June 2008 (±2 months) and the maximum about February 2013 (±8 months), and the maximum will be about 137 or 80, depending on whether the cycle is a fast riser or a slow riser.
Maximum Likelihood Estimation of Time-Varying Loadings in High-Dimensional Factor Models
Mikkelsen, Jakob Guldbæk; Hillebrand, Eric; Urga, Giovanni
In this paper, we develop a maximum likelihood estimator of time-varying loadings in high-dimensional factor models. We specify the loadings to evolve as stationary vector autoregressions (VAR) and show that consistent estimates of the loadings parameters can be obtained by a two-step maximum likelihood estimation procedure. In the first step, principal components are extracted from the data to form factor estimates. In the second step, the parameters of the loadings VARs are estimated as a set of univariate regression models with time-varying coefficients. We document the finite...
Evaluating the time limit at maximum aerobic speed in elite swimmers. Training implications.
Renoux, J C
2001-12-01
The aim of the present study was to use the concepts of maximum aerobic speed (MAS) and time limit (tlim) to determine the relationship between these two elements, in an attempt to significantly improve both speed and swimming performance during a training season. To this end, an intermittent training model was used, adapted to the value obtained for the time limit at maximum aerobic speed. During a 12-week training period, the maximum aerobic speed of a group of 9 top-ranking varsity swimmers was measured on two occasions, as was the tlim. The values generated indicated that: 1) there was an inverse relationship between MAS and the time this speed could be maintained, confirming the studies by Billat et al. (1994b); 2) a significant increase in MAS occurred over the 12-week period, although no such evolution was seen for the tlim; 3) there was an improvement in performance; 4) the time limit could be used in designing a training program based on intermittent exercises. In addition, the results of the present study should allow swimming coaches to draw up individualized training programs for a given swimmer by taking into consideration maximum aerobic speed, time limit and propelling efficiency.
FlowMax: A Computational Tool for Maximum Likelihood Deconvolution of CFSE Time Courses.
Maxim Nikolaievich Shokhirev
The immune response is a concerted dynamic multi-cellular process. Upon infection, the dynamics of lymphocyte populations are an aggregate of molecular processes that determine the activation, division, and longevity of individual cells. The timing of these single-cell processes is remarkably widely distributed, with some cells undergoing their third division while others undergo their first. High cell-to-cell variability and technical noise pose challenges for interpreting popular dye-dilution experiments objectively. It remains an unresolved challenge to avoid under- or over-interpretation of such data when phenotyping gene-targeted mouse models or patient samples. Here we develop and characterize a computational methodology to parameterize a cell population model in the context of noisy dye-dilution data. To enable objective interpretation of model fits, our method estimates fit sensitivity and redundancy by stochastically sampling the solution landscape, calculating parameter sensitivities, and clustering to determine the maximum-likelihood solution ranges. Our methodology accounts for both technical and biological variability by using a cell fluorescence model as an adaptor during population model fitting, resulting in improved fit accuracy without the need for ad hoc objective functions. We have incorporated our methodology into an integrated phenotyping tool, FlowMax, and used it to analyze B cells from two NFκB knockout mice with distinct phenotypes; we not only confirm previously published findings at a fraction of the expended effort and cost, but reveal a novel phenotype of nfkb1/p105/p50 in limiting the proliferative capacity of B cells following B-cell receptor stimulation. In addition to complementing experimental work, FlowMax is suitable for high-throughput analysis of dye-dilution studies within clinical and pharmacological screens with objective and quantitative conclusions.
Kirkegaard, Poul Henning; Nielsen, Søren R.K.; Micaletti, R. C.;
This paper considers estimation of the Maximum Softening Damage Indicator (MSDI) using time-frequency system identification techniques for an RC-structure subjected to earthquake excitation. The MSDI relates the global damage state of the RC-structure to the relative decrease of the fundamental eigenfre...
Recommended maximum holding times for prevention of discomfort of static standing postures
Miedema, M.C.; Douwes, M.; Dul, J.
1997-01-01
The aim of the present study was threefold: (1) to analyze the influence of posture on the maximum holding time (MHT), (2) to study the possibility of classifying postures on the basis of MHT, and (3) to develop ergonomic recommendations for the MHT of categories of postures. For these purposes data
Efficiency at maximum power output of quantum heat engines under finite-time operation
Wang, Jianhui; He, Jizhou; Wu, Zhaoqi
2012-03-01
We study the efficiency at maximum power, ηm, of irreversible quantum Carnot engines (QCEs) that perform finite-time cycles between a hot and a cold reservoir at temperatures Th and Tc, respectively. For QCEs in the reversible limit (long cycle period, zero dissipation), ηm becomes identical to the Carnot efficiency ηC=1-Tc/Th. For QCE cycles in which nonadiabatic dissipation and the time spent on two adiabats are included, the efficiency ηm at maximum power output is bounded from above by ηC/(2-ηC) and from below by ηC/2. In the case of symmetric dissipation, the Curzon-Ahlborn efficiency ηCA=1-√(Tc/Th) is recovered under the condition that the time allocation between the adiabats and the contact time with the reservoir satisfy a certain relation.
Efficiency at maximum power output of quantum heat engines under finite-time operation.
Wang, Jianhui; He, Jizhou; Wu, Zhaoqi
2012-03-01
We study the efficiency at maximum power, η(m), of irreversible quantum Carnot engines (QCEs) that perform finite-time cycles between a hot and a cold reservoir at temperatures T(h) and T(c), respectively. For QCEs in the reversible limit (long cycle period, zero dissipation), η(m) becomes identical to the Carnot efficiency η(C)=1-T(c)/T(h). For QCE cycles in which nonadiabatic dissipation and the time spent on two adiabats are included, the efficiency η(m) at maximum power output is bounded from above by η(C)/(2-η(C)) and from below by η(C)/2. In the case of symmetric dissipation, the Curzon-Ahlborn efficiency η(CA)=1-√(T(c)/T(h)) is recovered under the condition that the time allocation between the adiabats and the contact time with the reservoir satisfy a certain relation.
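The bounds quoted in these two records can be checked numerically. The sketch below (plain Python, illustrative temperatures only) evaluates the Carnot efficiency, the two bounds ηC/2 and ηC/(2-ηC), and the Curzon-Ahlborn value for a symmetric-dissipation cycle:

```python
# Numerical check of the efficiency-at-maximum-power bounds
# eta_C/2 <= eta_m <= eta_C/(2 - eta_C) and the Curzon-Ahlborn
# value eta_CA = 1 - sqrt(Tc/Th) quoted in the abstract.
import math

def carnot(tc, th):
    return 1.0 - tc / th

def curzon_ahlborn(tc, th):
    return 1.0 - math.sqrt(tc / th)

def bounds(tc, th):
    eta_c = carnot(tc, th)
    return eta_c / 2.0, eta_c / (2.0 - eta_c)

tc, th = 300.0, 600.0          # illustrative reservoir temperatures (K)
lo, hi = bounds(tc, th)
eta_ca = curzon_ahlborn(tc, th)
# The Curzon-Ahlborn efficiency always lies between the two bounds.
assert lo <= eta_ca <= hi
print(f"eta_C = {carnot(tc, th):.4f}, bounds = ({lo:.4f}, {hi:.4f}), eta_CA = {eta_ca:.4f}")
```

For Tc/Th = 1/2 this gives ηC = 0.5, bounds (0.25, 0.3333), and ηCA ≈ 0.2929, consistent with the claim.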
Dong, Yi; Mihalas, Stefan; Russell, Alexander; Etienne-Cummings, Ralph; Niebur, Ernst
2011-11-01
When a neuronal spike train is observed, what can we deduce from it about the properties of the neuron that generated it? A natural way to answer this question is to make an assumption about the type of neuron, select an appropriate model for this type, and then choose the model parameters as those that are most likely to generate the observed spike train. This is the maximum likelihood method. If the neuron obeys simple integrate-and-fire dynamics, Paninski, Pillow, and Simoncelli (2004) showed that its negative log-likelihood function is convex and that, at least in principle, its unique global minimum can thus be found by gradient descent techniques. Many biological neurons are, however, known to generate a richer repertoire of spiking behaviors than can be explained in a simple integrate-and-fire model. For instance, such a model retains only an implicit (through spike-induced currents), not an explicit, memory of its input; an example of a physiological situation that cannot be explained is the absence of firing if the input current is increased very slowly. Therefore, we use an expanded model (Mihalas & Niebur, 2009), which is capable of generating a large number of complex firing patterns while still being linear. Linearity is important because it maintains the distribution of the random variables and still allows maximum likelihood methods to be used. In this study, we show that although convexity of the negative log-likelihood function is not guaranteed for this model, the minimum of this function yields a good estimate for the model parameters, in particular if the noise level is treated as a free parameter. Furthermore, we show that a nonlinear function minimization method (r-algorithm with space dilation) usually reaches the global minimum.
Impact maturity times and citation time windows: The 2-year maximum journal impact factor
Dorta-Gonzalez, Pablo
2013-01-01
Journal metrics are employed for the assessment of scientific scholarly journals from a general bibliometric perspective. In this context, the Thomson Reuters journal impact factors (JIF) are the most widely used citation-based indicators. The 2-year journal impact factor (2-JIF) counts citations to articles one and two years old, while the 5-year journal impact factor (5-JIF) counts citations to articles one to five years old. Nevertheless, these indicators are not comparable among fields of science for two reasons: (i) each field has a different impact maturity time, and (ii) there are systematic differences in publication and citation behaviour across disciplines. In fact, the 5-JIF first appeared in the Journal Citation Reports (JCR) in 2007 with the purpose of making impacts more comparable in fields in which impact matures slowly. However, there is no single fixed impact maturity time valid for all fields. In some of them two years provides a good performance whereas in others three or more years are...
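The two impact-factor definitions can be made concrete with a small sketch; the citation and item counts below are invented for illustration, not real journal data:

```python
# Sketch of the 2-year and 5-year journal impact factors described above.
# Counts are hypothetical illustration data.

def impact_factor(citations_by_age, items_by_age, window):
    """Citations in the census year to items 1..window years old,
    divided by the number of citable items published in those years."""
    cites = sum(citations_by_age[age] for age in range(1, window + 1))
    items = sum(items_by_age[age] for age in range(1, window + 1))
    return cites / items

# age of cited item (years) -> citations received this year / items published
citations = {1: 120, 2: 180, 3: 150, 4: 90, 5: 60}
items     = {1: 100, 2: 100, 3: 100, 4: 100, 5: 100}

jif2 = impact_factor(citations, items, 2)   # (120+180)/200 = 1.5
jif5 = impact_factor(citations, items, 5)   # 600/500 = 1.2
print(jif2, jif5)
```

In this made-up example the 2-JIF exceeds the 5-JIF because citations peak early; in a slow-maturing field the ordering would reverse, which is exactly the comparability problem the abstract describes.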
Becker, Joseph F.; Valentin, Jose
1996-01-01
The maximum entropy technique was successfully applied to the deconvolution of overlapped chromatographic peaks. An algorithm was written in which the chromatogram was represented as a vector of sample concentrations multiplied by a peak shape matrix. Simulation results demonstrated that there is a trade-off between detector noise and peak resolution, in the sense that an increase in the noise level reduced the peak separation that could be recovered by the maximum entropy method. Real data originating from a sample storage column were also deconvoluted using maximum entropy. Deconvolution is useful in this type of system because the conservation of time-dependent profiles depends on the band-spreading processes in the chromatographic column, which might smooth out the finer details in the concentration profile. The method was also applied to the deconvolution of previously interpreted Pioneer Venus chromatograms. It was found in this case that the correct choice of peak shape function was critical to the sensitivity of maximum entropy in the reconstruction of these chromatograms.
Cluster-Based Maximum Consensus Time Synchronization for Industrial Wireless Sensor Networks †
Wang, Zhaowei; Zeng, Peng; Zhou, Mingtuo; Li, Dong; Wang, Jintao
2017-01-01
Time synchronization is one of the key technologies in Industrial Wireless Sensor Networks (IWSNs), and clustering is widely used in WSNs for data fusion and information collection to reduce redundant data and communication overhead. Considering IWSNs’ demand for low energy consumption, fast convergence, and robustness, this paper presents a novel Cluster-based Maximum consensus Time Synchronization (CMTS) method. It consists of two parts: intra-cluster time synchronization and inter-cluster time synchronization. Based on the theory of distributed consensus, the proposed method utilizes the maximum consensus approach to realize the intra-cluster time synchronization, and adjacent clusters exchange the time messages via overlapping nodes to synchronize with each other. A Revised-CMTS is further proposed to counteract the impact of bounded communication delays between two connected nodes, because the traditional stochastic models of the communication delays would distort in a dynamic environment. The simulation results show that our method reduces the communication overhead and improves the convergence rate in comparison to existing works, as well as adapting to the uncertain bounded communication delays. PMID:28098750
Cluster-Based Maximum Consensus Time Synchronization for Industrial Wireless Sensor Networks
Zhaowei Wang
2017-01-01
Time synchronization is one of the key technologies in Industrial Wireless Sensor Networks (IWSNs), and clustering is widely used in WSNs for data fusion and information collection to reduce redundant data and communication overhead. Considering IWSNs' demand for low energy consumption, fast convergence, and robustness, this paper presents a novel Cluster-based Maximum consensus Time Synchronization (CMTS) method. It consists of two parts: intra-cluster time synchronization and inter-cluster time synchronization. Based on the theory of distributed consensus, the proposed method utilizes the maximum consensus approach to realize the intra-cluster time synchronization, and adjacent clusters exchange the time messages via overlapping nodes to synchronize with each other. A Revised-CMTS is further proposed to counteract the impact of bounded communication delays between two connected nodes, because the traditional stochastic models of the communication delays would distort in a dynamic environment. The simulation results show that our method reduces the communication overhead and improves the convergence rate in comparison to existing works, as well as adapting to the uncertain bounded communication delays.
Cluster-Based Maximum Consensus Time Synchronization for Industrial Wireless Sensor Networks.
Wang, Zhaowei; Zeng, Peng; Zhou, Mingtuo; Li, Dong; Wang, Jintao
2017-01-13
Time synchronization is one of the key technologies in Industrial Wireless Sensor Networks (IWSNs), and clustering is widely used in WSNs for data fusion and information collection to reduce redundant data and communication overhead. Considering IWSNs' demand for low energy consumption, fast convergence, and robustness, this paper presents a novel Cluster-based Maximum consensus Time Synchronization (CMTS) method. It consists of two parts: intra-cluster time synchronization and inter-cluster time synchronization. Based on the theory of distributed consensus, the proposed method utilizes the maximum consensus approach to realize the intra-cluster time synchronization, and adjacent clusters exchange the time messages via overlapping nodes to synchronize with each other. A Revised-CMTS is further proposed to counteract the impact of bounded communication delays between two connected nodes, because the traditional stochastic models of the communication delays would distort in a dynamic environment. The simulation results show that our method reduces the communication overhead and improves the convergence rate in comparison to existing works, as well as adapting to the uncertain bounded communication delays.
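The intra-cluster maximum-consensus step described in these records can be sketched in a few lines: at each round every node adopts the largest clock value heard from its neighbours, so all clocks converge to the cluster maximum within diameter-many rounds. The topology and clock offsets below are illustrative assumptions; the actual CMTS additionally handles clock drift, overlapping nodes, and bounded delays.

```python
# Toy maximum-consensus clock synchronization on a fixed topology.

def max_consensus(clocks, neighbours, rounds):
    """Each round, every node takes the max of its own clock and its
    neighbours' clocks (synchronous update)."""
    for _ in range(rounds):
        new = clocks[:]
        for i, nbrs in enumerate(neighbours):
            new[i] = max([clocks[i]] + [clocks[j] for j in nbrs])
        clocks = new
    return clocks

# 5-node line topology: 0-1-2-3-4
neighbours = [[1], [0, 2], [1, 3], [2, 4], [3]]
clocks = [10.0, 12.5, 11.0, 14.2, 9.8]
synced = max_consensus(clocks, neighbours, rounds=4)
print(synced)  # every node ends at the network maximum, 14.2
```

Converging to the maximum (rather than the average) is what makes the scheme fast and monotone, at the price of synchronizing to one reference node's clock.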
Maximum-Likelihood Detection for Energy-Efficient Timing Acquisition in NB-IoT
2016-01-01
Initial timing acquisition in narrow-band IoT (NB-IoT) devices is done by detecting a periodically transmitted known sequence. The detection has to be done at the lowest possible latency, because the RF transceiver, which dominates the downlink power consumption of an NB-IoT modem, has to be turned on throughout this time. Auto-correlation detectors show low computational complexity from a signal processing point of view at the price of a higher detection latency. In contrast, a maximum likelihood cro...
Prediction of maximum magnitude and original time of reservoir induced seismicity
Anonymous
2001-01-01
This paper deals with the prediction of the potential maximum magnitude and origin time of reservoir-induced seismicity (RIS). The seismological and geological factors and signs of RIS have been studied, and the information quantity they provide about the magnitude of induced seismicity has been calculated. In terms of information quantity, the largest possible magnitude of RIS is determined. The changes of seismic frequency with time are studied using the grey model method, and the time of the largest change rate is taken as the origin time of the main shock. The feasibility of these methods for predicting magnitude and time has been tested on the reservoir-induced seismicity at the Xinfengjiang reservoir, China, and the Koyna reservoir, India.
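The grey-model forecasting mentioned in this abstract is typically a GM(1,1) model, which fits an exponential trend to the accumulated series. A minimal sketch follows, with an invented monotone series standing in for the seismic-frequency data; the paper's exact variant is not specified.

```python
# Minimal GM(1,1) grey model: fit and forecast, illustrative data only.
import numpy as np

def gm11_fit_predict(x0, horizon):
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                     # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])          # background (mean) values
    # Least-squares fit of the grey equation x0[k] = -a*z1[k] + b
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    # De-accumulate to recover the fitted/forecast original series
    return np.concatenate([[x1_hat[0]], np.diff(x1_hat)])

series = [2.87, 3.28, 3.34, 3.73, 3.87, 4.02]   # hypothetical frequency counts
pred = gm11_fit_predict(series, horizon=2)
print(np.round(pred, 3))  # six fitted values followed by two forecast steps
```

The model is attractive for RIS monitoring precisely because it needs only a short, small-sample series, which matches the sparse catalogues available for a single reservoir.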
A real-time maximum-likelihood heart-rate estimator for wearable textile sensors.
Cheng, Mu-Huo; Chen, Li-Chung; Hung, Ying-Che; Yang, Chang Ming
2008-01-01
This paper presents a real-time maximum-likelihood heart-rate estimator for ECG data measured via wearable textile sensors. ECG signals measured from wearable dry electrodes are notorious for their susceptibility to interference from respiration or the motion of the wearer, such that the signal quality may degrade dramatically. To overcome these obstacles, the proposed heart-rate estimator first employs the subspace approach to remove the wandering baseline, then uses a simple nonlinear absolute operation to reduce high-frequency noise contamination, and finally applies the maximum likelihood estimation technique to estimate the interval of R-R peaks. A parameter derived as a byproduct of maximum likelihood estimation is also proposed as an indicator of signal quality. To achieve real-time operation, we develop a simple adaptive algorithm from the numerical power method to realize the subspace filter and apply the fast Fourier transform (FFT) technique for realization of the correlation technique, such that the whole estimator can be implemented in an FPGA system. Experiments are performed to demonstrate the viability of the proposed system.
STATIONARITY OF ANNUAL MAXIMUM DAILY STREAMFLOW TIME SERIES IN SOUTH-EAST BRAZILIAN RIVERS
Jorge Machado Damázio
2015-08-01
DOI: 10.12957/cadest.2014.18302. The paper presents a statistical analysis of annual maximum daily streamflows between 1931 and 2013 in South-East Brazil, focused on detecting and modelling non-stationarity. Flood protection for the large valleys in South-East Brazil is provided by multiple-purpose reservoir systems built during the 20th century, whose design and operation plans were developed assuming stationarity of the historical flood time series. Land-cover changes and the rapidly increasing level of atmospheric greenhouse gases over the last century may be affecting flood regimes in these valleys, so non-stationary modelling may need to be applied to re-assess dam safety and flood-control operation rules for the existing reservoir system. Six annual maximum daily streamflow time series are analysed. The time series are plotted together with fitted smooth loess functions, and non-parametric statistical tests are performed to check the significance of apparent trends shown by the plots. Non-stationarity is modelled by fitting univariate extreme value distribution functions whose location varies linearly with time. Stationary and non-stationary models are compared with the likelihood ratio statistic. In four of the six analysed time series, non-stationary modelling outperformed stationary modelling. Keywords: Stationarity; Extreme Value Distributions; Flood Frequency Analysis; Maximum Likelihood Method.
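A standard non-parametric trend check applied to annual-maximum series of this kind is the Mann-Kendall test. The abstract does not name the specific tests used, so the sketch below is illustrative, run on synthetic data rather than the Brazilian gauge records:

```python
# Mann-Kendall trend test (no-ties variance), a common non-parametric
# check for monotone trends in annual-maximum hydrological series.
import math

def mann_kendall(x):
    n = len(x)
    # S statistic: count of concordant minus discordant pairs
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

rising = [10, 12, 11, 14, 15, 17, 16, 19, 21, 22]   # synthetic annual maxima
s, z = mann_kendall(rising)
print(s, round(z, 2))  # |z| > 1.96 rejects "no trend" at the 5% level
```

A significant Mann-Kendall z would then motivate the parametric step the paper takes: fitting an extreme value distribution whose location parameter varies linearly with time and comparing it to the stationary fit via the likelihood ratio.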
Local-time representation of path integrals.
Jizba, Petr; Zatloukal, Václav
2015-12-01
We derive a local-time path-integral representation for a generic one-dimensional time-independent system. In particular, we show how to rephrase the matrix elements of the Bloch density matrix as a path integral over x-dependent local-time profiles. The latter quantify the time that the sample paths x(t) in the Feynman path integral spend in the vicinity of an arbitrary point x. Generalization of the local-time representation that includes arbitrary functionals of the local time is also provided. We argue that the results obtained represent a powerful alternative to the traditional Feynman-Kac formula, particularly in the high- and low-temperature regimes. To illustrate this point, we apply our local-time representation to analyze the asymptotic behavior of the Bloch density matrix at low temperatures. Further salient issues, such as connections with the Sturm-Liouville theory and the Rayleigh-Ritz variational principle, are also discussed.
The Research of Car-Following Model Based on Real-Time Maximum Deceleration
Longhai Yang
2015-01-01
This paper is concerned with the effect of real-time maximum deceleration in car-following. The real-time maximum deceleration is estimated from vehicle dynamics. It is known that the intelligent driver model (IDM) can control adaptive cruise control (ACC) well. The disadvantages of IDM at high and constant speed are analyzed. A new car-following model applicable to ACC is established accordingly by modifying the desired minimum gap and the structure of the IDM. We simulated the new car-following model and IDM under two different kinds of road conditions. In the first, the vehicles drive on a single road surface, taking dry asphalt as the example in this paper. In the second, vehicles drive onto a different surface; this paper analyzes the situation in which vehicles drive from a dry asphalt road onto an icy road. From the simulation, we found that the new car-following model can not only ensure driving security and comfort but also control the steady driving of the vehicle with a smaller time headway than IDM.
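The baseline IDM acceleration that this abstract modifies has a standard closed form. A sketch with typical textbook parameter values (not the paper's calibrated ones):

```python
# Standard intelligent driver model (IDM) acceleration; parameter values
# are common textbook defaults, not those used in the paper.
import math

def idm_acceleration(v, gap, dv, v0=33.3, T=1.5, a=1.0, b=2.0, s0=2.0, delta=4):
    """v: own speed (m/s), gap: bumper-to-bumper distance (m),
    dv: approach rate v - v_lead (m/s)."""
    s_star = s0 + v * T + v * dv / (2.0 * math.sqrt(a * b))  # desired minimum gap
    return a * (1.0 - (v / v0) ** delta - (s_star / gap) ** 2)

# Free road: large gap, no approach rate -> gentle acceleration.
print(round(idm_acceleration(v=20.0, gap=500.0, dv=0.0), 3))
# Closing fast on a slow leader -> strong braking (negative value).
print(round(idm_acceleration(v=20.0, gap=20.0, dv=10.0), 3))
```

The paper's modification targets exactly the `s_star` term and the braking behaviour, replacing the fixed comfortable deceleration `b` with a deceleration bound estimated in real time from the road surface.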
Fiebig, H R
2002-01-01
We study various aspects of extracting spectral information from time correlation functions of lattice QCD by means of Bayesian inference with an entropic prior, the maximum entropy method (MEM). Correlator functions of a heavy-light meson-meson system serve as a repository for lattice data with diverse statistical quality. Attention is given to spectral mass density functions, inferred from the data, and their dependence on the parameters of the MEM. We propose to employ simulated annealing, or cooling, to solve the Bayesian inference problem, and discuss practical issues of the approach.
Maximum number of habitable planets at the time of Earth's origin: new hints for panspermia?
von Bloh, Werner; Franck, Siegfried; Bounama, Christine; Schellnhuber, Hans-Joachim
2003-04-01
New discoveries have fuelled the ongoing discussion of panspermia, i.e. the transport of life from one planet to another within the solar system (interplanetary panspermia) or even between different planetary systems (interstellar panspermia). The main factor for the probability of interstellar panspermia is the average density of stellar systems containing habitable planets. The combination of recent results for the formation rate of Earth-like planets with our estimations of extrasolar habitable zones allows us to determine the number of habitable planets in the Milky Way over cosmological time scales. We find that there was a maximum number of habitable planets around the time of Earth's origin. If at all, interstellar panspermia was most probable at that time and may have kick-started life on our planet.
Real-time maximum a-posteriori image reconstruction for fluorescence microscopy
Anwar A. Jabbar
2015-08-01
Rapid reconstruction of multidimensional images is crucial for enabling real-time 3D fluorescence imaging. This becomes a key factor for imaging rapidly occurring events in the cellular environment. To facilitate real-time imaging, we have developed a graphics processing unit (GPU) based real-time maximum a-posteriori (MAP) image reconstruction system. The parallel processing capability of the GPU device, which consists of a large number of tiny processing cores, and the adaptability of the image reconstruction algorithm to parallel processing (employing multiple independent computing modules called threads) result in high temporal resolution. Moreover, the proposed quadratic potential based MAP algorithm effectively deconvolves the images as well as suppresses the noise. The multi-node multi-threaded GPU and the Compute Unified Device Architecture (CUDA) efficiently execute the iterative image reconstruction algorithm, which is ≈200-fold faster (for large datasets) when compared to existing CPU based systems.
Real-time maximum a-posteriori image reconstruction for fluorescence microscopy
Jabbar, Anwar A.; Dilipkumar, Shilpa; C K, Rasmi; Rajan, K.; Mondal, Partha P.
2015-08-01
Rapid reconstruction of multidimensional images is crucial for enabling real-time 3D fluorescence imaging. This becomes a key factor for imaging rapidly occurring events in the cellular environment. To facilitate real-time imaging, we have developed a graphics processing unit (GPU) based real-time maximum a-posteriori (MAP) image reconstruction system. The parallel processing capability of the GPU device, which consists of a large number of tiny processing cores, and the adaptability of the image reconstruction algorithm to parallel processing (employing multiple independent computing modules called threads) result in high temporal resolution. Moreover, the proposed quadratic potential based MAP algorithm effectively deconvolves the images as well as suppresses the noise. The multi-node multi-threaded GPU and the Compute Unified Device Architecture (CUDA) efficiently execute the iterative image reconstruction algorithm, which is ≈200-fold faster (for large datasets) when compared to existing CPU based systems.
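A quadratic-potential MAP reconstruction reduces, in the simplest 1-D case, to minimizing a penalized least-squares objective ||y - h*x||² + λ||x||². The sketch below is a toy gradient-descent version of that idea on a synthetic signal, not the paper's CUDA implementation:

```python
# Toy 1-D MAP deconvolution with a quadratic penalty, solved by
# gradient descent: grad = H^T(Hx - y) + lam*x, where H is convolution.
import numpy as np

def map_deconvolve(y, h, lam=0.01, step=0.2, iters=500):
    x = np.zeros_like(y)
    h_adj = h[::-1]                            # flipped kernel = adjoint of H
    for _ in range(iters):
        residual = np.convolve(x, h, mode="same") - y
        grad = np.convolve(residual, h_adj, mode="same") + lam * x
        x -= step * grad
    return x

rng = np.random.default_rng(0)
truth = np.zeros(64)
truth[20], truth[40] = 1.0, 0.6                # two point sources
psf = np.array([0.25, 0.5, 0.25])              # blur kernel
blurred = np.convolve(truth, psf, mode="same") + 0.01 * rng.standard_normal(64)
estimate = map_deconvolve(blurred, psf)
print(int(np.argmax(estimate)))  # brightest recovered source, near index 20
```

The per-pixel independence of the gradient update is what makes this family of algorithms map so well onto GPU threads, which is the point the abstract exploits.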
Monolithic Time Delay Integrated APD Arrays Project
National Aeronautics and Space Administration — The overall goal of the proposed program by Epitaxial Technologies is to develop monolithic time delay integrated avalanche photodiode (APD) arrays with sensitivity...
Onset of effects of testosterone treatment and time span until maximum effects are achieved
Saad, Farid; Aversa, Antonio; Isidori, Andrea M; Zafalon, Livia; Zitzmann, Michael; Gooren, Louis
2011-01-01
Objective Testosterone has a spectrum of effects on the male organism. This review attempts to determine, from published studies, the time-course of the effects induced by testosterone replacement therapy from their first manifestation until maximum effects are attained. Design Literature data on testosterone replacement. Results Effects on sexual interest appear after 3 weeks plateauing at 6 weeks, with no further increments expected beyond. Changes in erections/ejaculations may require up to 6 months. Effects on quality of life manifest within 3–4 weeks, but maximum benefits take longer. Effects on depressive mood become detectable after 3–6 weeks with a maximum after 18–30 weeks. Effects on erythropoiesis are evident at 3 months, peaking at 9–12 months. Prostate-specific antigen and volume rise, marginally, plateauing at 12 months; further increase should be related to aging rather than therapy. Effects on lipids appear after 4 weeks, maximal after 6–12 months. Insulin sensitivity may improve within few days, but effects on glycemic control become evident only after 3–12 months. Changes in fat mass, lean body mass, and muscle strength occur within 12–16 weeks, stabilize at 6–12 months, but can marginally continue over years. Effects on inflammation occur within 3–12 weeks. Effects on bone are detectable already after 6 months while continuing at least for 3 years. Conclusion The time-course of the spectrum of effects of testosterone shows considerable variation, probably related to pharmacodynamics of the testosterone preparation. Genomic and non-genomic effects, androgen receptor polymorphism and intracellular steroid metabolism further contribute to such diversity. PMID:21753068
Jungemann, C.; Pham, A. T.; Meinerzhagen, B.; Ringhofer, C.; Bollhöfer, M.
2006-07-01
The Boltzmann equation for transport in semiconductors is projected onto spherical harmonics in such a way that the resultant balance equations for the coefficients of the distribution function times the generalized density of states can be discretized over energy and real spaces by box integration. This ensures exact current continuity for the discrete equations. Spurious oscillations of the distribution function are suppressed by stabilization based on a maximum entropy dissipation principle avoiding the H transformation. The derived formulation can be used on arbitrary grids as long as box integration is possible. The approach works not only with analytical bands but also with full band structures in the case of holes. Results are presented for holes in bulk silicon based on a full band structure and electrons in a Si NPN bipolar junction transistor. The convergence of the spherical harmonics expansion is shown for a device, and it is found that the quasiballistic transport in nanoscale devices requires an expansion of considerably higher order than the usual first one. The stability of the discretization is demonstrated for a range of grid spacings in the real space and bias points which produce huge gradients in the electron density and electric field. It is shown that the resultant large linear system of equations can be solved in a memory efficient way by the numerically robust package ILUPACK.
Time Reversal Migration for Passive Sources Using a Maximum Variance Imaging Condition
Wang, H.
2017-05-26
The conventional time-reversal imaging approach for microseismic or passive source location is based on focusing the back-propagated wavefields from each recorded trace into a source image. It suffers from strong background noise and limited acquisition aperture, which may create unexpected artifacts and cause errors in the source location. To overcome this problem, we propose a new imaging condition for microseismic imaging based on comparing the amplitude variance in certain windows, and use it to suppress the artifacts as well as find the correct location of passive sources. Instead of simply searching for the maximum-energy point in the back-propagated wavefield, we calculate the amplitude variances over a window moving along both the space and time axes to create a highly resolved passive-event image. The variance operation has negligible cost compared with the forward/backward modeling operations, which shows that the maximum variance imaging condition is efficient and effective. We test our approach numerically on a simple three-layer model and on a piece of the Marmousi model, both of which have shown reasonably good results.
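The moving-window variance scoring can be sketched directly. The "wavefield" below is a toy array with an oscillatory focus embedded in noise, not an actual back-propagated field:

```python
# Maximum-variance imaging condition on a toy space-time array:
# score each window by amplitude variance instead of picking the
# single maximum-energy sample.
import numpy as np

def max_variance_pick(wavefield, win):
    """wavefield: 2-D array (space x time); returns the top-left (x, t)
    of the window with the largest amplitude variance."""
    nx, nt = wavefield.shape
    best, best_pos = -1.0, (0, 0)
    for ix in range(nx - win + 1):
        for it in range(nt - win + 1):
            v = wavefield[ix:ix + win, it:it + win].var()
            if v > best:
                best, best_pos = v, (ix, it)
    return best_pos

rng = np.random.default_rng(1)
field = 0.05 * rng.standard_normal((50, 80))            # weak background noise
field[30:34, 40:44] += np.array([[3, -3, 3, -3]] * 4)   # oscillatory focus
ix, it = max_variance_pick(field, win=4)
print(ix, it)  # the window lands on the high-variance focus near (30, 40)
```

Because variance rewards coherent oscillation within a window rather than a single large sample, an isolated noise spike scores poorly, which is the artifact-suppression property the abstract claims.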
Abhishek Khanna
2012-01-01
We revisit the problem of optimal power extraction in four-step cycles (two adiabatic and two heat-transfer branches) when the finite-rate heat transfer obeys a linear law and the heat reservoirs have finite heat capacities. The heat-transfer branch follows a polytropic process in which the heat capacity of the working fluid stays constant. For the case of an ideal gas as working fluid and a given switching time, it is shown that maximum work is obtained at the Curzon-Ahlborn efficiency. Our expressions clearly show the dependence on the relative magnitudes of the heat capacities of the fluid and the reservoirs. Many previous formulae, including infinite reservoirs, infinite-time cycles, and Carnot-like and non-Carnot-like cycles, are recovered as special cases of our model.
Component Prioritization Schema for Achieving Maximum Time and Cost Benefits from Software Testing
Srivastava, Praveen Ranjan; Pareek, Deepak
Software testing is any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results. Defining the end of software testing is a crucial feature of any software development project. A premature release involves risks such as undetected bugs, the cost of fixing faults later, and discontented customers. Any software organization wants to achieve the maximum possible benefits from software testing with minimum resources. Testing time and cost need to be optimized to achieve a competitive edge in the market. In this paper, we propose a schema, called the Component Prioritization Schema (CPS), to achieve an effective and uniform prioritization of software components. This schema serves as an extension to the Non-Homogeneous Poisson Process based Cumulative Priority Model. We also introduce an approach for handling time-intensive versus cost-intensive projects.
Maximum Likelihood Timing and Carrier Synchronization in Burst-Mode Satellite Transmissions
Morelli, Michele
2007-01-01
This paper investigates the joint maximum likelihood (ML) estimation of the carrier frequency offset, timing error, and carrier phase in burst-mode satellite transmissions over an AWGN channel. The synchronization process is assisted by a training sequence appended in front of each burst and composed of alternating binary symbols. The use of this particular pilot pattern results in an estimation algorithm of affordable complexity that operates in a decoupled fashion. In particular, the frequency offset is measured first and independently of the other parameters. Timing and phase estimates are subsequently computed through simple closed-form expressions. The performance of the proposed scheme is investigated by computer simulation and compared with Cramer-Rao bounds. It turns out that the estimation accuracy is very close to the theoretical limits down to relatively low signal-to-noise ratios. This makes the algorithm well suited for turbo-coded transmissions operating near the Shannon limit.
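The decoupled estimation order (frequency first, then phase) can be illustrated with a standard lag-product frequency estimator on an alternating binary pilot. This is a generic textbook sketch, not the authors' exact ML algorithm, and the pilot length, noise level, and offset values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 64, 1.0             # pilot length and symbol period (assumed)
f0, theta0 = 0.013, 0.7    # true frequency offset (cycles/symbol) and carrier phase
k = np.arange(n)
a = (-1.0) ** k            # alternating binary training symbols
r = a * np.exp(1j * (2 * np.pi * f0 * k * T + theta0))
r += 0.02 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

z = r * a                  # strip the known pilot modulation
# Frequency first, independently of the other parameters (lag-1 phase increment).
f_hat = np.angle(np.sum(z[1:] * np.conj(z[:-1]))) / (2 * np.pi * T)
# Phase afterwards, from the frequency-corrected pilot.
theta_hat = np.angle(np.mean(z * np.exp(-1j * 2 * np.pi * f_hat * k * T)))
print(round(f_hat, 4), round(theta_hat, 2))
```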
MLPnP - A Real-Time Maximum Likelihood Solution to the Perspective-n-Point Problem
Urban, S.; Leitloff, J.; Hinz, S.
2016-06-01
In this paper, a statistically optimal solution to the Perspective-n-Point (PnP) problem is presented. Many solutions to the PnP problem are geometrically optimal, but do not consider the uncertainties of the observations. In addition, it would be desirable to have an internal estimation of the accuracy of the estimated rotation and translation parameters of the camera pose. Thus, we propose a novel maximum likelihood solution to the PnP problem that incorporates image observation uncertainties and remains real-time capable. Further, the presented method is general, as it works with 3D direction vectors instead of 2D image points and is thus able to cope with arbitrary central camera models. This is achieved by projecting (and thus reducing) the covariance matrices of the observations to the corresponding vector tangent space.
Some integral inequalities on time scales
Adnan Tuna; Servet Kutukcu
2008-01-01
In this article, we study the reverse Hölder-type inequality and the Hölder inequality in the two-dimensional case on time scales. We also obtain many integral inequalities by using Hölder inequalities on time scales, which give Hardy's inequalities as special cases.
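For reference, the Hölder inequality on time scales that these results build on reads, for Δ-integrable f, g on [a, b] and conjugate exponents p, q:

```latex
\int_a^b |f(t)\,g(t)|\,\Delta t \;\le\;
\left( \int_a^b |f(t)|^{p}\,\Delta t \right)^{1/p}
\left( \int_a^b |g(t)|^{q}\,\Delta t \right)^{1/q},
\qquad \frac{1}{p} + \frac{1}{q} = 1,\quad p > 1 .
```

Taking the time scale to be the reals recovers the classical integral Hölder inequality, and taking it to be the integers recovers the discrete (sum) form.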
Liu, Jian; Miller, William H.
2008-08-01
The maximum entropy analytic continuation (MEAC) method is used to extend the range of accuracy of the linearized semiclassical initial value representation (LSC-IVR)/classical Wigner approximation for real-time correlation functions. The LSC-IVR provides a very effective 'prior' for the MEAC procedure since it is very good for short times, exact for all times and temperatures for harmonic potentials (even for correlation functions of nonlinear operators), and becomes exact in the classical high-temperature limit. This combined MEAC+LSC/IVR approach is applied here to two highly nonlinear dynamical systems: a pure quartic potential in one dimension and liquid para-hydrogen at two thermal state points (25 K and 14 K, under nearly zero external pressure). The former example shows the MEAC procedure to be a very significant enhancement of the LSC-IVR, for correlation functions of both linear and nonlinear operators, and especially at low temperature, where semiclassical approximations are least accurate. For liquid para-hydrogen, the LSC-IVR is seen to be excellent already at T = 25 K, but the MEAC procedure produces a significant correction at the lower temperature (T = 14 K). Comparisons are also made of how the MEAC procedure is able to provide corrections for other trajectory-based dynamical approximations when used as priors.
Glottal closure instant and voice source analysis using time-scale lines of maximum amplitude
Christophe D’Alessandro; Nicolas Sturmel
2011-10-01
Time-scale representation of voiced speech is applied to voice quality analysis, by introducing the Line of Maximum Amplitude (LoMA) method. This representation takes advantage of the tree patterns observed for voiced speech periods in the time-scale domain. For each period, the optimal LoMA is computed by linking amplitude maxima at each scale of a wavelet transform, using a dynamic programming algorithm. A time-scale analysis of the linear acoustic model of speech production shows several interesting properties. The LoMA points to the glottal closure instants. The LoMA phase delay is linked to the voice open quotient. The cumulated amplitude along the LoMA is related to voicing amplitude. The LoMA spectral centre of gravity is an indication of voice spectral tilt. Following these theoretical considerations, experimental results are reported. Comparative evaluation demonstrates that the LoMA is an effective method for the detection of Glottal Closure Instants (GCI). The effectiveness of LoMA analysis for open quotient, amplitude and spectral tilt estimations is also discussed with the help of some examples.
Schaefer, Andreas; Wenzel, Friedemann
2017-04-01
technically trades time with space, considering subduction zones where we have likely not observed the maximum possible event yet. However, by identifying sources of the same class, the not-yet observed temporal behavior can be replaced by spatial similarity among different subduction zones. This database aims to enhance the research and understanding of subduction zones and to quantify their potential in producing mega earthquakes considering potential strong motion impact on nearby cities and their tsunami potential.
Time-optimal excitation of maximum quantum coherence: Physical limits and pulse sequences
Köcher, S. S.; Heydenreich, T.; Zhang, Y.; Reddy, G. N. M.; Caldarelli, S.; Yuan, H.; Glaser, S. J.
2016-04-01
Here we study the optimum efficiency of the excitation of maximum quantum (MaxQ) coherence using analytical and numerical methods based on optimal control theory. The theoretical limit of the achievable MaxQ amplitude and the minimum time to achieve this limit are explored for a set of model systems consisting of up to five coupled spins. In addition to arbitrary pulse shapes, two simple pulse sequence families of practical interest are considered in the optimizations. Compared to conventional approaches, substantial gains were found both in the achieved MaxQ amplitude and in pulse sequence durations. For a model system, theoretically predicted gains of a factor of three over the conventional pulse sequence were experimentally demonstrated. Motivated by the numerical results, two novel analytical transfer schemes were also found: compared to conventional approaches based on non-selective pulses and delays, double-quantum coherence in two-spin systems can be created twice as fast using isotropic mixing and hard spin-selective pulses. It is also proved that in a chain of three weakly coupled spins with the same coupling constants, triple-quantum coherence can be created in a time-optimal fashion using so-called geodesic pulses.
Wu, Yuanfeng; Gao, Lianru; Zhang, Bing; Zhao, Haina; Li, Jun
2014-01-01
We present a parallel implementation of the optimized maximum noise fraction (G-OMNF) transform algorithm for feature extraction of hyperspectral images on commodity graphics processing units (GPUs). The proposed approach exploits the algorithm's data-level concurrency and optimizes the computing flow. We first defined a three-dimensional grid, in which each thread calculates a sub-block of data to easily facilitate the spatial and spectral neighborhood data searches in noise estimation, which is one of the most important steps involved in OMNF. Then, we optimized the processing flow and computed the noise covariance matrix before computing the image covariance matrix to reduce the original hyperspectral image data transmission. These optimization strategies can greatly improve the computing efficiency and can be applied to other feature extraction algorithms. The proposed parallel feature extraction algorithm was implemented on an Nvidia Tesla GPU using the compute unified device architecture and the basic linear algebra subroutines library. Through experiments on several real hyperspectral images, our GPU parallel implementation provides a significant speedup of the algorithm compared with the CPU implementation, especially for highly data-parallelizable and arithmetically intensive algorithm parts, such as noise estimation. In order to further evaluate the effectiveness of G-OMNF, we used two different applications, spectral unmixing and classification, for evaluation. Considering the sensor scanning rate and the data acquisition time, the proposed parallel implementation met the on-board real-time feature extraction requirement.
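The core of a noise-fraction transform (forming the noise covariance first, then ordering components by signal-to-noise ratio) can be sketched on the CPU with NumPy. This is a generic MNF-style sketch under a synthetic rank-1 signal, not the paper's GPU implementation:

```python
import numpy as np

def mnf(data, noise):
    """Noise-fraction transform sketch.

    data  : (n_pixels, n_bands) hyperspectral samples.
    noise : (n_pixels, n_bands) noise estimates, e.g. from spatial differencing.
    Returns components ordered from highest to lowest signal-to-noise ratio.
    """
    # Noise covariance is formed first, mirroring the optimized processing flow.
    Sn = np.cov(noise, rowvar=False)
    Sd = np.cov(data, rowvar=False)
    # Whiten the noise via Cholesky, then diagonalize the whitened data covariance.
    L = np.linalg.cholesky(Sn)
    Linv = np.linalg.inv(L)
    M = Linv @ Sd @ Linv.T
    w, V = np.linalg.eigh(M)            # eigenvalues ascending
    order = np.argsort(w)[::-1]         # descending SNR
    A = Linv.T @ V[:, order]            # transform matrix
    return data @ A

rng = np.random.default_rng(2)
signal = rng.standard_normal((500, 1)) @ rng.standard_normal((1, 6))  # rank-1 signal
noise = 0.3 * rng.standard_normal((500, 6))
comps = mnf(signal + noise, noise)
# The leading component should capture the high-SNR signal direction.
print(comps.var(axis=0).argmax())
```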
Some Nonlinear Integral Inequalities on Time Scales
Li Wei Nian
2007-01-01
The purpose of this paper is to investigate some nonlinear integral inequalities on time scales. Our results unify and extend some continuous inequalities and their corresponding discrete analogues. The theoretical results are illustrated by a simple example at the end of this paper.
Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors
Langbein, John O.
2017-01-01
Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α, with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi:10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
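The "data filter" view of power-law noise can be illustrated with the standard fractional-integration recursion: filtering white noise with coefficients h_k produces noise with spectral index α, and a lower-triangular Toeplitz matrix built from the h_k acts as a square root of the noise covariance. This is a generic sketch of the filter idea, not Langbein's implementation:

```python
import numpy as np

def powerlaw_filter(alpha, n):
    """First n impulse-response coefficients of the fractional-integration
    filter that colors white noise into 1/f^alpha power-law noise
    (Hosking-style recursion)."""
    h = np.empty(n)
    h[0] = 1.0
    for k in range(1, n):
        h[k] = h[k - 1] * (k - 1 + alpha / 2.0) / k
    return h

# alpha = 2 corresponds to a random walk: the filter reduces to a running sum.
print(powerlaw_filter(2.0, 5))   # all coefficients equal 1
# alpha = 1 (flicker noise) gives slowly decaying coefficients.
print(powerlaw_filter(1.0, 4))
```

Stacking the coefficients into a lower-triangular Toeplitz matrix T gives a covariance contribution of the form σ²TTᵀ, which is the quantity the time-domain MLE must invert.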
The timing of the maximum extent of the Rhone Glacier at Wangen a.d. Aare
Ivy-Ochs, S.; Schluechter, C. [Bern Univ. (Switzerland); Kubik, P.W. [Paul Scherrer Inst. (PSI), Villigen (Switzerland); Beer, J. [EAWAG, Duebendorf (Switzerland)
1997-09-01
Erratic blocks found in the region of Wangen a.d. Aare delineate the maximum position of the Solothurn lobe of the Rhone Glacier. ¹⁰Be and ²⁶Al exposure ages of three of these blocks show that the glacier withdrew from its maximum position at or slightly before 20,000 ± 1800 years ago. (author)
Jat, Prahlad; Serre, Marc L
2016-12-01
Widespread contamination of surface water by chloride is an emerging environmental concern. Consequently, accurate and cost-effective methods are needed to estimate chloride along all river miles of potentially contaminated watersheds. Here we introduce a Bayesian Maximum Entropy (BME) space/time geostatistical estimation framework that uses river distances, and we compare it with Euclidean BME to estimate surface water chloride from 2005 to 2014 in the Gunpowder-Patapsco, Severn, and Patuxent subbasins in Maryland. River BME improves the cross-validation R² by 23.67% over Euclidean BME, and river BME maps are significantly different from Euclidean BME maps, indicating that it is important to use river BME maps to assess water quality impairment. The river BME maps of chloride concentration show wide contamination throughout Baltimore and Columbia-Ellicott cities, the disappearance of a clean buffer separating these two large urban areas, and the emergence of multiple localized pockets of contamination in surrounding areas. The number of impaired river miles increased by 0.55% per year in 2005-2009 and by 1.23% per year in 2011-2014, corresponding to a marked acceleration of the rate of impairment. Our results support the need for control measures and increased monitoring of unassessed river miles.
Novel Maximum-based Timing Acquisition for Spread-Spectrum Communications
Sibbetty, Taylor; Moradiz, Hussein; Farhang-Boroujeny, Behrouz
2016-12-01
This paper proposes and analyzes a new packet detection and timing acquisition method for spread spectrum systems. The proposed method provides an enhancement over the typical thresholding techniques that have been proposed for direct sequence spread spectrum (DS-SS). The effective implementation of thresholding methods typically requires accurate knowledge of the received signal-to-noise ratio (SNR), which is particularly difficult to estimate in spread spectrum systems. Instead, we propose a method which utilizes a consistency metric of the location of maximum samples at the output of a filter matched to the spread spectrum waveform to achieve acquisition, and does not require knowledge of the received SNR. Through theoretical study, we show that the proposed method offers a low probability of missed detection over a large range of SNR with a corresponding probability of false alarm far lower than other methods. Computer simulations that corroborate our theoretical results are also presented. Although our work here has been motivated by our previous study of a filter bank multicarrier spread-spectrum (FB-MC-SS) system, the proposed method is applicable to DS-SS systems as well.
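The consistency-of-maximum idea can be sketched as follows: record the argmax of the matched-filter magnitude in each spreading period and declare detection when it repeats over consecutive periods, with no SNR threshold anywhere. This is a simplified illustration of the concept, with an invented period length and run-length requirement:

```python
import numpy as np

def acquire(mf_out, n_periods, period, m_required=3):
    """Declare packet detection when the per-period argmax of the matched-filter
    magnitude repeats at the same offset for m_required consecutive periods.
    Returns the estimated code-phase offset, or None if no consistent run occurs."""
    peaks = [int(np.argmax(np.abs(mf_out[i * period:(i + 1) * period])))
             for i in range(n_periods)]
    run = 1
    for prev, cur in zip(peaks, peaks[1:]):
        run = run + 1 if cur == prev else 1
        if run >= m_required:
            return cur
    return None

rng = np.random.default_rng(3)
period, n = 64, 6
noise = rng.standard_normal(period * n)
sig = noise.copy()
sig[np.arange(n) * period + 17] += 8.0   # a correlation peak at offset 17 each period
print(acquire(sig, n, period), acquire(noise, n, period))
```

With a signal present, the argmax pins to the true code phase every period; with noise alone, the per-period argmax wanders, so a consistent run almost never occurs, which is what keeps the false-alarm probability low without any SNR estimate.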
Guo-Jheng Yang
2013-08-01
The fragile watermarking technique is used to protect intellectual property rights while also providing security and rigorous protection. In order to protect the copyright of creators, it can be implanted in some representative text or totem. Because all of the media on the Internet are digital, protection has become a critical issue, and determining how to use digital watermarks to protect digital media is thus the topic of our research. This paper uses the Logistic map with parameter u = 4 to generate chaotic dynamic behavior with the maximum entropy of 1. This approach increases the security and rigor of the protection. The main research target of information hiding is determining how to hide confidential data so that the naked eye cannot see the difference. Next, we introduce one method of information hiding. Generally speaking, if the image only goes through Arnold’s cat map and the Logistic map, it seems to lack sufficient security. Therefore, our emphasis is on controlling Arnold’s cat map and the initial value of the chaos system to undergo small changes and generate different chaos sequences. Thus, the current time is used not only to make encryption more stringent but also to enhance the security of the digital media.
Ertas, Gokhan; Gulcur, H Ozcan; Tunaci, Mehtap
2008-05-01
Effectiveness of morphological descriptors based on normalized maximum intensity-time ratio (nMITR) maps generated using a 3 x 3 pixel moving mask on dynamic contrast-enhanced magnetic resonance (MR) mammograms is studied for assessment of malignancy. After a rough indication of the volume of interest on the nMITR maps, lesions are automatically segmented. Two-dimensional (2D) convexity, normalized complexity, extent, and eccentricity, as well as three-dimensional (3D) versions of these descriptors and contact surface area ratio, are computed. On a data set consisting of dynamic contrast-enhanced MR (DCE-MR) mammograms from 51 women that contain 26 benign and 32 malignant lesions, 3D convexity, complexity, and extent are found to reflect aggressiveness of malignancy better than 2D descriptors. Contact surface area ratio, which is easily adaptable to different imaging resolutions, is found to be the most significant and accurate descriptor (75% sensitivity, 88% specificity, 89% positive predictive value, and 74% negative predictive value).
The Maximum Coping Time Analysis of the ELAP for the OPR1400
Shin, Sung Hyun; Hah, Chang Joo [KINGS, Ulsan (Korea, Republic of); Jung, Si Chae; Lee, Chang Gyun [KEPCO E and C, Daejeon (Korea, Republic of)
2014-05-15
There have been many evaluations of and recommendations for the extended Station Black Out (SBO) condition of nuclear power plants. For example, 'SECY-11-0093/0137' is a recommendation of the NRC and 'WCAP-17601-P' is an evaluation by the PWROG. The extended loss of AC power (ELAP) can be defined as the same as the extended (or prolonged) SBO, i.e., a Loss of Offsite Power (LOOP) condition with loss of all Emergency Diesel Generators (EDGs) and the Alternative Alternating Current (AAC) source, but with the Direct Current (DC) source available. This evaluation provides NSSS responses to an ELAP for the OPR1000 unit. The results describe the phenomena which occur during the ELAP and the maximum coping time until a core uncovery condition. It is assumed for this case that sufficient SG secondary makeup inventory exists or can be attained, so that the duration of the ELAP prior to core damage depends solely upon the loss of inventory from the RCS. Even with a limited RCS cooldown and depressurization, and conservatively high assumed RCP seal leakage, the plant can be sustained for over 65 hours prior to core uncovery.
Johann A. Briffa
2014-06-01
In this study, the authors consider time-varying block (TVB) codes, which generalise a number of previous synchronisation error-correcting codes. They also consider various practical issues related to maximum a posteriori (MAP) decoding of these codes. Specifically, they give an expression for the expected distribution of drift between transmitter and receiver because of synchronisation errors. They determine an appropriate choice for state space limits based on the drift probability distribution. In turn, they obtain an expression for the decoder complexity under given channel conditions in terms of the state space limits used. For a given state space, they also give a number of optimisations that reduce the algorithm complexity with no further loss of decoder performance. They also show how the MAP decoder can be used in the absence of known frame boundaries, and demonstrate that an appropriate choice of decoder parameters allows the decoder to approach the performance when frame boundaries are known, at the expense of some increase in complexity. Finally, they express some existing constructions as TVB codes, comparing performance with published results and showing that improved performance is possible by taking advantage of the flexibility of TVB codes.
Fast Maximum-Likelihood Decoder for Quasi-Orthogonal Space-Time Block Code
Adel Ahmadi
2015-01-01
Motivated by the decompositions of sphere- and QR-based methods, in this paper we present an extremely fast maximum-likelihood (ML) detection approach for quasi-orthogonal space-time block codes (QOSTBC). The proposed algorithm, with a relatively simple design, exploits the structure of quadrature amplitude modulation (QAM) constellations to achieve its goal and can be extended to any arbitrary constellation. Our decoder utilizes a new decomposition technique for the ML metric which divides the metric into independent positive parts and a positive interference part. Search spaces of symbols are substantially reduced by employing the independent parts and statistics of noise. Symbols within the search spaces are successively evaluated until the metric is minimized. Simulation results confirm that the proposed decoder’s performance is superior to many of the recently published state-of-the-art solutions in terms of complexity level. More specifically, it was possible to verify that application of the new algorithms with 1024-QAM would decrease the computational complexity compared to the state-of-the-art solution with 16-QAM.
Susanne Wegener
After recanalization, cerebral blood flow (CBF) can increase above baseline in cerebral ischemia. However, the significance of post-ischemic hyperperfusion for tissue recovery remains unclear. To analyze the course of post-ischemic hyperperfusion and its impact on vascular function, we used magnetic resonance imaging (MRI) with pulsed arterial spin labeling (pASL) and measured CBF quantitatively during and after a 60-minute transient middle cerebral artery occlusion (MCAO) in adult rats. We added a 5% CO2 challenge to analyze vasoreactivity in the same animals. Results from MRI were compared to histological correlates of angiogenesis. We found that CBF in the ischemic area recovered within one day and reached values significantly above contralateral thereafter. The extent of hyperperfusion changed over time, which was related to final infarct size: early (day 1) maximal hyperperfusion was associated with smaller lesions, whereas a later (day 4) maximum indicated large lesions. Furthermore, after initial vasoparalysis within the ischemic area, vasoreactivity on day 14 was above baseline in a fraction of animals, along with a higher density of blood vessels in the ischemic border zone. These data provide further evidence that late post-ischemic hyperperfusion is a sequel of ischemic damage in regions that are likely to undergo infarction. However, it is transient and its resolution coincides with regaining of vascular structure and function.
Effects of preload 4 repetition maximum on 100-m sprint times in collegiate women.
Linder, Elizabeth E; Prins, Jan H; Murata, Nathan M; Derenne, Coop; Morgan, Charles F; Solomon, John R
2010-05-01
The purpose of this study was to determine the effects of postactivation potentiation (PAP) on track-sprint performance after a preload set of 4 repetition maximum (4RM) parallel back half-squat exercises in collegiate women. All subjects (n = 12) participated in 2 testing sessions over a 3-week period. During the first testing session, subjects performed the Controlled protocol consisting of a 4-minute standardized warm-up, followed by a 4-minute active rest, a 100-m track sprint, a second 4-minute active rest, finalized with a second 100-m sprint. The second testing session, the Treatment protocol, consisted of a 4-minute standardized warm-up, followed by 4-minute active rest, a sprint, a second 4-minute active rest, a warm-up of 4RM parallel back half-squats, a third 9-minute active rest, finalized with a second sprint. The results indicated that there was a significant improvement of 0.19 seconds (p < 0.05) when the sprint was preceded by the 4RM back-squat protocol during Treatment. The standardized effect size, d, was 0.82, indicating a large effect size. Additionally, the results indicated that mean sprint times would be expected to improve by 0.04-0.34 seconds (p < 0.05). The findings suggest that performing a 4RM parallel back half-squat warm-up before a track sprint will have a positive PAP effect on decreasing track-sprint times. Track coaches looking for the "competitive edge" (PAP effect) may re-warm up their sprinters during meets.
Jiang Zhu
2014-01-01
Some delta-nabla-type maximum principles for second-order dynamic equations on time scales are proved. By using these maximum principles, the uniqueness theorems for solutions, approximation theorems for solutions, an existence theorem, and construction techniques for lower and upper solutions of second-order linear and nonlinear initial value problems and boundary value problems on time scales are proved; the oscillation of second-order mixed delta-nabla differential equations is discussed; and some maximum principles for second-order mixed forward and backward difference dynamic systems are proved.
THE ACTIVE INTEGRATED CIRCULAR PROCESS – EXPRESSION OF MAXIMUM SYNTHESIS OF SUSTAINABLE DEVELOPMENT
Done Ioan
2015-06-01
The accelerated pace of economic growth, prompted by the need to reduce disparities between countries, has imposed in the last two decades the adoption of sustainable development principles, particularly as a result of the Rio Declaration on Environment and Development (1992) and the UNESCO Declaration in the fall of 1997. In the literature, sustainable development is essentially considered "an economic and social process that is characterized by a simultaneous and concerted action at global, regional and local level. Its objective is to provide living conditions both for the present and for the future. Sustainable development encompasses the economic, ecological, social and political aspects, linked through cultural and spiritual relationships." (Coşea, 2007) In Romania, achieving sustainable development is a major, difficult objective, because it must be done in terms of convergence with the demands of the economic, social, cultural and political context of the EU, and in terms of completing the transition to a functioning and competitive market economy. In this context, economic competitiveness must be pursued through reindustrialization and, not least, by harnessing the active integrated circular process. Gross value added and the profit chain in the structures of the active integrated circular process must reflect the interests of the forces involved (employers, employees and the state), thereby forming the basis of respect for the correlation between sustainable development, economic growth and increasing national wealth. The elimination or marginalization of certain links in the value and profit chain causes major disruptions or bankruptcy, with direct implications for recognizing and rewarding performance. Essentially, building the active integrated circular process will maximize profit, the foundation of satisfying all economic interests.
Ding, Tao; Kou, Yu; Yang, Yongheng
2017-01-01
…guaranteeing that the entire system operating constraints (e.g., network voltage magnitude) remain within reasonable ranges in this paper. Meanwhile, optimal inverter dispatch is employed to further improve the PV integration by ensuring the optimal set-points of both active power and reactive power for the PV inverters… The problem can be formulated as a mixed integer nonlinear nonconvex program. Furthermore, a sequential interior-point method is utilized to solve this problem. Case studies on the IEEE 33-bus and 69-bus distribution networks and two practical distribution networks in China demonstrate the effectiveness of the proposed method.
L. Wang
2012-09-01
Full Text Available This paper presents a Maximum Sequential Similarity Reasoning (MSSR) algorithm-based method for co-registration of 3D TLS data and 2D floor plans. The co-registration consists of two tasks: estimating a transformation between the two datasets and finding the vertical locations of windows and doors. The method first extracts TLS line sequences and floor plan line sequences from a series of horizontal cross-section bands of the TLS points and floor plans, respectively. Each line sequence is then further decomposed into column vectors defined using local transformation-invariant information between two neighbouring line segments. Based on a normalized cross-correlation similarity score function, the proposed MSSR algorithm is used to iteratively estimate the vertical and horizontal locations of each floor plan by finding the longest matched consecutive column vectors between floor plan line sequences and TLS line sequences. A group matching algorithm is applied to simultaneously determine the final matching results across floor plans and estimate the transformation parameters between floor plans and TLS points. On real datasets, the proposed method demonstrates its ability to deal with occlusions and multiple-matching problems. It also shows the potential to detect conflicts between the floor plan and the as-built situation, which makes it a promising method with many applications in industrial fields.
Becker, L. W. M.; Sejrup, H. P.; Hjelstuen, B. O. B.; Haflidason, H.
2016-12-01
The extent of the NW European ice sheet during the Last Glacial Maximum is fairly well constrained to, at least in periods, the shelf edge. However, the exact timing and varying activity of the largest ice stream, the Norwegian Channel Ice Stream (NCIS), remains uncertain. We here present three sediment records, recovered proximal and distal to the upper NW European continental slope. All age models for the cores are constructed in the same way and are based solely on 14C dating of planktonic foraminifera. The sand-sized sediments in the discussed cores are believed to have been primarily transported by ice rafting. All records suggest ice streaming activity between 25.8 and 18.5 ka BP. However, the core proximal to the mouth of the Norwegian Channel (NC) shows distinct periods of activity and periods of very little coarse sediment input. Out of this, there appear to be at least three well-defined periods of ice streaming activity, each lasting 1.5 to 2 ka, with "pauses" of several hundred years in between. The same core shows a conspicuous variation in several proxies and in sediment colour within the first peak of ice stream activity, compared to the second and third peaks. The light grey colour of the sediment was earlier attributed to Triassic chalk grains, yet all "chalk" grains are in fact mollusc fragments. The low magnetic susceptibility values and the high Ca, high Sr and low Fe content compared to the other peaks suggest a different provenance for the material of the first peak. We therefore suggest that the origin of this material is the British Irish Ice Sheet (BIIS) rather than the Fennoscandian Ice Sheet (FIS). Earlier studies have shown an extent of the BIIS at least to the NC, whereas ice from the FIS likely stayed within the boundaries of the NC. A possible scenario for the different provenance could therefore be the build-up of the BIIS into the NC until it merged with the FIS. At this point the BIIS calved off the shelf edge southwest of the mouth of…
Kozlowski, Dawid; Worthington, Dave
2015-01-01
Many public healthcare systems struggle with excessive waiting lists for elective patient treatment. Different countries address this problem in different ways, and one interesting method entails a maximum waiting time guarantee. Introduced in Denmark in 2002, it entitles patients to treatment at a private hospital in Denmark or at a hospital abroad if the public healthcare system is unable to provide treatment within the stated maximum waiting time guarantee. Although clearly very attractive in some respects, many stakeholders have been very concerned about the negative consequences of the policy on the utilization of public hospital resources. This paper illustrates the use of a queue modelling approach in the analysis of elective patient treatment governed by the maximum waiting time policy. Drawing upon the combined strengths of analytic and simulation approaches, we develop both continuous-time Markov… models, to be used by hospital planners and strategic decision makers.
MIN Htwe, Y. M.
2016-12-01
Myanmar has suffered many times from earthquake disasters and four times from tsunamis according to historical data. The purpose of this study is to estimate the tsunami arrival time and maximum tsunami wave amplitude for the Rakhine coast of Myanmar using the TUNAMI F1 model. I calculate the tsunami arrival time and maximum tsunami wave amplitude at eight selected points on the Rakhine coast, based on a tsunamigenic earthquake source of moment magnitude 8.5 in the Arakan subduction zone off the west coast of Myanmar. The model result indicates that the tsunami waves would first hit Kyaukpyu on the Rakhine coast about 0.05 minutes after the onset of a magnitude 8.5 earthquake, and the maximum tsunami wave amplitude would be 2.37 meters.
Nezhel'skaya, L. A.
2016-09-01
A flow of physical events (photons, electrons, and other elementary particles) is studied. One of the mathematical models of such flows is the modulated MAP flow of events circulating under conditions of unextendable dead time period. It is assumed that the dead time period is an unknown fixed value. The problem of estimation of the dead time period from observations of arrival times of events is solved by the method of maximum likelihood.
Ru/Al Multilayers Integrate Maximum Energy Density and Ductility for Reactive Materials.
Woll, K; Bergamaschi, A; Avchachov, K; Djurabekova, F; Gier, S; Pauly, C; Leibenguth, P; Wagner, C; Nordlund, K; Mücklich, F
2016-01-01
Established and already commercialized energetic materials, such as those based on Ni/Al for joining, lack the adequate combination of high energy density and ductile reaction products. To join components, this combination is required for mechanically reliable bonds. In addition to the improvement of existing technologies, expansion into new fields of application can also be anticipated, which triggers the search for improved materials. Here, we present a comprehensive characterization of the key parameters that enable us to classify the Ru/Al system as a new reactive material among other energetic systems. We find that Ru/Al exhibits the unusual integration of high energy density and ductility. For example, we measured reaction front velocities up to 10.9 (± 0.33) m s(-1) and peak reaction temperatures of about 2000 °C, indicating the elevated energy density. To our knowledge, such high temperatures have never been reported in experiments for metallic multilayers. In situ experiments show the synthesis of a single-phase B2-RuAl microstructure, ensuring improved ductility. Molecular dynamics simulations corroborate the transformation behavior to RuAl. This study fundamentally characterizes the Ru/Al system and demonstrates its enhanced properties, fulfilling the identification requirements of a novel nanoscale energetic material.
On Tuning PI Controllers for Integrating Plus Time Delay Systems
David Di Ruscio
2010-10-01
Full Text Available Some analytical results concerning PI controller tuning based on integrator plus time delay models are worked out and presented. A method is presented for obtaining PI controller parameters, Kp = alpha/(k*tau) and Ti = beta*tau, which ensures a given prescribed ratio, delta = dtau_max/tau, of the maximum time delay error, dtau_max, to the time delay, tau. The cornerstone of this method is a method product parameter, c = alpha*beta. Analytical relations between the PI controller parameters, Ti and Kp, and the time delay error parameter, delta, are presented, and we propose the settings beta = (c/a)*(delta+1) and alpha = a/(delta+1), which give Ti = (c/a)*(delta+1)*tau and Kp = a/((delta+1)*k*tau), where the parameter a is constant in the method product parameter c = alpha*beta. It also turns out that the integral time, Ti, is linear in delta, and the proportional gain, Kp, is inversely proportional to delta+1. For the original Ziegler-Nichols (ZN) method this parameter is approximately c = 2.38, and the presented method may, e.g., be used to obtain new modified ZN parameters with increased robustness margins, as also documented in the paper.
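The tuning relations stated in the abstract can be sketched in a few lines. This is a minimal illustration of the quoted formulas only, not the paper's implementation; the value a = 0.9 is a hypothetical placeholder (the abstract gives c ≈ 2.38 for the original ZN method but does not state a here).

```python
# Sketch of the PI tuning relations as quoted in the abstract.
# c = 2.38 is the value quoted for the original Ziegler-Nichols method;
# a = 0.9 is a HYPOTHETICAL placeholder, not taken from the paper.

def pi_tuning(k, tau, delta, c=2.38, a=0.9):
    """Return (Kp, Ti) for an integrator-plus-time-delay plant.

    k     -- plant gain
    tau   -- time delay
    delta -- prescribed ratio dtau_max / tau (robustness margin)
    """
    alpha = a / (delta + 1)        # so Kp = alpha / (k * tau)
    beta = c / a * (delta + 1)     # so Ti = beta * tau
    return alpha / (k * tau), beta * tau

Kp, Ti = pi_tuning(k=1.0, tau=2.0, delta=1.0)
# Note: Ti grows linearly in delta, Kp shrinks as 1/(delta + 1),
# and alpha * beta == c regardless of delta, as the abstract states.
```

This reproduces the two qualitative claims of the abstract: larger delta (more time-delay robustness) lengthens the integral time and reduces the gain.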
Shen, Hua
2016-10-19
A maximum-principle-satisfying space-time conservation element and solution element (CE/SE) scheme is constructed to solve a reduced five-equation model coupled with the stiffened equation of state for compressible multifluids. We first derive a sufficient condition for CE/SE schemes to satisfy the maximum principle when solving a general conservation law. We then introduce a slope limiter to enforce the sufficient condition, which is applicable to both central and upwind CE/SE schemes. Finally, we implement the upwind maximum-principle-satisfying CE/SE scheme to solve the volume-fraction-based five-equation model for compressible multifluids. Several numerical examples are carried out to carefully examine the accuracy, efficiency, conservativeness and maximum-principle-satisfying property of the proposed approach.
Henning Grosse Ruse-Khan
2009-07-01
Full Text Available International intellectual property (IP) protection is at the heart of controversies over the impact of economic interests on social or environmental concerns. Some see IP rights as unduly encroaching upon human rights and societal interests; others argue for stronger enforcement and additional exclusivity to incentivize new innovations and creations. Underlying these debates is the perception that international IP treaties set out minimum standards of protection, which presumably allow for additional protection with only the sky being the limit. This article challenges this view and explores the idea of maximum standards, or ceilings, within the existing body of international IP law. It looks at the relation between IP treaties and subsequent agreements or national laws which offer stronger protection. In particular, within the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS), an important qualification may serve as a door opener for ceilings: while additional IP protection may not go beyond mandatory limits within TRIPS, the qualification not to "contravene" TRIPS is unlikely to safeguard TRIPS flexibilities against TRIPS-plus norms. The article further identifies and examines the rationales for maximum standards in international IP protection as: (1) legal security and predictability about the boundaries of protection; (2) the global protection of users' rights; and (3) the free movement of goods, services and information. Mandatory limits in the existing IP treaties and in ongoing initiatives offer examples of how these rationales can be implemented. However, most of the relevant treaty norms are optional. The article concludes with some observations on the need for more comprehensive and precise maximum standards.
Wu, Feilong; He, Jizhou; Ma, Yongli; Wang, Jianhui
2014-12-01
We consider the efficiency at maximum power (EMP) of a quantum Otto engine, which uses a spin or a harmonic system as its working substance and works between two heat reservoirs at constant temperatures T(h) and T(c) (< T(h)). The efficiencies at maximum power based on these two different kinds of quantum systems are bounded from above by the same expression η(mp) ≤ η(+) ≡ η(C)²/[η(C) - (1 - η(C))ln(1 - η(C))], with η(C) = 1 - T(c)/T(h) the Carnot efficiency. This expression η(+) possesses the same universality as the CA efficiency η(CA) = 1 - √(1 - η(C)) at small relative temperature difference. Within the context of irreversible thermodynamics, we calculate the Onsager coefficients and show that the value of η(CA) is indeed the upper bound of EMP for an Otto engine working in the linear-response regime.
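The quoted bound and its small-η(C) universality can be checked numerically. This is a minimal sketch of the expressions as given in the abstract, not the authors' derivation: both η(+) and η(CA) share the expansion η(C)/2 + η(C)²/8 + O(η(C)³).

```python
import math

def eta_plus(eta_C):
    """Upper bound on the EMP quoted in the abstract:
    eta_+ = eta_C^2 / [eta_C - (1 - eta_C) * ln(1 - eta_C)]."""
    return eta_C**2 / (eta_C - (1 - eta_C) * math.log(1 - eta_C))

def eta_CA(eta_C):
    """Curzon-Ahlborn efficiency: 1 - sqrt(1 - eta_C)."""
    return 1 - math.sqrt(1 - eta_C)

# At small relative temperature difference both expressions follow the
# universal series eta_C/2 + eta_C^2/8, as the abstract states.
for eC in (0.01, 0.05):
    series = eC / 2 + eC**2 / 8
    assert abs(eta_plus(eC) - series) < eC**3
    assert abs(eta_CA(eC) - series) < eC**3
```

At larger η(C) the two expressions separate, with η(+) lying above η(CA), consistent with η(CA) being the bound only in the linear-response regime.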
de Nazelle, Audrey; Arunachalam, Saravanan; Serre, Marc L
2010-08-01
States in the USA are required to demonstrate future compliance of criteria air pollutant standards by using both air quality monitors and model outputs. In the case of ozone, the demonstration tests aim at relying heavily on measured values, due to their perceived objectivity and enforceable quality. Weight given to numerical models is diminished by integrating them in the calculations only in a relative sense. For unmonitored locations, the EPA has suggested the use of a spatial interpolation technique to assign current values. We demonstrate that this approach may lead to erroneous assignments of nonattainment and may make it difficult for States to establish future compliance. We propose a method that combines different sources of information to map air pollution, using the Bayesian Maximum Entropy (BME) Framework. The approach gives precedence to measured values and integrates modeled data as a function of model performance. We demonstrate this approach in North Carolina, using the State's ozone monitoring network in combination with outputs from the Multiscale Air Quality Simulation Platform (MAQSIP) modeling system. We show that the BME data integration approach, compared to a spatial interpolation of measured data, improves the accuracy and the precision of ozone estimations across the state.
Oudyn, Frederik W; Lyons, David J; Pringle, M J
2012-01-01
Many scientific laboratories follow, as standard practice, a relatively short maximum holding time (within 7 days) for the analysis of total suspended solids (TSS) in environmental water samples. In this study we subsampled from bulk water samples stored at ∼4 °C in the dark, then analysed for TSS at time intervals up to 105 days after collection. The nonsignificant differences in TSS results observed over time demonstrate that storage at ∼4 °C in the dark is an effective method of preserving samples for TSS analysis, far past the 7-day standard practice. Extending the maximum holding time will ease the pressure on sample collectors and laboratory staff, who until now have had to determine TSS within an impractically short period.
Integrated real-time roof monitoring
SHEN Bao-tang; GUO Hua; KING Andrew
2009-01-01
CSIRO has recently developed a real-time roof monitoring system for underground coal mines and successfully tried the system in gate roads at Ulan Mine. The system integrates displacement monitoring, stress monitoring and seismic monitoring in one package. It includes GEL multi-anchor extensometers, vibrating wire uniaxial stress meters, and an ESG seismic monitoring system with microseismic sensors and high-frequency AE sensors. The monitoring system is automated, and the data are collected by a central computer located in an underground non-hazardous area. The data are then transferred to the surface via an optical fiber cable, and the real-time data can be accessed from any location with an Internet connection. Trials of the system in two tailgates at Ulan Mine demonstrate that it is effective for monitoring the behavior and stability of roadways during longwall mining. The continuous roof displacement/stress data show clear precursors of roof falls, and the seismic data (event count and locations) provide insights into the roof failure process during a roof fall.
Simulation of transient viscoelastic flow with second order time integration
Rasmussen, Henrik Koblitz; Hassager, Ole
1995-01-01
The Lagrangian Integral Method (LIM) for the simulation of time-dependent flow of viscoelastic fluids is extended to second order accuracy in the time integration. The method is tested on the established sphere-in-a-cylinder benchmark problem.
The effects of disjunct sampling and averaging time on maximum mean wind speeds
Larsén, Xiaoli Guo; Mann, J.
2006-01-01
Conventionally, the 50-year wind is calculated on the basis of the annual maxima of consecutive 10-min averages. Very often, however, the averages are saved with a temporal spacing of several hours; we call this disjunct sampling. It may also happen that the wind speeds are averaged over a longer time period before being saved. In either case, the extreme wind will be underestimated. This paper investigates the effects of the disjunct sampling interval and the averaging time on the attenuation of the extreme wind estimate by means of a simple theoretical approach as well as measurements…
ANALYTICAL ESTIMATION OF MINIMUM AND MAXIMUM TIME EXPENDITURES OF PASSENGERS AT AN URBAN ROUTE STOP
Gorbachov, P.
2013-01-01
Full Text Available This scientific paper deals with the definition of the average time spent by passengers waiting for transport vehicles at urban stops, and presents the results of analytically modelling this value when the traffic schedule is unknown to the passengers, for two options of vehicle traffic management on the given route.
The Round-Robin Mock Interview: Maximum Learning in Minimum Time
Marks, Melanie; O'Connor, Abigail H.
2006-01-01
Interview skills are critical to a job seeker's success in obtaining employment. However, learning interview skills takes time. This article offers an activity for providing students with interview practice while sacrificing only a single classroom period. The authors begin by reviewing the relevant literature. Then, they outline the process of…
Numerical Integration: One Step at a Time
Yang, Yajun; Gordon, Sheldon P.
2016-01-01
This article looks at the effects that adding a single extra subdivision has on the level of accuracy of some common numerical integration routines. Instead of automatically doubling the number of subdivisions for a numerical integration rule, we investigate what happens with a systematic method of judiciously selecting one extra subdivision for…
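As a minimal illustration of the idea, not the authors' method of selecting where the extra subdivision goes, here is the effect of a single extra subdivision on the composite trapezoidal rule:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n equal subdivisions."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

# Error of integrating sin on [0, pi] (exact value 2) when n -> n + 1:
exact = 2.0
err_n = abs(trapezoid(math.sin, 0.0, math.pi, 8) - exact)
err_n1 = abs(trapezoid(math.sin, 0.0, math.pi, 9) - exact)
assert err_n1 < err_n  # one extra subdivision already reduces the error
```

The article's question is precisely how much of the accuracy gained by doubling n can instead be captured by judiciously adding just one subdivision.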
Lee, C.-H.; Herget, C. J.
1976-01-01
This short paper considers the parameter-identification problem of general discrete-time, nonlinear, multiple input-multiple output dynamic systems with Gaussian white distributed measurement errors. Knowledge of the system parameterization is assumed to be available. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems.
Murray, M P; Baldwin, J M; Gardner, G M; Sepic, S B; Downs, W J
1977-06-01
Isometric torque of the knee flexor and extensor muscles was recorded for 5 seconds at three knee joint positions. The subjects were healthy men in age groups of 20 to 35 and 45 to 65 years. The amplitude and duration of peak torque and the time to peak torque were measured for each contraction. Peak torque was usually maintained for less than 0.1 second and never longer than 0.9 second. At each of the three angles, the mean extensor muscle torque was higher than the mean flexor muscle torque in both age groups, and the mean torque for both muscle groups was higher among the younger than among the older men. The highest average torque was recorded at a knee angle of 60 degrees for the extensor muscles and 45 degrees for the flexor muscles, but this was not always a stereotyped response, either for a given individual or among individuals.
Timing A Pulsed Thin Film Pyroelectric Generator For Maximum Power Density
Smith, A. N.; Hanrahan, B. M.; Neville, C. J.; Jankowski, N. R.
2016-11-01
Pyroelectric thermal-to-electric energy conversion is accomplished by a cyclic process of thermally inducing polarization changes in the material under an applied electric field. The pyroelectric MEMS device investigated consisted of a thin film PZT capacitor with a platinum bottom electrode and an iridium oxide top electrode. Electric fields between 1-20 kV/cm with a 30% duty cycle and frequencies from 0.1-100 Hz were tested with a modulated continuous-wave IR laser with a duty cycle of 20%, creating temperature swings from 0.15-26 °C on the pyroelectric receiver. The net output power of the device was highly sensitive to the phase delay between the laser power and the applied electric field. A thermal model was developed to predict and explain the power loss associated with finite charge and discharge times. Excellent agreement was achieved between the theoretical model and the experimental results for the measured power density versus phase delay. Limitations on the charging and discharging rates result in reduced power and lower efficiency due to a reduced net work per cycle.
Biomechanical events in the time to exhaustion at maximum aerobic speed.
Gazeau, F; Koralsztein, J P; Billat, V
1997-10-01
Recent studies have reported good intra-individual reproducibility, but great inter-individual variation, in the time to exhaustion (tlim) at the maximal aerobic speed (MAS: the lowest speed that elicits VO2max in an incremental treadmill test) in a sample of elite athletes. The purpose of the present study was, on the one hand, to detect modifications of kinematic variables at the end of the tlim test and, on the other hand, to evaluate the possibility that such modifications were factors responsible for the inter-individual variability in tlim. Eleven sub-elite male runners (age = 24 +/- 6 years; VO2max = 69.2 +/- 6.8 ml kg-1 min-1; MAS = 19.2 +/- 1.45 km h-1; tlim = 301.9 +/- 82.7 s) performed two exercise tests on a treadmill (0% slope): an incremental test to determine VO2max and MAS, and an exhaustive constant-velocity test to determine tlim at MAS. Statistically significant modifications were noted in several kinematic variables. The maximal angular velocity of the knee during flexion was the only variable that both changed over the tlim test and influenced the exercise duration. A multiple correlation analysis showed that tlim was predicted by the modifications of four variables (R = 0.995, P < 0.01). These variables are directly or indirectly related to the energetic cost of running. It was concluded that runners who demonstrated stable running styles were able to run longer during the MAS test because of optimal motor efficiency.
Leus, G.; Petré, F.; Moonen, M.
2004-01-01
In the downlink of DS-CDMA, frequency-selectivity destroys the orthogonality of the user signals and introduces multiuser interference (MUI). Space-time chip equalization is an efficient tool to restore the orthogonality of the user signals and suppress the MUI. Furthermore, multiple-input
Global integration in times of crisis
Jensen, Camilla
Past research suggests that a financial crisis event has a dual and ambiguous effect on the exporting strategy of subsidiaries of multinational firms in a value chain and offshoring perspective. From a total volume perspective, exports are expected to contract due to a decline in demand… This paper seeks to reconcile these findings by testing a number of hypotheses about global integration strategies in the context of the global financial crisis and how it affected exporting among multinational subsidiaries operating out of Turkey. Controlling for the impact that depreciations and exchange rate volatility have… integration strategies throughout the course of the global financial crisis.
Ecotoxicology and macroecology--time for integration.
Beketov, Mikhail A; Liess, Matthias
2012-03-01
Despite considerable progress in ecotoxicology, it has become clear that this discipline cannot answer its central questions, such as "What are the effects of toxicants on biodiversity?" and "How are ecosystem functions and services affected by toxicants?". We argue that if such questions are to be answered, a paradigm shift is needed. The current bottom-up approach of ecotoxicology, which relies on small-scale experiments to predict effects on entire ecosystems and landscapes, should be merged with a top-down macroecological approach that is directly focused on ecological effects at large spatial scales and considers ecological systems as integral entities. Analysis of the existing methods in ecotoxicology, ecology, and environmental chemistry shows that such integration is currently possible. Therefore, we conclude that to tackle the current pressing challenges, ecotoxicology has to progress using both the bottom-up and top-down approaches, similar to digging a tunnel from both ends at once.
Santos W. N. dos
2003-01-01
Full Text Available The hot wire technique is considered an effective and accurate means of determining the thermal conductivity of ceramic materials. However, specifically for materials of high thermal diffusivity, the appropriate time interval to be considered in the calculations is a decisive factor in obtaining accurate and consistent results. In this work, a numerical simulation model is proposed with the aim of determining the minimum and maximum measuring times for the hot wire parallel technique. The temperature profile generated by this model is in excellent agreement with the one obtained experimentally by this technique, in which thermal conductivity, thermal diffusivity and specific heat are simultaneously determined from the same experimental temperature transient. Eighteen different specimens of refractory materials and polymers, with thermal diffusivities ranging from 1x10-7 to 70x10-7 m²/s, in the shape of rectangular parallelepipeds of different dimensions, were employed in the experimental programme. An empirical equation relating the minimum and maximum measuring times to the thermal diffusivity of the sample is also obtained.
Calculation of plantar pressure time integral, an alternative approach.
Melai, Tom; IJzerman, T Herman; Schaper, Nicolaas C; de Lange, Ton L H; Willems, Paul J B; Meijer, Kenneth; Lieverse, Aloysius G; Savelberg, Hans H C M
2011-07-01
In plantar pressure measurement, both peak pressure and pressure time integral are used as variables to assess plantar loading. However, the pressure time integral shows a high concordance with peak pressure. Many researchers and clinicians use Novel software (Novel GmbH Inc., Munich, Germany), which calculates this variable as the summation of the products of peak pressure and duration per time sample, which is not a genuine integral of pressure over time. Therefore, an alternative calculation method was introduced. The aim of this study was to explore the relevance of this alternative method in different populations. Plantar pressure variables were measured in 76 people with diabetic polyneuropathy, 33 diabetic controls without polyneuropathy and 19 healthy subjects. Peak pressure and pressure time integral were obtained using Novel software. The quotient of the genuine force time integral over contact area was obtained as the alternative pressure time integral calculation. This new alternative method correlated less strongly with peak pressure than did the pressure time integral as calculated by Novel. The two methods differed significantly, and these differences varied between foot sole areas and between groups. The largest differences were found under the metatarsal heads in the group with diabetic polyneuropathy. From a theoretical perspective, the alternative approach provides a more valid calculation of the pressure time integral. In addition, this study showed that the alternative calculation is of added value, alongside peak pressure calculation, for interpreting adapted plantar pressure patterns, in particular in patients at risk of foot ulceration. Copyright © 2011 Elsevier B.V. All rights reserved.
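The two calculations described above can be contrasted on synthetic data. All numbers below are hypothetical, and the snippet is only a sketch of the two definitions, not of the Novel software's actual implementation:

```python
# Hypothetical per-frame data for one foot-sole area, sampled every dt s:
dt = 0.01
peak_pressure = [100.0, 250.0, 400.0, 300.0, 120.0]   # kPa per frame
force = [200.0, 500.0, 900.0, 650.0, 240.0]           # N per frame
area = 10.0                                           # contact area, cm^2

# Novel-style PTI: summation of (peak pressure x duration) per time sample.
pti_novel = sum(p * dt for p in peak_pressure)

# Alternative: genuine force-time integral (trapezoidal) over contact area.
force_time_integral = sum(
    0.5 * (force[i] + force[i + 1]) * dt for i in range(len(force) - 1)
)
pti_alternative = force_time_integral / area
```

The two measures generally differ because the peak pressure is taken at a single sensor per frame, whereas force divided by contact area averages over the whole loaded region, which is why the alternative correlates less with peak pressure.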
Improving Music Genre Classification by Short-Time Feature Integration
Meng, Anders; Ahrendt, Peter; Larsen, Jan
2005-01-01
…of seconds instead of milliseconds. The problem of making new features on the larger time scale from the short-time features (feature integration) has received only little attention. This paper investigates different methods for feature integration and late information fusion for music genre classification. A new feature integration technique, the AR model, is proposed and seemingly outperforms the commonly used mean-variance features.
Almog, Assaf
2014-01-01
The dynamics of complex systems, from financial markets to the brain, can be monitored in terms of time series of activity of their fundamental elements (such as stocks or neurons, respectively). While the main focus of time series analysis is on the magnitude of temporal increments, a significant piece of information is encoded in the binary projection (i.e. the sign) of such increments. In this paper we provide further evidence of this by showing strong nonlinear relationships between binary and non-binary properties of financial time series. We then introduce an information-theoretic approach to the analysis of the binary signature of single and multiple time series. Through the definition of maximum-entropy ensembles of binary matrices, we quantify the information encoded in the simplest binary properties of real time series and identify the most informative property given a set of measurements. Our formalism is able to replicate the observed binary/non-binary relations very well, and to mathematically…
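A minimal sketch of the binary projection the abstract refers to, keeping only the sign of each temporal increment (this illustrates the definition only, not the authors' maximum-entropy formalism):

```python
# Hypothetical price series; the binary projection retains only whether
# each increment is positive (+1), negative (-1) or zero (0).
series = [100.0, 101.5, 101.2, 101.2, 103.0, 102.1]
increments = [b - a for a, b in zip(series, series[1:])]
signs = [1 if x > 0 else -1 if x < 0 else 0 for x in increments]
# signs -> [1, -1, 0, 1, -1]
```

The paper's point is that this coarse binary signature still carries a surprising amount of the information present in the full-magnitude series.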
Improving Music Genre Classification by Short Time Feature Integration
Meng, Anders; Ahrendt, Peter; Larsen, Jan
…The problem of making new features on the larger time scale from the short-time features (feature integration) has received only little attention. This paper investigates different methods for feature integration (early information fusion) and late information fusion (assembling of probabilistic outputs)…
TimeLine: visualizing integrated patient records.
Bui, Alex A T; Aberle, Denise R; Kangarloo, Hooshang
2007-07-01
An increasing amount of data is now accrued in medical information systems; however, the organization of this data is still primarily driven by data source, and does not support the cognitive processes of physicians. As such, new methods to visualize patient medical records are becoming imperative in order to assist physicians with clinical tasks and medical decision-making. The TimeLine system is a problem-centric temporal visualization for medical data: information contained with medical records is reorganized around medical disease entities and conditions. Automatic construction of the TimeLine display from existing clinical repositories occurs in three steps: 1) data access, which uses an eXtensible Markup Language (XML) data representation to handle distributed, heterogeneous medical databases; 2) data mapping and reorganization, reformulating data into hierarchical, problemcentric views; and 3) data visualization, which renders the display to a target presentation platform. Leveraging past work, we describe the latter two components of the TimeLine system in this paper, and the issues surrounding the creation of medical problems lists and temporal visualization of medical data. A driving factor in the development of TimeLine was creating a foundation upon which new data types and the visualization metaphors could be readily incorporated.
Chiuh Cheng Chyu
2012-06-01
This paper studies the unrelated parallel machine scheduling problem with three minimization objectives: makespan, maximum earliness, and maximum tardiness (MET-UPMSP). The last two objectives combined relate to the just-in-time (JIT) performance of a solution. Three hybrid algorithms are presented to solve the MET-UPMSP: reactive GRASP with path relinking, a dual-archived memetic algorithm (DAMA), and SPEA2. To improve solution quality, min-max matching is included in the decoding scheme of each algorithm. An experiment is conducted to evaluate the performance of the three algorithms, using 100 (jobs) x 3 (machines) and 200 x 5 problem instances with three combinations of the two due-date factors, tightness and range. The numerical results indicate that DAMA performs best and GRASP second for most problem instances on three performance metrics: HVR, GD, and Spread. The experimental results also show that incorporating min-max matching into the decoding scheme significantly improves solution quality for the two population-based algorithms. It is worth noting that the solutions produced by DAMA with matching decoding can be used as benchmarks to evaluate the performance of other algorithms.
Zhaoyong Mao
2016-01-01
This paper addresses the power generation control system of a new drag-type vertical axis turbine with several retractable blades. The returning blades can be entirely hidden in the drum, and negative torques are then considerably reduced as the drum shields the blades; thus, the power efficiency increases. Regarding the control, a Linear Quadratic Tracking (LQT) optimal control algorithm for Maximum Power Point Tracking (MPPT) is proposed to ensure that the wave energy conversion system can operate highly effectively under fluctuating conditions and that the tracking process accelerates over time. Two-dimensional Computational Fluid Dynamics (CFD) simulations are performed to obtain the maximum power points of the turbine's output, and the least squares method is employed to fit the tip speed ratio curve. The efficacy of the steady and dynamic performance of the control strategy was verified using Matlab/Simulink software. These validation results show that the proposed system can compensate for power fluctuations and is effective in terms of power regulation.
Asymptotics for Nonlinear Transformations of Fractionally Integrated Time Series
Anonymous
2007-01-01
The asymptotic theory for nonlinear transformations of fractionally integrated time series is developed. Using the fractional occupation times formula, various nonlinear functions of fractionally integrated series such as ARFIMA time series are studied, and the asymptotic distributions of the sample moments of such functions are obtained and analyzed. The transformations considered in this paper include a variety of functions such as regular functions, integrable functions and asymptotically homogeneous functions that are often used in practical nonlinear econometric analysis. It is shown that the asymptotic theory of nonlinear transformations of original and normalized fractionally integrated processes is different from that of fractionally integrated processes, but is similar to the asymptotic theory of nonlinear transformations of integrated processes.
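For readers unfamiliar with ARFIMA-type processes, a fractionally integrated series can be simulated from its truncated moving-average representation and then passed through a nonlinear transformation. The sketch below is a generic illustration of that setup, not the paper's construction:

```python
import numpy as np

def frac_int_weights(d, n):
    # MA weights psi_k = Gamma(k+d) / (Gamma(d) Gamma(k+1)),
    # computed via the recursion psi_k = psi_{k-1} * (k-1+d) / k.
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 + d) / k
    return w

rng = np.random.default_rng(1)
n, d = 2000, 0.3                      # |d| < 0.5: stationary long memory
eps = rng.standard_normal(n)
w = frac_int_weights(d, n)

# Truncated ARFIMA(0, d, 0): x_t = sum_{k<=t} psi_k * eps_{t-k}
x = np.array([w[:t + 1][::-1] @ eps[:t + 1] for t in range(n)])

y = np.abs(x)                         # a regular nonlinear transformation
print(y.mean())                       # sample moment of the transformed series
```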
Almog, Assaf; Garlaschelli, Diego
2014-09-01
The dynamics of complex systems, from financial markets to the brain, can be monitored in terms of multiple time series of activity of the constituent units, such as stocks or neurons, respectively. While the main focus of time series analysis is on the magnitude of temporal increments, a significant piece of information is encoded into the binary projection (i.e. the sign) of such increments. In this paper we provide further evidence of this by showing strong nonlinear relations between binary and non-binary properties of financial time series. These relations are a novel quantification of the fact that extreme price increments occur more often when most stocks move in the same direction. We then introduce an information-theoretic approach to the analysis of the binary signature of single and multiple time series. Through the definition of maximum-entropy ensembles of binary matrices and their mapping to spin models in statistical physics, we quantify the information encoded into the simplest binary properties of real time series and identify the most informative property given a set of measurements. Our formalism is able to accurately replicate, and mathematically characterize, the observed binary/non-binary relations. We also obtain a phase diagram allowing us to identify, based only on the instantaneous aggregate return of a set of multiple time series, a regime where the so-called ‘market mode’ has an optimal interpretation in terms of collective (endogenous) effects, a regime where it is parsimoniously explained by pure noise, and a regime where it can be regarded as a combination of endogenous and exogenous factors. Our approach allows us to connect spin models, simple stochastic processes, and ensembles of time series inferred from partial information.
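The claimed binary/non-binary relation, that extreme increments accompany days when most stocks move in the same direction, can be illustrated on synthetic data with a common factor. This toy one-factor model is my own illustration, not the paper's maximum-entropy formalism:

```python
import numpy as np

rng = np.random.default_rng(2)
T, N = 5000, 30
market = rng.standard_normal(T)                   # common ("market mode") factor
returns = 0.6 * market[:, None] + rng.standard_normal((T, N))

signs = np.sign(returns)                          # binary projection
coherence = np.abs(signs.mean(axis=1))            # how aligned the signs are
magnitude = np.abs(returns).mean(axis=1)          # average absolute increment

# Split days by sign coherence: extreme increments accompany coherent days.
hi = magnitude[coherence > 0.8].mean()
lo = magnitude[coherence < 0.2].mean()
print(hi, lo)   # hi > lo: a nonlinear relation between binary and non-binary properties
```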
2010-10-01
49 CFR 375.703 (Transportation, 2010-10-01): What is the maximum collect-on-delivery amount I may demand at the time of delivery? (a) On a binding estimate, the maximum amount is the exact…
'Just-in-Time' Battery Charge Depletion Control for PHEVs and E-REVs for Maximum Battery Life
DeVault, Robert C [ORNL]
2009-01-01
Conventional methods of vehicle operation for Plug-in Hybrid Vehicles first discharge the battery to a minimum State of Charge (SOC) before switching to charge-sustaining operation. This is very demanding on the battery, maximizing the number of trips ending with a depleted battery and maximizing the distance driven on a depleted battery over the vehicle's life. Several methods have been proposed to reduce the number of trips ending with a deeply discharged battery and also eliminate the need for extended driving on a depleted battery. An optimum SOC can be maintained for long battery life before discharging the battery so that the vehicle reaches an electric plug-in destination just as the battery reaches the minimum operating SOC. These Just-in-Time methods provide maximum effective battery life while drawing virtually the same electricity from the grid.
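A hypothetical sketch of the idea: instead of depleting to the minimum SOC immediately, schedule the SOC setpoint to reach the minimum exactly at the plug-in destination. The linear schedule and all numbers below are illustrative assumptions, not ORNL's control law:

```python
def jit_soc_target(distance_remaining_km, trip_km, soc_start=0.9, soc_min=0.3):
    """Hypothetical 'just-in-time' SOC setpoint: deplete linearly so the battery
    reaches soc_min exactly at the plug-in destination, instead of draining to
    soc_min early and charge-sustaining for the rest of the trip."""
    frac_done = 1.0 - distance_remaining_km / trip_km
    return soc_start - (soc_start - soc_min) * frac_done

# Example: a 60 km trip, setpoints checked every 15 km.
targets = [round(jit_soc_target(d, 60.0), 3) for d in (60, 45, 30, 15, 0)]
print(targets)  # [0.9, 0.75, 0.6, 0.45, 0.3]
```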
Ubiquitous time variability of integrated stellar populations.
Conroy, Charlie; van Dokkum, Pieter G; Choi, Jieun
2015-11-26
Long-period variable stars arise in the final stages of the asymptotic giant branch phase of stellar evolution. They have periods of up to about 1,000 days and amplitudes that can exceed a factor of three in the I-band flux. These stars pulsate predominantly in their fundamental mode, which is a function of mass and radius, and so the pulsation periods are sensitive to the age of the underlying stellar population. The overall number of long-period variables in a population is directly related to their lifetimes, which is difficult to predict from first principles because of uncertainties associated with stellar mass-loss and convective mixing. The time variability of these stars has not previously been taken into account when modelling the spectral energy distributions of galaxies. Here we construct time-dependent stellar population models that include the effects of long-period variable stars, and report the ubiquitous detection of this expected 'pixel shimmer' in the massive metal-rich galaxy M87. The pixel light curves display a variety of behaviours. The observed variation of 0.1 to 1 per cent is very well matched to the predictions of our models. The data provide a strong constraint on the properties of variable stars in an old and metal-rich stellar population, and we infer that the lifetime of long-period variables in M87 is shorter by approximately 30 per cent compared to predictions from the latest stellar evolution models.
Morphological Lightcurve Distortions due to Finite Integration Time
Kipping, David M
2010-01-01
We explore how finite integration times, or equivalently temporal binning, induce morphological distortions in the transit lightcurve. These distortions, if left uncorrected, lead to the retrieval of erroneous system parameters and may even lead to some planetary candidates being rejected as ostensibly unphysical. We provide analytic expressions for estimating the disturbance to the various lightcurve parameters as a function of the integration time. These effects are particularly crucial in light of the long-cadence photometry often used for discovering new exoplanets by, for example, CoRoT and the Kepler Mission (8.5 and 30 minutes, respectively). One of the dominant effects of long integration times is a systematic underestimation of the lightcurve-derived stellar density, which has significant ramifications for transit surveys. We present a discussion of numerical integration techniques to compensate for these effects and produce expressions to quickly estimate the errors of such techniques as a function of integration time…
Transit Light Curves with Finite Integration Time: Fisher Information Analysis
Price, Ellen M
2014-01-01
Kepler has revolutionized the study of transiting planets with its unprecedented photometric precision on more than 150,000 target stars. Most of the transiting planet candidates detected by Kepler have been observed as long-cadence targets with 30 minute integration times, and the upcoming Transiting Exoplanet Survey Satellite (TESS) will record full frame images with a similar integration time. Integrations of 30 minutes affect the transit shape, particularly for small planets and in cases of low signal-to-noise. Using the Fisher information matrix technique, we derive analytic approximations for the variances and covariances on the transit parameters obtained from fitting light curve photometry collected with a finite integration time. We find that binning the light curve can significantly increase the uncertainties and covariances on the inferred parameters when comparing scenarios with constant total signal-to-noise (constant total integration time in the absence of read noise). Uncertainties on the tran...
Ivy-Ochs, Susan; Braakhekke, Jochem; Monegato, Giovanni; Gianotti, Franco; Forno, Gabriella; Hippe, Kristina; Christl, Marcus; Akçar, Naki; Schluechter, Christian
2017-04-01
The Last Glacial Maximum (LGM) in the Alps saw much of the mountains inundated by ice. Several main accumulation areas comprising local ice caps and plateau icefields fit into a picture of transection glaciers flowing into huge valley glaciers. In the north the valley glaciers covered long distances (hundreds of kilometers) to reach the forelands where they spread out in fan-shaped piedmont lobes tens of kilometers across, e.g. the Rhine glacier. In the south travel distances to the mountain front were often shorter, the pathway steeper. Nevertheless, not all glaciers even reached beyond the front, as the temperatures were notably warmer in the south. For example at Orta the glacier snout remained within the mountains. Where glaciers reached the forelands they stopped abruptly and the moraine amphitheaters were constructed, e.g. at Ivrea and Rivoli-Avigliana. Sets of stacked moraines built up as glacier advance was directly confined by the older moraines. We may temporally and spatially identify the culmination of the last glacial cycle by pinpointing the outermost moraines that date to the LGM (generally about 26-24 ka). On the other hand, the timing of abandonment of foreland positions is given by ages of the innermost, often lake-bounding, moraines (about 19-18 ka). Between the two, glacier fluctuations left the stadial moraines. In the Linth-Rhine system three stadials have been recognized: Killwangen, Schlieren and Zurich. Nevertheless, already in the Swiss sector the correlation of the LGM stadials among the several foreland lobes is not unambiguous. Across the Alps, not only north to south but also west to east, how do the timing and extent of glaciers during the LGM vary? Recent glacier modelling by Seguinot et al. (2017) informs and suggests the possibility of differences in timing for reaching the maximum extent and in the number of oscillations of individual lobes during the LGM. At present few sites in the Alps have detailed enough geomorphological…
A long-time low-drift integrator with temperature control
Zhang, Donglai; Yan, Xiaolan; Zhang, Enchao; Pan, Shimin
2016-10-01
The output of an operational amplifier always contains signals that could not have been predicted, even with knowledge of the input and an accurately determined closed-loop transfer function. These signals lead to integrator zero-drift over time. A new type of integrator system with a long-term low-drift characteristic has therefore been designed. The integrator system is composed of a temperature control module and an integrator module. The aluminum printed circuit board of the integrator is glued to a thermoelectric cooler to maintain the electronic components at a stable temperature. The integration drift is automatically compensated using an analog-to-digital converter/proportional integration/digital-to-analog converter control circuit. Performance testing in a standard magnet shows that the proposed integrator, which has an integration time constant of 10 ms, has a low integration drift (<5 mV) over 1000 s after repeated measurements. The integrator can be used for magnetic flux measurements in most tokamaks and in the wire rope nondestructive test.
Bellili, Faouzi; Meftehi, Rabii; Affes, Sofiene; Stephenne, Alex
2015-01-01
In this paper, we tackle for the first time the problem of maximum likelihood (ML) estimation of the signal-to-noise ratio (SNR) parameter over time-varying single-input multiple-output (SIMO) channels. Both the data-aided (DA) and the non-data-aided (NDA) schemes are investigated. Unlike classical techniques where the channel is assumed to be slowly time-varying and, therefore, considered as constant over the entire observation period, we address the more challenging problem of instantaneous (i.e., short-term or local) SNR estimation over fast time-varying channels. The channel variations are tracked locally using a polynomial-in-time expansion. First, we derive closed-form expressions for the DA ML estimator and its bias. The latter is subsequently subtracted in order to obtain a new unbiased DA estimator whose variance and the corresponding Cramér-Rao lower bound (CRLB) are also derived in closed form. Due to the extreme nonlinearity of the log-likelihood function (LLF) in the NDA case, we resort to the expectation-maximization (EM) technique to iteratively obtain the exact NDA ML SNR estimates within very few iterations. Most remarkably, the new EM-based NDA estimator is applicable to any linearly-modulated signal and provides sufficiently accurate soft estimates (i.e., soft detection) for each of the unknown transmitted symbols. Therefore, hard detection can be easily embedded in the iteration loop in order to improve its performance at low to moderate SNR levels. We show by extensive computer simulations that the new estimators are able to accurately estimate the instantaneous per-antenna SNRs as they coincide with the DA CRLB over a wide range of practical SNRs.
Cavaliere, Giuseppe; Nielsen, Morten Ørregaard; Taylor, Robert
We consider the problem of conducting estimation and inference on the parameters of univariate heteroskedastic fractionally integrated time series models. We first extend existing results in the literature, developed for conditional sum-of-squares estimators in the context of parametric fractional time series models driven by conditionally homoskedastic shocks, to allow for conditional and unconditional heteroskedasticity both of a quite general and unknown form. Global consistency and asymptotic normality are shown to still obtain; however, the covariance matrix of the limiting distribution of the estimator now depends on nuisance parameters derived both from the weak dependence and heteroskedasticity present in the shocks. We then investigate classical methods of inference based on the Wald, likelihood ratio and Lagrange multiplier tests for linear hypotheses on either or both of the long and short…
EXPODE -- Advanced Exponential Time Integration Toolbox for MATLAB
2014-01-01
We present a MATLAB toolbox for five different classes of exponential integrators for solving (mildly) stiff ordinary differential equations or time-dependent partial differential equations. For the efficiency of such exponential integrators it is essential to approximate the products of the matrix functions arising in these integrators with vectors in a stable, reliable and efficient way. The toolbox contains options for computing the matrix functions directly by diagonalization or by Padé approximation…
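As a generic illustration of the integrator class the toolbox covers (not EXPODE's own code or API), exponential Euler for a semilinear system y' = Ay + g(y) treats the stiff linear part exactly through matrix functions, here computed by diagonalization:

```python
import numpy as np

# Exponential Euler: y_{n+1} = e^{hA} y_n + h * phi1(hA) g(y_n),
# with phi1(z) = (e^z - 1) / z.
A = np.array([[-100.0, 1.0], [0.0, -2.0]])       # stiff linear part
g = lambda y: np.array([0.0, np.sin(y[0])])      # mild nonlinearity

h, steps = 0.05, 200
evals, V = np.linalg.eig(h * A)
E = (V * np.exp(evals)) @ np.linalg.inv(V)       # e^{hA} by diagonalization
phi1 = np.linalg.solve(h * A, E - np.eye(2))     # phi1(hA); hA is invertible here

y = np.array([1.0, 1.0])
for _ in range(steps):
    y = E @ y + h * (phi1 @ g(y))
print(y)  # stays bounded at a step size far beyond explicit Euler's ~0.02 limit
```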
Transit light curves with finite integration time: Fisher information analysis
Price, Ellen M. [California Institute of Technology, 1200 East California Boulevard, Pasadena, CA 91125 (United States); Rogers, Leslie A. [California Institute of Technology, MC249-17, 1200 East California Boulevard, Pasadena, CA 91125 (United States)
2014-10-10
Kepler has revolutionized the study of transiting planets with its unprecedented photometric precision on more than 150,000 target stars. Most of the transiting planet candidates detected by Kepler have been observed as long-cadence targets with 30 minute integration times, and the upcoming Transiting Exoplanet Survey Satellite will record full frame images with a similar integration time. Integrations of 30 minutes affect the transit shape, particularly for small planets and in cases of low signal-to-noise. Using the Fisher information matrix technique, we derive analytic approximations for the variances and covariances on the transit parameters obtained from fitting light curve photometry collected with a finite integration time. We find that binning the light curve can significantly increase the uncertainties and covariances on the inferred parameters when comparing scenarios with constant total signal-to-noise (constant total integration time in the absence of read noise). Uncertainties on the transit ingress/egress time increase by a factor of 34 for Earth-size planets and 3.4 for Jupiter-size planets around Sun-like stars for integration times of 30 minutes compared to instantaneously sampled light curves. Similarly, uncertainties on the mid-transit time for Earth and Jupiter-size planets increase by factors of 3.9 and 1.4. Uncertainties on the transit depth are largely unaffected by finite integration times. While correlations among the transit depth, ingress duration, and transit duration all increase in magnitude with longer integration times, the mid-transit time remains uncorrelated with the other parameters. We provide code in Python and Mathematica for predicting the variances and covariances at www.its.caltech.edu/~eprice.
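The smearing effect itself is easy to reproduce: box-averaging a finely sampled model light curve over a 30-minute window distorts the shape near ingress/egress while leaving the depth essentially intact. The trapezoidal model below is a toy assumption for illustration, not the paper's transit model:

```python
import numpy as np

def trapezoid_transit(t, depth=0.01, t_full=0.08, t_total=0.10, t0=0.0):
    """Toy trapezoidal transit: 1% depth, flat bottom t_full, total duration
    t_total (days). A stand-in for the physical models fit to Kepler data."""
    x = np.abs(t - t0)
    ingress = (t_total - t_full) / 2.0
    ramp = np.clip((t_total / 2.0 - x) / ingress, 0.0, 1.0)
    return 1.0 - depth * ramp

t = np.linspace(-0.2, 0.2, 20001)          # finely sampled times (days)
f = trapezoid_transit(t)

cadence = 30.0 / (60.0 * 24.0)             # 30-minute integration, in days
width = int(round(cadence / (t[1] - t[0])))
kernel = np.ones(width) / width
f_binned = np.convolve(f, kernel, mode="valid")    # box average = finite exposure
t_binned = t[(width - 1) // 2 : (width - 1) // 2 + f_binned.size]

# The depth is nearly untouched, but the ingress/egress ramps are smeared out,
# which is what degrades the ingress-duration and stellar-density estimates.
diff = float(np.max(np.abs(f_binned - trapezoid_transit(t_binned))))
print(f.min(), f_binned.min(), diff)
```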
Time-Varying Market Integration and Expected Returns in Emerging Markets
de Jong, F.C.J.M.; de Roon, F.A.
2001-01-01
We use a simple model in which the expected returns in emerging markets depend on their systematic risk, as measured by their beta relative to the world portfolio, as well as on the level of integration in that market. The level of integration is a time-varying variable that depends on the market value…
Vuillemin, Aurele; Ariztegui, Daniel; Leavitt, Peter R.; Bunting, Lynda
2014-05-01
Laguna Potrok Aike is a closed basin located in the southern hemisphere's mid-latitudes (52°S) where paleoenvironmental conditions were recorded as temporal sedimentary sequences resulting from variations in the regional hydrological regime and the geology of the catchment. The interpretation of the limnogeological multiproxy record developed during the ICDP-PASADO project allowed the identification of contrasting time windows associated with the fluctuations of the Southern Westerly Winds. In the framework of this project, a 100-m-long core was also dedicated to a detailed geomicrobiological study aimed at a thorough investigation of the lacustrine subsurface biosphere. Indeed, aquatic sediments not only record past climatic conditions, but also provide a wide range of ecological niches for microbes. In this context, the influence of environmental features upon microbial development and survival had remained unexplored for the deep lacustrine realm. Therefore, we investigated living microbes throughout the sedimentary sequence using in situ ATP assays and DAPI cell counts. These results, compiled with pore water analysis, SEM microscopy of authigenic concretions, and methane and fatty acid biogeochemistry, provided evidence for sustained microbial activity in deep sediments and pinpointed the substantial role of microbial processes in modifying the initial organic and mineral fractions. Finally, because the genetic material associated with microorganisms can be preserved in sediments over millennia, we extracted environmental DNA from Laguna Potrok Aike sediments and established 16S rRNA bacterial and archaeal clone libraries to better define the use of DNA-based techniques in reconstructing past environments. We focused on two sedimentary horizons, both displaying in situ microbial activity, corresponding respectively to the Holocene and Last Glacial Maximum periods. Sequences recovered from the productive Holocene record revealed a microbial community adapted to…
Positioning Navigation and Timing System Integration Laboratory (PNT SIL)
Federal Laboratory Consortium — The Positioning, Navigation and Timing System Integration Laboratory (PNT SIL) is currently used for a number of PNT system development test and evaluation activities...
Vanavil, B.; Krishna Chaitanya, K.; Seshagiri Rao, A.
2015-06-01
In this paper, a proportional-integral-derivative controller in series with a lead-lag filter is designed for control of the open-loop unstable processes with time delay based on direct synthesis method. Study of the performance of the designed controllers has been carried out on various unstable processes. Set-point weighting is considered to reduce the undesirable overshoot. The proposed scheme consists of only one tuning parameter, and systematic guidelines are provided for selection of the tuning parameter based on the peak value of the sensitivity function (Ms). Robustness analysis has been carried out based on sensitivity and complementary sensitivity functions. Nominal and robust control performances are achieved with the proposed method and improved closed-loop performances are obtained when compared to the recently reported methods in the literature.
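The role of set-point weighting can be seen on a toy problem. The sketch below stabilizes a first-order unstable plant with a set-point-weighted PI law; there is no delay, no lead-lag filter, and the gains are hand-picked, so it is only a schematic of the overshoot-reduction idea, not the paper's direct-synthesis design:

```python
# Set-point-weighted PI control of an unstable first-order plant dy/dt = y + u.
# Illustrative only: the paper's controller adds a derivative term and a
# lead-lag filter and is tuned via direct synthesis; gains here are assumed.
Kp, Ki, b = 5.0, 5.0, 0.4          # b < 1 weights the set-point to cut overshoot
dt, T, r = 1e-3, 10.0, 1.0
y, z = 0.0, 0.0                    # plant output and integral of the error
peak = 0.0
for _ in range(int(T / dt)):
    u = Kp * (b * r - y) + Ki * z  # proportional acts on b*r - y, integral on r - y
    y += dt * (y + u)              # explicit Euler step of the unstable plant
    z += dt * (r - y)
    peak = max(peak, y)
print(round(y, 3), round(peak, 3))  # settles at the set-point with tiny overshoot
```

The integral term acts on the full error, so the output still settles exactly at the set-point; only the proportional kick at the step change is softened by b.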
Kim, Kyungsoo; Lim, Sung-Ho; Lee, Jaeseok; Kang, Won-Seok; Moon, Cheil; Choi, Ji-Woong
2016-06-16
Electroencephalograms (EEGs) measure a brain signal that contains abundant information about the human brain function and health. For this reason, recent clinical brain research and brain computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal to noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of event related potential (ERP) signal that represents a brain's response to a particular stimulus or a task. The averaging, however, is very sensitive to variable delays. In this study, we propose two time delay estimation (TDE) schemes based on a joint maximum likelihood (ML) criterion to compensate the uncertain delays which may be different in each trial. We evaluate the performance for different types of signals such as random, deterministic, and real EEG signals. The results show that the proposed schemes provide better performance than other conventional schemes employing averaged signal as a reference, e.g., up to 4 dB gain at the expected delay error of 10°.
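As a point of reference for the TDE problem: when a clean reference waveform is available and the noise is white Gaussian, the ML delay estimate reduces to the cross-correlation peak. The sketch below (synthetic signals and all parameters assumed) recovers per-trial delays that way; the paper's joint-ML schemes address the harder case where no clean reference exists:

```python
import numpy as np

rng = np.random.default_rng(3)
n, trials, sigma = 256, 40, 0.1
t = np.arange(n)
template = np.exp(-0.5 * ((t - 128) / 5.0) ** 2)   # hypothetical ERP waveform

true_delays = rng.integers(-10, 11, size=trials)
x = np.stack([np.roll(template, d) + sigma * rng.standard_normal(n)
              for d in true_delays])

# Matched-filter TDE: with white Gaussian noise and a known reference,
# the ML delay estimate is the lag maximizing the cross-correlation.
lags = np.arange(-15, 16)
est = np.array([lags[np.argmax([x[i] @ np.roll(template, d) for d in lags])]
                for i in range(trials)])
errors = est - true_delays
print(np.mean(errors == 0))   # most trials aligned to within a sample
```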
Rama, Aarti; Kesari, Shreekant; Das, Pradeep; Kumar, Vijay
2017-07-24
Extensive application of the routine insecticide dichlorodiphenyltrichloroethane (DDT) to control Phlebotomus argentipes (Diptera: Psychodidae), the proven vector of visceral leishmaniasis in India, has evoked the problem of resistance/tolerance against DDT, eventually nullifying DDT-dependent strategies to control this vector. Because tolerating an hour-long exposure to DDT is not challenging enough for resistant P. argentipes, estimating susceptibility by exposing sand flies to the insecticide for just an hour becomes a trivial and futile task. Therefore, this bioassay study was carried out to investigate the maximum exposure time for which DDT-resistant P. argentipes can endure the effect of DDT and survive. The mortality rate of laboratory-reared DDT-resistant P. argentipes exposed to DDT was studied at discriminating time intervals of 60 min, and it was concluded that highly resistant sand flies could withstand up to 420 min of exposure to this insecticide. Additionally, the lethal time for female P. argentipes was observed to be higher than for males, suggesting that they are highly resistant to DDT's toxicity. Our results support monitoring the tolerance limit with respect to time and hence point towards an urgent need to change the World Health Organization's protocol for susceptibility identification in resistant P. argentipes.
Real time localization of Gamma Ray Bursts with INTEGRAL
Mereghetti, S; Borkowski, J J
2003-01-01
The INTEGRAL satellite has been successfully launched in October 2002 and has recently started its operational phase. The INTEGRAL Burst Alert System (IBAS) will distribute in real time the coordinates of the GRBs detected with INTEGRAL. After a brief introduction on the INTEGRAL instruments, we describe the main IBAS characteristics and report on the initial results. During the initial performance and verification phase of the INTEGRAL mission, which lasted about two months, two GRBs have been localized with accuracy of about 2-4 arcmin. These observations have allowed us to validate the IBAS software, which is now expected to provide quick (few seconds delay) and precise (few arcmin) localization for about 10-15 GRBs per year.
Time series analysis : Smoothed correlation integrals, autocovariances, and power spectra
Takens, F; Dumortier, F; Broer, H; Mawhin, J; Vanderbauwhede, A; Lunel, SV
2005-01-01
In this paper we relate notions from linear time series analysis, like autocovariances and power spectra, with notions from nonlinear time series analysis, like (smoothed) correlation integrals and the corresponding dimensions and entropies. The complete proofs of the results announced in this paper…
Gao, Yingjie; Zhang, Jinhai; Yao, Zhenxing
2016-06-01
The symplectic integration method is popular in high-accuracy numerical simulations when discretizing temporal derivatives; however, it still suffers from time-dispersion error when the temporal interval is coarse, especially for long-term simulations and large-scale models. We employ the inverse time dispersion transform (ITDT) with the third-order symplectic integration method to reduce the time-dispersion error. First, we adopt the pseudospectral algorithm for the spatial discretization and the third-order symplectic integration method for the temporal discretization. Then, we apply the ITDT to eliminate time-dispersion error from the synthetic data. As a post-processing method, the ITDT can be easily cascaded into traditional numerical simulations. We implement the ITDT in one typical existing third-order symplectic scheme and compare its performance with that of the conventional second-order scheme and the rapid expansion method. Theoretical analyses and numerical experiments show that the ITDT can significantly reduce the time-dispersion error, especially for long travel times. The implementation of the ITDT requires some additional computations for correcting the time-dispersion error, but it allows us to use the maximum temporal interval under stability conditions; thus, its final computational efficiency is higher than that of the traditional symplectic integration method for long-term simulations. With the aid of the ITDT, we can obtain much more accurate simulation results at a lower computational cost.
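Why symplectic time stepping matters for long-term simulation can be shown on the smallest possible example: velocity Verlet (a second-order symplectic scheme, standing in here for the paper's third-order one) keeps the oscillator energy bounded over a long run, while explicit Euler's energy grows without bound:

```python
# Harmonic oscillator q' = p, p' = -q, with energy H = (p^2 + q^2) / 2.
# A generic illustration of symplectic versus non-symplectic time stepping,
# not the paper's seismic-modeling scheme.
def euler_step(q, p, h):
    return q + h * p, p - h * q

def verlet_step(q, p, h):
    p_half = p - 0.5 * h * q
    q_new = q + h * p_half
    return q_new, p_half - 0.5 * h * q_new

h, steps = 0.05, 20000                 # a long-term run: 1000 time units
qe, pe = 1.0, 0.0                      # Euler state
qs, ps = 1.0, 0.0                      # Verlet state
for _ in range(steps):
    qe, pe = euler_step(qe, pe, h)
    qs, ps = verlet_step(qs, ps, h)

E_euler = 0.5 * (qe**2 + pe**2)
E_verlet = 0.5 * (qs**2 + ps**2)
print(E_euler, E_verlet)               # Euler's energy diverges; Verlet's stays
                                       # within O(h^2) of the true value 0.5
```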
The NIF Integrated Timing System - Design and Performance
Lerche, R A; Lagin, L J; Nyholm, R; Sewall, N R; Stever, R D; Wiedwald, J D; Larkin, J; Stein, S; Martin, R
2001-01-01
The National Ignition Facility (NIF) will contain the world's most powerful laser. NIF requires more than 1500 precisely timed trigger pulses to control the timing of laser and diagnostic equipment. The Integrated Timing System applies new concepts to generate and deliver triggers at preprogrammed times to equipment throughout the laser and target areas of the facility. Trigger pulses during the last 2 seconds of a shot cycle are required to have a jitter of less than 20 ps (rms) and a wander of less than 100 ps (max). Also, the Timing System allows simultaneous, independent use by multiple clients by partitioning the system hardware into subsets that are controlled via independent software keys. The hardware necessary to implement the Integrated Timing System is commercially available. -- This work is performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48.
Best tracking performance for integrator and dead time plant
(author not listed)
2007-01-01
The optimal tracking performance for an integrator and dead time plant, in the case where plant uncertainty and control energy constraints are to be considered jointly, is investigated. First, an average cost function of the tracking error and the plant input energy over a class of stochastic model errors is defined. Then, an internal model controller design method that minimizes this average performance is obtained, and the optimal tracking performance for the integrator and dead time plant in the simultaneous presence of plant uncertainty and control energy constraints is further studied. The results can be used to evaluate optimal tracking performance and control energy in practical designs.
Explicit solution of Calderon preconditioned time domain integral equations
Ulku, Huseyin Arda
2013-07-01
An explicit marching-on-in-time (MOT) scheme for solving Calderon-preconditioned time domain integral equations is proposed. The scheme uses Rao-Wilton-Glisson and Buffa-Christiansen functions to discretize the domain and range of the integral operators and a PE(CE)^m-type linear multistep method to march on in time. Unlike its implicit counterpart, the proposed explicit solver requires the solution of an MOT system with a Gram matrix that is sparse and well-conditioned independent of the time step size. Numerical results demonstrate that the explicit solver maintains its accuracy and stability even when the time step size is chosen as large as that typically used by an implicit solver. © 2013 IEEE.
A model of interval timing by neural integration.
Simen, Patrick; Balci, Fuat; de Souza, Laura; Cohen, Jonathan D; Holmes, Philip
2011-06-22
We show that simple assumptions about neural processing lead to a model of interval timing as a temporal integration process, in which a noisy firing-rate representation of time rises linearly on average toward a response threshold over the course of an interval. Our assumptions include: that neural spike trains are approximately independent Poisson processes, that correlations among them can be largely cancelled by balancing excitation and inhibition, that neural populations can act as integrators, and that the objective of timed behavior is maximal accuracy and minimal variance. The model accounts for a variety of physiological and behavioral findings in rodents, monkeys, and humans, including ramping firing rates between the onset of reward-predicting cues and the receipt of delayed rewards, and universally scale-invariant response time distributions in interval timing tasks. It furthermore makes specific, well-supported predictions about the skewness of these distributions, a feature of timing data that is usually ignored. The model also incorporates a rapid (potentially one-shot) duration-learning procedure. Human behavioral data support the learning rule's predictions regarding learning speed in sequences of timed responses. These results suggest that simple, integration-based models should play as prominent a role in interval timing theory as they do in theories of perceptual decision making, and that a common neural mechanism may underlie both types of behavior.
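The core mechanism this abstract describes, a noisy firing-rate representation rising linearly to a response threshold, is a drift-diffusion first-passage process. A minimal sketch (hypothetical parameter values, standard-library Python; not the authors' full spiking model):

```python
import random

def timed_response(drift, noise, threshold=1.0, dt=0.001, rng=random):
    """First-passage time of a noisy linear accumulator:
    dx = drift*dt + noise*sqrt(dt)*N(0,1); respond when x >= threshold."""
    x, t = 0.0, 0.0
    while x < threshold:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return t

random.seed(1)
# With drift A and threshold z, the mean response time is roughly z/A,
# so an interval can be "timed" by learning the appropriate drift.
times = [timed_response(drift=1.0, noise=0.1) for _ in range(500)]
mean_rt = sum(times) / len(times)
```

Scaling the drift inversely with the target duration reproduces the scale-invariant (constant coefficient of variation) response-time distributions the model accounts for.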
Optimal Real-time Dispatch for Integrated Energy Systems
Anvari-Moghaddam, Amjad; Guerrero, Josep M.; Rahimi-Kian, Ashkan
2016-01-01
With the emergence of small-scale integrated energy systems (IESs), there are significant potentials to increase the functionality of a typical demand-side management (DSM) strategy and typical implementation of building-level distributed energy resources (DERs). By integrating DSM and DERs into a cohesive, networked package that fully utilizes smart energy-efficient end-use devices, advanced building control/automation systems, and integrated communications architectures, it is possible to efficiently manage energy and comfort at the end-use location. In this paper, an ontology-driven multi-agent control system with intelligent optimizers is proposed for optimal real-time dispatch of an integrated building and microgrid system considering coordinated demand response (DR) and DERs management. The optimal dispatch problem is formulated as a mixed integer nonlinear programming problem (MINLP)…
Analysis of a Real-Time Separation Assurance System with Integrated Time-in-Trail Spacing
Aweiss, Arwa S.; Farrahi, Amir H.; Lauderdale, Todd A.; Thipphavong, Adam S.; Lee, Chu H.
2010-01-01
This paper describes the implementation and analysis of an integrated ground-based separation assurance and time-based metering prototype system into the Center-TRACON Automation System. The integration of this new capability accommodates constraints in four dimensions: position (x-y), altitude, and meter-fix crossing time. Experiments were conducted to evaluate the performance of the integrated system and its ability to handle traffic levels up to twice that of today. Results suggest that the integrated system reduces the number and magnitude of time-in-trail spacing violations. This benefit was achieved without adversely affecting the resolution success rate of the system. Also, the data suggest that the integrated system is relatively insensitive to an increase in traffic to twice the current levels.
A New time Integration Scheme for Cahn-hilliard Equations
Schaefer, R.
2015-06-01
In this paper we present a new integration scheme that can be applied to solving difficult non-stationary non-linear problems. It is obtained by a successive linearization of the Crank-Nicolson scheme, which is unconditionally stable but requires solving a non-linear equation at each time step. We applied our linearized scheme to the time integration of the challenging Cahn-Hilliard equation, modeling phase separation in fluids. At each time step the resulting variational equation is solved using a higher-order isogeometric finite element method with B-spline basis functions. The method was implemented in the PETIGA framework interfaced via the PETSc toolkit. The GMRES iterative solver was utilized for the solution of the resulting linear system at every time step. We also apply a simple adaptivity rule, which increases the time step size when the number of GMRES iterations is lower than 30. We compared our method with a non-linear, two-stage predictor-multicorrector scheme utilizing a sophisticated step-length adaptivity. We controlled the stability of our simulations by monitoring the Ginzburg-Landau free energy functional. The proposed integration scheme outperforms the two-stage competitor in terms of execution time, while having a similar evolution of the free energy functional.
Integrable nonlinear parity-time symmetric optical oscillator
Hassan, Absar U; Miri, Mohammad-Ali; Khajavikhan, Mercedeh; Christodoulides, Demetrios N
2016-01-01
The nonlinear dynamics of a balanced parity-time symmetric optical microring arrangement are analytically investigated. By considering gain and loss saturation effects, the pertinent conservation laws are explicitly obtained in the Stokes domain, thus establishing integrability. Our analysis indicates the existence of two regimes of oscillatory dynamics and frequency locking, both of which are analogous to those expected in linear parity-time symmetric systems. Unlike other saturable parity-time symmetric systems considered before, the model studied in this work first operates in the symmetric regime and then enters the broken parity-time phase.
Hybrid state-space time integration of rotating beams
Krenk, Steen; Nielsen, Martin Bjerre
2012-01-01
An efficient time integration algorithm for the dynamic equations of flexible beams in a rotating frame of reference is presented. The equations of motion are formulated in a hybrid state-space format in terms of local displacements and local components of the absolute velocity. With inspiration ...
Early Pubertal Timing and Girls' Problem Behavior: Integrating Two Hypotheses
Stattin, Hakan; Kerr, Margaret; Skoog, Therese
2011-01-01
Girls' early pubertal timing has been linked in many studies to behavioral problems such as delinquency and substance use. The theoretical explanations for these links have often involved the girls' peer relationships, but contexts have also been considered important in some explanations. By integrating two theoretical models, the…
Work-up times in an integrated brain cancer pathway
Lund Laursen, Emilie; Rasmussen, Birthe Krogh
2012-01-01
The integrated brain cancer pathway (IBCP) aims to ensure fast-track diagnostics and treatment for brain cancers in Denmark. This paper focuses on the referral pattern and the time frame of key pathway elements during the first two years following implementation of the IBCP in a regional neurology...
Global Format for Conservative Time Integration in Nonlinear Dynamics
Krenk, Steen
2014-01-01
The widely used classic collocation-based time integration procedures like Newmark, Generalized-alpha etc. generally work well within a framework of linear problems, but typically may encounter problems, when used in connection with essentially nonlinear structures. These problems are overcome in...
Modified precise time step integration method of structural dynamic analysis
Wang Mengfu; Zhou Xiyuan
2005-01-01
The precise time step integration method proposed for linear time-invariant homogeneous dynamic systems can provide precise numerical results that approach an exact solution at the integration points. However, difficulty arises when the algorithm is used for non-homogeneous dynamic systems, due to the inverse matrix calculation and the simulation accuracy of the applied loading. By combining the Gaussian quadrature method and state space theory with the calculation technique of the matrix exponential function in the precise time step integration method, a new modified precise time step integration method (i.e., an algorithm with an arbitrary order of accuracy) is proposed. In the new method, no inverse matrix calculation or simulation of the applied loading is needed, and the computing efficiency is improved. In particular, the proposed method is independent of the quality of the matrix H. If the matrix H is singular or nearly singular, the advantage of the method is remarkable. The numerical stability of the proposed algorithm is discussed and a numerical example is given to demonstrate the validity and efficiency of the algorithm.
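The heart of the precise time step integration family is computing the transfer matrix exp(H·dt) by repeated squaring of a tiny-interval increment, stored as the deviation from the identity so round-off does not swamp the result. A generic sketch of that idea (NumPy, illustrative only; not the authors' modified Gaussian-quadrature algorithm):

```python
import numpy as np

def precise_expm(H, dt, N=20, taylor_terms=4):
    """Precise-integration evaluation of expm(H*dt):
    split dt into 2**N tiny intervals tau, Taylor-expand expm(H*tau) - I,
    then square N times, keeping only the increment Ta to preserve accuracy."""
    n = H.shape[0]
    tau = dt / 2**N
    Htau = H * tau
    Ta = np.zeros((n, n))            # expm(H*tau) - I, built term by term
    term = np.eye(n)
    for k in range(1, taylor_terms + 1):
        term = term @ Htau / k
        Ta = Ta + term
    for _ in range(N):               # (I+Ta)^2 = I + (2*Ta + Ta@Ta)
        Ta = 2 * Ta + Ta @ Ta
    return np.eye(n) + Ta

# Undamped oscillator in state-space form: d/dt [x, v] = H @ [x, v].
H = np.array([[0.0, 1.0], [-1.0, 0.0]])
T = precise_expm(H, 1.0)
# The exact transfer matrix is the rotation [[cos 1, sin 1], [-sin 1, cos 1]].
```

Because the increment Ta stays tiny until the final additions, the computed transfer matrix is accurate essentially to machine precision, which is what lets these methods approach exact solutions at the integration points.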
Detection of magnetising inrush current using real time integration method
Ling, P. C. Y.; Basak, A.
1990-01-01
A technique for predicting magnetising inrush currents in transformers is described. Computed results show an inconsistency in second-harmonic decay, resulting in detection failure when conventional second-harmonic techniques are used. A new detection scheme using real-time integration values of the inrush current is proposed to provide reliable relay operation.
Integral-Value Models for Outcomes over Continuous Time
Harvey, Charles M.; Østerdal, Lars Peter
Models of preferences between outcomes over continuous time are important for individual, corporate, and social decision making, e.g., medical treatment, infrastructure development, and environmental regulation. This paper presents a foundation for such models. It shows that conditions on preferences between real- or vector-valued outcomes over continuous time are satisfied if and only if the preferences are represented by a value function having an integral form…
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data? (2) Goodness-of-fit: How concordant is this distribution with the observed data? (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions is presented, called "maximum fidelity". Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversion…
PBO Integrated Real-Time Observing Sites at Volcanic Sites
Mencin, D.; Jackson, M.; Borsa, A.; Feaux, K.; Smith, S.
2009-05-01
The Plate Boundary Observatory, an element of NSF's EarthScope program, has six integrated observatories in Yellowstone and four on Mt St Helens. These observatories consist of some combination of borehole strainmeters, borehole seismometers, GPS, tiltmeters, pore pressure, thermal measurements and meteorological data. Data from all these instruments have highly variable data rates and formats, all synchronized to GPS time which can cause significant congestion of precious communication resources. PBO has been experimenting with integrating these data streams to both maximize efficiency and minimize latency through the use of software that combines the streams, like Antelope, and VPN technologies.
Multiple integral inequalities and stability analysis of time delay systems
Gyurkovics, Eva; Takacs, Tibor
2016-01-01
This paper is devoted to the stability analysis of continuous-time delay systems based on a set of Lyapunov-Krasovskii functionals. New multiple integral inequalities are derived that involve the famous Jensen's and Wirtinger's inequalities, as well as the recently presented Bessel-Legendre inequalities of A. Seuret and F. Gouaisbaut (2015) and the Wirtinger-based multiple-integral inequalities of M. Park et al. (2015) and T.H. Lee et al. (2015). The present paper aims at showing that the proposed…
Space-time combined correlation integral and earthquake interactions
L. Pietronero
2004-06-01
Scale-invariant properties of seismicity argue for the presence of complex triggering mechanisms. We propose a new method, based on the space-time combined generalization of the correlation integral, that leads to a self-consistent visualization and analysis of both spatial and temporal correlations. The analysis has been applied to global medium-high seismicity. Results show that earthquakes do interact even over long distances and are correlated in time within defined spatial ranges varying over elapsed time. On that basis we redefine the aftershock concept.
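The purely spatial correlation integral that this method generalizes counts the fraction of event pairs closer than a scale r (the Grassberger-Procaccia estimator); the paper's space-time version additionally windows pairs by elapsed time. A minimal sketch of the spatial estimator, with 1-D coordinates standing in for epicentre locations:

```python
def correlation_integral(points, r):
    """C(r): fraction of distinct point pairs with distance < r.
    1-D coordinates here; any metric could be substituted."""
    n = len(points)
    close = sum(
        1
        for i in range(n)
        for j in range(i + 1, n)
        if abs(points[i] - points[j]) < r
    )
    return 2.0 * close / (n * (n - 1))

# Four equally spaced events: 3 of the 6 pairs lie closer than r = 1.5.
C = correlation_integral([0.0, 1.0, 2.0, 3.0], 1.5)   # -> 0.5
```

The scaling of C(r) with r (a power law for fractal event distributions) is what carries the correlation-dimension information exploited in the seismicity analysis.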
A Digitally Programmable Differential Integrator with Enlarged Time Constant
S. K. Debroy
1994-12-01
A new Operational Amplifier RC (OA-RC) integrator network is described. The novelties of the design are the use of a single grounded capacitor, ideal integration function realization with dual-input capability, and design flexibility for an extremely large time constant involving an enlargement factor (K) using a product of resistor ratios. Digital control of K through a programmable resistor array (PRA) controlled by a microprocessor has also been implemented. The effect of the OA poles has been analyzed, which indicates degradation of the integrator Q at higher frequencies. An appropriate Q-compensation design scheme exhibiting 1:|A|^2 order of Q-improvement has been proposed, with supporting experimental observations.
Spruce, Joseph P.; Hargrove, William; Gasser, Gerald; Smoot, James; Kuper, Philip D.
2012-01-01
This presentation reviews the development, integration, and testing of Near Real Time (NRT) MODIS forest % maximum NDVI change products resident to the USDA Forest Service (USFS) ForWarn System. ForWarn is an Early Warning System (EWS) tool for detection and tracking of regionally evident forest change, which includes the U.S. Forest Change Assessment Viewer (FCAV), a publicly available on-line geospatial data viewer for visualizing and assessing the context of this apparent forest change. NASA Stennis Space Center (SSC) is working collaboratively with the USFS, ORNL, and USGS to contribute MODIS forest change products to ForWarn. These change products compare current NDVI, derived from expedited eMODIS data, to historical NDVI products derived from MODIS MOD13 data. A new suite of forest change products is computed every 8 days and posted to the ForWarn system; this includes three different forest change products computed using three different historical baselines: 1) the previous year; 2) the previous three years; and 3) all previous years in the MODIS record going back to 2000. The change product inputs are maximum-value NDVI composited across a 24-day interval and refreshed every 8 days, so that resulting images for the conterminous U.S. are predominantly cloud-free yet still retain temporally relevant fresh information on changes in forest canopy greenness. These forest change products are computed at the native nominal resolution of the input reflectance bands at 231.66 meters, which equates to approximately 5.4 hectares or 13.3 acres per pixel. The Time Series Product Tool, a MATLAB-based software package developed at NASA SSC, is used to temporally process, fuse, reduce noise, interpolate data voids, and re-aggregate the historical NDVI into 24-day composites, and then custom MATLAB scripts are used to temporally process the eMODIS NDVIs so that they are in sync with the historical NDVI products. Prior to posting, an in-house snow mask classification product…
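The change products described compare current maximum NDVI against a historical baseline; the per-pixel percent-change computation itself reduces to a simple ratio. A sketch with hypothetical pixel values (NumPy; the operational products are produced in MATLAB):

```python
import numpy as np

def ndvi_pct_change(current, baseline, eps=1e-6):
    """Percent change of current maximum NDVI vs. a historical baseline;
    negative values flag apparent declines in canopy greenness."""
    baseline = np.where(np.abs(baseline) < eps, eps, baseline)  # avoid /0
    return 100.0 * (current - baseline) / baseline

baseline = np.array([0.80, 0.60, 0.50])   # e.g. previous-year maximum NDVI
current = np.array([0.80, 0.42, 0.55])    # current 24-day maximum NDVI
change = ndvi_pct_change(current, baseline)   # 0%, -30%, +10%
```

Swapping in the three baselines (previous year, previous three years, all years since 2000) against the same current composite yields the three product variants posted to ForWarn.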
Time resolved spectroscopy of GRB 030501 using INTEGRAL
Beckmann, V.; Borkowski, J.; Courvoisier, T.J.L.;
2003-01-01
The gamma-ray instruments on board INTEGRAL offer a unique opportunity to perform time-resolved analysis of GRBs. The imager IBIS allows accurate positioning of GRBs and broad-band spectral analysis, while SPI provides high-resolution spectroscopy. GRB 030501 was discovered by the INTEGRAL Burst Alert System in the ISGRI field of view. Although the burst was fairly weak (fluence F(20-200 keV) ≈ 3.5x10^-6 erg cm^-2), it was possible to perform time-resolved spectroscopy with a resolution of a few seconds. The GRB shows a spectrum in the 20-400 keV range which is consistent…
HMC algorithm with multiple time scale integration and mass preconditioning
Urbach, C; Shindler, A; Wenger, U
2006-01-01
We present a variant of the HMC algorithm with mass preconditioning (Hasenbusch acceleration) and multiple time scale integration. We have tested this variant for standard Wilson fermions at beta=5.6 and at pion masses ranging from 380 MeV to 680 MeV. We show that in this situation its performance is comparable to the recently proposed HMC variant with domain decomposition as preconditioner. We give an update of the ``Berlin Wall'' figure, comparing the performance of our variant of the HMC algorithm to other published performance data. Advantages of the HMC algorithm with mass preconditioning and multiple time scale integration are that it is straightforward to implement and can be used in combination with a wide variety of lattice Dirac operators.
HMC algorithm with multiple time scale integration and mass preconditioning
Urbach, C.; Jansen, K.; Shindler, A.; Wenger, U.
2006-01-01
We present a variant of the HMC algorithm with mass preconditioning (Hasenbusch acceleration) and multiple time scale integration. We have tested this variant for standard Wilson fermions at β=5.6 and at pion masses ranging from 380 to 680 MeV. We show that in this situation its performance is comparable to the recently proposed HMC variant with domain decomposition as preconditioner. We give an update of the "Berlin Wall" figure, comparing the performance of our variant of the HMC algorithm to other published performance data. Advantages of the HMC algorithm with mass preconditioning and multiple time scale integration are that it is straightforward to implement and can be used in combination with a wide variety of lattice Dirac operators.
Timing Analysis with INTEGRAL: Comparing Different Reconstruction Algorithms
Grinberg, V.; Kreykenboehm, I.; Fuerst, F.; Wilms, J.; Pottschmidt, K.; Bel, M. Cadolle; Rodriquez, J.; Marcu, D. M.; Suchy, S.; Markowitz, A.;
2010-01-01
INTEGRAL is one of the few instruments capable of detecting X-rays above 20 keV. It is therefore in principle well suited for studying X-ray variability in this regime. Because INTEGRAL uses coded-mask instruments for imaging, the reconstruction of light curves of X-ray sources is highly non-trivial. We present results from the comparison of two commonly employed algorithms, which primarily measure flux from mask deconvolution (ii-lc-extract) and from calculating the pixel-illuminated fraction (ii-light). Both methods agree well for timescales above about 10 s, the highest time resolution for which image reconstruction is possible. For higher time resolution, ii-light produces meaningful results, although the overall variance of the light curves is not preserved.
Software Thread Integration and Synthesis for Real-Time Applications
Dean, Alexander G.
2005-01-01
Software Thread Integration (STI) and Asynchronous STI (ASTI) are compiler techniques which interleave functions from separate program threads at the assembly language level, creating implicitly multithreaded functions which provide low-cost concurrency on generic hardware. This extends the reach of software and reduces the need to rely upon dedicated hardware. STI and ASTI are driven by two types of timing requirements…
Integrating Security in Real-Time Embedded Systems
2017-04-26
requirements in the form of real-time scheduling constraints. This enabled us to directly reason about the effects of integrating security into RTS (e.g. ... a set of sporadic tasks scheduled based on the Rate Monotonic (RM) policy. We are concerned with the leakage of information between tasks of ... a task with the highest priority has the shortest period (rate-monotonic scheduling) that gives it privileges to be scheduled with a higher…
A portable millisecond-integration-time photoelectric photometer.
Mcgraw, J. T.; Wells, D. C.; Wiant, J. R.
1973-01-01
Portable equipment for recording millisecond-integration-time photoelectric photometric data is described. Digital data are reliably recorded on standard 6.35 mm audio-grade magnetic tape via a quadradial audio-grade tape deck. The system is designed specifically for recording lunar occultations of stars, but the data recording technique is independent of the data source. Recovery of the data is made via minicomputer.
Real-time Space-time Integration in GIScience and Geography.
Richardson, Douglas B
2013-01-01
Space-time integration has long been the topic of study and speculation in geography. However, in recent years an entirely new form of space-time integration has become possible in GIS and GIScience: real-time space-time integration and interaction. While real-time spatiotemporal data is now being generated almost ubiquitously, and its applications in research and commerce are widespread and rapidly accelerating, the ability to continuously create and interact with fused space-time data in geography and GIScience is a recent phenomenon, made possible by the invention and development of real-time interactive (RTI) GPS/GIS technology and functionality in the late 1980s and early 1990s. This innovation has since functioned as a core change agent in geography, cartography, GIScience and many related fields, profoundly realigning traditional relationships and structures, expanding research horizons, and transforming the ways geographic data is now collected, mapped, modeled, and used, both in geography and in science and society more broadly. Real-time space-time interactive functionality remains today the underlying process generating the current explosion of fused spatiotemporal data, new geographic research initiatives, and myriad geospatial applications in governments, businesses, and society. This essay addresses briefly the development of these real-time space-time functions and capabilities; their impact on geography, cartography, and GIScience; and some implications for how discovery and change can occur in geography and GIScience, and how we might foster continued innovation in these fields.
System Integration for Real-time Mobile Manipulation
Reza Oftadeh
2014-03-01
Mobile manipulators are one of the most complicated types of mechatronics systems. The performance of these robots in performing complex manipulation tasks is highly correlated with the synchronization and integration of their low-level components. This paper discusses in detail the mechatronics design of a four-wheel-steered mobile manipulator. It presents the manipulator's mechanical structure and electrical interfaces, designs a low-level software architecture based on embedded PC-based controls, and proposes a systematic solution based on the code generation products of MATLAB and Simulink. The remote development environment described here is used to develop real-time controller software and modules for the mobile manipulator under a POSIX-compliant, real-time Linux operating system. Our approach enables developers to reliably design controller modules that meet the hard real-time constraints of the entire low-level system architecture. Moreover, it provides a systematic framework for the development and integration of hardware devices with various communication media and protocols, which facilitates the development and integration process of the software controller.
Integration of Real-Time Data Into Building Automation Systems
Mark J. Stunder; Perry Sebastian; Brenda A. Chube; Michael D. Koontz
2003-04-16
The project goal was to investigate the possibility of using predictive real-time information from the Internet as an input to building management system algorithms. The objectives were to: identify the types of information most valuable to commercial and residential building owners, managers, and system designers; comprehensively investigate and document currently available electronic real-time information suitable for use in building management systems; verify the reliability of the information and recommend accreditation methods for data and providers; assess methodologies to automatically retrieve and utilize the information; characterize equipment required to implement automated integration; demonstrate the feasibility and benefits of using the information in building management systems; and identify evolutionary control strategies.
Voyages Through Time: Integrated science for high schools
Harman, Pamela; Devore, Edna
Investigating the origin and evolution of the universe and life is a compelling theme for teaching science. It engages students in the key questions about change and the evidence for change over time, and offers a unifying theme for integrated science. "Voyages Through Time" is a high school integrated science curriculum on the theme of evolution. Six modules comprise the year-long course: Cosmic Evolution, Planetary Evolution, Origin of Life, Evolution of Life, and Evolution of Technology. A brief overview of the curriculum is presented. Participants conduct one or two activities representative of the six modules. Each workshop participant receives a sampler CD-ROM with a comprehensive overview of the curriculum, standards, and resources including complete lessons for use in the classroom. "Voyages Through Time" is being developed by a US team of scientists, educators, writers, and classroom teachers and students led by the SETI Institute partnered with NASA Ames Research Center, California Academy of Sciences and San Francisco State University. In the 2000-2001 school year, "Voyages Through Time" was pilot tested (trialed) in high school classrooms in the San Francisco Bay Area, California. Following revisions, the curriculum was field tested (trialed) in 28 US states in more than 90 schools August 2001-June 2002. The final version is expected to be ready for publication by the beginning of 2003. "Voyages Through Time" is funded by the National Science Foundation (IMD # 9730693), NASA Astrobiology Institute, NASA Fundamental Biology, The Foundation for Microbiology, Educate America, and the Hewlett-Packard Company.
Djeison Cesar Batista
2011-09-01
Thermal rectification of wood was developed in the 1940s and has been widely studied and produced in Europe. In Brazil, research on this technique is still scarce, but it has gained attention recently. The aim of this study was to evaluate the influence of time and temperature of rectification on the reduction of maximum swelling of Eucalyptus grandis wood. According to the results obtained, it is possible to achieve reductions of about 50% in the maximum volumetric swelling of Eucalyptus grandis wood. Best results were obtained at 230°C of thermal rectification rather than 200°C. The factor temperature was more significant than time, since there was no significant difference between the times used (1, 2 and 3 hours). There was no significant interaction between the factors time and temperature.
On noise in time-delay integration CMOS image sensors
Levski, Deyan; Choubey, Bhaskar
2016-05-01
Time delay integration sensors are of increasing interest in CMOS processes owing to their low cost, power and ability to integrate with other circuit readout blocks. This paper presents an analysis of the noise contributors in current-day CMOS Time-Delay-Integration image sensors with various readout architectures. An analysis of charge- versus voltage-domain readout modes is presented, followed by a noise classification of the existing Analog Accumulator Readout (AAR) and Digital Accumulator Readout (DAR) schemes for TDI imaging. The analysis and classification of existing readout schemes include pipelined charge transfer, buffered direct injection, voltage- as well as current-mode analog accumulators, and all-digital accumulator techniques. Time-Delay-Integration imaging modes in CMOS processes typically use N readout steps, equivalent to the number of TDI pixel stages. In CMOS TDI sensors where voltage-domain readout is used, the requirements on the speed and noise of the ADC readout chain are increased, because the dominant voltage readout and ADC noise accumulate with every stage N. To this day, the latter is the primary reason CMOS TDI sensors lag behind their CCD counterparts. Moreover, most commercial CMOS TDI implementations are still based on a charge-domain readout, mimicking a CCD-like operation mode. Thus, having a good understanding of each noise contributor in the signal chain, as well as its magnitude in different readout architectures, is vital for the design of future-generation low-noise CMOS TDI image sensors based on a voltage-domain readout. This paper gives a quantitative classification of all major noise sources for all popular implementations in the literature.
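The accumulation effect described, each of the N voltage-domain readouts injecting its own noise sample, is easy to see in a toy model: the signal grows as N while uncorrelated readout noise grows only as sqrt(N), so SNR improves as sqrt(N). A Monte Carlo sketch with hypothetical numbers (standard-library Python; not a model of any specific readout circuit):

```python
import random
import statistics

def tdi_accumulate(n_stages, signal, read_sigma, rng):
    """Voltage-domain TDI accumulation: each stage contributes the
    (noiseless) pixel signal plus one sample of readout noise."""
    return sum(signal + rng.gauss(0.0, read_sigma) for _ in range(n_stages))

rng = random.Random(7)
n, signal, sigma = 64, 1.0, 0.5
samples = [tdi_accumulate(n, signal, sigma, rng) for _ in range(4000)]
mean = statistics.fmean(samples)     # ~ n * signal            = 64
noise = statistics.stdev(samples)    # ~ sigma * sqrt(n)       = 4
snr = mean / noise                   # ~ signal*sqrt(n)/sigma  = 16
```

A charge-domain (CCD-like) readout, by contrast, digitizes only once after accumulation, injecting the readout noise a single time; that difference is why per-stage noise in voltage-domain CMOS TDI has been the limiting factor.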
Improved integration time estimation of endogenous retroviruses with phylogenetic data.
Hugo Martins
BACKGROUND: Endogenous retroviruses (ERVs) are genetic fossils of ancient retroviral integrations that remain in the genome of many organisms. Most loci are rendered non-functional by mutations, but several intact retroviral genes are known in mammalian genomes. Some have been adopted by the host species, while the beneficial roles of others remain unclear. Besides the obvious possible immunogenic impact of transcribing intact viral genes, endogenous retroviruses have also become an interesting and useful tool to study phylogenetic relationships. The determination of the integration time of these viruses has been based upon the assumption that the 5' and 3' Long Terminal Repeat (LTR) sequences are identical at the time of integration but evolve separately afterwards. Previous approaches have used either a constant evolutionary rate or a range of rates for these viral loci, and only single-species data. Here we show the advantages of using different approaches. RESULTS: We show that there are strong advantages in using multiple-species data and state-of-the-art phylogenetic analysis. We incorporate both simple phylogenetic information and Markov chain Monte Carlo (MCMC) methods to date the integrations of these viruses, based on a relaxed molecular clock approach over a Bayesian phylogeny model, and apply them to several selected ERV sequences in primates. These methods treat each ERV locus as having a distinct evolutionary rate for each LTR, and make use of consensual speciation time intervals between primates to calibrate the relaxed molecular clocks. CONCLUSIONS: The use of a fixed rate produces results that vary considerably with ERV family and the actual evolutionary rate of the sequence, and should be avoided whenever multi-species phylogenetic data are available. For genome-wide studies, the simple phylogenetic approach constitutes a better alternative, while still being computationally feasible.
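For contrast with the relaxed-clock approach, the fixed-rate estimator that the abstract argues against can be sketched in a few lines. The sequences, the substitution rate, and the 2% divergence figure below are hypothetical, chosen only to illustrate the arithmetic T = d / (2r).

```python
# Illustrative sketch (names and numbers are hypothetical): dating an ERV
# integration from the divergence between its 5' and 3' LTRs. Both LTRs are
# identical at integration and then diverge independently, so the expected
# pairwise divergence d after time T is d = 2 * r * T for substitution rate r.

def erv_integration_time(ltr5: str, ltr3: str, rate_per_site_per_myr: float) -> float:
    """Estimate integration time (Myr) from aligned LTR sequences.

    Uses the simple fixed-rate estimator T = d / (2r), where d is the
    proportion of mismatched sites; the paper argues a relaxed-clock
    Bayesian approach is preferable when multi-species data exist.
    """
    if len(ltr5) != len(ltr3):
        raise ValueError("LTR sequences must be aligned to equal length")
    mismatches = sum(a != b for a, b in zip(ltr5, ltr3))
    d = mismatches / len(ltr5)
    return d / (2.0 * rate_per_site_per_myr)

# Toy example: 1 mismatch in 100 aligned sites (d = 0.01) at a hypothetical
# rate of 2.5e-3 substitutions/site/Myr.
ltr5 = "ACGT" * 25
ltr3 = "TCGT" + "ACGT" * 24
print(erv_integration_time(ltr5, ltr3, 2.5e-3))  # → 2.0 (Myr)
```

As the CONCLUSIONS note, the estimate scales inversely with the assumed rate, which is exactly why a single fixed rate is fragile across ERV families.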
Rigorous time slicing approach to Feynman path integrals
Fujiwara, Daisuke
2017-01-01
This book proves that Feynman's original definition of the path integral actually converges to the fundamental solution of the Schrödinger equation at least in the short term if the potential is differentiable sufficiently many times and its derivatives of order equal to or higher than two are bounded. The semi-classical asymptotic formula up to the second term of the fundamental solution is also proved by a method different from that of Birkhoff. A bound of the remainder term is also proved. The Feynman path integral is a method of quantization using the Lagrangian function, whereas Schrödinger's quantization uses the Hamiltonian function. These two methods are believed to be equivalent. But equivalence is not fully proved mathematically, because, compared with Schrödinger's method, there is still much to be done concerning rigorous mathematical treatment of Feynman's method. Feynman himself defined a path integral as the limit of a sequence of integrals over finite-dimensional spaces which is obtained by...
Ghousiya Begum, K; Seshagiri Rao, A; Radhakrishnan, T K
2017-03-18
Internal model control (IMC) with an optimal H2 minimization framework is proposed in this paper for the design of proportional-integral-derivative (PID) controllers. The controller design is addressed for integrating and double-integrating time delay processes with right half plane (RHP) zeros. A Blaschke product is used to derive the optimal controller. There is a single adjustable closed-loop tuning parameter for controller design. Systematic guidelines are provided for the selection of this tuning parameter based on maximum sensitivity. Simulation studies have been carried out on various integrating time delay processes to show the advantages of the proposed method. The proposed controller provides enhanced closed-loop performance compared to recently reported methods in the literature. A quantitative comparative analysis has been carried out using the performance indices Integral Absolute Error (IAE) and Total Variation (TV).
A Stable Higher Order Space-Time Galerkin Scheme for Time Domain Integral Equations
Pray, A J; Nair, N V; Cools, K; Bağcı, H; Shanker, B
2014-01-01
Stability of time domain integral equation (TDIE) solvers has remained an elusive goal for many years. Advancement of this research has largely progressed on four fronts: (1) Exact integration, (2) Lubich quadrature, (3) smooth temporal basis functions, and (4) Space-time separation of convolutions with the retarded potential. The latter method was explored in [Pray et al. IEEE TAP 2012]. This method's efficacy in stabilizing solutions to the time domain electric field integral equation (TD-EFIE) was demonstrated on first order surface descriptions (flat elements) in tandem with 0th order functions as the temporal basis. In this work, we develop the methodology necessary to extend to higher order surface descriptions as well as to enable its use with higher order temporal basis functions. These higher order temporal basis functions are used in a Galerkin framework. A number of results that demonstrate convergence, stability, and applicability are presented.
Time to trust: longitudinal integrated clerkships and entrustable professional activities.
Hirsh, David A; Holmboe, Eric S; ten Cate, Olle
2014-02-01
Medical education shaped by the learning sciences can better serve medical students, residents, faculty, health care institutions, and patients. With increasing innovation in undergraduate and graduate medical education and more focused attention on educational principles and how people learn, this era of educational transformation offers promise. Principles manifest in "educational continuity" are informing changes in educational structures and venues and are enriching new discourse in educational pedagogy, assessment, and scholarship. The articles by Myhre and colleagues and Woloschuk and colleagues in this issue, along with mounting evidence preceding these works, should reassure readers that principle-driven innovation in medical education is not only possible but can be achieved safely. In this commentary, the authors draw from these works and the wider literature on longitudinal integrated educational design. They suggest that the confluences of movements for longitudinal integrated clerkships and entrustable professional activities open new possibilities for other educational and practice advancements in quality and safety. With the advent of competency-based education, explicit milestones, and improved assessment regimens, overseers will increasingly evaluate students, trainees, and other learners on their ability rather than relying solely on time spent in an activity. The authors suggest that, for such oversight to have the most value, assessors and learners need adequate oversight time, and redesign of educational models will serve this operational imperative. As education leaders are reassessing old medical school and training models, rotational blocks, and other barriers to progress, the authors explore the dynamic interplay between longitudinal integrated learning models and entrustment.
Quantum stopping times stochastic integral in the interacting Fock space
Kang, Yuanbao, E-mail: kangyuanb@163.com [College of Mathematics Science, Chong Qing Normal University, Chongqing 400047 (China)
2015-08-15
Following the ideas of Hudson [J. Funct. Anal. 34(2), 266-281 (1979)] and Parthasarathy and Sinha [Probab. Theory Relat. Fields 73, 317-349 (1987)], we define a quantum stopping time (QST, for short) τ in the interacting Fock space (IFS, for short), Γ, over L²(ℝ⁺), which is actually a spectral measure in [0, ∞] such that τ([0, t]) is an adapted process. Motivated by Parthasarathy and Sinha [Probab. Theory Relat. Fields 73, 317-349 (1987)] and Applebaum [J. Funct. Anal. 65, 273-291 (1986)], we also develop a corresponding quantum stopping time stochastic integral (QSTSI, for short) on the IFS over a subspace of L²(ℝ⁺) equipped with a filtration. As an application, this integral provides a useful tool for proving that Γ admits a strong factorisation, i.e., Γ = Γ_{τ]} ⊗ Γ_{[τ}, where Γ_{τ]} and Γ_{[τ} stand for the part "before τ" and the part "after τ", respectively. Additionally, this integral also gives rise to a natural composition operation among QSTs, making the space of all QSTs a semigroup.
Time resolved spectroscopy of GRB030501 using INTEGRAL
Beckmann, V; Courvoisier, Thierry J L; Goetz, D; Hudec, R; Hroch, F; Lund, N; Mereghetti, S; Shaw, S E; Wigger, C
2003-01-01
The gamma-ray instruments on board INTEGRAL offer a unique opportunity to perform time-resolved analysis of GRBs. The imager IBIS allows accurate positioning of GRBs and broad-band spectral analysis, while SPI provides high-resolution spectroscopy. GRB 030501 was discovered by the INTEGRAL Burst Alert System in the ISGRI field of view. Although the burst was fairly weak (fluence F = 3.5 × 10^-6 erg cm^-2 in the 20-200 keV energy band), it was possible to perform time-resolved spectroscopy with a resolution of a few seconds. The GRB shows a spectrum in the 20-400 keV range which is consistent with a spectral photon index of -1.7. No emission line or spectral break was detectable in the spectrum. Although the flux seems to be correlated with the hardness of the GRB spectrum, there is no clear soft-to-hard evolution over the duration of the burst. The INTEGRAL data have been compared with results from the Ulysses and RHESSI experiments.
Energy conservation in Newmark based time integration algorithms
Krenk, Steen
2006-01-01
Energy balance equations are established for the Newmark time integration algorithm, and for the derived algorithms with algorithmic damping introduced via averaging, the so-called α-methods. The energy balance equations form a sequence applicable to: Newmark integration of the undamped equations of motion, an extended form including structural damping, and finally the generalized form including structural as well as algorithmic damping. In all three cases the expression for energy, appearing in the balance equation, is the mechanical energy plus some additional terms generated by the discretization by the algorithm. The magnitude and character of these terms as well as the associated damping terms are discussed in relation to energy conservation and stability of the algorithms. It is demonstrated that the additional terms in the energy lead to periodic fluctuations of the mechanical energy and are the cause...
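The periodic energy fluctuations described above can be reproduced with a small sketch. The oscillator parameters and step size below are invented for the example: an undamped linear oscillator is integrated with Newmark's method, comparing the average-acceleration case (β = 1/4, γ = 1/2), which conserves the mechanical energy of a linear system, against the linear-acceleration case (β = 1/6), where the extra discretization terms make the mechanical energy fluctuate periodically.

```python
import numpy as np

def newmark_energy(m, k, u0, v0, dt, steps, beta=0.25, gamma=0.5):
    """Integrate m*a + k*u = 0 with Newmark's method; return mechanical energy history."""
    u, v = u0, v0
    a = -k * u / m
    energy = []
    for _ in range(steps):
        # Newmark predictors for displacement and velocity
        u_pred = u + dt * v + dt**2 * (0.5 - beta) * a
        v_pred = v + dt * (1.0 - gamma) * a
        # Enforce m*a_new + k*u_new = 0 with u_new = u_pred + beta*dt^2*a_new
        a_new = -k * u_pred / (m + k * beta * dt**2)
        u = u_pred + beta * dt**2 * a_new
        v = v_pred + gamma * dt * a_new
        a = a_new
        energy.append(0.5 * m * v**2 + 0.5 * k * u**2)
    return np.array(energy)

E_avg = newmark_energy(1.0, 4.0, 1.0, 0.0, dt=0.05, steps=400)            # beta = 1/4
E_lin = newmark_energy(1.0, 4.0, 1.0, 0.0, dt=0.05, steps=400, beta=1/6)  # beta = 1/6
print(E_avg.max() - E_avg.min(), E_lin.max() - E_lin.min())
```

For β = 1/4 the fluctuation is at round-off level, while for β = 1/6 the mechanical energy oscillates with a small bounded amplitude, consistent with the periodic fluctuations the abstract attributes to the additional discretization terms.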
Biased differential relay with digital real time integration
Hijazi, M. E. A.; Basak, A.
1993-05-01
A new algorithm has been developed for a single-phase relay, which can be used for the protection of both single-phase and three-phase transformers. This paper presents a new scheme using a real-time integration technique to determine whether the differential current is caused by magnetizing inrush current or by an internal fault. The proposed relay has been simulated for continuous monitoring of the differential current waveform, through the peak value of each current cycle and digital integration of that cycle. Relay tripping depends on changes in the area under the differential current waveform. The magnetizing inrush current waveform has less area under its curve than the fault current waveform, which contains a sinusoidal wave and a transient DC component. This phenomenon has been used to block the operation of the relay on "not true" fault currents seen as differential currents.
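The area criterion above can be illustrated numerically. The waveforms below are hypothetical stand-ins (a sinusoid with decaying DC offset for an internal fault, narrow positive peaks for magnetizing inrush), not the paper's recorded signals; the point is only that per-cycle digital integration of the rectified current separates the two cases.

```python
import math

def cycle_area(samples, dt):
    """Approximate area under |i(t)| over one cycle via the trapezoidal rule."""
    rect = [abs(s) for s in samples]
    return dt * (sum(rect) - 0.5 * (rect[0] + rect[-1]))

# Hypothetical waveforms over one 50 Hz cycle (names/values illustrative):
f, n = 50.0, 200
dt = (1.0 / f) / n
t = [k * dt for k in range(n + 1)]
# Internal fault: sinusoid plus decaying DC component -> large enclosed area.
fault = [math.sin(2 * math.pi * f * tk) + math.exp(-tk / 0.05) for tk in t]
# Magnetizing inrush: narrow peaks, current near zero most of the cycle.
inrush = [max(0.0, math.sin(2 * math.pi * f * tk)) ** 5 for tk in t]
trip = cycle_area(fault, dt) > cycle_area(inrush, dt)
print(trip)  # → True: the fault cycle encloses more area, so the relay trips
```

A real implementation would compare each cycle's area against a threshold calibrated per transformer; the comparison of two candidate waveforms here is purely didactic.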
Koyama, Shinsuke; Paninski, Liam
2010-08-01
A number of important data analysis problems in neuroscience can be solved using state-space models. In this article, we describe fast methods for computing the exact maximum a posteriori (MAP) path of the hidden state variable in these models, given spike train observations. If the state transition density is log-concave and the observation model satisfies certain standard assumptions, then the optimization problem is strictly concave and can be solved rapidly with Newton-Raphson methods, because the Hessian of the loglikelihood is block tridiagonal. We can further exploit this block-tridiagonal structure to develop efficient parameter estimation methods for these models. We describe applications of this approach to neural decoding problems, with a focus on the classic integrate-and-fire model as a key example.
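The block-tridiagonal Newton idea can be sketched in a scalar linear-Gaussian special case, where the log posterior is exactly quadratic and a single tridiagonal solve (the scalar analogue of the block-tridiagonal Hessian system) yields the MAP path. The model, its parameters, and the Gaussian prior on the initial state below are invented for illustration; the paper's setting has point-process (spike train) observations rather than Gaussian ones.

```python
import numpy as np

def map_path(y, a=0.95, q=0.1, r=0.5):
    """MAP state path for x_t = a*x_{t-1} + N(0, q), y_t = x_t + N(0, r).

    All densities are Gaussian, so the log posterior is exactly quadratic
    (and concave): one Newton step is exact, and the MAP path solves a
    single tridiagonal linear system H @ x = y / r.
    """
    T = len(y)
    # Tridiagonal Hessian of the negative log posterior (x_0 ~ N(0, q) prior).
    main = np.full(T, 1.0 / r)
    main[0] += 1.0 / q          # prior on the initial state (an assumption)
    main[1:] += 1.0 / q         # each transition term penalizes x_t ...
    main[:-1] += a * a / q      # ... and couples back to x_{t-1}
    off = np.full(T - 1, -a / q)
    d = (y / r).astype(float).copy()
    # Thomas algorithm: O(T) forward elimination and back substitution.
    b = main.copy()
    for i in range(1, T):
        w = off[i - 1] / b[i - 1]
        b[i] -= w * off[i - 1]
        d[i] -= w * d[i - 1]
    x = np.empty(T)
    x[-1] = d[-1] / b[-1]
    for i in range(T - 2, -1, -1):
        x[i] = (d[i] - off[i] * x[i + 1]) / b[i]
    return x

rng = np.random.default_rng(0)
x_true = np.cumsum(rng.normal(0.0, 0.3, 50))
y = x_true + rng.normal(0.0, 0.7, 50)
x_map = map_path(y)
print(bool(np.var(np.diff(x_map)) < np.var(np.diff(y))))  # the MAP path is smoother
```

With non-Gaussian but log-concave observations, the same tridiagonal (in general, block-tridiagonal) structure persists in the Hessian, and the Newton iteration simply repeats this O(T) solve.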
Integral Time and the Varieties of Post-Mortem Survival
Sean M. Kelly
2008-06-01
While the question of survival of bodily death is usually approached by focusing on the mind/body relation (and often with the idea of the soul as a special kind of substance), this paper explores the issue in the context of our understanding of time. The argument of the paper is woven around the central intuition of time as an "ever-living present." The development of this intuition allows for a more integral or "complex-holistic" theory of time, the soul, and the question of survival. Following the introductory matter, the first section proposes a re-interpretation of Nietzsche's doctrine of eternal recurrence in terms of moments and lives as "eternally occurring." The next section is a treatment of Julian Barbour's neo-Machian model of instants of time as configurations in the n-dimensional phase-space he calls "Platonia." While rejecting his claim to have done away with time, I do find his model suggestive of the idea of moments and lives as eternally occurring. The following section begins with Fechner's visionary ideas of the nature of the soul and its survival of bodily death, with particular attention to the notion of holonic inclusion and the central analogy of the transition from perception to memory. I turn next to Whitehead's equally holonic notions of prehension and the concrescence of actual occasions. From his epochal theory of time and certain ambiguities in his reflections on the "divine antinomies," we are brought to the threshold of a potentially more integral or "complex-holistic" theory of time and survival, which is treated in the last section. This section draws from my earlier work on Hegel, Jung, and Edgar Morin, as well as from key insights of Jean Gebser, for an interpretation of Sri Aurobindo's inspired but cryptic description of the "Supramental Time Vision." This interpretation leads to an alternative understanding of reincarnation—and to the possibility of its reconciliation with the once-only view.
Chen Poyu
2013-01-01
Products made overseas but sold in Taiwan are very common. For the cross-border or interregional production and marketing of goods, inventory decision-makers often have to determine the amount of purchases per cycle, the number of transport vehicles, the working hours of each transport vehicle, and whether delivery to sales offices is by ground or air transport, in order to minimize the total inventory cost per unit time. This model assumes that the amount of purchases for each order cycle should allow all rented vehicles to be fully loaded and the number of transport trips to reach the upper limit within the time period. The main findings of this study include the optimal solution of the integer programming model and the results of a sensitivity analysis.
Green, Cynthia L; Brownie, Cavell; Boos, Dennis D; Lu, Jye-Chyi; Krucoff, Mitchell W
2016-04-01
We propose a novel likelihood method for analyzing time-to-event data when multiple events and multiple missing data intervals are possible prior to the first observed event for a given subject. This research is motivated by data obtained from a heart monitor used to track the recovery process of subjects experiencing an acute myocardial infarction. The time to first recovery, T1, is defined as the time when the ST-segment deviation first falls below 50% of the previous peak level. Estimation of T1 is complicated by data gaps during monitoring and the possibility that subjects can experience more than one recovery. If gaps occur prior to the first observed event, T, the first observed recovery may not be the subject's first recovery. We propose a parametric gap likelihood function conditional on the gap locations to estimate T1. Standard failure time methods that do not fully utilize the data are compared to the gap likelihood method by analyzing data from an actual study and by simulation. The proposed gap likelihood method is shown to be more efficient and less biased than interval censoring, and more efficient than right censoring if data gaps occur early in the monitoring process or are short in duration.
Order reduction in time integration caused by velocity projection
Arnold, Martin [Martin Luther University, Halle (Germany)]; Cardona, Alberto [Universidad Nacional del Litoral, Santa Fe (Argentina)]; Brüls, Olivier [University of Liège, Liège (Belgium)]
2015-07-15
Holonomic constraints restrict the configuration of a multibody system to a subset of the configuration space. They imply so-called hidden constraints at the level of velocity coordinates; a solution satisfying both the hidden constraints and the original constraint equations may be obtained by considering both types of constraints in each time step (stabilized index-2 formulation) or by using projection techniques. Both approaches are well established in the time integration of differential-algebraic equations. Recently, we have introduced a generalized-α Lie group time integration method for the stabilized index-2 formulation that achieves second-order convergence for all solution components. In the present paper, we show that a separate velocity projection would be less favourable, since it may result in an order reduction and in large transient errors after each projection step. This undesired numerical behaviour is analysed by a one-step error recursion that considers the coupled error propagation in differential and algebraic solution components. This one-step error recursion has been used before to prove second-order convergence for the application of generalized-α methods to constrained systems.
Integrating timing and conditioning approaches to study behavior.
Kalafut, Kathryn L; Freestone, David M; MacInnis, Mika L M; Church, Russell M
2014-10-01
Skinner and Pavlov had innovative ways to measure both the times and the rates of their subjects' responses. Since then, different subfields within the study of animal behavior have prioritized either the rate or the timing of responses, creating a divide in data and theory. Both the timing and conditioning fields have proven fruitful, producing large bodies of empirical data and developing sophisticated models. Despite their individual successes, a unified view of simple behavior is still lacking. This may be caused, at least in part, by the differential emphasis on data collection and analysis techniques. The result is that these subfields produce models that fit their own data well but fail to translate to the other domain. This is startling given that both subfields use nearly identical experimental procedures. To highlight similarities between the subfields, and to provide empirical data in support of their integration, 18 Sprague-Dawley rats were trained on trace, delay, and backward conditioning procedures. Using these empirical data, we discuss how traditional summary measures used by these subfields can be limiting, and suggest methods that may aid in integrating these subfields toward common goals.
Development of visuo-auditory integration in space and time
Monica eGori
2012-09-01
Adults integrate multisensory information optimally (e.g., Ernst & Banks, 2002), while children are not able to integrate multisensory visual-haptic cues until 8-10 years of age (e.g., Gori, Del Viva, Sandini, & Burr, 2008). Before that age, strong unisensory dominance is present for size and orientation visual-haptic judgments, possibly reflecting a process of cross-sensory calibration between modalities. It is widely recognized that audition dominates time perception, while vision dominates space perception. If the cross-sensory calibration process is necessary for development, then the auditory modality should calibrate vision in a bimodal temporal task, and the visual modality should calibrate audition in a bimodal spatial task. Here we measured visual-auditory integration in both the temporal and the spatial domains, reproducing for the spatial task a child-friendly version of the ventriloquist stimuli used by Alais and Burr (2004) and for the temporal task a child-friendly version of the stimulus used by Burr, Banks and Morrone (2009). Unimodal and bimodal (conflictual or not) audio-visual thresholds and PSEs were measured and compared with the Bayesian predictions. In the temporal domain, we found that in both children and adults, audition dominates the bimodal visuo-auditory task in both perceived time and precision thresholds. Conversely, in the visual-auditory spatial task, children younger than 12 years of age show clear visual dominance (on PSEs) and bimodal thresholds higher than the Bayesian prediction. Only in the adult group do bimodal thresholds become optimal. In agreement with previous studies, our results suggest that adult-like visual-auditory behaviour also develops late. Interestingly, the visual dominance for space and the auditory dominance for time that we found might suggest a cross-sensory comparison of vision in a spatial visuo-audio task and a cross-sensory comparison of audition in a temporal visuo-audio task.
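The Bayesian prediction that the thresholds above are compared against is the standard maximum-likelihood cue-combination rule: the bimodal estimate is a reliability-weighted average of the unimodal estimates, and the predicted bimodal variance is never worse than the best unimodal one. The numbers below are hypothetical, chosen only to show audition (the more precise cue in time) pulling the bimodal percept.

```python
# Minimal sketch of optimal (maximum-likelihood) cue combination:
# weights are proportional to the inverse variances of the unimodal cues.

def optimal_integration(est_v, sigma_v, est_a, sigma_a):
    w_v = (1 / sigma_v**2) / (1 / sigma_v**2 + 1 / sigma_a**2)
    w_a = 1.0 - w_v
    est = w_v * est_v + w_a * est_a
    sigma = (1.0 / (1 / sigma_v**2 + 1 / sigma_a**2)) ** 0.5
    return est, sigma

# Illustrative numbers: audition more precise in time (sigma 0.5 vs 2.0),
# so the bimodal percept is pulled toward the auditory estimate.
est, sigma = optimal_integration(est_v=10.0, sigma_v=2.0, est_a=8.0, sigma_a=0.5)
print(round(est, 3), round(sigma, 3))  # → 8.118 0.485
```

A child showing unisensory dominance would instead track one cue's estimate and threshold alone, which is how the study detects departures from this prediction.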
A higher order space-time Galerkin scheme for time domain integral equations
Pray, Andrew J.
2014-12-01
Stability of time domain integral equation (TDIE) solvers has remained an elusive goal for many years. Advancement of this research has largely progressed on four fronts: 1) exact integration, 2) Lubich quadrature, 3) smooth temporal basis functions, and 4) space-time separation of convolutions with the retarded potential. The latter method's efficacy in stabilizing solutions to the time domain electric field integral equation (TD-EFIE) was previously reported for first-order surface descriptions (flat elements) and zeroth-order functions as the temporal basis. In this work, we develop the methodology necessary to extend the scheme to higher order surface descriptions as well as to enable its use with higher order basis functions in both space and time. These basis functions are then used in a space-time Galerkin framework. A number of results are presented that demonstrate convergence in time. The viability of the space-time separation method in producing stable results is demonstrated experimentally for these examples.
Algorithmic properties of the midpoint predictor-corrector time integrator.
Rider, William J.; Love, Edward; Scovazzi, Guglielmo
2009-03-01
Algorithmic properties of the midpoint predictor-corrector time integration algorithm are examined. In the case of a finite number of iterations, the errors in angular momentum conservation and incremental objectivity are controlled by the number of iterations performed. Exact angular momentum conservation and exact incremental objectivity are achieved in the limit of an infinite number of iterations. A complete stability and dispersion analysis of the linearized algorithm is detailed. The main observation is that stability depends critically on the number of iterations performed.
Organic Materials for Time-Temperature Integrator Devices.
Cavallini, Massimiliano; Melucci, Manuela
2015-08-12
Time-temperature integrators (TTIs) are devices capable of recording the thermal history of a system. They have an enormous impact in the food and pharmaceutical industries. TTIs exploit several irreversible thermally activated transitions, such as recrystallization, dewetting, smoothening, chemical decomposition, and polymorphic transitions, which are usually considered drawbacks for many technological applications. The aim of this article is to sensitize research groups working in organic synthesis and surface science to TTI devices, enlarging the prospects of many new materials. We review the principal applications, highlighting the needs and open issues of TTIs, which offer a new opportunity for the development of many materials.
Time-dependent Integrated Predictive Modeling of ITER Plasmas
R.V. Budny
2007-01-01
Introduction: Modeling burning plasmas is important for speeding progress toward practical tokamak energy production. Examples of issues that can be elucidated by modeling include requirements for heating, fueling, torque, and current drive systems; design of diagnostics; and estimates of the plasma performance (e.g., fusion power production) in various plasma scenarios. The modeling should be time-dependent to demonstrate that burning plasmas can be created, maintained (controlled), and terminated successfully. The modeling should also be integrated to treat self-consistently the nonlinearities and strong coupling between the plasma, heating, current drive, confinement, and control systems.
FENG Guolin; DONG Wenjie; GAO Hongxing
2005-01-01
The time-dependent solution of a reduced air-sea coupling stochastic-dynamic model is accurately obtained by using the Fokker-Planck equation and the quantum mechanical method. The analysis of the time-dependent solution suggests that when the climate system is in the ground state, the behavior of the system appears to be a Brownian movement, thus supporting the basis of Hasselmann's stochastic climate model; when the system is in the first excitation state, the motion of the system exhibits a form of time decay, or under certain conditions a periodic oscillation with a main period of 2.3 yr. Finally, the results are used to discuss the impact of the doubling of carbon dioxide on climate.
Integrated real time bowel sound detector for artificial pancreas systems
Khandaker A. Al Mamun
2016-03-01
This paper reports an ultra-low-power real-time bowel sound detector with an integrated feature extractor for physiologic measurement of meal instances in artificial pancreas devices. The system can aid in improving long-term diabetic patient care and consists of a front-end detector and a signal processing unit. The front-end detector transduces the initial bowel sound recorded from a piezoelectric sensor into a voltage signal. The signal processor uses a feature extractor to determine whether a bowel sound is detected. The feature extractor consists of a low-noise, low-power signal front-end, peak and trough locator, signal slope and width detector, digitizer, and bowel pulse locator. The system was fabricated in a standard 0.18 μm CMOS process, and the bowel sound detection system was characterized and verified with experimentally recorded bowel sounds. The integrated instrument consumes 53 μW of power from a 1 V supply in a 0.96 mm² area, and is suitable for integration with portable devices.
Real-Time Verification of Integrity Policies for Distributed Systems
Ernesto Buelna
2013-12-01
We introduce a mechanism for the verification of real-time integrity policies for the operation of a distributed system. Our mechanism is based on Microsoft .NET technologies. Unlike rival competitors, it is not intrusive, as it hardly modifies the source code of any component of the system to be monitored. Our mechanism consists of four modules: the specification module, which comes with a security policy specification language geared towards the capture of integrity policies; the monitoring module, which includes a code injector, whereby the mechanism observes how specific methods of the system, referred to by some policy, are invoked; the verifier module, which examines the operation of the distributed system in order to determine whether it is policy compliant or not; and the reporter module, which notifies that the system is policy compliant, or sends an alert upon the occurrence of a contingency, indicating a policy violation. We argue that our mechanism can be framed within the Clark and Wilson security model and thus used to realise information integrity. We illustrate the workings and the power of our mechanism on a simple, but industrial-strength, case study.
2013-08-01
[Flattened table not recoverable from source: maximum-dose and CAMI medication intervals, in hours and half-lives, for Codeine (t½ 4.0 h) and Morphine (t½ 7.0 h).] ...return-to-duty time, even for individuals on the extreme metabolic margins of the general population. The variation in t½ (calculated by the CAMI
Furbish, David J.; Schmeeckle, Mark; Schumer, Rina; Fathel, Siobhan L.
2016-01-01
We describe the most likely forms of the probability distributions of bed load particle velocities, accelerations, hop distances, and travel times, in a manner that formally appeals to inferential statistics while honoring mechanical and kinematic constraints imposed by equilibrium transport conditions. The analysis is based on E. Jaynes's elaboration of the implications of the similarity between the Gibbs entropy in statistical mechanics and the Shannon entropy in information theory. By maximizing the information entropy of a distribution subject to known constraints on its moments, our choice of the form of the distribution is unbiased. The analysis suggests that particle velocities and travel times are exponentially distributed and that particle accelerations follow a Laplace distribution with zero mean. Particle hop distances, viewed alone, ought to be distributed exponentially. However, the covariance between hop distances and travel times precludes this result. Instead, the covariance structure suggests that hop distances follow a Weibull distribution. These distributions are consistent with high-resolution measurements obtained from high-speed imaging of bed load particle motions. The analysis brings us closer to choosing distributions based on our mechanical insight.
A 7.5 ps single-shot precision integrated time counter with segmented delay line
Klepacki, K.; Szplet, R.; Pelka, R.
2014-03-01
This paper describes the design and test results of a time interval counter featuring a single-shot precision of 7.5 ps root mean square (rms) and a measurement range of 1 ms. These parameters have been achieved by combining the direct counting method with a two-stage interpolation within a single clock period. Both stages of interpolation are based on the use of tapped delay lines stabilized by a delay-locked loop mechanism. In the first stage, a coarse resolution is obtained with the aid of a high-frequency multiphase clock, while in the second stage a sub-gate delay resolution is achieved with the use of a differential delay line. To reduce the nonlinearities of conversion and to improve the precision of measurement, a novel segmented delay line is proposed. An important feature of this segmented delay line is partial overlapping of the measurement range and the resulting enhancement of both resolution and precision of the time interval counter. The maximum integral nonlinearity error of the fine-stage interpolators does not exceed 16 ps and 14 ps in the START and STOP interpolators, respectively. These errors have been identified by a statistical calibration procedure and corrected to achieve a single-shot precision better than 7.5 ps (rms). The time counter is integrated in a single ASIC (Application Specific Integrated Circuit) chip using a standard cost-effective 0.35 μm CMOS (Complementary Metal Oxide Semiconductor) process.
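The counting-plus-interpolation reconstruction described above can be sketched as follows; the clock period and fine time values are illustrative placeholders, not the counter's actual parameters:

```python
# Sketch of interval reconstruction for a counter with interpolators
# (Nutt method): coarse count of full clock periods plus START/STOP
# fine corrections. All numbers below are hypothetical.
T_CLK_NS = 10.0  # assumed 100 MHz reference clock period, in ns

def time_interval(n_clk, t_start_ns, t_stop_ns):
    """Interval = N full periods + START fine part - STOP fine part."""
    return n_clk * T_CLK_NS + t_start_ns - t_stop_ns
```

For example, 3 full clock periods with a START fine part of 2.5 ns and a STOP fine part of 1.25 ns reconstruct a 31.25 ns interval.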
Integral ceramic superstructure evaluation using time domain optical coherence tomography
Sinescu, Cosmin; Bradu, Adrian; Topala, Florin I.; Negrutiu, Meda Lavinia; Duma, Virgil-Florin; Podoleanu, Adrian G.
2014-02-01
Optical Coherence Tomography (OCT) is a non-invasive low coherence interferometry technique that includes several technologies (and the corresponding devices and components), such as illumination and detection, interferometry, scanning, adaptive optics, microscopy and endoscopy. From its large area of applications, we consider in this paper a critical aspect in dentistry, to be investigated with a Time Domain (TD) OCT system. The clinical situation of an edentulous mandible is considered; it can be solved by inserting 2 to 6 implants. On these implants a mesostructure is manufactured, and on it a superstructure is needed. This superstructure can be integral ceramic; in this case material defects could be trapped inside the ceramic layers, and those defects could lead to fractures of the entire superstructure. In this paper we demonstrate that a TD-OCT imaging system has the potential to properly evaluate the presence of defects inside the ceramic layers so that they can be fixed before inserting the prosthesis into the oral cavity. Three integral ceramic superstructures were developed by using a CAD/CAM technology. After the milling, the ceramic layers were applied on the core. All three samples were evaluated by a TD-OCT system working at 1300 nm. For two of the superstructures evaluated, no defects were found in the most stressed areas. The third superstructure presented four ceramic defects in the mentioned areas, because of which the superstructure may fracture. The integral ceramic prosthesis was sent back to the dental laboratory to fix the problems related to the material defects found. Thus, TD-OCT proved to be a valuable method for diagnosing ceramic defects inside integral ceramic superstructures in order to prevent fractures at this level.
2011-01-10
...: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure Using Record Evidence, and... facilities of their responsibilities, under Federal integrity management (IM) regulations, to perform... system, especially when calculating Maximum Allowable Operating Pressure (MAOP) or Maximum Operating...
Decorrelating Clutter Statistics for Long Integration Time SAR Imaging
Leanza, Antonio; Monti Guarnieri, Andrea; Recchia, Andrea; Broquetas Ibars, Antoni; Ruiz Rodon, Josep
2015-05-01
An experiment is presented that aims to assess and eventually complement the Billingsley Internal Clutter Motion (ICM) model for long integration time SAR imaging. Exploiting a real-aperture rotating-antenna ground-based radar, observations of rural areas in different periods of the year have been performed. The collected data, obtained from two different acquisition modes, have been processed to obtain short-term and long-term clutter decorrelation analyses. The results revealed interesting aspects of the phenomenon. In particular, the process is non-stationary in time, on scales of minutes to hours, and the DC/AC ratio follows a day/night characteristic. Moreover, the results showed values of the AC component decay rate β higher than those foreseen in the considered spectral interval, which is well below the one analyzed in the Billingsley experiment.
Shaolin Ji
2012-01-01
We study the optimal control problem of a controlled time-symmetric forward-backward doubly stochastic differential equation with initial-terminal state constraints. Applying the terminal perturbation method and Ekeland's variational principle, a necessary condition of the stochastic optimal control, that is, a stochastic maximum principle, is derived. Applications to backward doubly stochastic linear-quadratic control models are investigated.
Liu, Weidong
2009-01-01
In this paper, Cram\\'{e}r type moderate deviations for the maximum of the periodogram and its studentized version are derived. The results are then applied to a simultaneous testing problem in gene expression time series. It is shown that the level of the simultaneous tests is accurate provided that the number of genes $G$ and the sample size $n$ satisfy $G=\\exp(o(n^{1/3}))$.
Svendsen, Morten B S; Domenici, Paolo; Marras, Stefano;
2016-01-01
, and three other large marine pelagic predatory fish species, by measuring the twitch contraction time of anaerobic swimming muscle. The highest estimated maximum swimming speeds were found in sailfish (8.3±1.4 m s(-1)), followed by barracuda (6.2±1.0 m s(-1)), little tunny (5.6±0.2 m s(-1)) and dorado (4...
Night-Time Noise Index Based on the Integration of Awakening Potential
Junta Tagusari
2016-03-01
Sleep disturbance induced by night-time noise is a serious environmental problem that can cause adverse health effects, such as hypertension and ischemic heart disease. Night-time noise indices are used to facilitate the enforcement of permitted noise levels during night-time. However, existing night-time noise indices, such as sound exposure level (SEL), maximum sound level (LAmax) and night equivalent level (Lnight), were selected mainly for practical reasons. Therefore, this study proposes a noise index based on neurophysiological determinants of the awakening process. These determinants indicate that the awakening potential is likely integrated in the brainstem, which governs wakefulness and sleep. From this evidence, a night-time noise index, N_awake,year, was redefined based on the integration of the awakening potential unit (p_unit) estimated from existing dose-response relationships of awakening. The newly defined index considers the total number of awakenings and covers a wide range and number of noise events. We also present examples of its applicability to traffic noise. Although further studies are needed, the index may reveal a reasonable dose-response relationship between sleep disturbance and adverse health effects and provide a consistent explanation for the risks of different sound sources whose noise exposure characteristics are quite different.
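The integration idea can be illustrated with a toy calculation; the dose-response curve, threshold, and coefficient below are hypothetical placeholders, not the paper's fitted relationships:

```python
# Toy illustration of integrating per-event awakening potential into a
# yearly awakening count. The linear dose-response and its 35 dB threshold
# are assumptions made up for this sketch.
def p_awake(lamax_db):
    """Hypothetical awakening probability for one noise event."""
    return max(0.0, 0.0087 * (lamax_db - 35.0))

def n_awake(event_levels_db, nights=365):
    """Sum awakening potential over the events of a representative night."""
    return nights * sum(p_awake(level) for level in event_levels_db)
```

The key property, matching the abstract, is that the index accumulates over the total number of events rather than depending only on a single-number level.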
Kodner Robin B
2010-10-01
Background: Likelihood-based phylogenetic inference is generally considered the most reliable classification method for unknown sequences. However, traditional likelihood-based phylogenetic methods cannot be applied to large volumes of short reads from next-generation sequencing due to computational complexity issues and lack of phylogenetic signal. "Phylogenetic placement," where a reference tree is fixed and the unknown query sequences are placed onto the tree via a reference alignment, is a way to bring the inferential power offered by likelihood-based approaches to large data sets. Results: This paper introduces pplacer, a software package for phylogenetic placement and subsequent visualization. The algorithm can place twenty thousand short reads on a reference tree of one thousand taxa per hour per processor, has essentially linear time and memory complexity in the number of reference taxa, and is easy to run in parallel. Pplacer features calculation of the posterior probability of a placement on an edge, which is a statistically rigorous way of quantifying uncertainty on an edge-by-edge basis. It can also inform the user of the positional uncertainty for query sequences by calculating the expected distance between placement locations, which is crucial in the estimation of uncertainty with a well-sampled reference tree. The software provides visualizations using branch thickness and color to represent the number of placements and their uncertainty. A simulation study using reads generated from 631 COG alignments shows a high level of accuracy for phylogenetic placement over a wide range of alignment diversity, and the power of edge uncertainty estimates to measure placement confidence. Conclusions: Pplacer enables efficient phylogenetic placement and subsequent visualization, making likelihood-based phylogenetic methodology practical for large collections of reads; it is freely available as source code, binaries, and a web service.
Shaw, Melanie; Mueller, Jochen F
2009-03-01
Many environments are subject to periodic fluctuations in pollutant concentrations. Passive samplers, which can provide time weighted average concentrations integrated over the period of deployment, are ideally suited as monitoring tools for these environments. However, the potential for fluctuating concentrations to limit the integrative period of sampling needs to be investigated before sampling data from these environments can be interpreted with confidence. In this study, Chemcatchers using SDB-RPS Empore disks as the sorbent phase were exposed to herbicides for 28 days in a calibration chamber. A pulsed event of 10-fold greater concentrations was introduced on day 5 and returned to background concentrations over a period of 3 days. Observed uptake was compared with that predicted by a first order uptake model and by the reduced form of this model describing a strictly integrative response, for samplers deployed with two surfaces exposed (two-sided naked), with one surface exposed (one-sided naked) and with a polyethersulfone membrane. Membrane covered samplers predicted time weighted average water concentrations within a factor of 0.7-1.2 after 28 days of exposure, while one- and two-sided naked samplers underpredicted the average by factors of 1.9-2.2 and 2.4-3.2, respectively. First order modeling predicted uptake in membrane covered and one-sided naked samplers and was therefore applied to predict sampler response to several fluctuating concentration event scenarios.
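The first order uptake model referred to above can be sketched numerically; the rate constants and the simple Euler step are illustrative, not the Chemcatcher calibration values:

```python
# Sketch of first-order passive-sampler uptake: dN/dt = k_u*C_w(t) - k_e*N,
# where N is the sorbed amount, C_w the water concentration, k_u the uptake
# rate and k_e the elimination rate. Strictly integrative sampling is the
# k_e = 0 limit. Rate constants here are made up for illustration.
def sampler_amount(conc_series, dt, k_u, k_e):
    """Forward-Euler integration of the first-order uptake model."""
    n = 0.0
    for c in conc_series:
        n += (k_u * c - k_e * n) * dt
    return n

def twa_estimate(n, k_u, t_total):
    """Time-weighted average water concentration, assuming integrative uptake."""
    return n / (k_u * t_total)
```

In the integrative limit (k_e = 0) the recovered time-weighted average reproduces a constant exposure concentration exactly; a nonzero k_e is what makes naked samplers underpredict after a pulse, as the study observed.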
Duan, Gaopeng; Xiao, Feng; Wang, Long
2017-01-23
This paper focuses on the average consensus of double-integrator networked systems based on the asynchronous periodic edge-event triggered control. The asynchronous property lies in the edge event-detecting procedure. For different edges, their event detections are performed at different times and the corresponding events occur independently of each other. When an event is activated, the two adjacent agents connected by the corresponding link sample their relative state information and update their controllers. The application of incidence matrix facilitates the transformation of control objects from the agent-based to the edge-based. Practically, due to the constraints of network bandwidth and communication distance, agents usually cannot receive the instantaneous information of some others, which has an impact on the system performance. Hence, it is necessary to investigate the presence of communication time delays. For double-integrator multiagent systems with and without communication time delays, the average state consensus can be asynchronously achieved by designing appropriate parameters under the proposed event-detecting rules. The presented results specify the relationship among the maximum allowable time delays, interaction topologies, and event-detecting periods. Furthermore, the proposed protocols have the advantages of reduced communication costs and controller-updating costs. Simulation examples are given to illustrate the proposed theoretical results.
Grove, R. D.; Bowles, R. L.; Mayhew, S. C.
1972-01-01
A maximum likelihood parameter estimation procedure and program were developed for the extraction of the stability and control derivatives of aircraft from flight test data. Nonlinear six-degree-of-freedom equations describing aircraft dynamics were used to derive sensitivity equations for quasilinearization. The maximum likelihood function with quasilinearization was used to derive the parameter change equations, the covariance matrices for the parameters and measurement noise, and the performance index function. The maximum likelihood estimator was mechanized into an iterative estimation procedure utilizing a real time digital computer and graphic display system. This program was developed for 8 measured state variables and 40 parameters. Test cases were conducted with simulated data for validation of the estimation procedure and program. The program was applied to a V/STOL tilt wing aircraft, a military fighter airplane, and a light single engine airplane. The particular nonlinear equations of motion, derivation of the sensitivity equations, addition of accelerations into the algorithm, operational features of the real time digital system, and test cases are described.
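A minimal single-parameter analogue of the maximum likelihood/quasilinearization loop (sensitivity equation plus Gauss-Newton parameter update), far simpler than the 40-parameter six-degree-of-freedom aircraft case described above:

```python
import math

# Illustrative iterative ML estimation with a sensitivity derivative:
# fit the decay rate a in y(t) = exp(-a*t) to noisy-free synthetic data,
# assuming Gaussian measurement noise. This is a toy stand-in for the
# aircraft stability-derivative problem, not the NASA program itself.
def fit_decay(ts, ys, a0=0.5, iters=20):
    a = a0
    for _ in range(iters):
        num = den = 0.0
        for t, y in zip(ts, ys):
            yh = math.exp(-a * t)
            s = -t * yh              # sensitivity dy/da (quasilinearization)
            num += s * (y - yh)      # gradient of the Gaussian log-likelihood
            den += s * s             # Gauss-Newton Hessian approximation
        a += num / den               # parameter change equation
    return a
```

On exact data the iteration converges to the true rate; with measurement noise the same update yields the maximum likelihood estimate.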
Electric vehicle integration in a real-time market
Pedersen, Anders Bro; Østergaard, Jacob; Poulsen, Bjarne
This project is rooted in the EDISON project, which dealt with Electric Vehicle (EV) integration into the existing power grid, as well as with the infrastructure needed to facilitate the ever-increasing penetration of fluctuating renewable energy resources such as wind turbines. … are needed at the device level. In order for this market to work, however, the proper ICT network and server infrastructure has to be developed. The primary goal of this PhD project has been to investigate the scope of the ICT infrastructure required to realise price-signal based charging of electric … the distributed energy resources registered with it, in order to make them appear as a single producer in the eyes of the market. Although the concept of a VPP is used within the EcoGrid EU project, the idea of more individual control is introduced through a new proposed real-time electricity market, where …
Identification of integrating and critically damped systems with time delay
BAJARANGBALI; Somanath MAJHI
2015-01-01
This paper presents identification of second order plus dead time (SOPDT) integrating and critically damped systems based on relay feedback testing. A relay with hysteresis is applied to the unknown system to obtain sustained oscillations, also called a limit cycle. The limit cycle parameters are utilized in mathematical expressions derived using the state-space technique, so that exact process model parameters are estimated. The relay with hysteresis both helps in generating sustained oscillations and reduces the effect of measurement noise, which is an important issue in system identification. Different types of processes in the form of transfer function models are considered to show the efficacy of the proposed method, and results are compared with available methods in the literature, with and without noise effect.
Time Series Analysis of Integrated Building System Variables
Georgiev, Tz.; Jonkov, T.; Yonchev, E.
2010-10-01
This article deals with time series analysis of indoor and outdoor variables of an integrated building system. The kernel of these systems is the heating, ventilation and air conditioning (HVAC) problem. Important outdoor and indoor variables are: air temperature, global and diffuse radiation, wind speed and direction, temperature, relative humidity, mean radiant temperature, and so on. The aim of this article is to select the structure of, and investigate, linear auto-regressive (AR) and auto-regressive with external inputs (ARX) models. The investigation of the obtained models is based on real-life data. All analyses are carried out in the MATLAB environment. Further research will focus on the synthesis of robust energy-saving control algorithms.
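A first-order ARX fit of the kind described can be sketched with ordinary least squares; the model order and the synthetic data are illustrative, not the article's HVAC measurements:

```python
# Least-squares fit of a first-order ARX model
#   y[k] = a*y[k-1] + b*u[k-1]
# by solving the 2x2 normal equations directly. A toy stand-in for the
# indoor/outdoor climate variables discussed in the article.
def fit_arx1(u, y):
    s_yy = s_yu = s_uu = s_y1y = s_uy = 0.0
    for k in range(1, len(y)):
        y1, u1 = y[k - 1], u[k - 1]
        s_yy += y1 * y1
        s_yu += y1 * u1
        s_uu += u1 * u1
        s_y1y += y1 * y[k]
        s_uy += u1 * y[k]
    det = s_yy * s_uu - s_yu * s_yu      # determinant of the normal equations
    a = (s_y1y * s_uu - s_uy * s_yu) / det
    b = (s_yy * s_uy - s_y1y * s_yu) / det
    return a, b
```

On data generated by a known first-order model, the estimator recovers the true coefficients up to rounding error.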
Ion trap with integrated time-of-flight mass spectrometer
Schneider, Christian; Yu, Peter; Hudson, Eric R
2015-01-01
Recently, we reported an ion trap experiment with an integrated time-of-flight mass spectrometer (TOFMS) [Phys. Rev. Appl. 2, 034013 (2014)] focusing on the improvement of mass resolution and detection limit due to sample preparation at millikelvin temperatures. The system utilizes a radio-frequency (RF) ion trap with asymmetric drive for storing and manipulating laser-cooled ions and features radial extraction into a compact 275 mm long TOF drift tube. The mass resolution exceeds m/Δm = 500, which provides isotopic resolution over the whole mass range of interest in current experiments and constitutes an improvement of almost an order of magnitude over other implementations. In this manuscript, we discuss the experimental implementation in detail, which comprises newly developed drive electronics for generating the required voltages to operate the RF trap and TOFMS, as well as control electronics for regulating RF outputs and synchronizing the TOFMS extraction.
Pneumatic oscillator circuits for timing and control of integrated microfluidics.
Duncan, Philip N; Nguyen, Transon V; Hui, Elliot E
2013-11-05
Frequency references are fundamental to most digital systems, providing the basis for process synchronization, timing of outputs, and waveform synthesis. Recently, there has been growing interest in digital logic systems that are constructed out of microfluidics rather than electronics, as a possible means toward fully integrated laboratory-on-a-chip systems that do not require any external control apparatus. However, the full realization of this goal has not been possible due to the lack of on-chip frequency references, thus requiring timing signals to be provided from off-chip. Although microfluidic oscillators have been demonstrated, there have been no reported efforts to characterize, model, or optimize timing accuracy, which is the fundamental metric of a clock. Here, we report pneumatic ring oscillator circuits built from microfluidic valves and channels. Further, we present a compressible-flow analysis that differs fundamentally from conventional circuit theory, and we show the utility of this physically based model for the optimization of oscillator stability. Finally, we leverage microfluidic clocks to demonstrate circuits for the generation of phase-shifted waveforms, self-driving peristaltic pumps, and frequency division. Thus, pneumatic oscillators can serve as on-chip frequency references for microfluidic digital logic circuits. On-chip clocks and pumps both constitute critical building blocks on the path toward achieving autonomous laboratory-on-a-chip devices.
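The timing role of such an oscillator can be summarized with the standard ring-oscillator relation f = 1/(2·n·t_d) for n inverting stages of propagation delay t_d; the sketch below assumes this electronic-circuit relation carries over to pneumatic valve stages, which is a simplification of the paper's compressible-flow analysis:

```python
# Back-of-the-envelope ring-oscillator model applied to pneumatic valve
# stages. The per-stage delay below is an invented illustrative value,
# not a measured property of the microfluidic devices.
def ring_freq_hz(n_stages, t_d_s):
    """Oscillation frequency of a ring of n inverting stages, delay t_d each."""
    if n_stages % 2 == 0:
        raise ValueError("a ring oscillator needs an odd number of stages")
    return 1.0 / (2.0 * n_stages * t_d_s)
```

For example, three valve stages with an assumed 10 ms delay each would clock at roughly 17 Hz, and frequency division (as demonstrated in the paper) then derives slower timing signals from that reference.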
Extracting Time-Resolved Information from Time-Integrated Laser-Induced Breakdown Spectra
Emanuela Grifoni
2014-01-01
Laser-induced breakdown spectroscopy (LIBS) data are characterized by a strong dependence on the acquisition time after the onset of the laser plasma. However, time-resolved broadband spectrometers are expensive and often not suitable for use in portable LIBS instruments. In this paper we show how the analysis of a series of LIBS spectra, taken at different delays after the laser pulse, allows the recovery of time-resolved spectral information. The comparison of such spectra is presented for the analysis of an aluminium alloy. The plasma parameters (electron temperature and number density) are evaluated, starting from the time-integrated and time-resolved spectra, respectively. The results are compared and discussed.
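Electron temperature extraction of the kind mentioned is commonly done with a Boltzmann plot; the sketch below assumes optically thin lines in local thermodynamic equilibrium and uses synthetic line data, not the paper's aluminium spectra:

```python
import math

K_B_EV = 8.617333262e-5  # Boltzmann constant in eV/K

# Boltzmann-plot estimate of the plasma electron temperature:
#   ln(I * lambda / (g*A)) = C - E_upper / (k_B * T)
# so T follows from the slope of a straight-line fit.
def boltzmann_temperature(lines):
    """lines: iterable of (intensity, wavelength, gA, upper_energy_eV)."""
    xs = [e for _, _, _, e in lines]
    ys = [math.log(i * lam / ga) for i, lam, ga, _ in lines]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return -1.0 / (K_B_EV * (sxy / sxx))
```

Applying this to time-integrated versus delay-gated line intensities is what lets the two temperature estimates in the abstract be compared.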
Ronzhin, A., E-mail: ronzhin@fnal.gov [Fermilab, Batavia, Il 60510 (United States); Los, S.; Ramberg, E. [Fermilab, Batavia, Il 60510 (United States); Apresyan, A.; Xie, S.; Spiropulu, M. [California Institute of Technology, Pasadena, CA 91126 (United States); Kim, H. [University of Chicago, Chicago, Il 60637 (United States)
2015-09-21
We continue the study of the micro-channel plate photomultiplier (MCP-PMT) as the active element of a shower maximum (SM) detector. We present test beam results obtained with Photek 240 and Photonis XP85011 MCP-PMT devices. For proton beams, we obtained a time resolution of 9.6 ps, representing a significant improvement over past results using the same time-of-flight system. For electron beams, the time resolution obtained for this new type of SM detector is measured to be at the level of 13 ps when the Photek 240 is used as the active element of the SM. Using the Photonis XP85011 MCP-PMT as the active element of the SM, we performed time resolution measurements with pixel readout and achieved a time resolution better than 30 ps. The pixel readout was observed to improve the time resolution compared to the case where the individual channels were summed.
Morten B. S. Svendsen
2016-10-01
Billfishes are considered to be among the fastest swimmers in the oceans. Previous studies have estimated maximum speed of sailfish and black marlin at around 35 m s−1 but theoretical work on cavitation predicts that such extreme speed is unlikely. Here we investigated maximum speed of sailfish, and three other large marine pelagic predatory fish species, by measuring the twitch contraction time of anaerobic swimming muscle. The highest estimated maximum swimming speeds were found in sailfish (8.3±1.4 m s−1), followed by barracuda (6.2±1.0 m s−1), little tunny (5.6±0.2 m s−1) and dorado (4.0±0.9 m s−1); although size-corrected performance was highest in little tunny and lowest in sailfish. Contrary to previously reported estimates, our results suggest that sailfish are incapable of exceeding swimming speeds of 10-15 m s−1, which corresponds to the speed at which cavitation is predicted to occur, with destructive consequences for fin tissues.
Time integration for particle Brownian motion determined through fluctuating hydrodynamics
Delmotte, Blaise
2015-01-01
Fluctuating hydrodynamics has been successfully combined with several computational methods to rapidly compute the correlated random velocities of Brownian particles. In the overdamped limit where both particle and fluid inertia are ignored, one must also account for a Brownian drift term in order to successfully update the particle positions. In this paper, we introduce and study a midpoint time integration scheme we refer to as the drifter-corrector (DC) that resolves the drift term for fluctuating hydrodynamics-based methods even when constraints are imposed on the fluid flow to obtain higher-order corrections to the particle hydrodynamic interactions. We explore this scheme in the context of the fluctuating force-coupling method (FCM) where the constraint is imposed on the rate-of-strain averaged over the volume occupied by the particle. For the DC, the constraint need only be imposed once per time step, leading to a significant reduction in computational cost with respect to other schemes. In fact, for f...
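A one-dimensional caricature of a midpoint predictor-corrector update for overdamped Brownian dynamics with position-dependent mobility; the mobility law, units, and the reuse of a single random increment are illustrative simplifications of the FCM-based drifter-corrector, not its actual implementation:

```python
import math
import random

# Toy midpoint update for overdamped Brownian motion, dx = M(x)F(x)dt + noise,
# where evaluating the mobility at a predicted midpoint reproduces the
# k_B*T * dM/dx drift term on average without computing it explicitly.
def midpoint_step(x, dt, mobility, force, kT, rng):
    xi = rng.gauss(0.0, 1.0)
    m0 = mobility(x)
    noise = math.sqrt(2.0 * kT * m0 * dt) * xi
    x_mid = x + 0.5 * (m0 * force(x) * dt + noise)   # predictor half step
    m_mid = mobility(x_mid)                          # corrector: midpoint mobility
    return x + m_mid * force(x_mid) * dt + math.sqrt(2.0 * kT * m_mid * dt) * xi
```

In the deterministic limit (kT = 0) the scheme reduces to a standard midpoint integrator, which is an easy sanity check.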
Reducing ETL Load Times by a New Data Integration Approach for Real-time Business Intelligence
Darshan M. Tank
2012-04-01
Reducing business latency is essential in today's competitive and demanding environments. This means responding immediately to new information as it arrives, and having the right information in time to make the best decision. Integrating, processing and delivering results in real time is a huge challenge, particularly as data volumes continue to increase dramatically and sources of data are ever more distributed and varied. The decision making process in traditional data warehouse environments is often delayed because data cannot be propagated from the source system to the data warehouse in time. The typical update patterns for traditional data warehouses, on an overnight or even weekly basis, increase this propagation delay. Keeping data current by minimizing the latency from when data is captured until it is available to decision makers is, in this context, a difficult task. A real-time data warehouse aims at decreasing the time it takes to make business decisions and tries to attain zero latency between the cause and effect of a business decision. An ETL process that periodically copies a snapshot of the entire source consumes too much time and resources. Alternate approaches that include timestamp columns, triggers, or complex queries often hurt performance and increase complexity. What is needed is a reliable stream of change data that is structured so that it can easily be applied by consumers to target representations of the data. To keep up with market competition, there is an increased need to minimize ETL load times. In this paper, I introduce a data integration approach that reduces ETL load times.
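The change-stream idea in the last sentences can be sketched as an apply step over a keyed target table; the operation names and record layout here are assumptions for illustration, not a specific ETL product's API:

```python
# Illustrative change-data-capture (CDC) apply step: instead of reloading a
# full source snapshot, the target table is patched from an ordered stream
# of (operation, key, row) changes, so load time scales with the number of
# changes rather than the size of the source.
def apply_changes(target, change_stream):
    """target: dict keyed by primary key; change_stream: (op, key, row) tuples."""
    for op, key, row in change_stream:
        if op in ("insert", "update"):
            target[key] = row
        elif op == "delete":
            target.pop(key, None)
        else:
            raise ValueError("unknown operation: " + op)
    return target
```

Because the stream is ordered, replaying it is idempotent-friendly and avoids the snapshot copies, timestamp columns, and trigger logic criticized above.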
Esposito, Rosario; Mensitieri, Giuseppe; de Nicola, Sergio
2015-12-21
A new algorithm based on the Maximum Entropy Method (MEM) is proposed for recovering both the lifetime distribution and the zero-time shift from time-resolved fluorescence decay intensities. The developed algorithm allows the analysis of complex time decays through an iterative scheme based on entropy maximization and the Brent method to determine the minimum of the reduced chi-squared value as a function of the zero-time shift. The accuracy of this algorithm has been assessed through comparisons with simulated fluorescence decays both of multi-exponential and broad lifetime distributions for different values of the zero-time shift. The method is capable of recovering the zero-time shift with an accuracy greater than 0.2% over a time range of 2000 ps. The center and the width of the lifetime distributions are retrieved with relative discrepancies that are lower than 0.1% and 1% for the multi-exponential and continuous lifetime distributions, respectively. The MEM algorithm is experimentally validated by applying the method to fluorescence measurements of the time decays of the flavin adenine dinucleotide (FAD).
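The outer zero-time-shift search can be caricatured as a one-dimensional minimization of the chi-squared; this sketch replaces Brent's method with a grid search and the MEM lifetime distribution with a fixed mono-exponential model, purely for illustration:

```python
import math

# Toy version of the outer loop: scan candidate zero-time shifts and keep
# the one minimizing the sum-of-squares misfit of a mono-exponential decay
# with known lifetime tau. The real algorithm nests a MEM lifetime fit
# inside a Brent minimization over the shift.
def best_shift(ts, ys, tau, shifts):
    def chi2(s):
        return sum((y - math.exp(-(t - s) / tau)) ** 2 for t, y in zip(ts, ys))
    return min(shifts, key=chi2)
```

On synthetic data generated with a known shift, the grid search recovers that shift, mirroring the sub-0.2% shift accuracy reported for the full algorithm.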
Bylaska, Eric J.; Weare, Jonathan Q.; Weare, John H.
2013-08-01
Parallel in time simulation algorithms are presented and applied to conventional molecular dynamics (MD) and ab initio molecular dynamics (AIMD) models of realistic complexity. Assuming that a forward time integrator, f (e.g., Verlet algorithm), is available to propagate the system from time ti (trajectory positions and velocities xi = (ri, vi)) to time ti + 1 (xi + 1) by xi + 1 = fi(xi), the dynamics problem spanning an interval from t0…tM can be transformed into a root finding problem, F(X) = [xi - f(x(i - 1)]i = 1, M = 0, for the trajectory variables. The root finding problem is solved using a variety of root finding techniques, including quasi-Newton and preconditioned quasi-Newton schemes that are all unconditionally convergent. The algorithms are parallelized by assigning a processor to each time-step entry in the columns of F(X). The relation of this approach to other recently proposed parallel in time methods is discussed, and the effectiveness of various approaches to solving the root finding problem is tested. We demonstrate that more efficient dynamical models based on simplified interactions or coarsening time-steps provide preconditioners for the root finding problem. However, for MD and AIMD simulations, such preconditioners are not required to obtain reasonable convergence and their cost must be considered in the performance of the algorithm. The parallel in time algorithms developed are tested by applying them to MD and AIMD simulations of size and complexity similar to those encountered in present day applications. These include a 1000 Si atom MD simulation using Stillinger-Weber potentials, and a HCl + 4H2O AIMD simulation at the MP2 level. The maximum speedup (serial execution time/parallel execution time) obtained by parallelizing the Stillinger-Weber MD simulation was nearly 3.0. For the AIMD MP2 simulations, the algorithms achieved speedups of up to 14.3. The parallel in time algorithms can be implemented in a distributed computing
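The root-finding formulation above can be illustrated serially; here F(X) = [x_i − f(x_{i−1})] = 0 is solved by plain Jacobi-style fixed-point sweeps over the whole trajectory (the paper uses quasi-Newton solvers and assigns one processor per time step), with a scalar toy propagator f standing in for the Verlet integrator:

```python
# Serial sketch of the parallel-in-time idea: treat the whole trajectory
# x_0..x_M as one unknown vector and iterate until x_i = f(x_{i-1}) holds
# for every step. Each sweep updates all entries from the previous sweep's
# values, so in the real method the entries can be updated in parallel.
def solve_trajectory(f, x0, m, sweeps=50):
    xs = [x0] * (m + 1)                               # crude initial guess
    for _ in range(sweeps):
        xs = [x0] + [f(xs[i]) for i in range(m)]      # one Jacobi-style sweep
    return xs
```

Each sweep fixes one more leading entry of the trajectory, so after at most M sweeps the serial solution is reproduced exactly; the quasi-Newton and preconditioned variants in the paper exist to converge in far fewer sweeps than M.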
CRAUL: Compiler and Run-Time Integration for Adaptation under Load
Sotiris Ioannidis
1999-01-01
Clusters of workstations provide a cost-effective, high performance parallel computing environment. These environments, however, are often shared by multiple users, or may consist of heterogeneous machines. As a result, parallel applications executing in these environments must operate despite unequal computational resources. For maximum performance, applications should automatically adapt execution to maximize use of the available resources. Ideally, this adaptation should be transparent to the application programmer. In this paper, we present CRAUL (Compiler and Run-Time Integration for Adaptation Under Load), a system that dynamically balances computational load in a parallel application. Our target run-time is software-based distributed shared memory (SDSM). SDSM is a good target for parallelizing compilers since it reduces compile-time complexity by providing data caching and other support for dynamic load balancing. CRAUL combines compile-time support to identify data access patterns with a run-time system that uses the access information to intelligently distribute the parallel workload in loop-based programs. The distribution is chosen according to the relative power of the processors and so as to minimize SDSM overhead and maximize locality. We have evaluated the resulting load distribution in the presence of different types of load - computational, computational and memory intensive, and network load. CRAUL performs within 5-23% of ideal in the presence of load, and is able to improve on naive compiler-based work distribution that does not take locality into account even in the absence of load.
3D Vectorial Time Domain Computational Integrated Photonics
Kallman, J S; Bond, T C; Koning, J M; Stowell, M L
2007-02-16
The design of integrated photonic structures poses considerable challenges. 3D-Time-Domain design tools are fundamental in enabling technologies such as all-optical logic, photonic bandgap sensors, THz imaging, and fast radiation diagnostics. Such technologies are essential to LLNL and WFO sponsors for a broad range of applications: encryption for communications and surveillance sensors (NSA, NAI and IDIV/PAT); high density optical interconnects for high-performance computing (ASCI); high-bandwidth instrumentation for NIF diagnostics; micro-sensor development for weapon miniaturization within the Stockpile Stewardship and DNT programs; and applications within HSO for CBNP detection devices. While there exist a number of photonics simulation tools on the market, they primarily model devices of interest to the communications industry. We saw the need to extend our previous software to match the Laboratory's unique emerging needs. These include modeling novel material effects (such as those of radiation induced carrier concentrations on refractive index) and device configurations (RadTracker bulk optics with radiation induced details, Optical Logic edge emitting lasers with lateral optical inputs). In addition we foresaw significant advantages to expanding our own internal simulation codes: parallel supercomputing could be incorporated from the start, and the simulation source code would be accessible for modification and extension. This work addressed Engineering's Simulation Technology Focus Area, specifically photonics. Problems addressed from the Engineering roadmap of the time included modeling the Auston switch (an important THz source/receiver), modeling Vertical Cavity Surface Emitting Lasers (VCSELs, which had been envisioned as part of fast radiation sensors), and multi-scale modeling of optical systems (for a variety of applications). We proposed to develop novel techniques to numerically solve the 3D multi-scale propagation problem for both the
Minimizing the area required for time constants in integrated circuits
Lyons, J. C.
1972-01-01
When a medium- or large-scale integrated circuit is designed, efforts are usually made to avoid the use of resistor-capacitor time constant generators. The capacitor needed for this circuit usually takes up more surface area on the chip than several resistors and transistors. When the use of this network is unavoidable, the designer usually makes an effort to see that the choice of resistor and capacitor combinations is such that a minimum amount of surface area is consumed. The optimum ratio of resistance to capacitance that will result in this minimum area is equal to the ratio of resistance to capacitance which may be obtained from a unit of surface area for the particular process being used. The minimum area required is a function of the square root of the reciprocal of the products of the resistance and capacitance per unit area. This minimum occurs when the area required by the resistor is equal to the area required by the capacitor.
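The optimum described above can be checked numerically. The sketch below assumes hypothetical per-unit-area values r (resistance per unit area) and c (capacitance per unit area), and verifies the abstract's claims: the area-minimizing choice gives R/C = r/c, equal resistor and capacitor areas, and a minimum total area of 2*sqrt(tau/(r*c)).

```python
from math import sqrt

def total_area(R, tau, r, c):
    """Chip area for an RC time constant tau: resistor area R/r plus capacitor area C/c,
    where r and c are the resistance and capacitance obtainable per unit of surface area."""
    C = tau / R
    return R / r + C / c

# hypothetical process parameters: ohms per unit area, farads per unit area, seconds
r, c, tau = 50.0, 2e-3, 1e-3

R_opt = sqrt(tau * r / c)       # minimizer of total_area subject to R*C = tau
C_opt = tau / R_opt

# optimum R/C ratio equals the per-unit-area ratio r/c, and the two areas are equal
assert abs(R_opt / C_opt - r / c) < 1e-9
assert abs(R_opt / r - C_opt / c) < 1e-12

# the minimum area is 2*sqrt(tau/(r*c)), i.e., proportional to sqrt(1/(r*c))
A_min = total_area(R_opt, tau, r, c)
assert abs(A_min - 2 * sqrt(tau / (r * c))) < 1e-12

# nearby resistor choices consume strictly more area
assert total_area(1.1 * R_opt, tau, r, c) > A_min
assert total_area(0.9 * R_opt, tau, r, c) > A_min
```

The algebra behind the asserts: minimizing A(R) = R/r + tau/(R*c) gives R_opt = sqrt(tau*r/c), from which the equal-area and square-root statements in the abstract follow directly.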
Bylaska, Eric J; Weare, Jonathan Q; Weare, John H
2013-08-21
Parallel in time simulation algorithms are presented and applied to conventional molecular dynamics (MD) and ab initio molecular dynamics (AIMD) models of realistic complexity. Assuming that a forward time integrator, f (e.g., the Verlet algorithm), is available to propagate the system from time t_i (trajectory positions and velocities x_i = (r_i, v_i)) to time t_{i+1} (x_{i+1}) by x_{i+1} = f_i(x_i), the dynamics problem spanning an interval from t_0…t_M can be transformed into a root finding problem, F(X) = [x_i − f(x_{i−1})]_{i=1,…,M} = 0, for the trajectory variables. The root finding problem is solved using a variety of root finding techniques, including quasi-Newton and preconditioned quasi-Newton schemes that are all unconditionally convergent. The algorithms are parallelized by assigning a processor to each time-step entry in the columns of F(X). The relation of this approach to other recently proposed parallel in time methods is discussed, and the effectiveness of various approaches to solving the root finding problem is tested. We demonstrate that more efficient dynamical models based on simplified interactions or coarsening time-steps provide preconditioners for the root finding problem. However, for MD and AIMD simulations, such preconditioners are not required to obtain reasonable convergence and their cost must be considered in the performance of the algorithm. The parallel in time algorithms developed are tested by applying them to MD and AIMD simulations of size and complexity similar to those encountered in present day applications. These include a 1000 Si atom MD simulation using Stillinger-Weber potentials, and a HCl + 4H2O AIMD simulation at the MP2 level. The maximum speedup ((serial execution time)/(parallel execution time)) obtained by parallelizing the Stillinger-Weber MD simulation was nearly 3.0. For the AIMD MP2 simulations, the algorithms achieved speedups of up to 14.3. The parallel in time algorithms can be implemented in a
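The root-finding formulation above can be illustrated with a minimal Python sketch. The example below assumes a 1-D harmonic oscillator and uses plain fixed-point sweeps over F(X) = 0 in place of the paper's preconditioned quasi-Newton solvers; all function names and step sizes are illustrative, not taken from the paper.

```python
import numpy as np

def verlet_step(x, dt=0.01, k=1.0):
    """One velocity-Verlet step f_i for a 1-D harmonic oscillator (force = -k*q)."""
    q, v = x
    a = -k * q
    q_new = q + v * dt + 0.5 * a * dt * dt
    v_new = v + 0.5 * (a - k * q_new) * dt
    return np.array([q_new, v_new])

def serial_trajectory(x0, M):
    """Ordinary sequential propagation x_{i+1} = f(x_i)."""
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(M):
        xs.append(verlet_step(xs[-1]))
    return np.array(xs)

def parallel_in_time(x0, M, sweeps):
    """Solve F(X) = [x_i - f(x_{i-1})]_{i=1..M} = 0 by fixed-point sweeps.
    Within one sweep the M propagations are independent, so each could be
    assigned to its own processor; real solvers replace this plain iteration
    with (preconditioned) quasi-Newton updates for faster convergence."""
    X = np.tile(np.asarray(x0, dtype=float), (M + 1, 1))
    for _ in range(sweeps):
        X_new = X.copy()
        for i in range(1, M + 1):          # embarrassingly parallel over i
            X_new[i] = verlet_step(X[i - 1])
        X = X_new
    return X

x0 = [1.0, 0.0]
M = 50
# plain fixed-point iteration propagates exact values one step per sweep,
# so M sweeps recover the serial trajectory exactly
assert np.allclose(parallel_in_time(x0, M, sweeps=M), serial_trajectory(x0, M))
```

The toy solver needs as many sweeps as time steps to converge, which is exactly why the paper studies quasi-Newton schemes and coarse-model preconditioners: they cut the number of sweeps so that assigning a processor per time step yields an actual speedup.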
Gian Paolo Beretta
2008-08-01
A rate equation for a discrete probability distribution is discussed as a route to describe smooth relaxation towards the maximum entropy distribution compatible at all times with one or more linear constraints. The resulting dynamics follows the path of steepest entropy ascent compatible with the constraints. The rate equation is consistent with the Onsager theorem of reciprocity and the fluctuation-dissipation theorem. The mathematical formalism was originally developed to obtain a quantum theoretical unification of mechanics and thermodynamics. It is presented here in a general, non-quantal formulation as a part of an effort to develop tools for the phenomenological treatment of non-equilibrium problems with applications in engineering, biology, sociology, and economics. The rate equation is also extended to include the case of assigned time-dependences of the constraints and the entropy, such as for modeling non-equilibrium energy and entropy exchanges.
Beretta, Gian P.
2008-09-01
A rate equation for a discrete probability distribution is discussed as a route to describe smooth relaxation towards the maximum entropy distribution compatible at all times with one or more linear constraints. The resulting dynamics follows the path of steepest entropy ascent compatible with the constraints. The rate equation is consistent with the Onsager theorem of reciprocity and the fluctuation-dissipation theorem. The mathematical formalism was originally developed to obtain a quantum theoretical unification of mechanics and thermodynamics. It is presented here in a general, non-quantal formulation as a part of an effort to develop tools for the phenomenological treatment of non-equilibrium problems with applications in engineering, biology, sociology, and economics. The rate equation is also extended to include the case of assigned time-dependences of the constraints and the entropy, such as for modeling non-equilibrium energy and entropy exchanges.
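A toy numerical illustration of steepest entropy ascent under a single normalization constraint: the step rule below is our own drastic simplification (an explicit Euler step of the projected entropy gradient), not the author's formalism, but it shows the advertised behavior of smooth relaxation toward the maximum-entropy distribution compatible with the constraint.

```python
import numpy as np

def sea_step(p, dt=0.01):
    """One explicit step of steepest entropy ascent for S = -sum(p * ln p),
    with the gradient projected so that sum(p) stays equal to 1."""
    g = -(np.log(p) + 1.0)        # dS/dp_i
    g -= g.mean()                  # projection onto the constraint sum(p) = 1
    return p + dt * g

p = np.array([0.7, 0.2, 0.1])
for _ in range(5000):
    p = sea_step(p)

assert abs(p.sum() - 1.0) < 1e-9               # constraint preserved at all times
assert np.allclose(p, 1.0 / 3.0, atol=1e-3)    # relaxed to the uniform (max-entropy) distribution
```

With only the normalization constraint the maximum-entropy distribution is uniform; adding a linear energy constraint to the projection would instead drive p toward a canonical (exponential-family) distribution, which is the general setting of the rate equation above.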
Optimal Real-time Dispatch for Integrated Energy Systems
Firestone, Ryan Michael [Univ. of California, Berkeley, CA (United States)]
2007-05-31
This report describes the development and application of a dispatch optimization algorithm for integrated energy systems (IES) comprised of on-site cogeneration of heat and electricity, energy storage devices, and demand response opportunities. This work is intended to aid commercial and industrial sites in making use of modern computing power and optimization algorithms to make informed, near-optimal decisions under significant uncertainty and complex objective functions. The optimization algorithm uses a finite set of randomly generated future scenarios to approximate the true, stochastic future; constraints are included that prevent solutions to this approximate problem from deviating from solutions to the actual problem. The algorithm is then expressed as a mixed integer linear program, to which a powerful commercial solver is applied. A case study of United States Postal Service Processing and Distribution Centers (P&DC) in four cities and under three different electricity tariff structures is conducted to (1) determine the added value of optimal control to a cogeneration system over current, heuristic control strategies; (2) determine the value of limited electric load curtailment opportunities, with and without cogeneration; and (3) determine the trade-off between least-cost and least-carbon operations of a cogeneration system. Key results for the P&DC sites studied include (1) in locations where the average electricity and natural gas prices suggest a marginally profitable cogeneration system, optimal control can add up to 67% to the value of the cogeneration system; optimal control adds less value in locations where cogeneration is more clearly profitable; (2) optimal control under real-time pricing is (a) more complicated than under typical time-of-use tariffs and (b) at times necessary to make cogeneration economic at all; (3) limited electric load curtailment opportunities can be more valuable as a complement to the cogeneration system than alone; and
An integrated device with high performance multi-function generators and time-to-digital convertors
Qin, X.; Shi, Z.; Xie, Y.; Wang, L.; Rong, X.; Jia, W.; Zhang, W.; Du, J.
2017-01-01
A highly integrated, high performance, and re-configurable device, which is designed for the Nitrogen-Vacancy (N-V) center based quantum applications, is reported. The digital compartment of the device is fully implemented in a Field-Programmable-Gate-Array (FPGA). The digital compartment is designed to manage the multi-function digital waveform generation and the time-to-digital convertors. The device provides two arbitrary-waveform-generator channels which operate at a 1 Gsps sampling rate with a maximum bandwidth of 500 MHz. There are twelve pulse channels integrated in the device with a 50 ps time resolution in both duration and delay. The pulse channels operate with the 3.3 V transistor-transistor logic. The FPGA-based time-to-digital convertor provides a 23-ps time measurement precision. A data accumulation module, which can record the input count rate and the distributions of the time measurement, is also available. A digital-to-analog convertor board is implemented as the analog compartment, which converts the digital waveforms to analog signals with 500 MHz lowpass filters. All the input and output channels of the device are equipped with 50 Ω SubMiniature version A termination. The hardware design is modularized thus it can be easily upgraded with compatible components. The device is suitable to be applied in the quantum technologies based on the N-V centers, as well as in other quantum solid state systems, such as quantum dots, phosphorus doped in silicon, and defect spins in silicon carbide.
Recent advances in marching-on-in-time schemes for solving time domain volume integral equations
Sayed, Sadeed Bin
2015-05-16
Transient electromagnetic field interactions on inhomogeneous penetrable scatterers can be analyzed by solving time domain volume integral equations (TDVIEs). TDVIEs are constructed by setting the summation of the incident and scattered field intensities to the total field intensity on the volumetric support of the scatterer. The unknown can be the field intensity or flux/current density. Representing the total field intensity in terms of the unknown using the relevant constitutive relation and the scattered field intensity in terms of the spatiotemporal convolution of the unknown with the Green function yield the final form of the TDVIE. The unknown is expanded in terms of local spatial and temporal basis functions. Inserting this expansion into the TDVIE and testing the resulting equation at discrete times yield a system of equations that is solved by the marching-on-in-time (MOT) scheme. At each time step, a smaller system of equations, termed the MOT system, is solved for the coefficients of the expansion. The right-hand side of this system consists of the tested incident field and the discretized spatio-temporal convolution of the unknown samples computed at the previous time steps with the Green function.
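The MOT recursion described above — at each step, solve a small system whose right-hand side folds in the incident field and the convolution with previously computed samples — can be sketched generically. Random matrices stand in for the discretized Green-function interactions here; all sizes and scalings are hypothetical, chosen only to make the recursion well conditioned.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, K = 4, 20, 5   # unknowns per step, number of time steps, memory depth (hypothetical)

# Z[k] couples the current unknowns to samples k steps in the past
Z = [rng.standard_normal((N, N)) * (0.1 ** k) for k in range(K)]
Z[0] = Z[0] + 5.0 * np.eye(N)          # well-conditioned "MOT matrix" Z_0
b = rng.standard_normal((M, N))        # tested incident field at each step

# Marching on in time: solve Z0 x_j = b_j - sum_{k>=1} Z_k x_{j-k}
x = np.zeros((M, N))
for j in range(M):
    rhs = b[j].copy()
    for k in range(1, min(K, j + 1)):
        rhs -= Z[k] @ x[j - k]         # discretized temporal convolution with the past
    x[j] = np.linalg.solve(Z[0], rhs)

# verify: the recursion solves the full block-lower-triangular system step by step
for j in range(M):
    lhs = sum(Z[k] @ x[j - k] for k in range(min(K, j + 1)))
    assert np.allclose(lhs, b[j])
```

The check at the end makes the key structural point: MOT is forward substitution on a block-lower-triangular (convolution) system, so only one small Z_0 solve is needed per time step.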
Neuromuscular fatigue following isometric contractions with similar torque time integral.
Rozand, V; Cattagni, T; Theurel, J; Martin, A; Lepers, R
2015-01-01
Torque time integral (TTI) is the combination of intensity and duration of a contraction. The aim of this study was to compare neuromuscular alterations following different isometric sub-maximal contractions of the knee extensor muscles but with similar TTI. Sixteen participants performed 3 sustained contractions at different intensities (25%, 50%, and 75% of Maximal Voluntary Contraction (MVC) torque) with different durations (68.5±33.4 s, 35.1±16.8 s and 24.8±12.9 s, respectively) but similar TTI value. MVC torque, maximal voluntary activation level (VAL), M-wave characteristics and potentiated doublet amplitude were assessed before and immediately after the sustained contractions. EMG activity of the vastus lateralis (VL) and rectus femoris (RF) muscles was recorded during the sustained contractions. MVC torque reduction was similar in the 3 conditions after the exercise (-23.4±2.7%). VAL decreased significantly to a similar extent (-3.1±1.3%) after the 3 sustained contractions. Potentiated doublet amplitude was similarly reduced in the 3 conditions (-19.7±1.5%), but VL and RF M-wave amplitudes remained unchanged. EMG activity of VL and RF muscles increased to the same extent during the 3 contractions (VL: 54.5±40.4%; RF: 53.1±48.7%). These results suggest that central and peripheral alterations accounting for muscle fatigue are similar following isometric contractions with similar TTI. TTI should be considered in the exploration of muscle fatigue during sustained isometric contractions.
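The claim that the three protocols were matched on torque-time integral can be checked directly from the mean intensities and durations reported above, computing TTI simply as intensity (fraction of MVC) times mean duration:

```python
# %MVC -> mean contraction duration in seconds, from the abstract
conditions = {25: 68.5, 50: 35.1, 75: 24.8}

# TTI in units of (MVC fraction) * seconds
tti = {pct: pct / 100.0 * dur for pct, dur in conditions.items()}

values = list(tti.values())   # 17.125, 17.55, 18.6
# the three protocols were designed to have similar torque-time integral:
# the spread across conditions is under 10%
assert (max(values) - min(values)) / min(values) < 0.1
```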
Yamamoto Y
2013-10-01
Background: The objective of this study is to provide certain data on clinical outcomes and their predictors of traditional maximum androgen blockade (MAB) in prostate cancer with bone metastasis. Methods: Subjects were patients with prostate adenocarcinoma with bone metastasis initiated to treat with MAB as a primary treatment without any local therapy at our hospital between January 2003 and December 2010. Time to prostate specific antigen (PSA) progression, overall survival (OS) time, and association of clinical factors and outcomes were retrospectively evaluated. Results: A total of 57 patients were evaluable. The median age was 70 years. The median primary PSA was 203 ng/ml. Luteinizing hormone-releasing hormone agonists had been administered in 96.5% of the patients. Bicalutamide had been chosen in 89.4% of the patients as the initial antiandrogen. The median time to PSA progression with MAB was 11.3 months (95% confidence interval [CI], 10.4 to 13.0). The median OS was 47.3 months (95% CI, 30.7 to 81.0). Gleason score 9 or greater, decline of PSA level equal to or higher than 1.0 ng/ml with MAB, and time to PSA nadir equal to or shorter than six months after initiation of MAB were independent risk factors for time to PSA progression (P=0.010, P=0.005, and P=0.001, respectively). Time to PSA nadir longer than six months was the only independent predictor for longer OS (HR, 0.255 [95% CI, 0.109 to 0.597]; P=0.002). Conclusions: Initial time to PSA nadir should be emphasized for clinical outcome analyses in future studies on prostate cancer with bone metastasis.
National Ignition Facility sub-system design requirements integrated timing system SSDR 1.5.3
Wiedwald, J.; Van Aersau, P.; Bliss, E.
1996-08-26
This System Design Requirement document establishes the performance, design, development, and test requirements for the Integrated Timing System, WBS 1.5.3 which is part of the NIF Integrated Computer Control System (ICCS). The Integrated Timing System provides all temporally-critical hardware triggers to components and equipment in other NIF systems.
On the relationship between supplier integration and time-to-market
Perols, J.; Zimmermann, C.; Kortmann, S.
2013-01-01
Recent operations management and innovation management research emphasizes the importance of supplier integration. However, the empirical results as to the relationship between supplier integration and time-to-market are ambivalent. To understand this important relationship, we incorporate two major
On the relationship between supplier integration and time-to-market
Perols, J.; Zimmermann, C.; Kortmann, S.
2013-01-01
Recent operations management and innovation management research emphasizes the importance of supplier integration. However, the empirical results as to the relationship between supplier integration and time-to-market are ambivalent. To understand this important relationship, we incorporate two major
α-times Integrated Regularized Cosine Functions and Second Order Abstract Cauchy Problems
张寄洲; 陶有山
2001-01-01
In this paper, α-times integrated C-regularized cosine functions and mild α-times integrated C-existence families of second order are introduced. Equivalences are proved among the α-times integrated C-regularized cosine function for a linear operator A, C-wellposedness of the (α+1)-times abstract Cauchy problem, and the mild α-times integrated C-existence family of second order for A when the commutable condition is satisfied. In addition, if A = C^{-1}AC, they are also equivalent to A generating the α-times integrated C-regularized cosine function. The characterization of an exponentially bounded mild α-times integrated C-existence family of second order is given in terms of a Laplace transform.
Fully Integrated SAW-Less Discrete-Time Superheterodyne Receiver
Madadi, I.
2015-01-01
There are nowadays strong business and technical demands to integrate radio- frequency (RF) receivers (RX) into a complete system-on-chip (SoC) realized in scaled digital processes technology. As a consequence, the RF circuitry has to function well in face of reduced power supply ( V DD ) while the
A short guide to exponential Krylov subspace time integration for Maxwell's equations
Botchev, Mike A.
2012-01-01
The exponential time integration, i.e., time integration which involves the matrix exponential, is an attractive tool for solving Maxwell's equations in time. However, its application in practice often requires a substantial knowledge of numerical linear algebra algorithms, in particular, of the Krylov subspace methods.
Liu, Meilin
2011-07-01
A discontinuous Galerkin finite element method (DG-FEM) with a highly-accurate time integration scheme is presented. The scheme achieves its high accuracy using numerically constructed predictor-corrector integration coefficients. Numerical results show that this new time integration scheme uses considerably larger time steps than the fourth-order Runge-Kutta method when combined with a DG-FEM using higher-order spatial discretization/basis functions for high accuracy. © 2011 IEEE.
Chiba Shigeru
2007-09-01
Background: Computer graphics and virtual reality techniques are useful to develop automatic and effective rehabilitation systems. However, virtual environments that include unstable visual images presented on a wide-field screen or a head-mounted display tend to induce motion sickness. The motion sickness induced in using a rehabilitation system not only inhibits effective training but also may harm patients' health. There are few studies that have objectively evaluated the effects of repetitive exposures to these stimuli on humans. The purpose of this study is to investigate the adaptation to visually induced motion sickness by physiological data. Methods: An experiment was carried out in which the same video image was presented to human subjects three times. We evaluated changes in the intensity of the motion sickness they suffered from by a subjective score and the physiological index ρmax, which is defined as the maximum cross-correlation coefficient between heart rate and pulse wave transmission time and is considered to reflect the autonomic nervous activity. Results: The results showed adaptation to visually-induced motion sickness by the repetitive presentation of the same image in both the subjective and the objective indices. However, there were some subjects whose intensity of sickness increased. Thus, it was possible to identify the part of the video image related to motion sickness by analyzing changes in ρmax with time. Conclusion: The physiological index ρmax will be a good index for assessing the adaptation process to visually induced motion sickness and may be useful in checking the safety of rehabilitation systems with new image technologies.
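A minimal reading of the ρmax index — the maximum correlation coefficient between heart rate and pulse wave transmission time over a range of relative lags — might be sketched as follows. The signals here are synthetic and the lag search is our own simplification, not the authors' preprocessing pipeline.

```python
import numpy as np

def rho_max(x, y, max_lag=30):
    """Maximum Pearson correlation between two series over lags in [-max_lag, max_lag],
    a simple reading of the rho_max index described in the abstract."""
    best = -1.0
    n = len(x)
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = x[lag:], y[:n - lag]
        else:
            a, b = x[:n + lag], y[-lag:]
        r = np.corrcoef(a, b)[0, 1]
        best = max(best, r)
    return best

# synthetic "heart rate" and a lagged, noisy "pulse wave transmission time"
rng = np.random.default_rng(0)
t = np.arange(500)
hr = np.sin(0.1 * t) + 0.1 * rng.standard_normal(500)
pwtt = np.sin(0.1 * (t - 7)) + 0.1 * rng.standard_normal(500)   # lags hr by 7 samples

# strong coupling at the right lag is detected even though the zero-lag
# correlation is weakened by the 7-sample shift
assert rho_max(hr, pwtt) > 0.9
```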
Svendsen, Morten Bo Søndergaard; Domenici, Paolo; Marras, Stefano
2016-01-01
Billfishes are considered to be among the fastest swimmers in the oceans. Previous studies have estimated the maximum speed of sailfish and black marlin at around 35 m/s, but theoretical work on cavitation predicts that such extreme speed is unlikely. Here we investigated the maximum speed of sailfish...
Photonic integrated circuit as a picosecond pulse timing discriminator.
Lowery, Arthur James; Zhuang, Leimeng
2016-04-18
We report the first experimental demonstration of a compact on-chip optical pulse timing discriminator that is able to provide an output voltage proportional to the relative timing of two 60-ps input pulses on separate paths. The output voltage is intrinsically low-pass-filtered, so the discriminator forms an interface between high-speed optics and low-speed electronics. Potential applications include timing synchronization of multiple pulse trains as a precursor for optical time-division multiplexing, and compact rangefinders with millimeter dimensions.
Kwong-Wong-type integral equation on time scales
Baoguo Jia
2011-09-01
Consider the second-order nonlinear dynamic equation $$[r(t)x^\Delta(\rho(t))]^\Delta + p(t)f(x(t)) = 0,$$ where $\rho(t)$ is the backward jump operator. We obtain a Kwong-Wong-type integral equation, that is: If $x(t)$ is a nonoscillatory solution of the above equation on $[T_0,\infty)$, then the integral equation $$\frac{r^\sigma(t)x^\Delta(t)}{f(x^\sigma(t))} = P^\sigma(t) + \int_{\sigma(t)}^{\infty}\frac{r^\sigma(s)\left[\int_0^1 f'(x_h(s))\,dh\right][x^\Delta(s)]^2}{f(x(s))\,f(x^\sigma(s))}\,\Delta s$$ is satisfied for $t \geq T_0$, where $P^\sigma(t) = \int_{\sigma(t)}^{\infty} p(s)\,\Delta s$ and $x_h(s) = x(s) + h\mu(s)x^\Delta(s)$. As an application, we show that the superlinear dynamic equation $$[r(t)x^\Delta(\rho(t))]^\Delta + p(t)f(x(t)) = 0$$ is oscillatory, under certain conditions.
Conservative fourth-order time integration of non-linear dynamic systems
Krenk, Steen
2015-01-01
An energy conserving time integration algorithm with fourth-order accuracy is developed for dynamic systems with nonlinear stiffness. The discrete formulation is derived by integrating the differential state-space equations of motion over the integration time increment, and then evaluating the resulting time integrals of the inertia and stiffness terms via integration by parts. This process introduces the time derivatives of the state-space variables, and these are then substituted from the original state-space differential equations. The resulting discrete form of the state-space equations permits integration of oscillatory systems with only a few integration points per period. Three numerical examples demonstrate the high accuracy of the algorithm. (C) 2015 Elsevier B.V. All rights reserved.
Time, dynamics and chaos. Integrating Poincare's "non-integrable systems"
Prigogine, I.
1990-01-01
This report discusses the nature of time. The author attempts to resolve the conflict between the concept of time reversibility in classical and quantum mechanics with the macroscopic world's irreversibility of time. (LSP)
Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.
2003-01-01
Demonstrated, through simulation, that stationary autoregressive moving average (ARMA) models may be fitted readily when T>N, using normal theory raw maximum likelihood structural equation modeling. Also provides some illustrations based on real data. (SLD)
A varying time-step explicit numerical integration algorithm for solving motion equation
ZHOU Zheng-hua; WANG Yu-huan; LIU Quan; YIN Xiao-tao; YANG Cheng
2005-01-01
If a traditional explicit numerical integration algorithm is used to solve the motion equation in finite element simulation of wave motion, the time-step is the smallest one permitted by the stability criterion anywhere in the computational region. However, such a small time-step is usually unnecessary for a large portion of the computational region. In this paper, a varying time-step explicit numerical integration algorithm is introduced; its basic idea is to use, in each part of the computational region, the time-step permitted by the local stability criterion. Finally, the feasibility of the algorithm and its effect on calculation precision are verified by numerical tests.
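The idea of using a locally stable time-step in each region can be illustrated on two uncoupled oscillators standing in for a "slow" and a "fast" part of the mesh: each is advanced with a step respecting its own stability limit (dt < 2/ω for the central-difference scheme used here). This is a toy stand-in, not the paper's finite-element wave algorithm.

```python
import numpy as np

def central_difference(omega, dt, T):
    """Explicit central-difference integration of q'' = -omega^2 * q,
    stable only for dt < 2/omega; initial conditions q(0) = 1, q'(0) = 0."""
    n = int(round(T / dt))
    q_prev = 1.0
    q = 1.0 - 0.5 * (omega * dt) ** 2          # Taylor start-up step
    for _ in range(n - 1):
        q_prev, q = q, 2.0 * q - q_prev - (omega * dt) ** 2 * q
    return q

T = 2.0 * np.pi
# the "slow" region takes a large step, the "fast" region a proportionally smaller one
q_slow = central_difference(omega=1.0, dt=0.05, T=T)       # dt well below 2/1
q_fast = central_difference(omega=20.0, dt=0.0025, T=T)    # dt well below 2/20
assert abs(q_slow - 1.0) < 1e-2    # exact solution cos(2*pi) = 1
assert abs(q_fast - 1.0) < 1e-2    # exact solution cos(40*pi) = 1

# forcing the slow region's step onto the fast region violates dt < 2/omega = 0.1
q_bad = central_difference(omega=20.0, dt=0.11, T=T)
assert abs(q_bad) > 1e3            # numerically unstable: the solution blows up
```

The point of the varying time-step algorithm is exactly this asymmetry: only the stiffest part of the mesh needs the tiny step, so subcycling it while the rest uses larger steps saves most of the work.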
Path integrals for actions that are not quadratic in their time derivatives
Cahill, Kevin
2015-01-01
The standard way to construct a path integral is to use a Legendre transformation to find the hamiltonian, to repeatedly insert complete sets of states into the time-evolution operator, and then to integrate over the momenta. This procedure is simple when the action is quadratic in its time derivatives, but in most other cases Legendre's transformation is intractable, and the hamiltonian is unknown. This paper shows how to make path integrals without using the hamiltonian.
First integrals for time-dependent higher-order Riccati equations by nonholonomic transformation
Guha, Partha; Ghose Choudhury, A.; Khanra, Barun
2011-08-01
We exploit the notion of nonholonomic transformations to deduce a time-dependent first integral for a (generalized) second-order nonautonomous Riccati differential equation. It is further shown that the method can also be used to compute the first integrals of a particular class of third-order time-dependent ordinary differential equations and is therefore quite robust.
Cook, Gerald; Lin, Ching-Fang
1980-01-01
The local linearization algorithm is presented as a possible numerical integration scheme to be used in real-time simulation. A second-order nonlinear example problem is solved using different methods. The local linearization approach is shown to require less computing time and give significant improvement in accuracy over the classical second-order integration methods.
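For a scalar ODE the local linearization step has a closed form: freeze the Jacobian J = f'(x) at the current state and integrate the linearized equation exactly over the step. The example below is our own toy (x' = -x^2, which has a known exact solution), not the paper's second-order test problem; it only illustrates the accuracy advantage over forward Euler at the same step size.

```python
import math

def f(x):  return -x * x        # right-hand side of x' = -x^2
def fp(x): return -2.0 * x      # its Jacobian

def local_linearization_step(x, h):
    """One local linearization step: x' ≈ f(x_n) + J*(x - x_n), solved exactly.
    expm1 keeps (e^{Jh} - 1)/J accurate for small J*h."""
    J = fp(x)
    return x + (math.expm1(J * h) / J) * f(x)

def euler_step(x, h):
    return x + h * f(x)

x0, h, T = 1.0, 0.1, 2.0
n = int(round(T / h))
x_ll = x_eu = x0
for _ in range(n):
    x_ll = local_linearization_step(x_ll, h)
    x_eu = euler_step(x_eu, h)

exact = x0 / (1.0 + x0 * T)     # exact solution of x' = -x^2, x(0) = x0
assert abs(x_ll - exact) < abs(x_eu - exact)   # local linearization is more accurate
```

Because the exponential is evaluated once per step from a frozen Jacobian, the scheme stays explicit and cheap, which is what makes it attractive for real-time simulation.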
Cammi, C; Panzeri, F; Gulinatti, A; Rech, I; Ghioni, M
2012-03-01
Emerged as a solid state alternative to photo multiplier tubes (PMTs), single-photon avalanche diodes (SPADs) are nowadays widely used in the field of single-photon timing applications. Custom technology SPADs assure remarkable performance, in particular a 10 counts/s dark count rate (DCR) at low temperature, a high photon detection efficiency (PDE) with a 50% peak at 550 nm and a 30 ps (full width at half maximum, FWHM) temporal resolution, even with large area devices, have been obtained. Over the past few years, the birth of novel techniques of analysis has led to the parallelization of the measurement systems and to a consequent increasing demand for the development of monolithic arrays of detectors. Unfortunately, the implementation of a multidimensional system is a challenging task from the electrical point of view; in particular, the avalanche current pick-up circuit, used to obtain the previously reported performance, has to be modified in order to enable high parallel temporal resolution, while minimizing the electrical crosstalk probability between channels. In the past, the problem has been solved by integrating the front-end electronics next to the photodetector, in order to reduce the parasitic capacitances and consequently the filtering action on the current signal of the SPAD, leading to an improvement of the timing jitter at higher threshold. This solution has been implemented by using standard complementary metal-oxide-semiconductor (CMOS) technologies, which, however, do not allow a complete control on the SPAD structure; for this reason the intrinsic performance of CMOS SPADs, such as DCR, PDE, and afterpulsing probability, are worse than those attainable with custom detectors. In this paper, we propose a pixel architecture, which enables the development of custom SPAD arrays in which every channel maintains the performance of the best single photodetector. The system relies on the integration of the timing signal pick-up circuit next to the
A unified approach for proportional-integral-derivative controller design for time delay processes
Shamsuzzoha, Mohammad [King Fahd University of Petroleum and Minerals, Dhahran (Saudi Arabia)
2015-04-15
An analytical design method for PI/PID controller tuning is proposed for several types of processes with time delay. A single tuning formula gives enhanced disturbance rejection performance. The design method is based on the IMC approach, which has a single tuning parameter to adjust the performance and robustness of the controller. The simple tuning formula gives consistently better performance than several well-known methods at the same degree of robustness for stable and integrating processes. For unstable processes, the performance has been compared with other recently published methods, again showing significant improvement for the proposed method. Furthermore, the robustness of the controller is investigated by inserting a perturbation uncertainty in all parameters simultaneously, again showing results comparable with other methods. An analysis has been performed of the uncertainty margin in the different process parameters for robust controller design, giving guidelines for the M{sub s} setting of the PI controller design based on the process parameter uncertainty. For the selection of the closed-loop time constant (τ{sub c}), a guideline is provided over a broad range of θ/τ ratios on the basis of the peak of the maximum sensitivity (M{sub s}). A comparison of the IAE has been conducted over a wide range of θ/τ ratios for the first-order time delay process. The proposed method shows the minimum IAE compared to SIMC, while Lee et al. shows poor disturbance rejection for lag-dominant processes. In the simulation study, the controllers were tuned to the same degree of robustness, as measured by M{sub s}, to obtain a fair comparison.
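As an illustration of this style of IMC-based tuning, the sketch below applies Skogestad's well-known SIMC PI rule to a first-order-plus-time-delay (FOPTD) process. It is a stand-in for the paper's own formula, which is not reproduced here, and the numerical values are hypothetical.

```python
def imc_pi_tuning(K, tau, theta, tau_c):
    """IMC/SIMC-style PI tuning for an FOPTD process
    G(s) = K * exp(-theta*s) / (tau*s + 1).
    Returns (Kc, tau_I).  This is Skogestad's SIMC rule, used here as an
    illustrative stand-in for the paper's single tuning formula."""
    Kc = tau / (K * (tau_c + theta))          # proportional gain
    tau_I = min(tau, 4.0 * (tau_c + theta))   # integral time, capped for disturbance rejection
    return Kc, tau_I

# Example: lag-dominant process (theta/tau = 0.1) with tau_c = theta
Kc, tau_I = imc_pi_tuning(K=1.0, tau=10.0, theta=1.0, tau_c=1.0)
print(Kc, tau_I)  # Kc = 5.0, tau_I = 8.0
```

Note how the `min(tau, ...)` cap keeps the integral time from becoming sluggish for lag-dominant processes, which is exactly the regime where the abstract reports the largest IAE differences.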
An integrated portable hand-held analyser for real-time isothermal nucleic acid amplification.
Smith, Matthew C; Steimle, George; Ivanov, Stan; Holly, Mark; Fries, David P
2007-08-29
A compact hand-held heated fluorometric instrument for performing real-time isothermal nucleic acid amplification and detection is described. The optoelectronic instrument combines a Printed Circuit Board/Micro Electro Mechanical Systems (PCB/MEMS) reaction/detection chamber containing an integrated resistive heater, an attached miniature LED light source and photo-detector, and a disposable glass waveguide capillary to form a mini-fluorometer. The fluorometer is fabricated and assembled in planar geometry, rolled into a tubular format and packaged with custom control electronics to form the hand-held reactor. Positive or negative results for each reaction are displayed to the user via an LED interface. Reaction data are stored in FLASH memory for retrieval via an in-built USB connection. Operating on one disposable 3 V lithium battery, more than twelve 60 min reactions can be performed. Maximum dimensions of the system are 150 mm (h) x 48 mm (d) x 40 mm (w), and the total instrument weight (with battery) is 140 g. The system produces results comparable to laboratory instrumentation when performing a real-time nucleic acid sequence-based amplification (NASBA) reaction, and also displayed precision, accuracy and resolution comparable to laboratory-based real-time nucleic acid amplification instrumentation. A good linear response (R2 = 0.948) to fluorescein gradients ranging from 0.5 to 10 microM was also obtained from the instrument, indicating that it may be utilized for other fluorometric assays. This instrument enables an inexpensive, compact approach to in-field genetic screening, providing results comparable to laboratory equipment with rapid user feedback on the status of the reaction.
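The linearity claim (R2 = 0.948 over 0.5-10 microM fluorescein) is an ordinary least-squares calibration fit. A minimal sketch of that computation, using made-up detector counts rather than the instrument's actual data:

```python
import numpy as np

def linear_r2(x, y):
    """Least-squares line fit and coefficient of determination R^2."""
    slope, intercept = np.polyfit(x, y, 1)
    y_hat = slope * x + intercept
    ss_res = np.sum((y - y_hat) ** 2)     # residual sum of squares
    ss_tot = np.sum((y - np.mean(y)) ** 2)  # total sum of squares
    return slope, intercept, 1.0 - ss_res / ss_tot

# Hypothetical calibration: fluorescein 0.5-10 uM vs detector counts
conc = np.array([0.5, 1.0, 2.0, 4.0, 6.0, 8.0, 10.0])       # uM
counts = np.array([52.0, 98.0, 210.0, 405.0, 640.0, 795.0, 1010.0])  # a.u.
slope, intercept, r2 = linear_r2(conc, counts)
print(round(r2, 3))
```

For a real calibration one would also report confidence intervals on the slope, but R2 alone matches what the abstract quotes.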
Representing real time semantics for distributed application integration
Poon, P.M.S.; Dillon, T.S.; Chang, E.; Feng, L.
2006-01-01
Traditional real-time system design and development are driven by technological requirements. With the ever-growing complexity of requirements and the advances in software design, the focus has gradually shifted to the perspective of business and industrial needs. This paper discusses…
Integral-Value Models for Outcomes over Continuous Time
Harvey, Charles M.; Østerdal, Lars Peter
Models of preferences between outcomes over continuous time are important for individual, corporate, and social decision making, e.g., medical treatment, infrastructure development, and environmental regulation. This paper presents a foundation for such models. It shows that conditions on preferences…
On the initial condition problem of the time domain PMCHWT surface integral equation
Uysal, Ismail E.
2017-05-13
Non-physical, linearly increasing and constant current components are induced in the marching-on-in-time solution of time domain surface integral equations when initial conditions on the time derivatives of the (unknown) equivalent currents are not enforced properly. This problem can be remedied by solving the time integral of the surface integral equation for auxiliary currents that are defined to be the time derivatives of the equivalent currents. The equivalent currents are then obtained by numerically differentiating the auxiliary ones. In this work, this approach is applied to the marching-on-in-time solution of the time domain Poggio-Miller-Chang-Harrington-Wu-Tsai surface integral equation enforced on dispersive/plasmonic scatterers. The accuracy of the proposed method is demonstrated by a numerical example.
Electric vehicle integration in a real-time market
Pedersen, Anders Bro
…project, the EV is introduced as an energy buffer used to store excess energy produced at off-peak hours, while at the same time potentially benefiting the consumer by offering cheaper charging. This role as a buffer, predominantly used for delayed charging, also known as "smart charging", can also… the distributed energy resources registered with it, in order to make them appear as a single producer in the eyes of the market. Although the concept of a VPP is used within the EcoGrid EU project, the idea of more individual control is introduced through a new proposed real-time electricity market, where… the consumers will have direct access to the current price. As opposed to the hourly spot-price market of today, the real-time market sees price updates as often as every couple of minutes. To allow the individual resources to react to these changes independently of each other, so-called "smart controllers…
Krasilenko, Vladimir G.; Nikolsky, Alexander I.; Lazarev, Alexander A.; Lazareva, Maria V.
2008-03-01
The paper demonstrates the relevance of neurophysiologically motivated neuron arrays with flexibly programmable functions and operations, with the possibility to select the required accuracy and type of nonlinear transformation and learning. We consider the neuron design and simulation results of multichannel spatio-temporal algebraic accumulation and integration of optical signals. Advantages for nonlinear transformation and summation-integration are shown. The offered circuits are simple and can have intellectual properties such as learning and adaptation. The integrator-neuron is based on CMOS current mirrors and comparators. The performance: power consumption 100-500 μW, signal period 0.1-1 ms, input optical signal power 0.2-20 μW, time delays less than 1 μs, number of optical signals 2-10, integration time 10-100 signal periods, integration accuracy (error) about 1%. Various modifications of the neuron-integrators with improved performance and for different applications are considered in the paper.
Integrating timescales with time-transfer functions: a practical approach for an INTIMATE database
Bronk Ramsey, Christopher; Albert, Paul; Blockley, Simon; Hardiman, Mark; Lane, Christine; Macleod, Alison; Matthews, Ian P.; Muscheler, Raimund; Palmer, Adrian; Staff, Richard A.
2014-12-01
The purpose of the INTIMATE project is to integrate palaeo-climate information from terrestrial, ice and marine records so that the timing of environmental response to climate forcing can be compared in both space and time. One of the key difficulties in doing this is the range of different methods of dating that can be used across different disciplines. For this reason, one of the main outputs of INTIMATE has been to use an event-stratigraphic approach which enables researchers to co-register synchronous events (such as the deposition of tephra from major volcanic eruptions) in different archives (Blockley et al., 2012). However, this only partly solves the problem, because it gives information only at particular short intervals where such information is present. Between these points the ability to compare different records is necessarily less precise chronologically. What is needed therefore is a way to quantify the uncertainties in the correlations between different records, even if they are dated by different methods, and make maximum use of the information available that links different records. This paper outlines the design of a database that is intended to provide integration of timescales and associated environmental proxy information. The database allows for the fact that all timescales have their own limitations, which should be quantified in terms of the uncertainties quoted. It also makes use of the fact that each timescale has strengths in terms of describing the data directly associated with it. For this reason the approach taken allows users to look at data on any timescale that can in some way be related to the data of interest, rather than specifying a specific timescale or timescales which should always be used. The information going into the database is primarily: proxy information (principally from sediments and ice cores) against depth, age depth models against reference chronologies (typically IntCal or ice core), and time-transfer functions
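The time-transfer idea can be sketched in a few lines: given tie points (e.g. shared tephra layers) that place the same events on two timescales, an age is mapped across by interpolation and the transfer uncertainty is combined with the age's own error. This is only a schematic of the concept, not the INTIMATE database's actual machinery; all tie-point values are invented.

```python
import math
import numpy as np

def transfer_age(age_src, err_src, tie_src, tie_dst, tie_err):
    """Map an age (with 1-sigma error) from a source timescale onto a
    reference timescale via a piecewise-linear time-transfer function
    defined at tie points, adding the transfer uncertainty in quadrature."""
    age_dst = float(np.interp(age_src, tie_src, tie_dst))  # transferred age
    err_tr = float(np.interp(age_src, tie_src, tie_err))   # transfer error at that age
    return age_dst, math.hypot(err_src, err_tr)

# Hypothetical tie points linking two timescales, ages in yr BP
tie_src = [0.0, 10000.0, 20000.0]   # event ages on the source timescale
tie_dst = [0.0, 10100.0, 20400.0]   # same events on the reference timescale
tie_err = [10.0, 50.0, 120.0]       # 1-sigma transfer uncertainty at each tie
age, err = transfer_age(15000.0, 40.0, tie_src, tie_dst, tie_err)
print(age, err)   # 15250.0 yr BP on the reference scale, ~94 yr combined error
```

Between tie points the combined uncertainty grows with the interpolated transfer error, which mirrors the abstract's point that precision degrades away from co-registered events.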
Integrated active sensor system for real time vibration monitoring.
Liang, Qijie; Yan, Xiaoqin; Liao, Xinqin; Cao, Shiyao; Lu, Shengnan; Zheng, Xin; Zhang, Yue
2015-11-05
We report a self-powered, lightweight and cost-effective active sensor system for vibration monitoring with multiplexed operation, based on contact electrification between the sensor and detected objects. The as-fabricated sensor matrix is capable of monitoring and mapping the vibration state of a large number of units. The monitored quantities include the on-off state, vibration frequency and vibration amplitude of each unit. The active sensor system delivers a detection range of 0-60 Hz, high accuracy (relative error below 0.42%) and long-term stability (10,000 cycles). In the time dimension, the sensor can provide a memory of the vibration process by recording the outputs of the sensor system over an extended period of time. Besides, the developed sensor system can operate in both contact and non-contact modes. Its high performance is not sensitive to the shape or conductivity of the detected object. With these features, the active sensor system has great potential in automatic control, remote operation, surveillance and security systems.
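Extracting a vibration frequency from a recorded sensor trace is a standard spectral-peak measurement. A minimal sketch on synthetic data within the reported 0-60 Hz range (the signal here is simulated, not from the device):

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Estimate the dominant vibration frequency (Hz) of a trace by locating
    the peak of its one-sided amplitude spectrum (mean removed first)."""
    spec = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spec)]

# Synthetic 25 Hz vibration sampled at 1 kHz for 2 s, with additive noise
fs = 1000.0
t = np.arange(0.0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * 25.0 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
print(dominant_frequency(x, fs))
```

A 2 s record gives 0.5 Hz frequency resolution, comfortably finer than the 0.42% relative error the abstract reports at these frequencies.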
Variational Time Integration Approach for Smoothed Particle Hydrodynamics Simulation of Fluids
da Silva, Leandro Tavares
2015-01-01
Variational time integrators are derived in the context of discrete mechanical systems. In this area, the governing equations for the motion of the mechanical system are built following two steps: (a) postulating a discrete action; (b) computing the stationary value of the discrete action. The former is formulated by considering Lagrangian (or Hamiltonian) systems, with the discrete action being constructed through numerical approximations of the action integral. The latter derives the discrete Euler-Lagrange equations whose solutions give the variational time integrator. In this paper, we build variational time integrators in the context of smoothed particle hydrodynamics (SPH). We start with a variational formulation of SPH for fluids. Then, we apply the generalized midpoint rule, which depends on a parameter $\alpha$, in order to generate the discrete action. Step (b) then yields a variational time integration scheme that reduces to a known explicit one if $\alpha\in\{0,1\}$ but is implicit otherwise…
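For intuition, the generalized midpoint rule at $\alpha = 1/2$ (the implicit midpoint scheme) can be tried on the simplest mechanical system, a harmonic oscillator, where the implicit update has a closed-form solution. This is a toy illustration of the variational-integrator behaviour (bounded energy error over long runs), not the paper's SPH discretization.

```python
import numpy as np

def midpoint_step(q, p, dt, omega):
    """One implicit-midpoint (alpha = 1/2) step for the harmonic oscillator
    H = p^2/2 + omega^2 q^2/2.  For this linear system the implicit update
    reduces to a 2x2 linear solve; the scheme is symplectic, so the discrete
    energy does not drift over long runs."""
    # Equations:  q1 = q + dt*(p + p1)/2 ,  p1 = p - dt*omega^2*(q + q1)/2
    a = 0.5 * dt
    A = np.array([[1.0, -a], [a * omega ** 2, 1.0]])
    b = np.array([q + a * p, p - a * omega ** 2 * q])
    q1, p1 = np.linalg.solve(A, b)
    return q1, p1

q, p, dt, omega = 1.0, 0.0, 0.05, 2.0
E0 = 0.5 * p ** 2 + 0.5 * omega ** 2 * q ** 2
for _ in range(10000):
    q, p = midpoint_step(q, p, dt, omega)
E = 0.5 * p ** 2 + 0.5 * omega ** 2 * q ** 2
print(abs(E - E0) / E0)   # relative energy drift stays at round-off level
```

For quadratic Hamiltonians the implicit midpoint rule conserves the energy exactly (up to round-off), which is the discrete analogue of the conservation properties variational integrators are built to inherit.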
(no author listed)
2010-01-01
The asymptotic Lyapunov stability of a quasi-integrable Hamiltonian system with time-delayed feedback control is studied using Lyapunov functions and the stochastic averaging method. First, a quasi-integrable Hamiltonian system with time-delayed feedback control subjected to Gaussian white noise excitation is approximated by a quasi-integrable Hamiltonian system without time delay. Then, the stochastic averaging method for quasi-integrable Hamiltonian systems is used to reduce the dimension of the original system, after which the Lyapunov function of the averaged Itô equation is taken as the optimal linear combination of the corresponding independent first integrals in involution. Finally, the stability of the system is determined using the largest eigenvalue of the linearized system. Two examples are used to illustrate the proposed procedure, and the effects of the time delay on the Lyapunov stability are discussed as well.
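The final stability test reduces to an eigenvalue computation on the linearized system. A minimal sketch of that last step on toy matrices (not the paper's averaged systems):

```python
import numpy as np

def asymptotically_stable(A):
    """Largest-eigenvalue stability test for a linearized system x' = A x:
    asymptotically stable iff the largest real part among the eigenvalues
    of A is negative.  A deterministic toy stand-in for the paper's
    largest-Lyapunov-exponent criterion."""
    return float(np.max(np.linalg.eigvals(A).real)) < 0.0

# Lightly damped oscillator (stable) vs. negatively damped one (unstable)
print(asymptotically_stable(np.array([[0.0, 1.0], [-1.0, -0.2]])))  # True
print(asymptotically_stable(np.array([[0.0, 1.0], [-1.0, 0.2]])))   # False
```

In the stochastic setting the role of `A`'s spectral abscissa is played by the largest Lyapunov exponent of the averaged equations; the sign test is the same.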
A novel class of highly efficient and accurate time-integrators in nonlinear computational mechanics
Wang, Xuechuan; Atluri, Satya N.
2017-05-01
A new class of time-integrators is presented for strongly nonlinear dynamical systems. These algorithms are far superior to the currently common time integrators in computational efficiency and accuracy. These three algorithms are based on a local variational iteration method applied over a finite interval of time. By using Chebyshev polynomials as trial functions and Dirac-Delta functions as the test functions over the finite time interval, the three algorithms are developed into three different discrete time-integrators through the collocation method. These time integrators are labeled as Chebyshev local iterative collocation methods. Through examples of the forced Duffing oscillator, the Lorenz system, and the multiple coupled Duffing equations (which arise as semi-discrete equations for beams, plates and shells undergoing large deformations), it is shown that the new algorithms are far superior to the 4th order Runge-Kutta and ODE45 of MATLAB, in predicting the chaotic responses of strongly nonlinear dynamical systems.
Global Format for Conservative Time Integration in Nonlinear Dynamics
Krenk, Steen
2014-01-01
…equivalent static load steps, easily implemented in existing computer codes. The paper considers two aspects: representation of nonlinear internal forces in a form that implies energy conservation, and the option of an algorithmic damping with the purpose of extracting energy from undesirable high-frequency parts of the response. The energy conservation property is developed in two steps. First, a fourth-order representation of the internal energy increment is obtained in terms of the mean value of the associated internal forces and an additional term containing the increment of the tangent stiffness matrix over the time step. This explicit formula is exact for structures with internal energy in the form of a polynomial in the displacement components of degree four. A fully general form follows by introducing an additional term based on a secant representation of the internal energy. The option…
On the first-passage time of integrated Brownian motion
Christian H. Hesse
2005-01-01
Let (B_t; t ≥ 0) be a Brownian motion process starting from B_0 = ν and define X_ν(t) = ∫_0^t B_s ds. For a ≥ 0, set τ_{a,ν} := inf{t : X_ν(t) = a} (with inf ∅ = ∞). We study the conditional moments of τ_{a,ν} given τ_{a,ν} < ∞. Using martingale methods, stopping-time arguments, as well as the method of dominant balance, we obtain, in particular, an asymptotic expansion for the conditional mean E(τ_{a,ν} | τ_{a,ν} < ∞) as ν → ∞. Through a series of simulations, it is shown that a truncation of this expansion after the first few terms provides an accurate approximation to the unknown true conditional mean even for small ν.
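A crude Monte Carlo check of such conditional-mean approximations can be set up by discretizing the pair (B_t, X_ν(t)). The sketch below uses a simple Euler scheme and censors paths at a finite horizon; all parameters are illustrative.

```python
import numpy as np

def first_passage_times(a, nu, dt=1e-3, t_max=20.0, n_paths=500, seed=1):
    """Monte Carlo sketch of tau_{a,nu} = inf{t : X_nu(t) = a}, where
    X_nu(t) = int_0^t B_s ds and B_0 = nu.  Paths not crossing level `a`
    by t_max are censored and returned as inf (a numerical cutoff)."""
    rng = np.random.default_rng(seed)
    B = np.full(n_paths, float(nu))      # Brownian motion values
    X = np.zeros(n_paths)                # running integrals
    taus = np.full(n_paths, np.inf)
    alive = np.ones(n_paths, dtype=bool)
    for k in range(int(t_max / dt)):
        X[alive] += B[alive] * dt                         # Euler step for the integral
        B[alive] += np.sqrt(dt) * rng.normal(size=int(alive.sum()))
        crossed = alive & (X >= a)
        taus[crossed] = (k + 1) * dt
        alive &= ~crossed
        if not alive.any():
            break
    return taus

taus = first_passage_times(a=1.0, nu=2.0)
finite = taus[np.isfinite(taus)]
print(finite.mean())   # sample conditional mean given passage before t_max
```

The censoring horizon and step size both bias the estimate, so a serious comparison with the asymptotic expansion would need refinement studies in `dt` and `t_max`.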
van Zuijlen, A.H.
2006-01-01
The simulation of fluid-structure interaction can be a very time-consuming task due to the large number of time steps that need to be taken in order to obtain an accurate solution. To reduce computing times, a higher-order time integration algorithm is applied, based on a mixed combination of implicit…
Numerical stability analysis of an acceleration scheme for step size constrained time integrators
Vandekerckhove, Christophe; Roose, Dirk; Lust, Kurt
2007-01-01
Time integration schemes with a fixed time step, much smaller than the dominant slow time scales of the dynamics of the system, arise in the context of stiff ordinary differential equations or in multiscale computations, where a microscopic time-stepper is used to compute macroscopic behaviour. We d…
Research on the maximum injection capacity of wind farms
王湘明; 高杨; 刘丽钧
2012-01-01
In recent years, with the increasing development and application of wind power technology, the proportion of wind power in the power system has grown, and consequently the connected wind power has a considerable impact on the power system. This paper studies variable-speed constant-frequency doubly-fed wind turbines, separately connecting wind farms built from three different turbine types to the IEEE 14-bus system. A method combining steady-state and transient analysis is used to determine the maximum capacity the power system can accommodate while maintaining stability. Simulation results show that the number of turbines connected to the system is related to their rated power by a fixed proportion, which determines the maximum capacity of the wind farm. The method determines the largest wind farm capacity that can be connected to the power system while guaranteeing the stability of both the wind farm itself and the system.
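The capacity search described, increasing the injected wind power until stability is lost, can be sketched as a bisection over a black-box stability check. The threshold, bounds and tolerance below are made up.

```python
def max_stable_capacity(stable, lo=0.0, hi=500.0, tol=1.0):
    """Bisection for the largest wind-farm injection (MW) that keeps the
    grid stable, assuming a black-box check stable(P) that holds below some
    threshold and fails above it.  A schematic stand-in for the paper's
    combined steady-state/transient screening."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if stable(mid):
            lo = mid          # still stable: search higher
        else:
            hi = mid          # unstable: search lower
    return lo

# Toy stability criterion with a known 180 MW limit
cap = max_stable_capacity(lambda P: P <= 180.0)
print(cap)   # converges to just below 180 MW
```

In practice `stable(P)` would wrap a full load-flow plus transient simulation at injection level `P`, which is why each evaluation is expensive and a bracketing search is attractive.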
Time-domain Helmholtz-Kirchhoff integral for surface scattering in a refractive medium.
Choo, Youngmin; Song, H C; Seong, Woojae
2017-03-01
The time-domain Helmholtz-Kirchhoff (H-K) integral for surface scattering is derived for a refractive medium, which can handle shadowing effects. The starting point is the H-K integral in the frequency domain. In the high-frequency limit, the Green's function can be calculated by ray theory, while the normal derivative of the incident pressure from a point source is formulated using the ray geometry and ray-based Green's function. For a corrugated pressure-release surface, a stationary phase approximation can be applied to the H-K integral, reducing the surface integral to a line integral. Finally, a computationally-efficient, time-domain H-K integral is derived using an inverse Fourier transform. A broadband signal scattered from a sinusoidal surface in an upwardly refracting medium is evaluated with and without geometric shadow corrections, and compared to the result from a conventional ray model.
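The final inverse-Fourier-transform step can be illustrated on the simplest frequency-domain response, a band-limited pure delay. The sample rate, delay and taper below are arbitrary, and the kernel is not the paper's H-K integrand.

```python
import numpy as np

# Frequency-domain response of a pure propagation delay t0, band-limited by a
# smooth taper; the inverse FFT returns the time-domain arrival, mirroring the
# last step of the time-domain H-K derivation (schematic only).
fs = 8000.0                                # sample rate, Hz
n = 1024
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
t0 = 0.02                                  # 20 ms propagation delay
H = np.exp(-2j * np.pi * freqs * t0)       # linear-phase delay spectrum
H *= np.exp(-(freqs / 1500.0) ** 2)        # Gaussian band limit (keeps pulse smooth)
h = np.fft.irfft(H, n)                     # time-domain arrival
t_peak = np.argmax(np.abs(h)) / fs
print(t_peak)   # pulse peaks at the imposed delay, ~0.02 s
```

The real taper makes the time-domain pulse symmetric about the delay, so the peak time recovers `t0` exactly on this grid; a dispersive medium would add frequency-dependent phase and smear the arrival.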
Integrated Monitoring of Mola mola Behaviour in Space and Time.
Sousa, Lara L; López-Castejón, Francisco; Gilabert, Javier; Relvas, Paulo; Couto, Ana; Queiroz, Nuno; Caldas, Renato; Dias, Paulo Sousa; Dias, Hugo; Faria, Margarida; Ferreira, Filipe; Ferreira, António Sérgio; Fortuna, João; Gomes, Ricardo Joel; Loureiro, Bruno; Martins, Ricardo; Madureira, Luis; Neiva, Jorge; Oliveira, Marina; Pereira, João; Pinto, José; Py, Frederic; Queirós, Hugo; Silva, Daniel; Sujit, P B; Zolich, Artur; Johansen, Tor Arne; de Sousa, João Borges; Rajan, Kanna
2016-01-01
Over the last decade, ocean sunfish movements have been monitored worldwide using various satellite tracking methods. This study reports the near-real time monitoring of fine-scale (< 10 m) behaviour of sunfish. The study was conducted in southern Portugal in May 2014 and involved satellite tags and underwater and surface robotic vehicles to measure both the movements and the contextual environment of the fish. A total of four individuals were tracked using custom-made GPS satellite tags providing geolocation estimates of fine-scale resolution. These accurate positions further informed sunfish areas of restricted search (ARS), which were directly correlated to steep thermal frontal zones. Simultaneously, and for two different occasions, an Autonomous Underwater Vehicle (AUV) video-recorded the path of the tracked fish and detected buoyant particles in the water column. Importantly, the densities of these particles were also directly correlated to steep thermal gradients. Thus, both sunfish foraging behaviour (ARS) and possibly prey densities, were found to be influenced by analogous environmental conditions. In addition, the dynamic structure of the water transited by the tracked individuals was described by a Lagrangian modelling approach. The model informed the distribution of zooplankton in the region, both horizontally and in the water column, and the resultant simulated densities positively correlated with the sunfish ARS behaviour estimator (rs = 0.184, p<0.001). The model also revealed that tracked fish opportunistically displace with respect to subsurface current flow. Thus, we show how physical forcing and current structure provide a rationale for a predator's fine-scale behaviour observed over two weeks in May 2014.
Finite-time control of DC-DC buck converters via integral terminal sliding modes
Chiu, Chian-Song; Shen, Chih-Teng
2012-05-01
This article presents novel terminal sliding modes for finite-time output tracking control of DC-DC buck converters. Instead of the traditional singular terminal sliding mode, two integral terminal sliding modes are introduced for robust output voltage tracking of uncertain buck converters. Different from traditional sliding mode control (SMC), the proposed controller assures a finite convergence time for both the tracking error and the integral tracking error. Furthermore, the singularity problem of traditional terminal SMC is avoided. When worse modelling errors are considered, an adaptive integral terminal SMC is derived to guarantee finite-time convergence under more relaxed stability conditions. In addition, several experiments show better start-up performance and robustness.
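The integral terminal sliding idea, a surface s = e + β∫|e|^γ sgn(e) dτ with 0 < γ < 1 that avoids the singularity of classical terminal SMC, can be sketched on a first-order uncertain plant instead of the buck converter. All gains, the plant and the disturbance below are invented for illustration.

```python
import numpy as np

# Finite-time tracking of a first-order uncertain plant
#     x' = -x + u + d(t),   d unknown but |d| <= 0.3,
# via the integral terminal surface s = e + beta * int |e|^g sgn(e) dt.
# Schematic sketch, not the buck-converter controller of the article.
beta, g, k = 2.0, 0.6, 0.5          # surface gains; k > sup|d| for reaching
dt, x, x_ref, sigma = 1e-3, 0.0, 1.0, 0.0
for n in range(int(5.0 / dt)):
    t = n * dt
    e = x - x_ref
    sig_e = np.abs(e) ** g * np.sign(e)     # sig(e)^g: nonsingular as e -> 0
    s = e + sigma                           # integral terminal sliding surface
    u = x - beta * sig_e - k * np.sign(s)   # nominal cancellation + reaching law
    d = 0.3 * np.sin(2.0 * t)               # unmeasured disturbance
    x += dt * (-x + u + d)                  # Euler step of the plant
    sigma += dt * beta * sig_e              # integral term of the surface
print(abs(x - x_ref))   # tracking error driven near zero in finite time
```

On the surface (s = 0) the error obeys e' = -β|e|^γ sgn(e), which reaches zero in finite time |e0|^(1-γ)/(β(1-γ)); the discontinuous `sign(s)` term dominates the bounded disturbance during reaching.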
Nielsen, Lars; Boldreel, Lars Ole; Hansen, Thomas Mejer;
2011-01-01
The origin of the topography of southwest Scandinavia is subject to discussion. Analysis of borehole seismic velocity has formed the basis for interpretation of several hundred metres of Neogene uplift in parts of Denmark. Here, refraction seismic data constrain a 7.5 km long P-wave velocity model… Group. The sonic velocities are consistent with the overall seismic layering, although they show additional fine-scale layering. Integration of gamma and sonic logs with porosity data shows that seismic velocity is sensitive to clay content. In intervals near boundaries of the refraction model, moderate…
Prashant Jindal
2016-01-01
In the global critical economic scenario, inflation plays a vital role in deciding the optimal pricing of goods in any business entity. This article presents two single-vendor single-buyer integrated supply chain inventory models with inflation and time value of money. Shortages are allowed during the lead time and are partially backlogged. The lead time is controllable and can be reduced at a crashing cost. In the first model, the lead-time demand is assumed to follow a normal distribution; in the second, it is treated as distribution-free. For both cases, the objective is to minimize the integrated system cost by simultaneously optimizing the order quantity, safety factor, lead time and number of lots. The discounted cash flow and classical optimization techniques are used to derive the optimal solution for both cases. Numerical examples, including a sensitivity analysis of the system parameters, are provided to validate the results of the supply chain models.
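The time-value-of-money ingredient is just discounting of the cycle cash flows. A minimal sketch of the present-value computation (the costs, horizon and rate are hypothetical, and this is not the article's full model):

```python
import math

def present_value_cost(cycle_cost, cycle_time, horizon, r):
    """Discounted cash flow of a fixed cost incurred at the start of each
    replenishment cycle, with continuous discount rate r (time value of
    money net of inflation).  Illustrative helper only."""
    n_cycles = int(horizon / cycle_time)
    return sum(cycle_cost * math.exp(-r * k * cycle_time) for k in range(n_cycles))

# $500 ordering cost every 0.25 years over a 5-year horizon at 8 %/yr
pv = present_value_cost(500.0, 0.25, 5.0, 0.08)
print(round(pv, 2))   # noticeably less than the undiscounted 20 * 500 = 10000
```

In the article's setting, holding, shortage and crashing costs would each get such a discounted term, and the total present value is what the order quantity, safety factor and lead time jointly minimize.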
Seeking the Optimal Time for Integrated Curriculum in Jinan University School of Medicine
Pan, Sanqiang; Cheng, Xin; Zhou, Yanghai; Li, Ke; Yang, Xuesong
2017-01-01
The curricular integration of the basic sciences and clinical medicine has been conducted for over 40 years and has been shown to increase medical students' study interest and clinical reasoning. However, there are still no solid data indicating which starting point, the first year or year 3, is optimal for the integrated curriculum. In this study, the…
Conservative fourth-order time integration of non-linear dynamic systems
Krenk, Steen
2015-01-01
An energy conserving time integration algorithm with fourth-order accuracy is developed for dynamic systems with nonlinear stiffness. The discrete formulation is derived by integrating the differential state-space equations of motion over the integration time increment, and then evaluating the re… … is a direct fourth-order accurate representation of the original differential equations. This fourth-order form is energy conserving for systems with force potential in the form of a quartic polynomial in the displacement components. Energy conservation for a force potential of general form is obtained… integration of oscillatory systems with only a few integration points per period. Three numerical examples demonstrate the high accuracy of the algorithm. (C) 2015 Elsevier B.V. All rights reserved.
Action-angle coordinates for time-dependent completely integrable Hamiltonian systems
Giachetta, Giovanni; Mangiarotti, Luigi [Department of Mathematics and Physics, University of Camerino, Camerino (Italy)]. E-mails: giovanni.giachetta@unicam.it; luigi.mangiarotti@unicam.it; Sardanashvily, Gennadi [Department of Theoretical Physics, Physics Faculty, Moscow State University, Moscow (Russian Federation)]. E-mail: sard@grav.phys.msu.su
2002-07-26
A time-dependent completely integrable Hamiltonian system is proved to admit the action-angle coordinates around any instantly compact regular invariant manifold. Written relative to these coordinates, its Hamiltonian and first integrals are functions only of action coordinates. (author). Letter-to-the-editor.
Song Rongfang; Bi Guangguo
2001-01-01
Quadratic programming models for integrated space-time interference suppression in CDMA systems are proposed in this paper. The models integrate the advantages of smart antennas and RAKE receivers, mitigate multiple-access interference (MAI) and interchip interference (ICI), and combine multipath components. The zero-forcing conditions are derived. Neural network implementation of the models is also studied.
Integrability of Nonlinear Equations of Motion on Two-Dimensional World Sheet Space-Time
YAN Jun
2005-01-01
The integrability character of nonlinear equations of motion of two-dimensional gravity with dynamical torsion and bosonic string coupling is studied in this paper. The space-like and time-like first integrals of equations of motion are also found.
TIME-DOMAIN VOLUME INTEGRAL EQUATION FOR TRANSIENT SCATTERING FROM INHOMOGENEOUS OBJECTS-2D TM CASE
Wang Jianguo; Fan Ruyu
2001-01-01
This letter proposes a time-domain volume integral equation based method for analyzing the transient scattering from a 2D inhomogeneous cylinder by invoking the volume equivalence principle for the transverse magnetic case. The integral equation is solved using a marching-on-in-time (MOT) scheme. Numerical results obtained using this method agree very well with those obtained using the FDTD method.
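The MOT recursion itself is generic: after discretization, the retarded interactions form a lower-triangular block system that is solved one time step at a time, reusing all previously computed currents. A minimal sketch with made-up interaction matrices (not the letter's TM kernels):

```python
import numpy as np

def mot_solve(Z, V):
    """Marching-on-in-time (MOT) solution of a discretized time-domain
    integral equation  sum_k Z[k] @ I[j-k] = V[j]:  at each step only the
    current unknown vector is solved for, using the stored history.
    Z: list of (n, n) interaction matrices; V: (steps, n) excitations."""
    n_steps = len(V)
    I = np.zeros_like(V)
    Z0_inv = np.linalg.inv(Z[0])         # self-interaction solved once
    for j in range(n_steps):
        hist = sum(Z[k] @ I[j - k] for k in range(1, min(j + 1, len(Z))))
        I[j] = Z0_inv @ (V[j] - hist)    # march: only I[j] is unknown
    return I

# Tiny check: with Z = [I, 0.5*I] the recursion gives I[j] = V[j] - 0.5*I[j-1]
Z = [np.eye(2), 0.5 * np.eye(2)]
V = np.zeros((4, 2))
V[0] = [1.0, 0.0]
currents = mot_solve(Z, V)
print(currents)
```

The letter's point about late-time behaviour shows up here too: any error injected at one step is recycled through the history sum at all later steps, which is why MOT stability hinges on the discretization of the kernels.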
TIME-DOMAIN VOLUME INTEGRAL EQUATION FOR TRANSIENT SCATTERING FROM INHOMOGENEOUS OBJECTS-2D TE CASE
Wang Jianguo; Fan Ruyu
2001-01-01
This letter proposes a time-domain volume integral equation based method for analyzing the transient scattering from a 2D inhomogeneous cylinder by invoking the volume equivalence principle for the transverse electric case. The integral equation is solved using a marching-on-in-time (MOT) scheme. Numerical results obtained using this method agree very well with those obtained using the FDTD method.
Maximum phonation time in pre-school children
Carla Aparecida Cielo
2008-08-01
Full Text Available Past studies on the maximum phonation time (MPT) in children have shown different results, indicating that this measure may reflect the neuromuscular and aerodynamic control of phonation and might be used as an indicator for other qualitative and quantitative forms of evaluation. AIM: to verify the MPT measures of 23 pre-school children aged four to six years and eight months. METHOD: The sampling process comprised a questionnaire sent to parents, followed by auditory screening and a perceptive-auditory voice assessment based on the RASAT scale. Data collection consisted of the MPT measures. STUDY DESIGN: Prospective cross-sectional. RESULTS: The mean MPT values for /a/, /s/ and /z/ were 7.42 s, 6.35 s and 7.19 s; MPT /a/ at six years was significantly longer than at four years; all MPT values increased with age; and the s/z ratio was close to one at all ages. CONCLUSIONS: The MPT values were higher than those reported in Brazilian studies and lower than those reported in international studies. The age groups analyzed are in a period of neural and muscular maturation, with immaturity most evident at four years of age.
Time-integrated CP violation measurements in the B mesons system at the LHCb experiment
Cardinale, R
2016-01-01
Time-integrated CP violation measurements in the B meson system provide information for testing the CKM picture of CP violation in the Standard Model. A review of recent results from the LHCb experiment is presented.
Optimal distribution of integration time for intensity measurements in Stokes polarimetry.
Li, Xiaobo; Liu, Tiegen; Huang, Bingjing; Song, Zhanjie; Hu, Haofeng
2015-10-19
We consider the typical Stokes polarimetry system, which performs four intensity measurements to estimate a Stokes vector. We show that if the total integration time of intensity measurements is fixed, the variance of the Stokes vector estimator depends on the distribution of the integration time over the four intensity measurements. Therefore, by optimizing the distribution of integration time, the variance of the Stokes vector estimator can be decreased. In this paper, we obtain the closed-form solution of the optimal distribution of integration time by employing the Lagrange multiplier method. According to the theoretical analysis and a real-world experiment, it is shown that the total variance of the Stokes vector estimator can be significantly decreased, by about 40% in the case discussed in this paper. The method proposed in this paper can effectively decrease the measurement variance and thus statistically improves the measurement accuracy of the polarimetric system.
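The closed-form allocation the abstract describes can be illustrated with a toy model (an assumption for illustration, not the paper's exact noise model): if each intensity channel contributes c_i / t_i to the estimator variance, minimizing the sum subject to a fixed total time T by Lagrange multipliers gives t_i proportional to sqrt(c_i).

```python
import math

def optimal_times(c, T):
    """Allocate total integration time T across measurements whose
    contribution to the estimator variance is c[i] / t[i].
    Lagrange multipliers give t[i] proportional to sqrt(c[i])."""
    s = sum(math.sqrt(ci) for ci in c)
    return [T * math.sqrt(ci) / s for ci in c]

def total_variance(c, t):
    # total estimator variance under the assumed 1/t noise model
    return sum(ci / ti for ci, ti in zip(c, t))

c = [1.0, 0.5, 0.5, 0.25]        # hypothetical per-channel noise coefficients
T = 4.0
t_uniform = [T / len(c)] * len(c)
t_opt = optimal_times(c, T)
# the optimal split never does worse than the uniform split,
# and achieves the closed-form minimum (sum of sqrt(c_i))^2 / T
```

Under this model the minimum variance is (Σ√c_i)²/T, which is the kind of closed-form expression the Lagrange approach yields.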
Li, Xiaobo; Hu, Haofeng; Liu, Tiegen; Huang, Bingjing; Song, Zhanjie
2016-04-04
We consider the degree of linear polarization (DOLP) polarimetry system, which performs two intensity measurements at orthogonal polarization states to estimate DOLP. We show that if the total integration time of intensity measurements is fixed, the variance of the DOLP estimator depends on the distribution of integration time for two intensity measurements. Therefore, by optimizing the distribution of integration time, the variance of the DOLP estimator can be decreased. In this paper, we obtain the closed-form solution of the optimal distribution of integration time in an approximate way by employing Delta method and Lagrange multiplier method. According to the theoretical analyses and real-world experiments, it is shown that the variance of the DOLP estimator can be decreased for any value of DOLP. The method proposed in this paper can effectively decrease the measurement variance and thus statistically improve the measurement accuracy of the polarimetry system.
Integrated High-Speed Digital Optical True-Time-Delay Modules for Synthetic Aperture Radars Project
National Aeronautics and Space Administration — Crystal Research, Inc. proposes an integrated high-speed digital optical true-time-delay module for advanced synthetic aperture radars. The unique feature of this...
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from...
A new heterogeneous asynchronous explicit-implicit time integrator for nonsmooth dynamics
Fekak, Fatima-Ezzahra; Brun, Michael; Gravouil, Anthony; Depale, Bruno
2017-03-01
In computational structural dynamics, particularly in the presence of nonsmooth behavior, the choice of the time-step and the time integrator has a critical impact on the feasibility of the simulation. Furthermore, in some cases, as in the case of a bridge crane under seismic loading, multiple time-scales coexist in the same problem. In that case, the use of multi-time-scale methods is suitable. Here, we propose a new explicit-implicit heterogeneous asynchronous time integrator (HATI) for nonsmooth transient dynamics with frictionless unilateral contacts and impacts. Furthermore, we present a new explicit time integrator for contact/impact problems where the contact constraints are enforced using a Lagrange multiplier method. In other words, the aim of this paper is to use an explicit time integrator with a fine time scale in the contact area to reproduce high frequency phenomena, while an implicit time integrator is adopted in the other parts in order to reproduce much lower frequency phenomena and to optimize the CPU time. In a first step, the explicit time integrator is tested on a one-dimensional example and compared to Moreau-Jean's event-capturing schemes. The explicit algorithm is found to be very accurate; it generally has a higher order of convergence than Moreau-Jean's schemes and also exhibits excellent energy behavior. Then, the two-time-scale explicit-implicit HATI is applied to the numerical example of a bridge crane under seismic loading. The results are validated against a fine-scale fully explicit computation. The energy dissipated at the implicit-explicit interface is well controlled and the computational time is lower than that of a fully explicit simulation.
Mixed time integration schemes for transient conduction forced-convection analysis
Liu, W. K.; Lin, J. I.
1983-01-01
A partition procedure for forced-convection conduction transient problems is presented. Mixed time partitions are defined wherein coupled conduction force-matrix equations are discretized using an implicit integration method, followed by derivation of a mixed time integration technique. Explicit-implicit and explicit-explicit partitions are performed for a stability analysis for transient conditions, e.g., those found in an actively air-cooled engine and airframe structure.
Bragança, L.; Koukkari, Heli; Blok, Rijk; Gervásio, H.; Veljkovic, Milan, ed. lit.; Plewako, Zbigniew, ed. lit.; Landolfo, Raffaele; Ungureanu, Viorel
2010-01-01
The main objective of the COST Action C25 'Sustainability of Constructions: Integrated Approach to Life-time Structural Engineering' is to promote science-based developments in sustainable construction in Europe through the collection and collaborative analysis of scientific results concerning life-time structural engineering, and especially the integration of environmental assessment methods and tools of structural engineering. Sustainability of Construction, European Science Foundation : Cost...
Ulku, Huseyin Arda
2013-08-01
An explicit marching on-in-time (MOT) scheme for solving the time-domain magnetic field integral equation (TD-MFIE) is presented. The proposed MOT-TD-MFIE solver uses Rao-Wilton-Glisson basis functions for spatial discretization and a PE(CE)m-type linear multistep method for time marching. Unlike previous explicit MOT-TD-MFIE solvers, the time step size can be chosen as large as that of the implicit MOT-TD-MFIE solvers without adversely affecting accuracy or stability. An algebraic stability analysis demonstrates the stability of the proposed explicit solver; its accuracy and efficiency are established via numerical examples. © 1963-2012 IEEE.
2015-09-01
…integration reproduced the solutions of Godunov-type Riemann solvers, comparing favorably to more sophisticated and more computationally intensive schemes. (Fig. 4: spatial fields of the grid-point simulation at the same physical time, t = 40τ.)
Maximum-entropy description of animal movement.
Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M
2015-03-01
We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes belonging to this class of maximum-entropy distributions when the constraints are purely kinematic.
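As a concrete instance of one process in this family, here is a minimal exact-discretization sketch of one-dimensional Ornstein-Uhlenbeck motion (parameter values are illustrative and not drawn from the paper):

```python
import math
import random

def simulate_ou(x0, theta, sigma, dt, n, seed=7):
    """Exact-discretization sketch of the Ornstein-Uhlenbeck process
    dx = -theta*x dt + sigma dW: the transition over a step dt is
    Gaussian with known mean decay and variance, so no Euler error."""
    rng = random.Random(seed)
    a = math.exp(-theta * dt)                          # one-step autocorrelation
    s = sigma * math.sqrt((1 - a * a) / (2 * theta))   # exact per-step noise scale
    x, path = x0, [x0]
    for _ in range(n):
        x = a * x + s * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

# relaxes from x0 toward the stationary state, variance sigma^2 / (2*theta)
path = simulate_ou(x0=5.0, theta=1.0, sigma=0.5, dt=0.1, n=2000)
```

The stationary variance σ²/(2θ) is the fluctuation-dissipation balance the abstract alludes to: the noise amplitude and the relaxation rate jointly fix the equilibrium spread.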
Time-Domain Volume Integral Equation for TM-Case Scattering from Nonlinear Penetrable Objects
WANG Jianguo; Eric Michielssen
2001-01-01
This paper presents the time-domain volume integral equation (TDVIE) method to analyze scattering from nonlinear penetrable objects, which are illuminated by a transverse magnetic (TM) incident pulse. The time-domain volume integral equation is formulated in terms of the two-dimensional (2D) Green's function, and solved by using the marching-on-in-time (MOT) technique. Some numerical results are given to validate this method, and comparisons are made with the results obtained by using the finite-difference time-domain (FDTD) method.
Time series analysis of the developed financial markets' integration using visibility graphs
Zhuang, Enyu; Small, Michael; Feng, Gang
2014-09-01
A time series representing the developed financial markets' segmentation from 1973 to 2012 is studied. The time series reveals an obvious market integration trend. To further uncover the features of this time series, we divide it into seven windows and generate seven visibility graphs. The measuring capabilities of the visibility graphs provide means to quantitatively analyze the original time series. It is found that the important historical incidents that influenced market integration coincide with variations in the measured graphical node degree. Through the measure of neighborhood span, the frequencies of the historical incidents are disclosed. Moreover, it is also found that large "cycles" and significant noise in the time series are linked to large and small communities in the generated visibility graphs. For large cycles, how historical incidents significantly affected market integration is distinguished by density and compactness of the corresponding communities.
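The mapping from a time series to a visibility graph used in this line of work can be sketched as follows (the natural-visibility criterion of Lacasa et al.; the toy series is illustrative):

```python
def visibility_edges(y):
    """Natural visibility graph of a time series y: samples a and b are
    connected if every intermediate sample lies strictly below the
    straight line joining (a, y[a]) and (b, y[b])."""
    n = len(y)
    edges = set()
    for a in range(n):
        for b in range(a + 1, n):
            if all(y[c] < y[b] + (y[a] - y[b]) * (b - c) / (b - a)
                   for c in range(a + 1, b)):
                edges.add((a, b))
    return edges

series = [3.0, 1.0, 2.0, 0.5, 4.0]
g = visibility_edges(series)
# consecutive samples always see each other; peaks acquire high degree,
# which is why node degree tracks salient events in the original series
```

Graph measures such as node degree and community structure are then computed on `g`, which is how the abstract's "measuring capabilities" quantify the original series.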
ZHONG DengHua; CUI Bo; LIU DongHai; TONG DaWei
2009-01-01
With the enlargement of core rockfill dam construction scale and the improvement of construction mechanization level, the traditional manual construction quality control method is now difficult to meet the quality and safety demands of modern dam construction, so automatic and real-time dam construction quality monitoring with high-techs is urgently needed. The paper makes theoretical research on construction quality real-time monitoring and system integration of core rockfill dam, proposes implementation method and integrated solution of construction quality real-time monitoring of core rockfill dam construction process, realizes refining, all-weather, entire-process and real-time control and analysis on key links of dam construction, and introduces the application of the construction quality real-time monitoring and system integration technology to a practical core rockfill dam project.
A robust stabilization methodology for time domain integral equations in electromagnetics
Pray, Andrew J.
Time domain integral equations (TDIEs) are an attractive framework from which to analyze electromagnetic scattering problems. Casting problems in the time domain enables study of systems with nonlinearities, characterization of transient behavior both at the early and late time, and broadband analysis within a single simulation. Integral equation frameworks have the advantages of restricting the computational domain to the scatterer surface (boundary integral equations) or volume (volume integral equations), implicitly satisfying the radiation boundary condition, and being free of numerical dispersion error. Despite these advantages, TDIE solvers are not widely used by computational practitioners; principally because TDIE solutions are susceptible to late-time instability. While a plethora of stabilization schemes have been developed, particularly since the early 1980s, most of these schemes either do not guarantee stability, are difficult to implement, or are impractical for certain problems. The most promising methods seem to be the space-time Galerkin schemes. These are very challenging to implement as they require the accurate evaluation of 4-dimensional spatial integrals. The most successful recent approach to implementing these schemes has been to approximate a subset of these integrals, and evaluate the remaining integrals analytically. This approach describes the quasi-exact integration methods [Shanker et al. IEEE TAP 2009, Shi et al. IEEE TAP 2011]. The method of [Shanker et al. IEEE TAP 2009] approximates 2 of the 4 dimensions using numerical quadrature. The remaining integrals are evaluated analytically by determining shadow boundaries on the domain of integration. In [Shi et al. IEEE TAP 2011], only 1 dimension is approximated, but the procedure also relies on analytical integration between shadow boundaries. These two characteristics, the need to find shadow boundaries and to develop analytical integration rules, prevent these methods from being extended
Time-integrated position error accounts for sensorimotor behavior in time-constrained tasks.
Julian J Tramper
Full Text Available Several studies have shown that human motor behavior can be successfully described using optimal control theory, which describes behavior by optimizing the trade-off between the subject's effort and performance. This approach predicts that subjects reach the goal exactly at the final time. However, another strategy might be that subjects try to reach the target position well before the final time to avoid the risk of missing the target. To test this, we have investigated whether minimizing the control effort and maximizing the performance is sufficient to describe human motor behavior in time-constrained motor tasks. In addition to the standard model, we postulate a new model which includes an additional cost criterion which penalizes deviations between the position of the effector and the target throughout the trial, forcing arrival on target before the final time. To investigate which model gives the best fit to the data and to see whether that model is generic, we tested both models in two different tasks where subjects used a joystick to steer a ball on a screen to hit a target (first task) or one of two targets (second task) before a final time. Noise of different amplitudes was superimposed on the ball position to investigate the ability of the models to predict motor behavior for different levels of uncertainty. The results show that a cost function representing only a trade-off between effort and accuracy at the end time is insufficient to describe the observed behavior. The new model correctly predicts that subjects steer the ball to the target position well before the final time is reached, which is in agreement with the observed behavior. This result is consistent for all noise amplitudes and for both tasks.
An Approach to Integrate a Space-Time GIS Data Model with High Performance Computers
Wang, Dali [ORNL]; Zhao, Ziliang [University of Tennessee, Knoxville (UTK)]; Shaw, Shih-Lung [ORNL]
2011-01-01
In this paper, we describe an approach to integrating a Space-Time GIS data model on a high performance computing platform. The Space-Time GIS data model has been developed in a desktop computing environment. We use the Space-Time GIS data model to generate a GIS module, which organizes a series of remote sensing data. We are in the process of porting the GIS module into an HPC environment, in which the GIS modules handle large datasets directly via a parallel file system. Although it is an ongoing project, the authors hope this effort can inspire further discussions on the integration of GIS on high performance computing platforms.
Multi-agent-based Approach for Determination of Time-quota in Integrated Environment
无
2002-01-01
Time-quota is one of the important factors in a production system. It is affected by various factors. Time-quota is studied in a CAPP and production schedule integration environment in this paper. An agent-based time-quota method is put forward and the structure model is established by means of intelligent agents in the integrated environment. The method can map the influencing time-quota factors into a part agent related to process state and a machine method agent, resorting to the function of agent rule-based reasoning...
A Scalable, Timing-Safe, Network-on-Chip Architecture with an Integrated Clock Distribution Method
Bjerregaard, Tobias; Stensgaard, Mikkel Bystrup; Sparsø, Jens
2007-01-01
Growing system sizes together with increasing performance variability are making globally synchronous operation hard to realize. Mesochronous clocking constitutes a possible solution to the problems faced. The most fundamental of the problems faced when communicating between mesochronously clocked regions concerns the possibility of data corruption caused by metastability. This paper presents an integrated communication and mesochronous clocking strategy, which avoids timing related errors while maintaining a globally synchronous system perspective. The architecture is scalable as timing integrity is based purely on local observations. It is demonstrated with a 90 nm CMOS standard cell network-on-chip design which implements completely timing-safe, global communication in a modular system.
Taatgen, Niels A.; van Rijn, Hedderik; Anderson, John
2007-01-01
A theory of prospective time perception is introduced and incorporated as a module in an integrated theory of cognition, thereby extending existing theories and allowing predictions about attention and learning. First, a time perception module is established by fitting existing datasets (interval estimation…
Generalized Local Time of the Indefinite Wiener Integral:White Noise Approach
Jingjun GUO
2012-01-01
In this paper, the generalized local time of the indefinite Wiener integral Xt is discussed through the white noise approach, which means to regard the local time as a Hida distribution. Moreover, a similar result is also obtained in the case of two independent Brownian motions by using a similar approach.
Creating a Campus Culture of Integrity: Comparing the Perspectives of Full- and Part-Time Faculty
Hudd, Suzanne S.; Apgar, Caroline; Bronson, Eric Franklyn; Lee, Renee Gravois
2009-01-01
Part-time faculty play an important role in creating a culture of integrity on campus, yet they face a number of structural constraints. This paper seeks to improve our understanding of the potentially unique experiences of part-time faculty with academic misconduct and suggests ways to more effectively involve them in campus-wide academic…
Stability and Convergence of Solutions to Volterra Integral Equations on Time Scales
Eleonora Messina
2015-01-01
Full Text Available We consider Volterra integral equations on time scales and present our study about the long time behavior of their solutions. We provide sufficient conditions for the stability and investigate the convergence properties when the kernel of the equations vanishes at infinity.
Numerical tests of efficiency of the retrospective time integration scheme in the self-memory model
GU Xiangqian; YOU Xingtian; ZHU He; CAO Hongxing
2004-01-01
A set of numerical tests was carried out to compare the retrospective time integral scheme in a self-memory model, whose dynamic kernel is the barotropic quasi-geostrophic model, with the ordinary centered difference scheme in the barotropic quasi-geostrophic model. The Rossby-Haurwitz wave function was taken as the initial field for both schemes. The results show that, in comparison with the ordinary centered difference scheme, the retrospective time integral scheme reduces the forecast error by two orders of magnitude, and the forecast error increases very little as the time-step is lengthened. Therefore, the retrospective time integral scheme has the advantages of improving the forecast accuracy, extending the predictable duration and reducing the amount of computation.
Development of a precise long-time digital integrator for magnetic measurements in a tokamak
Kurihara, Kenichi; Kawamata, Youichi [Japan Atomic Energy Research Inst., Naka, Ibaraki (Japan). Naka Fusion Research Establishment
1997-10-01
Long-time D-T burning operation in a tokamak requires that a magnetic sensor work in an environment with an intense 14-MeV neutron field, and that the measurement system output precise magnetic field values. The method of time-integrating the voltage produced in a simple pick-up coil has the preferable features of good time response, easy maintenance, and resistance to neutron irradiation. However, an inevitably produced signal drift makes it difficult to apply the method to long-time integral operation. To solve this problem, we have developed a new digital integrator (a voltage-to-frequency converter and an up-down counter), testing the trial boards in the JT-60 magnetic measurements. This report describes all of the problems and their measures through the development steps in detail, and shows how to apply this method to the ITER operation. (author)
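The drift-free principle behind such an integrator can be sketched numerically. This is a simplified model of the voltage-to-frequency converter and up-down counter with an invented pulse-rate constant `k`; it is not the JT-60 implementation:

```python
def vf_integrate(samples, dt, k=1e5):
    """Sketch of a V-F converter + up/down counter integrator: each
    voltage sample v produces about k*|v|*dt pulses (a fractional
    residue is carried so no pulse is lost); the counter adds pulses
    for v >= 0 and subtracts for v < 0. count/k approximates ∫v dt."""
    count = 0
    residue_up = residue_down = 0.0
    for v in samples:
        if v >= 0:
            residue_up += k * v * dt
            pulses = int(residue_up)
            residue_up -= pulses
            count += pulses
        else:
            residue_down += k * (-v) * dt
            pulses = int(residue_down)
            residue_down -= pulses
            count -= pulses
    return count / k

# a constant 1 V held for 1 s should integrate to ~1 V*s
flux = vf_integrate([1.0] * 1000, dt=1e-3, k=1e5)
```

Because the result is an integer pulse count rather than an analog charge, the digital integrator avoids the analog drift that plagues long-time operation, which is the point the abstract makes.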
Exploring the History of Time in an Integrated System: the Ramifications for Water
Green, M. B.; Adams, L. E.; Allen, T. L.; Arrigo, J. S.; Bain, D. J.; Bray, E. N.; Duncan, J. M.; Hermans, C. M.; Pastore, C.; Schlosser, C. A.; Vorosmarty, C. J.; Witherell, B. B.; Wollheim, W. M.; Wreschnig, A. J.
2009-12-01
Characteristic time scales are useful and simple descriptors of geophysical and socio-economic system dynamics. Focusing on the integrative nature of the hydrologic cycle, new insights into system couplings can be gained by compiling characteristic time scales of important processes driving these systems. There are many examples of changing characteristic time scales. Human life expectancy has increased over the recent history of medical advancement. The transport time of goods has decreased with the progression from horse to rail to car to plane. The transport time of information changed with the progression from letter to telegraph to telephone to networked computing. Soil residence time (pedogenesis to estuary deposition) has been influenced by changing agricultural technology, urbanization, and forest practices. Surface water residence times have varied as beaver dams have disappeared and been replaced with modern reservoirs, flood control works, and channelization. These dynamics raise the question of how these types of time scales interact with each other to form integrated Earth system dynamics. Here we explore the coupling of geophysical and socio-economic systems in the northeast United States over the 1600 to 2010 period by examining characteristic time scales. This visualization of many time scales serves as an exploratory analysis, producing new hypotheses about how the integrated system dynamics have evolved over the last 400 years. Specifically, exponential population growth and the evolving strategies to maintain that population appear fundamental to many of the time scales.
Shi, Yifei
2013-08-01
Internal resonant modes are always observed in the marching-on-in-time (MOT) solution of the time domain electric field integral equation (EFIE), although 'relaxed initial conditions,' which are enforced at the beginning of time marching, should in theory prevent these spurious modes from appearing. It has been conjectured that numerical errors built up during time marching establish the necessary initial conditions and induce the internal resonant modes. However, this conjecture has never been proved by systematic numerical experiments. Our numerical results in this communication demonstrate that the internal resonant modes' amplitudes are indeed dictated by the numerical errors. Additionally, it is shown that in a few cases, the internal resonant modes can be made 'invisible' by significantly suppressing the numerical errors. These tests prove the conjecture that the internal resonant modes are induced by numerical errors when the time domain EFIE is solved by the MOT method. © 2013 IEEE.
Kanchan Mudgil
2013-07-01
Full Text Available This paper evaluates the energy payback time (EPBT) of a building integrated photovoltaic thermal (BISPVT) system for Srinagar, India. Three different photovoltaic (PV) modules, namely mono-crystalline silicon (m-Si), poly-crystalline silicon (p-Si), and amorphous silicon (a-Si), have been considered for calculation of the EPBT. It is found that the EPBT is lowest for m-Si. Hence, integration of m-Si PV modules on the roof of a room is economical.
Exponential stability of time-delay systems via new weighted integral inequalities
Hien, L. V.; Trinh, H.
2015-01-01
In this paper, new weighted integral inequalities (WIIs) are first derived by refining the Jensen single and double inequalities. It is shown that the newly derived inequalities in this paper encompass both the Jensen inequality and its most recent improvements based on Wirtinger integral inequality. The potential capability of the proposed WIIs is demonstrated through applications in exponential stability analysis for some classes of time-delay systems in the framework of linear matrix inequ...
Integrable KdV Hierarchies on $T^2 = S^1 \times S^1$
Sedra, M B
2007-01-01
Following our previous works on extended higher spin symmetries on the torus, we focus in the present contribution on a setup of the integrable KdV hierarchies on $T^{2} = S^{1} \times S^{1}$. Two particular systems are considered, namely the KdV and Burgers nonlinear integrable models, associated to currents of conformal weights (2, 2) and (1, 1) respectively. One key step towards proving the integrability of these systems is to find their Lax pair operators. This is explicitly done and a mapping between the two systems is discussed.
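For reference, the standard Lax pair for the KdV equation (in one common sign convention; the paper's torus setup may differ) is

```latex
L = -\partial_x^{2} + u, \qquad
P = -4\,\partial_x^{3} + 6u\,\partial_x + 3u_x, \qquad
\frac{\mathrm{d}L}{\mathrm{d}t} = [P, L]
\;\Longleftrightarrow\;
u_t = 6\,u\,u_x - u_{xxx},
```

so the nonlinear flow is encoded as an isospectral deformation of the Schrödinger operator $L$, which is the sense in which finding a Lax pair establishes integrability.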
Santra, Kalyan; Zhan, Jinchun; Song, Xueyu; Smith, Emily A; Vaswani, Namrata; Petrich, Jacob W
2016-03-10
The need for measuring fluorescence lifetimes of species in subdiffraction-limited volumes in, for example, stimulated emission depletion (STED) microscopy, entails the dual challenge of probing a small number of fluorophores and fitting the concomitant sparse data set to the appropriate excited-state decay function. This need has stimulated a further investigation into the relative merits of two fitting techniques commonly referred to as "residual minimization" (RM) and "maximum likelihood" (ML). Fluorescence decays of the well-characterized standard, rose bengal in methanol at room temperature (530 ± 10 ps), were acquired in a set of five experiments in which the total number of "photon counts" was approximately 20, 200, 1000, 3000, and 6000 and there were about 2-200 counts at the maxima of the respective decays. Each set of experiments was repeated 50 times to generate the appropriate statistics. Each of the 250 data sets was analyzed by ML and two different RM methods (differing in the weighting of residuals) using in-house routines and compared with a frequently used commercial RM routine. Convolution with a real instrument response function was always included in the fitting. While RM using Pearson's weighting of residuals can recover the correct mean result with a total number of counts of 1000 or more, ML distinguishes itself by yielding, in all cases, the same mean lifetime within 2% of the accepted value. For 200 total counts and greater, ML always provides a standard deviation of <10% of the mean lifetime, and even at 20 total counts there is only 20% error in the mean lifetime. The robustness of ML advocates its use for sparse data sets such as those acquired in some subdiffraction-limited microscopies, such as STED, and, more importantly, provides greater motivation for exploiting the time-resolved capacities of this technique to acquire and analyze fluorescence lifetime data.
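The contrast between the two estimators can be illustrated with a stripped-down sketch: synthetic exponential arrivals with no instrument-response convolution (which the paper's fits do include), and a naive unweighted log-linear fit standing in for residual minimization. All names and parameters here are illustrative.

```python
import math
import random

def ml_lifetime(times):
    """For a mono-exponential decay observed without truncation or IRF,
    the maximum-likelihood estimate of the lifetime is simply the mean
    arrival time (a simplified sketch, not the paper's convolution fit)."""
    return sum(times) / len(times)

def rm_lifetime(times, nbins=20):
    """Naive residual-minimization stand-in: histogram the arrivals and
    do an unweighted linear fit of log(counts) vs bin center; sparse
    tail bins get equal weight, which is what degrades RM at low counts."""
    width = max(times) / nbins
    counts = [0] * nbins
    for t in times:
        counts[min(int(t / width), nbins - 1)] += 1
    pts = [((i + 0.5) * width, math.log(c)) for i, c in enumerate(counts) if c > 0]
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return -1.0 / slope            # log N(t) = const - t/tau

random.seed(1)
true_tau = 0.53                    # ns, cf. rose bengal's ~530 ps
arrivals = [random.expovariate(1.0 / true_tau) for _ in range(6000)]
tau_ml = ml_lifetime(arrivals)
tau_rm = rm_lifetime(arrivals)
```

Running the same comparison with far fewer arrivals (e.g. 20) shows the ML estimate degrading gracefully while the binned log-linear fit becomes erratic, mirroring the sparse-data behavior reported in the abstract.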
Ulku, Huseyin Arda
2012-09-01
An explicit yet stable marching-on-in-time (MOT) scheme for solving the time domain magnetic field integral equation (TD-MFIE) is presented. The stability of the explicit scheme is achieved via (i) accurate evaluation of the MOT matrix elements using closed form expressions and (ii) a PE(CE) m type linear multistep method for time marching. Numerical results demonstrate the accuracy and stability of the proposed explicit MOT-TD-MFIE solver. © 2012 IEEE.
Compensation for time-delayed feedback bang-bang control of quasi-integrable Hamiltonian systems
(no author listed)
2009-01-01
The stochastic averaging method for quasi-integrable Hamiltonian systems with time-delayed feedback bang-bang control is first introduced. Then, two time delay compensation methods, namely the method of changing control force amplitude (CFA) and the method of changing control delay time (CDT), are proposed. The conditions applicable to each compensation method are discussed. Finally, an example is worked out in detail to illustrate the application and effectiveness of the proposed methods and the two compensation methods in combination.
Quanwu Li
2016-01-01
High reliability is required for the permanent magnet brushless DC motor (PM-BLDCM) in an electrical pump of a hypersonic vehicle. The PM-BLDCM is a short-time duty motor with high power density. Since thermal equilibrium is not reached for the PM-BLDCM, the temperature distribution is not uniform and there is a risk of local overheating. The winding is a main heat source and its insulation is thermally sensitive, so reducing the winding temperature rise is the key to improving reliability. To reduce the winding temperature rise, an electromagnetic-thermal integrated design optimization method is proposed. The method is based on electromagnetic analysis and thermal transient analysis, and the requirements and constraints of both electromagnetic and thermal design are taken into account. The split ratio and the maximum flux density in the stator lamination, which are highly relevant to the winding temperature rise, are optimized analytically. The analytical results are verified by finite element analysis (FEA) and experiments. The maximum error between the analytical and FEA results is 4%, and the errors between the analytical and measured winding temperature rises are less than 8%. This shows that the method can accurately obtain the optimal design to reduce the winding temperature rise.
Control of integrating process with dead time using auto-tuning approach
G. Saravanakumar
2009-03-01
A modification of the Smith predictor for controlling higher order processes with integral action and long dead time is proposed in this paper. The controller used in this Smith predictor is an integral-proportional-derivative (I-PD) controller, in which the integral term is placed in the forward path while the proportional and derivative terms act on the feedback signal. The main objective of this paper is to design a dead time compensator with few tuning parameters, simple controller tuning, and robust tuning formulae, and to obtain a critically damped system that is as fast as possible in its set point tracking and load disturbance rejection. The controller is tuned by an adaptive method. The paper also presents a survey of various dead time compensators and an analysis of their performance.
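The Smith predictor structure described above can be sketched for the simplest integrating plant with dead time, e^(-Ls)/s. This is a minimal illustration assuming a perfect process model and a plain proportional controller in place of the paper's adaptively tuned I-PD controller; all names and numbers are hypothetical.

```python
def simulate_smith_predictor(kp, dead_steps, dt, n_steps, setpoint):
    """Proportional control of an integrating plant exp(-L*s)/s through a
    Smith predictor, forward-Euler discretized, with a perfect model."""
    y = 0.0                    # real plant output (integrator state)
    xm = 0.0                   # delay-free model state
    xmd = 0.0                  # delayed model state
    buf = [0.0] * dead_steps   # input dead-time delay line
    out = []
    for _ in range(n_steps):
        # predictor: measured output corrected by the model mismatch term
        y_pred = y + (xm - xmd)
        u = kp * (setpoint - y_pred)
        u_delayed = buf.pop(0)
        buf.append(u)
        y += dt * u_delayed    # real plant sees the delayed input
        xm += dt * u           # model without dead time
        xmd += dt * u_delayed  # model with dead time
        out.append(y)
    return out

# dead time L = 20 * 0.05 = 1 s; the predictor removes the delay from the loop
traj = simulate_smith_predictor(kp=1.0, dead_steps=20, dt=0.05,
                                n_steps=400, setpoint=1.0)
```

With a perfect model the delayed model state tracks the plant exactly, so the controller effectively acts on the delay-free model and the output converges to the setpoint without overshoot.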
Neural Integration Underlying a Time-Compensated Sun Compass in the Migratory Monarch Butterfly
Eli Shlizerman
2016-04-01
Migrating eastern North American monarch butterflies use a time-compensated sun compass to adjust their flight to the southwest direction. Although the antennal genetic circadian clock and the azimuth of the sun are instrumental for proper function of the compass, it is unclear how these signals are represented on a neuronal level and how they are integrated to produce flight control. To address these questions, we constructed a receptive field model of the compound eye that encodes the solar azimuth. We then derived a neural circuit model that integrates azimuthal and circadian signals to correct flight direction. The model demonstrates an integration mechanism, which produces robust trajectories reaching the southwest regardless of the time of day and includes a configuration for remigration. Comparison of model simulations with flight trajectories of butterflies in a flight simulator shows analogous behaviors and affirms the prediction that midday is the optimal time for migratory flight.
Velocity time integral for right upper pulmonary vein in VLBW infants with patent ductus arteriosus.
Lista, Gianluca; Bianchi, Silvia; Mannarino, Savina; Schena, Federico; Castoldi, Francesca; Stronati, Mauro; Mosca, Fabio
2016-10-01
Early diagnosis of significant patent ductus arteriosus reduces the risk of clinical worsening in very low birth weight infants. Echocardiographic patent ductus arteriosus shunt flow pattern can be used to predict significant patent ductus arteriosus. Pulmonary venous flow, expressed as vein velocity time integral, is correlated to ductus arteriosus closure. The aim of this study is to investigate the relationship between significant reductions in vein velocity time integral and non-significant patent ductus arteriosus in the first week of life. A multicenter, prospective, observational study was conducted to evaluate very low birth weight infants (patent ductus compared to those with closed patent ductus arteriosus and the difference was significant. A significant reduction in vein velocity time integral in the first days of life is associated with ductus closure. This parameter correlates well with other echocardiographic parameters and may aid in the diagnosis and management of patent ductus arteriosus.
Constrained time-optimal control of double-integrator system and its application in MPC
Fehér, Marek; Straka, Ondřej; Šmídl, Václav
2017-01-01
The paper deals with the design of a time-optimal controller for systems subject to both state and control constraints. The focus is laid on a double-integrator system, for which the time-to-go function is calculated. The function is then used as part of a model predictive control criterion, where it represents the long-horizon part. The designed model predictive control algorithm is then applied to a constrained control problem for a permanent magnet synchronous motor model, whose behavior can be approximated by a double integrator. Accomplishment of the control goals is illustrated in a numerical example.
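The time-to-go function for a double integrator under bounded control is a classical bang-bang result. The sketch below implements that standard formula only (not the paper's MPC scheme); the function name and normalization are illustrative.

```python
import math

def time_to_go(x, v, u_max=1.0):
    """Minimum time for the double integrator x' = v, v' = u, |u| <= u_max
    to reach the origin (classical bang-bang solution with one switch)."""
    # switching curve: x = -v*|v| / (2*u_max)
    curve = -v * abs(v) / (2.0 * u_max)
    if x > curve:       # start with u = -u_max, switch to +u_max
        return (v + 2.0 * math.sqrt(0.5 * v * v + u_max * x)) / u_max
    elif x < curve:     # start with u = +u_max, switch to -u_max
        return (-v + 2.0 * math.sqrt(0.5 * v * v - u_max * x)) / u_max
    else:               # already on the switching curve: one constant-u arc
        return abs(v) / u_max
```

For example, from rest at x = 0.5 with u_max = 1 the optimal policy decelerates for half the distance and accelerates for the other half, giving a minimum time of sqrt(2).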
Fei Lin
2016-03-01
Because of its large transport capacity, total urban rail transit energy consumption is very high; thus, energy-saving operation is quite meaningful. The effective use of regenerative braking energy is the mainstream method for improving energy-saving efficiency. This paper examines the optimization of train dwell time and builds a multiple-train operation model for energy conservation of the power supply system. By changing the dwell time, the braking energy can be absorbed and utilized by other traction trains as efficiently as possible. The application of genetic algorithms is proposed for the optimization, based on the current schedule. Next, to validate the correctness and effectiveness of the optimization, a real case is studied. Actual data from the Beijing subway Yizhuang Line are employed to perform the simulation, and the results indicate that the dwell time optimization method is effective.
Tuning PID and FOPID Controllers using the Integral Time Absolute Error Criterion
Maiti, Deepyaman; Chakraborty, Mithun; Konar, Amit; Janarthanan, Ramadoss
2008-01-01
Particle swarm optimization (PSO) is extensively used for real parameter optimization in diverse fields of study. This paper describes an application of PSO to the problem of designing a fractional-order proportional-integral-derivative (FOPID) controller whose parameters comprise the proportionality constant, integral constant, derivative constant, integral order (lambda) and derivative order (delta). The presence of five optimizable parameters makes the task of designing an FOPID controller more challenging than conventional PID controller design. Our design method focuses on minimizing the Integral Time Absolute Error (ITAE) criterion. The digital realization of the designed system utilizes the Tustin operator-based continued fraction expansion scheme. We carry out a simulation that illustrates the effectiveness of the proposed approach, especially for realizing fractional-order plants. This paper also attempts to study the behavior of the fractional PID controller vis-a-vis that of its integer order counterpart and ...
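The ITAE criterion minimized by the PSO search is simply the integral of t·|e(t)| over the closed-loop error. The sketch below computes it for a hypothetical first-order plant 1/(s+1) under an ordinary PI controller (not the fractional-order controller of the paper); all gains and names are illustrative.

```python
def itae(times, errors):
    """Integral Time Absolute Error, ITAE = integral of t*|e(t)| dt,
    via the trapezoidal rule."""
    total = 0.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        total += 0.5 * dt * (times[i] * abs(errors[i])
                             + times[i - 1] * abs(errors[i - 1]))
    return total

def step_response_pi(kp, ki, n=2000, dt=0.005):
    """Unit-step response of a PI loop around the plant y' = -y + u
    (forward-Euler simulation, hypothetical test plant)."""
    y, integ, ts, errs = 0.0, 0.0, [], []
    for k in range(n):
        e = 1.0 - y
        integ += e * dt
        u = kp * e + ki * integ
        y += dt * (-y + u)
        ts.append(k * dt)
        errs.append(e)
    return ts, errs

ts, errs = step_response_pi(kp=2.0, ki=1.0)
cost = itae(ts, errs)
```

A tuning algorithm such as PSO would evaluate this cost for each candidate gain set and move the swarm toward lower ITAE values.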
An explicit marching on-in-time solver for the time domain volume magnetic field integral equation
Sayed, Sadeed Bin
2014-07-01
Transient scattering from inhomogeneous dielectric objects can be modeled using time domain volume integral equations (TDVIEs). TDVIEs are oftentimes solved using marching on-in-time (MOT) techniques. Classical MOT-TDVIE solvers expand the field induced on the scatterer using local spatio-temporal basis functions. Inserting this expansion into the TDVIE and testing the resulting equation in space and time yields a system of equations that is solved by time marching. Depending on the type of the basis and testing functions and the time step, the time marching scheme can be implicit (N. T. Gres, et al., Radio Sci., 36(3), 379-386, 2001) or explicit (A. Al-Jarro, et al., IEEE Trans. Antennas Propag., 60(11), 5203-5214, 2012). Implicit MOT schemes are known to be more stable and accurate. However, under low-frequency excitation, i.e., when the time step size is large, they call for the inversion of a full matrix system at every time step.
Integration of Simulink, MARTe and MDSplus for rapid development of real-time applications
Manduchi, G., E-mail: gabriele.manduchi@igi.cnr.it [Consorzio RFX (CNR, ENEA, INFN, Università di Padova, Acciaierie Venete SpA), Padova (Italy); Luchetta, A.; Taliercio, C. [Consorzio RFX (CNR, ENEA, INFN, Università di Padova, Acciaierie Venete SpA), Padova (Italy); Neto, A.; Sartori, F. [Fusion for Energy, Barcelona (Spain); De Tommasi, G. [Fusion for Energy, Barcelona (Spain); Consorzio CREATE/DIETI, Università degli Studi di Napoli Federico II, Via Claudio 21, 80125 Napoli (Italy)
2015-10-15
Highlights: • The integration of two frameworks for real-time control and data acquisition is described. • The integration may significantly speed up the development of system components. • The system also includes a code generator for the integration of code written in Simulink. • A real-time control system can be implemented without writing any line of code. - Abstract: Simulink is a graphical data flow programming tool for modeling and simulating dynamic systems. A component of Simulink, called Simulink Coder, generates C code from Simulink diagrams. MARTe is a framework for the implementation of real-time systems, currently in use in several fusion experiments. MDSplus is a framework widely used in the fusion community for the management of data. The three systems address different facets of the same process, namely real-time plasma control development: Simulink diagrams describe the control algorithms, which are implemented as MARTe GAMs and which read parameters from and write results to MDSplus pulse files. The three systems have been integrated in order to provide a tool suitable for speeding up the development of real-time control applications. In particular, it is shown how, starting from a Simulink diagram describing a given control algorithm, the corresponding MARTe and MDSplus components can be generated automatically and assembled to implement the target system.
Time integration algorithms for the two-dimensional Euler equations on unstructured meshes
Slack, David C.; Whitaker, D. L.; Walters, Robert W.
1994-06-01
Explicit and implicit time integration algorithms for the two-dimensional Euler equations on unstructured grids are presented. Both cell-centered and cell-vertex finite volume upwind schemes utilizing Roe's approximate Riemann solver are developed. For the cell-vertex scheme, a four-stage Runge-Kutta time integration, a four-stage Runge-Kutta time integration with implicit residual averaging, a point Jacobi method, a symmetric point Gauss-Seidel method and two methods utilizing preconditioned sparse matrix solvers are presented. For the cell-centered scheme, a Runge-Kutta scheme, an implicit tridiagonal relaxation scheme modeled after line Gauss-Seidel, a fully implicit lower-upper (LU) decomposition, and a hybrid scheme utilizing both Runge-Kutta and LU methods are presented. A reverse Cuthill-McKee renumbering scheme is employed for the direct solver to decrease CPU time by reducing the fill of the Jacobian matrix. A comparison of the various time integration schemes is made for both first-order and higher order accurate solutions using several mesh sizes; higher order accuracy is achieved by using multidimensional monotone linear reconstruction procedures. The results obtained for a transonic flow over a circular arc suggest that the preconditioned sparse matrix solvers perform better than the other methods as the number of elements in the mesh increases.
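The classical four-stage Runge-Kutta scheme mentioned above can be sketched on a scalar model problem standing in for the semi-discrete residual equation dy/dt = -R(y). This is a toy illustration of the time integrator only, not of the Euler solver itself.

```python
import math

def rk4_step(f, t, y, dt):
    """One step of the classical four-stage Runge-Kutta scheme."""
    k1 = f(t, y)
    k2 = f(t + 0.5 * dt, y + 0.5 * dt * k1)
    k3 = f(t + 0.5 * dt, y + 0.5 * dt * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# model problem y' = -y, exact solution y(t) = exp(-t)
y, t, dt = 1.0, 0.0, 0.1
for _ in range(10):
    y = rk4_step(lambda t, y: -y, t, y, dt)
    t += dt
```

The fourth-order accuracy means that even with this modest step size the result at t = 1 agrees with exp(-1) to roughly six decimal places.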
Stability Analysis and Variational Integrator for Real-Time Formation Based on Potential Field
Shengqing Yang
2014-01-01
This paper investigates a framework for real-time formation of autonomous vehicles using a potential field and a variational integrator. Real-time formation requires vehicles to have coordinated motion and efficient computation. Interactions described by the potential field can meet the former requirement, but they result in a nonlinear system whose stability analysis is difficult. Our stability analysis is carried out on the error dynamic system. Transforming coordinates from the inertial frame to the body frame lets the stability analysis focus on the structure instead of particular coordinates. The Jacobian of the reduced system can then be calculated, and it can be proved that the formation is stable at the equilibrium point of the error dynamic system under the effect of the damping force. For computational efficiency, a variational integrator is introduced, which amounts to solving algebraic equations. The forced Euler-Lagrange equation in discrete form is used to construct a forced variational integrator for vehicles in a potential field and obstacle environment. By applying the forced variational integrator to the computation of vehicle motion, real-time formation of vehicles in an obstacle environment can be implemented. An algorithm based on the forced variational integrator is designed for a leader-follower formation.
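The discrete Euler-Lagrange recursion behind such a variational integrator reduces, for a Lagrangian L = (1/2)m·qdot² - V(q) with no forcing, to the familiar position-Verlet update. The sketch below is illustrative only, on a hypothetical harmonic potential; the paper's version additionally includes interaction potentials and forcing/damping terms.

```python
import math

def variational_step(q_prev, q, dt, grad_V, mass=1.0):
    """Discrete Euler-Lagrange update for L = 0.5*m*qdot**2 - V(q):
    m*(q_next - 2*q + q_prev)/dt**2 = -grad_V(q)   (position Verlet)."""
    return 2.0 * q - q_prev - (dt * dt / mass) * grad_V(q)

# Harmonic potential V(q) = 0.5*q**2 (hypothetical test case): the
# variational structure keeps the oscillation amplitude bounded over
# long runs instead of letting the energy drift.
dt = 0.1
q_prev, q = 1.0, math.cos(dt)   # consistent start for q(0)=1, qdot(0)=0
for _ in range(1000):
    q_prev, q = q, variational_step(q_prev, q, dt, lambda x: x)
```

After 1000 steps the trajectory still oscillates with essentially unit amplitude, which is the long-time behavior that makes variational integrators attractive for real-time simulation.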
Hoeij, F.B. van; Stadhouders, P.H.G.M.; Weusten, B.L.A.M. [St Antonius Ziekenhuis, Department of Gastroenterology, Nieuwegein (Netherlands); Keijsers, R.G.M. [St Antonius Ziekenhuis, Department of Nuclear Medicine, Nieuwegein (Netherlands); Loffeld, B.C.A.J. [Zuwe Hofpoort Ziekenhuis, Department of Internal Medicine, Woerden (Netherlands); Dun, G. [Ziekenhuis Rivierenland, Department of Internal Medicine, Tiel (Netherlands)
2015-01-15
In patients undergoing ¹⁸F-FDG PET/CT, incidental colonic focal lesions can be indicative of inflammatory, premalignant or malignant lesions. The maximum standardized uptake value (SUVmax) of these lesions, representing the FDG uptake intensity, might be helpful in differentiating malignant from benign lesions, and thereby in determining the urgency of colonoscopy. The aim of our study was to assess the incidence and underlying pathology of incidental PET-positive colonic lesions in a large cohort of patients, and to determine the usefulness of SUVmax in differentiating benign from malignant pathology. The electronic records of all patients who underwent FDG PET/CT from January 2010 to March 2013 in our hospital were retrospectively reviewed. The main indications for PET/CT were: characterization of an indeterminate mass on radiological imaging, suspicion or staging of malignancy, and suspicion of inflammation. In patients with incidental focal FDG uptake in the large bowel, data regarding subsequent colonoscopy were retrieved, if performed within 120 days. The final diagnosis was defined using colonoscopy findings, combined with additional histopathological assessment of the lesion, if applicable. Of 7,318 patients analysed, 359 (5 %) had 404 foci of unexpected colonic FDG uptake. In 242 of these 404 lesions (60 %), colonoscopy follow-up data were available. Final diagnoses were: adenocarcinoma in 25 (10 %), adenoma in 90 (37 %), and benign in 127 (53 %). The median [IQR] SUVmax was significantly higher in adenocarcinoma (16.6 [12 - 20.8]) than in benign lesions (8.2 [5.9 - 10.1]; p < 0.0001), non-advanced adenoma (8.3 [6.1 - 10.5]; p < 0.0001) and advanced adenoma (9.7 [7.2 - 12.6]; p < 0.001). The receiver operating characteristic curve of SUVmax for malignant versus nonmalignant lesions had an area under the curve of 0.868 (SD ± 0.038), the optimal cut-off value being 11.4 (sensitivity 80 %, specificity 82
Ceylan, Omer; Shafique, Atia; Burak, Abdurrahman; Caliskan, Can; Yazici, Melik; Abbasi, Shahbaz; Galioglu, Arman; Kayahan, Huseyin; Gurbuz, Yasar
2016-11-01
This paper presents a digital readout integrated circuit (DROIC) implementing time delay and integration (TDI) for scanning-type infrared focal plane arrays (IRFPAs), with a charge handling capacity of 44.8 Me- while achieving quantization noise of 198 e- and power consumption of 14.35 mW. The conventional pulse frequency modulation (PFM) method is complemented by a single-slope ramp ADC technique to achieve very low quantization noise together with low power consumption. The proposed digital TDI ROIC converts the photocurrent into the digital domain in two phases: in the first phase, the most significant bits (MSBs) are generated by the conventional PFM technique in the charge domain, while in the second phase the least significant bits (LSBs) are generated by a single-slope ramp ADC in the time domain. A 90 × 8 prototype has been fabricated and verified, showing a significantly improved signal-to-noise ratio (SNR) of 51 dB at low illumination levels (280,000 collected electrons), which is attributed to the TDI implementation method and the very low quantization noise due to the single-slope ADC implemented for the LSBs. The proposed digital TDI ROIC demonstrates the benefit of digital readouts for scanning arrays, enabling smaller pixel pitches, better SNR at low illumination levels, and lower power consumption compared to analog TDI readouts for scanning arrays.
A study of pile-up in integrated time-correlated single photon counting systems.
Arlt, Jochen; Tyndall, David; Rae, Bruce R; Li, David D-U; Richardson, Justin A; Henderson, Robert K
2013-10-01
Recent demonstration of highly integrated, solid-state, time-correlated single photon counting (TCSPC) systems in CMOS technology is set to provide significant increases in performance over existing bulky, expensive hardware. Arrays of single photon avalanche diode (SPAD) detectors, timing channels, and signal processing can be integrated on a single silicon chip with a degree of parallelism and computational speed that is unattainable by discrete photomultiplier tube and photon counting card solutions. New multi-channel, multi-detector TCSPC sensor architectures with greatly enhanced throughput due to minimal detector transit (dead) time or timing channel dead time are now feasible. In this paper, we study the potential for future integrated, solid-state TCSPC sensors to exceed the photon pile-up limit through analytic formulae and simulation. The results are validated using a 10% fill factor SPAD array and an 8-channel, 52 ps resolution time-to-digital conversion architecture with embedded lifetime estimation. It is demonstrated that pile-up-insensitive acquisition is attainable at greater than 10 times the pulse repetition rate, providing over 60 dB of extended dynamic range to the TCSPC technique. Our results predict future CMOS TCSPC sensors capable of live-cell transient observations in confocal scanning microscopy, improved resolution of near-infrared optical tomography systems, and fluorescence lifetime activated cell sorting.
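The pile-up limit discussed above follows from Poisson statistics: a classic TCSPC channel records at most one photon per excitation pulse, so the detected fraction p understates the true mean photons per pulse λ, with p = 1 - e^(-λ). A minimal sketch of the standard inversion (illustrative only, not the paper's multi-channel estimator):

```python
import math

def true_rate_from_counts(pulses, detections):
    """Pile-up-corrected mean photons per excitation pulse for a classic
    single-photon-per-pulse TCSPC channel: invert p = 1 - exp(-lam)."""
    p = detections / pulses
    return -math.log(1.0 - p)

# e.g. 40% of pulses yield a detection -> the true mean photon rate is
# noticeably higher than the naive estimate of 0.4 photons per pulse
lam = true_rate_from_counts(pulses=1_000_000, detections=400_000)
```

Multi-detector, multi-channel architectures like the one studied here relax this single-photon-per-pulse restriction, which is what pushes the usable count rate past the conventional pile-up limit.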
Breaking the challenge of signal integrity using time-domain spoof surface plasmon polaritons
Zhang, Hao Chi; Zhang, Qian; Fan, Yifeng; Fu, Xiaojian
2015-01-01
In modern integrated circuits and wireless communication systems/devices, three key requirements need to be satisfied simultaneously to reach higher performance and more compact size: signal integrity, interference suppression, and miniaturization. However, these requirements are almost mutually contradictory under traditional techniques. To overcome this challenge, here we propose time-domain spoof surface plasmon polaritons (SPPs) as the carrier of signals. By designing a special plasmonic waveguide constructed by printing two narrow corrugated metallic strips on the top and bottom surfaces of a dielectric substrate with mirror symmetry, we show that spoof SPPs are supported from very low frequency up to the cutoff frequency with strong subwavelength effects, which can be converted to time-domain SPPs. When two such plasmonic waveguides are tightly packed with deep-subwavelength separation, which commonly happens in integrated circuits and wireless communications due to limited space, we demonstrate theo...
Quasi-three-dimensional integration scheme using time-domain interconnection
Kotani, Koji
2017-07-01
A quasi-three-dimensional integration scheme involving time-domain interconnection (Q3D-TD) is proposed. By utilizing the time domain as the third integration dimension, circuit functions can be integrated densely with a quasi-3D interconnect system, resulting in a decrease in critical path delay and a higher operating speed of the circuit. As an example of the application of this concept in digital signal processing, multiple-layered 2D image averaging filters (8×8 pixels, 8 bit depth, and 3×3 core) are designed and evaluated. An average speedup of 10.2% is achieved by Q3D-TD for two-layer to eight-layer 2D image filters.
Li, Zhichen; Bai, Yan; Huang, Congzhi; Yan, Huaicheng
2017-05-01
This paper investigates the stability and stabilization problems for interval time-delay systems. By introducing a new delay partitioning approach, various Lyapunov-Krasovskii functionals with triple-integral terms are established to make full use of system information. To reduce conservatism, improved integral inequalities are developed for the estimation of double integrals, which markedly outperform the Jensen and Wirtinger inequalities. In particular, the relationship between the time delay and each subinterval is taken into consideration. The resulting stability criteria are less conservative than some recent methods. Based on the derived condition, a state-feedback controller design approach is also given. Finally, numerical examples and an application to an inverted pendulum system are provided to illustrate the effectiveness of the proposed approaches. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
M. C. Roa-García
2010-08-01
We present a new modeling approach for analyzing and predicting the Transit Time Distribution (TTD) and the Response Time Distribution (RTD) from hourly to annual time scales as two distinct hydrological processes. The model integrates Isotope Hydrograph Separation (IHS) and the Instantaneous Unit Hydrograph (IUH) approach as a tool to provide a more realistic description of the transit and response time of water in catchments. Individual event simulations and parameterizations were combined with long-term baseflow simulation and parameterizations; this provides a comprehensive picture of the catchment response over a long time span for the hydraulic and isotopic processes. The proposed method was tested in three Andean headwater catchments to compare the effects of land use on hydrological response and solute transport. Results show that the characteristics of events and antecedent conditions have a significant influence on the TTD and RTD, but in general the RTD of the grassland-dominated catchment is concentrated in the shorter time spans and has a higher cumulative TTD, while the forest-dominated catchment has a relatively longer response distribution and lower cumulative TTD. The catchment where wetlands concentrate shows a flashier response, but the wetlands also appear to prolong transit time.
Model-based Integration of Past & Future in TimeTravel
Khalefa, Mohamed E.; Fischer, Ulrike; Pedersen, Torben Bach
2012-01-01
We demonstrate TimeTravel, an efficient DBMS system for seamless integrated querying of past and (forecasted) future values of time series, allowing the user to view past and future values as one joint time series. This functionality is important for advanced application domains like energy. The main idea is to compactly represent time series as models. By using models, the TimeTravel system answers queries approximately on past and future data with error guarantees (absolute error and confidence) one order of magnitude faster than when accessing the time series directly. To construct a hierarchical model index, the user specifies the seasonality period, error guarantee levels, and a statistical forecast method; as time proceeds, the system incrementally updates the index and utilizes it to answer approximate and exact queries. TimeTravel is implemented in PostgreSQL, thus achieving complete user transparency at the query level. In the demo, we show the easy building of a hierarchical model index for a real-world time series and the effect of varying the error guarantees on the speed-up.
Distributed finite-time containment control for double-integrator multiagent systems.
Wang, Xiangyu; Li, Shihua; Shi, Peng
2014-09-01
In this paper, the distributed finite-time containment control problem for double-integrator multiagent systems with multiple leaders and external disturbances is discussed. In the presence of multiple dynamic leaders, by utilizing the homogeneous control technique, a distributed finite-time observer is developed for the followers to estimate the weighted average of the leaders' velocities at first. Then, based on the estimates and the generalized adding a power integrator approach, distributed finite-time containment control algorithms are designed to guarantee that the states of the followers converge to the dynamic convex hull spanned by those of the leaders in finite time. Moreover, as a special case of multiple dynamic leaders with zero velocities, the proposed containment control algorithms also work for the case of multiple stationary leaders without using the distributed observer. Simulations demonstrate the effectiveness of the proposed control algorithms.
An efficient explicit marching on in time solver for magnetic field volume integral equation
Sayed, Sadeed Bin
2015-07-25
An efficient explicit marching on in time (MOT) scheme for solving the magnetic field volume integral equation is proposed. The MOT system is cast in the form of an ordinary differential equation and is integrated in time using a PE(CE)^m multistep scheme. At each time step, a system with a Gram matrix is solved for the predicted/corrected field expansion coefficients. Depending on the type of spatial testing scheme, the Gram matrix is sparse or consists of blocks with only diagonal entries, regardless of the time step size. Consequently, the resulting MOT scheme is more efficient than its implicit counterparts, which call for the inversion of a fuller matrix system at lower frequencies. Numerical results, which demonstrate the efficiency, accuracy, and stability of the proposed MOT scheme, are presented.
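A PE(CE)^m step of the kind used for time marching can be sketched on a scalar test equation: predict with an explicit two-step Adams-Bashforth formula, then evaluate and correct m times with the trapezoidal (Adams-Moulton) corrector. This is a toy illustration of the multistep scheme only, not of the integral-equation solver; the bootstrap of the first step is a simplification.

```python
import math

def pecem_step(f, t, y, f_prev, dt, m=2):
    """One PE(CE)^m step: two-step Adams-Bashforth predictor, followed by
    m evaluate/correct passes with the trapezoidal corrector.
    Returns the new state and f(t, y) for reuse at the next step."""
    f_n = f(t, y)
    # P: predict with AB2
    y_new = y + dt * (1.5 * f_n - 0.5 * f_prev)
    # (CE)^m: evaluate at the prediction, then correct, m times
    for _ in range(m):
        y_new = y + 0.5 * dt * (f_n + f(t + dt, y_new))
    return y_new, f_n

# test equation y' = -y, exact solution exp(-t)
f = lambda t, y: -y
y, t, dt = 1.0, 0.0, 0.05
f_prev = f(0.0, 1.0)   # bootstrap: reuse the initial slope as the "previous" one
for _ in range(20):
    y, f_prev = pecem_step(f, t, y, f_prev, dt)
    t += dt
```

Because each corrector pass only re-evaluates the right-hand side, the scheme stays fully explicit: no system beyond the (sparse or block-diagonal) Gram matrix needs to be inverted.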
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis contrary to ordinary non-spatial factor analysis gives an objective discrimina...
Time-varying market integration and expected returns in emerging markets
de Jong, F.C.J.M.; de Roon, F.
2001-01-01
We use a simple model in which the expected returns in emerging markets depend on their systematic risk as measured by their beta relative to the world portfolio as well as on the level of integration in that market. The level of integration is a time-varying variable that depends on the market value o
Hybrid state‐space time integration in a rotating frame of reference
Krenk, Steen; Nielsen, Martin Bjerre
2011-01-01
A time integration algorithm is developed for the equations of motion of a flexible body in a rotating frame of reference. The equations are formulated in a hybrid state‐space, formed by the local displacement components and the global velocity components. In the spatial discretization the local ...
Nielsen, Martin Bjerre; Krenk, Steen
2012-01-01
A conservative time integration algorithm for rigid body rotations is presented in a purely algebraic form in terms of the four quaternions components and the four conjugate momentum variables via Hamilton’s equations. The introduction of an extended mass matrix leads to a symmetric set of eight...
An adaptive time integration scheme for blast loading on a saturated soil mass
Al-Khoury, R.; Weerheijm, J.; Dingerdis, K.; Sluys, L.J.
2011-01-01
This paper presents a time integration scheme capable of simulating blast loading of relatively high frequency on porous media, using coarse meshes. The scheme is based on the partition of unity finite element method. The discontinuity is imposed on the velocity field, while the displacement field i
On the Integration Concept of Space and Time
苏越; 叶进
2001-01-01
The integration concept of space and time is a development of dialectical materialism in modern society, as well as a scientific generalization and reflection of continuance in time and extension in space. The integration concept grasps the "being" of a thing together with its time and space as one organic whole. Adopting the integration concept of time and space and cultivating the habit of three-dimensional thinking are of great significance for correctly understanding things and solving problems scientifically.
Measurement of the time-integrated CP asymmetry in D0 -> KS0 KS0 decays
Aaij, R.; Adeva, B.; Adinolfi, M.; Affolder, A.; Ajaltouni, Z.; Akar, S.; Albrecht, J.; Alessio, F.; Alexander, M.; Ali, S.; Alkhazov, G.; Cartelle, P. Alvarez; Alves, A. A.; Amato, S.; Amerio, S.; Amhis, Y.; An, L.; Anderlini, L.; Anderson, J.; Andreassi, G.; Andreotti, M.; Andrews, J. E.; Appleby, R. B.; Gutierrez, O. Aquines; Archilli, F.; d'Argent, P.; Artamonov, A.; Artuso, M.; Aslanides, E.; Auriemma, G.; Baalouch, M.; Bachmann, S.; Back, J. J.; Badalov, A.; Baesso, C.; Baldini, W.; Barlow, R. J.; Barschel, C.; Barsuk, S.; Barter, W.; Batozskaya, V.; Battista, V.; Bay, A.; Beaucourt, L.; Beddow, J.; Bedeschi, F.; Bediaga, I.; Bel, L. J.; Bellee, V.; Belloli, N.; Belyaev, I.; Ben-Haim, E.; Bencivenni, G.; Benson, S.; Benton, J.; Berezhnoy, A.; Bernet, R.; Bertolin, A.; Bettler, M-O; van Beuzekom, M.; Bien, A.; Bifani, S.; Billoir, P.; Bird, T.; Birnkraut, A.; Bizzeti, A.; Blake, T.; Blanc, F.; Blouw, J.; Blusk, S.; Bocci, V.; Bondar, A.; Bondar, N.; Bonivento, W.; Borghi, S.; Borsato, M.; Bowcock, T. J. V.; Bowen, E.; Bozzi, C.; Braun, S.; Britsch, M.; Britton, T.; Brodzicka, J.; Brook, N. H.; Buchanan, E.; Bursche, A.; Buytaert, J.; Cadeddu, S.; Calabrese, R.; Calvi, M.; Calvo Gomez, M.; Campana, P.; Perez, D. Campora; Capriotti, L.; Carbone, A.; Carboni, G.; Cardinale, R.; Cardini, A.; Carniti, P.; Carson, L.; Akiba, K. Carvalho; Casse, G.; Cassina, L.; Garcia, L. Castillo; Cattaneo, M.; Cauet, Ch.; Cavallero, G.; Cenci, R.; Charles, M.; Charpentier, Ph.; Chefdeville, M.; Chen, S.; Cheung, S-F; Chiapolini, N.; Chrzaszcz, M.; Vidal, X. Cid; Ciezarek, G.; Clarke, P. E. L.; Clemencic, M.; Cliff, H. V.; Closier, J.; Coco, V.; Cogan, J.; Cogneras, E.; Cogoni, V.; Cojocariu, L.; Collazuol, G.; Collins, P.; Comerma-Montells, A.; Contu, A.; Cook, A.; Coombes, M.; Coquereau, S.; Corti, G.; Corvo, M.; Couturier, B.; Cowan, G. A.; Craik, D. C.; Crocombe, A.; Cruz Torres, M.; Cunliffe, S.; Currie, R.; D'Ambrosio, C.; Dall'Occo, E.; Dalseno, J.; David, P. N. 
Y.; Davis, A.; De Bruyn, K.; De Capua, S.; De Cian, M.; De Miranda, J. M.; De Paula, L.; De Simone, P.; Dean, C-T; Decamp, D.; Deckenhoff, M.; Del Buono, L.; Deleage, N.; Demmer, M.; Derkach, D.; Deschamps, O.; Dettori, F.; Dey, B.; Di Canto, A.; Di Ruscio, F.; Dijkstra, H.; Donleavy, S.; Dordei, F.; Dorigo, M.; Dosil Suarez, A.; Dossett, D.; Dovbnya, A.; Dreimanis, K.; Dufour, L.; Dujany, G.; Dupertuis, F.; Durante, P.; Dzhelyadin, R.; Dziurda, A.; Dzyuba, A.; Easo, S.; Egede, U.; Egorychev, V.; Eidelman, S.; Eisenhardt, S.; Eitschberger, U.; Ekelhof, R.; Eklund, L.; El Rifai, I.; Elsasser, Ch.; Ely, S.; Esen, S.; Evans, H. M.; Evans, T.; Falabella, A.; Faerber, C.; Farley, N.; Farry, S.; Fay, R.; Ferguson, D.; Fernandez Albor, V.; Ferrari, F.; Ferreira Rodrigues, F.; Ferro-Luzzi, M.; Filippov, S.; Fiore, M.; Fiorini, M.; Firlej, M.; Fitzpatrick, C.; Fiutowski, T.; Fohl, K.; Fol, P.; Fontana, M.; Fontanelli, F.; Forty, R.; Francisco, O.; Frank, M.; Frei, C.; Frosini, M.; Fu, J.; Furfaro, E.; Gallas Torreira, A.; Galli, D.; Gallorini, S.; Gambetta, S.; Gandelman, M.; Gandini, P.; Gao, Y.; Garcia Pardinas, J.; Tico, J. Garra; Garrido, L.; Gascon, D.; Gaspar, C.; Gauld, R.; Gavardi, L.; Gazzoni, G.; Gerick, D.; Gersabeck, E.; Gersabeck, M.; Gershon, T.; Ghez, Ph.; Giani, S.; Gibson, V.; Girard, O. G.; Giubega, L.; Gligorov, V. V.; Goebel, C.; Golubkov, D.; Golutvin, A.; Gomes, A.; Gotti, C.; Gandara, M. Grabalosa; Graciani Diaz, R.; Cardoso, L. A. Granado; Grauges, E.; Graverini, E.; Graziani, G.; Grecu, A.; Greening, E.; Gregson, S.; Griffith, P.; Grillo, L.; Gruenberg, O.; Gui, B.; Gushchin, E.; Guz, Yu.; Gys, T.; Hadavizadeh, T.; Hadjivasiliou, C.; Haefeli, G.; Haen, C.; Haines, S. C.; Hall, S.; Hamilton, B.; Han, X.; Hansmann-Menzemer, S.; Harnew, N.; Harnew, S. 
T.; Harrison, J.; He, J.; Head, T.; Heijne, V.; Hennessy, K.; Henrard, P.; Henry, L.; van Herwijnen, E.; Hess, M.; Hicheur, A.; Hill, D.; Hoballah, M.; Hombach, C.; Hulsbergen, W.; Humair, T.; Hussain, N.; Hutchcroft, D.; Hynds, D.; Idzik, M.; Ilten, P.; Jacobsson, R.; Jaeger, A.; Jalocha, J.; Jans, E.; Jawahery, A.; Jing, F.; John, M.; Johnson, D.; Jones, C. R.; Joram, C.; Jost, B.; Jurik, N.; Kandybei, S.; Kanso, W.; Karacson, M.; Karbach, T. M.; Karodia, S.; Kecke, M.; Kelsey, M.; Kenyon, I. R.; Kenzie, M.; Ketel, T.; Khanji, B.; Khurewathanakul, C.; Klaver, S.; Klimaszewski, K.; Kochebina, O.; Kolpin, M.; Komarov, I.; Koopman, R. F.; Koppenburg, P.; Kozeiha, M.; Kravchuk, L.; Kreplin, K.; Kreps, M.; Krocker, G.; Krokovny, P.; Kruse, F.; Krzemien, W.; Kucewicz, W.; Kucharczyk, M.; Kudryavtsev, V.; Kuonen, A. K.; Kurek, K.; Kvaratskheliya, T.; Lacarrere, D.; Lafferty, G.; Lai, A.; Lambert, D.; Lanfranchi, G.; Langenbruch, C.; Langhans, B.; Latham, T.; Lazzeroni, C.; Le Gac, R.; van Leerdam, J.; Lees, J-P; Lefevre, R.; Leflat, A.; Lefrancois, J.; Leroy, O.; Lesiak, T.; Leverington, B.; Li, Y.; Likhomanenko, T.; Liles, M.; Lindner, R.; Linn, C.; Lionetto, F.; Liu, B.; Liu, X.; Loh, D.; Longstaff, I.; Lopes, J. H.; Lucchesi, D.; Lucio Martinez, M.; Luo, H.; Lupato, A.; Luppi, E.; Lupton, O.; Lusiani, A.; Machefert, F.; Maciuc, F.; Maev, O.; Maguire, K.; Malde, S.; Malinin, A.; Manca, G.; Mancinelli, G.; Manning, P.; Mapelli, A.; Maratas, J.; Marchand, J. F.; Marconi, U.; Marin Benito, C.; Marino, P.; Marks, J.; Martellotti, G.; Martin, M.; Martinelli, M.; Martinez Santos, D.; Martinez Vidal, F.; Martins Tostes, D.; Massafferri, A.; Matev, R.; Mathad, A.; Mathe, Z.; Matteuzzi, C.; Mauri, A.; Maurin, B.; Mazurov, A.; McCann, M.; McCarthy, J.; McNab, A.; McNulty, R.; Meadows, B.; Meier, F.; Meissner, M.; Melnychuk, D.; Merk, M.; Michielin, E.; Milanes, D. A.; Minard, M-N; Mitzel, D. S.; Molina Rodriguez, J.; Monroy, I. 
A.; Monteil, S.; Morandin, M.; Morawski, P.; Morda, A.; Morello, M. J.; Moron, J.; Morris, A. B.; Mountain, R.; Muheim, F.; Mueller, D.; Mueller, J.; Mueller, K.; Mueller, V.; Mussini, M.; Muster, B.; Naik, P.; Nakada, T.; Nandakumar, R.; Nandi, A.; Nasteva, I.; Needham, M.; Neri, N.; Neubert, S.; Neufeld, N.; Neuner, M.; Nguyen, A. D.; Nguyen, T. D.; Nguyen-Mau, C.; Niess, V.; Niet, R.; Nikitin, N.; Nikodem, T.; Ninci, D.; Novoselo, A.; O'Hanlon, D. P.; Oblakowska-Mucha, A.; Obraztsov, V.; Ogilvy, S.; Okhrimenko, O.; Oldeman, R.; Onderwater, C. J. G.; Osorio Rodrigues, B.; Otalora Goicochea, J. M.; Otto, A.; Owen, P.; Oyanguren, A.; Palano, A.; Palombo, F.; Palutan, M.; Panman, J.; Papanestis, A.; Pappagallo, M.; Pappalardo, L. L.; Pappenheimer, C.; Parkes, C.; Passaleva, G.; Patel, G. D.; Patel, M.; Patrignani, C.; Pearce, A.; Pellegrino, A.; Penso, G.; Altarelli, M. Pepe; Perazzini, S.; Perret, P.; Pescatore, L.; Petridis, K.; Petrolini, A.; Petruzzo, M.; Picatoste Olloqui, E.; Pietrzyk, B.; Pilar, T.; Pinci, D.; Pistone, A.; Piucci, A.; Playfer, S.; Plo Casasus, M.; Poikela, T.; Polci, F.; Poluektov, A.; Polyakov, I.; Polycarpo, E.; Popov, A.; Popov, D.; Popovici, B.; Potterat, C.; Price, E.; Price, J. D.; Prisciandaro, J.; Pritchard, A.; Prouve, C.; Pugatch, V.; Navarro, A. Puig; Punzi, G.; Qian, W.; Quagliani, R.; Rachwal, B.; Rademacker, J. H.; Rama, M.; Rangel, M. S.; Raniuk, I.; Rauschmayr, N.; Raven, G.; Redi, F.; Reichert, S.; Reid, M. M.; dos Reis, A. C.; Ricciardi, S.; Richards, S.; Rihl, M.; Rinnert, K.; Rives Molina, V.; Robbe, P.; Rodrigues, A. B.; Rodrigues, E.; Rodriguez Lopez, J. A.; Perez, P. Rodriguez; Roiser, S.; Romanovsky, V.; Romero Vidal, A.; Ronayne, J. W.; Rotondo, M.; Rouvinet, J.; Ruf, T.; Ruiz Valls, P.; Saborido Silva, J. 
J.; Sagidova, N.; Sail, P.; Saitta, B.; Salustino Guimaraes, V.; Sanchez Mayordomo, C.; Sanmartin Sedes, B.; Santacesaria, R.; Santamarina Rios, C.; Santimaria, M.; Santovetti, E.; Sarti, A.; Satriano, C.; Satta, A.; Saunders, D. M.; Savrina, D.; Schiller, M.; Schindler, H.; Schlupp, M.; Schmelling, M.; Schmelzer, T.; Schmidt, B.; Schneider, O.; Schopper, A.; Schubiger, M.; Schune, M-H; Schwemmer, R.; Sciascia, B.; Sciubba, A.; Semennikov, A.; Serra, N.; Serrano, J.; Sestini, L.; Seyfert, P.; Shapkin, M.; Shapoval, I.; Shcheglov, Y.; Shears, T.; Shekhtman, L.; Shevchenko, V.; Shires, A.; Siddi, B. G.; Coutinho, R. Silva; Silva de Oliveira, L.; Simi, G.; Sirendi, M.; Skidmore, N.; Skillicorn, I.; Skwarnicki, T.; Smith, E.; Smith, E.; Smith, I. T.; Smith, J.; Smith, M.; Snoek, H.; Sokoloff, M. D.; Soler, F. J. P.; Soomro, F.; Souza, D.; Souza De Paula, B.; Spaan, B.; Spradlin, P.; Sridharan, S.; Stagni, F.; Stahl, M.; Stahl, S.; Stefkova, S.; Steinkamp, O.; Stenyakin, O.; Stevenson, S.; Stoica, S.; Stone, S.; Storaci, B.; Stracka, S.; Straticiuc, M.; Straumann, U.; Sun, L.; Sutcliffe, W.; Swientek, K.; Swientek, S.; Syropoulos, V.; Szczekowski, M.; Szczypka, P.; Szumlak, T.; T'Jampens, S.; Tayduganov, A.; Tekampe, T.; Teklishyn, M.; Tellarini, G.; Teubert, F.; Thomas, C.; Thomas, E.; van Tilburg, J.; Tisserand, V.; Tobin, M.; Todd, J.; Tolk, S.; Tomassetti, L.; Tonelli, D.; Topp-Joergensen, S.; Torr, N.; Tournefier, E.; Tourneur, S.; Trabelsi, K.; Tran, M. T.; Tresch, M.; Trisovic, A.; Tsaregorodtsev, A.; Tsopelas, P.; Tuning, N.; Ukleja, A.; Ustyuzhanin, A.; Uwer, U.; Vacca, C.; Vagnoni, V.; Valenti, G.; Vallier, A.; Gomez, R. Vazquez; Vazquez Regueiro, P.; Vazquez Sierra, C.; Vecchi, S.; Velthuis, J. J.; Veltri, M.; Veneziano, G.; Vesterinen, M.; Viaud, B.; Vieira, D.; Vieites Diaz, M.; Vilasis-Cardona, X.; Volkov, V.; Vollhardt, A.; Volyanskyy, D.; Voong, D.; Vorobyev, A.; Vorobyev, V.; Voss, C.; de Vries, J. 
A.; Waldi, R.; Wallace, C.; Wallace, R.; Walsh, J.; Wandernoth, S.; Wang, J.; Ward, D. R.; Watson, N. K.; Websdale, D.; Weiden, A.; Whitehead, M.; Wilkinson, G.; Wilkinson, M.; Williams, M.; Williams, M. P.; Williams, M.; Williams, T.; Wilson, F. F.; Wimberley, J.; Wishahi, J.; Wislicki, W.; Witek, M.; Wormser, G.; Wotton, S. A.; Wright, S.; Wyllie, K.; Xie, Y.; Xu, Z.; Yang, Z.; Yu, J.; Yuan, X.; Yushchenko, O.; Zangoli, M.; Zavertyaev, M.; Zhang, L.; Zhang, Y.; Zhelezov, A.; Zhokhov, A.; Zhong, L.; Zucchelli, S.
2015-01-01
The time-integrated CP asymmetry in the decay D0 -> KS0 KS0 is measured using 3 fb(-1) of proton-proton collision data collected by the LHCb experiment at centre-of-mass energies of 7 and 8 TeV. The flavour of the D0 meson is determined by use of the decay D*(+) -> D0 pi(+) and its charge
Žarić, G.; Fraga González, G.; Tijms, J.; van der Molen, M.W.; Blomert, L.; Bonte, M.
2015-01-01
A failure to build solid letter-speech sound associations may contribute to reading impairments in developmental dyslexia. Whether this reduced neural integration of letters and speech sounds changes over time within individual children and how this relates to behavioral gains in reading skills
Advances and Challenges in Time-Integration of PDE’s
2003-08-20
methods have been noted for their superior performance, especially for long time integration. On the other hand, it has long been known that conservative... We present implicit-explicit
Principles of 5D modeling, full integration of 3D space, time and scale
Van Oosterom, P.; Stoter, J.
2012-01-01
This paper proposes an approach for data modelling in five dimensions. Apart from three dimensions for geometrical representation and a fourth dimension for time, we identify scale as fifth dimensional characteristic. Considering scale as an extra dimension of geographic information, fully integrate
Time-integration methods for finite element discretisations of the second-order Maxwell equation
Sármány, D.; Botchev, M.A.; Vegt, van der J.J.W.
2012-01-01
This article deals with time integration for the second-order Maxwell equations with possibly non-zero conductivity in the context of the discontinuous Galerkin finite element method (DG-FEM) and the $H(\\mathrm{curl})$-conforming FEM. For the spatial discretisation, hierarchic $H(\\mathrm{curl})$-conf
Effects of attitude dissimilarity and time on social integration : A longitudinal panel study
Van der Vegt, GS
2002-01-01
A longitudinal panel study in 25 work groups of elementary school teachers examined the effect of attitudinal dissimilarity and time on social integration across a 9-month period. In line with the prediction based on both the similarity-attraction approach and social identity theory, cross-lagged re
Fully integrated monolithic optoelectronic transducer for real-time protein and DNA detection
Misiakos, Konstantinos; Petrou, Panagiota S.; Kakabakos, Sotirios E.
2010-01-01
The development and testing of a portable bioanalytical device capable of real-time monitoring of binding assays was demonstrated. The device was based on arrays of nine optoelectronic transducers monolithically integrated on silicon chips. The optocouplers consisted of nine silicon av...
A Comparison of Constant Time Delay Instruction with High and Low Treatment Integrity
Tekin Iftar, Elif; Kurt, Onur; Cetin, Ozlem
2011-01-01
The time delay (TD) procedure is effective for teaching various skills to children with developmental disabilities. Moreover, research has shown that it is used with high treatment integrity (HTI). However, several barriers may prevent delivering instruction with HTI. Therefore, this study was designed to compare the…
Long-time asymptotics for the defocusing integrable discrete nonlinear Schrödinger equation
YAMANE, HIDESHI
2011-01-01
We investigate the long-time asymptotics for the defocusing integrable discrete nonlinear Schrödinger equation by means of the Deift-Zhou nonlinear steepest descent method. The leading term is a sum of two terms that oscillate with decay of order $t^{-1/2}$.
Long-time asymptotics for the defocusing integrable discrete nonlinear Schrödinger equation
YAMANE, HIDESHI
2014-01-01
We investigate the long-time asymptotics for the defocusing integrable discrete nonlinear Schrödinger equation of Ablowitz-Ladik by means of the inverse scattering transform and the Deift-Zhou nonlinear steepest descent method. The leading part is a sum of two terms that oscillate with decay of order $t^{-1/2}$.
A new time integration scheme for shock propagation in saturated soil
Weerheijm, J.; Sluys, L.J.
2010-01-01
This paper presents a time integration scheme capable of simulating blast loading causing high-frequency wave propagation in porous media using coarse meshes. The scheme is based on the partition of unity method. The discontinuity is imposed on the velocity field, while the displacement field is kept
Time-integrating acousto-optic correlator for wideband random noise radar
Kim, Sangtaek; Narayanan, Ram; Zhou, Wei; Wagner, Kelvin
2004-10-01
A time-integrating acousto-optic correlator (TIAOC) is a good candidate for imaging and target detection using a wideband random noise radar system. We have developed such a correlator for a random noise radar with a signal frequency range of 1-2 GHz. This system has demonstrated good wideband signal correlation performance with good dynamic range and fine tuning of delays.
Time-minimal control of dissipative two-level quantum systems: the integrable case
Bonnard, B
2008-01-01
The objective of this article is to apply recent developments in geometric optimal control to analyze the time minimum control problem of dissipative two-level quantum systems whose dynamics is governed by the Lindblad equation. We focus our analysis on the case where the extremal Hamiltonian is integrable.
Integration of real-time 3D image acquisition and multiview 3D display
Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Li, Wei; Wang, Jingyi; Liu, Yongchun
2014-03-01
Seamless integration of 3D acquisition and 3D display systems offers an enhanced experience in 3D visualization of real-world objects or scenes. The vivid representation of captured 3D objects displayed on a glasses-free 3D display screen could bring a realistic viewing experience to viewers, as if they were viewing a real-world scene. Although technologies in 3D acquisition and 3D display have advanced rapidly in recent years, little effort has gone into studying the seamless integration of these two different aspects of 3D technologies. In this paper, we describe our recent progress on integrating a light-field 3D acquisition system and an autostereoscopic multiview 3D display for real-time light field capture and display. This paper focuses on both the architecture design and the implementation of the hardware and software of this integrated 3D system. A prototype of the integrated 3D system is built to demonstrate the real-time 3D acquisition and 3D display capability of our proposed system.
Hartcher-O'Brien, Jess; Di Luca, Massimiliano; Ernst, Marc O.
2014-01-01
Often multisensory information is integrated in a statistically optimal fashion where each sensory source is weighted according to its precision. This integration scheme is statistically optimal because it theoretically results in unbiased perceptual estimates with the highest precision possible. There is a current lack of consensus about how the nervous system processes multiple sensory cues to elapsed time. In order to shed light upon this, we adopt a computational approach to pinpoint the integration strategy underlying duration estimation of audio/visual stimuli. One of the assumptions of our computational approach is that the multisensory signals redundantly specify the same stimulus property. Our results clearly show that despite claims to the contrary, perceived duration is the result of an optimal weighting process, similar to that adopted for estimates of space. That is, participants weight the audio and visual information to arrive at the most precise, single duration estimate possible. The work also disentangles how different integration strategies (i.e., considering the time of onset/offset of signals) might alter the final estimate. As such we provide the first concrete evidence of an optimal integration strategy in human duration estimates. PMID:24594578
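The reliability-weighted averaging described above can be sketched in a few lines: under the standard maximum-likelihood cue-combination model, each unbiased estimate is weighted by its inverse variance, and the fused estimate is more precise than either cue alone. The numbers below are illustrative, not data from the study.

```python
# Minimal sketch of statistically optimal (maximum-likelihood) cue
# integration: each unbiased sensory estimate is weighted by its
# inverse variance, and the fused variance is never larger than the
# most reliable single cue's variance. Numbers are illustrative only.

def integrate_cues(estimates, variances):
    """Fuse independent Gaussian cue estimates by inverse-variance weighting."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    fused_mean = sum(w * e for w, e in zip(weights, estimates)) / total
    fused_variance = 1.0 / total
    return fused_mean, fused_variance

# Hypothetical auditory and visual duration estimates (ms); the audio
# cue is assumed four times more reliable than the visual cue.
mean, var = integrate_cues([500.0, 530.0], [100.0, 400.0])
print(mean, var)  # fused estimate lies closer to the more reliable 500 ms cue
```

Note the fused variance (80) is below the best single-cue variance (100), which is the behavioral signature the study tests for.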
Bohlen, Thomas; Wittkamp, Florian
2016-03-01
We analyse the performance of a higher order accurate staggered viscoelastic time-domain finite-difference method, in which the staggered Adams-Bashforth (ABS) third-order and fourth-order accurate time integrators are used for temporal discretization. ABS is a multistep method that uses previously calculated wavefields to increase the order of accuracy in time. The analysis shows that the numerical dispersion is much lower than that of the widely used second-order leapfrog method. Numerical dissipation is introduced by the ABS method which is significantly smaller for fourth-order than third-order accuracy. In 1-D and 3-D simulation experiments, we verify the convincing improvements of simulation accuracy of the fourth-order ABS method. In a realistic elastic 3-D scenario, the computing time reduces by a factor of approximately 2.4, whereas the memory requirements increase by approximately a factor of 2.2. The ABS method thus provides an alternative strategy to increase the simulation accuracy in time by investing computer memory instead of computing time.
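As a minimal illustration of the multistep idea (not the authors' viscoelastic finite-difference code), the third-order Adams-Bashforth update reuses the two previously computed right-hand sides to raise the temporal order of accuracy; a scalar test problem keeps the sketch self-contained.

```python
import math

# Third-order Adams-Bashforth (AB3) time integrator for dy/dt = f(t, y).
# The first two steps are bootstrapped with Heun's second-order method,
# after which each step combines the current and two stored f-values.

def ab3(f, y0, t0, h, n_steps):
    t, y = t0, y0
    fs = []        # history of f evaluations, newest last
    ys = [y]
    for _ in range(n_steps):
        fn = f(t, y)
        if len(fs) < 2:   # bootstrap step (Heun / explicit trapezoid)
            y_pred = y + h * fn
            y = y + 0.5 * h * (fn + f(t + h, y_pred))
        else:             # AB3 step using the stored history
            y = y + h / 12.0 * (23.0 * fn - 16.0 * fs[-1] + 5.0 * fs[-2])
        fs.append(fn)
        t += h
        ys.append(y)
    return ys

# Decay test problem dy/dt = -y with exact solution exp(-t).
ys = ab3(lambda t, y: -y, 1.0, 0.0, 0.01, 100)
print(abs(ys[-1] - math.exp(-1.0)))  # small global error at t = 1
```

This is the generic scalar form; the paper applies the same multistep update to discretized elastic wavefields, trading extra stored wavefields (memory) for accuracy.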
Beghein, Yves
2013-03-01
The time domain combined field integral equation (TD-CFIE), which is constructed from a weighted sum of the time domain electric and magnetic field integral equations (TD-EFIE and TD-MFIE) for analyzing transient scattering from closed perfect electrically conducting bodies, is free from spurious resonances. The standard marching-on-in-time technique for discretizing the TD-CFIE uses Galerkin and collocation schemes in space and time, respectively. Unfortunately, the standard scheme is theoretically not well understood: stability and convergence have been proven for only one class of space-time Galerkin discretizations. Moreover, existing discretization schemes are nonconforming, i.e., the TD-MFIE contribution is tested with divergence conforming functions instead of curl conforming functions. We therefore introduce a novel space-time mixed Galerkin discretization for the TD-CFIE. A family of temporal basis and testing functions with arbitrary order is introduced. It is explained how the corresponding interactions can be computed efficiently by existing collocation-in-time codes. The spatial mixed discretization is made fully conforming and consistent by leveraging both Rao-Wilton-Glisson and Buffa-Christiansen basis functions and by applying the appropriate bi-orthogonalization procedures. The combination of both techniques is essential when high accuracy over a broad frequency band is required. © 2012 IEEE.
Terahertz time domain attenuated total reflection spectroscopy with an integrated prism system.
Nakanishi, Atsushi; Kawada, Yoichi; Yasuda, Takashi; Akiyama, Koichiro; Takahashi, Hironori
2012-03-01
We demonstrated attenuated total reflection (ATR) spectroscopy with an integrated prism system that included a terahertz emitter, a terahertz receiver, and an ATR prism. The ATR prism had two internal off-axis parabolic mirrors for, respectively, collimating and focusing the terahertz waves. The Fresnel loss at each interface was reduced, and the total propagation efficiency was 3.36 times larger than when using a non-integrated prism system. The refractive index of water samples calculated from the experimental data showed good agreement with values reported in the literature.
Romá, Federico; Cugliandolo, Leticia F; Lozano, Gustavo S
2014-08-01
We introduce a numerical method to integrate the stochastic Landau-Lifshitz-Gilbert equation in spherical coordinates for generic discretization schemes. This method conserves the magnetization modulus and ensures the approach to equilibrium under the expected conditions. We test the algorithm on a benchmark problem: the dynamics of a uniformly magnetized ellipsoid. We investigate the influence of various parameters, and in particular, we analyze the efficiency of the numerical integration, in terms of the number of steps needed to reach a chosen long time with a given accuracy.
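A zero-temperature sketch shows why integrating in spherical coordinates conserves the magnetization modulus by construction: only the angles evolve, so |m| = 1 identically. The field direction, sign convention, and parameter values below are assumptions of this illustration, not the paper's stochastic scheme.

```python
import math

# Deterministic Landau-Lifshitz-Gilbert dynamics in spherical coordinates
# for a field along +z: phi precesses, theta relaxes toward the field.
# Because only (theta, phi) are integrated, |m| = 1 holds exactly.

def llg_spherical(theta0, phi0, H, gamma, alpha, dt, n_steps):
    pref = gamma * H / (1.0 + alpha ** 2)
    theta, phi = theta0, phi0
    for _ in range(n_steps):
        theta += dt * (-alpha * pref * math.sin(theta))  # damping toward +z
        phi += dt * (-pref)                              # precession about z
    return theta, phi

theta, phi = llg_spherical(1.0, 0.0, H=1.0, gamma=1.0, alpha=0.1,
                           dt=0.01, n_steps=1000)
m = (math.sin(theta) * math.cos(phi),
     math.sin(theta) * math.sin(phi),
     math.cos(theta))
print(theta)                    # smaller than the initial angle: relaxation
print(sum(c * c for c in m))   # unit modulus up to round-off
```

In a Cartesian update the modulus must be renormalized or constrained; here conservation is automatic, which is the motivation the abstract states.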
Lenovo-IBM: Bridging Cultures, Languages, and Time Zones Integration Challenges (B)
Stahl, Günter; Köster, Kathrin
2013-01-01
The focus of this case lies in the post-merger integration issues that Lenovo had to master in order to extract full value as well as synergies from its acquisition. The time span analyzed is from the merger until approximately one year after. The case describes the "Best of Both Worlds" integration approach adopted by Lenovo and the top management team's attempts to set aside egos and learn from each other, as well as to make decisions that are in the best interest of the new company, e.g., ...
Jagacinski, Richard J; Rizzi, Emanuele; Kim, Tae Hoon; Lavender, Steven A; Speller, Lassiter F; Klapp, Stuart T
2016-11-01
Skilled drummers performed a 4:3:2 polyrhythm with 2 hands and 1 foot. For each pair of limbs patterns of temporal covariation were used to infer relatively independent parallel streams versus integrated timing relationships. Parallel timing was more prevalent between hand and foot than between the 2 hands, and parallel timing generally increased with tapping rate. Different combinations of integrated and parallel timing were found among the 3 limbs. A second experiment used a wider range of tapping rates and explored 3:2 tapping with 2 hands, 2 feet, or hand and foot. The latter 2 limb pairs resulted in greater prevalence of parallel timing. These results can be interpreted in terms of a Gestalt principle of grouping known as Korte's Third Law, which can be extended from the perceptual domain to the perceptual-motor domain. This principle indicates that perceived velocity is a key factor in determining whether a sequence of events is represented as a single integrated pattern or as multiple parallel patterns. The present results put disparate previous findings on bimanual polyrhythmic tapping and rhythmic aspects of the golf swing under a common theoretical perspective. (PsycINFO Database Record
Tuning of IMC based PID controllers for integrating systems with time delay.
Kumar, D B Santosh; Padma Sree, R
2016-07-01
Design of Proportional Integral and Derivative (PID) controllers based on IMC principles for various types of integrating systems with time delay is proposed. PID parameters are given in terms of process model parameters and a tuning parameter. The tuning parameter is IMC filter time constant. In the present work, the IMC filter (Q) is chosen in such a manner that the order of the denominator of IMC controller is one less than the order of the numerator. The IMC filter time constant (λ) is tuned in such a way that a good compromise is made between performance and robustness for both servo and regulatory problems. To improve servo response of the controller a set point filter is designed such that the closed loop response is similar to that of first order plus time delay system. The proposed controller design method is applied to various transfer function models and to the non-linear model equations of jacketed CSTR to demonstrate its applicability and effectiveness. The performance of the proposed controller is compared with the recently reported methods in terms of IAE and ITAE. The smooth functioning of the controller is determined in terms of total variation and compared with recently reported methods. Simulation studies are carried out on various integrating systems with time delay to show the effectiveness and superiority of the proposed controllers.
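The IAE and ITAE indices used above for comparison are simple integrals of the closed-loop error signal; a rectangle-rule sketch with an invented error trace looks like this.

```python
import math

# IAE  = integral of |e(t)| dt
# ITAE = integral of t * |e(t)| dt
# approximated here by a left-endpoint rectangle rule on sampled data.

def iae_itae(errors, dt):
    """Approximate IAE and ITAE from error samples e(k*dt)."""
    iae = sum(abs(e) for e in errors) * dt
    itae = sum(k * dt * abs(e) for k, e in enumerate(errors)) * dt
    return iae, itae

# Made-up decaying step-response error e(t) = exp(-t), sampled at dt = 0.01.
dt = 0.01
errors = [math.exp(-k * dt) for k in range(1000)]
iae, itae = iae_itae(errors, dt)
print(iae, itae)  # both close to the analytic value 1.0 on [0, inf)
```

A smaller ITAE penalizes errors that persist late in the response, which is why the abstract reports both indices rather than IAE alone.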
Goličnik, Marko
2011-04-15
Various explicit reformulations of time-dependent solutions for the classical two-step irreversible Michaelis-Menten enzyme reaction model have been described recently. In the current study, I present further improvements in terms of a generalized integrated form of the Michaelis-Menten equation for computation of substrate or product concentrations as functions of time for more real-world, enzyme-catalyzed reactions affected by the product. The explicit equations presented here can be considered as a simpler and useful alternative to the exact solution for the generalized integrated Michaelis-Menten equation when fitted to time course data using standard curve-fitting software. Copyright © 2011 Elsevier Inc. All rights reserved.
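For the classical product-free case mentioned above, the integrated Michaelis-Menten equation has a well-known closed form in terms of the Lambert W function. The sketch below implements that baseline case only (not the paper's generalized, product-dependent equations), with invented parameter values.

```python
import math

# Classical integrated Michaelis-Menten equation in closed form:
#   S(t) = Km * W((S0/Km) * exp((S0 - Vmax*t)/Km))
# where W is the principal branch of the Lambert W function,
# implemented here by Newton iteration to keep the sketch stdlib-only.

def lambert_w(x, tol=1e-12):
    """Principal branch of Lambert W for x > 0 via Newton iteration."""
    w = math.log(1.0 + x)  # reasonable starting guess for x > 0
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

def substrate(t, s0, km, vmax):
    """Substrate concentration at time t for the classical MM reaction."""
    return km * lambert_w((s0 / km) * math.exp((s0 - vmax * t) / km))

s = substrate(t=3.0, s0=5.0, km=2.0, vmax=1.0)
# The result satisfies the implicit form Km*ln(S0/S) + (S0 - S) = Vmax*t.
print(2.0 * math.log(5.0 / s) + (5.0 - s))  # approximately 3.0
```

This explicit form is what makes direct curve-fitting of time-course data practical, which is the use case the abstract targets.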
On the local integrability of almost-product structures defined by space-time metrics
Delphenich, D H
2016-01-01
The splitting of the tangent bundle of space-time into temporal and spatial sub-bundles defines an almost-product structure. In particular, any space-time metric can be locally expressed in time-orthogonal form, in such a way that whether or not that almost-product structure is locally generated by a coordinate chart is a matter of the integrability of the Pfaff equation that the temporal 1-form of that expression for the metric defines. When one applies that analysis to the known exact solutions to the Einstein field equations, one finds that many of the common ones are completely-integrable, although some of the physically-interesting ones are not.
Numerical Time Integration Methods for a Point Absorber Wave Energy Converter
Zurkinden, Andrew Stephen; Kramer, Morten
2012-01-01
interaction (small deformations of the fluid surface and body), inviscid incompressible, irrotational flow and a linearized Euler-Bernoulli formulation of the fluid pressure. The time-domain analysis of a floating structure involves the calculation of a convolution integral between the impulse response function of the radiation force and the unknown body velocity due to an external force. The convolution integral can be seen as a memory effect where the system response in the past affects the response in the future. Two different time-domain models will be presented. The first one is based …-space model is advantageous regarding the computational effort and the robustness of the solver. Another important feature is the linear time-invariance of the system. In a next step the influence of the nonlinear hydrostatic behavior of the float is investigated by using a simplified formulation.
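The convolution-integral memory term described above can be sketched with a discrete rectangle rule; the kernel and velocity below are made-up illustrative signals, not hydrodynamic data.

```python
import math

# Radiation-force "memory" term: the convolution of the impulse
# response function k(t) with the past body velocity v(t),
#   F(t) = integral_0^t k(tau) * v(t - tau) dtau,
# evaluated at the latest sample by a rectangle rule.

def radiation_force(kernel, velocity, dt):
    """Discrete convolution at the last velocity sample."""
    n = len(velocity) - 1
    return sum(kernel[m] * velocity[n - m]
               for m in range(min(len(kernel), n + 1))) * dt

dt = 0.05
kernel = [math.exp(-m * dt) for m in range(100)]       # decaying memory kernel
velocity = [math.sin(0.5 * n * dt) for n in range(200)]  # past body velocity
F = radiation_force(kernel, velocity, dt)
print(F)
```

Because this sum over the full history must be re-evaluated at every time step, replacing it with a state-space approximation (as the abstract describes) reduces the computational effort considerably.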
Feature integration with random forests for real-time human activity recognition
Kataoka, Hirokatsu; Hashimoto, Kiyoshi; Aoki, Yoshimitsu
2015-02-01
This paper presents an approach for real-time human activity recognition. Three different kinds of features (flow, shape, and a keypoint-based feature) are applied in activity recognition. We use random forests for feature integration and activity classification. A forest is created for each feature and performs as a weak classifier. The International Classification of Functioning, Disability and Health (ICF) proposed by the WHO is applied in order to set a novel definition in activity recognition. Experiments on human activity recognition using the proposed framework show 99.2% (Weizmann action dataset), 95.5% (KTH human actions dataset), and 54.6% (UCF50 dataset) recognition accuracy at real-time processing speed. The feature integration and activity-class definition allow us to achieve high-accuracy recognition that matches the state of the art in real time.
Path integral approach to the pricing of timer options with the Duru-Kleinert time transformation.
Liang, L Z J; Lemmens, D; Tempere, J
2011-05-01
In this paper, a time substitution as used by Duru and Kleinert in their treatment of the hydrogen atom with path integrals is performed to price timer options under stochastic volatility models. We present general pricing formulas for both perpetual timer call options and finite time-horizon timer call options. These general results allow us to find closed-form pricing formulas for both the perpetual and the finite time-horizon timer options under the 3/2 stochastic volatility model as well as under the Heston stochastic volatility model. For the treatment of timer options under the 3/2 model we rely on the path integral for the Morse potential; for the Heston model we rely on the Kratzer potential. © 2011 American Physical Society
Note: Fully integrated 3.2 Gbps quantum random number generator with real-time extraction
Zhang, Xiao-Guang; Nie, You-Qi; Zhou, Hongyi; Liang, Hao; Ma, Xiongfeng; Zhang, Jun; Pan, Jian-Wei
2016-07-01
We present a real-time and fully integrated quantum random number generator (QRNG) based on measuring laser phase fluctuations. The QRNG scheme based on laser phase fluctuations is notable for its capability of generating ultra-high-speed random numbers. However, the speed bottleneck of a practical QRNG lies in the limited speed of randomness extraction. To close the gap between fast randomness generation and slow post-processing, we propose a pipelined extraction algorithm based on Toeplitz matrix hashing and implement it in a high-speed field-programmable gate array. Further, all the QRNG components are integrated into a module, including a compact and actively stabilized interferometer, high-speed data acquisition, and real-time data post-processing and transmission. The final generation rate of the QRNG module with real-time extraction can reach 3.2 Gbps.
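Toeplitz-matrix hashing, the extraction step named above, can be illustrated at toy scale: an n_out x n_in binary Toeplitz matrix is fully specified by n_out + n_in - 1 seed bits and applied as a matrix-vector product over GF(2). Sizes and seeds here are tiny and illustrative; a real QRNG uses large blocks and an FPGA (or FFT-based) implementation for throughput.

```python
import random

# Toeplitz hashing: entry (i, j) of the matrix is seed_bits[i - j + n_in - 1],
# so all entries on a diagonal are equal. Extraction XORs the seed-selected
# raw bits together for each output row.

def toeplitz_hash(raw_bits, seed_bits, n_out):
    """Compress raw_bits (length n_in) to n_out bits with a Toeplitz matrix."""
    n_in = len(raw_bits)
    assert len(seed_bits) == n_out + n_in - 1
    out = []
    for i in range(n_out):
        acc = 0
        for j in range(n_in):
            acc ^= seed_bits[i - j + n_in - 1] & raw_bits[j]
        out.append(acc)
    return out

rng = random.Random(42)
raw = [rng.randint(0, 1) for _ in range(64)]            # raw source bits
seed = [rng.randint(0, 1) for _ in range(16 + 64 - 1)]  # public Toeplitz seed
key = toeplitz_hash(raw, seed, n_out=16)
print(len(key))  # 16 extracted bits
```

The compression ratio (here 64 -> 16) is set by the entropy estimate of the raw source; the pipelining in the paper addresses making exactly this product fast enough to keep up with generation.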
Wang, Dong; Liu, Tao; Sun, Ximing; Zhong, Chongquan
2016-07-01
A discrete-time domain two-degree-of-freedom (2DOF) design method is proposed for integrating and unstable processes with time delay. Based on a 2DOF control structure recently developed, one controller is analytically designed in terms of the H2 optimal control performance specification for set-point tracking, and another controller is derived by prescribing the desired closed-loop transfer function for load disturbance rejection. Both controllers can be tuned relatively independently to realize control optimization. An analytical expression of the set-point response is given for quantitatively tuning the single adjustable parameter in the set-point tracking controller. Meanwhile, sufficient and necessary conditions for holding robust stability of the closed-loop control system are established for tuning the adjustable parameter in the disturbance rejection controller, along with numerical tuning guidelines. Illustrative examples from the literature are used to demonstrate the effectiveness of the proposed method.
Molecular radiotherapy: The NUKFIT software for calculating the time-integrated activity coefficient
Kletting, P.; Schimmel, S.; Luster, M. [Klinik für Nuklearmedizin, Universität Ulm, Ulm 89081 (Germany); Kestler, H. A. [Research Group Bioinformatics and Systems Biology, Institut für Neuroinformatik, Universität Ulm, Ulm 89081 (Germany); Hänscheid, H.; Fernández, M.; Lassmann, M. [Klinik für Nuklearmedizin, Universität Würzburg, Würzburg 97080 (Germany); Bröer, J. H.; Nosske, D. [Bundesamt für Strahlenschutz, Fachbereich Strahlenschutz und Gesundheit, Oberschleißheim 85764 (Germany); Glatting, G. [Medical Radiation Physics/Radiation Protection, Medical Faculty Mannheim, Heidelberg University, Mannheim 68167 (Germany)
2013-10-15
Purpose: Calculation of the time-integrated activity coefficient (residence time) is a crucial step in dosimetry for molecular radiotherapy. However, available software is deficient in that it is either not tailored for use in molecular radiotherapy and/or does not include all required estimation methods. The aim of this work was therefore the development and programming of an algorithm which allows for an objective and reproducible determination of the time-integrated activity coefficient and its standard error. Methods: The algorithm includes the selection of a set of fitting functions from predefined sums of exponentials and the choice of an error model for the data used. To estimate the values of the adjustable parameters, an objective function, depending on the data, the parameters of the error model, the fitting function and (if required and available) Bayesian information, is minimized. To increase reproducibility and user-friendliness, the starting values are automatically determined using a combination of curve stripping and random search. Visual inspection, the coefficient of determination, the standard error of the fitted parameters, and the correlation matrix are provided to evaluate the quality of the fit. The functions which are most supported by the data are determined using the corrected Akaike information criterion. The time-integrated activity coefficient is estimated by analytically integrating the fitted functions. Its standard error is determined assuming Gaussian error propagation. The software was implemented using MATLAB. Results: To validate the proper implementation of the objective function and the fit functions, the results of NUKFIT and SAAM numerical, a commercially available software tool, were compared. The automatic search for starting values was successfully tested for reproducibility. The quality criteria applied in conjunction with the Akaike information criterion allowed the selection of suitable functions. Function fit...
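As a rough illustration of the fit-then-integrate step (not NUKFIT's actual code, and with invented time-activity data), one can fit a mono-exponential and integrate it analytically over [0, ∞); the standard error then follows by Gaussian error propagation through the parameter covariance:

```python
import numpy as np
from scipy.optimize import curve_fit

# Mono-exponential model A(t) = a * exp(-lam * t); its analytic integral
# over [0, inf) -- the time-integrated activity coefficient -- is a / lam.
def model(t, a, lam):
    return a * np.exp(-lam * t)

t = np.array([1.0, 4.0, 24.0, 48.0, 96.0])    # h, hypothetical sampling times
y = np.array([0.80, 0.71, 0.42, 0.22, 0.06])  # fraction of administered activity
popt, pcov = curve_fit(model, t, y, p0=[1.0, 0.02])
a, lam = popt
tiac = a / lam                                 # time-integrated activity coefficient
# Gaussian error propagation for f = a / lam via the gradient and covariance
grad = np.array([1.0 / lam, -a / lam**2])
tiac_se = float(np.sqrt(grad @ pcov @ grad))
```

With several candidate sums of exponentials, the same pattern would be repeated per candidate and the corrected Akaike information criterion used to rank the fits, as the abstract describes.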
Al Jarro, Ahmed
2012-11-01
An explicit marching-on-in-time (MOT) scheme for solving the time domain volume integral equation is presented. The proposed method achieves its stability by employing, at each time step, a corrector scheme, which updates/corrects fields computed by the explicit predictor scheme. The proposed method is computationally more efficient when compared to the existing filtering techniques used for the stabilization of explicit MOT schemes. Numerical results presented in this paper demonstrate that the proposed method maintains its stability even when applied to the analysis of electromagnetic wave interactions with electrically large structures meshed using approximately half a million discretization elements.
Rosangela Alves de Mendonça
2012-12-01
Full Text Available PURPOSE: to measure the limits of the maximum phonation time before and after application of the Stemple and Gerdeman Vocal Function Exercise Program in elementary school teachers, with and without voice alterations, in the municipality of Niterói, RJ, Brazil. METHOD: 17 female teachers spontaneously agreed to take part in the exercise program, which consisted of the sustained vowel /i/, ascending and descending glides on the word /nol/, and the musical scale Do, Re, Mi, Fa, Sol produced on /ol/, each sustained for the maximum phonation time. The maximum phonation time was measured before and after the program using the vowel [ε], after each participant had undergone videolaryngostroboscopic examination. RESULTS: a marked gain in maximum phonation time was observed from pre- to post-exercise, supporting the value of a program that prioritizes performing the exercises with the longest possible phonation time. CONCLUSION: the Stemple and Gerdeman Vocal Function Exercise Program increased the within-subject maximum phonation time, enabling better vocal health in professional and social performance.
Mei, Hua; Li, Shaoyuan
2007-04-01
In order to identify multivariable processes with integrating factors in their transfer function matrices, a simple yet robust decentralized identification method based on closed-loop step tests is proposed. Using the frequency response matrix computed from the closed-loop system data and knowledge of the decentralized controller, the structural information of the multivariable integrating process is first determined, and then the continuous parametric model with dead times is approximated in a manner similar to the parameterization of an open-loop stable process. Computer simulations and an application to a 3 x 3 integrating multiple-tank water level system verify the validity of the proposed method even when the closed-loop system is affected by stochastic noise sources.
Lou, Kuo-Ren; Wang, Lu
2016-05-01
The seller frequently offers the buyer trade credit to settle the purchase amount. From the seller's perspective, granting trade credit increases not only the opportunity cost (i.e., the interest loss on the buyer's purchase amount during the credit period) but also the default risk (i.e., the risk that the buyer will be unable to pay off his/her debt obligations). On the other hand, granting trade credit increases sales volume and revenue. Consequently, trade credit is an important strategy for increasing the seller's profitability. In this paper, we assume that the seller uses trade credit and the number of shipments in a production run as decision variables to maximise his/her profit, while the buyer determines his/her replenishment cycle time and capital investment as decision variables to reduce his/her ordering cost and achieve his/her maximum profit. We then derive the non-cooperative Nash solution and the cooperative integrated solution in a just-in-time inventory system, in which granting trade credit increases not only the demand but also the opportunity cost and default risk, and the relationship between the capital investment and the ordering cost reduction is logarithmic. We then solve and compare these two distinct solutions numerically. Finally, we use sensitivity analysis to obtain some managerial insights.
Tao Jia
2014-01-01
Full Text Available We investigate an integrated inventory routing problem (IRP in which one supplier with limited production capacity distributes a single item to a set of retailers using homogeneous vehicles. In the objective function we consider a loading cost which is often neglected in previous research. Considering the deterioration in the products, we set a soft time window during the transportation stage and a hard time window during the sales stage, and to prevent jams and waiting cost, the time interval of two successive vehicles returning to the supplier’s facilities is required not to be overly short. Combining all of these factors, a two-echelon supply chain mixed integer programming model under discrete time is proposed, and a two-phase algorithm is developed. The first phase uses tabu search to obtain the retailers’ ordering matrix. The second phase is to generate production scheduling and distribution routing, adopting a saving algorithm and a neighbourhood search, respectively. Computational experiments are conducted to illustrate the effectiveness of the proposed model and algorithm.
On the use of exponential time integration methods in atmospheric models
Colm Clancy
2013-12-01
Full Text Available Exponential integration methods offer a highly accurate approach to the time integration of large systems of differential equations. In recent years, they have attracted increased attention in a number of diverse fields due to advances in their computational efficiency, largely as a result of the use of Krylov subspace methods for approximating the matrix exponentials which typically arise. In this work, we investigate the potential of exponential integration methods for use in atmospheric models. Two schemes are implemented in a shallow water model and tested against reference explicit and semi-implicit methods. In a number of experiments with standard test cases, the exponential methods are found to yield very accurate solutions with time-steps far longer than even the semi-implicit method allows. The relative efficiency of the exponential integrators, which depends mainly on the choice of the specific algorithm used for the calculation of the matrix exponent, is also discussed. Future work aimed at further improving the proposed methodology is outlined.
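A minimal illustration of an exponential-integrator step, assuming a 1D periodic diffusion operator rather than the shallow water model of the paper: the exact update u(t+Δt) = exp(ΔtA)u(t) is evaluated with SciPy's Krylov-style `expm_multiply`, with a time step far beyond the explicit stability limit.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import expm_multiply

# Semi-discrete diffusion u_t = A u on a periodic grid (invented parameters).
n, dt, nu = 200, 0.5, 1.0
h = 1.0 / n
A = (nu / h**2) * diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n))
A = A.tolil()
A[0, n - 1] = A[n - 1, 0] = nu / h**2   # periodic wrap-around entries
A = A.tocsc()

x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.sin(2 * np.pi * x)
# One exponential-integrator step: u(t + dt) = expm(dt * A) u(t),
# computed without ever forming the dense matrix exponential.
u_next = expm_multiply(dt * A, u)
```

For this grid, explicit Euler would require dt ≤ h²/(2ν) = 1.25e-5 for stability, so the single exponential step above covers the equivalent of roughly 40,000 explicit steps while remaining exact for the linear operator.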
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...
2007-07-01
In Norway Integrated Operations (IO) is a concept which in the first phase (G1) has been used to describe how to integrate processes and people onshore and offshore using ICT solutions and facilities that improve onshore's ability to support offshore operationally. The second generation (G2) Integrated Operations aims to help operators utilize vendors' core competencies and services more efficiently. Utilizing digital services and vendor products, operators will be able to update reservoir models, drilling targets and well trajectories as wells are drilled, manage well completions remotely, optimize production from reservoir to export lines, and implement condition-based maintenance concepts. The total impact on production, recovery rates, costs and safety will be profound. When the international petroleum business moves to the Arctic region the setting is very different from what is the case on the Norwegian Continental Shelf (NCS) and new challenges will arise. The Norwegian Ministry of Environment has recently issued an Integrated Management Plan for the Barents Sea where one focus is on 'Monitoring of the Marine Environment in the North'. The Government aims to establish a new and more coordinated system for monitoring the marine ecosystems in the north. A representative group consisting of the major Operators, the Service Industry, Academia and the Authorities have developed the enclosed strategy for the OG21 Integrated Operations and Real Time Reservoir Management (IO and RTRM) Technology Target Area (TTA). Major technology and work process research and development gaps have been identified in several areas: Bandwidth down-hole to surface; Sensor development including Nano-technology; Cross discipline use of Visualisation, Simulation and model development particularly in Drilling and Reservoir management areas; Software development in terms of data handling, model updating and calculation speed; Enabling reliable and robust communications
A Time Marching Scheme for Solving Volume Integral Equations on Nonlinear Scatterers
Bagci, Hakan
2015-01-07
Transient electromagnetic field interactions on inhomogeneous penetrable scatterers can be analyzed by solving time domain volume integral equations (TDVIEs). TDVIEs are oftentimes solved using marching-on-in-time (MOT) schemes. Unlike finite difference and finite element schemes, MOT-TDVIE solvers require discretization of only the scatterers, do not call for artificial absorbing boundary conditions, and are more robust to numerical phase dispersion. On the other hand, their computational cost is high, they suffer from late-time instabilities, and their implicit nature makes incorporation of nonlinear constitutive relations more difficult. Development of plane-wave time-domain (PWTD) and FFT-based schemes has significantly reduced the computational cost of the MOT-TDVIE solvers. Additionally, the late-time instability problem has been alleviated for all practical purposes with the development of accurate integration schemes and specially designed temporal basis functions. Addressing the third challenge is the topic of this presentation. I will talk about an explicit MOT scheme developed for solving the TDVIE on scatterers with nonlinear material properties. The proposed scheme separately discretizes the TDVIE and the nonlinear constitutive relation between electric field intensity and flux density. The unknown field intensity and flux density are expanded using half and full Schaubert-Wilton-Glisson (SWG) basis functions in space and polynomial temporal interpolators in time. The resulting coupled system of the discretized TDVIE and constitutive relation is integrated in time using an explicit PE(CE)^m scheme to yield the unknown expansion coefficients. Explicitness of time marching allows for straightforward incorporation of the nonlinearity as a function evaluation on the right hand side of the coupled system of equations. Consequently, the resulting MOT scheme does not call for a Newton-like nonlinear solver. Numerical examples, which demonstrate the applicability...
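The predictor-corrector pattern that such explicit schemes apply at each time step can be illustrated on a scalar ODE. This toy PE(CE)^m sketch (Adams-Bashforth-2 predictor, trapezoidal corrector; a hypothetical stand-in, not the actual TDVIE discretization) shows the update structure:

```python
def pece_m(f, u0, t0, t1, steps, m=2):
    """March u' = f(t, u) with a PE(CE)^m pattern: explicit predictor,
    then m Evaluate/Correct passes with the trapezoidal rule per step."""
    dt = (t1 - t0) / steps
    t, u = t0, float(u0)
    f_prev = f(t, u)
    f_old = f_prev
    for k in range(steps):
        if k == 0:
            u_new = u + dt * f_prev                         # P: Euler bootstrap
        else:
            u_new = u + dt * (1.5 * f_prev - 0.5 * f_old)   # P: AB2 predictor
        f_new = f(t + dt, u_new)                            # E
        for _ in range(m):                                  # (CE)^m corrections
            u_new = u + dt / 2.0 * (f_prev + f_new)         # C: trapezoidal
            f_new = f(t + dt, u_new)                        # E
        f_old, f_prev = f_prev, f_new
        t, u = t + dt, u_new
    return u

u1 = pece_m(lambda t, u: -u, 1.0, 0.0, 1.0, steps=100)  # decay test, exact exp(-1)
```

The key point mirrored from the abstract: each step only *evaluates* the right-hand side, so a nonlinearity there costs a function evaluation rather than a Newton solve.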
Maximum Power from a Solar Panel
Michael Miller
2010-01-01
Full Text Available Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current at maximum power. These quantities are determined by finding the maximum value of the equation for power using differentiation. After the maximum values are found for each time of day, each quantity (voltage at maximum power, current at maximum power, and maximum power) is plotted as a function of the time of day.
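The differentiation step described above can be reproduced numerically. The sketch below assumes a simple single-diode panel model with invented parameters (not the project's actual data); the voltage of maximum power is found as the root of dP/dV on (0, Voc):

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical single-diode panel model: I(V) = Isc - I0 * (exp(V / Vt) - 1).
Isc, I0, Vt = 5.0, 1e-9, 0.7   # invented: short-circuit current [A],
                               # saturation current [A], voltage scale [V]

def current(V):
    return Isc - I0 * (np.exp(V / Vt) - 1.0)

def dP_dV(V):
    # P = V * I(V), so dP/dV = I(V) + V * I'(V); maximum power where this is 0
    return current(V) + V * (-I0 / Vt) * np.exp(V / Vt)

Voc = Vt * np.log(Isc / I0 + 1.0)   # open-circuit voltage: I(Voc) = 0
Vmp = brentq(dP_dV, 1e-6, Voc)      # voltage of maximum power
Imp = current(Vmp)                  # current of maximum power
Pmp = Vmp * Imp                     # maximum power
```

Repeating this for irradiance and temperature profiles over a day would yield the plots of Vmp, Imp, and Pmp versus time of day that the abstract describes.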
Dependence of the energy resolution of a scintillating crystal on the readout integration time
Bocci, V; Chiodi, G; Faccini, R; Ferroni, F; Lunadei, R; Martellotti, G; Penso, G; Pinci, D; Recchia, L
2012-01-01
The possibility of performing high-rate calorimetry with a slow scintillator crystal is studied. In this experimental situation, to avoid pulse pile-up, it can be necessary to base the energy measurement on only a fraction of the emitted light, thus spoiling the energy resolution. This effect was studied experimentally with a BGO crystal and a photomultiplier followed by an integrator, by measuring the peak amplitude of the signals. The experimental data show that the energy resolution is exclusively due to the statistical fluctuations of the number of photoelectrons contributing to the peak amplitude. When this number is small, its fluctuations are even smaller than those predicted by Poisson statistics. These results were confirmed by a Monte Carlo simulation which makes it possible to estimate, in a general case, the energy resolution, given the total number of photoelectrons, the scintillation time, and the integration time.
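A small Monte Carlo sketch illustrates why integrating only a fraction of the light degrades the fractional resolution. It uses invented parameters and simply counts photoelectrons emitted inside a gate, which is a simplification of the paper's peak-amplitude measurement (and therefore reproduces Poisson rather than sub-Poisson behavior):

```python
import numpy as np

rng = np.random.default_rng(1)

def resolution(n_pe_mean, tau, gate, n_events=20_000):
    """Fractional resolution when only photoelectrons whose emission time
    (~ Exp(tau)) falls inside the integration gate are counted."""
    n_pe = rng.poisson(n_pe_mean, size=n_events)            # photoelectrons/event
    signal = np.array([np.count_nonzero(rng.exponential(tau, k) < gate)
                       for k in n_pe])
    return signal.std() / signal.mean()

# BGO-like decay time tau ~ 300 ns; gate lengths are invented for illustration.
r_short = resolution(1000, tau=300.0, gate=100.0)   # keeps ~28% of the light
r_long = resolution(1000, tau=300.0, gate=1500.0)   # keeps ~99% of the light
```

Shortening the gate from 5τ to τ/3 reduces the mean number of counted photoelectrons by a factor of ~3.5, worsening the fractional resolution by roughly the square root of that factor.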
Global format for energy-momentum based time integration in nonlinear dynamics
Krenk, Steen
2014-01-01
A global format is developed for momentum and energy consistent time integration of second‐order dynamic systems with general nonlinear stiffness. The algorithm is formulated by integrating the state‐space equations of motion over the time increment. The internal force is first represented...... in fourth‐order form consisting of the end‐point mean value plus a term containing the stiffness matrix increment. This form gives energy conservation for systems with internal energy as a quartic function of the displacement components. This representation is then extended to general energy conservation...... of mean value products at the element level or explicit use of a geometric stiffness matrix. An optional monotonic algorithmic damping, increasing with response frequency, is developed in terms of a single damping parameter. In the solution procedure, the velocity is eliminated and the nonlinear...
Retarded potentials and time domain boundary integral equations: a road map
Sayas, Francisco-Javier
2016-01-01
This book offers a thorough and self-contained exposition of the mathematics of time-domain boundary integral equations associated to the wave equation, including applications to scattering of acoustic and elastic waves. The book offers two different approaches for the analysis of these integral equations, including a systematic treatment of their numerical discretization using Galerkin (Boundary Element) methods in the space variables and Convolution Quadrature in the time variable. The first approach follows classical work started in the late eighties, based on Laplace transforms estimates. This approach has been refined and made more accessible by tailoring the necessary mathematical tools, avoiding an excess of generality. A second approach contains a novel point of view that the author and some of his collaborators have been developing in recent years, using the semigroup theory of evolution equations to obtain improved results. The extension to electromagnetic waves is explained in one of the appendices...
Comparing Numerical Integration Schemes for Time-Continuous Car-Following Models
Treiber, Martin
2014-01-01
When simulating trajectories by integrating time-continuous car-following models, standard integration schemes such as the fourth-order Runge-Kutta method (RK4) are rarely used, while the simple Euler's method is popular among researchers. We compare four explicit methods: Euler's method, ballistic update, Heun's method (trapezoidal rule), and the standard fourth-order RK4. As performance metrics, we plot the global discretization error as a function of the numerical complexity. We tested the methods on several time-continuous car-following models in several multi-vehicle simulation scenarios with and without discontinuities such as stops or a discontinuous behavior of an external leader. We find that the theoretical advantage of RK4 (consistency order 4) only plays a role if both the acceleration function of the model and the external data of the simulation scenario are sufficiently often differentiable. Otherwise, we obtain lower (and often fractional) consistency orders. Although, to our knowledge, Heun's met...
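The comparison is easy to reproduce on a smooth test ODE (a stand-in for a smooth car-following acceleration function), where the consistency orders show up directly as the factor by which the global error shrinks when the time step is halved: about 2 for Euler, 4 for Heun, and 16 for RK4.

```python
import numpy as np

def integrate(f, u0, T, n, scheme):
    """Integrate u' = f(t, u) over [0, T] with n steps of the given scheme."""
    dt, u, t = T / n, u0, 0.0
    for _ in range(n):
        k1 = f(t, u)
        if scheme == "euler":
            u += dt * k1
        elif scheme == "heun":                    # Heun / trapezoidal rule
            k2 = f(t + dt, u + dt * k1)
            u += dt / 2.0 * (k1 + k2)
        else:                                     # classical fourth-order RK4
            k2 = f(t + dt / 2.0, u + dt / 2.0 * k1)
            k3 = f(t + dt / 2.0, u + dt / 2.0 * k2)
            k4 = f(t + dt, u + dt * k3)
            u += dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        t += dt
    return u

f = lambda t, u: -u                               # smooth test ODE, exact exp(-t)
exact = np.exp(-1.0)
ratios = {}
for scheme in ("euler", "heun", "rk4"):
    e1 = abs(integrate(f, 1.0, 1.0, 50, scheme) - exact)
    e2 = abs(integrate(f, 1.0, 1.0, 100, scheme) - exact)
    ratios[scheme] = e1 / e2                      # error reduction when halving dt
```

With a non-smooth right-hand side (stops, a discontinuous leader), these ratios degrade toward lower, often fractional, effective orders, which is the paper's central observation.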
Sequence Diagram Test Case Specification and Virtual Integration Analysis using Timed-Arc Petri Nets
Sven Sieverding
2013-02-01
Full Text Available In this paper, we formally define Test Case Sequence Diagrams (TCSD as an easy-to-use means to specify test cases for components including timing constraints. These test cases are modeled using the UML2 syntax and can be specified by standard UML-modeling-tools. In a component-based design an early identification of errors can be achieved by a virtual integration of components before the actual system is build. We define such a procedure which integrates the individual test cases of the components according to the interconnections of a given architecture and checks if all specified communication sequences are consistent. Therefore, we formally define the transformation of TCSD into timed-arc Petri nets and a process for the combination of these nets. The applicability of our approach is demonstrated on an avionic use case from the ARP4761 standard.
Ulku, Huseyin Arda
2014-07-06
Effects of material nonlinearities on electromagnetic field interactions become dominant as field amplitudes increase. A typical example is observed in plasmonics, where highly localized fields "activate" Kerr nonlinearities. Naturally, time domain solvers are the method of choice when it comes to simulating these nonlinear effects. Oftentimes, the finite difference time domain (FDTD) method is used for this purpose. This is simply due to the fact that the explicitness of the FDTD renders the implementation easier and the material nonlinearity can be easily accounted for using an auxiliary differential equation (J.H. Green and A. Taflove, Opt. Express, 14(18), 8305-8310, 2006). On the other hand, explicit marching-on-in-time (MOT)-based time domain integral equation (TDIE) solvers have never been used for the same purpose even though they offer several advantages over FDTD (E. Michielssen, et al., ECCOMAS CFD, The Netherlands, Sep. 5-8, 2006). This is because explicit MOT solvers were only recently stabilized. An explicit but stable MOT scheme has been proposed for solving the time domain surface magnetic field integral equation (H.A. Ulku, et al., IEEE Trans. Antennas Propag., 61(8), 4120-4131, 2013) and later extended to the time domain volume electric field integral equation (TDVEFIE) (S. B. Sayed, et al., Pr. Electromagn. Res. S., 378, Stockholm, 2013). This explicit MOT scheme uses predictor-corrector updates together with successive over-relaxation during time marching to stabilize the solution even when the time step is as large as in the implicit counterpart. In this work, an explicit MOT-TDVEFIE solver is proposed for analyzing electromagnetic wave interactions on scatterers exhibiting Kerr nonlinearity. Nonlinearity is accounted for using the constitutive relation between the electric field intensity and flux density. Then, this relation and the TDVEFIE are discretized together by expanding the intensity and flux density using half...
Using the Simulation Modeling Methods for the Designing Real-Time Integrated Expert Systems
Rybina, Galina; Rybin, Victor
2003-01-01
Certain theoretical and methodological problems of designing real-time dynamical expert systems, which belong to the class of the most complex integrated expert systems, are discussed. Primary attention is given to the problems of designing subsystems for modeling the external environment in the case where the environment is represented by complex engineering systems. A specific approach to designing simulation models for complex engineering systems is proposed and examples of...
Measurement of the time-integrated CP asymmetry in D0 -> KS0 KS0 decays
Aaij, R.; Adeva, B.; Adinolfi, M.; Affolder, A.; Ajaltouni, Z.; Akar, S.; Albrecht, J.; Alessio, F.; Alexander, M.; Ali, S.; Alkhazov, G.; Cartelle, P. Alvarez; Alves, A. A.; Amato, S.; Amerio, S.; Amhis, Y.; An, L.; Anderlini, L.; Anderson, J.; Andreassi, G.; Andreotti, M.; Andrews, J. E.; Appleby, R. B.; Gutierrez, O. Aquines; Archilli, F.; d'Argent, P.; Artamonov, A.; Artuso, M.; Aslanides, E.; Auriemma, G.; Baalouch, M.; Bachmann, S.; Back, J. J.; Badalov, A.; Baesso, C.; Baldini, W.; Barlow, R. J.; Barschel, C.; Barsuk, S.; Barter, W.; Batozskaya, V.; Battista, V.; Bay, A.; Beaucourt, L.; Beddow, J.; Bedeschi, F.; Bediaga, I.; Bel, L. J.; Bellee, V.; Belloli, N.; Belyaev, I.; Ben-Haim, E.; Bencivenni, G.; Benson, S.; Benton, J.; Berezhnoy, A.; Bernet, R.; Bertolin, A.; Bettler, M-O; van Beuzekom, M.; Bien, A.; Bifani, S.; Billoir, P.; Bird, T.; Birnkraut, A.; Bizzeti, A.; Blake, T.; Blanc, F.; Blouw, J.; Blusk, S.; Bocci, V.; Bondar, A.; Bondar, N.; Bonivento, W.; Borghi, S.; Borsato, M.; Bowcock, T. J. V.; Bowen, E.; Bozzi, C.; Braun, S.; Britsch, M.; Britton, T.; Brodzicka, J.; Brook, N. H.; Buchanan, E.; Bursche, A.; Buytaert, J.; Cadeddu, S.; Calabrese, R.; Calvi, M.; Calvo Gomez, M.; Campana, P.; Perez, D. Campora; Capriotti, L.; Carbone, A.; Carboni, G.; Cardinale, R.; Cardini, A.; Carniti, P.; Carson, L.; Akiba, K. Carvalho; Casse, G.; Cassina, L.; Garcia, L. Castillo; Cattaneo, M.; Cauet, Ch.; Cavallero, G.; Cenci, R.; Charles, M.; Charpentier, Ph.; Chefdeville, M.; Chen, S.; Cheung, S-F; Chiapolini, N.; Chrzaszcz, M.; Vidal, X. Cid; Ciezarek, G.; Clarke, P. E. L.; Clemencic, M.; Cliff, H. V.; Closier, J.; Coco, V.; Cogan, J.; Cogneras, E.; Cogoni, V.; Cojocariu, L.; Collazuol, G.; Collins, P.; Comerma-Montells, A.; Contu, A.; Cook, A.; Coombes, M.; Coquereau, S.; Corti, G.; Corvo, M.; Couturier, B.; Cowan, G. A.; Craik, D. C.; Crocombe, A.; Cruz Torres, M.; Cunliffe, S.; Currie, R.; D'Ambrosio, C.; Dall'Occo, E.; Dalseno, J.; David, P. N. 
Y.; Davis, A.; De Bruyn, K.; De Capua, S.; De Cian, M.; De Miranda, J. M.; De Paula, L.; De Simone, P.; Dean, C-T; Decamp, D.; Deckenhoff, M.; Del Buono, L.; Deleage, N.; Demmer, M.; Derkach, D.; Deschamps, O.; Dettori, F.; Dey, B.; Di Canto, A.; Di Ruscio, F.; Dijkstra, H.; Donleavy, S.; Dordei, F.; Dorigo, M.; Dosil Suarez, A.; Dossett, D.; Dovbnya, A.; Dreimanis, K.; Dufour, L.; Dujany, G.; Dupertuis, F.; Durante, P.; Dzhelyadin, R.; Dziurda, A.; Dzyuba, A.; Easo, S.; Egede, U.; Egorychev, V.; Eidelman, S.; Eisenhardt, S.; Eitschberger, U.; Ekelhof, R.; Eklund, L.; El Rifai, I.; Elsasser, Ch.; Ely, S.; Esen, S.; Evans, H. M.; Evans, T.; Falabella, A.; Faerber, C.; Farley, N.; Farry, S.; Fay, R.; Ferguson, D.; Fernandez Albor, V.; Ferrari, F.; Ferreira Rodrigues, F.; Ferro-Luzzi, M.; Filippov, S.; Fiore, M.; Fiorini, M.; Firlej, M.; Fitzpatrick, C.; Fiutowski, T.; Fohl, K.; Fol, P.; Fontana, M.; Fontanelli, F.; Forty, R.; Francisco, O.; Frank, M.; Frei, C.; Frosini, M.; Fu, J.; Furfaro, E.; Gallas Torreira, A.; Galli, D.; Gallorini, S.; Gambetta, S.; Gandelman, M.; Gandini, P.; Gao, Y.; Garcia Pardinas, J.; Tico, J. Garra; Garrido, L.; Gascon, D.; Gaspar, C.; Gauld, R.; Gavardi, L.; Gazzoni, G.; Gerick, D.; Gersabeck, E.; Gersabeck, M.; Gershon, T.; Ghez, Ph.; Giani, S.; Gibson, V.; Girard, O. G.; Giubega, L.; Gligorov, V. V.; Goebel, C.; Golubkov, D.; Golutvin, A.; Gomes, A.; Gotti, C.; Gandara, M. Grabalosa; Graciani Diaz, R.; Cardoso, L. A. Granado; Grauges, E.; Graverini, E.; Graziani, G.; Grecu, A.; Greening, E.; Gregson, S.; Griffith, P.; Grillo, L.; Gruenberg, O.; Gui, B.; Gushchin, E.; Guz, Yu.; Gys, T.; Hadavizadeh, T.; Hadjivasiliou, C.; Haefeli, G.; Haen, C.; Haines, S. C.; Hall, S.; Hamilton, B.; Han, X.; Hansmann-Menzemer, S.; Harnew, N.; Harnew, S. 
T.; Harrison, J.; He, J.; Head, T.; Heijne, V.; Hennessy, K.; Henrard, P.; Henry, L.; van Herwijnen, E.; Hess, M.; Hicheur, A.; Hill, D.; Hoballah, M.; Hombach, C.; Hulsbergen, W.; Humair, T.; Hussain, N.; Hutchcroft, D.; Hynds, D.; Idzik, M.; Ilten, P.; Jacobsson, R.; Jaeger, A.; Jalocha, J.; Jans, E.; Jawahery, A.; Jing, F.; John, M.; Johnson, D.; Jones, C. R.; Joram, C.; Jost, B.; Jurik, N.; Kandybei, S.; Kanso, W.; Karacson, M.; Karbach, T. M.; Karodia, S.; Kecke, M.; Kelsey, M.; Kenyon, I. R.; Kenzie, M.; Ketel, T.; Khanji, B.; Khurewathanakul, C.; Klaver, S.; Klimaszewski, K.; Kochebina, O.; Kolpin, M.; Komarov, I.; Koopman, R. F.; Koppenburg, P.; Kozeiha, M.; Kravchuk, L.; Kreplin, K.; Kreps, M.; Krocker, G.; Krokovny, P.; Kruse, F.; Krzemien, W.; Kucewicz, W.; Kucharczyk, M.; Kudryavtsev, V.; Kuonen, A. K.; Kurek, K.; Kvaratskheliya, T.; Lacarrere, D.; Lafferty, G.; Lai, A.; Lambert, D.; Lanfranchi, G.; Langenbruch, C.; Langhans, B.; Latham, T.; Lazzeroni, C.; Le Gac, R.; van Leerdam, J.; Lees, J-P
2015-01-01
The time-integrated CP asymmetry in the decay D0 -> KS0 KS0 is measured using 3 fb(-1) of proton-proton collision data collected by the LHCb experiment at centre-of-mass energies of 7 and 8 TeV. The flavour of the D0 meson is determined by use of the decay D*(+) -> D0 pi(+) and its charge conjugate.
Asymptotic stability of monostable wavefronts in discrete-time integral recursions
Anonymous
2010-01-01
The aim of this work is to study the traveling wavefronts in a discrete-time integral recursion with a Gauss kernel in R^2. We first establish the existence of traveling wavefronts as well as their precise asymptotic behavior. Then, by employing the comparison principle and the upper and lower solutions technique, we prove the asymptotic stability and uniqueness of such monostable wavefronts in the sense of phase shift and circumnutation. We also obtain some similar results in R.
Fast explicit diffusion for long-time integration of parabolic problems
Bähr, Martin; Breuß, Michael; Wunderlich, Ralf
2017-07-01
The goal of this paper is to motivate the application of the recent numerical scheme called Fast Explicit Diffusion (FED) to long-time parabolic problems. Designed for long integration times, FED is a simple and fast explicit solver that was originally introduced in the field of image processing. We show that FED is at least as fast as standard implicit methods, often has comparable or even better accuracy, and is much easier to implement.
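A minimal FED sketch for 1D homogeneous diffusion, assuming the cyclic step sizes from the FED literature (not the authors' own code): individual steps in the cycle exceed the explicit stability limit τ_stab, yet the cycle as a whole is stable and advances the solution by τ_stab·(n²+n)/3.

```python
import numpy as np

def fed_cycle(u, h, n):
    """One FED cycle of n explicit steps for u_t = u_xx on a periodic grid,
    with varying step sizes tau_i = tau_stab / (2 cos^2(pi (2i+1) / (4n+2)))."""
    tau_stab = h**2 / 2.0                                   # explicit stability limit
    i = np.arange(n)
    taus = tau_stab / (2.0 * np.cos(np.pi * (2 * i + 1) / (4 * n + 2)) ** 2)
    for tau in taus:
        lap = np.roll(u, 1) - 2.0 * u + np.roll(u, -1)      # periodic Laplacian
        u = u + tau / h**2 * lap                            # explicit diffusion step
    return u, taus.sum()

n_grid = 128
h = 1.0 / n_grid
x = np.linspace(0.0, 1.0, n_grid, endpoint=False)
u0 = np.sin(2 * np.pi * x)
u1, T = fed_cycle(u0, h, n=20)   # covers T = tau_stab * (20**2 + 20) / 3 in 20 steps
```

With n = 20, one cycle advances the solution by 140 base steps' worth of time while performing only 20 updates, which is the source of FED's speed advantage over a plain explicit scheme.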
Weak convergence of stochastic integrals driven by continuous-time random walks
Burr, Meredith N
2011-01-01
Brownian motion is a well-known model for normal diffusion, but not all physical phenomena behave according to a Brownian motion. Many phenomena exhibit irregular diffusive behavior, called anomalous diffusion. Examples of anomalous diffusion have been observed in physics, hydrology, biology, and finance, among many other fields. Continuous-time random walks (CTRWs), introduced by Montroll and Weiss, serve as models for anomalous diffusion. CTRWs generalize the usual random walk model by allowing random waiting times between successive random jumps. Under certain conditions on the jumps and waiting times, scaled CTRWs can be shown to converge in distribution to a limit process M(t) in the càdlàg space D[0,infinity) with the Skorohod J_1 or M_1 topology. An interesting question is whether stochastic integrals driven by the scaled CTRWs X^n(t) converge in distribution to a stochastic integral driven by the CTRW limit process M(t). We prove weak convergence of the stochastic integrals driven by CTRWs for certain...
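A CTRW is straightforward to simulate (a sketch with invented parameters): draw heavy-tailed waiting times, count the renewals up to time t, and sum that many i.i.d. jumps.

```python
import numpy as np

rng = np.random.default_rng(7)

def ctrw_position(t_final, beta=0.7, n_max=5_000):
    """Position of a CTRW at time t_final: mean-zero Gaussian jumps separated
    by Pareto waiting times with tail index beta < 1 (infinite mean)."""
    waits = 1.0 + rng.pareto(beta, n_max)              # waiting times >= 1
    arrival = np.cumsum(waits)                         # renewal (jump) times
    n_jumps = int(np.searchsorted(arrival, t_final))   # N(t): jumps made by t_final
    return rng.standard_normal(n_jumps).sum()          # random walk at N(t)

samples = np.array([ctrw_position(1_000.0) for _ in range(500)])
# For beta < 1 the variance of X(t) grows like t**beta (subdiffusion)
# rather than linearly in t, the hallmark of anomalous diffusion.
```

Since every waiting time here is at least 1, the n_max = 5,000 pre-drawn waits are guaranteed to cover the horizon t_final = 1,000.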
Study of time-accurate integration of the variable-density Navier-Stokes equations
Lu, Xiaoyi; Pantano, Carlos
2015-11-01
We present several theoretical elements that affect time-consistent integration of the low-Mach number approximation of variable-density Navier-Stokes equations. The goal is for velocity, pressure, density, and scalars to achieve uniform order of accuracy, consistent with the time integrator being used. We show examples of second-order (using Crank-Nicolson and Adams-Bashforth) and third-order (using additive semi-implicit Runge-Kutta) uniform convergence with the proposed conceptual framework. Furthermore, the consistent approach can be extended to other time integrators. In addition, the method is formulated using approximate/incomplete factorization methods for easy incorporation in existing solvers. One of the observed benefits of the proposed approach is improved stability, even for large density difference, in comparison with other existing formulations. A linearized stability analysis is also carried out for some test problems to better understand the behavior of the approach. This work was supported in part by the Department of Energy, National Nuclear Security Administration, under award no. DE-NA0002382 and the California Institute of Technology.
Balistrieri, Laurie S.; Nimick, David A.; Mebane, Christopher A.
2012-01-01
Evaluating water quality and the health of aquatic organisms is challenging in systems with systematic diel (24 hour) or less predictable runoff-induced changes in water composition. To advance our understanding of how to evaluate environmental health in these dynamic systems, field studies of diel cycling were conducted in two streams (Silver Bow Creek and High Ore Creek) affected by historical mining activities in southwestern Montana. A combination of sampling and modeling tools was used to assess the toxicity of metals in these systems. Diffusive Gradients in Thin Films (DGT) samplers were deployed at multiple time intervals during diel sampling to confirm that DGT integrates time-varying concentrations of dissolved metals. Thermodynamic speciation calculations using site-specific water compositions, including time-integrated dissolved metal concentrations determined from DGT, and a competitive, multiple-metal biotic ligand model incorporated into the Windermere Humic Aqueous Model Version 6.0 (WHAM VI) were used to determine the chemical speciation of dissolved metals and biotic ligands. The model results were combined with previously collected toxicity data on cutthroat trout to derive a relationship that predicts the relative survivability of these fish at a given site. This integrative approach may prove useful for assessing water quality and toxicity of metals to aquatic organisms in dynamic systems and evaluating whether potential changes in environmental health of aquatic systems are due to anthropogenic activities or natural variability.
Leclerc, A.; Jolicard, G.; Viennot, D.; Killingbeck, J. P.
2012-01-01
The constrained adiabatic trajectory method (CATM) is reexamined as an integrator for the Schrödinger equation. An initial discussion places the CATM in the context of the different integrators used in the literature for time-independent or explicitly time-dependent Hamiltonians. The emphasis is put on adiabatic processes and within this adiabatic framework the interdependence between the CATM, the wave operator, the Floquet, and the (t, t') theories is presented in detail. Two points are then more particularly analyzed and illustrated by a numerical calculation describing the H_2^+ ion subjected to a laser pulse. The first point is the ability of the CATM to dilate the Hamiltonian spectrum and thus to make the perturbative treatment of the equations defining the wave function possible, possibly by using a Krylov subspace approach as a complement. The second point is the ability of the CATM to handle extremely complex time-dependencies, such as those which appear when interaction representations are used to integrate the system.
Receiver function estimated by maximum entropy deconvolution
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented as a method to estimate the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equations and the Levinson algorithm are used to derive the iterative formula of the error-predicting filter, from which the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring the receiver function in the time domain.
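The Toeplitz/Levinson step referred to above can be sketched compactly. The code below is a generic Levinson-Durbin recursion, not the authors' implementation: it computes the prediction-error filter from autocorrelations, and shows how the error power only shrinks while each reflection coefficient stays below 1 in magnitude, the stability property the abstract mentions.

```python
import numpy as np

def levinson(r, order):
    # Levinson-Durbin recursion: solves the Toeplitz normal equations for
    # the prediction-error filter a, given autocorrelations r[0..order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]                                 # prediction-error power
    for m in range(1, order + 1):
        k = -np.dot(a[:m], r[m:0:-1]) / err    # reflection coefficient, |k| < 1
        a[1:m + 1] += k * a[m - 1::-1]         # order-update of the filter
        err *= (1.0 - k * k)                   # error power shrinks each order
    return a, err

# an AR(1)-like autocorrelation sequence: the order-2 filter should
# reduce to the order-1 filter (second reflection coefficient = 0)
a, err = levinson(np.array([1.0, 0.5, 0.25]), 2)
print(a, err)
```

For r = [1, 0.5, 0.25] the recursion returns a = [1, -0.5, 0] with error power 0.75, matching the exact AR(1) whitening filter.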
Rike Steenken
Modern driver assistance systems make increasing use of auditory and tactile signals in order to reduce the driver's visual information load. This entails potential crossmodal interaction effects that need to be taken into account in designing an optimal system. Here we show that saccadic reaction times to visual targets (cockpit or outside mirror), presented in a driving simulator environment and accompanied by auditory or tactile accessories, follow some well-known spatiotemporal rules of multisensory integration, usually found under confined laboratory conditions. Auditory nontargets speed up reaction time by about 80 ms. The effect tends to be maximal when the nontarget is presented 50 ms before the target and when target and nontarget are spatially coincident. The effect of a tactile nontarget (vibrating steering wheel) was less pronounced and not spatially specific. It is shown that the average reaction times are well described by the stochastic "time window of integration" model for multisensory integration developed by the authors. This two-stage model postulates that crossmodal interaction occurs only if the peripheral processes from the different sensory modalities terminate within a fixed temporal interval, and that the amount of crossmodal interaction manifests itself in an increase or decrease of second-stage processing time. A qualitative test is consistent with the model prediction that the probability of interaction, but not the amount of crossmodal interaction, depends on target-nontarget onset asynchrony. A quantitative model fit yields estimates of individual participants' parameters, including the size of the time window. Some consequences for the design of driver assistance systems are discussed.
Post-traumatic stress, depression, and community integration a long time after whiplash injury
Britt-Marie Stålnacke
2010-01-01
Psychological factors such as post-traumatic stress and depression may play an important role in the recovery after whiplash injuries. Difficulties in psychosocial functioning with limitations in everyday life may dominate for some time after the injury. Our study therefore investigates the relationships between pain, post-traumatic stress, depression, and community integration. A set of questionnaires was answered by 191 persons (88 men, 103 women) five years after a whiplash injury to assess pain intensity (visual analogue scale, VAS), whiplash-related symptoms, post-traumatic stress (impact of event scale, IES), depression (Beck depression inventory, BDI-II), community integration (community integration questionnaire, CIQ), and life satisfaction (LiSat-11). One or more depressive symptoms were reported by 74% of persons; 22% reported scores that were classified as mild to severe depression. The presence of at least one post-traumatic symptom was reported by 70% of persons, and 38% reported mild to severe stress. Total scores of community integration for women were statistically significantly higher than for men. The total VAS score was correlated positively to the IES (r=0.456, P less than 0.456), the BDI (r=0.646, P less than 0.001), and negatively to the CIQ (r=-0.300, P less than 0.001). These results highlight the view that a significant proportion of people experience both pain and psychological difficulties for a long time after a whiplash injury. These findings should be taken into consideration in the management of subjects with chronic whiplash symptoms and may support a multi-professional rehabilitation model that integrates physical, psychological, and psychosocial factors.
Matrix of integrated superconducting single-photon detectors with high timing resolution
Schuck, Carsten; Minaeva, Olga; Li, Mo; Gol'tsman, Gregory; Sergienko, Alexander V; Tang, Hong X
2013-01-01
We demonstrate a large grid of individually addressable superconducting single photon detectors on a single chip. Each detector element is fully integrated into an independent waveguide circuit with custom functionality at telecom wavelengths. High device density is achieved by fabricating the nanowire detectors in traveling wave geometry directly on top of silicon-on-insulator waveguides. Our superconducting single-photon detector matrix includes detector designs optimized for high detection efficiency, low dark count rate and high timing accuracy. As an example, we exploit the high timing resolution of a particularly short nanowire design to resolve individual photon round-trips in a cavity ring-down measurement of a silicon ring resonator.
Integral sliding mode control for a class of nonlinear neutral systems with time-varying delays
Lou Xu-Yang; Cui Bao-Tong
2008-01-01
This paper focuses on sliding mode control problems for a class of nonlinear neutral systems with time-varying delays. An integral sliding surface is first constructed. Then a criterion guaranteeing global stability of the nonlinear neutral systems with time-varying delays on the specified switching surface is derived and formulated as a linear matrix inequality. The synthesized sliding mode controller guarantees reachability of the specified sliding surface. Finally, a numerical simulation validates the effectiveness and feasibility of the proposed technique.
Integration of image exposure time into a modified laser speckle imaging method
RamIrez-San-Juan, J C; Salazar-Hermenegildo, N; Ramos-Garcia, R; Munoz-Lopez, J [Optics Department, INAOE, Puebla (Mexico); Huang, Y C [Department of Electrical Engineering and Computer Science, University of California, Irvine, CA (United States); Choi, B, E-mail: jcram@inaoep.m [Beckman Laser Institute and Medical Clinic, University of California, Irvine, CA (United States)
2010-11-21
Speckle-based methods have been developed to characterize tissue blood flow and perfusion. One such method, called modified laser speckle imaging (mLSI), enables computation of blood flow maps with relatively high spatial resolution. Although it is known that the sensitivity and noise in LSI measurements depend on image exposure time, a fundamental disadvantage of mLSI is that it does not take into account this parameter. In this work, we integrate the exposure time into the mLSI method and provide experimental support of our approach with measurements from an in vitro flow phantom.
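To make the role of exposure time concrete: speckle contrast K = σ/⟨I⟩ computed over local windows falls as the exposure time T grows relative to the decorrelation time τ_c of the moving scatterers. The sketch below is illustrative only; the K²(T) expression is a common simple model (Lorentzian velocity spectrum), not necessarily the specific mLSI formulation of this paper.

```python
import numpy as np

def local_contrast(img, w):
    # speckle contrast K = sigma / mean over non-overlapping w x w blocks
    h, ww = (s - s % w for s in img.shape)
    b = img[:h, :ww].reshape(h // w, w, ww // w, w).swapaxes(1, 2)
    return b.std(axis=(2, 3)) / b.mean(axis=(2, 3))

def k_squared(T, tau_c):
    # exposure-time dependence of squared contrast, assuming a simple
    # Lorentzian model: K^2 = (tau_c / 2T) * (1 - exp(-2 T / tau_c))
    x = 2.0 * T / tau_c
    return (1.0 - np.exp(-x)) / x

# faster flow (shorter tau_c at fixed T) blurs the speckle, lowering K
print(k_squared(1.0, 1.0), k_squared(10.0, 1.0))
```

Inverting k_squared for τ_c given a measured K and a known exposure time T is what motivates folding the exposure time explicitly into speckle flow estimates.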
Hierarchical Direct Time Integration Method and Adaptive Procedure for Dynamic Analysis
Anonymous
2000-01-01
New hierarchical direct time integration method for structural dynamic analysis is developed by using Taylor series expansions in each time step. Very accurate results can be obtained by increasing the order of the Taylor series. Furthermore, the local error can be estimated by simply comparing the solutions obtained by the proposed method with the higher order solutions. This local estimate is then used to develop an adaptive order-control technique. Numerical examples are given to illustrate the performance of the present method and its adaptive procedure.
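The order-control idea is easy to demonstrate on a scalar test equation, where all Taylor derivatives are available in closed form. The sketch below is illustrative only (the paper treats structural dynamics systems, not this scalar ODE): it raises the Taylor order until the difference between successive orders, used as the local error estimate, drops below a tolerance.

```python
import math

def taylor_step_exp(lam, y, dt, order):
    # one Taylor-series step for y' = lam * y; the k-th derivative is
    # lam**k * y, so the step is a truncated exponential series
    s, term = y, y
    for k in range(1, order + 1):
        term *= lam * dt / k
        s += term
    return s

def adaptive_order(lam, y, dt, tol):
    # hierarchical idea: compare solutions of successive orders and raise
    # the order until their difference (a local error estimate) is < tol
    order = 1
    while abs(taylor_step_exp(lam, y, dt, order + 1)
              - taylor_step_exp(lam, y, dt, order)) > tol:
        order += 1
    return order, taylor_step_exp(lam, y, dt, order + 1)

order, y1 = adaptive_order(-1.0, 1.0, 0.1, 1e-10)
print(order, y1, math.exp(-0.1))
```

Because the estimate is just the next Taylor term, tightening the tolerance directly drives up the selected order, which is the adaptive order-control mechanism described above.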
A Co-integration Approach to Forecasting Container Carriers' Time Charter Rates
CHEN Fei-er; ZHANG Ren-yi
2008-01-01
A vector autoregressive model was developed for a sample of container carrier time charter rates. Although the series of time charter rates are themselves found non-stationary, thus precluding the use of many modeling methodologies, evidence provided by co-integration tests points to the existence of stable long-term relationships between the series. An assessment of the forecasts derived from the model suggests that the specification of these long-term relationships does not improve the accuracy of long-term forecasts. These results are interpreted as a corroboration of the efficient market hypothesis.
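The logic of a co-integration test can be sketched numerically: two series that are individually non-stationary but share a common stochastic trend leave a stationary residual after a static regression. The code below is a toy illustration of that Engle-Granger-style first step, on synthetic data, with a crude lag-1 statistic standing in for a proper ADF test; none of this reproduces the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)

# two nonstationary, charter-rate-like series sharing a stochastic trend
trend = np.cumsum(rng.standard_normal(2000))
r1 = trend + rng.standard_normal(2000)
r2 = 0.8 * trend + rng.standard_normal(2000)

# Engle-Granger first stage: regress one series on the other (with intercept)
X = np.column_stack([np.ones_like(r2), r2])
beta, *_ = np.linalg.lstsq(X, r1, rcond=None)
resid = r1 - X @ beta

def ar1_coef(z):
    # lag-1 autoregressive coefficient: near 1 suggests a unit root,
    # clearly below 1 suggests stationarity (crude stand-in for ADF)
    return np.dot(z[:-1], z[1:]) / np.dot(z[:-1], z[:-1])

print(ar1_coef(r1), ar1_coef(resid))
```

The levels behave like a unit-root process while the regression residual does not, which is exactly the "stable long-term relationship" that the co-integration tests in the paper detect.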
Application of retrospective time integration scheme to the prediction of torrential rain
Feng Guo-Lin; Dong Wen-Jie; Jia Xiao-Jing
2004-01-01
The retrospective time integration scheme, based on the principle of the self-memory of the atmosphere, is applied to the mesoscale grid model MM5, constructing the mesoscale self-memorial model SMM5; short-range prediction experiments of torrential rain are then performed in this paper. Results show that, in comparison with MM5, the prediction accuracy of SMM5 is clearly improved owing to its use of past observations at multiple time levels, and that the precipitation area and intensity predicted by SMM5 are closer to the observed fields than those of MM5.
High-Order Calderón Preconditioned Time Domain Integral Equation Solvers
Valdes, Felipe
2013-05-01
Two high-order accurate Calderón preconditioned time domain electric field integral equation (TDEFIE) solvers are presented. In contrast to existing Calderón preconditioned time domain solvers, the proposed preconditioner allows for high-order surface representations and current expansions by using a novel set of fully-localized high-order div- and quasi curl-conforming (DQCC) basis functions. Numerical results demonstrate that the linear systems of equations obtained using the proposed basis functions converge rapidly, regardless of the mesh density and of the order of the current expansion. © 1963-2012 IEEE.
On the mixed discretization of the time domain magnetic field integral equation
Ulku, Huseyin Arda
2012-09-01
Time domain magnetic field integral equation (MFIE) is discretized using divergence-conforming Rao-Wilton-Glisson (RWG) and curl-conforming Buffa-Christiansen (BC) functions as spatial basis and testing functions, respectively. The resulting mixed discretization scheme, unlike the classical scheme which uses RWG functions as both basis and testing functions, is proper: the testing functions belong to the dual space of the basis functions. Numerical results demonstrate that the marching-on-in-time (MOT) solution of the mixed discretized MFIE yields more accurate results than that of the classically discretized MFIE. © 2012 IEEE.
Maximum information photoelectron metrology
Hockett, P; Wollenhaupt, M; Baumert, T
2015-01-01
Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...
Valdés, Felipe
2013-03-01
Single-source time-domain electric- and magnetic-field integral equations for analyzing scattering from homogeneous penetrable objects are presented. Their temporal discretization is effected by using shifted piecewise polynomial temporal basis functions and a collocation testing procedure, thus allowing for a marching-on-in-time (MOT) solution scheme. Unlike dual-source formulations, single-source equations involve space-time domain operator products, for which spatial discretization techniques developed for standalone operators do not apply. Here, the spatial discretization of the single-source time-domain integral equations is achieved by using the high-order divergence-conforming basis functions developed by Graglia alongside the high-order divergence- and quasi curl-conforming (DQCC) basis functions of Valdés. The combination of these two sets allows for a well-conditioned mapping from div- to curl-conforming function spaces that fully respects the space-mapping properties of the space-time operators involved. Numerical results corroborate the fact that the proposed procedure guarantees accuracy and stability of the MOT scheme. © 2012 IEEE.
Žarić, Gojko; Fraga González, Gorka; Tijms, Jurgen; van der Molen, Maurits W; Blomert, Leo; Bonte, Milene
2015-01-01
A failure to build solid letter-speech sound associations may contribute to reading impairments in developmental dyslexia. Whether this reduced neural integration of letters and speech sounds changes over time within individual children and how this relates to behavioral gains in reading skills remains unknown. In this research, we examined changes in event-related potential (ERP) measures of letter-speech sound integration over a 6-month period during which 9-year-old dyslexic readers (n = 17) followed a training in letter-speech sound coupling next to their regular reading curriculum. We presented the Dutch spoken vowels /a/ and /o/ as standard and deviant stimuli in one auditory and two audiovisual oddball conditions. In one audiovisual condition (AV0), the letter "a" was presented simultaneously with the vowels, while in the other (AV200) it was preceding vowel onset for 200 ms. Prior to the training (T1), dyslexic readers showed the expected pattern of typical auditory mismatch responses, together with the absence of letter-speech sound effects in a late negativity (LN) window. After the training (T2), our results showed earlier (and enhanced) crossmodal effects in the LN window. Most interestingly, earlier LN latency at T2 was significantly related to higher behavioral accuracy in letter-speech sound coupling. On a more general level, the timing of the earlier mismatch negativity (MMN) in the simultaneous condition (AV0) measured at T1, significantly related to reading fluency at both T1 and T2 as well as with reading gains. Our findings suggest that the reduced neural integration of letters and speech sounds in dyslexic children may show moderate improvement with reading instruction and training and that behavioral improvements relate especially to individual differences in the timing of this neural integration.
Integrating and Visualizing Tropical Cyclone Data Using the Real Time Mission Monitor
Goodman, H. Michael; Blakeslee, Richard; Conover, Helen; Hall, John; He, Yubin; Regner, Kathryn
2009-01-01
The Real Time Mission Monitor (RTMM) is a visualization and information system that fuses multiple Earth science data sources, to enable real time decision-making for airborne and ground validation experiments. Developed at the NASA Marshall Space Flight Center, RTMM is a situational awareness, decision-support system that integrates satellite imagery, radar, surface and airborne instrument data sets, model output parameters, lightning location observations, aircraft navigation data, soundings, and other applicable Earth science data sets. The integration and delivery of this information is made possible using data acquisition systems, network communication links, network server resources, and visualizations through the Google Earth virtual globe application. RTMM is extremely valuable for optimizing individual Earth science airborne field experiments. Flight planners, scientists, and managers appreciate the contributions that RTMM makes to their flight projects. A broad spectrum of interdisciplinary scientists used RTMM during field campaigns including the hurricane-focused 2006 NASA African Monsoon Multidisciplinary Analyses (NAMMA), 2007 NOAA-NASA Aerosonde Hurricane Noel flight, 2007 Tropical Composition, Cloud, and Climate Coupling (TC4), plus a soil moisture (SMAP-VEX) and two arctic research experiments (ARCTAS) in 2008. Improving and evolving RTMM is a continuous process. RTMM recently integrated the Waypoint Planning Tool, a Java-based application that enables aircraft mission scientists to easily develop a pre-mission flight plan through an interactive point-and-click interface. Individual flight legs are automatically calculated "on the fly". The resultant flight plan is then immediately posted to the Google Earth-based RTMM for interested scientists to view the planned flight track and subsequently compare it to the actual real time flight progress. We are planning additional capabilities to RTMM including collaborations with the Jet Propulsion
A New Perspective on Path Integral Quantum Mechanics in Curved Space-Time
Singh Dinesh
2013-09-01
A new approach to path integral quantum mechanics in curved space-time is presented for scalar particle propagation, expressed in terms of Lie transport and Fermi or Riemann normal co-ordinates to describe local curvature. While the presence of local curvature results in a strictly non-unitary representation of local time translation, the formalism nevertheless correctly recovers the free-particle Lagrangian in curved space-time, along with new terms that predict a simultaneous breakdown of time-reversal symmetry and a quantum violation of the weak equivalence principle at the particle’s Compton wavelength scale. Furthermore, the formalism reveals the prediction of a gauge-invariant phase factor interpreted as the gravitational Aharonov-Bohm effect and Berry’s phase.
Influence of time lag and noncolocation on integrated structural/control system designs
Manning, R. A.; Schmit, L. A.
1989-01-01
Recent research efforts have led to the development of simultaneous structural/control system design procedures. Absent from this work are the time delays present in control system sensors and actuators and the computational time delay incurred in synthesizing actuator commands from sensor measurements. Madden has shown that the time delay present in the control system can have profound effects on the resulting system performance and stability regardless of its source. In addition, many of the simultaneous structural/control system design procedures have used colocated sensors and actuators for implementation of the control system. In actual practice, colocation is not always possible, which raises the issue of stability degradation when noncolocated sensors and actuators are used. The integrated structural/control system design procedure is extended here to include the effects of time lag and noncolocation of sensors and actuators on the resulting optimum designs.
Gregory, Jonathan M; Sukhera, Javeed; Taylor-Gates, Melissa
2017-01-01
As smartphone technology becomes an increasingly important part of youth mental health, there has been little to no examination of how to effectively integrate smartphone-based safety planning with inpatient care. Our study sought to examine whether or not we could effectively integrate smartphone-based safety planning into the discharge process on a child and adolescent inpatient psychiatry unit. Staff members completed a survey to determine the extent of smartphone ownership in a population of admitted child and adolescent inpatients. In addition to quantifying smartphone ownership, the survey also tracked whether youth would integrate their previously-established safety plan with a specific safety planning application on their smartphone (Be Safe) at the time of discharge. Sixty-six percent (50/76) of discharged youth owned a smartphone, which is consistent with prior reports of high smartphone ownership in adult psychiatric populations. A minority of youth (18%) downloaded the Be Safe app prior to discharge, with most (68%) suggesting they would download the app after discharge. Notably, all patients who downloaded the app prior to discharge were on their first admission to a psychiatric inpatient unit. Child and adolescent psychiatric inpatients have a clear interest in smartphone-based safety planning. Our results suggest that integrating smartphone-related interventions earlier in an admission might improve access before discharge. This highlights the tension between restricting and incorporating smartphone access for child and adolescent inpatients and may inform future study in this area.
Lee, Jae Young; Park, Jin Bae; Choi, Yoon Ho
2015-05-01
This paper focuses on a class of reinforcement learning (RL) algorithms, named integral RL (I-RL), that solve continuous-time (CT) nonlinear optimal control problems with input-affine system dynamics. First, we extend the concepts of exploration, integral temporal difference, and invariant admissibility to the target CT nonlinear system that is governed by a control policy plus a probing signal called an exploration. Then, we show input-to-state stability (ISS) and invariant admissibility of the closed-loop systems with the policies generated by the integral policy iteration (I-PI) or invariantly admissible PI (IA-PI) method. Based on these, three online I-RL algorithms, named explorized I-PI and integral Q-learning I and II, are proposed, all of which generate the same convergent sequences as I-PI and IA-PI under the required excitation condition on the exploration. All the proposed methods are partially or completely model-free, and can simultaneously explore the state space in a stable manner during the online learning processes. ISS, invariant admissibility, and convergence properties of the proposed methods are also investigated, and, related to these, we show the design principles of the exploration for safe learning. Neural-network-based implementation methods for the proposed schemes are also presented. Finally, several numerical simulations are carried out to verify the effectiveness of the proposed methods.
Emerek, Ruth
2004-01-01
The contribution discusses the different conceptions of integration in Denmark, and what may be understood by successful integration.
Varas, M. I.; Orteu, E.; Laserna, J. A.
2014-07-01
This paper describes the process followed in preparing the flooding manual of Cofrentes NPP: identifying the maximum time available to the plant to isolate a moderate- or high-energy pipe break before it affects safety-related (1E) equipment involved in the safe shutdown of the reactor or in spent fuel pool cooling, and determining the recommended isolation mode from the point of view of the location of the break, the location of the 1E equipment, and human factors. (Author)
Measurement of time-integrated $D^0\\to hh$ asymmetries at LHCb
Marino, Pietro
2016-01-01
LHCb collected the world’s largest sample of charm decays during LHC Run I, corresponding to an integrated luminosity of 3 fb$^{-1}$. This has permitted many precision measurements of charm mixing and CP violation parameters. One of the most precise and important observables is the so-called $\Delta A_{CP}$ parameter, corresponding to the difference between the time-integrated CP asymmetries in the singly Cabibbo-suppressed $D^{0} \rightarrow K^{+}K^{-}$ and $D^{0} \rightarrow \pi^{+}\pi^{-}$ decay modes. The flavour of the $D^{0}$ meson is inferred from the charge of the pion in $D^{*+} \rightarrow D^{0}\pi^{+}$ and $D^{*-} \rightarrow \overline{D}^{0}\pi^{-}$ decays. $\Delta A_{CP} \equiv A_{raw}(K^{+}K^{-})-A_{raw}(\pi^{+}\pi^{-})$ is measured to be $\Delta A_{CP}=(-0.10\pm0.08\pm0.03)\%$, where the first uncertainty is statistical and the second systematic. The measurement is consistent with the no-CP-violation hypothesis and represents the most precise measurement of time-integrated CP asymmetry ...
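The construction of the observable itself is simple arithmetic on tagged yields; its appeal is that production and detection asymmetries largely cancel in the difference of the two raw asymmetries. A sketch with purely hypothetical yields (not LHCb numbers):

```python
def a_raw(n_d0, n_d0bar):
    # raw asymmetry from D*-tagged yields of D0 vs D0bar decays
    return (n_d0 - n_d0bar) / (n_d0 + n_d0bar)

# hypothetical event counts, for illustration only
a_kk = a_raw(1_000_500, 1_000_000)     # D0 -> K+ K-
a_pipi = a_raw(600_200, 600_000)       # D0 -> pi+ pi-

# production and detection asymmetries are common to both modes,
# so they largely cancel in the difference
delta_a_cp = a_kk - a_pipi
print(f"{100 * delta_a_cp:.4f}%")
```

The percent-level statistical precision quoted in the abstract then comes from propagating binomial uncertainties through these yield ratios over the full Run I sample.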
FPGA-based real-time embedded system for RISS/GPS integrated navigation.
Abdelfatah, Walid Farid; Georgy, Jacques; Iqbal, Umar; Noureldin, Aboelmagd
2012-01-01
Navigation algorithms integrating measurements from multi-sensor systems overcome the problems that arise from using GPS navigation systems in standalone mode. Algorithms which integrate the data from a 2D low-cost reduced inertial sensor system (RISS), consisting of a gyroscope and an odometer or wheel encoders, along with a GPS receiver via a Kalman filter have proved worthwhile in providing a consistent and more reliable navigation solution compared to standalone GPS receivers. They have also been shown to be beneficial, especially in GPS-denied environments such as urban canyons and tunnels. The main objective of this paper is to narrow the idea-to-implementation gap that follows the algorithm development by realizing a low-cost real-time embedded navigation system capable of computing the data-fused positioning solution. The role of the developed system is to synchronize the measurements from the three sensors, relative to the pulse-per-second signal generated from the GPS, after which the navigation algorithm is applied to the synchronized measurements to compute the navigation solution in real time. Employing a customizable soft-core processor on an FPGA in the kernel of the navigation system provided the flexibility for communicating with the various sensors and the computation capability required by the Kalman filter integration algorithm.
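The integration described above reduces, in essence, to a Kalman predict/update loop: dead reckoning drives the prediction and GPS fixes correct it. The following is a deliberately reduced 1D sketch (constant-velocity model, noise-free synthetic GPS fixes); the actual RISS/GPS filter is 2D with azimuth from the gyroscope and speed from the odometer.

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition
Q = np.diag([0.01, 0.01])               # process noise covariance
H = np.array([[1.0, 0.0]])              # GPS observes position only
R = np.array([[4.0]])                   # GPS position noise variance (m^2)

x = np.array([0.0, 0.0])                # state: [position m, velocity m/s]
P = np.eye(2) * 10.0                    # initial uncertainty

for k in range(1, 51):
    # predict (dead reckoning from the motion model)
    x = F @ x
    P = F @ P @ F.T + Q
    # update with a GPS position fix; truth moves at 5 m/s,
    # fixes taken noise-free here for illustration
    z = np.array([5.0 * k])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P

print(x)   # position approaches 250 m, velocity approaches 5 m/s
```

In the embedded realization, exactly this predict/update arithmetic is what the soft-core processor executes each epoch, synchronized to the GPS pulse-per-second signal.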
Integration of real-time 3D capture, reconstruction, and light-field display
Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Pei, Renjing; Liu, Yongchun; Zhang, Xiao
2015-03-01
Effective integration of 3D acquisition, reconstruction (modeling), and display technologies into a seamless system provides an augmented experience of visualizing and analyzing real objects and scenes with realistic 3D sensation. Applications can be found in medical imaging, gaming, virtual or augmented reality, and hybrid simulations. Although 3D acquisition, reconstruction, and display technologies have gained significant momentum in recent years, there seems to be a lack of attention on synergistically combining these components into an "end-to-end" 3D visualization system. We designed, built, and tested an integrated 3D visualization system that is able to capture 3D light-field images in real time, perform 3D reconstruction to build a 3D model of the objects, and display the 3D model on a large autostereoscopic screen. In this article, we present our system architecture and component designs, hardware/software implementations, and experimental results. We elaborate on our recent progress on sparse camera array light-field 3D acquisition, real-time dense 3D reconstruction, and autostereoscopic multi-view 3D display. A prototype is finally presented with test results to illustrate the effectiveness of our proposed integrated 3D visualization system.
Technical Note: Reducing the spin-up time of integrated surface water–groundwater models
Ajami, H.
2014-06-26
One of the main challenges in catchment scale application of coupled/integrated hydrologic models is specifying a catchment's initial conditions in terms of soil moisture and depth to water table (DTWT) distributions. One approach to reduce uncertainty in model initialization is to run the model recursively using a single or multiple years of forcing data until the system equilibrates with respect to state and diagnostic variables. However, such "spin-up" approaches often require many years of simulations, making them computationally intensive. In this study, a new hybrid approach was developed to reduce the computational burden of spin-up time for an integrated groundwater-surface water-land surface model (ParFlow.CLM) by using a combination of ParFlow.CLM simulations and an empirical DTWT function. The methodology is examined in two catchments located in the temperate and semi-arid regions of Denmark and Australia respectively. Our results illustrate that the hybrid approach reduced the spin-up time required by ParFlow.CLM by up to 50%, and we outline a methodology that is applicable to other coupled/integrated modelling frameworks when initialization from equilibrium state is required.
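The recursive spin-up procedure mentioned above can be sketched in a few lines: re-run one year of forcing until the year-to-year change in model state falls below a tolerance. This is a schematic stand-in; `step_one_year` and the toy relaxation "model" are assumptions for illustration, not ParFlow.CLM itself.

```python
import numpy as np

def spin_up(step_one_year, state0, tol=1e-3, max_years=100):
    """Recursively re-run one year of forcing until the year-to-year
    change in model state falls below `tol` (schematic spin-up loop;
    `step_one_year` stands in for a ParFlow.CLM-style model run)."""
    state = state0
    for year in range(1, max_years + 1):
        new_state = step_one_year(state)
        if np.max(np.abs(new_state - state)) < tol:
            return new_state, year
        state = new_state
    return state, max_years

# toy "model": depth to water table relaxes halfway toward 2.0 m per year
relax = lambda s: s + 0.5 * (2.0 - s)
dtwt, years = spin_up(relax, np.array([10.0]))
```

The hybrid approach in the paper replaces part of this loop with an empirical DTWT function, which is why the required number of spin-up years drops.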
Guarantee of remaining life time. Integrity of mechanical components and control of ageing phenomena
Schuler, X.; Herter, K.H. [Stuttgart Univ. (Germany). MPA; Hienstorfer, W. [TUEV SUED Energietechnik GmbH, Filderstadt (Germany); Koenig, G. [EnBW-Kernkraftwerk GmbH, Neckarwestheim (Germany)
2012-07-01
The life time of safety-relevant systems, structures and components (SSC) of Nuclear Power Plants (NPP) is determined by two main principles. First, the required quality has to be produced during the design and fabrication process; quality has to be built in and cannot be improved afterwards by excessive inspections (Basis Safety - the quality-through-production principle). The second principle is that this initial quality has to be maintained during operation. This concerns safe operation during the total life time (life time management), safety against ageing phenomena (AM - ageing management) as well as proof of integrity (e.g. break preclusion or avoidance of fracture for SSC with high safety relevance). Following the Fukushima Dai-ichi event in Japan in spring 2011, Long Term Operation (LTO) of German NPPs is out of the question: in June 2011 the legislature decided to phase out nuclear power by 2022. Consequently, safe operation shall be guaranteed for the remaining life time. Within this technical framework, ageing management is a key element. Depending on the safety relevance of the SSC under observation, including preventive maintenance, various tasks are required, in particular to clarify the mechanisms which contribute system-specifically to the damage of the components and systems, and to define their controlling parameters, which have to be monitored and checked. Appropriate continuous or discontinuous measures are to be considered in this connection. The approach to ensure a high standard of quality in operation for the remaining life time, and the management of the technical and organizational aspects, are demonstrated and explained. The basis for ageing management to be applied to NPPs is included in Nuclear Safety Standard 1403, which describes the ageing management procedures. For SSC with high safety relevance a verification analysis for rupture preclusion (proof of integrity, integrity concept) shall be performed (Nuclear Safety
Real Time Metrics and Analysis of Integrated Arrival, Departure, and Surface Operations
Sharma, Shivanjli; Fergus, John
2017-01-01
A real-time dashboard was developed to present users with notifications and integrated information regarding airport surface operations. The dashboard supplements capabilities and tools that incorporate arrival, departure, and surface air-traffic operations concepts in a NextGen environment. As trajectory-based departure scheduling and collaborative decision-making tools are introduced to reduce delays and uncertainties in taxi and climb operations across the National Airspace System, users across a number of roles benefit from a real-time system that enables common situational awareness. In addition to shared situational awareness, the dashboard offers the ability to compute real-time metrics and analysis to inform users about the capacity, predictability, and efficiency of the system as a whole. This paper describes the architecture of the real-time dashboard as well as an initial set of metrics computed on operational data. The potential impact of the real-time dashboard is studied at the site identified for initial deployment and demonstration in 2017: Charlotte-Douglas International Airport. Analysis and metrics computed in real time illustrate the opportunity to provide common situational awareness and inform users of metrics across delay, throughput, taxi time, and airport capacity. In addition, common awareness of delays and the impact of takeoff and departure restrictions stemming from traffic flow management initiatives is explored. The potential of the real-time tool to inform the predictability and efficiency of using a trajectory-based departure scheduling system is also discussed.
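Dashboard-style surface metrics of the kind described above (delay, excess taxi time, throughput) reduce to simple aggregations over operational records. The sketch below is hypothetical: the record format, flight IDs, and the unimpeded-taxi threshold are invented for illustration, not the paper's data schema.

```python
from statistics import mean

# hypothetical departure records: (flight, scheduled_off, actual_off, taxi_min)
ops = [
    ("AA101", 10.0, 10.2, 14.0),
    ("DL202", 10.1, 10.6, 22.0),
    ("UA303", 10.2, 10.2, 11.0),
    ("WN404", 10.3, 10.9, 25.0),
]

def surface_metrics(records, unimpeded_taxi=12.0):
    """Illustrative dashboard metrics: mean departure delay (hours),
    mean excess taxi time (minutes), and throughput (departure count)."""
    delays = [actual - sched for _, sched, actual, _ in records]
    excess = [max(0.0, taxi - unimpeded_taxi) for *_, taxi in records]
    return {
        "throughput": len(records),
        "mean_delay_h": mean(delays),
        "mean_excess_taxi_min": mean(excess),
    }

m = surface_metrics(ops)
```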
An energy-stable time-integrator for phase-field models
Vignal, P.
2016-12-27
We introduce a provably energy-stable time-integration method for general classes of phase-field models with polynomial potentials. We demonstrate how Taylor series expansions of the nonlinear terms present in the partial differential equations of these models can lead to expressions that implicitly guarantee energy stability and are second-order accurate in time. The spatial discretization relies on a mixed finite element formulation and isogeometric analysis. We also propose an adaptive time-stepping discretization that relies on a first-order backward approximation to give an error estimator. This error estimator is accurate, robust, and does not require the computation of extra solutions to estimate the error. This methodology can be applied to any second-order accurate time-integration scheme. We present numerical examples in two and three spatial dimensions, which confirm the stability and robustness of the method. The implementation of the numerical schemes is done in PetIGA, a high-performance isogeometric analysis framework.
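Energy stability for a polynomial potential can be demonstrated on a 0-D analogue. The sketch below uses a classical convex-splitting (Eyre-type) step for the gradient flow u' = -(u^3 - u) of the double-well energy, not the paper's Taylor-expansion scheme; it simply illustrates what "energy decreases at every step, even for large steps" means. All values are assumptions for illustration.

```python
def energy(u):
    """Double-well free energy F(u) = (u^2 - 1)^2 / 4."""
    return 0.25 * (u * u - 1.0) ** 2

def eyre_step(u, dt, newton_iters=20):
    """One convex-splitting step for u' = -(u^3 - u): the convex term
    u^3 is taken implicitly, the concave term -u explicitly, which is
    unconditionally energy stable for this gradient flow."""
    rhs = u + dt * u
    v = u                       # Newton solve of v + dt*v^3 = rhs
    for _ in range(newton_iters):
        v -= (v + dt * v**3 - rhs) / (1.0 + 3.0 * dt * v * v)
    return v

u, dt = 2.0, 0.5               # deliberately large time step
energies = [energy(u)]
for _ in range(20):
    u = eyre_step(u, dt)
    energies.append(energy(u))
```

The recorded energies decrease monotonically toward the well at u = 1 despite the large step, which is the discrete analogue of the stability the paper proves.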
Freeman, Elliot D; Ipser, Alberta; Palmbaha, Austra; Paunoiu, Diana; Brown, Peter; Lambert, Christian; Leff, Alex; Driver, Jon
2013-01-01
The sight and sound of a person speaking or a ball bouncing may seem simultaneous, but their corresponding neural signals are spread out over time as they arrive at different multisensory brain sites. How subjective timing relates to such neural timing remains a fundamental neuroscientific and philosophical puzzle. A dominant assumption is that temporal coherence is achieved by sensory resynchronisation or recalibration across asynchronous brain events. This assumption is easily confirmed by estimating subjective audiovisual timing for groups of subjects, which is on average similar across different measures and stimuli, and approximately veridical. But few studies have examined normal and pathological individual differences in such measures. Case PH, with lesions in pons and basal ganglia, hears people speak before seeing their lips move. Temporal order judgements (TOJs) confirmed this: voices had to lag lip-movements (by ∼200 msec) to seem synchronous to PH. Curiously, voices had to lead lips (also by ∼200 msec) to maximise the McGurk illusion (a measure of audiovisual speech integration). On average across these measures, PH's timing was therefore still veridical. Age-matched control participants showed similar discrepancies. Indeed, normal individual differences in TOJ and McGurk timing correlated negatively: subjects needing an auditory lag for subjective simultaneity needed an auditory lead for maximal McGurk, and vice versa. This generalised to the Stream-Bounce illusion. Such surprising antagonism seems opposed to good sensory resynchronisation, yet average timing across tasks was still near-veridical. Our findings reveal remarkable disunity of audiovisual timing within and between subjects. To explain this we propose that the timing of audiovisual signals within different brain mechanisms is perceived relative to the average timing across mechanisms. Such renormalisation fully explains the curious antagonistic relationship between disparate timing
Saturations-based nonlinear controllers with integral term: validation in real-time
Alatorre, A. G.; Castillo, P.; Mondié, S.
2016-05-01
Popular saturations-based nonlinear controllers usually include proportional and derivative components of the state or output. The fact that in many applications these components do not suffice to ensure convergence to the desired output values motivates the addition of an integral term. In this paper, three configurations of nonlinear controllers based on saturation functions are improved with an integral component. The stability of the three algorithms is analysed using Lyapunov theory. Simulation results validate the proposed control laws when they are applied to nonlinear systems with constant and unknown perturbations. Real-time experiments realised with a quad-rotor aerial vehicle and a hovercraft vehicle show that the proposed scheme can autonomously follow certain trajectories, and that it could be robust with respect to delays.
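The motivation for the integral term can be shown on a double integrator with a constant unknown disturbance: a saturated PD law leaves a steady-state offset, while adding a saturated integral term removes it. This is a schematic simulation with assumed gains, not one of the paper's three configurations.

```python
def sat(v, limit=1.0):
    """Saturation function bounded by +/- limit."""
    return max(-limit, min(limit, v))

def simulate(ki, steps=40000, dt=0.005, disturbance=0.4):
    """Double integrator x'' = u + d with a saturations-based PD law
    plus an optional saturated integral term (illustrative gains)."""
    x, v, ix = 1.0, 0.0, 0.0
    for _ in range(steps):
        u = -sat(1.5 * x) - sat(2.0 * v) - sat(ki * ix)
        a = u + disturbance
        x, v, ix = x + dt * v, v + dt * a, ix + dt * x
    return x

err_pd = simulate(ki=0.0)   # PD only: the disturbance leaves an offset
err_pid = simulate(ki=0.3)  # the integral term drives the offset to ~0
```

With ki = 0 the state settles where sat(1.5 x) balances the disturbance (x ≈ 0.27); with the integral term the accumulated error absorbs the disturbance and x returns to zero.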
Further triple integral approach to mixed-delay-dependent stability of time-delay neutral systems.
Wang, Ting; Li, Tao; Zhang, Guobao; Fei, Shumin
2017-09-01
This paper studies the asymptotic stability of a class of neutral systems with mixed time-varying delays. By utilizing some Wirtinger-based integral inequalities and extending the convex combination technique, the upper bound on the derivative of the Lyapunov-Krasovskii (L-K) functional can be estimated more tightly, and three mixed-delay-dependent criteria are proposed in terms of linear matrix inequalities (LMIs), in which nonlinearity and parameter uncertainties are also involved, respectively. Differently from existing works, based on the interconnected relationship between the neutral delay and the state delay, some novel triple integral functional terms are constructed and the conservatism can be effectively reduced. Finally, two numerical examples are given to show the benefits of the proposed criteria. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Mucientes, A. E.; de la Pena, M. A.
2009-01-01
The concentration-time integrals method has been used to solve kinetic equations of parallel-consecutive first-order reactions with a reversible step. This method involves the determination of the area under the curve for the concentration of a given species against time. Computer techniques are used to integrate experimental curves and the method…
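The concentration-time integral idea can be shown on the simplest case: for a first-order step A' = -kA, integrating from 0 to t gives A0 - A(t) = k * AUC(t), so the rate constant follows directly from the area under the concentration curve. The sketch below recovers k from synthetic data with the trapezoidal rule; the values are illustrative, and the paper's parallel-consecutive reversible scheme involves several such integrals, not just one.

```python
import math

def trapz(ys, xs):
    """Area under the concentration-time curve by the trapezoidal rule."""
    return sum(0.5 * (ys[i] + ys[i + 1]) * (xs[i + 1] - xs[i])
               for i in range(len(ys) - 1))

# synthetic first-order decay A(t) = A0 * exp(-k t) with k = 0.25
k_true, A0 = 0.25, 1.0
ts = [0.1 * i for i in range(201)]          # t = 0 .. 20
As = [A0 * math.exp(-k_true * t) for t in ts]

# A0 - A(t) = k * AUC(t)  =>  k = (A0 - A(t)) / AUC(t)
auc = trapz(As, ts)
k_est = (A0 - As[-1]) / auc
```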
The Testability of Maximum Magnitude
Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.
2012-12-01
Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine whether there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future we suggest that PSHA modelers be brutally honest about the uncertainty of M estimates, or find a way to decrease its influence on the estimated hazard.
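Why such tests are so weak can be seen with a minimal Poisson / Gutenberg-Richter sketch of the maximum magnitude in a future window: large exceedances are so rare that "no exceedance observed" carries almost no evidential weight. The a and b values below are illustrative assumptions, not a calibrated hazard model, and this is not the paper's exact extreme-value derivation.

```python
import math

def p_max_below(m, T, a=4.0, b=1.0):
    """P(no event of magnitude >= m within T years) under a Poisson
    Gutenberg-Richter model with rate(>=m) = 10**(a - b*m) per year
    (illustrative parameters only)."""
    rate = 10.0 ** (a - b * m)
    return math.exp(-rate * T)

T = 50.0
p7 = p_max_below(7.0, T)        # chance the 50-yr maximum stays below M7
p9 = p_max_below(9.0, T)        # chance it stays below M9
p_exceed_9 = 1.0 - p9           # an M>=9 in 50 years is a ~0.05% event
```

Even over 50 years, failing to observe an M9 is the overwhelmingly likely outcome whether or not M9 is possible, which is the testability problem the abstract describes.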
Rubin, Stephen P.; Reisenbichler, Reginald R.; Slatton, Stacey L.; Wetzel, Lisa A.; Hayes, Michael C.
2012-01-01
The accuracy of a model that predicts the time between fertilization and maximum alevin wet weight (MAWW) from incubation temperature was tested for steelhead Oncorhynchus mykiss from Dworshak National Fish Hatchery on the Clearwater River, Idaho. MAWW corresponds to the button-up fry stage of development. Embryos were incubated at warm (mean = 11.6°C) or cold (mean = 7.3°C) temperatures, and the time between fertilization and MAWW was measured for each temperature. Model predictions of time to MAWW were within 1% of the measured time to MAWW. Mean egg weight ranged from 0.101 to 0.136 g among females (mean = 0.116 g). Time to MAWW was positively related to egg size for each temperature, but the increase in time to MAWW with increasing egg size was greater for embryos reared at the warm than at the cold temperature. We developed equations accounting for the effect of egg size on time to MAWW for each temperature, and also for the mean of those temperatures (9.3°C).
Maximum Likelihood Associative Memories
Gripon, Vincent; Rabbat, Michael
2013-01-01
Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort-of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We derive minimum residual error rates when the data stored comes from a uniform binary source. Second, we determine the minimum amo...
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives, possibly taken at several scales and/or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior of the measurements is completely characterized by all moments up to second order.
Meyer, F. J.; Webley, P.; Dehn, J.; Arko, S. A.; McAlpin, D. B.
2013-12-01
Volcanic eruptions are among the most significant hazards to human society, capable of triggering natural disasters on regional to global scales. In the last decade, remote sensing techniques have become established in operational forecasting, monitoring, and managing of volcanic hazards. Monitoring organizations, like the Alaska Volcano Observatory (AVO), nowadays rely heavily on remote sensing data from a variety of optical and thermal sensors to provide time-critical hazard information. Despite the high utilization of these remote sensing data to detect and monitor volcanic eruptions, the presence of clouds and a dependence on solar illumination often limit their impact on decision-making processes. Synthetic Aperture Radar (SAR) systems are widely believed to be superior to optical sensors in operational monitoring situations, due to the weather and illumination independence of their observations and the sensitivity of SAR to surface changes and deformation. Despite these benefits, the contributions of SAR to operational volcano monitoring have been limited in the past due to (1) high SAR data costs, (2) traditionally long data processing times, and (3) the low temporal sampling frequencies inherent to most SAR systems. In this study, we present improved data access, data processing, and data integration techniques that mitigate some of the above-mentioned limitations and allow, for the first time, a meaningful integration of SAR into operational volcano monitoring systems. We introduce a new database interface that was developed in cooperation with the Alaska Satellite Facility (ASF) and allows for rapid and seamless access to all of ASF's SAR data holdings. We also present processing techniques that improve the temporal frequency with which hazard-related products can be produced. These techniques take advantage of modern signal processing technology as well as new radiometric normalization schemes, both enabling the combination of
Time-Dependent and Time-Integrated Angular Analysis of B -> phi Ks pi0 and B -> phi K+ pi-
Aubert, B; Bona, M; Karyotakis, Y; Lees, J P; Poireau, V
2008-08-04
We perform a time-dependent and time-integrated angular analysis of the B0 -> phi K*(892)0, phi K*2(1430)0, and phi (K pi)0 S-wave decays with the final sample of about 465 million B Bbar pairs recorded with the BABAR detector. Overall, twelve parameters are measured for the vector-vector decay, nine parameters for the vector-tensor decay, and three parameters for the vector-scalar decay, including the branching fractions, CP-violation parameters, and parameters sensitive to final-state interaction. We use the dependence on the K pi invariant mass of the interference between the scalar and vector or tensor components to resolve discrete ambiguities of the strong and weak phases. We use the time evolution of the B -> phi K0_S pi0 channel to extract the CP-violation phase difference Delta phi_00 = 0.28 +/- 0.42 +/- 0.04 between the B and Bbar decay amplitudes. When the B -> phi K+- pi-+ channel is included, the fractions of longitudinal polarization f_L of the vector-vector and vector-tensor decay modes are measured to be 0.494 +/- 0.034 +/- 0.013 and 0.901 +0.046/-0.058 +/- 0.037, respectively. This polarization pattern requires the presence of a helicity-plus amplitude in the vector-vector decay from a presently unknown source.
Ming-Feng Yang
2016-01-01
Nowadays, in order to achieve advantages in supply chain management, keeping inventory at an adequate level and enhancing the customer service level are two critical practices for decision makers. Generally, uncertain lead time and defective products have much to do with inventory and service level. Therefore, this study mainly aims at developing a multiechelon integrated just-in-time inventory model with uncertain lead time and imperfect quality to enhance the benefits of the logistics model. In addition, an Ant Colony Algorithm (ACA) is established to determine the optimal solutions. Moreover, based on our proposed model and analysis, the ACA is more efficient than Particle Swarm Optimization (PSO) and Lingo for the SMEIJI model. An example is provided in this study to illustrate how production run and defective rate affect system costs. Finally, the results of our research provide some managerial insights which support decision makers in real-world operations.
Integrated model for pricing, delivery time setting, and scheduling in make-to-order environments
Garmdare, Hamid Sattari; Lotfi, M. M.; Honarvar, Mahboobeh
2017-05-01
Usually, in make-to-order environments, which work only in response to customers' orders, manufacturers seeking to maximize profit should offer the best price and delivery time for an order, considering the existing capacity and the customer's sensitivity to both factors. In this paper, an integrated approach for pricing, delivery time setting, and scheduling of newly arriving orders is proposed, based on the existing capacity and the orders already accepted into the system. In the problem, the acquired market demand depends on the price and delivery time of both the manufacturer and its competitors. A mixed-integer non-linear programming model is presented for the problem. After conversion to a pure non-linear model, it is validated through a case study. The efficiency of the proposed model is confirmed by comparing it to both the literature and current practice. Finally, sensitivity analysis for the key parameters is carried out.
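The price/delivery-time trade-off can be made concrete with a crude grid search: demand falls with both price and promised lead time, and a quote is only feasible if the order fits the available capacity. The linear demand form, all coefficients, and the capacity rule below are hypothetical illustrations, not the paper's mixed-integer non-linear model.

```python
def demand(price, lead_time, base=100.0, alpha=2.0, beta=5.0):
    """Price- and delivery-time-sensitive demand (hypothetical linear
    form; the paper's demand also depends on competitors' quotes)."""
    return max(0.0, base - alpha * price - beta * lead_time)

def best_quote(unit_cost=10.0, capacity_per_day=6.0):
    """Grid search over price and promised delivery time, keeping only
    quotes the shop can actually finish in time."""
    best = (float("-inf"), None, None)
    for price in [round(10 + 0.5 * i, 1) for i in range(61)]:   # 10..40
        for lead in range(1, 15):                               # days
            d = demand(price, lead)
            if d > capacity_per_day * lead:   # cannot finish in time
                continue
            profit = (price - unit_cost) * d
            if profit > best[0]:
                best = (profit, price, lead)
    return best

profit, price, lead = best_quote()
```

Note how the capacity constraint pushes the optimum away from the unconstrained best price: a shorter promised lead time raises demand but may be infeasible.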
Analysis of DNA sequences by an optical time-integrating correlator.
Brousseau, N; Brousseau, R; Salt, J W; Gutz, L; Tucker, M D
1992-08-10
The analysis of the molecular structure called DNA is of particular interest for the understanding of the basic processes governing life. Correlation techniques implemented on digital computers are currently used to do this analysis, but the process is so slow that the mapping and sequencing of the entire human genome requires a computational breakthrough. This paper presents a new method of performing the analysis of DNA sequences with an optical time-integrating correlator. The method is characterized by short processing times that make the analysis of the entire human genome a tractable enterprise. A processing strategy and the resultant processing times are presented. Experimental proofs of concept for the two types of analysis specified by the strategy are also included.
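What the optical time-integrating correlator computes in a single pass is, digitally, a sliding match count of a probe against a sequence. The toy sketch below shows that operation; the sequences are invented for illustration, and the optical device of course evaluates all offsets in parallel rather than in a Python loop.

```python
def correlate(seq, probe):
    """Count matching bases of `probe` against `seq` at every offset --
    the digital analogue of the time-integrating correlation."""
    scores = []
    for off in range(len(seq) - len(probe) + 1):
        scores.append(sum(s == p for s, p in zip(seq[off:], probe)))
    return scores

genome = "ATGGCGTACGATCCGTAGGCTA"   # hypothetical fragment
probe = "TACGAT"
scores = correlate(genome, probe)
best = max(range(len(scores)), key=scores.__getitem__)
```

A score equal to the probe length flags an exact match; near-maximal scores flag approximate matches, which is what makes correlation useful for sequence analysis.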
Fully integrated monolithic optoelectronic transducer for real-time protein and DNA detection
Misiakos, Konstatinos; S. Petrou, Panagiota; E. Kakabakos, Sotirios
2010-01-01
The development and testing of a portable bioanalytical device capable of real-time monitoring of binding assays was demonstrated. The device was based on arrays of nine optoelectronic transducers monolithically integrated on silicon chips. The optocouplers consisted of nine silicon … scheme through a board-to-board receptacle was developed and combined with a portable customized readout and control instrument. Real-time detection of deleterious mutations in the BRCA1 gene related to predisposition to hereditary breast/ovarian cancer was performed with the instrument developed, using PCR products. Detection was based on the elimination of waveguided photons through interaction with fluorescently labeled PCR products. Detection of single biomolecular binding events was also demonstrated using nanoparticles as labels. In addition, label-free monitoring of bioreactions in real time was achieved …
A new simple method of implicit time integration for dynamic problems of engineering structures
Jun Zhou; Youhe Zhou
2007-01-01
This paper presents a new simple method of implicit time integration with two control parameters for solving initial-value problems of dynamics, such that its accuracy is at least of order two, along with the conditional and unconditional stability regions of the parameters. When the control parameters in the method are optimally taken in their regions, the accuracy may be improved to reach order three. It is found that the new scheme can achieve lower numerical amplitude dissipation and period dispersion than some of the existing methods, e.g. the Newmark method and Zhai's approach, when the same time step size is used. The region of time step dependent on the parameters in the new scheme is explicitly obtained. Finally, some examples of dynamic problems are given to show the accuracy and efficiency of the proposed scheme applied in dynamic systems.
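For reference, here is the classical implicit Newmark scheme the paper compares against, applied to an undamped oscillator u'' + omega^2 u = 0. With the average-acceleration parameters (beta = 1/4, gamma = 1/2) it is unconditionally stable with no amplitude dissipation, only period dispersion; this sketch is the baseline method, not the paper's two-parameter scheme.

```python
import math

def newmark(omega, dt, steps, u0=1.0, v0=0.0, beta=0.25, gamma=0.5):
    """Implicit Newmark time integration for u'' + omega^2 u = 0.
    For a linear oscillator the implicit solve reduces to one division."""
    u, v = u0, v0
    a = -omega**2 * u
    for _ in range(steps):
        # predictors from the Newmark displacement/velocity expansions
        u_star = u + dt * v + dt * dt * (0.5 - beta) * a
        v_star = v + dt * (1.0 - gamma) * a
        # solve (1 + beta*dt^2*omega^2) * a_new = -omega^2 * u_star
        a_new = -omega**2 * u_star / (1.0 + beta * dt * dt * omega**2)
        u = u_star + beta * dt * dt * a_new
        v = v_star + gamma * dt * a_new
        a = a_new
    return u, v

# one full period of a unit-amplitude oscillator, 100 steps
omega = 2.0 * math.pi
u_end, v_end = newmark(omega, dt=0.01, steps=100)
```

After one period the displacement returns very close to 1; the small residual phase error is the period dispersion that higher-order schemes such as the paper's aim to reduce.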
Integrated multi-channel receiver for a pulsed time-of-flight laser radar
Jiang, Yan; Liu, Ruqing; Zhu, Jingguo
2015-04-01
An integrated multi-channel receiver for a pulsed time-of-flight (TOF) laser rangefinder has been designed in this paper. The receiver chip, an important component of the laser radar device, has been implemented in a 0.18 µm CMOS process. It consists of sixteen channels; every channel includes a preamplifier, amplifier stages, a high-pass filter and a timing discriminator which contains a timing comparator and a noise comparator. Each signal path is independent of the other channels. Based on the simulations, the bandwidth and transimpedance of the amplifier channel are 652 MHz and 99 dBΩ. Under the simulation condition of the TT corner and 27 °C, the propagation delay of the discriminator is 2.15 ns and the propagation delay dispersion is 223 ps. The power consumption during continuous measurement is 810 mW, and the operating temperature range of the device is -10 to 60 °C.
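The figures above translate directly into range performance: a pulsed TOF rangefinder converts round-trip time to distance as d = c*t/2, so timing dispersion maps linearly to range uncertainty. A minimal sketch (the 100 m target is an assumed example):

```python
C = 299_792_458.0  # speed of light, m/s

def tof_to_range(t_seconds):
    """Convert a round-trip time of flight to one-way distance."""
    return 0.5 * C * t_seconds

# a target ~100 m away returns the pulse after about 667 ns
rng = tof_to_range(667e-9)

# the 223 ps propagation delay dispersion quoted above corresponds
# to a single-shot range uncertainty of roughly 3.3 cm
range_jitter = tof_to_range(223e-12)
```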
Parallel, explicit, and PWTD-enhanced time domain volume integral equation solver
Liu, Yang
2013-07-01
Time domain volume integral equations (TDVIEs) are useful for analyzing transient scattering from inhomogeneous dielectric objects in applications as varied as photonics, optoelectronics, and bioelectromagnetics. TDVIEs typically are solved by implicit marching-on-in-time (MOT) schemes [N. T. Gres et al., Radio Sci., 36, 379-386, 2001], requiring the solution of a system of equations at each and every time step. To reduce the computational cost associated with such schemes, [A. Al-Jarro et al., IEEE Trans. Antennas Propagat., 60, 5203-5215, 2012] introduced an explicit MOT-TDVIE method that uses a predictor-corrector technique to stably update field values throughout the scatterer. By leveraging memory-efficient nodal spatial discretization and scalable parallelization schemes [A. Al-Jarro et al., in 28th Int. Rev. Progress Appl. Computat. Electromagn., 2012], this solver has been successfully applied to the analysis of scattering phenomena involving 0.5 million spatial unknowns. © 2013 IEEE.
BiGGEsTS: integrated environment for biclustering analysis of time series gene expression data.
Gonçalves, Joana P; Madeira, Sara C; Oliveira, Arlindo L
2009-07-07
The ability to monitor changes in expression patterns over time, and to observe the emergence of coherent temporal responses using expression time series, is critical to advance our understanding of complex biological processes. Biclustering has been recognized as an effective method for discovering local temporal expression patterns and unraveling potential regulatory mechanisms. The general biclustering problem is NP-hard. In the case of time series this problem is tractable, and efficient algorithms can be used. However, there is still a need for specialized applications able to take advantage of the temporal properties inherent to expression time series, both from a computational and a biological perspective. BiGGEsTS makes available state-of-the-art biclustering algorithms for analyzing expression time series. Gene Ontology (GO) annotations are used to assess the biological relevance of the biclusters. Methods for preprocessing expression time series and post-processing results are also included. The analysis is additionally supported by a visualization module capable of displaying informative representations of the data, including heatmaps, dendrograms, expression charts and graphs of enriched GO terms. BiGGEsTS is a free open source graphical software tool for revealing local coexpression of genes in specific intervals of time, while integrating meaningful information on gene annotations. It is freely available at: http://kdbio.inesc-id.pt/software/biggests. We present a case study on the discovery of transcriptional regulatory modules in the response of Saccharomyces cerevisiae to heat stress.
Integrated survival analysis using an event-time approach in a Bayesian framework
Walsh, Daniel P.; Dreitz, VJ; Heisey, Dennis M.
2015-01-01
Event-time or continuous-time statistical approaches have been applied throughout the biostatistical literature and have led to numerous scientific advances. However, these techniques have traditionally relied on knowing failure times, which has limited their application, particularly within the ecological field, where the fates of marked animals may be unknown. To address these limitations, we developed an integrated approach within a Bayesian framework to estimate hazard rates in the face of unknown fates. We combine failure/survival times from individuals whose fates are known, and whose times may be interval-censored, with information from those whose fates are unknown, and model the process of detecting animals with unknown fates. This provides the foundation for our integrated model and permits the necessary parameter estimation. We provide the Bayesian model and its derivation, and use simulation techniques to investigate the properties and performance of our approach under several scenarios. Lastly, we apply our estimation technique, using a piece-wise constant hazard function, to investigate the effects of year, age, chick size and sex, sex of the tending adult, and nesting habitat on the mortality hazard rates of endangered mountain plover (Charadrius montanus) chicks. Traditional models were inappropriate for this analysis because the fates of some individual chicks were unknown due to failed radio transmitters. Simulations revealed that biases of posterior mean estimates were minimal (≤ 4.95%), and posterior distributions behaved as expected, with the RMSE of the estimates decreasing as sample size, detection probability, and survival increased. We determined that mortality hazard rates for plover chicks were highest at <5 days old and were lower for chicks with larger birth weights and/or whose nest was within agricultural habitats. Based on its performance, our approach greatly expands the range of problems for which event-time analyses can be used by eliminating the
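The piece-wise constant hazard used above has a simple closed form: survival is S(t) = exp(-H(t)) with the cumulative hazard H(t) accumulated interval by interval. The sketch below shows that mechanic with hypothetical daily hazards (high before age 5 days, lower after); the numbers are assumptions for illustration, not the paper's posterior estimates.

```python
import math

def survival(t, breakpoints, hazards):
    """Survival S(t) = exp(-H(t)) for a piecewise-constant hazard:
    hazards[i] applies on [breakpoints[i], breakpoints[i+1])."""
    H, lo = 0.0, 0.0
    edges = list(breakpoints[1:]) + [float("inf")]
    for h, hi in zip(hazards, edges):
        H += h * max(0.0, min(t, hi) - lo)
        lo = hi
        if t <= hi:
            break
    return math.exp(-H)

# hypothetical daily hazards: elevated for chicks < 5 days old
bps = [0.0, 5.0]
hz = [0.10, 0.02]
s5 = survival(5.0, bps, hz)    # probability of surviving the first 5 days
s30 = survival(30.0, bps, hz)  # probability of surviving to 30 days
```

In the Bayesian model the per-interval hazards become parameters with priors, and covariates (age, size, habitat) shift them; the survival function itself is evaluated exactly as here.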
Gómez Rodríguez, Rafael Ángel
2014-01-01
To say that someone possesses integrity is to claim that that person is almost predictable in his or her responses to specific situations, and that he or she can judge prudently and act correctly. There is a close interrelationship between integrity and autonomy, and autonomy rests on the deeper moral claim of all humans to integrity of the person. Integrity has two senses of significance for medical ethics: one refers to the integrity of the person in its bodily, psychosocial and intellectual elements; in the second sense, integrity is a virtue. Another facet of integrity of the person is the integrity of the values we cherish and espouse. The physician must be a person of integrity if the integrity of the patient is to be safeguarded. Autonomy has reduced violations in the past, but the character and virtues of the physician are the ultimate safeguard of the patient's autonomy. A very important field in medicine is scientific research. It is the character of the investigator that determines the moral quality of research. The problem arises when legitimate self-interests are replaced by selfish ones, particularly when human subjects are involved. The final safeguard of the moral quality of research is the character and conscience of the investigator. Teaching must be relevant in the scientific field, but the most effective way to teach virtue ethics is through the example of a respected scientist.
Prashant Jindal
2016-06-01
For the past four decades the integrated vendor-buyer supply chain inventory model has been an interesting topic, but quality improvement of defective items in the integrated inventory model with backorder price discount and controllable lead time has rarely been discussed. The aim of this paper is to minimize the total related cost in the continuous review model by considering the order quantity, reorder point, lead time, process quality, backorder price discount and number of shipments as decision variables. Moreover, we assume that an investment function is used to improve the process quality. The lead time demand follows a normal distribution. In addition, the buyer offers a backorder price discount to motivate customers toward possible backorders. There are some defective items in the arrival lot, so their treatment is also taken into account in this paper. We develop an iterative procedure for finding the optimal values of the decision variables, and a numerical example is presented to illustrate the solution procedure. Additionally, a sensitivity analysis with respect to the major parameters is carried out.
Importance of a midterm time horizon for addressing ethical issues integral to nanobiotechnology.
Khushf, George
2007-01-01
There is a consensus emerging on the importance of upstream ethical engagement in nanobiotechnology. Such a preventive ethic would anticipate downstream concerns that might arise and mitigate them as part of the research and development process. However, there is an unappreciated tension between the time horizon of upstream ethics and that assumed by most bioethical research. Current standards of high-quality research on ethical issues bias the research in favor of near-term, science-based, results-oriented work. A near-term focus would miss many of the important ethical issues integral to nanobiotechnology and undermine the goals integral to upstream ethical engagement. However, if we move to a far-term time horizon, the ethical debates tend to get too speculative and are no longer disciplined by existing research trajectories. This paper addresses the link between the midterm time horizon necessary for upstream ethics and the form, content, and style of ethical reflection. New paradigm cases, standards, and criteria will be needed for high-quality upstream ethics work in the area of nanobiotechnology.
An Integrative Approach with Sequential Game to Real-Time Gate Assignment under CDM Mechanism
Jun-qiang Liu
2014-01-01
This paper focuses on the real-time airport gate assignment problem when small-scale or medium- to large-scale flight delays occur. Taking into account the collaborative decision making (CDM) of the airlines and the airport, as well as the interests of multiple agents (airlines, airports, and passengers), especially those influenced by flight banks, slot assignment and gate assignment are integrated into mixed set programming (MSP), and a real-time gate assignment model is built and solved through MSP coupled with sequential game. By this approach, the delay costs of all agents can be minimized simultaneously; the fuel consumption of each airline can be roughly equalized; the computation time can be significantly reduced by the sequential game; most importantly, collaboration of the airlines and the airport is achieved so that the transfer cost caused by the delay of flight banks can be decreased as much as possible. A case study on small-scale flight delays verifies that the proposed approach is economical, robust, timesaving, and collaborative. A comparison of the traditional staged method and the proposed approach under medium- to large-scale flight delays proves that the integrative method is much more economical and timesaving than the traditional staged method.
Bordbar, Aarash; Yurkovich, James T.; Paglia, Giuseppe
2017-01-01
The increasing availability of metabolomics data necessitates novel methods for deeper data analysis and interpretation. We present a flux balance analysis method that allows for the computation of dynamic intracellular metabolic changes at the cellular scale through integration of time-course absolute quantitative metabolomics. This approach, termed "unsteady-state flux balance analysis" (uFBA), is applied to four cellular systems: three dynamic and one steady-state as a negative control. uFBA and FBA predictions are contrasted, and uFBA is found to be more accurate in predicting dynamic [...] isotopic labeling and metabolic flux analysis in stored red blood cells. Utilizing time-course metabolomics data, uFBA provides an accurate method to predict metabolic physiology at the cellular scale for dynamic systems.
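The flux balance analysis step underlying uFBA is a linear program over the stoichiometric matrix. A toy sketch with an invented three-reaction chain (uFBA itself additionally relaxes the steady-state constraint using measured metabolite accumulation/depletion rates):

```python
import numpy as np
from scipy.optimize import linprog

# Toy steady-state FBA: maximize the flux through an "output" reaction v2
# subject to S·v = 0 and flux bounds. S and the bounds are invented for
# illustration only.
S = np.array([[1, -1,  0],    # metabolite A: produced by v0, consumed by v1
              [0,  1, -1]])   # metabolite B: produced by v1, consumed by v2
bounds = [(0, 10), (0, 10), (0, 10)]
c = [0, 0, -1]                # linprog minimizes, so negate the objective

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(res.x)  # steady state forces v0 = v1 = v2, so all fluxes reach 10
```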
Leclerc, Arnaud; Viennot, David; Killingbeck, John P; 10.1063/1.3673320
2012-01-01
The Constrained Adiabatic Trajectory Method (CATM) is reexamined as an integrator for the Schrödinger equation. An initial discussion places the CATM in the context of the different integrators used in the literature for time-independent or explicitly time-dependent Hamiltonians. The emphasis is put on adiabatic processes and within this adiabatic framework the interdependence between the CATM, the wave operator, the Floquet and the (t,t') theories is presented in detail. Two points are then more particularly analysed and illustrated by a numerical calculation describing the $H_2^+$ ion submitted to a laser pulse. The first point is the ability of the CATM to dilate the Hamiltonian spectrum and thus to make the perturbative treatment of the equations defining the wave function possible, possibly by using a Krylov subspace approach as a complement. The second point is the ability of the CATM to handle extremely complex time-dependencies, such as those which appear when interaction representations are used to...
Feasibility of the integration of CRONOS, a 3-D neutronics code, into real-time simulators
Ragusa, J.C. [CEA Saclay, Dept. de Mecanique et de Technologie, 91 - Gif-sur-Yvette (France)
2001-07-01
In its effort to contribute to nuclear power plant safety, CEA proposes the integration of an engineering grade 3-D neutronics code into a real-time plant analyser. This paper describes the capabilities of the neutronics code CRONOS to achieve a fast running performance. First, we will present current core models in simulators and explain their drawbacks. Secondly, the main features of CRONOS's spatial-kinetics methods will be reviewed. We will then present an optimum core representation with respect to mesh size, choice of finite elements (FE) basis and execution time, for accurate results, as well as the multi 1-D thermal-hydraulics (T/H) model developed to take into account 3-D effects in updating the cross-sections. A Main Steam Line Break (MSLB) End-of-Life (EOL) Hot-Zero-Power (HZP) accident will be used as an example, before we conclude with the perspectives of integrating CRONOS's 3-D core model into real-time simulators. (author)
Kim, Sangtaek; Wagner, Kelvin H.; Narayanan, Ram M.; Zhou, Wei
2005-10-01
We describe a time-integrating acousto-optic correlator (TIAOC) developed for imaging and target detection using a wideband random-noise radar system. This novel polarization interferometric in-line TIAOC uses an intensity-modulated laser diode for the random-noise reference and a polarization-switching, self-collimating acoustic shear-mode gallium phosphide (GaP) acousto-optic device for traveling-wave modulation of the radar returns. The time-integrated correlation output is detected on a 1-D charge-coupled device (CCD) detector array and calibrated and demodulated in real time to produce the complex radar range profile. The complex radar reflectivity is measured in more than 150 radar range bins in parallel on the 3000 pixels of the CCD, improving target acquisition speeds and sensitivities by a factor of 150 over previous serial analog correlator approaches. The polarization interferometric detection of the correlation using the undiffracted light as the reference allows us to use the full acousto-optic device (AOD) bandwidth as the system bandwidth. Also, the experimental results show fully complex random-noise signal correlation and coherent demodulation without an explicit carrier, demonstrating that optically processed random-noise radars do not need a stable local oscillator.
Piao, Daqing; Holyoak, G Reed; Patel, Sanjay
2016-01-01
We demonstrate a laparoscopic applicator probe and a method thereof for real-time en-face topographic mapping of near-surface heterogeneity for potential use in intraoperative margin assessment during minimally invasive oncological procedures. The probe fits in a 12 mm port and houses at most 128 copper-coated 750 μm fibers that form radially alternating illumination (70 fibers) and detection (58 fibers) channels. By simultaneously illuminating the 70 source channels of the probe that is in contact with a scattering medium and concurrently measuring the light diffusely propagated to the 58 detector channels, the presence of near-surface optical heterogeneities can be resolved in an en-face 9.5 mm field-of-view in real time. Visualization of a subsurface margin of strong attenuation contrast at a depth of up to 3 mm is demonstrated at one wavelength at a frame rate of 1.25 Hz.
G-larmS: Integrating Real-Time GPS into Earthquake Early Warning
Grapenthin, R.; Johanson, I. A.; Allen, R. M.
2013-12-01
In an effort to improve earthquake parameter estimation in earthquake early warning for large earthquakes (such as moment magnitude and finite fault geometry), the BSL is working to integrate information from real-time GPS and now generates and archives real-time position estimates using data from 62 GPS stations in the greater San Francisco Bay Area. This includes 26 stations that are operated by the BSL as part of the Bay Area Regional Deformation (BARD) network, 8 that are operated by the USGS, and 29 stations operated by the Plate Boundary Observatory. Data from these sites are processed in a fully triangulated network scheme in which neighboring station pairs are processed with the software trackRT. Positioning time series are produced operationally for 172 station pairs; additional station pairs will be added as more real-time stations become available. G-larmS, the geodetic alarm system, sits on top of real-time GPS processors such as trackRT, analyzes real-time positioning time series, and determines and broadcasts static offsets and quality parameters from these. Following this, G-larmS derives fault and magnitude information from the static offsets and broadcasts these results as well. This prototype Python implementation is tightly integrated into seismic alarm systems (CISN ShakeAlert, ElarmS), as it uses their P-wave detection alarms to trigger its processing. Testing the results of real-time GPS for earthquake early warning (EEW) under realistic conditions, and for scenarios that are relevant to the San Francisco Bay Area's tectonic environment, is a major step toward having our work accepted for integration with an operational EEW system. While Northern California has many small earthquakes, it is for larger events (i.e., Mw ≥ 6.5) that real-time GPS is expected to provide a significant contribution. This is because for larger events seismic systems need additional information to correctly estimate magnitude and finite fault extent and because real-time GPS suffers from a
FIRST-PASSAGE TIME OF QUASI-NON-INTEGRABLE-HAMILTONIAN SYSTEM
甘春标; 徐博侯
2000-01-01
Studies on first-passage failure are extended to multi-degree-of-freedom quasi-non-integrable Hamiltonian systems under parametric excitations of Gaussian white noises in this paper. By the stochastic averaging method of energy envelope, the system's energy can be modeled as a one-dimensional approximate diffusion process, for which the classical Pontryagin equation with suitable boundary conditions is applicable to analyzing the statistical moments of the first-passage time of an arbitrary order. An example is studied in detail and some numerical results are given to illustrate the above procedure.
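The mean first-passage time that the Pontryagin equation yields analytically can be cross-checked by direct simulation of a one-dimensional diffusion. A Monte Carlo sketch with hypothetical constant drift and diffusion coefficients (not the averaged energy-envelope process of the paper):

```python
import random

def mean_first_passage(drift, diff, x0, barrier, dt=1e-3, n_paths=500, t_max=50.0):
    """Euler-Maruyama estimate of the mean first-passage time of the
    1-D diffusion dX = drift(X) dt + sqrt(diff(X)) dW to the barrier."""
    total = 0.0
    for _ in range(n_paths):
        x, t = x0, 0.0
        while x < barrier and t < t_max:
            x += drift(x) * dt + (diff(x) * dt) ** 0.5 * random.gauss(0.0, 1.0)
            t += dt
        total += t
    return total / n_paths

random.seed(0)
# Hypothetical coefficients: constant drift 0.5 and diffusion 0.1.
tau = mean_first_passage(lambda x: 0.5, lambda x: 0.1, x0=0.0, barrier=1.0)
print(tau)  # close to barrier/drift = 2 (Wald's identity gives E[T] = 2)
```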
Li-Hsing Ho
2013-01-01
Just-In-Time (JIT) has been playing an important role in supply chain environments. Countless firms have applied JIT in production to gain and maintain a competitive advantage. This study introduces an innovative model which integrates inventory and quality assurance in a JIT supply chain. The approach assumes that manufacturing will produce some defective items and that those products will not influence the buyer's purchase policy. The vendor absorbs all inspection costs. A function for the expected annual total cost is derived and used to minimize both the total cost and the nonconforming fraction. Finally, a numerical example further confirms this model.
Local Exponential Methods: a domain decomposition approach to exponential time integration of PDEs
Bonaventura, Luca
2015-01-01
A local approach to the time integration of PDEs by exponential methods is proposed, motivated by theoretical estimates by A. Iserles on the decay of off-diagonal terms in the exponentials of sparse matrices. An overlapping domain decomposition technique is outlined that makes it possible to replace the computation of a global exponential matrix by a number of independent and easily parallelizable local problems. Advantages and potential problems of the proposed technique are discussed. Numerical experiments on simple, yet relevant model problems show that the resulting method increases computational efficiency with respect to standard implementations of exponential methods.
Nielsen, Martin Bjerre; Krenk, Steen
2012-01-01
A conservative time integration algorithm for rigid body rotations is presented in a purely algebraic form in terms of the four quaternion components and the four conjugate momentum variables via Hamilton's equations. The introduction of an extended mass matrix leads to a symmetric set of eight state-space equations where constraints are embedded without explicit use of Lagrange multipliers. The algorithm is developed by forming a finite increment of the Hamiltonian, which defines the proper selection of increments and mean values that leads to conservation of energy and momentum. The accuracy and conservation properties are illustrated by examples.
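A small illustration of algebraic conservation in quaternion-based integration: the implicit midpoint rule applied to the quaternion kinematics q' = (1/2) q⊗(0, ω) is a Cayley transform of a skew-symmetric matrix and therefore conserves the unit-norm constraint exactly. This is only a sketch of the conservation idea, not the paper's eight-variable energy-momentum scheme:

```python
import numpy as np

def midpoint_step(q, omega, dt):
    """One implicit-midpoint step of q' = (1/2) q⊗(0, omega). The
    right-multiplication matrix M is skew-symmetric, so the update
    (I - dt/2 M)^{-1}(I + dt/2 M) is orthogonal and |q| is conserved
    up to roundoff."""
    w1, w2, w3 = omega
    M = 0.5 * np.array([[0.0, -w1, -w2, -w3],
                        [w1,  0.0,  w3, -w2],
                        [w2, -w3,  0.0,  w1],
                        [w3,  w2, -w1,  0.0]])
    I = np.eye(4)
    return np.linalg.solve(I - 0.5 * dt * M, (I + 0.5 * dt * M) @ q)

q = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(1000):                      # hypothetical constant body rate
    q = midpoint_step(q, (0.3, -0.2, 0.5), 0.01)
print(abs(np.linalg.norm(q) - 1.0) < 1e-12)  # True: norm conserved
```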
Auditory temporal resolution and integration - stages of analyzing time-varying sounds
Pedersen, Benjamin
2007-01-01
[...] much is still unknown of how temporal information is analyzed and represented in the auditory system. The PhD lecture concerns the topic of temporal processing in hearing, and the topic is approached via four different listening experiments designed to probe several aspects of temporal processing [...] scheme: effects such as attention seem to play an important role in loudness integration, and further, it will be demonstrated that the auditory system can rely on temporal cues at a much finer level of detail than predicted by existing models (temporal details in the time range of 60 μs can...
Data Integration in Support of a Real-Time Biosurveillance Network
Cross, S. L.; Scott, G. I.; Miglarese, J. V.
2008-12-01
Recent emergency and security events from both human and natural causes have increased the urgency for multidisciplinary data integration. For example, understanding natural resource mortalities on any given day and time of year may result in the timely identification of an intentional biological or chemical act, as well as assist in the development of recovery and restoration plans. The South Carolina Environmental Surveillance Network (ESN) is a real-time surveillance network of coastal-zone wildlife mortality incidents (e.g. fish kills, bird kills, animal disease outbreaks, harmful algal blooms, marine mammal strandings, etc.) that (1) notifies participating network science and regulatory experts of incidents; (2) allows for quick assessments of potential links between and among mortalities and (3) provides a mechanism to alert the emergency management community of incidents that could impact commerce and public health. The ESN data management system relies on a resource-based (or RESTful) approach and includes a Web mapping application that provides access to both real-time and historical data, as well as data flow that analyzes event co-occurrence and provides for email notification of a number of state and federal partners. Notably, it is not simply the occurrence, but the co-occurrence of these events that can signal emergency conditions; thus the real value of the ESN is in its integration of data streams across state and federal administrative lines that have historically provided barriers to data and information flow. In our experience, two recurring types of obstacles to data system integration are particularly challenging. One is the cultural tendency for an agency or agent to maintain tight control over data that they have collected. The second is the reluctance of Information Technology (IT) managers to allow remote access to data systems under their control, regardless of security measures taken. The ESN development has thus far been successful due
A micrometer-scale integrated silicon source of time-energy entangled photons
Grassani, Davide; Liscidini, Marco; Galli, Matteo; Strain, Michael J; Sorel, Marc; Sipe, J E; Bajoni, Daniele
2014-01-01
Entanglement is a fundamental resource in quantum information processing. Several studies have explored the integration of sources of entangled states on a silicon chip, but the sources demonstrated so far require millimeter lengths and pump powers of the order of hundreds of mW to produce an appreciable photon flux, hindering their scalability and dense integration. Microring resonators have been shown to be efficient sources of photon pairs, but entangled state emission has never been demonstrated. Here we report the first demonstration of a microring resonator capable of emitting time-energy entangled photons. We use a Franson experiment to show a violation of Bell's inequality by as much as 11 standard deviations. The source is integrated on a silicon chip, operates at sub-mW pump power, emits in the telecom band with a pair generation rate exceeding 10^7 Hz per nm, and outputs into a photonic waveguide. These are all essential features of an entangled-state emitter for quantum photonic networks.
Real-time feedback control of pH within microfluidics using integrated sensing and actuation.
Welch, David; Christen, Jennifer Blain
2014-03-21
We demonstrate a microfluidic system which applies engineering feedback principles to control the pH of a solution with a high degree of precision. The system utilizes an extended-gate ion-sensitive field-effect transistor (ISFET) along with an integrated pseudo-reference electrode to monitor pH values within a microfluidic reaction chamber. The monitored reaction chamber has an approximate volume of 90 nL. The pH value is controlled by adjusting the flow through two input channels using a pulse-width modulated signal applied to on-chip integrated valves. We demonstrate real-time control of pH through the feedback-controlled stepping of 0.14 pH increments in both the increasing and decreasing direction. The system converges to the pH setpoint within approximately 20 seconds of a step change. The integration of feedback theory into a microfluidic environment is a necessary step for achieving complete control over the microenvironment.
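The feedback principle described above can be sketched as a toy discrete control loop. The chamber model (first-order mixing with linear pH blending) and all gains below are invented for illustration and are not the paper's calibrated ISFET/PWM system:

```python
# A PWM duty cycle sets the mixing ratio of two input streams; a
# proportional controller drives the measured chamber pH to a setpoint.
pH_acid, pH_base = 4.0, 10.0
setpoint, k_p = 7.0, 0.05
duty, pH = 0.5, 9.0                # initial duty cycle and chamber pH

for step in range(200):            # one iteration per control period
    inflow_pH = duty * pH_base + (1 - duty) * pH_acid  # blended inflow
    pH += 0.2 * (inflow_pH - pH)                       # chamber mixing
    duty += k_p * (setpoint - pH)                      # proportional correction
    duty = min(max(duty, 0.0), 1.0)                    # duty stays in [0, 1]

print(round(pH, 2))  # 7.0: the loop settles at the setpoint
```

The closed-loop update is linear and its eigenvalues lie inside the unit circle for these gains, so the loop converges; real pH chemistry is logarithmic, which this toy model deliberately ignores.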
Analysis of an Integrated Security System using Real time Network Packets Scrutiny
K. Umamageswari
2015-11-01
With the tremendous growth of internet services, websites are becoming indispensable and a common means through which services are made accessible to all. Intrusion by worms or viruses through the network is continuously increasing and evolving. Firewalls and intrusion detection and prevention subsystems are becoming more advanced to secure systems against external attacks that exploit various security vulnerabilities. As such, enterprises are investing in various measures for an integrated security system to identify threats based on network security vulnerabilities and cope with them effectively. In sum, the network visibility plane should facilitate changes in network monitoring that promote disaggregation of analytics tool functions for long-term monitoring sustainability and flexibility. In this work, an integrated security system based on in-depth inspection of network packets is presented; it analyzes threat factors through an overall study of network packets inspected in real time and applies various protection functions to cope with integrated security threats in the future.
F. Topsøe
2001-09-01
In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions, and it appears advantageous to bring information-theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game-theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also serve here as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
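For the Mean Energy Model mentioned above, the maximum-entropy distribution takes the familiar Gibbs form p_i ∝ exp(-βE_i), with β chosen to meet the moment constraint. A minimal numerical sketch with invented energy levels:

```python
import math

def maxent_distribution(energies, mean_target, lo=-50.0, hi=50.0, tol=1e-12):
    """Maximum-entropy distribution over finitely many states subject to
    a mean-energy constraint: p_i ∝ exp(-beta*E_i), with beta found by
    bisection so that sum(p_i*E_i) equals the target."""
    def mean_for(beta):
        ws = [math.exp(-beta * e) for e in energies]
        z = sum(ws)
        return sum(w * e for w, e in zip(ws, energies)) / z
    # mean_for is strictly decreasing in beta, so bisection applies
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) > mean_target:
            lo = mid
        else:
            hi = mid
    beta = 0.5 * (lo + hi)
    ws = [math.exp(-beta * e) for e in energies]
    z = sum(ws)
    return [w / z for w in ws]

# With the constraint equal to the unconstrained mean, maxent recovers
# the uniform distribution (beta = 0):
p = maxent_distribution([1.0, 2.0, 3.0], 2.0)
print([round(x, 6) for x in p])  # [0.333333, 0.333333, 0.333333]
```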
Das, Sanat Kumar; Chatterjee, Abhijit; Ghosh, Sanjay K; Raha, Sibaji
2015-11-15
An outflow of continental haze occurs from the Indo-Gangetic Basin (IGB) in the north to the Bay of Bengal (BoB) in the south. An integrated campaign was organized to investigate this continental haze during December 2013-February 2014 at source and remote regions within the IGB to quantify its radiative effects. Measurements were carried out at three locations in eastern India: 1) Kalas Island, Sundarban (21.68°N, 88.57°E), an isolated island along the north-east coast of the BoB; 2) Kolkata (22.57°N, 88.42°E), an urban metropolis; and 3) Siliguri (26.70°N, 88.35°E), an urban region at the foothills of the eastern Himalayas. Ground-based AOD (at 0.5 μm) is observed to be maximum (1.25±0.18) over Kolkata, followed by Siliguri (0.60±0.17), and minimum over Sundarban (0.53±0.18). Black carbon concentration is found to be maximum at Kolkata (21.6±6.6 μg·m⁻³), with almost equal concentrations at Siliguri (12.6±5.2 μg·m⁻³) and Sundarban (12.3±3.0 μg·m⁻³). Combination of MODIS-AOD and back-trajectory analysis shows an outflow of winter-time continental haze originating from the central IGB and venting out through Sundarban towards the BoB. This continental haze with high extinction coefficient is identified up to the central BoB using CALIPSO observations and is found to contribute ~75% to marine AOD over the central BoB. This haze produces significantly high aerosol radiative forcing within the atmosphere over Kolkata (75.4 W·m⁻²) as well as over Siliguri and Sundarban (40 W·m⁻²), indicating large forcing over the entire IGB, from the foothills of the Himalayas to the coastal region. This winter-time continental haze also causes about similar radiative heating (1.5 K·day⁻¹) from Siliguri to Sundarban, which is enhanced over Kolkata (3 K·day⁻¹) due to large emission of local urban aerosols. This high aerosol heating over the entire IGB and the coastal region of the BoB can have considerable impact on the monsoonal circulation and, more importantly, such haze transported over to the BoB can significantly
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the usage of the regularized correntropy framework for learning classifiers from noisy labels. Class label predictors learned by minimizing traditional loss functions are sensitive to noisy and outlying labels of training samples, because traditional loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criterion (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms machines with traditional loss functions.
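A minimal sketch of the MCC idea on synthetic data. The hyperparameters and the plain gradient-ascent scheme below are illustrative choices, not the authors' alternating optimization algorithm:

```python
import numpy as np

def train_mcc(X, y, sigma=1.0, lam=1e-3, lr=0.5, epochs=500):
    """Gradient ascent on a regularized Maximum Correntropy Criterion:
    maximize the mean Gaussian correntropy between w·x and the labels,
    minus an L2 penalty on w. Outlying (noisy) labels are exponentially
    down-weighted, unlike a squared loss which weights them heavily."""
    w = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(epochs):
        r = y - X @ w                               # residuals
        weight = np.exp(-r**2 / (2 * sigma**2))     # correntropy kernel
        grad = (weight * r) @ X / (n * sigma**2) - 2 * lam * w
        w += lr * grad
    return w

rng = np.random.default_rng(0)
X = np.column_stack([rng.normal(size=200), np.ones(200)])  # feature + bias
y = np.sign(X[:, 0])               # true labels from the sign of the feature
y_noisy = y.copy()
y_noisy[:10] *= -1                 # flip 5% of the labels (outliers)

w = train_mcc(X, y_noisy)
acc = np.mean(np.sign(X @ w) == y)  # accuracy against the CLEAN labels
print(acc)
```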
In a Time of Change: Integrating the Arts and Humanities with Climate Change Science in Alaska
Leigh, M.; Golux, S.; Franzen, K.
2011-12-01
The arts and humanities have a powerful capacity to create lines of communication between the public, policy and scientific spheres. A growing network of visual and performing artists, writers and scientists has been actively working together since 2007 to integrate scientific and artistic perspectives on climate change in interior Alaska. These efforts have involved field workshops and collaborative creative processes culminating in public performances and a visual art exhibit. The most recent multimedia event was entitled In a Time of Change: Envisioning the Future, and challenged artists and scientists to consider future scenarios of climate change. This event included a public performance featuring original theatre, modern dance, Alaska Native Dance, poetry and music that was presented concurrently with an art exhibit featuring original works by 24 Alaskan visual artists. A related effort targeted K12 students, through an early college course entitled Climate Change and Creative Expression, which was offered to high school students at a predominantly Alaska Native charter school and integrated climate change science, creative writing, theatre and dance. Our program at Bonanza Creek Long Term Ecological Research (LTER) site is just one of many successful efforts to integrate arts and humanities with science within and beyond the NSF LTER Program. The efforts of various LTER sites to engage the arts and humanities with science, the public and policymakers have successfully generated excitement, facilitated mutual understanding, and promoted meaningful dialogue on issues facing science and society. The future outlook for integration of arts and humanities with science appears promising, with increasing interest from artists, scientists and scientific funding agencies.
刘圣波; 刘贺; 赵燕东
2013-01-01
To improve the conversion efficiency of photovoltaic solar panels and extend the application of traditional ripple-based control, this paper proposes a discrete-time ripple control algorithm: by discretizing the ripple control technique, the maximum power point tracking problem is converted into a discrete sampling-and-control problem. Taking the solar panel output voltage as the state variable, the system is sampled at the voltage maxima and minima; the discrete-time ripple control algorithm then drives the system rapidly to the maximum power point. The algorithm was simulated in Simulink. The results show that, under 1000 and 200 W/cm² at 25 °C, the algorithm quickly and accurately tracks the maximum power point of the solar system, with a tracking accuracy of up to 96%; when the external environment changes from 1000 to 200 W/cm², the system accurately tracks the new maximum power point within 0.1 s. [English abstract] Solar photovoltaic technology has been widely used in modern agriculture. Due to the volatility of solar power, it is hard to maximize the use of solar energy. In order to seek a way to improve the conversion rate of photovoltaic solar panels, this paper developed a new algorithm to utilize solar energy more efficiently. Since tracking the solar maximum power point is a valid method to maintain the solar panel power output at a high level, in this paper we choose ripple correlation control (RCC) to keep tracking the maximum power point of a solar photovoltaic (PV) system. Ripple correlation control is a real-time optimal method particularly suitable for power converter control. The objective of RCC in a solar PV system is to maximize the energy quantity. This paper extended the traditional analog RCC technique to the digital domain. With discretization and simplifications of the math model, the RCC method can be transformed into a sampling problem. The control method shows that when the solar PV system reaches the maximum power point, power outputs at both the maximum and minimum state should be nearly the same. Moreover, since the voltage output of a system is easy to observe and directly related to power
Wolfs, Cecile J. A.; Brás, Mariana G.; Schyns, Lotte E. J. R.; Nijsten, Sebastiaan M. J. J. G.; van Elmpt, Wouter; Scheib, Stefan G.; Baltes, Christof; Podesta, Mark; Verhaegen, Frank
2017-08-01
The aim of this work is to assess the performance of 2D time-integrated (2D-TI), 2D time-resolved (2D-TR) and 3D time-integrated (3D-TI) portal dosimetry in detecting dose discrepancies between the planned and (simulated) delivered dose caused by simulated changes in the anatomy of lung cancer patients. For six lung cancer patients, tumor shift, tumor regression and pleural effusion are simulated by modifying their CT images. Based on the modified CT images, time-integrated (TI) and time-resolved (TR) portal dose images (PDIs) are simulated and 3D-TI doses are calculated. The modified and original PDIs and 3D doses are compared by a gamma analysis with various gamma criteria. Furthermore, the difference in the D95% (ΔD95%) of the GTV is calculated and used as a gold standard. The correlation between the gamma fail rate and the ΔD95% is investigated, as well as the sensitivity and specificity of all combinations of portal dosimetry method, gamma criteria and gamma fail rate threshold. On the individual patient level, there is a correlation between the gamma fail rate and the ΔD95%, which cannot be found at the group level. The sensitivity and specificity analysis showed that no single combination of portal dosimetry method, gamma criteria and gamma fail rate threshold can detect all simulated anatomical changes. This work shows that it is more beneficial to relate portal dosimetry and DVH analysis at the patient level, rather than trying to quantify a relationship for a group of patients. With regard to optimizing sensitivity and specificity, different combinations of portal dosimetry method, gamma criteria and gamma fail rate threshold should be used to optimally detect certain types of anatomical changes.
Beisser, Daniela; Grohme, Markus A; Kopka, Joachim; Frohme, Marcus; Schill, Ralph O; Hengherr, Steffen; Dandekar, Thomas; Klau, Gunnar W; Dittrich, Marcus; Müller, Tobias
2012-06-19
Tardigrades are multicellular organisms, resistant to extreme environmental changes such as heat, drought, radiation and freezing. They outlast these conditions in an inactive form (tun) to escape damage to cellular structures and cell death. Tardigrades are apparently able to prevent or repair such damage and are therefore a crucial model organism for stress tolerance. Cultures of the tardigrade Milnesium tardigradum were dehydrated by removing the surrounding water to induce tun formation. During this process and the subsequent rehydration, metabolites were measured in a time series by GC-MS. Additionally expressed sequence tags are available, especially libraries generated from the active and inactive state. The aim of this integrated analysis is to trace changes in tardigrade metabolism and identify pathways responsible for their extreme resistance against physical stress. In this study we propose a novel integrative approach for the analysis of metabolic networks to identify modules of joint shifts on the transcriptomic and metabolic levels. We derive a tardigrade-specific metabolic network represented as an undirected graph with 3,658 nodes (metabolites) and 4,378 edges (reactions). Time course metabolite profiles are used to score the network nodes showing a significant change over time. The edges are scored according to information on enzymes from the EST data. Using this combined information, we identify a key subnetwork (functional module) of concerted changes in metabolic pathways, specific for de- and rehydration. The module is enriched in reactions showing significant changes in metabolite levels and enzyme abundance during the transition. It resembles the cessation of a measurable metabolism (e.g. glycolysis and amino acid anabolism) during the tun formation, the production of storage metabolites and bioprotectants, such as DNA stabilizers, and the generation of amino acids and cellular components from monosaccharides as carbon and energy source
Beisser Daniela
2012-06-01
Full Text Available Abstract Background Tardigrades are multicellular organisms, resistant to extreme environmental changes such as heat, drought, radiation and freezing. They outlast these conditions in an inactive form (tun) to escape damage to cellular structures and cell death. Tardigrades are apparently able to prevent or repair such damage and are therefore a crucial model organism for stress tolerance. Cultures of the tardigrade Milnesium tardigradum were dehydrated by removing the surrounding water to induce tun formation. During this process and the subsequent rehydration, metabolites were measured in a time series by GC-MS. Additionally, expressed sequence tags are available, especially libraries generated from the active and inactive state. The aim of this integrated analysis is to trace changes in tardigrade metabolism and identify pathways responsible for their extreme resistance against physical stress. Results In this study we propose a novel integrative approach for the analysis of metabolic networks to identify modules of joint shifts on the transcriptomic and metabolic levels. We derive a tardigrade-specific metabolic network represented as an undirected graph with 3,658 nodes (metabolites) and 4,378 edges (reactions). Time course metabolite profiles are used to score the network nodes showing a significant change over time. The edges are scored according to information on enzymes from the EST data. Using this combined information, we identify a key subnetwork (functional module) of concerted changes in metabolic pathways, specific for de- and rehydration. The module is enriched in reactions showing significant changes in metabolite levels and enzyme abundance during the transition. It reflects the cessation of a measurable metabolism (e.g. glycolysis and amino acid anabolism) during tun formation, the production of storage metabolites and bioprotectants, such as DNA stabilizers, and the generation of amino acids and cellular components from monosaccharides as a carbon and energy source.
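The node- and edge-scoring idea described in the abstract above can be sketched in miniature. The graph, the scores and the greedy module search below are illustrative inventions for exposition, not the authors' data or exact algorithm:

```python
# Toy metabolic network as an undirected graph.
# Node scores: significance of a metabolite's change over the time course.
# Edge scores: enzyme evidence from EST libraries.  All values made up.
node_score = {"glucose": 2.1, "trehalose": 3.5, "pyruvate": 1.8,
              "alanine": 0.2, "citrate": -0.5}
edge_score = {frozenset(p): s for p, s in [
    (("glucose", "trehalose"), 1.2), (("glucose", "pyruvate"), 0.8),
    (("pyruvate", "alanine"), -0.3), (("pyruvate", "citrate"), -0.9)]}

def neighbours(node):
    """All nodes sharing an edge (reaction) with the given metabolite."""
    return {next(iter(e - {node})) for e in edge_score if node in e}

def greedy_module(seed):
    """Grow a subnetwork from the seed, adding a neighbour whenever its
    node score plus the best connecting edge score is positive."""
    module = {seed}
    changed = True
    while changed:
        changed = False
        frontier = {v for m in module for v in neighbours(m)} - module
        for v in frontier:
            best_edge = max(edge_score[frozenset((u, v))]
                            for u in module if frozenset((u, v)) in edge_score)
            if node_score[v] + best_edge > 0:
                module.add(v)
                changed = True
    return module

seed = max(node_score, key=node_score.get)  # highest-scoring node
print(sorted(greedy_module(seed)))  # → ['glucose', 'pyruvate', 'trehalose']
```

In the full analysis the node scores come from statistical tests on the GC-MS time courses and the edge scores from the EST-derived enzyme data; here a positive combined score simply lets a neighbour join the module.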
An integrated approach using high time-resolved tools to study the origin of aerosols
Di Gilio, A. [Chemistry Department, University of Bari, via Orabona, 4, 70126 Bari (Italy); ARPA PUGLIA, Corso Trieste, 27, 70126 Bari (Italy); Gennaro, G. de, E-mail: gianluigi.degennaro@uniba.it [Chemistry Department, University of Bari, via Orabona, 4, 70126 Bari (Italy); ARPA PUGLIA, Corso Trieste, 27, 70126 Bari (Italy); Dambruoso, P. [Chemistry Department, University of Bari, via Orabona, 4, 70126 Bari (Italy); ARPA PUGLIA, Corso Trieste, 27, 70126 Bari (Italy); Ventrella, G. [Chemistry Department, University of Bari, via Orabona, 4, 70126 Bari (Italy)
2015-10-15
Long-range transport of natural and/or anthropogenic particles can contribute significantly to PM10 and PM2.5 concentrations, and some European cities often fail to comply with PM daily limit values due to the additional impact of particles from remote sources. For this reason, reliable methodologies to identify long-range transport (LRT) events would be useful to better understand air pollution phenomena and support proper decision-making. This study explores the potential of an integrated and high time-resolved monitoring approach for the identification and characterization of local, regional and long-range transport events of high PM. In particular, a goal of this work was also the identification of time-limited events. For this purpose, a high time-resolved monitoring campaign was carried out at an urban background site in Bari (southern Italy) for about 20 days (1st–20th October 2011). The integration of collected data, such as the hourly measurements of inorganic ions in PM2.5 and their gas precursors and of the natural radioactivity, in addition to the analyses of aerosol maps and hourly back trajectories (BT), provided useful information for the identification and chemical characterization of local sources and trans-boundary intrusions. Non-sea-salt (nss) sulfate levels were found to increase when air masses came from northeastern Europe and higher dispersive conditions of the atmosphere were detected. Instead, higher nitrate and lower nss-sulfate concentrations were registered in correspondence with air mass stagnation and attributed to local traffic sources. In some cases, combinations of local and trans-boundary sources were observed. Finally, statistical investigations such as the principal component analysis (PCA) applied to hourly ion concentrations and the cluster analyses, the Potential Source Contribution Function (PSCF) and the Concentration Weighted Trajectory (CWT) models computed on hourly back-trajectories made it possible to complete a cognitive…
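As a toy illustration of the PCA step mentioned above (synthetic hourly ion concentrations, not the campaign data), principal components can be obtained from an SVD of the centred data matrix:

```python
import numpy as np

# Hypothetical matrix of hourly ion concentrations (rows = hours,
# columns = ions: nss-sulfate, nitrate, ammonium, chloride).
rng = np.random.default_rng(0)
hours = 48
secondary = rng.normal(5.0, 1.0, hours)  # regional/LRT-like driver
traffic = rng.normal(3.0, 0.8, hours)    # local-traffic-like driver
X = np.column_stack([
    2.0 * secondary + rng.normal(0, 0.2, hours),                  # nss-sulfate
    1.5 * traffic + rng.normal(0, 0.2, hours),                    # nitrate
    1.0 * secondary + 0.5 * traffic + rng.normal(0, 0.2, hours),  # ammonium
    rng.normal(1.0, 0.1, hours),                                  # chloride
])

def pca(X):
    """Principal component analysis via SVD of the centred data matrix.
    Returns the component loadings and the explained-variance ratios."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = s ** 2 / np.sum(s ** 2)
    return Vt, explained

loadings, explained = pca(X)
print(np.round(explained, 3))
```

In a real analysis the loadings would then be inspected to separate a regional/LRT component (high nss-sulfate weight) from a local traffic component (high nitrate weight).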
Numerical Time Integration Methods for a Point Absorber Wave Energy Converter
Zurkinden, Andrew Stephen; Kramer, Morten
2012-01-01
The objective of this abstract is to provide a review of models for motion simulation of marine structures with a special emphasis on wave energy converters. The time-domain model is applied to a point absorber system working in pitch mode only. The device is similar to the well-known Wavestar float located in the Danish North Sea. The main objective is to produce a tool that can accurately simulate the dynamics of a floating structure with an arbitrary geometry provided the frequency domain coefficients are calculated beforehand. The latter calculation is based on linear fluid-structure interaction (small deformations of the fluid surface and body), inviscid incompressible, irrotational flow and a linearized Euler-Bernoulli formulation of the fluid pressure. The time-domain analysis of a floating structure involves the calculation of a convolution integral between the impulse response…
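The convolution integral mentioned above (the radiation-force memory term of a time-domain equation of motion) can be evaluated numerically at each step. The kernel and velocity history below are illustrative stand-ins, not the Wavestar coefficients:

```python
import numpy as np

dt = 0.05                 # time step [s]
t = np.arange(0, 20, dt)

# Illustrative radiation impulse-response kernel (decaying oscillation);
# in practice it is derived from the frequency-domain coefficients.
K = np.exp(-0.5 * t) * np.cos(2.0 * t)

# Body velocity history (here simple harmonic motion in pitch).
v = 0.1 * np.sin(1.2 * t)

def radiation_force(K, v, dt):
    """Memory integral F(t) = -∫₀ᵗ K(t − τ) v(τ) dτ, evaluated with a
    rectangle rule at every time step of the simulation."""
    F = np.zeros_like(v)
    for i in range(len(v)):
        F[i] = -np.sum(K[:i + 1][::-1] * v[:i + 1]) * dt
    return F

F = radiation_force(K, v, dt)
print(float(F[-1]))
```

The O(n²) loop is the naive formulation; production codes often replace the convolution with a state-space approximation of the kernel to keep each step O(1).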
Continuous-Time Low-Pass Filters for Integrated Wideband Radio Receivers
Saari, Ville; Lindfors, Saska
2012-01-01
This book presents a new filter design approach and concentrates on the circuit techniques that can be utilized when designing continuous-time low-pass filters in modern ultra-deep-submicron CMOS technologies for integrated wideband radio receivers. Coverage includes system-level issues related to the design and implementation of a complete single-chip radio receiver, and of a filter circuit as a part of such a receiver. Presents a new filter design approach, emphasizing low-voltage circuit solutions that can be implemented in modern, ultra-deep-submicron CMOS technologies; includes filter circuit implementations designed as a part of a single-chip radio receiver in modern 1.2 V 0.13 μm and 65 nm CMOS; describes the design and implementation of a continuous-time low-pass filter for a multicarrier WCDMA base-station; emphasizes system-level considerations throughout.
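As a minimal numeric aside on low-pass behaviour (a generic first-order RC stage with illustrative component values, not one of the book's circuit topologies):

```python
import math

# Generic first-order RC low-pass: |H(f)| = 1 / sqrt(1 + (f / fc)^2),
# with cutoff fc = 1 / (2*pi*R*C).  Values chosen for illustration only.
R = 10e3      # resistance [ohm]
C = 1.59e-9   # capacitance [F]

fc = 1 / (2 * math.pi * R * C)  # ≈ 10 kHz

def magnitude_db(f):
    """Gain of the first-order stage at frequency f, in dB."""
    return -10 * math.log10(1 + (f / fc) ** 2)

print(round(fc))                    # ≈ 10 kHz
print(round(magnitude_db(fc), 2))   # -3.01 dB at the cutoff
```

Higher-order filters of the kind the book covers cascade such poles (with complex-conjugate pairs) to steepen the roll-off beyond the -20 dB/decade of a single stage.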
Li, Ping
2014-07-01
This paper presents an algorithm hybridizing the discontinuous Galerkin time domain (DGTD) method and a time domain boundary integral (BI) algorithm for 3-D open-region electromagnetic scattering analysis. The computational domain of DGTD is rigorously truncated by analytically evaluating the incoming numerical flux from outside the truncation boundary through the BI method based on the Huygens' principle. The advantages of the proposed method are that it allows the truncation boundary to be conformal to arbitrary (convex/concave) scattering objects, and well-separated scatterers can be truncated by their local meshes without losing the physics (such as coupling/multiple scattering) of the problem, thus reducing the total mesh elements. Furthermore, low frequency waves can be efficiently absorbed, and the field outside the truncation domain can be conveniently calculated using the same BI formulation. Numerical examples are benchmarked to demonstrate the accuracy and versatility of the proposed method.
Jamali, J; Moini, R; Sadeghi, H
2002-01-01
A time-domain approach is presented to calculate electromagnetic fields inside a large Electromagnetic Pulse (EMP) simulator. This type of EMP simulator is used for studying the effect of electromagnetic pulses on electrical apparatus in various structures such as vehicles, aeroplanes, etc. The simulator consists of three planar transmission lines. To solve the problem, we first model the metallic structure of the simulator as a grid of conducting wires. The numerical solution of the governing electric field integral equation is then obtained using the method of moments in the time domain. To demonstrate the accuracy of the model, we consider a typical EMP simulator. The comparison of our results with those obtained experimentally in the literature validates the model introduced in this paper.
Wang, Qi; Zhang, Chunyu; Ding, Yi
2015-01-01
The high penetration of both Distributed Energy Resources (DER) and Demand Response (DR) in modern power systems requires a sequence of advanced strategies and technologies for maintaining system reliability and flexibility. Real-time electricity markets (RTM) are the nondiscriminatory transaction… This paper reviews typical RTMs in North America, Australia and Europe respectively, focusing on their market architectures and incentive policies for integrating DER and DR in electricity markets. RTMs are classified into three groups: Group I applies nodal prices implemented by optimal power flow, which clears energy prices every 5 minutes. Group II applies zonal prices, with a time resolution of 5 min. Group III is a general balancing market, which clears zonal prices intra-hourly. The various successful RTM experiences are summarized and discussed, providing a technical…
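As a toy illustration of the clearing step these market designs share (hypothetical offers for a single 5-minute interval, not any actual RTM rule set), a uniform-price merit-order dispatch might look like:

```python
# Hypothetical generator offers for one 5-minute interval: (MW, $/MWh).
offers = [(100, 20.0), (50, 35.0), (80, 28.0), (40, 60.0)]
demand = 200.0  # MW to be cleared in this interval

def clear_merit_order(offers, demand):
    """Dispatch the cheapest offers first; the marginal (last dispatched)
    offer sets the uniform clearing price, as in a simple zonal market."""
    dispatched, price = [], None
    remaining = demand
    for mw, p in sorted(offers, key=lambda o: o[1]):
        if remaining <= 0:
            break
        take = min(mw, remaining)
        dispatched.append((take, p))
        price = p  # marginal offer so far
        remaining -= take
    if remaining > 0:
        raise ValueError("insufficient offers to meet demand")
    return price, dispatched

price, dispatch = clear_merit_order(offers, demand)
print(price)  # 35.0
```

A nodal-price market (Group I) replaces this single stack with an optimal power flow that respects network constraints, so the clearing price can differ at every node.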