WorldWideScience

Sample records for maximum integration time

  1. Maximum likelihood estimation for integrated diffusion processes

    DEFF Research Database (Denmark)

    Baltazar-Larios, Fernando; Sørensen, Michael

We propose a method for obtaining maximum likelihood estimates of parameters in diffusion models when the data is a discrete time sample of the integral of the process, while no direct observations of the process itself are available. The data are, moreover, assumed to be contaminated by measurement errors. Integrated volatility is an example of this type of observations. Another example is ice-core data on oxygen isotopes used to investigate paleo-temperatures. The data can be viewed as incomplete observations of a model with a tractable likelihood function. Therefore we propose a simulated EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works well.

  2. An implementation of the maximum-caliber principle by replica-averaged time-resolved restrained simulations.

    Science.gov (United States)

    Capelli, Riccardo; Tiana, Guido; Camilloni, Carlo

    2018-05-14

Inferential methods can be used to integrate experimental information and molecular simulations. The maximum entropy principle provides a framework for using equilibrium experimental data, and it has been shown that replica-averaged simulations, restrained using a static potential, are a practical and powerful implementation of such a principle. Here we show that replica-averaged simulations restrained using a time-dependent potential are equivalent to the principle of maximum caliber, the dynamic version of the principle of maximum entropy, and thus may allow us to integrate time-resolved data in molecular dynamics simulations. We provide an analytical proof of the equivalence as well as a computational validation making use of simple models and synthetic data. Some limitations and possible solutions are also discussed.

  3. Analogue of Pontryagin's maximum principle for multiple integrals minimization problems

    OpenAIRE

    Mikhail, Zelikin

    2016-01-01

A theorem analogous to Pontryagin's maximum principle is proved for multiple integrals. Unlike the usual maximum principle, the maximum should be taken not over all matrices, but only over matrices of rank one. Examples are given.

  4. Time at which the maximum of a random acceleration process is reached

    International Nuclear Information System (INIS)

    Majumdar, Satya N; Rosso, Alberto; Zoia, Andrea

    2010-01-01

    We study the random acceleration model, which is perhaps one of the simplest, yet nontrivial, non-Markov stochastic processes, and is key to many applications. For this non-Markov process, we present exact analytical results for the probability density p(t m |T) of the time t m at which the process reaches its maximum, within a fixed time interval [0, T]. We study two different boundary conditions, which correspond to the process representing respectively (i) the integral of a Brownian bridge and (ii) the integral of a free Brownian motion. Our analytical results are also verified by numerical simulations.
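The density p(t_m|T) can be explored numerically with a short Monte Carlo sketch like the one below (not part of the record): it simulates the random acceleration process as the discretized integral of a free Brownian motion, case (ii) above, and histograms the time of the maximum. The step and path counts are arbitrary choices.

```python
import numpy as np

def time_of_maximum_samples(T=1.0, n_steps=1000, n_paths=20000, seed=0):
    """Monte Carlo samples of the time t_m at which a random acceleration
    process (integral of a free Brownian motion started at 0) attains its
    maximum on [0, T]."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # Brownian increments -> velocity v(t); position x(t) = integral of v.
    dv = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    v = np.cumsum(dv, axis=1)
    x = np.cumsum(v, axis=1) * dt           # random acceleration process
    return (np.argmax(x, axis=1) + 1) * dt  # time of the maximum per path

# A normalized histogram of the samples approximates p(t_m | T).
samples = time_of_maximum_samples()
density, edges = np.histogram(samples, bins=50, range=(0.0, 1.0), density=True)
```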

  5. Optimal control of a double integrator a primer on maximum principle

    CERN Document Server

    Locatelli, Arturo

    2017-01-01

This book provides an introductory yet rigorous treatment of Pontryagin’s Maximum Principle and its application to optimal control problems when simple and complex constraints act on state and control variables, the two classes of variable in such problems. The achievements resulting from first-order variational methods are illustrated with reference to a large number of problems that, almost universally, relate to a particular second-order, linear and time-invariant dynamical system, referred to as the double integrator. The book is ideal for students who have some knowledge of the basics of system and control theory and possess the calculus background typically taught in undergraduate curricula in engineering. Optimal control theory, of which the Maximum Principle must be considered a cornerstone, has been very popular ever since the late 1950s. However, the possibly excessive initial enthusiasm engendered by its perceived capability to solve any kind of problem gave way to its equally unjustified rejection...

  6. Integrals of Motion for Discrete-Time Optimal Control Problems

    OpenAIRE

    Torres, Delfim F. M.

    2003-01-01

    We obtain a discrete time analog of E. Noether's theorem in Optimal Control, asserting that integrals of motion associated to the discrete time Pontryagin Maximum Principle can be computed from the quasi-invariance properties of the discrete time Lagrangian and discrete time control system. As corollaries, results for first-order and higher-order discrete problems of the calculus of variations are obtained.

  7. Maximum Temperature Detection System for Integrated Circuits

    Science.gov (United States)

    Frankiewicz, Maciej; Kos, Andrzej

    2015-03-01

The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path, and a digital part designed in VHDL. The analogue parts of the circuit were designed with a full-custom technique. The system is part of a temperature-controlled oscillator circuit - a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated to thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.

  8. Monolithic integrated circuit for the strobed charge-to-time converter

    International Nuclear Information System (INIS)

    Bel'skij, V.I.; Bushnin, Yu.B.; Zimin, S.A.; Punzhin, Yu.N.; Sen'ko, V.A.; Soldatov, M.M.; Tokarchuk, V.P.

    1985-01-01

The developed and commercially produced semiconductor circuit, the gated charge-to-time converter KR1101PD1, is described. The integrated circuit is a short-pulse charge-to-time converter with integration of the input current. The circuit is designed for the construction of time-to-pulse analog-to-digital converters used in multichannel detection systems for studying processes of complex topology. The input resistance of the circuit is 0.1 Ω, the permissible input current is 50 mA, and the maximum measured charge is 300-1000 pC

  9. Stochastic behavior of a cold standby system with maximum repair time

    Directory of Open Access Journals (Sweden)

    Ashish Kumar

    2015-09-01

Full Text Available The main aim of the present paper is to analyze the stochastic behavior of a cold standby system with the concepts of preventive maintenance, priority, and maximum repair time. For this purpose, a stochastic model is developed in which initially one unit is operative and the other is kept as a cold standby. There is a single server who visits the system immediately as and when required. The server takes the unit under preventive maintenance after a maximum operation time at normal mode if one standby unit is available for operation. If the repair of the failed unit is not possible within the maximum repair time, the failed unit is replaced by a new one. The failure time, maximum operation time, and maximum repair time distributions of the unit are considered exponentially distributed, while the repair and maintenance time distributions are considered arbitrary. All random variables are statistically independent and repairs are perfect. Various measures of system effectiveness are obtained by using the technique of semi-Markov processes and the regenerative point technique (RPT). To highlight the importance of the study, numerical results are also obtained for MTSF, availability, and the profit function.

  10. Multiperiod Maximum Loss is time unit invariant.

    Science.gov (United States)

    Kovacevic, Raimund M; Breuer, Thomas

    2016-01-01

    Time unit invariance is introduced as an additional requirement for multiperiod risk measures: for a constant portfolio under an i.i.d. risk factor process, the multiperiod risk should equal the one period risk of the aggregated loss, for an appropriate choice of parameters and independent of the portfolio and its distribution. Multiperiod Maximum Loss over a sequence of Kullback-Leibler balls is time unit invariant. This is also the case for the entropic risk measure. On the other hand, multiperiod Value at Risk and multiperiod Expected Shortfall are not time unit invariant.

  11. Space-Time Chip Equalization for Maximum Diversity Space-Time Block Coded DS-CDMA Downlink Transmission

    Directory of Open Access Journals (Sweden)

    Petré Frederik

    2004-01-01

Full Text Available In the downlink of DS-CDMA, frequency-selectivity destroys the orthogonality of the user signals and introduces multiuser interference (MUI). Space-time chip equalization is an efficient tool to restore the orthogonality of the user signals and suppress the MUI. Furthermore, multiple-input multiple-output (MIMO) communication techniques can result in a significant increase in capacity. This paper focuses on space-time block coding (STBC) techniques, and aims at combining STBC techniques with the original single-antenna DS-CDMA downlink scheme. This results in the so-called space-time block coded DS-CDMA downlink schemes, many of which have been presented in the past. We focus on a new scheme that enables both the maximum multiantenna diversity and the maximum multipath diversity. Although this maximum diversity can only be collected by maximum likelihood (ML) detection, we pursue suboptimal detection by means of space-time chip equalization, which lowers the computational complexity significantly. To design the space-time chip equalizers, we also propose efficient pilot-based methods. Simulation results show improved performance over the space-time RAKE receiver for the space-time block coded DS-CDMA downlink schemes that have been proposed for the UMTS and IS-2000 W-CDMA standards.

  12. Evaluating Maximum Photovoltaic Integration in District Distribution Systems Considering Optimal Inverter Dispatch and Cloud Shading Conditions

    DEFF Research Database (Denmark)

    Ding, Tao; Kou, Yu; Yang, Yongheng

    2017-01-01

As photovoltaic (PV) integration increases in distribution systems, investigating the maximum allowable PV integration capacity for a district distribution system becomes necessary in the planning phase; an optimization model is thus proposed to evaluate the maximum PV integration capacity. However, the intermittency of solar PV energy (e.g., due to passing clouds) may affect the PV generation in the district distribution network. To address this issue, the voltage magnitude constraints under cloud shading conditions should be taken into account in the optimization model.

  13. An integration time adaptive control method for atmospheric composition detection of occultation

    Science.gov (United States)

    Ding, Lin; Hou, Shuai; Yu, Fei; Liu, Cheng; Li, Chao; Zhe, Lin

    2018-01-01

When the sun is used as the light source for atmospheric composition detection, it is necessary to image the sun for accurate identification and stable tracking. During the roughly 180 seconds of an occultation, the intensity of sunlight transmitted through the atmosphere changes greatly: the illumination varies by a factor of nearly 1100 between the maximum and minimum atmospheric paths, and the change can be as fast as a factor of 2.9 per second. It is therefore difficult to control the integration time of the sun-imaging camera. In this paper, a novel adaptive integration time control method for occultation is presented. Using the distribution of gray values in the image as the reference variable, together with the concept of speed-integral PID control, the method solves the integration time adaptive control problem of high-frequency imaging and achieves automatic control of the integration time over a large dynamic range during the occultation.

  14. Extending the maximum operation time of the MNSR reactor.

    Science.gov (United States)

    Dawahra, S; Khattab, K; Saba, G

    2016-09-01

An effective modification to extend the maximum operation time of the Miniature Neutron Source Reactor (MNSR), and thus enhance the utilization of the reactor, has been tested using the MCNP4C code. The modification consists of manually inserting, into each of the reactor's inner irradiation tubes, a chain of three connected polyethylene containers filled with water. The total height of the chain is 11.5 cm. Replacement of the actual cadmium absorber with a B-10 absorber was needed as well. The rest of the core structure materials and dimensions remained unchanged. A 3-D neutronic model with the new modifications was developed to compare the neutronic parameters of the old and modified cores. The excess reactivities (ρex) of the old and modified cores were 3.954 and 6.241 mk, the maximum reactor operation times were 428 and 1025 min, and the safety reactivity factors were 1.654 and 1.595, respectively. Therefore, a 139% increase in the maximum reactor operation time was observed for the modified core. This increase enhances the utilization of the MNSR reactor for long irradiations of unknown samples using the NAA technique and increases the amount of radioisotope production in the reactor. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Computing the Maximum Detour of a Plane Graph in Subquadratic Time

    DEFF Research Database (Denmark)

    Wulff-Nilsen, Christian

Let G be a plane graph where each edge is a line segment. We consider the problem of computing the maximum detour of G, defined as the maximum over all pairs of distinct points p and q of G of the ratio between the distance between p and q in G and the distance |pq|. The fastest known algorithm for this problem has O(n^2) running time. We show how to obtain O(n^{3/2}*(log n)^3) expected running time. We also show that if G has bounded treewidth, its maximum detour can be computed in O(n*(log n)^3) expected time.
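For intuition about the quantity being computed, the sketch below (not the paper's algorithm) evaluates the detour ratio by brute force at vertex pairs of a toy graph using SciPy's all-pairs shortest paths; the actual problem also ranges over interior points of edges, and the paper's method is subquadratic.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import shortest_path

# Vertices of a small plane graph (coordinates) and edges weighted by
# their Euclidean lengths.  Illustrative data, not from the paper.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]       # a unit-square cycle

n = len(pts)
w = np.zeros((n, n))
for i, j in edges:
    w[i, j] = w[j, i] = np.linalg.norm(pts[i] - pts[j])

# All-pairs graph distances, then the detour ratio d_G(p,q)/|pq|
# evaluated at vertex pairs only.
d_graph = shortest_path(csr_matrix(w), directed=False)
max_detour = 0.0
for i in range(n):
    for j in range(i + 1, n):
        euclid = np.linalg.norm(pts[i] - pts[j])
        max_detour = max(max_detour, d_graph[i, j] / euclid)

print(max_detour)  # sqrt(2) for the unit square (opposite corners)
```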

  16. A maximum principle for time dependent transport in systems with voids

    International Nuclear Information System (INIS)

    Schofield, S.L.; Ackroyd, R.T.

    1996-01-01

    A maximum principle is developed for the first-order time dependent Boltzmann equation. The maximum principle is a generalization of Schofield's κ(θ) principle for the first-order steady state Boltzmann equation, and provides a treatment of time dependent transport in systems with void regions. The formulation comprises a direct least-squares minimization allied with a suitable choice of bilinear functional, and gives rise to a maximum principle whose functional is free of terms that have previously led to difficulties in treating void regions. (Author)

  17. Maximum Likelihood Blind Channel Estimation for Space-Time Coding Systems

    Directory of Open Access Journals (Sweden)

    Hakan A. Çırpan

    2002-05-01

Full Text Available Sophisticated signal processing techniques have to be developed for capacity enhancement of future wireless communication systems. In recent years, space-time coding has been proposed to provide significant capacity gains over traditional communication systems in fading wireless channels. Space-time codes are obtained by combining channel coding, modulation, transmit diversity, and optional receive diversity in order to provide diversity at the receiver and coding gain without sacrificing bandwidth. In this paper, we consider the problem of blind estimation of space-time coded signals along with the channel parameters. Both conditional and unconditional maximum likelihood approaches are developed and iterative solutions are proposed. The conditional maximum likelihood algorithm is based on iterative least squares with projection, whereas the unconditional maximum likelihood approach is developed by means of finite state Markov process modelling. The performance analysis issues of the proposed methods are studied. Finally, some simulation results are presented.

  18. Long Pulse Integrator of Variable Integral Time Constant

    International Nuclear Information System (INIS)

    Wang Yong; Ji Zhenshan; Du Xiaoying; Wu Yichun; Li Shi; Luo Jiarong

    2010-01-01

A new type of long-pulse integrator was designed, based on a variable integral time constant and on subtracting the integral drift using its estimated slope. The integral time constant can be changed by choosing different integral resistors, in order to improve the signal-to-noise ratio and avoid output saturation; the slope of the integral drift over a certain period of time can be calculated by digital signal processing and used to subtract the drift from the original integral signal in real time. Tests show that this kind of long-pulse integrator is good at reducing integral drift and can also eliminate the effects of changing the integral time constant. The integral time constant can be changed by remote control, and manual adjustment of the integral drift is avoided, which greatly improves experimental efficiency; the integrator can be used for electromagnetic measurements in Tokamak experiments. (authors)

  19. Adaptive double-integral-sliding-mode-maximum-power-point tracker for a photovoltaic system

    Directory of Open Access Journals (Sweden)

    Bidyadhar Subudhi

    2015-10-01

Full Text Available This study proposed an adaptive double-integral-sliding-mode-controller-maximum-power-point tracker (DISMC-MPPT) for maximum-power-point (MPP) tracking of a photovoltaic (PV) system. The objective of this study is to design a DISMC-MPPT with a new adaptive double-integral-sliding surface so that MPP tracking is achieved with reduced chattering and steady-state error in the output voltage or current. The proposed adaptive DISMC-MPPT possesses a very simple and efficient PWM-based control structure that keeps the switching frequency constant. The controller is designed considering the reaching and stability conditions to provide robustness and stability. The performance of the proposed adaptive DISMC-MPPT is verified through both MATLAB/Simulink simulation and experiment using a 0.2 kW prototype PV system. From the obtained results, this DISMC-MPPT is found to be more efficient than Tan's and Jiao's DISMC-MPPTs.

  20. Maximum Power Point Tracking in Variable Speed Wind Turbine Based on Permanent Magnet Synchronous Generator Using Maximum Torque Sliding Mode Control Strategy

    Institute of Scientific and Technical Information of China (English)

    Esmaeil Ghaderi; Hossein Tohidi; Behnam Khosrozadeh

    2017-01-01

The present study was carried out in order to track the maximum power point in a variable speed turbine by minimizing electromechanical torque changes using a sliding mode control strategy. In this strategy, first, the rotor speed is set at an optimal point for different wind speeds. As a result, the tip speed ratio reaches an optimal point, the mechanical power coefficient is maximized, and the wind turbine produces its maximum power and mechanical torque. Then, the maximum mechanical torque is tracked using electromechanical torque. In this technique, the tracking error integral of maximum mechanical torque, the error, and the derivative of the error are used as state variables. During changes in wind speed, the sliding mode control is designed to absorb the maximum energy from the wind and minimize the response time of maximum power point tracking (MPPT). In this method, the actual control input signal is formed from a second-order integral operation of the original sliding mode control input signal. The result of the second-order integral in this model includes control signal integrity, full chattering attenuation, and prevention of large fluctuations in the generator power output. The simulation results, obtained using MATLAB/m-file software, have shown the effectiveness of the proposed control strategy for wind energy systems based on the permanent magnet synchronous generator (PMSG).

  1. Maximum Correntropy Unscented Kalman Filter for Ballistic Missile Navigation System based on SINS/CNS Deeply Integrated Mode.

    Science.gov (United States)

    Hou, Bowen; He, Zhangming; Li, Dong; Zhou, Haiyin; Wang, Jiongqi

    2018-05-27

Strap-down inertial navigation system/celestial navigation system (SINS/CNS) integrated navigation is a high precision navigation technique for ballistic missiles. The traditional navigation method has a divergence in the position error. A deeply integrated mode for the SINS/CNS navigation system is proposed to improve the navigation accuracy of ballistic missiles. The deeply integrated navigation principle is described and the observability of the navigation system is analyzed. The nonlinearity, as well as the large outliers and the Gaussian mixture noises, often exists during the actual navigation process, leading to the divergence phenomenon of the navigation filter. The new nonlinear Kalman filter on the basis of the maximum correntropy theory and unscented transformation, named the maximum correntropy unscented Kalman filter, is deduced, and its computational complexity is analyzed. The unscented transformation is used for restricting the nonlinearity of the system equation, and the maximum correntropy theory is used to deal with the non-Gaussian noises. Finally, numerical simulation illustrates the superiority of the proposed filter compared with the traditional unscented Kalman filter. The comparison results show that the influence of large outliers and non-Gaussian noises on SINS/CNS deeply integrated navigation is significantly reduced through the proposed filter.

  2. Maximum likelihood window for time delay estimation

    International Nuclear Information System (INIS)

    Lee, Young Sup; Yoon, Dong Jin; Kim, Chi Yup

    2004-01-01

Time delay estimation for the detection of leak location in underground pipelines is critically important. Because the exact leak location depends upon the precision of the time delay between sensor signals due to leak noise and the speed of elastic waves, research on the estimation of time delay has been one of the key issues in leak locating with the time arrival difference method. In this study, an optimal Maximum Likelihood window is considered to obtain a better estimation of the time delay. The method has been proved in experiments, where it provides much clearer and more precise peaks in the cross-correlation functions of leak signals. The leak location error has been less than 1% of the distance between sensors; for example, the error was not greater than 3 m for 300 m long underground pipelines. Apart from the experiment, an intensive theoretical analysis in terms of signal processing is presented. The improved leak locating with the suggested method is due to the windowing effect in the frequency domain, which applies a weighting to the significant frequencies.
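The sketch below illustrates the basic time-arrival-difference step that the Maximum Likelihood window refines: estimate the delay from the plain cross-correlation peak and convert it to a leak position. The pipe length, wave speed, and synthetic signals are illustrative assumptions, and the ML window itself is not implemented here.

```python
import numpy as np

def estimate_delay(x, y, fs):
    """Time delay (s) of y relative to x from the cross-correlation peak."""
    corr = np.correlate(y, x, mode="full")
    lag = np.argmax(corr) - (len(x) - 1)
    return lag / fs

# Synthetic example: a leak-noise signal arriving at two sensors with a
# known offset (illustrative values, not measured data).
fs = 10_000.0                      # sampling rate [Hz]
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(1)
noise = rng.normal(size=t.size)
delay_true = 0.0123                # seconds
shift = int(round(delay_true * fs))
x = noise + 0.1 * rng.normal(size=t.size)
y = np.roll(noise, shift) + 0.1 * rng.normal(size=t.size)

dt = estimate_delay(x, y, fs)
# With sensor spacing L and wave speed c, the leak sits at
# d = (L - c*dt)/2 from the sensor the signal reaches first.
L, c = 300.0, 1200.0               # assumed pipe length [m], wave speed [m/s]
d = (L - c * dt) / 2
print(dt, d)
```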

  3. Optimization of the integration time of pulse shape analysis for dual-layer GSO detector with different amount of Ce

    International Nuclear Information System (INIS)

    Yamamoto, Seiichi

    2008-01-01

For a multi-layer depth-of-interaction (DOI) detector using scintillators with different decay times, pulse shape analysis based on two different integration times is often used to distinguish the scintillators in the DOI direction. This method measures a partial integration and a full integration of the pulse and calculates the ratio of the two to obtain the pulse shape distribution. The full integration time is usually set to integrate the full width of the scintillation pulse; however, the partial integration time that gives the best separation of the pulse shape distribution is not obvious. To clarify this, a theoretical analysis and experiments were conducted in which the partial integration time was varied, using a scintillation detector made of GSO crystals with different amounts of Ce. The detector consisted of a 1-in. round photomultiplier tube (PMT) optically coupled to a GSO with 1.5 mol% Ce (decay time: 35 ns) and a GSO with 0.5 mol% Ce (decay time: 60 ns). The signal from the PMT was digitally integrated with partial (50-150 ns) and full (160 ns) integration times, and the ratio of the two was calculated to obtain the pulse shape distribution. In the theoretical analysis, a partial integration time of 50 ns gave the largest distance between the two peaks of the pulse shape distribution; in the experiments, the distance was maximum at partial integration times of 70-80 ns, while the peak-to-valley ratio was maximum at 120-130 ns. Because the separation of the two peaks is determined by the peak-to-valley ratio, we conclude that the optimum partial integration time for this combination of GSOs is around 120-130 ns, somewhat longer than the expected value
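A minimal sketch of the partial-to-full integration ratio described above is given below, using synthetic single-exponential pulses with the quoted 35 ns and 60 ns decay times; the pulse model, photon statistics, and sample counts are assumptions rather than the record's measured data.

```python
import numpy as np

def shape_ratio(pulse, dt_ns, t_partial_ns, t_full_ns=160.0):
    """Ratio of a partial integral to the full integral of a scintillation
    pulse; the distribution of this ratio separates the two GSO layers."""
    n_partial = int(t_partial_ns / dt_ns)
    n_full = int(t_full_ns / dt_ns)
    return pulse[:n_partial].sum() / pulse[:n_full].sum()

# Synthetic single-exponential pulses for the two GSO:Ce layers
# (decay times from the abstract; noise model is an assumption).
dt_ns = 1.0
t = np.arange(0.0, 160.0, dt_ns)
rng = np.random.default_rng(0)

def make_pulse(tau_ns, n_photons=500):
    pulse = np.exp(-t / tau_ns) / tau_ns
    return rng.poisson(n_photons * pulse * dt_ns)   # photon-counting noise

for t_partial in (50.0, 80.0, 120.0):
    r_fast = np.mean([shape_ratio(make_pulse(35.0), dt_ns, t_partial) for _ in range(200)])
    r_slow = np.mean([shape_ratio(make_pulse(60.0), dt_ns, t_partial) for _ in range(200)])
    print(t_partial, r_fast - r_slow)   # separation of the two peak positions
```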

  4. Detection of surface electromyography recording time interval without muscle fatigue effect for biceps brachii muscle during maximum voluntary contraction.

    Science.gov (United States)

    Soylu, Abdullah Ruhi; Arpinar-Avsar, Pinar

    2010-08-01

The effects of fatigue on maximum voluntary contraction (MVC) parameters were examined using force and surface electromyography (sEMG) signals of the biceps brachii muscles (BBM) of 12 subjects. The purpose of the study was to find the sEMG time interval of the MVC recordings which is not affected by muscle fatigue. At least 10 s of force and sEMG signals of the BBM were recorded simultaneously during MVC. The subjects reached the maximum force level within 2 s by gradually increasing the force, and then contracted the BBM maximally. The time index of each sEMG and force signal was labeled with respect to the time index of the maximum force (i.e., after time normalization, the 0 s time index of each sEMG or force signal corresponds to the maximum force point). Then, the first 8 s of the sEMG and force signals were divided into 0.5 s intervals. Mean force, median frequency (MF), and integrated EMG (iEMG) values were calculated for each interval. Amplitude normalization was performed by dividing the force signals by their mean values over the 0 s interval (i.e., -0.25 to 0.25 s); a similar amplitude normalization procedure was applied to the iEMG and MF signals. Statistical analysis (Friedman test with Dunn's post hoc test) was performed on the time- and amplitude-normalized signals (MF, iEMG). Although the ANOVA results did not give statistically significant information about the onset of muscle fatigue, linear regression (mean force vs. time) showed a decreasing slope (Pearson r = 0.9462), indicating that fatigue starts after the 0 s time interval as the muscles cannot maintain their peak force levels. This implies that the most reliable interval for the MVC calculation, one not affected by muscle fatigue, is from the onset of EMG activity to the peak force time. The mean, SD, and range of this interval (excluding the 2 s gradual increase time) for the 12 subjects were 2353 ms, 1258 ms, and 536-4186 ms, respectively. Exceeding this interval introduces estimation errors in the maximum amplitude calculations.

  5. Maximum Correntropy Unscented Kalman Filter for Ballistic Missile Navigation System based on SINS/CNS Deeply Integrated Mode

    Directory of Open Access Journals (Sweden)

    Bowen Hou

    2018-05-01

Full Text Available Strap-down inertial navigation system/celestial navigation system (SINS/CNS) integrated navigation is a high precision navigation technique for ballistic missiles. The traditional navigation method has a divergence in the position error. A deeply integrated mode for the SINS/CNS navigation system is proposed to improve the navigation accuracy of ballistic missiles. The deeply integrated navigation principle is described and the observability of the navigation system is analyzed. The nonlinearity, as well as the large outliers and the Gaussian mixture noises, often exists during the actual navigation process, leading to the divergence phenomenon of the navigation filter. The new nonlinear Kalman filter on the basis of the maximum correntropy theory and unscented transformation, named the maximum correntropy unscented Kalman filter, is deduced, and its computational complexity is analyzed. The unscented transformation is used for restricting the nonlinearity of the system equation, and the maximum correntropy theory is used to deal with the non-Gaussian noises. Finally, numerical simulation illustrates the superiority of the proposed filter compared with the traditional unscented Kalman filter. The comparison results show that the influence of large outliers and non-Gaussian noises on SINS/CNS deeply integrated navigation is significantly reduced through the proposed filter.

  6. Maximum-likelihood methods for array processing based on time-frequency distributions

    Science.gov (United States)

    Zhang, Yimin; Mu, Weifeng; Amin, Moeness G.

    1999-11-01

This paper proposes a novel time-frequency maximum likelihood (t-f ML) method for direction-of-arrival (DOA) estimation for non-stationary signals, and compares this method with conventional maximum likelihood DOA estimation techniques. Time-frequency distributions localize the signal power in the time-frequency domain, and as such enhance the effective SNR, leading to improved DOA estimation. The localization of signals with different t-f signatures permits the division of the time-frequency domain into smaller regions, each containing fewer signals than those incident on the array. The reduction of the number of signals within different time-frequency regions not only reduces the required number of sensors, but also decreases the computational load in multi-dimensional optimizations. Compared to the recently proposed time-frequency MUSIC (t-f MUSIC), the proposed t-f ML method can be applied in coherent environments, without the need to perform any type of preprocessing that is subject to both array geometry and array aperture.

  7. Local Times of Galactic Cosmic Ray Intensity Maximum and Minimum in the Diurnal Variation

    Directory of Open Access Journals (Sweden)

    Su Yeon Oh

    2006-06-01

Full Text Available The diurnal variation of the galactic cosmic ray (GCR) flux intensity observed by the ground Neutron Monitor (NM) shows a sinusoidal pattern with an amplitude of about 1-2% of the daily mean. We carried out a statistical study of the tendencies of the local times of the GCR intensity daily maximum and minimum. To test the influences of solar activity and of location (cut-off rigidity) on the distribution of the local times of maximum and minimum GCR intensity, we examined the data of 1996 (solar minimum) and 2000 (solar maximum) at the low-latitude Haleakala (latitude: 20.72 N, cut-off rigidity: 12.91 GeV) and the high-latitude Oulu (latitude: 65.05 N, cut-off rigidity: 0.81 GeV) NM stations. The most frequent local times of the GCR intensity daily maximum and minimum come later by about 2-3 hours in the solar activity maximum year 2000 than in the solar activity minimum year 1996. The Oulu NM station, whose cut-off rigidity is smaller, has its most frequent local times of the GCR intensity maximum and minimum later by 2-3 hours than those of the Haleakala station. This feature is more evident at the solar maximum. The phase of the daily variation in GCR depends upon the interplanetary magnetic field, which varies with solar activity, and the cut-off rigidity, which varies with geographic latitude.

  8. Time-symmetric integration in astrophysics

    Science.gov (United States)

    Hernandez, David M.; Bertschinger, Edmund

    2018-04-01

    Calculating the long-term solution of ordinary differential equations, such as those of the N-body problem, is central to understanding a wide range of dynamics in astrophysics, from galaxy formation to planetary chaos. Because generally no analytic solution exists to these equations, researchers rely on numerical methods that are prone to various errors. In an effort to mitigate these errors, powerful symplectic integrators have been employed. But symplectic integrators can be severely limited because they are not compatible with adaptive stepping and thus they have difficulty in accommodating changing time and length scales. A promising alternative is time-reversible integration, which can handle adaptive time-stepping, but the errors due to time-reversible integration in astrophysics are less understood. The goal of this work is to study analytically and numerically the errors caused by time-reversible integration, with and without adaptive stepping. We derive the modified differential equations of these integrators to perform the error analysis. As an example, we consider the trapezoidal rule, a reversible non-symplectic integrator, and show that it gives secular energy error increase for a pendulum problem and for a Hénon-Heiles orbit. We conclude that using reversible integration does not guarantee good energy conservation and that, when possible, use of symplectic integrators is favoured. We also show that time-symmetry and time-reversibility are properties that are distinct for an integrator.
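As a concrete illustration of the kind of test described, the sketch below applies the implicit trapezoidal rule, a reversible but non-symplectic method, to a pendulum with a fixed step and records the energy error along the trajectory; the step size, fixed-point solver, and initial condition are arbitrary choices, not the paper's setup.

```python
import numpy as np

def pendulum_rhs(state):
    q, p = state
    return np.array([p, -np.sin(q)])        # H = p^2/2 - cos(q)

def trapezoidal_step(state, h, n_iter=20):
    """One step of the implicit trapezoidal rule, solved by fixed-point
    iteration: y_{n+1} = y_n + h/2 (f(y_n) + f(y_{n+1}))."""
    f0 = pendulum_rhs(state)
    new = state + h * f0                    # explicit Euler predictor
    for _ in range(n_iter):
        new = state + 0.5 * h * (f0 + pendulum_rhs(new))
    return new

def energy(state):
    q, p = state
    return 0.5 * p**2 - np.cos(q)

state = np.array([1.0, 0.0])
h, n_steps = 0.1, 10_000
e0 = energy(state)
errors = []
for _ in range(n_steps):
    state = trapezoidal_step(state, h)
    errors.append(energy(state) - e0)
# The abstract reports secular energy-error growth for this reversible,
# non-symplectic method; `errors` lets one inspect the drift over the run.
```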

  9. A revised method to calculate the concentration time integral of atmospheric pollutants

    International Nuclear Information System (INIS)

    Voelz, E.; Schultz, H.

    1980-01-01

It is possible to calculate the spreading of a plume in the atmosphere under nonstationary and nonhomogeneous conditions by introducing the "particle-in-cell" method (PIC). This is a numerical method in which the transport of and the diffusion in the plume are reproduced in such a way that particles representing the concentration are moved time step-wise in restricted regions (cells), separately with the advection velocity and the diffusion velocity. This is a systematic advantage over the usually used steady-state Gaussian plume model. The fixed-point concentration time integral is calculated directly instead of being substituted by the locally integrated concentration at a constant time, as is done in the Gaussian model. In this way, inaccuracies due to the above-mentioned computational technique may be avoided for short-time emissions, as may be seen from the fact that the two integrals do not lead to the same results. Also, the PIC method makes it possible to consider the height-dependent wind speed and its variations, while the Gaussian model can be used only with averaged wind data. The concentration time integral calculated by the PIC method results in higher maximum values at shorter distances from the source. This is an effect often observed in measurements. (author)

  10. On Tuning PI Controllers for Integrating Plus Time Delay Systems

    Directory of Open Access Journals (Sweden)

    David Di Ruscio

    2010-10-01

Full Text Available Some analytical results concerning PI controller tuning based on integrator plus time delay models are worked out and presented. A method for obtaining PI controller parameters, Kp=alpha/(k*tau) and Ti=beta*tau, which ensures a given prescribed maximum time delay error, dtau_max, to time delay, tau, ratio parameter delta=dtau_max/tau, is presented. The corner stone in this method is a method product parameter, c=alpha*beta. Analytical relations between the PI controller parameters, Ti and Kp, and the time delay error parameter, delta, are presented, and we propose the setting beta=c/a*(delta+1) and alpha=a/(delta+1), which gives Ti=c/a*(delta+1)*tau and Kp=a/((delta+1)*k*tau), where the parameter, a, is constant in the method product parameter, c=alpha*beta. It also turns out that the integral time, Ti, is linear in delta, and the proportional gain, Kp, is inversely proportional to delta+1. For the original Ziegler-Nichols (ZN) method this parameter is approximately c=2.38, and the presented method may, e.g., be used to obtain new modified ZN parameters with increased robustness margins, as also documented in the paper.
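The relations above translate directly into a few lines of code; the sketch below is only a numerical illustration, with c = 2.38 taken from the Ziegler-Nichols value quoted in the abstract, while the parameter a is set to a placeholder value since it is not given here.

```python
def pi_tuning(k, tau, delta, a=1.0, c=2.38):
    """PI parameters for an integrator-plus-time-delay model H(s) = k e^(-tau s)/s,
    following the relations quoted in the abstract:
        alpha = a/(delta+1),  beta = c/a*(delta+1),
        Kp = alpha/(k*tau),   Ti = beta*tau.
    `a` is not specified in the abstract; a=1.0 here is only a placeholder,
    and c=2.38 is the Ziegler-Nichols-like value quoted above."""
    alpha = a / (delta + 1.0)
    beta = c / a * (delta + 1.0)
    Kp = alpha / (k * tau)
    Ti = beta * tau
    return Kp, Ti

# Example: integrator gain k = 0.5, time delay tau = 2 s, and a demanded
# robustness to a 100% time-delay error (delta = dtau_max/tau = 1).
Kp, Ti = pi_tuning(k=0.5, tau=2.0, delta=1.0)
print(Kp, Ti)
```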

  11. Maximum time-dependent space-charge limited diode currents

    Energy Technology Data Exchange (ETDEWEB)

    Griswold, M. E. [Tri Alpha Energy, Inc., Rancho Santa Margarita, California 92688 (United States); Fisch, N. J. [Princeton Plasma Physics Laboratory, Princeton University, Princeton, New Jersey 08543 (United States)

    2016-01-15

    Recent papers claim that a one dimensional (1D) diode with a time-varying voltage drop can transmit current densities that exceed the Child-Langmuir (CL) limit on average, apparently contradicting a previous conjecture that there is a hard limit on the average current density across any 1D diode, as t → ∞, that is equal to the CL limit. However, these claims rest on a different definition of the CL limit, namely, a comparison between the time-averaged diode current and the adiabatic average of the expression for the stationary CL limit. If the current were considered as a function of the maximum applied voltage, rather than the average applied voltage, then the original conjecture would not have been refuted.

  12. The Maximum Entropy Method for Optical Spectrum Analysis of Real-Time TDDFT

    International Nuclear Information System (INIS)

    Toogoshi, M; Kano, S S; Zempo, Y

    2015-01-01

The maximum entropy method (MEM) is one of the key techniques for spectral analysis. Its major feature is that the low-frequency part of a spectrum can be described from short time-series data. We therefore applied MEM to analyse the spectrum of the time-dependent dipole moment obtained from a real-time time-dependent density functional theory (TDDFT) calculation, which is intensively studied for computing optical properties. In the MEM analysis, however, the maximum lag of the autocorrelation is restricted by the total number of time-series data. As an improved MEM analysis, we proposed to use a concatenated data set made by repeating the raw data several times. We have applied this technique to the spectral analysis of the TDDFT dipole moment of ethylene and oligo-fluorene with n = 8. As a result, a higher resolution can be obtained, closer to that of a Fourier transform of actually time-evolved data with the same total number of time steps. The efficiency and the characteristic features of this technique are presented in this paper. (paper)

  13. A polynomial time algorithm for solving the maximum flow problem in directed networks

    International Nuclear Information System (INIS)

    Tlas, M.

    2015-01-01

An efficient polynomial time algorithm for solving maximum flow problems is proposed in this paper. The algorithm is based on the binary representation of capacities; it solves the maximum flow problem as a sequence of O(m) shortest path problems on residual networks with n nodes and m arcs. It runs in O(m^2 r) time, where r is the smallest integer greater than or equal to log B, and B is the largest arc capacity of the network. A numerical example is illustrated using the proposed algorithm. (author)

  14. Maximum-entropy description of animal movement.

    Science.gov (United States)

    Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M

    2015-03-01

    We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.

  15. Time-of-flight depth image enhancement using variable integration time

    Science.gov (United States)

    Kim, Sun Kwon; Choi, Ouk; Kang, Byongmin; Kim, James Dokyoon; Kim, Chang-Yeong

    2013-03-01

Time-of-Flight (ToF) cameras are used for a variety of applications because they deliver depth information at a high frame rate. These cameras, however, suffer from challenging problems such as noise and motion artifacts. To increase the signal-to-noise ratio (SNR), the camera should calculate a distance based on a large amount of infra-red light, which needs to be integrated over a long time. On the other hand, the integration time should be short enough to suppress motion artifacts. We propose a ToF depth imaging method that combines the advantages of short and long integration times, exploiting an image fusion scheme proposed for color imaging. To calibrate depth differences due to the change of integration times, a depth transfer function is estimated by analyzing the joint histogram of depths in the two images of different integration times. The depth images are then transformed into wavelet domains and fused into a depth image with suppressed noise and low motion artifacts. To evaluate the proposed method, we captured a moving bar of a metronome with different integration times. The experiment shows the proposed method could effectively remove the motion artifacts while preserving an SNR comparable to the depth images acquired during a long integration time.

  16. Linear Time Local Approximation Algorithm for Maximum Stable Marriage

    Directory of Open Access Journals (Sweden)

    Zoltán Király

    2013-08-01

Full Text Available We consider a two-sided market under incomplete preference lists with ties, where the goal is to find a maximum size stable matching. The problem is APX-hard, and a 3/2-approximation was given by McDermid [1]. This algorithm has a non-linear running time and, more importantly, needs global knowledge of all preference lists. We present a very natural, economically reasonable, local, linear time algorithm with the same ratio, using some ideas of Paluch [2]. In this algorithm every person makes decisions using only their own list and some information asked from members of these lists (as in the case of the famous algorithm of Gale and Shapley). Some consequences for the Hospitals/Residents problem are also discussed.

  17. Enhanced IMC based PID controller design for non-minimum phase (NMP) integrating processes with time delays.

    Science.gov (United States)

    Ghousiya Begum, K; Seshagiri Rao, A; Radhakrishnan, T K

    2017-05-01

Internal model control (IMC) with an optimal H2 minimization framework is proposed in this paper for the design of proportional-integral-derivative (PID) controllers. The controller design is addressed for integrating and double-integrating time delay processes with right half plane (RHP) zeros. The Blaschke product is used to derive the optimal controller. There is a single adjustable closed-loop tuning parameter for controller design. Systematic guidelines are provided for the selection of this tuning parameter based on maximum sensitivity. Simulation studies have been carried out on various integrating time delay processes to show the advantages of the proposed method. The proposed controller provides enhanced closed-loop performance when compared to recently reported methods in the literature. A quantitative comparative analysis has been carried out using the performance indices Integral Absolute Error (IAE) and Total Variation (TV). Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  18. 49 CFR 398.6 - Hours of service of drivers; maximum driving time.

    Science.gov (United States)

    2010-10-01

    ... REGULATIONS TRANSPORTATION OF MIGRANT WORKERS § 398.6 Hours of service of drivers; maximum driving time. No person shall drive nor shall any motor carrier permit or require a driver employed or used by it to drive...

  19. Design Considerations for Integration of Terahertz Time-Domain Spectroscopy in Microfluidic Platforms

    Directory of Open Access Journals (Sweden)

    Rasha Al-Hujazy

    2018-03-01

Full Text Available Microfluidic platforms have received much attention in recent years. In particular, there is interest in combining spectroscopy with microfluidic platforms. This work investigates the integration of microfluidic platforms and terahertz time-domain spectroscopy (THz-TDS) systems. A semiclassical computational model is used to simulate the emission of THz radiation from a GaAs photoconductive THz emitter. This model incorporates white noise with increasing noise amplitude (corresponding to decreasing dynamic range values). White noise is selected over other noise types because of its contribution in THz-TDS systems. The results from this semiclassical computational model, in combination with defined sample thicknesses, can provide the maximum measurable absorption coefficient for a microfluidic-based THz-TDS system. The maximum measurable frequencies for such systems can be extracted through the relationship between the maximum measurable absorption coefficient and the absorption coefficient of representative biofluids. The sample thickness of the microfluidic platform and the dynamic range of the THz-TDS system both play a role in defining the maximum measurable frequency for microfluidic-based THz-TDS systems. The results of this work serve as a design tool for the development of such systems.

  20. The Research of Car-Following Model Based on Real-Time Maximum Deceleration

    Directory of Open Access Journals (Sweden)

    Longhai Yang

    2015-01-01

Full Text Available This paper is concerned with the effect of real-time maximum deceleration in car-following. The real-time maximum deceleration is estimated from vehicle dynamics. It is known that the intelligent driver model (IDM) can control adaptive cruise control (ACC) well. The disadvantages of the IDM at high and constant speed are analyzed. A new car-following model applicable to ACC is accordingly established by modifying the desired minimum gap and the structure of the IDM. We simulated the new car-following model and the IDM under two different kinds of road conditions. In the first, the vehicles drive on a single road surface, taking a dry asphalt road as the example in this paper. In the second, the vehicles drive onto a different surface; here we analyze the situation in which vehicles drive from a dry asphalt road onto an icy road. From the simulation, we found that the new car-following model can not only ensure driving safety and comfort but also control the steady driving of the vehicle with a smaller time headway than the IDM.
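For context, the sketch below implements the standard IDM acceleration law that the record takes as its starting point; the parameter values are generic textbook numbers, and the paper's modified desired minimum gap is not reproduced.

```python
import math

def idm_acceleration(v, dv, s, v0=33.3, T=1.5, a=1.0, b=2.0, s0=2.0, delta=4):
    """Standard IDM acceleration [m/s^2].
    v  : own speed [m/s],  dv : approaching rate v - v_leader [m/s],
    s  : gap to the leader [m].
    v0, T, a, b, s0, delta are the usual IDM parameters (desired speed,
    time headway, max acceleration, comfortable deceleration, minimum gap,
    acceleration exponent): generic values, not those of the paper."""
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * math.sqrt(a * b)))
    return a * (1.0 - (v / v0) ** delta - (s_star / s) ** 2)

# Example: following a slower leader at a 25 m gap.
print(idm_acceleration(v=20.0, dv=3.0, s=25.0))
```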

  1. A theory of timing in scintillation counters based on maximum likelihood estimation

    International Nuclear Information System (INIS)

    Tomitani, Takehiro

    1982-01-01

    A theory of timing in scintillation counters based on the maximum likelihood estimation is presented. An optimum filter that minimizes the variance of timing is described. A simple formula to estimate the variance of timing is presented as a function of photoelectron number, scintillation decay constant and the single electron transit time spread in the photomultiplier. The present method was compared with the theory by E. Gatti and V. Svelto. The proposed method was applied to two simple models and rough estimations of potential time resolution of several scintillators are given. The proposed method is applicable to the timing in Cerenkov counters and semiconductor detectors as well. (author)

  2. On the maximum-entropy/autoregressive modeling of time series

    Science.gov (United States)

    Chao, B. F.

    1984-01-01

The autoregressive (AR) model of a random process is interpreted in the light of Prony's relation, which relates a complex conjugate pair of poles of the AR process in the z-plane (or the z domain) on the one hand, to the complex frequency of one complex harmonic function in the time domain on the other. Thus the AR model of a time series is one that models the time series as a linear combination of complex harmonic functions, which include pure sinusoids and real exponentials as special cases. An AR model is completely determined by its z-domain pole configuration. The maximum-entropy/autoregressive (ME/AR) spectrum, defined on the unit circle of the z-plane (or the frequency domain), is nothing but a convenient, but ambiguous, visual representation. It is asserted that the position and shape of a spectral peak are determined by the corresponding complex frequency, and the height of the spectral peak contains little information about the complex amplitude of the complex harmonic functions.
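The Prony-type reading described above can be sketched as follows: fit AR coefficients (here by ordinary least squares, an assumed stand-in for a maximum-entropy/Burg fit), take the poles as roots of the characteristic polynomial, and convert each pole z to a complex frequency s = ln(z)/dt, whose imaginary part gives the oscillation frequency and whose real part gives the damping.

```python
import numpy as np

def ar_poles_to_frequencies(x, order, dt):
    """Least-squares AR(p) fit, then Prony-type conversion of each pole
    z_j to a complex frequency s_j = ln(z_j)/dt (real part: damping,
    imaginary part: 2*pi*frequency)."""
    # Design matrix for x[n] = a_1 x[n-1] + ... + a_p x[n-p] + e[n]
    X = np.column_stack([x[order - k - 1 : len(x) - k - 1] for k in range(order)])
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    # Poles = roots of z^p - a_1 z^{p-1} - ... - a_p
    poles = np.roots(np.concatenate(([1.0], -a)))
    s = np.log(poles.astype(complex)) / dt
    return poles, s.imag / (2 * np.pi), s.real   # poles, freqs [Hz], dampings [1/s]

# Synthetic example: one damped sinusoid plus noise.
dt = 0.01
t = np.arange(0, 10, dt)
x = np.exp(-0.2 * t) * np.cos(2 * np.pi * 1.5 * t) \
    + 0.01 * np.random.default_rng(0).normal(size=t.size)
poles, freqs, dampings = ar_poles_to_frequencies(x, order=4, dt=dt)
print(freqs, dampings)   # contains a conjugate pair near +-1.5 Hz, damping near -0.2
```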

  3. Higher renewable energy integration into the existing energy system of Finland – Is there any maximum limit?

    International Nuclear Information System (INIS)

    Zakeri, Behnam; Syri, Sanna; Rinne, Samuli

    2015-01-01

Finland is to increase the share of RES (renewable energy sources) up to 38% of final energy consumption by 2020. While benefiting from local biomass resources, the Finnish energy system is deemed able to achieve this goal; increasing the share of other, intermittent renewables, namely wind power and solar energy, is under development. Yet the maximum flexibility of the existing energy system in integrating renewable energy has not been investigated, which is an important step before undertaking new renewable energy obligations. This study aims at filling this gap by hourly analysis and comprehensive modeling of the energy system, including electricity, heat, and transportation, employing the EnergyPLAN tool. Focusing on technical and economic implications, we assess the maximum potential of different RESs separately (including bioenergy, hydropower, wind power, solar heating and PV, and heat pumps), as well as an optimal mix of different technologies. Furthermore, we propose a new index for assessing the maximum flexibility of energy systems in absorbing variable renewable energy. The results demonstrate that wind energy can be harvested at maximum levels of 18-19% of annual power demand (approx. 16 TWh/a) without major enhancements in the flexibility of the energy infrastructure. With today's energy demand, the maximum feasible renewable energy share for Finland is around 44-50%, with an optimal mix of different technologies, which promises a 35% reduction in carbon emissions from the 2012 level. Moreover, the Finnish energy system is flexible enough to raise the share of renewables in gross electricity consumption up to 69-72% at maximum. Higher shares of RES call for lower energy consumption (energy efficiency) and more flexibility in balancing energy supply and consumption (e.g. by energy storage). - Highlights: • By hourly analysis, we model the whole energy system of Finland. • With existing energy infrastructure, RES (renewable energy sources) in primary energy cannot go beyond 50%.

  4. Steffensen's Integral Inequality on Time Scales

    Directory of Open Access Journals (Sweden)

    Ozkan Umut Mutlu

    2007-01-01

Full Text Available We establish generalizations of Steffensen's integral inequality on time scales via the diamond-α dynamic integral, which is defined as a linear combination of the delta and nabla integrals.
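For reference, the classical real-line inequality that such time-scales results generalize is, assuming the standard statement of Steffensen's inequality is the one meant:

```latex
% Classical Steffensen inequality: f nonincreasing on [a,b], 0 <= g(t) <= 1,
% and lambda defined as the integral of g.
\lambda = \int_a^b g(t)\,dt
\quad\Longrightarrow\quad
\int_{b-\lambda}^{b} f(t)\,dt \;\le\; \int_a^b f(t)\,g(t)\,dt \;\le\; \int_a^{a+\lambda} f(t)\,dt .
```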

  5. STATIONARITY OF ANNUAL MAXIMUM DAILY STREAMFLOW TIME SERIES IN SOUTH-EAST BRAZILIAN RIVERS

    Directory of Open Access Journals (Sweden)

    Jorge Machado Damázio

    2015-08-01

Full Text Available DOI: 10.12957/cadest.2014.18302. The paper presents a statistical analysis of annual maximum daily streamflows between 1931 and 2013 in South-East Brazil, focused on detecting and modelling non-stationarity aspects. Flood protection for the large valleys in South-East Brazil is provided by multiple-purpose reservoir systems built during the 20th century, whose design and operation plans were made assuming stationarity of the historical flood time series. Land cover changes and the rapidly increasing level of atmospheric greenhouse gases of the last century may be affecting flood regimes in these valleys, so nonstationary modelling may need to be applied to re-assess dam safety and flood control operation rules at the existing reservoir systems. Six annual maximum daily streamflow time series are analysed. The time series were plotted together with fitted smooth loess functions, and non-parametric statistical tests were performed to check the significance of apparent trends shown by the plots. Non-stationarity is modelled by fitting univariate extreme value distribution functions whose location parameter varies linearly with time. Stationary and non-stationary modelling are compared with the likelihood ratio statistic. In four of the six analyzed time series, non-stationary modelling outperformed stationary modelling. Keywords: Stationarity; Extreme Value Distributions; Flood Frequency Analysis; Maximum Likelihood Method.
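A minimal sketch of the comparison described, a GEV with location linear in time fitted against a stationary GEV and judged by a likelihood-ratio test, is given below on synthetic data; the SciPy parameterization, optimizer choices, and trend size are assumptions and not the paper's.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme, chi2

def nll_stationary(theta, x):
    mu, log_sigma, xi = theta
    # SciPy's genextreme uses shape c = -xi
    return -np.sum(genextreme.logpdf(x, c=-xi, loc=mu, scale=np.exp(log_sigma)))

def nll_trend(theta, x, t):
    mu0, mu1, log_sigma, xi = theta
    return -np.sum(genextreme.logpdf(x, c=-xi, loc=mu0 + mu1 * t, scale=np.exp(log_sigma)))

# Synthetic annual maxima with a weak upward trend in the location parameter.
years = np.arange(1931, 2014)
t = years - years[0]
x = genextreme.rvs(c=-0.1, loc=1000 + 2.0 * t, scale=300, random_state=0)

res0 = minimize(nll_stationary, x0=[np.mean(x), np.log(np.std(x)), 0.1],
                args=(x,), method="Nelder-Mead")
res1 = minimize(nll_trend, x0=[np.mean(x), 0.0, np.log(np.std(x)), 0.1],
                args=(x, t), method="Nelder-Mead")

# Likelihood-ratio statistic: 2*(l1 - l0) ~ chi^2(1) under the stationary model.
lr = 2.0 * (res0.fun - res1.fun)
p_value = chi2.sf(lr, df=1)
print(lr, p_value)
```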

  6. Statistics of the first passage time of Brownian motion conditioned by maximum value or area

    International Nuclear Information System (INIS)

    Kearney, Michael J; Majumdar, Satya N

    2014-01-01

    We derive the moments of the first passage time for Brownian motion conditioned by either the maximum value or the area swept out by the motion. These quantities are the natural counterparts to the moments of the maximum value and area of Brownian excursions of fixed duration, which we also derive for completeness within the same mathematical framework. Various applications are indicated. (paper)

  7. Maximum-principle-satisfying space-time conservation element and solution element scheme applied to compressible multifluids

    KAUST Repository

    Shen, Hua; Wen, Chih-Yung; Parsani, Matteo; Shu, Chi-Wang

    2016-01-01

    A maximum-principle-satisfying space-time conservation element and solution element (CE/SE) scheme is constructed to solve a reduced five-equation model coupled with the stiffened equation of state for compressible multifluids. We first derive a sufficient condition for CE/SE schemes to satisfy maximum-principle when solving a general conservation law. And then we introduce a slope limiter to ensure the sufficient condition which is applicative for both central and upwind CE/SE schemes. Finally, we implement the upwind maximum-principle-satisfying CE/SE scheme to solve the volume-fraction-based five-equation model for compressible multifluids. Several numerical examples are carried out to carefully examine the accuracy, efficiency, conservativeness and maximum-principle-satisfying property of the proposed approach.

  8. Maximum-principle-satisfying space-time conservation element and solution element scheme applied to compressible multifluids

    KAUST Repository

    Shen, Hua

    2016-10-19

    A maximum-principle-satisfying space-time conservation element and solution element (CE/SE) scheme is constructed to solve a reduced five-equation model coupled with the stiffened equation of state for compressible multifluids. We first derive a sufficient condition for CE/SE schemes to satisfy maximum-principle when solving a general conservation law. And then we introduce a slope limiter to ensure the sufficient condition which is applicative for both central and upwind CE/SE schemes. Finally, we implement the upwind maximum-principle-satisfying CE/SE scheme to solve the volume-fraction-based five-equation model for compressible multifluids. Several numerical examples are carried out to carefully examine the accuracy, efficiency, conservativeness and maximum-principle-satisfying property of the proposed approach.

  9. Real time estimation of photovoltaic modules characteristics and its application to maximum power point operation

    Energy Technology Data Exchange (ETDEWEB)

    Garrigos, Ausias; Blanes, Jose M.; Carrasco, Jose A. [Area de Tecnologia Electronica, Universidad Miguel Hernandez de Elche, Avda. de la Universidad s/n, 03202 Elche, Alicante (Spain); Ejea, Juan B. [Departamento de Ingenieria Electronica, Universidad de Valencia, Avda. Dr Moliner 50, 46100 Valencia, Valencia (Spain)

    2007-05-15

In this paper, an approximate curve-fitting method for photovoltaic modules is presented. The operation is based on solving a simple solar cell electrical model with a microcontroller in real time. Only four voltage and current coordinates are needed to obtain the solar module parameters and set its operation at maximum power under any conditions of illumination and temperature. Despite its simplicity, this method is suitable for low-cost real-time applications, such as the control loop reference generator in photovoltaic maximum power point circuits. The theory that supports the estimator, together with simulations and experimental results, is presented. (author)

  10. Integrable Time-Dependent Quantum Hamiltonians

    Science.gov (United States)

    Sinitsyn, Nikolai A.; Yuzbashyan, Emil A.; Chernyak, Vladimir Y.; Patra, Aniket; Sun, Chen

    2018-05-01

    We formulate a set of conditions under which the nonstationary Schrödinger equation with a time-dependent Hamiltonian is exactly solvable analytically. The main requirement is the existence of a non-Abelian gauge field with zero curvature in the space of system parameters. Known solvable multistate Landau-Zener models satisfy these conditions. Our method provides a strategy to incorporate time dependence into various quantum integrable models while maintaining their integrability. We also validate some prior conjectures, including the solution of the driven generalized Tavis-Cummings model.

  11. Maximum Lateness Scheduling on Two-Person Cooperative Games with Variable Processing Times and Common Due Date

    OpenAIRE

    Liu, Peng; Wang, Xiaoli

    2017-01-01

    A new maximum lateness scheduling model in which both cooperative games and variable processing times exist simultaneously is considered in this paper. The job variable processing time is described by an increasing or a decreasing function dependent on the position of a job in the sequence. Two persons have to cooperate in order to process a set of jobs. Each of them has a single machine and their processing cost is defined as the minimum value of maximum lateness. All jobs have a common due ...

  12. Time Reversal Migration for Passive Sources Using a Maximum Variance Imaging Condition

    KAUST Repository

    Wang, H.; Alkhalifah, Tariq Ali

    2017-01-01

    The conventional time-reversal imaging approach for micro-seismic or passive source location is based on focusing the back-propagated wavefields from each recorded trace into a source image. It suffers from strong background noise and limited acquisition aperture, which may create unexpected artifacts and cause errors in the source location. To overcome this problem, we propose a new imaging condition for microseismic imaging based on comparing the amplitude variance in certain windows, and use it to suppress the artifacts as well as to find the correct location of passive sources. Instead of simply searching for the maximum energy point in the back-propagated wavefield, we calculate the amplitude variance over a window moving along both the space and time axes to create a highly resolved passive event image. The variance operation has negligible cost compared with the forward/backward modeling operations, so the maximum variance imaging condition is efficient and effective. We test our approach numerically on a simple three-layer model and on a piece of the Marmousi model, both of which show reasonably good results.
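
    A toy numpy sketch of the imaging condition itself, with a synthetic stand-in for the back-propagated wavefield (window sizes and the source position are our assumptions):

      import numpy as np

      # Toy "back-propagated" wavefield u[t, x]: random noise plus a focused burst
      # at (t=120, x=64); a real application would use reverse-time modeling here.
      rng = np.random.default_rng(0)
      nt, nx = 400, 128
      u = 0.1 * rng.standard_normal((nt, nx))
      t0, x0 = 120, 64
      tt, xx = np.meshgrid(np.arange(nt), np.arange(nx), indexing="ij")
      u += 2.0 * np.exp(-((tt - t0) ** 2 / 20.0 + (xx - x0) ** 2 / 10.0))

      def max_variance_image(u, wt=11, wx=7):
          """Amplitude variance in a moving (2*wt+1) x (2*wx+1) space-time window."""
          nt, nx = u.shape
          img = np.zeros_like(u)
          for it in range(wt, nt - wt):
              for ix in range(wx, nx - wx):
                  img[it, ix] = u[it - wt:it + wt + 1, ix - wx:ix + wx + 1].var()
          return img

      img = max_variance_image(u)
      it_hat, ix_hat = np.unravel_index(np.argmax(img), img.shape)
      print(f"estimated source (t, x) = ({it_hat}, {ix_hat})")   # close to (120, 64)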

  13. Time Reversal Migration for Passive Sources Using a Maximum Variance Imaging Condition

    KAUST Repository

    Wang, H.

    2017-05-26

    The conventional time-reversal imaging approach for micro-seismic or passive source location is based on focusing the back-propagated wavefields from each recorded trace into a source image. It suffers from strong background noise and limited acquisition aperture, which may create unexpected artifacts and cause errors in the source location. To overcome this problem, we propose a new imaging condition for microseismic imaging based on comparing the amplitude variance in certain windows, and use it to suppress the artifacts as well as to find the correct location of passive sources. Instead of simply searching for the maximum energy point in the back-propagated wavefield, we calculate the amplitude variance over a window moving along both the space and time axes to create a highly resolved passive event image. The variance operation has negligible cost compared with the forward/backward modeling operations, so the maximum variance imaging condition is efficient and effective. We test our approach numerically on a simple three-layer model and on a piece of the Marmousi model, both of which show reasonably good results.

  14. SOCIAL INTEGRATION: TESTING ANTECEDENTS OF TIME SPENT ONLINE

    Directory of Open Access Journals (Sweden)

    Lily Suriani Mohd Arif

    2013-07-01

    Full Text Available The literature on the relationship between social integration and time spent online provides conflicting evidence. The study identifies and highlights the controversy and attempts to clarify the relationship of social integration with time spent online by decomposing the construct of social integration into its affective and behavioral dimensions. The study tests antecedents and effects of time spent online in a random sample of senior-level undergraduate students at a public university in Malaysia. The findings indicated that while self-report measures of behavioral social integration did not predict time spent online, affective social integration had an inverse relationship with time spent online.

  15. Short-time maximum entropy method analysis of molecular dynamics simulation: Unimolecular decomposition of formic acid

    Science.gov (United States)

    Takahashi, Osamu; Nomura, Tetsuo; Tabayashi, Kiyohiko; Yamasaki, Katsuyoshi

    2008-07-01

    We performed spectral analysis by using the maximum entropy method instead of the traditional Fourier transform technique to investigate the short-time behavior in molecular systems, such as the energy transfer between vibrational modes and chemical reactions. This procedure was applied to direct ab initio molecular dynamics calculations for the decomposition of formic acid. More reactive trajectories of dehydration than of decarboxylation were obtained for Z-formic acid, which is consistent with the predictions of previous theoretical and experimental studies. Short-time maximum entropy method analyses were performed for typical reactive and non-reactive trajectories. Spectrograms of a reactive trajectory were obtained; these clearly showed the reactant, transient, and product regions, especially for the dehydration path.

  16. Symplectic integrators with adaptive time steps

    Science.gov (United States)

    Richardson, A. S.; Finn, J. M.

    2012-01-01

    In recent decades, there have been many attempts to construct symplectic integrators with variable time steps, with rather disappointing results. In this paper, we identify the causes for this lack of performance, and find that they fall into two categories. In the first, the time step is considered a function of time alone, Δ = Δ(t). In this case, backward error analysis shows that while the algorithms remain symplectic, parametric instabilities may arise because of resonance between oscillations of Δ(t) and the orbital motion. In the second category the time step is a function of phase space variables Δ = Δ(q, p). In this case, the system of equations to be solved is analyzed by introducing a new time variable τ with dt = Δ(q, p) dτ. The transformed equations are no longer in Hamiltonian form, and thus do not benefit from integration methods which would be symplectic for Hamiltonian systems. We analyze two methods for integrating the transformed equations which do, however, preserve the structure of the original equations. The first is an extended phase space method, which has been successfully used in previous studies of adaptive time step symplectic integrators. The second, novel, method is based on a non-canonical mixed-variable generating function. Numerical trials for both of these methods show good results, without parametric instabilities or spurious growth or damping. It is then shown how to adapt the time step to an error estimate found by backward error analysis, in order to optimize the time-stepping scheme. Numerical results are obtained using this formulation and compared with other time-stepping schemes for the extended phase space symplectic method.

  17. Time-optimal control of nuclear reactor power with adaptive proportional- integral-feedforward gains

    International Nuclear Information System (INIS)

    Park, Moon Ghu; Cho, Nam Zin

    1993-01-01

    A time-optimal control method which consists of coarse and fine control stages is described here. During the coarse control stage, the maximum control effort (time-optimal) is used to direct the system toward the switching boundary which is set near the desired power level. At this boundary, the controller is switched to the fine control stage in which an adaptive proportional-integral-feedforward (PIF) controller is used to compensate for any unmodeled reactivity feedback effects. This fine control is also introduced to obtain a constructive method for determining the (adaptive) feedback gains against the sampling effect. The feedforward control term is included to suppress over- or undershoot. The estimation and feedback of the temperature-induced reactivity are also discussed.

  18. Modelling information flow along the human connectome using maximum flow.

    Science.gov (United States)

    Lyoo, Youngwook; Kim, Jieun E; Yoon, Sujung

    2018-01-01

    The human connectome is a complex network that transmits information between interlinked brain regions. Using graph theory, previously well-known network measures of integration between brain regions have been constructed under the key assumption that information flows strictly along the shortest paths possible between two nodes. However, it is now apparent that information does flow through non-shortest paths in many real-world networks such as cellular networks, social networks, and the internet. In the current hypothesis, we present a novel framework using the maximum flow to quantify information flow along all possible paths within the brain, so as to implement an analogy to network traffic. We hypothesize that the connection strengths of brain networks represent a limit on the amount of information that can flow through the connections per unit of time. This allows us to compute the maximum amount of information flow between two brain regions along all possible paths. Using this novel framework of maximum flow, previous network topological measures are expanded to account for information flow through non-shortest paths. The most important advantage of the current approach using maximum flow is that it can integrate the weighted connectivity data in a way that better reflects the real information flow of the brain network. The current framework and its concept regarding maximum flow provides insight on how network structure shapes information flow in contrast to graph theory, and suggests future applications such as investigating structural and functional connectomes at a neuronal level. Copyright © 2017 Elsevier Ltd. All rights reserved.
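
    A small sketch of the proposed quantity using networkx (region names, weights and the bidirectional-capacity assumption are ours, not the paper's data):

      import networkx as nx

      # Hypothetical weighted "connectome": edge weights act as capacities limiting
      # how much information can flow per unit time.
      edges = [("V1", "V2", 3.0), ("V1", "MT", 1.0), ("V2", "MT", 2.0),
               ("V2", "PFC", 1.5), ("MT", "PFC", 2.5)]

      G = nx.DiGraph()
      for a, b, w in edges:            # treat each structural connection as bidirectional
          G.add_edge(a, b, capacity=w)
          G.add_edge(b, a, capacity=w)

      flow_value, flow_dict = nx.maximum_flow(G, "V1", "PFC")
      print("max information flow V1 -> PFC:", flow_value)   # uses non-shortest paths too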

  19. MODEL PREDICTIVE CONTROL FOR PHOTOVOLTAIC STATION MAXIMUM POWER POINT TRACKING SYSTEM

    Directory of Open Access Journals (Sweden)

    I. Elzein

    2015-01-01

    Full Text Available The purpose of this paper is to present an alternative maximum power point tracking (MPPT) algorithm for a photovoltaic module (PVM) to produce the maximum power, Pmax, using the optimal duty ratio, D, for different types of converters and load matching. We present a state-based approach to the design of the maximum power point tracker for a stand-alone photovoltaic power generation system. The system under consideration consists of a solar array with nonlinear time-varying characteristics and a step-up converter with an appropriate filter. The proposed algorithm has the advantages of maximizing the efficiency of power utilization, can be integrated with other MPPT algorithms without affecting the PVM performance, is excellent for real-time applications, and is a robust analytical method, unlike traditional MPPT algorithms that rely on trial and error or on comparisons between present and past states. The procedure to calculate the optimal duty ratio for buck, boost and buck-boost converters, to transfer the maximum power from a PVM to a load, is presented in the paper. Additionally, the existence and uniqueness of the optimal internal impedance, to transfer the maximum power from a photovoltaic module using load matching, is proved.
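
    The load-matching step can be illustrated with the standard ideal-converter impedance relations (textbook formulas under continuous conduction; not necessarily the paper's exact derivation, and the operating point below is hypothetical):

      import math

      def optimal_duty(r_mpp, r_load, topology):
          """Duty ratio making the ideal converter's input resistance equal r_mpp.

          Lossless continuous-conduction relations (textbook):
            buck:        R_in = R_load / D**2
            boost:       R_in = R_load * (1 - D)**2
            buck-boost:  R_in = R_load * ((1 - D) / D)**2
          Returns None when the required duty ratio falls outside (0, 1).
          """
          if topology == "buck":
              d = math.sqrt(r_load / r_mpp)
          elif topology == "boost":
              d = 1.0 - math.sqrt(r_mpp / r_load)
          elif topology == "buck-boost":
              d = 1.0 / (1.0 + math.sqrt(r_mpp / r_load))
          else:
              raise ValueError(topology)
          return d if 0.0 < d < 1.0 else None

      # Hypothetical operating point: V_mpp = 30 V, I_mpp = 7 A, 12-ohm load.
      r_mpp = 30.0 / 7.0
      for topo in ("buck", "boost", "buck-boost"):
          print(topo, optimal_duty(r_mpp, 12.0, topo))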

  20. A maximum power point tracking for photovoltaic-SPE system using a maximum current controller

    Energy Technology Data Exchange (ETDEWEB)

    Muhida, Riza [Osaka Univ., Dept. of Physical Science, Toyonaka, Osaka (Japan); Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Park, Minwon; Dakkak, Mohammed; Matsuura, Kenji [Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Tsuyoshi, Akira; Michira, Masakazu [Kobe City College of Technology, Nishi-ku, Kobe (Japan)

    2003-02-01

    Processes to produce hydrogen from solar photovoltaic (PV)-powered water electrolysis using solid polymer electrolysis (SPE) are reported. An alternative control of maximum power point tracking (MPPT) in the PV-SPE system, based on a maximum current searching method, has been designed and implemented. Based on the voltage-current characteristics and a theoretical analysis of the SPE, it can be shown that tracking the maximum current output of the DC-DC converter on the SPE side simultaneously tracks the maximum power point of the photovoltaic panel. This method uses a proportional-integral controller to control the duty factor of the DC-DC converter through pulse-width modulation (PWM). The MPPT performance and hydrogen production performance of this method have been evaluated and discussed based on the experimental results. (Author)

  1. Strong Maximum Principle for Multi-Term Time-Fractional Diffusion Equations and its Application to an Inverse Source Problem

    OpenAIRE

    Liu, Yikan

    2015-01-01

    In this paper, we establish a strong maximum principle for fractional diffusion equations with multiple Caputo derivatives in time, and investigate a related inverse problem of practical importance. Exploiting the solution properties and the involved multinomial Mittag-Leffler functions, we improve the weak maximum principle for the multi-term time-fractional diffusion equation to a stronger one, which is parallel to that for its single-term counterpart as expected. As a direct application, w...

  2. A higher order space-time Galerkin scheme for time domain integral equations

    KAUST Repository

    Pray, Andrew J.

    2014-12-01

    Stability of time domain integral equation (TDIE) solvers has remained an elusive goal for many years. Advancement of this research has largely progressed on four fronts: 1) Exact integration, 2) Lubich quadrature, 3) smooth temporal basis functions, and 4) space-time separation of convolutions with the retarded potential. The latter method's efficacy in stabilizing solutions to the time domain electric field integral equation (TD-EFIE) was previously reported for first-order surface descriptions (flat elements) and zeroth-order functions as the temporal basis. In this work, we develop the methodology necessary to extend the scheme to higher order surface descriptions as well as to enable its use with higher order basis functions in both space and time. These basis functions are then used in a space-time Galerkin framework. A number of results are presented that demonstrate convergence in time. The viability of the space-time separation method in producing stable results is demonstrated experimentally for these examples.

  3. A higher order space-time Galerkin scheme for time domain integral equations

    KAUST Repository

    Pray, Andrew J.; Beghein, Yves; Nair, Naveen V.; Cools, Kristof; Bagci, Hakan; Shanker, Balasubramaniam

    2014-01-01

    Stability of time domain integral equation (TDIE) solvers has remained an elusive goal for many years. Advancement of this research has largely progressed on four fronts: 1) Exact integration, 2) Lubich quadrature, 3) smooth temporal basis functions, and 4) space-time separation of convolutions with the retarded potential. The latter method's efficacy in stabilizing solutions to the time domain electric field integral equation (TD-EFIE) was previously reported for first-order surface descriptions (flat elements) and zeroth-order functions as the temporal basis. In this work, we develop the methodology necessary to extend the scheme to higher order surface descriptions as well as to enable its use with higher order basis functions in both space and time. These basis functions are then used in a space-time Galerkin framework. A number of results are presented that demonstrate convergence in time. The viability of the space-time separation method in producing stable results is demonstrated experimentally for these examples.

  4. Maximum Power from a Solar Panel

    Directory of Open Access Journals (Sweden)

    Michael Miller

    2010-01-01

    Full Text Available Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current of maximum power. These quantities are determined by finding the maximum value for the equation for power using differentiation. After the maximum values are found for each time of day, each individual quantity, voltage of maximum power, current of maximum power, and maximum power is plotted as a function of the time of day.
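
    The differentiation step can be reproduced symbolically; a small Python sketch with an assumed single-diode I(V) model (parameters are illustrative, not taken from the project):

      import sympy as sp

      V = sp.symbols("V", positive=True)
      Iph, I0, a = 8.2, 2.4e-3, 4.4        # illustrative diode-model parameters

      I = Iph - I0 * (sp.exp(V / a) - 1)   # panel current as a function of voltage
      P = V * I                            # power delivered to the load

      dPdV = sp.diff(P, V)                 # maximum power where dP/dV = 0
      V_mp = float(sp.nsolve(dPdV, V, 30))
      I_mp = float(I.subs(V, V_mp))
      print(f"V_mp = {V_mp:.2f} V, I_mp = {I_mp:.2f} A, P_max = {V_mp * I_mp:.1f} W")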

  5. Exponential integrators in time-dependent density-functional calculations

    Science.gov (United States)

    Kidd, Daniel; Covington, Cody; Varga, Kálmán

    2017-12-01

    The integrating factor and exponential time differencing methods are implemented and tested for solving the time-dependent Kohn-Sham equations. Popular time propagation methods used in physics, as well as other robust numerical approaches, are compared to these exponential integrator methods in order to judge the relative merit of the computational schemes. We determine an improvement in accuracy of multiple orders of magnitude when describing dynamics driven primarily by a nonlinear potential. For cases of dynamics driven by a time-dependent external potential, the accuracy of the exponential integrator methods is less enhanced, but they still match or outperform the best of the conventional methods tested.
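
    As a one-dimensional illustration of why exponential integrators help when a stiff term dominates (a scalar toy equation, not the Kohn-Sham system; all parameters are assumed):

      import numpy as np

      # Toy stiff scalar ODE du/dt = L*u + N(u).
      L = -50.0
      N = lambda u: np.sin(u)

      def etd1_step(u, h):
          """First-order exponential time differencing: the linear part is
          propagated exactly by exp(L*h)."""
          eLh = np.exp(L * h)
          return eLh * u + (eLh - 1.0) / L * N(u)

      def euler_step(u, h):
          return u + h * (L * u + N(u))

      h, u_etd, u_eul = 0.05, 1.0, 1.0     # h*|L| = 2.5: explicit Euler is unstable
      for _ in range(int(1.0 / h)):
          u_etd, u_eul = etd1_step(u_etd, h), euler_step(u_eul, h)

      print(f"ETD1:  u(1) = {u_etd: .4e}")   # decays toward the fixed point near 0
      print(f"Euler: u(1) = {u_eul: .4e}")   # grows without bound at this step size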

  6. A unified approach to the design of advanced proportional-integral-derivative controllers for time-delay processes

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Moonyong [Yeungnam University, Gyeongsan (Korea, Republic of); Vu, Truong Nguyen Luan [University of Technical Education of Ho Chi Minh City, Ho Chi Minh (China)

    2013-03-15

    A unified approach for the design of proportional-integral-derivative (PID) controllers cascaded with first-order lead-lag filters is proposed for various time-delay processes. The proposed controller’s tuning rules are directly derived using the Padé approximation on the basis of internal model control (IMC) for enhanced stability against disturbances. A two-degrees-of-freedom (2DOF) control scheme is employed to cope with both regulatory and servo problems. Simulation is conducted for a broad range of stable, integrating, and unstable processes with time delays. Each simulated controller is tuned to have the same degree of robustness in terms of maximum sensitivity (Ms). The results demonstrate that the proposed controller provides superior disturbance rejection and set-point tracking when compared with recently published PID-type controllers. Controllers’ robustness is investigated through the simultaneous introduction of perturbation uncertainties to all process parameters to obtain worst-case process-model mismatch. The process-model mismatch simulation results demonstrate that the proposed method consistently affords superior robustness.

  7. Integration of MDSplus in real-time systems

    International Nuclear Information System (INIS)

    Luchetta, A.; Manduchi, G.; Taliercio, C.

    2006-01-01

    RFX-mod makes extensive usage of real-time systems for feedback control and uses MDSplus to interface them to the main Data Acquisition system. For this purpose, the core of MDSplus has been ported to VxWorks, the operating system used for real-time control in RFX. Using this approach, it is possible to integrate real-time systems, but MDSplus is used only for non-real-time tasks, i.e. those tasks which are executed before and after the pulse and whose performance does not affect the system time constraints. More extensive use of MDSplus in real-time systems is foreseen, and a real-time layer for MDSplus is under development, which will provide access to memory-mapped pulse files, shared by the tasks running on the same CPU. Real-time communication will also be integrated in the MDSplus core to provide support for distributed memory-mapped pulse files

  8. Multi-channel time-division integrator in HL-2A

    International Nuclear Information System (INIS)

    Yan Ji

    2008-01-01

    HL-2A is China's first tokamak device with a divertor configuration (a magnetic confinement controlled nuclear fusion device). Knowing the details of the ongoing fusion reaction at different times is of great significance for achieving controlled nuclear fusion. We developed a new type of multi-channel time-division integrator for HL-2A. Its functions include automatic cut-off of negative pulses in the input signals, a selectable integration time-division spacing of 0.2-1 ms, a TTL start trigger signal, automatic timed operation for 20 s, and simultaneous integration of 10 channels. (authors)

  9. Maximum Lateness Scheduling on Two-Person Cooperative Games with Variable Processing Times and Common Due Date

    Directory of Open Access Journals (Sweden)

    Peng Liu

    2017-01-01

    Full Text Available A new maximum lateness scheduling model in which both cooperative games and variable processing times exist simultaneously is considered in this paper. The job variable processing time is described by an increasing or a decreasing function dependent on the position of a job in the sequence. Two persons have to cooperate in order to process a set of jobs. Each of them has a single machine and their processing cost is defined as the minimum value of maximum lateness. All jobs have a common due date. The objective is to maximize the product of their rational positive cooperative profits. A division of those jobs should be negotiated to yield a reasonable cooperative profit allocation scheme acceptable to both. We propose necessary and sufficient conditions for the problems to have a positive integer solution.
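
    A simplified single-machine reading of the positional-processing-time part of the model (ignoring the cooperative-game layer; job data and the exponent are made up) might look like:

      def min_max_lateness(p, a, due):
          """Minimum value of the maximum lateness on one machine when the job in
          position r takes p_j * r**a time units and all jobs share one due date.

          With a common due date, L_max = C_max - due, and C_max = sum_r p_[r]*r**a
          is minimized by pairing the largest p_j with the smallest positional
          factor (rearrangement inequality).
          """
          factors = sorted((r + 1) ** a for r in range(len(p)))
          jobs = sorted(p, reverse=True)
          c_max = sum(pj * f for pj, f in zip(jobs, factors))
          return c_max - due

      # Example: deteriorating jobs (a = 0.3), common due date 20.
      print(min_max_lateness([5, 3, 8, 2], a=0.3, due=20.0))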

  10. Hippocampal “Time Cells”: Time versus Path Integration

    Science.gov (United States)

    Kraus, Benjamin J.; Robinson, Robert J.; White, John A.; Eichenbaum, Howard; Hasselmo, Michael E.

    2014-01-01

    SUMMARY Recent studies have reported the existence of hippocampal “time cells,” neurons that fire at particular moments during periods when behavior and location are relatively constant. However, an alternative explanation of apparent time coding is that hippocampal neurons “path integrate” to encode the distance an animal has traveled. Here, we examined hippocampal neuronal firing patterns as rats ran in place on a treadmill, thus “clamping” behavior and location, while we varied the treadmill speed to distinguish time elapsed from distance traveled. Hippocampal neurons were strongly influenced by time and distance, and less so by minor variations in location. Furthermore, the activity of different neurons reflected integration over time and distance to varying extents, with most neurons strongly influenced by both factors and some significantly influenced by only time or distance. Thus, hippocampal neuronal networks captured both the organization of time and distance in a situation where these dimensions dominated an ongoing experience. PMID:23707613

  11. Time integration of tensor trains

    OpenAIRE

    Lubich, Christian; Oseledets, Ivan; Vandereycken, Bart

    2014-01-01

    A robust and efficient time integrator for dynamical tensor approximation in the tensor train or matrix product state format is presented. The method is based on splitting the projector onto the tangent space of the tensor manifold. The algorithm can be used for updating time-dependent tensors in the given data-sparse tensor train / matrix product state format and for computing an approximate solution to high-dimensional tensor differential equations within this data-sparse format. The formul...

  12. GENERIC Integrators: Structure Preserving Time Integration for Thermodynamic Systems

    Science.gov (United States)

    Öttinger, Hans Christian

    2018-04-01

    Thermodynamically admissible evolution equations for non-equilibrium systems are known to possess a distinct mathematical structure. Within the GENERIC (general equation for the non-equilibrium reversible-irreversible coupling) framework of non-equilibrium thermodynamics, which is based on continuous time evolution, we investigate the possibility of preserving all the structural elements in time-discretized equations. Our approach, which follows Moser's [1] construction of symplectic integrators for Hamiltonian systems, is illustrated for the damped harmonic oscillator. Alternative approaches are sketched.

  13. Numerical time integration for air pollution models

    NARCIS (Netherlands)

    J.G. Verwer (Jan); W. Hundsdorfer (Willem); J.G. Blom (Joke)

    1998-01-01

    Due to the large number of chemical species and the three space dimensions, off-the-shelf stiff ODE integrators are not feasible for the numerical time integration of stiff systems of advection-diffusion-reaction equations ∂c/∂t + ∇·(u c) = ∇·(…

  14. Novel Maximum-based Timing Acquisition for Spread-Spectrum Communications

    Energy Technology Data Exchange (ETDEWEB)

    Sibbetty, Taylor; Moradiz, Hussein; Farhang-Boroujeny, Behrouz

    2016-12-01

    This paper proposes and analyzes a new packet detection and timing acquisition method for spread spectrum systems. The proposed method provides an enhancement over the typical thresholding techniques that have been proposed for direct sequence spread spectrum (DS-SS). The effective implementation of thresholding methods typically require accurate knowledge of the received signal-to-noise ratio (SNR), which is particularly difficult to estimate in spread spectrum systems. Instead, we propose a method which utilizes a consistency metric of the location of maximum samples at the output of a filter matched to the spread spectrum waveform to achieve acquisition, and does not require knowledge of the received SNR. Through theoretical study, we show that the proposed method offers a low probability of missed detection over a large range of SNR with a corresponding probability of false alarm far lower than other methods. Computer simulations that corroborate our theoretical results are also presented. Although our work here has been motivated by our previous study of a filter bank multicarrier spread-spectrum (FB-MC-SS) system, the proposed method is applicable to DS-SS systems as well.

  15. Computing the Maximum Detour of a Plane Graph in Subquadratic Time

    DEFF Research Database (Denmark)

    Wulff-Nilsen, Christian

    2008-01-01

    Let G be a plane graph where each edge is a line segment. We consider the problem of computing the maximum detour of G, defined as the maximum over all pairs of distinct points p and q of G of the ratio between the distance between p and q in G and the distance |pq|. The fastest known algorithm...
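
    For illustration only, a brute-force check of the quantity being computed, restricted to vertex pairs (the paper's algorithm also accounts for points interior to edges and achieves subquadratic time); the small graph below is made up:

      import itertools
      import math
      import networkx as nx

      pos = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1), 4: (2, 0.5)}
      edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 4), (2, 4)]

      G = nx.Graph()
      for u, v in edges:
          G.add_edge(u, v, weight=math.dist(pos[u], pos[v]))   # Euclidean edge lengths

      dist = dict(nx.all_pairs_dijkstra_path_length(G, weight="weight"))
      detour = max(dist[u][v] / math.dist(pos[u], pos[v])
                   for u, v in itertools.combinations(G.nodes, 2))
      print(f"vertex-to-vertex detour: {detour:.3f}")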

  16. Time step rescaling recovers continuous-time dynamical properties for discrete-time Langevin integration of nonequilibrium systems.

    Science.gov (United States)

    Sivak, David A; Chodera, John D; Crooks, Gavin E

    2014-06-19

    When simulating molecular systems using deterministic equations of motion (e.g., Newtonian dynamics), such equations are generally numerically integrated according to a well-developed set of algorithms that share commonly agreed-upon desirable properties. However, for stochastic equations of motion (e.g., Langevin dynamics), there is still broad disagreement over which integration algorithms are most appropriate. While multiple desiderata have been proposed throughout the literature, consensus on which criteria are important is absent, and no published integration scheme satisfies all desiderata simultaneously. Additional nontrivial complications stem from simulating systems driven out of equilibrium using existing stochastic integration schemes in conjunction with recently developed nonequilibrium fluctuation theorems. Here, we examine a family of discrete time integration schemes for Langevin dynamics, assessing how each member satisfies a variety of desiderata that have been enumerated in prior efforts to construct suitable Langevin integrators. We show that the incorporation of a novel time step rescaling in the deterministic updates of position and velocity can correct a number of dynamical defects in these integrators. Finally, we identify a particular splitting (related to the velocity Verlet discretization) that has essentially universally appropriate properties for the simulation of Langevin dynamics for molecular systems in equilibrium, nonequilibrium, and path sampling contexts.
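
    A generic BAOAB-style splitting, which belongs to the velocity-Verlet-related family discussed here, can be sketched as follows; this is an illustration of the splitting idea, not the specific time-step-rescaled integrator of the paper, and all parameters are arbitrary:

      import numpy as np

      def baoab(n_steps, dt, gamma=1.0, kT=1.0, m=1.0, k=1.0, seed=0):
          """Langevin dynamics for a 1-D harmonic well via a B-A-O-A-B splitting."""
          rng = np.random.default_rng(seed)
          force = lambda x: -k * x
          c1 = np.exp(-gamma * dt)                 # O: exact Ornstein-Uhlenbeck damping
          c2 = np.sqrt(kT / m * (1.0 - c1 ** 2))   # O: matching thermal kick
          x, v = 1.0, 0.0
          xs = np.empty(n_steps)
          for i in range(n_steps):
              v += 0.5 * dt * force(x) / m                 # B: half kick
              x += 0.5 * dt * v                            # A: half drift
              v = c1 * v + c2 * rng.standard_normal()      # O: thermostat
              x += 0.5 * dt * v                            # A: half drift
              v += 0.5 * dt * force(x) / m                 # B: half kick
              xs[i] = x
          return xs

      xs = baoab(100_000, dt=0.05)
      print("<x^2> =", xs.var(), "(target kT/k = 1.0)")    # configurational check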

  17. Finite mixture model: A maximum likelihood estimation approach on time series data

    Science.gov (United States)

    Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-09-01

    Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation because of its asymptotic properties. In addition, the estimator is consistent as the sample size increases to infinity, illustrating that maximum likelihood estimation is an unbiased estimator. Moreover, the parameter estimates obtained by maximum likelihood estimation have the smallest variance compared with other statistical methods as the sample size increases. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative relationship between rubber price and exchange rate for all selected countries.
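
    As a sketch of the estimation step (synthetic data in place of the rubber-price and exchange-rate series; sklearn's EM-based GaussianMixture supplies the maximum likelihood fit):

      import numpy as np
      from sklearn.mixture import GaussianMixture

      # Synthetic stand-in drawn from a known two-component mixture.
      rng = np.random.default_rng(1)
      x = np.concatenate([rng.normal(-0.02, 0.01, 700),
                          rng.normal(0.03, 0.02, 300)]).reshape(-1, 1)

      gm = GaussianMixture(n_components=2, random_state=0).fit(x)   # EM = ML estimation
      print("weights:", gm.weights_.round(3))
      print("means:  ", gm.means_.ravel().round(4))
      print("log-likelihood per sample:", round(gm.score(x), 3))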

  18. Conservative fourth-order time integration of non-linear dynamic systems

    DEFF Research Database (Denmark)

    Krenk, Steen

    2015-01-01

    An energy conserving time integration algorithm with fourth-order accuracy is developed for dynamic systems with nonlinear stiffness. The discrete formulation is derived by integrating the differential state-space equations of motion over the integration time increment, and then evaluating...... the resulting time integrals of the inertia and stiffness terms via integration by parts. This process introduces the time derivatives of the state space variables, and these are then substituted from the original state-space differential equations. The resulting discrete form of the state-space equations...... is a direct fourth-order accurate representation of the original differential equations. This fourth-order form is energy conserving for systems with force potential in the form of a quartic polynomial in the displacement components. Energy conservation for a force potential of general form is obtained...

  19. Determination of maximum isolation times in case of internal flooding due to pipe break

    International Nuclear Information System (INIS)

    Varas, M. I.; Orteu, E.; Laserna, J. A.

    2014-01-01

    This paper describes the process followed in preparing the flooding manual of Cofrentes NPP to identify the maximum time available to the plant to isolate a moderate- or high-energy pipe break before it affects safety-related (1E) equipment involved in the safe shutdown of the reactor or in spent fuel pool cooling, and to determine the recommended isolation mode taking into account the location of the break, the location of the 1E equipment, and human factors. (Author)

  20. System Integration for Real-Time Mobile Manipulation

    Directory of Open Access Journals (Sweden)

    Reza Oftadeh

    2014-03-01

    Full Text Available Mobile manipulators are one of the most complicated types of mechatronics systems. The performance of these robots in performing complex manipulation tasks is highly correlated with the synchronization and integration of their low-level components. This paper discusses in detail the mechatronics design of a four wheel steered mobile manipulator. It presents the manipulator's mechanical structure and electrical interfaces, designs low-level software architecture based on embedded PC-based controls, and proposes a systematic solution based on code generation products of MATLAB and Simulink. The remote development environment described here is used to develop real-time controller software and modules for the mobile manipulator under a POSIX-compliant, real-time Linux operating system. Our approach enables developers to reliably design controller modules that meet the hard real-time constraints of the entire low-level system architecture. Moreover, it provides a systematic framework for the development and integration of hardware devices with various communication mediums and protocols, which facilitates the development and integration process of the software controller.

  1. Bayesian maximum entropy integration of ozone observations and model predictions: an application for attainment demonstration in North Carolina.

    Science.gov (United States)

    de Nazelle, Audrey; Arunachalam, Saravanan; Serre, Marc L

    2010-08-01

    States in the USA are required to demonstrate future compliance of criteria air pollutant standards by using both air quality monitors and model outputs. In the case of ozone, the demonstration tests aim at relying heavily on measured values, due to their perceived objectivity and enforceable quality. Weight given to numerical models is diminished by integrating them in the calculations only in a relative sense. For unmonitored locations, the EPA has suggested the use of a spatial interpolation technique to assign current values. We demonstrate that this approach may lead to erroneous assignments of nonattainment and may make it difficult for States to establish future compliance. We propose a method that combines different sources of information to map air pollution, using the Bayesian Maximum Entropy (BME) Framework. The approach gives precedence to measured values and integrates modeled data as a function of model performance. We demonstrate this approach in North Carolina, using the State's ozone monitoring network in combination with outputs from the Multiscale Air Quality Simulation Platform (MAQSIP) modeling system. We show that the BME data integration approach, compared to a spatial interpolation of measured data, improves the accuracy and the precision of ozone estimations across the state.

  2. Space-time transformations in radial path integrals

    International Nuclear Information System (INIS)

    Steiner, F.

    1984-09-01

    Nonlinear space-time transformations in the radial path integral are discussed. A transformation formula is derived, which relates the original path integral to the Green's function of a new quantum system with an effective potential containing an observable quantum correction proportional to (h/2π)². As an example, the formula is applied to spherical Brownian motion. (orig.)

  3. Improving Music Genre Classification by Short-Time Feature Integration

    DEFF Research Database (Denmark)

    Meng, Anders; Ahrendt, Peter; Larsen, Jan

    2005-01-01

    Many different short-time features, using time windows in the size of 10-30 ms, have been proposed for music segmentation, retrieval and genre classification. However, often the available time frame of the music to make the actual decision or comparison (the decision time horizon) is in the range...... of seconds instead of milliseconds. The problem of making new features on the larger time scale from the short-time features (feature integration) has only received little attention. This paper investigates different methods for feature integration and late information fusion for music genre classification...

  4. Multisensory integration: the case of a time window of gesture-speech integration.

    Science.gov (United States)

    Obermeier, Christian; Gunter, Thomas C

    2015-02-01

    This experiment investigates the integration of gesture and speech from a multisensory perspective. In a disambiguation paradigm, participants were presented with short videos of an actress uttering sentences like "She was impressed by the BALL, because the GAME/DANCE...." The ambiguous noun (BALL) was accompanied by an iconic gesture fragment containing information to disambiguate the noun toward its dominant or subordinate meaning. We used four different temporal alignments between noun and gesture fragment: the identification point (IP) of the noun was either prior to (+120 msec), synchronous with (0 msec), or lagging behind the end of the gesture fragment (-200 and -600 msec). ERPs triggered to the IP of the noun showed significant differences for the integration of dominant and subordinate gesture fragments in the -200, 0, and +120 msec conditions. The outcome of this integration was revealed at the target words. These data suggest a time window for direct semantic gesture-speech integration ranging from at least -200 up to +120 msec. Although the -600 msec condition did not show any signs of direct integration at the homonym, significant disambiguation was found at the target word. An explorative analysis suggested that gesture information was directly integrated at the verb, indicating that there are multiple positions in a sentence where direct gesture-speech integration takes place. Ultimately, this would implicate that in natural communication, where a gesture lasts for some time, several aspects of that gesture will have their specific and possibly distinct impact on different positions in an utterance.

  5. Integration time for the perception of depth from motion parallax.

    Science.gov (United States)

    Nawrot, Mark; Stroyan, Keith

    2012-04-15

    The perception of depth from relative motion is believed to be a slow process that "builds-up" over a period of observation. However, in the case of motion parallax, the potential accuracy of the depth estimate suffers as the observer translates during the viewing period. Our recent quantitative model for the perception of depth from motion parallax proposes that relative object depth (d) can be determined from retinal image motion (dθ/dt), pursuit eye movement (dα/dt), and fixation distance (f) by the formula: d/f≈dθ/dα. Given the model's dynamics, it is important to know the integration time required by the visual system to recover dα and dθ, and then estimate d. Knowing the minimum integration time reveals the incumbent error in this process. A depth-phase discrimination task was used to determine the time necessary to perceive depth-sign from motion parallax. Observers remained stationary and viewed a briefly translating random-dot motion parallax stimulus. Stimulus duration varied between trials. Fixation on the translating stimulus was monitored and enforced with an eye-tracker. The study found that relative depth discrimination can be performed with presentations as brief as 16.6 ms, with only two stimulus frames providing both retinal image motion and the stimulus window motion for pursuit (mean range=16.6-33.2 ms). This was found for conditions in which, prior to stimulus presentation, the eye was engaged in ongoing pursuit or the eye was stationary. A large high-contrast masking stimulus disrupted depth-discrimination for stimulus presentations less than 70-75 ms in both pursuit and stationary conditions. This interval might be linked to ocular-following response eye-movement latencies. We conclude that neural mechanisms serving depth from motion parallax generate a depth estimate much more quickly than previously believed. We propose that additional sluggishness might be due to the visual system's attempt to determine the maximum dθ/dα ratio

  6. FlowMax: A Computational Tool for Maximum Likelihood Deconvolution of CFSE Time Courses.

    Directory of Open Access Journals (Sweden)

    Maxim Nikolaievich Shokhirev

    Full Text Available The immune response is a concerted dynamic multi-cellular process. Upon infection, the dynamics of lymphocyte populations are an aggregate of molecular processes that determine the activation, division, and longevity of individual cells. The timing of these single-cell processes is remarkably widely distributed with some cells undergoing their third division while others undergo their first. High cell-to-cell variability and technical noise pose challenges for interpreting popular dye-dilution experiments objectively. It remains an unresolved challenge to avoid under- or over-interpretation of such data when phenotyping gene-targeted mouse models or patient samples. Here we develop and characterize a computational methodology to parameterize a cell population model in the context of noisy dye-dilution data. To enable objective interpretation of model fits, our method estimates fit sensitivity and redundancy by stochastically sampling the solution landscape, calculating parameter sensitivities, and clustering to determine the maximum-likelihood solution ranges. Our methodology accounts for both technical and biological variability by using a cell fluorescence model as an adaptor during population model fitting, resulting in improved fit accuracy without the need for ad hoc objective functions. We have incorporated our methodology into an integrated phenotyping tool, FlowMax, and used it to analyze B cells from two NFκB knockout mice with distinct phenotypes; we not only confirm previously published findings at a fraction of the expended effort and cost, but reveal a novel phenotype of nfkb1/p105/50 in limiting the proliferative capacity of B cells following B-cell receptor stimulation. In addition to complementing experimental work, FlowMax is suitable for high throughput analysis of dye dilution studies within clinical and pharmacological screens with objective and quantitative conclusions.

  7. Age-Related Differences of Maximum Phonation Time in Patients after Cardiac Surgery

    Directory of Open Access Journals (Sweden)

    Kazuhiro P. Izawa

    2017-12-01

    Full Text Available Background and aims: Maximum phonation time (MPT), which is related to respiratory function, is widely used to evaluate maximum vocal capabilities, because its use is non-invasive, quick, and inexpensive. We aimed to examine differences in MPT by age, following recovery phase II cardiac rehabilitation (CR). Methods: This longitudinal observational study assessed 50 consecutive cardiac patients who were divided into the middle-aged group (<65 years, n = 29) and older-aged group (≥65 years, n = 21). MPTs were measured at 1 and 3 months after cardiac surgery, and were compared. Results: The duration of MPT increased more significantly from month 1 to month 3 in the middle-aged group (19.2 ± 7.8 to 27.1 ± 11.6 s, p < 0.001) than in the older-aged group (12.6 ± 3.5 to 17.9 ± 6.0 s, p < 0.001). However, no statistically significant difference occurred in the % change of MPT from 1 month to 3 months after cardiac surgery between the middle-aged group and older-aged group, respectively (41.1% vs. 42.1%). In addition, there were no significant interactions of MPT in the two groups for 1 versus 3 months (F = 1.65, p = 0.20). Conclusion: Following phase II, CR improved MPT for all cardiac surgery patients.

  8. Age-Related Differences of Maximum Phonation Time in Patients after Cardiac Surgery.

    Science.gov (United States)

    Izawa, Kazuhiro P; Kasahara, Yusuke; Hiraki, Koji; Hirano, Yasuyuki; Watanabe, Satoshi

    2017-12-21

    Background and aims: Maximum phonation time (MPT), which is related to respiratory function, is widely used to evaluate maximum vocal capabilities, because its use is non-invasive, quick, and inexpensive. We aimed to examine differences in MPT by age, following recovery phase II cardiac rehabilitation (CR). Methods: This longitudinal observational study assessed 50 consecutive cardiac patients who were divided into the middle-aged group (<65 years, n = 29) and older-aged group (≥65 years, n = 21). MPTs were measured at 1 and 3 months after cardiac surgery, and were compared. Results: The duration of MPT increased more significantly from month 1 to month 3 in the middle-aged group (19.2 ± 7.8 to 27.1 ± 11.6 s, p < 0.001) than in the older-aged group (12.6 ± 3.5 to 17.9 ± 6.0 s, p < 0.001). However, no statistically significant difference occurred in the % change of MPT from 1 month to 3 months after cardiac surgery between the middle-aged group and older-aged group, respectively (41.1% vs. 42.1%). In addition, there were no significant interactions of MPT in the two groups for 1 versus 3 months (F = 1.65, p = 0.20). Conclusion: Following phase II, CR improved MPT for all cardiac surgery patients.

  9. ELT-scale Adaptive Optics real-time control with the Intel Xeon Phi Many Integrated Core Architecture

    Science.gov (United States)

    Jenkins, David R.; Basden, Alastair; Myers, Richard M.

    2018-05-01

    We propose a solution to the increased computational demands of Extremely Large Telescope (ELT) scale adaptive optics (AO) real-time control with the Intel Xeon Phi Knights Landing (KNL) Many Integrated Core (MIC) Architecture. The computational demands of an AO real-time controller (RTC) scale with the fourth power of telescope diameter and so the next generation ELTs require orders of magnitude more processing power for the RTC pipeline than existing systems. The Xeon Phi contains a large number (≥64) of low power x86 CPU cores and high bandwidth memory integrated into a single socketed server CPU package. The increased parallelism and memory bandwidth are crucial to providing the performance for reconstructing wavefronts with the required precision for ELT scale AO. Here, we demonstrate that the Xeon Phi KNL is capable of performing ELT scale single conjugate AO real-time control computation at over 1.0kHz with less than 20μs RMS jitter. We have also shown that with a wavefront sensor camera attached the KNL can process the real-time control loop at up to 966Hz, the maximum frame-rate of the camera, with jitter remaining below 20μs RMS. Future studies will involve exploring the use of a cluster of Xeon Phis for the real-time control of the MCAO and MOAO regimes of AO. We find that the Xeon Phi is highly suitable for ELT AO real time control.

  10. A model of interval timing by neural integration.

    Science.gov (United States)

    Simen, Patrick; Balci, Fuat; de Souza, Laura; Cohen, Jonathan D; Holmes, Philip

    2011-06-22

    We show that simple assumptions about neural processing lead to a model of interval timing as a temporal integration process, in which a noisy firing-rate representation of time rises linearly on average toward a response threshold over the course of an interval. Our assumptions include: that neural spike trains are approximately independent Poisson processes, that correlations among them can be largely cancelled by balancing excitation and inhibition, that neural populations can act as integrators, and that the objective of timed behavior is maximal accuracy and minimal variance. The model accounts for a variety of physiological and behavioral findings in rodents, monkeys, and humans, including ramping firing rates between the onset of reward-predicting cues and the receipt of delayed rewards, and universally scale-invariant response time distributions in interval timing tasks. It furthermore makes specific, well-supported predictions about the skewness of these distributions, a feature of timing data that is usually ignored. The model also incorporates a rapid (potentially one-shot) duration-learning procedure. Human behavioral data support the learning rule's predictions regarding learning speed in sequences of timed responses. These results suggest that simple, integration-based models should play as prominent a role in interval timing theory as they do in theories of perceptual decision making, and that a common neural mechanism may underlie both types of behavior.
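
    A minimal simulation of the core mechanism, a noisy firing-rate variable ramping linearly toward a threshold, can be sketched as follows (parameter values are ours, not the paper's):

      import numpy as np

      def timed_responses(target=2.0, threshold=1.0, sigma=0.15, dt=0.002,
                          n_trials=1000, seed=0):
          """First-passage times of a noisy ramp whose drift is set so that the
          mean crossing time matches the target interval."""
          rng = np.random.default_rng(seed)
          drift = threshold / target          # "learned" drift for this interval
          times = np.empty(n_trials)
          for k in range(n_trials):
              x, t = 0.0, 0.0
              while x < threshold:
                  x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
                  t += dt
              times[k] = t
          return times

      rt = timed_responses()
      print(f"mean = {rt.mean():.3f} s, CV = {rt.std() / rt.mean():.3f}")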

  11. INTEGRAL EDUCATION, TIME AND SPACE: PROBLEMATIZING CONCEPTS

    Directory of Open Access Journals (Sweden)

    Ana Elisa Spaolonzi Queiroz Assis

    2018-03-01

    Full Text Available Integral education, despite being on the public policy agenda for some decades, still carries disparities related to its concept. In this sense, this article aims to problematize not only the concepts of integral education but also the categories of time and space contained in the magazines Em Aberto, organized and published by the National Institute of Educational Studies Anísio Teixeira (INEP), numbers 80 (2009) and 88 (2012), respectively entitled "Educação Integral e tempo integral" and "Políticas de educação integral em jornada ampliada". The methodology is based on Bardin's content analysis, respecting the steps of pre-analysis (research corpus formed by the texts in the journals); material exploration (reading the texts, encoding data, and choosing the registration units for categorization); and processing and interpretation of results, based on Saviani's Historical-Critical Pedagogy. The work reveals convergent and divergent conceptual multiplicity, provoking a discussion about a critical conception of integral education. Keywords: Integral Education. Historical-Critical Pedagogy. Content Analysis.

  12. Path integral solution for some time-dependent potential

    International Nuclear Information System (INIS)

    Storchak, S.N.

    1989-12-01

    The quantum-mechanical problem with a time-dependent potential is solved by the path integral method. The solution is obtained by the application of the previously derived general formula for rheonomic homogeneous point transformation and reparametrization in the path integral. (author). 4 refs

  13. Energy drift in reversible time integration

    International Nuclear Information System (INIS)

    McLachlan, R I; Perlmutter, M

    2004-01-01

    Energy drift is commonly observed in reversible integrations of systems of molecular dynamics. We show that this drift can be modelled as a diffusion and that the typical energy error after time T is O(√T). (letter to the editor)

  14. Time-Lapse Measurement of Wellbore Integrity

    Science.gov (United States)

    Duguid, A.

    2017-12-01

    Well integrity is becoming more important as wells are used longer or repurposed. For CO2, shale gas, and other projects it has become apparent that wells represent the most likely unintended migration pathway for fluids out of the reservoir. Comprehensive logging programs have been employed to determine the condition of legacy wells in North America. These studies provide examples of assessment technologies. Logging programs have included pulsed neutron logging, ultrasonic well mapping, and cement bond logging. While these studies provide examples of what can be measured, they have only conducted a single round of logging and cannot show if the well has changed over time. Recent experience with time-lapse logging of three monitoring wells at a US Department of Energy sponsored CO2 project has shown the full value of similar tools. Time-lapse logging has shown that well integrity changes over time can be identified. It has also shown that the inclusion of and location of monitoring technologies in the well and the choice of construction materials must be carefully considered. Two of the wells were approximately eight years old at the time of study; they were constructed with steel and fiberglass casing sections and had lines on the outside of the casing running to the surface. The third well was 68 years old when it was studied and was originally constructed as a production well. Repeat logs were collected six or eight years after initial logging. Time-lapse logging showed the evolution of the wells. The results identified locations where cement degraded over time and locations that showed little change. The ultrasonic well maps show clearly that the lines used to connect the monitoring technology to the surface are visible and have a local effect on cement isolation. Testing and sampling was conducted along with logging. It provided insight into changes identified in the time-lapse log results. Point permeability testing was used to provide an in-situ point

  15. Estimation of the Maximum Theoretical Productivity of Fed-Batch Bioreactors

    Energy Technology Data Exchange (ETDEWEB)

    Bomble, Yannick J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); St. John, Peter C [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Crowley, Michael F [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-10-18

    A key step towards the development of an integrated biorefinery is the screening of economically viable processes, which depends sharply on the yields and productivities that can be achieved by an engineered microorganism. In this study, we extend an earlier method which used dynamic optimization to find the maximum theoretical productivity of batch cultures to explicitly include fed-batch bioreactors. In addition to optimizing the intracellular distribution of metabolites between cell growth and product formation, we calculate the optimal control trajectory of feed rate versus time. We further analyze how sensitive the productivity is to substrate uptake and growth parameters.

  16. Linearized semiclassical initial value time correlation functions with maximum entropy analytic continuation.

    Science.gov (United States)

    Liu, Jian; Miller, William H

    2008-09-28

    The maximum entropy analytic continuation (MEAC) method is used to extend the range of accuracy of the linearized semiclassical initial value representation (LSC-IVR)/classical Wigner approximation for real time correlation functions. LSC-IVR provides a very effective "prior" for the MEAC procedure since it is very good for short times, exact for all time and temperature for harmonic potentials (even for correlation functions of nonlinear operators), and becomes exact in the classical high temperature limit. This combined MEAC+LSC/IVR approach is applied here to two highly nonlinear dynamical systems, a pure quartic potential in one dimension and liquid para-hydrogen at two thermal state points (25 and 14 K under nearly zero external pressure). The former example shows the MEAC procedure to be a very significant enhancement of the LSC-IVR for correlation functions of both linear and nonlinear operators, and especially at low temperature where semiclassical approximations are least accurate. For liquid para-hydrogen, the LSC-IVR is seen already to be excellent at T=25 K, but the MEAC procedure produces a significant correction at the lower temperature (T=14 K). Comparisons are also made as to how the MEAC procedure is able to provide corrections for other trajectory-based dynamical approximations when used as priors.

  17. Optimal protocol for maximum work extraction in a feedback process with a time-varying potential

    Science.gov (United States)

    Kwon, Chulan

    2017-12-01

    The nonequilibrium nature of information thermodynamics is characterized by the inequality or non-negativity of the total entropy change of the system, memory, and reservoir. Mutual information change plays a crucial role in the inequality, in particular if work is extracted and the paradox of Maxwell's demon is raised. We consider the Brownian information engine where the protocol set of the harmonic potential is initially chosen by the measurement and varies in time. We confirm the inequality of the total entropy change by calculating, in detail, the entropic terms including the mutual information change. We rigorously find the optimal values of the time-dependent protocol for maximum extraction of work both for the finite-time and the quasi-static process.

  18. Maximum entropy estimation via Gauss-LP quadratures

    NARCIS (Netherlands)

    Thély, Maxime; Sutter, Tobias; Mohajerin Esfahani, P.; Lygeros, John; Dochain, Denis; Henrion, Didier; Peaucelle, Dimitri

    2017-01-01

    We present an approximation method to a class of parametric integration problems that naturally appear when solving the dual of the maximum entropy estimation problem. Our method builds up on a recent generalization of Gauss quadratures via an infinite-dimensional linear program, and utilizes a

  19. Time resolved spectroscopy of GRB 030501 using INTEGRAL

    DEFF Research Database (Denmark)

    Beckmann, V.; Borkowski, J.; Courvoisier, T.J.L.

    2003-01-01

    The gamma-ray instruments on-board INTEGRAL offer an unique opportunity to perform time resolved analysis on GRBs. The imager IBIS allows accurate positioning of GRBs and broad band spectral analysis, while SPI provides high resolution spectroscopy. GRB 030501 was discovered by the INTEGRAL Burst...... the Ulysses and RHESSI experiments....

  20. Separation of Stochastic and Deterministic Information from Seismological Time Series with Nonlinear Dynamics and Maximum Entropy Methods

    International Nuclear Information System (INIS)

    Gutierrez, Rafael M.; Useche, Gina M.; Buitrago, Elias

    2007-01-01

    We present a procedure developed to detect stochastic and deterministic information contained in empirical time series, useful to characterize and make models of different aspects of complex phenomena represented by such data. This procedure is applied to a seismological time series to obtain new information to study and understand geological phenomena. We use concepts and methods from nonlinear dynamics and maximum entropy. The mentioned method allows an optimal analysis of the available information.

  1. Global Format for Conservative Time Integration in Nonlinear Dynamics

    DEFF Research Database (Denmark)

    Krenk, Steen

    2014-01-01

    The widely used classic collocation-based time integration procedures like Newmark, Generalized-alpha etc. generally work well within a framework of linear problems, but typically may encounter problems, when used in connection with essentially nonlinear structures. These problems are overcome....... In the present paper a conservative time integration algorithm is developed in a format using only the internal forces and the associated tangent stiffness at the specific time integration points. Thus, the procedure is computationally very similar to a collocation method, consisting of a series of nonlinear...... equivalent static load steps, easily implemented in existing computer codes. The paper considers two aspects: representation of nonlinear internal forces in a form that implies energy conservation, and the option of an algorithmic damping with the purpose of extracting energy from undesirable high...

  2. Explicit solution of Calderon preconditioned time domain integral equations

    KAUST Repository

    Ulku, Huseyin Arda

    2013-07-01

    An explicit marching on-in-time (MOT) scheme for solving Calderon-preconditioned time domain integral equations is proposed. The scheme uses Rao-Wilton-Glisson and Buffa-Christiansen functions to discretize the domain and range of the integral operators and a PE(CE)^m-type linear multistep to march on in time. Unlike its implicit counterpart, the proposed explicit solver requires the solution of an MOT system with a Gram matrix that is sparse and well-conditioned independent of the time step size. Numerical results demonstrate that the explicit solver maintains its accuracy and stability even when the time step size is chosen as large as that typically used by an implicit solver. © 2013 IEEE.

  3. A CMOS integrated timing discriminator circuit for fast scintillation counters

    International Nuclear Information System (INIS)

    Jochmann, M.W.

    1998-01-01

    Based on a zero-crossing discriminator using a CR differentiation network for pulse shaping, a new CMOS integrated timing discriminator circuit is proposed for fast (t_r ≥ 2 ns) scintillation counters at the cooler synchrotron COSY-Juelich. By eliminating the input signal's amplitude information by means of an analog continuous-time divider, a normalized pulse shape at the zero-crossing point is gained over a wide dynamic input amplitude range. In combination with an arming comparator and a monostable multivibrator this yields a highly precise timing discriminator circuit that is expected to be useful in different time measurement applications. First measurement results of a CMOS integrated logarithmic amplifier, which is part of the analog continuous-time divider, agree well with the corresponding simulations. Moreover, SPICE simulations of the integrated discriminator circuit promise a time walk well below 200 ps (FWHM) over a 40 dB input amplitude dynamic range

  4. An integrated technique for developing real-time systems

    NARCIS (Netherlands)

    Hooman, J.J.M.; Vain, J.

    1995-01-01

    The integration of conceptual modeling techniques, formal specification, and compositional verification is considered for real time systems within the knowledge engineering context. We define constructive transformations from a conceptual meta model to a real time specification language and give

  5. Influence of Dynamic Neuromuscular Stabilization Approach on Maximum Kayak Paddling Force

    Directory of Open Access Journals (Sweden)

    Davidek Pavel

    2018-03-01

    Full Text Available The purpose of this study was to examine the effect of Dynamic Neuromuscular Stabilization (DNS) exercise on maximum paddling force (PF) and self-reported pain perception in the shoulder girdle area in flatwater kayakers. Twenty male flatwater kayakers from a local club (age = 21.9 ± 2.4 years, body height = 185.1 ± 7.9 cm, body mass = 83.9 ± 9.1 kg) were randomly assigned to the intervention or control groups. During the 6-week study, subjects from both groups performed standard off-season training. Additionally, the intervention group engaged in a DNS-based core stabilization exercise program (quadruped exercise, side sitting exercise, sitting exercise and squat exercise) after each standard training session. Using a kayak ergometer, the maximum PF stroke was measured four times during the six weeks. All subjects completed the Disabilities of the Arm, Shoulder and Hand (DASH) questionnaire before and after the 6-week interval to evaluate subjective pain perception in the shoulder girdle area. Initially, no significant differences in maximum PF and the DASH questionnaire were identified between the two groups. Repeated measures analysis of variance indicated that the experimental group improved significantly compared to the control group on maximum PF (p = .004; Cohen’s d = .85), but not on the DASH questionnaire score (p = .731) during the study. Integration of DNS with traditional flatwater kayak training may significantly increase maximum PF, but may not affect pain perception to the same extent.

  6. Path integration on space times with symmetry

    International Nuclear Information System (INIS)

    Low, S.G.

    1985-01-01

    Path integration on space times with symmetry is investigated using a definition of path integration of Gaussian integrators. Gaussian integrators, systematically developed using the theory of projective distributions, may be defined in terms of a Jacobi operator Green function. This definition of the path integral yields a semiclassical expansion of the propagator which is valid on caustics. The semiclassical approximation to the free particle propagator on symmetric and reductive homogeneous spaces is computed in terms of the complete solution of the Jacobi equation. The results are used to test the validity of using the Schwinger-DeWitt transform to compute an approximation to the coincidence limit of a field theory Green function from a WKB propagator. The method is found not to be valid except for certain special cases. These cases include manifolds constructed from the direct product of flat space and group manifolds, on which the free particle WKB approximation is exact, and the two-sphere. The multiple geodesic contribution to ⟨φ²⟩ on Schwarzschild in the neighborhood of ρ = 3M is computed using the transform

  7. Contribution of National near Real Time MODIS Forest Maximum Percentage NDVI Change Products to the U.S. ForWarn System

    Science.gov (United States)

    Spruce, Joseph P.; Hargrove, William; Gasser, Gerald; Smoot, James; Kuper, Philip D.

    2012-01-01

    This presentation reviews the development, integration, and testing of Near Real Time (NRT) MODIS forest % maximum NDVI change products resident to the USDA Forest Service (USFS) ForWarn System. ForWarn is an Early Warning System (EWS) tool for detection and tracking of regionally evident forest change, which includes the U.S. Forest Change Assessment Viewer (FCAV) (a publically available on-line geospatial data viewer for visualizing and assessing the context of this apparent forest change). NASA Stennis Space Center (SSC) is working collaboratively with the USFS, ORNL, and USGS to contribute MODIS forest change products to ForWarn. These change products compare current NDVI derived from expedited eMODIS data, to historical NDVI products derived from MODIS MOD13 data. A new suite of forest change products are computed every 8 days and posted to the ForWarn system; this includes three different forest change products computed using three different historical baselines: 1) previous year; 2) previous three years; and 3) all previous years in the MODIS record going back to 2000. The change product inputs are maximum value NDVI that are composited across a 24 day interval and refreshed every 8 days so that resulting images for the conterminous U.S. are predominantly cloud-free yet still retain temporally relevant fresh information on changes in forest canopy greenness. These forest change products are computed at the native nominal resolution of the input reflectance bands at 231.66 meters, which equates to approx 5.4 hectares or 13.3 acres per pixel. The Time Series Product Tool, a MATLAB-based software package developed at NASA SSC, is used to temporally process, fuse, reduce noise, interpolate data voids, and re-aggregate the historical NDVI into 24 day composites, and then custom MATLAB scripts are used to temporally process the eMODIS NDVIs so that they are in synch with the historical NDVI products. Prior to posting, an in-house snow mask classification product
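
    As a rough illustration of the kind of change product described above (not the ForWarn processing chain itself), a percent change in maximum NDVI can be computed per pixel from a current composite and a historical-baseline maximum; the array values and the masking convention are assumptions.

```python
import numpy as np

# Assumed 24-day maximum-value NDVI composites (NaN marks masked pixels).
ndvi_current = np.array([[0.81, 0.78],
                         [0.42, np.nan]])
ndvi_baseline_max = np.array([[0.85, 0.80],
                              [0.75, 0.70]])     # e.g. maximum of all prior years

# Percent departure from the baseline maximum greenness; strongly negative
# values flag an apparent loss of canopy greenness.
with np.errstate(invalid="ignore", divide="ignore"):
    pct_change = 100.0 * ndvi_current / ndvi_baseline_max - 100.0

print(np.round(pct_change, 1))
```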

  8. A New time Integration Scheme for Cahn-hilliard Equations

    KAUST Repository

    Schaefer, R.

    2015-06-01

    In this paper we present a new integration scheme that can be applied to solving difficult non-stationary non-linear problems. It is obtained by a successive linearization of the Crank-Nicolson scheme, which is unconditionally stable, but requires solving a non-linear equation at each time step. We applied our linearized scheme for the time integration of the challenging Cahn-Hilliard equation, modeling the phase separation in fluids. At each time step the resulting variational equation is solved using a higher-order isogeometric finite element method, with B-spline basis functions. The method was implemented in the PETIGA framework interfaced via the PETSc toolkit. The GMRES iterative solver was utilized for the solution of a resulting linear system at every time step. We also apply a simple adaptivity rule, which increases the time step size when the number of GMRES iterations is lower than 30. We compared our method with a non-linear, two-stage predictor-multicorrector scheme, utilizing a sophisticated step length adaptivity. We controlled the stability of our simulations by monitoring the Ginzburg-Landau free energy functional. The proposed integration scheme outperforms the two-stage competitor in terms of the execution time, at the same time having a similar evolution of the free energy functional.

  9. A New time Integration Scheme for Cahn-hilliard Equations

    KAUST Repository

    Schaefer, R.; Smołka, M.; Dalcin, L.; Paszyński, M.

    2015-01-01

    In this paper we present a new integration scheme that can be applied to solving difficult non-stationary non-linear problems. It is obtained by a successive linearization of the Crank-Nicolson scheme, which is unconditionally stable, but requires solving a non-linear equation at each time step. We applied our linearized scheme for the time integration of the challenging Cahn-Hilliard equation, modeling the phase separation in fluids. At each time step the resulting variational equation is solved using a higher-order isogeometric finite element method, with B-spline basis functions. The method was implemented in the PETIGA framework interfaced via the PETSc toolkit. The GMRES iterative solver was utilized for the solution of a resulting linear system at every time step. We also apply a simple adaptivity rule, which increases the time step size when the number of GMRES iterations is lower than 30. We compared our method with a non-linear, two-stage predictor-multicorrector scheme, utilizing a sophisticated step length adaptivity. We controlled the stability of our simulations by monitoring the Ginzburg-Landau free energy functional. The proposed integration scheme outperforms the two-stage competitor in terms of the execution time, at the same time having a similar evolution of the free energy functional.

  10. Evaluation of time integration methods for transient response analysis of nonlinear structures

    International Nuclear Information System (INIS)

    Park, K.C.

    1975-01-01

    Recent developments in the evaluation of direct time integration methods for the transient response analysis of nonlinear structures are presented. These developments, which are based on local stability considerations of an integrator, show that the interaction between temporal step size and nonlinearities of structural systems has a pronounced effect on both accuracy and stability of a given time integration method. The resulting evaluation technique is applied to a model nonlinear problem, in order to: 1) demonstrate that it eliminates the present costly process of evaluating time integrator for nonlinear structural systems via extensive numerical experiments; 2) identify the desirable characteristics of time integration methods for nonlinear structural problems; 3) develop improved stiffly-stable methods for application to nonlinear structures. Extension of the methodology for examination of the interaction between a time integrator and the approximate treatment of nonlinearities (such as due to pseudo-force or incremental solution procedures) is also discussed. (Auth.)

  11. Effects of integration time on in-water radiometric profiles.

    Science.gov (United States)

    D'Alimonte, Davide; Zibordi, Giuseppe; Kajiyama, Tamito

    2018-03-05

    This work investigates the effects of integration time on in-water downward irradiance E_d, upward irradiance E_u and upwelling radiance L_u profile data acquired with free-fall hyperspectral systems. Analyzed quantities are the subsurface value and the diffuse attenuation coefficient derived by applying linear and non-linear regression schemes. Case studies include oligotrophic waters (Case-1), as well as waters dominated by Colored Dissolved Organic Matter (CDOM) and Non-Algal Particles (NAP). Assuming a 24-bit digitization, measurements resulting from the accumulation of photons over integration times varying between 8 and 2048 ms are evaluated at depths corresponding to: 1) the beginning of each integration interval (Fst); 2) the end of each integration interval (Lst); 3) the averages of Fst and Lst values (Avg); and finally 4) the values weighted accounting for the diffuse attenuation coefficient of water (Wgt). Statistical figures show that the effects of integration time can bias results well above 5% as a function of the depth definition. Results indicate the validity of the Wgt depth definition and the fair applicability of the Avg one. Instead, both the Fst and Lst depths should not be adopted since they may introduce pronounced biases in E_u and L_u regression products for highly absorbing waters. Finally, the study reconfirms the relevance of combining multiple radiometric casts into a single profile to increase precision of regression products.

  12. Fast Maximum-Likelihood Decoder for Quasi-Orthogonal Space-Time Block Code

    Directory of Open Access Journals (Sweden)

    Adel Ahmadi

    2015-01-01

    Full Text Available Motivated by the decompositions of sphere and QR-based methods, in this paper we present an extremely fast maximum-likelihood (ML detection approach for quasi-orthogonal space-time block code (QOSTBC. The proposed algorithm with a relatively simple design exploits structure of quadrature amplitude modulation (QAM constellations to achieve its goal and can be extended to any arbitrary constellation. Our decoder utilizes a new decomposition technique for ML metric which divides the metric into independent positive parts and a positive interference part. Search spaces of symbols are substantially reduced by employing the independent parts and statistics of noise. Symbols within the search spaces are successively evaluated until the metric is minimized. Simulation results confirm that the proposed decoder’s performance is superior to many of the recently published state-of-the-art solutions in terms of complexity level. More specifically, it was possible to verify that application of the new algorithms with 1024-QAM would decrease the computational complexity compared to state-of-the-art solution with 16-QAM.

  13. Maximum Principles for Discrete and Semidiscrete Reaction-Diffusion Equation

    Directory of Open Access Journals (Sweden)

    Petr Stehlík

    2015-01-01

    Full Text Available We study reaction-diffusion equations with a general reaction function f on one-dimensional lattices with continuous or discrete time, u_x′ (or Δ_t u_x) = k(u_{x−1} − 2u_x + u_{x+1}) + f(u_x), x ∈ Z. We prove weak and strong maximum and minimum principles for corresponding initial-boundary value problems. Whereas the maximum principles in the semidiscrete case (continuous time) exhibit similar features to those of the fully continuous reaction-diffusion model, in the discrete case the weak maximum principle holds for a smaller class of functions and the strong maximum principle is valid in a weaker sense. We describe in detail how the validity of maximum principles depends on the nonlinearity and the time step. We illustrate our results on the Nagumo equation with the bistable nonlinearity.

  14. Extending molecular simulation time scales: Parallel in time integrations for high-level quantum chemistry and complex force representations

    International Nuclear Information System (INIS)

    Bylaska, Eric J.; Weare, Jonathan Q.; Weare, John H.

    2013-01-01

    Parallel in time simulation algorithms are presented and applied to conventional molecular dynamics (MD) and ab initio molecular dynamics (AIMD) models of realistic complexity. Assuming that a forward time integrator, f (e.g., Verlet algorithm), is available to propagate the system from time t_i (trajectory positions and velocities x_i = (r_i, v_i)) to time t_{i+1} (x_{i+1}) by x_{i+1} = f_i(x_i), the dynamics problem spanning an interval from t_0…t_M can be transformed into a root finding problem, F(X) = [x_i − f(x_{i−1})]_{i=1,M} = 0, for the trajectory variables. The root finding problem is solved using a variety of root finding techniques, including quasi-Newton and preconditioned quasi-Newton schemes that are all unconditionally convergent. The algorithms are parallelized by assigning a processor to each time-step entry in the columns of F(X). The relation of this approach to other recently proposed parallel in time methods is discussed, and the effectiveness of various approaches to solving the root finding problem is tested. We demonstrate that more efficient dynamical models based on simplified interactions or coarsening time-steps provide preconditioners for the root finding problem. However, for MD and AIMD simulations, such preconditioners are not required to obtain reasonable convergence and their cost must be considered in the performance of the algorithm. The parallel in time algorithms developed are tested by applying them to MD and AIMD simulations of size and complexity similar to those encountered in present day applications. These include a 1000 Si atom MD simulation using Stillinger-Weber potentials, and a HCl + 4H_2O AIMD simulation at the MP2 level. The maximum speedup ((serial execution time)/(parallel execution time)) obtained by parallelizing the Stillinger-Weber MD simulation was nearly 3.0. For the AIMD MP2 simulations, the algorithms achieved speedups of up to 14.3. The parallel in time algorithms can be implemented in a
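
    A toy sketch of the reformulation described in the record, with a 1-D harmonic oscillator and a velocity-Verlet step standing in for the realistic MD/AIMD propagators; the solver choice (a SciPy Newton-Krylov root finder) and all sizes are illustrative assumptions, not the authors' quasi-Newton schemes.

```python
import numpy as np
from scipy.optimize import root

dt, M, k = 0.1, 40, 1.0            # step size, number of steps, spring constant
x0 = np.array([1.0, 0.0])          # initial (position, velocity)

def f(x):
    """One velocity-Verlet step for dr/dt = v, dv/dt = -k*r."""
    r, v = x
    a = -k * r
    r_new = r + v * dt + 0.5 * a * dt**2
    v_new = v + 0.5 * (a - k * r_new) * dt
    return np.array([r_new, v_new])

def residual(X_flat):
    """F(X): mismatch between each state and the propagated previous state."""
    X = X_flat.reshape(M, 2)
    prev = np.vstack([x0, X[:-1]])
    return (X - np.array([f(p) for p in prev])).ravel()

# Crude initial guess for the whole trajectory (initial state repeated); each
# residual entry could be evaluated on its own processor in a parallel setting.
X_guess = np.tile(x0, (M, 1)).ravel()
sol = root(residual, X_guess, method="krylov", tol=1e-10)
print("converged:", sol.success, " max |F(X)|:", np.abs(residual(sol.x)).max())
```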

  15. Receiver function estimated by maximum entropy deconvolution

    Institute of Scientific and Technical Information of China (English)

    吴庆举; 田小波; 张乃铃; 李卫平; 曾融生

    2003-01-01

    Maximum entropy deconvolution is presented to estimate the receiver function, with the maximum entropy as the rule to determine auto-correlation and cross-correlation functions. The Toeplitz equation and Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps the maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method to measure the receiver function in the time domain.

  16. Maximum Likelihood Time-of-Arrival Estimation of Optical Pulses via Photon-Counting Photodetectors

    Science.gov (United States)

    Erkmen, Baris I.; Moision, Bruce E.

    2010-01-01

    Many optical imaging, ranging, and communications systems rely on the estimation of the arrival time of an optical pulse. Recently, such systems have been increasingly employing photon-counting photodetector technology, which changes the statistics of the observed photocurrent. This requires time-of-arrival estimators to be developed and their performances characterized. The statistics of the output of an ideal photodetector, which are well modeled as a Poisson point process, were considered. An analytical model was developed for the mean-square error of the maximum likelihood (ML) estimator, demonstrating two phenomena that cause deviations from the minimum achievable error at low signal power. An approximation was derived to the threshold at which the ML estimator essentially fails to provide better than a random guess of the pulse arrival time. Comparing the analytic model performance predictions to those obtained via simulations, it was verified that the model accurately predicts the ML performance over all regimes considered. There is little prior art that attempts to understand the fundamental limitations to time-of-arrival estimation from Poisson statistics. This work establishes both a simple mathematical description of the error behavior, and the associated physical processes that yield this behavior. Previous work on mean-square error characterization for ML estimators has predominantly focused on additive Gaussian noise. This work demonstrates that the discrete nature of the Poisson noise process leads to a distinctly different error behavior.
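
    A hedged sketch of the estimation problem described above: photon arrivals are modeled as an inhomogeneous Poisson process and the pulse arrival time is chosen to maximize the log-likelihood. The pulse shape, rates and observation window are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
T, lam_b, n_s, sigma, tau_true = 10.0, 2.0, 40.0, 0.3, 4.2

def intensity(t, tau):
    """Background rate plus a Gaussian-shaped signal pulse centred at tau."""
    pulse = np.exp(-0.5 * ((t - tau) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return lam_b + n_s * pulse

# Simulate photon arrival times by thinning a homogeneous Poisson process.
lam_max = lam_b + n_s / (sigma * np.sqrt(2 * np.pi))
cand = rng.uniform(0.0, T, rng.poisson(lam_max * T))
arrivals = cand[rng.uniform(0.0, lam_max, cand.size) < intensity(cand, tau_true)]

def log_likelihood(tau):
    # The integral of the intensity over [0, T] is nearly independent of tau
    # when the pulse lies well inside the window, so only the sum term matters.
    return np.sum(np.log(intensity(arrivals, tau)))

taus = np.linspace(0.5, T - 0.5, 2000)
tau_ml = taus[np.argmax([log_likelihood(tau) for tau in taus])]
print(f"true tau = {tau_true:.2f}, ML estimate = {tau_ml:.2f}, photons = {arrivals.size}")
```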

  17. Mixed time slicing in path integral simulations

    International Nuclear Information System (INIS)

    Steele, Ryan P.; Zwickl, Jill; Shushkov, Philip; Tully, John C.

    2011-01-01

    A simple and efficient scheme is presented for using different time slices for different degrees of freedom in path integral calculations. This method bridges the gap between full quantization and the standard mixed quantum-classical (MQC) scheme and, therefore, still provides quantum mechanical effects in the less-quantized variables. Underlying the algorithm is the notion that time slices (beads) may be 'collapsed' in a manner that preserves quantization in the less quantum mechanical degrees of freedom. The method is shown to be analogous to multiple-time step integration techniques in classical molecular dynamics. The algorithm and its associated error are demonstrated on model systems containing coupled high- and low-frequency modes; results indicate that convergence of quantum mechanical observables can be achieved with disparate bead numbers in the different modes. Cost estimates indicate that this procedure, much like the MQC method, is most efficient for only a relatively few quantum mechanical degrees of freedom, such as proton transfer. In this regime, however, the cost of a fully quantum mechanical simulation is determined by the quantization of the least quantum mechanical degrees of freedom.

  18. Optimal Real-time Dispatch for Integrated Energy Systems

    DEFF Research Database (Denmark)

    Anvari-Moghaddam, Amjad; Guerrero, Josep M.; Rahimi-Kian, Ashkan

    2016-01-01

    With the emerging of small-scale integrated energy systems (IESs), there are significant potentials to increase the functionality of a typical demand-side management (DSM) strategy and typical implementation of building-level distributed energy resources (DERs). By integrating DSM and DERs...... into a cohesive, networked package that fully utilizes smart energy-efficient end-use devices, advanced building control/automation systems, and integrated communications architectures, it is possible to efficiently manage energy and comfort at the end-use location. In this paper, an ontology-driven multi......-agent control system with intelligent optimizers is proposed for optimal real-time dispatch of an integrated building and microgrid system considering coordinated demand response (DR) and DERs management. The optimal dispatch problem is formulated as a mixed integer nonlinear programing problem (MINLP...

  19. Extending molecular simulation time scales: Parallel in time integrations for high-level quantum chemistry and complex force representations.

    Science.gov (United States)

    Bylaska, Eric J; Weare, Jonathan Q; Weare, John H

    2013-08-21

    Parallel in time simulation algorithms are presented and applied to conventional molecular dynamics (MD) and ab initio molecular dynamics (AIMD) models of realistic complexity. Assuming that a forward time integrator, f (e.g., Verlet algorithm), is available to propagate the system from time ti (trajectory positions and velocities xi = (ri, vi)) to time ti+1 (xi+1) by xi+1 = fi(xi), the dynamics problem spanning an interval from t0…tM can be transformed into a root finding problem, F(X) = [xi − f(x(i−1))]i=1,M = 0, for the trajectory variables. The root finding problem is solved using a variety of root finding techniques, including quasi-Newton and preconditioned quasi-Newton schemes that are all unconditionally convergent. The algorithms are parallelized by assigning a processor to each time-step entry in the columns of F(X). The relation of this approach to other recently proposed parallel in time methods is discussed, and the effectiveness of various approaches to solving the root finding problem is tested. We demonstrate that more efficient dynamical models based on simplified interactions or coarsening time-steps provide preconditioners for the root finding problem. However, for MD and AIMD simulations, such preconditioners are not required to obtain reasonable convergence and their cost must be considered in the performance of the algorithm. The parallel in time algorithms developed are tested by applying them to MD and AIMD simulations of size and complexity similar to those encountered in present day applications. These include a 1000 Si atom MD simulation using Stillinger-Weber potentials, and a HCl + 4H2O AIMD simulation at the MP2 level. The maximum speedup ((serial execution time)/(parallel execution time)) obtained by parallelizing the Stillinger-Weber MD simulation was nearly 3.0. For the AIMD MP2 simulations, the algorithms achieved speedups of up to 14.3. The parallel in time algorithms can be implemented in a

  20. A Fully Integrated Discrete-Time Superheterodyne Receiver

    NARCIS (Netherlands)

    Tohidian, M.; Madadi, I.; Staszewski, R.B.

    2017-01-01

    The zero/low intermediate frequency (IF) receiver (RX) architecture has enabled full CMOS integration. As the technology scales and wireless standards become ever more challenging, the issues related to time-varying dc offsets, the second-order nonlinearity, and flicker noise become more critical.

  1. Optimal distribution of integration time for intensity measurements in Stokes polarimetry.

    Science.gov (United States)

    Li, Xiaobo; Liu, Tiegen; Huang, Bingjing; Song, Zhanjie; Hu, Haofeng

    2015-10-19

    We consider the typical Stokes polarimetry system, which performs four intensity measurements to estimate a Stokes vector. We show that if the total integration time of intensity measurements is fixed, the variance of the Stokes vector estimator depends on the distribution of the integration time among the four intensity measurements. Therefore, by optimizing the distribution of integration time, the variance of the Stokes vector estimator can be decreased. In this paper, we obtain the closed-form solution of the optimal distribution of integration time by employing the Lagrange multiplier method. According to the theoretical analysis and real-world experiment, it is shown that the total variance of the Stokes vector estimator can be significantly decreased, by about 40%, in the case discussed in this paper. The method proposed in this paper can effectively decrease the measurement variance and thus statistically improves the measurement accuracy of the polarimetric system.
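
    An illustrative sketch of the allocation problem (with an assumed photon-noise model Var(I_i) = I_i / t_i and an assumed four-state measurement matrix, not the paper's exact formulation): the total integration time is distributed over the four measurements to minimize the total variance of the Stokes estimate, and the numerical optimum is compared with the Lagrange-multiplier closed form for the same model.

```python
import numpy as np
from scipy.optimize import minimize

# Assumed measurement matrix of a four-state polarimeter: I = A @ S.
A = 0.5 * np.array([[1,  1,  0,  0],
                    [1, -1,  0,  0],
                    [1,  0,  1,  0],
                    [1,  0,  0,  1]])
W = np.linalg.inv(A)                         # Stokes estimator: S_hat = W @ I_hat
S_true = np.array([1.0, 0.3, -0.2, 0.1])
I_true = A @ S_true
T_total = 1.0

def total_variance(t):
    var_I = I_true / t                       # assumed photon-count-limited noise
    return float(np.sum(W**2 @ var_I))       # summed variance of all Stokes components

cons = {"type": "eq", "fun": lambda t: np.sum(t) - T_total}
res = minimize(total_variance, x0=np.full(4, T_total / 4), method="SLSQP",
               bounds=[(1e-6, None)] * 4, constraints=cons)

# Lagrange-multiplier closed form under the same model: t_i proportional to
# sqrt(I_i * sum_j W_ji^2), rescaled to the fixed total time.
t_closed = np.sqrt(I_true * np.sum(W**2, axis=0))
t_closed *= T_total / t_closed.sum()

print("numerical  :", np.round(res.x, 4))
print("closed form:", np.round(t_closed, 4))
print("variance vs uniform split:", total_variance(res.x) / total_variance(np.full(4, 0.25)))
```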

  2. Spectral density analysis of time correlation functions in lattice QCD using the maximum entropy method

    International Nuclear Information System (INIS)

    Fiebig, H. Rudolf

    2002-01-01

    We study various aspects of extracting spectral information from time correlation functions of lattice QCD by means of Bayesian inference with an entropic prior, the maximum entropy method (MEM). Correlator functions of a heavy-light meson-meson system serve as a repository for lattice data with diverse statistical quality. Attention is given to spectral mass density functions, inferred from the data, and their dependence on the parameters of the MEM. We propose to employ simulated annealing, or cooling, to solve the Bayesian inference problem, and discuss the practical issues of the approach

  3. Optimal order and time-step criterion for Aarseth-type N-body integrators

    International Nuclear Information System (INIS)

    Makino, Junichiro

    1991-01-01

    How the selection of the time-step criterion and the order of the integrator change the efficiency of Aarseth-type N-body integrators is discussed. An alternative to Aarseth's scheme based on the direct calculation of the time derivative of the force using the Hermite interpolation is compared to Aarseth's scheme, which uses the Newton interpolation to construct the predictor and corrector. How the number of particles in the system changes the behavior of integrators is examined. The Hermite scheme allows a time step twice as large as that for the standard Aarseth scheme for the same accuracy. The calculation cost of the Hermite scheme per time step is roughly twice as much as that of the standard Aarseth scheme. The optimal order of the integrators depends on both the particle number and the accuracy required. The time-step criterion of the standard Aarseth scheme is found to be inapplicable to higher-order integrators, and a more uniformly reliable criterion is proposed. 18 refs
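
    For reference, the standard Aarseth-type time-step criterion discussed in the record sets the step from the acceleration and its first three time derivatives. The sketch below implements that textbook formula with made-up derivative values; the accuracy parameter eta and all numbers are assumptions, and this is the criterion being examined, not the more uniformly reliable criterion the paper proposes.

```python
import numpy as np

def aarseth_timestep(a, a1, a2, a3, eta=0.02):
    """dt = sqrt(eta * (|a||a2| + |a1|^2) / (|a1||a3| + |a2|^2)).

    a, a1, a2, a3: acceleration and its first three time derivatives (vectors).
    """
    n = np.linalg.norm
    return np.sqrt(eta * (n(a) * n(a2) + n(a1) ** 2) /
                         (n(a1) * n(a3) + n(a2) ** 2))

# Made-up derivative magnitudes for a single particle.
a  = np.array([1.0, 0.0, 0.0])
a1 = np.array([0.1, 0.2, 0.0])
a2 = np.array([0.05, 0.0, 0.01])
a3 = np.array([0.01, 0.005, 0.0])
print("suggested time step:", aarseth_timestep(a, a1, a2, a3))
```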

  4. Interconnect rise time in superconducting integrating circuits

    International Nuclear Information System (INIS)

    Preis, D.; Shlager, K.

    1988-01-01

    The influence of resistive losses on the voltage rise time of an integrated-circuit interconnection is reported. A distributed-circuit model is used to represent the interconnect. Numerous parametric curves are presented based on numerical evaluation of the exact analytical expression for the model's transient response. For the superconducting case in which the series resistance of the interconnect approaches zero, the step-response rise time is longer but signal strength increases significantly.

  5. Electromagnetic-Thermal Integrated Design Optimization for Hypersonic Vehicle Short-Time Duty PM Brushless DC Motor

    Directory of Open Access Journals (Sweden)

    Quanwu Li

    2016-01-01

    Full Text Available High reliability is required for the permanent magnet brushless DC motor (PM-BLDCM) in an electrical pump of a hypersonic vehicle. The PM-BLDCM is a short-time duty motor with high power density. Since thermal equilibrium is not reached for the PM-BLDCM, the temperature distribution is not uniform and there is a risk of local overheating. The winding is a main heat source and its insulation is thermally sensitive, so reducing the winding temperature rise is the key to the improvement of the reliability. In order to reduce the winding temperature rise, an electromagnetic-thermal integrated design optimization method is proposed. The method is based on electromagnetic analysis and thermal transient analysis. The requirements and constraints of electromagnetic and thermal design are considered in this method. The split ratio and the maximum flux density in stator lamination, which are highly relevant to the winding temperature rise, are optimized analytically. The analytical results are verified by finite element analysis (FEA) and experiments. The maximum error between the analytical and the FEA results is 4%. The errors between the analytical and measured winding temperature rise are less than 8%. It can be proved that the method can obtain the optimal design accurately to reduce the winding temperature rise.

  6. Increasing the maximum daily operation time of MNSR reactor by modifying its cooling system

    International Nuclear Information System (INIS)

    Khamis, I.; Hainoun, A.; Al Halbi, W.; Al Isa, S.

    2006-08-01

    Thermal-hydraulic natural convection correlations have been formulated based on a thorough analysis and modeling of the MNSR reactor. The model considers a detailed description of the thermal and hydraulic aspects of cooling in the core and vessel. In addition, determination of pressure drop was made through an elaborate balancing of the overall pressure drop in the core against the sum of all individual channel pressure drops employing an iterative scheme. Using this model, an accurate estimation of various timely core-averaged hydraulic parameters such as generated power, hydraulic diameters, flow cross area, ... etc. for each one of the ten fuel circles in the core can be made. Furthermore, distribution of coolant and fuel temperatures, including maximum fuel temperature and its location in the core, can now be determined. Correlations among core-coolant average temperature, reactor power, and core-coolant inlet temperature, during both steady and transient cases, have been established and verified against experimental data. Simulating various operating conditions of MNSR, good agreement is obtained at different power levels. Various schemes of cooling have been investigated for the purpose of assessing potential benefits on the operational characteristics of the Syrian MNSR reactor. A detailed thermal hydraulic model for the analysis of MNSR has been developed. The analysis shows that an auxiliary cooling system, for the reactor vessel or installed in the pool which surrounds the lower section of the reactor vessel, will significantly offset the consumption of excess reactivity due to the negative reactivity temperature coefficient. Hence, the maximum operating time of the reactor is extended. The model considers a detailed description of the thermal and hydraulic aspects of cooling the core and its surrounding vessel. Natural convection correlations have been formulated based on a thorough analysis and modeling of the MNSR reactor. The suggested 'micro model

  7. Aspects for Run-time Component Integration

    DEFF Research Database (Denmark)

    Truyen, Eddy; Jørgensen, Bo Nørregaard; Joosen, Wouter

    2000-01-01

    Component framework technology has become the cornerstone of building a family of systems and applications. A component framework defines a generic architecture into which specialized components can be plugged. As such, the component framework leverages the glue that connects the different inserted...... to dynamically integrate into the architecture of middleware systems new services that support non-functional aspects such as security, transactions, real-time....

  8. Effects of the Maximum Luminance in a Medical-grade Liquid-crystal Display on the Recognition Time of a Test Pattern: Observer Performance Using Landolt Rings.

    Science.gov (United States)

    Doi, Yasuhiro; Matsuyama, Michinobu; Ikeda, Ryuji; Hashida, Masahiro

    2016-07-01

    This study was conducted to measure the recognition time of the test pattern and to investigate the effects of the maximum luminance in a medical-grade liquid-crystal display (LCD) on the recognition time. Landolt rings as signals of the test pattern were used with four random orientations, one on each of the eight gray-scale steps. Ten observers input the orientation of the gap on the Landolt rings using cursor keys on the keyboard. The recognition times were automatically measured from the display of the test pattern on the medical-grade LCD to the input of the orientation of the gap in the Landolt rings. The maximum luminance in this study was set to one of four values (100, 170, 250, and 400 cd/m²), for which the corresponding recognition times were measured. As a result, the average recognition times for each observer with maximum luminances of 100, 170, 250, and 400 cd/m² were found to be 3.96 to 7.12 s, 3.72 to 6.35 s, 3.53 to 5.97 s, and 3.37 to 5.98 s, respectively. The results indicate that the observer's recognition time decreases as the maximum luminance of the medical-grade LCD increases. Therefore, it is evident that the maximum luminance of the medical-grade LCD affects the test pattern recognition time.

  9. Polar coordinated fuzzy controller based real-time maximum-power point control of photovoltaic system

    Energy Technology Data Exchange (ETDEWEB)

    Syafaruddin; Hiyama, Takashi [Department of Computer Science and Electrical Engineering of Kumamoto University, 2-39-1 Kurokami, Kumamoto 860-8555 (Japan); Karatepe, Engin [Department of Electrical and Electronics Engineering of Ege University, 35100 Bornova-Izmir (Turkey)

    2009-12-15

    It is crucial to improve the photovoltaic (PV) system efficiency and to develop the reliability of PV generation control systems. There are two ways to increase the efficiency of PV power generation system. The first is to develop materials offering high conversion efficiency at low cost. The second is to operate PV systems optimally. However, the PV system can be optimally operated only at a specific output voltage and its output power fluctuates under intermittent weather conditions. Moreover, it is very difficult to test the performance of a maximum-power point tracking (MPPT) controller under the same weather condition during the development process and also the field testing is costly and time consuming. This paper presents a novel real-time simulation technique of PV generation system by using dSPACE real-time interface system. The proposed system includes Artificial Neural Network (ANN) and fuzzy logic controller scheme using polar information. This type of fuzzy logic rules is implemented for the first time to operate the PV module at optimum operating point. ANN is utilized to determine the optimum operating voltage for monocrystalline silicon, thin-film cadmium telluride and triple junction amorphous silicon solar cells. The verification of availability and stability of the proposed system through the real-time simulator shows that the proposed system can respond accurately for different scenarios and different solar cell technologies. (author)

  10. Conditional maximum-entropy method for selecting prior distributions in Bayesian statistics

    Science.gov (United States)

    Abe, Sumiyoshi

    2014-11-01

    The conditional maximum-entropy method (abbreviated here as C-MaxEnt) is formulated for selecting prior probability distributions in Bayesian statistics for parameter estimation. This method is inspired by a statistical-mechanical approach to systems governed by dynamics with largely separated time scales and is based on three key concepts: conjugate pairs of variables, dimensionless integration measures with coarse-graining factors and partial maximization of the joint entropy. The method enables one to calculate a prior purely from a likelihood in a simple way. It is shown, in particular, how it not only yields Jeffreys's rules but also reveals new structures hidden behind them.

  11. On the initial condition problem of the time domain PMCHWT surface integral equation

    KAUST Repository

    Uysal, Ismail Enes

    2017-05-13

    Non-physical, linearly increasing and constant current components are induced in the marching on-in-time solution of time domain surface integral equations when initial conditions on time derivatives of (unknown) equivalent currents are not enforced properly. This problem can be remedied by solving the time integral of the surface integral equation for auxiliary currents that are defined to be the time derivatives of the equivalent currents. Then the equivalent currents are obtained by numerically differentiating the auxiliary ones. In this work, this approach is applied to the marching on-in-time solution of the time domain Poggio-Miller-Chan-Harrington-Wu-Tsai surface integral equation enforced on dispersive/plasmonic scatterers. Accuracy of the proposed method is demonstrated by a numerical example.

  12. Extending molecular simulation time scales: Parallel in time integrations for high-level quantum chemistry and complex force representations

    Energy Technology Data Exchange (ETDEWEB)

    Bylaska, Eric J., E-mail: Eric.Bylaska@pnnl.gov [Environmental Molecular Sciences Laboratory, Pacific Northwest National Laboratory, P.O. Box 999, Richland, Washington 99352 (United States); Weare, Jonathan Q., E-mail: weare@uchicago.edu [Department of Mathematics, University of Chicago, Chicago, Illinois 60637 (United States); Weare, John H., E-mail: jweare@ucsd.edu [Department of Chemistry and Biochemistry, University of California, San Diego, La Jolla, California 92093 (United States)

    2013-08-21

    Parallel in time simulation algorithms are presented and applied to conventional molecular dynamics (MD) and ab initio molecular dynamics (AIMD) models of realistic complexity. Assuming that a forward time integrator, f (e.g., Verlet algorithm), is available to propagate the system from time t_i (trajectory positions and velocities x_i = (r_i, v_i)) to time t_{i+1} (x_{i+1}) by x_{i+1} = f_i(x_i), the dynamics problem spanning an interval from t_0…t_M can be transformed into a root finding problem, F(X) = [x_i − f(x_{i−1})]_{i=1,M} = 0, for the trajectory variables. The root finding problem is solved using a variety of root finding techniques, including quasi-Newton and preconditioned quasi-Newton schemes that are all unconditionally convergent. The algorithms are parallelized by assigning a processor to each time-step entry in the columns of F(X). The relation of this approach to other recently proposed parallel in time methods is discussed, and the effectiveness of various approaches to solving the root finding problem is tested. We demonstrate that more efficient dynamical models based on simplified interactions or coarsening time-steps provide preconditioners for the root finding problem. However, for MD and AIMD simulations, such preconditioners are not required to obtain reasonable convergence and their cost must be considered in the performance of the algorithm. The parallel in time algorithms developed are tested by applying them to MD and AIMD simulations of size and complexity similar to those encountered in present day applications. These include a 1000 Si atom MD simulation using Stillinger-Weber potentials, and a HCl + 4H_2O AIMD simulation at the MP2 level. The maximum speedup ((serial execution time)/(parallel execution time)) obtained by parallelizing the Stillinger-Weber MD simulation was nearly 3.0. For the AIMD MP2 simulations, the algorithms achieved speedups of up

  13. The Crank Nicolson Time Integrator for EMPHASIS.

    Energy Technology Data Exchange (ETDEWEB)

    McGregor, Duncan Alisdair Odum; Love, Edward; Kramer, Richard Michael Jack

    2018-03-01

    We investigate the use of implicit time integrators for finite element time domain approximations of Maxwell's equations in vacuum. We discretize Maxwell's equations in time using Crank-Nicolson and in 3D space using compatible finite elements. We solve the system by taking a single step of Newton's method and inverting the Eddy-Current Schur complement allowing for the use of standard preconditioning techniques. This approach also generalizes to more complex material models that can include the Unsplit PML. We present verification results and demonstrate performance at CFL numbers up to 1000.
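
    Stripped of the Maxwell-specific machinery, a Crank-Nicolson update for a linear semi-discrete system du/dt = A u amounts to solving (I - dt/2 A) u_{n+1} = (I + dt/2 A) u_n at every step. The sketch below shows that update with a 1-D diffusion operator standing in for the spatial discretization; it is a generic illustration, not the EMPHASIS solver or its Schur-complement treatment.

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import splu

n, dt, nu = 200, 0.05, 1.0
h = 1.0 / (n + 1)
A = nu * diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2  # Dirichlet Laplacian
I = identity(n, format="csc")

lhs = splu((I - 0.5 * dt * A).tocsc())       # factor once, reuse at every step
rhs_op = (I + 0.5 * dt * A).tocsc()

x = np.linspace(h, 1.0 - h, n)
u = np.exp(-100.0 * (x - 0.5) ** 2)          # initial condition
for _ in range(50):                          # stable even for large time steps
    u = lhs.solve(rhs_op @ u)
print("discrete integral after 50 CN steps:", u.sum() * h)
```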

  14. On the relationship between supplier integration and time-to-market

    NARCIS (Netherlands)

    Perols, J.; Zimmermann, C.; Kortmann, S.

    2013-01-01

    Recent operations management and innovation management research emphasizes the importance of supplier integration. However, the empirical results as to the relationship between supplier integration and time-to-market are ambivalent. To understand this important relationship, we incorporate two major

  15. Time integration in the code Zgoubi and external usage of PTC's structures

    International Nuclear Information System (INIS)

    Forest, Etienne; Meot, F.

    2006-06-01

    The purpose of this note is to describe Zgoubi's integrator and some pitfalls of time-based integration when used in accelerators. We show why the convergence rate of an integrator can be affected by an improper treatment at the boundary when time is used as the integration variable. We also point out how the code PTC can be used as a container by other tracking engines. This work is not completed as far as incorporation of Zgoubi is concerned. (authors)

  16. Non-integrability of time-dependent spherically symmetric Yang-Mills equations

    Energy Technology Data Exchange (ETDEWEB)

    Matinyan, S G; Prokhorenko, E B; Savvidy, G K

    1988-03-07

    The integrability of time-dependent spherically symmetric Yang-Mills equations is studied using the Fermi-Pasta-Ulam method. It is shown that the motion of this system is ergodic, while the system itself is non-integrable, i.e. manifests dynamical chaos.

  17. Parareal algorithms with local time-integrators for time fractional differential equations

    Science.gov (United States)

    Wu, Shu-Lin; Zhou, Tao

    2018-04-01

    It is challenging to design parareal algorithms for time-fractional differential equations due to the historical effect of the fractional operator. A direct extension of the classical parareal method to such equations will lead to unbalanced computational time in each process. In this work, we present an efficient parareal iteration scheme to overcome this issue, by adopting two recently developed local time-integrators for time fractional operators. In both approaches, one introduces auxiliary variables to localize the fractional operator. To this end, we propose a new strategy to perform the coarse grid correction so that the auxiliary variables and the solution variable are corrected separately in a mixed pattern. It is shown that the proposed parareal algorithm admits a robust rate of convergence. Numerical examples are presented to support our conclusions.
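
    For orientation, the classical parareal pattern the paper builds on alternates a cheap sequential coarse propagator with accurate fine propagations that can run in parallel over the time slices. The sketch below shows that pattern for a plain scalar ODE; the paper's actual contribution (local time-integrators for the fractional operator and a mixed correction of auxiliary and solution variables) is not reproduced here.

```python
import numpy as np

lam, T, N = 2.0, 2.0, 20               # decay rate, horizon, number of coarse slices
dT = T / N

def coarse(u, dt):                     # one backward-Euler step per slice
    return u / (1.0 + lam * dt)

def fine(u, dt, m=50):                 # m small backward-Euler steps per slice
    for _ in range(m):
        u = u / (1.0 + lam * dt / m)
    return u

U = np.empty(N + 1)
U[0] = 1.0
for n in range(N):                     # initial coarse sweep (sequential)
    U[n + 1] = coarse(U[n], dT)

for k in range(5):                     # parareal corrections
    F = np.array([fine(U[n], dT) for n in range(N)])   # fine sweeps, parallelizable
    U_new = np.empty_like(U)
    U_new[0] = U[0]
    for n in range(N):                 # sequential coarse correction
        U_new[n + 1] = coarse(U_new[n], dT) + F[n] - coarse(U[n], dT)
    U = U_new

print(f"parareal u(T) = {U[-1]:.6f}, exact = {np.exp(-lam * T):.6f}")
```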

  18. A Novel Multiple-Time Scale Integrator for the Hybrid Monte Carlo Algorithm

    International Nuclear Information System (INIS)

    Kamleh, Waseem

    2011-01-01

    Hybrid Monte Carlo simulations that implement the fermion action using multiple terms are commonly used. By the nature of their formulation they involve multiple integration time scales in the evolution of the system through simulation time. These different scales are usually dealt with by the Sexton-Weingarten nested leapfrog integrator. In this scheme the choice of time scales is somewhat restricted as each time step must be an exact multiple of the next smallest scale in the sequence. A novel generalisation of the nested leapfrog integrator is introduced which allows for far greater flexibility in the choice of time scales, as each scale now must only be an exact multiple of the smallest step size.

  19. Stochastic modelling of the monthly average maximum and minimum temperature patterns in India 1981-2015

    Science.gov (United States)

    Narasimha Murthy, K. V.; Saravana, R.; Vijaya Kumar, K.

    2018-04-01

    The paper investigates the stochastic modelling and forecasting of monthly average maximum and minimum temperature patterns through a suitable seasonal auto regressive integrated moving average (SARIMA) model for the period 1981-2015 in India. The variations and distributions of monthly maximum and minimum temperatures are analyzed through Box plots and cumulative distribution functions. The time series plot indicates that the maximum temperature series contain sharp peaks in almost all the years, while it is not true for the minimum temperature series, so both the series are modelled separately. The possible SARIMA model has been chosen based on observing the autocorrelation function (ACF), partial autocorrelation function (PACF), and inverse autocorrelation function (IACF) of the logarithmically transformed temperature series. The SARIMA (1, 0, 0) × (0, 1, 1)_12 model is selected for the monthly average maximum and minimum temperature series based on the minimum Bayesian information criterion. The model parameters are obtained using the maximum-likelihood method with the help of the standard error of residuals. The adequacy of the selected model is determined using correlation diagnostic checking through ACF, PACF, IACF, and p values of the Ljung-Box test statistic of residuals and using normal diagnostic checking through the kernel and normal density curves of the histogram and Q-Q plot. Finally, the forecasting of monthly maximum and minimum temperature patterns of India for the next 3 years has been carried out with the help of the selected model.
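
    A minimal sketch of fitting the SARIMA(1, 0, 0) × (0, 1, 1)_12 model named above with statsmodels; the synthetic monthly "temperature" series below is only a stand-in for the India data, which are not reproduced here.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(1)
idx = pd.date_range("1981-01", periods=35 * 12, freq="MS")
seasonal = 10.0 * np.sin(2.0 * np.pi * idx.month / 12.0)
y = pd.Series(30.0 + seasonal + rng.normal(0.0, 1.0, idx.size), index=idx)
y = np.log(y)                                    # log transform, as in the study

model = SARIMAX(y, order=(1, 0, 0), seasonal_order=(0, 1, 1, 12))
fit = model.fit(disp=False)                      # maximum-likelihood estimation
print(fit.summary().tables[1])                   # parameter estimates and standard errors

forecast = fit.get_forecast(steps=36)            # the next three years
print(np.exp(forecast.predicted_mean).head())    # back-transform to temperature units
```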

  20. Numerical counting ratemeter with variable time constant and integrated circuits

    International Nuclear Information System (INIS)

    Kaiser, J.; Fuan, J.

    1967-01-01

    We present here the prototype of a numerical counting ratemeter which is a special version of variable time-constant frequency meter (1). The originality of this work lies in the fact that the change in the time constant is carried out automatically. Since the criterion for this change is the accuracy in the annunciated result, the integration time is varied as a function of the frequency. For the prototype described in this report, the time constant varies from 1 sec to 1 millisec. for frequencies in the range 10 Hz to 10 MHz. This prototype is built entirely of MECL-type integrated circuits from Motorola and is thus contained in two relatively small boxes. (authors) [fr

  1. Precise digital integration in wide time range: theory and realization

    International Nuclear Information System (INIS)

    Batrakov, A.M.; Pavlenko, A.V.

    2017-01-01

    The digital integration method based on using high-speed precision analog-to-digital converters (ADC) has become widely used over the recent years. The paper analyzes the limitations of this method that are caused by the signal properties, ADC sampling rate and noise spectral density of the ADC signal path. This analysis allowed creating digital integrators with accurate synchronization and achieving an integration error of less than 10^−5 in the time range from microseconds to tens of seconds. The structure of the integrator is described and its basic parameters are presented. The possibilities of different ADC chips in terms of their applicability to digital integrators are discussed. A comparison with other integrating devices is presented.

  2. Numerical Time Integration Methods for a Point Absorber Wave Energy Converter

    DEFF Research Database (Denmark)

    Zurkinden, Andrew Stephen; Kramer, Morten

    2012-01-01

    on a discretization of the convolution integral. The calculation of the convolution integral is performed at each time step regardless of the chosen numerical scheme. In the second model the convolution integral is replaced by a system of linear ordinary differential equations. The formulation of the state...

  3. Novel methods for estimating lithium-ion battery state of energy and maximum available energy

    International Nuclear Information System (INIS)

    Zheng, Linfeng; Zhu, Jianguo; Wang, Guoxiu; He, Tingting; Wei, Yiying

    2016-01-01

    Highlights: • Study on temperature, current, aging dependencies of maximum available energy. • Study on the various factors dependencies of relationships between SOE and SOC. • A quantitative relationship between SOE and SOC is proposed for SOE estimation. • Estimate maximum available energy by means of moving-window energy-integral. • The robustness and feasibility of the proposed approaches are systematic evaluated. - Abstract: The battery state of energy (SOE) allows a direct determination of the ratio between the remaining and maximum available energy of a battery, which is critical for energy optimization and management in energy storage systems. In this paper, the ambient temperature, battery discharge/charge current rate and cell aging level dependencies of battery maximum available energy and SOE are comprehensively analyzed. An explicit quantitative relationship between SOE and state of charge (SOC) for LiMn_2O_4 battery cells is proposed for SOE estimation, and a moving-window energy-integral technique is incorporated to estimate battery maximum available energy. Experimental results show that the proposed approaches can estimate battery maximum available energy and SOE with high precision. The robustness of the proposed approaches against various operation conditions and cell aging levels is systematically evaluated.
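
    A schematic numpy sketch of the two quantities named in the highlights: the maximum available energy obtained by integrating power over a full reference discharge, and a moving-window energy integral over the same record. The synthetic discharge profile and window length are assumptions, not the paper's data or exact algorithm.

```python
import numpy as np

# Assumed constant-current reference discharge, sampled at 1 Hz.
t = np.linspace(0.0, 3600.0, 3601)               # s
current = np.full_like(t, 2.0)                   # A
voltage = 4.1 - 0.8 * (t / t[-1])                # V, simple linear fade

power = voltage * current                                     # W
dE = 0.5 * (power[1:] + power[:-1]) * np.diff(t) / 3600.0     # trapezoidal Wh per sample
E_used = np.concatenate([[0.0], np.cumsum(dE)])               # energy delivered so far (Wh)
E_max = E_used[-1]                                            # maximum available energy (Wh)
soe = 1.0 - E_used / E_max                                    # state of energy

window = 600                                     # moving 10-minute energy window (samples)
E_window = E_used[window:] - E_used[:-window]    # energy delivered within each window (Wh)

print(f"E_max = {E_max:.2f} Wh, SOE at mid-discharge = {soe[t.size // 2]:.3f}")
print(f"largest 10-minute energy slice = {E_window.max():.2f} Wh")
```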

  4. Feasibility of real-time calculation of correlation integral derived statistics applied to EEG time series

    NARCIS (Netherlands)

    van den Broek, PLC; van Egmond, J; van Rijn, CM; Takens, F; Coenen, AML; Booij, LHDJ

    2005-01-01

    Background: This study assessed the feasibility of online calculation of the correlation integral (C(r)) aiming to apply C(r)-derived statistics. For real-time application it is important to reduce calculation time. It is shown how our method works for EEG time series. Methods: To achieve online

  5. Feasibility of real-time calculation of correlation integral derived statistics applied to EEG time series

    NARCIS (Netherlands)

    Broek, P.L.C. van den; Egmond, J. van; Rijn, C.M. van; Takens, F.; Coenen, A.M.L.; Booij, L.H.D.J.

    2005-01-01

    This study assessed the feasibility of online calculation of the correlation integral (C(r)) aiming to apply C(r)-derived statistics. For real-time application it is important to reduce calculation time. It is shown how our method works for EEG time series. Methods: To achieve online calculation of

  6. Explicit Time Integrators for Nonlinear Dynamics Derived from the Midpoint Rule

    Directory of Open Access Journals (Sweden)

    P. Krysl

    2004-01-01

    Full Text Available We address the design of time integrators for mechanical systems that are explicit in the forcing evaluations. Our starting point is the midpoint rule, either in the classical form for the vector space setting, or in the Lie form for the rotation group. By introducing discrete, concentrated impulses we can approximate the forcing impressed upon the system over the time step, and thus arrive at first-order integrators. These can then be composed to yield a second order integrator with very desirable properties: symplecticity and momentum conservation. 
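
    The sketch below illustrates only the generic composition idea: a first-order kick-then-drift map composed with its adjoint (drift-then-kick) over half steps yields a second-order, momentum-conserving integrator, which for a separable Hamiltonian is the familiar leapfrog. The paper's construction with concentrated impulses and its Lie-group form for rotations is not reproduced; the potential and step size are assumptions.

```python
import numpy as np

def force(q):                      # assumed potential V(q) = 0.5 * q**2
    return -q

def phi_a(q, p, h):                # first order: kick with the impulse, then drift
    p = p + h * force(q)
    q = q + h * p
    return q, p

def phi_b(q, p, h):                # adjoint map: drift first, then kick
    q = q + h * p
    p = p + h * force(q)
    return q, p

def second_order_step(q, p, h):    # half step of phi_a followed by half step of phi_b
    q, p = phi_a(q, p, 0.5 * h)
    q, p = phi_b(q, p, 0.5 * h)
    return q, p

q, p, h = 1.0, 0.0, 0.05
for _ in range(2000):              # 100 time units of a unit harmonic oscillator
    q, p = second_order_step(q, p, h)
print("energy error after 100 time units:", 0.5 * (q * q + p * p) - 0.5)
```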

  7. Energy conservation in Newmark based time integration algorithms

    DEFF Research Database (Denmark)

    Krenk, Steen

    2006-01-01

    Energy balance equations are established for the Newmark time integration algorithm, and for the derived algorithms with algorithmic damping introduced via averaging, the so-called α-methods. The energy balance equations form a sequence applicable to: Newmark integration of the undamped equations...... of motion, an extended form including structural damping, and finally the generalized form including structural as well as algorithmic damping. In all three cases the expression for energy, appearing in the balance equation, is the mechanical energy plus some additional terms generated by the discretization...
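
    For reference, one step of the Newmark family (of which the algorithmically damped variants are generalizations) can be sketched as below. The matrices M, C, K and the textbook parameters beta and gamma are not taken from the paper, and the energy function is only the mechanical part that the balance equations track before the discretization-generated terms are added.

    ```python
    import numpy as np

    def newmark_step(M, C, K, f_next, u, v, a, h, beta=0.25, gamma=0.5):
        """One Newmark step for M a + C v + K u = f (gamma > 1/2 adds algorithmic damping)."""
        u_pred = u + h * v + (0.5 - beta) * h**2 * a
        v_pred = v + (1.0 - gamma) * h * a
        lhs = M + gamma * h * C + beta * h**2 * K
        a_new = np.linalg.solve(lhs, f_next - C @ v_pred - K @ u_pred)
        u_new = u_pred + beta * h**2 * a_new
        v_new = v_pred + gamma * h * a_new
        return u_new, v_new, a_new

    def mechanical_energy(M, K, u, v):
        """Kinetic plus strain energy, the leading term in the balance equations."""
        return 0.5 * v @ M @ v + 0.5 * u @ K @ u
    ```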

  8. The imaginary-time path integral and non-time-reversal-invariant saddle points of the Euclidean action

    International Nuclear Information System (INIS)

    Dasgupta, I.

    1998-01-01

    We discuss new bounce-like (but non-time-reversal-invariant) solutions to Euclidean equations of motion, which we dub boomerons. In the Euclidean path integral approach to quantum theories, boomerons make an imaginary contribution to the vacuum energy. The fake vacuum instability can be removed by cancelling boomeron contributions against contributions from time reversed boomerons (anti-boomerons). The cancellation rests on a sign choice whose significance is not completely understood in the path integral method. (orig.)

  9. Use of queue modelling in the analysis of elective patient treatment governed by a maximum waiting time policy

    DEFF Research Database (Denmark)

    Kozlowski, Dawid; Worthington, Dave

    2015-01-01

    This paper illustrates the use of a queue modelling approach in the analysis of elective patient treatment governed by the maximum waiting time policy. Drawing upon the combined strengths of analytic and simulation approaches we develop both continuous-time Markov chain and discrete event simulation models, to provide an insightful analysis of the public hospital performance under the policy rules. The aim of this paper is to support the enhancement of the quality of elective patient care, to be brought about by better understanding of the policy implications on the utilization of public hospital resources.

  10. Rigorous time slicing approach to Feynman path integrals

    CERN Document Server

    Fujiwara, Daisuke

    2017-01-01

    This book proves that Feynman's original definition of the path integral actually converges to the fundamental solution of the Schrödinger equation at least in the short term if the potential is differentiable sufficiently many times and its derivatives of order equal to or higher than two are bounded. The semi-classical asymptotic formula up to the second term of the fundamental solution is also proved by a method different from that of Birkhoff. A bound of the remainder term is also proved. The Feynman path integral is a method of quantization using the Lagrangian function, whereas Schrödinger's quantization uses the Hamiltonian function. These two methods are believed to be equivalent. But equivalence is not fully proved mathematically, because, compared with Schrödinger's method, there is still much to be done concerning rigorous mathematical treatment of Feynman's method. Feynman himself defined a path integral as the limit of a sequence of integrals over finite-dimensional spaces which is obtained by...

  11. New design for photonic temporal integration with combined high processing speed and long operation time window.

    Science.gov (United States)

    Asghari, Mohammad H; Park, Yongwoo; Azaña, José

    2011-01-17

    We propose and experimentally prove a novel design for implementing photonic temporal integrators simultaneously offering a high processing bandwidth and a long operation time window, namely a large time-bandwidth product. The proposed scheme is based on concatenating in series a time-limited ultrafast photonic temporal integrator, e.g. implemented using a fiber Bragg grating (FBG), with a discrete-time (bandwidth limited) optical integrator, e.g. implemented using an optical resonant cavity. This design combines the advantages of these two previously demonstrated photonic integrator solutions, providing a processing speed as high as that of the time-limited ultrafast integrator and an operation time window fixed by the discrete-time integrator. Proof-of-concept experiments are reported using a uniform fiber Bragg grating (as the original time-limited integrator) connected in series with a bulk-optics coherent interferometers' system (as a passive 4-points discrete-time photonic temporal integrator). Using this setup, we demonstrate accurate temporal integration of complex-field optical signals with time-features as fast as ~6 ps, only limited by the processing bandwidth of the FBG integrator, over time durations as long as ~200 ps, which represents a 4-fold improvement over the operation time window (~50 ps) of the original FBG integrator.

  12. An open-chain imaginary-time path-integral sampling approach to the calculation of approximate symmetrized quantum time correlation functions

    Science.gov (United States)

    Cendagorta, Joseph R.; Bačić, Zlatko; Tuckerman, Mark E.

    2018-03-01

    We introduce a scheme for approximating quantum time correlation functions numerically within the Feynman path integral formulation. Starting with the symmetrized version of the correlation function expressed as a discretized path integral, we introduce a change of integration variables often used in the derivation of trajectory-based semiclassical methods. In particular, we transform to sum and difference variables between forward and backward complex-time propagation paths. Once the transformation is performed, the potential energy is expanded in powers of the difference variables, which allows us to perform the integrals over these variables analytically. The manner in which this procedure is carried out results in an open-chain path integral (in the remaining sum variables) with a modified potential that is evaluated using imaginary-time path-integral sampling rather than requiring the generation of a large ensemble of trajectories. Consequently, any number of path integral sampling schemes can be employed to compute the remaining path integral, including Monte Carlo, path-integral molecular dynamics, or enhanced path-integral molecular dynamics. We believe that this approach constitutes a different perspective in semiclassical-type approximations to quantum time correlation functions. Importantly, we argue that our approximation can be systematically improved within a cumulant expansion formalism. We test this approximation on a set of one-dimensional problems that are commonly used to benchmark approximate quantum dynamical schemes. We show that the method is at least as accurate as the popular ring-polymer molecular dynamics technique and linearized semiclassical initial value representation for correlation functions of linear operators in most of these examples and improves the accuracy of correlation functions of nonlinear operators.

  13. An open-chain imaginary-time path-integral sampling approach to the calculation of approximate symmetrized quantum time correlation functions.

    Science.gov (United States)

    Cendagorta, Joseph R; Bačić, Zlatko; Tuckerman, Mark E

    2018-03-14

    We introduce a scheme for approximating quantum time correlation functions numerically within the Feynman path integral formulation. Starting with the symmetrized version of the correlation function expressed as a discretized path integral, we introduce a change of integration variables often used in the derivation of trajectory-based semiclassical methods. In particular, we transform to sum and difference variables between forward and backward complex-time propagation paths. Once the transformation is performed, the potential energy is expanded in powers of the difference variables, which allows us to perform the integrals over these variables analytically. The manner in which this procedure is carried out results in an open-chain path integral (in the remaining sum variables) with a modified potential that is evaluated using imaginary-time path-integral sampling rather than requiring the generation of a large ensemble of trajectories. Consequently, any number of path integral sampling schemes can be employed to compute the remaining path integral, including Monte Carlo, path-integral molecular dynamics, or enhanced path-integral molecular dynamics. We believe that this approach constitutes a different perspective in semiclassical-type approximations to quantum time correlation functions. Importantly, we argue that our approximation can be systematically improved within a cumulant expansion formalism. We test this approximation on a set of one-dimensional problems that are commonly used to benchmark approximate quantum dynamical schemes. We show that the method is at least as accurate as the popular ring-polymer molecular dynamics technique and linearized semiclassical initial value representation for correlation functions of linear operators in most of these examples and improves the accuracy of correlation functions of nonlinear operators.

  14. Relative timing of last glacial maximum and late-glacial events in the central tropical Andes

    Science.gov (United States)

    Bromley, Gordon R. M.; Schaefer, Joerg M.; Winckler, Gisela; Hall, Brenda L.; Todd, Claire E.; Rademaker, Kurt M.

    2009-11-01

    Whether or not tropical climate fluctuated in synchrony with global events during the Late Pleistocene is a key problem in climate research. However, the timing of past climate changes in the tropics remains controversial, with a number of recent studies reporting that tropical ice age climate is out of phase with global events. Here, we present geomorphic evidence and an in-situ cosmogenic 3He surface-exposure chronology from Nevado Coropuna, southern Peru, showing that glaciers underwent at least two significant advances during the Late Pleistocene prior to Holocene warming. Comparison of our glacial-geomorphic map at Nevado Coropuna to mid-latitude reconstructions yields a striking similarity between Last Glacial Maximum (LGM) and Late-Glacial sequences in tropical and temperate regions. Exposure ages constraining the maximum and end of the older advance at Nevado Coropuna range between 24.5 and 25.3 ka, and between 16.7 and 21.1 ka, respectively, depending on the cosmogenic production rate scaling model used. Similarly, the mean age of the younger event ranges from 10 to 13 ka. This implies that (1) the LGM and the onset of deglaciation in southern Peru occurred no earlier than at higher latitudes and (2) that a significant Late-Glacial event occurred, most likely prior to the Holocene, coherent with the glacial record from mid and high latitudes. The time elapsed between the end of the LGM and the Late-Glacial event at Nevado Coropuna is independent of scaling model and matches the period between the LGM termination and Late-Glacial reversal in classic mid-latitude records, suggesting that these events in both tropical and temperate regions were in phase.

  15. Effect of Ovality on Maximum External Pressure of Helically Coiled Steam Generator Tubes with a Rectangular Wear

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Dong In; Lim, Eun Mo; Huh, Nam Su [Seoul National Univ. of Science and Technology, Seoul (Korea, Republic of); Choi, Shin Beom; Yu, Je Yong; Kim, Ji Ho; Choi, Suhn [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

    The structural integrity of steam generator tubes is one of the crucial parameters for the safe operation of nuclear power plants. Thus, many studies have been made to provide engineering methods to assess the integrity of defective tubes of commercial nuclear power plants considering their operating environments and defect characteristics. As described above, the geometric and operating conditions of steam generator tubes in an integral reactor are significantly different from those of a commercial reactor. Therefore, the structural integrity assessment of defective tubes of an integral reactor, taking into account its own operating conditions and geometric characteristics, i.e., external pressure and helically coiled shape, should be made to demonstrate compliance with the current design criteria. Also, ovality is a very specific characteristic of the helically coiled tube because it occurs during the coiling process. Wear, arising from FIV (Flow Induced Vibration) and other mechanisms, is the main degradation of steam generator tubes. In the present study, the maximum external pressure of a helically coiled steam generator tube with wear is predicted based on detailed 3-dimensional finite element analysis. As for the shape of the wear defect, a rectangular shape is considered. In particular, the effect of ovality on the maximum external pressure of helically coiled tubes with rectangular shaped wear is investigated. In the present work, the maximum external pressure of a helically coiled steam generator tube with rectangular shaped wear is investigated via detailed 3-D FE analyses. In order to cover a practical range of geometries for defective tubes, the variables affecting the maximum external pressure were systematically varied. In particular, the effect of tube ovality on the maximum external pressure is evaluated. It is expected that the present results can be used as a technical background for establishing a practical structural integrity assessment guideline of

  16. Orientation, Evaluation, and Integration of Part-Time Nursing Faculty.

    Science.gov (United States)

    Carlson, Joanne S

    2015-07-10

    This study helps to quantify and describe orientation, evaluation, and integration practices pertaining to part-time clinical nursing faculty teaching in prelicensure nursing education programs. A researcher-designed Web-based survey was used to collect information from a convenience sample of part-time clinical nursing faculty teaching in prelicensure nursing programs. Survey questions focused on the amount and type of orientation, evaluation, and integration practices. Descriptive statistics were used to analyze results. Respondents reported on average four hours of orientation, with close to half reporting no more than two hours. Evaluative feedback was received much more often from students than from full-time faculty. Most respondents reported receiving some degree of mentoring and that it was easy to get help from full-time faculty. Respondents reported being most informed about student evaluation procedures, grading, and the steps to take when students are not meeting course objectives, and less informed about changes to ongoing curriculum and policy.

  17. An energy-stable time-integrator for phase-field models

    KAUST Repository

    Vignal, Philippe

    2016-12-27

    We introduce a provably energy-stable time-integration method for general classes of phase-field models with polynomial potentials. We demonstrate how Taylor series expansions of the nonlinear terms present in the partial differential equations of these models can lead to expressions that guarantee energy-stability implicitly, which are second-order accurate in time. The spatial discretization relies on a mixed finite element formulation and isogeometric analysis. We also propose an adaptive time-stepping discretization that relies on a first-order backward approximation to give an error-estimator. This error estimator is accurate, robust, and does not require the computation of extra solutions to estimate the error. This methodology can be applied to any second-order accurate time-integration scheme. We present numerical examples in two and three spatial dimensions, which confirm the stability and robustness of the method. The implementation of the numerical schemes is done in PetIGA, a high-performance isogeometric analysis framework.

  18. An energy-stable time-integrator for phase-field models

    KAUST Repository

    Vignal, Philippe; Collier, N.; Dalcin, Lisandro; Brown, D.L.; Calo, V.M.

    2016-01-01

    We introduce a provably energy-stable time-integration method for general classes of phase-field models with polynomial potentials. We demonstrate how Taylor series expansions of the nonlinear terms present in the partial differential equations of these models can lead to expressions that guarantee energy-stability implicitly, which are second-order accurate in time. The spatial discretization relies on a mixed finite element formulation and isogeometric analysis. We also propose an adaptive time-stepping discretization that relies on a first-order backward approximation to give an error-estimator. This error estimator is accurate, robust, and does not require the computation of extra solutions to estimate the error. This methodology can be applied to any second-order accurate time-integration scheme. We present numerical examples in two and three spatial dimensions, which confirm the stability and robustness of the method. The implementation of the numerical schemes is done in PetIGA, a high-performance isogeometric analysis framework.

  19. A space-time mixed galerkin marching-on-in-time scheme for the time-domain combined field integral equation

    KAUST Repository

    Beghein, Yves

    2013-03-01

    The time domain combined field integral equation (TD-CFIE), which is constructed from a weighted sum of the time domain electric and magnetic field integral equations (TD-EFIE and TD-MFIE) for analyzing transient scattering from closed perfect electrically conducting bodies, is free from spurious resonances. The standard marching-on-in-time technique for discretizing the TD-CFIE uses Galerkin and collocation schemes in space and time, respectively. Unfortunately, the standard scheme is theoretically not well understood: stability and convergence have been proven for only one class of space-time Galerkin discretizations. Moreover, existing discretization schemes are nonconforming, i.e., the TD-MFIE contribution is tested with divergence conforming functions instead of curl conforming functions. We therefore introduce a novel space-time mixed Galerkin discretization for the TD-CFIE. A family of temporal basis and testing functions with arbitrary order is introduced. It is explained how the corresponding interactions can be computed efficiently by existing collocation-in-time codes. The spatial mixed discretization is made fully conforming and consistent by leveraging both Rao-Wilton-Glisson and Buffa-Christiansen basis functions and by applying the appropriate bi-orthogonalization procedures. The combination of both techniques is essential when high accuracy over a broad frequency band is required. © 2012 IEEE.

  20. Detection of anatomical changes in lung cancer patients with 2D time-integrated, 2D time-resolved and 3D time-integrated portal dosimetry: a simulation study

    Science.gov (United States)

    Wolfs, Cecile J. A.; Brás, Mariana G.; Schyns, Lotte E. J. R.; Nijsten, Sebastiaan M. J. J. G.; van Elmpt, Wouter; Scheib, Stefan G.; Baltes, Christof; Podesta, Mark; Verhaegen, Frank

    2017-08-01

    The aim of this work is to assess the performance of 2D time-integrated (2D-TI), 2D time-resolved (2D-TR) and 3D time-integrated (3D-TI) portal dosimetry in detecting dose discrepancies between the planned and (simulated) delivered dose caused by simulated changes in the anatomy of lung cancer patients. For six lung cancer patients, tumor shift, tumor regression and pleural effusion are simulated by modifying their CT images. Based on the modified CT images, time-integrated (TI) and time-resolved (TR) portal dose images (PDIs) are simulated and 3D-TI doses are calculated. The modified and original PDIs and 3D doses are compared by a gamma analysis with various gamma criteria. Furthermore, the difference in the D 95% (ΔD 95%) of the GTV is calculated and used as a gold standard. The correlation between the gamma fail rate and the ΔD 95% is investigated, as well as the sensitivity and specificity of all combinations of portal dosimetry method, gamma criteria and gamma fail rate threshold. On the individual patient level, there is a correlation between the gamma fail rate and the ΔD 95%, which cannot be found at the group level. The sensitivity and specificity analysis showed that there is not one combination of portal dosimetry method, gamma criteria and gamma fail rate threshold that can detect all simulated anatomical changes. This work shows that it will be more beneficial to relate portal dosimetry and DVH analysis on the patient level, rather than trying to quantify a relationship for a group of patients. With regard to optimizing sensitivity and specificity, different combinations of portal dosimetry method, gamma criteria and gamma fail rate should be used to optimally detect certain types of anatomical changes.

  1. Detection of anatomical changes in lung cancer patients with 2D time-integrated, 2D time-resolved and 3D time-integrated portal dosimetry: a simulation study.

    Science.gov (United States)

    Wolfs, Cecile J A; Brás, Mariana G; Schyns, Lotte E J R; Nijsten, Sebastiaan M J J G; van Elmpt, Wouter; Scheib, Stefan G; Baltes, Christof; Podesta, Mark; Verhaegen, Frank

    2017-07-12

    The aim of this work is to assess the performance of 2D time-integrated (2D-TI), 2D time-resolved (2D-TR) and 3D time-integrated (3D-TI) portal dosimetry in detecting dose discrepancies between the planned and (simulated) delivered dose caused by simulated changes in the anatomy of lung cancer patients. For six lung cancer patients, tumor shift, tumor regression and pleural effusion are simulated by modifying their CT images. Based on the modified CT images, time-integrated (TI) and time-resolved (TR) portal dose images (PDIs) are simulated and 3D-TI doses are calculated. The modified and original PDIs and 3D doses are compared by a gamma analysis with various gamma criteria. Furthermore, the difference in the D 95% (ΔD 95%) of the GTV is calculated and used as a gold standard. The correlation between the gamma fail rate and the ΔD 95% is investigated, as well as the sensitivity and specificity of all combinations of portal dosimetry method, gamma criteria and gamma fail rate threshold. On the individual patient level, there is a correlation between the gamma fail rate and the ΔD 95%, which cannot be found at the group level. The sensitivity and specificity analysis showed that there is not one combination of portal dosimetry method, gamma criteria and gamma fail rate threshold that can detect all simulated anatomical changes. This work shows that it will be more beneficial to relate portal dosimetry and DVH analysis on the patient level, rather than trying to quantify a relationship for a group of patients. With regard to optimizing sensitivity and specificity, different combinations of portal dosimetry method, gamma criteria and gamma fail rate should be used to optimally detect certain types of anatomical changes.

  2. Real-time hybrid simulation using the convolution integral method

    International Nuclear Information System (INIS)

    Kim, Sung Jig; Christenson, Richard E; Wojtkiewicz, Steven F; Johnson, Erik A

    2011-01-01

    This paper proposes a real-time hybrid simulation method that will allow complex systems to be tested within the hybrid test framework by employing the convolution integral (CI) method. The proposed CI method is potentially transformative for real-time hybrid simulation. The CI method can allow real-time hybrid simulation to be conducted regardless of the size and complexity of the numerical model and for numerical stability to be ensured in the presence of high frequency responses in the simulation. This paper presents the general theory behind the proposed CI method and provides experimental verification of the proposed method by comparing the CI method to the current integration time-stepping (ITS) method. Real-time hybrid simulation is conducted in the Advanced Hazard Mitigation Laboratory at the University of Connecticut. A seismically excited two-story shear frame building with a magneto-rheological (MR) fluid damper is selected as the test structure to experimentally validate the proposed method. The building structure is numerically modeled and simulated, while the MR damper is physically tested. Real-time hybrid simulation using the proposed CI method is shown to provide accurate results
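
    The core of the CI approach is that the numerical substructure's response is obtained by convolving a pre-computed impulse response with the measured interface force history rather than by time-stepping the full model during the test. A bare-bones discrete version of that convolution is sketched below; the names and the rectangle-rule quadrature are illustrative and not the paper's implementation.

    ```python
    import numpy as np

    def convolution_response(h_imp, f_hist, dt):
        """Response of the numerical substructure at the current step as a discrete
        Duhamel (convolution) sum: u at step n ~ dt * sum_k h_imp[n-1-k] * f_hist[k],
        where h_imp is the pre-computed unit-impulse response sampled at dt."""
        n = len(f_hist)
        kernel = np.asarray(h_imp[:n], float)[::-1]   # h[n-1], ..., h[0]
        return dt * float(np.dot(kernel, np.asarray(f_hist, float)))
    ```

    At each control step of the hybrid test only one pass over the stored force history is needed, which is what keeps the scheme independent of the size of the underlying numerical model.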

  3. A point implicit time integration technique for slow transient flow problems

    Energy Technology Data Exchange (ETDEWEB)

    Kadioglu, Samet Y., E-mail: kadioglu@yildiz.edu.tr [Department of Mathematical Engineering, Yildiz Technical University, 34210 Davutpasa-Esenler, Istanbul (Turkey); Berry, Ray A., E-mail: ray.berry@inl.gov [Idaho National Laboratory, P.O. Box 1625, MS 3840, Idaho Falls, ID 83415 (United States); Martineau, Richard C. [Idaho National Laboratory, P.O. Box 1625, MS 3840, Idaho Falls, ID 83415 (United States)

    2015-05-15

    Highlights: • This new method does not require implicit iteration; instead it time advances the solutions in a similar spirit to explicit methods. • It is unconditionally stable, as a fully implicit method would be. • It exhibits the simplicity of implementation of an explicit method. • It is specifically designed for slow transient flow problems of long duration such as can occur inside nuclear reactor coolant systems. • Our findings indicate the new method can integrate slow transient problems very efficiently; and its implementation is very robust. - Abstract: We introduce a point implicit time integration technique for slow transient flow problems. The method treats the solution variables of interest (that can be located at cell centers, cell edges, or cell nodes) implicitly and the rest of the information related to same or other variables are handled explicitly. The method does not require implicit iteration; instead it time advances the solutions in a similar spirit to explicit methods, except it involves a few additional function(s) evaluation steps. Moreover, the method is unconditionally stable, as a fully implicit method would be. This new approach exhibits the simplicity of implementation of explicit methods and the stability of implicit methods. It is specifically designed for slow transient flow problems of long duration wherein one would like to perform time integrations with very large time steps. Because the method can be time inaccurate for fast transient problems, particularly with larger time steps, an appropriate solution strategy for a problem that evolves from a fast to a slow transient would be to integrate the fast transient with an explicit or semi-implicit technique and then switch to this point implicit method as soon as the time variation slows sufficiently. We have solved several test problems that result from scalar or systems of flow equations. Our findings indicate the new method can integrate slow transient problems very

  4. A point implicit time integration technique for slow transient flow problems

    International Nuclear Information System (INIS)

    Kadioglu, Samet Y.; Berry, Ray A.; Martineau, Richard C.

    2015-01-01

    Highlights: • This new method does not require implicit iteration; instead it time advances the solutions in a similar spirit to explicit methods. • It is unconditionally stable, as a fully implicit method would be. • It exhibits the simplicity of implementation of an explicit method. • It is specifically designed for slow transient flow problems of long duration such as can occur inside nuclear reactor coolant systems. • Our findings indicate the new method can integrate slow transient problems very efficiently; and its implementation is very robust. - Abstract: We introduce a point implicit time integration technique for slow transient flow problems. The method treats the solution variables of interest (that can be located at cell centers, cell edges, or cell nodes) implicitly and the rest of the information related to same or other variables are handled explicitly. The method does not require implicit iteration; instead it time advances the solutions in a similar spirit to explicit methods, except it involves a few additional function(s) evaluation steps. Moreover, the method is unconditionally stable, as a fully implicit method would be. This new approach exhibits the simplicity of implementation of explicit methods and the stability of implicit methods. It is specifically designed for slow transient flow problems of long duration wherein one would like to perform time integrations with very large time steps. Because the method can be time inaccurate for fast transient problems, particularly with larger time steps, an appropriate solution strategy for a problem that evolves from a fast to a slow transient would be to integrate the fast transient with an explicit or semi-implicit technique and then switch to this point implicit method as soon as the time variation slows sufficiently. We have solved several test problems that result from scalar or systems of flow equations. Our findings indicate the new method can integrate slow transient problems very
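
    As a toy illustration of the point-implicit idea (not the scheme the paper applies to the flow equations), the update below treats a stiff linear sink term of a scalar equation at the new time level while the source is evaluated explicitly, so no iteration is required and the step remains stable for arbitrarily large dt.

    ```python
    def point_implicit_step(u, dt, a, source):
        """Point-implicit (locally backward-Euler) update for du/dt = -a*u + source:
        the stiff sink term -a*u is taken at the new time level, the source term
        explicitly, so the update is a single algebraic formula per unknown."""
        return (u + dt * source) / (1.0 + dt * a)
    ```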

  5. The early maximum likelihood estimation model of audiovisual integration in speech perception

    DEFF Research Database (Denmark)

    Andersen, Tobias

    2015-01-01

    Speech perception is facilitated by seeing the articulatory mouth movements of the talker. This is due to perceptual audiovisual integration, which also causes the McGurk−MacDonald illusion, and for which a comprehensive computational account is still lacking. Decades of research have largely... integration to speech perception along with three model variations. In early MLE, integration is based on a continuous internal representation before categorization, which can make the model more parsimonious by imposing constraints that reflect experimental designs. The study also shows that cross-validation can evaluate models of audiovisual integration based on typical data sets taking both goodness-of-fit and model flexibility into account. All models were tested on a published data set previously used for testing the FLMP. Cross-validation favored the early MLE while more conventional error measures...
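
    The "maximum likelihood estimation" in such models builds on the standard MLE cue-combination rule, sketched below for two Gaussian cues. The early-MLE variants add categorization on top of a fused continuous representation, which this toy snippet does not attempt to reproduce; the names are placeholders.

    ```python
    def mle_fuse(x_a, var_a, x_v, var_v):
        """Maximum-likelihood fusion of an auditory and a visual estimate:
        inverse-variance weighted average, with reduced variance of the result."""
        w_a = 1.0 / var_a
        w_v = 1.0 / var_v
        fused = (w_a * x_a + w_v * x_v) / (w_a + w_v)
        fused_var = 1.0 / (w_a + w_v)
        return fused, fused_var
    ```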

  6. Design of time-pulse coded optoelectronic neuronal elements for nonlinear transformation and integration

    Science.gov (United States)

    Krasilenko, Vladimir G.; Nikolsky, Alexander I.; Lazarev, Alexander A.; Lazareva, Maria V.

    2008-03-01

    In the paper, the relevance of neurophysiologically motivated neuron arrays with flexibly programmable functions and operations, with the possibility to select the required accuracy and type of nonlinear transformation and learning, is shown. We consider neuron designs and simulation results for multichannel spatio-temporal algebraic accumulation and integration of optical signals. Advantages for nonlinear transformation and summation-integration are shown. The offered circuits are simple and can have intellectual properties such as learning and adaptation. The integrator-neuron is based on CMOS current mirrors and comparators. The performance: consumable power - 100...500 μW, signal period - 0.1...1 ms, input optical signal power - 0.2...20 μW, time delays - less than 1 μs, the number of optical signals - 2...10, integration time - 10...100 signal periods, accuracy or integration error - about 1%. Various modifications of the neuron-integrators with improved performance and for different applications are considered in the paper.

  7. Time series analysis of the developed financial markets' integration using visibility graphs

    Science.gov (United States)

    Zhuang, Enyu; Small, Michael; Feng, Gang

    2014-09-01

    A time series representing the developed financial markets' segmentation from 1973 to 2012 is studied. The time series reveals an obvious market integration trend. To further uncover the features of this time series, we divide it into seven windows and generate seven visibility graphs. The measuring capabilities of the visibility graphs provide means to quantitatively analyze the original time series. It is found that the important historical incidents that influenced market integration coincide with variations in the measured graphical node degree. Through the measure of neighborhood span, the frequencies of the historical incidents are disclosed. Moreover, it is also found that large "cycles" and significant noise in the time series are linked to large and small communities in the generated visibility graphs. For large cycles, how historical incidents significantly affected market integration is distinguished by density and compactness of the corresponding communities.
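
    The natural visibility graph used here maps each sample of the series to a node and links two samples when the straight line between them clears every intermediate sample. A brute-force implementation is sketched below (adequate for the seven windows mentioned, though faster algorithms exist); the helper names are placeholders.

    ```python
    import numpy as np

    def visibility_graph(y):
        """Natural visibility graph of a time series: nodes are samples (t, y_t);
        i and j are linked if the straight line between them stays strictly above
        every intermediate sample."""
        n = len(y)
        edges = set()
        for i in range(n):
            for j in range(i + 1, n):
                visible = all(
                    y[k] < y[i] + (y[j] - y[i]) * (k - i) / (j - i)
                    for k in range(i + 1, j)
                )
                if visible:
                    edges.add((i, j))
        return edges

    def node_degrees(edges, n):
        """Degree of each node, the graph measure compared across windows."""
        deg = np.zeros(n, int)
        for i, j in edges:
            deg[i] += 1
            deg[j] += 1
        return deg
    ```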

  8. General new time formalism in the path integral

    International Nuclear Information System (INIS)

    Pak, N.K.; Sokmen, I.

    1983-08-01

    We describe a general method of applying point canonical transformations to the path integral followed by the corresponding new time transformations aimed at reducing an arbitrary one-dimensional problem into an exactly solvable form. Our result is independent of operator ordering ambiguities by construction. (author)

  9. Time Varying Market Integration and Expected Returns in Emerging Markets

    NARCIS (Netherlands)

    de Jong, F.C.J.M.; de Roon, F.A.

    2001-01-01

    We use a simple model in which the expected returns in emerging markets depend on their systematic risk as measured by their beta relative to the world portfolio as well as on the level of integration in that market.The level of integration is a time-varying variable that depends on the market value

  10. Maximum permissible voltage of YBCO coated conductors

    Energy Technology Data Exchange (ETDEWEB)

    Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)

    2014-06-15

    Highlights: • We examine three kinds of tapes' maximum permissible voltage. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I_c degradations under repetitive quenching where tapes reach the maximum permissible voltage. • The relationship between maximum permissible voltage and resistance, temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short circuit currents in an electrical power system. One of the most important things in developing an SFCL is to find out the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (I_c) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I_c degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results of the samples, the whole length of CCs used in the design of an SFCL can be determined.

  11. Minimal length, Friedmann equations and maximum density

    Energy Technology Data Exchange (ETDEWEB)

    Awad, Adel [Center for Theoretical Physics, British University of Egypt,Sherouk City 11837, P.O. Box 43 (Egypt); Department of Physics, Faculty of Science, Ain Shams University,Cairo, 11566 (Egypt); Ali, Ahmed Farag [Centre for Fundamental Physics, Zewail City of Science and Technology,Sheikh Zayed, 12588, Giza (Egypt); Department of Physics, Faculty of Science, Benha University,Benha, 13518 (Egypt)

    2014-06-16

    Inspired by Jacobson’s thermodynamic approach, Cai et al. have shown the emergence of Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation http://dx.doi.org/10.1103/PhysRevD.75.084003 of the Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure p(ρ,a) leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature k. As an example we study the evolution of the equation of state p=ωρ through its phase-space diagram to show the existence of a maximum energy which is reachable in a finite time.

  12. Development of an integrated four-channel fast avalanche-photodiode detector system with nanosecond time resolution

    Science.gov (United States)

    Li, Zhenjie; Li, Qiuju; Chang, Jinfan; Ma, Yichao; Liu, Peng; Wang, Zheng; Hu, Michael Y.; Zhao, Jiyong; Alp, E. E.; Xu, Wei; Tao, Ye; Wu, Chaoqun; Zhou, Yangfan

    2017-10-01

    A four-channel nanosecond time-resolved avalanche-photodiode (APD) detector system has been developed at the Beijing Synchrotron Radiation Facility. It uses a single module for signal processing and readout. This integrated system provides better reliability and flexibility for custom improvement. The detector system consists of three parts: (i) four APD sensors, (ii) four fast preamplifiers and (iii) time-digital-converter (TDC) readout electronics. The C30703FH silicon APD chips fabricated by Excelitas are used as the sensors of the detectors. Each has an effective light-sensitive area of 10 × 10 mm² and an absorption layer thickness of 110 μm. A fast preamplifier with a gain of 59 dB and a bandwidth of 2 GHz is designed to read out the weak signal from the C30703FH APD. The TDC is realized by a Spartan-6 field-programmable-gate-array (FPGA) using a multiphase method with a resolution of 1 ns. The arrival times of all scattering events between two start triggers can be recorded by the TDC. The detector has been used for nuclear resonant scattering studies at both the Advanced Photon Source and the Beijing Synchrotron Radiation Facility. For an X-ray energy of 14.4 keV, the time resolution (full width at half maximum, FWHM) of the detector (APD sensor + fast amplifier) is 0.86 ns, and the whole detector system (APD sensors + fast amplifiers + TDC readout electronics) achieves a time resolution of 1.4 ns.

  13. Delayed Consensus Problem for Single and Double Integrator Systems

    Directory of Open Access Journals (Sweden)

    Martín Velasco-Villa

    2015-01-01

    Full Text Available This work deals with the analysis of the consensus problem for networks of agents constituted by single and double integrator systems. It is assumed that the communication among agents is affected by a constant time-delay. Numerous previous analyses of the problem show that the maximum communication time-delay that can be introduced to the network without affecting the consensus of the group of agents depends on the considered topology. In this work, a control scheme is proposed that is based on the estimation of future states of the agents and that allows increasing the magnitude of a possible time-delay affecting the communication channels. It is shown that the proposed delay compensation strategy is independent of the network topology, in the sense that the maximum allowable time-delay that can be supported by the network depends on a design parameter and not on the maximum eigenvalue of the corresponding Laplacian matrix. It is formally proven that, under the proposed prediction scheme, the consensus of the group can be achieved while improving the maximum time-delay bounds previously reported in the literature. Numerical simulations show the effectiveness of the proposed solution.
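
    A toy version of the idea, compensating the communication delay by letting each agent propagate the last received neighbor state forward with that neighbor's last input, is sketched below for single-integrator agents and a forward-Euler simulation. The adjacency matrix A, the predictor and all parameters are illustrative and are not the control law analyzed in the paper.

    ```python
    import numpy as np

    def consensus_sim(x0, A, dt, steps, delay_steps, predict=True):
        """Single-integrator consensus x_i' = sum_j a_ij (x_j - x_i) when neighbor
        states arrive delay_steps samples late.  With predict=True each agent first
        propagates the stale neighbor state forward with the stale input (a toy
        predictor); with predict=False it uses the stale state directly."""
        x_hist = [np.asarray(x0, float)] * (delay_steps + 1)
        u_hist = [np.zeros(len(x0))] * (delay_steps + 1)
        deg = A.sum(axis=1)
        for _ in range(steps):
            x = x_hist[-1]
            x_old, u_old = x_hist[-1 - delay_steps], u_hist[-1 - delay_steps]
            x_seen = x_old + delay_steps * dt * u_old if predict else x_old
            u = A @ x_seen - deg * x          # each agent uses its own fresh state
            x_hist.append(x + dt * u)
            u_hist.append(u)
        return np.array(x_hist)
    ```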

  14. Distributed finite-time containment control for double-integrator multiagent systems.

    Science.gov (United States)

    Wang, Xiangyu; Li, Shihua; Shi, Peng

    2014-09-01

    In this paper, the distributed finite-time containment control problem for double-integrator multiagent systems with multiple leaders and external disturbances is discussed. In the presence of multiple dynamic leaders, by utilizing the homogeneous control technique, a distributed finite-time observer is developed for the followers to estimate the weighted average of the leaders' velocities at first. Then, based on the estimates and the generalized adding a power integrator approach, distributed finite-time containment control algorithms are designed to guarantee that the states of the followers converge to the dynamic convex hull spanned by those of the leaders in finite time. Moreover, as a special case of multiple dynamic leaders with zero velocities, the proposed containment control algorithms also work for the case of multiple stationary leaders without using the distributed observer. Simulations demonstrate the effectiveness of the proposed control algorithms.

  15. Integrated optical delay lines for time-division multiplexers

    NARCIS (Netherlands)

    Stopinski, S.T.; Malinowski, M.; Piramidowicz, R.; Kleijn, E.; Smit, M.K.; Leijtens, X.J.M.

    2013-01-01

    In this paper, we present a study of integrated optical delay lines (DLs) for application in optical time-division multiplexers. The investigated DLs are formed by spirally folded waveguides. The components were designed in a generic approach and fabricated in multi-project wafer runs on an

  16. Non-integrability of time-dependent spherically symmetric Yang-Mills equations

    International Nuclear Information System (INIS)

    Matinyan, S.G.; Prokhorenko, E.V.; Savvidy, G.K.

    1986-01-01

    The integrability of time-dependent spherically symmetric Yang-Mills equations is studied using the Fermi-Pasta-Ulam method. The phase space of this system is shown to have no quasi-periodic motion specific for integrable systems. In particular, the well-known Wu-Yang static solution is unstable, so its vicinity in phase is the stochasticity region

  17. Integration and timing of basic and clinical sciences education.

    Science.gov (United States)

    Bandiera, Glen; Boucher, Andree; Neville, Alan; Kuper, Ayelet; Hodges, Brian

    2013-05-01

    Medical education has traditionally been compartmentalized into basic and clinical sciences, with the latter being viewed as the skillful application of the former. Over time, the relevance of basic sciences has become defined by their role in supporting clinical problem solving rather than being, of themselves, a defining knowledge base of physicians. As part of the national Future of Medical Education in Canada (FMEC MD) project, a comprehensive empirical environmental scan identified the timing and integration of basic sciences as a key pressing issue for medical education. Using the literature review, key informant interviews, stakeholder meetings, and subsequent consultation forums from the FMEC project, this paper details the empirical basis for focusing on the role of basic science, the evidentiary foundations for current practices, and the implications for medical education. Despite a dearth of definitive relevant studies, opinions about how best to integrate the sciences remain strong. Resource allocation, political power, educational philosophy, and the shift from a knowledge-based to a problem-solving profession all influence the debate. There was little disagreement that both sciences are important, that many traditional models emphasized deep understanding of limited basic science disciplines at the expense of other relevant content such as social sciences, or that teaching the sciences contemporaneously rather than sequentially has theoretical and practical merit. Innovations in integrated curriculum design have occurred internationally. Less clear are the appropriate balance of the sciences, the best integration model, and solutions to the political and practical challenges of integrated curricula. New curricula tend to emphasize integration, development of more diverse physician competencies, and preparation of physicians to adapt to evolving technology and patients' expectations. Refocusing the basic/clinical dichotomy to a foundational

  18. Evaluation of a photovoltaic energy mechatronics system with a built-in quadratic maximum power point tracking algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Chao, R.M.; Ko, S.H.; Lin, I.H. [Department of Systems and Naval Mechatronics Engineering, National Cheng Kung University, Tainan, Taiwan 701 (China); Pai, F.S. [Department of Electronic Engineering, National University of Tainan (China); Chang, C.C. [Department of Environment and Energy, National University of Tainan (China)

    2009-12-15

    The historically high price of crude oil is stimulating research into solar (green) energy as an alternative energy source. In general, applications with large solar energy output require a maximum power point tracking (MPPT) algorithm to optimize the power generated by the photovoltaic effect. This work aims to provide a stand-alone solution for solar energy applications by integrating a DC/DC buck converter with a newly developed quadratic MPPT algorithm along with its appropriate software and hardware. The quadratic MPPT method utilizes three previously used duty cycles with their corresponding power outputs. It approaches the maximum value by using a second order polynomial formula, which converges faster than the existing MPPT algorithm. The hardware implementation takes advantage of the real-time controller system from National Instruments, USA. Experimental results have shown that the proposed solar mechatronics system can correctly and effectively track the maximum power point without any difficulties. (author)
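
    A minimal version of the quadratic step, fitting a parabola through the last three (duty cycle, power) samples and jumping to its vertex, might look like the following. The fallback behaviour, duty-cycle limits and names are assumptions for the sketch, not the paper's controller.

    ```python
    import numpy as np

    def quadratic_mppt_step(duties, powers, d_min=0.05, d_max=0.95):
        """Next duty cycle from the vertex of the parabola P(d) = a d^2 + b d + c
        fitted through the last three (duty, power) samples."""
        a, b, _ = np.polyfit(duties, powers, 2)
        if a >= 0 or abs(a) < 1e-12:          # degenerate or upward parabola: fall back
            return float(duties[int(np.argmax(powers))])
        return float(np.clip(-b / (2.0 * a), d_min, d_max))
    ```

    In use, the converter would apply the returned duty cycle, measure the new power, and repeat with the three most recent samples, which is what gives the quadratic scheme its fast convergence toward the power peak.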

  19. A Digitally Programmable Differential Integrator with Enlarged Time Constant

    Directory of Open Access Journals (Sweden)

    S. K. Debroy

    1994-12-01

    Full Text Available A new Operational Amplifier-RC (OA-RC) integrator network is described. The novelties of the design are the use of a single grounded capacitor, ideal integration function realization with dual-input capability, and design flexibility for an extremely large time constant involving an enlargement factor (K) obtained using a product of resistor ratios. The digital control of K through a programmable resistor array (PRA) controlled by a microprocessor has also been implemented. The effect of the OA poles has been analyzed, which indicates degradation of the integrator Q at higher frequencies. An appropriate Q-compensation design scheme exhibiting a 1:|A|² order of Q-improvement has been proposed, with supporting experimental observations.

  20. Integrated vendor-buyer inventory models with inflation and time value of money in controllable lead time

    Directory of Open Access Journals (Sweden)

    Prashant Jindal

    2016-01-01

    Full Text Available In the critical global economic scenario, inflation plays a vital role in deciding the optimal pricing of goods in any business entity. This article presents two single-vendor single-buyer integrated supply chain inventory models with inflation and time value of money. Shortage is allowed during the lead time and it is partially backlogged. Lead time is controllable and can be reduced using crashing cost. In the first model, we consider that the lead-time demand follows a normal distribution, and in the second model, it is considered distribution-free. For both cases, our objective is to minimize the integrated system cost by simultaneously optimizing the order quantity, safety factor, lead time and number of lots. The discounted cash flow and classical optimization technique are used to derive the optimal solution for both cases. Numerical examples including the sensitivity analysis of system parameters are provided to validate the results of the supply chain models.

  1. FLRW cosmology in Weyl-integrable space-time

    Energy Technology Data Exchange (ETDEWEB)

    Gannouji, Radouane [Department of Physics, Faculty of Science, Tokyo University of Science, 1–3, Kagurazaka, Shinjuku-ku, Tokyo 162-8601 (Japan); Nandan, Hemwati [Department of Physics, Gurukula Kangri Vishwavidayalaya, Haridwar 249404 (India); Dadhich, Naresh, E-mail: gannouji@rs.kagu.tus.ac.jp, E-mail: hntheory@yahoo.co.in, E-mail: nkd@iucaa.ernet.in [IUCAA, Post Bag 4, Ganeshkhind, Pune 411 007 (India)

    2011-11-01

    We investigate the Weyl space-time extension of general relativity (GR) for studying the FLRW cosmology through focusing and defocusing of the geodesic congruences. We have derived the equations of evolution for expansion, shear and rotation in the Weyl space-time. In particular, we consider the Starobinsky modification, f(R) = R + βR² − 2Λ, of gravity in the Einstein-Palatini formalism, which turns out to reduce to the Weyl integrable space-time (WIST) with the Weyl vector being a gradient. The modified Raychaudhuri equation takes the form of the Hill-type equation which is then analysed to study the formation of the caustics. In this model, it is possible to have a Big Bang singularity free cyclic Universe but unfortunately the periodicity turns out to be extremely short.

  2. Automatic maximum entropy spectral reconstruction in NMR

    International Nuclear Information System (INIS)

    Mobli, Mehdi; Maciejewski, Mark W.; Gryk, Michael R.; Hoch, Jeffrey C.

    2007-01-01

    Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system

  3. National Ignition Facility sub-system design requirements integrated timing system SSDR 1.5.3

    International Nuclear Information System (INIS)

    Wiedwald, J.; Van Aersau, P.; Bliss, E.

    1996-01-01

    This System Design Requirement document establishes the performance, design, development, and test requirements for the Integrated Timing System, WBS 1.5.3 which is part of the NIF Integrated Computer Control System (ICCS). The Integrated Timing System provides all temporally-critical hardware triggers to components and equipment in other NIF systems

  4. An integrated portable hand-held analyser for real-time isothermal nucleic acid amplification

    International Nuclear Information System (INIS)

    Smith, Matthew C.; Steimle, George; Ivanov, Stan; Holly, Mark; Fries, David P.

    2007-01-01

    A compact hand-held heated fluorometric instrument for performing real-time isothermal nucleic acid amplification and detection is described. The optoelectronic instrument combines a Printed Circuit Board/Micro Electro Mechanical Systems (PCB/MEMS) reaction detection/chamber containing an integrated resistive heater with an attached miniature LED light source and photo-detector and a disposable glass waveguide capillary to enable a mini-fluorometer. The fluorometer is fabricated and assembled in planar geometry, rolled into a tubular format and packaged with custom control electronics to form the hand-held reactor. Positive or negative results for each reaction are displayed to the user using an LED interface. Reaction data are stored in FLASH memory for retrieval via an in-built USB connection. Operating on one disposable 3 V lithium battery, more than twelve 60 min reactions can be performed. Maximum dimensions of the system are 150 mm (h) x 48 mm (d) x 40 mm (w), and the total instrument weight (with battery) is 140 g. The system produces comparable results to laboratory instrumentation when performing a real-time nucleic acid sequence-based amplification (NASBA) reaction, and also displayed comparable precision, accuracy and resolution to laboratory-based real-time nucleic acid amplification instrumentation. A good linear response (R² = 0.948) to fluorescein gradients ranging from 0.5 to 10 μM was also obtained from the instrument, indicating that it may be utilized for other fluorometric assays. This instrument enables an inexpensive, compact approach to in-field genetic screening, providing results comparable to laboratory equipment with rapid user feedback as to the status of the reaction

  5. An integrated portable hand-held analyser for real-time isothermal nucleic acid amplification

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Matthew C. [College of Marine Science, University of South Florida, St Petersburg, FL (United States)], E-mail: msmith@marine.usf.edu; Steimle, George; Ivanov, Stan; Holly, Mark; Fries, David P. [College of Marine Science, University of South Florida, St Petersburg, FL (United States)

    2007-08-29

    A compact hand-held heated fluorometric instrument for performing real-time isothermal nucleic acid amplification and detection is described. The optoelectronic instrument combines a Printed Circuit Board/Micro Electro Mechanical Systems (PCB/MEMS) reaction detection/chamber containing an integrated resistive heater with an attached miniature LED light source and photo-detector and a disposable glass waveguide capillary to enable a mini-fluorometer. The fluorometer is fabricated and assembled in planar geometry, rolled into a tubular format and packaged with custom control electronics to form the hand-held reactor. Positive or negative results for each reaction are displayed to the user using an LED interface. Reaction data are stored in FLASH memory for retrieval via an in-built USB connection. Operating on one disposable 3 V lithium battery, more than twelve 60 min reactions can be performed. Maximum dimensions of the system are 150 mm (h) x 48 mm (d) x 40 mm (w), and the total instrument weight (with battery) is 140 g. The system produces comparable results to laboratory instrumentation when performing a real-time nucleic acid sequence-based amplification (NASBA) reaction, and also displayed comparable precision, accuracy and resolution to laboratory-based real-time nucleic acid amplification instrumentation. A good linear response (R² = 0.948) to fluorescein gradients ranging from 0.5 to 10 μM was also obtained from the instrument, indicating that it may be utilized for other fluorometric assays. This instrument enables an inexpensive, compact approach to in-field genetic screening, providing results comparable to laboratory equipment with rapid user feedback as to the status of the reaction.

  6. Probable Maximum Earthquake Magnitudes for the Cascadia Subduction

    Science.gov (United States)

    Rong, Y.; Jackson, D. D.; Magistrale, H.; Goldfinger, C.

    2013-12-01

    The concept of maximum earthquake magnitude (mx) is widely used in seismic hazard and risk analysis. However, absolute mx lacks a precise definition and cannot be determined from a finite earthquake history. The surprising magnitudes of the 2004 Sumatra and the 2011 Tohoku earthquakes showed that most methods for estimating mx underestimate the true maximum if it exists. Thus, we introduced the alternate concept of mp(T), probable maximum magnitude within a time interval T. The mp(T) can be solved using theoretical magnitude-frequency distributions such as Tapered Gutenberg-Richter (TGR) distribution. The two TGR parameters, β-value (which equals 2/3 b-value in the GR distribution) and corner magnitude (mc), can be obtained by applying maximum likelihood method to earthquake catalogs with additional constraint from tectonic moment rate. Here, we integrate the paleoseismic data in the Cascadia subduction zone to estimate mp. The Cascadia subduction zone has been seismically quiescent since at least 1900. Fortunately, turbidite studies have unearthed a 10,000 year record of great earthquakes along the subduction zone. We thoroughly investigate the earthquake magnitude-frequency distribution of the region by combining instrumental and paleoseismic data, and using the tectonic moment rate information. To use the paleoseismic data, we first estimate event magnitudes, which we achieve by using the time interval between events, rupture extent of the events, and turbidite thickness. We estimate three sets of TGR parameters: for the first two sets, we consider a geographically large Cascadia region that includes the subduction zone, and the Explorer, Juan de Fuca, and Gorda plates; for the third set, we consider a narrow geographic region straddling the subduction zone. In the first set, the β-value is derived using the GCMT catalog. In the second and third sets, the β-value is derived using both the GCMT and paleoseismic data. Next, we calculate the corresponding mc
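
    To make the TGR machinery concrete, the sketch below encodes the tapered Gutenberg-Richter survival function and one simple reading of a probable maximum magnitude: the magnitude whose expected exceedance count over T years equals one. The β value, corner magnitude, rate and the exceedance-count criterion are placeholders and are not the study's fitted values or its exact definition of mp(T).

    ```python
    import numpy as np
    from scipy.optimize import brentq

    def moment(m):
        """Seismic moment [N*m] from moment magnitude m."""
        return 10.0 ** (1.5 * m + 9.05)

    def tgr_survival(m, m_t, beta, m_corner):
        """Tapered Gutenberg-Richter fraction of events with magnitude >= m,
        for a catalog complete above the threshold magnitude m_t."""
        M, Mt, Mc = moment(m), moment(m_t), moment(m_corner)
        return (Mt / M) ** beta * np.exp((Mt - M) / Mc)

    def probable_max_magnitude(rate, T, m_t, beta, m_corner, m_hi=10.5):
        """Magnitude whose expected exceedance count over T years equals one
        (a simple operationalization; assumes rate * T > 1 so a root exists)."""
        f = lambda m: rate * T * tgr_survival(m, m_t, beta, m_corner) - 1.0
        return brentq(f, m_t, m_hi)
    ```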

  7. Timing of Formal Phase Safety Reviews for Large-Scale Integrated Hazard Analysis

    Science.gov (United States)

    Massie, Michael J.; Morris, A. Terry

    2010-01-01

    Integrated hazard analysis (IHA) is a process used to identify and control unacceptable risk. As such, it does not occur in a vacuum. IHA approaches must be tailored to fit the system being analyzed. Physical, resource, organizational and temporal constraints on large-scale integrated systems impose additional direct or derived requirements on the IHA. The timing and interaction between engineering and safety organizations can provide either benefits or hindrances to the overall end product. The traditional approach for formal phase safety review timing and content, which generally works well for small- to moderate-scale systems, does not work well for very large-scale integrated systems. This paper proposes a modified approach to timing and content of formal phase safety reviews for IHA. Details of the tailoring process for IHA will describe how to avoid temporary disconnects in major milestone reviews and how to maintain a cohesive end-to-end integration story particularly for systems where the integrator inherently has little to no insight into lower level systems. The proposal has the advantage of allowing the hazard analysis development process to occur as technical data normally matures.

  8. Efficient Simulation of Compressible, Viscous Fluids using Multi-rate Time Integration

    Science.gov (United States)

    Mikida, Cory; Kloeckner, Andreas; Bodony, Daniel

    2017-11-01

    In the numerical simulation of problems of compressible, viscous fluids with single-rate time integrators, the global timestep used is limited to that of the finest mesh point or fastest physical process. This talk discusses the application of multi-rate Adams-Bashforth (MRAB) integrators to an overset mesh framework to solve compressible viscous fluid problems of varying scale with improved efficiency, with emphasis on the strategy of timescale separation and the application of the resulting numerical method to two sample problems: subsonic viscous flow over a cylinder and a viscous jet in crossflow. The results presented indicate the numerical efficacy of MRAB integrators, outline a number of outstanding code challenges, demonstrate the expected reduction in time enabled by MRAB, and emphasize the need for proper load balancing through spatial decomposition in order for parallel runs to achieve the predicted time-saving benefit. This material is based in part upon work supported by the Department of Energy, National Nuclear Security Administration, under Award Number DE-NA0002374.
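
    The abstract does not reproduce the MRAB formulas, so the toy sketch below illustrates only the underlying time-scale separation: the fast variable is sub-cycled with a small explicit step inside one large step of the slow variable, using forward Euler rather than the Adams-Bashforth scheme of the talk. The right-hand sides, step sizes and coupling are invented placeholders.

```python
# Toy multirate time stepping (time-scale separation only; not the MRAB scheme).
import numpy as np

def multirate_step(y_fast, y_slow, H, substeps,
                   f_fast=lambda yf, ys: -50.0 * (yf - np.cos(ys)),   # stiff/fast part
                   f_slow=lambda yf, ys: -ys + yf):                   # slow part
    h = H / substeps
    ys0 = y_slow                                # slow value frozen during sub-cycling
    for _ in range(substeps):                   # fast variable: many small steps
        y_fast = y_fast + h * f_fast(y_fast, ys0)
    y_slow = y_slow + H * f_slow(y_fast, ys0)   # slow variable: one large step
    return y_fast, y_slow

yf, ys = 1.0, 0.5
for _ in range(200):                            # integrate to t = 200 * 0.05 = 10
    yf, ys = multirate_step(yf, ys, H=0.05, substeps=20)
print(yf, ys)
```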

  9. 50 CFR 259.34 - Minimum and maximum deposits; maximum time to deposit.

    Science.gov (United States)

    2010-10-01

    ... B objective. A time longer than 10 years, either by original scheduling or by subsequent extension... OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE AID TO FISHERIES CAPITAL CONSTRUCTION FUND...) Minimum annual deposit. The minimum annual (based on each party's taxable year) deposit required by the...

  10. Fractal Dimension and Maximum Sunspot Number in Solar Cycle

    Directory of Open Access Journals (Sweden)

    R.-S. Kim

    2006-09-01

    Full Text Available The fractal dimension is a quantitative parameter describing the characteristics of irregular time series. In this study, we use this parameter to analyze the irregular aspects of solar activity and to predict the maximum sunspot number in the following solar cycle by examining time series of the sunspot number. For this, we considered the daily sunspot number since 1850 from SIDC (Solar Influences Data analysis Center) and then estimated cycle variation of the fractal dimension by using Higuchi's method. We examined the relationship between this fractal dimension and the maximum monthly sunspot number in each solar cycle. As a result, we found that there is a strong inverse relationship between the fractal dimension and the maximum monthly sunspot number. By using this relation we predicted the maximum sunspot number in the solar cycle from the fractal dimension of the sunspot numbers during the solar activity increasing phase. The successful prediction is proven by a good correlation (r=0.89) between the observed and predicted maximum sunspot numbers in the solar cycles.
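
    A compact implementation of Higuchi's method, the estimator named in the abstract, is sketched below; the choice of k_max and the Brownian-noise test signal are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def higuchi_fd(x, k_max=10):
    """Fractal dimension of a 1-D time series by Higuchi's method."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    log_inv_k, log_L = [], []
    for k in range(1, k_max + 1):
        Lk = []
        for m in range(k):                      # offsets m = 0 .. k-1
            n = (N - 1 - m) // k                # number of increments for this offset
            if n < 1:
                continue
            idx = m + np.arange(n + 1) * k
            length = np.sum(np.abs(np.diff(x[idx])))
            Lk.append(length * (N - 1) / (n * k * k))   # Higuchi (1988) normalisation
        log_inv_k.append(np.log(1.0 / k))
        log_L.append(np.log(np.mean(Lk)))
    slope, _ = np.polyfit(log_inv_k, log_L, 1)  # L(k) ~ k**(-D), so the slope is D
    return slope

rng = np.random.default_rng(0)
print(higuchi_fd(np.cumsum(rng.standard_normal(2000))))   # Brownian-like series, D ~ 1.5
```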

  11. Monte Carlo Maximum Likelihood Estimation for Generalized Long-Memory Time Series Models

    NARCIS (Netherlands)

    Mesters, G.; Koopman, S.J.; Ooms, M.

    2016-01-01

    An exact maximum likelihood method is developed for the estimation of parameters in a non-Gaussian nonlinear density function that depends on a latent Gaussian dynamic process with long-memory properties. Our method relies on the method of importance sampling and on a linear Gaussian approximating

  12. Integral-Value Models for Outcomes over Continuous Time

    DEFF Research Database (Denmark)

    Harvey, Charles M.; Østerdal, Lars Peter

    Models of preferences between outcomes over continuous time are important for individual, corporate, and social decision making, e.g., medical treatment, infrastructure development, and environmental regulation. This paper presents a foundation for such models. It shows that conditions on preferences between real- or vector-valued outcomes over continuous time are satisfied if and only if the preferences are represented by a value function having an integral form.

  13. The Role of Memory and Integration in Early Time Concepts.

    Science.gov (United States)

    Levin, Iris; And Others

    1984-01-01

    A total of 630 boys and girls from kindergarten to second grade were asked to compare durations that differ in beginning times with those that differ in ending times. Possible sources of children's failure to integrate beginning and end points when comparing durations were discussed. (Author/CI)

  14. Adaptive time-stepping Monte Carlo integration of Coulomb collisions

    Science.gov (United States)

    Särkimäki, K.; Hirvijoki, E.; Terävä, J.

    2018-01-01

    We report an accessible and robust tool for evaluating the effects of Coulomb collisions on a test particle in a plasma that obeys Maxwell-Jüttner statistics. The implementation is based on the Beliaev-Budker collision integral which allows both the test particle and the background plasma to be relativistic. The integration method supports adaptive time stepping, which is shown to greatly improve the computational efficiency. The Monte Carlo method is implemented for both the three-dimensional particle momentum space and the five-dimensional guiding center phase space. A detailed description is provided for both the physics and the implementation of the operator. The focus is on adaptive integration of stochastic differential equations, which is an overlooked aspect among existing Monte Carlo implementations of Coulomb collision operators. We verify that our operator converges to known analytical results and demonstrate that careless implementation of the adaptive time step can lead to severely erroneous results. The operator is provided as a self-contained Fortran 95 module and can be included into existing orbit-following tools that trace either the full Larmor motion or the guiding center dynamics. The adaptive time-stepping algorithm is expected to be useful in situations where the collision frequencies vary greatly over the course of a simulation. Examples include the slowing-down of fusion products or other fast ions, and the Dreicer generation of runaway electrons as well as the generation of fast ions or electrons with ion or electron cyclotron resonance heating.
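
    The toy sketch below shows only the general idea of adapting the step to the local collision frequency before the stochastic increment is drawn; it is not the Beliaev-Budker operator or the Fortran 95 module of the paper, and nu(v), D(v) and all numbers are placeholders.

```python
# Euler-Maruyama integration of a toy slowing-down process with an adaptive step
# chosen *before* the noise is sampled, so that nu*dt stays below a tolerance.
import numpy as np

rng = np.random.default_rng(1)

def nu(v):                # collision frequency, growing as the particle slows (placeholder)
    return 1.0 / (0.1 + v**3)

def D(v):                 # velocity-space diffusion coefficient (placeholder)
    return 0.01 * nu(v) * v**2

def advance(v, t_end, tol=0.05, dt_max=0.1):
    t = 0.0
    while t < t_end:
        dt = min(dt_max, tol / nu(v), t_end - t)     # adapt step to local collisionality
        dW = rng.standard_normal() * np.sqrt(dt)
        v = v - nu(v) * v * dt + np.sqrt(2.0 * D(v)) * dW
        v = abs(v)                                    # keep the toy speed positive
        t += dt
    return v

print(advance(v=5.0, t_end=20.0))
```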

  15. Integrated intensities in inverse time-of-flight technique

    International Nuclear Information System (INIS)

    Dorner, Bruno

    2006-01-01

    In traditional data analysis a model function, convoluted with the resolution, is fitted to the measured data. In cases where integrated intensities of signals are of main interest, one can use an approach which requires neither a model function for the signal nor detailed knowledge of the resolution. For the inverse TOF technique, this approach consists of two steps: (i) Normalisation of the measured spectrum with the help of a monitor, with 1/k sensitivity, which is positioned in front of the sample. This means at the same time a conversion of the data from time of flight to energy transfer. (ii) A Jacobian [I. Waller, P.O. Froeman, Ark. Phys. 4 (1952) 183] transforms data collected at constant scattering angle into data as if measured at constant momentum transfer Q. This Jacobian works correctly for signals which have a constant width at different Q along the trajectory of constant scattering angle. The approach has been tested on spectra of Compton scattering with neutrons of epithermal energies, obtained on the inverse TOF spectrometer VESUVIO/ISIS. In this case the width of the signal increases proportionally to Q, and consequently the application of the Jacobian leads to integrated intensities that are slightly too high. The resulting integrated intensities agree very well with results derived in the traditional way. Thus this completely different approach confirms the observation that signals from recoil by H-atoms at large momentum transfers are weaker than expected.

  16. Global format for energy-momentum based time integration in nonlinear dynamics

    DEFF Research Database (Denmark)

    Krenk, Steen

    2014-01-01

    A global format is developed for momentum and energy consistent time integration of second‐order dynamic systems with general nonlinear stiffness. The algorithm is formulated by integrating the state‐space equations of motion over the time increment. The internal force is first represented...... of mean value products at the element level or explicit use of a geometric stiffness matrix. An optional monotonic algorithmic damping, increasing with response frequency, is developed in terms of a single damping parameter. In the solution procedure, the velocity is eliminated and the nonlinear...

  17. Terminal current interpolation for multirate time integration of hierarchical IC models

    NARCIS (Netherlands)

    Verhoeven, A.; Maten, ter E.J.W.; Dohmen, J.J.; Tasic, B.; Mattheij, R.M.M.; Fitt, A.D.; Norbury, J.; Ockendon, H.; Wilson, E.

    2010-01-01

    Multirate time-integration methods [3–5] appear to be attractive for initial value problems for DAEs with latency or multirate behaviour. Latency means that parts of the circuit are constant or slowly time-varying during a certain time interval, while multirate behaviour means that some variables

  18. The timing of the maximum extent of the Rhone Glacier at Wangen a.d. Aare

    Energy Technology Data Exchange (ETDEWEB)

    Ivy-Ochs, S.; Schluechter, C. [Bern Univ. (Switzerland); Kubik, P.W. [Paul Scherrer Inst. (PSI), Villigen (Switzerland); Beer, J. [EAWAG, Duebendorf (Switzerland)

    1997-09-01

    Erratic blocks found in the region of Wangen a.d. Aare delineate the maximum position of the Solothurn lobe of the Rhone Glacier. ¹⁰Be and ²⁶Al exposure ages of three of these blocks show that the glacier withdrew from its maximum position at or slightly before 20,000 ± 1800 years ago. (author) 1 fig., 5 refs.

  19. A portable storage maximum thermometer

    International Nuclear Information System (INIS)

    Fayart, Gerard.

    1976-01-01

    A clinical thermometer storing the voltage corresponding to the maximum temperature in an analog memory is described. The end of the measurement is indicated by a lamp switching off. The measurement time is shortened by means of a low-thermal-inertia platinum probe. This portable thermometer is fitted with a cell test and calibration system [fr

  20. Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors

    Science.gov (United States)

    Langbein, John

    2017-08-01

    Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi: 10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet it is only an approximate solution for power-law indices >1.0 since it requires the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
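
    As a naive illustration of the time-domain construction, the sketch below represents power-law noise through a fractional-differencing (Hosking-type) filter and combines it with white noise in a full covariance matrix before evaluating the Gaussian log-likelihood. This O(N^3) version is exactly what the paper's efficient scheme is designed to avoid; it is shown only to make the quantities concrete, and all parameter values are placeholders.

```python
import numpy as np

def powerlaw_transform(N, alpha):
    """Lower-triangular Toeplitz matrix carrying the impulse response of
    (1 - B)**(-alpha/2), so that T @ w is power-law noise when w is white."""
    h = np.zeros(N)
    h[0] = 1.0
    for i in range(1, N):
        h[i] = h[i - 1] * (i - 1.0 + alpha / 2.0) / i
    T = np.zeros((N, N))
    for j in range(N):
        T[j:, j] = h[: N - j]
    return T

def neg_log_likelihood(r, sigma_w, sigma_pl, alpha):
    """Negative Gaussian log-likelihood of residuals r with white + power-law noise."""
    N = len(r)
    T = powerlaw_transform(N, alpha)
    C = sigma_w**2 * np.eye(N) + sigma_pl**2 * (T @ T.T)
    _, logdet = np.linalg.slogdet(C)
    return 0.5 * (logdet + r @ np.linalg.solve(C, r) + N * np.log(2.0 * np.pi))

rng = np.random.default_rng(2)
r = rng.standard_normal(200)                  # placeholder residual series
print(neg_log_likelihood(r, sigma_w=1.0, sigma_pl=0.5, alpha=1.0))   # alpha=1: flicker noise
```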

  1. A simulation study of Linsley's approach to infer elongation rate and fluctuations of the EAS maximum depth from muon arrival time distributions

    International Nuclear Information System (INIS)

    Badea, A.F.; Brancus, I.M.; Rebel, H.; Haungs, A.; Oehlschlaeger, J.; Zazyan, M.

    1999-01-01

    The average depth of the maximum X_m of the EAS (Extensive Air Shower) development depends on the energy E_0 and the mass of the primary particle, and its dependence on the energy is traditionally expressed by the so-called elongation rate D_e, defined as the change in the average depth of the maximum per decade of E_0, i.e. D_e = dX_m/dlog10(E_0). Invoking the superposition model approximation, i.e. assuming that a heavy primary (A) has the same shower elongation rate as a proton, but scaled to energies E_0/A, one can write X_m = X_init + D_e·log10(E_0/A). In 1977 an indirect approach for studying D_e was suggested by Linsley. This approach can be applied to shower parameters which do not depend explicitly on the energy of the primary particle, but do depend on the depth of observation X and on the depth X_m of the shower maximum. The distribution of the EAS muon arrival times, measured at a certain observation level relative to the arrival time of the shower core, reflects the path-length distribution of the muon travel from the locus of production (near the axis) to the observation locus. The basic a priori assumption is that we can associate the mean value or median T of the time distribution with the height of the EAS maximum X_m, and that we can express T = f(X, X_m). In order to derive information about the elongation rate from the energy variation of the arrival time quantities, some knowledge is required about F, i.e. F = -(∂T/∂X_m)|_X / (∂T/∂X)|_X_m, in addition to the variations with the depth of observation and the zenith-angle (θ) dependence, respectively. Thus ∂T/∂log10(E_0)|_X = -F·D_e·(1/X_v)·∂T/∂secθ|_E_0. In a similar way the fluctuations σ(X_m) of X_m may be related to the fluctuations σ(T) of T, i.e. σ(T) = -σ(X_m)·F_σ·(1/X_v)·∂T/∂secθ|_E_0, with F_σ being the corresponding scaling factor for the fluctuation of F. By simulations of the EAS development using the Monte Carlo code CORSIKA the energy and angle

  2. Formulation of an explicit-multiple-time-step time integration method for use in a global primitive equation grid model

    Science.gov (United States)

    Chao, W. C.

    1982-01-01

    With appropriate modifications, a recently proposed explicit-multiple-time-step scheme (EMTSS) is incorporated into the UCLA model. In this scheme, the linearized terms in the governing equations that generate the gravity waves are split into different vertical modes. Each mode is integrated with an optimal time step, and at periodic intervals these modes are recombined. The other terms are integrated with a time step dictated by the CFL condition for low-frequency waves. This large time step requires a special modification of the advective terms in the polar region to maintain stability. Test runs for 72 h show that EMTSS is a stable, efficient and accurate scheme.

  3. An integrated framework for SAGD real-time monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Mohajer, M.; Perez-Damas, C.; Berbin, A.; Al-kinani, A. [Schlumberger, Calgary, AB (Canada)

    2009-07-01

    This study examined the technologies and workflows for real-time optimization (RTO) of the steam assisted gravity drainage (SAGD) process. Although SAGD operators have tried to control the reservoir's steam chamber distribution to optimize bitumen recovery and minimize steam oil ratios, a true optimization can only be accomplished by implementing RTO workflows. In order for these workflows to be successful, some elements must be properly designed and introduced into the system. Most notably, well completions must ensure the integrity of downhole sensors; the appropriate measuring instruments must be selected; and surface and downhole measurements must be obtained. Operators have not been early adopters of RTO workflows for SAGD because of the numerous parameters that must be monitored, harsh operating conditions, the lack of integration between the different data acquisition systems, and the complex criteria required to optimize SAGD performance. This paper discussed the first stage in the development of a fully integrated RTO workflow for SAGD. An experimental apparatus with fiber optics distributed temperature sensing (DTS) was connected to a data acquisition system, and intra-minute data was streamed directly into an engineering desktop. The paper showed how subcool calculations can be effectively performed along the length of the horizontal well in real time and the results used to improve SAGD operation. Observations were compared against simulated predictions. In the next stage, a more complex set of criteria will be derived and additional data will be incorporated, such as surface heave, cross-well microseismic, multiphase flowmeter, and observation wells. 9 refs., 9 tabs., 13 figs.

  4. Approximate maximum parsimony and ancestral maximum likelihood.

    Science.gov (United States)

    Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat

    2010-01-01

    We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.

  5. Integral Time and the Varieties of Post-Mortem Survival

    Directory of Open Access Journals (Sweden)

    Sean M. Kelly

    2008-06-01

    Full Text Available While the question of survival of bodily death is usually approached by focusing on the mind/body relation (and often with the idea of the soul as a special kind of substance), this paper explores the issue in the context of our understanding of time. The argument of the paper is woven around the central intuition of time as an “ever-living present.” The development of this intuition allows for a more integral or “complex-holistic” theory of time, the soul, and the question of survival. Following the introductory matter, the first section proposes a re-interpretation of Nietzsche’s doctrine of eternal recurrence in terms of moments and lives as “eternally occurring.” The next section is a treatment of Julian Barbour’s neo-Machian model of instants of time as configurations in the n-dimensional phase-space he calls “Platonia.” While rejecting his claim to have done away with time, I do find his model suggestive of the idea of moments and lives as eternally occurring. The following section begins with Fechner’s visionary ideas of the nature of the soul and its survival of bodily death, with particular attention to the notion of holonic inclusion and the central analogy of the transition from perception to memory. I turn next to Whitehead’s equally holonic notions of prehension and the concrescence of actual occasions. From his epochal theory of time and certain ambiguities in his reflections on the “divine antinomies,” we are brought to the threshold of a potentially more integral or “complex-holistic” theory of time and survival, which is treated in the last section. This section draws from my earlier work on Hegel, Jung, and Edgar Morin, as well as from key insights of Jean Gebser, for an interpretation of Sri Aurobindo’s inspired but cryptic description of the “Supramental Time Vision.” This interpretation leads to an alternative understanding of reincarnation—and to the possibility of its reconciliation with the once-only view

  6. A discontinous Galerkin finite element method with an efficient time integration scheme for accurate simulations

    KAUST Repository

    Liu, Meilin

    2011-07-01

    A discontinuous Galerkin finite element method (DG-FEM) with a highly-accurate time integration scheme is presented. The scheme achieves its high accuracy using numerically constructed predictor-corrector integration coefficients. Numerical results show that this new time integration scheme uses considerably larger time steps than the fourth-order Runge-Kutta method when combined with a DG-FEM using higher-order spatial discretization/basis functions for high accuracy. © 2011 IEEE.

  7. HMC algorithm with multiple time scale integration and mass preconditioning

    Science.gov (United States)

    Urbach, C.; Jansen, K.; Shindler, A.; Wenger, U.

    2006-01-01

    We present a variant of the HMC algorithm with mass preconditioning (Hasenbusch acceleration) and multiple time scale integration. We have tested this variant for standard Wilson fermions at β=5.6 and at pion masses ranging from 380 to 680 MeV. We show that in this situation its performance is comparable to the recently proposed HMC variant with domain decomposition as preconditioner. We give an update of the "Berlin Wall" figure, comparing the performance of our variant of the HMC algorithm to other published performance data. Advantages of the HMC algorithm with mass preconditioning and multiple time scale integration are that it is straightforward to implement and can be used in combination with a wide variety of lattice Dirac operators.

  8. Soil and Water Assessment Tool model predictions of annual maximum pesticide concentrations in high vulnerability watersheds.

    Science.gov (United States)

    Winchell, Michael F; Peranginangin, Natalia; Srinivasan, Raghavan; Chen, Wenlin

    2018-05-01

    Recent national regulatory assessments of potential pesticide exposure of threatened and endangered species in aquatic habitats have led to increased need for watershed-scale predictions of pesticide concentrations in flowing water bodies. This study was conducted to assess the ability of the uncalibrated Soil and Water Assessment Tool (SWAT) to predict annual maximum pesticide concentrations in the flowing water bodies of highly vulnerable small- to medium-sized watersheds. The SWAT was applied to 27 watersheds, largely within the midwest corn belt of the United States, ranging from 20 to 386 km², and evaluated using consistent input data sets and an uncalibrated parameterization approach. The watersheds were selected from the Atrazine Ecological Exposure Monitoring Program and the Heidelberg Tributary Loading Program, both of which contain high temporal resolution atrazine sampling data from watersheds with exceptionally high vulnerability to atrazine exposure. The model performance was assessed based upon predictions of annual maximum atrazine concentrations in 1-d and 60-d durations, predictions critical in pesticide-threatened and endangered species risk assessments when evaluating potential acute and chronic exposure to aquatic organisms. The simulation results showed that for nearly half of the watersheds simulated, the uncalibrated SWAT model was able to predict annual maximum pesticide concentrations within a narrow range of uncertainty resulting from atrazine application timing patterns. An uncalibrated model's predictive performance is essential for the assessment of pesticide exposure in flowing water bodies, the majority of which have insufficient monitoring data for direct calibration, even in data-rich countries. In situations in which SWAT over- or underpredicted the annual maximum concentrations, the magnitude of the over- or underprediction was commonly less than a factor of 2, indicating that the model and uncalibrated parameterization

  9. Maximum Likelihood Estimation and Inference With Examples in R, SAS and ADMB

    CERN Document Server

    Millar, Russell B

    2011-01-01

    This book takes a fresh look at the popular and well-established method of maximum likelihood for statistical estimation and inference. It begins with an intuitive introduction to the concepts and background of likelihood, and moves through to the latest developments in maximum likelihood methodology, including general latent variable models and new material for the practical implementation of integrated likelihood using the free ADMB software. Fundamental issues of statistical inference are also examined, with a presentation of some of the philosophical debates underlying the choice of statis

  10. A unified approach for proportional-integral-derivative controller design for time delay processes

    International Nuclear Information System (INIS)

    Shamsuzzoha, Mohammad

    2015-01-01

    An analytical design method for PI/PID controller tuning is proposed for several types of processes with time delay. A single tuning formula gives enhanced disturbance rejection performance. The design method is based on the IMC approach, which has a single tuning parameter to adjust the performance and robustness of the controller. A simple tuning formula gives consistently better performance than several well-known methods at the same degree of robustness for stable and integrating processes. For unstable processes, the performance has been compared with other recently published methods, which again shows a significant improvement for the proposed method. Furthermore, the robustness of the controller is investigated by inserting a perturbation uncertainty in all parameters simultaneously, again showing results comparable with other methods. An analysis has been performed of the uncertainty margin in the different process parameters for the robust controller design. It gives guidelines for the M_s setting of the PI controller design based on the uncertainty in the process parameters. For the selection of the closed-loop time constant (τ_c), a guideline is provided over a broad range of θ/τ ratios on the basis of the peak of maximum uncertainty (M_s). A comparison of the IAE has been conducted over a wide range of θ/τ ratios for the first-order time-delay process. The proposed method shows the minimum IAE compared to SIMC, while the method of Lee et al. shows poor disturbance rejection for the lag-dominant process. In the simulation study, the controllers were tuned to have the same degree of robustness, by measuring M_s, to obtain a reasonable comparison.
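
    Since the abstract does not reproduce the paper's own tuning formula, the sketch below uses the widely known SIMC PI rules for a first-order-plus-time-delay (FOPTD) process as a stand-in, to show how a single closed-loop time constant τ_c trades off performance against robustness.

```python
# SIMC-style PI tuning for K*exp(-theta*s)/(tau*s + 1); shown for context only,
# this is not the tuning rule proposed in the paper.
def simc_pi(K, tau, theta, tau_c):
    Kc = (1.0 / K) * tau / (tau_c + theta)     # controller gain
    tau_I = min(tau, 4.0 * (tau_c + theta))    # integral time
    return Kc, tau_I

# Example: lag-dominant process with tau_c = theta (a common default choice)
print(simc_pi(K=2.0, tau=10.0, theta=1.0, tau_c=1.0))
```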

  11. Spatio-temporal observations of the tertiary ozone maximum

    Directory of Open Access Journals (Sweden)

    V. F. Sofieva

    2009-07-01

    Full Text Available We present spatio-temporal distributions of the tertiary ozone maximum (TOM), based on GOMOS (Global Ozone Monitoring by Occultation of Stars) ozone measurements in 2002–2006. The tertiary ozone maximum is typically observed in the high-latitude winter mesosphere at an altitude of ~72 km. Although the explanation for this phenomenon has been found recently – low concentrations of odd-hydrogen cause the subsequent decrease in odd-oxygen losses – models have had significant deviations from existing observations until recently. Good coverage of polar night regions by GOMOS data has allowed for the first time to obtain spatial and temporal observational distributions of night-time ozone mixing ratio in the mesosphere.

    The distributions obtained from GOMOS data have specific features, which are variable from year to year. In particular, due to a long lifetime of ozone in polar night conditions, the downward transport of polar air by the meridional circulation is clearly observed in the tertiary ozone maximum time series. Although the maximum tertiary ozone mixing ratio is achieved close to the polar night terminator (as predicted by the theory), TOM can be observed also at very high latitudes, not only in the beginning and at the end, but also in the middle of winter. We have compared the observational spatio-temporal distributions of the tertiary ozone maximum with that obtained using WACCM (Whole Atmosphere Community Climate Model) and found that the specific features are reproduced satisfactorily by the model.

    Since ozone in the mesosphere is very sensitive to HOx concentrations, energetic particle precipitation can significantly modify the shape of the ozone profiles. In particular, GOMOS observations have shown that the tertiary ozone maximum was temporarily destroyed during the January 2005 and December 2006 solar proton events as a result of the HOx enhancement from the increased ionization.

  12. Floral pathway integrator gene expression mediates gradual transmission of environmental and endogenous cues to flowering time.

    Science.gov (United States)

    van Dijk, Aalt D J; Molenaar, Jaap

    2017-01-01

    The appropriate timing of flowering is crucial for the reproductive success of plants. Hence, intricate genetic networks integrate various environmental and endogenous cues such as temperature or hormonal status. These signals integrate into a network of floral pathway integrator genes. At a quantitative level, it is currently unclear how the impact of genetic variation in signaling pathways on flowering time is mediated by floral pathway integrator genes. Here, using datasets available from literature, we connect Arabidopsis thaliana flowering time in genetic backgrounds varying in upstream signalling components with the expression levels of floral pathway integrator genes in these genetic backgrounds. Our modelling results indicate that flowering time depends in a quite linear way on expression levels of floral pathway integrator genes. This gradual, proportional response of flowering time to upstream changes enables a gradual adaptation to changing environmental factors such as temperature and light.

  13. Floral pathway integrator gene expression mediates gradual transmission of environmental and endogenous cues to flowering time

    Directory of Open Access Journals (Sweden)

    Aalt D.J. van Dijk

    2017-04-01

    Full Text Available The appropriate timing of flowering is crucial for the reproductive success of plants. Hence, intricate genetic networks integrate various environmental and endogenous cues such as temperature or hormonal status. These signals integrate into a network of floral pathway integrator genes. At a quantitative level, it is currently unclear how the impact of genetic variation in signaling pathways on flowering time is mediated by floral pathway integrator genes. Here, using datasets available from literature, we connect Arabidopsis thaliana flowering time in genetic backgrounds varying in upstream signalling components with the expression levels of floral pathway integrator genes in these genetic backgrounds. Our modelling results indicate that flowering time depends in a quite linear way on expression levels of floral pathway integrator genes. This gradual, proportional response of flowering time to upstream changes enables a gradual adaptation to changing environmental factors such as temperature and light.

  14. Parameters affecting temporal resolution of Time Resolved Integrative Optical Neutron Detector (TRION)

    International Nuclear Information System (INIS)

    Mor, I; Vartsky, D; Bar, D; Feldman, G; Goldberg, M B; Brandis, M; Dangendorf, V; Tittelmeier, K; Bromberger, B; Weierganz, M

    2013-01-01

    The Time-Resolved Integrative Optical Neutron (TRION) detector was developed for Fast Neutron Resonance Radiography (FNRR), a fast-neutron transmission imaging method that exploits characteristic energy variations of the total scattering cross-section in the E_n = 1–10 MeV range to detect specific elements within a radiographed object. As opposed to classical event-counting time-of-flight (ECTOF), it integrates the detector signal during a well-defined neutron time-of-flight window corresponding to a pre-selected energy bin, e.g., the energy interval spanning a cross-section resonance of an element such as C, O or N. The integrative characteristic of the detector permits loss-free operation at very intense, pulsed neutron fluxes, at the cost, however, of a degradation of the recorded temporal resolution. This work presents a theoretical and experimental evaluation of detector-related parameters which affect the temporal resolution of the TRION system.

  15. The use of artificial intelligence techniques to improve the multiple payload integration process

    Science.gov (United States)

    Cutts, Dannie E.; Widgren, Brian K.

    1992-01-01

    A maximum return of science and products with a minimum expenditure of time and resources is a major goal of mission payload integration. A critical component then, in successful mission payload integration is the acquisition and analysis of experiment requirements from the principal investigator and payload element developer teams. One effort to use artificial intelligence techniques to improve the acquisition and analysis of experiment requirements within the payload integration process is described.

  16. INTEGRITY ANALYSIS OF REAL-TIME PPP TECHNIQUE WITH IGS-RTS SERVICE FOR MARITIME NAVIGATION

    Directory of Open Access Journals (Sweden)

    M. El-Diasty

    2017-10-01

    Full Text Available Open sea and inland waterways are the most widely used mode for transporting goods worldwide. It is the International Maritime Organization (IMO) that defines the requirements for position fixing equipment for a worldwide radio-navigation system, in terms of accuracy, integrity, continuity, availability and coverage for the various phases of navigation. Satellite positioning systems can contribute to meet these requirements, as well as optimize marine transportation. Marine navigation usually consists of three major phases identified as Ocean/Coastal/Port approach/Inland waterway, in port navigation and automatic docking with alert limit ranges from 25 m to 0.25 m. GPS positioning is widely used for many applications and is currently recognized by IMO for a future maritime navigation. With the advancement in autonomous GPS positioning techniques such as Precise Point Positioning (PPP) and with the advent of new real-time GNSS correction services such as IGS-Real-Time-Service (RTS), it is necessary to investigate the integrity of the PPP-based positioning technique along with IGS-RTS service in terms of availability and reliability for safe navigation in maritime application. This paper monitors the integrity of an autonomous real-time PPP-based GPS positioning system using the IGS real-time service (RTS) for maritime applications that require minimum availability of integrity of 99.8 % to fulfil the IMO integrity standards. To examine the integrity of the real-time IGS-RTS PPP-based technique for maritime applications, kinematic data from a dual frequency GPS receiver is collected onboard a vessel and investigated with the real-time IGS-RTS PPP-based GPS positioning technique. It is shown that the availability of integrity of the real-time IGS-RTS PPP-based GPS solution is 100 % for all navigation phases and therefore fulfil the IMO integrity standards (99.8 % availability) immediately (after 1 second), after 2 minutes and after 42 minutes

  17. Integrity Analysis of Real-Time Ppp Technique with Igs-Rts Service for Maritime Navigation

    Science.gov (United States)

    El-Diasty, M.

    2017-10-01

    Open sea and inland waterways are the most widely used mode for transporting goods worldwide. It is the International Maritime Organization (IMO) that defines the requirements for position fixing equipment for a worldwide radio-navigation system, in terms of accuracy, integrity, continuity, availability and coverage for the various phases of navigation. Satellite positioning systems can contribute to meet these requirements, as well as optimize marine transportation. Marine navigation usually consists of three major phases identified as Ocean/Coastal/Port approach/Inland waterway, in port navigation and automatic docking with alert limit ranges from 25 m to 0.25 m. GPS positioning is widely used for many applications and is currently recognized by IMO for a future maritime navigation. With the advancement in autonomous GPS positioning techniques such as Precise Point Positioning (PPP) and with the advent of new real-time GNSS correction services such as IGS-Real-Time-Service (RTS), it is necessary to investigate the integrity of the PPP-based positioning technique along with IGS-RTS service in terms of availability and reliability for safe navigation in maritime application. This paper monitors the integrity of an autonomous real-time PPP-based GPS positioning system using the IGS real-time service (RTS) for maritime applications that require minimum availability of integrity of 99.8 % to fulfil the IMO integrity standards. To examine the integrity of the real-time IGS-RTS PPP-based technique for maritime applications, kinematic data from a dual frequency GPS receiver is collected onboard a vessel and investigated with the real-time IGS-RTS PPP-based GPS positioning technique. It is shown that the availability of integrity of the real-time IGS-RTS PPP-based GPS solution is 100 % for all navigation phases and therefore fulfil the IMO integrity standards (99.8 % availability) immediately (after 1 second), after 2 minutes and after 42 minutes of convergence

  18. An efficient explicit marching on in time solver for magnetic field volume integral equation

    KAUST Repository

    Sayed, Sadeed Bin; Ulku, H. Arda; Bagci, Hakan

    2015-01-01

    An efficient explicit marching on in time (MOT) scheme for solving the magnetic field volume integral equation is proposed. The MOT system is cast in the form of an ordinary differential equation and is integrated in time using a PE(CE)m multistep

  19. Optimal distribution of integration time for intensity measurements in degree of linear polarization polarimetry.

    Science.gov (United States)

    Li, Xiaobo; Hu, Haofeng; Liu, Tiegen; Huang, Bingjing; Song, Zhanjie

    2016-04-04

    We consider the degree of linear polarization (DOLP) polarimetry system, which performs two intensity measurements at orthogonal polarization states to estimate DOLP. We show that if the total integration time of the intensity measurements is fixed, the variance of the DOLP estimator depends on the distribution of integration time between the two intensity measurements. Therefore, by optimizing the distribution of integration time, the variance of the DOLP estimator can be decreased. In this paper, we obtain the closed-form solution of the optimal distribution of integration time in an approximate way by employing the Delta method and the Lagrange multiplier method. According to the theoretical analyses and real-world experiments, it is shown that the variance of the DOLP estimator can be decreased for any value of DOLP. The method proposed in this paper can effectively decrease the measurement variance and thus statistically improve the measurement accuracy of the polarimetry system.
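
    The sketch below works out the variance-versus-split trade-off under an assumed Poisson shot-noise model; the paper's exact noise model and closed-form optimum are not reproduced here, and the photon rates and total time are placeholders. Under this assumption, minimizing the Delta-method variance gives the split t1/t2 = sqrt(I2/I1), which the grid search reproduces.

```python
import numpy as np

def dolp_variance(I1, I2, t1, t2):
    """Delta-method variance of (I1-I2)/(I1+I2) when each intensity is estimated
    from Poisson counts accumulated over t1 and t2 seconds, respectively."""
    s = (I1 + I2) ** 2
    return (2 * I2 / s) ** 2 * I1 / t1 + (2 * I1 / s) ** 2 * I2 / t2

I1, I2, T = 80.0, 20.0, 1.0                    # placeholder photon rates and total time
splits = np.linspace(0.05, 0.95, 91)           # fraction of T given to the first channel
var = [dolp_variance(I1, I2, f * T, (1 - f) * T) for f in splits]
print(splits[int(np.argmin(var))],             # grid optimum ...
      np.sqrt(I2) / (np.sqrt(I1) + np.sqrt(I2)))   # ... vs closed form under this model
```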

  20. Molecular radiotherapy: The NUKFIT software for calculating the time-integrated activity coefficient

    Energy Technology Data Exchange (ETDEWEB)

    Kletting, P.; Schimmel, S.; Luster, M. [Klinik für Nuklearmedizin, Universität Ulm, Ulm 89081 (Germany); Kestler, H. A. [Research Group Bioinformatics and Systems Biology, Institut für Neuroinformatik, Universität Ulm, Ulm 89081 (Germany); Hänscheid, H.; Fernández, M.; Lassmann, M. [Klinik für Nuklearmedizin, Universität Würzburg, Würzburg 97080 (Germany); Bröer, J. H.; Nosske, D. [Bundesamt für Strahlenschutz, Fachbereich Strahlenschutz und Gesundheit, Oberschleißheim 85764 (Germany); Glatting, G. [Medical Radiation Physics/Radiation Protection, Medical Faculty Mannheim, Heidelberg University, Mannheim 68167 (Germany)

    2013-10-15

    Purpose: Calculation of the time-integrated activity coefficient (residence time) is a crucial step in dosimetry for molecular radiotherapy. However, available software is deficient in that it is either not tailored for the use in molecular radiotherapy and/or does not include all required estimation methods. The aim of this work was therefore the development and programming of an algorithm which allows for an objective and reproducible determination of the time-integrated activity coefficient and its standard error. Methods: The algorithm includes the selection of a set of fitting functions from predefined sums of exponentials and the choice of an error model for the used data. To estimate the values of the adjustable parameters an objective function, depending on the data, the parameters of the error model, the fitting function and (if required and available) Bayesian information, is minimized. To increase reproducibility and user-friendliness the starting values are automatically determined using a combination of curve stripping and random search. Visual inspection, the coefficient of determination, the standard error of the fitted parameters, and the correlation matrix are provided to evaluate the quality of the fit. The functions which are most supported by the data are determined using the corrected Akaike information criterion. The time-integrated activity coefficient is estimated by analytically integrating the fitted functions. Its standard error is determined assuming Gaussian error propagation. The software was implemented using MATLAB. Results: To validate the proper implementation of the objective function and the fit functions, the results of NUKFIT and SAAM numerical, a commercially available software tool, were compared. The automatic search for starting values was successfully tested for reproducibility. The quality criteria applied in conjunction with the Akaike information criterion allowed the selection of suitable functions. Function fit
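
    The following is a minimal illustration of the underlying idea, not NUKFIT itself: a mono-exponential is fitted to synthetic time-activity data, integrated analytically to give the time-integrated activity coefficient, and its standard error is obtained by Gaussian error propagation of the fit covariance. Data, model choice and starting values are placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.array([1.0, 4.0, 24.0, 48.0, 96.0])     # h after administration (synthetic)
a = np.array([0.35, 0.30, 0.18, 0.10, 0.03])   # fraction of administered activity

def model(t, A, lam):
    return A * np.exp(-lam * t)

(A, lam), cov = curve_fit(model, t, a, p0=(0.4, 0.02))
tia = A / lam                                   # integral of A*exp(-lam*t) over [0, inf)
J = np.array([1.0 / lam, -A / lam**2])          # d(tia)/d(A, lam)
se = np.sqrt(J @ cov @ J)                       # Gaussian error propagation
print(f"time-integrated activity coefficient ~ {tia:.1f} +/- {se:.1f} h")
```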

  1. Velocity time integral for right upper pulmonary vein in VLBW infants with patent ductus arteriosus

    Directory of Open Access Journals (Sweden)

    Gianluca Lista

    Full Text Available OBJECTIVE: Early diagnosis of significant patent ductus arteriosus reduces the risk of clinical worsening in very low birth weight infants. Echocardiographic patent ductus arteriosus shunt flow pattern can be used to predict significant patent ductus arteriosus. Pulmonary venous flow, expressed as vein velocity time integral, is correlated to ductus arteriosus closure. The aim of this study is to investigate the relationship between significant reductions in vein velocity time integral and non-significant patent ductus arteriosus in the first week of life. METHODS: A multicenter, prospective, observational study was conducted to evaluate very low birth weight infants (<1500 g on respiratory support. Echocardiography was used to evaluate vein velocity time integral on days 1 and 4 of life. The relationship between vein velocity time integral and other parameters was studied. RESULTS: In total, 98 very low birth weight infants on respiratory support were studied. On day 1 of life, vein velocity time integral was similar in patients with open or closed ductus. The mean vein velocity time integral significantly reduced in the first four days of life. On the fourth day of life, there was less of a reduction in patients with patent ductus compared to those with closed patent ductus arteriosus and the difference was significant. CONCLUSIONS: A significant reduction in vein velocity time integral in the first days of life is associated with ductus closure. This parameter correlates well with other echocardiographic parameters and may aid in the diagnosis and management of patent ductus arteriosus.

  2. Symmetric and arbitrarily high-order Birkhoff-Hermite time integrators and their long-time behaviour for solving nonlinear Klein-Gordon equations

    Science.gov (United States)

    Liu, Changying; Iserles, Arieh; Wu, Xinyuan

    2018-03-01

    The Klein-Gordon equation with nonlinear potential occurs in a wide range of application areas in science and engineering. Its computation represents a major challenge. The main theme of this paper is the construction of symmetric and arbitrarily high-order time integrators for the nonlinear Klein-Gordon equation by integrating Birkhoff-Hermite interpolation polynomials. To this end, under the assumption of periodic boundary conditions, we begin with the formulation of the nonlinear Klein-Gordon equation as an abstract second-order ordinary differential equation (ODE) and its operator-variation-of-constants formula. We then derive a symmetric and arbitrarily high-order Birkhoff-Hermite time integration formula for the nonlinear abstract ODE. Accordingly, the stability, convergence and long-time behaviour are rigorously analysed once the spatial differential operator is approximated by an appropriate positive semi-definite matrix, subject to suitable temporal and spatial smoothness. A remarkable characteristic of this new approach is that the requirement of temporal smoothness is reduced compared with the traditional numerical methods for PDEs in the literature. Numerical results demonstrate the advantage and efficiency of our time integrators in comparison with the existing numerical approaches.

  3. Stability Analysis and Variational Integrator for Real-Time Formation Based on Potential Field

    Directory of Open Access Journals (Sweden)

    Shengqing Yang

    2014-01-01

    Full Text Available This paper investigates a framework for real-time formation of autonomous vehicles using a potential field and a variational integrator. Real-time formation requires vehicles to have coordinated motion and efficient computation. Interactions described by a potential field can meet the former requirement, which results in a nonlinear system. Stability analysis of such a nonlinear system is difficult; our methodology is to carry out the stability analysis on the error dynamic system. Transformation of coordinates from the inertial frame to the body frame lets the stability analysis focus on the structure instead of particular coordinates. Then, the Jacobian of the reduced system can be calculated. It can be proved that the formation is stable at the equilibrium point of the error dynamic system under the effect of the damping force. For computational reasons, a variational integrator is introduced; it amounts to solving algebraic equations. The forced Euler-Lagrange equation in discrete form is used to construct a forced variational integrator for vehicles in a potential field and obstacle environment. By applying the forced variational integrator to the computation of the vehicles' motion, real-time formation of vehicles in an obstacle environment can be implemented. An algorithm based on the forced variational integrator is designed for a leader-follower formation.
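
    As a concrete stand-in for the forced discrete Euler-Lagrange formulation (see the sketch below), a velocity-Verlet step, itself a basic variational integrator, is applied to a single vehicle moving in an attractive goal potential with obstacle repulsion and damping; the potential, the damping force and all constants are invented placeholders, and the velocity-dependent force is treated in a simplified way.

```python
import numpy as np

goal = np.array([5.0, 0.0])
obstacle = np.array([2.5, 0.3])

def force(q, v, k=1.0, k_obs=2.0, c=0.8):
    d = q - obstacle
    repulsion = k_obs * d / (np.linalg.norm(d) ** 3 + 1e-9)   # push away from the obstacle
    return -k * (q - goal) + repulsion - c * v                 # goal attraction + damping

def verlet_step(q, v, h=0.02):
    a = force(q, v)
    q_new = q + h * v + 0.5 * h * h * a
    a_new = force(q_new, v + h * a)            # simple predictor for the velocity-dependent force
    v_new = v + 0.5 * h * (a + a_new)
    return q_new, v_new

q, v = np.array([0.0, 0.0]), np.zeros(2)
for _ in range(2000):
    q, v = verlet_step(q, v)
print(q)                                       # settles near the goal, offset by the obstacle
```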

  4. Maximum principles and sharp constants for solutions of elliptic and parabolic systems

    CERN Document Server

    Kresin, Gershon

    2012-01-01

    The main goal of this book is to present results pertaining to various versions of the maximum principle for elliptic and parabolic systems of arbitrary order. In particular, the authors present necessary and sufficient conditions for validity of the classical maximum modulus principles for systems of second order and obtain sharp constants in inequalities of Miranda-Agmon type and in many other inequalities of a similar nature. Somewhat related to this topic are explicit formulas for the norms and the essential norms of boundary integral operators. The proofs are based on a unified approach using, on one hand, representations of the norms of matrix-valued integral operators whose target spaces are linear and finite dimensional, and, on the other hand, on solving certain finite dimensional optimization problems. This book reflects results obtained by the authors, and can be useful to research mathematicians and graduate students interested in partial differential equations.

  5. Elucidating dynamic metabolic physiology through network integration of quantitative time-course metabolomics

    DEFF Research Database (Denmark)

    Bordbar, Aarash; Yurkovich, James T.; Paglia, Giuseppe

    2017-01-01

    The increasing availability of metabolomics data necessitates novel methods for deeper data analysis and interpretation. We present a flux balance analysis method that allows for the computation of dynamic intracellular metabolic changes at the cellular scale through integration of time-course ab...

  6. Model-based Integration of Past & Future in TimeTravel

    DEFF Research Database (Denmark)

    Khalefa, Mohamed E.; Fischer, Ulrike; Pedersen, Torben Bach

    2012-01-01

    We demonstrate TimeTravel, an efficient DBMS system for seamless integrated querying of past and (forecasted) future values of time series, allowing the user to view past and future values as one joint time series. This functionality is important for advanced application domains like energy.... The main idea is to compactly represent time series as models. By using models, the TimeTravel system answers queries approximately on past and future data with error guarantees (absolute error and confidence) one order of magnitude faster than when accessing the time series directly. In addition... it to answer approximate and exact queries. TimeTravel is implemented into PostgreSQL, thus achieving complete user transparency at the query level. In the demo, we show the easy building of a hierarchical model index for a real-world time series and the effect of varying the error guarantees on the speed up...

  7. High performance monolithic power management system with dynamic maximum power point tracking for microbial fuel cells.

    Science.gov (United States)

    Erbay, Celal; Carreon-Bautista, Salvador; Sanchez-Sinencio, Edgar; Han, Arum

    2014-12-02

    The microbial fuel cell (MFC), which can directly generate electricity from organic waste or biomass, is a promising renewable and clean technology. However, the low power and low voltage output of MFCs typically do not allow directly operating most electrical applications, whether it is supplementing electricity to wastewater treatment plants or powering autonomous wireless sensor networks. Power management systems (PMSs) can overcome this limitation by boosting the MFC output voltage and managing the power for maximum efficiency. We present a monolithic low-power-consuming PMS integrated circuit (IC) chip capable of dynamic maximum power point tracking (MPPT) to maximize the extracted power from MFCs, regardless of the power and voltage fluctuations from MFCs over time. The proposed PMS continuously detects the maximum power point (MPP) of the MFC and matches the load impedance of the PMS for maximum efficiency. The system also operates autonomously by directly drawing power from the MFC itself without any external power. The overall system efficiency, defined as the ratio between input energy from the MFC and output energy stored into the supercapacitor of the PMS, was 30%. As a demonstration, the PMS connected to a 240 mL two-chamber MFC (generating 0.4 V and 512 μW at MPP) successfully powered a wireless temperature sensor that requires a voltage of 2.5 V and consumes 85 mW each time it transmits the sensor data, and successfully transmitted a sensor reading every 7.5 min. The PMS also efficiently managed the power output of a lower-power-producing MFC, demonstrating that the PMS works efficiently at various MFC power output levels.
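
    The abstract does not state which tracking algorithm the chip implements; as one common software analogue, a perturb-and-observe MPPT loop is sketched below against a made-up power-versus-duty-cycle curve whose peak merely echoes the 512 μW figure quoted above.

```python
# Simple perturb-and-observe (hill-climbing) MPPT loop; illustrative only.
def mppt_p_and_o(read_power, set_duty, d0=0.5, step=0.01, iterations=200):
    d, p_prev, direction = d0, 0.0, +1
    for _ in range(iterations):
        set_duty(d)
        p = read_power()
        if p < p_prev:                 # power dropped: reverse the perturbation direction
            direction = -direction
        p_prev = p
        d = min(max(d + direction * step, 0.0), 1.0)
    return d

# Toy MFC-like power curve with its maximum at a duty cycle of 0.4 (placeholder model).
state = {"d": 0.5}
set_duty = lambda d: state.update(d=d)
read_power = lambda: 512e-6 * (1.0 - ((state["d"] - 0.4) / 0.4) ** 2)
print(mppt_p_and_o(read_power, set_duty))      # converges to ~0.4
```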

  8. Time-independent integral equation for Maxwell's system. Application to radar cross section computation

    International Nuclear Information System (INIS)

    Pujols, Agnes

    1991-01-01

    We prove that the scattering operator for the wave equation in the exterior of a non-homogeneous obstacle exists. Its distribution kernel is represented by a time-dependent boundary integral equation. A space-time integral variational formulation is developed for determining the current induced by the scattering of an electromagnetic wave by a homogeneous object. The discrete approximation of the variational problem using a finite element method in both space and time leads to stable convergent schemes, giving a numerical code for perfectly conducting cylinders. (author) [fr

  9. Hybrid state-space time integration of rotating beams

    DEFF Research Database (Denmark)

    Krenk, Steen; Nielsen, Martin Bjerre

    2012-01-01

    An efficient time integration algorithm for the dynamic equations of flexible beams in a rotating frame of reference is presented. The equations of motion are formulated in a hybrid state-space format in terms of local displacements and local components of the absolute velocity. With inspiration...... of the system rotation enter via global operations with the angular velocity vector. The algorithm is based on an integrated form of the equations of motion with energy and momentum conserving properties, if a kinematically consistent non-linear formulation is used. A consistent monotonic scheme for algorithmic...... energy dissipation in terms of local displacements and velocities, typical of structural vibrations, is developed and implemented in the form of forward weighting of appropriate mean value terms in the algorithm. The algorithm is implemented for a beam theory with consistent quadratic non...

  10. Integral Time and the Varieties of Post-Mortem Survival

    Directory of Open Access Journals (Sweden)

    Sean M. Kelly

    2008-06-01

    Full Text Available While the question of survival of bodily death is usually approached by focusing on the mind/body relation (and often with the idea of the soul as a special kind of substance, this paper explores the issue in the context of our understanding of time. The argument of the paper is woven around the central intuition of time as an “ever-living present.” The development of this intuition allows for a more integral or “complex-holistic” theory of time, the soul, and the question of survival. Following the introductory matter, the first section proposes a re-interpretation of Nietzsche’s doctrine of eternal recurrence in terms of moments and lives as “eternally occurring.” The next section is a treatment of Julian Barbour’s neo-Machian model of instants of time as configurations in the n-dimensional phase-space he calls “Platonia.” While rejecting his claim to have done away with time, I do find his model suggestive of the idea of moments and lives as eternally occurring. The following section begins with Fechner’s visionary ideas of the nature of the soul and its survival of bodily death, with particular attention to the notion of holonic inclusion and the central analogy of the transition from perception to memory. I turn next to Whitehead’s equally holonic notions of prehension and the concrescence of actual occasions. From his epochal theory of time and certain ambiguities in his reflections on the “divine antinomies,” we are brought to the threshold of a potentially more integral or “complex-holistic” theory of time and survival, which is treated in the last section. This section draws from my earlier work on Hegel, Jung, and Edgar Morin, as well as from key insights of Jean Gebser, for an interpretation of Sri Aurobindo’s inspired but cryptic description of the “Supramental Time Vision.” This interpretation leads to an alternative understanding of reincarnation—and to the possibility of its reconciliation

  11. Connection between Feynman integrals having different values of the space-time dimension

    International Nuclear Information System (INIS)

    Tarasov, O.V.

    1996-05-01

    A systematic algorithm for obtaining recurrence relations for dimensionally regularized Feynman integrals w.r.t. the space-time dimension d is proposed. The relation between d and d-2 dimensional integrals is given in terms of a differential operator for which an explicit formula can be obtained for each Feynman diagram. We show how the method works for one-, two- and three-loop integrals. The new recurrence relations w.r.t. d are complementary to the recurrence relations which derive from the method of integration by parts. We find that the problem of the irreducible numerators in Feynman integrals can be naturally solved in the framework of the proposed generalized recurrence relations. (orig.)

  12. Prospects for direct measurement of time-integrated Bs mixing

    International Nuclear Information System (INIS)

    Siccama, I.

    1994-01-01

    This note investigates the prospects of measuring time-integrated B_s mixing. Three inclusive decay modes of the B_s meson are discussed. For each reconstruction mode, the expected number of events and the different background channels are discussed. Estimates are given for the uncertainty on the mixing parameter χ_s. (orig.)
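
    For reference, the time-integrated mixing parameter mentioned in this record is conventionally related to the mixing frequency and width difference by the standard expression below. This is textbook notation added here for context, not a formula taken from the record itself.

```latex
% Time-integrated mixing probability for a neutral B_s meson, assuming
% exponential decay with oscillation (standard textbook form):
\chi_s \;=\; \frac{\int_0^\infty P(B_s \to \bar{B}_s;\,t)\,dt}
                  {\int_0^\infty \bigl[P(B_s \to B_s;\,t) + P(B_s \to \bar{B}_s;\,t)\bigr]\,dt}
       \;=\; \frac{x_s^2 + y_s^2}{2\,(1 + x_s^2)},
\qquad x_s = \frac{\Delta m_s}{\Gamma_s}, \quad y_s = \frac{\Delta\Gamma_s}{2\Gamma_s}.
```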

  13. Evaluation of the filtered leapfrog-trapezoidal time integration method

    International Nuclear Information System (INIS)

    Roache, P.J.; Dietrich, D.E.

    1988-01-01

    An analysis and evaluation are presented for a new method of time integration for fluid dynamics proposed by Dietrich. The method, called the filtered leapfrog-trapezoidal (FLT) scheme, is analyzed for the one-dimensional constant-coefficient advection equation and is shown to have some advantages for quasi-steady flows. A modification (FLTW) using a weighted combination of FLT and leapfrog is developed which retains the advantages for steady flows, increases accuracy for time-dependent flows, and involves little coding effort. Merits and applicability are discussed
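
    To illustrate the kind of scheme being analyzed, the sketch below integrates the same one-dimensional constant-coefficient advection equation with a plain leapfrog step plus an Asselin-style time filter. It is a generic illustrative example, not the exact FLT/FLTW formulation of the record; the grid size, CFL number and filter weight are arbitrary choices.

```python
import numpy as np

# Illustrative sketch (not the exact FLT/FLTW scheme of Dietrich): leapfrog
# integration of u_t + c u_x = 0 on a periodic grid, with an Asselin-style
# time filter to damp the spurious computational mode of plain leapfrog.
def advect_leapfrog_filtered(u0, c=1.0, dx=0.01, dt=0.005, steps=200, gamma=0.1):
    u_old = u0.copy()
    # first step: simple upwind to start the three-level leapfrog scheme
    u_now = u0 - c * dt / dx * (u0 - np.roll(u0, 1))
    for _ in range(steps):
        # leapfrog: centred in time and space
        u_new = u_old - c * dt / dx * (np.roll(u_now, -1) - np.roll(u_now, 1))
        # Asselin time filter applied to the middle time level
        u_now_filtered = u_now + gamma * (u_new - 2.0 * u_now + u_old)
        u_old, u_now = u_now_filtered, u_new
    return u_now

if __name__ == "__main__":
    x = np.linspace(0.0, 1.0, 100, endpoint=False)
    u0 = np.exp(-100.0 * (x - 0.5) ** 2)   # Gaussian pulse
    u = advect_leapfrog_filtered(u0)
    print("mass before/after:", u0.sum(), u.sum())
```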

  14. Hybrid integrated circuit for charge-to-time interval conversion

    Energy Technology Data Exchange (ETDEWEB)

    Basiladze, S.G.; Dotsenko, Yu.Yu.; Man' yakov, P.K.; Fedorchenko, S.N. (Joint Inst. for Nuclear Research, Dubna (USSR))

    A hybrid integrated circuit for charge-to-time interval conversion with nanosecond input response is described. The circuit can be used in energy measuring channels, in time-to-digital converters and, in a modified variant, in amplitude-to-digital converters. The converter consists of a buffer amplifier, a linear transmission circuit, a direct current source and a time-interval separation unit. The buffer amplifier is a current follower providing low input and high output resistance through current feedback. It is concluded that the described converter surpasses the analogous QT100B circuit in a number of parameters, especially in thermostability.

  15. An extended Halanay inequality of integral type on time scales

    Directory of Open Access Journals (Sweden)

    Boqun Ou

    2015-07-01

    Full Text Available In this paper, we obtain a Halanay-type inequality of integral type on time scales which improves and extends some earlier results for both the continuous and discrete cases. Several illustrative examples are also given.

  16. Multi-channel normal speed gated integrator in the measurement of the laser scattering light energy

    International Nuclear Information System (INIS)

    Yang Dong; Yu Xiaoqi; Hu Yuanfeng

    2005-01-01

    Using the method of integration over a limited time, a multi-channel normal-speed gated integrator based on a VXI system has been developed for measuring signals with changeable pulse width in laser scattering light experiments. It has been tested with signal sources in ICF experiments. In tests, the integral nonlinearity between the integral results of the gated integrator and those of an oscilloscope is less than 1%. In the ICF experiments the maximum error between the integral results of the gated integrator and those of the oscilloscope is less than 3% of the full-scale range of the gated integrator. (authors)

  17. Determining the maximum diameter for holes in the shoe without compromising shoe integrity when using a multi-segment foot model.

    Science.gov (United States)

    Shultz, Rebecca; Jenkyn, Thomas

    2012-01-01

    Measuring individual foot joint motions requires a multi-segment foot model, even when the subject is wearing a shoe. Each foot segment must be tracked with at least three skin-mounted markers, but for these markers to be visible to an optical motion capture system, holes or 'windows' must be cut into the structure of the shoe. The holes must be sufficiently large to avoid interfering with the markers, but small enough that they do not compromise the shoe's structural integrity. The objective of this study was to determine the maximum size of hole that could be cut into a running shoe upper without significantly compromising its structural integrity or changing the kinematics of the foot within the shoe. Three shoe designs were tested: (1) neutral cushioning, (2) motion control and (3) stability shoes. Holes were cut progressively larger, with four sizes tested in all. Foot joint motions were measured: (1) hindfoot with respect to midfoot in the frontal plane, (2) forefoot twist with respect to midfoot in the frontal plane, (3) the height-to-length ratio of the medial longitudinal arch and (4) the hallux angle with respect to first metatarsal in the sagittal plane. A single subject performed level walking at her preferred pace in each of the three shoes with ten repetitions for each hole size. The largest hole that did not disrupt shoe integrity was an oval of 1.7 cm × 2.5 cm. The smallest shoe deformations were seen with the motion control shoe. The least change in foot joint motion was forefoot twist in both the neutral shoe and stability shoe for any size hole. This study demonstrates that for a hole smaller than this size, optical motion capture with a cluster-based multi-segment foot model is feasible for measuring foot-in-shoe kinematics in vivo. Copyright © 2011. Published by Elsevier Ltd.

  18. Time-Varying Market Integration and Expected Returns in Emerging Markets

    OpenAIRE

    de Jong, Frank; de Roon, Frans

    2001-01-01

    We use a simple model in which the expected returns in emerging markets depend on their systematic risk as measured by their beta relative to the world portfolio as well as on the level of integration in that market. The level of integration is a time-varying variable that depends on the market value of the assets that can be held by domestic investors only versus the market value of the assets that can be traded freely. Our empirical analysis for 30 emerging markets shows that there are stro...

  19. Time Varying Market Integration and Expected Returns in Emerging Markets

    OpenAIRE

    Jong, F.C.J.M. de; Roon, F.A. de

    2001-01-01

    We use a simple model in which the expected returns in emerging markets depend on their systematic risk as measured by their beta relative to the world portfolio as well as on the level of integration in that market. The level of integration is a time-varying variable that depends on the market value of the assets that can be held by domestic investors only versus the market value of the assets that can be traded freely. Our empirical analysis for 30 emerging markets shows that there are strong...

  20. High titer oncolytic measles virus production process by integration of dielectric spectroscopy as online monitoring system.

    Science.gov (United States)

    Grein, Tanja A; Loewe, Daniel; Dieken, Hauke; Salzig, Denise; Weidner, Tobias; Czermak, Peter

    2018-05-01

    Oncolytic viruses offer new hope to millions of patients with incurable cancer. One promising class of oncolytic viruses is Measles virus, but its broad administration to cancer patients is currently hampered by the inability to produce the large amounts of virus needed for treatment (10¹⁰–10¹² virus particles per dose). Measles virus is unstable, leading to very low virus titers during production. The time of infection and time of harvest are therefore critical parameters in a Measles virus production process, and their optimization requires an accurate online monitoring system. We integrated a probe based on dielectric spectroscopy (DS) into a stirred tank reactor to characterize the Measles virus production process in adherent growing Vero cells. We found that DS could be used to monitor cell adhesion on the microcarrier and that the optimal virus harvest time correlated with the global maximum permittivity signal. In 16 independent bioreactor runs, the maximum Measles virus titer was achieved approximately 40 hr after the permittivity maximum. Compared to an uncontrolled Measles virus production process, the integration of DS increased the maximum virus concentration by more than three orders of magnitude. This was sufficient to achieve an active Measles virus concentration of > 10¹⁰ TCID₅₀ ml⁻¹. © 2017 Wiley Periodicals, Inc.

  1. 20 CFR 10.806 - How are the maximum fees defined?

    Science.gov (United States)

    2010-04-01

    ... AMENDED Information for Medical Providers Medical Fee Schedule § 10.806 How are the maximum fees defined? For professional medical services, the Director shall maintain a schedule of maximum allowable fees.../Current Procedural Terminology (HCPCS/CPT) code which represents the relative skill, effort, risk and time...

  2. Global-scale high-resolution (~1 km) modelling of mean, maximum and minimum annual streamflow

    Science.gov (United States)

    Barbarossa, Valerio; Huijbregts, Mark; Hendriks, Jan; Beusen, Arthur; Clavreul, Julie; King, Henry; Schipper, Aafke

    2017-04-01

    Quantifying mean, maximum and minimum annual flow (AF) of rivers at ungauged sites is essential for a number of applications, including assessments of global water supply, ecosystem integrity and water footprints. AF metrics can be quantified with spatially explicit process-based models, which might be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict AF metrics based on climate and catchment characteristics. Yet, so far, regression models have mostly been developed at a regional scale and the extent to which they can be extrapolated to other regions is not known. We developed global-scale regression models that quantify mean, maximum and minimum AF as a function of catchment area and catchment-averaged slope, elevation, and mean, maximum and minimum annual precipitation and air temperature. We then used these models to obtain global 30 arc-seconds (~1 km) maps of mean, maximum and minimum AF for each year from 1960 through 2015, based on a newly developed hydrologically conditioned digital elevation model. We calibrated our regression models based on observations of discharge and catchment characteristics from about 4,000 catchments worldwide, ranging from 10⁰ to 10⁶ km² in size, and validated them against independent measurements as well as the output of a number of process-based global hydrological models (GHMs). The variance explained by our regression models ranged up to 90% and the performance of the models compared well with the performance of existing GHMs. Yet, our AF maps provide a level of spatial detail that cannot yet be achieved by current GHMs.
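
    As a rough illustration of the empirical regression approach described above (not the authors' actual model or coefficients), the following sketch fits a log-linear regression of mean annual flow on catchment area and mean annual precipitation using synthetic data; all variable names and values are hypothetical.

```python
import numpy as np

# Minimal sketch of the kind of empirical regression described above (not the
# authors' model): predict log mean annual flow from log catchment area and
# log mean annual precipitation by ordinary least squares on synthetic data.
rng = np.random.default_rng(0)
n = 500
log_area = rng.uniform(0, 6, n)          # log10 catchment area [km^2], hypothetical
log_precip = rng.uniform(2.5, 3.5, n)    # log10 mean annual precipitation [mm], hypothetical
log_af = 0.9 * log_area + 1.2 * log_precip - 4.0 + rng.normal(0, 0.2, n)  # synthetic "observations"

X = np.column_stack([np.ones(n), log_area, log_precip])
coef, *_ = np.linalg.lstsq(X, log_af, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((log_af - pred) ** 2) / np.sum((log_af - log_af.mean()) ** 2)
print("fitted coefficients:", coef, "R^2:", round(r2, 3))
```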

  3. The last glacial maximum

    Science.gov (United States)

    Clark, P.U.; Dyke, A.S.; Shakun, J.D.; Carlson, A.E.; Clark, J.; Wohlfarth, B.; Mitrovica, J.X.; Hostetler, S.W.; McCabe, A.M.

    2009-01-01

    We used 5704 ¹⁴C, ¹⁰Be, and ³He ages that span the interval from 10,000 to 50,000 years ago (10 to 50 ka) to constrain the timing of the Last Glacial Maximum (LGM) in terms of global ice-sheet and mountain-glacier extent. Growth of the ice sheets to their maximum positions occurred between 33.0 and 26.5 ka in response to climate forcing from decreases in northern summer insolation, tropical Pacific sea surface temperatures, and atmospheric CO2. Nearly all ice sheets were at their LGM positions from 26.5 ka to 19 to 20 ka, corresponding to minima in these forcings. The onset of Northern Hemisphere deglaciation 19 to 20 ka was induced by an increase in northern summer insolation, providing the source for an abrupt rise in sea level. The onset of deglaciation of the West Antarctic Ice Sheet occurred between 14 and 15 ka, consistent with evidence that this was the primary source for an abrupt rise in sea level ~14.5 ka.

  4. Binary versus non-binary information in real time series: empirical results and maximum-entropy matrix models

    Science.gov (United States)

    Almog, Assaf; Garlaschelli, Diego

    2014-09-01

    The dynamics of complex systems, from financial markets to the brain, can be monitored in terms of multiple time series of activity of the constituent units, such as stocks or neurons, respectively. While the main focus of time series analysis is on the magnitude of temporal increments, a significant piece of information is encoded into the binary projection (i.e. the sign) of such increments. In this paper we provide further evidence of this by showing strong nonlinear relations between binary and non-binary properties of financial time series. These relations are a novel quantification of the fact that extreme price increments occur more often when most stocks move in the same direction. We then introduce an information-theoretic approach to the analysis of the binary signature of single and multiple time series. Through the definition of maximum-entropy ensembles of binary matrices and their mapping to spin models in statistical physics, we quantify the information encoded into the simplest binary properties of real time series and identify the most informative property given a set of measurements. Our formalism is able to accurately replicate, and mathematically characterize, the observed binary/non-binary relations. We also obtain a phase diagram allowing us to identify, based only on the instantaneous aggregate return of a set of multiple time series, a regime where the so-called ‘market mode’ has an optimal interpretation in terms of collective (endogenous) effects, a regime where it is parsimoniously explained by pure noise, and a regime where it can be regarded as a combination of endogenous and exogenous factors. Our approach allows us to connect spin models, simple stochastic processes, and ensembles of time series inferred from partial information.
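
    The core idea of the binary projection can be illustrated with a few lines of code: take the signs of the increments of many synthetic series that share a common factor and compare the magnitude of the aggregate increment with how one-sided the signs are. This is a toy demonstration on synthetic data, not the paper's maximum-entropy formalism.

```python
import numpy as np

# Sketch of the "binary projection" idea: take the signs of the temporal
# increments of several synthetic time series and compare a non-binary
# property (magnitude of the aggregate increment) with a binary one (how
# one-sided the signs are). Data are synthetic, not from the paper.
rng = np.random.default_rng(1)
n_series, n_steps = 50, 1000
common = rng.normal(0, 1, n_steps)                       # shared "market" factor
increments = 0.5 * common + rng.normal(0, 1, (n_series, n_steps))

signs = np.sign(increments)                              # binary projection
frac_up = (signs > 0).mean(axis=0)                       # fraction moving up at each step
agg_magnitude = np.abs(increments.mean(axis=0))          # magnitude of aggregate increment

one_sidedness = np.abs(frac_up - 0.5)
print("corr(|aggregate|, one-sidedness):",
      round(np.corrcoef(agg_magnitude, one_sidedness)[0, 1], 3))
```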

  5. Binary versus non-binary information in real time series: empirical results and maximum-entropy matrix models

    International Nuclear Information System (INIS)

    Almog, Assaf; Garlaschelli, Diego

    2014-01-01

    The dynamics of complex systems, from financial markets to the brain, can be monitored in terms of multiple time series of activity of the constituent units, such as stocks or neurons, respectively. While the main focus of time series analysis is on the magnitude of temporal increments, a significant piece of information is encoded into the binary projection (i.e. the sign) of such increments. In this paper we provide further evidence of this by showing strong nonlinear relations between binary and non-binary properties of financial time series. These relations are a novel quantification of the fact that extreme price increments occur more often when most stocks move in the same direction. We then introduce an information-theoretic approach to the analysis of the binary signature of single and multiple time series. Through the definition of maximum-entropy ensembles of binary matrices and their mapping to spin models in statistical physics, we quantify the information encoded into the simplest binary properties of real time series and identify the most informative property given a set of measurements. Our formalism is able to accurately replicate, and mathematically characterize, the observed binary/non-binary relations. We also obtain a phase diagram allowing us to identify, based only on the instantaneous aggregate return of a set of multiple time series, a regime where the so-called ‘market mode’ has an optimal interpretation in terms of collective (endogenous) effects, a regime where it is parsimoniously explained by pure noise, and a regime where it can be regarded as a combination of endogenous and exogenous factors. Our approach allows us to connect spin models, simple stochastic processes, and ensembles of time series inferred from partial information. (paper)

  6. Long-time integrator for the study on plasma parameter fluctuations

    International Nuclear Information System (INIS)

    Zalkind, V.M.; Tarasenko, V.P.

    1975-01-01

    A device measuring the absolute value (x) of a fluctuating quantity x(t) averaged over a large number of realizations is described. The specific features of the device are the use of a time selector (Δt = 50 μs – 1 ms) and a large time-integration constant (τ = 30 hrs). The device is meant for studying fluctuations of parameters of a pulsed plasma with a low repetition frequency

  7. Molecular radiotherapy: the NUKFIT software for calculating the time-integrated activity coefficient.

    Science.gov (United States)

    Kletting, P; Schimmel, S; Kestler, H A; Hänscheid, H; Luster, M; Fernández, M; Bröer, J H; Nosske, D; Lassmann, M; Glatting, G

    2013-10-01

    Calculation of the time-integrated activity coefficient (residence time) is a crucial step in dosimetry for molecular radiotherapy. However, available software is deficient in that it is either not tailored for the use in molecular radiotherapy and/or does not include all required estimation methods. The aim of this work was therefore the development and programming of an algorithm which allows for an objective and reproducible determination of the time-integrated activity coefficient and its standard error. The algorithm includes the selection of a set of fitting functions from predefined sums of exponentials and the choice of an error model for the used data. To estimate the values of the adjustable parameters an objective function, depending on the data, the parameters of the error model, the fitting function and (if required and available) Bayesian information, is minimized. To increase reproducibility and user-friendliness the starting values are automatically determined using a combination of curve stripping and random search. Visual inspection, the coefficient of determination, the standard error of the fitted parameters, and the correlation matrix are provided to evaluate the quality of the fit. The functions which are most supported by the data are determined using the corrected Akaike information criterion. The time-integrated activity coefficient is estimated by analytically integrating the fitted functions. Its standard error is determined assuming Gaussian error propagation. The software was implemented using MATLAB. To validate the proper implementation of the objective function and the fit functions, the results of NUKFIT and SAAM numerical, a commercially available software tool, were compared. The automatic search for starting values was successfully tested for reproducibility. The quality criteria applied in conjunction with the Akaike information criterion allowed the selection of suitable functions. Function fit parameters and their standard
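
    A minimal sketch of the general workflow described above (fit a sum of exponentials, then integrate the fit analytically) is shown below; it is not the NUKFIT code, the sampling times and activities are hypothetical, and a single exponential stands in for the predefined function set.

```python
import numpy as np
from scipy.optimize import curve_fit

# Minimal sketch of the general procedure (not the NUKFIT software): fit a
# mono-exponential A(t) = A0 * exp(-lambda * t) to the fraction of administered
# activity and integrate the fit analytically; the time-integrated activity
# coefficient (residence time) is then A0 / lambda.
def mono_exp(t, a0, lam):
    return a0 * np.exp(-lam * t)

t = np.array([1.0, 4.0, 24.0, 48.0, 96.0])          # h, hypothetical sampling times
activity = np.array([0.30, 0.27, 0.15, 0.08, 0.02]) # fraction of administered activity

(a0, lam), _ = curve_fit(mono_exp, t, activity, p0=[activity[0], 0.05])
residence_time = a0 / lam                            # analytic integral of the fit, in h
print(f"A0 = {a0:.3f}, lambda = {lam:.4f} 1/h, residence time = {residence_time:.1f} h")
```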

  8. SEMIGROUPS N TIMES INTEGRATED AND AN APPLICATION TO A PROBLEM OF CAUCHY TYPE

    Directory of Open Access Journals (Sweden)

    Danessa Chirinos Fernández

    2016-06-01

    Full Text Available The theory of n-times integrated semigroups is a generalization of strongly continuous semigroups, developed from 1984 onwards, and is widely used to study the existence and uniqueness of solutions of Cauchy-type problems in which the domain of the operator is not necessarily dense. This paper presents an application of n-times integrated semigroups to a problem of viscoelasticity, which is formulated as a Cauchy problem on a Banach space.

  9. Maximum permissible dose

    International Nuclear Information System (INIS)

    Anon.

    1979-01-01

    This chapter presents a historic overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed

  10. Towards a portable microchip system with integrated thermal control and polymer waveguides for real-time PCR

    DEFF Research Database (Denmark)

    Wang, Zhenyu; Sekulovic, Andrea; Kutter, Jörg Peter

    2006-01-01

    A novel real-time PCR microchip platform with integrated thermal system and polymer waveguides has been developed. The integrated polymer optical system for real-time monitoring of PCR was fabricated in the same SU-8 layer as the PCR chamber, without additional masking steps. Two suitable DNA...... binding dyes, SYTOX Orange and TO-PRO-3, were selected and tested for the real-time PCR processes. As a model, cadF gene of Campylobacter jejuni has been amplified on the microchip. Using the integrated optical system of the real-time PCR microchip, the measured cycle threshold values of the real-time PCR...

  11. On the initial condition problem of the time domain PMCHWT surface integral equation

    KAUST Repository

    Uysal, Ismail Enes; Bagci, Hakan; Ergin, A. Arif; Ulku, H. Arda

    2017-01-01

    Non-physical, linearly increasing and constant current components are induced in marching on-in-time solution of time domain surface integral equations when initial conditions on time derivatives of (unknown) equivalent currents are not enforced

  12. Integration of Simulink, MARTe and MDSplus for rapid development of real-time applications

    Energy Technology Data Exchange (ETDEWEB)

    Manduchi, G., E-mail: gabriele.manduchi@igi.cnr.it [Consorzio RFX (CNR, ENEA, INFN, Università di Padova, Acciaierie Venete SpA), Padova (Italy); Luchetta, A.; Taliercio, C. [Consorzio RFX (CNR, ENEA, INFN, Università di Padova, Acciaierie Venete SpA), Padova (Italy); Neto, A.; Sartori, F. [Fusion for Energy, Barcelona (Spain); De Tommasi, G. [Fusion for Energy, Barcelona (Spain); Consorzio CREATE/DIETI, Università degli Studi di Napoli Federico II, Via Claudio 21, 80125 Napoli (Italy)

    2015-10-15

    Highlights: • The integration of two frameworks for real-time control and data acquisition is described. • The integration may significantly speed up the development of system components. • The system also includes a code generator for the integration of code written in Simulink. • A real-time control system can be implemented without the need to write any line of code. - Abstract: Simulink is a graphical data flow programming tool for modeling and simulating dynamic systems. A component of Simulink, called Simulink Coder, generates C code from Simulink diagrams. MARTe is a framework for the implementation of real-time systems, currently in use in several fusion experiments. MDSplus is a framework widely used in the fusion community for the management of data. The three systems provide a solution to different facets of the same process, that is, real-time plasma control development. Simulink diagrams will describe the algorithms used in control, which will be implemented as MARTe GAMs and which will use parameters read from and produce results written to MDSplus pulse files. The three systems have been integrated in order to provide a tool suitable to speed up the development of real-time control applications. In particular, it will be shown how from a Simulink diagram describing a given algorithm to be used in a control system, it is possible to generate in an automated way the corresponding MARTe and MDSplus components that can be assembled to implement the target system.

  13. Integration of Simulink, MARTe and MDSplus for rapid development of real-time applications

    International Nuclear Information System (INIS)

    Manduchi, G.; Luchetta, A.; Taliercio, C.; Neto, A.; Sartori, F.; De Tommasi, G.

    2015-01-01

    Highlights: • The integration of two frameworks for real-time control and data acquisition is described. • The integration may significantly speed up the development of system components. • The system also includes a code generator for the integration of code written in Simulink. • A real-time control system can be implemented without the need to write any line of code. - Abstract: Simulink is a graphical data flow programming tool for modeling and simulating dynamic systems. A component of Simulink, called Simulink Coder, generates C code from Simulink diagrams. MARTe is a framework for the implementation of real-time systems, currently in use in several fusion experiments. MDSplus is a framework widely used in the fusion community for the management of data. The three systems provide a solution to different facets of the same process, that is, real-time plasma control development. Simulink diagrams will describe the algorithms used in control, which will be implemented as MARTe GAMs and which will use parameters read from and produce results written to MDSplus pulse files. The three systems have been integrated in order to provide a tool suitable to speed up the development of real-time control applications. In particular, it will be shown how from a Simulink diagram describing a given algorithm to be used in a control system, it is possible to generate in an automated way the corresponding MARTe and MDSplus components that can be assembled to implement the target system.

  14. Integrated Oil spill detection and forecasting using MOON real time data

    OpenAIRE

    De Dominicis, M.; Pinardi, N.; Coppini, G.; Tonani, M.; Guarnieri, A.; Zodiatis, G.; Lardner, R.; Santoleri, R.

    2009-01-01

    MOON (Mediterranean Operational Oceanography Network) is an operational distributed system ready to provide quality controlled and timely marine observations (in situ and satellite) and environmental analyses and predictions for management of oil spill accidents. MOON operational systems are based upon the real time functioning of an integrated system composed of the Real Time Observing system, the regional, sub-regional and coastal forecasting systems and a products dissemination system. All...

  15. Measurement of time-integrated $D^0\\to hh$ asymmetries at LHCb

    CERN Document Server

    Marino, Pietro

    2016-01-01

    LHCb collected the world's largest sample of charm decays during LHC Run I, corresponding to an integrated luminosity of 3 fb$^{-1}$. This has permitted many precision measurements of charm mixing and CP violation parameters. One of the most precise and important observables is the so-called $\Delta A_{CP}$ parameter, corresponding to the difference between the time-integrated CP asymmetry in singly Cabibbo-suppressed $D^{0} \rightarrow K^{+}K^{-}$ and $D^{0} \rightarrow \pi^{+}\pi^{-}$ decay modes. The flavour of the $D^{0}$ meson is inferred from the charge of the pion in $D^{*+} \rightarrow D^{0}\pi^{+}$ and $D^{*-} \rightarrow \overline{D}^{0}\pi^{-}$ decays. $\Delta A_{CP} \equiv A_{raw}(K^{+}K^{-}) - A_{raw}(\pi^{+}\pi^{-})$ is measured to be $\Delta A_{CP} = (-0.10 \pm 0.08 \pm 0.03)\%$, where the first uncertainty is statistical and the second systematic. The measurement is consistent with the no-CP-violation hypothesis and represents the most precise measurement of time-integrated CP asymmetry ...

  16. What controls the maximum magnitude of injection-induced earthquakes?

    Science.gov (United States)

    Eaton, D. W. S.

    2017-12-01

    Three different approaches for estimation of maximum magnitude are considered here, along with their implications for managing risk. The first approach is based on a deterministic limit for seismic moment proposed by McGarr (1976), which was originally designed for application to mining-induced seismicity. This approach has since been reformulated for earthquakes induced by fluid injection (McGarr, 2014). In essence, this method assumes that the upper limit for seismic moment release is constrained by the pressure-induced stress change. A deterministic limit is given by the product of shear modulus and the net injected fluid volume. This method is based on the assumptions that the medium is fully saturated and in a state of incipient failure. An alternative geometrical approach was proposed by Shapiro et al. (2011), who postulated that the rupture area for an induced earthquake falls entirely within the stimulated volume. This assumption reduces the maximum-magnitude problem to one of estimating the largest potential slip surface area within a given stimulated volume. Finally, van der Elst et al. (2016) proposed that the maximum observed magnitude, statistically speaking, is the expected maximum value for a finite sample drawn from an unbounded Gutenberg-Richter distribution. These three models imply different approaches for risk management. The deterministic method proposed by McGarr (2014) implies that a ceiling on the maximum magnitude can be imposed by limiting the net injected volume, whereas the approach developed by Shapiro et al. (2011) implies that the time-dependent maximum magnitude is governed by the spatial size of the microseismic event cloud. Finally, the sample-size hypothesis of Van der Elst et al. (2016) implies that the best available estimate of the maximum magnitude is based upon observed seismicity rate. The latter two approaches suggest that real-time monitoring is essential for effective management of risk. A reliable estimate of maximum
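
    The deterministic limit attributed to McGarr (2014) in this record translates directly into a one-line calculation: the maximum seismic moment is the shear modulus times the net injected volume, converted to moment magnitude with the standard Hanks-Kanamori relation. The sketch below uses illustrative input values only.

```python
import math

# Sketch of the deterministic bound attributed to McGarr (2014) in the text:
# the upper limit on seismic moment equals shear modulus times net injected
# volume, M0_max = G * dV, converted to moment magnitude with the standard
# Hanks-Kanamori relation. Input values are illustrative only.
def mcgarr_max_magnitude(shear_modulus_pa, injected_volume_m3):
    m0_max = shear_modulus_pa * injected_volume_m3        # N*m
    return (2.0 / 3.0) * (math.log10(m0_max) - 9.1)       # moment magnitude Mw

G = 3.0e10        # Pa, typical crustal shear modulus
dV = 1.0e5        # m^3, hypothetical net injected volume
print(f"Mw_max ~ {mcgarr_max_magnitude(G, dV):.2f}")
```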

  17. Integrative real-time geographic visualization of energy resources

    International Nuclear Information System (INIS)

    Sorokine, A.; Shankar, M.; Stovall, J.; Bhaduri, B.; King, T.; Fernandez, S.; Datar, N.; Omitaomu, O.

    2009-01-01

    'Full text:' Several models forecast that climatic changes will increase the frequency of disastrous events like droughts, hurricanes, and snow storms. Responding to these events and also to power outages caused by system errors such as the 2003 North American blackout require an interconnect-wide real-time monitoring system for various energy resources. Such a system should be capable of providing situational awareness to its users in the government and energy utilities by dynamically visualizing the status of the elements of the energy grid infrastructure and supply chain in geographic contexts. We demonstrate an approach that relies on Google Earth and similar standard-based platforms as client-side geographic viewers with a data-dependent server component. The users of the system can view status information in spatial and temporal contexts. These data can be integrated with a wide range of geographic sources including all standard Google Earth layers and a large number of energy and environmental data feeds. In addition, we show a real-time spatio-temporal data sharing capability across the users of the system, novel methods for visualizing dynamic network data, and a fine-grain access to very large multi-resolution geographic datasets for faster delivery of the data. The system can be extended to integrate contingency analysis results and other grid models to assess recovery and repair scenarios in the case of major disruption. (author)

  18. 'Just-in-Time' Battery Charge Depletion Control for PHEVs and E-REVs for Maximum Battery Life

    Energy Technology Data Exchange (ETDEWEB)

    DeVault, Robert C [ORNL

    2009-01-01

    Conventional methods of vehicle operation for Plug-in Hybrid Vehicles first discharge the battery to a minimum State of Charge (SOC) before switching to charge sustaining operation. This is very demanding on the battery, maximizing the number of trips ending with a depleted battery and maximizing the distance driven on a depleted battery over the vehicle's life. Several methods have been proposed to reduce the number of trips ending with a deeply discharged battery and also eliminate the need for extended driving on a depleted battery. An optimum SOC can be maintained for long battery life before discharging the battery so that the vehicle reaches an electric plug-in destination just as the battery reaches the minimum operating SOC. These Just-in-Time methods provide maximum effective battery life while getting virtually the same electricity from the grid.

  19. A study of pile-up in integrated time-correlated single photon counting systems.

    Science.gov (United States)

    Arlt, Jochen; Tyndall, David; Rae, Bruce R; Li, David D-U; Richardson, Justin A; Henderson, Robert K

    2013-10-01

    Recent demonstration of highly integrated, solid-state, time-correlated single photon counting (TCSPC) systems in CMOS technology is set to provide significant increases in performance over existing bulky, expensive hardware. Arrays of single-photon avalanche diode (SPAD) detectors, timing channels, and signal processing can be integrated on a single silicon chip with a degree of parallelism and computational speed that is unattainable by discrete photomultiplier tube and photon counting card solutions. New multi-channel, multi-detector TCSPC sensor architectures with greatly enhanced throughput due to minimal detector transit (dead) time or timing channel dead time are now feasible. In this paper, we study the potential for future integrated, solid-state TCSPC sensors to exceed the photon pile-up limit through analytic formula and simulation. The results are validated using a 10% fill factor SPAD array and an 8-channel, 52 ps resolution time-to-digital conversion architecture with embedded lifetime estimation. It is demonstrated that pile-up insensitive acquisition is attainable at greater than 10 times the pulse repetition rate providing over 60 dB of extended dynamic range to the TCSPC technique. Our results predict future CMOS TCSPC sensors capable of live-cell transient observations in confocal scanning microscopy, improved resolution of near-infrared optical tomography systems, and fluorescence lifetime activated cell sorting.
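
    For background, the classical single-channel pile-up argument can be written down in a few lines: assuming Poisson photon arrivals per excitation cycle, the measured fraction of cycles with at least one detected photon can be inverted to estimate the true mean photon rate. This is textbook TCSPC reasoning, not the specific multi-channel sensor model of the paper.

```python
import math

# Classical single-channel pile-up sketch: if photon arrivals per excitation
# cycle are Poisson with mean mu, the probability that a cycle records at
# least one photon is p = 1 - exp(-mu). Inverting this gives an estimate of
# the true mean rate from the measured count fraction.
def estimate_mean_photons(detected_counts, excitation_pulses):
    p = detected_counts / excitation_pulses
    if p >= 1.0:
        raise ValueError("every cycle saturated; rate cannot be recovered")
    return -math.log(1.0 - p)

# hypothetical numbers: 8e5 counts recorded in 1e6 excitation cycles
mu = estimate_mean_photons(8e5, 1e6)
print(f"measured fraction 0.80 -> estimated {mu:.2f} photons per cycle")
```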

  20. Load Balancing Integrated Least Slack Time-Based Appliance Scheduling for Smart Home Energy Management.

    Science.gov (United States)

    Silva, Bhagya Nathali; Khan, Murad; Han, Kijun

    2018-02-25

    The emergence of smart devices and smart appliances has highly favored the realization of the smart home concept. Modern smart home systems handle a wide range of user requirements. Energy management and energy conservation are in the spotlight when deploying sophisticated smart homes. However, the performance of energy management systems is highly influenced by user behaviors and adopted energy management approaches. Appliance scheduling is widely accepted as an effective mechanism to manage domestic energy consumption. Hence, we propose a smart home energy management system that reduces unnecessary energy consumption by integrating an automated switching off system with load balancing and appliance scheduling algorithm. The load balancing scheme acts according to defined constraints such that the cumulative energy consumption of the household is managed below the defined maximum threshold. The scheduling of appliances adheres to the least slack time (LST) algorithm while considering user comfort during scheduling. The performance of the proposed scheme has been evaluated against an existing energy management scheme through computer simulation. The simulation results have revealed a significant improvement gained through the proposed LST-based energy management scheme in terms of cost of energy, along with reduced domestic energy consumption facilitated by an automated switching off mechanism.
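
    A compact sketch of a least-slack-time dispatch loop with a power cap, illustrating the scheduling idea described above, is given below; it is a simplified toy model, not the authors' algorithm, and the appliance names, power ratings and thresholds are hypothetical.

```python
# Sketch of a least-slack-time (LST) appliance dispatch loop with a simple
# power cap (a toy illustration, not the authors' exact algorithm). Each
# appliance has a power draw, a remaining run time and a deadline; at every
# slot the appliances with the smallest slack are switched on first, as long
# as the cumulative load stays below the threshold.
from dataclasses import dataclass

@dataclass
class Appliance:
    name: str
    power_kw: float
    remaining_slots: int
    deadline_slot: int

def schedule(appliances, max_load_kw, horizon):
    plan = []
    for t in range(horizon):
        pending = [a for a in appliances if a.remaining_slots > 0]
        # slack = time to deadline minus remaining work; smallest slack first
        pending.sort(key=lambda a: (a.deadline_slot - t) - a.remaining_slots)
        load, running = 0.0, []
        for a in pending:
            if load + a.power_kw <= max_load_kw:
                load += a.power_kw
                a.remaining_slots -= 1
                running.append(a.name)
        plan.append((t, running, round(load, 2)))
    return plan

if __name__ == "__main__":
    apps = [Appliance("washer", 2.0, 2, 4),
            Appliance("dishwasher", 1.5, 1, 6),
            Appliance("ev_charger", 3.0, 4, 8)]
    for slot, names, load in schedule(apps, max_load_kw=4.0, horizon=8):
        print(slot, names, load)
```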

  1. Load Balancing Integrated Least Slack Time-Based Appliance Scheduling for Smart Home Energy Management

    Science.gov (United States)

    Silva, Bhagya Nathali; Khan, Murad; Han, Kijun

    2018-01-01

    The emergence of smart devices and smart appliances has highly favored the realization of the smart home concept. Modern smart home systems handle a wide range of user requirements. Energy management and energy conservation are in the spotlight when deploying sophisticated smart homes. However, the performance of energy management systems is highly influenced by user behaviors and adopted energy management approaches. Appliance scheduling is widely accepted as an effective mechanism to manage domestic energy consumption. Hence, we propose a smart home energy management system that reduces unnecessary energy consumption by integrating an automated switching off system with load balancing and appliance scheduling algorithm. The load balancing scheme acts according to defined constraints such that the cumulative energy consumption of the household is managed below the defined maximum threshold. The scheduling of appliances adheres to the least slack time (LST) algorithm while considering user comfort during scheduling. The performance of the proposed scheme has been evaluated against an existing energy management scheme through computer simulation. The simulation results have revealed a significant improvement gained through the proposed LST-based energy management scheme in terms of cost of energy, along with reduced domestic energy consumption facilitated by an automated switching off mechanism. PMID:29495346

  2. Load Balancing Integrated Least Slack Time-Based Appliance Scheduling for Smart Home Energy Management

    Directory of Open Access Journals (Sweden)

    Bhagya Nathali Silva

    2018-02-01

    Full Text Available The emergence of smart devices and smart appliances has highly favored the realization of the smart home concept. Modern smart home systems handle a wide range of user requirements. Energy management and energy conservation are in the spotlight when deploying sophisticated smart homes. However, the performance of energy management systems is highly influenced by user behaviors and adopted energy management approaches. Appliance scheduling is widely accepted as an effective mechanism to manage domestic energy consumption. Hence, we propose a smart home energy management system that reduces unnecessary energy consumption by integrating an automated switching off system with load balancing and appliance scheduling algorithm. The load balancing scheme acts according to defined constraints such that the cumulative energy consumption of the household is managed below the defined maximum threshold. The scheduling of appliances adheres to the least slack time (LST) algorithm while considering user comfort during scheduling. The performance of the proposed scheme has been evaluated against an existing energy management scheme through computer simulation. The simulation results have revealed a significant improvement gained through the proposed LST-based energy management scheme in terms of cost of energy, along with reduced domestic energy consumption facilitated by an automated switching off mechanism.

  3. Development of a precise long-time digital integrator for magnetic measurements in a tokamak

    Energy Technology Data Exchange (ETDEWEB)

    Kurihara, Kenichi; Kawamata, Youichi [Japan Atomic Energy Research Inst., Naka, Ibaraki (Japan). Naka Fusion Research Establishment

    1997-10-01

    Long-time D-T burning operation in a tokamak requires that a magnetic sensor work in an intense 14-MeV neutron field, and that the measurement system output precise magnetic field values. Time-integration of the voltage produced in a simple pick-up coil has the preferable features of good time response, easy maintenance, and resistance to neutron irradiation. However, an inevitable signal drift makes it difficult to apply the method to long integration times. To solve this problem, we have developed a new digital integrator (a voltage-to-frequency converter and an up-down counter) and tested trial boards in the JT-60 magnetic measurements. This report describes in detail the problems encountered and their remedies through the development steps, and shows how to apply the method to ITER operation. (author)
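
    The integrator principle described here (a voltage-to-frequency converter feeding an up-down counter) can be sketched numerically as follows; this is a conceptual illustration with an arbitrary weight per count, not the JT-60 hardware design.

```python
import numpy as np

# Conceptual sketch of the digital-integrator principle (voltage-to-frequency
# converter feeding an up-down counter): the pick-up coil voltage is converted
# to a pulse rate proportional to |V|, pulses are counted up for positive V and
# down for negative V, and the count times the weight per pulse approximates
# the time integral of V (i.e. the flux).
def digital_integrate(voltage, dt, volts_sec_per_count=1e-6):
    counter = 0
    for v in voltage:
        # pulses emitted during this sample; sign selects up or down counting
        pulses = int(round(abs(v) * dt / volts_sec_per_count))
        counter += pulses if v >= 0 else -pulses
    return counter * volts_sec_per_count        # reconstructed integral in V*s

t = np.linspace(0.0, 1.0, 10000)
dt = t[1] - t[0]
v = 0.5 * np.sin(2 * np.pi * t)                 # test waveform, integral over a period is 0
print("digital:", digital_integrate(v, dt), " analytic:", 0.0)
```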

  4. Time-varying block codes for synchronisation errors: maximum a posteriori decoder and practical issues

    Directory of Open Access Journals (Sweden)

    Johann A. Briffa

    2014-06-01

    Full Text Available In this study, the authors consider time-varying block (TVB codes, which generalise a number of previous synchronisation error-correcting codes. They also consider various practical issues related to maximum a posteriori (MAP decoding of these codes. Specifically, they give an expression for the expected distribution of drift between transmitter and receiver because of synchronisation errors. They determine an appropriate choice for state space limits based on the drift probability distribution. In turn, they obtain an expression for the decoder complexity under given channel conditions in terms of the state space limits used. For a given state space, they also give a number of optimisations that reduce the algorithm complexity with no further loss of decoder performance. They also show how the MAP decoder can be used in the absence of known frame boundaries, and demonstrate that an appropriate choice of decoder parameters allows the decoder to approach the performance when frame boundaries are known, at the expense of some increase in complexity. Finally, they express some existing constructions as TVB codes, comparing performance with published results and showing that improved performance is possible by taking advantage of the flexibility of TVB codes.

  5. Real-time implementation of optimized maximum noise fraction transform for feature extraction of hyperspectral images

    Science.gov (United States)

    Wu, Yuanfeng; Gao, Lianru; Zhang, Bing; Zhao, Haina; Li, Jun

    2014-01-01

    We present a parallel implementation of the optimized maximum noise fraction (G-OMNF) transform algorithm for feature extraction of hyperspectral images on commodity graphics processing units (GPUs). The proposed approach explored the algorithm data-level concurrency and optimized the computing flow. We first defined a three-dimensional grid, in which each thread calculates a sub-block data to easily facilitate the spatial and spectral neighborhood data searches in noise estimation, which is one of the most important steps involved in OMNF. Then, we optimized the processing flow and computed the noise covariance matrix before computing the image covariance matrix to reduce the original hyperspectral image data transmission. These optimization strategies can greatly improve the computing efficiency and can be applied to other feature extraction algorithms. The proposed parallel feature extraction algorithm was implemented on an Nvidia Tesla GPU using the compute unified device architecture and basic linear algebra subroutines library. Through the experiments on several real hyperspectral images, our GPU parallel implementation provides a significant speedup of the algorithm compared with the CPU implementation, especially for highly data parallelizable and arithmetically intensive algorithm parts, such as noise estimation. In order to further evaluate the effectiveness of G-OMNF, we used two different applications: spectral unmixing and classification for evaluation. Considering the sensor scanning rate and the data acquisition time, the proposed parallel implementation met the on-board real-time feature extraction.
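
    A plain CPU sketch of the underlying maximum noise fraction transform (noise covariance estimated from neighbouring-pixel differences, followed by a generalized eigendecomposition) is shown below for orientation; it is not the G-OMNF GPU code, and the synthetic cube is only a stand-in for real hyperspectral data.

```python
import numpy as np
from scipy.linalg import eigh

# CPU sketch of the maximum noise fraction (MNF) idea (not the G-OMNF GPU
# code): estimate the noise covariance from differences between spatially
# adjacent pixels, solve the generalized eigenproblem, and project the data
# onto components ordered by signal-to-noise ratio.
def mnf(cube):                       # cube: (rows, cols, bands)
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands)
    X = X - X.mean(axis=0)
    # noise estimate: horizontal neighbour differences
    noise = (cube[:, 1:, :] - cube[:, :-1, :]).reshape(-1, bands) / np.sqrt(2.0)
    cov_noise = np.cov(noise, rowvar=False)
    cov_data = np.cov(X, rowvar=False)
    # generalized eigenproblem: cov_data v = lambda cov_noise v
    eigvals, eigvecs = eigh(cov_data, cov_noise)
    order = np.argsort(eigvals)[::-1]          # highest SNR-like ratio first
    return (X @ eigvecs[:, order]).reshape(rows, cols, bands), eigvals[order]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    synthetic = rng.normal(size=(60, 60, 8)).cumsum(axis=2)   # toy hyperspectral cube
    transformed, snr_like = mnf(synthetic)
    print(transformed.shape, snr_like[:3])
```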

  6. MPBoot: fast phylogenetic maximum parsimony tree inference and bootstrap approximation.

    Science.gov (United States)

    Hoang, Diep Thi; Vinh, Le Sy; Flouri, Tomáš; Stamatakis, Alexandros; von Haeseler, Arndt; Minh, Bui Quang

    2018-02-02

    The nonparametric bootstrap is widely used to measure the branch support of phylogenetic trees. However, bootstrapping is computationally expensive and remains a bottleneck in phylogenetic analyses. Recently, an ultrafast bootstrap approximation (UFBoot) approach was proposed for maximum likelihood analyses. However, such an approach is still missing for maximum parsimony. To close this gap we present MPBoot, an adaptation and extension of UFBoot to compute branch supports under the maximum parsimony principle. MPBoot works for both uniform and non-uniform cost matrices. Our analyses on biological DNA and protein showed that under uniform cost matrices, MPBoot runs on average 4.7 (DNA) to 7 times (protein data) (range: 1.2-20.7) faster than the standard parsimony bootstrap implemented in PAUP*; but 1.6 (DNA) to 4.1 times (protein data) slower than the standard bootstrap with a fast search routine in TNT (fast-TNT). However, for non-uniform cost matrices MPBoot is 5 (DNA) to 13 times (protein data) (range:0.3-63.9) faster than fast-TNT. We note that MPBoot achieves better scores more frequently than PAUP* and fast-TNT. However, this effect is less pronounced if an intensive but slower search in TNT is invoked. Moreover, experiments on large-scale simulated data show that while both PAUP* and TNT bootstrap estimates are too conservative, MPBoot bootstrap estimates appear more unbiased. MPBoot provides an efficient alternative to the standard maximum parsimony bootstrap procedure. It shows favorable performance in terms of run time, the capability of finding a maximum parsimony tree, and high bootstrap accuracy on simulated as well as empirical data sets. MPBoot is easy-to-use, open-source and available at http://www.cibiv.at/software/mpboot .

  7. Design and Implementation of Photovoltaic Maximum Power Point Tracking Controller

    Directory of Open Access Journals (Sweden)

    Fawaz S. Abdullah

    2018-03-01

    Full Text Available The power supplied by any solar array depends upon environmental conditions such as the weather (temperature and radiation intensity) and the incident angle of the radiant source. The work studies maximum power tracking schemes and compares system performance without and with different types of controllers. The maximum power points of the solar panel under test were studied and compared for two controller types. The first controller is of the proportional-integral-derivative type and the second is a perturbation and observation (perturb-and-observe) algorithm controller. The associated converter system is microcontroller-based, and the maximum power point results of the photovoltaic panels were studied and compared under the two controllers. The experimental test results were compared with simulation results to verify accurate performance.
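
    For context, the perturb-and-observe algorithm compared in this record can be written in a few lines: perturb the operating voltage, observe the resulting change in power, and reverse direction whenever power drops. The sketch below uses a hypothetical single-peak PV curve, not the panel characterized in the paper.

```python
# Minimal perturb-and-observe (P&O) MPPT sketch (generic textbook form, not
# the authors' controller): perturb the operating voltage, observe the change
# in power, and keep moving in the direction that increases power.
def pv_power(v):
    # hypothetical PV curve with a single maximum (around 16 V here)
    i = max(0.0, 5.0 * (1.0 - (v / 21.0) ** 8))
    return v * i

def perturb_and_observe(v0=10.0, step=0.2, iterations=200):
    v, p_prev, direction = v0, pv_power(v0), +1
    for _ in range(iterations):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:           # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v, p_prev

v_mpp, p_mpp = perturb_and_observe()
print(f"operating point ~{v_mpp:.2f} V, power ~{p_mpp:.1f} W")
```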

  8. Maximum surface level and temperature histories for Hanford waste tanks

    International Nuclear Information System (INIS)

    Flanagan, B.D.; Ha, N.D.; Huisingh, J.S.

    1994-01-01

    Radioactive defense waste resulting from the chemical processing of spent nuclear fuel has been accumulating at the Hanford Site since 1944. This waste is stored in underground waste-storage tanks. The Hanford Site Tank Farm Facilities Interim Safety Basis (ISB) provides a ready reference to the safety envelope for applicable tank farm facilities and installations. During preparation of the ISB, tank structural integrity concerns were identified as a key element in defining the safety envelope. These concerns, along with several deficiencies in the technical bases associated with the structural integrity issues and the corresponding operational limits/controls specified for conduct of normal tank farm operations are documented in the ISB. Consequently, a plan was initiated to upgrade the safety envelope technical bases by conducting Accelerated Safety Analyses-Phase 1 (ASA-Phase 1) sensitivity studies and additional structural evaluations. The purpose of this report is to facilitate the ASA-Phase 1 studies and future analyses of the single-shell tanks (SSTs) and double-shell tanks (DSTs) by compiling a quantitative summary of some of the past operating conditions the tanks have experienced during their existence. This report documents the available summaries of recorded maximum surface levels and maximum waste temperatures and references other sources for more specific data

  9. Information Entropy Production of Maximum Entropy Markov Chains from Spike Trains

    Directory of Open Access Journals (Sweden)

    Rodrigo Cofré

    2018-01-01

    Full Text Available The spiking activity of neuronal networks follows laws that are not time-reversal symmetric; the notion of pre-synaptic and post-synaptic neurons, stimulus correlations and noise correlations have a clear time order. Therefore, a biologically realistic statistical model for the spiking activity should be able to capture some degree of time irreversibility. We use the thermodynamic formalism to build a framework in the context of maximum entropy models to quantify the degree of time irreversibility, providing an explicit formula for the information entropy production of the inferred maximum entropy Markov chain. We provide examples to illustrate our results and discuss the importance of time irreversibility for modeling the spike train statistics.
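
    The entropy production referred to above has a standard form for finite-state Markov chains in stochastic thermodynamics, and it can be evaluated directly; the sketch below does so for a small cyclic (time-irreversible) transition matrix. It is a generic illustration, not the paper's spike-train estimator.

```python
import numpy as np

# Sketch of the information entropy production of a stationary Markov chain
# (generic formula from stochastic thermodynamics, not the paper's estimator):
# sigma = sum_ij pi_i P_ij log( (pi_i P_ij) / (pi_j P_ji) ).
# It vanishes exactly when detailed balance (time reversibility) holds.
def entropy_production(P):
    # stationary distribution: left eigenvector of P with eigenvalue 1
    eigvals, eigvecs = np.linalg.eig(P.T)
    pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
    pi = pi / pi.sum()
    sigma = 0.0
    for i in range(P.shape[0]):
        for j in range(P.shape[1]):
            if P[i, j] > 0 and P[j, i] > 0:
                sigma += pi[i] * P[i, j] * np.log((pi[i] * P[i, j]) / (pi[j] * P[j, i]))
    return sigma

P_irrev = np.array([[0.1, 0.8, 0.1],
                    [0.1, 0.1, 0.8],
                    [0.8, 0.1, 0.1]])   # cyclic, hence time-irreversible
print("entropy production:", round(entropy_production(P_irrev), 4))
```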

  10. In times of Integration

    DEFF Research Database (Denmark)

    Val, Maria Rosa Rovira; Lehmann, Martin; Zinenko, Anna

    From late last century there has been a great development of international standards and tools on environment, sustainability and corporate responsibility issues. This has been along with the globalization of economy and politics, as well as a shift in the social responsibilities of the private vis...... procedures, such as cases of for example ISO integrated management systems, mutual equivalences recognition of Global Compact-GRI-ISO26000, or the case of IIRC initiative to develop integrated reporting on an organization’s Financial, Environmental, Social and Governance performance. This paper focuses......-a-vis the public sectors. Internationally, organisations have implemented a collection of these standards to be in line with such development and to obtain or keep their licence to operate globally. After two decades of development and maturation, the scenario is now different: (i) the economic context has changed...

  11. On the solution of high order stable time integration methods

    Czech Academy of Sciences Publication Activity Database

    Axelsson, Owe; Blaheta, Radim; Sysala, Stanislav; Ahmad, B.

    2013-01-01

    Roč. 108, č. 1 (2013), s. 1-22 ISSN 1687-2770 Institutional support: RVO:68145535 Keywords : evolution equations * preconditioners for quadratic matrix polynomials * a stiffly stable time integration method Subject RIV: BA - General Mathematics Impact factor: 0.836, year: 2013 http://www.boundaryvalueproblems.com/content/2013/1/108

  12. Fuzzy Controller Design Using FPGA for Photovoltaic Maximum Power Point Tracking

    OpenAIRE

    Basil M Hamed; Mohammed S. El-Moghany

    2012-01-01

    The cell has an optimum operating point at which maximum power can be obtained. To obtain maximum power from a photovoltaic array, a photovoltaic power system usually requires a Maximum Power Point Tracking (MPPT) controller. This paper presents a small-power photovoltaic control system for MPPT based on fuzzy control, designed and implemented with FPGA technology. The system is composed of a photovoltaic module, a buck converter and a fuzzy logic controller implemented on FPGA for controlling the on/off time of the MOSF...

  13. Integral equation approach to time-dependent kinematic dynamos in finite domains

    International Nuclear Information System (INIS)

    Xu Mingtian; Stefani, Frank; Gerbeth, Gunter

    2004-01-01

    The homogeneous dynamo effect is at the root of cosmic magnetic field generation. With only a very few exceptions, the numerical treatment of homogeneous dynamos is carried out in the framework of the differential equation approach. The present paper tries to facilitate the use of integral equations in dynamo research. Apart from the pedagogical value to illustrate dynamo action within the well-known picture of the Biot-Savart law, the integral equation approach has a number of practical advantages. The first advantage is its proven numerical robustness and stability. The second and perhaps most important advantage is its applicability to dynamos in arbitrary geometries. The third advantage is its intimate connection to inverse problems relevant not only for dynamos but also for technical applications of magnetohydrodynamics. The paper provides the first general formulation and application of the integral equation approach to time-dependent kinematic dynamos, with stationary dynamo sources, in finite domains. The time dependence is restricted to the magnetic field, whereas the velocity or corresponding mean-field sources of dynamo action are supposed to be stationary. For the spherically symmetric α 2 dynamo model it is shown how the general formulation is reduced to a coupled system of two radial integral equations for the defining scalars of the poloidal and toroidal field components. The integral equation formulation for spherical dynamos with general stationary velocity fields is also derived. Two numerical examples - the α 2 dynamo model with radially varying α and the Bullard-Gellman model - illustrate the equivalence of the approach with the usual differential equation method. The main advantage of the method is exemplified by the treatment of an α 2 dynamo in rectangular domains
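
    For orientation, the Biot-Savart picture invoked above is the integral relation below, with the induced current written schematically for a mean-field dynamo with an α effect; the notation is added here for context and is not quoted from the paper.

```latex
% Biot-Savart representation of the magnetic field generated by currents in V:
\mathbf{B}(\mathbf{r}) \;=\; \frac{\mu_0}{4\pi}\int_V
  \frac{\mathbf{j}(\mathbf{r}') \times (\mathbf{r}-\mathbf{r}')}
       {\lvert \mathbf{r}-\mathbf{r}' \rvert^{3}}\,\mathrm{d}^3 r',
\qquad
\mathbf{j} \;=\; \sigma\bigl(\mathbf{u}\times\mathbf{B} + \alpha\mathbf{B} - \nabla\varphi\bigr).
% Since j itself depends on B (and on the boundary potential phi), the dynamo
% problem becomes an integral equation for the field rather than an explicit
% quadrature.
```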

  14. Design of a semi-custom integrated circuit for the SLAC SLC timing control system

    International Nuclear Information System (INIS)

    Linstadt, E.

    1984-10-01

    A semi-custom (gate array) integrated circuit has been designed for use in the SLAC Linear Collider timing and control system. The design process and SLAC's experiences during the phases of the design cycle are described. Issues concerning the partitioning of the design into semi-custom and standard components are discussed. Functional descriptions of the semi-custom integrated circuit and the timing module in which it is used are given

  15. Mass extinction in tetraodontiform fishes linked to the Palaeocene-Eocene thermal maximum.

    Science.gov (United States)

    Arcila, Dahiana; Tyler, James C

    2017-11-15

    Integrative evolutionary analyses based upon fossil and extant species provide a powerful approach for understanding past diversification events and for assessing the tempo of evolution across the Tree of Life. Herein, we demonstrate the importance of integrating fossil and extant species for inferring patterns of lineage diversification that would otherwise be masked in analyses that examine only one source of evidence. We infer the phylogeny and macroevolutionary history of the Tetraodontiformes (triggerfishes, pufferfishes and allies), a group with one of the most extensive fossil records among fishes. Our analyses combine molecular and morphological data, based on an expanded matrix that adds newly coded fossil species and character states. Beyond confidently resolving the relationships and divergence times of tetraodontiforms, our diversification analyses detect a major mass-extinction event during the Palaeocene-Eocene Thermal Maximum (PETM), followed by a marked increase in speciation rates. This pattern is consistently obtained when fossil and extant species are integrated, whereas examination of the fossil occurrences alone failed to detect major diversification changes during the PETM. When taking into account non-homogeneous models, our analyses also detect a rapid lineage diversification increase in one of the groups (tetraodontoids) during the middle Miocene, which is considered a key period in the evolution of reef fishes associated with trophic changes and ecological opportunity. In summary, our analyses show distinct diversification dynamics estimated from phylogenies and the fossil record, suggesting that different episodes shaped the evolution of tetraodontiforms during the Cenozoic. © 2017 The Author(s).

  16. Time-varying market integration and expected returns in emerging markets

    NARCIS (Netherlands)

    de Jong, F.C.J.M.; de Roon, F.

    2001-01-01

    We use a simple model in which the expected returns in emerging markets depend on their systematic risk as measured by their beta relative to the world portfolio as well as on the level of integration in that market. The level of integration is a time-varying variable that depends on the market value

  17. Studying DDT Susceptibility at Discriminating Time Intervals Focusing on Maximum Limit of Exposure Time Survived by DDT Resistant Phlebotomus argentipes (Diptera: Psychodidae): an Investigative Report.

    Science.gov (United States)

    Rama, Aarti; Kesari, Shreekant; Das, Pradeep; Kumar, Vijay

    2017-07-24

    Extensive application of the routine insecticide dichlorodiphenyltrichloroethane (DDT) to control Phlebotomus argentipes (Diptera: Psychodidae), the proven vector of visceral leishmaniasis in India, has evoked the problem of resistance/tolerance against DDT, eventually nullifying the DDT-dependent strategies to control this vector. Because tolerating an hour-long exposure to DDT is not challenging enough for the resistant P. argentipes, estimating susceptibility by exposing sand flies to insecticide for just an hour becomes a trivial and futile task. Therefore, this bioassay study was carried out to investigate the maximum limit of exposure time to which DDT resistant P. argentipes can endure the effect of DDT for their survival. The mortality rate of laboratory-reared DDT resistant strain P. argentipes exposed to DDT was studied at discriminating time intervals of 60 min, and it was concluded that highly resistant sand flies could withstand up to 420 min of exposure to this insecticide. Additionally, the lethal time for female P. argentipes was observed to be higher than for males, suggesting that they are highly resistant to DDT's toxicity. Our results support the monitoring of the tolerance limit with respect to time and hence point towards an urgent need to change the World Health Organization's protocol for susceptibility identification in resistant P. argentipes.

  18. Unconditionally Energy Stable Implicit Time Integration: Application to Multibody System Analysis and Design

    DEFF Research Database (Denmark)

    Chen, Shanshin; Tortorelli, Daniel A.; Hansen, John Michael

    1999-01-01

    Advances in computer hardware and improved algorithms for multibody dynamics over the past decade have generated widespread interest in real-time simulations of multibody mechanical systems. At the heart of the widely used algorithms for multibody dynamics are a choice of coordinates which define the kinematics of the system, and a choice of time integration algorithms. The current approach uses a non-dissipative implicit Newmark method to integrate the equations of motion defined in terms of the independent joint coordinates of the system. The reduction of the equations of motion to a minimal set of ordinary differential equations is employed to avoid the instabilities associated with the direct integration of differential-algebraic equations. To extend the unconditional stability of the implicit Newmark method to nonlinear dynamic systems, a discrete energy balance is enforced. This constraint...

  19. Velocity time integral for right upper pulmonary vein in VLBW infants with patent ductus arteriosus.

    Science.gov (United States)

    Lista, Gianluca; Bianchi, Silvia; Mannarino, Savina; Schena, Federico; Castoldi, Francesca; Stronati, Mauro; Mosca, Fabio

    2016-10-01

    Early diagnosis of significant patent ductus arteriosus reduces the risk of clinical worsening in very low birth weight infants. Echocardiographic patent ductus arteriosus shunt flow pattern can be used to predict significant patent ductus arteriosus. Pulmonary venous flow, expressed as vein velocity time integral, is correlated to ductus arteriosus closure. The aim of this study is to investigate the relationship between significant reductions in vein velocity time integral and non-significant patent ductus arteriosus in the first week of life. A multicenter, prospective, observational study was conducted to evaluate very low birth weight infants (ductus. The mean vein velocity time integral significantly reduced in the first four days of life. On the fourth day of life, there was less of a reduction in patients with patent ductus compared to those with closed patent ductus arteriosus and the difference was significant. A significant reduction in vein velocity time integral in the first days of life is associated with ductus closure. This parameter correlates well with other echocardiographic parameters and may aid in the diagnosis and management of patent ductus arteriosus.

  20. Estimation of maximum credible atmospheric radioactivity concentrations and dose rates from nuclear tests

    International Nuclear Information System (INIS)

    Telegadas, K.

    1979-01-01

    A simple technique is presented for estimating maximum credible gross beta air concentrations from nuclear detonations in the atmosphere, based on aircraft sampling of radioactivity following each Chinese nuclear test from 1964 to 1976. The calculated concentration is a function of the total yield and fission yield, initial vertical radioactivity distribution, time after detonation, and rate of horizontal spread of the debris with time. Calculated maximum credible concentrations are compared with the highest concentrations measured during aircraft sampling. The technique provides a reasonable estimate of maximum air concentrations from 1 to 10 days after a detonation. An estimate of the whole-body external gamma dose rate corresponding to the maximum credible gross beta concentration is also given. (author)

  1. Integration of Real-Time Data Into Building Automation Systems

    Energy Technology Data Exchange (ETDEWEB)

    Mark J. Stunder; Perry Sebastian; Brenda A. Chube; Michael D. Koontz

    2003-04-16

    The project goal was to investigate the possibility of using predictive real-time information from the Internet as an input to building management system algorithms. The objectives were to identify the types of information most valuable to commercial and residential building owners, managers, and system designers; to comprehensively investigate and document currently available electronic real-time information suitable for use in building management systems; to verify the reliability of the information and recommend accreditation methods for data and providers; to assess methodologies to automatically retrieve and utilize the information; to characterize the equipment required to implement automated integration; to demonstrate the feasibility and benefits of using the information in building management systems; and to identify evolutionary control strategies.

  2. Analytical solutions for prediction of the ignition time of wood particles based on a time and space integral method

    NARCIS (Netherlands)

    Haseli, Y.; Oijen, van J.A.; Goey, de L.P.H.

    2012-01-01

    The main idea of this paper is to establish a simple approach for prediction of the ignition time of a wood particle assuming that the thermo-physical properties remain constant and ignition takes place at a characteristic ignition temperature. Using a time and space integral method, explicit

  3. On the maximum of wave surface of sea waves

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, B

    1980-01-01

    This article treats the wave surface as a normal stationary random process in order to estimate the maximum of the wave surface in a given time interval by means of the theoretical results of probability theory. The results are represented by formulas (13) to (19) in this article. It is proved that when the time interval approaches infinity, the formulas (3) and (6) for E(η_max) derived in the references (Cartwright, Longuet-Higgins) can also be obtained from the asymptotic distribution of the maximum of the wave surface provided by this article. The advantage of the results obtained from this point of view, as compared with the results obtained from the references, is discussed.
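
    For orientation, the classical asymptotic result of Cartwright and Longuet-Higgins referred to above can be written as follows (a sketch assuming a narrow-band Gaussian sea surface with standard deviation σ and N waves in the interval; the exact expressions (13)-(19) of the article may differ in detail):

        E[\eta_{\max}] \;\approx\; \sigma \left( \sqrt{2\ln N} + \frac{\gamma}{\sqrt{2\ln N}} \right), \qquad \gamma \approx 0.5772,

    so the expected maximum grows only like the square root of the logarithm of the number of waves in the observation interval.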

  4. CONNJUR Workflow Builder: a software integration environment for spectral reconstruction.

    Science.gov (United States)

    Fenwick, Matthew; Weatherby, Gerard; Vyas, Jay; Sesanker, Colbert; Martyn, Timothy O; Ellis, Heidi J C; Gryk, Michael R

    2015-07-01

    CONNJUR Workflow Builder (WB) is an open-source software integration environment that leverages existing spectral reconstruction tools to create a synergistic, coherent platform for converting biomolecular NMR data from the time domain to the frequency domain. WB provides data integration of primary data and metadata using a relational database, and includes a library of pre-built workflows for processing time domain data. WB simplifies maximum entropy reconstruction, facilitating the processing of non-uniformly sampled time domain data. As will be shown in the paper, the unique features of WB provide it with novel abilities to enhance the quality, accuracy, and fidelity of the spectral reconstruction process. WB also provides features which promote collaboration, education, parameterization, and non-uniform data sets along with processing integrated with the Rowland NMR Toolkit (RNMRTK) and NMRPipe software packages. WB is available free of charge in perpetuity, dual-licensed under the MIT and GPL open source licenses.

  5. CONNJUR Workflow Builder: a software integration environment for spectral reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Fenwick, Matthew; Weatherby, Gerard; Vyas, Jay; Sesanker, Colbert [UConn Health, Department of Molecular Biology and Biophysics (United States); Martyn, Timothy O. [Rensselaer at Hartford, Department of Engineering and Science (United States); Ellis, Heidi J. C. [Western New England College, Department of Computer Science and Information Technology (United States); Gryk, Michael R., E-mail: gryk@uchc.edu [UConn Health, Department of Molecular Biology and Biophysics (United States)

    2015-07-15

    CONNJUR Workflow Builder (WB) is an open-source software integration environment that leverages existing spectral reconstruction tools to create a synergistic, coherent platform for converting biomolecular NMR data from the time domain to the frequency domain. WB provides data integration of primary data and metadata using a relational database, and includes a library of pre-built workflows for processing time domain data. WB simplifies maximum entropy reconstruction, facilitating the processing of non-uniformly sampled time domain data. As will be shown in the paper, the unique features of WB provide it with novel abilities to enhance the quality, accuracy, and fidelity of the spectral reconstruction process. WB also provides features which promote collaboration, education, parameterization, and non-uniform data sets along with processing integrated with the Rowland NMR Toolkit (RNMRTK) and NMRPipe software packages. WB is available free of charge in perpetuity, dual-licensed under the MIT and GPL open source licenses.

  6. CONNJUR Workflow Builder: a software integration environment for spectral reconstruction

    International Nuclear Information System (INIS)

    Fenwick, Matthew; Weatherby, Gerard; Vyas, Jay; Sesanker, Colbert; Martyn, Timothy O.; Ellis, Heidi J. C.; Gryk, Michael R.

    2015-01-01

    CONNJUR Workflow Builder (WB) is an open-source software integration environment that leverages existing spectral reconstruction tools to create a synergistic, coherent platform for converting biomolecular NMR data from the time domain to the frequency domain. WB provides data integration of primary data and metadata using a relational database, and includes a library of pre-built workflows for processing time domain data. WB simplifies maximum entropy reconstruction, facilitating the processing of non-uniformly sampled time domain data. As will be shown in the paper, the unique features of WB provide it with novel abilities to enhance the quality, accuracy, and fidelity of the spectral reconstruction process. WB also provides features which promote collaboration, education, parameterization, and non-uniform data sets along with processing integrated with the Rowland NMR Toolkit (RNMRTK) and NMRPipe software packages. WB is available free of charge in perpetuity, dual-licensed under the MIT and GPL open source licenses

  7. A Time Marching Scheme for Solving Volume Integral Equations on Nonlinear Scatterers

    KAUST Repository

    Bagci, Hakan

    2015-01-07

    Transient electromagnetic field interactions on inhomogeneous penetrable scatterers can be analyzed by solving time domain volume integral equations (TDVIEs). TDVIEs are oftentimes solved using marching-on-in-time (MOT) schemes. Unlike finite difference and finite element schemes, MOT-TDVIE solvers require discretization of only the scatterers, do not call for artificial absorbing boundary conditions, and are more robust to numerical phase dispersion. On the other hand, their computational cost is high, they suffer from late-time instabilities, and their implicit nature makes incorporation of nonlinear constitutive relations more difficult. Development of plane-wave time-domain (PWTD) and FFT-based schemes has significantly reduced the computational cost of the MOT-TDVIE solvers. Additionally, the late-time instability problem has been alleviated for all practical purposes with the development of accurate integration schemes and specially designed temporal basis functions. Addressing the third challenge is the topic of this presentation. I will talk about an explicit MOT scheme developed for solving the TDVIE on scatterers with nonlinear material properties. The proposed scheme separately discretizes the TDVIE and the nonlinear constitutive relation between electric field intensity and flux density. The unknown field intensity and flux density are expanded using half and full Schaubert-Wilton-Glisson (SWG) basis functions in space and polynomial temporal interpolators in time. The resulting coupled system of the discretized TDVIE and constitutive relation is integrated in time using an explicit PE(CE)^m scheme to yield the unknown expansion coefficients. Explicitness of time marching allows for straightforward incorporation of the nonlinearity as a function evaluation on the right hand side of the coupled system of equations. Consequently, the resulting MOT scheme does not call for a Newton-like nonlinear solver. Numerical examples, which demonstrate the applicability
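
    The PE(CE)^m update mentioned above follows the standard predict-evaluate-(correct-evaluate) pattern repeated m times per step. The sketch below illustrates that pattern only, on a generic first-order system du/dt = f(t, u) with an Adams-Bashforth-2 predictor and a trapezoidal corrector; it is not the discretized TDVIE solver itself, and the right-hand side and step parameters are placeholders.

        import numpy as np

        def pece_m(f, u0, t0, t1, n_steps, m=2):
            """Generic PE(CE)^m time marching for du/dt = f(t, u).
            Predictor: 2-step Adams-Bashforth; corrector: trapezoidal rule,
            re-evaluating f after each correction (m corrector passes)."""
            dt = (t1 - t0) / n_steps
            t = t0
            u = np.atleast_1d(np.asarray(u0, dtype=float))
            f_prev = f(t, u)                    # bootstrap value standing in for step n-1
            history = [u.copy()]
            for _ in range(n_steps):
                f_curr = f(t, u)
                u_next = u + dt * (1.5 * f_curr - 0.5 * f_prev)   # P: explicit prediction
                for _ in range(m):
                    f_next = f(t + dt, u_next)                    # E: re-evaluate the RHS
                    u_next = u + 0.5 * dt * (f_curr + f_next)     # C: trapezoidal correction
                f_prev, u, t = f_curr, u_next, t + dt
                history.append(u.copy())
            return np.array(history)

        # Toy nonlinear example standing in for the coupled field/flux system:
        sol = pece_m(lambda t, u: -u**3 + np.sin(t), u0=1.0, t0=0.0, t1=10.0, n_steps=1000)
        print(sol[-1])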

  8. A Time Marching Scheme for Solving Volume Integral Equations on Nonlinear Scatterers

    KAUST Repository

    Bagci, Hakan

    2015-01-01

    Transient electromagnetic field interactions on inhomogeneous penetrable scatterers can be analyzed by solving time domain volume integral equations (TDVIEs). TDVIEs are oftentimes solved using marching-on-in-time (MOT) schemes. Unlike finite difference and finite element schemes, MOT-TDVIE solvers require discretization of only the scatterers, do not call for artificial absorbing boundary conditions, and are more robust to numerical phase dispersion. On the other hand, their computational cost is high, they suffer from late-time instabilities, and their implicit nature makes incorporation of nonlinear constitutive relations more difficult. Development of plane-wave time-domain (PWTD) and FFT-based schemes has significantly reduced the computational cost of the MOT-TDVIE solvers. Additionally, the late-time instability problem has been alleviated for all practical purposes with the development of accurate integration schemes and specially designed temporal basis functions. Addressing the third challenge is the topic of this presentation. I will talk about an explicit MOT scheme developed for solving the TDVIE on scatterers with nonlinear material properties. The proposed scheme separately discretizes the TDVIE and the nonlinear constitutive relation between electric field intensity and flux density. The unknown field intensity and flux density are expanded using half and full Schaubert-Wilton-Glisson (SWG) basis functions in space and polynomial temporal interpolators in time. The resulting coupled system of the discretized TDVIE and constitutive relation is integrated in time using an explicit PE(CE)^m scheme to yield the unknown expansion coefficients. Explicitness of time marching allows for straightforward incorporation of the nonlinearity as a function evaluation on the right hand side of the coupled system of equations. Consequently, the resulting MOT scheme does not call for a Newton-like nonlinear solver. Numerical examples, which demonstrate the applicability

  9. Modelling maximum likelihood estimation of availability

    International Nuclear Information System (INIS)

    Waller, R.A.; Tietjen, G.L.; Rock, G.W.

    1975-01-01

    Suppose the performance of a nuclear powered electrical generating power plant is continuously monitored to record the sequence of failures and repairs during sustained operation. The purpose of this study is to assess one method of estimating the performance of the power plant when the measure of performance is availability. That is, we determine the probability that the plant is operational at time t. To study the availability of a power plant, we first assume statistical models for the variables X and Y, which denote the time-to-failure and the time-to-repair variables, respectively. Once those statistical models are specified, the availability, A(t), can be expressed as a function of some or all of their parameters. Usually those parameters are unknown in practice and so A(t) is unknown. This paper discusses the maximum likelihood estimator of A(t) when the time-to-failure model for X is an exponential density with parameter λ and the time-to-repair model for Y is an exponential density with parameter θ. Under the assumption of exponential models for X and Y, it follows that the instantaneous availability at time t is A(t) = λ/(λ+θ) + θ/(λ+θ)·exp[-(1/λ + 1/θ)t] for t > 0. Also, the steady-state availability is A(∞) = λ/(λ+θ). We use the observations from n failure-repair cycles of the power plant, say X_1, X_2, ..., X_n, Y_1, Y_2, ..., Y_n, to present the maximum likelihood estimators of A(t) and A(∞). The exact sampling distributions for those estimators and some statistical properties are discussed before a simulation model is used to determine 95% simulation intervals for A(t). The methodology is applied to two examples which approximate the operating history of two nuclear power plants. (author)
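
    A minimal numerical sketch of the plug-in maximum likelihood estimator described above, assuming (as the form of A(t) suggests) that λ and θ denote the mean time-to-failure and mean time-to-repair, whose MLEs under exponential models are the sample means; the failure/repair data below are invented for illustration.

        import numpy as np

        def availability_mle(x_fail, y_repair, t):
            """MLE of instantaneous availability A(t) under exponential models.
            x_fail  : observed times-to-failure  X_1..X_n
            y_repair: observed times-to-repair   Y_1..Y_n"""
            lam = np.mean(x_fail)       # MLE of the mean time-to-failure
            theta = np.mean(y_repair)   # MLE of the mean time-to-repair
            a_inf = lam / (lam + theta)
            a_t = a_inf + (theta / (lam + theta)) * np.exp(-(1.0 / lam + 1.0 / theta) * t)
            return a_t, a_inf

        # Invented data for ten failure-repair cycles (hours):
        x = np.array([310.0, 250.0, 410.0, 180.0, 520.0, 330.0, 290.0, 450.0, 270.0, 360.0])
        y = np.array([6.0, 9.0, 4.0, 12.0, 7.0, 5.0, 8.0, 10.0, 6.0, 7.0])
        print(availability_mle(x, y, t=24.0))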

  10. Trading Time with Space - Development of subduction zone parameter database for a maximum magnitude correlation assessment

    Science.gov (United States)

    Schaefer, Andreas; Wenzel, Friedemann

    2017-04-01

    technically trades time with space, considering subduction zones where we have likely not observed the maximum possible event yet. However, by identifying sources of the same class, the not-yet observed temporal behavior can be replaced by spatial similarity among different subduction zones. This database aims to enhance the research and understanding of subduction zones and to quantify their potential in producing mega earthquakes considering potential strong motion impact on nearby cities and their tsunami potential.

  11. Modelling non-stationary annual maximum flood heights in the lower Limpopo River basin of Mozambique

    Directory of Open Access Journals (Sweden)

    Daniel Maposa

    2016-05-01

    Full Text Available In this article we fit a time-dependent generalised extreme value (GEV) distribution to annual maximum flood heights at three sites: Chokwe, Sicacate and Combomune in the lower Limpopo River basin of Mozambique. A GEV distribution is fitted to six annual maximum time series models at each site, namely: annual daily maximum (AM1), annual 2-day maximum (AM2), annual 5-day maximum (AM5), annual 7-day maximum (AM7), annual 10-day maximum (AM10) and annual 30-day maximum (AM30). Non-stationary time-dependent GEV models with a linear trend in the location and scale parameters are considered in this study. The results show a lack of sufficient evidence to indicate a linear trend in the location parameter at all three sites. On the other hand, the findings in this study reveal strong evidence of a linear trend in the scale parameter at Combomune and Sicacate, whilst the scale parameter had no significant linear trend at Chokwe. Further investigation in this study also reveals that the location parameter at Sicacate can be modelled by a nonlinear quadratic trend; however, the added complexity of the overall model is not worthwhile relative to the fit of a time-homogeneous model. This study shows the importance of extending the time-homogeneous GEV model to incorporate climate change factors such as trend in the lower Limpopo River basin, particularly in this era of global warming and a changing climate. Keywords: non-stationary extremes; annual maxima; lower Limpopo River; generalised extreme value
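
    A minimal sketch of the kind of time-dependent GEV fit described above, assuming a linear trend in the location parameter only, μ(t) = μ0 + μ1·t, with constant scale and shape. The negative log-likelihood is written out directly and minimised numerically; the annual-maximum series and starting values are placeholders.

        import numpy as np
        from scipy.optimize import minimize

        def gev_nll(params, t, x):
            """Negative log-likelihood of a GEV with location mu0 + mu1*t."""
            mu0, mu1, log_sigma, xi = params
            sigma = np.exp(log_sigma)          # keeps the scale positive
            z = (x - (mu0 + mu1 * t)) / sigma
            if abs(xi) < 1e-8:                 # Gumbel limit
                return np.sum(np.log(sigma) + z + np.exp(-z))
            s = 1.0 + xi * z
            if np.any(s <= 0):                 # outside the GEV support
                return np.inf
            return np.sum(np.log(sigma) + (1.0 + 1.0 / xi) * np.log(s) + s ** (-1.0 / xi))

        # Placeholder annual-maximum series, one value per year:
        years = np.arange(30, dtype=float)
        ams = 5.0 + 0.03 * years + np.random.default_rng(0).gumbel(0.0, 1.2, size=30)

        start = np.array([ams.mean(), 0.0, np.log(ams.std()), 0.1])
        fit = minimize(gev_nll, start, args=(years, ams), method="Nelder-Mead")
        mu0, mu1, log_sigma, xi = fit.x
        print("estimated trend in location per year:", mu1)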

  12. Simulation of the WWER-440/213 maximum credible accident at the EhNITs stand

    International Nuclear Information System (INIS)

    Blinkov, V.N.; Melikhov, O.I.; Melikhov, V.I.; Davydov, M.V.; Sokolin, A.V.; Shchepetil'nikov, Eh.Yu.

    2000-01-01

    Calculations of the thermohydraulic processes with the ATHLET code are presented for determining the optimal conditions for modelling, at the EhNITs stand, the coolant leakage during the maximum credible accident at an NPP with a WWER-440/213 reactor. The parametric calculations determine the nozzle diameters at the stand for which the local criterion of coincidence with the NPP data (maximum flow rate) and the integral criterion of coincidence (mass and energy of the coolant discharged during 10 s) are satisfied. [ru]

  13. Characterization of HBV integration patterns and timing in liver cancer and HBV-infected livers.

    Science.gov (United States)

    Furuta, Mayuko; Tanaka, Hiroko; Shiraishi, Yuichi; Unida, Takuro; Imamura, Michio; Fujimoto, Akihiro; Fujita, Masahi; Sasaki-Oku, Aya; Maejima, Kazuhiro; Nakano, Kaoru; Kawakami, Yoshiiku; Arihiro, Koji; Aikata, Hiroshi; Ueno, Masaki; Hayami, Shinya; Ariizumi, Shun-Ichi; Yamamoto, Masakazu; Gotoh, Kunihito; Ohdan, Hideki; Yamaue, Hiroki; Miyano, Satoru; Chayama, Kazuaki; Nakagawa, Hidewaki

    2018-05-18

    Integration of Hepatitis B virus (HBV) into the human genome can cause genetic instability, leading to selective advantages for HBV-induced liver cancer. Despite the large number of studies for HBV integration into liver cancer, little is known about the mechanism of initial HBV integration events owing to the limitations of materials and detection methods. We conducted an HBV sequence capture, followed by ultra-deep sequencing, to screen for HBV integrations in 111 liver samples from human-hepatocyte chimeric mice with HBV infection and human clinical samples containing 42 paired samples from non-tumorous and tumorous liver tissues. The HBV infection model using chimeric mice verified the efficiency of our HBV-capture analysis and demonstrated that HBV integration could occur 23 to 49 days after HBV infection via microhomology-mediated end joining and predominantly in mitochondrial DNA. Overall HBV integration sites in clinical samples were significantly enriched in regions annotated as exhibiting open chromatin, a high level of gene expression, and early replication timing in liver cells. These data indicate that HBV integration in liver tissue was biased according to chromatin accessibility, with additional selection pressures in the gene promoters of tumor samples. Moreover, an integrative analysis using paired non-tumorous and tumorous samples and HBV-related transcriptional change revealed the involvement of TERT and MLL4 in clonal selection. We also found frequent and non-tumorous liver-specific HBV integrations in FN1 and HBV-FN1 fusion transcript. Extensive survey of HBV integrations facilitates and improves the understanding of the timing and biology of HBV integration during infection and HBV-related hepatocarcinogenesis.

  14. Towards a frequency-dependent discrete maximum principle for the implicit Monte Carlo equations

    International Nuclear Information System (INIS)

    Wollaber, Allan B.; Larsen, Edward W.; Densmore, Jeffery D.

    2011-01-01

    It has long been known that temperature solutions of the Implicit Monte Carlo (IMC) equations can exceed the external boundary temperatures, a so-called violation of the 'maximum principle'. Previous attempts at prescribing a maximum value of the time-step size Δt that is sufficient to eliminate these violations have recommended a Δt that is typically too small to be used in practice and that appeared to be much too conservative when compared to numerical solutions of the IMC equations for practical problems. In this paper, we derive a new estimator for the maximum time-step size that includes the spatial-grid size Δx. This explicitly demonstrates that the effect of coarsening Δx is to reduce the limitation on Δt, which helps explain the overly conservative nature of the earlier, grid-independent results. We demonstrate that our new time-step restriction is a much more accurate means of predicting violations of the maximum principle. We discuss how the implications of the new, grid-dependent time-step restriction can impact IMC solution algorithms. (author)

  15. A 75 ps rms time resolution BiCMOS time to digital converter optimized for high rate imaging detectors

    CERN Document Server

    Hervé, C

    2002-01-01

    This paper presents an integrated time to digital converter (TDC) with a bin size adjustable in the range of 125 to 175 ps and a differential nonlinearity of ±0.3%. The TDC has four channels. Its architecture has been optimized for the readout of imaging detectors in use at Synchrotron Radiation facilities. In particular, a built-in logic flags piled-up events. Multi-hit patterns are also supported for other applications. Time measurements are extracted off chip at the maximum throughput of 40 MHz. The dynamic range is 14 bits. It has been fabricated in 0.8 μm BiCMOS technology. Time critical inputs are PECL compatible whereas other signals are CMOS compatible. A second application specific integrated circuit (ASIC) has been developed which translates NIM electrical levels to PECL ones. Both circuits are used to assemble board level TDCs complying with industry standards like VME, NIM and PCI.

  16. Long-time integration methods for mesoscopic models of pattern-forming systems

    International Nuclear Information System (INIS)

    Abukhdeir, Nasser Mohieddin; Vlachos, Dionisios G.; Katsoulakis, Markos; Plexousakis, Michael

    2011-01-01

    Spectral methods for simulation of a mesoscopic diffusion model of surface pattern formation are evaluated for long simulation times. Backwards-differencing time-integration, coupled with an underlying Newton-Krylov nonlinear solver (SUNDIALS-CVODE), is found to substantially accelerate simulations, without the typical requirement of preconditioning. Quasi-equilibrium simulations of patterned phases predicted by the model are shown to agree well with linear stability analysis. Simulation results of the effect of repulsive particle-particle interactions on pattern relaxation time and short/long-range order are discussed.

  17. Global integration in times of crisis

    DEFF Research Database (Denmark)

    Jensen, Camilla

    shock) from other subsidiaries downstream in the value chain. While in a comparative perspective multinational subsidiaries are found to perform relatively better than local firms that are integrated differently (arms' length) in global production networks (e.g. offshoring outsourcing). This paper tries to reconcile these findings by testing a number of hypotheses about global integration strategies in the context of the global financial crisis and how it affected exporting among multinational subsidiaries operating out of Turkey. Controlling for the impact that depreciations and exchange rate volatility have... integration strategies throughout the course of the global financial crisis.

  18. Three-level grid-connected photovoltaic inverter with maximum power point tracking

    International Nuclear Information System (INIS)

    Tsang, K.M.; Chan, W.L.

    2013-01-01

    Highlight: ► This paper reports a novel 3-level grid connected photovoltaic inverter. ► The inverter features maximum power point tracking and grid current shaping. ► The inverter can be acted as an active filter and a renewable power source. - Abstract: This paper presents a systematic way of designing control scheme for a grid-connected photovoltaic (PV) inverter featuring maximum power point tracking (MPPT) and grid current shaping. Unlike conventional design, only four power switches are required to achieve three output levels and it is not necessary to use any phase-locked-loop circuitry. For the proposed scheme, a simple integral controller has been designed for the tracking of the maximum power point of a PV array based on an improved extremum seeking control method. For the grid-connected inverter, a current loop controller and a voltage loop controller have been designed. The current loop controller is designed to shape the inverter output current while the voltage loop controller can maintain the capacitor voltage at a certain level and provide a reference inverter output current for the PV inverter without affecting the maximum power point of the PV array. Experimental results are included to demonstrate the effectiveness of the tracking and control scheme.

  19. A Maximum Entropy-Based Chaotic Time-Variant Fragile Watermarking Scheme for Image Tampering Detection

    Directory of Open Access Journals (Sweden)

    Guo-Jheng Yang

    2013-08-01

    Full Text Available The fragile watermarking technique is used to protect intellectual property rights while also providing security and rigorous protection. In order to protect the copyright of the creators, it can be implanted in some representative text or totem. Because all of the media on the Internet are digital, protection has become a critical issue, and determining how to use digital watermarks to protect digital media is thus the topic of our research. This paper uses the Logistic map with parameter u = 4 to generate chaotic dynamic behavior with the maximum entropy 1. This approach increases the security and rigor of the protection. The main research target of information hiding is determining how to hide confidential data so that the naked eye cannot see the difference. Next, we introduce one method of information hiding. Generally speaking, if the image only goes through Arnold’s cat map and the Logistic map, it seems to lack sufficient security. Therefore, our emphasis is on controlling Arnold’s cat map and the initial value of the chaos system to undergo small changes and generate different chaos sequences. Thus, the current time is used to not only make encryption more stringent but also to enhance the security of the digital media.
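
    A minimal sketch of the chaotic sequence generation that the scheme relies on: the logistic map with parameter 4, seeded by an initial value perturbed with the current time, as described above. Turning the sequence into a keystream of bits is a hypothetical illustration, not the authors' exact embedding rule.

        import time

        def logistic_sequence(x0, n, burn_in=100):
            """Iterate x_{k+1} = 4 x_k (1 - x_k) and discard a transient."""
            x, seq = x0, []
            for k in range(burn_in + n):
                x = 4.0 * x * (1.0 - x)
                if k >= burn_in:
                    seq.append(x)
            return seq

        # Perturb the seed with the current time so each run yields a different
        # sequence; tiny seed changes diverge quickly because the map is chaotic.
        seed = (0.37 + (time.time() % 1.0) * 1e-6) % 1.0
        chaos = logistic_sequence(seed, n=16)

        # Hypothetical use: a keystream that could be XOR-ed with watermark bits.
        keystream = [1 if v >= 0.5 else 0 for v in chaos]
        print(keystream)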

  20. An explicit marching on-in-time solver for the time domain volume magnetic field integral equation

    KAUST Repository

    Sayed, Sadeed Bin

    2014-07-01

    Transient scattering from inhomogeneous dielectric objects can be modeled using time domain volume integral equations (TDVIEs). TDVIEs are oftentimes solved using marching-on-in-time (MOT) techniques. Classical MOT-TDVIE solvers expand the field induced on the scatterer using local spatio-temporal basis functions. Inserting this expansion into the TDVIE and testing the resulting equation in space and time yields a system of equations that is solved by time marching. Depending on the type of the basis and testing functions and the time step, the time marching scheme can be implicit (N. T. Gres, et al., Radio Sci., 36(3), 379-386, 2001) or explicit (A. Al-Jarro, et al., IEEE Trans. Antennas Propag., 60(11), 5203-5214, 2012). Implicit MOT schemes are known to be more stable and accurate. However, under low-frequency excitation, i.e., when the time step size is large, they call for inversion of a full matrix system at every time step.

  1. An explicit marching on-in-time solver for the time domain volume magnetic field integral equation

    KAUST Repository

    Sayed, Sadeed Bin; Ulku, Huseyin Arda; Bagci, Hakan

    2014-01-01

    Transient scattering from inhomogeneous dielectric objects can be modeled using time domain volume integral equations (TDVIEs). TDVIEs are oftentimes solved using marching-on-in-time (MOT) techniques. Classical MOT-TDVIE solvers expand the field induced on the scatterer using local spatio-temporal basis functions. Inserting this expansion into the TDVIE and testing the resulting equation in space and time yields a system of equations that is solved by time marching. Depending on the type of the basis and testing functions and the time step, the time marching scheme can be implicit (N. T. Gres, et al., Radio Sci., 36(3), 379-386, 2001) or explicit (A. Al-Jarro, et al., IEEE Trans. Antennas Propag., 60(11), 5203-5214, 2012). Implicit MOT schemes are known to be more stable and accurate. However, under low-frequency excitation, i.e., when the time step size is large, they call for inversion of a full matrix system at every time step.

  2. Time-domain single-source integral equations for analyzing scattering from homogeneous penetrable objects

    KAUST Repository

    Valdés, Felipe

    2013-03-01

    Single-source time-domain electric- and magnetic-field integral equations for analyzing scattering from homogeneous penetrable objects are presented. Their temporal discretization is effected by using shifted piecewise polynomial temporal basis functions and a collocation testing procedure, thus allowing for a marching-on-in-time (MOT) solution scheme. Unlike dual-source formulations, single-source equations involve space-time domain operator products, for which spatial discretization techniques developed for standalone operators do not apply. Here, the spatial discretization of the single-source time-domain integral equations is achieved by using the high-order divergence-conforming basis functions developed by Graglia alongside the high-order divergence- and quasi-curl-conforming (DQCC) basis functions of Valdés. The combination of these two sets allows for a well-conditioned mapping from div- to curl-conforming function spaces that fully respects the space-mapping properties of the space-time operators involved. Numerical results corroborate the fact that the proposed procedure guarantees accuracy and stability of the MOT scheme. © 2012 IEEE.

  3. Nash and integrated solutions in a just-in-time seller-buyer supply chain with buyer's ordering cost reductions

    Science.gov (United States)

    Lou, Kuo-Ren; Wang, Lu

    2016-05-01

    The seller frequently offers the buyer trade credit to settle the purchase amount. From the seller's perspective, granting trade credit increases not only the opportunity cost (i.e., the interest loss on the buyer's purchase amount during the credit period) but also the default risk (i.e., the rate at which the buyer will be unable to pay off his/her debt obligations). On the other hand, granting trade credit increases sales volume and revenue. Consequently, trade credit is an important strategy to increase the seller's profitability. In this paper, we assume that the seller uses trade credit and the number of shipments in a production run as decision variables to maximise his/her profit, while the buyer determines his/her replenishment cycle time and capital investment as decision variables to reduce his/her ordering cost and achieve his/her maximum profit. We then derive the non-cooperative Nash solution and the cooperative integrated solution in a just-in-time inventory system, in which granting trade credit increases not only the demand but also the opportunity cost and default risk, and the relationship between the capital investment and the ordering cost reduction is logarithmic. We then use software to solve and compare these two distinct solutions. Finally, we use sensitivity analysis to obtain some managerial insights.

  4. FPGA-based real-time embedded system for RISS/GPS integrated navigation.

    Science.gov (United States)

    Abdelfatah, Walid Farid; Georgy, Jacques; Iqbal, Umar; Noureldin, Aboelmagd

    2012-01-01

    Navigation algorithms integrating measurements from multi-sensor systems overcome the problems that arise from using GPS navigation systems in standalone mode. Algorithms which integrate the data from a 2D low-cost reduced inertial sensor system (RISS), consisting of a gyroscope and an odometer or wheel encoders, along with a GPS receiver via a Kalman filter have proved worthwhile in providing a consistent and more reliable navigation solution compared to standalone GPS receivers. They have also been shown to be beneficial, especially in GPS-denied environments such as urban canyons and tunnels. The main objective of this paper is to narrow the idea-to-implementation gap that follows the algorithm development by realizing a low-cost real-time embedded navigation system capable of computing the data-fused positioning solution. The role of the developed system is to synchronize the measurements from the three sensors, relative to the pulse per second signal generated from the GPS, after which the navigation algorithm is applied to the synchronized measurements to compute the navigation solution in real-time. Employing a customizable soft-core processor on an FPGA in the kernel of the navigation system provided the flexibility for communicating with the various sensors and the computation capability required by the Kalman filter integration algorithm.
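
    For orientation, the prediction/update cycle that such a Kalman-filter-based integration performs is sketched below on a toy one-dimensional position/velocity state with GPS-like position fixes (a textbook linear filter, not the RISS/GPS error-state filter of the paper; all matrices and noise levels are placeholders).

        import numpy as np

        dt = 0.1                                  # sample interval in seconds (placeholder)
        F = np.array([[1.0, dt], [0.0, 1.0]])     # motion model for [position, velocity]
        H = np.array([[1.0, 0.0]])                # GPS-like measurement of position only
        Q = 0.01 * np.eye(2)                      # process noise covariance (placeholder)
        R = np.array([[4.0]])                     # measurement noise covariance (placeholder)

        def kf_step(x, P, z=None):
            # Predict: propagate the state and covariance with the motion model.
            x = F @ x
            P = F @ P @ F.T + Q
            # Update: correct with a measurement when one is available
            # (z = None mimics a GPS outage, i.e., pure dead reckoning).
            if z is not None:
                y = z - H @ x                     # innovation
                S = H @ P @ H.T + R
                K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
                x = x + K @ y
                P = (np.eye(2) - K @ H) @ P
            return x, P

        x, P = np.zeros((2, 1)), np.eye(2)
        for z in [np.array([[0.5]]), np.array([[1.1]]), None, np.array([[2.0]])]:
            x, P = kf_step(x, P, z)
        print(x.ravel())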

  5. Komar integrals in asymptotically anti-de Sitter space-times

    International Nuclear Information System (INIS)

    Magnon, A.

    1985-01-01

    Recently, boundary conditions governing the asymptotic behavior of the gravitational field in the presence of a negative cosmological constant have been introduced using Penrose's conformal techniques. The subsequent analysis has led to expressions of conserved quantities (associated with asymptotic symmetries) involving asymptotic Weyl curvature. On the other hand, if the underlying space-time is equipped with isometries, a generalization of the Komar integral which incorporates the cosmological constant is also available. Thus, in the presence of an isometry, one is faced with two apparently unrelated definitions. It is shown that these definitions agree. This coherence supports the choice of boundary conditions for asymptotically anti-de Sitter space-times and reinforces the definitions of conserved quantities

  6. Retarded potentials and time domain boundary integral equations a road map

    CERN Document Server

    Sayas, Francisco-Javier

    2016-01-01

    This book offers a thorough and self-contained exposition of the mathematics of time-domain boundary integral equations associated to the wave equation, including applications to scattering of acoustic and elastic waves. The book offers two different approaches for the analysis of these integral equations, including a systematic treatment of their numerical discretization using Galerkin (Boundary Element) methods in the space variables and Convolution Quadrature in the time variable. The first approach follows classical work started in the late eighties, based on Laplace transforms estimates. This approach has been refined and made more accessible by tailoring the necessary mathematical tools, avoiding an excess of generality. A second approach contains a novel point of view that the author and some of his collaborators have been developing in recent years, using the semigroup theory of evolution equations to obtain improved results. The extension to electromagnetic waves is explained in one of the appendices...

  7. Post optimization paradigm in maximum 3-satisfiability logic programming

    Science.gov (United States)

    Mansor, Mohd. Asyraf; Sathasivam, Saratha; Kasihmuddin, Mohd Shareduwan Mohd

    2017-08-01

    Maximum 3-Satisfiability (MAX-3SAT) is a counterpart of the Boolean satisfiability problem that can be treated as a constraint optimization problem. It deals with the problem of finding an assignment that satisfies the maximum number of clauses in a particular 3-SAT formula. This paper presents the implementation of an enhanced Hopfield network for accelerating Maximum 3-Satisfiability (MAX-3SAT) logic programming. Four post-optimization techniques are investigated: the Elliot symmetric activation function, the Gaussian activation function, the Wavelet activation function and the Hyperbolic tangent activation function. The performance of these post-optimization techniques in accelerating MAX-3SAT logic programming is discussed in terms of the ratio of maximum satisfied clauses, the Hamming distance and the computation time. Dev-C++ was used as the platform for training, testing and validating our proposed techniques. The results show that the Hyperbolic tangent activation function and the Elliot symmetric activation function can be used for MAX-3SAT logic programming.
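
    As a small illustration of the objective such networks optimize, the sketch below counts the clauses of a 3-SAT formula satisfied by an assignment and runs a plain restart hill-climbing baseline; this is a generic reference method, not the enhanced Hopfield network or the activation functions studied in the paper.

        import random

        # A clause is a tuple of three non-zero integers: +i means variable i, -i its negation.
        clauses = [(1, -2, 3), (-1, 2, -3), (2, 3, -4), (-2, -3, 4), (1, 3, 4)]
        n_vars = 4

        def satisfied(clauses, assignment):
            """Number of clauses satisfied by assignment (dict: variable -> bool)."""
            return sum(any(assignment[abs(l)] == (l > 0) for l in c) for c in clauses)

        def greedy_max3sat(clauses, n_vars, restarts=20, flips=200, seed=0):
            rng = random.Random(seed)
            best, best_score = None, -1
            for _ in range(restarts):
                a = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
                for _ in range(flips):
                    v = rng.randint(1, n_vars)
                    before = satisfied(clauses, a)
                    a[v] = not a[v]                 # try flipping one variable...
                    if satisfied(clauses, a) < before:
                        a[v] = not a[v]             # ...and undo it if the score dropped
                score = satisfied(clauses, a)
                if score > best_score:
                    best, best_score = dict(a), score
            return best, best_score

        assignment, score = greedy_max3sat(clauses, n_vars)
        print(score, "of", len(clauses), "clauses satisfied")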

  8. A discontinous Galerkin finite element method with an efficient time integration scheme for accurate simulations

    KAUST Repository

    Liu, Meilin; Bagci, Hakan

    2011-01-01

    A discontinuous Galerkin finite element method (DG-FEM) with a highly-accurate time integration scheme is presented. The scheme achieves its high accuracy using numerically constructed predictor-corrector integration coefficients. Numerical results

  9. Development of wide range charge integration application specified integrated circuit for photo-sensor

    Energy Technology Data Exchange (ETDEWEB)

    Katayose, Yusaku, E-mail: katayose@ynu.ac.jp [Department of Physics, Yokohama National University, 79-5 Tokiwadai, Hodogaya-ku, Yokohama, Kanagawa 240-8501 (Japan); Ikeda, Hirokazu [Institute of Space and Astronautical Science (ISAS)/Japan Aerospace Exploration Agency (JAXA), 3-1-1 Yoshinodai, Chuo-ku, Sagamihara, Kanagawa 252-5210 (Japan); Tanaka, Manobu [National Laboratory for High Energy Physics, KEK, 1-1 Oho, Tsukuba, Ibaraki 305-0801 (Japan); Shibata, Makio [Department of Physics, Yokohama National University, 79-5 Tokiwadai, Hodogaya-ku, Yokohama, Kanagawa 240-8501 (Japan)

    2013-01-21

    A front-end application specified integrated circuit (ASIC) is developed with a wide dynamic range amplifier (WDAMP) to read out signals from a photo-sensor such as a photodiode. The WDAMP ASIC consists of a charge sensitive preamplifier, four wave-shaping circuits with different amplification factors and a Wilkinson-type analog-to-digital converter (ADC). To realize a wider range, the integrating capacitor in the preamplifier can be changed from 4 pF to 16 pF by a two-bit switch. The output of the preamplifier is shared by the four wave-shaping circuits with gains of 1, 4, 16 and 64 to adapt to the input range of the ADC. A 0.25-μm CMOS process (of UMC electronics CO., LTD) is used to fabricate the ASIC with four channels. A dynamic range of four orders of magnitude is achieved with a maximum range over 20 pC and a noise performance of 0.46 fC + 6.4×10⁻⁴ fC/pF. -- Highlights: ► A front-end ASIC is developed with a wide dynamic range amplifier. ► The ASIC consists of a CSA, four wave-shaping circuits and pulse-height-to-time converters. ► The dynamic range of four orders of magnitude is achieved with the maximum range over 20 pC.

  10. Real-time long term measurement using integrated framework for ubiquitous smart monitoring

    Science.gov (United States)

    Heo, Gwanghee; Lee, Giu; Lee, Woosang; Jeon, Joonryong; Kim, Pil-Joong

    2007-04-01

    Ubiquitous monitoring combining internet technologies and wireless communication is one of the most promising technologies for infrastructure health monitoring against natural or man-made hazards. In this paper, an integrated framework for ubiquitous monitoring is developed for real-time long term measurement in an internet environment. This framework develops a wireless sensor system based on Bluetooth technology and sends measured acceleration data to the host computer through the TCP/IP protocol. It is also designed to respond to the requests of web users on a real-time basis. In order to verify this system, real-time monitoring tests are carried out on a prototype self-anchored suspension bridge. The wireless measurement system is also analyzed to estimate its sensing capacity and evaluate its performance for monitoring purposes. Based on the evaluation, this paper proposes effective strategies for the integrated framework in order to detect structural deficiencies and to design an early warning system.

  11. Quasi-Maximum Likelihood Estimation and Bootstrap Inference in Fractional Time Series Models with Heteroskedasticity of Unknown Form

    DEFF Research Database (Denmark)

    Cavaliere, Giuseppe; Nielsen, Morten Ørregaard; Taylor, Robert

    We consider the problem of conducting estimation and inference on the parameters of univariate heteroskedastic fractionally integrated time series models. We first extend existing results in the literature, developed for conditional sum-of squares estimators in the context of parametric fractional...... time series models driven by conditionally homoskedastic shocks, to allow for conditional and unconditional heteroskedasticity both of a quite general and unknown form. Global consistency and asymptotic normality are shown to still obtain; however, the covariance matrix of the limiting distribution...... of the estimator now depends on nuisance parameters derived both from the weak dependence and heteroskedasticity present in the shocks. We then investigate classical methods of inference based on the Wald, likelihood ratio and Lagrange multiplier tests for linear hypotheses on either or both of the long and short...

  12. The effect of decaying atomic states on integral and time differential Moessbauer spectra

    International Nuclear Information System (INIS)

    Kankeleit, E.

    1975-01-01

    Moessbauer spectra for time-dependent monopole interaction have been calculated for the case in which the nuclear transition feeding the Moessbauer state excites an electronic state of the atom. This state is assumed to decay in a time comparable with the lifetime of the Moessbauer state. Spectra have been calculated for both time-differential and time-integral experiments. (orig.) [de]

  13. An Efficient Algorithm for the Maximum Distance Problem

    Directory of Open Access Journals (Sweden)

    Gabrielle Assunta Grün

    2001-12-01

    Full Text Available Efficient algorithms for temporal reasoning are essential in knowledge-based systems. This is central in many areas of Artificial Intelligence including scheduling, planning, plan recognition, and natural language understanding. As such, scalability is a crucial consideration in temporal reasoning. While reasoning in the interval algebra is NP-complete, reasoning in the less expressive point algebra is tractable. In this paper, we explore an extension to the work of Gerevini and Schubert which is based on the point algebra. In their seminal framework, temporal relations are expressed as a directed acyclic graph partitioned into chains and supported by a metagraph data structure, where time points or events are represented by vertices, and directed edges are labelled with < or ≤. They are interested in fast algorithms for determining the strongest relation between two events. They begin by developing fast algorithms for the case where all points lie on a chain. In this paper, we are interested in a generalization of this, namely we consider the problem of finding the maximum "distance" between two vertices in a chain; this problem arises in real world applications such as process control and crew scheduling. We describe an O(n) time preprocessing algorithm for the maximum distance problem on chains. It allows queries for the maximum number of < edges between two vertices to be answered in O(1) time. This matches the performance of the algorithm of Gerevini and Schubert for determining the strongest relation holding between two vertices in a chain.
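
    The chain case admits a very simple realization of the O(n)-preprocessing / O(1)-query idea sketched above: store, for every vertex along the chain, the number of strict (<) edges from the head of the chain up to that vertex, and answer a query by subtracting two prefix counts. The sketch below follows that idea in spirit; it is not necessarily the authors' exact data structure.

        def preprocess_chain(edge_labels):
            """edge_labels[i] is '<' or '<=' for the edge from vertex i to i+1.
            Returns prefix counts of strict edges; O(n) time and space."""
            prefix = [0]
            for label in edge_labels:
                prefix.append(prefix[-1] + (1 if label == '<' else 0))
            return prefix

        def max_distance(prefix, i, j):
            """Maximum number of '<' edges between vertices i <= j on the chain, in O(1)."""
            return prefix[j] - prefix[i]

        # Chain with 6 vertices (0..5); labels for edges 0-1, 1-2, 2-3, 3-4, 4-5:
        labels = ['<', '<=', '<', '<', '<=']
        pre = preprocess_chain(labels)
        print(max_distance(pre, 1, 5))   # two strict edges between vertices 1 and 5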

  14. Architecture for an integrated real-time air combat and sensor network simulation

    Science.gov (United States)

    Criswell, Evans A.; Rushing, John; Lin, Hong; Graves, Sara

    2007-04-01

    An architecture for an integrated air combat and sensor network simulation is presented. The architecture integrates two components: a parallel real-time sensor fusion and target tracking simulation, and an air combat simulation. By integrating these two simulations, it becomes possible to experiment with scenarios in which one or both sides in a battle have very large numbers of primitive passive sensors, and to assess the likely effects of those sensors on the outcome of the battle. Modern Air Power is a real-time theater-level air combat simulation that is currently being used as a part of the USAF Air and Space Basic Course (ASBC). The simulation includes a variety of scenarios from the Vietnam war to the present day, and also includes several hypothetical future scenarios. Modern Air Power includes a scenario editor, an order of battle editor, and full AI customization features that make it possible to quickly construct scenarios for any conflict of interest. The scenario editor makes it possible to place a wide variety of sensors including both high fidelity sensors such as radars, and primitive passive sensors that provide only very limited information. The parallel real-time sensor network simulation is capable of handling very large numbers of sensors on a computing cluster of modest size. It can fuse information provided by disparate sensors to detect and track targets, and produce target tracks.

  15. Cosmic shear measurement with maximum likelihood and maximum a posteriori inference

    Science.gov (United States)

    Hall, Alex; Taylor, Andy

    2017-06-01

    We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.

  16. Maximum swimming speeds of sailfish and three other large marine predatory fish species based on muscle contraction time and stride length: a myth revisited

    Directory of Open Access Journals (Sweden)

    Morten B. S. Svendsen

    2016-10-01

    Full Text Available Billfishes are considered to be among the fastest swimmers in the oceans. Previous studies have estimated maximum speed of sailfish and black marlin at around 35 m s⁻¹ but theoretical work on cavitation predicts that such extreme speed is unlikely. Here we investigated maximum speed of sailfish, and three other large marine pelagic predatory fish species, by measuring the twitch contraction time of anaerobic swimming muscle. The highest estimated maximum swimming speeds were found in sailfish (8.3±1.4 m s⁻¹), followed by barracuda (6.2±1.0 m s⁻¹), little tunny (5.6±0.2 m s⁻¹) and dorado (4.0±0.9 m s⁻¹); although size-corrected performance was highest in little tunny and lowest in sailfish. Contrary to previously reported estimates, our results suggest that sailfish are incapable of exceeding swimming speeds of 10-15 m s⁻¹, which corresponds to the speed at which cavitation is predicted to occur, with destructive consequences for fin tissues.
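
    The underlying estimate is simple: the minimum tail-beat period is twice the twitch contraction time of the swimming muscle, and maximum speed is stride length per beat multiplied by the maximum beat frequency. A worked sketch with illustrative numbers follows; the body length, stride fraction and twitch time below are assumptions for illustration, not the measured values of the study.

        # Wardle-style estimate: U_max = stride_length * f_max, with f_max = 1 / (2 * t_twitch)
        body_length = 1.7        # m, assumed sailfish body length
        stride_fraction = 0.65   # stride length as a fraction of body length (assumed)
        t_twitch = 0.060         # s, assumed twitch contraction time of anaerobic muscle

        f_max = 1.0 / (2.0 * t_twitch)                  # maximum tail-beat frequency (Hz)
        u_max = stride_fraction * body_length * f_max   # maximum swimming speed (m/s)
        print(f"f_max = {f_max:.1f} Hz, U_max = {u_max:.1f} m/s")   # roughly 9 m/s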

  17. On the mixed discretization of the time domain magnetic field integral equation

    KAUST Repository

    Ulku, Huseyin Arda; Bogaert, Ignace; Cools, Kristof; Andriulli, Francesco P.; Bagci, Hakan

    2012-01-01

    Time domain magnetic field integral equation (MFIE) is discretized using divergence-conforming Rao-Wilton-Glisson (RWG) and curl-conforming Buffa-Christiansen (BC) functions as spatial basis and testing functions, respectively. The resulting mixed

  18. Reparametrization in the path integral over finite dimensional manifold with a time-dependent metric

    International Nuclear Information System (INIS)

    Storchak, S.N.

    1988-01-01

    The path reparametrization procedure in the path integral is considered using the methods of stochastic processes for diffusion on a finite-dimensional manifold with a time-dependent metric. The reparametrization Jacobian has been obtained. The formulas of reparametrization for a symbolic presentation of the path integral have been derived.

  19. Approximation of itô integrals arising in stochastic time-delayed systems

    NARCIS (Netherlands)

    Bagchi, Arunabha

    1984-01-01

    Likelihood functionals for stochastic linear time-delayed systems involve Itô integrals with respect to the observed data. Since the Wiener process appearing in the standard observation process model for such systems is not realizable and the physically observed process is smooth, one needs to study

  20. Maximum power point tracker based on fuzzy logic

    International Nuclear Information System (INIS)

    Daoud, A.; Midoun, A.

    2006-01-01

    Solar energy is used as the power source in photovoltaic power systems, and an intelligent power management system is needed to obtain the maximum power from the limited solar panels. With the changing of the sun's illumination, due to variation of the angle of incidence of solar radiation and of the temperature of the panels, a Maximum Power Point Tracker (MPPT) enables optimization of solar power generation. The MPPT is a sub-system designed to extract the maximum power from a power source. In the case of a solar panel power source, the maximum power point varies as a result of changes in its electrical characteristics, which in turn are functions of radiation dose, temperature, ageing and other effects. The MPPT maximizes the power output from the panels for a given set of conditions by detecting the best working point of the power characteristic and then controlling the current through the panels or the voltage across them. Many MPPT methods have been reported in the literature. These techniques of MPPT can be classified into three main categories: lookup table methods, hill climbing methods and computational methods. The techniques vary according to the degree of sophistication, processing time and memory requirements. The perturbation and observation algorithm (hill climbing technique) is commonly used due to its ease of implementation and relative tracking efficiency. However, it has been shown that when the insolation changes rapidly, the perturbation and observation method is slow to track the maximum power point. In recent years, fuzzy controllers have been used for maximum power point tracking. This method only requires the linguistic control rules for the maximum power point; the mathematical model is not required, and therefore the implementation of this control method in a real control system is easy. In this paper, we present a simple robust MPPT using fuzzy set theory where the hardware consists of the Microchip microcontroller unit control card and
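
    For reference, the perturb-and-observe (hill-climbing) baseline discussed above fits in a few lines: perturb the operating voltage, observe the change in power, and keep stepping in the direction that increased power. The sketch below uses a made-up PV power-voltage curve and is a generic baseline, not the fuzzy controller or hardware of the paper.

        def pv_power(v):
            """Toy PV power-voltage curve with a single maximum near 10 V (placeholder model)."""
            current = max(0.0, 8.0 - 0.4 * v)
            return v * current

        def perturb_and_observe(v0=5.0, step=0.2, iters=100):
            v, p = v0, pv_power(v0)
            direction = 1.0
            for _ in range(iters):
                v_new = v + direction * step
                p_new = pv_power(v_new)
                if p_new < p:              # power dropped: reverse the perturbation direction
                    direction = -direction
                v, p = v_new, p_new
            return v, p

        v_mpp, p_mpp = perturb_and_observe()
        print(f"operating point near {v_mpp:.2f} V, {p_mpp:.2f} W")   # close to 10 V, 40 W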

  1. Sequential and Parallel Algorithms for Finding a Maximum Convex Polygon

    DEFF Research Database (Denmark)

    Fischer, Paul

    1997-01-01

    This paper investigates the problem where one is given a finite set of n points in the plane each of which is labeled either "positive" or "negative". We consider bounded convex polygons, the vertices of which are positive points and which do not contain any negative point. It is shown how such a polygon which is maximal with respect to area can be found in time O(n³ log n). With the same running time one can also find such a polygon which contains a maximum number of positive points. If, in addition, the number of vertices of the polygon is restricted to be at most M, then the running time becomes O(M n³ log n). It is also shown how to find a maximum convex polygon which contains a given point in time O(n³ log n). Two parallel algorithms for the basic problem are also presented. The first one runs in time O(n log n) using O(n²) processors, the second one has polylogarithmic time but needs O...

  2. An integrated campaign for investigation of winter-time continental haze over Indo-Gangetic Basin and its radiative effects

    International Nuclear Information System (INIS)

    Das, Sanat Kumar; Chatterjee, Abhijit; Ghosh, Sanjay K.; Raha, Sibaji

    2015-01-01

    An outflow of continental haze occurs from Indo-Gangetic Basin (IGB) in the North to Bay of Bengal (BoB) in the South. An integrated campaign was organized to investigate this continental haze during December 2013–February 2014 at source and remote regions within IGB to quantify its radiative effects. Measurements were carried out at three locations in eastern India: 1) Kalas Island, Sundarban (21.68°N, 88.57°E) — an isolated island along the north-east coast of BoB, 2) Kolkata (22.57°N, 88.42°E) — an urban metropolis and 3) Siliguri (26.70°N, 88.35°E) — an urban region at the foothills of eastern Himalayas. Ground-based AOD (at 0.5 μm) is observed to be maximum (1.25 ± 0.18) over Kolkata followed by Siliguri (0.60 ± 0.17) and minimum over Sundarban (0.53 ± 0.18). Black carbon concentration is found to be maximum at Kolkata (21.6 ± 6.6 μg·m⁻³) with almost equal concentrations at Siliguri (12.6 ± 5.2 μg·m⁻³) and Sundarban (12.3 ± 3.0 μg·m⁻³). Combination of MODIS-AOD and back-trajectories analysis shows an outflow of winter-time continental haze originating from central IGB and venting out through Sundarban towards BoB. This continental haze with high extinction coefficient is identified up to central BoB using CALIPSO observations and is found to contribute ~ 75% to marine AOD over central BoB. This haze produces significantly high aerosol radiative forcing within the atmosphere over Kolkata (75.4 W·m⁻²) as well as over Siliguri and Sundarban (40 W·m⁻²) indicating large forcing over entire IGB, from foothills of the Himalayas to coastal region. This winter-time continental haze also causes about similar radiative heating (1.5 K·day⁻¹) from Siliguri to Sundarban which is enhanced over Kolkata (3 K·day⁻¹) due to large emission of local urban aerosols. This high aerosol heating over entire IGB and coastal region of BoB can have considerable impact on the monsoonal circulation and more importantly, such haze

  3. An integrated campaign for investigation of winter-time continental haze over Indo-Gangetic Basin and its radiative effects

    Energy Technology Data Exchange (ETDEWEB)

    Das, Sanat Kumar, E-mail: sanatkrdas@gmail.com [Environmental Sciences Section, Bose Institute, Kolkata (India); Center for Astroparticle Physics and Space Science, Bose Institute, Kolkata (India); Chatterjee, Abhijit [Environmental Sciences Section, Bose Institute, Kolkata (India); Center for Astroparticle Physics and Space Science, Bose Institute, Kolkata (India); National Facility on Astroparticle Physics and Space Science, Darjeeling (India); Ghosh, Sanjay K. [Center for Astroparticle Physics and Space Science, Bose Institute, Kolkata (India); National Facility on Astroparticle Physics and Space Science, Darjeeling (India); Raha, Sibaji [Environmental Sciences Section, Bose Institute, Kolkata (India); Center for Astroparticle Physics and Space Science, Bose Institute, Kolkata (India); National Facility on Astroparticle Physics and Space Science, Darjeeling (India)

    2015-11-15

    An outflow of continental haze occurs from Indo-Gangetic Basin (IGB) in the North to Bay of Bengal (BoB) in the South. An integrated campaign was organized to investigate this continental haze during December 2013–February 2014 at source and remote regions within IGB to quantify its radiative effects. Measurements were carried out at three locations in eastern India: 1) Kalas Island, Sundarban (21.68°N, 88.57°E) — an isolated island along the north-east coast of BoB, 2) Kolkata (22.57°N, 88.42°E) — an urban metropolis and 3) Siliguri (26.70°N, 88.35°E) — an urban region at the foothills of eastern Himalayas. Ground-based AOD (at 0.5 μm) is observed to be maximum (1.25 ± 0.18) over Kolkata followed by Siliguri (0.60 ± 0.17) and minimum over Sundarban (0.53 ± 0.18). Black carbon concentration is found to be maximum at Kolkata (21.6 ± 6.6 μg·m⁻³) with almost equal concentrations at Siliguri (12.6 ± 5.2 μg·m⁻³) and Sundarban (12.3 ± 3.0 μg·m⁻³). Combination of MODIS-AOD and back-trajectories analysis shows an outflow of winter-time continental haze originating from central IGB and venting out through Sundarban towards BoB. This continental haze with high extinction coefficient is identified up to central BoB using CALIPSO observations and is found to contribute ~ 75% to marine AOD over central BoB. This haze produces significantly high aerosol radiative forcing within the atmosphere over Kolkata (75.4 W·m⁻²) as well as over Siliguri and Sundarban (40 W·m⁻²) indicating large forcing over entire IGB, from foothills of the Himalayas to coastal region. This winter-time continental haze also causes about similar radiative heating (1.5 K·day⁻¹) from Siliguri to Sundarban which is enhanced over Kolkata (3 K·day⁻¹) due to large emission of local urban aerosols. This high aerosol heating over entire IGB and coastal region of BoB can have considerable impact on the monsoonal circulation and more

  4. Analysis of factors influencing the integrated bolus peak timing in contrast-enhanced brain computed tomographic angiography

    International Nuclear Information System (INIS)

    Son, Soon Yong; Choi, Kwan Woo; Jeong, Hoi Woun; Jang, Seo Goo; Jung, Jae Young; Yun, Jung Soo; Kim, Ki Won; Lee, Young Ah; Son, Jin Hyun; Min, Jung Whan

    2016-01-01

    The objective of this study was to analyze the factors influencing integrated bolus peak timing in contrast-enhanced computed tomographic angiography (CTA) and to determine a method of calculating the personal peak time. The optimal time was calculated by performing multiple linear regression analysis, after identifying the influencing factors through correlation analysis between the integrated peak time of the contrast medium and personally measured values from monitoring CTA scans. The radiation exposure dose in CTA was 716.53 mGy·cm and the radiation exposure dose in the monitoring scan was 15.52 mGy (2–34 mGy). The results were statistically significant (p < .01). Regression analysis revealed a 0.160 decrease in peak time per one-step increase in heart rate in males, and changes of −0.004, −0.174, and 0.006 per one-step increase in DBP, heart rate, and blood sugar, respectively, in females. In a consistency test comparing the measured peak time with the peak time calculated from the regression equation, consistency was determined to be very high for both males and females. This study could prevent unnecessary dose exposure by encouraging in-clinic calculation of the personal integrated peak time of the contrast medium prior to examination.
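    To illustrate the kind of per-patient regression described here, the sketch below fits an ordinary least-squares model of measured peak time on a few physiological covariates. The covariates, numbers and resulting coefficients are hypothetical placeholders, not the study's data or its fitted model.

    ```python
    import numpy as np

    # Hypothetical example of fitting a personal peak-time regression.
    # Columns: heart rate [bpm], diastolic BP [mmHg], blood sugar [mg/dL].
    X = np.array([[62, 78, 95], [75, 82, 110], [68, 74, 101],
                  [80, 90, 130], [58, 70, 88], [72, 85, 120]], dtype=float)
    peak_time_s = np.array([22.1, 19.4, 20.8, 18.2, 23.0, 19.9])  # measured peak times [s]

    A = np.column_stack([np.ones(len(X)), X])                     # add an intercept column
    coef, *_ = np.linalg.lstsq(A, peak_time_s, rcond=None)        # ordinary least squares
    print("intercept and coefficients:", coef.round(3))

    new_patient = np.array([1.0, 70.0, 80.0, 105.0])              # intercept + covariates
    print("predicted personal peak time [s]:", round(float(new_patient @ coef), 1))
    ```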

  5. Post-irradiation effects in CMOS integrated circuits

    International Nuclear Information System (INIS)

    Zietlow, T.C.; Barnes, C.E.; Morse, T.C.; Grusynski, J.S.; Nakamura, K.; Amram, A.; Wilson, K.T.

    1988-01-01

    The post-irradiation response of CMOS integrated circuits from three vendors has been measured as a function of temperature and irradiation bias. The authors have found that a worst-case anneal temperature for rebound testing is highly process dependent. At an anneal temperature of 80 °C, the timing parameters of a 16K SRAM from vendor A quickly saturate at maximum values and display no further changes at this temperature. At higher temperatures, evidence for the anneal of interface state charge is observed. Dynamic bias during irradiation results in the same saturation value for the timing parameters, but the anneal time required to reach this value is longer. CMOS/SOS integrated circuits (vendor B) were also examined and showed similar behavior, except that the saturation value for the timing parameters was stable up to 105 °C. After irradiation to 10 Mrad(Si), a 16K SRAM (vendor C) was annealed at 80 °C. In contrast to the results from the vendor A SRAM, the access time decreased toward prerad values during the anneal. Another part irradiated in the same manner but annealed at room temperature showed a slight increase during the anneal

  6. Scalability of Direct Solver for Non-stationary Cahn-Hilliard Simulations with Linearized time Integration Scheme

    KAUST Repository

    Woźniak, M.

    2016-06-02

    We study the features of a new mixed integration scheme dedicated to solving non-stationary variational problems. The scheme is composed of an FEM approximation with respect to the space variable coupled with a 3-level time integration scheme with a linearized right-hand-side operator. It was applied to solving the Cahn-Hilliard parabolic equation with a nonlinear, fourth-order elliptic part. Second-order accuracy of the approximation along the time variable was proven. Moreover, the good scalability of the software based on this scheme was confirmed during simulations. We verify the proposed time integration scheme by monitoring the Ginzburg-Landau free energy. The numerical simulations are performed using a parallel multi-frontal direct solver executed on the STAMPEDE Linux cluster. Its scalability was compared to the results of three direct solvers: MUMPS, SuperLU and PaSTiX.

  7. An efficient explicit marching on in time solver for magnetic field volume integral equation

    KAUST Repository

    Sayed, Sadeed Bin

    2015-07-25

    An efficient explicit marching-on-in-time (MOT) scheme for solving the magnetic field volume integral equation is proposed. The MOT system is cast in the form of an ordinary differential equation and is integrated in time using a PE(CE)m multistep scheme. At each time step, a system with a Gram matrix is solved for the predicted/corrected field expansion coefficients. Depending on the type of spatial testing scheme, the Gram matrix is sparse or consists of blocks with only diagonal entries, regardless of the time step size. Consequently, the resulting MOT scheme is more efficient than its implicit counterparts, which call for inversion of a fuller matrix system at lower frequencies. Numerical results, which demonstrate the efficiency, accuracy, and stability of the proposed MOT scheme, are presented.
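    The record casts the MOT system as an ordinary differential equation integrated with a PE(CE)m multistep scheme. The sketch below illustrates the generic predictor-corrector idea on a scalar ODE, with an Adams-Bashforth-2 predictor and a trapezoidal corrector applied m times; it is a simplification under stated assumptions, not the paper's spatial discretization or Gram-matrix solve.

    ```python
    # PE(CE)^m sketch: predict with Adams-Bashforth-2, then evaluate and correct m times
    # with the trapezoidal rule. The scalar test problem is an illustrative assumption.

    def pece_m(f, y0, t0, t1, n_steps, m=2):
        h = (t1 - t0) / n_steps
        t, y = t0, float(y0)
        f_old = f(t, y)
        y, t = y + h * f_old, t + h          # one forward-Euler step to build AB2 history
        for _ in range(n_steps - 1):
            f_new = f(t, y)
            y_corr = y + h * (1.5 * f_new - 0.5 * f_old)      # P: predict with AB2
            for _ in range(m):                                # (CE)^m: evaluate, then correct
                y_corr = y + 0.5 * h * (f_new + f(t + h, y_corr))
            f_old, y, t = f_new, y_corr, t + h
        return y

    # example: dy/dt = -y, y(0) = 1; exact y(1) = exp(-1) ≈ 0.367879
    print(pece_m(lambda t, y: -y, 1.0, 0.0, 1.0, 200))
    ```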

  8. Toward the integration of European natural gas markets: A time-varying approach

    International Nuclear Information System (INIS)

    Renou-Maissant, Patricia

    2012-01-01

    Over the past fifteen years, European gas markets have radically changed. In order to build a single European gas market, a new regulatory framework has been established through three European Gas Directives. The purpose of this article is to investigate the impact of the reforms in the natural gas industry on consumer prices, with a specific focus on gas prices for industrial use. The strength of the relationship between the industrial gas prices of six western European countries is studied by testing the Law of One Price for the period 1991–2009. Estimations were carried out using both cointegration analysis and time-varying parameter models. Results highlight an emerging and on-going process of convergence between the industrial gas prices in western Europe since 2001 for the six EU member states. The strength and the level of convergence differ widely between countries. Strong integration of gas markets in continental Europe, except for the Belgian market, has been established. It appears that the convergence process between continental countries and the UK is not completed. Thus, the integration of European gas markets remains an open issue and the question of how far integration will proceed will still be widely discussed in the coming years. - Highlights: ► We investigate the integration of European natural gas markets. ► We use both cointegration analysis and time-varying parameter models. ► We show the failure of cointegration techniques to take account of evolving processes. ► An emerging and on-going process of convergence between the industrial gas prices is at work. ► Strong integration of gas markets in continental Europe has been established.

  9. Towards a frequency-dependent discrete maximum principle for the implicit Monte Carlo equations

    Energy Technology Data Exchange (ETDEWEB)

    Wollaber, Allan B [Los Alamos National Laboratory; Larsen, Edward W [Los Alamos National Laboratory; Densmore, Jeffery D [Los Alamos National Laboratory

    2010-12-15

    It has long been known that temperature solutions of the Implicit Monte Carlo (IMC) equations can exceed the external boundary temperatures, a so-called violation of the 'maximum principle.' Previous attempts at prescribing a maximum value of the time-step size Δt that is sufficient to eliminate these violations have recommended a Δt that is typically too small to be used in practice and that appeared to be much too conservative when compared to numerical solutions of the IMC equations for practical problems. In this paper, we derive a new estimator for the maximum time-step size that includes the spatial-grid size Δx. This explicitly demonstrates that the effect of coarsening Δx is to reduce the limitation on Δt, which helps explain the overly conservative nature of the earlier, grid-independent results. We demonstrate that our new time-step restriction is a much more accurate means of predicting violations of the maximum principle. We discuss how the new, grid-dependent time-step restriction can impact IMC solution algorithms.

  10. Integration of real-time 3D capture, reconstruction, and light-field display

    Science.gov (United States)

    Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Pei, Renjing; Liu, Yongchun; Zhang, Xiao

    2015-03-01

    Effective integration of 3D acquisition, reconstruction (modeling) and display technologies into a seamless system provides an augmented experience of visualizing and analyzing real objects and scenes with realistic 3D sensation. Applications can be found in medical imaging, gaming, virtual or augmented reality and hybrid simulations. Although 3D acquisition, reconstruction, and display technologies have gained significant momentum in recent years, there seems to be a lack of attention on synergistically combining these components into an "end-to-end" 3D visualization system. We designed, built and tested an integrated 3D visualization system that is able to capture 3D light-field images in real time, perform 3D reconstruction to build a 3D model of the objects, and display the 3D model on a large autostereoscopic screen. In this article, we will present our system architecture and component designs, hardware/software implementations, and experimental results. We will elaborate on our recent progress on sparse camera array light-field 3D acquisition, real-time dense 3D reconstruction, and autostereoscopic multi-view 3D display. A prototype is finally presented with test results to illustrate the effectiveness of our proposed integrated 3D visualization system.

  11. Integration of the time-dependent heat equation in the fuel rod performance program IAMBUS

    International Nuclear Information System (INIS)

    West, G.

    1982-01-01

    An iterative numerical method for integration of the time-dependent heat equation is described. No presuppositions are made for the dependency of the thermal conductivity and heat capacity on space, time and temperature. (orig.) [de
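    A minimal sketch of what such an iterative integration step can look like is given below: one fully implicit (backward-Euler) step of the 1-D heat equation with temperature-dependent conductivity and heat capacity, where the coefficients are updated by Picard iteration. The geometry, material laws and tolerances are illustrative assumptions, not the IAMBUS implementation.

    ```python
    import numpy as np

    # One Picard-iterated implicit step for rho*c(T) dT/dt = d/dx( k(T) dT/dx ).
    # Grid, material laws, and boundary treatment are illustrative assumptions.

    def implicit_step(T, dx, dt, k, c, rho=1.0, n_picard=20, tol=1e-8):
        T_old, T_new, n = T.copy(), T.copy(), len(T)
        for _ in range(n_picard):
            k_face = 0.5 * (k(T_new[:-1]) + k(T_new[1:]))    # conductivity at cell faces
            cap = rho * c(T_new)                             # volumetric heat capacity
            A = np.zeros((n, n))
            b = cap * T_old / dt
            A[np.arange(n), np.arange(n)] = cap / dt
            for i in range(1, n - 1):
                A[i, i - 1] -= k_face[i - 1] / dx**2
                A[i, i]     += (k_face[i - 1] + k_face[i]) / dx**2
                A[i, i + 1] -= k_face[i] / dx**2
            A[0, :], A[-1, :] = 0.0, 0.0                     # fixed-temperature boundaries
            A[0, 0] = A[-1, -1] = 1.0
            b[0], b[-1] = T_old[0], T_old[-1]
            T_next = np.linalg.solve(A, b)
            converged = np.max(np.abs(T_next - T_new)) < tol
            T_new = T_next
            if converged:
                break
        return T_new

    T = np.linspace(300.0, 600.0, 21)                        # initial temperature profile [K]
    T = implicit_step(T, dx=0.01, dt=0.1,
                      k=lambda T: 10.0 + 0.01 * T, c=lambda T: 500.0 + 0.1 * T)
    print(T.round(1))
    ```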

  12. Fully implicit solution of large-scale non-equilibrium radiation diffusion with high order time integration

    International Nuclear Information System (INIS)

    Brown, Peter N.; Shumaker, Dana E.; Woodward, Carol S.

    2005-01-01

    We present a solution method for fully implicit radiation diffusion problems discretized on meshes having millions of spatial zones. This solution method makes use of high order in time integration techniques, inexact Newton-Krylov nonlinear solvers, and multigrid preconditioners. We explore the advantages and disadvantages of high order time integration methods for the fully implicit formulation on both two- and three-dimensional problems with tabulated opacities and highly nonlinear fusion source terms
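    As a toy illustration of the inexact Newton-Krylov approach mentioned here, the sketch below takes a single fully implicit (backward-Euler) step of a 1-D nonlinear diffusion equation with scipy.optimize.newton_krylov. The grid, nonlinearity and time step are illustrative assumptions, and no multigrid preconditioner is included in this simplified version.

    ```python
    import numpy as np
    from scipy.optimize import newton_krylov

    # One backward-Euler step of u_t = d/dx( u^3 du/dx ) with zero-flux boundaries,
    # solved as a nonlinear root-finding problem by an inexact Newton-Krylov method.

    n, dx, dt = 64, 1.0 / 64, 1.0e-3
    x = np.linspace(0.0, 1.0, n)
    u_old = 1.0 + 0.5 * np.sin(np.pi * x)                 # previous time-step solution

    def residual(u):
        d_face = 0.5 * (u[:-1] ** 3 + u[1:] ** 3)         # nonlinear diffusivity at faces
        flux = d_face * (u[1:] - u[:-1]) / dx
        div = np.zeros_like(u)
        div[1:-1] = (flux[1:] - flux[:-1]) / dx
        div[0] = flux[0] / dx                             # zero-flux left boundary
        div[-1] = -flux[-1] / dx                          # zero-flux right boundary
        return (u - u_old) / dt - div

    u_new = newton_krylov(residual, u_old, f_tol=1e-10)
    print(u_new[:5].round(6))
    ```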

  13. Medical Device Integrated Vital Signs Monitoring Application with Real-Time Clinical Decision Support.

    Science.gov (United States)

    Moqeem, Aasia; Baig, Mirza; Gholamhosseini, Hamid; Mirza, Farhaan; Lindén, Maria

    2018-01-01

    This research involves the design and development of a novel Android smartphone application for real-time vital signs monitoring and decision support. The proposed application integrates market-available, wireless, Bluetooth-connected medical devices for collecting vital signs. The medical device data collected by the app include heart rate, oxygen saturation and electrocardiograph (ECG). The collated data are streamed and displayed on the smartphone in real time. The application was designed by adopting the six-screens (6S) mobile development framework, focused on a user-centered approach, and considered clinicians as users. Clinical engagement, consultations, feedback and the usability of the application in everyday practice were considered critical from the initial phase of the design and development. Furthermore, the proposed application is capable of delivering rich clinical decision support in real time using the integrated medical device data.

  14. Electric vehicle integration in a real-time market

    DEFF Research Database (Denmark)

    Pedersen, Anders Bro

    with an externally simulated model of the power grid, it is possible, in real time, to simulate the impact of EV charging and help to identify bottlenecks in the system. In EDISON the vehicles are aggregated using an entity called a Virtual Power Plant (VPP): a central server monitoring and controlling... the distributed energy resources registered with it, in order to make them appear as a single producer in the eyes of the market. Although the concept of a VPP is used within the EcoGrid EU project, the idea of more individual control is introduced through a new proposed real-time electricity market, where... This project is rooted in the EDISON project, which dealt with Electric Vehicle (EV) integration into the existing power grid, as well as with the infrastructure needed to facilitate the ever increasing penetration of fluctuating renewable energy resources such as wind turbines. In the EDISON...

  15. Maximum vehicle cabin temperatures under different meteorological conditions

    Science.gov (United States)

    Grundstein, Andrew; Meentemeyer, Vernon; Dowd, John

    2009-05-01

    A variety of studies have documented the dangerously high temperatures that may occur within the passenger compartment (cabin) of cars under clear sky conditions, even at relatively low ambient air temperatures. Our study, however, is the first to examine cabin temperatures under variable weather conditions. It uses a unique maximum vehicle cabin temperature dataset in conjunction with directly comparable ambient air temperature, solar radiation, and cloud cover data collected from April through August 2007 in Athens, GA. Maximum cabin temperatures, ranging from 41 to 76°C, varied considerably depending on the weather conditions and the time of year. Clear days had the highest cabin temperatures, with average values of 68°C in the summer and 61°C in the spring. Cloudy days in both the spring and summer were on average approximately 10°C cooler. Our findings indicate that even on cloudy days with lower ambient air temperatures, vehicle cabin temperatures may reach deadly levels. Additionally, two predictive models of maximum daily vehicle cabin temperatures were developed using commonly available meteorological data. One model uses maximum ambient air temperature and average daily solar radiation, while the other uses cloud cover percentage as a surrogate for solar radiation. From these models, two maximum vehicle cabin temperature indices were developed to assess the level of danger. The models and indices may be useful for forecasting hazardous conditions, promoting public awareness, and estimating past cabin temperatures for use in forensic analyses.
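    For illustration, the structure of such a predictive model (a linear function of maximum ambient air temperature and average daily solar radiation) might look like the sketch below; the coefficients are placeholders chosen for illustration, not the fitted values reported in the study.

    ```python
    # Hypothetical linear model of the form T_cabin = b0 + b1*T_air_max + b2*solar_radiation.
    # The coefficients b0, b1, b2 are placeholders, not the study's fitted values.

    def predicted_max_cabin_temp(t_air_max_c, solar_rad_wm2, b0=10.0, b1=1.0, b2=0.04):
        return b0 + b1 * t_air_max_c + b2 * solar_rad_wm2

    # example: a 32 °C clear day with ~600 W/m² average solar radiation
    print(predicted_max_cabin_temp(32.0, 600.0))   # -> 66.0 °C with the placeholder coefficients
    ```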

  16. Einstein-Dirac theory in spin maximum I

    International Nuclear Information System (INIS)

    Crumeyrolle, A.

    1975-01-01

    A unitary Einstein-Dirac theory, first in spin maximum 1, is constructed. An original feature of this article is that it is written without any tetrad techniques; only basic notions and existence conditions for spinor structures on pseudo-Riemannian fibre bundles are used. A coupling between the gravitational and electromagnetic fields is pointed out, in the geometric setting of the tangent bundle over space-time. Generalized Maxwell equations for inductive media in the presence of a gravitational field are obtained. The enlarged Einstein-Schroedinger theory gives a particular case of this E.D. theory; E.S. theory is a truncated E.D. theory in spin maximum 1. A close relation between the torsion-vector and Schroedinger's potential exists, and the nullity of the torsion-vector has a spinor meaning. Finally the Petiau-Duffin-Kemmer theory is incorporated into this geometric setting [fr

  17. How long do centenarians survive? Life expectancy and maximum lifespan.

    Science.gov (United States)

    Modig, K; Andersson, T; Vaupel, J; Rau, R; Ahlbom, A

    2017-08-01

    The purpose of this study was to explore the pattern of mortality above the age of 100 years. In particular, we aimed to examine whether Scandinavian data support the theory that mortality reaches a plateau at particularly old ages. Whether the maximum length of life increases with time was also investigated. The analyses were based on individual-level data on all Swedish and Danish centenarians born from 1870 to 1901; in total, 3006 men and 10 963 women were included. Birth-cohort-specific probabilities of dying were calculated. Exact ages were used for calculations of maximum length of life. Whether maximum age changed over time was analysed taking into account increases in cohort size. The results confirm that there has not been any improvement in mortality amongst centenarians in the past 30 years and that the current rise in life expectancy is driven by reductions in mortality below the age of 100 years. The death risks seem to reach a plateau of around 50% at the age of 103 years for men and 107 years for women. Despite the rising life expectancy, the maximum age does not appear to increase, in particular after accounting for the increasing number of individuals of advanced age. Mortality amongst centenarians is not changing despite improvements at younger ages. An extension of the maximum lifespan and a sizeable extension of life expectancy both require reductions in mortality above the age of 100 years. © 2017 The Association for the Publication of the Journal of Internal Medicine.

  18. Time-dependent approach to electron scattering and ionization in the s-wave model

    International Nuclear Information System (INIS)

    Ihra, W.; Draeger, M.; Handke, G.; Friedrich, H.

    1995-01-01

    The time-dependent Schroedinger equation is integrated for continuum states of two-electron atoms in the framework of the s-wave model, in which both electrons are restricted to having vanishing individual orbital angular momenta. The method is suitable for studying the time evolution of correlations in the two-electron wave functions and yields probabilities for elastic and inelastic electron scattering and for electron-impact ionization. The spin-averaged probabilities for electron-impact ionization of hydrogen in the s-wave model reproduce the shape of the experimentally observed integrated ionization cross section remarkably well for energies near and above the maximum

  19. Scalability of Direct Solver for Non-stationary Cahn-Hilliard Simulations with Linearized time Integration Scheme

    KAUST Repository

    Woźniak, M.; Smołka, M.; Cortes, Adriano Mauricio; Paszyński, M.; Schaefer, R.

    2016-01-01

    We study the features of a new mixed integration scheme dedicated to solving non-stationary variational problems. The scheme is composed of an FEM approximation with respect to the space variable coupled with a 3-level time integration scheme

  20. Maximum entropy deconvolution of low count nuclear medicine images

    International Nuclear Information System (INIS)

    McGrath, D.M.

    1998-12-01

    Maximum entropy is applied to the problem of deconvolving nuclear medicine images, with special consideration for very low count data. The physics of the formation of scintigraphic images is described, illustrating the phenomena which degrade planar estimates of the tracer distribution. Various techniques which are used to restore these images are reviewed, outlining the relative merits of each. The development and theoretical justification of maximum entropy as an image processing technique is discussed. Maximum entropy is then applied to the problem of planar deconvolution, highlighting the question of the choice of error parameters for low count data. A novel iterative version of the algorithm is suggested which allows the errors to be estimated from the predicted Poisson mean values. This method is shown to produce the exact results predicted by combining Poisson statistics and a Bayesian interpretation of the maximum entropy approach. A facility for total count preservation has also been incorporated, leading to improved quantification. In order to evaluate this iterative maximum entropy technique, two comparable methods, Wiener filtering and a novel Bayesian maximum likelihood expectation maximisation technique, were implemented. The comparison of results obtained indicated that this maximum entropy approach may produce equivalent or better measures of image quality than the compared methods, depending upon the accuracy of the system model used. The novel Bayesian maximum likelihood expectation maximisation technique was shown to be preferable over many existing maximum a posteriori methods due to its simplicity of implementation. A single parameter is required to define the Bayesian prior, which suppresses noise in the solution and may reduce the processing time substantially. Finally, maximum entropy deconvolution was applied as a pre-processing step in single photon emission computed tomography reconstruction of low count data. Higher contrast results were
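    A schematic version of the iterative idea described here, in which a maximum-entropy fixed-point update is alternated with re-estimating the error variances from the predicted Poisson means while preserving total counts, is sketched below. The regularization weight, damping, stopping rule and toy blur are simplifying assumptions, not the thesis algorithm.

    ```python
    import numpy as np

    # Schematic maximum-entropy deconvolution loop: Gull-Daniell-type fixed-point update,
    # with variances re-estimated from predicted Poisson means and total counts preserved.

    def maxent_deconvolve(data, psf_matrix, n_iter=200, lam=0.05):
        total = data.sum()
        f = np.full(psf_matrix.shape[1], total / psf_matrix.shape[1])  # flat start
        m = f.copy()                                 # flat default (prior) model
        for _ in range(n_iter):
            pred = psf_matrix @ f                    # predicted counts
            var = np.maximum(pred, 1.0)              # Poisson variance from the prediction
            grad = psf_matrix.T @ ((data - pred) / var)
            f_new = m * np.exp(lam * grad)           # maximum-entropy fixed-point form
            f_new *= total / f_new.sum()             # preserve total counts
            f = 0.5 * f + 0.5 * f_new                # damping for stability
        return f

    # toy 1-D example: two point sources blurred by a small kernel, Poisson noise added
    kernel = {-1: 0.25, 0: 0.5, 1: 0.25}
    A = np.zeros((32, 32))
    for i in range(32):
        for shift, w in kernel.items():
            A[i, (i + shift) % 32] = w
    truth = np.zeros(32)
    truth[10], truth[20] = 200.0, 100.0
    blurred = np.random.default_rng(0).poisson(A @ truth).astype(float)
    print(maxent_deconvolve(blurred, A).round(1))
    ```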

  1. Maximum Acceleration Recording Circuit

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1995-01-01

    Coarsely digitized maximum levels recorded in blown fuses. Circuit feeds power to accelerometer and makes nonvolatile record of maximum level to which output of accelerometer rises during measurement interval. In comparison with inertia-type single-preset-trip-point mechanical maximum-acceleration-recording devices, circuit weighs less, occupies less space, and records accelerations within narrower bands of uncertainty. In comparison with prior electronic data-acquisition systems designed for same purpose, circuit simpler, less bulky, consumes less power, and costs less, without need for analysis of data recorded in magnetic or electronic memory devices. Circuit used, for example, to record accelerations to which commodities subjected during transportation on trucks.

  2. 3-D electromagnetic modeling for very early time sounding of shallow targets using integral equations

    International Nuclear Information System (INIS)

    Xiong, Z.; Tripp, A.C.

    1994-01-01

    This paper presents an integral equation algorithm for 3D EM modeling at high frequencies for applications in engineering and environmental studies. The integral equation method remains the same for low and high frequencies, but the dominant role of the displacement currents complicates both numerical treatments and interpretations. With a singularity extraction technique, the authors extended the application of the Hankel filtering technique to the computation of Hankel integrals occurring in high-frequency EM modeling. Time domain results are calculated from frequency domain results via Fourier transforms. While frequency domain data are not obvious to interpret, time domain data show wave-like pictures that resemble seismograms. Both 1D and 3D numerical results clearly show the layer interfaces

  3. Marching on-in-time solution of the time domain magnetic field integral equation using a predictor-corrector scheme

    KAUST Repository

    Ulku, Huseyin Arda; Bagci, Hakan; Michielssen, Eric

    2013-01-01

    An explicit marching on-in-time (MOT) scheme for solving the time-domain magnetic field integral equation (TD-MFIE) is presented. The proposed MOT-TD-MFIE solver uses Rao-Wilton-Glisson basis functions for spatial discretization and a PE(CE)m-type linear multistep method for time marching. Unlike previous explicit MOT-TD-MFIE solvers, the time step size can be chosen as large as that of the implicit MOT-TD-MFIE solvers without adversely affecting accuracy or stability. An algebraic stability analysis demonstrates the stability of the proposed explicit solver; its accuracy and efficiency are established via numerical examples. © 1963-2012 IEEE.

  4. Marching on-in-time solution of the time domain magnetic field integral equation using a predictor-corrector scheme

    KAUST Repository

    Ulku, Huseyin Arda

    2013-08-01

    An explicit marching on-in-time (MOT) scheme for solving the time-domain magnetic field integral equation (TD-MFIE) is presented. The proposed MOT-TD-MFIE solver uses Rao-Wilton-Glisson basis functions for spatial discretization and a PE(CE)m-type linear multistep method for time marching. Unlike previous explicit MOT-TD-MFIE solvers, the time step size can be chosen as large as that of the implicit MOT-TD-MFIE solvers without adversely affecting accuracy or stability. An algebraic stability analysis demonstrates the stability of the proposed explicit solver; its accuracy and efficiency are established via numerical examples. © 1963-2012 IEEE.

  5. 24 CFR 203.18c - One-time or up-front mortgage insurance premium excluded from limitations on maximum mortgage...

    Science.gov (United States)

    2010-04-01

    ... insurance premium excluded from limitations on maximum mortgage amounts. 203.18c Section 203.18c Housing and...-front mortgage insurance premium excluded from limitations on maximum mortgage amounts. After... LOAN INSURANCE PROGRAMS UNDER NATIONAL HOUSING ACT AND OTHER AUTHORITIES SINGLE FAMILY MORTGAGE...

  6. Neutron spectra unfolding with maximum entropy and maximum likelihood

    International Nuclear Information System (INIS)

    Itoh, Shikoh; Tsunoda, Toshiharu

    1989-01-01

    A new unfolding theory has been established on the basis of the maximum entropy principle and the maximum likelihood method. This theory correctly embodies the Poisson statistics of neutron detection, and always yields a positive solution over the whole energy range. Moreover, the theory unifies the overdetermined and underdetermined problems. For the latter, the ambiguity in assigning a prior probability, i.e. the initial guess in the Bayesian sense, is eliminated by virtue of the principle. An approximate expression of the covariance matrix for the resultant spectra is also presented. An efficient algorithm to solve the nonlinear system, which appears in the present study, has been established. Results of computer simulation showed the effectiveness of the present theory. (author)

  7. Maximum Entropy and Probability Kinematics Constrained by Conditionals

    Directory of Open Access Journals (Sweden)

    Stefan Lukits

    2015-03-01

    Full Text Available Two open questions of inductive reasoning are solved: (1) does the principle of maximum entropy (PME) give a solution to the obverse Majerník problem; and (2) is Wagner correct when he claims that Jeffrey's updating principle (JUP) contradicts PME? Majerník shows that PME provides unique and plausible marginal probabilities, given conditional probabilities. The obverse problem posed here is whether PME also provides such conditional probabilities, given certain marginal probabilities. The theorem developed to solve the obverse Majerník problem demonstrates that in the special case introduced by Wagner, PME does not contradict JUP, but elegantly generalizes it and offers a more integrated approach to probability updating.
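    For orientation only, the two updating rules at issue can be stated in their standard textbook forms (background notation, not the paper's own formulation):

    ```latex
    \text{PME (minimum relative entropy to the prior } p\text{):}\quad
    q^{*} = \arg\min_{q \in \mathcal{C}} \sum_i q_i \log \frac{q_i}{p_i},
    \qquad
    \text{JUP (Jeffrey conditioning over a partition } \{E_j\}\text{):}\quad
    q(A) = \sum_j p(A \mid E_j)\, q(E_j).
    ```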

  8. Time-integrated thyroid dose for accidental releases from Pakistan Research Reactor-1

    International Nuclear Information System (INIS)

    Raza, S Shoaib; Iqbal, M; Salahuddin, A; Avila, R; Pervez, S

    2004-01-01

    The two-hourly time-integrated thyroid dose due to radio-iodines released to the atmosphere through the exhaust stack of Pakistan Research Reactor-1 (PARR-1) under accident conditions has been calculated. A computer program, PAKRAD (developed under an IAEA research grant, PAK/RCA/8990), was used for the dose calculations. The sensitivity of the dose results to different exhaust flow rates and atmospheric stability classes was studied. The effect on the time-integrated dose of assuming a constant activity concentration (as a function of time) within the containment air volume versus an exponentially decreasing air concentration was also studied for various flow rates (1000–50,000 m³·h⁻¹). The comparison indicated that the results were insensitive to the containment air exhaust rate at or below 2000 m³·h⁻¹, when the prediction with the constant activity concentration assumption was compared to an exponentially decreasing activity concentration model. The results also indicated that the plume touchdown distance increases with increasing atmospheric stability. (note)
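    The flow-rate sensitivity described here can be illustrated with a small sketch comparing the two-hour time-integrated release under a constant containment concentration against one depleted exponentially by the exhaust flow. The containment volume, initial concentration and flow rates below are illustrative assumptions, not PARR-1 values.

    ```python
    import numpy as np

    # Two-hour integrated release through the stack under two containment-concentration
    # assumptions. V, C0 and the flow rates are illustrative placeholders.

    V = 6.0e3        # containment free volume [m^3] (assumed)
    C0 = 1.0e6       # initial activity concentration [Bq/m^3] (assumed)
    T = 2.0          # integration time [h]

    for Q in (1000.0, 2000.0, 10000.0, 50000.0):       # exhaust flow rate [m^3/h]
        k = Q / V                                      # depletion rate [1/h]
        release_const = C0 * Q * T                     # constant-concentration assumption [Bq]
        release_exp = C0 * V * (1.0 - np.exp(-k * T))  # exponentially depleted concentration [Bq]
        print(f"Q = {Q:7.0f} m3/h   constant: {release_const:.3e} Bq   "
              f"exponential: {release_exp:.3e} Bq   ratio: {release_const / release_exp:.2f}")
    ```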

  9. Variational Symplectic Integrator for Long-Time Simulations of the Guiding-Center Motion of Charged Particles in General Magnetic Fields

    International Nuclear Information System (INIS)

    Qin Hong; Guan Xiaoyin

    2008-01-01

    A variational symplectic integrator for the guiding-center motion of charged particles in general magnetic fields is developed for long-time simulation studies of magnetized plasmas. Instead of discretizing the differential equations of the guiding-center motion, the action of the guiding-center motion is discretized and minimized to obtain the iteration rules for advancing the dynamics. The variational symplectic integrator conserves exactly a discrete Lagrangian symplectic structure, and has better numerical properties over long integration time, compared with standard integrators, such as the standard and variable time-step fourth order Runge-Kutta methods
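    To illustrate the qualitative point about long-time behaviour, the sketch below contrasts a simple variational (symplectic) integrator with a non-symplectic explicit Euler scheme on a toy harmonic oscillator. The stand-in system and step settings are illustrative assumptions, not the guiding-center Lagrangian treated in the paper.

    ```python
    # Harmonic oscillator H = (p^2 + q^2)/2. Stormer-Verlet (a variational/symplectic
    # integrator) keeps the energy bounded; explicit Euler drifts badly over long runs.

    def verlet(q, p, dt, n):
        for _ in range(n):
            p_half = p - 0.5 * dt * q        # half kick (force = -q)
            q = q + dt * p_half              # drift
            p = p_half - 0.5 * dt * q        # half kick
        return q, p

    def euler(q, p, dt, n):                  # non-symplectic reference integrator
        for _ in range(n):
            q, p = q + dt * p, p - dt * q
        return q, p

    q0, p0, dt, n = 1.0, 0.0, 0.1, 5000
    for name, method in (("Stormer-Verlet (symplectic)", verlet), ("explicit Euler", euler)):
        q, p = method(q0, p0, dt, n)
        print(f"{name:28s} energy after {n} steps: {0.5 * (p * p + q * q):.4g} (exact 0.5)")
    ```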

  10. Variational Symplectic Integrator for Long-Time Simulations of the Guiding-Center Motion of Charged Particles in General Magnetic Fields

    International Nuclear Information System (INIS)

    Qin, H.; Guan, X.

    2008-01-01

    A variational symplectic integrator for the guiding-center motion of charged particles in general magnetic fields is developed for long-time simulation studies of magnetized plasmas. Instead of discretizing the differential equations of the guiding-center motion, the action of the guiding-center motion is discretized and minimized to obtain the iteration rules for advancing the dynamics. The variational symplectic integrator conserves exactly a discrete Lagrangian symplectic structure, and has better numerical properties over long integration time, compared with standard integrators, such as the standard and variable time-step fourth order Runge-Kutta methods.

  11. Study of monolithic integrated solar blind GaN-based photodetectors

    Science.gov (United States)

    Wang, Ling; Zhang, Yan; Li, Xiaojuan; Xie, Jing; Wang, Jiqiang; Li, Xiangyang

    2018-02-01

    Monolithic integrated solar-blind devices on a GaN-based epilayer, which can directly read out a voltage signal, were fabricated and studied. Unlike conventional GaN-based photodiodes, the integrated devices perform the generation and accumulation of carriers and the conversion of carriers to voltage on a single chip. In the test process, the resetting voltage was a square wave with frequencies of 15 and 110 Hz and a maximal voltage of ~2.5 V. Under LED illumination, the maximum voltage swing is about 2.5 V, and the rise time of the voltage swing from 0 to 2.5 V is only about 1.6 ms. However, in dark conditions, the node voltage between the detector and the capacitance declines nearly to zero with time when the resetting voltage is equal to zero. It is found that the leakage current in the circuit gives rise to discharge of the integrated charge. Storage-mode operation can offer gain, which is an advantage for the detection of weak photo signals.

  12. Integrated project scheduling and staff assignment with controllable processing times.

    Science.gov (United States)

    Fernandez-Viagas, Victor; Framinan, Jose M

    2014-01-01

    This paper addresses a decision problem related to simultaneously scheduling the tasks in a project and assigning the staff to these tasks, taking into account that a task can be performed only by employees with certain skills, and that the length of each task depends on the number of employees assigned. This type of problem usually appears in service companies, where both task scheduling and staff assignment are closely related. An integer programming model for the problem is proposed, together with some extensions to cope with different situations. Additionally, the advantages of the controllable processing times approach are compared with those of fixed processing times. Due to the complexity of the integrated model, a simple GRASP algorithm is implemented in order to obtain good, approximate solutions in short computation times.
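    As an illustration of the GRASP metaheuristic named in this record, here is a compact sketch on a deliberately simplified stand-in problem: assigning task workloads to workers so that the maximum load is small. The instance, restricted-candidate-list rule and move neighbourhood are illustrative assumptions, not the paper's integrated scheduling and staff-assignment model.

    ```python
    import random

    # GRASP skeleton: greedy randomized construction with a restricted candidate list (RCL),
    # followed by a simple improving local search; repeat and keep the best solution found.

    def grasp(durations, n_workers, iterations=100, alpha=0.3, seed=0):
        rng = random.Random(seed)
        best_assign, best_val = None, float("inf")
        order = sorted(range(len(durations)), key=lambda t: -durations[t])
        for _ in range(iterations):
            loads, assign = [0.0] * n_workers, [None] * len(durations)
            for t in order:                                   # greedy randomized construction
                lo, hi = min(loads), max(loads)
                rcl = [w for w in range(n_workers) if loads[w] <= lo + alpha * (hi - lo)]
                w = rng.choice(rcl)
                assign[t], loads[w] = w, loads[w] + durations[t]
            improved = True
            while improved:                                   # local search: unload the busiest worker
                improved = False
                worst = loads.index(max(loads))
                for t in range(len(durations)):
                    if assign[t] != worst:
                        continue
                    for w in range(n_workers):
                        if loads[w] + durations[t] < loads[worst]:
                            loads[worst] -= durations[t]
                            loads[w] += durations[t]
                            assign[t], improved = w, True
                            break
                    if improved:
                        break
            if max(loads) < best_val:
                best_assign, best_val = assign[:], max(loads)
        return best_assign, best_val

    print(grasp([7, 5, 4, 4, 3, 3, 2, 2, 1], n_workers=3))
    ```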

  13. Integrated Project Scheduling and Staff Assignment with Controllable Processing Times

    Directory of Open Access Journals (Sweden)

    Victor Fernandez-Viagas

    2014-01-01

    Full Text Available This paper addresses a decision problem related to simultaneously scheduling the tasks in a project and assigning the staff to these tasks, taking into account that a task can be performed only by employees with certain skills, and that the length of each task depends on the number of employees assigned. This type of problem usually appears in service companies, where both task scheduling and staff assignment are closely related. An integer programming model for the problem is proposed, together with some extensions to cope with different situations. Additionally, the advantages of the controllable processing times approach are compared with those of fixed processing times. Due to the complexity of the integrated model, a simple GRASP algorithm is implemented in order to obtain good, approximate solutions in short computation times.

  14. Predicting cycle time distributions for integrated processing workstations : an aggregate modeling approach

    NARCIS (Netherlands)

    Veeger, C.P.L.; Etman, L.F.P.; Lefeber, A.A.J.; Adan, I.J.B.F.; Herk, van J.; Rooda, J.E.

    2011-01-01

    To predict cycle time distributions of integrated processing workstations, detailed simulation models are almost exclusively used; these models require considerable development and maintenance effort. As an alternative, we propose an aggregate model that is a lumped-parameter representation of the

  15. Plaque reduction over time of an integrated oral hygiene system.

    Science.gov (United States)

    Nunn, Martha E; Ruhlman, C Douglas; Mallatt, Philip R; Rodriguez, Sally M; Ortblad, Katherine M

    2004-10-01

    This article compares the efficacy of a prototype integrated system (the IntelliClean System from Sonicare and Crest) in the reduction of supragingival plaque to that of a manual toothbrush and conventional toothpaste. The integrated system was compared to a manual toothbrush with conventional toothpaste in a randomized, single-blinded, parallel, 4-week, controlled clinical trial with 100 subjects randomized to each treatment group. There was a low dropout rate, with 89 subjects in the manual toothbrush group (11% loss to follow-up) and 93 subjects in the integrated system group (7% loss to follow-up) completing the study. The Turesky modification of the Quigley and Hein Plaque Index was used to assess full-mouth plaque scores for each subject. Prebrushing plaque scores were obtained at baseline and at 4 weeks after 14 to 20 hours of plaque accumulation. A survey also was conducted at the conclusion of the study to determine the attitude toward the two oral hygiene systems. The integrated system was found to significantly reduce overall and interproximal prebrushing plaque scores over 4 weeks, both by 8.6%, demonstrating statistically significant superiority in overall plaque reduction (P = .002) and interproximal plaque reduction (P < .001) compared to the manual toothbrush with conventional toothpaste, which showed no significant reduction in either overall plaque or interproximal plaque. This study demonstrates that the IntelliClean System from Sonicare and Crest is superior to a manual toothbrush with conventional toothpaste in reducing overall plaque and interproximal plaque over time.

  16. A GIS-driven integrated real-time surveillance pilot system for national West Nile virus dead bird surveillance in Canada

    Directory of Open Access Journals (Sweden)

    Aramini Jeff

    2006-04-01

    Full Text Available Abstract Background An extensive West Nile virus surveillance program of dead birds, mosquitoes, horses, and human infection has been launched as a result of West Nile virus first being reported in Canada in 2001. Some desktop and web GIS have been applied to West Nile virus dead bird surveillance. There have been urgent needs for a comprehensive GIS services and real-time surveillance. Results A pilot system was developed to integrate real-time surveillance, real-time GIS, and Open GIS technology in order to enhance West Nile virus dead bird surveillance in Canada. Driven and linked by the newly developed real-time web GIS technology, this integrated real-time surveillance system includes conventional real-time web-based surveillance components, integrated real-time GIS components, and integrated Open GIS components. The pilot system identified the major GIS functions and capacities that may be important to public health surveillance. The six web GIS clients provide a wide range of GIS tools for public health surveillance. The pilot system has been serving Canadian national West Nile virus dead bird surveillance since 2005 and is adaptable to serve other disease surveillance. Conclusion This pilot system has streamlined, enriched and enhanced national West Nile virus dead bird surveillance in Canada, improved productivity, and reduced operation cost. Its real-time GIS technology, static map technology, WMS integration, and its integration with non-GIS real-time surveillance system made this pilot system unique in surveillance and public health GIS.

  17. Noether symmetries and integrability in time-dependent Hamiltonian mechanics

    Directory of Open Access Journals (Sweden)

    Jovanović Božidar

    2016-01-01

    Full Text Available We consider Noether symmetries within the Hamiltonian setting as transformations that preserve the Poincaré-Cartan form, i.e., as symmetries of characteristic line bundles of nondegenerate 1-forms. In the case when the Poincaré-Cartan form is contact, the explicit expression for the symmetries in the inverse Noether theorem is given. As examples, we consider natural mechanical systems, in particular the Kepler problem. Finally, we prove a variant of the theorem on complete (non-commutative) integrability in terms of Noether symmetries of time-dependent Hamiltonian systems.

  18. Entanglement dynamics after quantum quenches in generic integrable systems

    Directory of Open Access Journals (Sweden)

    Vincenzo Alba, Pasquale Calabrese

    2018-03-01

    Full Text Available The time evolution of the entanglement entropy in non-equilibrium quantum systems provides crucial information about the structure of the time-dependent state. For quantum quench protocols, by combining a quasiparticle picture for the entanglement spreading with the exact knowledge of the stationary state provided by Bethe ansatz, it is possible to obtain an exact and analytic description of the evolution of the entanglement entropy. Here we discuss the application of these ideas to several integrable models. First we show that for non-interacting systems, both bosonic and fermionic, the exact time-dependence of the entanglement entropy can be derived by elementary techniques and without solving the dynamics. We then provide exact results for interacting spin chains that are carefully tested against numerical simulations. Finally, we apply this method to integrable one-dimensional Bose gases (Lieb-Liniger model) both in the attractive and repulsive regimes. We highlight a peculiar behaviour of the entanglement entropy due to the absence of a maximum velocity of excitations.

  19. Rapid calculation of maximum particle lifetime for diffusion in complex geometries

    Science.gov (United States)

    Carr, Elliot J.; Simpson, Matthew J.

    2018-03-01

    Diffusion of molecules within biological cells and tissues is strongly influenced by crowding. A key quantity to characterize diffusion is the particle lifetime, which is the time taken for a diffusing particle to exit by hitting an absorbing boundary. Calculating the particle lifetime provides valuable information, for example, by allowing us to compare the timescale of diffusion and the timescale of the reaction, thereby helping us to develop appropriate mathematical models. Previous methods to quantify particle lifetimes focus on the mean particle lifetime. Here, we take a different approach and present a simple method for calculating the maximum particle lifetime. This is the time after which only a small specified proportion of particles in an ensemble remain in the system. Our approach produces accurate estimates of the maximum particle lifetime, whereas the mean particle lifetime always underestimates this value compared with data from stochastic simulations. Furthermore, we find that differences between the mean and maximum particle lifetimes become increasingly important when considering diffusion hindered by obstacles.
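    A minimal simulation sketch of this "maximum particle lifetime" notion (the time after which only a small specified proportion of an ensemble of diffusing particles remains) is given below for a 1-D random walk with absorbing ends; the domain, step rule and 1% threshold are illustrative assumptions, not the paper's complex geometries or its analytical method.

    ```python
    import numpy as np

    # Simulate unbiased random walkers on [0, L] with absorbing ends and report the
    # time at which the surviving fraction first drops below a small threshold.

    def maximum_lifetime(n_particles=20000, L=20, threshold=0.01, seed=1):
        rng = np.random.default_rng(seed)
        pos = np.full(n_particles, L // 2)                       # all start at the centre
        alive = np.ones(n_particles, dtype=bool)
        t = 0
        while alive.mean() > threshold:
            pos[alive] += rng.choice((-1, 1), size=alive.sum())  # one diffusive step
            alive[alive] = (pos[alive] > 0) & (pos[alive] < L)   # absorb at the boundaries
            t += 1
        return t, alive.mean()

    t_max, frac_left = maximum_lifetime()
    print(f"~{t_max} steps until only {frac_left:.1%} of the particles remain")
    ```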

  20. Bayesian Maximum Entropy space/time estimation of surface water chloride in Maryland using river distances.

    Science.gov (United States)

    Jat, Prahlad; Serre, Marc L

    2016-12-01

    Widespread contamination of surface water chloride is an emerging environmental concern. Consequently accurate and cost-effective methods are needed to estimate chloride along all river miles of potentially contaminated watersheds. Here we introduce a Bayesian Maximum Entropy (BME) space/time geostatistical estimation framework that uses river distances, and we compare it with Euclidean BME to estimate surface water chloride from 2005 to 2014 in the Gunpowder-Patapsco, Severn, and Patuxent subbasins in Maryland. River BME improves the cross-validation R 2 by 23.67% over Euclidean BME, and river BME maps are significantly different than Euclidean BME maps, indicating that it is important to use river BME maps to assess water quality impairment. The river BME maps of chloride concentration show wide contamination throughout Baltimore and Columbia-Ellicott cities, the disappearance of a clean buffer separating these two large urban areas, and the emergence of multiple localized pockets of contamination in surrounding areas. The number of impaired river miles increased by 0.55% per year in 2005-2009 and by 1.23% per year in 2011-2014, corresponding to a marked acceleration of the rate of impairment. Our results support the need for control measures and increased monitoring of unassessed river miles. Copyright © 2016. Published by Elsevier Ltd.

  1. Recent advances in marching-on-in-time schemes for solving time domain volume integral equations

    KAUST Repository

    Sayed, Sadeed Bin; Ulku, Huseyin Arda; Bagci, Hakan

    2015-01-01

    Transient electromagnetic field interactions on inhomogeneous penetrable scatterers can be analyzed by solving time domain volume integral equations (TDVIEs). TDVIEs are constructed by setting the summation of the incident and scattered field intensities to the total field intensity on the volumetric support of the scatterer. The unknown can be the field intensity or flux/current density. Representing the total field intensity in terms of the unknown using the relevant constitutive relation and the scattered field intensity in terms of the spatiotemporal convolution of the unknown with the Green function yields the final form of the TDVIE. The unknown is expanded in terms of local spatial and temporal basis functions. Inserting this expansion into the TDVIE and testing the resulting equation at discrete times yield a system of equations that is solved by the marching on-in-time (MOT) scheme. At each time step, a smaller system of equations, termed the MOT system, is solved for the coefficients of the expansion. The right-hand side of this system consists of the tested incident field and the discretized spatiotemporal convolution of the unknown samples computed at the previous time steps with the Green function.

  2. Recent advances in marching-on-in-time schemes for solving time domain volume integral equations

    KAUST Repository

    Sayed, Sadeed Bin

    2015-05-16

    Transient electromagnetic field interactions on inhomogeneous penetrable scatterers can be analyzed by solving time domain volume integral equations (TDVIEs). TDVIEs are constructed by setting the summation of the incident and scattered field intensities to the total field intensity on the volumetric support of the scatterer. The unknown can be the field intensity or flux/current density. Representing the total field intensity in terms of the unknown using the relevant constitutive relation and the scattered field intensity in terms of the spatiotemporal convolution of the unknown with the Green function yields the final form of the TDVIE. The unknown is expanded in terms of local spatial and temporal basis functions. Inserting this expansion into the TDVIE and testing the resulting equation at discrete times yield a system of equations that is solved by the marching on-in-time (MOT) scheme. At each time step, a smaller system of equations, termed the MOT system, is solved for the coefficients of the expansion. The right-hand side of this system consists of the tested incident field and the discretized spatiotemporal convolution of the unknown samples computed at the previous time steps with the Green function.

  3. Physics in Design : Real-time Numerical Simulation Integrated into the CAD Environment

    NARCIS (Netherlands)

    Zwier, Marijn P.; Wits, Wessel W.

    2017-01-01

    As today's markets are more susceptible to rapid changes and involve global players, a short time to market is required to keep a competitive edge. Concurrently, products are integrating an increasing number of functions and technologies, thus becoming progressively complex. Therefore, efficient and

  4. High-Order Calderón Preconditioned Time Domain Integral Equation Solvers

    KAUST Repository

    Valdes, Felipe; Ghaffari-Miab, Mohsen; Andriulli, Francesco P.; Cools, Kristof; Michielssen, Eric

    2013-01-01

    Two high-order accurate Calderón preconditioned time domain electric field integral equation (TDEFIE) solvers are presented. In contrast to existing Calderón preconditioned time domain solvers, the proposed preconditioner allows for high-order surface representations and current expansions by using a novel set of fully-localized high-order div-and quasi curl-conforming (DQCC) basis functions. Numerical results demonstrate that the linear systems of equations obtained using the proposed basis functions converge rapidly, regardless of the mesh density and of the order of the current expansion. © 1963-2012 IEEE.

  5. High-Order Calderón Preconditioned Time Domain Integral Equation Solvers

    KAUST Repository

    Valdes, Felipe

    2013-05-01

    Two high-order accurate Calderón preconditioned time domain electric field integral equation (TDEFIE) solvers are presented. In contrast to existing Calderón preconditioned time domain solvers, the proposed preconditioner allows for high-order surface representations and current expansions by using a novel set of fully-localized high-order div-and quasi curl-conforming (DQCC) basis functions. Numerical results demonstrate that the linear systems of equations obtained using the proposed basis functions converge rapidly, regardless of the mesh density and of the order of the current expansion. © 1963-2012 IEEE.

  6. Integrating speech in time depends on temporal expectancies and attention.

    Science.gov (United States)

    Scharinger, Mathias; Steinberg, Johanna; Tavano, Alessandro

    2017-08-01

    Sensory information that unfolds in time, such as in speech perception, relies on efficient chunking mechanisms in order to yield optimally-sized units for further processing. Whether or not two successive acoustic events receive a one-unit or a two-unit interpretation seems to depend on the fit between their temporal extent and a stipulated temporal window of integration. However, there is ongoing debate on how flexible this temporal window of integration should be, especially for the processing of speech sounds. Furthermore, there is no direct evidence of whether attention may modulate the temporal constraints on the integration window. For this reason, we here examine how different word durations, which lead to different temporal separations of sound onsets, interact with attention. In an Electroencephalography (EEG) study, participants actively and passively listened to words where word-final consonants were occasionally omitted. Words had either a natural duration or were artificially prolonged in order to increase the separation of speech sound onsets. Omission responses to incomplete speech input, originating in left temporal cortex, decreased when the critical speech sound was separated from previous sounds by more than 250 msec, i.e., when the separation was larger than the stipulated temporal window of integration (125-150 msec). Attention, on the other hand, only increased omission responses for stimuli with natural durations. We complemented the event-related potential (ERP) analyses by a frequency-domain analysis at the stimulus presentation rate. Notably, the power at the stimulation frequency showed the same duration and attention effects as the omission responses. We interpret these findings against the background of existing research on temporal integration windows and further suggest that our findings may be accounted for within the framework of predictive coding. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Time-domain single-source integral equations for analyzing scattering from homogeneous penetrable objects

    KAUST Repository

    Valdé s, Felipe; Andriulli, Francesco P.; Bagci, Hakan; Michielssen, Eric

    2013-01-01

    Single-source time-domain electric-and magnetic-field integral equations for analyzing scattering from homogeneous penetrable objects are presented. Their temporal discretization is effected by using shifted piecewise polynomial temporal basis

  8. Guarantee of remaining life time. Integrity of mechanical components and control of ageing phenomena

    International Nuclear Information System (INIS)

    Schuler, X.; Herter, K.H.; Koenig, G.

    2012-01-01

    The lifetime of safety-relevant systems, structures and components (SSC) of Nuclear Power Plants (NPP) is determined by two main principles. First, the required quality has to be produced during the design and fabrication process; quality has to be produced and cannot be improved by excessive inspections (Basis Safety: the quality-through-production principle). The second principle concerns maintaining this initial quality during operation. This covers safe operation during the total lifetime (lifetime management), safety against ageing phenomena (AM, ageing management), as well as proof of integrity (e.g. break preclusion or avoidance of fracture for SSC with high safety relevance). Following the Fukushima Dai-ichi event in Japan in spring 2011, Long Term Operation (LTO) of German NPPs is out of the question; in June 2011 legislation took the decision to phase out nuclear power by 2022. Thus, safe operation shall be guaranteed for the remaining lifetime. Within this technical framework, ageing management is a key element. Depending on the safety relevance of the SSC under observation, and including preventive maintenance, various tasks are required, in particular to clarify the mechanisms which contribute system-specifically to the damage of the components and systems and to define their controlling parameters, which have to be monitored and checked. Appropriate continuous or discontinuous measures are to be considered in this connection. The approach to ensuring a high standard of quality in operation for the remaining lifetime, and the management of the technical and organizational aspects, are demonstrated and explained. The basis for ageing management applied to NPPs is included in Nuclear Safety Standard 1403, which describes the ageing management procedures. For SSC with high safety relevance, a verification analysis for rupture preclusion (proof of integrity, integrity concept) shall be performed (Nuclear Safety Standard 3206

  9. The fossil record of phenotypic integration and modularity: A deep-time perspective on developmental and evolutionary dynamics.

    Science.gov (United States)

    Goswami, Anjali; Binder, Wendy J; Meachen, Julie; O'Keefe, F Robin

    2015-04-21

    Variation is the raw material for natural selection, but the factors shaping variation are still poorly understood. Genetic and developmental interactions can direct variation, but there has been little synthesis of these effects with the extrinsic factors that can shape biodiversity over large scales. The study of phenotypic integration and modularity has the capacity to unify these aspects of evolutionary study by estimating genetic and developmental interactions through the quantitative analysis of morphology, allowing for combined assessment of intrinsic and extrinsic effects. Data from the fossil record in particular are central to our understanding of phenotypic integration and modularity because they provide the only information on deep-time developmental and evolutionary dynamics, including trends in trait relationships and their role in shaping organismal diversity. Here, we demonstrate the important perspective on phenotypic integration provided by the fossil record with a study of Smilodon fatalis (saber-toothed cats) and Canis dirus (dire wolves). We quantified temporal trends in size, variance, phenotypic integration, and direct developmental integration (fluctuating asymmetry) through 27,000 y of Late Pleistocene climate change. Both S. fatalis and C. dirus showed a gradual decrease in magnitude of phenotypic integration and an increase in variance and the correlation between fluctuating asymmetry and overall integration through time, suggesting that developmental integration mediated morphological response to environmental change in the later populations of these species. These results are consistent with experimental studies and represent, to our knowledge, the first deep-time validation of the importance of developmental integration in stabilizing morphological evolution through periods of environmental change.

  10. A current value Hamiltonian Approach for Discrete time Optimal Control Problems arising in Economic Growth

    OpenAIRE

    Naz, Rehana

    2018-01-01

    A Pontryagin-type maximum principle is extended to present value Hamiltonian systems and current value Hamiltonian systems of nonlinear difference equations for a uniform time step $h$. A new method, termed the discrete time current value Hamiltonian method, is established for the construction of first integrals for current value Hamiltonian systems of ordinary difference equations arising in economic growth theory.
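
    For orientation, a minimal textbook-style statement of a discrete-time current-value Hamiltonian and its first-order conditions is sketched below; the notation (discount factor beta, state x_t, control u_t, costate mu_t) is generic and not necessarily that of the cited paper, which works with a uniform time step $h$ and difference equations.

```latex
% Illustrative sketch only: a generic discrete-time optimal control problem with
% discount factor \beta (notation not necessarily that of the cited paper).
\begin{align*}
  &\max_{\{u_t\}} \ \sum_{t=0}^{T-1} \beta^{t}\, U(x_t,u_t)
   \quad\text{s.t.}\quad x_{t+1}=f(x_t,u_t),\ x_0 \text{ given},\\
  &H^{c}(x_t,u_t,\mu_{t+1}) = U(x_t,u_t) + \beta\,\mu_{t+1}\, f(x_t,u_t),\\
  &\frac{\partial H^{c}}{\partial u_t}=0,\qquad
   \mu_t=\frac{\partial H^{c}}{\partial x_t},\qquad
   x_{t+1}=f(x_t,u_t).
\end{align*}
```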

  11. Mixed-integrator-based bi-quad cell for designing a continuous time filter

    International Nuclear Information System (INIS)

    Chen Yong; Zhou Yumei

    2010-01-01

    A new mixed-integrator-based bi-quad cell is proposed. It offers an alternative mechanism for synthesizing complex poles compared with source-follower-based bi-quad cells, which are designed using the positive-feedback technique. Using the negative-feedback technique to combine different integrators, the proposed bi-quad cell synthesizes complex poles for designing a continuous-time filter. It exhibits various advantages, including a compact topology, high gain, no parasitic pole, no CMFB circuit, and high capability. A fourth-order Butterworth lowpass filter using the proposed cells has been fabricated in 0.18 μm CMOS technology. The active area occupied by the filter with test buffer is only 200 × 170 μm². The proposed filter consumes a low power of 201 μW and achieves a 68.5 dB dynamic range. (semiconductor integrated circuits)

  12. Photobleaching kinetics and time-integrated emission of fluorescent probes in cellular membranes

    DEFF Research Database (Denmark)

    Wüstner, Daniel; Christensen, Tanja; Solanko, Lukasz Michal

    2014-01-01

    Since the pioneering work of Hirschfeld, it is known that time-integrated emission (TiEm) of a fluorophore is independent of fluorescence quantum yield and illumination intensity. Practical implementation of this important result for determining exact probe distribution in living cells is often h...

  13. On the minima of the time integrated perturbation factor in the Scherer-Blume theory

    International Nuclear Information System (INIS)

    Silveira, E.F. da; Freire Junior, F.L.; Massolo, C.P.; Schaposnik, F.A.

    1981-09-01

    The minima in the correlation-time dependence of the Scherer-Blume time-integrated attenuation coefficients for the hyperfine perturbation of ions recoiling in gas are studied. Their position and depth are determined for different physical situations, and a comparison with experimental data is shown. (Author) [pt

  14. THE MAXIMUM AMOUNTS OF RAINFALL FALLEN IN SHORT PERIODS OF TIME IN THE HILLY AREA OF CLUJ COUNTY - GENESIS, DISTRIBUTION AND PROBABILITY OF OCCURRENCE

    Directory of Open Access Journals (Sweden)

    BLAGA IRINA

    2014-03-01

    Full Text Available The maximum amounts of rainfall are usually characterized by high intensity, and their effects on the substrate are revealed, at slope level, by the deepening of existing forms of torrential erosion, the formation of new ones, and landslide processes. For the 1971-2000 period, the highest rainfall amounts fallen in 24, 48 and 72 hours were extracted for the weather stations in the hilly area of Cluj County (Cluj-Napoca, Dej, Huedin and Turda), and their variation and spatial and temporal distribution were analyzed. The annual probability of exceedance of maximum rainfall amounts fallen in short time intervals (24, 48 and 72 hours), based on thresholds and class values, was determined using climatological practices and the Hyfran program facilities.
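
    The abstract does not reproduce the Hyfran computations; the sketch below shows one common way such an annual exceedance probability can be estimated, by fitting a Gumbel (extreme-value type I) distribution to a series of annual maxima. The rainfall values, threshold and choice of distribution are illustrative assumptions, not data or methods from the study.

```python
# Illustrative sketch (not the study's Hyfran workflow): estimate the annual
# exceedance probability of 24-hour maximum rainfall by fitting a Gumbel
# distribution to a series of annual maxima. The values below are invented.
import numpy as np
from scipy import stats

annual_max_24h = np.array([38.2, 45.1, 52.7, 61.0, 40.3, 75.4,
                           48.9, 55.6, 67.2, 44.0, 58.3, 81.5])  # mm, hypothetical

loc, scale = stats.gumbel_r.fit(annual_max_24h)

threshold = 70.0  # mm in 24 h
p_exceed = stats.gumbel_r.sf(threshold, loc=loc, scale=scale)  # annual exceedance probability
return_period = 1.0 / p_exceed

print(f"P(annual 24-h maximum > {threshold} mm) = {p_exceed:.3f}")
print(f"Return period = {return_period:.1f} years")
```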

  15. Integrating and Visualizing Tropical Cyclone Data Using the Real Time Mission Monitor

    Science.gov (United States)

    Goodman, H. Michael; Blakeslee, Richard; Conover, Helen; Hall, John; He, Yubin; Regner, Kathryn

    2009-01-01

    The Real Time Mission Monitor (RTMM) is a visualization and information system that fuses multiple Earth science data sources, to enable real time decision-making for airborne and ground validation experiments. Developed at the NASA Marshall Space Flight Center, RTMM is a situational awareness, decision-support system that integrates satellite imagery, radar, surface and airborne instrument data sets, model output parameters, lightning location observations, aircraft navigation data, soundings, and other applicable Earth science data sets. The integration and delivery of this information is made possible using data acquisition systems, network communication links, network server resources, and visualizations through the Google Earth virtual globe application. RTMM is extremely valuable for optimizing individual Earth science airborne field experiments. Flight planners, scientists, and managers appreciate the contributions that RTMM makes to their flight projects. A broad spectrum of interdisciplinary scientists used RTMM during field campaigns including the hurricane-focused 2006 NASA African Monsoon Multidisciplinary Analyses (NAMMA), 2007 NOAA-NASA Aerosonde Hurricane Noel flight, 2007 Tropical Composition, Cloud, and Climate Coupling (TC4), plus a soil moisture (SMAP-VEX) and two arctic research experiments (ARCTAS) in 2008. Improving and evolving RTMM is a continuous process. RTMM recently integrated the Waypoint Planning Tool, a Java-based application that enables aircraft mission scientists to easily develop a pre-mission flight plan through an interactive point-and-click interface. Individual flight legs are automatically calculated "on the fly". The resultant flight plan is then immediately posted to the Google Earth-based RTMM for interested scientists to view the planned flight track and subsequently compare it to the actual real time flight progress. We are planning additional capabilities to RTMM including collaborations with the Jet Propulsion

  16. Integrating Satellite, Radar and Surface Observation with Time and Space Matching

    Science.gov (United States)

    Ho, Y.; Weber, J.

    2015-12-01

    The Integrated Data Viewer (IDV) from Unidata is a Java™-based software framework for analyzing and visualizing geoscience data. It brings together the ability to display and work with satellite imagery, gridded data, surface observations, balloon soundings, NWS WSR-88D Level II and Level III RADAR data, and NOAA National Profiler Network data, all within a unified interface. Applying time and space matching on the satellite, radar and surface observation datasets will automatically synchronize the display from different data sources and spatially subset to match the display area in the view window. These features allow the IDV users to effectively integrate these observations and provide 3 dimensional views of the weather system to better understand the underlying dynamics and physics of weather phenomena.

  17. Modeling the impact of integrating HIV and outpatient health services on patient waiting times in an urban health clinic in Zambia.

    Directory of Open Access Journals (Sweden)

    Sarang Deo

    Full Text Available Rapid scale-up of HIV treatment programs in sub-Saharan Africa has refueled the long-standing health policy debate regarding the merits and drawbacks of vertical and integrated systems. Recent pilots of integrating outpatient and HIV services have shown an improvement in some patient outcomes but a deterioration in waiting times, which can lead to worse health outcomes in the long run. A pilot intervention involving integration of outpatient and HIV services in an urban primary care facility in Lusaka, Zambia was studied. Data on waiting times of patients during two seven-day periods before and six months after the integration were collected using a time and motion study. Statistical tests were conducted to investigate whether the two observation periods differed in operational details such as staffing, patient arrival rates, mix of patients, etc. A discrete event simulation model was constructed to facilitate a fair comparison of waiting times before and after integration. The simulation model was also used to develop alternative configurations of integration and to estimate the resulting waiting times. Comparison of raw data showed that waiting times increased by 32% and 36% after integration for OPD and ART patients, respectively (p<0.01). Using simulation modeling, we found that a large portion of this increase could be explained by changes in operational conditions before and after integration, such as reduced staff availability (p<0.01) and longer breaks between consecutive patients (p<0.05). Controlling for these differences, integration of services, per se, would have resulted in a significant decrease in waiting times for OPD and a moderate decrease for HIV services. Integrating health services has the potential of reducing waiting times due to more efficient use of resources. However, one needs to ensure that other operational factors such as staff availability are not adversely affected by integration.
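
    The study's simulation model is not published in the abstract; the following minimal single-server, first-come-first-served sketch only illustrates how a discrete event simulation can turn assumed arrival and service rates into waiting-time estimates. All parameter values are invented for demonstration and are not taken from the Lusaka clinic data.

```python
# Minimal discrete-event sketch of patient waiting time in a single-provider
# clinic queue (first-come-first-served). This is not the study's model; the
# arrival rate, service time, and clinic hours below are illustrative only.
import random

def simulate_waiting_times(arrival_rate_per_hr=6.0, mean_service_min=8.0,
                           clinic_hours=8.0, seed=1):
    random.seed(seed)
    t, server_free_at, waits = 0.0, 0.0, []
    horizon = clinic_hours * 60.0
    while True:
        t += random.expovariate(arrival_rate_per_hr / 60.0)  # next arrival time (minutes)
        if t > horizon:
            break
        start = max(t, server_free_at)          # patient waits if the provider is busy
        waits.append(start - t)
        server_free_at = start + random.expovariate(1.0 / mean_service_min)
    return waits

waits = simulate_waiting_times()
print(f"patients seen: {len(waits)}, mean wait: {sum(waits)/len(waits):.1f} min")
```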

  18. Analysis of electromagnetic wave interactions on nonlinear scatterers using time domain volume integral equations

    KAUST Repository

    Ulku, Huseyin Arda

    2014-07-06

    Effects of material nonlinearities on electromagnetic field interactions become dominant as field amplitudes increase. A typical example is observed in plasmonics, where highly localized fields “activate” Kerr nonlinearities. Naturally, time domain solvers are the method of choice when it comes to simulating these nonlinear effects. Oftentimes, the finite difference time domain (FDTD) method is used for this purpose. This is simply due to the fact that the explicitness of the FDTD renders the implementation easier and the material nonlinearity can easily be accounted for using an auxiliary differential equation (J.H. Green and A. Taflove, Opt. Express, 14(18), 8305-8310, 2006). On the other hand, explicit marching-on-in-time (MOT)-based time domain integral equation (TDIE) solvers have never been used for the same purpose even though they offer several advantages over FDTD (E. Michielssen, et al., ECCOMAS CFD, The Netherlands, Sep. 5-8, 2006). This is because explicit MOT solvers were not stabilized until recently. An explicit but stable MOT scheme has been proposed for solving the time domain surface magnetic field integral equation (H.A. Ulku, et al., IEEE Trans. Antennas Propag., 61(8), 4120-4131, 2013) and later extended to the time domain volume electric field integral equation (TDVEFIE) (S. B. Sayed, et al., Pr. Electromagn. Res. S., 378, Stockholm, 2013). This explicit MOT scheme uses predictor-corrector updates together with successive over-relaxation during time marching to stabilize the solution even when the time step is as large as in the implicit counterpart. In this work, an explicit MOT-TDVEFIE solver is proposed for analyzing electromagnetic wave interactions on scatterers exhibiting Kerr nonlinearity. Nonlinearity is accounted for using the constitutive relation between the electric field intensity and flux density. Then, this relation and the TDVEFIE are discretized together by expanding the intensity and flux …

  19. Multivariate normal maximum likelihood with both ordinal and continuous variables, and data missing at random.

    Science.gov (United States)

    Pritikin, Joshua N; Brick, Timothy R; Neale, Michael C

    2018-04-01

    A novel method for the maximum likelihood estimation of structural equation models (SEM) with both ordinal and continuous indicators is introduced using a flexible multivariate probit model for the ordinal indicators. A full information approach ensures unbiased estimates for data missing at random. Exceeding the capability of prior methods, up to 13 ordinal variables can be included before integration time increases beyond 1 s per row. The method relies on the axiom of conditional probability to split apart the distribution of continuous and ordinal variables. Due to the symmetry of the axiom, two similar methods are available. A simulation study provides evidence that the two similar approaches offer equal accuracy. A further simulation is used to develop a heuristic to automatically select the most computationally efficient approach. Joint ordinal continuous SEM is implemented in OpenMx, free and open-source software.

  20. Optimizing some 3-stage W-methods for the time integration of PDEs

    Science.gov (United States)

    Gonzalez-Pinto, S.; Hernandez-Abreu, D.; Perez-Rodriguez, S.

    2017-07-01

    The optimization of some W-methods for the time integration of time-dependent PDEs in several spatial variables is considered. In [2, Theorem 1] several three-parametric families of three-stage W-methods for the integration of IVPs in ODEs were studied. In addition, the optimization of several specific methods for PDEs when Approximate Matrix Factorization Splitting (AMF) is used to define the approximate Jacobian matrix (W ≈ f_y(y_n)) was carried out, and some convergence and stability properties were presented [2]. The derived methods were optimized on the basis that the underlying explicit Runge-Kutta method is the one having the largest monotonicity interval among the three-stage, order-three Runge-Kutta methods [1]. Here, we propose an optimization of the methods by imposing an additional order condition [7] to keep order three for parabolic PDE problems [6], but at the price of substantially reducing the length of the nonlinear monotonicity interval of the underlying explicit Runge-Kutta method.

  1. Integral transform method for solving time fractional systems and fractional heat equation

    Directory of Open Access Journals (Sweden)

    Arman Aghili

    2014-01-01

    Full Text Available In the present paper, a time fractional partial differential equation is considered, where the fractional derivative is defined in the Caputo sense. The Laplace transform method has been applied to obtain an exact solution. The authors solved certain homogeneous and nonhomogeneous time fractional heat equations using the integral transform. The transform method is a powerful tool for solving fractional singular integro-differential equations and PDEs. The results reveal that the transform method is very convenient and effective.
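
    As background for the transform method described above, the standard Caputo definition and its Laplace transform are recalled below; this is a generic sketch and the paper's exact notation and problems may differ.

```latex
% Standard definitions behind the transform method (generic sketch; the paper's
% notation may differ). Caputo derivative of order \alpha with n-1 < \alpha \le n,
% and its Laplace transform:
\begin{align*}
  {}^{C}\!D_t^{\alpha} f(t) &= \frac{1}{\Gamma(n-\alpha)}
      \int_0^{t} \frac{f^{(n)}(\tau)}{(t-\tau)^{\alpha-n+1}}\,\mathrm{d}\tau,\\
  \mathcal{L}\big\{{}^{C}\!D_t^{\alpha} f\big\}(s)
      &= s^{\alpha}F(s) - \sum_{k=0}^{n-1} s^{\alpha-1-k} f^{(k)}(0).
\end{align*}
% Applied to the time-fractional heat equation {}^{C}D_t^{\alpha} u = \kappa\, u_{xx},
% the transform in t turns the fractional PDE into an ODE in the space variable.
```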

  2. Maximum concentrations at work and maximum biologically tolerable concentration for working materials 1991

    International Nuclear Information System (INIS)

    1991-01-01

    The meaning of the term 'maximum concentration at work' in regard to various pollutants is discussed. Specifically, a number of dusts and smokes are dealt with. The evaluation criteria for maximum biologically tolerable concentrations of working materials are indicated. The working materials in question are carcinogenic substances or substances liable to cause allergies or to mutate the genome. (VT) [de

  3. An integrated real-time diagnostic concept using expert systems, qualitative reasoning and quantitative analysis

    International Nuclear Information System (INIS)

    Edwards, R.M.; Lee, K.Y.; Kumara, S.; Levine, S.H.

    1989-01-01

    An approach for an integrated real-time diagnostic system is being developed for inclusion as an integral part of a power plant automatic control system. In order to participate in control decisions and automatic closed loop operation, the diagnostic system must operate in real-time. Thus far, an expert system with real-time capabilities has been developed and installed on a subsystem at the Experimental Breeder Reactor (EBR-II) in Idaho, USA. Real-time simulation testing of advanced power plant concepts at the Pennsylvania State University has been developed and was used to support the expert system development and installation at EBR-II. Recently, the US National Science Foundation (NSF) and the US Department of Energy (DOE) have funded a Penn State research program to further enhance application of real-time diagnostic systems by pursuing implementation in a distributed power plant computer system including microprocessor based controllers. This paper summarizes past, current, planned, and possible future approaches to power plant diagnostic systems research at Penn State. 34 refs., 9 figs

  4. Use of J-integral and modified J-integral as measures of elastic-plastic fracture toughness

    International Nuclear Information System (INIS)

    Davis, D.A.; Hays, R.A.; Hackett, E.M.; Joyce, J.A.

    1988-01-01

    J-R curve tests were conducted on 1/2T, 1T and 2T compact specimens of materials having J_IC values ranging from 150 in-lb/sq in to over 2600 in-lb/sq in. These materials were chosen such that some would exceed the maximum crack length criterion of ASTM E1152-87 prior to reaching the maximum J criterion (3-Ni steel, 5000 series Al) and some would exceed the maximum J criterion first (A533B, A710). The elastic-plastic fracture behavior of these materials was examined using both the deformation theory J-integral (J_D) and the modified J-integral (J_M). The J-R curve testing was performed to very large values of crack opening displacement (COD) where the crack growth was typically 75% of the original remaining ligament. The results of this work suggest that the J_D-R curves exhibit no specimen size dependence to crack extensions far in excess of the E1152 allowables. The J_M-R curves calculated for the same specimens show a significant amount of specimen size dependence which becomes larger as the material toughness decreases. This work suggests that it is premature to utilize the modified J-integral in assessing the flaw tolerance of structures.

  5. 40 CFR 1042.140 - Maximum engine power, displacement, power density, and maximum in-use engine speed.

    Science.gov (United States)

    2010-07-01

    ... cylinders having an internal diameter of 13.0 cm and a 15.5 cm stroke length, the rounded displacement would... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Maximum engine power, displacement... Maximum engine power, displacement, power density, and maximum in-use engine speed. This section describes...

  6. Two-Agent Scheduling to Minimize the Maximum Cost with Position-Dependent Jobs

    Directory of Open Access Journals (Sweden)

    Long Wan

    2015-01-01

    Full Text Available This paper investigates a single-machine two-agent scheduling problem to minimize the maximum costs with position-dependent jobs. There are two agents, each with a set of independent jobs, competing to perform their jobs on a common machine. In our scheduling setting, the actual position-dependent processing time of one job is characterized by variable function dependent on the position of the job in the sequence. Each agent wants to fulfil the objective of minimizing the maximum cost of its own jobs. We develop a feasible method to achieve all the Pareto optimal points in polynomial time.

  7. Credal Networks under Maximum Entropy

    OpenAIRE

    Lukasiewicz, Thomas

    2013-01-01

    We apply the principle of maximum entropy to select a unique joint probability distribution from the set of all joint probability distributions specified by a credal network. In detail, we start by showing that the unique joint distribution of a Bayesian tree coincides with the maximum entropy model of its conditional distributions. This result, however, does not hold anymore for general Bayesian networks. We thus present a new kind of maximum entropy models, which are computed sequentially. ...

  8. Nonperturbative time-convolutionless quantum master equation from the path integral approach

    International Nuclear Information System (INIS)

    Nan Guangjun; Shi Qiang; Shuai Zhigang

    2009-01-01

    The time-convolutionless quantum master equation is widely used to simulate reduced dynamics of a quantum system coupled to a bath. However, except for several special cases, applications of this equation are based on perturbative calculation of the dissipative tensor, and are limited to the weak system-bath coupling regime. In this paper, we derive an exact time-convolutionless quantum master equation from the path integral approach, which provides a new way to calculate the dissipative tensor nonperturbatively. Application of the new method is demonstrated in the case of an asymmetrical two-level system linearly coupled to a harmonic bath.

  9. Computing the stretch factor and maximum detour of paths, trees, and cycles in the normed space

    DEFF Research Database (Denmark)

    Wulff-Nilsen, Christian; Grüne, Ansgar; Klein, Rolf

    2012-01-01

    …(n log n) in the algebraic computation tree model, and describe a worst-case O(σn log² n) time algorithm for computing the stretch factor or maximum detour of a path embedded in the plane with a weighted fixed orientation metric defined by σ … We also obtain an optimal O(n) time algorithm for computing the maximum detour of a monotone rectilinear path in the L₁ plane.

  10. Note: Fully integrated 3.2 Gbps quantum random number generator with real-time extraction

    International Nuclear Information System (INIS)

    Zhang, Xiao-Guang; Nie, You-Qi; Liang, Hao; Zhang, Jun; Pan, Jian-Wei; Zhou, Hongyi; Ma, Xiongfeng

    2016-01-01

    We present a real-time and fully integrated quantum random number generator (QRNG) by measuring laser phase fluctuations. The QRNG scheme based on laser phase fluctuations is notable for its capability of generating ultra-high-speed random numbers. However, the speed bottleneck of a practical QRNG lies in the limited speed of randomness extraction. To close the gap between the fast randomness generation and the slow post-processing, we propose a pipeline extraction algorithm based on Toeplitz matrix hashing and implement it in a high-speed field-programmable gate array. Further, all the QRNG components are integrated into a module, including a compact and actively stabilized interferometer, high-speed data acquisition, and real-time data post-processing and transmission. The final generation rate of the QRNG module with real-time extraction can reach 3.2 Gbps.
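
    Toeplitz-matrix hashing, the extraction technique named in this record (and in the duplicate entry below), can be illustrated in a few lines of NumPy; the sketch below is a plain software version with arbitrary block sizes and a random seed, not the pipelined FPGA implementation described in the paper.

```python
# Sketch of Toeplitz-matrix hashing for randomness extraction. The input/output
# sizes and the seed bits are arbitrary illustrative choices; real extractors
# size the output according to the estimated min-entropy of the raw data.
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_extract(raw_bits, n_out, seed_bits):
    """Compress len(raw_bits) raw bits to n_out nearly uniform bits (mod-2 matrix-vector product)."""
    n_in = len(raw_bits)
    # A binary Toeplitz matrix of shape (n_out, n_in) is defined by n_out + n_in - 1 seed bits.
    assert len(seed_bits) == n_out + n_in - 1
    T = toeplitz(seed_bits[:n_out], seed_bits[n_out - 1:])
    return (T @ raw_bits) % 2

rng = np.random.default_rng(0)
raw = rng.integers(0, 2, size=512)              # stand-in for digitized phase noise
seed = rng.integers(0, 2, size=256 + 512 - 1)   # uniform seed (reusable in practice)
out = toeplitz_extract(raw, 256, seed)
print(out[:16])
```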

  11. Note: Fully integrated 3.2 Gbps quantum random number generator with real-time extraction

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Xiao-Guang; Nie, You-Qi; Liang, Hao; Zhang, Jun, E-mail: zhangjun@ustc.edu.cn; Pan, Jian-Wei [Hefei National Laboratory for Physical Sciences at the Microscale and Department of Modern Physics, University of Science and Technology of China, Hefei, Anhui 230026 (China); CAS Center for Excellence and Synergetic Innovation Center in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Zhou, Hongyi; Ma, Xiongfeng [Center for Quantum Information, Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing 100084 (China)

    2016-07-15

    We present a real-time and fully integrated quantum random number generator (QRNG) by measuring laser phase fluctuations. The QRNG scheme based on laser phase fluctuations is notable for its capability of generating ultra-high-speed random numbers. However, the speed bottleneck of a practical QRNG lies in the limited speed of randomness extraction. To close the gap between the fast randomness generation and the slow post-processing, we propose a pipeline extraction algorithm based on Toeplitz matrix hashing and implement it in a high-speed field-programmable gate array. Further, all the QRNG components are integrated into a module, including a compact and actively stabilized interferometer, high-speed data acquisition, and real-time data post-processing and transmission. The final generation rate of the QRNG module with real-time extraction can reach 3.2 Gbps.

  12. Issues in measure-preserving three dimensional flow integrators: Self-adjointness, reversibility, and non-uniform time stepping

    International Nuclear Information System (INIS)

    Finn, John M.

    2015-01-01

    Properties of integration schemes for solenoidal fields in three dimensions are studied, with a focus on integrating magnetic field lines in a plasma using adaptive time stepping. It is shown that implicit midpoint (IM) and a scheme we call three-dimensional leapfrog (LF) can do a good job (in the sense of preserving KAM tori) of integrating fields that are reversible, or (for LF) have a “special divergence-free” (SDF) property. We review the notion of a self-adjoint scheme, showing that such schemes are at least second order accurate and can always be formed by composing an arbitrary scheme with its adjoint. We also review the concept of reversibility, showing that a reversible but not exactly volume-preserving scheme can lead to a fractal invariant measure in a chaotic region, although this property may not often be observable. We also show numerical results indicating that the IM and LF schemes can fail to preserve KAM tori when the reversibility property (and the SDF property for LF) of the field is broken. We discuss extensions to measure preserving flows, the integration of magnetic field lines in a plasma and the integration of rays for several plasma waves. The main new result of this paper relates to non-uniform time stepping for volume-preserving flows. We investigate two potential schemes, both based on the general method of Feng and Shang [Numer. Math. 71, 451 (1995)], in which the flow is integrated in split time steps, each Hamiltonian in two dimensions. The first scheme is an extension of the method of extended phase space, a well-proven method of symplectic integration with non-uniform time steps. This method is found not to work, and an explanation is given. The second method investigated is a method based on transformation to canonical variables for the two split-step Hamiltonian systems. This method, which is related to the method of non-canonical generating functions of Richardson and Finn [Plasma Phys. Controlled Fusion 54, 014004 (2012
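
    As an illustration of the implicit midpoint (IM) step discussed above, the sketch below advances a point along a generic divergence-free field by fixed-point iteration; the test field (an ABC flow), the step size and the tolerances are assumptions for demonstration, not the configurations studied in the paper.

```python
# Sketch of one implicit midpoint (IM) step for tracing a 3-D divergence-free
# field, solved by fixed-point iteration. The ABC flow below is solenoidal and
# serves only as a stand-in test field.
import numpy as np

def B(x):
    """ABC flow: a divergence-free field used here purely as a test case."""
    A, Bc, C = 1.0, 1.0, 1.0
    return np.array([A*np.sin(x[2]) + C*np.cos(x[1]),
                     Bc*np.sin(x[0]) + A*np.cos(x[2]),
                     C*np.sin(x[1]) + Bc*np.cos(x[0])])

def implicit_midpoint_step(x, h, tol=1e-12, max_iter=50):
    x_new = x + h * B(x)                      # explicit Euler predictor
    for _ in range(max_iter):
        x_next = x + h * B(0.5 * (x + x_new))  # x_{n+1} = x_n + h B((x_n + x_{n+1})/2)
        if np.linalg.norm(x_next - x_new) < tol:
            return x_next
        x_new = x_next
    return x_new

x = np.array([0.1, 0.2, 0.3])
for _ in range(1000):
    x = implicit_midpoint_step(x, h=0.05)
print(x)
```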

  13. Maximum Entropy in Drug Discovery

    Directory of Open Access Journals (Sweden)

    Chih-Yuan Tseng

    2014-07-01

    Full Text Available Drug discovery applies multidisciplinary approaches either experimentally, computationally or both ways to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes’ pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.

  14. Technical Note: Reducing the spin-up time of integrated surface water–groundwater models

    KAUST Repository

    Ajami, H.

    2014-06-26

    One of the main challenges in catchment scale application of coupled/integrated hydrologic models is specifying a catchment's initial conditions in terms of soil moisture and depth to water table (DTWT) distributions. One approach to reduce uncertainty in model initialization is to run the model recursively using a single or multiple years of forcing data until the system equilibrates with respect to state and diagnostic variables. However, such "spin-up" approaches often require many years of simulations, making them computationally intensive. In this study, a new hybrid approach was developed to reduce the computational burden of spin-up time for an integrated groundwater-surface water-land surface model (ParFlow.CLM) by using a combination of ParFlow.CLM simulations and an empirical DTWT function. The methodology is examined in two catchments located in the temperate and semi-arid regions of Denmark and Australia, respectively. Our results illustrate that the hybrid approach reduced the spin-up time required by ParFlow.CLM by up to 50%, and we outline a methodology that is applicable to other coupled/integrated modelling frameworks when initialization from equilibrium state is required.

  15. Time-Dependent Heat Conduction Problems Solved by an Integral-Equation Approach

    International Nuclear Information System (INIS)

    Oberaigner, E.R.; Leindl, M.; Antretter, T.

    2010-01-01

    Full text: A classical task of mathematical physics is the formulation and solution of a time-dependent thermoelastic problem. In this work we develop an algorithm for solving the time-dependent heat conduction equation $c_p \rho\,\partial_t T - k\,T_{,ii} = 0$ in an analytical, exact fashion for a two-component domain. By the Green's function approach the formal solution of the problem is obtained. As an intermediate result an integral equation for the temperature history at the domain interface is formulated, which can be solved analytically. This method is applied to a classical engineering problem, i.e., to a special case of a Stefan problem. The Green's function approach in conjunction with the integral-equation method is very useful in cases where strong discontinuities or jumps occur. The initial conditions and the system parameters of the investigated problem give rise to two jumps in the temperature field. Purely numerical solutions are obtained by using the FEM (finite element method) and the FDM (finite difference method) and compared with the analytical approach. At the domain boundary the analytical solution and the FEM solution are in good agreement, but the FDM results show a significant smearing effect. (author)
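
    For contrast with the analytical approach, a generic explicit finite-difference baseline for the one-dimensional heat conduction equation is sketched below; the material constants, grid and boundary data are illustrative and do not reproduce the paper's two-component Stefan-type problem.

```python
# Generic explicit finite-difference (FDM) baseline for c_p*rho*dT/dt = k*d2T/dx2
# in one dimension, the kind of purely numerical solution the abstract compares
# against. Material constants, grid, and boundary data are illustrative only.
import numpy as np

k, rho, cp = 50.0, 7800.0, 460.0           # W/(m K), kg/m^3, J/(kg K); steel-like values
alpha = k / (rho * cp)                      # thermal diffusivity
nx, L = 101, 0.1
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha                    # stable: dt <= 0.5 * dx^2 / alpha

T = np.full(nx, 20.0)                       # initial temperature, deg C
T[0], T[-1] = 100.0, 20.0                   # fixed boundary temperatures

for _ in range(2000):
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2*T[1:-1] + T[:-2])

print(T[::10])
```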

  16. Deep Time Data Infrastructure: Integrating Our Current Geologic and Biologic Databases

    Science.gov (United States)

    Kolankowski, S. M.; Fox, P. A.; Ma, X.; Prabhu, A.

    2016-12-01

    As our knowledge of Earth's geologic and mineralogical history grows, we require more efficient methods of sharing immense amounts of data. Databases across numerous disciplines have been utilized to offer extensive information on very specific Epochs of Earth's history up to its current state, i.e. Fossil record, rock composition, proteins, etc. These databases could be a powerful force in identifying previously unseen correlations such as relationships between minerals and proteins. Creating a unifying site that provides a portal to these databases will aid in our ability as a collaborative scientific community to utilize our findings more effectively. The Deep-Time Data Infrastructure (DTDI) is currently being defined as part of a larger effort to accomplish this goal. DTDI will not be a new database, but an integration of existing resources. Current geologic and related databases were identified, documentation of their schema was established and will be presented as a stage by stage progression. Through conceptual modeling focused around variables from their combined records, we will determine the best way to integrate these databases using common factors. The Deep-Time Data Infrastructure will allow geoscientists to bridge gaps in data and further our understanding of our Earth's history.

  17. Maximum Quantum Entropy Method

    OpenAIRE

    Sim, Jae-Hoon; Han, Myung Joon

    2018-01-01

    Maximum entropy method for analytic continuation is extended by introducing quantum relative entropy. This new method is formulated in terms of matrix-valued functions and therefore invariant under arbitrary unitary transformation of input matrix. As a result, the continuation of off-diagonal elements becomes straightforward. Without introducing any further ambiguity, the Bayesian probabilistic interpretation is maintained just as in the conventional maximum entropy method. The applications o...

  18. THE ACTIVE INTEGRATED CIRCULAR PROCESS – EXPRESSION OF MAXIMUM SYNTHESIS OF SUSTAINABLE DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    Done Ioan

    2015-06-01

    Full Text Available "The accelerated pace of economic growth, prompted by the need to ensure reducing disparities between the various countries, has imposed in the last two decades the adoption of sustainable development principles, particularly as a result of the Rio Declaration on Environment and Development (1992 and the UNESCO Declaration in the fall of 1997. In specific literature, in essence, sustainable development is considered "an economic and social process that is characterized by a simultaneous and concerted action at global, regional and local level. Its objective is to provide living conditions both for the present and forth future. Sustainable development “encompasses the economic, ecological, social and political aspects, linked through cultural and spiritual relationships."(Coşea, 2007In Romania, achieving sustainable development is a major, difficult objective, because it must be done in terms of convergence to the demands of the economic, social, cultural and political context of the EU, and in terms of the completion of the transition to a functioning and competitive market economy. In this context, it is imposed the economic competitiveness through reindustrialization and not least, by harnessing the active integrated circular process. Gross value added and profit chain in the structures of active integrated circular process must reflect the interests of the forces involved(employers, employees and the statethereby forming the basis of respect for the correlation between sustainable development, economic growth and increasing national wealth. The elimination or marginalization of certain links in the value chain and profit causes major disruptions or bankruptcy, with direct implications for recognizing and rewarding performance. Essentially, the building of active integrated circular process will determine the maximization of the profit – the foundation of satisfying all economic interests.

  19. A spatiotemporal dengue fever early warning model accounting for nonlinear associations with meteorological factors: a Bayesian maximum entropy approach

    Science.gov (United States)

    Lee, Chieh-Han; Yu, Hwa-Lung; Chien, Lung-Chang

    2014-05-01

    Dengue fever has been identified as one of the most widespread vector-borne diseases in tropical and sub-tropical regions. In the last decade, dengue has been an emerging infectious disease epidemic in Taiwan, especially in the southern area, which has high annual incidence. For the purpose of disease prevention and control, an early warning system is urgently needed. Previous studies have shown significant relationships between climate variables, in particular rainfall and temperature, and the temporal epidemic patterns of dengue cases. However, the transmission of dengue fever is a complex interactive process whose composite space-time effects have mostly been understated. This study proposes a one-week-ahead warning system for dengue fever epidemics in southern Taiwan that considers nonlinear associations between weekly dengue cases and meteorological factors across space and time. The early warning system is based on an integration of a distributed lag nonlinear model (DLNM) and stochastic Bayesian Maximum Entropy (BME) analysis. The study identified the most significant meteorological measures, including weekly minimum temperature and maximum 24-hour rainfall with a continuous 15-week lag, for variation in dengue cases under conditions of uncertainty. Subsequently, the combination of nonlinear lagged effects of climate variables and a space-time dependence function is implemented via a Bayesian framework to predict dengue fever occurrences in southern Taiwan during 2012. The results show the early warning system is useful for providing potential outbreak spatio-temporal prediction of dengue fever distribution. In conclusion, the proposed approach can provide a practical disease control tool for environmental regulators seeking more effective strategies for dengue fever prevention.

  20. Weighted Maximum-Clique Transversal Sets of Graphs

    OpenAIRE

    Chuan-Min Lee

    2011-01-01

    A maximum-clique transversal set of a graph G is a subset of vertices intersecting all maximum cliques of G. The maximum-clique transversal set problem is to find a maximum-clique transversal set of G of minimum cardinality. Motivated by the placement of transmitters for cellular telephones, Chang, Kloks, and Lee introduced the concept of maximum-clique transversal sets on graphs in 2001. In this paper, we study the weighted version of the maximum-clique transversal set problem for split grap...

  1. Time-varying volatility in Malaysian stock exchange: An empirical study using multiple-volatility-shift fractionally integrated model

    Science.gov (United States)

    Cheong, Chin Wen

    2008-02-01

    This article investigated the influences of structural breaks on the fractionally integrated time-varying volatility model in the Malaysian stock markets which included the Kuala Lumpur composite index and four major sectoral indices. A fractionally integrated time-varying volatility model combined with sudden changes is developed to study the possibility of structural change in the empirical data sets. Our empirical results showed substantial reduction in fractional differencing parameters after the inclusion of structural change during the Asian financial and currency crises. Moreover, the fractionally integrated model with sudden change in volatility performed better in the estimation and specification evaluations.

  2. It is time to abandon "expected bladder capacity." Systematic review and new models for children's normal maximum voided volumes.

    Science.gov (United States)

    Martínez-García, Roberto; Ubeda-Sansano, Maria Isabel; Díez-Domingo, Javier; Pérez-Hoyos, Santiago; Gil-Salom, Manuel

    2014-09-01

    There is agreement to use simple formulae (expected bladder capacity and other age-based linear formulae) as a bladder capacity benchmark, but the real normal bladder capacity of children is unknown. The aims were to offer a systematic review of children's normal bladder capacity, to measure children's normal maximum voided volumes (MVVs), to construct models of MVVs and to compare them with the usual formulae. Computerized, manual and grey literature were reviewed until February 2013. Epidemiological, observational, transversal, multicenter study. A consecutive sample of healthy children aged 5-14 years, attending Primary Care centres, with no urologic abnormality were selected. Participants filled in a 3-day frequency-volume chart. Variables were MVVs (maximum of 24 hr, nocturnal, and daytime maximum voided volumes); diuresis and its daytime and nighttime fractions; body-measure data; and gender. The consecutive steps method was used in a multivariate regression model. Twelve articles met the systematic review's criteria. Five hundred and fourteen cases were analysed. Three models, one for each of the MVVs, were built. All of them were better fitted by exponential equations. Diuresis (not age) was the most significant factor. There was poor agreement between MVVs and the usual formulae. Nocturnal and daytime maximum voided volumes depend on several factors and are different. Nocturnal and daytime maximum voided volumes should be used with different meanings in the clinical setting. Diuresis is the main factor for bladder capacity. This is the first model for benchmarking normal MVVs with diuresis as its main factor. Current formulae are not suitable for clinical use. © 2013 Wiley Periodicals, Inc.

  3. Time for creative integration in medical sociology.

    Science.gov (United States)

    Levine, S

    1995-01-01

    The burgeoning of medical sociology has sometimes been accompanied by unfortunate parochialism and the presence of opposing intellectual camps that ignore and even impugn each other's work. We have lost opportunities to achieve creative discourse and integration of different perspectives, methods, and findings. At this stage we should consider how we can foster creative integration within our field.

  4. Narrow band interference cancelation in OFDM: A structured maximum likelihood approach

    KAUST Repository

    Sohail, Muhammad Sadiq; Al-Naffouri, Tareq Y.; Al-Ghadhban, Samir N.

    2012-01-01

    This paper presents a maximum likelihood (ML) approach to mitigate the effect of narrow band interference (NBI) in a zero padded orthogonal frequency division multiplexing (ZP-OFDM) system. The NBI is assumed to be time variant and asynchronous

  5. Measuring fragmentation in dissociative identity disorder: the integration measure and relationship to switching and time in therapy.

    Science.gov (United States)

    Barlow, M Rose; Chu, James A

    2014-01-01

    Some people with dissociative identity disorder (DID) have very little communication or awareness among the parts of their identity, while others experience a great deal of cooperation among alternate identities. Previous research on this topic has been sparse. Currently, there is no empirical measure of integration versus fragmentation in a person with DID. In this study, we report the development of such a measure. The goal of this study was to pilot the integration measure (IM) and to address its psychometric properties and relationships to other measures. The IM is the first standardized measure of integration in DID. Eleven women with DID participated in an experiment that included a variety of tasks. They filled out questionnaires about trauma and dissociation as well as the IM. They also provided verbal results about switching among alternate identities during the study sessions. Participants switched among identities an average of 5.8 times during the first session, and switching was highly correlated with trauma. Integration was related to switching, though this relationship may be non-linear. Integration was not related to time in psychotherapy. The IM provides a useful beginning to quantify and study integration and fragmentation in DID. Directions for future research are also discussed, including expanding the IM from this pilot. The IM may be useful in treatment settings to assess progress or change over time.

  6. Maximum power demand cost

    International Nuclear Information System (INIS)

    Biondi, L.

    1998-01-01

    The charging for a service is a supplier's remuneration for the expenses incurred in providing it. There are currently two charges for electricity: consumption and maximum demand. While no problem arises about the former, the issue is more complicated for the latter and the analysis in this article tends to show that the annual charge for maximum demand arbitrarily discriminates among consumer groups, to the disadvantage of some [it

  7. A systematic method for constructing time discretizations of integrable lattice systems: local equations of motion

    International Nuclear Information System (INIS)

    Tsuchida, Takayuki

    2010-01-01

    We propose a new method for discretizing the time variable in integrable lattice systems while maintaining the locality of the equations of motion. The method is based on the zero-curvature (Lax pair) representation and the lowest-order 'conservation laws'. In contrast to the pioneering work of Ablowitz and Ladik, our method allows the auxiliary dependent variables appearing in the stage of time discretization to be expressed locally in terms of the original dependent variables. The time-discretized lattice systems have the same set of conserved quantities and the same structures of the solutions as the continuous-time lattice systems; only the time evolution of the parameters in the solutions that correspond to the angle variables is discretized. The effectiveness of our method is illustrated using examples such as the Toda lattice, the Volterra lattice, the modified Volterra lattice, the Ablowitz-Ladik lattice (an integrable semi-discrete nonlinear Schroedinger system) and the lattice Heisenberg ferromagnet model. For the modified Volterra lattice, we also present its ultradiscrete analogue.

  8. A three-dimensional integrated nanogenerator for effectively harvesting sound energy from the environment

    Science.gov (United States)

    Liu, Jinmei; Cui, Nuanyang; Gu, Long; Chen, Xiaobo; Bai, Suo; Zheng, Youbin; Hu, Caixia; Qin, Yong

    2016-02-01

    An integrated triboelectric nanogenerator (ITNG) with a three-dimensional structure benefiting sound propagation and adsorption is demonstrated to more effectively harvest sound energy with improved output performance. With different multifunctional integrated layers working harmonically, it could generate a short-circuit current up to 2.1 mA, an open-circuit voltage up to 232 V and the maximum charging rate can reach 453 μC s⁻¹ for a 1 mF capacitor, which are 4.6 times, 2.6 times and 7.4 times the highest reported values, respectively. Further study shows that the ITNG works well under sound in a wide range of sound intensity levels (SILs) and frequencies, and its output is sensitive to the SIL and frequency of the sound, which reveals that the ITNG can act as a self-powered active sensor for real-time noise surveillance and health care. Moreover, this generator can be used to directly power the Fe(OH)3 sol electrophoresis and shows great potential as a wireless power supply in the electrochemical industry.

  9. 24 CFR 982.634 - Homeownership option: Maximum term of homeownership assistance.

    Science.gov (United States)

    2010-04-01

    ... VOUCHER PROGRAM Special Housing Types Homeownership Option § 982.634 Homeownership option: Maximum term of... unit during the time that homeownership payments are made; or (2) Is the spouse of any member of the household who has an ownership interest in the unit during the time homeownership payments are made. (c...

  10. A stochastic approach for quantifying immigrant integration: the Spanish test case

    Science.gov (United States)

    Agliari, Elena; Barra, Adriano; Contucci, Pierluigi; Sandell, Richard; Vernia, Cecilia

    2014-10-01

    We apply stochastic process theory to the analysis of immigrant integration. Using a unique and detailed data set from Spain, we study the relationship between local immigrant density and two social and two economic immigration quantifiers for the period 1999-2010. As opposed to the classic time-series approach, by letting immigrant density play the role of ‘time’ and the quantifier the role of ‘space,’ it becomes possible to analyse the behavior of the quantifiers by means of continuous time random walks. Two classes of results are then obtained. First, we show that social integration quantifiers evolve following diffusion law, while the evolution of economic quantifiers exhibits ballistic dynamics. Second, we make predictions of best- and worst-case scenarios taking into account large local fluctuations. Our stochastic process approach to integration lends itself to interesting forecasting scenarios which, in the hands of policy makers, have the potential to improve political responses to integration problems. For instance, estimating the standard first-passage time and maximum-span walk reveals local differences in integration performance for different immigration scenarios. Thus, by recognizing the importance of local fluctuations around national means, this research constitutes an important tool to assess the impact of immigration phenomena on municipal budgets and to set up solid multi-ethnic plans at the municipal level as immigration pressures build.
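
    A first-passage time of the kind mentioned above can be estimated by straightforward Monte Carlo once a diffusive model for a quantifier is assumed; the sketch below does this for a generic diffusing walker, with all parameter values invented for illustration rather than taken from the Spanish data set.

```python
# Minimal Monte Carlo sketch of a first-passage-time estimate for a diffusing
# quantifier, in the spirit of the random-walk picture described above (here the
# "time" axis plays the role that immigrant density plays in the paper).
# Diffusion constant, threshold, and step size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)

def first_passage_time(threshold=1.0, diffusion=0.5, dt=1e-3, t_max=50.0):
    x, t = 0.0, 0.0
    sigma = np.sqrt(2.0 * diffusion * dt)
    while t < t_max:
        x += rng.normal(0.0, sigma)
        t += dt
        if x >= threshold:
            return t
    return np.nan  # did not reach the threshold within t_max (estimate is truncated)

times = np.array([first_passage_time() for _ in range(500)])
print(f"mean first-passage 'time' (truncated at t_max): {np.nanmean(times):.2f}")
```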

  11. A stochastic approach for quantifying immigrant integration: the Spanish test case

    International Nuclear Information System (INIS)

    Agliari, Elena; Barra, Adriano; Contucci, Pierluigi; Sandell, Richard; Vernia, Cecilia

    2014-01-01

    We apply stochastic process theory to the analysis of immigrant integration. Using a unique and detailed data set from Spain, we study the relationship between local immigrant density and two social and two economic immigration quantifiers for the period 1999–2010. As opposed to the classic time-series approach, by letting immigrant density play the role of ‘time’ and the quantifier the role of ‘space,’ it becomes possible to analyse the behavior of the quantifiers by means of continuous time random walks. Two classes of results are then obtained. First, we show that social integration quantifiers evolve following diffusion law, while the evolution of economic quantifiers exhibits ballistic dynamics. Second, we make predictions of best- and worst-case scenarios taking into account large local fluctuations. Our stochastic process approach to integration lends itself to interesting forecasting scenarios which, in the hands of policy makers, have the potential to improve political responses to integration problems. For instance, estimating the standard first-passage time and maximum-span walk reveals local differences in integration performance for different immigration scenarios. Thus, by recognizing the importance of local fluctuations around national means, this research constitutes an important tool to assess the impact of immigration phenomena on municipal budgets and to set up solid multi-ethnic plans at the municipal level as immigration pressures build. (paper)

  12. A Research on Maximum Symbolic Entropy from Intrinsic Mode Function and Its Application in Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Zhuofei Xu

    2017-01-01

    Full Text Available Empirical mode decomposition (EMD) is a self-adaptive analysis method for nonlinear and nonstationary signals. It has been widely applied to machinery fault diagnosis and structural damage detection. A novel feature, the maximum symbolic entropy of intrinsic mode functions based on EMD, is proposed in this paper to enhance the recognition ability of EMD. First, a signal is decomposed into a collection of intrinsic mode functions (IMFs) based on the local characteristic time scale of the signal, and the IMFs are then transformed into a series of symbolic sequences with different parameters. Second, it can be found that the entropies of the symbolic IMFs are quite different; however, there is always a maximum value for a certain symbolic IMF. Third, the maximum symbolic entropy is taken as a feature to describe the IMFs of a signal. Finally, the proposed feature is applied to evaluate the effect of maximum symbolic entropy in fault diagnosis of rolling bearings, and the maximum symbolic entropy is then compared with other standard time-domain features in a contrast experiment. Although maximum symbolic entropy is only a time-domain feature, it can reveal the characteristic information of the signal accurately. It can also be used in other fields related to the EMD method.
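
    The feature construction described above can be sketched compactly once the IMFs are available from any EMD implementation: symbolize each IMF, compute the Shannon entropy of the symbol histogram, and keep the maximum. The uniform-binning symbolization and the number of symbols below are assumptions; the paper's parameterization may differ.

```python
# Sketch of a maximum-symbolic-entropy feature computed from precomputed IMFs.
# The symbolization (uniform amplitude binning) and symbol count are assumptions.
import numpy as np

def symbolic_entropy(x, n_symbols=6):
    edges = np.linspace(x.min(), x.max(), n_symbols + 1)
    symbols = np.clip(np.digitize(x, edges[1:-1]), 0, n_symbols - 1)
    counts = np.bincount(symbols, minlength=n_symbols)
    p = counts[counts > 0] / len(x)
    return -np.sum(p * np.log2(p))  # Shannon entropy of the symbol histogram

def max_symbolic_entropy(imfs, n_symbols=6):
    """imfs: array of shape (n_imfs, n_samples) from any EMD implementation."""
    return max(symbolic_entropy(imf, n_symbols) for imf in imfs)

# Toy usage with fake 'IMFs' (two sinusoids plus noise) standing in for EMD output.
t = np.linspace(0, 1, 1000)
fake_imfs = np.vstack([np.sin(2*np.pi*50*t), np.sin(2*np.pi*5*t),
                       0.1*np.random.default_rng(0).normal(size=t.size)])
print(f"maximum symbolic entropy: {max_symbolic_entropy(fake_imfs):.3f}")
```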

  13. Maximum total organic carbon limit for DWPF melter feed

    International Nuclear Information System (INIS)

    Choi, A.S.

    1995-01-01

    DWPF recently decided to control the potential flammability of melter off-gas by limiting the total carbon content in the melter feed and maintaining adequate conditions for combustion in the melter plenum. With this new strategy, all the LFL analyzers and associated interlocks and alarms were removed from both the primary and backup melter off-gas systems. Subsequently, D. Iverson of DWPF T&E requested that SRTC determine the maximum allowable total organic carbon (TOC) content in the melter feed which can be implemented as part of the Process Requirements for melter feed preparation (PR-S04). The maximum TOC limit thus determined in this study was about 24,000 ppm on an aqueous slurry basis. At the TOC levels below this, the peak concentration of combustible components in the quenched off-gas will not exceed 60 percent of the LFL during off-gas surges of magnitudes up to three times nominal, provided that the melter plenum temperature and the air purge rate to the BUFC are monitored and controlled above 650 degrees C and 220 lb/hr, respectively. Appropriate interlocks should discontinue the feeding when one or both of these conditions are not met. Both the magnitude and duration of an off-gas surge have a major impact on the maximum TOC limit, since they directly affect the melter plenum temperature and combustion. Although the data obtained during recent DWPF melter startup tests showed that the peak magnitude of a surge can be greater than three times nominal, the observed duration was considerably shorter, on the order of several seconds. The long surge duration assumed in this study has a greater impact on the plenum temperature than the peak magnitude, thus making the maximum TOC estimate conservative. Two models were used to make the necessary calculations to determine the TOC limit.

  14. A Scalable, Timing-Safe, Network-on-Chip Architecture with an Integrated Clock Distribution Method

    DEFF Research Database (Denmark)

    Bjerregaard, Tobias; Stensgaard, Mikkel Bystrup; Sparsø, Jens

    2007-01-01

    Growing system sizes together with increasing performance variability are making globally synchronous operation hard to realize. Mesochronous clocking constitutes a possible solution to the problems faced. The most fundamental of the problems faced when communicating between mesochronously clocked regions concerns the possibility of data corruption caused by metastability. This paper presents an integrated communication and mesochronous clocking strategy, which avoids timing related errors while maintaining a globally synchronous system perspective. The architecture is scalable, as timing integrity is based purely on local observations. It is demonstrated with a 90 nm CMOS standard cell network-on-chip design which implements completely timing-safe, global communication in a modular system.

  15. Path integration of head direction: updating a packet of neural activity at the correct speed using neuronal time constants.

    Science.gov (United States)

    Walters, D M; Stringer, S M

    2010-07-01

    A key question in understanding the neural basis of path integration is how individual, spatially responsive, neurons may self-organize into networks that can, through learning, integrate velocity signals to update a continuous representation of location within an environment. It is of vital importance that this internal representation of position is updated at the correct speed, and in real time, to accurately reflect the motion of the animal. In this article, we present a biologically plausible model of velocity path integration of head direction that can solve this problem using neuronal time constants to effect natural time delays, over which associations can be learned through associative Hebbian learning rules. The model comprises a linked continuous attractor network and competitive network. In simulation, we show that the same model is able to learn two different speeds of rotation when implemented with two different values for the time constant, and without the need to alter any other model parameters. The proposed model could be extended to path integration of place in the environment, and path integration of spatial view.

  16. Time-integration methods for finite element discretisations of the second-order Maxwell equation

    NARCIS (Netherlands)

    Sarmany, D.; Bochev, Mikhail A.; van der Vegt, Jacobus J.W.

    This article deals with time integration for the second-order Maxwell equations with possibly non-zero conductivity in the context of the discontinuous Galerkin finite element method (DG-FEM) and the $H(\mathrm{curl})$-conforming FEM. For the spatial discretisation, hierarchic $H(\mathrm{curl})$-conforming basis functions are employed.

  17. Uncertainty of angular displacement measurement with a MEMS gyroscope integrated in a smartphone

    International Nuclear Information System (INIS)

    De Campos Porath, Maurício; Dolci, Ricardo

    2015-01-01

    Low-cost inertial sensors have recently gained popularity and are now widely used in electronic devices such as smartphones and tablets. In this paper we present the results of a set of experiments aiming to assess the angular displacement measurement errors of a gyroscope integrated in a smartphone of a recent model. The goal is to verify whether these sensors could substitute dedicated electronic inclinometers for the measurement of angular displacement. We estimated a maximum error of 0.3° (sum of expanded uncertainty and maximum absolute bias) for the roll and pitch axes, for a measurement time without referencing up to 1 h. (paper)
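
    The measurement principle is simply the numerical integration of the gyroscope's angular-rate output, so any residual rate bias produces an error that grows linearly with the unreferenced measurement time. The Python sketch below illustrates this (the bias value and variable names are hypothetical, not taken from the paper).

      import numpy as np

      def integrate_gyro(rate_dps, dt, bias_dps=0.0):
          """Angular displacement (degrees) from angular-rate samples (deg/s).

          A constant residual bias that is not removed adds a drift of
          bias_dps * t after a measurement time t."""
          return np.cumsum((rate_dps - bias_dps) * dt)

      # Hypothetical example: a residual bias of 1e-4 deg/s accumulates to
      # 0.36 degrees of error after one hour without referencing.
      print(1e-4 * 3600.0)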

  18. MAXIMUM-LIKELIHOOD-ESTIMATION OF THE ENTROPY OF AN ATTRACTOR

    NARCIS (Netherlands)

    SCHOUTEN, JC; TAKENS, F; VANDENBLEEK, CM

    In this paper, a maximum-likelihood estimate of the (Kolmogorov) entropy of an attractor is proposed that can be obtained directly from a time series. Also, the relative standard deviation of the entropy estimate is derived; it is dependent on the entropy and on the number of samples used in the estimation.

  19. Atomization in graphite-furnace atomic absorption spectrometry. Peak-height method vs. integration method of measuring absorbance: carbon rod atomizer 63

    International Nuclear Information System (INIS)

    Sturgeon, R.E.; Chakrabarti, C.L.; Maines, I.S.; Bertels, P.C.

    1975-01-01

    Oscilloscopic traces of transient atomic absorption signals generated during continuous heating of a Carbon Rod Atomizer model 63 show features which are characteristic of the element being atomized. This research was undertaken to determine the significance and usefulness of the two analytically significant parameters, absorbance maximum and integrated absorbance. For measuring integrated absorbance, an electronic integrating control unit consisting of a timing circuit, a lock-in amplifier, and a digital voltmeter, which functions as a direct absorbance x second readout, has been designed, developed, and successfully tested. Oscilloscopic and recorder traces of the absorbance maximum and digital display of the integrated absorbance are simultaneously obtained. For the elements studied, Cd, Zn, Cu, Al, Sn, Mo, and V, the detection limits and the precision obtained are practically identical for both methods of measurements. The sensitivities by the integration method are about the same as, or less than, those obtained by the peak-height method, whereas the calibration curves by the former are generally linear over wider ranges of concentrations. (U.S.)
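
    The two measurands compared in the study are easy to state: the peak-height method reports the maximum of the transient absorbance signal, while the integration method reports its time integral (absorbance x seconds). A minimal Python sketch on a synthetic transient (the pulse shape and numbers are illustrative only):

      import numpy as np

      t = np.linspace(0.0, 3.0, 3001)              # time, s
      A = 0.4 * np.exp(-((t - 1.0) / 0.25) ** 2)   # synthetic transient absorbance

      peak_height = A.max()          # peak-height method: absorbance maximum
      integrated = np.trapz(A, t)    # integration method: absorbance x seconds

      print(f"peak height = {peak_height:.3f}, integrated = {integrated:.3f} A*s")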

  20. An integrated theory of prospective time interval estimation : The role of cognition, attention, and learning

    NARCIS (Netherlands)

    Taatgen, Niels A.; van Rijn, Hedderik; Anderson, John

    A theory of prospective time perception is introduced and incorporated as a module in an integrated theory of cognition, thereby extending existing theories and allowing predictions about attention and learning. First, a time perception module is established by fitting existing datasets (interval

  1. Power and hydrogen production from ammonia in a micro-thermophotovoltaic device integrated with a micro-reformer

    International Nuclear Information System (INIS)

    Um, Dong Hyun; Kim, Tae Young; Kwon, Oh Chae

    2014-01-01

    Power and hydrogen (H2) production by burning and reforming ammonia (NH3) in a micro-TPV (microscale-thermophotovoltaic) device integrated with a micro-reformer is studied experimentally. A heat-recirculating micro-emitter with cyclone and helical adapters, which enhance the residence time of the fed fuel-air mixtures and promote uniform burning, burns H2-added NH3-air mixtures. A micro-reformer that converts NH3 to H2 using ruthenium as a catalyst surrounds the micro-emitter, which acts as its heat source. The micro-reformer is surrounded by a chamber, on the inner and outer walls of which gallium antimonide photovoltaic cells and cooling fins are installed. For the micro-reformer-integrated micro-TPV device a maximum overall efficiency of 8.1% with an electrical power of 4.5 W and a maximum NH3 conversion rate of 96.0% with an H2 production rate of 22.6 W (based on lower heating value) are obtained, indicating that the overall efficiency is remarkably enhanced compared with the 2.0% obtained when the micro-TPV device operates alone. This supports the potential of improving the overall efficiency of a micro-TPV device by integrating it with a micro-reformer. Also, the feasibility of using NH3 as a carbon-free fuel for both burning and reforming in practical micro power and H2 generation devices has been demonstrated. - Highlights: • Performance of micro-TPV device integrated with micro-reformer is evaluated. • Feasibility of using NH3–H2 blends in integrated system has been demonstrated. • Integration with micro-reformer improves performance of micro-TPV device. • Maximum overall efficiency of 8.1% is found compared with 2.0% without integration.

  2. CBP TOOLBOX VERSION 2.0: CODE INTEGRATION ENHANCEMENTS

    Energy Technology Data Exchange (ETDEWEB)

    Smith, F.; Flach, G.; BROWN, K.

    2013-06-01

    This report describes enhancements made to code integration aspects of the Cementitious Barriers Project (CBP) Toolbox as a result of development work performed at the Savannah River National Laboratory (SRNL) in collaboration with Vanderbilt University (VU) in the first half of fiscal year 2013. Code integration refers to the interfacing to standalone CBP partner codes, used to analyze the performance of cementitious materials, with the CBP Software Toolbox. The most significant enhancements are: 1) Improved graphical display of model results. 2) Improved error analysis and reporting. 3) Increase in the default maximum model mesh size from 301 to 501 nodes. 4) The ability to set the LeachXS/Orchestra simulation times through the GoldSim interface. These code interface enhancements have been included in a new release (Version 2.0) of the CBP Toolbox.

  3. Maximum Power Point Tracking of Photovoltaic System for Traffic Light Application

    Directory of Open Access Journals (Sweden)

    Riza Muhida

    2013-07-01

    A photovoltaic traffic light system is a significant application of a renewable energy source. The system was developed as an alternative effort by the local authority to reduce the expenditure of paying fees to the power supplier, whose power comes from conventional energy sources. Since photovoltaic (PV) modules still have relatively low conversion efficiency, a maximum power point tracking (MPPT) control method is applied to the traffic light system. MPPT is intended to capture the maximum power during daytime in order to charge the battery at the maximum rate, so that the stored energy can be used at night or on cloudy days. The MPPT stage is essentially a DC-DC converter that can step the voltage up or down to reach the maximum power point using Pulse Width Modulation (PWM) control. From the experiment, the operating voltage under MPPT was 16.454 V, an error of 2.6% compared with the maximum power point voltage of the PV module, which is 16.9 V. Based on this result, the MPPT control successfully delivers close to the maximum power from the PV module to the battery.
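
    The abstract does not spell out which tracking algorithm drives the PWM duty cycle; the perturb-and-observe (P&O) scheme below is a common, generic choice shown only as a hedged Python sketch of how such a controller steps the operating voltage toward the maximum power point.

      def perturb_and_observe(v_meas, p_meas, state, step=0.1):
          """One iteration of a basic perturb-and-observe MPPT loop.

          state holds the previous voltage/power sample and the reference
          voltage that the DC-DC converter's PWM duty cycle regulates to."""
          dv = v_meas - state["v_prev"]
          dp = p_meas - state["p_prev"]
          # Keep perturbing in the same direction while power rises;
          # reverse the perturbation once power starts to fall.
          if dp * dv >= 0:
              state["v_ref"] += step
          else:
              state["v_ref"] -= step
          state["v_prev"], state["p_prev"] = v_meas, p_meas
          return state["v_ref"]

      # state = {"v_prev": 0.0, "p_prev": 0.0, "v_ref": 15.0}  # illustrative initial values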

  4. On application of a new hybrid maximum power point tracking (MPPT) based photovoltaic system to the closed plant factory

    International Nuclear Information System (INIS)

    Jiang, Joe-Air; Su, Yu-Li; Shieh, Jyh-Cherng; Kuo, Kun-Chang; Lin, Tzu-Shiang; Lin, Ta-Te; Fang, Wei; Chou, Jui-Jen; Wang, Jen-Cheng

    2014-01-01

    Highlights: • Hybrid MPPT method was developed and utilized in a PV system of closed plant factory. • The tracking of the maximum power output of PV system can be achieved in real time. • Hybrid MPPT method not only decreases energy loss but increases power utilization. • The feasibility of applying PV system to the closed plant factory has been examined. • The PV system significantly reduced CO 2 emissions and curtailed the fossil fuels. - Abstract: Photovoltaic (PV) generation systems have been shown to have a promising role for use in high electric-load buildings, such as the closed plant factory which is dependent upon artificial lighting. The power generated by the PV systems can be either directly supplied to the buildings or fed back into the electrical grid to reduce the high economic costs and environmental impact associated with the traditional energy sources such as nuclear power and fossil fuels. However, PV systems usually suffer from low energy-conversion efficiency, and it is therefore necessary to improve their performance by tackling the energy loss issues. The maximum power point tracking (MPPT) control technique is essential to the PV-assisted generation systems in order to achieve the maximum power output in real time. In this study, we integrate the previously proposed direct-prediction MPP method with a perturbation and observation (P and O) method to develop a new hybrid MPPT method. The proposed MPPT method is further utilized in the PV inverters in a PV system installed on the roof of a closed plant factory at National Taiwan University. The tested PV system is constructed as a two-stage grid-connected photovoltaic power conditioning (PVPC) system with a boost-buck full bridge design configuration. A control scheme based on the hybrid MPPT method is also developed and implemented in the PV inverters of the PVPC system to achieve tracking of the maximum power output of the PV system in real time. Based on experimental results

  5. A Stable Marching on-in-time Scheme for Solving the Time Domain Electric Field Volume Integral Equation on High-contrast Scatterers

    KAUST Repository

    Sayed, Sadeed Bin

    2015-05-05

    A time domain electric field volume integral equation (TD-EFVIE) solver is proposed for characterizing transient electromagnetic wave interactions on high-contrast dielectric scatterers. The TD-EFVIE is discretized using the Schaubert- Wilton-Glisson (SWG) and approximate prolate spherical wave (APSW) functions in space and time, respectively. The resulting system of equations can not be solved by a straightforward application of the marching on-in-time (MOT) scheme since the two-sided APSW interpolation functions require the knowledge of unknown “future” field samples during time marching. Causality of the MOT scheme is restored using an extrapolation technique that predicts the future samples from known “past” ones. Unlike the extrapolation techniques developed for MOT schemes that are used in solving time domain surface integral equations, this scheme trains the extrapolation coefficients using samples of exponentials with exponents on the complex frequency plane. This increases the stability of the MOT-TD-EFVIE solver significantly, since the temporal behavior of decaying and oscillating electromagnetic modes induced inside the scatterers is very accurately taken into account by this new extrapolation scheme. Numerical results demonstrate that the proposed MOT solver maintains its stability even when applied to analyzing wave interactions on high-contrast scatterers.

  6. A Stable Marching on-in-time Scheme for Solving the Time Domain Electric Field Volume Integral Equation on High-contrast Scatterers

    KAUST Repository

    Sayed, Sadeed Bin; Ulku, Huseyin; Bagci, Hakan

    2015-01-01

    A time domain electric field volume integral equation (TD-EFVIE) solver is proposed for characterizing transient electromagnetic wave interactions on high-contrast dielectric scatterers. The TD-EFVIE is discretized using the Schaubert- Wilton-Glisson (SWG) and approximate prolate spherical wave (APSW) functions in space and time, respectively. The resulting system of equations can not be solved by a straightforward application of the marching on-in-time (MOT) scheme since the two-sided APSW interpolation functions require the knowledge of unknown “future” field samples during time marching. Causality of the MOT scheme is restored using an extrapolation technique that predicts the future samples from known “past” ones. Unlike the extrapolation techniques developed for MOT schemes that are used in solving time domain surface integral equations, this scheme trains the extrapolation coefficients using samples of exponentials with exponents on the complex frequency plane. This increases the stability of the MOT-TD-EFVIE solver significantly, since the temporal behavior of decaying and oscillating electromagnetic modes induced inside the scatterers is very accurately taken into account by this new extrapolation scheme. Numerical results demonstrate that the proposed MOT solver maintains its stability even when applied to analyzing wave interactions on high-contrast scatterers.

  7. Designing driver assistance systems with crossmodal signals: multisensory integration rules for saccadic reaction times apply.

    Directory of Open Access Journals (Sweden)

    Rike Steenken

    Modern driver assistance systems make increasing use of auditory and tactile signals in order to reduce the driver's visual information load. This entails potential crossmodal interaction effects that need to be taken into account in designing an optimal system. Here we show that saccadic reaction times to visual targets (cockpit or outside mirror), presented in a driving simulator environment and accompanied by auditory or tactile accessories, follow some well-known spatiotemporal rules of multisensory integration, usually found under confined laboratory conditions. Auditory nontargets speed up reaction time by about 80 ms. The effect tends to be maximal when the nontarget is presented 50 ms before the target and when target and nontarget are spatially coincident. The effect of a tactile nontarget (vibrating steering wheel) was less pronounced and not spatially specific. It is shown that the average reaction times are well-described by the stochastic "time window of integration" model for multisensory integration developed by the authors. This two-stage model postulates that crossmodal interaction occurs only if the peripheral processes from the different sensory modalities terminate within a fixed temporal interval, and that the amount of crossmodal interaction manifests itself in an increase or decrease of second stage processing time. A qualitative test is consistent with the model prediction that the probability of interaction, but not the amount of crossmodal interaction, depends on target-nontarget onset asynchrony. A quantitative model fit yields estimates of individual participants' parameters, including the size of the time window. Some consequences for the design of driver assistance systems are discussed.

  8. Surface of Maximums of AR(2) Process Spectral Densities and its Application in Time Series Statistics

    Directory of Open Access Journals (Sweden)

    Alexander V. Ivanov

    2017-09-01

    Conclusions. The obtained formula for the surface of maximums of the noise spectral densities makes it possible to determine for which values of the AR(2) process characteristic polynomial coefficients one can expect a greater rate of convergence to zero of the probabilities of large deviations of the considered estimates.
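
    Only the conclusions of this record are reproduced above. For context, the spectral density of a stationary AR(2) process X_t = a1*X_{t-1} + a2*X_{t-2} + e_t (this sign convention is an assumption of the sketch, not taken from the paper) is f(lambda) = sigma^2 / (2*pi*|1 - a1*exp(-i*lambda) - a2*exp(-2i*lambda)|^2), and its maximum over lambda in [0, pi] can be located numerically:

      import numpy as np

      def ar2_spectral_density(lam, a1, a2, sigma2=1.0):
          """Spectral density of X_t = a1*X_{t-1} + a2*X_{t-2} + e_t, Var(e) = sigma2."""
          denom = np.abs(1.0 - a1 * np.exp(-1j * lam) - a2 * np.exp(-2j * lam)) ** 2
          return sigma2 / (2.0 * np.pi * denom)

      def spectral_maximum(a1, a2, n_grid=100_000):
          """Grid search for the location and value of the spectral maximum on [0, pi]."""
          lam = np.linspace(0.0, np.pi, n_grid)
          f = ar2_spectral_density(lam, a1, a2)
          k = np.argmax(f)
          return lam[k], f[k]

      print(spectral_maximum(0.5, -0.8))   # peaked spectrum for complex AR(2) roots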

  9. A hybrid method combining the FDTD and a time domain boundary-integral equation marching-on-in-time algorithm

    Directory of Open Access Journals (Sweden)

    A. Becker

    2003-01-01

    In this paper a hybrid method combining the FDTD/FIT with a Time Domain Boundary-Integral Marching-on-in-Time Algorithm (TD-BIM) is presented. Inhomogeneous regions are modelled with the FIT-method, an alternative formulation of the FDTD. Homogeneous regions (in the presented numerical example, the open space) are modelled using a TD-BIM with equivalent electric and magnetic currents flowing on the boundary between the inhomogeneous and the homogeneous regions. The regions are coupled by the tangential magnetic fields just outside the inhomogeneous regions. These fields are calculated by making use of a Mixed Potential Integral Formulation for the magnetic field. The latter consists of equivalent electric and magnetic currents on the boundary plane between the homogeneous and the inhomogeneous region. The magnetic currents result directly from the electric fields of the Yee lattice. Electric currents in the same plane are calculated by making use of the TD-BIM and using the electric field of the Yee lattice as boundary condition. The presented hybrid method only needs the interpolations inherent in FIT and no additional interpolation. A numerical result is compared to a calculation that models both regions with FDTD.

  10. The duration of uncertain times: audiovisual information about intervals is integrated in a statistically optimal fashion.

    Directory of Open Access Journals (Sweden)

    Jess Hartcher-O'Brien

    Often multisensory information is integrated in a statistically optimal fashion where each sensory source is weighted according to its precision. This integration scheme is statistically optimal because it theoretically results in unbiased perceptual estimates with the highest precision possible. There is a current lack of consensus about how the nervous system processes multiple sensory cues to elapsed time. In order to shed light upon this, we adopt a computational approach to pinpoint the integration strategy underlying duration estimation of audio/visual stimuli. One of the assumptions of our computational approach is that the multisensory signals redundantly specify the same stimulus property. Our results clearly show that despite claims to the contrary, perceived duration is the result of an optimal weighting process, similar to that adopted for estimates of space. That is, participants weight the audio and visual information to arrive at the most precise, single duration estimate possible. The work also disentangles how different integration strategies - i.e. considering the time of onset/offset of signals - might alter the final estimate. As such we provide the first concrete evidence of an optimal integration strategy in human duration estimates.
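
    The optimal weighting referred to here is the standard inverse-variance (maximum-likelihood) cue combination rule; a minimal Python sketch with hypothetical numbers:

      import numpy as np

      def optimal_duration_estimate(d_audio, sigma_audio, d_visual, sigma_visual):
          """Reliability-weighted combination of an auditory and a visual duration cue.

          Each cue is weighted by its inverse variance; the fused estimate is at
          least as precise as the better single cue."""
          w_a = 1.0 / sigma_audio ** 2
          w_v = 1.0 / sigma_visual ** 2
          d_hat = (w_a * d_audio + w_v * d_visual) / (w_a + w_v)
          sigma_hat = np.sqrt(1.0 / (w_a + w_v))
          return d_hat, sigma_hat

      # Hypothetical cues: a precise auditory interval and a noisier visual one.
      print(optimal_duration_estimate(0.50, 0.05, 0.60, 0.10))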

  11. Evaluation of a timing integrated circuit architecture for continuous crystal and SiPM based PET systems

    International Nuclear Information System (INIS)

    Monzo, J M; Ros, A; Herrero-Bosch, V; Perino, I V; Aliaga, R J; Gadea-Girones, R; Colom-Palero, R J

    2013-01-01

    Improving timing resolution in positron emission tomography (PET), thus having fine time information of the detected pulses, is important to increase the reconstructed images signal to noise ratio (SNR) [1]. In the present work, an integrated circuit topology for time extraction of the incoming pulses is evaluated. An accurate simulation including the detector physics and the electronics with different configurations has been developed. The selected architecture is intended for a PET system based on a continuous scintillation crystal attached to a SiPM array. The integrated circuit extracts the time stamp from the first few photons generated when the gamma-ray interacts with the scintillator, thus obtaining the best time resolution. To get the time stamp from the detected pulses, a time to digital converter (TDC) array based architecture has been proposed as in [2] or [3]. The TDC input stage uses a current comparator to transform the analog signal into a digital signal. Individually configurable trigger levels allow us to avoid false triggers due to signal noise. Using a TDC per SiPM configuration results in a very area consuming integrated circuit. One solution to this problem is to join several SiPM outputs to one TDC. This reduces the number of TDCs but, on the other hand, the first photons will be more difficult to be detected. For this reason, it is important to simulate how the time resolution is degraded when the number of TDCs is reduced. Following this criteria, the best configuration will be selected considering the trade-off between achievable time resolution and the cost per chip. A simulation is presented that uses Geant4 for simulation of the physics process and, for the electronic blocks, spice and Matlab. The Geant4 stage simulates the gamma-ray interaction with the scintillator, the photon shower generation and the first stages of the SiPM. The electronics simulation includes an electrical model of the SiPM array and all the integrated circuitry

  12. On the mixed discretization of the time domain magnetic field integral equation

    KAUST Repository

    Ulku, Huseyin Arda

    2012-09-01

    Time domain magnetic field integral equation (MFIE) is discretized using divergence-conforming Rao-Wilton-Glisson (RWG) and curl-conforming Buffa-Christiansen (BC) functions as spatial basis and testing functions, respectively. The resulting mixed discretization scheme, unlike the classical scheme which uses RWG functions as both basis and testing functions, is proper: Testing functions belong to dual space of the basis functions. Numerical results demonstrate that the marching on-in-time (MOT) solution of the mixed discretized MFIE yields more accurate results than that of classically discretized MFIE. © 2012 IEEE.

  13. Integration of image exposure time into a modified laser speckle imaging method

    Energy Technology Data Exchange (ETDEWEB)

    Ramírez-San-Juan, J C; Salazar-Hermenegildo, N; Ramos-Garcia, R; Munoz-Lopez, J [Optics Department, INAOE, Puebla (Mexico); Huang, Y C [Department of Electrical Engineering and Computer Science, University of California, Irvine, CA (United States); Choi, B, E-mail: jcram@inaoep.m [Beckman Laser Institute and Medical Clinic, University of California, Irvine, CA (United States)

    2010-11-21

    Speckle-based methods have been developed to characterize tissue blood flow and perfusion. One such method, called modified laser speckle imaging (mLSI), enables computation of blood flow maps with relatively high spatial resolution. Although it is known that the sensitivity and noise in LSI measurements depend on image exposure time, a fundamental disadvantage of mLSI is that it does not take into account this parameter. In this work, we integrate the exposure time into the mLSI method and provide experimental support of our approach with measurements from an in vitro flow phantom.

  14. Integration of image exposure time into a modified laser speckle imaging method

    International Nuclear Information System (INIS)

    Ramírez-San-Juan, J C; Salazar-Hermenegildo, N; Ramos-Garcia, R; Munoz-Lopez, J; Huang, Y C; Choi, B

    2010-01-01

    Speckle-based methods have been developed to characterize tissue blood flow and perfusion. One such method, called modified laser speckle imaging (mLSI), enables computation of blood flow maps with relatively high spatial resolution. Although it is known that the sensitivity and noise in LSI measurements depend on image exposure time, a fundamental disadvantage of mLSI is that it does not take into account this parameter. In this work, we integrate the exposure time into the mLSI method and provide experimental support of our approach with measurements from an in vitro flow phantom.
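
    The exact mLSI formulation is not reproduced in this record. As a hedged illustration of why exposure time matters, the Python sketch below computes the local speckle contrast K = sigma/mean in a sliding window and the simple exposure-time-normalized flow index 1/(T*K^2), a first-order surrogate for flow speed that is common in laser speckle imaging but is not the authors' mLSI expression.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def speckle_flow_index(raw_image, exposure_time_s, window=7):
          """Exposure-time-normalized flow index 1/(T*K^2) from a raw speckle image.

          K is the local speckle contrast (std/mean) in a sliding window."""
          img = raw_image.astype(float)
          mean = uniform_filter(img, window)
          mean_sq = uniform_filter(img ** 2, window)
          var = np.clip(mean_sq - mean ** 2, 0.0, None)
          contrast_sq = var / np.clip(mean ** 2, 1e-12, None)
          return 1.0 / (exposure_time_s * np.clip(contrast_sq, 1e-12, None))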

  15. Integral ceramic superstructure evaluation using time domain optical coherence tomography

    Science.gov (United States)

    Sinescu, Cosmin; Bradu, Adrian; Topala, Florin I.; Negrutiu, Meda Lavinia; Duma, Virgil-Florin; Podoleanu, Adrian G.

    2014-02-01

    Optical Coherence Tomography (OCT) is a non-invasive low coherence interferometry technique that includes several technologies (and the corresponding devices and components), such as illumination and detection, interferometry, scanning, adaptive optics, microscopy and endoscopy. Among its wide range of applications, we consider in this paper a critical aspect in dentistry - to be investigated with a Time Domain (TD) OCT system. The clinical situation of an edentulous mandible is considered; it can be solved by inserting 2 to 6 implants. On these implants a mesostructure will be manufactured and on it a superstructure is needed. This superstructure can be integral ceramic; in this case material defects could be trapped inside the ceramic layers and those defects could lead to fractures of the entire superstructure. In this paper we demonstrate that a TD-OCT imaging system has the potential to properly evaluate the presence of defects inside the ceramic layers and that those defects can be fixed before inserting the prosthesis inside the oral cavity. Three integral ceramic superstructures were developed by using a CAD/CAM technology. After the milling, the ceramic layers were applied on the core. All three samples were evaluated by a TD-OCT system working at 1300 nm. For two of the superstructures evaluated, no defects were found in the most stressed areas. The third superstructure presented four ceramic defects in the mentioned areas. Because of those defects the superstructure may fracture. The integral ceramic prosthesis was sent back to the dental laboratory to fix the problems related to the material defects found. Thus, TD-OCT proved to be a valuable method for diagnosing ceramic defects inside integral ceramic superstructures in order to prevent fractures at this level.

  16. Measuring fragmentation in dissociative identity disorder: the integration measure and relationship to switching and time in therapy

    Directory of Open Access Journals (Sweden)

    Margaret Rose Barlow

    2014-01-01

    Background: Some people with dissociative identity disorder (DID) have very little communication or awareness among the parts of their identity, while others experience a great deal of cooperation among alternate identities. Previous research on this topic has been sparse. Currently, there is no empirical measure of integration versus fragmentation in a person with DID. In this study, we report the development of such a measure. Objective: The goal of this study was to pilot the integration measure (IM) and to address its psychometric properties and relationships to other measures. The IM is the first standardized measure of integration in DID. Method: Eleven women with DID participated in an experiment that included a variety of tasks. They filled out questionnaires about trauma and dissociation as well as the IM. They also provided verbal results about switching among alternate identities during the study sessions. Results: Participants switched among identities an average of 5.8 times during the first session, and switching was highly correlated with trauma. Integration was related to switching, though this relationship may be non-linear. Integration was not related to time in psychotherapy. Conclusions: The IM provides a useful beginning to quantify and study integration and fragmentation in DID. Directions for future research are also discussed, including expanding the IM from this pilot. The IM may be useful in treatment settings to assess progress or change over time.

  17. Simultaneous measurement of the maximum oscillation amplitude and the transient decay time constant of the QCM reveals stiffness changes of the adlayer.

    Science.gov (United States)

    Marxer, C Galli; Coen, M Collaud; Bissig, H; Greber, U F; Schlapbach, L

    2003-10-01

    Interpretation of adsorption kinetics measured with a quartz crystal microbalance (QCM) can be difficult for adlayers undergoing modification of their mechanical properties. We have studied the behavior of the oscillation amplitude, A(0), and the decay time constant, tau, of quartz during adsorption of proteins and cells, by use of a home-made QCM. We are able to measure simultaneously the frequency, f, the dissipation factor, D, the maximum amplitude, A(0), and the transient decay time constant, tau, every 300 ms in liquid, gaseous, or vacuum environments. This analysis enables adsorption and modification of liquid/mass properties to be distinguished. Moreover the surface coverage and the stiffness of the adlayer can be estimated. These improvements promise to increase the appeal of QCM methodology for any applications measuring intimate contact of a dynamic material with a solid surface.

  18. 49 CFR 230.24 - Maximum allowable stress.

    Science.gov (United States)

    2010-10-01

    § 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...

  19. Integration of RNA-Seq and RPPA data for survival time prediction in cancer patients.

    Science.gov (United States)

    Isik, Zerrin; Ercan, Muserref Ece

    2017-10-01

    Integration of several types of patient data in a computational framework can accelerate the identification of more reliable biomarkers, especially for prognostic purposes. This study aims to identify biomarkers that can successfully predict the potential survival time of a cancer patient by integrating the transcriptomic (RNA-Seq), proteomic (RPPA), and protein-protein interaction (PPI) data. The proposed method -RPBioNet- employs a random walk-based algorithm that works on a PPI network to identify a limited number of protein biomarkers. Later, the method uses gene expression measurements of the selected biomarkers to train a classifier for the survival time prediction of patients. RPBioNet was applied to classify kidney renal clear cell carcinoma (KIRC), glioblastoma multiforme (GBM), and lung squamous cell carcinoma (LUSC) patients based on their survival time classes (long- or short-term). The RPBioNet method correctly identified the survival time classes of patients with between 66% and 78% average accuracy for three data sets. RPBioNet operates with only 20 to 50 biomarkers and can achieve on average 6% higher accuracy compared to the closest alternative method, which uses only RNA-Seq data in the biomarker selection. Further analysis of the most predictive biomarkers highlighted genes that are common for both cancer types, as they may be driver proteins responsible for cancer progression. The novelty of this study is the integration of a PPI network with mRNA and protein expression data to identify more accurate prognostic biomarkers that can be used for clinical purposes in the future. Copyright © 2017 Elsevier Ltd. All rights reserved.
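
    The record describes RPBioNet only as a random-walk-based algorithm on the PPI network; the random walk with restart sketched below is a common form of such a propagation step and is shown purely as a hedged illustration (the restart probability and seed choice are assumptions, and the subsequent classifier training is not sketched).

      import numpy as np

      def random_walk_with_restart(adjacency, seeds, restart=0.5, tol=1e-8, max_iter=1000):
          """Stationary visiting probabilities of a random walk with restart on a PPI graph.

          adjacency: (n, n) symmetric interaction matrix
          seeds:     indices of seed proteins; high-probability nodes are biomarker candidates"""
          n = adjacency.shape[0]
          col_sums = adjacency.sum(axis=0)
          W = adjacency / np.where(col_sums == 0, 1.0, col_sums)   # column-normalized
          p0 = np.zeros(n)
          p0[list(seeds)] = 1.0 / len(seeds)
          p = p0.copy()
          for _ in range(max_iter):
              p_new = (1.0 - restart) * (W @ p) + restart * p0
              if np.abs(p_new - p).sum() < tol:
                  break
              p = p_new
          return p_new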

  20. Integrated survival analysis using an event-time approach in a Bayesian framework.

    Science.gov (United States)

    Walsh, Daniel P; Dreitz, Victoria J; Heisey, Dennis M

    2015-02-01

    Event-time or continuous-time statistical approaches have been applied throughout the biostatistical literature and have led to numerous scientific advances. However, these techniques have traditionally relied on knowing failure times. This has limited application of these analyses, particularly, within the ecological field where fates of marked animals may be unknown. To address these limitations, we developed an integrated approach within a Bayesian framework to estimate hazard rates in the face of unknown fates. We combine failure/survival times from individuals whose fates are known and times of which are interval-censored with information from those whose fates are unknown, and model the process of detecting animals with unknown fates. This provides the foundation for our integrated model and permits necessary parameter estimation. We provide the Bayesian model, its derivation, and use simulation techniques to investigate the properties and performance of our approach under several scenarios. Lastly, we apply our estimation technique using a piece-wise constant hazard function to investigate the effects of year, age, chick size and sex, sex of the tending adult, and nesting habitat on mortality hazard rates of the endangered mountain plover (Charadrius montanus) chicks. Traditional models were inappropriate for this analysis because fates of some individual chicks were unknown due to failed radio transmitters. Simulations revealed biases of posterior mean estimates were minimal (≤ 4.95%), and posterior distributions behaved as expected with RMSE of the estimates decreasing as sample sizes, detection probability, and survival increased. We determined mortality hazard rates for plover chicks were highest at birth weights and/or whose nest was within agricultural habitats. Based on its performance, our approach greatly expands the range of problems for which event-time analyses can be used by eliminating the need for having completely known fate data.
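
    The Bayesian machinery itself is not sketched here; the piece-wise constant hazard function the authors use maps to survival probability as S(t) = exp(-sum_i h_i * d_i), where d_i is the time spent in interval i. A small Python illustration with hypothetical chick-survival numbers:

      import numpy as np

      def survival_probability(t, breakpoints, hazards):
          """S(t) under a piecewise-constant hazard.

          breakpoints: increasing interval start times, e.g. [0, 7, 14] (days)
          hazards:     hazard rate within each interval (same length)"""
          edges = np.append(breakpoints, np.inf)
          cum = 0.0
          for h, lo, hi in zip(hazards, edges[:-1], edges[1:]):
              cum += h * max(0.0, min(t, hi) - lo)
          return np.exp(-cum)

      # Hypothetical example: higher hazard during the first week after hatching.
      print(survival_probability(21.0, [0.0, 7.0], [0.05, 0.01]))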

  1. Integrated survival analysis using an event-time approach in a Bayesian framework

    Science.gov (United States)

    Walsh, Daniel P.; Dreitz, VJ; Heisey, Dennis M.

    2015-01-01

    Event-time or continuous-time statistical approaches have been applied throughout the biostatistical literature and have led to numerous scientific advances. However, these techniques have traditionally relied on knowing failure times. This has limited application of these analyses, particularly, within the ecological field where fates of marked animals may be unknown. To address these limitations, we developed an integrated approach within a Bayesian framework to estimate hazard rates in the face of unknown fates. We combine failure/survival times from individuals whose fates are known and times of which are interval-censored with information from those whose fates are unknown, and model the process of detecting animals with unknown fates. This provides the foundation for our integrated model and permits necessary parameter estimation. We provide the Bayesian model, its derivation, and use simulation techniques to investigate the properties and performance of our approach under several scenarios. Lastly, we apply our estimation technique using a piece-wise constant hazard function to investigate the effects of year, age, chick size and sex, sex of the tending adult, and nesting habitat on mortality hazard rates of the endangered mountain plover (Charadrius montanus) chicks. Traditional models were inappropriate for this analysis because fates of some individual chicks were unknown due to failed radio transmitters. Simulations revealed biases of posterior mean estimates were minimal (≤ 4.95%), and posterior distributions behaved as expected with RMSE of the estimates decreasing as sample sizes, detection probability, and survival increased. We determined mortality hazard rates for plover chicks were highest at birth weights and/or whose nest was within agricultural habitats. Based on its performance, our approach greatly expands the range of problems for which event-time analyses can be used by eliminating the need for having completely known fate data.

  2. Verification of maximum impact force for interim storage cask for the Fast Flux Testing Facility

    International Nuclear Information System (INIS)

    Chen, W.W.; Chang, S.J.

    1996-01-01

    The objective of this paper is to perform an impact analysis of the Interim Storage Cask (ISC) of the Fast Flux Test Facility (FFTF) for a 4-ft end drop. The ISC is a concrete cask used to store spent nuclear fuels. The analysis is to justify the impact force calculated by General Atomics (General Atomics, 1994) using the ILMOD computer code. ILMOD determines the maximum force developed by the concrete crushing which occurs when the drop energy has been absorbed. The maximum force, multiplied by the dynamic load factor (DLF), was used to determine the maximum g-level on the cask during a 4-ft end drop accident onto the heavily reinforced FFTF Reactor Service Building's concrete surface. For the analysis, this surface was assumed to be unyielding and the cask absorbed all the drop energy. This conservative assumption simplified the modeling used to qualify the cask's structural integrity for this accident condition

  3. On the use of J-integral and modified J-integral as measures of elastic-plastic fracture toughness

    International Nuclear Information System (INIS)

    Davis, D.A.; Hays, R.A.; Hackett, E.M.; Joyce, J.A.

    1988-01-01

    J-R Curve tests were conducted on 1/2T, 1T and 2T compact specimens of materials having J IC values ranging from 150 in-1b/sq in to over 2600 in-lb/sq in. These materials were chosen such that some would exceed the maximum crack length criterion of ASTM E1152-87 prior to reaching the maximum J criterion (3-Ni steel, 5000 series A1) and some would exceed the maximum J criterion first (A533B, A710). The elastic-plastic fracture behavior of these materials was examined using both the deformation theory J-integral (J D ) and the modified J-integral (J M ). The J-R curve testing was performed to very large values of crack opening displacement (COD) where the crack growth was typically 75% of the original remaining ligament. The results of this work suggest that the J D -R curves exhibit no specimen size dependence to crack extensions far in excess of the E1152 allowables. The J M -R curves calculated for the same specimens show a significant amount of specimen size dependence which becomes larger as the material toughness decreases. This work suggests that it is premature to utilize the modified J-integral in assessing the flaw tolerance of structures. (author)

  4. Time to reach tacrolimus maximum blood concentration, mean residence time, and acute renal allograft rejection: an open-label, prospective, pharmacokinetic study in adult recipients.

    Science.gov (United States)

    Kuypers, Dirk R J; Vanrenterghem, Yves

    2004-11-01

    The aims of this study were to determine whether disposition-related pharmacokinetic parameters such as T(max) and mean residence time (MRT) could be used as predictors of clinical efficacy of tacrolimus in renal transplant recipients, and to what extent these parameters would be influenced by clinical variables. We previously demonstrated, in a prospective pharmacokinetic study in de novo renal allograft recipients, that patients who experienced early acute rejection did not differ from patients free from rejection in terms of tacrolimus pharmacokinetic exposure parameters (dose interval AUC, preadministration trough blood concentration, C(max), dose). However, recipients with acute rejection reached mean (SD) tacrolimus T(max) significantly faster than those who were free from rejection (0.96 [0.56] hour vs 1.77 [1.06] hours). Because neither clearance nor T(1/2) could explain this unusual finding, we used data from the previous study to calculate MRT from the concentration-time curves. As part of the previous study, 100 patients (59 male, 41 female; mean [SD] age, 51.4 [13.8] years; age range, 20-75 years) were enrolled. The calculated MRT was significantly shorter in recipients with acute allograft rejection (11.32 [0.31] hours vs 11.52 [0.28] hours; P = 0.02) and, like T(max), it was an independent risk factor for acute rejection in a multivariate logistic regression model (odds ratio, 0.092 [95% CI, 0.014-0.629]; P = 0.01). Analyzing the impact of demographic, transplantation-related, and biochemical variables on MRT, we found that increasing serum albumin and hematocrit concentrations were associated with a prolonged MRT, and a shorter calculated MRT was associated with a higher incidence of early acute graft rejection. These findings suggest that a shorter transit time of tacrolimus in certain tissue compartments, rather than failure to obtain a maximum absolute tacrolimus blood concentration, might lead to inadequate immunosuppression early after transplantation.
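
    MRT is a standard non-compartmental quantity: the ratio of the area under the first-moment curve to the area under the concentration-time curve (AUMC/AUC). The sketch below computes it by trapezoidal integration over a dose interval; the profile values are hypothetical, and the study's own calculation may have differed in details such as extrapolation.

      import numpy as np

      def mean_residence_time(t, conc):
          """Non-compartmental MRT = AUMC / AUC from a concentration-time profile.

          t:    sampling times within the dose interval (h)
          conc: whole-blood concentrations at those times
          (trapezoidal rule, no extrapolation beyond the last sample)"""
          auc = np.trapz(conc, t)
          aumc = np.trapz(t * conc, t)
          return aumc / auc

      # Hypothetical 12-hour tacrolimus profile
      t = np.array([0, 0.5, 1, 2, 4, 6, 8, 12], dtype=float)
      c = np.array([5, 18, 25, 20, 14, 10, 8, 6], dtype=float)
      print(mean_residence_time(t, c))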

  5. Timing of glacier advances and climate in the High Tatra Mountains (Western Carpathians) during the Last Glacial Maximum

    Science.gov (United States)

    Makos, Michał; Dzierżek, Jan; Nitychoruk, Jerzy; Zreda, Marek

    2014-07-01

    During the Last Glacial Maximum (LGM), long valley glaciers developed on the northern and southern sides of the High Tatra Mountains, Poland and Slovakia. Chlorine-36 exposure dating of moraine boulders suggests two major phases of moraine stabilization, at 26-21 ka (LGM I - maximum) and at 18 ka (LGM II). The dates suggest a significantly earlier maximum advance on the southern side of the range. Reconstructing the geometry of four glaciers in the Sucha Woda, Pańszczyca, Mlynicka and Velicka valleys allowed determining their equilibrium-line altitudes (ELAs) at 1460, 1460, 1650 and 1700 m asl, respectively. Based on a positive degree-day model, the mass balance and climatic parameter anomaly (temperature and precipitation) has been constrained for LGM I advance. Modeling results indicate slightly different conditions between northern and southern slopes. The N-S ELA gradient finds confirmation in slightly higher temperature (at least 1 °C) or lower precipitation (15%) on the south-facing glaciers during LGM I. The precipitation distribution over the High Tatra Mountains indicates potentially different LGM atmospheric circulation than at the present day, with reduced northwesterly inflow and increased southerly and westerly inflows of moist air masses.
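
    The positive degree-day approach mentioned above relates melt to the sum of daily mean temperatures above 0 degrees C through a degree-day factor; at the ELA this melt balances the annual accumulation, which is how temperature and precipitation anomalies can be constrained jointly. A minimal Python sketch (the degree-day factor value is an assumption, not the study's calibration):

      import numpy as np

      def pdd_melt(daily_temp_c, ddf_mm_per_degday=4.0):
          """Annual melt (mm w.e.) from a positive degree-day model.

          daily_temp_c:      daily mean air temperatures at the glacier surface (deg C)
          ddf_mm_per_degday: degree-day factor (illustrative value; snow and ice differ)"""
          pdd = np.sum(np.clip(daily_temp_c, 0.0, None))   # sum of positive degree-days
          return ddf_mm_per_degday * pdd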

  6. Boundary-integral equation formulation for time-dependent inelastic deformation in metals

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, V; Mukherjee, S

    1977-01-01

    The mathematical structure of various constitutive relations proposed in recent years for representing time-dependent inelastic deformation behavior of metals at elevated temperatures has certain features which permit a simple formulation of the three-dimensional inelasticity problem in terms of real time rates. A direct formulation of the boundary-integral equation method in terms of rates is discussed for the analysis of time-dependent inelastic deformation of arbitrarily shaped three-dimensional metallic bodies subjected to arbitrary mechanical and thermal loading histories and obeying constitutive relations of the kind mentioned above. The formulation is based on the assumption of infinitesimal deformations. Several illustrative examples involving creep of thick-walled spheres, long thick-walled cylinders, and rotating discs are discussed. The implementation of the method appears to be far easier than analogous BIE formulations that have been suggested for elastoplastic problems.

  7. Integrable time-dependent Hamiltonians, solvable Landau-Zener models and Gaudin magnets

    Science.gov (United States)

    Yuzbashyan, Emil A.

    2018-05-01

    We solve the non-stationary Schrödinger equation for several time-dependent Hamiltonians, such as the BCS Hamiltonian with an interaction strength inversely proportional to time, periodically driven BCS and linearly driven inhomogeneous Dicke models as well as various multi-level Landau-Zener tunneling models. The latter are Demkov-Osherov, bow-tie, and generalized bow-tie models. We show that these Landau-Zener problems and their certain interacting many-body generalizations map to Gaudin magnets in a magnetic field. Moreover, we demonstrate that the time-dependent Schrödinger equation for the above models has a similar structure and is integrable with a similar technique as Knizhnik-Zamolodchikov equations. We also discuss applications of our results to the problem of molecular production in an atomic Fermi gas swept through a Feshbach resonance and to the evaluation of the Landau-Zener transition probabilities.

  8. A practical exact maximum compatibility algorithm for reconstruction of recent evolutionary history.

    Science.gov (United States)

    Cherry, Joshua L

    2017-02-23

    Maximum compatibility is a method of phylogenetic reconstruction that is seldom applied to molecular sequences. It may be ideal for certain applications, such as reconstructing phylogenies of closely-related bacteria on the basis of whole-genome sequencing. Here I present an algorithm that rapidly computes phylogenies according to a compatibility criterion. Although based on solutions to the maximum clique problem, this algorithm deals properly with ambiguities in the data. The algorithm is applied to bacterial data sets containing up to nearly 2000 genomes with several thousand variable nucleotide sites. Run times are several seconds or less. Computational experiments show that maximum compatibility is less sensitive than maximum parsimony to the inclusion of nucleotide data that, though derived from actual sequence reads, has been identified as likely to be misleading. Maximum compatibility is a useful tool for certain phylogenetic problems, such as inferring the relationships among closely-related bacteria from whole-genome sequence data. The algorithm presented here rapidly solves fairly large problems of this type, and provides robustness against misleading characters that can pollute large-scale sequencing data.
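
    The clique connection can be made concrete for binary characters: two characters are compatible if and only if at most three of the four possible state pairs occur across the taxa (the four-gamete test), and a maximum compatible character set is a maximum clique in the compatibility graph. The sketch below uses networkx and ignores the ambiguity handling that the paper's algorithm adds.

      from itertools import combinations
      import networkx as nx

      def compatible(col_a, col_b):
          """Four-gamete test for two binary characters (site patterns over the taxa)."""
          return len(set(zip(col_a, col_b))) <= 3

      def max_compatible_characters(columns):
          """Indices of a largest set of mutually compatible characters (maximum clique)."""
          g = nx.Graph()
          g.add_nodes_from(range(len(columns)))
          for i, j in combinations(range(len(columns)), 2):
              if compatible(columns[i], columns[j]):
                  g.add_edge(i, j)
          # Enumerating maximal cliques is exponential in the worst case; fine for a sketch.
          return max(nx.find_cliques(g), key=len)

      # columns: list of site patterns, e.g. [(0, 0, 1, 1), (0, 1, 1, 0), ...]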

  9. Effects of attitude dissimilarity and time on social integration : A longitudinal panel study

    NARCIS (Netherlands)

    Van der Vegt, G.S.

    2002-01-01

    A longitudinal panel study in 25 work groups of elementary school teachers examined the effect of attitudinal dissimilarity and time on social integration across a 9-month period. In line with the prediction based on both the similarity-attraction approach and social identity theory, cross-lagged

  10. New readout integrated circuit using continuous time fixed pattern noise correction

    Science.gov (United States)

    Dupont, Bertrand; Chammings, G.; Rapellin, G.; Mandier, C.; Tchagaspanian, M.; Dupont, Benoit; Peizerat, A.; Yon, J. J.

    2008-04-01

    LETI has been involved in IRFPA development since 1978; the design department (LETI/DCIS) has focused its work on new ROIC architectures for many years. The trend is to integrate advanced functions into the CMOS design to achieve cost-efficient sensor production. The thermal imaging market increasingly demands systems with instant-on capability and low power consumption. The purpose of this paper is to present the latest developments in continuous-time fixed pattern noise correction. Several architectures are proposed; some are based on hardwired digital processing and some are purely analog. Both use scene-based algorithms. Moreover, a new method is proposed for simultaneous correction of pixel offsets and sensitivities. In this scope, a new readout integrated circuit architecture has been implemented in 0.18 μm CMOS technology. The specification and the application of the ROIC are discussed in detail.

  11. Time-integrated CP violation measurements in the B mesons system at the LHCb experiment

    CERN Document Server

    Cardinale, R

    2016-01-01

    Time-integrated CP violation measurements in the B meson system provide information for testing the CKM picture of CP violation in the Standard Model. A review of recent results from the LHCb experiment is presented.

  12. An Integrated Theory of Prospective Time Interval Estimation: The Role of Cognition, Attention, and Learning

    Science.gov (United States)

    Taatgen, Niels A.; van Rijn, Hedderik; Anderson, John

    2007-01-01

    A theory of prospective time perception is introduced and incorporated as a module in an integrated theory of cognition, thereby extending existing theories and allowing predictions about attention and learning. First, a time perception module is established by fitting existing datasets (interval estimation and bisection and impact of secondary…

  13. LIBOR troubles: Anomalous movements detection based on maximum entropy

    Science.gov (United States)

    Bariviera, Aurelio F.; Martín, María T.; Plastino, Angelo; Vampa, Victoria

    2016-05-01

    According to the definition of the London Interbank Offered Rate (LIBOR), contributing banks should give fair estimates of their own borrowing costs in the interbank market. Between 2007 and 2009, several banks made inappropriate submissions of LIBOR, sometimes motivated by profit-seeking from their trading positions. In 2012, several newspapers' articles began to cast doubt on LIBOR integrity, leading surveillance authorities to conduct investigations on banks' behavior. Such procedures resulted in severe fines imposed to involved banks, who recognized their financial inappropriate conduct. In this paper, we uncover such unfair behavior by using a forecasting method based on the Maximum Entropy principle. Our results are robust against changes in parameter settings and could be of great help for market surveillance.

  14. Evaluating Maximum Wind Energy Exploitation in Active Distribution Networks

    DEFF Research Database (Denmark)

    Siano, Pierluigi; Chen, Peiyuan; Chen, Zhe

    2010-01-01

    The increased spreading of distributed and renewable generation requires moving towards active management of distribution networks. In this paper, in order to evaluate maximum wind energy exploitation in active distribution networks, a method based on a multi-period optimal power flow (OPF......) analysis is proposed. Active network management schemes such as coordinated voltage control, energy curtailment and power factor control are integrated in the method in order to investigate their impacts on the maximization of wind energy exploitation. Some case studies, using real data from a Danish...... distribution system, confirmed the effectiveness of the proposed method in evaluating the optimal applications of active management schemes to increase wind energy harvesting without costly network reinforcement for the connection of wind generation....

  15. An extension theory-based maximum power tracker using a particle swarm optimization algorithm

    International Nuclear Information System (INIS)

    Chao, Kuei-Hsiang

    2014-01-01

    Highlights: • We propose an adaptive maximum power point tracking (MPPT) approach for PV systems. • Transient and steady-state tracking performance is improved. • The proposed MPPT can automatically tune the tracking step size along a P–V curve. • A PSO algorithm is used to determine the weighting values of extension theory. - Abstract: The aim of this work is to present an adaptive maximum power point tracking (MPPT) approach for photovoltaic (PV) power generation systems. By integrating extension theory with the conventional perturb and observe method, a maximum power point (MPP) tracker is made able to automatically tune its tracking step size by way of category recognition along a P–V characteristic curve. Accordingly, the transient and steady-state performance of the tracking process is improved. Furthermore, an optimization approach is proposed on the basis of a particle swarm optimization (PSO) algorithm for the complexity reduction in the determination of weighting values. At the end of this work, a simulated improvement in the tracking performance is experimentally validated by an MPP tracker with a programmable system-on-chip (PSoC) based controller.

  16. A clinically integrated curriculum in evidence-based medicine for just-in-time learning through on-the-job training: the EU-EBM project.

    Science.gov (United States)

    Coppus, Sjors F P J; Emparanza, Jose I; Hadley, Julie; Kulier, Regina; Weinbrenner, Susanne; Arvanitis, Theodoros N; Burls, Amanda; Cabello, Juan B; Decsi, Tamas; Horvath, Andrea R; Kaczor, Marcin; Zanrei, Gianni; Pierer, Karin; Stawiarz, Katarzyna; Kunz, Regina; Mol, Ben W J; Khan, Khalid S

    2007-11-27

    Over the last years key stake holders in the healthcare sector have increasingly recognised evidence based medicine (EBM) as a means to improving the quality of healthcare. However, there is considerable uncertainty about the best way to disseminate basic knowledge of EBM. As a result, huge variation in EBM educational provision, setting, duration, intensity, content, and teaching methodology exists across Europe and worldwide. Most courses for health care professionals are delivered outside the work context ('stand alone') and lack adaptation to the specific needs for EBM at the learners' workplace. Courses with modern 'adaptive' EBM teaching that employ principles of effective continuing education might fill that gap. We aimed to develop a course for post-graduate education which is clinically integrated and allows maximum flexibility for teachers and learners. A group of experienced EBM teachers, clinical epidemiologists, clinicians and educationalists from institutions from eight European countries participated. We used an established methodology of curriculum development to design a clinically integrated EBM course with substantial components of e-learning. An independent European steering committee provided input into the process. We defined explicit learning objectives about knowledge, skills, attitudes and behaviour for the five steps of EBM. A handbook guides facilitator and learner through five modules with clinical and e-learning components. Focussed activities and targeted assignments round off the learning process, after which each module is formally assessed. The course is learner-centred, problem-based, integrated with activities in the workplace and flexible. When successfully implemented, the course is designed to provide just-in-time learning through on-the-job-training, with the potential for teaching and learning to directly impact on practice.

  17. Design of integral magnetic field sensor

    International Nuclear Information System (INIS)

    Ma Liang; Cheng Yinhui; Wu Wei; Li Baozhong; Zhou Hui; Li Jinxi; Zhu Meng

    2010-01-01

    The magnetic field is one of the important physical parameters in the measurement of pulsed EMP. In this report we studied anti-interference, high-sensitivity magnetic-field measurement techniques. Semi-rigid cables were bent into ring antennas so that the antenna was shielded from electric-field interference and had little inductance. To obtain high sensitivity, an operational transconductance amplifier was used to build an active integrator, and an optical-electronic transfer module was designed to improve the anti-interference capability of the magnetic-field measurement system. A complete magnetic-field measurement system, composed of the antenna, the integrator, the optical-electric transfer module and other components, was built and calibrated in a coaxial TEM cell. The results indicate that the measurement system's rise-time response is as fast as 2.5 ns and that the output width at 90% of the pulse maximum is wider than 200 ns. (authors)
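
    The physics behind the ring antenna plus integrator is Faraday's law: the loop voltage is proportional to dB/dt, so integrating it recovers B. A minimal Python sketch of that post-processing step (turn count, loop area and sign convention are illustrative; in the actual instrument the integration is performed by the OTA-based active integrator):

      import numpy as np

      def b_field_from_loop_voltage(v_loop, dt, n_turns, area_m2):
          """Flux density B(t) reconstructed from B-dot loop voltage samples.

          Faraday's law: v = -N * A * dB/dt, hence B(t) = -(1/(N*A)) * integral(v dt)."""
          return -np.cumsum(v_loop) * dt / (n_turns * area_m2)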

  18. Artificial Neural Network In Maximum Power Point Tracking Algorithm Of Photovoltaic Systems

    Directory of Open Access Journals (Sweden)

    Modestas Pikutis

    2014-05-01

    Full Text Available Scientists are constantly looking for ways to improve the efficiency of solar cells. The efficiency of solar cells available to the general public is up to 20%. If a slow controller, or a controller that cannot hold the operating point at the maximum power point of the solar modules, is used, part of the solar energy remains unused and the capacity of the solar power plant is significantly reduced. Various maximum power point tracking algorithms have been created, but most are either slow or prone to errors. Artificial neural networks (ANN) are increasingly mentioned in the literature for the maximum power point tracking process, in order to improve controller performance. A self-learning artificial neural network combined with the IncCond algorithm was used for maximum power point tracking in the developed solar power plant model, and the corresponding control algorithm was created. The solar power plant model is implemented in the Matlab/Simulink environment.
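
    The tracking rule named above (IncCond, incremental conductance) can be stated compactly: at the maximum power point dP/dV = 0, which is equivalent to dI/dV = -I/V. A minimal sketch of one update step is given below, in Python rather than the Matlab/Simulink environment used in the study; the step size and variable names are placeholders.

    ```python
    def inccond_step(v, i, v_prev, i_prev, v_ref, step=0.5):
        """One incremental-conductance update of the reference operating voltage v_ref.

        The sign of (dI/dV + I/V) tells us on which side of the maximum power point we are.
        """
        dv, di = v - v_prev, i - i_prev
        if dv == 0:
            if di > 0:
                v_ref += step          # irradiance increased, move towards higher voltage
            elif di < 0:
                v_ref -= step
        else:
            incr = di / dv + i / v     # dI/dV + I/V
            if incr > 0:
                v_ref += step          # operating point is left of the MPP
            elif incr < 0:
                v_ref -= step          # operating point is right of the MPP
        return v_ref
    ```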

  19. Lake Basin Fetch and Maximum Length/Width

    Data.gov (United States)

    Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...

  20. Efficient heuristics for maximum common substructure search.

    Science.gov (United States)

    Englert, Péter; Kovács, Péter

    2015-05-26

    Maximum common substructure search is a computationally hard optimization problem with diverse applications in the field of cheminformatics, including similarity search, lead optimization, molecule alignment, and clustering. Most of these applications have strict constraints on running time, so heuristic methods are often preferred. However, the development of an algorithm that is both fast enough and accurate enough for most practical purposes is still a challenge. Moreover, in some applications, the quality of a common substructure depends not only on its size but also on various topological features of the one-to-one atom correspondence it defines. Two state-of-the-art heuristic algorithms for finding maximum common substructures have been implemented at ChemAxon Ltd., and effective heuristics have been developed to improve both their efficiency and the relevance of the atom mappings they provide. The implementations have been thoroughly evaluated and compared with existing solutions (KCOMBU and Indigo). The heuristics have been found to greatly improve the performance and applicability of the algorithms. The purpose of this paper is to introduce the applied methods and present the experimental results.
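
    The ChemAxon heuristics themselves are proprietary and not reproduced here, but the exact formulation they approximate, reducing maximum common (induced) substructure search to a maximum-clique search on the modular product of two labelled molecular graphs, can be sketched as follows. The toy graphs, labels and use of networkx are illustrative assumptions only.

    ```python
    import itertools
    import networkx as nx

    def modular_product(g1, g2):
        """Modular product graph; its maximum clique yields a maximum common induced subgraph."""
        prod = nx.Graph()
        for u, v in itertools.product(g1, g2):
            if g1.nodes[u]["label"] == g2.nodes[v]["label"]:
                prod.add_node((u, v))
        for (u1, v1), (u2, v2) in itertools.combinations(prod.nodes, 2):
            if u1 == u2 or v1 == v2:
                continue
            if g1.has_edge(u1, u2) == g2.has_edge(v1, v2):   # both bonded, or both not bonded
                prod.add_edge((u1, v1), (u2, v2))
        return prod

    # Toy "molecules": labelled paths standing in for a C-C-O and a C-C-C-O fragment
    g1, g2 = nx.path_graph(3), nx.path_graph(4)
    for g, labels in ((g1, "CCO"), (g2, "CCCO")):
        nx.set_node_attributes(g, dict(enumerate(labels)), "label")

    mapping = max(nx.find_cliques(modular_product(g1, g2)), key=len)
    print("common substructure atom pairs:", mapping)
    ```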

  1. On the quirks of maximum parsimony and likelihood on phylogenetic networks.

    Science.gov (United States)

    Bryant, Christopher; Fischer, Mareike; Linz, Simone; Semple, Charles

    2017-03-21

    Maximum parsimony is one of the most frequently-discussed tree reconstruction methods in phylogenetic estimation. However, in recent years it has become more and more apparent that phylogenetic trees are often not sufficient to describe evolution accurately. For instance, processes like hybridization or lateral gene transfer that are commonplace in many groups of organisms and result in mosaic patterns of relationships cannot be represented by a single phylogenetic tree. This is why phylogenetic networks, which can display such events, are becoming of more and more interest in phylogenetic research. It is therefore necessary to extend concepts like maximum parsimony from phylogenetic trees to networks. Several suggestions for possible extensions can be found in recent literature, for instance the softwired and the hardwired parsimony concepts. In this paper, we analyze the so-called big parsimony problem under these two concepts, i.e. we investigate maximum parsimonious networks and analyze their properties. In particular, we show that finding a softwired maximum parsimony network is possible in polynomial time. We also show that the set of maximum parsimony networks for the hardwired definition always contains at least one phylogenetic tree. Lastly, we investigate some parallels of parsimony to different likelihood concepts on phylogenetic networks. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Laboratory informatics tools integration strategies for drug discovery: integration of LIMS, ELN, CDS, and SDMS.

    Science.gov (United States)

    Machina, Hari K; Wild, David J

    2013-04-01

    There are technologies on the horizon that could dramatically change how informatics organizations design, develop, deliver, and support applications and data infrastructures to deliver maximum value to drug discovery organizations. Effective integration of data and laboratory informatics tools promises the ability of organizations to make better informed decisions about resource allocation during the drug discovery and development process and for more informed decisions to be made with respect to the market opportunity for compounds. We propose in this article a new integration model called ELN-centric laboratory informatics tools integration.

  3. Equilibrium and response properties of the integrate-and-fire neuron in discrete time

    Directory of Open Access Journals (Sweden)

    Moritz Helias

    2010-01-01

    Full Text Available The integrate-and-fire neuron with exponential postsynaptic potentials is a frequently employed model to study neural networks. Simulations in discrete time still offer the highest performance at moderate numerical errors, which makes them the first choice for long-term simulations of plastic networks. Here we extend the population density approach to investigate how the equilibrium and response properties of the leaky integrate-and-fire neuron are affected by time discretization. We present a novel analytical treatment of the boundary condition at threshold, taking both discretization of time and finite synaptic weights into account. We uncover an increased membrane potential density just below threshold as the decisive property that explains the deviations found between simulations and the classical diffusion approximation. Temporal discretization and finite synaptic weights both contribute to this effect. Our treatment improves the standard formula to calculate the neuron’s equilibrium firing rate. Direct solution of the Markov process describing the evolution of the membrane potential density confirms our analysis and yields a method to calculate the firing rate exactly. Knowing the shape of the membrane potential distribution near threshold enables us to devise the transient response properties of the neuron model to synaptic input. We find a pronounced non-linear fast response component that has not been described by the prevailing continuous-time theory for Gaussian white noise input.
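
    For readers unfamiliar with the model class discussed above, a minimal discrete-time simulation of a leaky integrate-and-fire neuron driven by Poisson input is sketched below; the parameters are illustrative, and the sketch does not attempt to reproduce the paper's population-density analysis or its threshold correction.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative parameters (not taken from the record above)
    dt, tau_m, theta, v_reset = 0.1e-3, 10e-3, 15e-3, 0.0    # s, s, V, V
    w, rate = 0.2e-3, 8000.0        # synaptic weight (V) and total input rate (1/s)
    steps = int(1.0 / dt)           # simulate one second

    v, spikes = 0.0, []
    decay = np.exp(-dt / tau_m)
    for k in range(steps):
        n_in = rng.poisson(rate * dt)     # input spikes falling into this time bin
        v = v * decay + n_in * w          # leaky integration on the discrete time grid
        if v >= theta:                    # threshold checked once per grid point
            spikes.append(k * dt)
            v = v_reset

    print("spike count in 1 s:", len(spikes))
    ```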

  4. Scalar one-loop vertex integrals as meromorphic functions of space-time dimension d

    International Nuclear Information System (INIS)

    Bluemlein, Johannes; Phan, Khiem Hong; Vietnam National Univ., Ho Chi Minh City; Riemann, Tord; Silesia Univ., Chorzow

    2017-11-01

    Representations are derived for the basic scalar one-loop vertex Feynman integrals as meromorphic functions of the space-time dimension d in terms of (generalized) hypergeometric functions 2F1 and F1. Values at asymptotic or exceptional kinematic points as well as expansions around the singular points at d=4+2n, n non-negative integers, may be derived from the representations easily. The Feynman integrals studied here may be used as building blocks for the calculation of one-loop and higher-loop scalar and tensor amplitudes. From the recursion relation presented, higher n-point functions may be obtained in a straightforward manner.

  5. Scalar one-loop vertex integrals as meromorphic functions of space-time dimension d

    Energy Technology Data Exchange (ETDEWEB)

    Bluemlein, Johannes [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Phan, Khiem Hong [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Vietnam National Univ., Ho Chi Minh City (Viet Nam). Univ. of Science; Riemann, Tord [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Silesia Univ., Chorzow (Poland). Inst. of Physics

    2017-11-15

    Representations are derived for the basic scalar one-loop vertex Feynman integrals as meromorphic functions of the space-time dimension d in terms of (generalized) hypergeometric functions 2F1 and F1. Values at asymptotic or exceptional kinematic points as well as expansions around the singular points at d=4+2n, n non-negative integers, may be derived from the representations easily. The Feynman integrals studied here may be used as building blocks for the calculation of one-loop and higher-loop scalar and tensor amplitudes. From the recursion relation presented, higher n-point functions may be obtained in a straightforward manner.

  6. Self-consistent predictor/corrector algorithms for stable and efficient integration of the time-dependent Kohn-Sham equation

    Science.gov (United States)

    Zhu, Ying; Herbert, John M.

    2018-01-01

    The "real time" formulation of time-dependent density functional theory (TDDFT) involves integration of the time-dependent Kohn-Sham (TDKS) equation in order to describe the time evolution of the electron density following a perturbation. This approach, which is complementary to the more traditional linear-response formulation of TDDFT, is more efficient for computation of broad-band spectra (including core-excited states) and for systems where the density of states is large. Integration of the TDKS equation is complicated by the time-dependent nature of the effective Hamiltonian, and we introduce several predictor/corrector algorithms to propagate the density matrix, one of which can be viewed as a self-consistent extension of the widely used modified-midpoint algorithm. The predictor/corrector algorithms facilitate larger time steps and are shown to be more efficient despite requiring more than one Fock build per time step, and furthermore can be used to detect a divergent simulation on-the-fly, which can then be halted or else the time step modified.

  7. Performance characteristics and parametric choices of a solar thermophotovoltaic cell at the maximum efficiency

    International Nuclear Information System (INIS)

    Dong, Qingchun; Liao, Tianjun; Yang, Zhimin; Chen, Xiaohang; Chen, Jincan

    2017-01-01

    Graphical abstract: The overall model of the solar thermophotovoltaic cell (STPVC) composed of an optical lens, an absorber, an emitter, and a photovoltaic (PV) cell with an integrated back-side reflector is updated to include various irreversible losses. - Highlights: • A new model of the irreversible solar thermophotovoltaic system is proposed. • The material and structure parameters of the system are considered. • The performance characteristics at the maximum efficiency are revealed. • The optimal values of key parameters are determined. • The system can obtain a large efficiency under a relatively low concentration ratio. - Abstract: The overall model of the solar thermophotovoltaic cell (STPVC) composed of an optical lens, an absorber, an emitter, and a photovoltaic (PV) cell with an integrated back-side reflector is updated to include various irreversible losses. The power output and efficiency of the cell are analytically derived. The performance characteristics of the STPVC at the maximum efficiency are revealed. The optimum values of several important parameters, such as the voltage output of the PV cell, the area ratio of the absorber to the emitter, and the band-gap of the semiconductor material, are determined. It is found that under the condition of a relatively low concentration ratio, the optimally designed STPVC can obtain a relatively large efficiency.

  8. The radial distribution of cosmic rays in the heliosphere at solar maximum

    Science.gov (United States)

    McDonald, F. B.; Fujii, Z.; Heikkila, B.; Lal, N.

    2003-08-01

    To obtain a more detailed profile of the radial distribution of galactic (GCRs) and anomalous (ACRs) cosmic rays, a unique time in the 11-year solar activity cycle has been selected - that of solar maximum. At this time of minimum cosmic ray intensity, a simple, straightforward normalization technique has been found that allows the cosmic ray data from IMP 8, Pioneer 10 (P-10) and Voyagers 1 and 2 (V1, V2) to be combined for the solar maxima of cycles 21, 22 and 23. This combined distribution reveals a functional form of the radial gradient that varies as G0/r, with G0 being constant and relatively small in the inner heliosphere. After a transition region between ~10 and 20 AU, G0 increases to a much larger value that remains constant between ~25 and 82 AU. This implies that at solar maximum the changes that produce the 11-year modulation cycle are mainly occurring in the outer heliosphere between ~15 AU and the termination shock. These observations are not inconsistent with the concept that Global Merged Interaction Regions (GMIRs) are the principal agent of modulation between solar minimum and solar maximum. There does not appear to be a significant change in the amount of heliosheath modulation occurring between the 1997 solar minimum and the cycle 23 solar maximum.

  9. Effects of variability in probable maximum precipitation patterns on flood losses

    Science.gov (United States)

    Zischg, Andreas Paul; Felder, Guido; Weingartner, Rolf; Quinn, Niall; Coxon, Gemma; Neal, Jeffrey; Freer, Jim; Bates, Paul

    2018-05-01

    The assessment of the impacts of extreme floods is important for dealing with residual risk, particularly for critical infrastructure management and for insurance purposes. Thus, modelling of the probable maximum flood (PMF) from probable maximum precipitation (PMP) by coupling hydrological and hydraulic models has gained interest in recent years. Herein, we examine whether variability in precipitation patterns exceeds or is below selected uncertainty factors in flood loss estimation and if the flood losses within a river basin are related to the probable maximum discharge at the basin outlet. We developed a model experiment with an ensemble of probable maximum precipitation scenarios created by Monte Carlo simulations. For each rainfall pattern, we computed the flood losses with a model chain and benchmarked the effects of variability in rainfall distribution with other model uncertainties. The results show that flood losses vary considerably within the river basin and depend on the timing and superimposition of the flood peaks from the basin's sub-catchments. In addition to the flood hazard component, the other components of flood risk, exposure, and vulnerability contribute remarkably to the overall variability. This leads to the conclusion that the estimation of the probable maximum expectable flood losses in a river basin should not be based exclusively on the PMF. Consequently, the basin-specific sensitivities to different precipitation patterns and the spatial organization of the settlements within the river basin need to be considered in the analyses of probable maximum flood losses.

  10. Parallel, explicit, and PWTD-enhanced time domain volume integral equation solver

    KAUST Repository

    Liu, Yang

    2013-07-01

    Time domain volume integral equations (TDVIEs) are useful for analyzing transient scattering from inhomogeneous dielectric objects in applications as varied as photonics, optoelectronics, and bioelectromagnetics. TDVIEs typically are solved by implicit marching-on-in-time (MOT) schemes [N. T. Gres et al., Radio Sci., 36, 379-386, 2001], requiring the solution of a system of equations at each and every time step. To reduce the computational cost associated with such schemes, [A. Al-Jarro et al., IEEE Trans. Antennas Propagat., 60, 5203-5215, 2012] introduced an explicit MOT-TDVIE method that uses a predictor-corrector technique to stably update field values throughout the scatterer. By leveraging memory-efficient nodal spatial discretization and scalable parallelization schemes [A. Al-Jarro et al., in 28th Int. Rev. Progress Appl. Computat. Electromagn., 2012], this solver has been successfully applied to the analysis of scattering phenomena involving 0.5 million spatial unknowns. © 2013 IEEE.

  11. Energy Payback Time Calculation for a Building Integrated Semitransparent Photovoltaic Thermal (BISPVT) System with Air Duct

    OpenAIRE

    Kanchan Mudgil; Deepali Kamthania

    2013-01-01

    This paper evaluates the energy payback time (EPBT) of a building integrated semitransparent photovoltaic thermal (BISPVT) system for Srinagar, India. Three different photovoltaic (PV) module types, namely mono-crystalline silicon (m-Si), poly-crystalline silicon (p-Si), and amorphous silicon (a-Si), have been considered for the calculation of EPBT. It is found that the EPBT is lowest for m-Si. Hence, integration of m-Si PV modules on the roof of a room is economical.
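
    The quantity evaluated above is, in essence, the ratio of the energy embodied in the system to the useful energy it delivers per year. The arithmetic is shown below with hypothetical numbers, not the values reported for Srinagar.

    ```python
    # Hypothetical figures for a small rooftop BISPVT installation
    embodied_energy_kwh = 6500.0      # energy used to manufacture and install the system
    annual_output_kwh = 1300.0        # electrical plus useful thermal energy per year

    epbt_years = embodied_energy_kwh / annual_output_kwh
    print(f"Energy payback time: {epbt_years:.1f} years")    # -> 5.0 years
    ```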

  12. A General Stochastic Maximum Principle for SDEs of Mean-field Type

    International Nuclear Information System (INIS)

    Buckdahn, Rainer; Djehiche, Boualem; Li Juan

    2011-01-01

    We study the optimal control for stochastic differential equations (SDEs) of mean-field type, in which the coefficients depend on the state of the solution process as well as on its expected value. Moreover, the cost functional is also of mean-field type. This makes the control problem time inconsistent in the sense that the Bellman optimality principle does not hold. For a general action space a Peng-type stochastic maximum principle (Peng, S.: SIAM J. Control Optim. 28(4), 966–979, 1990) is derived, specifying the necessary conditions for optimality. This maximum principle differs from the classical one in the sense that here the first-order adjoint equation turns out to be a linear mean-field backward SDE, while the second-order adjoint equation remains the same as in Peng's stochastic maximum principle.

  13. Merging daily sea surface temperature data from multiple satellites using a Bayesian maximum entropy method

    Science.gov (United States)

    Tang, Shaolei; Yang, Xiaofeng; Dong, Di; Li, Ziwei

    2015-12-01

    Sea surface temperature (SST) is an important variable for understanding interactions between the ocean and the atmosphere. SST fusion is crucial for acquiring SST products of high spatial resolution and coverage. This study introduces a Bayesian maximum entropy (BME) method for blending daily SSTs from multiple satellite sensors. A new spatiotemporal covariance model of an SST field is built to integrate not only single-day SSTs but also time-adjacent SSTs. In addition, AVHRR 30-year SST climatology data are introduced as soft data at the estimation points to improve the accuracy of blended results within the BME framework. The merged SSTs, with a spatial resolution of 4 km and a temporal resolution of 24 hours, are produced in the Western Pacific Ocean region to demonstrate and evaluate the proposed methodology. Comparisons with in situ drifting buoy observations show that the merged SSTs are accurate and the bias and root-mean-square errors for the comparison are 0.15°C and 0.72°C, respectively.
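
    The bias and root-mean-square error quoted at the end of the record are straightforward to compute from collocated merged and buoy values; the sketch below shows the calculation with placeholder numbers, not the study's matchup data.

    ```python
    import numpy as np

    def validate(merged_sst, buoy_sst):
        """Bias and RMSE of merged SSTs against in situ matchups (deg C)."""
        diff = np.asarray(merged_sst) - np.asarray(buoy_sst)
        return diff.mean(), np.sqrt((diff ** 2).mean())

    # Placeholder matchups, not the study's data
    bias, rmse = validate([28.4, 27.9, 29.1, 26.8], [28.2, 28.0, 28.8, 26.9])
    print(f"bias = {bias:+.2f} degC, RMSE = {rmse:.2f} degC")
    ```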

  14. An arbitrary-order staggered time integrator for the linear acoustic wave equation

    Science.gov (United States)

    Lee, Jaejoon; Park, Hyunseo; Park, Yoonseo; Shin, Changsoo

    2018-02-01

    We suggest a staggered time integrator whose order of accuracy can arbitrarily be extended to solve the linear acoustic wave equation. A strategy to select the appropriate order of accuracy is also proposed based on the error analysis that quantitatively predicts the truncation error of the numerical solution. This strategy not only reduces the computational cost several times, but also allows us to flexibly set the modelling parameters such as the time step length, grid interval and P-wave speed. It is demonstrated that the proposed method can almost eliminate temporal dispersive errors during long term simulations regardless of the heterogeneity of the media and time step lengths. The method can also be successfully applied to the source problem with an absorbing boundary condition, which is frequently encountered in the practical usage for the imaging algorithms or the inverse problems.
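
    The integrator in the record is arbitrary order; for reference, the familiar second-order member of the staggered family for the 1-D linear acoustic system (pressure and particle velocity stored on interleaved grids and updated on alternating half steps) is sketched below with illustrative grid parameters.

    ```python
    import numpy as np

    # 1-D acoustics: dp/dt = -rho * c^2 * dv/dx,  dv/dt = -(1/rho) * dp/dx
    nx, c, rho, dx = 400, 1500.0, 1000.0, 1.0
    dt = 0.5 * dx / c                       # CFL-limited time step

    p = np.exp(-0.01 * (np.arange(nx) - nx // 2) ** 2)   # initial pressure pulse
    v = np.zeros(nx + 1)                                  # velocity on the staggered grid

    for _ in range(500):
        # velocity lives at half-integer positions and half-integer time levels
        v[1:-1] -= dt / (rho * dx) * (p[1:] - p[:-1])
        p -= dt * rho * c ** 2 / dx * (v[1:] - v[:-1])

    print("max |p| after 500 steps:", np.abs(p).max())
    ```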

  15. Discrete integration of continuous Kalman filtering equations for time invariant second-order structural systems

    Science.gov (United States)

    Park, K. C.; Belvin, W. Keith

    1990-01-01

    A general form for the first-order representation of the continuous second-order linear structural-dynamics equations is introduced to derive a corresponding form of first-order continuous Kalman filtering equations. Time integration of the resulting equations is carried out via a set of linear multistep integration formulas. It is shown that a judicious combined selection of computational paths and the undetermined matrices introduced in the general form of the first-order linear structural systems leads to a class of second-order discrete Kalman filtering equations involving only symmetric sparse N x N solution matrices.
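
    As background, a standard discrete Kalman filter for a damped oscillator written in first-order (state-space) form is sketched below; it uses the usual first-order predict/update recursion rather than the second-order form derived in the record, and the plant, noise covariances and measurement are all hypothetical.

    ```python
    import numpy as np
    from scipy.linalg import expm

    # Mass-spring-damper x'' + 2*zeta*wn*x' + wn^2*x = w(t), with displacement measured
    wn, zeta, dt = 2.0 * np.pi, 0.05, 0.01
    Ac = np.array([[0.0, 1.0], [-wn**2, -2.0 * zeta * wn]])
    A = expm(Ac * dt)                    # discrete state-transition matrix
    H = np.array([[1.0, 0.0]])           # displacement measurement
    Q = 1e-4 * np.eye(2)                 # assumed process-noise covariance
    R = np.array([[1e-2]])               # assumed measurement-noise covariance

    x, P = np.zeros(2), np.eye(2)
    rng = np.random.default_rng(1)
    for k in range(1000):
        z = np.array([np.cos(wn * k * dt)]) + rng.normal(0.0, 0.1, 1)   # synthetic measurement
        x, P = A @ x, A @ P @ A.T + Q                                   # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                                  # Kalman gain
        x = x + K @ (z - H @ x)                                         # update
        P = (np.eye(2) - K @ H) @ P

    print("final state estimate:", x)
    ```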

  16. On the internal resonant modes in marching-on-in-time solution of the time domain electric field integral equation

    KAUST Repository

    Shi, Yifei; Bagci, Hakan; Lu, Mingyu

    2013-01-01

    Internal resonant modes are always observed in the marching-on-in-time (MOT) solution of the time domain electric field integral equation (EFIE), although 'relaxed initial conditions,' which are enforced at the beginning of time marching, should in theory prevent these spurious modes from appearing. It has been conjectured that numerical errors built up during time marching establish the necessary initial conditions and induce the internal resonant modes. However, this conjecture has never been proved by systematic numerical experiments. Our numerical results in this communication demonstrate that the internal resonant modes' amplitudes are indeed dictated by the numerical errors. Additionally, it is shown that in a few cases, the internal resonant modes can be made 'invisible' by significantly suppressing the numerical errors. These tests prove the conjecture that the internal resonant modes are induced by numerical errors when the time domain EFIE is solved by the MOT method. © 2013 IEEE.

  17. On the internal resonant modes in marching-on-in-time solution of the time domain electric field integral equation

    KAUST Repository

    Shi, Yifei

    2013-08-01

    Internal resonant modes are always observed in the marching-on-in-time (MOT) solution of the time domain electric field integral equation (EFIE), although 'relaxed initial conditions,' which are enforced at the beginning of time marching, should in theory prevent these spurious modes from appearing. It has been conjectured that numerical errors built up during time marching establish the necessary initial conditions and induce the internal resonant modes. However, this conjecture has never been proved by systematic numerical experiments. Our numerical results in this communication demonstrate that the internal resonant modes' amplitudes are indeed dictated by the numerical errors. Additionally, it is shown that in a few cases, the internal resonant modes can be made 'invisible' by significantly suppressing the numerical errors. These tests prove the conjecture that the internal resonant modes are induced by numerical errors when the time domain EFIE is solved by the MOT method. © 2013 IEEE.

  18. Time-interval for integration of stabilizing haptic and visual information in subjects balancing under static and dynamic conditions

    Directory of Open Access Journals (Sweden)

    Jean-Louis Honeine

    2014-10-01

    Full Text Available Maintaining equilibrium is basically a sensorimotor integration task. The central nervous system continually and selectively weights and rapidly integrates sensory inputs from multiple sources, and coordinates multiple outputs. The weighting process is based on the availability and accuracy of afferent signals at a given instant, on the time-period required to process each input, and possibly on the plasticity of the relevant pathways. The likelihood that sensory inflow changes while balancing under static or dynamic conditions is high, because subjects can pass from a dark to a well-lit environment or from a tactile-guided stabilization to loss of haptic inflow. This review article presents recent data on the temporal events accompanying sensory transition, on which basic information is fragmentary. The processing time from sensory shift to reaching a new steady state includes the time to (a) subtract or integrate sensory inputs, (b) move from allocentric to egocentric reference or vice versa, and (c) adjust the calibration of motor activity in time and amplitude to the new sensory set. We present examples of processes of integration of posture-stabilizing information, and of the respective sensorimotor time-intervals while allowing or occluding vision or adding or subtracting tactile information. These intervals are short, in the order of 1-2 s for different postural conditions, modalities and deliberate or passive shift. They are just longer for haptic than visual shift, just shorter on withdrawal than on addition of stabilizing input, and on deliberate than unexpected mode. The delays are the shortest (for haptic shift) in blind subjects. Since automatic balance stabilization may be vulnerable to sensory-integration delays and to interference from concurrent cognitive tasks in patients with sensorimotor problems, insight into the processing time for balance control represents a critical step in the design of new balance- and locomotion training devices.

  19. Time-interval for integration of stabilizing haptic and visual information in subjects balancing under static and dynamic conditions

    Science.gov (United States)

    Honeine, Jean-Louis; Schieppati, Marco

    2014-01-01

    Maintaining equilibrium is basically a sensorimotor integration task. The central nervous system (CNS) continually and selectively weights and rapidly integrates sensory inputs from multiple sources, and coordinates multiple outputs. The weighting process is based on the availability and accuracy of afferent signals at a given instant, on the time-period required to process each input, and possibly on the plasticity of the relevant pathways. The likelihood that sensory inflow changes while balancing under static or dynamic conditions is high, because subjects can pass from a dark to a well-lit environment or from a tactile-guided stabilization to loss of haptic inflow. This review article presents recent data on the temporal events accompanying sensory transition, on which basic information is fragmentary. The processing time from sensory shift to reaching a new steady state includes the time to (a) subtract or integrate sensory inputs; (b) move from allocentric to egocentric reference or vice versa; and (c) adjust the calibration of motor activity in time and amplitude to the new sensory set. We present examples of processes of integration of posture-stabilizing information, and of the respective sensorimotor time-intervals while allowing or occluding vision or adding or subtracting tactile information. These intervals are short, in the order of 1–2 s for different postural conditions, modalities and deliberate or passive shift. They are just longer for haptic than visual shift, just shorter on withdrawal than on addition of stabilizing input, and on deliberate than unexpected mode. The delays are the shortest (for haptic shift) in blind subjects. Since automatic balance stabilization may be vulnerable to sensory-integration delays and to interference from concurrent cognitive tasks in patients with sensorimotor problems, insight into the processing time for balance control represents a critical step in the design of new balance- and locomotion training devices

  20. Integrated response and transit time distributions of watersheds by combining hydrograph separation and long-term transit time modeling

    Directory of Open Access Journals (Sweden)

    M. C. Roa-García

    2010-08-01

    Full Text Available We present a new modeling approach for analyzing and predicting the Transit Time Distribution (TTD) and the Response Time Distribution (RTD), from hourly to annual time scales, as two distinct hydrological processes. The model integrates Isotope Hydrograph Separation (IHS) and the Instantaneous Unit Hydrograph (IUH) approach as a tool to provide a more realistic description of the transit and response times of water in catchments. Individual event simulations and parameterizations were combined with long-term baseflow simulation and parameterizations; this provides a comprehensive picture of the hydraulic and isotopic catchment response over a long time span. The proposed method was tested in three Andean headwater catchments to compare the effects of land use on hydrological response and solute transport. Results show that the characteristics of events and antecedent conditions have a significant influence on the TTD and RTD, but in general the RTD of the grassland-dominated catchment is concentrated in the shorter time spans and has a higher cumulative TTD, while the forest-dominated catchment has a relatively higher response distribution and lower cumulative TTD. The catchment where wetlands are concentrated shows a flashier response, but wetlands also appear to prolong transit time.
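
    The two building blocks named in the record, isotope hydrograph separation and the instantaneous unit hydrograph, each reduce to a short calculation; the sketch below shows both with invented tracer values and rainfall, not the Andean catchment data.

    ```python
    import numpy as np

    def ihs_pre_event_fraction(d_stream, d_pre, d_event):
        """Two-component isotope hydrograph separation: fraction of pre-event (old) water."""
        return (d_stream - d_event) / (d_pre - d_event)

    def iuh_response(rainfall, k=3.0, dt=1.0, n=50):
        """Catchment response as a convolution of effective rainfall with a linear-reservoir IUH."""
        t = np.arange(n) * dt
        iuh = np.exp(-t / k) / k                          # h(t) = exp(-t/k) / k
        return np.convolve(rainfall, iuh)[: len(rainfall)] * dt

    print("pre-event fraction:", ihs_pre_event_fraction(-8.5, -9.0, -5.0))       # 0.875
    print("peak response:", iuh_response(np.array([0.0, 4.0, 10.0, 2.0, 0, 0, 0, 0])).max())
    ```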

  1. An increased rectal maximum tolerable volume and long anal canal are associated with poor short-term response to biofeedback therapy for patients with anismus with decreased bowel frequency and normal colonic transit time.

    Science.gov (United States)

    Rhee, P L; Choi, M S; Kim, Y H; Son, H J; Kim, J J; Koh, K C; Paik, S W; Rhee, J C; Choi, K W

    2000-10-01

    Biofeedback is an effective therapy for a majority of patients with anismus. However, a significant proportion of patients still fail to respond to biofeedback, and little is known about the factors that predict the response to biofeedback. We evaluated the factors associated with a poor response to biofeedback. Biofeedback therapy was offered to 45 patients with anismus with decreased bowel frequency (less than three times per week) and normal colonic transit time. Any differences in demographics, symptoms, and parameters of anorectal physiologic tests were sought between responders (in whom bowel frequency increased up to three times or more per week after biofeedback) and nonresponders (in whom bowel frequency remained less than three times per week). Thirty-one patients (68.9 percent) responded to biofeedback and 14 patients (31.1 percent) did not. Anal canal length was longer in nonresponders than in responders (4.53 +/- 0.5 vs. 4.08 +/- 0.56 cm; P = 0.02), and rectal maximum tolerable volume was larger in nonresponders than in responders (361 +/- 87 vs. 302 +/- 69 ml; P = 0.02). Anal canal length and rectal maximum tolerable volume showed significant differences between responders and nonresponders on multivariate analysis (P = 0.027 and P = 0.034, respectively). This study showed that a long anal canal and increased rectal maximum tolerable volume are associated with poor short-term response to biofeedback for patients with anismus with decreased bowel frequency and normal colonic transit time.

  2. Latitudinal Change of Tropical Cyclone Maximum Intensity in the Western North Pacific

    OpenAIRE

    Choi, Jae-Won; Cha, Yumi; Kim, Hae-Dong; Kang, Sung-Dae

    2016-01-01

    This study obtained the latitude at which tropical cyclones (TCs) reach their maximum intensity and applied statistical change-point analysis to the time series of the annual average values. The analysis found that the latitude of TC maximum intensity has increased since 1999. To investigate the reason behind this phenomenon, the difference between the average latitude for 1999-2013 and the average for 1977-1998 was analyzed. In the difference of the 500 hPa streamlines between the two ...

  3. Maximum leaf conductance driven by CO2 effects on stomatal size and density over geologic time.

    Science.gov (United States)

    Franks, Peter J; Beerling, David J

    2009-06-23

    Stomatal pores are microscopic structures on the epidermis of leaves formed by 2 specialized guard cells that control the exchange of water vapor and CO2 between plants and the atmosphere. Stomatal size (S) and density (D) determine maximum leaf diffusive (stomatal) conductance of CO2 (gcmax) to sites of assimilation. Although large variations in D observed in the fossil record have been correlated with atmospheric CO2, the crucial significance of similarly large variations in S has been overlooked. Here, we use physical diffusion theory to explain why large changes in S necessarily accompanied the changes in D and atmospheric CO2 over the last 400 million years. In particular, we show that high densities of small stomata are the only way to attain the highest gcmax values required to counter CO2 "starvation" at low atmospheric CO2 concentrations. This explains cycles of increasing D and decreasing S evident in the fossil history of stomata under the CO2-impoverished atmospheres of the Permo-Carboniferous and Cenozoic glaciations. The pattern was reversed under rising atmospheric CO2 regimes. Selection for small S was crucial for attaining high gcmax under falling atmospheric CO2 and, therefore, may represent a mechanism linking CO2 and the increasing gas-exchange capacity of land plants over geologic time.
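
    Physical diffusion theory links gcmax to pore anatomy; one commonly used anatomical approximation (taken here from the general stomatal literature as an assumption, not quoted from this record) is gmax = d·D·amax / [v·(l + (π/2)·√(amax/π))], with d the diffusivity of water vapour in air, v the molar volume of air, D the stomatal density, amax the maximum pore area and l the pore depth. A sketch with hypothetical leaf values:

    ```python
    import numpy as np

    def g_max(D, a_max, pore_depth,
              d=2.49e-5,       # diffusivity of water vapour in air, m^2/s (approximate)
              v=2.24e-2):      # molar volume of air, m^3/mol (approximate)
        """Anatomical maximum stomatal conductance (mol m^-2 s^-1), assumed formulation."""
        return (d * D * a_max) / (v * (pore_depth + (np.pi / 2.0) * np.sqrt(a_max / np.pi)))

    # Hypothetical leaf: 200 stomata per mm^2, 80 um^2 maximum pore area, 10 um pore depth
    print(g_max(D=200e6, a_max=80e-12, pore_depth=10e-6))   # roughly 1 mol m^-2 s^-1
    ```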

  4. Evaluating perceptual integration: uniting response-time- and accuracy-based methodologies.

    Science.gov (United States)

    Eidels, Ami; Townsend, James T; Hughes, Howard C; Perry, Lacey A

    2015-02-01

    This investigation brings together a response-time system identification methodology (e.g., Townsend & Wenger Psychonomic Bulletin & Review 11, 391-418, 2004a) and an accuracy methodology, intended to assess models of integration across stimulus dimensions (features, modalities, etc.) that were proposed by Shaw and colleagues (e.g., Mulligan & Shaw Perception & Psychophysics 28, 471-478, 1980). The goal was to theoretically examine these separate strategies and to apply them conjointly to the same set of participants. The empirical phases were carried out within an extension of an established experimental design called the double factorial paradigm (e.g., Townsend & Nozawa Journal of Mathematical Psychology 39, 321-359, 1995). That paradigm, based on response times, permits assessments of architecture (parallel vs. serial processing), stopping rule (exhaustive vs. minimum time), and workload capacity, all within the same blocks of trials. The paradigm introduced by Shaw and colleagues uses a statistic formally analogous to that of the double factorial paradigm, but based on accuracy rather than response times. We demonstrate that the accuracy measure cannot discriminate between parallel and serial processing. Nonetheless, the class of models supported by the accuracy data possesses a suitable interpretation within the same set of models supported by the response-time data. The supported model, consistent across individuals, is parallel and has limited capacity, with the participants employing the appropriate stopping rule for the experimental setting.

  5. The effect of complete integration of HIV and TB services on time to initiation of antiretroviral therapy: a before-after study.

    Directory of Open Access Journals (Sweden)

    Bernhard Kerschberger

    Full Text Available Studies have shown that early ART initiation in TB/HIV co-infected patients lowers mortality. One way to implement earlier ART commencement could be through integration of TB and HIV services, a more efficient model of care than separate, vertical programs. We present a model of full TB/HIV integration and estimate its effect on time to initiation of ART. We retrospectively reviewed TB registers and clinical notes of 209 TB/HIV co-infected adults with a CD4 count <250 cells/µl and registered for TB treatment at one primary care clinic in a South African township between June 2008 and May 2009. Using Kaplan-Meier and Cox proportional hazard analysis, we compared the time between initiation of TB treatment and ART for the periods before and after full, "one-stop shop" integration of TB and HIV services (in December 2009). Potential confounders were determined a priori through directed acyclic graphs. Robustness of assumptions was investigated by sensitivity analyses. The analysis included 188 patients (100 pre- and 88 post-integration), yielding 56 person-years of observation. Baseline characteristics of the two groups were similar. Median time to ART initiation decreased from 147 days (95% confidence interval [CI] 85-188) before integration of services to 75 days (95% CI 52-119) post-integration. In adjusted analyses, patients attending the clinic post-integration were 1.60 times (95% CI 1.11-2.29) more likely to have started ART relative to the pre-integration period. Sensitivity analyses supported these findings. Full TB/HIV care integration is feasible and led to a 60% increased chance of co-infected patients starting ART, while reducing time to ART initiation by an average of 72 days. Although these estimates should be confirmed through larger studies, they suggest that scale-up of full TB/HIV service integration in high TB/HIV prevalence settings may shorten time to ART initiation, which might reduce excess mortality and morbidity.

  6. Maximum swimming speeds of sailfish and three other large marine predatory fish species based on muscle contraction time and stride length

    DEFF Research Database (Denmark)

    Svendsen, Morten Bo Søndergaard; Domenici, Paolo; Marras, Stefano

    2016-01-01

    Billfishes are considered to be among the fastest swimmers in the oceans. Previous studies have estimated the maximum speed of sailfish and black marlin at around 35 m/s, but theoretical work on cavitation predicts that such extreme speeds are unlikely. Here we investigated the maximum speed of sailfish...

  7. Bistability, non-ergodicity, and inhibition in pairwise maximum-entropy models.

    Science.gov (United States)

    Rostami, Vahid; Porta Mana, PierGianLuca; Grün, Sonja; Helias, Moritz

    2017-10-01

    Pairwise maximum-entropy models have been used in neuroscience to predict the activity of neuronal populations, given only the time-averaged correlations of the neuron activities. This paper provides evidence that the pairwise model, applied to experimental recordings, would produce a bimodal distribution for the population-averaged activity, and for some population sizes the second mode would peak at high activities, that experimentally would be equivalent to 90% of the neuron population active within time-windows of few milliseconds. Several problems are connected with this bimodality: 1. The presence of the high-activity mode is unrealistic in view of observed neuronal activity and on neurobiological grounds. 2. Boltzmann learning becomes non-ergodic, hence the pairwise maximum-entropy distribution cannot be found: in fact, Boltzmann learning would produce an incorrect distribution; similarly, common variants of mean-field approximations also produce an incorrect distribution. 3. The Glauber dynamics associated with the model is unrealistically bistable and cannot be used to generate realistic surrogate data. This bimodality problem is first demonstrated for an experimental dataset from 159 neurons in the motor cortex of macaque monkey. Evidence is then provided that this problem affects typical neural recordings of population sizes of a couple of hundreds or more neurons. The cause of the bimodality problem is identified as the inability of standard maximum-entropy distributions with a uniform reference measure to model neuronal inhibition. To eliminate this problem a modified maximum-entropy model is presented, which reflects a basic effect of inhibition in the form of a simple but non-uniform reference measure. This model does not lead to unrealistic bimodalities, can be found with Boltzmann learning, and has an associated Glauber dynamics which incorporates a minimal asymmetric inhibition.
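
    The model class under discussion is the pairwise maximum-entropy (Ising-like) distribution P(s) ∝ exp(Σi hi si + Σi<j Jij si sj), usually sampled with Glauber (heat-bath) updates. A minimal sampler is sketched below with small arbitrary parameters, purely to fix notation; it does not reproduce the bimodality analysis or the modified reference measure proposed in the record.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N = 50
    h = rng.normal(-1.0, 0.2, N)                    # biases (arbitrary)
    J = rng.normal(0.0, 0.1 / np.sqrt(N), (N, N))
    J = np.triu(J, 1)
    J = J + J.T                                     # symmetric couplings, zero diagonal

    s = rng.integers(0, 2, N)                       # binary states in {0, 1}
    activity = []
    for sweep in range(2000):
        for i in rng.permutation(N):
            field = h[i] + J[i] @ s                 # local field on unit i
            p_on = 1.0 / (1.0 + np.exp(-field))     # Glauber (heat-bath) update probability
            s[i] = rng.random() < p_on
        activity.append(s.mean())

    print("mean population activity:", np.mean(activity[500:]))
    ```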

  8. Maximum Gene-Support Tree

    Directory of Open Access Journals (Sweden)

    Yunfeng Shan

    2008-01-01

    Full Text Available Genomes and genes diversify during evolution; however, it is unclear to what extent genes still retain the relationship among species. Model species for molecular phylogenetic studies include yeasts and viruses whose genomes were sequenced as well as plants that have fossil-supported true phylogenetic trees available. In this study, we generated single gene trees of seven yeast species as well as single gene trees of nine baculovirus species using all the orthologous genes among the species compared. Homologous genes among seven known plants were used for validation of the finding. Four algorithms were used: maximum parsimony (MP), minimum evolution (ME), maximum likelihood (ML), and neighbor-joining (NJ). Trees were reconstructed before and after weighting the DNA and protein sequence lengths among genes. Rarely can a single gene always generate the "true tree" by all four algorithms. However, the most frequent gene tree, termed the "maximum gene-support tree" (MGS tree, or WMGS tree for the weighted one), in yeasts, baculoviruses, or plants was consistently found to be the "true tree" among the species. The results provide insights into the overall degree of divergence of orthologous genes of the genomes analyzed and suggest the following: (1) the true tree relationship among the species studied is still maintained by the largest group of orthologous genes; (2) there are usually more orthologous genes with higher similarities between genetically closer species than between genetically more distant ones; and (3) the maximum gene-support tree reflects the phylogenetic relationship among the species in comparison.

  9. Creating a Campus Culture of Integrity: Comparing the Perspectives of Full- and Part-Time Faculty

    Science.gov (United States)

    Hudd, Suzanne S.; Apgar, Caroline; Bronson, Eric Franklyn; Lee, Renee Gravois

    2009-01-01

    Part-time faculty play an important role in creating a culture of integrity on campus, yet they face a number of structural constraints. This paper seeks to improve our understanding of the potentially unique experiences of part-time faculty with academic misconduct and suggests ways to more effectively involve them in campus-wide academic…

  10. Integrated High-Speed Digital Optical True-Time-Delay Modules for Synthetic Aperture Radars, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Crystal Research, Inc. proposes an integrated high-speed digital optical true-time-delay module for advanced synthetic aperture radars. The unique feature of this...

  11. 24 CFR 982.508 - Maximum family share at initial occupancy.

    Science.gov (United States)

    2010-04-01

    ... URBAN DEVELOPMENT SECTION 8 TENANT BASED ASSISTANCE: HOUSING CHOICE VOUCHER PROGRAM Rent and Housing Assistance Payment § 982.508 Maximum family share at initial occupancy. At the time the PHA approves a... program, and where the gross rent of the unit exceeds the applicable payment standard for the family, the...

  12. Determining and monitoring of maximum permissible power for HWRR-3

    International Nuclear Information System (INIS)

    Jia Zhanli; Xiao Shigang; Jin Huajin; Lu Changshen

    1987-01-01

    The operating power of a reactor is an important parameter to be monitored. This report briefly describes the determination and monitoring of the maximum permissible power for HWRR-3. The calculation method is described, and the results of the calculation and the error analysis are also given. On-line calculation and real-time monitoring have been realized at the heavy water reactor, providing real-time and reliable supervision of the reactor. This makes operation convenient and increases reliability.

  13. Fast-PPP assessment in European and equatorial region near the solar cycle maximum

    Science.gov (United States)

    Rovira-Garcia, Adria; Juan, José Miguel; Sanz, Jaume

    2014-05-01

    The Fast Precise Point Positioning (Fast-PPP) is a technique to provide quick high-accuracy navigation with ambiguity-fixing capability, thanks to an accurate modelling of the ionosphere. Indeed, once the availability of real-time precise satellite orbits and clocks is granted to users, the next challenge is the accuracy of real-time ionospheric corrections. Several steps have been taken by gAGE/UPC to develop such a global system for precise navigation. First, Wide-Area Real-Time Kinematics (WARTK) feasibility studies enabled precise relative continental navigation using a few tens of reference stations. Later, multi-frequency and multi-constellation assessments in different ionospheric scenarios, including maximum solar-cycle conditions, were focussed on user-domain performance. Recently, a mature evolution of the technique consists of a dual service scheme: a global Precise Point Positioning (PPP) service, together with a continental enhancement to shorten convergence. An end-to-end performance assessment of the Fast-PPP technique is presented in this work, focussed on Europe and on the equatorial region of South East Asia (SEA), both near the solar cycle maximum. The accuracy of the Central Processing Facility (CPF) real-time precise satellite orbits and clocks is, respectively, 4 centimetres and 0.2 nanoseconds, in line with the accuracy of the International GNSS Service (IGS) analysis centres. This global PPP service is enhanced by the Fast-PPP by adding the capability of global undifferenced ambiguity fixing, thanks to the determination of the fractional part of the ambiguities. The core of the Fast-PPP is the capability to compute real-time ionospheric determinations with accuracies at the level of, or better than, 1 Total Electron Content Unit (TECU), improving the widely-accepted Global Ionospheric Maps (GIM), with declared accuracies of 2-8 TECU. This large improvement in the modelling accuracy is achieved thanks to a two-layer description of the ionosphere combined with

  14. A summary of the assessment of fuel behaviour, fission product release and pressure tube integrity following a postulated large loss-of-coolant accident

    International Nuclear Information System (INIS)

    Langman, V.J.; Weaver, K.R.

    1984-05-01

    The Ontario Hydro analyses of fuel and pressure tube temperatures, fuel behaviour, fission product release and pressure tube integrity for large break loss-of-coolant accidents in Bruce A or Pickering A have been critically reviewed. The determinations of maximum fuel temperatures and fission product release are very uncertain, and pressure tube integrity cannot be assured where low steam flows are predicted to persist for times on the order of minutes

  15. Characteristics of individuals with integrated pensions.

    Science.gov (United States)

    Bender, K A

    1999-01-01

    Employer pensions that integrate benefits with Social Security have been the focus of relatively little research. Since changes in Social Security benefit levels and other program characteristics can affect the benefit levels and other features of integrated pension plans, it is important to know who is covered by these plans. This article examines the characteristics of workers covered by integrated pension plans, compared to those with nonintegrated plans and those with no pension coverage. Integrated pension plans are those that explicitly adjust their benefit structure to help compensate for the employer's contributions to the Social Security program. There are two basic integration methods used by defined benefit (DB) plans. The offset method causes a reduction in employer pension benefits by up to half of the Social Security retirement benefit; the excess rate method is characterized by an accrual rate that is lower for earnings below the Social Security taxable maximum than above it. Defined contribution (DC) pension plans can be integrated along the lines of the excess rate method. To date, research on integrated pensions has focused on plan characteristics, as reported to the Bureau of Labor Statistics (BLS) through its Employee Benefits Survey (EBS). This research has examined the prevalence of integration among full-time, private sector workers by industry, firm size, and broad occupational categories. However, because the EBS provides virtually no data on worker characteristics, analyses of the effects of pension integration on retirement benefits have used hypothetical workers, varying according to assumed levels of earnings and job tenure. This kind of analysis is not particularly helpful in examining the potential effects of changes in the Social Security program on workers' pension benefits. However, data on pension integration at the individual level are available, most recently from the Health and Retirement Study (HRS), a nationally

  16. Algorithms of maximum likelihood data clustering with applications

    Science.gov (United States)

    Giada, Lorenzo; Marsili, Matteo

    2002-12-01

    We address the problem of data clustering by introducing an unsupervised, parameter-free approach based on maximum likelihood principle. Starting from the observation that data sets belonging to the same cluster share a common information, we construct an expression for the likelihood of any possible cluster structure. The likelihood in turn depends only on the Pearson's coefficient of the data. We discuss clustering algorithms that provide a fast and reliable approximation to maximum likelihood configurations. Compared to standard clustering methods, our approach has the advantages that (i) it is parameter free, (ii) the number of clusters need not be fixed in advance and (iii) the interpretation of the results is transparent. In order to test our approach and compare it with standard clustering algorithms, we analyze two very different data sets: time series of financial market returns and gene expression data. We find that different maximization algorithms produce similar cluster structures whereas the outcome of standard algorithms has a much wider variability.

  17. A 2-Dof LQR based PID controller for integrating processes considering robustness/performance tradeoff.

    Science.gov (United States)

    Srivastava, Saurabh; Pandit, V S

    2017-11-01

    This paper focuses on the analytical design of a Proportional-Integral-Derivative (PID) controller together with a unique set-point filter that makes the overall Two-Degree-of-Freedom (2-DoF) control system for integrating processes with time delay. The PID controller tuning is based on the Linear Quadratic Regulator (LQR) using a dominant pole placement approach to obtain a good regulatory response. The set-point filter is designed with the calculated PID parameters and a single filter time constant (λ) to precisely control the servo response. The effectiveness of the proposed methodology is demonstrated through a series of illustrative examples using real industrial integrating process models. The whole range of PID parameters is obtained for each case as a tradeoff between the robustness of the closed-loop system, measured in terms of Maximum Sensitivity (Ms), and the load-disturbance performance, measured in terms of the Integral of Absolute Error (IAE). Results show an improved closed-loop response in terms of regulatory and servo responses with less control effort when compared with the latest PID tuning methods for integrating systems. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
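
    The controller structure described above, a PID acting on the measured output plus a first-order set-point filter with time constant λ, can be written in a few lines of discrete-time code; the tuning numbers below are placeholders, not the LQR-based values derived in the paper.

    ```python
    class TwoDofPid:
        """PID with a first-order set-point filter (2-DoF structure), discrete time."""

        def __init__(self, kp, ki, kd, lam, dt):
            self.kp, self.ki, self.kd, self.lam, self.dt = kp, ki, kd, lam, dt
            self.integral = 0.0
            self.prev_y = 0.0
            self.r_f = 0.0                     # filtered set point

        def update(self, r, y):
            self.r_f += self.dt * (r - self.r_f) / self.lam     # set-point filter
            e = self.r_f - y
            self.integral += self.ki * e * self.dt
            deriv = -self.kd * (y - self.prev_y) / self.dt      # derivative on measurement
            self.prev_y = y
            return self.kp * e + self.integral + deriv

    # Placeholder tuning for an integrating process (illustrative values only)
    pid = TwoDofPid(kp=1.2, ki=0.08, kd=2.5, lam=4.0, dt=0.1)
    u = pid.update(r=1.0, y=0.0)
    ```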

  18. Influence of neural monitoring during thyroid surgery on nerve integrity and postoperative vocal function.

    Science.gov (United States)

    Engelsman, A F; Warhurst, S; Fraser, S; Novakovic, D; Sidhu, S B

    2018-06-01

    Integrity of the recurrent laryngeal nerve (RLN) and the external branch of the superior laryngeal nerve (EBSLN) can be checked by intraoperative nerve monitoring (IONM) after visualization. The aim of this study was to determine the prevalence and nature of voice dysfunction following thyroid surgery with routine IONM. Thyroidectomies were performed with routine division of strap muscles and nerve monitoring to confirm integrity of the RLN and EBSLN following dissection. Patients were assessed for vocal function before surgery and at 1 and 3 months after operation. Assessment included use of the Voice Handicap Index (VHI) 10, maximum phonation time, fundamental frequency, pitch range, harmonic to noise ratio, cepstral peak prominence and smoothed cepstral peak prominence. A total of 172 nerves at risk were analysed in 102 consecutive patients undergoing elective thyroid surgery. In 23·3 per cent of EBSLNs and 0·6 per cent of RLNs nerve identification required the assistance of IONM in addition to visualization. Nerve integrity was confirmed during surgery for 98·8 per cent of EBSLNs and 98·3 per cent of RLNs. There were no differences between preoperative and postoperative VHI-10 scores. Acoustic voice assessment showed small changes in maximum phonation time at 1 and 3 months after surgery. Where there is routine division of strap muscles, thyroidectomy using nerve monitoring confirmation of RLN and EBSLN function following dissection results in no clinically significant voice change.

  19. Maximum Kolmogorov-Sinai Entropy Versus Minimum Mixing Time in Markov Chains

    Science.gov (United States)

    Mihelich, M.; Dubrulle, B.; Paillard, D.; Kral, Q.; Faranda, D.

    2018-01-01

    We establish a link between the maximization of the Kolmogorov-Sinai entropy (KSE) and the minimization of the mixing time for general Markov chains. Since the maximization of the KSE is analytical and in general easier to compute than the mixing time, this link provides a new, faster method to approximate the minimum-mixing-time dynamics. It could be of interest in computer science and statistical physics, for computations that use random walks on graphs that can be represented as Markov chains.
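
    Both quantities in the title are easy to evaluate for a small chain: the KS entropy rate of a stationary Markov chain is H = -Σi πi Σj Pij log Pij, and the mixing time is controlled by the spectral gap of the transition matrix. The sketch below evaluates both for an arbitrary two-state chain; it illustrates the definitions only, not the optimization studied in the record.

    ```python
    import numpy as np

    def ks_entropy(P):
        """Kolmogorov-Sinai entropy rate of a stationary Markov chain with transition matrix P."""
        evals, evecs = np.linalg.eig(P.T)
        pi = np.real(evecs[:, np.argmax(np.real(evals))])
        pi /= pi.sum()                                         # stationary distribution
        logP = np.where(P > 0, np.log(np.where(P > 0, P, 1.0)), 0.0)
        return -np.sum(pi[:, None] * P * logP)

    def spectral_gap(P):
        """1 - |lambda_2|; the relaxation (mixing) time scales like 1 / gap."""
        evals = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
        return 1.0 - evals[1]

    P = np.array([[0.9, 0.1],
                  [0.2, 0.8]])
    print("KS entropy:", ks_entropy(P), " spectral gap:", spectral_gap(P))
    ```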

  20. Maximum neutron flux in thermal reactors

    International Nuclear Information System (INIS)

    Strugar, P.V.

    1968-12-01

    The direct approach to the problem is to calculate the spatial distribution of fuel concentration in the reactor core directly, using the condition of maximum neutron flux while complying with thermal limitations. This paper shows that the problem can be solved by applying variational calculus, i.e. by using Pontryagin's maximum principle. The mathematical model of the reactor core is based on two-group neutron diffusion theory, with some simplifications that make it suitable for treatment by the maximum principle. The solution for the optimum distribution of fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are the roots of a system of nonlinear equations, and verification of the optimum conditions can be done only for specific examples.