Relaxing a large cosmological constant
International Nuclear Information System (INIS)
Bauer, Florian; Sola, Joan; Stefancic, Hrvoje
2009-01-01
The cosmological constant (CC) problem is the biggest enigma of theoretical physics. In recent times, it has been rephrased as the dark energy (DE) problem in order to encompass a wider spectrum of possibilities. It is, in any case, a polyhedric puzzle with many faces, including the cosmic coincidence problem, i.e. why the density of matter ρ_m is presently so close to the CC density ρ_Λ. However, the oldest, toughest and most intriguing face of this polyhedron is the big CC problem, namely why the measured value of ρ_Λ at present is so small compared to any typical density scale existing in high energy physics, especially taking into account the many phase transitions that our Universe has undergone since the early times, including inflation. In this Letter, we propose to extend the field equations of General Relativity by including a class of invariant terms that automatically relax the value of the CC irrespective of the initial size of the vacuum energy in the early epochs. We show that, at late times, the Universe enters an eternal de Sitter stage mimicking a tiny positive cosmological constant. Thus, these models may be able to solve the big CC problem without fine-tuning and also have a bearing on the cosmic coincidence problem. Remarkably, they mimic the ΛCDM model to a large extent, but they still leave some characteristic imprints that should be testable in the next generation of experiments.
An Efficient Format for Nearly Constant-Time Access to Arbitrary Time Intervals in Large Trace Files
Directory of Open Access Journals (Sweden)
Anthony Chan
2008-01-01
Full Text Available A powerful method to aid in understanding the performance of parallel applications uses log or trace files containing time-stamped events and states (pairs of events). These trace files can be very large, often hundreds or even thousands of megabytes. Because of the cost of accessing and displaying such files, other methods are often used that reduce the size of the trace files at the cost of sacrificing detail or other information. This paper describes a hierarchical trace file format that provides for display of an arbitrary time window in a time independent of the total size of the file and roughly proportional to the number of events within the time window. This format eliminates the need to sacrifice data to achieve a smaller trace file size (since storage is inexpensive, it is necessary only to make efficient use of bandwidth to that storage). The format can be used to organize a trace file or to create a separate file of annotations that may be used with conventional trace files. We present an analysis of the time to access all of the events relevant to an interval of time and we describe experiments demonstrating the performance of this file format.
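As an illustrative in-memory analogue of the indexing idea (not the paper's actual on-disk layout), a sorted time index already gives window queries whose cost is a logarithmic seek plus work proportional to the events returned:

```python
import bisect

def build_index(events):
    """Sort events once by timestamp -- the in-memory analogue of
    writing a trace file with a time index."""
    evs = sorted(events, key=lambda e: e[0])
    return [e[0] for e in evs], evs

def query_window(times, evs, t0, t1):
    """Fetch all events with timestamp in [t0, t1].
    Cost: O(log n) binary searches to locate the window plus O(k) to
    emit the k events returned, independent of the total trace size."""
    lo = bisect.bisect_left(times, t0)
    hi = bisect.bisect_right(times, t1)
    return evs[lo:hi]

times, evs = build_index([(3.0, "send"), (1.0, "compute"),
                          (2.5, "recv"), (4.2, "barrier")])
window = query_window(times, evs, 2.0, 4.0)  # events at t = 2.5 and 3.0
```

A hierarchical (tree-structured) index as in the paper additionally lets states spanning the window boundary be found without scanning backwards.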
Modified large number theory with constant G
International Nuclear Information System (INIS)
Recami, E.
1983-01-01
The inspiring ''numerology'' uncovered by Dirac, Eddington, Weyl, et al. can be explained and derived when it is slightly modified so as to connect the ''gravitational world'' (cosmos) with the ''strong world'' (hadron), rather than with the electromagnetic one. The aim of this note is to show the following. In the present approach to the ''Large Number Theory,'' cosmos and hadrons are considered to be (finite) similar systems, so that the ratio R-bar/r-bar of the cosmos typical length R-bar to the hadron typical length r-bar is constant in time (for instance, if both cosmos and hadrons undergo an expansion/contraction cycle, according to the ''cyclical big-bang'' hypothesis, then R-bar and r-bar can be chosen to be the maximum radii, or the average radii). As a consequence, the gravitational constant G turns out to be independent of time. The present note is based on work done in collaboration with P. Caldirola, G. D. Maccarrone, and M. Pavsic
Cryptography in constant parallel time
Applebaum, Benny
2013-01-01
Locally computable (NC0) functions are 'simple' functions for which every bit of the output can be computed by reading a small number of bits of their input. The study of locally computable cryptography attempts to construct cryptographic functions that achieve this strong notion of simplicity and simultaneously provide a high level of security. Such constructions are highly parallelizable and they can be realized by Boolean circuits of constant depth. This book establishes, for the first time, the possibility of local implementations for many basic cryptographic primitives such as one-way functions
The time constant of the somatogravic illusion.
Correia Grácio, B J; de Winkel, K N; Groen, E L; Wentink, M; Bos, J E
2013-02-01
Without visual feedback, humans perceive tilt when experiencing a sustained linear acceleration. This tilt illusion is commonly referred to as the somatogravic illusion. Although the physiological basis of the illusion seems to be well understood, the dynamic behavior is still subject to discussion. In this study, the dynamic behavior of the illusion was measured experimentally for three motion profiles with different frequency content. Subjects were exposed to pure centripetal accelerations in the lateral direction and were asked to indicate their tilt percept by means of a joystick. Variable-radius centrifugation during constant angular rotation was used to generate these motion profiles. Two self-motion perception models were fitted to the experimental data and were used to obtain the time constant of the somatogravic illusion. Results showed that the time constant of the somatogravic illusion was on the order of two seconds, in contrast to the higher time constant found in fixed-radius centrifugation studies. Furthermore, the time constant was significantly affected by the frequency content of the motion profiles. Motion profiles with higher frequency content revealed shorter time constants which cannot be explained by self-motion perception models that assume a fixed time constant. Therefore, these models need to be improved with a mechanism that deals with this variable time constant. Apart from the fundamental importance, these results also have practical consequences for the simulation of sustained accelerations in motion simulators.
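The dynamics discussed here can be sketched with a minimal first-order lag model (an assumed textbook form, not the authors' full self-motion perception models): the tilt percept chases the gravito-inertial angle atan(a_lat/g) with a single time constant τ.

```python
import math

def perceived_tilt(a_lat, g=9.81, tau=2.0, dt=0.01, t_end=10.0):
    """First-order lag: the perceived tilt angle theta relaxes toward
    the gravito-inertial angle atan(a_lat / g) with time constant tau.
    Returns (theta at t_end, steady-state angle), both in radians."""
    target = math.atan2(a_lat, g)
    theta, t = 0.0, 0.0
    while t < t_end:                      # forward-Euler integration
        theta += (target - theta) / tau * dt
        t += dt
    return theta, target

theta, target = perceived_tilt(a_lat=3.0)
# after 10 s (five time constants) the percept is within ~1% of target
```

With a fixed τ this model cannot reproduce the frequency dependence the experiment found, which is exactly the limitation the abstract points out.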
Time constant of logarithmic creep and relaxation
CSIR Research Space (South Africa)
Nabarro, FRN
2001-07-15
Full Text Available length and hardness, which vary logarithmically with time. For dimensional reasons, a logarithmic variation must involve a time constant τ characteristic of the process, so that the deformation is proportional to ln(t/τ). Two distinct mechanisms...
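With the deformation written as d(t) = a·ln(t/τ), two measurements suffice to recover both parameters; a hypothetical two-point fit (illustrative, not from the paper):

```python
import math

def fit_log_creep(t1, d1, t2, d2):
    """Two-point fit for d(t) = a * ln(t / tau): the amplitude a follows
    from the two samples, then tau from inverting the first one."""
    a = (d2 - d1) / math.log(t2 / t1)
    tau = t1 * math.exp(-d1 / a)
    return a, tau

# synthetic check: data generated with a = 2.0, tau = 0.5
a, tau = fit_log_creep(1.0, 2.0 * math.log(1.0 / 0.5),
                       10.0, 2.0 * math.log(10.0 / 0.5))
```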
Large Dielectric Constant Enhancement in MXene Percolative Polymer Composites
Tu, Shao Bo
2018-04-06
near the percolation limit of about 15.0 wt % MXene loading, which surpasses all previously reported composites made of carbon-based fillers in the same polymer. With up to 10 wt % MXene loading, the dielectric loss of the MXene/P(VDF-TrFE-CFE) composite indicates only an approximately 5-fold increase (from 0.06 to 0.35), while the dielectric constant increased by 25 times over the same composition range. Furthermore, the ratio of permittivity to loss factor of the MXene-polymer composite is superior to that of all previously reported fillers in this same polymer. The dielectric constant enhancement effect is demonstrated to exist in other polymers as well when loaded with MXene. We show that the dielectric constant enhancement is largely due to the charge accumulation caused by the formation of microscopic dipoles at the surfaces between the MXene sheets and the polymer matrix under an external applied electric field.
Ventricular fibrillation time constant for swine
International Nuclear Information System (INIS)
Wu, Jiun-Yan; Sun, Hongyu; Nimunkar, Amit J; Webster, John G; O'Rourke, Ann; Huebner, Shane; Will, James A
2008-01-01
The strength–duration curve for cardiac excitation can be modeled by a parallel resistor–capacitor circuit that has a time constant. Experiments on six pigs were performed by delivering current from the X26 Taser dart at a distance from the heart to cause ventricular fibrillation (VF). The X26 Taser is an electromuscular incapacitation device (EMD), which generates about 50 kV and delivers a pulse train of about 15–19 pulses s⁻¹ with a pulse duration of about 150 µs and peak current about 2 A. Similarly a continuous 60 Hz alternating current of the amplitude required to cause VF was delivered from the same distance. The average current and duration of the current pulse were estimated in both sets of experiments. The strength–duration equation was solved to yield an average time constant of 2.87 ms ± 1.90 (SD). Results obtained may help in the development of safety standards for future electromuscular incapacitation devices (EMDs) without requiring additional animal tests
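A sketch of how a time constant can be extracted from a parallel RC strength-duration model: writing the threshold current as I(d) = I_rh/(1 − e^(−d/τ)) (the textbook Lapicque form, assumed here rather than taken from the paper), two (current, duration) measurements determine τ after the rheobase I_rh is eliminated.

```python
import math

def tau_from_two_points(i1, d1, i2, d2, lo=1e-5, hi=0.1):
    """Find tau solving i1*(1 - exp(-d1/tau)) = i2*(1 - exp(-d2/tau)),
    which eliminates the rheobase from I(d) = I_rh / (1 - exp(-d/tau)).
    Bisection on the bracketed root; lo/hi in seconds."""
    def f(tau):
        return i1 * (1 - math.exp(-d1 / tau)) - i2 * (1 - math.exp(-d2 / tau))
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# synthetic check: thresholds generated with tau = 2.87 ms, rheobase 1 A
tau_true = 2.87e-3
i1 = 1.0 / (1 - math.exp(-150e-6 / tau_true))   # ~150 us Taser-like pulse
i2 = 1.0 / (1 - math.exp(-8.33e-3 / tau_true))  # ~60 Hz half-cycle
tau = tau_from_two_points(i1, 150e-6, i2, 8.33e-3)
```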
Large numbers hypothesis. IV - The cosmological constant and quantum physics
Adams, P. J.
1983-01-01
In standard physics quantum field theory is based on a flat vacuum space-time. This quantum field theory predicts a nonzero cosmological constant. Hence the gravitational field equations do not admit a flat vacuum space-time. This dilemma is resolved using the units covariant gravitational field equations. This paper shows that the field equations admit a flat vacuum space-time with nonzero cosmological constant if and only if the canonical LNH is valid. This allows an interpretation of the LNH phenomena in terms of a time-dependent vacuum state. If this is correct then the cosmological constant must be positive.
A modified large number theory with constant G
Recami, Erasmo
1983-03-01
The inspiring “numerology” uncovered by Dirac, Eddington, Weyl, et al. can be explained and derived when it is slightly modified so as to connect the “gravitational world” (cosmos) with the “strong world” (hadron), rather than with the electromagnetic one. The aim of this note is to show the following. In the present approach to the “Large Number Theory,” cosmos and hadrons are considered to be (finite) similar systems, so that the ratio R̄/r̄ of the cosmos typical length R̄ to the hadron typical length r̄ is constant in time (for instance, if both cosmos and hadrons undergo an expansion/contraction cycle—according to the “cyclical big-bang” hypothesis—then R̄ and r̄ can be chosen to be the maximum radii, or the average radii). As a consequence, the gravitational constant G turns out to be independent of time. The present note is based on work done in collaboration with P. Caldirola, G. D. Maccarrone, and M. Pavšič.
Union-Find with Constant Time Deletions
DEFF Research Database (Denmark)
Alstrup, Stephen; Thorup, Mikkel; Gørtz, Inge Li
2014-01-01
operations performed, and α_{M/N}(n) is a functional inverse of Ackermann's function. They left open the question whether delete operations can be implemented more efficiently than find operations, for example, in o(log n) worst-case time. We resolve this open problem by presenting a relatively simple...
Supersymmetric large extra dimensions and the cosmological constant: an update
International Nuclear Information System (INIS)
Burgess, C.P.
2004-01-01
This article critically reviews the proposal for addressing the cosmological constant problem within the framework of supersymmetric large extra dimensions (SLED), as recently proposed in hep-th/0304256. After a brief restatement of the cosmological constant problem, a short summary of the proposed mechanism is given. The emphasis is on the perspective of the low-energy effective theory in order to see how it addresses the problem of why low-energy particles like the electron do not contribute too large a vacuum energy. This is followed by a discussion of the main objections, which are grouped into the following five topics: (1) Weinberg's No-Go Theorem. (2) Are hidden tunings of the theory required, and are these stable under renormalization? (3) Why should the mechanism apply only now and not rule out possible earlier epochs of inflationary dynamics? (4) How big are quantum effects, and which are the most dangerous? and (5) Even if successful, can the mechanism be consistent with cosmological or current observational constraints? It is argued that there are plausible reasons why the mechanism can thread the potential objections, but that a definitive proof that it does depends on addressing well-defined technical points. These points include identifying what fixes the size of the extra dimensions, checking how topological obstructions renormalize and performing specific calculations of quantum corrections. More detailed studies of these issues, which are well within reach of our present understanding of extra-dimensional theories, are currently underway. As such, the jury remains out concerning the proposal, although the prospects for acquittal still seem good. (An abridged version of this article appears in the proceedings of SUSY 2003.)
Using Constant Time Delay to Teach Braille Word Recognition
Hooper, Jonathan; Ivy, Sarah; Hatton, Deborah
2014-01-01
Introduction: Constant time delay has been identified as an evidence-based practice to teach print sight words and picture recognition (Browder, Ahlbrim-Delzell, Spooner, Mims, & Baker, 2009). For the study presented here, we tested the effectiveness of constant time delay to teach new braille words. Methods: A single-subject multiple baseline…
Constant scalar curvature hypersurfaces in extended Schwarzschild space-time
International Nuclear Information System (INIS)
Pareja, M. J.; Frauendiener, J.
2006-01-01
We present a class of spherically symmetric hypersurfaces in the Kruskal extension of the Schwarzschild space-time. The hypersurfaces have constant negative scalar curvature, so they are hyperboloidal in the regions of space-time which are asymptotically flat
Fundamental Constants in Physics and their Time Dependence
CERN. Geneva
2008-01-01
In the Standard Model of Particle Physics we are dealing with 28 fundamental constants. In experiments these constants can be measured, but theoretically they are not understood. I will discuss these constants, which are mostly mass parameters. Astrophysical measurements indicate that the fine-structure constant is not a true constant, but depends on time. Grand unification then implies also a time variation of the QCD scale. Thus the masses of the atomic nuclei and the magnetic moments of the nuclei will depend on time. I proposed an experiment, which is currently being done by Prof. Haensch and his group in Munich. The first results indicate a time dependence of the QCD scale. I will discuss the theoretical implications.
Time variable cosmological constants from the age of universe
International Nuclear Information System (INIS)
Xu Lixin; Lu Jianbo; Li Wenbo
2010-01-01
In this Letter, a time variable cosmological constant, dubbed the age cosmological constant, is investigated, motivated by the fact that any cosmological length scale and time scale can introduce a cosmological constant or vacuum energy density into Einstein's theory. The age cosmological constant takes the form ρ_Λ = 3c²M_P²/t_Λ², where t_Λ is the age or conformal age of our universe. The effective equation of state (EoS) of the age cosmological constant is w_Λ^eff = -1 + (2/3)√(Ω_Λ)/c and w_Λ^eff = -1 + (2/3)(√(Ω_Λ)/c)(1+z) when the age and conformal age of the universe are taken as the cosmological time scales, respectively. The EoS are the same as in the so-called agegraphic dark energy models. However, the evolution histories are different from the agegraphic ones because of their different evolution equations.
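Written out in display form, the quantities quoted in this abstract are:

```latex
\rho_{\Lambda} = \frac{3\,c^{2} M_{P}^{2}}{t_{\Lambda}^{2}}, \qquad
w_{\Lambda}^{\mathrm{eff}} = -1 + \frac{2}{3}\,\frac{\sqrt{\Omega_{\Lambda}}}{c}
\quad (t_{\Lambda} = \text{age}), \qquad
w_{\Lambda}^{\mathrm{eff}} = -1 + \frac{2}{3}\,\frac{\sqrt{\Omega_{\Lambda}}}{c}\,(1+z)
\quad (t_{\Lambda} = \text{conformal age}).
```

The (1+z) factor in the conformal-age case arises because the conformal time grows as 1/a relative to cosmic time, matching the new-agegraphic dark energy form.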
Long Pulse Integrator of Variable Integral Time Constant
International Nuclear Information System (INIS)
Wang Yong; Ji Zhenshan; Du Xiaoying; Wu Yichun; Li Shi; Luo Jiarong
2010-01-01
A new kind of long pulse integrator was designed, based on the method of a variable integral time constant and the deduction of integral drift by its slope. The integral time constant can be changed by choosing different integral resistors, in order to improve the signal-to-noise ratio and avoid output saturation; the slope of the integral drift over a certain period of time can be calculated by digital signal processing, and used to deduct the drift of the original integral signal in real time to reduce the integral drift. Tests show that this kind of long pulse integrator is effective at reducing integral drift and also eliminates the effects of changing the integral time constant. In experiments, the integral time constant can be changed by remote control and manual adjustment of the integral drift is avoided, which improves experimental efficiency greatly; the integrator can be used for electromagnetic measurement in Tokamak experiments. (authors)
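The drift-deduction step can be sketched in a few lines (a hypothetical offline version; the instrument does this in real time in its DSP): estimate the drift slope from a window where the true integral should stay flat, then subtract the resulting ramp.

```python
def integrate_with_drift_correction(signal, dt, quiet_samples=100):
    """Integrate a sampled signal (rectangle rule), then subtract a
    linear drift whose slope is estimated by least squares over an
    initial 'quiet' window where the true integral should be flat."""
    integral, raw = 0.0, []
    for v in signal:
        integral += v * dt
        raw.append(integral)
    n = quiet_samples
    ts = [i * dt for i in range(n)]
    t_mean = sum(ts) / n
    y_mean = sum(raw[:n]) / n
    slope = sum((t - t_mean) * (y - y_mean) for t, y in zip(ts, raw[:n])) \
            / sum((t - t_mean) ** 2 for t in ts)
    return [y - slope * i * dt for i, y in enumerate(raw)]

# a constant spurious offset integrates to a ramp; the correction removes it
corrected = integrate_with_drift_correction([0.01] * 200, dt=1.0)
```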
A Digitally Programmable Differential Integrator with Enlarged Time Constant
Directory of Open Access Journals (Sweden)
S. K. Debroy
1994-12-01
Full Text Available A new Operational Amplifier (OA-RC) integrator network is described. The novelties of the design are the use of a single grounded capacitor, ideal integration function realization with dual-input capability, and design flexibility for an extremely large time constant involving an enlargement factor (K) using a product of resistor ratios. The aspect of digital control of K through a programmable resistor array (PRA) controlled by a microprocessor has also been implemented. The effect of the OA poles has been analyzed, which indicates degradation of the integrator Q at higher frequencies. An appropriate Q-compensation design scheme exhibiting 1:|A|² order of Q-improvement has been proposed with supporting experimental observations.
On time variation of fundamental constants in superstring theories
International Nuclear Information System (INIS)
Maeda, K.I.
1988-01-01
Assuming the action from string theory and taking into account the dynamical freedom of a dilaton and its coupling to the matter fluid, the authors show that fundamental 'constants' in string theories are independent of the 'radius' of the internal space. Since the scalar related to the 'constants' is coupled to the 4-dimensional gravity and the matter fluid in the same way as in the Jordan-Brans-Dicke theory with ω = -1, it must be massive and can acquire a mass easily through some symmetry-breaking mechanism (e.g. the SUSY breaking due to a gluino condensation). Consequently, the time variation of fundamental constants is too small to be observed
Simple Model with Time-Varying Fine-Structure ``Constant''
Berman, M. S.
2009-10-01
Extending the original version written in collaboration with L.A. Trevisan, we study the generalisation of Dirac's LNH, so that time variation of the fine-structure constant, due to varying electrical and magnetic permittivities, is included along with other variations (cosmological and gravitational ``constants''), etc. We consider the present Universe, and also an inflationary scenario. Rotation of the Universe is a given possibility in this model.
Numerical counting ratemeter with variable time constant and integrated circuits
International Nuclear Information System (INIS)
Kaiser, J.; Fuan, J.
1967-01-01
We present here the prototype of a numerical counting ratemeter which is a special version of a variable time-constant frequency meter (1). The originality of this work lies in the fact that the change in the time constant is carried out automatically. Since the criterion for this change is the accuracy of the announced result, the integration time is varied as a function of the frequency. For the prototype described in this report, the time constant varies from 1 s to 1 ms for frequencies in the range 10 Hz to 10 MHz. This prototype is built entirely of MECL-type integrated circuits from Motorola and is thus contained in two relatively small boxes. (authors) [fr]
Automated real time constant-specificity surveillance for disease outbreaks
Directory of Open Access Journals (Sweden)
Brownstein John S
2007-06-01
Full Text Available Abstract Background For real time surveillance, detection of abnormal disease patterns is based on a difference between patterns observed, and those predicted by models of historical data. The usefulness of outbreak detection strategies depends on their specificity; the false alarm rate affects the interpretation of alarms. Results We evaluate the specificity of five traditional models: autoregressive, Serfling, trimmed seasonal, wavelet-based, and generalized linear. We apply each to 12 years of emergency department visits for respiratory infection syndromes at a pediatric hospital, finding that the specificity of the five models was almost always a non-constant function of the day of the week, month, and year of the study. Conclusion Modeling the variance of visit patterns enables real-time detection with known, constant specificity at all times. With constant specificity, public health practitioners can better interpret the alarms and better evaluate the cost-effectiveness of surveillance systems.
Fuzzy logic estimator of rotor time constant in induction motors
Energy Technology Data Exchange (ETDEWEB)
Alminoja, J. [Tampere University of Technology (Finland). Control Engineering Laboratory; Koivo, H. [Helsinki University of Technology, Otaniemi (Finland). Control Engineering Laboratory
1997-12-31
Vector control of AC machines is a well-known and widely used technique in induction machine control. It offers an exact method for speed control of induction motors, but it is also sensitive to changes in machine parameters. For example, the rotor time constant depends strongly on temperature. In this paper a fuzzy logic estimator is developed with which the rotor time constant can be estimated when the machine has a load. It is simpler than the estimators proposed in the literature. The fuzzy estimator is tested by simulation when step-wise abrupt changes and slow drifting occur. (orig.) 7 refs.
Generating k-independent variables in constant time
DEFF Research Database (Denmark)
Christiani, Tobias Lybecker; Pagh, Rasmus
2014-01-01
The generation of pseudorandom elements over finite fields is fundamental to the time, space and randomness complexity of randomized algorithms and data structures. We consider the problem of generating k-independent random values over a finite field F in a word RAM model equipped with constant...
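For contrast with the constant-time generation studied here, the classical k-independent construction evaluates a random degree-(k−1) polynomial over a prime field, costing O(k) per value (a baseline sketch, not the paper's method):

```python
import random

def make_k_independent_hash(k, p=2**61 - 1, seed=0):
    """Classical k-independent family: h(x) = (a_{k-1} x^{k-1} + ... +
    a_0) mod p for uniformly random coefficients. Horner evaluation
    costs O(k) per value -- the baseline the constant-time generation
    of the paper improves on."""
    rng = random.Random(seed)
    coeffs = [rng.randrange(p) for _ in range(k)]
    def h(x):
        acc = 0
        for c in coeffs:
            acc = (acc * x + c) % p
        return acc
    return h

h = make_k_independent_hash(4)   # 4-independent values over F_p
v = h(123)
```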
Automated real time constant-specificity surveillance for disease outbreaks.
Wieland, Shannon C; Brownstein, John S; Berger, Bonnie; Mandl, Kenneth D
2007-06-13
For real time surveillance, detection of abnormal disease patterns is based on a difference between patterns observed, and those predicted by models of historical data. The usefulness of outbreak detection strategies depends on their specificity; the false alarm rate affects the interpretation of alarms. We evaluate the specificity of five traditional models: autoregressive, Serfling, trimmed seasonal, wavelet-based, and generalized linear. We apply each to 12 years of emergency department visits for respiratory infection syndromes at a pediatric hospital, finding that the specificity of the five models was almost always a non-constant function of the day of the week, month, and year of the study. We then developed an expectation-variance model accounting for not only the expected number of visits, but also the variance of the number of visits. The expectation-variance model achieves constant specificity on all three time scales, as well as earlier detection and improved sensitivity compared to traditional methods in most circumstances. Modeling the variance of visit patterns enables real-time detection with known, constant specificity at all times. With constant specificity, public health practitioners can better interpret the alarms and better evaluate the cost-effectiveness of surveillance systems.
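A minimal sketch of the thresholding idea (assumed Gaussian approximation; the paper's expectation-variance model is fitted to historical data): when both a per-stratum mean and variance are modeled, the alarm cut-off sits at a fixed quantile, so the false-alarm rate stays constant across weekdays, months and years.

```python
from statistics import NormalDist

def alarm_threshold(expected, variance, specificity=0.99):
    """Alarm iff observed count > mu + z * sigma, where z is the
    specificity quantile of the standard normal. Using the modeled
    variance (not just the mean) is what keeps the false-alarm rate
    constant across strata."""
    z = NormalDist().inv_cdf(specificity)
    return expected + z * variance ** 0.5

# e.g. a stratum expecting 100 visits with variance 25 (sigma = 5)
t = alarm_threshold(expected=100.0, variance=25.0, specificity=0.99)
```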
A Parallel Priority Queue with Constant Time Operations
DEFF Research Database (Denmark)
Brodal, Gerth Stølting; Träff, Jesper Larsson; Zaroliagis, Christos D.
1998-01-01
We present a parallel priority queue that supports the following operations in constant time: parallel insertion of a sequence of elements ordered according to key, parallel decrease key for a sequence of elements ordered according to key, deletion of the minimum key element, and deletion of an arbitrary... application is a parallel implementation of Dijkstra's algorithm for the single-source shortest path problem, which runs in O(n) time and O(m log n) work on a CREW PRAM on graphs with n vertices and m edges. This is a logarithmic factor improvement in the running time compared with previous approaches.
International Nuclear Information System (INIS)
Kostamovaara, J.; Myllylae, R.
1985-01-01
The construction and the performance of a time-to-amplitude converter equipped with constant fraction discriminators is described. The TAC consists of digital and analog parts which are constructed on two printed circuit boards, both of which are located in a single-width NIM module. The dead time of the TAC for a start pulse which is not followed by a stop pulse within the time range of the device (∼100 ns) is only ∼100 ns, which enables one to avoid counting rate saturation even with a high random input signal rate. The differential and integral nonlinearities of the TAC are better than ±1.5% and 0.05%, respectively. The resolution for input timing pulses of constant shape is 20 ps (fwhm), and less than 10 ps (fwhm) with a modification in the digital part. The walk error of the constant fraction timing discriminators is presented and various parameters affecting it are discussed. The effect of the various disturbances in linearity caused by the fast ECL logic and their minimization are also discussed. The time-to-amplitude converter has been used in positron lifetime studies and for laser range finding. (orig.)
Newtonian cosmology with a time-varying constant of gravitation
International Nuclear Information System (INIS)
McVittie, G.C.
1978-01-01
Newtonian cosmology is based on the Eulerian equations of fluid mechanics combined with Poisson's equation modified by the introduction of a time-varying G. Spherically symmetric model universes are worked out with instantaneously uniform densities. They are indeterminate unless instantaneous uniformity of the pressure is imposed. When G varies as an inverse power of the time, the models can in some cases be shown to depend on the solution of a second-order differential equation which also occurs in the Friedmann models of general relativity. In Section 3, a method for 'passing through' a singularity of this equation is proposed which entails making four arbitrary mathematical assumptions. When G varies as (time) -1 , models with initially cycloidal motion are possible, each cycle becoming longer as time progresses. Finally, gravitation becomes so weak that the model expands to infinity. Kinetic and potential energies for the whole model are derived from the basic equations; their sum is not constant. (author)
Isothermal titration calorimetry in nanoliter droplets with subsecond time constants.
Lubbers, Brad; Baudenbacher, Franz
2011-10-15
We reduced the reaction volume in microfabricated suspended-membrane titration calorimeters to nanoliter droplets and improved the sensitivities to below a nanowatt with time constants of around 100 ms. The device performance was characterized using exothermic acid-base neutralizations and a detailed numerical model. The finite element based numerical model allowed us to determine the sensitivities within 1% and the temporal dynamics of the temperature rise in neutralization reactions as a function of droplet size. The model was used to determine the optimum calorimeter design (membrane size and thickness, junction area, and thermopile thickness) and sensitivities for sample volumes of 1 nL for silicon nitride and polymer membranes. We obtained a maximum sensitivity of 153 pW/(Hz)(1/2) for a 1 μm SiN membrane and 79 pW/(Hz)(1/2) for a 1 μm polymer membrane. The time constant of the calorimeter system was determined experimentally using a pulsed laser to increase the temperature of nanoliter sample volumes. For a 2.5 nanoliter sample volume, we experimentally determined a noise equivalent power of 500 pW/(Hz)(1/2) and a 1/e time constant of 110 ms for a modified commercially available infrared sensor with a thin-film thermopile. Furthermore, we demonstrated detection of 1.4 nJ reaction energies from injection of 25 pL of 1 mM HCl into a 2.5 nL droplet of 1 mM NaOH. © 2011 American Chemical Society
Certificateless Public Auditing Protocol with Constant Verification Time
Directory of Open Access Journals (Sweden)
Dongmin Kim
2017-01-01
Full Text Available To provide the integrity of outsourced data in the cloud storage services, many public auditing schemes which allow a user to check the integrity of the outsourced data have been proposed. Since most of the schemes are constructed on Public Key Infrastructure (PKI, they suffer from several concerns like management of certificates. To resolve the problems, certificateless public auditing schemes also have been studied in recent years. In this paper, we propose a certificateless public auditing scheme which has the constant-time verification algorithm. Therefore, our scheme is more efficient than previous certificateless public auditing schemes. To prove the security of our certificateless public auditing scheme, we first define three formal security models and prove the security of our scheme under the three security models.
The Hubble constant estimation using 18 gravitational lensing time delays
Jaelani, Anton T.; Premadi, Premana W.
2014-03-01
The gravitational lens time delay method has been used to estimate the rate of cosmological expansion, called the Hubble constant, H0, independently of the standard candle method. This gravitational lensing method requires a good knowledge of the lens mass distribution, reconstructed using the lens image properties. The observed positions of the images, and the redshifts of the lens and the images, serve as strong constraints on the lens equations, which are then solved as a set of simultaneous linear equations. Here we made use of a non-parametric technique to reconstruct the lens mass distribution, implemented in a linear equation solver named PixeLens. Input for the calculation is chosen based on known parameters obtained from analysis of the lens case observations, including time delays, position angles of the images and the lens, and their redshifts. In this project, 18 fairly well studied lens cases are further grouped according to a number of common properties to examine how each property affects the character of the data, and therefore the calculation of H0. The considered lens case properties are lens morphology, number of images, completeness of time delays, and symmetry of the lens mass distribution. Analysis of the simulations shows that a paucity of constraints on the mass distribution of a lens yields a wide range of H0 values, which reflects the uniqueness of each lens system. Nonetheless, the gravitational lens method still yields H0 within an acceptable range of values when compared with those determined by many other methods. Grouping the cases in the above manner allowed us to assess the robustness of PixeLens and thereby use it selectively. In addition, we use glafic, a parametric mass reconstruction solver, to refine the mass distribution of one lens case as a comparison.
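The inverse scaling of time delays with H0 underlies the method: a lens model evaluated at a fiducial H0 predicts a delay, and the observed delay rescales it (a simplified scaling argument only; PixeLens actually marginalizes over an ensemble of pixelated mass maps):

```python
def h0_from_time_delay(dt_observed_days, dt_model_days, h0_fiducial=70.0):
    """Time delays scale as 1/H0 for a fixed lens model: a model
    computed at h0_fiducial predicting dt_model implies
    H0 = h0_fiducial * dt_model / dt_observed. Units: km/s/Mpc."""
    return h0_fiducial * dt_model_days / dt_observed_days

# a delay observed 17% longer than modeled implies a lower H0
h0 = h0_from_time_delay(dt_observed_days=35.0, dt_model_days=30.0)
```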
Large scale geometry and evolution of a universe with radiation pressure and cosmological constant
Coquereaux, Robert; Coquereaux, Robert; Grossmann, Alex
2000-01-01
In view of new experimental results that strongly suggest a non-zero cosmological constant, it becomes interesting to revisit the Friedmann-Lemaitre model of evolution of a universe with cosmological constant and radiation pressure. In this paper, we discuss the explicit solutions for that model, and perform numerical explorations for reasonable values of cosmological parameters. We also analyse the behaviour of redshifts in such models and the description of ``very large scale geometrical features'' when analysed by distant observers.
Flow-through electroporation based on constant voltage for large-volume transfection of cells.
Geng, Tao; Zhan, Yihong; Wang, Hsiang-Yu; Witting, Scott R; Cornetta, Kenneth G; Lu, Chang
2010-05-21
Genetic modification of cells is a critical step in many cell therapy and gene therapy protocols. In these applications, cell samples of large volume (10^8-10^9 cells) are often processed for transfection. This poses new challenges for current transfection methods and practices. Here we present a novel flow-through electroporation method for delivery of genes into cells at high flow rates (up to approximately 20 mL/min) based on disposable microfluidic chips, a syringe pump, and a low-cost direct current (DC) power supply that provides a constant voltage. By eliminating the pulse generators used in conventional electroporation, we dramatically lowered the cost of the apparatus and improved the stability and consistency of the electroporation field for long-term operation. We tested the delivery of pEGFP-C1 plasmids encoding enhanced green fluorescent protein into Chinese hamster ovary (CHO-K1) cells in devices of various dimensions and geometries. Cells were mixed with plasmids and then flowed through a fluidic channel continuously while a constant voltage was established across the device. Together with the applied voltage, the geometry and dimensions of the fluidic channel determined the electrical parameters of the electroporation. With the optimal design, approximately 75% of the viable CHO cells were transfected after the procedure. We also generalize the guidelines for scaling up these flow-through electroporation devices. We envision that this technique will serve as a generic and low-cost tool for a variety of clinical applications requiring large volumes of transfected cells. Copyright 2010 Elsevier B.V. All rights reserved.
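The operating point of such a device follows from its geometry and flow rate alone; a back-of-envelope sketch (function names and all numbers are illustrative, not the authors' device dimensions):

```python
def field_strength_v_per_cm(voltage_v, channel_length_cm):
    """Nominal DC field across a uniform channel: E = V / L."""
    return voltage_v / channel_length_cm

def residence_time_s(channel_volume_ul, flow_rate_ml_per_min):
    """Time a cell spends in the field, under a plug-flow assumption."""
    flow_ul_per_s = flow_rate_ml_per_min * 1000.0 / 60.0
    return channel_volume_ul / flow_ul_per_s
```

For example, 100 V across a 0.5 cm channel gives a 200 V/cm field, and a 10 µL channel at 20 mL/min exposes each cell for about 30 ms.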
Time variation of the fine structure constant driven by quintessence
International Nuclear Information System (INIS)
Anchordoqui, Luis; Goldberg, Haim
2003-01-01
There are indications from the study of quasar absorption spectra that the fine structure constant α may have been measurably smaller for redshifts z>2. Analyses of other data (the ¹⁴⁹Sm fission rate for the Oklo natural reactor, variation of the ¹⁸⁷Re β-decay rate in meteorite studies, atomic clock measurements) which probe variations of α in the more recent past imply much smaller deviations from its present value. In this work we tie the variation of α to the evolution of the quintessence field proposed by Albrecht and Skordis, and show that agreement with all these data, as well as consistency with Wilkinson Microwave Anisotropy Probe observations, can be achieved for a range of parameters. Some definite predictions follow for upcoming space missions searching for violations of the equivalence principle.
DEFF Research Database (Denmark)
Berg, Rune W.; Ditlevsen, Susanne
2013-01-01
When recording the membrane potential, V, of a neuron it is desirable to be able to extract the synaptic input. Critically, the synaptic input is stochastic and non-reproducible, so one is therefore often restricted to single-trial data. Here, we introduce means of estimating the inhibition and excitation and their confidence limits from single-sweep trials. The estimates are based on the mean membrane potential, ⟨V⟩, and the membrane time constant, τ. The time constant provides the total conductance (G = capacitance/τ) and is extracted from the autocorrelation of V; the synaptic conductances can then be estimated from these quantities. The method gives best results if the synaptic input is large compared to other conductances, the intrinsic conductances have little or no time dependence or are comparably small, the ligand-gated kinetics is faster than the membrane time constant, and the majority of synaptic contacts are electrotonically close to the soma (recording site). Though our data is in current-clamp, the method also works in V-clamp recordings, with some minor adaptations. All custom-made procedures are provided in Matlab.
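A minimal sketch of the core estimation step on synthetic data (not the authors' Matlab procedures; the noisy potential is stood in for by an Ornstein-Uhlenbeck process, so the true τ is known):

```python
import math, random

def ou_trace(tau, dt, n, sigma=1.0, seed=1):
    """Ornstein-Uhlenbeck sample path: a stand-in for a noisy Vm trace."""
    rng = random.Random(seed)
    a = math.exp(-dt / tau)
    b = sigma * math.sqrt(1.0 - a * a)
    v, out = 0.0, []
    for _ in range(n):
        v = a * v + b * rng.gauss(0.0, 1.0)
        out.append(v)
    return out

def tau_from_autocorr(v, dt, lag=1):
    """Membrane time constant from the lag-k autocorrelation of V."""
    m = sum(v) / len(v)
    x = [s - m for s in v]
    var = sum(s * s for s in x) / len(x)
    ac = sum(x[i] * x[i + lag] for i in range(len(x) - lag)) / (len(x) - lag) / var
    return -lag * dt / math.log(ac)

def total_conductance(capacitance, tau):
    """G = C / tau."""
    return capacitance / tau
```

With τ in hand, the total conductance follows directly from the measured capacitance, exactly as in the abstract's G = capacitance/τ relation.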
Time resolution studies using digital constant fraction discrimination
International Nuclear Information System (INIS)
Fallu-Labruyere, A.; Tan, H.; Hennig, W.; Warburton, W.K.
2007-01-01
Digital Pulse Processing (DPP) modules are being increasingly considered to replace modular analog electronics in medium-scale nuclear physics experiments (100-1000s of channels). One major area remains, however, where it has not been convincingly demonstrated that DPP modules are competitive with their analog predecessors: time-of-arrival measurement. While analog discriminators and time-to-amplitude converters can readily achieve coincidence time resolutions in the 300-500 ps range with suitably fast scintillators and photomultiplier tubes (PMTs), this capability has not been widely demonstrated with DPPs. Some concern has been expressed, in fact, that such time resolutions are not attainable with the 10 ns sampling times that are presently commonly available. In this work, we present time-coincidence measurements taken using a commercially available DPP (the Pixie-4 from XIA LLC) directly coupled to pairs of fast PMTs mated with either LSO or LaBr₃ scintillator crystals and excited by ²²Na γ-ray emissions. Our results, 886 ps for LSO and 576 ps for LaBr₃, while not matching the best literature results using analog electronics, are already well below 1 ns and fully adequate for a wide variety of experiments. These results are shown not to be limited by the DPPs themselves, which achieved 57 ps time resolution using a pulser, but are degraded in part both by the somewhat limited number of photoelectrons we collected and by a sub-optimum choice of PMT. Analysis further suggests that increasing the sampling speed would further improve performance. We therefore conclude that DPP time-of-arrival resolution is already adequate to supplant analog processing in many applications and that further improvements could be achieved with only modest efforts.
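A digital constant-fraction discriminator can be sketched in a few lines: subtract a delayed copy of the pulse from an attenuated one, then interpolate the zero crossing for a sub-sample arrival time (the pulse shape and all parameters are illustrative, not the Pixie-4 implementation):

```python
import math

def pulse(n, t0, rise=2.0, decay=10.0):
    """Scintillator-like bi-exponential test pulse (hypothetical shape)."""
    out = []
    for i in range(n):
        t = i - t0
        out.append(math.exp(-t / decay) - math.exp(-t / rise) if t >= 0 else 0.0)
    return out

def cfd_time(samples, dt=1.0, fraction=0.3, delay=2):
    """Digital CFD: build f*x[n] - x[n-delay]; the zero crossing,
    refined by linear interpolation, gives a sub-sample arrival time."""
    s = [fraction * samples[i] - (samples[i - delay] if i >= delay else 0.0)
         for i in range(len(samples))]
    peak = max(samples)
    for i in range(1, len(s)):
        # gate on pulse amplitude so baseline samples cannot trigger
        if samples[i] > 0.1 * peak and s[i - 1] > 0.0 >= s[i]:
            return dt * (i - 1 + s[i - 1] / (s[i - 1] - s[i]))
    return None
```

Because the crossing is interpolated between samples, the timing estimate is not quantized to the sampling period, which is the point of digital CFD at coarse (e.g. 10 ns) sampling.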
AC loss time constant measurements on Nb₃Al and NbTi multifilamentary superconductors
International Nuclear Information System (INIS)
Painter, T.A.
1988-03-01
The AC loss time constant is a previously uninvestigated property of Nb₃Al, a superconductor which, with recent technological developments, shows some advantages over the more commonly used superconductors, NbTi and Nb₃Sn. Four Nb₃Al samples with varying twist pitches and one NbTi sample are inductively measured for their AC loss time constants. The measured time constants are compared to the theoretical time-constant limits imposed by the limits of the transverse resistivity found by Carr [5], to the theoretical time constants found using the Bean model, and to each other. The measured time constants of the Nb₃Al samples fall approximately halfway between the theoretical time-constant limits, and the measured time constant of the NbTi sample is close to the theoretical lower limit. The Bean model adequately accounts for the variance of the permeability of the Nb₃Al superconductor in a background magnetic field. Finally, the measured time-constant values of the Nb₃Al samples vary approximately with the square of their twist pitch. (author)
Slab-diffusion approximation from time-constant-like calculations
International Nuclear Information System (INIS)
Johnson, R.W.
1976-12-01
Two equations were derived which describe the quantity of fluid diffused from a slab as a function of time. One equation is applicable to the initial stage of the process; the other to the final stage. Accuracy is 0.2 percent at the one point where both approximations apply and where the accuracy of either approximation is the poorest. Characterizing other rate processes might be facilitated by the use of the concept of NOLOR (normal of the logarithm of the rate) and its time dependence.
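The two-regime picture can be sketched in the standard plane-sheet notation (a sketch assuming Crank's series solution for a sheet of half-thickness l; the sub-percent agreement can be checked near the crossover point Dt/l² ≈ 0.2):

```python
import math

def early_fraction(dt_over_l2):
    """Short-time approximation for fractional release from a plane
    sheet of half-thickness l: M_t/M_inf ~ 2*sqrt(D t / (pi l^2))."""
    return 2.0 * math.sqrt(dt_over_l2 / math.pi)

def late_fraction(dt_over_l2):
    """Long-time approximation: first term of the eigenfunction series."""
    return 1.0 - (8.0 / math.pi ** 2) * math.exp(-math.pi ** 2 * dt_over_l2 / 4.0)

def series_fraction(dt_over_l2, terms=50):
    """Many-term series solution, used here as the reference value."""
    s = 0.0
    for n in range(terms):
        k = 2 * n + 1
        s += math.exp(-k * k * math.pi ** 2 * dt_over_l2 / 4.0) / (k * k)
    return 1.0 - (8.0 / math.pi ** 2) * s
```

At Dt/l² = 0.2 the two approximations agree with each other, and with the full series, to a few parts in a thousand, which is consistent in spirit with the 0.2 percent figure quoted above.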
Constant pressure and temperature discrete-time Langevin molecular dynamics
Energy Technology Data Exchange (ETDEWEB)
Grønbech-Jensen, Niels [Department of Mechanical and Aerospace Engineering, University of California, Davis, California 95616 (United States); Department of Mathematics, University of California, Davis, California 95616 (United States); Farago, Oded [Department of Biomedical Engineering, Ben Gurion University of the Negev, Be' er Sheva 84105 (Israel); Ilse Katz Institute for Nanoscale Science and Technology, Ben Gurion University of the Negev, Be' er Sheva 84105 (Israel)
2014-11-21
We present a new and improved method for simultaneous control of temperature and pressure in molecular dynamics simulations with periodic boundary conditions. The thermostat-barostat equations are built on our previously developed stochastic thermostat, which has been shown to provide correct statistical configurational sampling for any time step that yields stable trajectories. Here, we extend the method and develop a set of discrete-time equations of motion for both particle dynamics and system volume in order to seek pressure control that is insensitive to the choice of the numerical time step. The resulting method is simple, practical, and efficient. The method is demonstrated through direct numerical simulations of two characteristic model systems—a one-dimensional particle chain for which exact statistical results can be obtained and used as benchmarks, and a three-dimensional system of Lennard-Jones interacting particles simulated in both solid and liquid phases. The results, which are compared against the method of Kolb and Dünweg [J. Chem. Phys. 111, 4453 (1999)], show that the new method behaves according to the objective, namely that acquired statistical averages and fluctuations of configurational measures are accurate and robust against the chosen time step applied to the simulation.
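The thermostat underlying this work can be sketched for a single harmonic degree of freedom (the G-JF discrete-time Langevin step only; the paper's barostat extension is omitted). For the oscillator, configurational sampling is exact, so ⟨x²⟩ should equal k_BT/k for any stable time step:

```python
import math, random

def gjf_harmonic(steps=200000, dt=0.2, m=1.0, k=1.0, gamma=1.0, kT=1.0, seed=2):
    """G-JF discrete-time Langevin integration of a 1-D harmonic
    oscillator; returns the configurational average <x^2>."""
    rng = random.Random(seed)
    denom = 1.0 + gamma * dt / (2.0 * m)
    b = 1.0 / denom
    a = (1.0 - gamma * dt / (2.0 * m)) / denom
    sig = math.sqrt(2.0 * gamma * kT * dt)   # noise amplitude per step
    x, v = 0.0, 0.0
    f = -k * x
    acc = 0.0
    for _ in range(steps):
        beta = sig * rng.gauss(0.0, 1.0)
        x = x + b * dt * v + b * dt * dt * f / (2.0 * m) + b * dt * beta / (2.0 * m)
        fnew = -k * x
        v = a * v + dt * (a * f + fnew) / (2.0 * m) + b * beta / m
        f = fnew
        acc += x * x
    return acc / steps   # should approach kT / k
```

The robustness against the choice of dt (here 0.2, a sizeable fraction of the oscillator period) is exactly the property the paper's pressure control is designed to inherit.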
On Throughput Maximization in Constant Travel-Time Robotic Cells
Milind Dawande; Chelliah Sriskandarajah; Suresh Sethi
2002-01-01
We consider the problem of scheduling operations in bufferless robotic cells that produce identical parts. The objective is to find a cyclic sequence of robot moves that minimizes the long-run average time to produce a part or, equivalently, maximizes the throughput rate. The robot can be moved in simple cycles that produce one unit or in more complicated cycles that produce multiple units. Because one-unit cycles are the easiest to understand, implement, and control, they are widely used i...
International Nuclear Information System (INIS)
Wu, S.M.; Hsu, M.C.; Chow, M.C.
1979-01-01
A new modeling technique is introduced for on-line sensor time constant identification, both for the resistance temperature detector (RTD) and for the pressure sensor using power plant operational data. The sensor's time constant is estimated from a real characteristic root of the fitted autoregressive moving average model. The RTD's time constant values were identified to be 8.4 s, with a standard deviation of 1.2 s. The pressure sensor time constant was identified to be 28.6 ms, with a standard deviation of 3.5 ms
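The mapping from a fitted autoregressive root to a time constant can be sketched on synthetic data (an AR(1) stand-in for the full autoregressive moving average fit used in the paper; numbers are illustrative):

```python
import math, random

def sensor_output(tau, dt, n, seed=3):
    """Discrete first-order (single time constant) sensor driven by noise."""
    rng = random.Random(seed)
    phi = math.exp(-dt / tau)
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, 1.0)
        out.append(x)
    return out

def time_constant_ar1(x, dt):
    """Least-squares AR(1) coefficient phi; the real root maps to the
    sensor time constant via tau = -dt / ln(phi)."""
    num = sum(x[i] * x[i - 1] for i in range(1, len(x)))
    den = sum(x[i - 1] ** 2 for i in range(1, len(x)))
    return -dt / math.log(num / den)
```

This is the essence of the technique: the real characteristic root of the fitted model carries the sensor's dynamic response, so no separate step-response test is needed.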
Static Pull Testing of a New Type of Large Deformation Cable with Constant Resistance
Directory of Open Access Journals (Sweden)
Zhigang Tao
2017-01-01
Full Text Available A new type of energy-absorbing cable, the Constant-Resistance Large Deformation (CRLD) cable, has recently been developed and tested in three different specifications. An effective cable should be able to absorb deformation energy from geodisaster loads and, additionally, must be able to yield with the sliding-mass movements and plastic deformation over large distances at high displacement rates. The new cable mainly consists of a constant-resistance casing tube and a frictional cone unit that transfers the load from the slope. When a static or dynamic load exceeds the constant resistance force (CR-F), a static friction force derived from the movement of the frictional cone unit in the casing tube, the cone unit moves in the casing tube along the axis and absorbs deformation energy accordingly. In order to assess the performance of the three specifications in situ, a series of field static pull tests have been performed. The results showed that the first type of CRLD cable can yield 2000 mm of displacement under an 850 kN static pull load, which, judging by both the displacement reached and the level of static pull load, is superior to the other two types.
International Nuclear Information System (INIS)
Prisecaru, Ilie; Panait; Adrian; Serban, Viorel; Ciocan, George; Androne, Marian; Florea, Ioana; State, Elena
2004-01-01
Full text: To avoid some drawbacks of the classical supports currently employed in networks of pipes, a new type of constant-load support was conceived, designed, built and experimentally tested; it largely attenuates shocks and vibrations in networks of pipes subjected to large thermal dilatation. These supports are particularly needed for solving the severe vibration problems in pipe networks of thermoelectric stations, nuclear power plants, and heavy water production plants, and they allow building networks of new types, more reliable and of lower cost. The new type of support was developed on the basis of a number of patents protected by OSIM. It has a simple structure, ensures secure functioning without blocking or other kinds of failures, and is resistant to a very large variety of stresses. The new constant-load support avoids the drawbacks of classical supports, i.e. its stress/deformation diagram is practically independent of stress level. The characteristic of the support is geometrically non-linear and presents a plateau with a small slope over a rather large deformation range, which results from a serially mounted structure of sandwiches whose deformation is controlled by a system of deforming central and peripheral pieces. The new constant-load supports, called SERB-PIPE, present a controlled elasticity and a high degree of damping, as the package of elastic blades (the sandwich structure) is made of two sub-packages with relative movements, which ensures the attenuation of the shocks and vibrations produced by the fluid flow within the pipes and/or by seismic motions. By contrast with classical supports, the new supports have a simple structure and a high reliability. Breakdown under stress, leading to severe changes in the stress distribution in pipe networks, which could generate overloads in pipes and over-loading in other supports, cannot occur. One can also mention that these supports can be built in a
International Nuclear Information System (INIS)
Landriau, M.; Shellard, E.P.S.
2004-01-01
In this paper, we present results for large-angle cosmic microwave background anisotropies generated from high-resolution simulations of cosmic string networks in a range of flat Friedmann-Robertson-Walker universes with a cosmological constant. Using an ensemble of all-sky maps, we compare with the Cosmic Background Explorer data to infer a normalization (or upper bound) on the string linear energy density μ. For a flat matter-dominated model (Ω_M = 1) we find Gμ/c² ≅ 0.7×10⁻⁶, which is lower than previous constraints, probably because of the more accurate inclusion of string small-scale structure. For a cosmological constant within an observationally acceptable range, we find a relatively weak dependence, with Gμ/c² less than 10% higher.
Lin, Chenxi; Povinelli, Michelle L
2009-10-26
In this paper, we use the transfer matrix method to calculate the optical absorptance of vertically-aligned silicon nanowire (SiNW) arrays. For fixed filling ratio, significant optical absorption enhancement occurs when the lattice constant is increased from 100 nm to 600 nm. The enhancement arises from an increase in field concentration within the nanowire as well as excitation of guided resonance modes. We quantify the absorption enhancement in terms of ultimate efficiency. Results show that an optimized SiNW array with lattice constant of 600 nm and wire diameter of 540 nm has a 72.4% higher ultimate efficiency than a Si thin film of equal thickness. The enhancement effect can be maintained over a large range of incidence angles.
Dependence of the time-constant of a fuel rod on different design and operational parameters
International Nuclear Information System (INIS)
Elenkov, D.; Lassmann, K.; Schubert, A.; Laar, J. van de
2001-01-01
The temperature response during a reactor shutdown has been measured for many years in the OECD-Halden Project. It has been shown that the complicated shutdown processes can be characterized by a time constant τ which depends on different fuel design and operational parameters, such as fuel geometry, gap size, fill gas pressure and composition, burnup and linear heat rate. In the paper the concept of a time constant is analyzed and the dependence of the time constant on various parameters is investigated analytically. Measured time constants for different designs and conditions are compared with those derived from calculations of the TRANSURANUS code. Employing standard models results in a systematic underprediction of the time constant, i.e. the heat transfer during shutdown is overestimated. (author)
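The single-time-constant picture of the shutdown response can be sketched as follows (synthetic data; the temperatures and τ are illustrative, not Halden measurements):

```python
import math

def shutdown_response(t, t0=1200.0, t_inf=300.0, tau=6.0):
    """Single-time-constant model of the fuel temperature after shutdown:
    T(t) = T_inf + (T_0 - T_inf) * exp(-t / tau)."""
    return t_inf + (t0 - t_inf) * math.exp(-t / tau)

def fit_time_constant(times, temps, t_inf):
    """Log-linear least squares on ln(T - T_inf) recovers tau."""
    ys = [math.log(T - t_inf) for T in temps]
    n = len(times)
    mx, my = sum(times) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(times, ys)) / \
            sum((x - mx) ** 2 for x in times)
    return -1.0 / slope
```

In practice the measured τ then carries the combined effect of gap size, fill gas, burnup and linear heat rate, which is what makes its parameter dependence worth tabulating.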
Delay of constant light-induced persistent vaginal estrus by 24-hour time cues in rats.
Weber, A L; Adler, N T
1979-04-20
The normal ovarian cycle of female rats is typically replaced by persistent estrus when these animals are housed under constant light. Evidence presented here shows that the maintenance of periodicity in the environment can at least delay (if not prevent) the photic induction of persistent vaginal estrus. Female rats in constant light were exposed to vaginal smearing at random times or at the same time every day. In another experiment, female rats were exposed to either constant bright light, constant dim light, or a 24-hour photic cycle of bright and dim light. The onset of persistent vaginal estrus was delayed in rats exposed to 24-hour time cues even though the light intensities were the same as or greater than those for the aperiodic control groups. The results suggest that the absence of 24-hour time cues in constant light contributes to the induction of persistent estrus.
Time of flight and range of the motion of a projectile in a constant gravitational field
Directory of Open Access Journals (Sweden)
P. A. Karkantzakos
2009-01-01
Full Text Available In this paper we study the classical problem of the motion of a projectile in a constant gravitational field under the influence of a retarding force proportional to the velocity. Specifically, we express the time of flight, the time of fall and the range of the motion as a function of the constant of resistance per unit mass of the projectile. We also prove that the time of fall is greater than the time of rise, with the exception of the case of zero constant of resistance, where we have equality. Finally we prove a formula from which we can compute the constant of resistance per unit mass of the projectile from the time of flight and range of the motion when the acceleration due to gravity and the initial velocity of the projectile are known.
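For linear drag with resistance k per unit mass, the vertical motion has a closed form; a sketch that solves for the time of flight numerically and checks the fall-longer-than-rise property (g and the initial data are illustrative):

```python
import math

def height(t, v0y, k, g=9.81):
    """Vertical position with linear drag, resistance k per unit mass:
    y(t) = (g/k^2 + v0y/k)(1 - e^{-kt}) - g t / k."""
    return (g / k ** 2 + v0y / k) * (-math.expm1(-k * t)) - g * t / k

def time_of_flight(v0y, k, g=9.81):
    """Solve height(T) = 0 for T > 0 by bisection; also return the
    rise time, t_rise = ln(1 + k v0y / g) / k, where vy = 0."""
    t_rise = math.log(1.0 + k * v0y / g) / k
    lo, hi = t_rise, t_rise
    while height(hi, v0y, k, g) > 0.0:
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if height(mid, v0y, k, g) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi), t_rise
```

The numerics reproduce the paper's qualitative claim: the descent takes longer than the ascent whenever k > 0, and the drag-free limit recovers T = 2 v0y / g.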
Shock tube measurements of the rate constants for seven large alkanes + OH
Badra, Jihad
2015-01-01
Reaction rate constants for seven large alkanes + hydroxyl (OH) radicals were measured behind reflected shock waves using OH laser absorption. The alkanes, n-hexane, 2-methyl-pentane, 3-methyl-pentane, 2,2-dimethyl-butane, 2,3-dimethyl-butane, 2-methyl-heptane, and 4-methyl-heptane, were selected to investigate the rates of site-specific H-abstraction by OH at secondary and tertiary carbons. Hydroxyl radicals were monitored using narrow-line-width ring-dye laser absorption of the R1(5) transition of the OH spectrum near 306.7 nm. The high sensitivity of the diagnostic enabled the use of low reactant concentrations and pseudo-first-order kinetics. Rate constants were measured at temperatures ranging from 880 K to 1440 K and pressures near 1.5 atm. High-temperature measurements of the rate constants for OH + n-hexane and OH + 2,2-dimethyl-butane are in agreement with earlier studies, and the rate constants of the five other alkanes with OH, we believe, are the first direct measurements at combustion temperatures. Using these measurements and the site-specific H-abstraction measurements of Sivaramakrishnan and Michael (2009) [1,2], general expressions for three secondary and two tertiary abstraction rates were determined as follows (the subscripts indicate the number of carbon atoms bonded to the next-nearest-neighbor carbon):

S20 = 1.58×10⁻¹¹ exp(−1550 K/T) cm³ molecule⁻¹ s⁻¹ (887-1327 K)
S30 = 2.37×10⁻¹¹ exp(−1850 K/T) cm³ molecule⁻¹ s⁻¹ (887-1327 K)
S21 = 4.5×10⁻¹² exp(−793.7 K/T) cm³ molecule⁻¹ s⁻¹ (833-1440 K)
T100 = 2.85×10⁻¹¹ exp(−1138.3 K/T) cm³ molecule⁻¹ s⁻¹ (878-1375 K)
T101 = 7.16×10⁻¹² exp(−993 K/T) cm³ molecule⁻¹ s⁻¹ (883-1362 K)

© 2014 The Combustion Institute.
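The fitted Arrhenius expressions can be evaluated directly; a small sketch (pre-exponential factors and activation temperatures transcribed from the abstract; units cm³ molecule⁻¹ s⁻¹):

```python
import math

def arrhenius(a, theta, temp_k):
    """k(T) = A * exp(-theta / T), with theta in kelvin (E_a / R)."""
    return a * math.exp(-theta / temp_k)

# Site-specific H-abstraction rate parameters from the abstract
RATES = {
    "S20": (1.58e-11, 1550.0),
    "S30": (2.37e-11, 1850.0),
    "S21": (4.5e-12, 793.7),
    "T100": (2.85e-11, 1138.3),
    "T101": (7.16e-12, 993.0),
}

# Evaluate every site-specific rate at 1000 K
k_1000 = {name: arrhenius(a, th, 1000.0) for name, (a, th) in RATES.items()}
```

Summing such site rates over the abstraction sites of a given isomer is how per-molecule OH rate constants are typically assembled from these group values.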
International Nuclear Information System (INIS)
Peng Huanwu
2005-01-01
Taking Dirac's large number hypothesis as true, we have shown [Commun. Theor. Phys. (Beijing, China) 42 (2004) 703] the inconsistency of applying Einstein's theory of general relativity with fixed gravitation constant G to cosmology, and a modified theory for varying G is found, which reduces to Einstein's theory outside the gravitating body for phenomena of short duration in small distances, thereby agreeing with all the crucial tests formerly supporting Einstein's theory. The modified theory, when applied to the usual homogeneous cosmological model, gives rise to a variable cosmological tensor term determined by the derivatives of G, in place of the cosmological constant term usually introduced ad hoc. Without any free parameter, the theoretical Hubble relation obtained from the modified theory seems not in contradiction with observations, as Dr. Wang's preliminary analysis of the recent data indicates [Commun. Theor. Phys. (Beijing, China) 42 (2004) 703]. As a complement to that work, we study in this paper the modification of electromagnetism due to Dirac's large number hypothesis in more detail, showing that the approximation of geometric optics still leads to null geodesics for the path of light and that the general relation between the luminosity distance and the proper geometric distance remains valid in our theory as in Einstein's theory, and we give the equations for the homogeneous cosmological model involving matter plus electromagnetic radiation. Finally, we consider the impact of the modification on quantum mechanics and statistical mechanics, and arrive at a systematic theory of evolving natural constants, including Planck's ħ as well as Boltzmann's k_B, by finding their cosmologically combined counterparts with factors of appropriate powers of G that may remain truly constant over cosmologically long times.
The time dependence of rate constants of e⁻(aq) reactions
International Nuclear Information System (INIS)
Burcl, R.; Byakov, V.M.; Grafutin, V.I.
1982-01-01
Published data on the time dependence of the rate constants k(e⁻(aq) + Ac) of e⁻(aq) reactions with an acceptor Ac are analyzed, using the results of rate constant k(Ps + Ac) measurements for positronium reactions. It is shown that neither the e⁻(aq) nor the Ps reaction rate constants depend on time in the observable range. The experimentally found concentration dependence of k(e⁻(aq) + Ac) is due to other factors connected with the electric charge of e⁻(aq), e.g. ionic strength, the tunnelling effect, etc. (author)
Time constants and feedback transfer functions of EBR-II subassembly types
International Nuclear Information System (INIS)
Grimm, K.N.; Meneghetti, D.
1986-01-01
Time constants, feedback reactivity transfer functions and power coefficients are calculated for stereotypical subassemblies in the EBR-II reactor. These quantities are calculated from nodal reactivities obtained from a reactor kinetic code analysis for a step change in power. Due to the multiplicity of eigenvalues, there are several time constants for each nodal position in a subassembly. Compared with these calculated values are analytically derived values for the initial node of a given channel
Confronting the relaxation mechanism for a large cosmological constant with observations
International Nuclear Information System (INIS)
Basilakos, Spyros; Bauer, Florian; Solà, Joan
2012-01-01
In order to deal with a large cosmological constant, a relaxation mechanism based on modified gravity has been proposed recently. By virtue of this mechanism the effect of the vacuum energy density of a given quantum field/string theory (no matter how big its initial value in the early universe) can be neutralized dynamically, i.e. without fine-tuning, and hence a Big Bang-like evolution of the cosmos becomes possible. Remarkably, a large class (F_n^m) of models of this kind, namely capable of dynamically adjusting the vacuum energy irrespective of its value and size, has been identified. In this paper, we carefully put them to the experimental test. By performing a joint likelihood analysis we confront these models with the most recent observational data on type Ia supernovae (SNIa), the Cosmic Microwave Background (CMB), the Baryonic Acoustic Oscillations (BAO) and the high-redshift data on the expansion rate, so as to determine which ones are the most favored by observations. We compare the optimal relaxation models F_n^m found by this method with the standard or concordance ΛCDM model, and find that some of these models may appear as almost indistinguishable from it. Interestingly enough, this shows that it is possible to construct viable solutions to the tough cosmological fine-tuning problem with models that display the same basic phenomenological features as the concordance model.
Running vacuum in the Universe and the time variation of the fundamental constants of Nature
Energy Technology Data Exchange (ETDEWEB)
Fritzsch, Harald [Nanyang Technological University, Institute for Advanced Study, Singapore (Singapore); Universitaet Muenchen, Physik-Department, Munich (Germany); Sola, Joan [Nanyang Technological University, Institute for Advanced Study, Singapore (Singapore); Universitat de Barcelona, Departament de Fisica Quantica i Astrofisica, Barcelona, Catalonia (Spain); Universitat de Barcelona (ICCUB), Institute of Cosmos Sciences, Barcelona, Catalonia (Spain); Nunes, Rafael C. [Universidade Federal de Juiz de Fora, Dept. de Fisica, Juiz de Fora, MG (Brazil)
2017-03-15
We compute the time variation of the fundamental constants (such as the ratio of the proton mass to the electron mass, the strong coupling constant, the fine-structure constant and Newton's constant) within the context of the so-called running vacuum models (RVMs) of the cosmic evolution. Recently, compelling evidence has been provided that these models are able to fit the main cosmological data (SNIa+BAO+H(z)+LSS+BBN+CMB) significantly better than the concordance ΛCDM model. Specifically, the vacuum parameters of the RVM (i.e. those responsible for the dynamics of the vacuum energy) prove to be nonzero at a confidence level ≳ 3σ. Here we use such remarkable status of the RVMs to make definite predictions on the cosmic time variation of the fundamental constants. It turns out that the predicted variations are close to the present observational limits. Furthermore, we find that the time evolution of the dark matter particle masses should be crucially involved in the total mass variation of our Universe. A positive measurement of this kind of effect could be interpreted as strong support for the ''micro-macro connection'' (viz. the dynamical feedback between the evolution of the cosmological parameters and the time variation of the fundamental constants of the microscopic world), previously proposed by two of us (HF and JS). (orig.)
A new variable interval schedule with constant hazard rate and finite time range.
Bugallo, Mehdi; Machado, Armando; Vasconcelos, Marco
2018-05-27
We propose a new variable interval (VI) schedule that achieves a constant probability of reinforcement in time while using a bounded range of intervals. By sampling each trial duration from a uniform distribution ranging from 0 to 2T seconds, and then applying a reinforcement rule that depends linearly on trial duration, the schedule alternates reinforced and unreinforced trials, each shorter than 2T seconds, while preserving a constant hazard function. © 2018 Society for the Experimental Analysis of Behavior.
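One plausible reading of the sampling rule, for illustration only (assuming the linear reinforcement rule is p(d) = d/2T; under that assumption each trial ends reinforced with overall probability 1/2, and reinforced durations average 4T/3):

```python
import random

def run_schedule(t_param, n_trials, seed=4):
    """Sample trial durations d ~ U(0, 2T); end each trial reinforced
    with probability d / (2T) (an assumed linear rule, not necessarily
    the authors' exact implementation)."""
    rng = random.Random(seed)
    reinforced = []
    for _ in range(n_trials):
        d = rng.uniform(0.0, 2.0 * t_param)
        if rng.random() < d / (2.0 * t_param):
            reinforced.append(d)
    return reinforced
```

Both predictions follow from the uniform density 1/(2T) on [0, 2T]: E[p(d)] = 1/2, and the reinforced durations have density proportional to d, with mean 4T/3.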
Constant resolution of time-dependent Hartree--Fock phase ambiguity
International Nuclear Information System (INIS)
Lichtner, P.C.; Griffin, J.J.; Schultheis, H.; Schultheis, R.; Volkov, A.B.
1978-01-01
The customary time-dependent Hartree--Fock problem is shown to be ambiguous up to an arbitrary function of time additive to H_HF, and, consequently, up to an arbitrary time-dependent phase for the solution, Φ(t). The ''constant ⟨H⟩'' phase is proposed as the best resolution of this ambiguity. It leads to the following attractive features: (a) the time-dependent Hartree--Fock (TDHF) Hamiltonian, H_HF, becomes a quantity whose expectation value is equal to the average energy and, hence, constant in time; (b) eigenstates described exactly by determinants have time-dependent Hartree--Fock solutions identical with the exact time-dependent solutions; (c) among all possible TDHF solutions this choice minimizes the norm of the quantity (H − iħ∂/∂t) operating on the ket Φ, and guarantees optimal time evolution over an infinitesimal period; (d) this choice corresponds both to the stationary value of the absolute difference between ⟨H⟩ and ⟨iħ∂/∂t⟩ and simultaneously to its absolute minimal value with respect to the choice of the time-dependent phase. The source of the ambiguity is discussed. It lies in the time-dependent generalization of the freedom to transform unitarily among the single-particle states of a determinant at the (physically irrelevant for stationary states) cost of altering only a factor of unit magnitude.
Use of thermal time constant concept in the analysis of reactivity induced accidents with feedback
International Nuclear Information System (INIS)
Narain, R.
1981-01-01
A simple heat transfer model based on the thermal time constant concept, which leads to a significant reduction in fuel temperature computing time and gives physical insight into the phenomena, is presented. The fuel temperatures can be used to estimate the reactivity feedback using the measured or calculated Doppler coefficients.
Queueing systems with constant service time and evaluation of M/D/1,k
DEFF Research Database (Denmark)
Iversen, Villy Bæk
1997-01-01
Systems with constant service times have the particular property that customers leave the servers in the same order in which they are accepted for service. Probabilities of integral waiting times can be expressed by the state probabilities, and non-integral waiting times can be expressed...
Technique for determination of the time constant for relay radioisotope instruments
International Nuclear Information System (INIS)
Gol'din, M.L.; Shestialtynov, V.K.
1981-01-01
A technique for calculating the time constant of a gamma relay used in radioisotope automation is suggested. It is shown that the time constant of a radioisotope relay device (RRD) mainly depends on the parameters of the ratemeter's integrating circuit. Considering the ratemeter as a real communication channel with a limited transmission band, an equation for the duration of the active front at the ratemeter output when a step voltage is applied to its input is obtained. From the complex transmission function of the ratemeter, the upper boundary cyclic transmission frequency is determined; substituting it into the equation for the active front duration gives the RRD time constant. For the example calculation of the ratemeter of the GR-6 gamma relay, satisfactory agreement of the calculated results with the certificate data is shown.
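The abstract's bandwidth-to-time-constant link can be illustrated with the standard first-order relations (a sketch under the assumption that the integrating circuit behaves as a single-pole low-pass filter; the paper's exact equations are not given here):

```python
import math

# Assumed first-order relations for an integrating ratemeter treated as a
# low-pass filter: time constant tau = 1/(2*pi*f_c) for cutoff frequency
# f_c, and 10-90% front duration for a step input t_r = tau*ln(9) ~ 2.2*tau.

def time_constant_from_cutoff(f_c_hz):
    return 1.0 / (2.0 * math.pi * f_c_hz)

def front_duration(tau_s):
    # Time for the output to rise from 10% to 90% of a step.
    return tau_s * math.log(9.0)

tau = time_constant_from_cutoff(1.0)   # 1 Hz bandwidth -> tau ~ 0.16 s
rise = front_duration(tau)             # ~ 0.35 s
```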
Methodology of measurement of thermal neutron time decay constant in Canberra 35+ MCA system
Energy Technology Data Exchange (ETDEWEB)
Drozdowicz, K.; Gabanska, B.; Igielski, A.; Krynicka, E.; Woznicka, U. [Institute of Nuclear Physics, Cracow (Poland)
1993-12-31
A method for measuring the thermal neutron time decay constant in small bounded media is presented. A 14 MeV pulsed neutron generator is the neutron source. The system for recording the die-away curve of thermal neutrons consists of a ³He detector and a multichannel time analyzer based on the Canberra 35+ analyzer with the MCS 7880 multiscaler module (microsecond range). Optimum parameters for the measuring system are considered. Experimental verification of the dead time of the instrumentation system is made and a count-loss correction is incorporated into the data treatment. Attention is paid to evaluating, with high accuracy, the fundamental-mode decay constant of the recorded die-away curve. A new procedure for determining the decay constant by multiple recording of the die-away curve is presented and results of test measurements are shown. (author). 11 refs, 12 figs, 4 tabs.
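The abstract does not state which dead-time model is used; as a hedged sketch, the standard non-paralyzable count-loss correction looks like this (the function name is mine):

```python
# Standard non-paralyzable dead-time correction (an assumption; the paper
# may use a different model): for measured rate m (counts/s) and dead time
# tau_d (s), the true rate is n = m / (1 - m * tau_d).

def dead_time_correct(measured_rate, tau_dead):
    loss_factor = 1.0 - measured_rate * tau_dead
    if loss_factor <= 0.0:
        raise ValueError("measured rate inconsistent with dead time")
    return measured_rate / loss_factor

# 1e5 counts/s with a 1 microsecond dead time -> ~11% upward correction.
true_rate = dead_time_correct(1.0e5, 1.0e-6)
```

Applying such a correction channel by channel before fitting matters here because count losses are largest in the early, high-rate part of the die-away curve and would otherwise bias the fitted decay constant.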
Helicopter TEM parameters analysis and system optimization based on time constant
Xiao, Pan; Wu, Xin; Shi, Zongyang; Li, Jutao; Liu, Lihua; Fang, Guangyou
2018-03-01
The helicopter transient electromagnetic (TEM) method is a common geophysical prospecting method, widely used in mineral detection, underground water exploration and environmental investigation. In order to develop an efficient helicopter TEM system, it is necessary to analyze and optimize the system parameters. In this paper, a simple and quantitative method is proposed to analyze the system parameters, such as waveform, power, base frequency, measured field and sampling time. A wire loop model is used to define a comprehensive 'time constant domain' that covers a range of time constants, analogous to a range of conductances, after which the characteristics of the system parameters in this domain are obtained. It is found that the distortion caused by the transmitting base frequency is less than 5% when the ratio of the transmitting period to the target time constant is greater than 6. When the sampling time window is less than the target time constant, the distortion caused by the sampling time window is less than 5%. Based on this method, a helicopter TEM system, called CASHTEM, was designed, and a flight test was carried out in a known mining area. The test results show that the system has good detection performance, verifying the effectiveness of the method.
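The two reported design rules can be encoded directly as checks (a sketch; the function names are mine, and the thresholds are exactly the abstract's 5%-distortion criteria):

```python
# Design-rule checks from the paper's stated criteria: base-frequency
# distortion < 5% when (transmitting period) / (target time constant) > 6,
# and sampling-window distortion < 5% when the window is shorter than the
# target time constant. Function names are illustrative.

def base_frequency_ok(base_freq_hz, target_tau_s):
    period_s = 1.0 / base_freq_hz
    return period_s / target_tau_s > 6.0

def sampling_window_ok(window_s, target_tau_s):
    return window_s < target_tau_s

# A 25 Hz base frequency (40 ms period) covers targets up to tau ~ 6.7 ms.
ok = base_frequency_ok(25.0, 5.0e-3)
```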
Estimation of the Plant Time Constant of Current-Controlled Voltage Source Converters
DEFF Research Database (Denmark)
Vidal, Ana; Yepes, Alejandro G.; Malvar, Jano
2014-01-01
Precise knowledge of the plant time constant is essential to perform a thorough analysis of the current control loop in voltage source converters (VSCs). As the loop behavior can be significantly influenced by the VSC working conditions, the effects associated with converter losses should be included...... in the model, through an equivalent series resistance. In a recent work, an algorithm to identify this parameter was developed, considering the inductance value as known and practically constant. Nevertheless, the plant inductance can also present important uncertainties with respect to the inductance...... of the VSC interface filter measured at rated conditions. This paper extends that method so that both parameters of the plant time constant (resistance and inductance) are estimated. Such enhancement is achieved through the evaluation of the closed-loop transient responses of both axes of the synchronous...
The ruin probability of a discrete time risk model under constant interest rate with heavy tails
Tang, Q.
2004-01-01
This paper investigates the ultimate ruin probability of a discrete time risk model with a positive constant interest rate. Under the assumption that the gross loss of the company within one year is subexponentially distributed, a simple asymptotic relation for the ruin probability is derived and
Using a Constant Time Delay Procedure to Teach Foundational Swimming Skills to Children with Autism
Rogers, Laura; Hemmeter, Mary Louise; Wolery, Mark
2010-01-01
The purpose of this study was to evaluate the effectiveness of using a constant time delay procedure to teach foundational swimming skills to three children with autism. The skills included flutter kick, front-crawl arm strokes, and head turns to the side. A multiple-probe design across behaviors and replicated across participants was used.…
Bethlem, H.L.; Ubachs, W.M.G.
2009-01-01
The recently demonstrated methods to cool and manipulate neutral molecules offer new possibilities for precision tests of fundamental physics theories. We here discuss the possibility of testing the time-invariance of fundamental constants using near degeneracies between rotational levels in the
A critical oscillation constant as a variable of time scales for half-linear dynamic equations
Czech Academy of Sciences Publication Activity Database
Řehák, Pavel
2010-01-01
Roč. 60, č. 2 (2010), s. 237-256 ISSN 0139-9918 R&D Projects: GA AV ČR KJB100190701 Institutional research plan: CEZ:AV0Z10190503 Keywords : dynamic equation * time scale * half-linear equation * (non)oscillation criteria * Hille-Nehari criteria * Kneser criteria * critical constant * oscillation constant * Hardy inequality Subject RIV: BA - General Mathematics Impact factor: 0.316, year: 2010 http://link.springer.com/article/10.2478%2Fs12175-010-0009-7
Acceleration-enlarged symmetries in nonrelativistic space-time with a cosmological constant
Lukierski, J.; Stichel, P. C.; Zakrzewski, W. J.
2008-05-01
By considering the nonrelativistic limit of de Sitter geometry one obtains the nonrelativistic space-time with a cosmological constant and Newton-Hooke (NH) symmetries. We show that the NH symmetry algebra can be enlarged by the addition of constant acceleration generators and endowed with central extensions (one in any dimension D and three in D = 2+1). We present a classical Lagrangian and Hamiltonian framework for constructing models quasi-invariant under the enlarged NH symmetries that depend on three parameters described by three nonvanishing central charges. The Hamiltonian dynamics then splits into external and internal sectors with new noncommutative structures of the external and internal phase spaces. We show that in the limit of vanishing cosmological constant the system reduces to one that possesses acceleration-enlarged Galilean symmetries.
Directory of Open Access Journals (Sweden)
Perumal Puvanasvaran
2013-06-01
Full Text Available Purpose: The paper introduces a new way of defining Overall Equipment Effectiveness (OEE) that considers both machine utilization and the customer demand requested. Previous literature on the limitations and difficulty of OEE implementation is reviewed to identify opportunities for improvement, since OEE has been widely accepted by most industries regardless of their manufacturing environment. Design/methodology/approach: The study is based on a literature review and computerized data collection. The novel definition and the method of processing the computerized data are interpreted on the basis of similar studies by others and supported by related journals to validate the output. The computerized data are the product amounts and the total time elapsed in each production run, recorded automatically by the system at the manufacturing site. Findings: The first finding is the exposure of a limitation in the current implementation of OEE: high machine utilization is encouraged regardless of customer demand, which conflicts with inventory holding cost. This becomes obvious in overproduction, especially during periods of low customer demand. The second limitation in the general implementation of OEE is the difficulty of obtaining the ideal cycle time, especially for equipment with constant process time. The paper then proposes a solution to this problem by defining a performance ratio and using that definition to measure machine utilization over time. Before this, the time available for production is calculated incorporating the availability component of OEE, which is then used to obtain the Takt time. Research limitations/implications: Future
Directory of Open Access Journals (Sweden)
Pieprzyca J.
2015-04-01
Full Text Available A common method for identifying the hydrodynamic phenomena occurring in the tundish of a continuous casting (CC) device is to determine residence time distribution (RTD) curves. These curves allow the way the liquid steel flows and mixes in the tundish to be determined. They can be obtained either from numerical simulation or from experiments on physical models. A particular problem is objectivity when conducting physical research: the time constants characterizing the studied phenomena must be precisely determined from the measured change of tracer concentration in the model liquid's volume. The mathematical description of the determined curves is based on approximate differential equations formulated in the theory of fluid mechanics. Solving these equations to calculate the time constants requires special software and is very time-consuming. To improve the process, a method was created to calculate the time constants using elements of automation. It allows the problem to be solved algebraically, which improves the interpretation of the results of physical modeling.
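The paper's algebraic method is not reproduced in the abstract. As a hedged illustration of the kind of quantity extracted from a tracer curve, the standard first-moment estimate of the mean residence time can be sketched as:

```python
import math

# Standard moment estimate of mean residence time from a sampled tracer
# curve (an illustration, not the paper's method):
#   t_mean = sum(t * c(t)) / sum(c(t)).

def mean_residence_time(times_s, concentrations):
    num = sum(t * c for t, c in zip(times_s, concentrations))
    den = sum(concentrations)
    return num / den

# Synthetic check: for an exponential die-away c(t) = exp(-t/tau) sampled
# finely over many time constants, the estimate approaches tau.
tau = 10.0
ts = [0.01 * i for i in range(20000)]          # 0..200 s
cs = [math.exp(-t / tau) for t in ts]
t_mean = mean_residence_time(ts, cs)            # ~ tau
```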
Hadron spectrum, quark masses and decay constants from light overlap fermions on large lattices
International Nuclear Information System (INIS)
Galletly, D.; Horsley, R.; Streuer, T.; Freie Univ. Berlin
2006-07-01
We present results from a simulation of quenched overlap fermions with Luescher-Weisz gauge field action on lattices up to 24³ × 48 and for pion masses down to ∼250 MeV. Among the quantities we study are the pion, rho and nucleon masses, the light and strange quark masses, and the pion decay constant. The renormalization of the scalar and axial vector currents is done nonperturbatively in the RI-MOM scheme. The simulations are performed at two different lattice spacings, a ∼ 0.1 fm and ∼ 0.15 fm, and on two different physical volumes, to test the scaling properties of our action and to study finite volume effects. We compare our results with the predictions of chiral perturbation theory and compute several of its low-energy constants. The pion mass is computed in sectors of fixed topology as well. (orig.)
International Nuclear Information System (INIS)
Myint, Wazo; Ishima, Rieko
2009-01-01
In the analysis of the constant-time Carr-Purcell-Meiboom-Gill (CT-CPMG) relaxation dispersion experiment, chemical exchange parameters, such as the rate of exchange and the populations of the exchanging species, are typically optimized using equations that predict experimental relaxation rates recorded as a function of effective field strength. In this process, the effect of chemical exchange during the CPMG pulses is typically assumed to be the same as during free precession. This approximation may introduce systematic errors into the analysis of data because the number of CPMG pulses is incremented during the constant-time relaxation period, and the total pulse duration therefore varies as a function of the effective field strength. In order to estimate the size of such errors, we simulate the time dependence of magnetization during the entire constant-time period, explicitly taking into account the effect of the CPMG pulses on the spin relaxation rate. We show that in general the difference between the relaxation dispersion profile calculated using a practical pulse width and that calculated using an extremely short pulse width is small, but under certain circumstances it can exceed 1 s⁻¹. The difference increases significantly when the CPMG pulses are miscalibrated.
A constant travel time budget? In search for explanations for an increase in average travel time
Rietveld, P.; Wee, van B.
2002-01-01
Recent research suggests that during the past decades the average travel time of the Dutch population has probably increased. However, different data sources show different levels of increase. Possible causes of the increase in average travel time are presented here. Increased incomes have
On the thermal inertia and time constant of single-family houses
Energy Technology Data Exchange (ETDEWEB)
Hedbrant, J.
2001-08-01
Since the 1970s, electricity has become a common heating source in Swedish single-family houses. About one million small houses can use electricity for heating; about 600,000 have electricity as the only heating source. A liberalised European electricity market would most likely raise Swedish electricity prices during daytime on weekdays and lower them at other times. In the long run, electrical heating of houses would be replaced by fuels, but in the shorter perspective, other strategies may be considered. This report evaluates the use of electricity for heating a dwelling, or part of it, at night when both demand and price are low. The stored heat is utilised in the daytime some hours later, when the electricity price is high. Essential for heat storage is the thermal time constant. The report gives a simple theoretical framework for calculating the time constant of a single-family house with furniture. Furthermore, the comfort time constant, that is, the time for a house to cool down from a maximum to a minimum acceptable temperature, is derived. Two theoretical model houses are calculated, and the results are compared to data from empirical studies in three inhabited test houses. The results show that it was possible to store about 8 kWh/K in a house from the seventies and about 5 kWh/K in a house from the eighties. The time constants were 34 h and 53 h, respectively. During winter conditions with 0 °C outdoors, the 'comfort' time constants with maximum and minimum indoor temperatures of 23 and 20 °C were 6 h and 10 h. The results indicate that the maximum load-shifting potential of an average single-family house is about 1 kW during 16 daytime hours shifted into 2 kW during 8 night hours. Scaled up to the one million Swedish single-family houses that can use electricity as a heating source, the maximum potential is 1000 MW daytime shifted into 2000 MW at night.
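The comfort time constant follows from simple Newton cooling; as a hedged sketch (the report's exact derivation is not reproduced here, and internal heat gains are ignored), with no heating the indoor temperature decays as T(t) = T_out + (T_max − T_out)·exp(−t/τ), so the coasting time from T_max down to T_min is τ·ln((T_max − T_out)/(T_min − T_out)):

```python
import math

# Comfort time under pure Newton cooling (a simplification: internal heat
# gains from occupants and appliances are ignored):
#   t_comfort = tau * ln((T_max - T_out) / (T_min - T_out)).

def comfort_time_h(tau_h, t_out_c, t_max_c, t_min_c):
    return tau_h * math.log((t_max_c - t_out_c) / (t_min_c - t_out_c))

# With 0 C outdoors and a 23 -> 20 C comfort band, tau = 34 h gives
# roughly 5 h of coasting; the report's empirical figure of 6 h plausibly
# reflects the internal gains neglected here.
t1 = comfort_time_h(34.0, 0.0, 23.0, 20.0)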
New constraints on time-dependent variations of fundamental constants using Planck data
Hart, Luke; Chluba, Jens
2018-02-01
Observations of the cosmic microwave background (CMB) today allow us to answer detailed questions about the properties of our Universe, targeting both standard and non-standard physics. In this paper, we study the effects of varying fundamental constants (i.e. the fine-structure constant, αEM, and the electron rest mass, me) around last scattering using the recombination codes COSMOREC and RECFAST++. We approach the problem in a pedagogical manner, illustrating the importance of various effects on the free electron fraction, Thomson visibility function and CMB power spectra, highlighting various degeneracies. We demonstrate that the simpler RECFAST++ treatment (based on a three-level atom approach) can be used to accurately represent the full computation of COSMOREC. We also include explicit time-dependent variations using a phenomenological power-law description. We reproduce previous Planck 2013 results in our analysis. Assuming constant variations relative to the standard values, we find the improved constraints αEM/αEM,0 = 0.9993 ± 0.0025 (CMB only) and me/me,0 = 1.0039 ± 0.0074 (including BAO) using Planck 2015 data. For a redshift-dependent variation, αEM(z) = αEM(z0) [(1 + z)/1100]^p with αEM(z0) ≡ αEM,0 at z0 = 1100, we obtain p = 0.0008 ± 0.0025. Allowing simultaneous variations of αEM(z0) and p yields αEM(z0)/αEM,0 = 0.9998 ± 0.0036 and p = 0.0006 ± 0.0036. We also discuss combined limits on αEM and me. Our analysis shows that existing data are sensitive not only to the values of the fundamental constants around recombination but also to their first time derivatives. This suggests that a wider class of varying fundamental constant models can be probed using the CMB.
A simplified controller and detailed dynamics of constant off-time peak current control
Van den Bossche, Alex; Dimitrova, Ekaterina; Valchev, Vencislav; Feradov, Firgan
2017-09-01
A fast and reliable current control is often the basis of power electronic converters. The traditional constant-frequency peak control is unstable above 50% duty ratio. In contrast, the constant off-time peak current control (COTCC) is unconditionally stable and fast, so it is worth analyzing. Another feature of the COTCC is that current control can be combined with current protection. The time dynamics show a zero-transient response, even when the inductance changes over a wide range. It can also be modeled as a special transfer function for all frequencies. The article also shows that it can be implemented in a simple analog circuit using a wide-temperature-range IC, such as the LM2903, which is compatible with PV conversion and the automotive temperature range. Experiments are done using a 3 kW step-up converter. A remaining drawback is that the principle does not easily fit into today's usual digital controllers.
Most Probable Failures in LHC Magnets and Time Constants of their Effects on the Beam.
Gomez Alonso, Andres
2006-01-01
During LHC operation, energies up to 360 MJ will be stored in each proton beam and over 10 GJ in the main electrical circuits. With such high energies, beam losses can quickly lead to serious equipment damage. The Machine Protection Systems have been designed to provide reliable protection of the LHC through detection of the failures leading to beam losses and fast dumping of the beams. In order to determine the protection strategies, it is important to know the time constants of the failure effects on the beam. In this report, we give an estimation of the time constants of quenches and powering failures in LHC magnets. The most critical failures are powering failures in certain normal-conducting circuits, leading to relevant effects on the beam in ~1 ms. The failures in superconducting magnets leading to the fastest losses are quenches. In this case, the effects on the beam can be significant ~10 ms after the quench occurs.
Constant time distance queries in planar unweighted graphs with subquadratic preprocessing time
DEFF Research Database (Denmark)
Wulff-Nilsen, C.
2013-01-01
Let G be an n-vertex planar, undirected, and unweighted graph. It was stated as open problems whether the Wiener index, defined as the sum of all-pairs shortest path distances, and the diameter of G can be computed in o(n²) time. We show that both problems can be solved in O(n² log log n / log n) time with O(n) space. The techniques that we apply allow us to build, within the same time bound, an oracle for exact distance queries in G. More generally, for any parameter S ∈ [(log n/log log n)², n^(2/5)], distance queries can be answered in O(√S log S / log n) time per query with O(n²/√S) preprocessing time and space requirement. With respect to running time, this is better than previous algorithms when log S = o(log n). All algorithms have linear space requirement. Our results generalize to a larger class of graphs including those with a fixed excluded minor. (C) 2012
Mathur, Neha; Glesk, Ivan; Buis, Arjan
2016-06-01
Elevated skin temperature at the body/device interface of lower-limb prostheses is one of the major factors that affect tissue health. The heat dissipation in prosthetic sockets is greatly influenced by the thermal conductive properties of the hard socket and liner material employed. However, monitoring the interface temperature at skin level in lower-limb prostheses is notoriously complicated, owing to the flexible nature of the interface liners used, which requires consistent positioning of sensors during donning and doffing. Predicting the residual limb temperature by monitoring the temperature between socket and liner, rather than between skin and liner, could be an important step in alleviating complaints about increased temperature and perspiration in prosthetic sockets. To predict the residual limb temperature, a machine learning algorithm, Gaussian processes, is employed, which utilizes the thermal time constant values of commonly used socket and liner materials. This Letter highlights the relevance of the thermal time constant of prosthetic materials in the Gaussian processes technique, which would be useful in addressing the challenge of non-invasively monitoring the residual limb skin temperature. With the introduction of the thermal time constant, the model can be optimised and generalised for a given prosthetic setup, thereby making the predictions more reliable.
International Nuclear Information System (INIS)
Roth, H.D.; Hutton, R.S.; Hwang, Kuochu; Turro, N.J.; Welsh, K.M.
1989-01-01
Nuclear spin polarization effects induced in radical pairs with one or more strong ¹³C hyperfine coupling constants have been evaluated. The pairs were generated by photoinduced α-cleavage or hydrogen abstraction reactions of carbonyl compounds. Several examples illustrate how changes in the magnetic field strength (H₀) and the g-factor difference (Δg) affect the general appearance of the resulting CIDNP multiplets. The results bear out an earlier caveat concerning the qualitative interpretation of CIDNP effects observed for multiplets.
International Nuclear Information System (INIS)
Chowdhury, A.R.; Roy, T.
1980-01-01
We have considered the problem of evaluating large-order estimates of perturbation theory in a quantum field theory with more than one coupling constant. The theory considered is four-dimensional and possesses instanton-type solutions. It contains a boson field coupled to a fermion through the usual g ψ̄ψφ interaction, along with the boson self-interaction λφ⁴. Our analysis reveals a phenomenon not observed in a theory with only one coupling constant: one gets different kinds of behavior in different regions of the (λ, g) plane. The results are quite encouraging for the application to more realistic field theories.
Near constant-time optimal piecewise LDR to HDR inverse tone mapping
Chen, Qian; Su, Guan-Ming; Yin, Peng
2015-02-01
In backward compatible HDR image/video compression, it is a general approach to reconstruct HDR from compressed LDR as a prediction of the original HDR, which is referred to as inverse tone mapping. Experimental results show that a 2-piecewise 2nd-order polynomial has better mapping accuracy than a 1-piece high-order or 2-piecewise linear mapping, but it is also the most time-consuming method, because finding the optimal pivot point that splits the LDR range into 2 pieces requires exhaustive search. In this paper, we propose a fast algorithm that completes optimal 2-piecewise 2nd-order polynomial inverse tone mapping in near constant time without quality degradation. We observe that in the least-squares solution, each entry in the intermediate matrix can be written as a sum of basic terms, which can be pre-calculated into look-up tables. Since solving the matrix becomes looking up values in tables, computation time barely differs regardless of the number of points searched. Hence, we can carry out the most thorough pivot point search to find the optimal pivot that minimizes MSE in near constant time. Experiments show that our proposed method achieves the same PSNR performance while using 60 times less computation time than traditional exhaustive search in 2-piecewise 2nd-order polynomial inverse tone mapping with a continuity constraint.
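The look-up-table observation can be made concrete with prefix sums: every entry of the quadratic least-squares normal equations over a codeword range [a, b] is a sum of x^k or x^k·y, so prefix sums over k = 0..4 make each candidate pivot's two fits O(1). The sketch below is my own minimal implementation of that idea (without the paper's continuity constraint), not the paper's code:

```python
# Prefix-sum least squares for a 2-piecewise quadratic fit: each candidate
# pivot costs O(1) after O(n) precomputation. Illustrative implementation.

def prefix_sums(xs, ys):
    sx = [[0.0] * (len(xs) + 1) for _ in range(5)]   # sums of x^k, k=0..4
    sxy = [[0.0] * (len(xs) + 1) for _ in range(3)]  # sums of x^k * y, k=0..2
    syy = [0.0] * (len(xs) + 1)                      # sums of y^2
    for i, (x, y) in enumerate(zip(xs, ys)):
        p = 1.0
        for k in range(5):
            sx[k][i + 1] = sx[k][i] + p
            if k < 3:
                sxy[k][i + 1] = sxy[k][i] + p * y
            p *= x
        syy[i + 1] = syy[i] + y * y
    return sx, sxy, syy

def solve3(A, b):
    # Tiny Gauss-Jordan elimination for the 3x3 normal equations.
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

def segment_sse(sx, sxy, syy, i, j):
    # SSE of the best quadratic fit on samples i..j-1, in O(1):
    # SSE = sum(y^2) - c . b, where c solves the normal equations A c = b.
    S = lambda k: sx[k][j] - sx[k][i]
    Sy = lambda k: sxy[k][j] - sxy[k][i]
    A = [[S(0), S(1), S(2)], [S(1), S(2), S(3)], [S(2), S(3), S(4)]]
    b = [Sy(0), Sy(1), Sy(2)]
    c = solve3(A, b)
    return (syy[j] - syy[i]) - sum(ck * bk for ck, bk in zip(c, b))

def best_pivot(xs, ys):
    sx, sxy, syy = prefix_sums(xs, ys)
    n = len(xs)
    return min(range(3, n - 2),
               key=lambda p: segment_sse(sx, sxy, syy, 0, p)
               + segment_sse(sx, sxy, syy, p, n))

# Synthetic check: a curve that is quadratic below x = 50 and linear (with
# a jump) above it; the exhaustive-but-O(1)-per-pivot search recovers 50.
xs = [float(i) for i in range(100)]
ys = [x * x if x < 50.0 else 2600.0 + 10.0 * (x - 50.0) for x in xs]
pivot = best_pivot(xs, ys)
```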
Time-dependent leak behavior of flawed Alloy 600 tube specimens at constant pressure
Energy Technology Data Exchange (ETDEWEB)
Bahn, Chi Bum, E-mail: bahn@anl.gov [Argonne National Laboratory, Argonne, IL 60439 (United States); Majumdar, Saurin [Argonne National Laboratory, Argonne, IL 60439 (United States); Harris, Charles [United States Nuclear Regulatory Commission, Rockville, MD 20852 (United States)
2011-10-15
Leak rate testing has been performed using Alloy 600 tube specimens with throughwall flaws. Some specimens have shown time-dependent leak behavior under constant pressure conditions. Fractographic characterization was performed to identify the time-dependent crack growth mechanism. The fracture surfaces of the specimens showed the typical features of ductile fracture, as well as distinct crystallographic facets typical of fatigue crack growth at low ΔK. Structural vibration appears to have been caused by the oscillation of pressure induced by a high-pressure pump used in the test facility, and by water jet/tube structure interaction. Analyses of the leak behavior and crack growth indicated that both the high-pressure pump and the water jet could contribute significantly to fatigue crack growth. To determine whether fatigue crack growth during leak testing can occur solely through the water jet effect, leak rate tests at constant pressure without the high-pressure pump need to be performed. - Highlights: > Leak rate of flawed Alloy 600 tubing increased under constant pressure conditions. > Fractography revealed two cases: ductile tearing and crystallographic facets. > Crystallographic facets are typical features of fatigue crack growth at low ΔK. > Fatigue sources could be water jet-induced vibration and/or high-pressure pump pulsation.
Exponential stability of fuzzy cellular neural networks with constant and time-varying delays
International Nuclear Information System (INIS)
Liu Yanqing; Tang Wansheng
2004-01-01
In this Letter, the global stability of delayed fuzzy cellular neural networks (FCNN) with either constant or time-varying delays is studied. First, we establish the existence and uniqueness of the equilibrium point using the theory of topological degree and the properties of nonsingular M-matrices, and give sufficient conditions for global exponential stability by constructing a suitable Lyapunov functional. Second, criteria guaranteeing the global exponential stability of FCNN with time-varying delays are given, and an estimate of the exponential convergence rate with respect to the rate of variation of the delays is presented, again by constructing a suitable Lyapunov functional.
Badra, Jihad; Elwardany, Ahmed E; Farooq, Aamir
2014-06-28
Reaction rate constants of the reaction of four large ketones with hydroxyl (OH) are investigated behind reflected shock waves using OH laser absorption. The studied ketones are isomers of hexanone and include 2-hexanone, 3-hexanone, 3-methyl-2-pentanone, and 4-methyl-2-pentanone. Rate constants are measured under pseudo-first-order kinetics at temperatures ranging from 866 K to 1375 K and pressures near 1.5 atm. The reported high-temperature rate constant measurements are the first direct measurements for these ketones under combustion-relevant conditions. The effects of the position of the carbonyl group (C=O) and of methyl (CH3) branching on the overall rate constant with OH are examined. Using previously published data, rate constant expressions covering low-to-high temperatures are developed for acetone, 2-butanone, 3-pentanone, and the hexanone isomers studied here. These Arrhenius expressions are used to devise rate rules for H-abstraction from various sites. Specifically, the current scheme is applied with good success to H-abstraction by OH from a series of n-ketones. Finally, general expressions for primary and secondary site-specific H-abstraction by OH from ketones are proposed as follows (the subscript numbers indicate the number of carbon atoms bonded to the next-nearest-neighbor carbon atom, the subscript CO indicates that the abstraction is from a site next to the carbonyl group (C=O), and the prime is used to differentiate different neighboring environments of a methylene group):
International Nuclear Information System (INIS)
Nagao, S
2009-01-01
The nature of time, and the requirements for a dimension to work as a time dimension, are investigated. A potential scenario for the development of the universe is conceptually investigated, starting from energy as vibration in multiple dimensions. A model is proposed in which the Big Bang is a phase transition of energy from vibration in 4-dimensional space to an energy distribution on the 3-D surface of a 4-D sphere. The time which we observe passing at a constant speed is not the reference frame which we unintentionally believe it to be, but the radius dimension of the 4-D sphere. The features of dark matter and the mystery of dark energy are naturally explained by the model.
Energy Technology Data Exchange (ETDEWEB)
Nagao, S, E-mail: snagao@lilac.plala.or.j [Business Development and Licensing Department, Nippon Boehringer Ingelheim Co., Ltd., ThinkPark Tower, 2-1-1, Osaki, Shinagawa, Tokyo 141-6017 (Japan)
2009-06-01
Influence of the Gilbert damping constant on the flux rise time of write head fields
International Nuclear Information System (INIS)
Ertl, Othmar; Schrefl, Thomas; Suess, Dieter; Schabes, Manfred E.
2005-01-01
Magnetic recording at fast data rates requires write heads with rapid rise times of the magnetic flux during the write process. We present three-dimensional (3D) micromagnetic finite element calculations of an entire ring head, including the 3D coil geometry, during the writing of magnetic bits in granular media. The simulations demonstrate how input current profiles translate into magnetization processes in the head, which in turn generate the write head field. The flux rise time depends significantly on the Gilbert damping constant of the head material. Low damping causes incoherent magnetization processes, leading to long rise times and low head fields. High damping leads to coherent reversal of the magnetization in the head. As a consequence, the gap region can be quickly saturated, which causes high head fields with short rise times.
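The damping dependence described above can be illustrated with a single-macrospin sketch of the Landau-Lifshitz-Gilbert (LLG) equation. This is a minimal toy model, not the paper's full 3D finite element calculation; the field, time scale, and damping values are illustrative assumptions:

```python
import numpy as np

def llg_relax(alpha, h=np.array([0.0, 0.0, 1.0]), t_end=20.0, dt=1e-3):
    """Integrate the LLG equation for one macrospin m in a fixed field h
    (reduced units, gamma = 1), returning the final magnetization."""
    m = np.array([1.0, 0.0, 0.1])
    m /= np.linalg.norm(m)
    pref = 1.0 / (1.0 + alpha**2)
    for _ in range(int(t_end / dt)):
        mxh = np.cross(m, h)
        dm = -pref * (mxh + alpha * np.cross(m, mxh))
        m = m + dt * dm
        m /= np.linalg.norm(m)  # keep |m| = 1
    return m

# Higher Gilbert damping aligns the magnetization with the field sooner,
# consistent with the shorter rise times reported for high damping.
mz_low = llg_relax(alpha=0.02)[2]
mz_high = llg_relax(alpha=0.5)[2]
```

With low damping the spin keeps precessing and is still far from the field axis after the same integration time, mirroring the slow flux rise reported above.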
Time constants and transfer functions for a homogeneous 900 MWt metallic fueled LMR
International Nuclear Information System (INIS)
Grimm, K.N.; Meneghetti, D.
1988-01-01
Nodal transfer functions are calculated for a 900 MWt U-10Zr-fueled sodium-cooled reactor. From the transfer functions, the time constants, feedback reactivity transfer function coefficients, and power coefficients can be determined. These quantities are calculated for core fuel, upper and lower axial reflector steel, radial blanket fuel, radial reflector steel, and the B4C rod shaft expansion effect. The quantities are compared to the analogous quantities of a 60 MWt metallic-fueled sodium-cooled Experimental Breeder Reactor II configuration. 8 refs., 2 figs., 6 tabs.
Yang, Yong-Qiang; Li, Xue-Bo; Shao, Ru-Yue; Lyu, Zhou; Li, Hong-Wei; Li, Gen-Ping; Xu, Lyu-Zi; Wan, Li-Hua
2016-09-01
The characteristic life stages of infesting blowflies (Calliphoridae) such as Chrysomya megacephala (Fabricius) are powerful evidence for estimating the time of death of a corpse, but an established reference of developmental times for local blowfly species is required. We determined the developmental rates of C. megacephala from southwest China at seven constant temperatures (16-34°C). Isomegalen and isomorphen diagrams were constructed based on the larval length and the time of each developmental event (first ecdysis, second ecdysis, wandering, pupariation, and eclosion) at each temperature. A thermal summation model was constructed by estimating the developmental threshold temperature D0 and the thermal summation constant K. The thermal summation model indicated that, for complete development from egg hatching to eclosion, D0 = 9.07 ± 0.54°C and K = 3991.07 ± 187.26 h °C. This reference can increase the accuracy of estimations of postmortem intervals in China by predicting the growth of C. megacephala. © 2016 American Academy of Forensic Sciences.
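Under the linear thermal summation model quoted above, the predicted development time is simply t = K/(T - D0). A minimal sketch using the reported constants (hours and °C as in the abstract):

```python
def development_hours(temp_c, d0=9.07, k=3991.07):
    """Predicted egg-hatch-to-eclosion time (hours) from the thermal
    summation model t = K / (T - D0), with the abstract's constants."""
    if temp_c <= d0:
        raise ValueError("no development below the threshold temperature")
    return k / (temp_c - d0)

# At a constant 25 degrees C, roughly 250 h (about 10.4 days) are predicted.
hours_at_25 = development_hours(25.0)
```

Warmer rearing temperatures shorten the predicted interval, which is how such a reference feeds into postmortem-interval estimation.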
Comparison of Cole-Cole and Constant Phase Angle modeling in time-domain induced polarization
DEFF Research Database (Denmark)
Lajaunie, Myriam; Maurya, Pradip Kumar; Fiandaca, Gianluca
The Cole-Cole model and the constant phase angle (CPA) model are two prevailing phenomenological descriptions of induced polarization (IP), used for both frequency-domain (FD) and time-domain (TD) modeling. The former is a 4-parameter description, while the latter involves only two......, forward modeling of quadrupolar sequences on 1D and 2D heterogeneous CPA models shows that the CPA decays differ among each other only by a multiplication factor. Consequently, the inspection of field data in log-log plots gives insight into the modeling needed for fitting them: the CPA inversion cannot...... is reflected in TDIP data, and therefore, at identifying (1) if and when it is possible to distinguish, in time domain, between a Cole-Cole description and a CPA one, and (2) if features of time domain data exist in order to know, from a simple data inspection, which model will be the most adapted to the data...
Time series clustering in large data sets
Directory of Open Access Journals (Sweden)
Jiří Fejfar
2011-01-01
Full Text Available The clustering of time series is a widely researched area. There are many methods for dealing with this task. We are currently using the self-organizing map (SOM) with an unsupervised learning algorithm for the clustering of time series. After the first experiment (Fejfar, Weinlichová, Šťastný, 2009), it seems that the whole concept of the clustering algorithm is correct, but that we have to perform time series clustering on a much larger dataset to obtain more accurate results and to find the correlation between configured parameters and results more precisely. The second requirement arose from the need for a well-defined evaluation of results. It seems useful to use sound recordings as instances of time series again. There are many recordings to use in digital libraries, and many interesting features and patterns can be found in this area. In this experiment we are searching for recordings with a similar development of information density. This can be used for musical form investigation, cover song detection and many other applications. The objective of the presented paper is to compare clustering results made with different parameters of the feature vectors and of the SOM itself. We describe time series in a simplistic way, evaluating standard deviations for separate parts of the recordings. The resulting feature vectors are clustered with the SOM in batch training mode with different topologies, varying from a few neurons to large maps. Other algorithms usable for finding similarities between time series are discussed, and conclusions for further research are presented. We also present an overview of the related literature and projects.
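The pipeline described above (segment standard deviations as feature vectors, clustered with a SOM) can be sketched as follows. This is an illustrative online-SOM sketch only: the paper uses batch training, and the grid size, learning schedule, and toy data here are assumptions, not the authors' configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def features(series, parts=8):
    """Describe a time series by the standard deviation of each of
    `parts` equal segments (the paper's simplistic descriptor)."""
    return np.array([seg.std() for seg in np.array_split(series, parts)])

def train_som(data, grid=4, iters=200, lr0=0.5, sigma0=2.0):
    """Minimal SOM on a 1D grid of units (online updates)."""
    w = rng.normal(size=(grid, data.shape[1]))
    for it in range(iters):
        lr = lr0 * (1 - it / iters)
        sigma = sigma0 * (1 - it / iters) + 1e-3
        for x in data:
            bmu = np.argmin(((w - x) ** 2).sum(axis=1))
            dist = np.arange(grid) - bmu
            h = np.exp(-dist**2 / (2 * sigma**2))   # neighborhood kernel
            w += lr * h[:, None] * (x - w)
    return w

# Two toy groups of "recordings": low-variance vs. high-variance noise.
quiet = [rng.normal(0, 0.1, 512) for _ in range(5)]
loud = [rng.normal(0, 2.0, 512) for _ in range(5)]
data = np.array([features(s) for s in quiet + loud])
w = train_som(data)
bmus = [int(np.argmin(((w - x) ** 2).sum(axis=1))) for x in data]
```

After training, the best-matching units for the two groups carry clearly different weight profiles, which is the clustering effect the paper exploits at much larger scale.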
International Nuclear Information System (INIS)
Doebrich, Marcus; Markstaller, Klaus; Karmrodt, Jens; Kauczor, Hans-Ulrich; Eberle, Balthasar; Weiler, Norbert; Thelen, Manfred; Schreiber, Wolfgang G
2005-01-01
In this study, an algorithm was developed to measure the distribution of pulmonary time constants (TCs) from dynamic computed tomography (CT) data sets during a sudden airway pressure step-up. Simulations with synthetic data were performed to test the methodology as well as the influence of experimental noise. Furthermore, the algorithm was applied to in vivo data. In five pigs, sudden changes in airway pressure were imposed during dynamic CT acquisition in healthy lungs and in a saline-lavage ARDS model. The fractional gas content (FGC) in the imaged slice was calculated by density measurements for each CT image. Temporal variations of the FGC were analysed assuming a model with a continuous distribution of exponentially decaying time constants. The simulations proved the feasibility of the method. The influence of experimental noise could be well evaluated. Analysis of the in vivo data showed that in healthy lungs ventilation processes are more likely to be characterized by discrete TCs, whereas in ARDS lungs continuous distributions of TCs are observed. The temporal behaviour of lung inflation and deflation can be characterized objectively using the described new methodology. This study indicates that continuous distributions of TCs reflect lung ventilation mechanics more accurately than discrete TCs.
Ivy, Sarah E.; Guerra, Jennifer A.; Hatton, Deborah D.
2017-01-01
Introduction: Constant time delay is an evidence-based practice to teach sight word recognition to students with a variety of disabilities. To date, two studies have documented its effectiveness for teaching braille. Methods: Using a multiple-baseline design, we evaluated the effectiveness of constant time delay to teach highly motivating words to…
Directory of Open Access Journals (Sweden)
Mina eRanjbaran
2016-03-01
Full Text Available The vestibulo-ocular reflex (VOR) is essential in our daily life to stabilize retinal images during head movements. Balanced vestibular functionality secures optimal reflex performance, which can be distorted in the case of peripheral vestibular lesions. Fortunately, vestibular compensation at different neuronal sites restores VOR function to some extent over time. Studying vestibular compensation gives insight into the possible mechanisms for plasticity in the brain. In this work, novel experimental analysis tools are employed to reevaluate the VOR characteristics following unilateral vestibular lesions and compensation. Our results suggest that following vestibular lesions, asymmetric performance of the VOR is not limited to its gain. Vestibular compensation also causes asymmetric dynamics, i.e., different time constants for the VOR during leftward or rightward passive head rotation. Potential mechanisms for these experimental observations are provided using simulation studies.
International Nuclear Information System (INIS)
Chand, F.
2010-01-01
Exact fourth-order constants of motion are investigated for three-dimensional classical and quantum Hamiltonian systems. The rationalization method is utilized to obtain constants of motion for classical systems. Constants of motion for quantum systems are obtained by adding quantum correction terms, computed using Moyal's bracket, to the corresponding classical counterparts. (author)
International Nuclear Information System (INIS)
Shore, G.M. (E-mail: g.m.shore@swansea.ac.uk)
2006-01-01
The QCD formulae for the radiative decays η, η′ → γγ, and the corresponding Dashen-Gell-Mann-Oakes-Renner (DGMOR) relations, differ from conventional PCAC results due to the gluonic U(1)_A axial anomaly. This introduces a critical dependence on the gluon topological susceptibility. In this paper, we revisit our earlier theoretical analysis of radiative pseudoscalar decays and the DGMOR relations and extract explicit experimental values for the decay constants. This is our main result. The flavour-singlet DGMOR relation is the generalisation of the Witten-Veneziano formula beyond large N_c, so we are able to give a quantitative assessment of the realisation of the 1/N_c expansion in the U(1)_A sector of QCD. Applications to other aspects of η′ physics, including the relation with the first-moment sum rule for the polarised photon structure function g_1^γ, are highlighted. The U(1)_A Goldberger-Treiman relation is extended to accommodate SU(3) flavour breaking, and the implications of a more precise measurement of the η- and η′-nucleon couplings are discussed. A comparison with the existing literature on pseudoscalar meson decay constants using large-N_c chiral Lagrangians is also made.
Sassen, S.A.E.; Wal, van der J.
1997-01-01
For a real-time shared-memory database with optimistic concurrency control, an approximation for the transaction response-time distribution is obtained. The model assumes that transactions arrive at the database according to a Poisson process, that every transaction uses an equal number of
Badra, Jihad
2014-01-01
Reaction rate constants of the reaction of four large ketones with hydroxyl (OH) are investigated behind reflected shock waves using OH laser absorption. The studied ketones are isomers of hexanone and include 2-hexanone, 3-hexanone, 3-methyl-2-pentanone, and 4-methyl-2-pentanone. Rate constants are measured under pseudo-first-order kinetics at temperatures ranging from 866 K to 1375 K and pressures near 1.5 atm. The reported high-temperature rate constant measurements are the first direct measurements for these ketones under combustion-relevant conditions. The effects of the position of the carbonyl group (C=O) and of methyl (CH3) branching on the overall rate constant with OH are examined. Using previously published data, rate constant expressions covering low-to-high temperatures are developed for acetone, 2-butanone, 3-pentanone, and the hexanone isomers studied here. These Arrhenius expressions are used to devise rate rules for H-abstraction from various sites. Specifically, the current scheme is applied with good success to H-abstraction by OH from a series of n-ketones. Finally, general expressions for primary and secondary site-specific H-abstraction by OH from ketones are proposed as follows (the subscript numbers indicate the number of carbon atoms bonded to the next-nearest-neighbor carbon atom, the subscript CO indicates that the abstraction is from a site next to the carbonyl group (C=O), and the prime is used to differentiate different neighboring environments of a methylene group): P1,CO = 7.38 × 10^-14 exp(-274 K/T) + 9.17 × 10^-12 exp(-2499 K/T) (285-1355 K); S10,CO = 1.20 × 10^-11 exp(-2046 K/T) + 2.20 × 10^-13 exp(160 K/T) (222-1464 K); S11,CO = 4.50 × 10^-11 exp(-3000 K/T) + 8.50 × 10^-15 exp(1440 K/T) (248-1302 K); S11′,CO = 3.80 × 10^-11 exp(-2500 K/T) + 8.50 × 10^-15 exp(1550 K/T) (263-1370 K); S21,CO = 5.00 × 10^-11 exp(-2500 K/T) + 4.00 × 10^-13 exp(775 K/T) (297-1376 K). © 2014 the Partner Organisations.
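The two-term Arrhenius fits listed above are straightforward to evaluate; a small sketch with the P1,CO and S10,CO coefficients taken verbatim from the abstract (activation "energies" in kelvin and units as given there):

```python
import math

def two_term_arrhenius(a1, e1, a2, e2):
    """Build k(T) = A1*exp(-E1/T) + A2*exp(-E2/T), with E1, E2 in kelvin
    as in the abstract's site-specific H-abstraction expressions."""
    return lambda t: a1 * math.exp(-e1 / t) + a2 * math.exp(-e2 / t)

# Primary H-abstraction site next to the carbonyl group (fit: 285-1355 K).
k_p1_co = two_term_arrhenius(7.38e-14, 274.0, 9.17e-12, 2499.0)
# Secondary site S10,CO; note the positive exponent exp(+160 K/T),
# expressed here as E2 = -160 K (fit: 222-1464 K).
k_s10_co = two_term_arrhenius(1.20e-11, 2046.0, 2.20e-13, -160.0)
```

Summing such site-specific contributions over all abstraction sites of a given ketone is how the rate rules reconstruct the overall OH rate constant.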
Directory of Open Access Journals (Sweden)
Lorenzo Iorio
2018-03-01
Full Text Available Independent tests aiming to constrain the value of the cosmological constant Λ are usually difficult because of its extreme smallness (Λ ≃ 1 × 10^-52 m^-2, or 2.89 × 10^-122 in Planck units). Bounds on it from Solar System orbital motions determined with spacecraft tracking are currently at the ≃ 10^-43–10^-44 m^-2 (5–1 × 10^-113 in Planck units) level, but they may turn out to be optimistic since Λ has not yet been explicitly modeled in the planetary data reductions. Accurate (σ_τp ≃ 1–10 μs) timing of expected pulsars orbiting the Black Hole at the Galactic Center, preferably along highly eccentric and wide orbits, might, at least in principle, improve the planetary constraints by several orders of magnitude. By looking at the average time shift per orbit Δδτ̄_p^Λ, an S2-like orbital configuration with e = 0.8839, P_b = 16 yr would permit a preliminary upper bound of the order of Λ ≲ 9 × 10^-47 m^-2 (≲ 2 × 10^-116 in Planck units) if only σ_τp were to be considered. Our results can be easily extended to modified models of gravity using Λ-type parameters.
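The conversion between the SI and Planck-unit values quoted above is just multiplication by the squared Planck length, l_P^2 = ħG/c^3. A quick sketch using rounded CODATA-style constants and the observed Λ ≈ 1.1 × 10^-52 m^-2 (the abstract rounds this to 1 × 10^-52 while quoting the corresponding Planck value):

```python
# Convert a cosmological constant from SI (m^-2) to Planck units by
# multiplying with the squared Planck length l_P^2 = hbar*G/c^3.
HBAR = 1.054571817e-34   # J s
G = 6.67430e-11          # m^3 kg^-1 s^-2
C = 2.99792458e8         # m/s

lp2 = HBAR * G / C**3    # squared Planck length, ~2.61e-70 m^2
lam_si = 1.1e-52         # observed cosmological constant, m^-2
lam_planck = lam_si * lp2
```

The same factor turns the proposed pulsar-timing bound of 9 × 10^-47 m^-2 into the ~2 × 10^-116 Planck-unit figure quoted in the abstract.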
Gauge theories, time-dependence of the gravitational constant and antigravity in the early universe
International Nuclear Information System (INIS)
Linde, A.D.
1980-01-01
It is shown that the interaction of the gravitational field with matter leads to a strong modification of the effective gravitational constant in the early universe. In certain cases this leads even to the change of sign of the gravitational constant, i.e. to antigravity in the early universe. (orig.)
New determination of the gravitational constant G with time-of-swing method
International Nuclear Information System (INIS)
Tu Liangcheng; Li Qing; Wang Qinglan; Shao Chenggang; Yang Shanqing; Liu Linxia; Liu Qi; Luo Jun
2010-01-01
A new determination of the Newtonian gravitational constant G is presented by using a torsion pendulum with the time-of-swing method. Compared with our previous measurement with the same method, several improvements greatly reduced the uncertainties as follows: (i) two stainless steel spheres with more homogeneous density are used as the source masses instead of the cylinders used in the previous experiment, and the offset of the mass center from the geometric center is measured and found to be much smaller than that of the cylinders; (ii) a rectangular glass block is used as the main body of the pendulum, which has fewer vibration modes and hence improves the stability of the period and reduces the uncertainty of the moment of inertia; (iii) both the pendulum and the source masses are placed in the same vacuum chamber to reduce the error of measuring the relative positions; (iv) changing the configurations between the 'near' and 'far' positions is remotely operated by using a stepper motor to lower the environmental disturbances; and (v) the anelastic effect of the torsion fiber is measured directly for the first time by using two disk pendulums with the help of a high-Q quartz fiber. We have performed two independent G measurements, and the two G values differ by only 9 ppm. The combined value of G is (6.673 49 ± 0.000 18) × 10^-11 m^3 kg^-1 s^-2 with a relative uncertainty of 26 ppm.
Bilateral control of master-slave manipulators with constant time delay.
Forouzantabar, A; Talebi, H A; Sedigh, A K
2012-01-01
This paper presents a novel teleoperation controller for a nonlinear master-slave robotic system with constant time delay in the communication channel. The proposed controller enables the teleoperation system to compensate for human and environmental disturbances while achieving master-slave position coordination in both free motion and contact situations. The present work extends the passivity-based architecture of the earlier work of Lee and Spong (2006) [14] to improve position tracking, and consequently transparency, in the face of disturbances and environmental contacts. The proposed controller employs a PID controller on each side to overcome some limitations of a PD controller and guarantee improved performance. Moreover, by using the Fourier transform and Parseval's identity in the frequency domain, we demonstrate that this new PID controller preserves the passivity of the system. Simulation and semi-experimental results show that the PID controller's tracking performance is superior to that of the PD controller in slave/environment contacts. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
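A discrete PID loop of the kind used on each side of the teleoperator can be sketched as follows. The unit-mass plant, the step scenario, and all gain values are illustrative assumptions, not values from the paper:

```python
class PID:
    """Discrete PID controller (the derivative kick on the very first
    sample is ignored for simplicity in this sketch)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Track a unit step in the master position with a unit-mass "slave".
pid = PID(kp=40.0, ki=10.0, kd=12.0, dt=0.001)
x, v, target = 0.0, 0.0, 1.0
for _ in range(20000):                 # 20 s of simulated time
    force = pid.update(target - x)     # PID on the position error
    v += force * pid.dt                # unit-mass double integrator
    x += v * pid.dt
```

The integral term is what removes the steady-state offset a pure PD controller would leave under a constant disturbance, which is the limitation the paper's PID design addresses.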
Construction and Start-up of a Large-Volume Thermostat for Dielectric-Constant Gas Thermometry
Merlone, A.; Moro, F.; Zandt, T.; Gaiser, C.; Fellmuth, B.
2010-07-01
A liquid-bath thermostat with a volume of about 800 L was designed to provide a suitable thermal environment for a dielectric-constant gas thermometer (DCGT) in the range from the triple point of mercury to the melting point of gallium. In this article, results obtained with the unique, huge thermostat without the DCGT measuring chamber are reported to demonstrate the capability of controlling the temperature of very large systems at a metrological level. First tests showed that the bath, together with its temperature controller, provides a temperature variation of less than ±0.5 mK peak-to-peak. This temperature instability could be maintained over a period of several days. In the central working volume (diameter 500 mm, height 650 mm), in which the vacuum chamber containing the measuring system of the DCGT will be placed later, the temperature inhomogeneity has been demonstrated to be well below 1 mK.
Are fundamental constants really constant?
International Nuclear Information System (INIS)
Norman, E.B.
1986-01-01
Reasons for suspecting that fundamental constants might change with time are reviewed. Possible consequences of such variations are examined. The present status of experimental tests of these ideas is discussed.
Roatta , Luca
2017-01-01
Assuming that space and time can only take discrete values, we obtain an expression for the gravitational potential energy that at large distance coincides with the Newtonian one. In very precise circumstances it coincides with the relativistic mass-energy relation: this shows that the Universe is a black hole in which all bodies are subjected to an acceleration toward the border of the Universe itself. Since the Universe is a black hole with a fixed radius, we can obtain the density of the Unive...
Miller, Jeff; Sproesser, Gudrun; Ulrich, Rolf
2008-01-01
In two experiments, we used response signals (RSs) to control processing time and trace out speed-accuracy trade-off (SAT) functions in a difficult perceptual discrimination task. Each experiment compared performance in blocks of trials with constant and, hence, temporally predictable RS lags against performance in blocks with variable, unpredictable RS lags. In both experiments, essentially equivalent SAT functions were observed with constant and variable RS lags. We conclude that there is l...
OPTIMAL TIME FOR SUBSTITUTION OF Eucalyptus spp POPULATIONS – THE CASE OF CONSTANT TECHNOLOGY
Directory of Open Access Journals (Sweden)
Álvaro Nogueira de Souza
2001-01-01
Full Text Available The few studies on the renewal of Eucalyptus spp populations carried out in Brazil assume constant technology. This assumption simplifies modeling how variables such as income, costs, discount rates, and yield affect this activity. The reason for not considering the gains earned through technological progress is the lack of a specific dynamic model. This study was carried out aiming to determine the forest rotation using values from the sixties (the beginning of the tax-exemption programme) and current values (the nineties) for wood destined for cellulose and charcoal production; to determine the moment of substitution of a population that presents the same yield and the same cost structure through time, as well as how many cuttings should be done until the final cycle; to determine how many cuttings should be done until substitution (substitution chain); and to verify the sensitivity of the substitution time to variations in discount rates, wood prices, yield, land costs, harvesting costs, and coppice yield. The results were tested in a case study, employing the Gompertz function to determine the population yield. The Net Present Value method was used as the criterion for economic decision. It was concluded that: the forest rotation to produce charcoal in the sixties was at 13 years of age; the current rotation is at 7 years of age; the final cycle allows up to 13 cuttings, but considering the possibility of land leasing, the best alternative is to conduct the sprouts up to the third cutting; an increase in factors such as discount rates, wood prices, and yield reduced the cutting age; an increase in land costs did not affect the cutting ages; an increase in logging costs increased the cutting ages; the substitution of the population nowadays happens after 3 cuttings, while in the sixties it happened after 2 cuttings due to the smaller loss; an increase in factors such as discount rates, wood prices, logging costs and
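The decision criterion described above (a Gompertz yield curve combined with a present-value comparison over an infinite chain of identical rotations) can be sketched as follows. Every parameter value here is hypothetical, chosen only to show the mechanics, not taken from the study:

```python
import math

def gompertz(t, a=300.0, b=4.0, c=0.35):
    """Stand volume (m^3/ha) at age t years; a, b, c are hypothetical."""
    return a * math.exp(-b * math.exp(-c * t))

def npv_infinite(t, price=30.0, cost=1500.0, r=0.10):
    """Net present value of an infinite chain of identical rotations of
    length t (Faustmann-style criterion; all figures are illustrative)."""
    d = math.exp(-r * t)
    return (price * gompertz(t) * d - cost) / (1.0 - d)

# The economically optimal rotation maximizes NPV over candidate ages.
t_opt = max(range(1, 21), key=npv_infinite)
```

Raising the discount rate or the stumpage price in this sketch shortens the optimal rotation, the same qualitative sensitivity the study reports.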
International Nuclear Information System (INIS)
Torquato, S.; Kim, I.C.; Cule, D.
1999-01-01
We generalize the Brownian motion simulation method of Kim and Torquato [J. Appl. Phys. 68, 3892 (1990)] to compute the effective conductivity, dielectric constant and diffusion coefficient of digitized composite media. This is accomplished by first generalizing the first-passage-time equations to treat first-passage regions of arbitrary shape. We then develop the appropriate first-passage-time equations for digitized media: first-passage squares in two dimensions and first-passage cubes in three dimensions. A severe test case to prove the accuracy of the method is the two-phase periodic checkerboard in which conduction, for sufficiently large phase contrasts, is dominated by corners that join two conducting-phase pixels. Conventional numerical techniques (such as finite differences or elements) do not accurately capture the local fields here for reasonable grid resolution and hence lead to inaccurate estimates of the effective conductivity. By contrast, we show that our algorithm yields accurate estimates of the effective conductivity of the periodic checkerboard for widely different phase conductivities. Finally, we illustrate our method by computing the effective conductivity of the random checkerboard for a wide range of volume fractions and several phase contrast ratios. These results always lie within rigorous four-point bounds on the effective conductivity. copyright 1999 American Institute of Physics
Bîrlea, Sinziana I; Corley, Gavin J; Bîrlea, Nicolae M; Breen, Paul P; Quondamatteo, Fabio; OLaighin, Gearóid
2009-01-01
We propose a new method for extracting the electrical properties of human skin, based on time-constant analysis of its exponential response to impulse stimulation. This analysis yielded an adjacent finding: stratum corneum electroporation can be detected using the same method. We have observed that a one-time-constant model is appropriate for describing the electrical properties of human skin at low-amplitude applied voltages (≤30 V). Higher voltage amplitudes (>30 V) have been proven to create pores in the skin's stratum corneum, which offer a new, lower-resistance pathway for the passage of current through the skin. Our data show that when pores are formed in the stratum corneum they can be detected in vivo, due to the fact that a second time constant describes current flow through them.
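The one-time-constant analysis described above amounts to fitting a single exponential to the measured response; a minimal sketch on synthetic data (the 1.2 ms value and the voltage scale are made up for illustration, not taken from the paper):

```python
import numpy as np

def fit_time_constant(t, v):
    """Estimate tau of v(t) = V0 * exp(-t / tau) by a straight-line fit
    to log(v): the slope of log(v) versus t is -1/tau."""
    slope, _ = np.polyfit(t, np.log(v), 1)
    return -1.0 / slope

t = np.linspace(0.0, 5e-3, 100)          # 5 ms of samples
v = 30.0 * np.exp(-t / 1.2e-3)           # synthetic response, tau = 1.2 ms
tau = fit_time_constant(t, v)
```

A response that is poorly fit by one exponential (a second time constant appearing) is precisely the signature the authors use to detect electroporation.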
Saengow, C.; Giacomin, A. J.
2017-12-01
The Oldroyd 8-constant framework for continuum constitutive theory contains a rich diversity of popular special cases for polymeric liquids. In this paper, we use part of our exact solution for shear stress to arrive at unique exact analytical solutions for the normal stress difference responses to large-amplitude oscillatory shear (LAOS) flow. The nonlinearity of the polymeric liquids, triggered by LAOS, causes these responses at even multiples of the test frequency. We call responses at a frequency higher than twice the test frequency higher harmonics. We find the new exact analytical solutions to be compact and intrinsically beautiful. These solutions reduce to those of our previous work on the special case of the corotational Maxwell fluid. Our solutions also agree with our new truncated Goddard integral expansion for the special case of the corotational Jeffreys fluid. The limiting behaviors of these exact solutions also yield new explicit expressions. Finally, we use our exact solutions to see how η∞ affects the normal stress differences in LAOS.
Miller, Jeff; Sproesser, Gudrun; Ulrich, Rolf
2008-07-01
In two experiments, we used response signals (RSs) to control processing time and trace out speed-accuracy trade-off (SAT) functions in a difficult perceptual discrimination task. Each experiment compared performance in blocks of trials with constant and, hence, temporally predictable RS lags against performance in blocks with variable, unpredictable RS lags. In both experiments, essentially equivalent SAT functions were observed with constant and variable RS lags. We conclude that there is little effect of advance preparation for a given processing time, suggesting that the discrimination mechanisms underlying SAT functions are driven solely by bottom-up information processing in perceptual discrimination tasks.
Large-time behavior of solutions to a reaction-diffusion system with distributed microstructure
Muntean, A.
2009-01-01
We study the large-time behavior of a class of reaction-diffusion systems with constant distributed microstructure arising when modeling diffusion and reaction in structured porous media. The main result of this Note is the following: As t → ∞ the macroscopic concentration vanishes, while
Badra, Jihad; Elwardani, Ahmed Elsaid; Farooq, Aamir
2014-01-01
-pentanone, and 4-methl-2-pentanone. Rate constants are measured under pseudo-first-order kinetics at temperatures ranging from 866 K to 1375 K and pressures near 1.5 atm. The reported high-temperature rate constant measurements are the first direct
Asymptotic structure of space-time with a positive cosmological constant
Kesavan, Aruna
In general relativity a satisfactory framework for describing isolated systems exists when the cosmological constant Λ is zero. The detailed analysis of the asymptotic structure of the gravitational field, which constitutes the framework of asymptotic flatness, lays the foundation for research in diverse areas in gravitational science. However, the framework is incomplete in two respects. First, asymptotic flatness provides well-defined expressions for physical observables such as energy and momentum as 'charges' of asymptotic symmetries at null infinity, ℐ⁺. But the asymptotic symmetry group, called the Bondi-Metzner-Sachs (BMS) group, is infinite-dimensional, and a tensorial expression for the 'charge' integral of an arbitrary BMS element is missing. We address this issue by providing a charge formula which is a 2-sphere integral over fields local to the 2-sphere and refers to no extraneous structure. The second, and more significant, shortcoming is that observations have established that Λ is not zero but positive in our universe. Can the framework describing isolated systems and their gravitational radiation be extended to incorporate this fact? In this dissertation we show that, unfortunately, the standard framework does not extend from the Λ = 0 case to the Λ > 0 case in a physically useful manner. In particular, we do not have an invariant notion of gravitational waves in the non-linear regime, nor an analog of the Bondi 'news tensor', nor positive energy theorems. In addition, we argue that the stronger boundary condition of conformal flatness of the intrinsic metric on ℐ⁺, which reduces the asymptotic symmetry group from Diff(ℐ) to the de Sitter group, is insufficient to characterize gravitational fluxes and is physically unreasonable. To obtain guidance for the full non-linear theory with Λ > 0, linearized gravitational waves in de Sitter space-time are analyzed in
Optical timing receiver for the NASA laser ranging system. Part I. Constant-fraction discriminator
International Nuclear Information System (INIS)
Leskovar, B.; Lo, C.C.
1975-01-01
Position-resolution capabilities of the NASA laser ranging system are essentially determined by the time-resolution capabilities of its optical timing receiver. The optical timing receiver consists of a fast photoelectric device, typically a microchannel-plate photomultiplier or an avalanche photodiode detector, a timing discriminator, a high-precision time-interval digitizer, and a signal-processing system. The time-resolution capabilities of the receiver are determined by the photoelectron time spread of the photoelectric device, the time walk and resolution characteristics of the timing discriminator, and the time-interval digitizer. It is thus necessary to evaluate available fast photoelectronic devices with respect to their time-resolution capabilities, and to design a very low time walk timing discriminator and a high-precision time digitizer to be used in the laser ranging system receiver. (auth)
Dogoe, Maud S.; Banda, Devender R.; Lock, Robin H.; Feinstein, Rita
2011-01-01
This study examined the effectiveness of the constant time delay procedure for teaching two young adults with autism to read, define, and state the contextual meaning of keywords on product warning labels of common household products. Training sessions were conducted in a dyad format using flash cards. Results indicated that both participants…
Fukahata, Yukitoshi; Matsu'ura, Mitsuhiro
2018-02-01
The viscoelastic deformation of an elastic-viscoelastic composite system is significantly different from that of a simple viscoelastic medium. Here, we show that complicated transient deformation due to viscoelastic stress relaxation after a megathrust earthquake can occur even in a very simple situation, in which an elastic surface layer (lithosphere) is underlain by a viscoelastic substratum (asthenosphere) under gravity. Although the overall decay rate of the system is controlled by the intrinsic relaxation time constant of the asthenosphere, the apparent decay time constant at each observation point differs significantly from place to place and is generally much longer than the intrinsic relaxation time constant of the asthenosphere. Nor is it rare for the sense of the displacement rate to reverse during the viscoelastic relaxation. If we do not bear these points in mind, we may draw false conclusions from observed deformation data. Such complicated transient behavior can be explained mathematically from the character of the viscoelastic solution: for an elastic-viscoelastic layered half-space, the viscoelastic solution is expressed as a superposition of three decaying components with different relaxation time constants that depend on wavelength.
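The superposition described in the last sentence can be sketched numerically. The component amplitudes and time constants below are hypothetical, chosen only to show how superposed decaying components can reverse the sense of the displacement rate during relaxation even though each component decays monotonically:

```python
import numpy as np

# Hypothetical relaxation components at one observation point:
# amplitudes (mm) and relaxation time constants (yr).
amps = np.array([10.0, -6.0, 2.0])
taus = np.array([5.0, 20.0, 60.0])

def displacement(t):
    """Superposition of decaying components: u(t) = sum a_i * (1 - exp(-t/tau_i))."""
    return sum(a * (1.0 - np.exp(-t / tau)) for a, tau in zip(amps, taus))

t = np.linspace(0.0, 100.0, 2001)
u = displacement(t)
rate = np.gradient(u, t)

# The displacement rate changes sign during the relaxation:
sign_changes = int(np.sum(np.diff(np.sign(rate)) != 0))
print(sign_changes >= 1)  # True
```

The apparent decay seen at this point is governed by the interplay of all three time constants, not by any single intrinsic value, which is the paper's central caution.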
Energy Technology Data Exchange (ETDEWEB)
Bourguillot, R; Lohez, P [Commissariat a l' Energie Atomique, Saclay (France).Centre d' Etudes Nucleaires
1960-07-01
We have been led to study the design of an automatic device for the filling of liquid nitrogen traps at constant time intervals in connection with the maintenance of a type MS 5 mass spectrometer; in the tube of this apparatus it is necessary to maintain a vacuum of about 10⁻⁷ mm of mercury. The replenishing is done every four hours. The presence in the vacuum section of an electron multiplier has led us to provide a safety device making it impossible for mercury vapour to come into contact with either the copper tube or the multiplier in the event of an incident leading to the warming up of the traps. In case of a breakdown, the vacuum section is therefore brought up to atmospheric pressure by the introduction of nitrogen. (author)
Bellotti, E.; Broggini, C.; Di Carlo, G.; Laubenstein, M.; Menegazzo, R.
2018-05-01
Time modulations at the per mil level have been reported in the decay constants of several nuclei, with a period of one year (most cases) but also of about one month or one day. On the other hand, experiments with similar or better sensitivity have been unable to detect any modulation. In this letter we give the results of the activity study of two different sources: 40K and 226Ra. The two gamma spectrometry experiments have been performed underground at the Gran Sasso Laboratory, thereby suppressing the time dependent cosmic ray background. Briefly, our measurements reached sensitivities of 3.4 and 3.5 parts in 10⁶ for 40K and 226Ra, respectively (1 sigma), and they do not show any statistically significant evidence of time dependence of the decay constant. We also give the results of the activity measurement at the time of the two strong X-class solar flares which took place in September 2017. Our data do not show any unexpected time dependence of the decay rate of 40K in correspondence with the two flares. To the best of our knowledge, these are the most precise and accurate results on the stability of the decay constant as a function of time.
Real time simulation of large systems on mini-computer
International Nuclear Information System (INIS)
Nakhle, Michel; Roux, Pierre.
1979-01-01
Most simulation languages accept only an explicit formulation of differential equations, and logical variables hold no special status in them. The step size of the integration methods they offer is limited by the smallest time constant of the model submitted. The NEPTUNIX 2 simulation software has a language that accepts implicit equations and an integration method whose variable step size is not limited by the time constants of the model. This, together with strong optimization of the generated code for execution time and memory, makes NEPTUNIX 2 a basic tool for simulation on mini-computers. Since the logical variables are specific entities under centralized control, correct processing of discontinuities and synchronization with a real process are feasible. NEPTUNIX 2 is the industrial version of NEPTUNIX 1.
International Nuclear Information System (INIS)
Omel'yanov, G.A.
1995-07-01
The non-isothermal Cahn-Hilliard equations in the n-dimensional case (n = 2,3) are considered. The interaction length is proportional to a small parameter, and the relaxation time is proportional to a constant. The asymptotic solutions describing two metastable processes are constructed and justified. The soliton type solution describes the first stage of separation in the alloy, when a set of "superheated liquid" appears inside the "solid" part. The Van der Waals type solution describes the free interface dynamics for large time. The smoothness of the temperature is established for large time and the Mullins-Sekerka problem describing the free interface is derived. (author). 46 refs
International Nuclear Information System (INIS)
Sylvia, J.I.; Chandar, S. Clement Ravi; Velusamy, K.
2014-01-01
Highlights: • Core temperature sensor was mathematically modeled. • Ramp signal generated during reactor operating condition is used. • Procedure and methodology has been demonstrated by applying it to FBTR. • Same technique will be implemented for all fast reactors. - Abstract: Core temperature monitoring system is an important component of reactor protection system in the current generation fast reactors. In this system, multiple thermocouples are housed inside a thermowell of fuel subassemblies. Response time of the thermocouple assembly forms an important input for safety analysis of fast reactor and hence frequent calibration/time constant estimation is essential. In fast reactors the central fuel subassembly is provided with bare fast response thermocouples to detect under cooling events in reactor and take proper safety action. On the other hand, thermocouples in thermowell are mainly used for blockage detection in individual fuel subassemblies. The time constant of thermocouples in thermowell can drift due to creep, vibration and thermal fatigue of the thermowell assembly. A novel method for in-situ estimation of time constant is proposed. This method uses the Safety Control Rod Accelerated Mechanism (SCRAM) or lowering of control Rod (LOR) signals of the reactor along with response of the central subassembly thermocouples as reference data. Validation of the procedure has been demonstrated by applying it to FBTR
Succinct Dynamic Cardinal Trees with Constant Time Operations for Small Alphabet
DEFF Research Database (Denmark)
Davoodi, Pooya; Satti, Srinivasa Rao
2011-01-01
) bits and performs the following operations in O(1) time: parent, child(i), label-child(alpha), degree, subtree-size, preorder, is-ancestor(x), insert-leaf(alpha), delete-leaf(alpha). The update times are amortized. The space is close to the information theoretic lower bound. The operations...
One-machine job-scheduling with non-constant capacity - Minimizing weighted completion times
Amaddeo, H.F.; Amaddeo, H.F.; Nawijn, W.M.; van Harten, Aart
1997-01-01
In this paper an n-job one-machine scheduling problem is considered, in which the machine capacity is time-dependent and jobs are characterized by their work content. The objective is to minimize the sum of weighted completion times. A necessary optimality condition is presented and we discuss some
International Nuclear Information System (INIS)
Yabu-uti, B.F.C.; Roversi, J.A.
2011-01-01
We propose an alternative scheme to implement a two-qubit controlled-R (rotation) gate in the hybrid atom-CCA (coupled cavities array) system. Our scheme results in a constant gating time and, with an adjustable qubit-bus coupling (atom-resonator), one can specify a particular rotation R on the target qubit. We believe that this proposal may open promising perspectives for networking quantum information processors and implementing distributed and scalable quantum computation. -- Highlights: → We propose an alternative two-qubit controlled-rotation gate implementation. → Our gate is realized in a constant gating time for any rotation. → A particular rotation on the target qubit can be specified by an adjustable qubit-bus coupling. → Our proposal may open promising perspectives for implementing distributed and scalable quantum computation.
Mikhailova, Valentina A; Malykhin, Roman E; Ivanov, Anatoly I
2018-05-16
To elucidate the regularities inherent in the kinetics of ultrafast charge recombination following photoinduced charge separation in donor-acceptor dyads in solutions, the simulations of the kinetics have been performed within the stochastic multichannel point-transition model. Increasing the solvent relaxation time scales has been shown to strongly vary the dependence of the charge recombination rate constant on the free energy gap. In slow relaxing solvents the non-equilibrium charge recombination occurring in parallel with solvent relaxation is very effective so that the charge recombination terminates at the non-equilibrium stage. This results in a crucial difference between the free energy gap laws for the ultrafast charge recombination and the thermal charge transfer. For the thermal reactions the well-known Marcus bell-shaped dependence of the rate constant on the free energy gap is realized while for the ultrafast charge recombination only a descending branch is predicted in the whole area of the free energy gap exceeding 0.2 eV. From the available experimental data on the population kinetics of the second and first excited states for a series of Zn-porphyrin-imide dyads in toluene and tetrahydrofuran solutions, an effective rate constant of the charge recombination into the first excited state has been calculated. The obtained rate constant being very high is nearly invariable in the area of the charge recombination free energy gap from 0.2 to 0.6 eV that supports the theoretical prediction.
International Nuclear Information System (INIS)
Bernardin, B.; Le Guillou, G.; Parcy, JP.
1981-04-01
Usual spectral methods based on temperature fluctuation analysis, which aim at thermocouple time constant identification, require equipment too sophisticated for on-line application. It is shown that numerical filtering is optimal for this application: the equipment is simpler than for spectral methods and fewer signal samples are needed for the same accuracy. The method is described and a parametric study was performed using a temperature noise simulator.
Conformally invariant amplitudes and field theory in a space-time of constant curvature
International Nuclear Information System (INIS)
Drummond, I.T.
1977-02-01
The problem of calculating the ultraviolet divergences of a field theory in a spherical space-time is reduced to analysing the pole structure of conformally invariant integrals which are analogous to amplitudes occurring in the theory of dual models. The calculations are illustrated with φ³ theory in six dimensions. (author)
How the constants in Hille-Nehari theorems depend on time scales
Czech Academy of Sciences Publication Activity Database
Řehák, Pavel
2006-01-01
Roč. 2006, - (2006), s. 1-15 ISSN 1687-1839 R&D Projects: GA ČR(CZ) GA201/01/0079; GA ČR(CZ) GP201/01/P041 Institutional research plan: CEZ:AV0Z10190503 Keywords: dynamic equation * time scales * oscillation criteria Subject RIV: BA - General Mathematics
McBits: fast constant-time code-based cryptography
Bernstein, D.J.; Chou, T.; Schwabe, P.
2015-01-01
This paper presents extremely fast algorithms for code-based public-key cryptography, including full protection against timing attacks. For example, at a 2^128 security level, this paper achieves a reciprocal decryption throughput of just 60493 cycles (plus cipher cost etc.) on a single Ivy Bridge
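The timing-attack protection discussed here rests on branchless, secret-independent control flow. A minimal sketch of that pattern, not the McBits implementation itself (which is bitsliced C), is the XOR-accumulate comparison; note that CPython makes no timing guarantees, so real code should rely on the stdlib primitive `hmac.compare_digest`:

```python
import hmac

def ct_equal(a: bytes, b: bytes) -> bool:
    """Branch-free byte-string comparison: accumulate all byte differences
    with OR/XOR so the loop does the same work regardless of where (or
    whether) a mismatch occurs. Illustrative only."""
    if len(a) != len(b):
        return False
    acc = 0
    for x, y in zip(a, b):
        acc |= x ^ y          # no early exit on the first mismatch
    return acc == 0

print(ct_equal(b"secret-tag", b"secret-tag"))   # True
print(ct_equal(b"secret-tag", b"secret-taG"))   # False
print(ct_equal(b"ab", b"ab") == hmac.compare_digest(b"ab", b"ab"))  # True
```

The same discipline, applied to table lookups and permutations rather than comparisons, is what the paper's constant-time decoding achieves at scale.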
International Nuclear Information System (INIS)
Heilbronn, Lawrence; Iwata, Yoshiyuki; Iwase, H.
2003-01-01
A method for reducing excessive constant-fraction-discriminator walk that utilizes experimental data in the off-line analysis stage is introduced. Excessive walk is defined here as any walk that leads to an overall timing resolution that is much greater than the intrinsic timing resolution of the detection system. The method is able to reduce the contribution to the overall timing resolution from the walk that is equal to or less than the intrinsic timing resolution of the detectors. Although the method is explained in the context of a neutron time-of-flight experiment, it is applicable to any data set that satisfies two conditions. (1) A measure of the signal amplitude for each event must be recorded on an event-by-event basis; and (2) There must be a distinguishable class of events present where the timing information is known a priori
International Nuclear Information System (INIS)
Heilbronn, L.; Iwata, Y.; Iwase, H.
2004-01-01
A method for reducing excessive constant-fraction-discriminator walk that utilizes experimental data in the off-line analysis stage is introduced. Excessive walk is defined here as any walk that leads to an overall timing resolution that is much greater than the intrinsic timing resolution of the detection system. The method is able to reduce the contribution to the overall timing resolution from the walk to a value that is equal to or less than the intrinsic timing resolution of the detectors. Although the method is explained in the context of a neutron time-of-flight experiment, it is applicable to any data set that satisfies two conditions: (1) a measure of the signal amplitude for each event must be recorded on an event-by-event basis; and (2) there must be a distinguishable class of events present where the timing information is known a priori
A frequency-domain method for solving linear time delay systems with constant coefficients
Jin, Mengshi; Chen, Wei; Song, Hanwen; Xu, Jian
2018-03-01
In an active control system, time delay will occur due to processes such as signal acquisition and transmission, calculation, and actuation. Time delay systems are usually described by delay differential equations (DDEs). Since it is hard to obtain an analytical solution to a DDE, numerical solution is of necessity. This paper presents a frequency-domain method that uses a truncated transfer function to solve a class of DDEs. The theoretical transfer function is the sum of infinite items expressed in terms of poles and residues. The basic idea is to select the dominant poles and residues to truncate the transfer function, thus ensuring the validity of the solution while improving the efficiency of calculation. Meanwhile, the guideline of selecting these poles and residues is provided. Numerical simulations of both stable and unstable delayed systems are given to verify the proposed method, and the results are presented and analysed in detail.
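The truncation idea can be sketched numerically. The pole set below is hypothetical (not from the paper), and the dominance measure |r_k|/|Re p_k|, weighting large residues at slowly decaying poles, is one plausible selection rule:

```python
import numpy as np

# Hypothetical pole/residue expansion of a stable transfer function:
# h(t) = sum_k r_k * exp(p_k * t). Complex poles come in conjugate pairs.
poles    = np.array([-0.2 + 3.0j, -0.2 - 3.0j, -5.0 + 40.0j, -5.0 - 40.0j, -30.0])
residues = np.array([ 1.0 - 0.5j,  1.0 + 0.5j,  0.05 + 0.1j,  0.05 - 0.1j,  0.2])

def impulse_response(t, p, r):
    """Real part of the pole/residue sum evaluated on a time grid."""
    return np.real(np.exp(np.outer(t, p)) @ r)

# Keep the poles with the largest dominance measure |r_k| / |Re p_k|.
dominance = np.abs(residues) / np.abs(poles.real)
keep = np.argsort(dominance)[-2:]          # retain the 2 dominant poles

t = np.linspace(0.0, 20.0, 2001)
h_full  = impulse_response(t, poles, residues)
h_trunc = impulse_response(t, poles[keep], residues[keep])

err = np.max(np.abs(h_full - h_trunc)) / np.max(np.abs(h_full))
print(err < 0.15)   # True: 2 of 5 poles already capture the response
```

The fast, strongly damped poles contribute only a brief initial transient, so discarding them preserves the solution while shrinking the computation, which is the efficiency argument of the paper.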
Dynamical Solution to the Problem of a Small Cosmological Constant and Late-Time Cosmic Acceleration
International Nuclear Information System (INIS)
Armendariz-Picon, C.; Mukhanov, V.; Steinhardt, Paul J.
2000-01-01
Increasing evidence suggests that most of the energy density of the universe consists of a dark energy component with negative pressure that causes the cosmic expansion to accelerate. We address why this component comes to dominate the universe only recently. We present a class of theories based on an evolving scalar field where the explanation is based entirely on internal dynamical properties of the solutions. In the theories we consider, the dynamics causes the scalar field to lock automatically into a negative pressure state at the onset of matter domination such that the present epoch is the earliest possible time consistent with nucleosynthesis restrictions when it can start to dominate
Lucke, Robert L.; Sirlin, Samuel W.; San Martin, A. M.
1992-01-01
For most imaging sensors, a constant (dc) pointing error is unimportant (unless large), but time-dependent (ac) errors degrade performance by either distorting or smearing the image. When properly quantified, the separation of the root-mean-square effects of random line-of-sight motions into dc and ac components can be used to obtain the minimum necessary line-of-sight stability specifications. The relation between stability requirements and sensor resolution is discussed, with a view to improving communication between the data analyst and the control systems engineer.
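The dc/ac separation can be illustrated with a short sketch on synthetic jitter data; the quadrature identity rms² = dc² + ac² is the standard decomposition of a root-mean-square into mean and zero-mean parts:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical line-of-sight jitter record (microradians): a constant
# bias (dc) plus zero-mean random motion (ac).
dc_true, ac_true = 3.0, 1.5
los = dc_true + ac_true * rng.standard_normal(100_000)

dc = los.mean()                  # dc error: shifts the image, usually benign
ac = los.std()                   # ac error: smears/distorts the image
rms = np.sqrt(np.mean(los**2))   # total rms motion

# The components add in quadrature: rms^2 = dc^2 + ac^2
print(abs(rms**2 - (dc**2 + ac**2)) < 1e-9)   # True
```

Because only the ac part degrades resolution, specifying stability in terms of the ac component alone yields the minimum necessary line-of-sight requirement, as the abstract argues.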
Walters, D M; Stringer, S M
2010-07-01
A key question in understanding the neural basis of path integration is how individual, spatially responsive, neurons may self-organize into networks that can, through learning, integrate velocity signals to update a continuous representation of location within an environment. It is of vital importance that this internal representation of position is updated at the correct speed, and in real time, to accurately reflect the motion of the animal. In this article, we present a biologically plausible model of velocity path integration of head direction that can solve this problem using neuronal time constants to effect natural time delays, over which associations can be learned through associative Hebbian learning rules. The model comprises a linked continuous attractor network and competitive network. In simulation, we show that the same model is able to learn two different speeds of rotation when implemented with two different values for the time constant, and without the need to alter any other model parameters. The proposed model could be extended to path integration of place in the environment, and path integration of spatial view.
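The role of the neuronal time constant can be caricatured with a single leaky-integrator unit. This is a toy sketch, not the paper's linked attractor/competitive network: it only shows that the time constant sets how fast the represented variable tracks a velocity signal:

```python
def integrate_heading(velocity, tau, dt=0.001, t_end=1.0):
    """Euler-integrated leaky integrator: tau * dr/dt = -r + v.
    The steady-state rate equals v, approached on a timescale set by tau."""
    r = 0.0
    for _ in range(int(t_end / dt)):
        r += dt / tau * (-r + velocity)
    return r

# A smaller time constant lets the unit track the velocity signal faster,
# the mechanism the model exploits to learn different rotation speeds.
fast = integrate_heading(velocity=1.0, tau=0.05)
slow = integrate_heading(velocity=1.0, tau=0.50)
print(fast > slow)   # True: shorter tau -> faster approach to v
```

In the full model the same change of time constant, with all other parameters fixed, is what allows two different rotation speeds to be learned.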
International Nuclear Information System (INIS)
Ishak, Mustapha
2008-01-01
The contributions of the cosmological constant to the deflection angle and the time delays are derived from the integration of the gravitational potential as well as from Fermat's principle. The findings are in agreement with recent results using exact solutions to Einstein's equations and reproduce precisely the new Λ term in the bending angle and the lens equation. The consequences on time-delay expressions are explored. While it is known that Λ contributes to the gravitational time delay, it is shown here that a new Λ term appears in the geometrical time delay as well. Although these newly derived terms are perhaps small for current observations, they do not cancel out as previously claimed. Moreover, as shown before, at galaxy cluster scale, the Λ contribution can be larger than the second-order term in the Einstein deflection angle for several cluster lens systems.
Reschke, M. F.; Kozlovskaya, I. B.; Kofman, I. S.; Tomilovskaya, E. S.; Cerisano, J. M.; Bloomberg, J. J.; Stenger, M. B.; Platts, S. H.; Rukavishnikov, I. V.; Fomina, E. V.;
2015-01-01
INTRODUCTION Testing of crew responses following long-duration flights has not previously been possible until a minimum of more than 24 hours after landing. As a result, it has not been possible to determine the trend of the early recovery process, nor to accurately assess the full impact of the decrements associated with long-duration flight. To overcome these limitations, both the Russian and U.S. programs have implemented joint testing at the Soyuz landing site. This International Space Station research effort has been identified as the functional Field Test, and represents data collected on NASA, Russian, European Space Agency, and Japanese Aerospace Exploration Agency crews. RESEARCH The primary goal of this research is to determine the functional abilities of long-duration space flight crews beginning as soon after landing as possible on the day of landing (typically within 1 to 1.5 hours). This goal has both sensorimotor and cardiovascular elements. To date, a total of 15 subjects have participated in a 'pilot' version of the full 'field test'. The full version of the 'field test' will assess functional sensorimotor measurements including hand/eye coordination, standing from a seated position (sit-to-stand), walking normally without falling, measurement of dynamic visual acuity, discriminating different forces generated with the hands (both strength and the ability to judge just-noticeable differences of force), standing from a prone position, coordinated walking involving tandem heel-to-toe placement (tested with eyes both closed and open), walking normally while avoiding obstacles of differing heights, and determining postural ataxia while standing (measurement of quiet stance). Sensorimotor performance has been obtained using video records and data from body-worn inertial sensors. The cardiovascular portion of the investigation has measured blood pressure and heart rate during a timed stand test in conjunction with postural ataxia
Is average daily travel time expenditure constant? In search of explanations for an increase in average travel time.
van Wee, B.; Rietveld, P.; Meurs, H.
2006-01-01
Recent research suggests that the average time spent travelling by the Dutch population has increased over the past decades. However, different data sources show different levels of increase. This paper explores possible causes for this increase. They include a rise in incomes, which has probably
Nair, S. P.; Righetti, R.
2015-05-01
Recent elastography techniques focus on imaging properties of materials which can be modeled as viscoelastic or poroelastic. These techniques often require fitting temporal strain data, acquired from either a creep or stress-relaxation experiment, to a mathematical model using least square error (LSE) parameter estimation. It is known that the strain versus time relationships for tissues undergoing creep compression are non-linear. In non-linear cases, devising a measure of estimate reliability can be challenging. In this article, we have developed and tested a method, which we call Resimulation of Noise (RoN), to provide a reliability measure for non-linear LSE parameter estimates. RoN estimates the spread of parameter estimates from a single experiment realization. We have tested RoN specifically for the case of axial strain time constant parameter estimation in poroelastic media. Our tests show that the RoN-estimated precision has a linear relationship to the actual precision of the LSE estimator. We have also compared results from the RoN-derived measure of reliability against a commonly used reliability measure: the correlation coefficient (CorrCoeff). Our results show that CorrCoeff is a poor measure of estimate reliability for non-linear LSE parameter estimation. While RoN is tested here only for axial strain time constant imaging, a general algorithm is provided for use in all LSE parameter estimation.
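A minimal sketch of the RoN idea follows, assuming a simple single-exponential creep model and a grid-search LSE fit; both are stand-ins for the paper's actual model and optimizer. The reliability measure is the spread of re-fitted parameters over synthetic realizations built from the single fitted curve plus residual-level noise:

```python
import numpy as np

rng = np.random.default_rng(1)

def creep_strain(t, s0, tau):
    """Assumed creep curve: strain rises to s0 with time constant tau."""
    return s0 * (1.0 - np.exp(-t / tau))

def lse_fit(t, y, taus=np.linspace(0.05, 5.0, 500)):
    """Grid-search LSE: for each candidate tau the amplitude s0 enters
    linearly, so it has a closed-form least-squares value."""
    best = (np.inf, None, None)
    for tau in taus:
        basis = 1.0 - np.exp(-t / tau)
        s0 = (basis @ y) / (basis @ basis)
        err = np.sum((y - s0 * basis) ** 2)
        if err < best[0]:
            best = (err, s0, tau)
    return best[1], best[2]

# One noisy "experiment" realization
t = np.linspace(0.0, 4.0, 200)
y = creep_strain(t, s0=1.0, tau=0.8) + 0.02 * rng.standard_normal(t.size)
s0_hat, tau_hat = lse_fit(t, y)

# Resimulation of Noise: re-fit many synthetic realizations built from the
# fitted curve plus noise at the residual level; the spread of the tau
# estimates is the reliability measure for this single experiment.
noise = np.std(y - creep_strain(t, s0_hat, tau_hat))
tau_samples = []
for _ in range(100):
    y_sim = creep_strain(t, s0_hat, tau_hat) + noise * rng.standard_normal(t.size)
    tau_samples.append(lse_fit(t, y_sim)[1])
print(round(float(tau_hat), 2), float(np.std(tau_samples)) < 0.2)
```

Unlike a correlation coefficient, this spread directly targets parameter precision, which is why the paper finds it tracks the actual LSE estimator precision.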
Energy Technology Data Exchange (ETDEWEB)
Typel, S; Wolter, H H [Sektion Physik, Univ. Muenchen, Garching (Germany)
1998-06-01
Nuclear matter and ground state properties for (proton and neutron) semi-closed shell nuclei are described in relativistic mean field theory with coupling constants which depend on the vector density. The parametrization of the density dependence for {sigma}-, {omega}- and {rho}-mesons is obtained by fitting to properties of nuclear matter and some finite nuclei. The equation of state for symmetric and asymmetric nuclear matter is discussed. Finite nuclei are described in Hartree approximation, including a charge and an improved center-of-mass correction. Pairing is considered in the BCS approximation. Special attention is directed to the predictions for properties at the neutron and proton driplines, e.g. for separation energies, spin-orbit splittings and density distributions. (orig.)
Parrish, K. E.; Zhang, J.; Teasdale, E.
2007-12-01
An exact analytical solution to the one-dimensional partial differential equation for transient groundwater flow in a homogeneous, confined, horizontal aquifer is derived using the Laplace transformation. The theoretical analysis is based on the assumptions that the aquifer is homogeneous and one-dimensional (horizontal); confined between impermeable formations on top and bottom; and of infinite horizontal extent and constant thickness. It is also assumed that there is only a single pumping well penetrating the entire aquifer; flow is everywhere horizontal within the aquifer to the well; the well is pumping at a constant discharge rate; the well diameter is infinitesimally small; and the hydraulic head is uniform throughout the aquifer before pumping. Similar to the Theis solution, this solution is suited to determining transmissivity and storativity for a two-dimensional, vertically confined aquifer, such as a long vertically fractured zone of high permeability within low-permeability rocks or a long, high-permeability trench inside a low-permeability porous medium. In addition, it can be used to analyze time-drawdown responses to pumping and injection in similar settings. The solution can also be used to approximate groundwater flow under unconfined conditions if (1) the variation of transmissivity is negligible (the water table variation is small in comparison to the saturated thickness); and (2) the unsaturated flow is negligible. The errors associated with applying the solution to unconfined conditions depend on the accuracy of these two assumptions. The solution can also be used to assess the impacts of recharge from a seasonal river or irrigation canal on the groundwater system by assuming uniform, time-constant recharge along the river or canal. This paper presents the details of the derivation of the analytical solution. The analytical solution is compared to numerical simulation results with example cases. Its accuracy is also assessed and
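For reference, the classical radial Theis solution that this 1-D solution parallels can be sketched as follows; the pumping-test parameter values are hypothetical, and the well function is evaluated from its standard convergent series:

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def well_function(u, terms=40):
    """Theis well function W(u) = E1(u) via its convergent series:
    W(u) = -gamma - ln(u) + sum_{n>=1} (-1)^(n+1) u^n / (n * n!),
    accurate for the small u typical of pumping-test times."""
    s = -GAMMA - math.log(u)
    term = 1.0                      # holds u^n / n!
    for n in range(1, terms + 1):
        term *= u / n
        s += ((-1) ** (n + 1)) * term / n
    return s

def drawdown(r, t, Q, T, S):
    """Theis drawdown s = Q/(4*pi*T) * W(u), with u = r^2 * S / (4*T*t)."""
    u = r * r * S / (4.0 * T * t)
    return Q / (4.0 * math.pi * T) * well_function(u)

# Hypothetical test: Q = 0.01 m^3/s, T = 1e-3 m^2/s, S = 1e-4, r = 50 m.
s1 = drawdown(r=50.0, t=3600.0,  Q=0.01, T=1e-3, S=1e-4)   # after 1 hour
s2 = drawdown(r=50.0, t=86400.0, Q=0.01, T=1e-3, S=1e-4)   # after 1 day
print(s2 > s1)   # True: drawdown grows with pumping time
```

Fitting observed time-drawdown data to curves like these is how transmissivity and storativity are recovered in practice; the paper's solution plays the same role for the 1-D fractured-zone and trench geometries it targets.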
Singh, Trailokyanath; Mishra, Pandit Jagatananda; Pattanayak, Hadibandhu
2017-12-01
In this paper, an economic order quantity (EOQ) inventory model for a deteriorating item is developed with the following characteristics: (i) The demand rate is deterministic and two-staged, i.e., it is constant in first part of the cycle and linear function of time in the second part. (ii) Deterioration rate is time-proportional. (iii) Shortages are not allowed to occur. The optimal cycle time and the optimal order quantity have been derived by minimizing the total average cost. A simple solution procedure is provided to illustrate the proposed model. The article concludes with a numerical example and sensitivity analysis of various parameters as illustrations of the theoretical results.
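As a point of reference, the constant-demand, no-deterioration EOQ baseline that the paper's model generalizes can be sketched as follows (the numbers are hypothetical; the paper's optimum additionally accounts for two-stage demand and time-proportional deterioration):

```python
import math

def eoq(demand, order_cost, holding_cost):
    """Classical EOQ: Q* = sqrt(2*D*K/h) balances ordering and holding cost.
    Baseline only -- the paper's model layers two-stage demand and
    time-proportional deterioration on top of this structure."""
    return math.sqrt(2.0 * demand * order_cost / holding_cost)

def total_cost(q, demand, order_cost, holding_cost):
    """Average annual cost: ordering (D/q * K) plus holding (h*q/2)."""
    return demand / q * order_cost + holding_cost * q / 2.0

# Hypothetical data: demand 1200 units/yr, $50 per order, $2 per unit-year.
q_star = eoq(1200, 50.0, 2.0)
c_opt = total_cost(q_star, 1200, 50.0, 2.0)
print(round(q_star, 1))   # 244.9

# Any deviation from q_star raises the average total cost (convexity).
```

Minimizing the total average cost over the cycle time, as the paper does, reduces to this square-root formula in the degenerate constant-demand case.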
Stompor, Radoslaw; Gorski, Krzysztof M.
1994-01-01
We obtain predictions for cosmic microwave background anisotropies at angular scales near 1 deg in the context of cold dark matter models with a nonzero cosmological constant, normalized to the Cosmic Background Explorer (COBE) Differential Microwave Radiometer (DMR) detection. The results are compared to those computed in the matter-dominated models. We show that the coherence length of the Cosmic Microwave Background (CMB) anisotropy is almost insensitive to cosmological parameters, and the rms amplitude of the anisotropy increases moderately with decreasing total matter density, while being most sensitive to the baryon abundance. We apply these results in the statistical analysis of the published data from the UCSB South Pole (SP) experiment (Gaier et al. 1992; Schuster et al. 1993). We reject most of the Cold Dark Matter (CDM)-Lambda models at the 95% confidence level when both SP scans are simulated together (although the combined data set renders less stringent limits than the Gaier et al. data alone). However, the Schuster et al. data considered alone as well as the results of some other recent experiments (MAX, MSAM, Saskatoon), suggest that typical temperature fluctuations on degree scales may be larger than is indicated by the Gaier et al. scan. If so, CDM-Lambda models may indeed provide, from a point of view of CMB anisotropies, an acceptable alternative to flat CDM models.
Use of a large time-compensated scintillation detector in neutron time-of-flight measurements
International Nuclear Information System (INIS)
Goodman, C.D.
1979-01-01
A scintillator for neutron time-of-flight measurements is positioned at a desired angle with respect to the neutron beam, and as a function of the energy thereof, such that the sum of the transit times of the neutrons and photons in the scintillator are substantially independent of the points of scintillations within the scintillator. Extrapolated zero timing is employed rather than the usual constant fraction timing. As a result, a substantially larger scintillator can be employed that substantially increases the data rate and shortens the experiment time. 3 claims
Discrete-time optimal control and games on large intervals
Zaslavski, Alexander J
2017-01-01
Devoted to the structure of approximate solutions of discrete-time optimal control problems and approximate solutions of dynamic discrete-time two-player zero-sum games, this book presents results on properties of approximate solutions in an interval that is independent lengthwise, for all sufficiently large intervals. Results concerning the so-called turnpike property of optimal control problems and zero-sum games in the regions close to the endpoints of the time intervals are the main focus of this book. The description of the structure of approximate solutions on sufficiently large intervals and its stability will interest graduate students and mathematicians in optimal control and game theory, engineering, and economics. This book begins with a brief overview and moves on to analyze the structure of approximate solutions of autonomous nonconcave discrete-time optimal control Lagrange problems.Next the structures of approximate solutions of autonomous discrete-time optimal control problems that are discret...
Constant physics and characteristics of fundamental constant
International Nuclear Information System (INIS)
Tarrach, R.
1998-01-01
We present some evidence which supports a surprising physical interpretation of the fundamental constants. First, we relate two of them through the renormalization group. This leaves as many fundamental constants as base units. Second, we introduce an adimensional system of units without fundamental constants. Third, and most important, we find, while interpreting the units of the adimensional system, that in all cases accessible to experimentation the fundamental constants indicate either discretization at small values or boundedness at large values of the corresponding physical quantity. (Author) 12 refs
International Nuclear Information System (INIS)
Billard, I.; Luetzenkirchen, K.
2003-01-01
Equilibrium constants for aqueous reactions between lanthanide or actinide ions and (in-) organic ligands contain important information for various radiochemical problems, such as nuclear reprocessing or the migration of radioelements in the geosphere. We study the conditions required to determine equilibrium constants by time-resolved fluorescence spectroscopy measurements. Based on a simulation study it is shown that the possibility to determine equilibrium constants depends upon the reaction rates in the photoexcited states of the lanthanide or actinide ions. (orig.)
Large Deviations for Two-Time-Scale Diffusions, with Delays
International Nuclear Information System (INIS)
Kushner, Harold J.
2010-01-01
We consider the problem of large deviations for a two-time-scale reflected diffusion process, possibly with delays in the dynamical terms. The Dupuis-Ellis weak convergence approach is used. It is perhaps the most intuitive and simplest for the problems of concern. The results have applications to the problem of approximating optimal controls for two-time-scale systems via use of the averaged equation.
Parallel time domain solvers for electrically large transient scattering problems
Liu, Yang
2014-09-26
Marching on in time (MOT)-based integral equation solvers represent an increasingly appealing avenue for analyzing transient electromagnetic interactions with large and complex structures. MOT integral equation solvers for analyzing electromagnetic scattering from perfect electrically conducting objects are obtained by enforcing electric field boundary conditions and implicitly advancing electric surface current densities in time by iteratively solving sparse systems of equations at all time steps. Contrary to finite-difference and finite-element competitors, these solvers apply to nonlinear and multi-scale structures comprising geometrically intricate and deep sub-wavelength features residing atop electrically large platforms. Moreover, they are high-order accurate, stable in the low- and high-frequency limits, and applicable to conducting and penetrable structures represented by highly irregular meshes. This presentation reviews some recent advances in the parallel implementations of time domain integral equation solvers, specifically those that leverage the multilevel plane-wave time-domain (PWTD) algorithm on modern manycore computer architectures including graphics processing units (GPUs) and distributed memory supercomputers. The GPU-based implementation achieves at least one order of magnitude speedup compared to serial implementations, while the distributed parallel implementation is highly scalable to thousands of compute nodes. A distributed parallel PWTD kernel has been adopted to solve time domain surface/volume integral equations (TDSIE/TDVIE) for analyzing transient scattering from large and complex-shaped perfectly electrically conducting (PEC)/dielectric objects involving ten million/tens of millions of spatial unknowns.
Time dispersion in large plastic scintillation neutron detectors
International Nuclear Information System (INIS)
De, A.; Dasgupta, S.S.; Sen, D.
1993-01-01
Time dispersion (TD) has been computed for large neutron detectors using plastic scintillators. It has been shown that TD seen by the PM tube does not necessarily increase with incident neutron energy, a result not fully in agreement with the usual finding
Vibration amplitude rule study for rotor under large time scale
International Nuclear Information System (INIS)
Yang Xuan; Zuo Jianli; Duan Changcheng
2014-01-01
The rotor is an important part of the rotating machinery; its vibration performance is one of the important factors affecting the service life. This paper presents both theoretical analyses and experimental demonstrations of the vibration rule of the rotor under large time scales. The rule can be used for the service life estimation of the rotor. (authors)
The Large Observatory For x-ray Timing
DEFF Research Database (Denmark)
Feroci, M.; Herder, J. W. den; Bozzo, E.
2014-01-01
The Large Observatory For x-ray Timing (LOFT) was studied within ESA M3 Cosmic Vision framework and participated in the final down-selection for a launch slot in 2022-2024. Thanks to the unprecedented combination of effective area and spectral resolution of its main instrument, LOFT will study th...
Haldren, H. A.; Perey, D. F.; Yost, W. T.; Cramer, K. E.; Gupta, M. C.
2018-05-01
A digitally controlled instrument for conducting single-frequency and swept-frequency ultrasonic phase measurements has been developed based on a constant-frequency pulsed phase-locked-loop (CFPPLL) design. This instrument uses a pair of direct digital synthesizers to generate an ultrasonically transceived tone-burst and an internal reference wave for phase comparison. Real-time, constant-frequency phase tracking in an interrogated specimen is possible with a resolution of 0.000 38 rad (0.022°), and swept-frequency phase measurements can be obtained. Using phase measurements, an absolute thickness in borosilicate glass is presented to show the instrument's efficacy, and these results are compared to conventional ultrasonic pulse-echo time-of-flight (ToF) measurements. The newly developed instrument predicted the thickness with a mean error of -0.04 μm and a standard deviation of error of 1.35 μm. Additionally, the CFPPLL instrument shows a lower measured phase error in the absence of changing temperature and couplant thickness than high-resolution cross-correlation ToF measurements at a similar signal-to-noise ratio. By showing higher accuracy and precision than conventional pulse-echo ToF measurements and lower phase errors than cross-correlation ToF measurements, the new digitally controlled CFPPLL instrument provides high-resolution absolute ultrasonic velocity or path-length measurements in solids or liquids, as well as tracking of material property changes with high sensitivity. The ability to obtain absolute phase measurements allows for many new applications not possible with previous ultrasonic pulsed phase-locked loop instruments. In addition to improved resolution, swept-frequency phase measurements add useful capability in measuring properties of layered structures, such as bonded joints, or materials which exhibit non-linear frequency-dependent behavior, such as dispersive media.
Petrillo, M.; Cherubini, P.; Fravolini, G.; Ascher, J.; Schärer, M.; Synal, H.-A.; Bertoldi, D.; Camin, F.; Larcher, R.; Egli, M.
2015-09-01
Due to the large size and highly heterogeneous spatial distribution of deadwood, the time scales involved in the coarse woody debris (CWD) decay of Picea abies (L.) Karst. and Larix decidua Mill. in Alpine forests have been poorly investigated and are largely unknown. We investigated the CWD decay dynamics in an Alpine valley in Italy using the five-decay class system commonly employed for forest surveys, based on a macromorphological and visual assessment. For the decay classes 1 to 3, most of the dendrochronological samples were cross-dated to assess the time that had elapsed since tree death, but for decay classes 4 and 5 (poorly preserved tree rings) and some others not having enough tree rings, radiocarbon dating was used. In addition, density, cellulose and lignin data were measured for the dated CWD. The decay rate constants for spruce and larch were estimated on the basis of the density loss using a single negative exponential model. In the decay classes 1 to 3, the ages of the CWD were similar, varying between 1 and 54 years for spruce and 3 and 40 years for larch, with no significant differences between the classes; classes 1-3 are therefore not indicative of deadwood age. We found, however, distinct tree species-specific differences in decay classes 4 and 5, with larch CWD reaching an average age of 210 years in class 5 and spruce only 77 years. The mean CWD decay rate constants were 0.012 to 0.018 yr-1 for spruce and 0.005 to 0.012 yr-1 for larch. Time trends and half-lives for cellulose and lignin (using a multiple-exponential model) could be derived on the basis of the ages of the CWD. The half-lives for cellulose were 21 yr for spruce and 50 yr for larch. The half-life of lignin is considerably higher and may be more than 100 years in larch CWD.
Real-time simulation of large-scale floods
Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.
2016-08-01
Given the complexity of real-time hydrological conditions, the real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional, shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance the numerical stability, and an adaptive method is proposed to improve the running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.
Miller, Jacob; Sanders, Stephen; Miyake, Akimasa
2017-12-01
While quantum speed-up in solving certain decision problems by a fault-tolerant universal quantum computer has been promised, a timely research interest includes how far one can reduce the resource requirement to demonstrate a provable advantage in quantum devices without demanding quantum error correction, which is crucial for prolonging the coherence time of qubits. We propose a model device made of locally interacting multiple qubits, designed such that simultaneous single-qubit measurements on it can output probability distributions whose average-case sampling is classically intractable, under similar assumptions as the sampling of noninteracting bosons and instantaneous quantum circuits. Notably, in contrast to these previous unitary-based realizations, our measurement-based implementation has two distinctive features. (i) Our implementation involves no adaptation of measurement bases, leading output probability distributions to be generated in constant time, independent of the system size. Thus, it could be implemented in principle without quantum error correction. (ii) Verifying the classical intractability of our sampling is done by changing the Pauli measurement bases only at certain output qubits. Our usage of random commuting quantum circuits in place of computationally universal circuits allows a unique unification of sampling and verification, so they require the same physical resource requirements in contrast to the more demanding verification protocols seen elsewhere in the literature.
Fortage, Jérôme; Scarpaci, Annabelle; Viau, Lydie; Pellegrin, Yann; Blart, Errol; Falkenström, Magnus; Hammarström, Leif; Asselberghs, Inge; Kellens, Ruben; Libaers, Wim; Clays, Koen; Eng, Mattias P; Odobel, Fabrice
2009-09-14
We report the synthesis and the characterizations of a novel dyad composed of a zinc porphyrin (ZnP) linked to a gold porphyrin (AuP) through an ethynyl spacer. The UV/Vis absorption spectrum and the electrochemical properties clearly reveal that this dyad exhibits a strong electronic coupling in the ground state as evidenced by shifted redox potentials and the appearance of an intense charge-transfer band localized at lambda = 739 nm in dichloromethane. A spectroelectrochemical study of the dyad along with the parent homometallic system (i.e., ZnP-ZnP and AuP-AuP) was undertaken to determine the spectra of the reduced and oxidized porphyrin units. Femtosecond transient absorption spectroscopic analysis showed that the photoexcitation of the heterometallic dyad leads to an ultrafast formation of a charge-separated state ((+)ZnP-AuP(*)) that displays a particularly long lifetime (tau = 4 ns in toluene) for such a short separation distance. The molecular orbitals of the dyad were determined by DFT quantum-chemical calculations. This theoretical study confirms that the observed intense band at lambda = 739 nm corresponds to an interporphyrin charge-transfer transition from the HOMO orbital localized on the zinc porphyrin to LUMO orbitals localized on the gold porphyrin. Finally, a Hyper-Rayleigh scattering study shows that the dyad possesses a large first molecular hyperpolarizability coefficient (beta = 2100 x 10^(-30) esu at lambda = 1064 nm), thus highlighting the valuable nonlinear optical properties of this new type of push-pull porphyrin system.
Calibration of the fine-structure constant of graphene by time-dependent density-functional theory
Sindona, A.; Pisarra, M.; Vacacela Gomez, C.; Riccardi, P.; Falcone, G.; Bellucci, S.
2017-11-01
One of the amazing properties of graphene is the ultrarelativistic behavior of its loosely bound electrons, mimicking massless fermions that move with a constant velocity, inversely proportional to a fine-structure constant αg of the order of unity. The effective interaction between these quasiparticles is, however, better controlled by the coupling parameter αg* = αg/ɛ, which accounts for the dynamic screening due to the complex permittivity ɛ of the many-valence electron system. This concept was introduced in a couple of previous studies [Reed et al., Science 330, 805 (2010) and Gan et al., Phys. Rev. B 93, 195150 (2016)], where inelastic x-ray scattering measurements on crystal graphite were converted into an experimentally derived form of αg* for graphene, over an energy-momentum region on the eV Å^-1 scale. Here, an accurate theoretical framework is provided for αg*, using time-dependent density-functional theory in the random-phase approximation, with a cutoff in the interaction between excited electrons in graphene, which translates to an effective interlayer interaction in graphite. The predictions of the approach are in excellent agreement with the above-mentioned measurements, suggesting a calibration method to substantially improve the experimental derivation of αg*, which tends to a static limiting value of ~0.14. Thus, the ab initio calibration procedure outlined demonstrates the accuracy of perturbation expansion treatments for the two-dimensional gas of massless Dirac fermions in graphene, in parallel with quantum electrodynamics.
Time simulation of flutter with large stiffness changes
Karpel, Mordechay; Wieseman, Carol D.
1992-01-01
Time simulation of flutter, involving large local structural changes, is formulated with a state-space model that is based on a relatively small number of generalized coordinates. Free-free vibration modes are first calculated for a nominal finite-element model with relatively large fictitious masses located at the area of structural changes. A low-frequency subset of these modes is then transformed into a set of structural modal coordinates with which the entire simulation is performed. These generalized coordinates and the associated oscillatory aerodynamic force coefficient matrices are used to construct an efficient time-domain, state-space model for a basic aeroelastic case. The time simulation can then be performed by simply changing the mass, stiffness, and damping coupling terms when structural changes occur. It is shown that the size of the aeroelastic model required for time simulation with large structural changes at a few a priori known locations is similar to that required for direct analysis of a single structural case. The method is applied to the simulation of an aeroelastic wind-tunnel model. The diverging oscillations are followed by the activation of a tip-ballast decoupling mechanism that stabilizes the system but may cause significant transient overshoots.
Time-Sliced Perturbation Theory for Large Scale Structure I: General Formalism
Blas, Diego; Ivanov, Mikhail M.; Sibiryakov, Sergey
2016-01-01
We present a new analytic approach to describe large scale structure formation in the mildly non-linear regime. The central object of the method is the time-dependent probability distribution function generating correlators of the cosmological observables at a given moment of time. Expanding the distribution function around the Gaussian weight we formulate a perturbative technique to calculate non-linear corrections to cosmological correlators, similar to the diagrammatic expansion in a three-dimensional Euclidean quantum field theory, with time playing the role of an external parameter. For the physically relevant case of cold dark matter in an Einstein--de Sitter universe, the time evolution of the distribution function can be found exactly and is encapsulated by a time-dependent coupling constant controlling the perturbative expansion. We show that all building blocks of the expansion are free from spurious infrared enhanced contributions that plague the standard cosmological perturbation theory. This pave...
SmB6 electron-phonon coupling constant from time- and angle-resolved photoelectron spectroscopy
Sterzi, A.; Crepaldi, A.; Cilento, F.; Manzoni, G.; Frantzeskakis, E.; Zacchigna, M.; van Heumen, E.; Huang, Y. K.; Golden, M. S.; Parmigiani, F.
2016-08-01
SmB6 is a mixed valence Kondo system resulting from the hybridization between localized f electrons and delocalized d electrons. We have investigated its out-of-equilibrium electron dynamics by means of time- and angle-resolved photoelectron spectroscopy. The transient electronic population above the Fermi level can be described by a time-dependent Fermi-Dirac distribution. By solving a two-temperature model that well reproduces the relaxation dynamics of the effective electronic temperature, we estimate the electron-phonon coupling constant λ to range from 0.13 ± 0.03 to 0.04 ± 0.01. These extremes are obtained assuming a coupling of the electrons with either a phonon mode at 10 or 19 meV. A realistic value of the average phonon energy will give an actual value of λ within this range. Our results provide an experimental report on the material's electron-phonon coupling, contributing to both the electronic transport and the macroscopic thermodynamic properties of SmB6.
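The relaxation analysis above rests on a two-temperature model coupling an electronic bath to a lattice bath through the electron-phonon coupling. The sketch below is a minimal forward-Euler integration of such a model; the parameter values and the linear electronic heat capacity Ce(Te) = gamma * Te are illustrative assumptions, not the quantities fitted for SmB6.

```python
def two_temperature_model(Te0, Tl0, g, gamma, Cl, dt, steps):
    """Forward-Euler integration of a generic two-temperature model:
       Ce(Te) * dTe/dt = -g * (Te - Tl),  with Ce(Te) = gamma * Te
       Cl     * dTl/dt = +g * (Te - Tl)
    Te: effective electronic temperature, Tl: lattice temperature,
    g: electron-phonon coupling strength (all parameters illustrative)."""
    Te, Tl = Te0, Tl0
    for _ in range(steps):
        flow = g * (Te - Tl)            # energy flow from electrons to lattice
        Te -= dt * flow / (gamma * Te)  # electrons cool
        Tl += dt * flow / Cl            # lattice warms
    return Te, Tl

# Hot electrons relax toward the lattice temperature:
Te, Tl = two_temperature_model(1000.0, 300.0, g=1.0, gamma=0.001, Cl=2.0,
                               dt=0.001, steps=10000)
```

In a fit to data, the measured decay of the effective electronic temperature constrains g, from which the dimensionless coupling λ is then derived for an assumed phonon energy.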
Dwell time considerations for large area cold plasma decontamination
Konesky, Gregory
2009-05-01
Atmospheric discharge cold plasmas have been shown to be effective in the reduction of pathogenic bacteria and spores and in the decontamination of simulated chemical warfare agents, without the generation of toxic or harmful by-products. Cold plasmas may also be useful in assisting cleanup of radiological "dirty bombs." For practical applications in realistic scenarios, the plasma applicator must have both a large area of coverage, and a reasonably short dwell time. However, the literature contains a wide range of reported dwell times, from a few seconds to several minutes, needed to achieve a given level of reduction. This is largely due to different experimental conditions, and especially, different methods of generating the decontaminating plasma. We consider these different approaches and attempt to draw equivalencies among them, and use this to develop requirements for a practical, field-deployable plasma decontamination system. A plasma applicator with 12 square inches area and integral high voltage, high frequency generator is described.
Large Time Behavior of the Vlasov-Poisson-Boltzmann System
Directory of Open Access Journals (Sweden)
Li Li
2013-01-01
Full Text Available The motion of dilute charged particles can be modeled by the Vlasov-Poisson-Boltzmann (VPB) system. We study the large time stability of the VPB system. To be precise, we prove that when time goes to infinity, the solution of the VPB system tends to the global Maxwellian state at a rate O(t^(-∞)), by using a method developed for the Boltzmann equation without force in the work of Desvillettes and Villani (2005). The improvement of the present paper is the removal of the condition on the parameter λ as in the work of Li (2008).
Just-in-time connectivity for large spiking networks.
Lytton, William W; Omurtag, Ahmet; Neymotin, Samuel A; Hines, Michael L
2008-11-01
The scale of large neuronal network simulations is memory limited due to the need to store connectivity information: connectivity storage grows as the square of neuron number up to anatomically relevant limits. Using the NEURON simulator as a discrete-event simulator (no integration), we explored the consequences of avoiding the space costs of connectivity through regenerating connectivity parameters when needed: just in time after a presynaptic cell fires. We explored various strategies for automated generation of one or more of the basic static connectivity parameters: delays, postsynaptic cell identities, and weights, as well as run-time connectivity state: the event queue. Comparison of the JitCon implementation to NEURON's standard NetCon connectivity method showed substantial space savings, with associated run-time penalty. Although JitCon saved space by eliminating connectivity parameters, larger simulations were still memory limited due to growth of the synaptic event queue. We therefore designed a JitEvent algorithm that added items to the queue only when required: instead of alerting multiple postsynaptic cells, a spiking presynaptic cell posted a callback event at the shortest synaptic delay time. At the time of the callback, this same presynaptic cell directly notified the first postsynaptic cell and generated another self-callback for the next delay time. The JitEvent implementation yielded substantial additional time and space savings. We conclude that just-in-time strategies are necessary for very large network simulations but that a variety of alternative strategies should be considered whose optimality will depend on the characteristics of the simulation to be run.
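The just-in-time idea above can be illustrated as deterministic regeneration: instead of storing each cell's outgoing connections (square-of-N memory), rebuild them from a per-cell seeded random stream whenever that cell fires. This is a hypothetical sketch of the strategy, not NEURON's JitCon code; the fan-out, delay range, and weight statistics are made up.

```python
import random

def jit_connections(pre_id, n_cells, fanout=100, seed=20080101):
    """Regenerate the outgoing connections of presynaptic cell `pre_id`
    on demand instead of storing them: a deterministic per-cell RNG
    stream reproduces the same targets, delays, and weights every time."""
    rng = random.Random(seed * 1_000_003 + pre_id)  # per-cell deterministic seed
    conns = []
    for _ in range(fanout):
        post = rng.randrange(n_cells)        # postsynaptic cell identity
        delay = 1.0 + 4.0 * rng.random()     # synaptic delay, ms (assumed range)
        weight = rng.gauss(0.5, 0.1)         # synaptic weight (assumed stats)
        conns.append((post, delay, weight))
    return conns

# Nothing is stored, yet the connectivity is reproducible on every call:
assert jit_connections(7, 1000, fanout=5) == jit_connections(7, 1000, fanout=5)
```

The trade-off described in the abstract shows up directly here: memory for connectivity drops to a single seed per cell, while every spike pays the cost of regenerating its connection list.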
FTSPlot: fast time series visualization for large datasets.
Directory of Open Access Journals (Sweden)
Michael Riss
Full Text Available The analysis of electrophysiological recordings often involves visual inspection of time series data to locate specific experiment epochs, mask artifacts, and verify the results of signal processing steps, such as filtering or spike detection. Long-term experiments with continuous data acquisition generate large amounts of data. Rapid browsing through these massive datasets poses a challenge to conventional data plotting software because the plotting time increases proportionately to the increase in the volume of data. This paper presents FTSPlot, which is a visualization concept for large-scale time series datasets using techniques from the field of high performance computer graphics, such as hierarchic level of detail and out-of-core data handling. In a preprocessing step, time series data, event, and interval annotations are converted into an optimized data format, which then permits fast, interactive visualization. The preprocessing step has a computational complexity of O(n log N); the visualization itself can be done with a complexity of O(1) and is therefore independent of the amount of data. A demonstration prototype has been implemented and benchmarks show that the technology is capable of displaying large amounts of time series data, event, and interval annotations lag-free with < 20 ms delay. The current 64-bit implementation theoretically supports datasets with up to 2^64 bytes; on the x86_64 architecture currently up to 2^48 bytes are supported, and benchmarks have been conducted with 2^40 bytes / 1 TiB or 1.3 x 10^11 double precision samples. The presented software is freely available and can be included as a Qt GUI component in future software projects, providing a standard visualization method for long-term electrophysiological experiments.
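The hierarchic level-of-detail idea can be illustrated with a min/max pyramid: a preprocessing pass summarizes the series at progressively coarser resolutions, and rendering picks the coarsest level that still fills the viewport, so display cost depends on screen width rather than dataset size. This is a simplified sketch of the technique, not FTSPlot's actual on-disk format.

```python
def build_pyramid(samples, factor=2):
    """Precompute a hierarchy of (min, max) summaries; each level halves
    the resolution of the one below it."""
    levels = [[(s, s) for s in samples]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        nxt = []
        for i in range(0, len(prev), factor):
            chunk = prev[i:i + factor]
            nxt.append((min(lo for lo, _ in chunk),
                        max(hi for _, hi in chunk)))
        levels.append(nxt)
    return levels

def render(levels, width):
    """Pick the coarsest level with at least `width` bins: the number of
    values drawn depends on the viewport, not on the dataset size."""
    for lvl in reversed(levels):
        if len(lvl) >= width:
            return lvl[:width]
    return levels[0]
```

In an out-of-core implementation the coarse levels stay resident in memory while fine levels are paged in only for the zoomed-in time window.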
Petrillo, Marta; Cherubini, Paolo; Fravolini, Giulia; Marchetti, Marco; Ascher-Jenull, Judith; Schärer, Michael; Synal, Hans-Arno; Bertoldi, Daniela; Camin, Federica; Larcher, Roberto; Egli, Markus
2016-03-01
Due to the large size (e.g. sections of tree trunks) and highly heterogeneous spatial distribution of deadwood, the timescales involved in the coarse woody debris (CWD) decay of Picea abies (L.) Karst. and Larix decidua Mill. in Alpine forests are largely unknown. We investigated the CWD decay dynamics in an Alpine valley in Italy using the chronosequence approach and the five-decay class system that is based on a macromorphological assessment. For the decay classes 1-3, most of the dendrochronological samples were cross-dated to assess the time that had elapsed since tree death, but for decay classes 4 and 5 (poorly preserved tree rings) radiocarbon dating was used. In addition, density, cellulose, and lignin data were measured for the dated CWD. The decay rate constants for spruce and larch were estimated on the basis of the density loss using a single negative exponential model, a regression approach, and the stage-based matrix model. In the decay classes 1-3, the ages of the CWD were similar and varied between 1 and 54 years for spruce and 3 and 40 years for larch, with no significant differences between the classes; classes 1-3 are therefore not indicative of deadwood age. This seems to be due to a time lag between the death of a standing tree and its contact with the soil. We found distinct tree-species-specific differences in decay classes 4 and 5, with larch CWD reaching an average age of 210 years in class 5 and spruce only 77 years. The mean CWD rate constants were estimated to be in the range 0.018 to 0.022 y-1 for spruce and to about 0.012 y-1 for larch. Snapshot sampling (chronosequences) may overestimate the age and mean residence time of CWD. No sampling bias was, however, detectable using the stage-based matrix model. Cellulose and lignin time trends could be derived on the basis of the ages of the CWD. The half-lives for cellulose were 21 years for spruce and 50 years for larch. The half-life of lignin is considerably higher and may be more than
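Under the single negative exponential model used in both CWD studies above, density decays as rho(t) = rho0 * exp(-k * t), so a half-life follows directly from the rate constant as t_1/2 = ln(2)/k. A quick check with the reported mean decay-rate constants:

```python
import math

def half_life(k):
    """Half-life of the single negative exponential decay model
    rho(t) = rho0 * exp(-k * t); result is in the time units of 1/k."""
    return math.log(2.0) / k

# Density half-lives implied by the reported mean decay-rate constants:
for label, k in [("spruce, k = 0.018 /yr", 0.018), ("larch, k = 0.012 /yr", 0.012)]:
    print(f"{label}: t_1/2 = {half_life(k):.1f} yr")
# spruce: about 38.5 yr; larch: about 57.8 yr
```

These density half-lives are distinct from the component half-lives quoted in the abstracts (21 and 50 years for cellulose), which come from the multiple-exponential fit.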
Zhang, Zhuomin; Zhan, Yisen; Huang, Yichun; Li, Gongke
2017-08-05
In this work, a portable large-volume constant-concentration (LVCC) sampling technique coupled with surface-enhanced Raman spectroscopy (SERS) was developed for rapid on-site gas analysis based on suitable derivatization methods. The LVCC sampling technique mainly consisted of a specially designed sampling cell, including a rigid sample container and a flexible sampling bag, and an absorption-derivatization module with a portable pump and a gas flowmeter. The LVCC sampling technique allowed a large, alterable and well-controlled sampling volume, which kept the concentration of the gas target in the headspace phase constant during the entire sampling process and made the sampling result more representative. Moreover, absorption and derivatization of the gas target during the LVCC sampling process were efficiently merged in one step, using bromine-thiourea and OPA-NH4+ strategies for ethylene and SO2 respectively, which made the LVCC sampling technique conveniently adaptable to subsequent SERS analysis. Finally, a new LVCC sampling-SERS method was developed and successfully applied to the rapid analysis of trace ethylene and SO2 from fruits. Trace ethylene and SO2 from real fruit samples could be accurately quantified by this method, and SERS confirmed that the concentration fluctuations of ethylene and SO2 from real samples remained minor during the entire LVCC sampling process.
Boyle, Patrick J; Büchner, Andreas; Stone, Michael A; Lenarz, Thomas; Moore, Brian C J
2009-04-01
Cochlear implants usually employ an automatic gain control (AGC) system as a first stage of processing. Two AGC types were compared: AGC1 was a fast-acting (syllabic) compressor; AGC2 was a dual-time-constant system that usually performed as a slow-acting compressor but incorporated an additional fast-acting system to provide protection from sudden increases in sound level. Six experienced cochlear-implant users were tested in a counterbalanced order, receiving one month of experience with a given AGC type before switching to the other type. Performance was evaluated shortly after provision of a given AGC type and after one month of experience with that AGC type. Questionnaires, mainly relating to listening in quiet situations, did not reveal significant differences between the two AGC types. However, fixed-level and roving-level tests of sentence identification in noise both revealed significantly better performance for AGC2. It is suggested that the poorer performance for AGC1 occurred because AGC1 introduced cross-modulation between the target speech and background noise, which made perceptual separation of the target and background more difficult.
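A dual-time-constant gain control of the kind described can be sketched with two envelope followers: a slow one that normally sets the gain, and a fast one that takes over when the input level jumps well above the slow estimate. The smoothing coefficients and switching margin below are illustrative assumptions, not the implant processor's actual parameters.

```python
def dual_agc_gains(levels, slow_coef=0.999, fast_coef=0.9, fast_margin=2.0):
    """Dual-time-constant AGC sketch: a slow envelope follower normally
    sets the gain; a fast follower overrides it when the input level
    exceeds `fast_margin` times the slow estimate (burst protection)."""
    slow = fast = levels[0]
    gains = []
    for x in levels:
        slow += (1.0 - slow_coef) * (x - slow)   # slow-acting envelope
        fast += (1.0 - fast_coef) * (x - fast)   # fast-acting envelope
        env = fast if fast > fast_margin * slow else slow
        gains.append(1.0 / max(env, 1e-9))       # gain that levels the output
    return gains

# Steady input leaves the gain near 1; a sudden 10x burst pulls it down fast:
gains = dual_agc_gains([1.0] * 200 + [10.0] * 50)
```

Because the slow branch barely reacts to short-term fluctuations, it modulates the target and the background by nearly the same gain, which is consistent with the cross-modulation explanation offered for AGC1's poorer performance.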
Directory of Open Access Journals (Sweden)
Daosheng Ling
2016-01-01
Full Text Available It is important to test water content of rock-soil mixtures efficiently and accurately to ensure both the quality control of compaction and assessment of the geotechnical engineering properties. To overcome time and energy wastage and probe insertion problems when using the traditional calibration method, a TDR coaxial test tube calibration arrangement using an upward infiltration method was designed. This arrangement was then used to study the influence of dry density, pore fluid conductivity, and soil/rock ratio on the relationship between water content and the dielectric constant of rock-soil mixtures. The results show that the empirical calibration equation forms for rock-soil mixtures can be the same as for soil materials. The effect of dry density on the calibration equation has the most significance and the influence of pore fluid conductivity can be ignored. The impact of variation of the soil/rock ratio can be neutralized by considering the effect of dry density in the calibration equation for the same kind of soil and rock. The empirical equations proposed by Zhao et al. show a good accuracy for rock-soil mixtures, indicating that the TDR method can be used to test gravimetric water content conveniently and efficiently without calibration in the field.
Large holographic displays for real-time applications
Schwerdtner, A.; Häussler, R.; Leister, N.
2008-02-01
Holography is generally accepted as the ultimate approach to display three-dimensional scenes or objects. In principle, the reconstruction of an object from a perfect hologram would appear indistinguishable from viewing the corresponding real-world object. Up to now, two main obstacles have prevented large-screen computer-generated holograms (CGH) from achieving a satisfactory laboratory prototype, not to mention a marketable one. The reason is the small cell pitch of a CGH, resulting in a huge number of hologram cells and a very high computational load for encoding the CGH. These seemingly inevitable technological hurdles have for a long time not been cleared, limiting the use of holography to special applications such as optical filtering, interference, beam forming, digital holography for capturing the 3-D shape of objects, and others. SeeReal Technologies has developed a new approach for real-time capable CGH using the so-called Tracked Viewing Windows technology to overcome these problems. The paper will show that today's state-of-the-art reconfigurable spatial light modulators (SLM), especially today's feasible LCD panels, are suited for reconstructing large 3-D scenes which can be observed from large viewing angles. To achieve this, the original holographic concept of containing information from the entire scene in each part of the CGH has been abandoned. This substantially reduces the hologram resolution, and thus the computational load, by several orders of magnitude, making real-time computation possible. A monochrome real-time prototype measuring 20 inches has been built and demonstrated at the SID conference and exhibition 2007 and at several other events.
Process evaluation of treatment times in a large radiotherapy department
International Nuclear Information System (INIS)
Beech, R.; Burgess, K.; Stratford, J.
2016-01-01
Purpose/objective: The Department of Health (DH) recognises access to appropriate and timely radiotherapy (RT) services as crucial in improving cancer patient outcomes, especially when facing a predicted increase in cancer diagnoses. There is a lack of ‘real-time’ data regarding the daily demand on a linear accelerator, the impact of increasingly complex techniques on treatment times, and whether current scheduling reflects the time needed for RT delivery, all of which would be valuable in assessing current RT provision. Material/methods: A systematic quantitative process evaluation was undertaken in a large regional cancer centre, including a satellite centre, between January and April 2014. Data collected included treatment room-occupancy time, RT site, RT and verification technique and patient mobility status. Data was analysed descriptively; average room-occupancy times were calculated for RT techniques and compared to historical standardised treatment times within the department. Results: Room-occupancy was recorded for over 1300 fractions, over 50% of which overran their allotted treatment time. In a focused sample of 16 common techniques, 10 overran their allocated timeslots. Verification increased room-occupancy by six minutes (50%) over non-imaging. Treatments for patients requiring mobility assistance took four minutes (29%) longer. Conclusion: The majority of treatments overran their standardised timeslots. Although technique advancement has reduced RT delivery time, room-occupancy has not necessarily decreased. Verification increases room-occupancy and needs to be considered when moving towards adaptive techniques. Mobility affects room-occupancy and will become increasingly significant in an ageing population. This evaluation assesses the validity of current treatment times in this department, and can be modified and repeated as necessary. - Highlights: • A process evaluation examined room-occupancy for various radiotherapy techniques. • Appointment lengths
Liang, Hua; Miao, Hongyu; Wu, Hulin
2010-03-01
Modeling viral dynamics in HIV/AIDS studies has resulted in a deep understanding of the pathogenesis of HIV infection, from which novel antiviral treatment guidance and strategies have been derived. Viral dynamics models based on nonlinear differential equations have been proposed and well developed over the past few decades. However, it is quite challenging to use experimental or clinical data to estimate the unknown parameters (both constant and time-varying) in complex nonlinear differential equation models. Investigators therefore usually fix some parameter values, taken from the literature or set by experience, and obtain only the parameter estimates of interest from clinical or experimental data. When such prior information is not available, however, it is desirable to determine all the parameter estimates from data. In this paper, we combine two newly developed approaches, a multi-stage smoothing-based (MSSB) method and the spline-enhanced nonlinear least squares (SNLS) approach, to estimate all HIV viral dynamic parameters in a nonlinear differential equation model. In particular, to the best of our knowledge, this is the first attempt to propose a comparatively thorough procedure, accounting for both efficiency and accuracy, to rigorously estimate all key kinetic parameters in a nonlinear differential equation model of HIV dynamics from clinical data. These parameters include the proliferation rate and death rate of uninfected HIV-targeted cells, the average number of virions produced by an infected cell, and the infection rate, which is related to the antiviral treatment effect and is time-varying. To validate the estimation methods, we verified the identifiability of the HIV viral dynamic model and performed simulation studies. We applied the proposed techniques to estimate the key HIV viral dynamic parameters for two individual AIDS patients treated with antiretroviral therapies. We demonstrate that HIV viral dynamics can be well characterized and
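The class of models described above can be sketched with the standard three-compartment target-cell viral dynamics model; the parameter values below are illustrative placeholders, not the estimates obtained in the paper, and simple Euler stepping keeps the sketch dependency-free:

```python
def simulate_hiv(days=200.0, dt=0.01,
                 lam=10.0,    # proliferation rate of uninfected target cells
                 rho=0.01,    # death rate of uninfected target cells
                 k=1e-4,      # infection rate (time-varying in the paper)
                 delta=0.7,   # death rate of infected cells
                 N=1000.0,    # average virions produced per infected cell
                 c=13.0):     # virion clearance rate
    # dT/dt = lam - rho*T - k*T*V
    # dI/dt = k*T*V - delta*I
    # dV/dt = N*delta*I - c*V
    T, I, V = 1000.0, 0.0, 1e-3   # uninfected cells, infected cells, virions
    for _ in range(int(days / dt)):
        dT = lam - rho * T - k * T * V
        dI = k * T * V - delta * I
        dV = N * delta * I - c * V
        T += dt * dT
        I += dt * dI
        V += dt * dV
    return T, I, V
```

With these toy values the basic reproductive number exceeds one, so the infection takes off from the tiny initial inoculum and the system settles toward a chronic steady state; an estimation procedure such as MSSB/SNLS would fit the unknown rates to observed trajectories of this kind.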
Libera, Arianna; de Barros, Felipe P. J.; Riva, Monica; Guadagnini, Alberto
2017-10-01
Our study is keyed to the analysis of the interplay between engineering factors (i.e., transient pumping rates versus the less realistic but commonly analyzed uniform extraction rates) and the heterogeneous structure of the aquifer (as expressed by the probability distribution characterizing transmissivity) in contaminant transport. We explore the joint influence of diverse (a) groundwater pumping schedules (constant and variable in time) and (b) representations of the stochastic heterogeneous transmissivity (T) field on the temporal histories of solute concentrations observed at an extraction well. The stochastic nature of T is rendered by modeling its natural logarithm, Y = ln T, through a typical Gaussian representation and through the recently introduced Generalized sub-Gaussian (GSG) model. The latter has the unique property of embedding the scale-dependent non-Gaussian features of the main statistics of Y and of its (spatial) increments, which have been documented in a variety of studies. We rely on numerical Monte Carlo simulations and compute the temporal evolution at the well of the low-order moments of the solute concentration (C), as well as statistics of the peak concentration (Cp), identified as the environmental performance metric of interest in this study. We show that the pumping schedule strongly affects the pattern of the temporal evolution of the first two statistical moments of C, regardless of the nature (Gaussian or non-Gaussian) of the underlying Y field, whereas the latter quantitatively influences their magnitude. Our results show that the uncertainty associated with C and Cp estimates is larger when operating under a transient extraction scheme than under a uniform withdrawal schedule. The probability density function (PDF) of Cp displays a long positive tail in the presence of a time-varying pumping schedule. All these aspects are magnified in the presence of non-Gaussian Y fields. Additionally, the PDF of Cp displays a bimodal shape for all types of pumping
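The qualitative effect of non-Gaussian ln-transmissivity on peak-concentration tails can be illustrated with a toy Monte Carlo. Everything below is an invented illustration: the proxy Cp ~ 1/T = exp(-Y), and the variance-matched Laplace distribution standing in for a heavier-tailed Y (it is not the GSG model of the paper):

```python
import math
import random

def tail_quantile(samples, p):
    """p-th quantile by sorting (adequate for a Monte Carlo sketch)."""
    return sorted(samples)[int(p * len(samples))]

random.seed(1)
n = 200_000
sigma = 1.0                   # std of Y = ln T in both cases
b = sigma / math.sqrt(2.0)    # Laplace scale chosen to match the variance

# Toy proxy: peak concentration rises as local transmissivity drops, Cp ~ 1/T.
# A Laplace variate is the difference of two unit exponentials scaled by b.
cp_gauss = [math.exp(-random.gauss(0.0, sigma)) for _ in range(n)]
cp_heavy = [math.exp(-b * (random.expovariate(1.0) - random.expovariate(1.0)))
            for _ in range(n)]

q_gauss = tail_quantile(cp_gauss, 0.999)   # 99.9th percentile of Cp
q_heavy = tail_quantile(cp_heavy, 0.999)
```

Even with identical variance of Y, the heavier-tailed field produces a markedly longer positive tail in the Cp proxy, mirroring the qualitative finding reported above.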
International Nuclear Information System (INIS)
Foos, J.
1999-01-01
This paper consists of two tables. The first describes the different particles (bosons and fermions). The second gives the nuclear constants of the isotopes of the elements with Z = 1 to 56. (A.L.B.)
International Nuclear Information System (INIS)
Foos, J.
2000-01-01
This paper consists of two tables. The first describes the different particles (bosons and fermions). The second gives the nuclear constants of the isotopes of the elements with Z = 56 to 68. (A.L.B.)
International Nuclear Information System (INIS)
Foos, J.
1998-01-01
This paper consists of two tables. The first describes the different particles (bosons and fermions), while the second gives the nuclear constants of the isotopes of the elements with Z = 1 to 25. (J.S.)
International Nuclear Information System (INIS)
Foos, J.
1999-01-01
This paper consists of two tables. The first describes the different particles (bosons and fermions). The second gives the nuclear constants of the isotopes of the elements with Z = 56 to 68. (A.L.B.)
Irregular Morphing for Real-Time Rendering of Large Terrain
Directory of Open Access Journals (Sweden)
S. Kalem
2016-06-01
Full Text Available The following paper proposes an alternative approach to the real-time adaptive triangulation problem. A new region-based multi-resolution approach for terrain rendering is described, which improves on the fly the distribution of the density of triangles inside a tile after selecting the appropriate Level-Of-Detail by adaptive sampling. The proposed approach organizes the heightmap into a QuadTree of tiles that are processed independently. This technique combines the benefits of the Triangular Irregular Network approach and of region-based multi-resolution approaches by improving the distribution of the density of triangles inside each tile. Our technique morphs the initial regular grid of the tile into a deformed grid in order to minimize the approximation error. The proposed technique strives to combine large tile sizes and real-time processing while guaranteeing an upper bound on the screen-space error. Thus, this approach adapts the terrain rendering process to local surface characteristics and enables on-the-fly handling of large amounts of terrain data. Morphing is based on multi-resolution wavelet analysis. The use of D2WT multi-resolution analysis of the terrain heightmap speeds up processing and permits interactive terrain rendering. Tests and experiments demonstrate that the Haar B-Spline wavelet, well known for its localization properties and compact support, is suitable for fast and accurate redistribution. Such a technique could be exploited in a client-server architecture to support interactive high-quality remote visualization of very large terrains.
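The error-driven QuadTree subdivision of tiles can be sketched as follows; the split criterion and the toy error function are placeholders for the wavelet-based screen-space error bound used in the paper:

```python
def build_quadtree(x, y, size, error_fn, max_error, depth=0, max_depth=6):
    """Split a square tile until its approximation error meets the bound."""
    if depth == max_depth or error_fn(x, y, size) <= max_error:
        return {"x": x, "y": y, "size": size, "children": None}  # leaf tile
    half = size / 2.0
    children = [build_quadtree(cx, cy, half, error_fn, max_error,
                               depth + 1, max_depth)
                for cy in (y, y + half) for cx in (x, x + half)]
    return {"x": x, "y": y, "size": size, "children": children}

def count_leaves(node):
    if node["children"] is None:
        return 1
    return sum(count_leaves(c) for c in node["children"])

# Toy error: the terrain is "rougher" (larger error) near the origin,
# so tiles close to (0, 0) are subdivided more deeply than flat regions.
toy_error = lambda x, y, size: size / (1.0 + x + y)
tree = build_quadtree(0.0, 0.0, 256.0, toy_error, max_error=1.0)
```

The resulting tree is refined only along the "rough" corner, which is the core idea behind adapting triangle density to local surface characteristics.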
Directory of Open Access Journals (Sweden)
G. T. Kulakov
2008-01-01
Full Text Available The paper presents a computational investigation of the influence of the relative time constant of an object, varying over a broad range, on the quality of steam temperature control downstream of a boiler, with due account of the magnitude of the regulating action, in systems with PI and PID regulators. The simulation is based on a single-loop automatic control system (ACS). It has been found that a smaller relative time constant of the object leads to a larger integral control error in the system with a PID regulator when an external perturbation acts on the ACS. A decrease in the relative time constant of the object under external perturbation also shortens the relative time at which the maximum dynamic control error appears, expressed as a fraction of the overall control time.
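A single-loop ACS of this kind can be sketched as a discrete-time simulation. The first-order plant, the controller gains, and the unit load disturbance below are illustrative assumptions, not the boiler model of the paper:

```python
def simulate_loop(T_plant, Kp, Ki, Kd=0.0, dt=0.01, t_end=50.0):
    """Single-loop ACS: first-order plant T_plant*dy/dt = -y + u + d with a
    unit step load disturbance d, regulated by a PI(D) controller.
    Returns the integral of the absolute control error (IAE)."""
    y, integ, prev_err, iae = 0.0, 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        err = 0.0 - y                        # setpoint is zero
        integ += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        u = Kp * err + Ki * integ + Kd * deriv
        y += dt * (-y + u + 1.0) / T_plant   # d = 1.0 acts from t = 0
        iae += abs(err) * dt
    return iae
```

A useful sanity check on such a simulation: for an over-damped loop the error never changes sign, so the IAE under a unit load disturbance converges to d/Ki regardless of the other gains, and the simulated value should land close to that figure.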
Zhang, Zhuomin; Zhan, Yisen; Huang, Yichun; Li, Gongke
2017-08-01
In this work, a portable large-volume constant-concentration (LVCC) sampling technique coupled with surface-enhanced Raman spectroscopy (SERS) was developed for rapid on-site gas analysis based on suitable derivatization methods. The LVCC sampling technique mainly consists of a specially designed sampling cell, comprising a rigid sample container and a flexible sampling bag, and an absorption-derivatization module with a portable pump and a gas flowmeter. The LVCC sampling technique allows a large, alterable and well-controlled sampling volume, which keeps the concentration of the gas target in the headspace phase constant during the entire sampling process and makes the sampling result more representative. Moreover, absorption and derivatization of the gas target during the LVCC sampling process were efficiently merged into one step using bromine-thiourea and OPA-NH4+ strategies for ethylene and SO2, respectively, which made the LVCC sampling technique conveniently adaptable to subsequent SERS analysis. Finally, a new LVCC sampling-SERS method was developed and successfully applied to the rapid analysis of trace ethylene and SO2 from fruits. Trace ethylene and SO2 from real fruit samples could be accurately quantified by this method. The concentration fluctuations of ethylene and SO2 during the entire LVCC sampling process proved to be minor, and recoveries from real samples were achieved in the range of 95.0-101% and 97.0-104%, respectively. It is expected that the portable LVCC sampling technique will pave the way for rapid on-site analysis of accurate concentrations of trace gas targets from real samples by SERS.
Large area spark counters with fine time and position resolution
International Nuclear Information System (INIS)
Ogawa, A.; Atwood, W.B.; Fujiwara, N.; Pestov, Yu.N.; Sugahara, R.
1983-10-01
Spark counters trace their history back over three decades but have been used in only a limited number of experiments. The key properties of these devices include their capability of precision timing (at the sub 100 ps level) and of measuring the position of the charged particle to high accuracy. At SLAC we have undertaken a program to develop these devices for use in high energy physics experiments involving large detectors. A spark counter of size 1.2 m x 0.1 m has been constructed and has been operating continuously in our test setup for several months. In this talk I will discuss some details of its construction and its properties as a particle detector. 14 references
Large, real time detectors for solar neutrinos and magnetic monopoles
International Nuclear Information System (INIS)
Gonzalez-Mestres, L.
1990-01-01
We discuss the present status of superheated superconducting granule (SSG) development for the real-time detection of magnetic monopoles of any speed and of low-energy solar neutrinos down to the pp region (indium project). Basic properties of SSG and progress made in recent years are briefly reviewed. Possible ways for further improvement are discussed. The performance reached in ultrasonic grain production at ∼ 100 μm size, as well as in conventional read-out electronics, looks particularly promising for a large-scale monopole experiment. Alternative approaches are briefly dealt with: induction loops for magnetic monopoles; scintillators, semiconductors or superconducting tunnel junctions for a solar neutrino detector based on an indium target
Directory of Open Access Journals (Sweden)
Yang Yang
2011-01-01
Full Text Available We propose a general continuous-time risk model with a constant interest rate. In this model, claims arrive according to an arbitrary counting process, while their sizes have dominantly varying tails and fulfill an extended negative dependence structure. We obtain an asymptotic formula for the finite-time ruin probability, which extends a corresponding result of Wang (2008).
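A finite-time ruin probability of this type can be estimated by simulation. The sketch below uses Poisson arrivals and independent Pareto claims as stand-ins for the more general counting process and dependence structure of the paper, and all parameter values are illustrative:

```python
import math
import random

def finite_time_ruin_prob(u=50.0, c=5.0, r=0.03, lam=1.0,
                          alpha=1.5, xm=1.0, horizon=10.0, n_paths=20000):
    """Monte Carlo estimate of P(ruin before `horizon`) for the discounted
    surplus  u + c*(1 - e^(-r*t))/r - sum_i X_i * e^(-r*T_i),
    with Pareto(alpha, xm) claims modeling the heavy (dominantly varying)
    tails and a Poisson arrival process of rate lam."""
    random.seed(7)
    ruined = 0
    for _ in range(n_paths):
        t, claims = 0.0, 0.0
        while True:
            t += random.expovariate(lam)            # next claim arrival time
            if t > horizon:
                break
            x = xm / (1.0 - random.random()) ** (1.0 / alpha)  # Pareto claim
            claims += x * math.exp(-r * t)
            premiums = c * (1.0 - math.exp(-r * t)) / r
            if u + premiums - claims < 0.0:          # discounted surplus < 0
                ruined += 1
                break
    return ruined / n_paths
```

Because ruin is checked only at claim instants (the surplus can only jump down there), the inner loop terminates at the first claim that exhausts the discounted reserve.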
Time-sliced perturbation theory for large scale structure I: general formalism
Energy Technology Data Exchange (ETDEWEB)
Blas, Diego; Garny, Mathias; Sibiryakov, Sergey [Theory Division, CERN, CH-1211 Genève 23 (Switzerland); Ivanov, Mikhail M., E-mail: diego.blas@cern.ch, E-mail: mathias.garny@cern.ch, E-mail: mikhail.ivanov@cern.ch, E-mail: sergey.sibiryakov@cern.ch [FSB/ITP/LPPC, École Polytechnique Fédérale de Lausanne, CH-1015, Lausanne (Switzerland)
2016-07-01
We present a new analytic approach to describe large scale structure formation in the mildly non-linear regime. The central object of the method is the time-dependent probability distribution function generating correlators of the cosmological observables at a given moment of time. Expanding the distribution function around the Gaussian weight we formulate a perturbative technique to calculate non-linear corrections to cosmological correlators, similar to the diagrammatic expansion in a three-dimensional Euclidean quantum field theory, with time playing the role of an external parameter. For the physically relevant case of cold dark matter in an Einstein-de Sitter universe, the time evolution of the distribution function can be found exactly and is encapsulated by a time-dependent coupling constant controlling the perturbative expansion. We show that all building blocks of the expansion are free from spurious infrared enhanced contributions that plague the standard cosmological perturbation theory. This paves the way towards the systematic resummation of infrared effects in large scale structure formation. We also argue that the approach proposed here provides a natural framework to account for the influence of short-scale dynamics on larger scales along the lines of effective field theory.
Real-time vibration compensation for large telescopes
Böhm, M.; Pott, J.-U.; Sawodny, O.; Herbst, T.; Kürster, M.
2014-08-01
We compare different strategies for minimizing the effects of telescope vibrations on the differential piston (optical path difference) for the Near-InfraRed/Visible Adaptive Camera and INterferometer for Astronomy (LINC-NIRVANA) at the Large Binocular Telescope (LBT), using an accelerometer feedforward compensation approach. We summarize why this technology is important for LINC-NIRVANA, as well as for future telescopes and already existing instruments. The main objective is to outline a solution for the estimation problem in general and its specifics at the LBT. Emphasis is put on realistic evaluation of the algorithms in the laboratory, such that predictions for the expected performance at the LBT can be made. Model-based estimation and broad-band filtering techniques can both be used to solve the estimation task, and their differences are discussed. Simulation results and measurements are shown to motivate our choice of the estimation algorithm for LINC-NIRVANA. The laboratory setup is aimed at imitating the vibration behaviour at the LBT in general, and of the M2 mirror as the main contributor in particular. For our measurements, we introduce a disturbance time series whose frequency spectrum is comparable to what can be measured at the LBT on a typical night. The controllers' ability to suppress vibrations in the critical frequency range of 8-60 Hz is demonstrated. The experimental results are promising, indicating the ability to suppress differential piston induced by telescope vibrations by a factor of about 5 (rms), which is significantly better than any currently commissioned system.
A time-focusing Fourier chopper time-of-flight diffractometer for large scattering angles
International Nuclear Information System (INIS)
Heinonen, R.; Hiismaeki, P.; Piirto, A.; Poeyry, H.; Tiitta, A.
1975-01-01
A high-resolution time-of-flight diffractometer utilizing time-focusing principles in conjunction with a Fourier chopper is under construction at Otaniemi. The design is an improved version of a test facility which has been used for single-crystal and powder diffraction studies with promising results. A polychromatic neutron beam from a radial beam tube of the FiR 1 reactor, collimated to 70 mm diameter, is modulated by a Fourier chopper (400 mm diameter) placed inside a massive boron-loaded particle-board shielding with 900 mm wall thickness. A thin flat sample (typically 5 mm x 80 mm diameter) is mounted on a turntable at a distance of 4 m from the chopper, and the diffracted neutrons are counted by a scintillation detector at 4 m distance from the sample. The scattering angle 2θ can be chosen between 90° and 160° to cover Bragg angles from 45° up to 80°. The angle between the chopper disc and the incident beam direction, as well as the angle of the detector surface relative to the diffracted beam, can be adjusted between 45° and 90° in order to accomplish time focusing. In our set-up, with equal flight paths from chopper to sample and from sample to detector, the time-focusing conditions are fulfilled when the chopper and the detector are parallel to the sample plane. The time-of-flight spectrum of the scattered neutrons is measured by the reverse time-of-flight method in which, instead of neutrons, one essentially records the modulation function of the chopper during constant periods preceding each detected neutron. With a Fourier chopper whose speed is varied in a suitable way, the method is equivalent to the conventional Fourier method, but the spectrum is obtained directly without any off-line calculations. The new diffractometer is operated automatically by a Super Nova computer which not only accumulates the synthesized diffraction pattern but also controls the chopper speed according to the modulation frequency sweep chosen by the user to obtain a
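The conversion underlying such a diffractometer, from a neutron's time of flight over the 8 m total path to a Bragg d-spacing at a given scattering angle, can be sketched as follows (the 4 ms flight time is an illustrative value, not a figure from the paper):

```python
import math

H = 6.62607015e-34     # Planck constant, J*s
M_N = 1.67492750e-27   # neutron mass, kg

def d_spacing(tof_s, path_m, two_theta_deg):
    """de Broglie: lambda = h*t/(m_n*L);  Bragg: lambda = 2*d*sin(theta)."""
    lam = H * tof_s / (M_N * path_m)
    return lam / (2.0 * math.sin(math.radians(two_theta_deg) / 2.0))

# 8 m total flight path (4 m chopper-to-sample + 4 m sample-to-detector)
# and the instrument's maximum scattering angle 2*theta = 160 deg;
# a 4 ms flight time then corresponds to a d-spacing of about 1 angstrom.
d_angstrom = d_spacing(4.0e-3, 8.0, 160.0) * 1e10
```

The large Bragg angles quoted above (45° to 80°) make sin θ close to one, which is what gives the back-scattering geometry its good d-spacing resolution.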
Time Domain View of Liquid-like Screening and Large Polaron Formation in Lead Halide Perovskites
Joshi, Prakriti Pradhan; Miyata, Kiyoshi; Trinh, M. Tuan; Zhu, Xiaoyang
The structural softness and dynamic disorder of lead halide perovskites contribute to their remarkable optoelectronic properties through efficient charge screening and large polaron formation. Here we provide a direct time-domain view of the liquid-like structural dynamics and polaron formation in single-crystal CH3NH3PbBr3 and CsPbBr3 using femtosecond optical Kerr effect spectroscopy in conjunction with transient reflectance spectroscopy. We investigate structural dynamics as a function of pump energy, which enables us to examine the dynamics in the absence and presence of charge carriers. In the absence of charge carriers, structural dynamics are dominated by over-damped picosecond motions of the inorganic PbBr3- sub-lattice, and these motions are strongly coupled to band-gap electronic transitions. Carrier injection by across-gap optical excitation triggers additional 0.26 ps dynamics in CH3NH3PbBr3 that can be attributed to the formation of large polarons. In comparison, large polaron formation is slower in CsPbBr3, with a time constant of 0.6 ps. We discuss how such dynamic screening protects charge carriers in lead halide perovskites. US Department of Energy, Office of Science - Basic Energy Sciences.
Evolution of the solar constant
International Nuclear Information System (INIS)
Newman, M.J.
1978-01-01
The ultimate source of the energy utilized by life on Earth is the Sun, and the behavior of the Sun determines to a large extent the conditions under which life originated and continues to thrive. What can be said about the history of the Sun? Has the solar constant, the rate at which energy is received by the Earth from the Sun per unit area per unit time, been constant at its present level since Archean times? Three mechanisms by which it has been suggested that the solar energy output can vary with time are discussed, characterized by long (approx. 10⁹ years), intermediate (approx. 10⁸ years), and short (approx. years to decades) time scales.
Mizuta, Sora; Saito, Itsuro; Isoyama, Takashi; Hara, Shintaro; Yurimoto, Terumi; Li, Xinyang; Murakami, Haruka; Ono, Toshiya; Mabuchi, Kunihiko; Abe, Yusuke
2017-09-01
1/R control is a physiological control method for the total artificial heart (TAH) with which long-term survival has been obtained in animal experiments. However, 1/R control occasionally diverged in the undulation pump TAH (UPTAH) animal experiments. To improve the stability of 1/R control, the appropriate control time constant in relation to the characteristics of the baroreflex vascular system was investigated with frequency analysis and numerical simulation. In the frequency analysis, data from five goats in which the UPTAH was implanted were analyzed with the fast Fourier transform technique to examine the vasomotion frequency. The numerical simulation was carried out repeatedly, changing the baroreflex parameters and the control time constant, using the elements-expanded Windkessel model. The results of the frequency analysis showed that 1/R control tended to diverge when the very low frequency band, an indicator of the vasomotion frequency, was relatively high. In the numerical simulation, divergence of the 1/R control could be reproduced, and the boundary curves between divergence and convergence of the 1/R control varied depending on the control time constant. These results suggested that 1/R control tends to be unstable when the TAH recipient has a high reflex speed in the baroreflex vascular system. Therefore, the control time constant should be adjusted appropriately to the individual vasomotion frequency.
Cosmological constants and variations
International Nuclear Information System (INIS)
Barrow, John D
2005-01-01
We review properties of theories for the variation of the gravitational and fine structure 'constants'. We highlight some general features of the cosmological models that exist in these theories, with reference to recent quasar data that are consistent with time variation in the fine structure 'constant' since a redshift of 3.5. The behaviour of a simple class of varying-alpha cosmologies is outlined in the light of all the observational constraints. We also discuss some of the consequences of varying 'constants' for oscillating universes and show by means of exact solutions that they appear to evolve monotonically in time even though the scale factor of the universe oscillates
Parallel time domain solvers for electrically large transient scattering problems
Liu, Yang; Yucel, Abdulkadir; Bagcý , Hakan; Michielssen, Eric
2014-01-01
scattering from perfect electrically conducting objects are obtained by enforcing electric field boundary conditions and implicitly time advance electric surface current densities by iteratively solving sparse systems of equations at all time steps. Contrary
Directory of Open Access Journals (Sweden)
De Rosa Matteo
2017-03-01
Full Text Available In our previous research we have observed that the fluorescence emission from water solutions of Single-Walled Carbon Nano-Tubes (SWCNT), excited by a laser with a wavelength of 830 nm, diminishes with time. We have already shown that this fading is a function of the storage time and the storage temperature. In order to study the emission of the SWCNT as a function of these two parameters, we have designed and realized a special measurement compartment with a cuvette holder in which the SWCNT solutions can be measured and stored at a fixed constant temperature for periods as long as several weeks. To maintain the measurement setup at a constant temperature, we have designed a special experimental setup based on two Peltier cells with electronic temperature control.
Spiliopoulos, Leonidas
2018-03-01
The investigation of response time and behavior has a long tradition in cognitive psychology, particularly for non-strategic decision-making. Recently, experimental economists have also studied response time in strategic interactions, but with an emphasis on either one-shot games or repeated social dilemmas. I investigate the determinants of response time in a repeated (pure-conflict) game, admitting a unique mixed-strategy Nash equilibrium, with fixed partner matching. Response times depend upon the interaction of two decision models embedded in a dual-process framework (Achtziger and Alós-Ferrer, 2014; Alós-Ferrer, 2016). The first decision model is the commonly used win-stay/lose-shift heuristic, and the second is the pattern-detecting reinforcement learning model in Spiliopoulos (2013b). The former is less complex and can be executed more quickly than the latter. As predicted, conflict between these two models (i.e., each one recommending a different course of action) led to longer response times than cases without conflict. The dual-process framework makes other qualitative response time predictions arising from the interaction between the existence (or not) of conflict and which of the two decision models the chosen action is consistent with; these were broadly verified by the data. Other determinants of RT were hypothesized on the basis of existing theory and tested empirically. Response times were strongly dependent on the actions chosen by both players in the previous rounds and the resulting outcomes. Specifically, response time was shortest after a win in the previous round in which the maximum possible payoff was obtained; response time after losses was significantly longer. Strongly autocorrelated behavior (regardless of its sign) was also associated with longer response times. I conclude that, similar to other tasks, there is a strong coupling in repeated games between behavior and RT, which can be exploited to further our understanding of decision
Marxer, C Galli; Coen, M Collaud; Bissig, H; Greber, U F; Schlapbach, L
2003-10-01
Interpretation of adsorption kinetics measured with a quartz crystal microbalance (QCM) can be difficult for adlayers undergoing modification of their mechanical properties. We have studied the behavior of the oscillation amplitude, A(0), and the decay time constant, tau, of quartz during adsorption of proteins and cells, by use of a home-made QCM. We are able to measure simultaneously the frequency, f, the dissipation factor, D, the maximum amplitude, A(0), and the transient decay time constant, tau, every 300 ms in liquid, gaseous, or vacuum environments. This analysis enables adsorption and modification of liquid/mass properties to be distinguished. Moreover the surface coverage and the stiffness of the adlayer can be estimated. These improvements promise to increase the appeal of QCM methodology for any applications measuring intimate contact of a dynamic material with a solid surface.
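The transient decay time constant τ described above can be recovered from a simulated ring-down. The oscillation frequency, decay constant, and sampling step below are illustrative values (a 5 MHz crystal is a common choice, but none of these figures come from the paper); the dissipation factor is then computed from the standard relation D = 1/(π f τ):

```python
import math

def ring_down(f0=5.0e6, tau_true=2.0e-6, dt=2.0e-9, t_end=8.0e-6):
    """Simulate A0*exp(-t/tau)*sin(2*pi*f0*t) and recover tau from the
    decaying envelope sampled at the positive oscillation peaks."""
    n = int(t_end / dt)
    sig = [math.exp(-i * dt / tau_true) * math.sin(2.0 * math.pi * f0 * i * dt)
           for i in range(n)]
    # positive local maxima approximate samples of the envelope
    peaks = [(i * dt, sig[i]) for i in range(1, n - 1)
             if sig[i] > sig[i - 1] and sig[i] >= sig[i + 1] and sig[i] > 0.0]
    # least-squares line through ln(amplitude) vs time: slope = -1/tau
    ts = [t for t, _ in peaks]
    ys = [math.log(v) for _, v in peaks]
    tbar, ybar = sum(ts) / len(ts), sum(ys) / len(ys)
    slope = (sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys))
             / sum((t - tbar) ** 2 for t in ts))
    tau = -1.0 / slope
    return tau, 1.0 / (math.pi * f0 * tau)   # (tau, dissipation factor D)
```

Fitting the log of the envelope rather than the raw signal is what makes the estimate robust to the overall amplitude A0, which cancels out of the slope.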
Time-domain hybrid method for simulating large amplitude motions of ships advancing in waves
Directory of Open Access Journals (Sweden)
Shukui Liu
2011-03-01
Full Text Available Typical results obtained by a newly developed nonlinear time-domain hybrid method for simulating large-amplitude motions of ships advancing with constant forward speed in waves are presented. The method is hybrid in that it combines a time-domain transient Green function method and a Rankine source method. The present approach employs a simple double-integration algorithm with respect to time to simulate the free-surface boundary condition. During the simulation, the diffraction and radiation forces are computed by pressure integration over the mean wetted surface, whereas the incident wave and hydrostatic restoring forces/moments are calculated on the instantaneously wetted surface of the hull. Typical numerical results of applying the method to the seakeeping performance of a standard containership, namely the ITTC S175, are presented. Comparisons have been made between the results of the present method, the frequency-domain 3D panel method (NEWDRIFT) of NTUA-SDL and available experimental data, and good agreement has been observed in all studied cases.
Eriguchi, Koji; Wei, Zhiqiang; Takagi, Takeshi; Ohta, Hiroaki; Ono, Kouichi
2009-01-01
Constant voltage stress (CVS) was applied to Fe–O films prepared by a sputtering process to investigate a stress-induced resistance increase, pointing toward a fundamental mechanism of the switching behavior. Under CVS, an abrupt resistance increase was found for both stress polarities. The conduction mechanism after the resistance increase exhibited non-Ohmic transport. The time-to-resistance increase (tr) under CVS was revealed to depend strongly on the stress voltage as well as on its polarity. Fro...
Energy Technology Data Exchange (ETDEWEB)
Regis, J.-M., E-mail: regis@ikp.uni-koeln.de [Institut fuer Kernphysik der Universitaet zu Koeln, Zuelpicher Str. 77, 50937 Koeln (Germany); Rudigier, M.; Jolie, J.; Blazhev, A.; Fransen, C.; Pascovici, G.; Warr, N. [Institut fuer Kernphysik der Universitaet zu Koeln, Zuelpicher Str. 77, 50937 Koeln (Germany)
2012-08-21
The electronic γ-γ fast timing technique allows for direct nuclear lifetime determination down to the few-picoseconds region by measuring the time difference between two coincident γ-ray transitions. Using high-resolution ultra-fast LaBr3(Ce) scintillator detectors in combination with the recently developed mirror-symmetric centroid difference method, nuclear lifetimes are measured with a time resolving power of around 5 ps. The essence of the method is to calibrate the energy-dependent position (centroid) of the prompt response function of the setup, which is obtained for simultaneously occurring events. This time-walk of the prompt response function induced by the analog constant fraction discriminator has been determined by systematic measurements using different photomultiplier tubes and timing adjustments of the constant fraction discriminator. We propose a universal calibration function which describes the time-walk, or the combined γ-γ time-walk characteristics, respectively, for either a linear or a non-linear amplitude-versus-energy dependency of the scintillator detector output pulses.
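The centroid-shift principle behind such measurements (the centroid of the delayed time distribution is displaced from the prompt centroid by exactly the mean lifetime τ, since the mean of a convolution is the sum of the means) can be illustrated numerically; the detector resolution and lifetime values below are arbitrary illustrations, not figures from the paper:

```python
import random

random.seed(3)
n = 100_000
sigma_ps = 130.0   # assumed prompt (Gaussian) timing resolution, std in ps
tau_ps = 50.0      # hypothetical level lifetime to be extracted, in ps

# prompt events: pure detector response; delayed events: the same response
# convolved with an exponential decay of mean lifetime tau_ps
prompt = [random.gauss(0.0, sigma_ps) for _ in range(n)]
delayed = [random.gauss(0.0, sigma_ps) + random.expovariate(1.0 / tau_ps)
           for _ in range(n)]

centroid = lambda xs: sum(xs) / len(xs)
tau_est = centroid(delayed) - centroid(prompt)
```

Note that the lifetime is recovered even though it is well below the detector resolution, which is why centroid methods reach the few-picosecond region; the practical difficulty addressed above is calibrating the energy-dependent time-walk of the prompt centroid itself.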
Elliott, Mark A; du Bois, Naomi
2017-01-01
From the point of view of the cognitive dynamicist the organization of brain circuitry into assemblies defined by their synchrony at particular (and precise) oscillation frequencies is important for the correct correlation of all independent cortical responses to the different aspects of a given complex thought or object. From the point of view of anyone operating complex mechanical systems, i.e., those comprising independent components that are required to interact precisely in time, it follows that the precise timing of such a system is essential - not only essential but measurable, and scalable. It must also be reliable over observations to bring about consistent behavior, whatever that behavior is. The catastrophic consequence of an absence of such precision, for instance that required to govern the interference engine in many automobiles, is indicative of how important timing is for the function of dynamical systems at all levels of operation. The dynamics and temporal considerations combined indicate that it is necessary to consider the operating characteristic of any dynamical, cognitive brain system in terms, superficially at least, of oscillation frequencies. These may, themselves, be forensic of an underlying time-related taxonomy. Currently there are only two sets of relevant and necessarily systematic observations in this field: one of these reports the precise dynamical structure of the perceptual systems engaged in dynamical binding across form and time; the second, derived both empirically from perceptual performance data, as well as obtained from theoretical models, demonstrates a timing taxonomy related to a fundamental operator referred to as the time quantum. In this contribution both sets of theory and observations are reviewed and compared for their predictive consistency. Conclusions about direct comparability are discussed for both theories of cognitive dynamics and time quantum models. Finally, a brief review of some experimental data
In vivo estimation of transverse relaxation time constant (T2 ) of 17 human brain metabolites at 3T.
Wyss, Patrik O; Bianchini, Claudio; Scheidegger, Milan; Giapitzakis, Ioannis A; Hock, Andreas; Fuchs, Alexander; Henning, Anke
2018-08-01
The transverse relaxation times T2 of 17 metabolites in vivo at 3T are reported, and region-specific differences are addressed. An echo-time series protocol was applied to one, two, or three volumes of interest with different fractions of white and gray matter, covering a total of 106 healthy volunteers and 128 acquired spectra. The data were fitted with the 2D fitting tool ProFit2, which included individual line-shape modeling for all metabolites and allowed the T2 calculation of 28 moieties of 17 metabolites. The T2 of 10 metabolites and their moieties are reported for the first time. Region-specific T2 differences between white- and gray-matter-enriched tissue occur in 16 of the 17 metabolites examined, including single resonance lines and coupled spin systems. The relaxation time T2 is region specific and has to be considered when applying tissue-composition correction for internal water referencing. Magn Reson Med 80:452-461, 2018. © 2018 International Society for Magnetic Resonance in Medicine.
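The echo-time series approach described above reduces, for a single resonance line, to fitting a mono-exponential decay S(TE) = S0·exp(−TE/T2) across echo times. A minimal sketch with synthetic data; the echo times, noise level and T2 value below are illustrative placeholders, not values from the study:

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(te, s0, t2):
    # Mono-exponential signal decay with echo time TE
    return s0 * np.exp(-te / t2)

# Synthetic echo-time series (TE in ms), hypothetical metabolite T2
rng = np.random.default_rng(0)
te = np.array([30.0, 60.0, 90.0, 120.0, 180.0, 240.0])
t2_true, s0_true = 180.0, 1.0
signal = mono_exp(te, s0_true, t2_true) * (1 + 0.01 * rng.standard_normal(te.size))

popt, _ = curve_fit(mono_exp, te, signal, p0=(signal[0], 100.0))
s0_fit, t2_fit = popt
print(f"fitted T2 = {t2_fit:.1f} ms")
```

ProFit2 additionally models individual line shapes and coupled spin systems; this sketch covers only the simplest single-line case.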
Diamond detector time resolution for large angle tracks
Energy Technology Data Exchange (ETDEWEB)
Chiodini, G., E-mail: chiodini@le.infn.it [INFN - Sezione di Lecce (Italy); Fiore, G.; Perrino, R. [INFN - Sezione di Lecce (Italy); Pinto, C.; Spagnolo, S. [INFN - Sezione di Lecce (Italy); Dip. di Matematica e Fisica “Ennio De Giorgi”, Uni. del Salento (Italy)
2015-10-01
The applications that have stimulated the greatest interest in diamond sensors are detectors close to particle beams, i.e., in environments with high radiation levels (beam monitors, luminosity measurement, detection of primary and secondary interaction vertices). Our aim is to extend the studies performed so far by developing the technical advances needed to prove the competitiveness of this technology in terms of time resolution with respect to more conventional ones, which do not guarantee the required tolerance to high radiation doses. With these goals in mind, measurements of diamond detector time resolution with tracks incident at different angles are discussed. In particular, preliminary test-beam results obtained with 5 GeV electrons and polycrystalline diamond strip detectors are shown.
Yoon-Ho Kim; Jung-Hyeon Ryu; Jin-Hwan Kim; Kern-Joong Kim
2016-01-01
The equivalent test circuit that can deliver both short-circuit current and recovery voltage is used to verify the performance of high-voltage circuit breakers. Most of the parameters in this circuit can be obtained by using a simple calculation or a simulation program. The ratings of the circuit breaker include rated short-circuit breaking current, rated short-circuit making current, rated operating sequence of the circuit breaker and rated short-time current. Among these ratings, the short-...
Strange, P.
2012-01-01
In this paper we demonstrate a surprising aspect of quantum mechanics that is accessible to an undergraduate student. We discuss probability backflow for an electron in a constant magnetic field. It is shown that even for a wavepacket composed entirely of states with negative angular momentum the effective angular momentum can take on positive…
Directory of Open Access Journals (Sweden)
W. Zhang
2015-03-01
This work examines the influence of the residence-time distribution (RTD) of surface elements on a model of cross-flow microfiltration proposed recently (Hasan et al., 2013). Along with the RTD from the previous work (Case 1), two other RTD functions (Cases 2 and 3) are used to develop theoretical expressions for the permeate-flux decline and cake buildup in the filter as functions of process time. The three different RTDs correspond to three different startup conditions of the filtration process. The analytical expressions for the permeate flux, each of which contains three basic parameters (membrane resistance, specific cake resistance and rate of surface renewal), are fitted to experimental permeate flow-rate data from the microfiltration of fermentation broths in laboratory- and pilot-scale units. All three expressions for the permeate flux fit the experimental data fairly well, with average root-mean-square errors of 4.6% for Cases 1 and 2 and 4.2% for Case 3, which points towards the constructive nature of the model - a common feature of theoretical models used in science and engineering.
Batisse, Nicolas; Raymundo-Piñero, Encarnación
2017-11-29
A more detailed understanding of the electrode/electrolyte interface degradation during the charging cycle in supercapacitors is of great interest for exploring the voltage stability range and therefore the extractable energy. The evaluation of the gas evolution during the charging, discharging, and aging processes is a powerful tool toward determining the stability and energy capacity of supercapacitors. Here, we attempt to fit the gas analysis resolution to the time response of a low-gas-generation power device by adopting a modified pulsed electrochemical mass spectrometry (PEMS) method. The pertinence of the method is shown using a symmetric carbon/carbon supercapacitor operating in different aqueous electrolytes. The differences observed in the gas levels and compositions as a function of the cell voltage correlate to the evolution of the physicochemical characteristics of the carbon electrodes and to the electrochemical performance, giving a complete picture of the processes taking place at the electrode/electrolyte interface.
Thermal motion in proteins: Large effects on the time-averaged interaction energies
International Nuclear Information System (INIS)
Goethe, Martin; Rubi, J. Miguel; Fita, Ignacio
2016-01-01
As a consequence of thermal motion, inter-atomic distances in proteins fluctuate strongly around their average values, and hence interaction energies (i.e. the pair-potentials evaluated at the fluctuating distances) are not constant in time but exhibit pronounced fluctuations. Because of these fluctuations, time-averaged interaction energies generally do not coincide with the energy values obtained by evaluating the pair-potentials at the average distances. More precisely, time-averaged interaction energies are typically smoother functions of the average distance than the corresponding pair-potentials. This averaging effect is referred to as the thermal smoothing effect. Here, we estimate the strength of the thermal smoothing effect on the Lennard-Jones pair-potential for globular proteins at ambient conditions using x-ray diffraction and simulation data for a representative set of proteins. For specific atom species, we find a significant smoothing effect, where the time-averaged interaction energy of a single atom pair can differ by several tens of cal/mol from the Lennard-Jones potential at the average distance. Importantly, we observe a dependency of the effect on the local environment of the involved atoms. The effect is typically weaker for bulky backbone atoms in beta sheets than for side-chain atoms belonging to other secondary structure on the surface of the protein. The results of this work have important practical implications for protein software relying on free-energy expressions. We show that the accuracy of free-energy expressions can be increased considerably by introducing environment-specific Lennard-Jones parameters accounting for the fact that the typical thermal motion of protein atoms depends strongly on their local environment.
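The thermal smoothing effect described above can be illustrated numerically: averaging the Lennard-Jones potential over a Gaussian distribution of inter-atomic distances yields a value that differs from the potential evaluated at the mean distance. The well depth, sigma and fluctuation width below are illustrative placeholders, not parameters from the paper:

```python
import numpy as np

def lennard_jones(r, eps=0.1, sigma=3.5):
    # Standard 12-6 Lennard-Jones pair potential
    # (eps in kcal/mol, distances in Angstrom)
    return 4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

rng = np.random.default_rng(1)
r_mean, r_std = 4.0, 0.3   # hypothetical average distance and thermal fluctuation
samples = rng.normal(r_mean, r_std, 200_000)
samples = samples[samples > 3.0]   # keep distances away from the repulsive singularity

v_at_mean = lennard_jones(r_mean)           # potential at the average distance
v_time_avg = lennard_jones(samples).mean()  # time-averaged interaction energy

print(f"V(<r>) = {v_at_mean:.4f} kcal/mol")
print(f"<V(r)> = {v_time_avg:.4f} kcal/mol")
```

Because the repulsive wall contributes disproportionately, the time average lies above the potential at the mean distance: thermal motion "smooths" the effective interaction.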
Thermal motion in proteins: Large effects on the time-averaged interaction energies
Energy Technology Data Exchange (ETDEWEB)
Goethe, Martin, E-mail: martingoethe@ub.edu; Rubi, J. Miguel [Departament de Física Fonamental, Universitat de Barcelona, Martí i Franquès 1, 08028 Barcelona (Spain); Fita, Ignacio [Institut de Biologia Molecular de Barcelona, Baldiri Reixac 10, 08028 Barcelona (Spain)
2016-03-15
As a consequence of thermal motion, inter-atomic distances in proteins fluctuate strongly around their average values, and hence interaction energies (i.e. the pair-potentials evaluated at the fluctuating distances) are not constant in time but exhibit pronounced fluctuations. Because of these fluctuations, time-averaged interaction energies generally do not coincide with the energy values obtained by evaluating the pair-potentials at the average distances. More precisely, time-averaged interaction energies are typically smoother functions of the average distance than the corresponding pair-potentials. This averaging effect is referred to as the thermal smoothing effect. Here, we estimate the strength of the thermal smoothing effect on the Lennard-Jones pair-potential for globular proteins at ambient conditions using x-ray diffraction and simulation data for a representative set of proteins. For specific atom species, we find a significant smoothing effect, where the time-averaged interaction energy of a single atom pair can differ by several tens of cal/mol from the Lennard-Jones potential at the average distance. Importantly, we observe a dependency of the effect on the local environment of the involved atoms. The effect is typically weaker for bulky backbone atoms in beta sheets than for side-chain atoms belonging to other secondary structure on the surface of the protein. The results of this work have important practical implications for protein software relying on free-energy expressions. We show that the accuracy of free-energy expressions can be increased considerably by introducing environment-specific Lennard-Jones parameters accounting for the fact that the typical thermal motion of protein atoms depends strongly on their local environment.
Potential constants and centrifugal distortion constants of octahedral hexafluoride molecules
Energy Technology Data Exchange (ETDEWEB)
Manivannan, G [Government Thirumagal Mill's Coll., Gudiyattam, Tamil Nadu (India)]
1981-04-01
The kinetic constants method outlined by Thirugnanasambandham (1964), based on Wilson's (1955) group theory, has been adapted to evaluate the potential constants for SF₆, SeF₆, WF₆, IrF₆, UF₆, NpF₆ and PuF₆ using experimentally observed vibrational frequency data. These constants are used to calculate the centrifugal distortion constants for the first time.
International Nuclear Information System (INIS)
Drozdowicz, K.; Krynicka-Drozdowicz, E.
1979-01-01
The LAMA program (in FORTRAN 1900 language), which fits a set of decaying experimental values to the sum of two (or one) exponentials with background, is described. The method of calculation, its accuracy and the interpretation of the program results are given. The changes and extensions of the calculation, relating to the dead-time effect taken into account for time analysers that have a constant dead time after each registered pulse, are described. (author)
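The fitting model described (a sum of two decaying exponentials plus a constant background) and a constant-dead-time correction can be sketched as follows. This is a modern stand-in for the FORTRAN program, with synthetic data, hypothetical parameter values, and the non-paralyzable dead-time model assumed:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_exp_bg(t, a1, lam1, a2, lam2, bg):
    # Sum of two decaying exponentials plus a constant background,
    # the model fitted by the LAMA program
    return a1 * np.exp(-lam1 * t) + a2 * np.exp(-lam2 * t) + bg

def dead_time_correct(measured_rate, tau):
    # Non-paralyzable constant dead time tau after each registered pulse:
    # true rate = n / (1 - n * tau)
    return measured_rate / (1 - measured_rate * tau)

# Synthetic decay curve with illustrative amplitudes and decay constants
rng = np.random.default_rng(2)
t = np.linspace(0, 10, 200)
true_params = (100.0, 1.5, 30.0, 0.2, 5.0)
counts = two_exp_bg(t, *true_params) + rng.normal(0, 0.5, t.size)

popt, pcov = curve_fit(two_exp_bg, t, counts, p0=(80, 1.0, 20, 0.3, 2))
print("fitted decay constants:", popt[1], popt[3])
```

With well-separated decay constants and a reasonable starting guess, the two-exponential fit recovers both rates; badly separated exponentials are notoriously ill-conditioned, which is why the program also supports a one-exponential fit.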
Quintessence and the cosmological constant
International Nuclear Information System (INIS)
Doran, M.; Wetterich, C.
2003-01-01
Quintessence -- the energy density of a slowly evolving scalar field -- may constitute a dynamical form of the homogeneous dark energy in the universe. We review the basic idea in the light of the cosmological constant problem. Cosmological observations or a time variation of fundamental 'constants' can distinguish quintessence from a cosmological constant.
Protopopescu, V.; D'Helon, C.; Barhen, J.
2003-06-01
A constant-time solution of the continuous global optimization problem (GOP) is obtained by using an ensemble algorithm. We show that under certain assumptions, the solution can be guaranteed by mapping the GOP onto a discrete unsorted search problem, whereupon Brüschweiler's ensemble search algorithm is applied. For adequate sensitivities of the measurement technique, the query complexity of the ensemble search algorithm depends linearly on the size of the function's domain. Advantages and limitations of an eventual NMR implementation are discussed.
Tückmantel, Joachim
2008-01-01
Artificial creation of arbitrary noise signals is used in accelerator physics both to reproduce a measured perturbation spectrum in simulations and to generate real-time shaped noise spectra for controlled emittance blow-up, giving tailored properties to the final bunch shape. It is demonstrated here how one can produce numerically what is, for all practical purposes, an unlimited quantity of non-periodic noise data having any predefined spectral density. This spectral density may be constant or varying with time. The noise output never repeats and has excellent statistical properties, important for very long-term applications. It is difficult to obtain such flexibility and spectral cleanliness using analogue techniques. This algorithm was applied both in computer simulations of bunch behaviour in the presence of RF noise in the PS, SPS and LHC and also to generate real-time noise, tracking the synchrotron frequency change during the energy ramp of the SPS and producing controlled longitudinal emittance blow-up.
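The core of such a generator can be sketched with an inverse-FFT method: draw random phases, shape the amplitudes by the square root of the target spectral density, and transform back. Note that this simple sketch produces one finite buffer; the algorithm described above goes further to deliver unlimited, non-repeating streams (e.g. by stitching successive buffers), which is not shown here:

```python
import numpy as np

def noise_with_spectrum(psd, n_samples, rng):
    """Generate real-valued noise whose power spectral density follows `psd`.

    `psd` is a function of the one-sided normalized frequency bins.
    Random phases make every call produce a new realization.
    """
    freqs = np.fft.rfftfreq(n_samples)
    amplitude = np.sqrt(psd(freqs))
    phases = rng.uniform(0, 2 * np.pi, freqs.size)
    spectrum = amplitude * np.exp(1j * phases)
    spectrum[0] = 0.0                      # no DC component
    return np.fft.irfft(spectrum, n_samples)

rng = np.random.default_rng(3)
# Example target: band-limited noise, flat between 0.05 and 0.15
x = noise_with_spectrum(lambda f: ((f > 0.05) & (f < 0.15)).astype(float), 4096, rng)

# Verify: essentially all power falls inside the requested band
power = np.abs(np.fft.rfft(x)) ** 2
f = np.fft.rfftfreq(4096)
in_band = power[(f > 0.05) & (f < 0.15)].sum() / power.sum()
print(f"fraction of power in band: {in_band:.3f}")
```

A time-varying spectral density, as used for tracking the synchrotron frequency during the ramp, would correspond to changing `psd` from buffer to buffer.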
Directory of Open Access Journals (Sweden)
Ziaei Poor Hamed
2016-01-01
This article focuses on the temperature response of skin tissue to time-dependent surface heat fluxes. An analytical solution is constructed for the DPL bio-heat transfer equation with constant, periodic and pulse-train heat-flux conditions on the skin surface. Separation of variables and Duhamel's theorem for a skin tissue treated as a finite domain are employed. The transient temperature responses for constant and time-dependent boundary conditions are obtained and discussed. The results show that there is a major discrepancy between the temperatures predicted by the parabolic (Pennes) bio-heat transfer, hyperbolic (thermal wave) and DPL bio-heat transfer models when a high heat flux is incident on the skin surface for a short duration or when the propagation speed of the thermal wave is finite. The results illustrate that the DPL model reduces to the hyperbolic model when τT approaches zero and to the classic Fourier model when both thermal relaxation times approach zero. However, for τq = τT the DPL model anticipates a temperature distribution different from that predicted by the Pennes model. This discrepancy is due to the blood-perfusion term in the energy equation, and is in contrast to results from the literature for pure conduction materials, where the DPL model approaches the Fourier heat-conduction model when τq = τT. The burn injury is also investigated.
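For comparison with such analytical solutions, the parabolic (Pennes) limit under a constant surface heat flux can be sketched with an explicit finite-difference scheme. All tissue parameters below are representative placeholders, not data from the article:

```python
import numpy as np

def pennes_1d(q_surface, t_end, L=0.01, nx=101, k=0.5, rho_c=4e6,
              wb_cb=2000.0, T_art=37.0, T0=37.0):
    """Explicit finite-difference solution of the 1D Pennes bio-heat equation

        rho_c dT/dt = k d2T/dx2 + wb_cb (T_art - T)

    with a constant heat flux q_surface at x = 0 and T = T0 held at depth L.
    Parameter values are illustrative placeholders, not tissue data.
    """
    dx = L / (nx - 1)
    dt = 0.2 * rho_c * dx ** 2 / k          # well below the stability limit
    T = np.full(nx, T0)
    for _ in range(int(t_end / dt)):
        lap = np.zeros(nx)
        lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx ** 2
        # ghost-node flux condition at the surface: -k dT/dx = q_surface
        lap[0] = 2 * (T[1] - T[0] + q_surface * dx / k) / dx ** 2
        T = T + dt * (k * lap + wb_cb * (T_art - T)) / rho_c
        T[-1] = T0
    return T

T = pennes_1d(q_surface=5000.0, t_end=30.0)   # 5 kW/m^2 applied for 30 s
print(f"surface temperature after 30 s: {T[0]:.2f} C")
```

The hyperbolic and DPL models add the relaxation times τq and τT to this equation; in the limits discussed in the abstract they collapse back to this parabolic form.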
Baker, Robert G. V.
2017-02-01
Self-similar matrices of the fine structure constant of solar electromagnetic force and its inverse, multiplied by the Carrington synodic rotation, have been previously shown to account for at least 98% of the top one hundred significant frequencies and periodicities observed in the ACRIM composite irradiance satellite measurement and the terrestrial 10.7cm Penticton Adjusted Daily Flux data sets. This self-similarity allows for the development of a time-space differential equation (DE) where the solutions define a solar model for transmissions through the core, radiative, tachocline, convective and coronal zones with some encouraging empirical and theoretical results. The DE assumes a fundamental complex oscillation in the solar core and that time at the tachocline is smeared with real and imaginary constructs. The resulting solutions simulate for tachocline transmission, the solar cycle where time-line trajectories either 'loop' as Hermite polynomials for an active Sun or 'tail' as complementary error functions for a passive Sun. Further, a mechanism that allows for the stable energy transmission through the tachocline is explored and the model predicts the initial exponential coronal heating from nanoflare supercharging. The twisting of the field at the tachocline is then described as a quaternion within which neutrinos can oscillate. The resulting fractal bubbles are simulated as a Julia Set which can then aggregate from nanoflares into solar flares and prominences. Empirical examples demonstrate that time and space fractals are important constructs in understanding the behaviour of the Sun, from the impact on climate and biological histories on Earth, to the fractal influence on the spatial distributions of the solar system. The research suggests that there is a fractal clock underpinning solar frequencies in packages defined by the fine structure constant, where magnetic flipping and irradiance fluctuations at phase changes, have periodically impacted on the
Yang, Yi; Xu, Haitao; Liu, Xiaoyan; Wang, Yijia; Liang, Zhicheng
2014-01-01
In mass customization logistics service, reasonable scheduling of the logistics service supply chain (LSSC), especially time scheduling, benefits its competitiveness. The effect of a customer order decoupling point (CODP) on time scheduling performance should therefore be considered. To minimize the total order operation cost of the LSSC, minimize the difference between the expected and actual times of completing the service orders, and maximize the satisfaction of functional logistics service providers, this study establishes an LSSC time scheduling model based on the CODP. Matlab 7.8 software is used in the numerical analysis of a specific example. Results show that the order completion time of the LSSC can be delayed or brought forward, but not infinitely advanced or infinitely delayed. The optimal comprehensive performance can be obtained by appropriately delaying the expected order completion time. The increase in supply chain comprehensive performance caused by an increase in the relationship coefficient of the logistics service integrator (LSI) is limited. The relative weight the LSI places on cost versus service delivery punctuality changes not only the CODP but also the scheduling performance of the LSSC. PMID:24715818
Liu, Weihua; Yang, Yi; Xu, Haitao; Liu, Xiaoyan; Wang, Yijia; Liang, Zhicheng
2014-01-01
In mass customization logistics service, reasonable scheduling of the logistics service supply chain (LSSC), especially time scheduling, benefits its competitiveness. The effect of a customer order decoupling point (CODP) on time scheduling performance should therefore be considered. To minimize the total order operation cost of the LSSC, minimize the difference between the expected and actual times of completing the service orders, and maximize the satisfaction of functional logistics service providers, this study establishes an LSSC time scheduling model based on the CODP. Matlab 7.8 software is used in the numerical analysis of a specific example. Results show that the order completion time of the LSSC can be delayed or brought forward, but not infinitely advanced or infinitely delayed. The optimal comprehensive performance can be obtained by appropriately delaying the expected order completion time. The increase in supply chain comprehensive performance caused by an increase in the relationship coefficient of the logistics service integrator (LSI) is limited. The relative weight the LSI places on cost versus service delivery punctuality changes not only the CODP but also the scheduling performance of the LSSC.
International Nuclear Information System (INIS)
Eriguchi, Koji; Ohta, Hiroaki; Ono, Kouichi; Wei Zhiqiang; Takagi, Takeshi
2009-01-01
Constant voltage stress (CVS) was applied to Fe-O films prepared by a sputtering process to investigate a stress-induced resistance increase leading to a fundamental mechanism for switching behaviors. Under the CVS, an abrupt resistance increase was found for both stress polarities. The conduction mechanism after the resistance increase exhibited non-Ohmic transport. The time to resistance increase (tr) under the CVS was revealed to depend strongly on the stress voltage as well as on its polarity. From a polarity-dependent resistance increase determined by a time-zero measurement, the voltage- and polarity-dependent tr were discussed on the basis of field- and structure-enhanced thermochemical reaction mechanisms.
Rivero Santamaría, Alejandro; Dayou, Fabrice; Rubayo-Soneira, Jesus; Monnerville, Maurice
2017-03-02
The dynamics of the Si(³P) + OH(X²Π) → SiO(X¹Σ⁺) + H(²S) reaction is investigated by means of the time-dependent wave packet (TDWP) approach using an ab initio potential energy surface recently developed by Dayou et al. (J. Chem. Phys. 2013, 139, 204305) for the ground X²A' electronic state. Total reaction probabilities have been calculated for the first 15 rotational states j = 0-14 of OH(v=0,j) at a total angular momentum J = 0 up to a collision energy of 1 eV. Integral cross sections and state-selected rate constants for the temperature range 10-500 K were obtained within the J-shifting approximation. The reaction probabilities display highly oscillatory structures indicating the contribution of long-lived quasibound states supported by the deep SiOH/HSiO wells. The cross sections behave with collision energy as expected for a barrierless reaction and are slightly sensitive to the initial rotational excitation of OH. The thermal rate constants show a marked temperature dependence below 200 K with a maximum value around 15 K. The TDWP results globally agree with the results of earlier quasi-classical trajectory (QCT) calculations carried out by Rivero-Santamaria et al. (Chem. Phys. Lett. 2014, 610-611, 335-340) with the same potential energy surface. In particular, the thermal rate constants display a similar temperature dependence, with TDWP values smaller than the QCT ones over the whole temperature range.
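The step from integral cross sections to thermal rate constants is a Maxwell-Boltzmann average over collision energies. A sketch in reduced units, with a hypothetical barrierless-capture-like cross section standing in for the computed TDWP values:

```python
import numpy as np

def rate_constant(T, sigma, mu=1.0, kB=1.0, n=4000):
    """Thermal rate constant from an energy-dependent cross section via
    Maxwell-Boltzmann averaging (reduced units; mu is the reduced mass):

        k(T) = sqrt(8 kB T / (pi mu)) * (kB T)^-2
               * Integral[ sigma(E) * E * exp(-E / kB T) dE ]
    """
    E = np.linspace(1e-6, 40 * kB * T, n)
    integrand = sigma(E) * E * np.exp(-E / (kB * T))
    integral = integrand.sum() * (E[1] - E[0])   # simple rectangle rule
    return np.sqrt(8 * kB * T / (np.pi * mu)) * integral / (kB * T) ** 2

# Hypothetical barrierless capture-like cross section, sigma ~ E^(-1/3)
sigma = lambda E: 2.0 * E ** (-1.0 / 3.0)
for T in (10.0, 100.0, 500.0):
    print(f"T = {T:5.0f}   k = {rate_constant(T, sigma):.3f}")
```

For the power-law cross section chosen here, k(T) grows slowly with temperature; the low-temperature maximum reported in the abstract reflects the detailed shape of the computed cross sections, not this toy model.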
Finite-Time Stability of Large-Scale Systems with Interval Time-Varying Delay in Interconnection
Directory of Open Access Journals (Sweden)
T. La-inchua
2017-01-01
We investigate the finite-time stability of a class of nonlinear large-scale systems with interval time-varying delays in the interconnection. The time-delay functions are continuous but not necessarily differentiable. Based on Lyapunov stability theory and a new integral bounding technique, finite-time stability of large-scale systems with interval time-varying delays in the interconnection is derived. The finite-time stability criteria are delay-dependent and are given in terms of linear matrix inequalities, which can be solved by various available algorithms. Numerical examples are given to illustrate the effectiveness of the proposed method.
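Finite-time stability differs from asymptotic stability: it asks whether the state norm stays below a prescribed bound over a fixed interval [0, T]. As a numerical sanity check (not a substitute for the paper's LMI criteria), one can simulate a linear system with an interval time-varying delay directly; the matrices and delay function below are hypothetical:

```python
import numpy as np

def finite_time_bounded(A, Ad, delay_fn, x0, c2, T=5.0, dt=1e-3):
    """Check finite-time boundedness of  x'(t) = A x(t) + Ad x(t - d(t))
    by Euler simulation: ||x(t)||^2 must stay below c2 on [0, T].
    """
    hist = [np.array(x0, dtype=float)]       # constant initial history x(s) = x0
    for i in range(int(T / dt)):
        lag = int(delay_fn(i * dt) / dt)
        x = hist[-1]
        x_del = hist[max(0, len(hist) - 1 - lag)]
        hist.append(x + dt * (A @ x + Ad @ x_del))
        if hist[-1] @ hist[-1] > c2:
            return False
    return True

A = np.array([[-2.0, 0.1], [0.0, -1.5]])
Ad = np.array([[0.2, 0.0], [0.1, 0.2]])        # hypothetical interconnection term
delay = lambda t: 0.2 + 0.1 * np.sin(t) ** 2   # interval delay d(t) in [0.2, 0.3]
print(finite_time_bounded(A, Ad, delay, [1.0, -1.0], c2=4.0))
```

The LMI criteria certify such boundedness for all admissible delays at once, whereas a simulation only probes one delay realization.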
International Nuclear Information System (INIS)
Blake, J.B.; Dearborn, D.S.P.
1979-01-01
Small fluctuations in the solar constant can occur on timescales much shorter than the Kelvin time. Changes in the ability of convection to transmit energy through the superadiabatic and transition regions of the convection zone cause structure adjustments which can occur on a time scale of days. The bulk of the convection zone reacts to maintain hydrostatic equilibrium (though not thermal equilibrium) and causes a luminosity change. While small radius variations will occur, most of the change will be seen in temperature
DEFF Research Database (Denmark)
Rossi, Matteo; Olsson, Per-Ivar; Johansson, Sara
2017-01-01
An investigation of geological conditions is always a key point for planning infrastructure constructions. Bedrock surface and rock quality must be estimated carefully in the design process of infrastructures. A large direct-current resistivity and time-domain induced-polarization survey has been carried out; in the surveyed area there are northwest-trending Permian dolerite dykes that are less deformed. Four 2D direct-current resistivity and time-domain induced-polarization profiles of about 1 km length have been carefully pre-processed to retrieve time-domain induced-polarization responses and inverted to obtain the direct-current resistivity distribution of the subsoil and the phase of the complex conductivity using a constant-phase-angle model. The joint interpretation of electrical resistivity and induced-polarization models leads to a better understanding of complex three-dimensional subsoil geometries.
Blanchet, Adrien
2009-01-01
A periodic perturbation of a Gaussian measure modifies the sharp constants in Poincaré and logarithmic Sobolev inequalities in the homogenization limit, that is, when the period of a periodic perturbation converges to zero. We use variational techniques to determine the homogenized constants and get optimal convergence rates towards equilibrium of the solutions of the perturbed diffusion equations. The study of these sharp constants is motivated by the study of the stochastic Stokes' drift. It also applies to Brownian ratchets and molecular motors in biology. We first establish a transport phenomenon. Asymptotically, the center of mass of the solution moves with a constant velocity, which is determined by a doubly periodic problem. In the reference frame attached to the center of mass, the behavior of the solution is governed at large scale by a diffusion with a modified diffusion coefficient. Using the homogenized logarithmic Sobolev inequality, we prove that the solution converges in self-similar variables attached to the center of mass to a stationary solution of a Fokker-Planck equation modulated by a periodic perturbation with fast oscillations, with an explicit rate. We also give an asymptotic expansion of the traveling diffusion front corresponding to the stochastic Stokes' drift with given potential flow. © 2009 Society for Industrial and Applied Mathematics.
International Nuclear Information System (INIS)
Monzo, Jose M.; Lerche, Christoph W.; Martinez, Jorge D.; Esteve, Raul; Toledo, Jose; Gadea, Rafael; Colom, Ricardo J.; Herrero, Vicente; Ferrando, Nestor; Aliaga, Ramon J.; Mateo, Fernando; Sanchez, Filomeno; Mora, Francisco J.; Benlloch, Jose M.; Sebastia, Angel
2009-01-01
PET systems need good time resolution to improve the true event rate, random event rejection, and pile-up rejection. In this study we propose a digital procedure for this task using a low pass filter interpolation plus a Digital Constant Fraction Discriminator (DCFD). We analyzed the best way to implement this algorithm on our dual head PET system and how varying the quality of the acquired signal and electronic noise analytically affects timing resolution. Our detector uses two continuous LSO crystals with a position sensitive PMT. Six signals per detector are acquired using an analog electronics front-end and these signals are processed using an in-house digital acquisition board. The test bench developed simulates the electronics and digital algorithms using Matlab. Results show that electronic noise and other undesired effects have a significant effect on the timing resolution of the system. Interpolated DCFD gives better results than non-interpolated DCFD. In high noise environments, differences are reduced. An optimum delay selection, based on the environment noise, improves time resolution.
Energy Technology Data Exchange (ETDEWEB)
Monzo, Jose M. [Digital Systems Design (DSD) Group, ITACA Institute, Universidad Politecnica de Valencia, Camino de Vera s/n, 46022 Valencia (Spain)], E-mail: jmonfer@aaa.upv.es; Lerche, Christoph W.; Martinez, Jorge D.; Esteve, Raul; Toledo, Jose; Gadea, Rafael; Colom, Ricardo J.; Herrero, Vicente; Ferrando, Nestor; Aliaga, Ramon J.; Mateo, Fernando [Digital Systems Design (DSD) Group, ITACA Institute, Universidad Politecnica de Valencia, Camino de Vera s/n, 46022 Valencia (Spain); Sanchez, Filomeno [Nuclear Medical Physics Group, IFIC Institute, Consejo Superior de Investigaciones Cientificas (CSIC), 46980 Paterna (Spain); Mora, Francisco J. [Digital Systems Design (DSD) Group, ITACA Institute, Universidad Politecnica de Valencia, Camino de Vera s/n, 46022 Valencia (Spain); Benlloch, Jose M. [Nuclear Medical Physics Group, IFIC Institute, Consejo Superior de Investigaciones Cientificas (CSIC), 46980 Paterna (Spain); Sebastia, Angel [Digital Systems Design (DSD) Group, ITACA Institute, Universidad Politecnica de Valencia, Camino de Vera s/n, 46022 Valencia (Spain)
2009-06-01
PET systems need good time resolution to improve the true event rate, random event rejection, and pile-up rejection. In this study we propose a digital procedure for this task using a low pass filter interpolation plus a Digital Constant Fraction Discriminator (DCFD). We analyzed the best way to implement this algorithm on our dual head PET system and how varying the quality of the acquired signal and electronic noise analytically affects timing resolution. Our detector uses two continuous LSO crystals with a position sensitive PMT. Six signals per detector are acquired using an analog electronics front-end and these signals are processed using an in-house digital acquisition board. The test bench developed simulates the electronics and digital algorithms using Matlab. Results show that electronic noise and other undesired effects have a significant effect on the timing resolution of the system. Interpolated DCFD gives better results than non-interpolated DCFD. In high noise environments, differences are reduced. An optimum delay selection, based on the environment noise, improves time resolution.
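The zero-crossing logic of a digital constant fraction discriminator can be sketched as follows. Linear interpolation stands in here for the low-pass-filter interpolation used in the study, and the pulse shape, fraction and delay are illustrative:

```python
import numpy as np

def dcfd_time(samples, dt, fraction=0.3, delay_samples=4):
    """Digital constant fraction discriminator with linear interpolation.

    Forms the classic CFD signal (delayed pulse minus an attenuated copy)
    and returns the zero-crossing time, interpolated between samples.
    """
    delayed = np.concatenate([np.zeros(delay_samples), samples[:-delay_samples]])
    cfd = delayed - fraction * samples
    start = np.argmin(cfd)                 # most negative point precedes the crossing
    for i in range(start, len(cfd) - 1):
        if cfd[i] < 0 <= cfd[i + 1]:
            frac = -cfd[i] / (cfd[i + 1] - cfd[i])   # linear interpolation
            return (i + frac) * dt
    return None

dt = 1.0                                    # ns per sample (hypothetical)
t = np.arange(64)
pulse = np.exp(-0.5 * ((t - 20.0) / 3.0) ** 2)   # synthetic detector pulse
t_cross = dcfd_time(pulse, dt)
print(f"crossing at {t_cross:.2f} ns")
```

The defining CFD property, an amplitude-independent crossing time, is easy to check by rescaling the pulse; real pulses add noise, which is why interpolation quality and delay selection matter for the achievable timing resolution.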
Energy Technology Data Exchange (ETDEWEB)
Lipkin, H.
1954-03-15
The author reports the use of the Nordheim formula in the case of Zoe (a pile located in Chatillon), considering the case of bars in natural uranium (instead of UO₂), and taking the effect of geometry on photo-neutron production into account. The Nordheim formula is calculated for two different experimental values of the constants of delayed neutrons. The author discusses the difference between the results obtained for reactivity with these new conditions and with the older ones, and reports the calculation of the geometrical factor.
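The Nordheim (inhour) relation mentioned above links a stable reactor period T to reactivity through the delayed-neutron constants: rho = Lambda/T + sum_i beta_i / (1 + lambda_i * T). A sketch with a commonly quoted six-group set for thermal U-235; the generation time and the group constants are illustrative inputs, playing the role of the "experimental values of the constants of delayed neutrons" the author compares:

```python
import numpy as np

# Commonly quoted six-group delayed-neutron constants for thermal U-235
# (illustrative values of the kind the Nordheim formula takes as input)
beta_i = np.array([0.000215, 0.001424, 0.001274, 0.002568, 0.000748, 0.000273])
lam_i = np.array([0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01])  # decay constants, 1/s

def reactivity(period, gen_time=1e-4):
    # Inhour (Nordheim) relation: reactivity sustaining a stable period `period` (s)
    return gen_time / period + np.sum(beta_i / (1.0 + lam_i * period))

for T in (10.0, 100.0, 1000.0):
    print(f"T = {T:6.0f} s   rho = {reactivity(T):.5f}")
```

Swapping in a different delayed-neutron data set changes the computed reactivity for the same observed period, which is exactly the comparison the report carries out.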
Production in constant evolution
International Nuclear Information System (INIS)
Lozano, T.
2009-01-01
The Cofrentes Nuclear Power Plant now has 25 years of operation behind it: a quarter century adding value and demonstrating why it is one of the most important energy producing facilities in the Spanish power market. Particularly noteworthy is the enterprising spirit of the plant, which has strived to improve continuously through the large number of modernization projects it has undertaken over the past 25 years. The plant has constantly evolved thanks to the investments made to improve safety and reliability and its perseverance in staying technologically up to date. Efficiency, training and teamwork have been key to the success of the plant over these 25 years of constant change and progress. (Author)
Connecting Fundamental Constants
International Nuclear Information System (INIS)
Di Mario, D.
2008-01-01
A model for a black hole electron is built from three basic constants only: h, c and G. The result is a description of the electron with its mass and charge. The nature of this black hole seems to fit the properties of the Planck particle, and new relationships among basic constants become possible. The time dilation factor in a black hole associated with a variable gravitational field would appear to us as a charge; on the other hand, the Planck time acts as a time gap that drastically limits what we are able to measure, and its dimension will appear in some quantities. This is why the Planck time is numerically very close to the gravitational/electric force ratio in an electron: the difference, disregarding a π√(2) factor, is only 0.2%. This is not a coincidence; it is always the same particle, and the small difference is between a rotating and a non-rotating particle. The determination of its rotational speed yields accurate numbers for many quantities, including the fine structure constant and the electron magnetic moment.
International Nuclear Information System (INIS)
Yu Qiu; Nasu, Keiichiro
2005-01-01
In connection with the recent experimental discoveries on gigantic photoenhancements of the electronic conductivity and the quasi-static dielectric susceptibility in SrTiO3, we theoretically study a photo-generation mechanism of a charged ferroelectric domain in this quantum dielectric. The photo-generated electron, being quite itinerant in the 3d band of Ti4+, is assumed to couple weakly but quadratically with soft anharmonic T1u phonons in this quantum dielectric, and strongly but linearly with breathing-type high-energy phonons. Using a tight-binding model for the electron, we show that these two types of electron-phonon coupling result in two types of polarons: a 'super-para-electric (SPE) large polaron' with a quasi-global parity violation, and an 'off-centre type self-trapped polaron' with only a local parity violation. We also show that this SPE large polaron is nothing else but a singly charged (e-) and conductive ferroelectric (or SPE) domain of quasi-macroscopic size. This polaron or domain is also shown to have a high mobility and a large quasi-static dielectric susceptibility.
Amplitude and rise time compensated timing optimized for large semiconductor detectors
International Nuclear Information System (INIS)
Kozyczkowski, J.J.; Bialkowski, J.
1976-01-01
The ARC timing described has excellent timing properties even over a wide energy range, e.g. from 10 keV to over 1 MeV. The detector signal from a preamplifier is accepted directly by the unit, as a timing filter amplifier with a sensitivity of 1 mV is incorporated. The adjustable rise-time rejection feature makes it possible to achieve a good prompt time spectrum with a symmetrical exponential shape down to less than 1/100 of the peak value. A complete block diagram of the unit is given together with results of extensive tests of its performance. For example, the time spectrum for (1330±20) keV of 60Co taken with a 43 cm3 Ge(Li) detector has the following parameters: fwhm = 2.2 ns, fwtm = 4.4 ns and fw(0.01)m = 7.6 ns, and for (50±10) keV of 22Na the following was obtained: fwhm = 10.8 ns, fwtm = 21.6 ns and fw(0.01)m = 34.6 ns. In another experiment with two fast plastic scintillators (NE 102A) and a 20% dynamic energy range the following was measured: fwhm = 280 ps, fwtm = 470 ps and fw(0.01)m = 700 ps. (Auth.)
Stabilized power constant alimentation
International Nuclear Information System (INIS)
Roussel, L.
1968-06-01
The design and construction of a stabilized power supply adjustable from 5 to 100 watts are described. In order to achieve a constant-power drift of lithium-compensated diodes, we aimed for a regulation precision of 1 per cent and a response time of less than 1 s. Recent components such as Hall-effect multipliers and integrated amplifiers make this possible, and interchangeable circuits are easy to use. (author) [fr
Cydzik-Kwiatkowska, Agnieszka; Rusanowska, Paulina; Zielińska, Magdalena; Bernat, Katarzyna; Wojnowska-Baryła, Irena
2014-02-01
This study investigated how hydraulic retention time (HRT) and COD/N ratio affect nitrogen-converting consortia in constantly aerated granules treating high-ammonium digester supernatant. Three HRTs (10, 13, 19 h) were tested at COD/N ratios of 4.5 and 2.3. Denaturing gradient gel electrophoresis and relative real-time PCR were used to characterize the microbial communities. When changes in HRT and COD/N increased nitrogen loading, the ratio of the relative abundance of aerobic to anaerobic ammonium-oxidizers decreased. The COD/N ratio determined the species composition of the denitrifiers; however, Thiobacillus denitrificans, Pseudomonas denitrificans and Azoarcus sp. showed a high tolerance to the environmental conditions and occurred in the granules from all reactors. Denitrifier genera that support granule formation were identified, such as Pseudomonas, Shinella, and Flavobacterium. In aerated granules, nirK-possessing bacteria were more diverse than nirS-possessing bacteria. At a low COD/N ratio, N2O-reducer diversity increased because of the presence of bacteria known as aerobic denitrifiers. Copyright © 2013 Elsevier Ltd. All rights reserved.
1995-08-01
function (the Hubble relation) of the distance to the object. [3] A supernova at redshift 0.3 was found some years ago at ESO during an earlier search programme (Noergaard-Nielsen et al., Nature, Vol. 339, page 523, 1989) and before now the most distant known supernova was located in a galaxy at redshift 0.458 (Perlmutter et al., Astrophysical Journal, Vol. 440, Page L41, 1995) [4] For comparison, a Type Ia supernova at maximum brightness emits nearly 6,000 million times more light than the Sun. [5] The brighter the supernova at a given redshift is at maximum, the larger is q0. APPENDIX: Messages From the Deceleration Parameter q0 A determination of the deceleration parameter q0 by means of astronomical observations is important because it will allow us to choose between the various current theories of the evolution of the Universe, or at least to eliminate some of them as impossible. If the value turns out to be small, e.g. q0 ~ 0, then there has been only a small decrease (deceleration) of the universal expansion in the past. In this case, a galaxy's velocity does not change much with time and the actual distance is very nearly as indicated from the Hubble relation. Should, however, the value of q0 be significantly larger, then a galaxy's velocity would have been larger in the past than it is now. The velocity we now measure would therefore be ``too high'' (since it refers to the time the light was emitted from the galaxy), and the distance obtained by dividing by the Hubble constant will be too large. The value of q0 is proportional to the total amount of matter in the Universe. A measurement of q0 will establish limits for the amount of ``missing matter'', i.e. the ``invisible'' matter which cannot be directly observed with current observational techniques and which is believed to be the dominant mass component. If q0 is near 0, the expansion of the Universe will continue unabated (the Universe is ``open'').
If, however, q0 is larger than 0.5, then the expansion will eventually come to a halt and be followed by contraction (the Universe is ``closed'').
International Nuclear Information System (INIS)
Post, C.B.; Ray, W.J. Jr.; Gorenstein, D.G.
1989-01-01
Time-dependent 31P saturation-transfer studies were conducted with the Cd2+-activated form of muscle phosphoglucomutase to probe the origin of the 100-fold difference between its catalytic efficiency (in terms of kcat) and that of the more efficient Mg2+-activated enzyme. The present paper describes the equilibrium mixture of phosphoglucomutase and its substrate/product pair when the concentration of the Cd2+ enzyme approaches that of the substrate, and how the nine-spin 31P NMR system provided by this mixture was treated. It shows that the presence of abortive complexes is not a significant factor in the reduced activity of the Cd2+ enzyme, since the complex of the dephosphoenzyme and glucose 1,6-bisphosphate, which accounts for a large majority of the enzyme present at equilibrium, is catalytically competent. It also shows that rate constants for saturation transfer obtained at three different ratios of enzyme to free substrate are mutually compatible. These constants, which were measured at chemical equilibrium, can be used to provide a quantitative kinetic rationale for the reduced steady-state activity elicited by Cd2+ relative to Mg2+. They also provide minimal estimates of 350 and 150 s-1 for the rate constants describing (PO3-) transfer from the Cd2+ phosphoenzyme to the 6-position of bound glucose 1-phosphate and to the 1-position of bound glucose 6-phosphate, respectively. These minimal estimates are compared with analogous estimates for the Mg2+ and Li+ forms of the enzyme in the accompanying paper.
Energy Technology Data Exchange (ETDEWEB)
Dietrich, Olaf, E-mail: od@dtrx.net [Josef Lissner Laboratory for Biomedical Imaging, Institute for Clinical Radiology, Ludwig-Maximilians-University Hospital Munich, Munich (Germany); Gaass, Thomas [Josef Lissner Laboratory for Biomedical Imaging, Institute for Clinical Radiology, Ludwig-Maximilians-University Hospital Munich, Munich (Germany); Comprehensive Pneumology Center, German Center for Lung Research, Munich (Germany); Reiser, Maximilian F. [Josef Lissner Laboratory for Biomedical Imaging, Institute for Clinical Radiology, Ludwig-Maximilians-University Hospital Munich, Munich (Germany)
2017-01-15
Purpose: To pool and summarize published data from magnetic resonance longitudinal relaxation measurements of the human lung at 1.5 T to provide a reliable basis of T{sub 1} relaxation time constants of healthy lung tissue both under respiration of room air and of pure oxygen. In particular, the oxygen-induced shortening of T{sub 1} was evaluated. Materials and methods: The PubMed database was comprehensively searched up to June 2016 for original publications in English containing quantitative T{sub 1} data (at least mean values and standard deviations) of the lung parenchyma of healthy subjects (minimum subject number: 3) at 1.5 T. From all included publications, T{sub 1} values of the lung of healthy subjects were extracted (inhaling room air and, if available, inhaling pure oxygen). Weighted mean values and standard deviations of all extracted data and the oxygen transfer function (OTF) were calculated. Results: 22 publications were included with a total number of 188 examined healthy subjects. 103 of these subjects (from 13 studies) were examined while breathing pure oxygen and room air; 85 subjects were examined only under room-air conditions. The weighted mean value (weighted sample standard deviation) of the room-air T{sub 1} values over all 22 studies was 1196 ms (152 ms). Based on studies with room-air and oxygen results, the mean T{sub 1} value at room-air conditions was 1172 ms (161 ms); breathing pure oxygen, the mean T{sub 1} value was reduced to 1054 ms (138 ms). This corresponds to a mean T{sub 1} reduction by 118 ms (35 ms) or 10.0 % (2.3 %) and to a mean OTF value of 1.22 (0.32) × 10{sup −3} s{sup −1}/(%O{sub 2}). Conclusion: This meta-analysis with data from 188 subjects indicates that the average T{sub 1} relaxation time constant of healthy lung tissue at 1.5 T is distributed around 1200 ms with a standard deviation of about 150 ms; breathing pure oxygen reduces this value significantly by 10 % to about 1050 ms.
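The pooling step described can be sketched as follows. The weighting choice (by subject count, pooling within-study variance with between-study spread) and the per-study numbers are illustrative assumptions, not the paper's exact scheme or data:

```python
import math

def pooled_mean_sd(studies):
    """Weighted mean and weighted sample SD of per-study T1 values,
    weighting each study by its subject count and pooling within-study
    variance with between-study spread. One simple pooling choice; the
    paper's exact weighting scheme may differ."""
    w = sum(n for n, _, _ in studies)
    mean = sum(n * t1 for n, t1, _ in studies) / w
    var = sum(n * (sd ** 2 + (t1 - mean) ** 2) for n, t1, sd in studies) / w
    return mean, math.sqrt(var)

# Hypothetical per-study data: (subjects, mean T1 [ms], SD [ms]).
studies = [(10, 1180, 120), (25, 1230, 140), (8, 1100, 90)]
mean, sd = pooled_mean_sd(studies)
```

Weighting by subject count lets large studies dominate the pooled estimate while small studies still contribute, which is the usual rationale for such a meta-analytic summary.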
Arrhenius Rate: constant volume burn
Energy Technology Data Exchange (ETDEWEB)
Menikoff, Ralph [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-12-06
A constant volume burn occurs for an idealized initial state in which a large volume of reactants at rest is suddenly raised to a high temperature and begins to burn. Due to the uniform spatial state, there is no fluid motion and no heat conduction. This reduces the time evolution to an ODE for the reaction progress variable. With an Arrhenius reaction rate, two characteristics of thermal ignition are illustrated: induction time and thermal runaway. The Frank-Kamenetskii approximation then leads to a simple expression for the adiabatic induction time. For a first order reaction, the analytic solution is derived and used to illustrate the effect of varying the activation temperature; in particular, on the induction time. In general, the ODE can be solved numerically. This is used to illustrate the effect of varying the reaction order. We note that for a first order reaction, the time evolution of the reaction progress variable has an exponential tail. In contrast, for a reaction order less than one, the reaction completes in a finite time. The reaction order also affects the induction time.
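The runaway behaviour described above is easy to reproduce numerically. A minimal sketch follows, with an assumed dimensionless temperature closure (T = T0·(1 + q·lam)) and illustrative parameter values, not those of the report:

```python
import math

def burn(T0=1.0, Ta=5.0, q=4.0, nu=1.0, k=1.0, dt=1e-3, t_max=200.0):
    """Forward-Euler integration of the constant-volume-burn ODE for the
    reaction progress lam:
        d(lam)/dt = k * (1 - lam)**nu * exp(-Ta / T),  T = T0 * (1 + q*lam).
    Dimensionless, illustrative parameter values -- not from the report."""
    lam, t, history = 0.0, 0.0, []
    while lam < 0.999 and t < t_max:
        T = T0 * (1.0 + q * lam)   # temperature rises as heat is released
        lam = min(lam + dt * k * (1.0 - lam) ** nu * math.exp(-Ta / T), 1.0)
        t += dt
        history.append((t, lam))
    return history

def induction_time(history, level=0.5):
    """Time at which the reaction progress first reaches `level`."""
    return next(t for t, lam in history if lam >= level)

# Thermal runaway: raising the activation temperature Ta lengthens the
# induction period sharply (roughly as exp(Ta/T0), per Frank-Kamenetskii).
h_lo = burn(Ta=5.0)
h_hi = burn(Ta=7.0)
```

During the induction period lam barely grows; once the released heat raises T appreciably, the Arrhenius factor explodes and the reaction runs away, which is exactly the two-stage behaviour the abstract names.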
Dewberry, C. T.; Grubbs, G. S.; Cooke, S. A.
2009-09-01
Using pulsed-jet chirped-pulse and cavity-based Fourier transform microwave spectroscopies, over 900 transitions have been recorded for the title molecule in the 1-4 GHz and 8-18 GHz regions. Three carbon-13 species have been observed in natural abundance, allowing a substitution structure for the CCC backbone to be determined. Nearly all the transitions observed were either a-type R branches or b-type Q branches. No c-type transitions were observed, consistent with only the trans conformer being present under our experimental conditions. The χaa, χbb, χcc and χab components of the iodine nuclear quadrupole coupling tensor have been determined. Of note, several forbidden ΔJ = ±2 transitions and one ΔJ = ±3 transition were observed with quite reasonable intensity. These observations have been rationalized through considerations of near degeneracies between energy levels connected via a large χab value (≈1 GHz).
Desvillettes, Laurent
2010-01-01
We study a continuous coagulation-fragmentation model with constant kernels for reacting polymers (see [M. Aizenman and T. Bak, Comm. Math. Phys., 65 (1979), pp. 203-230]). The polymers are set to diffuse within a smooth bounded one-dimensional domain with no-flux boundary conditions. In particular, we consider size-dependent diffusion coefficients, which may degenerate for small and large cluster-sizes. We prove that the entropy-entropy dissipation method applies directly in this inhomogeneous setting. We first show the necessary basic a priori estimates in dimension one, and then we show faster-than-polynomial convergence toward global equilibria for diffusion coefficients which vanish not faster than linearly for large sizes. This extends the previous results of [J.A. Carrillo, L. Desvillettes, and K. Fellner, Comm. Math. Phys., 278 (2008), pp. 433-451], which assume that the diffusion coefficients are bounded below. © 2009 Society for Industrial and Applied Mathematics.
Salgado, Diana; Torres, J Antonio; Welti-Chanes, Jorge; Velazquez, Gonzalo
2011-08-01
-processing and determine opportunities for improvement. This should include a systematic approach to consider variability in the parameters for the models used by food process engineers when designing a thermal process. The Monte Carlo procedure presented here is a tool to facilitate this task for the determination of process time at a constant lethal temperature. © 2011 Institute of Food Technologists®
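As a rough illustration of such a Monte Carlo approach, one can propagate variability in the microbial resistance parameters into a distribution of required process times at a constant lethal temperature. The D- and z-value distributions and all numbers below are hypothetical, not taken from the study:

```python
import random

def process_time_samples(n=20000, log_reduction=12.0, T=110.0, seed=1):
    """Monte Carlo sketch of thermal process-time design at a constant
    lethal temperature T (degC). D_ref (min, at Tref) and z (degC) are
    drawn from normal distributions to represent parameter variability;
    the distributions and values are illustrative, not from the paper."""
    rng = random.Random(seed)
    Tref = 121.1
    times = []
    for _ in range(n):
        D_ref = max(rng.gauss(0.20, 0.02), 1e-6)  # D-value at Tref, min
        z = max(rng.gauss(10.0, 0.8), 1e-6)       # z-value, degC
        D_T = D_ref * 10 ** ((Tref - T) / z)      # D-value at process T
        times.append(D_T * log_reduction)         # time for a 12-log kill
    times.sort()
    return times

times = process_time_samples()
mean_t = sum(times) / len(times)
p95 = times[int(0.95 * len(times))]
```

A deterministic design would use only the mean parameters; the Monte Carlo distribution lets the engineer pick, say, the 95th percentile of required time so the target lethality is met despite parameter variability.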
Data warehousing technologies for large-scale and right-time data
DEFF Research Database (Denmark)
Xiufeng, Liu
heterogeneous sources into a central data warehouse (DW) by Extract-Transform-Load (ETL) at regular time intervals, e.g., monthly, weekly, or daily. But this becomes challenging for large-scale data and makes it hard to meet the demands of near-real-time/right-time business decisions. This thesis considers some...
Ziganshin, Ayrat M; Schmidt, Thomas; Lv, Zuopeng; Liebetrau, Jan; Richnow, Hans Hermann; Kleinsteuber, Sabine; Nikolausz, Marcell
2016-10-01
The effects of hydraulic retention time (HRT) reduction at a constant high organic loading rate on the activity of hydrogen-producing bacteria and methanogens were investigated in reactors digesting thin stillage. Stable isotope fingerprinting was additionally applied to assess methanogenic pathways. Based on hydA gene transcripts, Clostridiales was the most active hydrogen-producing order in the continuous stirred tank reactor (CSTR), fixed-bed reactor (FBR) and anaerobic sequencing batch reactor (ASBR), but shorter HRT stimulated the activity of Spirochaetales. Further decreasing the HRT diminished Spirochaetales activity in systems with biomass retention. Based on mcrA gene transcripts, Methanoculleus and Methanosarcina were predominantly active in the CSTR and ASBR, whereas Methanosaeta and Methanospirillum activity was more significant in the stably performing FBR. Isotope values indicated the predominance of the aceticlastic pathway in the FBR. Interestingly, an increased activity of Methanosaeta was observed during HRT shortening in the CSTR and ASBR despite high organic acid concentrations, which was supported by stable isotope data. Copyright © 2016 Elsevier Ltd. All rights reserved.
Karthikayan, S; Sankaranarayanan, G; Karthikeyan, R
2015-11-01
Present energy strategies focus on environmental issues, especially environmental pollution prevention and control through eco-friendly green technologies. These include increasing energy supplies, encouraging cleaner and more efficient energy management, and addressing air pollution, the greenhouse effect, global warming, and climate change. Biofuels offer the prospect of new fiscal opportunities for people in rural areas, meeting their needs as well as the demand of the local market, and they concern both protection of the environment and job creation. Renewable energy sources are self-reliant resources with potential for energy management with lower emissions of air pollutants. Biofuels are expected to reduce dependence on imported crude oil, with its associated economic vulnerability, reduce greenhouse gases and other pollutants, and invigorate the economy by increasing demand and prices for agricultural products. This study examines the use of neat paradise tree oil and the induction of hydrogen through the inlet manifold in a constant-pressure heat-addition-cycle engine (diesel engine), with engine operating parameters such as injection timing, injection pressure and compression ratio optimized. The results show a heat utilization efficiency of 29% for neat vegetable oil and 33% for neat oil with 15% hydrogen. The exhaust gas temperature (EGT) for a 15% H2 share is 450°C at full load, with a heat release of 80 J/deg crank angle for the 15% hydrogen energy share. Copyright © 2015 Elsevier Inc. All rights reserved.
Verbiest, J. P. W.; Bailes, M.; van Straten, W.; Hobbs, G. B.; Edwards, R. T.; Manchester, R. N.; Bhat, N. D. R.; Sarkissian, J. M.; Jacoby, B. A.; Kulkarni, S. R.
2008-05-01
Analysis of 10 years of high-precision timing data on the millisecond pulsar PSR J0437-4715 has resulted in a model-independent kinematic distance based on an apparent orbital period derivative, Ṗb, determined at the 1.5% level of precision (Dk = 157.0 +/- 2.4 pc), making it one of the most accurate stellar distance estimates published to date. The discrepancy between this measurement and a previously published parallax distance estimate is attributed to errors in the DE200 solar system ephemerides. The precise measurement of Ṗb allows a limit on the variation of Newton's gravitational constant, |Ġ/G| <= 23 × 10^-12 yr^-1. We also constrain any anomalous acceleration along the line of sight to the pulsar to |a⊙/c| <= 1.5 × 10^-18 s^-1 at 95% confidence, and derive a pulsar mass, mpsr = 1.76 +/- 0.20 M⊙, one of the highest estimates so far obtained.
Fast optimization algorithms and the cosmological constant
Bao, Ning; Bousso, Raphael; Jordan, Stephen; Lackey, Brad
2017-11-01
Denef and Douglas have observed that in certain landscape models the problem of finding small values of the cosmological constant is a large instance of a problem that is hard for the complexity class NP (Nondeterministic Polynomial-time). The number of elementary operations (quantum gates) needed to solve this problem by brute force search exceeds the estimated computational capacity of the observable Universe. Here we describe a way out of this puzzling circumstance: despite being NP-hard, the problem of finding a small cosmological constant can be attacked by more sophisticated algorithms whose performance vastly exceeds brute force search. In fact, in some parameter regimes the average-case complexity is polynomial. We demonstrate this by explicitly finding a cosmological constant of order 10^-120 in a randomly generated 10^9-dimensional Arkani-Hamed-Dimopoulos-Kachru landscape.
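A toy version of the underlying search problem makes the scaling concrete. The sketch below brute-forces a tiny Bousso-Polchinski-style landscape (Lambda = Lambda_bare + Σ (n_i·q_i)²); the charges and sizes are arbitrary, and the exponential cost of this exhaustive scan is precisely what the more sophisticated algorithms discussed in the paper avoid:

```python
import itertools
import random

def best_vacuum(lambda_bare, charges, n_max):
    """Exhaustive search of a toy Bousso-Polchinski-style landscape:
    Lambda = lambda_bare + sum_i (n_i * q_i)**2 over integer fluxes
    0 <= n_i <= n_max.  Cost is (n_max+1)**len(charges): exponential in
    the number of fluxes -- the brute-force scaling the paper improves on."""
    best_lam, best_n = None, None
    for n in itertools.product(range(n_max + 1), repeat=len(charges)):
        lam = lambda_bare + sum((ni * qi) ** 2 for ni, qi in zip(n, charges))
        if best_lam is None or abs(lam) < abs(best_lam):
            best_lam, best_n = lam, n
    return best_lam, best_n

rng = random.Random(0)
charges = [rng.uniform(0.1, 0.5) for _ in range(6)]   # random flux charges
lam, fluxes = best_vacuum(lambda_bare=-2.0, charges=charges, n_max=4)
```

Already at 6 fluxes the scan visits 5^6 = 15625 lattice points; at the 10^9 dimensions quoted above, exhaustive search is hopeless, which is why structured (e.g. lattice-based) algorithms are needed.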
Sex ratio and time to pregnancy: analysis of four large European population surveys
DEFF Research Database (Denmark)
Joffe, Mike; Bennett, James; Best, Nicky
2007-01-01
To test whether the secondary sex ratio (proportion of male births) is associated with time to pregnancy, a marker of fertility. Design Analysis of four large population surveys. Setting Denmark and the United Kingdom. Participants 49 506 pregnancies.
Czech Academy of Sciences Publication Activity Database
Stone, J.; Ohya, S.; Rikovska, J.; Woehr, A.; Betts, P.; Dupák, Jan; Fogelberg, B.; Jacobsson, L.
No. 133 (2001), p. 111-115 ISSN 0304-3843 Institutional research plan: CEZ:AV0Z2065902 Keywords: nuclear orientation * Korringa constant Subject RIV: BM - Solid Matter Physics; Magnetism Impact factor: 0.634, year: 2001
Decrease of the tunneling time and violation of the Hartman effect for large barriers
International Nuclear Information System (INIS)
Olkhovsky, V.S.; Zaichenko, A.K.; Petrillo, V.
2004-01-01
The explicit formulation of the initial conditions of the definition of the wave-packet tunneling time is proposed. This formulation adequately takes into account the irreversibility of the wave-packet space-time spreading. Moreover, it explains the violations of the Hartman effect, leading to a strong decrease of the tunneling times, down to negative values, for wave packets with large momentum spreads due to strong wave-packet time spreading.
Elongational flow of polymer melts at constant strain rate, constant stress and constant force
Wagner, Manfred H.; Rolón-Garrido, Víctor H.
2013-04-01
Characterization of polymer melts in elongational flow is typically performed at constant elongational rate or rarely at constant tensile stress conditions. One of the disadvantages of these deformation modes is that they are hampered by the onset of "necking" instabilities according to the Considère criterion. Experiments at constant tensile force have been performed even more rarely, in spite of the fact that this deformation mode is free from necking instabilities and is of considerable industrial relevance as it is the correct analogue of steady fiber spinning. It is the objective of the present contribution to present for the first time a full experimental characterization of a long-chain branched polyethylene melt in elongational flow. Experiments were performed at constant elongation rate, constant tensile stress and constant tensile force by use of a Sentmanat Extensional Rheometer (SER) in combination with an Anton Paar MCR301 rotational rheometer. The accessible experimental window and experimental limitations are discussed. The experimental data are modelled by using the Wagner I model. Predictions of the steady-state elongational viscosity in constant strain rate and creep experiments are found to be identical, albeit only by extrapolation of the experimental data to Hencky strains of the order of 6. For constant stress experiments, a minimum in the strain rate and a corresponding maximum in the elongational viscosity is found at a Hencky strain of the order of 3, which, although larger than the steady-state value, follows roughly the general trend of the steady-state elongational viscosity. The constitutive analysis also reveals that constant tensile force experiments indicate a larger strain hardening potential than seen in constant elongation rate or constant tensile stress experiments. This may be indicative of the effect of necking under constant elongation rate or constant tensile stress conditions according to the Considère criterion.
Cosmological Hubble constant and nuclear Hubble constant
International Nuclear Information System (INIS)
Horbuniev, Amelia; Besliu, Calin; Jipa, Alexandru
2005-01-01
The evolution of the Universe after the Big Bang and the evolution of the dense and highly excited nuclear matter formed in relativistic nuclear collisions are investigated and compared. Values of the Hubble constants for cosmological and nuclear processes are obtained. For nucleus-nucleus collisions at high energies, the nuclear Hubble constant is obtained in the framework of different models involving the hydrodynamic flow of the nuclear matter. A significant difference between the values of the two Hubble constants - cosmological and nuclear - is observed.
Directory of Open Access Journals (Sweden)
Neal Jackson
2015-09-01
Full Text Available I review the current state of determinations of the Hubble constant, which gives the length scale of the Universe by relating the expansion velocity of objects to their distance. There are two broad categories of measurements. The first uses individual astrophysical objects which have some property that allows their intrinsic luminosity or size to be determined, or allows the determination of their distance by geometric means. The second category comprises the use of all-sky cosmic microwave background, or correlations between large samples of galaxies, to determine information about the geometry of the Universe and hence the Hubble constant, typically in a combination with other cosmological parameters. Many, but not all, object-based measurements give H_0 values of around 72–74 km s^–1 Mpc^–1, with typical errors of 2–3 km s^–1 Mpc^–1. This is in mild discrepancy with CMB-based measurements, in particular those from the Planck satellite, which give values of 67–68 km s^–1 Mpc^–1 and typical errors of 1–2 km s^–1 Mpc^–1. The size of the remaining systematics indicates that accuracy rather than precision is the remaining problem in a good determination of the Hubble constant. Whether a discrepancy exists, and whether new physics is needed to resolve it, depends on details of the systematics of the object-based methods, and also on the assumptions about other cosmological parameters and which datasets are combined in the case of the all-sky methods.
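The quoted "mild discrepancy" can be put in units of standard deviations with a one-line estimate, assuming independent Gaussian errors and using round numbers picked from the ranges above (they are illustrative, not a definitive compilation):

```python
import math

# Round numbers from the ranges quoted above (km/s/Mpc); illustrative only.
h0_obj, sig_obj = 73.0, 2.5    # object-based (distance-ladder) methods
h0_cmb, sig_cmb = 67.5, 1.5    # CMB-based (e.g. Planck) methods

# Discrepancy in standard deviations, assuming independent Gaussian errors.
tension = abs(h0_obj - h0_cmb) / math.sqrt(sig_obj**2 + sig_cmb**2)
```

With these inputs the tension comes out a little under 2 sigma, which is why the review characterizes it as mild: shrinking the systematic error bars, not just the statistical ones, is what would settle the question.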
Shen, Qikun; Zhang, Tianping
2015-05-01
The paper addresses a practical issue for adaptive synchronization in master-slave large-scale systems with constant channel time-delay, and a novel adaptive synchronization control scheme is proposed to guarantee that the synchronization errors asymptotically converge to the origin; the matching condition required in the related literature is not necessary. The real value of the channel time-delay can be estimated online by a proper adaptation mechanism, which removes the condition, assumed in existing works, that the channel time-delay be known exactly. Finally, simulation results demonstrate the effectiveness of the approach.
Time dispersion in large plastic scintillation neutron detector [Paper No.: B3]
International Nuclear Information System (INIS)
De, A.; Dasgupta, S.S.; Sen, D.
1993-01-01
The time dispersion seen by the photomultiplier (PM) tube in a large plastic scintillation neutron detector, together with the light collection mechanism, has been computed, showing that this time dispersion (TD) does not necessarily increase with increasing incident neutron energy, in contrast to the usual finding that TD increases with energy. (author). 8 refs., 4 figs
Directory of Open Access Journals (Sweden)
Shichao Sun
2015-01-01
Full Text Available This paper addressed the vehicle routing problem (VRP) in large-scale urban transportation networks with stochastic time-dependent (STD) travel times. The subproblem of finding the optimal path connecting any pair of customer nodes in an STD network was solved through a robust approach that does not require the probability distributions of link travel times. On that basis, the proposed STD-VRP model can be converted into a normal time-dependent VRP (TD-VRP), and algorithms for such TD-VRPs can be applied to obtain the solution. Numerical experiments were conducted for STD-VRPTW instances of practical size on a real-world urban network, demonstrated here on the road network of Shenzhen, China. The stochastic time-dependent link travel times of the network were calibrated using historical floating car data. A route construction algorithm was applied to solve the STD problem efficiently in 4 delivery scenarios. The computational results showed that the proposed STD-VRPTW model can improve the level of customer service by satisfying the time-window constraint under any circumstances. The improvement can be very significant, especially for large-scale network delivery tasks, without increasing cost or environmental impacts.
High resolution time-of-flight measurements in small and large scintillation counters
International Nuclear Information System (INIS)
D'Agostini, G.; Marini, G.; Martellotti, G.; Massa, F.; Rambaldi, A.; Sciubba, A.
1981-01-01
In a test run, the experimental time-of-flight resolution was measured for several different scintillation counters of small (10 x 5 cm^2) and large (100 x 15 cm^2 and 75 x 25 cm^2) area. The design characteristics were decided on the basis of theoretical Monte Carlo calculations. We report results using twisted, fish-tail, and rectangular light-guides and different types of scintillator (NE 114 and PILOT U). Time resolutions of approximately 130-150 ps fwhm for the small counters and approximately 280-300 ps fwhm for the large counters were obtained. The spatial resolution from time measurements in the large counters is also reported. The results of Monte Carlo calculations on the type of scintillator, the shape and dimensions of the light-guides, and the nature of the external wrapping surfaces - to be used in order to optimize the time resolution - are also summarized. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Salimi, S; Radgohar, R, E-mail: shsalimi@uok.ac.i, E-mail: r.radgohar@uok.ac.i [Faculty of Science, Department of Physics, University of Kurdistan, Pasdaran Ave, Sanandaj (Iran, Islamic Republic of)
2010-01-28
In this paper, we consider decoherence in continuous-time quantum walks on long-range interacting cycles (LRICs), which are extensions of the cycle graphs. For this purpose, we use Gurvitz's model and assume that every node is monitored by a corresponding point contact induced by the decoherence process. Then, we focus on large rates of decoherence, calculate the probability distribution analytically, and obtain the lower and upper bounds of the mixing time. Our results prove that the mixing time is proportional to the rate of decoherence and to the inverse of the square of the distance parameter (m). This shows that the mixing time decreases with increasing range of interaction. Also, what we obtain for m = 0 is in agreement with Fedichkin, Solenov and Tamon's results [48] for the cycle, and we see that the mixing time of CTQWs on the cycle improves with the addition of interacting edges.
The fundamental constants: a mystery of physics
Fritzsch, Harald
2009-01-01
The speed of light, the fine structure constant, and Newton's constant of gravity: these are just three among the many physical constants that define our picture of the world. Where do they come from? Are they constant in time and across space? In this book, physicist and author Harald Fritzsch invites the reader to explore the mystery of the fundamental constants of physics in the company of Isaac Newton, Albert Einstein, and a modern-day physicist.
A Short Proof of the Large Time Energy Growth for the Boussinesq System
Brandolese, Lorenzo; Mouzouni, Charafeddine
2017-10-01
We give a direct proof of the fact that the L^p-norms of global solutions of the Boussinesq system in R³ grow large as t → ∞. In particular, the kinetic energy blows up as ‖u(t)‖₂² ~ c t^{1/2} for large time. This contrasts with the case of the Navier-Stokes equations.
Versatile synchronized real-time MEG hardware controller for large-scale fast data acquisition
Sun, Limin; Han, Menglai; Pratt, Kevin; Paulson, Douglas; Dinh, Christoph; Esch, Lorenz; Okada, Yoshio; Hämäläinen, Matti
2017-05-01
Versatile controllers for accurate, fast, and real-time synchronized acquisition of large-scale data are useful in many areas of science, engineering, and technology. Here, we describe the development of controller software based on a technique called the queued state machine for controlling the data acquisition (DAQ) hardware, continuously acquiring a large amount of data synchronized across a large number of channels (>400) at a fast rate (up to 20 kHz/channel) in real time, and interfacing with applications for real-time data analysis and display of electrophysiological data. This DAQ controller was developed specifically for a 384-channel pediatric whole-head magnetoencephalography (MEG) system, but its architecture is useful for a wide range of applications. The controller, running in a LabVIEW environment, interfaces with microprocessors in the MEG sensor electronics to control their real-time operation. It also interfaces with real-time MEG analysis software via transmission control protocol/internet protocol (TCP/IP) to control the synchronous acquisition and transfer of the data in real time from >400 channels to acquisition and analysis workstations. The successful implementation of this controller for an MEG system with a large number of channels demonstrates the feasibility of employing the present architecture in several other applications.
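A queued state machine decouples event production from state handling by pushing named states through a FIFO queue. A minimal Python analogue of the pattern (the state names and payloads here are hypothetical, not the controller's actual command set):

```python
from queue import Queue

class QueuedStateMachine:
    """Minimal queued-state-machine sketch: states are enqueued as
    messages and dispatched to handler methods in FIFO order."""
    def __init__(self):
        self.queue = Queue()
        self.log = []

    def enqueue(self, state, payload=None):
        self.queue.put((state, payload))

    def run(self):
        while not self.queue.empty():
            state, payload = self.queue.get()
            getattr(self, "on_" + state)(payload)

    def on_configure(self, payload):
        self.log.append(("configure", payload))

    def on_acquire(self, payload):
        self.log.append(("acquire", payload))
        # a handler may itself enqueue follow-up states
        if payload and payload.get("more"):
            self.enqueue("acquire", {"more": payload["more"] - 1})

    def on_stop(self, payload):
        self.log.append(("stop", payload))

qsm = QueuedStateMachine()
qsm.enqueue("configure", {"rate_hz": 20000})
qsm.enqueue("acquire", {"more": 2})
qsm.enqueue("stop", None)
qsm.run()
```

The key property is that handlers never block each other: new work is appended to the queue, which is what lets one loop service many asynchronous channels deterministically.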
FORMATION CONSTANTS AND THERMODYNAMIC ...
African Journals Online (AJOL)
KEY WORDS: Metal complexes, Schiff base ligand, Formation constant, DFT calculation ... best values for the formation constants of the proposed equilibrium model by .... to its positive charge distribution and the ligand deformation geometry.
Hauff, F.; Hoernle, K.; Tilton, G.; Graham, D. W.; Kerr, A. C.
2000-01-01
Oceanic flood basalts are poorly understood, short-term expressions of highly increased heat flux and mass flow within the convecting mantle. The uniqueness of the Caribbean Large Igneous Province (CLIP, 92-74 Ma) with respect to other Cretaceous oceanic plateaus is its extensive sub-aerial exposures, providing an excellent basis to investigate the temporal and compositional relationships within a starting plume head. We present major element, trace element and initial Sr-Nd-Pb isotope compositions of 40 extrusive rocks from the Caribbean Plateau, including onland sections in Costa Rica, Colombia and Curaçao as well as DSDP Sites in the Central Caribbean. Even though the lavas were erupted over an area of ~3×10⁶ km², the majority have strikingly uniform incompatible element patterns (La/Yb = 0.96±0.16, n = 64 out of 79 samples, 2σ) and initial Nd-Pb isotopic compositions (e.g. ¹⁴³Nd/¹⁴⁴Nd(i) = 0.51291±3, εNd(i) = 7.3±0.6, ²⁰⁶Pb/²⁰⁴Pb(i) = 18.86±0.12, n = 54 out of 66, 2σ). Lavas with endmember compositions have only been sampled at the DSDP Sites, Gorgona Island (Colombia) and the 65-60 Ma accreted Quepos and Osa igneous complexes (Costa Rica) of the subsequent hotspot track. Despite the relatively uniform composition of most lavas, linear correlations exist between isotope ratios and between isotope and highly incompatible trace element ratios. The Sr-Nd-Pb isotope and trace element signatures of the chemically enriched lavas are compatible with derivation from recycled oceanic crust, while the depleted lavas are derived from a highly residual source. This source could represent either oceanic lithospheric mantle left after ocean crust formation or gabbros with interlayered ultramafic cumulates of the lower oceanic crust. ³He/⁴He ratios in olivines of enriched picrites at Quepos are ~12 times the atmospheric ratio, suggesting that the enriched component may have once resided in the lower mantle. Evaluation of the Sm-Nd and U-Pb isotope systematics on
A robust and high-performance queue management controller for large round trip time networks
Khoshnevisan, Ladan; Salmasi, Farzad R.
2016-05-01
Congestion management for the transmission control protocol is of utmost importance to prevent packet loss within a network. This necessitates strategies for active queue management. The most widely applied active queue management strategies have inherent disadvantages that lead to suboptimal performance and even instability in the case of large round trip time and/or external disturbance. This paper presents an internal model control robust queue management scheme with two degrees of freedom in order to restrict the undesired effects of large and small round trip times and parameter variations in the queue management. Conventional approaches such as proportional-integral and random early detection procedures lead to unstable behaviour due to large delay. Moreover, the internal model control-Smith scheme suffers from large oscillations due to the large round trip time, while other schemes such as internal model control with proportional-integral-derivative control show excessively sluggish performance for small round trip time values. To overcome these shortcomings, we introduce a system entailing two individual controllers for queue management and disturbance rejection, simultaneously. Simulation results based on Matlab/Simulink and also Network Simulator 2 (NS2) demonstrate the effectiveness of the procedure and verify the analytical approach.
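For context, the conventional proportional-integral AQM that the paper compares against can be sketched in a few lines; the gains and queue target below are illustrative only, not values from the paper:

```python
def pi_aqm(queue_lengths, q_ref=200.0, kp=1e-4, ki=1e-5, dt=0.01):
    """Proportional-integral active queue management sketch:
    updates a packet drop probability from the queue-length error.
    Gains are illustrative, not tuned for any real link."""
    p, integral, probs = 0.0, 0.0, []
    for q in queue_lengths:
        e = q - q_ref                      # error w.r.t. target queue
        integral += e * dt
        p = min(1.0, max(0.0, kp * e + ki * integral))
        probs.append(p)
    return probs

probs = pi_aqm([250, 300, 400, 350, 300])
```

The instability the paper addresses arises because the loop from drop probability back to queue length includes the round trip time as a delay, which a plain PI law like this one does not compensate.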
Ion exchange equilibrium constants
Marcus, Y
2013-01-01
Ion Exchange Equilibrium Constants focuses on the test-compilation of equilibrium constants for ion exchange reactions. The book first underscores the scope of the compilation, equilibrium constants, symbols used, and arrangement of the table. The manuscript then presents the table of equilibrium constants, including polystyrene sulfonate cation exchanger, polyacrylate cation exchanger, polymethacrylate cation exchanger, polystyrene phosphate cation exchanger, and zirconium phosphate cation exchanger. The text highlights zirconium oxide anion exchanger, zeolite type 13Y cation exchanger, and
International Nuclear Information System (INIS)
Syed, E.V.; Salaita, G.N.; McCaffery, F.G.
1991-01-01
Cased hole logging with pulsed neutron tools finds extensive use for identifying zones of water breakthrough and monitoring oil-water contacts in oil reservoirs being depleted by waterflooding or natural water drive. Results of such surveys then find direct use for planning recompletions and water shutoff treatments. Pulsed neutron capture (PNC) logs are useful for estimating water saturation changes behind casing in the presence of a constant, high-salinity environment. PNC log surveys run at different times, i.e., in a time-lapse mode, are particularly amenable to quantitative analysis. The combined use of the original open hole and PNC time-lapse log information can then provide information on remaining or residual oil saturations in a reservoir. This paper reports analyses of historical pulsed neutron capture log data to assess residual oil saturation in naturally water-swept zones for selected wells from a large sandstone reservoir in the Middle East. Quantitative determination of oil saturations was aided by PNC log information obtained from a series of tests conducted in a new well in the same field
CAN LARGE TIME DELAYS OBSERVED IN LIGHT CURVES OF CORONAL LOOPS BE EXPLAINED IN IMPULSIVE HEATING?
International Nuclear Information System (INIS)
Lionello, Roberto; Linker, Jon A.; Mikić, Zoran; Alexander, Caroline E.; Winebarger, Amy R.
2016-01-01
The light curves of solar coronal loops often peak first in channels associated with higher temperatures and then in those associated with lower temperatures. The delay times between the different narrowband EUV channels have been measured for many individual loops and recently for every pixel of an active region observation. The time delays between channels for an active region exhibit a wide range of values. The maximum time delay in each channel pair can be quite large, i.e., >5000 s. These large time delays make up 3%-26% (depending on the channel pair) of the pixels where a trustworthy, positive time delay is measured. It has been suggested that these time delays can be explained by simple impulsive heating, i.e., a short burst of energy that heats the plasma to a high temperature, after which the plasma is allowed to cool through radiation and conduction back to its original state. In this paper, we investigate whether the largest observed time delays can be explained by this hypothesis by simulating a series of coronal loops with different heating rates, loop lengths, abundances, and geometries to determine the range of expected time delays between a set of four EUV channels. We find that impulsive heating cannot address the largest time delays observed in two of the channel pairs and that the majority of the large time delays can only be explained by long, expanding loops with photospheric abundances. Additional observations may rule out these simulations as an explanation for the long time delays. We suggest that either the time delays found in this manner may not be representative of real loop evolution, or that the impulsive heating and cooling scenario may be too simple to explain the observations, and other potential heating scenarios must be explored.
Cosmological special relativity the large scale structure of space, time and velocity
Carmeli, Moshe
1997-01-01
This book deals with special relativity theory and its application to cosmology. It presents Einstein's theory of space and time in detail, and describes the large scale structure of space, time and velocity as a new cosmological special relativity. A cosmological Lorentz-like transformation, which relates events at different cosmic times, is derived and applied. A new law of addition of cosmic times is obtained, and the inflation of the space at the early universe is derived, both from the cosmological transformation. The book will be of interest to cosmologists, astrophysicists, theoretical
Cosmological special relativity the large scale structure of space, time and velocity
Carmeli, Moshe
2002-01-01
This book presents Einstein's theory of space and time in detail, and describes the large-scale structure of space, time and velocity as a new cosmological special relativity. A cosmological Lorentz-like transformation, which relates events at different cosmic times, is derived and applied. A new law of addition of cosmic times is obtained, and the inflation of the space at the early universe is derived, both from the cosmological transformation. The relationship between cosmic velocity, acceleration and distances is given. In the appendices gravitation is added in the form of a cosmological g
Computing the real-time Green's Functions of large Hamiltonian matrices
Iitaka, Toshiaki
1998-01-01
A numerical method is developed for calculating the real-time Green's functions of very large sparse Hamiltonian matrices, which exploits the numerical solution of the inhomogeneous time-dependent Schrödinger equation. The method has a clear-cut structure reflecting the most naive definition of the Green's functions and is well suited to parallel and vector supercomputers. The effectiveness of the method is illustrated by applying it to simple lattice models. An application of this method...
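The idea can be illustrated with a short sketch: instead of diagonalizing H, one element of the Green's function is obtained by numerically propagating a source state under H. This is a simplified stand-in for the paper's scheme, using SciPy's Krylov-based `expm_multiply` rather than an explicit time-stepping integrator:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import expm_multiply

def greens_function(H, j, k, t):
    """Retarded Green's function element G_jk(t) = -i <j| e^{-iHt} |k>
    (hbar = 1, t >= 0), computed by propagating |k> under the sparse H
    rather than diagonalizing, which is what makes very large
    Hamiltonian matrices tractable."""
    n = H.shape[0]
    phi = np.zeros(n, dtype=complex)
    phi[k] = 1.0
    psi_t = expm_multiply(-1j * t * H.tocsc(), phi)
    return -1j * psi_t[j]

# 1D tight-binding chain as a simple lattice model
n = 50
H = diags([-1.0, -1.0], [-1, 1], shape=(n, n))
g = greens_function(H, 25, 25, 1.0)
```

Since only matrix-vector products with H are needed, the cost scales with the number of nonzero matrix elements, not with n².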
Calculation of neutron die-away times in a large-vehicle portal monitor
International Nuclear Information System (INIS)
Lillie, R.A.; Santoro, R.T.; Alsmiller, R.G. Jr.
1980-05-01
Monte Carlo methods have been used to calculate neutron die-away times in a large-vehicle portal monitor. These calculations were performed to investigate the adequacy of using neutron die-away time measurements to detect the clandestine movement of shielded nuclear materials. The geometry consisted of a large tunnel lined with ³He proportional counters. The time behavior of the (n,p) capture reaction in these counters was calculated when the tunnel contained a number of different tractor-trailer load configurations. Neutron die-away times obtained from weighted least-squares fits to these data were compared. The change in neutron die-away time due to the replacement of cargo in a fully loaded truck with a spherical shell containing 240 kg of borated polyethylene was calculated to be less than 3%. This result together with the overall behavior of neutron die-away time versus mass inside the tunnel strongly suggested that measurements of this type will not provide a reliable means of detecting shielded nuclear materials in a large vehicle. 5 figures, 4 tables
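The die-away time extraction step (an exponential fit to time-binned counts by weighted least squares) can be sketched as follows; the bin grid, amplitude and true die-away time are synthetic, not values from the report:

```python
import numpy as np

def die_away_time(t, counts):
    """Weighted least-squares fit of ln(counts) = ln(A) - t/tau.
    For Poisson counts, var(ln N) ~ 1/N, so each bin gets weight ~ N."""
    mask = counts > 0
    t, counts = t[mask], counts[mask]
    W = np.diag(counts)                       # weights ~ counts
    X = np.vstack([np.ones_like(t), -t]).T    # columns: [ln A, 1/tau]
    y = np.log(counts)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return 1.0 / beta[1]                      # die-away time tau

# synthetic decay data (illustrative units: microseconds)
rng = np.random.default_rng(0)
t = np.linspace(0.0, 500.0, 51)
true_tau = 120.0
counts = rng.poisson(1e4 * np.exp(-t / true_tau)).astype(float)
tau = die_away_time(t, counts)
```

A <3% shift in tau, as reported above, would thus have to be resolved against the statistical scatter of exactly this kind of fit.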
Venkatachari, Balaji Shankar; Streett, Craig L.; Chang, Chau-Lyan; Friedlander, David J.; Wang, Xiao-Yen; Chang, Sin-Chung
2016-01-01
Despite decades of development of unstructured mesh methods, high-fidelity time-accurate simulations are still predominantly carried out on structured, or unstructured hexahedral meshes by using high-order finite-difference, weighted essentially non-oscillatory (WENO), or hybrid schemes formed by their combinations. In this work, the space-time conservation element solution element (CESE) method is used to simulate several flow problems including supersonic jet/shock interaction and its impact on launch vehicle acoustics, and direct numerical simulations of turbulent flows using tetrahedral meshes. This paper provides a status report for the continuing development of the space-time conservation element solution element (CESE) numerical and software framework under the Revolutionary Computational Aerosciences (RCA) project. Solution accuracy and large-scale parallel performance of the numerical framework is assessed with the goal of providing a viable paradigm for future high-fidelity flow physics simulations.
AUTHOR|(SzGeCERN)756497; The ATLAS collaboration; Garcia Garcia, Pedro Javier; Vandelli, Wainer; Froening, Holger
2017-01-01
Data acquisition systems for large-scale high-energy physics experiments have to handle hundreds of gigabytes per second of data, and are typically realized as specialized data centers that connect a very large number of front-end electronics devices to an event detection and storage system. The design of such systems is often based on many assumptions, small-scale experiments and a substantial amount of over-provisioning. In this work, we introduce a discrete event-based simulation tool that models the data flow of the current ATLAS data acquisition system, with the main goal to be accurate with regard to the main operational characteristics. We measure buffer occupancy counting the number of elements in buffers, resource utilization measuring output bandwidth and counting the number of active processing units, and their time evolution by comparing data over many consecutive and small periods of time. We perform studies on the error of simulation when comparing the results to a large amount of real-world ope...
AUTHOR|(SzGeCERN)756497; The ATLAS collaboration; Garcia Garcia, Pedro Javier; Vandelli, Wainer; Froening, Holger
2017-01-01
Data acquisition systems for large-scale high-energy physics experiments have to handle hundreds of gigabytes per second of data, and are typically implemented as specialized data centers that connect a very large number of front-end electronics devices to an event detection and storage system. The design of such systems is often based on many assumptions, small-scale experiments and a substantial amount of over-provisioning. In this paper, we introduce a discrete event-based simulation tool that models the dataflow of the current ATLAS data acquisition system, with the main goal to be accurate with regard to the main operational characteristics. We measure buffer occupancy counting the number of elements in buffers; resource utilization measuring output bandwidth and counting the number of active processing units, and their time evolution by comparing data over many consecutive and small periods of time. We perform studies on the error in simulation when comparing the results to a large amount of real-world ...
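The core of such a discrete event-based simulation is a time-ordered event queue driving buffer occupancy counters. A deliberately minimal sketch (single buffer, one processing unit, deterministic rates; the real tool models the full ATLAS dataflow):

```python
import heapq

def simulate_buffer(arrival_dt, service_dt, n_events):
    """Minimal discrete-event sketch: fragments arrive every arrival_dt,
    a single processing unit drains one per service_dt; the trace
    records the time evolution of buffer occupancy."""
    events = [(i * arrival_dt, "arrive") for i in range(n_events)]
    heapq.heapify(events)
    occupancy, busy_until, trace = 0, 0.0, []
    while events:
        t, kind = heapq.heappop(events)
        if kind == "arrive":
            occupancy += 1
            start = max(t, busy_until)        # wait if the unit is busy
            busy_until = start + service_dt
            heapq.heappush(events, (busy_until, "depart"))
        else:
            occupancy -= 1
        trace.append((t, occupancy))
    return trace

trace = simulate_buffer(arrival_dt=1.0, service_dt=1.5, n_events=20)
```

With the service slower than the arrivals, the occupancy trace shows the backlog building up and then draining, which is the kind of buffer-occupancy time evolution the paper validates against production data.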
The LOFT (Large Observatory for X-ray Timing) background simulations
DEFF Research Database (Denmark)
Campana, R.; Feroci, M.; Del Monte, E.
2012-01-01
The Large Observatory For X-ray Timing (LOFT) is an innovative medium-class mission selected for an assessment phase in the framework of the ESA M3 Cosmic Vision call. LOFT is intended to answer fundamental questions about the behavior of matter in the very strong gravitational and magnetic fields...
Universal relation between spectroscopic constants
Indian Academy of Sciences (India)
(3) The author has used eq. (6) of his paper to calculate De. This relation leads to a large deviation from the correct value depending upon the extent to which experimental values are known. Guided by this fact, in our work, we used experimentally observed De values to derive the relation between spectroscopic constants.
RankExplorer: Visualization of Ranking Changes in Large Time Series Data.
Shi, Conglei; Cui, Weiwei; Liu, Shixia; Xu, Panpan; Chen, Wei; Qu, Huamin
2012-12-01
For many applications involving time series data, people are often interested in the changes of item values over time as well as their ranking changes. For example, people search many words via search engines like Google and Bing every day. Analysts are interested in both the absolute searching number for each word as well as their relative rankings. Both sets of statistics may change over time. For very large time series data with thousands of items, how to visually present ranking changes is an interesting challenge. In this paper, we propose RankExplorer, a novel visualization method based on ThemeRiver to reveal the ranking changes. Our method consists of four major components: 1) a segmentation method which partitions a large set of time series curves into a manageable number of ranking categories; 2) an extended ThemeRiver view with embedded color bars and changing glyphs to show the evolution of aggregation values related to each ranking category over time as well as the content changes in each ranking category; 3) a trend curve to show the degree of ranking changes over time; 4) rich user interactions to support interactive exploration of ranking changes. We have applied our method to some real time series data and the case studies demonstrate that our method can reveal the underlying patterns related to ranking changes which might otherwise be obscured in traditional visualizations.
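The rank bookkeeping behind components 1 and 3 is straightforward; a toy sketch (with made-up search-volume data) that computes per-step rankings and a trend of total rank displacement:

```python
def ranks(values):
    """Rank items at one time step: highest value gets rank 1."""
    order = sorted(range(len(values)), key=lambda i: -values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def ranking_change_trend(series):
    """Degree of ranking change between consecutive time steps,
    measured as the total absolute rank displacement."""
    per_step_ranks = [ranks(col) for col in zip(*series)]
    return [sum(abs(a - b) for a, b in zip(r0, r1))
            for r0, r1 in zip(per_step_ranks, per_step_ranks[1:])]

# rows = items, columns = time steps (toy search-volume counts)
series = [
    [10, 30, 50],   # item A rises
    [40, 20, 10],   # item B falls
    [25, 25, 25],   # item C steady
]
trend = ranking_change_trend(series)  # -> [4, 0]
```

For thousands of items this per-step ranking would then be partitioned into a manageable number of ranking categories, which is what the extended ThemeRiver view aggregates over.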
Rapid Modeling of and Response to Large Earthquakes Using Real-Time GPS Networks (Invited)
Crowell, B. W.; Bock, Y.; Squibb, M. B.
2010-12-01
Real-time GPS networks have the advantage of capturing motions throughout the entire earthquake cycle (interseismic, seismic, coseismic, postseismic), and because of this, are ideal for real-time monitoring of fault slip in the region. Real-time GPS networks provide the perfect supplement to seismic networks, which operate with lower noise and higher sampling rates than GPS networks, but only measure accelerations or velocities, putting them at a supreme disadvantage for ascertaining the full extent of slip during a large earthquake in real-time. Here we report on two examples of rapid modeling of recent large earthquakes near large regional real-time GPS networks. The first utilizes Japan’s GEONET consisting of about 1200 stations during the 2003 Mw 8.3 Tokachi-Oki earthquake about 100 km offshore Hokkaido Island and the second investigates the 2010 Mw 7.2 El Mayor-Cucapah earthquake recorded by more than 100 stations in the California Real Time Network. The principal components of strain were computed throughout the networks and utilized as a trigger to initiate earthquake modeling. Total displacement waveforms were then computed in a simulated real-time fashion using a real-time network adjustment algorithm that fixes a station far away from the rupture to obtain a stable reference frame. Initial peak ground displacement measurements can then be used to obtain an initial size through scaling relationships. Finally, a full coseismic model of the event can be run minutes after the event, given predefined fault geometries, allowing emergency first responders and researchers to pinpoint the regions of highest damage. Furthermore, we are also investigating using total displacement waveforms for real-time moment tensor inversions to look at spatiotemporal variations in slip.
Rosenkrantz, Andrew B; Liang, Yu; Duszak, Richard; Recht, Michael P
2017-09-01
This study aims to assess the impact of off-campus facility expansion by a large academic health system on patient travel times for screening mammography. Screening mammograms performed from 2013 to 2015 and associated patient demographics were identified using the NYU Langone Medical Center Enterprise Data Warehouse. During this time, the system's number of mammography facilities increased from 6 to 19, reflecting expansion beyond Manhattan throughout the New York metropolitan region. Geocoding software was used to estimate driving times from patients' homes to imaging facilities. For 147,566 screening mammograms, the mean estimated patient travel time was 19.9 ± 15.2 minutes. With facility expansion, travel times declined significantly, as did variation in travel times between sociodemographic subgroups. However, travel times to pre-expansion facilities remained stable (initial: 26.8 ± 18.9 minutes, final: 26.7 ± 18.6 minutes). Among women undergoing mammography before and after expansion, travel times were shorter for the postexpansion mammogram in only 6.3%, but this rate varied significantly across subgroups. Facility expansion can thus lessen patient travel burden and reduce travel time variation among sociodemographic populations. Nonetheless, existing patients strongly tend to return to established facilities despite potentially shorter travel time locations, suggesting strong site loyalty. Variation in travel times likely relates to various factors other than facility proximity.
Varying Constants, Gravitation and Cosmology
Directory of Open Access Journals (Sweden)
Jean-Philippe Uzan
2011-03-01
Fundamental constants are a cornerstone of our physical laws. Any constant varying in space and/or time would reflect the existence of an almost massless field that couples to matter. This will induce a violation of the universality of free fall. Thus, it is of utmost importance for our understanding of gravity and of the domain of validity of general relativity to test for their constancy. We detail the relations between the constants, the tests of the local position invariance and of the universality of free fall. We then review the main experimental and observational constraints that have been obtained from atomic clocks, the Oklo phenomenon, solar system observations, meteorite dating, quasar absorption spectra, stellar physics, pulsar timing, the cosmic microwave background and big bang nucleosynthesis. At each step we describe the basics of each system, its dependence with respect to the constants, the known systematic effects and the most recent constraints that have been obtained. We then describe the main theoretical frameworks in which the low-energy constants may actually be varying and we focus on the unification mechanisms and the relations between the variation of different constants. To finish, we discuss the more speculative possibility of understanding their numerical values and the apparent fine-tuning that they confront us with.
Time delay effects on large-scale MR damper based semi-active control strategies
International Nuclear Information System (INIS)
Cha, Y-J; Agrawal, A K; Dyke, S J
2013-01-01
This paper presents a detailed investigation on the robustness of large-scale 200 kN MR damper based semi-active control strategies in the presence of time delays in the control system. Although the effects of time delay on stability and performance degradation of an actively controlled system have been investigated extensively by many researchers, degradation in the performance of semi-active systems due to time delay has yet to be investigated. Since semi-active systems are inherently stable, instability problems due to time delay are unlikely to arise. This paper investigates the effects of time delay on the performance of a building with a large-scale MR damper, using numerical simulations of near- and far-field earthquakes. The MR damper is considered to be controlled by four different semi-active control algorithms, namely (i) clipped-optimal control (COC), (ii) decentralized output feedback polynomial control (DOFPC), (iii) Lyapunov control, and (iv) simple-passive control (SPC). It is observed that all controllers except for the COC are significantly robust with respect to time delay. On the other hand, the clipped-optimal controller should be integrated with a compensator to improve the performance in the presence of time delay. (paper)
TIME DISTRIBUTIONS OF LARGE AND SMALL SUNSPOT GROUPS OVER FOUR SOLAR CYCLES
International Nuclear Information System (INIS)
Kilcik, A.; Yurchyshyn, V. B.; Abramenko, V.; Goode, P. R.; Cao, W.; Ozguc, A.; Rozelot, J. P.
2011-01-01
Here we analyze solar activity by focusing on time variations of the number of sunspot groups (SGs) as a function of their modified Zurich class. We analyzed data for solar cycles 20-23 by using Rome (cycles 20 and 21) and Learmonth Solar Observatory (cycles 22 and 23) SG numbers. All SGs recorded during these time intervals were separated into two groups. The first group includes small SGs (A, B, C, H, and J classes by Zurich classification), and the second group consists of large SGs (D, E, F, and G classes). We then calculated small and large SG numbers from their daily mean numbers as observed on the solar disk during a given month. We report that the time variations of small and large SG numbers are asymmetric except for solar cycle 22. In general, large SG numbers appear to reach their maximum in the middle of the solar cycle (phases 0.45-0.5), while the international sunspot numbers and the small SG numbers generally peak much earlier (solar cycle phases 0.29-0.35). Moreover, the 10.7 cm solar radio flux, the facular area, and the maximum coronal mass ejection speed show better agreement with the large SG numbers than they do with the small SG numbers. Our results suggest that the large SG numbers are more likely to shed light on solar activity and its geophysical implications. Our findings may also influence our understanding of long-term variations of the total solar irradiance, which is thought to be an important factor in the Sun-Earth climate relationship.
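The small/large separation by Zurich class can be expressed directly; a small sketch of the monthly mean SG counting described above (input data invented for illustration):

```python
SMALL_CLASSES = set("ABCHJ")   # A, B, C, H, J: small sunspot groups
LARGE_CLASSES = set("DEFG")    # D, E, F, G: large sunspot groups

def monthly_sg_counts(daily_observations):
    """Mean daily number of small and large sunspot groups in a month.
    daily_observations: list of per-day lists of Zurich class letters."""
    days = len(daily_observations)
    small = sum(sum(1 for c in day if c in SMALL_CLASSES)
                for day in daily_observations) / days
    large = sum(sum(1 for c in day if c in LARGE_CLASSES)
                for day in daily_observations) / days
    return small, large

small, large = monthly_sg_counts([["A", "D", "C"], ["E", "B"], ["H"]])
```

Repeating this for every month of a cycle yields the two time series whose maxima the paper compares (large SGs peaking near cycle phase 0.45-0.5, small SGs near 0.29-0.35).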
Modeling and control of a large nuclear reactor. A three-time-scale approach
Energy Technology Data Exchange (ETDEWEB)
Shimjith, S.R. [Indian Institute of Technology Bombay, Mumbai (India); Bhabha Atomic Research Centre, Mumbai (India); Tiwari, A.P. [Bhabha Atomic Research Centre, Mumbai (India); Bandyopadhyay, B. [Indian Institute of Technology Bombay, Mumbai (India). IDP in Systems and Control Engineering
2013-07-01
This monograph presents recent research on the modeling and control of a large nuclear reactor through a three-time-scale approach, written by leading experts in the field. Control analysis and design of large nuclear reactors requires a suitable mathematical model representing the steady state and dynamic behavior of the reactor with reasonable accuracy. This task is, however, quite challenging because of several complex dynamic phenomena existing in a reactor. Quite often, the models developed would be of prohibitively large order, non-linear and of complex structure not readily amenable for control studies. Moreover, the existence of simultaneously occurring dynamic variations at different speeds makes the mathematical model susceptible to numerical ill-conditioning, inhibiting direct application of standard control techniques. This monograph introduces a technique for mathematical modeling of large nuclear reactors in the framework of multi-point kinetics, to obtain a comparatively smaller order model in standard state space form, thus overcoming these difficulties. It further brings in innovative methods for controller design for systems exhibiting multi-time-scale property, with emphasis on three-time-scale systems.
The large discretization step method for time-dependent partial differential equations
Haras, Zigo; Taasan, Shlomo
1995-01-01
A new method for the acceleration of linear and nonlinear time dependent calculations is presented. It is based on the Large Discretization Step (LDS) approximation, defined in this work, which employs an extended system of low accuracy schemes to approximate a high accuracy discrete approximation to a time dependent differential operator. Error bounds on such approximations are derived. These approximations are efficiently implemented in the LDS methods for linear and nonlinear hyperbolic equations, presented here. In these algorithms the high and low accuracy schemes are interpreted as the same discretization of a time dependent operator on fine and coarse grids, respectively. Thus, a system of correction terms and corresponding equations are derived and solved on the coarse grid to yield the fine grid accuracy. These terms are initialized by visiting the fine grid once in many coarse grid time steps. The resulting methods are very general, simple to implement and may be used to accelerate many existing time marching schemes.
Displacement in the parameter space versus spurious solution of discretization with large time step
International Nuclear Information System (INIS)
Mendes, Eduardo; Letellier, Christophe
2004-01-01
In order to investigate a possible correspondence between differential and difference equations, it is important to possess discretizations of ordinary differential equations. It is well known that when differential equations are discretized, the solution thus obtained depends on the time step used. In the majority of cases, such a solution is considered spurious when it does not resemble the expected solution of the differential equation. This often happens when the time step taken into consideration is too large. In this work, we show that, even for quite large time steps, some solutions which do not correspond to the expected ones are still topologically equivalent to solutions of the original continuous system if a displacement in the parameter space is considered. To reduce such a displacement, a judicious choice of the discretization scheme should be made. To this end, a recent discretization scheme, based on the Lie expansion of the original differential equations and proposed by Monaco and Normand-Cyrot, will be analysed. Such a scheme will be shown to be sufficient for providing an adequate discretization for quite large time steps compared to the pseudo-period of the underlying dynamics.
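The effect is easy to reproduce with the simplest scheme: forward Euler applied to the logistic equation behaves like a logistic map whose effective parameter is displaced by the step size. This sketch illustrates the phenomenon only; it is not the Monaco and Normand-Cyrot scheme analysed in the paper:

```python
def euler_logistic(r, dt, steps, x0=0.1):
    """Forward-Euler discretization of dx/dt = r*x*(1-x).
    The iteration x -> x + dt*r*x*(1-x) is conjugate to the logistic
    map with effective parameter 1 + dt*r, so the discrete dynamics
    depends on dt as well as on r (the parameter displacement)."""
    x = x0
    for _ in range(steps):
        x = x + dt * r * x * (1.0 - x)
    return x

x_small = euler_logistic(r=1.0, dt=0.01, steps=10000)  # converges to 1
x_large = euler_logistic(r=1.0, dt=2.5, steps=40)      # same r: oscillates
```

With a small step the iterates settle on the ODE's fixed point x = 1; with dt = 2.5 the same r yields an effective map parameter of 3.5 and the orbit cycles instead, a solution that looks spurious but is a genuine logistic-map dynamics at a displaced parameter.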
Large deviations of a long-time average in the Ehrenfest urn model
Meerson, Baruch; Zilber, Pini
2018-05-01
Since its inception in 1907, the Ehrenfest urn model (EUM) has served as a test bed of key concepts of statistical mechanics. Here we employ this model to study large deviations of a time-additive quantity. We consider two continuous-time versions of the EUM with K urns and N balls: with and without interactions between the balls in the same urn. We evaluate the probability distribution that the average number of balls in one urn over a time window T takes any specified value aN, where 0 ≤ a ≤ 1. For a long observation time, T → ∞, a Donsker–Varadhan large deviation principle holds: the probability decays exponentially in T with a rate function depending on a and on the additional parameters of the model. We calculate the rate function exactly by two different methods due to Donsker and Varadhan and compare the exact results with those obtained with a variant of the WKB approximation (after Wentzel, Kramers and Brillouin). In the absence of interactions the WKB prediction for the rate function is exact for any N. In the presence of interactions the WKB method gives asymptotically exact results for large N. The WKB method also uncovers the (very simple) optimal time history of the system which dominates the contributions of different time histories to the probability.
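As a concrete illustration of the time-averaged quantity involved, here is a stdlib-Python Monte Carlo sketch of the non-interacting two-urn case (K = 2); it is our illustration, not the paper's calculation. Each of the N balls jumps to the other urn at unit rate, and the time average of the occupation of one urn concentrates around N/2 for long T, with large deviations from that value exponentially suppressed in T.

```python
import random

def ehrenfest_time_average(N, T, seed=0):
    """Continuous-time 2-urn Ehrenfest model: each of N balls jumps to the
    other urn at unit rate.  Returns the time average of the number of
    balls in urn 1 over the observation window [0, T]."""
    rng = random.Random(seed)
    n = N              # start with all balls in urn 1
    t = 0.0
    integral = 0.0
    while t < T:
        dt = rng.expovariate(float(N))   # total jump rate is always N
        dt = min(dt, T - t)
        integral += n * dt               # n(t) is piecewise constant
        t += dt
        if t >= T:
            break
        # the jumping ball sits in urn 1 with probability n/N
        if rng.random() < n / N:
            n -= 1
        else:
            n += 1
    return integral / T

# For long T the average concentrates near N/2 = 25; values aN far from
# N/2 are the exponentially rare events governed by the rate function.
avg = ehrenfest_time_average(N=50, T=2000.0)
```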
Modeling and Control of a Large Nuclear Reactor A Three-Time-Scale Approach
Shimjith, S R; Bandyopadhyay, B
2013-01-01
Control analysis and design of large nuclear reactors requires a suitable mathematical model representing the steady state and dynamic behavior of the reactor with reasonable accuracy. This task is, however, quite challenging because of several complex dynamic phenomena existing in a reactor. Quite often, the models developed would be of prohibitively large order, non-linear and of complex structure not readily amenable for control studies. Moreover, the existence of simultaneously occurring dynamic variations at different speeds makes the mathematical model susceptible to numerical ill-conditioning, inhibiting direct application of standard control techniques. This monograph introduces a technique for mathematical modeling of large nuclear reactors in the framework of multi-point kinetics, to obtain a comparatively smaller order model in standard state space form thus overcoming these difficulties. It further brings in innovative methods for controller design for systems exhibiting multi-time-scale property,...
Molecular dynamics based enhanced sampling of collective variables with very large time steps
Chen, Pei-Yang; Tuckerman, Mark E.
2018-01-01
Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.
Larson-Miller Constant of Heat-Resistant Steel
Tamura, Manabu; Abe, Fujio; Shiba, Kiyoyuki; Sakasegawa, Hideo; Tanigawa, Hiroyasu
2013-06-01
Long-term rupture data for 79 types of heat-resistant steels including carbon steel, low-alloy steel, high-alloy steel, austenitic stainless steel, and superalloy were analyzed, and a constant for the Larson-Miller (LM) parameter was obtained in the current study for each material. The calculated LM constant, C, is approximately 20 for heat-resistant steels and alloys except for high-alloy martensitic steels with high creep resistance, for which C ≈ 30. The apparent activation energy was also calculated, and the LM constant was found to be proportional to the apparent activation energy with a high correlation coefficient, which suggests that the LM constant is a material constant possessing intrinsic physical meaning. The contribution of the entropy change to the LM constant is not small, especially for several martensitic steels with large values of C. Deformation of such martensitic steels should be accompanied by a large entropy change of at least 10 times the gas constant, besides the entropy change due to self-diffusion.
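For reference, the Larson-Miller parameter the abstract refers to is P = T(C + log₁₀ t_r), with T the absolute temperature, t_r the rupture time in hours, and C the material constant the paper fits per material. A small sketch follows; the C = 20 default and the temperatures are illustrative assumptions, not the paper's fitted values.

```python
import math

def larson_miller(T_kelvin, t_rupture_hours, C=20.0):
    """Larson-Miller parameter P = T*(C + log10(t_r)).  C ~ 20 is the
    classical textbook value for many heat-resistant steels."""
    return T_kelvin * (C + math.log10(t_rupture_hours))

def rupture_time(T_kelvin, P, C=20.0):
    """Invert the parameter to predict rupture life at another temperature."""
    return 10.0 ** (P / T_kelvin - C)

# Equivalent exposure: P from a 1000 h test at 923 K (650 C) predicts a
# much longer rupture life at a lower service temperature of 873 K.
P = larson_miller(T_kelvin=923.0, t_rupture_hours=1000.0)
life_at_600C = rupture_time(T_kelvin=873.0, P=P)
```

The paper's point is that C itself correlates with the apparent activation energy, so treating it as a universal constant (as the sketch does) is only a first approximation.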
Asymptotics for Large Time of Global Solutions to the Generalized Kadomtsev-Petviashvili Equation
Hayashi, Nakao; Naumkin, Pavel I.; Saut, Jean-Claude
We study the large time asymptotic behavior of solutions to the generalized Kadomtsev-Petviashvili (KP) equations, where σ = 1 or σ = −1. When ρ = 2 and σ = −1, (KP) is known as the KPI equation, while ρ = 2, σ = +1 corresponds to the KPII equation. The KP equation models the propagation along the x-axis of nonlinear dispersive long waves on the surface of a fluid, when the variation along the y-axis proceeds slowly [10]. The case ρ = 3, σ = −1 has been found in the modeling of sound waves in antiferromagnetics [15]. We prove that if ρ ≥ 3 is an integer and the initial data are sufficiently small, then the solution u of (KP) satisfies decay estimates for all t ∈ R, where κ = 1 if ρ = 3 and κ = 0 if ρ ≥ 4. We also find the large time asymptotics for the solution.
Large-scale machine learning and evaluation platform for real-time traffic surveillance
Eichel, Justin A.; Mishra, Akshaya; Miller, Nicholas; Jankovic, Nicholas; Thomas, Mohan A.; Abbott, Tyler; Swanson, Douglas; Keller, Joel
2016-09-01
In traffic engineering, vehicle detectors are trained on limited datasets, resulting in poor accuracy when deployed in real-world surveillance applications. Annotating large-scale high-quality datasets is challenging. Typically, these datasets have limited diversity; they do not reflect the real-world operating environment. There is a need for a large-scale, cloud-based positive and negative mining process and a large-scale learning and evaluation system for the application of automatic traffic measurements and classification. The proposed positive and negative mining process addresses the quality of crowd-sourced ground truth data through machine learning review and human feedback mechanisms. The proposed learning and evaluation system uses a distributed cloud computing framework to handle data-scaling issues associated with large numbers of samples and a high-dimensional feature space. The system is trained using AdaBoost on 1,000,000 Haar-like features extracted from 70,000 annotated video frames. The trained real-time vehicle detector achieves an accuracy of at least 95% for half, and about 78% for 19/20, of the time when tested on ˜7,500,000 video frames. At the end of 2016, the dataset is expected to have over 1 billion annotated video frames.
Wealth Transfers Among Large Customers from Implementing Real-Time Retail Electricity Pricing
Borenstein, Severin
2007-01-01
Adoption of real-time electricity pricing — retail prices that vary hourly to reflect changing wholesale prices — removes existing cross-subsidies to those customers that consume disproportionately more when wholesale prices are highest. If their losses are substantial, these customers are likely to oppose RTP initiatives unless there is a supplemental program to offset their loss. Using data on a sample of 1142 large industrial and commercial customers in northern California, I show that RTP...
Gulati, Sankalp; Serrà, Joan; Ishwar, Vignesh; Serra, Xavier
2014-01-01
We demonstrate a data-driven unsupervised approach for the discovery of melodic patterns in large collections of Indian art music recordings. The approach first works on single recordings and subsequently searches in the entire music collection. Melodic similarity is based on dynamic time warping. The task being computationally intensive, lower bounding and early abandoning techniques are applied during distance computation. Our dataset comprises 365 hours of music, containing 1,764 audio rec...
Huang, Feimin; Li, Tianhong; Yu, Huimin; Yuan, Difan
2018-06-01
We are concerned with the global existence and large time behavior of entropy solutions to the one-dimensional unipolar hydrodynamic model for semiconductors in the form of Euler-Poisson equations in a bounded interval. In this paper, we first prove the global existence of entropy solutions by the vanishing viscosity method and the compensated compactness framework. In particular, the solutions are uniformly bounded with respect to space and time variables by introducing modified Riemann invariants and the theory of invariant regions. Based on the uniform estimates of density, we further show that the entropy solution converges to the corresponding unique stationary solution exponentially in time. No smallness condition is assumed on the initial data and doping profile. Moreover, a novelty of this paper is the uniform bound with respect to time for the weak solutions of the isentropic Euler-Poisson system.
Large lateral photovoltaic effect with ultrafast relaxation time in SnSe/Si junction
Energy Technology Data Exchange (ETDEWEB)
Wang, Xianjie; Zhao, Xiaofeng; Hu, Chang; Zhang, Yang; Song, Bingqian; Zhang, Lingli; Liu, Weilong; Lv, Zhe; Zhang, Yu; Sui, Yu, E-mail: suiyu@hit.edu.cn [Department of Physics, Harbin Institute of Technology, Harbin 150001 (China); Tang, Jinke [Department of Physics and Astronomy, University of Wyoming, Laramie, Wyoming 82071 (United States); Song, Bo, E-mail: songbo@hit.edu.cn [Department of Physics, Harbin Institute of Technology, Harbin 150001 (China); Academy of Fundamental and Interdisciplinary Sciences, Harbin Institute of Technology, Harbin 150001 (China)
2016-07-11
In this paper, we report a large lateral photovoltaic effect (LPE) with ultrafast relaxation time in SnSe/p-Si junctions. The LPE shows a linear dependence on the position of the laser spot, and the position sensitivity is as high as 250 mV mm⁻¹. The optical response time and the relaxation time of the LPE are about 100 ns and 2 μs, respectively. The current-voltage curve on the surface of the SnSe film indicates the formation of an inversion layer at the SnSe/p-Si interface. Our results clearly suggest that most of the excited electrons diffuse laterally in the inversion layer at the SnSe/p-Si interface, which results in a large LPE with ultrafast relaxation time. The high positional sensitivity and ultrafast relaxation time of the LPE make the SnSe/p-Si junction a promising candidate for a wide range of optoelectronic applications.
Incipient multiple fault diagnosis in real time with applications to large-scale systems
International Nuclear Information System (INIS)
Chung, H.Y.; Bien, Z.; Park, J.H.; Seon, P.H.
1994-01-01
By using a modified signed directed graph (SDG) together with distributed artificial neural networks and a knowledge-based system, a method of incipient multi-fault diagnosis is presented for large-scale physical systems with complex pipes and instrumentations such as valves, actuators, sensors, and controllers. The proposed method is designed so as to (1) make real-time incipient fault diagnosis possible for large-scale systems, (2) perform the fault diagnosis not only in the steady-state case but also in the transient case by using a concept of fault propagation time, which is newly adopted in the SDG model, (3) provide highly reliable diagnosis results and an explanation capability for the faults diagnosed, as in an expert system, and (4) diagnose pipe damage such as leaking, break, or throttling. The method is applied to the diagnosis of a pressurizer in the Kori Nuclear Power Plant (NPP) unit 2 in Korea under a transient condition, and the result is reported to show satisfactory performance of the method for the incipient multi-fault diagnosis of such a large-scale system in a real-time manner.
Indian Academy of Sciences (India)
IAS Admin
The article discusses the importance of the fine structure constant in quantum mechanics, along with a brief history of how it emerged. Although Sommerfeld's idea of elliptical orbits has been replaced by wave mechanics, the fine structure constant he introduced has remained an important parameter in the field of ...
International Nuclear Information System (INIS)
Beloy, K.; Borschevsky, A.; Schwerdtfeger, P.; Flambaum, V. V.
2010-01-01
Recently it was pointed out that transition frequencies in certain diatomic molecules have an enhanced sensitivity to variations in the fine-structure constant α and the proton-to-electron mass ratio m_p/m_e due to a near cancellation between the fine structure and vibrational interval in a ground electronic multiplet [V. V. Flambaum and M. G. Kozlov, Phys. Rev. Lett. 99, 150801 (2007)]. One such molecule possessing this favorable quality is silicon monobromide. Here we take a closer examination of SiBr as a candidate for detecting variations in α and m_p/m_e. We analyze the rovibronic spectrum by employing the most accurate experimental data available in the literature and perform ab initio calculations to determine the precise dependence of the spectrum on variations in α. Furthermore, we calculate the natural linewidths of the rovibronic levels, which place a fundamental limit on the accuracy to which variations may be determined.
Time and frequency domain analyses of the Hualien Large-Scale Seismic Test
International Nuclear Information System (INIS)
Kabanda, John; Kwon, Oh-Sung; Kwon, Gunup
2015-01-01
Highlights: • Time- and frequency-domain analysis methods are verified against each other. • The two analysis methods are validated against Hualien LSST. • The nonlinear time domain (NLTD) analysis resulted in more realistic response. • The frequency domain (FD) analysis shows amplification at resonant frequencies. • The NLTD analysis requires significant modeling and computing time. - Abstract: In the nuclear industry, the equivalent-linear frequency domain analysis method has been the de facto standard procedure primarily due to the method's computational efficiency. This study explores the feasibility of applying the nonlinear time domain analysis method for the soil–structure-interaction analysis of nuclear power facilities. As a first step, the equivalency of the time and frequency domain analysis methods is verified through a site response analysis of one-dimensional soil, a dynamic impedance analysis of a soil–foundation system, and a seismic response analysis of the entire soil–structure system. For the verifications, an idealized elastic soil–structure system is used to minimize variables in the comparison of the two methods. Then, the verified analysis methods are used to develop time and frequency domain models of the Hualien Large-Scale Seismic Test. The predicted structural responses are compared against field measurements. The models are also analyzed with an amplified ground motion to evaluate discrepancies of the time and frequency domain analysis methods when the soil–structure system behaves beyond the elastic range. The analysis results show that the equivalent-linear frequency domain analysis method amplifies certain frequency bands and tends to result in higher structural acceleration than the nonlinear time domain analysis method. A comparison with field measurements shows that the nonlinear time domain analysis method better captures the frequency distribution of recorded structural responses than the frequency domain analysis method.
Learning Read-constant Polynomials of Constant Degree modulo Composites
DEFF Research Database (Denmark)
Chattopadhyay, Arkadev; Gavaldá, Richard; Hansen, Kristoffer Arnsfelt
2011-01-01
Boolean functions that have constant degree polynomial representations over a fixed finite ring form a natural and strict subclass of the complexity class ACC^0. They are also precisely the functions computable efficiently by programs over fixed and finite nilpotent groups. This class is not known to be learnable in any reasonable learning model. In this paper, we provide a deterministic polynomial time algorithm for learning Boolean functions represented by polynomials of constant degree over arbitrary finite rings from membership queries, with the additional constraint that each variable ...
Stabilized constant-power supply [Alimentation régulée à puissance constante]
Energy Technology Data Exchange (ETDEWEB)
Roussel, L [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires
1968-06-01
The design and construction of a stabilized constant-power supply, adjustable from 5 to 100 W, are described. Intended for the constant-power drift of lithium-compensated diodes, the design targets a regulation precision of 1 per cent and a response time of under one second. Recent components such as Hall-effect multipliers and integrated amplifiers make this possible while allowing the use of interchangeable modules. (author)
Interactive exploration of large-scale time-varying data using dynamic tracking graphs
Widanagamaachchi, W.
2012-10-01
Exploring and analyzing the temporal evolution of features in large-scale time-varying datasets is a common problem in many areas of science and engineering. One natural representation of such data is tracking graphs, i.e., constrained graph layouts that use one spatial dimension to indicate time and show the "tracks" of each feature as it evolves, merges or disappears. However, for practical data sets creating the corresponding optimal graph layouts that minimize the number of intersections can take hours to compute with existing techniques. Furthermore, the resulting graphs are often unmanageably large and complex even with an ideal layout. Finally, due to the cost of the layout, changing the feature definition, e.g. by changing an iso-value, or analyzing properly adjusted sub-graphs is infeasible. To address these challenges, this paper presents a new framework that couples hierarchical feature definitions with progressive graph layout algorithms to provide an interactive exploration of dynamically constructed tracking graphs. Our system enables users to change feature definitions on-the-fly and filter features using arbitrary attributes while providing an interactive view of the resulting tracking graphs. Furthermore, the graph display is integrated into a linked view system that provides a traditional 3D view of the current set of features and allows a cross-linked selection to enable a fully flexible spatio-temporal exploration of data. We demonstrate the utility of our approach with several large-scale scientific simulations from combustion science. © 2012 IEEE.
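The "tracks" this abstract describes can be sketched with a toy overlap-based tracker; this is our illustration of the general idea only, since the paper's contribution is the hierarchical feature definitions and progressive layout built on top of such graphs.

```python
def tracking_graph(timesteps):
    """Build tracking-graph edges by spatial overlap: features (id -> set
    of cell indices) in consecutive timesteps are linked when they share
    cells.  A feature with several successors splits; a feature with
    several predecessors is the result of a merge."""
    edges = []
    for t in range(len(timesteps) - 1):
        cur, nxt = timesteps[t], timesteps[t + 1]
        for fid, cells in cur.items():
            for gid, gcells in nxt.items():
                if cells & gcells:          # non-empty overlap => track edge
                    edges.append((t, fid, gid))
    return edges

# Two features at t=0 that merge into a single feature at t=1.
steps = [
    {0: {1, 2, 3}, 1: {10, 11}},
    {0: {2, 3, 10}},
]
edges = tracking_graph(steps)
```

Laying such edges out with one spatial axis for time, while minimizing crossings, is the expensive step the paper replaces with progressive, interactively updatable layouts.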
A method for real-time memory efficient implementation of blob detection in large images
Directory of Open Access Journals (Sweden)
Petrović Vladimir L.
2017-01-01
In this paper we propose a method for real-time blob detection in large images with low memory cost. The method is suitable for implementation on specialized parallel hardware such as multi-core platforms, FPGAs and ASICs. It uses parallelism to speed up the blob detection. The input image is divided into blocks of equal size, to which the maximally stable extremal regions (MSER) blob detector is applied in parallel. We propose the use of multiresolution analysis for the detection of large blobs which are not detected by processing the small blocks. This method can find its place in many applications such as medical imaging, text recognition, video surveillance and wide area motion imagery (WAMI). We also explored the use of the detected blobs in feature-based image alignment. When large images are processed, our approach is 10 to over 20 times more memory efficient than the state-of-the-art hardware implementation of MSER.
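A minimal stdlib-Python sketch of the block-parallel idea follows, with a plain threshold-and-label detector standing in for MSER (an assumption made for brevity; MSER itself is more involved). Each block touches only its own pixels, so blocks can be dispatched to parallel workers; only blobs crossing block borders need the multiresolution pass the abstract mentions.

```python
from collections import deque

def blobs_in_block(img, r0, c0, rows, cols, thresh=1):
    """Connected components (4-connectivity) of pixels >= thresh inside
    one block of the image.  A simple threshold+labeling detector stands
    in here for the MSER detector used in the paper."""
    seen = set()
    blobs = []
    for r in range(r0, r0 + rows):
        for c in range(c0, c0 + cols):
            if img[r][c] >= thresh and (r, c) not in seen:
                blob, q = [], deque([(r, c)])
                seen.add((r, c))
                while q:                      # BFS flood fill in the block
                    y, x = q.popleft()
                    blob.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if (r0 <= ny < r0 + rows and c0 <= nx < c0 + cols
                                and img[ny][nx] >= thresh
                                and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            q.append((ny, nx))
                blobs.append(blob)
    return blobs

# A 4x4 image split into two 4x2 blocks, processed independently.
img = [[0, 0, 0, 0],
       [0, 1, 1, 0],
       [0, 1, 0, 0],
       [0, 0, 0, 1]]
left = blobs_in_block(img, 0, 0, 4, 2)    # one blob in columns 0-1
right = blobs_in_block(img, 0, 2, 4, 2)   # two blobs in columns 2-3
```

Note that the blob spanning columns 1-2 is split between the blocks; recovering such boundary-crossing blobs is exactly what the paper's multiresolution stage is for.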
Large-time asymptotic behaviour of solutions of non-linear Sobolev-type equations
International Nuclear Information System (INIS)
Kaikina, Elena I; Naumkin, Pavel I; Shishmarev, Il'ya A
2009-01-01
The large-time asymptotic behaviour of solutions of the Cauchy problem is investigated for a non-linear Sobolev-type equation with dissipation. For small initial data the approach taken is based on a detailed analysis of the Green's function of the linear problem and the use of the contraction mapping method. The case of large initial data is also closely considered. In the supercritical case the asymptotic formulae are quasi-linear. The asymptotic behaviour of solutions of a non-linear Sobolev-type equation with a critical non-linearity of the non-convective kind differs by a logarithmic correction term from the behaviour of solutions of the corresponding linear equation. For a critical convective non-linearity, as well as for a subcritical non-convective non-linearity it is proved that the leading term of the asymptotic expression for large times is a self-similar solution. For Sobolev equations with convective non-linearity the asymptotic behaviour of solutions in the subcritical case is the product of a rarefaction wave and a shock wave. Bibliography: 84 titles.
Mean time for the development of large workloads and large queue lengths in the GI/G/1 queue
Directory of Open Access Journals (Sweden)
Charles Knessl
1996-01-01
We consider the GI/G/1 queue described by either the workload U(t) (unfinished work) or the number of customers N(t) in the system. We compute the mean time until U(t) exceeds the level K, and also the mean time until N(t) reaches N0. For the M/G/1 and GI/M/1 models, we obtain exact contour integral representations for these mean first passage times. We then compute the mean times asymptotically, as K and N0 → ∞, by evaluating these contour integrals. For the general GI/G/1 model, we obtain asymptotic results by a singular perturbation analysis of the appropriate backward Kolmogorov equation(s). Numerical comparisons show that the asymptotic formulas are very accurate even for moderate values of K and N0.
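The first passage time studied above is easy to estimate by simulation in the simplest special case, M/M/1. The following stdlib-Python sketch is our illustration (the rates, levels and run count are arbitrary choices, and the simulation replaces the paper's analytic contour-integral results); it exploits the fact that U(t) only increases at arrival epochs, so exceedance of K can only occur there.

```python
import random

def mean_time_to_exceed(lam, mu, K, runs=2000, seed=1):
    """Monte Carlo estimate of the mean first-passage time until the
    M/M/1 workload U(t) exceeds level K: Poisson(lam) arrivals, exp(mu)
    service demands, unit drain rate, reflection at zero."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        t, u = 0.0, 0.0
        while True:
            gap = rng.expovariate(lam)     # time until the next arrival
            u = max(0.0, u - gap)          # workload drains at rate 1
            t += gap
            u += rng.expovariate(mu)       # service demand of the arrival
            if u > K:                      # first passage above K
                break
        total += t
    return total / runs

# For a stable queue (lam < mu) the mean passage time grows rapidly
# (roughly exponentially) in K, as the asymptotic analysis predicts.
t_small = mean_time_to_exceed(0.5, 1.0, K=2.0)
t_large = mean_time_to_exceed(0.5, 1.0, K=5.0)
```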
Time domain calculation of connector loads of a very large floating structure
Gu, Jiayang; Wu, Jie; Qi, Enrong; Guan, Yifeng; Yuan, Yubo
2015-06-01
Loads generated after an air crash, ship collision, and other accidents may destroy very large floating structures (VLFSs) and create additional connector loads. In this study, the combined effects of ship collision and wave loads are considered to establish motion differential equations for a multi-body VLFS. A time domain calculation method is proposed to calculate the connector load of the VLFS in waves. The Longuet-Higgins model is employed to simulate the stochastic wave load. Fluid force and hydrodynamic coefficients are obtained with DNV Sesam software. The motion differential equation is solved by the time domain method after the frequency domain hydrodynamic coefficients are converted into the memory functions of the time domain motion differential equation. As a result of the combined action of wave and impact loads, high-frequency oscillation is observed in the time history curve of the connector load. At wave directions of 0° and 75°, the regularities of the time history curves of the connector loads in different directions are similar, and the connector loads of C1 and C2 in the X direction are the largest. An oscillation load is observed in the connector in the Y direction at a wave direction of 75° but not at 0°. This paper presents a time domain calculation method for connector loads to provide a reference for the future development of Chinese VLFSs.
Time Discounting and Credit Market Access in a Large-Scale Cash Transfer Programme
Handa, Sudhanshu; Martorano, Bruno; Halpern, Carolyn; Pettifor, Audrey; Thirumurthy, Harsha
2017-01-01
Time discounting is thought to influence decision-making in almost every sphere of life, including personal finances, diet, exercise and sexual behavior. In this article we provide evidence on whether a national poverty alleviation program in Kenya can affect inter-temporal decisions. We administered a preferences module as part of a large-scale impact evaluation of the Kenyan Government's Cash Transfer for Orphans and Vulnerable Children. Four years into the program we find that individuals in the treatment group are only marginally more likely to wait for future money, due in part to the erosion of the value of the transfer by inflation. However, among the poorest households, for whom the value of the transfer is still relatively large, we find significant program effects on the propensity to wait. We also find strong program effects among those who have access to credit markets, though the program itself does not improve access to credit. PMID:28260842
Very Large Inflammatory Odontogenic Cyst with Origin on a Single Long Time Traumatized Lower Incisor
Freitas, Filipe; Andre, Saudade; Moreira, Andre; Carames, Joao
2015-01-01
One of the consequences of traumatic injuries is the chance that aseptic pulp necrosis occurs, which in time may become infected and give origin to periapical pathosis. Although apical granulomas and cysts are a common condition, their appearance as an extremely large radiolucent image is a rare finding. Differential diagnosis with other radiographically similar pathologies, such as keratocystic odontogenic tumour or unicystic ameloblastoma, is mandatory. The purpose of this paper is to report a very large radicular cyst caused by a single mandibular incisor traumatized long before, in a 60-year-old male. Medical and clinical histories were obtained, radiographic and cone beam CT examinations were performed, and an initial incisional biopsy was done. The final decision was to perform surgical enucleation of the lesion, 51.4 mm in length. The enucleated tissue biopsy analysis rendered the diagnosis of an inflammatory odontogenic cyst. A 2-year follow-up showed complete bone recovery. PMID:26393219
Timing of Formal Phase Safety Reviews for Large-Scale Integrated Hazard Analysis
Massie, Michael J.; Morris, A. Terry
2010-01-01
Integrated hazard analysis (IHA) is a process used to identify and control unacceptable risk. As such, it does not occur in a vacuum. IHA approaches must be tailored to fit the system being analyzed. Physical, resource, organizational and temporal constraints on large-scale integrated systems impose additional direct or derived requirements on the IHA. The timing and interaction between engineering and safety organizations can provide either benefits or hindrances to the overall end product. The traditional approach for formal phase safety review timing and content, which generally works well for small- to moderate-scale systems, does not work well for very large-scale integrated systems. This paper proposes a modified approach to timing and content of formal phase safety reviews for IHA. Details of the tailoring process for IHA will describe how to avoid temporary disconnects in major milestone reviews and how to maintain a cohesive end-to-end integration story particularly for systems where the integrator inherently has little to no insight into lower level systems. The proposal has the advantage of allowing the hazard analysis development process to occur as technical data normally matures.
Large deviation estimates for exceedance times of perpetuity sequences and their dual processes
DEFF Research Database (Denmark)
Buraczewski, Dariusz; Collamore, Jeffrey F.; Damek, Ewa
2016-01-01
In a variety of problems in pure and applied probability, it is relevant to study the large exceedance probabilities of the perpetuity sequence $Y_n := B_1 + A_1 B_2 + \cdots + (A_1 \cdots A_{n-1}) B_n$, where $(A_i,B_i) \subset (0,\infty) \times \mathbb{R}$. Estimates for the stationary tail distribution of $\{ Y_n \}$ have been developed in the seminal papers of Kesten (1973) and Goldie (1991). Specifically, it is well known that if $M := \sup_n Y_n$, then ${\mathbb P} \left\{ M > u \right\} \sim {\cal C}_M u^{-\xi}$ as $u \to \infty$. While much attention has been focused on extending such estimates, ... the first-passage-time exceedance probabilities of $\{ M_n^\ast \}$ are studied, yielding a new result concerning the convergence of $\{ M_n^\ast \}$ to its stationary distribution.
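The perpetuity sequence above is easy to simulate. The following stdlib-Python sketch uses an illustrative choice of the $(A_i, B_i)$ distributions (ours, not the paper's general setting) in the contractive regime $E[\log A] < 0$, where $Y_n$ converges in distribution and the Kesten-Goldie power tail shows up in the upper quantiles.

```python
import random

def perpetuity_sample(n, rng):
    """One draw of Y_n = B_1 + A_1 B_2 + ... + (A_1...A_{n-1}) B_n with
    (A_i, B_i) i.i.d.; here A ~ Uniform(0, 1.5) (so E[log A] < 0) and
    B ~ Uniform(0, 1) -- an illustrative choice only."""
    y, prod = 0.0, 1.0
    for _ in range(n):
        a, b = rng.uniform(0.0, 1.5), rng.random()
        y += prod * b      # contribution (A_1...A_{i-1}) * B_i
        prod *= a          # running product A_1 * ... * A_i
    return y

rng = random.Random(7)
samples = sorted(perpetuity_sample(100, rng) for _ in range(5000))
# Kesten/Goldie theory predicts a power tail P(Y > u) ~ C * u**(-xi),
# visible as a slowly decaying upper tail relative to the bulk.
median = samples[2500]
p99 = samples[4950]
```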
Solution of large nonlinear time-dependent problems using reduced coordinates
International Nuclear Information System (INIS)
Mish, K.D.
1987-01-01
This research is concerned with the idea of reducing a large time-dependent problem, such as one obtained from a finite-element discretization, down to a more manageable size while preserving the most-important physical behavior of the solution. This reduction process is motivated by the concept of a projection operator on a Hilbert Space, and leads to the Lanczos Algorithm for generation of approximate eigenvectors of a large symmetric matrix. The Lanczos Algorithm is then used to develop a reduced form of the spatial component of a time-dependent problem. The solution of the remaining temporal part of the problem is considered from the standpoint of numerical-integration schemes in the time domain. All of these theoretical results are combined to motivate the proposed reduced coordinate algorithm. This algorithm is then developed, discussed, and compared to related methods from the mechanics literature. The proposed reduced coordinate method is then applied to the solution of some representative problems in mechanics. The results of these problems are discussed, conclusions are drawn, and suggestions are made for related future research
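For readers unfamiliar with the Lanczos Algorithm invoked above, here is a minimal stdlib-Python sketch of the basic three-term recurrence. The example matrix is our own, and a production code (as the reduced-coordinate method would require) adds reorthogonalization, which this sketch omits.

```python
import math

def lanczos(matvec, v0, m):
    """Lanczos tridiagonalization of a symmetric operator given only its
    matrix-vector product.  Returns the diagonal (alpha) and off-diagonal
    (beta) of the m x m tridiagonal T = Q^T A Q; the eigenpairs of T
    approximate the extreme eigenpairs of A."""
    n = len(v0)
    norm = math.sqrt(sum(x * x for x in v0))
    q = [x / norm for x in v0]
    q_prev = [0.0] * n
    alpha, beta = [], []
    b = 0.0
    for _ in range(m):
        w = matvec(q)
        a = sum(wi * qi for wi, qi in zip(w, q))
        alpha.append(a)
        # three-term recurrence: subtract projections on q and q_prev
        w = [wi - a * qi - b * pi for wi, qi, pi in zip(w, q, q_prev)]
        b = math.sqrt(sum(x * x for x in w))
        if b < 1e-12:        # invariant subspace found: breakdown
            break
        beta.append(b)
        q_prev, q = q, [x / b for x in w]
    return alpha, beta

A = [[2.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 4.0]]
mv = lambda v: [sum(A[i][j] * v[j] for j in range(3)) for i in range(3)]
# Full-length Lanczos (m = n) is an orthogonal similarity, so the trace
# is preserved: sum(alpha) ~ 2 + 3 + 4 = 9.
alpha, beta = lanczos(mv, [1.0, 1.0, 1.0], 3)
```

In the reduced-coordinate setting, one keeps m much smaller than n and projects the time-dependent problem onto the resulting approximate eigenvectors.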
Desvillettes, Laurent; Fellner, Klemens
2010-01-01
We study a continuous coagulation-fragmentation model with constant kernels for reacting polymers (see [M. Aizenman and T. Bak, Comm. Math. Phys., 65 (1979), pp. 203-230]). The polymers are set to diffuse within a smooth bounded one
The cosmological constant problem
International Nuclear Information System (INIS)
Dolgov, A.D.
1989-05-01
A review of the cosmological term problem is presented. The baby universe model and the compensating field model are discussed. The importance of more accurate data on the Hubble constant and the age of the Universe is stressed. 18 refs.
Furlong, K. P.; Benz, H.; Hayes, G. P.; Villasenor, A.
2010-12-01
Although most would agree that the occurrence of natural disaster events such as earthquakes, volcanic eruptions, and floods can provide effective learning opportunities for natural hazards-based courses, implementing compelling materials into the large-enrollment classroom environment can be difficult. These natural hazard events derive much of their learning potential from their real-time nature, and in the modern 24/7 news-cycle where all but the most devastating events are quickly out of the public eye, the shelf life for an event is quite limited. To maximize the learning potential of these events requires that both authoritative information be available and course materials be generated as the event unfolds. Although many events such as hurricanes, flooding, and volcanic eruptions provide some precursory warnings, and thus one can prepare background materials to place the main event into context, earthquakes present a particularly confounding situation of providing no warning, but where context is critical to student learning. Attempting to implement real-time materials into large enrollment classes faces the additional hindrance of limited internet access (for students) in most lecture classrooms. In Earth 101 Natural Disasters: Hollywood vs Reality, taught as a large enrollment (150+ students) general education course at Penn State, we are collaborating with the USGS’s National Earthquake Information Center (NEIC) to develop efficient means to incorporate their real-time products into learning activities in the lecture hall environment. Over time (and numerous events) we have developed a template for presenting USGS-produced real-time information in lecture mode. The event-specific materials can be quickly incorporated and updated, along with key contextual materials, to provide students with up-to-the-minute current information. In addition, we have also developed in-class activities, such as student determination of population exposure to severe ground
Large Time Behavior for Weak Solutions of the 3D Globally Modified Navier-Stokes Equations
Directory of Open Access Journals (Sweden)
Junbai Ren
2014-01-01
Full Text Available This paper is concerned with the large time behavior of the weak solutions for the three-dimensional globally modified Navier-Stokes equations. With the aid of energy methods and auxiliary decay estimates together with $L^p$-$L^q$ estimates of the heat semigroup, we derive the optimal upper and lower decay estimates of the weak solutions for the globally modified Navier-Stokes equations as $C_1(1+t)^{-3/4} \le \|u\|_{L^2} \le C_2(1+t)^{-3/4}$ for $t > 1$. The decay rate is optimal since it coincides with that of the heat equation.
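The claimed optimality of the $(1+t)^{-3/4}$ rate can be checked against the three-dimensional heat semigroup, for which a standard Fourier-side estimate (a textbook computation, not taken from the paper) gives, for $u_0 \in L^1(\mathbb{R}^3)$:

```latex
\|e^{t\Delta}u_0\|_{L^2}^2
  = (2\pi)^{-3}\int_{\mathbb{R}^3} e^{-2t|\xi|^2}\,|\hat{u}_0(\xi)|^2\,d\xi
  \le (2\pi)^{-3}\,\|\hat{u}_0\|_{L^\infty}^2\int_{\mathbb{R}^3} e^{-2t|\xi|^2}\,d\xi
  \le C\,\|u_0\|_{L^1}^2\,t^{-3/2},
```

so that $\|e^{t\Delta}u_0\|_{L^2} \le C\,\|u_0\|_{L^1}\,t^{-3/4}$, the same decay exponent as in the Navier-Stokes bound above.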
Mining Outlier Data in Mobile Internet-Based Large Real-Time Databases
Directory of Open Access Journals (Sweden)
Xin Liu
2018-01-01
Full Text Available Mining outlier data guarantees access security and data scheduling of parallel databases and maintains high-performance operation of real-time databases. Traditional mining methods generate abundant interference data with reduced accuracy, efficiency, and stability, causing severe deficiencies. This paper proposes a new outlier data mining method, which is used to analyze real-time data features, obtain magnitude spectra models of outlier data, establish a decision-tree information chain transmission model for outlier data in the mobile Internet, obtain the information flow of internal outlier data in the information chain of a large real-time database, and cluster data. Based on local characteristic time-scale parameters of the information flow, the phase position features of the outlier data before filtering are obtained; the decision-tree outlier-classification feature-filtering algorithm is adopted to acquire signals for analysis and instant amplitude and to achieve the phase-frequency characteristics of outlier data. Wavelet transform threshold denoising is combined with signal denoising to analyze data offset, to correct the detection filter model, and to realize outlier data mining. The simulation suggests that the method detects the characteristic outlier data feature response distribution, reduces response time, iteration frequency, and mining error rate, improves mining adaptation and coverage, and shows good mining outcomes.
TORCH: A Large-Area Detector for Precision Time-of-Flight Measurements at LHCb
Harnew, N
2012-01-01
The TORCH (Time Of internally Reflected CHerenkov light) is an innovative high-precision time-of-flight detector which is suitable for large areas, up to tens of square metres, and is being developed for the upgraded LHCb experiment. The TORCH provides a time-of-flight measurement from the imaging of photons emitted in a 1 cm thick quartz radiator, based on the Cherenkov principle. The photons propagate by total internal reflection to the edge of the quartz plane and are then focused onto an array of Micro-Channel Plate (MCP) photon detectors at the periphery of the detector. The goal is to achieve a timing resolution of 15 ps per particle over a flight distance of 10 m. This will allow particle identification in the challenging momentum region up to 20 GeV/c. Commercial MCPs have been tested in the laboratory and demonstrate the required timing precision. An electronics readout system based on the NINO and HPTDC chipset is being developed to evaluate an 8×8 channel TORCH prototype. The simulated performance...
Event processing time prediction at the CMS experiment of the Large Hadron Collider
International Nuclear Information System (INIS)
Cury, Samir; Gutsche, Oliver; Kcira, Dorian
2014-01-01
The physics event reconstruction is one of the biggest challenges for the computing of the LHC experiments. Among the different tasks that the computing systems of the CMS experiment perform, the reconstruction takes most of the available CPU resources. The reconstruction time of single collisions varies according to event complexity. Measurements were done in order to determine this correlation quantitatively, creating means to predict it based on the data-taking conditions of the input samples. Currently the data processing system splits tasks into groups with the same number of collisions and does not account for variations in the processing time. These variations can be large and can lead to a considerable increase in the time it takes for CMS workflows to finish. The goal of this study was to use estimates of processing time to split the workflow into jobs more efficiently. By considering the CPU time needed for each job, the spread of the job-length distribution in a workflow is reduced.
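The splitting idea can be sketched in a few lines: instead of grouping a fixed number of collisions per job, group work units until an estimated CPU-time budget is reached. This is an illustrative greedy packer, not the actual CMS workflow-management code; all names and numbers are invented.

```python
# Hedged sketch: split work units (e.g. luminosity sections) into jobs by
# estimated CPU time rather than by a fixed unit count.
def split_by_time(est_times, target_seconds):
    """Greedy split: start a new job when adding the next unit would
    exceed the target, keeping job lengths roughly uniform."""
    jobs, current, total = [], [], 0.0
    for t in est_times:
        if current and total + t > target_seconds:
            jobs.append(current)
            current, total = [], 0.0
        current.append(t)
        total += t
    if current:
        jobs.append(current)
    return jobs

# Heterogeneous per-unit estimates reflecting varying event complexity.
times = [30, 300, 45, 280, 60, 90, 310, 20]
jobs = split_by_time(times, target_seconds=400)
print([sum(j) for j in jobs])
```

A job can still exceed the target if a single unit does, but job lengths are now driven by the time estimates rather than by unit counts.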
On the gravitational constant change
International Nuclear Information System (INIS)
Milyukov, V.K.
1986-01-01
The present-day viewpoint on the problem of the invariability of the gravitational constant G is briefly presented. The methods and results of checking the dependence of G on the nature of substance (tests of the equivalence principle), on distance (tests of Newton's law of gravity) and on time (cosmological experiments) are presented. It is pointed out that all experiments performed so far give no reason to doubt the constancy of G in space and time, or its independence of the nature of the substance.
Real-time graphic display system for ROSA-V Large Scale Test Facility
International Nuclear Information System (INIS)
Kondo, Masaya; Anoda, Yoshinari; Osaki, Hideki; Kukita, Yutaka; Takigawa, Yoshio.
1993-11-01
A real-time graphic display system was developed for the ROSA-V Large Scale Test Facility (LSTF) experiments simulating accident management measures for prevention of severe core damage in pressurized water reactors (PWRs). The system works on an IBM workstation (Power Station RS/6000 model 560) and accommodates 512 channels out of about 2500 total measurements in the LSTF. It has three major functions: (a) displaying the coolant inventory distribution in the facility primary and secondary systems; (b) displaying the measured quantities at desired locations in the facility; and (c) displaying the time histories of measured quantities. The coolant inventory distribution is derived from differential pressure measurements along vertical sections and gamma-ray densitometer measurements for horizontal legs. The color display indicates liquid subcooling calculated from pressure and temperature at individual locations. (author)
Real-world-time simulation of memory consolidation in a large-scale cerebellar model
Directory of Open Access Journals (Sweden)
Masato eGosui
2016-03-01
Full Text Available We report development of a large-scale spiking network model of the cerebellum composed of more than 1 million neurons. The model is implemented on graphics processing units (GPUs), which are dedicated hardware for parallel computing. Using 4 GPUs simultaneously, we achieve real-time simulation, in which computer simulation of cerebellar activity for 1 sec completes within 1 sec in the real-world time, with a temporal resolution of 1 msec. This allows us to carry out a very long-term computer simulation of cerebellar activity in a practical time with millisecond temporal resolution. Using the model, we carry out computer simulation of long-term gain adaptation of optokinetic response (OKR) eye movements for 5 days, aimed at studying the neural mechanisms of posttraining memory consolidation. The simulation results are consistent with animal experiments and our theory of posttraining memory consolidation. These results suggest that real-time computing provides a useful means to study a very slow neural process such as memory consolidation in the brain.
International Nuclear Information System (INIS)
Jamali, J.; Aghajafari, R.; Moini, R.; Sadeghi, H.
2002-01-01
A time-domain approach is presented to calculate electromagnetic fields inside a large Electromagnetic Pulse (EMP) simulator. This type of EMP simulator is used for studying the effect of electromagnetic pulses on electrical apparatus in various structures such as vehicles, aeroplanes, etc. The simulator consists of three planar transmission lines. To solve the problem, we first model the metallic structure of the simulator as a grid of conducting wires. The numerical solution of the governing electric field integral equation is then obtained using the method of moments in the time domain. To demonstrate the accuracy of the model, we consider a typical EMP simulator. The comparison of our results with those obtained experimentally in the literature validates the model introduced in this paper.
Time-Efficient Cloning Attacks Identification in Large-Scale RFID Systems
Directory of Open Access Journals (Sweden)
Ju-min Zhao
2017-01-01
Full Text Available Radio Frequency Identification (RFID) is an emerging technology for electronic labeling of objects for the purpose of automatically identifying, categorizing, locating, and tracking the objects. But in their current form RFID systems are susceptible to cloning attacks that seriously threaten RFID applications but are hard to prevent. Existing protocols aim at detecting whether there are cloning attacks in single-reader RFID systems. In this paper, we investigate cloning attack identification in the multireader scenario and first propose a time-efficient protocol, called the time-efficient Cloning Attacks Identification Protocol (CAIP), to identify all cloned tags in multireader RFID systems. We evaluate the performance of CAIP through extensive simulations. The results show that CAIP can identify all the cloned tags in large-scale RFID systems fairly fast with the required accuracy.
Parallel Motion Simulation of Large-Scale Real-Time Crowd in a Hierarchical Environmental Model
Directory of Open Access Journals (Sweden)
Xin Wang
2012-01-01
Full Text Available This paper presents a parallel real-time crowd simulation method based on a hierarchical environmental model. A dynamical model of the complex environment should be constructed to simulate the state transition and propagation of individual motions. By modeling the virtual environment where virtual crowds reside, we employ different parallel methods on a topological layer, a path layer and a perceptual layer. We propose a parallel motion path matching method based on the path layer and a parallel crowd simulation method based on the perceptual layer. Large-scale real-time crowd simulation becomes possible with these methods. Numerical experiments are carried out to demonstrate the methods and results.
Ghosh, Soumen; Andersen, Amity; Gagliardi, Laura; Cramer, Christopher J; Govind, Niranjan
2017-09-12
We present an implementation of a time-dependent semiempirical method (INDO/S) in NWChem using real-time (RT) propagation to address, in principle, the entire spectrum of valence electronic excitations. Adopting this model, we study the UV/vis spectra of medium-sized systems such as P3B2 and f-coronene, and in addition much larger systems such as ubiquitin in the gas phase and the betanin chromophore in the presence of two explicit solvents (water and methanol). RT-INDO/S provides qualitatively and often quantitatively accurate results when compared with RT-TDDFT or experimental spectra. Even though we only consider the INDO/S Hamiltonian in this work, our implementation provides a framework for performing electron dynamics in large systems using semiempirical Hartree-Fock Hamiltonians in general.
Rosenkrantz, Andrew B; Liang, Yu; Duszak, Richard; Recht, Michael P
2017-08-01
Patients' willingness to travel farther distances for certain imaging services may reflect their perceptions of the degree of differentiation of such services. We compare patients' travel times for a range of imaging examinations performed across a large academic health system. We searched the NYU Langone Medical Center Enterprise Data Warehouse to identify 442,990 adult outpatient imaging examinations performed over a recent 3.5-year period. Geocoding software was used to estimate typical driving times from patients' residences to imaging facilities. Variation in travel times was assessed among examination types. The mean expected travel time was 29.2 ± 20.6 minutes, but this varied significantly across examination types. Travel times were shortest for ultrasound (26.8 ± 18.9) and longest for positron emission tomography-computed tomography (31.9 ± 21.5). For magnetic resonance imaging, travel times were shortest for musculoskeletal extremity (26.4 ± 19.2) and spine (28.6 ± 21.0) examinations and longest for prostate (35.9 ± 25.6) and breast (32.4 ± 22.3) examinations. For computed tomography, travel times were shortest for a range of screening examinations [colonography (25.5 ± 20.8), coronary artery calcium scoring (26.1 ± 19.2), and lung cancer screening (26.4 ± 14.9)] and longest for angiography (32.0 ± 22.6). For ultrasound, travel times were shortest for aortic aneurysm screening (22.3 ± 18.4) and longest for breast (30.1 ± 19.2) examinations. Overall, men (29.9 ± 21.6) had significantly longer travel times than women (27.8 ± 20.3); this difference persisted for each modality individually (p ≤ 0.006). Patients' willingness to travel longer times for certain imaging examination types (particularly breast and prostate imaging) supports the role of specialized services in combating potential commoditization of imaging services. Disparities in travel times by gender warrant further investigation.
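The core comparison, mean driving time grouped by examination type, amounts to a simple aggregation; the sketch below uses invented values, not the study's data.

```python
# Hedged sketch of the grouping step: mean travel time per examination type.
# Records and values are illustrative placeholders.
from collections import defaultdict

def mean_travel_time_by_exam(records):
    """records: iterable of (exam_type, minutes). Returns {exam_type: mean minutes}."""
    sums = defaultdict(lambda: [0.0, 0])
    for exam, minutes in records:
        sums[exam][0] += minutes
        sums[exam][1] += 1
    return {exam: total / n for exam, (total, n) in sums.items()}

records = [("ultrasound", 20), ("ultrasound", 30), ("PET-CT", 35), ("PET-CT", 29)]
print(mean_travel_time_by_exam(records))
```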
A Matter of Time: Faster Percolator Analysis via Efficient SVM Learning for Large-Scale Proteomics.
Halloran, John T; Rocke, David M
2018-05-04
Percolator is an important tool for greatly improving the results of a database search and subsequent downstream analysis. Using support vector machines (SVMs), Percolator recalibrates peptide-spectrum matches based on the learned decision boundary between targets and decoys. To improve analysis time for large-scale data sets, we update Percolator's SVM learning engine through software and algorithmic optimizations rather than heuristic approaches that necessitate the careful study of their impact on learned parameters across different search settings and data sets. We show that by optimizing Percolator's original learning algorithm, l2-SVM-MFN, large-scale SVM learning requires nearly only a third of the original runtime. Furthermore, we show that by employing the widely used Trust Region Newton (TRON) algorithm instead of l2-SVM-MFN, large-scale Percolator SVM learning is reduced to nearly only a fifth of the original runtime. Importantly, these speedups only affect the speed at which Percolator converges to a global solution and do not alter recalibration performance. The upgraded versions of both l2-SVM-MFN and TRON are optimized within the Percolator codebase for multithreaded and single-thread use and are available under Apache license at bitbucket.org/jthalloran/percolator_upgrade.
Direct Analysis in Real Time Mass Spectrometry for Characterization of Large Saccharides.
Ma, Huiying; Jiang, Qing; Dai, Diya; Li, Hongli; Bi, Wentao; Chen, David Da Yong
2018-03-06
Polysaccharide characterization poses the most difficult challenge to available analytical technologies compared to other types of biomolecules. Plant polysaccharides are reported to have numerous medicinal values, but their effect can be different based on the types of plants, and even regions of production and conditions of cultivation. However, the molecular basis of the differences of these polysaccharides is largely unknown. In this study, direct analysis in real time mass spectrometry (DART-MS) was used to generate polysaccharide fingerprints. Large saccharides can break down into characteristic small fragments in the DART source via pyrolysis, and the products are then detected by high resolution MS. Temperature was shown to be a crucial parameter for the decomposition of large polysaccharides. The general behavior of carbohydrates in DART-MS was also studied through the investigation of a number of mono- and oligosaccharide standards. The chemical formulas and putative ionic forms of the fragments were proposed based on accurate mass with less than 10 ppm mass errors. Multivariate data analysis shows the clear differentiation of different plant species. Intensities of marker ions compared among samples also showed obvious differences. The combination of DART-MS analysis and the mechanochemical extraction method used in this work demonstrates a simple, fast, and high throughput analytical protocol for the efficient evaluation of molecular features in plant polysaccharides.
International Nuclear Information System (INIS)
Hayden, C.C.; Chandler, D.W.
1995-01-01
Results are presented from femtosecond time-resolved coherent Raman experiments in which we excite and monitor vibrational coherence in gas-phase samples of benzene and 1,3,5-hexatriene. Different physical mechanisms for coherence decay are seen in these two molecules. In benzene, where the Raman polarizability is largely isotropic, the Q branch of the vibrational Raman spectrum is the primary feature excited. Molecules in different rotational states have different Q-branch transition frequencies due to vibration--rotation interaction. Thus, the macroscopic polarization that is observed in these experiments decays because it has many frequency components from molecules in different rotational states, and these frequency components go out of phase with each other. In 1,3,5-hexatriene, the Raman excitation produces molecules in a coherent superposition of rotational states, through (O, P, R, and S branch) transitions that are strong due to the large anisotropy of the Raman polarizability. The coherent superposition of rotational states corresponds to initially spatially oriented, vibrationally excited, molecules that are freely rotating. The rotation of molecules away from the initial orientation is primarily responsible for the coherence decay in this case. These experiments produce large (∼10% efficiency) Raman shifted signals with modest excitation pulse energies (10 μJ) demonstrating the feasibility of this approach for a variety of gas phase studies. copyright 1995 American Institute of Physics
Time to "go large" on biofilm research: advantages of an omics approach.
Azevedo, Nuno F; Lopes, Susana P; Keevil, Charles W; Pereira, Maria O; Vieira, Maria J
2009-04-01
In nature, the biofilm mode of life is of great importance in the cell cycle for many microorganisms. Perhaps because of biofilm complexity and variability, the characterization of a given microbial system, in terms of biofilm formation potential, structure and associated physiological activity, in a large-scale, standardized and systematic manner has been hindered by the absence of high-throughput methods. This outlook is now starting to change as new methods involving the utilization of microtiter-plates and automated spectrophotometry and microscopy systems are being developed to perform large-scale testing of microbial biofilms. Here, we evaluate if the time is ripe to start an integrated omics approach, i.e., the generation and interrogation of large datasets, to biofilms--"biofomics". This omics approach would bring much needed insight into how biofilm formation ability is affected by a number of environmental, physiological and mutational factors and how these factors interplay between themselves in a standardized manner. This could then lead to the creation of a database where biofilm signatures are identified and interrogated. Nevertheless, and before embarking on such an enterprise, the selection of a versatile, robust, high-throughput biofilm growing device and of appropriate methods for biofilm analysis will have to be performed. Whether such device and analytical methods are already available, particularly for complex heterotrophic biofilms is, however, very debatable.
A quadri-constant fraction discriminator
International Nuclear Information System (INIS)
Wang Wei; Gu Zhongdao
1992-01-01
A quad Constant Fraction (Amplitude and Rise Time Compensation) Discriminator circuit is described, which is based on the ECL high-speed dual comparator AD 9687. The CFD (ARCD) is of the constant fraction timing type (the amplitude and rise time compensation timing type), employing a leading edge discriminator to eliminate false triggers caused by noise. A timing walk measurement indicates a walk of less than ±150 ps from -50 mV to -5 V.
A study of residence time distribution using radiotracer technique in the large scale plant facility
Wetchagarun, S.; Tippayakul, C.; Petchrak, A.; Sukrod, K.; Khoonkamjorn, P.
2017-06-01
As the demand for troubleshooting of large industrial plants increases, radiotracer techniques, which can provide fast, online and effective detection of plant problems, have been continually developed. One of the good potential applications of radiotracers for troubleshooting in a process plant is the analysis of the Residence Time Distribution (RTD). In this paper, the study of RTD in a large scale plant facility using a radiotracer technique is presented. The objective of this work is to gain experience with RTD analysis using the radiotracer technique in a “larger than laboratory” scale plant setup comparable to a real industrial application. The experiment was carried out at the sedimentation tank in the water treatment facility of the Thailand Institute of Nuclear Technology (Public Organization). Br-82 was selected for this work due to its chemical properties, its suitable half-life and its on-site availability. NH4Br in the form of aqueous solution was injected into the system as the radiotracer. Six NaI detectors were placed along the pipelines and at the tank in order to determine the RTD of the system. The RTD and the Mean Residence Time (MRT) of the tank were analysed and calculated from the measured data. The experience and knowledge attained from this study are important for extending this technique to industrial facilities in the future.
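The RTD and MRT computed from detector time series follow the standard definitions $E(t) = C(t)/\int C\,dt$ and $\mathrm{MRT} = \int t\,E(t)\,dt$; the following sketch uses invented count data, not the experiment's measurements.

```python
# Hedged sketch: Residence Time Distribution E(t) and Mean Residence Time (MRT)
# from a detector's background-corrected tracer count time series.
def rtd_and_mrt(times, counts):
    """E(t) = C(t)/integral(C dt); MRT = integral(t*E(t) dt), trapezoidal rule."""
    def trapz(ys):
        return sum((ys[i] + ys[i + 1]) / 2 * (times[i + 1] - times[i])
                   for i in range(len(times) - 1))
    area = trapz(counts)                       # normalizing constant
    e = [c / area for c in counts]             # normalized RTD curve
    mrt = trapz([t * ei for t, ei in zip(times, e)])
    return e, mrt

times = [0, 1, 2, 3, 4, 5]      # minutes after injection (illustrative)
counts = [0, 4, 8, 4, 2, 0]     # illustrative detector counts
e, mrt = rtd_and_mrt(times, counts)
print(round(mrt, 3))
```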
Research on resistance characteristics of YBCO tape under short-time DC large current impact
Zhang, Zhifeng; Yang, Jiabin; Qiu, Qingquan; Zhang, Guomin; Lin, Liangzhen
2017-06-01
Research on the resistance characteristics of YBCO tape under short-time DC large current impact is the foundation for developing a DC superconducting fault current limiter (SFCL) for voltage source converter-based high voltage direct current systems (VSC-HVDC), which is one of the valid approaches to the problems of renewable energy integration. An SFCL can limit DC short-circuit currents and enhance the interrupting capability of DC circuit breakers. In this paper, under short-time DC large current impacts, the resistance features of bare YBCO tape are studied to find the resistance-temperature change rule and the maximum impact current. The influence of insulation on the resistance-temperature characteristics of YBCO tape is studied by comparison tests of bare and insulated tape at 77 K. The influence of operating temperature on the tape is also studied under subcooled liquid nitrogen conditions. For the current impact security of YBCO tape, the critical current degradation and peak temperature are analyzed and used as judgment standards. The test results are helpful for developing SFCLs for VSC-HVDC.
Piloted simulator study of allowable time delays in large-airplane response
Grantham, William D.
1987-01-01
A piloted simulation was performed to determine the permissible time delay and phase shift in the flight control system of a specific large transport-type airplane. The study was conducted with a six-degree-of-freedom ground-based simulator and a math model similar to an advanced wide-body jet transport. Time delays in discrete and lagged form were incorporated into the longitudinal, lateral, and directional control systems of the airplane. Three experienced pilots flew simulated approaches and landings with random localizer and glide slope offsets during instrument tracking as their principal evaluation task. Results of the present study suggest a level 1 (satisfactory) handling qualities limit for the effective time delay of 0.15 sec in both the pitch and roll axes, as opposed to the 0.10-sec limit of the present specification (MIL-F-8785C) for both axes. Also, the present results suggest a level 2 (acceptable but unsatisfactory) handling qualities limit for an effective time delay of 0.82 sec and 0.57 sec for the pitch and roll axes, respectively, as opposed to the 0.20 sec of the present specifications for both axes. In the area of phase shift between cockpit input and control surface deflection, the results of this study, flown in turbulent air, suggest less severe phase shift limitations for the approach and landing task: approximately 50 deg in pitch and 40 deg in roll, as opposed to the 15 deg of the present specifications for both axes.
Ogawa, K.; Isobe, M.; Nishitani, T.; Murakami, S.; Seki, R.; Nakata, M.; Takada, E.; Kawase, H.; Pu, N.; LHD Experiment Group
2018-03-01
Time-resolved measurement of triton burnup is performed with a scintillating fiber detector system in the deuterium operation of the Large Helical Device. The scintillating fiber detector system is composed of a detector head consisting of 109 scintillating fibers, each 1 mm in diameter and 100 mm long, embedded in an aluminum substrate; a magnetic-field-resistant photomultiplier tube; and a data acquisition system equipped with a 1 GHz sampling-rate analog-to-digital converter and a field programmable gate array. A discrimination level of 150 mV was set to extract the pulse signals induced by 14 MeV neutrons according to the pulse height spectra obtained in the experiment. The decay time of the 14 MeV neutron emission rate after the neutral beam is turned off is measured by the scintillating fiber detector. The decay time is consistent with the decay time of the total neutron emission rate, corresponding to the 14 MeV neutrons measured by the neutron flux monitor, as expected. Evaluation of the diffusion coefficient is conducted using a simple classical slowing-down model (the FBURN code). The diffusion coefficient of triton is evaluated to be less than 0.2 m2 s-1.
Rapid Large Earthquake and Run-up Characterization in Quasi Real Time
Bravo, F. J.; Riquelme, S.; Koch, P.; Cararo, S.
2017-12-01
Several tests in quasi real time have been conducted by the rapid response group at the CSN (National Seismological Center) to characterize earthquakes in real time. These methods are known for their robustness and reliability in creating Finite Fault Models (FFMs). The W-phase FFM inversion, the wavelet-domain FFM and the body-wave FFM have been implemented in real time at the CSN; all these algorithms run automatically, triggered by the W-phase point source inversion. Dimensions (length and width) are predefined by adopting scaling laws for earthquakes in subduction zones. We tested this scheme on the last four major earthquakes that occurred in Chile: the 2010 Mw 8.8 Maule earthquake, the 2014 Mw 8.2 Iquique earthquake, the 2015 Mw 8.3 Illapel earthquake and the Mw 7.6 Melinka earthquake. We obtain many solutions as time elapses; for each of them we calculate the run-up using an analytical formula. Our results are in agreement with FFMs already accepted by the scientific community, as well as with run-up observations in the field.
Natural Time and Nowcasting Earthquakes: Are Large Global Earthquakes Temporally Clustered?
Luginbuhl, Molly; Rundle, John B.; Turcotte, Donald L.
2018-02-01
The objective of this paper is to analyze the temporal clustering of large global earthquakes with respect to natural time, or interevent count, as opposed to regular clock time. To do this, we use two techniques: (1) nowcasting, a new method of statistically classifying seismicity and seismic risk, and (2) time series analysis of interevent counts. We chose the sequences of M_{λ } ≥ 7.0 and M_{λ } ≥ 8.0 earthquakes from the global centroid moment tensor (CMT) catalog from 2004 to 2016 for analysis. A significant number of these earthquakes will be aftershocks of the largest events, but no satisfactory method of declustering the aftershocks in clock time is available. A major advantage of using natural time is that it eliminates the need for declustering aftershocks. The event count we utilize is the number of small earthquakes that occur between large earthquakes. The small earthquake magnitude is chosen to be as small as possible, such that the catalog is still complete based on the Gutenberg-Richter statistics. For the CMT catalog, starting in 2004, we found the completeness magnitude to be M_{σ } ≥ 5.1. For the nowcasting method, the cumulative probability distribution of these interevent counts is obtained. We quantify the distribution using the exponent, β, of the best fitting Weibull distribution; β = 1 for a random (exponential) distribution. We considered 197 earthquakes with M_{λ } ≥ 7.0 and found β = 0.83 ± 0.08. We considered 15 earthquakes with M_{λ } ≥ 8.0, but this number was considered too small to generate a meaningful distribution. For comparison, we generated synthetic catalogs of earthquakes that occur randomly with the Gutenberg-Richter frequency-magnitude statistics. We considered a synthetic catalog of 1.97 × 10^5 M_{λ } ≥ 7.0 earthquakes and found β = 0.99 ± 0.01. The random catalog converted to natural time was also random. We then generated 1.5 × 10^4 synthetic catalogs with 197 M_{λ } ≥ 7.0 in each catalog and
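The Weibull exponent $\beta$ quoted above can be estimated from interevent counts by maximum likelihood; the sketch below uses a robust bisection on the monotone profile-likelihood score and synthetic exponential data (for which $\beta = 1$), since the CMT counts themselves are not reproduced here.

```python
# Hedged sketch: maximum-likelihood estimate of the Weibull shape beta from a
# sample, via bisection on the monotone profile score; beta near 1 indicates a
# random (exponential) interevent-count sequence. Data here are synthetic.
import math
import random

def weibull_shape(xs, lo=0.01, hi=20.0, iters=80):
    logs = [math.log(x) for x in xs]
    mean_log = sum(logs) / len(logs)
    def score(b):
        s0 = sum(x ** b for x in xs)
        s1 = sum((x ** b) * lx for x, lx in zip(xs, logs))
        return s1 / s0 - 1.0 / b - mean_log   # zero at the MLE; increasing in b
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if score(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

random.seed(0)
sample = [random.expovariate(1.0) for _ in range(5000)]  # exponential: beta near 1
beta = weibull_shape(sample)
print(round(beta, 2))
```

Applied to real natural-time counts, a fitted $\beta$ below 1 (as in the paper's 0.83 ± 0.08) indicates temporal clustering relative to a random sequence.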
Hypoattenuation on CTA images with large vessel occlusion: timing affects conspicuity
Energy Technology Data Exchange (ETDEWEB)
Dave, Prasham [University of Ottawa, MD Program, Faculty of Medicine, Ottawa, ON (Canada); Lum, Cheemun; Thornhill, Rebecca; Chakraborty, Santanu [University of Ottawa, Department of Radiology, Ottawa, ON (Canada); Ottawa Hospital Research Institute, Ottawa, ON (Canada); Dowlatshahi, Dar [Ottawa Hospital Research Institute, Ottawa, ON (Canada); University of Ottawa, Division of Neurology, Department of Medicine, Ottawa, ON (Canada)
2017-05-15
Parenchymal hypoattenuation distal to occlusions on CTA source images (CTASI) is perceived because of the differences in tissue contrast compared to normally perfused tissue. This difference in conspicuity can be measured objectively. We evaluated the effect of contrast timing on the conspicuity of ischemic areas. We retrospectively collected consecutive patients between 2012 and 2014 with large vessel occlusions who had dynamic multiphase CT angiography (CTA) and CT perfusion (CTP). We identified areas of low cerebral blood volume on CTP maps and drew the region of interest (ROI) on the corresponding CTASI. A second ROI was placed in an area of normally perfused tissue. We evaluated conspicuity by comparing the absolute and relative change in attenuation between ischemic and normally perfused tissue over seven time points. The median absolute and relative conspicuity was greatest at the peak arterial (8.6 HU (IQR 5.1-13.9); 1.15 (1.09-1.26)), notch (9.4 HU (5.8-14.9); 1.17 (1.10-1.27)), and peak venous phases (7.0 HU (3.1-12.7); 1.13 (1.05-1.23)) compared to other portions of the time-attenuation curve (TAC). There was a significant effect of phase on the TAC for the conspicuity of ischemic vs. normally perfused areas (P < 0.00001). The conspicuity of ischemic areas distal to a large artery occlusion in acute stroke is dependent on the phase of contrast arrival with dynamic CTASI and is objectively greatest in the mid-phase of the TAC. (orig.)
Backward-in-time methods to simulate large-scale transport and mixing in the ocean
Prants, S. V.
2015-06-01
In oceanography and meteorology, it is important to know not only where water or air masses are headed, but also where they came from. For example, it is important to find unknown sources of oil spills in the ocean and of dangerous substance plumes in the atmosphere. It is impossible with the help of conventional ocean and atmospheric numerical circulation models to extrapolate backward from the observed plumes to find the source, because those models cannot be reversed in time. We review here recently elaborated backward-in-time numerical methods to identify and study mesoscale eddies in the ocean and to compute where the waters entering a given area came from. The area under study is populated with a large number of artificial tracers that are advected backward in time in a given velocity field, which is supposed to be known analytically or numerically, or from satellite and radar measurements. After integrating the advection equations, one gets the position of each tracer on a fixed day in the past and can identify, from the known destinations, particle positions at earlier times. The results provided show that the method is efficient, for example, in estimating probabilities of finding increased concentrations of radionuclides and other pollutants in oceanic mesoscale eddies. The backward-in-time methods are illustrated in this paper with a few examples. Backward-in-time Lagrangian maps are applied to identify eddies in satellite-derived and numerically generated velocity fields and to document the pathways by which they exchange water with their surroundings. Backward-in-time trapping maps are used to identify mesoscale eddies in the altimetric velocity field at risk of being contaminated by Fukushima-derived radionuclides. The results of simulations are compared with in situ measurements of caesium concentration in sea water samples collected in a recent research vessel cruise in the area to the east of Japan. Backward-in-time latitudinal maps and the corresponding
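Backward-in-time tracer advection amounts to integrating the advection equations with a negative time step. A minimal sketch, assuming a simple analytic velocity field (solid-body rotation, purely illustrative — real applications use altimetric or model velocity fields): a tracer is advected forward, then integrated backward from its destination, recovering its origin.

```python
def velocity(x, y, t):
    # Hypothetical analytic velocity field: solid-body rotation about the origin.
    return -y, x

def rk4_step(x, y, t, dt):
    # One fourth-order Runge-Kutta step; dt may be negative for backward advection.
    k1 = velocity(x, y, t)
    k2 = velocity(x + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1], t + 0.5 * dt)
    k3 = velocity(x + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1], t + 0.5 * dt)
    k4 = velocity(x + dt * k3[0], y + dt * k3[1], t + dt)
    return (x + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0,
            y + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0)

def advect(x, y, t0, t1, n=1000):
    dt = (t1 - t0) / n   # dt < 0 integrates backward in time
    t = t0
    for _ in range(n):
        x, y = rk4_step(x, y, t, dt)
        t += dt
    return x, y

# Forward advection of one tracer, then backward from its known destination:
x1, y1 = advect(1.0, 0.0, 0.0, 2.0)
x0, y0 = advect(x1, y1, 2.0, 0.0)   # t1 < t0: backward in time
```

In practice one seeds the whole study area with tracers and maps where each came from, rather than tracking a single particle.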
Her, Cheenou; Alonzo, Aaron P.; Vang, Justin Y.; Torres, Ernesto; Krishnan, V. V.
2015-01-01
Enzyme kinetics is an essential part of a chemistry curriculum, especially for students interested in biomedical research or in health care fields. Though the concept is routinely performed in undergraduate chemistry/biochemistry classrooms using other spectroscopic methods, we provide an optimized approach that uses a real-time monitoring of the…
Real-Time Track Reallocation for Emergency Incidents at Large Railway Stations
Directory of Open Access Journals (Sweden)
Wei Liu
2015-01-01
Full Text Available After track capacity breakdowns at a railway station, train dispatchers need to generate appropriate track reallocation plans to recover the impacted train schedule and minimize the expected total train delay time under stochastic scenarios. This paper focuses on the real-time track reallocation problem when tracks break down at large railway stations. To represent these cases, virtual trains are introduced and activated to occupy the accident tracks. A mathematical programming model is developed, which aims at minimizing the total occupation time of station bottleneck sections to avoid train delays. In addition, a hybrid algorithm combining the genetic algorithm and the simulated annealing algorithm is designed. The case study of the Baoji railway station in China verifies the efficiency of the proposed model and algorithm. Numerical results indicate that, during a daily shift transport plan from 8:00 to 8:30, if five tracks break down simultaneously, train schedules will be disturbed (resulting in train arrival and departure delays).
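The metaheuristic half of such a hybrid search can be sketched in miniature. The toy instance below (hypothetical occupation times, three tracks — not the paper's model or data) uses plain simulated annealing to assign trains to tracks so that the busiest track's occupation time is minimized:

```python
import math
import random

random.seed(0)

# Toy instance (hypothetical data): per-train occupation time of a bottleneck
# section, in minutes, to be assigned across 3 tracks.
trains = [12, 7, 9, 15, 6, 11, 8, 14]
n_tracks = 3

def cost(assign):
    # Occupation time of the busiest track under a given assignment.
    loads = [0] * n_tracks
    for t, a in zip(trains, assign):
        loads[a] += t
    return max(loads)

def anneal(steps=20000, temp=10.0, cooling=0.9995):
    cur = [random.randrange(n_tracks) for _ in trains]
    cur_c = cost(cur)
    best_c = cur_c
    for _ in range(steps):
        cand = cur[:]
        cand[random.randrange(len(trains))] = random.randrange(n_tracks)
        c = cost(cand)
        # Accept improvements always; accept worse moves with Boltzmann probability.
        if c <= cur_c or random.random() < math.exp((cur_c - c) / temp):
            cur, cur_c = cand, c
            best_c = min(best_c, c)
        temp *= cooling
    return best_c

best = anneal()
# Lower bound on the busiest-track load: ceil(sum(trains) / n_tracks) = 28
```

A genetic algorithm layer, as in the paper, would maintain a population of such assignments and use annealing as a local-improvement operator.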
Shared control on lunar spacecraft teleoperation rendezvous operations with large time delay
Ya-kun, Zhang; Hai-yang, Li; Rui-xue, Huang; Jiang-hui, Liu
2017-08-01
Teleoperation could be used in space on-orbit servicing missions, such as object deorbiting, spacecraft approaches, and automatic rendezvous and docking back-up systems. Teleoperation rendezvous and docking in lunar orbit may encounter bottlenecks due to the inherent time delay in the communication link and the limited measurement accuracy of sensors. Moreover, human intervention is unsuitable in view of the partial communication coverage problem. To solve these problems, a shared control strategy for teleoperation rendezvous and docking is detailed. We discuss the control authority in lunar orbital maneuvers involving two spacecraft during the final rendezvous and docking phase. A predictive display model based on the relative dynamic equations is established to overcome the influence of the large time delay in the communication link. We discuss, and attempt to demonstrate via consistent ground-based simulations, the relative merits of a fully autonomous control mode (i.e., onboard computer-based), a fully manual control mode (i.e., human-driven at the ground station), and a shared control mode. The simulation experiments were conducted on a nine-degrees-of-freedom teleoperation rendezvous and docking simulation platform. Simulation results indicated that the shared control method can overcome the influence of time delay effects. In addition, the docking success probability of the shared control method was enhanced compared with the automatic and manual modes.
Radiographic constant exposure technique
DEFF Research Database (Denmark)
Domanus, Joseph Czeslaw
1985-01-01
The constant exposure technique has been applied to assess various industrial radiographic systems. Different X-ray films and radiographic papers of two producers were compared. Special attention was given to fast film and paper used with fluorometallic screens. Radiographic image quality was tested by the use of ISO wire IQIs and ASTM penetrameters used on Al and Fe test plates. Relative speed and reduction of kilovoltage obtained with the constant exposure technique were calculated. The advantages of fast radiographic systems are pointed out.
A natural cosmological constant from chameleons
International Nuclear Information System (INIS)
Nastase, Horatiu; Weltman, Amanda
2015-01-01
We present a simple model where the effective cosmological constant appears from chameleon scalar fields. For a Kachru–Kallosh–Linde–Trivedi (KKLT)-inspired form of the potential and a particular chameleon coupling to the local density, patches of approximately constant scalar field potential cluster around regions of matter with density above a certain value, generating the effect of a cosmological constant on large scales. This construction addresses both the cosmological constant problem (why Λ is so small, yet nonzero) and the coincidence problem (why Λ is comparable to the matter density now)
A natural cosmological constant from chameleons
Directory of Open Access Journals (Sweden)
Horatiu Nastase
2015-07-01
Full Text Available We present a simple model where the effective cosmological constant appears from chameleon scalar fields. For a Kachru–Kallosh–Linde–Trivedi (KKLT)-inspired form of the potential and a particular chameleon coupling to the local density, patches of approximately constant scalar field potential cluster around regions of matter with density above a certain value, generating the effect of a cosmological constant on large scales. This construction addresses both the cosmological constant problem (why Λ is so small, yet nonzero) and the coincidence problem (why Λ is comparable to the matter density now).
A natural cosmological constant from chameleons
Energy Technology Data Exchange (ETDEWEB)
Nastase, Horatiu, E-mail: nastase@ift.unesp.br [Instituto de Física Teórica, UNESP-Universidade Estadual Paulista, R. Dr. Bento T. Ferraz 271, Bl. II, Sao Paulo 01140-070, SP (Brazil); Weltman, Amanda, E-mail: amanda.weltman@uct.ac.za [Astrophysics, Cosmology & Gravity Center, Department of Mathematics and Applied Mathematics, University of Cape Town, Private Bag, Rondebosch 7700 (South Africa)
2015-07-30
We present a simple model where the effective cosmological constant appears from chameleon scalar fields. For a Kachru–Kallosh–Linde–Trivedi (KKLT)-inspired form of the potential and a particular chameleon coupling to the local density, patches of approximately constant scalar field potential cluster around regions of matter with density above a certain value, generating the effect of a cosmological constant on large scales. This construction addresses both the cosmological constant problem (why Λ is so small, yet nonzero) and the coincidence problem (why Λ is comparable to the matter density now)
International Nuclear Information System (INIS)
Yamanashi, Yuki; Masubuchi, Kota; Yoshikawa, Nobuyuki
2016-01-01
The relationship between the timing margin and the error rate of large-scale single flux quantum logic circuits is quantitatively investigated to establish a timing design guideline. We observed that the fluctuation in the set-up/hold time of single flux quantum logic gates caused by thermal noise is the most probable origin of logical errors in large-scale single flux quantum circuits. The appropriate timing margin for stable operation of a large-scale logic circuit is discussed by taking into account the fluctuation of the set-up/hold time and the timing jitter in single flux quantum circuits. As a case study, the dependence of the error rate of a 1-million-bit single flux quantum shift register on the timing margin is statistically analyzed. The result indicates that adjustment of the timing margin and the bias voltage is important for stable operation of a large-scale SFQ logic circuit.
Energy Technology Data Exchange (ETDEWEB)
Yamanashi, Yuki, E-mail: yamanasi@ynu.ac.jp [Department of Electrical and Computer Engineering, Yokohama National University, Tokiwadai 79-5, Hodogaya-ku, Yokohama 240-8501 (Japan); Masubuchi, Kota; Yoshikawa, Nobuyuki [Department of Electrical and Computer Engineering, Yokohama National University, Tokiwadai 79-5, Hodogaya-ku, Yokohama 240-8501 (Japan)
2016-11-15
The relationship between the timing margin and the error rate of large-scale single flux quantum logic circuits is quantitatively investigated to establish a timing design guideline. We observed that the fluctuation in the set-up/hold time of single flux quantum logic gates caused by thermal noise is the most probable origin of logical errors in large-scale single flux quantum circuits. The appropriate timing margin for stable operation of a large-scale logic circuit is discussed by taking into account the fluctuation of the set-up/hold time and the timing jitter in single flux quantum circuits. As a case study, the dependence of the error rate of a 1-million-bit single flux quantum shift register on the timing margin is statistically analyzed. The result indicates that adjustment of the timing margin and the bias voltage is important for stable operation of a large-scale SFQ logic circuit.
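The margin-versus-error-rate relationship under Gaussian set-up/hold-time fluctuation can be illustrated with a small Monte Carlo sketch. The noise magnitude `sigma_ps` below is an assumed, illustrative value, not a figure from the paper; the one-sided failure model (error when the fluctuation exceeds the margin) is likewise a simplification.

```python
import math
import random

random.seed(2)

def error_rate(margin, sigma, trials=200000):
    # A logic error occurs when the Gaussian fluctuation of the data-arrival
    # time exceeds the allotted timing margin (one-sided toy model).
    errors = sum(1 for _ in range(trials) if random.gauss(0.0, sigma) > margin)
    return errors / trials

sigma_ps = 2.0   # assumed std of set-up/hold-time fluctuation, in ps
margins = (2.0, 4.0, 6.0)
rates = [error_rate(m, sigma_ps) for m in margins]
# Analytic tail probability of the same Gaussian model, for comparison:
analytic = [0.5 * math.erfc(m / (sigma_ps * math.sqrt(2.0))) for m in margins]
```

The steep (Gaussian-tail) drop of the error rate with margin is why a modest margin increase can stabilize a million-gate circuit.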
Directory of Open Access Journals (Sweden)
Min Chen
2014-01-01
Full Text Available We study the one-dimensional bipolar nonisentropic Euler-Poisson equations which can model various physical phenomena, such as the propagation of electron and hole in submicron semiconductor devices, the propagation of positive ion and negative ion in plasmas, and the biological transport of ions for channel proteins. We show the existence and large time behavior of global smooth solutions for the initial value problem, when the difference of two particles’ initial mass is nonzero, and the far field of two particles’ initial temperatures is not the ambient device temperature. This result improves that of Y.-P. Li, for the case that the difference of two particles’ initial mass is zero, and the far field of the initial temperature is the ambient device temperature.
Automatic Optimization for Large-Scale Real-Time Coastal Water Simulation
Directory of Open Access Journals (Sweden)
Shunli Wang
2016-01-01
Full Text Available We introduce an automatic optimization approach for the simulation of large-scale coastal water. To solve the singularity problem of water waves obtained with the traditional model, a hybrid deep-shallow-water model is estimated by using an automatic coupling algorithm. It can handle arbitrary water depths and different underwater terrain. The coastline, a characteristic feature of coastal terrain, is detected with collision detection technology. Then, unnecessary water grid cells are simplified by the automatic simplification algorithm according to the depth. Finally, the model is calculated on the Central Processing Unit (CPU) and the simulation is implemented on the Graphics Processing Unit (GPU). We show the effectiveness of our method with various results which achieve real-time rendering on a consumer-level computer.
Space-Time Convolutional Codes over Finite Fields and Rings for Systems with Large Diversity Order
Directory of Open Access Journals (Sweden)
B. F. Uchôa-Filho
2008-06-01
Full Text Available We propose a convolutional encoder over the finite ring of integers modulo p^k, ℤ_{p^k}, where p is a prime number and k is any positive integer, to generate a space-time convolutional code (STCC). Under this structure, we prove three properties related to the generator matrix of the convolutional code that can be used to simplify the code search procedure for STCCs over ℤ_{p^k}. Some STCCs of large diversity order (≥ 4) designed under the trace criterion for n = 2, 3, and 4 transmit antennas are presented for various PSK signal constellations.
Duvall, Thomas L.; Hanasoge, Shravan M.
2012-01-01
With large separations (10-24 deg heliocentric), it has proven possible to cleanly separate the horizontal and vertical components of supergranular flow with time-distance helioseismology. These measurements require very broad filters in the k-ω power spectrum, as apparently supergranulation scatters waves over a large area of the power spectrum. By picking locations of supergranulation as peaks in the horizontal divergence signal derived from f-mode waves, it is possible to simultaneously obtain average properties of supergranules and a high signal/noise ratio by averaging over many cells. Comparing ray-theory forward modeling with HMI measurements yields an average supergranule model with a peak upflow of 240 m/s at cell center at a depth of 2.3 Mm and a peak horizontal outflow of 700 m/s at a depth of 1.6 Mm. This upflow is a factor of 20 larger than the measured photospheric upflow. These results may not be consistent with earlier measurements using much shorter separations (<5 deg heliocentric). With a 30 Mm horizontal extent and a few Mm in depth, the cells might be characterized as thick pancakes.
Numerical simulation of pseudoelastic shape memory alloys using the large time increment method
Gu, Xiaojun; Zhang, Weihong; Zaki, Wael; Moumni, Ziad
2017-04-01
The paper presents a numerical implementation of the large time increment (LATIN) method for the simulation of shape memory alloys (SMAs) in the pseudoelastic range. The method was initially proposed as an alternative to the conventional incremental approach for the integration of nonlinear constitutive models. It is adapted here for the simulation of pseudoelastic SMA behavior using the Zaki-Moumni model and is shown to be especially useful in situations where the phase transformation process presents little or no hardening. In these situations, a slight stress variation in a load increment can result in large variations of strain and local state variables, which may lead to difficulties in numerical convergence. In contrast to the conventional incremental method, the LATIN method solves the global equilibrium and local consistency conditions sequentially for the entire loading path. The achieved solution must satisfy the conditions of static and kinematic admissibility and consistency simultaneously after several iterations. A 3D numerical implementation is accomplished using an implicit algorithm and is then used for finite element simulation using the software Abaqus. Computational tests demonstrate the ability of this approach to simulate SMAs presenting flat phase transformation plateaus and subjected to complex loading cases, such as the quasi-static behavior of a stent structure. Some numerical results are contrasted with those obtained using step-by-step incremental integration.
International Nuclear Information System (INIS)
Chandra, R.
1977-01-01
On the grounds of the two correspondence limits, the Newtonian limit and the special theory limit of Einstein field equations, a modification of the cosmical constant has been proposed which gives realistic results in the case of a homogeneous universe. Also, according to this modification an explanation for the negative pressure in the steady-state model of the universe has been given. (author)
International Nuclear Information System (INIS)
Weinberg, S.
1989-01-01
The cosmological constant problem is discussed. The history of the problem is briefly considered. Five different approaches to the solution of the problem are described: supersymmetry, supergravity, and superstrings; the anthropic approach; the mechanism of Lagrangian alignment; modification of gravitation theory; and quantum cosmology. It is noted that the approach based on quantum cosmology is the most promising one.
International Nuclear Information System (INIS)
O Murchadha, N.
1991-01-01
The set of Riemannian three-metrics with positive Yamabe constant defines the space of independent data for the gravitational field. The boundary of this set is investigated, and it is shown that metrics close to the boundary satisfy the positive-energy theorem. (Author) 18 refs
Tachyon constant-roll inflation
Mohammadi, A.; Saaidi, Kh.; Golanbari, T.
2018-04-01
Constant-roll inflation is studied where the inflaton is taken as a tachyon field. Based on this approach, the second slow-roll parameter is taken as a constant, which leads to a differential equation for the Hubble parameter. Finding an exact solution for the Hubble parameter is difficult, which leads us to a numerical solution. On the other hand, since in this formalism the slow-roll parameter η is constant and cannot be assumed to be necessarily small, the perturbation parameters should be reconsidered, which, in turn, results in new terms appearing in the amplitude of scalar perturbations and the scalar spectral index. Utilizing the numerical solution for the Hubble parameter, we estimate the perturbation parameters at the horizon exit time and compare them with observational data. The results show that, for specific values of the constant parameter η, we could have an almost scale-invariant amplitude of scalar perturbations. Finally, the attractor behavior of the solution of the model is presented, and we show that this feature is properly satisfied.
realfast: Real-time, Commensal Fast Transient Surveys with the Very Large Array
Law, C. J.; Bower, G. C.; Burke-Spolaor, S.; Butler, B. J.; Demorest, P.; Halle, A.; Khudikyan, S.; Lazio, T. J. W.; Pokorny, M.; Robnett, J.; Rupen, M. P.
2018-05-01
Radio interferometers have the ability to precisely localize and better characterize the properties of sources. This ability is having a powerful impact on the study of fast radio transients, where a few milliseconds of data is enough to pinpoint a source at cosmological distances. However, recording interferometric data at millisecond cadence produces a terabyte-per-hour data stream that strains networks, computing systems, and archives. This challenge mirrors that of other domains of science, where the science scope is limited by the computational architecture as much as the physical processes at play. Here, we present a solution to this problem in the context of radio transients: realfast, a commensal, fast transient search system at the Jansky Very Large Array. realfast uses a novel architecture to distribute fast-sampled interferometric data to a 32-node, 64-GPU cluster for real-time imaging and transient detection. By detecting transients in situ, we can trigger the recording of data for those rare, brief instants when the event occurs and reduce the recorded data volume by a factor of 1000. This makes it possible to commensally search a data stream that would otherwise be impossible to record. This system will search for millisecond transients in more than 1000 hr of data per year, potentially localizing several Fast Radio Bursts, pulsars, and other sources of impulsive radio emission. We describe the science scope for realfast, the system design, expected outcomes, and ways in which real-time analysis can help in other fields of astrophysics.
Suppression of the Transit -Time Instability in Large-Area Electron Beam Diodes
Myers, Matthew C.; Friedman, Moshe; Swanekamp, Stephen B.; Chan, Lop-Yung; Ludeking, Larry; Sethian, John D.
2002-12-01
Experiment, theory, and simulation have shown that large-area electron-beam diodes are susceptible to the transit-time instability. The instability modulates the electron beam spatially and temporally, producing a wide spread in electron energy and momentum distributions. The result is gross inefficiency in beam generation and propagation. Simulations indicate that a periodic, slotted cathode structure that is loaded with resistive elements may be used to eliminate the instability. Such a cathode has been fielded on one of the two opposing 60 cm × 200 cm diodes on the NIKE KrF laser at the Naval Research Laboratory. These diodes typically deliver 600 kV, 500 kA, 250 ns electron beams to the laser cell in an external magnetic field of 0.2 T. We conclude that the slotted cathode suppressed the transit-time instability such that the RF power was reduced by a factor of 9 and that electron transmission efficiency into the laser gas was improved by more than 50%.
Suppression of the transit-time instability in large-area electron beam diodes
International Nuclear Information System (INIS)
Myers, Matthew C.; Friedman, Moshe; Sethian, John D.; Swanekamp, Stephen B.; Chan, L.-Y.; Ludeking, Larry
2002-01-01
Experiment, theory, and simulation have shown that large-area electron-beam diodes are susceptible to the transit-time instability. The instability modulates the electron beam spatially and temporally, producing a wide spread in electron energy and momentum distributions. The result is gross inefficiency in beam generation and propagation. Simulations indicate that a periodic, slotted cathode structure that is loaded with resistive elements may be used to eliminate the instability. Such a cathode has been fielded on one of the two opposing 60 cm x 200 cm diodes on the NIKE KrF laser at the Naval Research Laboratory. These diodes typically deliver 600 kV, 500 kA, 250 ns electron beams to the laser cell in an external magnetic field of 0.2 T. We conclude that the slotted cathode suppressed the transit-time instability such that the RF power was reduced by a factor of 9 and that electron transmission efficiency into the laser gas was improved by more than 50%
Practical method of calculating time-integrated concentrations at medium and large distances
International Nuclear Information System (INIS)
Cagnetti, P.; Ferrara, V.
1980-01-01
Previous reports have covered the possibility of calculating time-integrated concentrations (TICs) for a prolonged release, based on concentration estimates for a brief release. This study proposes a simple method of evaluating concentrations in the air at medium and large distances, for a brief release. It is known that the stability of the atmospheric layers close to ground level influences diffusion only over short distances. Beyond some tens of kilometers, as the pollutant cloud progressively reaches higher layers, diffusion is affected by factors other than the stability at ground level, such as wind shear at intermediate distances and the divergence and rotational motion of air masses towards the upper limit of the mesoscale and on the synoptic scale. Using the data available in the literature, expressions for σ_y and σ_z are proposed for transfer times corresponding to distances of up to several thousand kilometres, for two initial diffusion situations (up to distances of 10-20 km), characterized by stable and neutral conditions respectively. Using this method, simple hand calculations can be made for any problem relating to the diffusion of radioactive pollutants over long distances.
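The kind of hand calculation the report describes can be sketched with the ground-level, centerline Gaussian-plume form TIC = Q / (π u σ_y σ_z) for a brief release. The power-law σ_y, σ_z coefficients below are illustrative placeholders, not the expressions proposed in the report, and the simple power-law growth of σ_z ignores the mixing-layer cap that matters at long range:

```python
import math

def sigma_y(x_km):
    # Hypothetical power-law fit for lateral spread (assumed coefficients), km.
    return 0.3 * x_km ** 0.85

def sigma_z(x_km):
    # Hypothetical power-law fit for vertical spread (assumed coefficients), km.
    return 0.06 * x_km ** 0.75

def tic_centerline(q_bq, u_m_s, x_km):
    # Ground-level, centerline time-integrated concentration (Bq*s/m^3)
    # for a brief release of q_bq becquerels at wind speed u_m_s:
    #   TIC = Q / (pi * u * sigma_y * sigma_z)
    sy = sigma_y(x_km) * 1000.0  # convert km -> m
    sz = sigma_z(x_km) * 1000.0
    return q_bq / (math.pi * u_m_s * sy * sz)

# TIC falls off monotonically with distance as the cloud spreads:
vals = [tic_centerline(1e12, 5.0, x) for x in (50, 200, 1000, 3000)]
```

With calibrated σ_y(t), σ_z(t) for stable or neutral initial conditions, the same one-line formula gives the medium- and long-range estimates the report aims at.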
Likelihood of Null Effects of Large NHLBI Clinical Trials Has Increased over Time.
Directory of Open Access Journals (Sweden)
Robert M Kaplan
Full Text Available We explore whether the number of null results in large National Heart, Lung, and Blood Institute (NHLBI) funded trials has increased over time. We identified all large NHLBI-supported RCTs between 1970 and 2012 evaluating drugs or dietary supplements for the treatment or prevention of cardiovascular disease. Trials were included if direct costs were >$500,000/year, participants were adult humans, and the primary outcome was cardiovascular risk, disease or death. The 55 trials meeting these criteria were coded for whether they were published prior to or after the year 2000, whether they were registered in clinicaltrials.gov prior to publication, whether they used an active or placebo comparator, and whether or not the trial had industry co-sponsorship. We tabulated whether the study reported a positive, negative, or null result on the primary outcome variable and for total mortality. 17 of 30 studies (57%) published prior to 2000 showed a significant benefit of intervention on the primary outcome, in comparison to only 2 among the 25 trials (8%) published after 2000 (χ² = 12.2, df = 1, p = 0.0005). There has been no change in the proportion of trials that compared treatment to placebo versus an active comparator. Industry co-sponsorship was unrelated to the probability of reporting a significant benefit. Pre-registration in clinicaltrials.gov was strongly associated with the trend toward null findings. The number of NHLBI trials reporting positive results declined after the year 2000. Prospective declaration of outcomes in RCTs, and the adoption of transparent reporting standards, as required by clinicaltrials.gov, may have contributed to the trend toward null findings.
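The quoted statistic can be reproduced from the reported counts (17 of 30 positive before 2000 vs. 2 of 25 after) with a 2×2 chi-square test using Yates' continuity correction; the df=1 p-value follows from the erfc form of the chi-square survival function:

```python
import math

def chi2_yates(table):
    # 2x2 chi-square statistic with Yates continuity correction.
    (a, b), (c, d) = table
    n = a + b + c + d
    row = [a + b, c + d]
    col = [a + c, b + d]
    stat = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            exp = row[i] * col[j] / n
            stat += (abs(obs - exp) - 0.5) ** 2 / exp
    return stat

# Positive vs. non-positive primary outcome, before vs. after 2000:
stat = chi2_yates([[17, 13], [2, 23]])
# Chi-square survival function for df = 1: P(X > x) = erfc(sqrt(x / 2))
p = math.erfc(math.sqrt(stat / 2.0))
print(round(stat, 1), p < 0.001)   # → 12.2 True
```

The uncorrected statistic would be about 14.3; the abstract's 12.2 matches the continuity-corrected version, which is the conventional choice for small 2×2 cell counts.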
Low power constant fraction discriminator
International Nuclear Information System (INIS)
Krishnan, Shanti; Raut, S.M.; Mukhopadhyay, P.K.
2001-01-01
This paper describes the design of a low power ultrafast constant fraction discriminator, which significantly reduces the power consumption. A conventional fast discriminator consumes about 1250 mW of power whereas this low power version consumes about 440 mW. In a multi-detector system, where the number of discriminators is very large, reduction of power is of utmost importance. This low power discriminator is being designed for the GRACE (Gamma Ray Atmospheric Cerenkov Experiments) telescope, where 1000 channels of discriminators are required. A novel method of decreasing power consumption is described. (author)
Pizzuto, J. E.; Skalak, K.; Karwan, D. L.
2017-12-01
Transport of suspended sediment and sediment-borne constituents (here termed fluvial particles) through large river systems can be significantly influenced by episodic storage in floodplains and other alluvial deposits. Geomorphologists quantify the importance of storage using sediment budgets, but these data alone are insufficient to determine how storage influences the routing of fluvial particles through river corridors across large spatial scales. For steady state systems, models that combine sediment budget data with "waiting time distributions" (to define how long deposited particles remain stored until being remobilized) and velocities during transport events can provide useful predictions. Limited field data suggest that waiting time distributions are well represented by power laws extending up to 10^4 years, while the probability of storage defined by sediment budgets varies from 0.1 km^-1 for small drainage basins to 0.001 km^-1 for the world's largest watersheds. Timescales of particle delivery from large watersheds are determined by storage rather than by transport processes, with most particles requiring 10^2-10^4 years to reach the basin outlet. These predictions suggest that erosional "signals" induced by climate change, tectonics, or anthropogenic activity will be transformed by storage before delivery to the outlets of large watersheds. In particular, best management practices (BMPs) implemented in upland source areas, designed to reduce the loading of fluvial particles to estuarine receiving waters, will not achieve their intended benefits for centuries (or longer). For transient systems, waiting time distributions cannot be constant, but will vary as portions of transient sediment "pulses" enter and are later released from storage. The delivery of sediment pulses under transient conditions can be predicted by adopting the hypothesis that the probability of erosion of stored particles will decrease with increasing "age" (where age is defined as the
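The steady-state routing idea (in-transit travel time plus power-law storage waiting times) can be sketched with assumed parameters. The per-kilometer storage probability, Pareto exponent, and transport velocity below are hypothetical, chosen only to illustrate why storage, not transport, dominates delivery timescales:

```python
import random

random.seed(7)

def delivery_time(length_km, p_store_per_km=0.01, t_min=1.0, alpha=0.5,
                  velocity_km_per_yr=100.0):
    # Storage episodes encountered along the path (one Bernoulli trial per km).
    episodes = sum(1 for _ in range(int(length_km))
                   if random.random() < p_store_per_km)
    # Power-law (Pareto) waiting time per episode, via inverse-CDF sampling:
    #   t = t_min * u^(-1/alpha), u uniform on (0, 1]
    waits = sum(t_min * (1.0 - random.random()) ** (-1.0 / alpha)
                for _ in range(episodes))
    return length_km / velocity_km_per_yr + waits

transport_only = 1000.0 / 100.0   # 10 years of pure transit for a 1000 km path
samples = sorted(delivery_time(1000.0) for _ in range(501))
median_delivery = samples[250]
```

Even with these mild assumptions, the median delivery time is dominated by the heavy-tailed waiting times rather than by the 10-year transit time, mirroring the abstract's conclusion about delayed BMP benefits.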
Yongquan, Han
2016-10-01
The ideal gas state equation is not applicable to ordinary gas; it should be applied to the electromagnetic "gas", that is, to radiation. Radiation should be the ultimate (or initial) state of matter changes, and the universe is filled with radiation. That is, the ideal gas equation of state is suitable for the singular point and the universe. One might object that no vessel can accommodate radiation, but that is only because ordinary containers are too small; if the radius of the container were the distance light travels in an hour, would you still think it cannot accommodate radiation? Modern science determines the present radius of the universe to be about 10^27 m. Assuming the universe is a sphere, its volume is approximately V = 4.19 × 10^81 cubic meters; the radiation temperature of the universe (the cosmic microwave background temperature, which should be closest to the average temperature of the universe) is T = 3.15 K, and the radiation pressure is P = 5 × 10^-6 N/m^2. According to the ideal gas law, PV/T = constant = 6 × 10^75; this constant is the value for the universe, and the singular point should also equal this constant.
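The quoted arithmetic can be checked directly: with r = 10^27 m, the spherical volume V = (4/3)πr³ ≈ 4.19 × 10^81 m³, and PV/T comes out at the stated order of 10^75 (about 6.6 × 10^75 with the numbers as given):

```python
import math

# Numbers quoted in the abstract:
r = 1e27          # assumed radius of the universe, m
P = 5e-6          # radiation pressure, N/m^2
T = 3.15          # CMB temperature, K

volume = 4.0 / 3.0 * math.pi * r ** 3   # ~4.19e81 m^3
const = P * volume / T                  # the abstract's "PV/T = constant"
print(math.floor(math.log10(const)))    # → 75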
Replicability of time-varying connectivity patterns in large resting state fMRI samples.
Abrol, Anees; Damaraju, Eswar; Miller, Robyn L; Stephen, Julia M; Claus, Eric D; Mayer, Andrew R; Calhoun, Vince D
2017-12-01
The past few years have seen an emergence of approaches that leverage temporal changes in whole-brain patterns of functional connectivity (the chronnectome). In this chronnectome study, we investigate the replicability of the human brain's inter-regional coupling dynamics during rest by evaluating two different dynamic functional network connectivity (dFNC) analysis frameworks using 7500 functional magnetic resonance imaging (fMRI) datasets. To quantify the extent to which the emergent functional connectivity (FC) patterns are reproducible, we characterize the temporal dynamics by deriving several summary measures across multiple large, independent age-matched samples. Reproducibility was demonstrated through the existence of basic connectivity patterns (FC states) amidst an ensemble of inter-regional connections. Furthermore, application of the methods to conservatively configured (statistically stationary, linear and Gaussian) surrogate datasets revealed that some of the studied state summary measures were indeed statistically significant and also suggested that this class of null model did not explain the fMRI data fully. This extensive testing of reproducibility of similarity statistics also suggests that the estimated FC states are robust against variation in data quality, analysis, grouping, and decomposition methods. We conclude that future investigations probing the functional and neurophysiological relevance of time-varying connectivity assume critical importance. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Data transfer over the wide area network with a large round trip time
Matsunaga, H.; Isobe, T.; Mashimo, T.; Sakamoto, H.; Ueda, I.
2010-04-01
A Tier-2 regional center is running at the University of Tokyo in Japan. This center receives a large amount of data of the ATLAS experiment from the Tier-1 center in France. Although the link between the two centers has 10 Gbps bandwidth, it is not a dedicated link but is shared with other traffic, and the round trip time is 290 ms. It is not easy to exploit the available bandwidth on such a link, a so-called long fat network. We performed data transfer tests by using GridFTP in various combinations of the parameters, such as the number of parallel streams and the TCP window size. In addition, we have gained experience of the actual data transfer in our production system where the Disk Pool Manager (DPM) is used as the Storage Element and the data transfer is controlled by the File Transfer Service (FTS). We report results of the tests and the daily activity, and discuss the improvement of the data transfer throughput.
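The difficulty of filling a long fat network comes down to the bandwidth-delay product (BDP): a sender must keep roughly BDP bytes in flight, either in one large TCP window or spread over parallel streams. A minimal sketch using the link figures quoted above (the stream counts are illustrative, not those used in the tests):

```python
# Bandwidth-delay product for the 10 Gbps, 290 ms RTT link described above.
bandwidth_bps = 10e9   # shared link capacity, bits per second
rtt_s = 0.290          # round trip time, seconds
bdp_bytes = bandwidth_bps * rtt_s / 8.0
print(f"BDP = {bdp_bytes / 1e6:.1f} MB")

# Parallel GridFTP streams divide the per-connection TCP window requirement:
for n_streams in (1, 8, 32):
    print(f"{n_streams:>2} streams -> ~{bdp_bytes / n_streams / 1e6:.1f} MB per window")
```

This makes clear why both tuning knobs mentioned in the abstract (stream count and window size) matter: a single stream would need a ~360 MB window to saturate the link.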
Data transfer over the wide area network with a large round trip time
International Nuclear Information System (INIS)
Matsunaga, H; Isobe, T; Mashimo, T; Sakamoto, H; Ueda, I
2010-01-01
A Tier-2 regional center is running at the University of Tokyo in Japan. This center receives a large amount of data of the ATLAS experiment from the Tier-1 center in France. Although the link between the two centers has 10 Gbps bandwidth, it is not a dedicated link but is shared with other traffic, and the round trip time is 290 ms. It is not easy to exploit the available bandwidth on such a link, a so-called long fat network. We performed data transfer tests by using GridFTP in various combinations of the parameters, such as the number of parallel streams and the TCP window size. In addition, we have gained experience of the actual data transfer in our production system where the Disk Pool Manager (DPM) is used as the Storage Element and the data transfer is controlled by the File Transfer Service (FTS). We report results of the tests and the daily activity, and discuss the improvement of the data transfer throughput.
Directory of Open Access Journals (Sweden)
Giorgos Minas
2017-07-01
Full Text Available In order to analyse large complex stochastic dynamical models such as those studied in systems biology, there is currently a great need both for analytical tools and for algorithms for accurate and fast simulation and estimation. We present a new stochastic approximation of biological oscillators that addresses these needs. Our method, called phase-corrected LNA (pcLNA), overcomes the main limitations of the standard Linear Noise Approximation (LNA) by remaining uniformly accurate for long times, while still maintaining the speed and analytical tractability of the LNA. As part of this, we develop analytical expressions for key probability distributions and associated quantities, such as the Fisher Information Matrix and Kullback-Leibler divergence, and we introduce a new approach to system-global sensitivity analysis. We also present algorithms for statistical inference and for long-term simulation of oscillating systems that are shown to be as accurate as, but much faster than, leaping algorithms and algorithms for integration of diffusion equations. Stochastic versions of published models of the circadian clock and NF-κB system are used to illustrate our results.
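LNA-type approximations describe the system state by multivariate Gaussians, for which quantities such as the Kullback-Leibler divergence mentioned above have closed forms. A minimal sketch of the generic Gaussian KL formula such methods build on (this is the textbook formula, not the pcLNA-specific expression):

```python
import numpy as np

def gaussian_kl(m0, S0, m1, S1):
    """Closed-form KL(N(m0, S0) || N(m1, S1)) between multivariate normals."""
    k = len(m0)
    S1_inv = np.linalg.inv(S1)
    d = np.asarray(m1) - np.asarray(m0)
    return 0.5 * (np.trace(S1_inv @ S0) + d @ S1_inv @ d - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

# Identical distributions have zero divergence:
m, S = np.zeros(2), np.eye(2)
print(gaussian_kl(m, S, m, S))   # -> 0.0
```

Having such quantities in closed form is what makes the LNA (and pcLNA) analytically tractable compared with simulation-only approaches.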
Stabilized constant-power supply; Alimentation regulee a puissance constante
Energy Technology Data Exchange (ETDEWEB)
Roussel, L. [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires
1968-06-01
The study and realization of a stabilized constant-power supply, adjustable from 5 to 100 watts, are described. Intended for the constant-power drift of lithium-compensated diodes, the design targets a regulation precision of 1 per cent and a response time of less than 1 s. Recent components such as Hall-effect multipliers and integrated amplifiers make this possible while allowing the use of interchangeable modules. (author) [French] On decrit l'etude et la realisation d'une alimentation a puissance constante reglable dans une gamme de 5 a 100 watts. Prevue pour le drift a puissance constante des diodes compensees au lithium, l'etude a ete menee en vue d'obtenir une precision de regulation de 1 pour cent et un temps de reponse inferieur a la seconde. Des systemes recents tels que multiplicateurs a effet Hall et circuits integres ont permis d'atteindre ce but tout en facilitant l'emploi de modules interchangeables. (auteur)
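For a resistive load, holding the power constant means the voltage and current setpoints must co-vary so that P = V·I; a minimal sketch of that relationship (the load values are illustrative, not from the report):

```python
import math

def constant_power_setpoint(power_w, load_ohm):
    """Voltage and current that hold P = V * I constant on a resistive load."""
    v = math.sqrt(power_w * load_ohm)
    return v, power_w / v

# Sweep illustrative load values at the 50 W midpoint of the 5-100 W range:
for r in (10.0, 50.0, 250.0):
    v, i = constant_power_setpoint(50.0, r)
    print(f"R = {r:6.1f} ohm -> V = {v:6.2f} V, I = {i:5.3f} A, P = {v * i:.1f} W")
```

A Hall-effect multiplier performs the analog V × I product in hardware, which is why it is the natural regulation element for this design.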
On the problem of earthquake correlation in space and time over large distances
Georgoulas, G.; Konstantaras, A.; Maravelakis, E.; Katsifarakis, E.; Stylios, C. D.
2012-04-01
A quick examination of geographical maps with the epicenters of earthquakes marked on them reveals a strong tendency of these points to form compact clusters of irregular shapes and various sizes, often intersecting other clusters. According to [Saleur et al. 1996], "earthquakes are correlated in space and time over large distances". This implies that seismic sequences are not formed randomly but follow a spatial pattern with consequent triggering of events. Seismic cluster formation is believed to be due to underlying geological structures, which: a) act as the energy storage elements of the phenomenon, and b) tend to form a complex network of numerous interacting faults [Vallianatos and Tzanis, 1998]. Therefore it is imperative to "isolate" meaningful structures (clusters) in order to mine information regarding the underlying mechanism and, at a second stage, to test the causality effect implied by what is known as the Domino theory [Burgman, 2009]. Ongoing work by Konstantaras et al. 2011 and Katsifarakis et al. 2011 on clustering seismic sequences in the area of the Southern Hellenic Arc, and progressively throughout the Greek vicinity and the entire Mediterranean region, based on an explicit segmentation of the data by both their temporal and spatial stamps, following modelling assumptions proposed by Dobrovolsky et al. 1989 and Drakatos et al. 2001, managed to identify geologically validated seismic clusters. These results suggest that the time component should be included as a dimension during the clustering process, as seismic cluster formation is dynamic and the emerging clusters propagate in time. Another issue that has not yet been investigated explicitly is the role of the magnitude of each seismic event; in other words, a major seismic event should be treated differently from pre- or post-seismic sequences. Moreover, the sometimes irregular and elongated shapes that appear on geophysical maps mean that clustering algorithms
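The core idea above, that events should only be grouped when close in both space and time, can be sketched with a toy spatio-temporal linkage rule (the thresholds, coordinates, and linkage criterion are illustrative assumptions, not the cited authors' method):

```python
import math
from dataclasses import dataclass

@dataclass
class Event:
    x: float   # easting, km
    y: float   # northing, km
    t: float   # days since start of catalogue

def linked(a, b, d_km=50.0, dt_days=30.0):
    """Hypothetical linkage rule: events join only when close in BOTH space and time."""
    close_in_space = math.hypot(a.x - b.x, a.y - b.y) <= d_km
    close_in_time = abs(a.t - b.t) <= dt_days
    return close_in_space and close_in_time

def cluster(events):
    """Union-find over all linked pairs; returns one cluster label per event."""
    parent = list(range(len(events)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i
    for i in range(len(events)):
        for j in range(i + 1, len(events)):
            if linked(events[i], events[j]):
                parent[find(i)] = find(j)
    return [find(i) for i in range(len(events))]

catalogue = [Event(0, 0, 0), Event(10, 0, 5), Event(20, 5, 12), Event(400, 400, 200)]
print(cluster(catalogue))   # first three events share a label, the last is isolated
```

Dropping the temporal test collapses distinct episodes at the same fault into one cluster, which is exactly the failure mode the abstract argues against.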
CODATA recommended values of the fundamental constants
International Nuclear Information System (INIS)
Mohr, Peter J.; Taylor, Barry N.
2000-01-01
A review is given of the latest Committee on Data for Science and Technology (CODATA) adjustment of the values of the fundamental constants. The new set of constants, referred to as the 1998 values, replaces the values recommended for international use by CODATA in 1986. The values of the constants, and particularly the Rydberg constant, are of relevance to the calculation of precise atomic spectra. The standard uncertainty (estimated standard deviation) of the new recommended value of the Rydberg constant, which is based on precision frequency metrology and a detailed analysis of the theory, is approximately 1/160 times the uncertainty of the 1986 value. The new set of recommended values as well as a searchable bibliographic database that gives citations to the relevant literature is available on the World Wide Web at physics.nist.gov/constants and physics.nist.gov/constantsbib, respectively
International Nuclear Information System (INIS)
Willson, R.C.; Hudson, H.
1984-01-01
The Active Cavity Radiometer Irradiance Monitor (ACRIM) of the Solar Maximum Mission satellite measures the radiant power emitted by the sun in the direction of the earth and has worked flawlessly since 1980. The main motivation for using ACRIM to measure the solar constant is to determine the extent to which variations in this quantity affect earth weather and climate. Data from the solar minimum of 1986-1987 are eagerly anticipated, with a view to the possible presence of a solar cycle variation in addition to that caused directly by sunspots
Small cosmological constant from the QCD trace anomaly?
International Nuclear Information System (INIS)
Schuetzhold, Ralf
2002-01-01
According to recent astrophysical observations the large scale mean pressure of our present Universe is negative suggesting a positive cosmological constant-like term. The issue of whether nonperturbative effects of self-interacting quantum fields in curved space-times may yield a significant contribution is addressed. Focusing on the trace anomaly of quantum chromodynamics, a preliminary estimate of the expected order of magnitude yields a remarkable coincidence with the empirical data, indicating the potential relevance of this effect
In-situ high resolution particle sampling by large time sequence inertial spectrometry
International Nuclear Information System (INIS)
Prodi, V.; Belosi, F.
1990-09-01
In situ sampling is always preferred, when possible, because of the artifacts that can arise when the aerosol has to flow through long sampling lines. On the other hand, the amount of possible losses can be calculated with some confidence only when the size distribution can be measured with sufficient precision and the losses are not too large. This makes it desirable to sample directly in the vicinity of the aerosol source or containment. High-temperature sampling devices with detailed aerodynamic separation are extremely useful for this purpose. Several measurements are possible with the inertial spectrometer (INSPEC), but not with cascade impactors or cyclones. INSPEC - INertial SPECtrometer - has been conceived to measure the size distribution of aerosols by separating the particles while airborne according to their size and collecting them on a filter. It consists of a channel of rectangular cross-section with a 90° bend. Clean air is drawn through the channel, with a thin aerosol sheath injected close to the inner wall. Due to the bend, the particles are separated according to their size, leaving the original streamline by a distance which is a function of particle inertia and resistance, i.e. of aerodynamic diameter. The filter collects all the particles of the same aerodynamic size at the same distance from the inlet, in a continuous distribution. INSPEC particle separation at high temperature (up to 800 °C) has been tested with zirconia particles as calibration aerosols. The feasibility study has been concerned with resolution and time-sequence sampling capabilities at high temperature (700 °C)
REM-3D Reference Datasets: Reconciling large and diverse compilations of travel-time observations
Moulik, P.; Lekic, V.; Romanowicz, B. A.
2017-12-01
A three-dimensional Reference Earth model (REM-3D) should ideally represent the consensus view of long-wavelength heterogeneity in the Earth's mantle through the joint modeling of large and diverse seismological datasets. This requires reconciliation of datasets obtained using various methodologies and identification of consistent features. The goal of REM-3D datasets is to provide a quality-controlled and comprehensive set of seismic observations that would not only enable construction of REM-3D, but also allow identification of outliers and assist in more detailed studies of heterogeneity. The community response to data solicitation has been enthusiastic with several groups across the world contributing recent measurements of normal modes, (fundamental mode and overtone) surface waves, and body waves. We present results from ongoing work with body and surface wave datasets analyzed in consultation with a Reference Dataset Working Group. We have formulated procedures for reconciling travel-time datasets that include: (1) quality control for salvaging missing metadata; (2) identification of and reasons for discrepant measurements; (3) homogenization of coverage through the construction of summary rays; and (4) inversions of structure at various wavelengths to evaluate inter-dataset consistency. In consultation with the Reference Dataset Working Group, we retrieved the station and earthquake metadata in several legacy compilations and codified several guidelines that would facilitate easy storage and reproducibility. We find strong agreement between the dispersion measurements of fundamental-mode Rayleigh waves, particularly when made using supervised techniques. The agreement deteriorates substantially in surface-wave overtones, for which discrepancies vary with frequency and overtone number. A half-cycle band of discrepancies is attributed to reversed instrument polarities at a limited number of stations, which are not reflected in the instrument response history
Parallel computation in nuclear group constant calculation
International Nuclear Information System (INIS)
Su'ud, Zaki; Rustandi, Yaddi K.; Kurniadi, Rizal
2002-01-01
In this paper, a parallel computational method for nuclear group constant calculation using the collision probability method is discussed. The main focus is the calculation of the collision matrix, which needs a large amount of computational time. The geometry treated here is concentric cylinders. The calculation of the collision probability matrix is carried out semi-analytically using the Beckley-Naylor function. To accelerate the computation, several computers are used in parallel to solve the problem. On Linux we used PVM-based parallelization with C or Fortran, while on Windows we used socket programming in Delphi or C++ Builder. The results show the importance of an optimal weight for each processor when processors of different speeds are present
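The "optimal weight per processor" point above amounts to splitting the collision matrix rows in proportion to measured processor speed; a minimal sketch (speed values and the rows-as-work-unit assumption are illustrative):

```python
def partition_rows(n_rows, speeds):
    """Assign collision-matrix rows in proportion to processor speed, so
    faster processors get more work (speeds are illustrative benchmarks)."""
    total = sum(speeds)
    shares = [int(n_rows * s / total) for s in speeds]
    shares[-1] += n_rows - sum(shares)   # hand any rounding remainder to the last worker
    return shares

print(partition_rows(1000, [1.0, 1.0, 2.0]))   # -> [250, 250, 500]
```

With equal shares instead, the slowest processor dominates the wall-clock time of each parallel step.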
Directory of Open Access Journals (Sweden)
Vernon Cooray
2017-02-01
Full Text Available Recently, we published two papers in this journal. One of the papers dealt with the action of the radiation fields generated by a traveling-wave element, and the other dealt with the momentum transferred by the same radiation fields and their connection to the time-energy uncertainty principle. The traveling-wave element is defined as a conductor through which a current pulse propagates with the speed of light in free space from one end of the conductor to the other without attenuation. The goal of this letter is to combine the information provided in these two papers and make conclusive statements concerning the connection between the energy dissipated by the radiation fields, the time-energy uncertainty principle, and the elementary charge. As we will show here, the results presented in these two papers, when combined, show that the time-energy uncertainty principle can be applied to the classical radiation emitted by a traveling-wave element, and it results in the prediction that the smallest charge associated with the current that can be detected, using radiated energy as a vehicle, is on the order of the elementary charge. Based on these results, an expression for the fine structure constant is obtained. This is the first time that an order-of-magnitude estimation of the elementary charge based on electromagnetic radiation fields has been obtained. Even though the results obtained in this paper have to be considered as order-of-magnitude estimations, a strict interpretation of the derived equations shows that the fine structure constant or the elementary charge may change as the size or the age of the universe increases.
Scalar-tensor cosmology with cosmological constant
International Nuclear Information System (INIS)
Maslanka, K.
1983-01-01
The equations of the scalar-tensor theory of gravitation with a cosmological constant, in the case of a homogeneous and isotropic cosmological model, can be reduced to a dynamical system of three differential equations with unknown functions H = Ṙ/R, Θ = φ̇/φ, S = ε/φ. When new variables are introduced the system becomes more symmetrical, and cosmological solutions R(t), φ(t), ε(t) are found. It is shown that when the cosmological constant is introduced, a large class of solutions depending also on the Dicke-Brans parameter can be obtained. Investigation of these solutions gives general limits for the cosmological constant and the mean density of matter in the flat model. (author)
Fine-structure constant: Is it really a constant?
International Nuclear Information System (INIS)
Bekenstein, J.D.
1982-01-01
It is often claimed that the fine-structure ''constant'' α is shown to be strictly constant in time by a variety of astronomical and geophysical results. These constrain its fractional rate of change α̇/α to at least some orders of magnitude below the Hubble rate H₀. We argue that the conclusion is not as straightforward as claimed, since there are good physical reasons to expect α̇/α ≠ 0. We propose to decide the issue by constructing a framework for α variability based on very general assumptions: covariance, gauge invariance, causality, and time-reversal invariance of electromagnetism, as well as the idea that the Planck-Wheeler length (10⁻³³ cm) is the shortest scale allowable in any theory. The framework endows α with well-defined dynamics and entails a modification of Maxwell electrodynamics. It proves very difficult to rule it out with purely electromagnetic experiments. In a cosmological setting, the framework predicts an α̇/α which can be compatible with the astronomical constraints; hence, these are too insensitive to rule out α variability. There is marginal conflict with the geophysical constraints; however, no firm decision is possible because of uncertainty about various cosmological parameters. By contrast, the framework's predictions for spatial gradients of α are in fatal conflict with the results of the Eötvös-Dicke-Braginsky experiments. Hence these tests of the equivalence principle rule out with confidence spacetime variability of α at any level
Page, Don N.
2018-01-01
In an asymptotically flat spacetime of dimension d > 3 and with Newtonian gravitational constant G, a spherical black hole of initial horizon radius r_h and mass M ~ r_h^(d-3)/G has a total decay time to Hawking emission of t_d ~ r_h^(d-1)/G ~ G^(2/(d-3)) M^((d-1)/(d-3)), which grows without bound as the radius r_h and mass M are taken to infinity. However, in asymptotically anti-de Sitter spacetime with a length scale ℓ and with absorbing boundary conditions at infinity, the total Hawking decay time does not diverge as the mass and radius go to infinity but instead remains bounded by a time of the order of ℓ^(d-1)/G.
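The two ways of writing the decay time above, in terms of the radius and in terms of the mass, should agree once M ~ r_h^(d-3)/G is substituted; a minimal numerical check (the ~ hides dimensionless factors, which are set to 1 here):

```python
# Check that t_d ~ r_h^(d-1)/G equals G^(2/(d-3)) * M^((d-1)/(d-3))
# once M = r_h^(d-3)/G is substituted, for several dimensions d.
def decay_time_from_radius(rh, G, d):
    return rh**(d - 1) / G

def decay_time_from_mass(M, G, d):
    return G**(2.0 / (d - 3)) * M**((d - 1.0) / (d - 3))

for d in (4, 5, 6, 10):
    rh, G = 3.0, 2.0
    M = rh**(d - 3) / G
    t1 = decay_time_from_radius(rh, G, d)
    t2 = decay_time_from_mass(M, G, d)
    assert abs(t1 - t2) < 1e-9 * t1
print("the two scalings agree for d = 4, 5, 6, 10")
```

Algebraically: G^(2/(d-3)) (r_h^(d-3)/G)^((d-1)/(d-3)) = r_h^(d-1) G^((2-(d-1))/(d-3)) = r_h^(d-1)/G.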
Asymptotics with positive cosmological constant
Bonga, Beatrice; Ashtekar, Abhay; Kesavan, Aruna
2014-03-01
Since observations to date imply that our universe has a positive cosmological constant, one needs an extension of the theory of isolated systems and gravitational radiation in full general relativity from asymptotically flat to asymptotically de Sitter space-times. In current definitions, one mimics the boundary conditions used in the asymptotically AdS context to conclude that the asymptotic symmetry group is the de Sitter group. However, these conditions severely restrict radiation and in fact rule out any non-zero flux of energy, momentum and angular momentum carried by gravitational waves. Therefore, these formulations of asymptotically de Sitter space-times are uninteresting beyond non-radiative spacetimes. The situation is compared and contrasted with conserved charges and fluxes at null infinity in asymptotically flat space-times.
Building evolutionary architectures support constant change
Ford, Neal; Kua, Patrick
2017-01-01
The software development ecosystem is constantly changing, providing a constant stream of new tools, frameworks, techniques, and paradigms. Over the past few years, incremental developments in core engineering practices for software development have created the foundations for rethinking how architecture changes over time, along with ways to protect important architectural characteristics as it evolves. This practical guide ties those parts together with a new way to think about architecture and time.
International Nuclear Information System (INIS)
Bertolami, Orfeu; Paramos, Jorge
2011-01-01
The purpose of this study is to describe a perfect fluid matter distribution that leads to a constant curvature region, thanks to the effect of a nonminimal coupling. This distribution exhibits a density profile within the range found in the interstellar medium and an adequate matching of the metric components at its boundary. By identifying this constant curvature with the value of the cosmological constant and superimposing the spherical distributions arising from different matter sources throughout the universe, one is able to mimic a large-scale homogeneous cosmological constant solution.
Computational challenges of large-scale, long-time, first-principles molecular dynamics
International Nuclear Information System (INIS)
Kent, P R C
2008-01-01
Plane wave density functional calculations have traditionally been able to use the largest available supercomputing resources. We analyze the scalability of modern projector-augmented wave implementations to identify the challenges in performing molecular dynamics calculations of large systems containing many thousands of electrons. Benchmark calculations on the Cray XT4 demonstrate that global linear-algebra operations are the primary reason for limited parallel scalability. Plane-wave related operations can be made sufficiently scalable. Improving parallel linear-algebra performance is an essential step to reaching longer timescales in future large-scale molecular dynamics calculations
Energy Technology Data Exchange (ETDEWEB)
Carlson, C. M. [Department of Physics, University of Colorado, Boulder, Colorado 80309 (United States); Rivkin, T. V. [National Renewable Energy Laboratory, Golden, Colorado 80401 (United States); Parilla, P. A. [National Renewable Energy Laboratory, Golden, Colorado 80401 (United States); Perkins, J. D. [National Renewable Energy Laboratory, Golden, Colorado 80401 (United States); Ginley, D. S. [National Renewable Energy Laboratory, Golden, Colorado 80401 (United States); Kozyrev, A. B. [Electrotechnical University of St. Petersburg, St. Petersburg, Russia 197376 (Russian Federation); Oshadchy, V. N. [Electrotechnical University of St. Petersburg, St. Petersburg, Russia 197376 (Russian Federation); Pavlov, A. S. [Electrotechnical University of St. Petersburg, St. Petersburg, Russia 197376 (Russian Federation)
2000-04-03
We deposited epitaxial Ba0.4Sr0.6TiO3 (BST) films via laser ablation on MgO and LaAlO3 (LAO) substrates for tunable microwave devices. Postdeposition anneals (~1100 °C in O2) improved the morphology and overall dielectric properties of films on both substrates, but shifted the temperature of maximum dielectric constant (Tmax) up for BST/LAO and down for BST/MgO. These substrate-dependent Tmax shifts had opposite effects on the room-temperature dielectric properties. Overall, BST films on MgO had the larger maximum dielectric constant (ε/ε0 ≥ 6000) and tunability (Δε/ε ≥ 65%), but these maxima occurred at 227 K. 30 GHz phase shifters made from similar films had figures of merit (ratio of maximum phase shift to insertion loss) of ~45°/dB and phase shifts of ~400° under 500 V (~13 V/μm) bias, illustrating their utility for many frequency-agile microwave devices. (c) 2000 American Institute of Physics.
Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien
2017-06-01
Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to selection rules that favor the rare trajectories of interest. Such algorithms are plagued by finite simulation time and finite population size, effects that can render their use delicate. In this paper, we present a numerical approach which uses the finite-time and finite-size scalings of estimators of the large deviation functions associated with the distribution of rare trajectories. The method we propose allows one to extract the infinite-time and infinite-size limit of these estimators, which, as shown on the contact process, provides a significant improvement of the large deviation function estimators compared to the standard one.
Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien
2017-06-01
Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to selection rules that favor the rare trajectories of interest. Such algorithms are plagued by finite simulation time and finite population size, effects that can render their use delicate. In this paper, we present a numerical approach which uses the finite-time and finite-size scalings of estimators of the large deviation functions associated with the distribution of rare trajectories. The method we propose allows one to extract the infinite-time and infinite-size limit of these estimators, which, as shown on the contact process, provides a significant improvement of the large deviation function estimators compared to the standard one.
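The extrapolation idea above, removing leading finite-size and finite-time corrections to recover the infinite limit, can be sketched with a toy least-squares fit (the 1/N and 1/t correction form and all coefficients are illustrative assumptions, not the paper's scaling ansatz):

```python
import numpy as np

# Synthetic estimator values psi(N, t) = psi_inf + b/N + c/t, mimicking
# leading finite-population (N) and finite-time (t) corrections.
psi_inf, b, c = -0.25, 1.7, 3.2
rows, vals = [], []
for N in (50, 100, 200, 400, 800):
    for t in (10, 20, 40, 80, 160):
        rows.append([1.0, 1.0 / N, 1.0 / t])
        vals.append(psi_inf + b / N + c / t)

# Least-squares fit of psi = a + b/N + c/t; the intercept a is the
# infinite-size, infinite-time extrapolation.
coef, *_ = np.linalg.lstsq(np.array(rows), np.array(vals), rcond=None)
print(f"extrapolated estimator: {coef[0]:.4f}")   # recovers psi_inf = -0.25
```

Any single (N, t) entry in the synthetic data is biased; only the joint fit removes both corrections at once, which is the point of using the scalings together.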
Zero cosmological constant from normalized general relativity
International Nuclear Information System (INIS)
Davidson, Aharon; Rubin, Shimon
2009-01-01
Normalizing the Einstein-Hilbert action by the volume functional makes the theory invariant under constant shifts in the Lagrangian. The associated field equations then resemble unimodular gravity whose otherwise arbitrary cosmological constant is now determined as a Machian universal average. We prove that an empty space-time is necessarily Ricci tensor flat, and demonstrate the vanishing of the cosmological constant within the scalar field paradigm. The cosmological analysis, carried out at the mini-superspace level, reveals a vanishing cosmological constant for a universe which cannot be closed as long as gravity is attractive. Finally, we give an example of a normalized theory of gravity which does give rise to a non-zero cosmological constant.
Relaxing neutrino mass bounds by a running cosmological constant
Energy Technology Data Exchange (ETDEWEB)
Bauer, F.; Schrempp, L.
2007-11-15
We establish an indirect link between relic neutrinos and the dark energy sector which originates from the vacuum energy contributions of the neutrino quantum fields. Via renormalization group effects they induce a running of the cosmological constant with time which dynamically influences the evolution of the cosmic neutrino background. We demonstrate that the resulting reduction of the relic neutrino abundance allows one to largely evade current cosmological neutrino mass bounds, and we discuss how the scenario might be probed with the help of future large-scale structure surveys and Planck data. (orig.)
Relaxing neutrino mass bounds by a running cosmological constant
International Nuclear Information System (INIS)
Bauer, F.; Schrempp, L.
2007-11-01
We establish an indirect link between relic neutrinos and the dark energy sector which originates from the vacuum energy contributions of the neutrino quantum fields. Via renormalization group effects they induce a running of the cosmological constant with time which dynamically influences the evolution of the cosmic neutrino background. We demonstrate that the resulting reduction of the relic neutrino abundance allows one to largely evade current cosmological neutrino mass bounds, and we discuss how the scenario might be probed with the help of future large-scale structure surveys and Planck data. (orig.)
Real-Time Large-Scale 3D Reconstruction by Fusing Kinect and IMU Data
Huai, J.; Zhang, Y.; Yilmaz, A.
2015-08-01
Kinect-style RGB-D cameras have been used to build large-scale dense 3D maps for indoor environments. These maps can serve many purposes such as robot navigation and augmented reality. However, generating dense 3D maps of large-scale environments is still very challenging. In this paper, we present a mapping system for 3D reconstruction that fuses measurements from a Kinect and an inertial measurement unit (IMU) to estimate motion. Our major achievements include: (i) Large-scale consistent 3D reconstruction is realized by volume shifting and loop closure; (ii) The coarse-to-fine iterative closest point (ICP) algorithm, the SIFT odometry, and IMU odometry are combined to robustly and precisely estimate pose. In particular, ICP runs routinely to track the Kinect motion. If ICP fails in planar areas, the SIFT odometry provides the incremental motion estimate. If both ICP and the SIFT odometry fail, e.g., upon abrupt motion or inadequate features, the incremental motion is estimated by the IMU. Additionally, the IMU also observes the roll and pitch angles, which can reduce long-term drift of the sensor assembly. In experiments on a consumer laptop, our system estimates motion at 8 Hz on average while integrating color images to the local map and saving volumes of meshes concurrently. Moreover, it is immune to tracking failures, and has smaller drift than the state-of-the-art systems in large-scale reconstruction.
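The ICP → SIFT odometry → IMU fallback described above is a simple priority cascade; a minimal sketch with hypothetical tracker callables that return a pose or None on failure (the callable API is an assumption for illustration, not the paper's interface):

```python
def estimate_pose(run_icp, run_sift, run_imu):
    """Cascaded fallback: ICP tracks routinely; SIFT odometry covers ICP
    failures in planar areas; the IMU covers abrupt motion where both
    vision trackers fail. Each callable returns a pose or None."""
    pose = run_icp()
    if pose is None:
        pose = run_sift()
    if pose is None:
        pose = run_imu()
    return pose

# The IMU never "fails", so the cascade always yields an estimate:
assert estimate_pose(lambda: "icp", lambda: None, lambda: "imu") == "icp"
assert estimate_pose(lambda: None, lambda: "sift", lambda: "imu") == "sift"
assert estimate_pose(lambda: None, lambda: None, lambda: "imu") == "imu"
print("fallback cascade behaves as described")
```

Because the last stage is dead-reckoning rather than matching, the cascade can never return no estimate, which is what makes the system "immune to tracking failures".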
A large deviations approach to limit theory for heavy-tailed time series
DEFF Research Database (Denmark)
Mikosch, Thomas Valentin; Wintenberger, Olivier
2016-01-01
and vanishing in some neighborhood of the origin. We study a variety of such functionals, including large deviations of random walks, their suprema, the ruin functional, and further derive weak limit theory for maxima, point processes, cluster functionals and the tail empirical process. One of the main results...
Keith Jennings; Julia A. Jones
2015-01-01
This study tested multiple hydrologic mechanisms to explain snowpack dynamics in extreme rain-on-snow floods, which occur widely in temperate and polar regions. We examined 26 large 10-day storm events over the period 1992–2012 in the H.J. Andrews Experimental Forest in western Oregon, using statistical analyses (regression, ANOVA, and wavelet coherence) of hourly...
The part-time wage penalty in European countries: how large is it for men?
O'Dorchai, Sile Padraigin; Plasman, Robert; Rycx, François
2007-01-01
Economic theory advances a number of reasons for the existence of a wage gap between part-time and full-time workers. Empirical work has concentrated on the wage effects of part-time work for women. For men, much less empirical evidence exists, mainly because of a lack of data. In this paper, we take advantage of access to unique harmonised matched employer-employee data (i.e. the 1995 European Structure of Earnings Survey) to investigate the magnitude and sources of the part-time wage penalty ...
Filament instability under constant loads
Monastra, A. G.; Carusela, M. F.; D’Angelo, M. V.; Bruno, L.
2018-04-01
Buckling of semi-flexible filaments appears in different systems and scales. Some examples are: fibers in geophysical applications, microtubules in the cytoplasm of eukaryotic cells, and deformation of polymers freely suspended in a flow. In these examples, instabilities arise when a system parameter exceeds a critical value, the Euler force being the best known. However, the complete time evolution and wavelength of buckling processes are not fully understood. In this work we solve analytically the time evolution of a filament under a constant compressive force in the small-amplitude approximation. This gives insight into the variable-force scenario in terms of normal modes. The evolution is highly sensitive to the initial configuration and to the magnitude of the compressive load. This model can be a suitable approach to many different real situations.
Directory of Open Access Journals (Sweden)
Regula Morgenegg
2018-01-01
retrospective study examined the OR turnaround data of 875 elective surgery cases scheduled at the Marienhospital, Vechta, Germany, between July and October 2014. The frequency distributions of planned and actual OR turnaround times were compared and correlations between turnaround times and various factors were established, including the time of day of the procedure, patient age and the planned duration of the surgery. Results: There was a significant difference between mean planned and actual OR turnaround times (0.32 versus 0.64 hours; P < 0.001). In addition, significant correlations were noted between actual OR turnaround times and the time of day of the surgery, patient age, actual duration of the procedure and staffing changes affecting the surgeon or the medical specialty of the surgery (P < 0.001 each). The quotient of actual/planned OR turnaround times ranged from 1.733 to 3.000. Conclusion: Significant discrepancies between planned and actual OR turnaround times were noted during the study period. Such findings may potentially be used in future studies to establish a tool to improve OR planning, measure OR management performance and enable benchmarking.
Large time asymptotics of solutions to the anharmonic oscillator model from nonlinear optics
Jochmann, Frank
2005-01-01
The anharmonic oscillator model describing the propagation of electromagnetic waves in an exterior domain containing a nonlinear dielectric medium is investigated. The system under consideration consists of a generally nonlinear second order differential equation for the dielectrical polarization coupled with Maxwell's equations for the electromagnetic field. Local decay of the electromagnetic field for t to infinity in the charge free case is shown for a large class of potentials. (This pape...
Search for a Variation of Fundamental Constants
Ubachs, W.
2013-06-01
Since the days of Dirac scientists have speculated about the possibility that the laws of nature, and the fundamental constants appearing in those laws, are not rock-solid and eternal but may be subject to change in time or space. Such a scenario of evolving constants might provide an answer to the deepest puzzle of contemporary science, namely why the conditions in our local Universe allow for extreme complexity: the fine-tuning problem. In the past decade it has been established that spectral lines of atoms and molecules, which can currently be measured at ever-higher accuracies, form an ideal test ground for probing drifting constants. This has brought the subject from the realm of metaphysics to that of experimental science. In particular the spectra of molecules are sensitive probes of a variation of the proton-electron mass ratio μ, either on a cosmological time scale, or on a laboratory time scale. A comparison can be made between spectra of molecular hydrogen observed in the laboratory and at high redshift (z=2-3), using the Very Large Telescope (Paranal, Chile) and the Keck telescope (Hawaii). This puts a constraint on a varying mass ratio Δμ/μ at the 10^{-5} level. The optical work can also be extended to include CO molecules. Further, a novel direction is discussed: it was discovered that molecules exhibiting hindered internal rotation have spectral lines in the radio spectrum that are extremely sensitive to a varying proton-electron mass ratio. Such lines in the spectrum of methanol were recently observed with the radio telescope in Effelsberg (Germany). F. van Weerdenburg, M.T. Murphy, A.L. Malec, L. Kaper, W. Ubachs, Phys. Rev. Lett. 106, 180802 (2011). A. Malec, R. Buning, M.T. Murphy, N. Milutinovic, S.L. Ellison, J.X. Prochaska, L. Kaper, J. Tumlinson, R.F. Carswell, W. Ubachs, Mon. Not. Roy. Astron. Soc. 403, 1541 (2010). E.J. Salumbides, M.L. Niu, J. Bagdonaite, N. de Oliveira, D. Joyeux, L. Nahon, W. Ubachs, Phys. Rev. A 86, 022510
Interacting universes and the cosmological constant
International Nuclear Information System (INIS)
Alonso-Serrano, A.; Bastos, C.; Bertolami, O.; Robles-Pérez, S.
2013-01-01
In this Letter we study the effects that an interaction scheme among universes can have on the values of their cosmological constants. In the case of two interacting universes, the value of the cosmological constant of one of the universes becomes very close to zero at the expense of an increasing value of the cosmological constant of the partner universe. In the more general case of a chain of N interacting universes with periodic boundary conditions, the spectrum of the Hamiltonian splits into a large number of levels, each of them associated with a particular value of the cosmological constant, that can be occupied by single universes, revealing a collective behavior which plainly shows that the multiverse is much more than the mere sum of its parts
Interacting universes and the cosmological constant
Energy Technology Data Exchange (ETDEWEB)
Alonso-Serrano, A. [Centro de Física “Miguel Catalán”, Instituto de Física Fundamental, Consejo Superior de Investigaciones Científicas, Serrano 121, 28006 Madrid (Spain); Estación Ecológica de Biocosmología, Pedro de Alvarado 14, 06411 Medellín (Spain); Bastos, C. [Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Avenida Rovisco Pais 1, 1049-001 Lisboa (Portugal); Bertolami, O. [Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Avenida Rovisco Pais 1, 1049-001 Lisboa (Portugal); Departamento de Física e Astronomia, Faculdade de Ciências da Universidade do Porto, Rua do Campo Alegre 687, 4169-007 Porto (Portugal); Robles-Pérez, S., E-mail: salvarp@imaff.cfmac.csic.es [Centro de Física “Miguel Catalán”, Instituto de Física Fundamental, Consejo Superior de Investigaciones Científicas, Serrano 121, 28006 Madrid (Spain); Estación Ecológica de Biocosmología, Pedro de Alvarado 14, 06411 Medellín (Spain); Física Teórica, Universidad del País Vasco, Apartado 644, 48080 Bilbao (Spain)
2013-02-12
In this Letter we study the effects that an interaction scheme among universes can have on the values of their cosmological constants. In the case of two interacting universes, the value of the cosmological constant of one of the universes becomes very close to zero at the expense of an increasing value of the cosmological constant of the partner universe. In the more general case of a chain of N interacting universes with periodic boundary conditions, the spectrum of the Hamiltonian splits into a large number of levels, each of them associated with a particular value of the cosmological constant, that can be occupied by single universes, revealing a collective behavior which plainly shows that the multiverse is much more than the mere sum of its parts.
Interactive exploration of large-scale time-varying data using dynamic tracking graphs
Widanagamaachchi, W.; Christensen, C.; Bremer, P.-T; Pascucci, Valerio
2012-01-01
that use one spatial dimension to indicate time and show the "tracks" of each feature as it evolves, merges or disappears. However, for practical data sets creating the corresponding optimal graph layouts that minimize the number of intersections can take
The "Flight Chamber": A fast, large area, zero-time detector
International Nuclear Information System (INIS)
Trautner, N.
1976-01-01
A new, fast, zero-time detector with an active area of 20 cm² has been constructed. Secondary electrons from a thin self-supporting foil are accelerated onto a scintillator. The intrinsic time resolution (fwhm) was 0.85 ns for 5.5 MeV α-particles and 0.42 ns for 17 MeV ¹⁶O ions, at efficiencies of 97.5% and 99.6%, respectively. (author)
Garbage-free reversible constant multipliers for arbitrary integers
DEFF Research Database (Denmark)
Mogensen, Torben Ægidius
2013-01-01
We present a method for constructing reversible circuitry for multiplying integers by arbitrary integer constants. The method is based on Mealy machines and gives circuits whose size is (in the worst case) linear in the size of the constant. This makes the method unsuitable for large constants...
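The underlying idea can be sketched non-reversibly: multiplication by a constant is a Mealy machine that streams the input bits LSB-first and keeps the running carry as its state. The Mealy-machine framing comes from the abstract; this plain Python simulation and its state encoding are illustrative and do not capture the reversibility aspect:

```python
def multiply_by_constant(n, c):
    """Multiply n by the constant c by streaming n's bits LSB-first
    through a Mealy machine whose state is the current carry.

    On input bit b the machine computes t = c*b + carry, emits the low
    bit of t, and moves to state carry = t >> 1 (so 0 <= carry < c).
    """
    carry = 0
    out_bits = []
    while n:
        t = c * (n & 1) + carry
        out_bits.append(t & 1)   # output bit of the product
        carry = t >> 1           # next machine state
        n >>= 1
    while carry:                 # flush the remaining carry bits
        out_bits.append(carry & 1)
        carry >>= 1
    return sum(b << i for i, b in enumerate(out_bits))
```

Because the state is the carry, the number of machine states grows with the constant, which echoes why the construction becomes impractical for large constants.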
Association constants of telluronium salts
International Nuclear Information System (INIS)
Kovach, N.A.; Rivkin, B.B.; Sadekov, T.D.; Shvajka, O.P.
1996-01-01
The association constants in acetonitrile of triphenyl telluronium salts, which are dilute electrolytes, are determined by conductometry. A satisfactory correlation of the interionic association constants and the threshold molar electroconductivity with the Litvinenko-Popov constants for the departing groups is identified. 6 refs
Anisotropic constant-roll inflation
Energy Technology Data Exchange (ETDEWEB)
Ito, Asuka; Soda, Jiro [Kobe University, Department of Physics, Kobe (Japan)
2018-01-15
We study constant-roll inflation in the presence of a gauge field coupled to an inflaton. By imposing the constant anisotropy condition, we find new exact anisotropic constant-roll inflationary solutions which include anisotropic power-law inflation as a special case. We also numerically show that the new anisotropic solutions are attractors in the phase space. (orig.)
Seeber, P A; Franz, M; Dehnhard, M; Ganswindt, A; Greenwood, A D; East, M L
2018-04-20
Adverse environmental stimuli (stressors) activate the hypothalamic-pituitary-adrenal axis and contribute to allostatic load. This study investigates the contribution of environmental stressors and life history stage to allostatic load in a migratory population of plains zebras (Equus quagga) in the Serengeti ecosystem, Tanzania, which experiences large local variations in aggregation. We expected a higher fGCM response to the environmental stressors of feeding competition, predation pressure and unpredictable social relationships in larger than in smaller aggregations, and in animals at energetically costly life history stages. As the study was conducted during the 2016 El Niño, we did not expect forage quality or a lack of water to strongly affect fGCM responses in the dry season. We measured fecal glucocorticoid metabolite (fGCM) concentrations using an enzyme immunoassay (EIA) targeting 11β-hydroxyetiocholanolone and validated its reliability in captive plains zebras. Our results revealed significantly higher fGCM concentrations (1) in large aggregations than in smaller groupings, and (2) in band stallions than in bachelor males. Concentrations of fGCM were not significantly higher in females at the energetically costly life stage of late pregnancy/lactation. The higher allostatic load of stallions associated with females, compared with bachelor males, is likely caused by social stressors. In conclusion, migratory zebras have elevated allostatic loads in large aggregations that probably result from their combined responses to increased feeding competition, predation pressure and various social stressors. Further research is required to disentangle the contribution of these stressors to allostatic load in migratory populations.
A hybrid adaptive large neighborhood search heuristic for lot-sizing with setup times
DEFF Research Database (Denmark)
Muller, Laurent Flindt; Spoorendonk, Simon; Pisinger, David
2012-01-01
This paper presents a hybrid of a general heuristic framework and a general purpose mixed-integer programming (MIP) solver. The framework is based on local search and an adaptive procedure which chooses between a set of large neighborhoods to be searched. A mixed integer programming solver and its......, and the upper bounds found by the commercial MIP solver ILOG CPLEX using state-of-the-art MIP formulations. Furthermore, we improve the best known solutions on 60 out of 100 and improve the lower bound on all 100 instances from the literature...
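The adaptive neighborhood-selection step that an ALNS framework performs at each iteration can be sketched generically. This is textbook ALNS machinery (roulette-wheel selection plus exponentially smoothed weights), not the paper's specific neighborhoods or scoring scheme:

```python
import random

def select_neighborhood(weights, rng=random.Random(42)):
    """Roulette-wheel selection: pick a neighborhood with probability
    proportional to its current adaptive weight."""
    r = rng.uniform(0, sum(weights.values()))
    acc = 0.0
    for name, weight in weights.items():
        acc += weight
        if r <= acc:
            return name
    return name  # guard against floating-point round-off

def update_weight(weights, name, score, decay=0.8):
    """Blend the past weight with the neighborhood's recent score, so
    neighborhoods that keep finding improving solutions are chosen more often."""
    weights[name] = decay * weights[name] + (1 - decay) * score

# Hypothetical neighborhoods; real ALNS would score them by solution improvement.
weights = {"destroy_random": 1.0, "destroy_worst": 1.0}
chosen = select_neighborhood(weights)
update_weight(weights, chosen, score=5.0)
```

Combined with a MIP solver for repairing/exploring each large neighborhood, this loop is what lets the hybrid adapt its search effort to the neighborhoods that pay off.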
Modelling and Formal Verification of Timing Aspects in Large PLC Programs
Fernandez Adiego, B; Blanco Vinuela, E; Tournier, J-C; Gonzalez Suarez, V M; Blech, J O
2014-01-01
One of the main obstacles that prevents model checking from being widely used in industrial control systems is the complexity of building formal models out of PLC programs, especially when timing aspects need to be integrated. This paper addresses this obstacle by proposing a methodology to model and verify timing aspects of PLC programs. Two approaches are proposed to allow users to balance the trade-off between the complexity of the model, i.e. its number of states, and the set of specifications that can be verified. A tool supporting the methodology, which produces models for different model checkers directly from PLC programs, has been developed. Verification of timing aspects for real-life PLC programs is presented in this paper using NuSMV.
Constant Proportion Portfolio Insurance
DEFF Research Database (Denmark)
Jessen, Cathrine
2014-01-01
on the theme, originally proposed by Fischer Black. In CPPI, a financial institution guarantees a floor value for the “insured” portfolio and adjusts the stock/bond mix to produce a leveraged exposure to the risky assets, which depends on how far the portfolio value is above the floor. Plain-vanilla portfolio...... insurance largely died with the crash of 1987, but CPPI is still going strong. In the frictionless markets of finance theory, the issuer’s strategy to hedge its liability under the contract is clear, but in the real world with transactions costs and stochastic jump risk, the optimal strategy is less obvious...
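The core CPPI rebalancing rule described above fits in a few lines. The cap at the full portfolio value (i.e. no leverage) is a simplification for illustration; leveraged CPPI variants relax it:

```python
def cppi_allocation(portfolio_value, floor, multiplier):
    """CPPI rule: risky exposure = multiplier * cushion, where the cushion
    is the portfolio value above the guaranteed floor. Capping the risky
    position at the full portfolio value is a simplifying assumption."""
    cushion = max(portfolio_value - floor, 0.0)
    risky = min(multiplier * cushion, portfolio_value)
    return risky, portfolio_value - risky

# A 100 portfolio with an 80 floor and multiplier 4: 4 * 20 = 80 in risky assets.
risky, safe = cppi_allocation(100.0, 80.0, 4.0)
```

As the portfolio value falls toward the floor the cushion shrinks, so the rule automatically de-risks; at or below the floor everything sits in the safe asset.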
Quasi real-time estimation of the moment magnitude of large earthquake from static strain changes
Itaba, S.
2016-12-01
The 2011 Tohoku-Oki (off the Pacific coast of Tohoku) earthquake, of moment magnitude 9.0, was accompanied by large static strain changes (10⁻⁷), as measured by borehole strainmeters operated by the Geological Survey of Japan in the Tokai, Kii Peninsula, and Shikoku regions. A fault model for the earthquake on the boundary between the Pacific and North American plates, based on these borehole strainmeter data, yielded a moment magnitude of 8.7. In contrast, the prompt magnitude report that the Japan Meteorological Agency (JMA) announced just after the earthquake, based on seismic waves, was 7.9. Such geodetic moment magnitudes, derived from static strain changes, can be estimated almost as rapidly as determinations using seismic waves. The validity of this method must still be verified in further cases. For this earthquake's largest aftershock, which occurred 29 minutes after the mainshock, the prompt report issued by JMA assigned a magnitude of 7.3, whereas the moment magnitude derived from borehole strain data is 7.6, which is much closer to the actual moment magnitude of 7.7. Several methods are now being proposed to determine the magnitude of a great earthquake earlier and thereby reduce earthquake disasters, including tsunami. Our simple method using static strain changes is a strong candidate for rapid estimation of the magnitude of large earthquakes, and is useful for improving the accuracy of Earthquake Early Warning.
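Once a fault model fitted to the static strain data yields a seismic moment M0, the moment magnitude follows directly from the standard Hanks-Kanamori relation. The relation and the example moment value are standard seismology, not quoted from this abstract:

```python
import math

def moment_magnitude(m0_newton_metres):
    """Hanks-Kanamori moment magnitude from the seismic moment M0 in N*m:
    Mw = (2/3) * (log10(M0) - 9.1)."""
    return (2.0 / 3.0) * (math.log10(m0_newton_metres) - 9.1)

# The 2011 Tohoku-Oki seismic moment of roughly 3.9e22 N*m gives Mw close to 9.0.
mw = moment_magnitude(3.9e22)
```

Because this final arithmetic is trivial, the speed of the geodetic estimate is limited only by how fast the fault model can be fitted to the strainmeter data.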
International Nuclear Information System (INIS)
Liu, H.-L.; Chen, Y.-Y.; Yen, J.-Y.; Lin, W.-L.
2003-01-01
To generate large thermal lesions in ultrasound thermal therapy, cooling intermissions are usually introduced during the treatment to prevent near-field heating, which leads to a long treatment time. A possible strategy to shorten the total treatment time is to eliminate the cooling intermissions. In this study, two methods for reducing power accumulation in the near field, power optimization and acoustic window enlargement, are combined to investigate the feasibility of continuously heating a large target region (maximally 3.2 x 3.2 x 3.2 cm³). A multiple-1D ultrasound phased-array system generates the foci to scan the target region. Simulations show that the target region can be successfully heated without cooling and no near-field heating occurs. Moreover, because there is no cooling time during the heating sessions, the total treatment time is significantly reduced to only several minutes, compared to the several hours previously required
Time-scale effects in the interaction between a large and a small herbivore
Kuijper, D. P. J.; Beek, P.; van Wieren, S.E.; Bakker, J. P.
2008-01-01
In the short term, grazing will mainly affect plant biomass and forage quality. However, grazing can affect plant species composition by accelerating or retarding succession at longer time-scales. Few studies concerning interactions among herbivores have taken the change in plant species composition
Eulerian short-time statistics of turbulent flow at large Reynolds number
Brouwers, J.J.H.
2004-01-01
An asymptotic analysis is presented of the short-time behavior of second-order temporal velocity structure functions and Eulerian acceleration correlations in a frame that moves with the local mean velocity of the turbulent flow field. Expressions in closed-form are derived which cover the viscous
Response time distributions in rapid chess: A large-scale decision making experiment
Directory of Open Access Journals (Sweden)
Mariano Sigman
2010-10-01
Full Text Available Rapid chess provides an unparalleled laboratory to understand decision making in a natural environment. In a chess game, players choose consecutively around 40 moves in a finite time budget. The goodness of each choice can be determined quantitatively since current chess algorithms estimate precisely the value of a position. Web-based chess produces vast amounts of data, millions of decisions per day, incommensurable with traditional psychological experiments. We generated a database of response times and position values in rapid chess games. We measured robust emergent statistical observables: (1) response time (RT) distributions are long-tailed and show qualitatively distinct forms at different stages of the game; (2) RTs of successive moves are highly correlated both for intra- and inter-player moves. These findings have theoretical implications since they deny two basic assumptions of sequential decision-making algorithms: RTs are not stationary and cannot be generated by a state function. Our results also have practical implications. First, we characterized the capacity of blunders and score fluctuations to predict a player's strength, which is still an open problem in chess software. Second, we show that the winning likelihood can be reliably estimated from a weighted combination of remaining times and position evaluation.
A fast large-area position-sensitive time-of-flight neutron detection system
International Nuclear Information System (INIS)
Crawford, R.K.; Haumann, J.R.
1989-01-01
A new position-sensitive time-of-flight neutron detection and histogramming system has been developed for use at the Intense Pulsed Neutron Source. Spatial resolution of roughly 1 cm x 1 cm and time-of-flight resolution of ∼1 μs are combined in a detection system that can ultimately be expanded to cover several square meters of active detector area. This system is based on arrays of cylindrical one-dimensional position-sensitive proportional counters, and is capable of collecting the x-y-t data and sorting them into histograms at time-averaged data rates up to ∼300,000 events/sec over the full detector area, with instantaneous data rates up to more than fifty times that. Numerous hardware features have been incorporated to facilitate initial tuning of the position encoding, absolute calibration of the encoded positions, and automatic testing for drifts. 7 refs., 11 figs., 1 tab.
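The x-y-t sorting step amounts to binning each position-and-time-stamped event into a 3D histogram. The event format and bin widths below are illustrative, chosen only to echo the quoted ~1 cm and ~1 μs resolutions; the real system does this in hardware at very high rates:

```python
from collections import defaultdict

def histogram_events(events, x_bin=1.0, y_bin=1.0, t_bin=1.0):
    """Sort (x, y, t) neutron events into an x-y-t histogram.

    x/y in cm, t in microseconds; each event increments the count of the
    bin containing it (sparse storage via a dict keyed by bin indices).
    """
    hist = defaultdict(int)
    for x, y, t in events:
        key = (int(x // x_bin), int(y // y_bin), int(t // t_bin))
        hist[key] += 1
    return dict(hist)

# Two events fall in the same 1 cm x 1 cm x 1 us bin, one in another.
hist = histogram_events([(0.2, 0.7, 3.4), (0.9, 0.1, 3.9), (1.5, 0.3, 7.2)])
```

Sparse dictionary storage mirrors the practical situation: most of the x-y-t volume is empty, so only occupied bins need memory.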
Citizen journalism in a time of crisis: lessons from a large-scale California wildfire
S. Gillette; J. Taylor; D.J. Chavez; R. Hodgson; J. Downing
2007-01-01
The accessibility of news production tools through consumer communication technology has made it possible for media consumers to become media producers. The evolution of media consumer to media producer has important implications for the shape of public discourse during a time of crisis. Citizen journalists cover crisis events using camera cell phones and digital...
Madison, G.; Mosing, M.A.; Verweij, K.J.H.; Pedersen, N.L.; Ullén, F.
2016-01-01
Intelligence and cognitive ability have long been associated with chronometric performance measures, such as reaction time (RT), but few studies have investigated auditory RT in this context. The nature of this relationship is important for understanding the etiology and structure of intelligence.
Near real-time large scale (sensor) data provisioning for PLF
Vonder, M.R.; Waaij, B.D. van der; Harmsma, E.J.; Donker, G.
2015-01-01
Think big, start small. With that thought in mind, Smart Dairy Farming (SDF) developed a platform to make real-time sensor data from different farms available, for model developers to support dairy farmers in Precision Livestock Farming. The data has been made available via a standard interface on
The fine-structure constant before quantum mechanics
International Nuclear Information System (INIS)
Kragh, Helge
2003-01-01
This paper focuses on the early history of the fine-structure constant, largely the period until 1925. Contrary to what is generally assumed, speculations concerning the interdependence of the elementary electric charge and Planck's constant predated Arnold Sommerfeld's 1916 discussion of the dimensionless constant. This paper pays particular attention to a little known work from 1914 in which G N Lewis and E Q Adams derived what is effectively a numerical expression for the fine-structure constant
Cagnetti, Filippo; Gomes, Diogo A.; Mitake, Hiroyoshi; Tran, Hung V.
2015-01-01
We investigate large-time asymptotics for viscous Hamilton-Jacobi equations with possibly degenerate diffusion terms. We establish new results on the convergence, which are the first general ones concerning equations which are neither uniformly parabolic nor first order. Our method is based on the nonlinear adjoint method and the derivation of new estimates on long time averaging effects. It also extends to the case of weakly coupled systems.
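A representative model problem in this class can be written down for orientation (a sketch only; the paper's precise hypotheses on the diffusion matrix are more general, covering degenerate, non-uniformly-parabolic cases):

```latex
u_t + H(x, Du) = \operatorname{tr}\!\big(A(x)\, D^2 u\big)
\quad \text{in } \mathbb{T}^n \times (0,\infty), \qquad u(\cdot,0) = u_0,
```

where large-time asymptotics means $u(x,t) + ct - v(x) \to 0$ as $t \to \infty$, with $c$ the ergodic constant and $v$ a solution of the stationary problem $H(x, Dv) - \operatorname{tr}\!\big(A(x) D^2 v\big) = c$.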
Kim, Minjin; Kim, Gi-Hwan; Oh, Kyoung Suk; Jo, Yimhyun; Yoon, Hyun; Kim, Ka-Hyun; Lee, Heon; Kim, Jin Young; Kim, Dong Suk
2017-06-27
Organic-inorganic hybrid metal halide perovskite solar cells (PSCs) are attracting tremendous research interest due to their high solar-to-electric power conversion efficiency with a high possibility of cost-effective fabrication and certified power conversion efficiency now exceeding 22%. Although many effective methods for their application have been developed over the past decade, their practical transition to large-size devices has been restricted by difficulties in achieving high performance. Here we report on the development of a simple and cost-effective production method with high-temperature and short-time annealing processing to obtain uniform, smooth, and large-size grain domains of perovskite films over large areas. With high-temperature short-time annealing at 400 °C for 4 s, a perovskite film with an average domain size of 1 μm was obtained, owing to fast solvent evaporation. Solar cells fabricated using this processing technique had a maximum power conversion efficiency exceeding 20% over a 0.1 cm² active area and 18% over a 1 cm² active area. We believe our approach will enable the realization of highly efficient large-area PSCs for practical development with a very simple and short-time procedure. This simple method should lead the field toward the fabrication of uniform large-scale perovskite films, which are necessary for the production of high-efficiency solar cells that may also be applicable to several other material systems for more widespread practical deployment.
A large scale flexible real-time communications topology for the LHC accelerator
Lauckner, R J; Ribeiro, P; Wijnands, Thijs
1999-01-01
The LHC design parameters impose very stringent beam control requirements in order to reach the nominal performance. Prompted by the lack of accurate models to predict field behaviour in superconducting magnet systems, the control system of the accelerator will provide flexible feedback channels between monitors and magnets around the 27 km circumference machine. The implementation of feedback systems composed of a large number of sparsely located elements presents some interesting challenges. Our goal was to find a topology where the control loop requirements: number and distribution of nodes, latency and throughput, could be guaranteed without compromising flexibility. Our proposal is to federate a number of well known technologies and concepts, namely ATM, WorldFIP and RTOS, into a general framework. (6 refs).
Time-gated ballistic imaging using a large aperture switching beam.
Mathieu, Florian; Reddemann, Manuel A; Palmer, Johannes; Kneer, Reinhold
2014-03-24
Ballistic imaging commonly denotes the formation of line-of-sight shadowgraphs through turbid media by suppression of multiply scattered photons. The technique relies on a femtosecond laser acting as light source for the images and as switch for an optical Kerr gate that separates ballistic photons from multiply scattered ones. The achievable image resolution is one major limitation for the investigation of small objects. In this study, practical influences on the optical Kerr gate and image quality are discussed theoretically and experimentally, applying a switching beam with large aperture (D = 19 mm). It is shown how switching pulse energy and synchronization of switching and imaging pulses in the Kerr cell influence the gate's transmission. Image quality of ballistic imaging and standard shadowgraphy is evaluated and compared, showing that the present ballistic imaging setup is advantageous for optical densities in the range of 8 and above, and can be extended into a schlieren-type system with an optical schlieren edge.
Interstitial laser photocoagulation for benign thyroid nodules: time to treat large nodules.
Amabile, Gerardo; Rotondi, Mario; Pirali, Barbara; Dionisio, Rosa; Agozzino, Lucio; Lanza, Michele; Buonanno, Luciano; Di Filippo, Bruno; Fonte, Rodolfo; Chiovato, Luca
2011-09-01
Interstitial laser photocoagulation (ILP) is a new therapeutic option for the ablation of non-functioning and hyper-functioning benign thyroid nodules. Amelioration of the ablation procedure currently allows treating large nodules. The aim of this study was to evaluate the therapeutic efficacy of ILP, performed according to a modified protocol of ablation, in patients with large functioning and non-functioning thyroid nodules, and to identify the best parameters for predicting a successful outcome in hyperthyroid patients. Fifty-one patients with non-functioning thyroid nodules (group 1) and 26 patients with hyperfunctioning thyroid nodules (group 2) were enrolled. All patients had a nodular volume ≥40 ml. Patients underwent 1-3 cycles of ILP. A cycle consisted of three ILP sessions, each lasting 5-10 minutes, repeated at an interval of 1 month. After each cycle of ILP patients underwent thyroid evaluation. A significant nodule volume reduction, expressed as a percentage of the basal volume, occurred in both groups (F = 190.4; P < 0.001). Predictors of outcome included basal nodule volume and the total amount of energy delivered, expressed in Joules. ROC curves identified the percentage of volume reduction as the best parameter predicting a normalized serum TSH (area under the curve 0.962; P < 0.001). ILP proved effective for large thyroid nodules, both in terms of nodule size reduction and cure of hyperthyroidism (87% of cured patients after the last ILP cycle). ILP should not be limited to patients refusing or being ineligible for surgery and/or radioiodine.
Evolution of the solar 'constant'
Energy Technology Data Exchange (ETDEWEB)
Newman, M J
1980-06-01
Variations in solar luminosity over geological time are discussed in light of the effect of the solar constant on the evolution of life on earth. Consideration is given to long-term (5 - 7% in a billion years) increases in luminosity due to the conversion of hydrogen into helium in the solar interior, temporary enhancements to solar luminosity due to the accretion of matter from the interstellar medium at intervals on the order of 100 million years, and small-amplitude rapid fluctuations of luminosity due to the stochastic nature of convection on the solar surface. It is noted that encounters with dense interstellar clouds could have had serious consequences for life on earth due to the peaking of the accretion-induced luminosity variation at short wavelengths.
International Nuclear Information System (INIS)
Pokor, C.; Massoud, J.P.; Wintergerst, M.; Toivonen, A.; Ehrnsten, U.; Karlsen, W.
2011-01-01
The structures of Reactor Pressure Vessel Internals are subjected to an intense neutron flux. Under these operating conditions, the microstructure and the mechanical properties of the austenitic stainless steel components change. In addition, these components are subjected to stresses of either manufacturing origin or generated under operation. Cases of baffle bolt cracking have occurred in CP0 Nuclear Power Plant units. The mechanism of degradation of these bolts is Irradiation-Assisted Stress Corrosion Cracking (IASCC). In order to obtain a better understanding of this mechanism and its principal parameters of influence, a set of stress corrosion tests (mainly constant load tests) was launched within the framework of the EDF project 'PWR Internals' using materials from a Chooz A baffle corner (SA 304). These tests aim to quantify the influence on IASCC of the applied stress, temperature and environment (primary water, higher lithium concentration, inert environment) for an irradiation dose close to 30 dpa. A curve showing time to failure as a function of stress was determined. The shape of this curve is consistent with the few data available in the literature. A stress threshold of about 50% of the yield strength at the test temperature has been determined, below which cracking in that environment seems impossible. After irradiation this material is sensitive to intergranular fracture in a primary environment, but also in an inert environment (argon) at 340 °C. The tests also showed a negative effect of increased lithium concentration on the time to failure and on the stress threshold. (authors)
Chao, Calvin Yi-Ping; Tu, Honyih; Wu, Thomas Meng-Hsiu; Chou, Kuo-Yu; Yeh, Shang-Fu; Yin, Chin; Lee, Chih-Lin
2017-11-23
A study of the random telegraph noise (RTN) of a 1.1 μm pitch, 8.3 Mpixel CMOS image sensor (CIS) fabricated in a 45 nm backside-illumination (BSI) technology is presented in this paper. A noise decomposition scheme is used to pinpoint the noise source. The long tail of the random noise (RN) distribution is directly linked to the RTN from the pixel source follower (SF). The full 8.3 Mpixels are classified into four categories according to the observed RTN histogram peaks. A theoretical formula describing the RTN as a function of the time difference between the two phases of the correlated double sampling (CDS) is derived and validated by measured data. An on-chip time constant extraction method is developed and applied to the RTN analysis. The effects of readout circuit bandwidth on the settling ratios of the RTN histograms are investigated and successfully accounted for in a simulation using an RTN behavior model.
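The dependence of the RTN contribution on the CDS time difference can be illustrated with a minimal two-state telegraph simulation. This is a generic sketch: the dwell times, amplitude, and sampling interval below are hypothetical placeholders, not parameters of the sensor in the study.

```python
import random

def simulate_rtn(n, dt, tau_hi, tau_lo, amp, seed=0):
    """Two-state random telegraph signal with (approximately) exponential
    dwell times, sampled every dt seconds."""
    rng = random.Random(seed)
    state, out = 0, []
    for _ in range(n):
        tau = tau_hi if state else tau_lo
        if rng.random() < dt / tau:  # toggle probability per sample step
            state ^= 1
        out.append(amp * state)
    return out

def cds_variance(signal, lag):
    """Variance of the CDS output s[i+lag] - s[i]; the RTN contribution
    grows with the time difference between the two CDS phases."""
    diffs = [signal[i + lag] - signal[i] for i in range(len(signal) - lag)]
    m = sum(diffs) / len(diffs)
    return sum((d - m) ** 2 for d in diffs) / len(diffs)

sig = simulate_rtn(n=200_000, dt=1e-6, tau_hi=1e-4, tau_lo=1e-4, amp=1.0)
short = cds_variance(sig, lag=1)    # CDS phases 1 us apart
long_ = cds_variance(sig, lag=500)  # CDS phases 500 us apart
```

Because the RTN autocorrelation decays over the trap time constants, the variance of the CDS difference rises as the phase separation grows, which is the qualitative content of the formula derived in the paper.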
Giesler, Reiner; Clemmensen, Karina E; Wardle, David A; Klaminder, Jonatan; Bindler, Richard
2017-03-07
Alterations in fire activity due to climate change and fire suppression may have profound effects on the balance between storage and release of carbon (C) and associated volatile elements. Stored soil mercury (Hg) is known to volatilize during wildfires, and this could substantially affect the land-air exchange of Hg; conversely, the absence of fires and human disturbance may increase the time period over which Hg is sequestered. Here we show, for a wildfire chronosequence spanning more than 5000 years in boreal forest in northern Sweden, that belowground inventories of total Hg are strongly related to soil humus C accumulation (R² = 0.94, p millennial time scales in the prolonged absence of fire.
International Nuclear Information System (INIS)
Yong-Jun, Wang; Xiang-Jun, Xin; Xiao-Lei, Zhang; Chong-Qing, Wu; Kuang-Lu, Yu
2010-01-01
Optical buffers are critical for optical signal processing in future optical packet-switched networks. In this paper, a theoretical study and an experimental demonstration of a new optical buffer with a large dynamical delay time are carried out, based on cascaded double-loop optical buffers (DLOBs). It is found that pulse distortion can be restrained by a negative optical control mode while the optical packet is in the loop. Noise analysis indicates that it is feasible to realise a large variable delay range with cascaded DLOBs. These conclusions are validated by an experimental system with 4-stage cascaded DLOBs. Both the theoretical simulations and the experimental results indicate that a large delay range of 1-9999 times the basic delay unit and a fine granularity of 25 ns can be achieved by the cascaded DLOBs. The performance of the cascaded DLOBs is suitable for all-optical networks. (classical areas of phenomenology)
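One plausible reading of the quoted 1-9999 delay range with four cascaded stages is that each stage selects one decimal digit of the delay multiplier. That reading is an assumption made here for illustration, not a circuit description taken from the paper:

```python
BASIC_DELAY_NS = 25  # basic delay unit: the fine granularity reported above

def cascaded_delay_ns(digits):
    """Total delay when each of 4 cascaded DLOB stages contributes one
    decimal digit of the delay multiplier (assumed interpretation).
    digits are ordered most-significant first."""
    assert len(digits) == 4 and all(0 <= d <= 9 for d in digits)
    multiplier = sum(d * 10 ** i for i, d in enumerate(reversed(digits)))
    return multiplier * BASIC_DELAY_NS

d_min = cascaded_delay_ns((0, 0, 0, 1))  # 1 x 25 ns
d_max = cascaded_delay_ns((9, 9, 9, 9))  # 9999 x 25 ns
```

Under this assumption the buffer spans 25 ns to just under 250 us in 25 ns steps, matching the range and granularity quoted in the abstract.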
Response time distributions in rapid chess: a large-scale decision making experiment.
Sigman, Mariano; Etchemendy, Pablo; Slezak, Diego Fernández; Cecchi, Guillermo A
2010-01-01
Rapid chess provides an unparalleled laboratory for understanding decision making in a natural environment. In a chess game, players make around 40 consecutive choices within a finite time budget. The goodness of each choice can be determined quantitatively, since current chess algorithms estimate the value of a position precisely. Web-based chess produces vast amounts of data, millions of decisions per day, far beyond the scale of traditional psychological experiments. We generated a database of response times (RTs) and position values in rapid chess games. We measured robust emergent statistical observables: (1) RT distributions are long-tailed and show qualitatively distinct forms at different stages of the game; (2) RTs of successive moves are highly correlated, both for intra- and inter-player moves. These findings have theoretical implications, since they deny two basic assumptions of sequential decision-making algorithms: RTs are not stationary and cannot be generated by a state function. Our results also have practical implications. First, we characterized the capacity of blunders and score fluctuations to predict a player's strength, which is still an open problem in chess software. Second, we show that the winning likelihood can be reliably estimated from a weighted combination of remaining times and position evaluation.
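The reported correlation between successive response times can be checked on any move-time sequence with a plain lag-1 Pearson correlation. The sketch below uses a synthetic AR(1) series as a stand-in for real game data:

```python
import random

def lag1_corr(x):
    """Pearson correlation between consecutive elements of a sequence."""
    a, b = x[:-1], x[1:]
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    va = sum((u - ma) ** 2 for u in a)
    vb = sum((v - mb) ** 2 for v in b)
    return cov / (va * vb) ** 0.5

# Synthetic "response times": an AR(1) process with coefficient 0.7,
# so successive values are correlated by construction.
rng = random.Random(1)
rts = [1.0]
for _ in range(5000):
    rts.append(0.7 * rts[-1] + rng.random())

r = lag1_corr(rts)
```

On a stationary AR(1) series the lag-1 correlation estimates the autoregressive coefficient; applied to real move times, a value well above zero is the signature of the inter-move correlation the abstract describes.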
On large-time energy concentration in solutions to the Navier-Stokes equations in general domains
Czech Academy of Sciences Publication Activity Database
Skalák, Zdeněk
2011-01-01
Roč. 91, č. 9 (2011), s. 724-732 ISSN 0044-2267 R&D Projects: GA AV ČR IAA100190905 Institutional research plan: CEZ:AV0Z20600510 Keywords : Navier-Stokes equations * large-time behavior * energy concentration Subject RIV: BA - General Mathematics Impact factor: 0.863, year: 2011
On the determination of the Hubble constant
International Nuclear Information System (INIS)
Gurzadyan, V.G.; Harutyunyan, V.V.; Kocharyan, A.A.
1990-10-01
The possibility of an alternative determination of the distance scale of the Universe and the Hubble constant, based on numerical analysis of the hierarchical nature of the large-scale Universe (galaxies, clusters and superclusters), is proposed. The results of computer experiments performed by means of special numerical algorithms are presented. (author). 9 refs, 7 figs
Association between time perspective and organic food consumption in a large sample of adults.
Bénard, Marc; Baudry, Julia; Méjean, Caroline; Lairon, Denis; Giudici, Kelly Virecoulon; Etilé, Fabrice; Reach, Gérard; Hercberg, Serge; Kesse-Guyot, Emmanuelle; Péneau, Sandrine
2018-01-05
Organic food intake has risen in many countries during the past decades. Even though the motivations associated with this choice have been studied, the psychological traits preceding these motivations have rarely been explored. Consideration of future consequences (CFC) represents the extent to which individuals consider future versus immediate consequences of their current behaviors. Consequently, a future-oriented personality may be an important characteristic of organic food consumers. The objective was to analyze the association between CFC and organic food consumption in a large sample of the adult general population. In 2014, a sample of 27,634 participants from the NutriNet-Santé cohort study completed the CFC questionnaire and an Organic-Food Frequency questionnaire. For each food group (17 groups), non-organic food consumers were compared to organic food consumers across quartiles of the CFC using multiple logistic regressions. Moreover, adjusted means of proportions of organic food intakes out of total food intakes were compared between quartiles of the CFC. Analyses were adjusted for socio-demographic, lifestyle and dietary characteristics. Participants with higher CFC were more likely to consume organic food (OR quartile 4 (Q4) vs. Q1 = 1.88, 95% CI: 1.62, 2.20). Overall, future-oriented participants were more likely to consume 14 food groups. The strongest associations were observed for starchy refined foods (OR = 1.78, 95% CI: 1.63, 1.94), and fruits and vegetables (OR = 1.74, 95% CI: 1.58, 1.92). The contribution of organic food intake out of total food intake was 33% higher in Q4 compared to Q1. More precisely, the contribution of organic food consumed was higher in Q4 for 16 food groups. The highest relative differences between Q4 and Q1 were observed for starchy refined foods (22%) and non-alcoholic beverages (21%). Seafood was the only food group without a significant difference. This study provides information on the personality of
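The odds ratios quoted above come from adjusted logistic regressions; the unadjusted version of the same quantity follows directly from a 2x2 table of counts, with a Wald confidence interval on the log scale. The counts below are invented purely to show the arithmetic:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio for a 2x2 table:
        exposed (e.g. CFC Q4):   a consumers, b non-consumers
        unexposed (e.g. CFC Q1): c consumers, d non-consumers
    with a Wald 95% confidence interval computed on the log scale."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts of organic consumers in CFC Q4 vs Q1.
or_, lo, hi = odds_ratio_ci(a=900, b=600, c=500, d=700)
```

The published ORs additionally adjust for socio-demographic, lifestyle and dietary covariates, which this unadjusted formula does not capture.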
Intermittent flow under constant forcing: Acoustic emission from creep avalanches
Salje, Ekhard K. H.; Liu, Hanlong; Jin, Linsen; Jiang, Deyi; Xiao, Yang; Jiang, Xiang
2018-01-01
While avalanches in field-driven ferroic systems (e.g., Barkhausen noise), domain switching of martensitic nanostructures, and the collapse of porous materials are well documented, creep avalanches (avalanches under constant forcing) had never been observed. Collapse avalanches generate particularly large acoustic emission (AE) signals and were hence chosen to investigate crackling noise under creep conditions. SiO2 has a strong piezoelectric response even at the nanoscale, so we chose weakly bound SiO2 spheres in natural sandstone as a representative system for the study of avalanches under time-independent, constant force. We found highly non-stationary crackling noise with four activity periods, each with power-law-distributed AE events. Only the period before the final collapse shows mean-field behavior (ɛ near 1.39), in agreement with previous dynamic measurements at a constant stress rate. All earlier event periods show collapse with larger exponents (ɛ = 1.65). The waiting-time exponents are classic, with τ near 2.2 and 1.32. Creep data generate power-law mixing with "effective" exponents for the full dataset, combining mean-field and non-mean-field regimes. We find close agreement with the predicted time-dependent fiber-bundle simulations, including event and waiting-time distributions. Båth's law holds under creep conditions.
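Avalanche-size exponents like the ɛ values quoted above are commonly estimated with the maximum-likelihood (Hill-type) estimator for a power-law tail. The sketch below recovers a known exponent from synthetic inverse-CDF samples rather than re-deriving the paper's values:

```python
import math
import random

def powerlaw_mle(xs, xmin):
    """ML estimate of eps for p(x) ~ x^(-eps), x >= xmin:
    eps_hat = 1 + n / sum(ln(x / xmin))."""
    tail = [x for x in xs if x >= xmin]
    n = len(tail)
    return 1.0 + n / sum(math.log(x / xmin) for x in tail)

# Synthetic power-law samples with eps_true = 1.5 via inverse-CDF sampling:
# x = xmin * (1 - u)^(-1 / (eps - 1)) for uniform u.
rng = random.Random(42)
eps_true, xmin = 1.5, 1.0
xs = [xmin * (1.0 - rng.random()) ** (-1.0 / (eps_true - 1.0))
      for _ in range(50_000)]

eps_hat = powerlaw_mle(xs, xmin)
```

The standard error of this estimator scales as (eps - 1)/sqrt(n), so with 50,000 events the estimate is tight; real AE catalogs additionally require choosing xmin above the detection threshold.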
Tracking Large Area Mangrove Deforestation with Time-Series of High Fidelity MODIS Imagery
Rahman, A. F.; Dragoni, D.; Didan, K.
2011-12-01
Mangrove forests are important coastal ecosystems of the tropical and subtropical regions. These forests provide critical ecosystem services, fulfill important socio-economic and environmental functions, and support coastal livelihoods. But these forests are also among the most vulnerable ecosystems, both to anthropogenic disturbance and to climate change. Yet there exists no map or published study showing detailed spatiotemporal trends of mangrove deforestation at local to regional scales. There is an immediate need to produce such detailed maps in order to further study the drivers, impacts and feedbacks of anthropogenic and climate factors on mangrove deforestation, and to develop local- and regional-scale adaptation/mitigation strategies. In this study we use a time-series of high-fidelity imagery from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) to track changes in the greenness of the mangrove forests of Kalimantan Island, Indonesia. A novel method of filtering satellite data for cloud, aerosol, and view-angle effects was used to produce high-fidelity MODIS time-series images at 250-meter spatial resolution and three-month temporal resolution for the period 2000-2010. Enhanced Vegetation Index 2 (EVI2), a measure of vegetation greenness, was calculated from these images for each pixel at each time interval. Temporal variations in the EVI2 of each pixel were tracked as a proxy for deforestation of mangroves using the statistical method of change-point analysis. The results of this change detection were validated using Monte Carlo simulation, photographs from Google Earth, finer-spatial-resolution images from the Landsat satellite, and ground-based GIS data.
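Change-point analysis of a per-pixel EVI2 series can be sketched as a single mean-shift detector: choose the split that minimizes the within-segment sum of squares. This is a generic sketch of the idea, not the specific statistical procedure used in the study:

```python
def change_point(series):
    """Return the index that best splits the series into two
    constant-mean segments (minimal total squared error)."""
    def sse(seg):
        m = sum(seg) / len(seg)
        return sum((v - m) ** 2 for v in seg)

    best_k, best_cost = None, float("inf")
    for k in range(1, len(series)):
        cost = sse(series[:k]) + sse(series[k:])
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

# A hypothetical greenness proxy that drops after deforestation at index 24
# (e.g. quarterly EVI2 values over ten years).
evi2 = [0.6] * 24 + [0.3] * 16
k = change_point(evi2)
```

Real EVI2 series are noisy and seasonal, so production change-point methods add significance testing (e.g. the Monte Carlo validation mentioned above) on top of this split criterion.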
Time-Efficient High-Resolution Large-Area Nano-Patterning of Silicon Dioxide
DEFF Research Database (Denmark)
Lin, Li; Ou, Yiyu; Aagesen, Martin
2017-01-01
A nano-patterning approach on silicon dioxide (SiO2) material, which could be used for the selective growth of III-V nanowires in photovoltaic applications, is demonstrated. In this process, a silicon (Si) stamp with nanopillar structures was first fabricated using electron-beam lithography (EBL).... In addition, high time efficiency can be realized by one-spot electron-beam exposure in the EBL process combined with NIL for mass production. Furthermore, the one-spot exposure enables the scalability of the nanostructures for different application requirements by tuning only the exposure dose. The size...
Parallel real-time visualization system for large-scale simulation. Application to WSPEEDI
International Nuclear Information System (INIS)
Muramatsu, Kazuhiro; Otani, Takayuki; Kitabata, Hideyuki; Matsumoto, Hideki; Takei, Toshifumi; Doi, Shun
2000-01-01
The real-time visualization system, PATRAS (PArallel TRAcking Steering system) has been developed on parallel computing servers. The system performs almost all of the visualization tasks on a parallel computing server, and uses image data compression technique for efficient communication between the server and the client terminal. Therefore, the system realizes high performance concurrent visualization in an internet computing environment. The experience in applying PATRAS to WSPEEDI (Worldwide version of System for Prediction Environmental Emergency Dose Information) is reported. The application of PATRAS to WSPEEDI enables users to understand behaviours of radioactive tracers from different release points easily and quickly. (author)
RNA structure and scalar coupling constants
Energy Technology Data Exchange (ETDEWEB)
Tinoco, I. Jr.; Cai, Z.; Hines, J.V.; Landry, S.M.; SantaLucia, J. Jr.; Shen, L.X.; Varani, G. [Univ. of California, Berkeley, CA (United States)
1994-12-01
Signs and magnitudes of scalar coupling constants (spin-spin splittings) comprise a very large amount of data that can be used to establish the conformations of RNA molecules. Proton-proton and proton-phosphorus splittings have been used the most, but the availability of {sup 13}C- and {sup 15}N-labeled molecules allows many more coupling constants to be used for determining conformation. We will systematically consider the torsion angles that characterize a nucleotide unit and the coupling constants that depend on the values of these torsion angles. Karplus-type equations have been established relating many three-bond coupling constants to torsion angles. However, one- and two-bond coupling constants can also depend on conformation. Serianni and coworkers measured carbon-proton coupling constants in ribonucleosides and have calculated their values as a function of conformation. The signs of two-bond couplings can be very useful, because it is easier to measure a sign than an accurate magnitude.
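A Karplus-type relation has the generic form J(φ) = A cos²φ + B cos φ + C. The sketch below evaluates such a curve; the coefficients are illustrative placeholders, not the published parametrization for any particular RNA coupling:

```python
import math

def karplus(phi_deg, A=7.0, B=-1.0, C=0.7):
    """Generic Karplus curve: three-bond coupling constant (Hz) as a
    function of torsion angle (degrees). Coefficients are illustrative."""
    c = math.cos(math.radians(phi_deg))
    return A * c * c + B * c + C

j_trans = karplus(180.0)   # anti-periplanar geometry: large coupling
j_gauche = karplus(90.0)   # near-orthogonal geometry: small coupling
```

The characteristic shape, with large couplings near 0° and 180° and a minimum near 90°, is what lets a measured J value constrain the torsion angle (up to the usual multi-valued ambiguity of the cosine).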
A reference web architecture and patterns for real-time visual analytics on large streaming data
Kandogan, Eser; Soroker, Danny; Rohall, Steven; Bak, Peter; van Ham, Frank; Lu, Jie; Ship, Harold-Jeffrey; Wang, Chun-Fu; Lai, Jennifer
2013-12-01
Monitoring and analysis of streaming data, such as social media, sensors, and news feeds, has become increasingly important for business and government. The volume and velocity of incoming data are key challenges. To effectively support monitoring and analysis, statistical and visual analytics techniques need to be seamlessly integrated; analytic techniques for a variety of data types (e.g., text, numerical) and scope (e.g., incremental, rolling-window, global) must be properly accommodated; interaction, collaboration, and coordination among several visualizations must be supported in an efficient manner; and the system should support the use of different analytics techniques in a pluggable manner. Especially in web-based environments, these requirements pose restrictions on the basic visual analytics architecture for streaming data. In this paper we report on our experience of building a reference web architecture for real-time visual analytics of streaming data, identify and discuss architectural patterns that address these challenges, and report on applying the reference architecture for real-time Twitter monitoring and analysis.
Detection of long nulls in PSR B1706-16, a pulsar with large timing irregularities
Naidu, Arun; Joshi, Bhal Chandra; Manoharan, P. K.; Krishnakumar, M. A.
2018-04-01
Single-pulse observations characterizing in detail the nulling behaviour of PSR B1706-16 are reported for the first time in this paper. Our regular long-duration monitoring of this pulsar reveals long nulls of 2-5 h, with an overall nulling fraction of 31 ± 2 per cent. The pulsar shows two distinct phases of emission. It is usually in an active phase, characterized by pulsations interspersed with shorter nulls, with a nulling fraction of about 15 per cent, but it also rarely switches to an inactive phase, consisting of long nulls. The nulls in this pulsar are concurrent between 326.5 and 610 MHz. Profile mode changes accompanied by changes in fluctuation properties are seen in this pulsar, which switches from mode A before a null to mode B after the null. The distribution of null durations in this pulsar is bimodal. With its occasional long nulls, PSR B1706-16 joins the small group of intermediate nullers, which lie between the classical nullers and the intermittent pulsars. Similar to other intermediate nullers, PSR B1706-16 shows high timing noise, which could be due to its rare long nulls if one assumes that the slowdown rate during such nulls is different from that during the bursts.
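A nulling fraction such as the 31 ± 2 per cent quoted above is a binomial proportion, and its quoted uncertainty follows from the standard error of a proportion. The pulse counts below are invented to show the arithmetic, not taken from the paper:

```python
import math

def nulling_fraction(n_null, n_total):
    """Nulling fraction with its binomial standard error:
    p = k/n, se = sqrt(p * (1 - p) / n)."""
    p = n_null / n_total
    se = math.sqrt(p * (1.0 - p) / n_total)
    return p, se

# Hypothetical tally: 155 null pulses out of 500 observed periods.
p, se = nulling_fraction(155, 500)
```

With these made-up counts the result would be reported as 31 ± 2 per cent; in practice pulsar nulling fractions also carry a systematic component from the on/off classification threshold.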
Across Space and Time: Social Responses to Large-Scale Biophysical Systems
Macmynowski, Dena P.
2007-06-01
The conceptual rubric of ecosystem management has been widely discussed and deliberated in conservation biology, environmental policy, and land/resource management. In this paper, I argue that two critical aspects of the ecosystem management concept require greater attention in policy and practice. First, although emphasis has been placed on the “space” of systems, the “time”—or rates of change—associated with biophysical and social systems has received much less consideration. Second, discussions of ecosystem management have often neglected the temporal disconnects between changes in biophysical systems and the response of social systems to management issues and challenges. The empirical basis of these points is a case study of the “Crown of the Continent Ecosystem,” an international transboundary area of the Rocky Mountains that surrounds Glacier National Park (USA) and Waterton Lakes National Park (Canada). This project assessed the experiences and perspectives of 1) middle- and upper-level government managers responsible for interjurisdictional cooperation, and 2) environmental nongovernment organizations with an international focus. I identify and describe 10 key challenges to increasing the extent and intensity of transboundary cooperation in land/resource management policy and practice. These issues are discussed in terms of their political, institutional, cultural, information-based, and perceptual elements. Analytic techniques include a combination of environmental history, semistructured interviews with 48 actors, and text analysis in a systematic qualitative framework. The central conclusion of this work is that the rates of response of human social systems must be better integrated with the rates of ecological change. This challenge is equal to or greater than the well-recognized need to adapt the spatial scale of human institutions to large-scale ecosystem processes and transboundary wildlife.
Ethical dilemmas of a large national multi-centre study in Australia: time for some consistency.
Driscoll, Andrea; Currey, Judy; Worrall-Carter, Linda; Stewart, Simon
2008-08-01
To examine the impact and obstacles that individual Institutional Research Ethics Committees (IRECs) had on a large-scale national multi-centre clinical audit called the National Benchmarks and Evidence-based National Clinical guidelines for Heart failure management programmes Study. Multi-centre research is commonplace in the health care system. However, IRECs continue to fail to differentiate between research and quality-audit projects. The National Benchmarks and Evidence-based National Clinical guidelines for Heart failure management programmes Study used an investigator-developed questionnaire concerning a clinical audit of heart failure programmes throughout Australia. Ethical guidelines developed by the national governing body of health and medical research in Australia classified the National Benchmarks and Evidence-based National Clinical guidelines for Heart failure management programmes Study as a low-risk clinical audit not requiring ethical approval by an IREC. Fifteen of 27 IRECs stipulated that the research proposal undergo full ethical review. None of the IRECs acknowledged national quality assurance guidelines and recommendations, nor ethics approval from other IRECs. Twelve of the 15 IRECs used different ethics application forms. Variability in the type of amendments was prolific. The lack of uniformity in ethical review processes resulted in a six- to eight-month delay in commencing the national study. Development of a national ethics application form, with full ethical review by the first IREC and compulsory expedited review by subsequent IRECs, would resolve the issues raised in this paper. IRECs must change their ethics approval processes to one that facilitates multi-centre research, which is now a normative process for health services. The findings of this study highlight inconsistent ethical requirements between different IRECs. Also highlighted are the obstacles and delays that IRECs create when undertaking multi-centre clinical audits.
THE WIGNER–FOKKER–PLANCK EQUATION: STATIONARY STATES AND LARGE TIME BEHAVIOR
ARNOLD, ANTON
2012-11-01
We consider the linear Wigner-Fokker-Planck equation subject to confining potentials which are smooth perturbations of the harmonic oscillator potential. For a certain class of perturbations we prove that the equation admits a unique stationary solution in a weighted Sobolev space. A key ingredient of the proof is a new result on the existence of spectral gaps for Fokker-Planck-type operators in certain weighted L²-spaces. In addition we show that the steady state corresponds to a positive density matrix operator with unit trace and that the solutions of the time-dependent problem converge towards the steady state with an exponential rate. © 2012 World Scientific Publishing Company.
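The exponential convergence statement can be written schematically as a spectral-gap estimate; this rendering is a sketch of the standard form of such results, with symbols chosen here for illustration (w the Wigner function, λ the spectral gap of the Fokker-Planck-type operator in the weighted space):

```latex
\| w(t) - w_\infty \|_{\widetilde{H}}
  \;\le\; C\, e^{-\lambda t}\, \| w(0) - w_\infty \|_{\widetilde{H}},
\qquad t \ge 0,
```

where w_∞ is the unique stationary solution and the norm is taken in the weighted Sobolev space in which stationarity is established.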
Cyranka, Jacek; Mucha, Piotr B.; Titi, Edriss S.; Zgliczyński, Piotr
2018-04-01
The paper studies the issue of stability of solutions to the forced Navier-Stokes and damped Euler systems in periodic boxes. It is shown that for a large, but fixed, Grashof (Reynolds) number the turbulent behavior of all Leray-Hopf weak solutions of the three-dimensional Navier-Stokes equations, in a periodic box, is suppressed, when viewed in the right frame of reference, by a large enough average flow of the initial data; a phenomenon that is similar in spirit to the Landau damping. Specifically, we consider initial data with a large enough spatial average; then, by means of the Galilean transformation, and thanks to the periodic boundary conditions, the large time-independent forcing term changes into a highly oscillatory force, which then allows us to employ some averaging principles to establish our result. Moreover, we also show that under the action of fast oscillatory-in-time external forces all two-dimensional regular solutions of the Navier-Stokes and the damped Euler equations converge to a unique time-periodic solution.
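The averaging mechanism described above rests on the Galilean change of variables; schematically, writing U for the spatial average of the initial data and v for the mean-free part (a sketch of the standard transformation, not the paper's exact notation):

```latex
u(x,t) \;=\; U + v(x - Ut,\, t),
\qquad
f(x) \;\longmapsto\; f(x - Ut),
```

so in the moving frame a time-independent force becomes rapidly oscillating in time when |U| is large, and time-averaging arguments can then suppress its effect.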
Time-Efficient High-Resolution Large-Area Nano-Patterning of Silicon Dioxide
Directory of Open Access Journals (Sweden)
Li Lin
2017-01-01
Full Text Available A nano-patterning approach on silicon dioxide (SiO2) material, which could be used for the selective growth of III-V nanowires in photovoltaic applications, is demonstrated. In this process, a silicon (Si) stamp with nanopillar structures was first fabricated using electron-beam lithography (EBL) followed by a dry etching process. Afterwards, the Si stamp was employed in nanoimprint lithography (NIL) assisted with a dry etching process to produce nanoholes on the SiO2 layer. The demonstrated approach has advantages such as a high resolution in nanoscale by EBL and good reproducibility by NIL. In addition, high time efficiency can be realized by one-spot electron-beam exposure in the EBL process combined with NIL for mass production. Furthermore, the one-spot exposure enables the scalability of the nanostructures for different application requirements by tuning only the exposure dose. The size variation of the nanostructures resulting from exposure parameters in EBL, the pattern transfer during nanoimprint in NIL, and subsequent etching processes of SiO2 were also studied quantitatively. By this method, a hexagonal arranged hole array in SiO2 with a hole diameter ranging from 45 to 75 nm and a pitch of 600 nm was demonstrated on a four-inch wafer.
Energy beyond food: foraging theory informs time spent in thermals by a large soaring bird.
Directory of Open Access Journals (Sweden)
Emily L C Shepard
Full Text Available Current understanding of how animals search for and exploit food resources is based on microeconomic models. Although widely used to examine feeding, such constructs should inform other energy-harvesting situations where theoretical assumptions are met. In fact, some animals extract non-food forms of energy from the environment, such as birds that soar in updraughts. This study examined whether the gains in potential energy (altitude) followed efficiency-maximising predictions in the world's heaviest soaring bird, the Andean condor (Vultur gryphus). Animal-attached technology was used to record condor flight paths in three dimensions. Tracks showed that time spent in patchy thermals was broadly consistent with a strategy to maximise the rate of potential energy gain. However, the rate of climb just prior to leaving a thermal increased with thermal strength and exit altitude. This suggests higher rates of energetic gain may not be advantageous where the resulting gain in altitude would lead to a reduction in the ability to search the ground for food. Consequently, soaring behaviour appeared to be modulated by the need to reconcile differing potential energy and food energy distributions. We suggest that foraging constructs may provide insight into the exploitation of non-food energy forms, and that non-food energy distributions may be more important in informing patterns of movement and residency over a range of scales than previously considered.
The Cosmological Constant Problem (1/2)
CERN. Geneva
2015-01-01
I will review the cosmological constant problem as a serious challenge to our notion of naturalness in Physics. Weinberg’s no go theorem is worked through in detail. I review a number of proposals possibly including Linde's universe multiplication, Coleman's wormholes, the fat graviton, and SLED, to name a few. Large distance modifications of gravity are also discussed, with causality considerations pointing towards a global modification as being the most sensible option. The global nature of the cosmological constant problem is also emphasized, and as a result, the sequestering scenario is reviewed in some detail, demonstrating the cancellation of the Standard Model vacuum energy through a global modification of General Relativity.
The Cosmological Constant Problem (2/2)
CERN. Geneva
2015-01-01
I will review the cosmological constant problem as a serious challenge to our notion of naturalness in Physics. Weinberg’s no go theorem is worked through in detail. I review a number of proposals possibly including Linde's universe multiplication, Coleman's wormholes, the fat graviton, and SLED, to name a few. Large distance modifications of gravity are also discussed, with causality considerations pointing towards a global modification as being the most sensible option. The global nature of the cosmological constant problem is also emphasized, and as a result, the sequestering scenario is reviewed in some detail, demonstrating the cancellation of the Standard Model vacuum energy through a global modification of General Relativity.
The Newton constant and gravitational waves in some vector field adjusting mechanisms
Energy Technology Data Exchange (ETDEWEB)
Santillán, Osvaldo P. [IMAS (UBA-CONICET), Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Ciudad Universitaria, Buenos Aires 1428 (Argentina); Scornavacche, Marina, E-mail: firenzecita@hotmail.com, E-mail: marina.scorna@hotmail.com [Departamento de Física, Universidad de Buenos Aires, Ciudad Universitaria, Buenos Aires 1428 (Argentina)
2017-10-01
At present, there exist some Lorentz-breaking scenarios which explain the smallness of the cosmological constant at the present era [1]-[2]. An important aspect to analyze is the propagation of gravitational waves and the screening or enhancement of the Newton constant G {sub N} in these models. The problem is that the Lorentz symmetry breaking terms may induce an unacceptable value of the Newton constant G {sub N} or introduce longitudinal modes in the gravitational wave propagation. Furthermore, this breaking may spoil the standard dispersion relation ω = ck. In [3] the authors presented a model suggesting that the behavior of the gravitational constant is correct at asymptotic times. In the present work, an explicit check is made and we finally agree with these claims. Furthermore, it is suggested that the gravitational waves are also well behaved at large times. In the process, some new models with the same behavior are obtained, thus enlarging the list of possible adjustment mechanisms.
Bernaards, Claire M; Hildebrandt, Vincent H; Hendriksen, Ingrid J M
2016-10-26
Evidence shows that prolonged sitting is associated with an increased risk of mortality, independent of physical activity (PA). The aim of the study was to identify correlates of sedentary time (ST) in different age groups and day types (i.e. school-/work day versus non-school-/non-work day). The study sample consisted of 1895 Dutch children (4-11 years), 1131 adolescents (12-17 years), 8003 adults (18-64 years) and 1569 elderly (65 years and older) who enrolled in the Dutch continuous national survey 'Injuries and Physical Activity in the Netherlands' between 2006 and 2011. Respondents estimated the number of sitting hours during a regular school-/workday and a regular non-school/non-work day. Multiple linear regression analyses on cross-sectional data were used to identify correlates of ST. Significant positive associations with ST were observed for: higher age (4-to-17-year-olds and elderly), male gender (adults), overweight (children), higher education (adults ≥ 30 years), urban environment (adults), chronic disease (adults ≥ 30 years), sedentary work (adults), not meeting the moderate to vigorous PA (MVPA) guideline (children and adults ≥ 30 years) and not meeting the vigorous PA (VPA) guideline (4-to-17-year-olds). Correlates of ST that significantly differed between day types were working hours and meeting the VPA guideline. More working hours were associated with more ST on school-/work days. In children and adolescents, meeting the VPA guideline was associated with less ST on non-school/non-working days only. This study provides new insights into the correlates of ST in different age groups and thus possibilities for interventions in these groups. Correlates of ST appear to differ between age groups and to a lesser degree between day types. This implies that interventions to reduce ST should be age specific. Longitudinal studies are needed to draw conclusions on causality of the relationship between identified correlates and ST.
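The multiple linear regression used above can be sketched with ordinary least squares via the normal equations. The toy data are synthetic, constructed to mimic "sitting hours regressed on age and sedentary work", not the survey data:

```python
def ols(X, y):
    """Ordinary least squares via the normal equations (X^T X) b = X^T y,
    solved with Gauss-Jordan elimination and partial pivoting.
    X must include an intercept column."""
    n, p = len(X), len(X[0])
    XtX = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(p)]
           for i in range(p)]
    Xty = [sum(X[r][i] * y[r] for r in range(n)) for i in range(p)]
    A = [row[:] + [Xty[i]] for i, row in enumerate(XtX)]  # augmented matrix
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(p):
            if r != col and A[col][col] != 0:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [A[i][p] / A[i][i] for i in range(p)]

# Synthetic: sitting hours = 4 + 0.5 * age_decades + 2 * sedentary_job
X = [[1, a, j] for a in range(2, 8) for j in (0, 1)]
y = [4 + 0.5 * a + 2 * j for (_, a, j) in X]
beta = ols(X, y)
```

On noiseless linear data the fit recovers the coefficients exactly; the study's regressions additionally adjust for the other covariates listed above and report significance per correlate.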
Directory of Open Access Journals (Sweden)
Claire M. Bernaards
2016-10-01
Full Text Available Abstract Background Evidence shows that prolonged sitting is associated with an increased risk of mortality, independent of physical activity (PA). The aim of the study was to identify correlates of sedentary time (ST) in different age groups and day types (i.e. school-/work day versus non-school-/non-work day). Methods The study sample consisted of 1895 Dutch children (4–11 years), 1131 adolescents (12–17 years), 8003 adults (18–64 years) and 1569 elderly (65 years and older) who enrolled in the Dutch continuous national survey ‘Injuries and Physical Activity in the Netherlands’ between 2006 and 2011. Respondents estimated the number of sitting hours during a regular school-/workday and a regular non-school/non-work day. Multiple linear regression analyses on cross-sectional data were used to identify correlates of ST. Results Significant positive associations with ST were observed for: higher age (4-to-17-year-olds and elderly), male gender (adults), overweight (children), higher education (adults ≥ 30 years), urban environment (adults), chronic disease (adults ≥ 30 years), sedentary work (adults), not meeting the moderate to vigorous PA (MVPA) guideline (children and adults ≥ 30 years) and not meeting the vigorous PA (VPA) guideline (4-to-17-year-olds). Correlates of ST that significantly differed between day types were working hours and meeting the VPA guideline. More working hours were associated with more ST on school-/work days. In children and adolescents, meeting the VPA guideline was associated with less ST on non-school/non-working days only. Conclusions This study provides new insights into the correlates of ST in different age groups and thus possibilities for interventions in these groups. Correlates of ST appear to differ between age groups and to a lesser degree between day types. This implies that interventions to reduce ST should be age specific. Longitudinal studies are needed to draw conclusions on causality of
International Nuclear Information System (INIS)
Althaus, R.F.; Kirsten, F.A.; Lee, K.L.; Olson, S.R.; Wagner, L.J.; Wolverton, J.M.
1976-10-01
A large-scale digitizer (LSD) system for acquiring charge and time-of-arrival particle data from high-energy-physics experiments has been developed at the Lawrence Berkeley Laboratory. The objective in this development was to significantly reduce the cost of instrumenting large detector arrays which, for the 4π geometry of colliding-beam experiments, are proposed with an order-of-magnitude increase in channel count over previous detectors. In order to achieve the desired economy (approximately $65 per channel), a system was designed in which a number of control signals for conversion, for digitization, and for readout are shared in common by all the channels in each 128-channel bin. The overall system concept and the distribution of control signals that are critical to the 10-bit charge resolution and to the 12-bit time resolution are described. Also described is the bit-serial transfer scheme, chosen for its low component and cabling costs.
International Nuclear Information System (INIS)
Sabati, M; Lauzon, M L; Frayne, R
2003-01-01
Data acquisition using a continuously moving table approach is a method capable of generating large field-of-view (FOV) 3D MR angiograms. However, in order to obtain venous contamination-free contrast-enhanced (CE) MR angiograms in the lower limbs, one of the major challenges is to acquire all necessary k-space data during the restricted arterial phase of the contrast agent. A preliminary investigation of the space-time relationship of continuously acquired peripheral angiography is performed in this work. Deterministic and stochastic undersampled hybrid-space (x, k_y, k_z) acquisitions are simulated for large-FOV peripheral runoff studies. Initial results show the possibility of acquiring isotropic large-FOV images of the entire peripheral vascular system. An optimal trade-off between the spatial and temporal sampling properties was found that produced a high-spatial-resolution peripheral CE-MR angiogram. The deterministic sampling pattern was capable of reconstructing the global structure of the peripheral arterial tree and showed slightly better global quantitative results than stochastic patterns. Optimal stochastic sampling patterns, on the other hand, enhanced small vessels and had more favourable local quantitative results. These simulations demonstrate the complex spatial-temporal relationship when sampling large-FOV peripheral runoff studies. They also suggest that more investigation is required to maximize image quality as a function of hybrid-space coverage, acquisition repetition time and sampling pattern parameters.
Li, Ji; Chen, Yangbo; Wang, Huanyu; Qin, Jianming; Li, Jie; Chiao, Sen
2017-03-01
Long lead time flood forecasting is very important for large watershed flood mitigation as it provides more time for flood warning and emergency responses. The latest numerical weather forecast models can provide 1–15-day quantitative precipitation forecasting products in grid format, and coupling such products with a distributed hydrological model can produce long lead time watershed flood forecasting products. This paper studied the feasibility of coupling the Liuxihe model with the Weather Research and Forecasting quantitative precipitation forecast (WRF QPF) for large watershed flood forecasting in southern China. The WRF QPF products have three lead times, 24, 48 and 72 h, with a grid resolution of 20 km × 20 km. The Liuxihe model is set up with freely downloaded terrain property data; the model parameters were previously optimized with rain gauge observed precipitation, and re-optimized with the WRF QPF. Results show that the WRF QPF is biased with respect to the rain gauge precipitation, and a post-processing method is proposed to post-process the WRF QPF products, which improves the flood forecasting capability. With model parameter re-optimization, the model's performance also improves; this suggests that the model parameters should be optimized with the QPF rather than the rain gauge precipitation. As the lead time increases, the accuracy of the WRF QPF decreases, as does the flood forecasting capability. Flood forecasting products produced by coupling the Liuxihe model with the WRF QPF provide a good reference for large watershed flood warning due to their long lead time and rational results.
Global low-energy weak solution and large-time behavior for the compressible flow of liquid crystals
Wu, Guochun; Tan, Zhong
2018-06-01
In this paper, we consider the weak solution of the simplified Ericksen-Leslie system modeling compressible nematic liquid crystal flows in R3. When the initial data are of small energy and initial density is positive and essentially bounded, we prove the existence of a global weak solution in R3. The large-time behavior of a global weak solution is also established.
International Nuclear Information System (INIS)
Goyot, M.
1975-05-01
A broadband, low-noise charge preamplifier was developed in hybrid form for a recoil spectrometer requiring large-capacitance semiconductor detectors. This new hybrid, low-cost preamplifier permits good timing information without compromising energy resolution. With a 500 pF external input capacitance, it provides two simultaneous outputs: (i) the faster, current sensitive, with a rise time of 9 ns and 2 mV/MeV into a 50 ohm load; (ii) the slower, charge sensitive, with an energy resolution of 14 keV (FWHM, Si) using an ungated 2 μs RC-CR filter and FET input protection. [fr]
International Nuclear Information System (INIS)
Nasserzadeh, V.; Swithenbank, J.; Jones, B.
1995-01-01
The problem of measuring gas residence time in large incinerators was studied by the pseudo-random binary sequence (PRBS) stimulus tracer response technique at the Sheffield municipal solid-waste incinerator (35 MW plant). The steady-state system was disturbed by the superimposition of small fluctuations in the form of a pseudo-random binary sequence of methane pulses, and the response of the incinerator was determined from the CO2 concentration in the flue gases at the boiler exit, measured with a specially developed optical gas analyser with a high-frequency response. For data acquisition, an online PC was used together with the LAB Windows software system; the output response was then cross-correlated with the perturbation signal to give the impulse response of the incinerator. There was very good agreement between the gas residence time for the Sheffield MSW incinerator as calculated by computational fluid dynamics (FLUENT model) and the gas residence time at the plant as measured by the PRBS tracer technique. The results obtained from this research programme clearly demonstrate that the PRBS stimulus tracer response technique can be successfully and economically used to measure gas residence times in large incinerator plants. They also suggest that the common commercial practice of characterising incinerator operation by a single residence-time parameter may lead to a misrepresentation of the complexities involved in describing the operation of the incineration system. (author)
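The cross-correlation step described above can be sketched in a few lines: for a (pseudo-)white ±1 stimulus, the input-output cross-correlation is proportional to the system's impulse response, whose first moment gives the mean residence time. This is a minimal illustration on synthetic data, not the Sheffield plant's instrumentation; the decay constant, sampling interval, and noise level below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical system: a well-mixed zone with mean residence time tau
dt = 0.5                              # sampling interval, s (assumed)
t = np.arange(200) * dt
tau = 10.0                            # "true" mean residence time, s (assumed)
h = (dt / tau) * np.exp(-t / tau)     # discrete impulse response (sums to ~1)

# PRBS-like +/-1 stimulus and the noisy measured response
u = rng.choice([-1.0, 1.0], size=16384)
y = np.convolve(u, h)[: u.size] + 0.01 * rng.standard_normal(u.size)

# Cross-correlating stimulus with response recovers the impulse response,
# because a +/-1 white sequence has a delta-like autocorrelation.
lag0 = u.size - 1                     # zero-lag index of the 'full' output
h_est = np.correlate(y, u, mode="full")[lag0 : lag0 + t.size] / u.size

# Mean residence time = first moment of the recovered impulse response
tau_est = (t * h_est).sum() / h_est.sum()
```

The same estimate works regardless of the shape of the impulse response, which is why the PRBS technique can expose behaviour that a single assumed residence time would hide.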
International Nuclear Information System (INIS)
Lott, B.; Escande, L.; Larsson, S.; Ballet, J.
2012-01-01
Here, we present a method enabling the creation of constant-uncertainty/constant-significance light curves with the data of the Fermi Large Area Telescope (LAT). This adaptive-binning method enables more information to be encapsulated within the light curve than the fixed-binning method. Although primarily developed for blazar studies, it can be applied to any source. Furthermore, this method allows the starting and ending times of each interval to be calculated in a simple and quick way during a first step. The mean flux and spectral index (assuming the spectrum is a power-law distribution) reported in each interval are calculated via the standard LAT analysis during a second step. The absence of major caveats associated with this method has been established with Monte-Carlo simulations. We present the performance of this method in determining duty cycles as well as power-density spectra relative to the traditional fixed-binning method.
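A toy version of constant-significance binning can illustrate the idea: accumulate photons until the Poisson relative uncertainty 1/√N falls to a chosen target, so bin edges automatically shrink when the source flares and stretch when it is faint. The event list, target uncertainty, and flare shape below are invented for illustration and are unrelated to the actual LAT analysis chain.

```python
import numpy as np

def adaptive_bins(event_times, rel_unc_target=0.2):
    """Group time-ordered event arrival times into bins whose Poisson
    relative flux uncertainty (~1/sqrt(N)) meets a fixed target, so
    bright intervals get short bins and faint intervals get long ones."""
    n_min = int(np.ceil(1.0 / rel_unc_target**2))  # counts needed per bin
    edges = [event_times[0]]
    count = 0
    for t in event_times:
        count += 1
        if count >= n_min:
            edges.append(t)
            count = 0
    if edges[-1] != event_times[-1]:
        edges.append(event_times[-1])   # close the last (incomplete) bin
    return np.asarray(edges)

# Hypothetical light curve: a bright flare embedded in a faint baseline
rng = np.random.default_rng(1)
faint = rng.uniform(0, 100, 50)        # sparse baseline events
flare = rng.uniform(45, 55, 500)       # dense flare events
times = np.sort(np.concatenate([faint, flare]))

edges = adaptive_bins(times, rel_unc_target=0.2)   # needs >= 25 counts/bin
widths = np.diff(edges)                # short bins during the flare
```

The bins through the flare come out much narrower than those in the quiescent baseline, which is exactly the extra time-resolution the adaptive method buys over fixed binning.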
Spectrophotometric determination of association constant
DEFF Research Database (Denmark)
2016-01-01
Least-squares 'Systematic Trial-and-Error Procedure' (STEP) for spectrophotometric evaluation of association constant (equilibrium constant) K and molar absorption coefficient E for a 1:1 molecular complex, A + B = C, with error analysis according to Conrow et al. (1964). An analysis of the Charge...
International Nuclear Information System (INIS)
Yang Xiaocheng; Han-Oh, Sarah; Gui Minzhi; Niu Ying; Yu, Cedric X.; Yi Byongyong
2012-01-01
Purpose: Dose-rate-regulated tracking (DRRT) is a tumor tracking strategy that programs the MLC to track the tumor under regular breathing and adapts to breathing irregularities during delivery using dose rate regulation. Constant-dose-rate tracking (CDRT) is a strategy that dynamically repositions the beam to account for intrafractional 3D target motion according to real-time information on target location obtained from an independent position monitoring system. The purpose of this study is to illustrate the differences in effectiveness and delivery accuracy between these two tracking methods in the presence of breathing irregularities. Methods: Step-and-shoot IMRT plans optimized at a reference phase were extended to the remaining phases to generate 10-phase 4D-IMRT plans using the segment aperture morphing (SAM) algorithm, in which both tumor displacement and deformation were considered. A SAM-based 4D plan has been demonstrated to provide better plan quality than plans not considering target deformation. However, delivering such a plan requires preprogramming of the MLC aperture sequence. Deliveries of the 4D plans using the DRRT and CDRT tracking approaches were simulated assuming a breathing period either shorter or longer than on the planning day, for four IMRT cases: two lung and two pancreatic cases with maximum GTV centroid motion greater than 1 cm. In DRRT, the dose rate was regulated to speed up or slow down delivery as needed such that each planned segment is delivered at the planned breathing phase. In CDRT, the MLC is separately controlled to follow the tumor motion, but the dose rate is kept constant. In addition to breathing period change, the effect of breathing amplitude variation on target and critical-tissue dose distributions is also evaluated. Results: Delivery of preprogrammed 4D plans by the CDRT method resulted in an average 5% increase in target dose and a noticeable increase in organs-at-risk (OAR) dose when patient breathing is either 10% faster or
Directory of Open Access Journals (Sweden)
Yunliang Li
2015-04-01
Full Text Available Most biochemical processes and the associated water quality in lakes depend on their flushing abilities. The main objective of this study was to investigate the transport time scale in a large floodplain lake, Poyang Lake (China). A 2D hydrodynamic model (MIKE 21) was combined with dye tracer simulations to determine residence and travel times of the lake for various water level variation periods. The results indicate that Poyang Lake exhibits strong but spatially heterogeneous residence times that vary with its highly seasonal water level dynamics. Generally, the average residence times are less than 10 days along the lake’s main flow channels due to the prevailing northward flow pattern, whereas approximately 30 days were estimated during high water level conditions in the summer. The local topographically controlled flow patterns substantially increase the residence time in some bays, with values of six months to one year during all water level variation periods. Depending on changes in the water level regime, the travel times from the pollution sources to the lake outlet during the high and falling water level periods (up to 32 days) are four times greater than those under the rising and low water level periods (approximately seven days).
International Nuclear Information System (INIS)
Schmidt, Thomas; Ziganshin, Ayrat M.; Nikolausz, Marcell; Scholwin, Frank; Nelles, Michael; Kleinsteuber, Sabine; Pröter, Jürgen
2014-01-01
The hydraulic retention time (HRT) is one of the key parameters in biogas processes, and it is often postulated that a minimum HRT of 10–25 days is obligatory in continuous stirred tank reactors (CSTR) to prevent a washout of slow-growing methanogens. In this study, the effects of reducing the HRT from 6 to 1.5 days on performance and methanogenic community composition were investigated in different systems with and without immobilization, operated with simulated thin stillage (STS) at mesophilic conditions and a constant organic loading rate (OLR) of 10 g L−1 d−1 of volatile solids. With the reduction of the HRT, process instability was first observed in the anaerobic sequencing batch reactor (ASBR) (at an HRT of 3 days), followed by the CSTR (at an HRT of 2 days). The fixed bed reactor (FBR) was stable until the end of the experiment, but the reduction of the HRT to 1.5 days caused a decrease of the specific biogas production to about 450 L kg−1 of VS, compared to about 600 L kg−1 of VS at HRTs of 4–5 days. Methanoculleus and Methanosarcina were the dominant genera under stable process conditions in the CSTR and the ASBR, and members of Methanosaeta and Methanospirillum were only present at HRTs of 4 days and lower. In the effluent of the FBR, Methanosarcina spp. were not detected and Methanosaeta spp. were more abundant than in the other reactors. - Highlights: • A CSTR was operated at a high OLR of 10 g L−1 d−1 of VS and a low HRT of 3 days. • Excessive washout of methanogenic archaea did not take place. • pH and nutrient concentrations influenced the reproduction rate more than HRT. • Methanoculleus and Methanosarcina were the dominant genera in the CSTR
International Nuclear Information System (INIS)
Wetstein, Matthew
2011-01-01
Microchannel plate photomultiplier tubes (MCPs) are compact imaging detectors, capable of micron-level spatial imaging and timing measurements with resolutions below 10 ps. Conventional fabrication methods are too expensive for making MCPs in the quantities and sizes necessary for typical HEP applications, such as time-of-flight ring-imaging Cherenkov detectors (TOF-RICH) or water-Cherenkov-based neutrino experiments. The Large Area Picosecond Photodetector Collaboration (LAPPD) is developing new, commercializable methods to fabricate 20 cm × 20 cm thin planar MCPs at costs comparable to those of traditional photomultiplier tubes. Transmission-line readout with waveform sampling on both ends of each line allows the efficient coverage of large areas while maintaining excellent time and space resolution. Rather than fabricating channel plates from active, high-secondary-electron-emission materials, we produce plates from passive substrates and coat them using atomic layer deposition (ALD), a well-established industrial batch process. In addition to possible reductions in cost and conditioning time, this allows greater control to optimize the composition of active materials for performance. We present details of the MCP fabrication method, preliminary results from testing and characterization facilities, and possible HEP applications.
Dynamical evolution of star clusters with a changing gravitational constant
International Nuclear Information System (INIS)
Angeletti, L.; Giannone, P.
1978-01-01
The dynamical evolution of massive star clusters was studied, taking into account variations of the gravitational constant with time. The rates of change of G were adopted according to theoretical and observational indications. Various conditions concerning the number of star groups, star masses, mass loss from stars, and initial star concentration were tested for the clusters. The comparison with analogous evolutionary sequences computed with a constant value of G showed that the effects of changes of G may be conspicuous. The analytical dependence of the basic structural functions on the law of variation of G with time was determined from the numerical results; it allows an estimate of the consequences of a varying G in a large range of cases. The effects of a decrease of G tended to prevent the formation of dense cores, which is a specific feature of the evolution of 'standard' models of star clusters. The expansion of the whole cluster structure was noteworthy. However, there was not a significant increase in the escape of stars from the cluster compared with the cases computed with constant G. Although a detailed comparison with observations was beyond our present aims, it appears that a variation of G according to the Brans-Dicke theory is not in conflict with observational data, as is the case for an exponential decrease of G consistent with Van Flandern's result. (orig.) [de]
Directory of Open Access Journals (Sweden)
Y. Kawada
2007-01-01
Full Text Available Prior to large earthquakes (e.g. the 1995 Kobe earthquake, Japan), an increase in the atmospheric radon concentration is observed, and the rate of this increase follows a power-law of the time-to-earthquake (time-to-failure). This phenomenon corresponds to an increase in radon migration in the crust and exhalation into the atmosphere. An irreversible thermodynamic model including time-scale invariance clarifies that the increases in the pressure of the advecting radon and in the permeability (hydraulic conductivity) of the crustal rocks are caused by the temporal power-law changes in the crustal strain (or cumulative Benioff strain), which are associated with damage evolution such as microcracking or changing porosity. As a result, the radon flux and the atmospheric radon concentration can show a temporal power-law increase. The concentration of atmospheric radon can be used as a proxy for the seismic precursory processes associated with crustal dynamics.
Yin, Stuart (Shizhuo); Chao, Ju-Hung; Zhu, Wenbin; Chen, Chang-Jiang; Campbell, Adrian; Henry, Michael; Dubinskiy, Mark; Hoffman, Robert C.
2017-08-01
In this paper, we present a novel large-capacity (1000+ channel) time division multiplexing (TDM) laser beam combining technique that harnesses a state-of-the-art nanosecond-speed potassium tantalate niobate (KTN) electro-optic (EO) beam deflector as the time division multiplexer. The major advantages of the TDM approach are: (1) large multiplexing capability (over 1000 channels); (2) high spatial beam quality (the combined beam has the same spatial profile as the individual beam); (3) high spectral beam quality (the combined beam has the same spectral width as the individual beam); and (4) insensitivity to the phase fluctuations of the individual lasers, owing to the incoherent nature of the beam combining. Quantitative analyses show that it is possible to achieve a single-aperture, single-transverse-mode solid-state and/or fiber laser with over one hundred kW of average power by pursuing this beam combining method, which represents a major technical advance in the field of high-energy lasers. Such 100+ kW average power, diffraction-limited-beam-quality lasers can play an important role in a variety of applications such as laser directed energy weapons (DEW) and large-capacity high-speed laser manufacturing, including cutting, welding, and printing.
Prochazka, Ivan; Kodet, Jan; Eckl, Johann; Blazej, Josef
2017-10-01
We are reporting on the design, construction, and performance of a photon counting detector system, which is based on single photon avalanche diode detector technology. This photon counting device has been optimized for very high timing resolution and stability of its detection delay. The foreseen application of this detector is laser ranging of space objects, laser time transfer ground to space and fundamental metrology. The single photon avalanche diode structure, manufactured on silicon using K14 technology, is used as a sensor. The active area of the sensor is circular with 200 μm diameter. Its photon detection probability exceeds 40% in the wavelength range spanning from 500 to 800 nm. The sensor is operated in active quenching and gating mode. A new control circuit was optimized to maintain high timing resolution and detection delay stability. In connection to this circuit, timing resolution of the detector is reaching 20 ps FWHM. In addition, the temperature change of the detection delay is as low as 70 fs/K. As a result, the detection delay stability of the device is exceptional: expressed in the form of time deviation, detection delay stability of better than 60 fs has been achieved. Considering the large active area aperture of the detector, this is, to our knowledge, the best timing performance reported for a solid state photon counting detector so far.
International Nuclear Information System (INIS)
Bhattacharya, Deb Sankar; Majumdar, Nayana; Sarkar, S.; Bhattacharya, S.; Mukhopadhyay, Supratik; Bhattacharya, P.; Attie, D.; Colas, P.; Ganjour, S.; Bhattacharya, Aparajita
2016-01-01
The principal particle tracker at the International Linear Collider (ILC) is planned to be a large Time Projection Chamber (TPC) in which different Micro Pattern Gaseous Detectors (MPGDs) are candidates for the gaseous amplifier. A Micromegas (MM) based TPC can meet the ILC requirement of continuous and precise pattern recognition. Seven MM modules, working as the end-plate of a Large Prototype TPC (LPTPC) installed at DESY, have been tested with a 5 GeV electron beam. Due to the grounded peripheral frame of the MM modules, at low drift fields the electric field lines near the detector edge are no longer parallel to the TPC axis. This causes signal loss along the boundaries of the MM modules as well as distortion in the reconstructed track. In the presence of a magnetic field, the distorted electric field introduces an E×B effect.
Energy Technology Data Exchange (ETDEWEB)
Debreczeny, Martin Paul [Univ. of California, Berkeley, CA (United States)
1994-05-01
We have measured and assigned rate constants for energy transfer between chromophores in the light-harvesting protein C-phycocyanin (PC), in the monomeric and trimeric aggregation states, isolated from Synechococcus sp. PCC 7002. In order to compare the measured rate constants with those predicted by Förster's theory of inductive resonance in the weak coupling limit, we have experimentally resolved several properties of the three chromophore types (β155, α84, β84) found in PC monomers, including absorption and fluorescence spectra, extinction coefficients, fluorescence quantum yields, and fluorescence lifetimes. The cpcB/C155S mutant, whose PC is missing the β155 chromophore, was useful in effecting the resolution of the chromophore properties and in assigning the experimentally observed rate constants for energy transfer to specific pathways.
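The comparison with Förster theory rests on the standard inductive-resonance rate expression, k_T = (1/τ_D)(R0/r)^6, with transfer efficiency E = R0^6/(R0^6 + r^6). A small sketch with invented numbers (the donor lifetime and Förster radius below are illustrative placeholders, not the measured phycocyanin values):

```python
def forster_rate(tau_d_ns, r0_nm, r_nm):
    """Förster energy-transfer rate (1/ns) from donor lifetime tau_D,
    Förster radius R0, and donor-acceptor distance r:
    k_T = (1/tau_D) * (R0/r)**6."""
    return (1.0 / tau_d_ns) * (r0_nm / r_nm) ** 6

def transfer_efficiency(r0_nm, r_nm):
    """E = R0^6 / (R0^6 + r^6); transfer is 50% efficient at r = R0."""
    return r0_nm**6 / (r0_nm**6 + r_nm**6)

# Illustrative (not measured) numbers: tau_D = 1.5 ns, R0 = 5 nm
k_close = forster_rate(1.5, 5.0, 2.5)   # donor-acceptor pair at R0/2
k_far = forster_rate(1.5, 5.0, 5.0)     # pair at R0
```

The r^-6 dependence is why resolving the individual chromophore distances and spectra matters: halving the donor-acceptor distance speeds up transfer by a factor of 64.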
Salditch, L.; Brooks, E. M.; Stein, S.; Spencer, B. D.; Campbell, M. R.
2017-12-01
A challenge for earthquake hazard assessment is that geologic records often show large earthquakes occurring in temporal clusters separated by periods of quiescence. For example, in Cascadia, a paleoseismic record going back 10,000 years shows four to five clusters separated by approximately 1,000 year gaps. If we are still in the cluster that began 1700 years ago, a large earthquake is likely to happen soon. If the cluster has ended, a great earthquake is less likely. For a Gaussian distribution of recurrence times, the probability of an earthquake in the next 50 years is six times larger if we are still in the most recent cluster. Earthquake hazard assessments typically employ one of two recurrence models, neither of which directly incorporate clustering. In one, earthquake probability is time-independent and modeled as Poissonian, so an earthquake is equally likely at any time. The fault has no "memory" because when a prior earthquake occurred has no bearing on when the next will occur. The other common model is a time-dependent earthquake cycle in which the probability of an earthquake increases with time until one happens, after which the probability resets to zero. Because the probability is reset after each earthquake, the fault "remembers" only the last earthquake. This approach can be used with any assumed probability density function for recurrence times. We propose an alternative, Long-Term Fault Memory (LTFM), a modified earthquake cycle model where the probability of an earthquake increases with time until one happens, after which it decreases, but not necessarily to zero. Hence the probability of the next earthquake depends on the fault's history over multiple cycles, giving "long-term memory". Physically, this reflects an earthquake releasing only part of the elastic strain stored on the fault. We use the LTFM to simulate earthquake clustering along the San Andreas Fault and Cascadia. In some portions of the simulated earthquake history, events would
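A minimal sketch of the LTFM idea: strain accumulates steadily, the per-year earthquake probability grows with the stored strain, and each event releases only a fraction of that strain, so the hazard does not reset to zero and events can cluster. All parameter values here are invented for illustration; this is not the authors' calibrated San Andreas/Cascadia model.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_ltfm(n_years=10000, load=1.0, release_frac=0.7, k=1e-4):
    """Sketch of a Long-Term Fault Memory model: strain grows linearly,
    the yearly earthquake probability grows with stored strain, and an
    event releases only release_frac of the strain (a partial reset,
    unlike the full reset of a standard earthquake-cycle model)."""
    strain = 0.0
    quakes = []
    for year in range(n_years):
        strain += load
        p = min(1.0, k * strain)            # hazard grows with stored strain
        if rng.random() < p:
            quakes.append(year)
            strain *= (1.0 - release_frac)  # partial, not full, reset
    return np.array(quakes)

quakes = simulate_ltfm()
intervals = np.diff(quakes)
# Dispersion of recurrence intervals; a coefficient of variation
# above 1 would indicate clustered (over-dispersed) recurrence.
cv = intervals.std() / intervals.mean()
```

Setting `release_frac=1.0` recovers the memoryless full-reset cycle model, so the single parameter interpolates between the two standard recurrence models described above.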
International Nuclear Information System (INIS)
Rivarolo, M.; Magistri, L.; Massardo, A.F.
2014-01-01
Highlights: • We investigate H2 and CH4 production from a very large hydraulic plant (14 GW). • We employ only “spilled energy”, not used by the hydraulic plant, for H2 production. • We consider the integration with energy taken from the grid at different prices. • We consider hydrogen conversion in chemical reactors to produce methane. • We find the plants' optimal size using a time-dependent thermo-economic approach. - Abstract: This paper investigates hydrogen and methane generation from a large hydraulic plant, using an original multilevel thermo-economic optimization approach developed by the authors. Hydrogen is produced by water electrolysis employing time-dependent hydraulic energy related to the water which is not normally used by the plant, known as “spilled water electricity”. Both the demand for spilled energy and the electrical grid load vary widely by time of year; therefore a time-dependent, hour-by-hour, one complete year analysis has been carried out in order to define the optimal plant size. This time-period analysis is necessary to take into account the variability of the spilled energy and electrical load profiles during the year. The hydrogen generation plant is based on 1 MWe water electrolysers fuelled with the “spilled water electricity”, when available; in the remaining periods, in order to assure a regular H2 production, the energy is taken from the electrical grid, at higher cost. To perform the production plant size optimization, two hierarchical levels have been considered over a one-year time period, in order to minimize capital and variable costs. After the optimization of the hydrogen production plant size, a further analysis is carried out with a view to converting the produced H2 into methane in a chemical reactor, starting from H2 and CO2, the latter obtained with CCS plants and/or carried by ships. For this plant, the optimal electrolysers and chemical reactors system size is defined. For both of the two solutions, thermo
A large set of potential past, present and future hydro-meteorological time series for the UK
Directory of Open Access Journals (Sweden)
B. P. Guillod
2018-01-01
Full Text Available Hydro-meteorological extremes such as drought and heavy precipitation can have large impacts on society and the economy. With potentially increasing risks associated with such events due to climate change, properly assessing the associated impacts and uncertainties is critical for adequate adaptation. However, the application of risk-based approaches often requires large sets of extreme events, which are not commonly available. Here, we present such a large set of hydro-meteorological time series for recent past and future conditions for the United Kingdom based on weather@home 2, a modelling framework consisting of a global climate model (GCM) driven by observed or projected sea surface temperature (SST) and sea ice, which is downscaled to 25 km over the European domain by a regional climate model (RCM). Sets of 100 time series are generated for each of (i) a historical baseline (1900–2006), (ii) five near-future scenarios (2020–2049) and (iii) five far-future scenarios (2070–2099). The five scenarios in each future time slice all follow the Representative Concentration Pathway 8.5 (RCP8.5) and sample the range of sea surface temperature and sea ice changes from CMIP5 (Coupled Model Intercomparison Project Phase 5) models. Validation of the historical baseline highlights good performance for temperature and potential evaporation, but substantial seasonal biases in mean precipitation, which are corrected using a linear approach. For extremes in low precipitation over a long accumulation period (> 3 months) and shorter-duration high precipitation (1–30 days), the time series generally represent past statistics well. Future projections show small precipitation increases in winter but large decreases in summer on average, leading to an overall drying, consistent with the most recent UK Climate Projections (UKCP09) but larger in magnitude. Both drought and high-precipitation events are projected to increase in frequency and
Roosen, David; Wegewijs, Maarten R.; Hofstetter, Walter
2008-02-01
We investigate the time-dependent Kondo effect in a single-molecule magnet (SMM) strongly coupled to metallic electrodes. Describing the SMM by a Kondo model with large spin S>1/2, we analyze the underscreening of the local moment and the effect of anisotropy terms on the relaxation dynamics of the magnetization. Underscreening by single-channel Kondo processes leads to a logarithmically slow relaxation, while finite uniaxial anisotropy causes a saturation of the SMM’s magnetization. Additional transverse anisotropy terms induce quantum spin tunneling and a pseudospin-1/2 Kondo effect sensitive to the spin parity.
A large set of potential past, present and future hydro-meteorological time series for the UK
Guillod, Benoit P.; Jones, Richard G.; Dadson, Simon J.; Coxon, Gemma; Bussi, Gianbattista; Freer, James; Kay, Alison L.; Massey, Neil R.; Sparrow, Sarah N.; Wallom, David C. H.; Allen, Myles R.; Hall, Jim W.
2018-01-01
Hydro-meteorological extremes such as drought and heavy precipitation can have large impacts on society and the economy. With potentially increasing risks associated with such events due to climate change, properly assessing the associated impacts and uncertainties is critical for adequate adaptation. However, the application of risk-based approaches often requires large sets of extreme events, which are not commonly available. Here, we present such a large set of hydro-meteorological time series for recent past and future conditions for the United Kingdom based on weather@home 2, a modelling framework consisting of a global climate model (GCM) driven by observed or projected sea surface temperature (SST) and sea ice which is downscaled to 25 km over the European domain by a regional climate model (RCM). Sets of 100 time series are generated for each of (i) a historical baseline (1900-2006), (ii) five near-future scenarios (2020-2049) and (iii) five far-future scenarios (2070-2099). The five scenarios in each future time slice all follow the Representative Concentration Pathway 8.5 (RCP8.5) and sample the range of sea surface temperature and sea ice changes from CMIP5 (Coupled Model Intercomparison Project Phase 5) models. Validation of the historical baseline highlights good performance for temperature and potential evaporation, but substantial seasonal biases in mean precipitation, which are corrected using a linear approach. For extremes in low precipitation over a long accumulation period ( > 3 months) and shorter-duration high precipitation (1-30 days), the time series generally represent past statistics well. Future projections show small precipitation increases in winter but large decreases in summer on average, leading to an overall drying, consistent with the most recent UK Climate Projections (UKCP09) but larger in magnitude. Both drought and high-precipitation events are projected to increase in frequency and intensity in most regions
He, Zi; Chen, Ru-Shan
2016-03-01
An efficient three-dimensional time-domain parabolic equation (TDPE) method is proposed to rapidly analyze the narrow-angle wideband EM scattering properties of electrically large targets. The finite-difference (FD) Crank-Nicolson (CN) scheme is the traditional tool for solving the time-domain parabolic equation. However, it requires substantial computational resources when the mesh becomes dense. Therefore, the alternating direction implicit (ADI) scheme is introduced to discretize the time-domain parabolic equation. In this way, the reduced transient scattered fields can be calculated line by line in each transverse plane for any time step with unconditional stability. As a result, fewer computational resources are required for the proposed ADI-based TDPE method compared with both the traditional CN-based TDPE method and the finite-difference time-domain (FDTD) method. By employing the rotating TDPE method, the complete bistatic RCS can be obtained with encouraging accuracy for any observation angle. Numerical examples are given to demonstrate the accuracy and efficiency of the proposed method.
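The unconditional stability that motivates the ADI choice can be illustrated on a generic two-dimensional parabolic (diffusion-type) equation. The following is a minimal Peaceman-Rachford-style sketch, not the authors' EM code; the grid, boundary conditions, and time-step ratio are illustrative. Each full step splits into two half-steps, each implicit in only one direction, so only tridiagonal systems need to be solved line by line.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: sub-diagonal a, diagonal b, super-diagonal c, rhs d."""
    n = len(d)
    cp = np.zeros(n); dp = np.zeros(n)
    cp[0] = c[0] / b[0]; dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def adi_step(u, r):
    """One Peaceman-Rachford ADI step for u_t = u_xx + u_yy on a square grid
    with zero Dirichlet boundaries; r = dt / (2 * dx**2)."""
    n = u.shape[0]
    m = n - 2
    a = np.full(m, -r); b = np.full(m, 1 + 2 * r); c = np.full(m, -r)
    u_half = u.copy()
    # sweep 1: implicit in x, explicit in y, one tridiagonal solve per line
    for j in range(1, n - 1):
        rhs = u[1:-1, j] + r * (u[1:-1, j + 1] - 2 * u[1:-1, j] + u[1:-1, j - 1])
        u_half[1:-1, j] = thomas(a, b, c, rhs)
    u_new = u_half.copy()
    # sweep 2: implicit in y, explicit in x
    for i in range(1, n - 1):
        rhs = u_half[i, 1:-1] + r * (u_half[i + 1, 1:-1] - 2 * u_half[i, 1:-1] + u_half[i - 1, 1:-1])
        u_new[i, 1:-1] = thomas(a, b, c, rhs)
    return u_new
```

Even with a time step far beyond the explicit stability limit (large r), the iteration stays bounded, which is the property exploited by the ADI-based TDPE method.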
Hong, Kyeongsoo; Koo, Jae-Rim; Lee, Jae Woo; Kim, Seung-Lee; Lee, Chung-Uk; Park, Jang-Ho; Kim, Hyoun-Woo; Lee, Dong-Joo; Kim, Dong-Jin; Han, Cheongho
2018-05-01
We report the results of photometric observations for doubly eclipsing binaries OGLE-LMC-ECL-15674 and OGLE-LMC-ECL-22159, both of which are composed of two pairs (designated A&B) of a detached eclipsing binary located in the Large Magellanic Cloud. The light curves were obtained by high-cadence time-series photometry using the Korea Microlensing Telescope Network 1.6 m telescopes located at three southern sites (CTIO, SAAO, and SSO) between 2016 September and 2017 January. The orbital periods were determined to be 1.433 and 1.387 days for components A and B of OGLE-LMC-ECL-15674, respectively, and 2.988 and 3.408 days for OGLE-LMC-ECL-22159A and B, respectively. Our light curve solutions indicate that the significant changes in the eclipse depths of OGLE-LMC-ECL-15674A and B were caused by variations in their inclination angles. The eclipse timing diagrams of the A and B components of OGLE-LMC-ECL-15674 and OGLE-LMC-ECL-22159 were analyzed using 28, 44, 28, and 26 new times of minimum light, respectively. The apsidal motion period of OGLE-LMC-ECL-15674B was estimated by detailed analysis of eclipse timings for the first time. The detached eclipsing binary OGLE-LMC-ECL-15674B shows a fast apsidal period of 21.5 ± 0.1 years.
The Nature of the Cosmological Constant Problem
Maia, M. D.; Capistrano, A. J. S.; Monte, E. M.
General relativity postulates the Minkowski space-time as the standard (flat) geometry against which we compare all curved space-times and also as the gravitational ground state where particles, quantum fields and their vacua are defined. On the other hand, experimental evidence indicates that there exists a non-zero cosmological constant, which implies a de Sitter ground state, not compatible with the assumed Minkowski structure. Such an inconsistency is evidence of the missing standard of curvature in Riemann's geometry, which in general relativity manifests itself in the form of the cosmological constant problem. We show how the lack of a curvature standard in Riemann's geometry can be fixed by Nash's theorem on metric perturbations. The resulting higher-dimensional gravitational theory is more general than general relativity, similar to brane-world gravity, but one in which the propagation of the gravitational field along the extra dimensions is a mathematical necessity, rather than a postulate. After a brief introduction to Nash's theorem, we show that the vacuum energy density must remain confined to four-dimensional space-times, but the cosmological constant resulting from the contracted Bianchi identity represents a gravitational term which is not confined. In this case, the comparison between the vacuum energy and the cosmological constant in general relativity does not make sense. Instead, the geometrical fix provided by Nash's theorem suggests that the vacuum energy density contributes to the perturbations of the gravitational field.
Simulated annealing with constant thermodynamic speed
International Nuclear Information System (INIS)
Salamon, P.; Ruppeiner, G.; Liao, L.; Pedersen, J.
1987-01-01
Arguments are presented to the effect that the optimal annealing schedule for simulated annealing proceeds with constant thermodynamic speed, i.e., with dT/dt = -vT/(ε√C), where T is the temperature, ε is the relaxation time, C is the heat capacity, t is the time, and v is the thermodynamic speed. Experimental results consistent with this conjecture are presented from simulated annealing on graph partitioning problems. (orig.)
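A minimal sketch of such a schedule, integrating dT/dt = -vT/(ε√C) by forward Euler. In a real annealer, ε and C would be estimated from the run statistics at the current temperature; the constant values used here are purely illustrative.

```python
import math

def constant_speed_schedule(t0, v, eps, heat_capacity, n_steps, dt=1.0):
    """Temperatures along dT/dt = -v*T/(eps*sqrt(C)), integrated by forward Euler.
    eps (relaxation time) and heat_capacity are held constant here; in practice
    they are temperature-dependent quantities estimated during the anneal."""
    k = v / (eps * math.sqrt(heat_capacity))  # fractional cooling rate per unit time
    temps = [t0]
    for _ in range(n_steps):
        temps.append(temps[-1] * (1.0 - k * dt))
    return temps
```

With constant ε and C the schedule reduces to geometric cooling; the interesting regime is when ε grows near a phase transition, which automatically slows the schedule there.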
Farag, Mohammed; Sweity, Haitham; Fleckenstein, Matthias; Habibi, Saeid
2017-08-01
Real-time prediction of the battery's core temperature and terminal voltage is crucial for an accurate battery management system (BMS). In this paper, a combined electrochemical, heat generation, and thermal model is developed for large prismatic cells. The proposed model consists of three sub-models, an electrochemical model, a heat generation model, and a thermal model, which are coupled together in an iterative fashion through physicochemical temperature-dependent parameters. The proposed parameterization cycles identify the sub-models' parameters separately by exciting the battery under isothermal and non-isothermal operating conditions. The proposed combined model structure shows accurate terminal voltage and core temperature prediction at various operating conditions while maintaining a simple mathematical structure, making it ideal for real-time BMS applications. Finally, the model is validated against both isothermal and non-isothermal drive cycles, covering a broad range of C-rates and temperatures (-25 °C to 45 °C).
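The coupling loop described above can be caricatured with a lumped model: an electrical sub-model with an Arrhenius temperature-dependent internal resistance, an ohmic heat-generation term, and a single-node thermal model, iterated per time step. This is a drastically simplified stand-in for the paper's electrochemical model, and every parameter value below is illustrative, not taken from the paper.

```python
import math

def simulate_cell(i_profile, dt=1.0, t0=298.15):
    """Lumped sketch of the electro-thermal coupling loop.
    i_profile: discharge current (A) per time step. Returns (voltage, temperature) pairs.
    All parameters (R_ref, Ea, C_th, h, ocv) are illustrative assumptions."""
    r_ref, ea, r_gas = 2e-3, 2e4, 8.314   # ohm, J/mol, J/(mol K)
    c_th, h = 500.0, 1.5                  # J/K, W/K
    ocv, t_amb = 3.3, 298.15              # V, K
    temp = t0
    out = []
    for cur in i_profile:
        # electrical sub-model: Arrhenius temperature dependence of resistance
        res = r_ref * math.exp(ea / r_gas * (1.0 / temp - 1.0 / 298.15))
        volt = ocv - cur * res            # terminal voltage (discharge positive)
        # heat generation sub-model: ohmic losses only
        q = cur * cur * res
        # thermal sub-model: single node with convective loss to ambient
        temp += dt * (q - h * (temp - t_amb)) / c_th
        out.append((volt, temp))
    return out
```

The key structural point mirrored here is the iteration: each step feeds the updated temperature back into the temperature-dependent electrical parameters.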
Application of large area SiPMs for the readout of a plastic scintillator based timing detector
Betancourt, C.; Blondel, A.; Brundler, R.; Dätwyler, A.; Favre, Y.; Gascon, D.; Gomez, S.; Korzenev, A.; Mermod, P.; Noah, E.; Serra, N.; Sgalaberna, D.; Storaci, B.
2017-11-01
In this study an array of eight 6 mm × 6 mm area SiPMs was coupled to the end of a long plastic scintillator counter which was exposed to a 2.5 GeV/c muon beam at the CERN PS. Timing characteristics of bars with dimensions 150 cm × 6 cm × 1 cm and 120 cm × 11 cm × 2.5 cm have been studied. An 8-channel SiPM anode readout ASIC (MUSIC R1) based on a novel low input impedance current conveyor has been used to read out and amplify SiPMs independently and sum the signals at the end. Prospects for applications in large-scale particle physics detectors with timing resolution below 100 ps are provided in light of the results.
Large Observatory for x-ray Timing (LOFT-P): a Probe-class mission concept study
Wilson-Hodge, Colleen A.; Ray, Paul S.; Chakrabarty, Deepto; Feroci, Marco; Alvarez, Laura; Baysinger, Michael; Becker, Chris; Bozzo, Enrico; Brandt, Soren; Carson, Billy; Chapman, Jack; Dominguez, Alexandra; Fabisinski, Leo; Gangl, Bert; Garcia, Jay; Griffith, Christopher; Hernanz, Margarita; Hickman, Robert; Hopkins, Randall; Hui, Michelle; Ingram, Luster; Jenke, Peter; Korpela, Seppo; Maccarone, Tom; Michalska, Malgorzata; Pohl, Martin; Santangelo, Andrea; Schanne, Stephane; Schnell, Andrew; Stella, Luigi; van der Klis, Michiel; Watts, Anna; Winter, Berend; Zane, Silvia
2016-07-01
LOFT-P is a mission concept for a NASA Astrophysics Probe-Class X-ray timing mission addressing the questions: What is the equation of state of ultradense matter? What are the effects of strong gravity on matter spiraling into black holes? It would be optimized for sub-millisecond timing of bright Galactic X-ray sources, including X-ray bursters, black hole binaries, and magnetars, to study phenomena at the natural timescales of neutron star surfaces and black hole event horizons and to measure the mass and spin of black holes. These measurements are synergistic to imaging and high-resolution spectroscopy instruments, addressing much smaller distance scales than are possible without very long baseline X-ray interferometry, and using complementary techniques to address the geometry and dynamics of emission regions. LOFT-P would have an effective area of >6 m², more than 10 times that of the highly successful Rossi X-ray Timing Explorer (RXTE). A sky monitor (2-50 keV) acts as a trigger for pointed observations, providing high-duty-cycle, high-time-resolution monitoring of the X-ray sky with 20 times the sensitivity of the RXTE All-Sky Monitor, enabling multiwavelength and multimessenger studies. A probe-class mission concept would employ lightweight collimator technology and large-area solid-state detectors segmented into pixels or strips, technologies that were greatly advanced during the ESA M3 Phase A study of LOFT. Given the large community interested in LOFT (>800 supporters), the scientific productivity of this mission is expected to be very high, similar to or greater than RXTE (∼2000 refereed publications). We describe the results of a study, recently completed by the MSFC Advanced Concepts Office, that demonstrates that such a mission is feasible within a NASA probe-class mission budget.
Habarulema, John Bosco; Yizengaw, Endawoke; Katamzi-Joseph, Zama T.; Moldwin, Mark B.; Buchert, Stephan
2018-01-01
This paper discusses the ionosphere's response to the largest storm of solar cycle 24, during 16-18 March 2015. We have used Global Navigation Satellite Systems (GNSS) total electron content data to study large-scale traveling ionospheric disturbances (TIDs) over the American, African, and Asian regions. Equatorward large-scale TIDs propagated and crossed the equator to the other side of the hemisphere, especially over the American and Asian sectors. Poleward TIDs with velocities in the range ≈400-700 m/s have been observed during local daytime over the American and African sectors, originating from around the geomagnetic equator. Our investigation over the American sector shows that poleward TIDs may have been launched by increased Lorentz coupling as a result of a penetrating electric field during the southward turning of the interplanetary magnetic field, Bz. We have observed an increase in SWARM satellite electron density (Ne) at the same time as equatorward large-scale TIDs are visible over the European-African sector. The altitude Ne profiles from ionosonde observations show a possible link, namely that storm-induced TIDs may have influenced the plasma distribution in the topside ionosphere at SWARM satellite altitude.
From the Rydberg constant to the fundamental constants metrology
International Nuclear Information System (INIS)
Nez, F.
2005-06-01
This document reviews the theoretical and experimental achievements of the author since the beginning of his scientific career. This document is dedicated to the spectroscopy of hydrogen, deuterium and helium atoms. The first part is divided into 6 sub-sections: 1) the principles of hydrogen spectroscopy, 2) the measurement of the 2S-nS/nD transitions, 3) other optical frequency measurements, 4) our contribution to the determination of the Rydberg constant, 5) our current experiment on the 1S-3S transition, 6) the spectroscopy of the muonic hydrogen. Our experiments have improved the accuracy of the Rydberg Constant by a factor 25 in 15 years and we have achieved the first absolute optical frequency measurement of a transition in hydrogen. The second part is dedicated to the measurement of the fine structure constant and the last part deals with helium spectroscopy and the search for optical references in the near infrared range. (A.C.)
Casu, F.; de Luca, C.; Lanari, R.; Manunta, M.; Zinno, I.
2016-12-01
A methodology for computing surface deformation time series and mean velocity maps of large areas is presented. Our approach relies on the availability of a multi-temporal set of Synthetic Aperture Radar (SAR) data collected from ascending and descending orbits over an area of interest, and also permits estimation of the vertical and horizontal (East-West) displacement components of the Earth's surface. The adopted methodology is based on an advanced Cloud Computing implementation of the Differential SAR Interferometry (DInSAR) Parallel Small Baseline Subset (P-SBAS) processing chain, which allows the unsupervised processing of large SAR data volumes, from the raw (level-0) imagery up to the generation of DInSAR time series and maps. The presented solution, which is highly scalable, has been tested on the ascending and descending ENVISAT SAR archives acquired over a large area of Southern California (US) that extends for about 90,000 km². This input dataset has been processed in parallel by exploiting 280 computing nodes of the Amazon Web Services Cloud environment. Moreover, to produce the final mean deformation velocity maps of the vertical and East-West displacement components of the whole investigated area, we also took advantage of the information available from external GPS measurements, which makes it possible to account for regional trends not easily detectable by DInSAR and to refer the P-SBAS measurements to an external geodetic datum. The presented results clearly demonstrate the effectiveness of the proposed approach, which paves the way to the extensive use of the available ERS and ENVISAT SAR data archives. Furthermore, the proposed methodology is particularly suitable for dealing with the very large data flow provided by the Sentinel-1 constellation, thus permitting the extension of DInSAR analyses to a nearly global scale. This work is partially supported by: the DPC-CNR agreement, the EPOS-IP project and the ESA GEP project.
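Combining ascending and descending line-of-sight (LOS) measurements into vertical and East-West components, as mentioned above, amounts to a small linear inversion per pixel. The sketch below uses a common simplification (near-polar orbits with opposite look directions, North component neglected); the sign convention and incidence angles are assumptions for illustration, not the paper's exact geometry.

```python
import numpy as np

def decompose_los(d_asc, d_desc, inc_asc, inc_desc):
    """Invert ascending/descending LOS displacements for vertical and East-West
    components, under the simplified (assumed) geometry:
        d_asc  = d_up * cos(inc_asc)  + d_east * sin(inc_asc)
        d_desc = d_up * cos(inc_desc) - d_east * sin(inc_desc)
    inc_* are radar incidence angles in radians."""
    a = np.array([[np.cos(inc_asc),  np.sin(inc_asc)],
                  [np.cos(inc_desc), -np.sin(inc_desc)]])
    d_up, d_east = np.linalg.solve(a, np.array([d_asc, d_desc]))
    return d_up, d_east
```

In a full P-SBAS workflow this 2x2 solve would be applied per pixel to the co-registered ascending and descending velocity maps.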
Large Dielectric Constant Enhancement in MXene Percolative Polymer Composites
Tu, Shao Bo; Jiang, Qiu; Zhang, Xixiang; Alshareef, Husam N.
2018-01-01
near the percolation limit of about 15.0 wt % MXene loading, which surpasses all previously reported composites made of carbon-based fillers in the same polymer. With up to 10 wt % MXene loading, the dielectric loss of the MXene
Directory of Open Access Journals (Sweden)
Runchun Mark Wang
2015-05-01
Full Text Available We present a neuromorphic implementation of multiple synaptic plasticity learning rules, which include both Spike Timing Dependent Plasticity (STDP) and Spike Timing Dependent Delay Plasticity (STDDP). We present a fully digital implementation as well as a mixed-signal implementation, both of which use a novel dynamic-assignment time-multiplexing approach and support up to 2^26 (64M) synaptic plasticity elements. Rather than implementing dedicated synapses for particular types of synaptic plasticity, we implemented a more generic synaptic plasticity adaptor array that is separate from the neurons in the neural network. Each adaptor performs synaptic plasticity according to the arrival times of the pre- and post-synaptic spikes assigned to it, and sends out a weighted and/or delayed pre-synaptic spike to the target synapse in the neural network. This strategy provides great flexibility for building complex large-scale neural networks, as a neural network can be configured for multiple synaptic plasticity rules without changing its structure. We validate the proposed neuromorphic implementations with measurement results and illustrate that the circuits are capable of performing both STDP and STDDP. We argue that it is practical to scale the work presented here up to 2^36 (64G) synaptic adaptors on a current high-end FPGA platform.
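The adaptor's core computation, a weight change driven by the pre/post spike-time difference, can be sketched with the standard exponential STDP window. The window shape matches textbook STDP, but the specific amplitudes and time constants below are illustrative assumptions, not values from the hardware described above.

```python
import math

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for a spike pair, delta_t = t_post - t_pre (ms).
    Pre-before-post (delta_t > 0) potentiates, post-before-pre depresses,
    with exponentially decaying windows. Parameter values are illustrative."""
    if delta_t > 0:
        return a_plus * math.exp(-delta_t / tau)
    return -a_minus * math.exp(delta_t / tau)

def adapt(weight, t_pre, t_post, w_min=0.0, w_max=1.0):
    """One adaptor update: apply STDP to the stored weight and clip it, as a
    software stand-in for one element of the adaptor array described above."""
    return min(w_max, max(w_min, weight + stdp_dw(t_post - t_pre)))
```

In the time-multiplexed scheme, one physical circuit like this would be dynamically assigned to whichever of the 64M virtual adaptors has a spike pair to process.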
Large deviations of the finite-time magnetization of the Curie-Weiss random-field Ising model
Paga, Pierre; Kühn, Reimer
2017-08-01
We study the large deviations of the magnetization at some finite time in the Curie-Weiss random field Ising model with parallel updating. While relaxation dynamics in an infinite-time horizon gives rise to unique dynamical trajectories (specified by initial conditions and governed by first-order dynamics of the form m_{t+1} = f(m_t)), we observe that the introduction of a finite-time horizon and the specification of terminal conditions can generate a host of metastable solutions obeying second-order dynamics. We show that these solutions are governed by a Newtonian-like dynamics in discrete time which permits solutions in terms of both the first-order relaxation ("forward") dynamics and the backward dynamics m_{t+1} = f^{-1}(m_t). Our approach allows us to classify trajectories for a given final magnetization as stable or metastable according to the value of the rate function associated with them. We find that in analogy to the Freidlin-Wentzell description of the stochastic dynamics of escape from metastable states, the dominant trajectories may switch between the two types (forward and backward) of first-order dynamics. Additionally, we show how to compute rate functions when uncertainty in the quenched disorder is introduced.
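The forward map f and its inverse (the backward dynamics) can be made concrete for an assumed symmetric bimodal random field ±h0; this disorder choice and the parameter values are illustrative, not necessarily those of the paper. Since f is strictly increasing in m, the backward dynamics is well defined and can be evaluated by bisection.

```python
import math

def cw_map(m, beta=1.5, h0=0.3):
    """Forward map m_{t+1} = f(m_t) for the Curie-Weiss model with an assumed
    symmetric bimodal random field +/- h0, averaged over the disorder."""
    return 0.5 * (math.tanh(beta * (m + h0)) + math.tanh(beta * (m - h0)))

def cw_map_inv(y, beta=1.5, h0=0.3, lo=-5.0, hi=5.0):
    """Backward dynamics m_{t+1} = f^{-1}(m_t) by bisection; f is strictly
    increasing in m, so the inverse is well defined on the range of f."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if cw_map(mid, beta, h0) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Iterating the forward map relaxes to a fixed point m* = f(m*); the backward map retraces trajectories, which is the ingredient the finite-horizon analysis combines with the forward dynamics.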
International Nuclear Information System (INIS)
Parker, Leonard; Vanzella, Daniel A.T.
2004-01-01
We investigate the possibility that the late acceleration observed in the rate of expansion of the Universe is due to vacuum quantum effects arising in curved spacetime. The theoretical basis of the vacuum cold dark matter (VCDM), or vacuum metamorphosis, cosmological model of Parker and Raval is reexamined and improved. We show, by means of a manifestly nonperturbative approach, how the infrared behavior of the propagator (related to the large-time asymptotic form of the heat kernel) of a free scalar field in curved spacetime leads to nonperturbative terms in the effective action similar to those appearing in the earlier version of the VCDM model. The asymptotic form that we adopt for the propagator or heat kernel at large proper time s is motivated by, and consistent with, particular cases where the heat kernel has been calculated exactly, namely in de Sitter spacetime, in the Einstein static universe, and in the linearly expanding spatially flat Friedmann-Robertson-Walker (FRW) universe. This large-s asymptotic form generalizes somewhat the one suggested by the Gaussian approximation and the R-summed form of the propagator that earlier served as a theoretical basis for the VCDM model. The vacuum expectation value for the energy-momentum tensor of the free scalar field, obtained through variation of the effective action, exhibits a resonance effect when the scalar curvature R of the spacetime reaches a particular value related to the mass of the field. Modeling our Universe by an FRW spacetime filled with classical matter and radiation, we show that the back reaction caused by this resonance drives the Universe through a transition to an accelerating expansion phase, very much in the same way as originally proposed by Parker and Raval. Our analysis includes higher derivatives that were neglected in the earlier analysis, and takes into account the possible runaway solutions that can follow from these higher-derivative terms. We find that the runaway solutions do
Systematics of constant roll inflation
Anguelova, Lilia; Suranyi, Peter; Wijewardhana, L. C. R.
2018-02-01
We study constant roll inflation systematically. This is a regime, in which the slow roll approximation can be violated. It has long been thought that this approximation is necessary for agreement with observations. However, recently it was understood that there can be inflationary models with a constant, and not necessarily small, rate of roll that are both stable and compatible with the observational constraint ns ≈ 1. We investigate systematically the condition for such a constant-roll regime. In the process, we find a whole new class of inflationary models, in addition to the known solutions. We show that the new models are stable under scalar perturbations. Finally, we find a part of their parameter space, in which they produce a nearly scale-invariant scalar power spectrum, as needed for observational viability.
The large number hypothesis and Einstein's theory of gravitation
International Nuclear Information System (INIS)
Yun-Kau Lau
1985-01-01
In an attempt to reconcile the large number hypothesis (LNH) with Einstein's theory of gravitation, a tentative generalization of Einstein's field equations with time-dependent cosmological and gravitational constants is proposed. A cosmological model consistent with the LNH is deduced. The coupling formula of the cosmological constant with matter is found, and as a consequence, the time-dependent formulae of the cosmological constant and the mean matter density of the Universe at the present epoch are then found. Einstein's theory of gravitation, whether with a zero or nonzero cosmological constant, becomes a limiting case of the new generalized field equations after the early epoch.
Cockfield, Jeremy; Su, Kyungmin; Robbins, Kay A
2013-01-01
Experiments to monitor human brain activity during active behavior record a variety of modalities (e.g., EEG, eye tracking, motion capture, respiration monitoring) and capture a complex environmental context leading to large, event-rich time series datasets. The considerable variability of responses within and among subjects in more realistic behavioral scenarios requires experiments to assess many more subjects over longer periods of time. This explosion of data requires better computational infrastructure to more systematically explore and process these collections. MOBBED is a lightweight, easy-to-use, extensible toolkit that allows users to incorporate a computational database into their normal MATLAB workflow. Although capable of storing quite general types of annotated data, MOBBED is particularly oriented to multichannel time series such as EEG that have event streams overlaid with sensor data. MOBBED directly supports access to individual events, data frames, and time-stamped feature vectors, allowing users to ask questions such as what types of events or features co-occur under various experimental conditions. A database provides several advantages not available to users who process one dataset at a time from the local file system. In addition to archiving primary data in a central place to save space and avoid inconsistencies, such a database allows users to manage, search, and retrieve events across multiple datasets without reading the entire dataset. The database also provides infrastructure for handling more complex event patterns that include environmental and contextual conditions. The database can also be used as a cache for expensive intermediate results that are reused in such activities as cross-validation of machine learning algorithms. MOBBED is implemented over PostgreSQL, a widely used open source database, and is freely available under the GNU general public license at http://visual.cs.utsa.edu/mobbed. Source and issue reports for MOBBED
Li, Xi-Bing; Wang, Ze-Wei; Dong, Long-Jun
2016-01-01
Microseismic monitoring systems using local location techniques tend to be timely, automatic and stable. One basic requirement of these systems is the automatic picking of arrival times. However, arrival times generated by automated techniques always contain large picking errors (LPEs), which may make the location solution unreliable and cause the integrated system to be unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. In contrast to existing approaches, the VFOM optimizes a continuous, virtually established objective function to search the space for the common intersection of the hyperboloids determined by sensor pairs, rather than minimizing the residual between the model-calculated and measured arrivals. The results of numerical examples and on-site blasts show that the VFOM can obtain more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on objective functions to determine the LPE-tolerant mechanism, velocity sensitivity and stopping criteria of the VFOM. The proposed method is also capable of locating acoustic sources using passive techniques such as passive sonar detection and acoustic emission.
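The LPE-tolerance idea, scoring the intersection of pairwise hyperbolas with a bounded kernel rather than summing squared residuals, can be sketched in 2-D. The Gaussian kernel, grid search, and geometry below are illustrative stand-ins for the paper's virtual field objective: the point is that one arrival with a large picking error contributes nearly zero to the score instead of dominating a least-squares fit.

```python
import numpy as np
from itertools import combinations

def vfom_locate(sensors, t, v, xs, ys, sigma=10.0):
    """Grid search maximizing a bounded pairwise score. For each sensor pair,
    the residual between the observed and model distance difference feeds a
    Gaussian kernel, so an outlier pick saturates near zero contribution.
    sensors: (N, 2) positions; t: arrival times; v: wave speed."""
    best, best_score = None, -np.inf
    for x in xs:
        for y in ys:
            d = np.linalg.norm(sensors - np.array([x, y]), axis=1)
            score = 0.0
            for i, j in combinations(range(len(t)), 2):
                r = (d[i] - d[j]) - v * (t[i] - t[j])  # hyperbola residual
                score += np.exp(-r * r / (2.0 * sigma ** 2))
            if score > best_score:
                best_score, best = score, np.array([x, y])
    return best
```

With one badly picked arrival, the pairs involving the clean sensors still agree only at the true source, so the grid maximum stays there.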
Herda, Maxime; Rodrigues, L. Miguel
2018-03-01
The present contribution investigates the dynamics generated by the two-dimensional Vlasov-Poisson-Fokker-Planck equation for charged particles in a steady inhomogeneous background of opposite charges. We provide global-in-time estimates that are uniform with respect to initial data taken in a bounded set of a weighted L^2 space, and where dependencies on the mean free path τ and the Debye length δ are made explicit. In our analysis the mean free path covers the full range of possible values: from the regime of evanescent collisions τ → ∞ to the strongly collisional regime τ → 0. As a counterpart, the largeness of the Debye length, which enforces a weakly nonlinear regime, is used to close our nonlinear estimates. Accordingly we pay special attention to relaxing as much as possible the τ-dependent constraint on δ ensuring exponential decay with explicit τ-dependent rates towards the stationary solution. In the strongly collisional limit τ → 0, we also examine all possible asymptotic regimes selected by a choice of observation time scale. Here also, our emphasis is on strong convergence, uniformity with respect to time and to initial data in bounded sets of an L^2 space. Our proofs rely on a detailed study of the nonlinear elliptic equation defining stationary solutions and a careful tracking and optimization of parameter dependencies of hypocoercive/hypoelliptic estimates.
Directory of Open Access Journals (Sweden)
Gray G.T.
2012-08-01
Full Text Available Time-temperature equivalence is a widely recognized property of many time-dependent material systems, where there is a clear predictive link relating the deformation response at a nominal temperature and a high strain-rate to an equivalent response at a depressed temperature and nominal strain-rate. It has been found that high-density polyethylene (HDPE) obeys a linear empirical formulation relating test temperature and strain-rate. This observation was extended to continuous stress-strain curves, such that material response measured in a load frame at large strains and low strain-rates (at depressed temperatures) could be translated into a temperature-dependent response at high strain-rates and validated against Taylor impact results. Time-temperature equivalence was used in conjunction with jump-rate compression tests to investigate isothermal response at high strain-rate while excluding adiabatic heating. The validated constitutive response was then applied to the analysis of Dynamic-Tensile-Extrusion (Dyn-Ten-Ext) of HDPE, a tensile analog to Taylor impact developed at LANL. The Dyn-Ten-Ext test results and FEA found that HDPE deformed smoothly after exiting the die and, after substantial drawing, appeared to undergo a pressure-dependent shear damage mechanism at intermediate velocities, while it fragmented at high velocities. Dynamic-Tensile-Extrusion, properly coupled with a validated constitutive model, can successfully probe extreme tensile deformation and damage of polymers.
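A linear empirical formulation of this kind maps a (temperature, strain-rate) test condition to an equivalent temperature at a reference strain-rate, with a material-specific slope in kelvin per decade of strain rate. The sketch below assumes that linear form; the slope value is illustrative, not the fitted HDPE coefficient.

```python
import math

def equivalent_temperature(t_test, rate_test, rate_target, slope=7.0):
    """Map a (temperature, strain-rate) pair to the temperature giving an
    equivalent response at rate_target, via the assumed linear relation
        T_eq = T_test + slope * log10(rate_target / rate_test).
    slope is in K per decade of strain rate and is material-specific;
    the default here is purely illustrative."""
    return t_test + slope * math.log10(rate_target / rate_test)
```

Used this way, a quasi-static stress-strain curve measured at a depressed temperature can be relabeled as a high-rate curve at the equivalent (higher) temperature.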
International Nuclear Information System (INIS)
Frullani, Salvatore; Castelluccio, Donato M.; Cisbani, Evaristo; Colilli, Stefano; Fratoni, Rolando; Giuliani, Fausto; Mostarda, Angelo; Colangeli, Giorgio; De Otto, Gian L.; Marchiori, Carlo; Paoloni, Gianfranco
2008-01-01
An aerial platform equipped with a sampling line and real-time monitoring of the sampled aerosol is presented. The system is composed of: a) a Sky Arrow 650 fixed-wing aircraft with the front part of the fuselage adapted to house the detection and acquisition equipment; b) a compact air sampling line in which isokinetic sampling is dynamically maintained, with aerosol collected on a filter positioned along the line and hosted on a rotating 4-filter disk; c) a detection subsystem: a small BGO scintillator and Geiger counter right behind the sampling filter, an HPGe detector that allows radionuclide identification in the collected aerosol samples, and a large NaI(Tl) crystal that detects airborne and ground gamma radiation; d) several environmental sensors (temperature, pressure, aircraft/wind speed) and a GPS receiver that support the full characterization of the sampling conditions and the temporal and geographical location of the acquired data; e) an acquisition and control system based on compact electronics and real-time software that operates the sampling line actuators, guarantees the dynamical isokinetic condition, and acquires the detector and sensor data. With this system, quantitative measurements can be made available also during the plume phase of an accident, whereas other aerial platforms, without sampling capability, can only be used for qualitative assessments. Transmission of all data will soon be implemented in order to make them available in real time to the Technical Centre for Emergency Management. The use of an unmanned aerial vehicle (UAV) is discussed as a future option. (author)