WorldWideScience

Sample records for large time existence

  1. Global Existence and Large Time Behavior of Solutions to the Bipolar Nonisentropic Euler-Poisson Equations

    Directory of Open Access Journals (Sweden)

    Min Chen

    2014-01-01

Full Text Available We study the one-dimensional bipolar nonisentropic Euler-Poisson equations, which model various physical phenomena such as the propagation of electrons and holes in submicron semiconductor devices, the propagation of positive and negative ions in plasmas, and the biological transport of ions through channel proteins. We show the existence and large time behavior of global smooth solutions for the initial value problem when the difference of the two particles’ initial masses is nonzero and the far field of the two particles’ initial temperatures differs from the ambient device temperature. This result improves that of Y.-P. Li, which covers the case where the difference of the two particles’ initial masses is zero and the far field of the initial temperature equals the ambient device temperature.
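
A representative form of the one-dimensional bipolar nonisentropic Euler-Poisson system studied in this line of work is sketched below. This is a hedged, generic formulation (the exact scaling, relaxation terms, and sign conventions vary between papers and are assumptions here):

```latex
\begin{aligned}
  &\partial_t n_i + \partial_x (n_i u_i) = 0, \\
  &\partial_t (n_i u_i) + \partial_x \bigl(n_i u_i^2 + n_i \theta_i\bigr)
     = (-1)^{\,i+1}\, n_i E - \frac{n_i u_i}{\tau}, \\
  &\partial_x E = n_1 - n_2, \qquad i = 1, 2,
\end{aligned}
```

where \(n_i, u_i, \theta_i\) are the densities, velocities, and temperatures of the two carriers (e.g. electrons and holes), \(E\) is the electric field, and \(\tau\) a momentum relaxation time; in the nonisentropic case each \(\theta_i\) satisfies an additional energy equation whose precise form depends on the model.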

  2. Modeling and control of a large nuclear reactor. A three-time-scale approach

    Energy Technology Data Exchange (ETDEWEB)

    Shimjith, S.R. [Indian Institute of Technology Bombay, Mumbai (India); Bhabha Atomic Research Centre, Mumbai (India); Tiwari, A.P. [Bhabha Atomic Research Centre, Mumbai (India); Bandyopadhyay, B. [Indian Institute of Technology Bombay, Mumbai (India). IDP in Systems and Control Engineering

    2013-07-01

Recent research on modeling and control of a large nuclear reactor, presenting a three-time-scale approach; written by leading experts in the field. Control analysis and design of large nuclear reactors requires a suitable mathematical model representing the steady state and dynamic behavior of the reactor with reasonable accuracy. This task is, however, quite challenging because of the several complex dynamic phenomena existing in a reactor. Quite often, the models developed are of prohibitively large order, non-linear, and of complex structure not readily amenable to control studies. Moreover, the existence of simultaneously occurring dynamic variations at different speeds makes the mathematical model susceptible to numerical ill-conditioning, inhibiting direct application of standard control techniques. This monograph introduces a technique for mathematical modeling of large nuclear reactors in the framework of multi-point kinetics, to obtain a comparatively smaller-order model in standard state space form, thus overcoming these difficulties. It further brings in innovative methods for controller design for systems exhibiting the multi-time-scale property, with emphasis on three-time-scale systems.

  3. The existence and regularity of time-periodic solutions to the three-dimensional Navier–Stokes equations in the whole space

    International Nuclear Information System (INIS)

    Kyed, Mads

    2014-01-01

    The existence, uniqueness and regularity of time-periodic solutions to the Navier–Stokes equations in the three-dimensional whole space are investigated. We consider the Navier–Stokes equations with a non-zero drift term corresponding to the physical model of a fluid flow around a body that moves with a non-zero constant velocity. The existence of a strong time-periodic solution is shown for small time-periodic data. It is further shown that this solution is unique in a large class of weak solutions that can be considered physically reasonable. Finally, we establish regularity properties for any strong solution regardless of its size. (paper)
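
The time-periodic problem described above is typically posed as the Navier–Stokes system with a drift (Oseen) term coming from the body's constant velocity. A representative formulation (notation assumed for illustration, not quoted from the paper):

```latex
\begin{aligned}
  &\partial_t u - \Delta u - \lambda\,\partial_{x_1} u + (u \cdot \nabla)u + \nabla p = f
     && \text{in } \mathbb{R}^3 \times \mathbb{R}, \\
  &\operatorname{div} u = 0, \\
  &u(x, t+T) = u(x, t), \qquad f(x, t+T) = f(x, t),
\end{aligned}
```

where \(\lambda \neq 0\) is the constant speed of the moving body and \(T\) the period; the drift term \(\lambda\,\partial_{x_1} u\) is what distinguishes this problem from the classical time-periodic Navier–Stokes equations.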

  4. MageComet—web application for harmonizing existing large-scale experiment descriptions

    OpenAIRE

    Xue, Vincent; Burdett, Tony; Lukk, Margus; Taylor, Julie; Brazma, Alvis; Parkinson, Helen

    2012-01-01

Motivation: Meta-analysis of large gene expression datasets obtained from public repositories requires consistently annotated data. Curation of such experiments, however, is an expert activity that involves repetitive manipulation of text. Existing tools for automated curation are few, and they bottleneck the analysis pipeline. Results: We present MageComet, a web application for biologists and annotators that facilitates the re-annotation of gene expression experiments in MAGE-TAB format. It i...

  5. Discretization of space and time: a slight modification to the Newtonian gravitation which implies the existence of black holes

    OpenAIRE

    Roatta , Luca

    2017-01-01

Assuming that space and time can only take discrete values, it is shown how deformed space and time cause gravitational attraction, whose law in a discrete context differs slightly from the Newtonian one but coincides with it exactly at large distances. This difference is directly connected to the existence of black holes, which turn out to have the structure of a hollow sphere.

  6. Effectiveness of a large mimic panel in an existing nuclear power plant central control board

    International Nuclear Information System (INIS)

Kubota, Ryuji; Satoh, Hiroyuki; Sasajima, Katsuhiro; Kawano, Ryutaro; Shibuya, Shinya

    1999-01-01

We analyzed nuclear power plant (NPP) operators' behavior under emergency conditions using training simulators, in a joint research project of the Japanese BWR groups that ran for twelve years. In phase IV of this project we carried out two kinds of experiments to evaluate the effectiveness of the interfaces. The first evaluated interfaces such as CRTs with touch screens, a large mimic panel, and a hierarchical annunciator system introduced in the newly developed ABWR-type central control board. In the second, we analyzed operators' behavior in emergency conditions using a first-generation BWR-type central control board to which new interfaces, such as a large display screen and demarcation on the board, had been added to help operators understand the plant. Demarcation is a visual interface improvement in which a line enclosing several components causes them to be perceived as a group. The results showed that both the large mimic panel introduced in the ABWR central control board and the large display screen in the existing BWR-type central control board improved the performance of the NPP operators in the experiments. It was therefore expected that introducing a large mimic panel into existing BWR-type central control boards would improve operators' performance. However, actual installation of a large display board into existing central control boards faces spatial and hardware constraints, so the sizes of the lamps and of the lines connecting the symbols of pumps, valves and other components will have to be modified under these constraints. It is therefore important to evaluate the information displayed on the large display board before actual installation. We conducted experiments addressing these problems using TEPCO's research simulator, to which a large mimic panel has been added. (author)

  7. Modeling and Control of a Large Nuclear Reactor A Three-Time-Scale Approach

    CERN Document Server

    Shimjith, S R; Bandyopadhyay, B

    2013-01-01

Control analysis and design of large nuclear reactors requires a suitable mathematical model representing the steady state and dynamic behavior of the reactor with reasonable accuracy. This task is, however, quite challenging because of the several complex dynamic phenomena existing in a reactor. Quite often, the models developed are of prohibitively large order, non-linear, and of complex structure not readily amenable to control studies. Moreover, the existence of simultaneously occurring dynamic variations at different speeds makes the mathematical model susceptible to numerical ill-conditioning, inhibiting direct application of standard control techniques. This monograph introduces a technique for mathematical modeling of large nuclear reactors in the framework of multi-point kinetics, to obtain a comparatively smaller-order model in standard state space form, thus overcoming these difficulties. It further brings in innovative methods for controller design for systems exhibiting the multi-time-scale property,...

  8. Existence conditions for bulk large-wavevector waves in metal-dielectric and graphene-dielectric multilayer hyperbolic metamaterials

    DEFF Research Database (Denmark)

    Zhukovsky, Sergei; Andryieuski, Andrei; Lavrinenko, Andrei

    2014-01-01

We theoretically investigate general existence conditions for broadband bulk large-wavevector (high-k) propagating waves (such as volume plasmon polaritons in hyperbolic metamaterials) in arbitrary subwavelength periodic multilayer structures. Treating the elementary excitation in the unit cell of the structure as a generalized resonance pole of the reflection coefficient and using Bloch's theorem, we derive analytical expressions for the band of large-wavevector propagating solutions. We apply our formalism to determine the high-k band existence in two important cases: the well-known metal-dielectric...

  9. Quantifying expert consensus against the existence of a secret, large-scale atmospheric spraying program

    Science.gov (United States)

    Shearer, Christine; West, Mick; Caldeira, Ken; Davis, Steven J.

    2016-08-01

Nearly 17% of people in an international survey said they believed the existence of a secret large-scale atmospheric program (SLAP) to be true or partly true. SLAP is commonly referred to as ‘chemtrails’ or ‘covert geoengineering’, and has led to a number of websites purporting to show evidence of widespread chemical spraying linked to negative impacts on human health and the environment. To address these claims, we surveyed two groups of experts—atmospheric chemists with expertise in condensation trails and geochemists working on atmospheric deposition of dust and pollution—to scientifically evaluate for the first time the claims of SLAP theorists. Results show that 76 of the 77 scientists (98.7%) who took part in this study said they had not encountered evidence of a SLAP, and that the data cited as evidence could be explained through other factors, including well-understood physics and chemistry associated with aircraft contrails and atmospheric aerosols. Our goal is not to sway those already convinced that there is a secret, large-scale spraying program—who often reject counter-evidence as further proof of their theories—but rather to establish a source of objective science that can inform public discourse.

  10. Short-time existence of solutions for mean-field games with congestion

    KAUST Repository

    Gomes, Diogo A.

    2015-11-20

    We consider time-dependent mean-field games with congestion that are given by a Hamilton–Jacobi equation coupled with a Fokker–Planck equation. These models are motivated by crowd dynamics in which agents have difficulty moving in high-density areas. The congestion effects make the Hamilton–Jacobi equation singular. The uniqueness of solutions for this problem is well understood; however, the existence of classical solutions was only known in very special cases, stationary problems with quadratic Hamiltonians and some time-dependent explicit examples. Here, we demonstrate the short-time existence of C∞ solutions for sub-quadratic Hamiltonians.
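
A representative time-dependent mean-field game with congestion, of the kind referred to above, couples a Hamilton–Jacobi equation for the value function \(u\) with a Fokker–Planck equation for the agent density \(m\). The following is a sketch with a power-type congestion exponent \(\alpha > 0\) (the paper's exact model may differ):

```latex
\begin{aligned}
  &-\partial_t u - \Delta u + \frac{|Du|^2}{2\, m^{\alpha}} = g(m), \\
  &\ \ \partial_t m - \Delta m - \operatorname{div}\!\bigl(m^{\,1-\alpha}\, Du\bigr) = 0,
\end{aligned}
```

with suitable initial and terminal conditions. The factor \(m^{-\alpha}\) makes motion expensive in high-density regions, and it is exactly this term that renders the Hamilton–Jacobi equation singular where \(m\) is small.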

  11. The existence of very large-scale structures in the universe

    Energy Technology Data Exchange (ETDEWEB)

Goicoechea, L.J.; Martin-Mirones, J.M. [Universidad de Cantabria, Santander (ES)]

    1989-09-01

Assuming that the dipole moment observed in the cosmic background radiation (microwaves and X-rays) can be interpreted as a consequence of the motion of the observer toward a non-local and very large-scale structure in our universe, we study the perturbation of the m-z relation by this inhomogeneity, the dynamical contribution of sources to the dipole anisotropy in the X-ray background and the imprint that several structures with such characteristics would have had on the microwave background at the decoupling. We conclude that in this model the observed anisotropy in the microwave background on intermediate angular scales (≈10°) may be in conflict with the existence of superstructures.

  12. Familiality of co-existing ADHD and tic disorders: evidence from a large sibling study

    Directory of Open Access Journals (Sweden)

    Veit Roessner

    2016-07-01

Full Text Available Background: The association of attention-deficit/hyperactivity disorder (ADHD) and tic disorder (TD) is frequent and clinically important. Very few, and inconclusive, attempts have been made to clarify whether and how the combination ADHD+TD runs in families. Aim: To determine, for the first time in a large-scale ADHD sample, whether ADHD+TD increases the risk of ADHD+TD in siblings and, also for the first time, whether this is independent of their general psychopathological vulnerability. Methods: The study is based on the International Multicenter ADHD Genetics (IMAGE) study. The present sub-sample of 2815 individuals included ADHD index patients with co-existing TD (ADHD+TD, n=262) and without TD (ADHD-TD, n=947), as well as their 1606 full siblings (n=358 siblings of the ADHD+TD index patients and n=1248 of the ADHD-TD index patients). We assessed psychopathological symptoms in index patients and siblings using the Strengths and Difficulties Questionnaire (SDQ) and the parent and teacher Conners’ long version Rating Scales (CRS). For disorder classification, the Parental Account of Childhood Symptoms (PACS) interview was applied in n=271 children. Odds ratios computed with the GENMOD procedure (PROC GENMOD) were used to test whether the risk for ADHD, TD and ADHD+TD in siblings was associated with the related index patients’ diagnoses. To obtain an estimate of specificity, we compared the four groups on general psychopathological symptoms. Results: Co-existing ADHD+TD in index patients increased the risk of both comorbid ADHD+TD and TD alone in the siblings of these index patients. These effects did not extend to general psychopathology. Interpretation: Co-existence of ADHD+TD may segregate in families. The same holds true for TD (without ADHD). Hence, the segregation of TD (included in both groups) seems to be the determining factor, independent of further behavioral problems. This close relationship between ADHD and TD supports the clinical approach of carefully assessing ADHD in

  13. Time-Efficient Cloning Attacks Identification in Large-Scale RFID Systems

    Directory of Open Access Journals (Sweden)

    Ju-min Zhao

    2017-01-01

Full Text Available Radio Frequency Identification (RFID) is an emerging technology for electronic labeling of objects for the purpose of automatically identifying, categorizing, locating, and tracking the objects. But in their current form, RFID systems are susceptible to cloning attacks that seriously threaten RFID applications and are hard to prevent. Existing protocols aim at detecting whether there are cloning attacks in single-reader RFID systems. In this paper, we investigate cloning attack identification in the multireader scenario and propose the first time-efficient protocol, called the Cloning Attacks Identification Protocol (CAIP), to identify all cloned tags in multireader RFID systems. We evaluate the performance of CAIP through extensive simulations. The results show that CAIP can identify all the cloned tags in large-scale RFID systems fairly fast with the required accuracy.

  14. Wealth Transfers Among Large Customers from Implementing Real-Time Retail Electricity Pricing

    OpenAIRE

    Borenstein, Severin

    2007-01-01

    Adoption of real-time electricity pricing — retail prices that vary hourly to reflect changing wholesale prices — removes existing cross-subsidies to those customers that consume disproportionately more when wholesale prices are highest. If their losses are substantial, these customers are likely to oppose RTP initiatives unless there is a supplemental program to offset their loss. Using data on a sample of 1142 large industrial and commercial customers in northern California, I show that RTP...

  15. Existence and Stability of Traveling Waves for Degenerate Reaction-Diffusion Equation with Time Delay

    Science.gov (United States)

    Huang, Rui; Jin, Chunhua; Mei, Ming; Yin, Jingxue

    2018-01-01

This paper deals with the existence and stability of traveling wave solutions for a degenerate reaction-diffusion equation with time delay. The degeneracy of the spatial diffusion, together with the effect of the time delay, is the essential difficulty in proving the existence of the traveling waves and their stability. To treat this case, we first show the existence of smooth- and sharp-type traveling wave solutions in the case c ≥ c^* for the degenerate reaction-diffusion equation without delay, where c^* > 0 is the critical wave speed of smooth traveling waves. Then, as a small perturbation, we obtain the existence of the smooth non-critical traveling waves for the degenerate diffusion equation with small time delay τ > 0. Furthermore, we prove the global existence and uniqueness of the C^{α,β}-solution to the time-delayed degenerate reaction-diffusion equation via compactness analysis. Finally, by the weighted energy method, we prove that the smooth non-critical traveling wave is globally stable in the weighted L^1-space. The exponential convergence rate is also derived.
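
A prototypical equation of the class treated here is a time-delayed Fisher–KPP equation with degenerate (porous-medium-type) diffusion; in a hedged, representative form (the paper's exact reaction term is not reproduced here):

```latex
\partial_t u = \partial_x^2 \bigl(u^m\bigr) + u\,\bigl(1 - u(t - \tau, x)\bigr),
\qquad m > 1,\ \tau \ge 0,
```

with traveling waves sought in the form \(u(t,x) = \phi(x + ct)\). The degeneracy at \(u = 0\) (the diffusivity \(m\,u^{m-1}\) vanishes there) is what allows sharp-type fronts to appear alongside smooth ones.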

  17. Process improvement to enhance existing stroke team activity toward more timely thrombolytic treatment.

    Science.gov (United States)

    Cho, Han-Jin; Lee, Kyung Yul; Nam, Hyo Suk; Kim, Young Dae; Song, Tae-Jin; Jung, Yo Han; Choi, Hye-Yeon; Heo, Ji Hoe

    2014-10-01

    Process improvement (PI) is an approach for enhancing the existing quality improvement process by making changes while keeping the existing process. We have shown that implementation of a stroke code program using a computerized physician order entry system is effective in reducing the in-hospital time delay to thrombolysis in acute stroke patients. We investigated whether implementation of this PI could further reduce the time delays by continuous improvement of the existing process. After determining a key indicator [time interval from emergency department (ED) arrival to intravenous (IV) thrombolysis] and conducting data analysis, the target time from ED arrival to IV thrombolysis in acute stroke patients was set at 40 min. The key indicator was monitored continuously at a weekly stroke conference. The possible reasons for the delay were determined in cases for which IV thrombolysis was not administered within the target time and, where possible, the problems were corrected. The time intervals from ED arrival to the various evaluation steps and treatment before and after implementation of the PI were compared. The median time interval from ED arrival to IV thrombolysis in acute stroke patients was significantly reduced after implementation of the PI (from 63.5 to 45 min, p=0.001). The variation in the time interval was also reduced. A reduction in the evaluation time intervals was achieved after the PI [from 23 to 17 min for computed tomography scanning (p=0.003) and from 35 to 29 min for complete blood counts (p=0.006)]. PI is effective for continuous improvement of the existing process by reducing the time delays between ED arrival and IV thrombolysis in acute stroke patients.

  18. Large time behavior of entropy solutions to one-dimensional unipolar hydrodynamic model for semiconductor devices

    Science.gov (United States)

    Huang, Feimin; Li, Tianhong; Yu, Huimin; Yuan, Difan

    2018-06-01

We are concerned with the global existence and large time behavior of entropy solutions to the one-dimensional unipolar hydrodynamic model for semiconductors, in the form of Euler-Poisson equations, in a bounded interval. In this paper, we first prove the global existence of entropy solutions by the vanishing viscosity method and the compensated compactness framework. In particular, the solutions are uniformly bounded with respect to the space and time variables by means of modified Riemann invariants and the theory of invariant regions. Based on the uniform estimates of the density, we further show that the entropy solution converges to the corresponding unique stationary solution exponentially in time. No smallness condition is assumed on the initial data or the doping profile. Moreover, the novelty of this paper is the uniform-in-time bound for the weak solutions of the isentropic Euler-Poisson system.

  19. The large discretization step method for time-dependent partial differential equations

    Science.gov (United States)

    Haras, Zigo; Taasan, Shlomo

    1995-01-01

    A new method for the acceleration of linear and nonlinear time dependent calculations is presented. It is based on the Large Discretization Step (LDS) approximation, defined in this work, which employs an extended system of low accuracy schemes to approximate a high accuracy discrete approximation to a time dependent differential operator. Error bounds on such approximations are derived. These approximations are efficiently implemented in the LDS methods for linear and nonlinear hyperbolic equations, presented here. In these algorithms the high and low accuracy schemes are interpreted as the same discretization of a time dependent operator on fine and coarse grids, respectively. Thus, a system of correction terms and corresponding equations are derived and solved on the coarse grid to yield the fine grid accuracy. These terms are initialized by visiting the fine grid once in many coarse grid time steps. The resulting methods are very general, simple to implement and may be used to accelerate many existing time marching schemes.
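
The coarse-grid correction idea described above can be illustrated with a minimal sketch. This is not the paper's algorithm; it is a generic defect-correction analogue for 1D linear advection with a first-order upwind scheme, in which one fine-grid visit supplies a correction term that is then carried through many cheap coarse-grid steps (all names and the specific setup are my own assumptions):

```python
import numpy as np

def step(u, c):
    """One periodic upwind step for u_t + a u_x = 0 at CFL number c."""
    return u - c * (u - np.roll(u, 1))

def restrict(u):
    """Fine grid -> coarse grid by taking every other point."""
    return u[::2]

def lds_advance(u_fine, c, K):
    """Advance K coarse time steps using a single fine-grid visit.

    Two fine steps span one coarse step (dx and dt both doubled at the
    same CFL number), so the defect tau measures how much the coarse
    scheme deviates from the fine one; carrying tau in each coarse step
    approximately restores fine-grid accuracy at coarse-grid cost.
    """
    uf = step(step(u_fine, c), c)        # fine-grid visit: two fine steps
    uc = step(restrict(u_fine), c)       # one coarse step from same state
    tau = restrict(uf) - uc              # defect-correction term
    u = restrict(u_fine)
    for _ in range(K):                   # cheap coarse steps carry tau
        u = step(u, c) + tau
    return u
```

At CFL number 1 the upwind scheme is exact (a pure shift), so the correction vanishes and `lds_advance` reproduces the exact rotation of the coarse samples; at smaller CFL numbers `tau` partially compensates the extra numerical diffusion of the coarse grid.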

  20. Existence of a time-dependent heat flux-related ponderomotive effect

    International Nuclear Information System (INIS)

    Schamel, H.; Sack, C.

    1980-01-01

    The existence of a new ponderomotive effect associated with high-frequency waves is pointed out. It originates when time-dependency, mean velocities, or divergent heat fluxes are involved and it supplements the two effects known previously, namely, the ponderomotive force and fake heating. Two proofs are presented; the first is obtained by establishing the momentum equations generalized by including radiation effects and the second by solving the quasi-linear-type diffusion equation explicitly. For a time-dependent wave packet the solution exhibits a new contribution in terms of an integral over previous states. Owing to this term, the plasma has a memory which leads to a breaking of the time symmetry of the plasma response. The range, influenced by the localized wave packet, expands during the course of time due to streamers emanating from the wave active region. Perturbations, among which is the heat flux, are carried to remote positions and, consequently, the region accessible to wave heating is increased. The density dip appears to be less pronounced at the center, and its generation and decay are delayed. The analysis includes a self-consistent action of high-frequency waves as well as the case of traveling wave packets. In order to establish the existence of this new effect, the analytical results are compared with recent microwave experiments. The possibility of generating fast particles by this new ponderomotive effect is emphasized

  1. The Existence of Local Wisdom Value Through Minangkabau Dance Creation Representation in Present Time

    Directory of Open Access Journals (Sweden)

    Indrayuda Indrayuda

    2017-01-01

Full Text Available This paper aims to reveal the existence of local wisdom values in Minangkabau through the representation of Minangkabau dance creations in present-day West Sumatera. The existence of the dance itself affects the continuation of local values in West Sumatera. A qualitative method was used to analyze the local wisdom values represented in present-day Minangkabau dance creation through reconstruction and acculturation, as the continuation of local wisdom. The study also takes a multidisciplinary approach, drawing on the sociology and anthropology of dance and the sociology and anthropology of culture. The object of the research was present-day Minangkabau dance creation; the data were collected through interviews, direct observation, and documentation, and analyzed following the technique of Miles and Huberman. The results show that Minangkabau dance creation is a reconstruction of older traditional dances, shaped through acculturation, and contains local wisdom values. The existence of Minangkabau dance creation can affect the continuation of local wisdom values in Minangkabau society in West Sumatera, and dance creation has maintained Minangkabau local wisdom values into the present time.

  2. ENERGY DEMANDS OF THE EXISTING COLLECTIVE BUILDINGS WITH BEARING STRUCTURE OF LARGE PRECAST CONCRETE PANELS FROM TIMISOARA

    Directory of Open Access Journals (Sweden)

    Pescari S.

    2015-05-01

Full Text Available One of the targets of the EU directives on the energy performance of buildings is to reduce the energy consumption of existing buildings by finding efficient solutions for thermal rehabilitation. To find adequate solutions, the first step is to establish the current state of the buildings and to determine their actual energy consumption. The current paper presents the energy demands of the existing buildings with bearing structures of large precast concrete panels in the city of Timisoara. Timisoara is one of the most important cities in western Romania, ranking third in terms of size and economic development. The Census of Population and Housing of 2011 states that Timisoara has about 127841 private dwellings, 60 percent of which are in collective buildings. The energy demand values of the existing buildings with bearing structures of large precast concrete panels in Timisoara, in their current condition, are higher than the accepted values provided in the Romanian normative C107; the difference between these two values can reach up to 300 percent.

  3. Real-time simulation of large-scale floods

    Science.gov (United States)

    Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.

    2016-08-01

Given the complexity of real-time water situations, the real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional, shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance numerical stability, and an adaptive method is proposed to improve running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.
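
The underlying model is the 2D depth-averaged shallow water system; in conservative form it reads (standard textbook notation, not quoted from the paper):

```latex
\begin{aligned}
  &\partial_t h + \partial_x (hu) + \partial_y (hv) = 0, \\
  &\partial_t (hu) + \partial_x \Bigl(hu^2 + \tfrac{1}{2} g h^2\Bigr) + \partial_y (huv)
     = -\,g h\,\partial_x z_b - \tau_{bx}, \\
  &\partial_t (hv) + \partial_x (huv) + \partial_y \Bigl(hv^2 + \tfrac{1}{2} g h^2\Bigr)
     = -\,g h\,\partial_y z_b - \tau_{by},
\end{aligned}
```

where \(h\) is the water depth, \((u, v)\) the depth-averaged velocity, \(z_b\) the bed elevation, and \(\tau_b\) the bed friction. A Godunov-type finite volume scheme updates cell averages from fluxes obtained by solving Riemann problems at the interfaces of the unstructured mesh, which is where the wet/dry front treatment matters.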

  4. Global low-energy weak solution and large-time behavior for the compressible flow of liquid crystals

    Science.gov (United States)

    Wu, Guochun; Tan, Zhong

    2018-06-01

    In this paper, we consider the weak solution of the simplified Ericksen-Leslie system modeling compressible nematic liquid crystal flows in R3. When the initial data are of small energy and initial density is positive and essentially bounded, we prove the existence of a global weak solution in R3. The large-time behavior of a global weak solution is also established.
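
The simplified Ericksen–Leslie system for compressible nematic liquid crystal flow mentioned above is commonly written in the following standard form from the literature (\(\rho\) density, \(u\) velocity, \(d\) unit director field; the paper's exact normalization may differ):

```latex
\begin{aligned}
  &\partial_t \rho + \operatorname{div}(\rho u) = 0, \\
  &\partial_t (\rho u) + \operatorname{div}(\rho u \otimes u) + \nabla P(\rho)
     = \mu \Delta u + (\mu + \lambda) \nabla \operatorname{div} u
       - \operatorname{div}\Bigl(\nabla d \odot \nabla d - \tfrac{1}{2} |\nabla d|^2\, \mathbb{I}\Bigr), \\
  &\partial_t d + (u \cdot \nabla) d = \Delta d + |\nabla d|^2 d, \qquad |d| = 1,
\end{aligned}
```

where \((\nabla d \odot \nabla d)_{ij} = \partial_i d \cdot \partial_j d\); the extra stress term couples the director field back into the momentum equation.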

  5. Evaluating Existing Strategies to Limit Video Game Playing Time.

    Science.gov (United States)

    Davies, Bryan; Blake, Edwin

    2016-01-01

    Public concern surrounding the effects video games have on players has inspired a large body of research, and policy makers in China and South Korea have even mandated systems that limit the amount of time players spend in game. The authors present an experiment that evaluates the effectiveness of such policies. They show that forcibly removing players from the game environment causes distress, potentially removing some of the benefits that games provide and producing a desire for more game time. They also show that, with an understanding of player psychology, playtime can be manipulated without significantly changing the user experience or negating the positive effects of video games.

  6. A divide-and-conquer algorithm for large-scale de novo transcriptome assembly through combining small assemblies from existing algorithms.

    Science.gov (United States)

    Sze, Sing-Hoi; Parrott, Jonathan J; Tarone, Aaron M

    2017-12-06

While the continued development of high-throughput sequencing has facilitated studies of entire transcriptomes in non-model organisms, the incorporation of an increasing number of RNA-Seq libraries has made de novo transcriptome assembly difficult. Although algorithms that can assemble a large amount of RNA-Seq data are available, they are generally very memory-intensive and can only be used to construct small assemblies. We develop a divide-and-conquer strategy that allows these algorithms to be utilized, by subdividing a large RNA-Seq data set into small libraries. Each individual library is assembled independently by an existing algorithm, and a merging algorithm is developed to combine these assemblies by picking a subset of high-quality transcripts to form a large transcriptome. Compared to existing algorithms that return a single assembly directly, this strategy achieves accuracy comparable to or better than that of memory-efficient algorithms that can process a large amount of RNA-Seq data, and accuracy comparable to or slightly below that of memory-intensive algorithms that can only construct small assemblies. Our divide-and-conquer strategy thus allows memory-intensive de novo transcriptome assembly algorithms to be utilized to construct large assemblies.
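
The merging step can be caricatured in a few lines. This is a toy sketch, not the authors' algorithm: here "high quality" is crudely approximated by keeping only transcripts that are not contained in a longer transcript already kept, and all names are my own:

```python
def merge_assemblies(assemblies, min_len=1):
    """Combine per-library assemblies into one transcript set.

    assemblies: list of lists of transcript strings (one per library).
    A transcript is kept only if it meets the length cutoff and is not
    a substring of a longer transcript that has already been kept.
    """
    # pool all transcripts, longest first, so containments are caught early
    pooled = sorted({t for asm in assemblies for t in asm}, key=len, reverse=True)
    kept = []
    for t in pooled:
        if len(t) >= min_len and not any(t in k for k in kept):
            kept.append(t)
    return kept

# tiny illustration with made-up fragments from two "libraries"
print(merge_assemblies([["ATCG", "TCGA"], ["ATCGAA", "TCG"]]))
```

A real merging step would of course score transcripts by read support, length, and annotation evidence rather than by substring containment; the sketch only shows the overall divide-assemble-merge shape.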

  7. Large dunes on the outer shelf off the Zambezi Delta, Mozambique: evidence for the existence of a Mozambique Current

    Science.gov (United States)

    Flemming, Burghard W.; Kudrass, Hermann-Rudolf

    2018-02-01

The existence of a continuously flowing Mozambique Current, i.e. a western geostrophic boundary current flowing southwards along the shelf break of Mozambique, was until recently accepted by oceanographers studying ocean circulation in the south-western Indian Ocean. This concept was then cast into doubt based on long-term current measurements obtained from current-meter moorings deployed across the northern Mozambique Channel, which suggested that southward flow through the Mozambique Channel took place in the form of successive, southward-migrating and counter-clockwise rotating eddies. Indeed, numerical modelling found that, if at all, strong currents on the outer shelf occurred for not more than 9 days per year. In the present study, the negation of the existence of a Mozambique Current is challenged by the discovery of a large (50 km long, 12 km wide) subaqueous dune field (with up to 10 m high dunes) on the outer shelf east of the modern Zambezi River delta at water depths between 50 and 100 m. Interpreted as representing the current-modified, early Holocene Zambezi palaeo-delta, the dune field would have migrated southwards by at least 50 km from its former location since sea level recovered to its present-day position some 7 ka ago and after the former delta had been remoulded into a migrating dune field. Because a large dune field composed of actively migrating bedforms cannot be generated and maintained by currents restricted to a period of only 9 days per year, the validity of those earlier modelling results is questioned for the western margin of the flow field. Indeed, satellite images extracted from the Perpetual Ocean display of NASA, which show monthly time-integrated surface currents in the Mozambique Channel for the 5-month period from June to October 2006, support the proposition that strong flow on the outer Mozambican shelf occurs much more frequently than postulated by those modelling results. This is consistent with more recent modelling...

  8. Global existence and large time asymptotic behavior of strong solutions to the Cauchy problem of 2D density-dependent Navier–Stokes equations with vacuum

    Science.gov (United States)

    Lü, Boqiang; Shi, Xiaoding; Zhong, Xin

    2018-06-01

    We are concerned with the Cauchy problem of the two-dimensional (2D) nonhomogeneous incompressible Navier–Stokes equations with vacuum as far-field density. It is proved that if the initial density decays not too slow at infinity, the 2D Cauchy problem of the density-dependent Navier–Stokes equations on the whole space admits a unique global strong solution. Note that the initial data can be arbitrarily large and the initial density can contain vacuum states and even have compact support. Furthermore, we also obtain the large time decay rates of the spatial gradients of the velocity and the pressure, which are the same as those of the homogeneous case.

  9. D walls and junctions in supersymmetric gluodynamics in the large N limit suggest the existence of heavy hadrons

    International Nuclear Information System (INIS)

    Gabadadze, Gregory; Shifman, Mikhail

    2000-01-01

A number of arguments exist that the ''minimal'' Bogomol'nyi-Prasad-Sommerfeld (BPS) wall width in large-N supersymmetric gluodynamics vanishes as 1/N. There is a certain tension between this assertion and the fact that the mesons coupled to λλ have masses O(N⁰). To reconcile these facts we argue that there should exist additional soliton-like states with masses scaling as N. The BPS walls must be ''made'' predominantly of these heavy states, which are coupled to λλ more strongly than the conventional mesons. The tension of the BPS wall junction scales as N², which serves as an additional argument in favor of the 1/N scaling of the wall width. The heavy states can be thought of as solitons of the corresponding closed string theory. They are related to certain fivebranes in the M-theory construction. We study the issue of the wall width in toy models which capture some features of supersymmetric gluodynamics. We speculate that the special hadrons with mass scaling as N should also exist in the large-N limit of nonsupersymmetric gluodynamics. (c) 2000 The American Physical Society

  10. Finite-Time Stability of Large-Scale Systems with Interval Time-Varying Delay in Interconnection

    Directory of Open Access Journals (Sweden)

    T. La-inchua

    2017-01-01

Full Text Available We investigate finite-time stability of a class of nonlinear large-scale systems with interval time-varying delays in interconnection. Time-delay functions are continuous but not necessarily differentiable. Based on Lyapunov stability theory and a new integral bounding technique, finite-time stability of large-scale systems with interval time-varying delays in interconnection is derived. The finite-time stability criteria are delay-dependent and are given in terms of linear matrix inequalities which can be solved by various available algorithms. Numerical examples are given to illustrate the effectiveness of the proposed method.
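The finite-time stability notion used here — trajectories starting in a bounded initial set must remain within a prescribed larger bound over a fixed horizon, for delays that are continuous but not differentiable — can be illustrated with a direct simulation. The scalar system and all parameters below are hypothetical and chosen only for illustration; the paper's LMI criteria themselves would require a convex solver.

```python
import math

def simulate_delayed(a=-1.5, d=0.4, tau_min=0.05, tau_max=0.15,
                     x0=1.0, T=5.0, h=0.001):
    """Euler simulation of x'(t) = a*x(t) + d*x(t - tau(t)), where the
    delay tau(t) varies inside the interval [tau_min, tau_max] and is
    continuous but not differentiable (it has corners)."""
    xs = [x0]  # x(t) for t <= 0 is held constant at x0
    for k in range(int(T / h)):
        t = k * h
        tau = tau_min + (tau_max - tau_min) * abs(math.sin(3.0 * t))
        j = k - int(tau / h)
        x_delayed = xs[j] if j >= 0 else x0
        xs.append(xs[-1] + h * (a * xs[-1] + d * x_delayed))
    return xs

def finite_time_stable(xs, c1, c2):
    """Finite-time stability over the simulated horizon: a trajectory
    with x(0)^2 <= c1 must satisfy x(t)^2 <= c2 for all t in [0, T]."""
    return xs[0] ** 2 <= c1 and max(x * x for x in xs) <= c2

xs = simulate_delayed()
print(finite_time_stable(xs, c1=1.0, c2=2.0))  # → True
```

Note that finite-time stability is checked only over the fixed horizon [0, T]; it is a weaker requirement than asymptotic stability and can hold even for trajectories that eventually diverge.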

  11. Interactive exploration of large-scale time-varying data using dynamic tracking graphs

    KAUST Repository

    Widanagamaachchi, W.

    2012-10-01

    Exploring and analyzing the temporal evolution of features in large-scale time-varying datasets is a common problem in many areas of science and engineering. One natural representation of such data is tracking graphs, i.e., constrained graph layouts that use one spatial dimension to indicate time and show the "tracks" of each feature as it evolves, merges or disappears. However, for practical data sets creating the corresponding optimal graph layouts that minimize the number of intersections can take hours to compute with existing techniques. Furthermore, the resulting graphs are often unmanageably large and complex even with an ideal layout. Finally, due to the cost of the layout, changing the feature definition, e.g. by changing an iso-value, or analyzing properly adjusted sub-graphs is infeasible. To address these challenges, this paper presents a new framework that couples hierarchical feature definitions with progressive graph layout algorithms to provide an interactive exploration of dynamically constructed tracking graphs. Our system enables users to change feature definitions on-the-fly and filter features using arbitrary attributes while providing an interactive view of the resulting tracking graphs. Furthermore, the graph display is integrated into a linked view system that provides a traditional 3D view of the current set of features and allows a cross-linked selection to enable a fully flexible spatio-temporal exploration of data. We demonstrate the utility of our approach with several large-scale scientific simulations from combustion science. © 2012 IEEE.
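As a toy illustration of how tracking-graph edges and merge/split events can be derived from feature overlap between consecutive timesteps (a simplification — the paper's framework uses hierarchical feature definitions and progressive layout algorithms; the data and function names below are invented):

```python
def tracking_graph(frames):
    """Link features in consecutive timesteps whenever their cell
    (e.g. voxel id) sets overlap; each link is a tracking-graph edge."""
    edges = []
    for t in range(len(frames) - 1):
        for i, f in enumerate(frames[t]):
            for j, g in enumerate(frames[t + 1]):
                if f & g:  # non-empty spatial overlap
                    edges.append(((t, i), (t + 1, j)))
    return edges

def classify_events(edges):
    """Merges are nodes with fan-in > 1; splits have fan-out > 1."""
    out_deg, in_deg = {}, {}
    for a, b in edges:
        out_deg[a] = out_deg.get(a, 0) + 1
        in_deg[b] = in_deg.get(b, 0) + 1
    merges = [b for b, d in in_deg.items() if d > 1]
    splits = [a for a, d in out_deg.items() if d > 1]
    return merges, splits

# Three timesteps: two features merge into one, which then splits.
frames = [
    [{1, 2}, {5, 6}],   # t=0: two separate features
    [{2, 5, 6}],        # t=1: one merged feature
    [{2}, {6}],         # t=2: the feature splits again
]
edges = tracking_graph(frames)
merges, splits = classify_events(edges)
print(merges, splits)  # → [(1, 0)] [(1, 0)]
```

The quadratic all-pairs overlap test is the naive version; practical systems restrict candidate pairs spatially, which is one reason layout and tracking at scale are non-trivial.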

  12. Treatment time reduction for large thermal lesions by using a multiple 1D ultrasound phased array system

    International Nuclear Information System (INIS)

    Liu, H.-L.; Chen, Y.-Y.; Yen, J.-Y.; Lin, W.-L.

    2003-01-01

To generate large thermal lesions in ultrasound thermal therapy, cooling intermissions are usually introduced during the treatment to prevent near-field heating, which leads to a long treatment time. A possible strategy to shorten the total treatment time is to eliminate the cooling intermissions. In this study, two methods for reducing power accumulation in the near field, power optimization and acoustic window enlargement, are combined to investigate the feasibility of continuously heating a large target region (maximally 3.2 x 3.2 x 3.2 cm³). A multiple 1D ultrasound phased array system generates the foci to scan the target region. Simulations show that the target region can be successfully heated without cooling and no near-field heating occurs. Moreover, because no cooling time is needed during the heating sessions, the total treatment time is significantly reduced to only several minutes, compared with the several hours required by existing treatments

  13. Time simulation of flutter with large stiffness changes

    Science.gov (United States)

    Karpel, Mordechay; Wieseman, Carol D.

    1992-01-01

Time simulation of flutter, involving large local structural changes, is formulated with a state-space model that is based on a relatively small number of generalized coordinates. Free-free vibration modes are first calculated for a nominal finite-element model with relatively large fictitious masses located at the area of structural changes. A low-frequency subset of these modes is then transformed into a set of structural modal coordinates with which the entire simulation is performed. These generalized coordinates and the associated oscillatory aerodynamic force coefficient matrices are used to construct an efficient time-domain, state-space model for a basic aeroelastic case. The time simulation can then be performed by simply changing the mass, stiffness, and damping coupling terms when structural changes occur. It is shown that the size of the aeroelastic model required for time simulation with large structural changes at a few a priori known locations is similar to that required for direct analysis of a single structural case. The method is applied to the simulation of an aeroelastic wind-tunnel model. The diverging oscillations are followed by the activation of a tip-ballast decoupling mechanism that stabilizes the system but may cause significant transient overshoots.
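The simulation strategy described — integrating a fixed state-space model and swapping coupling terms at the moment a structural change occurs — can be mimicked in miniature. The matrices, mode frequency, and damping values below are invented for illustration and do not come from the wind-tunnel model:

```python
import math

def simulate_switched(A_unstable, A_stable, x0, t_switch, T, h=1e-3):
    """Integrate x' = A x with explicit Euler, swapping the system
    matrix at t_switch to mimic an abrupt structural change (here, a
    stabilizing decoupling mechanism)."""
    x = list(x0)
    traj = [x[:]]
    for k in range(int(T / h)):
        A = A_unstable if k * h < t_switch else A_stable
        dx = [sum(A[i][j] * x[j] for j in range(len(x)))
              for i in range(len(x))]
        x = [x[i] + h * dx[i] for i in range(len(x))]
        traj.append(x[:])
    return traj

w2 = (2 * math.pi * 2.0) ** 2             # a ~2 Hz aeroelastic mode
A_unstable = [[0.0, 1.0], [-w2, 0.3]]     # negative net damping: flutter
A_stable = [[0.0, 1.0], [-w2, -4.0]]      # after decoupling: damped
traj = simulate_switched(A_unstable, A_stable, [0.01, 0.0],
                         t_switch=2.0, T=6.0)
peak = max(abs(s[0]) for s in traj)       # overshoot before/at switching
final = abs(traj[-1][0])
print(final < peak / 10)                  # → True: switch stabilizes
```

Because only the system matrix changes, the state (and hence the transient overshoot) carries over across the switch, which is exactly the behavior the abstract describes for the tip-ballast mechanism.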

  14. On the existence and dynamics of braneworld black holes

    International Nuclear Information System (INIS)

    Fitzpatrick, Andrew Liam; Randall, Lisa; Wiseman, Toby

    2006-01-01

Based on holographic arguments Tanaka and Emparan et al. have claimed that large localized static black holes do not exist in the one-brane Randall-Sundrum model. If such black holes are time-dependent as they propose, there are potentially significant phenomenological and theoretical consequences. We revisit the issue, arguing that their reasoning does not take into account the strongly coupled nature of the holographic theory. We claim that static black holes with smooth metrics should indeed exist in these theories, and give a simple example. However, although the existence of such solutions is relevant to exact and numerical solution searches, such static solutions might be dynamically unstable, again leading to time dependence with phenomenological consequences. We explore a plausible instability, suggested by Tanaka, analogous to that of Gregory and Laflamme, but argue that there is no reliable reason at this point to assume it must exist

  15. Large holographic displays for real-time applications

    Science.gov (United States)

    Schwerdtner, A.; Häussler, R.; Leister, N.

    2008-02-01

Holography is generally accepted as the ultimate approach to display three-dimensional scenes or objects. In principle, the reconstruction of an object from a perfect hologram would appear indistinguishable from viewing the corresponding real-world object. Up to now two main obstacles have prevented large-screen Computer-Generated Holograms (CGH) from achieving a satisfactory laboratory prototype, not to mention a marketable one. The reason is the small cell pitch of a CGH, resulting in a huge number of hologram cells and a very high computational load for encoding the CGH. For a long time these seemingly inevitable technological hurdles have not been cleared, limiting the use of holography to special applications, such as optical filtering, interference, beam forming, digital holography for capturing the 3-D shape of objects, and others. SeeReal Technologies has developed a new approach for real-time capable CGH using the so-called Tracked Viewing Windows technology to overcome these problems. The paper will show that today's state-of-the-art reconfigurable Spatial Light Modulators (SLMs), especially currently feasible LCD panels, are suited for reconstructing large 3-D scenes which can be observed from large viewing angles. To achieve this, the original holographic concept of encoding information from the entire scene in each part of the CGH has been abandoned. This substantially reduces the hologram resolution and thus the computational load by several orders of magnitude, making real-time computation possible. A monochrome real-time prototype measuring 20 inches has been built and demonstrated at the SID conference and exhibition 2007 and at several other events.

  16. Existence and global exponential stability of periodic solutions for n-dimensional neutral dynamic equations on time scales.

    Science.gov (United States)

    Li, Bing; Li, Yongkun; Zhang, Xuemei

    2016-01-01

In this paper, by using the existence of the exponential dichotomy of linear dynamic equations on time scales and the theory of calculus on time scales, we study the existence and global exponential stability of periodic solutions for a class of n-dimensional neutral dynamic equations on time scales. We also present an example to illustrate the feasibility of our results. The results of this paper are completely new and complementary to the previously known results even in both the case of differential equations (time scale 𝕋 = ℝ) and the case of difference equations (time scale 𝕋 = ℤ).

  17. Real-time vibration compensation for large telescopes

    Science.gov (United States)

    Böhm, M.; Pott, J.-U.; Sawodny, O.; Herbst, T.; Kürster, M.

    2014-08-01

We compare different strategies for minimizing the effects of telescope vibrations on the differential piston (optical path difference) for the Near-InfraRed/Visible Adaptive Camera and INterferometer for Astronomy (LINC-NIRVANA) at the Large Binocular Telescope (LBT) using an accelerometer feedforward compensation approach. We summarize why this technology is important for LINC-NIRVANA, and also for future telescopes and already existing instruments. The main objective is outlining a solution for the estimation problem in general and its specifics at the LBT. Emphasis is put on realistic evaluation of the used algorithms in the laboratory, such that predictions for the expected performance at the LBT can be made. Model-based estimation and broad-band filtering techniques can be used to solve the estimation task, and the differences are discussed. Simulation results and measurements are shown to motivate our choice of the estimation algorithm for LINC-NIRVANA. The laboratory setup is aimed at imitating the vibration behaviour at the LBT in general, and the M2 as main contributor in particular. For our measurements, we introduce a disturbance time series which has a frequency spectrum comparable to what can be measured at the LBT on a typical night. The controllers' ability to suppress vibrations in the critical frequency range of 8-60 Hz is demonstrated. The experimental results are promising, indicating the ability to suppress differential piston induced by telescope vibrations by a factor of about 5 (rms), which is significantly better than any currently commissioned system.

  18. A novel adaptive synchronization control of a class of master-slave large-scale systems with unknown channel time-delay

    Science.gov (United States)

    Shen, Qikun; Zhang, Tianping

    2015-05-01

The paper addresses a practical issue for adaptive synchronization in master-slave large-scale systems with constant channel time-delay, and a novel adaptive synchronization control scheme is proposed to guarantee that the synchronization errors asymptotically converge to the origin, in which the matching condition assumed in the related literature is not necessary. The real value of the channel time-delay can be estimated online by a proper adaptation mechanism, which removes the condition that the channel time-delay should be known exactly, as required in existing works. Finally, simulation results demonstrate the effectiveness of the approach.

  19. The existence and global attractivity of almost periodic sequence solution of discrete-time neural networks

    International Nuclear Information System (INIS)

    Huang Zhenkun; Wang Xinghua; Gao Feng

    2006-01-01

In this Letter, we discuss a discrete-time analogue of a continuous-time cellular neural network. Sufficient conditions are obtained for the existence of a unique almost periodic sequence solution which is globally attractive. Our results demonstrate the dynamics of the formulated discrete-time analogue as a mathematical model for the continuous-time cellular neural network in the almost periodic case. Finally, a computer simulation illustrates the suitability of our discrete-time analogue as a numerical algorithm for conveniently simulating the continuous-time cellular neural network

  20. Existence of time-periodic weak solutions to the stochastic Navier-Stokes equations around a moving body

    International Nuclear Information System (INIS)

    Chen, Feng; Han, Yuecai

    2013-01-01

The existence of time-periodic stochastic motions of an incompressible fluid is obtained. Here the fluid is subject to a time-periodic body force and an additional time-periodic stochastic force that is produced by a rigid body moving periodically and stochastically, with the same period, in the fluid

  1. Existence of time-periodic weak solutions to the stochastic Navier-Stokes equations around a moving body

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Feng, E-mail: chenfengmath@163.com, E-mail: hanyc@jlu.edu.cn; Han, Yuecai, E-mail: chenfengmath@163.com, E-mail: hanyc@jlu.edu.cn [School of Mathematics, Jilin University, Changchun 130012 (China)

    2013-12-15

The existence of time-periodic stochastic motions of an incompressible fluid is obtained. Here the fluid is subject to a time-periodic body force and an additional time-periodic stochastic force that is produced by a rigid body moving periodically and stochastically, with the same period, in the fluid.

  2. Marketing communication drivers of adoption timing of a new E-service among existing customers

    NARCIS (Netherlands)

    Prins, Remo; Verhoef, Peter C.

    This study investigates the effects of direct marketing communications and mass marketing communications on the adoption timing of a new e-service among existing customers. The mass marketing communications pertain to both specific new service advertising and brand advertising from both the focal

  3. Discrete-time optimal control and games on large intervals

    CERN Document Server

    Zaslavski, Alexander J

    2017-01-01

    Devoted to the structure of approximate solutions of discrete-time optimal control problems and approximate solutions of dynamic discrete-time two-player zero-sum games, this book presents results on properties of approximate solutions in an interval that is independent lengthwise, for all sufficiently large intervals. Results concerning the so-called turnpike property of optimal control problems and zero-sum games in the regions close to the endpoints of the time intervals are the main focus of this book. The description of the structure of approximate solutions on sufficiently large intervals and its stability will interest graduate students and mathematicians in optimal control and game theory, engineering, and economics. This book begins with a brief overview and moves on to analyze the structure of approximate solutions of autonomous nonconcave discrete-time optimal control Lagrange problems.Next the structures of approximate solutions of autonomous discrete-time optimal control problems that are discret...

  4. The Large Observatory For x-ray Timing

    DEFF Research Database (Denmark)

    Feroci, M.; Herder, J. W. den; Bozzo, E.

    2014-01-01

    The Large Observatory For x-ray Timing (LOFT) was studied within ESA M3 Cosmic Vision framework and participated in the final down-selection for a launch slot in 2022-2024. Thanks to the unprecedented combination of effective area and spectral resolution of its main instrument, LOFT will study th...

  5. FTSPlot: fast time series visualization for large datasets.

    Directory of Open Access Journals (Sweden)

    Michael Riss

Full Text Available The analysis of electrophysiological recordings often involves visual inspection of time series data to locate specific experiment epochs, mask artifacts, and verify the results of signal processing steps, such as filtering or spike detection. Long-term experiments with continuous data acquisition generate large amounts of data. Rapid browsing through these massive datasets poses a challenge to conventional data plotting software because the plotting time increases proportionately to the increase in the volume of data. This paper presents FTSPlot, which is a visualization concept for large-scale time series datasets using techniques from the field of high performance computer graphics, such as hierarchic level of detail and out-of-core data handling. In a preprocessing step, time series data, event, and interval annotations are converted into an optimized data format, which then permits fast, interactive visualization. The preprocessing step has a computational complexity of O(n x log(N)); the visualization itself can be done with a complexity of O(1) and is therefore independent of the amount of data. A demonstration prototype has been implemented and benchmarks show that the technology is capable of displaying large amounts of time series data, event, and interval annotations lag-free with < 20 ms. The current 64-bit implementation theoretically supports datasets with up to 2^64 bytes; on the x86_64 architecture currently up to 2^48 bytes are supported, and benchmarks have been conducted with 2^40 bytes/1 TiB or 1.3 x 10^11 double precision samples. The presented software is freely available and can be included as a Qt GUI component in future software projects, providing a standard visualization method for long-term electrophysiological experiments.
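The hierarchic level-of-detail idea — precompute min/max summaries at every scale so that drawing cost depends on the viewport rather than the dataset size — can be sketched as follows. The block size, function names, and API are illustrative, not FTSPlot's actual data format:

```python
def build_pyramid(samples, block=2):
    """Level-of-detail pyramid: each level stores one (min, max) pair
    per block of the level below, so every zoom level has a
    precomputed, constant-size summary. Total preprocessing work over
    all levels is O(n) comparisons after an O(n log n) build of the
    level hierarchy."""
    levels = [[(s, s) for s in samples]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([(min(lo for lo, _ in prev[i:i + block]),
                        max(hi for _, hi in prev[i:i + block]))
                       for i in range(0, len(prev), block)])
    return levels

def render(levels, lo, hi, budget):
    """Pick the coarsest level whose resolution over [lo, hi) still
    fits the pixel budget; the returned summary has a bounded size
    regardless of how many raw samples the window covers."""
    level = 0
    while level + 1 < len(levels) and (hi - lo) >> (level + 1) > budget:
        level += 1
    scale = 1 << level
    return levels[level][lo // scale: max(lo // scale + 1, hi // scale)]

data = [float(i % 17) for i in range(1024)]   # synthetic sawtooth signal
levels = build_pyramid(data)
summary = render(levels, 0, 1024, budget=8)   # whole dataset, 8-pixel budget
print(len(summary), levels[-1])               # → 16 [(0.0, 16.0)]
```

Drawing each (min, max) pair as a vertical bar reproduces the visual envelope of the signal, which is why min/max decimation is the standard choice for browsing long recordings.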

  6. Existence of positive solutions for semipositone dynamic system on time scales

    Directory of Open Access Journals (Sweden)

    You-Wei Zhang

    2008-08-01

Full Text Available In this paper, we study the following semipositone dynamic system on time scales:
$$\displaylines{ -x^{\Delta\Delta}(t)=f(t,y)+p(t), \quad t\in(0,T)_{\mathbb{T}},\cr -y^{\Delta\Delta}(t)=g(t,x), \quad t\in(0,T)_{\mathbb{T}},\cr x(0)=x(\sigma^{2}(T))=0, \cr \alpha y(0)-\beta y^{\Delta}(0)= \gamma y(\sigma(T))+\delta y^{\Delta}(\sigma(T))=0. }$$
Using fixed point index theory, we show the existence of at least one positive solution. The interesting point is that the nonlinear term is allowed to change sign and may tend to negative infinity.

  7. Just-in-time connectivity for large spiking networks.

    Science.gov (United States)

    Lytton, William W; Omurtag, Ahmet; Neymotin, Samuel A; Hines, Michael L

    2008-11-01

    The scale of large neuronal network simulations is memory limited due to the need to store connectivity information: connectivity storage grows as the square of neuron number up to anatomically relevant limits. Using the NEURON simulator as a discrete-event simulator (no integration), we explored the consequences of avoiding the space costs of connectivity through regenerating connectivity parameters when needed: just in time after a presynaptic cell fires. We explored various strategies for automated generation of one or more of the basic static connectivity parameters: delays, postsynaptic cell identities, and weights, as well as run-time connectivity state: the event queue. Comparison of the JitCon implementation to NEURON's standard NetCon connectivity method showed substantial space savings, with associated run-time penalty. Although JitCon saved space by eliminating connectivity parameters, larger simulations were still memory limited due to growth of the synaptic event queue. We therefore designed a JitEvent algorithm that added items to the queue only when required: instead of alerting multiple postsynaptic cells, a spiking presynaptic cell posted a callback event at the shortest synaptic delay time. At the time of the callback, this same presynaptic cell directly notified the first postsynaptic cell and generated another self-callback for the next delay time. The JitEvent implementation yielded substantial additional time and space savings. We conclude that just-in-time strategies are necessary for very large network simulations but that a variety of alternative strategies should be considered whose optimality will depend on the characteristics of the simulation to be run.
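The space-for-time trade at the heart of just-in-time connectivity — regenerating a cell's outgoing connections deterministically from its identity instead of storing them — can be sketched in a few lines. The network size, fan-out, and parameter ranges below are hypothetical, and this is a conceptual sketch, not NEURON's actual JitCon/NetCon API:

```python
import heapq
import random

N_CELLS, FANOUT = 1000, 20

def outgoing(pre_id):
    """Regenerate this cell's connectivity on demand from a seed
    instead of storing it: the same pre_id always yields the same
    targets, weights, and delays (a deterministic just-in-time rule)."""
    rng = random.Random(pre_id)  # seed = presynaptic cell identity
    targets = rng.sample(range(N_CELLS), FANOUT)
    return [(post, rng.uniform(0.1, 0.5), rng.uniform(1.0, 5.0))
            for post in targets]  # (post id, weight, delay in ms)

def deliver(spikes):
    """Queue synaptic events only when a presynaptic cell actually
    fires; no connectivity is held in memory between spikes."""
    queue = []
    for t_spike, pre in spikes:
        for post, w, delay in outgoing(pre):
            heapq.heappush(queue, (t_spike + delay, post, w))
    return [heapq.heappop(queue) for _ in range(len(queue))]

events = deliver([(0.0, 42), (1.0, 7)])
print(len(events), outgoing(42) == outgoing(42))  # → 40 True
```

Note that this sketch still materializes every postsynaptic event at spike time; the JitEvent refinement in the abstract goes further by posting a single self-callback per delay step, so the queue itself also stays small.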

  8. Travel Times for Screening Mammography: Impact of Geographic Expansion by a Large Academic Health System.

    Science.gov (United States)

    Rosenkrantz, Andrew B; Liang, Yu; Duszak, Richard; Recht, Michael P

    2017-09-01

    This study aims to assess the impact of off-campus facility expansion by a large academic health system on patient travel times for screening mammography. Screening mammograms performed from 2013 to 2015 and associated patient demographics were identified using the NYU Langone Medical Center Enterprise Data Warehouse. During this time, the system's number of mammography facilities increased from 6 to 19, reflecting expansion beyond Manhattan throughout the New York metropolitan region. Geocoding software was used to estimate driving times from patients' homes to imaging facilities. For 147,566 screening mammograms, the mean estimated patient travel time was 19.9 ± 15.2 minutes. With facility expansion, travel times declined significantly (P travel times between such subgroups. However, travel times to pre-expansion facilities remained stable (initial: 26.8 ± 18.9 minutes, final: 26.7 ± 18.6 minutes). Among women undergoing mammography before and after expansion, travel times were shorter for the postexpansion mammogram in only 6.3%, but this rate varied significantly (all P travel burden and reduce travel time variation among sociodemographic populations. Nonetheless, existing patients strongly tend to return to established facilities despite potentially shorter travel time locations, suggesting strong site loyalty. Variation in travel times likely relates to various factors other than facility proximity. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
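The travel-time estimation step can be approximated without commercial geocoding software by combining great-circle distance with an assumed average driving speed. The coordinates, speed, and facility locations below are hypothetical and only illustrate the before/after-expansion comparison:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km — a common proxy for road distance
    when a road-network router is unavailable."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_travel_minutes(home, facilities, speed_kmh=30.0):
    """Estimated drive time to the closest facility, assuming a flat
    average urban driving speed."""
    d = min(haversine_km(*home, *f) for f in facilities)
    return 60.0 * d / speed_kmh

home = (40.78, -73.97)                                # illustrative patient location
pre_expansion = [(40.742, -73.974)]                   # single original site
post_expansion = pre_expansion + [(40.776, -73.982)]  # added nearby site
before = nearest_travel_minutes(home, pre_expansion)
after = nearest_travel_minutes(home, post_expansion)
print(after < before)  # → True: expansion shortens the nearest-site time
```

This captures why expansion reduces travel times for *new* patients in the aggregate; the study's finding that existing patients rarely switch sites is a behavioral effect that no distance model alone would predict.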

  9. Time dispersion in large plastic scintillation neutron detectors

    International Nuclear Information System (INIS)

    De, A.; Dasgupta, S.S.; Sen, D.

    1993-01-01

    Time dispersion (TD) has been computed for large neutron detectors using plastic scintillators. It has been shown that TD seen by the PM tube does not necessarily increase with incident neutron energy, a result not fully in agreement with the usual finding

  10. On real-time assessment of post-emergency condition existence in complex electric power systems

    Energy Technology Data Exchange (ETDEWEB)

    Tarasov, Vladimir I. [Irkutsk State Technical University 83, Lermontov Street, Irkutsk 664074 (Russian Federation)

    2008-12-15

This paper presents two effective numerical criteria for estimating the non-existence of post-emergency operating conditions in complicated electric power systems. These criteria are based on the mathematical and programming tools of the regularized quadratic descent method and the regularized two-parameter minimization method. The proposed criteria can be effectively applied in calculations of real-time electric operating conditions. (author)

  11. Influence of weathering and pre-existing large scale fractures on gravitational slope failure: insights from 3-D physical modelling

    Directory of Open Access Journals (Sweden)

    D. Bachmann

    2004-01-01

Full Text Available Using a new 3-D physical modelling technique we investigated the initiation and evolution of large scale landslides in the presence of pre-existing large scale fractures, taking into account the weakening of the slope material due to alteration/weathering. The modelling technique is based on specially developed, properly scaled analogue materials, as well as on an original vertical accelerator device enabling increases in the 'gravity acceleration' up to a factor of 50. The weathering primarily affects the uppermost layers through water circulation. We simulated the effect of this process by making models of two parts. The shallower one represents the zone subject to homogeneous weathering and is made of low strength material of compressive strength σ_l. The deeper (core) part of the model is stronger and simulates intact rocks. Deformation of such a model subjected to the gravity force occurred only in its upper (low strength) layer. In another set of experiments, narrow planar zones of low strength σ_w, sub-parallel to the slope surface (σ_w < σ_l), were introduced into the model's superficial low strength layer to simulate localized highly weathered zones. In this configuration landslides were initiated much more easily (at lower 'gravity force'), were shallower and had a smaller horizontal size largely defined by the weak zone size. Pre-existing fractures were introduced into the model by cutting it along a given plane. They have proved to be of small influence on the slope stability, except when they were associated with highly weathered zones. In this latter case the fractures laterally limited the slides. Deep-seated rockslide initiation is thus directly defined by the mechanical structure of the hillslope's uppermost levels and especially by the presence of weak zones due to weathering. The large scale fractures play a more passive role and can only influence the shape and the volume of the sliding units.

  12. Parallel time domain solvers for electrically large transient scattering problems

    KAUST Repository

    Liu, Yang

    2014-09-26

Marching on in time (MOT)-based integral equation solvers represent an increasingly appealing avenue for analyzing transient electromagnetic interactions with large and complex structures. MOT integral equation solvers for analyzing electromagnetic scattering from perfect electrically conducting objects are obtained by enforcing electric field boundary conditions and implicitly time-advancing electric surface current densities by iteratively solving sparse systems of equations at all time steps. Contrary to finite difference and element competitors, these solvers apply to nonlinear and multi-scale structures comprising geometrically intricate and deep sub-wavelength features residing atop electrically large platforms. Moreover, they are high-order accurate, stable in the low- and high-frequency limits, and applicable to conducting and penetrable structures represented by highly irregular meshes. This presentation reviews some recent advances in the parallel implementations of time domain integral equation solvers, specifically those that leverage the multilevel plane-wave time-domain (PWTD) algorithm on modern manycore computer architectures including graphics processing units (GPUs) and distributed memory supercomputers. The GPU-based implementation achieves at least one order of magnitude speedups compared to serial implementations, while the distributed parallel implementation is highly scalable to thousands of compute nodes. A distributed parallel PWTD kernel has been adopted to solve time domain surface/volume integral equations (TDSIE/TDVIE) for analyzing transient scattering from large and complex-shaped perfectly electrically conducting (PEC)/dielectric objects involving ten million/tens of millions of spatial unknowns.

  13. Global existence of solutions to the Cauchy problem for time-dependent Hartree equations

    International Nuclear Information System (INIS)

    Chadam, J.M.; Glassey, R.T.

    1975-01-01

The existence of global solutions to the Cauchy problem for time-dependent Hartree equations for N electrons is established. The solution is shown to have a uniformly bounded H¹(R³) norm and to satisfy an estimate of the form ‖Ψ(t)‖_{H²} ≤ c exp(kt). It is shown that ''negative energy'' solutions do not converge uniformly to zero as t → infinity. (U.S.)

  14. The part-time wage penalty in European countries: how large is it for men?

    OpenAIRE

    O'Dorchai, Sile Padraigin; Plasman, Robert; Rycx, François

    2007-01-01

Economic theory advances a number of reasons for the existence of a wage gap between part-time and full-time workers. Empirical work has concentrated on the wage effects of part-time work for women. For men, much less empirical evidence exists, mainly because of a lack of data. In this paper, we take advantage of access to unique harmonised matched employer-employee data (i.e. the 1995 European Structure of Earnings Survey) to investigate the magnitude and sources of the part-time wage penalty ...

  15. Data warehousing technologies for large-scale and right-time data

    DEFF Research Database (Denmark)

    Xiufeng, Liu

    heterogeneous sources into a central data warehouse (DW) by Extract-Transform-Load (ETL) at regular time intervals, e.g., monthly, weekly, or daily. But now, it becomes challenging for large-scale data, and hard to meet the near real-time/right-time business decisions. This thesis considers some...

  16. Does time exist in quantum gravity?

    Directory of Open Access Journals (Sweden)

    Claus Kiefer

    2015-12-01

    Full Text Available Time is absolute in standard quantum theory and dynamical in general relativity. The combination of both theories into a theory of quantum gravity leads therefore to a “problem of time”. In my essay, I investigate those consequences for the concept of time that may be drawn without a detailed knowledge of quantum gravity. The only assumptions are the experimentally supported universality of the linear structure of quantum theory and the recovery of general relativity in the classical limit. Among the consequences are the fundamental timelessness of quantum gravity, the approximate nature of a semiclassical time, and the correlation of entropy with the size of the Universe.

  17. Large Deviations for Two-Time-Scale Diffusions, with Delays

    International Nuclear Information System (INIS)

    Kushner, Harold J.

    2010-01-01

    We consider the problem of large deviations for a two-time-scale reflected diffusion process, possibly with delays in the dynamical terms. The Dupuis-Ellis weak convergence approach is used. It is perhaps the most intuitive and simplest for the problems of concern. The results have applications to the problem of approximating optimal controls for two-time-scale systems via use of the averaged equation.

  18. Large-scale integration of wind power into the existing Chinese energy system

    DEFF Research Database (Denmark)

    Liu, Wen; Lund, Henrik; Mathiesen, Brian Vad

    2011-01-01

    stability, the maximum feasible wind power penetration in the existing Chinese energy system is approximately 26% from both technical and economic points of view. A fuel efficiency decrease occurred when increasing wind power penetration in the system, due to its rigid power supply structure and the task......This paper presents the ability of the existing Chinese energy system to integrate wind power and explores how the Chinese energy system needs to prepare itself in order to integrate more fluctuating renewable energy in the future. With this purpose in mind, a model of the Chinese energy system has...... been constructed by using EnergyPLAN based on the year 2007, which has then been used for investigating three issues. Firstly, the accuracy of the model itself has been examined and then the maximum feasible wind power penetration in the existing energy system has been identified. Finally, barriers...

  19. Existence of solutions to nonlinear parabolic unilateral problems with an obstacle depending on time

    Directory of Open Access Journals (Sweden)

    Nabila Bellal

    2014-10-01

    Full Text Available Using the penalty method, we prove the existence of solutions to nonlinear parabolic unilateral problems with an obstacle depending on time. To find a solution, the original inequality is transformed into an equality by adding a positive function on the right-hand side and a complementary condition. This result can be seen as a generalization of the results by Mokrane in [11] where the obstacle is zero.

  20. Large Variability in the Diversity of Physiologically Complex Surgical Procedures Exists Nationwide Among All Hospitals Including Among Large Teaching Hospitals.

    Science.gov (United States)

    Dexter, Franklin; Epstein, Richard H; Thenuwara, Kokila; Lubarsky, David A

    2017-11-22

    Multiple previous studies have shown that having a large diversity of procedures has a substantial impact on quality management of hospital surgical suites. At hospitals with substantial diversity, unless sophisticated statistical methods suitable for rare events are used, anesthesiologists working in surgical suites will have inaccurate predictions of surgical blood usage, case durations, cost accounting and price transparency, times remaining in late running cases, and use of intraoperative equipment. What is unknown is whether large diversity is a feature of only a few very unique set of hospitals nationwide (eg, the largest hospitals in each state or province). The 2013 United States Nationwide Readmissions Database was used to study heterogeneity among 1981 hospitals in their diversities of physiologically complex surgical procedures (ie, the procedure codes). The diversity of surgical procedures performed at each hospital was quantified using a summary measure, the number of different physiologically complex surgical procedures commonly performed at the hospital (ie, 1/Herfindahl). A total of 53.9% of all hospitals commonly performed 3-fold larger diversity (ie, >30 commonly performed physiologically complex procedures). Larger hospitals had greater diversity than the small- and medium-sized hospitals (P 30 procedures (lower 99% CL, 71.9% of hospitals). However, there was considerable variability among the large teaching hospitals in their diversity (interquartile range of the numbers of commonly performed physiologically complex procedures = 19.3; lower 99% CL, 12.8 procedures). The diversity of procedures represents a substantive differentiator among hospitals. Thus, the usefulness of statistical methods for operating room management should be expected to be heterogeneous among hospitals. Our results also show that "large teaching hospital" alone is an insufficient description for accurate prediction of the extent to which a hospital sustains the

  1. Large-scale straw supplies to existing coal-fired power stations

    International Nuclear Information System (INIS)

    Gylling, M.; Parsby, M.; Thellesen, H.Z.; Keller, P.

    1992-08-01

    It is considered that large-scale supply of straw to power stations and decentral cogeneration plants could open up new economical systems and methods of organization of straw supply in Denmark. This thesis is elucidated and involved constraints are pointed out. The aim is to describe to what extent large-scale straw supply is interesting with regard to monetary savings and available resources. Analyses of models, systems and techniques described in a foregoing project are carried out. It is reckoned that the annual total amount of surplus straw in Denmark is 3.6 million tons. At present, use of straw which is not agricultural is limited to district heating plants with an annual consumption of 2-12 thousand tons. A prerequisite for a significant increase in the use of straw is an annual consumption by power and cogeneration plants of more than 100.000 tons. All aspects of straw management are examined in detail, also in relation to two actual Danish coal-fired plants. The reliability of straw supply is considered. It is concluded that very significant resources of straw are available in Denmark but there remain a number of constraints. Price competitiveness must be considered in relation to other fuels. It is suggested that the use of corn harvests, with whole stems attached (handled as large bales or in the same way as sliced straw alone) as fuel, would result in significant monetary savings in transport and storage especially. An equal status for whole-harvested corn with other forms of biomass fuels, with following changes in taxes and subsidies could possibly reduce constraints on large scale straw fuel supply. (AB) (13 refs.)

  2. Calculation of neutron die-away times in a large-vehicle portal monitor

    International Nuclear Information System (INIS)

    Lillie, R.A.; Santoro, R.T.; Alsmiller, R.G. Jr.

    1980-05-01

    Monte Carlo methods have been used to calculate neutron die-away times in a large-vehicle portal monitor. These calculations were performed to investigate the adequacy of using neutron die-away time measurements to detect the clandestine movement of shielded nuclear materials. The geometry consisted of a large tunnel lined with ³He proportional counters. The time behavior of the (n,p) capture reaction in these counters was calculated when the tunnel contained a number of different tractor-trailer load configurations. Neutron die-away times obtained from weighted least squares fits to these data were compared. The change in neutron die-away time due to the replacement of cargo in a fully loaded truck with a spherical shell containing 240 kg of borated polyethylene was calculated to be less than 3%. This result together with the overall behavior of neutron die-away time versus mass inside the tunnel strongly suggested that measurements of this type will not provide a reliable means of detecting shielded nuclear materials in a large vehicle. 5 figures, 4 tables
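
    The weighted least-squares extraction of a die-away time from time-binned counter data can be sketched as follows; the function name, the Poisson-motivated weights, and the synthetic decay curve are illustrative assumptions, not taken from the report:

    ```python
    import numpy as np

    def die_away_time(t, counts):
        """Estimate the die-away time tau from counts(t) ~ A*exp(-t/tau)
        by a weighted least-squares fit of ln(counts) against t.
        Poisson statistics give ln(N) a variance of roughly 1/N, so each
        point is weighted by its count."""
        t = np.asarray(t, dtype=float)
        y = np.log(np.asarray(counts, dtype=float))
        w = np.asarray(counts, dtype=float)   # weights ~ 1 / var(ln N) = N
        # Weighted linear fit y = a + b*t, then tau = -1/b
        W = np.sum(w)
        tbar = np.sum(w * t) / W
        ybar = np.sum(w * y) / W
        b = np.sum(w * (t - tbar) * (y - ybar)) / np.sum(w * (t - tbar) ** 2)
        return -1.0 / b

    # Synthetic example with a known die-away time of 50 (arbitrary units)
    t = np.linspace(0, 200, 40)
    counts = 1e5 * np.exp(-t / 50.0)
    print(round(die_away_time(t, counts), 1))  # -> 50.0
    ```

    Comparing such fitted values of tau across load configurations is the comparison the abstract describes.
    
    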

  3. Energy Efficiency in the North American Existing Building Stock

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2007-07-01

    This report presents the findings of a new assessment of the techno-economic and policy-related efficiency improvement potential in the North American building stock conducted as part of a wider appraisal of existing buildings in member states of the International Energy Agency. It summarizes results and provides insights into the lessons learned through a broader global review of best practice to improve the energy efficiency of existing buildings. At this time, the report is limited to the USA because of the large size of its buildings market. At a later date, a more complete review may include some details about policies and programs in Canada. If resources are available an additional comprehensive review of Canada and Mexico may be performed in the future.

  4. Engineering judgement and bridging the fire safety gap in existing nuclear facilities

    Energy Technology Data Exchange (ETDEWEB)

    Qamheiah, G.; Wu, Y., E-mail: gqamheiah@plcfire.com, E-mail: dwu@plcfire.com [PLC Fire Safety Solutions, Mississauga, ON (Canada)

    2014-07-01

    Canadian nuclear power plants were constructed in the 1960's through the 1980's. Fire safety considerations were largely based on guidance from general building and fire codes in effect at the time. Since then, nuclear specific fire safety standards have been developed and adopted by the Regulator, increasing the expected level of fire safety in the process. Application of the standards to existing plants was largely limited to operational requirements viewed as retroactive. However, as existing facilities undergo modifications or refurbishment for the purpose of life extension, the expectation is that the design requirements of these fire safety standards also be satisfied. This creates considerable challenges for existing nuclear power plants as fire safety requirements such as those intended to assure means for safe egress, prevention of fire spread and protection of redundancy rely upon fire protection features that are inherent in the physical infrastructural design. This paper focuses on the methodology for conducting fire safety gap analyses on existing plants, and the integral role that engineering judgement plays in the development of viable and cost effective solutions to achieve the objectives of the current fire safety standards. (author)

  5. On the existence of perturbed Robertson-Walker universes

    International Nuclear Information System (INIS)

    D'Eath, P.D.

    1976-01-01

    Solutions of the full nonlinear field equations of general relativity near the Robertson-Walker universes are examined, together with their relation to linearized perturbations. A method due to Choquet-Bruhat and Deser is used to prove existence theorems for solutions near Robertson-Walker constraint data of the constraint equations on a spacelike hypersurface. These theorems allow one to regard the matter fluctuations as independent quantities, ranging over certain function spaces. In the k = -1 case the existence theory describes perturbations which may vary within uniform bounds throughout space. When k = +1 a modification of the method leads to a theorem which clarifies some unusual features of these constraint perturbations. The k = 0 existence theorem refers only to perturbations which die away at large distances. The connection between linearized constraint solutions and solutions of the full constraints is discussed. For k = ±1 backgrounds, solutions of the linearized constraints are analyzed using transverse-traceless decompositions of symmetric tensors. Finally the time-evolution of perturbed constraint data and the validity of linearized perturbation theory for Robertson-Walker universes are considered

  6. Dwell time considerations for large area cold plasma decontamination

    Science.gov (United States)

    Konesky, Gregory

    2009-05-01

    Atmospheric discharge cold plasmas have been shown to be effective in the reduction of pathogenic bacteria and spores and in the decontamination of simulated chemical warfare agents, without the generation of toxic or harmful by-products. Cold plasmas may also be useful in assisting cleanup of radiological "dirty bombs." For practical applications in realistic scenarios, the plasma applicator must have both a large area of coverage, and a reasonably short dwell time. However, the literature contains a wide range of reported dwell times, from a few seconds to several minutes, needed to achieve a given level of reduction. This is largely due to different experimental conditions, and especially, different methods of generating the decontaminating plasma. We consider these different approaches and attempt to draw equivalencies among them, and use this to develop requirements for a practical, field-deployable plasma decontamination system. A plasma applicator with 12 square inches area and integral high voltage, high frequency generator is described.

  7. Urban Freight Management with Stochastic Time-Dependent Travel Times and Application to Large-Scale Transportation Networks

    Directory of Open Access Journals (Sweden)

    Shichao Sun

    2015-01-01

    Full Text Available This paper addressed the vehicle routing problem (VRP in large-scale urban transportation networks with stochastic time-dependent (STD travel times. The subproblem which is how to find the optimal path connecting any pair of customer nodes in a STD network was solved through a robust approach without requiring the probability distributions of link travel times. Based on that, the proposed STD-VRP model can be converted into solving a normal time-dependent VRP (TD-VRP, and algorithms for such TD-VRPs can also be introduced to obtain the solution. Numerical experiments were conducted to address STD-VRPTW of practical sizes on a real world urban network, demonstrated here on the road network of Shenzhen, China. The stochastic time-dependent link travel times of the network were calibrated by historical floating car data. A route construction algorithm was applied to solve the STD problem in 4 delivery scenarios efficiently. The computational results showed that the proposed STD-VRPTW model can improve the level of customer service by satisfying the time-window constraint under any circumstances. The improvement can be very significant especially for large-scale network delivery tasks with no more increase in cost and environmental impacts.

  8. High resolution time-of-flight measurements in small and large scintillation counters

    International Nuclear Information System (INIS)

    D'Agostini, G.; Marini, G.; Martellotti, G.; Massa, F.; Rambaldi, A.; Sciubba, A.

    1981-01-01

    In a test run, the experimental time-of-flight resolution was measured for several different scintillation counters of small (10 × 5 cm²) and large (100 × 15 cm² and 75 × 25 cm²) area. The design characteristics were decided on the basis of theoretical Monte Carlo calculations. We report results using twisted, fish-tail, and rectangular light-guides and different types of scintillator (NE 114 and PILOT U). Time resolutions up to ≈130-150 ps FWHM for the small counters and up to ≈280-300 ps FWHM for the large counters were obtained. The spatial resolution from time measurements in the large counters is also reported. The results of Monte Carlo calculations on the type of scintillator, the shape and dimensions of the light-guides, and the nature of the external wrapping surfaces - to be used in order to optimize the time resolution - are also summarized. (orig.)

  9. Modeling and Validating Time, Buffering, and Utilization of a Large-Scale, Real-Time Data Acquisition System

    CERN Document Server

    AUTHOR|(SzGeCERN)756497; The ATLAS collaboration; Garcia Garcia, Pedro Javier; Vandelli, Wainer; Froening, Holger

    2017-01-01

    Data acquisition systems for large-scale high-energy physics experiments have to handle hundreds of gigabytes per second of data, and are typically realized as specialized data centers that connect a very large number of front-end electronics devices to an event detection and storage system. The design of such systems is often based on many assumptions, small-scale experiments and a substantial amount of over-provisioning. In this work, we introduce a discrete event-based simulation tool that models the data flow of the current ATLAS data acquisition system, with the main goal to be accurate with regard to the main operational characteristics. We measure buffer occupancy counting the number of elements in buffers, resource utilization measuring output bandwidth and counting the number of active processing units, and their time evolution by comparing data over many consecutive and small periods of time. We perform studies on the error of simulation when comparing the results to a large amount of real-world ope...
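
    The abstract's discrete event-based modelling of buffer occupancy and resource utilization can be illustrated with a toy single-buffer queue; the M/M/1 traffic model and all rates below are assumptions of this sketch, vastly simpler than the ATLAS dataflow model it stands in for:

    ```python
    import heapq
    import random

    def mm1_buffer_stats(lam, mu, n_arrivals=200_000, seed=7):
        """Event-driven model of one readout buffer: Poisson fragment
        arrivals at rate lam, a single processing unit with exponential
        service at rate mu, infinite FIFO buffer. Returns the time-averaged
        buffer occupancy and the processing-unit utilization."""
        rng = random.Random(seed)
        events = [(rng.expovariate(lam), 'arrive')]   # (time, kind) heap
        now, q, arrivals = 0.0, 0, 0
        q_area = busy_area = 0.0
        while events:
            t, kind = heapq.heappop(events)
            q_area += q * (t - now)           # integrate occupancy over time
            busy_area += (q > 0) * (t - now)  # integrate busy indicator
            now = t
            if kind == 'arrive':
                arrivals += 1
                q += 1
                if q == 1:                    # unit was idle: start service
                    heapq.heappush(events, (now + rng.expovariate(mu), 'done'))
                if arrivals < n_arrivals:
                    heapq.heappush(events, (now + rng.expovariate(lam), 'arrive'))
            else:                             # service completion
                q -= 1
                if q > 0:
                    heapq.heappush(events, (now + rng.expovariate(mu), 'done'))
        return q_area / now, busy_area / now

    occupancy, utilization = mm1_buffer_stats(lam=0.5, mu=1.0)
    # For rho = lam/mu = 0.5, queueing theory predicts utilization 0.5
    # and mean occupancy rho/(1-rho) = 1.0; the simulation should agree
    # closely, which is the kind of validation the paper performs at scale.
    ```
    
    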

  11. Finite-time and finite-size scalings in the evaluation of large-deviation functions: Numerical approach in continuous time.

    Science.gov (United States)

    Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien

    2017-06-01

    Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to selection rules that favor the rare trajectories of interest. Such algorithms are plagued by finite simulation time and finite population size, effects that can render their use delicate. In this paper, we present a numerical approach which uses the finite-time and finite-size scalings of estimators of the large deviation functions associated to the distribution of rare trajectories. The method we propose allows one to extract the infinite-time and infinite-size limit of these estimators, which, as shown on the contact process, provides a significant improvement of the large deviation function estimators compared to the standard one.
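
    The extrapolation step the authors describe can be caricatured as a joint fit of the estimator with leading finite-time and finite-size corrections; the 1/t and 1/N correction form and all numbers below are assumptions of this sketch, not results from the paper:

    ```python
    import numpy as np

    def extrapolate_ldf(ts, ns, psi):
        """Extract the infinite-time, infinite-size limit of a large-deviation
        estimator psi[i, j] measured at simulation times ts[i] and population
        sizes ns[j], assuming leading corrections of the form
            psi(t, N) ~ psi_inf + a/t + b/N.
        Returns psi_inf from an ordinary least-squares fit."""
        T, N = np.meshgrid(ts, ns, indexing='ij')
        A = np.column_stack([np.ones(T.size), 1.0 / T.ravel(), 1.0 / N.ravel()])
        coef, *_ = np.linalg.lstsq(A, np.asarray(psi).ravel(), rcond=None)
        return coef[0]

    # Synthetic check: psi_inf = -0.25 plus 1/t and 1/N corrections
    ts = np.array([10.0, 20.0, 40.0, 80.0])
    ns = np.array([100.0, 200.0, 400.0])
    T, N = np.meshgrid(ts, ns, indexing='ij')
    psi = -0.25 + 3.0 / T - 50.0 / N
    print(round(extrapolate_ldf(ts, ns, psi), 6))  # -> -0.25
    ```
    
    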

  13. Versatile synchronized real-time MEG hardware controller for large-scale fast data acquisition

    Science.gov (United States)

    Sun, Limin; Han, Menglai; Pratt, Kevin; Paulson, Douglas; Dinh, Christoph; Esch, Lorenz; Okada, Yoshio; Hämäläinen, Matti

    2017-05-01

    Versatile controllers for accurate, fast, and real-time synchronized acquisition of large-scale data are useful in many areas of science, engineering, and technology. Here, we describe the development of a controller software based on a technique called queued state machine for controlling the data acquisition (DAQ) hardware, continuously acquiring a large amount of data synchronized across a large number of channels (>400) at a fast rate (up to 20 kHz/channel) in real time, and interfacing with applications for real-time data analysis and display of electrophysiological data. This DAQ controller was developed specifically for a 384-channel pediatric whole-head magnetoencephalography (MEG) system, but its architecture is useful for wide applications. This controller running in a LabVIEW environment interfaces with microprocessors in the MEG sensor electronics to control their real-time operation. It also interfaces with a real-time MEG analysis software via transmission control protocol/internet protocol, to control the synchronous acquisition and transfer of the data in real time from >400 channels to acquisition and analysis workstations. The successful implementation of this controller for an MEG system with a large number of channels demonstrates the feasibility of employing the present architecture in several other applications.

  14. Large Time Behavior of the Vlasov-Poisson-Boltzmann System

    Directory of Open Access Journals (Sweden)

    Li Li

    2013-01-01

    Full Text Available The motion of dilute charged particles can be modeled by the Vlasov-Poisson-Boltzmann (VPB) system. We study the large time stability of the VPB system. To be precise, we prove that when time goes to infinity, the solution of the VPB system tends to the global Maxwellian state at a rate O(t^{-∞}), by using a method developed for the Boltzmann equation without force in the work of Desvillettes and Villani (2005). The improvement of the present paper is the removal of the condition on the parameter λ as in the work of Li (2008).

  15. Large Scale Metric Learning for Distance-Based Image Classification on Open Ended Data Sets

    NARCIS (Netherlands)

    Mensink, T.; Verbeek, J.; Perronnin, F.; Csurka, G.; Farinella, G.M.; Battiato, S.; Cipolla, R.

    2013-01-01

    Many real-life large-scale datasets are open-ended and dynamic: new images are continuously added to existing classes, new classes appear over time, and the semantics of existing classes might evolve too. Therefore, we study large-scale image classification methods that can incorporate new classes
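
    A minimal sketch of such an open-ended, distance-based classifier is the nearest-class-mean rule, where adding images to a class, or adding a whole new class, only updates running means; the learned Mahalanobis metric of the full method is deliberately omitted here and plain Euclidean distance used instead:

    ```python
    import numpy as np

    class NearestClassMean:
        """Distance-based classifier that absorbs new data at near-zero
        cost: each class is represented only by its mean feature vector,
        so new images update a running mean and new classes simply add
        one. (Euclidean distance stands in for a learned metric.)"""
        def __init__(self):
            self.sums, self.counts = {}, {}

        def partial_fit(self, X, y):
            for x, label in zip(X, y):
                self.sums[label] = self.sums.get(label, 0) + np.asarray(x, float)
                self.counts[label] = self.counts.get(label, 0) + 1

        def predict(self, X):
            labels = sorted(self.sums)
            means = np.stack([self.sums[c] / self.counts[c] for c in labels])
            X = np.atleast_2d(np.asarray(X, dtype=float))
            d = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
            return [labels[i] for i in d.argmin(axis=1)]

    clf = NearestClassMean()
    clf.partial_fit([[0, 0], [1, 0]], ['cat', 'cat'])
    clf.partial_fit([[10, 10], [11, 10]], ['dog', 'dog'])
    print(clf.predict([[0.4, 0.1]]))          # -> ['cat']
    clf.partial_fit([[-10, -10]], ['fox'])    # a new class appears over time
    print(clf.predict([[-9, -9]]))            # -> ['fox']
    ```
    
    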

  16. Time series clustering in large data sets

    Directory of Open Access Journals (Sweden)

    Jiří Fejfar

    2011-01-01

    Full Text Available The clustering of time series is a widely researched area. There are many methods for dealing with this task. We are actually using the Self-organizing map (SOM) with the unsupervised learning algorithm for clustering of time series. After the first experiment (Fejfar, Weinlichová, Šťastný, 2009) it seems that the whole concept of the clustering algorithm is correct but that we have to perform time series clustering on a much larger dataset to obtain more accurate results and to find the correlation between configured parameters and results more precisely. The second requirement arose in a need for a well-defined evaluation of results. It seems useful to use sound recordings as instances of time series again. There are many recordings to use in digital libraries, and many interesting features and patterns can be found in this area. We are searching for recordings with a similar development of information density in this experiment. It can be used for musical form investigation, cover song detection and many other applications. The objective of the presented paper is to compare clustering results made with different parameters of feature vectors and the SOM itself. We are describing time series in a simplistic way, evaluating standard deviations for separated parts of recordings. The resulting feature vectors are clustered with the SOM in batch training mode with different topologies varying from few neurons to large maps. There are other algorithms discussed, usable for finding similarities between time series, and finally conclusions for further research are presented. We also present an overview of the related actual literature and projects.
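
    A toy version of the described pipeline, segment-wise standard deviations as feature vectors fed to a small self-organizing map, might look as follows; the grid size, learning schedule, and synthetic "recordings" are arbitrary choices for illustration (the paper uses batch training and larger maps):

    ```python
    import numpy as np

    def features(series, n_parts=8):
        """Describe a time series by the standard deviation of each of
        n_parts equal segments (a crude 'information density' profile)."""
        parts = np.array_split(np.asarray(series, dtype=float), n_parts)
        return np.array([p.std() for p in parts])

    def train_som(data, n_units=4, epochs=100, seed=0):
        """Tiny 1-D self-organizing map, online training: the best-matching
        unit and its neighbours are pulled toward each sample, with learning
        rate and neighbourhood radius shrinking over the epochs."""
        rng = np.random.default_rng(seed)
        w = rng.normal(size=(n_units, data.shape[1]))
        for e in range(epochs):
            lr = 0.5 * (1.0 - e / epochs)
            radius = max(0.5, (n_units / 2.0) * (1.0 - e / epochs))
            for x in data[rng.permutation(len(data))]:
                bmu = int(np.argmin(((w - x) ** 2).sum(axis=1)))
                h = np.exp(-((np.arange(n_units) - bmu) ** 2) / (2 * radius ** 2))
                w += lr * h[:, None] * (x - w)
        return w

    def assign(data, w):
        """Cluster label = index of each sample's best-matching unit."""
        return [int(np.argmin(((w - x) ** 2).sum(axis=1))) for x in data]

    # Synthetic 'recordings': three quiet and three loud noise series
    rng = np.random.default_rng(1)
    series = [rng.normal(0, 0.1, 800) for _ in range(3)] + \
             [rng.normal(0, 5.0, 800) for _ in range(3)]
    data = np.stack([features(s) for s in series])
    labels = assign(data, train_som(data))
    # Quiet and loud recordings should land on different map units.
    ```
    
    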

  17. Solution of large nonlinear time-dependent problems using reduced coordinates

    International Nuclear Information System (INIS)

    Mish, K.D.

    1987-01-01

    This research is concerned with the idea of reducing a large time-dependent problem, such as one obtained from a finite-element discretization, down to a more manageable size while preserving the most-important physical behavior of the solution. This reduction process is motivated by the concept of a projection operator on a Hilbert Space, and leads to the Lanczos Algorithm for generation of approximate eigenvectors of a large symmetric matrix. The Lanczos Algorithm is then used to develop a reduced form of the spatial component of a time-dependent problem. The solution of the remaining temporal part of the problem is considered from the standpoint of numerical-integration schemes in the time domain. All of these theoretical results are combined to motivate the proposed reduced coordinate algorithm. This algorithm is then developed, discussed, and compared to related methods from the mechanics literature. The proposed reduced coordinate method is then applied to the solution of some representative problems in mechanics. The results of these problems are discussed, conclusions are drawn, and suggestions are made for related future research
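
    The core reduction step, projecting a large symmetric operator onto a small Krylov basis with the Lanczos algorithm, can be sketched as follows; dense NumPy with full reorthogonalization is used for clarity, whereas a production code would work matrix-free on the sparse finite-element operator:

    ```python
    import numpy as np

    def lanczos(A, v0, m):
        """Build an orthonormal basis Q of the Krylov space
        span{v0, A v0, ..., A^(m-1) v0} and the tridiagonal projection
        T = Q^T A Q (A symmetric). The eigenvalues of the small T (Ritz
        values) approximate the extreme eigenvalues of A, so the reduced
        m-dimensional system keeps the dominant spatial modes."""
        n = len(v0)
        Q = np.zeros((n, m))
        alpha, beta = np.zeros(m), np.zeros(m - 1)
        q = v0 / np.linalg.norm(v0)
        Q[:, 0] = q
        r = A @ q
        for j in range(m):
            alpha[j] = q @ r
            r = r - alpha[j] * q
            r -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ r)  # full reorthogonalization
            if j < m - 1:
                beta[j] = np.linalg.norm(r)
                q = r / beta[j]
                Q[:, j + 1] = q
                r = A @ q - beta[j] * Q[:, j]
        T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
        return Q, T

    # Stand-in for a large discretized operator: eigenvalues 1..100
    A = np.diag(np.arange(1.0, 101.0))
    Q, T = lanczos(A, np.ones(100), m=30)
    # The largest eigenvalue of the 30x30 T approximates that of the
    # 100x100 A; time integration then proceeds on the reduced system.
    ```
    
    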

  18. Irregular Morphing for Real-Time Rendering of Large Terrain

    Directory of Open Access Journals (Sweden)

    S. Kalem

    2016-06-01

    Full Text Available The following paper proposes an alternative approach to the real-time adaptive triangulation problem. A new region-based multi-resolution approach for terrain rendering is described which improves on-the-fly the distribution of the density of triangles inside the tile after selecting an appropriate Level-Of-Detail by adaptive sampling. The proposed approach organizes the heightmap into a QuadTree of tiles that are processed independently. This technique combines the benefits of both the Triangular Irregular Network approach and the region-based multi-resolution approach by improving the distribution of the density of triangles inside the tile. Our technique morphs the initial regular grid of the tile into a deformed grid in order to minimize approximation error. The proposed technique strives to combine large tile size and real-time processing while guaranteeing an upper bound on the screen space error. Thus, this approach adapts the terrain rendering process to local surface characteristics and enables on-the-fly handling of large amounts of terrain data. Morphing is based on multi-resolution wavelet analysis. The use of the D2WT multi-resolution analysis of the terrain heightmap speeds up processing and permits interactive terrain rendering. Tests and experiments demonstrate that the Haar B-spline wavelet, well known for its properties of localization and its compact support, is suitable for fast and accurate redistribution. Such a technique could be exploited in a client-server architecture for supporting interactive high-quality remote visualization of very large terrain.


  19. TIME DISTRIBUTIONS OF LARGE AND SMALL SUNSPOT GROUPS OVER FOUR SOLAR CYCLES

    International Nuclear Information System (INIS)

    Kilcik, A.; Yurchyshyn, V. B.; Abramenko, V.; Goode, P. R.; Cao, W.; Ozguc, A.; Rozelot, J. P.

    2011-01-01

    Here we analyze solar activity by focusing on time variations of the number of sunspot groups (SGs) as a function of their modified Zurich class. We analyzed data for solar cycles 20-23 by using Rome (cycles 20 and 21) and Learmonth Solar Observatory (cycles 22 and 23) SG numbers. All SGs recorded during these time intervals were separated into two groups. The first group includes small SGs (A, B, C, H, and J classes by Zurich classification), and the second group consists of large SGs (D, E, F, and G classes). We then calculated small and large SG numbers from their daily mean numbers as observed on the solar disk during a given month. We report that the time variations of small and large SG numbers are asymmetric except for solar cycle 22. In general, large SG numbers appear to reach their maximum in the middle of the solar cycle (phases 0.45-0.5), while the international sunspot numbers and the small SG numbers generally peak much earlier (solar cycle phases 0.29-0.35). Moreover, the 10.7 cm solar radio flux, the facular area, and the maximum coronal mass ejection speed show better agreement with the large SG numbers than they do with the small SG numbers. Our results suggest that the large SG numbers are more likely to shed light on solar activity and its geophysical implications. Our findings may also influence our understanding of long-term variations of the total solar irradiance, which is thought to be an important factor in the Sun-Earth climate relationship.

  20. RankExplorer: Visualization of Ranking Changes in Large Time Series Data.

    Science.gov (United States)

    Shi, Conglei; Cui, Weiwei; Liu, Shixia; Xu, Panpan; Chen, Wei; Qu, Huamin

    2012-12-01

    For many applications involving time series data, people are often interested in the changes of item values over time as well as their ranking changes. For example, people search many words via search engines like Google and Bing every day. Analysts are interested in both the absolute searching number for each word as well as their relative rankings. Both sets of statistics may change over time. For very large time series data with thousands of items, how to visually present ranking changes is an interesting challenge. In this paper, we propose RankExplorer, a novel visualization method based on ThemeRiver to reveal the ranking changes. Our method consists of four major components: 1) a segmentation method which partitions a large set of time series curves into a manageable number of ranking categories; 2) an extended ThemeRiver view with embedded color bars and changing glyphs to show the evolution of aggregation values related to each ranking category over time as well as the content changes in each ranking category; 3) a trend curve to show the degree of ranking changes over time; 4) rich user interactions to support interactive exploration of ranking changes. We have applied our method to some real time series data and the case studies demonstrate that our method can reveal the underlying patterns related to ranking changes which might otherwise be obscured in traditional visualizations.

  1. Computing the real-time Green's Functions of large Hamiltonian matrices

    OpenAIRE

    Iitaka, Toshiaki

    1998-01-01

    A numerical method is developed for calculating the real time Green's functions of very large sparse Hamiltonian matrices, which exploits the numerical solution of the inhomogeneous time-dependent Schroedinger equation. The method has a clear-cut structure reflecting the most naive definition of the Green's functions, and is very suitable to parallel and vector supercomputers. The effectiveness of the method is illustrated by applying it to simple lattice models. An application of this method...
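
    The idea, obtaining the real-time Green's function by integrating the time-dependent Schrödinger equation rather than inverting (E - H), can be sketched as follows; a unitary Crank-Nicolson propagator and a two-site Hamiltonian stand in here for the paper's particular integrator and lattice models:

    ```python
    import numpy as np

    def greens_function(H, j, k, times, dt=1e-3):
        """Matrix element G_jk(t) = -i <j| exp(-i H t) |k> of the retarded
        Green's function (hbar = 1), computed by integrating
        i dpsi/dt = H psi from psi(0) = |k> with the unitary
        Crank-Nicolson step (1 + i H dt/2) psi(t+dt) = (1 - i H dt/2) psi(t).
        Dense linear algebra for brevity; the method's appeal is that only
        matrix-vector products are needed for very large sparse H."""
        n = H.shape[0]
        psi = np.zeros(n, dtype=complex)
        psi[k] = 1.0
        I = np.eye(n)
        step = np.linalg.solve(I + 0.5j * dt * H, I - 0.5j * dt * H)
        out, done = [], 0
        for t in times:
            need = int(round(t / dt))       # integer step count per target
            for _ in range(need - done):
                psi = step @ psi
            done = need
            out.append(-1j * psi[j])
        return np.array(out)

    # Two-site hopping Hamiltonian; exactly, G_00(t) = -i cos(t)
    H = np.array([[0.0, 1.0], [1.0, 0.0]])
    G = greens_function(H, 0, 0, times=[0.5, 1.0])
    ```
    
    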

  2. Assessment and Rehabilitation Issues Concerning Existing 70’s Structural Stock

    Science.gov (United States)

    Sabareanu, E.

    2017-06-01

The last 30 years have brought demanding changes in the norms and standards governing structural design for buildings, leaving the large stock of structures erected during the 1970s-90s in a weak position with respect to seismic loads and to the load levels now prescribed for live, wind and snow loads. At the same time, since a large number of these buildings remain in service all over the country, they cannot simply be demolished; suitable rehabilitation methods must instead be proposed so that structural durability is achieved. The paper proposes rehabilitation methods suitable in terms of structural safety and cost optimization for diaphragm reinforced concrete structures, with an example on an existing multi-storey building.

  3. The EXIST Mission Concept Study

    Science.gov (United States)

    Fishman, Gerald J.; Grindlay, J.; Hong, J.

    2008-01-01

EXIST is a mission designed to find and study black holes (BHs) over a wide range of environments and masses, including: 1) BHs accreting from binary companions or dense molecular clouds throughout our Galaxy and the Local Group, 2) supermassive black holes (SMBHs) lying dormant in galaxies that reveal their existence by disrupting passing stars, 3) SMBHs that are hidden from our view at lower energies due to obscuration by the gas that they accrete, and 4) the birth of stellar-mass BHs, which is accompanied by long cosmic gamma-ray bursts (GRBs), seen several times a day and possibly associated with the earliest stars to form in the Universe. EXIST will provide an order of magnitude increase in sensitivity and angular resolution, as well as greater spectral resolution and bandwidth, compared with earlier hard X-ray survey telescopes. With an onboard optical-infrared (IR) telescope, EXIST will measure the spectra and redshifts of GRBs, probing their utility as cosmological probes of the highest-z universe and the epoch of reionization. The mission would retain its primary goal of being the Black Hole Finder Probe in the Beyond Einstein Program. However, the new design for EXIST proposed to be studied here represents a significant advance from its previous incarnation as presented to BEPAC. The mission is now less than half the total mass, would be launched on the smallest EELV available (Atlas V-401) for a Medium Class mission, and most importantly includes a two-telescope complement that is ideally suited for the study of both obscured and very distant BHs. EXIST retains its very wide field hard X-ray imaging High Energy Telescope (HET) as the primary instrument, now with improved angular and spectral resolution, and in a more compact payload that allows occasional rapid slews for immediate optical/IR imaging and spectra of GRBs and AGN as well as enhanced hard X-ray spectra and timing with pointed observations. The mission would conduct a 2 year full sky survey in

  4. Time-sliced perturbation theory for large scale structure I: general formalism

    Energy Technology Data Exchange (ETDEWEB)

    Blas, Diego; Garny, Mathias; Sibiryakov, Sergey [Theory Division, CERN, CH-1211 Genève 23 (Switzerland); Ivanov, Mikhail M., E-mail: diego.blas@cern.ch, E-mail: mathias.garny@cern.ch, E-mail: mikhail.ivanov@cern.ch, E-mail: sergey.sibiryakov@cern.ch [FSB/ITP/LPPC, École Polytechnique Fédérale de Lausanne, CH-1015, Lausanne (Switzerland)

    2016-07-01

    We present a new analytic approach to describe large scale structure formation in the mildly non-linear regime. The central object of the method is the time-dependent probability distribution function generating correlators of the cosmological observables at a given moment of time. Expanding the distribution function around the Gaussian weight we formulate a perturbative technique to calculate non-linear corrections to cosmological correlators, similar to the diagrammatic expansion in a three-dimensional Euclidean quantum field theory, with time playing the role of an external parameter. For the physically relevant case of cold dark matter in an Einstein-de Sitter universe, the time evolution of the distribution function can be found exactly and is encapsulated by a time-dependent coupling constant controlling the perturbative expansion. We show that all building blocks of the expansion are free from spurious infrared enhanced contributions that plague the standard cosmological perturbation theory. This paves the way towards the systematic resummation of infrared effects in large scale structure formation. We also argue that the approach proposed here provides a natural framework to account for the influence of short-scale dynamics on larger scales along the lines of effective field theory.

  5. Solar Panel Installations on Existing Structures

    OpenAIRE

    Tim D. Sass; Pe; Leed

    2013-01-01

The rising price of fossil fuels, government incentives and growing public awareness of the need to implement sustainable energy supplies has resulted in a large increase in solar panel installations across the country. For many sites the most economical solar panel installation uses existing, southerly facing rooftops. Adding solar panels to an existing roof typically means increased loads that must be borne by the building's structural elements. The structural desig...

  6. Limitations of existing web services

    Indian Academy of Sciences (India)

Limitations of existing web services. Uploading or downloading large data. Serving too many users from a single source. Difficult to provide compute-intensive jobs. Dependence on the internet and its bandwidth. Security of data in transit. Maintaining confidentiality of data ...

  7. A robust and high-performance queue management controller for large round trip time networks

    Science.gov (United States)

    Khoshnevisan, Ladan; Salmasi, Farzad R.

    2016-05-01

    Congestion management for transmission control protocol is of utmost importance to prevent packet loss within a network. This necessitates strategies for active queue management. The most applied active queue management strategies have their inherent disadvantages which lead to suboptimal performance and even instability in the case of large round trip time and/or external disturbance. This paper presents an internal model control robust queue management scheme with two degrees of freedom in order to restrict the undesired effects of large and small round trip time and parameter variations in the queue management. Conventional approaches such as proportional integral and random early detection procedures lead to unstable behaviour due to large delay. Moreover, internal model control-Smith scheme suffers from large oscillations due to the large round trip time. On the other hand, other schemes such as internal model control-proportional integral and derivative show excessive sluggish performance for small round trip time values. To overcome these shortcomings, we introduce a system entailing two individual controllers for queue management and disturbance rejection, simultaneously. Simulation results based on Matlab/Simulink and also Network Simulator 2 (NS2) demonstrate the effectiveness of the procedure and verify the analytical approach.
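
The delay sensitivity described above is easy to reproduce with a generic discrete-time sketch: a PI controller regulating a first-order plant whose input only takes effect after d samples. This is an illustration of the phenomenon, not the paper's internal model controller; the plant model, gains and delay are invented:

```python
import numpy as np

d = 5                        # loop delay in samples (stand-in for RTT)
Kp, Ki = 0.2, 0.02           # deliberately conservative PI gains
steps = 2000
ref = 1.0                    # setpoint (normalized queue length)

y = np.zeros(steps + 1)      # plant output ("queue length")
u = np.zeros(steps + 1)      # control signal
integ = 0.0

for k in range(steps):
    e = ref - y[k]
    integ += e
    u[k] = Kp * e + Ki * integ
    u_del = u[k - d] if k >= d else 0.0    # control acts only after the delay
    y[k + 1] = 0.9 * y[k] + 0.1 * u_del    # first-order stand-in for the queue
```

With d = 5 these gains settle cleanly; raising d toward 40 with the same gains erodes the phase margin and produces the oscillatory behaviour that motivates delay-compensating schemes such as the one proposed in the paper.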

  8. A coupled chemotaxis-fluid model: Global existence

    KAUST Repository

    Liu, Jian-Guo; Lorz, Alexander

    2011-01-01

We consider a model arising from biology, consisting of chemotaxis equations coupled to viscous incompressible fluid equations through transport and external forcing. Global existence of solutions to the Cauchy problem is investigated under certain conditions. Precisely, for the chemotaxis-Navier-Stokes system in two space dimensions, we obtain global existence for large data. In three space dimensions, we prove global existence of weak solutions for the chemotaxis-Stokes system with nonlinear diffusion for the cell density. © 2011 Elsevier Masson SAS. All rights reserved.

  10. Necessary and Sufficient Conditions for the Existence of Positive Solution for Singular Boundary Value Problems on Time Scales

    Directory of Open Access Journals (Sweden)

    Zhang Xuemei

    2009-01-01

Full Text Available By constructing available upper and lower solutions and combining Schauder's fixed point theorem with the maximum principle, this paper establishes sufficient and necessary conditions to guarantee the existence of Cld[0,1]𝕋 as well as CldΔ[0,1]𝕋 positive solutions for a class of singular boundary value problems on time scales. The results significantly extend and improve many known results for both the continuous case and more general time scales. We illustrate our results by one example.

  11. FREQUENCY CATASTROPHE AND CO-EXISTING ATTRACTORS IN A CELL Ca2+ NONLINEAR OSCILLATION MODEL WITH TIME DELAY*

    Institute of Scientific and Technical Information of China (English)

    应阳君; 黄祖洽

    2001-01-01

Frequency catastrophe is found in a cell Ca2+ nonlinear oscillation model with time delay. The relation of the frequency transition to the time delay is studied by numerical simulations and theoretical analysis. There is a range of parameters in which two kinds of attractors with great frequency differences co-exist in the system. As the parameters change, a critical phenomenon occurs and the oscillation frequency changes greatly. This mechanism helps to deepen the understanding of the complex dynamics of delay systems, and may be of significance in cell signalling.

  12. The LOFT (Large Observatory for X-ray Timing) background simulations

    DEFF Research Database (Denmark)

    Campana, R.; Feroci, M.; Del Monte, E.

    2012-01-01

The Large Observatory For X-ray Timing (LOFT) is an innovative medium-class mission selected for an assessment phase in the framework of the ESA M3 Cosmic Vision call. LOFT is intended to answer fundamental questions about the behavior of matter in the very strong gravitational and magnetic fields...

  13. Sex ratio and time to pregnancy: analysis of four large European population surveys

    DEFF Research Database (Denmark)

    Joffe, Mike; Bennett, James; Best, Nicky

    2007-01-01

To test whether the secondary sex ratio (proportion of male births) is associated with time to pregnancy, a marker of fertility. Design: Analysis of four large population surveys. Setting: Denmark and the United Kingdom. Participants: 49 506 pregnancies.

  14. Transportation of Large Wind Components: A Review of Existing Geospatial Data

    Energy Technology Data Exchange (ETDEWEB)

    Mooney, Meghan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Maclaurin, Galen [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-09-01

    This report features the geospatial data component of a larger project evaluating logistical and infrastructure requirements for transporting oversized and overweight (OSOW) wind components. The goal of the larger project was to assess the status and opportunities for improving the infrastructure and regulatory practices necessary to transport wind turbine towers, blades, and nacelles from current and potential manufacturing facilities to end-use markets. The purpose of this report is to summarize existing geospatial data on wind component transportation infrastructure and to provide a data gap analysis, identifying areas for further analysis and data collection.

  15. Large deviations of a long-time average in the Ehrenfest urn model

    Science.gov (United States)

    Meerson, Baruch; Zilber, Pini

    2018-05-01

Since its inception in 1907, the Ehrenfest urn model (EUM) has served as a test bed of key concepts of statistical mechanics. Here we employ this model to study large deviations of a time-additive quantity. We consider two continuous-time versions of the EUM with K urns and N balls: with and without interactions between the balls in the same urn. We evaluate the probability distribution that the average number of balls in one urn over a time T takes any specified value aN, where 0 < a < 1. For a long observation time, a Donsker-Varadhan large deviation principle holds, with a rate function that depends on a and on additional parameters of the model. We calculate the rate function exactly by two different methods due to Donsker and Varadhan and compare the exact results with those obtained with a variant of WKB approximation (after Wentzel, Kramers and Brillouin). In the absence of interactions the WKB prediction for the rate function is exact for any N. In the presence of interactions the WKB method gives asymptotically exact results for large N. The WKB method also uncovers the (very simple) time history of the system which dominates the contributions of different time histories to this probability.
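
A quick simulation makes the measured quantity concrete. The sketch below runs a discrete-time, non-interacting Ehrenfest urn with K = 2 urns (the paper treats continuous-time versions) and computes the time-averaged fraction of balls in one urn, which concentrates near 1/2:

```python
import numpy as np

rng = np.random.default_rng(2)

N, T = 100, 200_000          # balls, observation steps
n = N // 2                   # balls currently in urn 1
occupancy = np.empty(T)

for t in range(T):
    # Pick a ball uniformly at random and move it to the other urn:
    # with probability n/N it leaves urn 1, otherwise it enters.
    if rng.random() < n / N:
        n -= 1
    else:
        n += 1
    occupancy[t] = n

a = occupancy.mean() / N     # time-averaged fraction of balls in urn 1
```

Large deviations of `a` away from 1/2 are exponentially unlikely in T, which is exactly what the rate function quantifies.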

  16. The timing of ostrich existence in Central Asia: AMS 14C age of eggshells from Mongolia and southern Siberia (a pilot study)

    International Nuclear Information System (INIS)

    Kurochkin, Evgeny N.; Kuzmin, Yaroslav V.; Antoshchenko-Olenev, Igor V.; Zabelin, Vladimir I.; Krivonogov, Sergey K.; Nohrina, Tatiana I.; Lbova, Ludmila V.; Burr, G.S.; Cruz, Richard J.

    2010-01-01

The presence of the Asiatic ostrich in Central Asia in later Cenozoic time is well documented; nevertheless, until recently only a few direct age determinations existed. We performed AMS 14C dating of ostrich eggshells found in Mongolia, Transbaikal, and Tuva. The results show that ostriches existed throughout the second part of the Late Pleistocene, until the Late Glacial time (ca. 13,000-10,100 BP). It appears that the Asiatic ostrich went extinct in Central Asia just before, or even during, the Holocene.

  17. Vibration amplitude rule study for rotor under large time scale

    International Nuclear Information System (INIS)

    Yang Xuan; Zuo Jianli; Duan Changcheng

    2014-01-01

    The rotor is an important part of the rotating machinery; its vibration performance is one of the important factors affecting the service life. This paper presents both theoretical analyses and experimental demonstrations of the vibration rule of the rotor under large time scales. The rule can be used for the service life estimation of the rotor. (authors)

  18. Large volume recycling of oceanic lithosphere over short time scales: geochemical constraints from the Caribbean Large Igneous Province

    Science.gov (United States)

    Hauff, F.; Hoernle, K.; Tilton, G.; Graham, D. W.; Kerr, A. C.

    2000-01-01

Oceanic flood basalts are poorly understood, short-term expressions of highly increased heat flux and mass flow within the convecting mantle. The uniqueness of the Caribbean Large Igneous Province (CLIP, 92-74 Ma) with respect to other Cretaceous oceanic plateaus is its extensive sub-aerial exposures, providing an excellent basis to investigate the temporal and compositional relationships within a starting plume head. We present major element, trace element and initial Sr-Nd-Pb isotope compositions of 40 extrusive rocks from the Caribbean Plateau, including onland sections in Costa Rica, Colombia and Curaçao as well as DSDP Sites in the Central Caribbean. Even though the lavas were erupted over an area of ~3×10^6 km^2, the majority have strikingly uniform incompatible element patterns (La/Yb = 0.96±0.16, n = 64 out of 79 samples, 2σ) and initial Nd-Pb isotopic compositions (e.g. (143Nd/144Nd)_i = 0.51291±3, εNd_i = 7.3±0.6, (206Pb/204Pb)_i = 18.86±0.12, n = 54 out of 66, 2σ). Lavas with endmember compositions have only been sampled at the DSDP Sites, Gorgona Island (Colombia) and the 65-60 Ma accreted Quepos and Osa igneous complexes (Costa Rica) of the subsequent hotspot track. Despite the relatively uniform composition of most lavas, linear correlations exist between isotope ratios and between isotope and highly incompatible trace element ratios. The Sr-Nd-Pb isotope and trace element signatures of the chemically enriched lavas are compatible with derivation from recycled oceanic crust, while the depleted lavas are derived from a highly residual source. This source could represent either oceanic lithospheric mantle left after ocean crust formation or gabbros with interlayered ultramafic cumulates of the lower oceanic crust. 3He/4He ratios in olivines of enriched picrites at Quepos are ~12 times higher than the atmospheric ratio, suggesting that the enriched component may have once resided in the lower mantle. Evaluation of the Sm-Nd and U-Pb isotope systematics on

  19. Overview of Existing Wind Energy Ordinances

    Energy Technology Data Exchange (ETDEWEB)

    Oteri, F.

    2008-12-01

    Due to increased energy demand in the United States, rural communities with limited or no experience with wind energy now have the opportunity to become involved in this industry. Communities with good wind resources may be approached by entities with plans to develop the resource. Although these opportunities can create new revenue in the form of construction jobs and land lease payments, they also create a new responsibility on the part of local governments to ensure that ordinances will be established to aid the development of safe facilities that will be embraced by the community. The purpose of this report is to educate and engage state and local governments, as well as policymakers, about existing large wind energy ordinances. These groups will have a collection of examples to utilize when they attempt to draft a new large wind energy ordinance in a town or county without existing ordinances.

  20. Locating single-point sources from arrival times containing large picking errors (LPEs): the virtual field optimization method (VFOM)

    Science.gov (United States)

    Li, Xi-Bing; Wang, Ze-Wei; Dong, Long-Jun

    2016-01-01

    Microseismic monitoring systems using local location techniques tend to be timely, automatic and stable. One basic requirement of these systems is the automatic picking of arrival times. However, arrival times generated by automated techniques always contain large picking errors (LPEs), which may make the location solution unreliable and cause the integrated system to be unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. In contrast to existing approaches, the VFOM optimizes a continuous and virtually established objective function to search the space for the common intersection of the hyperboloids, which is determined by sensor pairs other than the least residual between the model-calculated and measured arrivals. The results of numerical examples and in-site blasts show that the VFOM can obtain more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on objective functions to determine the LPE-tolerant mechanism, velocity sensitivity and stopping criteria of the VFOM. The proposed method is also capable of locating acoustic sources using passive techniques such as passive sonar detection and acoustic emission.
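
The geometry behind the VFOM's objective (sensor pairs rather than residuals involving an unknown origin time) can be illustrated with a brute-force sketch: each pair's arrival-time difference cancels the origin time and constrains the source to a hyperbola, and the source estimate is the minimiser of the summed mismatch. The sensor layout, wave speed and grid are invented, and the grid search merely stands in for the paper's optimization:

```python
import numpy as np

v = 1.0                                    # wave speed (arbitrary units)
sensors = np.array([[0., 0.], [100., 0.], [0., 100.],
                    [100., 100.], [50., 0.]])
true_src = np.array([40., 60.])

# Arrival times; the origin time t0 is unknown to the locator.
t0 = 3.7
arrivals = t0 + np.linalg.norm(sensors - true_src, axis=1) / v

# Objective built from sensor PAIRS: differencing arrivals cancels t0,
# so each pair constrains the source to a hyperbola.
xs = np.arange(0., 100.5, 0.5)
X, Y = np.meshgrid(xs, xs)
F = np.zeros_like(X)
for i in range(len(sensors)):
    for j in range(i + 1, len(sensors)):
        dij = arrivals[i] - arrivals[j]
        ri = np.hypot(X - sensors[i, 0], Y - sensors[i, 1])
        rj = np.hypot(X - sensors[j, 0], Y - sensors[j, 1])
        F += ((ri - rj) / v - dij) ** 2

k = np.unravel_index(np.argmin(F), F.shape)
est = np.array([X[k], Y[k]])
```

With large picking errors on a few arrivals, the pairs involving bad picks raise the mismatch field locally, but the common intersection of the remaining hyperbolas still dominates the minimum, which is the tolerance mechanism the paper analyzes.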

  1. The question of the existence of God in the book of Stephen Hawking: A brief history of time

    NARCIS (Netherlands)

    Driessen, A.; Driessen, A; Suarez, A.

    1997-01-01

    The continuing interest in the book of S. Hawking "A Brief History of Time" makes a philosophical evaluation of the content highly desirable. As will be shown, the genre of this work can be identified as a speciality in philosophy, namely the proof of the existence of God. In this study an attempt

  2. Large lateral photovoltaic effect with ultrafast relaxation time in SnSe/Si junction

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Xianjie; Zhao, Xiaofeng; Hu, Chang; Zhang, Yang; Song, Bingqian; Zhang, Lingli; Liu, Weilong; Lv, Zhe; Zhang, Yu; Sui, Yu, E-mail: suiyu@hit.edu.cn [Department of Physics, Harbin Institute of Technology, Harbin 150001 (China); Tang, Jinke [Department of Physics and Astronomy, University of Wyoming, Laramie, Wyoming 82071 (United States); Song, Bo, E-mail: songbo@hit.edu.cn [Department of Physics, Harbin Institute of Technology, Harbin 150001 (China); Academy of Fundamental and Interdisciplinary Sciences, Harbin Institute of Technology, Harbin 150001 (China)

    2016-07-11

In this paper, we report a large lateral photovoltaic effect (LPE) with ultrafast relaxation time in SnSe/p-Si junctions. The LPE shows a linear dependence on the position of the laser spot, and the position sensitivity is as high as 250 mV mm^-1. The optical response time and the relaxation time of the LPE are about 100 ns and 2 μs, respectively. The current-voltage curve on the surface of the SnSe film indicates the formation of an inversion layer at the SnSe/p-Si interface. Our results clearly suggest that most of the excited electrons diffuse laterally in the inversion layer at the SnSe/p-Si interface, which results in a large LPE with ultrafast relaxation time. The high positional sensitivity and ultrafast relaxation time of the LPE make the SnSe/p-Si junction a promising candidate for a wide range of optoelectronic applications.

  3. The Space-Time Conservative Schemes for Large-Scale, Time-Accurate Flow Simulations with Tetrahedral Meshes

    Science.gov (United States)

    Venkatachari, Balaji Shankar; Streett, Craig L.; Chang, Chau-Lyan; Friedlander, David J.; Wang, Xiao-Yen; Chang, Sin-Chung

    2016-01-01

Despite decades of development of unstructured mesh methods, high-fidelity time-accurate simulations are still predominantly carried out on structured, or unstructured hexahedral meshes by using high-order finite-difference, weighted essentially non-oscillatory (WENO), or hybrid schemes formed by their combinations. In this work, the space-time conservation element solution element (CESE) method is used to simulate several flow problems including supersonic jet/shock interaction and its impact on launch vehicle acoustics, and direct numerical simulations of turbulent flows using tetrahedral meshes. This paper provides a status report on the continuing development of the CESE numerical and software framework under the Revolutionary Computational Aerosciences (RCA) project. Solution accuracy and large-scale parallel performance of the numerical framework are assessed with the goal of providing a viable paradigm for future high-fidelity flow physics simulations.

  4. Large-scale machine learning and evaluation platform for real-time traffic surveillance

    Science.gov (United States)

    Eichel, Justin A.; Mishra, Akshaya; Miller, Nicholas; Jankovic, Nicholas; Thomas, Mohan A.; Abbott, Tyler; Swanson, Douglas; Keller, Joel

    2016-09-01

In traffic engineering, vehicle detectors are trained on limited datasets, resulting in poor accuracy when deployed in real-world surveillance applications. Annotating large-scale high-quality datasets is challenging. Typically, these datasets have limited diversity; they do not reflect the real-world operating environment. There is a need for a large-scale, cloud-based positive and negative mining process and a large-scale learning and evaluation system for the application of automatic traffic measurements and classification. The proposed positive and negative mining process addresses the quality of crowd sourced ground truth data through machine learning review and human feedback mechanisms. The proposed learning and evaluation system uses a distributed cloud computing framework to handle data-scaling issues associated with large numbers of samples and a high-dimensional feature space. The system is trained using AdaBoost on 1,000,000 Haar-like features extracted from 70,000 annotated video frames. The trained real-time vehicle detector achieves an accuracy of at least 95% for half of the time, and about 78% for 19/20 of the time, when tested on ~7,500,000 video frames. At the end of 2016, the dataset is expected to have over 1 billion annotated video frames.

  5. Necessary and Sufficient Conditions for the Existence of Positive Solution for Singular Boundary Value Problems on Time Scales

    Directory of Open Access Journals (Sweden)

    Meiqiang Feng

    2009-01-01

Full Text Available By constructing available upper and lower solutions and combining Schauder's fixed point theorem with the maximum principle, this paper establishes sufficient and necessary conditions to guarantee the existence of Cld[0,1]𝕋 as well as CldΔ[0,1]𝕋 positive solutions for a class of singular boundary value problems on time scales. The results significantly extend and improve many known results for both the continuous case and more general time scales. We illustrate our results by one example.

  6. Cosmological special relativity the large scale structure of space, time and velocity

    CERN Document Server

    Carmeli, Moshe

    1997-01-01

    This book deals with special relativity theory and its application to cosmology. It presents Einstein's theory of space and time in detail, and describes the large scale structure of space, time and velocity as a new cosmological special relativity. A cosmological Lorentz-like transformation, which relates events at different cosmic times, is derived and applied. A new law of addition of cosmic times is obtained, and the inflation of the space at the early universe is derived, both from the cosmological transformation. The book will be of interest to cosmologists, astrophysicists, theoretical

  7. Cosmological special relativity the large scale structure of space, time and velocity

    CERN Document Server

    Carmeli, Moshe

    2002-01-01

    This book presents Einstein's theory of space and time in detail, and describes the large-scale structure of space, time and velocity as a new cosmological special relativity. A cosmological Lorentz-like transformation, which relates events at different cosmic times, is derived and applied. A new law of addition of cosmic times is obtained, and the inflation of the space at the early universe is derived, both from the cosmological transformation. The relationship between cosmic velocity, acceleration and distances is given. In the appendices gravitation is added in the form of a cosmological g

  8. Large-time asymptotic behaviour of solutions of non-linear Sobolev-type equations

    International Nuclear Information System (INIS)

    Kaikina, Elena I; Naumkin, Pavel I; Shishmarev, Il'ya A

    2009-01-01

    The large-time asymptotic behaviour of solutions of the Cauchy problem is investigated for a non-linear Sobolev-type equation with dissipation. For small initial data the approach taken is based on a detailed analysis of the Green's function of the linear problem and the use of the contraction mapping method. The case of large initial data is also closely considered. In the supercritical case the asymptotic formulae are quasi-linear. The asymptotic behaviour of solutions of a non-linear Sobolev-type equation with a critical non-linearity of the non-convective kind differs by a logarithmic correction term from the behaviour of solutions of the corresponding linear equation. For a critical convective non-linearity, as well as for a subcritical non-convective non-linearity it is proved that the leading term of the asymptotic expression for large times is a self-similar solution. For Sobolev equations with convective non-linearity the asymptotic behaviour of solutions in the subcritical case is the product of a rarefaction wave and a shock wave. Bibliography: 84 titles.

  9. Time delay effects on large-scale MR damper based semi-active control strategies

    International Nuclear Information System (INIS)

    Cha, Y-J; Agrawal, A K; Dyke, S J

    2013-01-01

    This paper presents a detailed investigation on the robustness of large-scale 200 kN MR damper based semi-active control strategies in the presence of time delays in the control system. Although the effects of time delay on stability and performance degradation of an actively controlled system have been investigated extensively by many researchers, degradation in the performance of semi-active systems due to time delay has yet to be investigated. Since semi-active systems are inherently stable, instability problems due to time delay are unlikely to arise. This paper investigates the effects of time delay on the performance of a building with a large-scale MR damper, using numerical simulations of near- and far-field earthquakes. The MR damper is considered to be controlled by four different semi-active control algorithms, namely (i) clipped-optimal control (COC), (ii) decentralized output feedback polynomial control (DOFPC), (iii) Lyapunov control, and (iv) simple-passive control (SPC). It is observed that all controllers except for the COC are significantly robust with respect to time delay. On the other hand, the clipped-optimal controller should be integrated with a compensator to improve the performance in the presence of time delay. (paper)

  10. Time dispersion in large plastic scintillation neutron detector [Paper No.:B3

    International Nuclear Information System (INIS)

    De, A.; Dasgupta, S.S.; Sen, D.

    1993-01-01

The time dispersion seen by the photomultiplier (PM) tube in a large plastic scintillation neutron detector, together with the light-collection mechanism, has been computed, showing that this time dispersion (TD) does not necessarily increase with increasing incident neutron energy, in contrast to the usual finding that TD increases with increasing energy. (author). 8 refs., 4 figs

  11. Latitude-Time Total Electron Content Anomalies as Precursors to Japan's Large Earthquakes Associated with Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Jyh-Woei Lin

    2011-01-01

Full Text Available The goal of this study is to determine whether principal component analysis (PCA) can be used to process latitude-time ionospheric TEC data on a monthly basis to identify earthquake associated TEC anomalies. PCA is applied to latitude-time (mean-of-a-month) ionospheric total electron content (TEC) records collected from the Japan GEONET network to detect TEC anomalies associated with 18 earthquakes in Japan (M ≥ 6.0) from 2000 to 2005. According to the results, PCA was able to discriminate clear TEC anomalies in the months when all 18 earthquakes occurred. After reviewing months when no M ≥ 6.0 earthquakes occurred but geomagnetic storm activity was present, it is possible that the maximal principal eigenvalues PCA returned for these 18 earthquakes indicate earthquake associated TEC anomalies. Previously PCA has been used to discriminate earthquake-associated TEC anomalies recognized by other researchers, who found that statistical association between large earthquakes and TEC anomalies could be established in the 5 days before earthquake nucleation; however, since PCA uses the characteristics of principal eigenvalues to determine earthquake related TEC anomalies, it is possible to show that such anomalies existed earlier than this 5-day statistical window.
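
The role of a "maximal principal eigenvalue" can be illustrated on synthetic data: a latitude-time matrix of background TEC with one localized enhancement yields a dominant first principal component. This sketch runs an SVD-based PCA on invented numbers, not on GEONET records:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy latitude-time TEC map: 40 latitude bins x 720 epochs of background
# noise, plus one localized "anomaly" patch of enhanced TEC (hypothetical).
tec = rng.normal(loc=20.0, scale=1.0, size=(40, 720))
tec[25:32, 300:340] += 15.0           # hypothetical anomaly patch

# PCA via SVD of the mean-centered data matrix.
X = tec - tec.mean(axis=1, keepdims=True)
s = np.linalg.svd(X, compute_uv=False)
explained = s**2 / np.sum(s**2)       # variance fraction per component
```

A month containing a genuine anomaly shows a first component that dwarfs the rest, whereas pure background noise spreads the variance almost evenly over components.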

  12. Adding large EM stack support

    KAUST Repository

    Holst, Glendon

    2016-12-01

Serial section electron microscopy (SSEM) image stacks generated using high throughput microscopy techniques are an integral tool for investigating brain connectivity and cell morphology. FIB or 3View scanning electron microscopes easily generate gigabytes of data. In order to produce an analyzable 3D dataset from the imaged volumes, efficient and reliable image segmentation is crucial. Classical manual approaches to segmentation are time consuming and labour intensive. Semiautomatic seeded watershed segmentation algorithms, such as those implemented by ilastik image processing software, are a very powerful alternative, substantially speeding up segmentation times. We have used ilastik effectively for small EM stacks – on a laptop, no less; however, ilastik was unable to carve the large EM stacks we needed to segment because its memory requirements grew too large – even for the biggest workstations we had available. For this reason, we refactored the carving module of ilastik to scale it up to large EM stacks on large workstations, and tested its efficiency. We modified the carving module, building on existing blockwise processing functionality to process data in manageable chunks that can fit within RAM (main memory). We review this refactoring work, highlighting the software architecture, design choices, modifications, and issues encountered.
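The blockwise strategy of processing data in manageable chunks that fit within RAM can be sketched as a generic chunked-apply loop; this is not ilastik's actual carving module, and the block shape and per-block function are illustrative only.

```python
import numpy as np

def blockwise_apply(volume, func, block=(64, 64, 64)):
    """Apply `func` to one sub-block at a time so only a single chunk is
    resident in memory; `func` maps a sub-array to a same-shaped result.
    (No halo/overlap handling, which real segmentation needs at borders.)"""
    out = np.empty_like(volume)
    bz, by, bx = block
    nz, ny, nx = volume.shape
    for z in range(0, nz, bz):
        for y in range(0, ny, by):
            for x in range(0, nx, bx):
                sl = np.s_[z:z + bz, y:y + by, x:x + bx]
                out[sl] = func(volume[sl])
    return out

vol = np.arange(64, dtype=np.float32).reshape(4, 4, 4)
mask = blockwise_apply(vol, lambda b: (b > b.mean()).astype(np.float32), block=(2, 2, 2))
```

Note that per-block statistics differ from global ones, which is why real blockwise pipelines carry halo regions or shared state across chunk boundaries.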

  13. Natural Time and Nowcasting Earthquakes: Are Large Global Earthquakes Temporally Clustered?

    Science.gov (United States)

    Luginbuhl, Molly; Rundle, John B.; Turcotte, Donald L.

    2018-02-01

The objective of this paper is to analyze the temporal clustering of large global earthquakes with respect to natural time, or interevent count, as opposed to regular clock time. To do this, we use two techniques: (1) nowcasting, a new method of statistically classifying seismicity and seismic risk, and (2) time series analysis of interevent counts. We chose the sequences of M_λ ≥ 7.0 and M_λ ≥ 8.0 earthquakes from the global centroid moment tensor (CMT) catalog from 2004 to 2016 for analysis. A significant number of these earthquakes will be aftershocks of the largest events, but no satisfactory method of declustering the aftershocks in clock time is available. A major advantage of using natural time is that it eliminates the need for declustering aftershocks. The event count we utilize is the number of small earthquakes that occur between large earthquakes. The small earthquake magnitude is chosen to be as small as possible, such that the catalog is still complete based on the Gutenberg-Richter statistics. For the CMT catalog, starting in 2004, we found the completeness magnitude to be M_σ ≥ 5.1. For the nowcasting method, the cumulative probability distribution of these interevent counts is obtained. We quantify the distribution using the exponent, β, of the best fitting Weibull distribution; β = 1 for a random (exponential) distribution. We considered 197 earthquakes with M_λ ≥ 7.0 and found β = 0.83 ± 0.08. We considered 15 earthquakes with M_λ ≥ 8.0, but this number was considered too small to generate a meaningful distribution. For comparison, we generated synthetic catalogs of earthquakes that occur randomly with the Gutenberg-Richter frequency-magnitude statistics. We considered a synthetic catalog of 1.97 × 10^5 M_λ ≥ 7.0 earthquakes and found β = 0.99 ± 0.01. The random catalog converted to natural time was also random. We then generated 1.5 × 10^4 synthetic catalogs with 197 M_λ ≥ 7.0 in each catalog and
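The natural-time bookkeeping described above, counting small events between consecutive large events and summarizing the counts with a Weibull shape exponent β, can be sketched as follows. The magnitude thresholds mirror the abstract; the median-rank probability-plot estimator for β is a generic choice, not necessarily the authors' fitting method.

```python
import numpy as np

def interevent_counts(magnitudes, m_large=7.0, m_small=5.1):
    """Natural-time counts: number of small events (m_small <= M < m_large)
    between consecutive large events (M >= m_large).  The count accumulated
    before the first large event is censored and dropped."""
    counts, n = [], 0
    for m in magnitudes:
        if m >= m_large:
            counts.append(n)
            n = 0
        elif m >= m_small:
            n += 1
    return np.array(counts[1:])

def weibull_beta(samples):
    """Probability-plot (median-rank) estimate of the Weibull shape beta;
    beta = 1 corresponds to a random (exponential) distribution."""
    x = np.sort(np.asarray(samples, dtype=float))
    x = x[x > 0]
    n = x.size
    f = (np.arange(1, n + 1) - 0.5) / n      # median-rank plotting positions
    slope, _ = np.polyfit(np.log(x), np.log(-np.log(1.0 - f)), 1)
    return slope

mags = [5.5, 7.1, 5.2, 5.3, 7.0, 5.2, 5.4, 5.9, 7.3]
counts = interevent_counts(mags)   # -> [2, 3]
```

Applied to exponentially distributed interevent counts, the estimator returns β close to 1, matching the paper's synthetic-catalog check.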

  14. Incipient multiple fault diagnosis in real time with applications to large-scale systems

    International Nuclear Information System (INIS)

    Chung, H.Y.; Bien, Z.; Park, J.H.; Seon, P.H.

    1994-01-01

By using a modified signed directed graph (SDG) together with distributed artificial neural networks and a knowledge-based system, a method of incipient multi-fault diagnosis is presented for large-scale physical systems with complex pipes and instrumentation such as valves, actuators, sensors, and controllers. The proposed method is designed to (1) make real-time incipient fault diagnosis possible for large-scale systems, (2) perform the fault diagnosis not only in the steady-state case but also in the transient case by using a concept of fault propagation time, which is newly adopted in the SDG model, (3) provide highly reliable diagnosis results and an explanation capability for diagnosed faults, as in an expert system, and (4) diagnose pipe damage such as leaking, break, or throttling. This method is applied to diagnosis of a pressurizer in the Kori Nuclear Power Plant (NPP) unit 2 in Korea under a transient condition, and the result shows satisfactory performance of the method for incipient multi-fault diagnosis of such a large-scale system in real time

  15. Displacement in the parameter space versus spurious solution of discretization with large time step

    International Nuclear Information System (INIS)

    Mendes, Eduardo; Letellier, Christophe

    2004-01-01

In order to investigate a possible correspondence between differential and difference equations, it is important to possess discretizations of ordinary differential equations. It is well known that when differential equations are discretized, the solution thus obtained depends on the time step used. In the majority of cases, such a solution is considered spurious when it does not resemble the expected solution of the differential equation. This often happens when the time step taken into consideration is too large. In this work, we show that, even for quite large time steps, some solutions which do not correspond to the expected ones are still topologically equivalent to solutions of the original continuous system if a displacement in the parameter space is considered. To reduce such a displacement, a judicious choice of the discretization scheme should be made. To this end, a recent discretization scheme, based on the Lie expansion of the original differential equations, proposed by Monaco and Normand-Cyrot, will be analysed. Such a scheme will be shown to be sufficient for providing an adequate discretization for quite large time steps compared to the pseudo-period of the underlying dynamics
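The phenomenon that motivates the paper, spurious solutions appearing when the time step is too large, is easy to reproduce with a plain forward-Euler discretization of the logistic equation x' = x(1 - x). This is only an illustrative example, not the Monaco and Normand-Cyrot scheme: for small h the iterates settle on the true equilibrium, while for large h a spurious period-2 orbit appears that the continuous system does not possess.

```python
def euler_tail(h, x0=0.5, burn=1000, keep=4):
    """Iterate the forward-Euler map x -> x + h*x*(1 - x) for the logistic
    ODE x' = x(1 - x), discard a transient, and return `keep` iterates."""
    x = x0
    for _ in range(burn):
        x = x + h * x * (1.0 - x)
    tail = []
    for _ in range(keep):
        x = x + h * x * (1.0 - x)
        tail.append(x)
    return tail

small_step = euler_tail(0.5)   # settles on the true equilibrium x = 1
large_step = euler_tail(2.3)   # spurious period-2 oscillation
```

The Euler map is conjugate to the logistic map with parameter r = 1 + h, so the period doubling at h > 2 is exactly the displacement-in-parameter-space effect the abstract discusses.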

  16. Replicability of time-varying connectivity patterns in large resting state fMRI samples.

    Science.gov (United States)

    Abrol, Anees; Damaraju, Eswar; Miller, Robyn L; Stephen, Julia M; Claus, Eric D; Mayer, Andrew R; Calhoun, Vince D

    2017-12-01

    The past few years have seen an emergence of approaches that leverage temporal changes in whole-brain patterns of functional connectivity (the chronnectome). In this chronnectome study, we investigate the replicability of the human brain's inter-regional coupling dynamics during rest by evaluating two different dynamic functional network connectivity (dFNC) analysis frameworks using 7 500 functional magnetic resonance imaging (fMRI) datasets. To quantify the extent to which the emergent functional connectivity (FC) patterns are reproducible, we characterize the temporal dynamics by deriving several summary measures across multiple large, independent age-matched samples. Reproducibility was demonstrated through the existence of basic connectivity patterns (FC states) amidst an ensemble of inter-regional connections. Furthermore, application of the methods to conservatively configured (statistically stationary, linear and Gaussian) surrogate datasets revealed that some of the studied state summary measures were indeed statistically significant and also suggested that this class of null model did not explain the fMRI data fully. This extensive testing of reproducibility of similarity statistics also suggests that the estimated FC states are robust against variation in data quality, analysis, grouping, and decomposition methods. We conclude that future investigations probing the functional and neurophysiological relevance of time-varying connectivity assume critical importance. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  17. The EcoData retriever: improving access to existing ecological data.

    Directory of Open Access Journals (Sweden)

    Benjamin D Morris

Full Text Available Ecological research relies increasingly on the use of previously collected data. Use of existing datasets allows questions to be addressed more quickly, more generally, and at larger scales than would otherwise be possible. As a result of large-scale data collection efforts, and an increasing emphasis on data publication by journals and funding agencies, a large and ever-increasing amount of ecological data is now publicly available via the internet. Most ecological datasets do not adhere to any agreed-upon standards in format, data structure or method of access. Some may be broken up across multiple files, stored in compressed archives, and violate basic principles of data structure. As a result, acquiring and utilizing available datasets can be a time-consuming and error-prone process. The EcoData Retriever is an extensible software framework which automates the tasks of discovering, downloading, and reformatting ecological data files for storage in a local data file or relational database. The automation of these tasks saves significant time for researchers and substantially reduces the likelihood of errors resulting from manual data manipulation and unfamiliarity with the complexities of individual datasets.
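The kind of reformatting the Retriever automates, parsing a delimited text file and loading it into a local relational database, can be sketched with the Python standard library. The three-row dataset here is made up for illustration, not one of the Retriever's actual sources.

```python
import csv
import io
import sqlite3

# A made-up dataset standing in for a downloaded ecological data file.
raw = io.StringIO("site,species,count\nA,wren,3\nB,wren,5\nA,crow,2\n")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE survey (site TEXT, species TEXT, count INTEGER)")
# DictReader yields one dict per row, matching the named placeholders.
conn.executemany(
    "INSERT INTO survey VALUES (:site, :species, :count)",
    csv.DictReader(raw),
)
total = conn.execute("SELECT SUM(count) FROM survey").fetchone()[0]
```

Once the data sit in a relational table, downstream analyses can query them uniformly regardless of the original file layout, which is the point of the Retriever's automation.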

  18. Time-Sliced Perturbation Theory for Large Scale Structure I: General Formalism

    CERN Document Server

    Blas, Diego; Ivanov, Mikhail M.; Sibiryakov, Sergey

    2016-01-01

    We present a new analytic approach to describe large scale structure formation in the mildly non-linear regime. The central object of the method is the time-dependent probability distribution function generating correlators of the cosmological observables at a given moment of time. Expanding the distribution function around the Gaussian weight we formulate a perturbative technique to calculate non-linear corrections to cosmological correlators, similar to the diagrammatic expansion in a three-dimensional Euclidean quantum field theory, with time playing the role of an external parameter. For the physically relevant case of cold dark matter in an Einstein--de Sitter universe, the time evolution of the distribution function can be found exactly and is encapsulated by a time-dependent coupling constant controlling the perturbative expansion. We show that all building blocks of the expansion are free from spurious infrared enhanced contributions that plague the standard cosmological perturbation theory. This pave...

  19. Rapid Modeling of and Response to Large Earthquakes Using Real-Time GPS Networks (Invited)

    Science.gov (United States)

    Crowell, B. W.; Bock, Y.; Squibb, M. B.

    2010-12-01

    Real-time GPS networks have the advantage of capturing motions throughout the entire earthquake cycle (interseismic, seismic, coseismic, postseismic), and because of this, are ideal for real-time monitoring of fault slip in the region. Real-time GPS networks provide the perfect supplement to seismic networks, which operate with lower noise and higher sampling rates than GPS networks, but only measure accelerations or velocities, putting them at a supreme disadvantage for ascertaining the full extent of slip during a large earthquake in real-time. Here we report on two examples of rapid modeling of recent large earthquakes near large regional real-time GPS networks. The first utilizes Japan’s GEONET consisting of about 1200 stations during the 2003 Mw 8.3 Tokachi-Oki earthquake about 100 km offshore Hokkaido Island and the second investigates the 2010 Mw 7.2 El Mayor-Cucapah earthquake recorded by more than 100 stations in the California Real Time Network. The principal components of strain were computed throughout the networks and utilized as a trigger to initiate earthquake modeling. Total displacement waveforms were then computed in a simulated real-time fashion using a real-time network adjustment algorithm that fixes a station far away from the rupture to obtain a stable reference frame. Initial peak ground displacement measurements can then be used to obtain an initial size through scaling relationships. Finally, a full coseismic model of the event can be run minutes after the event, given predefined fault geometries, allowing emergency first responders and researchers to pinpoint the regions of highest damage. Furthermore, we are also investigating using total displacement waveforms for real-time moment tensor inversions to look at spatiotemporal variations in slip.

  20. A Short Proof of the Large Time Energy Growth for the Boussinesq System

    Science.gov (United States)

    Brandolese, Lorenzo; Mouzouni, Charafeddine

    2017-10-01

We give a direct proof of the fact that the L^p-norms of global solutions of the Boussinesq system in R^3 grow large as t → ∞ for p > 1, for solutions defined on R^+ × R^3. In particular, the kinetic energy blows up as ‖u(t)‖_2^2 ~ c t^{1/2} for large time. This contrasts with the case of the Navier-Stokes equations.

  1. High-Temperature-Short-Time Annealing Process for High-Performance Large-Area Perovskite Solar Cells.

    Science.gov (United States)

    Kim, Minjin; Kim, Gi-Hwan; Oh, Kyoung Suk; Jo, Yimhyun; Yoon, Hyun; Kim, Ka-Hyun; Lee, Heon; Kim, Jin Young; Kim, Dong Suk

    2017-06-27

Organic-inorganic hybrid metal halide perovskite solar cells (PSCs) are attracting tremendous research interest due to their high solar-to-electric power conversion efficiency with a high possibility of cost-effective fabrication and certified power conversion efficiency now exceeding 22%. Although many effective methods for their application have been developed over the past decade, their practical transition to large-size devices has been restricted by difficulties in achieving high performance. Here we report on the development of a simple and cost-effective production method with high-temperature and short-time annealing processing to obtain uniform, smooth, and large-size grain domains of perovskite films over large areas. With high-temperature short-time annealing at 400 °C for 4 s, a perovskite film with an average domain size of 1 μm was obtained as a result of fast solvent evaporation. Solar cells fabricated using this processing technique had a maximum power conversion efficiency exceeding 20% over a 0.1 cm² active area and 18% over a 1 cm² active area. We believe our approach will enable the realization of highly efficient large-area PSCs for practical development with a very simple and short-time procedure. This simple method should lead the field toward the fabrication of uniform large-scale perovskite films, which are necessary for the production of high-efficiency solar cells that may also be applicable to several other material systems for more widespread practical deployment.

  2. Signal existence verification (SEV) for GPS low received power signal detection using the time-frequency approach.

    Science.gov (United States)

    Jan, Shau-Shiun; Sun, Chih-Cheng

    2010-01-01

    The detection of low received power of global positioning system (GPS) signals in the signal acquisition process is an important issue for GPS applications. Improving the miss-detection problem of low received power signal is crucial, especially for urban or indoor environments. This paper proposes a signal existence verification (SEV) process to detect and subsequently verify low received power GPS signals. The SEV process is based on the time-frequency representation of GPS signal, and it can capture the characteristic of GPS signal in the time-frequency plane to enhance the GPS signal acquisition performance. Several simulations and experiments are conducted to show the effectiveness of the proposed method for low received power signal detection. The contribution of this work is that the SEV process is an additional scheme to assist the GPS signal acquisition process in low received power signal detection, without changing the original signal acquisition or tracking algorithms.
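The time-frequency idea behind the SEV process can be illustrated with a toy detector: a short-time Fourier transform concentrates a weak tone into a single frequency bin where it stands out against noise, even though it is invisible sample by sample. All parameters (sampling rate, tone frequency and amplitude, the 2x-median threshold) are illustrative assumptions; a real GPS acquisition chain correlates against PRN codes rather than detecting raw tones.

```python
import numpy as np

def stft_mag(x, nperseg=128):
    """Magnitude spectrogram from non-overlapping Hann-windowed FFT frames."""
    frames = x[: len(x) // nperseg * nperseg].reshape(-1, nperseg)
    return np.abs(np.fft.rfft(frames * np.hanning(nperseg), axis=1))

rng = np.random.default_rng(1)
fs, n = 1024, 4096
t = np.arange(n) / fs
noise = rng.normal(0.0, 1.0, n)
weak = np.sin(2 * np.pi * 96.0 * t)      # tone with half the noise power

S = stft_mag(noise + weak)
bin_idx = 96 * 128 // 1024               # FFT bin containing the 96 Hz tone
detected = bool(S[:, bin_idx].mean() > 2.0 * np.median(S))
```

Averaging the tone bin across frames while thresholding against the median of the whole time-frequency plane is what lets the weak component be verified rather than missed.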

  3. The global existence problem in general relativity

    CERN Document Server

    Andersson, L

    2000-01-01

We survey some known facts and open questions concerning the global properties of 3+1 dimensional space-times containing a compact Cauchy surface. We consider space-times with an ℓ-dimensional Lie algebra of space-like Killing fields. For each ℓ ≤ 3, we give some basic results and conjectures on global existence and cosmic censorship. For the case of the 3+1 dimensional Einstein equations without symmetries, a new small data global existence result is announced.

  4. Incorporating Real-time Earthquake Information into Large Enrollment Natural Disaster Course Learning

    Science.gov (United States)

    Furlong, K. P.; Benz, H.; Hayes, G. P.; Villasenor, A.

    2010-12-01

    Although most would agree that the occurrence of natural disaster events such as earthquakes, volcanic eruptions, and floods can provide effective learning opportunities for natural hazards-based courses, implementing compelling materials into the large-enrollment classroom environment can be difficult. These natural hazard events derive much of their learning potential from their real-time nature, and in the modern 24/7 news-cycle where all but the most devastating events are quickly out of the public eye, the shelf life for an event is quite limited. To maximize the learning potential of these events requires that both authoritative information be available and course materials be generated as the event unfolds. Although many events such as hurricanes, flooding, and volcanic eruptions provide some precursory warnings, and thus one can prepare background materials to place the main event into context, earthquakes present a particularly confounding situation of providing no warning, but where context is critical to student learning. Attempting to implement real-time materials into large enrollment classes faces the additional hindrance of limited internet access (for students) in most lecture classrooms. In Earth 101 Natural Disasters: Hollywood vs Reality, taught as a large enrollment (150+ students) general education course at Penn State, we are collaborating with the USGS’s National Earthquake Information Center (NEIC) to develop efficient means to incorporate their real-time products into learning activities in the lecture hall environment. Over time (and numerous events) we have developed a template for presenting USGS-produced real-time information in lecture mode. The event-specific materials can be quickly incorporated and updated, along with key contextual materials, to provide students with up-to-the-minute current information. In addition, we have also developed in-class activities, such as student determination of population exposure to severe ground

  5. A mathematical model of a steady flow through the Kaplan turbine - The existence of a weak solution in the case of an arbitrarily large inflow

    Science.gov (United States)

    Neustupa, Tomáš

    2017-07-01

The paper presents the mathematical model of a steady 2-dimensional viscous incompressible flow through a radial blade machine. The corresponding boundary value problem is studied in the rotating frame. We provide the classical and weak formulations of the problem. Using a special form of the so-called "artificial" or "natural" boundary condition on the outflow, we prove the existence of a weak solution for an arbitrarily large inflow.

  6. Tracking Object Existence From an Autonomous Patrol Vehicle

    Science.gov (United States)

    Wolf, Michael; Scharenbroich, Lucas

    2011-01-01

    An autonomous vehicle patrols a large region, during which an algorithm receives measurements of detected potential objects within its sensor range. The goal of the algorithm is to track all objects in the region over time. This problem differs from traditional multi-target tracking scenarios because the region of interest is much larger than the sensor range and relies on the movement of the sensor through this region for coverage. The goal is to know whether anything has changed between visits to the same location. In particular, two kinds of alert conditions must be detected: (1) a previously detected object has disappeared and (2) a new object has appeared in a location already checked. For the time an object is within sensor range, the object can be assumed to remain stationary, changing position only between visits. The problem is difficult because the upstream object detection processing is likely to make many errors, resulting in heavy clutter (false positives) and missed detections (false negatives), and because only noisy, bearings-only measurements are available. This work has three main goals: (1) Associate incoming measurements with known objects or mark them as new objects or false positives, as appropriate. For this, a multiple hypothesis tracker was adapted to this scenario. (2) Localize the objects using multiple bearings-only measurements to provide estimates of global position (e.g., latitude and longitude). A nonlinear Kalman filter extension provides these 2D position estimates using the 1D measurements. (3) Calculate the probability that a suspected object truly exists (in the estimated position), and determine whether alert conditions have been triggered (for new objects or disappeared objects). The concept of a probability of existence was created, and a new Bayesian method for updating this probability at each time step was developed. A probabilistic multiple hypothesis approach is chosen because of its superiority in handling the
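Goal (3) above, maintaining a probability of existence, can be sketched as a per-visit Bayes update driven by whether a detection was received. The detection and false-alarm probabilities below are illustrative placeholders; the paper's actual update method may differ.

```python
def update_existence(p, detected, p_d=0.8, p_fa=0.1):
    """Bayes update of the probability that an object exists, after one visit.
    p_d:  probability of detecting the object if it exists.
    p_fa: probability of a false-positive detection if it does not."""
    if detected:
        num, den = p_d * p, p_d * p + p_fa * (1.0 - p)
    else:
        num, den = (1.0 - p_d) * p, (1.0 - p_d) * p + (1.0 - p_fa) * (1.0 - p)
    return num / den

# Repeated detections drive the existence probability toward 1,
# while an occasional miss only dents it.
p = 0.5
for seen in [True, True, False, True]:
    p = update_existence(p, seen)

# Two consecutive misses starting from an even prior make existence unlikely.
q = update_existence(update_existence(0.5, False), False)
```

Thresholding this probability from above and below gives exactly the two alert conditions in the abstract: a confirmed object whose probability collapses has disappeared, and a fresh track whose probability climbs past the upper threshold is a new object.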

  7. Timing of Formal Phase Safety Reviews for Large-Scale Integrated Hazard Analysis

    Science.gov (United States)

    Massie, Michael J.; Morris, A. Terry

    2010-01-01

    Integrated hazard analysis (IHA) is a process used to identify and control unacceptable risk. As such, it does not occur in a vacuum. IHA approaches must be tailored to fit the system being analyzed. Physical, resource, organizational and temporal constraints on large-scale integrated systems impose additional direct or derived requirements on the IHA. The timing and interaction between engineering and safety organizations can provide either benefits or hindrances to the overall end product. The traditional approach for formal phase safety review timing and content, which generally works well for small- to moderate-scale systems, does not work well for very large-scale integrated systems. This paper proposes a modified approach to timing and content of formal phase safety reviews for IHA. Details of the tailoring process for IHA will describe how to avoid temporary disconnects in major milestone reviews and how to maintain a cohesive end-to-end integration story particularly for systems where the integrator inherently has little to no insight into lower level systems. The proposal has the advantage of allowing the hazard analysis development process to occur as technical data normally matures.

  8. Large natural geophysical events: planetary planning

    International Nuclear Information System (INIS)

    Knox, J.B.; Smith, J.V.

    1984-09-01

Geological and geophysical data suggest that, during the evolution of the earth and its species, there have been many mass extinctions due to large impacts from comets and large asteroids, and major volcanic events. Today, technology has developed to the stage where we can begin to consider protective measures for the planet. Evidence of the ecological disruption and frequency of these major events is presented. It is most critical to develop surveillance and warning systems that provide sufficient lead times so that appropriate interventions can be designed. The long term research undergirding these warning systems, their implementation, and proof testing is rich in opportunities for collaboration for peace

  9. Prevalence of HIV among MSM in Europe: comparison of self-reported diagnoses from a large scale internet survey and existing national estimates

    Directory of Open Access Journals (Sweden)

    Marcus Ulrich

    2012-11-01

Full Text Available Abstract Background Country level comparisons of HIV prevalence among men having sex with men (MSM) is challenging for a variety of reasons, including differences in the definition and measurement of the denominator group, recruitment strategies and the HIV detection methods. To assess their comparability, self-reported data on HIV diagnoses in a 2010 pan-European MSM internet survey (EMIS) were compared with pre-existing estimates of HIV prevalence in MSM from a variety of European countries. Methods The first pan-European survey of MSM recruited more than 180,000 men from 38 countries across Europe and included questions on the year and result of the last HIV test. HIV prevalence as measured in EMIS was compared with national estimates of HIV prevalence based on studies using biological measurements or modelling approaches to explore the degree of agreement between different methods. Existing estimates were taken from Dublin Declaration Monitoring Reports or UNAIDS country fact sheets, and were verified by contacting the nominated contact points for HIV surveillance in EU/EEA countries. Results The EMIS self-reported measurements of HIV prevalence were strongly correlated with existing estimates based on biological measurement and modelling studies using surveillance data (R² = 0.70 and 0.72, respectively). In most countries HIV-positive MSM appeared disproportionately likely to participate in EMIS, and prevalences as measured in EMIS are approximately twice the pre-existing estimates. Conclusions Comparison of diagnosed HIV prevalence as measured in EMIS with pre-existing estimates based on biological measurements using varied sampling frames (e.g. Respondent Driven Sampling, Time and Location Sampling) demonstrates a high correlation and suggests similar selection biases from both types of studies.
For comparison with modelled estimates the self-selection bias of the Internet survey with increased participation of men diagnosed with HIV has to be

  10. Decrease of the tunneling time and violation of the Hartman effect for large barriers

    International Nuclear Information System (INIS)

    Olkhovsky, V.S.; Zaichenko, A.K.; Petrillo, V.

    2004-01-01

The explicit formulation of the initial conditions in the definition of the wave-packet tunneling time is proposed. This formulation adequately takes into account the irreversibility of the wave-packet space-time spreading. Moreover, it explains the violations of the Hartman effect, leading to a strong decrease of the tunneling times, down to negative values, for wave packets with large momentum spreads due to strong wave-packet time spreading

  11. An Efficient Format for Nearly Constant-Time Access to Arbitrary Time Intervals in Large Trace Files

    Directory of Open Access Journals (Sweden)

    Anthony Chan

    2008-01-01

Full Text Available A powerful method to aid in understanding the performance of parallel applications uses log or trace files containing time-stamped events and states (pairs of events). These trace files can be very large, often hundreds or even thousands of megabytes. Because of the cost of accessing and displaying such files, other methods are often used that reduce the size of the trace files at the cost of sacrificing detail or other information. This paper describes a hierarchical trace file format that provides for display of an arbitrary time window in a time independent of the total size of the file and roughly proportional to the number of events within the time window. This format eliminates the need to sacrifice data to achieve a smaller trace file size (since storage is inexpensive, it is necessary only to make efficient use of bandwidth to that storage). The format can be used to organize a trace file or to create a separate file of annotations that may be used with conventional trace files. We present an analysis of the time to access all of the events relevant to an interval of time, and we describe experiments demonstrating the performance of this file format.
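The central property, window access cost roughly proportional to the number of events in the window rather than to the file size, can be sketched in memory with a sorted index and binary search. This is not the paper's hierarchical file layout, just an illustration of the access-cost idea.

```python
import bisect

class TraceIndex:
    """Time-sorted index over (timestamp, event) records: locating a window
    costs O(log N), and extraction is proportional to the events inside it."""

    def __init__(self, events):
        self.events = sorted(events)                 # sort by timestamp
        self.times = [t for t, _ in self.events]

    def window(self, t0, t1):
        """All events with t0 <= timestamp <= t1."""
        lo = bisect.bisect_left(self.times, t0)
        hi = bisect.bisect_right(self.times, t1)
        return self.events[lo:hi]

idx = TraceIndex([(3.0, "send"), (1.0, "recv"), (2.5, "barrier"), (9.0, "send")])
```

An on-disk format adds a hierarchy of block summaries on top of this so that the binary search descends through directory blocks instead of requiring the whole event list in memory.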

  12. Hadronic shift of the stable dilambda state and the existence of double hypernuclei

    International Nuclear Information System (INIS)

    Kerbikov, B.O.

    1984-01-01

The problem of the mass of a six-quark bag with the strangeness S = -2 (the H particle) is discussed in connection with the data on double hypernuclei. It is shown that if the mass of the H particle, as predicted by the bag model, were several tens of MeV below the ΛΛ channel threshold, then the decay time of the ^6_ΛΛHe double hypernucleus in the ^6_ΛΛHe → H + α channel would be 10^-18 to 10^-20 sec. Experimentally observed double hypernuclei have the lifetime 10^-10 sec, which is consistent with the existence of the H particle provided that its mass is close to the ΛΛ threshold. The influence of the hadronic channels ΛΛ, NΞ and ΣΣ on the mass of the H particle is investigated. It is shown that because of coupling to hadronic channels the mass of the H particle decreases by 150-200 MeV. The existence of a large hadronic shift leads to the fact that the H particle is significantly below the ΛΛ channel threshold. Thus, with account of the hadronic shift the contradiction between the observation of double hypernuclei and the existence of the H particle is extremely acute

  13. Asymptotic description of two metastable processes of solidification for the case of large relaxation time

    International Nuclear Information System (INIS)

    Omel'yanov, G.A.

    1995-07-01

The non-isothermal Cahn-Hilliard equations in the n-dimensional case (n = 2,3) are considered. The interaction length is proportional to a small parameter, and the relaxation time is proportional to a constant. The asymptotic solutions describing two metastable processes are constructed and justified. The soliton type solution describes the first stage of separation in alloy, when a set of "superheated liquid" appears inside the "solid" part. The Van der Waals type solution describes the free interface dynamics for large time. The smoothness of temperature is established for large time and the Mullins-Sekerka problem describing the free interface is derived. (author). 46 refs.

  14. Statistical analysis of error rate of large-scale single flux quantum logic circuit by considering fluctuation of timing parameters

    International Nuclear Information System (INIS)

    Yamanashi, Yuki; Masubuchi, Kota; Yoshikawa, Nobuyuki

    2016-01-01

The relationship between the timing margin and the error rate of large-scale single flux quantum logic circuits is quantitatively investigated to establish a timing design guideline. We observed that the fluctuation in the set-up/hold time of single flux quantum logic gates caused by thermal noise is the most probable origin of logical errors in large-scale single flux quantum circuits. The appropriate timing margin for stable operation of a large-scale logic circuit is discussed by taking into account the fluctuation of set-up/hold time and the timing jitter in single flux quantum circuits. As a case study, the dependence of the error rate of a 1-million-bit single flux quantum shift register on the timing margin is statistically analyzed. The result indicates that adjustment of the timing margin and the bias voltage is important for stable operation of a large-scale SFQ logic circuit.
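The link between timing margin and error rate can be sketched under the simplest assumptions the abstract suggests: set-up/hold fluctuations are Gaussian, a gate fails when the fluctuation exceeds its margin, and gate failures are independent. The numbers below are illustrative, not the paper's measured parameters.

```python
import math

def gate_error_rate(margin, sigma):
    """One-sided Gaussian tail: probability that the timing fluctuation of a
    single gate evaluation exceeds its margin (a set-up/hold violation)."""
    return 0.5 * math.erfc(margin / (sigma * math.sqrt(2.0)))

def circuit_error_rate(margin, sigma, n_gates):
    """Probability that at least one of n independent evaluations fails."""
    p = gate_error_rate(margin, sigma)
    return 1.0 - (1.0 - p) ** n_gates

# A 5-sigma margin per gate still gives a sizeable error rate
# once a million gate evaluations are involved.
rate = circuit_error_rate(50.0, 10.0, 10**6)
```

This is why the abstract's million-bit shift register is so sensitive to margin: the per-gate tail probability is multiplied by the gate count, so modest margin increases buy exponential reductions in circuit-level error rate.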

  15. Statistical analysis of error rate of large-scale single flux quantum logic circuit by considering fluctuation of timing parameters

    Energy Technology Data Exchange (ETDEWEB)

    Yamanashi, Yuki, E-mail: yamanasi@ynu.ac.jp [Department of Electrical and Computer Engineering, Yokohama National University, Tokiwadai 79-5, Hodogaya-ku, Yokohama 240-8501 (Japan); Masubuchi, Kota; Yoshikawa, Nobuyuki [Department of Electrical and Computer Engineering, Yokohama National University, Tokiwadai 79-5, Hodogaya-ku, Yokohama 240-8501 (Japan)

    2016-11-15

The relationship between the timing margin and the error rate of large-scale single flux quantum logic circuits is quantitatively investigated to establish a timing design guideline. We observed that the fluctuation in the set-up/hold time of single flux quantum logic gates caused by thermal noise is the most probable origin of logical errors in large-scale single flux quantum circuits. The appropriate timing margin for stable operation of a large-scale logic circuit is discussed by taking into account the fluctuation of set-up/hold time and the timing jitter in single flux quantum circuits. As a case study, the dependence of the error rate of a 1-million-bit single flux quantum shift register on the timing margin is statistically analyzed. The result indicates that adjustment of the timing margin and the bias voltage is important for stable operation of a large-scale SFQ logic circuit.

  16. Molecular dynamics based enhanced sampling of collective variables with very large time steps

    Science.gov (United States)

    Chen, Pei-Yang; Tuckerman, Mark E.

    2018-01-01

Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations based on these methods to take similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.
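The multiple time step idea underlying these integrators can be sketched in one dimension with a plain reversible RESPA splitting: the cheap fast force is integrated with a small inner step, while the expensive slow force is applied only as half-kicks around the inner loop. This sketch omits the isokinetic/Nosé-Hoover machinery of the cited resonance-free methods, and the force constants are illustrative:

```python
import math

def respa_step(x, v, dt, n_inner, f_fast, f_slow, mass=1.0):
    """One reversible RESPA (multiple time step) step: the slow force is
    applied as half-kicks around an inner velocity-Verlet loop that
    integrates the fast force with the smaller step dt/n_inner."""
    v += 0.5 * dt * f_slow(x) / mass          # slow half-kick
    h = dt / n_inner
    for _ in range(n_inner):                  # fast inner loop
        v += 0.5 * h * f_fast(x) / mass
        x += h * v
        v += 0.5 * h * f_fast(x) / mass
    v += 0.5 * dt * f_slow(x) / mass          # slow half-kick
    return x, v

# Fast stiff spring plus a slow, weak anharmonic correction (toy values).
k_fast, k_slow = 100.0, 0.1
f_fast = lambda x: -k_fast * x
f_slow = lambda x: -k_slow * x ** 3

x, v = 1.0, 0.0
for _ in range(1000):
    x, v = respa_step(x, v, dt=0.05, n_inner=10, f_fast=f_fast, f_slow=f_slow)
print(x, v)
```

In real MD the resonance limit caps how far `dt` can be stretched over the inner step; the isokinetic constraints of the paper's methods are precisely what removes that cap.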

  17. The effect of large decoherence on mixing time in continuous-time quantum walks on long-range interacting cycles

    Energy Technology Data Exchange (ETDEWEB)

    Salimi, S; Radgohar, R, E-mail: shsalimi@uok.ac.i, E-mail: r.radgohar@uok.ac.i [Faculty of Science, Department of Physics, University of Kurdistan, Pasdaran Ave, Sanandaj (Iran, Islamic Republic of)

    2010-01-28

In this paper, we consider decoherence in continuous-time quantum walks on long-range interacting cycles (LRICs), which are extensions of the cycle graphs. For this purpose, we use Gurvitz's model and assume that every node is monitored by the corresponding point contact induced by the decoherence process. Then, we focus on large rates of decoherence, calculate the probability distribution analytically, and obtain the lower and upper bounds of the mixing time. Our results prove that the mixing time is proportional to the rate of decoherence and the inverse of the square of the distance parameter (m). This shows that the mixing time decreases with increasing range of interaction. Also, what we obtain for m = 0 is in agreement with the results of Fedichkin, Solenov and Tamon [48] for the cycle, and we see that the mixing time of CTQWs on the cycle improves as interacting edges are added.

  18. THE WIGNER–FOKKER–PLANCK EQUATION: STATIONARY STATES AND LARGE TIME BEHAVIOR

    KAUST Repository

    ARNOLD, ANTON

    2012-11-01

We consider the linear Wigner–Fokker–Planck equation subject to confining potentials which are smooth perturbations of the harmonic oscillator potential. For a certain class of perturbations we prove that the equation admits a unique stationary solution in a weighted Sobolev space. A key ingredient of the proof is a new result on the existence of spectral gaps for Fokker–Planck type operators in certain weighted L^2-spaces. In addition, we show that the steady state corresponds to a positive density matrix operator with unit trace and that the solutions of the time-dependent problem converge towards the steady state with an exponential rate. © 2012 World Scientific Publishing Company.

  19. A method for real-time memory efficient implementation of blob detection in large images

    Directory of Open Access Journals (Sweden)

    Petrović Vladimir L.

    2017-01-01

In this paper we propose a method for real-time blob detection in large images with low memory cost. The method is suitable for implementation on specialized parallel hardware such as multi-core platforms, FPGAs and ASICs. It uses parallelism to speed up the blob detection. The input image is divided into blocks of equal size, to which the maximally stable extremal regions (MSER) blob detector is applied in parallel. We propose the use of multiresolution analysis for the detection of large blobs that are not detected by processing the small blocks. This method can find its place in many applications, such as medical imaging, text recognition, video surveillance, and wide area motion imagery (WAMI). We also explored the possibilities of using the detected blobs in feature-based image alignment. When large images are processed, our approach is 10 to over 20 times more memory efficient than the state-of-the-art hardware implementation of MSER.
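The block-partitioning idea can be sketched as follows. For illustration, a simple threshold plus 4-connected flood fill stands in for the MSER detector (an assumption, not the paper's detector); each block is processed independently and could be dispatched as a parallel task:

```python
def detect_blobs_blockwise(image, block, threshold):
    """Split the image into equal blocks and run a blob detector on each
    block independently (here: threshold + 4-connected flood fill as a
    stand-in for MSER). Returns blob bounding boxes (y0, x0, y1, x1)."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for by in range(0, h, block):
        for bx in range(0, w, block):
            y1, x1 = min(by + block, h), min(bx + block, w)
            for y in range(by, y1):
                for x in range(bx, x1):
                    if image[y][x] > threshold and not seen[y][x]:
                        stack, ys, xs = [(y, x)], [], []
                        seen[y][x] = True
                        while stack:   # flood fill, confined to the block
                            cy, cx = stack.pop()
                            ys.append(cy); xs.append(cx)
                            for ny, nx in ((cy-1,cx),(cy+1,cx),(cy,cx-1),(cy,cx+1)):
                                if by <= ny < y1 and bx <= nx < x1 \
                                   and image[ny][nx] > threshold and not seen[ny][nx]:
                                    seen[ny][nx] = True
                                    stack.append((ny, nx))
                        blobs.append((min(ys), min(xs), max(ys) + 1, max(xs) + 1))
    return blobs

img = [[0.0] * 64 for _ in range(64)]
for y in range(10, 14):
    for x in range(10, 14):
        img[y][x] = 1.0   # one small bright blob inside a single block
print(detect_blobs_blockwise(img, block=32, threshold=0.5))  # [(10, 10, 14, 14)]
```

The memory advantage in the paper comes from only ever holding one block (plus a reduced-resolution copy for large blobs) rather than the full image.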

  20. Process evaluation of treatment times in a large radiotherapy department

    International Nuclear Information System (INIS)

    Beech, R.; Burgess, K.; Stratford, J.

    2016-01-01

Purpose/objective: The Department of Health (DH) recognises access to appropriate and timely radiotherapy (RT) services as crucial in improving cancer patient outcomes, especially when facing a predicted increase in cancer diagnoses. There is a lack of 'real-time' data regarding daily demand on a linear accelerator, the impact of increasingly complex techniques on treatment times, and whether current scheduling reflects the time needed for RT delivery, which would be valuable in highlighting current RT provision. Material/methods: A systematic quantitative process evaluation was undertaken in a large regional cancer centre, including a satellite centre, between January and April 2014. Data collected included treatment room-occupancy time, RT site, RT and verification technique, and patient mobility status. Data were analysed descriptively; average room-occupancy times were calculated for RT techniques and compared to historical standardised treatment times within the department. Results: Room-occupancy was recorded for over 1300 fractions, over 50% of which overran their allotted treatment time. In a focused sample of 16 common techniques, 10 overran their allocated timeslots. Verification increased room-occupancy by six minutes (50%) over non-imaging. Treatments for patients requiring mobility assistance took four minutes (29%) longer. Conclusion: The majority of treatments overran their standardised timeslots. Although technique advancement has reduced RT delivery time, room-occupancy has not necessarily decreased. Verification increases room-occupancy and needs to be considered when moving towards adaptive techniques. Mobility affects room-occupancy and will become increasingly significant in an ageing population. This evaluation assesses the validity of current treatment times in this department, and can be modified and repeated as necessary. - Highlights: • A process evaluation examined room-occupancy for various radiotherapy techniques. • Appointment lengths

  1. Parallel Algorithm for Incremental Betweenness Centrality on Large Graphs

    KAUST Repository

    Jamour, Fuad Tarek

    2017-10-17

Betweenness centrality quantifies the importance of nodes in a graph in many applications, including network analysis, community detection and identification of influential users. Typically, graphs in such applications evolve over time. Thus, the computation of betweenness centrality should be performed incrementally. This is challenging because updating even a single edge may trigger the computation of all-pairs shortest paths in the entire graph. Existing approaches cannot scale to large graphs: they either require excessive memory (i.e., quadratic in the size of the input graph) or perform unnecessary computations, rendering them prohibitively slow. We propose iCentral, a novel incremental algorithm for computing betweenness centrality in evolving graphs. We decompose the graph into biconnected components and prove that processing can be localized within the affected components. iCentral is the first algorithm to support incremental betweenness centrality computation within a graph component. This is done efficiently, in linear space; consequently, iCentral scales to large graphs. We demonstrate with real datasets that the serial implementation of iCentral is up to 3.7 times faster than existing serial methods. Our parallel implementation, which scales to large graphs, is an order of magnitude faster than the state-of-the-art parallel algorithm, while using an order of magnitude less computational resources.
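The biconnected decomposition that iCentral relies on can be sketched with a standard Hopcroft–Tarjan DFS. This shows only the decomposition step that localizes an edge update to one component, not the incremental centrality update itself:

```python
import sys
from collections import defaultdict

def biconnected_components(adj):
    """Hopcroft-Tarjan DFS splitting an undirected graph into biconnected
    components, returned as sets of edges. An edge update only touches
    shortest paths inside the component containing that edge, which is
    the observation iCentral uses to localize recomputation."""
    sys.setrecursionlimit(10000)
    disc, low, comps, stack, timer = {}, {}, [], [], [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]; timer[0] += 1
        for v in adj[u]:
            if v == parent:
                continue
            if v not in disc:
                stack.append((u, v))
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if low[v] >= disc[u]:        # u separates a component
                    comp = set()
                    while True:
                        e = stack.pop()
                        comp.add(frozenset(e))
                        if e == (u, v):
                            break
                    comps.append(comp)
            elif disc[v] < disc[u]:          # back edge
                stack.append((u, v))
                low[u] = min(low[u], disc[v])

    for u in list(adj):
        if u not in disc:
            dfs(u, None)
    return comps

# Two triangles joined by a bridge: 0-1-2-0, 2-3, 3-4-5-3.
adj = defaultdict(list)
for a, b in [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 5), (5, 3)]:
    adj[a].append(b); adj[b].append(a)
comps = biconnected_components(adj)
print(len(comps))   # 3: each triangle and the bridge
```

Updating an edge inside one triangle leaves the other triangle's component untouched, which is why localized recomputation pays off on graphs with many articulation points.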

  2. Hypoattenuation on CTA images with large vessel occlusion: timing affects conspicuity

    Energy Technology Data Exchange (ETDEWEB)

    Dave, Prasham [University of Ottawa, MD Program, Faculty of Medicine, Ottawa, ON (Canada); Lum, Cheemun; Thornhill, Rebecca; Chakraborty, Santanu [University of Ottawa, Department of Radiology, Ottawa, ON (Canada); Ottawa Hospital Research Institute, Ottawa, ON (Canada); Dowlatshahi, Dar [Ottawa Hospital Research Institute, Ottawa, ON (Canada); University of Ottawa, Division of Neurology, Department of Medicine, Ottawa, ON (Canada)

    2017-05-15

    Parenchymal hypoattenuation distal to occlusions on CTA source images (CTASI) is perceived because of the differences in tissue contrast compared to normally perfused tissue. This difference in conspicuity can be measured objectively. We evaluated the effect of contrast timing on the conspicuity of ischemic areas. We collected consecutive patients, retrospectively, between 2012 and 2014 with large vessel occlusions that had dynamic multiphase CT angiography (CTA) and CT perfusion (CTP). We identified areas of low cerebral blood volume on CTP maps and drew the region of interest (ROI) on the corresponding CTASI. A second ROI was placed in an area of normally perfused tissue. We evaluated conspicuity by comparing the absolute and relative change in attenuation between ischemic and normally perfused tissue over seven time points. The median absolute and relative conspicuity was greatest at the peak arterial (8.6 HU (IQR 5.1-13.9); 1.15 (1.09-1.26)), notch (9.4 HU (5.8-14.9); 1.17 (1.10-1.27)), and peak venous phases (7.0 HU (3.1-12.7); 1.13 (1.05-1.23)) compared to other portions of the time-attenuation curve (TAC). There was a significant effect of phase on the TAC for the conspicuity of ischemic vs normally perfused areas (P < 0.00001). The conspicuity of ischemic areas distal to a large artery occlusion in acute stroke is dependent on the phase of contrast arrival with dynamic CTASI and is objectively greatest in the mid-phase of the TAC. (orig.)
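The conspicuity measures used above are simple ROI arithmetic: the absolute difference and the ratio of mean attenuation between normally perfused and ischemic tissue. A minimal sketch, with hypothetical ROI means chosen to match the reported peak-arterial medians:

```python
def conspicuity(normal_hu, ischemic_hu):
    """Absolute (HU difference) and relative (HU ratio) conspicuity
    between a normally perfused ROI and an ischemic ROI."""
    return normal_hu - ischemic_hu, normal_hu / ischemic_hu

# Hypothetical ROI means consistent with the reported peak-arterial
# medians (absolute ~ 8.6 HU, relative ~ 1.15).
print(conspicuity(66.0, 57.4))
```

Repeating this over the seven CTA time points yields the time-attenuation-curve dependence the study reports.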

  3. Efficient motif finding algorithms for large-alphabet inputs

    Directory of Open Access Journals (Sweden)

    Pavlovic Vladimir

    2010-10-01

Background We consider the problem of identifying motifs, recurring or conserved patterns, in biological sequence data sets. To solve this task, we present a new deterministic algorithm for finding patterns that are embedded as exact or inexact instances in all or most of the input strings. Results The proposed algorithm (1) improves search efficiency compared to existing algorithms, and (2) scales well with the size of the alphabet. On a synthetic planted DNA motif finding problem our algorithm is over 10× more efficient than MITRA, PMSPrune, and RISOTTO for long motifs. Improvements are orders of magnitude higher in the same setting with large alphabets. On benchmark TF-binding site problems (FNP, CRP, LexA) we observed a reduction in running time of over 12×, with high detection accuracy. The algorithm was also successful in rapidly identifying protein motifs in the Lipocalin and Zinc metallopeptidase families, and supersecondary structure motifs for the Cadherin and Immunoglobin families. Conclusions Our algorithm reduces the computational complexity of current motif finding algorithms and demonstrates strong running time improvements over existing exact algorithms, especially in the important and difficult case of large-alphabet sequences.
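For reference, the planted (k,d) motif problem that such benchmarks target can be stated as a brute-force search: report every k-mer over the alphabet that occurs with at most d mismatches in all input strings. This exhaustive scan is only a problem statement in code; the paper's algorithm exists precisely to avoid this enumeration, whose cost explodes with alphabet size:

```python
from itertools import product

def hamming(a, b):
    """Number of mismatching positions between two equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

def occurs_within(pattern, s, d):
    """True if some window of s matches pattern with at most d mismatches."""
    k = len(pattern)
    return any(hamming(pattern, s[i:i + k]) <= d for i in range(len(s) - k + 1))

def find_motifs(strings, k, d, alphabet="ACGT"):
    """Brute-force (k,d) planted-motif search: enumerate every k-mer over
    the alphabet and keep those occurring in *all* input strings."""
    return [''.join(p) for p in product(alphabet, repeat=k)
            if all(occurs_within(''.join(p), s, d) for s in strings)]

seqs = ["AACGTG", "CACGTT", "TACGTA"]
print(find_motifs(seqs, k=5, d=1))
```

The |alphabet|^k outer loop is the bottleneck this enumeration makes explicit: for protein alphabets (|Σ| = 20) it is already hopeless at modest k, which is the "large-alphabet" regime the paper addresses.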

  4. Large storage operations under climate change: expanding uncertainties and evolving tradeoffs

    Science.gov (United States)

    Giuliani, Matteo; Anghileri, Daniela; Castelletti, Andrea; Vu, Phuong Nam; Soncini-Sessa, Rodolfo

    2016-03-01

In a changing climate and society, large storage systems can play a key role in securing water, energy, and food, and rebalancing their cross-dependencies. In this letter, we study the role of large storage operations as flexible means of adaptation to climate change. In particular, we explore the impacts of different climate projections for different future time horizons on the multi-purpose operations of the existing system of large dams in the Red River basin (China-Laos-Vietnam). We identify the main vulnerabilities of current system operations, understand the risk of failure across sectors by exploring the evolution of the system tradeoffs, quantify how the uncertainty associated with climate scenarios is expanded by the storage operations, and assess the expected costs if no adaptation is implemented. Results show that, depending on the climate scenario and the time horizon considered, the existing operations are predicted to change on average from -7 to +5% in hydropower production, +35 to +520% in flood damages, and +15 to +160% in water supply deficit. These negative impacts can be partially mitigated by adapting the existing operations to the future climate, reducing the loss of hydropower to 5%, potentially saving around 34.4 million US$ per year at the national scale. Since the Red River is paradigmatic of many river basins across Southeast Asia, where new large dams are under construction or are planned to support fast growing economies, our results can support policy makers in prioritizing responses and adaptation strategies to the changing climate.

  5. Rapid Large Earthquake and Run-up Characterization in Quasi Real Time

    Science.gov (United States)

    Bravo, F. J.; Riquelme, S.; Koch, P.; Cararo, S.

    2017-12-01

Several tests in quasi real time have been conducted by the rapid response group at CSN (National Seismological Center) to characterize earthquakes in real time. These methods are known for their robustness and reliability in creating finite fault models (FFMs). The W-phase FFM inversion, the wavelet-domain FFM, and the body-wave FFM have been implemented in real time at CSN; all these algorithms run automatically, triggered by the W-phase point source inversion. Dimensions (length and width) are predefined by adopting scaling laws for earthquakes in subduction zones. We tested this scheme on the last four major earthquakes that occurred in Chile: the 2010 Mw 8.8 Maule earthquake, the 2014 Mw 8.2 Iquique earthquake, the 2015 Mw 8.3 Illapel earthquake, and the Mw 7.6 Melinka earthquake. We obtain many solutions as time elapses; for each one we calculate the run-up using an analytical formula. Our results are in agreement with FFMs already accepted by the scientific community as well as with run-up observations in the field.
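The scaling-law step can be sketched as a log-linear regression from moment magnitude to fault dimensions. The coefficient values below are illustrative placeholders for a subduction-zone regression, not the ones used at CSN:

```python
def fault_dimensions(mw, a_l=-2.37, b_l=0.57, a_w=-1.86, b_w=0.46):
    """Fault length and width (km) from moment magnitude via log-linear
    scaling laws, log10(dim) = a + b * Mw, as used to predefine FFM
    dimensions. Coefficients here are illustrative placeholders for a
    subduction-zone regression."""
    length_km = 10 ** (a_l + b_l * mw)
    width_km = 10 ** (a_w + b_w * mw)
    return length_km, width_km

L, W = fault_dimensions(8.8)   # a Maule-sized (Mw 8.8) event
print(round(L), round(W))
```

With these placeholder coefficients a Mw 8.8 event maps to a rupture of a few hundred kilometres in length, of the order of the Maule rupture, which is the kind of prior the FFM inversions start from.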

  6. The question of the existence of God in the book of Stephen Hawking: A brief history of time

    OpenAIRE

Driessen, A.; Suarez, A.

    1997-01-01

    The continuing interest in the book of S. Hawking "A Brief History of Time" makes a philosophical evaluation of the content highly desirable. As will be shown, the genre of this work can be identified as a speciality in philosophy, namely the proof of the existence of God. In this study an attempt is given to unveil the philosophical concepts and steps that lead to the final conclusions, without discussing in detail the remarkable review of modern physical theories. In order to clarify these ...

  7. Kinota: An Open-Source NoSQL implementation of OGC SensorThings for large-scale high-resolution real-time environmental monitoring

    Science.gov (United States)

    Miles, B.; Chepudira, K.; LaBar, W.

    2017-12-01

The Open Geospatial Consortium (OGC) SensorThings API (STA) specification, ratified in 2016, is a next-generation open standard for enabling real-time communication of sensor data. Building on over a decade of OGC Sensor Web Enablement (SWE) standards, STA offers a rich data model that can represent a range of sensor and phenomena types (e.g. fixed sensors sensing fixed phenomena, fixed sensors sensing moving phenomena, mobile sensors sensing fixed phenomena, and mobile sensors sensing moving phenomena) and is data agnostic. Additionally, and in contrast to previous SWE standards, STA is developer-friendly, as is evident from its convenient JSON serialization and expressive OData-based query language (with support for geospatial queries); with its Message Queue Telemetry Transport (MQTT) support, STA is also well-suited to efficient real-time data publishing and discovery. All these attributes make STA potentially useful for environmental monitoring sensor networks. Here we present Kinota(TM), an open-source NoSQL implementation of OGC SensorThings for large-scale high-resolution real-time environmental monitoring. Kinota, which roughly stands for Knowledge from Internet of Things Analyses, relies on Cassandra as its underlying data store, a horizontally scalable, fault-tolerant open-source database that is often used to store time-series data for Big Data applications (though integration with other NoSQL or relational databases is possible). With this foundation, Kinota can scale to store data from an arbitrary number of sensors collecting data every 500 milliseconds. Additionally, the Kinota architecture is very modular, allowing for customization by adopters who can choose to replace parts of the existing implementation when desirable. The architecture is also highly portable, providing the flexibility to choose between cloud providers like Azure, Amazon, Google, etc. The scalable, flexible and cloud-friendly architecture of Kinota makes it ideal for use in next
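The STA data model and its JSON serialization can be made concrete with a minimal Observation payload, as a client might publish it over MQTT to a topic of the form `v1.0/Datastreams(42)/Observations`. The entity and property names follow the SensorThings specification; the datastream id, timestamp, and result value are hypothetical:

```python
import json

# A minimal OGC SensorThings Observation, linked to its Datastream by id.
# Values are hypothetical; entity/property names follow the STA spec.
observation = {
    "phenomenonTime": "2017-06-01T12:00:00.500Z",  # when the value was sensed
    "result": 21.7,                                # e.g. water temperature, deg C
    "Datastream": {"@iot.id": 42},
}
payload = json.dumps(observation)
print(payload)
```

Sub-second `phenomenonTime` resolution is what lets a store like Cassandra absorb the 500 ms sampling rates mentioned above.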

  8. Fitness, work, and leisure-time physical activity and ischaemic heart disease and all-cause mortality among men with pre-existing cardiovascular disease

    DEFF Research Database (Denmark)

    Holtermann, Andreas; Mortensen, Ole Steen; Burr, Hermann

    2010-01-01

    Our aim was to study the relative impact of physical fitness, physical demands at work, and physical activity during leisure time on ischaemic heart disease (IHD) and all-cause mortality among employed men with pre-existing cardiovascular disease (CVD)....

  9. Asymptotics for Large Time of Global Solutions to the Generalized Kadomtsev-Petviashvili Equation

    Science.gov (United States)

    Hayashi, Nakao; Naumkin, Pavel I.; Saut, Jean-Claude

We study the large time asymptotic behavior of solutions to the generalized Kadomtsev-Petviashvili (KP) equations, where σ = 1 or σ = −1. When ρ = 2 and σ = −1, (KP) is known as the KPI equation, while ρ = 2, σ = +1 corresponds to the KPII equation. The KP equation models the propagation along the x-axis of nonlinear dispersive long waves on the surface of a fluid, when the variation along the y-axis proceeds slowly [10]. The case ρ = 3, σ = −1 has been found in the modeling of sound waves in antiferromagnetics [15]. We prove that if ρ ≥ 3 is an integer and the initial data are sufficiently small, then the solution u of (KP) satisfies the following estimates: for all t ∈ R, where κ = 1 if ρ = 3 and κ = 0 if ρ ≥ 4. We also find the large time asymptotics for the solution.
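The displayed equation and decay estimate were lost in extraction. In one common normalization (an assumption here, since the record does not preserve the displays), the generalized KP equation and a decay bound of the type referred to read:

```latex
\partial_t u + \partial_x^3 u + \sigma\,\partial_x^{-1}\partial_y^2 u
  + \partial_x\!\left(u^{\rho}\right) = 0, \qquad \sigma = \pm 1,
```

with a sup-norm decay estimate of the form

```latex
\| u(t) \|_{L^\infty} \le C\,(1+|t|)^{-1}\bigl(\log(2+|t|)\bigr)^{\kappa},
\qquad t \in \mathbb{R},
```

where κ = 1 if ρ = 3 and κ = 0 if ρ ≥ 4, matching the dichotomy stated in the abstract.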

  10. Eternally existing self-reproducing inflationary universe

    International Nuclear Information System (INIS)

    Linde, A.D.

    1986-05-01

It is shown that the large-scale quantum fluctuations of the scalar field φ generated in the chaotic inflation scenario lead to an infinite process of self-reproduction of inflationary mini-universes. A model of an eternally existing chaotic inflationary universe is suggested. It is pointed out that whereas the universe locally is very homogeneous as a result of inflation, which occurs at the classical level, the global structure of the universe is determined by quantum effects and is highly non-trivial. The universe consists of an exponentially large number of different mini-universes, inside which all possible (metastable) vacuum states and all possible types of compactification are realized. The picture differs crucially from the standard picture of a one-domain universe in a ''true'' vacuum state. Our results may serve as a justification of the anthropic principle in inflationary cosmology. These results may have important implications for elementary particle theory as well. Namely, since all possible types of mini-universes, in which inflation may occur, should exist in our universe, there is no need to insist (as is usually done) that in realistic theories the vacuum state of our type should be the only possible one or the best one. (author)

  11. An Efficient Addressing Scheme and Its Routing Algorithm for a Large-Scale Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    Choi Jeonghee

    2008-01-01

So far, various addressing and routing algorithms have been extensively studied for wireless sensor networks (WSNs), but many of them were limited to covering fewer than hundreds of sensor nodes. This is largely due to stringent requirements for fully distributed coordination among sensor nodes, leading to wasteful use of the available address space. As there is a growing need for large-scale WSNs, it will be extremely challenging to support more than thousands of nodes using existing standards. Moreover, it is highly unlikely that the existing standards will change, primarily due to backward compatibility issues. In response, we propose an elegant addressing scheme and its routing algorithm. While maintaining the existing address scheme, it tackles the wastage problem and requires no additional memory storage during routing. We also present an adaptive routing algorithm for location-aware applications using our addressing scheme. Through a series of simulations, we prove that our approach can achieve a routing time two times shorter than the existing standard in a ZigBee network.

  12. An Efficient Addressing Scheme and Its Routing Algorithm for a Large-Scale Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    Yongwan Park

    2008-12-01

So far, various addressing and routing algorithms have been extensively studied for wireless sensor networks (WSNs), but many of them were limited to covering fewer than hundreds of sensor nodes. This is largely due to stringent requirements for fully distributed coordination among sensor nodes, leading to wasteful use of the available address space. As there is a growing need for large-scale WSNs, it will be extremely challenging to support more than thousands of nodes using existing standards. Moreover, it is highly unlikely that the existing standards will change, primarily due to backward compatibility issues. In response, we propose an elegant addressing scheme and its routing algorithm. While maintaining the existing address scheme, it tackles the wastage problem and requires no additional memory storage during routing. We also present an adaptive routing algorithm for location-aware applications using our addressing scheme. Through a series of simulations, we prove that our approach can achieve a routing time two times shorter than the existing standard in a ZigBee network.

  13. Large deviation estimates for exceedance times of perpetuity sequences and their dual processes

    DEFF Research Database (Denmark)

    Buraczewski, Dariusz; Collamore, Jeffrey F.; Damek, Ewa

    2016-01-01

In a variety of problems in pure and applied probability, it is relevant to study the large exceedance probabilities of the perpetuity sequence $Y_n := B_1 + A_1 B_2 + \cdots + (A_1 \cdots A_{n-1}) B_n$, where $(A_i,B_i) \subset (0,\infty) \times \mathbb{R}$. Estimates for the stationary tail distribution of $\{ Y_n \}$ have been developed in the seminal papers of Kesten (1973) and Goldie (1991). Specifically, it is well known that if $M := \sup_n Y_n$, then ${\mathbb P} \left\{ M > u \right\} \sim {\cal C}_M u^{-\xi}$ as $u \to \infty$. While much attention has been focused on extending such estimates, ... we study the first-time exceedance probabilities of $\{ M_n^\ast \}$, yielding a new result concerning the convergence of $\{ M_n^\ast \}$ to its stationary distribution.
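The perpetuity sequence itself is cheap to compute with a running product of the $A_i$, and the power-law tail of $M = \sup_n Y_n$ can be probed by Monte Carlo. The distributional choices below (lognormal $A_i$ with $E[\log A] < 0$, constant $B_i = 1$) are illustrative assumptions, not the paper's setting:

```python
import math
import random

def perpetuity(a_b_pairs):
    """Y_n = B_1 + A_1*B_2 + ... + (A_1*...*A_{n-1})*B_n, computed with a
    running product of the A_i."""
    y, prod = 0.0, 1.0
    for a, b in a_b_pairs:
        y += prod * b
        prod *= a
    return y

def tail_prob(u, n=50, trials=20000, seed=7):
    """Monte Carlo estimate of P(M > u) with M = sup_n Y_n, for i.i.d.
    lognormal A_i (E[log A] < 0, so the sequence stabilizes) and constant
    B_i = 1. Kesten/Goldie theory predicts a power-law tail C_M * u^(-xi)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        y, prod, m = 0.0, 1.0, 0.0
        for _ in range(n):
            y += prod * 1.0
            prod *= math.exp(rng.gauss(-0.5, 1.0))
            m = max(m, y)
        if m > u:
            hits += 1
    return hits / trials

print(perpetuity([(0.5, 1.0)] * 3))   # 1 + 0.5 + 0.25 = 1.75
print(tail_prob(10.0))
```

The negative drift condition $E[\log A_i] < 0$ is what makes the running product, and hence the added terms, decay so that $Y_n$ converges; the heavy tail of $M$ comes from rare excursions where the product temporarily grows.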

  14. Real-Time Track Reallocation for Emergency Incidents at Large Railway Stations

    Directory of Open Access Journals (Sweden)

    Wei Liu

    2015-01-01

After track capacity breakdowns at a railway station, train dispatchers need to generate appropriate track reallocation plans to recover the impacted train schedule and minimize the expected total train delay time under stochastic scenarios. This paper focuses on the real-time track reallocation problem when tracks break down at large railway stations. To represent these cases, virtual trains are introduced and activated to occupy the accident tracks. A mathematical programming model is developed which aims at minimizing the total occupation time of station bottleneck sections to avoid train delays. In addition, a hybrid algorithm combining the genetic algorithm and the simulated annealing algorithm is designed. A case study of the Baoji railway station in China verifies the efficiency of the proposed model and algorithm. Numerical results indicate that, during a daily and shift transport plan from 8:00 to 8:30, if five tracks break down simultaneously, train schedules will be disturbed (resulting in train arrival and departure delays).

  15. Ontological Proofs of Existence and Non-Existence

    Czech Academy of Sciences Publication Activity Database

    Hájek, Petr

    2008-01-01

    Roč. 90, č. 2 (2008), s. 257-262 ISSN 0039-3215 R&D Projects: GA AV ČR IAA100300503 Institutional research plan: CEZ:AV0Z10300504 Keywords : ontological proofs * existence * non-existence * Gödel * Caramuel Subject RIV: BA - General Mathematics

  16. EXIST Perspective for SFXTs

    Science.gov (United States)

    Ubertini, Pietro; Sidoli, L.; Sguera, V.; Bazzano, A.

    2009-12-01

Supergiant Fast X-ray Transients (SFXTs) are one of the most interesting (and unexpected) results of the INTEGRAL mission. They are a new class of HMXBs displaying short hard X-ray outbursts (duration less than a day) characterized by fast flares (few-hour timescale) and a large dynamic range (10E3-10E4). The physical mechanism driving their peculiar behaviour is still unclear and highly debated: some models involve the structure of the supergiant companion donor wind (likely clumpy, in a spherical or non-spherical geometry) and the orbital properties (wide separation with eccentric or circular orbit), while others involve the properties of the neutron star compact object and invoke extreme magnetic field values (B ~ 1E14 G, magnetars). The picture is still highly unclear from the observational point of view as well: no cyclotron lines have been detected in the spectra, thus the strength of the neutron star magnetic field is unknown. Orbital periods have been measured in only 4 systems, spanning from 3.3 days to 165 days. Even the duty cycle seems to be quite different from source to source. The Energetic X-ray Imaging Survey Telescope (EXIST), with its hard X-ray all-sky survey and greatly improved limiting sensitivity, will allow us to get a clearer picture of SFXTs. A complete census of their number is essential to enlarge the sample. Long-term and as continuous as possible X-ray monitoring is crucial to (1) obtain the duty cycle, (2) investigate their unknown orbital properties (separation, orbital period, eccentricity), (3) completely cover the whole outburst activity, and (4) search for cyclotron lines in the high energy spectra. EXIST observations will provide crucial information to test the different models and shed light on the peculiar behaviour of SFXTs.

  17. Large-time behavior of solutions to a reaction-diffusion system with distributed microstructure

    NARCIS (Netherlands)

    Muntean, A.

    2009-01-01

We study the large-time behavior of a class of reaction-diffusion systems with constant distributed microstructure arising when modeling diffusion and reaction in structured porous media. The main result of this Note is the following: As t → ∞ the macroscopic concentration vanishes, while

  18. CAN LARGE TIME DELAYS OBSERVED IN LIGHT CURVES OF CORONAL LOOPS BE EXPLAINED IN IMPULSIVE HEATING?

    International Nuclear Information System (INIS)

    Lionello, Roberto; Linker, Jon A.; Mikić, Zoran; Alexander, Caroline E.; Winebarger, Amy R.

    2016-01-01

The light curves of solar coronal loops often peak first in channels associated with higher temperatures and then in those associated with lower temperatures. The delay times between the different narrowband EUV channels have been measured for many individual loops and recently for every pixel of an active region observation. The time delays between channels for an active region exhibit a wide range of values. The maximum time delay in each channel pair can be quite large, i.e., >5000 s. These large time delays make up 3%–26% (depending on the channel pair) of the pixels where a trustworthy, positive time delay is measured. It has been suggested that these time delays can be explained by simple impulsive heating, i.e., a short burst of energy that heats the plasma to a high temperature, after which the plasma is allowed to cool through radiation and conduction back to its original state. In this paper, we investigate whether the largest observed time delays can be explained by this hypothesis by simulating a series of coronal loops with different heating rates, loop lengths, abundances, and geometries to determine the range of expected time delays between a set of four EUV channels. We find that impulsive heating cannot address the largest time delays observed in two of the channel pairs and that the majority of the large time delays can only be explained by long, expanding loops with photospheric abundances. Additional observations may rule out these simulations as an explanation for the long time delays. We suggest that either the time delays found in this manner may not be representative of real loop evolution, or that the impulsive heating and cooling scenario may be too simple to explain the observations, and other potential heating scenarios must be explored.
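A toy exponential-cooling model shows why hotter channels peak first and how multi-thousand-second delays can arise: if the plasma cools as T(t) = T0·exp(−t/τ), a narrowband channel peaks roughly when T passes that channel's characteristic temperature. The channel temperatures below are illustrative, loosely AIA-like values, and the cooling law is a gross simplification of the paper's hydrodynamic simulations:

```python
import math

def channel_peak_time(t_channel, t0=10.0e6, tau=2000.0):
    """For a plasma cooling as T(t) = T0 * exp(-t/tau), emission in a
    narrowband channel with characteristic temperature t_channel peaks
    roughly at t = tau * ln(T0 / t_channel)."""
    return tau * math.log(t0 / t_channel)

# Illustrative channel temperatures (K), hottest to coolest.
channels = {"94": 7.1e6, "335": 2.5e6, "211": 2.0e6, "171": 0.8e6}
peaks = {c: channel_peak_time(T) for c, T in channels.items()}
delays = {c: peaks[c] - peaks["94"] for c in channels}
print(delays)
```

With T0 = 10 MK and τ = 2000 s, the hot-to-cool delay reaches several thousand seconds, the same order as the >5000 s delays at issue; the paper's point is that even such tuned single-cooling scenarios fail for the largest observed delays in two channel pairs.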

  19. Mining Outlier Data in Mobile Internet-Based Large Real-Time Databases

    Directory of Open Access Journals (Sweden)

    Xin Liu

    2018-01-01

Full Text Available Mining outlier data guarantees access security and data scheduling of parallel databases and maintains high-performance operation of real-time databases. Traditional mining methods generate abundant interference data, with reduced accuracy, efficiency, and stability, resulting in serious deficiencies. This paper proposes a new method for mining outlier data, which is used to analyze real-time data features, obtain magnitude spectra models of outlier data, establish a decision-tree information chain transmission model for outlier data in the mobile Internet, obtain the information flow of internal outlier data in the information chain of a large real-time database, and cluster the data. Based on local characteristic time-scale parameters of the information flow, the phase position features of the outlier data before filtering are obtained; the decision-tree outlier-classification feature-filtering algorithm is adopted to acquire signals for analysis and instant amplitude and to obtain the phase-frequency characteristics of the outlier data. Wavelet transform threshold denoising is combined with signal denoising to analyze data offset, to correct the resulting detection filter model, and to realize outlier data mining. The simulation suggests that the method detects the characteristic outlier data feature response distribution; reduces response time, iteration frequency, and mining error rate; improves mining adaptation and coverage; and shows good mining outcomes.

  20. Pre-existing and Postoperative Intimal Hyperplasia and Arteriovenous Fistula Outcomes.

    Science.gov (United States)

    Tabbara, Marwan; Duque, Juan C; Martinez, Laisel; Escobar, Luis A; Wu, Wensong; Pan, Yue; Fernandez, Natasha; Velazquez, Omaida C; Jaimes, Edgar A; Salman, Loay H; Vazquez-Padron, Roberto I

    2016-09-01

The contribution of intimal hyperplasia (IH) to arteriovenous fistula (AVF) failure is uncertain. This observational study assessed the relationship between pre-existing, postoperative, and change in IH over time and AVF outcomes. Prospective cohort study with longitudinal assessment of IH at the time of AVF creation (pre-existing) and transposition (postoperative). Patients were followed up for up to 3.3 years. 96 patients from a single center who underwent AVF surgery initially planned as a 2-stage procedure. Veins and AVF samples were collected from 66 and 86 patients, respectively. Matched-pair tissues were available from 56 of these patients. Pre-existing, postoperative, and change in IH over time. Anatomic maturation failure was defined as an AVF that never reached a diameter >6 mm. Primary unassisted patency was defined as the time elapsed from the second-stage surgery to the first intervention. Maximal intimal thickness in veins and AVFs and change in intimal thickness over time. Pre-existing IH (>0.05 mm) was present in 98% of patients. In this group, the median intimal thickness increased 4.40-fold (IQR, 2.17- to 4.94-fold) between AVF creation and transposition. However, this change was not associated with pre-existing thickness (r²=0.002; P=0.7). Ten of 96 (10%) AVFs never achieved maturation, whereas 70% of vascular accesses remained patent at the end of the observational period. Postoperative IH was not associated with anatomic maturation failure using univariate logistic regression. Pre-existing, postoperative, and change in IH over time had no effects on primary unassisted patency. The small number of patients from whom longitudinal tissue samples were available and low incidence of anatomic maturation failure decreased the statistical power to find associations between end points and IH. Pre-existing, postoperative, and change in IH over time were not associated with 2-stage AVF outcomes. Copyright © 2016 National Kidney Foundation, Inc.

  1. A unifying approach to existence of Nash equilibria

    NARCIS (Netherlands)

    Balder, E.J.

    1997-01-01

    An approach initiated in [4] is shown to unify results about the existence of (i) Nash equilibria in games with at most countably many players, (ii) Cournot-Nash equilibrium distributions for large, anonymous games, and (iii) Nash equilibria (both mixed and pure) for continuum games. A new, central

  2. The Kembs project: environmental integration of a large existing hydropower scheme

    International Nuclear Information System (INIS)

    Garnier, Alain; Barillier, Agnes

    2015-01-01

The environment was a major issue for the Kembs re-licensing process on the upper Rhine River. Since 1932, the Kembs dam has diverted water from the Rhine River to the 'Grand Canal d'Alsace' (GCA), which is equipped with four hydropower plants (max. diverted flow: 1400 m³/s; 630 MW; 3760 GWh/y). The Old Rhine River downstream of the dam is 50 km long and has been strongly affected by engineering works (dikes) since the 19th century for flood protection and navigation, and then by the construction of the dam. Successive engineering works induced morphological simplification and stabilization of the channel pattern, from a formerly braided form to a single incised channel, generating ecological alterations. As the Kembs hydroelectric scheme concerns three countries (France, Germany and Switzerland) with various regulations and views on how to manage the environment, EDF undertook an integrated environmental approach, which took 10 years to develop, instead of a strict 'impact/mitigation' balance. The project therefore acts simultaneously on complementary compartments of the aquatic, riparian and terrestrial environment, to benefit from the synergies that exist between them; a new power plant (8.5 MW, 28 GWh/y) is being built to limit the energy losses and to ensure various functions, thereby increasing the overall environmental gain. (authors)

  3. A cellular automata approach to estimate incident-related travel time on Interstate 66 in near real time : final contract report.

    Science.gov (United States)

    2010-03-01

    Incidents account for a large portion of all congestion and a need clearly exists for tools to predict and estimate incident effects. This study examined (1) congestion back propagation to estimate the length of the queue and travel time from upstrea...

  4. Study of structural reliability of existing concrete structures

    Science.gov (United States)

    Druķis, P.; Gaile, L.; Valtere, K.; Pakrastiņš, L.; Goremikins, V.

    2017-10-01

Structural reliability of buildings has become an important issue since the collapse of a shopping centre in Riga on 21 November 2013, which caused the death of 54 people. The reliability of a building is the practice of designing, constructing, operating, maintaining and removing buildings in ways that safeguard health and ward off injuries or death arising from use of the building. Evaluation and improvement of existing buildings is becoming more and more important. For a large part of existing buildings, the design life has been reached or will be reached in the near future. The structures of these buildings need to be reassessed in order to find out whether the safety requirements are met. The safety requirements provided by the Eurocodes are a starting point for the assessment of safety. However, it would be uneconomical to require all existing buildings and structures to comply fully with these new codes and their corresponding safety levels; therefore, the assessment of existing buildings differs with each design situation. This case study describes a simple and practical procedure for determining the minimal reliability index β of existing concrete structures designed to codes other than the Eurocodes, and allows reassessment of the actual reliability level of different structural elements of existing buildings under design load.
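
The reliability index β mentioned in the abstract relates to the probability of failure Pf through the standard normal distribution: Pf = Φ(-β), so β = -Φ⁻¹(Pf). A minimal sketch of this conversion (my illustration, not the paper's procedure), using Python's standard-library `statistics.NormalDist`:

```python
from statistics import NormalDist

_N = NormalDist()  # standard normal distribution

def failure_probability(beta):
    """Pf corresponding to a reliability index beta: Pf = Phi(-beta)."""
    return _N.cdf(-beta)

def reliability_index(pf):
    """beta corresponding to a failure probability pf: beta = -Phi^{-1}(pf)."""
    return -_N.inv_cdf(pf)

# The commonly cited Eurocode EN 1990 target for a 50-year reference
# period (consequence class CC2) is beta = 3.8, i.e. Pf around 7e-5.
pf = failure_probability(3.8)
beta = reliability_index(pf)
```

Assessment of an existing structure then amounts to comparing the β back-calculated from its estimated Pf against such a (possibly reduced) target value.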

  5. Improving the computation efficiency of COBRA-TF for LWR safety analysis of large problems

    International Nuclear Information System (INIS)

    Cuervo, D.; Avramova, M. N.; Ivanov, K. N.

    2004-01-01

A matrix solver is implemented in COBRA-TF in order to improve the computational efficiency of both numerical solution methods existing in the code, Gauss elimination and the Gauss-Seidel iterative technique. Both methods are used to solve the system of pressure linear equations and rely on the solution of large sparse matrices. The introduced solver accelerates the solution of these matrices in cases with a large number of cells. For cases with large matrices, the execution time is reduced by half compared to the execution time without the matrix solver. The achieved improvement and the planned future work in this direction are important for performing efficient LWR safety analyses of large problems. (authors)
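
A minimal sketch of the Gauss-Seidel iterative technique the abstract refers to (an illustration, not COBRA-TF code): the sparse matrix is stored as per-row dicts so only nonzero entries are visited, which is what makes iterative methods attractive for large sparse pressure systems.

```python
def gauss_seidel(rows, b, x0=None, tol=1e-10, max_iter=1000):
    """Solve A x = b by Gauss-Seidel sweeps.

    rows[i] is a dict {j: A_ij} holding the nonzero entries of row i;
    requires nonzero diagonal entries (and converges for diagonally
    dominant systems, typical of discretized pressure equations)."""
    n = len(b)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(max_iter):
        max_delta = 0.0
        for i in range(n):
            # Sum over off-diagonal nonzeros only (sparse traversal).
            s = sum(a_ij * x[j] for j, a_ij in rows[i].items() if j != i)
            new_xi = (b[i] - s) / rows[i][i]
            max_delta = max(max_delta, abs(new_xi - x[i]))
            x[i] = new_xi
        if max_delta < tol:
            break
    return x

# Small diagonally dominant tridiagonal system with exact solution [1,1,1,1].
rows = [
    {0: 4.0, 1: -1.0},
    {0: -1.0, 1: 4.0, 2: -1.0},
    {1: -1.0, 2: 4.0, 3: -1.0},
    {2: -1.0, 3: 4.0},
]
b = [3.0, 2.0, 2.0, 3.0]
x = gauss_seidel(rows, b)
```

Direct Gauss elimination fills in zeros during factorization; the sweep above touches only stored nonzeros, which is the source of the speedup on large sparse cases.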

  6. Large Efficient Intelligent Heating Relay Station System

    Science.gov (United States)

    Wu, C. Z.; Wei, X. G.; Wu, M. Q.

    2017-12-01

The design of a large efficient intelligent heating relay station system aims to remedy the shortcomings of the existing heating systems in our country, such as low heating efficiency, energy waste, serious pollution, and continued dependence on manual control. In this design, we first improve the existing plate heat exchanger. Secondly, an AT89C51 microcontroller is used to control the whole system and realize intelligent control. The detection section uses a PT100 temperature sensor, a pressure sensor, and a turbine flowmeter to monitor the heating temperature, user-end liquid flow, and hydraulic pressure, with real-time feedback of the signals to the microcontroller, which adjusts the heating supplied to users, making the whole system more efficient, intelligent and energy-saving.

  7. Wigner time-delay distribution in chaotic cavities and freezing transition.

    Science.gov (United States)

    Texier, Christophe; Majumdar, Satya N

    2013-06-21

Using the joint distribution for proper time delays of a chaotic cavity derived by Brouwer, Frahm, and Beenakker [Phys. Rev. Lett. 78, 4737 (1997)], we obtain, in the limit of a large number of channels N, the large deviation function for the distribution of the Wigner time delay (the sum of the proper times) by a Coulomb gas method. We show that the existence of a power-law tail originates from narrow resonance contributions, related to a (second order) freezing transition in the Coulomb gas.

  8. On the existence of physiological age based on functional hierarchy: a formal definition related to time irreversibility.

    Science.gov (United States)

    Chauvet, Gilbert A

    2006-09-01

The present approach to aging and time irreversibility is a consequence of the theory of functional organization that I have developed and presented over recent years (see e.g., Ref. 11). It is based on the effect of physically small and numerous perturbations, known as fluctuations, of structural units on the dynamics of the biological system during its adult life. Because a biological system is highly regulated, a simple realistic hypothesis, the time-optimum regulation between the levels of organization, leads to the existence of an internal age for the biological system, and to the time-irreversibility associated with aging. Thus, although specific genes control aging, the time-irreversibility of the system may be shown to be due to the degradation of physiological functions. In other words, I suggest that for a biological system the nature of time is specific and is an expression of its highly regulated integration. An internal physiological age reflects the irreversible course of a living organism towards death, because of the irreversible course of physiological functions towards dysfunction, due to the irreversible changes in the regulatory processes. Following the work of Prigogine and his colleagues in physics, and more generally in the field of non-integrable dynamical systems (the Poincaré-Misra theorem), I have stated this problem in terms of the relationship between the macroscopic irreversibility of the functional organization and the basic mechanisms of regulation at the lowest, "microscopic" level, i.e., the molecular level of organization. The neuron-neuron elementary functional interaction is proposed as an illustration of the method to define aging in the nervous system.

  9. Controls for the CERN large hadron collider (LHC)

    International Nuclear Information System (INIS)

    Kissler, K.H.; Perriollat, F.; Rabany, M.; Shering, G.

    1992-01-01

CERN's planned large superconducting collider project presents several new challenges to the control system. These are discussed along with current thinking as to how they can be met. The high-field superconducting magnets are subject to 'persistent currents' which will require real-time measurement and control using a mathematical model on a 2-10 second time interval. This may be realized using direct links, multiplexed using TDM, between the field equipment and central servers. Quench control and avoidance will make new demands on speed of response, reliability and surveillance. The integration of large quantities of industrially controlled equipment will be important. Much of the control system will be in common with LEP, so a seamless integration of LHC and LEP controls will be sought. A very large amount of new high-tech equipment will have to be tested, assembled and installed in the LEP tunnel in a short time. The manpower and cost constraints will be much tighter than previously. New approaches will have to be found to solve many of these problems, with the additional constraint of integrating them into an existing framework. (author)

  10. Large Scale Community Detection Using a Small World Model

    Directory of Open Access Journals (Sweden)

    Ranjan Kumar Behera

    2017-11-01

Full Text Available In a social network, small or large communities within the network play a major role in deciding the functionalities of the network. Despite diverse definitions, communities in a network may be defined as groups of nodes that are more densely connected to each other than to nodes outside the group. Revealing such hidden communities is a challenging research problem. A real-world social network follows the small-world phenomenon, which indicates that any two social entities can be reached in a small number of steps. In this paper, nodes are mapped into communities based on random walks in the network. However, uncovering communities in large-scale networks is a challenging task due to the unprecedented growth in the size of social networks. A good number of community detection algorithms based on random walks exist in the literature, and when large-scale social networks are considered, these algorithms are observed to take considerably longer times. In this work, with the objective of improving efficiency, a parallel programming framework, MapReduce, has been employed to uncover the hidden communities in social networks. The proposed approach has been compared with some standard existing community detection algorithms on both synthetic and real-world datasets in order to examine its performance, and it is observed that the proposed algorithm is more efficient than the existing ones.
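
The random-walk intuition behind such algorithms (nodes whose short random walks visit similar neighborhoods belong to the same community) can be sketched deterministically; this is my simplified illustration, not the paper's MapReduce algorithm. Nodes whose exact t-step walk distributions are close in L1 distance are merged into one community.

```python
def walk_distribution(adj, start, t):
    """Exact t-step random-walk distribution from `start` (no sampling):
    at each step, probability mass spreads uniformly over neighbors."""
    dist = {start: 1.0}
    for _ in range(t):
        nxt = {}
        for node, p in dist.items():
            nbrs = adj[node]
            for m in nbrs:
                nxt[m] = nxt.get(m, 0.0) + p / len(nbrs)
        dist = nxt
    return dist

def l1(p, q):
    keys = set(p) | set(q)
    return sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

def communities(adj, t=2, threshold=1.0):
    """Merge nodes whose t-step walk distributions are closer than
    `threshold` (union-find), then report the resulting groups."""
    nodes = sorted(adj)
    dists = {v: walk_distribution(adj, v, t) for v in nodes}
    parent = {v: v for v in nodes}
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            if l1(dists[u], dists[v]) < threshold:
                parent[find(u)] = find(v)
    groups = {}
    for v in nodes:
        groups.setdefault(find(v), set()).add(v)
    return sorted(sorted(g) for g in groups.values())

# Toy graph: two triangles {0,1,2} and {3,4,5} joined by the edge 2-3.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
result = communities(adj)   # separates the two triangles
```

The all-pairs comparison is the quadratic bottleneck that, at social-network scale, motivates distributing the per-node walk computations over a MapReduce-style framework.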

  11. Large area spark counters with fine time and position resolution

    International Nuclear Information System (INIS)

    Ogawa, A.; Atwood, W.B.; Fujiwara, N.; Pestov, Yu.N.; Sugahara, R.

    1983-10-01

    Spark counters trace their history back over three decades but have been used in only a limited number of experiments. The key properties of these devices include their capability of precision timing (at the sub 100 ps level) and of measuring the position of the charged particle to high accuracy. At SLAC we have undertaken a program to develop these devices for use in high energy physics experiments involving large detectors. A spark counter of size 1.2 m x 0.1 m has been constructed and has been operating continuously in our test setup for several months. In this talk I will discuss some details of its construction and its properties as a particle detector. 14 references

  12. Large, real time detectors for solar neutrinos and magnetic monopoles

    International Nuclear Information System (INIS)

    Gonzalez-Mestres, L.

    1990-01-01

    We discuss the present status of superheated superconducting granules (SSG) development for the real time detection of magnetic monopoles of any speed and of low energy solar neutrinos down to the pp region (indium project). Basic properties of SSG and progress made in the recent years are briefly reviewed. Possible ways for further improvement are discussed. The performances reached in ultrasonic grain production at ∼ 100 μm size, as well as in conventional read-out electronics, look particularly promising for a large scale monopole experiment. Alternative approaches are briefly dealt with: induction loops for magnetic monopoles; scintillators, semiconductors or superconducting tunnel junctions for a solar neutrino detector based on an indium target

  13. Existence domains of dust-acoustic solitons and supersolitons

    International Nuclear Information System (INIS)

    Maharaj, S. K.; Bharuthram, R.; Singh, S. V.; Lakhina, G. S.

    2013-01-01

    Using the Sagdeev potential method, the existence of large amplitude dust-acoustic solitons and supersolitons is investigated in a plasma comprising cold negative dust, adiabatic positive dust, Boltzmann electrons, and non-thermal ions. This model supports the existence of positive potential supersolitons in a certain region in parameter space in addition to regular solitons having negative and positive potentials. The lower Mach number limit for supersolitons coincides with the occurrence of double layers whereas the upper limit is imposed by the constraint that the adiabatic positive dust number density must remain real valued. The upper Mach number limits for negative potential (positive potential) solitons coincide with limiting values of the negative (positive) potential for which the negative (positive) dust number density is real valued. Alternatively, the existence of positive potential solitons can terminate when positive potential double layers occur

  14. THE EXISTENCE OF THE STABILIZING SOLUTION OF THE RICCATI EQUATION ARISING IN DISCRETE-TIME STOCHASTIC ZERO SUM LQ DYNAMIC GAMES WITH PERIODIC COEFFICIENTS

    Directory of Open Access Journals (Sweden)

Vasile Drăgan

    2017-06-01

Full Text Available We investigate the problem of solving a discrete-time periodic generalized Riccati equation with an indefinite sign of the quadratic term. A necessary condition for the existence of a bounded and stabilizing solution of the discrete-time Riccati equation with an indefinite quadratic term is derived. The stabilizing solution is positive semidefinite and satisfies the introduced sign conditions. The proposed condition is illustrated via a numerical example.

  15. Tachyons imply the existence of a privileged frame

    Energy Technology Data Exchange (ETDEWEB)

    Sjoedin, T.; Heylighen, F.

    1985-12-16

    It is shown that the existence of faster-than-light signals (tachyons) would imply the existence (and detectability) of a privileged inertial frame and that one can avoid all problems with reversed-time order only by using absolute synchronization instead of the standard one. The connection between these results and the EPR-paradox is discussed.

  16. Hierarchical 2.5D scene alignment for change detection with large viewpoint differences

    NARCIS (Netherlands)

    van de Wouw, D.; Dubbelman, G.; de With, P.H.N.

    2016-01-01

    Change detection from mobile platforms is a relevant topic in the field of intelligent vehicles and has many applications, such as countering improvised explosive devices (C-IED). Existing real-time C-IED systems are not robust against large viewpoint differences, which are unavoidable under

  17. Event processing time prediction at the CMS experiment of the Large Hadron Collider

    International Nuclear Information System (INIS)

    Cury, Samir; Gutsche, Oliver; Kcira, Dorian

    2014-01-01

The physics event reconstruction is one of the biggest challenges for the computing of the LHC experiments. Among the different tasks that the computing systems of the CMS experiment perform, reconstruction takes most of the available CPU resources. The reconstruction time of single collisions varies according to event complexity. Measurements were made in order to determine this correlation quantitatively, creating means to predict it based on the data-taking conditions of the input samples. Currently the data processing system splits tasks into groups with the same number of collisions and does not account for variations in the processing time. These variations can be large and can lead to a considerable increase in the time it takes for CMS workflows to finish. The goal of this study was to use estimates of processing time to split the workflow into jobs more efficiently. By considering the CPU time needed for each job, the spread of the job-length distribution in a workflow is reduced.
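
The splitting idea can be sketched as follows (a minimal illustration of time-based splitting in general, not CMS production code): instead of fixed-size groups of collisions, consecutive work units are accumulated greedily until an estimated CPU-time budget is reached, which narrows the job-length spread.

```python
def split_by_time(unit_times, budget):
    """Group consecutive work units into jobs so that each job's
    estimated CPU time stays within `budget` (greedy accumulation)."""
    jobs, current, current_time = [], [], 0.0
    for i, t in enumerate(unit_times):
        if current and current_time + t > budget:
            jobs.append(current)          # close the current job
            current, current_time = [], 0.0
        current.append(i)                 # unit index goes into this job
        current_time += t
    if current:
        jobs.append(current)
    return jobs

# Assumed per-unit processing-time estimates (seconds); complexity, and
# hence time, varies with data-taking conditions.
times = [10, 10, 40, 5, 5, 30, 10, 10, 20]
jobs = split_by_time(times, budget=40)
job_times = [sum(times[i] for i in job) for job in jobs]
```

With equal-count splitting (three units per job) the same workload would yield job times of 60, 40 and 40 seconds; the time-based split caps every job at the 40-second budget.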

  18. The Existence of Public Protection Unit

    Directory of Open Access Journals (Sweden)

    Moh. Ilham A. Hamudy

    2014-12-01

Full Text Available This article is about the Public Protection Unit (Satlinmas), formerly known as the civil defence (Hansip). It summarizes the results of a desk study and fieldwork conducted in October-November 2013 in the towns of Magelang and Surabaya. The study used a descriptive qualitative approach to explore the role and existence of Satlinmas. The results showed that the existence of Satlinmas still leaves many problems. First, the legal basis for its establishment: until now, there have been no new regulations governing Satlinmas, and the existing regulations are too weak and cannot keep up with the times. Second, the formulation of the concepts, basic tasks and functions of Satlinmas overlaps with those of other institutions. Third, the image of Satlinmas in society tends to fade and be abused. Fourth, the incorporation of Satlinmas into the Municipal Police is deemed inappropriate, because the two have different philosophies.

  19. Large-scale building energy efficiency retrofit: Concept, model and control

    International Nuclear Information System (INIS)

    Wu, Zhou; Wang, Bo; Xia, Xiaohua

    2016-01-01

BEER (building energy efficiency retrofit) projects have been initiated in many nations and regions around the world. Existing studies of BEER focus on modeling and planning based on one building and a one-year retrofitting period, which cannot be applied to large BEER projects with multiple buildings and multi-year retrofits. In this paper, the large-scale BEER problem is defined in a general TBT (time-building-technology) framework, which fits the essential requirements of real-world projects. Large-scale BEER is newly studied via a control approach rather than the optimization approach commonly used before. Optimal control is proposed to design the optimal retrofitting strategy in terms of maximal energy savings and maximal NPV (net present value). The designed strategy changes dynamically along the dimensions of time, building and technology. The TBT framework and the optimal control approach are verified on a large BEER project, and results indicate that promising energy and cost savings can be achieved in the general TBT framework. - Highlights: • Energy efficiency retrofit of many buildings is studied. • A TBT (time-building-technology) framework is proposed. • The control system of the large-scale BEER is modeled. • The optimal retrofitting strategy is obtained.

  20. Automatic Optimization for Large-Scale Real-Time Coastal Water Simulation

    Directory of Open Access Journals (Sweden)

    Shunli Wang

    2016-01-01

Full Text Available We introduce an automatic optimization approach for the simulation of large-scale coastal water. To overcome the singularity problem of water waves obtained with the traditional model, a hybrid deep-shallow-water model is estimated using an automatic coupling algorithm. It can handle arbitrary water depths and different underwater terrain. The coastline, a salient feature of coastal terrain, is detected with collision detection technology. Then, unnecessary water grid cells are removed by the automatic simplification algorithm according to depth. Finally, the model is calculated on the Central Processing Unit (CPU) and the simulation is implemented on the Graphics Processing Unit (GPU). We show the effectiveness of our method with various results which achieve real-time rendering on a consumer-level computer.

  1. Priority setting for existing chemicals : automated data selection routine

    NARCIS (Netherlands)

    Haelst, A.G. van; Hansen, B.G.

    2000-01-01

    One of the four steps within Council Regulation 793/93/EEC on the evaluation and control of existing chemicals is the priority setting step. The priority setting step is concerned with selecting high-priority substances from a large number of substances, initially starting with 2,474

  2. The morphodynamics and sedimentology of large river confluences

    Science.gov (United States)

    Nicholas, Andrew; Sambrook Smith, Greg; Best, James; Bull, Jon; Dixon, Simon; Goodbred, Steven; Sarker, Mamin; Vardy, Mark

    2017-04-01

Confluences are key locations within large river networks, yet surprisingly little is known about how they migrate and evolve through time. Moreover, because confluence sites are associated with scour pools that are typically several times the mean channel depth, the deposits associated with such scours should have a high potential for preservation within the rock record. However, paradoxically, such scours are rarely observed, and the sedimentological characteristics of such deposits are poorly understood. This study reports results from a physically-based morphodynamic model, which is applied to simulate the evolution and resulting alluvial architecture associated with large river junctions. Boundary conditions within the model simulation are defined to approximate the junction of the Ganges and Jamuna rivers, in Bangladesh. Model results are supplemented by geophysical datasets collected during boat-based surveys at this junction. Simulated deposit characteristics and geophysical datasets are compared with three existing and contrasting conceptual models that have been proposed to represent the sedimentary architecture of confluence scours. Results illustrate that existing conceptual models may be overly simplistic, although elements of each of the three conceptual models are evident in the deposits generated by the numerical simulation. The latter are characterised by several distinct styles of sedimentary fill, which can be linked to particular morphodynamic behaviours. However, the preserved characteristics of simulated confluence deposits vary substantially according to the degree of reworking by channel migration. This may go some way towards explaining the confluence scour paradox; while abundant large scours might be expected in the rock record, they are rarely reported.

  3. 40 CFR 60.2992 - What is an existing incineration unit?

    Science.gov (United States)

    2010-07-01

40 CFR Title 40 (Protection of Environment); ...Times for Other Solid Waste Incineration Units That Commenced Construction On or Before December 9, 2004; Applicability of State Plans; § 60.2992 What is an existing incineration unit? An existing incineration unit is...

  4. Leisure Time Invention

    DEFF Research Database (Denmark)

    Davis, Lee N.; Davis, Jerome D.; Hoisl, Karin

    2013-01-01

This paper studies the contextual factors that influence whether invention occurs during work time or leisure time. Leisure time invention, a potentially important but thus far largely unexplored source of employee creativity, refers to invention where the main underlying idea occurs while the employee is away from the workplace. We build on existing theory in the fields of organizational creativity and knowledge recombination, especially work relating context to creativity. The paper's main theoretical contribution is to extend our understanding of the boundaries of employee creativity by adding to the discussion of how access to and exploitation of different types of resources, during work hours or during leisure time, may affect creativity. Based on survey data from more than 3,000 inventions from German employee inventors, we find that leisure time inventions are more frequently...

  5. A Novel Spatial-Temporal Voronoi Diagram-Based Heuristic Approach for Large-Scale Vehicle Routing Optimization with Time Constraints

    Directory of Open Access Journals (Sweden)

    Wei Tu

    2015-10-01

Full Text Available Vehicle routing optimization (VRO) designs the best routes to reduce travel cost, energy consumption, and carbon emissions. Due to their non-deterministic polynomial-time hard (NP-hard) complexity, many VROs involved in real-world applications require too much computing effort. Shortening the computing time for VRO is a great challenge for state-of-the-art spatial optimization algorithms. From a spatial-temporal perspective, this paper presents a spatial-temporal Voronoi diagram-based heuristic approach for large-scale vehicle routing problems with time windows (VRPTW). Considering time constraints, a spatial-temporal Voronoi distance is derived from the spatial-temporal Voronoi diagram to find near neighbors in the space-time searching context. A Voronoi distance decay strategy that integrates a time warp operation is proposed to accelerate local search procedures. A spatial-temporal feature-guided search is developed to improve unpromising micro route structures. Experiments on VRPTW benchmarks and real-world instances are conducted to verify performance. The results demonstrate that the proposed approach is competitive with state-of-the-art heuristics and achieves high-quality solutions for large-scale instances of VRPTW in a short time. This novel approach will contribute to the spatial decision support community by providing an effective vehicle routing optimization method for large transportation applications in both the public and private sectors.
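
The idea of a distance that is "near" in both space and feasible service time can be sketched as follows (an assumed form for illustration, not the paper's exact definition): travel time is combined with a waiting term for arriving before a customer's time window opens and a heavy "time warp" penalty for arriving after it closes; the `warp_cost` weight is an assumption of this sketch.

```python
import math

def st_distance(pos_a, depart_time, pos_b, window_b,
                speed=1.0, warp_cost=100.0):
    """Spatial-temporal distance from a vehicle at pos_a (departing at
    depart_time) to customer pos_b with service window window_b."""
    travel = math.dist(pos_a, pos_b) / speed
    arrival = depart_time + travel
    open_t, close_t = window_b
    if arrival < open_t:                  # early: wait until the window opens
        return travel + (open_t - arrival)
    if arrival > close_t:                 # late: heavily penalized "time warp"
        return travel + warp_cost * (arrival - close_t)
    return travel                         # arrival inside the window

# Customer B is spatially closer but its window has already closed;
# customer C is farther but feasible, so C is the better neighbor.
now = 50.0
d_b = st_distance((0, 0), now, (3, 4), window_b=(0, 40))   # 5 away, 15 late
d_c = st_distance((0, 0), now, (6, 8), window_b=(55, 90))  # 10 away, on time
```

Under a purely spatial Voronoi distance B would be the nearest neighbor; the temporal term reverses the ranking, which is what lets the heuristic prune time-infeasible candidates during local search.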

  6. Existence and non-existence for the full thermomechanical Souza–Auricchio model of shape memory wires

    Czech Academy of Sciences Publication Activity Database

    Krejčí, Pavel; Stefanelli, U.

    2011-01-01

    Roč. 16, č. 4 (2011), s. 349-365 ISSN 1081-2865 R&D Projects: GA ČR GAP201/10/2315 Institutional research plan: CEZ:AV0Z10190503 Keywords : shape memory alloys * thermomechanics * existence result * blowup in finite time Subject RIV: BA - General Mathematics Impact factor: 1.012, year: 2011 http://mms.sagepub.com/content/early/2011/03/11/1081286510386935.abstract

  8. Existence of nash equilibrium in competitive nonlinear pricing games with adverse selection

    OpenAIRE

    Monteiro, P. K.

    2003-01-01

We show that for a large class of competitive nonlinear pricing games with adverse selection, the property of better-reply security is naturally satisfied, thus resolving, via a result due to Reny (1999), the issue of existence of Nash equilibrium for this class of games.

  9. Large homogeneity ranges in the rare earth hydrides: a fiction to be revised

    International Nuclear Information System (INIS)

    Conder, K.; Longmei Wang; Boroch, E.; Kaldis, E.

    1991-01-01

    A large composition range of the solid solutions LnH2-LnH3 (Ln=La, Ce) has been assumed for a long time. The structure of these solutions was believed to be cubic Fm3m, with H atoms occupying tetrahedral and octahedral interstitial sites. Using X-ray diffraction and differential scanning calorimetry we have shown the existence of a large number of phases in both systems; phase diagrams (T, x) are presented

  10. Global existence and nonexistence for the viscoelastic wave equation with nonlinear boundary damping-source interaction

    KAUST Repository

    Said-Houari, Belkacem

    2012-09-01

    The goal of this work is to study a model of the viscoelastic wave equation with nonlinear boundary/interior sources and a nonlinear interior damping. First, we apply the Faedo-Galerkin approximations combined with the compactness method to obtain the existence of regular global solutions to an auxiliary problem with globally Lipschitz source terms and with initial data in the potential well. It is important to emphasize that density arguments cannot be used to pass from regular to weak solutions of our problem when the source terms are only locally Lipschitz functions. To overcome this difficulty, we use an approximation method involving truncated sources and adapt the ideas in [13] to show that the existence of weak solutions can still be obtained for our problem. Second, we show that, under some restrictions on the initial data, if the interior source dominates the interior damping term then the solution ceases to exist and blows up in finite time provided that the initial data are large enough.

  11. Global existence and nonexistence for the viscoelastic wave equation with nonlinear boundary damping-source interaction

    KAUST Repository

    Said-Houari, Belkacem; Nascimento, Flávio A. Falcão

    2012-01-01

    The goal of this work is to study a model of the viscoelastic wave equation with nonlinear boundary/interior sources and a nonlinear interior damping. First, we apply the Faedo-Galerkin approximations combined with the compactness method to obtain the existence of regular global solutions to an auxiliary problem with globally Lipschitz source terms and with initial data in the potential well. It is important to emphasize that density arguments cannot be used to pass from regular to weak solutions of our problem when the source terms are only locally Lipschitz functions. To overcome this difficulty, we use an approximation method involving truncated sources and adapt the ideas in [13] to show that the existence of weak solutions can still be obtained for our problem. Second, we show that, under some restrictions on the initial data, if the interior source dominates the interior damping term then the solution ceases to exist and blows up in finite time provided that the initial data are large enough.

  12. Large scale mapping of groundwater resources using a highly integrated set of tools

    DEFF Research Database (Denmark)

    Søndergaard, Verner; Auken, Esben; Christiansen, Anders Vest

    large areas with information from an optimum number of new investigation boreholes, existing boreholes, logs and water samples to get an integrated and detailed description of the groundwater resources and their vulnerability. … Development of more time-efficient, airborne geophysical data acquisition platforms (e.g. SkyTEM) has made large-scale mapping attractive and affordable in the planning and administration of groundwater resources. The handling and optimized use of huge amounts of geophysical data covering large areas has also required a comprehensive database, where data can easily be stored …

  13. Analyzing the security of an existing computer system

    Science.gov (United States)

    Bishop, M.

    1986-01-01

    Most work concerning secure computer systems has dealt with the design, verification, and implementation of provably secure computer systems, or has explored ways of making existing computer systems more secure. The problem of locating security holes in existing systems has received considerably less attention; methods generally rely on thought experiments as a critical step in the procedure. The difficulty is that such experiments require that a large amount of information be available in a format that makes correlating the details of various programs straightforward. This paper describes a method of providing such a basis for the thought experiment by writing a special manual for parts of the operating system, system programs, and library subroutines.

  14. Stabilizing the long-time behavior of the forced Navier-Stokes and damped Euler systems by large mean flow

    Science.gov (United States)

    Cyranka, Jacek; Mucha, Piotr B.; Titi, Edriss S.; Zgliczyński, Piotr

    2018-04-01

    The paper studies the issue of stability of solutions to the forced Navier-Stokes and damped Euler systems in periodic boxes. It is shown that for large, but fixed, Grashof (Reynolds) number the turbulent behavior of all Leray-Hopf weak solutions of the three-dimensional Navier-Stokes equations, in a periodic box, is suppressed, when viewed in the right frame of reference, by a large enough average flow of the initial data; a phenomenon that is similar in spirit to the Landau damping. Specifically, we consider initial data with a large enough spatial average; then, by means of the Galilean transformation, and thanks to the periodic boundary conditions, the large time-independent forcing term changes into a highly oscillatory force, which allows us to employ averaging principles to establish our result. Moreover, we also show that under the action of fast oscillatory-in-time external forces all two-dimensional regular solutions of the Navier-Stokes and the damped Euler equations converge to a unique time-periodic solution.
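    The Galilean mechanism described here can be made explicit; the display below is a sketch in assumed standard notation (u velocity, U the spatial average of the initial data, f the forcing), not an excerpt from the paper:

```latex
\text{Set } v(x,t) = u(x + Ut,\, t) - U. \quad \text{Then } v \text{ satisfies}
\qquad
\partial_t v + (v \cdot \nabla) v - \nu \Delta v + \nabla p = f(x + Ut),
\qquad
f(x + Ut) = \sum_{k} \hat f_k \, e^{i k \cdot U t} \, e^{i k \cdot x},
```

    so in the moving frame each Fourier mode of the time-independent force oscillates in time with frequency k · U; for large |U| this is a highly oscillatory force, to which averaging principles apply.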

  15. Effective diffusion in time-periodic linear planar flow

    International Nuclear Information System (INIS)

    Indeikina, A.; Chang, H.

    1993-01-01

    It is shown that when a point source of solute is inserted into a time-periodic, unbounded linear planar flow, the large-time, time-average transport of the solute can be described by classical anisotropic diffusion with constant effective diffusion tensors. For a given vorticity and forcing period, elongational flow is shown to be the most dispersive followed by simple shear and rotational flow. Large-time diffusivity along the major axis of the time-average concentration ellipse, whose alignment is predicted from the theory, is shown to increase with vorticity for all flows and decrease with increasing forcing frequency for elongational flow and simple shear. For the interesting case of rotational flow, there exist discrete resonant frequencies where the time-average major diffusivity reaches local maxima equal to the time-average steady flow case with zero forcing frequency

  16. Normal black holes in bulge-less galaxies: the largely quiescent, merger-free growth of black holes over cosmic time

    Science.gov (United States)

    Martin, G.; Kaviraj, S.; Volonteri, M.; Simmons, B. D.; Devriendt, J. E. G.; Lintott, C. J.; Smethurst, R. J.; Dubois, Y.; Pichon, C.

    2018-05-01

    Understanding the processes that drive the formation of black holes (BHs) is a key topic in observational cosmology. While the observed M_BH-M_Bulge correlation in bulge-dominated galaxies is thought to be produced by major mergers, the existence of an M_BH-M_⋆ relation, across all galaxy morphological types, suggests that BHs may be largely built by secular processes. Recent evidence that bulge-less galaxies, which are unlikely to have had significant mergers, are offset from the M_BH-M_Bulge relation, but lie on the M_BH-M_⋆ relation, has strengthened this hypothesis. Nevertheless, the small size and heterogeneity of current data sets, coupled with the difficulty in measuring precise BH masses, make it challenging to address this issue using empirical studies alone. Here, we use Horizon-AGN, a cosmological hydrodynamical simulation, to probe the role of mergers in BH growth over cosmic time. We show that (1) as suggested by observations, simulated bulge-less galaxies lie offset from the main M_BH-M_Bulge relation, but on the M_BH-M_⋆ relation, (2) the positions of galaxies on the M_BH-M_⋆ relation are not affected by their merger histories, and (3) only ˜35 per cent of the BH mass in today's massive galaxies is directly attributable to merging - the majority (˜65 per cent) of BH growth, therefore, takes place gradually, via secular processes, over cosmic time.

  17. Black holes from large N singlet models

    Science.gov (United States)

    Amado, Irene; Sundborg, Bo; Thorlacius, Larus; Wintergerst, Nico

    2018-03-01

    The emergent nature of spacetime geometry and black holes can be directly probed in simple holographic duals of higher spin gravity and tensionless string theory. To this end, we study time dependent thermal correlation functions of gauge invariant observables in suitably chosen free large N gauge theories. At low temperature and on short time scales the correlation functions encode propagation through an approximate AdS spacetime while interesting departures emerge at high temperature and on longer time scales. This includes the existence of evanescent modes and the exponential decay of time dependent boundary correlations, both of which are well known indicators of bulk black holes in AdS/CFT. In addition, a new time scale emerges after which the correlation functions return to a bulk thermal AdS form up to an overall temperature dependent normalization. A corresponding length scale was seen in equal time correlation functions in the same models in our earlier work.

  18. Real-time graphic display system for ROSA-V Large Scale Test Facility

    International Nuclear Information System (INIS)

    Kondo, Masaya; Anoda, Yoshinari; Osaki, Hideki; Kukita, Yutaka; Takigawa, Yoshio.

    1993-11-01

    A real-time graphic display system was developed for the ROSA-V Large Scale Test Facility (LSTF) experiments simulating accident management measures for prevention of severe core damage in pressurized water reactors (PWRs). The system works on an IBM workstation (Power Station RS/6000 model 560) and accommodates 512 channels out of about 2500 total measurements in the LSTF. It has three major functions: (a) displaying the coolant inventory distribution in the facility primary and secondary systems; (b) displaying the measured quantities at desired locations in the facility; and (c) displaying the time histories of measured quantities. The coolant inventory distribution is derived from differential pressure measurements along vertical sections and gamma-ray densitometer measurements for horizontal legs. The color display indicates liquid subcooling calculated from pressure and temperature at individual locations. (author)

  19. Global and exponential attractors of the three dimensional viscous primitive equations of large-scale moist atmosphere

    OpenAIRE

    You, Bo; Li, Fang

    2016-01-01

    This paper is concerned with the long-time behavior of solutions for the three dimensional viscous primitive equations of large-scale moist atmosphere. We prove the existence of a global attractor for the three dimensional viscous primitive equations of large-scale moist atmosphere by asymptotic a priori estimate and construct an exponential attractor by using the smoothing property of the semigroup generated by the three dimensional viscous primitive equations of large-scale moist atmosphere...

  20. Late-time cosmological phase transitions

    International Nuclear Information System (INIS)

    Schramm, D.N.

    1990-11-01

    It is shown that the potential galaxy formation and large-scale structure problems of objects existing at high redshifts (Z ≳ 5), structures existing on scales of 100 Mpc as well as velocity flows on such scales, and minimal microwave anisotropies (ΔT/T ≲ 10^-5) can be solved if the seeds needed to generate structure form in a vacuum phase transition after decoupling. It is argued that the basic physics of such a phase transition is no more exotic than that utilized in the more traditional GUT scale phase transitions, and that, just as in the GUT case, significant random gaussian fluctuations and/or topological defects can form. Scale lengths of ∼100 Mpc for large-scale structure as well as ∼1 Mpc for galaxy formation occur naturally. Possible support for new physics that might be associated with such a late-time transition comes from the preliminary results of the SAGE solar neutrino experiment, implying neutrino flavor mixing with values similar to those required for a late-time transition. It is also noted that a see-saw model for the neutrino masses might also imply a tau neutrino mass that is an ideal hot dark matter candidate. However, in general either hot or cold dark matter can be consistent with a late-time transition. 47 refs., 2 figs

  1. How uncertainty in socio-economic variables affects large-scale transport model forecasts

    DEFF Research Database (Denmark)

    Manzo, Stefano; Nielsen, Otto Anker; Prato, Carlo Giacomo

    2015-01-01

    A strategic task assigned to large-scale transport models is to forecast the demand for transport over long periods of time to assess transport projects. However, by modelling complex systems, transport models have an inherent uncertainty which increases over time. As a consequence, the longer the period forecasted, the less reliable is the forecasted model output. Describing uncertainty propagation patterns over time is therefore important in order to provide complete information to the decision makers. Among the existing literature, only few studies analyze uncertainty propagation patterns over …

  2. TORCH: A Large-Area Detector for Precision Time-of-Flight Measurements at LHCb

    CERN Document Server

    Harnew, N

    2012-01-01

    The TORCH (Time Of internally Reflected CHerenkov light) is an innovative high-precision time-of-flight detector which is suitable for large areas, up to tens of square metres, and is being developed for the upgraded LHCb experiment. The TORCH provides a time-of-flight measurement from the imaging of photons emitted in a 1 cm thick quartz radiator, based on the Cherenkov principle. The photons propagate by total internal reflection to the edge of the quartz plane and are then focused onto an array of Micro-Channel Plate (MCP) photon detectors at the periphery of the detector. The goal is to achieve a timing resolution of 15 ps per particle over a flight distance of 10 m. This will allow particle identification in the challenging momentum region up to 20 GeV/c. Commercial MCPs have been tested in the laboratory and demonstrate the required timing precision. An electronics readout system based on the NINO and HPTDC chipset is being developed to evaluate an 8×8 channel TORCH prototype. The simulated performance...

  3. Physics with large extra dimensions

    Indian Academy of Sciences (India)

    can then be accounted for by the existence of large internal dimensions, in the sub- … strongly coupled heterotic theory with one large dimension is described by a weakly … one additional U(1) factor corresponding to an extra 'U(1)' D-brane is …

  4. Estimating clinical chemistry reference values based on an existing data set of unselected animals.

    Science.gov (United States)

    Dimauro, Corrado; Bonelli, Piero; Nicolussi, Paola; Rassu, Salvatore P G; Cappio-Borlino, Aldo; Pulina, Giuseppe

    2008-11-01

    In an attempt to standardise the determination of biological reference values, the International Federation of Clinical Chemistry (IFCC) has published a series of recommendations on developing reference intervals. The IFCC recommends the use of an a priori sampling of at least 120 healthy individuals. However, collecting and analysing such a large number of samples is expensive, time-consuming and not always feasible, especially in veterinary medicine. In this paper, an alternative (a posteriori) method is described and is used to determine reference intervals for biochemical parameters of farm animals using an existing laboratory data set. The method used was based on the detection and removal of outliers to obtain a large sample of animals likely to be healthy from the existing data set. This allowed the estimation of reliable reference intervals for biochemical parameters in Sarda dairy sheep. This method may also be useful for the determination of reference intervals for different species, ages and gender.
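    An a posteriori scheme of this kind can be sketched as follows; the outlier rule (iterated Tukey fences) and the 95% central interval are common choices, assumed here for illustration, and are not necessarily the exact procedure used by the authors.

```python
import numpy as np

def reference_interval(values, k=1.5, max_iter=10):
    """A-posteriori reference interval from unselected laboratory data:
    iteratively drop Tukey outliers (outside Q1 - k*IQR, Q3 + k*IQR) to keep
    the 'likely healthy' animals, then take the central 95% of what remains."""
    x = np.asarray(values, dtype=float)
    for _ in range(max_iter):
        q1, q3 = np.percentile(x, [25, 75])
        iqr = q3 - q1
        keep = (x >= q1 - k * iqr) & (x <= q3 + k * iqr)
        if keep.all():          # no outliers left: stop early
            break
        x = x[keep]
    # 2.5th and 97.5th percentiles of the retained sample
    return tuple(np.percentile(x, [2.5, 97.5]))
```

    Because the fences are recomputed after each removal pass, a small fraction of grossly abnormal results (e.g. from sick animals) does not inflate the final interval.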

  5. Global solubility of the three-dimensional Navier-Stokes equations with uniformly large initial vorticity

    International Nuclear Information System (INIS)

    Makhalov, A S; Nikolaenko, V P

    2003-01-01

    This paper is a survey of results concerning the three-dimensional Navier-Stokes and Euler equations with initial data characterized by uniformly large vorticity. The existence of regular solutions of the three-dimensional Navier-Stokes equations on an unbounded time interval is proved for large initial data both in R^3 and in bounded cylindrical domains. Moreover, the existence of smooth solutions on large finite time intervals is established for the three-dimensional Euler equations. These results are obtained without additional assumptions on the behaviour of solutions for t>0. Any smooth solution is not close to any two-dimensional manifold. Our approach is based on the computation of singular limits of rapidly oscillating operators, non-linear averaging, and a consideration of the mutual absorption of non-linear oscillations of the vorticity field. The use of resonance conditions, methods from the theory of small divisors, and non-linear averaging of almost periodic functions leads to the limit resonant Navier-Stokes equations. Global solubility of these equations is proved without any conditions on the three-dimensional initial data. The global regularity of weak solutions of three-dimensional Navier-Stokes equations with uniformly large vorticity at t=0 is proved by using the regularity of weak solutions and the strong convergence

  6. Safety Aspects of Sustainable Storage Dams and Earthquake Safety of Existing Dams

    Directory of Open Access Journals (Sweden)

    Martin Wieland

    2016-09-01

    Full Text Available The basic element in any sustainable dam project is safety, which includes the following safety elements: ① structural safety, ② dam safety monitoring, ③ operational safety and maintenance, and ④ emergency planning. Long-term safety primarily includes the analysis of all hazards affecting the project; that is, hazards from the natural environment, hazards from the man-made environment, and project-specific and site-specific hazards. The special features of the seismic safety of dams are discussed. Large dams were the first structures to be systematically designed against earthquakes, starting in the 1930s. However, the seismic safety of older dams is unknown, as most were designed using seismic design criteria and methods of dynamic analysis that are considered obsolete today. Therefore, we need to reevaluate the seismic safety of existing dams based on current state-of-the-art practices and rehabilitate deficient dams. For large dams, a site-specific seismic hazard analysis is usually recommended. Today, large dams and the safety-relevant elements used for controlling the reservoir after a strong earthquake must be able to withstand the ground motions of a safety evaluation earthquake. The ground motion parameters can be determined either by a probabilistic or a deterministic seismic hazard analysis. During strong earthquakes, inelastic deformations may occur in a dam; therefore, the seismic analysis has to be carried out in the time domain. Furthermore, earthquakes create multiple seismic hazards for dams such as ground shaking, fault movements, mass movements, and others. The ground motions needed by the dam engineer are not real earthquake ground motions but models of the ground motion, which allow the safe design of dams. It must also be kept in mind that dam safety evaluations must be carried out several times during the long life of large storage dams. These features are discussed in this paper.

  7. Vicious random walkers in the limit of a large number of walkers

    International Nuclear Information System (INIS)

    Forrester, P.J.

    1989-01-01

    The vicious random walker problem on a line is studied in the limit of a large number of walkers. The multidimensional integral representing the probability that the p walkers will survive a time t (denoted P_t^(p)) is shown to be analogous to the partition function of a particular one-component Coulomb gas. By assuming the existence of the thermodynamic limit for the Coulomb gas, one can deduce asymptotic formulas for P_t^(p) in the large-p, large-t limit. A straightforward analysis gives rigorous asymptotic formulas for the probability that after a time t the walkers are in their initial configuration (this event is termed a reunion). Consequently, asymptotic formulas for the conditional probability of a reunion, given that all walkers survive, are derived. Also, an asymptotic formula for the conditional probability density that any walker will arrive at a particular point in time t, given that all p walkers survive, is calculated in the limit t >> p

  8. Piloted simulator study of allowable time delays in large-airplane response

    Science.gov (United States)

    Grantham, William D.; et al.

    1987-01-01

    A piloted simulation was performed to determine the permissible time delay and phase shift in the flight control system of a specific large transport-type airplane. The study was conducted with a six-degree-of-freedom ground-based simulator and a math model similar to an advanced wide-body jet transport. Time delays in discrete and lagged form were incorporated into the longitudinal, lateral, and directional control systems of the airplane. Three experienced pilots flew simulated approaches and landings with random localizer and glide slope offsets during instrument tracking as their principal evaluation task. Results of the present study suggest a level 1 (satisfactory) handling qualities limit for the effective time delay of 0.15 sec in both the pitch and roll axes, as opposed to the 0.10-sec limit of the present specification (MIL-F-8785C) for both axes. Also, the present results suggest a level 2 (acceptable but unsatisfactory) handling qualities limit for an effective time delay of 0.82 sec and 0.57 sec for the pitch and roll axes, respectively, as opposed to 0.20 sec of the present specifications for both axes. In the area of phase shift between cockpit input and control surface deflection, the results of this study, flown in turbulent air, suggest less severe phase shift limitations for the approach and landing task - approximately 50 deg. in pitch and 40 deg. in roll - as opposed to 15 deg. of the present specifications for both axes.

  9. When David beats Goliath: the advantage of large size in interspecific aggressive contests declines over evolutionary time.

    Directory of Open Access Journals (Sweden)

    Paul R Martin

    Full Text Available Body size has long been recognized to play a key role in shaping species interactions. For example, while small species thrive in a diversity of environments, they typically lose aggressive contests for resources with larger species. However, numerous examples exist of smaller species dominating larger species during aggressive interactions, suggesting that the evolution of traits can allow species to overcome the competitive disadvantage of small size. If these traits accumulate as lineages diverge, then the advantage of large size in interspecific aggressive interactions should decline with increased evolutionary distance. We tested this hypothesis using data on the outcomes of 23,362 aggressive interactions among 246 bird species pairs involving vultures at carcasses, hummingbirds at nectar sources, and antbirds and woodcreepers at army ant swarms. We found the advantage of large size declined as species became more evolutionarily divergent, and smaller species were more likely to dominate aggressive contests when interacting with more distantly-related species. These results appear to be caused by both the evolution of traits in smaller species that enhanced their abilities in aggressive contests, and the evolution of traits in larger species that were adaptive for other functions, but compromised their abilities to compete aggressively. Specific traits that may provide advantages to small species in aggressive interactions included well-developed leg musculature and talons, enhanced flight acceleration and maneuverability, novel fighting behaviors, and traits associated with aggression, such as testosterone and muscle development. Traits that may have hindered larger species in aggressive interactions included the evolution of morphologies for tree trunk foraging that compromised performance in aggressive contests away from trunks, and the evolution of migration. Overall, our results suggest that fundamental trade-offs, such as those

  10. Inhibition of existing denitrification enzyme activity by chloramphenicol

    Science.gov (United States)

    Brooks, M.H.; Smith, R.L.; Macalady, D.L.

    1992-01-01

    Chloramphenicol completely inhibited the activity of existing denitrification enzymes in acetylene-block incubations with (i) sediments from a nitrate-contaminated aquifer and (ii) a continuous culture of denitrifying groundwater bacteria. Control flasks with no antibiotic produced significant amounts of nitrous oxide in the same time period. Amendment with chloramphenicol after nitrous oxide production had begun resulted in a significant decrease in the rate of nitrous oxide production. Chloramphenicol also decreased (>50%) the activity of existing denitrification enzymes in pure cultures of Pseudomonas denitrificans that were harvested during log- phase growth and maintained for 2 weeks in a starvation medium lacking electron donor. Short-term time courses of nitrate consumption and nitrous oxide production in the presence of acetylene with P. denitrificans undergoing carbon starvation were performed under optimal conditions designed to mimic denitrification enzyme activity assays used with soils. Time courses were linear for both chloramphenicol and control flasks, and rate estimates for the two treatments were significantly different at the 95% confidence level. Complete or partial inhibition of existing enzyme activity is not consistent with the current understanding of the mode of action of chloramphenicol or current practice, in which the compound is frequently employed to inhibit de novo protein synthesis during the course of microbial activity assays. The results of this study demonstrate that chloramphenicol amendment can inhibit the activity of existing denitrification enzymes and suggest that caution is needed in the design and interpretation of denitrification activity assays in which chloramphenicol is used to prevent new protein synthesis.

  11. Inclusion of Part-Time Faculty for the Benefit of Faculty and Students

    Science.gov (United States)

    Meixner, Cara; Kruck, S. E.; Madden, Laura T.

    2010-01-01

    The new majority of faculty in today's colleges and universities are part-time, yet sizable gaps exist in the research on their needs, interests, and experiences. Further, the peer-reviewed scholarship is largely quantitative. Principally, it focuses on the utility of the adjunct work force, comparisons between part-time and full-time faculty, and…

  12. Time domain calculation of connector loads of a very large floating structure

    Science.gov (United States)

    Gu, Jiayang; Wu, Jie; Qi, Enrong; Guan, Yifeng; Yuan, Yubo

    2015-06-01

    Loads generated after an air crash, ship collision, and other accidents may destroy very large floating structures (VLFSs) and create additional connector loads. In this study, the combined effects of ship collision and wave loads are considered to establish motion differential equations for a multi-body VLFS. A time domain calculation method is proposed to calculate the connector load of the VLFS in waves. The Longuet-Higgins model is employed to simulate the stochastic wave load. Fluid force and hydrodynamic coefficients are obtained with DNV Sesam software. The motion differential equation is solved in the time domain after the frequency-domain hydrodynamic coefficients are converted into the memory functions of the time-domain equation of motion. As a result of the combined action of wave and impact loads, high-frequency oscillation is observed in the time history curve of the connector load. At wave directions of 0° and 75°, the regularities of the time history curves of the connector loads in different directions are similar and the connector loads of C1 and C2 in the X direction are the largest. The oscillation load is observed in the connector in the Y direction at a wave direction of 75° but not at 0°. This paper presents a time domain calculation method of connector load to provide a reference for the future development of Chinese VLFSs.
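    A standard route from frequency-domain hydrodynamic coefficients to a time-domain memory function is the Cummins/Ogilvie cosine transform, K(t) = (2/π) ∫₀^∞ B(ω) cos(ωt) dω, where B(ω) is the radiation damping. The sketch below is illustrative (the damping curve in the test is synthetic, not data from the paper) and evaluates the transform by trapezoidal quadrature on a tabulated grid.

```python
import numpy as np

def retardation_function(omega, B, times):
    """Cummins-type memory (retardation) kernel from frequency-domain
    radiation damping B(omega):
        K(t) = (2/pi) * integral_0^inf B(w) * cos(w*t) dw,
    evaluated by trapezoidal quadrature on the tabulated frequency grid."""
    omega = np.asarray(omega, dtype=float)
    B = np.asarray(B, dtype=float)
    out = []
    for t in np.atleast_1d(times):
        y = B * np.cos(omega * t)
        integral = np.sum((y[1:] + y[:-1]) * np.diff(omega)) / 2.0
        out.append(2.0 / np.pi * integral)
    return np.array(out)
```

    Convolving K with the body velocity history then supplies the radiation-force memory term in the time-domain equation of motion.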

  13. Research on resistance characteristics of YBCO tape under short-time DC large current impact

    Science.gov (United States)

    Zhang, Zhifeng; Yang, Jiabin; Qiu, Qingquan; Zhang, Guomin; Lin, Liangzhen

    2017-06-01

    Research on the resistance characteristics of YBCO tape under short-time DC large current impact is the foundation for developing a DC superconducting fault current limiter (SFCL) for voltage source converter-based high voltage direct current (VSC-HVDC) systems, which are one of the valid approaches to solving the problems of renewable energy integration. An SFCL can limit DC short-circuit currents and enhance the interrupting capabilities of DC circuit breakers. In this paper, under short-time DC large current impacts, the resistance features of bare YBCO tape are studied to find the resistance-temperature behaviour and the maximum impact current. The influence of insulation on the resistance-temperature characteristics of YBCO tape is studied by comparison tests with bare tape and insulated tape at 77 K. The influence of operating temperature on the tape is also studied under subcooled liquid nitrogen conditions. For the current impact security of YBCO tape, the critical current degradation and peak temperature are analyzed and used as judgment criteria. The test results are helpful for developing SFCLs for VSC-HVDC.

  14. Parallel analysis tools and new visualization techniques for ultra-large climate data set

    Energy Technology Data Exchange (ETDEWEB)

    Middleton, Don [National Center for Atmospheric Research, Boulder, CO (United States); Haley, Mary [National Center for Atmospheric Research, Boulder, CO (United States)

    2014-12-10

    ParVis was a project funded under LAB 10-05: “Earth System Modeling: Advanced Scientific Visualization of Ultra-Large Climate Data Sets”. Argonne was the lead lab with partners at PNNL, SNL, NCAR and UC-Davis. This report covers progress from January 1st, 2013 through Dec 1st, 2014. Two previous reports covered the period from Summer, 2010, through September 2011 and October 2011 through December 2012, respectively. While the project was originally planned to end on April 30, 2013, personnel and priority changes allowed many of the institutions to continue work through FY14 using existing funds. A primary focus of ParVis was introducing parallelism to climate model analysis to greatly reduce the time-to-visualization for ultra-large climate data sets. Work in the first two years was conducted on two tracks with different time horizons: one track to provide immediate help to climate scientists already struggling to apply their analysis to existing large data sets and another focused on building a new data-parallel library and tool for climate analysis and visualization that will give the field a platform for performing analysis and visualization on ultra-large datasets for the foreseeable future. In the final 2 years of the project, we focused mostly on the new data-parallel library and associated tools for climate analysis and visualization.

  15. Time to "go large" on biofilm research: advantages of an omics approach.

    Science.gov (United States)

    Azevedo, Nuno F; Lopes, Susana P; Keevil, Charles W; Pereira, Maria O; Vieira, Maria J

    2009-04-01

    In nature, the biofilm mode of life is of great importance in the cell cycle for many microorganisms. Perhaps because of biofilm complexity and variability, the characterization of a given microbial system, in terms of biofilm formation potential, structure and associated physiological activity, in a large-scale, standardized and systematic manner has been hindered by the absence of high-throughput methods. This outlook is now starting to change as new methods involving the utilization of microtiter-plates and automated spectrophotometry and microscopy systems are being developed to perform large-scale testing of microbial biofilms. Here, we evaluate if the time is ripe to start an integrated omics approach, i.e., the generation and interrogation of large datasets, to biofilms--"biofomics". This omics approach would bring much needed insight into how biofilm formation ability is affected by a number of environmental, physiological and mutational factors and how these factors interplay between themselves in a standardized manner. This could then lead to the creation of a database where biofilm signatures are identified and interrogated. Nevertheless, and before embarking on such an enterprise, the selection of a versatile, robust, high-throughput biofilm growing device and of appropriate methods for biofilm analysis will have to be performed. Whether such device and analytical methods are already available, particularly for complex heterotrophic biofilms is, however, very debatable.

  16. Direct Analysis in Real Time Mass Spectrometry for Characterization of Large Saccharides.

    Science.gov (United States)

    Ma, Huiying; Jiang, Qing; Dai, Diya; Li, Hongli; Bi, Wentao; Da Yong Chen, David

    2018-03-06

    Polysaccharide characterization poses the most difficult challenge to available analytical technologies among the major types of biomolecules. Plant polysaccharides are reported to have numerous medicinal values, but their effects can differ with the plant species, and even with the region of production and the conditions of cultivation. However, the molecular basis of the differences among these polysaccharides is largely unknown. In this study, direct analysis in real time mass spectrometry (DART-MS) was used to generate polysaccharide fingerprints. Large saccharides break down into characteristic small fragments in the DART source via pyrolysis, and the products are then detected by high resolution MS. Temperature was shown to be a crucial parameter for the decomposition of large polysaccharides. The general behavior of carbohydrates in DART-MS was also studied through the investigation of a number of mono- and oligosaccharide standards. The chemical formulas and putative ionic forms of the fragments were proposed based on accurate mass, with mass errors below 10 ppm. Multivariate data analysis shows clear differentiation of the plant species. Intensities of marker ions compared among samples also showed obvious differences. The combination of DART-MS analysis and the mechanochemical extraction method used in this work demonstrates a simple, fast, and high-throughput analytical protocol for the efficient evaluation of molecular features in plant polysaccharides.
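
    The multivariate differentiation step can be illustrated with a small principal component analysis over fragment-ion intensity matrices; the numbers below are invented placeholders, not DART-MS measurements:

```python
# Sketch: differentiating samples from mass-spectral fingerprints with PCA.
# Rows = samples, columns = intensities of marker fragment ions.
# All intensity values here are hypothetical placeholders.
import numpy as np


def pca_scores(X: np.ndarray, n_components: int = 2) -> np.ndarray:
    """Project the mean-centered intensity matrix onto its leading principal axes."""
    Xc = X - X.mean(axis=0)
    # SVD of the centered data: rows of Vt are the principal directions.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T


species_a = np.array([[10.0, 1.0, 0.2], [11.0, 1.2, 0.1]])  # two samples, species A
species_b = np.array([[1.0, 9.0, 3.0], [0.8, 10.0, 3.2]])   # two samples, species B
scores = pca_scores(np.vstack([species_a, species_b]))
# Samples of the same species land on the same side of the first axis.
```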

  17. Remotely controlled large container disposal methodology

    International Nuclear Information System (INIS)

    Amir, S.J.

    1994-09-01

    Remotely Handled Large Containers (RHLC), also called drag-off boxes, have been used at the Hanford Site since the 1940s to dispose of large pieces of radioactively contaminated equipment. These containers are typically large steel-reinforced concrete boxes, which weigh as much as 40 tons. Because large quantities of high-dose waste can produce radiation levels as high as 200 mrem/hour at 200 ft, the containers are remotely handled (either lifted off the railcar by crane or dragged off with a cable). Many of the existing containers do not meet existing structural and safety design criteria and some of the transportation requirements. The drag-off method of pulling the box off the railcar using a cable and a tractor is also not considered a safe operation, especially in view of past mishaps.

  18. Multi-Scale Dissemination of Time Series Data

    DEFF Research Database (Denmark)

    Guo, Qingsong; Zhou, Yongluan; Su, Li

    2013-01-01

    In this paper, we consider the problem of continuous dissemination of time series data, such as sensor measurements, to a large number of subscribers. These subscribers fall into multiple subscription levels, where each subscription level is specified by the bandwidth constraint of a subscriber......, which is an abstract indicator for both the physical limits and the amount of data that the subscriber would like to handle. To handle this problem, we propose a system framework for multi-scale time series data dissemination that employs a typical tree-based dissemination network and existing time...
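
    The bandwidth-constrained subscription levels described above can be served, in the simplest case, by thinning the series per level. A toy sketch (uniform downsampling is an assumption here; the paper's actual framework uses a tree-based dissemination network):

```python
# Sketch of multi-scale time-series dissemination: each subscription level
# receives the series thinned to fit its bandwidth budget (points per window).
# Hypothetical scheme using uniform stride-based downsampling.
def downsample(series, budget):
    """Keep at most `budget` evenly spaced points from `series`."""
    if budget >= len(series):
        return list(series)
    stride = -(-len(series) // budget)  # ceiling division
    return list(series[::stride])


measurements = list(range(100))                    # e.g. 100 sensor readings
levels = {"full": 100, "half": 50, "coarse": 10}   # bandwidth budgets per level
views = {name: downsample(measurements, b) for name, b in levels.items()}
```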

  19. Time Domain View of Liquid-like Screening and Large Polaron Formation in Lead Halide Perovskites

    Science.gov (United States)

    Joshi, Prakriti Pradhan; Miyata, Kiyoshi; Trinh, M. Tuan; Zhu, Xiaoyang

    The structural softness and dynamic disorder of lead halide perovskites contribute to their remarkable optoelectronic properties through efficient charge screening and large polaron formation. Here we provide a direct time-domain view of the liquid-like structural dynamics and polaron formation in single crystal CH3NH3PbBr3 and CsPbBr3 using femtosecond optical Kerr effect spectroscopy in conjunction with transient reflectance spectroscopy. We investigate structural dynamics as a function of pump energy, which enables us to examine the dynamics in the absence and presence of charge carriers. In the absence of charge carriers, structural dynamics are dominated by over-damped picosecond motions of the inorganic PbBr3- sub-lattice, and these motions are strongly coupled to band-gap electronic transitions. Carrier injection by across-gap optical excitation triggers additional 0.26 ps dynamics in CH3NH3PbBr3 that can be attributed to the formation of large polarons. In comparison, large polaron formation is slower in CsPbBr3, with a time constant of 0.6 ps. We discuss how such dynamic screening protects charge carriers in lead halide perovskites. US Department of Energy, Office of Science - Basic Energy Sciences.
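
    A formation time constant such as the 0.26 ps value above is typically extracted by fitting an exponential rise to the transient signal. A minimal sketch on synthetic, noiseless data (not the authors' analysis pipeline):

```python
# Sketch: extracting a formation time constant tau from a transient signal
# s(t) = 1 - exp(-t/tau) by linearizing log(1 - s). Synthetic data only.
import numpy as np


def fit_rise_time(t, signal):
    """Estimate tau by a linear fit to log(1 - s) versus t."""
    mask = signal < 1.0  # guard against log(0)
    slope = np.polyfit(t[mask], np.log(1.0 - signal[mask]), 1)[0]
    return -1.0 / slope


t = np.linspace(0.0, 2.0, 200)   # delay axis in picoseconds
tau_true = 0.26                  # hypothetical formation time (ps)
s = 1.0 - np.exp(-t / tau_true)  # idealized, noiseless transient
tau_est = fit_rise_time(t, s)
```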

  20. Lebesgue Sets Immeasurable Existence

    Directory of Open Access Journals (Sweden)

    Diana Marginean Petrovai

    2012-12-01

    Full Text Available It is well known that the notions of measure and integral emerged early, in close connection with practical problems of measuring geometric figures. The notion of measure was outlined in the early 20th century through the research of H. Lebesgue, founder of the modern theory of measure and integral. A technique for the integration of functions was developed concurrently. Gradually a specific area was formed, today called measure and integral theory. Essential contributions to building this theory were made by a large number of mathematicians: C. Carathéodory, J. Radon, O. Nikodym, S. Bochner, J. Pettis, P. Halmos and many others. In the following we present several abstract sets and classes of sets. There exist sets which are not Lebesgue measurable, and sets which are Lebesgue measurable but not Borel measurable. Hence B ⊂ L ⊂ P(X).
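
    The classical example behind the existence claim is Vitali's construction; a brief sketch in the notation of the abstract:

```latex
% Vitali's construction of a set V \subset [0,1] that is not Lebesgue measurable.
% Define x \sim y \iff x - y \in \mathbb{Q}, and let V pick one representative
% from each equivalence class (axiom of choice). For an enumeration of the
% rationals q_n \in [-1,1], the translates V_n = V + q_n are pairwise disjoint and
%   [0,1] \subseteq \bigcup_n V_n \subseteq [-1,2].
% If V were measurable with \lambda(V) = c, translation invariance and countable
% additivity would give
%   1 \le \sum_{n} c \le 3,
% which is impossible whether c = 0 or c > 0; hence V \notin \mathcal{L}.
```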

  1. Straightening: existence, uniqueness and stability

    Science.gov (United States)

    Destrade, M.; Ogden, R. W.; Sgura, I.; Vergori, L.

    2014-01-01

    One of the least studied universal deformations of incompressible nonlinear elasticity, namely the straightening of a sector of a circular cylinder into a rectangular block, is revisited here and, in particular, issues of existence and stability are addressed. Particular attention is paid to the system of forces required to sustain the large static deformation, including by the application of end couples. The influence of geometric parameters and constitutive models on the appearance of wrinkles on the compressed face of the block is also studied. Different numerical methods for solving the incremental stability problem are compared and it is found that the impedance matrix method, based on the resolution of a matrix Riccati differential equation, is the more precise. PMID:24711723

  2. Freeway travel time estimation using existing fixed traffic sensors : phase 2.

    Science.gov (United States)

    2015-03-01

    Travel time, one of the most important freeway performance metrics, can be easily estimated using the data collected from fixed traffic sensors, avoiding the need to install additional travel time data collectors. This project is aimed at fully u...

  3. Expansion potential for existing nuclear power station sites

    Energy Technology Data Exchange (ETDEWEB)

    Cope, D. F.; Bauman, H. F.

    1977-09-26

    This report is a preliminary analysis of the expansion potential of the existing nuclear power sites, in particular their potential for development into nuclear energy centers (NECs) of 10 GW(e) or greater. The analysis is based primarily on matching the most important physical characteristics of a site against the dominating site criteria. Sites reviewed consist mainly of those in the 1974 through 1976 ERDA Nuclear Power Stations listings, without regard to the present status of reactor construction plans. A small number of potential NEC sites that are not associated with existing power stations were also reviewed. Each site was categorized in terms of its potential as: a dispersed site of 5 GW(e) or less; a mini-NEC of 5 to 10 GW(e); an NEC of 10 to 20 GW(e); or a large NEC of more than 20 GW(e). The sites were categorized on their ultimate potential, without regard to political considerations that might restrain their development. The analysis indicates that nearly 40 percent of existing sites have potential for expansion to nuclear energy centers.

  4. Expansion potential for existing nuclear power station sites

    International Nuclear Information System (INIS)

    Cope, D.F.; Bauman, H.F.

    1977-01-01

    This report is a preliminary analysis of the expansion potential of the existing nuclear power sites, in particular their potential for development into nuclear energy centers (NECs) of 10 GW(e) or greater. The analysis is based primarily on matching the most important physical characteristics of a site against the dominating site criteria. Sites reviewed consist mainly of those in the 1974 through 1976 ERDA Nuclear Power Stations listings, without regard to the present status of reactor construction plans. A small number of potential NEC sites that are not associated with existing power stations were also reviewed. Each site was categorized in terms of its potential as: a dispersed site of 5 GW(e) or less; a mini-NEC of 5 to 10 GW(e); an NEC of 10 to 20 GW(e); or a large NEC of more than 20 GW(e). The sites were categorized on their ultimate potential, without regard to political considerations that might restrain their development. The analysis indicates that nearly 40 percent of existing sites have potential for expansion to nuclear energy centers.

  5. Estimation of Transport Trajectory and Residence Time in Large River–Lake Systems: Application to Poyang Lake (China) Using a Combined Model Approach

    Directory of Open Access Journals (Sweden)

    Yunliang Li

    2015-09-01

    Full Text Available The biochemical processes and associated water quality in many lakes mainly depend on their transport behaviors. Most existing methodologies for investigating transport behaviors are based on physically based numerical models. The pollutant transport trajectory and residence time of Poyang Lake are thought to have important implications for the steadily deteriorating water quality and the associated rapid environmental changes during the flood period. This study used a hydrodynamic model (MIKE 21 in conjunction with transport and particle-tracking sub-models to provide comprehensive investigation of transport behaviors in Poyang Lake. Model simulations reveal that the lake’s prevailing water flow patterns cause a unique transport trajectory that primarily develops from the catchment river mouths to the downstream area along the lake’s main flow channels, similar to a river-transport behavior. Particle tracking results show that the mean residence time of the lake is 89 days during July–September. The effect of the Yangtze River (the effluent of the lake on the residence time is stronger than that of the catchment river inflows. The current study represents a first attempt to use a combined model approach to provide insights into the transport behaviors for a large river–lake system, given proposals to manage the pollutant inputs both directly to the lake and catchment rivers.
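
    The particle-tracking estimate of residence time reduces to releasing particles and recording their exit times. A toy 1-D advection sketch (a stand-in for the MIKE 21 particle-tracking sub-model, with made-up speeds and geometry):

```python
# Sketch: estimating mean residence time by particle tracking in a toy 1-D
# advective channel. A stand-in for a hydrodynamic model, not MIKE 21 itself.
import numpy as np


def residence_times(x0, u, length, dt=0.125):
    """Advect particles from positions x0 at speed u; return each exit time."""
    x = np.array(x0, dtype=float)
    times = np.zeros(len(x))
    active = x < length
    t = 0.0
    while active.any():
        t += dt
        x[active] += u * dt
        newly_out = active & (x >= length)   # particles leaving this step
        times[newly_out] = t
        active &= ~newly_out
    return times


starts = np.linspace(0.0, 50.0, 6)           # release positions along the channel
rt = residence_times(starts, u=1.0, length=100.0)
mean_rt = rt.mean()                          # mean residence time of the release
```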

  6. A large set of potential past, present and future hydro-meteorological time series for the UK

    Science.gov (United States)

    Guillod, Benoit P.; Jones, Richard G.; Dadson, Simon J.; Coxon, Gemma; Bussi, Gianbattista; Freer, James; Kay, Alison L.; Massey, Neil R.; Sparrow, Sarah N.; Wallom, David C. H.; Allen, Myles R.; Hall, Jim W.

    2018-01-01

    Hydro-meteorological extremes such as drought and heavy precipitation can have large impacts on society and the economy. With potentially increasing risks associated with such events due to climate change, properly assessing the associated impacts and uncertainties is critical for adequate adaptation. However, the application of risk-based approaches often requires large sets of extreme events, which are not commonly available. Here, we present such a large set of hydro-meteorological time series for recent past and future conditions for the United Kingdom based on weather@home 2, a modelling framework consisting of a global climate model (GCM) driven by observed or projected sea surface temperature (SST) and sea ice which is downscaled to 25 km over the European domain by a regional climate model (RCM). Sets of 100 time series are generated for each of (i) a historical baseline (1900-2006), (ii) five near-future scenarios (2020-2049) and (iii) five far-future scenarios (2070-2099). The five scenarios in each future time slice all follow the Representative Concentration Pathway 8.5 (RCP8.5) and sample the range of sea surface temperature and sea ice changes from CMIP5 (Coupled Model Intercomparison Project Phase 5) models. Validation of the historical baseline highlights good performance for temperature and potential evaporation, but substantial seasonal biases in mean precipitation, which are corrected using a linear approach. For extremes in low precipitation over a long accumulation period (> 3 months) and shorter-duration high precipitation (1-30 days), the time series generally represent past statistics well. Future projections show small precipitation increases in winter but large decreases in summer on average, leading to an overall drying, consistent with the most recent UK Climate Projections (UKCP09) but larger in magnitude. Both drought and high-precipitation events are projected to increase in frequency and intensity in most regions.
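
    The linear bias correction mentioned for mean precipitation can be sketched as a per-calendar-month scaling. This is a generic approach under assumed climatologies, not necessarily the exact weather@home 2 procedure:

```python
# Sketch of a linear (multiplicative) bias correction for monthly precipitation:
# scale each calendar month so the model climatology matches observations.
# Climatology values below are illustrative placeholders.
import numpy as np


def monthly_scale_factors(model_clim, obs_clim):
    """Per-month correction factors obs/model (12 values)."""
    return np.asarray(obs_clim) / np.asarray(model_clim)


def correct(series, months, factors):
    """Apply the month-specific factor to each value in the series."""
    return np.asarray(series) * factors[np.asarray(months)]


model_clim = np.full(12, 80.0)    # model: 80 mm in every month (placeholder)
obs_clim = np.full(12, 100.0)     # observed: 100 mm in every month (placeholder)
f = monthly_scale_factors(model_clim, obs_clim)
corrected = correct([40.0, 160.0], [0, 6], f)   # two sample values, Jan and Jul
```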

  7. Near-Real-Time Monitoring of Insect Defoliation Using Landsat Time Series

    Directory of Open Access Journals (Sweden)

    Valerie J. Pasquarella

    2017-07-01

    Full Text Available Introduced insects and pathogens impact millions of acres of forested land in the United States each year, and large-scale monitoring efforts are essential for tracking the spread of outbreaks and quantifying the extent of damage. However, monitoring the impacts of defoliating insects presents a significant challenge due to the ephemeral nature of defoliation events. Using the 2016 gypsy moth (Lymantria dispar outbreak in Southern New England as a case study, we present a new approach for near-real-time defoliation monitoring using synthetic images produced from Landsat time series. By comparing predicted and observed images, we assessed changes in vegetation condition multiple times over the course of an outbreak. Initial measures can be made as imagery becomes available, and season-integrated products provide a wall-to-wall assessment of potential defoliation at 30 m resolution. Qualitative and quantitative comparisons suggest our Landsat Time Series (LTS products improve identification of defoliation events relative to existing products and provide a repeatable metric of change in condition. Our synthetic-image approach is an important step toward using the full temporal potential of the Landsat archive for operational monitoring of forest health over large extents, and provides an important new tool for understanding spatial and temporal dynamics of insect defoliators.
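
    Comparing predicted and observed images comes down to fitting a seasonal model to the historical time series of each pixel and flagging large negative departures. A single-pixel sketch with synthetic NDVI values and a hypothetical 0.1 threshold (the operational LTS products are built from full Landsat stacks):

```python
# Sketch: flagging defoliation as a negative departure from a harmonic
# (seasonal) model of a vegetation-index time series. Synthetic data only.
import numpy as np


def harmonic_design(doy):
    """Design matrix: mean plus annual sine/cosine terms."""
    w = 2.0 * np.pi * np.asarray(doy) / 365.0
    return np.column_stack([np.ones_like(w), np.sin(w), np.cos(w)])


def harmonic_fit(doy, values):
    """Least-squares fit of the seasonal model to historical observations."""
    coef, *_ = np.linalg.lstsq(harmonic_design(doy), np.asarray(values), rcond=None)
    return coef


def predict(doy, coef):
    """Synthetic (predicted) values for the given days of year."""
    return harmonic_design(doy) @ coef


# Historical observations following a clean seasonal cycle...
doy_hist = np.arange(8, 366, 16)
ndvi_hist = 0.5 + 0.2 * np.sin(2.0 * np.pi * doy_hist / 365.0)
coef = harmonic_fit(doy_hist, ndvi_hist)

# ...then a mid-season observation far below the prediction -> defoliation flag.
obs_doy, obs_ndvi = 180, 0.35
defoliated = (predict([obs_doy], coef)[0] - obs_ndvi) > 0.1  # hypothetical threshold
```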

  8. Time and frequency domain analyses of the Hualien Large-Scale Seismic Test

    International Nuclear Information System (INIS)

    Kabanda, John; Kwon, Oh-Sung; Kwon, Gunup

    2015-01-01

    Highlights: • Time- and frequency-domain analysis methods are verified against each other. • The two analysis methods are validated against Hualien LSST. • The nonlinear time domain (NLTD) analysis resulted in more realistic response. • The frequency domain (FD) analysis shows amplification at resonant frequencies. • The NLTD analysis requires significant modeling and computing time. - Abstract: In the nuclear industry, the equivalent-linear frequency domain analysis method has been the de facto standard procedure primarily due to the method's computational efficiency. This study explores the feasibility of applying the nonlinear time domain analysis method for the soil–structure-interaction analysis of nuclear power facilities. As a first step, the equivalency of the time and frequency domain analysis methods is verified through a site response analysis of one-dimensional soil, a dynamic impedance analysis of soil–foundation system, and a seismic response analysis of the entire soil–structure system. For the verifications, an idealized elastic soil–structure system is used to minimize variables in the comparison of the two methods. Then, the verified analysis methods are used to develop time and frequency domain models of Hualien Large-Scale Seismic Test. The predicted structural responses are compared against field measurements. The models are also analyzed with an amplified ground motion to evaluate discrepancies of the time and frequency domain analysis methods when the soil–structure system behaves beyond the elastic range. The analysis results show that the equivalent-linear frequency domain analysis method amplifies certain frequency bands and tends to result in higher structural acceleration than the nonlinear time domain analysis method. A comparison with field measurements shows that the nonlinear time domain analysis method better captures the frequency distribution of recorded structural responses than the frequency domain

  9. Transportation capabilities of the existing cask fleet

    International Nuclear Information System (INIS)

    Johnson, P.E.; Wankerl, M.W.; Joy, D.S.

    1991-01-01

    This paper describes a number of scenarios estimating the amount of spent nuclear fuel that could be transported to a Monitored Retrievable Storage (MRS) Facility by various combinations of existing cask fleets. To develop the scenarios, the data provided by the Transportation System Data Base (TSDB) were modified to reflect the additional time for cask turnaround resulting from various startup and transportation issues. With these more realistic speed and cask-handling assumptions, the annual transportation capability of a fleet consisting of all of the existing casks is approximately 465 metric tons of uranium (MTU). The most likely fleet of existing casks that would be made available to the DOE consists of two rail, three overweight truck, and six legal-weight truck casks. Under the same transportation assumptions, this cask fleet is capable of transporting approximately 270 MTU/year. These ranges of capability are a result of the assumptions pertaining to the number of casks assumed to be available. It should be noted that this assessment assumes additional casks based on existing certifications are not fabricated.
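
    The MTU/year figures are, in essence, throughput arithmetic over cask counts, capacities and turnaround times. A back-of-the-envelope sketch in which every number is an illustrative placeholder, not a TSDB value:

```python
# Back-of-the-envelope cask fleet throughput:
#   MTU/year = sum over cask types of count * capacity * (365 / round-trip days).
# All figures below are illustrative placeholders, not TSDB data.
def fleet_throughput(fleet):
    """Annual throughput (MTU/year) for a list of (count, capacity, turnaround)."""
    return sum(n * capacity * (365.0 / turnaround_days)
               for n, capacity, turnaround_days in fleet)


# (count, MTU per shipment, round-trip turnaround in days) -- hypothetical
fleet = [
    (2, 6.0, 30.0),   # rail casks
    (3, 1.5, 15.0),   # overweight truck casks
    (6, 0.9, 12.0),   # legal-weight truck casks
]
mtu_per_year = fleet_throughput(fleet)
```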

  10. Transportation capabilities of the existing cask fleet

    International Nuclear Information System (INIS)

    Johnson, P.E.; Joy, D.S.; Wankerl, M.W.

    1991-01-01

    This paper describes a number of scenarios estimating the amount of spent nuclear fuel that could be transported to a Monitored Retrievable Storage (MRS) Facility by various combinations of existing cask fleets. To develop the scenarios, the data provided by the Transportation System Data Base (TSDB) were modified to reflect the additional time for cask turnaround resulting from various startup and transportation issues. With these more realistic speed and cask-handling assumptions, the annual transportation capability of a fleet consisting of all of the existing casks is approximately 465 metric tons of uranium (MTU). The most likely fleet of existing casks that would be made available to the Department of Energy (DOE) consists of two rail, three overweight truck, and six legal-weight truck casks. Under the same transportation assumptions, this cask fleet is capable of transporting approximately 270 MTU/year. These ranges of capability are a result of the assumptions pertaining to the number of casks assumed to be available. It should be noted that this assessment assumes additional casks based on existing certifications are not fabricated. 5 refs., 4 tabs

  11. Large-scale digitizer system (LSD) for charge and time digitization in high-energy physics experiments

    International Nuclear Information System (INIS)

    Althaus, R.F.; Kirsten, F.A.; Lee, K.L.; Olson, S.R.; Wagner, L.J.; Wolverton, J.M.

    1976-10-01

    A large-scale digitizer (LSD) system for acquiring charge and time-of-arrival particle data from high-energy-physics experiments has been developed at the Lawrence Berkeley Laboratory. The objective in this development was to significantly reduce the cost of instrumenting large-detector arrays which, for the 4π-geometry of colliding-beam experiments, are proposed with an order of magnitude increase in channel count over previous detectors. In order to achieve the desired economy (approximately $65 per channel), a system was designed in which a number of control signals for conversion, for digitization, and for readout are shared in common by all the channels in each 128-channel bin. The overall-system concept and the distribution of control signals that are critical to the 10-bit charge resolution and to the 12-bit time resolution are described. Also described is the bit-serial transfer scheme, chosen for its low component and cabling costs

  12. Large deviations and mixing for dissipative PDEs with unbounded random kicks

    Science.gov (United States)

    Jakšić, V.; Nersesyan, V.; Pillet, C.-A.; Shirikyan, A.

    2018-02-01

    We study the problem of exponential mixing and large deviations for discrete-time Markov processes associated with a class of random dynamical systems. Under some dissipativity and regularisation hypotheses for the underlying deterministic dynamics and a non-degeneracy condition for the driving random force, we discuss the existence and uniqueness of a stationary measure and its exponential stability in the Kantorovich-Wasserstein metric. We next turn to the large deviations principle (LDP) and establish its validity for the occupation measures of the Markov processes in question. The proof is based on Kifer’s criterion for non-compact spaces, a result on large-time asymptotics for generalised Markov semigroups, and a coupling argument. These tools combined constitute a new approach to the LDP for infinite-dimensional processes without the strong Feller property in a non-compact space. The results obtained can be applied to the two-dimensional Navier-Stokes system in a bounded domain and to the complex Ginzburg-Landau equation.

  13. Decontamination of large horizontal concrete surfaces outdoors

    International Nuclear Information System (INIS)

    Barbier, M.M.; Chester, C.V.

    1980-01-01

    A study is being conducted of the resources and planning that would be required to clean up an extensive contamination of the outdoor environment. As part of this study, an assessment of the fleet of machines needed for decontaminating large outdoor surfaces of horizontal concrete will be attempted. The operations required are described. The performance of applicable existing equipment is analyzed in terms of area cleaned per unit time, and the comprehensive cost of decontamination per unit area is derived. Shielded equipment for measuring directional radiation and continuously monitoring decontamination work are described. Shielding of drivers' cabs and remote control vehicles is addressed
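
    The "area cleaned per unit time" metric leads directly to a fleet-size estimate. A sketch with placeholder figures (not values from the study):

```python
# Sketch: number of machines needed to decontaminate an area within a deadline,
# given each machine's cleaning rate. All figures are illustrative placeholders.
import math


def machines_needed(area_m2, rate_m2_per_h, hours_per_day, days):
    """Smallest fleet size whose total capacity covers the area in time."""
    capacity_per_machine = rate_m2_per_h * hours_per_day * days
    return math.ceil(area_m2 / capacity_per_machine)


# 2 km^2 of concrete, 500 m^2/h per machine, 10 h/day shifts, 30-day deadline
n = machines_needed(area_m2=2_000_000, rate_m2_per_h=500.0,
                    hours_per_day=10.0, days=30)
```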

  14. On large-time energy concentration in solutions to the Navier-Stokes equations in general domains

    Czech Academy of Sciences Publication Activity Database

    Skalák, Zdeněk

    2011-01-01

    Roč. 91, č. 9 (2011), s. 724-732 ISSN 0044-2267 R&D Projects: GA AV ČR IAA100190905 Institutional research plan: CEZ:AV0Z20600510 Keywords : Navier-Stokes equations * large-time behavior * energy concentration Subject RIV: BA - General Mathematics Impact factor: 0.863, year: 2011

  15. Parasitic lasing suppression in large-aperture Ti:sapphire amplifiers by optimizing the seed–pump time delay

    International Nuclear Information System (INIS)

    Chu, Y X; Liang, X Y; Yu, L H; Xu, L; Lu, X M; Liu, Y Q; Leng, Y X; Li, R X; Xu, Z Z

    2013-01-01

    Theoretical and experimental investigations are carried out to determine the influence of the time delay between the input seed pulse and pump pulses on transverse parasitic lasing in a Ti:sapphire amplifier with a diameter of 80 mm, which is clad by a refractive index-matched liquid doped with an absorber. When the time delay is optimized, a maximum output energy of 50.8 J is achieved at a pump energy of 105 J, which corresponds to a conversion efficiency of 47.5%. Based on the existing compressor, the laser system achieves a peak power of 1.26 PW with a 29.0 fs pulse duration. (letter)

  16. Large eddy simulation of a buoyancy-aided flow in a non-uniform channel – Buoyancy effects on large flow structures

    Energy Technology Data Exchange (ETDEWEB)

    Duan, Y. [Department of Mechanical Engineering, University of Sheffield, Sheffield S1 3JD (United Kingdom); School of Mechanical, Aerospace and Civil Engineering, University of Manchester, Manchester M13 9PL (United Kingdom); He, S., E-mail: s.he@sheffield.ac.uk [Department of Mechanical Engineering, University of Sheffield, Sheffield S1 3JD (United Kingdom)

    2017-02-15

    Highlights: • Buoyancy may greatly redistribute the flow in a non-uniform channel. • Flow structures in the narrow gap are greatly changed when buoyancy is strong. • Large flow structures exist in the wider gap and are enhanced when heating is strong. • Buoyancy reduces the mixing factor caused by large flow structures in the narrow gap. - Abstract: It has been a long time since the ‘abnormal’ turbulent intensity distribution and high inter-sub-channel mixing rates were first observed in the vicinity of the narrow gaps formed by the fuel rods in nuclear reactors. The extraordinary flow behaviour was first described as periodic flow structures by Hooper and Rehme (1984). Since then, the existence of large flow structures has been demonstrated by many researchers in various non-uniform flow channels. It has been proved by many authors that the Strouhal number of the flow structures in isothermal flow depends on the size of the narrow gap, not on the Reynolds number once it is sufficiently large. This paper reports a numerical investigation of the effect of buoyancy on the large flow structures. A buoyancy-aided flow in a tightly-packed rod-bundle-like channel is modelled using large eddy simulation (LES) together with the Boussinesq approximation. The behaviour of the large flow structures in the gaps of the flow passage is studied using instantaneous flow fields, spectrum analysis and correlation analysis. It is found that the non-uniform buoyancy force in the cross section of the flow channel may greatly redistribute the velocity field once the overall buoyancy force is sufficiently strong, and consequently modify the large flow structures. The temporal and axial spatial scales of the large flow structures are influenced by buoyancy in a way similar to that in which turbulence is influenced. These scales reduce when the flow is laminarised, but start increasing in the turbulence regeneration region. The spanwise scale of the flow structures in the narrow gap remains more or

  17. Large Observatory for x-ray Timing (LOFT-P): a Probe-class mission concept study

    Science.gov (United States)

    Wilson-Hodge, Colleen A.; Ray, Paul S.; Chakrabarty, Deepto; Feroci, Marco; Alvarez, Laura; Baysinger, Michael; Becker, Chris; Bozzo, Enrico; Brandt, Soren; Carson, Billy; Chapman, Jack; Dominguez, Alexandra; Fabisinski, Leo; Gangl, Bert; Garcia, Jay; Griffith, Christopher; Hernanz, Margarita; Hickman, Robert; Hopkins, Randall; Hui, Michelle; Ingram, Luster; Jenke, Peter; Korpela, Seppo; Maccarone, Tom; Michalska, Malgorzata; Pohl, Martin; Santangelo, Andrea; Schanne, Stephane; Schnell, Andrew; Stella, Luigi; van der Klis, Michiel; Watts, Anna; Winter, Berend; Zane, Silvia

    2016-07-01

    LOFT-P is a mission concept for a NASA Astrophysics Probe-Class X-ray timing mission addressing questions such as: What is the state of ultradense matter? What are the effects of strong gravity on matter spiraling into black holes? It would be optimized for sub-millisecond timing of bright Galactic X-ray sources, including X-ray bursters, black hole binaries, and magnetars, to study phenomena at the natural timescales of neutron star surfaces and black hole event horizons and to measure the mass and spin of black holes. These measurements are synergistic to imaging and high-resolution spectroscopy instruments, addressing much smaller distance scales than are possible without very long baseline X-ray interferometry, and using complementary techniques to address the geometry and dynamics of emission regions. LOFT-P would have an effective area of >6 m2, more than 10x that of the highly successful Rossi X-ray Timing Explorer (RXTE). A sky monitor (2-50 keV) acts as a trigger for pointed observations, providing high-duty-cycle, high-time-resolution monitoring of the X-ray sky with 20 times the sensitivity of the RXTE All-Sky Monitor, enabling multi-wavelength and multimessenger studies. A probe-class mission concept would employ lightweight collimator technology and large-area solid-state detectors, segmented into pixels or strips, technologies which were greatly advanced during the ESA M3 Phase A study of LOFT. Given the large community interested in LOFT (>800 supporters), the scientific productivity of this mission is expected to be very high, similar to or greater than that of RXTE (about 2000 refereed publications). We describe the results of a study, recently completed by the MSFC Advanced Concepts Office, demonstrating that such a mission is feasible within a NASA probe-class mission budget.

  18. Precise large deviations of aggregate claims in a size-dependent renewal risk model with stopping time claim-number process

    Directory of Open Access Journals (Sweden)

    Shuo Zhang

    2017-04-01

    Full Text Available In this paper, we consider a size-dependent renewal risk model with a stopping-time claim-number process. In this model, we do not make any assumption on the dependence structure of claim sizes and inter-arrival times. We study large deviations of the aggregate amount of claims. For the subexponential heavy-tailed case, we obtain a precise large-deviation formula; our method relies substantially on a martingale for the structure of our models.
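
    For subexponential claims, precise large-deviation formulas reflect the single-big-jump principle: P(S_n > x) ≈ n P(X_1 > x) for large x, the aggregate exceeds a high level essentially because one claim does. A Monte Carlo sanity check with Pareto claims (an illustrative model choice, not the paper's):

```python
# Monte Carlo sketch of the single-big-jump principle behind precise large
# deviations for heavy tails: P(S_n > x) ~ n * P(X1 > x) for subexponential
# claims. Pareto claims here are an illustrative choice, not the paper's model.
import numpy as np

rng = np.random.default_rng(42)
n, alpha, x = 20, 1.5, 200.0
claims = rng.pareto(alpha, size=(200_000, n)) + 1.0  # Pareto(alpha), support >= 1
s = claims.sum(axis=1)                               # aggregate claim amounts

p_sum = (s > x).mean()        # empirical P(S_n > x)
p_one = n * x ** (-alpha)     # n * P(X1 > x) from the Pareto survival function
ratio = p_sum / p_one         # approaches 1 as x grows
```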

  19. Suppression of the Transit-Time Instability in Large-Area Electron Beam Diodes

    Science.gov (United States)

    Myers, Matthew C.; Friedman, Moshe; Swanekamp, Stephen B.; Chan, Lop-Yung; Ludeking, Larry; Sethian, John D.

    2002-12-01

    Experiment, theory, and simulation have shown that large-area electron-beam diodes are susceptible to the transit-time instability. The instability modulates the electron beam spatially and temporally, producing a wide spread in electron energy and momentum distributions. The result is gross inefficiency in beam generation and propagation. Simulations indicate that a periodic, slotted cathode structure that is loaded with resistive elements may be used to eliminate the instability. Such a cathode has been fielded on one of the two opposing 60 cm × 200 cm diodes on the NIKE KrF laser at the Naval Research Laboratory. These diodes typically deliver 600 kV, 500 kA, 250 ns electron beams to the laser cell in an external magnetic field of 0.2 T. We conclude that the slotted cathode suppressed the transit-time instability such that the RF power was reduced by a factor of 9 and that electron transmission efficiency into the laser gas was improved by more than 50%.

  20. Suppression of the transit-time instability in large-area electron beam diodes

    International Nuclear Information System (INIS)

    Myers, Matthew C.; Friedman, Moshe; Sethian, John D.; Swanekamp, Stephen B.; Chan, L.-Y.; Ludeking, Larry

    2002-01-01

Experiment, theory, and simulation have shown that large-area electron-beam diodes are susceptible to the transit-time instability. The instability modulates the electron beam spatially and temporally, producing a wide spread in electron energy and momentum distributions. The result is gross inefficiency in beam generation and propagation. Simulations indicate that a periodic, slotted cathode structure that is loaded with resistive elements may be used to eliminate the instability. Such a cathode has been fielded on one of the two opposing 60 cm × 200 cm diodes on the NIKE KrF laser at the Naval Research Laboratory. These diodes typically deliver 600 kV, 500 kA, 250 ns electron beams to the laser cell in an external magnetic field of 0.2 T. We conclude that the slotted cathode suppressed the transit-time instability such that the RF power was reduced by a factor of 9 and that electron transmission efficiency into the laser gas was improved by more than 50%.

  1. A new method for large time behavior of degenerate viscous Hamilton–Jacobi equations with convex Hamiltonians

    KAUST Repository

    Cagnetti, Filippo; Gomes, Diogo A.; Mitake, Hiroyoshi; Tran, Hung V.

    2015-01-01

    We investigate large-time asymptotics for viscous Hamilton-Jacobi equations with possibly degenerate diffusion terms. We establish new results on the convergence, which are the first general ones concerning equations which are neither uniformly parabolic nor first order. Our method is based on the nonlinear adjoint method and the derivation of new estimates on long time averaging effects. It also extends to the case of weakly coupled systems.

  2. LARGE SCALE GLAZED

    DEFF Research Database (Denmark)

    Bache, Anja Margrethe

    2010-01-01

of selected existing buildings in and around Copenhagen covered with mosaic tiles, unglazed or glazed clay tiles. It is buildings which have qualities that I would like applied, perhaps transformed or, most preferably, interpreted anew, for the large glazed concrete panels I am developing. Keywords: color, light...

  3. Combined large field-of-view MRA and time-resolved MRA of the lower extremities: Impact of acquisition order on image quality

    International Nuclear Information System (INIS)

    Riffel, Philipp; Haneder, Stefan; Attenberger, Ulrike I.; Brade, Joachim; Schoenberg, Stefan O.; Michaely, Henrik J.

    2012-01-01

Purpose: Different approaches exist for hybrid MRA of the calf station. So far, the order of acquisition of the focused calf MRA and the large field-of-view MRA has not been scientifically evaluated. Therefore the aim of this study was to evaluate whether the quality of the combined large field-of-view MRA (CTM MR angiography) and time-resolved MRA with stochastic interleaved trajectories (TWIST MRA) depends on the order of acquisition of the two contrast-enhanced studies. Methods: In this retrospective study, 40 consecutive patients (mean age 68.1 ± 8.7 years, 29 male/11 female) who had undergone an MR angiographic protocol that consisted of CTM-MRA (TR/TE, 2.4/1.0 ms; 21° flip angle; isotropic resolution 1.2 mm; gadolinium dose, 0.07 mmol/kg) and TWIST-MRA (TR/TE 2.8/1.1; 20° flip angle; isotropic resolution 1.1 mm; temporal resolution 5.5 s; gadolinium dose, 0.03 mmol/kg) were included. In the first group (group 1) TWIST-MRA of the calf station was performed 1–2 min after CTM-MRA. In the second group (group 2) CTM-MRA was performed 1–2 min after TWIST-MRA of the calf station. The image quality of CTM-MRA and TWIST-MRA was evaluated by two independent radiologists in consensus according to a 4-point Likert-like rating scale assessing overall image quality on a segmental basis. Venous overlay was assessed per examination. Results: In the CTM-MRA, 1360 segments were included in the assessment of image quality. CTM-MRA was diagnostic in 95% (1289/1360) of segments. There was a significant difference (p < 0.0001) between both groups with regard to the number of segments rated as excellent and moderate. The image quality was rated as excellent in group 1 in 80% (514/640 segments) and in group 2 in 67% (432/649), respectively (p < 0.0001). In contrast, the image quality was rated as moderate in the first group in 5% (33/640) and in the second group in 19% (121/649), respectively (p < 0.0001). The venous overlay was disturbing in 10% in group 1 and 20% in group

  4. Hydrogen and methane generation from large hydraulic plant: Thermo-economic multi-level time-dependent optimization

    International Nuclear Information System (INIS)

    Rivarolo, M.; Magistri, L.; Massardo, A.F.

    2014-01-01

Highlights: • We investigate H2 and CH4 production from a very large hydraulic plant (14 GW). • We employ only “spilled energy”, not used by the hydraulic plant, for H2 production. • We consider the integration with energy taken from the grid at different prices. • We consider hydrogen conversion in chemical reactors to produce methane. • We find the plants' optimal size using a time-dependent thermo-economic approach. - Abstract: This paper investigates hydrogen and methane generation from a large hydraulic plant, using an original multilevel thermo-economic optimization approach developed by the authors. Hydrogen is produced by water electrolysis employing time-dependent hydraulic energy related to the water which is not normally used by the plant, known as “spilled water electricity”. Both the demand for spilled energy and the electrical grid load vary widely by time of year; therefore an hour-by-hour time-dependent analysis over one complete year has been carried out in order to define the optimal plant size. This time-period analysis is necessary to take into account the variability of spilled energy and electrical load profiles during the year. The hydrogen generation plant is based on 1 MWe water electrolysers fuelled with the “spilled water electricity”, when available; in the remaining periods, in order to assure a regular H2 production, the energy is taken from the electrical grid, at higher cost. To perform the production plant size optimization, two hierarchical levels have been considered over a one-year time period, in order to minimize capital and variable costs. After the optimization of the hydrogen production plant size, a further analysis is carried out with a view to converting the produced H2 into methane in a chemical reactor, starting from H2 and CO2 which is obtained with CCS plants and/or carried by ships. For this plant, the optimal electrolysers and chemical reactors system size is defined. For both of the two solutions, thermo

  5. Amplitude and rise time compensated timing optimized for large semiconductor detectors

    International Nuclear Information System (INIS)

    Kozyczkowski, J.J.; Bialkowski, J.

    1976-01-01

The ARC timing described has excellent timing properties even when using a wide energy range, e.g. from 10 keV to over 1 MeV. The detector signal from a preamplifier is accepted directly by the unit, as a timing filter amplifier with a sensitivity of 1 mV is incorporated. The adjustable rise-time rejection feature makes it possible to achieve a good prompt time spectrum with a symmetrical exponential shape down to less than 1/100 of the peak value. A complete block diagram of the unit is given together with results of extensive tests of its performance. For example, the time spectrum for (1330±20) keV of 60Co taken with a 43 cm³ Ge(Li) detector has the following parameters: fwhm = 2.2 ns, fwtm = 4.4 ns and fw(0.01)m = 7.6 ns, and for (50±10) keV of 22Na the following was obtained: fwhm = 10.8 ns, fwtm = 21.6 ns and fw(0.01)m = 34.6 ns. In another experiment with two fast plastic scintillators (NE 102A) and using a 20% dynamic energy range the following was measured: fwhm = 280 ps, fwtm = 470 ps and fw(0.01)m = 70 ps. (Auth.)

  6. Reducing process delays for real-time earthquake parameter estimation - An application of KD tree to large databases for Earthquake Early Warning

    Science.gov (United States)

    Yin, Lucy; Andrews, Jennifer; Heaton, Thomas

    2018-05-01

Earthquake parameter estimation using nearest-neighbor searching among a large database of observations can lead to reliable prediction results. However, in the real-time application of Earthquake Early Warning (EEW) systems, the accurate prediction using a large database is penalized by a significant delay in processing time. We propose to use a multidimensional binary search tree (KD tree) data structure to organize large seismic databases to reduce the processing time of the nearest-neighbor search for predictions. We evaluated the performance of the KD tree on the Gutenberg Algorithm, a database-searching algorithm for EEW. We constructed an offline test to predict peak ground motions using a database with feature sets of waveform filter-bank characteristics, and compared the results with the observed seismic parameters. We concluded that a large database provides more accurate predictions of ground motion information, such as peak ground acceleration, velocity, and displacement (PGA, PGV, PGD), than of source parameters, such as hypocenter distance. Applying the KD tree search to organize the database reduced the average search time by 85% compared with the exhaustive method, making the approach feasible for real-time implementation. The algorithm is straightforward, and the results will reduce the overall time of warning delivery for EEW.
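The speed-up rests on the standard KD-tree idea: recursively split the feature space along alternating axes, then prune subtrees that cannot contain a closer neighbor. A minimal, pure-Python sketch of that idea (illustrative only — not the Gutenberg Algorithm's actual implementation; all function names and sample points are hypothetical):

```python
import math

def dist(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def build_kdtree(points, depth=0):
    """Recursively build a KD tree from a list of k-dimensional points."""
    if not points:
        return None
    axis = depth % len(points[0])          # cycle through the axes
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2                 # median split keeps the tree balanced
    return {
        "point": points[mid],
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

def nearest(node, target, depth=0, best=None):
    """Return the stored point closest to `target`, pruning far subtrees."""
    if node is None:
        return best
    if best is None or dist(target, node["point"]) < dist(target, best):
        best = node["point"]
    axis = depth % len(target)
    diff = target[axis] - node["point"][axis]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, target, depth + 1, best)
    # Only descend the far side if the splitting plane could hide a closer point.
    if abs(diff) < dist(target, best):
        best = nearest(far, target, depth + 1, best)
    return best

# Hypothetical usage: features of past waveforms, query a new observation.
tree = build_kdtree([(1.0, 1.0), (2.0, 2.0), (5.0, 5.0), (8.0, 8.0)])
match = nearest(tree, (4.6, 4.9))
```

The pruning test (`abs(diff) < dist(target, best)`) is what replaces the exhaustive scan: whole subtrees are skipped whenever the splitting hyperplane is farther away than the best candidate found so far.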

  7. Extending flood forecasting lead time in a large watershed by coupling WRF QPF with a distributed hydrological model

    Science.gov (United States)

    Li, Ji; Chen, Yangbo; Wang, Huanyu; Qin, Jianming; Li, Jie; Chiao, Sen

    2017-03-01

Long lead time flood forecasting is very important for large watershed flood mitigation as it provides more time for flood warning and emergency responses. The latest numerical weather forecast models can provide 1-15-day quantitative precipitation forecasting products in grid format, and coupling this product with a distributed hydrological model can produce long lead time watershed flood forecasting products. This paper studied the feasibility of coupling the Liuxihe model with the Weather Research and Forecasting quantitative precipitation forecast (WRF QPF) for large watershed flood forecasting in southern China. The WRF QPF products have three lead times, namely 24, 48 and 72 h, with a grid resolution of 20 km × 20 km. The Liuxihe model is set up with freely downloadable terrain properties; the model parameters were previously optimized with rain gauge observed precipitation, and re-optimized with the WRF QPF. Results show that the WRF QPF is biased relative to the rain gauge precipitation, and a post-processing method is proposed for the WRF QPF products, which improves the flood forecasting capability. With model parameter re-optimization, the model's performance also improves. This suggests that the model parameters should be optimized with the QPF, not the rain gauge precipitation. As lead time increases, the accuracy of the WRF QPF decreases, as does the flood forecasting capability. Flood forecasting products produced by coupling the Liuxihe model with the WRF QPF provide a good reference for large watershed flood warning due to their long lead time and rational results.

  8. Gamma Ray Bursts as Cosmological Probes with EXIST

    Science.gov (United States)

    Hartmann, Dieter; EXIST Team

    2006-12-01

The EXIST mission, studied as a Black Hole Finder Probe within NASA's Beyond Einstein Program, would, in its current design, trigger on 1000 Gamma Ray Bursts (GRBs) per year (Grindlay et al., this meeting). The redshift distribution of these GRBs, using results from Swift as a guide, would probe the z > 7 epoch at an event rate of > 50 per year. These bursts trace early cosmic star formation history, point to a first generation of stellar objects that reionize the universe, and provide bright beacons for absorption line studies with ground- and space-based observatories. We discuss how EXIST, in conjunction with other space missions and future large survey programs such as LSST, can be utilized to advance our understanding of cosmic chemical evolution, the structure and evolution of the baryonic cosmic web, and the formation of stars in low metallicity environments.

  9. The landscape of existing models for high-throughput exposure assessment

    DEFF Research Database (Denmark)

    Jolliet, O.; Fantke, Peter; Huang, L.

    2017-01-01

and ability to easily handle large datasets. For building materials, a series of diffusion-based models have been developed to predict chemical emissions from building materials to indoor air, but existing models require complex analytical or numerical solutions, which are not suitable for LCA or HTS...... applications. Thus, existing model solutions needed to be simplified for application in LCA and HTS, and a parsimonious model has been developed by Huang et al. (2017) to address this need. For SVOCs, simplified solutions do exist, assuming constant SVOC concentrations in building materials and steady-state...... for skin permeation and volatilization as competing processes and that requires a limited number of readily available physiochemical properties would be suitable for LCA and HTS purposes. Thus, the multi-pathway exposure model for chemicals in cosmetics developed by Ernstoff et al. constitutes a suitable...

  10. Time Discounting and Credit Market Access in a Large-Scale Cash Transfer Programme

    Science.gov (United States)

    Handa, Sudhanshu; Martorano, Bruno; Halpern, Carolyn; Pettifor, Audrey; Thirumurthy, Harsha

    2017-01-01

Summary: Time discounting is thought to influence decision-making in almost every sphere of life, including personal finances, diet, exercise and sexual behavior. In this article we provide evidence on whether a national poverty alleviation program in Kenya can affect inter-temporal decisions. We administered a preferences module as part of a large-scale impact evaluation of the Kenyan Government's Cash Transfer for Orphans and Vulnerable Children. Four years into the program we find that individuals in the treatment group are only marginally more likely to wait for future money, due in part to the erosion of the value of the transfer by inflation. However, among the poorest households, for whom the value of the transfer is still relatively large, we find significant program effects on the propensity to wait. We also find strong program effects among those who have access to credit markets, though the program itself does not improve access to credit. PMID:28260842

  11. Time Discounting and Credit Market Access in a Large-Scale Cash Transfer Programme.

    Science.gov (United States)

    Handa, Sudhanshu; Martorano, Bruno; Halpern, Carolyn; Pettifor, Audrey; Thirumurthy, Harsha

    2016-06-01

    Time discounting is thought to influence decision-making in almost every sphere of life, including personal finances, diet, exercise and sexual behavior. In this article we provide evidence on whether a national poverty alleviation program in Kenya can affect inter-temporal decisions. We administered a preferences module as part of a large-scale impact evaluation of the Kenyan Government's Cash Transfer for Orphans and Vulnerable Children. Four years into the program we find that individuals in the treatment group are only marginally more likely to wait for future money, due in part to the erosion of the value of the transfer by inflation. However among the poorest households for whom the value of transfer is still relatively large we find significant program effects on the propensity to wait. We also find strong program effects among those who have access to credit markets though the program itself does not improve access to credit.

  12. Interference Cancellation Using Replica Signal for HTRCI-MIMO/OFDM in Time-Variant Large Delay Spread Longer Than Guard Interval

    Directory of Open Access Journals (Sweden)

    Yuta Ida

    2012-01-01

Full Text Available Orthogonal frequency division multiplexing (OFDM) and multiple-input multiple-output (MIMO) are generally known as effective techniques for high-data-rate services. In MIMO/OFDM systems, channel estimation (CE) is very important for obtaining accurate channel state information (CSI). However, since orthogonal pilot-based CE requires a large number of pilot symbols, the total transmission rate is degraded. To mitigate this problem, high time resolution carrier interferometry (HTRCI) for MIMO/OFDM has been proposed. In wireless communication systems, if the maximum delay spread is longer than the guard interval (GI), the system performance is significantly degraded due to intersymbol interference (ISI) and intercarrier interference (ICI). However, the conventional HTRCI-MIMO/OFDM does not consider the case of a time-variant large delay spread longer than the GI. In this paper, we propose ISI and ICI compensation methods for HTRCI-MIMO/OFDM in a time-variant large delay spread longer than the GI.

  13. Barriers to installing innovative energy systems in existing housing stock identified

    NARCIS (Netherlands)

    Hoppe, Thomas

    2013-01-01

    Several barriers to upgrading existing social housing with innovative energy systems (IES) have been identified by a study of eight large-scale renovation projects in the Netherlands. These include a lack of trust between stakeholders, opposition from tenants on grounds of increased costs or delays,

  14. Existence and global exponential stability of periodic solution of memristor-based BAM neural networks with time-varying delays.

    Science.gov (United States)

    Li, Hongfei; Jiang, Haijun; Hu, Cheng

    2016-03-01

In this paper, we investigate a class of memristor-based BAM neural networks with time-varying delays. Under the framework of Filippov solutions, boundedness and ultimate boundedness of solutions of memristor-based BAM neural networks are guaranteed by the chain rule and inequality techniques. Moreover, a new method involving a Yoshizawa-like theorem is favorably employed to establish the existence of a periodic solution. By applying the theory of set-valued maps and functional differential inclusions, an available Lyapunov functional and some new testable algebraic criteria are derived for ensuring the uniqueness and global exponential stability of the periodic solution of memristor-based BAM neural networks. The obtained results expand and complement some previous work on memristor-based BAM neural networks. Finally, a numerical example is provided to show the applicability and effectiveness of our theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. On the Existence of Solutions for Stationary Mean-Field Games with Congestion

    KAUST Repository

    Evangelista, David; Gomes, Diogo A.

    2017-01-01

    Mean-field games (MFGs) are models of large populations of rational agents who seek to optimize an objective function that takes into account their location and the distribution of the remaining agents. Here, we consider stationary MFGs with congestion and prove the existence of stationary solutions. Because moving in congested areas is difficult, agents prefer to move in non-congested areas. As a consequence, the model becomes singular near the zero density. The existence of stationary solutions was previously obtained for MFGs with quadratic Hamiltonians thanks to a very particular identity. Here, we develop robust estimates that give the existence of a solution for general subquadratic Hamiltonians.

  16. On the Existence of Solutions for Stationary Mean-Field Games with Congestion

    KAUST Repository

    Evangelista, David

    2017-09-11

    Mean-field games (MFGs) are models of large populations of rational agents who seek to optimize an objective function that takes into account their location and the distribution of the remaining agents. Here, we consider stationary MFGs with congestion and prove the existence of stationary solutions. Because moving in congested areas is difficult, agents prefer to move in non-congested areas. As a consequence, the model becomes singular near the zero density. The existence of stationary solutions was previously obtained for MFGs with quadratic Hamiltonians thanks to a very particular identity. Here, we develop robust estimates that give the existence of a solution for general subquadratic Hamiltonians.

  17. A large set of potential past, present and future hydro-meteorological time series for the UK

    Directory of Open Access Journals (Sweden)

    B. P. Guillod

    2018-01-01

Full Text Available Hydro-meteorological extremes such as drought and heavy precipitation can have large impacts on society and the economy. With potentially increasing risks associated with such events due to climate change, properly assessing the associated impacts and uncertainties is critical for adequate adaptation. However, the application of risk-based approaches often requires large sets of extreme events, which are not commonly available. Here, we present such a large set of hydro-meteorological time series for recent past and future conditions for the United Kingdom based on weather@home 2, a modelling framework consisting of a global climate model (GCM) driven by observed or projected sea surface temperature (SST) and sea ice, which is downscaled to 25 km over the European domain by a regional climate model (RCM). Sets of 100 time series are generated for each of (i) a historical baseline (1900–2006), (ii) five near-future scenarios (2020–2049) and (iii) five far-future scenarios (2070–2099). The five scenarios in each future time slice all follow the Representative Concentration Pathway 8.5 (RCP8.5) and sample the range of sea surface temperature and sea ice changes from CMIP5 (Coupled Model Intercomparison Project Phase 5) models. Validation of the historical baseline highlights good performance for temperature and potential evaporation, but substantial seasonal biases in mean precipitation, which are corrected using a linear approach. For extremes in low precipitation over a long accumulation period (> 3 months) and shorter-duration high precipitation (1–30 days), the time series generally represent past statistics well. Future projections show small precipitation increases in winter but large decreases in summer on average, leading to an overall drying, consistent with the most recent UK Climate Projections (UKCP09) but larger in magnitude than the latter. Both drought and high-precipitation events are projected to increase in frequency and
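The abstract does not spell out its linear correction; a common linear choice for precipitation is monthly linear scaling, where each modelled value is multiplied by the ratio of observed to modelled monthly means. A minimal sketch under that assumption (the function names and the `(month, value)` series layout are hypothetical):

```python
def linear_scaling_factors(model, obs):
    """Per-month multiplicative factors: observed mean / modelled mean.

    `model` and `obs` are lists of (month, precipitation) pairs
    covering the calibration period.
    """
    factors = {}
    for month in sorted({mo for mo, _ in obs}):
        m = [v for mo, v in model if mo == month]
        o = [v for mo, v in obs if mo == month]
        factors[month] = (sum(o) / len(o)) / (sum(m) / len(m))
    return factors

def bias_correct(series, factors):
    """Apply the monthly factors to a (month, value) precipitation series."""
    return [(mo, v * factors[mo]) for mo, v in series]

# Hypothetical example: model too dry in January, too wet in July.
model = [(1, 2.0), (1, 2.0), (7, 10.0)]
obs = [(1, 3.0), (1, 3.0), (7, 5.0)]
corrected = bias_correct(model, linear_scaling_factors(model, obs))
```

By construction the corrected series reproduces the observed monthly means over the calibration period, which is exactly the property a mean-bias correction targets; it leaves higher moments (variability, extremes) to the model.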

  18. Obliquely propagating large amplitude solitary waves in charge neutral plasmas

    Directory of Open Access Journals (Sweden)

    F. Verheest

    2007-01-01

Full Text Available This paper deals in a consistent way with the implications, for the existence of large amplitude stationary structures in general plasmas, of assuming strict charge neutrality between electrons and ions. With the limit of pair plasmas in mind, electron inertia is retained. Combining in a fluid dynamic treatment the conservation of mass, momentum and energy with strict charge neutrality has indicated that nonlinear solitary waves (such as oscillitons) cannot exist in electron-ion plasmas, at any angle of propagation with respect to the static magnetic field. Specifically for oblique propagation, the proof has turned out to be more involved than for parallel or perpendicular modes. The only exception is pair plasmas, which are able to support large charge neutral solitons, owing to the high degree of symmetry naturally inherent in such plasmas. The nonexistence, in particular, of oscillitons is attributed to the breakdown of the plasma approximation in dealing with Poisson's law, rather than to relativistic effects. It is hoped that future space observations will allow discrimination between oscillitons and large wave packets, by focusing on the time variability (or not) of the phase, since the amplitude or envelope graphs look very similar.

  19. Buffer provisioning for large-scale data-acquisition systems

    CERN Document Server

    AUTHOR|(SzGeCERN)756497; The ATLAS collaboration; Garcia Garcia, Pedro Javier; Froening, Holger; Vandelli, Wainer

    2018-01-01

The data acquisition system of the ATLAS experiment, a major experiment of the Large Hadron Collider (LHC) at CERN, will go through a major upgrade in the next decade. The upgrade is driven by experimental physics requirements, calling for increased data rates on the order of 6 TB/s. By contrast, the data rate of the existing system is 160 GB/s. Among the changes in the upgraded system will be a very large buffer with a projected size on the order of 70 PB. The buffer's role will be the decoupling of data production from on-line data processing, storing data for periods of up to 24 hours until it can be analyzed by the event processing system. The larger buffer will allow a new data recording strategy, providing additional margins to handle variable data rates. At the same time it will provide sensible trade-offs between buffering space and on-line processing capabilities. This compromise between two resources will be possible since the data production cycle includes time periods where the experiment will not produ...

  20. Parallel Motion Simulation of Large-Scale Real-Time Crowd in a Hierarchical Environmental Model

    Directory of Open Access Journals (Sweden)

    Xin Wang

    2012-01-01

    Full Text Available This paper presents a parallel real-time crowd simulation method based on a hierarchical environmental model. A dynamical model of the complex environment should be constructed to simulate the state transition and propagation of individual motions. By modeling of a virtual environment where virtual crowds reside, we employ different parallel methods on a topological layer, a path layer and a perceptual layer. We propose a parallel motion path matching method based on the path layer and a parallel crowd simulation method based on the perceptual layer. The large-scale real-time crowd simulation becomes possible with these methods. Numerical experiments are carried out to demonstrate the methods and results.

  1. Fitness, work, and leisure-time physical activity and ischaemic heart disease and all-cause mortality among men with pre-existing cardiovascular disease

    DEFF Research Database (Denmark)

    Holtermann, Andreas; Mortensen, Ole Steen; Burr, Hermann

    2010-01-01

, smoking, alcohol consumption, body mass index, diabetes, hypertension, physical work demands, leisure-time physical activity, and social class - showed a substantially reduced risk for IHD mortality among employees who were intermediately fit [VO2Max range 25-36; hazard ratio (HR) 0.54, 95% confidence......OBJECTIVE: Our aim was to study the relative impact of physical fitness, physical demands at work, and physical activity during leisure time on ischaemic heart disease (IHD) and all-cause mortality among employed men with pre-existing cardiovascular disease (CVD). METHOD: We carried out a 30-year...... physical work demands and leisure-time physical activity using a self-reported questionnaire. RESULTS: Among 274 men with a history of CVD, 93 men died from IHD. Using male employees with a history of CVD and a low level of fitness as the reference group, our Cox analyses - adjusted for age, blood pressure

  2. Time-out/Time-in

    DEFF Research Database (Denmark)

    Bødker, Mads; Gimpel, Gregory; Hedman, Jonas

    2014-01-01

    time-in and time-out use. Time-in technology use coincides and co-exists within the flow of ordinary life, while time-out use entails ‘taking time out’ of everyday life to accomplish a circumscribed task or engage reflectively in a particular experience. We apply a theoretically informed grounded...

  3. Melodic pattern extraction in large collections of music recordings using time series mining techniques

    OpenAIRE

    Gulati, Sankalp; Serrà, Joan; Ishwar, Vignesh; Serra, Xavier

    2014-01-01

    We demonstrate a data-driven unsupervised approach for the discovery of melodic patterns in large collections of Indian art music recordings. The approach first works on single recordings and subsequently searches in the entire music collection. Melodic similarity is based on dynamic time warping. The task being computationally intensive, lower bounding and early abandoning techniques are applied during distance computation. Our dataset comprises 365 hours of music, containing 1,764 audio rec...
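The similarity machinery described (dynamic time warping accelerated by lower bounding) can be sketched generically; this is not the authors' code, and the band width `r` (a Sakoe–Chiba-style warping constraint that the lower bound assumes) is an illustrative parameter:

```python
def dtw(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance, L1 local cost."""
    INF = float("inf")
    n, m = len(a), len(b)
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping steps.
            cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
    return cost[n][m]

def lb_keogh(query, candidate, r):
    """LB_Keogh-style lower bound: distance from the candidate to the
    query's upper/lower envelope within a warping window of width r."""
    total = 0.0
    for j, c in enumerate(candidate):
        window = query[max(0, j - r): j + r + 1]
        lo, hi = min(window), max(window)
        if c > hi:
            total += c - hi
        elif c < lo:
            total += lo - c
    return total
```

In a search loop, the cheap `lb_keogh` is computed first and the expensive `dtw` is evaluated only when the bound is below the best distance found so far (early abandoning); candidates whose bound already exceeds it are discarded without any DTW computation.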

  4. Global existence proof for relativistic Boltzmann equation

    International Nuclear Information System (INIS)

    Dudynski, M.; Ekiel-Jezewska, M.L.

    1992-01-01

The existence and causality of solutions to the relativistic Boltzmann equation in L^1 and in L^1_loc are proved. The solutions are shown to satisfy physically natural a priori bounds, time-independent in L^1. The results rely upon new techniques developed for the nonrelativistic Boltzmann equation by DiPerna and Lions.

  5. Development of sub-nanosecond, high gain structures for time-of-flight ring imaging in large area detectors

    International Nuclear Information System (INIS)

    Wetstein, Matthew

    2011-01-01

Microchannel plate photomultiplier tubes (MCPs) are compact imaging detectors, capable of micron-level spatial imaging and timing measurements with resolutions below 10 ps. Conventional fabrication methods are too expensive for making MCPs in the quantities and sizes necessary for typical HEP applications, such as time-of-flight ring-imaging Cherenkov detectors (TOF-RICH) or water-Cherenkov-based neutrino experiments. The Large Area Picosecond Photodetector Collaboration (LAPPD) is developing new, commercializable methods to fabricate 20 cm² thin planar MCPs at costs comparable to those of traditional photomultiplier tubes. Transmission-line readout with waveform sampling on both ends of each line allows the efficient coverage of large areas while maintaining excellent time and space resolution. Rather than fabricating channel plates from active, high-secondary-electron-emission materials, we produce plates from passive substrates and coat them using atomic layer deposition (ALD), a well-established industrial batch process. In addition to possible reductions in cost and conditioning time, this allows greater control to optimize the composition of active materials for performance. We present details of the MCP fabrication method, preliminary results from testing and characterization facilities, and possible HEP applications.

  6. From Point Clouds to Building Information Models: 3D Semi-Automatic Reconstruction of Indoors of Existing Buildings

    Directory of Open Access Journals (Sweden)

    Hélène Macher

    2017-10-01

    Full Text Available The creation of as-built Building Information Models requires the acquisition of the as-is state of existing buildings. Laser scanners are widely used to achieve this goal, since they allow the collection of information about object geometry in the form of point clouds, and provide a large amount of accurate data in a very fast way and with a high level of detail. Unfortunately, the scan-to-BIM (Building Information Model) process currently remains a largely manual process which is time-consuming and error-prone. In this paper, a semi-automatic approach is presented for the 3D reconstruction of the indoors of existing buildings from point clouds. Several segmentations are performed so that point clouds corresponding to grounds, ceilings and walls are extracted. Based on these point clouds, walls and slabs of buildings are reconstructed and described in the IFC format in order to be integrated into BIM software. An assessment of the approach is proposed using two datasets. The evaluation items are the degree of automation, the transferability of the approach and the geometric quality of the results of the 3D reconstruction. Additionally, quality indexes are introduced to inspect the results in order to be able to detect potential errors of reconstruction.
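    One step of such a pipeline, separating ground, ceiling and wall candidates, can be sketched from the vertical coordinate alone. This is a deliberately simplified illustration under assumed thresholds, not the authors' segmentation method (which uses several successive segmentations):

```python
# Hypothetical sketch: split an indoor point cloud into ground, ceiling and
# wall candidates using only the z coordinate. The tolerance is assumed.
def segment_by_height(points, tol=0.05):
    """points: iterable of (x, y, z) tuples; returns (ground, ceiling, walls)."""
    zs = [p[2] for p in points]
    z_ground, z_ceiling = min(zs), max(zs)
    ground  = [p for p in points if p[2] - z_ground  < tol]   # near lowest z
    ceiling = [p for p in points if z_ceiling - p[2] < tol]   # near highest z
    walls   = [p for p in points
               if tol <= p[2] - z_ground and tol <= z_ceiling - p[2]]
    return ground, ceiling, walls

# Tiny synthetic "room": two floor points, two ceiling points, two wall points.
room = [(0, 0, 0.00), (1, 0, 0.01),       # floor
        (0, 1, 2.50), (1, 1, 2.49),       # ceiling
        (0.5, 0.5, 1.20), (0.2, 0.8, 1.80)]  # walls
ground, ceiling, walls = segment_by_height(room)
print(len(ground), len(ceiling), len(walls))  # 2 2 2
```

A real scan-to-BIM workflow would of course fit planes (e.g. by RANSAC) rather than thresholding raw heights, but the sketch shows why grounds and ceilings are the natural first extraction targets.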

  7. 75 FR 63259 - Standards of Performance for New Stationary Sources and Emission Guidelines for Existing Sources...

    Science.gov (United States)

    2010-10-14

    ... Standards of Performance for New Stationary Sources and Emission Guidelines for Existing Sources: Sewage... performance standards for new units and emission guidelines for existing units for specific categories of... standards and emission guidelines for large municipal waste combustion units, small municipal waste...

  8. Normal zone soliton in large composite superconductors

    International Nuclear Information System (INIS)

    Kupferman, R.; Mints, R.G.; Ben-Jacob, E.

    1992-01-01

    The study of normal zones of finite size (normal domains) in superconductors has been a continuing subject of interest in the field of applied superconductivity. It was shown that in homogeneous superconductors normal domains are always unstable, so that if a normal domain nucleates, it will either expand or shrink. While testing the stability of large cryostable composite superconductors, a new phenomenon was found: the existence of stable propagating normal solitons. The formation of these propagating domains was shown to be a result of the high Joule power generated in the superconductor during the relatively long process of current redistribution between the superconductor and the stabilizer. Theoretical studies were performed to investigate the propagation of normal domains in large composite superconductors in the cryostable regime. Huang and Eyssa performed numerical calculations simulating the diffusion of heat and current redistribution in the conductor, and showed the existence of stable propagating normal domains. They compared the velocity of normal domain propagation with the experimental data, obtaining reasonable agreement. Dresner presented an analytical method to solve this problem when the time dependence of the Joule power is given. He performed explicit calculations of the normal domain velocity assuming that the Joule power decays exponentially during the process of current redistribution. In this paper, the authors propose a system of two one-dimensional diffusion equations describing the dynamics of the temperature and current density distributions along the conductor. Numerical simulations of the equations reconfirm the existence of propagating domains in the cryostable regime, while an analytical investigation supplies an explicit formula for the velocity of the normal domain.

  9. LEP - Large Electron Positron Exhibition LEPFest 2000

    CERN Multimedia

    2000-01-01

    The Large Electron-Positron Collider (LEP) is 27 km long. Its four detectors (ALEPH, DELPHI, L3, OPAL) measure precisely what happens in the collisions of electrons and positrons. These conditions only existed in the Universe when it was about 10^-10 sec old.

  10. Space-time relationship in continuously moving table method for large FOV peripheral contrast-enhanced magnetic resonance angiography

    International Nuclear Information System (INIS)

    Sabati, M; Lauzon, M L; Frayne, R

    2003-01-01

    Data acquisition using a continuously moving table approach is a method capable of generating large field-of-view (FOV) 3D MR angiograms. However, in order to obtain venous contamination-free contrast-enhanced (CE) MR angiograms in the lower limbs, one of the major challenges is to acquire all necessary k-space data during the restricted arterial phase of the contrast agent. A preliminary investigation of the space-time relationship of continuously acquired peripheral angiography is performed in this work. Deterministic and stochastic undersampled hybrid-space (x, k_y, k_z) acquisitions are simulated for large FOV peripheral runoff studies. Initial results show the possibility of acquiring isotropic large FOV images of the entire peripheral vascular system. An optimal trade-off between the spatial and temporal sampling properties was found that produced a high-spatial-resolution peripheral CE-MR angiogram. The deterministic sampling pattern was capable of reconstructing the global structure of the peripheral arterial tree and showed slightly better global quantitative results than the stochastic patterns. Optimal stochastic sampling patterns, on the other hand, enhanced small vessels and had more favourable local quantitative results. These simulations demonstrate the complex spatial-temporal relationship when sampling large FOV peripheral runoff studies. They also suggest that more investigation is required to maximize image quality as a function of hybrid-space coverage, acquisition repetition time and sampling pattern parameters.
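    The deterministic-versus-stochastic contrast can be illustrated with a toy phase-encode picker. The pattern sizes and density weighting below are invented for illustration and are not the study's actual acquisition scheme:

```python
# Illustrative sketch (not the study's trajectory): two ways to choose which
# of n phase-encode lines to acquire when only `keep` fit in the time window.
import random

def deterministic_mask(n, keep):
    """Keep the `keep` lines closest to the k-space centre (global structure).
    Ties across the centre are broken by the lower index."""
    centre = n // 2
    order = sorted(range(n), key=lambda k: abs(k - centre))
    return sorted(order[:keep])

def stochastic_mask(n, keep, seed=0):
    """Variable-density random pick: centre lines are more likely, but some
    high-frequency lines survive (helps small-vessel detail)."""
    rng = random.Random(seed)
    centre = n // 2
    weights = [1.0 / (1.0 + abs(k - centre)) for k in range(n)]
    chosen = set()
    while len(chosen) < keep:
        chosen.add(rng.choices(range(n), weights=weights)[0])
    return sorted(chosen)

print(deterministic_mask(32, 8))  # a contiguous block around the centre
print(stochastic_mask(32, 8))     # centre-weighted but scattered
```

The deterministic mask concentrates samples at low spatial frequencies (global arterial tree), while the stochastic mask trades some of that for scattered high-frequency coverage, mirroring the global-versus-local trade-off reported above.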

  11. Cosmic ray acceleration by large scale galactic shocks

    International Nuclear Information System (INIS)

    Cesarsky, C.J.; Lagage, P.O.

    1987-01-01

    The mechanism of diffusive shock acceleration may account for the existence of galactic cosmic rays; detailed applications to stellar wind shocks and especially to supernova shocks have been developed. Existing models can usually deal with the energetics or the spectral slope, but the observed energy range of cosmic rays is not explained. Therefore it seems worthwhile to examine the effect that large-scale, long-lived galactic shocks may have on galactic cosmic rays, in the frame of the diffusive shock acceleration mechanism. Large-scale fast shocks can only be expected to exist in the galactic halo. We consider three situations where they may arise: expansion of a supernova shock in the halo, a galactic wind, and galactic infall; and discuss the possible existence of these shocks and their role in accelerating cosmic rays.

  12. Efficient Processing of Multiple DTW Queries in Time Series Databases

    DEFF Research Database (Denmark)

    Kremer, Hardy; Günnemann, Stephan; Ivanescu, Anca-Maria

    2011-01-01

    . In many of today’s applications, however, large numbers of queries arise at any given time. Existing DTW techniques do not process multiple DTW queries simultaneously, a serious limitation which slows down overall processing. In this paper, we propose an efficient processing approach for multiple DTW...... for multiple DTW queries....

  13. Existence of Torsional Solitons in a Beam Model of Suspension Bridge

    Science.gov (United States)

    Benci, Vieri; Fortunato, Donato; Gazzola, Filippo

    2017-11-01

    This paper studies the existence of solitons, namely stable solitary waves, in an idealized suspension bridge. The bridge is modeled as an unbounded degenerate plate, that is, a central beam with cross sections, and displays two degrees of freedom: the vertical displacement of the beam and the torsional angles of the cross sections. Under fairly general assumptions, we prove the existence of solitons. Under the additional assumption of large tension in the sustaining cables, we prove that these solitons have a nontrivial torsional component. This appears relevant for security since several suspension bridges collapsed due to torsional oscillations.

  14. The Software Reliability of Large Scale Integration Circuit and Very Large Scale Integration Circuit

    OpenAIRE

    Artem Ganiyev; Jan Vitasek

    2010-01-01

    This article describes an evaluation method for the faultless function of large scale integration (LSI) and very large scale integration (VLSI) circuits. The article presents a comparative analysis of the factors which determine the faultless function of integrated circuits, an analysis of existing methods, and a model for evaluating the faultless function of LSI and VLSI. The main part describes a proposed algorithm and program for the analysis of the fault rate in LSI and VLSI circuits.

  15. Existence of time-dependent density-functional theory for open electronic systems: time-dependent holographic electron density theorem.

    Science.gov (United States)

    Zheng, Xiao; Yam, ChiYung; Wang, Fan; Chen, GuanHua

    2011-08-28

    We present the time-dependent holographic electron density theorem (TD-HEDT), which lays the foundation of time-dependent density-functional theory (TDDFT) for open electronic systems. For any finite electronic system, the TD-HEDT formally establishes a one-to-one correspondence between the electron density inside any finite subsystem and the time-dependent external potential. As a result, any electronic property of an open system in principle can be determined uniquely by the electron density function inside the open region. Implications of the TD-HEDT on the practicality of TDDFT are also discussed.

  16. Shared control on lunar spacecraft teleoperation rendezvous operations with large time delay

    Science.gov (United States)

    Ya-kun, Zhang; Hai-yang, Li; Rui-xue, Huang; Jiang-hui, Liu

    2017-08-01

    Teleoperation could be used in space on-orbit servicing missions, such as object deorbiting, spacecraft approach, and automatic rendezvous and docking back-up systems. Teleoperation rendezvous and docking in lunar orbit may encounter bottlenecks due to the inherent time delay in the communication link and the limited measurement accuracy of sensors. Moreover, human intervention is unsuitable in view of the partial communication coverage problem. To solve these problems, a shared control strategy for teleoperation rendezvous and docking is detailed. The allocation of control authority in lunar orbital maneuvers involving two spacecraft in the final phase of rendezvous and docking is discussed in this paper. A predictive display model based on the relative dynamics equations is established to overcome the influence of the large time delay in the communication link. We discuss, and attempt to demonstrate via consistent ground-based simulations, the relative merits of a fully autonomous control mode (i.e., onboard computer-based), a fully manual control mode (i.e., human-driven at the ground station) and a shared control mode. The simulation experiments were conducted on a nine-degrees-of-freedom teleoperation rendezvous and docking simulation platform. Simulation results indicated that the shared control method can overcome the influence of time delay effects. In addition, the docking success probability of the shared control method was enhanced compared with the automatic and manual modes.
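    A predictive display of the kind described above propagates the last received relative state forward by the communication delay before showing it to the operator. The sketch below uses the in-plane Clohessy-Wiltshire equations, a standard relative-dynamics model for near-circular orbits; the orbital rate, delay and telemetry values are assumed for illustration and are not taken from the paper:

```python
# Hedged sketch of a predictive display step: propagate the last telemetry
# state forward by the round-trip delay using the closed-form in-plane
# Clohessy-Wiltshire (CW) solution. x = radial, y = along-track offsets.
import math

def cw_state(x0, y0, vx0, vy0, n, t):
    """Relative position/velocity of the chaser w.r.t. the target after time t,
    where n is the orbital rate of the target's (near-circular) orbit."""
    s, c = math.sin(n * t), math.cos(n * t)
    x  = (4 - 3 * c) * x0 + (s / n) * vx0 + (2 / n) * (1 - c) * vy0
    y  = 6 * (s - n * t) * x0 + y0 - (2 / n) * (1 - c) * vx0 \
         + ((4 * s - 3 * n * t) / n) * vy0
    vx = 3 * n * s * x0 + c * vx0 + 2 * s * vy0
    vy = 6 * n * (c - 1) * x0 - 2 * s * vx0 + (4 * c - 3) * vy0
    return x, y, vx, vy

n = 2 * math.pi / (2.0 * 3600)   # orbital rate for a ~2 h lunar orbit (assumed)
delay = 4.0                      # round-trip communication delay in seconds (assumed)
# last telemetry: 10 m radial offset, 100 m behind, small closing velocity
predicted = cw_state(10.0, -100.0, 0.0, 0.05, n, delay)
print(predicted)                 # the state the display shows as "now"
```

The operator (or the shared controller) then acts on the predicted state rather than the stale one, which is the essence of compensating a known, fixed delay.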

  17. Exploitation and exploration dynamics in recessionary times

    OpenAIRE

    Walrave, B.

    2012-01-01

    Firm performance largely depends on the ability to adapt to, and exploit, changes in the business environment. That is, firms should maintain ecological fitness by reconfiguring their resource base to cope with emerging threats and explore new opportunities, while at the same time exploiting existing resources. As such, firms possessing the ability to simultaneously perform exploitative and explorative initiatives are more resilient. In this respect, the performance implications of balancing ...

  18. Spatiotemporal property and predictability of large-scale human mobility

    Science.gov (United States)

    Zhang, Hai-Tao; Zhu, Tao; Fu, Dongfei; Xu, Bowen; Han, Xiao-Pu; Chen, Duxin

    2018-04-01

    Spatiotemporal characteristics of human mobility emerging from complexity on the individual scale have been extensively studied due to their application potential in human behavior prediction and recommendation, and in the control of epidemic spreading. We collect and investigate a comprehensive data set of human activities on large geographical scales, including both website browsing and mobile tower visits. Numerical results show that the degree of activity decays as a power law, indicating that human behaviors are reminiscent of the scale-free random walks known as Lévy flights. More significantly, this study suggests that human activities on large geographical scales have specific non-Markovian characteristics, such as a two-segment power-law distribution of dwelling time and high predictability. Furthermore, a scale-free mobility model with two essential ingredients, i.e., preferential return and exploration, and a Gaussian distribution assumption on the exploration tendency parameter is proposed, which outperforms existing human mobility models under scenarios of large geographical scales.
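    The two ingredients named above, exploration and preferential return, can be sketched in a few lines. The parameter values below (exploration rate and its decay exponent) are generic choices for illustration, not the ones fitted in the paper, and the Gaussian-distributed tendency parameter is omitted for brevity:

```python
# Illustrative exploration / preferential-return walker. At each step the
# walker visits a brand-new location with probability p_new (which decays as
# the repertoire of known locations grows), otherwise returns to a previously
# visited location chosen proportionally to its visit count.
import random
from collections import Counter

def simulate(steps, rho=0.6, gamma=0.21, seed=1):
    rng = random.Random(seed)
    visits = Counter({0: 1})          # location id -> visit count
    next_id = 1
    for _ in range(steps):
        s = len(visits)               # number of distinct locations so far
        p_new = rho * s ** -gamma     # exploration probability (assumed form)
        if rng.random() < p_new:
            visits[next_id] += 1      # explore a new location
            next_id += 1
        else:                         # preferential return, weighted by frequency
            locs, counts = zip(*visits.items())
            visits[rng.choices(locs, weights=counts)[0]] += 1
    return visits

v = simulate(10000)
print(len(v), v.most_common(3))  # few locations absorb most of the visits
```

Running this reproduces the qualitative signatures the model is built to capture: the number of distinct locations grows sublinearly in time, and visit counts are strongly concentrated on a handful of frequently revisited places.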

  19. Modeling of electromagnetic and thermal diffusion in a large pure aluminum stabilized superconductor under quench

    CERN Document Server

    Gavrilin, A V

    2001-01-01

    Low temperature composite superconductors stabilized with extra-large cross-section pure aluminum are currently in use for the Large Helical Device in Japan, modern big detectors such as ATLAS at CERN, and other large magnets. In these types of magnet systems, the rated average current density is not high and the peak field in the region of interest is about 2-4 T. Aluminum-stabilized superconductors result in high stability margins and relatively long quench times. Appropriate quench analyses, both for longitudinal and transverse propagation, have to take into account the rather slow diffusion of current from the superconductor into the thick aluminum stabilizer. An exact approach to modeling the current diffusion would be based on directly solving Maxwell's equations in parallel with the thermal diffusion and conduction relations. However, from a practical point of view, such an approach would be extremely time-consuming due to obvious restrictions of computation capacity. At the same time, there exist cert...

  20. Finite correlation time effects in kinematic dynamo problem

    International Nuclear Information System (INIS)

    Schekochihin, A.A.; Kulsrud, R.M.

    2000-01-01

    One-point statistics of the magnetic fluctuations in the kinematic regime, with large Prandtl number and an advecting velocity field that is not delta-correlated in time, are studied. A perturbation expansion in the ratio of the velocity correlation time to the dynamo growth time is constructed in the spirit of the Kliatskin-Tatarskii functional method and carried out to first order. The convergence properties are improved compared to the commonly used van Kampen-Terwiel method. The zeroth-order growth rate of the magnetic energy is estimated to be reduced (in three dimensions) by approximately 40%. This reduction is quite close to existing numerical results.

  1. Large-scale and Long-duration Simulation of a Multi-stage Eruptive Solar Event

    Science.gov (United States)

    Jiang, chaowei; Hu, Qiang; Wu, S. T.

    2015-04-01

    We employ a data-driven 3D MHD active region evolution model by using the Conservation Element and Solution Element (CESE) numerical method. This newly developed model retains the full MHD effects, allowing time-dependent boundary conditions and time evolution studies. The time-dependent simulation is driven by measured vector magnetograms and the method of MHD characteristics on the bottom boundary. We have applied the model to investigate the coronal magnetic field evolution of AR11283, which was characterized by a pre-existing sigmoid structure in the core region and multiple eruptions on both relatively small and large scales. We have succeeded in producing the core magnetic field structure and the subsequent eruptions of flux-rope structures (see https://dl.dropboxusercontent.com/u/96898685/large.mp4 for an animation) as the measured vector magnetograms on the bottom boundary evolve in time with constant flux emergence. The whole process, lasting for about an hour in real time, compares well with the corresponding SDO/AIA and coronagraph imaging observations. From these results, we show the capability of the model, which is largely data-driven, to simulate complex, topological, and highly dynamic active region evolutions. (We acknowledge partial support of NSF grants AGS 1153323 and AGS 1062050, and data support from SDO/HMI and AIA teams).

  2. Extra-large letter spacing improves reading in dyslexia

    Science.gov (United States)

    Zorzi, Marco; Barbiero, Chiara; Facoetti, Andrea; Lonciari, Isabella; Carrozzi, Marco; Montico, Marcella; Bravar, Laura; George, Florence; Pech-Georgel, Catherine; Ziegler, Johannes C.

    2012-01-01

    Although the causes of dyslexia are still debated, all researchers agree that the main challenge is to find ways that allow a child with dyslexia to read more words in less time, because reading more is undisputedly the most efficient intervention for dyslexia. Sophisticated training programs exist, but they typically target the component skills of reading, such as phonological awareness. After the component skills have improved, the main challenge remains (that is, reading deficits must be treated by reading more—a vicious circle for a dyslexic child). Here, we show that a simple manipulation of letter spacing substantially improved text reading performance on the fly (without any training) in a large, unselected sample of Italian and French dyslexic children. Extra-large letter spacing helps reading, because dyslexics are abnormally affected by crowding, a perceptual phenomenon with detrimental effects on letter recognition that is modulated by the spacing between letters. Extra-large letter spacing may help to break the vicious circle by rendering the reading material more easily accessible. PMID:22665803

  3. Retrofitting adjustable speed drives for large induction motors

    International Nuclear Information System (INIS)

    Wuestefeld, M.R.; Merriam, C.H.; Porter, N.S.

    2004-01-01

    Adjustable speed drives (ASDs) are used in many power plants to control process flow by varying the speed of synchronous and induction motors. In applications where the flow requirements vary significantly, ASDs reduce energy and maintenance requirements when compared with drag valves, dampers or other methods to control flow. Until recently, high horsepower ASDs were not available for induction motors. However, advances in power electronics technology have demonstrated the reliability and cost effectiveness of ASDs for large horsepower induction motors. Emphasis on reducing operation and maintenance costs and increasing the capacity factor of nuclear power plants has led some utilities to consider replacing flow control devices in systems powered by large induction motors with ASDs. ASDs provide a high degree of reliability and significant energy savings in situations where full flow operation is not needed for a substantial part of the time. This paper describes the basic adjustable speed drive technologies available for large induction motor applications, ASD operating experience and retrofitting ASDs to replace the existing GE Boiling Water Reactor recirculation flow control system

  4. Real-world-time simulation of memory consolidation in a large-scale cerebellar model

    Directory of Open Access Journals (Sweden)

    Masato eGosui

    2016-03-01

    Full Text Available We report development of a large-scale spiking network model of the cerebellum composed of more than 1 million neurons. The model is implemented on graphics processing units (GPUs), which are dedicated hardware for parallel computing. Using 4 GPUs simultaneously, we achieve realtime simulation, in which computer simulation of cerebellar activity for 1 sec completes within 1 sec in the real-world time, with temporal resolution of 1 msec. This allows us to carry out a very long-term computer simulation of cerebellar activity in a practical time with millisecond temporal resolution. Using the model, we carry out computer simulation of long-term gain adaptation of optokinetic response (OKR) eye movements for 5 days, aimed to study the neural mechanisms of posttraining memory consolidation. The simulation results are consistent with animal experiments and our theory of posttraining memory consolidation. These results suggest that realtime computing provides a useful means to study a very slow neural process such as memory consolidation in the brain.

  5. Incremental fuzzy C medoids clustering of time series data using dynamic time warping distance.

    Science.gov (United States)

    Liu, Yongli; Chen, Jingli; Wu, Shuai; Liu, Zhizhong; Chao, Hao

    2018-01-01

    Clustering time series data is of great significance since it can extract meaningful statistics and other characteristics. Especially in biomedical engineering, outstanding clustering algorithms for time series may help improve people's health. Considering the scale and time shifts of time series data, in this paper we introduce two incremental fuzzy clustering algorithms based on a Dynamic Time Warping (DTW) distance. By adopting Single-Pass and Online patterns, our algorithms can handle large-scale time series data by splitting it into a set of chunks which are processed sequentially. Besides, our algorithms use DTW to measure the distance between pairs of time series, encouraging higher clustering accuracy because DTW determines an optimal match between any two time series by stretching or compressing segments of temporal data. Our new algorithms are compared to existing prominent incremental fuzzy clustering algorithms on 12 benchmark time series datasets. The experimental results show that the proposed approaches yield high-quality clusters and outperform all the competitors in terms of clustering accuracy.
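    The pairwise measure these algorithms build on can be sketched directly. This is the generic textbook DTW recurrence, not the authors' code, and the example series are invented:

```python
# Minimal dynamic time warping (DTW) distance between two 1-D series,
# computed with the full O(len(a) * len(b)) cumulative-cost matrix.
def dtw_distance(a, b):
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j] = minimal cumulative cost of aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # match, or stretch/compress one series in time
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# Two series with the same shape but shifted in time: DTW aligns them
# perfectly, whereas a point-wise comparison would report a large distance.
x = [0, 0, 1, 2, 1, 0, 0]
y = [0, 1, 2, 1, 0, 0, 0]
print(dtw_distance(x, y))  # 0.0
```

This time-shift invariance is exactly why DTW is preferred over Euclidean distance for the clustering task described above.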

  6. Incremental fuzzy C medoids clustering of time series data using dynamic time warping distance

    Science.gov (United States)

    Chen, Jingli; Wu, Shuai; Liu, Zhizhong; Chao, Hao

    2018-01-01

    Clustering time series data is of great significance since it can extract meaningful statistics and other characteristics. Especially in biomedical engineering, outstanding clustering algorithms for time series may help improve people's health. Considering the scale and time shifts of time series data, in this paper we introduce two incremental fuzzy clustering algorithms based on a Dynamic Time Warping (DTW) distance. By adopting Single-Pass and Online patterns, our algorithms can handle large-scale time series data by splitting it into a set of chunks which are processed sequentially. Besides, our algorithms use DTW to measure the distance between pairs of time series, encouraging higher clustering accuracy because DTW determines an optimal match between any two time series by stretching or compressing segments of temporal data. Our new algorithms are compared to existing prominent incremental fuzzy clustering algorithms on 12 benchmark time series datasets. The experimental results show that the proposed approaches yield high-quality clusters and outperform all the competitors in terms of clustering accuracy. PMID:29795600

  7. Practical method of calculating time-integrated concentrations at medium and large distances

    International Nuclear Information System (INIS)

    Cagnetti, P.; Ferrara, V.

    1980-01-01

    Previous reports have covered the possibility of calculating time-integrated concentrations (TICs) for a prolonged release, based on concentration estimates for a brief release. This study proposes a simple method of evaluating concentrations in the air at medium and large distances for a brief release. It is known that the stability of the atmospheric layers close to ground level influences diffusion only over short distances. Beyond some tens of kilometers, as the pollutant cloud progressively reaches higher layers, diffusion is affected by factors other than the stability at ground level, such as wind shear at intermediate distances and the divergence and rotational motion of air masses towards the upper limit of the mesoscale and on the synoptic scale. Using the data available in the literature, expressions for sigma_y and sigma_z are proposed for transfer times corresponding to distances of up to several thousand kilometres, for two initial diffusion situations (up to distances of 10-20 km), characterized by stable and neutral conditions respectively. Using this method, simple hand calculations can be made for any problem relating to the diffusion of radioactive pollutants over long distances.
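    The kind of hand calculation the method enables can be sketched with a standard Gaussian plume TIC formula. The power-law coefficients and exponents below are placeholders for illustration, not the report's proposed sigma expressions:

```python
# Hedged sketch: ground-level, plume-axis time-integrated concentration (TIC)
# for a brief release, with sigma_y(t), sigma_z(t) given as power laws in the
# travel time t. All coefficients here are illustrative assumptions.
import math

def sigma_y(t):
    """Horizontal spread [m] as an assumed power law of travel time [s]."""
    return 0.5 * t ** 0.9

def sigma_z(t):
    """Vertical spread [m], assumed power law capped by a mixing-layer depth."""
    return min(0.2 * t ** 0.8, 1000.0)

def tic_centerline(Q, u, t, h=0.0):
    """TIC [Bq s/m^3] for a brief release of Q [Bq] advected at wind speed
    u [m/s], evaluated at travel time t [s], effective release height h [m]."""
    sy, sz = sigma_y(t), sigma_z(t)
    return (Q / (math.pi * sy * sz * u)) * math.exp(-h * h / (2.0 * sz * sz))

# Example: ~100 km downwind at u = 5 m/s corresponds to t = 2e4 s.
print(tic_centerline(Q=1e12, u=5.0, t=2.0e4))
```

The TIC falls off with travel time as the sigmas grow, and an elevated release (h > 0) only lowers it further, which is the qualitative behaviour such hand estimates are meant to capture.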

  8. One size does not fit all: a qualitative content analysis of the importance of existing quality improvement capacity in the implementation of Releasing Time to Care: the Productive Ward™ in Saskatchewan, Canada.

    Science.gov (United States)

    Hamilton, Jessica; Verrall, Tanya; Maben, Jill; Griffiths, Peter; Avis, Kyla; Baker, G Ross; Teare, Gary

    2014-12-19

    Releasing Time to Care: The Productive Ward™ (RTC) is a method for conducting continuous quality improvement (QI). The Saskatchewan Ministry of Health mandated its implementation in Saskatchewan, Canada between 2008 and 2012. Subsequently, a research team was developed to evaluate its impact on the nursing unit environment. We sought to explore the influence of the unit's existing QI capacity on their ability to engage with RTC as a program for continuous QI. We conducted interviews with staff from 8 nursing units and asked them to speak about their experience doing RTC. Using qualitative content analysis, and guided by the Organizing for Quality framework, we describe the existing QI capacity and impact of RTC on the unit environment. The results focus on 2 units chosen to highlight extreme variation in existing QI capacity. Unit B was characterized by a strong existing environment. RTC was implemented in an environment with a motivated manager and collaborative culture. Aided by the structural support provided by the organization, the QI capacity on this unit was strengthened through RTC. Staff recognized the potential of using the RTC processes to support QI work. Staff on unit E did not have the same experience with RTC. Like unit B, they had similar structural supports provided by their organization but they did not have the same existing cultural or political environment to facilitate the implementation of RTC. They did not have internal motivation and felt they were only doing RTC because they had to. Though they had some success with RTC activities, the staff did not have the same understanding of the methods that RTC could provide for continuous QI work. RTC has the potential to be a strong tool for engaging units to do QI. This occurs best when RTC is implemented in a supporting environment. One size does not fit all and administrative bodies must consider the unique context of each environment prior to implementing large-scale QI projects. Use of an

  9. Thermal motion in proteins: Large effects on the time-averaged interaction energies

    International Nuclear Information System (INIS)

    Goethe, Martin; Rubi, J. Miguel; Fita, Ignacio

    2016-01-01

    As a consequence of thermal motion, inter-atomic distances in proteins fluctuate strongly around their average values, and hence the interaction energies (i.e., the pair-potentials evaluated at the fluctuating distances) are not constant in time but exhibit pronounced fluctuations. As a result, time-averaged interaction energies generally do not coincide with the energy values obtained by evaluating the pair-potentials at the average distances. More precisely, time-averaged interaction energies typically behave more smoothly as a function of the average distance than the corresponding pair-potentials. This averaging effect is referred to as the thermal smoothing effect. Here, we estimate the strength of the thermal smoothing effect on the Lennard-Jones pair-potential for globular proteins at ambient conditions using x-ray diffraction and simulation data of a representative set of proteins. For specific atom species, we find a significant smoothing effect where the time-averaged interaction energy of a single atom pair can differ by various tens of cal/mol from the Lennard-Jones potential at the average distance. Importantly, we observe a dependence of the effect on the local environment of the involved atoms. The effect is typically weaker for bulky backbone atoms in beta sheets than for side-chain atoms belonging to other secondary structures on the surface of the protein. The results of this work have important practical implications for protein software relying on free energy expressions. We show that the accuracy of free energy expressions can be increased considerably by introducing environment-specific Lennard-Jones parameters accounting for the fact that the typical thermal motion of protein atoms depends strongly on their local environment.
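    The central observation, that the time average of the pair-potential differs from the potential at the average distance, is easy to reproduce numerically. The Lennard-Jones parameters and fluctuation amplitude below are illustrative assumptions, not values from the paper:

```python
# Hypothetical numeric illustration of the thermal smoothing effect:
# <V_LJ(r)> averaged over Gaussian distance fluctuations differs from
# V_LJ(<r>). Parameters (eps, sigma, r_mean, r_std) are assumed.
import math, random

def lj(r, eps=0.2, sigma=3.4):
    """Lennard-Jones pair-potential; eps ~ kcal/mol, sigma ~ Angstrom (assumed)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

random.seed(0)
r_mean, r_std = 4.0, 0.3   # average distance and thermal fluctuation (assumed)
samples = [random.gauss(r_mean, r_std) for _ in range(200000)]

v_avg = sum(lj(r) for r in samples) / len(samples)  # time-averaged energy
v_at_mean = lj(r_mean)                              # potential at the mean distance
print(v_at_mean, v_avg)  # the two differ noticeably
```

Because the repulsive wall rises much faster than the attractive tail decays, the fluctuation-averaged energy sits above the potential evaluated at the mean distance, i.e. the averaged curve is flatter, which is exactly the smoothing described above.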

  10. Thermal motion in proteins: Large effects on the time-averaged interaction energies

    Energy Technology Data Exchange (ETDEWEB)

    Goethe, Martin, E-mail: martingoethe@ub.edu; Rubi, J. Miguel [Departament de Física Fonamental, Universitat de Barcelona, Martí i Franquès 1, 08028 Barcelona (Spain); Fita, Ignacio [Institut de Biologia Molecular de Barcelona, Baldiri Reixac 10, 08028 Barcelona (Spain)

    2016-03-15

    As a consequence of thermal motion, inter-atomic distances in proteins fluctuate strongly around their average values, and hence the interaction energies (i.e., the pair-potentials evaluated at the fluctuating distances) are not constant in time but exhibit pronounced fluctuations. As a result, time-averaged interaction energies generally do not coincide with the energy values obtained by evaluating the pair-potentials at the average distances. More precisely, time-averaged interaction energies typically behave more smoothly as a function of the average distance than the corresponding pair-potentials. This averaging effect is referred to as the thermal smoothing effect. Here, we estimate the strength of the thermal smoothing effect on the Lennard-Jones pair-potential for globular proteins at ambient conditions using x-ray diffraction and simulation data of a representative set of proteins. For specific atom species, we find a significant smoothing effect where the time-averaged interaction energy of a single atom pair can differ by various tens of cal/mol from the Lennard-Jones potential at the average distance. Importantly, we observe a dependence of the effect on the local environment of the involved atoms. The effect is typically weaker for bulky backbone atoms in beta sheets than for side-chain atoms belonging to other secondary structures on the surface of the protein. The results of this work have important practical implications for protein software relying on free energy expressions. We show that the accuracy of free energy expressions can be increased considerably by introducing environment-specific Lennard-Jones parameters accounting for the fact that the typical thermal motion of protein atoms depends strongly on their local environment.

  11. Global existence and exponential growth for a viscoelastic wave equation with dynamic boundary conditions

    KAUST Repository

    Gerbi, Stéphane; Said-Houari, Belkacem

    2013-01-01

    The goal of this work is to study a model of the wave equation with dynamic boundary conditions and a viscoelastic term. First, applying the Faedo-Galerkin method combined with the fixed point theorem, we show the existence and uniqueness of a local-in-time solution. Second, we show that under some restrictions on the initial data, the solution continues to exist globally in time. On the other hand, if the interior source dominates the boundary damping, then the solution is unbounded and grows as an exponential function. In addition, in the absence of the strong damping, the solution ceases to exist and blows up in finite time.

  12. Global existence and exponential growth for a viscoelastic wave equation with dynamic boundary conditions

    KAUST Repository

    Gerbi, Stéphane

    2013-01-15

    The goal of this work is to study a model of the wave equation with dynamic boundary conditions and a viscoelastic term. First, applying the Faedo-Galerkin method combined with the fixed point theorem, we show the existence and uniqueness of a local-in-time solution. Second, we show that under some restrictions on the initial data, the solution continues to exist globally in time. On the other hand, if the interior source dominates the boundary damping, then the solution is unbounded and grows as an exponential function. In addition, in the absence of the strong damping, the solution ceases to exist and blows up in finite time.

  13. Formal Verification of User-Level Real-Time Property Patterns

    OpenAIRE

    Ge , Ning; Pantel , Marc; Dal Zilio , Silvano

    2017-01-01

    To ease the expression of real-time requirements, Dwyer, and then Konrad, studied a large collection of existing systems in order to identify a set of real-time property patterns covering most of the useful use cases. The goal was to provide a set of reusable patterns that system designers can instantiate to express requirements instead of using complex temporal logic formulas. A limitation of this approach is that the choice of patterns is more oriented towards expres...

  14. Cone-Beam Computed Tomography–Guided Positioning of Laryngeal Cancer Patients with Large Interfraction Time Trends in Setup and Nonrigid Anatomy Variations

    International Nuclear Information System (INIS)

    Gangsaas, Anne; Astreinidou, Eleftheria; Quint, Sandra; Levendag, Peter C.; Heijmen, Ben

    2013-01-01

    Purpose: To investigate interfraction setup variations of the primary tumor, elective nodes, and vertebrae in laryngeal cancer patients and to validate protocols for cone beam computed tomography (CBCT)-guided correction. Methods and Materials: For 30 patients, CBCT-measured displacements in fractionated treatments were used to investigate population setup errors and to simulate residual setup errors for the no action level (NAL) offline protocol, the extended NAL (eNAL) protocol, and daily CBCT acquisition with online analysis and repositioning. Results: Without corrections, 12 of 26 patients treated with radical radiation therapy would have experienced a gradual change (time trend) in primary tumor setup ≥4 mm in the craniocaudal (CC) direction during the fractionated treatment (11/12 in caudal direction, maximum 11 mm). Due to these trends, correction of primary tumor displacements with NAL resulted in large residual CC errors (required margin 6.7 mm). With the weekly correction vector adjustments in eNAL, the trends could be largely compensated (CC margin 3.5 mm). Correlation between movements of the primary and nodal clinical target volumes (CTVs) in the CC direction was poor (r² = 0.15). Therefore, even with online setup corrections of the primary CTV, the required CC margin for the nodal CTV was as large as 6.8 mm. Also for the vertebrae, large time trends were observed for some patients. Because of poor CC correlation (r² = 0.19) between displacements of the primary CTV and the vertebrae, even with daily online repositioning of the vertebrae, the required CC margin around the primary CTV was 6.9 mm. Conclusions: Laryngeal cancer patients showed substantial interfraction setup variations, including large time trends, and poor CC correlation between primary tumor displacements and motion of the nodes and vertebrae (internal tumor motion). These trends and nonrigid anatomy variations have to be considered in the choice of setup verification protocol and

  15. A New Paradigm for Supergranulation Derived from Large-Distance Time-Distance Helioseismology: Pancakes

    Science.gov (United States)

    Duvall, Thomas L.; Hanasoge, Shravan M.

    2012-01-01

    With large separations (10-24 deg heliocentric), it has proven possible to cleanly separate the horizontal and vertical components of supergranular flow with time-distance helioseismology. These measurements require very broad filters in the k-ω power spectrum, as supergranulation apparently scatters waves over a large area of the power spectrum. By picking locations of supergranules as peaks in the horizontal divergence signal derived from f-mode waves, it is possible to simultaneously obtain average properties of supergranules and a high signal/noise ratio by averaging over many cells. By comparing ray-theory forward modeling with HMI measurements, an average supergranule model was derived with a peak upflow of 240 m/s at cell center at a depth of 2.3 Mm and a peak horizontal outflow of 700 m/s at a depth of 1.6 Mm. This upflow is a factor of 20 larger than the measured photospheric upflow. These results may not be consistent with earlier measurements using much shorter separations (<5 deg heliocentric). With a 30 Mm horizontal extent and a depth of a few Mm, the cells might be characterized as thick pancakes.

  16. Tracking Large Area Mangrove Deforestation with Time-Series of High Fidelity MODIS Imagery

    Science.gov (United States)

    Rahman, A. F.; Dragoni, D.; Didan, K.

    2011-12-01

    Mangrove forests are important coastal ecosystems of the tropical and subtropical regions. These forests provide critical ecosystem services, fulfill important socio-economic and environmental functions, and support coastal livelihoods. But these forests are also among the most vulnerable ecosystems, both to anthropogenic disturbance and to climate change. Yet, there exists no map or published study showing detailed spatiotemporal trends of mangrove deforestation at local to regional scales. There is an immediate need to produce such detailed maps in order to further study the drivers, impacts and feedbacks of anthropogenic and climate factors on mangrove deforestation, and to develop local and regional scale adaptation/mitigation strategies. In this study we use a time-series of high fidelity imagery from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) for tracking changes in the greenness of mangrove forests of Kalimantan Island of Indonesia. A novel method of filtering satellite data for cloud, aerosol, and view angle effects was used to produce high fidelity MODIS time-series images at 250-meter spatial resolution and three-month temporal resolution for the period 2000-2010. Enhanced Vegetation Index 2 (EVI2), a measure of vegetation greenness, was calculated from these images for each pixel at each time interval. Temporal variations in the EVI2 of each pixel were tracked as a proxy for deforestation of mangroves using the statistical method of change-point analysis. The results of this change detection were validated using Monte Carlo simulation, photographs from Google Earth, finer spatial resolution images from the Landsat satellite, and ground-based GIS data.
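
    The per-pixel change-point analysis described above can be sketched with a minimal least-squares mean-shift detector on a synthetic EVI2 series. The series below (quarterly values with a greenness drop after simulated clearing) is an illustrative assumption, not the study's data:

```python
import numpy as np

def change_point(series):
    """Single mean-shift change-point: the split index minimizing the pooled
    sum of squared residuals around the two segment means."""
    n = len(series)
    best_k, best_cost = None, np.inf
    for k in range(2, n - 1):
        left, right = series[:k], series[k:]
        cost = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

# Synthetic quarterly EVI2 series for one pixel, 2000-2010 (44 values):
# intact mangrove, then a persistent greenness drop after deforestation.
rng = np.random.default_rng(1)
evi2 = np.concatenate([rng.normal(0.55, 0.02, 28),   # intact canopy
                       rng.normal(0.30, 0.02, 16)])  # after clearing
k = change_point(evi2)
print("detected change at quarter", k)  # expected near index 28
```

    Real analyses would also screen the detected shift against noise (e.g. by Monte Carlo simulation, as the abstract mentions) before flagging a pixel as deforested.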

  17. A large capacity time division multiplexed (TDM) laser beam combining technique enabled by nanosecond speed KTN deflector

    Science.gov (United States)

    Yin, Stuart (Shizhuo); Chao, Ju-Hung; Zhu, Wenbin; Chen, Chang-Jiang; Campbell, Adrian; Henry, Michael; Dubinskiy, Mark; Hoffman, Robert C.

    2017-08-01

    In this paper, we present a novel large capacity (1000+ channel) time division multiplexing (TDM) laser beam combining technique that harnesses a state-of-the-art nanosecond speed potassium tantalate niobate (KTN) electro-optic (EO) beam deflector as the time division multiplexer. The major advantages of the TDM approach are: (1) large multiplexing capability (over 1000 channels), (2) high spatial beam quality (the combined beam has the same spatial profile as the individual beam), (3) high spectral beam quality (the combined beam has the same spectral width as the individual beam), and (4) insensitivity to the phase fluctuations of the individual lasers, owing to the incoherent nature of the beam combining. Quantitative analyses show that it is possible to achieve a single-aperture, single-transverse-mode solid-state and/or fiber laser with over one hundred kW of average power by pursuing this innovative beam combining method, which represents a major technical advance in the field of high energy lasers. Such 100+ kW average power, diffraction-limited-beam-quality lasers can play an important role in a variety of applications such as laser directed energy weapons (DEW) and large-capacity high-speed laser manufacturing, including cutting, welding, and printing.

  18. Time-domain hybrid method for simulating large amplitude motions of ships advancing in waves

    Directory of Open Access Journals (Sweden)

    Shukui Liu

    2011-03-01

    Full Text Available Typical results obtained by a newly developed, nonlinear time domain hybrid method for simulating large amplitude motions of ships advancing with constant forward speed in waves are presented. The method is hybrid in that it combines a time-domain transient Green function method and a Rankine source method. The present approach employs a simple double integration algorithm with respect to time to simulate the free-surface boundary condition. During the simulation, the diffraction and radiation forces are computed by pressure integration over the mean wetted surface, whereas the incident wave and hydrostatic restoring forces/moments are calculated on the instantaneously wetted surface of the hull. Typical numerical results of application of the method to the seakeeping performance of a standard containership, namely the ITTC S175, are herein presented. Comparisons have been made between the results from the present method, the frequency domain 3D panel method (NEWDRIFT of NTUA-SDL), and available experimental data, and good agreement has been observed for all studied cases.

  19. Stiff Columns as Liquefaction Mitigation Measure for Retrofit of Existing Buildings

    Directory of Open Access Journals (Sweden)

    Zaheer Ahmed Almani

    2012-10-01

    Full Text Available In this paper, ground reinforcement with jet grouted columns under the shallow foundations of existing buildings was analysed using numerical modelling. This study concerns ground reinforcement by installing stiff jet grouted columns around the shallow foundations of an existing building when the foundation soil liquefies during an earthquake. The isolated shallow square footing pad supporting a typical simple frame structure was constructed on ground reinforced with rows of stiff jet grouted columns at a shallow depth below the ground surface. This soil-structure system was modelled and analysed as plane-strain using the FLAC (Fast Lagrangian Analysis of Continua) 2D dynamic modelling and analysis software. The results showed that liquefaction-induced large settlement of the shallow foundations of existing buildings can be reduced to tolerable limits by reinforcing the ground with continuous rows of vertical jet grouted columns adjacent to the footing pad.

  20. Large-scale transport across narrow gaps in rod bundles

    Energy Technology Data Exchange (ETDEWEB)

    Guellouz, M.S.; Tavoularis, S. [Univ. of Ottawa (Canada)]

    1995-09-01

    Flow visualization and hot-wire anemometry were used to investigate the velocity field in a rectangular channel containing a single cylindrical rod, which could be traversed on the centreplane to form gaps of different widths with the plane wall. The presence of large-scale, quasi-periodic structures in the vicinity of the gap has been demonstrated through flow visualization, spectral analysis and space-time correlation measurements. These structures are seen to exist even for relatively large gaps, at least up to W/D=1.350 (W is the sum of the rod diameter, D, and the gap width). The above measurements appear to be compatible with the field of a street of three-dimensional, counter-rotating vortices, whose detailed structure, however, remains to be determined. The convection speed and the streamwise spacing of these vortices have been determined as functions of the gap size.

  1. Investigation on performance of all optical buffer with large dynamical delay time based on cascaded double loop optical buffers

    International Nuclear Information System (INIS)

    Yong-Jun, Wang; Xiang-Jun, Xin; Xiao-Lei, Zhang; Chong-Qing, Wu; Kuang-Lu, Yu

    2010-01-01

    Optical buffers are critical for optical signal processing in future optical packet-switched networks. In this paper, a theoretical study as well as an experimental demonstration of a new optical buffer with a large dynamical delay time is carried out, based on cascaded double loop optical buffers (DLOBs). It is found that pulse distortion can be restrained by a negative optical control mode when the optical packet is in the loop. Noise analysis indicates that it is feasible to realise a large variable delay range with cascaded DLOBs. These conclusions are validated by an experimental system with 4-stage cascaded DLOBs. Both the theoretical simulations and the experimental results indicate that a large delay range of 1–9999 times the basic delay unit and a fine granularity of 25 ns can be achieved by the cascaded DLOBs. The performance of the cascaded DLOBs is suitable for all-optical networks. (classical areas of phenomenology)
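
    The quoted delay range (1–9999 times a 25 ns basic unit, from 4 cascaded stages) is consistent with a cascade in which each stage contributes one decimal digit of the delay multiple. The sketch below illustrates that assumed decomposition; it is a plausible reading of the numbers, not the paper's actual control scheme:

```python
# Assumed architecture: each of the 4 cascaded DLOB stages circulates the packet 0-9 times,
# with per-circulation delays scaled by powers of ten, so the total delay is any multiple
# 1-9999 of the 25 ns basic unit. The stage settings are then just base-10 digits.
BASIC_DELAY_NS = 25

def stage_circulations(multiple, stages=4):
    """Decompose a delay multiple into per-stage circulation counts (most significant first)."""
    digits = []
    for _ in range(stages):
        digits.append(multiple % 10)
        multiple //= 10
    return digits[::-1]

m = 4321
print(stage_circulations(m))                 # [4, 3, 2, 1]
print(m * BASIC_DELAY_NS, "ns total delay")  # 108025 ns
```

    The fine granularity (25 ns) is set by the least significant stage, and the maximum delay (9999 × 25 ns ≈ 0.25 ms) by all stages at their maximum count.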

  2. Physics with large extra dimensions

    Indian Academy of Sciences (India)

    Pramana – Journal of Physics, Volume 62, Issue 2. The recent understanding of string theory opens the possibility that the string scale can be as ... by the existence of large internal dimensions, in the sub-millimeter region.

  3. Femtosecond time-resolved studies of coherent vibrational Raman scattering in large gas-phase molecules

    International Nuclear Information System (INIS)

    Hayden, C.C.; Chandler, D.W.

    1995-01-01

    Results are presented from femtosecond time-resolved coherent Raman experiments in which we excite and monitor vibrational coherence in gas-phase samples of benzene and 1,3,5-hexatriene. Different physical mechanisms for coherence decay are seen in these two molecules. In benzene, where the Raman polarizability is largely isotropic, the Q branch of the vibrational Raman spectrum is the primary feature excited. Molecules in different rotational states have different Q-branch transition frequencies due to vibration--rotation interaction. Thus, the macroscopic polarization that is observed in these experiments decays because it has many frequency components from molecules in different rotational states, and these frequency components go out of phase with each other. In 1,3,5-hexatriene, the Raman excitation produces molecules in a coherent superposition of rotational states, through (O, P, R, and S branch) transitions that are strong due to the large anisotropy of the Raman polarizability. The coherent superposition of rotational states corresponds to initially spatially oriented, vibrationally excited, molecules that are freely rotating. The rotation of molecules away from the initial orientation is primarily responsible for the coherence decay in this case. These experiments produce large (∼10% efficiency) Raman shifted signals with modest excitation pulse energies (10 μJ) demonstrating the feasibility of this approach for a variety of gas phase studies. copyright 1995 American Institute of Physics

  4. Maximizing economic and environmental performance of existing coal-fired assets

    Energy Technology Data Exchange (ETDEWEB)

    Bartley, Pat; Foucher, Jean-Claude; Hestermann, Rolf; Hilton, Bob; Keegan, Bill; Stephen, Don

    2007-07-01

    In recent years, Plant Owners and innovative suppliers such as ALSTOM have come to realize that existing coal-fired assets have in many cases hidden capacity. This largely results from the conservative nature of their original design, but also from the possibility of integrating the latest advances in technology without the need to buy complete power plant components. ALSTOM's Optimized Plant Retrofit (OPR) process is a proven method to identify the full potential of existing equipment, taking a systemic and holistic approach to achieve full optimisation. OPRs are supported by ALSTOM's comprehensive portfolio of available technologies and a proven capability to integrate retrofit opportunities encompassing innovative solutions for a variety of plant components such as coal mills, boiler, air pollution control equipment, turbogenerator, feedheating and condensing plant. By teaming utility representatives with ALSTOM's technical experts we can collectively identify solutions for enhancing both heat rate and net output, to maximise the value of existing assets. This often gives a return on investment significantly better than greenfield construction for supply margin improvement. This paper introduces the OPR concept in detail and presents case studies and insights into future developments, in particular retrofitting existing assets in an emissions constrained environment. (auth)

  5. Global Existence Analysis of Cross-Diffusion Population Systems for Multiple Species

    Science.gov (United States)

    Chen, Xiuqing; Daus, Esther S.; Jüngel, Ansgar

    2018-02-01

    The existence of global-in-time weak solutions to reaction-cross-diffusion systems for an arbitrary number of competing population species is proved. The equations can be derived from an on-lattice random-walk model with general transition rates. In the case of linear transition rates, it extends the two-species population model of Shigesada, Kawasaki, and Teramoto. The equations are considered in a bounded domain with homogeneous Neumann boundary conditions. The existence proof is based on a refined entropy method and a new approximation scheme. Global existence follows under a detailed balance or weak cross-diffusion condition. The detailed balance condition is related to the symmetry of the mobility matrix, which mirrors Onsager's principle in thermodynamics. Under detailed balance (and without reaction) the entropy is nonincreasing in time, but counter-examples show that the entropy may increase initially if detailed balance does not hold.

  6. A Matter of Time: Faster Percolator Analysis via Efficient SVM Learning for Large-Scale Proteomics.

    Science.gov (United States)

    Halloran, John T; Rocke, David M

    2018-05-04

    Percolator is an important tool for greatly improving the results of a database search and subsequent downstream analysis. Using support vector machines (SVMs), Percolator recalibrates peptide-spectrum matches based on the learned decision boundary between targets and decoys. To improve analysis time for large-scale data sets, we update Percolator's SVM learning engine through software and algorithmic optimizations rather than heuristic approaches that necessitate the careful study of their impact on learned parameters across different search settings and data sets. We show that by optimizing Percolator's original learning algorithm, l2-SVM-MFN, large-scale SVM learning requires only about a third of the original runtime. Furthermore, we show that by employing the widely used Trust Region Newton (TRON) algorithm instead of l2-SVM-MFN, large-scale Percolator SVM learning is reduced to only about a fifth of the original runtime. Importantly, these speedups only affect the speed at which Percolator converges to a global solution and do not alter recalibration performance. The upgraded versions of both l2-SVM-MFN and TRON are optimized within the Percolator codebase for multithreaded and single-thread use and are available under the Apache license at bitbucket.org/jthalloran/percolator_upgrade.
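
    Both solvers named above minimize the same l2-regularized squared-hinge SVM objective and differ only in how they reach the optimum. A minimal sketch of that objective on synthetic target/decoy-style data, using plain gradient descent in place of the MFN or trust-region Newton steps (the data, C, and step size here are illustrative assumptions):

```python
import numpy as np

# Synthetic linearly separable "target" (+1) / "decoy" (-1) data.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = np.sign(X @ w_true)

# Objective: 0.5*||w||^2 + C * sum(max(0, 1 - y_i * w.x_i)^2)
C, lr = 1.0, 0.001
w = np.zeros(5)
for _ in range(2000):        # TRON/MFN would converge in far fewer, smarter steps
    margin = 1.0 - y * (X @ w)
    active = margin > 0      # only margin-violating examples contribute to the loss
    grad = w - 2.0 * C * ((y[active] * margin[active])[:, None] * X[active]).sum(axis=0)
    w -= lr * grad

accuracy = (np.sign(X @ w) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

    The speedups reported in the abstract come from solving exactly this kind of problem more efficiently (second-order trust-region steps, better linear algebra, multithreading), not from changing the objective, which is why recalibration performance is unaffected.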

  7. Short-time existence of solutions for mean-field games with congestion

    KAUST Repository

    Gomes, Diogo A.; Voskanyan, Vardan K.

    2015-01-01

    We consider time-dependent mean-field games with congestion that are given by a Hamilton–Jacobi equation coupled with a Fokker–Planck equation. These models are motivated by crowd dynamics in which agents have difficulty moving in high-density areas

  8. Freeway travel time estimation using existing fixed traffic sensors : phase 1.

    Science.gov (United States)

    2013-08-01

    Freeway travel time is one of the most useful pieces of information for road users and an : important measure of effectiveness (MOE) for traffic engineers and policy makers. In the Greater : St. Louis area, Gateway Guide, the St. Louis Transportation...

  9. Two Proposals for determination of large reactivity of reactor

    International Nuclear Information System (INIS)

    Kaneko, Yoshihiko; Nagao, Yoshiharu; Yamane, Tsuyoshi; Takeuchi, Mituo

    1999-01-01

    Two proposals for the determination of large reactivity of reactors are presented: one for large positive reactivity, the other for large negative reactivity. Existing experimental methods for the determination of large positive reactivity, the fuel addition method and the neutron absorption substitution method, were analyzed. It is found that both experimental methods are possibly affected by a substantially large systematic error of up to ∼20% when the excess multiplication factor comes into the range close to ∼20%Δk. To cope with this difficulty, a revised method is proposed. The revised method evaluates the potential excess multiplication factor as the consecutive increments of the effective multiplication factor in a virtual core, which are converted from those in an actual core by multiplying by a conversion factor f. The conversion factor f is to be obtained in principle by calculation. Numerical experiments were done on a slab reactor using a one-group diffusion model. The rod drop experimental method is widely used for the determination of large negative reactivity values. The decay of the neutron density following the initiation of rod insertion is necessarily slowed down according to the insertion speed. It is proved by analysis based on one-point reactor kinetics that in such a case the integral counting method hitherto used tends to significantly underestimate the absolute values of negative reactivity, even if the insertion time is in the range of 1-2 s. For the High Temperature Engineering Test Reactor (HTTR), the insertion time will be lengthened up to 4-6 s. In order to overcome this difficulty, the delayed integral counting method is proposed, in which the integration of neutron counting starts after the rod drop has been completed and the counts before it are evaluated by calculation using one-point reactor kinetics. This is because the influence of the insertion time on the decay of the neutron

  10. Life cycle assessment: Existing building retrofit versus replacement

    Science.gov (United States)

    Darabi, Nura

    The embodied energy in building materials constitutes a large part of the total energy required for any building (Thormark 2001, 429). In working to make buildings more energy efficient, this needs to be considered. Integrating considerations of life cycle assessment for buildings and materials is one promising way to reduce the amount of energy consumption within the building sector and the environmental impacts associated with that energy. A life cycle assessment (LCA) model can be utilized to help evaluate the embodied energy in building materials in comparison to the building's operational energy. This thesis takes into consideration the potential life cycle reductions in energy and CO2 emissions that can be made through an energy retrofit of an existing building versus demolition and replacement with a new energy efficient building. A 95,000 square foot institutional building built in the 1960s was used as a case study for a building LCA, along with a calibrated energy model of the existing building created as part of a previous Master of Building Science thesis. The chosen case study building was compared to 10 possible improvement options of either energy retrofit or replacement of the existing building with a higher energy performing building, in order to see the life cycle relationship between embodied energy, operational energy, and CO2 emissions. As a result of completing the LCA, it is shown under which scenarios building retrofit saves more energy over the lifespan of the building than replacement with new construction. It was calculated that an energy retrofit of the chosen existing institutional building would reduce the amount of energy and CO2 emissions associated with that building over its lifespan.

  11. A Variable Stiffness Analysis Model for Large Complex Thin-Walled Guide Rail

    Directory of Open Access Journals (Sweden)

    Wang Xiaolong

    2016-01-01

    Full Text Available Large complex thin-walled guide rails have a complicated structure and nonuniform, low rigidity. Traditional cutting simulations are time consuming due to the huge computation involved, especially for large workpieces. To solve these problems, a more efficient variable stiffness analysis model is proposed, which can obtain quantitative stiffness values of the machining surface. By applying simulated cutting forces at sampling points using the finite element analysis software ABAQUS, the single-direction variable stiffness rule can be obtained. A variable stiffness matrix is proposed by analyzing the multi-directional coupled variable stiffness rule. Combined with the cutting force values in the three directions, the reasonableness of existing process parameters can be verified and optimized cutting parameters can be designed.

  12. Measuring gas-residence times in large municipal incinerators, by means of a pseudo-random binary signal tracer technique

    International Nuclear Information System (INIS)

    Nasserzadeh, V.; Swithenbank, J.; Jones, B.

    1995-01-01

    The problem of measuring gas-residence time in large incinerators was studied by the pseudo-random binary sequence (PRBS) stimulus tracer response technique at the Sheffield municipal solid-waste incinerator (35 MW plant). The steady-state system was disturbed by the superimposition of small fluctuations in the form of a pseudo-random binary sequence of methane pulses, and the response of the incinerator was determined from the CO2 concentration in the flue gases at the boiler exit, measured with a specially developed optical gas analyser with a high-frequency response. For data acquisition, an on-line PC was used together with the LAB Windows software system; the output response was then cross-correlated with the perturbation signal to give the impulse response of the incinerator. There was very good agreement between the gas-residence time for the Sheffield MSW incinerator as calculated by computational fluid dynamics (FLUENT model) and the gas-residence time at the plant as measured by the PRBS tracer technique. The results obtained from this research programme clearly demonstrate that the PRBS stimulus tracer response technique can be successfully and economically used to measure gas-residence times in large incinerator plants. They also suggest that the common commercial practice of characterising incinerator operation by a single residence-time parameter may misrepresent the complexities involved in describing the operation of the incineration system. (author)
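
    The core of the PRBS technique is that a maximal-length binary perturbation has a nearly impulsive autocorrelation, so cross-correlating the measured output with the input recovers the system's impulse response (and hence its residence-time distribution). A self-contained sketch with a synthetic first-order response standing in for the incinerator; the sequence length, LFSR taps, noise level, and assumed response are all illustrative:

```python
import numpy as np

def prbs(n_bits=9):
    """Maximal-length +/-1 sequence from an LFSR (taps 9 and 5, period 511)."""
    state = [1] * n_bits
    seq = []
    for _ in range(2 ** n_bits - 1):
        seq.append(1.0 if state[-1] else -1.0)
        fb = state[-1] ^ state[4]
        state = [fb] + state[:-1]
    return np.array(seq)

u = prbs()                                         # perturbation (e.g. on/off methane pulses)
n = len(u)
h_true = np.exp(-np.arange(50) / 10.0)             # hypothetical residence-time impulse response
y = np.convolve(np.tile(u, 2), h_true)[n:2 * n]    # one period of the periodic steady response
y += np.random.default_rng(3).normal(0.0, 0.1, n)  # measurement noise (e.g. CO2 analyser)

# Circular input-output cross-correlation ~ impulse response (up to a small constant offset),
# because the PRBS autocorrelation is ~1 at zero lag and ~-1/n elsewhere.
h_est = np.array([(u * np.roll(y, -k)).mean() for k in range(50)])
print("peak of estimated response at lag", h_est.argmax())
```

    Averaging over many PRBS periods further suppresses noise, which is what makes the technique practical on a noisy full-scale plant.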

  13. Robust Synchronization of Fractional-Order Chaotic Systems at a Pre-Specified Time Using Sliding Mode Controller with Time-Varying Switching Surfaces

    International Nuclear Information System (INIS)

    Khanzadeh, Alireza; Pourgholi, Mahdi

    2016-01-01

    A main problem associated with the synchronization of two chaotic systems is that the time at which complete synchronization will occur is not specified: the synchronization time is either infinitely large, or it is finite but only its upper bound is known, and this bound depends on the systems' initial conditions. In this paper we propose a method for synchronizing two chaotic systems precisely at a pre-specified time. To this end, sliding mode control with time-varying switching surfaces is used, and a control law based on the Lyapunov stability theorem is derived which is able to synchronize two fractional-order chaotic systems precisely at a pre-specified time without concern for their initial conditions. Moreover, by eliminating the reaching phase in the proposed synchronization scheme, robustness against uncertainties and exogenous disturbances is obtained. Because a fractional integral of the sign function, rather than the sign function itself, appears in the control equation, the need for infinitely fast switching is obviated in this method. To show the effectiveness of the proposed method, illustrative examples under different situations are provided and the simulation results are reported.

  14. Large scale analysis of co-existing post-translational modifications in histone tails reveals global fine structure of cross-talk

    DEFF Research Database (Denmark)

    Schwämmle, Veit; Aspalter, Claudia-Maria; Sidoli, Simone

    2014-01-01

    Mass spectrometry (MS) is a powerful analytical method for the identification and quantification of co-existing post-translational modifications in histone proteins. One of the most important challenges in current chromatin biology is to characterize the relationships between co-existing histone...... sample-specific patterns for the co-frequency of histone post-translational modifications. We implemented a new method to identify positive and negative interplay between pairs of methylation and acetylation marks in proteins. Many of the detected features were conserved between different cell types...... sites but negative cross-talk for distant ones, and for discrete methylation states at Lys-9, Lys-27, and Lys-36 of histone H3, suggesting a more differentiated functional role of methylation beyond the general expectation of enhanced activity at higher methylation states....

  15. Sustainability in the existing building stock

    DEFF Research Database (Denmark)

    Elle, Morten; Nielsen, Susanne Balslev; Hoffmann, Birgitte

    2005-01-01

    , not facilities management's most important contribution to sustainable development in the built environment. Space management is an essential tool in facilities management – and it could be considered a powerful tool in sustainable development, remembering that the building not being built is perhaps the most......This paper explores the role of facilities management in relation to sustainable development in the existing building stock. Facilities management is a concept still developing as the management of buildings becomes more and more professional. Many recognize today that facilities...... management is a concept relevant to others than large companies. Managing the flows of energy and other resources is a part of facilities management, and an increased professionalism could lead to reductions in the use of energy and water and in the generation of waste and wastewater. This is, however...

  16. Incremental Frequent Subgraph Mining on Large Evolving Graphs

    KAUST Repository

    Abdelhamid, Ehab

    2017-08-22

    Frequent subgraph mining is a core graph operation used in many domains, such as graph data management and knowledge exploration, bioinformatics and security. Most existing techniques target static graphs. However, modern applications, such as social networks, utilize large evolving graphs. Mining these graphs using existing techniques is infeasible due to the high computational cost. In this paper, we propose IncGM+, a fast incremental approach to the continuous frequent subgraph mining problem on a single large evolving graph. We adapt the notion of a “fringe” to the graph context, that is, the set of subgraphs on the border between frequent and infrequent subgraphs. IncGM+ maintains fringe subgraphs and exploits them to prune the search space. To boost efficiency, we propose an index structure that maintains selected embeddings with minimal memory overhead. These embeddings are utilized to avoid redundant, expensive subgraph isomorphism operations. Moreover, the proposed system supports batch updates. Using large real-world graphs, we experimentally verify that IncGM+ outperforms existing methods by up to three orders of magnitude, scales to much larger graphs and consumes less memory.
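    The border-tracking ("fringe") idea can be illustrated with a deliberately reduced sketch. Real IncGM+ mines general subgraph patterns via subgraph isomorphism; here each "pattern" is collapsed to a single labeled edge so that only the incremental bookkeeping at the frequent/infrequent border remains, and the MIN_SUP threshold is an arbitrary choice.

```python
from collections import Counter

MIN_SUP = 2  # illustrative support threshold

class IncrementalMiner:
    """Toy incremental miner: each pattern is a labeled edge, and a
    pattern crosses the frequent/infrequent border ("fringe") when its
    support count crosses MIN_SUP -- no re-mining from scratch."""

    def __init__(self):
        self.support = Counter()
        self.frequent = set()

    @staticmethod
    def pattern(lu, lv):
        return tuple(sorted((lu, lv)))

    def add_edge(self, lu, lv):
        p = self.pattern(lu, lv)
        self.support[p] += 1
        if self.support[p] >= MIN_SUP:
            self.frequent.add(p)

    def remove_edge(self, lu, lv):
        p = self.pattern(lu, lv)
        self.support[p] -= 1
        if self.support[p] < MIN_SUP:
            self.frequent.discard(p)

m = IncrementalMiner()
m.add_edge("A", "B")
m.add_edge("A", "B")   # ("A", "B") crosses the border and becomes frequent
m.add_edge("B", "C")
print(sorted(m.frequent))   # [('A', 'B')]
```

    Each update touches only the support of its own pattern, which is the reason an incremental approach avoids re-mining the whole evolving graph.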

  17. Greening Existing Tribal Buildings

    Science.gov (United States)

    Guidance about improving sustainability in existing tribal casinos and manufactured homes. Many steps can be taken to make existing buildings greener and healthier. They may also reduce utility and medical costs.

  18. Parallel-In-Time For Moving Meshes

    Energy Technology Data Exchange (ETDEWEB)

    Falgout, R. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Manteuffel, T. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Southworth, B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Schroder, J. B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-02-04

    With steadily growing computational resources available, scientists must develop effective ways to utilize the increased resources. High-performance, highly parallel software has become a standard. However, until recent years parallelism has focused primarily on the spatial domain. When solving a space-time partial differential equation (PDE), this leads to a sequential bottleneck in the temporal dimension, particularly when taking a large number of time steps. The XBraid parallel-in-time library was developed as a practical way to add temporal parallelism to existing sequential codes with only minor modifications. In this work, a rezoning-type moving mesh is applied to a diffusion problem and formulated in a parallel-in-time framework. Tests and scaling studies are run using XBraid and demonstrate excellent results for the simple model problem considered herein.

  19. Global Well-Posedness of the Boltzmann Equation with Large Amplitude Initial Data

    Science.gov (United States)

    Duan, Renjun; Huang, Feimin; Wang, Yong; Yang, Tong

    2017-07-01

    The global well-posedness of the Boltzmann equation with initial data of large amplitude has remained a long-standing open problem. In this paper, by developing a new {L^∞_x L^1_v ∩ L^∞_{x,v}} approach, we prove the global existence and uniqueness of mild solutions to the Boltzmann equation in the whole space or torus for a class of initial data with bounded velocity-weighted {L^∞} norm under some smallness condition on the {L^1_x L^∞_v} norm as well as defect mass, energy and entropy, so that the initial data allow large-amplitude oscillations. Both hard and soft potentials with angular cut-off are considered, and the large time behavior of solutions in the {L^∞_{x,v}} norm with explicit rates of convergence is also studied.

  20. Requirements for existing buildings

    DEFF Research Database (Denmark)

    Thomsen, Kirsten Engelund; Wittchen, Kim Bjarne

    This report collects energy performance requirements for existing buildings in European member states by June 2012.

  1. On the Large-Scaling Issues of Cloud-based Applications for Earth Science Data

    Science.gov (United States)

    Hua, H.

    2016-12-01

    Next-generation science data systems are needed to address the incoming flood of data from new missions such as NASA's SWOT and NISAR, whose SAR data volumes and data throughput rates are an order of magnitude larger than those of present-day missions. Existing missions, such as OCO-2, may also require rapid turn-around for processing different science scenarios, where on-premise and even traditional HPC computing environments may not meet the processing needs. Additionally, traditional means of procuring hardware on-premise are already limited by facilities capacity constraints for these new missions. Experience has shown that embracing efficient cloud computing approaches for large-scale science data systems requires more than just moving existing code to cloud environments. At large cloud scales, we need to deal with scaling and cost issues. We present our experiences deploying multiple instances of our hybrid-cloud computing science data system (HySDS) to support large-scale processing of Earth Science data products. We will explore optimization approaches to getting the best performance out of hybrid-cloud computing, as well as common issues that arise when dealing with large-scale computing. Novel approaches were utilized to do processing on Amazon's spot market, which can potentially offer 75%-90% cost savings but with an unpredictable computing environment driven by market forces.

  2. A study of residence time distribution using radiotracer technique in the large scale plant facility

    Science.gov (United States)

    Wetchagarun, S.; Tippayakul, C.; Petchrak, A.; Sukrod, K.; Khoonkamjorn, P.

    2017-06-01

    As the demand for troubleshooting of large industrial plants increases, radiotracer techniques, which have the capability to provide fast, online and effective detection of plant problems, have been continually developed. One good potential application of radiotracers for troubleshooting in a process plant is the analysis of the Residence Time Distribution (RTD). In this paper, a study of RTD in a large-scale plant facility using a radiotracer technique is presented. The objective of this work is to gain experience with RTD analysis using a radiotracer technique in a “larger than laboratory” scale plant setup comparable to a real industrial application. The experiment was carried out at the sedimentation tank in the water treatment facility of the Thailand Institute of Nuclear Technology (Public Organization). Br-82 was selected for this work due to its chemical properties, its suitable half-life and its on-site availability. NH4Br in the form of an aqueous solution was injected into the system as the radiotracer. Six NaI detectors were placed along the pipelines and at the tank in order to determine the RTD of the system. The RTD and the Mean Residence Time (MRT) of the tank were analysed and calculated from the measured data. The experience and knowledge attained from this study are important for extending this technique to industrial facilities in the future.
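    The MRT quoted above is the first moment of the normalized tracer response curve. A minimal sketch of the calculation, using a synthetic gamma-shaped detector curve rather than the experiment's data: normalize C(t) into the RTD E(t), then take the first moment.

```python
import math

dt = 0.2                                          # sampling interval, s
t = [i * dt for i in range(501)]                  # time grid, 0..100 s
c = [ti * math.exp(-ti / 15.0) for ti in t]       # synthetic response C(t)

area = sum(c) * dt                                # ∫ C(t) dt
mrt = sum(ti * ci for ti, ci in zip(t, c)) * dt / area   # MRT = ∫ t·E(t) dt
print(f"MRT ≈ {mrt:.1f} s")
```

    For this particular curve the untruncated analytic mean is 2·15 = 30 s; the numerical value comes out slightly lower because the grid stops at 100 s.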

  3. A review of radon mitigation in large buildings in the US

    International Nuclear Information System (INIS)

    Craig, A.B.

    1994-01-01

    The Environmental Protection Agency of the US carried out its initial research on radon mitigation in houses, both existing and new. A review of this work is presented in another paper at this workshop. Four years ago, this work was expanded to include the study of radon in schools, both new and existing, and now includes studies in other large buildings, as well. Factors affecting ease of mitigation of existing schools using active soil depressurisation (ASD) have been identified and quantified. Examination of the building and architectural plans makes it possible to predict the ease of mitigation of a specific building. Many schools can be easily and inexpensively mitigated using ASD. However, examination of a fairly large number of schools has shown that a significant percentage of existing schools will be hard to mitigate with ASD. In some cases, the heating, ventilating, and air conditioning (HVAC) system can be used to pressurise the building and retard radon entry. However, in some cases no central HVAC system exists and the school is difficult and/or expensive to mitigate by any technique. Prevention of radon entry is relatively easy and inexpensive to accomplish during construction of schools and other large buildings. It is also possible to control radon to near ambient levels in new construction, a goal which is much more difficult to approach in existing large buildings. The preferred method of radon prevention in the construction of large buildings is to design the HVAC system for building pressurisation, install a simple ASD system, and seal all entry routes between the sub-slab and the building interior. (author)

  4. PhilDB: the time series database with built-in change logging

    Directory of Open Access Journals (Sweden)

    Andrew MacDonald

    2016-03-01

    PhilDB is an open-source time series database that supports storage of time series datasets that are dynamic; that is, it records updates to existing values in a log as they occur. PhilDB eases loading of data for the user by utilising an intelligent data write method. It preserves existing values during updates and abstracts the update complexity required to achieve logging of data value changes. It implements fast reads to make it practical to select data for analysis. Recent open-source systems have been developed to indefinitely store long-period high-resolution time series data without change logging. Unfortunately, such systems generally require a large initial installation investment before use because they are designed to operate over a cluster of servers to achieve high-performance writing of static data in real time. In essence, they have a ‘big data’ approach to storage and access. Other open-source projects for handling time series data that avoid the ‘big data’ approach are also relatively new and are complex or incomplete. None of these systems gracefully handle revision of existing data while tracking values that change. Unlike ‘big data’ solutions, PhilDB has been designed for single-machine deployment on commodity hardware, reducing the barrier to deployment. PhilDB takes a unique approach to meta-data tracking: optional attribute attachment. This facilitates scaling the complexities of storing a wide variety of data. That is, it allows time series data to be loaded as time series instances with minimal initial meta-data, yet additional attributes can be created and attached to differentiate the time series instances when a wider variety of data is needed. PhilDB was written in Python, leveraging existing libraries. While some existing systems come close to meeting the needs PhilDB addresses, none cover all the needs at once. PhilDB was written to fill this gap in existing solutions. This paper explores existing time
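    The change-logging behaviour described above can be sketched generically. This is not PhilDB's actual API, only a minimal illustration of the write path it describes: an update that changes an existing value is recorded in a log rather than silently overwriting the old value.

```python
import datetime as dt

class LoggedSeries:
    """Minimal change-logged time series store -- an illustration of the
    idea PhilDB implements, NOT PhilDB's actual API."""

    def __init__(self):
        self.current = {}   # timestamp -> latest value
        self.log = []       # (logged_at, timestamp, old_value, new_value)

    def write(self, ts, value):
        old = self.current.get(ts)
        if old != value:                      # only real changes are logged
            self.log.append((dt.datetime.now(dt.timezone.utc), ts, old, value))
            self.current[ts] = value

s = LoggedSeries()
s.write("2016-01-01", 1.0)
s.write("2016-01-01", 1.5)    # a revision: the old value survives in the log
print(len(s.log), s.current["2016-01-01"])   # 2 1.5
```

    The point is that the write path preserves history: a revision appends to the log instead of destroying the prior value, which is what makes later change tracking possible.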

  5. Fast analysis of wide-band scattering from electrically large targets with time-domain parabolic equation method

    Science.gov (United States)

    He, Zi; Chen, Ru-Shan

    2016-03-01

    An efficient three-dimensional time-domain parabolic equation (TDPE) method is proposed to rapidly analyze the narrow-angle wideband EM scattering properties of electrically large targets. The finite difference (FD) Crank-Nicolson (CN) scheme is the traditional tool for solving the time-domain parabolic equation. However, huge computational resources are required when the meshes become dense. Therefore, the alternating direction implicit (ADI) scheme is introduced to discretize the time-domain parabolic equation. In this way, the reduced transient scattered fields can be calculated line by line in each transverse plane for any time step with unconditional stability. As a result, fewer computational resources are required for the proposed ADI-based TDPE method when compared with both the traditional CN-based TDPE method and the finite-difference time-domain (FDTD) method. By employing the rotating TDPE method, the complete bistatic RCS can be obtained with encouraging accuracy for any observation angle. Numerical examples are given to demonstrate the accuracy and efficiency of the proposed method.
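    The ADI splitting can be demonstrated on a 2-D scalar parabolic model problem (u_t = u_xx + u_yy with homogeneous Dirichlet boundaries) instead of the electromagnetic TDPE itself. Each half-step is implicit in one transverse direction only, so the field is updated line by line with unconditional stability; the grid size and time step below are arbitrary, and dense solves stand in for the tridiagonal solver a production code would use.

```python
import numpy as np

n, dt, h = 32, 1e-3, 1.0 / 32
r = dt / (2 * h * h)

# 1-D second-difference operator (homogeneous Dirichlet boundaries)
D = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))
A_impl = np.eye(n) - r * D   # implicit half-step operator
A_expl = np.eye(n) + r * D   # explicit half-step operator

x = np.linspace(h / 2, 1 - h / 2, n)
u = np.outer(np.sin(np.pi * x), np.sin(np.pi * x))   # initial field, peak ~1

for _ in range(100):
    # half-step 1: implicit in x, explicit in y (one 1-D solve per line)
    u = np.linalg.solve(A_impl, u @ A_expl)
    # half-step 2: implicit in y, explicit in x
    u = np.linalg.solve(A_impl, (A_expl @ u).T).T

print(f"peak after 100 ADI steps: {u.max():.3f}")   # decays well below 1
```

    Because each half-step couples unknowns along a single direction only, the linear systems are tridiagonal in practice, which is the source of the method's per-step efficiency.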

  6. Photorealistic large-scale urban city model reconstruction.

    Science.gov (United States)

    Poullis, Charalambos; You, Suya

    2009-01-01

    The rapid and efficient creation of virtual environments has become a crucial part of virtual reality applications. In particular, civil and defense applications often require and employ detailed models of operations areas for training, simulations of different scenarios, planning for natural or man-made events, monitoring, surveillance, games, and films. A realistic representation of large-scale environments is therefore imperative for the success of such applications, since it increases the immersive experience of its users and helps reduce the difference between physical and virtual reality. However, the task of creating such large-scale virtual environments still remains time-consuming, manual work. In this work, we propose a novel method for the rapid reconstruction of photorealistic large-scale virtual environments. First, a novel, extendible, parameterized geometric primitive is presented for the automatic building identification and reconstruction of building structures. In addition, buildings with complex roofs containing complex linear and nonlinear surfaces are reconstructed interactively using a linear polygonal and a nonlinear primitive, respectively. Second, we present a rendering pipeline for the composition of photorealistic textures which, unlike existing techniques, can recover missing or occluded texture information by integrating multiple information sources captured from different optical sensors (ground, aerial, and satellite).

  7. Remark on state vector construction when flavor mixing exists

    International Nuclear Information System (INIS)

    Fujii, K.; Shimomura, T.

    2006-01-01

    In the framework of quantum field theory, we consider the way to construct the one-particle state (with definite 3-momentum) when particle mixing exists, such as in the case of flavor-neutrino mixing. In the preceding report (Prog. Theor. Phys. 112, 901 (2004)), we examined the structure of expectation values of the flavor neutrino charges (at time t) with respect to a neutrino-source state prepared at time t' (earlier than t). When there is no mixing, each of the various contributions to the expectation value is equal, in its dominant part, to the transition probability corresponding to the respective neutrino-production process. On the basis of the assumption that such an equality holds also in the mixing case, we can find an appropriate form of the one-flavor-neutrino state with 3-momentum and helicity. In the same way, we examine the boson case when flavor mixing exists. We give remarks on the relation and difference between the ordinary and the present approaches to flavor oscillation.

  8. How many N = 4 strings exist?

    International Nuclear Information System (INIS)

    Ketov, S.V.

    1994-09-01

    Possible ways of constructing extended fermionic strings with N=4 world-sheet supersymmetry are reviewed. String theory constraints form, in general, a non-linear quasi(super)conformal algebra, and can have conformal dimensions ≥1. When N=4, the most general N=4 quasi-superconformal algebra to consider for string theory building is D(1, 2; α), whose linearisation is the so-called ''large'' N=4 superconformal algebra. The D(1, 2; α) algebra has an su(2)_{κ+} + su(2)_{κ−} + u(1) Kac-Moody component, with α = κ−/κ+. We check the Jacobi identities and construct a BRST charge for the D(1, 2; α) algebra. The quantum BRST operator can be made nilpotent only when κ+ = κ− = −2. The D(1, 2; 1) algebra is actually isomorphic to the SO(4)-based Bershadsky-Knizhnik non-linear quasi-superconformal algebra. We argue for the existence of a string theory associated with the latter, and propose the (non-covariant) hamiltonian action for this new N=4 string theory. Our results imply the existence of two different N=4 fermionic string theories: the old one based on the ''small'' linear N=4 superconformal algebra and having total ghost central charge c_gh = +12, and the new one with non-linearly realised N=4 supersymmetry, based on the SO(4) quasi-superconformal algebra and having c_gh = +6. Both critical string theories have negative ''critical dimensions'' and do not admit unitary matter representations. (orig.)

  9. Modeling Optical Spectra of Large Organic Systems Using Real-Time Propagation of Semiempirical Effective Hamiltonians.

    Science.gov (United States)

    Ghosh, Soumen; Andersen, Amity; Gagliardi, Laura; Cramer, Christopher J; Govind, Niranjan

    2017-09-12

    We present an implementation of a time-dependent semiempirical method (INDO/S) in NWChem using real-time (RT) propagation to address, in principle, the entire spectrum of valence electronic excitations. Adopting this model, we study the UV/vis spectra of medium-sized systems such as P3B2 and f-coronene, and in addition much larger systems such as ubiquitin in the gas phase and the betanin chromophore in the presence of two explicit solvents (water and methanol). RT-INDO/S provides qualitatively and often quantitatively accurate results when compared with RT-TDDFT or experimental spectra. Even though we only consider the INDO/S Hamiltonian in this work, our implementation provides a framework for performing electron dynamics in large systems using semiempirical Hartree-Fock Hamiltonians in general.
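    The real-time propagation technique itself can be shown on a toy two-level Hamiltonian: apply a weak dipole "kick", propagate the density matrix step by step, record the induced dipole, and Fourier transform it to read off the excitation energy. This sketch illustrates only the RT scheme, not the INDO/S Hamiltonian.

```python
import numpy as np

def expm_herm(A, factor):
    """exp(factor * A) for Hermitian A via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.exp(factor * w)) @ V.conj().T

H0 = np.array([[0.0, 0.0], [0.0, 0.5]])   # toy Hamiltonian, gap = 0.5 a.u.
mu = np.array([[0.0, 1.0], [1.0, 0.0]])   # transition dipole operator

dt, nsteps = 0.1, 2000
U = expm_herm(H0, -1j * dt)               # one-step time-evolution operator

rho = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)   # ground state
K = expm_herm(mu, 1j * 0.01)              # weak delta-function "kick"
rho = K @ rho @ K.conj().T

dipole = np.empty(nsteps)
for i in range(nsteps):
    dipole[i] = np.real(np.trace(rho @ mu))   # induced dipole d(t)
    rho = U @ rho @ U.conj().T

freqs = 2 * np.pi * np.fft.rfftfreq(nsteps, dt)
spec = np.abs(np.fft.rfft(dipole - dipole.mean()))
peak = freqs[spec.argmax()]
print(f"absorption peak at ω ≈ {peak:.3f} a.u. (gap is 0.5 by construction)")
```

    The dominant Fourier component of the dipole signal sits at the two-level gap, which is the same mechanism by which RT propagation recovers a full valence spectrum from one simulation.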

  10. A time-focusing Fourier chopper time-of-flight diffractometer for large scattering angles

    International Nuclear Information System (INIS)

    Heinonen, R.; Hiismaeki, P.; Piirto, A.; Poeyry, H.; Tiitta, A.

    1975-01-01

    A high-resolution time-of-flight diffractometer utilizing time-focusing principles in conjunction with a Fourier chopper is under construction at Otaniemi. The design is an improved version of a test facility which has been used for single-crystal and powder diffraction studies with promising results. A polychromatic neutron beam from a radial beam tube of the FiR 1 reactor, collimated to dia. 70 mm, is modulated by a Fourier chopper (dia. 400 mm) which is placed inside a massive boron-loaded particle board shielding of 900 mm wall thickness. A thin flat sample (typically 5 mm x dia. 80 mm) is mounted on a turntable at a distance of 4 m from the chopper, and the diffracted neutrons are counted by a scintillation detector at a distance of 4 m from the sample. The scattering angle 2θ can be chosen between 90° and 160° to cover Bragg angles from 45° up to 80°. The angle between the chopper disc and the incident beam direction, as well as the angle of the detector surface relative to the diffracted beam, can be adjusted between 45° and 90° in order to accomplish time-focusing. In our set-up, with equal flight paths from chopper to sample and from sample to detector, the time-focusing conditions are fulfilled when the chopper and the detector are parallel to the sample plane. The time-of-flight spectrum of the scattered neutrons is measured by the reverse time-of-flight method in which, instead of neutrons, one essentially records the modulation function of the chopper during constant periods preceding each detected neutron. With a Fourier chopper whose speed is varied in a suitable way, the method is equivalent to the conventional Fourier method, but the spectrum is obtained directly without any off-line calculations.
The new diffractometer is operated automatically by a Super Nova computer which not only accumulates the synthesized diffraction pattern but also controls the chopper speed according to the modulation frequency sweep chosen by the user to obtain a

  11. Large deviations of the finite-time magnetization of the Curie-Weiss random-field Ising model

    Science.gov (United States)

    Paga, Pierre; Kühn, Reimer

    2017-08-01

    We study the large deviations of the magnetization at some finite time in the Curie-Weiss random field Ising model with parallel updating. While relaxation dynamics in an infinite-time horizon gives rise to unique dynamical trajectories [specified by initial conditions and governed by first-order dynamics of the form m_{t+1} = f(m_t)], we observe that the introduction of a finite-time horizon and the specification of terminal conditions can generate a host of metastable solutions obeying second-order dynamics. We show that these solutions are governed by a Newtonian-like dynamics in discrete time which permits solutions in terms of both the first-order relaxation ("forward") dynamics and the backward dynamics m_{t+1} = f^{-1}(m_t). Our approach allows us to classify trajectories for a given final magnetization as stable or metastable according to the value of the rate function associated with them. We find that, in analogy to the Freidlin-Wentzell description of the stochastic dynamics of escape from metastable states, the dominant trajectories may switch between the two types (forward and backward) of first-order dynamics. Additionally, we show how to compute rate functions when uncertainty in the quenched disorder is introduced.
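    The first-order forward dynamics m_{t+1} = f(m_t) referred to above is easy to iterate numerically. The sketch below samples a bimodal random field and relaxes the magnetization to a fixed point of the forward map; the parameter values are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
beta, h0 = 2.0, 0.3
fields = rng.choice([-h0, h0], size=10_000)   # quenched bimodal random field

def f(m):
    """Forward map m_{t+1} = f(m_t): mean spin response at magnetization m."""
    return float(np.tanh(beta * (m + fields)).mean())

m = 0.9                      # initial magnetization
for _ in range(50):
    m = f(m)
print(f"relaxed magnetization ≈ {m:.3f}")
```

    The backward dynamics discussed in the abstract would iterate the inverse map f^{-1} instead, which exists wherever f is monotone.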

  12. GDC 2: Compression of large collections of genomes.

    Science.gov (United States)

    Deorowicz, Sebastian; Danek, Agnieszka; Niemiec, Marcin

    2015-06-25

    The fall in prices of high-throughput genome sequencing is changing the landscape of modern genomics. A number of large-scale projects aimed at sequencing many human genomes are in progress. Genome sequencing is also becoming an important aid in personalized medicine. One of the significant side effects of this change is the necessity of storing and transferring huge amounts of genomic data. In this paper we deal with the problem of compression of large collections of complete genomic sequences. We propose an algorithm that is able to compress a collection of 1092 human diploid genomes about 9,500 times. This result is about 4 times better than what is offered by other existing compressors. Moreover, our algorithm is very fast, as it processes the data at a speed of 200 MB/s on a modern workstation. As a consequence, the proposed algorithm allows storing complete genomic collections at low cost; e.g., the examined collection of 1092 human genomes needs only about 700 MB when compressed, which can be compared to about 6.7 TB of uncompressed FASTA files. The source code is available at http://sun.aei.polsl.pl/REFRESH/index.php?page=projects&project=gdc&subpage=about.
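    The quoted sizes and compression ratio are mutually consistent, as a quick check shows:

```python
uncompressed_tb = 6.7     # FASTA collection size quoted above
compressed_mb = 700.0     # GDC 2 compressed size quoted above

ratio = uncompressed_tb * 1e12 / (compressed_mb * 1e6)
print(f"compression ratio ≈ {ratio:,.0f}x")   # consistent with "about 9,500 times"
```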

  13. Talk of time

    Directory of Open Access Journals (Sweden)

    Johann-Albrecht Meylahn

    2015-06-01

    Maybe, before we speak of time, or maybe whilst we are speaking of time, or maybe after we have spoken of time, in the various modes of time’s insistence to exist, one should give time to the talk of time. There are various different modes of time’s insistence to exist, such as quantum physics in conversation with relativity theory where time is constructed as a fourth dimension of space. Or there are the modes of time in history, religion, psychology and philosophy, and each of these modes is composed, and composes its own specific object called time, and a particular subject who understands and interprets time in that particular mode. Yet, before, whilst or after these modes of time’s insistence to exist, one should maybe give time to time’s time. Give time for the various times to articulate themselves in the various modes of existence, thereby creating both a whole plurality of differing subjects, as well as plurality of differing objects, all called ‘time’. Once time has been given time to talk its talk, to articulate itself within the various modes, it will be interrupted by the articulations of time in various modes of time still to come. These disruptions of time by time always still to come opens the door for a theological narrative – a narrative on time, but created by the coming of messianic times, interpreted in the mode of hope but also in the mode of a promise from the past.

  14. Characterize kinematic rupture history of large earthquakes with Multiple Haskell sources

    Science.gov (United States)

    Jia, Z.; Zhan, Z.

    2017-12-01

    Earthquakes are often regarded as continuous rupture along a single fault, but the occurrence of complex large events involving multiple faults and dynamic triggering challenges this view. Such rupture complexities cause difficulties for existing finite fault inversion algorithms, because they rely on specific parameterizations and regularizations to obtain physically meaningful solutions. Furthermore, it is difficult to assess the reliability and uncertainty of the obtained rupture models. Here we develop a Multi-Haskell Source (MHS) method to estimate the rupture process of large earthquakes as a series of sub-events of varying location, timing and directivity. Each sub-event is characterized by a Haskell rupture model with uniform dislocation and constant unilateral rupture velocity. This flexible yet simple source parameterization allows us to constrain the first-order rupture complexity of large earthquakes robustly. Additionally, the relatively few parameters in the inverse problem yield improved uncertainty analysis based on Markov chain Monte Carlo sampling in a Bayesian framework. Synthetic tests and applications of the MHS method to real earthquakes show that our method can capture the major features of large earthquake rupture processes, and provide information for more detailed rupture history analysis.
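    The Bayesian sampling machinery mentioned above can be sketched with a generic Metropolis chain on a single toy parameter (labeled a rupture velocity here, fitted to synthetic data); the actual MHS inversion samples full sub-event parameter sets, not this toy posterior.

```python
import math
import random

random.seed(1)
data = [2.9, 3.1, 3.0, 3.2]      # synthetic "observed" velocities, km/s
sigma = 0.1                      # assumed observational noise

def log_post(v):
    # Gaussian likelihood with a flat prior
    return -sum((d - v) ** 2 for d in data) / (2 * sigma ** 2)

v, chain = 2.0, []
for _ in range(20_000):
    prop = v + random.gauss(0.0, 0.1)            # random-walk proposal
    if math.log(random.random()) < log_post(prop) - log_post(v):
        v = prop                                  # Metropolis accept
    chain.append(v)

burn = chain[5_000:]                              # discard burn-in
mean_v = sum(burn) / len(burn)
print(f"posterior mean ≈ {mean_v:.2f} km/s")
```

    With few parameters, such a chain mixes quickly and the spread of the retained samples directly quantifies the uncertainty the abstract refers to.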

  15. Very Large Inflammatory Odontogenic Cyst with Origin on a Single Long Time Traumatized Lower Incisor

    Science.gov (United States)

    Freitas, Filipe; Andre, Saudade; Moreira, Andre; Carames, Joao

    2015-01-01

    One of the consequences of traumatic injuries is the chance that aseptic pulp necrosis will occur, which in time may become infected and give origin to periapical pathosis. Although apical granulomas and cysts are common conditions, their appearance as an extremely large radiolucent image is a rare finding. Differential diagnosis with other radiographically similar pathologies, such as keratocystic odontogenic tumour or unicystic ameloblastoma, is mandatory. The purpose of this paper is to report a very large radicular cyst caused by a single mandibular incisor traumatized long ago, in a 60-year-old male. Medical and clinical histories were obtained, radiographic and cone beam CT examinations were performed, and an initial incisional biopsy was done. The final decision was to perform surgical enucleation of the lesion, 51.4 mm in length. Biopsy analysis of the enucleated tissue rendered the diagnosis of an inflammatory odontogenic cyst. A 2-year follow-up showed complete bone recovery. PMID:26393219

  16. Long-Term Fault Memory: A New Time-Dependent Recurrence Model for Large Earthquake Clusters on Plate Boundaries

    Science.gov (United States)

    Salditch, L.; Brooks, E. M.; Stein, S.; Spencer, B. D.; Campbell, M. R.

    2017-12-01

    A challenge for earthquake hazard assessment is that geologic records often show large earthquakes occurring in temporal clusters separated by periods of quiescence. For example, in Cascadia, a paleoseismic record going back 10,000 years shows four to five clusters separated by approximately 1,000 year gaps. If we are still in the cluster that began 1700 years ago, a large earthquake is likely to happen soon. If the cluster has ended, a great earthquake is less likely. For a Gaussian distribution of recurrence times, the probability of an earthquake in the next 50 years is six times larger if we are still in the most recent cluster. Earthquake hazard assessments typically employ one of two recurrence models, neither of which directly incorporate clustering. In one, earthquake probability is time-independent and modeled as Poissonian, so an earthquake is equally likely at any time. The fault has no "memory" because when a prior earthquake occurred has no bearing on when the next will occur. The other common model is a time-dependent earthquake cycle in which the probability of an earthquake increases with time until one happens, after which the probability resets to zero. Because the probability is reset after each earthquake, the fault "remembers" only the last earthquake. This approach can be used with any assumed probability density function for recurrence times. We propose an alternative, Long-Term Fault Memory (LTFM), a modified earthquake cycle model where the probability of an earthquake increases with time until one happens, after which it decreases, but not necessarily to zero. Hence the probability of the next earthquake depends on the fault's history over multiple cycles, giving "long-term memory". Physically, this reflects an earthquake releasing only part of the elastic strain stored on the fault. We use the LTFM to simulate earthquake clustering along the San Andreas Fault and Cascadia. 
In some portions of the simulated earthquake history, events would
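    The LTFM mechanism described above (event probability that grows with stored strain and drops only partially at each event) can be sketched as a toy simulation; the parameters below are illustrative, not calibrated to the San Andreas or Cascadia records.

```python
import random

random.seed(42)
LOAD = 1.0        # strain accumulated per year
RELEASE = 60.0    # strain released per earthquake (only PART of the total)
SCALE = 5000.0    # converts stored strain to annual event probability

strain, events = 0.0, []
for year in range(10_000):
    strain += LOAD
    if random.random() < strain / SCALE:     # probability grows with strain
        events.append(year)
        strain = max(0.0, strain - RELEASE)  # partial release: fault "remembers"

gaps = [b - a for a, b in zip(events, events[1:])]
print(f"{len(events)} events, recurrence from {min(gaps)} to {max(gaps)} years")
```

    Because the strain (and hence the probability) does not reset to zero after an event, event likelihood stays elevated for a while afterwards, unlike the standard reset-to-zero earthquake cycle model.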

  17. Existing ingestion guidance: Problems and recommendations

    Energy Technology Data Exchange (ETDEWEB)

    Mooney, Robert R; Ziegler, Gordon L; Peterson, Donald S [Environmental Radiation Section, Division of Radiation Protection, WA (United States)

    1989-09-01

    Washington State has been developing plans and procedures for responding to nuclear accidents since the early 1970s. A key part of this process has been formulating a method for calculating ingestion pathway concentration guides (CGs). Such a method must be both technically sound and easy to use. This process has been slow and frustrating. However, much technical headway has been made in recent years, and hopefully the experience of the State of Washington will provide useful insight into problems with the existing guidance. Several recommendations are offered on ways to deal with these problems. In January 1986, the state held an ingestion pathway exercise which required the determination of allowed concentrations of isotopes for various foods, based upon reactor source term and field data. Objectives of the exercise were not met because of the complexity of the necessary calculations. A major problem was that the allowed concentrations had to be computed for each isotope and each food group, given assumptions on the average diet. To solve problems identified during that exercise, Washington developed, by March 1986, partitioned CGs. These CGs apportioned doses from each food group for an assumed mix of radionuclides expected to result from a reactor accident. This effort was therefore in place just in time for actual use during the Chernobyl fallout episode in May 1986. This technique was refined and described in a later report and presented at the 1987 annual meeting of the Health Physics Society. Realizing the technical weaknesses which still existed and a need to simplify the numbers for decision makers, Washington State has been developing computer methods to quickly calculate, from an accident-specific relative mix of isotopes, CGs which allow a single radionuclide concentration for all food groups. 
This latest approach allows constant CGs for different periods of time following the accident, instead of peak CGs, which are good only for a short time after the

  18. Existing ingestion guidance: Problems and recommendations

    International Nuclear Information System (INIS)

    Mooney, Robert R.; Ziegler, Gordon L.; Peterson, Donald S.

    1989-01-01

    Washington State has been developing plans and procedures for responding to nuclear accidents since the early 1970s. A key part of this process has been formulating a method for calculating ingestion pathway concentration guides (CGs). Such a method must be both technically sound and easy to use. This process has been slow and frustrating. However, much technical headway has been made in recent years, and hopefully the experience of the State of Washington will provide useful insight to problems with the existing guidance. Several recommendations are offered on ways to deal with these problems. In January 1986, the state held an ingestion pathway exercise which required the determination of allowed concentrations of isotopes for various foods, based upon reactor source term and field data. Objectives of the exercise were not met because of the complexity of the necessary calculations. A major problem was that the allowed concentrations had to be computed for each isotope and each food group, given assumptions on the average diet. To solve problems identified during that exercise, Washington developed, by March 1986, partitioned CGs. These CGs apportioned doses from each food group for an assumed mix of radionuclides expected to result from a reactor accident. This effort was therefore in place just in time for actual use during the Chernobyl fallout episode in May 1986. This technique was refined and described in a later report and presented at the 1987 annual meeting of the Health Physics Society. Realizing the technical weaknesses which still existed and a need to simplify the numbers for decision makers, Washington State has been developing computer methods to quickly calculate, from an accident specific relative mix of isotopes, CGs which allow a single radionuclide concentration for all food groups. 
This latest approach allows constant CGs for different periods of time following the accident, instead of peak CGs, which are good only for a short time after the

  19. Prototype for the ALEPH Time Projection Chamber

    CERN Multimedia

    1980-01-01

    This is a prototype endplate piece constructed during R&D for the ALEPH Time Projection Chamber (TPC). ALEPH was one of 4 experiments at CERN's 27km Large Electron Positron collider (LEP) that ran from 1989 to 2000. ALEPH's TPC was a large-volume tracking chamber, 4.4 metres long and 3.6 metres in diameter - the largest TPC in existence at the time. This object is one of the endplates of a “Kind” sector, the smallest of the three types of sectors. The patterns etched into the copper form the cathode pads that measured particle track coordinates in the r-phi direction. The TPC included a laser calibration system, a gating system to prevent space charge buildup, and a new radial pad geometry to improve resolution. The ALEPH TPC allowed for precise momentum measurements of the high-momentum particles from W and Z decays. The following institutes participated: CERN, Athens, Glasgow, Mainz, MPI Munich, INFN-Pisa, INFN-Trieste, Wisconsin.

  20. Optimization of MIMO Systems Capacity Using Large Random Matrix Methods

    Directory of Open Access Journals (Sweden)

    Philippe Loubaton

    2012-11-01

    Full Text Available This paper provides a comprehensive introduction to large random matrix methods for optimizing the input covariance matrix of the mutual information of MIMO systems. It is first recalled informally how large-system approximations of the mutual information can be derived. Then, the optimization of the approximations is discussed, and important methodological points that are not necessarily covered by the existing literature are addressed, including the strict concavity of the approximation, the structure of the argument of its maximum, the accuracy of the large-system approach with regard to the number of antennas, and the justification of iterative water-filling optimization algorithms. While existing papers have developed methods adapted to a specific model, this contribution tries to provide a unified view of the large-system approximation approach.
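The water-filling idea the abstract refers to can be illustrated with the classical single-user allocation rule, sketched below. The channel gains and power budget are hypothetical, and this is the basic allocation step rather than the large-system covariance optimization developed in the paper:

```python
def water_filling(gains, total_power, iters=100):
    """Classical water-filling: allocate p_k = max(mu - 1/g_k, 0) over
    channels with gains g_k, choosing the water level mu by bisection
    so that the total allocated power matches the budget."""
    inv = [1.0 / g for g in gains]
    lo, hi = 0.0, total_power + max(inv)
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        if sum(max(mu - v, 0.0) for v in inv) > total_power:
            hi = mu
        else:
            lo = mu
    return [max(lo - v, 0.0) for v in inv]

# Stronger channels receive more power; the allocation exhausts the budget.
powers = water_filling([2.0, 1.0, 0.5], total_power=3.0)
```

Iterative water-filling algorithms apply this step repeatedly while updating the effective channel seen by each user or eigenmode.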

  1. Large amplitude ion-acoustic waves in a plasma with an electron beam

    International Nuclear Information System (INIS)

    Nejoh, Y.; Sanuki, H.

    1995-01-01

    The nonlinear wave structures of large amplitude ion-acoustic waves are studied in a plasma with an electron beam, using the pseudopotential method. The region of existence of large amplitude ion-acoustic waves is examined, showing that the existence condition depends sensitively on parameters such as the electron beam temperature, the ion temperature, the electrostatic potential, and the concentration of the electron beam density. It turns out that the region of existence spreads as the beam temperature increases, whereas the effect of the electron beam velocity is relatively small. New findings on large amplitude ion-acoustic waves in a plasma with an electron beam are predicted. copyright 1995 American Institute of Physics

  2. Transfer and characterization of large-area CVD graphene for transparent electrode applications

    DEFF Research Database (Denmark)

    Whelan, Patrick Rebsdorf

    addresses key issues for industrial integration of large area graphene for optoelectronic devices. This is done through optimization of existing characterization methods and development of new transfer techniques. A method for accurately measuring the decoupling of graphene from copper catalysts...... and the electrical properties of graphene after transfer are superior compared to the standard etching transfer method. Spatial mapping of the electrical properties of transferred graphene is performed using terahertz time-domain spectroscopy (THz-TDS). The non-contact nature of THz-TDS and the fact...

  3. Does Late-onset Anorexia Nervosa Exist? Findings From a Comparative Study in Singapore.

    Science.gov (United States)

    Tan, Shian Ming; Kwok, Kah Foo Victor; Zainal, Kelly A; Lee, Huei Yen

    2018-03-01

    The incidence of cases of older onset anorexia nervosa (AN) has increased in recent years. However, the literature on late-onset AN has been inconclusive. The goal of this study was to compare late-onset with early-onset cases of AN. Cases of AN presenting to an eating disorders treatment service were identified and the associated medical records were studied retrospectively. Of the 577 cases of AN that were studied, 7.1% were late-onset. Unlike the early-onset cases of AN, the late-onset cases reported less teasing and more relationship problems as a trigger for the illness. They were also less likely to join the eating disorders treatment program. Otherwise, the late-onset AN cases were largely similar to the early-onset cases. Although differences exist between early-onset and late-onset cases of AN, these are few. Until stronger evidence emerges over time, there currently seems to be minimal justification to accord late-onset AN a unique position in psychiatric nosology.

  4. Exploiting deterministic maintenance opportunity windows created by conservative engineering design rules that result in free time locked into large high-speed coupled production lines with finite buffers

    Directory of Open Access Journals (Sweden)

    Durandt, Casper

    2016-08-01

    Full Text Available Conservative engineering design rules for large serial coupled production processes result in machines having locked-in free time (also called ‘critical downtime’ or ‘maintenance opportunity windows’), which causes idle time if not used. Operators are not able to assess a large production process holistically, and so may not be aware that they form the current bottleneck – or that they have free time available due to interruptions elsewhere. A real-time method is developed to accurately calculate and display free time by location and magnitude, and efficiency improvements are demonstrated in large-scale production runs.

  5. Finite Time Blowup in a Realistic Food-Chain Model

    KAUST Repository

    Parshad, Rana; Ait Abderrahmane, Hamid; Upadhyay, Ranjit Kumar; Kumari, Nitu

    2013-01-01

    We investigate a realistic three-species food-chain model with a generalist top predator. The model, based on a modified version of the Leslie-Gower scheme, incorporates mutual interference in all three populations and generalizes several other known models in the ecological literature. We show that the model exhibits finite time blowup in a certain parameter range and for large enough initial data. This result implies that finite time blowup is possible in a large class of such three-species food-chain models. We propose a modification to the model and prove that the modified model has globally existing classical solutions, as well as a global attractor. We reconstruct the attractor using nonlinear time series analysis and show that it possesses rich dynamics, including chaos in a certain parameter regime, whilst avoiding blowup in any parameter regime. We also provide estimates of its fractal dimension, as well as numerical simulations to visualise the spatiotemporal chaos.
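The mechanism of finite time blowup for large initial data can be seen in the textbook scalar caricature u' = u², which is far simpler than the three-species model studied in the paper but exhibits the same phenomenon: the solution escapes to infinity at a finite time that shrinks as the initial data grow.

```python
def blowup_time(u0):
    """Exact blowup time for u' = u**2, u(0) = u0 > 0: the solution
    u(t) = u0 / (1 - u0*t) escapes to infinity at T* = 1/u0."""
    return 1.0 / u0

def euler_until(u0, dt=1e-4, cap=1e6):
    """Forward-Euler integration of u' = u**2 until u exceeds cap,
    returning the (numerical) time at which that happens."""
    u, t = u0, 0.0
    while u < cap:
        u += dt * u * u
        t += dt
    return t

# Larger initial data blow up sooner: T*(2) = 0.5 < T*(1) = 1.
```

The numerical escape time tracks the exact blowup time closely, since almost all of the growth happens just before T*.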

  6. Finite Time Blowup in a Realistic Food-Chain Model

    KAUST Repository

    Parshad, Rana

    2013-05-19

    We investigate a realistic three-species food-chain model with a generalist top predator. The model, based on a modified version of the Leslie-Gower scheme, incorporates mutual interference in all three populations and generalizes several other known models in the ecological literature. We show that the model exhibits finite time blowup in a certain parameter range and for large enough initial data. This result implies that finite time blowup is possible in a large class of such three-species food-chain models. We propose a modification to the model and prove that the modified model has globally existing classical solutions, as well as a global attractor. We reconstruct the attractor using nonlinear time series analysis and show that it possesses rich dynamics, including chaos in a certain parameter regime, whilst avoiding blowup in any parameter regime. We also provide estimates of its fractal dimension, as well as numerical simulations to visualise the spatiotemporal chaos.

  7. Numerical simulation of pseudoelastic shape memory alloys using the large time increment method

    Science.gov (United States)

    Gu, Xiaojun; Zhang, Weihong; Zaki, Wael; Moumni, Ziad

    2017-04-01

    The paper presents a numerical implementation of the large time increment (LATIN) method for the simulation of shape memory alloys (SMAs) in the pseudoelastic range. The method was initially proposed as an alternative to the conventional incremental approach for the integration of nonlinear constitutive models. It is adapted here for the simulation of pseudoelastic SMA behavior using the Zaki-Moumni model and is shown to be especially useful in situations where the phase transformation process exhibits little or no hardening. In these situations, a slight stress variation in a load increment can result in large variations of strain and local state variables, which may lead to difficulties in numerical convergence. In contrast to the conventional incremental method, the LATIN method solves the global equilibrium and local consistency conditions sequentially over the entire loading path. The final solution must simultaneously satisfy the conditions of static and kinematic admissibility and consistency, which is achieved after several iterations. The 3D numerical implementation uses an implicit algorithm and is applied to finite element simulation in the software Abaqus. Computational tests demonstrate the ability of this approach to simulate SMAs presenting flat phase transformation plateaus and subjected to complex loading cases, such as the quasi-static behavior of a stent structure. Some numerical results are contrasted with those obtained using step-by-step incremental integration.

  8. Geothermal ORC Systems Using Large Screw Expanders

    OpenAIRE

    Biederman, Tim R.; Brasz, Joost J.

    2014-01-01

    Geothermal ORC Systems using Large Screw Expanders Tim Biederman Cyrq Energy Abstract This paper describes a low-temperature Organic Rankine Cycle power recovery system that uses a screw expander, a derivative of Kaishan's line of screw compressors, as its power unit. The screw expander design is a modified version of Kaishan's existing refrigeration compressor used on water-cooled chillers. Starting the ORC development program with existing refrigeration screw compre...

  9. Large deviations

    CERN Document Server

    Varadhan, S R S

    2016-01-01

    The theory of large deviations deals with rates at which probabilities of certain events decay as a natural parameter in the problem varies. This book, which is based on a graduate course on large deviations at the Courant Institute, focuses on three concrete sets of examples: (i) diffusions with small noise and the exit problem, (ii) large time behavior of Markov processes and their connection to the Feynman-Kac formula and the related large deviation behavior of the number of distinct sites visited by a random walk, and (iii) interacting particle systems, their scaling limits, and large deviations from their expected limits. For the most part the examples are worked out in detail, and in the process the subject of large deviations is developed. The book will give the reader a flavor of how large deviation theory can help in problems that are not posed directly in terms of large deviations. The reader is assumed to have some familiarity with probability, Markov processes, and interacting particle systems.

  10. Post-hoc pattern-oriented testing and tuning of an existing large model: lessons from the field vole.

    Directory of Open Access Journals (Sweden)

    Christopher J Topping

    Full Text Available Pattern-oriented modeling (POM) is a general strategy for modeling complex systems. In POM, multiple patterns observed at different scales and hierarchical levels are used to optimize model structure, to test and select sub-models of key processes, and for calibration. So far, POM has been used for developing new models and for models of low to moderate complexity. It remains unclear, though, whether the basic idea of POM, to utilize multiple patterns, could also be used to test and possibly develop existing and established models of high complexity. Here, we use POM to test, calibrate, and further develop an existing agent-based model of the field vole (Microtus agrestis), which was developed and tested within the ALMaSS framework. This framework is complex because it includes a high-resolution representation of the landscape and its dynamics, of the individual's behavior, and of the interaction between landscape and individual behavior. Results of fitting to the range of patterns chosen were generally very good, but the procedure required to achieve this was long and complicated. To obtain good correspondence between the model and the real world, it was often necessary to model the real-world environment closely. We therefore conclude that post-hoc POM is a useful and viable way to test a highly complex simulation model, but also warn against the dangers of over-fitting to real-world patterns that lack details in their explanatory driving factors. To overcome some of these obstacles we suggest the adoption of open-science and open-source approaches to ecological simulation modeling.

  11. Development of large diamond-tipped saws and their application to cutting large radioactive reinforced concrete structures

    International Nuclear Information System (INIS)

    Rawlings, G.W.

    1985-01-01

    The object of this research was to develop a large circular saw, capable of cutting away, by remote control, the inner radio-activated layer of reinforced concrete biological shields or the pre-stressed concrete pressure vessel of gas-cooled reactors. Initial investigations and enquiries put to the existing saw industry established that, although there were blades in use approaching the size and type required, the development of large machines was restricted to the fixed-bed type because there was little demand for deep sawing in the construction or demolition industry. Preliminary work was carried out in 1981 to demonstrate the largest available wall saw at that time, which showed that by changing the blade three times, a kerf 810 mm deep could be achieved. From this demonstration, the design and development of a 'free frame saw' and construction of a 660 mm blade as well as a 2500 mm blade were performed. Initially, the 660 mm blade was used to cut the concrete and reinforcement, followed by the 2500 mm blade to produce a 1 m kerf. Subsequent development and testing demonstrated that the 2500 mm blade could be controlled to 'plunge cut', that is, to cut straight down into the reinforced concrete to a depth of 1 m in 7 minutes, and would then advance at 160 mm/min; this is a work rate of 10 m²/hr. The final demonstration was to mount the saw on an extendible boom and remove a 1 m³ block of reinforced concrete from the vertical face of a test wall.

  12. Empirical Analysis on The Existence of The Phillips Curve

    Directory of Open Access Journals (Sweden)

    Shaari Mohd Shahidan

    2018-01-01

    Full Text Available The Phillips curve shows the trade-off relationship between the inflation and unemployment rates. When inflation rises due to high economic growth, more jobs are available and unemployment therefore falls. However, the existence of the Phillips curve in high-income countries has not been much discussed. Countries with high income should have low unemployment rates, suggesting high inflation. However, some high-income countries, such as the United States in the 1970s, could not avert stagflation, whereby high unemployment and high inflation occurred at the same time. This situation is contrary to the Phillips curve. Therefore, this study aims to investigate the existence of the Phillips curve in high-income countries for the period 1990-2014 using panel data analysis. The most interesting finding of this study is the existence of a bidirectional relationship between the unemployment rate and the inflation rate in both the long and short runs. Therefore, governments must choose between stabilizing the inflation rate and reducing the unemployment rate.

  13. Use of primary corticosteroid injection in the management of plantar fasciopathy: is it time to challenge existing practice?

    Science.gov (United States)

    Kirkland, Paul; Beeson, Paul

    2013-01-01

    Plantar fasciopathy (PF) is characterized by degeneration of the fascia at the calcaneal enthesis. It is a common cause of foot pain, accounting for 90% of clinical presentations of heel pathology. In 2009-2010, 9.3 million working days were lost in England due to musculoskeletal disorders, with 2.4 million of those attributable to lower-limb disorders, averaging 16.3 lost working days per case. Numerous studies have attempted to establish the short- and long-term clinical efficacy of corticosteroid injections in the management of PF. Earlier studies have not informed clinical practice. As the research base has developed, evidence has emerged supporting clinical efficacy. With diverse opinions surrounding the etiology and efficacy debate, there does not seem to be a consensus on a common treatment pathway. For example, in England, the National Institute for Health and Clinical Excellence does not publish strategic guidance for clinical practice. Herein, we review and evaluate core literature that examines the clinical efficacy of corticosteroid injection as a treatment for PF. Outcome measures were wide-ranging but largely yielded results supportive of the short- and long-term benefits of this modality. The analysis also looked to establish, where possible, "proof of concept." This article provides evidence supporting the clinical efficacy of corticosteroid injections, in particular those guided by imaging technology. The evidence challenges existing orthodoxy, which marginalizes this treatment as a secondary option. This challenge is supported by recently revised guidelines published by the American College of Foot and Ankle Surgeons advocating corticosteroid injection as a primary treatment option.

  14. Gene coexpression measures in large heterogeneous samples using count statistics.

    Science.gov (United States)

    Wang, Y X Rachel; Waterman, Michael S; Huang, Haiyan

    2014-11-18

    With the advent of high-throughput technologies making large-scale gene expression data readily available, developing appropriate computational tools to process these data and distill insights into systems biology has been an important part of the "big data" challenge. Gene coexpression is one of the earliest techniques developed that is still widely in use for functional annotation, pathway analysis, and, most importantly, the reconstruction of gene regulatory networks, based on gene expression data. However, most coexpression measures do not specifically account for local features in expression profiles. For example, it is very likely that the patterns of gene association may change or only exist in a subset of the samples, especially when the samples are pooled from a range of experiments. We propose two new gene coexpression statistics based on counting local patterns of gene expression ranks to take into account the potentially diverse nature of gene interactions. In particular, one of our statistics is designed for time-course data with local dependence structures, such as time series coupled over a subregion of the time domain. We provide asymptotic analysis of their distributions and power, and evaluate their performance against a wide range of existing coexpression measures on simulated and real data. Our new statistics are fast to compute, robust against outliers, and show comparable and often better general performance.
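A minimal example of a rank-based count statistic, in the spirit of (though much simpler than) the statistics proposed in the paper, is the count of sample pairs on which two expression profiles agree in ordering:

```python
from itertools import combinations

def concordant_pair_count(x, y):
    """Count sample pairs (i, j) on which the two expression profiles
    agree in ordering; a Kendall-type rank pattern count."""
    return sum(
        1
        for i, j in combinations(range(len(x)), 2)
        if (x[i] - x[j]) * (y[i] - y[j]) > 0
    )

# Perfectly coexpressed profiles agree on all C(5, 2) = 10 pairs.
x = [1, 2, 3, 4, 5]
y = [2, 4, 6, 8, 10]
```

Because only ranks enter the count, such statistics are robust to outliers; restricting the counted pairs to a window of samples gives the local flavor described in the abstract.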

  15. Existence and asymptotic behavior of the wave equation with dynamic boundary conditions

    KAUST Repository

    Graber, Philip Jameson; Said-Houari, Belkacem

    2012-01-01

    The goal of this work is to study a model of the strongly damped wave equation with dynamic boundary conditions and nonlinear boundary/interior sources and nonlinear boundary/interior damping. First, applying nonlinear semigroup theory, we show the existence and uniqueness of local-in-time solutions. In addition, we show that in the strongly damped case solutions gain additional regularity for positive times t>0. Second, we show that under some restrictions on the initial data, if the interior source dominates the interior damping term and the boundary source dominates the boundary damping, then the solution grows as an exponential function. Moreover, in the absence of the strong damping term, we prove that the solution ceases to exist and blows up in finite time. © 2012 Springer Science+Business Media, LLC.

  16. Existence and asymptotic behavior of the wave equation with dynamic boundary conditions

    KAUST Repository

    Graber, Philip Jameson

    2012-03-07

    The goal of this work is to study a model of the strongly damped wave equation with dynamic boundary conditions and nonlinear boundary/interior sources and nonlinear boundary/interior damping. First, applying nonlinear semigroup theory, we show the existence and uniqueness of local-in-time solutions. In addition, we show that in the strongly damped case solutions gain additional regularity for positive times t>0. Second, we show that under some restrictions on the initial data, if the interior source dominates the interior damping term and the boundary source dominates the boundary damping, then the solution grows as an exponential function. Moreover, in the absence of the strong damping term, we prove that the solution ceases to exist and blows up in finite time. © 2012 Springer Science+Business Media, LLC.

  17. Storm Time Global Observations of Large-Scale TIDs From Ground-Based and In Situ Satellite Measurements

    Science.gov (United States)

    Habarulema, John Bosco; Yizengaw, Endawoke; Katamzi-Joseph, Zama T.; Moldwin, Mark B.; Buchert, Stephan

    2018-01-01

    This paper discusses the ionosphere's response to the largest storm of solar cycle 24, during 16-18 March 2015. We have used Global Navigation Satellite Systems (GNSS) total electron content data to study large-scale traveling ionospheric disturbances (TIDs) over the American, African, and Asian regions. Equatorward large-scale TIDs propagated and crossed the equator into the opposite hemisphere, especially over the American and Asian sectors. Poleward TIDs with velocities in the range ≈400-700 m/s were observed during local daytime over the American and African sectors, originating from around the geomagnetic equator. Our investigation over the American sector shows that poleward TIDs may have been launched by increased Lorentz coupling as a result of a penetrating electric field during the southward turning of the interplanetary magnetic field, Bz. We observed an increase in SWARM satellite electron density (Ne) at the same time that equatorward large-scale TIDs were visible over the European-African sector. The altitude Ne profiles from ionosonde observations suggest that storm-induced TIDs may have influenced the plasma distribution in the topside ionosphere at SWARM satellite altitude.
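TID propagation speeds of the kind reported above can, in principle, be estimated from the arrival-time lag of a total electron content perturbation at two latitudes. A toy sketch, assuming purely meridional propagation along a great-circle arc; the station latitudes and lag below are hypothetical, not the paper's data:

```python
import math

def tid_speed(lat1_deg, lat2_deg, lag_s, earth_radius_m=6.371e6):
    """Meridional phase speed (m/s) of a disturbance detected at two
    latitudes with a given arrival-time lag, assuming propagation
    along a meridian arc of a spherical Earth."""
    arc_m = earth_radius_m * math.radians(abs(lat2_deg - lat1_deg))
    return arc_m / lag_s

# A 10-degree latitude separation crossed in 2000 s gives ~556 m/s,
# within the 400-700 m/s range reported above.
speed = tid_speed(30.0, 40.0, 2000.0)
```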

  18. Multiple choices of time in quantum cosmology

    International Nuclear Information System (INIS)

    Małkiewicz, Przemysław

    2015-01-01

    It is often conjectured that a choice of time function merely sets up a frame for the quantum evolution of the gravitational field, meaning that all choices should be in some sense compatible. In order to explore this conjecture (and the meaning of compatibility), we develop suitable tools for determining the relation between quantum theories based on different time functions. First, we discuss how a time function fixes a canonical structure on the constraint surface. The presentation includes both the kinematical and the reduced perspective, and the relation between them. Second, we formulate twin theorems about the existence of two inequivalent maps between any two deparameterizations, a formal canonical and a coordinate one. They are used to separate the effects induced by choice of clock and other factors. We show, in an example, how the spectra of quantum observables are transformed under the change of clock and prove, via a general argument, the existence of choice-of-time-induced semiclassical effects. Finally, we study an example, in which we find that the semiclassical discrepancies can in fact be arbitrarily large for dynamical observables. We conclude that the values of critical energy density or critical volume in the bouncing scenarios of quantum cosmology cannot in general be at the Planck scale, and always need to be given with reference to a specific time function. (paper)

  19. Information on existing monitoring practices for Harmful Algal ...

    African Journals Online (AJOL)

    spamer

    emitting diodes in the blue, green and red part of the visible spectrum ..... highly affected by sun angle, cloud cover and time of .... for Detecting Large-Scale Environmental Change. Kahru, ... Oceanographic Commission Technical Series. No.

  20. Performance of automatic generation control mechanisms with large-scale wind power

    Energy Technology Data Exchange (ETDEWEB)

    Ummels, B.C.; Gibescu, M.; Paap, G.C. [Delft Univ. of Technology (Netherlands); Kling, W.L. [Transmission Operations Department of TenneT bv (Netherlands)

    2007-11-15

    The unpredictability and variability of wind power increasingly challenges real-time balancing of supply and demand in electric power systems. In liberalised markets, balancing is a responsibility jointly held by the TSO (real-time power balancing) and PRPs (energy programs). In this paper, a procedure is developed for the simulation of power system balancing and the assessment of AGC performance in the presence of large-scale wind power, using the Dutch control zone as a case study. The simulation results show that the performance of existing AGC-mechanisms is adequate for keeping ACE within acceptable bounds. At higher wind power penetrations, however, the capabilities of the generation mix are increasingly challenged and additional reserves are required at the same level. (au)
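For reference, the quantity an AGC loop regulates is the area control error (ACE). A minimal sketch using a UCTE-style convention, ACE = ΔP_tie + K·Δf; the bias value and operating snapshot below are hypothetical, not the Dutch control zone parameters:

```python
def area_control_error(dp_tie_mw, df_hz, k_mw_per_hz):
    """ACE = dP_tie + K * df: dp_tie_mw is the deviation of tie-line
    exchange from schedule (MW), df_hz the frequency deviation (Hz),
    k_mw_per_hz the area's frequency bias (MW/Hz). AGC steers
    generation so that ACE returns to zero."""
    return dp_tie_mw + k_mw_per_hz * df_hz

# Hypothetical snapshot: exporting 50 MW above schedule while system
# frequency is 20 mHz below nominal, with K = 600 MW/Hz.
ace = area_control_error(50.0, -0.02, 600.0)
```

Large-scale wind power enters this picture by increasing the variability of dp_tie_mw, which the study above shows the existing AGC mechanisms can still absorb.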

  1. Existence and convergence theorems for evolutionary hemivariational inequalities of second order

    Directory of Open Access Journals (Sweden)

    Zijia Peng

    2015-03-01

    Full Text Available This article concerns with a class of evolutionary hemivariational inequalities in the framework of evolution triple. Based on the Rothe method, monotonicity-compactness technique and the properties of Clarke's generalized derivative and gradient, the existence and convergence theorems to these problems are established. The main idea in the proof is using the time difference to construct the approximate problems. The work generalizes the existence results on evolution inclusions and hemivariational inequalities of second order.

  2. Use of a large time-compensated scintillation detector in neutron time-of-flight measurements

    International Nuclear Information System (INIS)

    Goodman, C.D.

    1979-01-01

    A scintillator for neutron time-of-flight measurements is positioned at a desired angle with respect to the neutron beam, and as a function of the energy thereof, such that the sum of the transit times of the neutrons and photons in the scintillator is substantially independent of the point of scintillation within the scintillator. Extrapolated-zero timing is employed rather than the usual constant-fraction timing. As a result, a substantially larger scintillator can be employed, which substantially increases the data rate and shortens the experiment time. 3 claims
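The compensation condition can be sketched with elementary kinematics: if the bar is tilted so that the neutron's sweep speed along it equals the light propagation speed c/n, the summed neutron-plus-photon transit time no longer depends on where the scintillation occurs. This is an idealized geometry rather than the patented design, and the refractive index and neutron speed below are hypothetical:

```python
import math

def compensation_angle_deg(beta_n, n_refr=1.58):
    """Angle between the scintillator bar and the neutron flight path
    at which cos(theta) = n * beta_n, i.e. the neutron sweep speed
    along the bar equals the light speed c/n in the material, so the
    summed neutron + photon transit time is independent of the
    scintillation point. Raises when the neutron is too fast for
    compensation (n * beta_n >= 1)."""
    cos_t = n_refr * beta_n
    if cos_t >= 1.0:
        raise ValueError("no compensation angle exists for this speed")
    return math.degrees(math.acos(cos_t))

# Faster neutrons require a smaller tilt angle.
```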

  3. THE WIGNER–FOKKER–PLANCK EQUATION: STATIONARY STATES AND LARGE TIME BEHAVIOR

    KAUST Repository

    ARNOLD, ANTON; GAMBA, IRENE M.; GUALDANI, MARIA PIA; MISCHLER, STÉ PHANE; MOUHOT, CLEMENT; SPARBER, CHRISTOF

    2012-01-01

    solution in a weighted Sobolev space. A key ingredient of the proof is a new result on the existence of spectral gaps for Fokker-Planck type operators in certain weighted L²-spaces. In addition we show that the steady state corresponds to a positive density

  4. A neuromorphic implementation of multiple spike-timing synaptic plasticity rules for large-scale neural networks

    Directory of Open Access Journals (Sweden)

    Runchun Mark Wang

    2015-05-01

    Full Text Available We present a neuromorphic implementation of multiple synaptic plasticity learning rules, which include both Spike Timing Dependent Plasticity (STDP) and Spike Timing Dependent Delay Plasticity (STDDP). We present a fully digital implementation as well as a mixed-signal implementation, both of which use a novel dynamic-assignment time-multiplexing approach and support up to 2^26 (64M) synaptic plasticity elements. Rather than implementing dedicated synapses for particular types of synaptic plasticity, we implemented a more generic synaptic plasticity adaptor array that is separate from the neurons in the neural network. Each adaptor performs synaptic plasticity according to the arrival times of the pre- and post-synaptic spikes assigned to it, and sends out a weighted and/or delayed pre-synaptic spike to the target synapse in the neural network. This strategy provides great flexibility for building complex large-scale neural networks, as a neural network can be configured for multiple synaptic plasticity rules without changing its structure. We validate the proposed neuromorphic implementations with measurement results and illustrate that the circuits are capable of performing both STDP and STDDP. We argue that it is practical to scale the work presented here up to 2^36 (64G) synaptic adaptors on a current high-end FPGA platform.
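For readers unfamiliar with STDP, the standard pair-based exponential rule (a generic textbook form with hypothetical amplitudes and time constant, not necessarily the exact update implemented in the hardware described) is:

```python
import math

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Pair-based exponential STDP: potentiate when the pre-synaptic
    spike precedes the post-synaptic spike (dt = t_post - t_pre > 0),
    depress otherwise, with a magnitude that decays exponentially in
    the spike-time difference."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)
    return -a_minus * math.exp(dt_ms / tau_ms)

# Closely paired spikes change the weight more than distant ones.
```

An adaptor of the kind described above would evaluate such a rule from the pre/post spike arrival times assigned to it, then forward the reweighted (or, for STDDP, delayed) spike to its target synapse.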

  5. Time-resolved triton burnup measurement using the scintillating fiber detector in the Large Helical Device

    Science.gov (United States)

    Ogawa, K.; Isobe, M.; Nishitani, T.; Murakami, S.; Seki, R.; Nakata, M.; Takada, E.; Kawase, H.; Pu, N.; LHD Experiment Group

    2018-03-01

    Time-resolved measurement of triton burnup is performed with a scintillating fiber detector system in the deuterium operation of the Large Helical Device. The scintillating fiber detector system is composed of the detector head, consisting of 109 scintillating fibers with a diameter of 1 mm and a length of 100 mm embedded in an aluminum substrate, a magnetic-resistant photomultiplier tube, and a data acquisition system equipped with a 1 GHz sampling rate analog-to-digital converter and a field programmable gate array. A discrimination level of 150 mV was set to extract the pulse signals induced by 14 MeV neutrons, according to the pulse height spectra obtained in the experiment. The decay time of the 14 MeV neutron emission rate after the neutral beam is turned off is measured by the scintillating fiber detector. The decay time is consistent with the decay time of the total neutron emission rate corresponding to the 14 MeV neutrons measured by the neutron flux monitor, as expected. Evaluation of the diffusion coefficient is conducted using a simple classical slowing-down model, the FBURN code. It is found that the diffusion coefficient of tritons is evaluated to be less than 0.2 m² s⁻¹.
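
    Extracting a decay time from a post-beam count-rate trace amounts to fitting an exponential; a minimal sketch on synthetic data (the function name and numbers are illustrative, not from the measurement) is:

```python
import numpy as np

def decay_time(t, counts):
    """Fit N(t) = N0 * exp(-t / tau) by linear regression on log counts."""
    slope, _ = np.polyfit(t, np.log(counts), 1)
    return -1.0 / slope

t = np.linspace(0.0, 0.5, 50)        # seconds after beam turn-off
counts = 1e4 * np.exp(-t / 0.12)     # synthetic 14 MeV count rate, tau = 0.12 s
tau = decay_time(t, counts)          # recovers ~0.12 s
```

    In practice the fit window must exclude the beam-on phase and any background-dominated tail.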

  6. Existence and stability of circular orbits in general static and spherically symmetric spacetimes

    Science.gov (United States)

    Jia, Junji; Liu, Jiawei; Liu, Xionghui; Mo, Zhongyou; Pang, Xiankai; Wang, Yaoguang; Yang, Nan

    2018-02-01

    The existence and stability of circular orbits (COs) in static and spherically symmetric (SSS) spacetimes are important because of their practical and potential usefulness. In this paper, using the fixed point method, we first prove a necessary and sufficient condition on the metric function for the existence of timelike COs in SSS spacetimes. After analyzing the asymptotic behavior of the metric, we then show that an asymptotically flat SSS spacetime that corresponds to a negative Newtonian potential at large r will always allow the existence of COs. The stability of the COs in a general SSS spacetime is then studied using the Lyapunov exponent method. Two sufficient conditions on the (in)stability of the COs are obtained. For null geodesics, a sufficient condition on the metric function for the (in)stability of null COs is also obtained. We then illustrate one powerful application of these results by showing that three SSS spacetimes whose metric functions are not completely known will allow the existence of timelike and/or null COs. We also use our results to assert the existence and (in)stability of COs in a number of known SSS metrics.

  7. The settlement of foundation of existing large structure on soft ground and investigation of its allowable settlement

    International Nuclear Information System (INIS)

    Okamoto, Toshiro

    1987-01-01

    In our laboratory, a study of siting on Quaternary ground is being carried out to make it possible to construct nuclear power plants on soil ground in Japan; an important subject is to understand the bearing capacity, settlement and seismic response of foundations. Measured data were therefore collected on the relation between the ground and the type of foundation, and on the total and differential settlement of already constructed large structures, in order to investigate the real conditions and to examine allowable settlement. The investigated structures are mainly foreign nuclear power plants and domestic and foreign high-rise buildings. The higher the buildings are, the more often raft foundations are used, and the higher the contact pressures are, similar to a nuclear power plant; the discussion therefore mainly concerns raft foundations. It is found that some measured maximum total settlements are larger than previously proposed allowable values. An empirical allowable settlement is therefore derived from the measured values, considering the effects of the width of the base slab, the contact pressure and the foundation ground. Differential settlement is investigated in relation to maximum total settlement, and is formulated considering the width and rigidity of the base slab. Besides, the limit of differential settlement at which the foundation is damaged is obtained, and the limit of maximum total settlement is obtained by combining this with the above-mentioned relation. The obtained allowable value is largely influenced by the width of the base slab, and is less severe than some previously proposed values. It is thus expected that the deformation of the foundation can be rationally assessed when a large structure such as a nuclear power plant is constructed on soft ground. (author)

  8. Predictable progressive Doppler deterioration in IUGR: does it really exist?

    LENUS (Irish Health Repository)

    Unterscheider, Julia

    2013-12-01

    An objective of the Prospective Observational Trial to Optimize Pediatric Health in IUGR (PORTO) study was to evaluate multivessel Doppler changes in a large cohort of intrauterine growth restriction (IUGR) fetuses to establish whether a predictable progressive sequence of Doppler deterioration exists and to correlate these Doppler findings with respective perinatal outcomes.

  9. Advanced Kalman Filter for Real-Time Responsiveness in Complex Systems

    Energy Technology Data Exchange (ETDEWEB)

    Welch, Gregory Francis [UNC-Chapel Hill/University of Central Florida; Zhang, Jinghe [UNC-Chapel Hill/Virginia Tech

    2014-06-10

    Complex engineering systems pose fundamental challenges in real-time operations and control because they are highly dynamic systems consisting of a large number of elements with severe nonlinearities and discontinuities. Today’s tools for real-time complex system operations are mostly based on steady-state models, unable to capture the dynamic nature of the system and too slow to prevent system failures. We developed advanced Kalman filtering techniques and a formulation of dynamic state estimation based on them to capture complex system dynamics in aid of real-time operations and control. In this work, we addressed complex system issues including severe nonlinearity of the system equations, discontinuities caused by system controls and network switches, measurements sparse in space and time, and the real-time requirements of power grid operations. We sought to bridge the disciplinary boundaries between computer science and power systems engineering by introducing methods that leverage both existing and new techniques. While our methods were developed in the context of electrical power systems, they should generalize to other large-scale scientific and engineering applications.
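
    The core predict/update cycle of a linear Kalman filter, the starting point for the dynamic state estimation described above, can be sketched as follows (a minimal textbook sketch, not the authors' implementation; matrices and noise levels are illustrative):

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter."""
    x = F @ x                        # predict state
    P = F @ P @ F.T + Q              # predict covariance
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ (z - H @ x)          # correct with measurement residual
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Track a 1D position/velocity state from position-only measurements.
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # constant-velocity model, dt = 1
H = np.array([[1.0, 0.0]])               # we observe position only
Q, R = 1e-4 * np.eye(2), np.array([[1e-2]])
x, P = np.zeros(2), np.eye(2)
for k in range(1, 21):                   # object moves at unit velocity
    x, P = kalman_step(x, P, np.array([float(k)]), F, H, Q, R)
```

    After the loop the filter has inferred the unobserved velocity (close to 1) from position data alone, which is the essence of dynamic state estimation.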

  10. Compilation of Existing Neutron Screen Technology

    Directory of Open Access Journals (Sweden)

    N. Chrysanthopoulou

    2014-01-01

    Full Text Available The presence of fast neutron spectra in new reactors is expected to have a strong impact on the contained materials, including structural materials, nuclear fuels, neutron reflecting materials, and tritium breeding materials. Therefore, introduction of these reactors into operation will require extensive testing of their components, which must be performed under neutronic conditions representative of those expected to prevail inside the reactor cores when in operation. Due to the limited availability of fast reactors, testing of future reactor materials will mostly take place in water cooled material test reactors (MTRs) by tailoring the neutron spectrum via neutron screens. The latter rely on the utilization of materials capable of absorbing neutrons at specific energies. A large but fragmented body of experience is available on this topic. In this work a comprehensive compilation of the existing neutron screen technology is attempted, focusing on neutron screens developed to locally enhance the fast-over-thermal neutron flux ratio in a reactor core.

  11. Existence of evolutionary variational solutions via the calculus of variations

    Science.gov (United States)

    Bögelein, Verena; Duzaar, Frank; Marcellini, Paolo

    In this paper we introduce a purely variational approach to time-dependent problems, yielding the existence of global parabolic minimizers, that is, ∫_0^T∫_Ω [u·∂_t φ + f(x, Du)] dx dt ⩽ ∫_0^T∫_Ω f(x, Du + Dφ) dx dt whenever T > 0 and φ ∈ C_0^∞(Ω×(0,T), R^N). For the integrand f: Ω×R^{Nn} → [0,∞] we merely assume convexity with respect to the gradient variable and coercivity. These evolutionary variational solutions are obtained as limits of maps depending on space and time that minimize certain convex variational functionals. In the simplest situation, with some growth conditions on f, the method provides the existence of global weak solutions to Cauchy-Dirichlet problems for parabolic systems of the type ∂_t u − div D_ξ f(x, Du) = 0 in Ω×(0,∞).
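
    The limit passage behind such evolutionary variational solutions can be sketched as a time-discrete minimization scheme (a generic minimizing-movements style sketch; the step size h and the functional below are illustrative notation, not quoted from the paper):

```latex
% Fix a step size $h = T/K$. Given $u_{k-1}$, choose $u_k$ as a minimizer of
\[
  \mathbf{F}_k[v] \;=\; \int_\Omega \Big[ \frac{|v - u_{k-1}|^2}{2h} + f(x, Dv) \Big]\, dx
\]
% over maps $v$ with the prescribed boundary values. Convexity of $f$ in the
% gradient variable yields existence of each $u_k$; the piecewise interpolation
% in time of $(u_k)_k$ subconverges, as $h \to 0$, to a parabolic minimizer.
```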

  12. Analysis of large electromagnetic pulse simulators using the electric field integral equation method in time domain

    International Nuclear Information System (INIS)

    Jamali, J.; Aghajafari, R.; Moini, R.; Sadeghi, H.

    2002-01-01

    A time-domain approach is presented to calculate electromagnetic fields inside a large Electromagnetic Pulse (EMP) simulator. This type of EMP simulator is used for studying the effect of electromagnetic pulses on electrical apparatus in various structures such as vehicles, aeroplanes, etc. The simulator consists of three planar transmission lines. To solve the problem, we first model the metallic structure of the simulator as a grid of conducting wires. The numerical solution of the governing electric field integral equation is then obtained using the method of moments in the time domain. To demonstrate the accuracy of the model, we consider a typical EMP simulator. The comparison of our results with those obtained experimentally in the literature validates the model introduced in this paper.

  13. Mapping Two-Dimensional Deformation Field Time-Series of Large Slope by Coupling DInSAR-SBAS with MAI-SBAS

    Directory of Open Access Journals (Sweden)

    Liming He

    2015-09-01

    Full Text Available Mapping deformation field time-series, including vertical and horizontal motions, is vital for landslide monitoring and slope safety assessment. However, the conventional differential synthetic aperture radar interferometry (DInSAR technique can only detect the displacement component in the satellite-to-ground direction, i.e., line-of-sight (LOS direction displacement. To overcome this constraint, a new method was developed to obtain the displacement field time series of a slope by coupling DInSAR based small baseline subset approach (DInSAR-SBAS with multiple-aperture InSAR (MAI based small baseline subset approach (MAI-SBAS. This novel method has been applied to a set of 11 observations from the phased array type L-band synthetic aperture radar (PALSAR sensor onboard the advanced land observing satellite (ALOS, spanning from 2007 to 2011, of two large-scale north–south slopes of the largest Asian open-pit mine in the Northeast of China. The retrieved displacement time series showed that the proposed method can detect and measure the large displacements that occurred along the north–south direction, and the gradually changing two-dimensional displacement fields. Moreover, we verified this new method by comparing the displacement results to global positioning system (GPS measurements.
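
    Combining one DInSAR line-of-sight measurement with one MAI along-track measurement to recover a two-dimensional ground-motion vector reduces to solving a small linear system; a minimal sketch (the geometry conventions, sign choices, and angles here are illustrative assumptions, not the paper's processing chain) is:

```python
import numpy as np

def two_d_displacement(d_los, d_az, inc_deg, heading_deg):
    """Recover (east, north) motion from a DInSAR line-of-sight (LOS)
    measurement and a MAI along-track (azimuth) measurement, assuming
    negligible vertical motion. Geometry conventions are illustrative."""
    inc, head = np.radians(inc_deg), np.radians(heading_deg)
    # Projection of unit east/north motion onto the LOS and azimuth directions
    A = np.array([
        [np.sin(inc) * np.cos(head), -np.sin(inc) * np.sin(head)],  # LOS row
        [np.sin(head),                np.cos(head)],                # azimuth row
    ])
    return np.linalg.solve(A, np.array([d_los, d_az]))

# Round-trip check: project a known (east, north) motion, then invert it back.
inc, head = np.radians(38.7), np.radians(-10.0)
A = np.array([[np.sin(inc) * np.cos(head), -np.sin(inc) * np.sin(head)],
              [np.sin(head), np.cos(head)]])
d_los, d_az = A @ np.array([0.3, -1.2])
east, north = two_d_displacement(d_los, d_az, 38.7, -10.0)
```

    Applied per pixel and per SBAS-linked interferogram pair, this kind of inversion yields the two-dimensional displacement time series.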

  14. Mapping geological structures in bedrock via large-scale direct current resistivity and time-domain induced polarization tomography

    DEFF Research Database (Denmark)

    Rossi, Matteo; Olsson, Per-Ivar; Johansson, Sara

    2017-01-01

    An investigation of geological conditions is always a key point for planning infrastructure constructions. Bedrock surface and rock quality must be estimated carefully in the designing process of infrastructures. A large direct-current resistivity and time-domain induced-polarization survey has been carried out …; there are northwest-trending Permian dolerite dykes that are less deformed. Four 2D direct-current resistivity and time-domain induced-polarization profiles of about 1-km length have been carefully pre-processed to retrieve time-domain induced-polarization responses and inverted to obtain the direct-current resistivity distribution of the subsoil and the phase of the complex conductivity using a constant-phase angle model. The joint interpretation of electrical resistivity and induced-polarization models leads to a better understanding of complex three-dimensional subsoil geometries. The results have been …

  15. Non-existence of Normal Tokamak Equilibria with Negative Central Current

    International Nuclear Information System (INIS)

    Hammett, G.W.; Jardin, S.C.; Stratton, B.C.

    2003-01-01

    Recent tokamak experiments employing off-axis, non-inductive current drive have found that a large central current hole can be produced. The current density is measured to be approximately zero in this region, though in principle there was sufficient current-drive power for the central current density to have gone significantly negative. Recent papers have used a large aspect-ratio expansion to show that normal MHD equilibria (with axisymmetric nested flux surfaces, non-singular fields, and monotonic peaked pressure profiles) cannot exist with negative central current. We extend that proof here to arbitrary aspect ratio, using a variant of the virial theorem to derive a relatively simple integral constraint on the equilibrium. However, this constraint does not, by itself, exclude equilibria with non-nested flux surfaces, or equilibria with singular fields and/or hollow pressure profiles that may be spontaneously generated.

  16. The suitability and installation of technological equipment when upgrading existing facilities

    Directory of Open Access Journals (Sweden)

    Ladnushkin A. A.

    2016-03-01

    Full Text Available To date, a large number of Russian companies across diverse industries operate old equipment, and the growth of scientific and technological progress requires modernization of their technological processes. An important aspect of upgrading is the readiness of the new equipment for installation. Installation suitability describes the readiness of equipment for efficient assembly at the user's site. Replacement of technological equipment requires large volumes of installation and dismantling work; if the building has no lifting mechanisms of its own, it also requires large financial and labor costs. One possible method for the replacement of process equipment is crane-free installation technology, which allows the work to be carried out within the existing space planning. Today the question arises of the necessity of developing and introducing new technological production methods and fixtures with which installation and dismantling of technological equipment can be conducted during the operating production process.

  17. Acceleration of the universe, vacuum metamorphosis, and the large-time asymptotic form of the heat kernel

    International Nuclear Information System (INIS)

    Parker, Leonard; Vanzella, Daniel A.T.

    2004-01-01

    We investigate the possibility that the late acceleration observed in the rate of expansion of the Universe is due to vacuum quantum effects arising in curved spacetime. The theoretical basis of the vacuum cold dark matter (VCDM), or vacuum metamorphosis, cosmological model of Parker and Raval is reexamined and improved. We show, by means of a manifestly nonperturbative approach, how the infrared behavior of the propagator (related to the large-time asymptotic form of the heat kernel) of a free scalar field in curved spacetime leads to nonperturbative terms in the effective action similar to those appearing in the earlier version of the VCDM model. The asymptotic form that we adopt for the propagator or heat kernel at large proper time s is motivated by, and consistent with, particular cases where the heat kernel has been calculated exactly, namely in de Sitter spacetime, in the Einstein static universe, and in the linearly expanding spatially flat Friedmann-Robertson-Walker (FRW) universe. This large-s asymptotic form generalizes somewhat the one suggested by the Gaussian approximation and the R-summed form of the propagator that earlier served as a theoretical basis for the VCDM model. The vacuum expectation value for the energy-momentum tensor of the free scalar field, obtained through variation of the effective action, exhibits a resonance effect when the scalar curvature R of the spacetime reaches a particular value related to the mass of the field. Modeling our Universe by an FRW spacetime filled with classical matter and radiation, we show that the back reaction caused by this resonance drives the Universe through a transition to an accelerating expansion phase, very much in the same way as originally proposed by Parker and Raval. Our analysis includes higher derivatives that were neglected in the earlier analysis, and takes into account the possible runaway solutions that can follow from these higher-derivative terms. We find that the runaway solutions do

  18. Time-scale invariant changes in atmospheric radon concentration and crustal strain prior to a large earthquake

    Directory of Open Access Journals (Sweden)

    Y. Kawada

    2007-01-01

    Full Text Available Prior to large earthquakes (e.g. the 1995 Kobe earthquake, Japan), an increase in the atmospheric radon concentration is observed, and the rate of increase follows a power-law of the time-to-earthquake (time-to-failure). This phenomenon corresponds to increased radon migration in the crust and exhalation into the atmosphere. An irreversible thermodynamic model including time-scale invariance clarifies that the increases in the pressure of the advecting radon and in the permeability (hydraulic conductivity) of the crustal rocks are caused by the temporal power-law changes in the crustal strain (or cumulative Benioff strain), which is associated with damage evolution such as microcracking or changing porosity. As a result, the radon flux and the atmospheric radon concentration can show a temporal power-law increase. The concentration of atmospheric radon can be used as a proxy for the seismic precursory processes associated with crustal dynamics.
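
    Estimating the exponent of such a power-law precursor from observed rates is a linear fit in log-log space; a minimal sketch on synthetic data (the function name, exponent, and numbers are illustrative, not measured values) is:

```python
import numpy as np

def fit_power_law(t_to_failure, rate):
    """Fit rate ∝ (time-to-failure)^(-alpha): log-rate is linear in log-time."""
    slope, _ = np.polyfit(np.log(t_to_failure), np.log(rate), 1)
    return -slope  # alpha

ttf = np.array([100.0, 50.0, 20.0, 10.0, 5.0, 2.0, 1.0])  # days to earthquake
rate = 3.0 * ttf ** -0.7                                   # synthetic increase rate
alpha = fit_power_law(ttf, rate)                           # recovers ~0.7
```

    With real radon data the fit would of course include noise, so confidence intervals on alpha matter as much as its point estimate.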

  19. Similitude and scaling of large structural elements: Case study

    Directory of Open Access Journals (Sweden)

    M. Shehadeh

    2015-06-01

    Full Text Available Scaled-down models are widely used for experimental investigations of large structures due to the limited capacities of testing facilities and the expense of experimentation. The modeling accuracy depends upon the model material properties, fabrication accuracy and loading techniques. In the present work the Buckingham π theorem is used to develop the relations (i.e. geometry, loading and properties) between the model and a large structural element such as those present in existing large petroleum oil drilling rigs. The model is designed, loaded and treated according to a set of similitude requirements that relate it to the large structural element. Three independent scale factors, representing the three fundamental dimensions of mass, length and time, need to be selected for designing the scaled-down model. Numerical prediction of the stress distribution within the model and its elastic deformation under steady loading is made, and the results are compared with those obtained from full-scale structure numerical computations. The effect of scaled-down model size and material on the accuracy of the modeling technique is thoroughly examined.
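
    Once the three fundamental scale factors are chosen, the scale factor of any derived quantity follows from its dimensional exponents; a minimal sketch (the Froude-type time scaling and the 1:10 ratio below are illustrative assumptions, not the paper's choices) is:

```python
def derived_scale(length_s, mass_s, time_s, L=0, M=0, T=0):
    """Scale factor for a quantity with dimensions M^M L^L T^T."""
    return (mass_s ** M) * (length_s ** L) * (time_s ** T)

# 1:10 geometric model, mass scale chosen as length^3 (same material density),
# time scale sqrt(length) (Froude-type, gravity preserved) -- illustrative only.
sL, sM, sT = 0.1, 0.1 ** 3, 0.1 ** 0.5
stress = derived_scale(sL, sM, sT, L=-1, M=1, T=-2)  # stress ~ M L^-1 T^-2 -> 0.1
force  = derived_scale(sL, sM, sT, L=1,  M=1, T=-2)  # force  ~ M L T^-2    -> 1e-3
```

    Under these choices stresses scale with the length ratio, which is why model materials (or loads) often must be adjusted to honor stress similitude.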

  20. General-relativistic Large-eddy Simulations of Binary Neutron Star Mergers

    Energy Technology Data Exchange (ETDEWEB)

    Radice, David, E-mail: dradice@astro.princeton.edu [Institute for Advanced Study, 1 Einstein Drive, Princeton, NJ 08540 (United States)

    2017-03-20

    The flow inside remnants of binary neutron star (NS) mergers is expected to be turbulent because of magnetohydrodynamic instabilities activated at scales too small to be resolved in simulations. To study the large-scale impact of these instabilities, we develop a new formalism, based on the large-eddy simulation technique, for the modeling of subgrid-scale turbulent transport in general relativity. We apply it, for the first time, to the simulation of the late inspiral and merger of two NSs. We find that turbulence can significantly affect the structure and survival time of the merger remnant, as well as its gravitational-wave (GW) and neutrino emissions. The former will be relevant for GW observation of merging NSs. The latter will affect the composition of the outflow driven by the merger and might influence its nucleosynthetic yields. The accretion rate after black hole formation is also affected. Nevertheless, we find that, for the most likely values of the turbulence mixing efficiency, these effects are relatively small and the GW signal will be affected only weakly by the turbulence. Thus, our simulations provide a first validation of all existing post-merger GW models.

  1. Backward-in-time methods to simulate large-scale transport and mixing in the ocean

    Science.gov (United States)

    Prants, S. V.

    2015-06-01

    In oceanography and meteorology, it is important to know not only where water or air masses are headed, but also where they came from. For example, it is important to find unknown sources of oil spills in the ocean and of dangerous substance plumes in the atmosphere. It is impossible with conventional ocean and atmospheric numerical circulation models to extrapolate backward from the observed plumes to find the source, because those models cannot be reversed in time. We review here recently elaborated backward-in-time numerical methods to identify and study mesoscale eddies in the ocean and to compute where the waters in a given area came from. The area under study is populated with a large number of artificial tracers that are advected backward in time in a given velocity field, which is supposed to be known analytically or numerically, or from satellite and radar measurements. After integrating the advection equations, one gets the position of each tracer on a fixed day in the past and can identify, from the known destinations, particle positions at earlier times. The results provided show that the method is efficient, for example, in estimating the probability of finding increased concentrations of radionuclides and other pollutants in oceanic mesoscale eddies. The backward-in-time methods are illustrated in this paper with a few examples. Backward-in-time Lagrangian maps are applied to identify eddies in satellite-derived and numerically generated velocity fields and to document the pathways by which they exchange water with their surroundings. Backward-in-time trapping maps are used to identify mesoscale eddies in the altimetric velocity field at risk of being contaminated by Fukushima-derived radionuclides. The results of simulations are compared with in situ measurements of caesium concentration in seawater samples collected during a recent research vessel cruise in the area east of Japan. Backward-in-time latitudinal maps and the corresponding
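
    Backward advection of a tracer is simply forward integration of the velocity field with a negated time step; a minimal sketch on an analytic velocity field (function names and the solid-body-rotation example are illustrative, not from the review) is:

```python
import numpy as np

def advect_backward(x0, velocity, t_end, dt):
    """Integrate dx/dt = v(x, t) backward from t_end to 0 with midpoint RK2."""
    x, t = np.asarray(x0, float), t_end
    while t > 0:
        h = min(dt, t)
        k1 = velocity(x, t)
        k2 = velocity(x - 0.5 * h * k1, t - 0.5 * h)  # midpoint estimate
        x = x - h * k2                                # step backward in time
        t -= h
    return x

# Solid-body rotation v = (-y, x): a tracer observed at (1, 0) at t = pi/2
# must have come from (0, -1) at t = 0.
vel = lambda x, t: np.array([-x[1], x[0]])
origin = advect_backward([1.0, 0.0], vel, np.pi / 2, 1e-3)
```

    In realistic applications the velocity field comes from altimetry or model output on a grid, so the right-hand side is an interpolation rather than an analytic function.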

  2. Investigation of existing financial incentive policies for solar photovoltaic systems in U.S. regions

    Directory of Open Access Journals (Sweden)

    Jian Zhang

    2017-12-01

    Full Text Available This paper analyzes some of the existing incentives for solar photovoltaic (PV) energy generation in the U.S. Four types of buildings (e.g., hospitals, large offices, large hotels, and secondary schools) located in five different U.S. states, each having its own incentives, are selected and analyzed under the PV incentive policies. The payback period of the PV system is chosen as an indicator to analyze and critique the effectiveness of each incentive by comparing the payback periods before and after taking the incentive into consideration. A parametric analysis is then conducted to determine the influence of variations in key parameters, such as PV system capacity, capital cost of PV, sell-back ratio and the performance-based incentive rate, on the performance of the PV system. The results show how the existing incentives can be effectively used to promote PV systems in the U.S. and how variations of the parameters can impact the payback period of the PV systems. Through the evaluation of the existing incentive policies and the parametric study, this paper demonstrates that the type and level of incentives should be carefully determined in policy-making processes to effectively promote PV systems.
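
    The payback-period indicator described above can be sketched in its simplest (undiscounted) form as follows (the cost and savings figures are illustrative assumptions, not values from the study):

```python
def payback_period(capital_cost, annual_savings, incentive=0.0):
    """Simple (undiscounted) payback in years after an up-front incentive."""
    return (capital_cost - incentive) / annual_savings

base = payback_period(100_000, 8_000)                              # 12.5 years
with_credit = payback_period(100_000, 8_000, incentive=0.3 * 100_000)  # 8.75 years
```

    Performance-based incentives would instead raise the effective annual savings, and a full analysis would discount future cash flows; both refinements shorten or lengthen the payback without changing the comparison logic.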

  3. Particle colliders at the Large High Energy Laboratories

    International Nuclear Information System (INIS)

    Aguilar, M.

    1996-01-01

    In this work we present an elementary introduction to particle accelerators, a basic guide to existing colliders and a description of the large European laboratories devoted to elementary particle physics. This work is an extended, corrected and updated version of an article published in: Ciencia-Tecnologia-Medio Ambiente, Annual Report 1996, Edition El País (Author)

  4. 76 FR 69764 - Petitions for Modification of Application of Existing Mandatory Safety Standards

    Science.gov (United States)

    2011-11-09

    ... achieving the result of such standard exists which will at all times guarantee no less than the same measure... limits of the course refuse fill maintaining at least 1 percent slope for positive drainage. (4) The... the same measure of protection for the miners as is given to them by the existing standard. Docket...

  5. Image-processing of time-averaged interface distributions representing CCFL characteristics in a large scale model of a PWR hot-leg pipe geometry

    International Nuclear Information System (INIS)

    Al Issa, Suleiman; Macián-Juan, Rafael

    2017-01-01

    Highlights: • CCFL characteristics are investigated in a PWR large-scale hot-leg pipe geometry. • Image processing of the air-water interface produced time-averaged interface distributions. • Time-averages provide a comparative method for CCFL characteristics among different studies. • CCFL correlations depend upon the range of investigated water delivery for Dh ≫ 50 mm. • 1D codes are incapable of investigating CCFL because they lack interface distributions. - Abstract: Countercurrent Flow Limitation (CCFL) was experimentally investigated in the 1/3.9 downscaled COLLIDER facility with a 190 mm pipe diameter using air/water at 1 atm pressure. Previous investigations provided knowledge of the onset of CCFL mechanisms. In the current article, CCFL characteristics at the COLLIDER facility are measured and discussed along with time-averaged distributions of the air/water interface for a selected matrix of liquid/gas velocities. The article demonstrates the time-averaged interface as a useful method to identify CCFL characteristics at quasi-stationary flow conditions, eliminating variations that appear in single images and showing essential comparative flow features such as: the degree of restriction at the bend, the extension and intensity of the two-phase mixing zones, and the average water level within the horizontal part and the steam generator. Consequently, it becomes possible to compare interface distributions obtained in different investigations. The distributions are also beneficial for CFD validation of CCFL, as the instantaneous chaotic gas/liquid interface is impossible to reproduce in CFD simulations. The current study shows that the final CCFL characteristics curve (and the corresponding CCFL correlation) depends upon the covered measuring range of water delivery. It also shows that the hydraulic diameter should be sufficiently larger than 50 mm in order to obtain CCFL characteristics comparable to the 1:1 scale data (namely the UPTF data). Finally

  6. A Novel Real-Time Path Servo Control of a Hardware-in-the-Loop for a Large-Stroke Asymmetric Rod-Less Pneumatic System under Variable Loads

    Directory of Open Access Journals (Sweden)

    Hao-Ting Lin

    2017-06-01

    Full Text Available This project aims to develop a novel large-stroke asymmetric pneumatic servo system with hardware-in-the-loop for path tracking control under variable loads, based on the MATLAB Simulink real-time system. High-pressure compressed air provided by the air compressor drives the large-stroke asymmetric rod-less pneumatic actuator through a pneumatic proportional servo valve; the actuator operates due to the pressure difference between its two chambers. The highly nonlinear mathematical models of the large-stroke asymmetric pneumatic system were analyzed and developed. A functional approximation technique based on the sliding mode controller (FASC) is developed as a controller to handle the uncertain time-varying nonlinear system. The MATLAB Simulink real-time system is the main control unit of the hardware-in-the-loop system, with driver blocks for analog and digital I/O, a linear encoder, a CPU and the large-stroke asymmetric pneumatic rod-less system. The position signals of the cylinder are measured immediately by the position sensor and used as feedback signals of the pneumatic servo system for real-time positioning control and path tracking control. Finally, real-time control of the large-stroke asymmetric pneumatic servo system, together with the measuring system, the data acquisition system and the control strategy software, is implemented, upgrading the position precision and trajectory tracking performance of the system. Experimental results show that fifth-order paths at various strokes and a sine wave path are successfully implemented on the test rig, and results under variable loads at different angles are obtained experimentally.

  7. A Novel Real-Time Path Servo Control of a Hardware-in-the-Loop for a Large-Stroke Asymmetric Rod-Less Pneumatic System under Variable Loads.

    Science.gov (United States)

    Lin, Hao-Ting

    2017-06-04

    This project aims to develop a novel large-stroke asymmetric pneumatic servo system with hardware-in-the-loop for path tracking control under variable loads, based on the MATLAB Simulink real-time system. High-pressure compressed air provided by the air compressor drives the large-stroke asymmetric rod-less pneumatic actuator through a pneumatic proportional servo valve; the actuator operates due to the pressure difference between its two chambers. The highly nonlinear mathematical models of the large-stroke asymmetric pneumatic system were analyzed and developed. A functional approximation technique based on the sliding mode controller (FASC) is developed as a controller to handle the uncertain time-varying nonlinear system. The MATLAB Simulink real-time system is the main control unit of the hardware-in-the-loop system, with driver blocks for analog and digital I/O, a linear encoder, a CPU and the large-stroke asymmetric pneumatic rod-less system. The position signals of the cylinder are measured immediately by the position sensor and used as feedback signals of the pneumatic servo system for real-time positioning control and path tracking control. Finally, real-time control of the large-stroke asymmetric pneumatic servo system, together with the measuring system, the data acquisition system and the control strategy software, is implemented, upgrading the position precision and trajectory tracking performance of the system. Experimental results show that fifth-order paths at various strokes and a sine wave path are successfully implemented on the test rig, and results under variable loads at different angles are obtained experimentally.
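
    The sliding-mode core of a controller like FASC can be sketched as follows (a generic boundary-layer sliding-mode law, not the authors' FASC implementation; the gains, the sliding-surface slope, and the boundary-layer width are illustrative assumptions):

```python
def smc_control(e, e_dot, lam=5.0, k=2.0, phi=0.05):
    """Boundary-layer sliding-mode control: u = -k * sat(s / phi),
    with sliding surface s = e_dot + lam * e.

    The saturation replaces sign(s) inside a thin boundary layer of
    width phi, reducing the chattering typical of pure sliding mode.
    """
    s = e_dot + lam * e                  # distance from the sliding surface
    sat = max(-1.0, min(1.0, s / phi))   # saturated switching term
    return -k * sat
```

    In the FASC approach the unknown plant dynamics are additionally approximated by a function-series expansion whose coefficients are adapted online; the sketch above shows only the switching part of such a controller.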

  8. Review of existing studies and unresolved problems associated with socio-economic impact of nuclear powerplants

    International Nuclear Information System (INIS)

    Hendrickson, P.L.; King, J.C.; O'Connell, M.S.

    1975-07-01

    Preparation of socio-economic impact statements for nuclear power plants began only a few years ago. The number of these statements is increasing, and some states, such as Washington, now require them as a condition of state approval for thermal power plants. The major purpose of this paper was to review existing socio-economic impact statements to identify where additional research to improve the impact analysis process would be useful and appropriate. A second purpose was to summarize the type of information included in existing statements. Toward this end, a number of socio-economic impact statements were reviewed. Most of the statements are for nuclear power plants; however, some are for other large construction projects. The statements reviewed are largely predictive in nature; i.e., they attempt to predict socio-economic impacts based on existing knowledge. A few of the reports contain retrospective case studies of plants already completed. One describes an ongoing monitoring analysis of plants under construction. As a result of this preliminary study, a need was identified for a better-defined impact statement methodology and for guidelines identifying appropriate areas for analysis and analytical techniques.

  9. Existence theory in optimal control

    International Nuclear Information System (INIS)

    Olech, C.

    1976-01-01

    This paper treats the existence problem in two main cases. One case is that of linear systems when existence is based on closedness or compactness of the reachable set and the other, non-linear case refers to a situation where for the existence of optimal solutions closedness of the set of admissible solutions is needed. Some results from convex analysis are included in the paper. (author)

  10. Embedded Electro-Optic Sensor Network for the On-Site Calibration and Real-Time Performance Monitoring of Large-Scale Phased Arrays

    National Research Council Canada - National Science Library

    Yang, Kyoung

    2005-01-01

    This final report summarizes the progress during the Phase I SBIR project entitled "Embedded Electro-Optic Sensor Network for the On-Site Calibration and Real-Time Performance Monitoring of Large-Scale Phased Arrays...

  11. Large Time Behavior for Weak Solutions of the 3D Globally Modified Navier-Stokes Equations

    Directory of Open Access Journals (Sweden)

    Junbai Ren

    2014-01-01

    Full Text Available This paper is concerned with the large time behavior of the weak solutions for three-dimensional globally modified Navier-Stokes equations. With the aid of energy methods and auxiliary decay estimates together with L^p–L^q estimates of the heat semigroup, we derive the optimal upper and lower decay estimates of the weak solutions for the globally modified Navier-Stokes equations as C_1(1+t)^{-3/4} ≤ ‖u‖_{L^2} ≤ C_2(1+t)^{-3/4}, t > 1. The decay rate is optimal since it coincides with that of the heat equation.

  12. Concrete structures. Contribution to the safety assessment of existing structures

    Directory of Open Access Journals (Sweden)

    D. COUTO

    Full Text Available The safety evaluation of an existing concrete structure differs from the design of new structures. The partial safety factors for actions and resistances adopted in the design phase account for uncertainties and inaccuracies in the building processes, the variability of material strength, and the numerical approximations of the calculation and design processes. However, when analyzing a finished structure, a large number of factors unknown during the design stage are already defined and can be measured, which justifies a change in the factors that increase the actions or reduce the resistances. Therefore, safety assessment of existing structures is more complex than introducing safety when designing a new structure, because it requires inspection, testing, analysis and careful diagnosis. Sound knowledge of safety concepts in structural engineering is needed, as well as knowledge of the construction materials employed, in order to identify, control and properly consider the variability of actions and resistances in the structure. To discuss this complex and diffuse topic, this paper presents an introduction to the safety of concrete structures, a synthesis of the procedures recommended by Brazilian standards and other codes associated with the topic, as well as a realistic example of the safety assessment of an existing structure.

  13. Quick Mining of Isomorphic Exact Large Patterns from Large Graphs

    KAUST Repository

    Almasri, Islam

    2014-12-01

    The applications of subgraph isomorphism search are growing with the number of areas that model their systems using graphs or networks. In particular, many biological systems, such as protein interaction networks, molecular structures and protein contact maps, are modeled as graphs. Subgraph isomorphism search is concerned with finding all subgraphs that are isomorphic to a relevant query graph; the existence of such subgraphs can reflect the characteristics of the modeled system. The most computationally expensive step in the search for isomorphic subgraphs is the backtracking algorithm that traverses the nodes of the target graph. In this paper, we propose a pruning approach, inspired by the minimum remaining value heuristic, that achieves greater scalability over large query and target graphs. Our testing on various biological networks shows that the performance enhancement of our approach over existing state-of-the-art approaches varies between 6x and 53x. © 2014 IEEE.
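A generic sketch of backtracking subgraph isomorphism search with minimum-remaining-value ordering — the heuristic the paper's pruning is inspired by — not the authors' implementation:

```python
def subgraph_isomorphisms(query, target):
    """Enumerate injective mappings of `query` into `target` that
    preserve every query edge (subgraph isomorphism by backtracking).

    Graphs are dicts mapping a node to its set of neighbours. At each
    step the unassigned query node with the fewest remaining candidates
    is expanded first (the minimum-remaining-value heuristic).
    """
    qnodes = list(query)
    # initial candidate sets: target nodes with at least the query degree
    cand = {q: {t for t in target if len(target[t]) >= len(query[q])}
            for q in qnodes}
    results = []

    def backtrack(mapping):
        if len(mapping) == len(qnodes):
            results.append(dict(mapping))
            return
        used = set(mapping.values())
        free = [q for q in qnodes if q not in mapping]
        # MRV: fewest remaining (unused) candidates first
        q = min(free, key=lambda n: len(cand[n] - used))
        for t in cand[q] - used:
            # every already-mapped neighbour of q must map to a neighbour of t
            if all(mapping[p] in target[t] for p in query[q] if p in mapping):
                mapping[q] = t
                backtrack(mapping)
                del mapping[q]

    backtrack({})
    return results

# usage: a triangle query against a 4-clique target
triangle = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}}
k4 = {i: {j for j in range(4) if j != i} for i in range(4)}
embeddings = subgraph_isomorphisms(triangle, k4)
```

Since the 4-clique is complete, every injective placement of the three triangle nodes preserves adjacency, giving 4 × 3 × 2 = 24 embeddings.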

  14. Quick Mining of Isomorphic Exact Large Patterns from Large Graphs

    KAUST Repository

    Almasri, Islam; Gao, Xin; Fedoroff, Nina V.

    2014-01-01

    The applications of subgraph isomorphism search are growing with the number of areas that model their systems using graphs or networks. In particular, many biological systems, such as protein interaction networks, molecular structures and protein contact maps, are modeled as graphs. Subgraph isomorphism search is concerned with finding all subgraphs that are isomorphic to a relevant query graph; the existence of such subgraphs can reflect the characteristics of the modeled system. The most computationally expensive step in the search for isomorphic subgraphs is the backtracking algorithm that traverses the nodes of the target graph. In this paper, we propose a pruning approach, inspired by the minimum remaining value heuristic, that achieves greater scalability over large query and target graphs. Our testing on various biological networks shows that the performance enhancement of our approach over existing state-of-the-art approaches varies between 6x and 53x. © 2014 IEEE.

  15. 10 CFR 4.127 - Existing facilities.

    Science.gov (United States)

    2010-01-01

    ... 10 Energy 1 2010-01-01 2010-01-01 false Existing facilities. 4.127 Section 4.127 Energy NUCLEAR... 1973, as Amended Discriminatory Practices § 4.127 Existing facilities. (a) Accessibility. A recipient... make each of its existing facilities or every part of an existing facility accessible to and usable by...

  16. Network Dynamics with BrainX3: A Large-Scale Simulation of the Human Brain Network with Real-Time Interaction

    OpenAIRE

    Xerxes D. Arsiwalla; Riccardo eZucca; Alberto eBetella; Enrique eMartinez; David eDalmazzo; Pedro eOmedas; Gustavo eDeco; Gustavo eDeco; Paul F.M.J. Verschure; Paul F.M.J. Verschure

    2015-01-01

    BrainX3 is a large-scale simulation of human brain activity with real-time interaction, rendered in 3D in a virtual reality environment, which combines computational power with human intuition for the exploration and analysis of complex dynamical networks. We ground this simulation on structural connectivity obtained from diffusion spectrum imaging data and model it on neuronal population dynamics. Users can interact with BrainX3 in real-time by perturbing brain regions with transient stimula...

  17. Network dynamics with BrainX3: a large-scale simulation of the human brain network with real-time interaction

    OpenAIRE

    Arsiwalla, Xerxes D.; Zucca, Riccardo; Betella, Alberto; Martínez, Enrique, 1961-; Dalmazzo, David; Omedas, Pedro; Deco, Gustavo; Verschure, Paul F. M. J.

    2015-01-01

    BrainX3 is a large-scale simulation of human brain activity with real-time interaction, rendered in 3D in a virtual reality environment, which combines computational power with human intuition for the exploration and analysis of complex dynamical networks. We ground this simulation on structural connectivity obtained from diffusion spectrum imaging data and model it on neuronal population dynamics. Users can interact with BrainX3 in real-time by perturbing brain regions with transient stimula...

  18. Gravity Cutoff in Theories with Large Discrete Symmetries

    International Nuclear Information System (INIS)

    Dvali, Gia; Redi, Michele; Sibiryakov, Sergey; Vainshtein, Arkady

    2008-01-01

    We set an upper bound on the gravitational cutoff in theories with exact quantum numbers of large N periodicity, such as Z_N discrete symmetries. The bound stems from black hole physics. It is similar to the bound appearing in theories with N particle species, though a priori, a large discrete symmetry does not imply a large number of species. Thus, there emerges a potentially wide class of new theories that address the hierarchy problem by lowering the gravitational cutoff due to the existence of large Z_{10^{32}}-type symmetries

  19. Why Choose Online Learning: Relationship of Existing Factors and Chronobiology

    Science.gov (United States)

    Luo, Yi; Pan, Rui; Choi, Jea H.; Mellish, Linda; Strobel, Johannes

    2011-01-01

    Existing research on choice of online learning utilized factors such as perceived level of control, independence, and satisfaction, yet the relationship among these factors is under-researched. Due to the value of "learning anytime," biological factors underlying "choice of time" might provide additional insights. This article…

  20. A general formulation of discrete-time quantum mechanics: Restrictions on the action and the relation of unitarity to the existence theorem for initial-value problems

    International Nuclear Information System (INIS)

    Khorrami, M.

    1995-01-01

    A general formulation for discrete-time quantum mechanics, based on Feynman's method in ordinary quantum mechanics, is presented. It is shown that the ambiguities present in ordinary quantum mechanics (due to noncommutativity of the operators), are no longer present here. Then the criteria for the unitarity of the evolution operator are examined. It is shown that the unitarity of the evolution operator puts restrictions on the form of the action, and also implies the existence of a solution for the classical initial-value problem. 13 refs

  1. Existence and construction of large stable food webs

    Science.gov (United States)

    Haerter, Jan O.; Mitarai, Namiko; Sneppen, Kim

    2017-09-01

    Ecological diversity is ubiquitous despite the restrictions imposed by competitive exclusion and apparent competition. To explain the observed richness of species in a given habitat, food-web theory has explored nonlinear functional responses, self-interaction, or spatial structure and dispersal—model ingredients that have proven to promote stability and diversity. We return instead here to classical Lotka-Volterra equations, where species-species interaction is characterized by a simple product and spatial restrictions are ignored. We quantify how this idealization imposes constraints on coexistence and diversity for many species. To this end, we introduce the concept of free and controlled species and use this to demonstrate how stable food webs can be constructed by the sequential addition of species. The resulting food webs can reach dozens of species and generally yield nonrandom degree distributions in accordance with the constraints imposed through the assembly process. Our model thus serves as a formal starting point for the study of sustainable interaction patterns between species.
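The sequential-addition idea can be sketched with plain Lotka-Volterra bookkeeping: a candidate species is kept only if the enlarged community still has a feasible (all-positive) equilibrium. This toy version checks feasibility only, not the free/controlled-species stability analysis of the paper, and all rates are random illustrative values:

```python
import random

def solve(A, b):
    """Gaussian elimination with partial pivoting (pure Python)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def assemble(target=6, seed=1):
    """Grow a Lotka-Volterra community dx_i/dt = x_i (r_i + sum_j A_ij x_j)
    one species at a time, accepting an addition only when the new
    equilibrium A x* = -r is feasible (all abundances positive)."""
    rng = random.Random(seed)
    A, r = [], []
    while len(r) < target:
        n = len(r) + 1
        # candidate species: self-limitation plus random interactions
        newA = [row[:] + [rng.uniform(-0.5, 0.5)] for row in A]
        newA.append([rng.uniform(-0.5, 0.5) for _ in range(n)])
        newA[n - 1][n - 1] = -1.0
        newr = r + [rng.uniform(0.5, 1.5)]
        try:
            xeq = solve(newA, [-ri for ri in newr])
        except ZeroDivisionError:
            continue                      # singular system: redraw candidate
        if all(x > 0 for x in xeq):
            A, r = newA, newr             # accept the new species
    return A, r

A, r = assemble(target=6, seed=1)
```

Rejected candidates are simply redrawn, mirroring the paper's observation that webs must be built by sequential addition rather than assembled at random in one shot.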

  2. Application of geometric probability to the existence of faults in anisotropic media

    International Nuclear Information System (INIS)

    Cranwell, R.M.; Donath, F.A.

    1980-01-01

    Three primary aspects of faults which relate to their potential for degradation of a repository site are: the possibility of an existing but undetected fault intersecting the repository site; the potential for a new fault occurring and propagating through the repository site; the ability of any such fault to transmit groundwater. Given that a fault might be present in the region surrounding the site, the probability that it intersects the site depends primarily on its orientation and on the density of faulting in the area. Once these parameters are known, a model can be developed to determine the probability that an existing but undetected fault will intersect the repository site. Similar techniques can be used to estimate the potential for new faults occurring and intersecting the site, or intersection from propagation along existing faults. However, additional data including in situ stress measurements and records of seismic activity would be needed. One can estimate the stress level at which the strength of the surrounding media will be exceeded, and thus determine a time-dependent probability of movement along a pre-existing fault or of a new fault occurring, from a predicted rate of change in local stresses. In situ stress measurements taken at intervals of time could aid in determining the rate of stress change in the surrounding media, although measurable changes might not occur over the available period of observation. In situ stress measurements might also aid in assessing the ability of existing faults to transmit fluids.
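The first probability described above — an undetected fault intersecting the site, given orientation and faulting-density information — can be estimated by geometric Monte Carlo. Everything in this sketch (circular site and region, normal strike distribution, fixed trace length) is an illustrative assumption, not the authors' model:

```python
import math, random

def intersection_probability(site_radius=1.0, region_radius=10.0,
                             mean_strike=0.0, kappa=4.0,
                             half_length=3.0, trials=100000, seed=0):
    """Monte Carlo estimate of the probability that a randomly located
    fault trace intersects a circular repository site at the origin.

    Fault centres are uniform in a disc of radius region_radius; strikes
    are drawn from a normal around mean_strike (std = 1/sqrt(kappa)) to
    mimic an anisotropic orientation distribution. All parameter values
    are illustrative.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # uniform fault centre in the region disc, via rejection sampling
        while True:
            x = rng.uniform(-region_radius, region_radius)
            y = rng.uniform(-region_radius, region_radius)
            if x * x + y * y <= region_radius ** 2:
                break
        theta = rng.gauss(mean_strike, 1.0 / math.sqrt(kappa))
        dx, dy = math.cos(theta), math.sin(theta)
        # closest point of the finite trace to the site centre
        t = -(x * dx + y * dy)                       # foot of perpendicular
        t = max(-half_length, min(half_length, t))   # clamp to trace ends
        px, py = x + t * dx, y + t * dy
        if px * px + py * py <= site_radius ** 2:
            hits += 1
    return hits / trials
```

The estimate scales as expected with the inputs: enlarging the site (or the trace length, or the fault density via more trials per unit area) raises the intersection probability.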

  3. Application of large area SiPMs for the readout of a plastic scintillator based timing detector

    Science.gov (United States)

    Betancourt, C.; Blondel, A.; Brundler, R.; Dätwyler, A.; Favre, Y.; Gascon, D.; Gomez, S.; Korzenev, A.; Mermod, P.; Noah, E.; Serra, N.; Sgalaberna, D.; Storaci, B.

    2017-11-01

    In this study an array of eight 6 mm × 6 mm area SiPMs was coupled to the end of a long plastic scintillator counter which was exposed to a 2.5 GeV/c muon beam at the CERN PS. Timing characteristics of bars with dimensions 150 cm × 6 cm × 1 cm and 120 cm × 11 cm × 2.5 cm have been studied. An 8-channel SiPM anode readout ASIC (MUSIC R1) based on a novel low input impedance current conveyor has been used to read out and amplify SiPMs independently and sum the signals at the end. Prospects for applications in large-scale particle physics detectors with timing resolution below 100 ps are provided in light of the results.

  4. Long-time analytic approximation of large stochastic oscillators: Simulation, analysis and inference.

    Directory of Open Access Journals (Sweden)

    Giorgos Minas

    2017-07-01

    Full Text Available In order to analyse large complex stochastic dynamical models such as those studied in systems biology there is currently a great need for both analytical tools and also algorithms for accurate and fast simulation and estimation. We present a new stochastic approximation of biological oscillators that addresses these needs. Our method, called phase-corrected LNA (pcLNA), overcomes the main limitations of the standard Linear Noise Approximation (LNA), remaining uniformly accurate for long times while maintaining the speed and analytical tractability of the LNA. As part of this, we develop analytical expressions for key probability distributions and associated quantities, such as the Fisher Information Matrix and Kullback-Leibler divergence, and we introduce a new approach to system-global sensitivity analysis. We also present algorithms for statistical inference and for long-term simulation of oscillating systems that are shown to be as accurate but much faster than leaping algorithms and algorithms for integration of diffusion equations. Stochastic versions of published models of the circadian clock and NF-κB system are used to illustrate our results.

  5. Salecker-Wigner-Peres clock, Feynman paths, and a tunneling time that should not exist

    Science.gov (United States)

    Sokolovski, D.

    2017-08-01

    The Salecker-Wigner-Peres (SWP) clock is often used to determine the duration a quantum particle is supposed to spend in a specified region of space Ω . By construction, the result is a real positive number, and the method seems to avoid the difficulty of introducing complex time parameters, which arises in the Feynman paths approach. However, it tells little about the particle's motion. We investigate this matter further, and show that the SWP clock, like any other Larmor clock, correlates the rotation of its angular momentum with the durations τ , which the Feynman paths spend in Ω , thereby destroying interference between different durations. An inaccurate weakly coupled clock leaves the interference almost intact, and the need to resolve the resulting "which way?" problem is one of the main difficulties at the center of the "tunnelling time" controversy. In the absence of a probability distribution for the values of τ , the SWP results are expressed in terms of moduli of the "complex times," given by the weighted sums of the corresponding probability amplitudes. It is shown that overinterpretation of these results, by treating the SWP times as physical time intervals, leads to paradoxes and should be avoided. We also analyze various settings of the SWP clock, different calibration procedures, and the relation between the SWP results and the quantum dwell time. The cases of stationary tunneling and tunnel ionization are considered in some detail. Although our detailed analysis addresses only one particular definition of the duration of a tunneling process, it also points towards the impossibility of uniting various time parameters, which may occur in quantum theory, within the concept of a single tunnelling time.

  6. LDEF data: Comparisons with existing models

    Science.gov (United States)

    Coombs, Cassandra R.; Watts, Alan J.; Wagner, John D.; Atkinson, Dale R.

    1993-04-01

    The relationship between the observed cratering impact damage on the Long Duration Exposure Facility (LDEF) versus the existing models for both the natural environment of micrometeoroids and the man-made debris was investigated. Experimental data was provided by several LDEF Principal Investigators, Meteoroid and Debris Special Investigation Group (M&D SIG) members, and by the Kennedy Space Center Analysis Team (KSC A-Team) members. These data were collected from various aluminum materials around the LDEF satellite. A personal computer (PC) program, SPENV, was written which incorporates the existing models of the Low Earth Orbit (LEO) environment. This program calculates the expected number of impacts per unit area as functions of altitude, orbital inclination, time in orbit, and direction of the spacecraft surface relative to the velocity vector, for both micrometeoroids and man-made debris. Since both particle models are couched in terms of impact fluxes versus impactor particle size, and much of the LDEF data is in the form of crater production rates, scaling laws have been used to relate the two. Many hydrodynamic impact simulations of various impact events were also conducted using CTH; these identified certain modes of response, including simple metallic target cratering, perforations, and delamination effects of coatings.

  7. realfast: Real-time, Commensal Fast Transient Surveys with the Very Large Array

    Science.gov (United States)

    Law, C. J.; Bower, G. C.; Burke-Spolaor, S.; Butler, B. J.; Demorest, P.; Halle, A.; Khudikyan, S.; Lazio, T. J. W.; Pokorny, M.; Robnett, J.; Rupen, M. P.

    2018-05-01

    Radio interferometers have the ability to precisely localize and better characterize the properties of sources. This ability is having a powerful impact on the study of fast radio transients, where a few milliseconds of data is enough to pinpoint a source at cosmological distances. However, recording interferometric data at millisecond cadence produces a terabyte-per-hour data stream that strains networks, computing systems, and archives. This challenge mirrors that of other domains of science, where the science scope is limited by the computational architecture as much as the physical processes at play. Here, we present a solution to this problem in the context of radio transients: realfast, a commensal, fast transient search system at the Jansky Very Large Array. realfast uses a novel architecture to distribute fast-sampled interferometric data to a 32-node, 64-GPU cluster for real-time imaging and transient detection. By detecting transients in situ, we can trigger the recording of data for those rare, brief instants when the event occurs and reduce the recorded data volume by a factor of 1000. This makes it possible to commensally search a data stream that would otherwise be impossible to record. This system will search for millisecond transients in more than 1000 hr of data per year, potentially localizing several Fast Radio Bursts, pulsars, and other sources of impulsive radio emission. We describe the science scope for realfast, the system design, expected outcomes, and ways in which real-time analysis can help in other fields of astrophysics.

  8. Global existence and asymptotic behavior of a model for biological control of invasive species via supermale introduction

    KAUST Repository

    Parshad, Rana

    2013-01-01

    The purpose of this manuscript is to propose a model for the biological control of invasive species, via introduction of phenotypically modified organisms into a target population. We are inspired by the earlier Trojan Y Chromosome model [J.B. Gutierrez, J.L. Teem, J. Theo. Bio., 241(22), 333-341, 2006]. However, in the current work, we remove the assumption of logistic growth rate, and do not consider the addition of sex-reversed supermales. Also the constant birth and death coefficients, considered earlier, are replaced by functionally dependent ones. In this case the nonlinearities present serious difficulties since they change sign, and the components of the solution are not a priori bounded, in some Lp-space for p large, to permit the application of the well known regularizing effect principle. Thus functional methods to deduce the global existence in time, for the system in question, are not applicable. Our techniques are based on the Lyapunov functional method. We prove global existence of solutions, as well as existence of a finite dimensional global attractor that supports states of extinction. Our analytical findings are in accordance with numerical simulations, which we also present. © 2013 International Press.

  9. The role of fusion power in energy scenarios. Proposed method and review of existing scenarios

    International Nuclear Information System (INIS)

    Lako, P; Ybema, J.R.; Seebregts, A.J.

    1998-04-01

    The European Commission wishes to gain more insight into the potential role of fusion energy in the second half of the 21st century. Therefore, several scenario studies are carried out in the so-called macro-task Long Term Scenarios to investigate the potential role of fusion power in the energy system. The main contribution of ECN to the macro-task is to perform a long term energy scenario study for Western Europe with special focus on the role of fusion power. This interim report gives some methodological considerations for such an analysis. A discussion is given of the problems related to the long time horizon of the scenario study, such as the forecast of technological innovations, the selection of appropriate discount rates and the links with climate change. Key parameters which are expected to have large effects on the role and cost-effectiveness of fusion power are discussed in general terms. The key parameters to be varied include the level and structure of energy demand, availability and prices of fossil energy, CO2 reduction policy, discount rates, cost and potential of renewable energy sources, availability of fission power and CO2 capture and disposal, and the cost and maximum rate of market growth of fusion power. The scenario calculations are to be performed later in the project with the help of an existing cost minimisation model of the Western European energy system. This MARKAL model is briefly introduced. The results of the model calculations are expected to make clear under which combinations of scenario parameters fusion power is needed and how large the expected financial benefits will be. The present interim report also gives an evaluation of existing energy scenarios with respect to the role of fusion power. 18 refs

  10. Statistical identification with hidden Markov models of large order splitting strategies in an equity market

    Science.gov (United States)

    Vaglica, Gabriella; Lillo, Fabrizio; Mantegna, Rosario N.

    2010-07-01

    Large trades in a financial market are usually split into smaller parts and traded incrementally over extended periods of time. We address these large trades as hidden orders. In order to identify and characterize hidden orders, we fit hidden Markov models to the time series of the sign of the tick-by-tick inventory variation of market members of the Spanish Stock Exchange. Our methodology probabilistically detects trading sequences, which are characterized by a significant majority of buy or sell transactions. We interpret these patches of sequential buying or selling transactions as proxies of the traded hidden orders. We find that the distributions of the time, volume and number of transactions of these patches are fat tailed. Long patches are characterized by a large fraction of market orders and a low participation rate, while short patches have a large fraction of limit orders and a high participation rate. We observe the existence of a buy-sell asymmetry in the number, average length, average fraction of market orders and average participation rate of the detected patches. The detected asymmetry is clearly dependent on the local market trend. We also compare the hidden Markov model patches with those obtained with the segmentation method used in Vaglica et al (2008 Phys. Rev. E 77 036110), and we conclude that the former ones can be interpreted as a partition of the latter ones.
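The detection step — decoding patches of predominantly buy or sell transactions from the sign series — can be sketched with a two-state hidden Markov model and Viterbi decoding. The transition and emission probabilities below are illustrative placeholders; the paper fits them to exchange data:

```python
import math

def viterbi_patches(signs, p_stay=0.95, p_match=0.8):
    """Segment a +/-1 trade-sign series into buy ('B') and sell ('S')
    patches with a two-state HMM, decoded by the Viterbi algorithm.

    State B emits +1 with probability p_match, state S emits -1 with
    probability p_match; each state persists with probability p_stay.
    """
    states = ("B", "S")

    def log_emit(st, s):
        ok = (st == "B") == (s == 1)
        return math.log(p_match if ok else 1.0 - p_match)

    log_stay, log_switch = math.log(p_stay), math.log(1.0 - p_stay)
    # Viterbi trellis: best log-probability of each state at each step
    V = [{st: math.log(0.5) + log_emit(st, signs[0]) for st in states}]
    back = []
    for s in signs[1:]:
        row, ptr = {}, {}
        for st in states:
            prev = max(states, key=lambda p: V[-1][p] +
                       (log_stay if p == st else log_switch))
            row[st] = (V[-1][prev] +
                       (log_stay if prev == st else log_switch) +
                       log_emit(st, s))
            ptr[st] = prev
        V.append(row)
        back.append(ptr)
    # backtrace the most likely state path
    path = [max(states, key=lambda st: V[-1][st])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    path.reverse()
    return path

# usage: a buy patch then a sell patch, each with one contrarian trade
signs = [1, 1, -1, 1, 1, 1, -1, -1, 1, -1, -1, -1]
path = viterbi_patches(signs)
```

Because switching states is expensive relative to a single mismatched emission, isolated contrarian trades do not break a patch — which is exactly the "significant majority" behaviour the methodology relies on.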

  11. Relaxing a large cosmological constant

    International Nuclear Information System (INIS)

    Bauer, Florian; Sola, Joan; Stefancic, Hrvoje

    2009-01-01

    The cosmological constant (CC) problem is the biggest enigma of theoretical physics ever. In recent times, it has been rephrased as the dark energy (DE) problem in order to encompass a wider spectrum of possibilities. It is, in any case, a polyhedric puzzle with many faces, including the cosmic coincidence problem, i.e. why the density of matter ρ_m is presently so close to the CC density ρ_Λ. However, the oldest, toughest and most intriguing face of this polyhedron is the big CC problem, namely why the measured value of ρ_Λ at present is so small as compared to any typical density scale existing in high energy physics, especially taking into account the many phase transitions that our Universe has undergone since the early times, including inflation. In this Letter, we propose to extend the field equations of General Relativity by including a class of invariant terms that automatically relax the value of the CC irrespective of the initial size of the vacuum energy in the early epochs. We show that, at late times, the Universe enters an eternal de Sitter stage mimicking a tiny positive cosmological constant. Thus, these models could be able to solve the big CC problem without fine-tuning and have also a bearing on the cosmic coincidence problem. Remarkably, they mimic the ΛCDM model to a large extent, but they still leave some characteristic imprints that should be testable in the next generation of experiments.

  12. A Subdivision Method to Unify the Existing Latitude and Longitude Grids

    Directory of Open Access Journals (Sweden)

    Chengqi Cheng

    2016-09-01

    Full Text Available As research on large regions of earth progresses, many geographical subdivision grids have been established for various spatial applications by different industries and disciplines. However, there is no clear relationship between the different grids and no consistent spatial reference grid that allows for information exchange and comprehensive application. Sharing and exchange of data across departments and applications are still at a bottleneck. It would represent a significant step forward to build a new grid model that is inclusive of or compatible with most of the existing geodesic grids and that could support consolidation and exchange within existing data services. This study designs a new geographical coordinate global subdividing grid with one-dimensional integer coding on a 2^n tree (GeoSOT) that has 2^n coordinate subdivision characteristics (global longitude and latitude subdivision) and can form integer hierarchies at degree, minute, and second levels. This grid has the multi-dimensional quadtree hierarchical characteristics of a digital earth grid, but also provides good consistency with applied grids, such as those used in mapping, meteorology, oceanography and national geographical, and three-dimensional digital earth grids. No other existing grid codes possess these characteristics.
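The 2^n-tree subdivision can be illustrated with a plain quadtree (Z-order) encoding of latitude and longitude. The real GeoSOT code additionally aligns cells to degree/minute/second boundaries, which this sketch omits:

```python
def quadtree_code(lat, lon, levels=16):
    """Encode a lat/lon point as a quadtree cell code by halving the
    latitude and longitude ranges at each level.

    Each digit 0-3 names one quadrant of the current cell, so the code
    of a cell is a prefix of the codes of all its descendants. This
    illustrates the 2^n-tree idea behind grids such as GeoSOT, without
    GeoSOT's degree/minute/second alignment rules.
    """
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    digits = []
    for _ in range(levels):
        lat_mid = (lat_lo + lat_hi) / 2
        lon_mid = (lon_lo + lon_hi) / 2
        bit_lat = int(lat >= lat_mid)
        bit_lon = int(lon >= lon_mid)
        digits.append(bit_lat * 2 + bit_lon)   # quadrant index 0..3
        lat_lo, lat_hi = (lat_mid, lat_hi) if bit_lat else (lat_lo, lat_mid)
        lon_lo, lon_hi = (lon_mid, lon_hi) if bit_lon else (lon_lo, lon_mid)
    return "".join(str(d) for d in digits)

# usage: two nearby points share a long code prefix
c1 = quadtree_code(39.9, 116.4)
c2 = quadtree_code(39.9001, 116.4001)
```

The prefix property is what lets one hierarchical grid serve as a common reference for coarser applied grids: truncating a code yields the enclosing cell at any coarser level.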

  13. An Existence Principle for Nonlocal Difference Boundary Value Problems with φ-Laplacian and Its Application to Singular Problems

    Directory of Open Access Journals (Sweden)

    Svatoslav Stanêk

    2008-03-01

    Full Text Available The paper presents an existence principle for solving a large class of nonlocal regular discrete boundary value problems with the φ-Laplacian. Applications of the existence principle to singular discrete problems are given.

  14. Midplane Faraday rotation: A densitometer for large tokamaks

    International Nuclear Information System (INIS)

    Jobes, F.C.; Mansfield, D.K.

    1992-01-01

    The density in a large tokamak such as the International Thermonuclear Experimental Reactor (ITER), or any of the proposed future US machines, can be determined by measuring the Faraday rotation of a 10.6 μm laser directed tangent to the toroidal field. If there is a horizontal array of such beams, then n_e(R) can be readily obtained with a simple Abel inversion about the center line of the tokamak. For a large machine, operated at a full field of 30 T·m and a density of 2×10^20 m^-3, the rotation angle would be quite large: about 60° for two passes. A layout in which a single laser beam is fanned out in the horizontal midplane of the tokamak, with a set of retroreflectors on the far side of the vacuum vessel, would provide good spatial resolution, depending only upon the number of reflectors. With this proposed layout, only one window would be needed. Because the rotation angle is never more than one "fringe," the data is always good, and it is also a continuous measurement in time. Faraday rotation is dependent only upon the plasma itself, and thus is not sensitive to vibration of the optical components. Simulations of the expected results show that ITER, or any large tokamak, existing or proposed, would be well served even at low densities by a midplane Faraday rotation densitometer of ~64 channels
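The quoted two-pass rotation of roughly 60° follows from the standard plasma Faraday rotation scaling φ = 2.62×10^-13 λ² ∫ n_e B_∥ dl (SI units, radians). A sketch for a chord tangent to a 1/R toroidal field; the tangency radius, chord length and uniform density are illustrative assumptions, not ITER parameters:

```python
import math

# Faraday rotation constant (SI): phi = C * lambda^2 * integral(n_e * B_par dl)
C = 2.62e-13

def rotation_angle(wavelength, n_e, B0R0, passes=2,
                   tangency_radius=3.0, half_chord=20.0, samples=100000):
    """Rotation of a beam tangent to the toroidal field.

    For a chord tangent at radius d in a 1/R toroidal field B = B0*R0/R,
    the field component along the chord at distance x from the tangency
    point is B0*R0*d/(d^2 + x^2). A uniform density n_e is assumed.
    """
    d = tangency_radius
    dl = 2.0 * half_chord / samples
    integral = 0.0
    for i in range(samples):                    # midpoint rule along the chord
        x = -half_chord + (i + 0.5) * dl
        integral += n_e * B0R0 * d / (d * d + x * x) * dl
    return C * wavelength ** 2 * integral * passes

# 10.6 um laser, n_e = 2e20 m^-3, B0*R0 = 30 T·m, two passes
phi = rotation_angle(10.6e-6, 2e20, 30.0)
deg = math.degrees(phi)
```

With these numbers the two-pass rotation comes out near 60°, consistent with the abstract; for an infinite chord the parallel-field integral reduces analytically to π·B0·R0 per pass.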

  15. Sample-based Attribute Selective AnDE for Large Data

    DEFF Research Database (Denmark)

    Chen, Shenglei; Martinez, Ana; Webb, Geoffrey

    2017-01-01

    More and more applications have come with large data sets in the past decade. However, existing algorithms are not guaranteed to scale well on large data. Averaged n-Dependence Estimators (AnDE) allows for flexible learning from out-of-core data, by varying the value of n (number of super parents). Henc...

  16. The large-s field-reversed configuration experiment

    International Nuclear Information System (INIS)

    Hoffman, A.L.; Carey, L.N.; Crawford, E.A.; Harding, D.G.; DeHart, T.E.; McDonald, K.F.; McNeil, J.L.; Milroy, R.D.; Slough, J.T.; Maqueda, R.; Wurden, G.A.

    1993-01-01

    The Large-s Experiment (LSX) was built to study the formation and equilibrium properties of field-reversed configurations (FRCs) as the scale size increases. The dynamic, field-reversed theta-pinch method of FRC creation produces axial and azimuthal deformations and makes formation difficult, especially in large devices with large s (number of internal gyroradii) where it is difficult to achieve initial plasma uniformity. However, with the proper technique, these formation distortions can be minimized and are then observed to decay with time. This suggests that the basic stability and robustness of FRCs formed, and in some cases translated, in smaller devices may also characterize larger FRCs. Elaborate formation controls were included on LSX to provide the initial uniformity and symmetry necessary to minimize formation disturbances, and stable FRCs could be formed up to the design goal of s = 8. For s ≤ 4, the formation distortions decayed away completely, resulting in symmetric equilibrium FRCs with record confinement times up to 0.5 ms, agreeing with previous empirical scaling laws (τ ∝ sR). Above s = 4, reasonably long-lived (up to 0.3 ms) configurations could still be formed, but the initial formation distortions were so large that they never completely decayed away, and the equilibrium confinement was degraded from the empirical expectations. The LSX was only operational for 1 yr, and it is not known whether s = 4 represents a fundamental limit for good confinement in simple (no ion beam stabilization) FRCs or whether it simply reflects a limit of present formation technology. Ideally, s could be increased through flux buildup from neutral beams. Since the addition of kinetic or beam ions will probably be desirable for heating, sustainment, and further stabilization of magnetohydrodynamic modes at reactor-level s values, neutral beam injection is the next logical step in FRC development. 24 refs., 21 figs., 2 tabs

  17. Existence and multiplicity results for homoclinic orbits of Hamiltonian systems

    Directory of Open Access Journals (Sweden)

    Chao-Nien Chen

    1997-03-01

    Full Text Available Homoclinic orbits play an important role in the study of the qualitative behavior of dynamical systems. Orbits of this kind have been studied since the time of Poincaré. In this paper, we discuss how to use variational methods to study the existence of homoclinic orbits of Hamiltonian systems.

  18. Adapting existing experience with aquifer vulnerability and groundwater protection for Africa

    CSIR Research Space (South Africa)

    Robins, NS

    2007-01-01

    Full Text Available Africa today, and guidelines for risk assessment and groundwater protection (including protection zoning) exist, these are not always adhered to. For example, in September 2005 an outbreak of typhoid in the town of Delmas in Mpumalanga killed... at least four people. Large parts of the town are supplied by boreholes drilled into a karstic dolomitic aquifer. The water is chlorinated before being made available for public supply. Following an earlier outbreak of typhoid in 1993...

  19. Multiple-relaxation-time lattice Boltzmann model for incompressible miscible flow with large viscosity ratio and high Péclet number

    Science.gov (United States)

    Meng, Xuhui; Guo, Zhaoli

    2015-10-01

    A lattice Boltzmann model with a multiple-relaxation-time (MRT) collision operator is proposed in this paper for incompressible miscible flow with a large viscosity ratio as well as a high Péclet number. The equilibria in the present model are motivated by the lattice kinetic scheme previously developed by Inamuro et al. [Philos. Trans. R. Soc. London, Ser. A 360, 477 (2002), 10.1098/rsta.2001.0942]. The fluid viscosity and diffusion coefficient depend on both the corresponding relaxation times and additional adjustable parameters in this model. As a result, the corresponding relaxation times can be adjusted in proper ranges to enhance the performance of the model. Numerical validations of the Poiseuille flow and a diffusion-reaction problem demonstrate that the proposed model has second-order accuracy in space. Thereafter, the model is used to simulate flow through a porous medium, and the results show that the proposed model has the advantage of obtaining a viscosity-independent permeability, which makes it a robust method for simulating flow in porous media. Finally, a set of simulations are conducted on the viscous miscible displacement between two parallel plates. The results reveal that the present model can be used to simulate, to a high level of accuracy, flows with large viscosity ratios and/or high Péclet numbers. Moreover, the present model is shown to provide superior stability in the limit of high kinematic viscosity. In summary, the numerical results indicate that the present lattice Boltzmann model is an ideal numerical tool for simulating flow with a large viscosity ratio and/or a high Péclet number.
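The difficulty of large viscosity ratios can be seen from the standard lattice Boltzmann relation between kinematic viscosity and relaxation time, ν = c_s²(τ − 1/2)Δt with c_s² = 1/3 in lattice units. The sketch below (my own illustration, not the authors' code) shows how a ratio of 100 pushes one fluid's relaxation time toward the τ = 1/2 stability limit, the regime where MRT's extra adjustable relaxation rates help:

```python
CS2 = 1.0 / 3.0  # lattice sound speed squared (e.g. D2Q9), with dt = dx = 1

def viscosity(tau):
    """Kinematic viscosity implied by a BGK/MRT relaxation time tau."""
    return CS2 * (tau - 0.5)

def relaxation_time(nu):
    """Relaxation time needed to realize kinematic viscosity nu."""
    return nu / CS2 + 0.5

nu_high = viscosity(0.9)             # the more viscous fluid: nu = 0.4/3
nu_low = nu_high / 100.0             # hypothetical viscosity ratio of 100
tau_low = relaxation_time(nu_low)    # ~0.504, barely above the 0.5 limit
```

With a single relaxation time, τ this close to 1/2 tends to produce numerical instability; decoupling the viscosity from the other relaxation rates is precisely what the MRT operator allows.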

  20. Variation in Patients' Travel Times among Imaging Examination Types at a Large Academic Health System.

    Science.gov (United States)

    Rosenkrantz, Andrew B; Liang, Yu; Duszak, Richard; Recht, Michael P

    2017-08-01

    Patients' willingness to travel farther distances for certain imaging services may reflect their perceptions of the degree of differentiation of such services. We compare patients' travel times for a range of imaging examinations performed across a large academic health system. We searched the NYU Langone Medical Center Enterprise Data Warehouse to identify 442,990 adult outpatient imaging examinations performed over a recent 3.5-year period. Geocoding software was used to estimate typical driving times from patients' residences to imaging facilities. Variation in travel times was assessed among examination types. The mean expected travel time was 29.2 ± 20.6 minutes, but this varied significantly across modalities: travel times were shortest for ultrasound (26.8 ± 18.9) and longest for positron emission tomography-computed tomography (31.9 ± 21.5). For magnetic resonance imaging, travel times were shortest for musculoskeletal extremity (26.4 ± 19.2) and spine (28.6 ± 21.0) examinations and longest for prostate (35.9 ± 25.6) and breast (32.4 ± 22.3) examinations. For computed tomography, travel times were shortest for a range of screening examinations [colonography (25.5 ± 20.8), coronary artery calcium scoring (26.1 ± 19.2), and lung cancer screening (26.4 ± 14.9)] and longest for angiography (32.0 ± 22.6). For ultrasound, travel times were shortest for aortic aneurysm screening (22.3 ± 18.4) and longest for breast (30.1 ± 19.2) examinations. Overall, men (29.9 ± 21.6) had significantly longer travel times than women (27.8 ± 20.3); this difference persisted for each modality individually (p ≤ 0.006). Patients' willingness to travel longer times for certain imaging examination types (particularly breast and prostate imaging) supports the role of specialized services in combating potential commoditization of imaging services. Disparities in travel times by gender warrant further investigation.

  1. Existence of Periodic Orbits with Zeno Behavior in Completed Lagrangian Hybrid Systems

    OpenAIRE

    Or, Yizhar; Ames, Aaron D.

    2009-01-01

    In this paper, we consider hybrid models of mechanical systems undergoing impacts, Lagrangian hybrid systems, and study their periodic orbits in the presence of Zeno behavior, an infinite number of impacts occurring in finite time. The main result of this paper is a set of explicit conditions under which the existence of stable periodic orbits for a Lagrangian hybrid system with perfectly plastic impacts implies the existence of periodic orbits in the same system with non-plastic impacts. Such periodic...

  2. Massive Cloud Computing Processing of P-SBAS Time Series for Displacement Analyses at Large Spatial Scale

    Science.gov (United States)

    Casu, F.; de Luca, C.; Lanari, R.; Manunta, M.; Zinno, I.

    2016-12-01

    A methodology for computing surface deformation time series and mean velocity maps over large areas is presented. Our approach relies on the availability of a multi-temporal set of Synthetic Aperture Radar (SAR) data collected from ascending and descending orbits over an area of interest, and also permits estimation of the vertical and horizontal (East-West) displacement components of the Earth's surface. The adopted methodology is based on an advanced Cloud Computing implementation of the Differential SAR Interferometry (DInSAR) Parallel Small Baseline Subset (P-SBAS) processing chain, which allows unsupervised processing of large SAR data volumes, from raw (level-0) imagery up to the generation of DInSAR time series and maps. The presented solution, which is highly scalable, has been tested on the ascending and descending ENVISAT SAR archives acquired over a large area of Southern California (US) extending for about 90,000 km². This input dataset was processed in parallel on 280 computing nodes of the Amazon Web Services Cloud environment. Moreover, to produce the final mean deformation velocity maps of the vertical and East-West displacement components of the whole investigated area, we also took advantage of external GPS measurements, which make it possible to account for regional trends not easily detectable by DInSAR and to refer the P-SBAS measurements to an external geodetic datum. The presented results clearly demonstrate the effectiveness of the proposed approach, which paves the way to extensive use of the available ERS and ENVISAT SAR data archives. Furthermore, the proposed methodology is particularly suitable for the very large data flow provided by the Sentinel-1 constellation, thus permitting DInSAR analyses to be extended to a nearly global scale. This work is partially supported by: the DPC-CNR agreement, the EPOS-IP project and the ESA GEP project.

  3. KMTNet Time-series Photometry of the Doubly Eclipsing Binary Stars Located in the Large Magellanic Cloud

    Science.gov (United States)

    Hong, Kyeongsoo; Koo, Jae-Rim; Lee, Jae Woo; Kim, Seung-Lee; Lee, Chung-Uk; Park, Jang-Ho; Kim, Hyoun-Woo; Lee, Dong-Joo; Kim, Dong-Jin; Han, Cheongho

    2018-05-01

    We report the results of photometric observations of the doubly eclipsing binaries OGLE-LMC-ECL-15674 and OGLE-LMC-ECL-22159, each of which is composed of two pairs (designated A and B) of detached eclipsing binaries located in the Large Magellanic Cloud. The light curves were obtained by high-cadence time-series photometry using the Korea Microlensing Telescope Network 1.6 m telescopes located at three southern sites (CTIO, SAAO, and SSO) between 2016 September and 2017 January. The orbital periods were determined to be 1.433 and 1.387 days for components A and B of OGLE-LMC-ECL-15674, respectively, and 2.988 and 3.408 days for OGLE-LMC-ECL-22159A and B, respectively. Our light curve solutions indicate that the significant changes in the eclipse depths of OGLE-LMC-ECL-15674A and B were caused by variations in their inclination angles. The eclipse timing diagrams of the A and B components of OGLE-LMC-ECL-15674 and OGLE-LMC-ECL-22159 were analyzed using 28, 44, 28, and 26 new times of minimum light, respectively. For the first time, the apsidal motion period of OGLE-LMC-ECL-15674B was estimated by detailed analysis of the eclipse timings. The detached eclipsing binary OGLE-LMC-ECL-15674B shows a fast apsidal period of 21.5 ± 0.1 years.

  4. Speckle photography applied to measure deformations of very large structures

    Science.gov (United States)

    Conley, Edgar; Morgan, Chris K.

    1995-04-01

    Fundamental principles of mechanics have recently been brought to bear on problems concerning very large structures. Fields of study include tectonic plate motion, nuclear waste repository vault closure mechanisms, the flow of glacier and sea ice, and highway bridge damage assessment and residual life prediction. Quantitative observations, appropriate for formulating and verifying models, are still scarce however, so the need to adapt new methods of experimental mechanics is clear. Large dynamic systems often exist in environments subject to rapid change. Therefore, a simple field technique that incorporates short time scales and short gage lengths is required. Further, the measuring methods must yield displacements reliably, even under often adverse field conditions. Fortunately, the advantages conferred by an experimental mechanics technique known as speckle photography fulfill this rather stringent set of performance requirements. Speckle lends itself to the application since it is robust and relatively inexpensive. Experimental requirements are minimal: a camera, high-resolution film, illumination, and an optically rough surface. Perhaps most important is speckle's distinct advantage over point-by-point methods: it maps the two-dimensional displacement vectors of the whole field of interest. Finally, given the method's high spatial resolution, only relatively short observation times are necessary. In this paper we discuss speckle, two variations of which were used to gage the deformation of a reinforced concrete bridge structure subjected to bending loads. The measurement technique proved to be easily applied, and yielded the location of the neutral axis self-consistently. The research demonstrates the feasibility of using whole-field techniques to detect and quantify surface strains of large structures under load.

  5. Connection of the Late Paleolithic archaeological sites of the Chuya depression with geological evidence of existence of the Late Pleistocene ice-dammed lakes

    Science.gov (United States)

    Agatova, A. R.; Nepop, R. K.

    2017-07-01

    The complexity of dating the Pleistocene ice-dammed paleolakes in the Altai Mountains is the reason geologists treat Early Paleolithic archaeological sites as an independent age marker for dating geological objects. However, in order to use these sites for paleogeographic reconstructions, their locations, the character of their stratification, and the age of the stone artifacts need to be comprehensively studied. We investigate 20 Late Paleolithic archaeological sites discovered in the Chuya depression of the Russian Altai (Altai Mountains) with the aim of their possible use for reconstructing the development of the Kurai-Chuya glacio-limnosystem in the Late Neopleistocene. The results of our investigation show that, at the current stage of study, it is improper to use the Paleolithic archaeological sites for dating the existence period and the draining time of ice-dammed lakes of the Chuya Depression, owing to a lack of quantitative age estimates, the wide age range over which these sites may have existed, the possible redeposition of the majority of artifacts, and their surface occurrence. It is established that all stratified sites where cultural layers are expected to be dated in the future lie above the uppermost and well-expressed paleolake level (2100 m a.s.l.). Accordingly, there are no grounds for determining the existence time of shallower paleolakes. Since all of the stone material collected below the 2100 m a.s.l. level is represented by surface finds, it is problematic to use these artifacts for absolute geochronology. The Late Paleolithic Bigdon and Chechketerek sites are of great interest for paleogeographic reconstructions of ice-dammed lakes. The use of iceberg-rafting products as cores is evidence that these sites appeared after the draining of a paleolake (2000 m a.s.l.). At this time, the location of these archaeological sites on the slope of the Chuya Depression allows one to assume the existence of a large lake as deep

  6. Study on large scale knowledge base with real time operation for autonomous nuclear power plant. 1. Basic concept and expecting performance

    International Nuclear Information System (INIS)

    Ozaki, Yoshihiko; Suda, Kazunori; Yoshikawa, Shinji; Ozawa, Kenji

    1996-04-01

    Since it is desirable to enhance the availability and safety of nuclear power plant operation and maintenance by removing human factors, there has been much research and development on intelligent operation and diagnosis using artificial intelligence (AI) techniques. We have been developing an autonomous operation and maintenance system for nuclear power plants by substituting AI systems and intelligent robots for human operators. An autonomous nuclear power plant requires varied, large-scale knowledge covering plant design, operation, and maintenance, that is, whole-life-cycle data of the plant. This knowledge must be given to the AI systems or intelligent robots adequately and at the right time. Moreover, it is necessary to ensure real-time operation of the large-scale knowledge base for plant control and diagnosis. We have been studying a large-scale, real-time knowledge base system for the autonomous plant. In this report, we present the basic concept and expected performance of the knowledge base for the autonomous plant, in particular its autonomous control and diagnosis system. (author)

  7. Thesaurus-based search in large heterogeneous collections

    NARCIS (Netherlands)

    J. Wielemaker (Jan); M. Hildebrand (Michiel); J.R. van Ossenbruggen (Jacco); G. Schreiber (Guus); A. Sheth et al

    2008-01-01

    In cultural heritage, large virtual collections are coming into existence. Such collections contain heterogeneous sets of metadata and vocabulary concepts, originating from multiple sources. In the context of the E-Culture demonstrator we have shown earlier that such virtual

  8. Computational challenges of large-scale, long-time, first-principles molecular dynamics

    International Nuclear Information System (INIS)

    Kent, P R C

    2008-01-01

    Plane wave density functional calculations have traditionally been able to use the largest available supercomputing resources. We analyze the scalability of modern projector-augmented wave implementations to identify the challenges in performing molecular dynamics calculations of large systems containing many thousands of electrons. Benchmark calculations on the Cray XT4 demonstrate that global linear-algebra operations are the primary reason for limited parallel scalability. Plane-wave related operations can be made sufficiently scalable. Improving parallel linear-algebra performance is an essential step to reaching longer timescales in future large-scale molecular dynamics calculations
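The scaling argument in the abstract can be made concrete with a rough, hypothetical cost model: per-band FFT work grows roughly as N_e · N_g · log N_g (one 3-D FFT per electronic band), while subspace diagonalization and orthogonalization grow as N_e³. The crossover point below is illustrative only; real codes differ by large constant factors.

```python
from math import log2

def fft_cost(n_elec, n_grid):
    """Rough operation count for per-iteration FFTs: one 3-D FFT of
    n_grid points for each of n_elec bands (constants omitted)."""
    return n_elec * n_grid * log2(n_grid)

def linalg_cost(n_elec):
    """Rough operation count for global subspace diagonalization /
    orthogonalization, which scales cubically with electron count."""
    return n_elec ** 3

# Hypothetical run with 10^6 grid points per band:
# at 2,000 electrons the FFTs dominate; at 20,000 the cubic term does.
small = (fft_cost(2_000, 10**6), linalg_cost(2_000))
large = (fft_cost(20_000, 10**6), linalg_cost(20_000))
```

This is why the benchmark conclusion above singles out global linear algebra rather than the plane-wave (FFT) operations as the scalability bottleneck for systems with many thousands of electrons.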

  9. Global Existence Results for Viscoplasticity at Finite Strain

    Science.gov (United States)

    Mielke, Alexander; Rossi, Riccarda; Savaré, Giuseppe

    2018-01-01

    We study a model for rate-dependent gradient plasticity at finite strain based on the multiplicative decomposition of the strain tensor, and investigate the existence of global-in-time solutions to the related PDE system. We reveal its underlying structure as a generalized gradient system, where the driving energy functional is highly nonconvex and features the geometric nonlinearities related to finite-strain elasticity as well as the multiplicative decomposition of finite-strain plasticity. Moreover, the dissipation potential depends on the left-invariant plastic rate, and thus depends on the plastic state variable. The existence theory is developed for a class of abstract, nonsmooth, and nonconvex gradient systems, for which we introduce suitable notions of solutions, namely energy-dissipation-balance and energy-dissipation-inequality solutions. Hence, we resort to the toolbox of the direct method of the calculus of variations to check that the specific energy and dissipation functionals for our viscoplastic models comply with the conditions of the general theory.

  10. Network structure of multivariate time series.

    Science.gov (United States)

    Lacasa, Lucas; Nicosia, Vincenzo; Latora, Vito

    2015-10-21

    Our understanding of a variety of phenomena in physics, biology and economics crucially depends on the analysis of multivariate time series. While a wide range of tools and techniques for time series analysis already exists, the increasing availability of massive data structures calls for new approaches for multidimensional signal processing. We present here a non-parametric method to analyse multivariate time series, based on the mapping of a multidimensional time series into a multilayer network, which allows one to extract information on a high-dimensional dynamical system through the analysis of the structure of the associated multiplex network. The method is simple to implement, general, scalable, does not require ad hoc phase space partitioning, and is thus suitable for the analysis of large, heterogeneous and non-stationary time series. We show that simple structural descriptors of the associated multiplex networks allow one to extract and quantify nontrivial properties of coupled chaotic maps, including the transition between different dynamical phases and the onset of various types of synchronization. As a concrete example we then study financial time series, showing that a multiplex network analysis can efficiently discriminate crises from periods of financial stability, where standard methods based on time-series symbolization often fail.
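One common way to realize such a mapping (a sketch under the assumption that each channel becomes a horizontal visibility graph layer, one of the visibility-graph constructions associated with these authors; the naive O(n²) check below is for clarity, not efficiency):

```python
def hvg_edges(x):
    """Horizontal visibility graph of one series: points i < j are linked
    iff every intermediate value lies strictly below both endpoints."""
    edges = []
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            if all(x[k] < min(x[i], x[j]) for k in range(i + 1, j)):
                edges.append((i, j))
    return edges

def multiplex_network(channels):
    """One HVG layer per channel of a multivariate series. All layers share
    the same node set (the time indices), forming a multiplex network."""
    return {name: hvg_edges(series) for name, series in channels.items()}

layers = multiplex_network({"x": [1.0, 3.0, 2.0, 4.0],
                            "y": [4.0, 1.0, 2.0, 3.0]})
```

Structural descriptors of the kind the abstract mentions (e.g., degree overlap between layers) can then be computed directly on the shared node set.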

  11. Marine Planning for Potential Wave Energy Facility Placement Amongst a Crowded Sea of Existing Resource Uses

    Science.gov (United States)

    Feist, B. E.; Fuller, E.; Plummer, M. L.

    2016-12-01

    Conversion to renewable energy sources is a logical response to increasing pressure to reduce greenhouse gas emissions. Ocean wave energy is the least developed renewable energy source, despite having the highest energy per unit area. While many hurdles remain in developing wave energy, assessing potential conflicts and evaluating tradeoffs with existing uses is essential. Marine planning encompasses a broad array of activities that take place in and affect large marine ecosystems, making it an ideal tool for evaluating wave energy resource use conflicts. In this study, we focus on the potential conflicts between wave energy conversion (WEC) facilities and existing marine uses in the context of marine planning, within the California Current Large Marine Ecosystem. First, we evaluated wave energy facility development using the Wave Energy Model (WEM) of the Integrated Valuation of Ecosystem Services and Trade-offs (InVEST) toolkit. Second, we ran spatial analyses on the model output to identify conflicts with existing marine uses, including AIS-based vessel traffic, VMS- and observer-based measures of commercial fishing effort, and marine conservation areas. We found that regions with the highest wave energy potential were distant from major cities and that infrastructure limitations (cable landing sites) restrict integration with existing power grids. We identified multiple spatial conflicts with existing marine uses, especially shipping vessels and various commercial fishing fleets, and overlap with marine conservation areas varied by conservation designation. While wave energy generation facilities may be economically viable in the California Current, this viability must be considered within the context of the costs associated with conflicts that arise with existing marine uses. Our analyses can be used to better inform placement of WEC devices (as well as other types of renewable energy facilities) in the context of marine planning by accounting for economic tradeoffs.

  12. Long-time and large-distance asymptotic behavior of the current-current correlators in the non-linear Schroedinger model

    Energy Technology Data Exchange (ETDEWEB)

    Kozlowski, K.K. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Terras, V. [CNRS, ENS Lyon (France). Lab. de Physique

    2010-12-15

    We present a new method allowing us to derive the long-time and large-distance asymptotic behavior of the correlation functions of quantum integrable models from their exact representations. Starting from the form factor expansion of the correlation functions in finite volume, we explain how to reduce the complexity of the computation in the so-called interacting integrable models to the one appearing in free fermion equivalent models. We apply our method to the time-dependent zero-temperature current-current correlation function in the non-linear Schroedinger model and compute the first few terms in its asymptotic expansion. Our result goes beyond the conformal field theory based predictions: in the time-dependent case, other types of excitations than the ones on the Fermi surface contribute to the leading orders of the asymptotics. (orig.)

  13. Long-time and large-distance asymptotic behavior of the current-current correlators in the non-linear Schroedinger model

    International Nuclear Information System (INIS)

    Kozlowski, K.K.; Terras, V.

    2010-12-01

    We present a new method allowing us to derive the long-time and large-distance asymptotic behavior of the correlation functions of quantum integrable models from their exact representations. Starting from the form factor expansion of the correlation functions in finite volume, we explain how to reduce the complexity of the computation in the so-called interacting integrable models to the one appearing in free fermion equivalent models. We apply our method to the time-dependent zero-temperature current-current correlation function in the non-linear Schroedinger model and compute the first few terms in its asymptotic expansion. Our result goes beyond the conformal field theory based predictions: in the time-dependent case, other types of excitations than the ones on the Fermi surface contribute to the leading orders of the asymptotics. (orig.)

  14. Data transfer over the wide area network with a large round trip time

    Science.gov (United States)

    Matsunaga, H.; Isobe, T.; Mashimo, T.; Sakamoto, H.; Ueda, I.

    2010-04-01

    A Tier-2 regional center is running at the University of Tokyo in Japan. This center receives a large amount of data of the ATLAS experiment from the Tier-1 center in France. Although the link between the two centers has 10 Gbps bandwidth, it is not a dedicated link but is shared with other traffic, and the round trip time is 290 ms. It is not easy to exploit the available bandwidth on such a link, a so-called long fat network. We performed data transfer tests using GridFTP with various combinations of parameters, such as the number of parallel streams and the TCP window size. In addition, we have gained experience with actual data transfer in our production system, where the Disk Pool Manager (DPM) is used as the Storage Element and the data transfer is controlled by the File Transfer Service (FTS). We report results of the tests and the daily activity, and discuss the improvement of the data transfer throughput.
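The tuning problem described above comes down to the bandwidth-delay product: on a long fat network, the sending side must keep roughly bandwidth × RTT worth of data in flight. A quick sketch of the arithmetic (the 16-stream split is an illustrative choice, not a figure from the paper):

```python
def bdp_bytes(bandwidth_bps, rtt_seconds):
    """Bandwidth-delay product: the volume of data that must be in flight
    to keep a long fat network full."""
    return bandwidth_bps * rtt_seconds / 8.0

# Figures from the abstract: 10 Gbps link, 290 ms round trip time.
pipe = bdp_bytes(10e9, 0.290)       # ~362.5 MB must be in flight
window_per_stream = pipe / 16       # ~22.7 MB TCP window if split over
                                    # 16 parallel GridFTP streams
```

This is why GridFTP's parallel streams help: splitting the pipe across N TCP connections divides the per-connection window requirement by N, bringing it within reach of typical OS buffer limits.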

  15. Data transfer over the wide area network with a large round trip time

    International Nuclear Information System (INIS)

    Matsunaga, H; Isobe, T; Mashimo, T; Sakamoto, H; Ueda, I

    2010-01-01

    A Tier-2 regional center is running at the University of Tokyo in Japan. This center receives a large amount of data of the ATLAS experiment from the Tier-1 center in France. Although the link between the two centers has 10 Gbps bandwidth, it is not a dedicated link but is shared with other traffic, and the round trip time is 290 ms. It is not easy to exploit the available bandwidth on such a link, a so-called long fat network. We performed data transfer tests using GridFTP with various combinations of parameters, such as the number of parallel streams and the TCP window size. In addition, we have gained experience with actual data transfer in our production system, where the Disk Pool Manager (DPM) is used as the Storage Element and the data transfer is controlled by the File Transfer Service (FTS). We report results of the tests and the daily activity, and discuss the improvement of the data transfer throughput.

  16. Expectation propagation for large scale Bayesian inference of non-linear molecular networks from perturbation data.

    Science.gov (United States)

    Narimani, Zahra; Beigy, Hamid; Ahmad, Ashar; Masoudi-Nejad, Ali; Fröhlich, Holger

    2017-01-01

    Inferring the structure of molecular networks from time series protein or gene expression data provides valuable information about the complex biological processes of the cell. Causal network structure inference has been approached using different methods in the past. Most causal network inference techniques, such as Dynamic Bayesian Networks and ordinary differential equations, are limited by their computational complexity and thus make large scale inference infeasible. This is specifically true if a Bayesian framework is applied in order to deal with the unavoidable uncertainty about the correct model. We devise a novel Bayesian network reverse engineering approach using ordinary differential equations with the ability to include non-linearity. Besides modeling arbitrary, possibly combinatorial and time dependent perturbations with unknown targets, one of our main contributions is the use of Expectation Propagation, an algorithm for approximate Bayesian inference over large scale network structures in short computation time. We further explore the possibility of integrating prior knowledge into network inference. We evaluate the proposed model on DREAM4 and DREAM8 data and find it competitive against several state-of-the-art existing network inference methods.

  17. GRAMI: Generalized Frequent Subgraph Mining in Large Graphs

    KAUST Repository

    El Saeedy, Mohammed El Sayed

    2011-07-24

    Mining frequent subgraphs is an important operation on graphs. Most existing work assumes a database of many small graphs, but modern applications, such as social networks, citation graphs or protein-protein interaction in bioinformatics, are modeled as a single large graph. Interesting interactions in such applications may be transitive (e.g., friend of a friend). Existing methods, however, search for frequent isomorphic (i.e., exact match) subgraphs and cannot discover many useful patterns. In this paper we propose GRAMI, a framework that generalizes frequent subgraph mining in a large single graph. GRAMI discovers frequent patterns. A pattern is a graph where edges are generalized to distance-constrained paths. Depending on the definition of the distance function, many instantiations of the framework are possible. Both directed and undirected graphs, as well as multiple labels per vertex, are supported. We developed an efficient implementation of the framework that models the frequency resolution phase as a constraint satisfaction problem, in order to avoid the costly enumeration of all instances of each pattern in the graph. We also implemented CGRAMI, a version that supports structural and semantic constraints; and AGRAMI, an approximate version that supports very large graphs. Our experiments on real data demonstrate that our framework is up to 3 orders of magnitude faster and discovers more interesting patterns than existing approaches.

  18. A simple model for the initial phase of a water plasma cloud about a large structure in space

    International Nuclear Information System (INIS)

    Hastings, D.E.; Gatsonis, N.A.; Mogstad, T.

    1988-01-01

    Large structures in the ionosphere will outgas or eject neutral water and perturb the ambient neutral environment. This water can undergo charge exchange with the ambient oxygen ions and form a water plasma cloud. Additionally, water dumps or thruster firings can create a water plasma cloud. A simple model for the evolution of a water plasma cloud about a large space structure is obtained. It is shown that if the electron density around a large space structure is substantially enhanced above the ambient density then the plasma cloud will move away from the structure. As the cloud moves away, it will become unstable and will eventually break up into filaments. A true steady state will exist only if the total electron density is unperturbed from the ambient density. When the water density is taken to be consistent with shuttle-based observations, the cloud is found to slowly drift away on a time scale of many tens of milliseconds. This time is consistent with the shuttle observations

  19. Existing and new techniques in uranium exploration

    International Nuclear Information System (INIS)

    Bowie, S.H.U.; Cameron, J.

    1976-01-01

    The demands on uranium exploration over the next 25 years will be very great indeed and will call for every possible means of improvement in exploration capability. The first essential is to increase geological knowledge of the mode of occurrence of uranium ore deposits. The second is to improve existing exploration techniques and instrumentation while, at the same time, promoting research and development on new methods to discover uranium ore bodies on the earth's surface and at depth. The present symposium is an effort to increase co-operation and the exchange of information in the critical field of uranium exploration techniques and instrumentation. As an introduction to the symposium a brief review is presented, firstly of what can be considered as existing techniques and, secondly, of techniques which have not yet been used on an appreciable scale. Some fourteen techniques used over the last 30 years are identified and their appropriate application, advantages and limitations are briefly summarized and the possibilities of their further development considered. The aim of future research on new techniques, in addition to finding new ways and means of identifying surface deposits, should be mainly directed to devising methods and instrumentation capable of detecting buried ore bodies that do not give a gamma signal at the surface. To achieve this aim, two contributory factors are essential: adequate financial support for research and development and increased specialized training in uranium exploration and instrumentation design. The papers in this symposium describe developments in the existing techniques, proposals for future research and development and case histories of exploration programmes

  20. Tracking of large-scale structures in turbulent channel with direct numerical simulation of low Prandtl number passive scalar

    Science.gov (United States)

    Tiselj, Iztok

    2014-12-01

    Channel flow DNS (Direct Numerical Simulation) at friction Reynolds number 180 and with passive scalars of Prandtl numbers 1 and 0.01 was performed in various computational domains. The "normal" size domain was ˜2300 wall units long and ˜750 wall units wide; size taken from the similar DNS of Moser et al. The "large" computational domain, which is supposed to be sufficient to describe the largest structures of the turbulent flows was 3 times longer and 3 times wider than the "normal" domain. The "very large" domain was 6 times longer and 6 times wider than the "normal" domain. All simulations were performed with the same spatial and temporal resolution. Comparison of the standard and large computational domains shows the velocity field statistics (mean velocity, root-mean-square (RMS) fluctuations, and turbulent Reynolds stresses) that are within 1%-2%. Similar agreement is observed for Pr = 1 temperature fields and can be observed also for the mean temperature profiles at Pr = 0.01. These differences can be attributed to the statistical uncertainties of the DNS. However, second-order moments, i.e., RMS temperature fluctuations of standard and large computational domains at Pr = 0.01 show significant differences of up to 20%. Stronger temperature fluctuations in the "large" and "very large" domains confirm the existence of the large-scale structures. Their influence is more or less invisible in the main velocity field statistics or in the statistics of the temperature fields at Prandtl numbers around 1. However, these structures play visible role in the temperature fluctuations at low Prandtl number, where high temperature diffusivity effectively smears the small-scale structures in the thermal field and enhances the relative contribution of large-scales. These large thermal structures represent some kind of an echo of the large scale velocity structures: the highest temperature-velocity correlations are not observed between the instantaneous temperatures and

  1. Managing patients' wait time in specialist out-patient clinic using real-time data from existing queue management and ADT systems.

    Science.gov (United States)

    Ju, John Chen; Gan, Soon Ann; Tan Siew Wee, Justine; Huang Yuchi, Peter; Mei Mei, Chan; Wong Mei Mei, Sharon; Fong, Kam Weng

    2013-01-01

In major cancer centers, heavy patient load and multiple registration stations can cause significant wait times and result in patient complaints. Real-time patient journey data and visual displays are useful tools in hospital patient queue management. This paper demonstrates how we capture patient queue data without deploying any tracing devices, and how we convert the data into useful patient journey information to understand where interventions are likely to be most effective. During system development, considerable effort was spent on resolving data discrepancies and balancing accuracy against system performance. A web-based dashboard to display real-time information and a framework for data analysis were also developed to facilitate our clinics' operation. Results show our system eliminates more than 95% of data-capturing errors and has improved the accuracy of patient wait-time data since it was deployed.

  2. Finding hidden periodic signals in time series - an application to stock prices

    Science.gov (United States)

    O'Shea, Michael

    2014-03-01

    Data in the form of time series appear in many areas of science. In cases where the periodicity is apparent and the only other contribution to the time series is stochastic in origin, the data can be `folded' to improve signal to noise and this has been done for light curves of variable stars with the folding resulting in a cleaner light curve signal. Stock index prices versus time are classic examples of time series. Repeating patterns have been claimed by many workers and include unusually large returns on small-cap stocks during the month of January, and small returns on the Dow Jones Industrial average (DJIA) in the months June through September compared to the rest of the year. Such observations imply that these prices have a periodic component. We investigate this for the DJIA. If such a component exists it is hidden in a large non-periodic variation and a large stochastic variation. We show how to extract this periodic component and for the first time reveal its yearly (averaged) shape. This periodic component leads directly to the `Sell in May and buy at Halloween' adage. We also drill down and show that this yearly variation emerges from approximately half of the underlying stocks making up the DJIA index.
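The folding technique the abstract describes is simple to demonstrate: average all observations that share the same phase of an assumed period, so the stochastic part cancels while the periodic component survives. A minimal sketch with a synthetic series (invented signal and noise, not stock or light-curve data):

```python
# "Folding" a time series: average values at the same phase of an assumed
# period to improve signal-to-noise and reveal a hidden periodic component.
import math
import random

def fold(series, period):
    """Average the series over bins of equal phase (index mod period)."""
    bins = [[] for _ in range(period)]
    for i, v in enumerate(series):
        bins[i % period].append(v)
    return [sum(b) / len(b) for b in bins]

random.seed(0)
period, n_cycles = 12, 200
# A periodic component buried in a large stochastic variation.
series = [math.sin(2 * math.pi * (i % period) / period) + random.gauss(0, 1.0)
          for i in range(period * n_cycles)]

profile = fold(series, period)
# After folding, the averaged profile tracks the underlying sine closely;
# per-bin noise shrinks roughly as 1/sqrt(n_cycles).
errors = [abs(profile[k] - math.sin(2 * math.pi * k / period))
          for k in range(period)]
print(round(max(errors), 3))
```

With 200 cycles the residual noise per phase bin is about 1/√200 ≈ 0.07, an order of magnitude below the unit-amplitude periodic signal.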

  3. The Integration of DCS I/O to an Existing PLC

    Science.gov (United States)

    Sadhukhan, Debashis; Mihevic, John

    2013-01-01

    At the NASA Glenn Research Center (GRC), Existing Programmable Logic Controller (PLC) I/O was replaced with Distributed Control System (DCS) I/O, while keeping the existing PLC sequence Logic. The reason for integration of the PLC logic and DCS I/O, along with the evaluation of the resulting system is the subject of this paper. The pros and cons of the old system and new upgrade are described, including operator workstation screen update times. Detail of the physical layout and the communication between the PLC, the DCS I/O and the operator workstations are illustrated. The complex characteristics of a central process control system and the plan to remove the PLC processors in future upgrades is also discussed.

  4. Evaluating cloud processes in large-scale models: Of idealized case studies, parameterization testbeds and single-column modelling on climate time-scales

    Science.gov (United States)

    Neggers, Roel

    2016-04-01

    Boundary-layer schemes have always formed an integral part of General Circulation Models (GCMs) used for numerical weather and climate prediction. The spatial and temporal scales associated with boundary-layer processes and clouds are typically much smaller than those at which GCMs are discretized, which makes their representation through parameterization a necessity. The need for generally applicable boundary-layer parameterizations has motivated many scientific studies, which in effect has created its own active research field in the atmospheric sciences. Of particular interest has been the evaluation of boundary-layer schemes at "process-level". This means that parameterized physics are studied in isolated mode from the larger-scale circulation, using prescribed forcings and excluding any upscale interaction. Although feedbacks are thus prevented, the benefit is an enhanced model transparency, which might aid an investigator in identifying model errors and understanding model behavior. The popularity and success of the process-level approach is demonstrated by the many past and ongoing model inter-comparison studies that have been organized by initiatives such as GCSS/GASS. A red line in the results of these studies is that although most schemes somehow manage to capture first-order aspects of boundary layer cloud fields, there certainly remains room for improvement in many areas. Only too often are boundary layer parameterizations still found to be at the heart of problems in large-scale models, negatively affecting forecast skills of NWP models or causing uncertainty in numerical predictions of future climate. How to break this parameterization "deadlock" remains an open problem. This presentation attempts to give an overview of the various existing methods for the process-level evaluation of boundary-layer physics in large-scale models. 
This includes i) idealized case studies, ii) longer-term evaluation at permanent meteorological sites (the testbed approach

  5. Operational, cost, and technical study of large windpower systems integrated with an existing electric utility. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Ligon, C.; Kirby, G.; Jordan, D.; Lawrence, J.H.; Wiesner, W.; Kosovec, A.; Swanson, R.K.; Smith, R.T.; Johnson, C.C.; Hodson, H.O.

    1976-04-01

Detailed wind energy assessment from the available wind records, and evaluation of the application of wind energy systems to an existing electric utility were performed in an area known as the Texas Panhandle, on the Great Plains. The study area includes parts of Texas, eastern New Mexico, the Oklahoma Panhandle and southern Kansas. The region is shown to have uniformly distributed winds of relatively high velocity, with average wind power density of 0.53 kW/m^2 at 30 m height at Amarillo, Texas, a representative location. The annual period of calm is extremely low. Three separate compressed air storage systems with good potential were analyzed in detail, and two potential pumped-hydro facilities were identified and given preliminary consideration. Aquifer storage of compressed air is a promising possibility in the region.

  6. Old times

    Directory of Open Access Journals (Sweden)

    Ubiratan Paiva de Oliveira

    2008-04-01

In Pinter: A Study of His Plays, Martin Esslin mentions three levels of possible interpretation for Old Times. According to him, Pinter's play could be interpreted on a realistic level, as representing the male character's dream, or as a ritual game. He correctly remarks, though, that none of these levels excludes the others, because "... they must co-exist to create the atmosphere of poetic ambivalence on which the image of the play rests."

  7. Shared probe design and existing microarray reanalysis using PICKY

    Directory of Open Access Journals (Sweden)

    Chou Hui-Hsien

    2010-04-01

Background: Large genomes contain families of highly similar genes that cannot be individually identified by microarray probes. This limitation is due to thermodynamic restrictions and cannot be resolved by any computational method. Since gene annotations are updated more frequently than microarrays, another common issue facing microarray users is that existing microarrays must be routinely reanalyzed to determine probes that are still useful with respect to the updated annotations. Results: PICKY 2.0 can design shared probes for sets of genes that cannot be individually identified using unique probes. PICKY 2.0 uses novel algorithms to track sharable regions among genes and to strictly distinguish them from other highly similar but nontarget regions during thermodynamic comparisons. Therefore, PICKY does not sacrifice the quality of shared probes when choosing them. The latest PICKY 2.1 includes the new capability to reanalyze existing microarray probes against updated gene sets to determine probes that are still valid to use. In addition, more precise nonlinear salt effect estimates and other improvements are added, making PICKY 2.1 more versatile to microarray users. Conclusions: Shared probes allow expressed gene family members to be detected; this capability is generally more desirable than not knowing anything about these genes. Shared probes also enable the design of cross-genome microarrays, which facilitate multiple species identification in environmental samples. The new nonlinear salt effect calculation significantly increases the precision of probes at a lower buffer salt concentration, and the probe reanalysis function improves existing microarray result interpretations.

  8. Efficient Geo-Computational Algorithms for Constructing Space-Time Prisms in Road Networks

    Directory of Open Access Journals (Sweden)

    Hui-Ping Chen

    2016-11-01

The space-time prism (STP) is a key concept in time geography for analyzing human activity-travel behavior under various space-time constraints. Most existing time-geographic studies use a straightforward algorithm to construct STPs in road networks by means of two one-to-all shortest path searches. However, this straightforward algorithm can introduce considerable computational overhead, given that the accessible links in an STP are generally a small portion of the whole network. To address this issue, an efficient geo-computational algorithm, called NTP-A*, is proposed. The proposed NTP-A* algorithm employs the A* and branch-and-bound techniques to discard inaccessible links during the two shortest path searches, and thereby improves the STP construction performance. Comprehensive computational experiments are carried out to demonstrate the computational advantage of the proposed algorithm. Several implementation techniques, including the label-correcting technique and the hybrid link-node labeling technique, are discussed and analyzed. Experimental results show that the proposed NTP-A* algorithm can significantly improve STP construction performance in large-scale road networks by a factor of 100, compared with existing algorithms.
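The straightforward construction the abstract mentions can be sketched in a few lines: run two one-to-all shortest-path searches (plain Dijkstra here, not the proposed NTP-A*), then keep exactly the nodes whose combined travel times fit within the time budget. The toy network and budget below are invented for illustration:

```python
# Baseline STP construction: a node v is inside the prism iff
# dist(origin, v) + dist(v, destination) <= time budget.
import heapq

def dijkstra(graph, source):
    """One-to-all shortest path distances from source."""
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Undirected toy road network: node -> [(neighbor, travel_time)]
edges = [("o", "a", 2), ("a", "b", 2), ("b", "d", 2), ("o", "c", 5), ("c", "d", 5)]
graph = {}
for u, v, w in edges:
    graph.setdefault(u, []).append((v, w))
    graph.setdefault(v, []).append((u, w))

budget = 8
from_o = dijkstra(graph, "o")
to_d = dijkstra(graph, "d")   # travel times are symmetric, so search from "d"
prism = {v for v in graph if from_o[v] + to_d[v] <= budget}
print(sorted(prism))  # -> ['a', 'b', 'd', 'o']; "c" falls outside the budget
```

The inefficiency the paper targets is visible even here: both searches settle node "c" although it can never belong to the prism, which is what the A* and branch-and-bound pruning avoids.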

  9. Thesaurus-based search in large heterogeneous collections

    NARCIS (Netherlands)

    Wielemaker, J.; Hildebrand, M.; van Ossenbruggen, J.; Schreiber, G.

    2008-01-01

    In cultural heritage, large virtual collections are coming into existence. Such collections contain heterogeneous sets of metadata and vocabulary concepts, originating from multiple sources. In the context of the E-Culture demonstrator we have shown earlier that such virtual collections can be

  10. Spatiotemporally enhancing time-series DMSP/OLS nighttime light imagery for assessing large-scale urban dynamics

    Science.gov (United States)

    Xie, Yanhua; Weng, Qihao

    2017-06-01

    Accurate, up-to-date, and consistent information of urban extents is vital for numerous applications central to urban planning, ecosystem management, and environmental assessment and monitoring. However, current large-scale urban extent products are not uniform with respect to definition, spatial resolution, temporal frequency, and thematic representation. This study aimed to enhance, spatiotemporally, time-series DMSP/OLS nighttime light (NTL) data for detecting large-scale urban changes. The enhanced NTL time series from 1992 to 2013 were firstly generated by implementing global inter-calibration, vegetation-based spatial adjustment, and urban archetype-based temporal modification. The dataset was then used for updating and backdating urban changes for the contiguous U.S.A. (CONUS) and China by using the Object-based Urban Thresholding method (i.e., NTL-OUT method, Xie and Weng, 2016b). The results showed that the updated urban extents were reasonably accurate, with city-scale RMSE (root mean square error) of 27 km2 and Kappa of 0.65 for CONUS, and 55 km2 and 0.59 for China, respectively. The backdated urban extents yielded similar accuracy, with RMSE of 23 km2 and Kappa of 0.63 in CONUS, while 60 km2 and 0.60 in China. The accuracy assessment further revealed that the spatial enhancement greatly improved the accuracy of urban updating and backdating by significantly reducing RMSE and slightly increasing Kappa values. The temporal enhancement also reduced RMSE, and improved the spatial consistency between estimated and reference urban extents. Although the utilization of enhanced NTL data successfully detected urban size change, relatively low locational accuracy of the detected urban changes was observed. It is suggested that the proposed methodology would be more effective for updating and backdating global urban maps if further fusion of NTL data with higher spatial resolution imagery was implemented.
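The abstract scores agreement between detected and reference urban extents with Kappa. As a reminder of what those values measure, here is Cohen's kappa for a binary (urban / non-urban) map computed from a 2x2 confusion matrix; the pixel counts are invented for illustration, not taken from the study:

```python
# Cohen's kappa: observed agreement corrected for the agreement expected
# by chance from the row/column marginals of the confusion matrix.
def cohens_kappa(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    p_obs = (tp + tn) / n                                   # observed agreement
    p_exp = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    return (p_obs - p_exp) / (1 - p_exp)                    # chance-corrected

# Hypothetical counts: 40 true urban, 10 false urban, 10 missed, 40 true non-urban.
kappa = cohens_kappa(tp=40, fp=10, fn=10, tn=40)
print(round(kappa, 2))  # -> 0.6
```

A kappa near 0.6, as reported for both CONUS and China, thus corresponds to substantially better-than-chance but far from perfect per-pixel agreement.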

  11. Constraining Alternative Theories of Gravity Using Pulsar Timing Arrays

    Science.gov (United States)

    Cornish, Neil J.; O'Beirne, Logan; Taylor, Stephen R.; Yunes, Nicolás

    2018-05-01

The opening of the gravitational wave window by ground-based laser interferometers has made possible many new tests of gravity, including the first constraints on polarization. It is hoped that, within the next decade, pulsar timing will extend the window by making the first detections in the nanohertz frequency regime. Pulsar timing offers several advantages over ground-based interferometers for constraining the polarization of gravitational waves due to the many projections of the polarization pattern provided by the different lines of sight to the pulsars, and the enhanced response to longitudinal polarizations. Here, we show that existing results from pulsar timing arrays can be used to place stringent limits on the energy density of longitudinal stochastic gravitational waves. However, unambiguously distinguishing these modes from noise will be very difficult due to the large variances in the pulsar-pulsar correlation patterns. Existing upper limits on the power spectrum of pulsar timing residuals imply that the amplitudes of the vector longitudinal (VL) and scalar longitudinal (SL) modes at frequencies of 1/year are constrained to A_VL < 4×10^-16 and A_SL < 4×10^-17, while the bounds on the energy density for a scale-invariant cosmological background are Ω_VL h^2 < 4×10^-11 and Ω_SL h^2 < 3×10^-13.

  12. 14 CFR 1251.301 - Existing facilities.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 5 2010-01-01 2010-01-01 false Existing facilities. 1251.301 Section 1251... HANDICAP Accessibility § 1251.301 Existing facilities. (a) Accessibility. A recipient shall operate each... existing facilities or every part of a facility accessible to and usable by handicapped persons. (b...

  13. 10 CFR 611.206 - Existing facilities.

    Science.gov (United States)

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Existing facilities. 611.206 Section 611.206 Energy... PROGRAM Facility/Funding Awards § 611.206 Existing facilities. The Secretary shall, in making awards to those manufacturers that have existing facilities, give priority to those facilities that are oldest or...

  14. A precariedade humana e a existência estilizada Human precariousness and stylized existence

    Directory of Open Access Journals (Sweden)

    Rita Paiva

    2013-04-01

This article discusses the helplessness experienced by the consciousness vis-à-vis the absence of solid bases for its longings for happiness and for its symbolic representations. For this purpose, the object of reflection of the article is one of Albert Camus' philosophical essays, The Myth of Sisyphus, and we inquire into the possibility of an ethics that stylizes life without minimizing the painful precariousness of human existence. Making reference to certain texts by Foucault, we attempt to establish possible connections between Camus' ethics and an ethics of the aesthetics of existence as found in the thinkers of ancient Greece.

  15. Likelihood of Null Effects of Large NHLBI Clinical Trials Has Increased over Time.

    Directory of Open Access Journals (Sweden)

    Robert M Kaplan

We explore whether the number of null results in large National Heart, Lung, and Blood Institute (NHLBI) funded trials has increased over time. We identified all large NHLBI-supported RCTs between 1970 and 2012 evaluating drugs or dietary supplements for the treatment or prevention of cardiovascular disease. Trials were included if direct costs were >$500,000/year, participants were adult humans, and the primary outcome was cardiovascular risk, disease or death. The 55 trials meeting these criteria were coded for whether they were published prior to or after the year 2000, whether they were registered in clinicaltrials.gov prior to publication, whether they used an active or placebo comparator, and whether or not the trial had industry co-sponsorship. We tabulated whether each study reported a positive, negative, or null result on the primary outcome variable and for total mortality. 17 of 30 studies (57%) published prior to 2000 showed a significant benefit of intervention on the primary outcome, in comparison to only 2 among the 25 trials (8%) published after 2000 (χ² = 12.2, df = 1, p = 0.0005). There has been no change in the proportion of trials that compared treatment to placebo versus an active comparator. Industry co-sponsorship was unrelated to the probability of reporting a significant benefit. Pre-registration in clinicaltrials.gov was strongly associated with the trend toward null findings. The number of NHLBI trials reporting positive results declined after the year 2000. Prospective declaration of outcomes in RCTs, and the adoption of transparent reporting standards, as required by clinicaltrials.gov, may have contributed to the trend toward null findings.
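The reported test statistic can be reproduced from the counts given in the abstract. A 2x2 chi-square with Yates' continuity correction (an assumption, since the abstract does not name the correction used) matches χ² ≈ 12.2 at df = 1:

```python
# Re-deriving the abstract's chi-square from its counts:
# before 2000: 17 positive / 13 null trials; after 2000: 2 positive / 23 null.
def chi2_yates(a, b, c, d):
    """2x2 chi-square with Yates' continuity correction for [[a, b], [c, d]]."""
    n = a + b + c + d
    rows, cols = (a + b, c + d), (a + c, b + d)
    chi2 = 0.0
    for i, obs in enumerate((a, b, c, d)):
        exp = rows[i // 2] * cols[i % 2] / n      # expected count from marginals
        chi2 += (abs(obs - exp) - 0.5) ** 2 / exp # continuity-corrected term
    return chi2

stat = chi2_yates(17, 13, 2, 23)
print(round(stat, 1))  # -> 12.2
```

Without the continuity correction the same table gives χ² ≈ 14.3, so the Yates-corrected value is what matches the published 12.2.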

  16. Possible evidence for the existence of antimatter on a cosmological scale in the universe.

    Science.gov (United States)

    Stecker, F. W.; Morgan, D. L., Jr.; Bredekamp, J.

    1971-01-01

Initial results are presented of a detailed calculation of the cosmological gamma-ray spectrum from matter-antimatter annihilation in the universe. The similarity between the calculated spectrum and the present observations of the gamma-ray background spectrum above 1 MeV suggests that such observations may be evidence of the existence of antimatter on a large scale in the universe.

  17. Real-Time Adaptive Control of a Magnetic Levitation System with a Large Range of Load Disturbance.

    Science.gov (United States)

    Zhang, Zhizhou; Li, Xiaolong

    2018-05-11

In an idle light-load or a full-load condition, the change in the load mass of a suspension system is very significant. If the control parameters of conventional control methods remain unchanged, the suspension performance of the control system deteriorates rapidly or even loses stability when the load mass changes over a large range. In this paper, a real-time adaptive control method for a magnetic levitation system with a large range of mass changes is proposed. First, the suspension control system model of the maglev train is built, and the stability of the closed-loop system is analyzed. Then, a fast inner current loop is used to simplify the design of the suspension control system, and an adaptive control method is put forward to ensure that the system remains stable when the load mass varies over a wide range. Simulations and experiments show that when the load mass of the maglev system varies greatly, the adaptive control method is effective in suspending the system stably at a given displacement.
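A toy simulation (not the paper's controller) illustrates why adapting to the load mass matters: a one-degree-of-freedom levitated mass under PD control whose force command is scaled by a mass estimate. If the estimate stays wrong, the gap error settles off target; the simple estimator below recovers the true mass from the force balance F = m(a + g). All gains and masses are invented:

```python
# 1-DOF levitation sketch: plant x'' = u/m - g, controller scaled by m_hat.
g = 9.8
m_true = 500.0          # heavy full-load condition [kg]
m_hat = 100.0           # controller starts assuming a light load
kp, kd = 20.0, 9.0      # PD gains: closed loop (s+4)(s+5) once m_hat == m_true
x, v = 0.02, 0.0        # initial gap error [m] and velocity [m/s]
dt = 0.001

for _ in range(5000):   # simulate 5 s with explicit Euler steps
    u = m_hat * (g - kp * x - kd * v)       # control force [N]
    a = u / m_true - g                      # plant acceleration [m/s^2]
    if abs(a + g) > 1e-6:                   # adapt: m = u / (a + g), filtered
        m_hat += 0.5 * (u / (a + g) - m_hat)
    x += dt * v
    v += dt * a

print(round(m_hat, 1), abs(x) < 1e-3)       # estimate converges, error vanishes
```

With the estimator disabled (m_hat fixed at 100 kg), the same loop settles at a large steady-state gap error instead of zero, which is the degradation the abstract describes.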

  18. Space-Time Convolutional Codes over Finite Fields and Rings for Systems with Large Diversity Order

    Directory of Open Access Journals (Sweden)

    B. F. Uchôa-Filho

    2008-06-01

We propose a convolutional encoder over ℤ_{p^k}, the finite ring of integers modulo p^k, where p is a prime number and k is any positive integer, to generate a space-time convolutional code (STCC). Under this structure, we prove three properties related to the generator matrix of the convolutional code that can be used to simplify the code search procedure for STCCs over ℤ_{p^k}. Some STCCs of large diversity order (≥4) designed under the trace criterion for n = 2, 3, and 4 transmit antennas are presented for various PSK signal constellations.
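The ring setting can be made concrete with a toy rate-1/2 convolutional encoder over ℤ_4 (p = 2, k = 2): each output symbol is a mod-4 linear combination of the current input symbol and the shift-register contents. The generator polynomials below are arbitrary illustrative choices, not the codes designed in the paper:

```python
# Toy convolutional encoding over the ring Z_{p^k} (here Z_4).
def conv_encode(symbols, generators, modulus):
    """Rate-1/len(generators) convolutional encoder over Z_modulus."""
    memory = len(generators[0]) - 1
    state = [0] * memory                      # shift register, most recent first
    out = []
    for s in symbols:
        window = [s] + state                  # current input plus memory
        for g in generators:                  # one output stream per generator
            out.append(sum(gi * wi for gi, wi in zip(g, window)) % modulus)
        state = [s] + state[:-1]              # shift the register
    return out

g = [(1, 2, 1), (1, 1, 3)]                    # two memory-2 generators over Z_4
codeword = conv_encode([1, 3, 2, 0], g, modulus=4)
print(codeword)  # -> [1, 1, 1, 0, 1, 0, 3, 3]
```

For a space-time code, consecutive output symbols would be mapped to PSK points and distributed across the transmit antennas; here only the ring-encoding step is shown.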

  19. 45 CFR 1170.32 - Existing facilities.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 3 2010-10-01 2010-10-01 false Existing facilities. 1170.32 Section 1170.32... ASSISTED PROGRAMS OR ACTIVITIES Accessibility § 1170.32 Existing facilities. (a) Accessibility. A recipient... require a recipient to make each of its existing facilities or every part of a facility accessible to and...

  20. 45 CFR 605.22 - Existing facilities.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 3 2010-10-01 2010-10-01 false Existing facilities. 605.22 Section 605.22 Public... Accessibility § 605.22 Existing facilities. (a) Accessibility. A recipient shall operate each program or... existing facilities or every part of a facility accessible to and usable by qualified handicapped persons...

  1. 45 CFR 1151.22 - Existing facilities.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 3 2010-10-01 2010-10-01 false Existing facilities. 1151.22 Section 1151.22... Prohibited Accessibility § 1151.22 Existing facilities. (a) A recipient shall operate each program or... make each of its existing facilities or every part of a facility accessible to and usable by...

  2. Development of three-dimensional phasic-velocity distribution measurement in a large-diameter pipe

    International Nuclear Information System (INIS)

    Kanai, Taizo; Furuya, Masahiro; Arai, Takahiro; Shirakawa, Kenetsu

    2011-01-01

A wire-mesh sensor (WMS) can acquire a void fraction distribution at high temporal and spatial resolution and can also estimate the velocity of a vertically rising flow by investigating the signal time delay of the upstream WMS relative to the downstream one. Previously, one-dimensional velocity was estimated by using the same point of each WMS at a temporal resolution of 1.0 - 5.0 s. The authors propose to extend this time series analysis to estimate the multi-dimensional velocity profile via cross-correlation analysis between a point of the upstream WMS and multiple points downstream. Bubbles behave in various ways according to size, which is used to classify them into groups via wavelet analysis before the cross-correlation analysis. This method was verified with air-water straight and swirl flows within a large-diameter vertical pipe. The results revealed that for rising straight and swirl flows, large-scale bubbles tend to move to the center, while small bubbles are pushed to the outside or sucked into the space where the large bubbles existed. Moreover, it is found that this method can estimate the rotational velocity component of the swirl flow as well as measure the multi-dimensional velocity vector at a high temporal resolution of 0.2 s. (author)
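The time-of-flight idea behind the WMS velocimetry is that the downstream signal is approximately a delayed copy of the upstream one, so the lag maximizing their cross-correlation gives the transit time and hence the velocity. A minimal sketch with a synthetic fluctuation signal standing in for the measured void fraction (sensor spacing, sampling rate, and delay are invented):

```python
# Cross-correlation time-delay estimation between two sensor signals.
import random

def best_lag(upstream, downstream, max_lag):
    """Return the lag (in samples) maximizing the raw cross-correlation."""
    def corr(lag):
        return sum(upstream[i] * downstream[i + lag]
                   for i in range(len(upstream) - lag))
    return max(range(max_lag + 1), key=corr)

random.seed(7)
fs = 1000.0                                    # sampling frequency [Hz]
spacing = 0.05                                 # axial sensor spacing [m]
true_delay = 25                                # samples, i.e. 25 ms transit time
upstream = [random.gauss(0, 1) for _ in range(600)]
downstream = [0.0] * true_delay + upstream[:-true_delay]

lag = best_lag(upstream, downstream, max_lag=100)
velocity = spacing / (lag / fs)                # 0.05 m / 0.025 s = 2.0 m/s
print(lag, velocity)
```

The paper's method applies this between one upstream point and many downstream points (after wavelet-based bubble-size classification), which is what yields a multi-dimensional velocity profile rather than a single axial value.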

  3. Existence of Three Positive Solutions to Some p-Laplacian Boundary Value Problems

    Directory of Open Access Journals (Sweden)

    Moulay Rchid Sidi Ammi

    2013-01-01

We obtain, by using the Leggett-Williams fixed point theorem, sufficient conditions that ensure the existence of at least three positive solutions to some p-Laplacian boundary value problems on time scales.

  4. Existence and non-existence of solutions for a p(x-biharmonic problem

    Directory of Open Access Journals (Sweden)

    Ghasem A. Afrouzi

    2015-06-01

In this article, we study the following problem with Navier boundary conditions $$\displaylines{ \Delta (|\Delta u|^{p(x)-2}\Delta u)+|u|^{p(x)-2}u =\lambda |u|^{q(x)-2}u +\mu|u|^{\gamma(x)-2}u\quad \text{in } \Omega,\cr u=\Delta u=0 \quad \text{on } \partial\Omega, }$$ where $\Omega$ is a bounded domain in $\mathbb{R}^{N}$ with smooth boundary $\partial \Omega$, $N\geq1$, $p(x)$, $q(x)$ and $\gamma(x)$ are continuous functions on $\overline{\Omega}$, and $\lambda$ and $\mu$ are parameters. Using variational methods, we establish some existence and non-existence results of solutions for this problem.

  5. Large degeneracy of excited hadrons and quark models

    International Nuclear Information System (INIS)

    Bicudo, P.

    2007-01-01

    The pattern of a large approximate degeneracy of the excited hadron spectra (larger than the chiral restoration degeneracy) is present in the recent experimental report of Bugg. Here we try to model this degeneracy with state of the art quark models. We review how the Coulomb Gauge chiral invariant and confining Bethe-Salpeter equation simplifies in the case of very excited quark-antiquark mesons, including angular or radial excitations, to a Salpeter equation with an ultrarelativistic kinetic energy with the spin-independent part of the potential. The resulting meson spectrum is solved, and the excited chiral restoration is recovered, for all mesons with J>0. Applying the ultrarelativistic simplification to a linear equal-time potential, linear Regge trajectories are obtained, for both angular and radial excitations. The spectrum is also compared with the semiclassical Bohr-Sommerfeld quantization relation. However, the excited angular and radial spectra do not coincide exactly. We then search, with the classical Bertrand theorem, for central potentials producing always classical closed orbits with the ultrarelativistic kinetic energy. We find that no such potential exists, and this implies that no exact larger degeneracy can be obtained in our equal-time framework, with a single principal quantum number comparable to the nonrelativistic Coulomb or harmonic oscillator potentials. Nevertheless we find it plausible that the large experimental approximate degeneracy will be modeled in the future by quark models beyond the present state of the art

  6. 34 CFR 104.22 - Existing facilities.

    Science.gov (United States)

    2010-07-01

    ... 34 Education 1 2010-07-01 2010-07-01 false Existing facilities. 104.22 Section 104.22 Education... Accessibility § 104.22 Existing facilities. (a) Accessibility. A recipient shall operate its program or activity.... This paragraph does not require a recipient to make each of its existing facilities or every part of a...

  7. Existence, regularity and representation of solutions of time fractional wave equations

    Directory of Open Access Journals (Sweden)

    Valentin Keyantuo

    2017-09-01

    Full Text Available We study the solvability of the fractional order inhomogeneous Cauchy problem $$ \mathbb{D}_t^\alpha u(t)=Au(t)+f(t), \quad t>0,\;1<\alpha\le 2, $$ where $A$ is a closed linear operator in some Banach space $X$ and $f:[0,\infty)\to X$ a given function. Operator families associated with this problem are defined and their regularity properties are investigated. In the case where $A$ is the generator of a $\beta$-times integrated cosine family $(C_\beta(t))$, we derive explicit representations of mild and classical solutions of the above problem in terms of the integrated cosine family. We include applications to elliptic operators with Dirichlet, Neumann or Robin type boundary conditions on $L^p$-spaces and on the space of continuous functions.
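    For orientation, in the classical limit $\alpha = 2$ the problem reduces to the abstract wave equation, whose mild solution is given by the standard cosine-family formula; this is a textbook fact of cosine operator function theory, not a result specific to this paper:

    ```latex
    % Abstract wave equation: u''(t) = A u(t) + f(t), u(0) = x, u'(0) = y.
    % If A generates a cosine family (C(t)) with associated sine family
    % S(t) = \int_0^t C(s)\,ds, the mild solution is
    u(t) = C(t)x + S(t)y + \int_0^t S(t-s)\,f(s)\,ds .
    ```

    The record's representations generalize this formula to $1<\alpha<2$ via the $\beta$-times integrated cosine family.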

  8. Tracing the trajectory of skill learning with a very large sample of online game players.

    Science.gov (United States)

    Stafford, Tom; Dewar, Michael

    2014-02-01

    In the present study, we analyzed data from a very large sample (N = 854,064) of players of an online game involving rapid perception, decision making, and motor responding. Use of game data allowed us to connect, for the first time, rich details of training history with measures of performance from participants engaged for a sustained amount of time in effortful practice. We showed that lawful relations exist between practice amount and subsequent performance, and between practice spacing and subsequent performance. Our methodology allowed an in situ confirmation of results long established in the experimental literature on skill acquisition. Additionally, we showed that greater initial variation in performance is linked to higher subsequent performance, a result we link to the exploration/exploitation trade-off from the computational framework of reinforcement learning. We discuss the benefits and opportunities of behavioral data sets with very large sample sizes and suggest that this approach could be particularly fecund for studies of skill acquisition.

  9. Nonequilibrium Dynamics of Anisotropic Large Spins in the Kondo Regime: Time-Dependent Numerical Renormalization Group Analysis

    Science.gov (United States)

    Roosen, David; Wegewijs, Maarten R.; Hofstetter, Walter

    2008-02-01

    We investigate the time-dependent Kondo effect in a single-molecule magnet (SMM) strongly coupled to metallic electrodes. Describing the SMM by a Kondo model with large spin S>1/2, we analyze the underscreening of the local moment and the effect of anisotropy terms on the relaxation dynamics of the magnetization. Underscreening by single-channel Kondo processes leads to a logarithmically slow relaxation, while finite uniaxial anisotropy causes a saturation of the SMM’s magnetization. Additional transverse anisotropy terms induce quantum spin tunneling and a pseudospin-1/2 Kondo effect sensitive to the spin parity.

  10. Integrating existing software toolkits into VO system

    Science.gov (United States)

    Cui, Chenzhou; Zhao, Yong-Heng; Wang, Xiaoqian; Sang, Jian; Luo, Ze

    2004-09-01

    Virtual Observatory (VO) is a collection of interoperating data archives and software tools. Taking advantage of the latest information technologies, it aims to provide a data-intensive online research environment for astronomers all around the world. A large number of high-quality astronomical software packages and libraries are powerful and easy to use, and have been widely used by astronomers for many years. Integrating those toolkits into the VO system is a necessary and important task for VO developers. The VO architecture depends heavily on Grid and Web services, so the general VO integration route is "Java Ready - Grid Ready - VO Ready". In this paper, we discuss the importance of VO integration for existing toolkits and the possible solutions. We introduce two efforts in this field from the China-VO project, "gImageMagick" and "Galactic abundance gradients statistical research under grid environment". We also discuss what additional work should be done to convert a Grid service into a VO service.

  11. Diamond detector time resolution for large angle tracks

    Energy Technology Data Exchange (ETDEWEB)

    Chiodini, G., E-mail: chiodini@le.infn.it [INFN - Sezione di Lecce (Italy); Fiore, G.; Perrino, R. [INFN - Sezione di Lecce (Italy); Pinto, C.; Spagnolo, S. [INFN - Sezione di Lecce (Italy); Dip. di Matematica e Fisica “Ennio De Giorgi”, Uni. del Salento (Italy)

    2015-10-01

    The applications that have stimulated the greatest interest in diamond sensors involve detectors close to particle beams, and therefore environments with high radiation levels (beam monitoring, luminosity measurement, detection of primary and secondary interaction vertices). Our aim is to extend the studies performed so far by developing the technical advances needed to prove the competitiveness of this technology in terms of time resolution with respect to more conventional technologies, which do not guarantee the required tolerance to high radiation doses. Toward these goals, measurements of diamond detector time resolution with tracks incident at different angles are discussed. In particular, preliminary test-beam results obtained with 5 GeV electrons and polycrystalline diamond strip detectors are shown.

  12. Free time, play and game

    OpenAIRE

    Božović Ratko R.

    2008-01-01

    Free time and play are mutually dependent categories that are always realized together. We either play because we have free time, or we have free time because we play (E. Fink). Play, whether children's play, artistic play or a spontaneous sports game (excluding professional sports), most fully complements human existence and thereby realizes free time as time in freedom and freedom of time. Therefore, free time exists and is most prominent in play. Moreover, one game releases it...

  13. Geospatial Optimization of Siting Large-Scale Solar Projects

    Energy Technology Data Exchange (ETDEWEB)

    Macknick, Jordan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Quinby, Ted [National Renewable Energy Lab. (NREL), Golden, CO (United States); Caulfield, Emmet [Stanford Univ., CA (United States); Gerritsen, Margot [Stanford Univ., CA (United States); Diffendorfer, Jay [U.S. Geological Survey, Boulder, CO (United States); Haines, Seth [U.S. Geological Survey, Boulder, CO (United States)

    2014-03-01

    Recent policy and economic conditions have encouraged a renewed interest in developing large-scale solar projects in the U.S. Southwest. However, siting large-scale solar projects is complex. In addition to the quality of the solar resource, solar developers must take into consideration many environmental, social, and economic factors when evaluating a potential site. This report describes a proof-of-concept, Web-based Geographical Information Systems (GIS) tool that evaluates multiple user-defined criteria in an optimization algorithm to inform discussions and decisions regarding the locations of utility-scale solar projects. Existing siting recommendations for large-scale solar projects from governmental and non-governmental organizations are not consistent with each other, are often not transparent in methods, and do not take into consideration the differing priorities of stakeholders. The siting assistance GIS tool we have developed improves upon the existing siting guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.

  14. Summary of existing superconducting magnet experience and its relevance to the safety of fusion magnet

    International Nuclear Information System (INIS)

    Hsieh, S.Y.; Allinger, J.; Danby, G.; Keane, J.; Powell, J.; Prodell, A.

    1975-01-01

    A comprehensive summary of experience with over twenty superconducting magnet systems has been collected through visits to and discussions about existing facilities including, for example, the bubble chamber magnets at Brookhaven National Laboratory, Argonne National Laboratory and Fermi National Accelerator Laboratory, and the large superconducting spectrometer at Stanford Linear Accelerator Center. This summary includes data relating to parameters of these magnets, magnet protection methods, and operating experiences. The information received is organized and presented in the context of its relevance to the safe operation of future, very large superconducting magnet systems for fusion power plants

  15. SCARDEC: a new technique for the rapid determination of seismic moment magnitude, focal mechanism and source time functions for large earthquakes using body-wave deconvolution

    Science.gov (United States)

    Vallée, M.; Charléty, J.; Ferreira, A. M. G.; Delouis, B.; Vergoz, J.

    2011-01-01

    Accurate and fast magnitude determination for large, shallow earthquakes is of key importance for post-seismic response and tsunami alert purposes. When no local real-time data are available, which is today the case for most subduction earthquakes, the first information comes from teleseismic body waves. Standard body-wave methods give accurate magnitudes for earthquakes up to Mw = 7-7.5. For larger earthquakes, the analysis is more complex because of the non-validity of the point-source approximation and the interaction between direct and surface-reflected phases. The latter effect acts as a strong high-pass filter, which complicates the magnitude determination. Here we propose an automated deconvolutive approach, which does not impose any simplifying assumptions about the rupture process and is thus well adapted to large earthquakes. We first determine the source duration based on the length of the high-frequency (1-3 Hz) signal content. The deconvolution of synthetic double-couple point-source signals—depending on the four earthquake parameters strike, dip, rake and depth—from the windowed real body-wave signals (including P, PcP, PP, SH and ScS waves) gives the apparent source time function (STF). We search for the optimal combination of these four parameters that respects the physical features of any STF: causality, positivity and stability of the seismic moment at all stations. Once this combination is retrieved, the integration of the STFs directly gives the moment magnitude. We apply this new approach, referred to as the SCARDEC method, to most of the major subduction earthquakes in the period 1990-2010. Magnitude differences between the Global Centroid Moment Tensor (CMT) and the SCARDEC method may reach 0.2, but the values are consistent if we take into account that the Global CMT solutions for large, shallow earthquakes suffer from a known trade-off between dip and seismic moment. We show by modelling long-period surface waves of these events that
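    The last step, converting the scalar seismic moment M0 (obtained by integrating the STF) into a moment magnitude, uses the conventional relation Mw = (2/3)(log10 M0 - 9.1) with M0 in N·m. This is the standard moment-magnitude definition, not a formula specific to SCARDEC; a minimal sketch:

    ```python
    import math

    def moment_magnitude(m0: float) -> float:
        """Moment magnitude Mw from the scalar seismic moment M0 (in N*m),
        via the standard relation Mw = (2/3) * (log10(M0) - 9.1)."""
        return (2.0 / 3.0) * (math.log10(m0) - 9.1)

    # A scalar moment of about 3.98e22 N*m corresponds to Mw ~ 9.0.
    print(round(moment_magnitude(3.98e22), 2))   # -> 9.0
    ```

    The logarithmic scale is why a 0.2-magnitude discrepancy, as quoted for the CMT comparison, corresponds to roughly a factor of two in seismic moment.
    
    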

  16. Evolution of scaling emergence in large-scale spatial epidemic spreading.

    Science.gov (United States)

    Wang, Lin; Li, Xiang; Zhang, Yi-Qing; Zhang, Yan; Zhang, Kan

    2011-01-01

    Zipf's law and Heaps' law are two representative scaling concepts, which play a significant role in the study of complexity science. The coexistence of Zipf's law and Heaps' law motivates different understandings of the dependence between these two scalings, which has hardly been clarified. In this article, we observe an evolution process of the scalings: Zipf's law and Heaps' law naturally coexist at the initial time, while a crossover comes with the emergence of their inconsistency at later times, before reaching a stable state in which Heaps' law still holds while strict Zipf's law disappears. These findings are illustrated with a scenario of large-scale spatial epidemic spreading, and the empirical results of pandemic disease support a universal analysis of the relation between the two laws regardless of the biological details of the disease. Employing United States domestic air transportation and demographic data to construct a metapopulation model simulating pandemic spread at the U.S. country level, we uncover that the broad heterogeneity of the infrastructure plays a key role in the evolution of scaling emergence. The analysis of large-scale spatial epidemic spreading helps understand the temporal evolution of scalings, indicating that the coexistence of Zipf's law and Heaps' law depends on the collective dynamics of epidemic processes, and the heterogeneity of epidemic spread indicates the significance of performing targeted containment strategies at the early stage of a pandemic disease.
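    As background, the connection between the two laws is easy to reproduce in a toy setting that has nothing to do with the record's metapopulation model: drawing tokens i.i.d. from a Zipf (power-law rank) distribution yields sublinear, Heaps-type growth in the number of distinct items seen. A minimal sketch, with all function names and parameter values invented for illustration:

    ```python
    import itertools
    import random

    random.seed(42)

    def make_zipf_sampler(vocab_size: int, exponent: float):
        """Return a function sampling ranks 1..vocab_size with P(r) ~ r**(-exponent)."""
        population = list(range(1, vocab_size + 1))
        weights = [r ** (-exponent) for r in population]
        cum_weights = list(itertools.accumulate(weights))
        return lambda: random.choices(population, cum_weights=cum_weights)[0]

    def heaps_curve(n_tokens, vocab_size=100_000, exponent=1.5,
                    checkpoints=(100, 1000, 10000)):
        """Distinct types seen after the given token counts (Heaps-type growth)."""
        sample = make_zipf_sampler(vocab_size, exponent)
        seen, curve = set(), {}
        for i in range(1, n_tokens + 1):
            seen.add(sample())
            if i in checkpoints:
                curve[i] = len(seen)
        return curve

    curve = heaps_curve(10_000)
    # Distinct-type counts grow sublinearly: a 10x increase in tokens
    # yields far less than a 10x increase in distinct types.
    print(curve)
    ```

    The record's point is subtler than this stationary toy: in the epidemic setting the sampling process itself evolves, so the two laws' coexistence changes over time.
    
    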

  17. Introducing modified TypeScript in an existing framework to improve error handling

    OpenAIRE

    Minder, Patrik

    2016-01-01

    Error messages in compilers are a topic that is often overlooked. The quality of the messages can have a big impact on development time and ease of learning. Another method used to speed up development is to build a domain-specific language (DSL). This thesis migrates an existing framework to TypeScript in order to speed up development with compile-time error handling. Alternative methods for implementing a DSL are evaluated based on how they affect the ability to generate good error mes...

  18. The future of large old trees in urban landscapes.

    Science.gov (United States)

    Le Roux, Darren S; Ikin, Karen; Lindenmayer, David B; Manning, Adrian D; Gibbons, Philip

    2014-01-01

    Large old trees are disproportionate providers of structural elements (e.g. hollows, coarse woody debris), which are crucial habitat resources for many species. The decline of large old trees in modified landscapes is of global conservation concern. Once large old trees are removed, they are difficult to replace in the short term due to the typically prolonged time periods needed for trees to mature (i.e. centuries). Few studies have investigated the decline of large old trees in urban landscapes. Using a simulation model, we predicted the future availability of native hollow-bearing trees (a surrogate for large old trees) in an expanding city in southeastern Australia. In urban greenspace, we predicted that the number of hollow-bearing trees is likely to decline by 87% over 300 years under existing management practices. Under a worst-case scenario, hollow-bearing trees may be completely lost within 115 years. Conversely, we predicted that the number of hollow-bearing trees will likely remain stable in semi-natural nature reserves. Sensitivity analysis revealed that the number of hollow-bearing trees perpetuated in urban greenspace over the long term is most sensitive to: (1) the maximum standing life of trees; (2) the number of regenerating seedlings per hectare; and (3) the rate of hollow formation. We tested the efficacy of alternative urban management strategies and found that the only way to arrest the decline of large old trees is a collective management strategy that ensures: (1) trees remain standing for at least 40% longer than currently tolerated lifespans; (2) the number of seedlings established is increased by at least 60%; and (3) the formation of habitat structures provided by large old trees is accelerated by at least 30% (e.g. with artificial structures) to compensate for short-term deficits in habitat resources. Immediate implementation of these recommendations is needed to avert long-term risk to urban biodiversity.
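    The kind of projection described, constant mortality of the standing stock plus a slow pipeline of replacements, can be sketched with a toy cohort model. All rates below are invented for illustration and are not the study's calibrated parameters; the sketch only shows why a long maturation lag makes the stock decline for decades even with steady recruitment:

    ```python
    def project_hollow_trees(n0, years, mortality, recruits_per_year, lag):
        """Toy cohort projection: existing hollow-bearing trees die at a constant
        annual rate, while new recruits take `lag` years to develop hollows."""
        pipeline = [0.0] * lag              # immature cohorts, oldest last
        n = float(n0)
        for _ in range(years):
            matured = pipeline.pop()        # oldest cohort finally forms hollows
            pipeline.insert(0, recruits_per_year)
            n = n * (1 - mortality) + matured
        return n

    # With slow recruitment the stock collapses over three centuries;
    # boosting recruitment (or shortening the lag) arrests the decline.
    print(round(project_hollow_trees(100, 300, 0.02, 0.5, 150), 1))
    print(round(project_hollow_trees(100, 300, 0.02, 2.0, 150), 1))
    ```

    The study's sensitivity results follow the same logic: standing life (mortality), seedling numbers (recruitment) and hollow-formation rate (the lag) are exactly the three levers in such a model.
    
    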

  19. Late-time dynamics of rapidly rotating black holes

    International Nuclear Information System (INIS)

    Glampedakis, K.; Andersson, N.

    2001-01-01

    We study the late-time behaviour of a dynamically perturbed rapidly rotating black hole. Considering an extreme Kerr black hole, we show that the large number of virtually undamped quasinormal modes (that exist for nonzero values of the azimuthal eigenvalue m) combine in such a way that the field (as observed at infinity) oscillates with an amplitude that decays as 1/t at late times. For a near extreme black hole, these modes, collectively, give rise to an exponentially decaying field which, however, is considerably 'long-lived'. Our analytic results are verified using numerical time-evolutions of the Teukolsky equation. Moreover, we argue that the physical mechanism behind the observed behaviour is the presence of a 'superradiance resonance cavity' immediately outside the black hole. We present this new feature in detail, and discuss whether it may be relevant for astrophysical black holes. (author)

  20. Volume independence in large Nc QCD-like gauge theories

    International Nuclear Information System (INIS)

    Kovtun, Pavel; Uensal, Mithat; Yaffe, Laurence G.

    2007-01-01

    Volume independence in large N_c gauge theories may be viewed as a generalized orbifold equivalence. The reduction to zero volume (or Eguchi-Kawai reduction) is a special case of this equivalence. So is temperature independence in confining phases. A natural generalization concerns volume independence in the 'theory space' of quiver gauge theories. In pure Yang-Mills theory, the failure of volume independence for sufficiently small volumes (at weak coupling), due to spontaneous breaking of center symmetry, together with its validity above a critical size, nicely illustrates the symmetry realization conditions which are both necessary and sufficient for large N_c orbifold equivalence. The existence of a minimal size below which volume independence fails also applies to Yang-Mills theory with antisymmetric representation fermions [QCD(AS)]. However, in Yang-Mills theory with adjoint representation fermions [QCD(Adj)], endowed with periodic boundary conditions, volume independence remains valid down to arbitrarily small size. In sufficiently large volumes, QCD(Adj) and QCD(AS) have a large N_c 'orientifold' equivalence, provided charge conjugation symmetry is unbroken in the latter theory. Therefore, via a combined orbifold-orientifold mapping, a well-defined large N_c equivalence exists between QCD(AS) in large, or infinite, volume and QCD(Adj) in arbitrarily small volume. Since asymptotically free gauge theories, such as QCD(Adj), are much easier to study (analytically or numerically) in small volume, this equivalence should allow greater understanding of large N_c QCD in infinite volume.

  1. Moisture monitoring in large diameter boreholes

    International Nuclear Information System (INIS)

    Tyler, S.

    1985-01-01

    The results of both laboratory and field experiments indicate that the neutron moisture gauge traditionally used in soil physics experiments can be extended for use in large diameter (up to 15 cm) steel-cased boreholes with excellent results. This application will permit existing saturated zone monitoring wells to be used for unsaturated zone monitoring of recharge, redistribution and leak detection from waste disposal facilities. Its applicability to large diameter cased wells also gives the soil physicist and ground-water hydrologist a new set of monitoring points in the unsaturated zone to study recharge and aquifer properties. 6 refs., 6 figs., 2 tabs

  2. A new asynchronous parallel algorithm for inferring large-scale gene regulatory networks.

    Directory of Open Access Journals (Sweden)

    Xiangyun Xiao

    Full Text Available The reconstruction of gene regulatory networks (GRNs) from high-throughput experimental data has been considered one of the most important issues in systems biology research. With the development of high-throughput technology and the complexity of biological problems, we need to reconstruct GRNs that contain thousands of genes. However, when many existing algorithms are used to handle these large-scale problems, they will encounter two important issues: low accuracy and high computational cost. To overcome these difficulties, the main goal of this study is to design an effective parallel algorithm to infer large-scale GRNs based on high-performance parallel computing environments. In this study, we proposed a novel asynchronous parallel framework to improve the accuracy and lower the time complexity of large-scale GRN inference by combining splitting technology and ordinary differential equation (ODE)-based optimization. The presented algorithm uses the sparsity and modularity of GRNs to split whole large-scale GRNs into many small-scale modular subnetworks. Through the ODE-based optimization of all subnetworks in parallel and their asynchronous communications, we can easily obtain the parameters of the whole network. To test the performance of the proposed approach, we used well-known benchmark datasets from the Dialogue for Reverse Engineering Assessments and Methods challenge (DREAM), the experimentally determined GRN of Escherichia coli and one published dataset that contains more than 10 thousand genes to compare the proposed approach with several popular algorithms on the same high-performance computing environments in terms of both accuracy and time complexity. The numerical results demonstrate that our parallel algorithm exhibits obvious superiority in inferring large-scale GRNs.

  3. A new asynchronous parallel algorithm for inferring large-scale gene regulatory networks.

    Science.gov (United States)

    Xiao, Xiangyun; Zhang, Wei; Zou, Xiufen

    2015-01-01

    The reconstruction of gene regulatory networks (GRNs) from high-throughput experimental data has been considered one of the most important issues in systems biology research. With the development of high-throughput technology and the complexity of biological problems, we need to reconstruct GRNs that contain thousands of genes. However, when many existing algorithms are used to handle these large-scale problems, they will encounter two important issues: low accuracy and high computational cost. To overcome these difficulties, the main goal of this study is to design an effective parallel algorithm to infer large-scale GRNs based on high-performance parallel computing environments. In this study, we proposed a novel asynchronous parallel framework to improve the accuracy and lower the time complexity of large-scale GRN inference by combining splitting technology and ordinary differential equation (ODE)-based optimization. The presented algorithm uses the sparsity and modularity of GRNs to split whole large-scale GRNs into many small-scale modular subnetworks. Through the ODE-based optimization of all subnetworks in parallel and their asynchronous communications, we can easily obtain the parameters of the whole network. To test the performance of the proposed approach, we used well-known benchmark datasets from Dialogue for Reverse Engineering Assessments and Methods challenge (DREAM), experimentally determined GRN of Escherichia coli and one published dataset that contains more than 10 thousand genes to compare the proposed approach with several popular algorithms on the same high-performance computing environments in terms of both accuracy and time complexity. The numerical results demonstrate that our parallel algorithm exhibits obvious superiority in inferring large-scale GRNs.
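    The splitting idea, fitting each modular subnetwork's ODE parameters independently and in parallel, can be illustrated with a deliberately tiny sketch. This is not the authors' algorithm: the module structure, the single-gene decay model and all names below are invented for illustration. Each "module" here is one gene obeying dx/dt = -k x, and k is recovered per module from finite differences, with modules fitted concurrently:

    ```python
    from concurrent.futures import ThreadPoolExecutor

    DT = 0.01  # Euler time step

    def simulate(k: float, steps: int = 100, x0: float = 1.0):
        """Euler simulation of dx/dt = -k*x, standing in for one module's data."""
        xs = [x0]
        for _ in range(steps):
            xs.append(xs[-1] + DT * (-k * xs[-1]))
        return xs

    def fit_module(xs):
        """Estimate k by least squares on finite differences:
        (x[t+1]-x[t])/DT ~ -k*x[t]  =>  k = -sum(dx*x) / (DT*sum(x*x))."""
        num = sum((xs[i + 1] - xs[i]) * xs[i] for i in range(len(xs) - 1))
        den = sum(x * x for x in xs[:-1])
        return -num / (DT * den)

    true_ks = [0.5, 1.0, 2.0]                  # one decay rate per module
    data = [simulate(k) for k in true_ks]      # stand-in 'experimental' trajectories

    # Fit all modules in parallel; the fits are independent, so no coordination
    # is needed -- this is the payoff of splitting a modular network.
    with ThreadPoolExecutor() as pool:
        estimates = list(pool.map(fit_module, data))

    print([round(k, 3) for k in estimates])    # -> [0.5, 1.0, 2.0]
    ```

    The real algorithm couples the subnetwork fits through asynchronous communication of shared-gene parameters; the sketch omits that step entirely.
    
    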

  4. Field Observations of Precursors to Large Earthquakes: Interpreting and Verifying Their Causes

    Science.gov (United States)

    Suyehiro, K.; Sacks, S. I.; Rydelek, P. A.; Smith, D. E.; Takanami, T.

    2017-12-01

    Many reports exist of precursory anomalies before large earthquakes. However, it has proven elusive to identify these signals before the events actually occur; they often become evident only in retrospect. A probabilistic cellular automaton model (Sacks and Rydelek, 1995) explains many statistical and dynamic features of earthquakes, including the observed b-value decrease towards a large earthquake and the effect of a small stress perturbation on the earthquake occurrence pattern. It also reproduces dynamic characteristics of individual earthquake ruptures. This model is useful for gaining insight into the causal relationships behind such complexities. For example, some reported cases of background seismicity quiescence before a main shock, seen only for events larger than M = 3-4 on a timescale of years, can be reproduced by this model if only a small fraction (about 2%) of the component cells are strengthened by a small amount. Such an enhancement may physically occur if a tiny, scattered portion of the seismogenic crust undergoes dilatancy hardening. Whether such a process occurs will depend on fluid migration and microcrack development under tectonic loading. Eventual large earthquake faulting will be promoted by the intrusion of excess water from surrounding rocks into a zone capable of cascading slip over a large area. We propose that this process manifests itself at the surface as the hydrologic, geochemical, or macroscopic anomalies for which so many reports exist. We infer from seismicity that the eastern Nankai Trough (Tokai) area of central Japan is already in the stage of M-dependent seismic quiescence. Therefore, we advocate that new observations sensitive to water migration in Tokai be implemented; in particular, vertical-component strain, gravity, and/or electrical conductivity should be observed for verification.

  5. Existence of global attractor for the Trojan Y Chromosome model

    Directory of Open Access Journals (Sweden)

    Xiaopeng Zhao

    2012-04-01

    Full Text Available This paper is concerned with the long time behavior of solutions of the equations derived from the Trojan Y Chromosome (TYC) model with spatial spread. Based on regularity estimates for the semigroups and the classical existence theorem for global attractors, we prove that these equations possess a global attractor in the space $(H^k(\Omega))^4$ $(k\geq 0)$.

  6. Fabrication experiments for large helix heat exchangers

    International Nuclear Information System (INIS)

    Burgsmueller, P.

    1978-01-01

    The helical tube has gained increasing attention as a heat transfer element for various kinds of heat exchangers over the last decade. Regardless of reactor type and heat transport medium, nuclear steam generators of the helix type are now in operation, installation, fabrication or in the project phase. As a rule, projects are based on the extrapolation of existing technologies. In the particular case of steam generators for HTGR power stations, however, existing experience is with steam generators of up to about 2 m diameter, whereas several projects involve units more than twice as large. For this reason it was felt that a fabrication experiment was necessary in order to verify the feasibility of modern steam generator designs. A test rig was erected in the SULZER steam generator shops at Mantes, France, and skilled personnel and conventional production tools were employed in conducting experiments relating to the coiling, handling and threading of large helices. (Auth.)

  7. Existence of relativistic stars in f(R) gravity

    International Nuclear Information System (INIS)

    Upadhye, Amol; Hu, Wayne

    2009-01-01

    We refute recent claims in the literature that stars with relativistically deep potentials cannot exist in f(R) gravity. Numerical examples of stable stars, including relativistic (GM_*/r_* ∼ 0.1), constant-density stars, are studied. As a star is made larger, nonlinear 'chameleon' effects screen much of the star's mass, stabilizing gravity at the stellar center. Furthermore, we show that the onset of this chameleon screening is unrelated to strong gravity. At large central pressures P > ρ/3, f(R) gravity, like general relativity, does have a maximum gravitational potential, but at a slightly smaller value: GM_*/r_*|_max = 0.345 < 4/9 for constant density and one choice of parameters. This difference is associated with negative central curvature R under general relativity not being accessed in the f(R) model, but does not apply to any known astrophysical object.
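    The 4/9 benchmark quoted above is the classical general-relativistic bound for a uniform-density star, a standard textbook result included here only for context. The central pressure of the interior Schwarzschild solution (geometrized units, $G=c=1$) is

    ```latex
    % Central pressure of a constant-density star of mass M and radius r_*:
    P_c \;=\; \rho\,
    \frac{1 - \sqrt{1 - 2M/r_*}}{\,3\sqrt{1 - 2M/r_*} - 1\,},
    ```

    which diverges when $3\sqrt{1-2M/r_*}=1$, i.e. at $M/r_* = 4/9$; this is why $GM_*/r_* = 4/9$ marks the maximum potential in the GR case, against which the f(R) value 0.345 is compared.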

  8. Large area CMOS image sensors

    International Nuclear Information System (INIS)

    Turchetta, R; Guerrini, N; Sedgwick, I

    2011-01-01

    CMOS image sensors, also known as CMOS Active Pixel Sensors (APS) or Monolithic Active Pixel Sensors (MAPS), are today the dominant imaging devices. They are omnipresent in our daily life as image sensors in cellular phones, web cams, digital cameras, etc. In these applications the pixels can be very small, in the micron range, and the sensors themselves tend to be limited in size. However, many scientific applications, like particle or X-ray detection, require large formats, often with large pixels, as well as other specific performance, like low noise, radiation hardness or very fast readout. The sensors are also required to be sensitive to a broad spectrum of radiation: photons from the silicon cut-off in the IR down to UV and X- and gamma-rays through the visible spectrum, as well as charged particles. This requirement calls for modifications of the substrate to provide optimized sensitivity. This paper reviews existing CMOS image sensors, whose size can be as large as a single CMOS wafer, and analyses the technical requirements and specific challenges of large format CMOS image sensors.

  9. Tri-track: free software for large-scale particle tracking.

    Science.gov (United States)

    Vallotton, Pascal; Olivier, Sandra

    2013-04-01

    The ability to correctly track objects in time-lapse sequences is important in many applications of microscopy. Individual object motions typically display a level of dynamic regularity reflecting the existence of an underlying physics or biology. Best results are obtained when this local information is exploited. Additionally, if the particle number is known to be approximately constant, a large number of tracking scenarios may be rejected on the basis that they are not compatible with a known maximum particle velocity. This represents information of a global nature, which should ideally be exploited too. Some time ago, we devised an efficient algorithm that exploited both types of information. The tracking task was reduced to a max-flow min-cost problem instance through a novel graph structure that comprised vertices representing objects from three consecutive image frames. The algorithm is explained here for the first time. A user-friendly implementation is provided, and the specific relaxation mechanism responsible for the method's effectiveness is uncovered. The software is particularly competitive for complex dynamics such as dense antiparallel flows, or in situations where object displacements are considerable. As an application, we characterize a remarkable vortex structure formed by bacteria engaged in interstitial motility.
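    The global constraint mentioned above, rejecting any link that would imply an impossible particle velocity, can be illustrated with a far simpler two-frame linker than Tri-track's three-frame min-cost-flow graph. This sketch is not the Tri-track algorithm, and all names are invented: candidate links beyond a maximum displacement are discarded up front, and the survivors are assigned greedily by distance.

    ```python
    import math

    def link_frames(frame_a, frame_b, max_disp):
        """Greedy two-frame particle linking with a maximum-displacement gate.
        frame_a, frame_b: lists of (x, y) positions. Returns sorted (i, j) links."""
        candidates = []
        for i, (xa, ya) in enumerate(frame_a):
            for j, (xb, yb) in enumerate(frame_b):
                d = math.hypot(xb - xa, yb - ya)
                if d <= max_disp:           # reject physically impossible links
                    candidates.append((d, i, j))
        candidates.sort()                   # try the shortest links first
        used_a, used_b, links = set(), set(), []
        for d, i, j in candidates:
            if i not in used_a and j not in used_b:
                links.append((i, j))
                used_a.add(i)
                used_b.add(j)
        return sorted(links)

    a = [(0.0, 0.0), (10.0, 0.0), (20.0, 0.0)]
    b = [(0.5, 0.1), (10.4, -0.2), (35.0, 0.0)]   # third particle jumped too far
    print(link_frames(a, b, max_disp=2.0))         # -> [(0, 0), (1, 1)]
    ```

    Greedy assignment can make globally suboptimal choices in dense scenes; formulating the same gated problem as min-cost flow, as the record describes, is what guarantees a globally optimal set of tracks.
    
    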

  10. Ecogrid EU - a large scale smart grids demonstration of real time market-based integration of numerous small DER and DR

    DEFF Research Database (Denmark)

    Ding, Yi; Nyeng, Preben; Ostergaard, Jacob

    2012-01-01

    that modern information and communication technology (ICT) and innovative market solutions can enable the operation of a distribution power system with more than 50% renewable energy sources (RES). This will be a major contribution to the European 20-20-20 goals. Furthermore, the proposed Ecogrid EU market......This paper provides an overview of the Ecogrid EU project, which is a large-scale demonstration project on the Danish island Bornholm. It provides Europe a fast track evolution towards smart grid dissemination and deployment in the distribution network. Objective of Ecogrid EU is to illustrate...... will offer the transmission system operator (TSO) additional balancing resources and ancillary services by facilitating the participation of small-scale distributed energy resources (DERs) and small end-consumers into the existing electricity markets. The majority of the 2000 participating residential...

  11. The seismic cycles of large Romanian earthquake: The physical foundation, and the next large earthquake in Vrancea

    International Nuclear Information System (INIS)

    Purcaru, G.

    2002-01-01

    The occurrence patterns of large/great earthquakes at subduction zone interfaces and in-slab are complex in their space-time dynamics, and make even long-term forecasts very difficult. For some favourable cases where a predictive (empirical) law was found, successful predictions were possible (e.g. Aleutians, Kuriles). For the large Romanian events (M > 6.7), occurring in the Vrancea seismic slab below 60 km, Purcaru (1974) first found the law of occurrence time and magnitude: the law of 'quasicycles' and 'supercycles', for large and largest events (M > 7.25), respectively. The quantitative model of Purcaru with these seismic cycles has three time-bands (periods of large earthquakes) per century, discovered using the (however incomplete) earthquake history (1100-1973) of large Vrancea earthquakes for which M was initially estimated (Purcaru, 1974, 1979). Our long-term prediction model is essentially quasi-deterministic: it predicts the time and magnitude uniquely; since it is not strictly deterministic, the forecast is interval-valued. It predicted the next large earthquake for 1980, in the 3rd time-band (1970-1990); the event occurred in 1977 (M7.1, Mw 7.5), so the prediction was successful in the long-term sense. We discuss the unpredicted events of 1986 and 1990. Since the laws are phenomenological, we give their physical foundation based on the large scale of the rupture zone (RZ) and the subscale of the rupture process (RP). First results show that: (1) the 1940 event (h = 122 km) ruptured the lower part of the oceanic slab entirely, along strike and down dip, and similarly the 1977 event ruptured its upper part; (2) the RZs of the 1977 and 1990 events overlap, and the first asperity of the 1977 event was rebroken in 1990. This shows that the size of the events strongly depends on the RZ, on asperity size/strength and thus on the failure stress level (FSL), but not on depth; (3) when the FSL of high-strength (HS) larger zones is critical, the largest events (e.g. 1802, 1940) occur, thus explaining the supercycles (the 1940

  12. Fast time- and frequency-domain finite-element methods for electromagnetic analysis

    Science.gov (United States)

    Lee, Woochan

    Fast electromagnetic analysis in the time and frequency domains is of critical importance to the design of integrated circuits (ICs) and other advanced engineering products and systems. Many IC structures constitute very large-scale modeling and simulation problems, whose size also grows continuously with the advancement of processing technology. This results in numerical problems beyond the reach of even the most powerful existing computational resources. Different from many other engineering problems, the structure of most ICs is special in the sense that its geometry is of Manhattan type and its dielectrics are layered. Hence, it is important to develop structure-aware algorithms that take advantage of these structural specialties to speed up the computation. In addition, among existing time-domain methods, explicit methods can avoid solving a matrix equation. However, their time step is traditionally restricted by the space step to ensure the stability of a time-domain simulation. Making explicit time-domain methods unconditionally stable is therefore important for accelerating the computation. Frequency-domain methods, in turn, have suffered from an indefinite system that makes it difficult for an iterative solution to converge quickly. The first contribution of this work is a fast time-domain finite-element algorithm for the analysis and design of very large-scale on-chip circuits. The structural specialties of on-chip circuits, such as Manhattan geometry and layered permittivity, are preserved in the proposed algorithm. As a result, the large-scale matrix solution encountered in the 3-D circuit analysis is turned into a simple scaling of the solution of a small 1-D matrix, which can be obtained in linear (optimal) complexity with negligible cost. Furthermore, the time step size is not sacrificed, and the total number of time steps to be simulated is also significantly reduced, thus achieving a total cost reduction in CPU time. The second contribution
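The conventional time-step restriction mentioned above can be seen on a toy problem. The sketch below illustrates the classical Courant (CFL) limit for an explicit leapfrog scheme on the 1-D wave equation; it is not the unconditionally stable method of this work, and all parameter values are assumed:

```python
import math

def peak_amplitude(courant, n=50, steps=100):
    """Leapfrog update for the 1-D wave equation u_tt = u_xx with fixed
    ends; courant = dt/dx.  Returns the largest |u| seen during the run."""
    # lowest standing mode plus a tiny high-frequency perturbation,
    # so that the most unstable (Nyquist) mode is actually excited
    u_prev = [math.sin(math.pi * i / (n - 1)) + 1e-6 * (-1) ** i
              for i in range(n)]
    u = u_prev[:]                       # start from rest
    c2 = courant * courant
    peak = max(abs(x) for x in u)
    for _ in range(steps):
        u_next = [0.0] * n
        for i in range(1, n - 1):
            u_next[i] = 2 * u[i] - u_prev[i] + c2 * (u[i + 1] - 2 * u[i] + u[i - 1])
        u_prev, u = u, u_next
        peak = max(peak, max(abs(x) for x in u))
    return peak
```

With `courant = 0.9` the run stays bounded; with `courant = 1.1` the highest spatial mode grows geometrically. Removing exactly this coupling between time step and space step is what unconditionally stable explicit methods aim for.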

  13. Plasma balls in large-N gauge theories and localized black holes

    International Nuclear Information System (INIS)

    Aharony, Ofer; Minwalla, Shiraz; Wiseman, Toby

    2006-01-01

    We argue for the existence of plasma balls, metastable, nearly homogeneous lumps of gluon plasma at just above the deconfinement energy density, in a class of large-N confining gauge theories that undergo first-order deconfinement transitions. Plasma balls decay over a time scale of order N^2 by thermally radiating hadrons at the deconfinement temperature. In gauge theories that have a dual description that is well approximated by a theory of gravity in a warped geometry, we propose that plasma balls map to a family of classically stable finite-energy black holes localized in the IR. We present a conjecture for the qualitative nature of large-mass black holes in such backgrounds and numerically construct these black holes in a particular class of warped geometries. These black holes have novel properties; in particular, their temperature approaches a nonzero constant value at large mass. Black holes dual to plasma balls shrink as they decay by Hawking radiation; towards the end of this process, they resemble ten-dimensional Schwarzschild black holes, which we propose are dual to small plasma balls. Our work may find practical applications in the study of the physics of localized black holes from a dual viewpoint.

  14. Assessing outcomes of large-scale public health interventions in the absence of baseline data using a mixture of Cox and binomial regressions

    Science.gov (United States)

    2014-01-01

    Background Large-scale public health interventions with rapid scale-up are increasingly being implemented worldwide. Such implementation allows for a large target population to be reached in a short period of time. But when the time comes to investigate the effectiveness of these interventions, the rapid scale-up creates several methodological challenges, such as the lack of baseline data and the absence of control groups. One example of such an intervention is Avahan, the India HIV/AIDS initiative of the Bill & Melinda Gates Foundation. One question of interest is the effect of Avahan on condom use by female sex workers with their clients. By retrospectively reconstructing condom use and sex work history from survey data, it is possible to estimate how condom use rates evolve over time. However formal inference about how this rate changes at a given point in calendar time remains challenging. Methods We propose a new statistical procedure based on a mixture of binomial regression and Cox regression. We compare this new method to an existing approach based on generalized estimating equations through simulations and application to Indian data. Results Both methods are unbiased, but the proposed method is more powerful than the existing method, especially when initial condom use is high. When applied to the Indian data, the new method mostly agrees with the existing method, but seems to have corrected some implausible results of the latter in a few districts. We also show how the new method can be used to analyze the data of all districts combined. Conclusions The use of both methods can be recommended for exploratory data analysis. However for formal statistical inference, the new method has better power. PMID:24397563
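The inference problem has a simple core that can be sketched without the paper's machinery: given per-period binomial counts, estimate where in calendar time the success rate changes. The following stdlib-only change-point fit (maximum likelihood over a single step change) is an illustration, not the proposed mixture of Cox and binomial regressions; the data layout is assumed:

```python
import math

def binom_loglik(successes, trials, p):
    # binomial log-likelihood up to a constant
    # (the binomial coefficient cancels in comparisons)
    if p <= 0.0 or p >= 1.0:
        return -math.inf
    return successes * math.log(p) + (trials - successes) * math.log(1.0 - p)

def fit_changepoint(data):
    """data: list of (time, successes, trials) aggregates.
    Returns (t_hat, rate_before, rate_after) maximising the likelihood
    of a single step change in the underlying rate."""
    best, best_ll = None, -math.inf
    times = sorted({t for t, _, _ in data})
    for t_cut in times[1:]:                 # keep data on both sides
        before = [(s, n) for t, s, n in data if t < t_cut]
        after = [(s, n) for t, s, n in data if t >= t_cut]
        p1 = sum(s for s, _ in before) / sum(n for _, n in before)
        p2 = sum(s for s, _ in after) / sum(n for _, n in after)
        ll = (sum(binom_loglik(s, n, p1) for s, n in before)
              + sum(binom_loglik(s, n, p2) for s, n in after))
        if ll > best_ll:
            best, best_ll = (t_cut, p1, p2), ll
    return best
```

The actual method additionally handles retrospectively reconstructed individual histories (hence the Cox component); this sketch only conveys the change-point estimation idea.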

  15. A Java program for LRE-based real-time qPCR that enables large-scale absolute quantification.

    Science.gov (United States)

    Rutledge, Robert G

    2011-03-02

    Linear regression of efficiency (LRE) introduced a new paradigm for real-time qPCR that enables large-scale absolute quantification by eliminating the need for standard curves. Developed through the application of sigmoidal mathematics to SYBR Green I-based assays, target quantity is derived directly from fluorescence readings within the central region of an amplification profile. However, a major challenge of implementing LRE quantification is the labor intensive nature of the analysis. Utilizing the extensive resources that are available for developing Java-based software, the LRE Analyzer was written using the NetBeans IDE, and is built on top of the modular architecture and windowing system provided by the NetBeans Platform. This fully featured desktop application determines the number of target molecules within a sample with little or no intervention by the user, in addition to providing extensive database capabilities. MS Excel is used to import data, allowing LRE quantification to be conducted with any real-time PCR instrument that provides access to the raw fluorescence readings. An extensive help set also provides an in-depth introduction to LRE, in addition to guidelines on how to implement LRE quantification. The LRE Analyzer provides the automated analysis and data storage capabilities required by large-scale qPCR projects wanting to exploit the many advantages of absolute quantification. Foremost is the universal perspective afforded by absolute quantification, which among other attributes, provides the ability to directly compare quantitative data produced by different assays and/or instruments. Furthermore, absolute quantification has important implications for gene expression profiling in that it provides the foundation for comparing transcript quantities produced by any gene with any other gene, within and between samples.
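The central LRE idea, that per-cycle amplification efficiency declines linearly with fluorescence, so that a linear regression of efficiency against fluorescence yields the maximal efficiency (intercept) and the plateau (x-intercept), can be sketched as follows. This is an illustrative model, not the LRE Analyzer's implementation:

```python
def simulate_amplification(f0, e_max, f_max, cycles):
    """Generate fluorescence readings under the LRE model, in which the
    per-cycle amplification efficiency declines linearly as the signal
    approaches the plateau f_max."""
    readings = [f0]
    for _ in range(cycles):
        f = readings[-1]
        readings.append(f * (1.0 + e_max * (1.0 - f / f_max)))
    return readings

def lre_fit(readings):
    """Linear regression of per-cycle efficiency against the fluorescence
    at which it was observed; returns estimated (e_max, f_max) from the
    intercept and x-intercept of the fitted line."""
    pts = [(readings[i - 1], readings[i] / readings[i - 1] - 1.0)
           for i in range(1, len(readings))]
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return intercept, -intercept / slope
```

From the fitted line, the target quantity is then derived by back-extrapolation to cycle zero; that step is omitted here.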

  16. Incremental Frequent Subgraph Mining on Large Evolving Graphs

    KAUST Repository

    Abdelhamid, Ehab; Canim, Mustafa; Sadoghi, Mohammad; Bhatta, Bishwaranjan; Chang, Yuan-Chi; Kalnis, Panos

    2017-01-01

    Many modern applications, such as social networks, utilize large evolving graphs. Mining these graphs using existing techniques is infeasible due to the high computational cost. In this paper, we propose IncGM+, a fast incremental approach to the continuous frequent subgraph mining problem

  17. Track distortion in a micromegas based large prototype of a Time Projection Chamber for the International Linear Collider

    International Nuclear Information System (INIS)

    Bhattacharya, Deb Sankar; Majumdar, Nayana; Sarkar, S.; Bhattacharya, S.; Mukhopadhyay, Supratik; Bhattacharya, P.; Attie, D.; Colas, P.; Ganjour, S.; Bhattacharya, Aparajita

    2016-01-01

    The principal particle tracker at the International Linear Collider (ILC) is planned to be a large Time Projection Chamber (TPC), for which different Micro Pattern Gaseous Detectors (MPGDs) are candidates as the gaseous amplifier. A Micromegas (MM) based TPC can meet the ILC requirement of continuous and precise pattern recognition. Seven MM modules, forming the end-plate of a Large Prototype TPC (LPTPC) installed at DESY, have been tested with a 5 GeV electron beam. Due to the grounded peripheral frame of the MM modules, at low drift fields the electric field lines near the detector edge no longer remain parallel to the TPC axis. This causes signal loss along the boundaries of the MM modules, as well as distortion in the reconstructed tracks. In the presence of a magnetic field, the distorted electric field introduces an ExB effect

  18. Determination of residual oil saturation from time-lapse pulsed neutron capture logs in a large sandstone reservoir

    International Nuclear Information System (INIS)

    Syed, E.V.; Salaita, G.N.; McCaffery, F.G.

    1991-01-01

    Cased hole logging with pulsed neutron tools finds extensive use for identifying zones of water breakthrough and monitoring oil-water contacts in oil reservoirs being depleted by waterflooding or natural water drive. Results of such surveys then find direct use for planning recompletions and water shutoff treatments. Pulsed neutron capture (PNC) logs are useful for estimating water saturation changes behind casing in the presence of a constant, high-salinity environment. PNC log surveys run at different times, i.e., in a time-lapse mode, are particularly amenable to quantitative analysis. The combined use of the original open hole and PNC time-lapse log information can then provide information on remaining or residual oil saturations in a reservoir. This paper reports analyses of historical pulsed neutron capture log data to assess residual oil saturation in naturally water-swept zones for selected wells from a large sandstone reservoir in the Middle East. Quantitative determination of oil saturations was aided by PNC log information obtained from a series of tests conducted in a new well in the same field
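Quantitative PNC interpretation typically rests on a volumetric mixing law for the measured capture cross section, and in time-lapse mode the matrix term cancels, so a change in sigma maps directly to a change in water saturation. A sketch with illustrative (assumed) porosity and fluid cross sections, in capture units:

```python
def water_saturation(sigma_log, phi, sigma_ma, sigma_w, sigma_h):
    """Solve the standard volumetric mixing law
        sigma_log = (1 - phi)*sigma_ma + phi*(Sw*sigma_w + (1 - Sw)*sigma_h)
    for the water saturation Sw.  All cross sections in capture units."""
    return ((sigma_log - (1.0 - phi) * sigma_ma - phi * sigma_h)
            / (phi * (sigma_w - sigma_h)))

# illustrative (assumed) values: 25 p.u. sandstone, saline formation water
sw = water_saturation(sigma_log=21.575, phi=0.25,
                      sigma_ma=8.0, sigma_w=80.0, sigma_h=21.0)
residual_oil = 1.0 - sw
```

In time-lapse analysis the difference of two surveys gives delta_Sw = delta_sigma / (phi * (sigma_w - sigma_h)), and the remaining oil saturation follows as 1 - Sw; the high, constant salinity noted in the abstract is what keeps sigma_w fixed between surveys.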

  19. Minimal time spiking in various ChR2-controlled neuron models.

    Science.gov (United States)

    Renault, Vincent; Thieullen, Michèle; Trélat, Emmanuel

    2018-02-01

    We use conductance-based neuron models and the mathematical modeling of optogenetics to define controlled neuron models, and we address the minimal-time control of these affine systems for the first spike from equilibrium. We apply tools of geometric optimal control theory to study singular extremals, and we implement a direct method to compute optimal controls. When the system is too large for a theoretical investigation of the existence of singular optimal controls, we observe numerically that the optimal controls are bang-bang.
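For a simple linear model, the minimal-time "bang" control (apply the maximal input until threshold) can be checked against a closed-form answer. The toy leaky integrator below is an assumption for illustration, not one of the paper's conductance-based ChR2 models:

```python
import math

def first_spike_time(u_max, tau, v_th, dt=1e-5):
    """Integrate dv/dt = -v/tau + u under the constant bang control
    u = u_max (forward Euler) and return the first threshold crossing."""
    v, t = 0.0, 0.0
    while v < v_th:
        v += dt * (-v / tau + u_max)
        t += dt
    return t

tau, u_max, v_th = 0.02, 3.0, 0.05     # toy parameters (assumed)
t_sim = first_spike_time(u_max, tau, v_th)
# closed-form minimal time for this linear model:
# v(t) = u_max*tau*(1 - exp(-t/tau)), solved for v(t) = v_th
t_star = -tau * math.log(1.0 - v_th / (u_max * tau))
```

For this affine single-compartment model the bang control is optimal because any lower input only slows the approach to threshold; the richer ChR2 models in the paper are where singular extremals can complicate the picture.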

  20. Real time simulation of large systems on mini-computer

    International Nuclear Information System (INIS)

    Nakhle, Michel; Roux, Pierre.

    1979-01-01

    Most simulation languages accept only an explicit formulation of differential equations, and logical variables hold no special status therein. The integration step size of the usual methods is limited by the smallest time constant of the model submitted. The NEPTUNIX 2 simulation software has a language that accepts implicit equations and an integration method whose variable step size is not limited by the time constants of the model. This, together with strong optimization of the time and memory requirements of the generated code, makes NEPTUNIX 2 a basic tool for simulation on mini-computers. Since the logical variables are specific entities under centralized control, correct processing of discontinuities and synchronization with a real process are feasible. NEPTUNIX 2 is the industrial version of NEPTUNIX 1.