WorldWideScience

Sample records for large time existence

  1. Global Existence and Large Time Behavior of Solutions to the Bipolar Nonisentropic Euler-Poisson Equations

    Directory of Open Access Journals (Sweden)

    Min Chen

    2014-01-01

    Full Text Available We study the one-dimensional bipolar nonisentropic Euler-Poisson equations, which can model various physical phenomena, such as the propagation of electrons and holes in submicron semiconductor devices, the propagation of positive and negative ions in plasmas, and the biological transport of ions through channel proteins. We show the existence and large time behavior of global smooth solutions for the initial value problem when the difference of the two particles' initial masses is nonzero and the far fields of the two particles' initial temperatures differ from the ambient device temperature. This result improves that of Y.-P. Li, which covers the case in which the difference of the two particles' initial masses is zero and the far field of the initial temperature equals the ambient device temperature.
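
    For orientation, a schematic isentropic simplification of the one-dimensional bipolar hydrodynamic (Euler-Poisson) model can be written as below; this is only an illustrative form, and the nonisentropic system treated in the paper additionally carries energy equations for the two temperatures:

    $$\displaylines{ \partial_t n + \partial_x(nu) = 0, \qquad \partial_t(nu) + \partial_x\big(nu^2 + p(n)\big) = nE - nu, \cr \partial_t h + \partial_x(hv) = 0, \qquad \partial_t(hv) + \partial_x\big(hv^2 + q(h)\big) = -hE - hv, \cr \partial_x E = n - h, }$$

    where (n,u) and (h,v) denote the densities and velocities of the two carriers (e.g., electrons and holes), p and q are pressure laws, E is the electric field, and the terms -nu and -hv model momentum relaxation.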

  2. Global existence and large time asymptotic behavior of strong solutions to the Cauchy problem of 2D density-dependent Navier–Stokes equations with vacuum

    Science.gov (United States)

    Lü, Boqiang; Shi, Xiaoding; Zhong, Xin

    2018-06-01

    We are concerned with the Cauchy problem of the two-dimensional (2D) nonhomogeneous incompressible Navier–Stokes equations with vacuum as far-field density. It is proved that if the initial density decays not too slowly at infinity, the 2D Cauchy problem of the density-dependent Navier–Stokes equations on the whole space admits a unique global strong solution. Note that the initial data can be arbitrarily large and the initial density can contain vacuum states and even have compact support. Furthermore, we also obtain the large time decay rates of the spatial gradients of the velocity and the pressure, which are the same as those of the homogeneous case.
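
    For reference, the density-dependent (nonhomogeneous) incompressible Navier–Stokes system referred to here takes the standard form (assuming a constant viscosity μ):

    $$\displaylines{ \partial_t\rho + \operatorname{div}(\rho u) = 0, \cr \partial_t(\rho u) + \operatorname{div}(\rho u\otimes u) + \nabla P = \mu\Delta u, \cr \operatorname{div} u = 0, }$$

    posed on R^2 × (0,∞), with the far-field condition ρ → 0 as |x| → ∞ expressing vacuum at infinity.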

  3. Does time exist in quantum gravity?

    Directory of Open Access Journals (Sweden)

    Claus Kiefer

    2015-12-01

    Full Text Available Time is absolute in standard quantum theory and dynamical in general relativity. The combination of both theories into a theory of quantum gravity leads therefore to a “problem of time”. In my essay, I investigate those consequences for the concept of time that may be drawn without a detailed knowledge of quantum gravity. The only assumptions are the experimentally supported universality of the linear structure of quantum theory and the recovery of general relativity in the classical limit. Among the consequences are the fundamental timelessness of quantum gravity, the approximate nature of a semiclassical time, and the correlation of entropy with the size of the Universe.

  4. Evaluating Existing Strategies to Limit Video Game Playing Time.

    Science.gov (United States)

    Davies, Bryan; Blake, Edwin

    2016-01-01

    Public concern surrounding the effects video games have on players has inspired a large body of research, and policy makers in China and South Korea have even mandated systems that limit the amount of time players spend in game. The authors present an experiment that evaluates the effectiveness of such policies. They show that forcibly removing players from the game environment causes distress, potentially removing some of the benefits that games provide and producing a desire for more game time. They also show that, with an understanding of player psychology, playtime can be manipulated without significantly changing the user experience or negating the positive effects of video games.

  5. Quantifying expert consensus against the existence of a secret, large-scale atmospheric spraying program

    Science.gov (United States)

    Shearer, Christine; West, Mick; Caldeira, Ken; Davis, Steven J.

    2016-08-01

    Nearly 17% of people in an international survey said they believed the existence of a secret large-scale atmospheric program (SLAP) to be true or partly true. SLAP is commonly referred to as ‘chemtrails’ or ‘covert geoengineering’, and has led to a number of websites purporting to show evidence of widespread chemical spraying linked to negative impacts on human health and the environment. To address these claims, we surveyed two groups of experts—atmospheric chemists with expertise in condensation trails and geochemists working on atmospheric deposition of dust and pollution—to scientifically evaluate for the first time the claims of SLAP theorists. Results show that 76 of the 77 scientists (98.7%) who took part in this study said they had not encountered evidence of a SLAP, and that the data cited as evidence could be explained through other factors, including well-understood physics and chemistry associated with aircraft contrails and atmospheric aerosols. Our goal is not to sway those already convinced that there is a secret, large-scale spraying program—who often reject counter-evidence as further proof of their theories—but rather to establish a source of objective science that can inform public discourse.

  6. Familiality of co-existing ADHD and tic disorders: evidence from a large sibling study

    Directory of Open Access Journals (Sweden)

    Veit Roessner

    2016-07-01

    Full Text Available Abstract Background: The association of attention-deficit/hyperactivity disorder (ADHD) and tic disorder (TD) is frequent and clinically important. Very few and inconclusive attempts have been made to clarify if and how the combination of ADHD+TD runs in families. Aim: To determine for the first time in a large-scale ADHD sample whether ADHD+TD increases the risk of ADHD+TD in siblings and, also for the first time, whether this is independent of their psychopathological vulnerability in general. Methods: The study is based on the International Multicenter ADHD Genetics (IMAGE) study. The present sub-sample of 2815 individuals included ADHD index patients with co-existing TD (ADHD+TD, n=262) and without TD (ADHD-TD, n=947) as well as their 1606 full siblings (n=358 of the ADHD+TD index patients and n=1248 of the ADHD-TD index patients). We assessed psychopathological symptoms in index patients and siblings by using the Strengths and Difficulties Questionnaire (SDQ) and the parent and teacher Conners' long version Rating Scales (CRS). For disorder classification, the Parental Account of Childhood Symptoms (PACS) interview was applied in n=271 children. Odds ratios computed with the GENMOD procedure (PROC GENMOD) were used to test whether the risk for ADHD, TD and ADHD+TD in siblings was associated with the related index patients' diagnoses. In order to obtain an estimate of specificity, we compared the four groups on general psychopathological symptoms. Results: Co-existing ADHD+TD in index patients increased the risk of both comorbid ADHD+TD and TD in the siblings of these index patients. These effects did not extend to general psychopathology. Interpretation: Co-existence of ADHD+TD may segregate in families. The same holds true for TD (without ADHD). Hence, the segregation of TD (included in both groups) seems to be the determining factor, independent of further behavioral problems. This close relationship between ADHD and TD supports the clinical approach to carefully assess ADHD in

  7. Effectiveness of a large mimic panel in an existing nuclear power plant central control board

    International Nuclear Information System (INIS)

    Kubota, Ryuji; Satoh, Hiroyuki; Sasajima, Katsuhiro; Kawano, Ryutaro; Shibuya, Shinya

    1999-01-01

    We conducted an analysis of nuclear power plant (NPP) operators' behaviors under emergency conditions by using training simulators, as a joint research project by Japanese BWR groups running for twelve years. In phase IV of this project we executed two kinds of experiments to evaluate the effectiveness of the interfaces. One was an evaluation of interfaces such as CRTs with touch screens, a large mimic panel, and a hierarchical annunciator system introduced in the newly developed ABWR type central control board. In the other, we analyzed the operators' behaviors in emergency conditions using the first-generation BWR type central control board, to which new interfaces such as a large display screen and demarcation on the board had been added to help operators understand the plant. Demarcation is a visual interface improvement in which a line enclosing several components causes them to be perceived as a group. The results showed that both the large display panel introduced in the ABWR central control board and the large display screen in the existing BWR type central control board improved the performance of the NPP operators in the experiments. It was therefore expected that introduction of the large mimic panel into the existing BWR type central control boards would improve operators' performance. However, in the case of actual installation of the large display board into the existing central control boards, there are spatial and hardware constraints. Therefore, the size of lamps and of the lines connecting the symbols of pumps or valves to other components will have to be modified under these constraints. It is important to evaluate the displayed information on the large display board before actual installation. We carried out experiments to address these problems by using TEPCO's research simulator, to which a large mimic panel has been added. (author)

  8. MageComet—web application for harmonizing existing large-scale experiment descriptions

    OpenAIRE

    Xue, Vincent; Burdett, Tony; Lukk, Margus; Taylor, Julie; Brazma, Alvis; Parkinson, Helen

    2012-01-01

    Motivation: Meta-analysis of large gene expression datasets obtained from public repositories requires consistently annotated data. Curation of such experiments, however, is an expert activity which involves repetitive manipulation of text. Existing tools for automated curation are few, which bottleneck the analysis pipeline. Results: We present MageComet, a web application for biologists and annotators that facilitates the re-annotation of gene expression experiments in MAGE-TAB format. It i...

  9. Large Variability in the Diversity of Physiologically Complex Surgical Procedures Exists Nationwide Among All Hospitals Including Among Large Teaching Hospitals.

    Science.gov (United States)

    Dexter, Franklin; Epstein, Richard H; Thenuwara, Kokila; Lubarsky, David A

    2017-11-22

    Multiple previous studies have shown that having a large diversity of procedures has a substantial impact on quality management of hospital surgical suites. At hospitals with substantial diversity, unless sophisticated statistical methods suitable for rare events are used, anesthesiologists working in surgical suites will have inaccurate predictions of surgical blood usage, case durations, cost accounting and price transparency, times remaining in late-running cases, and use of intraoperative equipment. What is unknown is whether large diversity is a feature of only a small, unique set of hospitals nationwide (eg, the largest hospitals in each state or province). The 2013 United States Nationwide Readmissions Database was used to study heterogeneity among 1981 hospitals in their diversities of physiologically complex surgical procedures (ie, the procedure codes). The diversity of surgical procedures performed at each hospital was quantified using a summary measure, the number of different physiologically complex surgical procedures commonly performed at the hospital (ie, 1/Herfindahl). A total of 53.9% of all hospitals had a 3-fold larger diversity (ie, >30 commonly performed physiologically complex procedures). Larger hospitals had greater diversity than the small- and medium-sized hospitals. Most large teaching hospitals commonly performed >30 procedures (lower 99% CL, 71.9% of hospitals). However, there was considerable variability among the large teaching hospitals in their diversity (interquartile range of the numbers of commonly performed physiologically complex procedures = 19.3; lower 99% CL, 12.8 procedures). The diversity of procedures represents a substantive differentiator among hospitals. Thus, the usefulness of statistical methods for operating room management should be expected to be heterogeneous among hospitals. Our results also show that "large teaching hospital" alone is an insufficient description for accurate prediction of the extent to which a hospital sustains the
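
    As a concrete illustration of the summary measure named here, the sketch below (hypothetical Python code, not from the study) computes 1/Herfindahl from a list of per-case procedure codes: the Herfindahl index is the sum of squared case shares, and its reciprocal acts as the effective number of commonly performed procedures.

    # Hypothetical sketch: 1/Herfindahl as the "effective number" of procedure codes.
    from collections import Counter

    def effective_number_of_procedures(procedure_codes):
        """Return 1/Herfindahl for a hospital's list of per-case procedure codes."""
        counts = Counter(procedure_codes)
        total = sum(counts.values())
        herfindahl = sum((c / total) ** 2 for c in counts.values())  # sum of squared shares
        return 1.0 / herfindahl

    # A case mix dominated by one procedure gives a value near 1, while an even
    # spread over k procedures gives a value near k.
    print(effective_number_of_procedures(["A"] * 80 + ["B"] * 10 + ["C"] * 10))  # ~1.5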

  10. The existence of very large-scale structures in the universe

    Energy Technology Data Exchange (ETDEWEB)

    Goicoechea, L J; Martin-Mirones, J M [Universidad de Cantabria, Santander (ES)]

    1989-09-01

    Assuming that the dipole moment observed in the cosmic background radiation (microwaves and X-rays) can be interpreted as a consequence of the motion of the observer toward a non-local and very large-scale structure in our universe, we study the perturbation of the m-z relation by this inhomogeneity, the dynamical contribution of sources to the dipole anisotropy in the X-ray background and the imprint that several structures with such characteristics would have had on the microwave background at the decoupling. We conclude that in this model the observed anisotropy in the microwave background on intermediate angular scales (≈10°) may be in conflict with the existence of superstructures.

  11. Discretization of space and time: a slight modification to the Newtonian gravitation which implies the existence of black holes

    OpenAIRE

    Roatta , Luca

    2017-01-01

    Assuming that space and time can only have discrete values, it is shown how deformed space and time cause gravitational attraction, whose law in a discrete context is slightly different from the Newtonian one but coincides with it exactly at large distance. This difference is directly connected to the existence of black holes, which turn out to have the structure of a hollow sphere.

  12. Time series clustering in large data sets

    Directory of Open Access Journals (Sweden)

    Jiří Fejfar

    2011-01-01

    Full Text Available The clustering of time series is a widely researched area, and there are many methods for dealing with this task. We use the Self-Organizing Map (SOM) with an unsupervised learning algorithm for clustering of time series. After the first experiment (Fejfar, Weinlichová, Šťastný, 2009) it seems that the whole concept of the clustering algorithm is correct, but that we have to perform time series clustering on a much larger dataset to obtain more accurate results and to find the correlation between configured parameters and results more precisely. The second requirement arose from the need for a well-defined evaluation of results. It seems useful to use sound recordings as instances of time series again: there are many recordings available in digital libraries, and many interesting features and patterns can be found in this area. In this experiment we search for recordings with a similar development of information density, which can be used for musical form investigation, cover song detection and many other applications. The objective of the presented paper is to compare clustering results obtained with different parameters of the feature vectors and of the SOM itself. We describe time series in a simplistic way, evaluating standard deviations for separate parts of the recordings. The resulting feature vectors are clustered with the SOM in batch training mode with different topologies, varying from a few neurons to large maps. Other algorithms usable for finding similarities between time series are also discussed, and conclusions for further research are presented. We also present an overview of the related literature and projects.
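
    A minimal sketch of the described pipeline is given below, assuming the recordings are already available as 1-D numpy arrays and using the third-party minisom package as one possible SOM implementation; the segment count, map size and training parameters are illustrative, not the authors' settings.

    import numpy as np
    from minisom import MiniSom  # one possible SOM implementation

    def feature_vector(signal, n_parts=32):
        """Describe a recording by the standard deviation of each consecutive part."""
        return np.array([part.std() for part in np.array_split(signal, n_parts)])

    recordings = [np.random.randn(44100 * 5) for _ in range(100)]  # placeholder audio
    features = np.vstack([feature_vector(r) for r in recordings])

    som = MiniSom(8, 8, input_len=features.shape[1], sigma=1.0, learning_rate=0.5)
    som.train_batch(features, num_iteration=5000)                  # batch training mode
    clusters = [som.winner(v) for v in features]                   # SOM node per recording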

  13. Large-scale integration of wind power into the existing Chinese energy system

    DEFF Research Database (Denmark)

    Liu, Wen; Lund, Henrik; Mathiesen, Brian Vad

    2011-01-01

    This paper presents the ability of the existing Chinese energy system to integrate wind power and explores how the Chinese energy system needs to prepare itself in order to integrate more fluctuating renewable energy in the future. With this purpose in mind, a model of the Chinese energy system has been constructed by using EnergyPLAN based on the year 2007, which has then been used for investigating three issues. Firstly, the accuracy of the model itself has been examined and then the maximum feasible wind power penetration in the existing energy system has been identified. Finally, barriers... ...stability, the maximum feasible wind power penetration in the existing Chinese energy system is approximately 26% from both technical and economic points of view. A fuel efficiency decrease occurred when increasing wind power penetration in the system, due to its rigid power supply structure and the task...

  14. Short-time existence of solutions for mean-field games with congestion

    KAUST Repository

    Gomes, Diogo A.

    2015-11-20

    We consider time-dependent mean-field games with congestion that are given by a Hamilton–Jacobi equation coupled with a Fokker–Planck equation. These models are motivated by crowd dynamics in which agents have difficulty moving in high-density areas. The congestion effects make the Hamilton–Jacobi equation singular. The uniqueness of solutions for this problem is well understood; however, the existence of classical solutions was only known in very special cases, stationary problems with quadratic Hamiltonians and some time-dependent explicit examples. Here, we demonstrate the short-time existence of C∞ solutions for sub-quadratic Hamiltonians.
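
    As an illustration of the type of system meant here (a schematic quadratic-Hamiltonian instance, not necessarily the exact model of the paper), a time-dependent mean-field game with congestion couples a Hamilton–Jacobi equation for the value function u with a Fokker–Planck equation for the agent density m:

    $$\displaylines{ -\partial_t u + \frac{|Du|^2}{2m^{\alpha}} = \Delta u + g(m), \cr \partial_t m - \operatorname{div}\big(m^{1-\alpha}\,Du\big) = \Delta m, }$$

    with a congestion exponent α > 0; the factor m^{-α} penalizes motion through crowded regions and makes the Hamilton–Jacobi equation singular where m vanishes.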

  15. The existence and global attractivity of almost periodic sequence solution of discrete-time neural networks

    International Nuclear Information System (INIS)

    Huang Zhenkun; Wang Xinghua; Gao Feng

    2006-01-01

    In this Letter, we discuss a discrete-time analogue of a continuous-time cellular neural network. Sufficient conditions are obtained for the existence of a unique almost periodic sequence solution which is globally attractive. Our results demonstrate that the formulated discrete-time analogue preserves the dynamics of the continuous-time cellular neural network in the almost periodic case. Finally, a computer simulation illustrates the suitability of our discrete-time analogue as a numerical algorithm for conveniently simulating the continuous-time cellular neural network.

  16. Large dunes on the outer shelf off the Zambezi Delta, Mozambique: evidence for the existence of a Mozambique Current

    Science.gov (United States)

    Flemming, Burghard W.; Kudrass, Hermann-Rudolf

    2018-02-01

    The existence of a continuously flowing Mozambique Current, i.e. a western geostrophic boundary current flowing southwards along the shelf break of Mozambique, was until recently accepted by oceanographers studying ocean circulation in the south-western Indian Ocean. This concept was then cast into doubt based on long-term current measurements obtained from current-meter moorings deployed across the northern Mozambique Channel, which suggested that southward flow through the Mozambique Channel took place in the form of successive, southward migrating and counter-clockwise rotating eddies. Indeed, numerical modelling found that, if at all, strong currents on the outer shelf occurred for not more than 9 days per year. In the present study, the negation of the existence of a Mozambique Current is challenged by the discovery of a large (50 km long, 12 km wide) subaqueous dune field (with up to 10 m high dunes) on the outer shelf east of the modern Zambezi River delta at water depths between 50 and 100 m. Being interpreted as representing the current-modified, early Holocene Zambezi palaeo-delta, the dune field would have migrated southwards by at least 50 km from its former location since sea level recovered to its present-day position some 7 ka ago and after the former delta had been remoulded into a migrating dune field. Because a large dune field composed of actively migrating bedforms cannot be generated and maintained by currents restricted to a period of only 9 days per year, the validity of those earlier modelling results is questioned for the western margin of the flow field. Indeed, satellite images extracted from the Perpetual Ocean display of NASA, which show monthly time-integrated surface currents in the Mozambique Channel for the 5 month period from June-October 2006, support the proposition that strong flow on the outer Mozambican shelf occurs much more frequently than postulated by those modelling results. This is consistent with more recent modelling

  17. The Existence of Local Wisdom Value Through Minangkabau Dance Creation Representation in Present Time

    Directory of Open Access Journals (Sweden)

    Indrayuda Indrayuda

    2017-01-01

    Full Text Available This paper aims to reveal the existence of local wisdom values in Minangkabau through the representation of Minangkabau dance creation at the present time in West Sumatera. The existence of the dance itself affects the continuation of local values in West Sumatera. The research method was qualitative, used to analyze local wisdom values in present-time Minangkabau dance creation representation through the touch of reconstruction and acculturation as the continuation of local wisdom. In addition, this study employs a multidisciplinary approach, drawing on the sociology and anthropology of dance and the sociology and anthropology of culture. The object of the research was Minangkabau dance creation in the present time, and the data were collected through interviews, direct observation, and documentation. The data were analyzed following the technique of Miles and Huberman. The results showed that Minangkabau dance creation is a reconstruction of older traditional dances, shaped through acculturation, and contains local wisdom values. The existence of Minangkabau dance creation can affect the continuation of local wisdom values in Minangkabau society in West Sumatera. The existence of dance creation has maintained Minangkabau local wisdom values in the present time.

  18. Existence and Stability of Traveling Waves for Degenerate Reaction-Diffusion Equation with Time Delay

    Science.gov (United States)

    Huang, Rui; Jin, Chunhua; Mei, Ming; Yin, Jingxue

    2018-01-01

    This paper deals with the existence and stability of traveling wave solutions for a degenerate reaction-diffusion equation with time delay. The degeneracy of spatial diffusion together with the effect of time delay causes the essential difficulty for the existence of the traveling waves and their stabilities. In order to treat this case, we first show the existence of smooth- and sharp-type traveling wave solutions in the case of c ≥ c^* for the degenerate reaction-diffusion equation without delay, where c^* > 0 is the critical wave speed of smooth traveling waves. Then, as a small perturbation, we obtain the existence of the smooth non-critical traveling waves for the degenerate diffusion equation with small time delay τ > 0. Furthermore, we prove the global existence and uniqueness of the C^{α,β}-solution to the time-delayed degenerate reaction-diffusion equation via compactness analysis. Finally, by the weighted energy method, we prove that the smooth non-critical traveling wave is globally stable in the weighted L^1-space. The exponential convergence rate is also derived.
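
    A representative equation of the class discussed here (written only for orientation; the paper's precise nonlinearity and degeneracy may differ) combines porous-medium-type degenerate diffusion with a delayed reaction term:

    $$ \partial_t u = \partial_x^2\big(u^m\big) + f\big(u(t,x),\,u(t-\tau,x)\big), \qquad m>1,\ \tau>0, $$

    where the diffusivity m u^{m-1} vanishes at u = 0 (the degeneracy) and τ is the time delay; traveling waves are profiles u(t,x) = φ(x + ct) with speed c ≥ c^*.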

  19. Existence and Stability of Traveling Waves for Degenerate Reaction-Diffusion Equation with Time Delay

    Science.gov (United States)

    Huang, Rui; Jin, Chunhua; Mei, Ming; Yin, Jingxue

    2018-06-01

    This paper deals with the existence and stability of traveling wave solutions for a degenerate reaction-diffusion equation with time delay. The degeneracy of spatial diffusion together with the effect of time delay causes the essential difficulty for the existence of the traveling waves and their stabilities. In order to treat this case, we first show the existence of smooth- and sharp-type traveling wave solutions in the case of c ≥ c^* for the degenerate reaction-diffusion equation without delay, where c^* > 0 is the critical wave speed of smooth traveling waves. Then, as a small perturbation, we obtain the existence of the smooth non-critical traveling waves for the degenerate diffusion equation with small time delay τ > 0. Furthermore, we prove the global existence and uniqueness of the C^{α,β}-solution to the time-delayed degenerate reaction-diffusion equation via compactness analysis. Finally, by the weighted energy method, we prove that the smooth non-critical traveling wave is globally stable in the weighted L^1-space. The exponential convergence rate is also derived.

  20. Transportation of Large Wind Components: A Review of Existing Geospatial Data

    Energy Technology Data Exchange (ETDEWEB)

    Mooney, Meghan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Maclaurin, Galen [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-09-01

    This report features the geospatial data component of a larger project evaluating logistical and infrastructure requirements for transporting oversized and overweight (OSOW) wind components. The goal of the larger project was to assess the status and opportunities for improving the infrastructure and regulatory practices necessary to transport wind turbine towers, blades, and nacelles from current and potential manufacturing facilities to end-use markets. The purpose of this report is to summarize existing geospatial data on wind component transportation infrastructure and to provide a data gap analysis, identifying areas for further analysis and data collection.

  1. Process improvement to enhance existing stroke team activity toward more timely thrombolytic treatment.

    Science.gov (United States)

    Cho, Han-Jin; Lee, Kyung Yul; Nam, Hyo Suk; Kim, Young Dae; Song, Tae-Jin; Jung, Yo Han; Choi, Hye-Yeon; Heo, Ji Hoe

    2014-10-01

    Process improvement (PI) is an approach for enhancing the existing quality improvement process by making changes while keeping the existing process. We have shown that implementation of a stroke code program using a computerized physician order entry system is effective in reducing the in-hospital time delay to thrombolysis in acute stroke patients. We investigated whether implementation of this PI could further reduce the time delays by continuous improvement of the existing process. After determining a key indicator [time interval from emergency department (ED) arrival to intravenous (IV) thrombolysis] and conducting data analysis, the target time from ED arrival to IV thrombolysis in acute stroke patients was set at 40 min. The key indicator was monitored continuously at a weekly stroke conference. The possible reasons for the delay were determined in cases for which IV thrombolysis was not administered within the target time and, where possible, the problems were corrected. The time intervals from ED arrival to the various evaluation steps and treatment before and after implementation of the PI were compared. The median time interval from ED arrival to IV thrombolysis in acute stroke patients was significantly reduced after implementation of the PI (from 63.5 to 45 min, p=0.001). The variation in the time interval was also reduced. A reduction in the evaluation time intervals was achieved after the PI [from 23 to 17 min for computed tomography scanning (p=0.003) and from 35 to 29 min for complete blood counts (p=0.006)]. PI is effective for continuous improvement of the existing process by reducing the time delays between ED arrival and IV thrombolysis in acute stroke patients.

  2. Large-scale straw supplies to existing coal-fired power stations

    International Nuclear Information System (INIS)

    Gylling, M.; Parsby, M.; Thellesen, H.Z.; Keller, P.

    1992-08-01

    It is considered that large-scale supply of straw to power stations and decentralized cogeneration plants could open up new economical systems and methods of organization of straw supply in Denmark. This thesis is elucidated and the constraints involved are pointed out. The aim is to describe to what extent large-scale straw supply is interesting with regard to monetary savings and available resources. Analyses of models, systems and techniques described in a foregoing project are carried out. It is reckoned that the annual total amount of surplus straw in Denmark is 3.6 million tons. At present, non-agricultural use of straw is limited to district heating plants with an annual consumption of 2-12 thousand tons. A prerequisite for a significant increase in the use of straw is an annual consumption by power and cogeneration plants of more than 100,000 tons. All aspects of straw management are examined in detail, also in relation to two actual Danish coal-fired plants, and the reliability of straw supply is considered. It is concluded that very significant resources of straw are available in Denmark, but a number of constraints remain, and price competitiveness must be considered in relation to other fuels. It is suggested that the use of corn harvests with whole stems attached (handled as large bales or in the same way as sliced straw alone) as fuel would result in significant monetary savings, especially in transport and storage. Equal status for whole-harvested corn with other forms of biomass fuels, with corresponding changes in taxes and subsidies, could possibly reduce constraints on large-scale straw fuel supply. (AB) (13 refs.)

  3. Existence conditions for bulk large-wavevector waves in metal-dielectric and graphene-dielectric multilayer hyperbolic metamaterials

    DEFF Research Database (Denmark)

    Zhukovsky, Sergei; Andryieuski, Andrei; Lavrinenko, Andrei

    2014-01-01

    We theoretically investigate general existence conditions for broadband bulk large-wavevector (high-k) propagating waves (such as volume plasmon polaritons in hyperbolic metamaterials) in arbitrary subwavelength periodic multilayer structures. Treating the elementary excitation in the unit cell of the structure as a generalized resonance pole of the reflection coefficient and using Bloch's theorem, we derive analytical expressions for the band of large-wavevector propagating solutions. We apply our formalism to determine the high-k band existence in two important cases: the well-known metal-dielectric...

  4. On real-time assessment of post-emergency condition existence in complex electric power systems

    Energy Technology Data Exchange (ETDEWEB)

    Tarasov, Vladimir I. [Irkutsk State Technical University 83, Lermontov Street, Irkutsk 664074 (Russian Federation)

    2008-12-15

    This paper presents two effective numerical criteria for estimating the non-existence of post-emergency operating conditions in complicated electric power systems. These criteria are based on the mathematical and programming tools of the regularized quadratic descent method and the regularized two-parameter minimization method. The proposed criteria can be effectively applied in calculations of real-time electric operating conditions. (author)

  5. Marketing communication drivers of adoption timing of a new E-service among existing customers

    NARCIS (Netherlands)

    Prins, Remo; Verhoef, Peter C.

    This study investigates the effects of direct marketing communications and mass marketing communications on the adoption timing of a new e-service among existing customers. The mass marketing communications pertain to both specific new service advertising and brand advertising from both the focal

  6. The Kembs project: environmental integration of a large existing hydropower scheme

    International Nuclear Information System (INIS)

    Garnier, Alain; Barillier, Agnes

    2015-01-01

    The environment was a major issue for the Kembs re-licensing process on the upper Rhine River. Since 1932, the Kembs dam has diverted water from the Rhine River to the 'Grand Canal d'Alsace' (GCA), which is equipped with four hydropower plants (max. diverted flow: 1400 m³/s; 630 MW; 3760 GWh/y). The Old Rhine River downstream of the dam is 50 km long and has been strongly affected by works (dikes) carried out since the 19th century for flood protection and navigation, and then by the construction of the dam. Successive engineering works induced morphological simplification and stabilization of the channel pattern from a formerly braided form to a single incised channel, generating ecological alterations. As the Kembs hydroelectric scheme concerns three countries (France, Germany and Switzerland) with various regulations and views on how to manage the environment, EDF undertook an integrated environmental approach, instead of a strict 'impact/mitigation' balance, that took 10 years to develop. Therefore, the project simultaneously acts on complementary compartments of the aquatic, riparian and terrestrial environment to benefit from the synergies that exist between them; a new power plant (8.5 MW, 28 GWh/y) is built to limit the energy losses and to ensure various functions, thereby increasing the overall environmental gain. (authors)

  7. Global existence of solutions to the Cauchy problem for time-dependent Hartree equations

    International Nuclear Information System (INIS)

    Chadam, J.M.; Glassey, R.T.

    1975-01-01

    The existence of global solutions to the Cauchy problem for time-dependent Hartree equations for N electrons is established. The solution is shown to have a uniformly bounded H^1(R^3) norm and to satisfy an estimate of the form ‖Ψ(t)‖_{H^2} ≤ c exp(kt). It is shown that "negative energy" solutions do not converge uniformly to zero as t → ∞. (U.S.)
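
    For context, the time-dependent Hartree system for N electrons referred to here can be written schematically as follows (the presence and sign of the nuclear Coulomb term, and the units, vary between formulations):

    $$ i\,\partial_t\psi_k = -\Delta\psi_k - \frac{Z}{|x|}\,\psi_k + \Big(\sum_{j\ne k} |\psi_j|^2 * \frac{1}{|x|}\Big)\psi_k, \qquad k = 1,\dots,N, $$

    where * denotes spatial convolution; the quoted bound ‖Ψ(t)‖_{H^2} ≤ c exp(kt) then controls the growth in time of the H^2 norm of the vector Ψ = (ψ_1, ..., ψ_N).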

  8. Existence of solutions to nonlinear parabolic unilateral problems with an obstacle depending on time

    Directory of Open Access Journals (Sweden)

    Nabila Bellal

    2014-10-01

    Full Text Available Using the penalty method, we prove the existence of solutions to nonlinear parabolic unilateral problems with an obstacle depending on time. To find a solution, the original inequality is transformed into an equality by adding a positive function on the right-hand side and a complementary condition. This result can be seen as a generalization of the results by Mokrane in [11] where the obstacle is zero.

  9. Existence of a time-dependent heat flux-related ponderomotive effect

    International Nuclear Information System (INIS)

    Schamel, H.; Sack, C.

    1980-01-01

    The existence of a new ponderomotive effect associated with high-frequency waves is pointed out. It originates when time-dependency, mean velocities, or divergent heat fluxes are involved and it supplements the two effects known previously, namely, the ponderomotive force and fake heating. Two proofs are presented; the first is obtained by establishing the momentum equations generalized by including radiation effects and the second by solving the quasi-linear-type diffusion equation explicitly. For a time-dependent wave packet the solution exhibits a new contribution in terms of an integral over previous states. Owing to this term, the plasma has a memory which leads to a breaking of the time symmetry of the plasma response. The range, influenced by the localized wave packet, expands during the course of time due to streamers emanating from the wave active region. Perturbations, among which is the heat flux, are carried to remote positions and, consequently, the region accessible to wave heating is increased. The density dip appears to be less pronounced at the center, and its generation and decay are delayed. The analysis includes a self-consistent action of high-frequency waves as well as the case of traveling wave packets. In order to establish the existence of this new effect, the analytical results are compared with recent microwave experiments. The possibility of generating fast particles by this new ponderomotive effect is emphasized

  10. ENERGY DEMANDS OF THE EXISTING COLLECTIVE BUILDINGS WITH BEARING STRUCTURE OF LARGE PRECAST CONCRETE PANELS FROM TIMISOARA

    Directory of Open Access Journals (Sweden)

    Pescari S.

    2015-05-01

    Full Text Available One of the targets of the EU Directives on the energy performance of buildings is to reduce the energy consumption of existing buildings by finding efficient solutions for thermal rehabilitation. In order to find adequate solutions, the first step is to establish the current state of the buildings and to determine their actual energy consumption. The current paper aims to present the energy demands of the existing buildings with bearing structures of large precast concrete panels in the city of Timisoara. Timisoara is one of the most important cities in the western part of Romania, ranking third in terms of size and economic development. The Census of Population and Housing of 2011 states that Timisoara has about 127,841 private dwellings, 60 percent of which are collective buildings. Energy demand values of the existing buildings with bearing structures of large precast concrete panels in Timisoara, in their current condition, are higher than the accepted values provided in the Romanian normative C107. The difference between these two values can reach up to 300 percent.

  11. D walls and junctions in supersymmetric gluodynamics in the large N limit suggest the existence of heavy hadrons

    International Nuclear Information System (INIS)

    Gabadadze, Gregory; Shifman, Mikhail

    2000-01-01

    A number of arguments exist that the "minimal" Bogomol'nyi-Prasad-Sommerfeld (BPS) wall width in large-N supersymmetric gluodynamics vanishes as 1/N. There is a certain tension between this assertion and the fact that the mesons coupled to λλ have masses O(N^0). To reconcile these facts we argue that there should exist additional soliton-like states with masses scaling as N. The BPS walls must be "made" predominantly of these heavy states, which are coupled to λλ more strongly than the conventional mesons. The tension of the BPS wall junction scales as N^2, which serves as an additional argument in favor of the 1/N scaling of the wall width. The heavy states can be thought of as solitons of the corresponding closed string theory. They are related to certain fivebranes in the M-theory construction. We study the issue of the wall width in toy models which capture some features of supersymmetric gluodynamics. We speculate that the special hadrons with mass scaling as N should also exist in the large-N limit of nonsupersymmetric gluodynamics. (c) 2000 The American Physical Society

  12. Intoxication with alcohol at the time of self-harm and pre-existing involvement with mental health services are associated with a pre-disposition to repetition of self-harming behavior in a large cohort of older New Zealanders presenting with an index episode of self-harm.

    Science.gov (United States)

    Ames, David

    2017-08-01

    The paper on predictors of repeat self-harm and suicide by Cheung et al. (2017), which has been chosen by the editorial team as paper of the month for this issue of International Psychogeriatrics, makes a very useful contribution to the study of self-harm and suicide in late life. Of 339 individuals presenting with an index episode of self-harm to one of seven Emergency Departments (EDs) in New Zealand, close to 15% harmed themselves again within one year and for nearly one in six of these 50 people, the repeat episode was fatal. Having alcohol in the blood and already being engaged with mental health services at the time of the index episode both had some utility in predicting the occurrence of a further self-harm episode. While it is encouraging that mental health services look to have been focusing on those who turned out to be at highest risk, clinicians may need to be particularly vigilant when following up individuals who had been drinking alcohol at the time of an initial self-harm presentation. This study also emphasizes the high risk of recurrent self-harm and completed suicide in those older adults who harm themselves and survive the initial episode. It deserves to be widely cited and gives some direction for future research on interventions designed to diminish the recurrence of self-harm in those of our patients who have presented to an ED with an initial self-harm episode.

  13. Existence of positive solutions for semipositone dynamic system on time scales

    Directory of Open Access Journals (Sweden)

    You-Wei Zhang

    2008-08-01

    Full Text Available In this paper, we study the following semipositone dynamic system on time scales: $$\displaylines{ -x^{\Delta\Delta}(t)=f(t,y)+p(t), \quad t\in(0,T)_{\mathbb{T}},\cr -y^{\Delta\Delta}(t)=g(t,x), \quad t\in(0,T)_{\mathbb{T}},\cr x(0)=x(\sigma^{2}(T))=0, \cr \alpha{y(0)}-\beta{y^{\Delta}(0)}= \gamma{y(\sigma(T))}+\delta{y^{\Delta}(\sigma(T))}=0. }$$ Using fixed point index theory, we show the existence of at least one positive solution. The interesting point is that the nonlinear term is allowed to change sign and may tend to negative infinity.

  14. Influence of weathering and pre-existing large scale fractures on gravitational slope failure: insights from 3-D physical modelling

    Directory of Open Access Journals (Sweden)

    D. Bachmann

    2004-01-01

    Full Text Available Using a new 3-D physical modelling technique we investigated the initiation and evolution of large scale landslides in the presence of pre-existing large scale fractures, taking into account the weakening of the slope material due to alteration/weathering. The modelling technique is based on specially developed, properly scaled analogue materials, as well as on an original vertical accelerator device enabling increases in the 'gravity acceleration' up to a factor of 50. Weathering primarily affects the uppermost layers through water circulation. We simulated the effect of this process by making models of two parts. The shallower part represents the zone subject to homogeneous weathering and is made of a low strength material of compressive strength σl. The deeper (core) part of the model is stronger and simulates intact rocks. Deformation of such a model subjected to the gravity force occurred only in its upper (low strength) layer. In another set of experiments, low strength (σw) narrow planar zones sub-parallel to the slope surface (σw < σl) were introduced into the model's superficial low strength layer to simulate localized highly weathered zones. In this configuration landslides were initiated much more easily (at lower 'gravity force'), were shallower, and had a smaller horizontal size largely defined by the weak zone size. Pre-existing fractures were introduced into the model by cutting it along a given plane. They proved to have little influence on slope stability, except when they were associated with highly weathered zones; in this latter case the fractures laterally limited the slides. Deep seated rockslide initiation is thus directly defined by the mechanical structure of the hillslope's uppermost levels, and especially by the presence of weak zones due to weathering. The large scale fractures play a more passive role and can only influence the shape and the volume of the sliding units.

  15. Real-time vibration compensation for large telescopes

    Science.gov (United States)

    Böhm, M.; Pott, J.-U.; Sawodny, O.; Herbst, T.; Kürster, M.

    2014-08-01

    We compare different strategies for minimizing the effects of telescope vibrations on the differential piston (optical path difference) for the Near-InfraRed/Visible Adaptive Camera and INterferometer for Astronomy (LINC-NIRVANA) at the Large Binocular Telescope (LBT) using an accelerometer feedforward compensation approach. We summarize why this technology is important for LINC-NIRVANA, and also for future telescopes and already existing instruments. The main objective is outlining a solution for the estimation problem in general and its specifics at the LBT. Emphasis is put on realistic evaluation of the algorithms used in the laboratory, such that predictions for the expected performance at the LBT can be made. Model-based estimation and broad-band filtering techniques can be used to solve the estimation task, and the differences are discussed. Simulation results and measurements are shown to motivate our choice of the estimation algorithm for LINC-NIRVANA. The laboratory setup is aimed at imitating the vibration behaviour at the LBT in general, and of the M2 as main contributor in particular. For our measurements, we introduce a disturbance time series which has a frequency spectrum comparable to what can be measured at the LBT on a typical night. The controllers' ability to suppress vibrations in the critical frequency range of 8-60 Hz is demonstrated. The experimental results are promising, indicating the ability to suppress differential piston induced by telescope vibrations by a factor of about 5 (rms), which is significantly better than any currently commissioned system.
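
    A toy sketch of the accelerometer feedforward idea is shown below, assuming Python with numpy/scipy; the sample rate, filter order and test signal are made up, and this is a broad-band filtering illustration rather than the LINC-NIRVANA estimator itself. The acceleration is band-passed to the 8-60 Hz range of interest and integrated twice to obtain a displacement (piston) estimate that can be fed forward.

    import numpy as np
    from scipy.signal import butter, filtfilt
    from scipy.integrate import cumulative_trapezoid

    fs = 1000.0                                   # assumed accelerometer sample rate [Hz]
    t = np.arange(0.0, 5.0, 1.0 / fs)
    accel = np.sin(2 * np.pi * 20.0 * t)          # placeholder 20 Hz vibration [m/s^2]

    b, a = butter(4, [8.0 / (fs / 2), 60.0 / (fs / 2)], btype="bandpass")
    accel_bp = filtfilt(b, a, accel)              # keep only the 8-60 Hz band

    # Integrate twice, re-filtering once to suppress low-frequency integration drift.
    velocity = filtfilt(b, a, cumulative_trapezoid(accel_bp, dx=1.0 / fs, initial=0.0))
    displacement = cumulative_trapezoid(velocity, dx=1.0 / fs, initial=0.0)  # feedforward estimate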

  16. Discrete-time optimal control and games on large intervals

    CERN Document Server

    Zaslavski, Alexander J

    2017-01-01

    Devoted to the structure of approximate solutions of discrete-time optimal control problems and approximate solutions of dynamic discrete-time two-player zero-sum games, this book presents results on properties of approximate solutions that hold on an interval independently of its length, for all sufficiently large intervals. Results concerning the so-called turnpike property of optimal control problems and zero-sum games in the regions close to the endpoints of the time intervals are the main focus of this book. The description of the structure of approximate solutions on sufficiently large intervals and its stability will interest graduate students and mathematicians in optimal control and game theory, engineering, and economics. This book begins with a brief overview and moves on to analyze the structure of approximate solutions of autonomous nonconcave discrete-time optimal control Lagrange problems. Next, it examines the structures of approximate solutions of autonomous discrete-time optimal control problems that are discret...

  17. The existence and regularity of time-periodic solutions to the three-dimensional Navier–Stokes equations in the whole space

    International Nuclear Information System (INIS)

    Kyed, Mads

    2014-01-01

    The existence, uniqueness and regularity of time-periodic solutions to the Navier–Stokes equations in the three-dimensional whole space are investigated. We consider the Navier–Stokes equations with a non-zero drift term corresponding to the physical model of a fluid flow around a body that moves with a non-zero constant velocity. The existence of a strong time-periodic solution is shown for small time-periodic data. It is further shown that this solution is unique in a large class of weak solutions that can be considered physically reasonable. Finally, we establish regularity properties for any strong solution regardless of its size. (paper)

  18. Defense Inventory: Opportunities Exist to Improve the Management of DOD's Acquisition Lead Times for Spare Parts

    National Research Council Canada - National Science Library

    2007-01-01

    .... Management of inventory acquisition lead times is important in maintaining cost-effective inventories, budgeting, and having material available when needed, as lead times are DOD's best estimate...

  19. Large Deviations for Two-Time-Scale Diffusions, with Delays

    International Nuclear Information System (INIS)

    Kushner, Harold J.

    2010-01-01

    We consider the problem of large deviations for a two-time-scale reflected diffusion process, possibly with delays in the dynamical terms. The Dupuis-Ellis weak convergence approach is used. It is perhaps the most intuitive and simplest for the problems of concern. The results have applications to the problem of approximating optimal controls for two-time-scale systems via use of the averaged equation.

  20. Parallel time domain solvers for electrically large transient scattering problems

    KAUST Repository

    Liu, Yang

    2014-09-26

    Marching on in time (MOT)-based integral equation solvers represent an increasingly appealing avenue for analyzing transient electromagnetic interactions with large and complex structures. MOT integral equation solvers for analyzing electromagnetic scattering from perfectly electrically conducting objects are obtained by enforcing electric field boundary conditions and implicitly time-advancing electric surface current densities by iteratively solving sparse systems of equations at all time steps. Contrary to finite difference and element competitors, these solvers apply to nonlinear and multi-scale structures comprising geometrically intricate and deep sub-wavelength features residing atop electrically large platforms. Moreover, they are high-order accurate, stable in the low- and high-frequency limits, and applicable to conducting and penetrable structures represented by highly irregular meshes. This presentation reviews some recent advances in the parallel implementation of time domain integral equation solvers, specifically those that leverage the multilevel plane-wave time-domain (PWTD) algorithm on modern manycore computer architectures including graphics processing units (GPUs) and distributed memory supercomputers. The GPU-based implementation achieves at least one order of magnitude speedup compared to serial implementations, while the distributed parallel implementation is highly scalable to thousands of compute nodes. A distributed parallel PWTD kernel has been adopted to solve time domain surface/volume integral equations (TDSIE/TDVIE) for analyzing transient scattering from large and complex-shaped perfectly electrically conducting (PEC)/dielectric objects involving ten million/tens of millions of spatial unknowns.

  1. Time dispersion in large plastic scintillation neutron detectors

    International Nuclear Information System (INIS)

    De, A.; Dasgupta, S.S.; Sen, D.

    1993-01-01

    Time dispersion (TD) has been computed for large neutron detectors using plastic scintillators. It has been shown that TD seen by the PM tube does not necessarily increase with incident neutron energy, a result not fully in agreement with the usual finding

  2. Vibration amplitude rule study for rotor under large time scale

    International Nuclear Information System (INIS)

    Yang Xuan; Zuo Jianli; Duan Changcheng

    2014-01-01

    The rotor is an important part of the rotating machinery; its vibration performance is one of the important factors affecting the service life. This paper presents both theoretical analyses and experimental demonstrations of the vibration rule of the rotor under large time scales. The rule can be used for the service life estimation of the rotor. (authors)

  3. The Large Observatory For x-ray Timing

    DEFF Research Database (Denmark)

    Feroci, M.; Herder, J. W. den; Bozzo, E.

    2014-01-01

    The Large Observatory For x-ray Timing (LOFT) was studied within ESA M3 Cosmic Vision framework and participated in the final down-selection for a launch slot in 2022-2024. Thanks to the unprecedented combination of effective area and spectral resolution of its main instrument, LOFT will study th...

  4. Existence and global exponential stability of periodic solutions for n-dimensional neutral dynamic equations on time scales.

    Science.gov (United States)

    Li, Bing; Li, Yongkun; Zhang, Xuemei

    2016-01-01

    In this paper, by using the existence of the exponential dichotomy of linear dynamic equations on time scales and the theory of calculus on time scales, we study the existence and global exponential stability of periodic solutions for a class of n-dimensional neutral dynamic equations on time scales. We also present an example to illustrate the feasibility of our results. The results of this paper are completely new and complementary to the previously known results, even in both the case of differential equations (time scale ℝ) and the case of difference equations (time scale ℤ).

  5. Freeway travel time estimation using existing fixed traffic sensors : phase 2.

    Science.gov (United States)

    2015-03-01

    Travel time, one of the most important freeway performance metrics, can be easily estimated using the data collected from fixed traffic sensors, avoiding the need to install additional travel time data collectors. This project is aimed at fully u...

  6. Existence of time-periodic weak solutions to the stochastic Navier-Stokes equations around a moving body

    International Nuclear Information System (INIS)

    Chen, Feng; Han, Yuecai

    2013-01-01

    The existence of time-periodic stochastic motions of an incompressible fluid is obtained. Here the fluid is subject to a time-periodic body force and an additional time-periodic stochastic force that is produced by a rigid body moving periodically and stochastically with the same period in the fluid

  7. Existence of time-periodic weak solutions to the stochastic Navier-Stokes equations around a moving body

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Feng, E-mail: chenfengmath@163.com; Han, Yuecai, E-mail: hanyc@jlu.edu.cn [School of Mathematics, Jilin University, Changchun 130012 (China)]

    2013-12-15

    The existence of time-periodic stochastic motions of an incompressible fluid is obtained. Here the fluid is subject to a time-periodic body force and an additional time-periodic stochastic force that is produced by a rigid body moving periodically and stochastically with the same period in the fluid.

  8. Real-time simulation of large-scale floods

    Science.gov (United States)

    Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.

    2016-08-01

    Given the complexity of the real-time water situation, the real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust two-dimensional shallow water model based on an unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance the numerical stability, and an adaptive method is proposed to improve the running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.

  9. Time simulation of flutter with large stiffness changes

    Science.gov (United States)

    Karpel, Mordechay; Wieseman, Carol D.

    1992-01-01

    Time simulation of flutter, involving large local structural changes, is formulated with a state-space model that is based on a relatively small number of generalized coordinates. Free-free vibration modes are first calculated for a nominal finite-element model with relatively large fictitious masses located at the area of structural changes. A low-frequency subset of these modes is then transformed into a set of structural modal coordinates with which the entire simulation is performed. These generalized coordinates and the associated oscillatory aerodynamic force coefficient matrices are used to construct an efficient time-domain, state-space model for a basic aeroelastic case. The time simulation can then be performed by simply changing the mass, stiffness, and damping coupling terms when structural changes occur. It is shown that the size of the aeroelastic model required for time simulation with large structural changes at a few a priori known locations is similar to that required for direct analysis of a single structural case. The method is applied to the simulation of an aeroelastic wind-tunnel model. The diverging oscillations are followed by the activation of a tip-ballast decoupling mechanism that stabilizes the system but may cause significant transient overshoots.

  10. Dwell time considerations for large area cold plasma decontamination

    Science.gov (United States)

    Konesky, Gregory

    2009-05-01

    Atmospheric discharge cold plasmas have been shown to be effective in the reduction of pathogenic bacteria and spores and in the decontamination of simulated chemical warfare agents, without the generation of toxic or harmful by-products. Cold plasmas may also be useful in assisting cleanup of radiological "dirty bombs." For practical applications in realistic scenarios, the plasma applicator must have both a large area of coverage, and a reasonably short dwell time. However, the literature contains a wide range of reported dwell times, from a few seconds to several minutes, needed to achieve a given level of reduction. This is largely due to different experimental conditions, and especially, different methods of generating the decontaminating plasma. We consider these different approaches and attempt to draw equivalencies among them, and use this to develop requirements for a practical, field-deployable plasma decontamination system. A plasma applicator with 12 square inches area and integral high voltage, high frequency generator is described.

  11. Freeway travel time estimation using existing fixed traffic sensors : phase 1.

    Science.gov (United States)

    2013-08-01

    Freeway travel time is one of the most useful pieces of information for road users and an important measure of effectiveness (MOE) for traffic engineers and policy makers. In the Greater St. Louis area, Gateway Guide, the St. Louis Transportation...

  12. Salecker-Wigner-Peres clock, Feynman paths, and a tunneling time that should not exist

    Science.gov (United States)

    Sokolovski, D.

    2017-08-01

    The Salecker-Wigner-Peres (SWP) clock is often used to determine the duration a quantum particle is supposed to spend in a specified region of space Ω. By construction, the result is a real positive number, and the method seems to avoid the difficulty of introducing complex time parameters, which arises in the Feynman paths approach. However, it tells little about the particle's motion. We investigate this matter further, and show that the SWP clock, like any other Larmor clock, correlates the rotation of its angular momentum with the durations τ, which the Feynman paths spend in Ω, thereby destroying interference between different durations. An inaccurate weakly coupled clock leaves the interference almost intact, and the need to resolve the resulting "which way?" problem is one of the main difficulties at the center of the "tunnelling time" controversy. In the absence of a probability distribution for the values of τ, the SWP results are expressed in terms of moduli of the "complex times," given by the weighted sums of the corresponding probability amplitudes. It is shown that overinterpretation of these results, by treating the SWP times as physical time intervals, leads to paradoxes and should be avoided. We also analyze various settings of the SWP clock, different calibration procedures, and the relation between the SWP results and the quantum dwell time. The cases of stationary tunneling and tunnel ionization are considered in some detail. Although our detailed analysis addresses only one particular definition of the duration of a tunneling process, it also points towards the impossibility of uniting various time parameters, which may occur in quantum theory, within the concept of a single tunnelling time.
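
    The "complex times" mentioned above are, schematically, amplitude-weighted sums over the durations the Feynman paths spend in Ω. The display below is only a generic sketch of that construction for orientation, not a formula quoted from the paper:

        $$ \bar{t}_\Omega \;=\; \frac{\sum_{\tau} \tau\, A(\tau)}{\sum_{\tau} A(\tau)},
           \qquad A(\tau) \;=\; \sum_{\text{paths with duration } \tau \text{ in } \Omega} A[\text{path}]. $$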

  13. Large Time Behavior of the Vlasov-Poisson-Boltzmann System

    Directory of Open Access Journals (Sweden)

    Li Li

    2013-01-01

    Full Text Available The motion of dilute charged particles can be modeled by the Vlasov-Poisson-Boltzmann (VPB) system. We study the large time stability of the VPB system. To be precise, we prove that as time goes to infinity, the solution of the VPB system tends to the global Maxwellian state at a rate O(t^{-∞}), by using a method developed for the Boltzmann equation without force in the work of Desvillettes and Villani (2005). The improvement of the present paper is the removal of the condition on the parameter λ imposed in the work of Li (2008).

  14. Just-in-time connectivity for large spiking networks.

    Science.gov (United States)

    Lytton, William W; Omurtag, Ahmet; Neymotin, Samuel A; Hines, Michael L

    2008-11-01

    The scale of large neuronal network simulations is memory limited due to the need to store connectivity information: connectivity storage grows as the square of neuron number up to anatomically relevant limits. Using the NEURON simulator as a discrete-event simulator (no integration), we explored the consequences of avoiding the space costs of connectivity through regenerating connectivity parameters when needed: just in time after a presynaptic cell fires. We explored various strategies for automated generation of one or more of the basic static connectivity parameters: delays, postsynaptic cell identities, and weights, as well as run-time connectivity state: the event queue. Comparison of the JitCon implementation to NEURON's standard NetCon connectivity method showed substantial space savings, with associated run-time penalty. Although JitCon saved space by eliminating connectivity parameters, larger simulations were still memory limited due to growth of the synaptic event queue. We therefore designed a JitEvent algorithm that added items to the queue only when required: instead of alerting multiple postsynaptic cells, a spiking presynaptic cell posted a callback event at the shortest synaptic delay time. At the time of the callback, this same presynaptic cell directly notified the first postsynaptic cell and generated another self-callback for the next delay time. The JitEvent implementation yielded substantial additional time and space savings. We conclude that just-in-time strategies are necessary for very large network simulations but that a variety of alternative strategies should be considered whose optimality will depend on the characteristics of the simulation to be run.
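
    The JitEvent idea described above (a spiking cell posts a single callback at its shortest synaptic delay, each callback notifies one target and re-posts itself for the next delay, and connectivity is regenerated deterministically from the presynaptic cell id) can be sketched in a few lines. This is a toy illustration, not the NEURON/JitCon code; all names and the pseudo-random connectivity rule are assumptions.

        import heapq
        import random

        class JitEventSketch:
            """Toy event queue illustrating the 'one callback per spike' idea.

            Connectivity (targets sorted by delay) is regenerated on demand from the
            presynaptic cell id, so no per-connection data is stored permanently."""
            def __init__(self, regen_targets):
                self.queue = []                       # (time, seq, pre_id, target index)
                self.seq = 0
                self.regen_targets = regen_targets    # pre_id -> [(delay, post_id, weight), ...]

            def spike(self, t, pre_id):
                targets = self.regen_targets(pre_id)
                if targets:
                    # post one callback at the shortest delay instead of one event per target
                    heapq.heappush(self.queue, (t + targets[0][0], self.seq, pre_id, 0))
                    self.seq += 1

            def run(self, deliver, t_stop):
                while self.queue and self.queue[0][0] <= t_stop:
                    t, _, pre_id, i = heapq.heappop(self.queue)
                    targets = self.regen_targets(pre_id)          # regenerate just in time
                    delay, post_id, w = targets[i]
                    deliver(t, pre_id, post_id, w)
                    if i + 1 < len(targets):                      # re-post for the next delay
                        heapq.heappush(self.queue,
                                       (t - delay + targets[i + 1][0], self.seq, pre_id, i + 1))
                        self.seq += 1

        def regen_targets(pre_id, n_cells=1000, fanout=5):
            """Deterministic pseudo-random connectivity: same pre_id -> same targets."""
            rng = random.Random(pre_id)
            return sorted((rng.uniform(1.0, 5.0), rng.randrange(n_cells), rng.random())
                          for _ in range(fanout))

        net = JitEventSketch(regen_targets)
        net.spike(0.0, pre_id=42)
        net.run(lambda t, pre, post, w: print(f"t={t:.2f}  {pre} -> {post}"), t_stop=10.0)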

  15. FTSPlot: fast time series visualization for large datasets.

    Directory of Open Access Journals (Sweden)

    Michael Riss

    Full Text Available The analysis of electrophysiological recordings often involves visual inspection of time series data to locate specific experiment epochs, mask artifacts, and verify the results of signal processing steps, such as filtering or spike detection. Long-term experiments with continuous data acquisition generate large amounts of data. Rapid browsing through these massive datasets poses a challenge to conventional data plotting software because the plotting time increases proportionately to the increase in the volume of data. This paper presents FTSPlot, which is a visualization concept for large-scale time series datasets using techniques from the field of high performance computer graphics, such as hierarchic level of detail and out-of-core data handling. In a preprocessing step, time series data, event, and interval annotations are converted into an optimized data format, which then permits fast, interactive visualization. The preprocessing step has a computational complexity of O(n × log(N)); the visualization itself can be done with a complexity of O(1) and is therefore independent of the amount of data. A demonstration prototype has been implemented and benchmarks show that the technology is capable of displaying large amounts of time series data, event, and interval annotations lag-free with < 20 ms delay. The current 64-bit implementation theoretically supports datasets with up to 2^64 bytes, on the x86_64 architecture currently up to 2^48 bytes are supported, and benchmarks have been conducted with 2^40 bytes/1 TiB or 1.3 × 10^11 double precision samples. The presented software is freely available and can be included as a Qt GUI component in future software projects, providing a standard visualization method for long-term electrophysiological experiments.
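
    FTSPlot itself is distributed as a Qt/C++ component; the numpy sketch below only illustrates the preprocessing idea behind hierarchic level of detail: build min/max summaries at successively coarser block sizes once, so that any zoom level can later be drawn from a bounded number of bins. The block size, bin budget and data are arbitrary assumptions.

        import numpy as np

        def build_minmax_pyramid(samples, block=64):
            """Hierarchical min/max summaries: level 0 condenses 'block' raw samples
            per bin, level 1 condenses 'block' level-0 bins, and so on."""
            levels = []
            lo, hi = samples, samples
            while lo.size > block:
                n = (lo.size // block) * block
                lo = lo[:n].reshape(-1, block).min(axis=1)
                hi = hi[:n].reshape(-1, block).max(axis=1)
                levels.append((lo, hi))
            return levels

        def query(levels, samples, start, stop, max_bins=2000, block=64):
            """Return (min, max) envelopes for samples[start:stop] from the finest
            level whose bin count stays within the drawing budget."""
            span = stop - start
            if span <= max_bins:                      # zoomed in far enough: raw data
                seg = samples[start:stop]
                return seg, seg
            factor = block
            for lo, hi in levels:
                if span // factor <= max_bins:
                    return lo[start // factor: stop // factor], hi[start // factor: stop // factor]
                factor *= block
            lo, hi = levels[-1]                       # fall back to the coarsest level
            factor //= block
            return lo[start // factor: stop // factor], hi[start // factor: stop // factor]

        data = np.random.randn(10_000_000)
        pyramid = build_minmax_pyramid(data)
        lo, hi = query(pyramid, data, 0, 8_000_000)   # coarse overview of 8 M samples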

  16. Keeping the security and the relief in the environment where radioactive materials exist all the times

    International Nuclear Information System (INIS)

    Murata, Takashi

    2014-01-01

    Three-Eleven was a turning point after which we have recognized that we are surrounded by radioactive materials all the time. On the other hand, "editing by any individual" became possible owing to the spread of advanced ICT equipment, and now a person can obtain the information necessary to decide and act as he or she wants. For keeping security and relieving anxiety about radiation, it is important to record and evaluate personal irradiation information utilizing ICT, and the results should be returned to the person concerned in a timely manner. At the same time, it is necessary to establish a system by which the data are compiled as big data and opened for public use. To establish such a system, the promotion of interdisciplinary collaboration is expected. (J.P.N.)

  17. On the existence of conformal Killing vectors for ST-homogeneous Godel type space-times

    Energy Technology Data Exchange (ETDEWEB)

    Parra, Y.; Patino, A.; Percoco, U. [Laboratorio de Fisica Teorica, Facultad de Ciencias Universidad de los Andes, Merida 5101 (Venezuela); Tsamparlis, M. [seccion de Astronomia-Astrofisica-Mecanica, Universidad de Atenas, Atenas 157 83 (Greece)

    2006-07-01

    Tsamparlis and co-authors have developed a systematic method for computing the conformal algebra of 1+3 space-times. The proper CKVs are found in terms of gradient CKVs of the 3-space. In this paper we apply Tsamparlis' results to the study of CKVs of the ST-homogeneous Godel-type spacetimes. We find that the only space-time admitting proper CKVs is the ST-homogeneous Godel type with m^2 = 4ω^2 (RT). (Author)

  18. Short-time existence of solutions for mean-field games with congestion

    KAUST Repository

    Gomes, Diogo A.; Voskanyan, Vardan K.

    2015-01-01

    We consider time-dependent mean-field games with congestion that are given by a Hamilton–Jacobi equation coupled with a Fokker–Planck equation. These models are motivated by crowd dynamics in which agents have difficulty moving in high-density areas
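
    The abstract is cut short here; for orientation, a time-dependent mean-field game with congestion is a coupled Hamilton–Jacobi/Fokker–Planck system, of which the following quadratic-Hamiltonian form is a common schematic example (not necessarily the exact system studied in the cited paper):

        $$ \begin{cases}
             -\partial_t u - \Delta u + \dfrac{|\nabla u|^2}{2\, m^{\alpha}} = g(m), \\[2mm]
             \;\;\,\partial_t m - \Delta m - \operatorname{div}\!\big( m^{1-\alpha}\, \nabla u \big) = 0,
           \end{cases} $$

    where u is the value function of a typical agent, m the population density, and the exponent α > 0 encodes the congestion penalty on moving through high-density regions.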

  19. Interpolation in Time Series : An Introductive Overview of Existing Methods, Their Performance Criteria and Uncertainty Assessment

    NARCIS (Netherlands)

    Lepot, M.J.; Aubin, Jean Baptiste; Clemens, F.H.L.R.

    2017-01-01

    A thorough review has been performed on interpolation methods to fill gaps in time-series, efficiency criteria, and uncertainty quantifications. On one hand, there are numerous available methods: interpolation, regression, autoregressive, machine learning methods, etc. On the other hand, there are

  20. Large holographic displays for real-time applications

    Science.gov (United States)

    Schwerdtner, A.; Häussler, R.; Leister, N.

    2008-02-01

    Holography is generally accepted as the ultimate approach to display three-dimensional scenes or objects. In principle, the reconstruction of an object from a perfect hologram would appear indistinguishable from viewing the corresponding real-world object. Up to now, two main obstacles have prevented large-screen Computer-Generated Holograms (CGH) from reaching a satisfactory laboratory prototype, not to mention a marketable one: the small cell pitch of a CGH results in a huge number of hologram cells and a very high computational load for encoding the CGH. These seemingly inevitable technological hurdles have long remained uncleared, limiting the use of holography to special applications, such as optical filtering, interference, beam forming, digital holography for capturing the 3-D shape of objects, and others. SeeReal Technologies has developed a new approach for real-time capable CGH using the so-called Tracked Viewing Windows technology to overcome these problems. The paper will show that today's state-of-the-art reconfigurable Spatial Light Modulators (SLM), especially today's feasible LCD panels, are suited for reconstructing large 3-D scenes which can be observed from large viewing angles. To achieve this, the original holographic concept of containing information from the entire scene in each part of the CGH has been abandoned. This substantially reduces the hologram resolution and thus the computational load by several orders of magnitude, making real-time computation possible. A monochrome real-time prototype measuring 20 inches has been built and demonstrated at the SID conference and exhibition 2007 and at several other events.

  1. Existence of time-dependent density-functional theory for open electronic systems: time-dependent holographic electron density theorem.

    Science.gov (United States)

    Zheng, Xiao; Yam, ChiYung; Wang, Fan; Chen, GuanHua

    2011-08-28

    We present the time-dependent holographic electron density theorem (TD-HEDT), which lays the foundation of time-dependent density-functional theory (TDDFT) for open electronic systems. For any finite electronic system, the TD-HEDT formally establishes a one-to-one correspondence between the electron density inside any finite subsystem and the time-dependent external potential. As a result, any electronic property of an open system in principle can be determined uniquely by the electron density function inside the open region. Implications of the TD-HEDT on the practicality of TDDFT are also discussed.

  2. Process evaluation of treatment times in a large radiotherapy department

    International Nuclear Information System (INIS)

    Beech, R.; Burgess, K.; Stratford, J.

    2016-01-01

    Purpose/objective: The Department of Health (DH) recognises access to appropriate and timely radiotherapy (RT) services as crucial in improving cancer patient outcomes, especially when facing a predicted increase in cancer diagnosis. There is a lack of ‘real-time’ data regarding daily demand of a linear accelerator, the impact of increasingly complex techniques on treatment times, and whether current scheduling reflects time needed for RT delivery, which would be valuable in highlighting current RT provision. Material/methods: A systematic quantitative process evaluation was undertaken in a large regional cancer centre, including a satellite centre, between January and April 2014. Data collected included treatment room-occupancy time, RT site, RT and verification technique and patient mobility status. Data was analysed descriptively; average room-occupancy times were calculated for RT techniques and compared to historical standardised treatment times within the department. Results: Room-occupancy was recorded for over 1300 fractions, over 50% of which overran their allotted treatment time. In a focused sample of 16 common techniques, 10 overran their allocated timeslots. Verification increased room-occupancy by six minutes (50%) over non-imaging. Treatments for patients requiring mobility assistance took four minutes (29%) longer. Conclusion: The majority of treatments overran their standardised timeslots. Although technique advancement has reduced RT delivery time, room-occupancy has not necessarily decreased. Verification increases room-occupancy and needs to be considered when moving towards adaptive techniques. Mobility affects room-occupancy and will become increasingly significant in an ageing population. This evaluation assesses validity of current treatment times in this department, and can be modified and repeated as necessary. - Highlights: • A process evaluation examined room-occupancy for various radiotherapy techniques. • Appointment lengths

  3. Interpolation in Time Series: An Introductive Overview of Existing Methods, Their Performance Criteria and Uncertainty Assessment

    Directory of Open Access Journals (Sweden)

    Mathieu Lepot

    2017-10-01

    Full Text Available A thorough review has been performed on interpolation methods to fill gaps in time-series, efficiency criteria, and uncertainty quantifications. On one hand, there are numerous available methods: interpolation, regression, autoregressive, machine learning methods, etc. On the other hand, there are many methods and criteria to estimate efficiencies of these methods, but uncertainties on the interpolated values are rarely calculated. Furthermore, while they are estimated according to standard methods, the prediction uncertainty is not taken into account: a discussion is thus presented on the uncertainty estimation of interpolated/extrapolated data. Finally, some suggestions for further research and a new method are proposed.

  4. Existence, regularity and representation of solutions of time fractional wave equations

    Directory of Open Access Journals (Sweden)

    Valentin Keyantuo

    2017-09-01

    Full Text Available We study the solvability of the fractional order inhomogeneous Cauchy problem $$ \mathbb{D}_t^\alpha u(t) = Au(t) + f(t), \quad t>0,\; 1<\alpha\le 2, $$ where A is a closed linear operator in some Banach space X and $f:[0,\infty)\to X$ a given function. Operator families associated with this problem are defined and their regularity properties are investigated. In the case where A is a generator of a $\beta$-times integrated cosine family $(C_\beta(t))$, we derive explicit representations of mild and classical solutions of the above problem in terms of the integrated cosine family. We include applications to elliptic operators with Dirichlet, Neumann or Robin type boundary conditions on $L^p$-spaces and on the space of continuous functions.

  5. Modeling and Control of a Large Nuclear Reactor A Three-Time-Scale Approach

    CERN Document Server

    Shimjith, S R; Bandyopadhyay, B

    2013-01-01

    Control analysis and design of large nuclear reactors requires a suitable mathematical model representing the steady state and dynamic behavior of the reactor with reasonable accuracy. This task is, however, quite challenging because of several complex dynamic phenomena existing in a reactor. Quite often, the models developed would be of prohibitively large order, non-linear and of complex structure not readily amenable for control studies. Moreover, the existence of simultaneously occurring dynamic variations at different speeds makes the mathematical model susceptible to numerical ill-conditioning, inhibiting direct application of standard control techniques. This monograph introduces a technique for mathematical modeling of large nuclear reactors in the framework of multi-point kinetics, to obtain a comparatively smaller order model in standard state space form thus overcoming these difficulties. It further brings in innovative methods for controller design for systems exhibiting multi-time-scale property,...

  6. Irregular Morphing for Real-Time Rendering of Large Terrain

    Directory of Open Access Journals (Sweden)

    S. Kalem

    2016-06-01

    Full Text Available The following paper proposes an alternative approach to the real-time adaptive triangulation problem. A new region-based multi-resolution approach for terrain rendering is described which improves, on the fly, the distribution of the density of triangles inside a tile after selecting the appropriate Level-Of-Detail by adaptive sampling. The proposed approach organizes the heightmap into a QuadTree of tiles that are processed independently. This technique combines the benefits of both the Triangular Irregular Network approach and the region-based multi-resolution approach by improving the distribution of the density of triangles inside the tile. Our technique morphs the initial regular grid of the tile into a deformed grid in order to minimize the approximation error. The proposed technique strives to combine large tile size and real-time processing while guaranteeing an upper bound on the screen space error. Thus, this approach adapts the terrain rendering process to local surface characteristics and enables on-the-fly handling of large amounts of terrain data. Morphing is based on multi-resolution wavelet analysis. The use of the D2WT multi-resolution analysis of the terrain height-map speeds up processing and permits interactive terrain rendering. Tests and experiments demonstrate that the Haar B-Spline wavelet, well known for its properties of localization and its compact support, is suitable for fast and accurate redistribution. Such a technique could be exploited in a client-server architecture for supporting interactive high-quality remote visualization of very large terrain.

  7. The question of the existence of God in the book of Stephen Hawking: A brief history of time

    NARCIS (Netherlands)

    Driessen, A.; Driessen, A; Suarez, A.

    1997-01-01

    The continuing interest in the book of S. Hawking "A Brief History of Time" makes a philosophical evaluation of the content highly desirable. As will be shown, the genre of this work can be identified as a speciality in philosophy, namely the proof of the existence of God. In this study an attempt

  8. Necessary and Sufficient Conditions for the Existence of Positive Solution for Singular Boundary Value Problems on Time Scales

    Directory of Open Access Journals (Sweden)

    Zhang Xuemei

    2009-01-01

    Full Text Available By constructing available upper and lower solutions and combining Schauder's fixed point theorem with the maximum principle, this paper establishes sufficient and necessary conditions to guarantee the existence of Cld[0,1]𝕋 as well as CldΔ[0,1]𝕋 positive solutions for a class of singular boundary value problems on time scales. The results significantly extend and improve many known results for both the continuous case and more general time scales. We illustrate our results by one example.

  9. Large time behavior of entropy solutions to one-dimensional unipolar hydrodynamic model for semiconductor devices

    Science.gov (United States)

    Huang, Feimin; Li, Tianhong; Yu, Huimin; Yuan, Difan

    2018-06-01

    We are concerned with the global existence and large time behavior of entropy solutions to the one-dimensional unipolar hydrodynamic model for semiconductors, in the form of Euler-Poisson equations, in a bounded interval. In this paper, we first prove the global existence of entropy solutions by vanishing viscosity and the compensated compactness framework. In particular, the solutions are uniformly bounded with respect to the space and time variables by introducing modified Riemann invariants and the theory of invariant regions. Based on the uniform estimates of the density, we further show that the entropy solution converges to the corresponding unique stationary solution exponentially in time. No smallness condition is assumed on the initial data and doping profile. Moreover, the novelty of this paper is the uniform bound with respect to time for the weak solutions of the isentropic Euler-Poisson system.
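
    For reference, the one-dimensional unipolar hydrodynamic (Euler-Poisson) model referred to here is usually stated as follows (a standard form with pressure p(n) and relaxation time τ; the notation in the paper may differ slightly):

        $$ \begin{cases}
             n_t + J_x = 0, \\[1mm]
             J_t + \Big( \dfrac{J^2}{n} + p(n) \Big)_x = n E - \dfrac{J}{\tau}, \\[1mm]
             E_x = n - b(x),
           \end{cases} \qquad p(n) = n^{\gamma}, \; \gamma \ge 1, $$

    with n the electron density, J the current density, E the electric field and b(x) the doping profile.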

  10. Wealth Transfers Among Large Customers from Implementing Real-Time Retail Electricity Pricing

    OpenAIRE

    Borenstein, Severin

    2007-01-01

    Adoption of real-time electricity pricing — retail prices that vary hourly to reflect changing wholesale prices — removes existing cross-subsidies to those customers that consume disproportionately more when wholesale prices are highest. If their losses are substantial, these customers are likely to oppose RTP initiatives unless there is a supplemental program to offset their loss. Using data on a sample of 1142 large industrial and commercial customers in northern California, I show that RTP...

  11. Large area spark counters with fine time and position resolution

    International Nuclear Information System (INIS)

    Ogawa, A.; Atwood, W.B.; Fujiwara, N.; Pestov, Yu.N.; Sugahara, R.

    1983-10-01

    Spark counters trace their history back over three decades but have been used in only a limited number of experiments. The key properties of these devices include their capability of precision timing (at the sub 100 ps level) and of measuring the position of the charged particle to high accuracy. At SLAC we have undertaken a program to develop these devices for use in high energy physics experiments involving large detectors. A spark counter of size 1.2 m x 0.1 m has been constructed and has been operating continuously in our test setup for several months. In this talk I will discuss some details of its construction and its properties as a particle detector. 14 references

  12. Large, real time detectors for solar neutrinos and magnetic monopoles

    International Nuclear Information System (INIS)

    Gonzalez-Mestres, L.

    1990-01-01

    We discuss the present status of superheated superconducting granules (SSG) development for the real time detection of magnetic monopoles of any speed and of low energy solar neutrinos down to the pp region (indium project). Basic properties of SSG and progress made in the recent years are briefly reviewed. Possible ways for further improvement are discussed. The performances reached in ultrasonic grain production at ∼ 100 μm size, as well as in conventional read-out electronics, look particularly promising for a large scale monopole experiment. Alternative approaches are briefly dealt with: induction loops for magnetic monopoles; scintillators, semiconductors or superconducting tunnel junctions for a solar neutrino detector based on an indium target

  13. Operational, cost, and technical study of large windpower systems integrated with an existing electric utility. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Ligon, C.; Kirby, G.; Jordan, D.; Lawrence, J.H.; Wiesner, W.; Kosovec, A.; Swanson, R.K.; Smith, R.T.; Johnson, C.C.; Hodson, H.O.

    1976-04-01

    Detailed wind energy assessment from the available wind records, and evaluation of the application of wind energy systems to an existing electric utility were performed in an area known as the Texas Panhandle, on the Great Plains. The study area includes parts of Texas, eastern New Mexico, the Oklahoma Panhandle and southern Kansas. The region is shown to have uniformly distributed winds of relatively high velocity, with average wind power density of 0.53 kW/m² at 30 m height at Amarillo, Texas, a representative location. The annual period of calm is extremely low. Three separate compressed air storage systems with good potential were analyzed in detail, and two potential pumped-hydro facilities were identified and given preliminary consideration. Aquifer storage of compressed air is a promising possibility in the region.

  14. Modeling and control of a large nuclear reactor. A three-time-scale approach

    Energy Technology Data Exchange (ETDEWEB)

    Shimjith, S.R. [Indian Institute of Technology Bombay, Mumbai (India); Bhabha Atomic Research Centre, Mumbai (India); Tiwari, A.P. [Bhabha Atomic Research Centre, Mumbai (India); Bandyopadhyay, B. [Indian Institute of Technology Bombay, Mumbai (India). IDP in Systems and Control Engineering

    2013-07-01

    Recent research on Modeling and Control of a Large Nuclear Reactor. Presents a three-time-scale approach. Written by leading experts in the field. Control analysis and design of large nuclear reactors requires a suitable mathematical model representing the steady state and dynamic behavior of the reactor with reasonable accuracy. This task is, however, quite challenging because of several complex dynamic phenomena existing in a reactor. Quite often, the models developed would be of prohibitively large order, non-linear and of complex structure not readily amenable for control studies. Moreover, the existence of simultaneously occurring dynamic variations at different speeds makes the mathematical model susceptible to numerical ill-conditioning, inhibiting direct application of standard control techniques. This monograph introduces a technique for mathematical modeling of large nuclear reactors in the framework of multi-point kinetics, to obtain a comparatively smaller order model in standard state space form thus overcoming these difficulties. It further brings in innovative methods for controller design for systems exhibiting multi-time-scale property, with emphasis on three-time-scale systems.

  15. Post-hoc pattern-oriented testing and tuning of an existing large model: lessons from the field vole.

    Directory of Open Access Journals (Sweden)

    Christopher J Topping

    Full Text Available Pattern-oriented modeling (POM) is a general strategy for modeling complex systems. In POM, multiple patterns observed at different scales and hierarchical levels are used to optimize model structure, to test and select sub-models of key processes, and for calibration. So far, POM has been used for developing new models and for models of low to moderate complexity. It remains unclear, though, whether the basic idea of POM, to utilize multiple patterns, could also be used to test and possibly develop existing and established models of high complexity. Here, we use POM to test, calibrate, and further develop an existing agent-based model of the field vole (Microtus agrestis), which was developed and tested within the ALMaSS framework. This framework is complex because it includes a high-resolution representation of the landscape and its dynamics, of the individual's behavior, and of the interaction between landscape and individual behavior. Results of fitting to the range of patterns chosen were generally very good, but the procedure required to achieve this was long and complicated. To obtain good correspondence between model and the real world it was often necessary to model the real world environment closely. We therefore conclude that post-hoc POM is a useful and viable way to test a highly complex simulation model, but also warn against the dangers of over-fitting to real world patterns that lack details in their explanatory driving factors. To overcome some of these obstacles we suggest the adoption of open-science and open-source approaches to ecological simulation modeling.

  16. The settlement of foundation of existing large structure on soft ground and investigation of its allowable settlement

    International Nuclear Information System (INIS)

    Okamoto, Toshiro

    1987-01-01

    In our laboratory, a study of siting on Quaternary ground is being carried out to make it possible to construct nuclear power plants on soil ground in Japan; an important subject is to understand the bearing capacity, settlement and seismic response of foundations. Measured data on the relation between ground and foundation type, and on the total and differential settlement of already constructed large structures, are therefore collected to investigate actual conditions and to examine allowable settlement. The investigated structures are mainly foreign nuclear power plants and domestic and foreign high-rise buildings. The taller the buildings, the more often raft foundations are used and the higher the contact pressures, becoming similar to those of a nuclear power plant. The discussion therefore focuses mainly on raft foundations. It is found that some measured maximum total settlements are larger than previously proposed allowable values, so an empirical allowable settlement is derived from the measured values, considering the effects of base slab width, contact pressure and foundation ground. Differential settlement is investigated in relation to maximum total settlement and is formulated considering the width and rigidity of the base slab. In addition, the limit of differential settlement at which the foundation is damaged is obtained, and the limit of maximum total settlement is obtained by combining this with the above-mentioned relation. The obtained allowable value is largely influenced by the base slab width and is less severe than some previously proposed values. It is therefore expected that foundation deformation can be rationally investigated when a large structure such as a nuclear power plant is constructed on soft ground. (author)

  17. FREQUENCY CATASTROPHE AND CO-EXISTING ATTRACTORS IN A CELL Ca2+ NONLINEAR OSCILLATION MODEL WITH TIME DELAY*

    Institute of Scientific and Technical Information of China (English)

    应阳君; 黄祖洽

    2001-01-01

    Frequency catastrophe is found in a cell Ca2+ nonlinear oscillation model with time delay. The relation of the frequency transition to the time delay is studied by numerical simulations and theoretical analysis. There is a range of parameters in which two kinds of attractors with great frequency differences co-exist in the system. Along with parameter changes, a critical phenomenon occurs and the oscillation frequency changes greatly. This mechanism helps to deepen the understanding of the complex dynamics of delay systems, and might be of some significance in cell signalling.

  18. Necessary and Sufficient Conditions for the Existence of Positive Solution for Singular Boundary Value Problems on Time Scales

    Directory of Open Access Journals (Sweden)

    Meiqiang Feng

    2009-01-01

    Full Text Available By constructing available upper and lower solutions and combining the Schauder's fixed point theorem with maximum principle, this paper establishes sufficient and necessary conditions to guarantee the existence of Cld[0,1]𝕋 as well as CldΔ[0,1]𝕋 positive solutions for a class of singular boundary value problems on time scales. The results significantly extend and improve many known results for both the continuous case and more general time scales. We illustrate our results by one example.

  19. The large discretization step method for time-dependent partial differential equations

    Science.gov (United States)

    Haras, Zigo; Taasan, Shlomo

    1995-01-01

    A new method for the acceleration of linear and nonlinear time dependent calculations is presented. It is based on the Large Discretization Step (LDS) approximation, defined in this work, which employs an extended system of low accuracy schemes to approximate a high accuracy discrete approximation to a time dependent differential operator. Error bounds on such approximations are derived. These approximations are efficiently implemented in the LDS methods for linear and nonlinear hyperbolic equations, presented here. In these algorithms the high and low accuracy schemes are interpreted as the same discretization of a time dependent operator on fine and coarse grids, respectively. Thus, a system of correction terms and corresponding equations are derived and solved on the coarse grid to yield the fine grid accuracy. These terms are initialized by visiting the fine grid once in many coarse grid time steps. The resulting methods are very general, simple to implement and may be used to accelerate many existing time marching schemes.

  20. The question of the existence of God in the book of Stephen Hawking: A brief history of time

    OpenAIRE

    Driessen, A.; Driessen, A; Suarez, A.

    1997-01-01

    The continuing interest in the book of S. Hawking "A Brief History of Time" makes a philosophical evaluation of the content highly desirable. As will be shown, the genre of this work can be identified as a speciality in philosophy, namely the proof of the existence of God. In this study an attempt is given to unveil the philosophical concepts and steps that lead to the final conclusions, without discussing in detail the remarkable review of modern physical theories. In order to clarify these ...

  1. Travel Times for Screening Mammography: Impact of Geographic Expansion by a Large Academic Health System.

    Science.gov (United States)

    Rosenkrantz, Andrew B; Liang, Yu; Duszak, Richard; Recht, Michael P

    2017-09-01

    This study aims to assess the impact of off-campus facility expansion by a large academic health system on patient travel times for screening mammography. Screening mammograms performed from 2013 to 2015 and associated patient demographics were identified using the NYU Langone Medical Center Enterprise Data Warehouse. During this time, the system's number of mammography facilities increased from 6 to 19, reflecting expansion beyond Manhattan throughout the New York metropolitan region. Geocoding software was used to estimate driving times from patients' homes to imaging facilities. For 147,566 screening mammograms, the mean estimated patient travel time was 19.9 ± 15.2 minutes. With facility expansion, travel times declined significantly (P travel times between such subgroups. However, travel times to pre-expansion facilities remained stable (initial: 26.8 ± 18.9 minutes, final: 26.7 ± 18.6 minutes). Among women undergoing mammography before and after expansion, travel times were shorter for the postexpansion mammogram in only 6.3%, but this rate varied significantly (all P travel burden and reduce travel time variation among sociodemographic populations. Nonetheless, existing patients strongly tend to return to established facilities despite potentially shorter travel time locations, suggesting strong site loyalty. Variation in travel times likely relates to various factors other than facility proximity. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  2. Parallel time domain solvers for electrically large transient scattering problems

    KAUST Repository

    Liu, Yang; Yucel, Abdulkadir; Bagcý , Hakan; Michielssen, Eric

    2014-01-01

    scattering from perfect electrically conducting objects are obtained by enforcing electric field boundary conditions and implicitly time advance electric surface current densities by iteratively solving sparse systems of equations at all time steps. Contrary

  3. Signal existence verification (SEV) for GPS low received power signal detection using the time-frequency approach.

    Science.gov (United States)

    Jan, Shau-Shiun; Sun, Chih-Cheng

    2010-01-01

    The detection of low received power of global positioning system (GPS) signals in the signal acquisition process is an important issue for GPS applications. Improving the miss-detection problem of low received power signal is crucial, especially for urban or indoor environments. This paper proposes a signal existence verification (SEV) process to detect and subsequently verify low received power GPS signals. The SEV process is based on the time-frequency representation of GPS signal, and it can capture the characteristic of GPS signal in the time-frequency plane to enhance the GPS signal acquisition performance. Several simulations and experiments are conducted to show the effectiveness of the proposed method for low received power signal detection. The contribution of this work is that the SEV process is an additional scheme to assist the GPS signal acquisition process in low received power signal detection, without changing the original signal acquisition or tracking algorithms.
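
    The SEV process itself is defined in the cited paper; the snippet below is only a generic numpy illustration of the underlying time-frequency idea, namely that a weak narrow-band component which is invisible in a single snapshot becomes detectable after averaging a spectrogram over time. The sampling rate, intermediate frequency, amplitude and threshold are invented values.

        import numpy as np

        fs = 4.0e6                        # sampling rate in Hz (assumed)
        t = np.arange(0, 0.01, 1 / fs)    # 10 ms of received data
        f_if = 1.25e6                     # assumed intermediate frequency of the signal
        amp = 0.2                         # weak amplitude relative to unit-variance noise

        rng = np.random.default_rng(0)
        x = amp * np.cos(2 * np.pi * f_if * t) + rng.standard_normal(t.size)

        # time-frequency representation: split into windows and FFT each one
        win = 4096
        frames = x[: (x.size // win) * win].reshape(-1, win) * np.hanning(win)
        spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2       # power spectrogram

        # averaging over time concentrates the weak line above the noise floor
        avg = spec.mean(axis=0)
        freqs = np.fft.rfftfreq(win, 1 / fs)
        noise_floor = np.median(avg)
        detected = avg.max() > 8 * noise_floor                # ad-hoc verification threshold
        print(f"peak at {freqs[np.argmax(avg)]:.0f} Hz, detected: {detected}")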

  4. A divide-and-conquer algorithm for large-scale de novo transcriptome assembly through combining small assemblies from existing algorithms.

    Science.gov (United States)

    Sze, Sing-Hoi; Parrott, Jonathan J; Tarone, Aaron M

    2017-12-06

    While the continued development of high-throughput sequencing has facilitated studies of entire transcriptomes in non-model organisms, the incorporation of an increasing amount of RNA-Seq libraries has made de novo transcriptome assembly difficult. Although algorithms that can assemble a large amount of RNA-Seq data are available, they are generally very memory-intensive and can only be used to construct small assemblies. We develop a divide-and-conquer strategy that allows these algorithms to be utilized, by subdividing a large RNA-Seq data set into small libraries. Each individual library is assembled independently by an existing algorithm, and a merging algorithm is developed to combine these assemblies by picking a subset of high quality transcripts to form a large transcriptome. When compared to existing algorithms that return a single assembly directly, this strategy achieves comparable or increased accuracy as memory-efficient algorithms that can be used to process a large amount of RNA-Seq data, and comparable or decreased accuracy as memory-intensive algorithms that can only be used to construct small assemblies. Our divide-and-conquer strategy allows memory-intensive de novo transcriptome assembly algorithms to be utilized to construct large assemblies.
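
    A schematic outline of the divide-and-conquer strategy is given below. Here assemble_library stands for whatever existing memory-intensive assembler is run on each small group of libraries, and the k-mer signature and "keep the longest" rule are simplified placeholders for the merging algorithm described in the paper; none of the names or thresholds come from the publication.

        from typing import Callable, Dict, List

        def divide_and_conquer_assembly(
                read_files: List[str],
                chunk_size: int,
                assemble_library: Callable[[List[str]], Dict[str, str]],
                min_length: int = 300) -> Dict[str, str]:
            """Split RNA-Seq libraries into small chunks, assemble each independently,
            then merge by keeping, per cluster of near-identical transcripts, the
            single longest one.  Placeholder logic throughout."""
            # 1. divide: small groups of libraries that fit in memory for the assembler
            chunks = [read_files[i:i + chunk_size]
                      for i in range(0, len(read_files), chunk_size)]

            # 2. conquer: run the existing (memory-intensive) assembler per chunk
            candidate: Dict[str, str] = {}
            for idx, chunk in enumerate(chunks):
                for name, seq in assemble_library(chunk).items():
                    candidate[f"chunk{idx}_{name}"] = seq

            # 3. merge: crude redundancy filter -- keep the longest transcript among
            #    those sharing a k-mer signature (a stand-in for real clustering)
            def signature(seq: str, k: int = 25) -> str:
                return min(seq[i:i + k] for i in range(0, max(1, len(seq) - k + 1), k))

            best: Dict[str, tuple] = {}
            for name, seq in candidate.items():
                if len(seq) < min_length:
                    continue
                sig = signature(seq)
                if sig not in best or len(seq) > len(best[sig][1]):
                    best[sig] = (name, seq)
            return {name: seq for name, seq in best.values()}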

  5. A mathematical model of a steady flow through the Kaplan turbine - The existence of a weak solution in the case of an arbitrarily large inflow

    Science.gov (United States)

    Neustupa, Tomáš

    2017-07-01

    The paper presents the mathematical model of a steady 2-dimensional viscous incompressible flow through a radial blade machine. The corresponding boundary value problem is studied in the rotating frame. We provide the classical and weak formulation of the problem. Using a special form of the so called "artificial" or "natural" boundary condition on the outflow, we prove the existence of a weak solution for an arbitrarily large inflow.

  6. Existence and global exponential stability of periodic solution of memristor-based BAM neural networks with time-varying delays.

    Science.gov (United States)

    Li, Hongfei; Jiang, Haijun; Hu, Cheng

    2016-03-01

    In this paper, we investigate a class of memristor-based BAM neural networks with time-varying delays. Under the framework of Filippov solutions, the boundedness and ultimate boundedness of solutions of memristor-based BAM neural networks are guaranteed by the chain rule and inequality techniques. Moreover, a new method involving a Yoshizawa-like theorem is favorably employed to acquire the existence of a periodic solution. By applying the theory of set-valued maps and functional differential inclusions, an available Lyapunov functional and some new testable algebraic criteria are derived for ensuring the uniqueness and global exponential stability of the periodic solution of memristor-based BAM neural networks. The obtained results expand and complement some previous work on memristor-based BAM neural networks. Finally, a numerical example is provided to show the applicability and effectiveness of our theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. On the existence of physiological age based on functional hierarchy: a formal definition related to time irreversibility.

    Science.gov (United States)

    Chauvet, Gilbert A

    2006-09-01

    The present approach to aging and time irreversibility is a consequence of the theory of functional organization that I have developed and presented over recent years (see e.g., Ref. 11). It is based on the effect of physically small and numerous perturbations, known as fluctuations, of structural units on the dynamics of the biological system during its adult life. For such a highly regulated biological system, a simple realistic hypothesis, namely time-optimum regulation between the levels of organization, leads to the existence of an internal age for the biological system and to the time-irreversibility associated with aging. Thus, although specific genes control aging, the time-irreversibility of the system may be shown to be due to the degradation of physiological functions. In other words, I suggest that for a biological system, the nature of time is specific and is an expression of its highly regulated integration. An internal physiological age reflects the irreversible course of a living organism towards death because of the irreversible course of physiological functions towards dysfunction, due to the irreversible changes in the regulatory processes. Following the works of Prigogine and his colleagues in physics, and more generally in the field of non-integrable dynamical systems (theorem of Poincaré-Misra), I have stated this problem in terms of the relationship between the macroscopic irreversibility of the functional organization and the basic mechanisms of regulation at the lowest "microscopic" level, i.e., the molecular, lowest level of organization. The neuron-neuron elementary functional interaction is proposed as an illustration of the method to define aging in the nervous system.

  8. Real time simulation of large systems on mini-computer

    International Nuclear Information System (INIS)

    Nakhle, Michel; Roux, Pierre.

    1979-01-01

    Most simulation languages will only accept an explicit formulation of differential equations, and logical variables hold no special status therein. The integration step of the usual methods is limited by the smallest time constant of the model submitted. The NEPTUNIX 2 simulation software has a language that accepts implicit equations and an integration method whose variable step is not limited by the time constants of the model. This, together with extensive optimization of the time and memory resources of the generated code, makes NEPTUNIX 2 a basic tool for simulation on mini-computers. Since the logical variables are specific entities under centralized control, correct processing of discontinuities and synchronization with a real process are feasible. NEPTUNIX 2 is the industrial version of NEPTUNIX 1. [fr]

  9. Diamond detector time resolution for large angle tracks

    Energy Technology Data Exchange (ETDEWEB)

    Chiodini, G., E-mail: chiodini@le.infn.it [INFN - Sezione di Lecce (Italy); Fiore, G.; Perrino, R. [INFN - Sezione di Lecce (Italy); Pinto, C.; Spagnolo, S. [INFN - Sezione di Lecce (Italy); Dip. di Matematica e Fisica “Ennio De Giorgi”, Uni. del Salento (Italy)

    2015-10-01

    The applications which have stimulated the greatest interest in diamond sensors are related to detectors close to particle beams, and therefore in environments with high radiation levels (beam monitors, luminosity measurement, detection of primary and secondary-interaction vertices). Our aim is to extend the studies performed so far by developing the technical advances needed to prove the competitiveness of this technology, in terms of time resolution, with respect to more usual ones, which do not guarantee the required tolerance to high radiation doses. In view of these goals, measurements of diamond detector time resolution with tracks incident at different angles are discussed. In particular, preliminary testbeam results obtained with 5 GeV electrons and polycrystalline diamond strip detectors are shown.

  10. Time-Efficient Cloning Attacks Identification in Large-Scale RFID Systems

    Directory of Open Access Journals (Sweden)

    Ju-min Zhao

    2017-01-01

    Full Text Available Radio Frequency Identification (RFID) is an emerging technology for electronic labeling of objects for the purpose of automatically identifying, categorizing, locating, and tracking the objects. But in their current form RFID systems are susceptible to cloning attacks that seriously threaten RFID applications but are hard to prevent. Existing protocols aim at detecting whether there are cloning attacks in single-reader RFID systems. In this paper, we investigate cloning attack identification in the multireader scenario and first propose a time-efficient protocol, called the time-efficient Cloning Attacks Identification Protocol (CAIP), to identify all cloned tags in multireader RFID systems. We evaluate the performance of CAIP through extensive simulations. The results show that CAIP can identify all the cloned tags in large-scale RFID systems fairly fast with the required accuracy.

  11. Interactive exploration of large-scale time-varying data using dynamic tracking graphs

    KAUST Repository

    Widanagamaachchi, W.

    2012-10-01

    Exploring and analyzing the temporal evolution of features in large-scale time-varying datasets is a common problem in many areas of science and engineering. One natural representation of such data is tracking graphs, i.e., constrained graph layouts that use one spatial dimension to indicate time and show the "tracks" of each feature as it evolves, merges or disappears. However, for practical data sets creating the corresponding optimal graph layouts that minimize the number of intersections can take hours to compute with existing techniques. Furthermore, the resulting graphs are often unmanageably large and complex even with an ideal layout. Finally, due to the cost of the layout, changing the feature definition, e.g. by changing an iso-value, or analyzing properly adjusted sub-graphs is infeasible. To address these challenges, this paper presents a new framework that couples hierarchical feature definitions with progressive graph layout algorithms to provide an interactive exploration of dynamically constructed tracking graphs. Our system enables users to change feature definitions on-the-fly and filter features using arbitrary attributes while providing an interactive view of the resulting tracking graphs. Furthermore, the graph display is integrated into a linked view system that provides a traditional 3D view of the current set of features and allows a cross-linked selection to enable a fully flexible spatio-temporal exploration of data. We demonstrate the utility of our approach with several large-scale scientific simulations from combustion science. © 2012 IEEE.
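
    Stripped of the layout and interaction machinery, the core of a tracking graph is simple: features are sets of cells per time step, and an edge connects features in consecutive steps whenever they overlap, so merges and splits appear as nodes with several incident edges. The toy sketch below illustrates only that construction; it is not the progressive layout algorithm of the paper.

        from typing import Dict, List, Set, Tuple

        def build_tracking_graph(
                steps: List[Dict[str, Set[int]]]) -> List[Tuple[int, str, int, str]]:
            """Edges (t, feature_at_t, t+1, feature_at_t+1) between overlapping features.

            steps[t] maps a feature id to the set of grid cells it occupies at time t,
            so merges and splits show up as nodes with several incident edges."""
            edges = []
            for t in range(len(steps) - 1):
                for fa, cells_a in steps[t].items():
                    for fb, cells_b in steps[t + 1].items():
                        if cells_a & cells_b:          # spatial overlap -> same track
                            edges.append((t, fa, t + 1, fb))
            return edges

        # two features that merge into one between step 0 and step 1
        steps = [{"A": {1, 2, 3}, "B": {10, 11}},
                 {"C": {2, 3, 10}}]
        print(build_tracking_graph(steps))   # [(0, 'A', 1, 'C'), (0, 'B', 1, 'C')]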

  12. Large volume recycling of oceanic lithosphere over short time scales: geochemical constraints from the Caribbean Large Igneous Province

    Science.gov (United States)

    Hauff, F.; Hoernle, K.; Tilton, G.; Graham, D. W.; Kerr, A. C.

    2000-01-01

    Oceanic flood basalts are poorly understood, short-term expressions of highly increased heat flux and mass flow within the convecting mantle. The uniqueness of the Caribbean Large Igneous Province (CLIP, 92-74 Ma) with respect to other Cretaceous oceanic plateaus is its extensive sub-aerial exposures, providing an excellent basis to investigate the temporal and compositional relationships within a starting plume head. We present major element, trace element and initial Sr-Nd-Pb isotope composition of 40 extrusive rocks from the Caribbean Plateau, including onland sections in Costa Rica, Colombia and Curaçao as well as DSDP Sites in the Central Caribbean. Even though the lavas were erupted over an area of ~3×10^6 km², the majority have strikingly uniform incompatible element patterns (La/Yb = 0.96±0.16, n = 64 out of 79 samples, 2σ) and initial Nd-Pb isotopic compositions (e.g. (143Nd/144Nd)in = 0.51291±3, εNd(i) = 7.3±0.6, (206Pb/204Pb)in = 18.86±0.12, n = 54 out of 66, 2σ). Lavas with endmember compositions have only been sampled at the DSDP Sites, Gorgona Island (Colombia) and the 65-60 Ma accreted Quepos and Osa igneous complexes (Costa Rica) of the subsequent hotspot track. Despite the relatively uniform composition of most lavas, linear correlations exist between isotope ratios and between isotope and highly incompatible trace element ratios. The Sr-Nd-Pb isotope and trace element signatures of the chemically enriched lavas are compatible with derivation from recycled oceanic crust, while the depleted lavas are derived from a highly residual source. This source could represent either oceanic lithospheric mantle left after ocean crust formation or gabbros with interlayered ultramafic cumulates of the lower oceanic crust. High 3He/4He in olivines of enriched picrites at Quepos are ~12 times higher than the atmospheric ratio, suggesting that the enriched component may have once resided in the lower mantle. Evaluation of the Sm-Nd and U-Pb isotope systematics on

  13. Use of primary corticosteroid injection in the management of plantar fasciopathy: is it time to challenge existing practice?

    Science.gov (United States)

    Kirkland, Paul; Beeson, Paul

    2013-01-01

    Plantar fasciopathy (PF) is characterized by degeneration of the fascia at the calcaneal enthesis. It is a common cause of foot pain, accounting for 90% of clinical presentations of heel pathology. In 2009-2010, 9.3 million working days were lost in England due to musculoskeletal disorders, with 2.4 million of those attributable to lower-limb disorders, averaging 16.3 lost working days per case. Numerous studies have attempted to establish the short- and long-term clinical efficacy of corticosteroid injections in the management of PF. Earlier studies have not informed clinical practice. As the research base has developed, evidence has emerged supporting clinical efficacy. With diverse opinions surrounding the etiology and efficacy debate, there does not seem to be a consensus of opinion on a common treatment pathway. For example, in England, the National Institute for Clinical Health and Excellence does not publish strategic guidance for clinical practice. Herein, we review and evaluate core literature that examines the clinical efficacy of corticosteroid injection as a treatment for PF. Outcome measures were wide ranging but largely yielded results supportive of the short- and long-term benefits of this modality. The analysis also looked to establish, where possible, "proof of concept." This article provides evidence supporting the clinical efficacy of corticosteroid injections, in particular those guided by imaging technology. The evidence challenges existing orthodoxy, which marginalizes this treatment as a secondary option. This challenge is supported by recently revised guidelines published by the American College of Foot and Ankle Surgeons advocating corticosteroid injection as a primary treatment option.

  14. Prevalence of HIV among MSM in Europe: comparison of self-reported diagnoses from a large scale internet survey and existing national estimates

    Directory of Open Access Journals (Sweden)

    Marcus Ulrich

    2012-11-01

    Full Text Available Abstract Background Country level comparisons of HIV prevalence among men having sex with men (MSM) are challenging for a variety of reasons, including differences in the definition and measurement of the denominator group, recruitment strategies and the HIV detection methods. To assess their comparability, self-reported data on HIV diagnoses in a 2010 pan-European MSM internet survey (EMIS) were compared with pre-existing estimates of HIV prevalence in MSM from a variety of European countries. Methods The first pan-European survey of MSM recruited more than 180,000 men from 38 countries across Europe and included questions on the year and result of last HIV test. HIV prevalence as measured in EMIS was compared with national estimates of HIV prevalence based on studies using biological measurements or modelling approaches to explore the degree of agreement between different methods. Existing estimates were taken from Dublin Declaration Monitoring Reports or UNAIDS country fact sheets, and were verified by contacting the nominated contact points for HIV surveillance in EU/EEA countries. Results The EMIS self-reported measurements of HIV prevalence were strongly correlated with existing estimates based on biological measurement and modelling studies using surveillance data (R² = 0.70 resp. 0.72). In most countries HIV positive MSM appeared disproportionately likely to participate in EMIS, and prevalences as measured in EMIS are approximately twice the estimates based on existing estimates. Conclusions Comparison of diagnosed HIV prevalence as measured in EMIS with pre-existing estimates based on biological measurements using varied sampling frames (e.g. Respondent Driven Sampling, Time and Location Sampling) demonstrates a high correlation and suggests similar selection biases from both types of studies. For comparison with modelled estimates the self-selection bias of the Internet survey with increased participation of men diagnosed with HIV has to be

  15. Finite-Time Stability of Large-Scale Systems with Interval Time-Varying Delay in Interconnection

    Directory of Open Access Journals (Sweden)

    T. La-inchua

    2017-01-01

    Full Text Available We investigate the finite-time stability of a class of nonlinear large-scale systems with interval time-varying delays in interconnection. Time-delay functions are continuous but not necessarily differentiable. Based on Lyapunov stability theory and a new integral bounding technique, finite-time stability of large-scale systems with interval time-varying delays in interconnection is derived. The finite-time stability criteria are delay-dependent and are given in terms of linear matrix inequalities, which can be solved by various available algorithms. Numerical examples are given to illustrate the effectiveness of the proposed method.
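
    The notion of finite-time stability used in such works is, in its basic form, a bound on trajectories over a fixed horizon; the statement below is the standard definition, given here only for orientation (the large-scale, delayed version in the paper carries additional interconnection and delay terms):

        $$ x_0^{\top} R\, x_0 \le c_1 \;\Longrightarrow\; x^{\top}(t)\, R\, x(t) < c_2 \quad \text{for all } t \in [0, T], $$

    for a given weighting matrix R ≻ 0 and scalars 0 < c1 < c2; the system is then said to be finite-time stable with respect to (c1, c2, T, R).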

  16. Use of a large time-compensated scintillation detector in neutron time-of-flight measurements

    International Nuclear Information System (INIS)

    Goodman, C.D.

    1979-01-01

    A scintillator for neutron time-of-flight measurements is positioned at a desired angle with respect to the neutron beam, and as a function of the energy thereof, such that the sum of the transit times of the neutrons and photons in the scintillator is substantially independent of the points of scintillation within the scintillator. Extrapolated zero timing is employed rather than the usual constant fraction timing. As a result, a substantially larger scintillator can be employed, which substantially increases the data rate and shortens the experiment time. 3 claims

  17. The timing of ostrich existence in Central Asia: AMS 14C age of eggshells from Mongolia and southern Siberia (a pilot study)

    International Nuclear Information System (INIS)

    Kurochkin, Evgeny N.; Kuzmin, Yaroslav V.; Antoshchenko-Olenev, Igor V.; Zabelin, Vladimir I.; Krivonogov, Sergey K.; Nohrina, Tatiana I.; Lbova, Ludmila V.; Burr, G.S.; Cruz, Richard J.

    2010-01-01

    The presence of the Asiatic ostrich in Central Asia in later Cenozoic time is well documented; nevertheless, only a few direct age determinations existed until recently. We performed AMS 14C dating of ostrich eggshells found in Mongolia, Transbaikal, and Tuva. It shows that ostriches existed throughout the second part of the Late Pleistocene, until the Late Glacial time (ca. 13,000-10,100 BP). It seems that the Asiatic ostrich went extinct in Central Asia just before or even in the Holocene.

  18. Global low-energy weak solution and large-time behavior for the compressible flow of liquid crystals

    Science.gov (United States)

    Wu, Guochun; Tan, Zhong

    2018-06-01

    In this paper, we consider the weak solution of the simplified Ericksen-Leslie system modeling compressible nematic liquid crystal flows in R3. When the initial data are of small energy and the initial density is positive and essentially bounded, we prove the existence of a global weak solution in R3. The large-time behavior of a global weak solution is also established.

  19. Treatment time reduction for large thermal lesions by using a multiple 1D ultrasound phased array system

    International Nuclear Information System (INIS)

    Liu, H.-L.; Chen, Y.-Y.; Yen, J.-Y.; Lin, W.-L.

    2003-01-01

    To generate large thermal lesions in ultrasound thermal therapy, cooling intermissions are usually introduced during the treatment to prevent near-field heating, which leads to a long treatment time. A possible strategy to shorten the total treatment time is to eliminate the cooling intermissions. In this study, two methods for reducing power accumulation in the near field, power optimization and acoustic window enlargement, are combined to investigate the feasibility of continuously heating a large target region (maximally 3.2 x 3.2 x 3.2 cm³). A multiple 1D ultrasound phased array system generates the foci to scan the target region. Simulations show that the target region can be successfully heated without cooling and that no near-field heating occurs. Moreover, because there is no cooling time during the heating sessions, the total treatment time is significantly reduced to only several minutes, compared to the existing several hours.

  20. A time-focusing Fourier chopper time-of-flight diffractometer for large scattering angles

    International Nuclear Information System (INIS)

    Heinonen, R.; Hiismaeki, P.; Piirto, A.; Poeyry, H.; Tiitta, A.

    1975-01-01

    A high-resolution time-of-flight diffractometer utilizing time-focusing principles in conjunction with a Fourier chopper is under construction at Otaniemi. The design is an improved version of a test facility which has been used for single-crystal and powder diffraction studies with promising results. A polychromatic neutron beam from a radial beam tube of the FiR 1 reactor, collimated to dia. 70 mm, is modulated by a Fourier chopper (dia. 400 mm) which is placed inside a massive boron-loaded particle-board shielding of 900 mm wall thickness. A thin flat sample (typically 5 mm x dia. 80 mm) is mounted on a turntable at a distance of 4 m from the chopper, and the diffracted neutrons are counted by a scintillation detector at 4 m distance from the sample. The scattering angle 2θ can be chosen between 90° and 160° to cover Bragg angles from 45° up to 80°. The angle between the chopper disc and the incident beam direction, as well as the angle of the detector surface relative to the diffracted beam, can be adjusted between 45° and 90° in order to accomplish time focusing. In our set-up, with equal flight paths from chopper to sample and from sample to detector, the time-focusing conditions are fulfilled when the chopper and the detector are parallel to the sample plane. The time-of-flight spectrum of the scattered neutrons is measured by the reverse time-of-flight method in which, instead of neutrons, one essentially records the modulation function of the chopper during constant periods preceding each detected neutron. With a Fourier chopper whose speed is varied in a suitable way, the method is equivalent to the conventional Fourier method, but the spectrum is obtained directly without any off-line calculations. The new diffractometer is operated automatically by a Super Nova computer which not only accumulates the synthesized diffraction pattern but also controls the chopper speed according to the modulation frequency sweep chosen by the user to obtain a

  1. Amplitude and rise time compensated timing optimized for large semiconductor detectors

    International Nuclear Information System (INIS)

    Kozyczkowski, J.J.; Bialkowski, J.

    1976-01-01

    The ARC timing described has excellent timing properties even when using a wide energy range, e.g. from 10 keV to over 1 MeV. The detector signal from a preamplifier is accepted directly by the unit, as a timing filter amplifier with a sensitivity of 1 mV is incorporated. The adjustable rise-time rejection feature makes it possible to achieve a good prompt time spectrum with a symmetrical exponential shape down to less than 1/100 of the peak value. A complete block diagram of the unit is given together with results of extensive tests of its performance. For example, the time spectrum for (1330±20) keV of ⁶⁰Co taken with a 43 cm³ Ge(Li) detector has the following parameters: fwhm = 2.2 ns, fwtm = 4.4 ns and fw(0.01)m = 7.6 ns, and for (50±10) keV of ²²Na the following was obtained: fwhm = 10.8 ns, fwtm = 21.6 ns and fw(0.01)m = 34.6 ns. In another experiment with two fast plastic scintillators (NE 102A) and using a 20% dynamic energy range the following was measured: fwhm = 280 ps, fwtm = 470 ps and fw(0.01)m = 70 ps. (Auth.)

  2. The part-time wage penalty in European countries: how large is it for men?

    OpenAIRE

    O'Dorchai, Sile Padraigin; Plasman, Robert; Rycx, François

    2007-01-01

    Economic theory advances a number of reasons for the existence of a wage gap between part-time and full-time workers. Empirical work has concentrated on the wage effects of part-time work for women. For men, much less empirical evidence exists, mainly because of a lack of data. In this paper, we take advantage of access to unique harmonised matched employer-employee data (i.e. the 1995 European Structure of Earnings Survey) to investigate the magnitude and sources of the part-time wage penalty ...

  3. LITERATURE SURVEY ON EXISTING POWER SAVING ROUTING METHODS AND TECHNIQUES FOR INCREASING NETWORK LIFE TIME IN MANET

    Directory of Open Access Journals (Sweden)

    K Mariyappan

    2017-06-01

    Full Text Available A mobile ad hoc network (MANET) is a special type of wireless network in which a collection of wireless mobile devices (also called nodes) dynamically forms a temporary network without the need for any pre-existing network infrastructure or centralized administration. Currently, mobile ad hoc networks (MANETs) play a significant role in university campuses, advertisement, emergency response, disaster recovery, military use on battlefields, disaster management scenarios, sensor networks, and so on. However, wireless network devices, especially in ad hoc networks, are typically battery-powered. Thus, energy efficiency is a critical issue for battery-powered mobile devices in ad hoc networks. This is because failure of a node or link forces re-routing and the establishment of a new path from source to destination, which creates extra energy consumption at the nodes and sparse network connectivity, making network partition more likely. Routing based on energy-related parameters is one of the important solutions to extend the lifetime of the nodes and reduce the energy consumption of the network. In this paper, a detailed literature survey of existing energy-efficient routing methods is presented, and the methods are compared for their performance under different conditions. The results show that both the broadcast schemes and the energy-aware metrics have great potential to overcome the broadcast storm problem associated with flooding. However, the performance of these approaches relies on either the appropriate selection of the broadcast decision parameter or an energy-efficient path. In the earlier proposed broadcast methods, the forwarding probability is selected based on a fixed probability or the number of neighbours regardless of a node's battery capacity, whereas in energy-aware schemes an energy-inefficient node could be part of an established path. Therefore, in an attempt to remedy the paucity of research and to address the gaps identified in this area, a study

  4. On the Existence and Robustness of Steady Position-Momentum Correlations for Time-Dependent Quadratic Systems

    Directory of Open Access Journals (Sweden)

    M. Gianfreda

    2012-01-01

    Full Text Available We discuss conditions giving rise to stationary position-momentum correlations among quantum states in the Fock and coherent bases associated with the natural invariant for the one-dimensional time-dependent quadratic Hamiltonian operators, such as the Kanai-Caldirola Hamiltonian. We also discuss some basic features, such as quantum decoherence, of the wave functions resulting from the corresponding quantum dynamics of these systems that exhibit no time dependence in their quantum correlations. In particular, steady statistical momentum averages are seen over well-defined time intervals in the evolution of a linear superposition of the basis states of modified exponentially damped mass systems.

  5. Ancient divergence time estimates in Eutropis rugifera support the existence of Pleistocene barriers on the exposed Sunda Shelf

    Directory of Open Access Journals (Sweden)

    Benjamin R. Karin

    2017-10-01

    Full Text Available Episodic sea level changes that repeatedly exposed and inundated the Sunda Shelf characterize the Pleistocene. Available evidence points to a more xeric central Sunda Shelf during periods of low sea levels, and despite the broad land connections that persisted during this time, some organisms are assumed to have faced barriers to dispersal between land-masses on the Sunda Shelf. Eutropis rugifera is a secretive, forest adapted scincid lizard that ranges across the Sunda Shelf. In this study, we sequenced one mitochondrial (ND2 and four nuclear (BRCA1, BRCA2, RAG1, and MC1R markers and generated a time-calibrated phylogeny in BEAST to test whether divergence times between Sundaic populations of E. rugifera occurred during Pleistocene sea-level changes, or if they predate the Pleistocene. We find that E. rugifera shows pre-Pleistocene divergences between populations on different Sundaic land-masses. The earliest divergence within E. rugifera separates the Philippine samples from the Sundaic samples approximately 16 Ma; the Philippine populations thus cannot be considered conspecific with Sundaic congeners. Sundaic populations diverged approximately 6 Ma, and populations within Borneo from Sabah and Sarawak separated approximately 4.5 Ma in the early Pliocene, followed by further cladogenesis in Sarawak through the Pleistocene. Divergence of peninsular Malaysian populations from the Mentawai Archipelago occurred approximately 5 Ma. Separation among island populations from the Mentawai Archipelago likely dates to the Pliocene/Pleistocene boundary approximately 3.5 Ma, and our samples from peninsular Malaysia appear to coalesce in the middle Pleistocene, about 1 Ma. Coupled with the monophyly of these populations, these divergence times suggest that despite consistent land-connections between these regions throughout the Pleistocene E. rugifera still faced barriers to dispersal, which may be a result of environmental shifts that accompanied the

  6. Replicability of time-varying connectivity patterns in large resting state fMRI samples.

    Science.gov (United States)

    Abrol, Anees; Damaraju, Eswar; Miller, Robyn L; Stephen, Julia M; Claus, Eric D; Mayer, Andrew R; Calhoun, Vince D

    2017-12-01

    The past few years have seen an emergence of approaches that leverage temporal changes in whole-brain patterns of functional connectivity (the chronnectome). In this chronnectome study, we investigate the replicability of the human brain's inter-regional coupling dynamics during rest by evaluating two different dynamic functional network connectivity (dFNC) analysis frameworks using 7 500 functional magnetic resonance imaging (fMRI) datasets. To quantify the extent to which the emergent functional connectivity (FC) patterns are reproducible, we characterize the temporal dynamics by deriving several summary measures across multiple large, independent age-matched samples. Reproducibility was demonstrated through the existence of basic connectivity patterns (FC states) amidst an ensemble of inter-regional connections. Furthermore, application of the methods to conservatively configured (statistically stationary, linear and Gaussian) surrogate datasets revealed that some of the studied state summary measures were indeed statistically significant and also suggested that this class of null model did not explain the fMRI data fully. This extensive testing of reproducibility of similarity statistics also suggests that the estimated FC states are robust against variation in data quality, analysis, grouping, and decomposition methods. We conclude that future investigations probing the functional and neurophysiological relevance of time-varying connectivity assume critical importance.
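
    For readers unfamiliar with dFNC, the sliding-window-plus-clustering recipe that underlies "FC states" can be sketched as follows; the window length, number of states, and synthetic time courses below are illustrative assumptions, not the settings or data of the study.

```python
# Sketch of a common dFNC recipe (sliding-window correlations clustered into
# "FC states"); window length and k are illustrative, not the study's settings.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_time, n_components = 300, 10          # hypothetical component time courses
ts = rng.standard_normal((n_time, n_components))

win, step = 44, 1                        # sliding-window parameters (assumed)
windows = []
for start in range(0, n_time - win + 1, step):
    c = np.corrcoef(ts[start:start + win].T)     # windowed FC matrix
    iu = np.triu_indices(n_components, k=1)
    windows.append(c[iu])                        # keep upper triangle only
windows = np.array(windows)

k = 5                                    # number of FC states (assumed)
states = KMeans(n_clusters=k, n_init=10, random_state=0).fit(windows)
labels = states.labels_                  # state assignment of each window

# Simple summary measure: fraction of time spent in each state
occupancy = np.bincount(labels, minlength=k) / len(labels)
print("occupancy per state:", np.round(occupancy, 3))
```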

  7. Data warehousing technologies for large-scale and right-time data

    DEFF Research Database (Denmark)

    Xiufeng, Liu

    heterogeneous sources into a central data warehouse (DW) by Extract-Transform-Load (ETL) at regular time intervals, e.g., monthly, weekly, or daily. But now, this becomes challenging for large-scale data, and it is hard to meet the demands of near real-time/right-time business decisions. This thesis considers some

  8. Sex ratio and time to pregnancy: analysis of four large European population surveys

    DEFF Research Database (Denmark)

    Joffe, Mike; Bennett, James; Best, Nicky

    2007-01-01

    To test whether the secondary sex ratio (proportion of male births) is associated with time to pregnancy, a marker of fertility. Design: Analysis of four large population surveys. Setting: Denmark and the United Kingdom. Participants: 49 506 pregnancies.

  9. Fitness, work, and leisure-time physical activity and ischaemic heart disease and all-cause mortality among men with pre-existing cardiovascular disease

    DEFF Research Database (Denmark)

    Holtermann, Andreas; Mortensen, Ole Steen; Burr, Hermann

    2010-01-01

    Our aim was to study the relative impact of physical fitness, physical demands at work, and physical activity during leisure time on ischaemic heart disease (IHD) and all-cause mortality among employed men with pre-existing cardiovascular disease (CVD)....

  10. THE EXISTENCE OF THE STABILIZING SOLUTION OF THE RICCATI EQUATION ARISING IN DISCRETE-TIME STOCHASTIC ZERO SUM LQ DYNAMIC GAMES WITH PERIODIC COEFFICIENTS

    Directory of Open Access Journals (Sweden)

    Vasile Drăgan

    2017-06-01

    Full Text Available We investigate the problem of solving a discrete-time periodic generalized Riccati equation with an indefinite sign of the quadratic term. A necessary condition for the existence of a bounded and stabilizing solution of the discrete-time Riccati equation with an indefinite quadratic term is derived. The stabilizing solution is positive semidefinite and satisfies the introduced sign conditions. The proposed condition is illustrated via a numerical example.

  11. Decrease of the tunneling time and violation of the Hartman effect for large barriers

    International Nuclear Information System (INIS)

    Olkhovsky, V.S.; Zaichenko, A.K.; Petrillo, V.

    2004-01-01

    The explicit formulation of the initial conditions of the definition of the wave-packet tunneling time is proposed. This formulation takes adequately into account the irreversibility of the wave-packet space-time spreading. Moreover, it explains the violations of the Hartman effect, leading to a strong decrease of the tunneling times up to negative values for wave packets with large momentum spreads due to strong wave-packet time spreading

  12. Managing patients' wait time in specialist out-patient clinic using real-time data from existing queue management and ADT systems.

    Science.gov (United States)

    Ju, John Chen; Gan, Soon Ann; Tan Siew Wee, Justine; Huang Yuchi, Peter; Mei Mei, Chan; Wong Mei Mei, Sharon; Fong, Kam Weng

    2013-01-01

    In major cancer centers, heavy patient loads and multiple registration stations can cause significant wait times and can result in patient complaints. Real-time patient journey data and visual displays are useful tools in hospital patient queue management. This paper demonstrates how we capture patient queue data without deploying any tracking devices, and how we convert the data into useful patient journey information to understand where interventions are likely to be most effective. During system development, considerable effort was spent on resolving data discrepancies and balancing accuracy against system performance. A web-based dashboard to display real-time information and a framework for data analysis were also developed to facilitate our clinics' operation. Results show that our system can eliminate more than 95% of data capturing errors and has improved patient wait time data accuracy since it was deployed.
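
    A minimal sketch of the core computation, deriving wait times by pairing registration and call-up timestamps from event logs, is shown below; the table layout and column names are hypothetical and do not reflect the hospital's actual queue or ADT schema.

```python
# Minimal sketch of deriving wait times from queue-system event logs; the
# table layout and column names are hypothetical, not the hospital's schema.
import pandas as pd

queue_events = pd.DataFrame({
    "patient_id": [101, 102, 103],
    "registered_at": pd.to_datetime(
        ["2013-01-07 08:05", "2013-01-07 08:12", "2013-01-07 08:20"]),
    "called_at": pd.to_datetime(
        ["2013-01-07 08:40", "2013-01-07 08:55", "2013-01-07 09:10"]),
})

# Wait time = time between registration and being called to the consult room.
queue_events["wait_min"] = (
    (queue_events["called_at"] - queue_events["registered_at"])
    .dt.total_seconds() / 60.0
)

# Aggregate for a dashboard: median and 95th percentile wait for the session.
print(queue_events["wait_min"].describe(percentiles=[0.5, 0.95]))
```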

  13. Time dispersion in large plastic scintillation neutron detector [Paper No.: B3]

    International Nuclear Information System (INIS)

    De, A.; Dasgupta, S.S.; Sen, D.

    1993-01-01

    The time dispersion seen by the photomultiplier (PM) tube in a large plastic scintillation neutron detector, and the light collection mechanism of the same, have been computed, showing that the time dispersion (TD) seen by the PM tube does not necessarily increase with increasing incident neutron energy, in contrast to the usual finding that TD increases with increasing energy. (author). 8 refs., 4 figs

  14. Asymptotic description of two metastable processes of solidification for the case of large relaxation time

    International Nuclear Information System (INIS)

    Omel'yanov, G.A.

    1995-07-01

    The non-isothermal Cahn-Hilliard equations in the n-dimensional case (n = 2, 3) are considered. The interaction length is proportional to a small parameter, and the relaxation time is proportional to a constant. The asymptotic solutions describing two metastable processes are constructed and justified. The soliton type solution describes the first stage of separation in alloy, when a set of "superheated liquid" appears inside the "solid" part. The Van der Waals type solution describes the free interface dynamics for large time. The smoothness of temperature is established for large time and the Mullins-Sekerka problem describing the free interface is derived. (author). 46 refs.

  15. Urban Freight Management with Stochastic Time-Dependent Travel Times and Application to Large-Scale Transportation Networks

    Directory of Open Access Journals (Sweden)

    Shichao Sun

    2015-01-01

    Full Text Available This paper addresses the vehicle routing problem (VRP) in large-scale urban transportation networks with stochastic time-dependent (STD) travel times. The subproblem, namely how to find the optimal path connecting any pair of customer nodes in an STD network, was solved through a robust approach that does not require the probability distributions of link travel times. Based on that, the proposed STD-VRP model can be converted into a normal time-dependent VRP (TD-VRP), and algorithms for such TD-VRPs can be introduced to obtain the solution. Numerical experiments were conducted to address STD-VRPTW instances of practical size on a real-world urban network, demonstrated here on the road network of Shenzhen, China. The stochastic time-dependent link travel times of the network were calibrated with historical floating car data. A route construction algorithm was applied to solve the STD problem in 4 delivery scenarios efficiently. The computational results showed that the proposed STD-VRPTW model can improve the level of customer service by satisfying the time-window constraint under any circumstances. The improvement can be very significant, especially for large-scale network delivery tasks, with no further increase in cost or environmental impact.

  16. High resolution time-of-flight measurements in small and large scintillation counters

    International Nuclear Information System (INIS)

    D'Agostini, G.; Marini, G.; Martellotti, G.; Massa, F.; Rambaldi, A.; Sciubba, A.

    1981-01-01

    In a test run, the experimental time-of-flight resolution was measured for several different scintillation counters of small (10 x 5 cm²) and large (100 x 15 cm² and 75 x 25 cm²) area. The design characteristics were decided on the basis of theoretical Monte Carlo calculations. We report results using twisted, fish-tail, and rectangular light-guides and different types of scintillator (NE 114 and PILOT U). Time resolutions up to ≈130-150 ps fwhm for the small counters and up to ≈280-300 ps fwhm for the large counters were obtained. The spatial resolution from time measurements in the large counters is also reported. The results of Monte Carlo calculations on the type of scintillator, the shape and dimensions of the light-guides, and the nature of the external wrapping surfaces - to be used in order to optimize the time resolution - are also summarized. (orig.)

  17. The effect of large decoherence on mixing time in continuous-time quantum walks on long-range interacting cycles

    Energy Technology Data Exchange (ETDEWEB)

    Salimi, S; Radgohar, R, E-mail: shsalimi@uok.ac.i, E-mail: r.radgohar@uok.ac.i [Faculty of Science, Department of Physics, University of Kurdistan, Pasdaran Ave, Sanandaj (Iran, Islamic Republic of)

    2010-01-28

    In this paper, we consider decoherence in continuous-time quantum walks on long-range interacting cycles (LRICs), which are the extensions of the cycle graphs. For this purpose, we use Gurvitz's model and assume that every node is monitored by the corresponding point-contact induced by the decoherence process. Then, we focus on large rates of decoherence and calculate the probability distribution analytically and obtain the lower and upper bounds of the mixing time. Our results prove that the mixing time is proportional to the rate of decoherence and the inverse of the square of the distance parameter (m). This shows that the mixing time decreases with increasing range of interaction. Also, what we obtain for m = 0 is in agreement with Fedichkin, Solenov and Tamon's results [48] for cycle, and we see that the mixing time of CTQWs on cycle improves with adding interacting edges.

  18. Large scale analysis of co-existing post-translational modifications in histone tails reveals global fine structure of cross-talk

    DEFF Research Database (Denmark)

    Schwämmle, Veit; Aspalter, Claudia-Maria; Sidoli, Simone

    2014-01-01

    Mass spectrometry (MS) is a powerful analytical method for the identification and quantification of co-existing post-translational modifications in histone proteins. One of the most important challenges in current chromatin biology is to characterize the relationships between co-existing histone...... sample-specific patterns for the co-frequency of histone post-translational modifications. We implemented a new method to identify positive and negative interplay between pairs of methylation and acetylation marks in proteins. Many of the detected features were conserved between different cell types...... sites but negative cross-talk for distant ones, and for discrete methylation states at Lys-9, Lys-27, and Lys-36 of histone H3, suggesting a more differentiated functional role of methylation beyond the general expectation of enhanced activity at higher methylation states....

  19. A Short Proof of the Large Time Energy Growth for the Boussinesq System

    Science.gov (United States)

    Brandolese, Lorenzo; Mouzouni, Charafeddine

    2017-10-01

    We give a direct proof of the fact that the L^p-norms of global solutions of the Boussinesq system in R^3 grow large as t → ∞ for 1 ≤ p < 3, for solutions defined on R_+ × R^3. In particular, the kinetic energy blows up as ‖u(t)‖_2^2 ~ c t^{1/2} for large time. This contrasts with the case of the Navier-Stokes equations.

  20. Versatile synchronized real-time MEG hardware controller for large-scale fast data acquisition

    Science.gov (United States)

    Sun, Limin; Han, Menglai; Pratt, Kevin; Paulson, Douglas; Dinh, Christoph; Esch, Lorenz; Okada, Yoshio; Hämäläinen, Matti

    2017-05-01

    Versatile controllers for accurate, fast, and real-time synchronized acquisition of large-scale data are useful in many areas of science, engineering, and technology. Here, we describe the development of a controller software based on a technique called queued state machine for controlling the data acquisition (DAQ) hardware, continuously acquiring a large amount of data synchronized across a large number of channels (>400) at a fast rate (up to 20 kHz/channel) in real time, and interfacing with applications for real-time data analysis and display of electrophysiological data. This DAQ controller was developed specifically for a 384-channel pediatric whole-head magnetoencephalography (MEG) system, but its architecture is useful for wide applications. This controller running in a LabVIEW environment interfaces with microprocessors in the MEG sensor electronics to control their real-time operation. It also interfaces with a real-time MEG analysis software via transmission control protocol/internet protocol, to control the synchronous acquisition and transfer of the data in real time from >400 channels to acquisition and analysis workstations. The successful implementation of this controller for an MEG system with a large number of channels demonstrates the feasibility of employing the present architecture in several other applications.
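
    The controller is built around a queued state machine, in which incoming messages are buffered in a queue and consumed by a loop that reacts according to the current state. The Python skeleton below illustrates that pattern only; the real controller runs in LabVIEW, and the states and message names here are invented.

```python
# Illustrative queued-state-machine skeleton (the DAQ controller itself is
# LabVIEW-based); states and message names here are invented for the sketch.
import queue
import threading

events = queue.Queue()

def controller():
    state = "IDLE"
    while state != "STOPPED":
        msg = events.get()                      # blocking: wait for next event
        if state == "IDLE" and msg == "start":
            state = "ACQUIRING"
            print("acquisition started")
        elif state == "ACQUIRING" and msg == "data":
            print("forward buffered samples to analysis workstation")
        elif msg == "stop":
            state = "STOPPED"
            print("acquisition stopped")

t = threading.Thread(target=controller)
t.start()
for m in ["start", "data", "data", "stop"]:
    events.put(m)
t.join()
```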

  1. A robust and high-performance queue management controller for large round trip time networks

    Science.gov (United States)

    Khoshnevisan, Ladan; Salmasi, Farzad R.

    2016-05-01

    Congestion management for transmission control protocol is of utmost importance to prevent packet loss within a network. This necessitates strategies for active queue management. The most applied active queue management strategies have their inherent disadvantages which lead to suboptimal performance and even instability in the case of large round trip time and/or external disturbance. This paper presents an internal model control robust queue management scheme with two degrees of freedom in order to restrict the undesired effects of large and small round trip time and parameter variations in the queue management. Conventional approaches such as proportional integral and random early detection procedures lead to unstable behaviour due to large delay. Moreover, internal model control-Smith scheme suffers from large oscillations due to the large round trip time. On the other hand, other schemes such as internal model control-proportional integral and derivative show excessive sluggish performance for small round trip time values. To overcome these shortcomings, we introduce a system entailing two individual controllers for queue management and disturbance rejection, simultaneously. Simulation results based on Matlab/Simulink and also Network Simulator 2 (NS2) demonstrate the effectiveness of the procedure and verify the analytical approach.
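
    To make the feedback idea concrete, the toy simulation below updates a packet-drop probability from the queue-length error with a simple PI rule; it is only a sketch of feedback-based active queue management, with invented gains and load, and does not reproduce the paper's two-degree-of-freedom internal model control design.

```python
# Toy discrete-time PI queue controller (illustration of feedback AQM only;
# the paper's scheme is an internal-model-control design, not reproduced here).
q_ref = 100.0            # desired queue length (packets)
kp, ki = 0.002, 0.0005   # illustrative gains
p_drop, integ = 0.0, 0.0
queue_len = 0.0

for step in range(2000):
    arrivals = 120.0 * (1.0 - p_drop)        # offered load thinned by drops
    departures = 100.0                       # link capacity per step
    queue_len = max(0.0, queue_len + arrivals - departures)

    err = queue_len - q_ref
    integ += err
    p_drop = min(1.0, max(0.0, kp * err + ki * integ))   # PI update, clamped

print(f"final queue length ~ {queue_len:.1f}, drop prob ~ {p_drop:.3f}")
```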

  2. CAN LARGE TIME DELAYS OBSERVED IN LIGHT CURVES OF CORONAL LOOPS BE EXPLAINED IN IMPULSIVE HEATING?

    International Nuclear Information System (INIS)

    Lionello, Roberto; Linker, Jon A.; Mikić, Zoran; Alexander, Caroline E.; Winebarger, Amy R.

    2016-01-01

    The light curves of solar coronal loops often peak first in channels associated with higher temperatures and then in those associated with lower temperatures. The delay times between the different narrowband EUV channels have been measured for many individual loops and recently for every pixel of an active region observation. The time delays between channels for an active region exhibit a wide range of values. The maximum time delay in each channel pair can be quite large, i.e., >5000 s. These large time delays make up 3%–26% (depending on the channel pair) of the pixels where a trustworthy, positive time delay is measured. It has been suggested that these time delays can be explained by simple impulsive heating, i.e., a short burst of energy that heats the plasma to a high temperature, after which the plasma is allowed to cool through radiation and conduction back to its original state. In this paper, we investigate whether the largest observed time delays can be explained by this hypothesis by simulating a series of coronal loops with different heating rates, loop lengths, abundances, and geometries to determine the range of expected time delays between a set of four EUV channels. We find that impulsive heating cannot address the largest time delays observed in two of the channel pairs and that the majority of the large time delays can only be explained by long, expanding loops with photospheric abundances. Additional observations may rule out these simulations as an explanation for the long time delays. We suggest that either the time delays found in this manner may not be representative of real loop evolution, or that the impulsive heating and cooling scenario may be too simple to explain the observations, and other potential heating scenarios must be explored.

  3. Cosmological special relativity the large scale structure of space, time and velocity

    CERN Document Server

    Carmeli, Moshe

    1997-01-01

    This book deals with special relativity theory and its application to cosmology. It presents Einstein's theory of space and time in detail, and describes the large scale structure of space, time and velocity as a new cosmological special relativity. A cosmological Lorentz-like transformation, which relates events at different cosmic times, is derived and applied. A new law of addition of cosmic times is obtained, and the inflation of the space at the early universe is derived, both from the cosmological transformation. The book will be of interest to cosmologists, astrophysicists, theoretical

  4. Cosmological special relativity the large scale structure of space, time and velocity

    CERN Document Server

    Carmeli, Moshe

    2002-01-01

    This book presents Einstein's theory of space and time in detail, and describes the large-scale structure of space, time and velocity as a new cosmological special relativity. A cosmological Lorentz-like transformation, which relates events at different cosmic times, is derived and applied. A new law of addition of cosmic times is obtained, and the inflation of the space at the early universe is derived, both from the cosmological transformation. The relationship between cosmic velocity, acceleration and distances is given. In the appendices gravitation is added in the form of a cosmological g

  5. Computing the real-time Green's Functions of large Hamiltonian matrices

    OpenAIRE

    Iitaka, Toshiaki

    1998-01-01

    A numerical method is developed for calculating the real time Green's functions of very large sparse Hamiltonian matrices, which exploits the numerical solution of the inhomogeneous time-dependent Schroedinger equation. The method has a clear-cut structure reflecting the most naive definition of the Green's functions, and is very suitable to parallel and vector supercomputers. The effectiveness of the method is illustrated by applying it to simple lattice models. An application of this method...

  6. Calculation of neutron die-away times in a large-vehicle portal monitor

    International Nuclear Information System (INIS)

    Lillie, R.A.; Santoro, R.T.; Alsmiller, R.G. Jr.

    1980-05-01

    Monte Carlo methods have been used to calculate neutron die-away times in a large-vehicle portal monitor. These calculations were performed to investigate the adequacy of using neutron die-away time measurements to detect the clandestine movement of shielded nuclear materials. The geometry consisted of a large tunnel lined with ³He proportional counters. The time behavior of the (n,p) capture reaction in these counters was calculated when the tunnel contained a number of different tractor-trailer load configurations. Neutron die-away times obtained from weighted least squares fits to these data were compared. The change in neutron die-away time due to the replacement of cargo in a fully loaded truck with a spherical shell containing 240 kg of borated polyethylene was calculated to be less than 3%. This result, together with the overall behavior of neutron die-away time versus mass inside the tunnel, strongly suggested that measurements of this type will not provide a reliable means of detecting shielded nuclear materials in a large vehicle. 5 figures, 4 tables
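
    Extracting a die-away time from a capture-rate histogram amounts to a weighted least-squares fit of an exponential decay. The sketch below does this with SciPy on synthetic counts; the numbers are purely illustrative and unrelated to the reported calculations.

```python
# Sketch of extracting a die-away time from a capture-rate time histogram by a
# weighted least-squares fit; the synthetic data below are purely illustrative.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
t = np.linspace(0.0, 400.0, 80)                  # microseconds after the pulse
true_tau = 65.0
counts = 500.0 * np.exp(-t / true_tau) + rng.poisson(5.0, t.size)

def model(t, amplitude, tau, background):
    return amplitude * np.exp(-t / tau) + background

sigma = np.sqrt(np.maximum(counts, 1.0))         # Poisson-like weights
popt, pcov = curve_fit(model, t, counts, p0=(400.0, 50.0, 5.0), sigma=sigma)
print(f"fitted die-away time: {popt[1]:.1f} ± {np.sqrt(pcov[1, 1]):.1f} µs")
```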

  7. The Space-Time Conservative Schemes for Large-Scale, Time-Accurate Flow Simulations with Tetrahedral Meshes

    Science.gov (United States)

    Venkatachari, Balaji Shankar; Streett, Craig L.; Chang, Chau-Lyan; Friedlander, David J.; Wang, Xiao-Yen; Chang, Sin-Chung

    2016-01-01

    Despite decades of development of unstructured mesh methods, high-fidelity time-accurate simulations are still predominantly carried out on structured, or unstructured hexahedral meshes by using high-order finite-difference, weighted essentially non-oscillatory (WENO), or hybrid schemes formed by their combinations. In this work, the space-time conservation element solution element (CESE) method is used to simulate several flow problems including supersonic jet/shock interaction and its impact on launch vehicle acoustics, and direct numerical simulations of turbulent flows using tetrahedral meshes. This paper provides a status report for the continuing development of the space-time conservation element solution element (CESE) numerical and software framework under the Revolutionary Computational Aerosciences (RCA) project. Solution accuracy and large-scale parallel performance of the numerical framework is assessed with the goal of providing a viable paradigm for future high-fidelity flow physics simulations.

  8. Modeling and Validating Time, Buffering, and Utilization of a Large-Scale, Real-Time Data Acquisition System

    CERN Document Server

    AUTHOR|(SzGeCERN)756497; The ATLAS collaboration; Garcia Garcia, Pedro Javier; Vandelli, Wainer; Froening, Holger

    2017-01-01

    Data acquisition systems for large-scale high-energy physics experiments have to handle hundreds of gigabytes per second of data, and are typically realized as specialized data centers that connect a very large number of front-end electronics devices to an event detection and storage system. The design of such systems is often based on many assumptions, small-scale experiments and a substantial amount of over-provisioning. In this work, we introduce a discrete event-based simulation tool that models the data flow of the current ATLAS data acquisition system, with the main goal to be accurate with regard to the main operational characteristics. We measure buffer occupancy counting the number of elements in buffers, resource utilization measuring output bandwidth and counting the number of active processing units, and their time evolution by comparing data over many consecutive and small periods of time. We perform studies on the error of simulation when comparing the results to a large amount of real-world ope...

  9. Modeling and Validating Time, Buffering, and Utilization of a Large-Scale, Real-Time Data Acquisition System

    CERN Document Server

    AUTHOR|(SzGeCERN)756497; The ATLAS collaboration; Garcia Garcia, Pedro Javier; Vandelli, Wainer; Froening, Holger

    2017-01-01

    Data acquisition systems for large-scale high-energy physics experiments have to handle hundreds of gigabytes per second of data, and are typically implemented as specialized data centers that connect a very large number of front-end electronics devices to an event detection and storage system. The design of such systems is often based on many assumptions, small-scale experiments and a substantial amount of over-provisioning. In this paper, we introduce a discrete event-based simulation tool that models the dataflow of the current ATLAS data acquisition system, with the main goal to be accurate with regard to the main operational characteristics. We measure buffer occupancy counting the number of elements in buffers; resource utilization measuring output bandwidth and counting the number of active processing units, and their time evolution by comparing data over many consecutive and small periods of time. We perform studies on the error in simulation when comparing the results to a large amount of real-world ...

  10. Large-time behavior of solutions to a reaction-diffusion system with distributed microstructure

    NARCIS (Netherlands)

    Muntean, A.

    2009-01-01

    Abstract We study the large-time behavior of a class of reaction-diffusion systems with constant distributed microstructure arising when modeling diffusion and reaction in structured porous media. The main result of this Note is the following: As t → ∞ the macroscopic concentration vanishes, while

  11. The LOFT (Large Observatory for X-ray Timing) background simulations

    DEFF Research Database (Denmark)

    Campana, R.; Feroci, M.; Del Monte, E.

    2012-01-01

    The Large Observatory For X-ray Timing (LOFT) is an innovative medium-class mission selected for an assessment phase in the framework of the ESA M3 Cosmic Vision call. LOFT is intended to answer fundamental questions about the behavior of matter in the very strong gravitational and magnetic fields...

  12. An Efficient Format for Nearly Constant-Time Access to Arbitrary Time Intervals in Large Trace Files

    Directory of Open Access Journals (Sweden)

    Anthony Chan

    2008-01-01

    Full Text Available A powerful method to aid in understanding the performance of parallel applications uses log or trace files containing time-stamped events and states (pairs of events). These trace files can be very large, often hundreds or even thousands of megabytes. Because of the cost of accessing and displaying such files, other methods are often used that reduce the size of the trace files at the cost of sacrificing detail or other information. This paper describes a hierarchical trace file format that provides for display of an arbitrary time window in a time independent of the total size of the file and roughly proportional to the number of events within the time window. This format eliminates the need to sacrifice data to achieve a smaller trace file size (since storage is inexpensive, it is necessary only to make efficient use of bandwidth to that storage). The format can be used to organize a trace file or to create a separate file of annotations that may be used with conventional trace files. We present an analysis of the time to access all of the events relevant to an interval of time and we describe experiments demonstrating the performance of this file format.
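
    The key property, locating a time window in time roughly proportional to the events it contains rather than to the file size, rests on indexed access into time-ordered data. The sketch below shows the core lookup with a binary search over sorted event start times; the hierarchical frame structure of the actual format is not reproduced, and the events are hypothetical.

```python
# Sketch of time-window lookup in a sorted event trace using binary search;
# the paper's format adds a hierarchy of frames on top of this core idea.
import bisect

# (start_time, name) pairs, sorted by start time -- hypothetical trace events
events = [(0.1, "init"), (0.4, "compute"), (0.9, "mpi_send"),
          (1.3, "compute"), (2.0, "mpi_recv"), (2.7, "finalize")]
starts = [t for t, _ in events]

def events_in_window(t0, t1):
    """Return events whose start time falls inside [t0, t1]."""
    lo = bisect.bisect_left(starts, t0)
    hi = bisect.bisect_right(starts, t1)
    return events[lo:hi]

print(events_in_window(0.5, 2.1))   # -> events starting between 0.5 and 2.1
```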

  13. RankExplorer: Visualization of Ranking Changes in Large Time Series Data.

    Science.gov (United States)

    Shi, Conglei; Cui, Weiwei; Liu, Shixia; Xu, Panpan; Chen, Wei; Qu, Huamin

    2012-12-01

    For many applications involving time series data, people are often interested in the changes of item values over time as well as their ranking changes. For example, people search many words via search engines like Google and Bing every day. Analysts are interested in both the absolute searching number for each word as well as their relative rankings. Both sets of statistics may change over time. For very large time series data with thousands of items, how to visually present ranking changes is an interesting challenge. In this paper, we propose RankExplorer, a novel visualization method based on ThemeRiver to reveal the ranking changes. Our method consists of four major components: 1) a segmentation method which partitions a large set of time series curves into a manageable number of ranking categories; 2) an extended ThemeRiver view with embedded color bars and changing glyphs to show the evolution of aggregation values related to each ranking category over time as well as the content changes in each ranking category; 3) a trend curve to show the degree of ranking changes over time; 4) rich user interactions to support interactive exploration of ranking changes. We have applied our method to some real time series data and the case studies demonstrate that our method can reveal the underlying patterns related to ranking changes which might otherwise be obscured in traditional visualizations.
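
    The first component, partitioning many time series into a manageable number of ranking categories, can be approximated by ranking items at each time step and binning them by average rank, as in the sketch below; the thresholds and synthetic data are illustrative and not the paper's segmentation algorithm.

```python
# Sketch of a segmentation step: assign each item to a ranking category
# based on its average rank over time (thresholds here are illustrative).
import numpy as np

rng = np.random.default_rng(2)
n_items, n_steps = 1000, 52                 # e.g. weekly search volumes
values = rng.lognormal(mean=3.0, sigma=1.0, size=(n_items, n_steps))

# Rank items at every time step (rank 0 = largest value at that step).
ranks = np.argsort(np.argsort(-values, axis=0), axis=0)
mean_rank = ranks.mean(axis=1)

# Partition into a manageable number of ranking categories, e.g. top 10,
# 10-100, 100-500, and the rest.
bins = [10, 100, 500, n_items]
categories = np.digitize(mean_rank, bins)
for c in range(len(bins)):
    print(f"category {c}: {np.sum(categories == c)} items")
```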

  14. Rapid Modeling of and Response to Large Earthquakes Using Real-Time GPS Networks (Invited)

    Science.gov (United States)

    Crowell, B. W.; Bock, Y.; Squibb, M. B.

    2010-12-01

    Real-time GPS networks have the advantage of capturing motions throughout the entire earthquake cycle (interseismic, seismic, coseismic, postseismic), and because of this, are ideal for real-time monitoring of fault slip in the region. Real-time GPS networks provide the perfect supplement to seismic networks, which operate with lower noise and higher sampling rates than GPS networks, but only measure accelerations or velocities, putting them at a supreme disadvantage for ascertaining the full extent of slip during a large earthquake in real-time. Here we report on two examples of rapid modeling of recent large earthquakes near large regional real-time GPS networks. The first utilizes Japan’s GEONET consisting of about 1200 stations during the 2003 Mw 8.3 Tokachi-Oki earthquake about 100 km offshore Hokkaido Island and the second investigates the 2010 Mw 7.2 El Mayor-Cucapah earthquake recorded by more than 100 stations in the California Real Time Network. The principal components of strain were computed throughout the networks and utilized as a trigger to initiate earthquake modeling. Total displacement waveforms were then computed in a simulated real-time fashion using a real-time network adjustment algorithm that fixes a station far away from the rupture to obtain a stable reference frame. Initial peak ground displacement measurements can then be used to obtain an initial size through scaling relationships. Finally, a full coseismic model of the event can be run minutes after the event, given predefined fault geometries, allowing emergency first responders and researchers to pinpoint the regions of highest damage. Furthermore, we are also investigating using total displacement waveforms for real-time moment tensor inversions to look at spatiotemporal variations in slip.

  15. Fitness, work, and leisure-time physical activity and ischaemic heart disease and all-cause mortality among men with pre-existing cardiovascular disease

    DEFF Research Database (Denmark)

    Holtermann, Andreas; Mortensen, Ole Steen; Burr, Hermann

    2010-01-01

    OBJECTIVE: Our aim was to study the relative impact of physical fitness, physical demands at work, and physical activity during leisure time on ischaemic heart disease (IHD) and all-cause mortality among employed men with pre-existing cardiovascular disease (CVD). METHOD: We carried out a 30-year... physical work demands and leisure-time physical activity using a self-reported questionnaire. RESULTS: Among 274 men with a history of CVD, 93 men died from IHD. Using male employees with a history of CVD and a low level of fitness as the reference group, our Cox analyses - adjusted for age, blood pressure, smoking, alcohol consumption, body mass index, diabetes, hypertension, physical work demands, leisure-time physical activity, and social class - showed a substantially reduced risk for IHD mortality among employees who were intermediately fit [VO(2)Max range 25-36; hazard ratio (HR) 0.54, 95% confidence...

  16. Time delay effects on large-scale MR damper based semi-active control strategies

    International Nuclear Information System (INIS)

    Cha, Y-J; Agrawal, A K; Dyke, S J

    2013-01-01

    This paper presents a detailed investigation on the robustness of large-scale 200 kN MR damper based semi-active control strategies in the presence of time delays in the control system. Although the effects of time delay on stability and performance degradation of an actively controlled system have been investigated extensively by many researchers, degradation in the performance of semi-active systems due to time delay has yet to be investigated. Since semi-active systems are inherently stable, instability problems due to time delay are unlikely to arise. This paper investigates the effects of time delay on the performance of a building with a large-scale MR damper, using numerical simulations of near- and far-field earthquakes. The MR damper is considered to be controlled by four different semi-active control algorithms, namely (i) clipped-optimal control (COC), (ii) decentralized output feedback polynomial control (DOFPC), (iii) Lyapunov control, and (iv) simple-passive control (SPC). It is observed that all controllers except for the COC are significantly robust with respect to time delay. On the other hand, the clipped-optimal controller should be integrated with a compensator to improve the performance in the presence of time delay. (paper)

  17. TIME DISTRIBUTIONS OF LARGE AND SMALL SUNSPOT GROUPS OVER FOUR SOLAR CYCLES

    International Nuclear Information System (INIS)

    Kilcik, A.; Yurchyshyn, V. B.; Abramenko, V.; Goode, P. R.; Cao, W.; Ozguc, A.; Rozelot, J. P.

    2011-01-01

    Here we analyze solar activity by focusing on time variations of the number of sunspot groups (SGs) as a function of their modified Zurich class. We analyzed data for solar cycles 20-23 by using Rome (cycles 20 and 21) and Learmonth Solar Observatory (cycles 22 and 23) SG numbers. All SGs recorded during these time intervals were separated into two groups. The first group includes small SGs (A, B, C, H, and J classes by Zurich classification), and the second group consists of large SGs (D, E, F, and G classes). We then calculated small and large SG numbers from their daily mean numbers as observed on the solar disk during a given month. We report that the time variations of small and large SG numbers are asymmetric except for solar cycle 22. In general, large SG numbers appear to reach their maximum in the middle of the solar cycle (phases 0.45-0.5), while the international sunspot numbers and the small SG numbers generally peak much earlier (solar cycle phases 0.29-0.35). Moreover, the 10.7 cm solar radio flux, the facular area, and the maximum coronal mass ejection speed show better agreement with the large SG numbers than they do with the small SG numbers. Our results suggest that the large SG numbers are more likely to shed light on solar activity and its geophysical implications. Our findings may also influence our understanding of long-term variations of the total solar irradiance, which is thought to be an important factor in the Sun-Earth climate relationship.

  18. A general formulation of discrete-time quantum mechanics: Restrictions on the action and the relation of unitarity to the existence theorem for initial-value problems

    International Nuclear Information System (INIS)

    Khorrami, M.

    1995-01-01

    A general formulation for discrete-time quantum mechanics, based on Feynman's method in ordinary quantum mechanics, is presented. It is shown that the ambiguities present in ordinary quantum mechanics (due to noncommutativity of the operators), are no longer present here. Then the criteria for the unitarity of the evolution operator are examined. It is shown that the unitarity of the evolution operator puts restrictions on the form of the action, and also implies the existence of a solution for the classical initial-value problem. 13 refs

  19. Time-Sliced Perturbation Theory for Large Scale Structure I: General Formalism

    CERN Document Server

    Blas, Diego; Ivanov, Mikhail M.; Sibiryakov, Sergey

    2016-01-01

    We present a new analytic approach to describe large scale structure formation in the mildly non-linear regime. The central object of the method is the time-dependent probability distribution function generating correlators of the cosmological observables at a given moment of time. Expanding the distribution function around the Gaussian weight we formulate a perturbative technique to calculate non-linear corrections to cosmological correlators, similar to the diagrammatic expansion in a three-dimensional Euclidean quantum field theory, with time playing the role of an external parameter. For the physically relevant case of cold dark matter in an Einstein--de Sitter universe, the time evolution of the distribution function can be found exactly and is encapsulated by a time-dependent coupling constant controlling the perturbative expansion. We show that all building blocks of the expansion are free from spurious infrared enhanced contributions that plague the standard cosmological perturbation theory. This pave...

  20. Displacement in the parameter space versus spurious solution of discretization with large time step

    International Nuclear Information System (INIS)

    Mendes, Eduardo; Letellier, Christophe

    2004-01-01

    In order to investigate a possible correspondence between differential and difference equations, it is important to possess discretization of ordinary differential equations. It is well known that when differential equations are discretized, the solution thus obtained depends on the time step used. In the majority of cases, such a solution is considered spurious when it does not resemble the expected solution of the differential equation. This often happens when the time step taken into consideration is too large. In this work, we show that, even for quite large time steps, some solutions which do not correspond to the expected ones are still topologically equivalent to solutions of the original continuous system if a displacement in the parameter space is considered. To reduce such a displacement, a judicious choice of the discretization scheme should be made. To this end, a recent discretization scheme, based on the Lie expansion of the original differential equations, proposed by Monaco and Normand-Cyrot will be analysed. Such a scheme will be shown to be sufficient for providing an adequate discretization for quite large time steps compared to the pseudo-period of the underlying dynamics

  1. Large deviations of a long-time average in the Ehrenfest urn model

    Science.gov (United States)

    Meerson, Baruch; Zilber, Pini

    2018-05-01

    Since its inception in 1907, the Ehrenfest urn model (EUM) has served as a test bed of key concepts of statistical mechanics. Here we employ this model to study large deviations of a time-additive quantity. We consider two continuous-time versions of the EUM with K urns and N balls: with and without interactions between the balls in the same urn. We evaluate the probability distribution that the average number of balls in one urn over time T takes any specified value aN, where 0 ≤ a ≤ 1. For a long observation time, T → ∞, a Donsker–Varadhan large deviation principle holds, with a rate function that also depends on the remaining parameters of the model. We calculate the rate function exactly by two different methods due to Donsker and Varadhan and compare the exact results with those obtained with a variant of WKB approximation (after Wentzel, Kramers and Brillouin). In the absence of interactions the WKB prediction for the rate function is exact for any N. In the presence of interactions the WKB method gives asymptotically exact results for large N. The WKB method also uncovers the (very simple) time history of the system which dominates the contributions of different time histories to the probability distribution.
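
    The time-averaged occupancy studied here is easy to probe numerically. The sketch below runs a toy continuous-time simulation of the non-interacting urn model and estimates the time-averaged fraction of balls in one urn; the jump convention (each ball moves at unit rate to a uniformly chosen other urn) is one common choice and is an assumption of the sketch, not taken from the paper.

```python
# Toy continuous-time simulation of a non-interacting Ehrenfest urn model
# (assumed convention: each ball jumps at unit rate to a uniformly chosen
# other urn) to estimate the time-averaged occupancy fraction of urn 1.
import numpy as np

rng = np.random.default_rng(3)
K, N, T = 3, 50, 200.0
urn_of = rng.integers(0, K, size=N)      # initial urn of every ball
n1 = int(np.sum(urn_of == 0))            # balls currently in urn 1

t, weighted_n1 = 0.0, 0.0
while t < T:
    dt = rng.exponential(1.0 / N)        # next jump: total rate is N
    dt = min(dt, T - t)
    weighted_n1 += n1 * dt               # time-weighted occupancy
    t += dt
    if t >= T:
        break
    ball = rng.integers(N)               # the ball that jumps
    new_urn = rng.integers(K - 1)
    if new_urn >= urn_of[ball]:          # pick uniformly among the other urns
        new_urn += 1
    n1 += int(new_urn == 0) - int(urn_of[ball] == 0)
    urn_of[ball] = new_urn

print(f"time-averaged fraction in urn 1: {weighted_n1 / (N * T):.3f} "
      f"(expected ~ {1.0 / K:.3f})")
```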

  2. Molecular dynamics based enhanced sampling of collective variables with very large time steps

    Science.gov (United States)

    Chen, Pei-Yang; Tuckerman, Mark E.

    2018-01-01

    Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.

  3. Asymptotics for Large Time of Global Solutions to the Generalized Kadomtsev-Petviashvili Equation

    Science.gov (United States)

    Hayashi, Nakao; Naumkin, Pavel I.; Saut, Jean-Claude

    We study the large time asymptotic behavior of solutions to the generalized Kadomtsev-Petviashvili (KP) equations, where σ = 1 or σ = -1. When ρ = 2 and σ = -1, (KP) is known as the KPI equation, while ρ = 2, σ = +1 corresponds to the KPII equation. The KP equation models the propagation along the x-axis of nonlinear dispersive long waves on the surface of a fluid, when the variation along the y-axis proceeds slowly [10]. The case ρ = 3, σ = -1 has been found in the modeling of sound waves in antiferromagnetics [15]. We prove that if ρ ≥ 3 is an integer and the initial data are sufficiently small, then the solution u of (KP) satisfies decay estimates for all t ∈ R involving an exponent κ, where κ = 1 if ρ = 3 and κ = 0 if ρ ≥ 4. We also find the large time asymptotics for the solution.

  4. Large-scale machine learning and evaluation platform for real-time traffic surveillance

    Science.gov (United States)

    Eichel, Justin A.; Mishra, Akshaya; Miller, Nicholas; Jankovic, Nicholas; Thomas, Mohan A.; Abbott, Tyler; Swanson, Douglas; Keller, Joel

    2016-09-01

    In traffic engineering, vehicle detectors are trained on limited datasets, resulting in poor accuracy when deployed in real-world surveillance applications. Annotating large-scale high-quality datasets is challenging. Typically, these datasets have limited diversity; they do not reflect the real-world operating environment. There is a need for a large-scale, cloud-based positive and negative mining process and a large-scale learning and evaluation system for the application of automatic traffic measurements and classification. The proposed positive and negative mining process addresses the quality of crowd sourced ground truth data through machine learning review and human feedback mechanisms. The proposed learning and evaluation system uses a distributed cloud computing framework to handle data-scaling issues associated with large numbers of samples and a high-dimensional feature space. The system is trained using AdaBoost on 1,000,000 Haar-like features extracted from 70,000 annotated video frames. The trained real-time vehicle detector achieves an accuracy of at least 95% for 1/2 and about 78% for 19/20 of the time when tested on ˜7,500,000 video frames. At the end of 2016, the dataset is expected to have over 1 billion annotated video frames.
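
    For orientation, the learning stage (boosted decision stumps over a large feature matrix) looks roughly like the sketch below, using scikit-learn's AdaBoost with its default depth-1 tree as the weak learner; the feature matrix and labels are synthetic stand-ins for the Haar-like features and annotated frames.

```python
# Sketch of the learning stage only: boosted decision stumps over a
# precomputed feature matrix (stand-in for the Haar-like features; the data
# here are synthetic, not the annotated video frames from the paper).
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.standard_normal((5000, 200))          # rows: samples, cols: features
y = (X[:, :5].sum(axis=1) > 0).astype(int)    # synthetic vehicle / background

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
# Default weak learner of AdaBoostClassifier is a depth-1 decision stump.
clf = AdaBoostClassifier(n_estimators=200)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```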

  5. Melodic pattern extraction in large collections of music recordings using time series mining techniques

    OpenAIRE

    Gulati, Sankalp; Serrà, Joan; Ishwar, Vignesh; Serra, Xavier

    2014-01-01

    We demonstrate a data-driven unsupervised approach for the discovery of melodic patterns in large collections of Indian art music recordings. The approach first works on single recordings and subsequently searches in the entire music collection. Melodic similarity is based on dynamic time warping. The task being computationally intensive, lower bounding and early abandoning techniques are applied during distance computation. Our dataset comprises 365 hours of music, containing 1,764 audio rec...
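
The lower-bounding and early-abandoning machinery referred to above is standard in time-series mining; the sketch below shows an LB_Keogh-style bound gating a dynamic time warping computation. Function names, the envelope radius and the toy signals are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def lb_keogh(query, candidate, r):
    """LB_Keogh lower bound on the DTW distance with envelope radius r."""
    lb = 0.0
    for i, q in enumerate(query):
        window = candidate[max(0, i - r): i + r + 1]
        lo, hi = window.min(), window.max()
        if q > hi:
            lb += (q - hi) ** 2
        elif q < lo:
            lb += (q - lo) ** 2
    return lb

def dtw(query, candidate, best_so_far=np.inf):
    """DTW with early abandoning once an entire row exceeds best_so_far."""
    n, m = len(query), len(candidate)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (query[i - 1] - candidate[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        if D[i, 1:].min() > best_so_far:     # early abandoning
            return np.inf
    return D[n, m]

# Usage: skip the full DTW whenever the cheap lower bound already exceeds the best match.
query = np.sin(np.linspace(0, 6.28, 50))
best = np.inf
for cand in [np.cos(np.linspace(0, 6.28, 50)), np.sin(np.linspace(0, 6.28, 50)) + 0.1]:
    if lb_keogh(query, cand, r=5) < best:
        best = min(best, dtw(query, cand, best))
print(best)
```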

  6. Large lateral photovoltaic effect with ultrafast relaxation time in SnSe/Si junction

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Xianjie; Zhao, Xiaofeng; Hu, Chang; Zhang, Yang; Song, Bingqian; Zhang, Lingli; Liu, Weilong; Lv, Zhe; Zhang, Yu; Sui, Yu, E-mail: suiyu@hit.edu.cn [Department of Physics, Harbin Institute of Technology, Harbin 150001 (China); Tang, Jinke [Department of Physics and Astronomy, University of Wyoming, Laramie, Wyoming 82071 (United States); Song, Bo, E-mail: songbo@hit.edu.cn [Department of Physics, Harbin Institute of Technology, Harbin 150001 (China); Academy of Fundamental and Interdisciplinary Sciences, Harbin Institute of Technology, Harbin 150001 (China)

    2016-07-11

    In this paper, we report a large lateral photovoltaic effect (LPE) with ultrafast relaxation time in SnSe/p-Si junctions. The LPE shows a linear dependence on the position of the laser spot, and the position sensitivity is as high as 250 mV mm⁻¹. The optical response time and the relaxation time of the LPE are about 100 ns and 2 μs, respectively. The current-voltage curve on the surface of the SnSe film indicates the formation of an inversion layer at the SnSe/p-Si interface. Our results clearly suggest that most of the excited electrons diffuse laterally in the inversion layer at the SnSe/p-Si interface, which results in a large LPE with ultrafast relaxation time. The high positional sensitivity and ultrafast relaxation time of the LPE make the SnSe/p-Si junction a promising candidate for a wide range of optoelectronic applications.

  7. Time-sliced perturbation theory for large scale structure I: general formalism

    Energy Technology Data Exchange (ETDEWEB)

    Blas, Diego; Garny, Mathias; Sibiryakov, Sergey [Theory Division, CERN, CH-1211 Genève 23 (Switzerland); Ivanov, Mikhail M., E-mail: diego.blas@cern.ch, E-mail: mathias.garny@cern.ch, E-mail: mikhail.ivanov@cern.ch, E-mail: sergey.sibiryakov@cern.ch [FSB/ITP/LPPC, École Polytechnique Fédérale de Lausanne, CH-1015, Lausanne (Switzerland)

    2016-07-01

    We present a new analytic approach to describe large scale structure formation in the mildly non-linear regime. The central object of the method is the time-dependent probability distribution function generating correlators of the cosmological observables at a given moment of time. Expanding the distribution function around the Gaussian weight we formulate a perturbative technique to calculate non-linear corrections to cosmological correlators, similar to the diagrammatic expansion in a three-dimensional Euclidean quantum field theory, with time playing the role of an external parameter. For the physically relevant case of cold dark matter in an Einstein-de Sitter universe, the time evolution of the distribution function can be found exactly and is encapsulated by a time-dependent coupling constant controlling the perturbative expansion. We show that all building blocks of the expansion are free from spurious infrared enhanced contributions that plague the standard cosmological perturbation theory. This paves the way towards the systematic resummation of infrared effects in large scale structure formation. We also argue that the approach proposed here provides a natural framework to account for the influence of short-scale dynamics on larger scales along the lines of effective field theory.

  8. Incipient multiple fault diagnosis in real time with applications to large-scale systems

    International Nuclear Information System (INIS)

    Chung, H.Y.; Bien, Z.; Park, J.H.; Seon, P.H.

    1994-01-01

    By using a modified signed directed graph (SDG) together with distributed artificial neural networks and a knowledge-based system, a method of incipient multi-fault diagnosis is presented for large-scale physical systems with complex pipes and instrumentation such as valves, actuators, sensors, and controllers. The proposed method is designed to (1) make real-time incipient fault diagnosis possible for large-scale systems, (2) perform the fault diagnosis not only in the steady-state case but also in the transient case by using a concept of fault propagation time, which is newly adopted in the SDG model, (3) provide highly reliable diagnosis results and an explanation capability for the diagnosed faults, as in an expert system, and (4) diagnose pipe damage such as leakage, breaks, or throttling. This method is applied to the diagnosis of a pressurizer in the Kori Nuclear Power Plant (NPP) unit 2 in Korea under a transient condition, and its result is reported to show satisfactory performance of the method for incipient multi-fault diagnosis of such a large-scale system in a real-time manner.

  9. Time and frequency domain analyses of the Hualien Large-Scale Seismic Test

    International Nuclear Information System (INIS)

    Kabanda, John; Kwon, Oh-Sung; Kwon, Gunup

    2015-01-01

    Highlights: • Time- and frequency-domain analysis methods are verified against each other. • The two analysis methods are validated against Hualien LSST. • The nonlinear time domain (NLTD) analysis resulted in more realistic response. • The frequency domain (FD) analysis shows amplification at resonant frequencies. • The NLTD analysis requires significant modeling and computing time. - Abstract: In the nuclear industry, the equivalent-linear frequency domain analysis method has been the de facto standard procedure primarily due to the method's computational efficiency. This study explores the feasibility of applying the nonlinear time domain analysis method for the soil–structure-interaction analysis of nuclear power facilities. As a first step, the equivalency of the time and frequency domain analysis methods is verified through a site response analysis of one-dimensional soil, a dynamic impedance analysis of soil–foundation system, and a seismic response analysis of the entire soil–structure system. For the verifications, an idealized elastic soil–structure system is used to minimize variables in the comparison of the two methods. Then, the verified analysis methods are used to develop time and frequency domain models of Hualien Large-Scale Seismic Test. The predicted structural responses are compared against field measurements. The models are also analyzed with an amplified ground motion to evaluate discrepancies of the time and frequency domain analysis methods when the soil–structure system behaves beyond the elastic range. The analysis results show that the equivalent-linear frequency domain analysis method amplifies certain frequency bands and tends to result in higher structural acceleration than the nonlinear time domain analysis method. A comparison with field measurements shows that the nonlinear time domain analysis method better captures the frequency distribution of recorded structural responses than the frequency domain

  10. A method for real-time memory efficient implementation of blob detection in large images

    Directory of Open Access Journals (Sweden)

    Petrović Vladimir L.

    2017-01-01

    In this paper we propose a method for real-time blob detection in large images with low memory cost. The method is suitable for implementation on specialized parallel hardware such as multi-core platforms, FPGAs and ASICs. It uses parallelism to speed up the blob detection. The input image is divided into blocks of equal size, to which the maximally stable extremal regions (MSER) blob detector is applied in parallel. We propose the use of multiresolution analysis for the detection of large blobs that are not detected by processing the small blocks. This method can find its place in many applications such as medical imaging, text recognition, video surveillance and wide area motion imagery (WAMI). We also explored the use of the detected blobs in feature-based image alignment. When large images are processed, our approach is 10 to over 20 times more memory efficient than the state-of-the-art hardware implementation of the MSER.
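
A software analogue of the block-wise scheme is easy to sketch with OpenCV's MSER detector. The block size, downsampling factor and file name below are assumptions, and this is a generic illustration of the idea, not the paper's FPGA/ASIC design.

```python
import cv2
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def detect_block(args):
    block, (y0, x0) = args
    mser = cv2.MSER_create()              # created per worker: MSER objects are not picklable
    regions, _ = mser.detectRegions(block)
    # shift region point coordinates (x, y) back into full-image coordinates
    return [r + np.array([x0, y0]) for r in regions]

def blockwise_mser(gray, block=512, scale=8):
    h, w = gray.shape
    tasks = [(gray[y:y + block, x:x + block], (y, x))
             for y in range(0, h, block) for x in range(0, w, block)]
    with ProcessPoolExecutor() as pool:   # per-block detection in parallel
        per_block = list(pool.map(detect_block, tasks))
    # coarse multiresolution pass for blobs larger than a single block
    small = cv2.resize(gray, (w // scale, h // scale))
    large_regions, _ = cv2.MSER_create().detectRegions(small)
    large_regions = [r * scale for r in large_regions]
    return [r for rs in per_block for r in rs] + large_regions

if __name__ == "__main__":
    img = cv2.imread("large_image.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input file
    print(len(blockwise_mser(img)), "blobs")
```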

  11. Large-time asymptotic behaviour of solutions of non-linear Sobolev-type equations

    International Nuclear Information System (INIS)

    Kaikina, Elena I; Naumkin, Pavel I; Shishmarev, Il'ya A

    2009-01-01

    The large-time asymptotic behaviour of solutions of the Cauchy problem is investigated for a non-linear Sobolev-type equation with dissipation. For small initial data the approach taken is based on a detailed analysis of the Green's function of the linear problem and the use of the contraction mapping method. The case of large initial data is also closely considered. In the supercritical case the asymptotic formulae are quasi-linear. The asymptotic behaviour of solutions of a non-linear Sobolev-type equation with a critical non-linearity of the non-convective kind differs by a logarithmic correction term from the behaviour of solutions of the corresponding linear equation. For a critical convective non-linearity, as well as for a subcritical non-convective non-linearity it is proved that the leading term of the asymptotic expression for large times is a self-similar solution. For Sobolev equations with convective non-linearity the asymptotic behaviour of solutions in the subcritical case is the product of a rarefaction wave and a shock wave. Bibliography: 84 titles.

  12. Time Domain View of Liquid-like Screening and Large Polaron Formation in Lead Halide Perovskites

    Science.gov (United States)

    Joshi, Prakriti Pradhan; Miyata, Kiyoshi; Trinh, M. Tuan; Zhu, Xiaoyang

    The structural softness and dynamic disorder of lead halide perovskites contribute to their remarkable optoelectronic properties through efficient charge screening and large polaron formation. Here we provide a direct time-domain view of the liquid-like structural dynamics and polaron formation in single crystal CH3NH3PbBr3 and CsPbBr3 using femtosecond optical Kerr effect spectroscopy in conjunction with transient reflectance spectroscopy. We investigate structural dynamics as a function of pump energy, which enables us to examine the dynamics in the absence and presence of charge carriers. In the absence of charge carriers, structural dynamics are dominated by over-damped picosecond motions of the inorganic PbBr3- sub-lattice, and these motions are strongly coupled to band-gap electronic transitions. Carrier injection from across-gap optical excitation triggers additional 0.26 ps dynamics in CH3NH3PbBr3 that can be attributed to the formation of large polarons. In comparison, large polaron formation is slower in CsPbBr3, with a time constant of 0.6 ps. We discuss how such dynamic screening protects charge carriers in lead halide perovskites. US Department of Energy, Office of Science - Basic Energy Sciences.

  13. Mean time for the development of large workloads and large queue lengths in the GI/G/1 queue

    Directory of Open Access Journals (Sweden)

    Charles Knessl

    1996-01-01

    We consider the GI/G/1 queue described by either the workload U(t) (unfinished work) or the number of customers N(t) in the system. We compute the mean time until U(t) first exceeds the level K, and also the mean time until N(t) reaches N_0. For the M/G/1 and GI/M/1 models, we obtain exact contour integral representations for these mean first passage times. We then compute the mean times asymptotically, as K and N_0 → ∞, by evaluating these contour integrals. For the general GI/G/1 model, we obtain asymptotic results by a singular perturbation analysis of the appropriate backward Kolmogorov equation(s). Numerical comparisons show that the asymptotic formulas are very accurate even for moderate values of K and N_0.
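
The quantity being computed can also be checked by brute force. The sketch below Monte Carlo-estimates the mean first-passage time of the workload to a level K for an M/M/1 special case (exponential service is an assumption made for simplicity); it is not the contour-integral or singular-perturbation analysis of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def first_passage_time(lam, mu, K):
    """Time until the unfinished work U(t) first reaches level K, starting empty.
    Poisson arrivals at rate lam, exponential service at rate mu, unit drain rate."""
    t, workload = 0.0, 0.0
    while True:
        gap = rng.exponential(1.0 / lam)          # time to next arrival
        workload = max(workload - gap, 0.0)       # work drains at unit rate
        t += gap
        workload += rng.exponential(1.0 / mu)     # service requirement of the arrival
        if workload >= K:
            return t

samples = [first_passage_time(lam=0.8, mu=1.0, K=10.0) for _ in range(2000)]
print("estimated mean first-passage time:", np.mean(samples))
```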

  14. Time domain calculation of connector loads of a very large floating structure

    Science.gov (United States)

    Gu, Jiayang; Wu, Jie; Qi, Enrong; Guan, Yifeng; Yuan, Yubo

    2015-06-01

    Loads generated after an air crash, ship collision, and other accidents may destroy very large floating structures (VLFSs) and create additional connector loads. In this study, the combined effects of ship collision and wave loads are considered to establish motion differential equations for a multi-body VLFS. A time domain calculation method is proposed to calculate the connector loads of the VLFS in waves. The Longuet-Higgins model is employed to simulate the stochastic wave load. Fluid forces and hydrodynamic coefficients are obtained with DNV Sesam software. The motion differential equation is solved in the time domain after the frequency domain hydrodynamic coefficients are converted into the memory functions of the time domain motion differential equation. As a result of the combined action of wave and impact loads, high-frequency oscillation is observed in the time history curve of the connector load. At wave directions of 0° and 75°, the regularities of the time history curves of the connector loads in different directions are similar, and the connector loads of C1 and C2 in the X direction are the largest. The oscillation load is observed in the connector in the Y direction at a wave direction of 75° but not at 0°. This paper presents a time domain calculation method for connector loads to provide a reference for the future development of Chinese VLFSs.

  15. Time Discounting and Credit Market Access in a Large-Scale Cash Transfer Programme

    Science.gov (United States)

    Handa, Sudhanshu; Martorano, Bruno; Halpern, Carolyn; Pettifor, Audrey; Thirumurthy, Harsha

    2017-01-01

    Time discounting is thought to influence decision-making in almost every sphere of life, including personal finances, diet, exercise and sexual behavior. In this article we provide evidence on whether a national poverty alleviation program in Kenya can affect inter-temporal decisions. We administered a preferences module as part of a large-scale impact evaluation of the Kenyan Government’s Cash Transfer for Orphans and Vulnerable Children. Four years into the program we find that individuals in the treatment group are only marginally more likely to wait for future money, due in part to the erosion of the value of the transfer by inflation. However among the poorest households for whom the value of transfer is still relatively large we find significant program effects on the propensity to wait. We also find strong program effects among those who have access to credit markets though the program itself does not improve access to credit. PMID:28260842

  16. Time Discounting and Credit Market Access in a Large-Scale Cash Transfer Programme.

    Science.gov (United States)

    Handa, Sudhanshu; Martorano, Bruno; Halpern, Carolyn; Pettifor, Audrey; Thirumurthy, Harsha

    2016-06-01

    Time discounting is thought to influence decision-making in almost every sphere of life, including personal finances, diet, exercise and sexual behavior. In this article we provide evidence on whether a national poverty alleviation program in Kenya can affect inter-temporal decisions. We administered a preferences module as part of a large-scale impact evaluation of the Kenyan Government's Cash Transfer for Orphans and Vulnerable Children. Four years into the program we find that individuals in the treatment group are only marginally more likely to wait for future money, due in part to the erosion of the value of the transfer by inflation. However among the poorest households for whom the value of transfer is still relatively large we find significant program effects on the propensity to wait. We also find strong program effects among those who have access to credit markets though the program itself does not improve access to credit.

  17. Very Large Inflammatory Odontogenic Cyst with Origin on a Single Long Time Traumatized Lower Incisor

    Science.gov (United States)

    Freitas, Filipe; Andre, Saudade; Moreira, Andre; Carames, Joao

    2015-01-01

    One of the consequences of traumatic injuries is the chance that aseptic pulp necrosis will occur, which in time may become infected and give rise to periapical pathosis. Although apical granulomas and cysts are a common condition, their appearance as an extremely large radiolucent image is a rare finding. Differential diagnosis with other radiographically similar pathologies, such as keratocystic odontogenic tumour or unicystic ameloblastoma, is mandatory. The purpose of this paper is to report a very large radicular cyst caused by a single mandibular incisor traumatized long ago, in a 60-year-old male. Medical and clinical histories were obtained, radiographic and cone beam CT examinations were performed, and an initial incisional biopsy was done. The final decision was to perform a surgical enucleation of the lesion, 51.4 mm in length. Biopsy analysis of the enucleated tissue rendered the diagnosis of an inflammatory odontogenic cyst. A 2-year follow-up showed complete bone recovery. PMID:26393219

  18. Timing of Formal Phase Safety Reviews for Large-Scale Integrated Hazard Analysis

    Science.gov (United States)

    Massie, Michael J.; Morris, A. Terry

    2010-01-01

    Integrated hazard analysis (IHA) is a process used to identify and control unacceptable risk. As such, it does not occur in a vacuum. IHA approaches must be tailored to fit the system being analyzed. Physical, resource, organizational and temporal constraints on large-scale integrated systems impose additional direct or derived requirements on the IHA. The timing and interaction between engineering and safety organizations can provide either benefits or hindrances to the overall end product. The traditional approach for formal phase safety review timing and content, which generally works well for small- to moderate-scale systems, does not work well for very large-scale integrated systems. This paper proposes a modified approach to timing and content of formal phase safety reviews for IHA. Details of the tailoring process for IHA will describe how to avoid temporary disconnects in major milestone reviews and how to maintain a cohesive end-to-end integration story particularly for systems where the integrator inherently has little to no insight into lower level systems. The proposal has the advantage of allowing the hazard analysis development process to occur as technical data normally matures.

  19. Large deviation estimates for exceedance times of perpetuity sequences and their dual processes

    DEFF Research Database (Denmark)

    Buraczewski, Dariusz; Collamore, Jeffrey F.; Damek, Ewa

    2016-01-01

    In a variety of problems in pure and applied probability, it is relevant to study the large exceedance probabilities of the perpetuity sequence $Y_n := B_1 + A_1 B_2 + \cdots + (A_1 \cdots A_{n-1}) B_n$, where $(A_i,B_i) \subset (0,\infty) \times \mathbb{R}$. Estimates for the stationary tail distribution of $\{ Y_n \}$ have been developed in the seminal papers of Kesten (1973) and Goldie (1991). Specifically, it is well known that if $M := \sup_n Y_n$, then ${\mathbb P} \left\{ M > u \right\} \sim {\cal C}_M u^{-\xi}$ as $u \to \infty$. While much attention has been focused on extending […] -time exceedance probabilities of $\{ M_n^\ast \}$, yielding a new result concerning the convergence of $\{ M_n^\ast \}$ to its stationary distribution.

  20. Solution of large nonlinear time-dependent problems using reduced coordinates

    International Nuclear Information System (INIS)

    Mish, K.D.

    1987-01-01

    This research is concerned with the idea of reducing a large time-dependent problem, such as one obtained from a finite-element discretization, down to a more manageable size while preserving the most-important physical behavior of the solution. This reduction process is motivated by the concept of a projection operator on a Hilbert Space, and leads to the Lanczos Algorithm for generation of approximate eigenvectors of a large symmetric matrix. The Lanczos Algorithm is then used to develop a reduced form of the spatial component of a time-dependent problem. The solution of the remaining temporal part of the problem is considered from the standpoint of numerical-integration schemes in the time domain. All of these theoretical results are combined to motivate the proposed reduced coordinate algorithm. This algorithm is then developed, discussed, and compared to related methods from the mechanics literature. The proposed reduced coordinate method is then applied to the solution of some representative problems in mechanics. The results of these problems are discussed, conclusions are drawn, and suggestions are made for related future research
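
The reduced basis described above is built with the Lanczos recursion. A generic sketch (numpy only, without the reorthogonalization and time-integration machinery discussed in the work) is given below; the matrix, starting vector and subspace size are invented for the example.

```python
import numpy as np

# Minimal Lanczos iteration for a symmetric matrix A: builds an orthonormal basis Q of a
# Krylov subspace and a small tridiagonal matrix T whose eigenpairs approximate those of A.

def lanczos(A, q0, m):
    n = len(q0)
    Q = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m)
    q = q0 / np.linalg.norm(q0)
    q_prev = np.zeros(n)
    b = 0.0
    for j in range(m):
        Q[:, j] = q
        w = A @ q - b * q_prev          # three-term recurrence
        alpha[j] = q @ w
        w -= alpha[j] * q
        b = np.linalg.norm(w)
        beta[j] = b
        q_prev, q = q, w / b
    T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
    return Q, T

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 200))
A = (A + A.T) / 2
Q, T = lanczos(A, np.ones(200), m=30)
# the extreme eigenvalues of the small T already approximate those of the full A
print(np.sort(np.linalg.eigvalsh(T))[-3:], np.sort(np.linalg.eigvalsh(A))[-3:])
```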

  1. Incorporating Real-time Earthquake Information into Large Enrollment Natural Disaster Course Learning

    Science.gov (United States)

    Furlong, K. P.; Benz, H.; Hayes, G. P.; Villasenor, A.

    2010-12-01

    Although most would agree that the occurrence of natural disaster events such as earthquakes, volcanic eruptions, and floods can provide effective learning opportunities for natural hazards-based courses, implementing compelling materials into the large-enrollment classroom environment can be difficult. These natural hazard events derive much of their learning potential from their real-time nature, and in the modern 24/7 news-cycle where all but the most devastating events are quickly out of the public eye, the shelf life for an event is quite limited. To maximize the learning potential of these events requires that both authoritative information be available and course materials be generated as the event unfolds. Although many events such as hurricanes, flooding, and volcanic eruptions provide some precursory warnings, and thus one can prepare background materials to place the main event into context, earthquakes present a particularly confounding situation of providing no warning, but where context is critical to student learning. Attempting to implement real-time materials into large enrollment classes faces the additional hindrance of limited internet access (for students) in most lecture classrooms. In Earth 101 Natural Disasters: Hollywood vs Reality, taught as a large enrollment (150+ students) general education course at Penn State, we are collaborating with the USGS’s National Earthquake Information Center (NEIC) to develop efficient means to incorporate their real-time products into learning activities in the lecture hall environment. Over time (and numerous events) we have developed a template for presenting USGS-produced real-time information in lecture mode. The event-specific materials can be quickly incorporated and updated, along with key contextual materials, to provide students with up-to-the-minute current information. In addition, we have also developed in-class activities, such as student determination of population exposure to severe ground

  2. Large Time Behavior for Weak Solutions of the 3D Globally Modified Navier-Stokes Equations

    Directory of Open Access Journals (Sweden)

    Junbai Ren

    2014-01-01

    This paper is concerned with the large time behavior of the weak solutions for three-dimensional globally modified Navier-Stokes equations. With the aid of energy methods and auxiliary decay estimates together with $L^p$-$L^q$ estimates of the heat semigroup, we derive the optimal upper and lower decay estimates of the weak solutions for the globally modified Navier-Stokes equations as $C_1(1+t)^{-3/4} \le \|u\|_{L^2} \le C_2(1+t)^{-3/4}$, $t > 1$. The decay rate is optimal since it coincides with that of the heat equation.

  3. Mining Outlier Data in Mobile Internet-Based Large Real-Time Databases

    Directory of Open Access Journals (Sweden)

    Xin Liu

    2018-01-01

    Mining outlier data guarantees access security and data scheduling of parallel databases and maintains high-performance operation of real-time databases. Traditional mining methods generate abundant interference data with reduced accuracy, efficiency, and stability, causing severe deficiencies. This paper proposes a new method for mining outlier data. It analyzes real-time data features, obtains magnitude spectrum models of the outlier data, establishes a decision-tree information chain transmission model for outlier data in the mobile Internet, obtains the information flow of internal outlier data in the information chain of a large real-time database, and clusters the data. From the local characteristic time scale parameters of the information flow, the phase features of the outlier data before filtering are obtained; the decision-tree outlier-classification feature-filtering algorithm is adopted to acquire the signals for analysis and their instantaneous amplitude, and to obtain the phase-frequency characteristics of the outlier data. Wavelet threshold denoising is combined with signal denoising to analyze the data offset, to correct the detection filter model, and to realize outlier data mining. Simulations suggest that the method detects the characteristic response distribution of the outlier data, reduces response time, iteration count, and mining error rate, improves mining adaptation and coverage, and shows good mining outcomes.

  4. TORCH: A Large-Area Detector for Precision Time-of-Flight Measurements at LHCb

    CERN Document Server

    Harnew, N

    2012-01-01

    The TORCH (Time Of internally Reflected CHerenkov light) is an innovative high-precision time-of-flight detector which is suitable for large areas, up to tens of square metres, and is being developed for the upgraded LHCb experiment. The TORCH provides a time-of-flight measurement from the imaging of photons emitted in a 1 cm thick quartz radiator, based on the Cherenkov principle. The photons propagate by total internal reflection to the edge of the quartz plane and are then focused onto an array of Micro-Channel Plate (MCP) photon detectors at the periphery of the detector. The goal is to achieve a timing resolution of 15 ps per particle over a flight distance of 10 m. This will allow particle identification in the challenging momentum region up to 20 GeV/c. Commercial MCPs have been tested in the laboratory and demonstrate the required timing precision. An electronics readout system based on the NINO and HPTDC chipset is being developed to evaluate an 8×8 channel TORCH prototype. The simulated performance...

  5. Event processing time prediction at the CMS experiment of the Large Hadron Collider

    International Nuclear Information System (INIS)

    Cury, Samir; Gutsche, Oliver; Kcira, Dorian

    2014-01-01

    The physics event reconstruction is one of the biggest challenges for the computing of the LHC experiments. Among the different tasks that the computing systems of the CMS experiment perform, reconstruction takes most of the available CPU resources. The reconstruction time of single collisions varies according to event complexity. Measurements were made in order to determine this correlation quantitatively, creating means to predict it based on the data-taking conditions of the input samples. Currently the data processing system splits tasks into groups with the same number of collisions and does not account for variations in the processing time. These variations can be large and can lead to a considerable increase in the time it takes for CMS workflows to finish. The goal of this study was to use estimates of processing time to more efficiently split the workflow into jobs. By considering the CPU time needed for each job, the spread of the job-length distribution in a workflow is reduced.
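
The splitting-by-predicted-time idea can be illustrated with a simple greedy packer. The linear pileup-to-seconds model and the target job length below are assumptions made for illustration, not the CMS production model.

```python
# Sketch: split events into jobs of roughly equal predicted CPU time rather than equal
# event counts.

def predict_seconds(event):
    # assumed linear dependence of per-event processing time on pileup (illustrative only)
    return 2.0 + 0.5 * event["pileup"]

def split_by_time(events, target_job_seconds):
    jobs, current, load = [], [], 0.0
    for ev in events:
        cost = predict_seconds(ev)
        if current and load + cost > target_job_seconds:
            jobs.append(current)          # close the current job once its budget is full
            current, load = [], 0.0
        current.append(ev)
        load += cost
    if current:
        jobs.append(current)
    return jobs

events = [{"id": i, "pileup": 20 + (i % 40)} for i in range(10_000)]
jobs = split_by_time(events, target_job_seconds=8 * 3600)
print(len(jobs), "jobs of roughly equal predicted length")
```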

  6. Evaluating Varied Label Designs for Use with Medical Devices: Optimized Labels Outperform Existing Labels in the Correct Selection of Devices and Time to Select.

    Directory of Open Access Journals (Sweden)

    Laura Bix

    Effective standardization of medical device labels requires objective study of varied designs. Insufficient empirical evidence exists regarding how practitioners utilize and view labeling. The objective was to measure the effect of graphic elements (boxing information, grouping information, symbol use and color-coding) to optimize a label for comparison with those typical of commercial medical devices. Participants viewed 54 trials on a computer screen. Trials were comprised of two labels that were identical with regard to graphics, but differed in one aspect of information (e.g., one had latex, the other did not). Participants were instructed to select the label meeting a given criterion (e.g., latex-containing) as quickly as possible. Dependent variables were binary (correct selection) and continuous (time to correct selection). Eighty-nine healthcare professionals were recruited at Association of Surgical Technologists (AST) conferences and using a targeted e-mail to AST members. Symbol presence, color coding and grouping critical pieces of information all significantly improved selection rates and sped time to correct selection (α = 0.05). Conversely, when critical information was graphically boxed, probability of correct selection and time to selection were impaired (α = 0.05). Subsequently, responses from trials containing optimal treatments (color coded, critical information grouped with symbols) were compared to two labels created based on a review of those commercially available. Optimal labels yielded a significant positive benefit regarding the probability of correct choice (P < 0.0001; LSM: 97.3%; UCL, LCL: 98.4%, 95.5%) as compared to the two labels we created based on commercial designs (92.0%; 94.7%, 87.9% and 89.8%; 93.0%, 85.3%), and regarding time to selection. Our study provides data regarding design factors, namely color coding, symbol use and grouping of critical information, that can be used to significantly enhance the performance of medical device labels.

  7. THE WIGNER–FOKKER–PLANCK EQUATION: STATIONARY STATES AND LARGE TIME BEHAVIOR

    KAUST Repository

    ARNOLD, ANTON

    2012-11-01

    We consider the linear Wigner-Fokker-Planck equation subject to confining potentials which are smooth perturbations of the harmonic oscillator potential. For a certain class of perturbations we prove that the equation admits a unique stationary solution in a weighted Sobolev space. A key ingredient of the proof is a new result on the existence of spectral gaps for Fokker-Planck type operators in certain weighted $L^2$-spaces. In addition we show that the steady state corresponds to a positive density matrix operator with unit trace and that the solutions of the time-dependent problem converge towards the steady state with an exponential rate. © 2012 World Scientific Publishing Company.

  8. Real-time graphic display system for ROSA-V Large Scale Test Facility

    International Nuclear Information System (INIS)

    Kondo, Masaya; Anoda, Yoshinari; Osaki, Hideki; Kukita, Yutaka; Takigawa, Yoshio.

    1993-11-01

    A real-time graphic display system was developed for the ROSA-V Large Scale Test Facility (LSTF) experiments simulating accident management measures for prevention of severe core damage in pressurized water reactors (PWRs). The system works on an IBM workstation (Power Station RS/6000 model 560) and accommodates 512 channels out of about 2500 total measurements in the LSTF. It has three major functions: (a) displaying the coolant inventory distribution in the facility primary and secondary systems; (b) displaying the measured quantities at desired locations in the facility; and (c) displaying the time histories of measured quantities. The coolant inventory distribution is derived from differential pressure measurements along vertical sections and gamma-ray densitometer measurements for horizontal legs. The color display indicates liquid subcooling calculated from pressure and temperature at individual locations. (author)

  9. Real-world-time simulation of memory consolidation in a large-scale cerebellar model

    Directory of Open Access Journals (Sweden)

    Masato eGosui

    2016-03-01

    We report development of a large-scale spiking network model of the cerebellum composed of more than 1 million neurons. The model is implemented on graphics processing units (GPUs), which are dedicated hardware for parallel computing. Using 4 GPUs simultaneously, we achieve real-time simulation, in which computer simulation of cerebellar activity for 1 sec completes within 1 sec in the real-world time, with temporal resolution of 1 msec. This allows us to carry out a very long-term computer simulation of cerebellar activity in a practical time with millisecond temporal resolution. Using the model, we carry out computer simulation of long-term gain adaptation of optokinetic response (OKR) eye movements for 5 days aimed to study the neural mechanisms of posttraining memory consolidation. The simulation results are consistent with animal experiments and our theory of posttraining memory consolidation. These results suggest that real-time computing provides a useful means to study a very slow neural process such as memory consolidation in the brain.

  10. analysis of large electromagnetic pulse simulators using the electric field integral equation method in time domain

    International Nuclear Information System (INIS)

    Jamali, J.; Aghajafari, R.; Moini, R.; Sadeghi, H.

    2002-01-01

    A time-domain approach is presented to calculate electromagnetic fields inside a large Electromagnetic Pulse (EMP) simulator. This type of EMP simulator is used for studying the effect of electromagnetic pulses on electrical apparatus in various structures such as vehicles, aeroplanes, etc. The simulator consists of three planar transmission lines. To solve the problem, we first model the metallic structure of the simulator as a grid of conducting wires. The numerical solution of the governing electric field integral equation is then obtained using the method of moments in the time domain. To demonstrate the accuracy of the model, we consider a typical EMP simulator. The comparison of our results with those obtained experimentally in the literature validates the model introduced in this paper.

  11. Parallel Motion Simulation of Large-Scale Real-Time Crowd in a Hierarchical Environmental Model

    Directory of Open Access Journals (Sweden)

    Xin Wang

    2012-01-01

    This paper presents a parallel real-time crowd simulation method based on a hierarchical environmental model. A dynamical model of the complex environment should be constructed to simulate the state transition and propagation of individual motions. By modeling the virtual environment where virtual crowds reside, we employ different parallel methods on a topological layer, a path layer and a perceptual layer. We propose a parallel motion path matching method based on the path layer and a parallel crowd simulation method based on the perceptual layer. Large-scale real-time crowd simulation becomes possible with these methods. Numerical experiments are carried out to demonstrate the methods and results.

  12. Modeling Optical Spectra of Large Organic Systems Using Real-Time Propagation of Semiempirical Effective Hamiltonians.

    Science.gov (United States)

    Ghosh, Soumen; Andersen, Amity; Gagliardi, Laura; Cramer, Christopher J; Govind, Niranjan

    2017-09-12

    We present an implementation of a time-dependent semiempirical method (INDO/S) in NWChem using real-time (RT) propagation to address, in principle, the entire spectrum of valence electronic excitations. Adopting this model, we study the UV/vis spectra of medium-sized systems such as P3B2 and f-coronene, and in addition much larger systems such as ubiquitin in the gas phase and the betanin chromophore in the presence of two explicit solvents (water and methanol). RT-INDO/S provides qualitatively and often quantitatively accurate results when compared with RT-TDDFT or experimental spectra. Even though we only consider the INDO/S Hamiltonian in this work, our implementation provides a framework for performing electron dynamics in large systems using semiempirical Hartree-Fock Hamiltonians in general.
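
The real-time propagation strategy itself is generic. The sketch below propagates an invented two-level model Hamiltonian after a delta-kick perturbation and Fourier-transforms the induced dipole to obtain an absorption-like spectrum; it only illustrates RT propagation and is not the NWChem RT-INDO/S implementation.

```python
import numpy as np
from scipy.linalg import expm

# Model Hamiltonian and dipole operator (both invented for the example, atomic units).
H0 = np.diag([0.0, 0.30])
mu = np.array([[0.0, 1.0], [1.0, 0.0]])
dt, nsteps, kick = 0.2, 4000, 1e-3

rho = np.diag([1.0, 0.0]).astype(complex)                    # ground-state density matrix
rho = expm(1j * kick * mu) @ rho @ expm(-1j * kick * mu)     # delta-kick perturbation

U = expm(-1j * H0 * dt)                                      # propagator (time-independent H)
dipole = np.empty(nsteps)
for n in range(nsteps):
    rho = U @ rho @ U.conj().T                               # one real-time step
    dipole[n] = np.real(np.trace(mu @ rho))

spectrum = np.abs(np.fft.rfft(dipole - dipole.mean()))
freqs = np.fft.rfftfreq(nsteps, d=dt) * 2 * np.pi
print("peak near (a.u.):", freqs[spectrum.argmax()])         # ~0.30, the model gap
```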

  13. Variation in Patients' Travel Times among Imaging Examination Types at a Large Academic Health System.

    Science.gov (United States)

    Rosenkrantz, Andrew B; Liang, Yu; Duszak, Richard; Recht, Michael P

    2017-08-01

    Patients' willingness to travel farther distances for certain imaging services may reflect their perceptions of the degree of differentiation of such services. We compare patients' travel times for a range of imaging examinations performed across a large academic health system. We searched the NYU Langone Medical Center Enterprise Data Warehouse to identify 442,990 adult outpatient imaging examinations performed over a recent 3.5-year period. Geocoding software was used to estimate typical driving times from patients' residences to imaging facilities. Variation in travel times was assessed among examination types. The mean expected travel time was 29.2 ± 20.6 minutes, but this varied significantly among examination types. Travel times were shortest for ultrasound (26.8 ± 18.9) and longest for positron emission tomography-computed tomography (31.9 ± 21.5). For magnetic resonance imaging, travel times were shortest for musculoskeletal extremity (26.4 ± 19.2) and spine (28.6 ± 21.0) examinations and longest for prostate (35.9 ± 25.6) and breast (32.4 ± 22.3) examinations. For computed tomography, travel times were shortest for a range of screening examinations [colonography (25.5 ± 20.8), coronary artery calcium scoring (26.1 ± 19.2), and lung cancer screening (26.4 ± 14.9)] and longest for angiography (32.0 ± 22.6). For ultrasound, travel times were shortest for aortic aneurysm screening (22.3 ± 18.4) and longest for breast (30.1 ± 19.2) examinations. Overall, men (29.9 ± 21.6) had longer travel times than women (27.8 ± 20.3); this difference persisted for each modality individually (p ≤ 0.006). Patients' willingness to travel longer times for certain imaging examination types (particularly breast and prostate imaging) supports the role of specialized services in combating potential commoditization of imaging services. Disparities in travel times by gender warrant further investigation.

  14. A Matter of Time: Faster Percolator Analysis via Efficient SVM Learning for Large-Scale Proteomics.

    Science.gov (United States)

    Halloran, John T; Rocke, David M

    2018-05-04

    Percolator is an important tool for greatly improving the results of a database search and subsequent downstream analysis. Using support vector machines (SVMs), Percolator recalibrates peptide-spectrum matches based on the learned decision boundary between targets and decoys. To improve analysis time for large-scale data sets, we update Percolator's SVM learning engine through software and algorithmic optimizations rather than heuristic approaches that necessitate the careful study of their impact on learned parameters across different search settings and data sets. We show that by optimizing Percolator's original learning algorithm, l2-SVM-MFN, large-scale SVM learning requires only about a third of the original runtime. Furthermore, we show that by employing the widely used Trust Region Newton (TRON) algorithm instead of l2-SVM-MFN, large-scale Percolator SVM learning is reduced to only about a fifth of the original runtime. Importantly, these speedups only affect the speed at which Percolator converges to a global solution and do not alter recalibration performance. The upgraded versions of both l2-SVM-MFN and TRON are optimized within the Percolator codebase for multithreaded and single-thread use and are available under the Apache license at bitbucket.org/jthalloran/percolator_upgrade.

  15. Direct Analysis in Real Time Mass Spectrometry for Characterization of Large Saccharides.

    Science.gov (United States)

    Ma, Huiying; Jiang, Qing; Dai, Diya; Li, Hongli; Bi, Wentao; Da Yong Chen, David

    2018-03-06

    Polysaccharide characterization poses the most difficult challenge to available analytical technologies compared to other types of biomolecules. Plant polysaccharides are reported to have numerous medicinal values, but their effect can differ based on the type of plant, and even the region of production and conditions of cultivation. However, the molecular basis of the differences between these polysaccharides is largely unknown. In this study, direct analysis in real time mass spectrometry (DART-MS) was used to generate polysaccharide fingerprints. Large saccharides can break down into characteristic small fragments in the DART source via pyrolysis, and the products are then detected by high resolution MS. Temperature was shown to be a crucial parameter for the decomposition of large polysaccharides. The general behavior of carbohydrates in DART-MS was also studied through the investigation of a number of mono- and oligosaccharide standards. The chemical formulas and putative ionic forms of the fragments were proposed based on accurate mass with less than 10 ppm mass error. Multivariate data analysis shows clear differentiation of different plant species. Intensities of marker ions compared among samples also showed obvious differences. The combination of DART-MS analysis and the mechanochemical extraction method used in this work demonstrates a simple, fast, and high-throughput analytical protocol for the efficient evaluation of molecular features in plant polysaccharides.

  16. Femtosecond time-resolved studies of coherent vibrational Raman scattering in large gas-phase molecules

    International Nuclear Information System (INIS)

    Hayden, C.C.; Chandler, D.W.

    1995-01-01

    Results are presented from femtosecond time-resolved coherent Raman experiments in which we excite and monitor vibrational coherence in gas-phase samples of benzene and 1,3,5-hexatriene. Different physical mechanisms for coherence decay are seen in these two molecules. In benzene, where the Raman polarizability is largely isotropic, the Q branch of the vibrational Raman spectrum is the primary feature excited. Molecules in different rotational states have different Q-branch transition frequencies due to vibration--rotation interaction. Thus, the macroscopic polarization that is observed in these experiments decays because it has many frequency components from molecules in different rotational states, and these frequency components go out of phase with each other. In 1,3,5-hexatriene, the Raman excitation produces molecules in a coherent superposition of rotational states, through (O, P, R, and S branch) transitions that are strong due to the large anisotropy of the Raman polarizability. The coherent superposition of rotational states corresponds to initially spatially oriented, vibrationally excited, molecules that are freely rotating. The rotation of molecules away from the initial orientation is primarily responsible for the coherence decay in this case. These experiments produce large (∼10% efficiency) Raman shifted signals with modest excitation pulse energies (10 μJ) demonstrating the feasibility of this approach for a variety of gas phase studies. copyright 1995 American Institute of Physics

  17. Time to "go large" on biofilm research: advantages of an omics approach.

    Science.gov (United States)

    Azevedo, Nuno F; Lopes, Susana P; Keevil, Charles W; Pereira, Maria O; Vieira, Maria J

    2009-04-01

    In nature, the biofilm mode of life is of great importance in the cell cycle for many microorganisms. Perhaps because of biofilm complexity and variability, the characterization of a given microbial system, in terms of biofilm formation potential, structure and associated physiological activity, in a large-scale, standardized and systematic manner has been hindered by the absence of high-throughput methods. This outlook is now starting to change as new methods involving the utilization of microtiter-plates and automated spectrophotometry and microscopy systems are being developed to perform large-scale testing of microbial biofilms. Here, we evaluate if the time is ripe to start an integrated omics approach, i.e., the generation and interrogation of large datasets, to biofilms--"biofomics". This omics approach would bring much needed insight into how biofilm formation ability is affected by a number of environmental, physiological and mutational factors and how these factors interplay between themselves in a standardized manner. This could then lead to the creation of a database where biofilm signatures are identified and interrogated. Nevertheless, and before embarking on such an enterprise, the selection of a versatile, robust, high-throughput biofilm growing device and of appropriate methods for biofilm analysis will have to be performed. Whether such device and analytical methods are already available, particularly for complex heterotrophic biofilms is, however, very debatable.

  18. Locating single-point sources from arrival times containing large picking errors (LPEs): the virtual field optimization method (VFOM)

    Science.gov (United States)

    Li, Xi-Bing; Wang, Ze-Wei; Dong, Long-Jun

    2016-01-01

    Microseismic monitoring systems using local location techniques tend to be timely, automatic and stable. One basic requirement of these systems is the automatic picking of arrival times. However, arrival times generated by automated techniques always contain large picking errors (LPEs), which may make the location solution unreliable and cause the integrated system to be unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. In contrast to existing approaches, the VFOM optimizes a continuous, virtually established objective function to search the space for the common intersection of the hyperboloids determined by sensor pairs, rather than minimizing the residual between model-calculated and measured arrival times. The results of numerical examples and in-situ blasts show that the VFOM can obtain more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on objective functions to determine the LPE-tolerant mechanism, velocity sensitivity and stopping criteria of the VFOM. The proposed method is also capable of locating acoustic sources using passive techniques such as passive sonar detection and acoustic emission.
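
For orientation, the sketch below shows the conventional residual-based (TDOA-style) grid search that such arrival-time locators improve upon; it is not the VFOM itself, and the sensor layout, wave speed, picking errors and grid are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(3)
sensors = rng.uniform(0, 100, size=(8, 3))          # 8 sensors in a 100 m cube (assumed)
v = 5000.0                                          # assumed wave speed, m/s
true_src = np.array([42.0, 57.0, 13.0])
arrivals = np.linalg.norm(sensors - true_src, axis=1) / v
arrivals[:2] += 2e-3                                # two picks with large errors (LPEs)

def tdoa_misfit(p):
    """Sum of squared residuals between observed and predicted pairwise arrival-time differences."""
    pred = np.linalg.norm(sensors - p, axis=1) / v
    d_obs = arrivals[:, None] - arrivals[None, :]
    d_pred = pred[:, None] - pred[None, :]
    return np.sum((d_obs - d_pred) ** 2)

# brute-force grid search over candidate source positions
grid = np.stack(np.meshgrid(*[np.linspace(0, 100, 21)] * 3), axis=-1).reshape(-1, 3)
best = grid[np.argmin([tdoa_misfit(p) for p in grid])]
print("true:", true_src, "estimated:", best)
```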

  19. A study of residence time distribution using radiotracer technique in the large scale plant facility

    Science.gov (United States)

    Wetchagarun, S.; Tippayakul, C.; Petchrak, A.; Sukrod, K.; Khoonkamjorn, P.

    2017-06-01

    As the demand for troubleshooting of large industrial plants increases, radiotracer techniques, which can provide fast, online and effective detection of plant problems, have been continually developed. One promising application of radiotracers for troubleshooting in a process plant is the analysis of the Residence Time Distribution (RTD). In this paper, a study of RTD in a large scale plant facility using a radiotracer technique is presented. The objective of this work is to gain experience with RTD analysis using a radiotracer technique in a “larger than laboratory” scale plant setup comparable to a real industrial application. The experiment was carried out at the sedimentation tank in the water treatment facility of the Thailand Institute of Nuclear Technology (Public Organization). Br-82 was selected for use in this work due to its chemical properties, its suitable half-life and its on-site availability. NH4Br in the form of an aqueous solution was injected into the system as the radiotracer. Six NaI detectors were placed along the pipelines and at the tank in order to determine the RTD of the system. The RTD and the Mean Residence Time (MRT) of the tank were analysed and calculated from the measured data. The experience and knowledge attained from this study are important for extending this technique to industrial facilities in the future.
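
The MRT quoted in such studies follows from the normalized tracer response curve, E(t) = C(t) / ∫C dt and MRT = ∫ t·E(t) dt. A minimal sketch with a made-up outlet curve is shown below; in the study the input would be the background- and decay-corrected count rate from the NaI detector at the tank outlet.

```python
import numpy as np

t = np.linspace(0, 3600, 721)                       # time axis, s
c = np.exp(-(np.log(t + 1) - 6.5) ** 2 / 0.3)       # synthetic outlet tracer response

dt = t[1] - t[0]
E = c / (c.sum() * dt)                              # normalized residence time distribution
mrt = (t * E).sum() * dt                            # mean residence time
variance = ((t - mrt) ** 2 * E).sum() * dt          # spread of the RTD
print(f"MRT = {mrt:.0f} s, sigma^2 = {variance:.0f} s^2")
```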

  20. Research on resistance characteristics of YBCO tape under short-time DC large current impact

    Science.gov (United States)

    Zhang, Zhifeng; Yang, Jiabin; Qiu, Qingquan; Zhang, Guomin; Lin, Liangzhen

    2017-06-01

    Research on the resistance characteristics of YBCO tape under short-time DC large current impact is the foundation for developing a DC superconducting fault current limiter (SFCL) for voltage source converter-based high voltage direct current (VSC-HVDC) systems, which are one of the valid approaches to solving the problems of renewable energy integration. An SFCL can limit DC short-circuit currents and enhance the interrupting capability of DC circuit breakers. In this paper, under short-time DC large current impacts, the resistance features of naked YBCO tape are studied to find the resistance-temperature change rule and the maximum impact current. The influence of insulation on the resistance-temperature characteristics of YBCO tape is studied by comparison tests of naked tape and insulated tape at 77 K. The influence of operating temperature on the tape is also studied under subcooled liquid nitrogen conditions. For the current impact security of YBCO tape, the critical current degradation and peak temperature are analyzed and used as judgment criteria. The test results are helpful for developing SFCLs for VSC-HVDC systems.

  1. Piloted simulator study of allowable time delays in large-airplane response

    Science.gov (United States)

    Grantham, William D.; Tingas, Stephen A.

    1987-01-01

    A piloted simulation was performed to determine the permissible time delay and phase shift in the flight control system of a specific large transport-type airplane. The study was conducted with a six-degree-of-freedom ground-based simulator and a math model similar to an advanced wide-body jet transport. Time delays in discrete and lagged form were incorporated into the longitudinal, lateral, and directional control systems of the airplane. Three experienced pilots flew simulated approaches and landings with random localizer and glide slope offsets during instrument tracking as their principal evaluation task. Results of the present study suggest a level 1 (satisfactory) handling qualities limit for the effective time delay of 0.15 sec in both the pitch and roll axes, as opposed to the 0.10-sec limit of the present specification (MIL-F-8785C) for both axes. Also, the present results suggest a level 2 (acceptable but unsatisfactory) handling qualities limit for an effective time delay of 0.82 sec and 0.57 sec for the pitch and roll axes, respectively, as opposed to 0.20 sec of the present specifications for both axes. In the area of phase shift between cockpit input and control surface deflection, the results of this study, flown in turbulent air, suggest less severe phase shift limitations for the approach and landing task - approximately 50 deg. in pitch and 40 deg. in roll - as opposed to 15 deg. of the present specifications for both axes.

  2. Time-resolved triton burnup measurement using the scintillating fiber detector in the Large Helical Device

    Science.gov (United States)

    Ogawa, K.; Isobe, M.; Nishitani, T.; Murakami, S.; Seki, R.; Nakata, M.; Takada, E.; Kawase, H.; Pu, N.; LHD Experiment Group

    2018-03-01

    Time-resolved measurement of triton burnup is performed with a scintillating fiber detector system in the deuterium operation of the Large Helical Device. The scintillating fiber detector system is composed of a detector head consisting of 109 scintillating fibers, each having a diameter of 1 mm and a length of 100 mm, embedded in an aluminum substrate, a magnetic-resistant photomultiplier tube, and a data acquisition system equipped with a 1 GHz sampling rate analog-to-digital converter and a field programmable gate array. A discrimination level of 150 mV was set to extract the pulse signals induced by 14 MeV neutrons according to the pulse height spectra obtained in the experiment. The decay time of the 14 MeV neutron emission rate after the neutral beam is turned off is measured by the scintillating fiber detector. This decay time is consistent with the decay time of the total neutron emission rate corresponding to the 14 MeV neutrons measured by the neutron flux monitor, as expected. Evaluation of the diffusion coefficient is conducted using a simple classical slowing-down model, the FBURN code. The diffusion coefficient of the tritons is evaluated to be less than 0.2 m² s⁻¹.

  3. Rapid Large Earthquake and Run-up Characterization in Quasi Real Time

    Science.gov (United States)

    Bravo, F. J.; Riquelme, S.; Koch, P.; Cararo, S.

    2017-12-01

    Several tests in quasi real time have been conducted by the rapid response group at CSN (National Seismological Center) to characterize earthquakes in real time. These methods are known for their robustness and reliability in creating Finite Fault Models (FFMs). The W-phase FFM inversion, the wavelet-domain FFM and the body-wave FFM have been implemented in real time at CSN; all these algorithms run automatically and are triggered by the W-phase point source inversion. Fault dimensions (length and width) are predefined by adopting scaling laws for earthquakes in subduction zones. We tested this scheme on the last four major earthquakes that occurred in Chile: the 2010 Mw 8.8 Maule earthquake, the 2014 Mw 8.2 Iquique earthquake, the 2015 Mw 8.3 Illapel earthquake and the Mw 7.6 Melinka earthquake. We obtain many solutions as time elapses, and for each of them we calculate the run-up using an analytical formula. Our results are in agreement with FFMs already accepted by the scientific community, as well as with run-up observations in the field.

  4. Natural Time and Nowcasting Earthquakes: Are Large Global Earthquakes Temporally Clustered?

    Science.gov (United States)

    Luginbuhl, Molly; Rundle, John B.; Turcotte, Donald L.

    2018-02-01

    The objective of this paper is to analyze the temporal clustering of large global earthquakes with respect to natural time, or interevent count, as opposed to regular clock time. To do this, we use two techniques: (1) nowcasting, a new method of statistically classifying seismicity and seismic risk, and (2) time series analysis of interevent counts. We chose the sequences of M_{λ } ≥ 7.0 and M_{λ } ≥ 8.0 earthquakes from the global centroid moment tensor (CMT) catalog from 2004 to 2016 for analysis. A significant number of these earthquakes will be aftershocks of the largest events, but no satisfactory method of declustering the aftershocks in clock time is available. A major advantage of using natural time is that it eliminates the need for declustering aftershocks. The event count we utilize is the number of small earthquakes that occur between large earthquakes. The small earthquake magnitude is chosen to be as small as possible, such that the catalog is still complete based on the Gutenberg-Richter statistics. For the CMT catalog, starting in 2004, we found the completeness magnitude to be M_{σ } ≥ 5.1. For the nowcasting method, the cumulative probability distribution of these interevent counts is obtained. We quantify the distribution using the exponent, β, of the best fitting Weibull distribution; β = 1 for a random (exponential) distribution. We considered 197 earthquakes with M_{λ } ≥ 7.0 and found β = 0.83 ± 0.08. We considered 15 earthquakes with M_{λ } ≥ 8.0, but this number was considered too small to generate a meaningful distribution. For comparison, we generated synthetic catalogs of earthquakes that occur randomly with the Gutenberg-Richter frequency-magnitude statistics. We considered a synthetic catalog of 1.97 × 10^5 M_{λ } ≥ 7.0 earthquakes and found β = 0.99 ± 0.01. The random catalog converted to natural time was also random. We then generated 1.5 × 10^4 synthetic catalogs with 197 M_{λ } ≥ 7.0 in each catalog and
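
The β exponent described above can be estimated from a catalog's interevent counts with a standard Weibull fit. The sketch below uses scipy's weibull_min with synthetic exponential counts as stand-in data; the catalog query and completeness selection are not shown, and the sample size merely mirrors the 197 events mentioned above.

```python
import numpy as np
from scipy.stats import weibull_min

# "Natural time" interevent counts: number of small (M >= completeness) shocks between
# successive large events.  Synthetic stand-in data, not the CMT catalog.
rng = np.random.default_rng(42)
interevent_counts = rng.exponential(scale=250.0, size=197)

# Fit a Weibull distribution with the location fixed at zero and read off the shape
# exponent beta; beta = 1 corresponds to a random (exponential) catalog, beta < 1
# suggests temporal clustering.
beta, loc, scale = weibull_min.fit(interevent_counts, floc=0.0)
print(f"Weibull shape beta = {beta:.2f}")
```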

  5. Hypoattenuation on CTA images with large vessel occlusion: timing affects conspicuity

    Energy Technology Data Exchange (ETDEWEB)

    Dave, Prasham [University of Ottawa, MD Program, Faculty of Medicine, Ottawa, ON (Canada); Lum, Cheemun; Thornhill, Rebecca; Chakraborty, Santanu [University of Ottawa, Department of Radiology, Ottawa, ON (Canada); Ottawa Hospital Research Institute, Ottawa, ON (Canada); Dowlatshahi, Dar [Ottawa Hospital Research Institute, Ottawa, ON (Canada); University of Ottawa, Division of Neurology, Department of Medicine, Ottawa, ON (Canada)

    2017-05-15

    Parenchymal hypoattenuation distal to occlusions on CTA source images (CTASI) is perceived because of differences in tissue contrast compared to normally perfused tissue. This difference in conspicuity can be measured objectively. We evaluated the effect of contrast timing on the conspicuity of ischemic areas. We retrospectively collected consecutive patients between 2012 and 2014 with large vessel occlusions who had dynamic multiphase CT angiography (CTA) and CT perfusion (CTP). We identified areas of low cerebral blood volume on CTP maps and drew the region of interest (ROI) on the corresponding CTASI. A second ROI was placed in an area of normally perfused tissue. We evaluated conspicuity by comparing the absolute and relative change in attenuation between ischemic and normally perfused tissue over seven time points. The median absolute and relative conspicuity were greatest at the peak arterial (8.6 HU (IQR 5.1-13.9); 1.15 (1.09-1.26)), notch (9.4 HU (5.8-14.9); 1.17 (1.10-1.27)), and peak venous phases (7.0 HU (3.1-12.7); 1.13 (1.05-1.23)) compared to other portions of the time-attenuation curve (TAC). There was a significant effect of phase on the TAC for the conspicuity of ischemic vs normally perfused areas (P < 0.00001). The conspicuity of ischemic areas distal to a large artery occlusion in acute stroke is dependent on the phase of contrast arrival with dynamic CTASI and is objectively greatest in the mid-phase of the TAC. (orig.)
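
    A small worked example of the two conspicuity measures as they appear to be defined above (absolute difference and ratio of attenuation between normally perfused and ischemic ROIs); the HU values are hypothetical and the exact definitions are an assumption, not taken from the paper.

```python
# Hedged sketch of the conspicuity measures implied by the abstract; ROI values
# are invented for illustration.
normal_hu, ischemic_hu = 65.0, 56.4          # hypothetical mean ROI attenuations (HU)
absolute_conspicuity = normal_hu - ischemic_hu
relative_conspicuity = normal_hu / ischemic_hu
print(f"{absolute_conspicuity:.1f} HU, ratio {relative_conspicuity:.2f}")   # 8.6 HU, ratio 1.15
```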

  6. Backward-in-time methods to simulate large-scale transport and mixing in the ocean

    Science.gov (United States)

    Prants, S. V.

    2015-06-01

    In oceanography and meteorology, it is important to know not only where water or air masses are heading, but also where they came from. For example, it is important to find the unknown sources of oil spills in the ocean and of dangerous substance plumes in the atmosphere. It is impossible, with the help of conventional ocean and atmospheric numerical circulation models, to extrapolate backward from the observed plumes to find the source, because those models cannot be reversed in time. We review here recently elaborated backward-in-time numerical methods to identify and study mesoscale eddies in the ocean and to compute where the waters arriving in a given area came from. The area under study is populated with a large number of artificial tracers that are advected backward in time in a given velocity field that is supposed to be known analytically or numerically, or from satellite and radar measurements. After integrating the advection equations, one gets the position of each tracer on a fixed day in the past and can identify, from the known destinations, particle positions at earlier times. The results show that the method is efficient, for example, in estimating the probability of finding increased concentrations of radionuclides and other pollutants in oceanic mesoscale eddies. The backward-in-time methods are illustrated in this paper with a few examples. Backward-in-time Lagrangian maps are applied to identify eddies in satellite-derived and numerically generated velocity fields and to document the pathways by which they exchange water with their surroundings. Backward-in-time trapping maps are used to identify mesoscale eddies in the altimetric velocity field at risk of being contaminated by Fukushima-derived radionuclides. The results of simulations are compared with in situ measurements of caesium concentration in sea water samples collected in a recent research vessel cruise in the area to the east of Japan. Backward-in-time latitudinal maps and the corresponding
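
    The core of the approach, seeding tracers in a target area and integrating the advection equations backward in time, can be sketched in a few lines. The velocity field, grid, time step, and integration scheme below are placeholders chosen for illustration; a real application would use altimetry- or model-derived velocities and a higher-order integrator.

```python
# A minimal backward-in-time Lagrangian advection sketch (illustrative only).
import numpy as np

def velocity(x, y, t):
    """Hypothetical analytic 2D velocity field (m/s)."""
    u = 0.1 * np.sin(2.0 * np.pi * y / 1.0e5)
    v = 0.1 * np.cos(2.0 * np.pi * x / 1.0e5)
    return u, v

def advect_backward(x0, y0, t_end, n_steps, dt):
    """Integrate dx/dt = u, dy/dt = v backwards in time with simple Euler steps."""
    x, y, t = np.array(x0, float), np.array(y0, float), t_end
    for _ in range(n_steps):
        u, v = velocity(x, y, t)
        x -= u * dt            # minus sign: stepping backwards in time
        y -= v * dt
        t -= dt
    return x, y

# Seed tracers in the target area and find where they were 30 days earlier.
xs, ys = np.meshgrid(np.linspace(0.0, 5.0e4, 20), np.linspace(0.0, 5.0e4, 20))
x_past, y_past = advect_backward(xs.ravel(), ys.ravel(), t_end=0.0,
                                 n_steps=30 * 24, dt=3600.0)
```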

  7. A novel adaptive synchronization control of a class of master-slave large-scale systems with unknown channel time-delay

    Science.gov (United States)

    Shen, Qikun; Zhang, Tianping

    2015-05-01

    The paper addresses a practical issue for adaptive synchronization in master-slave large-scale systems with constant channel time-delay, and a novel adaptive synchronization control scheme is proposed to guarantee that the synchronization errors asymptotically converge to the origin, in which the matching condition assumed in the related literature is not necessary. The real value of the channel time-delay can be estimated online by a proper adaptation mechanism, which removes the condition, common in existing works, that the channel time-delay should be known exactly. Finally, simulation results demonstrate the effectiveness of the approach.

  8. On the need for a time- and location-dependent estimation of the NDSI threshold value for reducing existing uncertainties in snow cover maps at different scales

    Science.gov (United States)

    Härer, Stefan; Bernhardt, Matthias; Siebers, Matthias; Schulz, Karsten

    2018-05-01

    Knowledge of the current snow cover extent is essential for characterizing energy and moisture fluxes at the Earth's surface. The snow-covered area (SCA) is often estimated by using optical satellite information in combination with the normalized-difference snow index (NDSI). The NDSI approach thereby uses a threshold to define whether a satellite pixel is assumed to be snow covered or snow free. The spatiotemporal representativeness of the standard threshold of 0.4 is however questionable at the local scale. Here, we use local snow cover maps derived from ground-based photography to continuously calibrate the NDSI threshold values (NDSIthr) of Landsat satellite images at two European mountain sites over the period from 2010 to 2015. The Research Catchment Zugspitzplatt (RCZ, Germany) and the Vernagtferner area (VF, Austria) are both located within a single Landsat scene. Nevertheless, the long-term analysis demonstrated that the NDSIthr values at these sites are not correlated (r = 0.17) and differ from the standard threshold of 0.4. For further comparison, a dynamic and locally optimized NDSI threshold was used, as well as another locally optimized literature threshold value (0.7). It was shown that large uncertainties in the prediction of the SCA of up to 24.1 % exist in satellite snow cover maps in cases where the standard threshold of 0.4 is used, but a newly developed calibrated quadratic polynomial model which accounts for seasonal threshold dynamics can reduce this error. The model minimizes the SCA uncertainties at the calibration site VF by 50 % in the evaluation period and was also able to significantly improve the results at RCZ. Additionally, a scaling experiment shows that the positive effect of a locally adapted threshold diminishes at pixel sizes of 500 m or larger, underlining the general applicability of the standard threshold at larger scales.
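
    A compact sketch of the NDSI thresholding and of a brute-force threshold calibration against a ground-based reference snow map is given below; the band names, the error measure, and the calibration routine are assumptions for illustration, not the workflow used in the study.

```python
# Illustrative NDSI snow classification with a tunable threshold.
import numpy as np

def ndsi(green, swir):
    """Normalized-difference snow index from green and shortwave-infrared reflectance."""
    return (green - swir) / (green + swir + 1e-12)

def snow_mask(green, swir, threshold=0.4):
    """Pixels with NDSI above the threshold are classified as snow covered."""
    return ndsi(green, swir) > threshold

def calibrate_threshold(green, swir, reference_snow,
                        candidates=np.arange(0.0, 1.0, 0.01)):
    """Pick the threshold that best reproduces a reference snow map, e.g. one
    derived from terrestrial photography."""
    errors = [np.mean(snow_mask(green, swir, t) != reference_snow) for t in candidates]
    return float(candidates[int(np.argmin(errors))])
```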

  9. Time-domain hybrid method for simulating large amplitude motions of ships advancing in waves

    Directory of Open Access Journals (Sweden)

    Shukui Liu

    2011-03-01

    Full Text Available Typical results obtained by a newly developed, nonlinear time-domain hybrid method for simulating large amplitude motions of ships advancing with constant forward speed in waves are presented. The method is hybrid in that it combines a time-domain transient Green function method and a Rankine source method. The present approach employs a simple double integration algorithm with respect to time to simulate the free-surface boundary condition. During the simulation, the diffraction and radiation forces are computed by pressure integration over the mean wetted surface, whereas the incident wave and hydrostatic restoring forces/moments are calculated on the instantaneously wetted surface of the hull. Typical numerical results of applying the method to the seakeeping performance of a standard containership, the ITTC S175, are presented. Comparisons have been made between the results from the present method, the frequency-domain 3D panel method (NEWDRIFT) of NTUA-SDL, and available experimental data; good agreement has been observed for all studied cases.

  10. Real-Time Track Reallocation for Emergency Incidents at Large Railway Stations

    Directory of Open Access Journals (Sweden)

    Wei Liu

    2015-01-01

    Full Text Available After track capacity breakdowns at a railway station, train dispatchers need to generate appropriate track reallocation plans to recover the impacted train schedule and minimize the expected total train delay time under stochastic scenarios. This paper focuses on the real-time track reallocation problem when tracks break down at large railway stations. To represent these cases, virtual trains are introduced and activated to occupy the accident tracks. A mathematical programming model is developed, which aims at minimizing the total occupation time of station bottleneck sections to avoid train delays. In addition, a hybrid algorithm combining the genetic algorithm and the simulated annealing algorithm is designed. The case study from the Baoji railway station in China verifies the efficiency of the proposed model and the algorithm. Numerical results indicate that, during a daily and shift transport plan from 8:00 to 8:30, if five tracks break down simultaneously, train schedules will be disturbed (resulting in train arrival and departure delays).

  11. Shared control on lunar spacecraft teleoperation rendezvous operations with large time delay

    Science.gov (United States)

    Ya-kun, Zhang; Hai-yang, Li; Rui-xue, Huang; Jiang-hui, Liu

    2017-08-01

    Teleoperation could be used in on-orbit servicing missions, such as object deorbiting, spacecraft approach, and back-up for automatic rendezvous and docking systems. Teleoperation rendezvous and docking in lunar orbit may encounter bottlenecks due to the inherent time delay in the communication link and the limited measurement accuracy of sensors. Moreover, human intervention is unsuitable in view of the partial communication coverage problem. To solve these problems, a shared control strategy for teleoperation rendezvous and docking is detailed. The allocation of control authority in lunar orbital maneuvers involving two spacecraft during final-phase rendezvous and docking is discussed in this paper. A predictive display model based on the relative dynamic equations is established to overcome the influence of the large time delay in the communication link. We discuss, and attempt to demonstrate via consistent ground-based simulations, the relative merits of a fully autonomous control mode (i.e., onboard computer-based), fully manual control (i.e., human-driven at the ground station), and a shared control mode. The simulation experiments were conducted on a nine-degrees-of-freedom teleoperation rendezvous and docking simulation platform. Simulation results indicated that the shared control method can overcome the influence of time delay effects. In addition, the docking success probability of the shared control method was higher than that of the automatic and manual modes.

  12. Tracking Large Area Mangrove Deforestation with Time-Series of High Fidelity MODIS Imagery

    Science.gov (United States)

    Rahman, A. F.; Dragoni, D.; Didan, K.

    2011-12-01

    Mangrove forests are important coastal ecosystems of the tropical and subtropical regions. These forests provide critical ecosystem services, fulfill important socio-economic and environmental functions, and support coastal livelihoods. But these forests are also among the most vulnerable ecosystems, both to anthropogenic disturbance and climate change. Yet, there exists no map or published study showing detailed spatiotemporal trends of mangrove deforestation at local to regional scales. There is an immediate need to produce such detailed maps to further study the drivers, impacts and feedbacks of anthropogenic and climate factors on mangrove deforestation, and to develop local and regional scale adaptation/mitigation strategies. In this study we use a time-series of high fidelity imagery from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) for tracking changes in the greenness of mangrove forests of Kalimantan Island of Indonesia. A novel method of filtering satellite data for cloud, aerosol, and view angle effects was used to produce high fidelity MODIS time-series images at 250-meter spatial resolution and three-month temporal resolution for the period of 2000-2010. Enhanced Vegetation Index 2 (EVI2), a measure of vegetation greenness, was calculated from these images for each pixel at each time interval. Temporal variations in the EVI2 of each pixel were tracked as a proxy for mangrove deforestation using the statistical method of change-point analysis. Results of this change detection were validated using Monte Carlo simulation, photographs from Google Earth, finer spatial resolution images from the Landsat satellite, and ground-based GIS data.
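
    The per-pixel analysis described above can be illustrated with a toy version: compute EVI2 from red and near-infrared reflectance and locate a single change point in the resulting time series. The change-point criterion (minimizing within-segment variance) and the synthetic series are illustrative assumptions, not the statistical method used in the study.

```python
# Toy change-point detection on a synthetic EVI2 time series (illustrative only).
import numpy as np

def evi2(nir, red):
    """Two-band Enhanced Vegetation Index (EVI2)."""
    return 2.5 * (nir - red) / (nir + 2.4 * red + 1.0)

def single_change_point(series):
    """Index that best splits the series into two near-constant segments."""
    n = len(series)
    best_k, best_cost = None, np.inf
    for k in range(2, n - 2):
        cost = np.var(series[:k]) * k + np.var(series[k:]) * (n - k)
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

# Hypothetical quarterly EVI2 values for one 250 m pixel over 2000-2010.
rng = np.random.default_rng(2)
series = np.r_[np.full(25, 0.55), np.full(19, 0.25)] + rng.normal(0.0, 0.03, 44)
print("change detected at time step:", single_change_point(series))
```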

  13. Statistical analysis of error rate of large-scale single flux quantum logic circuit by considering fluctuation of timing parameters

    International Nuclear Information System (INIS)

    Yamanashi, Yuki; Masubuchi, Kota; Yoshikawa, Nobuyuki

    2016-01-01

    The relationship between the timing margin and the error rate of large-scale single flux quantum logic circuits is quantitatively investigated to establish a timing design guideline. We observed that the fluctuation in the set-up/hold time of single flux quantum logic gates caused by thermal noise is the most probable origin of logical errors in large-scale single flux quantum circuits. The appropriate timing margin for stable operation of the large-scale logic circuit is discussed by taking into account the fluctuation of the set-up/hold time and the timing jitter in single flux quantum circuits. As a case study, the dependence of the error rate of a 1-million-bit single flux quantum shift register on the timing margin is statistically analyzed. The result indicates that adjustment of the timing margin and the bias voltage is important for stable operation of a large-scale SFQ logic circuit.
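
    A toy model of the relationship between timing margin and circuit error rate is sketched below: each gate is assumed to fail when a Gaussian fluctuation of its set-up/hold time exceeds the timing margin, and the shift-register error rate follows from the number of gates. The Gaussian model, the spread value, and the margins are assumptions for illustration, not the paper's statistical analysis.

```python
# Toy error-rate model (illustrative assumption, not the paper's analysis).
import math

def gate_error_probability(margin_ps, sigma_ps):
    """Single-gate error probability: Gaussian tail beyond the timing margin."""
    return 0.5 * math.erfc(margin_ps / (math.sqrt(2.0) * sigma_ps))

def circuit_error_rate(margin_ps, sigma_ps, n_gates=1_000_000):
    """Probability that at least one of n_gates gates fails in a clock cycle."""
    p = gate_error_probability(margin_ps, sigma_ps)
    return 1.0 - (1.0 - p) ** n_gates

for margin_ps in (5.0, 7.0, 9.0):      # hypothetical margins (ps), sigma = 1 ps assumed
    print(margin_ps, circuit_error_rate(margin_ps, sigma_ps=1.0))
```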

  14. Statistical analysis of error rate of large-scale single flux quantum logic circuit by considering fluctuation of timing parameters

    Energy Technology Data Exchange (ETDEWEB)

    Yamanashi, Yuki, E-mail: yamanasi@ynu.ac.jp [Department of Electrical and Computer Engineering, Yokohama National University, Tokiwadai 79-5, Hodogaya-ku, Yokohama 240-8501 (Japan); Masubuchi, Kota; Yoshikawa, Nobuyuki [Department of Electrical and Computer Engineering, Yokohama National University, Tokiwadai 79-5, Hodogaya-ku, Yokohama 240-8501 (Japan)

    2016-11-15

    The relationship between the timing margin and the error rate of large-scale single flux quantum logic circuits is quantitatively investigated to establish a timing design guideline. We observed that the fluctuation in the set-up/hold time of single flux quantum logic gates caused by thermal noise is the most probable origin of logical errors in large-scale single flux quantum circuits. The appropriate timing margin for stable operation of the large-scale logic circuit is discussed by taking into account the fluctuation of the set-up/hold time and the timing jitter in single flux quantum circuits. As a case study, the dependence of the error rate of a 1-million-bit single flux quantum shift register on the timing margin is statistically analyzed. The result indicates that adjustment of the timing margin and the bias voltage is important for stable operation of a large-scale SFQ logic circuit.

  15. Automatic Optimization for Large-Scale Real-Time Coastal Water Simulation

    Directory of Open Access Journals (Sweden)

    Shunli Wang

    2016-01-01

    Full Text Available We introduce an automatic optimization approach for the simulation of large-scale coastal water. To solve the singular problem of water waves obtained with the traditional model, a hybrid deep-shallow-water model is established by using an automatic coupling algorithm. It can handle arbitrary water depth and different underwater terrain. As a characteristic feature of coastal terrain, the coastline is detected with collision detection technology. Then, unnecessary water grid cells are simplified by the automatic simplification algorithm according to the depth. Finally, the model is calculated on the Central Processing Unit (CPU) and the simulation is implemented on the Graphics Processing Unit (GPU). We show the effectiveness of our method with various results which achieve real-time rendering on a consumer-level computer.

  16. Space-Time Convolutional Codes over Finite Fields and Rings for Systems with Large Diversity Order

    Directory of Open Access Journals (Sweden)

    B. F. Uchôa-Filho

    2008-06-01

    Full Text Available We propose a convolutional encoder over the finite ring of integers modulo p^k, ℤ_{p^k}, where p is a prime number and k is any positive integer, to generate a space-time convolutional code (STCC). Under this structure, we prove three properties related to the generator matrix of the convolutional code that can be used to simplify the code search procedure for STCCs over ℤ_{p^k}. Some STCCs of large diversity order (≥ 4) designed under the trace criterion for n = 2, 3, and 4 transmit antennas are presented for various PSK signal constellations.
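
    The encoder structure can be illustrated with a tiny rate-1/2 convolutional encoder over ℤ_4 (p = 2, k = 2); the generator taps below are arbitrary placeholders and do not correspond to the codes found in the paper or to the trace-criterion search.

```python
# Minimal convolutional encoder over the ring Z_m (illustrative taps, m = 4).
def conv_encode_mod(symbols, generators, modulus=4):
    """Rate-1/n convolutional encoder over Z_modulus.

    symbols: input ring elements; generators: one tap list per output stream."""
    memory = [0] * (max(len(g) for g in generators) - 1)
    encoded = []
    for s in symbols:
        state = [s] + memory
        encoded.append(tuple(sum(gi * si for gi, si in zip(g, state)) % modulus
                             for g in generators))
        memory = state[:-1]
    return encoded

print(conv_encode_mod([1, 3, 2, 0], generators=[[1, 2], [3, 1]]))
```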

  17. A New Paradigm for Supergranulation Derived from Large-Distance Time-Distance Helioseismology: Pancakes

    Science.gov (United States)

    Duvall, Thomas L.; Hanasoge, Shravan M.

    2012-01-01

    With large separations (10-24 deg heliocentric), it has proven possible to cleanly separate the horizontal and vertical components of supergranular flow with time-distance helioseismology. These measurements require very broad filters in the k-ω power spectrum, as supergranulation apparently scatters waves over a large area of the power spectrum. By picking locations of supergranulation as peaks in the horizontal divergence signal derived from f-mode waves, it is possible to simultaneously obtain average properties of supergranules and a high signal/noise ratio by averaging over many cells. By comparing ray-theory forward modeling with HMI measurements, an average supergranule model is obtained with a peak upflow of 240 m/s at cell center at a depth of 2.3 Mm and a peak horizontal outflow of 700 m/s at a depth of 1.6 Mm. This upflow is a factor of 20 larger than the measured photospheric upflow. These results may not be consistent with earlier measurements using much shorter separations (<5 deg heliocentric). With a 30 Mm horizontal extent and a depth of a few Mm, the cells might be characterized as thick pancakes.

  18. Numerical simulation of pseudoelastic shape memory alloys using the large time increment method

    Science.gov (United States)

    Gu, Xiaojun; Zhang, Weihong; Zaki, Wael; Moumni, Ziad

    2017-04-01

    The paper presents a numerical implementation of the large time increment (LATIN) method for the simulation of shape memory alloys (SMAs) in the pseudoelastic range. The method was initially proposed as an alternative to the conventional incremental approach for the integration of nonlinear constitutive models. It is adapted here for the simulation of pseudoelastic SMA behavior using the Zaki-Moumni model and is shown to be especially useful in situations where the phase transformation process presents little or no hardening. In these situations, a slight stress variation in a load increment can result in large variations of strain and local state variables, which may lead to difficulties in numerical convergence. In contrast to the conventional incremental method, the LATIN method solves the global equilibrium and local consistency conditions sequentially for the entire loading path. The achieved solution must satisfy the conditions of static and kinematic admissibility and consistency simultaneously after several iterations. A 3D numerical implementation is accomplished using an implicit algorithm and is then used for finite element simulation in the software Abaqus. Computational tests demonstrate the ability of this approach to simulate SMAs presenting flat phase transformation plateaus and subjected to complex loading cases, such as the quasi-static behavior of a stent structure. Some numerical results are contrasted to those obtained using step-by-step incremental integration.

  19. Latitude-Time Total Electron Content Anomalies as Precursors to Japan's Large Earthquakes Associated with Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Jyh-Woei Lin

    2011-01-01

    Full Text Available The goal of this study is to determine whether principal component analysis (PCA) can be used to process latitude-time ionospheric TEC data on a monthly basis to identify earthquake-associated TEC anomalies. PCA is applied to latitude-time (mean-of-a-month) ionospheric total electron content (TEC) records collected from the Japan GEONET network to detect TEC anomalies associated with 18 earthquakes in Japan (M ≥ 6.0) from 2000 to 2005. According to the results, PCA was able to discriminate clear TEC anomalies in the months when all 18 earthquakes occurred. After reviewing months when no M ≥ 6.0 earthquakes occurred but geomagnetic storm activity was present, it is possible that the maximal principal eigenvalues PCA returned for these 18 earthquakes indicate earthquake-associated TEC anomalies. Previously, PCA has been used to discriminate earthquake-associated TEC anomalies recognized by other researchers, who found that a statistical association between large earthquakes and TEC anomalies could be established in the 5 days before earthquake nucleation; however, since PCA uses the characteristics of principal eigenvalues to determine earthquake-related TEC anomalies, it is possible to show that such anomalies existed earlier than this 5-day statistical window.
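
    A bare-bones version of applying PCA to a latitude-time TEC matrix is sketched below; the synthetic data, the injected anomaly, and the use of the leading covariance eigenvalue as the anomaly indicator are illustrative assumptions rather than the processing chain used in the study.

```python
# Illustrative PCA on a synthetic latitude-time TEC matrix.
import numpy as np

rng = np.random.default_rng(3)
# Rows = latitude bins, columns = time samples within one month (hypothetical sizes).
tec = 20.0 + rng.normal(0.0, 1.0, size=(40, 720))
tec[15:20, 400:430] += 8.0                     # injected localized enhancement

centered = tec - tec.mean(axis=1, keepdims=True)
eigvals, eigvecs = np.linalg.eigh(np.cov(centered))   # spatial covariance (40 x 40)
print(f"maximal principal eigenvalue: {eigvals[-1]:.1f}")
```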

  20. Thermal motion in proteins: Large effects on the time-averaged interaction energies

    International Nuclear Information System (INIS)

    Goethe, Martin; Rubi, J. Miguel; Fita, Ignacio

    2016-01-01

    As a consequence of thermal motion, inter-atomic distances in proteins fluctuate strongly around their average values, and hence interaction energies (i.e. the pair-potentials evaluated at the fluctuating distances) are also not constant in time but exhibit pronounced fluctuations. These fluctuations mean that time-averaged interaction energies do not generally coincide with the energy values obtained by evaluating the pair-potentials at the average distances. More precisely, time-averaged interaction energies typically behave more smoothly as a function of the average distance than the corresponding pair-potentials. This averaging effect is referred to as the thermal smoothing effect. Here, we estimate the strength of the thermal smoothing effect on the Lennard-Jones pair-potential for globular proteins at ambient conditions using x-ray diffraction and simulation data of a representative set of proteins. For specific atom species, we find a significant smoothing effect where the time-averaged interaction energy of a single atom pair can differ by several tens of cal/mol from the Lennard-Jones potential at the average distance. Importantly, we observe a dependency of the effect on the local environment of the involved atoms. The effect is typically weaker for bulky backbone atoms in beta sheets than for side-chain atoms belonging to other secondary structures on the surface of the protein. The results of this work have important practical implications for protein software relying on free energy expressions. We show that the accuracy of free energy expressions can be increased considerably by introducing environment-specific Lennard-Jones parameters accounting for the fact that the typical thermal motion of protein atoms depends strongly on their local environment.
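
    The thermal smoothing effect itself is easy to reproduce numerically: average the Lennard-Jones potential over a fluctuating distance and compare with the potential evaluated at the average distance. The parameters, the Gaussian distance distribution, and the cut-off below are illustrative assumptions, not the data or procedure of the study.

```python
# Illustrative estimate of the thermal smoothing of a Lennard-Jones pair energy.
import numpy as np

def lennard_jones(r, epsilon=0.1, sigma=3.5):
    """12-6 Lennard-Jones potential (kcal/mol, Angstrom); parameters are hypothetical."""
    x = (sigma / r) ** 6
    return 4.0 * epsilon * (x * x - x)

rng = np.random.default_rng(4)
r_mean, r_spread = 4.2, 0.3                    # assumed mean distance and thermal spread
r_samples = rng.normal(r_mean, r_spread, 100_000)
r_samples = r_samples[r_samples > 3.0]         # drop unphysically short distances

time_averaged = lennard_jones(r_samples).mean()
at_mean_distance = lennard_jones(r_mean)
print(f"smoothing effect: {1000.0 * (time_averaged - at_mean_distance):.0f} cal/mol")
```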

  1. Thermal motion in proteins: Large effects on the time-averaged interaction energies

    Energy Technology Data Exchange (ETDEWEB)

    Goethe, Martin, E-mail: martingoethe@ub.edu; Rubi, J. Miguel [Departament de Física Fonamental, Universitat de Barcelona, Martí i Franquès 1, 08028 Barcelona (Spain); Fita, Ignacio [Institut de Biologia Molecular de Barcelona, Baldiri Reixac 10, 08028 Barcelona (Spain)

    2016-03-15

    As a consequence of thermal motion, inter-atomic distances in proteins fluctuate strongly around their average values, and hence interaction energies (i.e. the pair-potentials evaluated at the fluctuating distances) are also not constant in time but exhibit pronounced fluctuations. These fluctuations mean that time-averaged interaction energies do not generally coincide with the energy values obtained by evaluating the pair-potentials at the average distances. More precisely, time-averaged interaction energies typically behave more smoothly as a function of the average distance than the corresponding pair-potentials. This averaging effect is referred to as the thermal smoothing effect. Here, we estimate the strength of the thermal smoothing effect on the Lennard-Jones pair-potential for globular proteins at ambient conditions using x-ray diffraction and simulation data of a representative set of proteins. For specific atom species, we find a significant smoothing effect where the time-averaged interaction energy of a single atom pair can differ by several tens of cal/mol from the Lennard-Jones potential at the average distance. Importantly, we observe a dependency of the effect on the local environment of the involved atoms. The effect is typically weaker for bulky backbone atoms in beta sheets than for side-chain atoms belonging to other secondary structures on the surface of the protein. The results of this work have important practical implications for protein software relying on free energy expressions. We show that the accuracy of free energy expressions can be increased considerably by introducing environment-specific Lennard-Jones parameters accounting for the fact that the typical thermal motion of protein atoms depends strongly on their local environment.

  2. realfast: Real-time, Commensal Fast Transient Surveys with the Very Large Array

    Science.gov (United States)

    Law, C. J.; Bower, G. C.; Burke-Spolaor, S.; Butler, B. J.; Demorest, P.; Halle, A.; Khudikyan, S.; Lazio, T. J. W.; Pokorny, M.; Robnett, J.; Rupen, M. P.

    2018-05-01

    Radio interferometers have the ability to precisely localize and better characterize the properties of sources. This ability is having a powerful impact on the study of fast radio transients, where a few milliseconds of data is enough to pinpoint a source at cosmological distances. However, recording interferometric data at millisecond cadence produces a terabyte-per-hour data stream that strains networks, computing systems, and archives. This challenge mirrors that of other domains of science, where the science scope is limited by the computational architecture as much as the physical processes at play. Here, we present a solution to this problem in the context of radio transients: realfast, a commensal, fast transient search system at the Jansky Very Large Array. realfast uses a novel architecture to distribute fast-sampled interferometric data to a 32-node, 64-GPU cluster for real-time imaging and transient detection. By detecting transients in situ, we can trigger the recording of data for those rare, brief instants when the event occurs and reduce the recorded data volume by a factor of 1000. This makes it possible to commensally search a data stream that would otherwise be impossible to record. This system will search for millisecond transients in more than 1000 hr of data per year, potentially localizing several Fast Radio Bursts, pulsars, and other sources of impulsive radio emission. We describe the science scope for realfast, the system design, expected outcomes, and ways in which real-time analysis can help in other fields of astrophysics.

  3. Suppression of the Transit -Time Instability in Large-Area Electron Beam Diodes

    Science.gov (United States)

    Myers, Matthew C.; Friedman, Moshe; Swanekamp, Stephen B.; Chan, Lop-Yung; Ludeking, Larry; Sethian, John D.

    2002-12-01

    Experiment, theory, and simulation have shown that large-area electron-beam diodes are susceptible to the transit-time instability. The instability modulates the electron beam spatially and temporally, producing a wide spread in electron energy and momentum distributions. The result is gross inefficiency in beam generation and propagation. Simulations indicate that a periodic, slotted cathode structure that is loaded with resistive elements may be used to eliminate the instability. Such a cathode has been fielded on one of the two opposing 60 cm × 200 cm diodes on the NIKE KrF laser at the Naval Research Laboratory. These diodes typically deliver 600 kV, 500 kA, 250 ns electron beams to the laser cell in an external magnetic field of 0.2 T. We conclude that the slotted cathode suppressed the transit-time instability such that the RF power was reduced by a factor of 9 and that electron transmission efficiency into the laser gas was improved by more than 50%.

  4. Suppression of the transit-time instability in large-area electron beam diodes

    International Nuclear Information System (INIS)

    Myers, Matthew C.; Friedman, Moshe; Sethian, John D.; Swanekamp, Stephen B.; Chan, L.-Y.; Ludeking, Larry

    2002-01-01

    Experiment, theory, and simulation have shown that large-area electron-beam diodes are susceptible to the transit-time instability. The instability modulates the electron beam spatially and temporally, producing a wide spread in electron energy and momentum distributions. The result is gross inefficiency in beam generation and propagation. Simulations indicate that a periodic, slotted cathode structure that is loaded with resistive elements may be used to eliminate the instability. Such a cathode has been fielded on one of the two opposing 60 cm x 200 cm diodes on the NIKE KrF laser at the Naval Research Laboratory. These diodes typically deliver 600 kV, 500 kA, 250 ns electron beams to the laser cell in an external magnetic field of 0.2 T. We conclude that the slotted cathode suppressed the transit-time instability such that the RF power was reduced by a factor of 9 and that electron transmission efficiency into the laser gas was improved by more than 50%

  5. Practical method of calculating time-integrated concentrations at medium and large distances

    International Nuclear Information System (INIS)

    Cagnetti, P.; Ferrara, V.

    1980-01-01

    Previous reports have covered the possibility of calculating time-integrated concentrations (TICs) for a prolonged release, based on concentration estimates for a brief release. This study proposes a simple method of evaluating concentrations in the air at medium and large distances for a brief release. It is known that the stability of the atmospheric layers close to ground level influences diffusion only over short distances. Beyond some tens of kilometres, as the pollutant cloud progressively reaches higher layers, diffusion is affected by factors other than the stability at ground level, such as wind shear at intermediate distances and the divergence and rotational motion of air masses towards the upper limit of the mesoscale and on the synoptic scale. Using the data available in the literature, expressions for σ_y and σ_z are proposed for transfer times corresponding to distances of up to several thousand kilometres, for two initial diffusion situations (up to distances of 10-20 km), characterized by stable and neutral conditions respectively. Using this method, simple hand calculations can be made for any problem relating to the diffusion of radioactive pollutants over long distances.
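
    For a ground-level release, the kind of hand calculation the report aims at reduces to a Gaussian-plume expression for the time-integrated concentration; the sketch below uses placeholder σ_y and σ_z values rather than the parameterizations proposed in the report.

```python
# Gaussian-plume time-integrated concentration for a ground-level release
# (illustrative parameter values only).
import math

def tic_ground_level(q_total, u, y, sigma_y, sigma_z):
    """Time-integrated concentration (Bq s/m^3) at ground level for a total
    release q_total (Bq), mean wind speed u (m/s) and lateral offset y (m),
    including reflection at the ground."""
    return (q_total / (math.pi * u * sigma_y * sigma_z)) * math.exp(-0.5 * (y / sigma_y) ** 2)

# Hypothetical values at 100 km downwind (transfer time x / u = 2e4 s).
print(tic_ground_level(q_total=1.0e12, u=5.0, y=0.0, sigma_y=4.0e3, sigma_z=6.0e2))
```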

  6. Likelihood of Null Effects of Large NHLBI Clinical Trials Has Increased over Time.

    Directory of Open Access Journals (Sweden)

    Robert M Kaplan

    Full Text Available We explore whether the number of null results in large National Heart, Lung, and Blood Institute (NHLBI) funded trials has increased over time. We identified all large NHLBI-supported RCTs between 1970 and 2012 evaluating drugs or dietary supplements for the treatment or prevention of cardiovascular disease. Trials were included if direct costs were >$500,000/year, participants were adult humans, and the primary outcome was cardiovascular risk, disease or death. The 55 trials meeting these criteria were coded for whether they were published prior to or after the year 2000, whether they were registered in clinicaltrials.gov prior to publication, whether they used an active or placebo comparator, and whether or not the trial had industry co-sponsorship. We tabulated whether the study reported a positive, negative, or null result on the primary outcome variable and for total mortality. 17 of 30 studies (57%) published prior to 2000 showed a significant benefit of intervention on the primary outcome, in comparison to only 2 among the 25 (8%) trials published after 2000 (χ² = 12.2, df = 1, p = 0.0005). There has been no change in the proportion of trials that compared treatment to placebo versus an active comparator. Industry co-sponsorship was unrelated to the probability of reporting a significant benefit. Pre-registration in clinicaltrials.gov was strongly associated with the trend toward null findings. The number of NHLBI trials reporting positive results declined after the year 2000. Prospective declaration of outcomes in RCTs, and the adoption of transparent reporting standards, as required by clinicaltrials.gov, may have contributed to the trend toward null findings.
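
    The reported chi-square statistic can be checked from the counts quoted in the abstract; the sketch below assumes a Yates-corrected chi-square test on the 2x2 table of positive versus non-positive primary outcomes before and after 2000 (the test choice is an assumption, the counts are from the abstract).

```python
# Recompute the chi-square test on the 2x2 table given in the abstract.
from scipy.stats import chi2_contingency

table = [[17, 30 - 17],   # published before 2000: positive, not positive
         [2, 25 - 2]]      # published 2000 or later: positive, not positive
chi2, p, dof, expected = chi2_contingency(table)   # Yates correction applied for 2x2
print(f"chi2 = {chi2:.1f}, df = {dof}, p = {p:.4f}")   # approximately 12.2, 1, 0.0005
```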

  7. Data transfer over the wide area network with a large round trip time

    Science.gov (United States)

    Matsunaga, H.; Isobe, T.; Mashimo, T.; Sakamoto, H.; Ueda, I.

    2010-04-01

    A Tier-2 regional center is running at the University of Tokyo in Japan. This center receives a large amount of data of the ATLAS experiment from the Tier-1 center in France. Although the link between the two centers has 10 Gbps bandwidth, it is not a dedicated link but is shared with other traffic, and the round trip time is 290 ms. It is not easy to exploit the available bandwidth on such a link, a so-called long fat network. We performed data transfer tests using GridFTP with various combinations of parameters, such as the number of parallel streams and the TCP window size. In addition, we have gained experience with the actual data transfer in our production system, where the Disk Pool Manager (DPM) is used as the Storage Element and the data transfer is controlled by the File Transfer Service (FTS). We report results of the tests and the daily activity, and discuss the improvement of the data transfer throughput.
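
    The difficulty with a long fat network can be made concrete with the bandwidth-delay product: the sketch below estimates how many window-limited TCP streams would be needed to fill the link, assuming a hypothetical per-stream window size (the 4 MiB figure is an assumption, not a value from the paper).

```python
# Back-of-the-envelope estimate for a 10 Gbps link with a 290 ms round trip time.
def streams_to_fill_link(link_gbps, rtt_s, window_bytes):
    """Number of parallel TCP streams needed if each stream is window-limited."""
    per_stream_bps = 8.0 * window_bytes / rtt_s
    return link_gbps * 1.0e9 / per_stream_bps

print(round(streams_to_fill_link(10.0, 0.290, 4 * 1024 * 1024)))   # roughly 86 streams
```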

  8. Data transfer over the wide area network with a large round trip time

    International Nuclear Information System (INIS)

    Matsunaga, H; Isobe, T; Mashimo, T; Sakamoto, H; Ueda, I

    2010-01-01

    A Tier-2 regional center is running at the University of Tokyo in Japan. This center receives a large amount of data of the ATLAS experiment from the Tier-1 center in France. Although the link between the two centers has 10 Gbps bandwidth, it is not a dedicated link but is shared with other traffic, and the round trip time is 290 ms. It is not easy to exploit the available bandwidth on such a link, a so-called long fat network. We performed data transfer tests using GridFTP with various combinations of parameters, such as the number of parallel streams and the TCP window size. In addition, we have gained experience with the actual data transfer in our production system, where the Disk Pool Manager (DPM) is used as the Storage Element and the data transfer is controlled by the File Transfer Service (FTS). We report results of the tests and the daily activity, and discuss the improvement of the data transfer throughput.

  9. Long-time analytic approximation of large stochastic oscillators: Simulation, analysis and inference.

    Directory of Open Access Journals (Sweden)

    Giorgos Minas

    2017-07-01

    Full Text Available In order to analyse large complex stochastic dynamical models such as those studied in systems biology, there is currently a great need both for analytical tools and for algorithms for accurate and fast simulation and estimation. We present a new stochastic approximation of biological oscillators that addresses these needs. Our method, called phase-corrected LNA (pcLNA), overcomes the main limitations of the standard Linear Noise Approximation (LNA), remaining uniformly accurate for long times while maintaining the speed and analytical tractability of the LNA. As part of this, we develop analytical expressions for key probability distributions and associated quantities, such as the Fisher Information Matrix and Kullback-Leibler divergence, and we introduce a new approach to system-global sensitivity analysis. We also present algorithms for statistical inference and for long-term simulation of oscillating systems that are shown to be as accurate as, but much faster than, leaping algorithms and algorithms for integration of diffusion equations. Stochastic versions of published models of the circadian clock and NF-κB system are used to illustrate our results.

  10. When David beats Goliath: the advantage of large size in interspecific aggressive contests declines over evolutionary time.

    Directory of Open Access Journals (Sweden)

    Paul R Martin

    Full Text Available Body size has long been recognized to play a key role in shaping species interactions. For example, while small species thrive in a diversity of environments, they typically lose aggressive contests for resources with larger species. However, numerous examples exist of smaller species dominating larger species during aggressive interactions, suggesting that the evolution of traits can allow species to overcome the competitive disadvantage of small size. If these traits accumulate as lineages diverge, then the advantage of large size in interspecific aggressive interactions should decline with increased evolutionary distance. We tested this hypothesis using data on the outcomes of 23,362 aggressive interactions among 246 bird species pairs involving vultures at carcasses, hummingbirds at nectar sources, and antbirds and woodcreepers at army ant swarms. We found the advantage of large size declined as species became more evolutionarily divergent, and smaller species were more likely to dominate aggressive contests when interacting with more distantly-related species. These results appear to be caused by both the evolution of traits in smaller species that enhanced their abilities in aggressive contests, and the evolution of traits in larger species that were adaptive for other functions, but compromised their abilities to compete aggressively. Specific traits that may provide advantages to small species in aggressive interactions included well-developed leg musculature and talons, enhanced flight acceleration and maneuverability, novel fighting behaviors, and traits associated with aggression, such as testosterone and muscle development. Traits that may have hindered larger species in aggressive interactions included the evolution of morphologies for tree trunk foraging that compromised performance in aggressive contests away from trunks, and the evolution of migration. Overall, our results suggest that fundamental trade-offs, such as those

  11. On the problem of earthquake correlation in space and time over large distances

    Science.gov (United States)

    Georgoulas, G.; Konstantaras, A.; Maravelakis, E.; Katsifarakis, E.; Stylios, C. D.

    2012-04-01

    A quick examination of geographical maps with the epicenters of earthquakes marked on them reveals a strong tendency of these points to form compact clusters of irregular shapes and various sizes, often intersecting other clusters. According to [Saleur et al. 1996], "earthquakes are correlated in space and time over large distances". This implies that seismic sequences are not formed randomly but follow a spatial pattern with consequent triggering of events. Seismic cluster formation is believed to be due to underlying geological natural hazards, which: a) act as the energy storage elements of the phenomenon, and b) tend to form a complex network of numerous interacting faults [Vallianatos and Tzanis, 1998]. Therefore it is imperative to "isolate" meaningful structures (clusters) in order to mine information regarding the underlying mechanism and, at a second stage, to test the causality effect implied by what is known as the Domino theory [Burgman, 2009]. Ongoing work by Konstantaras et al. 2011 and Katsifarakis et al. 2011 on clustering seismic sequences in the area of the Southern Hellenic Arc, and progressively throughout the Greek vicinity and the entire Mediterranean region, based on an explicit segmentation of the data by both their temporal and spatial stamps and following modelling assumptions proposed by Dobrovolsky et al. 1989 and Drakatos et al. 2001, has managed to identify geologically validated seismic clusters. These results suggest that the time component should be included as a dimension during the clustering process, as seismic cluster formation is dynamic and the emerging clusters propagate in time. Another issue that has not yet been investigated explicitly is the role of the magnitude of each seismic event. In other words, the major seismic event should be treated differently from pre- or post-seismic sequences. Moreover, the sometimes irregular and elongated shapes that appear on geophysical maps mean that clustering algorithms

  12. One size does not fit all: a qualitative content analysis of the importance of existing quality improvement capacity in the implementation of Releasing Time to Care: the Productive Ward™ in Saskatchewan, Canada.

    Science.gov (United States)

    Hamilton, Jessica; Verrall, Tanya; Maben, Jill; Griffiths, Peter; Avis, Kyla; Baker, G Ross; Teare, Gary

    2014-12-19

    Releasing Time to Care: The Productive Ward™ (RTC) is a method for conducting continuous quality improvement (QI). The Saskatchewan Ministry of Health mandated its implementation in Saskatchewan, Canada between 2008 and 2012. Subsequently, a research team was developed to evaluate its impact on the nursing unit environment. We sought to explore the influence of the unit's existing QI capacity on their ability to engage with RTC as a program for continuous QI. We conducted interviews with staff from 8 nursing units and asked them to speak about their experience doing RTC. Using qualitative content analysis, and guided by the Organizing for Quality framework, we describe the existing QI capacity and impact of RTC on the unit environment. The results focus on 2 units chosen to highlight extreme variation in existing QI capacity. Unit B was characterized by a strong existing environment. RTC was implemented in an environment with a motivated manager and collaborative culture. Aided by the structural support provided by the organization, the QI capacity on this unit was strengthened through RTC. Staff recognized the potential of using the RTC processes to support QI work. Staff on unit E did not have the same experience with RTC. Like unit B, they had similar structural supports provided by their organization but they did not have the same existing cultural or political environment to facilitate the implementation of RTC. They did not have internal motivation and felt they were only doing RTC because they had to. Though they had some success with RTC activities, the staff did not have the same understanding of the methods that RTC could provide for continuous QI work. RTC has the potential to be a strong tool for engaging units to do QI. This occurs best when RTC is implemented in a supporting environment. One size does not fit all and administrative bodies must consider the unique context of each environment prior to implementing large-scale QI projects. Use of an

  13. Life and death of the resurrection plate: Evidence for its existence and subduction in the northeastern Pacific in Paleocene-Eocene time

    Science.gov (United States)

    Haeussler, P.J.; Bradley, D.C.; Wells, R.E.; Miller, M.L.

    2003-01-01

    Onshore evidence suggests that a plate is missing from published reconstructions of the northeastern Pacific Ocean in Paleocene-Eocene time. The Resurrection plate, named for the Resurrection Peninsula ophiolite near Seward, Alaska, was located east of the Kula plate and north of the Farallon plate. We interpret coeval near-trench magmatism in southern Alaska and the Cascadia margin as evidence for two slab windows associated with trench-ridge-trench (TRT) triple junctions, which formed the western and southern boundaries of the Resurrection plate. In Alaska, the Sanak-Baranof belt of near-trench intrusions records a west-to-east migration, from 61 to 50 Ma, of the northern TRT triple junction along a 2100-km-long section of coastline. In Oregon, Washington, and southern Vancouver Island, voluminous basaltic volcanism of the Siletz River Volcanics, Crescent Formation, and Metchosin Volcanics occurred between ca. 66 and 48 Ma. Lack of a clear age progression of magmatism along the Cascadia margin suggests that this southern triple junction did not migrate significantly. Synchronous near-trench magmatism from southeastern Alaska to Puget Sound at ca. 50 Ma documents the middle Eocene subduction of a spreading center, the crest of which was subparallel to the margin. We interpret this ca. 50 Ma event as recording the subduction-zone consumption of the last of the Resurrection plate. The existence and subsequent subduction of the Resurrection plate explains (1) northward terrane transport along the southeastern Alaska-British Columbia margin between 70 and 50 Ma, synchronous with an eastward-migrating triple junction in southern Alaska; (2) rapid uplift and voluminous magmatism in the Coast Mountains of British Columbia prior to 50 Ma related to subduction of buoyant, young oceanic crust of the Resurrection plate; (3) cessation of Coast Mountains magmatism at ca. 50 Ma due to cessation of subduction; and (4) primitive mafic magmatism in the Coast Mountains and Cascade

  14. In-situ high resolution particle sampling by large time sequence inertial spectrometry

    International Nuclear Information System (INIS)

    Prodi, V.; Belosi, F.

    1990-09-01

    In situ sampling is always preferred, when possible, because of the artifacts that can arise when the aerosol has to flow through long sampling lines. On the other hand, the amount of possible losses can be calculated with some confidence only when the size distribution can be measured with a sufficient precision and the losses are not too large. This makes it desirable to sample directly in the vicinity of the aerosol source or containment. High temperature sampling devices with a detailed aerodynamic separation are extremely useful to this purpose. Several measurements are possible with the inertial spectrometer (INSPEC), but not with cascade impactors or cyclones. INSPEC - INertial SPECtrometer - has been conceived to measure the size distribution of aerosols by separating the particles while airborne according to their size and collecting them on a filter. It consists of a channel of rectangular cross-section with a 90 degree bend. Clean air is drawn through the channel, with a thin aerosol sheath injected close to the inner wall. Due to the bend, the particles are separated according to their size, leaving the original streamline by a distance which is a function of particle inertia and resistance, i.e. of aerodynamic diameter. The filter collects all the particles of the same aerodynamic size at the same distance from the inlet, in a continuous distribution. INSPEC particle separation at high temperature (up to 800 C) has been tested with Zirconia particles as calibration aerosols. The feasibility study has been concerned with resolution and time sequence sampling capabilities under high temperature (700 C)

  15. REM-3D Reference Datasets: Reconciling large and diverse compilations of travel-time observations

    Science.gov (United States)

    Moulik, P.; Lekic, V.; Romanowicz, B. A.

    2017-12-01

    A three-dimensional Reference Earth model (REM-3D) should ideally represent the consensus view of long-wavelength heterogeneity in the Earth's mantle through the joint modeling of large and diverse seismological datasets. This requires reconciliation of datasets obtained using various methodologies and identification of consistent features. The goal of REM-3D datasets is to provide a quality-controlled and comprehensive set of seismic observations that would not only enable construction of REM-3D, but also allow identification of outliers and assist in more detailed studies of heterogeneity. The community response to data solicitation has been enthusiastic with several groups across the world contributing recent measurements of normal modes, (fundamental mode and overtone) surface waves, and body waves. We present results from ongoing work with body and surface wave datasets analyzed in consultation with a Reference Dataset Working Group. We have formulated procedures for reconciling travel-time datasets that include: (1) quality control for salvaging missing metadata; (2) identification of and reasons for discrepant measurements; (3) homogenization of coverage through the construction of summary rays; and (4) inversions of structure at various wavelengths to evaluate inter-dataset consistency. In consultation with the Reference Dataset Working Group, we retrieved the station and earthquake metadata in several legacy compilations and codified several guidelines that would facilitate easy storage and reproducibility. We find strong agreement between the dispersion measurements of fundamental-mode Rayleigh waves, particularly when made using supervised techniques. The agreement deteriorates substantially in surface-wave overtones, for which discrepancies vary with frequency and overtone number. A half-cycle band of discrepancies is attributed to reversed instrument polarities at a limited number of stations, which are not reflected in the instrument response history

  16. Existence of life-time stable proteins in mature rats-Dating of proteins' age by repeated short-term exposure to labeled amino acids throughout age

    DEFF Research Database (Denmark)

    Bechshøft, Cecilie Leidesdorff; Schjerling, Peter; Bornø, Andreas

    2017-01-01

    In vivo turnover rates of proteins, covering the processes of protein synthesis and breakdown, have been measured in many tissues and protein pools using various techniques. Connective tissue and collagen protein turnover is of specific interest since existing results are rather diverging. Th...... living days, indicating very slow turnover. The data support the hypothesis that some proteins synthesized during early development and growth still exist much later in the life of the animals and hence have a very slow turnover rate. ...... The aim of this study is to investigate whether we can verify the presence of protein pools within the same tissue with very distinct turnover rates over the life-span of rats, with special focus on connective tissue. Male and female Lewis rats (n = 35) were injected with five different isotopically

  17. THE WIGNER–FOKKER–PLANCK EQUATION: STATIONARY STATES AND LARGE TIME BEHAVIOR

    KAUST Repository

    ARNOLD, ANTON; GAMBA, IRENE M.; GUALDANI, MARIA PIA; MISCHLER, STÉPHANE; MOUHOT, CLEMENT; SPARBER, CHRISTOF

    2012-01-01

    solution in a weighted Sobolev space. A key ingredient of the proof is a new result on the existence of spectral gaps for Fokker-Planck type operators in certain weighted L^2-spaces. In addition, we show that the steady state corresponds to a positive density

  18. Computational challenges of large-scale, long-time, first-principles molecular dynamics

    International Nuclear Information System (INIS)

    Kent, P R C

    2008-01-01

    Plane wave density functional calculations have traditionally been able to use the largest available supercomputing resources. We analyze the scalability of modern projector-augmented wave implementations to identify the challenges in performing molecular dynamics calculations of large systems containing many thousands of electrons. Benchmark calculations on the Cray XT4 demonstrate that global linear-algebra operations are the primary reason for limited parallel scalability. Plane-wave related operations can be made sufficiently scalable. Improving parallel linear-algebra performance is an essential step to reaching longer timescales in future large-scale molecular dynamics calculations

  19. Finite-time and finite-size scalings in the evaluation of large-deviation functions: Numerical approach in continuous time.

    Science.gov (United States)

    Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien

    2017-06-01

    Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to selection rules that favor the rare trajectories of interest. Such algorithms are plagued by finite simulation time and finite population size, effects that can render their use delicate. In this paper, we present a numerical approach which uses the finite-time and finite-size scalings of estimators of the large deviation functions associated to the distribution of rare trajectories. The method we propose allows one to extract the infinite-time and infinite-size limit of these estimators, which, as shown on the contact process, provides a significant improvement of the large deviation function estimators compared to the standard one.

  20. Finite-time and finite-size scalings in the evaluation of large-deviation functions: Numerical approach in continuous time

    Science.gov (United States)

    Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien

    2017-06-01

    Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to selection rules that favor the rare trajectories of interest. Such algorithms are plagued by finite simulation time and finite population size, effects that can render their use delicate. In this paper, we present a numerical approach which uses the finite-time and finite-size scalings of estimators of the large deviation functions associated to the distribution of rare trajectories. The method we propose allows one to extract the infinite-time and infinite-size limit of these estimators, which—as shown on the contact process—provides a significant improvement of the large deviation function estimators compared to the standard one.
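
    The extrapolation idea can be illustrated with a toy fit: assume the estimator approaches its limit with leading 1/T and 1/N corrections and recover the limit by least squares. The scaling form, the data, and the fit below are illustrative assumptions, not the algorithm developed in the paper.

```python
# Toy extrapolation of a large-deviation estimator to infinite time and population size.
import numpy as np

def extrapolate(estimates, times, sizes):
    """Fit psi(T, N) = psi_inf + a / T + b / N and return psi_inf."""
    design = np.column_stack([np.ones_like(times), 1.0 / times, 1.0 / sizes])
    coeffs, *_ = np.linalg.lstsq(design, estimates, rcond=None)
    return coeffs[0]

T = np.array([50.0, 100.0, 200.0, 50.0, 100.0, 200.0])       # simulation times
N = np.array([100.0, 100.0, 100.0, 1000.0, 1000.0, 1000.0])  # population sizes
psi_estimates = -0.30 + 2.0 / T + 5.0 / N                     # synthetic measurements
print(extrapolate(psi_estimates, T, N))                       # recovers about -0.30
```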

  1. Lebesgue Sets Immeasurable Existence

    Directory of Open Access Journals (Sweden)

    Diana Marginean Petrovai

    2012-12-01

    Full Text Available It is well known that the notions of measure and integral arose early on, in close connection with practical problems of measuring geometric figures. The notion of measure was outlined in the early 20th century through the research of H. Lebesgue, founder of the modern theory of measure and integral. A technique for the integration of functions was developed concurrently. Gradually a specific area was formed, today called measure and integral theory. Essential contributions to building this theory were made by a large number of mathematicians: C. Carathéodory, J. Radon, O. Nikodym, S. Bochner, J. Pettis, P. Halmos and many others. In the following we present several abstract sets and classes of sets. There exist sets which are not Lebesgue measurable, and sets which are Lebesgue measurable but are not Borel measurable. Hence B ⊂ L ⊂ P(X).
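
    A standard textbook illustration of the first claim (the existence of a set that is not Lebesgue measurable) is the Vitali construction; the short derivation below is a well-known example and is not taken from the article.

```latex
% Vitali set: on [0,1], define x \sim y \iff x - y \in \mathbb{Q} and use the axiom
% of choice to pick one representative from each equivalence class, forming V.
\[
  [0,1] \;\subseteq\; \bigcup_{q \in \mathbb{Q}\cap[-1,1]} (V+q) \;\subseteq\; [-1,2],
  \qquad (V+q)\cap(V+q') = \emptyset \ \text{for } q \neq q'.
\]
% If V were Lebesgue measurable, translation invariance and countable additivity
% would give 1 <= \sum_{q} \lambda(V) <= 3, which no single value \lambda(V) can satisfy.
```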

  2. Real-Time Large Scale 3d Reconstruction by Fusing Kinect and Imu Data

    Science.gov (United States)

    Huai, J.; Zhang, Y.; Yilmaz, A.

    2015-08-01

    Kinect-style RGB-D cameras have been used to build large scale dense 3D maps for indoor environments. These maps can serve many purposes such as robot navigation and augmented reality. However, generating dense 3D maps of large scale environments is still very challenging. In this paper, we present a mapping system for 3D reconstruction that fuses measurements from a Kinect and an inertial measurement unit (IMU) to estimate motion. Our major achievements include: (i) Large scale consistent 3D reconstruction is realized by volume shifting and loop closure; (ii) The coarse-to-fine iterative closest point (ICP) algorithm, the SIFT odometry, and IMU odometry are combined to robustly and precisely estimate pose. In particular, ICP runs routinely to track the Kinect motion. If ICP fails in planar areas, the SIFT odometry provides incremental motion estimates. If both ICP and the SIFT odometry fail, e.g., upon abrupt motion or inadequate features, the incremental motion is estimated by the IMU. Additionally, the IMU also observes the roll and pitch angles, which can reduce long-term drift of the sensor assembly. In experiments on a consumer laptop, our system estimates motion at 8 Hz on average while integrating color images to the local map and saving volumes of meshes concurrently. Moreover, it is immune to tracking failures, and has smaller drift than the state-of-the-art systems in large scale reconstruction.
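
    The pose-estimation fallback described in this record can be summarized with a small control-flow sketch (the function and attribute names below are hypothetical placeholders, since the record does not give the system's interfaces): ICP is tried first, feature-based odometry is used when ICP fails, and IMU integration is the last resort.

        def estimate_increment(frame, prev_frame, imu, icp, sift_odometry):
            """Incremental pose estimate with the fallback order described above.
            icp / sift_odometry return None on failure; all callables are placeholders."""
            pose = icp(frame, prev_frame)            # routine dense tracking
            if pose is not None:
                return pose
            pose = sift_odometry(frame, prev_frame)  # e.g. planar areas where ICP fails
            if pose is not None:
                return pose
            return imu.integrate(frame.timestamp)    # abrupt motion or too few features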

  3. A large deviations approach to limit theory for heavy-tailed time series

    DEFF Research Database (Denmark)

    Mikosch, Thomas Valentin; Wintenberger, Olivier

    2016-01-01

    and vanishing in some neighborhood of the origin. We study a variety of such functionals, including large deviations of random walks, their suprema, the ruin functional, and further derive weak limit theory for maxima, point processes, cluster functionals and the tail empirical process. One of the main results...

  4. Precipitation-snowmelt timing and snowmelt augmentation of large peak flow events, western Cascades, Oregon

    Science.gov (United States)

    Keith Jennings; Julia A. Jones

    2015-01-01

    This study tested multiple hydrologic mechanisms to explain snowpack dynamics in extreme rain-on-snow floods, which occur widely in the temperate and polar regions. We examined 26, 10 day large storm events over the period 1992–2012 in the H.J. Andrews Experimental Forest in western Oregon, using statistical analyses (regression, ANOVA, and wavelet coherence) of hourly...

  5. EXIST Perspective for SFXTs

    Science.gov (United States)

    Ubertini, Pietro; Sidoli, L.; Sguera, V.; Bazzano, A.

    2009-12-01

    Supergiant Fast X-ray Transients (SFXTs) are one of the most interesting (and unexpected) results of the INTEGRAL mission. They are a new class of HMXBs displaying short hard X-ray outbursts (duration less than a day) characterized by fast flares (few-hour timescale) and a large dynamic range (10^3-10^4). The physical mechanism driving their peculiar behaviour is still unclear and highly debated: some models involve the structure of the supergiant companion's donor wind (likely clumpy, in a spherical or non-spherical geometry) and the orbital properties (wide separation with eccentric or circular orbit), while others involve the properties of the neutron star compact object and invoke magnetar-like magnetic field values (B ~ 10^14 G). The picture is still highly unclear from the observational point of view as well: no cyclotron lines have been detected in the spectra, thus the strength of the neutron star magnetic field is unknown. Orbital periods have been measured in only 4 systems, spanning from 3.3 days to 165 days. Even the duty cycle seems to be quite different from source to source. The Energetic X-ray Imaging Survey Telescope (EXIST), with its hard X-ray all-sky survey and largely improved limiting sensitivity, will allow us to get a clearer picture of SFXTs. A complete census of their number is essential to enlarge the sample. Long-term and as continuous as possible X-ray monitoring is crucial to (1) obtain the duty cycle, (2) investigate their unknown orbital properties (separation, orbital period, eccentricity), (3) completely cover the whole outburst activity, and (4) search for cyclotron lines in the high-energy spectra. EXIST observations will provide crucial information to test the different models and shed light on the peculiar behaviour of SFXTs.

  6. Discrepancies Between Planned and Actual Operating Room Turnaround Times at a Large Rural Hospital in Germany

    Directory of Open Access Journals (Sweden)

    Regula Morgenegg

    2018-01-01

    This retrospective study examined the OR turnaround data of 875 elective surgery cases scheduled at the Marienhospital, Vechta, Germany, between July and October 2014. The frequency distributions of planned and actual OR turnaround times were compared and correlations between turnaround times and various factors were established, including the time of day of the procedure, patient age and the planned duration of the surgery. Results: There was a significant difference between mean planned and actual OR turnaround times (0.32 versus 0.64 hours; P <0.001). In addition, significant correlations were noted between actual OR turnaround times and the time of day of the surgery, patient age, actual duration of the procedure and staffing changes affecting the surgeon or the medical specialty of the surgery (P <0.001 each). The quotient of actual/planned OR turnaround times ranged from 1.733–3.000. Conclusion: Significant discrepancies between planned and actual OR turnaround times were noted during the study period. Such findings may potentially be used in future studies to establish a tool to improve OR planning, measure OR management performance and enable benchmarking.

  7. Support for the existence of invertible maps between electronic densities and non-analytic 1-body external potentials in non-relativistic time-dependent quantum mechanics

    Science.gov (United States)

    Mosquera, Martín A.

    2017-10-01

    Provided the initial state, the Runge-Gross theorem establishes that the time-dependent (TD) external potential of a system of non-relativistic electrons determines uniquely their TD electronic density, and vice versa (up to a constant in the potential). This theorem requires the TD external potential and density to be Taylor-expandable around the initial time of the propagation. This paper presents an extension without this restriction. Given the initial state of the system and evolution of the density due to some TD scalar potential, we show that a perturbative (not necessarily weak) TD potential that induces a non-zero divergence of the external force-density, inside a small spatial subset and immediately after the initial propagation time, will cause a change in the density within that subset, implying that the TD potential uniquely determines the TD density. In this proof, we assume unitary evolution of wavefunctions and first-order differentiability (which does not imply analyticity) in time of the internal and external force-densities, electronic density, current density, and their spatial derivatives over the small spatial subset and short time interval.

  8. Large Time Asymptotics for a Continuous Coagulation-Fragmentation Model with Degenerate Size-Dependent Diffusion

    KAUST Repository

    Desvillettes, Laurent

    2010-01-01

    We study a continuous coagulation-fragmentation model with constant kernels for reacting polymers (see [M. Aizenman and T. Bak, Comm. Math. Phys., 65 (1979), pp. 203-230]). The polymers are set to diffuse within a smooth bounded one-dimensional domain with no-flux boundary conditions. In particular, we consider size-dependent diffusion coefficients, which may degenerate for small and large cluster-sizes. We prove that the entropy-entropy dissipation method applies directly in this inhomogeneous setting. We first show the necessary basic a priori estimates in dimension one, and second we show faster-than-polynomial convergence toward global equilibria for diffusion coefficients which vanish not faster than linearly for large sizes. This extends the previous results of [J.A. Carrillo, L. Desvillettes, and K. Fellner, Comm. Math. Phys., 278 (2008), pp. 433-451], which assumes that the diffusion coefficients are bounded below. © 2009 Society for Industrial and Applied Mathematics.

  9. Large time asymptotics of solutions to the anharmonic oscillator model from nonlinear optics

    OpenAIRE

    Jochmann, Frank

    2005-01-01

    The anharmonic oscillator model describing the propagation of electromagnetic waves in an exterior domain containing a nonlinear dielectric medium is investigated. The system under consideration consists of a generally nonlinear second order differential equation for the dielectrical polarization coupled with Maxwell's equations for the electromagnetic field. Local decay of the electromagnetic field for t to infinity in the charge free case is shown for a large class of potentials. (This pape...

  10. The EXIST Mission Concept Study

    Science.gov (United States)

    Fishman, Gerald J.; Grindlay, J.; Hong, J.

    2008-01-01

    EXIST is a mission designed to find and study black holes (BHs) over a wide range of environments and masses, including: 1) BHs accreting from binary companions or dense molecular clouds throughout our Galaxy and the Local Group, 2) supermassive black holes (SMBHs) lying dormant in galaxies that reveal their existence by disrupting passing stars, 3) SMBHs that are hidden from our view at lower energies due to obscuration by the gas that they accrete, and 4) the birth of stellar-mass BHs, which is accompanied by long cosmic gamma-ray bursts (GRBs), seen several times a day and possibly associated with the earliest stars to form in the Universe. EXIST will provide an order of magnitude increase in sensitivity and angular resolution as well as greater spectral resolution and bandwidth compared with earlier hard X-ray survey telescopes. With an onboard optical-infrared (IR) telescope, EXIST will measure the spectra and redshifts of GRBs and establish their utility as cosmological probes of the highest-z universe and the epoch of reionization. The mission would retain its primary goal of being the Black Hole Finder Probe in the Beyond Einstein Program. However, the new design for EXIST proposed to be studied here represents a significant advance from its previous incarnation as presented to BEPAC. The mission is now less than half the total mass, would be launched on the smallest EELV available (Atlas V-401) for a Medium Class mission, and most importantly includes a two-telescope complement that is ideally suited for the study of both obscured and very distant BHs. EXIST retains its very wide field hard X-ray imaging High Energy Telescope (HET) as the primary instrument, now with improved angular and spectral resolution, and in a more compact payload that allows occasional rapid slews for immediate optical/IR imaging and spectra of GRBs and AGN as well as enhanced hard X-ray spectra and timing with pointed observations. The mission would conduct a 2 year full sky survey in

  11. Interactive exploration of large-scale time-varying data using dynamic tracking graphs

    KAUST Repository

    Widanagamaachchi, W.; Christensen, C.; Bremer, P.-T; Pascucci, Valerio

    2012-01-01

    that use one spatial dimension to indicate time and show the "tracks" of each feature as it evolves, merges or disappears. However, for practical data sets creating the corresponding optimal graph layouts that minimize the number of intersections can take

  12. The "Flight Chamber": A fast, large area, zero-time detector

    International Nuclear Information System (INIS)

    Trautner, N.

    1976-01-01

    A new, fast, zero-time detector with an active area of 20 cm² has been constructed. Secondary electrons from a thin self-supporting foil are accelerated onto a scintillator. The intrinsic time resolution (fwhm) was 0.85 ns for 5.5 MeV α-particles and 0.42 ns for 17 MeV 16O ions, at an efficiency of 97.5% and 99.6%, respectively. (author)

  13. Towards real-time cardiovascular magnetic resonance-guided transarterial aortic valve implantation: In vitro evaluation and modification of existing devices

    Directory of Open Access Journals (Sweden)

    Ladd Mark E

    2010-10-01

    Full Text Available Abstract Background Cardiovascular magnetic resonance (CMR) is considered an attractive alternative for guiding transarterial aortic valve implantation (TAVI), featuring unlimited scan plane orientation and unsurpassed soft-tissue contrast with simultaneous device visualization. We sought to evaluate the CMR characteristics of both currently commercially available transcatheter heart valves (Edwards SAPIEN™, Medtronic CoreValve®), including their dedicated delivery devices, and of a custom-built, CMR-compatible delivery device for the Medtronic CoreValve® prosthesis as an initial step towards real-time CMR-guided TAVI. Methods The devices were systematically examined in phantom models on a 1.5-Tesla scanner using high-resolution T1-weighted 3D FLASH, real-time TrueFISP and flow-sensitive phase-contrast sequences. Images were analyzed for device visualization quality, device-related susceptibility artifacts, and radiofrequency signal shielding. Results CMR revealed major susceptibility artifacts for the two commercial delivery devices, caused by considerable metal braiding and precluding in vivo application. The stainless steel-based Edwards SAPIEN™ prosthesis was also regarded as not suitable for CMR-guided TAVI due to susceptibility artifacts exceeding the valve's dimensions and hindering an exact placement. In contrast, the nitinol-based Medtronic CoreValve® prosthesis was excellently visualized with delineation even of small details and, thus, regarded as suitable for CMR-guided TAVI, particularly since reengineering of its delivery device toward CMR-compatibility resulted in artifact elimination and excellent visualization during catheter movement and valve deployment on real-time TrueFISP imaging. Reliable flow measurements could be performed for both stent-valves after deployment using phase-contrast sequences. Conclusions The present study shows that the Medtronic CoreValve® prosthesis is potentially suited for real-time CMR-guided placement

  14. Non-existence of time-periodic solutions of the Dirac equation in a Reissner-Nordström black hole background

    Science.gov (United States)

    Finster, Felix; Smoller, Joel; Yau, Shing-Tung

    2000-04-01

    It is shown analytically that the Dirac equation has no normalizable, time-periodic solutions in a Reissner-Nordström black hole background; in particular, there are no static solutions of the Dirac equation in such a background metric. The physical interpretation is that Dirac particles can either disappear into the black hole or escape to infinity, but they cannot stay on a periodic orbit around the black hole.

  15. Plains zebra (Equus quagga) adrenocortical activity increases during times of large aggregations in the Serengeti ecosystem.

    Science.gov (United States)

    Seeber, P A; Franz, M; Dehnhard, M; Ganswindt, A; Greenwood, A D; East, M L

    2018-04-20

    Adverse environmental stimuli (stressors) activate the hypothalamic-pituitary-adrenal axis and contribute to allostatic load. This study investigates the contribution of environmental stressors and life history stage to allostatic load in a migratory population of plains zebras (Equus quagga) in the Serengeti ecosystem, in Tanzania, which experiences large local variations in aggregation. We expected a higher fecal glucocorticoid metabolite (fGCM) response to the environmental stressors of feeding competition, predation pressure and unpredictable social relationships in larger than in smaller aggregations, and in animals at energetically costly life history stages. As the study was conducted during the 2016 El Niño, we did not expect food quality of forage or a lack of water to strongly affect fGCM responses in the dry season. We measured fGCM concentrations using an enzyme immunoassay (EIA) targeting 11β-hydroxyetiocholanolone and validated its reliability in captive plains zebras. Our results revealed significantly higher fGCM concentrations 1) in large aggregations than in smaller groupings, and 2) in band stallions than in bachelor males. Concentrations of fGCM were not significantly higher in females at the energetically costly life stage of late pregnancy/lactation. The higher allostatic load of stallions associated with females, compared with bachelor males, is likely caused by social stressors. In conclusion, migratory zebras have elevated allostatic loads in large aggregations that probably result from their combined responses to increased feeding competition, predation pressure and various social stressors. Further research is required to disentangle the contribution of these stressors to allostatic load in migratory populations. Copyright © 2018 Elsevier Inc. All rights reserved.

  16. A hybrid adaptive large neighborhood search heuristic for lot-sizing with setup times

    DEFF Research Database (Denmark)

    Muller, Laurent Flindt; Spoorendonk, Simon; Pisinger, David

    2012-01-01

    This paper presents a hybrid of a general heuristic framework and a general purpose mixed-integer programming (MIP) solver. The framework is based on local search and an adaptive procedure which chooses between a set of large neighborhoods to be searched. A mixed integer programming solver and its......, and the upper bounds found by the commercial MIP solver ILOG CPLEX using state-of-the-art MIP formulations. Furthermore, we improve the best known solutions on 60 out of 100 and improve the lower bound on all 100 instances from the literature...

  17. Modelling and Formal Verification of Timing Aspects in Large PLC Programs

    CERN Document Server

    Fernandez Adiego, B; Blanco Vinuela, E; Tournier, J-C; Gonzalez Suarez, V M; Blech, J O

    2014-01-01

    One of the main obstacles that prevents model checking from being widely used in industrial control systems is the complexity of building formal models out of PLC programs, especially when timing aspects need to be integrated. This paper addresses this obstacle by proposing a methodology to model and verify timing aspects of PLC programs. Two approaches are proposed to allow the users to balance the trade-off between the complexity of the model, i.e. its number of states, and the set of specifications that can be verified. A tool supporting the methodology, which allows models for different model checkers to be produced directly from PLC programs, has been developed. Verification of timing aspects for real-life PLC programs is presented in this paper using NuSMV.

  18. Quasi real-time estimation of the moment magnitude of large earthquake from static strain changes

    Science.gov (United States)

    Itaba, S.

    2016-12-01

    The 2011 Tohoku-Oki (off the Pacific coast of Tohoku) earthquake, of moment magnitude 9.0, was accompanied by large static strain changes (of order 10^-7), as measured by borehole strainmeters operated by the Geological Survey of Japan in the Tokai, Kii Peninsula, and Shikoku regions. A fault model for the earthquake on the boundary between the Pacific and North American plates, based on these borehole strainmeter data, yielded a moment magnitude of 8.7. On the other hand, based on the seismic waves, the magnitude announced by the Japan Meteorological Agency (JMA) in its prompt report just after the earthquake occurred was 7.9. Such geodetic moment magnitudes, derived from static strain changes, can be estimated almost as rapidly as determinations using seismic waves. The validity of this method still has to be verified in further cases. Consider this earthquake's largest aftershock, which occurred 29 minutes after the mainshock: the prompt report issued by JMA assigned this aftershock a magnitude of 7.3, whereas the moment magnitude derived from borehole strain data is 7.6, which is much closer to the actual moment magnitude of 7.7. In order to grasp the magnitude of a great earthquake earlier, several methods are being proposed with the aim of reducing earthquake disasters, including tsunami. Our simple method using static strain changes is one strong candidate for rapid estimation of the magnitude of large earthquakes, and is useful for improving the accuracy of Earthquake Early Warning.
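
    For context (this relation is not stated in the record, but is the standard Hanks-Kanamori definition): once a fault model fitted to the static strain offsets yields a seismic moment M0 in newton-metres, the moment magnitude follows as Mw = (2/3)(log10 M0 - 9.1). A minimal helper:

        import math

        def moment_magnitude(m0_newton_metres):
            """Hanks-Kanamori moment magnitude from seismic moment in N*m."""
            return (2.0 / 3.0) * (math.log10(m0_newton_metres) - 9.1)

        # Order-of-magnitude check: M0 ~ 4e22 N*m corresponds to Mw ~ 9.0.
        print(round(moment_magnitude(4.0e22), 1))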

  19. Time-scale effects in the interaction between a large and a small herbivore

    NARCIS (Netherlands)

    Kuijper, D. P. J.; Beek, P.; van Wieren, S.E.; Bakker, J. P.

    2008-01-01

    In the short term, grazing will mainly affect plant biomass and forage quality. However, grazing can affect plant species composition by accelerating or retarding succession at longer time-scales. Few studies concerning interactions among herbivores have taken the change in plant species composition

  20. Eulerian short-time statistics of turbulent flow at large Reynolds number

    NARCIS (Netherlands)

    Brouwers, J.J.H.

    2004-01-01

    An asymptotic analysis is presented of the short-time behavior of second-order temporal velocity structure functions and Eulerian acceleration correlations in a frame that moves with the local mean velocity of the turbulent flow field. Expressions in closed-form are derived which cover the viscous

  1. Response time distributions in rapid chess: A large-scale decision making experiment

    Directory of Open Access Journals (Sweden)

    Mariano Sigman

    2010-10-01

    Full Text Available Rapid chess provides an unparalleled laboratory to understand decision making in a natural environment. In a chess game, players choose consecutively around 40 moves in a finite time budget. The goodness of each choice can be determined quantitatively since current chess algorithms estimate precisely the value of a position. Web-based chess produces vast amounts of data, millions of decisions per day, incommensurable with traditional psychological experiments. We generated a database of response times and position values in rapid chess games. We measured robust emergent statistical observables: (1) response time (RT) distributions are long-tailed and show qualitatively distinct forms at different stages of the game; (2) RTs of successive moves are highly correlated both for intra- and inter-player moves. These findings have theoretical implications since they deny two basic assumptions of sequential decision making algorithms: RTs are not stationary and cannot be generated by a state function. Our results also have practical implications. First, we characterized the capacity of blunders and score fluctuations to predict a player's strength, which is still an open problem in chess software. Second, we show that the winning likelihood can be reliably estimated from a weighted combination of remaining times and position evaluation.
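
    One way to read the final claim (an illustrative sketch, not the authors' model): the winning likelihood can be modelled as a logistic function of a weighted combination of the remaining-time difference and the engine's position evaluation. The weights below are placeholders that would have to be fitted to game data.

        import math

        def win_probability(eval_pawns, time_diff_s, w_eval=0.8, w_time=0.02, bias=0.0):
            """Logistic estimate of White's winning likelihood.

            eval_pawns  : engine evaluation of the position (pawns, + favours White)
            time_diff_s : White's remaining clock time minus Black's, in seconds
            Weights and bias are illustrative placeholders, not fitted values.
            """
            z = bias + w_eval * eval_pawns + w_time * time_diff_s
            return 1.0 / (1.0 + math.exp(-z))

        # Example: a pawn up but 30 s behind on the clock.
        print(win_probability(eval_pawns=1.0, time_diff_s=-30.0))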

  2. A fast large-area position-sensitive time-of-flight neutron detection system

    International Nuclear Information System (INIS)

    Crawford, R.K.; Haumann, J.R.

    1989-01-01

    A new position-sensitive time-of-flight neutron detection and histograming system has been developed for use at the Intense Pulsed Neutron Source. Spatial resolution of roughly 1 cm x 1 cm and time-of-flight resolution of ∼1 μsec are combined in a detection system which can ultimately be expanded to cover several square meters of active detector area. This system is based on the use of arrays of cylindrical one-dimensional position-sensitive proportional counters, and is capable of collecting the x-y-t data and sorting them into histograms at time-averaged data rates up to ∼300,000 events/sec over the full detector area and with instantaneous data rates up to more than fifty times that. Numerous hardware features have been incorporated to facilitate initial tuning of the position encoding, absolute calibration of the encoded positions, and automatic testing for drifts. 7 refs., 11 figs., 1 tabs

  3. Citizen journalism in a time of crisis: lessons from a large-scale California wildfire

    Science.gov (United States)

    S. Gillette; J. Taylor; D.J. Chavez; R. Hodgson; J. Downing

    2007-01-01

    The accessibility of news production tools through consumer communication technology has made it possible for media consumers to become media producers. The evolution of media consumer to media producer has important implications for the shape of public discourse during a time of crisis. Citizen journalists cover crisis events using camera cell phones and digital...

  4. Common genetic influences on intelligence and auditory simple reaction time in a large Swedish sample

    NARCIS (Netherlands)

    Madison, G.; Mosing, M.A.; Verweij, K.J.H.; Pedersen, N.L.; Ullén, F.

    2016-01-01

    Intelligence and cognitive ability have long been associated with chronometric performance measures, such as reaction time (RT), but few studies have investigated auditory RT in this context. The nature of this relationship is important for understanding the etiology and structure of intelligence.

  5. Near real-time large scale (sensor) data provisioning for PLF

    NARCIS (Netherlands)

    Vonder, M.R.; Waaij, B.D. van der; Harmsma, E.J.; Donker, G.

    2015-01-01

    Think big, start small. With that thought in mind, Smart Dairy Farming (SDF) developed a platform to make real-time sensor data from different farms available, for model developers to support dairy farmers in Precision Livestock Farming. The data has been made available via a standard interface on

  6. A new method for large time behavior of degenerate viscous Hamilton–Jacobi equations with convex Hamiltonians

    KAUST Repository

    Cagnetti, Filippo; Gomes, Diogo A.; Mitake, Hiroyoshi; Tran, Hung V.

    2015-01-01

    We investigate large-time asymptotics for viscous Hamilton-Jacobi equations with possibly degenerate diffusion terms. We establish new results on the convergence, which are the first general ones concerning equations which are neither uniformly parabolic nor first order. Our method is based on the nonlinear adjoint method and the derivation of new estimates on long time averaging effects. It also extends to the case of weakly coupled systems.

  7. High-Temperature-Short-Time Annealing Process for High-Performance Large-Area Perovskite Solar Cells.

    Science.gov (United States)

    Kim, Minjin; Kim, Gi-Hwan; Oh, Kyoung Suk; Jo, Yimhyun; Yoon, Hyun; Kim, Ka-Hyun; Lee, Heon; Kim, Jin Young; Kim, Dong Suk

    2017-06-27

    Organic-inorganic hybrid metal halide perovskite solar cells (PSCs) are attracting tremendous research interest due to their high solar-to-electric power conversion efficiency with a high possibility of cost-effective fabrication and certified power conversion efficiency now exceeding 22%. Although many effective methods for their application have been developed over the past decade, their practical transition to large-size devices has been restricted by difficulties in achieving high performance. Here we report on the development of a simple and cost-effective production method with high-temperature and short-time annealing processing to obtain uniform, smooth, and large-size grain domains of perovskite films over large areas. With high-temperature short-time annealing at 400 °C for 4 s, a perovskite film with an average domain size of 1 μm was obtained as a result of fast solvent evaporation. Solar cells fabricated using this processing technique had a maximum power conversion efficiency exceeding 20% over a 0.1 cm² active area and 18% over a 1 cm² active area. We believe our approach will enable the realization of highly efficient large-area PSCs for practical development with a very simple and short-time procedure. This simple method should lead the field toward the fabrication of uniform large-scale perovskite films, which are necessary for the production of high-efficiency solar cells that may also be applicable to several other material systems for more widespread practical deployment.

  8. A large scale flexible real-time communications topology for the LHC accelerator

    CERN Document Server

    Lauckner, R J; Ribeiro, P; Wijnands, Thijs

    1999-01-01

    The LHC design parameters impose very stringent beam control requirements in order to reach the nominal performance. Prompted by the lack of accurate models to predict field behaviour in superconducting magnet systems, the control system of the accelerator will provide flexible feedback channels between monitors and magnets around the 27 km circumference machine. The implementation of feedback systems composed of a large number of sparsely located elements presents some interesting challenges. Our goal was to find a topology where the control-loop requirements (number and distribution of nodes, latency and throughput) could be guaranteed without compromising flexibility. Our proposal is to federate a number of well known technologies and concepts, namely ATM, WorldFIP and RTOS, into a general framework. (6 refs).

  9. Time-gated ballistic imaging using a large aperture switching beam.

    Science.gov (United States)

    Mathieu, Florian; Reddemann, Manuel A; Palmer, Johannes; Kneer, Reinhold

    2014-03-24

    Ballistic imaging commonly denotes the formation of line-of-sight shadowgraphs through turbid media by suppression of multiply scattered photons. The technique relies on a femtosecond laser acting as light source for the images and as switch for an optical Kerr gate that separates ballistic photons from multiply scattered ones. The achievable image resolution is one major limitation for the investigation of small objects. In this study, practical influences on the optical Kerr gate and image quality are discussed theoretically and experimentally applying a switching beam with large aperture (D = 19 mm). It is shown how switching pulse energy and synchronization of switching and imaging pulse in the Kerr cell influence the gate's transmission. Image quality of ballistic imaging and standard shadowgraphy is evaluated and compared, showing that the present ballistic imaging setup is advantageous for optical densities in the range of 8 ballistic imaging setup into a schlieren-type system with an optical schlieren edge.

  10. Interstitial laser photocoagulation for benign thyroid nodules: time to treat large nodules.

    Science.gov (United States)

    Amabile, Gerardo; Rotondi, Mario; Pirali, Barbara; Dionisio, Rosa; Agozzino, Lucio; Lanza, Michele; Buonanno, Luciano; Di Filippo, Bruno; Fonte, Rodolfo; Chiovato, Luca

    2011-09-01

    Interstitial laser photocoagulation (ILP) is a new therapeutic option for the ablation of non-functioning and hyper-functioning benign thyroid nodules. Amelioration of the ablation procedure currently allows treating large nodules. Aim of this study was to evaluate the therapeutic efficacy of ILP, performed according to a modified protocol of ablation, in patients with large functioning and non-functioning thyroid nodules and to identify the best parameters for predicting successful outcome in hyperthyroid patients. Fifty-one patients with non-functioning thyroid nodules (group 1) and 26 patients with hyperfunctioning thyroid nodules (group 2) were enrolled. All patients had a nodular volume ≥40 ml. Patients were addressed to 1-3 cycles of ILP. A cycle consisted of three ILP sessions, each lasting 5-10 minutes repeated at an interval of 1 month. After each cycle of ILP patients underwent thyroid evaluation. A nodule volume reduction, expressed as percentage of the basal volume, significantly occurred in both groups (F = 190.4; P nodule volume; (iii) total amount of energy delivered expressed in Joule. ROC curves identified the percentage of volume reduction as the best parameter predicting a normalized serum TSH (area under the curve 0.962; P thyroid nodules, both in terms of nodule size reduction and cure of hyperthyroidism (87% of cured patients after the last ILP cycle). ILP should not be limited to patients refusing or being ineligible for surgery and/or radioiodine. Copyright © 2011 Wiley-Liss, Inc.

  11. Boreal Forests Sequester Large Amounts of Mercury over Millennial Time Scales in the Absence of Wildfire.

    Science.gov (United States)

    Giesler, Reiner; Clemmensen, Karina E; Wardle, David A; Klaminder, Jonatan; Bindler, Richard

    2017-03-07

    Alterations in fire activity due to climate change and fire suppression may have profound effects on the balance between storage and release of carbon (C) and associated volatile elements. Stored soil mercury (Hg) is known to volatilize due to wildfires and this could substantially affect the land-air exchange of Hg; conversely the absence of fires and human disturbance may increase the time period over which Hg is sequestered. Here we show for a wildfire chronosequence spanning over more than 5000 years in boreal forest in northern Sweden that belowground inventories of total Hg are strongly related to soil humus C accumulation (R 2 = 0.94, p millennial time scales in the prolonged absence of fire.

  12. Investigation on performance of all optical buffer with large dynamical delay time based on cascaded double loop optical buffers

    International Nuclear Information System (INIS)

    Yong-Jun, Wang; Xiang-Jun, Xin; Xiao-Lei, Zhang; Chong-Qing, Wu; Kuang-Lu, Yu

    2010-01-01

    Optical buffers are critical for optical signal processing in future optical packet-switched networks. In this paper, a theoretical study as well as an experimental demonstration on a new optical buffer with large dynamical delay time is carried out based on cascaded double loop optical buffers (DLOBs). It is found that pulse distortion can be restrained by a negative optical control mode when the optical packet is in the loop. Noise analysis indicates that it is feasible to realise a large variable delay range by cascaded DLOBs. These conclusions are validated by the experiment system with 4-stage cascaded DLOBs. Both the theoretical simulations and the experimental results indicate that a large delay range of 1–9999 times the basic delay unit and a fine granularity of 25 ns can be achieved by the cascaded DLOBs. The performance of the cascaded DLOBs is suitable for the all optical networks. (classical areas of phenomenology)
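
    As a rough illustration of how four cascaded stages can give the quoted 1-9999-unit range with 25 ns granularity (the decimal weighting assumed here is an interpretation for illustration, not something stated in the record): if stage i recirculates the packet count_i * 10**i times through the basic delay unit, four stages span the full range.

        BASIC_DELAY_NS = 25.0  # basic delay unit quoted in the record

        def cascaded_delay_ns(stage_counts):
            """Total delay of cascaded stages under an assumed decimal weighting
            (stage i contributes stage_counts[i] * 10**i basic units)."""
            units = sum(c * 10**i for i, c in enumerate(stage_counts))
            return units * BASIC_DELAY_NS

        # Stage counts (9, 9, 9, 9) -> 9999 units -> 249975.0 ns of total delay.
        print(cascaded_delay_ns([9, 9, 9, 9]))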

  13. Parasitic lasing suppression in large-aperture Ti:sapphire amplifiers by optimizing the seed–pump time delay

    International Nuclear Information System (INIS)

    Chu, Y X; Liang, X Y; Yu, L H; Xu, L; Lu, X M; Liu, Y Q; Leng, Y X; Li, R X; Xu, Z Z

    2013-01-01

    Theoretical and experimental investigations are carried out to determine the influence of the time delay between the input seed pulse and pump pulses on transverse parasitic lasing in a Ti:sapphire amplifier with a diameter of 80 mm, which is clad by a refractive index-matched liquid doped with an absorber. When the time delay is optimized, a maximum output energy of 50.8 J is achieved at a pump energy of 105 J, which corresponds to a conversion efficiency of 47.5%. Based on the existing compressor, the laser system achieves a peak power of 1.26 PW with a 29.0 fs pulse duration. (letter)

  14. Response time distributions in rapid chess: a large-scale decision making experiment.

    Science.gov (United States)

    Sigman, Mariano; Etchemendy, Pablo; Slezak, Diego Fernández; Cecchi, Guillermo A

    2010-01-01

    Rapid chess provides an unparalleled laboratory to understand decision making in a natural environment. In a chess game, players choose consecutively around 40 moves in a finite time budget. The goodness of each choice can be determined quantitatively since current chess algorithms estimate precisely the value of a position. Web-based chess produces vast amounts of data, millions of decisions per day, incommensurable with traditional psychological experiments. We generated a database of response times (RTs) and position values in rapid chess games. We measured robust emergent statistical observables: (1) RT distributions are long-tailed and show qualitatively distinct forms at different stages of the game, (2) RTs of successive moves are highly correlated both for intra- and inter-player moves. These findings have theoretical implications since they deny two basic assumptions of sequential decision making algorithms: RTs are not stationary and cannot be generated by a state function. Our results also have practical implications. First, we characterized the capacity of blunders and score fluctuations to predict a player's strength, which is still an open problem in chess software. Second, we show that the winning likelihood can be reliably estimated from a weighted combination of remaining times and position evaluation.

  15. On large-time energy concentration in solutions to the Navier-Stokes equations in general domains

    Czech Academy of Sciences Publication Activity Database

    Skalák, Zdeněk

    2011-01-01

    Roč. 91, č. 9 (2011), s. 724-732 ISSN 0044-2267 R&D Projects: GA AV ČR IAA100190905 Institutional research plan: CEZ:AV0Z20600510 Keywords : Navier-Stokes equations * large-time behavior * energy concentration Subject RIV: BA - General Mathematics Impact factor: 0.863, year: 2011

  16. Association between time perspective and organic food consumption in a large sample of adults.

    Science.gov (United States)

    Bénard, Marc; Baudry, Julia; Méjean, Caroline; Lairon, Denis; Giudici, Kelly Virecoulon; Etilé, Fabrice; Reach, Gérard; Hercberg, Serge; Kesse-Guyot, Emmanuelle; Péneau, Sandrine

    2018-01-05

    Organic food intake has risen in many countries during the past decades. Even though motivations associated with such choice have been studied, psychological traits preceding these motivations have rarely been explored. Consideration of future consequences (CFC) represents the extent to which individuals consider future versus immediate consequences of their current behaviors. Consequently, a future oriented personality may be an important characteristic of organic food consumers. The objective was to analyze the association between CFC and organic food consumption in a large sample of the adult general population. In 2014, a sample of 27,634 participants from the NutriNet-Santé cohort study completed the CFC questionnaire and an Organic-Food Frequency questionnaire. For each food group (17 groups), non-organic food consumers were compared to organic food consumers across quartiles of the CFC using multiple logistic regressions. Moreover, adjusted means of proportions of organic food intakes out of total food intakes were compared between quartiles of the CFC. Analyses were adjusted for socio-demographic, lifestyle and dietary characteristics. Participants with higher CFC were more likely to consume organic food (OR quartile 4 (Q4) vs. Q1 = 1.88, 95% CI: 1.62, 2.20). Overall, future oriented participants were more likely to consume 14 food groups. The strongest associations were observed for starchy refined foods (OR = 1.78, 95% CI: 1.63, 1.94), and fruits and vegetables (OR = 1.74, 95% CI: 1.58, 1.92). The contribution of organic food intake out of total food intake was 33% higher in the Q4 compared to Q1. More precisely, the contribution of organic food consumed was higher in the Q4 for 16 food groups. The highest relative differences between Q4 and Q1 were observed for starchy refined foods (22%) and non-alcoholic beverages (21%). Seafood was the only food group without a significant difference. This study provides information on the personality of

  17. Stochastic Stokes' Drift, Homogenized Functional Inequalities, and Large Time Behavior of Brownian Ratchets

    KAUST Repository

    Blanchet, Adrien

    2009-01-01

    A periodic perturbation of a Gaussian measure modifies the sharp constants in Poincaré and logarithmic Sobolev inequalities in the homogenization limit, that is, when the period of a periodic perturbation converges to zero. We use variational techniques to determine the homogenized constants and get optimal convergence rates towards equilibrium of the solutions of the perturbed diffusion equations. The study of these sharp constants is motivated by the study of the stochastic Stokes' drift. It also applies to Brownian ratchets and molecular motors in biology. We first establish a transport phenomenon. Asymptotically, the center of mass of the solution moves with a constant velocity, which is determined by a doubly periodic problem. In the reference frame attached to the center of mass, the behavior of the solution is governed at large scale by a diffusion with a modified diffusion coefficient. Using the homogenized logarithmic Sobolev inequality, we prove that the solution converges in self-similar variables attached to the center of mass to a stationary solution of a Fokker-Planck equation modulated by a periodic perturbation with fast oscillations, with an explicit rate. We also give an asymptotic expansion of the traveling diffusion front corresponding to the stochastic Stokes' drift with given potential flow. © 2009 Society for Industrial and Applied Mathematics.

  18. Time-Efficient High-Resolution Large-Area Nano-Patterning of Silicon Dioxide

    DEFF Research Database (Denmark)

    Lin, Li; Ou, Yiyu; Aagesen, Martin

    2017-01-01

    A nano-patterning approach on silicon dioxide (SiO2) material, which could be used for the selective growth of III-V nanowires in photovoltaic applications, is demonstrated. In this process, a silicon (Si) stamp with nanopillar structures was first fabricated using electron-beam lithography (EBL....... In addition, high time efficiency can be realized by one-spot electron-beam exposure in the EBL process combined with NIL for mass production. Furthermore, the one-spot exposure enables the scalability of the nanostructures for different application requirements by tuning only the exposure dose. The size...

  19. Parallel real-time visualization system for large-scale simulation. Application to WSPEEDI

    International Nuclear Information System (INIS)

    Muramatsu, Kazuhiro; Otani, Takayuki; Kitabata, Hideyuki; Matsumoto, Hideki; Takei, Toshifumi; Doi, Shun

    2000-01-01

    The real-time visualization system, PATRAS (PArallel TRAcking Steering system) has been developed on parallel computing servers. The system performs almost all of the visualization tasks on a parallel computing server, and uses image data compression technique for efficient communication between the server and the client terminal. Therefore, the system realizes high performance concurrent visualization in an internet computing environment. The experience in applying PATRAS to WSPEEDI (Worldwide version of System for Prediction Environmental Emergency Dose Information) is reported. The application of PATRAS to WSPEEDI enables users to understand behaviours of radioactive tracers from different release points easily and quickly. (author)

  20. A reference web architecture and patterns for real-time visual analytics on large streaming data

    Science.gov (United States)

    Kandogan, Eser; Soroker, Danny; Rohall, Steven; Bak, Peter; van Ham, Frank; Lu, Jie; Ship, Harold-Jeffrey; Wang, Chun-Fu; Lai, Jennifer

    2013-12-01

    Monitoring and analysis of streaming data, such as social media, sensors, and news feeds, has become increasingly important for business and government. The volume and velocity of incoming data are key challenges. To effectively support monitoring and analysis, statistical and visual analytics techniques need to be seamlessly integrated; analytic techniques for a variety of data types (e.g., text, numerical) and scope (e.g., incremental, rolling-window, global) must be properly accommodated; interaction, collaboration, and coordination among several visualizations must be supported in an efficient manner; and the system should support the use of different analytics techniques in a pluggable manner. Especially in web-based environments, these requirements pose restrictions on the basic visual analytics architecture for streaming data. In this paper we report on our experience of building a reference web architecture for real-time visual analytics of streaming data, identify and discuss architectural patterns that address these challenges, and report on applying the reference architecture for real-time Twitter monitoring and analysis.
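
    As a small illustration of the "incremental" and "rolling-window" analytic scopes mentioned above (a generic sketch, not the authors' reference architecture), a streaming aggregator can maintain both a global running mean and a fixed-length rolling-window mean as events arrive:

        from collections import deque

        class StreamingStats:
            """Global running mean plus a rolling-window mean over a stream."""

            def __init__(self, window=100):
                self.count = 0
                self.total = 0.0
                self.window = deque(maxlen=window)

            def update(self, value):
                self.count += 1
                self.total += value
                self.window.append(value)

            def global_mean(self):
                return self.total / self.count if self.count else 0.0

            def rolling_mean(self):
                return sum(self.window) / len(self.window) if self.window else 0.0

        stats = StreamingStats(window=3)
        for v in [1.0, 2.0, 3.0, 10.0]:
            stats.update(v)
        print(stats.global_mean(), stats.rolling_mean())  # 4.0 5.0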

  1. Detection of long nulls in PSR B1706-16, a pulsar with large timing irregularities

    Science.gov (United States)

    Naidu, Arun; Joshi, Bhal Chandra; Manoharan, P. K.; Krishnakumar, M. A.

    2018-04-01

    Single-pulse observations characterizing in detail the nulling behaviour of PSR B1706-16 are reported for the first time in this paper. Our regular long duration monitoring of this pulsar reveals long nulls of 2-5 h with an overall nulling fraction of 31 ± 2 per cent. The pulsar shows two distinct phases of emission. It is usually in an active phase, characterized by pulsations interspersed with shorter nulls, with a nulling fraction of about 15 per cent, but it also rarely switches to an inactive phase, consisting of long nulls. The nulls in this pulsar are concurrent between 326.5 and 610 MHz. Profile mode changes accompanied by changes in fluctuation properties are seen in this pulsar, which switches from mode A before a null to mode B after the null. The distribution of null durations in this pulsar is bimodal. With its occasional long nulls, PSR B1706-16 joins the small group of intermediate nullers, which lie between the classical nullers and the intermittent pulsars. Similar to other intermediate nullers, PSR B1706-16 shows high timing noise, which could be due to its rare long nulls if one assumes that the slowdown rate during such nulls is different from that during the bursts.

  2. Across Space and Time: Social Responses to Large-Scale Biophysical Systems

    Science.gov (United States)

    Macmynowski, Dena P.

    2007-06-01

    The conceptual rubric of ecosystem management has been widely discussed and deliberated in conservation biology, environmental policy, and land/resource management. In this paper, I argue that two critical aspects of the ecosystem management concept require greater attention in policy and practice. First, although emphasis has been placed on the “space” of systems, the “time”—or rates of change—associated with biophysical and social systems has received much less consideration. Second, discussions of ecosystem management have often neglected the temporal disconnects between changes in biophysical systems and the response of social systems to management issues and challenges. The empirical basis of these points is a case study of the “Crown of the Continent Ecosystem,” an international transboundary area of the Rocky Mountains that surrounds Glacier National Park (USA) and Waterton Lakes National Park (Canada). This project assessed the experiences and perspectives of 1) middle- and upper-level government managers responsible for interjurisdictional cooperation, and 2) environmental nongovernment organizations with an international focus. I identify and describe 10 key challenges to increasing the extent and intensity of transboundary cooperation in land/resource management policy and practice. These issues are discussed in terms of their political, institutional, cultural, information-based, and perceptual elements. Analytic techniques include a combination of environmental history, semistructured interviews with 48 actors, and text analysis in a systematic qualitative framework. The central conclusion of this work is that the rates of response of human social systems must be better integrated with the rates of ecological change. This challenge is equal to or greater than the well-recognized need to adapt the spatial scale of human institutions to large-scale ecosystem processes and transboundary wildlife.

  3. Ethical dilemmas of a large national multi-centre study in Australia: time for some consistency.

    Science.gov (United States)

    Driscoll, Andrea; Currey, Judy; Worrall-Carter, Linda; Stewart, Simon

    2008-08-01

    To examine the impact and obstacles that individual Institutional Research Ethics Committees (IRECs) had on a large-scale national multi-centre clinical audit called the National Benchmarks and Evidence-based National Clinical guidelines for Heart failure management programmes Study. Multi-centre research is commonplace in the health care system. However, IRECs continue to fail to differentiate between research and quality audit projects. The National Benchmarks and Evidence-based National Clinical guidelines for Heart failure management programmes Study used an investigator-developed questionnaire concerning a clinical audit for heart failure programmes throughout Australia. Ethical guidelines developed by the national governing body of health and medical research in Australia classified the study as a low risk clinical audit not requiring ethical approval by IRECs. Fifteen of 27 IRECs stipulated that the research proposal undergo full ethical review. None of the IRECs acknowledged national quality assurance guidelines and recommendations, nor ethics approval from other IRECs. Twelve of the 15 IRECs used different ethics application forms. Variability in the type of amendments was prolific. Lack of uniformity in ethical review processes resulted in a six- to eight-month delay in commencing the national study. Development of a national ethics application form with full ethical review by the first IREC and compulsory expedited review by subsequent IRECs would resolve issues raised in this paper. IRECs must change their ethics approval processes to one that enhances facilitation of multi-centre research, which is now a normative process for health services. The findings of this study highlight inconsistent ethical requirements between different IRECs. Also highlighted are the obstacles and delays that IRECs create when undertaking multi-centre clinical audits.

  4. Requirements for existing buildings

    DEFF Research Database (Denmark)

    Thomsen, Kirsten Engelund; Wittchen, Kim Bjarne

    This report collects energy performance requirements for existing buildings in European member states by June 2012.

  5. Greening Existing Tribal Buildings

    Science.gov (United States)

    Guidance about improving sustainability in existing tribal casinos and manufactured homes. Many steps can be taken to make existing buildings greener and healthier. They may also reduce utility and medical costs.

  6. Stabilizing the long-time behavior of the forced Navier-Stokes and damped Euler systems by large mean flow

    Science.gov (United States)

    Cyranka, Jacek; Mucha, Piotr B.; Titi, Edriss S.; Zgliczyński, Piotr

    2018-04-01

    The paper studies the issue of stability of solutions to the forced Navier-Stokes and damped Euler systems in periodic boxes. It is shown that for large, but fixed, Grashof (Reynolds) number the turbulent behavior of all Leray-Hopf weak solutions of the three-dimensional Navier-Stokes equations, in a periodic box, is suppressed, when viewed in the right frame of reference, by a large enough average flow of the initial data; a phenomenon that is similar in spirit to the Landau damping. Specifically, we consider initial data which have a large enough spatial average; then by means of the Galilean transformation, and thanks to the periodic boundary conditions, the large time-independent forcing term changes into a highly oscillatory force, which then allows us to employ some averaging principles to establish our result. Moreover, we also show that under the action of fast oscillatory-in-time external forces all two-dimensional regular solutions of the Navier-Stokes and the damped Euler equations converge to a unique time-periodic solution.

  7. Time-Efficient High-Resolution Large-Area Nano-Patterning of Silicon Dioxide

    Directory of Open Access Journals (Sweden)

    Li Lin

    2017-01-01

    Full Text Available A nano-patterning approach on silicon dioxide (SiO2) material, which could be used for the selective growth of III-V nanowires in photovoltaic applications, is demonstrated. In this process, a silicon (Si) stamp with nanopillar structures was first fabricated using electron-beam lithography (EBL) followed by a dry etching process. Afterwards, the Si stamp was employed in nanoimprint lithography (NIL) assisted with a dry etching process to produce nanoholes on the SiO2 layer. The demonstrated approach has advantages such as a high resolution in nanoscale by EBL and good reproducibility by NIL. In addition, high time efficiency can be realized by one-spot electron-beam exposure in the EBL process combined with NIL for mass production. Furthermore, the one-spot exposure enables the scalability of the nanostructures for different application requirements by tuning only the exposure dose. The size variation of the nanostructures resulting from exposure parameters in EBL, the pattern transfer during nanoimprint in NIL, and subsequent etching processes of SiO2 were also studied quantitatively. By this method, a hexagonally arranged hole array in SiO2 with a hole diameter ranging from 45 to 75 nm and a pitch of 600 nm was demonstrated on a four-inch wafer.

  8. Energy beyond food: foraging theory informs time spent in thermals by a large soaring bird.

    Directory of Open Access Journals (Sweden)

    Emily L C Shepard

    Full Text Available Current understanding of how animals search for and exploit food resources is based on microeconomic models. Although widely used to examine feeding, such constructs should inform other energy-harvesting situations where theoretical assumptions are met. In fact, some animals extract non-food forms of energy from the environment, such as birds that soar in updraughts. This study examined whether the gains in potential energy (altitude) followed efficiency-maximising predictions in the world's heaviest soaring bird, the Andean condor (Vultur gryphus). Animal-attached technology was used to record condor flight paths in three-dimensions. Tracks showed that time spent in patchy thermals was broadly consistent with a strategy to maximise the rate of potential energy gain. However, the rate of climb just prior to leaving a thermal increased with thermal strength and exit altitude. This suggests higher rates of energetic gain may not be advantageous where the resulting gain in altitude would lead to a reduction in the ability to search the ground for food. Consequently, soaring behaviour appeared to be modulated by the need to reconcile differing potential energy and food energy distributions. We suggest that foraging constructs may provide insight into the exploitation of non-food energy forms, and that non-food energy distributions may be more important in informing patterns of movement and residency over a range of scales than previously considered.

  9. Correlates of sedentary time in different age groups: results from a large cross sectional Dutch survey.

    Science.gov (United States)

    Bernaards, Claire M; Hildebrandt, Vincent H; Hendriksen, Ingrid J M

    2016-10-26

    Evidence shows that prolonged sitting is associated with an increased risk of mortality, independent of physical activity (PA). The aim of the study was to identify correlates of sedentary time (ST) in different age groups and day types (i.e. school-/work day versus non-school-/non-work day). The study sample consisted of 1895 Dutch children (4-11 years), 1131 adolescents (12-17 years), 8003 adults (18-64 years) and 1569 elderly (65 years and older) who enrolled in the Dutch continuous national survey 'Injuries and Physical Activity in the Netherlands' between 2006 and 2011. Respondents estimated the number of sitting hours during a regular school-/workday and a regular non-school/non-work day. Multiple linear regression analyses on cross-sectional data were used to identify correlates of ST. Significant positive associations with ST were observed for: higher age (4-to-17-year-olds and elderly), male gender (adults), overweight (children), higher education (adults ≥ 30 years), urban environment (adults), chronic disease (adults ≥ 30 years), sedentary work (adults), not meeting the moderate to vigorous PA (MVPA) guideline (children and adults ≥ 30 years) and not meeting the vigorous PA (VPA) guideline (4-to-17-year-olds). Correlates of ST that significantly differed between day types were working hours and meeting the VPA guideline. More working hours were associated with more ST on school-/work days. In children and adolescents, meeting the VPA guideline was associated with less ST on non-school/non-working days only. This study provides new insights in the correlates of ST in different age groups and thus possibilities for interventions in these groups. Correlates of ST appear to differ between age groups and to a lesser degree between day types. This implies that interventions to reduce ST should be age specific. Longitudinal studies are needed to draw conclusions on causality of the relationship between identified correlates and ST.

  10. Correlates of sedentary time in different age groups: results from a large cross sectional Dutch survey

    Directory of Open Access Journals (Sweden)

    Claire M. Bernaards

    2016-10-01

    Full Text Available Abstract Background Evidence shows that prolonged sitting is associated with an increased risk of mortality, independent of physical activity (PA). The aim of the study was to identify correlates of sedentary time (ST) in different age groups and day types (i.e. school-/work day versus non-school-/non-work day). Methods The study sample consisted of 1895 Dutch children (4–11 years), 1131 adolescents (12–17 years), 8003 adults (18–64 years) and 1569 elderly (65 years and older) who enrolled in the Dutch continuous national survey ‘Injuries and Physical Activity in the Netherlands’ between 2006 and 2011. Respondents estimated the number of sitting hours during a regular school-/workday and a regular non-school/non-work day. Multiple linear regression analyses on cross-sectional data were used to identify correlates of ST. Results Significant positive associations with ST were observed for: higher age (4-to-17-year-olds and elderly), male gender (adults), overweight (children), higher education (adults ≥ 30 years), urban environment (adults), chronic disease (adults ≥ 30 years), sedentary work (adults), not meeting the moderate to vigorous PA (MVPA) guideline (children and adults ≥ 30 years) and not meeting the vigorous PA (VPA) guideline (4-to-17-year-olds). Correlates of ST that significantly differed between day types were working hours and meeting the VPA guideline. More working hours were associated with more ST on school-/work days. In children and adolescents, meeting the VPA guideline was associated with less ST on non-school/non-working days only. Conclusions This study provides new insights into the correlates of ST in different age groups and thus possibilities for interventions in these groups. Correlates of ST appear to differ between age groups and to a lesser degree between day types. This implies that interventions to reduce ST should be age specific. Longitudinal studies are needed to draw conclusions on causality of the relationship between identified correlates and ST.
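
    For orientation only (not part of the record above): the multiple linear regression analysis described in this study can be sketched in a few lines. The variables, coefficients and data below are hypothetical and serve only to show the structure of such an analysis, assuming the statsmodels package is available.

```python
# Illustrative sketch: regress self-reported daily sitting hours on a few candidate
# correlates. All variable names and data are synthetic, not the survey's.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age":            rng.integers(18, 65, n),
    "male":           rng.integers(0, 2, n),
    "sedentary_work": rng.integers(0, 2, n),
    "meets_mvpa":     rng.integers(0, 2, n),
})
# Synthetic outcome: sitting hours on a work day.
df["sitting_hours"] = (
    4.0 + 0.02 * df["age"] + 0.5 * df["male"]
    + 2.5 * df["sedentary_work"] - 0.8 * df["meets_mvpa"]
    + rng.normal(0, 1.5, n)
)

model = smf.ols(
    "sitting_hours ~ age + male + sedentary_work + meets_mvpa", data=df
).fit()
print(model.summary())   # coefficient signs/sizes indicate the direction of each correlate
```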

  11. Limitations of existing web services

    Indian Academy of Sciences (India)

    Limitations of existing web services: uploading or downloading large data; serving too many users from a single source; difficulty in providing compute-intensive jobs; dependence on the internet and its bandwidth; security of data in transit; maintaining confidentiality of data ...

  12. Large-scale digitizer system (LSD) for charge and time digitization in high-energy physics experiments

    International Nuclear Information System (INIS)

    Althaus, R.F.; Kirsten, F.A.; Lee, K.L.; Olson, S.R.; Wagner, L.J.; Wolverton, J.M.

    1976-10-01

    A large-scale digitizer (LSD) system for acquiring charge and time-of-arrival particle data from high-energy-physics experiments has been developed at the Lawrence Berkeley Laboratory. The objective in this development was to significantly reduce the cost of instrumenting large-detector arrays which, for the 4π-geometry of colliding-beam experiments, are proposed with an order of magnitude increase in channel count over previous detectors. In order to achieve the desired economy (approximately $65 per channel), a system was designed in which a number of control signals for conversion, for digitization, and for readout are shared in common by all the channels in each 128-channel bin. The overall-system concept and the distribution of control signals that are critical to the 10-bit charge resolution and to the 12-bit time resolution are described. Also described is the bit-serial transfer scheme, chosen for its low component and cabling costs

  13. Space-time relationship in continuously moving table method for large FOV peripheral contrast-enhanced magnetic resonance angiography

    International Nuclear Information System (INIS)

    Sabati, M; Lauzon, M L; Frayne, R

    2003-01-01

    Data acquisition using a continuously moving table approach is a method capable of generating large field-of-view (FOV) 3D MR angiograms. However, in order to obtain venous contamination-free contrast-enhanced (CE) MR angiograms in the lower limbs, one of the major challenges is to acquire all necessary k-space data during the restricted arterial phase of the contrast agent. Preliminary investigation on the space-time relationship of continuously acquired peripheral angiography is performed in this work. Deterministic and stochastic undersampled hybrid-space (x, k_y, k_z) acquisitions are simulated for large FOV peripheral runoff studies. Initial results show the possibility of acquiring isotropic large FOV images of the entire peripheral vascular system. An optimal trade-off between the spatial and temporal sampling properties was found that produced a high-spatial resolution peripheral CE-MR angiogram. The deterministic sampling pattern was capable of reconstructing the global structure of the peripheral arterial tree and showed slightly better global quantitative results than stochastic patterns. Optimal stochastic sampling patterns, on the other hand, enhanced small vessels and had more favourable local quantitative results. These simulations demonstrate the complex spatial-temporal relationship when sampling large FOV peripheral runoff studies. They also suggest that more investigation is required to maximize image quality as a function of hybrid-space coverage, acquisition repetition time and sampling pattern parameters.
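
    A conceptual sketch (not the authors' simulation code) of the two classes of undersampling pattern mentioned above: a deterministic pattern that keeps regularly spaced k_y lines, and a stochastic, variable-density pattern that samples the k_y-k_z plane more densely near the centre of k-space. Matrix size, acceleration factor and density function are arbitrary choices.

```python
import numpy as np

ny, nz, accel = 128, 64, 4                      # matrix size and acceleration factor
n_keep = ny * nz // accel                       # number of phase-encode points retained

# Deterministic pattern: keep every `accel`-th k_y line (regular Cartesian skipping).
det_mask = np.zeros((ny, nz), dtype=bool)
det_mask[::accel, :] = True

# Stochastic pattern: variable-density random sampling, denser near the k-space centre.
ky, kz = np.meshgrid(np.linspace(-1, 1, ny), np.linspace(-1, 1, nz), indexing="ij")
density = np.exp(-4.0 * (ky**2 + kz**2))        # favour low spatial frequencies
density /= density.sum()
rng = np.random.default_rng(1)
idx = rng.choice(ny * nz, size=n_keep, replace=False, p=density.ravel())
sto_mask = np.zeros(ny * nz, dtype=bool)
sto_mask[idx] = True
sto_mask = sto_mask.reshape(ny, nz)

print(det_mask.mean(), sto_mask.mean())         # both fractions equal 1/accel
```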

  14. Extending flood forecasting lead time in a large watershed by coupling WRF QPF with a distributed hydrological model

    Science.gov (United States)

    Li, Ji; Chen, Yangbo; Wang, Huanyu; Qin, Jianming; Li, Jie; Chiao, Sen

    2017-03-01

    Long lead-time flood forecasting is very important for large watershed flood mitigation as it provides more time for flood warning and emergency responses. The latest numerical weather forecast models can provide 1-15-day quantitative precipitation forecasting products in grid format, and coupling these products with a distributed hydrological model can produce long lead-time watershed flood forecasting products. This paper studied the feasibility of coupling the Liuxihe model with the Weather Research and Forecasting quantitative precipitation forecast (WRF QPF) for large watershed flood forecasting in southern China. The WRF QPF products have three lead times, 24, 48 and 72 h, with a grid resolution of 20 km × 20 km. The Liuxihe model is set up with freely downloaded terrain properties; the model parameters were previously optimized with rain-gauge-observed precipitation, and re-optimized with the WRF QPF. Results show that the WRF QPF is biased with respect to the rain gauge precipitation, and a method is proposed to post-process the WRF QPF products, which improves the flood forecasting capability. With model parameter re-optimization, the model's performance also improves, which suggests that the model parameters should be optimized with the QPF rather than with the rain gauge precipitation. As the lead time increases, the accuracy of the WRF QPF decreases, as does the flood forecasting capability. Flood forecasting products produced by coupling the Liuxihe model with the WRF QPF provide a good reference for large watershed flood warning due to their long lead time and rational results.
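
    The paper's post-processing method is not detailed in this record; as a hedged illustration, one simple way to correct a biased QPF against rain-gauge observations is a linear correction fitted on past events. The rainfall values below are invented.

```python
import numpy as np

qpf_past   = np.array([12.0, 30.0, 55.0, 80.0, 110.0])   # forecast areal rainfall (mm)
gauge_past = np.array([18.0, 41.0, 70.0, 96.0, 140.0])   # observed areal rainfall (mm)

# Least-squares fit: gauge ~ a * qpf + b
a, b = np.polyfit(qpf_past, gauge_past, 1)

def correct(qpf_new):
    """Apply the fitted linear correction to a new QPF value (mm)."""
    return a * qpf_new + b

print(round(correct(60.0), 1))   # corrected forecast that would drive the hydrological model
```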

  15. A hybrid, broadband, low noise charge preamplifier for simultaneous high resolution energy and time information with large capacitance semiconductor detector

    International Nuclear Information System (INIS)

    Goyot, M.

    1975-05-01

    A broadband, low-noise charge preamplifier was developed in hybrid form for a recoil spectrometer requiring large-capacitance semiconductor detectors. This new hybrid, low-cost preamplifier provides good timing information without compromising energy resolution. With a 500 pF external input capacitance, it provides two simultaneous outputs: (i) a fast, current-sensitive output with a rise time of 9 ns and 2 mV/MeV into a 50 ohm load, and (ii) a slow, charge-sensitive output with an energy resolution of 14 keV (FWHM, Si) using an ungated RC-CR filter of 2 μs, with FET input protection [fr]

  16. Measuring gas-residence times in large municipal incinerators, by means of a pseudo-random binary signal tracer technique

    International Nuclear Information System (INIS)

    Nasserzadeh, V.; Swithenbank, J.; Jones, B.

    1995-01-01

    The problem of measuring gas-residence time in large incinerators was studied by the pseudo-random binary sequence (PRBS) stimulus tracer response technique at the Sheffield municipal solid-waste incinerator (35 MW plant). The steady-state system was disturbed by the superimposition of small fluctuations in the form of a pseudo-random binary sequence of methane pulses, and the response of the incinerator was determined from the CO2 concentration in flue gases at the boiler exit, measured with a specially developed optical gas analyser with a high-frequency response. For data acquisition, an on-line PC computer was used together with the LAB Windows software system; the output response was then cross-correlated with the perturbation signal to give the impulse response of the incinerator. There was very good agreement between the gas-residence time for the Sheffield MSW incinerator as calculated by computational fluid dynamics (FLUENT Model) and gas-residence time at the plant as measured by the PRBS tracer technique. The results obtained from this research programme clearly demonstrate that the PRBS stimulus tracer response technique can be successfully and economically used to measure gas-residence times in large incinerator plants. It also suggests that the common commercial practice of characterising the incinerator operation by a single-residence-time parameter may lead to a misrepresentation of the complexities involved in describing the operation of the incineration system. (author)
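
    A toy demonstration of the tracer principle used above: perturb a system with a pseudo-random binary input, record the noisy output, recover the impulse response by cross-correlation, and take its first moment as the mean residence time. A random binary sequence stands in for a true maximum-length PRBS, and all numbers are invented rather than plant data.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n = 0.05, 4096                             # sampling interval (s) and record length
u = rng.integers(0, 2, n) * 2.0 - 1.0          # pseudo-random binary perturbation (+/-1)

tau_true = 2.0                                 # "true" mean residence time of the toy system (s)
t = np.arange(0, 10 * tau_true, dt)
h_true = np.exp(-t / tau_true) / tau_true      # impulse response of a single well-mixed vessel

y = np.convolve(u, h_true)[:n] * dt            # simulated tracer response at the outlet...
y += rng.normal(0.0, 0.02, n)                  # ...plus measurement noise

# Cross-correlate output with input: for a white binary input this estimates h.
h_est = np.array([np.dot(y[k:], u[:n - k]) / (n - k) for k in range(len(t))]) / dt

mrt = np.sum(t * h_est) / np.sum(h_est)        # first moment = mean residence time
print(f"recovered mean residence time: {mrt:.2f} s (true value {tau_true} s)")
```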

  17. Investigation of Residence and Travel Times in a Large Floodplain Lake with Complex Lake-River Interactions: Poyang Lake (China

    Directory of Open Access Journals (Sweden)

    Yunliang Li

    2015-04-01

    Full Text Available Most biochemical processes and the associated water quality in lakes depend on their flushing abilities. The main objective of this study was to investigate the transport time scale in a large floodplain lake, Poyang Lake (China). A 2D hydrodynamic model (MIKE 21) was combined with dye tracer simulations to determine residence and travel times of the lake for various water level variation periods. The results indicate that Poyang Lake exhibits strong but spatially heterogeneous residence times that vary with its highly seasonal water level dynamics. Generally, the average residence times are less than 10 days along the lake’s main flow channels due to the prevailing northward flow pattern, whereas approximately 30 days were estimated during high water level conditions in the summer. The local topographically controlled flow patterns substantially increase the residence time in some bays, with high spatial values of six months to one year during all water level variation periods. Depending on changes in the water level regime, the travel times from the pollution sources to the lake outlet during the high and falling water level periods (up to 32 days) are four times greater than those under the rising and low water level periods (approximately seven days).

  18. Development of sub-nanosecond, high gain structures for time-of-flight ring imaging in large area detectors

    International Nuclear Information System (INIS)

    Wetstein, Matthew

    2011-01-01

    Microchannel plate photomultiplier tubes (MCPs) are compact, imaging detectors, capable of micron-level spatial imaging and timing measurements with resolutions below 10 ps. Conventional fabrication methods are too expensive for making MCPs in the quantities and sizes necessary for typical HEP applications, such as time-of-flight ring-imaging Cherenkov detectors (TOF-RICH) or water Cherenkov-based neutrino experiments. The Large Area Picosecond Photodetector Collaboration (LAPPD) is developing new, commercializable methods to fabricate 20 cm 2 thin planar MCPs at costs comparable to those of traditional photo-multiplier tubes. Transmission-line readout with waveform sampling on both ends of each line allows the efficient coverage of large areas while maintaining excellent time and space resolution. Rather than fabricating channel plates from active, high secondary electron emission materials, we produce plates from passive substrates, and coat them using atomic layer deposition (ALD), a well established industrial batch process. In addition to possible reductions in cost and conditioning time, this allows greater control to optimize the composition of active materials for performance. We present details of the MCP fabrication method, preliminary results from testing and characterization facilities, and possible HEP applications.

  19. Time-scale invariant changes in atmospheric radon concentration and crustal strain prior to a large earthquake

    Directory of Open Access Journals (Sweden)

    Y. Kawada

    2007-01-01

    Full Text Available Prior to large earthquakes (e.g. the 1995 Kobe earthquake, Japan), an increase in the atmospheric radon concentration is observed, and the rate of this increase follows a power law of the time-to-earthquake (time-to-failure). This phenomenon corresponds to increased radon migration in the crust and exhalation into the atmosphere. An irreversible thermodynamic model including time-scale invariance clarifies that the increases in the pressure of the advecting radon and in the permeability (hydraulic conductivity) of the crustal rocks are caused by the temporal changes in the power law of the crustal strain (or cumulative Benioff strain), which is associated with damage evolution such as microcracking or changing porosity. As a result, the radon flux and the atmospheric radon concentration can show a temporal power-law increase. The concentration of atmospheric radon can therefore be used as a proxy for the seismic precursory processes associated with crustal dynamics.
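
    The quoted power-law behaviour can be made concrete with a small synthetic example: if the rate of increase of the radon concentration scales as (t_f - t)^(-alpha) with time-to-failure, the exponent can be read off a log-log fit. The failure time and exponent below are assumed values, not observations.

```python
import numpy as np

rng = np.random.default_rng(2)
t_f, alpha = 100.0, 0.7                        # assumed failure time (days) and exponent
t = np.linspace(0.0, 99.0, 200)
rate = (t_f - t) ** (-alpha) * (1.0 + 0.05 * rng.normal(size=t.size))  # noisy synthetic rate

# In log-log coordinates the slope of rate vs. (t_f - t) gives -alpha.
slope, _ = np.polyfit(np.log(t_f - t), np.log(rate), 1)
print(f"fitted exponent: {-slope:.2f} (input value {alpha})")
```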

  20. Mapping geological structures in bedrock via large-scale direct current resistivity and time-domain induced polarization tomography

    DEFF Research Database (Denmark)

    Rossi, Matteo; Olsson, Per-Ivar; Johansson, Sara

    2017-01-01

    An investigation of geological conditions is always a key point for planning infrastructure constructions. Bedrock surface and rock quality must be estimated carefully in the designing process of infrastructures. A large direct-current resistivity and time-domain induced-polarization survey has ... ; there are northwest-trending Permian dolerite dykes that are less deformed. Four 2D direct-current resistivity and time-domain induced-polarization profiles of about 1-km length have been carefully pre-processed to retrieve time-domain induced polarization responses and inverted to obtain the direct-current resistivity distribution of the subsoil and the phase of the complex conductivity using a constant-phase angle model. The joint interpretation of electrical resistivity and induced-polarization models leads to a better understanding of complex three-dimensional subsoil geometries. The results have been ...

  1. A large capacity time division multiplexed (TDM) laser beam combining technique enabled by nanosecond speed KTN deflector

    Science.gov (United States)

    Yin, Stuart (Shizhuo); Chao, Ju-Hung; Zhu, Wenbin; Chen, Chang-Jiang; Campbell, Adrian; Henry, Michael; Dubinskiy, Mark; Hoffman, Robert C.

    2017-08-01

    In this paper, we present a novel large-capacity (1000+ channel) time division multiplexing (TDM) laser beam combining technique that harnesses a state-of-the-art nanosecond-speed potassium tantalate niobate (KTN) electro-optic (EO) beam deflector as the time division multiplexer. The major advantages of the TDM approach are: (1) large multiplexing capability (over 1000 channels), (2) high spatial beam quality (the combined beam has the same spatial profile as the individual beam), (3) high spectral beam quality (the combined beam has the same spectral width as the individual beam), and (4) insensitivity to the phase fluctuations of the individual lasers because of the incoherent nature of the beam combining. Quantitative analyses show that it is possible to achieve a single-aperture, single-transverse-mode solid-state and/or fiber laser with over one hundred kW of average power by pursuing this innovative beam combining method, which represents a major technical advance in the field of high-energy lasers. Such 100+ kW average-power, diffraction-limited lasers can play an important role in a variety of applications such as laser directed energy weapons (DEW) and large-capacity, high-speed laser manufacturing, including cutting, welding, and printing.

  2. Note: Large active area solid state photon counter with 20 ps timing resolution and 60 fs detection delay stability

    Science.gov (United States)

    Prochazka, Ivan; Kodet, Jan; Eckl, Johann; Blazej, Josef

    2017-10-01

    We report on the design, construction, and performance of a photon counting detector system, which is based on single photon avalanche diode detector technology. This photon counting device has been optimized for very high timing resolution and stability of its detection delay. The foreseen applications of this detector are laser ranging of space objects, ground-to-space laser time transfer, and fundamental metrology. The single photon avalanche diode structure, manufactured on silicon using K14 technology, is used as a sensor. The active area of the sensor is circular, with a 200 μm diameter. Its photon detection probability exceeds 40% in the wavelength range spanning from 500 to 800 nm. The sensor is operated in active quenching and gating mode. A new control circuit was optimized to maintain high timing resolution and detection delay stability. With this circuit, the timing resolution of the detector reaches 20 ps FWHM. In addition, the temperature change of the detection delay is as low as 70 fs/K. As a result, the detection delay stability of the device is exceptional: expressed in the form of time deviation, a detection delay stability of better than 60 fs has been achieved. Considering the large active area aperture of the detector, this is, to our knowledge, the best timing performance reported for a solid state photon counting detector so far.

  3. Track distortion in a micromegas based large prototype of a Time Projection Chamber for the International Linear Collider

    International Nuclear Information System (INIS)

    Bhattacharya, Deb Sankar; Majumdar, Nayana; Sarkar, S.; Bhattacharya, S.; Mukhopadhyay, Supratik; Bhattacharya, P.; Attie, D.; Colas, P.; Ganjour, S.; Bhattacharya, Aparajita

    2016-01-01

    The principal particle tracker at the International Linear Collider (ILC) is planned to be a large Time Projection Chamber (TPC) in which different Micro Pattern Gaseous Detectors (MPGDs) are candidates for the gaseous amplifier. A Micromegas (MM) based TPC can meet the ILC requirement of continuous and precise pattern recognition. Seven MM modules, working as the end-plate of a Large Prototype TPC (LPTPC) installed at DESY, have been tested with a 5 GeV electron beam. Due to the grounded peripheral frame of the MM modules, at low drift the electric field lines near the detector edge are no longer parallel to the TPC axis. This causes signal loss along the boundaries of the MM modules as well as distortion in the reconstructed track. In the presence of a magnetic field, the distorted electric field introduces an E×B effect

  4. Long-Term Fault Memory: A New Time-Dependent Recurrence Model for Large Earthquake Clusters on Plate Boundaries

    Science.gov (United States)

    Salditch, L.; Brooks, E. M.; Stein, S.; Spencer, B. D.; Campbell, M. R.

    2017-12-01

    A challenge for earthquake hazard assessment is that geologic records often show large earthquakes occurring in temporal clusters separated by periods of quiescence. For example, in Cascadia, a paleoseismic record going back 10,000 years shows four to five clusters separated by approximately 1,000 year gaps. If we are still in the cluster that began 1700 years ago, a large earthquake is likely to happen soon. If the cluster has ended, a great earthquake is less likely. For a Gaussian distribution of recurrence times, the probability of an earthquake in the next 50 years is six times larger if we are still in the most recent cluster. Earthquake hazard assessments typically employ one of two recurrence models, neither of which directly incorporate clustering. In one, earthquake probability is time-independent and modeled as Poissonian, so an earthquake is equally likely at any time. The fault has no "memory" because when a prior earthquake occurred has no bearing on when the next will occur. The other common model is a time-dependent earthquake cycle in which the probability of an earthquake increases with time until one happens, after which the probability resets to zero. Because the probability is reset after each earthquake, the fault "remembers" only the last earthquake. This approach can be used with any assumed probability density function for recurrence times. We propose an alternative, Long-Term Fault Memory (LTFM), a modified earthquake cycle model where the probability of an earthquake increases with time until one happens, after which it decreases, but not necessarily to zero. Hence the probability of the next earthquake depends on the fault's history over multiple cycles, giving "long-term memory". Physically, this reflects an earthquake releasing only part of the elastic strain stored on the fault. We use the LTFM to simulate earthquake clustering along the San Andreas Fault and Cascadia. In some portions of the simulated earthquake history, events would
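
    A toy, strain-budget reading of the Long-Term Fault Memory idea (not the authors' implementation): stress accumulates steadily, the chance of an event grows with the stored amount, and each event releases only part of it, so the hazard does not reset to zero and events can cluster. All parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
years, load_rate, k = 10_000, 1.0, 2.5e-4      # simulated years, strain added per year, hazard scale

strain, quake_years = 0.0, []
for year in range(years):
    strain += load_rate                        # steady tectonic loading
    if rng.random() < k * strain:              # event probability grows with stored strain
        quake_years.append(year)
        strain *= rng.uniform(0.2, 0.8)        # partial release: hazard does not reset to zero

intervals = np.diff(quake_years)
print(f"{len(quake_years)} events, recurrence intervals {intervals.min()}-{intervals.max()} yr "
      f"(mean {intervals.mean():.0f} yr)")
```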

  5. Hydrogen and methane generation from large hydraulic plant: Thermo-economic multi-level time-dependent optimization

    International Nuclear Information System (INIS)

    Rivarolo, M.; Magistri, L.; Massardo, A.F.

    2014-01-01

    Highlights: • We investigate H2 and CH4 production from a very large hydraulic plant (14 GW). • We employ only “spilled energy”, not used by the hydraulic plant, for H2 production. • We consider the integration with energy taken from the grid at different prices. • We consider hydrogen conversion in chemical reactors to produce methane. • We find the plants' optimal size using a time-dependent thermo-economic approach. - Abstract: This paper investigates hydrogen and methane generation from a large hydraulic plant, using an original multilevel thermo-economic optimization approach developed by the authors. Hydrogen is produced by water electrolysis employing time-dependent hydraulic energy related to the water which is not normally used by the plant, known as “spilled water electricity”. Both the demand for spilled energy and the electrical grid load vary widely by time of year; therefore a time-dependent, hour-by-hour analysis over one complete year has been carried out in order to define the optimal plant size. This time-period analysis is necessary to take into account the variability of spilled energy and electrical load profiles during the year. The hydrogen generation plant is based on 1 MWe water electrolysers fuelled with the “spilled water electricity” when available; in the remaining periods, in order to assure regular H2 production, the energy is taken from the electrical grid at higher cost. To perform the production plant size optimization, two hierarchical levels have been considered over a one-year time period, in order to minimize capital and variable costs. After the optimization of the hydrogen production plant size, a further analysis is carried out with a view to converting the produced H2 into methane in a chemical reactor, starting from H2 and CO2, which is obtained with CCS plants and/or carried by ships. For this plant, the optimal size of the electrolyser and chemical reactor system is defined. For both of the two solutions, thermo
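
    A schematic sketch of the kind of hour-by-hour, one-year sizing analysis described above, under strong simplifications of my own: free spilled energy whenever available, a fixed annual electrolysis target, invented prices, and no limit on how much grid energy can be absorbed in any hour.

```python
import numpy as np

rng = np.random.default_rng(4)
hours = 8760
spilled_mw = np.clip(rng.normal(40.0, 30.0, hours), 0.0, None)  # spilled hydro power available each hour (MW)

e_target   = 300_000.0   # electrolysis energy needed over the year to meet the H2 target (MWh)
grid_price = 60.0        # cost of energy bought from the grid (per MWh)
capex_year = 45_000.0    # annualised capital cost per 1 MWe electrolyser (per year)

def yearly_cost(n_units):
    cap = float(n_units)                                # installed electrolyser capacity (MW)
    free = np.minimum(cap, spilled_mw).sum()            # energy absorbed from spilled water (MWh)
    bought = max(0.0, e_target - free)                  # shortfall bought from the grid (MWh)
    return n_units * capex_year + bought * grid_price   # capital + variable cost

sizes = range(5, 101, 5)
best = min(sizes, key=yearly_cost)
print(f"least-cost size: {best} x 1 MWe electrolysers, yearly cost {yearly_cost(best):,.0f}")
```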

  6. A large set of potential past, present and future hydro-meteorological time series for the UK

    Directory of Open Access Journals (Sweden)

    B. P. Guillod

    2018-01-01

    Full Text Available Hydro-meteorological extremes such as drought and heavy precipitation can have large impacts on society and the economy. With potentially increasing risks associated with such events due to climate change, properly assessing the associated impacts and uncertainties is critical for adequate adaptation. However, the application of risk-based approaches often requires large sets of extreme events, which are not commonly available. Here, we present such a large set of hydro-meteorological time series for recent past and future conditions for the United Kingdom based on weather@home 2, a modelling framework consisting of a global climate model (GCM) driven by observed or projected sea surface temperature (SST) and sea ice, which is downscaled to 25 km over the European domain by a regional climate model (RCM). Sets of 100 time series are generated for each of (i) a historical baseline (1900–2006), (ii) five near-future scenarios (2020–2049) and (iii) five far-future scenarios (2070–2099). The five scenarios in each future time slice all follow the Representative Concentration Pathway 8.5 (RCP8.5) and sample the range of sea surface temperature and sea ice changes from CMIP5 (Coupled Model Intercomparison Project Phase 5) models. Validation of the historical baseline highlights good performance for temperature and potential evaporation, but substantial seasonal biases in mean precipitation, which are corrected using a linear approach. For extremes in low precipitation over a long accumulation period (> 3 months) and shorter-duration high precipitation (1–30 days), the time series generally represents past statistics well. Future projections show small precipitation increases in winter but large decreases in summer on average, leading to an overall drying, consistently with the most recent UK Climate Projections (UKCP09) but larger in magnitude than the latter. Both drought and high-precipitation events are projected to increase in frequency and intensity in most regions.

  7. Nonequilibrium Dynamics of Anisotropic Large Spins in the Kondo Regime: Time-Dependent Numerical Renormalization Group Analysis

    Science.gov (United States)

    Roosen, David; Wegewijs, Maarten R.; Hofstetter, Walter

    2008-02-01

    We investigate the time-dependent Kondo effect in a single-molecule magnet (SMM) strongly coupled to metallic electrodes. Describing the SMM by a Kondo model with large spin S>1/2, we analyze the underscreening of the local moment and the effect of anisotropy terms on the relaxation dynamics of the magnetization. Underscreening by single-channel Kondo processes leads to a logarithmically slow relaxation, while finite uniaxial anisotropy causes a saturation of the SMM’s magnetization. Additional transverse anisotropy terms induce quantum spin tunneling and a pseudospin-1/2 Kondo effect sensitive to the spin parity.

  8. A large set of potential past, present and future hydro-meteorological time series for the UK

    Science.gov (United States)

    Guillod, Benoit P.; Jones, Richard G.; Dadson, Simon J.; Coxon, Gemma; Bussi, Gianbattista; Freer, James; Kay, Alison L.; Massey, Neil R.; Sparrow, Sarah N.; Wallom, David C. H.; Allen, Myles R.; Hall, Jim W.

    2018-01-01

    Hydro-meteorological extremes such as drought and heavy precipitation can have large impacts on society and the economy. With potentially increasing risks associated with such events due to climate change, properly assessing the associated impacts and uncertainties is critical for adequate adaptation. However, the application of risk-based approaches often requires large sets of extreme events, which are not commonly available. Here, we present such a large set of hydro-meteorological time series for recent past and future conditions for the United Kingdom based on weather@home 2, a modelling framework consisting of a global climate model (GCM) driven by observed or projected sea surface temperature (SST) and sea ice which is downscaled to 25 km over the European domain by a regional climate model (RCM). Sets of 100 time series are generated for each of (i) a historical baseline (1900-2006), (ii) five near-future scenarios (2020-2049) and (iii) five far-future scenarios (2070-2099). The five scenarios in each future time slice all follow the Representative Concentration Pathway 8.5 (RCP8.5) and sample the range of sea surface temperature and sea ice changes from CMIP5 (Coupled Model Intercomparison Project Phase 5) models. Validation of the historical baseline highlights good performance for temperature and potential evaporation, but substantial seasonal biases in mean precipitation, which are corrected using a linear approach. For extremes in low precipitation over a long accumulation period ( > 3 months) and shorter-duration high precipitation (1-30 days), the time series generally represents past statistics well. Future projections show small precipitation increases in winter but large decreases in summer on average, leading to an overall drying, consistently with the most recent UK Climate Projections (UKCP09) but larger in magnitude than the latter. Both drought and high-precipitation events are projected to increase in frequency and intensity in most regions

  9. Fast analysis of wide-band scattering from electrically large targets with time-domain parabolic equation method

    Science.gov (United States)

    He, Zi; Chen, Ru-Shan

    2016-03-01

    An efficient three-dimensional time domain parabolic equation (TDPE) method is proposed to rapidly analyze the narrow-angle wideband EM scattering properties of electrically large targets. The finite-difference (FD) Crank-Nicolson (CN) scheme is traditionally used to solve the time-domain parabolic equation. However, huge computational resources are required when the meshes become dense. Therefore, the alternating direction implicit (ADI) scheme is introduced to discretize the time-domain parabolic equation. In this way, the reduced transient scattered fields can be calculated line by line in each transverse plane for any time step with unconditional stability. As a result, fewer computational resources are required for the proposed ADI-based TDPE method when compared with both the traditional CN-based TDPE method and the finite-difference time-domain (FDTD) method. By employing the rotating TDPE method, the complete bistatic RCS can be obtained with encouraging accuracy for any observation angle. Numerical examples are given to demonstrate the accuracy and efficiency of the proposed method.
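
    As an analogy only (not the authors' electromagnetic code), the ADI idea can be shown on a plain 2D diffusion equation u_t = alpha*(u_xx + u_yy): each half-step is implicit in one direction, so the work reduces to independent tridiagonal solves line by line and stays stable for large time steps. Grid sizes and parameters below are arbitrary.

```python
import numpy as np
from scipy.linalg import solve_banded

n, alpha, dx, dt, steps = 64, 1.0, 1.0 / 64, 2e-4, 200
r = alpha * dt / dx**2

# Banded storage of the implicit operator (I - (r/2)*D2) for one direction (Dirichlet walls).
ab = np.zeros((3, n))
ab[0, 1:] = -r / 2            # superdiagonal
ab[1, :] = 1 + r              # main diagonal
ab[2, :-1] = -r / 2           # subdiagonal

def explicit_half(v):
    """Apply (I + (r/2)*D2) along the first axis of v."""
    out = (1 - r) * v
    out[1:, :] += (r / 2) * v[:-1, :]
    out[:-1, :] += (r / 2) * v[1:, :]
    return out

u = np.zeros((n, n))
u[n // 2, n // 2] = 1.0                         # point "source" as initial condition

for _ in range(steps):
    rhs = explicit_half(u.T).T                  # explicit part along y
    u = solve_banded((1, 1), ab, rhs)           # tridiagonal solves: implicit along x, one per y-line
    rhs = explicit_half(u)                      # explicit part along x
    u = solve_banded((1, 1), ab, rhs.T).T       # tridiagonal solves: implicit along y, one per x-line

print(f"heat remaining in the domain after {steps} steps: {u.sum():.3f}")
```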

  10. KMTNet Time-series Photometry of the Doubly Eclipsing Binary Stars Located in the Large Magellanic Cloud

    Science.gov (United States)

    Hong, Kyeongsoo; Koo, Jae-Rim; Lee, Jae Woo; Kim, Seung-Lee; Lee, Chung-Uk; Park, Jang-Ho; Kim, Hyoun-Woo; Lee, Dong-Joo; Kim, Dong-Jin; Han, Cheongho

    2018-05-01

    We report the results of photometric observations of the doubly eclipsing binaries OGLE-LMC-ECL-15674 and OGLE-LMC-ECL-22159, both of which are composed of two pairs (designated A and B) of detached eclipsing binaries located in the Large Magellanic Cloud. The light curves were obtained by high-cadence time-series photometry using the Korea Microlensing Telescope Network 1.6 m telescopes located at three southern sites (CTIO, SAAO, and SSO) between 2016 September and 2017 January. The orbital periods were determined to be 1.433 and 1.387 days for components A and B of OGLE-LMC-ECL-15674, respectively, and 2.988 and 3.408 days for OGLE-LMC-ECL-22159A and B, respectively. Our light curve solutions indicate that the significant changes in the eclipse depths of OGLE-LMC-ECL-15674A and B were caused by variations in their inclination angles. The eclipse timing diagrams of the A and B components of OGLE-LMC-ECL-15674 and OGLE-LMC-ECL-22159 were analyzed using 28, 44, 28, and 26 new times of minimum light, respectively. The apsidal motion period of OGLE-LMC-ECL-15674B was estimated for the first time by detailed analysis of its eclipse timings; this detached eclipsing binary shows a fast apsidal period of 21.5 ± 0.1 years.

  11. Combined electrochemical, heat generation, and thermal model for large prismatic lithium-ion batteries in real-time applications

    Science.gov (United States)

    Farag, Mohammed; Sweity, Haitham; Fleckenstein, Matthias; Habibi, Saeid

    2017-08-01

    Real-time prediction of the battery's core temperature and terminal voltage is crucial for an accurate battery management system. In this paper, a combined electrochemical, heat generation, and thermal model is developed for large prismatic cells. The proposed model consists of three sub-models, an electrochemical model, a heat generation model, and a thermal model, which are coupled together in an iterative fashion through physicochemical temperature-dependent parameters. The proposed parameterization cycles identify the sub-models' parameters separately by exciting the battery under isothermal and non-isothermal operating conditions. The proposed combined model structure shows accurate terminal voltage and core temperature prediction at various operating conditions while maintaining a simple mathematical structure, making it ideal for real-time BMS applications. Finally, the model is validated against both isothermal and non-isothermal drive cycles, covering a broad range of C-rates and temperatures (-25 °C to 45 °C).

  12. Application of large area SiPMs for the readout of a plastic scintillator based timing detector

    Science.gov (United States)

    Betancourt, C.; Blondel, A.; Brundler, R.; Dätwyler, A.; Favre, Y.; Gascon, D.; Gomez, S.; Korzenev, A.; Mermod, P.; Noah, E.; Serra, N.; Sgalaberna, D.; Storaci, B.

    2017-11-01

    In this study an array of eight 6 mm × 6 mm area SiPMs was coupled to the end of a long plastic scintillator counter which was exposed to a 2.5 GeV/c muon beam at the CERN PS. Timing characteristics of bars with dimensions 150 cm × 6 cm × 1 cm and 120 cm × 11 cm × 2.5 cm have been studied. An 8-channel SiPM anode readout ASIC (MUSIC R1) based on a novel low input impedance current conveyor has been used to read out and amplify SiPMs independently and sum the signals at the end. Prospects for applications in large-scale particle physics detectors with timing resolution below 100 ps are provided in light of the results.

  13. arXiv Application of large area SiPMs for the readout of a plastic scintillator based timing detector

    CERN Document Server

    Betancourt, C.; Brundler, R.; Dätwyler, A.; Favre, Y.; Gascon, D.; Gomez, S.; Korzenev, Alexander; Mermod, P.; Noah, E.; Serra, N.; Sgalaberna, D.; Storaci, B.

    2017-11-27

    In this study an array of eight 6 mm × 6 mm area SiPMs was coupled to the end of a long plastic scintillator counter which was exposed to a 2.5 GeV/c muon beam at the CERN PS. Timing characteristics of bars with dimensions 150 cm × 6 cm × 1 cm and 120 cm × 11 cm × 2.5 cm have been studied. An 8-channel SiPM anode readout ASIC (MUSIC R1) based on a novel low input impedance current conveyor has been used to read out and amplify SiPMs independently and sum the signals at the end. Prospects for applications in large-scale particle physics detectors with timing resolution below 100 ps are provided in light of the results.

  14. Large Observatory for x-ray Timing (LOFT-P): a Probe-class mission concept study

    Science.gov (United States)

    Wilson-Hodge, Colleen A.; Ray, Paul S.; Chakrabarty, Deepto; Feroci, Marco; Alvarez, Laura; Baysinger, Michael; Becker, Chris; Bozzo, Enrico; Brandt, Soren; Carson, Billy; Chapman, Jack; Dominguez, Alexandra; Fabisinski, Leo; Gangl, Bert; Garcia, Jay; Griffith, Christopher; Hernanz, Margarita; Hickman, Robert; Hopkins, Randall; Hui, Michelle; Ingram, Luster; Jenke, Peter; Korpela, Seppo; Maccarone, Tom; Michalska, Malgorzata; Pohl, Martin; Santangelo, Andrea; Schanne, Stephane; Schnell, Andrew; Stella, Luigi; van der Klis, Michiel; Watts, Anna; Winter, Berend; Zane, Silvia

    2016-07-01

    LOFT-P is a mission concept for a NASA Astrophysics Probe-Class X-ray timing mission addressing questions such as: What is the equation of state of ultradense matter? What are the effects of strong gravity on matter spiraling into black holes? It would be optimized for sub-millisecond timing of bright Galactic X-ray sources, including X-ray bursters, black hole binaries, and magnetars, to study phenomena at the natural timescales of neutron star surfaces and black hole event horizons and to measure the mass and spin of black holes. These measurements are synergistic to imaging and high-resolution spectroscopy instruments, addressing much smaller distance scales than are possible without very long baseline X-ray interferometry, and using complementary techniques to address the geometry and dynamics of emission regions. LOFT-P would have an effective area of >6 m², >10x that of the highly successful Rossi X-ray Timing Explorer (RXTE). A sky monitor (2-50 keV) acts as a trigger for pointed observations, providing high duty cycle, high time resolution monitoring of the X-ray sky with 20 times the sensitivity of the RXTE All-Sky Monitor, enabling multi-wavelength and multimessenger studies. A probe-class mission concept would employ lightweight collimator technology and large-area solid-state detectors, segmented into pixels or strips, technologies which have recently been greatly advanced during the ESA M3 Phase A study of LOFT. Given the large community interested in LOFT (>800 supporters), the scientific productivity of this mission is expected to be very high, similar to or greater than RXTE (~2000 refereed publications). We describe the results of a study, recently completed by the MSFC Advanced Concepts Office, that demonstrates that such a mission is feasible within a NASA probe-class mission budget.

  15. Storm Time Global Observations of Large-Scale TIDs From Ground-Based and In Situ Satellite Measurements

    Science.gov (United States)

    Habarulema, John Bosco; Yizengaw, Endawoke; Katamzi-Joseph, Zama T.; Moldwin, Mark B.; Buchert, Stephan

    2018-01-01

    This paper discusses the ionosphere's response to the largest storm of solar cycle 24 during 16-18 March 2015. We have used Global Navigation Satellite Systems (GNSS) total electron content data to study large-scale traveling ionospheric disturbances (TIDs) over the American, African, and Asian regions. Equatorward large-scale TIDs propagated and crossed the equator into the other hemisphere, especially over the American and Asian sectors. Poleward TIDs with velocities in the range ≈400-700 m/s have been observed during local daytime over the American and African sectors, with origins around the geomagnetic equator. Our investigation over the American sector shows that poleward TIDs may have been launched by increased Lorentz coupling as a result of a penetrating electric field during the southward turning of the interplanetary magnetic field, Bz. We have observed an increase in SWARM satellite electron density (Ne) at the same time that equatorward large-scale TIDs are visible over the European-African sector. The altitude Ne profiles from ionosonde observations point to a possible link, indicating that storm-induced TIDs may have influenced the plasma distribution in the topside ionosphere at the SWARM satellite altitude.

  16. Estimation of Transport Trajectory and Residence Time in Large River–Lake Systems: Application to Poyang Lake (China Using a Combined Model Approach

    Directory of Open Access Journals (Sweden)

    Yunliang Li

    2015-09-01

    Full Text Available The biochemical processes and associated water quality in many lakes mainly depend on their transport behaviors. Most existing methodologies for investigating transport behaviors are based on physically based numerical models. The pollutant transport trajectory and residence time of Poyang Lake are thought to have important implications for the steadily deteriorating water quality and the associated rapid environmental changes during the flood period. This study used a hydrodynamic model (MIKE 21) in conjunction with transport and particle-tracking sub-models to provide a comprehensive investigation of transport behaviors in Poyang Lake. Model simulations reveal that the lake’s prevailing water flow patterns cause a unique transport trajectory that primarily develops from the catchment river mouths to the downstream area along the lake’s main flow channels, similar to a river-transport behavior. Particle tracking results show that the mean residence time of the lake is 89 days during July–September. The effect of the Yangtze River (the effluent of the lake) on the residence time is stronger than that of the catchment river inflows. The current study represents a first attempt to use a combined model approach to provide insights into the transport behaviors of a large river–lake system, given proposals to manage the pollutant inputs both directly to the lake and to catchment rivers.

  17. Massive Cloud Computing Processing of P-SBAS Time Series for Displacement Analyses at Large Spatial Scale

    Science.gov (United States)

    Casu, F.; de Luca, C.; Lanari, R.; Manunta, M.; Zinno, I.

    2016-12-01

    A methodology for computing surface deformation time series and mean velocity maps of large areas is presented. Our approach relies on the availability of a multi-temporal set of Synthetic Aperture Radar (SAR) data collected from ascending and descending orbits over an area of interest, and also permits estimation of the vertical and horizontal (East-West) displacement components of the Earth's surface. The adopted methodology is based on an advanced Cloud Computing implementation of the Differential SAR Interferometry (DInSAR) Parallel Small Baseline Subset (P-SBAS) processing chain, which allows the unsupervised processing of large SAR data volumes, from the raw data (level-0) imagery up to the generation of DInSAR time series and maps. The presented solution, which is highly scalable, has been tested on the ascending and descending ENVISAT SAR archives, which were acquired over a large area of Southern California (US) that extends for about 90,000 km². This input dataset has been processed in parallel by exploiting 280 computing nodes of the Amazon Web Services Cloud environment. Moreover, to produce the final mean deformation velocity maps of the vertical and East-West displacement components of the whole investigated area, we also took advantage of the information available from external GPS measurements, which makes it possible to account for regional trends not easily detectable by DInSAR and to refer the P-SBAS measurements to an external geodetic datum. The presented results clearly demonstrate the effectiveness of the proposed approach, which paves the way to the extensive use of the available ERS and ENVISAT SAR data archives. Furthermore, the proposed methodology is particularly suitable for dealing with the very large data flow provided by the Sentinel-1 constellation, thus permitting the extension of DInSAR analyses to a nearly global scale. This work is partially supported by: the DPC-CNR agreement, the EPOS-IP project and the ESA GEP project.

  18. A neuromorphic implementation of multiple spike-timing synaptic plasticity rules for large-scale neural networks

    Directory of Open Access Journals (Sweden)

    Runchun Mark Wang

    2015-05-01

    Full Text Available We present a neuromorphic implementation of multiple synaptic plasticity learning rules, which include both Spike Timing Dependent Plasticity (STDP) and Spike Timing Dependent Delay Plasticity (STDDP). We present a fully digital implementation as well as a mixed-signal implementation, both of which use a novel dynamic-assignment time-multiplexing approach and support up to 2^26 (64M) synaptic plasticity elements. Rather than implementing dedicated synapses for particular types of synaptic plasticity, we implemented a more generic synaptic plasticity adaptor array that is separate from the neurons in the neural network. Each adaptor performs synaptic plasticity according to the arrival times of the pre- and post-synaptic spikes assigned to it, and sends out a weighted and/or delayed pre-synaptic spike to the target synapse in the neural network. This strategy provides great flexibility for building complex large-scale neural networks, as a neural network can be configured for multiple synaptic plasticity rules without changing its structure. We validate the proposed neuromorphic implementations with measurement results and illustrate that the circuits are capable of performing both STDP and STDDP. We argue that it is practical to scale the work presented here up to 2^36 (64G) synaptic adaptors on a current high-end FPGA platform.
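
    For orientation, a textbook pair-based STDP weight update is sketched below; the paper's digital and mixed-signal adaptors implement their own variants (including delay plasticity), which this simple software sketch does not reproduce. Parameters are arbitrary.

```python
import numpy as np

A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # time constants (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for a single pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt >= 0:                    # pre fires before post -> potentiate
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    return -A_MINUS * np.exp(dt / TAU_MINUS)   # post before pre -> depress

w = 0.5
for t_pre, t_post in [(10.0, 15.0), (40.0, 38.0), (70.0, 71.0)]:
    w = np.clip(w + stdp_dw(t_pre, t_post), 0.0, 1.0)   # keep the weight bounded
print(f"weight after three spike pairs: {w:.4f}")
```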

  19. Large deviations of the finite-time magnetization of the Curie-Weiss random-field Ising model

    Science.gov (United States)

    Paga, Pierre; Kühn, Reimer

    2017-08-01

    We study the large deviations of the magnetization at some finite time in the Curie-Weiss random field Ising model with parallel updating. While relaxation dynamics in an infinite-time horizon gives rise to unique dynamical trajectories [specified by initial conditions and governed by first-order dynamics of the form m_{t+1} = f(m_t)], we observe that the introduction of a finite-time horizon and the specification of terminal conditions can generate a host of metastable solutions obeying second-order dynamics. We show that these solutions are governed by a Newtonian-like dynamics in discrete time which permits solutions in terms of both the first-order relaxation ("forward") dynamics and the backward dynamics m_{t+1} = f^{-1}(m_t). Our approach allows us to classify trajectories for a given final magnetization as stable or metastable according to the value of the rate function associated with them. We find that in analogy to the Freidlin-Wentzell description of the stochastic dynamics of escape from metastable states, the dominant trajectories may switch between the two types (forward and backward) of first-order dynamics. Additionally, we show how to compute rate functions when uncertainty in the quenched disorder is introduced.
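
    To make the first-order map concrete, the sketch below iterates one standard mean-field form, f(m) = <tanh(beta*(J*m + h))> averaged over a bimodal random field; this is an illustrative choice, and only the forward relaxation dynamics is shown, not the paper's finite-time or backward analysis.

```python
import numpy as np

beta, J, h0 = 1.5, 1.0, 0.3        # inverse temperature, coupling, random-field strength
fields = np.array([+h0, -h0])      # bimodal random field with equal weights

def f(m):
    """Mean-field magnetization map m_{t+1} = f(m_t)."""
    return np.tanh(beta * (J * m + fields)).mean()

m = 0.05                           # small initial magnetization
for t in range(50):
    m = f(m)                       # parallel-update (synchronous) relaxation
print(f"fixed point reached: m = {m:.4f}")
```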

  20. Acceleration of the universe, vacuum metamorphosis, and the large-time asymptotic form of the heat kernel

    International Nuclear Information System (INIS)

    Parker, Leonard; Vanzella, Daniel A.T.

    2004-01-01

    We investigate the possibility that the late acceleration observed in the rate of expansion of the Universe is due to vacuum quantum effects arising in curved spacetime. The theoretical basis of the vacuum cold dark matter (VCDM), or vacuum metamorphosis, cosmological model of Parker and Raval is reexamined and improved. We show, by means of a manifestly nonperturbative approach, how the infrared behavior of the propagator (related to the large-time asymptotic form of the heat kernel) of a free scalar field in curved spacetime leads to nonperturbative terms in the effective action similar to those appearing in the earlier version of the VCDM model. The asymptotic form that we adopt for the propagator or heat kernel at large proper time s is motivated by, and consistent with, particular cases where the heat kernel has been calculated exactly, namely in de Sitter spacetime, in the Einstein static universe, and in the linearly expanding spatially flat Friedmann-Robertson-Walker (FRW) universe. This large-s asymptotic form generalizes somewhat the one suggested by the Gaussian approximation and the R-summed form of the propagator that earlier served as a theoretical basis for the VCDM model. The vacuum expectation value for the energy-momentum tensor of the free scalar field, obtained through variation of the effective action, exhibits a resonance effect when the scalar curvature R of the spacetime reaches a particular value related to the mass of the field. Modeling our Universe by an FRW spacetime filled with classical matter and radiation, we show that the back reaction caused by this resonance drives the Universe through a transition to an accelerating expansion phase, very much in the same way as originally proposed by Parker and Raval. Our analysis includes higher derivatives that were neglected in the earlier analysis, and takes into account the possible runaway solutions that can follow from these higher-derivative terms. We find that the runaway solutions do

  1. MOBBED: a computational data infrastructure for handling large collections of event-rich time series datasets in MATLAB.

    Science.gov (United States)

    Cockfield, Jeremy; Su, Kyungmin; Robbins, Kay A

    2013-01-01

    Experiments to monitor human brain activity during active behavior record a variety of modalities (e.g., EEG, eye tracking, motion capture, respiration monitoring) and capture a complex environmental context leading to large, event-rich time series datasets. The considerable variability of responses within and among subjects in more realistic behavioral scenarios requires experiments to assess many more subjects over longer periods of time. This explosion of data requires better computational infrastructure to more systematically explore and process these collections. MOBBED is a lightweight, easy-to-use, extensible toolkit that allows users to incorporate a computational database into their normal MATLAB workflow. Although capable of storing quite general types of annotated data, MOBBED is particularly oriented to multichannel time series such as EEG that have event streams overlaid with sensor data. MOBBED directly supports access to individual events, data frames, and time-stamped feature vectors, allowing users to ask questions such as what types of events or features co-occur under various experimental conditions. A database provides several advantages not available to users who process one dataset at a time from the local file system. In addition to archiving primary data in a central place to save space and avoid inconsistencies, such a database allows users to manage, search, and retrieve events across multiple datasets without reading the entire dataset. The database also provides infrastructure for handling more complex event patterns that include environmental and contextual conditions. The database can also be used as a cache for expensive intermediate results that are reused in such activities as cross-validation of machine learning algorithms. MOBBED is implemented over PostgreSQL, a widely used open source database, and is freely available under the GNU general public license at http://visual.cs.utsa.edu/mobbed. Source and issue reports for MOBBED

  2. Determination of residual oil saturation from time-lapse pulsed neutron capture logs in a large sandstone reservoir

    International Nuclear Information System (INIS)

    Syed, E.V.; Salaita, G.N.; McCaffery, F.G.

    1991-01-01

    Cased hole logging with pulsed neutron tools finds extensive use for identifying zones of water breakthrough and monitoring oil-water contacts in oil reservoirs being depleted by waterflooding or natural water drive. Results of such surveys then find direct use for planning recompletions and water shutoff treatments. Pulsed neutron capture (PNC) logs are useful for estimating water saturation changes behind casing in the presence of a constant, high-salinity environment. PNC log surveys run at different times, i.e., in a time-lapse mode, are particularly amenable to quantitative analysis. The combined use of the original open hole and PNC time-lapse log information can then provide information on remaining or residual oil saturations in a reservoir. This paper reports analyses of historical pulsed neutron capture log data to assess residual oil saturation in naturally water-swept zones for selected wells from a large sandstone reservoir in the Middle East. Quantitative determination of oil saturations was aided by PNC log information obtained from a series of tests conducted in a new well in the same field

  3. Large-Time Behavior of Solutions to Vlasov-Poisson-Fokker-Planck Equations: From Evanescent Collisions to Diffusive Limit

    Science.gov (United States)

    Herda, Maxime; Rodrigues, L. Miguel

    2018-03-01

    The present contribution investigates the dynamics generated by the two-dimensional Vlasov-Poisson-Fokker-Planck equation for charged particles in a steady inhomogeneous background of opposite charges. We provide global-in-time estimates that are uniform with respect to initial data taken in a bounded set of a weighted L^2 space, and where dependencies on the mean-free path τ and the Debye length δ are made explicit. In our analysis the mean free path covers the full range of possible values: from the regime of evanescent collisions τ → ∞ to the strongly collisional regime τ → 0. As a counterpart, the largeness of the Debye length, which enforces a weakly nonlinear regime, is used to close our nonlinear estimates. Accordingly, we pay special attention to relaxing as much as possible the τ-dependent constraint on δ that ensures exponential decay with explicit τ-dependent rates towards the stationary solution. In the strongly collisional limit τ → 0, we also examine all possible asymptotic regimes selected by a choice of observation time scale. Here also, our emphasis is on strong convergence, uniformity with respect to time and to initial data in bounded sets of an L^2 space. Our proofs rely on a detailed study of the nonlinear elliptic equation defining stationary solutions and a careful tracking and optimization of parameter dependencies of hypocoercive/hypoelliptic estimates.

  4. Large-strain time-temperature equivalence in high density polyethylene for prediction of extreme deformation and damage

    Directory of Open Access Journals (Sweden)

    Gray G.T.

    2012-08-01

    Full Text Available Time-temperature equivalence is a widely recognized property of many time-dependent material systems, where there is a clear predictive link relating the deformation response at a nominal temperature and a high strain-rate to an equivalent response at a depressed temperature and nominal strain-rate. It has been found that high-density polyethylene (HDPE) obeys a linear empirical formulation relating test temperature and strain-rate. This observation was extended to continuous stress-strain curves, such that material response measured in a load frame at large strains and low strain-rates (at depressed temperatures) could be translated into a temperature-dependent response at high strain-rates and validated against Taylor impact results. Time-temperature equivalence was used in conjunction with jump-rate compression tests to investigate isothermal response at high strain-rate while excluding adiabatic heating. The validated constitutive response was then applied to the analysis of Dynamic-Tensile-Extrusion of HDPE, a tensile analog to Taylor impact developed at LANL. The Dyn-Ten-Ext test results and FEA found that HDPE deformed smoothly after exiting the die, and after substantial drawing appeared to undergo a pressure-dependent shear damage mechanism at intermediate velocities, while it fragmented at high velocities. Dynamic-Tensile-Extrusion, properly coupled with a validated constitutive model, can successfully probe extreme tensile deformation and damage of polymers.

  5. A unique aerial platform equipped for large area surveillance: a real-time tool for emergency management

    International Nuclear Information System (INIS)

    Frullani, Salvatore; Castelluccio, Donato M.; Cisbani, Evaristo; Colilli, Stefano; Fratoni, Rolando; Giuliani, Fausto; Mostarda, Angelo; Colangeli, Giorgio; De Otto, Gian L.; Marchiori, Carlo; Paoloni, Gianfranco

    2008-01-01

    An aerial platform equipped with a sampling line and real-time monitoring of the sampled aerosol is presented. The system is composed of: a) A Sky Arrow 650 fixed-wing aircraft with the front part of the fuselage properly adapted to house the detection and acquisition equipment; b) A compact air sampling line in which isokinetic sampling is dynamically maintained; aerosol is collected on a filter positioned along the line and hosted on a rotating 4-filter disk; c) A detection subsystem: a small BGO scintillator and Geiger counter right behind the sampling filter, an HPGe detector that allows radionuclide identification in the collected aerosol samples, and a large NaI(Tl) crystal that detects airborne and ground gamma radiation; d) Several environmental (temperature, pressure, aircraft/wind speed) sensors and a GPS receiver that support the full characterization of the sampling conditions and the temporal and geographical location of the acquired data; e) An acquisition and control system based on compact electronics and real-time software that operates the sampling line actuators, guarantees the dynamic isokinetic condition, and acquires the detector and sensor data. With this system quantitative measurements can also be made during the plume phase of an accident, while other aerial platforms, without sampling capability, can only be used for qualitative assessments. Transmission of all data will soon be implemented in order to make all the data available in real time to the Technical Centre for the Emergency Management. The use of an unmanned air vehicle (UAV) is discussed as a future option. (author)

  6. Third-order-accurate numerical methods for efficient, large time-step solutions of mixed linear and nonlinear problems

    Energy Technology Data Exchange (ETDEWEB)

    Cobb, J.W.

    1995-02-01

    There is an increasing need for more accurate numerical methods for large-scale nonlinear magneto-fluid turbulence calculations. These methods should not only increase the current state of the art in terms of accuracy, but should also continue to optimize other desired properties such as simplicity, minimized computation, minimized memory requirements, and robust stability. This includes the ability to stably solve stiff problems with long time-steps. This work discusses a general methodology for deriving higher-order numerical methods. It also discusses how the selection of various choices can affect the desired properties. The explicit discussion focuses on third-order Runge-Kutta methods, including general solutions and five examples. The study investigates the linear numerical analysis of these methods, including their accuracy, general stability, and stiff stability. Additional appendices discuss linear multistep methods, discuss directions for further work, and exhibit numerical analysis results for some other commonly used lower-order methods.
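
    The record describes third-order Runge-Kutta methods only in general terms. As a neutral illustration of what such a scheme looks like (Kutta's classical explicit third-order method, not necessarily one of the five examples in the report), consider the following sketch; the test problem and step size are arbitrary.

```python
import numpy as np

def rk3_step(f, t, y, h):
    """One step of Kutta's classical explicit third-order Runge-Kutta method."""
    k1 = f(t, y)
    k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
    k3 = f(t + h, y - h * k1 + 2.0 * h * k2)
    return y + (h / 6.0) * (k1 + 4.0 * k2 + k3)

# Arbitrary linear test problem y' = -5y, y(0) = 1, integrated to t = 2.
f = lambda t, y: -5.0 * y
t, y, h = 0.0, np.array([1.0]), 0.05
for _ in range(40):
    y = rk3_step(f, t, y, h)
    t += h
print(t, y[0], np.exp(-5.0 * t))  # numerical vs exact solution
```

    The local truncation error of such a scheme is O(h^4), giving third-order global accuracy; stiff stability, as discussed in the report, additionally depends on the stability region of the particular coefficient choice.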

  7. Fast Time-Dependent Density Functional Theory Calculations of the X-ray Absorption Spectroscopy of Large Systems.

    Science.gov (United States)

    Besley, Nicholas A

    2016-10-11

    The computational cost of calculations of K-edge X-ray absorption spectra using time-dependent density functional theory (TDDFT) within the Tamm-Dancoff approximation is significantly reduced through the introduction of a severe integral screening procedure that includes only integrals that involve the core s basis function of the absorbing atom(s), coupled with a reduced-quality numerical quadrature for integrals associated with the exchange and correlation functionals. The memory required for the calculations is reduced through construction of the TDDFT matrix within the absorbing core orbitals excitation space and exploiting further truncation of the virtual orbital space. The resulting method, denoted fTDDFTs, leads to much faster calculations and makes the study of large systems tractable. The capability of the method is demonstrated through calculations of the X-ray absorption spectra at the carbon K-edge of chlorophyll a, C60, and C70.

  8. Time-resolved large-scale volumetric pressure fields of an impinging jet from dense Lagrangian particle tracking

    Science.gov (United States)

    Huhn, F.; Schanz, D.; Manovski, P.; Gesemann, S.; Schröder, A.

    2018-05-01

    Time-resolved volumetric pressure fields are reconstructed from Lagrangian particle tracking with high seeding concentration using the Shake-The-Box algorithm in a perpendicular impinging jet flow with exit velocity U=4 m/s (Re˜ 36,000) and nozzle-plate spacing H/D=5. Helium-filled soap bubbles are used as tracer particles which are illuminated with pulsed LED arrays. A large measurement volume has been covered (cloud of tracked particles in a volume of 54 L, ˜ 180,000 particles). The reconstructed pressure field has been validated against microphone recordings at the wall with high correlation coefficients up to 0.88. In a reduced measurement volume (13 L), dense Lagrangian particle tracking is shown to be feasible up to the maximal possible jet velocity of U=16 m/s.

  9. Some Examples of Residence-Time Distribution Studies in Large-Scale Chemical Processes by Using Radiotracer Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bullock, R. M.; Johnson, P.; Whiston, J. [Imperial Chemical Industries Ltd., Billingham, Co., Durham (United Kingdom)

    1967-06-15

    The application of radiotracers to determine flow patterns in chemical processes is discussed with particular reference to the derivation of design data from model reactors for translation to large-scale units, the study of operating efficiency and design attainment in established plant and the rapid identification of various types of process malfunction. The requirements governing the selection of tracers for various types of media are considered and an example is given of the testing of the behaviour of a typical tracer before use in a particular large-scale process operating at 250 atm and 200 °C. Information which may be derived from flow patterns is discussed including the determination of mixing parameters, gas hold-up in gas/liquid reactions and the detection of channelling and stagnant regions. Practical results and their interpretation are given in relation to an olefin hydroformylation reaction system, a process for the conversion of propylene to isopropanol, a moving bed catalyst system for the isomerization of xylenes and a three-stage gas-liquid reaction system. The use of mean residence-time data for the detection of leakage between reaction vessels and a heat interchanger system is given as an example of the identification of process malfunction. (author)
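
    The mean residence-time data mentioned above are conventionally obtained as the first moment of the measured tracer response curve. The following sketch illustrates that calculation on a synthetic curve; the data and variable names are illustrative, not from the paper.

```python
import numpy as np

# Synthetic tracer response C(t): detector signal versus time after pulse injection.
t = np.linspace(0.0, 600.0, 121)                      # seconds
C = np.exp(-((t - 180.0) ** 2) / (2.0 * 60.0 ** 2))   # illustrative pulse response

# Residence-time distribution E(t) = C(t) / integral(C dt);
# the mean residence time is the first moment of E(t).
E = C / np.trapz(C, t)
mean_residence_time = np.trapz(t * E, t)
print(f"mean residence time = {mean_residence_time:.1f} s")
```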

  10. Large-strain optical fiber sensing and real-time FEM updating of steel structures under the high temperature effect

    International Nuclear Information System (INIS)

    Huang, Ying; Fang, Xia; Xiao, Hai; Bevans, Wesley James; Chen, Genda; Zhou, Zhi

    2013-01-01

    Steel buildings are subjected to fire hazards during or immediately after a major earthquake. Under combined gravity and thermal loads, they have non-uniformly distributed stiffness and strength, and thus collapse progressively with large deformation. In this study, large-strain optical fiber sensors for high temperature applications and a temperature-dependent finite element model updating method are proposed for accurate prediction of structural behavior in real time. The optical fiber sensors can measure strains up to 10% at approximately 700 °C. Their measurements are in good agreement with those from strain gauges up to 0.5%. In comparison with the experimental results, the proposed model updating method can reduce the predicted strain errors from over 75% to below 20% at 800 °C. The minimum number of sensors in a fire zone that can properly characterize the vertical temperature distribution of heated air due to the gravity effect should be included in the proposed model updating scheme to achieve a predetermined simulation accuracy. (paper)

  11. Real-Time Adaptive Control of a Magnetic Levitation System with a Large Range of Load Disturbance.

    Science.gov (United States)

    Zhang, Zhizhou; Li, Xiaolong

    2018-05-11

    In an idle light-load or a full-load condition, the change of the load mass of a suspension system is very significant. If the control parameters of conventional control methods remain unchanged, the suspension performance of the control system deteriorates rapidly, or the system even loses stability, when the load mass changes over a large range. In this paper, a real-time adaptive control method for a magnetic levitation system with a large range of mass changes is proposed. First, the suspension control system model of the maglev train is built up, and the stability of the closed-loop system is analyzed. Then, a fast inner current-loop is used to simplify the design of the suspension control system, and an adaptive control method is put forward to ensure that the system remains stable when the load mass varies over a wide range. Simulations and experiments show that when the load mass of the maglev system varies greatly, the adaptive control method is effective in suspending the system stably at a given displacement.

  12. Highly Sensitive GMO Detection Using Real-Time PCR with a Large Amount of DNA Template: Single-Laboratory Validation.

    Science.gov (United States)

    Mano, Junichi; Hatano, Shuko; Nagatomi, Yasuaki; Futo, Satoshi; Takabatake, Reona; Kitta, Kazumi

    2018-03-01

    Current genetically modified organism (GMO) detection methods allow for sensitive detection. However, a further increase in sensitivity will enable more efficient testing for large grain samples and reliable testing for processed foods. In this study, we investigated real-time PCR-based GMO detection methods using a large amount of DNA template. We selected target sequences that are commonly introduced into many kinds of GM crops, i.e., 35S promoter and nopaline synthase (NOS) terminator. This makes the newly developed method applicable to a wide range of GMOs, including some unauthorized ones. The estimated LOD of the new method was 0.005% of GM maize events; to the best of our knowledge, this method is the most sensitive among the GM maize detection methods for which the LOD was evaluated in terms of GMO content. A 10-fold increase in the DNA amount as compared with the amount used under common testing conditions gave an approximately 10-fold reduction in the LOD without PCR inhibition. Our method is applicable to various analytical samples, including processed foods. The use of other primers and fluorescence probes would permit highly sensitive detection of various recombinant DNA sequences besides the 35S promoter and NOS terminator.

  13. Normal black holes in bulge-less galaxies: the largely quiescent, merger-free growth of black holes over cosmic time

    Science.gov (United States)

    Martin, G.; Kaviraj, S.; Volonteri, M.; Simmons, B. D.; Devriendt, J. E. G.; Lintott, C. J.; Smethurst, R. J.; Dubois, Y.; Pichon, C.

    2018-05-01

    Understanding the processes that drive the formation of black holes (BHs) is a key topic in observational cosmology. While the observed MBH-MBulge correlation in bulge-dominated galaxies is thought to be produced by major mergers, the existence of an MBH-M⋆ relation, across all galaxy morphological types, suggests that BHs may be largely built by secular processes. Recent evidence that bulge-less galaxies, which are unlikely to have had significant mergers, are offset from the MBH-MBulge relation, but lie on the MBH-M⋆ relation, has strengthened this hypothesis. Nevertheless, the small size and heterogeneity of current data sets, coupled with the difficulty in measuring precise BH masses, make it challenging to address this issue using empirical studies alone. Here, we use Horizon-AGN, a cosmological hydrodynamical simulation to probe the role of mergers in BH growth over cosmic time. We show that (1) as suggested by observations, simulated bulge-less galaxies lie offset from the main MBH-MBulge relation, but on the MBH-M⋆ relation, (2) the positions of galaxies on the MBH-M⋆ relation are not affected by their merger histories, and (3) only ˜35 per cent of the BH mass in today's massive galaxies is directly attributable to merging - the majority (˜65 per cent) of BH growth, therefore, takes place gradually, via secular processes, over cosmic time.

  14. Exploiting deterministic maintenance opportunity windows created by conservative engineering design rules that result in free time locked into large high-speed coupled production lines with finite buffers

    Directory of Open Access Journals (Sweden)

    Durandt, Casper

    2016-08-01

    Full Text Available Conservative engineering design rules for large serial coupled production processes result in machines having locked-in free time (also called ‘critical downtime’ or ‘maintenance opportunity windows’), which causes idle time if not used. Operators are not able to assess a large production process holistically, and so may not be aware that they form the current bottleneck – or that they have free time available due to interruptions elsewhere. A real-time method is developed to accurately calculate and display free time in location and magnitude, and efficiency improvements are demonstrated in large-scale production runs.

  15. An integrated, indicator framework for assessing large-scale variations and change in seasonal timing and phenology (Invited)

    Science.gov (United States)

    Betancourt, J. L.; Weltzin, J. F.

    2013-12-01

    As part of an effort to develop an Indicator System for the National Climate Assessment (NCA), the Seasonality and Phenology Indicators Technical Team (SPITT) proposed an integrated, continental-scale framework for understanding and tracking seasonal timing in physical and biological systems. The framework shares several metrics with the EPA's National Climate Change Indicators. The SPITT framework includes a comprehensive suite of national indicators to track conditions, anticipate vulnerabilities, and facilitate intervention or adaptation to the extent possible. Observed, modeled, and forecasted seasonal timing metrics can inform a wide spectrum of decisions on federal, state, and private lands in the U.S., and will be pivotal for international mitigation and adaptation efforts. Humans use calendars both to understand the natural world and to plan their lives. Although the seasons are familiar concepts, we lack a comprehensive understanding of how variability arises in the timing of seasonal transitions in the atmosphere, and how variability and change translate and propagate through hydrological, ecological and human systems. For example, the contributions of greenhouse warming and natural variability to secular trends in seasonal timing are difficult to disentangle, including earlier spring transitions from winter (strong westerlies) to summer (weak easterlies) patterns of atmospheric circulation; shifts in annual phasing of daily temperature means and extremes; advanced timing of snow and ice melt and soil thaw at higher latitudes and elevations; and earlier start and longer duration of the growing and fire seasons. The SPITT framework aims to relate spatiotemporal variability in surface climate to (1) large-scale modes of natural climate variability and greenhouse gas-driven climatic change, and (2) spatiotemporal variability in hydrological, ecological and human responses and impacts. The hierarchical framework relies on ground and satellite observations

  16. Spatiotemporally enhancing time-series DMSP/OLS nighttime light imagery for assessing large-scale urban dynamics

    Science.gov (United States)

    Xie, Yanhua; Weng, Qihao

    2017-06-01

    Accurate, up-to-date, and consistent information of urban extents is vital for numerous applications central to urban planning, ecosystem management, and environmental assessment and monitoring. However, current large-scale urban extent products are not uniform with respect to definition, spatial resolution, temporal frequency, and thematic representation. This study aimed to enhance, spatiotemporally, time-series DMSP/OLS nighttime light (NTL) data for detecting large-scale urban changes. The enhanced NTL time series from 1992 to 2013 were firstly generated by implementing global inter-calibration, vegetation-based spatial adjustment, and urban archetype-based temporal modification. The dataset was then used for updating and backdating urban changes for the contiguous U.S.A. (CONUS) and China by using the Object-based Urban Thresholding method (i.e., NTL-OUT method, Xie and Weng, 2016b). The results showed that the updated urban extents were reasonably accurate, with city-scale RMSE (root mean square error) of 27 km2 and Kappa of 0.65 for CONUS, and 55 km2 and 0.59 for China, respectively. The backdated urban extents yielded similar accuracy, with RMSE of 23 km2 and Kappa of 0.63 in CONUS, while 60 km2 and 0.60 in China. The accuracy assessment further revealed that the spatial enhancement greatly improved the accuracy of urban updating and backdating by significantly reducing RMSE and slightly increasing Kappa values. The temporal enhancement also reduced RMSE, and improved the spatial consistency between estimated and reference urban extents. Although the utilization of enhanced NTL data successfully detected urban size change, relatively low locational accuracy of the detected urban changes was observed. It is suggested that the proposed methodology would be more effective for updating and backdating global urban maps if further fusion of NTL data with higher spatial resolution imagery was implemented.
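
    The accuracy figures quoted above (city-scale RMSE of urban area and Kappa of map agreement) follow standard definitions; a minimal sketch of how such metrics can be computed is given below, with toy inputs that are not the study's data.

```python
import numpy as np

def city_rmse(estimated_area_km2, reference_area_km2):
    """Root mean square error of estimated urban area across cities (km^2)."""
    e = np.asarray(estimated_area_km2, dtype=float)
    r = np.asarray(reference_area_km2, dtype=float)
    return float(np.sqrt(np.mean((e - r) ** 2)))

def cohens_kappa(pred, ref):
    """Cohen's kappa for binary urban / non-urban maps (flattened 0/1 arrays)."""
    pred = np.asarray(pred).ravel()
    ref = np.asarray(ref).ravel()
    po = np.mean(pred == ref)                      # observed agreement
    p1 = np.mean(pred) * np.mean(ref)              # chance agreement on urban class
    p0 = (1 - np.mean(pred)) * (1 - np.mean(ref))  # chance agreement on non-urban class
    pe = p1 + p0
    return (po - pe) / (1 - pe)

# Toy usage with made-up values:
print(city_rmse([110, 95, 300], [100, 90, 320]))
print(cohens_kappa([1, 1, 0, 0, 1], [1, 0, 0, 0, 1]))
```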

  17. A Java program for LRE-based real-time qPCR that enables large-scale absolute quantification.

    Science.gov (United States)

    Rutledge, Robert G

    2011-03-02

    Linear regression of efficiency (LRE) introduced a new paradigm for real-time qPCR that enables large-scale absolute quantification by eliminating the need for standard curves. Developed through the application of sigmoidal mathematics to SYBR Green I-based assays, target quantity is derived directly from fluorescence readings within the central region of an amplification profile. However, a major challenge of implementing LRE quantification is the labor intensive nature of the analysis. Utilizing the extensive resources that are available for developing Java-based software, the LRE Analyzer was written using the NetBeans IDE, and is built on top of the modular architecture and windowing system provided by the NetBeans Platform. This fully featured desktop application determines the number of target molecules within a sample with little or no intervention by the user, in addition to providing extensive database capabilities. MS Excel is used to import data, allowing LRE quantification to be conducted with any real-time PCR instrument that provides access to the raw fluorescence readings. An extensive help set also provides an in-depth introduction to LRE, in addition to guidelines on how to implement LRE quantification. The LRE Analyzer provides the automated analysis and data storage capabilities required by large-scale qPCR projects wanting to exploit the many advantages of absolute quantification. Foremost is the universal perspective afforded by absolute quantification, which among other attributes, provides the ability to directly compare quantitative data produced by different assays and/or instruments. Furthermore, absolute quantification has important implications for gene expression profiling in that it provides the foundation for comparing transcript quantities produced by any gene with any other gene, within and between samples.

  18. A Java program for LRE-based real-time qPCR that enables large-scale absolute quantification.

    Directory of Open Access Journals (Sweden)

    Robert G Rutledge

    Full Text Available BACKGROUND: Linear regression of efficiency (LRE) introduced a new paradigm for real-time qPCR that enables large-scale absolute quantification by eliminating the need for standard curves. Developed through the application of sigmoidal mathematics to SYBR Green I-based assays, target quantity is derived directly from fluorescence readings within the central region of an amplification profile. However, a major challenge of implementing LRE quantification is the labor intensive nature of the analysis. FINDINGS: Utilizing the extensive resources that are available for developing Java-based software, the LRE Analyzer was written using the NetBeans IDE, and is built on top of the modular architecture and windowing system provided by the NetBeans Platform. This fully featured desktop application determines the number of target molecules within a sample with little or no intervention by the user, in addition to providing extensive database capabilities. MS Excel is used to import data, allowing LRE quantification to be conducted with any real-time PCR instrument that provides access to the raw fluorescence readings. An extensive help set also provides an in-depth introduction to LRE, in addition to guidelines on how to implement LRE quantification. CONCLUSIONS: The LRE Analyzer provides the automated analysis and data storage capabilities required by large-scale qPCR projects wanting to exploit the many advantages of absolute quantification. Foremost is the universal perspective afforded by absolute quantification, which among other attributes, provides the ability to directly compare quantitative data produced by different assays and/or instruments. Furthermore, absolute quantification has important implications for gene expression profiling in that it provides the foundation for comparing transcript quantities produced by any gene with any other gene, within and between samples.
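
    The two records above describe the central LRE idea only verbally. As a rough illustration of that idea (not the LRE Analyzer's actual algorithm), the sketch below regresses per-cycle amplification efficiency against fluorescence within the central region of a profile to recover the maximal efficiency Emax; converting fluorescence into absolute target molecule numbers additionally requires the optical calibration handled by the LRE Analyzer and is omitted here. All data and cut-offs are illustrative assumptions.

```python
import numpy as np

def lre_emax(fluorescence, lo=0.2, hi=0.8):
    """Rough LRE-style estimate of maximal amplification efficiency (Emax).

    fluorescence: background-subtracted readings F_1..F_N of one amplification profile.
    Per-cycle efficiency E_C = F_C / F_{C-1} - 1 is regressed linearly against F_C
    within the central region of the profile (here: lo..hi of the plateau level);
    the intercept of that line approximates Emax.
    """
    F = np.asarray(fluorescence, dtype=float)
    E = F[1:] / F[:-1] - 1.0                           # per-cycle efficiency
    Fc = F[1:]
    mask = (Fc > lo * F.max()) & (Fc < hi * F.max())   # central region only
    slope, intercept = np.polyfit(Fc[mask], E[mask], 1)
    return intercept                                   # Emax estimate

# Toy profile generated from a logistic amplification model (illustrative only):
cycles = np.arange(1, 41)
profile = 10.0 / (1.0 + np.exp(-(cycles - 22) * np.log(1.9)))
print(f"Emax estimate: {lre_emax(profile):.2f}")
```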

  19. Why preeclampsia still exists?

    Science.gov (United States)

    Chelbi, Sonia T; Veitia, Reiner A; Vaiman, Daniel

    2013-08-01

    Preeclampsia (PE) is a deadly gestational disease affecting up to 10% of women and specific to the human species. Preeclampsia is clearly multifactorial, but the existence of a genetic basis for this disease is now clearly established by the existence of familial cases, epidemiological studies and known predisposing gene polymorphisms. PE is very common despite the fact that Darwinian pressure should have rapidly eliminated or strongly minimized the frequency of predisposing alleles. Consecutive pregnancies with the same partner decrease the risk and severity of PE. Here, we show that, due to this peculiar feature, preeclampsia-predisposing alleles can be differentially maintained according to the familial structure. Thus, we suggest that an optimal frequency of PE-predisposing alleles in human populations can be achieved as a result of a trade-off between benefits of exogamy, importance for maintaining genetic diversity and increase of the fitness owing to a stable paternal investment. Copyright © 2013 Elsevier Ltd. All rights reserved.

  20. Existence of Projective Planes

    OpenAIRE

    Perrott, Xander

    2016-01-01

    This report gives an overview of the history of finite projective planes and their properties before going on to outline the proof that no projective plane of order 10 exists. The report also investigates the search carried out by MacWilliams, Sloane and Thompson in 1970 [12] and confirms their result by providing independent verification that there is no vector of weight 15 in the code generated by the projective plane of order 10.

  1. Does bioethics exist?

    Science.gov (United States)

    Turner, L

    2009-12-01

    Bioethicists disagree over methods, theories, decision-making guides, case analyses and public policies. Thirty years ago, the thinking of many scholars coalesced around a principlist approach to bioethics. That mid-level mode of moral reasoning is now one of many approaches to moral deliberation. Significant variation in contemporary approaches to the study of ethical issues related to medicine, biotechnology and health care raises the question of whether bioethics exists as a widely shared method, theory, normative framework or mode of moral reasoning.

  2. Combined large field-of-view MRA and time-resolved MRA of the lower extremities: Impact of acquisition order on image quality

    International Nuclear Information System (INIS)

    Riffel, Philipp; Haneder, Stefan; Attenberger, Ulrike I.; Brade, Joachim; Schoenberg, Stefan O.; Michaely, Henrik J.

    2012-01-01

    Purpose: Different approaches exist for hybrid MRA of the calf station. So far, the order of the acquisition of the focused calf MRA and the large field-of-view MRA has not been scientifically evaluated. Therefore the aim of this study was to evaluate if the quality of the combined large field-of-view MRA (CTM MR angiography) and time-resolved MRA with stochastic interleaved trajectories (TWIST MRA) depends on the order of acquisition of the two contrast-enhanced studies. Methods: In this retrospective study, 40 consecutive patients (mean age 68.1 ± 8.7 years, 29 male/11 female) who had undergone an MR angiographic protocol that consisted of CTM-MRA (TR/TE, 2.4/1.0 ms; 21° flip angle; isotropic resolution 1.2 mm; gadolinium dose, 0.07 mmol/kg) and TWIST-MRA (TR/TE 2.8/1.1; 20° flip angle; isotropic resolution 1.1 mm; temporal resolution 5.5 s, gadolinium dose, 0.03 mmol/kg), were included. In the first group (group 1) TWIST-MRA of the calf station was performed 1–2 min after CTM-MRA. In the second group (group 2) CTM-MRA was performed 1–2 min after TWIST-MRA of the calf station. The image quality of CTM-MRA and TWIST-MRA was evaluated by two independent radiologists in consensus according to a 4-point Likert-like rating scale assessing overall image quality on a segmental basis. Venous overlay was assessed per examination. Results: In the CTM-MRA, 1360 segments were included in the assessment of image quality. CTM-MRA was diagnostic in 95% (1289/1360) of segments. There was a significant difference (p < 0.0001) between both groups with regard to the number of segments rated as excellent and moderate. The image quality was rated as excellent in group 1 in 80% (514/640 segments) and in group 2 in 67% (432/649), respectively (p < 0.0001). In contrast, the image quality was rated as moderate in the first group in 5% (33/640) and in the second group in 19% (121/649) respectively (p < 0.0001). The venous overlay was disturbing in 10% in group 1 and 20% in group

  3. Reported frequency of physical activity in a large epidemiological study: relationship to specific activities and repeatability over time

    Directory of Open Access Journals (Sweden)

    Reeves Gillian K

    2011-06-01

    Full Text Available Abstract Background How overall physical activity relates to specific activities and how reported activity changes over time may influence interpretation of observed associations between physical activity and health. We examine the relationships between various physical activities self-reported at different times in a large cohort study of middle-aged UK women. Methods At recruitment, Million Women Study participants completed a baseline questionnaire including questions on frequency of strenuous and of any physical activity. About 3 years later 589,896 women also completed a follow-up questionnaire reporting the hours they spent on a range of specific activities. Time spent on each activity was used to estimate the associated excess metabolic equivalent hours (MET-hours and this value was compared across categories of physical activity reported at recruitment. Additionally, 18,655 women completed the baseline questionnaire twice, at intervals of up to 4 years; repeatability over time was assessed using the weighted kappa coefficient (κweighted and absolute percentage agreement. Results The average number of hours per week women reported doing specific activities was 14.0 for housework, 4.5 for walking, 3.0 for gardening, 0.2 for cycling, and 1.4 for all strenuous activity. Time spent and the estimated excess MET-hours associated with each activity increased with increasing frequency of any or strenuous physical activity reported at baseline (tests for trend, P weighted = 0.71 for questionnaires administered less than 6 months apart, and 52% (κweighted = 0.51 for questionnaires more than 2 years apart. Corresponding values for any physical activity were 57% (κweighted = 0.67 and 47% (κweighted = 0.58. Conclusions In this cohort, responses to simple questions on the frequency of any physical activity and of strenuous activity asked at baseline were associated with hours spent on specific activities and the associated estimated excess MET

  4. Kinota: An Open-Source NoSQL implementation of OGC SensorThings for large-scale high-resolution real-time environmental monitoring

    Science.gov (United States)

    Miles, B.; Chepudira, K.; LaBar, W.

    2017-12-01

    The Open Geospatial Consortium (OGC) SensorThings API (STA) specification, ratified in 2016, is a next-generation open standard for enabling real-time communication of sensor data. Building on over a decade of OGC Sensor Web Enablement (SWE) standards, STA offers a rich data model that can represent a range of sensor and phenomena types (e.g. fixed sensors sensing fixed phenomena, fixed sensors sensing moving phenomena, mobile sensors sensing fixed phenomena, and mobile sensors sensing moving phenomena) and is data agnostic. Additionally, and in contrast to previous SWE standards, STA is developer-friendly, as is evident from its convenient JSON serialization and expressive OData-based query language (with support for geospatial queries); with its Message Queue Telemetry Transport (MQTT) support, STA is also well-suited to efficient real-time data publishing and discovery. All these attributes make STA potentially useful in environmental monitoring sensor networks. Here we present Kinota(TM), an Open-Source NoSQL implementation of OGC SensorThings for large-scale high-resolution real-time environmental monitoring. Kinota, which roughly stands for Knowledge from Internet of Things Analyses, relies on Cassandra as its underlying data store, a horizontally scalable, fault-tolerant open-source database that is often used to store time-series data for Big Data applications (though integration with other NoSQL or relational databases is possible). With this foundation, Kinota can scale to store data from an arbitrary number of sensors collecting data every 500 milliseconds. Additionally, the Kinota architecture is very modular, allowing adopters to replace parts of the existing implementation when desirable. The architecture is also highly portable, providing the flexibility to choose between cloud providers such as Azure, Amazon, Google, etc. The scalable, flexible and cloud-friendly architecture of Kinota makes it ideal for use in next
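
    The abstract does not show what interaction with an STA service looks like. As a generic illustration (not Kinota-specific code), the sketch below posts a single observation to a SensorThings 1.0 endpoint over HTTP; the service URL and the Datastream id are placeholders.

```python
import requests

# Hypothetical SensorThings API (STA 1.0) service root; replace with a real deployment.
STA_ROOT = "http://example.org/sta/v1.0"

# An STA Observation links a result and its phenomenonTime to an existing Datastream.
observation = {
    "phenomenonTime": "2017-09-01T12:00:00Z",
    "result": 21.7,                       # e.g. water temperature in degrees C
    "Datastream": {"@iot.id": 1},         # id of a previously created Datastream
}

# Creating an entity in STA is a plain HTTP POST of a JSON document.
resp = requests.post(f"{STA_ROOT}/Observations", json=observation)
resp.raise_for_status()
print("created:", resp.headers.get("Location"))
```

    For push-style ingestion, the STA MQTT extension allows a client to publish the same JSON document to a topic of the form v1.0/Datastreams(1)/Observations instead of using HTTP.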

  5. 3-D time-dependent numerical model of flow patterns within a large-scale Czochralski system

    Science.gov (United States)

    Nam, Phil-Ouk; O, Sang-Kun; Yi, Kyung-Woo

    2008-04-01

    Silicon single crystals grown through the Czochralski (Cz) method have increased in size to 300 mm, resulting in the use of larger crucibles. The objective of this study is to investigate the continuous Cz method in a large crucible (800 mm), which is performed by inserting a polycrystalline silicon rod into the melt. The numerical model is based on a time-dependent and three-dimensional standard k-ɛ turbulence model using the analytical software package CFD-ACE+, version 2007. Wood's metal melt, which has a low melting point (Tm = 70 °C), was used as the modeling fluid. Crystal rotation was given in the clockwise direction with rotation rates varying from 0 to 15 rpm, while the crucible was rotated counter-clockwise with rotation rates between 0 and 3 rpm. The results show that asymmetrical phenomena of fluid flow arise as a result of crystal and crucible rotation, and that these phenomena move with the passage of time. Near the crystal, the flow moves towards the crucible at the pole of the asymmetrical phenomena. Away from the poles, a vortex begins to form, which is strongly pronounced in the region between the poles.

  6. Time-resolved assessment of collateral flow using 4D CT angiography in large-vessel occlusion stroke

    International Nuclear Information System (INIS)

    Froelich, Andreas M.J.; Wolff, Sarah Lena; Psychogios, Marios N.; Schramm, Ramona; Knauth, Michael; Schramm, Peter; Klotz, Ernst; Wasser, Katrin

    2014-01-01

    In acute stroke patients with large vessel occlusion, collateral blood flow affects tissue fate and patient outcome. The visibility of collaterals on computed tomography angiography (CTA) strongly depends on the acquisition phase, but the optimal time point for collateral imaging is unknown. We analysed collaterals in a time-resolved fashion using four-dimensional (4D) CTA in 82 endovascularly treated stroke patients, aiming to determine which acquisition phase best depicts collaterals and predicts outcome. Early, peak and late phases as well as temporally fused maximum intensity projections (tMIP) were graded using a semiquantitative regional leptomeningeal collateral score, compared with conventional single-phase CTA and correlated with functional outcome. The total extent of collateral flow was best visualised on tMIP. Collateral scores were significantly lower on early and peak phase as well as on single-phase CTA. Collateral grade was associated with favourable functional outcome and the strength of this relationship increased from earlier to later phases, with collaterals on tMIP showing the strongest correlation with outcome. Temporally fused tMIP images provide the best depiction of collateral flow. Our findings suggest that the total extent of collateral flow, rather than the velocity of collateral filling, best predicts clinical outcome. (orig.)

  7. Building rainfall thresholds for large-scales landslides by extracting occurrence time of landslides from seismic records

    Science.gov (United States)

    Yen, Hsin-Yi; Lin, Guan-Wei

    2017-04-01

    Understanding the rainfall conditions that trigger mass movement on hillslopes is the key to forecasting rainfall-induced slope hazards, and the exact time of landslide occurrence is one of the basic pieces of information for rainfall statistics. In this study, we focused on large-scale landslides (LSLs) with disturbed areas larger than 10 ha and conducted a series of studies including the recognition of landslide-induced ground motions and the analysis of different forms of rainfall thresholds. More than 10 heavy typhoons during the period 2005-2014 in Taiwan induced hundreds of LSLs and provided the opportunity to characterize the rainfall conditions which trigger LSLs. A total of 101 landslide-induced seismic signals were identified from the records of the Taiwan seismic network. These signals revealed the occurrence times of the landslides, which were used to assess rainfall conditions. Rainfall analyses showed that LSLs occurred when cumulative rainfall exceeded 500 mm. The results of rainfall-threshold analyses revealed that it is difficult to distinguish LSLs from small-scale landslides (SSLs) with the I-D and R-D methods, but the I-R method can achieve this discrimination. In addition, an enhanced three-factor threshold considering deep water content was proposed as the rainfall threshold for LSLs.
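
    The abstract names intensity-duration (I-D), cumulative rainfall-duration (R-D) and intensity-cumulative rainfall (I-R) thresholds without giving their coefficients. For orientation, such thresholds usually take a power-law form; the sketch below checks an event against a generic I-D threshold with entirely hypothetical coefficients, not those fitted in the study.

```python
def exceeds_id_threshold(intensity_mm_h, duration_h, a=10.0, b=-0.6):
    """Generic intensity-duration (I-D) threshold check, I_thr = a * D**b.

    The coefficients a and b are hypothetical placeholders; real thresholds are
    fitted to the rainfall conditions of observed landslides.
    """
    threshold = a * duration_h ** b
    return intensity_mm_h >= threshold

# A 48-hour storm with mean intensity 12 mm/h exceeds this hypothetical threshold:
print(exceeds_id_threshold(intensity_mm_h=12.0, duration_h=48.0))
```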

  8. Early Lance-Adams syndrome after cardiac arrest: Prevalence, time to return to awareness, and outcome in a large cohort.

    Science.gov (United States)

    Aicua Rapun, Irene; Novy, Jan; Solari, Daria; Oddo, Mauro; Rossetti, Andrea O

    2017-06-01

    Early myoclonus after cardiac arrest (CA) is traditionally viewed as a poor prognostic sign (status myoclonus). However, some patients may present early Lance-Adams syndrome (LAS): under appropriate treatment, they can reach a satisfactory functional outcome. Our aim was to describe their profile, focusing on pharmacologic management in the ICU, time to return of awareness, and long-term prognosis. Adults with early LAS (defined as generalized myoclonus within 96h, with epileptiform EEG within 48h after CA) were retrospectively identified in our CA registry between 2006 and 2016. Functional outcome was assessed through cerebral performance categories (CPC) at 3 months, CPC 1-2 defined good outcome. Among 458 consecutive patients, 7 (1.5%) developed early LAS (4 women, median age 59 years). Within 72h after CA, in normothermia and off sedation, all showed preserved brainstem reflexes and localized pain. All patients were initially treated with valproate, levetiracetam and clonazepam; additional agents, including propofol and midazolam, were prescribed in the majority. First signs of awareness occurred after 3-23 days (median 11.8); 3/7 reached a good outcome at 3 months. Early after CA, myoclonus together with a reactive, epileptiform EEG, preserved evoked potentials and brainstem reflexes suggests LAS. This condition was managed with a combination of highly dosed, large spectrum antiepileptic agents including propofol and midazolam. Even if awakening was at times delayed, good outcome occurred in a substantial proportion of patients. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Large-Scale Uncertainty and Error Analysis for Time-dependent Fluid/Structure Interactions in Wind Turbine Applications

    Energy Technology Data Exchange (ETDEWEB)

    Alonso, Juan J. [Stanford University; Iaccarino, Gianluca [Stanford University

    2013-08-25

    The following is the final report covering the entire period of this aforementioned grant, June 1, 2011 - May 31, 2013 for the portion of the effort corresponding to Stanford University (SU). SU has partnered with Sandia National Laboratories (PI: Mike S. Eldred) and Purdue University (PI: Dongbin Xiu) to complete this research project and this final report includes those contributions made by the members of the team at Stanford. Dr. Eldred is continuing his contributions to this project under a no-cost extension and his contributions to the overall effort will be detailed at a later time (once his effort has concluded) on a separate project submitted by Sandia National Laboratories. At Stanford, the team is made up of Profs. Alonso, Iaccarino, and Duraisamy, post-doctoral researcher Vinod Lakshminarayan, and graduate student Santiago Padron. At Sandia National Laboratories, the team includes Michael Eldred, Matt Barone, John Jakeman, and Stefan Domino, and at Purdue University, we have Prof. Dongbin Xiu as our main collaborator. The overall objective of this project was to develop a novel, comprehensive methodology for uncertainty quantification by combining stochastic expansions (nonintrusive polynomial chaos and stochastic collocation), the adjoint approach, and fusion with experimental data to account for aleatory and epistemic uncertainties from random variable, random field, and model form sources. The expected outcomes of this activity were detailed in the proposal and are repeated here to set the stage for the results that we have generated during the time period of execution of this project: 1. The rigorous determination of an error budget comprising numerical errors in physical space and statistical errors in stochastic space and its use for optimal allocation of resources; 2. A considerable increase in efficiency when performing uncertainty quantification with a large number of uncertain variables in complex non-linear multi-physics problems; 3. A
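
    The report excerpt above names the stochastic-expansion ingredients (nonintrusive polynomial chaos and stochastic collocation) without equations. As a generic, single-variable illustration of nonintrusive stochastic collocation, the sketch below estimates the mean and variance of a model output with one Gaussian uncertain input via Gauss-Hermite quadrature; the model function is a stand-in, not a wind-turbine simulation.

```python
import numpy as np

def collocation_moments(model, mu, sigma, n_nodes=8):
    """Mean and variance of model(X) for X ~ N(mu, sigma^2) via Gauss-Hermite collocation."""
    # The physicists' Gauss-Hermite rule integrates against exp(-x^2); the change of
    # variables x -> mu + sqrt(2)*sigma*x turns it into an expectation under N(mu, sigma^2).
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    samples = mu + np.sqrt(2.0) * sigma * nodes
    values = np.array([model(s) for s in samples])
    mean = np.sum(weights * values) / np.sqrt(np.pi)
    second = np.sum(weights * values ** 2) / np.sqrt(np.pi)
    return mean, second - mean ** 2

# Stand-in nonlinear model of a quantity of interest:
model = lambda u: np.sin(u) + 0.1 * u ** 2
print(collocation_moments(model, mu=1.0, sigma=0.3))
```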

  10. Large-scale analysis of acute ethanol exposure in zebrafish development: a critical time window and resilience.

    Directory of Open Access Journals (Sweden)

    Shaukat Ali

    Full Text Available BACKGROUND: In humans, ethanol exposure during pregnancy causes a spectrum of developmental defects (fetal alcohol syndrome or FAS. Individuals vary in phenotypic expression. Zebrafish embryos develop FAS-like features after ethanol exposure. In this study, we ask whether stage-specific effects of ethanol can be identified in the zebrafish, and if so, whether they allow the pinpointing of sensitive developmental mechanisms. We have therefore conducted the first large-scale (>1500 embryos analysis of acute, stage-specific drug effects on zebrafish development, with a large panel of readouts. METHODOLOGY/PRINCIPAL FINDINGS: Zebrafish embryos were raised in 96-well plates. Range-finding indicated that 10% ethanol for 1 h was suitable for an acute exposure regime. High-resolution magic-angle spinning proton magnetic resonance spectroscopy showed that this produced a transient pulse of 0.86% concentration of ethanol in the embryo within the chorion. Survivors at 5 days postfertilisation were analysed. Phenotypes ranged from normal (resilient to severely malformed. Ethanol exposure at early stages caused high mortality (≥88%. At later stages of exposure, mortality declined and malformations developed. Pharyngeal arch hypoplasia and behavioral impairment were most common after prim-6 and prim-16 exposure. By contrast, microphthalmia and growth retardation were stage-independent. CONCLUSIONS: Our findings show that some ethanol effects are strongly stage-dependent. The phenotypes mimic key aspects of FAS including craniofacial abnormality, microphthalmia, growth retardation and behavioral impairment. We also identify a critical time window (prim-6 and prim-16 for ethanol sensitivity. Finally, our identification of a wide phenotypic spectrum is reminiscent of human FAS, and may provide a useful model for studying disease resilience.

  11. On the existence and dynamics of braneworld black holes

    International Nuclear Information System (INIS)

    Fitzpatrick, Andrew Liam; Randall, Lisa; Wiseman, Toby

    2006-01-01

    Based on holographic arguments Tanaka and Emparan et al have claimed that large localized static black holes do not exist in the one-brane Randall-Sundrum model. If such black holes are time-dependent as they propose, there are potentially significant phenomenological and theoretical consequences. We revisit the issue, arguing that their reasoning does not take into account the strongly coupled nature of the holographic theory. We claim that static black holes with smooth metrics should indeed exist in these theories, and give a simple example. However, although the existence of such solutions is relevant to exact and numerical solution searches, such static solutions might be dynamically unstable, again leading to time dependence with phenomenological consequences. We explore a plausible instability, suggested by Tanaka, analogous to that of Gregory and Laflamme, but argue that there is no reliable reason at this point to assume it must exist

  12. Breaking Computational Barriers: Real-time Analysis and Optimization with Large-scale Nonlinear Models via Model Reduction

    Energy Technology Data Exchange (ETDEWEB)

    Carlberg, Kevin Thomas [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Quantitative Modeling and Analysis; Drohmann, Martin [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Quantitative Modeling and Analysis; Tuminaro, Raymond S. [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Computational Mathematics; Boggs, Paul T. [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Quantitative Modeling and Analysis; Ray, Jaideep [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Quantitative Modeling and Analysis; van Bloemen Waanders, Bart Gustaaf [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Optimization and Uncertainty Estimation

    2014-10-01

    Model reduction for dynamical systems is a promising approach for reducing the computational cost of large-scale physics-based simulations to enable high-fidelity models to be used in many-query (e.g., Bayesian inference) and near-real-time (e.g., fast-turnaround simulation) contexts. While model reduction works well for specialized problems such as linear time-invariant systems, it is much more difficult to obtain accurate, stable, and efficient reduced-order models (ROMs) for systems with general nonlinearities. This report describes several advances that enable nonlinear reduced-order models (ROMs) to be deployed in a variety of time-critical settings. First, we present an error bound for the Gauss-Newton with Approximated Tensors (GNAT) nonlinear model reduction technique. This bound allows the state-space error for the GNAT method to be quantified when applied with the backward Euler time-integration scheme. Second, we present a methodology for preserving classical Lagrangian structure in nonlinear model reduction. This technique guarantees that important properties -- such as energy conservation and symplectic time-evolution maps -- are preserved when performing model reduction for models described by a Lagrangian formalism (e.g., molecular dynamics, structural dynamics). Third, we present a novel technique for decreasing the temporal complexity -- defined as the number of Newton-like iterations performed over the course of the simulation -- by exploiting time-domain data. Fourth, we describe a novel method for refining projection-based reduced-order models a posteriori using a goal-oriented framework similar to mesh-adaptive h-refinement in finite elements. The technique allows the ROM to generate arbitrarily accurate solutions, thereby providing the ROM with a 'failsafe' mechanism in the event of insufficient training data. Finally, we present the reduced-order model error surrogate (ROMES) method for statistically quantifying reduced-order
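
    The techniques summarized above (GNAT error bounds, structure preservation, ROMES) are specialized; as a minimal illustration of the projection-based model reduction they all build on, the sketch below constructs a POD basis from snapshots of a stand-in linear system and integrates the Galerkin-projected reduced model. It is not the report's method, and the operator and dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, dt, steps = 200, 10, 1e-3, 500

# Full-order stand-in model dx/dt = A x with a stable random operator.
A = -2.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
x0 = rng.standard_normal(n)

# Collect snapshots of the full model with forward Euler.
snapshots = [x0.copy()]
x = x0.copy()
for _ in range(steps):
    x = x + dt * (A @ x)
    snapshots.append(x.copy())
S = np.array(snapshots).T                      # n x (steps+1) snapshot matrix

# POD basis: leading left singular vectors of the snapshot matrix.
V, _, _ = np.linalg.svd(S, full_matrices=False)
V = V[:, :r]

# Galerkin projection: reduced operator and reduced initial condition.
Ar = V.T @ A @ V
xr = V.T @ x0
for _ in range(steps):
    xr = xr + dt * (Ar @ xr)

# Relative error of the reconstructed reduced solution at the final time.
print(np.linalg.norm(V @ xr - S[:, -1]) / np.linalg.norm(S[:, -1]))
```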

  13. O Ponto G Existe?

    Directory of Open Access Journals (Sweden)

    Carlos Alexandre Molina Noccioli

    2016-07-01

    Full Text Available This study analyzes the linguistic-discursive treatment of information on a thematic topic traditionally regarded as taboo, related to sexual matters, in the news article O ponto G existe?, published in 2008 in the Brazilian magazine Superinteressante, highlighting how the knowledge in question is socially represented when the magazine's editorial line is taken into account. The news article is fertile ground for the analysis of popularization strategies, since it attracts readers' curiosity, not least through its thematic choices. Built around an eccentric topic, the text manages to draw in a young audience interested in controversial discussions related to its universe.

  14. GRB 090926A AND BRIGHT LATE-TIME FERMI LARGE AREA TELESCOPE GAMMA-RAY BURST AFTERGLOWS

    International Nuclear Information System (INIS)

    Swenson, C. A.; Roming, P. W. A.; Vetere, L.; Kennea, J. A.; Maxham, A.; Zhang, B. B.; Zhang, B.; Schady, P.; Holland, S. T.; Kuin, N. P. M.; Oates, S. R.; De Pasquale, M.; Page, K. L.

    2010-01-01

    GRB 090926A was detected by both the Gamma-ray Burst Monitor and Large Area Telescope (LAT) instruments on board the Fermi Gamma-ray Space Telescope. Swift follow-up observations began ∼13 hr after the initial trigger. The optical afterglow was detected for nearly 23 days post trigger, placing it in the long-lived category. The afterglow is of particular interest due to its brightness at late times, as well as the presence of optical flares at T0+10^5 s and later, which may indicate late-time central engine activity. The LAT has detected a total of 16 gamma-ray bursts; nine of these bursts, including GRB 090926A, also have been observed by Swift. Of the nine Swift-observed LAT bursts, six were detected by UVOT, with five of the bursts having bright, long-lived optical afterglows. In comparison, Swift has been operating for five years and has detected nearly 500 bursts, but has only seen ∼30% of bursts with optical afterglows that live longer than 10^5 s. We have calculated the predicted gamma-ray fluence, as would have been seen by the Burst Alert Telescope (BAT) on board Swift, of the LAT bursts to determine whether this high percentage of long-lived optical afterglows is unique, when compared to BAT-triggered bursts. We find that, with the exception of the short burst GRB 090510A, the predicted BAT fluences indicate that the LAT bursts are more energetic than 88% of all Swift bursts and also have brighter than average X-ray and optical afterglows.

  15. Damage-Based Time-Dependent Modeling of Paraglacial to Postglacial Progressive Failure of Large Rock Slopes

    Science.gov (United States)

    Riva, Federico; Agliardi, Federico; Amitrano, David; Crosta, Giovanni B.

    2018-01-01

    Large alpine rock slopes undergo long-term evolution in paraglacial to postglacial environments. Rock mass weakening and increased permeability associated with the progressive failure of deglaciated slopes promote the development of potentially catastrophic rockslides. We captured the entire life cycle of alpine slopes in one damage-based, time-dependent 2-D model of brittle creep, including deglaciation, damage-dependent fluid occurrence, and rock mass property upscaling. We applied the model to the Spriana rock slope (Central Alps), affected by long-term instability after Last Glacial Maximum and representing an active threat. We simulated the evolution of the slope from glaciated conditions to present day and calibrated the model using site investigation data and available temporal constraints. The model tracks the entire progressive failure path of the slope from deglaciation to rockslide development, without a priori assumptions on shear zone geometry and hydraulic conditions. Complete rockslide differentiation occurs through the transition from dilatant damage to a compacting basal shear zone, accounting for observed hydraulic barrier effects and perched aquifer formation. Our model investigates the mechanical role of deglaciation and damage-controlled fluid distribution in the development of alpine rockslides. The absolute simulated timing of rock slope instability development supports a very long "paraglacial" period of subcritical rock mass damage. After initial damage localization during the Lateglacial, rockslide nucleation initiates soon after the onset of Holocene, whereas full mechanical and hydraulic rockslide differentiation occurs during Mid-Holocene, supporting a key role of long-term damage in the reported occurrence of widespread rockslide clusters of these ages.

  16. SCARDEC: a new technique for the rapid determination of seismic moment magnitude, focal mechanism and source time functions for large earthquakes using body-wave deconvolution

    Science.gov (United States)

    Vallée, M.; Charléty, J.; Ferreira, A. M. G.; Delouis, B.; Vergoz, J.

    2011-01-01

    Accurate and fast magnitude determination for large, shallow earthquakes is of key importance for post-seismic response and tsunami alert purposes. When no local real-time data are available, which is today the case for most subduction earthquakes, the first information comes from teleseismic body waves. Standard body-wave methods give accurate magnitudes for earthquakes up to Mw= 7-7.5. For larger earthquakes, the analysis is more complex, because of the non-validity of the point-source approximation and of the interaction between direct and surface-reflected phases. The latter effect acts as a strong high-pass filter, which complicates the magnitude determination. We here propose an automated deconvolutive approach, which does not impose any simplifying assumptions about the rupture process, thus being well adapted to large earthquakes. We first determine the source duration based on the length of the high frequency (1-3 Hz) signal content. The deconvolution of synthetic double-couple point source signals—depending on the four earthquake parameters strike, dip, rake and depth—from the windowed real data body-wave signals (including P, PcP, PP, SH and ScS waves) gives the apparent source time function (STF). We search the optimal combination of these four parameters that respects the physical features of any STF: causality, positivity and stability of the seismic moment at all stations. Once this combination is retrieved, the integration of the STFs gives directly the moment magnitude. We apply this new approach, referred to as the SCARDEC method, to most of the major subduction earthquakes in the period 1990-2010. Magnitude differences between the Global Centroid Moment Tensor (CMT) and the SCARDEC method may reach 0.2, but values are found consistent if we take into account that the Global CMT solutions for large, shallow earthquakes suffer from a known trade-off between dip and seismic moment. We show by modelling long-period surface waves of these events that
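
    The final step described above, integrating the apparent source time function (STF) to obtain the seismic moment and from it the moment magnitude, reduces to a quadrature plus the standard moment-magnitude relation. A sketch with a synthetic triangular STF (illustrative values only, not SCARDEC output):

```python
import numpy as np

# Synthetic apparent source time function (moment rate in N*m/s) sampled every 1 s:
t = np.arange(0.0, 101.0, 1.0)
stf = np.interp(t, [0.0, 50.0, 100.0], [0.0, 8.0e19, 0.0])   # triangular pulse

# Seismic moment = time integral of the moment-rate function.
M0 = np.trapz(stf, t)                    # N*m

# Standard moment-magnitude relation for M0 expressed in N*m.
Mw = (2.0 / 3.0) * (np.log10(M0) - 9.1)
print(f"M0 = {M0:.2e} N*m, Mw = {Mw:.2f}")
```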

  17. Investigação sobre a existência de inovações disruptivas das grandes empresas multinacionais para o mercado brasileiro de baixa renda Large multinational companies innovations to the low-income Brazilian market

    Directory of Open Access Journals (Sweden)

    Silvia Novaes Zilber

    2013-06-01

    sustainable and incremental innovations linked to the adequacy of existing products.

  19. Detection of time-varying structures by large deformation diffeomorphic metric mapping to aid reading of high-resolution CT images of the lung.

    Directory of Open Access Journals (Sweden)

    Ryo Sakamoto

    Full Text Available OBJECTIVES: To evaluate the accuracy of advanced non-linear registration of serial lung Computed Tomography (CT) images using Large Deformation Diffeomorphic Metric Mapping (LDDMM). METHODS: Fifteen cases of lung cancer with serial lung CT images (interval: 62.2±26.9 days) were used. After affine transformation, three-dimensional, non-linear volume registration was conducted using LDDMM with or without cascading elasticity control. Registration accuracy was evaluated by measuring the displacement of landmarks placed on vessel bifurcations for each lung segment. Subtraction images and Jacobian color maps, calculated from the transformation matrix derived from image warping, were generated and used to evaluate time-course changes of the tumors. RESULTS: The average displacement of landmarks was 0.02±0.16 mm and 0.12±0.60 mm for proximal and distal landmarks after LDDMM transformation with cascading elasticity control, which was significantly smaller than 3.11±2.47 mm and 3.99±3.05 mm, respectively, after affine transformation. Emerged or vanished nodules were visualized on subtraction images, and enlarging or shrinking nodules were displayed on Jacobian maps, enabled by highly accurate registration of the nodules using LDDMM. However, some residual misalignments were observed, even with non-linear transformation, when substantial changes existed between the image pairs. CONCLUSIONS: LDDMM provides accurate registration of serial lung CT images, and temporal subtraction images with Jacobian maps help radiologists to find changes in pulmonary nodules.
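
    The Jacobian color maps mentioned in the abstract encode the local volume change of the estimated deformation. A minimal sketch of how such a map can be computed from a dense 3-D displacement field is shown below; the array layout and test field are illustrative assumptions, not taken from the LDDMM pipeline.

```python
import numpy as np

def jacobian_determinant_map(disp):
    """Voxel-wise Jacobian determinant of the transform x -> x + u(x).

    disp: displacement field of shape (3, Z, Y, X) in voxel units.
    Values > 1 indicate local expansion (e.g. a growing nodule), < 1 contraction.
    """
    # grad[i, j] = d u_i / d axis_j, estimated with central differences.
    grad = np.stack([np.stack(np.gradient(disp[i]), axis=0) for i in range(3)], axis=0)
    eye = np.eye(3).reshape(3, 3, 1, 1, 1)
    J = eye + grad                          # Jacobian of the transform is I + grad(u)
    J = np.moveaxis(J, (0, 1), (-2, -1))    # put the 3x3 matrix axes last for np.linalg.det
    return np.linalg.det(J)

# Toy example: a uniform 1% dilation should give a determinant of about 1.03 everywhere.
z, y, x = np.meshgrid(np.arange(16), np.arange(16), np.arange(16), indexing="ij")
disp = 0.01 * np.stack([z, y, x]).astype(float)
print(jacobian_determinant_map(disp).mean())
```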

  20. submitter Optimizing the data-collection time of a large-scale data-acquisition system through a simulation framework

    CERN Document Server

    Colombo, Tommaso; Garcìa, Pedro Javier; Vandelli, Wainer

    2016-01-01

    The ATLAS detector at CERN records particle collision “events” delivered by the Large Hadron Collider. Its data-acquisition system identifies, selects, and stores interesting events in near real-time, with an aggregate throughput of several 10 GB/s. It is a distributed software system executed on a farm of roughly 2000 commodity worker nodes communicating via TCP/IP on an Ethernet network. Event data fragments are received from the many detector readout channels and are buffered, collected together, analyzed and either stored permanently or discarded. This system, and data-acquisition systems in general, are sensitive to the latency of the data transfer from the readout buffers to the worker nodes. Challenges affecting this transfer include the many-to-one communication pattern and the inherently bursty nature of the traffic. The main performance issues brought about by this workload are addressed in this paper, focusing in particular on the so-called TCP incast pathology. Since performing systematic stud...

  1. Government management and implementation of national real-time energy monitoring system for China large-scale public building

    International Nuclear Information System (INIS)

    Na Wei; Wu Yong; Song Yan; Dong Zhongcheng

    2009-01-01

    The supervision of energy efficiency in government office buildings and large-scale public buildings (GOBLPB) is the main embodiment of the government's exercise of public administration in the fields of resource saving and environmental protection. It is significant for the Chinese government to achieve its target of reducing building energy consumption by 11 million tons of standard coal before 2010. In the framework of a national demonstration project concerning the energy management system, Shenzhen Municipality has been selected for the implementation of the system. A data acquisition system and a methodology concerning the energy consumption of the GOBLPB have been developed. This paper summarizes the various features of the system used to identify building energy consumption and energy-saving potential. This paper also defines the methods used to achieve real-time monitoring and diagnosis: meters installed at each building, data transmitted over the internet to a central server, analysis and unification at the central server, and publication via the web. Furthermore, this paper introduces the plans to implement the system and to extend it countrywide. Finally, this paper presents some measures to achieve a community of common benefit in the implementation of the building energy efficiency supervisory system for the GOBLPB during its construction, reconstruction or operation stages.

  2. Do `negative' temperatures exist?

    Science.gov (United States)

    Lavenda, B. H.

    1999-06-01

    A modification of the second law is required for a system with a bounded density of states and not the introduction of a `negative' temperature scale. The ascending and descending branches of the entropy versus energy curve describe particle and hole states, having thermal equations of state that are given by the Fermi and logistic distributions, respectively. Conservation of energy requires isentropic states to be isothermal. The effect of adiabatically reversing the field is entirely mechanical because the only difference between the two states is their energies. The laws of large and small numbers, leading to the normal and Poisson approximations, characterize statistically the states of infinite and zero temperatures, respectively. Since the heat capacity also vanishes in the state of maximum disorder, the third law can be generalized in systems with a bounded density of states: the entropy tends to a constant as the temperature tends to either zero or infinity.

  3. Embedded Electro-Optic Sensor Network for the On-Site Calibration and Real-Time Performance Monitoring of Large-Scale Phased Arrays

    National Research Council Canada - National Science Library

    Yang, Kyoung

    2005-01-01

    This final report summarizes the progress during the Phase I SBIR project entitled "Embedded Electro-Optic Sensor Network for the On-Site Calibration and Real-Time Performance Monitoring of Large-Scale Phased Arrays...

  4. Precise large deviations of aggregate claims in a size-dependent renewal risk model with stopping time claim-number process

    Directory of Open Access Journals (Sweden)

    Shuo Zhang

    2017-04-01

    Full Text Available Abstract In this paper, we consider a size-dependent renewal risk model with stopping time claim-number process. In this model, we do not make any assumption on the dependence structure of claim sizes and inter-arrival times. We study large deviations of the aggregate amount of claims. For the subexponential heavy-tailed case, we obtain a precise large-deviation formula; our method substantially relies on a martingale for the structure of our models.
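
    The precise large-deviation formula itself is not reproduced in the record; results of this kind for subexponential claim sizes typically take the following asymptotic form (a generic statement, not necessarily the exact theorem of the paper), where S(t) is the aggregate claim amount, \mu the mean claim size, \lambda(t) the mean of the claim-number process and \overline{F} the claim-size tail:

    \[
      \Pr\bigl( S(t) - \mu\lambda(t) > x \bigr) \;\sim\; \lambda(t)\,\overline{F}(x),
      \qquad t \to \infty,
    \]

    holding uniformly for x ≥ γλ(t), for any fixed γ > 0.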

  5. Straightening: existence, uniqueness and stability

    Science.gov (United States)

    Destrade, M.; Ogden, R. W.; Sgura, I.; Vergori, L.

    2014-01-01

    One of the least studied universal deformations of incompressible nonlinear elasticity, namely the straightening of a sector of a circular cylinder into a rectangular block, is revisited here and, in particular, issues of existence and stability are addressed. Particular attention is paid to the system of forces required to sustain the large static deformation, including by the application of end couples. The influence of geometric parameters and constitutive models on the appearance of wrinkles on the compressed face of the block is also studied. Different numerical methods for solving the incremental stability problem are compared and it is found that the impedance matrix method, based on the resolution of a matrix Riccati differential equation, is the more precise. PMID:24711723

  6. Existence and construction of large stable food webs

    Science.gov (United States)

    Haerter, Jan O.; Mitarai, Namiko; Sneppen, Kim

    2017-09-01

    Ecological diversity is ubiquitous despite the restrictions imposed by competitive exclusion and apparent competition. To explain the observed richness of species in a given habitat, food-web theory has explored nonlinear functional responses, self-interaction, or spatial structure and dispersal—model ingredients that have proven to promote stability and diversity. We return instead here to classical Lotka-Volterra equations, where species-species interaction is characterized by a simple product and spatial restrictions are ignored. We quantify how this idealization imposes constraints on coexistence and diversity for many species. To this end, we introduce the concept of free and controlled species and use this to demonstrate how stable food webs can be constructed by the sequential addition of species. The resulting food webs can reach dozens of species and generally yield nonrandom degree distributions in accordance with the constraints imposed through the assembly process. Our model thus serves as a formal starting point for the study of sustainable interaction patterns between species.
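
    For reference, the classical generalized Lotka-Volterra dynamics referred to above, in which the interaction between two species enters only through the product of their abundances, reads (generic textbook form, with intrinsic growth rates r_i and interaction coefficients a_ij):

    \[
      \frac{\mathrm{d}x_i}{\mathrm{d}t} \;=\; x_i\Bigl( r_i + \sum_{j} a_{ij}\,x_j \Bigr),
      \qquad i = 1,\dots,S .
    \]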

  7. Image-processing of time-averaged interface distributions representing CCFL characteristics in a large scale model of a PWR hot-leg pipe geometry

    International Nuclear Information System (INIS)

    Al Issa, Suleiman; Macián-Juan, Rafael

    2017-01-01

    Highlights: • CCFL characteristics are investigated in PWR large-scale hot-leg pipe geometry. • Image processing of the air-water interface produced time-averaged interface distributions. • Time averages provide a comparative method of CCFL characteristics among different studies. • CCFL correlations depend upon the range of investigated water delivery for Dh ≫ 50 mm. • 1D codes are incapable of investigating CCFL because of the lack of interface distribution. - Abstract: Countercurrent Flow Limitation (CCFL) was experimentally investigated in the 1/3.9 downscaled COLLIDER facility with a 190 mm pipe diameter using air/water at atmospheric pressure. Previous investigations provided knowledge of the onset of CCFL mechanisms. In the current article, CCFL characteristics at the COLLIDER facility are measured and discussed along with time-averaged distributions of the air/water interface for a selected matrix of liquid/gas velocities. The article demonstrates the time-averaged interface as a useful method to identify CCFL characteristics at quasi-stationary flow conditions, eliminating variations that appear in single images and showing essential comparative flow features such as the degree of restriction at the bend, the extension and intensity of the two-phase mixing zones, and the average water level within the horizontal part and the steam generator. This makes it possible to compare interface distributions obtained in different investigations. The distributions are also beneficial for CFD validations of CCFL, as the instantaneous chaotic gas/liquid interface is impossible to reproduce in CFD simulations. The current study shows that the final CCFL characteristics curve (and the corresponding CCFL correlation) depends upon the covered measuring range of water delivery. It also shows that the hydraulic diameter should be sufficiently larger than 50 mm in order to obtain CCFL characteristics comparable to the 1:1 scale data (namely the UPTF data). Finally

  8. Seasonal Differences in Determinants of Time Location Patterns in an Urban Population: A Large Population-Based Study in Korea.

    Science.gov (United States)

    Lee, Sewon; Lee, Kiyoung

    2017-06-22

    Time location patterns are a significant factor for exposure assessment models of air pollutants. Factors associated with time location patterns in urban populations are typically due to high air pollution levels in urban areas. The objective of this study was to determine the seasonal differences in time location patterns in two urban cities. A Time Use Survey of Korean Statistics (KOSTAT) was conducted in the summer, fall, and winter of 2014. Time location data from Seoul and Busan were collected, together with demographic information obtained by diaries and questionnaires. Determinants of the time spent at each location were analyzed by multiple linear regression and the stepwise method. Seoul and Busan participants had similar time location profiles over the three seasons. The time spent at own home, other locations, workplace/school and during walk were similar over the three seasons in both the Seoul and Busan participants. The most significant time location pattern factors were employment status, age, gender, monthly income, and spouse. Season affected the time spent at the workplace/school and other locations in the Seoul participants, but not in the Busan participants. The seasons affected each time location pattern of the urban population slightly differently, but overall there were few differences.

  9. Seasonal Differences in Determinants of Time Location Patterns in an Urban Population: A Large Population-Based Study in Korea

    Directory of Open Access Journals (Sweden)

    Sewon Lee

    2017-06-01

    Full Text Available Time location patterns are a significant factor for exposure assessment models of air pollutants. Factors associated with time location patterns in urban populations are typically due to high air pollution levels in urban areas. The objective of this study was to determine the seasonal differences in time location patterns in two urban cities. A Time Use Survey of Korean Statistics (KOSTAT) was conducted in the summer, fall, and winter of 2014. Time location data from Seoul and Busan were collected, together with demographic information obtained by diaries and questionnaires. Determinants of the time spent at each location were analyzed by multiple linear regression and the stepwise method. Seoul and Busan participants had similar time location profiles over the three seasons. The time spent at own home, other locations, workplace/school and during walk were similar over the three seasons in both the Seoul and Busan participants. The most significant time location pattern factors were employment status, age, gender, monthly income, and spouse. Season affected the time spent at the workplace/school and other locations in the Seoul participants, but not in the Busan participants. The seasons affected each time location pattern of the urban population slightly differently, but overall there were few differences.

  10. Evaluating cloud processes in large-scale models: Of idealized case studies, parameterization testbeds and single-column modelling on climate time-scales

    Science.gov (United States)

    Neggers, Roel

    2016-04-01

    Boundary-layer schemes have always formed an integral part of General Circulation Models (GCMs) used for numerical weather and climate prediction. The spatial and temporal scales associated with boundary-layer processes and clouds are typically much smaller than those at which GCMs are discretized, which makes their representation through parameterization a necessity. The need for generally applicable boundary-layer parameterizations has motivated many scientific studies, which in effect has created its own active research field in the atmospheric sciences. Of particular interest has been the evaluation of boundary-layer schemes at "process-level". This means that parameterized physics are studied in isolation from the larger-scale circulation, using prescribed forcings and excluding any upscale interaction. Although feedbacks are thus prevented, the benefit is an enhanced model transparency, which might aid an investigator in identifying model errors and understanding model behavior. The popularity and success of the process-level approach is demonstrated by the many past and ongoing model inter-comparison studies that have been organized by initiatives such as GCSS/GASS. A common thread in the results of these studies is that although most schemes somehow manage to capture first-order aspects of boundary layer cloud fields, there certainly remains room for improvement in many areas. Only too often are boundary layer parameterizations still found to be at the heart of problems in large-scale models, negatively affecting forecast skills of NWP models or causing uncertainty in numerical predictions of future climate. How to break this parameterization "deadlock" remains an open problem. This presentation attempts to give an overview of the various existing methods for the process-level evaluation of boundary-layer physics in large-scale models. This includes i) idealized case studies, ii) longer-term evaluation at permanent meteorological sites (the testbed approach

  11. An Adaptive Large Neighborhood Search-based Three-Stage Matheuristic for the Vehicle Routing Problem with Time Windows

    DEFF Research Database (Denmark)

    Christensen, Jonas Mark; Røpke, Stefan

    that serves all the customers. The second stage uses an Adaptive Large Neighborhood Search (ALNS) algorithm to minimise the travel distance; during the second phase all of the generated routes are considered by solving a set cover problem. The ALNS algorithm uses 4 destroy operators, 2 repair operators...
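
    A generic ALNS loop of the kind referred to above follows the destroy/repair pattern sketched below; this is a schematic illustration with hypothetical operator lists and a simulated-annealing acceptance rule, not the three-stage matheuristic of the paper.

    import math
    import random

    def alns(initial_solution, destroy_ops, repair_ops, cost, iterations=10_000,
             start_temp=100.0, cooling=0.999, reaction=0.1):
        """Generic Adaptive Large Neighborhood Search loop.

        destroy_ops / repair_ops are lists of callables that take a solution
        and return a new one; `cost` maps a solution to a float to minimise.
        """
        best = current = initial_solution
        weights_d = [1.0] * len(destroy_ops)
        weights_r = [1.0] * len(repair_ops)
        temp = start_temp
        for _ in range(iterations):
            d = random.choices(range(len(destroy_ops)), weights_d)[0]
            r = random.choices(range(len(repair_ops)), weights_r)[0]
            candidate = repair_ops[r](destroy_ops[d](current))
            delta = cost(candidate) - cost(current)
            score = 0.0
            if cost(candidate) < cost(best):
                best, current, score = candidate, candidate, 3.0   # new global best
            elif delta < 0 or random.random() < math.exp(-delta / temp):
                current, score = candidate, 1.0                    # accepted move
            # Adaptive weight update: operators that produced good moves are
            # favoured in later iterations.
            weights_d[d] = (1 - reaction) * weights_d[d] + reaction * score
            weights_r[r] = (1 - reaction) * weights_r[r] + reaction * score
            temp *= cooling
        return best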

  12. Temperature field due to time-dependent heat sources in a large rectangular grid - Derivation of analytical solution

    International Nuclear Information System (INIS)

    Claesson, J.; Probert, T.

    1996-01-01

    The temperature field in rock due to a large rectangular grid of heat releasing canisters containing nuclear waste is studied. The solution is by superposition divided into different parts. There is a global temperature field due to the large rectangular canister area, while a local field accounts for the remaining heat source problem. The global field is reduced to a single integral. The local field is also solved analytically using solutions for a finite line heat source and for an infinite grid of point sources. The local solution is reduced to three parts, each of which depends on two spatial coordinates only. The temperatures at the envelope of a canister are given by a single thermal resistance, which is given by an explicit formula. The results are illustrated by a few numerical examples dealing with the KBS-3 concept for storage of nuclear waste. 8 refs
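
    Analytical superposition solutions of this type are built from the standard instantaneous point-source kernel of heat conduction. As a reminder (not the paper's explicit expressions), a point release of energy E in an infinite medium of density ρ, heat capacity c and thermal diffusivity a gives

    \[
      \Delta T(r,t) \;=\; \frac{E}{\rho c\,(4\pi a t)^{3/2}}\,
                          \exp\!\Bigl(-\frac{r^{2}}{4 a t}\Bigr),
    \]

    and a time-dependent heat release per canister is then handled by convolution in time, the grid of canisters by summation in space.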

  13. Reducing process delays for real-time earthquake parameter estimation - An application of KD tree to large databases for Earthquake Early Warning

    Science.gov (United States)

    Yin, Lucy; Andrews, Jennifer; Heaton, Thomas

    2018-05-01

    Earthquake parameter estimations using nearest neighbor searching among a large database of observations can lead to reliable prediction results. However, in the real-time application of Earthquake Early Warning (EEW) systems, the accurate prediction using a large database is penalized by a significant delay in the processing time. We propose to use a multidimensional binary search tree (KD tree) data structure to organize large seismic databases to reduce the processing time in nearest neighbor search for predictions. We evaluated the performance of the KD tree on the Gutenberg Algorithm, a database-searching algorithm for EEW. We constructed an offline test to predict peak ground motions using a database with feature sets of waveform filter-bank characteristics, and compared the results with the observed seismic parameters. We concluded that a large database provides more accurate predictions of ground motion information, such as peak ground acceleration, velocity, and displacement (PGA, PGV, PGD), than of source parameters, such as hypocenter distance. Applying the KD tree search to organize the database reduced the average search time by 85% relative to the exhaustive method, making the method feasible for real-time implementation. The algorithm is straightforward and the results will reduce the overall time of warning delivery for EEW.
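
    A minimal sketch of the KD-tree idea, using SciPy and hypothetical filter-bank feature vectors and peak-ground-acceleration labels (not the Gutenberg Algorithm code itself): the tree is built once offline, and each real-time query then avoids an exhaustive scan of the database.

    import numpy as np
    from scipy.spatial import cKDTree

    # Hypothetical database: each row is a feature vector of filter-bank
    # amplitudes for one archived waveform, with an associated PGA observation.
    rng = np.random.default_rng(0)
    features = rng.random((100_000, 9))
    pga = rng.lognormal(mean=-2.0, size=100_000)

    tree = cKDTree(features)          # built once, offline

    def predict_pga(query_features, k=30):
        """Predict PGA as the mean over the k nearest archived neighbours."""
        _, idx = tree.query(query_features, k=k)
        return pga[idx].mean()

    print(predict_pga(rng.random(9)))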

  14. Fully implicit solution of large-scale non-equilibrium radiation diffusion with high order time integration

    International Nuclear Information System (INIS)

    Brown, Peter N.; Shumaker, Dana E.; Woodward, Carol S.

    2005-01-01

    We present a solution method for fully implicit radiation diffusion problems discretized on meshes having millions of spatial zones. This solution method makes use of high order in time integration techniques, inexact Newton-Krylov nonlinear solvers, and multigrid preconditioners. We explore the advantages and disadvantages of high order time integration methods for the fully implicit formulation on both two- and three-dimensional problems with tabulated opacities and highly nonlinear fusion source terms
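
    As a toy illustration of the Jacobian-free Newton-Krylov idea (not the authors' radiation-diffusion code), a single backward-Euler step of a one-dimensional nonlinear diffusion equation can be solved with SciPy's newton_krylov; the problem, grid, and parameters below are hypothetical.

    import numpy as np
    from scipy.optimize import newton_krylov

    # Backward-Euler step of a toy nonlinear diffusion u_t = (u^3 u_x)_x on a
    # 1-D grid, solved as F(u_new) = 0 with a Jacobian-free Newton-Krylov method.
    n, dx, dt = 200, 1.0 / 200, 1e-3
    u_old = 1.0 + 0.5 * np.sin(2 * np.pi * np.linspace(0, 1, n))

    def residual(u):
        flux = u[:-1] ** 3 * np.diff(u) / dx          # nonlinear flux at faces
        div = np.zeros_like(u)
        div[1:-1] = np.diff(flux) / dx                # divergence at interior nodes
        return u - u_old - dt * div                   # implicit Euler residual

    u_new = newton_krylov(residual, u_old, method='lgmres')
    print(abs(residual(u_new)).max())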

  15. Solar Panel Installations on Existing Structures

    OpenAIRE

    Tim D. Sass, PE, LEED

    2013-01-01

    The rising price of fossil fuels, government incentives, and growing public awareness of the need to implement sustainable energy supplies have resulted in a large increase in solar panel installations across the country. For many sites the most economical solar panel installation uses existing, southerly facing rooftops. Adding solar panels to an existing roof typically means increased loads that must be borne by the building's structural elements. The structural desig...

  16. A Novel Spatial-Temporal Voronoi Diagram-Based Heuristic Approach for Large-Scale Vehicle Routing Optimization with Time Constraints

    Directory of Open Access Journals (Sweden)

    Wei Tu

    2015-10-01

    Full Text Available Vehicle routing optimization (VRO) designs the best routes to reduce travel cost, energy consumption, and carbon emission. Due to non-deterministic polynomial-time hard (NP-hard) complexity, many VROs involved in real-world applications require too much computing effort. Shortening computing time for VRO is a great challenge for state-of-the-art spatial optimization algorithms. From a spatial-temporal perspective, this paper presents a spatial-temporal Voronoi diagram-based heuristic approach for large-scale vehicle routing problems with time windows (VRPTW). Considering time constraints, a spatial-temporal Voronoi distance is derived from the spatial-temporal Voronoi diagram to find near neighbors in the space-time searching context. A Voronoi distance decay strategy that integrates a time warp operation is proposed to accelerate local search procedures. A spatial-temporal feature-guided search is developed to improve unpromising micro route structures. Experiments on VRPTW benchmarks and real-world instances are conducted to verify performance. The results demonstrate that the proposed approach is competitive with state-of-the-art heuristics and achieves high-quality solutions for large-scale instances of VRPTWs in a short time. This novel approach will contribute to the spatial decision support community by developing an effective vehicle routing optimization method for large transportation applications in both public and private sectors.

  17. Accounting for large deformations in real-time simulations of soft tissues based on reduced-order models.

    Science.gov (United States)

    Niroomandi, S; Alfaro, I; Cueto, E; Chinesta, F

    2012-01-01

    Model reduction techniques have been shown to constitute a valuable tool for real-time simulation in surgical environments and other fields. However, some limitations, imposed by real-time constraints, have not yet been overcome. One such limitation is the severe time constraint (a resolution frequency of 500 Hz) that precludes the use of Newton-like schemes for solving non-linear models such as the ones usually employed for modeling biological tissues. In this work we present a technique able to deal with geometrically non-linear models, based on the use of model reduction techniques together with an efficient non-linear solver. Examples of the performance of the technique are given. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
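
    Model reduction of the kind mentioned above is often built on a proper orthogonal decomposition (POD) of precomputed snapshots. The sketch below is a generic snapshot-POD construction in NumPy, with hypothetical snapshot data, and is not the authors' formulation.

    import numpy as np

    def pod_basis(snapshots, energy=0.999):
        """Proper Orthogonal Decomposition of a snapshot matrix.

        snapshots: (n_dof, n_snapshots) array of precomputed displacement fields.
        Returns the reduced basis Phi with the smallest number of modes whose
        cumulative singular-value energy reaches `energy`.
        """
        u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        cum = np.cumsum(s ** 2) / np.sum(s ** 2)
        n_modes = int(np.searchsorted(cum, energy) + 1)
        return u[:, :n_modes]

    # Online phase: displacements are sought as d = Phi @ q, so the nonlinear
    # system is solved for the few reduced coordinates q instead of all DOFs.
    snapshots = np.random.rand(3000, 50)      # hypothetical offline training data
    phi = pod_basis(snapshots)
    print(phi.shape)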

  18. Finite upper bound for the Hawking decay time of an arbitrarily large black hole in anti-de Sitter spacetime

    Science.gov (United States)

    Page, Don N.

    2018-01-01

    In an asymptotically flat spacetime of dimension $d > 3$ and with the Newtonian gravitational constant $G$, a spherical black hole of initial horizon radius $r_h$ and mass $M \sim r_h^{d-3}/G$ has a total decay time to Hawking emission of $t_d \sim r_h^{d-1}/G \sim G^{2/(d-3)} M^{(d-1)/(d-3)}$, which grows without bound as the radius $r_h$ and mass $M$ are taken to infinity. However, in asymptotically anti-de Sitter spacetime with a length scale $\ell$ and with absorbing boundary conditions at infinity, the total Hawking decay time does not diverge as the mass and radius go to infinity but instead remains bounded by a time of the order of $\ell^{d-1}/G$.

  19. Real-Time Magnitude Characterization of Large Earthquakes Using the Predominant Period Derived From 1 Hz GPS Data

    Science.gov (United States)

    Psimoulis, Panos A.; Houlié, Nicolas; Behr, Yannik

    2018-01-01

    Earthquake early warning (EEW) systems' performance is driven by the trade-off between the need for a rapid alert and the accuracy of each solution. A challenge for many EEW systems has been the magnitude saturation for large events (MW > 7) and the resulting underestimation of seismic moment magnitude. In this study, we test the performance of high-rate (1 Hz) GPS, based on seven seismic events, to evaluate whether long-period ground motions can be measured well enough to reliably infer earthquake predominant periods. We show that high-rate GPS data allow the computation of a GPS-based predominant period (τg) to estimate lower bounds for the magnitude of earthquakes and to distinguish between large (MW > 7) and great (MW > 8) events, and thus extend the capability of EEW systems for larger events. The study also identifies the impact of different values of the smoothing factor α on the τg results and how the sampling rate and the computation process differentiate τg from the commonly used τp.
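
    For context, the real-time predominant-period estimate commonly used in EEW (of which τg is described above as the GPS-based analogue) is computed recursively from the sampled ground-motion series x_i with a smoothing factor α; in its standard textbook form,

    \[
      X_i = \alpha X_{i-1} + x_i^{2}, \qquad
      D_i = \alpha D_{i-1} + \Bigl(\frac{\mathrm{d}x}{\mathrm{d}t}\Bigr)_i^{2}, \qquad
      \tau_i = 2\pi\sqrt{X_i / D_i}.
    \]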

  20. Design and Demonstration of an Acousto-Optic Time-Integrating Correlator with a Large a Parallel Gain

    Science.gov (United States)

    1993-01-01

    DNA: deoxyribose nucleic acid; DPP: Digital Post-Processor; DREO: Defence Research Establishment Ottawa; RF: Radio Frequency; TeO2: tellurium dioxide; TIC: Time-Integrating Correlator. ... The acoustic velocity in TeO2 is 620 m/s, so a device with a 100-µs aperture is 62 mm long. To take advantage of the full interaction time of these Bragg cells, the whole... [Bragg cell characteristics included in the digital post-processor hardware: glass (bulk interaction), bandwidth 100 MHz, center frequency 150 MHz; LiNbO3 ...]

  1. Real-time impact of power balancing on power system operation with large scale integration of wind power

    DEFF Research Database (Denmark)

    Basit, Abdul; Hansen, Anca Daniela; Sørensen, Poul Ejnar

    2017-01-01

    A highly wind-power-integrated power system requires continuous active power regulation to tackle the power imbalances resulting from wind power forecast errors. The active power balance is maintained in real time with the automatic generation control and also from the control room, where...... power system model. The power system model takes the hour-ahead regulating power plan from the power balancing model and the generation and power exchange capacities for the year 2020 into account. The real-time impact of power balancing in a highly wind-power-integrated power system is assessed...

  2. The global existence problem in general relativity

    CERN Document Server

    Andersson, L

    2000-01-01

    We survey some known facts and open questions concerning the global properties of 3+1 dimensional space-times containing a compact Cauchy surface. We consider space-times with an $\ell$-dimensional Lie algebra of space-like Killing fields. For each $\ell \leq 3$, we give some basic results and conjectures on global existence and cosmic censorship. For the case of the 3+1 dimensional Einstein equations without symmetries, a new small-data global existence result is announced.

  3. Scalability of Semi-Implicit Time Integrators for Nonhydrostatic Galerkin-based Atmospheric Models on Large Scale Cluster

    Science.gov (United States)

    2011-01-01

    present performance statistics to explain the scalability behavior. Keywords: atmospheric models, time integrators, MPI, scalability, performance. ... across inter-element boundaries. Basis functions are constructed as tensor products of Lagrange polynomials, $\psi_i(\mathbf{x}) = h_\alpha(\xi) \otimes h_\beta(\eta) \otimes h_\gamma(\zeta)$, where $h_\alpha$

  4. An adaptive large neighborhood search heuristic for the pickup and delivery problem with time Windows and scheduled lines

    NARCIS (Netherlands)

    Ghilas, V.; Demir, E.; van Woensel, T.

    2016-01-01

    The Pickup and Delivery Problem with Time Windows and Scheduled Lines (PDPTW-SL) concerns scheduling a set of vehicles to serve freight requests such that a part of the journey can be carried out on a scheduled public transportation line. Due to the complexity of the problem, which is NP-hard, we

  5. Plant diversity induces a shift of DOC concentration over time - results from long term and large scale experiment

    Science.gov (United States)

    Lange, Markus; Gleixner, Gerd

    2016-04-01

    Plant diversity has been demonstrated to be a crucial factor for soil organic carbon (SOC) storage. The horizontal SOC formation in turn is strongly affected by the relatively small but consistent flow of dissolved organic carbon (DOC) in soils. In this process, pore water leaches plant material and already stored SOC while these leachates are simultaneously transported downwards. However, there is great uncertainty about the drivers of DOC flux, in particular about the importance of biological processes. We investigated the impact of plant diversity and other biotic drivers on DOC concentrations and total DOC fluxes (concentration × sampled water amount). In addition, we considered abiotic factors such as weather and soil conditions to assess the relative importance of biotic and abiotic drivers and how their importance changes over time. We used a comprehensive data set gathered in the frame of the long-term biodiversity experiment "The Jena Experiment". Permanent monitoring started directly after establishment of the field site in 2002 and is still running. This enabled us to trace the impact of plant communities, with their increasing establishment over time, on DOC concentration. We found the amount of sampled pore water best explained by rainfall, while it was not related to plant-associated variables. Directly after establishment of the experimental site, DOC concentrations were highest and then decreased with time. In the first period of the experiment plant diversity had no or even a slightly negative impact on DOC concentrations. The direction of the plant diversity effect on DOC concentrations changed over time; in later phases we observed the highest DOC concentrations on plots with high plant diversity. Moreover, DOC concentrations were negatively affected by increased amounts of sampled pore water, indicating a dilution effect. Even though this impact was highly significant, its effect size was less pronounced at later time points. In summary

  6. Network Dynamics with BrainX3: A Large-Scale Simulation of the Human Brain Network with Real-Time Interaction

    OpenAIRE

    Xerxes D. Arsiwalla; Riccardo Zucca; Alberto Betella; Enrique Martinez; David Dalmazzo; Pedro Omedas; Gustavo Deco; Paul F.M.J. Verschure

    2015-01-01

    BrainX3 is a large-scale simulation of human brain activity with real-time interaction, rendered in 3D in a virtual reality environment, which combines computational power with human intuition for the exploration and analysis of complex dynamical networks. We ground this simulation on structural connectivity obtained from diffusion spectrum imaging data and model it on neuronal population dynamics. Users can interact with BrainX3 in real-time by perturbing brain regions with transient stimula...

  7. Network dynamics with BrainX3: a large-scale simulation of the human brain network with real-time interaction

    OpenAIRE

    Arsiwalla, Xerxes D.; Zucca, Riccardo; Betella, Alberto; Martínez, Enrique, 1961-; Dalmazzo, David; Omedas, Pedro; Deco, Gustavo; Verschure, Paul F. M. J.

    2015-01-01

    BrainX3 is a large-scale simulation of human brain activity with real-time interaction, rendered in 3D in a virtual reality environment, which combines computational power with human intuition for the exploration and analysis of complex dynamical networks. We ground this simulation on structural connectivity obtained from diffusion spectrum imaging data and model it on neuronal population dynamics. Users can interact with BrainX3 in real-time by perturbing brain regions with transient stimula...

  8. Temperature field due to time-dependent heat sources in a large rectangular grid. Application for the KBS-3 repository

    International Nuclear Information System (INIS)

    Probert, T.; Claesson, Johan

    1997-04-01

    In the KBS-3 concept canisters containing nuclear waste are deposited along parallel tunnels over a large rectangular area deep below the ground surface. The temperature field in rock due to such a rectangular grid of heat-releasing canisters is studied. An analytical solution for this problem for any heat source has been presented in a preceding paper. The complete solution is summarized in this paper. The solution is by superposition divided into two main parts. There is a global temperature field due to the large rectangular canister area, while a local field accounts for the remaining heat source problem. In this sequel to the first report, the local solution is discussed in detail. The local solution consists of three parts corresponding to line heat sources along tunnels, point heat sources along a tunnel and a line heat source along a canister. Each part depends on two special variables only. These parts are illustrated in dimensionless form. Inside the repository the local temperature field is periodic in the horizontal directions and has a short extent in the vertical direction. This allows us to look at the solution in a parallelepiped around a canister. The solution in the parallelepiped is valid for all canisters that are not too close to the repository edges. The total temperature field is calculated for the KBS-3 case. The temperature field is calculated using a heat release that is valid for the first 10 000 years after deposition. The temperature field is shown in 23 figures in order to illustrate different aspects of the complex thermal process

  9. The utilization of real time models as a decision aid following a large release of radionuclides into the atmosphere

    International Nuclear Information System (INIS)

    1994-02-01

    In 1986, following the Chernobyl accident, the International Nuclear Safety Advisory Group (INSAG) recommended, in Safety Series No. 75-INSAG-1, inter alia (recommendation B.2(10)), that ''the IAEA should develop technical guidance on the use of real-time models able to accept actual meteorological and radiological monitoring system data in predicting the radiological consequences of a nuclear accident for persons and the environment and in determining what protective measures are necessary''. 24 refs, 21 figs, 2 tabs

  10. Understanding Business Interests in International Large-Scale Student Assessments: A Media Analysis of "The Economist," "Financial Times," and "Wall Street Journal"

    Science.gov (United States)

    Steiner-Khamsi, Gita; Appleton, Margaret; Vellani, Shezleen

    2018-01-01

    The media analysis is situated in the larger body of studies that explore the varied reasons why different policy actors advocate for international large-scale student assessments (ILSAs) and adds to the research on the fast advance of the global education industry. The analysis of "The Economist," "Financial Times," and…

  11. Matrix-assisted laser desorption/ionization-time of flight mass spectrometry identification of large colony beta-hemolytic streptococci containing Lancefield groups A, C, and G

    DEFF Research Database (Denmark)

    Salgård Jensen, Christian; Dam-Nielsen, Casper; Arpi, Magnus

    2015-01-01

    BACKGROUND: The aim of this study was to investigate whether large colony beta-hemolytic streptococci containing Lancefield groups A, C, and G can be adequately identified using matrix-assisted laser desorption/ionization-time of flight mass spectrometry (MALDI-ToF). Previous studies show varying...

  12. Evidence from a Large Sample on the Effects of Group Size and Decision-Making Time on Performance in a Marketing Simulation Game

    Science.gov (United States)

    Treen, Emily; Atanasova, Christina; Pitt, Leyland; Johnson, Michael

    2016-01-01

    Marketing instructors using simulation games as a way of inducing some realism into a marketing course are faced with many dilemmas. Two important quandaries are the optimal size of groups and how much of the students' time should ideally be devoted to the game. Using evidence from a very large sample of teams playing a simulation game, the study…

  13. Local Recurrence in Women With Stage I Breast Cancer: Declining Rates Over Time in a Large, Population-Based Cohort

    Energy Technology Data Exchange (ETDEWEB)

    Canavan, Joycelin, E-mail: canavanjoycelin@gmail.com [Radiation Therapy Program and Breast Cancer Outcomes Unit, British Columbia Cancer Agency, Vancouver Island Centre, University of British Columbia, Victoria, British Columbia (Canada); Truong, Pauline T.; Smith, Sally L. [Radiation Therapy Program and Breast Cancer Outcomes Unit, British Columbia Cancer Agency, Vancouver Island Centre, University of British Columbia, Victoria, British Columbia (Canada); Lu, Linghong; Lesperance, Mary [Department of Mathematics and Statistics, University of Victoria, British Columbia (Canada); Olivotto, Ivo A. [Department of Radiation Oncology, Tom Baker Cancer Centre, University of Calgary (Canada)

    2014-01-01

    Purpose: To evaluate whether local recurrence (LR) risk has changed over time among women with stage I breast cancer treated with breast-conserving therapy. Methods and Materials: Subjects were 5974 women aged ≥50 years diagnosed with pT1N0 breast cancer from 1989 to 2006, treated with breast-conserving surgery and radiation therapy. Clinicopathologic characteristics, treatment, and LR outcomes were compared among 4 cohorts stratified by year of diagnosis: 1989 to 1993 (n=1077), 1994 to 1998 (n=1633), 1999 to 2002 (n=1622), and 2003 to 2006 (n=1642). Multivariable analysis was performed, with year of diagnosis as a continuous variable. Results: Median follow-up time was 8.6 years. Among patients diagnosed in 1989 to 1993, 1994 to 1998, 1999 to 2002, and 2003 to 2006, the proportions of grade 1 tumors increased (16% vs 29% vs 40% vs 39%, respectively, P<.001). Surgical margin clearance rates increased from 82% to 93% to 95% and 88%, respectively (P<.001). Over time, the proportions of unknown estrogen receptor (ER) status decreased (29% vs 10% vs 1.2% vs 0.5%, respectively, P<.001), whereas ER-positive tumors increased (56% vs 77% vs 86% vs 86%, respectively, P<.001). Hormone therapy use increased (23% vs 23% vs 62% vs 73%, respectively, P<.001), and chemotherapy use increased (2% vs 5% vs 10% vs 13%, respectively, P<.001). The 5-year cumulative incidence rates of LR over the 4 time periods were 2.8% vs 1.7% vs 0.9% vs 0.8%, respectively (Gray's test, P<.001). On competing risk multivariable analysis, year of diagnosis was significantly associated with decreased LR (hazard ratio, 0.92 per year, P=.0003). Relative to grade 1 histology, grades 2, 3, and unknown were associated with increased LR. Hormone therapy use was associated with reduced LR. Conclusion: Significant changes in the multimodality management of stage I breast cancer have occurred over the past 2 decades. More favorable-risk tumors were diagnosed, and margin clearance and systemic therapy use

  14. Local Recurrence in Women With Stage I Breast Cancer: Declining Rates Over Time in a Large, Population-Based Cohort

    International Nuclear Information System (INIS)

    Canavan, Joycelin; Truong, Pauline T.; Smith, Sally L.; Lu, Linghong; Lesperance, Mary; Olivotto, Ivo A.

    2014-01-01

    Purpose: To evaluate whether local recurrence (LR) risk has changed over time among women with stage I breast cancer treated with breast-conserving therapy. Methods and Materials: Subjects were 5974 women aged ≥50 years diagnosed with pT1N0 breast cancer from 1989 to 2006, treated with breast-conserving surgery and radiation therapy. Clinicopathologic characteristics, treatment, and LR outcomes were compared among 4 cohorts stratified by year of diagnosis: 1989 to 1993 (n=1077), 1994 to 1998 (n=1633), 1999 to 2002 (n=1622), and 2003 to 2006 (n=1642). Multivariable analysis was performed, with year of diagnosis as a continuous variable. Results: Median follow-up time was 8.6 years. Among patients diagnosed in 1989 to 1993, 1994 to 1998, 1999 to 2002, and 2003 to 2006, the proportions of grade 1 tumors increased (16% vs 29% vs 40% vs 39%, respectively, P<.001). Surgical margin clearance rates increased from 82% to 93% to 95% and 88%, respectively (P<.001). Over time, the proportions of unknown estrogen receptor (ER) status decreased (29% vs 10% vs 1.2% vs 0.5%, respectively, P<.001), whereas ER-positive tumors increased (56% vs 77% vs 86% vs 86%, respectively, P<.001). Hormone therapy use increased (23% vs 23% vs 62% vs 73%, respectively, P<.001), and chemotherapy use increased (2% vs 5% vs 10% vs 13%, respectively, P<.001). The 5-year cumulative incidence rates of LR over the 4 time periods were 2.8% vs 1.7% vs 0.9% vs 0.8%, respectively (Gray's test, P<.001). On competing risk multivariable analysis, year of diagnosis was significantly associated with decreased LR (hazard ratio, 0.92 per year, P=.0003). Relative to grade 1 histology, grades 2, 3, and unknown were associated with increased LR. Hormone therapy use was associated with reduced LR. Conclusion: Significant changes in the multimodality management of stage I breast cancer have occurred over the past 2 decades. More favorable-risk tumors were diagnosed, and margin clearance and systemic therapy use

  15. A real time 155 GHz millimeter wave interferometer module for electron density measurement in large plasma devices

    International Nuclear Information System (INIS)

    Huettemann, P.W.; Waidmann, G.

    1982-09-01

    A homodyne, real-time 155 GHz interferometer channel is described which is one module of a multichannel system for use on the TEXTOR tokamak. A standing sine wave is generated in a phase bridge by transmitting a frequency-modulated millimeter wave down two unequal interferometer branches. The presence of plasma produces a phase slip of the sine wave with respect to a reference signal. The phase shift is linearly proportional to the plasma density for expected TEXTOR plasmas. Long plasma paths give multiradian phase shifts, which are recorded by a digital fringe counting system. The accuracy of the phase measurement is ΔΦ = 2π/16. Phase changes of 7π/8 are accepted per modulation period. The microwave in the measurement branch of the interferometer is transmitted using a quasioptical technique. Components and technical details are described. The interferometer was tested in a simulation set-up and in two different plasma experiments. Experimental results are presented. (orig.)
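
    The linear proportionality mentioned above reflects the standard interferometry relation for a probing frequency well above the plasma frequency (generic form, not a formula quoted from the report), with r_e the classical electron radius and λ the probing wavelength:

    \[
      \Delta\varphi \;=\; r_e\,\lambda \int n_e \,\mathrm{d}l .
    \]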

  16. PANDORA, a large volume low-energy neutron detector with real-time neutron-gamma discrimination

    Science.gov (United States)

    Stuhl, L.; Sasano, M.; Yako, K.; Yasuda, J.; Baba, H.; Ota, S.; Uesaka, T.

    2017-09-01

    The PANDORA (Particle Analyzer Neutron Detector Of Real-time Acquisition) system, which was developed for use in inverse kinematics experiments with unstable isotope beams, is a neutron detector based on a plastic scintillator coupled to a digital readout. PANDORA can be used for any reaction study involving the emission of low-energy neutrons (100 keV–10 MeV) where background suppression and an increased signal-to-noise ratio are crucial. The digital readout system provides an opportunity for pulse shape discrimination (PSD) of the detected particles as well as intelligent triggering based on PSD. The figure of merit results of PANDORA are compared to the data in the literature. Using PANDORA, 91 ± 1% of all detected neutrons can be separated, while 91 ± 1% of the detected gamma rays can be excluded, reducing the gamma ray background by one order of magnitude.
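
    A minimal sketch of digital charge-comparison pulse-shape discrimination of the kind enabled by such a digital readout is shown below; the baseline and gate lengths are hypothetical, and this is not PANDORA's actual firmware.

    import numpy as np

    def psd_ratio(waveform, baseline_samples=20, gate_total=200, gate_tail_start=30):
        """Charge-comparison PSD for one digitized pulse.

        Returns the tail-to-total integral ratio; neutron-induced pulses in
        PSD-capable scintillators typically show a larger ratio than gamma
        rays, so a simple cut in the ratio-vs-energy plane separates the two
        populations.
        """
        baseline = waveform[:baseline_samples].mean()
        pulse = waveform - baseline
        peak = int(np.argmax(pulse))
        total = pulse[peak:peak + gate_total].sum()
        tail = pulse[peak + gate_tail_start:peak + gate_total].sum()
        return tail / total if total > 0 else np.nan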

  17. Load time functions (LTF) for large commercial aircraft based on Riera approach and finite element analyses – a parametric study

    International Nuclear Information System (INIS)

    Iliev, A.

    2013-01-01

    Conclusions: In cases of a complex geometry of the target structure, a careful evaluation of the predefined load time function should be performed. Special attention should be paid to the different decelerations of airplane parts and their equivalent load forces. In cases of cylindrical structures with a relatively small diameter (in comparison to the airplane wingspan), the impact of the engines should be investigated separately. When auxiliary structures surround the reactor containment, the impact load will be reduced due to the initial destruction of part of the airplane in the surrounding auxiliary structures. For the case study, this reduction was found to be non-significant. However, if important equipment is situated in surrounding auxiliary buildings, the engines may produce higher equivalent forces compared to a normal planar target structure
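
    For reference, the Riera approach mentioned in the title expresses the impact force from the static crushing strength P_c(x) and the mass per unit length μ(x) at the current crushed length x(t), with v(t) the velocity of the remaining rigid part of the aircraft (classical form; a coefficient of order one is sometimes applied to the inertial term):

    \[
      F(t) \;=\; P_c\bigl(x(t)\bigr) + \mu\bigl(x(t)\bigr)\,v(t)^{2}.
    \]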

  18. Ontological Proofs of Existence and Non-Existence

    Czech Academy of Sciences Publication Activity Database

    Hájek, Petr

    2008-01-01

    Roč. 90, č. 2 (2008), s. 257-262 ISSN 0039-3215 R&D Projects: GA AV ČR IAA100300503 Institutional research plan: CEZ:AV0Z10300504 Keywords : ontological proofs * existence * non-existence * Gödel * Caramuel Subject RIV: BA - General Mathematics

  19. Existence theory in optimal control

    International Nuclear Information System (INIS)

    Olech, C.

    1976-01-01

    This paper treats the existence problem in two main cases. One case is that of linear systems, where existence is based on the closedness or compactness of the reachable set; the other, non-linear case refers to a situation where the closedness of the set of admissible solutions is needed for the existence of optimal solutions. Some results from convex analysis are included in the paper. (author)

  20. Storage in alluvial deposits controls the timing of particle delivery from large watersheds, filtering upland erosional signals and delaying benefits from watershed best management practices

    Science.gov (United States)

    Pizzuto, J. E.; Skalak, K.; Karwan, D. L.

    2017-12-01

    Transport of suspended sediment and sediment-borne constituents (here termed fluvial particles) through large river systems can be significantly influenced by episodic storage in floodplains and other alluvial deposits. Geomorphologists quantify the importance of storage using sediment budgets, but these data alone are insufficient to determine how storage influences the routing of fluvial particles through river corridors across large spatial scales. For steady state systems, models that combine sediment budget data with "waiting time distributions" (to define how long deposited particles remain stored until being remobilized) and velocities during transport events can provide useful predictions. Limited field data suggest that waiting time distributions are well represented by power laws extending up to 10⁴ years, while the probability of storage defined by sediment budgets varies from 0.1 km⁻¹ for small drainage basins to 0.001 km⁻¹ for the world's largest watersheds. Timescales of particle delivery from large watersheds are determined by storage rather than by transport processes, with most particles requiring 10²–10⁴ years to reach the basin outlet. These predictions suggest that erosional "signals" induced by climate change, tectonics, or anthropogenic activity will be transformed by storage before delivery to the outlets of large watersheds. In particular, best management practices (BMPs) implemented in upland source areas, designed to reduce the loading of fluvial particles to estuarine receiving waters, will not achieve their intended benefits for centuries (or longer). For transient systems, waiting time distributions cannot be constant, but will vary as portions of transient sediment "pulses" enter and are later released from storage. The delivery of sediment pulses under transient conditions can be predicted by adopting the hypothesis that the probability of erosion of stored particles will decrease with increasing "age" (where age is defined as the

  1. Assessing Methods for Mapping 2D Field Concentrations of CO2 Over Large Spatial Areas for Monitoring Time Varying Fluctuations

    Science.gov (United States)

    Zaccheo, T. S.; Pernini, T.; Botos, C.; Dobler, J. T.; Blume, N.; Braun, M.; Levine, Z. H.; Pintar, A. L.

    2014-12-01

    This work presents a methodology for constructing 2D estimates of CO2 field concentrations from integrated open path measurements of CO2 concentrations. It provides a description of the methodology, an assessment based on simulated data and results from preliminary field trials. The Greenhouse gas Laser Imaging Tomography Experiment (GreenLITE) system, currently under development by Exelis and AER, consists of a set of laser-based transceivers and a number of retro-reflectors coupled with a cloud-based compute environment to enable real-time monitoring of integrated CO2 path concentrations, and provides 2D maps of estimated concentrations over an extended area of interest. The GreenLITE transceiver-reflector pairs provide laser absorption spectroscopy (LAS) measurements of differential absorption due to CO2 along intersecting chords within the field of interest. These differential absorption values for the intersecting chords of horizontal path are not only used to construct estimated values of integrated concentration, but also employed in an optimal estimation technique to derive 2D maps of underlying concentration fields. This optimal estimation technique combines these sparse data with in situ measurements of wind speed/direction and an analytic plume model to provide tomographic-like reconstruction of the field of interest. This work provides an assessment of this reconstruction method and preliminary results from the Fall 2014 testing at the Zero Emissions Research and Technology (ZERT) site in Bozeman, Montana. This work is funded in part under the GreenLITE program developed under a cooperative agreement between Exelis and the National Energy and Technology Laboratory (NETL) under the Department of Energy (DOE), contract # DE-FE0012574. Atmospheric and Environmental Research, Inc. is a major partner in this development.
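
    As a much-simplified sketch of how a 2D field can be recovered from sparse chord-integrated measurements (a generic regularized least-squares fit, not the GreenLITE estimation scheme, and with hypothetical inputs):

    import numpy as np

    def reconstruct_field(chords, values, grid_shape, reg=1e-3):
        """Least-squares recovery of a 2-D field from chord-averaged measurements.

        chords: list of boolean masks (grid_shape) marking cells crossed by each
        transceiver-reflector path; values: the measured path-averaged
        concentration for each chord.  Tikhonov regularization (`reg`) keeps the
        under-determined problem well posed.
        """
        a = np.array([m.ravel() / m.sum() for m in chords])   # path-averaging rows
        n = a.shape[1]
        lhs = a.T @ a + reg * np.eye(n)
        rhs = a.T @ np.asarray(values)
        return np.linalg.solve(lhs, rhs).reshape(grid_shape)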

  2. Emerging adults' use of alcohol and social networking sites during a large street festival: A real-time interview study.

    Science.gov (United States)

    Whitehill, Jennifer M; Pumper, Megan A; Moreno, Megan A

    2015-05-20

    Emerging adults have high rates of heavy episodic drinking (binge drinking) and related risks including alcohol-impaired driving. To understand whether social networking sites (SNSs) used on mobile devices represent a viable platform for real-time interventions, this study measured emerging adults' use of two popular SNSs (Facebook and Twitter) during the Mifflin Street Block Party. This annual festival is held in Madison, Wisconsin and is known for high alcohol consumption. Event attendees ages 18-23 years were recruited by young adult research assistants (>21 years). Participants completed a brief in-person interview assessing drinking intensity, use of SNSs, and use of SNSs to plan transportation. Analyses included t-tests, chi-squared tests, and Fisher's exact tests. At the event, nearly all of the 200 participants (97 %) consumed alcohol and 18 % met criteria for heavy episodic drinking. Approximately one-third of participants had used Facebook or Twitter on the day of the event. Facebook use (23 %) was more prevalent than Twitter use (18 %), especially among heavy episodic drinkers. Use of either SNS was 41 % among females and 24 % among males (χ² = 6.01; df = 1; p = 0.01). Plans to use a SNS to arrange transportation were relatively uncommon (4 %), but this was more frequent among heavy episodic drinkers (11 %) compared to non-heavy episodic drinkers (2 %) (Fisher's exact p=0.02). These results indicate that SNSs are used during alcohol consumption and warrant exploration as a way to facilitate connections to resources like safe ride services.

  3. Large-scale heterogeneity of Amazonian phenology revealed from 26-year long AVHRR/NDVI time-series

    International Nuclear Information System (INIS)

    Silva, Fabrício B; Shimabukuro, Yosio E; Aragão, Luiz E O C; Anderson, Liana O; Pereira, Gabriel; Cardozo, Franciele; Arai, Egídio

    2013-01-01

    Depiction of phenological cycles in tropical forests is critical for an understanding of seasonal patterns in carbon and water fluxes as well as the responses of vegetation to climate variations. However, the detection of clear spatially explicit phenological patterns across Amazonia has proven difficult using data from the Moderate Resolution Imaging Spectroradiometer (MODIS). In this work, we propose an alternative approach based on a 26-year time-series of the normalized difference vegetation index (NDVI) from the Advanced Very High Resolution Radiometer (AVHRR) to identify regions with homogeneous phenological cycles in Amazonia. Specifically, we aim to use a pattern recognition technique, based on temporal signal processing concepts, to map Amazonian phenoregions and to compare the identified patterns with field-derived information. Our automated method recognized 26 phenoregions with unique intra-annual seasonality. This result highlights the fact that known vegetation types in Amazonia are not only structurally different but also phenologically distinct. Flushing of new leaves observed in the field is, in most cases, associated to a continuous increase in NDVI. The peak in leaf production is normally observed from the beginning to the middle of the wet season in 66% of the field sites analyzed. The phenoregion map presented in this work gives a new perspective on the dynamics of Amazonian canopies. It is clear that the phenology across Amazonia is more variable than previously detected using remote sensing data. An understanding of the implications of this spatial heterogeneity on the seasonality of Amazonian forest processes is a crucial step towards accurately quantifying the role of tropical forests within global biogeochemical cycles. (letter)
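
    For reference, the index underlying the 26-year time series is the standard normalized difference of near-infrared and red reflectances:

    \[
      \mathrm{NDVI} \;=\; \frac{\rho_{\mathrm{NIR}} - \rho_{\mathrm{red}}}
                               {\rho_{\mathrm{NIR}} + \rho_{\mathrm{red}}}.
    \]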

  4. Effect of real-time boundary wind conditions on the air flow and pollutant dispersion in an urban street canyon—Large eddy simulations

    Science.gov (United States)

    Zhang, Yun-Wei; Gu, Zhao-Lin; Cheng, Yan; Lee, Shun-Cheng

    2011-07-01

    Air flow and pollutant dispersion characteristics in an urban street canyon are studied under the real-time boundary conditions. A new scheme for realizing real-time boundary conditions in simulations is proposed, to keep the upper boundary wind conditions consistent with the measured time series of wind data. The air flow structure and its evolution under real-time boundary wind conditions are simulated by using this new scheme. The induced effect of time series of ambient wind conditions on the flow structures inside and above the street canyon is investigated. The flow shows an obvious intermittent feature in the street canyon and the flapping of the shear layer forms near the roof layer under real-time wind conditions, resulting in the expansion or compression of the air mass in the canyon. The simulations of pollutant dispersion show that the pollutants inside and above the street canyon are transported by different dispersion mechanisms, relying on the time series of air flow structures. Large scale air movements in the processes of the air mass expansion or compression in the canyon exhibit obvious effects on pollutant dispersion. The simulations of pollutant dispersion also show that the transport of pollutants from the canyon to the upper air flow is dominated by the shear layer turbulence near the roof level and the expansion or compression of the air mass in street canyon under real-time boundary wind conditions. Especially, the expansion of the air mass, which features the large scale air movement of the air mass, makes more contribution to the pollutant dispersion in this study. Comparisons of simulated results under different boundary wind conditions indicate that real-time boundary wind conditions produces better condition for pollutant dispersion than the artificially-designed steady boundary wind conditions.

  5. Cone-Beam Computed Tomography–Guided Positioning of Laryngeal Cancer Patients with Large Interfraction Time Trends in Setup and Nonrigid Anatomy Variations

    International Nuclear Information System (INIS)

    Gangsaas, Anne; Astreinidou, Eleftheria; Quint, Sandra; Levendag, Peter C.; Heijmen, Ben

    2013-01-01

    Purpose: To investigate interfraction setup variations of the primary tumor, elective nodes, and vertebrae in laryngeal cancer patients and to validate protocols for cone beam computed tomography (CBCT)-guided correction. Methods and Materials: For 30 patients, CBCT-measured displacements in fractionated treatments were used to investigate population setup errors and to simulate residual setup errors for the no action level (NAL) offline protocol, the extended NAL (eNAL) protocol, and daily CBCT acquisition with online analysis and repositioning. Results: Without corrections, 12 of 26 patients treated with radical radiation therapy would have experienced a gradual change (time trend) in primary tumor setup ≥4 mm in the craniocaudal (CC) direction during the fractionated treatment (11/12 in caudal direction, maximum 11 mm). Due to these trends, correction of primary tumor displacements with NAL resulted in large residual CC errors (required margin 6.7 mm). With the weekly correction vector adjustments in eNAL, the trends could be largely compensated (CC margin 3.5 mm). Correlation between movements of the primary and nodal clinical target volumes (CTVs) in the CC direction was poor (r² = 0.15). Therefore, even with online setup corrections of the primary CTV, the required CC margin for the nodal CTV was as large as 6.8 mm. Also for the vertebrae, large time trends were observed for some patients. Because of poor CC correlation (r² = 0.19) between displacements of the primary CTV and the vertebrae, even with daily online repositioning of the vertebrae, the required CC margin around the primary CTV was 6.9 mm. Conclusions: Laryngeal cancer patients showed substantial interfraction setup variations, including large time trends, and poor CC correlation between primary tumor displacements and motion of the nodes and vertebrae (internal tumor motion). These trends and nonrigid anatomy variations have to be considered in the choice of setup verification protocol and

  6. Interference Cancellation Using Replica Signal for HTRCI-MIMO/OFDM in Time-Variant Large Delay Spread Longer Than Guard Interval

    Directory of Open Access Journals (Sweden)

    Yuta Ida

    2012-01-01

    Full Text Available Orthogonal frequency division multiplexing (OFDM) and multiple-input multiple-output (MIMO) are generally known as effective techniques for high data rate services. In MIMO/OFDM systems, channel estimation (CE) is very important to obtain accurate channel state information (CSI). However, since orthogonal pilot-based CE requires a large number of pilot symbols, the total transmission rate is degraded. To mitigate this problem, a high time resolution carrier interferometry (HTRCI) for MIMO/OFDM has been proposed. In wireless communication systems, if the maximum delay spread is longer than the guard interval (GI), the system performance is significantly degraded due to intersymbol interference (ISI) and intercarrier interference (ICI). However, the conventional HTRCI-MIMO/OFDM does not consider the case with a time-variant large delay spread longer than the GI. In this paper, we propose ISI and ICI compensation methods for HTRCI-MIMO/OFDM in the time-variant large delay spread longer than the GI.

  7. Ecogrid EU - a large scale smart grids demonstration of real time market-based integration of numerous small DER and DR

    DEFF Research Database (Denmark)

    Ding, Yi; Nyeng, Preben; Ostergaard, Jacob

    2012-01-01

    This paper provides an overview of the Ecogrid EU project, which is a large-scale demonstration project on the Danish island of Bornholm. It provides Europe a fast-track evolution towards smart grid dissemination and deployment in the distribution network. The objective of Ecogrid EU is to illustrate that modern information and communication technology (ICT) and innovative market solutions can enable the operation of a distribution power system with more than 50% renewable energy sources (RES). This will be a major contribution to the European 20-20-20 goals. Furthermore, the proposed Ecogrid EU market will offer the transmission system operator (TSO) additional balancing resources and ancillary services by facilitating the participation of small-scale distributed energy resources (DERs) and small end-consumers in the existing electricity markets. The majority of the 2000 participating residential

  8. Large, but not small, antigens require time- and temperature-dependent processing in accessory cells before they can be recognized by T cells

    DEFF Research Database (Denmark)

    Buus, S; Werdelin, O

    1986-01-01

    We have studied whether antigens of different size and structure all require processing in antigen-presenting cells of guinea-pigs before they can be recognized by T cells. The method of mild paraformaldehyde fixation was used to stop antigen processing in the antigen-presenting cells. As a measure of antigen presentation we used the proliferative response of appropriately primed T cells during a co-culture with the paraformaldehyde-fixed and antigen-exposed presenting cells. We demonstrate that the large synthetic polypeptide antigen, dinitrophenyl-poly-L-lysine, requires processing. After an initial phase, processing is time- and temperature-dependent and consequently energy-requiring. Processing is strongly inhibited by the lysosomotrophic drug, chloroquine, suggesting a lysosomal involvement in antigen processing. The existence of a minor, non-lysosomal pathway is suggested, since small amounts of antigen were processed even at 10 degrees C, at which...

  9. Increased fluoroquinolone resistance with time in Escherichia coli from >17,000 patients at a large county hospital as a function of culture site, age, sex, and location

    Directory of Open Access Journals (Sweden)

    Hamill Richard J

    2008-01-01

    Full Text Available Abstract Background Escherichia coli infections are common and often treated with fluoroquinolones. Fluoroquinolone resistance is of worldwide importance and is monitored by national and international surveillance networks. In this study, we analyzed the effects of time, culture site, and patient age, sex, and location on fluoroquinolone resistance in E. coli clinical isolates. Methods To understand how patient factors and time influenced fluoroquinolone resistance and to determine how well data from surveillance networks predict trends at Ben Taub General Hospital in Houston, TX, we used Perl to parse and MySQL to house data from antibiograms (n ≅ 21,000) for E. coli isolated between 1999 and 2004, analyzed using Chi Square, Bonferroni, and Multiple Linear Regression methods. Results Fluoroquinolone resistance (i) increased with time; (ii) exceeded national averages by 2- to 4-fold; (iii) was higher in males than females, largely because of urinary isolates from male outpatients; (iv) increased with patient age; (v) was 3% in pediatric patients; (vi) was higher in hospitalized patients than outpatients; (vii) was higher in sputum samples, particularly from inpatients, than all other culture sites, including blood and urine, regardless of patient location; and (viii) was lower in genital isolates than in all other culture sites. Additionally, the data suggest that, with regard to susceptibility or resistance by the Dade Behring MicroScan system, a single fluoroquinolone suffices as a "surrogate marker" for all of the fluoroquinolones tested. Conclusion Large surveillance programs often did not predict E. coli fluoroquinolone resistance trends at a large, urban hospital with a largely indigent, ethnically diverse patient population or its affiliated community clinics.
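
    The workflow described (parsing antibiograms into a database and testing resistance against time with chi-square statistics) can be illustrated with a short, hypothetical sketch; the counts and years below are invented, and the original analysis used Perl and MySQL rather than Python.

```python
# Hypothetical illustration of the kind of chi-square analysis described:
# counts below are invented, not the hospital's data.
from scipy.stats import chi2_contingency

# rows = years, columns = (resistant, susceptible) E. coli isolate counts
table = [
    [120, 2880],   # 1999 (hypothetical)
    [180, 2820],   # 2001
    [260, 2740],   # 2004
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.1f}, dof={dof}, p={p:.2g}")  # small p -> resistance changed over time
```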

  10. Large deviations

    CERN Document Server

    Varadhan, S R S

    2016-01-01

    The theory of large deviations deals with rates at which probabilities of certain events decay as a natural parameter in the problem varies. This book, which is based on a graduate course on large deviations at the Courant Institute, focuses on three concrete sets of examples: (i) diffusions with small noise and the exit problem, (ii) large time behavior of Markov processes and their connection to the Feynman-Kac formula and the related large deviation behavior of the number of distinct sites visited by a random walk, and (iii) interacting particle systems, their scaling limits, and large deviations from their expected limits. For the most part the examples are worked out in detail, and in the process the subject of large deviations is developed. The book will give the reader a flavor of how large deviation theory can help in problems that are not posed directly in terms of large deviations. The reader is assumed to have some familiarity with probability, Markov processes, and interacting particle systems.

  11. Long-time and large-distance asymptotic behavior of the current-current correlators in the non-linear Schroedinger model

    Energy Technology Data Exchange (ETDEWEB)

    Kozlowski, K.K. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Terras, V. [CNRS, ENS Lyon (France). Lab. de Physique

    2010-12-15

    We present a new method allowing us to derive the long-time and large-distance asymptotic behavior of the correlation functions of quantum integrable models from their exact representations. Starting from the form factor expansion of the correlation functions in finite volume, we explain how to reduce the complexity of the computation in the so-called interacting integrable models to the one appearing in free fermion equivalent models. We apply our method to the time-dependent zero-temperature current-current correlation function in the non-linear Schroedinger model and compute the first few terms in its asymptotic expansion. Our result goes beyond the conformal field theory based predictions: in the time-dependent case, other types of excitations than the ones on the Fermi surface contribute to the leading orders of the asymptotics. (orig.)

  12. A massively parallel algorithm for the solution of constrained equations of motion with applications to large-scale, long-time molecular dynamics simulations

    Energy Technology Data Exchange (ETDEWEB)

    Fijany, A. [Jet Propulsion Lab., Pasadena, CA (United States); Coley, T.R. [Virtual Chemistry, Inc., San Diego, CA (United States); Cagin, T.; Goddard, W.A. III [California Institute of Technology, Pasadena, CA (United States)

    1997-12-31

    Successful molecular dynamics (MD) simulation of large systems (> million atoms) for long times (> nanoseconds) requires the integration of constrained equations of motion (CEOM). Constraints are used to eliminate high frequency degrees of freedom (DOF) and to allow the use of rigid bodies. Solving the CEOM allows for larger integration time-steps and helps focus the simulation on the important collective dynamics of chemical, biological, and materials systems. We explore advances in multibody dynamics which have resulted in O(N) algorithms for propagating the CEOM. However, because of their strictly sequential nature, the computational time required by these algorithms does not scale down with increased numbers of processors. We then present the new constraint force algorithm for solving the CEOM and show that this algorithm is fully parallelizable, leading to a computational cost of O(N/P + log P) for N DOF on P processors.
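
    For readers unfamiliar with constrained MD, the sketch below shows a generic SHAKE-style iteration that enforces a single fixed bond length after an unconstrained position update. It is only a minimal, strictly sequential illustration of what solving the CEOM means for one constraint; it is not the parallel constraint force algorithm presented in the abstract, and all coordinates are invented.

```python
import numpy as np

# Minimal SHAKE-style bond-length constraint for two particles -- a generic,
# strictly sequential illustration, not the parallel constraint force
# algorithm described in the abstract.
def shake_pair(r_new, r_old, bond_len, masses, tol=1e-10, max_iter=50):
    """Iteratively correct r_new so that |r_new[0] - r_new[1]| == bond_len."""
    r = r_new.copy()
    inv_m = 1.0 / masses
    for _ in range(max_iter):
        d = r[0] - r[1]
        diff = d @ d - bond_len**2
        if abs(diff) < tol:
            break
        d_old = r_old[0] - r_old[1]
        g = diff / (2.0 * (inv_m[0] + inv_m[1]) * (d @ d_old))  # Lagrange multiplier
        r[0] -= g * inv_m[0] * d_old
        r[1] += g * inv_m[1] * d_old
    return r

r_old = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])   # positions satisfying the constraint
r_new = np.array([[0.05, 0.02, 0.0], [1.10, -0.01, 0.0]])  # after an unconstrained step
r = shake_pair(r_new, r_old, 1.0, np.array([1.0, 1.0]))
print(np.linalg.norm(r[0] - r[1]))  # ~1.0: bond length restored
```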

  13. Long-time and large-distance asymptotic behavior of the current-current correlators in the non-linear Schroedinger model

    International Nuclear Information System (INIS)

    Kozlowski, K.K.; Terras, V.

    2010-12-01

    We present a new method allowing us to derive the long-time and large-distance asymptotic behavior of the correlation functions of quantum integrable models from their exact representations. Starting from the form factor expansion of the correlation functions in finite volume, we explain how to reduce the complexity of the computation in the so-called interacting integrable models to the one appearing in free fermion equivalent models. We apply our method to the time-dependent zero-temperature current-current correlation function in the non-linear Schroedinger model and compute the first few terms in its asymptotic expansion. Our result goes beyond the conformal field theory based predictions: in the time-dependent case, other types of excitations than the ones on the Fermi surface contribute to the leading orders of the asymptotics. (orig.)

  14. Study on large scale knowledge base with real time operation for autonomous nuclear power plant. 1. Basic concept and expecting performance

    International Nuclear Information System (INIS)

    Ozaki, Yoshihiko; Suda, Kazunori; Yoshikawa, Shinji; Ozawa, Kenji

    1996-04-01

    Since it is desirable to enhance the availability and safety of nuclear power plant operation and maintenance by reducing human factors, there has been much research and development on intelligent operation and diagnosis using artificial intelligence (AI) techniques. We have been developing an autonomous operation and maintenance system for nuclear power plants by substituting AI systems and intelligent robots for human operators. An autonomous nuclear power plant requires the use of diverse, large-scale knowledge related to plant design, operation, and maintenance, that is, whole-life-cycle data of the plant. This knowledge must be given to the AI systems or intelligent robots adequately and at the right moments. Moreover, real-time operation using the large-scale knowledge base must be ensured for plant control and diagnosis. We have been studying a large-scale, real-time knowledge base system for the autonomous plant. In this report, we present the basic concept and expected performance of the knowledge base for the autonomous plant, especially the autonomous control and diagnosis system. (author)

  15. The LOFT perspective on neutron star thermonuclear bursts: White paper in support of the mission concept of the large observatory for X-ray timing

    Energy Technology Data Exchange (ETDEWEB)

    in' t Zand, J. J.M. [SRON Netherlands Institute for Space Research, Utrecht (The Netherlands); Malone, Christopher M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Altamirano, D. [Univ. of Southampton, Southampton (United Kingdom); Ballantyne, D. R. [Georgia Inst. of Technology, Atlanta, GA (United States); Bhattacharyya, S. [Tata Institute of Fundamental Research, Mumbai (India); Brown, E. F. [Michigan State Univ., East Lansing, MI (United States); Cavecchi, Y. [Univ. of Amsterdam, Amsterdam (The Netherlands); Chakrabarty, D. [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Chenevez, J. [Technical Univ. of Denmark, Lyngby (Denmark); Cumming, A. [McGill Univ., Montreal, QC (Canada); Degenaar, N. [Univ. of Cambridge, Cambridge (United Kingdom); Falanga, M. [International Space Science Institute, Bern (Switzerland); Galloway, D. K. [Monash Univ., VIC (Australia); Heger, A. [Monash Univ., VIC (Australia); Jose, J. [Univ. Politecnica de Catalunya, Barcelona (Spain); Institut d' Estudis Espacials de Catalunya, Barcelona (Spain); Keek, L. [Georgia Institute of Technology, Atlanta, GA (United States); Linares, M. [Univ. de La Laguna, Tenerife (Spain); Mahmoodifar, S. [Univ. of Maryland, College Park, MD (United States); Mendez, M. [Univ. of Groningen, Groningen (The Netherlands); Miller, M. C. [Univ. of Maryland, College Park, MD (United States); Paerels, F. B. S. [Columbia Astrophysics Lab., New York, NY (United States); Poutanen, J. [Univ. of Turku, Piikkio (Finland); Rozanska, A. [N. Copernicus Astronomical Center PAS, Warsaw (Poland); Schatz, H. [National Superconducting Cyclotron Laboratory at Michigan State University; Serino, M. [Institute of Physical and Chemical Research (RIKEN); Strohmayer, T. E. [NASA' s Goddard Space Flight Center, Greenbelt, MD (United States); Suleimanov, V. F. [Univ. Tubingen, Tubingen (Germany); Thielemann, F. -K. [Univ. Basel, Basel (Switzerland); Watts, A. L. [Univ. of Amsterdam, Amsterdam (The Netherlands); Weinberg, N. N. [Massachusetts Institute of Technology, Cambridge, MA (United States); Woosley, S. E. [Univ. of California, Santa Cruz, CA (United States); Yu, W. [Chinese Academy of Sciences (CAS), Shanghai (China); Zhang, S. [Institute of High-Energy Physics, Beijing (China); Zingale, M. [Stony Brook Univ., Stony Brook, NY (United States)

    2015-01-14

    The Large Area Detector (LAD) on the Large Observatory For X-ray Timing (LOFT), with an 8.5 m² photon-collecting area in the 2–30 keV bandpass at CCD-class spectral resolving power (λ/Δλ = 10–100), is designed for optimum performance on bright X-ray sources. Thus, it is well-suited to study thermonuclear X-ray bursts from Galactic neutron stars. These bursts will typically yield 2 × 10⁵ photon detections per second in the LAD, which is at least 15 times more than with any other instrument past, current or anticipated. The Wide Field Monitor (WFM) foreseen for LOFT uniquely combines 2–50 keV imaging with large (30%) prompt sky coverage. This will enable the detection of tens of thousands of thermonuclear X-ray bursts during a 3-yr mission, including tens of superbursts. Both numbers are comparable to or larger than the current database gathered in 50 years of X-ray astronomy.

  16. Overview of Existing Wind Energy Ordinances

    Energy Technology Data Exchange (ETDEWEB)

    Oteri, F.

    2008-12-01

    Due to increased energy demand in the United States, rural communities with limited or no experience with wind energy now have the opportunity to become involved in this industry. Communities with good wind resources may be approached by entities with plans to develop the resource. Although these opportunities can create new revenue in the form of construction jobs and land lease payments, they also create a new responsibility on the part of local governments to ensure that ordinances will be established to aid the development of safe facilities that will be embraced by the community. The purpose of this report is to educate and engage state and local governments, as well as policymakers, about existing large wind energy ordinances. These groups will have a collection of examples to utilize when they attempt to draft a new large wind energy ordinance in a town or county without existing ordinances.

  17. Energy Efficiency in the North American Existing Building Stock

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2007-07-01

    This report presents the findings of a new assessment of the techno-economic and policy-related efficiency improvement potential in the North American building stock conducted as part of a wider appraisal of existing buildings in member states of the International Energy Agency. It summarizes results and provides insights into the lessons learned through a broader global review of best practice to improve the energy efficiency of existing buildings. At this time, the report is limited to the USA because of the large size of its buildings market. At a later date, a more complete review may include some details about policies and programs in Canada. If resources are available an additional comprehensive review of Canada and Mexico may be performed in the future.

  18. U-Pb dating of large zircons in low-temperature jadeitite from the Osayama serpentinite melange, southwest Japan: insights into the timing of serpentinization

    Science.gov (United States)

    Tsujimori, T.; Liou, J.G.; Wooden, J.; Miyamoto, T.

    2005-01-01

    Crystals of zircon up to 3 mm in length occur in jadeitite veins in the Osayama serpentinite mélange, Southwest Japan. The zircon porphyroblasts show pronounced zoning, and are characterized by both low Th/U ratios (0.2-0.8) and low Th and U abundances (Th = 1-81 ppm; U = 6-149 ppm). They contain inclusions of high-pressure minerals, including jadeite and rutile; such an occurrence indicates that the zircon crystallized during subduction-zone metamorphism. Phase equilibria and the existing fluid-inclusion data constrain P-T conditions to P > 1.2 GPa at T > 350°C for formation of the jadeitite. Most U/Pb ages obtained by SHRIMP-RG are concordant, with a weighted mean ²⁰⁶Pb/²³⁸U age of 472 ± 8.5 Ma (MSWD = 2.7, n = 25). Because zircon porphyroblasts contain inclusions of high-pressure minerals, the SHRIMP U-Pb age represents the timing of jadeitite formation, i.e., the timing of interaction between alkaline fluid and ultramafic rocks in a subduction zone. Although this dating does not provide a direct time constraint for serpentinization, U-Pb ages of zircon in jadeitite associated with serpentinite result in new insights into the timing of fluid-rock interaction of ultramafic rocks at a subduction zone and the minimum age for serpentinization.

  19. Preoperative Embolization Reduces the Risk of Catecholamines Release at the Time of Surgical Excision of Large Pelvic Extra-Adrenal Sympathetic Paraganglioma

    Directory of Open Access Journals (Sweden)

    Nicola Di Daniele

    2012-01-01

    Full Text Available A 30-year-old woman with severe hypertension was admitted to the hospital with a history of headache, palpitations, and diaphoresis following sexual intercourse. Twenty-four hour urinary excretion of free catecholamines and metabolites was markedly increased, as was serum chromogranin A. A computed tomography scan revealed a large mass in the left adnexal region, and magnetic resonance imaging confirmed the computed tomography finding, suggesting the presence of an extra-adrenal sympathetic paraganglioma. An I-metaiodobenzylguanidine scintigram revealed increased uptake in the same area. Transcatheter arterial embolization of the mass resulted in marked decreases in blood pressure and urinary excretion of free catecholamines and metabolites. Surgical excision of the mass was then accomplished without complication. Preoperative embolization is a useful and safe procedure which may reduce the risk of catecholamine release at the time of surgical excision in large pelvic extra-adrenal sympathetic paraganglioma.

  20. Summary of existing uncertainty methods

    International Nuclear Information System (INIS)

    Glaeser, Horst

    2013-01-01

    A summary of existing and most used uncertainty methods is presented, and the main features are compared. One of these methods is the order statistics method based on Wilks' formula. It is applied in safety research as well as in licensing. This method was first proposed by GRS for use in deterministic safety analysis, and is now used by many organisations world-wide. Its advantage is that the number of potential uncertain input and output parameters is not limited to a small number. Such a limitation was necessary for the first demonstration of the Code Scaling Applicability Uncertainty Method (CSAU) by the United States Nuclear Regulatory Commission (USNRC). They did not apply Wilks' formula in their statistical method propagating input uncertainties to obtain the uncertainty of a single output variable, like peak cladding temperature. A Phenomena Identification and Ranking Table (PIRT) was set up in order to limit the number of uncertain input parameters, and consequently, the number of calculations to be performed. Another purpose of such a PIRT process is to identify the most important physical phenomena which a computer code should be suitable to calculate. The validation of the code should be focused on the identified phenomena. In some applications, response surfaces are used in place of the computer code to perform the large number of calculations. The second well known uncertainty method is the Uncertainty Methodology Based on Accuracy Extrapolation (UMAE) and the follow-up method 'Code with the Capability of Internal Assessment of Uncertainty' (CIAU) developed by the University of Pisa. Unlike the statistical approaches, the CIAU compares experimental data with calculation results. It does not consider uncertain input parameters. Therefore, the CIAU is highly dependent on the experimental database. The accuracy gained from the comparison between experimental data and calculated results is extrapolated to obtain the uncertainty of the system code predictions.
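
    Wilks' formula itself is compact enough to state: for a one-sided tolerance limit, the largest of N random code runs bounds the γ-quantile of the output with confidence β whenever 1 - γ^N ≥ β, which gives the well-known 59 runs for a 95%/95% statement at first order. A minimal sketch:

```python
# Wilks' formula (first order, one-sided): the maximum of N random runs bounds
# the gamma-quantile of the output with confidence beta when 1 - gamma**N >= beta.
import math

def wilks_sample_size(gamma=0.95, beta=0.95):
    """Smallest N with 1 - gamma**N >= beta (first-order, one-sided)."""
    return math.ceil(math.log(1.0 - beta) / math.log(gamma))

print(wilks_sample_size())            # 59 runs for a 95%/95% statement
print(wilks_sample_size(0.95, 0.99))  # 90 runs for 95% content at 99% confidence
```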

  1. Mapping Two-Dimensional Deformation Field Time-Series of Large Slope by Coupling DInSAR-SBAS with MAI-SBAS

    Directory of Open Access Journals (Sweden)

    Liming He

    2015-09-01

    Full Text Available Mapping deformation field time-series, including vertical and horizontal motions, is vital for landslide monitoring and slope safety assessment. However, the conventional differential synthetic aperture radar interferometry (DInSAR) technique can only detect the displacement component in the satellite-to-ground direction, i.e., line-of-sight (LOS) direction displacement. To overcome this constraint, a new method was developed to obtain the displacement field time series of a slope by coupling the DInSAR based small baseline subset approach (DInSAR-SBAS) with the multiple-aperture InSAR (MAI) based small baseline subset approach (MAI-SBAS). This novel method has been applied to a set of 11 observations from the phased array type L-band synthetic aperture radar (PALSAR) sensor onboard the advanced land observing satellite (ALOS), spanning from 2007 to 2011, of two large-scale north–south slopes of the largest Asian open-pit mine in the Northeast of China. The retrieved displacement time series showed that the proposed method can detect and measure the large displacements that occurred along the north–south direction, and the gradually changing two-dimensional displacement fields. Moreover, we verified this new method by comparing the displacement results to global positioning system (GPS) measurements.

  2. Statistical searches for microlensing events in large, non-uniformly sampled time-domain surveys: A test using palomar transient factory data

    Energy Technology Data Exchange (ETDEWEB)

    Price-Whelan, Adrian M.; Agüeros, Marcel A. [Department of Astronomy, Columbia University, 550 W 120th Street, New York, NY 10027 (United States); Fournier, Amanda P. [Department of Physics, Broida Hall, University of California, Santa Barbara, CA 93106 (United States); Street, Rachel [Las Cumbres Observatory Global Telescope Network, Inc., 6740 Cortona Drive, Suite 102, Santa Barbara, CA 93117 (United States); Ofek, Eran O. [Benoziyo Center for Astrophysics, Weizmann Institute of Science, 76100 Rehovot (Israel); Covey, Kevin R. [Lowell Observatory, 1400 West Mars Hill Road, Flagstaff, AZ 86001 (United States); Levitan, David; Sesar, Branimir [Division of Physics, Mathematics, and Astronomy, California Institute of Technology, Pasadena, CA 91125 (United States); Laher, Russ R.; Surace, Jason, E-mail: adrn@astro.columbia.edu [Spitzer Science Center, California Institute of Technology, Mail Stop 314-6, Pasadena, CA 91125 (United States)

    2014-01-20

    Many photometric time-domain surveys are driven by specific goals, such as searches for supernovae or transiting exoplanets, which set the cadence with which fields are re-imaged. In the case of the Palomar Transient Factory (PTF), several sub-surveys are conducted in parallel, leading to non-uniform sampling over its ∼20,000 deg² footprint. While the median 7.26 deg² PTF field has been imaged ∼40 times in the R band, ∼2300 deg² have been observed >100 times. We use PTF data to study the trade-off between searching for microlensing events in a survey whose footprint is much larger than that of typical microlensing searches, but with far-from-optimal time sampling. To examine the probability that microlensing events can be recovered in these data, we test statistics used on uniformly sampled data to identify variables and transients. We find that the von Neumann ratio performs best for identifying simulated microlensing events in our data. We develop a selection method using this statistic and apply it to data from fields with >10 R-band observations, 1.1 × 10⁹ light curves, uncovering three candidate microlensing events. We lack simultaneous, multi-color photometry to confirm these as microlensing events. However, their number is consistent with predictions for the event rate in the PTF footprint over the survey's three years of operations, as estimated from near-field microlensing models. This work can help constrain all-sky event rate predictions and tests microlensing signal recovery in large data sets, which will be useful to future time-domain surveys, such as that planned with the Large Synoptic Survey Telescope.
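
    The von Neumann ratio mentioned above is simply the mean squared successive difference of a light curve divided by its variance; smooth, correlated variations such as a microlensing bump push it well below the value of about 2 expected for pure noise. The sketch below shows the statistic on synthetic light curves; the sampling, magnitudes, and comparison are assumptions, not the paper's selection cuts.

```python
import numpy as np

def von_neumann_ratio(mag):
    """Mean squared successive difference divided by the variance.
    Smoothly varying (e.g., microlensing-like) light curves give small values;
    uncorrelated noise gives values near 2."""
    mag = np.asarray(mag, dtype=float)
    return np.mean(np.diff(mag) ** 2) / np.var(mag)

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 100, 40))                    # non-uniform sampling epochs
noise = rng.normal(0, 0.05, t.size)
flat = 18.0 + noise                                      # constant star
bump = 18.0 - 1.2 * np.exp(-0.5 * ((t - 50) / 8) ** 2) + noise  # smooth brightening
print(von_neumann_ratio(flat), von_neumann_ratio(bump))  # noise-like ~2 vs. much smaller
```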

  3. Existing Steel Railway Bridges Evaluation

    Science.gov (United States)

    Vičan, Josef; Gocál, Jozef; Odrobiňák, Jaroslav; Koteš, Peter

    2016-12-01

    The article describes general principles and the basis of evaluation of existing railway bridges based on the concept of load-carrying capacity determination. Compared to the design of a new bridge, a modified reliability level should be considered for the evaluation of existing bridges due to the additional data related to bridge condition and behaviour obtained from regular inspections. Based on those data and respecting the bridge remaining lifetime, a modification of partial safety factors for actions and materials can be applied in the bridge evaluation process. Great attention is also paid to the specific problems of determining the load-carrying capacity of steel railway bridges in service. Recommendations for global analysis and a methodology for determining the load-carrying capacity of existing steel bridge superstructures are also described.

  4. Existing Steel Railway Bridges Evaluation

    Directory of Open Access Journals (Sweden)

    Vičan Josef

    2016-12-01

    Full Text Available The article describes general principles and the basis of evaluation of existing railway bridges based on the concept of load-carrying capacity determination. Compared to the design of a new bridge, a modified reliability level should be considered for the evaluation of existing bridges due to the additional data related to bridge condition and behaviour obtained from regular inspections. Based on those data and respecting the bridge remaining lifetime, a modification of partial safety factors for actions and materials can be applied in the bridge evaluation process. Great attention is also paid to the specific problems of determining the load-carrying capacity of steel railway bridges in service. Recommendations for global analysis and a methodology for determining the load-carrying capacity of existing steel bridge superstructures are also described.

  5. Multiple-relaxation-time lattice Boltzmann model for incompressible miscible flow with large viscosity ratio and high Péclet number

    Science.gov (United States)

    Meng, Xuhui; Guo, Zhaoli

    2015-10-01

    A lattice Boltzmann model with a multiple-relaxation-time (MRT) collision operator is proposed for incompressible miscible flow with a large viscosity ratio as well as a high Péclet number in this paper. The equilibria in the present model are motivated by the lattice kinetic scheme previously developed by Inamuro et al. [Philos. Trans. R. Soc. London, Ser. A 360, 477 (2002), 10.1098/rsta.2001.0942]. The fluid viscosity and diffusion coefficient depend on both the corresponding relaxation times and additional adjustable parameters in this model. As a result, the corresponding relaxation times can be adjusted in proper ranges to enhance the performance of the model. Numerical validations of the Poiseuille flow and a diffusion-reaction problem demonstrate that the proposed model has second-order accuracy in space. Thereafter, the model is used to simulate flow through a porous medium, and the results show that the proposed model has the advantage to obtain a viscosity-independent permeability, which makes it a robust method for simulating flow in porous media. Finally, a set of simulations are conducted on the viscous miscible displacement between two parallel plates. The results reveal that the present model can be used to simulate, to a high level of accuracy, flows with large viscosity ratios and/or high Péclet numbers. Moreover, the present model is shown to provide superior stability in the limit of high kinematic viscosity. In summary, the numerical results indicate that the present lattice Boltzmann model is an ideal numerical tool for simulating flow with a large viscosity ratio and/or a high Péclet number.

  6. Global existence proof for relativistic Boltzmann equation

    International Nuclear Information System (INIS)

    Dudynski, M.; Ekiel-Jezewska, M.L.

    1992-01-01

    The existence and causality of solutions to the relativistic Boltzmann equation in L¹ and in L¹_loc are proved. The solutions are shown to satisfy physically natural a priori bounds, time-independent in L¹. The results rely upon new techniques developed for the nonrelativistic Boltzmann equation by DiPerna and Lions.

  7. Influence of Extrinsic Information Scaling Coefficient on Double-Iterative Decoding Algorithm for Space-Time Turbo Codes with Large Number of Antennas

    Directory of Open Access Journals (Sweden)

    TRIFINA, L.

    2011-02-01

    Full Text Available This paper analyzes the extrinsic information scaling coefficient influence on double-iterative decoding algorithm for space-time turbo codes with large number of antennas. The max-log-APP algorithm is used, scaling both the extrinsic information in the turbo decoder and the one used at the input of the interference-canceling block. Scaling coefficients of 0.7 or 0.75 lead to a 0.5 dB coding gain compared to the no-scaling case, for one or more iterations to cancel the spatial interferences.
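
    The scaling studied above enters where the extrinsic log-likelihood ratios (LLRs) produced by one component decoder are passed to the other as a priori information. The fragment below only marks that single step with an assumed factor of 0.75; it is not the paper's double-iterative space-time receiver.

```python
# Generic sketch of extrinsic-information scaling in iterative decoding: the
# extrinsic LLRs exchanged between component decoders are multiplied by a
# factor (assumed 0.75 here) to compensate for the optimism of max-log
# approximations.  This is only the scaling step the paper studies, not the
# full double-iterative space-time receiver.
SCALE = 0.75

def exchange(extrinsic_llr, scale=SCALE):
    """Scale extrinsic LLRs before feeding them to the other decoder as a priori."""
    return [scale * llr for llr in extrinsic_llr]

a_priori_for_decoder2 = exchange([2.4, -1.1, 0.3, -3.8])
print(a_priori_for_decoder2)  # [1.8, -0.825, 0.225, -2.85]
```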

  8. Optical chaos and hybrid WDM/TDM based large capacity quasi-distributed sensing network with real-time fiber fault monitoring.

    Science.gov (United States)

    Luo, Yiyang; Xia, Li; Xu, Zhilin; Yu, Can; Sun, Qizhen; Li, Wei; Huang, Di; Liu, Deming

    2015-02-09

    An optical chaos and hybrid wavelength division multiplexing/time division multiplexing (WDM/TDM) based large capacity quasi-distributed sensing network with real-time fiber fault monitoring is proposed. Chirped fiber Bragg grating (CFBG) intensity demodulation is adopted to improve the dynamic range of the measurements. Compared with the traditional sensing interrogation methods in the time, radio frequency and optical wavelength domains, the measurand sensing and the precise locating of the proposed sensing network can be simultaneously interrogated through the relative amplitude change (RAC) and the time delay of the correlation peak in the cross-correlation spectrum. Assisted by the WDM/TDM technology, hundreds of sensing units could potentially be multiplexed in the multiple sensing fiber lines. Based on a proof-of-concept experiment for axial strain measurement with three sensing fiber lines, a strain sensitivity of up to 0.14% RAC/με and precise locating of the sensors are achieved. Significantly, real-time fiber fault monitoring in the three sensing fiber lines is also implemented with a spatial resolution of 2.8 cm.
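
    The locating principle above is that the delay of the cross-correlation peak between the noise-like (chaotic) probe and the returned signal encodes the position of a sensor or fault. The sketch below demonstrates only that generic delay estimation step; the sampling rate, waveforms, and amplitudes are assumed, and it does not reproduce the CFBG intensity demodulation.

```python
import numpy as np

# Generic sketch: locating a reflection by the delay of the cross-correlation
# peak between a noise-like (chaotic) probe and the returned signal.  Values
# are illustrative; this is not the paper's CFBG intensity demodulation.
rng = np.random.default_rng(2)
fs = 1e9                                  # assumed 1 GSa/s sampling rate
probe = rng.normal(size=4096)             # chaotic / noise-like probe waveform
true_delay = 137                          # delay in samples
echo = 0.3 * np.r_[np.zeros(true_delay), probe[:-true_delay]]
echo += 0.05 * rng.normal(size=echo.size) # weak reflection plus detector noise

xcorr = np.correlate(echo, probe, mode="full")
lag = int(np.argmax(xcorr)) - (len(probe) - 1)
print(lag, lag / fs * 1e9, "ns")          # recovered delay: 137 samples = 137 ns
```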

  9. Improving Rydberg Excitations within Time-Dependent Density Functional Theory with Generalized Gradient Approximations: The Exchange-Enhancement-for-Large-Gradient Scheme.

    Science.gov (United States)

    Li, Shaohong L; Truhlar, Donald G

    2015-07-14

    Time-dependent density functional theory (TDDFT) with conventional local and hybrid functionals such as the local and hybrid generalized gradient approximations (GGA) seriously underestimates the excitation energies of Rydberg states, which limits its usefulness for applications such as spectroscopy and photochemistry. We present here a scheme that modifies the exchange-enhancement factor to improve GGA functionals for Rydberg excitations within the TDDFT framework while retaining their accuracy for valence excitations and for the thermochemical energetics calculated by ground-state density functional theory. The scheme is applied to a popular hybrid GGA functional and tested on data sets of valence and Rydberg excitations and atomization energies, and the results are encouraging. The scheme is simple and flexible. It can be used to correct existing functionals, and it can also be used as a strategy for the development of new functionals.

  10. Longitudinal development of muons in large air showers studied from the arrival time distributions measured at 900 m above sea level

    Science.gov (United States)

    Kakimoto, F.; Tsuchimoto, I.; Enoki, T.; Suga, K.; Nishi, K.

    1985-01-01

    The arrival time distributions of muons with energies above 1.0 GeV and 0.5 GeV have been measured in the Akeno air-shower array to study the longitudinal development of muons in air showers with primary energies in the range 10¹⁷ to 10¹⁸ eV. The average rise times of muons with energies above 1.0 GeV at large core distances are consistent with those expected from very high multiplicity models and, on the contrary, with those expected from the low multiplicity models at small core distances. This implies that the longitudinal development at atmospheric depths smaller than 500 g/cm² is very fast and that at larger atmospheric depths it is rather slow.

  11. Large-distance and long-time asymptotic behavior of the reduced density matrix in the non-linear Schroedinger model

    Energy Technology Data Exchange (ETDEWEB)

    Kozlowski, K.K.

    2010-12-15

    Starting from the form factor expansion in finite volume, we derive the multidimensional generalization of the so-called Natte series for the zero-temperature, time and distance dependent reduced density matrix in the non-linear Schroedinger model. This representation allows one to read-off straightforwardly the long-time/large-distance asymptotic behavior of this correlator. Our method of analysis reduces the complexity of the computation of the asymptotic behavior of correlation functions in the so-called interacting integrable models, to the one appearing in free fermion equivalent models. We compute explicitly the first few terms appearing in the asymptotic expansion. Part of these terms stems from excitations lying away from the Fermi boundary, and hence go beyond what can be obtained by using the CFT/Luttinger liquid based predictions. (orig.)

  12. Performance of Existing Hydrogen Stations

    Energy Technology Data Exchange (ETDEWEB)

    Sprik, Samuel [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Kurtz, Jennifer M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Ainscough, Christopher D [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Saur, Genevieve [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Peters, Michael C [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-12-01

    In this presentation, the National Renewable Energy Laboratory presented aggregated analysis results on the performance of existing hydrogen stations, including performance, operation, utilization, maintenance, safety, hydrogen quality, and cost. The U.S. Department of Energy funds technology validation work at NREL through its National Fuel Cell Technology Evaluation Center (NFCTEC).

  13. Mathematical modelling and optimization of a large-scale combined cooling, heat, and power system that incorporates unit changeover and time-of-use electricity price

    International Nuclear Information System (INIS)

    Zhu, Qiannan; Luo, Xianglong; Zhang, Bingjian; Chen, Ying

    2017-01-01

    Highlights: • We propose a novel superstructure for the design and optimization of LSCCHP. • A multi-objective multi-period MINLP model is formulated. • The unit start-up cost and time-of-use electricity prices are involved. • Unit size discretization strategy is proposed to linearize the original MINLP model. • A case study is elaborated to demonstrate the effectiveness of the proposed method. - Abstract: Building energy systems, particularly large public ones, are major energy consumers and pollutant emission contributors. In this study, a superstructure of large-scale combined cooling, heat, and power system is constructed. The off-design unit, economic cost, and CO₂ emission models are also formulated. Moreover, a multi-objective mixed integer nonlinear programming model is formulated for the simultaneous system synthesis, technology selection, unit sizing, and operation optimization of large-scale combined cooling, heat, and power system. Time-of-use electricity price and unit changeover cost are incorporated into the problem model. The economic objective is to minimize the total annual cost, which comprises the operation and investment costs of large-scale combined cooling, heat, and power system. The environmental objective is to minimize the annual global CO₂ emission of large-scale combined cooling, heat, and power system. The augmented ε-constraint method is applied to achieve the Pareto frontier of the design configuration, thereby reflecting the set of solutions that represent optimal trade-offs between the economic and environmental objectives. Sensitivity analysis is conducted to reflect the impact of natural gas price on the combined cooling, heat, and power system. The synthesis and design of combined cooling, heat, and power system for an airport in China is studied to test the proposed synthesis and design methodology. The Pareto curve of multi-objective optimization shows that the total annual cost varies from 102.53 to 94.59 M
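
    The ε-constraint idea behind the Pareto frontier above can be shown on a toy problem: minimize cost subject to an emissions cap, then tighten the cap to sweep out trade-off points. The sketch below uses the plain (not augmented) ε-constraint method on an invented two-source dispatch problem; it is not the paper's MINLP of a real CCHP system.

```python
# Toy epsilon-constraint sweep on a two-objective linear problem: minimize cost
# subject to an emissions cap, then tighten the cap to trace a Pareto front.
# All coefficients are invented; this is the plain epsilon-constraint idea,
# not the augmented variant or the paper's MINLP.
import numpy as np
from scipy.optimize import linprog

cost = np.array([5.0, 3.0])        # $/MWh of on-site CHP and grid purchase (hypothetical)
emis = np.array([0.5, 0.9])        # tCO2/MWh of each source (hypothetical)
A_eq, b_eq = [[1.0, 1.0]], [100.0] # demand balance: x1 + x2 = 100 MWh

for eps in (90.0, 70.0, 50.0):     # progressively tighter emission caps
    res = linprog(cost, A_ub=[emis], b_ub=[eps], A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None), (0, None)])
    print(f"cap {eps} tCO2 -> cost {res.fun:.1f}, dispatch {res.x.round(1)}")
```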

  14. Two Step Acceleration Process of Electrons in the Outer Van Allen Radiation Belt by Time Domain Electric Field Bursts and Large Amplitude Chorus Waves

    Science.gov (United States)

    Agapitov, O. V.; Mozer, F.; Artemyev, A.; Krasnoselskikh, V.; Lejosne, S.

    2014-12-01

    A huge number of different non-linear structures (double layers, electron holes, non-linear whistlers, etc.) have been observed by the electric field experiment on the Van Allen Probes in conjunction with relativistic electron acceleration in the Earth's outer radiation belt. These structures, found as short duration (~0.1 msec) quasi-periodic bursts of electric field in the high time resolution electric field waveform, have been called Time Domain Structures (TDS). They can quite effectively interact with radiation belt electrons. Due to the trapping of electrons into these non-linear structures, they are accelerated up to ~10 keV and their pitch angles are changed, especially for low energies (~1 keV). Large amplitude electric field perturbations cause non-linear resonant trapping of electrons into the effective potential of the TDS and these electrons are then accelerated in the non-homogeneous magnetic field. These locally accelerated electrons create the "seed population" of several keV electrons that can be accelerated by coherent, large amplitude, upper band whistler waves to MeV energies in this two step acceleration process. All the elements of this chain acceleration mechanism have been observed by the Van Allen Probes.

  15. Decomposing Large Inverse Problems with an Augmented Lagrangian Approach: Application to Joint Inversion of Body-Wave Travel Times and Surface-Wave Dispersion Measurements

    Science.gov (United States)

    Reiter, D. T.; Rodi, W. L.

    2015-12-01

    Constructing 3D Earth models through the joint inversion of large geophysical data sets presents numerous theoretical and practical challenges, especially when diverse types of data and model parameters are involved. Among the challenges are the computational complexity associated with large data and model vectors and the need to unify differing model parameterizations, forward modeling methods and regularization schemes within a common inversion framework. The challenges can be addressed in part by decomposing the inverse problem into smaller, simpler inverse problems that can be solved separately, providing one knows how to merge the separate inversion results into an optimal solution of the full problem. We have formulated an approach to the decomposition of large inverse problems based on the augmented Lagrangian technique from optimization theory. As commonly done, we define a solution to the full inverse problem as the Earth model minimizing an objective function motivated, for example, by a Bayesian inference formulation. Our decomposition approach recasts the minimization problem equivalently as the minimization of component objective functions, corresponding to specified data subsets, subject to the constraints that the minimizing models be equal. A standard optimization algorithm solves the resulting constrained minimization problems by alternating between the separate solution of the component problems and the updating of Lagrange multipliers that serve to steer the individual solution models toward a common model solving the full problem. We are applying our inversion method to the reconstruction of the crust and upper-mantle seismic velocity structure across Eurasia. Data for the inversion comprise a large set of P and S body-wave travel times and fundamental and first-higher mode Rayleigh-wave group velocities.
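
    The alternation described (separate component solves, then multiplier updates that pull the component models together) can be illustrated on a toy consensus problem with two scalar "data subsets". The sketch below is a generic scaled augmented-Lagrangian (ADMM-style) iteration with invented targets and weights; it is not the travel-time/dispersion joint inversion itself.

```python
import numpy as np

# Toy augmented-Lagrangian (consensus) decomposition: two "data subsets" each
# pull a scalar model toward their own best value; alternating component solves
# and multiplier updates drive the copies to a common model.  Purely
# illustrative, with invented numbers.
targets = np.array([2.0, 6.0])     # per-subset best-fit models (hypothetical)
weights = np.array([1.0, 3.0])     # per-subset data weights (hypothetical)
rho = 1.0                          # penalty parameter
x = np.zeros(2)                    # per-subset model copies
z = 0.0                            # common (consensus) model
u = np.zeros(2)                    # scaled Lagrange multipliers

for _ in range(50):
    # component solves: min_x_i  w_i*(x_i - t_i)^2 + (rho/2)*(x_i - z + u_i)^2
    x = (2 * weights * targets + rho * (z - u)) / (2 * weights + rho)
    z = np.mean(x + u)             # consensus (common model) update
    u += x - z                     # multiplier update steering copies together

print(x, z)   # copies agree; z is the weighted compromise (~5.0)
```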

  16. Characterization of mean transit time at large springs in the Upper Colorado River Basin, USA: A tool for assessing groundwater discharge vulnerability

    Science.gov (United States)

    Solder, John; Stolp, Bernard J.; Heilweil, Victor M.; Susong, David D.

    2016-01-01

    Environmental tracers (noble gases, tritium, industrial gases, stable isotopes, and radiocarbon) and hydrogeology were interpreted to determine groundwater transit-time distribution and calculate mean transit time (MTT) with lumped parameter modeling at 19 large springs distributed throughout the Upper Colorado River Basin (UCRB), USA. The predictive value of the MTT to evaluate the pattern and timing of groundwater response to hydraulic stress (i.e., vulnerability) is examined by a statistical analysis of MTT, historical spring discharge records, and the Palmer Hydrological Drought Index. MTTs of the springs range from 10 to 15,000 years and 90% of the cumulative discharge-weighted travel-time distribution falls within the range of 2–10,000 years. Historical variability in discharge was assessed as the ratio of the 10% to 90% flow-exceedance (R10/90%) and ranged from 2.8 to 1.1 for select springs with available discharge data. The lag-time (i.e., delay in discharge response to drought conditions) was determined by cross-correlation analysis and ranged from 0.5 to 6 years for the same select springs. Springs with shorter MTTs (<80 years) statistically correlate with larger discharge variations and faster responses to drought, indicating MTT can be used for estimating the relative magnitude and timing of groundwater response. Results indicate that groundwater discharge to streams in the UCRB will likely respond on the order of years to climate variation and increasing groundwater withdrawals.
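
    Lumped parameter modeling of the kind used here convolves a tracer input history with an assumed transit-time distribution and adjusts the MTT until the modeled output matches the observed tracer concentrations. The sketch below uses the simplest choice, an exponential (well-mixed) distribution, with an invented step-change input; the actual study fit multiple tracers and distributions.

```python
import numpy as np

# Minimal lumped-parameter sketch: convolve a tracer input history with an
# exponential transit-time distribution g(tau) = exp(-tau/MTT)/MTT to predict
# spring-discharge concentrations.  The input series below is invented.
def spring_concentration(c_in, dt_years, mtt_years):
    tau = np.arange(len(c_in)) * dt_years
    g = np.exp(-tau / mtt_years) / mtt_years          # exponential (well-mixed) model
    g /= g.sum() * dt_years                           # normalize the truncated kernel
    return np.convolve(c_in, g)[:len(c_in)] * dt_years

years = np.arange(1950, 2016)
c_in = np.where(years < 1990, 1.0, 2.0)               # hypothetical step in input tracer
print(spring_concentration(c_in, 1.0, 10.0)[-1])      # young water tracks the step quickly
print(spring_concentration(c_in, 1.0, 100.0)[-1])     # old water damps it much more
```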

  17. The Greenhouse Effect Does Exist!

    OpenAIRE

    Ebel, Jochen

    2009-01-01

    In particular, without the greenhouse effect, essential features of the atmospheric temperature profile as a function of height cannot be described, i.e., the existence of the tropopause above which we see an almost isothermal temperature curve, whereas beneath it the temperature curve is nearly adiabatic. The relationship between the greenhouse effect and observed temperature curve is explained and the paper by Gerlich and Tscheuschner [arXiv:0707.1161] critically analyzed. Gerlich and Tsche...

  18. Europe - space for transcultural existence?

    OpenAIRE

    Tamcke, Martin; Janny, de Jong; Klein, Lars; Waal, Margriet

    2013-01-01

    Europe - Space for Transcultural Existence? is the first volume of the new series, Studies in Euroculture, published by Göttingen University Press. The series derives its name from the Erasmus Mundus Master of Excellence Euroculture: Europe in the Wider World, a two year programme offered by a consortium of eight European universities in collaboration with four partner universities outside Europe. This master highlights regional, national and supranational dimensions of the European democrati...

  19. Existence of undiscovered Uranian satellites

    International Nuclear Information System (INIS)

    Boice, D.C.

    1986-04-01

    Structure in the Uranian ring system as observed in recent occultations may contain indirect evidence for the existence of undiscovered satellites. Using the Alfven and Arrhenius (1975, 1976) scenario for the formation of planetary systems, the orbital radii of up to nine hypothetical satellites interior to Miranda are computed. These calculations should provide interesting comparisons when the results from the Voyager 2 encounter with Uranus are made public. 15 refs., 1 fig., 1 tab

  20. UNCITRAL: Changes to existing law

    OpenAIRE

    Andersson, Joakim

    2008-01-01

    The UNCITRAL Convention on Contracts for the International Carriage of Goods [wholly or partly] by Sea has an ambition of replacing current maritime regimes and expands the application of the Convention to include also multimodal transport. This thesis questions what changes to existing law, in certain areas, the new Convention will bring compared to the current regimes. In the initial part, the thesis provides for a brief background and history of international maritime regulations and focus...

  1. Existence Results for Incompressible Magnetoelasticity

    Czech Academy of Sciences Publication Activity Database

    Kružík, Martin; Stefanelli, U.; Zeman, J.

    2015-01-01

    Roč. 35, č. 6 (2015), s. 2615-2623 ISSN 1078-0947 R&D Projects: GA ČR GA13-18652S Institutional support: RVO:67985556 Keywords : magnetoelasticity * magnetostrictive solids * incompressibility * existence of minimizers * quasistatic evolution * energetic solution Subject RIV: BA - General Mathematics Impact factor: 1.127, year: 2015 http://library.utia.cas.cz/separaty/2015/MTR/kruzik-0443017.pdf

  2. Assessing the use of digital radiography and a real-time interactive pulmonary nodule analysis system for large population lung cancer screening

    International Nuclear Information System (INIS)

    Xu Yan; Ma Daqing; He Wen

    2012-01-01

    Rationale and objectives: To assess the use of chest digital radiograph (DR) assisted with a real-time interactive pulmonary nodule analysis system in large population lung cancer screening. Materials and methods: 346 DR/CR patient studies with corresponding CT images were selected from 12,500 patients screened for lung cancer from 2007 to 2009. Two expert chest radiologists established a CT-confirmed gold standard of nodules on the DR/CR images by consensus. These cases were read by eight other chest radiologists (participating radiologists), first without using the real-time interactive pulmonary nodule analysis system and then re-read using the system. Performances of the participating radiologists and the computer system were analyzed. Results: The computer system achieved similar performance on DR and CR images, with a detection rate of 76% and an average of 2.0 false positives (FPs) per image. Before and after using the computer-aided detection system, the nodule detection sensitivities of the participating radiologists were 62.3% and 77.3%, respectively, and the Az values increased from 0.794 to 0.831. Statistical analysis demonstrated a statistically significant improvement for the participating radiologists after using the computer analysis system, with a P-value of 0.05. Conclusion: The computer system could help radiologists identify more lesions, especially small ones that are more likely to be overlooked on chest DR/CR images, and could help reduce inter-observer diagnostic variations, while its FPs were easy to recognize and dismiss. It is suggested that DR/CR assisted by the real-time interactive pulmonary nodule analysis system may be an effective means to screen large populations for lung cancer.

  3. The Existence of Public Protection Unit

    Directory of Open Access Journals (Sweden)

    Moh. Ilham A. Hamudy

    2014-12-01

    Full Text Available This article is about the Public Protection Unit (Satlinmas), formerly known as civil defence (Hansip). It is a summary of the results of a desk study and fieldwork conducted in October-November 2013 in the towns of Magelang and Surabaya. The study used a descriptive qualitative approach to explore the combined role and existence of Satlinmas. The results of the study showed that the existence of Satlinmas still leaves many problems, including: first, the legal basis for the establishment of Satlinmas; until now, there have been no new regulations governing Satlinmas, and the existing regulations are too weak and cannot keep up with the times. Second, the formulation of the concepts and the basic tasks and functions of Satlinmas overlaps with other institutions. Third, the image of Satlinmas in society tends to fade and to be abused. Fourth, the incorporation of Satlinmas into the Municipal Police is deemed inappropriate because of differing philosophies.

  4. Brief report: accuracy and response time for the recognition of facial emotions in a large sample of children with autism spectrum disorders.

    Science.gov (United States)

    Fink, Elian; de Rosnay, Marc; Wierda, Marlies; Koot, Hans M; Begeer, Sander

    2014-09-01

    The empirical literature has presented inconsistent evidence for deficits in the recognition of basic emotion expressions in children with autism spectrum disorders (ASD), which may be due to the focus on research with relatively small sample sizes. Additionally, it has been proposed that, although children with ASD may correctly identify emotion expressions, they rely on more deliberate, more time-consuming strategies in order to accurately recognize emotion expressions when compared to typically developing children. In the current study, we examine both emotion recognition accuracy and response time in a large sample of children, and explore the moderating influence of verbal ability on these findings. The sample consisted of 86 children with ASD (M age = 10.65) and 114 typically developing children (M age = 10.32) between 7 and 13 years of age. All children completed a pre-test (emotion word-word matching) and a test phase consisting of basic emotion recognition, whereby they were required to match a target emotion expression to the correct emotion word; accuracy and response time were recorded. Verbal IQ was controlled for in the analyses. We found no evidence of a systematic deficit in emotion recognition accuracy or response time for children with ASD, controlling for verbal ability. However, when controlling for children's accuracy in word-word matching, children with ASD had significantly lower emotion recognition accuracy when compared to typically developing children. The findings suggest that the social impairments observed in children with ASD are not the result of marked deficits in basic emotion recognition accuracy or longer response times. However, children with ASD may be relying on other perceptual skills (such as advanced word-word matching) to complete emotion recognition tasks at a similar level as typically developing children.

  5. Cryogen-free cryostat for large-scale arrays of superconducting tunnel junction ion detectors in time-of-flight mass spectrometry

    Science.gov (United States)

    Kushino, A.; Ohkubo, M.; Chen, Y. E.; Ukibe, M.; Kasai, S.; Fujioka, K.

    2006-04-01

    Nb-based superconducting tunnel junction (STJ) detectors have a fast time resolution of a few hundred nanoseconds and a high operating temperature of 0.3 K. These advantages expand their applicable fields to time-of-flight mass spectrometry (TOF-MS). In order to enlarge the effective detection area, we have built arrays based on hundreds of large STJ elements. To realize fast readout with no cross-talk, coaxial cables made of low-thermal-conductivity materials were investigated. From the results of thermal conduction measurements, we chose thin coaxial cables with a diameter of 0.33 mm, consisting of CuNi center/outer conductors and Teflon insulator, for the wiring between the 0.3 K ³He pot of the sorption pump and the 3 K second stage of the GM cooler. Even after the installation of the coaxial cables and a cold snout in the cryogen-free cryostat, we could keep the arrays at 0.3 K for about a week, and the reduction of the holding time at 0.3 K and the temperature rise at the ³He pot due to the installation were small, ~0.5 day and 10 mK, respectively.

  6. A Minority of Patients Newly Diagnosed with AIDS Are Started on Antiretroviral Therapy at the Time of Diagnosis in a Large Public Hospital in the Southeastern United States.

    Science.gov (United States)

    Goswami, Neela D; Colasanti, Jonathan; Khoubian, Jonathan J; Huang, Yijian; Armstrong, Wendy S; Del Rio, Carlos

    Prompt antiretroviral therapy (ART) initiation after AIDS diagnosis, in the absence of certain opportunistic infections such as tuberculosis and cryptococcal meningitis, delays disease progression and death, but system barriers to inpatient ART initiation at large hospitals in the era of modern ART have been less studied. We reviewed hospitalizations for persons newly diagnosed with AIDS at Grady Memorial Hospital in Atlanta, Georgia in 2011 and 2012. Individual- and system-level variables were collected. Logistic regression models were used to estimate the odds ratios (ORs) for ART initiation prior to discharge. With Georgia Department of Health surveillance data, we estimated time to first clinic visit, ART initiation, and viral suppression. In the study population (n = 81), ART was initiated prior to discharge in 10 (12%) patients. Shorter hospital stay was significantly associated with lack of ART initiation at the time of HIV diagnosis (8 versus 24 days, OR: 1.14, 95% confidence interval: 1.04-1.25). Reducing barriers to ART initiation for newly diagnosed HIV-positive patients with short hospital stays may improve time to viral suppression.

  7. Quantum logics with existence property

    International Nuclear Information System (INIS)

    Schindler, C.

    1991-01-01

    A quantum logic (σ-orthocomplete orthomodular poset L with a convex, unital, and separating set Δ of states) is said to have the existence property if the expectation functionals on lin(Δ) associated with the bounded observables of L form a vector space. Classical quantum logics as well as the Hilbert space logics of traditional quantum mechanics have this property. The author shows that, if a quantum logic satisfies certain conditions in addition to having property E, then the number of its blocks (maximal classical subsystems) must either be one (classical logics) or uncountable (as in Hilbert space logics)

  8. Exshall: A Turkel-Zwas explicit large time-step FORTRAN program for solving the shallow-water equations in spherical coordinates

    Science.gov (United States)

    Navon, I. M.; Yu, Jian

    A FORTRAN computer program is presented and documented applying the Turkel-Zwas explicit large time-step scheme to a hemispheric barotropic model with constraint restoration of integral invariants of the shallow-water equations. We then detail the algorithms embodied in the code EXSHALL, particularly algorithms related to the efficiency and stability of the T-Z scheme and the quadratic constraint restoration method, which is based on a variational approach. In particular, we provide details about the high-latitude filtering, Shapiro filtering, and Robert filtering algorithms used in the code. We explain in detail the various subroutines in the EXSHALL code, with emphasis on the algorithms implemented in the code, and present flowcharts of some major subroutines. Finally, we provide a visual example illustrating a 4-day run using real initial data, along with a sample printout and graphic isoline contours of the height field and velocity fields.
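
    Of the filters named above, the Robert (Asselin) filter is simple enough to show inline: it damps the spurious computational mode of leapfrog time stepping used with such explicit schemes. The fragment below is a Python illustration of that single filtering step (the original code is FORTRAN), with an assumed filter coefficient.

```python
import numpy as np

# Robert-Asselin time filter, the standard damping of the leapfrog
# computational mode mentioned in the abstract (sketched in Python rather than
# the original FORTRAN; nu = 0.05 is a typical, assumed coefficient).
def robert_asselin(field_prev, field_now, field_next, nu=0.05):
    """Return the filtered value at the current time level."""
    return field_now + nu * (field_prev - 2.0 * field_now + field_next)

h_prev, h_now, h_next = np.array([10.0]), np.array([10.4]), np.array([10.1])
print(robert_asselin(h_prev, h_now, h_next))  # [10.365] -- small smoothing of the middle level
```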

  9. Assessment of time-dependent density functional theory with the restricted excitation space approximation for excited state calculations of large systems

    Science.gov (United States)

    Hanson-Heine, Magnus W. D.; George, Michael W.; Besley, Nicholas A.

    2018-06-01

    The restricted excitation subspace approximation is explored as a basis to reduce the memory storage required in linear response time-dependent density functional theory (TDDFT) calculations within the Tamm-Dancoff approximation. It is shown that excluding the core orbitals and up to 70% of the virtual orbitals in the construction of the excitation subspace does not result in significant changes in computed UV/vis spectra for large molecules. The reduced size of the excitation subspace greatly reduces the size of the subspace vectors that need to be stored when using the Davidson procedure to determine the eigenvalues of the TDDFT equations. Furthermore, additional screening of the two-electron integrals in combination with a reduction in the size of the numerical integration grid used in the TDDFT calculation leads to significant computational savings. The use of these approximations represents a simple approach to extend TDDFT to the study of large systems and make the calculations increasingly tractable using modest computing resources.

  10. Using memory-efficient algorithm for large-scale time-domain modeling of surface plasmon polaritons propagation in organic light emitting diodes

    Science.gov (United States)

    Zakirov, Andrey; Belousov, Sergei; Valuev, Ilya; Levchenko, Vadim; Perepelkina, Anastasia; Zempo, Yasunari

    2017-10-01

    We demonstrate an efficient approach to numerical modeling of optical properties of large-scale structures with typical dimensions much greater than the wavelength of light. For this purpose, we use the finite-difference time-domain (FDTD) method enhanced with a memory efficient Locally Recursive non-Locally Asynchronous (LRnLA) algorithm called DiamondTorre and implemented for General Purpose Graphical Processing Units (GPGPU) architecture. We apply our approach to simulation of optical properties of organic light emitting diodes (OLEDs), which is an essential step in the process of designing OLEDs with improved efficiency. Specifically, we consider a problem of excitation and propagation of surface plasmon polaritons (SPPs) in a typical OLED, which is a challenging task given that SPP decay length can be about two orders of magnitude greater than the wavelength of excitation. We show that with our approach it is possible to extend the simulated volume size sufficiently so that SPP decay dynamics is accounted for. We further consider an OLED with periodically corrugated metallic cathode and show how the SPP decay length can be greatly reduced due to scattering off the corrugation. Ultimately, we compare the performance of our algorithm to the conventional FDTD and demonstrate that our approach can efficiently be used for large-scale FDTD simulations with the use of only a single GPGPU-powered workstation, which is not practically feasible with the conventional FDTD.
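
    For contrast with the memory-optimized LRnLA traversal described above, a minimal conventional Yee-style FDTD update in one dimension (normalized units, hypothetical grid size) looks like this; the DiamondTorre algorithm reorders exactly these kinds of updates to improve data locality on GPUs.

      import numpy as np

      nx, nt = 2000, 1000                  # hypothetical number of cells and time steps
      courant = 0.5                        # normalized time step (c*dt/dx)
      ez = np.zeros(nx)                    # electric field on integer grid points
      hy = np.zeros(nx - 1)                # magnetic field on staggered half points

      for n in range(nt):
          hy += courant * (ez[1:] - ez[:-1])                # update H from the curl of E
          ez[1:-1] += courant * (hy[1:] - hy[:-1])          # update E from the curl of H
          ez[nx // 2] += np.exp(-((n - 30.0) / 10.0) ** 2)  # soft Gaussian source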

  11. The effects on safety, time consumption and environment of large scale use of roundabouts in an urban area: a case study.

    Science.gov (United States)

    Hydén, C; Várhelyi, A

    2000-01-01

    An experiment with small roundabouts - as speed-reducing measures - was carried out in a Swedish city. The purpose of the study was to test the large-scale and long-term effects of the roundabouts. The results showed that the roundabouts reduced speed considerably at the junctions and on links between roundabouts. The lateral displacement that a roundabout forces on the driver is of great importance for the speed of cars approaching it. The speed-reducing effect is already large at a 2 m deflection; a larger deflection does not result in a larger effect. Conflict studies indicated an overall decrease in accident risk of 44%. Vulnerable road-users' risk was reduced significantly, while there was no reduction for car occupants. There is a relation between the reduction of approach speed and the reduction of injury accident risk. Time consumption at a time-operated signal was heavily reduced by the installation of a roundabout at a signalised intersection. On average, emissions (CO and NOx) at roundabouts replacing non-signalised junctions increased by between 4 and 6%, while a roundabout replacing a signalised intersection led to a reduction of between 20 and 29%. The noise level was reduced at junctions that were provided with a roundabout. Car drivers were less positive about the roundabouts than bicyclists. In the long term, the unchanged roundabouts worked almost as well as they did shortly after the rebuilding. The study showed that details in the design are of decisive importance for road-users' safety. Special attention has to be paid to the situation of bicyclists. The transition between the cycle path/lane and the junction has to be designed with care - the bicyclists should be integrated with motorised traffic before they enter the roundabout. There should be only one car lane on the approach, in the circulating area, and on the exit. The size of the roundabout should be as small as possible.

  12. Based on Real Time Remote Health Monitoring Systems: A New Approach for Prioritization "Large Scales Data" Patients with Chronic Heart Diseases Using Body Sensors and Communication Technology.

    Science.gov (United States)

    Kalid, Naser; Zaidan, A A; Zaidan, B B; Salman, Omar H; Hashim, M; Albahri, O S; Albahri, A S

    2018-03-02

    This paper presents a new approach to prioritize "Large-scale Data" of patients with chronic heart diseases by using body sensors and communication technology during disasters and peak seasons. An evaluation matrix is used for emergency evaluation and large-scale data scoring of patients with chronic heart diseases in telemedicine environment. However, one major problem in the emergency evaluation of these patients is establishing a reasonable threshold for patients with the most and least critical conditions. This threshold can be used to detect the highest and lowest priority levels when all the scores of patients are identical during disasters and peak seasons. A practical study was performed on 500 patients with chronic heart diseases and different symptoms, and their emergency levels were evaluated based on four main measurements: electrocardiogram, oxygen saturation sensor, blood pressure monitoring, and non-sensory measurement tool, namely, text frame. Data alignment was conducted for the raw data and decision-making matrix by converting each extracted feature into an integer. This integer represents their state in the triage level based on medical guidelines to determine the features from different sources in a platform. The patients were then scored based on a decision matrix by using multi-criteria decision-making techniques, namely, integrated multi-layer for analytic hierarchy process (MLAHP) and technique for order performance by similarity to ideal solution (TOPSIS). For subjective validation, cardiologists were consulted to confirm the ranking results. For objective validation, mean ± standard deviation was computed to check the accuracy of the systematic ranking. This study provides scenarios and checklist benchmarking to evaluate the proposed and existing prioritization methods. Experimental results revealed the following. (1) The integration of TOPSIS and MLAHP effectively and systematically solved the patient settings on triage and
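
    A compact sketch of the TOPSIS scoring step referred to above (hypothetical decision matrix and weights; in the paper the weights come from the MLAHP stage and the criteria follow the triage guidelines):

      import numpy as np

      # Rows: patients, columns: criteria (e.g., ECG, SpO2, blood pressure, text-frame score),
      # all coded so that larger values mean higher urgency.
      decision = np.array([[3., 2., 4., 1.],
                           [1., 3., 2., 2.],
                           [4., 4., 3., 3.]])
      weights = np.array([0.4, 0.2, 0.2, 0.2])

      norm = decision / np.linalg.norm(decision, axis=0)     # vector-normalize each criterion
      weighted = norm * weights
      ideal, anti_ideal = weighted.max(axis=0), weighted.min(axis=0)

      d_pos = np.linalg.norm(weighted - ideal, axis=1)       # distance to the ideal solution
      d_neg = np.linalg.norm(weighted - anti_ideal, axis=1)  # distance to the anti-ideal solution
      closeness = d_neg / (d_pos + d_neg)                    # higher closeness = higher priority
      print(np.argsort(-closeness))                          # patient ranking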

  13. Does cold nuclear fusion exist?

    International Nuclear Information System (INIS)

    Brudanin, V.B.; Bystritskij, V.M.; Egorov, V.G.; Shamsutdinov, S.G.; Shyshkin, A.L.; Stolupin, V.A.; Yutlandov, I.A.

    1989-01-01

    The results of the investigation of cold nuclear fusion on palladium are given, both for electrolysis of heavy water D₂O and of a D₂O + H₂O (1:1) mixture and for palladium saturation with gaseous deuterium. The possibility of the existence of this phenomenon was examined by detection of neutrons and gamma quanta from the reactions: d + d → ³He + n + 3.27 MeV, p + d → ³He + γ + 5.5 MeV. Besides, these reactions were identified by measuring the characteristic X radiation of palladium due to the effect of the charged products ³He, p, t. The upper limits of the intensities of hypothetical sources of neutrons and gamma quanta at the 95% confidence level were obtained to be Qn ≤ 2×10⁻² n/(s·cm³ Pd), Qγ ≤ 2×10⁻³ γ/(s·cm³ Pd). 2 refs.; 4 figs.; 2 tabs

  14. Why do interstellar grains exist

    International Nuclear Information System (INIS)

    Seab, C.G.; Hollenbach, D.J.; Mckee, C.F.; Tielens, A.G.G.M.

    1986-01-01

    There exists a discrepancy between calculated destruction rates of grains in the interstellar medium and postulated sources of new grains. This problem was examined by modelling the global life cycle of grains in the galaxy. The model includes: grain destruction due to supernova shock waves; grain injection from cool stars, planetary nebulae, star formation, novae, and supernovae; grain growth by accretion in dark clouds; and a mixing scheme between phases of the interstellar medium. Grain growth in molecular clouds is considered as a mechanism for increasing the formation rate. To decrease the shock destruction rate, several new physical processes, such as partial vaporization effects in grain-grain collisions, breakdown of the small Larmor radius approximation for betatron acceleration, and relaxation of the steady-state shock assumption, are included

  15. Comparison of Health Risks and Changes in Risks over Time Among a Sample of Lesbian, Gay, Bisexual, and Heterosexual Employees at a Large Firm.

    Science.gov (United States)

    Mitchell, Rebecca J; Ozminkowski, Ronald J

    2017-04-01

    The objective of this study was to estimate the prevalence of health risk factors by sexual orientation over a 4-year period within a sample of employees from a large firm. Propensity score-weighted generalized linear regression models were used to estimate the proportion of employees at high risk for health problems in each year and over time, controlling for many factors. Analyses were conducted with 6 study samples based on sex and sexual orientation. Rates of smoking, stress, and certain other health risk factors were higher for lesbian, gay, and bisexual (LGB) employees compared with rates of these risks among straight employees. Lesbian, gay, and straight employees successfully reduced risk levels in many areas. Significant reductions were realized for the proportion at risk for high stress and low life satisfaction among gay and lesbian employees, and for the proportion of smokers among gay males. Comparing changes over time for sexual orientation groups versus other employee groups showed that improvements and reductions in risk levels for most health risk factors examined occurred at similar rates among individuals employed by this firm, regardless of sexual orientation. These results can help improve understanding of LGB health and provide information on where to focus workplace health promotion efforts to meet the health needs of LGB employees.

  16. Does developmental timing of exposure to child maltreatment predict memory performance in adulthood? Results from a large, population-based sample.

    Science.gov (United States)

    Dunn, Erin C; Busso, Daniel S; Raffeld, Miriam R; Smoller, Jordan W; Nelson, Charles A; Doyle, Alysa E; Luk, Gigi

    2016-01-01

    Although maltreatment is a known risk factor for multiple adverse outcomes across the lifespan, its effects on cognitive development, especially memory, are poorly understood. Using data from a large, nationally representative sample of young adults (Add Health), we examined the effects of physical and sexual abuse on working and short-term memory in adulthood. We examined the association between exposure to maltreatment as well as its timing of first onset after adjusting for covariates. Of our sample, 16.50% of respondents were exposed to physical abuse and 4.36% to sexual abuse by age 17. An analysis comparing unexposed respondents to those exposed to physical or sexual abuse did not yield any significant differences in adult memory performance. However, two developmental time periods emerged as important for shaping memory following exposure to sexual abuse, but in opposite ways. Relative to non-exposed respondents, those exposed to sexual abuse during early childhood (ages 3-5), had better number recall and those first exposed during adolescence (ages 14-17) had worse number recall. However, other variables, including socioeconomic status, played a larger role (than maltreatment) on working and short-term memory. We conclude that a simple examination of "exposed" versus "unexposed" respondents may obscure potentially important within-group differences that are revealed by examining the effects of age at onset to maltreatment. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. A Wavelet-Enhanced PWTD-Accelerated Time-Domain Integral Equation Solver for Analysis of Transient Scattering from Electrically Large Conducting Objects

    KAUST Repository

    Liu, Yang

    2018-02-26

    A wavelet-enhanced plane-wave time-domain (PWTD) algorithm for efficiently and accurately solving time-domain surface integral equations (TD-SIEs) on electrically large conducting objects is presented. The proposed scheme reduces the memory requirement and computational cost of the PWTD algorithm by representing the PWTD ray data using local cosine wavelet bases (LCBs) and performing PWTD operations in the wavelet domain. The memory requirement and computational cost of the LCB-enhanced PWTD-accelerated TD-SIE solver, when applied to the analysis of transient scattering from smooth quasi-planar objects with near-normal incident pulses, scale nearly as O(Ns log Ns) and O(Ns^1.5), respectively. Here, Ns denotes the number of spatial unknowns. The efficiency and accuracy of the proposed scheme are demonstrated through its applications to the analysis of transient scattering from a 185-wavelength-long NASA almond and a 123-wavelength-long Airbus A320 model.

  18. On the existence of perturbed Robertson-Walker universes

    International Nuclear Information System (INIS)

    D'Eath, P.D.

    1976-01-01

    Solutions of the full nonlinear field equations of general relativity near the Robertson-Walker universes are examined, together with their relation to linearized perturbations. A method due to Choquet-Bruhat and Deser is used to prove existence theorems for solutions near Robertson-Walker constraint data of the constraint equations on a spacelike hypersurface. These theorems allow one to regard the matter fluctuations as independent quantities, ranging over certain function spaces. In the k = -1 case the existence theory describes perturbations which may vary within uniform bounds throughout space. When k = +1 a modification of the method leads to a theorem which clarifies some unusual features of these constraint perturbations. The k = 0 existence theorem refers only to perturbations which die away at large distances. The connection between linearized constraint solutions and solutions of the full constraints is discussed. For k = ±1 backgrounds, solutions of the linearized constraints are analyzed using transverse-traceless decompositions of symmetric tensors. Finally, the time-evolution of perturbed constraint data and the validity of linearized perturbation theory for Robertson-Walker universes are considered

  19. Time-scale and extent at which large-scale circulation modes determine the wind and solar potential in the Iberian Peninsula

    International Nuclear Information System (INIS)

    Jerez, Sonia; Trigo, Ricardo M

    2013-01-01

    The North Atlantic Oscillation (NAO), the East Atlantic (EA) and the Scandinavian (SCAND) modes are the three main large-scale circulation patterns driving the climate variability of the Iberian Peninsula. This study assesses their influence in terms of solar (photovoltaic) and wind power generation potential (SP and WP) and evaluates their skill as predictors. For that we use a hindcast regional climate simulation to retrieve the primary meteorological variables involved, surface solar radiation and wind speed. First we identify that the maximum influence of the various modes occurs on the interannual variations of the monthly mean SP and WP series, being generally more relevant in winter. Second we find that in this time-scale and season, SP (WP) varies up to 30% (40%) with respect to the mean climatology between years with opposite phases of the modes, although the strength and the spatial distribution of the signals differ from one month to another. Last, the skill of a multi-linear regression model (MLRM), built using the NAO, EA and SCAND indices, to reconstruct the original wintertime monthly series of SP and WP was investigated. The reconstructed series (when the MLRM is calibrated for each month individually) correlate with the original ones up to 0.8 at the interannual time-scale. Besides, when the modeled series for each individual month are merged to construct an October-to-March monthly series, and after removing the annual cycle in order to account for monthly anomalies, these correlate 0.65 (0.55) with the original SP (WP) series in average. These values remain fairly stable when the calibration and reconstruction periods differ, thus supporting up to a point the predictive potential of the method at the time-scale assessed here. (letter)
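
    The reconstruction step amounts to an ordinary least-squares fit of the monthly potential series on the three teleconnection indices. A minimal sketch with synthetic data (not the hindcast simulation used in the study):

      import numpy as np

      rng = np.random.default_rng(0)
      n_months = 120                                    # hypothetical number of winter months
      nao, ea, scand = rng.standard_normal((3, n_months))
      wp = 0.5 * nao - 0.3 * ea + 0.1 * scand + 0.2 * rng.standard_normal(n_months)

      X = np.column_stack([np.ones(n_months), nao, ea, scand])
      coef, *_ = np.linalg.lstsq(X, wp, rcond=None)     # multi-linear regression model (MLRM)
      wp_hat = X @ coef

      corr = np.corrcoef(wp, wp_hat)[0, 1]              # skill of the reconstructed series
      print(f"correlation between original and reconstructed WP series: {corr:.2f}")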

  20. Network Dynamics with BrainX3: A Large-Scale Simulation of the Human Brain Network with Real-Time Interaction

    Directory of Open Access Journals (Sweden)

    Xerxes D. Arsiwalla

    2015-02-01

    Full Text Available BrainX3 is a large-scale simulation of human brain activity with real-time interaction, rendered in 3D in a virtual reality environment, which combines computational power with human intuition for the exploration and analysis of complex dynamical networks. We ground this simulation on structural connectivity obtained from diffusion spectrum imaging data and model it on neuronal population dynamics. Users can interact with BrainX3 in real-time by perturbing brain regions with transient stimulations to observe reverberating network activity, simulate lesion dynamics or implement network analysis functions from a library of graph theoretic measures. BrainX3 can thus be used as a novel immersive platform for real-time exploration and analysis of dynamical activity patterns in brain networks, both at rest or in a task-related state, for discovery of signaling pathways associated to brain function and/or dysfunction and as a tool for virtual neurosurgery. Our results demonstrate these functionalities and shed insight on the dynamics of the resting-state attractor. Specifically, we found that a noisy network seems to favor a low firing attractor state. We also found that the dynamics of a noisy network is less resilient to lesions. Our simulations on TMS perturbations show that even though TMS inhibits most of the network, it also sparsely excites a few regions. This is presumably due to anti-correlations in the dynamics and suggests that even a lesioned network can show sparsely distributed increased activity compared to healthy resting-state, over specific brain areas.

  1. Patterns and Timing of Failure for Diffuse Large B-Cell Lymphoma After Initial Therapy in a Cohort Who Underwent Autologous Bone Marrow Transplantation for Relapse

    Energy Technology Data Exchange (ETDEWEB)

    Dhakal, Sughosh; Bates, James E. [Department of Radiation Oncology, Wilmot Cancer Institute, University of Rochester Medical Center, Rochester, New York (United States); Casulo, Carla; Friedberg, Jonathan W.; Becker, Michael W.; Liesveld, Jane L. [Department of Medicine, Wilmot Cancer Institute, University of Rochester Medical Center, Rochester, New York (United States); Constine, Louis S., E-mail: louis_constine@urmc.rochester.edu [Department of Radiation Oncology, Wilmot Cancer Institute, University of Rochester Medical Center, Rochester, New York (United States)

    2016-10-01

    Purpose: To evaluate the location and timing of initial recurrence in patients with diffuse large B-cell lymphoma (DLBCL) who subsequently underwent high-dose chemotherapy with autologous stem cell transplant (HDC/ASCT), to direct approaches for disease surveillance, elucidate the patterns of failure of contemporary treatment strategies, and guide adjuvant treatment decisions. Methods and Materials: We analyzed consecutive patients with DLBCL who underwent HDC/ASCT between May 1992 and March 2014 at our institution. Of the 187 evaluable patients, 8 had incomplete data, and 79 underwent HDC/ASCT as a component of initial treatment for de novo or refractory DLBCL and were excluded from further analysis. Results: The median age was 50.8 years; the median time to relapse was 1.3 years. Patients were segregated according to the initial stage at diagnosis, with early stage (ES) defined as stage I/II and advanced stage (AS) defined as stage III/IV. In total, 40.4% of the ES and 75.5% of the AS patients relapsed in sites of initial disease; 68.4% of those with ES disease and 75.0% of those with AS disease relapsed in sites of initial disease only. Extranodal relapses were common (44.7% in ES and 35.9% in AS) and occurred in a variety of organs, although gastrointestinal tract/liver (n=12) was most frequent. Conclusions: Most patients with DLBCL who relapse and subsequently undergo HDC/ASCT initially recur in the previously involved disease site(s). Time to recurrence is brief, suggesting that frequency of screening is most justifiably greatest in the early posttherapy years. © 2016 Elsevier Inc.

  2. Eternally existing self-reproducing inflationary universe

    International Nuclear Information System (INIS)

    Linde, A.D.

    1986-05-01

    It is shown that the large-scale quantum fluctuations of the scalar field φ generated in the chaotic inflation scenario lead to an infinite process of self-reproduction of inflationary mini-universes. A model of an eternally existing chaotic inflationary universe is suggested. It is pointed out that whereas the universe locally is very homogeneous as a result of inflation, which occurs at the classical level, the global structure of the universe is determined by quantum effects and is highly non-trivial. The universe consists of an exponentially large number of different mini-universes, inside which all possible (metastable) vacuum states and all possible types of compactification are realized. The picture differs crucially from the standard picture of a one-domain universe in a "true" vacuum state. Our results may serve as a justification of the anthropic principle in the inflationary cosmology. These results may have important implications for the elementary particle theory as well. Namely, since all possible types of mini-universes, in which inflation may occur, should exist in our universe, there is no need to insist (as it is usually done) that in realistic theories the vacuum state of our type should be the only possible one or the best one. (author)

  3. Transportation capabilities of the existing cask fleet

    International Nuclear Information System (INIS)

    Johnson, P.E.; Joy, D.S.; Wankerl, M.W.

    1991-01-01

    This paper describes a number of scenarios estimating the amount of spent nuclear fuel that could be transported to a Monitored Retrievable Storage (MRS) Facility by various combinations of existing cask fleets. To develop the scenarios, the data provided by the Transportation System Data Base (TSDB) were modified to reflect the additional time for cask turnaround resulting from various startup and transportation issues. With these more realistic speed and cask-handling assumptions, the annual transportation capability of a fleet consisting of all of the existing casks is approximately 465 metric tons of uranium (MTU). The most likely fleet of existing casks that would be made available to the Department of Energy (DOE) consists of two rail, three overweight truck, and six legal weight truck casks. Under the same transportation assumptions, this cask fleet is capable of transporting approximately 270 MTU/year. These ranges of capability are a result of the assumptions pertaining to the number of casks assumed to be available. It should be noted that this assessment assumes that additional casks based on existing certifications are not fabricated. 5 refs., 4 tabs

  4. Transportation capabilities of the existing cask fleet

    International Nuclear Information System (INIS)

    Johnson, P.E.; Wankerl, M.W.; Joy, D.S.

    1991-01-01

    This paper describes a number of scenarios estimating the amount of spent nuclear fuel that could be transported to a Monitored Retrievable Storage (MRS) Facility by various combinations of existing cask fleets. To develop the scenarios, the data provided by the Transportation System Data Base (TSDB) were modified to reflect the additional time for cask turnaround resulting from various startup and transportation issues. With these more realistic speed and cask-handling assumptions, the annual transportation capability of a fleet consisting of all of the existing casks is approximately 465 metric tons of uranium (MTU). The most likely fleet of existing casks that would be made available to the DOE consists of two rail, three overweight truck, and six legal weight truck casks. Under the same transportation assumptions, this cask fleet is capable of transporting approximately 270 MTU/year. These ranges of capability are a result of the assumptions pertaining to the number of casks assumed to be available. It should be noted that this assessment assumes that additional casks based on existing certifications are not fabricated

  5. Matrix-assisted laser desorption/ionization-time of flight mass spectrometry identification of large colony beta-hemolytic streptococci containing Lancefield groups A, C, and G.

    Science.gov (United States)

    Jensen, Christian Salgård; Dam-Nielsen, Casper; Arpi, Magnus

    2015-08-01

    The aim of this study was to investigate whether large colony beta-hemolytic streptococci containing Lancefield groups A, C, and G can be adequately identified using matrix-assisted laser desorption/ionization-time of flight mass spectrometry (MALDI-ToF). Previous studies show varying results, with an identification rate from below 50% to 100%. Large colony beta-hemolytic streptococci containing Lancefield groups A, C, and G isolated from blood cultures between January 1, 2007 and May 1, 2012 were included in the study. Isolates were identified to the species level using a combination of phenotypic characteristics and 16S rRNA sequencing. The isolates were subjected to MALDI-ToF analysis. We used a two-stage approach starting with the direct method. If no valid result was obtained, we proceeded to an extraction protocol. Scores above 2 were considered valid identification at the species level. A total of 97 Streptococcus pyogenes, 133 Streptococcus dysgalactiae, and 2 Streptococcus canis isolates were tested; 94%, 66%, and 100% of S. pyogenes, S. dysgalactiae, and S. canis, respectively, were correctly identified by MALDI-ToF. In most instances when the isolates were not identified by MALDI-ToF, this was because MALDI-ToF was unable to differentiate between S. pyogenes and S. dysgalactiae. By removing two S. pyogenes reference spectra from the MALDI-ToF database, the proportion of correctly identified isolates increased to 96% overall. MALDI-ToF is a promising method for discriminating between S. dysgalactiae, S. canis, and S. equi, although more strains need to be tested to clarify this.

  6. SU-G-JeP2-13: Spatial Accuracy Evaluation for Real-Time MR Guided Radiation Therapy Using a Novel Large-Field MRI Distortion Phantom

    International Nuclear Information System (INIS)

    Antolak, A; Bayouth, J; Bosca, R; Jackson, E

    2016-01-01

    Purpose: Evaluate a large-field MRI phantom for assessment of geometric distortion in whole-body MRI for real-time MR guided radiation therapy. Methods: A prototype CIRS large-field MRI distortion phantom consisting of a PMMA cylinder (33 cm diameter, 30 cm length) containing a 3D-printed orthogonal grid (3 mm diameter rods, 20 mm apart), was filled with 6 mM NiCl2 and 30 mM NaCl solution. The phantom was scanned at 1.5T and 3.0T on a GE HDxt and Discovery MR750, respectively, and at 0.35T on a ViewRay system. Scans were obtained with and without 3D distortion correction to demonstrate the impact of such corrections. CT images were used as a reference standard for analysis of geometric distortion, as determined by a fully automated gradient-search method developed in Matlab. Results: 1,116 grid points distributed throughout a cylindrical volume 28 cm in diameter and 16 cm in length were identified and analyzed. With 3D distortion correction, average/maximum displacements for the 1.5, 3.0, and 0.35T systems were 0.84/2.91, 1.00/2.97, and 0.95/2.37 mm, respectively. The percentage of points with less than (1.0, 1.5, 2.0 mm) total displacement was (73%, 92%, 97%), (54%, 85%, 97%), and (55%, 90%, 99%), respectively. A reduced scan volume of 20 × 20 × 10 cm³ (representative of a head and neck scan volume) consisting of 420 points was also analyzed. In this volume, the percentage of points with less than (1.0, 1.5, 2.0 mm) total displacement was (90%, 99%, 100%), (63%, 95%, 100%), and (75%, 96%, 100%), respectively. Without 3D distortion correction, average/maximum displacements were 1.35/3.67, 1.67/4.46, and 1.51/3.89 mm, respectively. Conclusion: The prototype large-field MRI distortion phantom and developed software provide a thorough assessment of 3D spatial distortions in MRI. The distortions measured were acceptable for RT applications, both for the high field strengths and the system configuration developed by ViewRay.
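
    The distortion metric itself reduces to a few lines once grid intersections have been matched between the CT reference and the MR images; a minimal sketch (synthetic coordinates, not the authors' Matlab gradient-search code):

      import numpy as np

      # Hypothetical matched grid intersections in mm: rows are points, columns are x/y/z.
      rng = np.random.default_rng(1)
      ct_points = rng.uniform(-140, 140, size=(1116, 3))
      mri_points = ct_points + rng.normal(0, 0.6, size=(1116, 3))

      displacement = np.linalg.norm(mri_points - ct_points, axis=1)   # 3D distance per grid point
      print(f"average {displacement.mean():.2f} mm, maximum {displacement.max():.2f} mm")
      print(f"fraction of points below 1.0 mm: {(displacement < 1.0).mean():.0%}")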

  7. Existe sujeito em Michel Maffesoli?

    Directory of Open Access Journals (Sweden)

    Marli Appel da Silva

    2010-06-01

    Full Text Available This essay discusses the conception of the subject in Michel Maffesoli's theoretical approach. This author's ideas are in vogue in some academic circles in Brazil and are disseminated by some media outlets with wide national circulation. However, throughout his works, the assumptions that define who the Maffesolian subject is remain poorly clarified. Therefore, to achieve its stated objective, this essay develops an analysis of Maffesolian epistemology and ontology in order to understand the origins of this author's assumptions, that is, the theories and the authors on which Maffesoli based himself in developing a view of the subject. With this understanding, it seeks to answer the question: does a subject exist in Maffesoli's theoretical approach?

  8. Why firewalls need not exist

    Energy Technology Data Exchange (ETDEWEB)

    Nomura, Yasunori [Berkeley Center for Theoretical Physics, Department of Physics, University of California, Berkeley, CA 94720 (United States); Theoretical Physics Group, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Salzetta, Nico, E-mail: nsalzetta@berkeley.edu [Berkeley Center for Theoretical Physics, Department of Physics, University of California, Berkeley, CA 94720 (United States); Theoretical Physics Group, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States)

    2016-10-10

    The firewall paradox for black holes is often viewed as indicating a conflict between unitarity and the equivalence principle. We elucidate how the paradox manifests as a limitation of semiclassical theory, rather than presents a conflict between fundamental principles. Two principal features of the fundamental and semiclassical theories address two versions of the paradox: the entanglement and typicality arguments. First, the physical Hilbert space describing excitations on a fixed black hole background in the semiclassical theory is exponentially smaller than the number of physical states in the fundamental theory of quantum gravity. Second, in addition to the Hilbert space for physical excitations, the semiclassical theory possesses an unphysically large Fock space built by creation and annihilation operators on the fixed black hole background. Understanding these features not only eliminates the necessity of firewalls but also leads to a new picture of Hawking emission contrasting pair creation at the horizon.

  9. Why firewalls need not exist

    Directory of Open Access Journals (Sweden)

    Yasunori Nomura

    2016-10-01

    Full Text Available The firewall paradox for black holes is often viewed as indicating a conflict between unitarity and the equivalence principle. We elucidate how the paradox manifests as a limitation of semiclassical theory, rather than presents a conflict between fundamental principles. Two principal features of the fundamental and semiclassical theories address two versions of the paradox: the entanglement and typicality arguments. First, the physical Hilbert space describing excitations on a fixed black hole background in the semiclassical theory is exponentially smaller than the number of physical states in the fundamental theory of quantum gravity. Second, in addition to the Hilbert space for physical excitations, the semiclassical theory possesses an unphysically large Fock space built by creation and annihilation operators on the fixed black hole background. Understanding these features not only eliminates the necessity of firewalls but also leads to a new picture of Hawking emission contrasting pair creation at the horizon.

  10. Why firewalls need not exist

    Science.gov (United States)

    Nomura, Yasunori; Salzetta, Nico

    2016-10-01

    The firewall paradox for black holes is often viewed as indicating a conflict between unitarity and the equivalence principle. We elucidate how the paradox manifests as a limitation of semiclassical theory, rather than presents a conflict between fundamental principles. Two principal features of the fundamental and semiclassical theories address two versions of the paradox: the entanglement and typicality arguments. First, the physical Hilbert space describing excitations on a fixed black hole background in the semiclassical theory is exponentially smaller than the number of physical states in the fundamental theory of quantum gravity. Second, in addition to the Hilbert space for physical excitations, the semiclassical theory possesses an unphysically large Fock space built by creation and annihilation operators on the fixed black hole background. Understanding these features not only eliminates the necessity of firewalls but also leads to a new picture of Hawking emission contrasting pair creation at the horizon.

  11. Why firewalls need not exist

    International Nuclear Information System (INIS)

    Nomura, Yasunori; Salzetta, Nico

    2016-01-01

    The firewall paradox for black holes is often viewed as indicating a conflict between unitarity and the equivalence principle. We elucidate how the paradox manifests as a limitation of semiclassical theory, rather than presents a conflict between fundamental principles. Two principal features of the fundamental and semiclassical theories address two versions of the paradox: the entanglement and typicality arguments. First, the physical Hilbert space describing excitations on a fixed black hole background in the semiclassical theory is exponentially smaller than the number of physical states in the fundamental theory of quantum gravity. Second, in addition to the Hilbert space for physical excitations, the semiclassical theory possesses an unphysically large Fock space built by creation and annihilation operators on the fixed black hole background. Understanding these features not only eliminates the necessity of firewalls but also leads to a new picture of Hawking emission contrasting pair creation at the horizon.

  12. Novel web-based real-time dashboard to optimize recycling and use of red cell units at a large multi-site transfusion service

    Directory of Open Access Journals (Sweden)

    Christopher Sharpe

    2014-01-01

    Full Text Available Background: Effective blood inventory management reduces outdates of blood products. Multiple strategies have been employed to reduce the rate of red blood cell (RBC) unit outdate. We designed an automated real-time web-based dashboard interfaced with our laboratory information system to effectively recycle red cell units. The objective of our approach is to decrease RBC outdate rates within our transfusion service. Methods: The dashboard was deployed in August 2011 and is accessed by a shortcut that was placed on the desktops of all blood transfusion services computers in the Capital District Health Authority region. It was designed to refresh automatically every 10 min. The dashboard provides all vital information on RBC units, and implemented a color coding scheme to indicate an RBC unit's proximity to expiration. Results: The overall RBC unit outdate rate in the 7 months period following implementation of the dashboard (September 2011-March 2012) was 1.24% (123 units outdated/9763 units received), compared to similar periods in 2010-2011 and 2009-2010: 2.03% (188/9395) and 2.81% (261/9220), respectively. The odds ratio of a RBC unit outdate postdashboard (2011-2012) compared with 2010-2011 was 0.625 (95% confidence interval: 0.497-0.786; P < 0.0001). Conclusion: Our dashboard system is an inexpensive and novel blood inventory management system which was associated with a significant reduction in RBC unit outdate rates at our institution over a period of 7 months. This system, or components of it, could be a useful addition to existing RBC management systems at other institutions.

  13. Novel web-based real-time dashboard to optimize recycling and use of red cell units at a large multi-site transfusion service.

    Science.gov (United States)

    Sharpe, Christopher; Quinn, Jason G; Watson, Stephanie; Doiron, Donald; Crocker, Bryan; Cheng, Calvino

    2014-01-01

    Effective blood inventory management reduces outdates of blood products. Multiple strategies have been employed to reduce the rate of red blood cell (RBC) unit outdate. We designed an automated real-time web-based dashboard interfaced with our laboratory information system to effectively recycle red cell units. The objective of our approach is to decrease RBC outdate rates within our transfusion service. The dashboard was deployed in August 2011 and is accessed by a shortcut that was placed on the desktops of all blood transfusion services computers in the Capital District Health Authority region. It was designed to refresh automatically every 10 min. The dashboard provides all vital information on RBC units, and implemented a color coding scheme to indicate an RBC unit's proximity to expiration. The overall RBC unit outdate rate in the 7 months period following implementation of the dashboard (September 2011-March 2012) was 1.24% (123 units outdated/9763 units received), compared to similar periods in 2010-2011 and 2009-2010: 2.03% (188/9395) and 2.81% (261/9220), respectively. The odds ratio of a RBC unit outdate postdashboard (2011-2012) compared with 2010-2011 was 0.625 (95% confidence interval: 0.497-0.786; P < 0.0001). Our dashboard system is an inexpensive and novel blood inventory management system which was associated with a significant reduction in RBC unit outdate rates at our institution over a period of 7 months. This system, or components of it, could be a useful addition to existing RBC management systems at other institutions.
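
    The color-coding scheme mentioned in both records is essentially a threshold map from remaining shelf life to a display color; a hypothetical sketch (the dashboard's actual thresholds are not given in the abstract):

      from datetime import date

      def expiry_color(expiry: date, today: date) -> str:
          """Map an RBC unit's remaining shelf life to a dashboard color (hypothetical thresholds)."""
          days_left = (expiry - today).days
          if days_left <= 7:
              return "red"      # recycle to a high-use site immediately
          if days_left <= 14:
              return "yellow"   # flag for redistribution
          return "green"        # no action needed

      print(expiry_color(date(2014, 1, 20), date(2014, 1, 15)))  # -> red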

  14. Network dynamics with BrainX(3): a large-scale simulation of the human brain network with real-time interaction.

    Science.gov (United States)

    Arsiwalla, Xerxes D; Zucca, Riccardo; Betella, Alberto; Martinez, Enrique; Dalmazzo, David; Omedas, Pedro; Deco, Gustavo; Verschure, Paul F M J

    2015-01-01

    BrainX(3) is a large-scale simulation of human brain activity with real-time interaction, rendered in 3D in a virtual reality environment, which combines computational power with human intuition for the exploration and analysis of complex dynamical networks. We ground this simulation on structural connectivity obtained from diffusion spectrum imaging data and model it on neuronal population dynamics. Users can interact with BrainX(3) in real-time by perturbing brain regions with transient stimulations to observe reverberating network activity, simulate lesion dynamics or implement network analysis functions from a library of graph theoretic measures. BrainX(3) can thus be used as a novel immersive platform for exploration and analysis of dynamical activity patterns in brain networks, both at rest or in a task-related state, for discovery of signaling pathways associated to brain function and/or dysfunction and as a tool for virtual neurosurgery. Our results demonstrate these functionalities and shed insight on the dynamics of the resting-state attractor. Specifically, we found that a noisy network seems to favor a low firing attractor state. We also found that the dynamics of a noisy network is less resilient to lesions. Our simulations on TMS perturbations show that even though TMS inhibits most of the network, it also sparsely excites a few regions. This is presumably due to anti-correlations in the dynamics and suggests that even a lesioned network can show sparsely distributed increased activity compared to healthy resting-state, over specific brain areas.

  15. Network dynamics with BrainX3: a large-scale simulation of the human brain network with real-time interaction

    Science.gov (United States)

    Arsiwalla, Xerxes D.; Zucca, Riccardo; Betella, Alberto; Martinez, Enrique; Dalmazzo, David; Omedas, Pedro; Deco, Gustavo; Verschure, Paul F. M. J.

    2015-01-01

    BrainX3 is a large-scale simulation of human brain activity with real-time interaction, rendered in 3D in a virtual reality environment, which combines computational power with human intuition for the exploration and analysis of complex dynamical networks. We ground this simulation on structural connectivity obtained from diffusion spectrum imaging data and model it on neuronal population dynamics. Users can interact with BrainX3 in real-time by perturbing brain regions with transient stimulations to observe reverberating network activity, simulate lesion dynamics or implement network analysis functions from a library of graph theoretic measures. BrainX3 can thus be used as a novel immersive platform for exploration and analysis of dynamical activity patterns in brain networks, both at rest or in a task-related state, for discovery of signaling pathways associated to brain function and/or dysfunction and as a tool for virtual neurosurgery. Our results demonstrate these functionalities and shed insight on the dynamics of the resting-state attractor. Specifically, we found that a noisy network seems to favor a low firing attractor state. We also found that the dynamics of a noisy network is less resilient to lesions. Our simulations on TMS perturbations show that even though TMS inhibits most of the network, it also sparsely excites a few regions. This is presumably due to anti-correlations in the dynamics and suggests that even a lesioned network can show sparsely distributed increased activity compared to healthy resting-state, over specific brain areas. PMID:25759649

  16. On the application of the classic Kessler and Berry schemes in Large Eddy Simulation models with a particular emphasis on cloud autoconversion, the onset time of precipitation and droplet evaporation

    Directory of Open Access Journals (Sweden)

    S. Ghosh

    Full Text Available Many Large Eddy Simulation (LES) models use the classic Kessler parameterisation either as it is or in a modified form to model the process of cloud water autoconversion into precipitation. The Kessler scheme, being linear, is particularly useful and is computationally straightforward to implement. However, a major limitation with this scheme lies in its inability to predict different autoconversion rates for maritime and continental clouds. In contrast, the Berry formulation overcomes this difficulty, although it is cubic. Due to their different forms, it is difficult to match the two solutions to each other. In this paper we single out the processes of cloud conversion and accretion operating in a deep model cloud and neglect the advection terms for simplicity. This facilitates exact analytical integration and we are able to derive new expressions for the time of onset of precipitation using both the Kessler and Berry formulations. We then discuss the conditions when the two schemes are equivalent. Finally, we also critically examine the process of droplet evaporation within the framework of the classic Kessler scheme. We improve the existing parameterisation with an accurate estimation of the diffusional mass transport of water vapour. We then demonstrate the overall robustness of our calculations by comparing our results with the experimental observations of Beard and Pruppacher, and find excellent agreement.

    Key words. Atmospheric composition and structure · Cloud physics and chemistry · Pollution · Meteorology and atmospheric dynamics · Precipitation
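
    For reference, a schematic form of the two autoconversion parameterisations being compared (textbook versions with generic coefficients k_1, c_1, c_2, not the exact constants used in the paper). Kessler's rate is linear in the cloud-water mixing ratio q_c above a threshold, while Berry's involves the droplet number concentration N_0 and relative dispersion D_0 and is effectively cubic in q_c:

      \left(\frac{\partial q_r}{\partial t}\right)_{\mathrm{Kessler}} = k_1\,(q_c - q_{c,\mathrm{crit}})\,H(q_c - q_{c,\mathrm{crit}}),
      \qquad
      \left(\frac{\partial q_r}{\partial t}\right)_{\mathrm{Berry}} = \frac{q_c^{2}}{c_1 + c_2\,N_0/(D_0\,q_c)} = \frac{D_0\,q_c^{3}}{c_1 D_0\,q_c + c_2 N_0}.

    The explicit dependence on N_0 and D_0 is what lets the Berry formulation distinguish maritime from continental clouds, which the linear Kessler scheme cannot do.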

  17. On the application of the classic Kessler and Berry schemes in Large Eddy Simulation models with a particular emphasis on cloud autoconversion, the onset time of precipitation and droplet evaporation

    Directory of Open Access Journals (Sweden)

    S. Ghosh

    1998-05-01

    Full Text Available Many Large Eddy Simulation (LES) models use the classic Kessler parameterisation either as it is or in a modified form to model the process of cloud water autoconversion into precipitation. The Kessler scheme, being linear, is particularly useful and is computationally straightforward to implement. However, a major limitation with this scheme lies in its inability to predict different autoconversion rates for maritime and continental clouds. In contrast, the Berry formulation overcomes this difficulty, although it is cubic. Due to their different forms, it is difficult to match the two solutions to each other. In this paper we single out the processes of cloud conversion and accretion operating in a deep model cloud and neglect the advection terms for simplicity. This facilitates exact analytical integration and we are able to derive new expressions for the time of onset of precipitation using both the Kessler and Berry formulations. We then discuss the conditions when the two schemes are equivalent. Finally, we also critically examine the process of droplet evaporation within the framework of the classic Kessler scheme. We improve the existing parameterisation with an accurate estimation of the diffusional mass transport of water vapour. We then demonstrate the overall robustness of our calculations by comparing our results with the experimental observations of Beard and Pruppacher, and find excellent agreement. Key words. Atmospheric composition and structure · Cloud physics and chemistry · Pollution · Meteorology and atmospheric dynamics · Precipitation

  18. QUASI-STELLAR OBJECT SELECTION ALGORITHM USING TIME VARIABILITY AND MACHINE LEARNING: SELECTION OF 1620 QUASI-STELLAR OBJECT CANDIDATES FROM MACHO LARGE MAGELLANIC CLOUD DATABASE

    International Nuclear Information System (INIS)

    Kim, Dae-Won; Protopapas, Pavlos; Alcock, Charles; Trichas, Markos; Byun, Yong-Ik; Khardon, Roni

    2011-01-01

    We present a new quasi-stellar object (QSO) selection algorithm using a Support Vector Machine, a supervised classification method, on a set of extracted time series features including period, amplitude, color, and autocorrelation value. We train a model that separates QSOs from variable stars, non-variable stars, and microlensing events using 58 known QSOs, 1629 variable stars, and 4288 non-variables in the MAssive Compact Halo Object (MACHO) database as a training set. To estimate the efficiency and the accuracy of the model, we perform a cross-validation test using the training set. The test shows that the model correctly identifies ∼80% of known QSOs with a 25% false-positive rate. The majority of the false positives are Be stars. We applied the trained model to the MACHO Large Magellanic Cloud (LMC) data set, which consists of 40 million light curves, and found 1620 QSO candidates. During the selection none of the 33,242 known MACHO variables were misclassified as QSO candidates. In order to estimate the true false-positive rate, we crossmatched the candidates with astronomical catalogs including the Spitzer Surveying the Agents of a Galaxy's Evolution LMC catalog and a few X-ray catalogs. The results further suggest that the majority of the candidates, more than 70%, are QSOs.
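
    A minimal sketch of the classification step (scikit-learn SVM on hypothetical light-curve features; the study uses its own feature extraction and the MACHO training set described above):

      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      # Hypothetical features per light curve: period, amplitude, color, autocorrelation value.
      X = rng.standard_normal((500, 4))
      y = rng.integers(0, 2, 500)                    # 1 = QSO, 0 = other variable / non-variable

      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
      clf.fit(X, y)
      qso_score = clf.predict_proba(X[:5])[:, 1]     # probability of being a QSO candidate
      print(qso_score)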

  19. Maximum swimming speeds of sailfish and three other large marine predatory fish species based on muscle contraction time and stride length: a myth revisited

    Directory of Open Access Journals (Sweden)

    Morten B. S. Svendsen

    2016-10-01

    Full Text Available Billfishes are considered to be among the fastest swimmers in the oceans. Previous studies have estimated maximum speed of sailfish and black marlin at around 35 m s−1 but theoretical work on cavitation predicts that such extreme speed is unlikely. Here we investigated maximum speed of sailfish, and three other large marine pelagic predatory fish species, by measuring the twitch contraction time of anaerobic swimming muscle. The highest estimated maximum swimming speeds were found in sailfish (8.3±1.4 m s−1), followed by barracuda (6.2±1.0 m s−1), little tunny (5.6±0.2 m s−1) and dorado (4.0±0.9 m s−1); although size-corrected performance was highest in little tunny and lowest in sailfish. Contrary to previously reported estimates, our results suggest that sailfish are incapable of exceeding swimming speeds of 10-15 m s−1, which corresponds to the speed at which cavitation is predicted to occur, with destructive consequences for fin tissues.
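
    The estimate rests on a simple kinematic relation: the maximum tail-beat frequency is bounded by the twitch contraction time of the lateral swimming muscle (one twitch per side per cycle), and maximum speed is that frequency times the distance travelled per tail beat (the stride length). Schematically,

      U_{\max} \approx f_{\max}\,\ell_{\mathrm{stride}}, \qquad f_{\max} \approx \frac{1}{2\,t_{\mathrm{twitch}}}.

    With purely illustrative numbers, a twitch time of 0.05 s gives f_max = 10 Hz, and a stride length of 0.7 body lengths on a 1.2 m fish gives 0.84 m per beat, i.e. roughly 8 m s−1, the same order as the sailfish estimate above.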

  20. Black Hole Caught Zapping Galaxy into Existence?

    Science.gov (United States)

    2009-11-01

    equivalent to about 350 Suns per year, one hundred times more than rates for typical galaxies in the local Universe. Earlier observations had shown that the companion galaxy is, in fact, under fire: the quasar is spewing a jet of highly energetic particles towards its companion, accompanied by a stream of fast-moving gas. The injection of matter and energy into the galaxy indicates that the quasar itself might be inducing the formation of stars and thereby creating its own host galaxy; in such a scenario, galaxies would have evolved from clouds of gas hit by the energetic jets emerging from quasars. "The two objects are bound to merge in the future: the quasar is moving at a speed of only a few tens of thousands of km/h with respect to the companion galaxy and their separation is only about 22 000 light-years," says Elbaz. "Although the quasar is still 'naked', it will eventually be 'dressed' when it merges with its star-rich companion. It will then finally reside inside a host galaxy like all other quasars." Hence, the team have identified black hole jets as a possible driver of galaxy formation, which may also represent the long-sought missing link to understanding why the mass of black holes is larger in galaxies that contain more stars [3]. "A natural extension of our work is to search for similar objects in other systems," says Jahnke. Future instruments, such as the Atacama Large Millimeter/submillimeter Array, the European Extremely Large Telescope and the NASA/ESA/CSA James Webb Space Telescope will be able to search for such objects at even larger distances from us, probing the connection between black holes and the formation of galaxies in the more distant Universe. Notes [1] Supermassive black holes are found in the cores of most large galaxies; unlike the inactive and starving one sitting at the centre of the Milky Way, a fraction of them are said to be active, as they eat up enormous amounts of material. These frantic actions produce a copious release of energy

  1. Integrating existing software toolkits into VO system

    Science.gov (United States)

    Cui, Chenzhou; Zhao, Yong-Heng; Wang, Xiaoqian; Sang, Jian; Luo, Ze

    2004-09-01

    Virtual Observatory (VO) is a collection of interoperating data archives and software tools. Taking advantage of the latest information technologies, it aims to provide a data-intensive online research environment for astronomers all around the world. A large number of high-quality astronomical software packages and libraries are powerful and easy to use, and have been widely used by astronomers for many years. Integrating those toolkits into the VO system is a necessary and important task for the VO developers. VO architecture greatly depends on Grid and Web services; consequently, the general VO integration route is "Java Ready - Grid Ready - VO Ready". In the paper, we discuss the importance of VO integration for existing toolkits and discuss the possible solutions. We introduce two efforts in the field from the China-VO project, "gImageMagick" and "Galactic abundance gradients statistical research under grid environment". We also discuss what additional work should be done to convert Grid services to VO services.

  2. Sustainability in the existing building stock

    DEFF Research Database (Denmark)

    Elle, Morten; Nielsen, Susanne Balslev; Hoffmann, Birgitte

    2005-01-01

    This paper explores the role of Facilities Management in relation to sustainable development in the existing building stock. Facilities management is a concept still developing as the management of buildings is becoming more and more professional. Many recognize today that facilities management is a concept relevant to others than large companies. Managing the flows of energy and other resources is a part of facilities management, and an increased professionalism could lead to the reduction of the use of energy and water and the generation of waste and wastewater. This is, however, not facilities management's most important contribution to sustainable development in the built environment. Space management is an essential tool in facilities management – and it could be considered a powerful tool in sustainable development; remembering that the building not being built is perhaps the most...

  3. Compilation of Existing Neutron Screen Technology

    Directory of Open Access Journals (Sweden)

    N. Chrysanthopoulou

    2014-01-01

    Full Text Available The presence of fast neutron spectra in new reactors is expected to induce a strong impact on the contained materials, including structural materials, nuclear fuels, neutron reflecting materials, and tritium breeding materials. Therefore, introduction of these reactors into operation will require extensive testing of their components, which must be performed under neutronic conditions representative of those expected to prevail inside the reactor cores when in operation. Due to limited availability of fast reactors, testing of future reactor materials will mostly take place in water cooled material test reactors (MTRs) by tailoring the neutron spectrum via neutron screens. The latter rely on the utilization of materials capable of absorbing neutrons at specific energy. A large but fragmented experience is available on that topic. In this work a comprehensive compilation of the existing neutron screen technology is attempted, focusing on neutron screens developed in order to locally enhance the fast over thermal neutron flux ratio in a reactor core.

  4. Determinant factors of residential consumption and perception of energy conservation: Time-series analysis by large-scale questionnaire in Suita, Japan

    International Nuclear Information System (INIS)

    Hara, Keishiro; Uwasu, Michinori; Kishita, Yusuke; Takeda, Hiroyuki

    2015-01-01

    In this study, we examined determinant factors associated with the residential consumption and perception of savings of electricity and city gas; this was based on data collected from a large-scale questionnaire sent to households in Suita, Osaka Prefecture, Japan, in two different years: 2009 and 2013. We applied an ordered logit model to determine the overall trend of the determinant factors, and then we performed a more detailed analysis in order to understand the reasons why the determinant factors changed between the two periods. Results from the ordered logit model reveal that electricity and gas consumption was primarily determined by such factors as household income, number of family members, the number of home appliances, and the perceptions of energy savings; there was not much difference between the two years, although in 2013, household income did not affect the perception of energy savings. Detailed analysis demonstrated that households with high energy consumption and those with moderate consumption are becoming polarized and that there was a growing gap between consumption behavior and the perception of conservation. The implications derived from the analyses provide an essential insight into the design of a municipal policy to induce lifestyle changes for an energy-saving society. - Highlights: • Questionnaire was conducted to households in two years for time-series analysis. • We analyzed residential energy consumption and perception of savings in households. • Determinant factors for consumption and perception of savings were identified. • Households being wasteful of energy are also found willing to cut consumption. • Policy intervention could affect consumption pattern and perception of savings.
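
    A sketch of the ordered logit step with hypothetical survey coding; statsmodels' OrderedModel is one way to fit such a model (assuming a reasonably recent statsmodels version):

      import numpy as np
      import pandas as pd
      from statsmodels.miscmodels.ordinal_model import OrderedModel

      rng = np.random.default_rng(0)
      n = 300
      predictors = pd.DataFrame({
          "income": rng.integers(1, 6, n),        # hypothetical income bracket
          "members": rng.integers(1, 7, n),       # household size
          "appliances": rng.integers(1, 15, n),   # number of home appliances
      })
      consumption = rng.integers(0, 3, n)         # ordinal outcome: 0 low, 1 medium, 2 high

      model = OrderedModel(consumption, predictors, distr="logit")
      result = model.fit(method="bfgs", disp=0)
      print(result.params)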

  5. The Potential Applications of Real-Time Monitoring of Water Quality in a Large Shallow Lake (Lake Taihu, China) Using a Chromophoric Dissolved Organic Matter Fluorescence Sensor

    Directory of Open Access Journals (Sweden)

    Cheng Niu

    2014-06-01

    Full Text Available This study presents results from field surveys performed over various seasons in a large, eutrophic, shallow lake (Lake Taihu, China) using an in situ chromophoric dissolved organic matter (CDOM) fluorescence sensor as a surrogate for other water quality parameters. These measurements identified highly significant empirical relationships between CDOM concentration measured using the in situ fluorescence sensor and CDOM absorption, fluorescence, dissolved organic carbon (DOC), chemical oxygen demand (COD) and total phosphorus (TP) concentrations. CDOM concentration, expressed in quinine sulfate equivalent units, was highly correlated with the CDOM absorption coefficient (r2 = 0.80, p < 0.001), fluorescence intensities (Ex./Em. 370/460 nm) (r2 = 0.91, p < 0.001), the fluorescence index (r2 = 0.88, p < 0.001) and the humification index (r2 = 0.78, p < 0.001), suggesting that CDOM concentration measured using the in situ fluorescence sensor could act as a substitute for the CDOM absorption coefficient and fluorescence measured in the laboratory. Similarly, CDOM concentration was highly correlated with DOC concentration (r2 = 0.68, p < 0.001), indicating that in situ CDOM fluorescence sensor measurements could be a proxy for DOC concentration. In addition, significant positive correlations were found between laboratory CDOM absorption coefficients and COD (r2 = 0.83, p < 0.001) and TP (r2 = 0.82, p < 0.001) concentrations, suggesting a potential further application for the real-time monitoring of water quality using an in situ CDOM fluorescence sensor.
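    As a minimal sketch of how such a surrogate relationship could be calibrated, the snippet below fits a simple linear regression between sensor CDOM readings and laboratory DOC values. All numbers are invented placeholders, not data from the study.

```python
# Hedged sketch: calibrate an in situ CDOM fluorescence reading (QSU) as a
# proxy for laboratory DOC; the values below are illustrative only.
import numpy as np
from scipy import stats

cdom_qsu = np.array([4.1, 5.8, 7.2, 9.0, 11.5, 13.3])   # sensor readings (hypothetical)
doc_mgl  = np.array([3.0, 3.9, 4.6, 5.4, 6.8, 7.5])     # lab DOC in mg/L (hypothetical)

fit = stats.linregress(cdom_qsu, doc_mgl)
print(f"DOC ~ {fit.slope:.2f} * CDOM + {fit.intercept:.2f}, r^2 = {fit.rvalue**2:.2f}")

# Once calibrated, a new sensor reading can be converted to an estimated DOC:
doc_est = fit.slope * 8.5 + fit.intercept
```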

  6. Large-scale temperature and salinity changes in the upper Canadian Basin of the Arctic Ocean at a time of a drastic Arctic Oscillation inversion

    Directory of Open Access Journals (Sweden)

    P. Bourgain

    2013-04-01

    Full Text Available Between 2008 and 2010, the Arctic Oscillation index over Arctic regions shifted from positive values corresponding to more cyclonic conditions prevailing during the 4th International Polar Year (IPY) period (2007–2008) to extremely negative values corresponding to strong anticyclonic conditions in 2010. In this context, we investigated the recent large-scale evolution of the upper western Arctic Ocean, based on temperature and salinity summertime observations collected during icebreaker campaigns and from ice-tethered profilers (ITPs) drifting across the region in 2008 and 2010. In particular, we focused on (1) the freshwater content, which was extensively studied during previous years, (2) the near-surface temperature maximum due to incoming solar radiation, and (3) the water masses advected from the Pacific Ocean into the Arctic Ocean. The observations revealed a freshwater content change in the Canadian Basin during this time period. South of 80° N, the freshwater content increased, while north of 80° N, less freshening occurred in 2010 compared to 2008. This was most likely due to the strong anticyclonicity characteristic of a low AO index mode, which enhanced both wind-generated Ekman pumping in the Beaufort Gyre and a possible diversion of the Siberian River runoff toward the Eurasian Basin at the same time. The near-surface temperature maximum due to incoming solar radiation was almost 1 °C colder in the southern Canada Basin (south of 75° N) in 2010 compared to 2008, which contrasted with the positive trend observed during previous years. This was most likely due to higher summer sea ice concentration in 2010 compared to 2008 in that region, and to a surface albedo feedback reflecting more solar radiation back into space. The Pacific water (PaW) was also subject to strong spatial and temporal variability between 2008 and 2010. In the Canada Basin, both summer and winter PaW signatures were stronger between 75° N and 80° N. This was most likely

  7. Large Scale Metric Learning for Distance-Based Image Classification on Open Ended Data Sets

    NARCIS (Netherlands)

    Mensink, T.; Verbeek, J.; Perronnin, F.; Csurka, G.; Farinella, G.M.; Battiato, S.; Cipolla, R.

    2013-01-01

    Many real-life large-scale datasets are open-ended and dynamic: new images are continuously added to existing classes, new classes appear over time, and the semantics of existing classes might evolve too. Therefore, we study large-scale image classification methods that can incorporate new classes

  8. High-energy radiation from thunderstorms and lightning with LOFT. White Paper in Support of the Mission Concept of the Large Observatory for X-ray Timing

    DEFF Research Database (Denmark)

    Marisaldi, M.; Smith, D. M.; Brandt, Søren

    has been continued, aiming at the new M4 launch opportunity, for which the M3 science goals have been confirmed. The unprecedentedly large effective area, large grasp, and spectroscopic capabilities of LOFT’s instruments make the mission capable of state-of-the-art science not only for its core...

  9. The use of real-time off-site observations as a methodology for increasing forecast skill in prediction of large wind power ramps one or more hours ahead of their impact on a wind plant.

    Energy Technology Data Exchange (ETDEWEB)

    Martin Wilde, Principal Investigator

    2012-12-31

    ABSTRACT: Application of Real-Time Offsite Measurements in Improved Short-Term Wind Ramp Prediction Skill. Improved forecasting performance immediately preceding wind ramp events is of preeminent concern to most wind energy companies, system operators, and balancing authorities. The value of near real-time hub height-level wind data and more general meteorological measurements to short-term wind power forecasting is well understood. For some sites, access to onsite measured wind data - even historical - can reduce forecast error in the short-range to medium-range horizons by as much as 50%. Unfortunately, valuable free-stream wind measurements at tall towers are not typically available at most wind plants, thereby forcing wind forecasters to rely upon wind measurements below hub height and/or turbine nacelle anemometry. Free-stream measurements can be appropriately scaled to hub-height levels, using existing empirically-derived relationships that account for surface roughness and turbulence. But there is large uncertainty in these relationships for a given time of day and state of the boundary layer. Alternatively, forecasts can rely entirely on turbine anemometry measurements, though such measurements are themselves subject to wake effects that are not stationary. The void in free-stream hub-height level measurements of wind can be filled by remote sensing (e.g., sodar, lidar, and radar). However, the expense of such equipment may not be sustainable. There is a growing market for traditional anemometry on tall tower networks, maintained by third parties to the forecasting process (i.e., independent of forecasters and the forecast users). This study examines the value of offsite tall-tower data from the WINDataNOW Technology network for short-horizon wind power predictions at a wind farm in northern Montana. The presentation shall describe successful physical and statistical techniques for its application and the practicality of its application in an operational
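    The hub-height scaling mentioned above is commonly done with a logarithmic (or power-law) wind shear profile. The sketch below uses the neutral-stability log law; the roughness length is an assumed placeholder, not a value from the study.

```python
# Hedged sketch: extrapolate a tall-tower wind speed to hub height with the
# neutral log-law profile U(z) proportional to ln(z / z0). z0 is illustrative.
import math

def log_law_extrapolate(u_ref, z_ref, z_target, z0=0.03):
    """Scale wind speed from height z_ref to z_target (heights in m, speeds in m/s)."""
    return u_ref * math.log(z_target / z0) / math.log(z_ref / z0)

u_hub = log_law_extrapolate(u_ref=7.5, z_ref=60.0, z_target=80.0)  # roughly 7.8 m/s
```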

  10. Youth Mental Health Services Utilization Rates After a Large-Scale Social Media Campaign: Population-Based Interrupted Time-Series Analysis.

    Science.gov (United States)

    Booth, Richard G; Allen, Britney N; Bray Jenkyn, Krista M; Li, Lihua; Shariff, Salimah Z

    2018-04-06

    Despite the uptake of mass media campaigns, their overall impact remains unclear. Since 2011, a Canadian telecommunications company has operated an annual, large-scale mental health advocacy campaign (Bell Let's Talk) focused on mental health awareness and stigma reduction. In February 2012, the campaign began to explicitly leverage the social media platform Twitter and incented participation from the public by promising donations of Can $0.05 for each interaction with a campaign-specific username (@Bell_LetsTalk). The intent of the study was to examine the impact of this 2012 campaign on youth outpatient mental health services in the province of Ontario, Canada. Monthly outpatient mental health visits (primary health care and psychiatric services) were obtained for the Ontario youth aged 10 to 24 years (approximately 5.66 million visits) from January 1, 2006 to December 31, 2015. Interrupted time series, autoregressive integrated moving average modeling was implemented to evaluate the impact of the campaign on rates of monthly outpatient mental health visits. A lagged intervention date of April 1, 2012 was selected to account for the delay required for a patient to schedule and attend a mental health-related physician visit. The inclusion of Twitter into the 2012 Bell Let's Talk campaign was temporally associated with an increase in outpatient mental health utilization for both males and females. Within primary health care environments, female adolescents aged 10 to 17 years experienced a monthly increase in the mental health visit rate from 10.2/1000 in April 2006 to 14.1/1000 in April 2015 (slope change of 0.094 following campaign, Pcampaign, Pcampaign (slope change of 0.005, P=.02; slope change of 0.003, P=.005, respectively). For young adults aged 18 to 24 years, females who used primary health care experienced the most significant increases in mental health visit rates from 26.5/1000 in April 2006 to 29.2/1000 in April 2015 (slope change of 0.17 following
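    The interrupted time-series design described above can be sketched as an ARIMA-type model with intervention regressors whose level and slope change after the lagged campaign date. The snippet below is a minimal sketch on synthetic monthly rates, assuming statsmodels' SARIMAX; it is not the authors' model specification.

```python
# Hedged sketch: interrupted time series with ARIMA errors and level/slope
# intervention regressors; the series, model order and values are synthetic.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

dates = pd.date_range("2006-01-01", "2015-12-01", freq="MS")   # monthly visit rates
rng = np.random.default_rng(0)
rate = 10 + 0.01 * np.arange(len(dates)) + rng.normal(0, 0.3, len(dates))

intervention = pd.Timestamp("2012-04-01")            # lagged campaign date
level = (dates >= intervention).astype(float)        # step change after the campaign
t0 = int(np.argmax(dates >= intervention))
slope = np.clip(np.arange(len(dates)) - t0, 0, None) # months elapsed since the campaign
exog = np.column_stack([level, slope])

model = SARIMAX(rate, exog=exog, order=(1, 0, 0), trend="ct")
res = model.fit(disp=False)
print(res.params)   # the second exog coefficient estimates the post-campaign slope change
```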

  11. Seismic evaluation of existing nuclear power plants

    International Nuclear Information System (INIS)

    2003-01-01

    programmes at existing operating plants are plant specific or regulatory specific. This means that this report is meant to define the minimum generic requirements and may need to be supplemented on a plant specific basis to consider particular aspects of the original design basis. Among the options available, two methods are particularly appropriate for assessing the seismic safety of facilities, the seismic margin assessment (SMA) method and the seismic probabilistic safety assessment (SPSA) method. Both SMA and SPSA are discussed in this report. Current NPP design criteria and comprehensive seismic design procedures, as applied to the design of new facilities but using a re-evaluated seismic input, may be applied in the seismic evaluation programme. It is noted that these would be a conservative and usually expensive approach for evaluation of an existing operating facility and they are not discussed further in this report. Evaluation of existing NPPs may result in the identification of items of the SSSC list which have to be upgraded. Upgrading itself is not covered by this Safety Report; however, some general principles are presented in order to preserve consistency between evaluation and upgrading processes. (It should be pointed out that when an upgrading programme has to be carried out, it necessitates more engineering resources than the evaluation process does; similarly upgrading is too large and complex a matter to be covered by this Safety Report.) Section 2 presents the general philosophy of seismic evaluation; Section 3 discusses data collection and investigations; Section 4 is devoted to seismic hazard assessment; Section 5 discusses the safety analysis of the NPP; Section 6 discusses the practice of walkdown; Section 7 covers the criteria and methods used for seismic capacity assessment of SSSCs; Section 8 discusses the principle of the design of a possible seismic upgrading; Section 9 specifies some rules of quality assurance and organization

  12. 10 CFR 4.127 - Existing facilities.

    Science.gov (United States)

    2010-01-01

    ... 10 Energy 1 2010-01-01 2010-01-01 false Existing facilities. 4.127 Section 4.127 Energy NUCLEAR... 1973, as Amended Discriminatory Practices § 4.127 Existing facilities. (a) Accessibility. A recipient... make each of its existing facilities or every part of an existing facility accessible to and usable by...

  13. Existing ingestion guidance: Problems and recommendations

    Energy Technology Data Exchange (ETDEWEB)

    Mooney, Robert R; Ziegler, Gordon L; Peterson, Donald S [Environmental Radiation Section, Division of Radiation Protection, WA (United States)

    1989-09-01

    Washington State has been developing plans and procedures for responding to nuclear accidents since the early 1970s. A key part of this process has been formulating a method for calculating ingestion pathway concentration guides (CGs). Such a method must be both technically sound and easy to use. This process has been slow and frustrating. However, much technical headway has been made in recent years, and hopefully the experience of the State of Washington will provide useful insight to problems with the existing guidance. Several recommendations are offered on ways to deal with these problems. In January 1986, the state held an ingestion pathway exercise which required the determination of allowed concentrations of isotopes for various foods, based upon reactor source term and field data. Objectives of the exercise were not met because of the complexity of the necessary calculations. A major problem was that the allowed concentrations had to be computed for each isotope and each food group, given assumptions on the average diet. To solve problems identified during that exercise, Washington developed, by March 1986, partitioned CGs. These CGs apportioned doses from each food group for an assumed mix of radionuclides expected to result from a reactor accident. This effort was therefore in place just in time for actual use during the Chernobyl fallout episode in May 1986. This technique was refined and described in a later report and presented at the 1987 annual meeting of the Health Physics Society. Realizing the technical weaknesses which still existed and a need to simplify the numbers for decision makers, Washington State has been developing computer methods to quickly calculate, from an accident specific relative mix of isotopes, CGs which allow a single radionuclide concentration for all food groups. This latest approach allows constant CGs for different periods of time following the accident, instead of peak CGs, which are good only for a short time after the

  14. Existing ingestion guidance: Problems and recommendations

    International Nuclear Information System (INIS)

    Mooney, Robert R.; Ziegler, Gordon L.; Peterson, Donald S.

    1989-01-01

    Washington State has been developing plans and procedures for responding to nuclear accidents since the early 1970s. A key part of this process has been formulating a method for calculating ingestion pathway concentration guides (CGs). Such a method must be both technically sound and easy to use. This process has been slow and frustrating. However, much technical headway has been made in recent years, and hopefully the experience of the State of Washington will provide useful insight to problems with the existing guidance. Several recommendations are offered on ways to deal with these problems. In January 1986, the state held an ingestion pathway exercise which required the determination of allowed concentrations of isotopes for various foods, based upon reactor source term and field data. Objectives of the exercise were not met because of the complexity of the necessary calculations. A major problem was that the allowed concentrations had to be computed for each isotope and each food group, given assumptions on the average diet. To solve problems identified during that exercise, Washington developed, by March 1986, partitioned CGs. These CGs apportioned doses from each food group for an assumed mix of radionuclides expected to result from a reactor accident. This effort was therefore in place just in time for actual use during the Chernobyl fallout episode in May 1986. This technique was refined and described in a later report and presented at the 1987 annual meeting of the Health Physics Society. Realizing the technical weaknesses which still existed and a need to simplify the numbers for decision makers, Washington State has been developing computer methods to quickly calculate, from an accident specific relative mix of isotopes, CGs which allow a single radionuclide concentration for all food groups. This latest approach allows constant CGs for different periods of time following the accident, instead of peak CGs, which are good only for a short time after the

  15. Existing and new techniques in uranium exploration

    International Nuclear Information System (INIS)

    Bowie, S.H.U.; Cameron, J.

    1976-01-01

    The demands on uranium exploration over the next 25 years will be very great indeed and will call for every possible means of improvement in exploration capability. The first essential is to increase geological knowledge of the mode of occurrence of uranium ore deposits. The second is to improve existing exploration techniques and instrumentation while, at the same time, promoting research and development on new methods to discover uranium ore bodies on the earth's surface and at depth. The present symposium is an effort to increase co-operation and the exchange of information in the critical field of uranium exploration techniques and instrumentation. As an introduction to the symposium a brief review is presented, firstly of what can be considered as existing techniques and, secondly, of techniques which have not yet been used on an appreciable scale. Some fourteen techniques used over the last 30 years are identified and their appropriate application, advantages and limitations are briefly summarized and the possibilities of their further development considered. The aim of future research on new techniques, in addition to finding new ways and means of identifying surface deposits, should be mainly directed to devising methods and instrumentation capable of detecting buried ore bodies that do not give a gamma signal at the surface. To achieve this aim, two contributory factors are essential: adequate financial support for research and development and increased specialized training in uranium exploration and instrumentation design. The papers in this symposium describe developments in the existing techniques, proposals for future research and development and case histories of exploration programmes

  16. Assessment and Rehabilitation Issues Concerning Existing 70’s Structural Stock

    Science.gov (United States)

    Sabareanu, E.

    2017-06-01

    The last 30 years have brought demanding changes in the norms and standards governing structural calculations for buildings, leaving a large stock of structures erected during the 1970s–1990s in a weak position with respect to seismic loads and to the prescribed levels of live, wind and snow loads. At the same time, because a large number of these buildings remain in service all over the country, they cannot simply be demolished; suitable rehabilitation methods must instead be proposed so that structural durability is achieved. The paper proposes rehabilitation methods suitable in terms of structural safety and cost optimization for diaphragm reinforced concrete structures, with an example on an existing multi-storey building.

  17. Tracking Object Existence From an Autonomous Patrol Vehicle

    Science.gov (United States)

    Wolf, Michael; Scharenbroich, Lucas

    2011-01-01

    An autonomous vehicle patrols a large region, during which an algorithm receives measurements of detected potential objects within its sensor range. The goal of the algorithm is to track all objects in the region over time. This problem differs from traditional multi-target tracking scenarios because the region of interest is much larger than the sensor range and relies on the movement of the sensor through this region for coverage. The goal is to know whether anything has changed between visits to the same location. In particular, two kinds of alert conditions must be detected: (1) a previously detected object has disappeared and (2) a new object has appeared in a location already checked. For the time an object is within sensor range, the object can be assumed to remain stationary, changing position only between visits. The problem is difficult because the upstream object detection processing is likely to make many errors, resulting in heavy clutter (false positives) and missed detections (false negatives), and because only noisy, bearings-only measurements are available. This work has three main goals: (1) Associate incoming measurements with known objects or mark them as new objects or false positives, as appropriate. For this, a multiple hypothesis tracker was adapted to this scenario. (2) Localize the objects using multiple bearings-only measurements to provide estimates of global position (e.g., latitude and longitude). A nonlinear Kalman filter extension provides these 2D position estimates using the 1D measurements. (3) Calculate the probability that a suspected object truly exists (in the estimated position), and determine whether alert conditions have been triggered (for new objects or disappeared objects). The concept of a probability of existence was created, and a new Bayesian method for updating this probability at each time step was developed. A probabilistic multiple hypothesis approach is chosen because of its superiority in handling the
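    The probability-of-existence idea described above can be illustrated with a generic Bayesian update: each revisit yields either a detection or a miss, and the posterior existence probability is updated from assumed detection and false-alarm probabilities. This is a hedged sketch of the general principle, not the authors' specific algorithm.

```python
# Hedged sketch: Bayesian update of an object's probability of existence after
# each revisit, given assumed detection and false-alarm probabilities.
def update_existence(p_exist, detected, p_detect=0.8, p_false_alarm=0.1):
    """Return the posterior probability that the object exists."""
    if detected:
        likelihood_exists = p_detect          # P(detection | object exists)
        likelihood_absent = p_false_alarm     # P(detection | object does not exist)
    else:
        likelihood_exists = 1.0 - p_detect
        likelihood_absent = 1.0 - p_false_alarm
    numerator = likelihood_exists * p_exist
    return numerator / (numerator + likelihood_absent * (1.0 - p_exist))

p = 0.5
for obs in [True, True, False, True]:     # detections across successive visits
    p = update_existence(p, obs)
# Illustrative alert logic: flag "object disappeared" if p drops below a threshold.
```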

  18. LDEF data: Comparisons with existing models

    Science.gov (United States)

    Coombs, Cassandra R.; Watts, Alan J.; Wagner, John D.; Atkinson, Dale R.

    1993-04-01

    The relationship between the observed cratering impact damage on the Long Duration Exposure Facility (LDEF) versus the existing models for both the natural environment of micrometeoroids and the man-made debris was investigated. Experimental data was provided by several LDEF Principal Investigators, Meteoroid and Debris Special Investigation Group (M&D SIG) members, and by the Kennedy Space Center Analysis Team (KSC A-Team) members. These data were collected from various aluminum materials around the LDEF satellite. A PC (personal computer) computer program, SPENV, was written which incorporates the existing models of the Low Earth Orbit (LEO) environment. This program calculates the expected number of impacts per unit area as functions of altitude, orbital inclination, time in orbit, and direction of the spacecraft surface relative to the velocity vector, for both micrometeoroids and man-made debris. Since both particle models are couched in terms of impact fluxes versus impactor particle size, and much of the LDEF data is in the form of crater production rates, scaling laws have been used to relate the two. Also many hydrodynamic impact computer simulations were conducted, using CTH, of various impact events, that identified certain modes of response, including simple metallic target cratering, perforations and delamination effects of coatings.
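    The scaling laws mentioned above relate a cumulative flux, expressed in impactor size, to an observed crater-production rate. The sketch below shows the general idea with an assumed power-law flux and an assumed constant crater-to-projectile diameter ratio; all coefficients are placeholders, not values from the LDEF analysis or the SPENV program.

```python
# Hedged sketch: convert a cumulative particle-size flux F(>d) into a crater
# production rate F(>D_crater) through an assumed crater-scaling relation.
def cumulative_flux(d_particle_cm, a=1e-5, b=-3.0):
    """Assumed power-law flux (impacts per m^2 per year exceeding diameter d)."""
    return a * d_particle_cm ** b

def crater_diameter(d_particle_cm, ratio=5.0):
    """Assumed constant crater-to-projectile diameter ratio for an aluminum target."""
    return ratio * d_particle_cm

def crater_rate(D_crater_cm, ratio=5.0, a=1e-5, b=-3.0):
    """Rate of craters larger than D_crater, derived from the particle-size flux."""
    return cumulative_flux(D_crater_cm / ratio, a=a, b=b)
```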

  19. How many N = 4 strings exist?

    International Nuclear Information System (INIS)

    Ketov, S.V.

    1994-09-01

    Possible ways of constructing extended fermionic strings with N=4 world-sheet supersymmetry are reviewed. String theory constraints form, in general, a non-linear quasi(super)conformal algebra, and can have conformal dimensions ≥1. When N=4, the most general N=4 quasi-superconformal algebra to consider for string theory building is D(1, 2; α), whose linearisation is the so-called "large" N=4 superconformal algebra. The D(1, 2; α) algebra has an su(2)_{κ+} + su(2)_{κ−} + u(1) Kac-Moody component, and α = κ−/κ+. We check the Jacobi identities and construct a BRST charge for the D(1, 2; α) algebra. The quantum BRST operator can be made nilpotent only when κ+ = κ− = −2. The D(1, 2; 1) algebra is actually isomorphic to the SO(4)-based Bershadsky-Knizhnik non-linear quasi-superconformal algebra. We argue about the existence of a string theory associated with the latter, and propose the (non-covariant) hamiltonian action for this new N=4 string theory. Our results imply the existence of two different N=4 fermionic string theories: the old one based on the "small" linear N=4 superconformal algebra and having the total ghost central charge c_gh = +12, and the new one with non-linearly realised N=4 supersymmetry, based on the SO(4) quasi-superconformal algebra and having c_gh = +6. Both critical string theories have negative "critical dimensions" and do not admit unitary matter representations. (orig.)

  20. Welfare Economics: A Story of Existence

    Directory of Open Access Journals (Sweden)

    Khalid Iqbal

    2017-06-01

    Full Text Available The purpose of this study is to show that, despite severe challenges, welfare economics still exists. This descriptive study is conducted through specific timeline developments in the field. Economists are divided over the veracity and survival of welfare economics. Welfare economics emphasizes optimum resource and goods allocation with the objective of a better living standard, materialistic gains, social welfare and ethical decisions. It traces its origins back to political economy and utilitarianism. Adam Smith, Irving Fisher and Pareto contributed significantly to it. During 1930 to 1940, the American and British approaches were developed. Many economists tried to explore the relationship between level of income and happiness. Amartya Sen gave the comparative approach and Tinbergen pioneered the theory of equity. Today the restoration of welfare economics is on trial and hopes remain alive. This study may be useful for understanding the transition and survival of welfare economics.

  1. Testing Metadata Existence of Web Map Services

    Directory of Open Access Journals (Sweden)

    Jan Růžička

    2011-05-01

    Full Text Available For a general user it is quite common to use data sources available on the WWW. Almost all GIS software allows the use of data sources available via the Web Map Service (an ISO/OGC standard) interface. The opportunity to use different sources and combine them brings a lot of problems that have been discussed many times at conferences and in journal papers. One of these problems is the non-existence of metadata for published sources. The question was: were the discussions effective? The article is partly based on a comparison of the metadata situation between the years 2007 and 2010. The second part of the article focuses only on the situation in 2010. The paper was created in the context of research on intelligent map systems that can be used for automatic or semi-automatic map creation or map evaluation.
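    Metadata existence for published WMS sources can be checked automatically against a service's GetCapabilities response. The sketch below requests the capabilities document and flags layers that advertise neither a MetadataURL nor an Abstract; the endpoint URL is a placeholder and the check is a simplified assumption, not the authors' test procedure.

```python
# Hedged sketch: test metadata existence for layers of a WMS endpoint by
# inspecting its GetCapabilities document. The URL below is a placeholder.
import requests
import xml.etree.ElementTree as ET

WMS_NS = "{http://www.opengis.net/wms}"   # WMS 1.3.0 capabilities namespace

def layers_missing_metadata(base_url):
    params = {"service": "WMS", "request": "GetCapabilities", "version": "1.3.0"}
    resp = requests.get(base_url, params=params, timeout=30)
    resp.raise_for_status()
    root = ET.fromstring(resp.content)

    missing = []
    for layer in root.iter(f"{WMS_NS}Layer"):
        name = layer.findtext(f"{WMS_NS}Name")
        has_meta = layer.find(f"{WMS_NS}MetadataURL") is not None
        has_abstract = bool(layer.findtext(f"{WMS_NS}Abstract"))
        if name and not (has_meta or has_abstract):
            missing.append(name)
    return missing

# Example with a placeholder endpoint:
# print(layers_missing_metadata("https://example.org/geoserver/wms"))
```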

  2. Tachyons imply the existence of a privileged frame

    Energy Technology Data Exchange (ETDEWEB)

    Sjoedin, T.; Heylighen, F.

    1985-12-16

    It is shown that the existence of faster-than-light signals (tachyons) would imply the existence (and detectability) of a privileged inertial frame and that one can avoid all problems with reversed-time order only by using absolute synchronization instead of the standard one. The connection between these results and the EPR-paradox is discussed.

  3. The generalized centroid difference method for picosecond sensitive determination of lifetimes of nuclear excited states using large fast-timing arrays

    Energy Technology Data Exchange (ETDEWEB)

    Régis, J.-M., E-mail: regis@ikp.uni-koeln.de [Institut für Kernphysik der Universität zu Köln, Zülpicher Str. 77, 50937 Köln (Germany); Mach, H. [Departamento de Física Atómica y Nuclear, Universidad Complutense, 28040 Madrid (Spain); Simpson, G.S. [Laboratoire de Physique Subatomique et de Cosmologie Grenoble, 53, rue des Martyrs, 38026 Grenoble Cedex (France); Jolie, J.; Pascovici, G.; Saed-Samii, N.; Warr, N. [Institut für Kernphysik der Universität zu Köln, Zülpicher Str. 77, 50937 Köln (Germany); Bruce, A. [School of Computing, Engineering and Mathematics, University of Brighton, Lewes Road, Brighton BN2 4GJ (United Kingdom); Degenkolb, J. [Institut für Kernphysik der Universität zu Köln, Zülpicher Str. 77, 50937 Köln (Germany); Fraile, L.M. [Departamento de Física Atómica y Nuclear, Universidad Complutense, 28040 Madrid (Spain); Fransen, C. [Institut für Kernphysik der Universität zu Köln, Zülpicher Str. 77, 50937 Köln (Germany); Ghita, D.G. [Horia Hulubei National Institute for Physics and Nuclear Engineering, 77125 Bucharest (Romania); and others

    2013-10-21

    A novel method for direct electronic "fast-timing" lifetime measurements of nuclear excited states via γ–γ coincidences using an array equipped with N ∈ ℕ equally shaped very fast high-resolution LaBr3(Ce) scintillator detectors is presented. Analogous to the mirror-symmetric centroid difference method, the generalized centroid difference method provides two independent "start" and "stop" time spectra obtained by a superposition of the N(N−1) γ–γ time difference spectra of the N-detector fast-timing system. The two fast-timing array time spectra correspond to a forward and reverse gating of a specific γ–γ cascade. Provided that the energy response and the electronic time pick-off of the detectors are almost equal, a mean prompt response difference between start and stop events is calibrated and used as a single correction for lifetime determination. The combined fast-timing array's mean γ–γ time-walk characteristics can be determined for γ-ray energies down to 40 keV. The fast-timing array delivered an absolute time resolving power of 3 ps for 10 000 γ–γ events per total fast-timing array start and stop time spectrum. The new method is tested over the total dynamic range by the measurements of known picosecond lifetimes in standard γ-ray sources.
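    Numerically, the centroid difference idea reduces to measuring the shift between the centroids of the forward- and reverse-gated time spectra and subtracting the calibrated prompt response difference (PRD), via the relation ΔC = PRD + 2τ. The sketch below illustrates this on synthetic spectra; the numbers and the zero PRD are assumptions for illustration only.

```python
# Hedged sketch: lifetime from the centroid shift of the "start"- and
# "stop"-gated time spectra. All spectra and parameters are synthetic.
import numpy as np

def centroid(time_axis_ps, counts):
    """First-moment centroid of a time spectrum."""
    counts = np.asarray(counts, dtype=float)
    return float(np.sum(time_axis_ps * counts) / np.sum(counts))

t = np.linspace(-500, 500, 1001)                      # time axis in ps
prompt = np.exp(-0.5 * (t / 120.0) ** 2)              # Gaussian prompt response
tau = 35.0                                            # "true" lifetime in ps
decay = np.where(t >= 0, np.exp(-t / tau), 0.0)       # exponential decay kernel
delayed = np.convolve(prompt, decay, mode="same")     # forward-gated spectrum
reverse = delayed[::-1]                               # mirror-symmetric reverse gate

delta_c = centroid(t, delayed) - centroid(t, reverse) # equals PRD + 2*tau
prd = 0.0                                             # PRD is zero for these symmetric toy spectra
tau_est = (delta_c - prd) / 2.0                       # close to 35 ps
```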

  4. 26 CFR 20.6166-1 - Election of alternate extension of time for payment of estate tax where estate consists largely...

    Science.gov (United States)

    2010-04-01

    ... consists largely of interest in closely held business. (a) In general. Section 6166 allows an executor to... executor's conclusion that the estate qualifies for payment of the estate tax in installments. In the... under section 6166(a) to pay any tax in installments, the executor may elect under section 6166(h) to...

  5. Ecogrid EU: a large scale smart grids demonstration of real time market-based integration of numerous small der and DR

    NARCIS (Netherlands)

    Ding, Y.; Nyeng, P.; Ostergaard, J.; Trong, M.D.; Pineda, S.; Kok, K.; Huitema, G.B.; Grande, O.S.

    2012-01-01

    This paper provides an overview of the Ecogrid EU project, which is a large-scale demonstration project on the Danish island Bornholm. It provides Europe a fast track evolution towards smart grid dissemination and deployment in the distribution network. Objective of Ecogrid EU is to illustrate that

  6. A coupled chemotaxis-fluid model: Global existence

    KAUST Repository

    Liu, Jian-Guo; Lorz, Alexander

    2011-01-01

    We consider a model arising from biology, consisting of chemotaxis equations coupled to viscous incompressible fluid equations through transport and external forcing. Global existence of solutions to the Cauchy problem is investigated under certain conditions. Precisely, for the chemotaxis-Navier- Stokes system in two space dimensions, we obtain global existence for large data. In three space dimensions, we prove global existence of weak solutions for the chemotaxis-Stokes system with nonlinear diffusion for the cell density.© 2011 Elsevier Masson SAS. All rights reserved.
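    For orientation, a commonly studied form of such a coupled chemotaxis–fluid system (written here with coefficients set to one for readability; the precise nonlinearities and constants in the paper may differ) is:

```latex
\begin{aligned}
 n_t + u\cdot\nabla n &= \Delta n^{m} - \nabla\cdot\big(n\,\chi(c)\,\nabla c\big), \\
 c_t + u\cdot\nabla c &= \Delta c - n\,f(c), \\
 u_t + u\cdot\nabla u + \nabla P &= \Delta u - n\,\nabla\phi, \qquad \nabla\cdot u = 0,
\end{aligned}
```

    where n is the cell density, c the chemical (e.g., oxygen) concentration, u the fluid velocity, P the pressure and φ a gravitational potential; the chemotaxis–Stokes case drops the convective term u·∇u.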

  7. A coupled chemotaxis-fluid model: Global existence

    KAUST Repository

    Liu, Jian-Guo

    2011-09-01

    We consider a model arising from biology, consisting of chemotaxis equations coupled to viscous incompressible fluid equations through transport and external forcing. Global existence of solutions to the Cauchy problem is investigated under certain conditions. Precisely, for the chemotaxis-Navier- Stokes system in two space dimensions, we obtain global existence for large data. In three space dimensions, we prove global existence of weak solutions for the chemotaxis-Stokes system with nonlinear diffusion for the cell density.© 2011 Elsevier Masson SAS. All rights reserved.

  8. Leisure-time physical activity in pregnancy and risk of postpartum depression: a prospective study in a large national birth cohort

    DEFF Research Database (Denmark)

    Strøm, Marin; Mortensen, Erik L; Halldorson, Thórhallur I

    2009-01-01

    OBJECTIVE: To explore the association between physical activity during pregnancy and postpartum depression (PPD) in a large, prospective cohort. METHOD: Exposure information from the Danish National Birth Cohort, a large, prospective cohort with information on more than 100,000 pregnancies (1996......, and type of physical activity were assessed by a telephone interview at approximately week 12 of gestation. Admission to hospital due to depression (PPD-admission) and prescription of an antidepressant (PPD-prescription) were treated as separate outcomes. RESULTS: Through linkage to national registers, we...... identified 157 cases of PPD-admission and 1,305 cases of PPD-prescription. Women engaging in vigorous physical activity during pregnancy had a lower risk of PPD-prescription compared to women who were not physically active (adjusted odds ratio, 0.81; 95% CI, 0.66-0.99). No association was observed between...

  9. The Potential Applications of Real-Time Monitoring of Water Quality in a Large Shallow Lake (Lake Taihu, China) Using a Chromophoric Dissolved Organic Matter Fluorescence Sensor

    OpenAIRE

    Niu, Cheng; Zhang, Yunlin; Zhou, Yongqiang; Shi, Kun; Liu, Xiaohan; Qin, Boqiang

    2014-01-01

    This study presents results from field surveys performed over various seasons in a large, eutrophic, shallow lake (Lake Taihu, China) using an in situ chromophoric dissolved organic matter (CDOM) fluorescence sensor as a surrogate for other water quality parameters. These measurements identified highly significant empirical relationships between CDOM concentration measured using the in situ fluorescence sensor and CDOM absorption, fluorescence, dissolved organic carbon (DOC), chemical oxygen ...

  10. Public health in the field and the emergency operations center: methods for implementing real-time onsite syndromic surveillance at large public events.

    Science.gov (United States)

    Pogreba-Brown, Kristen; McKeown, Kyle; Santana, Sarah; Diggs, Alisa; Stewart, Jennifer; Harris, Robin B

    2013-10-01

    To develop an onsite syndromic surveillance system for the early detection of public health emergencies and outbreaks at large public events. As the third largest public health jurisdiction in the United States, Maricopa County Department of Public Health has worked with academic and first-response partners to create an event-targeted syndromic surveillance (EVENTSS) system. This system complements long-standing traditional emergency department-based surveillance and provides public health agencies with rapid reporting of possible clusters of illness. At 6 high profile events, 164 patient reports were collected. Gastrointestinal and neurological syndromes were most commonly reported, followed by multisyndromic reports. Neurological symptoms were significantly increased during hot weather events. The interview rate was 2 to 7 interviews per 50 000 people per hour, depending on the ambient temperature. Discussion: Study data allowed an estimation of baseline values of illness occurring at large public events. As more data are collected, prediction models can be built to determine threshold levels for public health response. EVENTSS was conducted largely by volunteer public health graduate students, increasing the response capacity for the health department. Onsite epidemiology staff could make informed decisions and take actions quickly in the event of a public health emergency.

  11. Do Elementary Particles Have an Objective Existence?

    OpenAIRE

    Nissenson, Bilha

    2007-01-01

    The formulation of quantum theory does not comply with the notion of objective existence of elementary particles. Objective existence independent of observation implies the distinguishability of elementary particles. In other words: If elementary particles have an objective existence independent of observations, then they are distinguishable. Or if elementary particles are indistinguishable then matter cannot have existence independent of our observation. This paper presents a simple deductio...

  12. A Novel Real-Time Path Servo Control of a Hardware-in-the-Loop for a Large-Stroke Asymmetric Rod-Less Pneumatic System under Variable Loads

    Directory of Open Access Journals (Sweden)

    Hao-Ting Lin

    2017-06-01

    Full Text Available This project aims to develop a novel large stroke asymmetric pneumatic servo system of a hardware-in-the-loop for path tracking control under variable loads based on the MATLAB Simulink real-time system. High pressure compressed air provided by the air compressor is utilized for the pneumatic proportional servo valve to drive the large stroke asymmetric rod-less pneumatic actuator. Due to the pressure differences between two chambers, the pneumatic actuator will operate. The highly nonlinear mathematical models of the large stroke asymmetric pneumatic system were analyzed and developed. The functional approximation technique based on the sliding mode controller (FASC) is developed as a controller to solve the uncertain time-varying nonlinear system. The MATLAB Simulink real-time system was a main control unit of a hardware-in-the-loop system proposed to establish driver blocks for analog and digital I/O, a linear encoder, a CPU and a large stroke asymmetric pneumatic rod-less system. By the position sensor, the position signals of the cylinder will be measured immediately. The measured signals will be viewed as the feedback signals of the pneumatic servo system for the study of real-time positioning control and path tracking control. Finally, real-time control of a large stroke asymmetric pneumatic servo system with measuring system, a large stroke asymmetric pneumatic servo system, data acquisition system and the control strategy software will be implemented. Thus, upgrading the high position precision and the trajectory tracking performance of the large stroke asymmetric pneumatic servo system will be realized to promote the high position precision and path tracking capability. Experimental results show that fifth order paths in various strokes and the sine wave path are successfully implemented in the test rig. Also, results of variable loads under the different angle were implemented experimentally.
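    The sliding-mode part of the FASC controller can be sketched in a few lines: a sliding surface combines the tracking error and its derivative, and a switching term (smoothed here by a boundary-layer saturation) drives the state onto that surface. The gains are illustrative, and the functional-approximation terms of the published controller are deliberately omitted.

```python
# Hedged sketch: boundary-layer sliding-mode position control. Gains are
# illustrative; the functional-approximation terms of FASC are not reproduced.
import numpy as np

def smc_control(e, e_dot, lam=8.0, K=50.0, phi=0.02):
    """Switching control from the sliding surface s = e_dot + lam * e."""
    s = e_dot + lam * e
    sat = np.clip(s / phi, -1.0, 1.0)   # boundary-layer approximation of sign(s)
    return -K * sat                      # equivalent-control term omitted for brevity

# At each servo period: e = x_desired - x_measured, e_dot from the linear encoder,
# and the returned u would drive the proportional valve command.
```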

  13. A Novel Real-Time Path Servo Control of a Hardware-in-the-Loop for a Large-Stroke Asymmetric Rod-Less Pneumatic System under Variable Loads.

    Science.gov (United States)

    Lin, Hao-Ting

    2017-06-04

    This project aims to develop a novel large stroke asymmetric pneumatic servo system of a hardware-in-the-loop for path tracking control under variable loads based on the MATLAB Simulink real-time system. High pressure compressed air provided by the air compressor is utilized for the pneumatic proportional servo valve to drive the large stroke asymmetric rod-less pneumatic actuator. Due to the pressure differences between two chambers, the pneumatic actuator will operate. The highly nonlinear mathematical models of the large stroke asymmetric pneumatic system were analyzed and developed. The functional approximation technique based on the sliding mode controller (FASC) is developed as a controller to solve the uncertain time-varying nonlinear system. The MATLAB Simulink real-time system was a main control unit of a hardware-in-the-loop system proposed to establish driver blocks for analog and digital I/O, a linear encoder, a CPU and a large stroke asymmetric pneumatic rod-less system. By the position sensor, the position signals of the cylinder will be measured immediately. The measured signals will be viewed as the feedback signals of the pneumatic servo system for the study of real-time positioning control and path tracking control. Finally, real-time control of a large stroke asymmetric pneumatic servo system with measuring system, a large stroke asymmetric pneumatic servo system, data acquisition system and the control strategy software will be implemented. Thus, upgrading the high position precision and the trajectory tracking performance of the large stroke asymmetric pneumatic servo system will be realized to promote the high position precision and path tracking capability. Experimental results show that fifth order paths in various strokes and the sine wave path are successfully implemented in the test rig. Also, results of variable loads under the different angle were implemented experimentally.

  14. 34 CFR 104.22 - Existing facilities.

    Science.gov (United States)

    2010-07-01

    ... 34 Education 1 2010-07-01 2010-07-01 false Existing facilities. 104.22 Section 104.22 Education... Accessibility § 104.22 Existing facilities. (a) Accessibility. A recipient shall operate its program or activity.... This paragraph does not require a recipient to make each of its existing facilities or every part of a...

  15. 45 CFR 1170.32 - Existing facilities.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 3 2010-10-01 2010-10-01 false Existing facilities. 1170.32 Section 1170.32... ASSISTED PROGRAMS OR ACTIVITIES Accessibility § 1170.32 Existing facilities. (a) Accessibility. A recipient... require a recipient to make each of its existing facilities or every part of a facility accessible to and...

  16. 45 CFR 605.22 - Existing facilities.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 3 2010-10-01 2010-10-01 false Existing facilities. 605.22 Section 605.22 Public... Accessibility § 605.22 Existing facilities. (a) Accessibility. A recipient shall operate each program or... existing facilities or every part of a facility accessible to and usable by qualified handicapped persons...

  17. 14 CFR 1251.301 - Existing facilities.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 5 2010-01-01 2010-01-01 false Existing facilities. 1251.301 Section 1251... HANDICAP Accessibility § 1251.301 Existing facilities. (a) Accessibility. A recipient shall operate each... existing facilities or every part of a facility accessible to and usable by handicapped persons. (b...

  18. 45 CFR 1151.22 - Existing facilities.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 3 2010-10-01 2010-10-01 false Existing facilities. 1151.22 Section 1151.22... Prohibited Accessibility § 1151.22 Existing facilities. (a) A recipient shall operate each program or... make each of its existing facilities or every part of a facility accessible to and usable by...

  19. 10 CFR 611.206 - Existing facilities.

    Science.gov (United States)

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Existing facilities. 611.206 Section 611.206 Energy... PROGRAM Facility/Funding Awards § 611.206 Existing facilities. The Secretary shall, in making awards to those manufacturers that have existing facilities, give priority to those facilities that are oldest or...

  20. Effectiveness of Existing International Nuclear Liability Regime

    International Nuclear Information System (INIS)

    Al-Doais, Salwa; Kessel, Daivd

    2015-01-01

    The first convention, the Paris Convention on Third Party Liability in the Field of Nuclear Energy (the Paris Convention), was adopted on 29 July 1960 under the auspices of the OECD and entered into force on 1 April 1968. In 1963, the Brussels Convention - supplementary to the Paris Convention - was adopted to provide additional funds to compensate damage resulting from a nuclear incident where Paris Convention funds proved to be insufficient. The IAEA's first convention was the Vienna Convention on Civil Liability for Nuclear Damage (the Vienna Convention), which was adopted on 21 May 1963 and entered into force in 1977. Both the Paris Convention and the Vienna Convention laid down very similar nuclear liability rules based on the same general principles. The broad principles in these conventions can be summarized as follows: 1- The no-fault liability principle (strict liability). 2- Liability is channeled exclusively to the operator of the nuclear installation (legal channeling). 3- Only courts of the state in which the nuclear accident occurs have jurisdiction (exclusive jurisdiction). 4- Limitation of the amount of liability and of the time frame for claiming damages (limited liability). 5- The operator is required to have adequate insurance or financial guarantees to the extent of its liability amount (liability must be financially secured). 6- Liability is limited in time; compensation rights are extinguished after a specific time. 7- Non-discrimination of victims on the grounds of nationality, domicile or residence. The objective of the nuclear liability conventions is to provide adequate compensation payments to victims of a nuclear accident. Procedures for receiving this compensation are governed by rules such as exclusive jurisdiction, a rule that needs further amendment to ensure the effectiveness of the existing nuclear liability regime. Membership of the conventions is a critical issue, because the existence of the conventions without being party to