WorldWideScience

Sample records for superposed epoch analyses

  1. A superposed epoch analysis of geomagnetic storms

    Directory of Open Access Journals (Sweden)

    J. R. Taylor

    1994-06-01

Full Text Available A superposed epoch analysis of geomagnetic storms has been undertaken. The storms are categorised by their intensity (as defined by the Dst index). Storms have also been classified here as either storm sudden commencements (SSCs) or storm gradual commencements (SGCs), that is, all storms which did not begin with a sudden commencement. The prevailing solar wind conditions during the storms in each category, defined by the solar wind speed (vsw), density (ρsw) and pressure (Psw) and by the total field and the components of the interplanetary magnetic field (IMF), have been investigated by a superposed epoch analysis. The southward component of the IMF appears to be the controlling parameter for the generation of small SGCs (-100 nT < minimum Dst ≤ -50 nT for ≥ 4 h), but for SSCs of the same intensity the solar wind pressure is dominant. However, for large SSCs (minimum Dst ≤ -100 nT for ≥ 4 h) the solar wind speed is the controlling parameter. It is also demonstrated that for larger storms magnetic activity is not solely driven by the accumulation of substorm activity; substantial energy is also input directly via the dayside. Furthermore, there is evidence that SSCs are caused by the passage of a coronal mass ejection, whereas SGCs result from the passage of a high-speed/slow-speed coronal stream interface. Storms are also grouped by the sign of Bz during the first hour epoch after the onset. The sign of Bz at t = +1 h is the dominant sign of Bz for ~24 h before the onset. The total energy released during storms for which Bz was initially positive is, however, of the same order as for storms where Bz was initially negative.
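The superposed epoch procedure underlying this and several of the records below can be sketched in a few lines: extract a fixed window of data around each event onset (the zero epoch) and average the windows point-by-point. The following is a minimal sketch in Python; the function name and the synthetic Dst-like series are invented for illustration, not taken from the paper.

```python
import numpy as np

def superposed_epoch(series, onsets, before=3, after=5):
    """Average a time series across events, aligned on each event's onset index.

    series : 1-D array of measurements (e.g. hourly Dst values)
    onsets : indices of event onsets (the "zero epochs")
    Returns the epoch offsets and the mean profile across all events.
    """
    windows = []
    for t0 in onsets:
        # keep only events whose full window fits inside the series
        if t0 - before >= 0 and t0 + after < len(series):
            windows.append(series[t0 - before : t0 + after + 1])
    stack = np.array(windows)
    offsets = np.arange(-before, after + 1)
    return offsets, stack.mean(axis=0)

# Toy example: three identical synthetic "storms" on a flat baseline
dst = np.zeros(100)
for t0 in (20, 50, 80):
    dst[t0 : t0 + 4] = [-60, -90, -70, -40]   # depression starting at onset
offsets, profile = superposed_epoch(dst, [20, 50, 80], before=2, after=5)
```

Averaging aligned windows suppresses event-to-event noise while preserving any feature locked to the zero epoch, which is the basis of every study in this list.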

  2. Superposed epoch analysis of O+ auroral outflow during sawtooth events and substorms

    Science.gov (United States)

    Nowrouzi, N.; Kistler, L. M.; Lund, E. J.; Cai, X.

    2017-12-01

Sawtooth events are repeated injections of energetic particles at geosynchronous orbit. Studies have shown that 94% of sawtooth events occur during magnetic storm times. The main factor that causes a sawtooth event is still an open question. Simulations have suggested that heavy ions like O+ may play a role in triggering the injections. One of the sources of O+ in the Earth's magnetosphere is the nightside aurora. O+ ions coming from the nightside auroral region have direct access to the near-Earth magnetotail. A model (Brambles et al. 2013) for interplanetary coronal mass ejection driven sawtooth events found that nightside O+ outflow caused the subsequent teeth of the sawtooth event through a feedback mechanism. This work is a superposed epoch analysis to test whether the observed auroral outflow supports this model. Using FAST spacecraft data from 1997-2007, we examine the auroral O+ outflow as a function of time relative to an injection onset. We then determine whether the profile of O+ outflow flux during sawtooth events differs from the outflow observed during isolated substorms. The auroral region boundaries are estimated using the method of Andersson et al. (2004). Subsequently, the O+ outflow flux inside these boundaries is calculated and binned as a function of superposed epoch time for substorms and sawtooth "teeth". In this way, we will determine whether sawtooth events do in fact have greater O+ outflow, and whether that outflow is predominantly from the nightside, as suggested by the model results.

  3. Dynamics of large-scale solar wind streams obtained by the double superposed epoch analysis

    Science.gov (United States)

    Yermolaev, Yu. I.; Lodkina, I. G.; Nikolaeva, N. S.; Yermolaev, M. Yu.

    2015-09-01

Using the OMNI data for the period 1976-2000, we investigate the temporal profiles of 20 plasma and field parameters in the disturbed large-scale types of solar wind (SW): corotating interaction regions (CIRs), interplanetary coronal mass ejections (ICMEs) (both magnetic clouds (MCs) and Ejecta), and the Sheath, as well as the interplanetary shock (IS). To take into account the different durations of the SW types, we use the double superposed epoch analysis (DSEA) method: the duration of each interval is rescaled in such a manner that the beginnings and ends of all intervals of the selected type coincide. As the analyzed SW types can interact with each other and change parameters as a result of such interaction, we investigate separately eight sequences of SW types: (1) CIR, (2) IS/CIR, (3) Ejecta, (4) Sheath/Ejecta, (5) IS/Sheath/Ejecta, (6) MC, (7) Sheath/MC, and (8) IS/Sheath/MC. The main conclusion is that the behavior of parameters in the Sheath and in the CIR is very similar both qualitatively and quantitatively. Both the high-speed stream (HSS) and the fast ICME play the role of pistons which push the plasma located ahead of them. The increase of speed in the HSS and ICME leads first to the formation of compression regions (CIR and Sheath, respectively) and then to an IS. The occurrence of compression regions and ISs increases the probability of growth of magnetospheric activity.
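The DSEA rescaling step described above can be illustrated with a short sketch: every interval is interpolated onto a common normalized epoch axis before averaging, so intervals of different durations share the same start and end epochs. The function names and toy data below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def rescale_interval(t, values, n_points=100):
    """Interpolate one interval onto a fixed grid of normalized epoch points,
    so that intervals of different duration share common start (0) and end (1)."""
    phase = (t - t[0]) / (t[-1] - t[0])        # normalized time in [0, 1]
    grid = np.linspace(0.0, 1.0, n_points)
    return np.interp(grid, phase, values)

def double_superposed_epoch(intervals, n_points=100):
    """Rescale every interval to n_points and average point-by-point."""
    rescaled = [rescale_interval(t, v, n_points) for t, v in intervals]
    return np.linspace(0.0, 1.0, n_points), np.mean(rescaled, axis=0)

# Two toy intervals of different duration but the same underlying linear trend
t1 = np.linspace(0, 10, 11)
t2 = np.linspace(0, 30, 31)
intervals = [(t1, t1 / 10.0), (t2, t2 / 30.0)]
phase, mean_profile = double_superposed_epoch(intervals)
```

Because both toy intervals carry the same trend in normalized time, the averaged profile reproduces it exactly; with real SW data the averaging instead reveals the typical profile of each structure type.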

  4. Superposed epoch analysis applied to large-amplitude travelling convection vortices

    Directory of Open Access Journals (Sweden)

    H. Lühr

    1998-07-01

Full Text Available For the six months from 1 October 1993 to 1 April 1994 the recordings of the IMAGE magnetometer network have been surveyed in a search for large-amplitude travelling convection vortices (TCVs). The restriction to large amplitudes (>100 nT) was chosen to ensure a proper detection of events also during times of high activity. Readings of all stations of the northern half of the IMAGE network were employed to check the consistency of the ground signature with the notion of a dual-vortex structure moving in an azimuthal direction. Applying these stringent selection criteria, we detected a total of 19 clear TCV events. The statistical properties of our selection resemble the expected characteristics of large-amplitude TCVs. New and unexpected results emerged from the superposed epoch analysis. TCVs tend to form during quiet intervals embedded in moderately active periods. The occurrence of events is not randomly distributed but rather shows a clustering around a few days. These clusters recur once or twice every 27 days. Within a storm cycle they show up five to seven days after the commencement. With regard to solar wind conditions, we see the events occurring in the middle of the IMF sector structure. Large-amplitude TCVs seem to require certain conditions to make solar wind transients 'geoeffective', transients which have the tendency to recur with the solar rotation period. Key words. Ionosphere (auroral ionosphere; ionosphere-magnetosphere interactions); Magnetospheric physics (current systems)

  5. Superposed epoch study of ICME sub-structures near Earth and their effects on Galactic cosmic rays

    Science.gov (United States)

    Masías-Meza, J. J.; Dasso, S.; Démoulin, P.; Rodriguez, L.; Janvier, M.

    2016-08-01

Context. Interplanetary coronal mass ejections (ICMEs) are the interplanetary manifestations of solar eruptions. The overtaken solar wind forms a sheath of compressed plasma at the front of ICMEs. Magnetic clouds (MCs) are a subset of ICMEs with specific properties (e.g. the presence of a flux rope). When ICMEs pass near Earth, ground observations indicate that the flux of Galactic cosmic rays (GCRs) decreases. Aims: The main aims of this paper are to find common plasma and magnetic properties of different ICME sub-structures and to determine which ICME properties affect the flux of GCRs near Earth. Methods: We used a superposed epoch method applied to a large set of ICMEs observed in situ by the ACE spacecraft between 1998 and 2006. We also applied a superposed epoch analysis to GCR time series observed with the McMurdo neutron monitors. Results: We find that slow MCs at 1 AU have on average more massive sheaths. We conclude that this is because they are more effectively slowed down by drag during their travel from the Sun. Slow MCs also have a more symmetric magnetic field and sheaths expanding similarly to their following MC, while in contrast, fast MCs have an asymmetric magnetic profile and a sheath in compression. In all types of MCs, we find that the proton density, the temperature, and the magnetic fluctuations can diffuse within the front of the MC due to 3D reconnection. Finally, we derive a quantitative model that describes the decrease in cosmic rays as a function of the amount of magnetic fluctuations and the field strength. Conclusions: The obtained typical profiles of sheath, MC, and GCR properties corresponding to slow, middle, and fast ICMEs can be used for forecasting or modelling these events, and for better understanding the transport of energetic particles in ICMEs. They are also useful for improving future operational space weather activities.

  6. Superposed epoch analysis of pressure and magnetic field configuration changes in the plasma sheet

    International Nuclear Information System (INIS)

    Kistler, L.M.; Moebius, E.; Baumjohann, W.; Nagai, T.

    1993-01-01

The authors report on an analysis of pressure and magnetic configuration within the plasma sheet following the initiation of substorm events. They have constructed this time-dependent picture by using a superposed epoch analysis of data from the AMPTE/IRM spacecraft. This analysis procedure can be used to construct a unified picture of events, provided they are reproducible, from a statistical analysis of a series of point measurements. The authors first determine the time-dependent pressure changes in the plasma sheet. With some simplifying assumptions they then determine the z dependence of the pressure profiles, and from this distribution determine how field lines in the plasma sheet map to the neutral sheet.

  7. Solar-wind turbulence and shear: a superposed-epoch analysis of corotating interaction regions at 1 AU

    Energy Technology Data Exchange (ETDEWEB)

    Borovsky, Joseph E [Los Alamos National Laboratory; Denton, Michael H [LANCASTER UNIV.

    2009-01-01

A superposed-epoch analysis of ACE and OMNI2 measurements is performed on 27 corotating interaction regions (CIRs) in 2003-2008, with the zero epoch taken to be the stream interface as determined by the maximum of the plasma vorticity. The structure of CIRs is investigated. When the flow measurements are rotated into the local-Parker-spiral coordinate system the shear is seen to be abrupt and intense, with vorticities on the order of 10^-5 to 10^-4 s^-1. Converging flows perpendicular to the stream interface are seen in the local-Parker-spiral coordinate system, and about half of the CIRs show a layer of divergent rebound flow away from the stream interface. Arguments indicate that any spreading of turbulence away from the region where it is produced is limited to about 10^6 km, which is very small compared with the thickness of a CIR. Analysis of the turbulence across the CIRs is performed. When possible, the effects of discontinuities are removed from the data. Fluctuation amplitudes, the Alfvénicity, and the level of Alfvénic correlations all vary smoothly across the CIR. The Alfvén ratio exhibits a decrease at the shear zone of the stream interface. Fourier analysis of 4.5-hr subintervals of ACE data is performed and the results are superposed and averaged as an ensemble of realizations. The spectral slopes of the velocity, magnetic-field, and total-energy fluctuations vary smoothly across the CIR. The total-energy spectral slope is ~3/2 in the slow and fast wind and in the CIRs. Analysis of the Elsasser inward-outward fluctuations shows a smooth transition across the CIR from an inward-outward balance in the slow wind to an outward dominance in the fast wind. A number of signatures of turbulence driving at the shear zone are sought (entropy change, turbulence amplitude, Alfvénicity, Alfvén ratio, spectral slopes, in-out nature): none show evidence of driving of turbulence by shear.
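The Fourier step described above, estimating the spectral slope of fluctuation subintervals and comparing it with the ~3/2 value, amounts to a log-log fit of an FFT power spectrum. The sketch below uses an invented synthetic power-law signal as a stand-in for the ACE subintervals; all names and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def synth_power_law(n, slope):
    """Synthesize a real signal whose power spectrum follows f**slope."""
    freqs = np.fft.rfftfreq(n, d=1.0)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (slope / 2.0)       # power = amp**2 ~ f**slope
    phases = rng.uniform(0.0, 2.0 * np.pi, len(freqs))
    phases[0] = phases[-1] = 0.0               # keep DC and Nyquist bins real
    spectrum = amp * np.exp(1j * phases)
    return np.fft.irfft(spectrum, n)

def spectral_slope(x):
    """Least-squares slope of log10(power) vs log10(frequency)."""
    freqs = np.fft.rfftfreq(len(x), d=1.0)
    power = np.abs(np.fft.rfft(x)) ** 2
    mask = freqs > 0                           # exclude the DC bin
    coef = np.polyfit(np.log10(freqs[mask]), np.log10(power[mask]), 1)
    return coef[0]

x = synth_power_law(4096, slope=-5.0 / 3.0)    # Kolmogorov-like -5/3 spectrum
est = spectral_slope(x)
```

On real data the fit would be restricted to the inertial range and averaged over many subintervals, as in the ensemble procedure the abstract describes.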

  8. Dynamics of Large-Scale Solar-Wind Streams Obtained by the Double Superposed Epoch Analysis: 2. Comparisons of CIRs vs. Sheaths and MCs vs. Ejecta

    Science.gov (United States)

    Yermolaev, Y. I.; Lodkina, I. G.; Nikolaeva, N. S.; Yermolaev, M. Y.

    2017-12-01

This work is a continuation of our previous article (Yermolaev et al. in J. Geophys. Res. 120, 7094, 2015), which describes the average temporal profiles of interplanetary plasma and field parameters in large-scale solar-wind (SW) streams: corotating interaction regions (CIRs), interplanetary coronal mass ejections (ICMEs, including both magnetic clouds (MCs) and ejecta), and sheaths, as well as interplanetary shocks (ISs). As in the previous article, we use the data of the OMNI database, our catalog of large-scale solar-wind phenomena during 1976-2000 (Yermolaev et al. in Cosmic Res., 47, 2, 81, 2009) and the method of double superposed epoch analysis (Yermolaev et al. in Ann. Geophys., 28, 2177, 2010a). We rescale the durations of all types of structures in such a way that the beginnings and endings of all of them coincide. We present new detailed results comparing paired phenomena: 1) both types of compression regions (i.e. CIRs vs. sheaths) and 2) both types of ICMEs (MCs vs. ejecta). The obtained data allow us to suggest that the formation of the two types of compression regions is governed by the same physical mechanism, regardless of the type of piston (high-speed stream (HSS) or ICME); the differences are connected to the geometry (i.e. the angle between the speed gradient in front of the piston and the satellite trajectory) and the jumps in speed at the edges of the compression regions. In our opinion, one of the possible reasons behind the observed differences in the parameters of MCs and ejecta is that when ejecta are observed, the satellite passes farther from the nose of the ICME than when MCs are observed.

  9. Analysing the 21 cm signal from the epoch of reionization with artificial neural networks

    Science.gov (United States)

    Shimabukuro, Hayato; Semelin, Benoit

    2017-07-01

The 21 cm signal from the epoch of reionization should be observed within the next decade. While a simple statistical detection is expected with Square Kilometre Array (SKA) pathfinders, the SKA will hopefully produce a full 3D mapping of the signal. To extract from the observed data constraints on the parameters describing the underlying astrophysical processes, inversion methods must be developed. For example, the Markov Chain Monte Carlo method has been successfully applied. Here, we test another possible inversion method: artificial neural networks (ANNs). We produce a training set that consists of 70 individual samples. Each sample is made of the 21 cm power spectrum at different redshifts produced with the 21cmFast code, plus the values of the three parameters used in the seminumerical simulations that describe the astrophysical processes. Using this set, we train the network to minimize the error between the parameter values it produces as an output and the true values. We explore the impact of the architecture of the network on the quality of the training. Then we test the trained network on a new set of 54 test samples with different values of the parameters. We find that the quality of the parameter reconstruction depends on the sensitivity of the power spectrum to the different parameters at a given redshift, that including thermal noise and sample variance decreases the quality of the reconstruction, and that using the power spectrum at several redshifts as an input to the ANN improves the quality of the reconstruction. We conclude that ANNs are a viable inversion method whose main strength is that they require only a sparse exploration of the parameter space and thus should be usable with full numerical simulations.
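The ANN inversion described above can be sketched as a one-hidden-layer network trained by gradient descent to map power spectra to parameter values. Everything below, the toy forward model, network size, and learning rate, is an illustrative assumption, not the authors' actual setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the training set: each "spectrum" is a deterministic
# nonlinear function of three hidden parameters (analogous to 21cmFast runs).
n_train, n_bins, n_params = 70, 20, 3
params = rng.uniform(0.0, 1.0, (n_train, n_params))
basis = rng.normal(size=(n_params, n_bins))
spectra = np.tanh(params @ basis)              # toy forward model

# One-hidden-layer network trained with plain gradient descent on MSE.
n_hidden, lr = 32, 0.1
W1 = rng.normal(0.0, 0.3, (n_bins, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.3, (n_hidden, n_params)); b2 = np.zeros(n_params)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

losses = []
for _ in range(500):
    h, pred = forward(spectra)
    err = pred - params                        # (n_train, n_params)
    losses.append(np.mean(err ** 2))
    # Backpropagate the MSE gradient through both layers
    gW2 = h.T @ err / n_train; gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)         # derivative of tanh
    gW1 = spectra.T @ dh / n_train; gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```

After training, the network would be evaluated on held-out samples (54 in the paper) to measure the quality of the parameter reconstruction.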

  10. Superposed epoch analysis of physiological fluctuations: possible space weather connections.

    Science.gov (United States)

    Wanliss, James; Cornélissen, Germaine; Halberg, Franz; Brown, Denzel; Washington, Brien

    2018-03-01

    There is a strong connection between space weather and fluctuations in technological systems. Some studies also suggest a statistical connection between space weather and subsequent fluctuations in the physiology of living creatures. This connection, however, has remained controversial and difficult to demonstrate. Here we present support for a response of human physiology to forcing from the explosive onset of the largest of space weather events-space storms. We consider a case study with over 16 years of high temporal resolution measurements of human blood pressure (systolic, diastolic) and heart rate variability to search for associations with space weather. We find no statistically significant change in human blood pressure but a statistically significant drop in heart rate during the main phase of space storms. Our empirical findings shed light on how human physiology may respond to exogenous space weather forcing.

  12. Superficies in the form of the right to superpose

    Directory of Open Access Journals (Sweden)

    Simona CHIRICĂ

    2015-06-01

Full Text Available The purpose of this paper is to present the current legal framework related to the superficies right in the form of the right to superpose, and especially to draw attention to, and raise certain questions about, the topicality and even the urgency of the need for regulation of the right to superpose. First, as a preliminary aspect, in order to emphasize the historical evolution of the superficies right, we briefly present the development of this concept from Roman law up to the present date. Second, by analysing the relevant legislation, doctrine and jurisprudence, the authors set out to present the main methods for constituting the superficies right. Third, the characteristics of the right to superpose are correlatively laid out. Fourth, the possibility of obtaining a building permit on the basis of the right to superpose is also analysed. Fifth, the recently entered-into-force legislative framework regarding the registration of the right to superpose and of the building thus erected is presented. Last but not least, the conclusions of this paper are presented, highlighting the necessity for more clearly defined rules regulating the legal status of the right to superpose, in order to avoid any confusion and inconsistency in practice.

  13. Materials processing with superposed Bessel beams

    Science.gov (United States)

    Yu, Xiaoming; Trallero-Herrero, Carlos A.; Lei, Shuting

    2016-01-01

We report experimental results of femtosecond laser processing on the surface of glass and metal thin film using superposed Bessel beams. These beams are generated by a combination of a spatial light modulator (SLM) and an axicon with >50% efficiency, and they possess the long depth-of-focus (propagation-invariant) property found in ordinary Bessel beams. Through micromachining experiments using femtosecond laser pulses, we show that multiple craters can be fabricated on glass with single-shot exposure, and the 1+(−1) superposed beam can reduce collateral damage caused by the rings in zero-order Bessel beams in the scribing of metal thin film.
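The geometry behind the 1+(−1) superposition can be illustrated numerically: since J₋₁ = −J₁, the superposed field cancels along one transverse axis and vanishes on-axis, in contrast to the bright central core and rings of the zero-order beam. The sketch below evaluates integer-order Bessel functions from their integral representation; all parameter values are illustrative assumptions, not the experimental settings.

```python
import numpy as np

def bessel_j(l, x):
    """Integer-order Bessel J_l(x) via the integral representation
    J_l(x) = (1/pi) * Integral_0^pi cos(l*tau - x*sin(tau)) d(tau)."""
    tau = np.linspace(0.0, np.pi, 2001)
    dtau = tau[1] - tau[0]
    integrand = np.cos(l * tau[None, :] - np.outer(x, np.sin(tau)))
    # trapezoidal rule along the tau axis
    integral = (integrand.sum(axis=1) - 0.5 * (integrand[:, 0] + integrand[:, -1])) * dtau
    return integral / np.pi

def bessel_beam_field(l, kr, r, phi):
    """Transverse field of an order-l Bessel beam at polar coordinates (r, phi)."""
    return bessel_j(l, kr * r) * np.exp(1j * l * phi)

kr = 5.0                                  # illustrative radial wavenumber
r = np.linspace(0.0, 3.0, 151)

# Zero-order beam: bright central core (J_0(0) = 1) surrounded by rings.
I0 = np.abs(bessel_beam_field(0, kr, r, 0.0)) ** 2

# 1+(-1) superposition: J_{-1} = -J_1, so the field cancels along phi = 0
# and the on-axis intensity vanishes, giving a two-lobed pattern.
E_sup = bessel_beam_field(1, kr, r, 0.0) + bessel_beam_field(-1, kr, r, 0.0)
I_sup = np.abs(E_sup) ** 2
```

Along phi = π/2 the same superposition instead adds constructively to 2iJ₁, which is why the energy is redistributed into lobes rather than rings.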

  14. Electrohydraulic drive system with planetary superposed gears

    Energy Technology Data Exchange (ETDEWEB)

    Graetz, A.; Klimek, K.H.; Welz, H.

    1989-01-01

To prevent drive problems in ploughs, the drives must be designed in such a way as to compensate for asymmetries. If electromechanical drives are replaced by an electrohydraulic drive system with superposed planetary gears and hydrostatic torque reaction supports, the following advantages result: load-free acceleration, load equalisation between main and auxiliary drive, overload protection, and reduction of system vibrations. 2 figs., 2 tabs.

  15. Materials processing with superposed Bessel beams

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Xiaoming [Department of Industrial and Manufacturing Systems Engineering, Kansas State University, Manhattan, KS 66506 (United States); Trallero-Herrero, Carlos A. [J. R. Macdonald Laboratory, Department of Physics, Kansas State University, Manhattan, KS 66506 (United States); Lei, Shuting, E-mail: lei@ksu.edu [Department of Industrial and Manufacturing Systems Engineering, Kansas State University, Manhattan, KS 66506 (United States)

    2016-01-01

Highlights: • Superpositions of Bessel beams can be generated with >50% efficiency using an SLM and an axicon. • These beams have an orders-of-magnitude increase in depth-of-focus compared to Gaussian beams. • Multiple craters can be fabricated on glass with single-shot exposure. • The 1+(−1) superposition can reduce collateral damage caused by the rings in the zero-order Bessel beams. Abstract: We report experimental results of femtosecond laser processing on the surface of glass and metal thin film using superposed Bessel beams. These beams are generated by a combination of a spatial light modulator (SLM) and an axicon with >50% efficiency, and they possess the long depth-of-focus (propagation-invariant) property found in ordinary Bessel beams. Through micromachining experiments using femtosecond laser pulses, we show that multiple craters can be fabricated on glass with single-shot exposure, and the 1+(−1) superposed beam can reduce collateral damage caused by the rings in zero-order Bessel beams in the scribing of metal thin film.

  16. Mixed convection in fluid superposed porous layers

    CERN Document Server

    Dixon, John M

    2017-01-01

This Brief describes and analyzes flow and heat transport over a liquid-saturated porous bed. The porous bed is saturated by a liquid layer and heating takes place from a section of the bottom. The effect of heating from the bottom on flow patterns is shown by calculation, and when the heating is sufficiently strong, the flow is affected throughout the porous and upper liquid layers. Measurements of the heat transfer rate from the heated section confirm the calculations. General heat transfer laws are developed for varying porous bed depths for applications to process industry needs, environmental sciences, and materials processing. Addressing a topic of considerable interest to the research community, the Brief features an up-to-date literature review of mixed convection energy transport in fluid superposed porous layers.

  17. Pliocene geomagnetic polarity epochs

    Science.gov (United States)

    Dalrymple, G.B.; Cox, A.; Doell, Richard R.; Gromme, C.S.

    1967-01-01

A paleomagnetic and K-Ar dating study of 44 upper Miocene and Pliocene volcanic units from the western United States suggests that the frequency of reversals of the earth's magnetic field during Pliocene time may have been comparable with that of the last 3.6 m.y. Although the data are too limited to permit the formal naming of any new polarity epochs or events, four polarity transitions have been identified: the W10 R/N boundary at 3.7 ± 0.1 m.y., the A12 N/R boundary at 4.9 ± 0.1 m.y., the W32 N/R boundary at 9.0 ± 0.2 m.y., and the W36 R/N boundary at 10.8 ± 0.3-1.0 m.y. The loss of absolute resolution of K-Ar dating in older rocks indicates that the use of well-defined stratigraphic successions to identify and date polarity transitions will be important in the study of Pliocene and older reversals. © 1967.

  18. Superpose3D: a local structural comparison program that allows for user-defined structure representations.

    Directory of Open Access Journals (Sweden)

    Pier Federico Gherardini

    Full Text Available Local structural comparison methods can be used to find structural similarities involving functional protein patches such as enzyme active sites and ligand binding sites. The outcome of such analyses is critically dependent on the representation used to describe the structure. Indeed different categories of functional sites may require the comparison program to focus on different characteristics of the protein residues. We have therefore developed superpose3D, a novel structural comparison software that lets users specify, with a powerful and flexible syntax, the structure description most suited to the requirements of their analysis. Input proteins are processed according to the user's directives and the program identifies sets of residues (or groups of atoms that have a similar 3D position in the two structures. The advantages of using such a general purpose program are demonstrated with several examples. These test cases show that no single representation is appropriate for every analysis, hence the usefulness of having a flexible program that can be tailored to different needs. Moreover we also discuss how to interpret the results of a database screening where a known structural motif is searched against a large ensemble of structures. The software is written in C++ and is released under the open source GPL license. Superpose3D does not require any external library, runs on Linux, Mac OSX, Windows and is available at http://cbm.bio.uniroma2.it/superpose3D.

  19. The quantum epoché.

    Science.gov (United States)

    Pylkkänen, Paavo

    2015-12-01

The theme of phenomenology and quantum physics is here tackled by examining some basic interpretational issues in quantum physics. One key issue in quantum theory from the very beginning has been whether it is possible to provide a quantum ontology of particles in motion in the same way as in classical physics, or whether we are restricted to stay within a more limited view of quantum systems, in terms of complementary but mutually exclusive phenomena. In phenomenological terms we could describe the situation by saying that according to the usual interpretation of quantum theory (especially Niels Bohr's), quantum phenomena require a kind of epoché (i.e. a suspension of assumptions about reality at the quantum level). However, there are other interpretations (especially David Bohm's) that seem to re-establish the possibility of a mind-independent ontology at the quantum level. We will show that even such ontological interpretations contain novel, non-classical features, which require them to give a special role to "phenomena" or "appearances", a role not encountered in classical physics. We will conclude that while ontological interpretations of quantum theory are possible, quantum theory implies the need of a certain kind of epoché even for this type of interpretations. While different from the epoché connected to phenomenological description, the "quantum epoché" nevertheless points to a potentially interesting parallel between phenomenology and quantum philosophy.

  20. Extended superposed quantum-state initialization using disjoint prime implicants

    International Nuclear Information System (INIS)

    Rosenbaum, David; Perkowski, Marek

    2009-01-01

    Extended superposed quantum-state initialization using disjoint prime implicants is an algorithm for generating quantum arrays for the purpose of initializing a desired quantum superposition. The quantum arrays generated by this algorithm almost always use fewer gates than other algorithms and in the worst case use the same number of gates. These improvements are achieved by allowing certain parts of the quantum superposition that cannot be initialized directly by the algorithm to be initialized using special circuits. This allows more terms in the quantum superposition to be initialized at the same time which decreases the number of gates required by the generated quantum array.

  1. Distribution of standard deviation of an observable among superposed states

    International Nuclear Information System (INIS)

    Yu, Chang-shui; Shao, Ting-ting; Li, Dong-mo

    2016-01-01

    The standard deviation (SD) quantifies the spread of the observed values on a measurement of an observable. In this paper, we study the distribution of SD among the different components of a superposition state. It is found that the SD of an observable on a superposition state can be well bounded by the SDs of the superposed states. We also show that the bounds also serve as good bounds on coherence of a superposition state. As a further generalization, we give an alternative definition of incompatibility of two observables subject to a given state and show how the incompatibility subject to a superposition state is distributed.
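The central quantity here can be illustrated numerically: the SD of an observable A in a normalized state |ψ⟩ is sqrt(⟨A²⟩ − ⟨A⟩²), and a superposition can have a large SD even when each superposed component has SD zero. The sketch below does not reproduce the paper's actual bounds; the observable and states are illustrative choices.

```python
import numpy as np

def sd(A, psi):
    """Standard deviation of observable A in normalized state psi:
    sqrt(<A^2> - <A>^2)."""
    psi = psi / np.linalg.norm(psi)
    mean = np.vdot(psi, A @ psi).real
    mean_sq = np.vdot(psi, A @ A @ psi).real
    return np.sqrt(max(mean_sq - mean ** 2, 0.0))

# Observable: Pauli sigma_z; superposed components: its two eigenstates.
sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])
up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])
plus = (up + down) / np.sqrt(2.0)     # equal superposition of the components

# Each component is an eigenstate of sigma_z, so its SD vanishes,
# while the superposition has maximal spread (SD = 1).
```

This is exactly the situation the paper's bounds address: relating the SD on a superposition state to the SDs (and overlaps) of the superposed components.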

  2. Distribution of standard deviation of an observable among superposed states

    Science.gov (United States)

    Yu, Chang-shui; Shao, Ting-ting; Li, Dong-mo

    2016-10-01

    The standard deviation (SD) quantifies the spread of the observed values on a measurement of an observable. In this paper, we study the distribution of SD among the different components of a superposition state. It is found that the SD of an observable on a superposition state can be well bounded by the SDs of the superposed states. We also show that the bounds also serve as good bounds on coherence of a superposition state. As a further generalization, we give an alternative definition of incompatibility of two observables subject to a given state and show how the incompatibility subject to a superposition state is distributed.

  3. Distribution of standard deviation of an observable among superposed states

    OpenAIRE

    Yu, Chang-shui; Shao, Ting-ting; Li, Dong-mo

    2016-01-01

    The standard deviation (SD) quantifies the spread of the observed values on a measurement of an observable. In this paper, we study the distribution of SD among the different components of a superposition state. It is found that the SD of an observable on a superposition state can be well bounded by the SDs of the superposed states. We also show that the bounds also serve as good bounds on coherence of a superposition state. As a further generalization, we give an alternative definition of in...

  4. Electrohydraulic drive system with planetary superposed PS 16 gears

    Energy Technology Data Exchange (ETDEWEB)

    Graetz, A.; Klimek, K.H.; Welz, H.

    1988-10-20

During the nine-month period of use of the electrohydraulic drive system with the PS 16 superposed planetary gear and hydrostatic support, an advance of 800 m was achieved on the 250 m long face in the Geitling 2 seam at the Niederberg colliery. No appreciable difficulties occurred in the hydraulic system or with the PS 16 superposed planetary gear in the entire period. Uniform load distribution between the two drives was demonstrated until the end of the working, even with a chain elongation difference of up to 3% observed during the final phase of operation. In contrast to normal operation, thermal disconnections and motor failures no longer occurred. After accurate adjustment of the pressures the system operated successfully. The time utilisation of the equipment was improved by 15% to 65.7%. The quick and reliable response of the hydraulics in the event of overloading ensured that no chain cracks occurred. The four connector fractures were attributable to fatigue failures. The material-protecting method of operation was proved by the quiet running of the chain and substantially longer operating times, e.g. of the chain and sprocket. To prove the efficiency of the new drive system, comprehensive measurements were undertaken. These measurements showed that, in contrast to the conventional drives, the load equalisation ensures that the total installed power is available if required. However, the freeing capacity of the plough could not be fully utilised because of the insufficient conveyor cross-section.

  5. Pulsar slow-down epochs

    International Nuclear Information System (INIS)

    Heintzmann, H.; Novello, M.

    1981-01-01

    The relative importance of magnetospheric currents and low-frequency waves for pulsar braking is assessed, and a model is developed which tries to account for the available pulsar timing data under the unifying assumption that all pulsars have equal masses and magnetic moments and are born as rapid rotators. Four epochs of slow-down are distinguished, dominated by different braking mechanisms. According to the model, no direct relationship exists between the 'slow-down age' and the true age of a pulsar; the model leads to a pulsar birth-rate of one event per hundred years. (Author) [pt

  6. Super: a web server to rapidly screen superposable oligopeptide fragments from the protein data bank

    Science.gov (United States)

    Collier, James H.; Lesk, Arthur M.; Garcia de la Banda, Maria; Konagurthu, Arun S.

    2012-01-01

    Searching for well-fitting 3D oligopeptide fragments within a large collection of protein structures is an important task central to many analyses involving protein structures. This article reports a new web server, Super, dedicated to the task of rapidly screening the protein data bank (PDB) to identify all fragments that superpose with a query under a prespecified threshold of root-mean-square deviation (RMSD). Super relies on efficiently computing a mathematical bound on the commonly used structural similarity measure, RMSD of superposition. This allows the server to filter out a large proportion of fragments that are unrelated to the query; >99% of the total number of fragments in some cases. For a typical query, Super scans the current PDB containing over 80 500 structures (with ∼40 million potential oligopeptide fragments to match) in under a minute. Super web server is freely accessible from: http://lcb.infotech.monash.edu.au/super. PMID:22638586
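    The screening criterion is the RMSD after optimal superposition, which for any fragment pair can be computed with the Kabsch algorithm. Below is a minimal NumPy sketch of that measure (an illustration of the quantity being bounded, not Super's filtering bound):

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """Minimum RMSD between two (N, 3) coordinate sets after optimal
    superposition (translation + rotation), via the Kabsch algorithm."""
    P = P - P.mean(axis=0)          # centre both fragments
    Q = Q - Q.mean(axis=0)
    H = P.T @ Q                     # 3x3 covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T              # optimal rotation mapping P onto Q
    diff = (R @ P.T).T - Q
    return np.sqrt((diff ** 2).sum() / len(P))
```

    Two fragments that differ only by a rigid-body motion give an RMSD of zero; Super's bound lets it discard most fragments without running this full computation.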

  7. Results of fatigue tests and prediction of fatigue life under superposed stress wave and combined superposed stress wave

    International Nuclear Information System (INIS)

    Takasugi, Shunji; Horikawa, Takeshi; Tsunenari, Toshiyasu; Nakamura, Hiroshi

    1983-01-01

    In order to examine fatigue life prediction methods at high temperatures where creep damage need not be taken into account, fatigue tests were carried out on plane bending specimens of alloy steels (SCM 435, 2 1/4Cr-1Mo) under superposed and combined superposed stress waves at room temperature and 500 °C. The experimental data were compared with the fatigue lives predicted using the cycle counting methods (range pair, range pair mean and zero-cross range pair mean methods), the modified Goodman equation and the modified Miner rule. The main results were as follows. (1) The fatigue life prediction method used for room-temperature data is also applicable to predicting life at high temperatures. The range pair mean method is especially better than the other cycle counting methods. The zero-cross range pair mean method gives estimated lives on the safe side of the experimental lives. (2) The scatter band of N̄/N̄_es (experimental life/estimated life) becomes narrower when the following equation is used instead of the modified Goodman equation for predicting the effect of mean stress on fatigue life: σ_t = σ_a / (1 − σ_m / (k σ_B)), where σ_t is the stress amplitude at zero mean stress (kg/mm²), σ_B the tensile strength (kg/mm²), σ_m the mean stress (kg/mm²), σ_a the stress amplitude (kg/mm²), and k the modification coefficient of σ_B. (author)
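    The mean-stress correction quoted in the abstract is a one-line formula; a minimal sketch, with an illustrative function name and the abstract's units (kg/mm²):

```python
def equivalent_amplitude(sigma_a, sigma_m, sigma_b, k=1.0):
    """Stress amplitude at zero mean stress, sigma_t, from the modified
    mean-stress correction quoted in the abstract:
        sigma_t = sigma_a / (1 - sigma_m / (k * sigma_b))
    sigma_a: stress amplitude, sigma_m: mean stress,
    sigma_b: tensile strength, k: modification coefficient of sigma_b."""
    return sigma_a / (1.0 - sigma_m / (k * sigma_b))
```

    With k = 1 this reduces to the modified Goodman correction; the abstract's point is that introducing the fitted coefficient k narrows the scatter band of predicted lives.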

  8. Temporal stability of superposed magnetic fluids in porous media

    International Nuclear Information System (INIS)

    Zakaria, Kadry; Sirwah, Magdy A; Alkharashi, Sameh

    2008-01-01

    The present work deals with the stability properties of time-periodically streaming superposed magnetic fluids through porous media under the influence of an oblique alternating magnetic field. The system is composed of a middle fluid sheet of finite thickness embedded between two other bounded layers. The fluids are assumed to be incompressible, and there are no volume charges in the layers of the fluids. Such configurations are of relevance in a variety of astrophysical and space settings. The solutions of the linearized equations of motion and boundary conditions lead to two simultaneous generalized Mathieu equations with damping terms and complex coefficients. The method of multiple time scales is used to obtain approximate solutions and analyze the stability criteria for both the non-resonant and resonant cases, and transition curves are obtained for these cases. The stability criteria are examined theoretically and numerically, from which stability diagrams are obtained. It is found that the fluid sheet thickness plays a destabilizing role in the presence of a constant field and velocity, while a damping role is observed for the resonant cases. Dual roles are observed for the fluid velocity and the porosity in the stability criteria

  9. The epochs of international law

    CERN Document Server

    Grewe, Wilhelm G

    2000-01-01

    A theoretical overview and detailed analysis of the history of international law from the Middle Ages through to the end of the twentieth century (updated from the 1984 German language edition). Wilhelm Grewe's "Epochen der Völkerrechtsgeschichte" is widely regarded as one of the classic twentieth century works of international law. This revised translation by Michael Byers of Oxford University makes this important book available to non-German readers for the first time. "The Epochs of International Law" provides a theoretical overview and detailed analysis of the history of international law from the Middle Ages, to the Age of Discovery and the Thirty Years War, from Napoleon Bonaparte to the Treaty of Versailles and the Age of the Single Superpower, and does so in a way that reflects Grewe's own experience as one of Germany's leading diplomats and professors of international law. A new chapter, written by Wilhelm Grewe and Michael Byers, updates the book to 1998, making the revised translation of interest ...

  10. Epochality, Global Capitalism and Ecology

    Directory of Open Access Journals (Sweden)

    Wayne Hope

    2018-05-01

    What type of capitalism do we live in today? My answer to this question draws upon two interrelated lines of argument. Firstly, I will argue that we inhabit an epoch of global capitalism. The precursors of this kind of capitalism originated from the late nineteenth century when the development of telegraph networks, modern transport systems and world time zones provided a global template for industrialisation and Western imperialism. From about 1980 a confluence of global events and processes brought a fully-fledged global capitalism into being. These included the collapse of Fordist Keynesianism, national Keynesianism and Soviet Communism along with First, Second and Third World demarcations; the international proliferation of neo-liberal policy regimes; the growth of transnational corporations in all economic sectors; the predominance of financialisation and the reconstitution of global workforces. Secondly, I will argue that the shift from organic surface energy to underground fossil energy intertwined the time of the earth with the time of human history as nature was being instrumentalised as a resource for humanity. Understanding the capitalist relations of power involved here requires that we rethink the emergence of industrial capitalism in the historical context of a world system built upon unequal socio-ecological exchange between core and periphery. Today, global capitalism has intensified the anthropogenic feedback loops associated with CO2 emissions and climate change and universalised the organisational frameworks of profit extraction and socio-ecological destruction. I refer here to the transnational systems of fossil fuel capitalism along with their interlinkages with financialisation and advertising/commodity fetishism. From the preceding lines of argument I will briefly outline the intra-capitalist and planetary-ecological crises out of which transnational coalitions of opposition might emerge.

  11. Linear Covariance Analysis and Epoch State Estimators

    Science.gov (United States)

    Markley, F. Landis; Carpenter, J. Russell

    2014-01-01

    This paper extends in two directions the results of prior work on generalized linear covariance analysis of both batch least-squares and sequential estimators. The first is an improved treatment of process noise in the batch, or epoch state, estimator with an epoch time that may be later than some or all of the measurements in the batch. The second is to account for process noise in specifying the gains in the epoch state estimator. We establish the conditions under which the latter estimator is equivalent to the Kalman filter.

  12. Geomagnetic Polarity Epochs: Sierra Nevada II.

    Science.gov (United States)

    Cox, A; Doell, R R; Dalrymple, G B

    1963-10-18

    Ten new determinations on volcanic extrusions in the Sierra Nevada with potassium-argon ages of 3.1 million years or less indicate that the remanent magnetizations fall into two groups, a normal group in which the remanent magnetization is directed downward and to the north, and a reversed group magnetized up and to the south. Thermomagnetic experiments and mineralogic studies fail to provide an explanation of the opposing polarities in terms of mineralogic control, but rather suggest that the remanent magnetization reflects reversals of the main dipole field of the earth. All available radiometric ages are consistent with this field-reversal hypothesis and indicate that the present normal polarity epoch (N1) as well as the previous reversed epoch (R1) are 0.9 to 1.0 million years long, whereas the previous normal epoch (N2) was at least 25 percent longer.

  13. Rayleigh Taylor instability of two superposed compressible fluids in un-magnetized plasma

    International Nuclear Information System (INIS)

    Sharma, P K; Tiwari, A; Argal, S; Chhajlani, R K

    2014-01-01

    The linear Rayleigh-Taylor instability of two superposed compressible Newtonian fluids is discussed including the effect of surface tension, which can play an important role in space plasmas. As in the case of superposed Newtonian fluids, the system in the present problem is stable for the potentially stable case and unstable for the potentially unstable case. The equations of the problem are solved by the normal mode method and a dispersion relation is obtained for such a system. The behaviour of the growth rate is examined in the presence of surface tension, and it is found that surface tension has a stabilizing influence on the Rayleigh-Taylor instability of two superposed compressible fluids. Numerical analysis is performed to show the effect of the sound velocity and surface tension on the growth rate of the Rayleigh-Taylor instability. It is found that both parameters have a stabilizing influence on the growth rate.
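    As a concrete reference point, the classical dispersion relation for two superposed incompressible, inviscid fluids with surface tension (the textbook limit such compressible analyses are checked against; this is not the paper's own dispersion relation) can be evaluated as:

```python
def rt_growth_rate_sq(k, rho_upper, rho_lower, g=9.81, surface_tension=0.0):
    """Squared growth rate gamma^2 of the classical Rayleigh-Taylor mode
    for two superposed incompressible, inviscid fluids with surface
    tension T:
        gamma^2 = [g*k*(rho_upper - rho_lower) - T*k**3] / (rho_upper + rho_lower)
    gamma^2 > 0 means the perturbation grows (potentially unstable case)."""
    return (g * k * (rho_upper - rho_lower) - surface_tension * k ** 3) / (
        rho_upper + rho_lower
    )
```

    Surface tension stabilizes all wavenumbers above k_c = sqrt(g·(rho_upper − rho_lower)/T), mirroring the stabilizing influence reported in the abstract.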

  14. Deposition of diamond-like carbon films by plasma source ion implantation with superposed pulse

    International Nuclear Information System (INIS)

    Baba, K.; Hatada, R.

    2003-01-01

    Diamond-like carbon (DLC) films were prepared on silicon wafer substrates by plasma source ion implantation with a superposed negative pulse. Methane and acetylene were used as working gases for the plasma. A negative DC voltage and a negative pulse voltage were superposed and applied to the substrate holder. The DC voltage was varied in the range from 0 to -4 kV and the pulse voltage from 0 to -18 kV. The surface of the DLC films was very smooth. The deposition rate of the DLC films increased with increasing superposed DC bias voltage. Carbon ion implantation was confirmed for the DLC film deposited from methane plasma with a high pulse voltage. The I_D/I_G ratios from Raman spectroscopy were around 1.5, independent of pulse voltage. The maximum hardness of 20.3 GPa was observed for the film prepared with a high DC and a high pulse voltage.

  15. LEDDB : LOFAR Epoch of Reionization Diagnostic Database

    NARCIS (Netherlands)

    Martinez-Rubi, O.; Veligatla, V. K.; de Bruyn, A. G.; Lampropoulos, P.; Offringa, A. R.; Jelic, V.; Yatawatta, S.; Koopmans, L. V. E.; Zaroubi, S.

    2013-01-01

    One of the key science projects of the Low-Frequency Array (LOFAR) is the detection of the cosmological signal coming from the Epoch of Reionization (EoR). Here we present the LOFAR EoR Diagnostic Database (LEDDB) that is used in the storage, management, processing and analysis of the LOFAR EoR

  16. Exploring on the Sensitivity Changes of the LC Resonance Magnetic Sensors Affected by Superposed Ringing Signals.

    Science.gov (United States)

    Lin, Tingting; Zhou, Kun; Yu, Sijia; Wang, Pengfei; Wan, Ling; Zhao, Jing

    2018-04-25

    LC resonance magnetic sensors are widely used in low-field nuclear magnetic resonance (LF-NMR) and surface nuclear magnetic resonance (SNMR) due to their high sensitivity, low cost and simple design. In magnetically shielded rooms, LC resonance magnetic sensors can exhibit sensitivities at the fT/√Hz level in the kHz range. However, since the equivalent magnetic field noise of this type of sensor is greatly affected by the environment, weak signals are often submerged in practical applications, resulting in relatively low signal-to-noise ratios (SNRs). To determine why noise increases in unshielded environments, we analysed the noise levels of an LC resonance magnetic sensor ( L ≠ 0) and a Hall sensor ( L ≈ 0) in different environments. The experiments and simulations indicated that the superposed ringing of the LC resonance magnetic sensors led to the observed increase in white noise level caused by environmental interference. Nevertheless, ringing is an inherent characteristic of LC resonance magnetic sensors. It cannot be eliminated when environmental interference exists. In response to this problem, we proposed a method that uses matching resistors with various values to adjust the quality factor Q of the LC resonance magnetic sensor in different measurement environments to obtain the best sensitivity. The LF-NMR experiment in the laboratory showed that the SNR is improved significantly when the LC resonance magnetic sensor with the best sensitivity is selected for signal acquisition in the light of the test environment. (When the matching resistance is 10 kΩ, the SNR is 3.46 times that of 510 Ω). This study improves LC resonance magnetic sensors for nuclear magnetic resonance (NMR) detection in a variety of environments.
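    The Q-adjustment idea can be illustrated with an idealised lumped model of an LC sensor damped by a parallel matching resistor (coil losses neglected; an assumption for illustration, not the authors' circuit):

```python
import math

def resonance_params(L, C, R_match):
    """Resonant frequency f0, quality factor Q and -3 dB bandwidth of a
    parallel RLC tank formed by an LC sensor and a matching resistor:
        f0 = 1/(2*pi*sqrt(L*C)),  Q = R*sqrt(C/L),  bandwidth = f0/Q."""
    f0 = 1.0 / (2.0 * math.pi * math.sqrt(L * C))
    Q = R_match * math.sqrt(C / L)
    return f0, Q, f0 / Q
```

    A smaller matching resistor lowers Q (and hence sensitivity) but widens the bandwidth and damps the ringing faster, which is the trade-off the authors exploit when choosing, e.g., 510 Ω versus 10 kΩ for a given environment.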

  17. Orogen-transverse tectonic window in the Eastern Himalayan fold belt: A superposed buckling model

    Science.gov (United States)

    Bose, Santanu; Mandal, Nibir; Acharyya, S. K.; Ghosh, Subhajit; Saha, Puspendu

    2014-09-01

    The Eastern Lesser Himalayan fold-thrust belt is punctuated by a row of orogen-transverse domal tectonic windows. To evaluate their origin, a variety of thrust-stack models have been proposed, assuming that the crustal shortening occurred dominantly by brittle deformations. However, the Rangit Window (RW) in the Darjeeling-Sikkim Himalaya (DSH) shows unequivocal structural imprints of ductile deformations of multiple episodes. Based on new structural maps, coupled with outcrop-scale field observations, we recognize at least four major episodes of folding in the litho-tectonic units of DSH. The last episode has produced regionally orogen-transverse upright folds (F4), the interference of which with the third-generation (F3) orogen-parallel folds has shaped the large-scale structural patterns in DSH. We propose a new genetic model for the RW, invoking the mechanics of superposed buckling in the mechanically stratified litho-tectonic systems. We substantiate this superposed buckling model with results obtained from analogue experiments. The model explains contrasting F3-F4 interferences in the Lesser Himalayan Sequence (LHS). The lower-order (terrain-scale) folds have undergone superposed buckling in Mode 1, producing large-scale domes and basins, whereas the RW occurs as a relatively higher-order dome nested in the first-order Tista Dome. The Gondwana and the Proterozoic rocks within the RW underwent superposed buckling in Modes 3 and 4, leading to Type 2 fold interferences, as evident from their structural patterns.

  18. The behavior of high-strength unidirectional composites under tension with superposed hydrostatic pressure

    NARCIS (Netherlands)

    Zinoviev, P.A.; Tsvetkov, S.V.; Kulish, G.G.; Berg, van den R.W.; Schepdael, van L.J.M.M.

    2001-01-01

    Three types of high-strength unidirectional composite materials were studied under longitudinal tension with superposed high hydrostatic pressure. Reinforcing fibers were T1000G carbon, S2 glass and Zylon PBO fibers; the Ciba 5052 epoxy resin was used as matrix. The composites were tested under

  19. Stability analysis of natural convection in superposed fluid and porous layers

    International Nuclear Information System (INIS)

    Hirata, S.C.; Goyeau, B.; Gobin, D.; Cotta, R.M.

    2005-01-01

    A linear stability analysis of the onset of thermal natural convection in superposed fluid and porous layers is carried out. The resulting eigenvalue problem is solved using an integral transform technique. The effect of the variation of the Darcy number on the stability of the system is analyzed. (authors)

  20. Stability analysis of natural convection in superposed fluid and porous layers

    Energy Technology Data Exchange (ETDEWEB)

    Hirata, S.C.; Goyeau, B.; Gobin, D. [Paris-11 Univ. - Paris-6, FAST - UMR CNRS 7608, 91 - Orsay (France); Cotta, R.M. [Rio de Janeiro Univ. (LTTC/PEM/EE/COPPE/UFRJ), RJ (Brazil)

    2005-07-01

    A linear stability analysis of the onset of thermal natural convection in superposed fluid and porous layers is carried out. The resulting eigenvalue problem is solved using an integral transform technique. The effect of the variation of the Darcy number on the stability of the system is analyzed. (authors)

  1. Geomagnetic reversal in the Brunhes normal polarity epoch.

    Science.gov (United States)

    Smith, J D; Foster, J H

    1969-02-07

    The magnetic stratigraphy of seven cores of deep-sea sediment established the existence of a short interval of reversed polarity in the upper part of the Brunhes epoch of normal polarity. The reversed zone in the cores correlates well with paleontological boundaries and is named the Blake event. Its boundaries are estimated to be 108,000 and 114,000 years ago +/- 10 percent.

  2. Observing the epoch of galaxy formation.

    Science.gov (United States)

    Steidel, C C

    1999-04-13

    Significant observational progress in addressing the question of the origin and early evolution of galaxies has been made in the past few years, allowing for direct comparison of the epoch when most of the stars in the universe were forming to prevailing theoretical models. There is currently broad consistency between theoretical expectations and the observations, but rapid improvement in the data will provide much more critical tests of theory in the coming years.

  3. 'Anthropocene': An Ethical Crisis, Not a Geological Epoch

    Science.gov (United States)

    Cuomo, Chris

    2017-04-01

    The term 'anthropocene' has gained enormous popularity among scientists who believe we are in a global phase distinguished by the extensive and lasting impacts of social activities on Earth's sedimentary record and vital systems. Beyond its widespread informal use, a working group of the International Union of Geological Sciences seeks to formalize the term to name a new geological epoch, implying that the Holocene epoch has ended. I argue that the move to formalize the 'anthropocene' and to declare the demise of the Holocene is premature and ethically misguided, at best, and that the very name 'anthropocene' obscures rather than illuminates the serious moral and political/economic implications of the dire warnings evident in recent stratigraphic and ecological changes. If human-caused mass extinction and other ecological catastrophes are serious harms, ethical responses are required. Instead, the move to formalize the idea of an 'anthropocene' epoch treats dire ethical warnings as an opportunity to redefine the current dangerous situation as a new status quo. Have we met our responsibilities to protect Holocene Earth? This presentation will focus on the ethical implications of using the power and discourse of geology to demote Holocene ecological states from their role as the foundational benchmarks for guiding and assessing human relationships with nature and other species. Have geoscientists adequately consulted the biological, ecological and social sciences before declaring the end of the Holocene epoch? Upon what do we base environmental ethics if the Holocene is considered past history? I will also examine the ethical dimensions of naming the so-called 'anthropocene', asking: who is the presumed 'anthro' in the 'anthropocene'? Are the phenomena identified with the 'anthropocene' (nuclear fallout, mass species endangerment, ocean acidification, fossil fuel pollution, deforestation, mining) definitive accomplishments of the human species? Should the practices

  4. Administering an epoch initiated for remote memory access

    Science.gov (United States)

    Blocksome, Michael A; Miller, Douglas R

    2012-10-23

    Methods, systems, and products are disclosed for administering an epoch initiated for remote memory access that include: initiating, by an origin application messaging module on an origin compute node, one or more data transfers to a target compute node for the epoch; initiating, by the origin application messaging module after initiating the data transfers, a closing stage for the epoch, including rejecting any new data transfers after initiating the closing stage for the epoch; determining, by the origin application messaging module, whether the data transfers have completed; and closing, by the origin application messaging module, the epoch if the data transfers have completed.
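    The transfer / closing-stage / close sequence described above can be sketched as a small state machine. The Python names below are hypothetical; the disclosure concerns an origin application messaging module on a compute node, not this code:

```python
class RmaEpoch:
    """Sketch of the epoch protocol: transfers may start only while the
    epoch is open; the closing stage rejects new transfers; the epoch
    closes once all outstanding transfers have completed."""

    def __init__(self):
        self.closing = False
        self.closed = False
        self.pending = set()      # ids of in-flight data transfers
        self._next_id = 0

    def initiate_transfer(self):
        if self.closing or self.closed:
            raise RuntimeError("epoch closing: new transfers are rejected")
        self._next_id += 1
        self.pending.add(self._next_id)
        return self._next_id

    def begin_closing_stage(self):
        self.closing = True
        self._try_close()

    def complete_transfer(self, transfer_id):
        self.pending.discard(transfer_id)
        self._try_close()

    def _try_close(self):
        # Close only once closing was requested and nothing is in flight.
        if self.closing and not self.pending:
            self.closed = True
```

    The essential invariant is the last method: closing the epoch is deferred until the pending set drains, exactly the "determine whether the data transfers have completed" step in the claim.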

  5. Experimental investigation of a small-sized betatron with superposed magnetization

    International Nuclear Information System (INIS)

    Kas'yanov, V.A.; Rychkov, M.V.; Filimonov, A.A.; Furman, Eh.G.; Chakhlov, V.L.; Chertov, A.S.; Shtejn, M.M.

    2001-01-01

    The aim of the paper is to study the possibilities of small-sized betatrons (SSB) with direct-current superposed magnetization (DSM). It is shown that DSM makes it possible to decrease the weight and cost of the SSB electromagnet and capacitor storage and to shape a prolonged beam dump. It is noted that DSM is most expedient in SSBs operating in a short-time mode [ru

  6. Hydrogen Epoch of Reionization Array (HERA)

    Science.gov (United States)

    DeBoer, David R.; HERA

    2015-01-01

    The Hydrogen Epoch of Reionization Array (HERA - reionization.org) roadmap uses the unique properties of the neutral hydrogen (HI) 21cm line to probe our cosmic dawn: from the birth of the first stars and black holes, through the full reionization of the primordial intergalactic medium (IGM). HERA is a collaboration between the Precision Array Probing the Epoch of Reionization (PAPER - eor.berkeley.edu), the US-based Murchison Widefield Array (MWA - mwatelescope.org), and MIT Epoch of Reionization (MITEOR) teams along with the South African SKA-SA, the University of KwaZulu-Natal and the University of Cambridge Cavendish Laboratory. HERA has recently been awarded a National Science Foundation Mid-Scale Innovation Program grant to begin the next phase. HERA leverages the operation of the PAPER and MWA telescopes to explore techniques and designs required to detect the primordial HI signal in the presence of systematics and radio continuum foreground emission some four orders of magnitude brighter. With this understanding, we are now able to remove foregrounds to the limits of our sensitivity, culminating in the first physically meaningful upper limits. A redundant calibration algorithm from MITEOR improves the sensitivity of the approach. Building on this, the next stage of HERA incorporates a 14m diameter antenna element that is optimized both for sensitivity and for minimizing foreground systematics. Arranging these elements in a compact hexagonal grid yields an array that facilitates calibration, leverages proven foreground removal techniques, and is scalable to large collecting areas. HERA will be located in the radio quiet environment of the SKA site in the Karoo region of South Africa (where PAPER is currently located). It will have a sensitivity close to two orders of magnitude better than PAPER and the MWA to ensure a robust detection. With its sensitivity and broader frequency coverage, HERA can paint an uninterrupted picture through reionization, back to the

  7. LSST and the Epoch of Reionization Experiments

    Science.gov (United States)

    Ivezić, Željko

    2018-05-01

    The Large Synoptic Survey Telescope (LSST), a next generation astronomical survey, sited on Cerro Pachon in Chile, will provide an unprecedented amount of imaging data for studies of the faint optical sky. The LSST system includes an 8.4m (6.7m effective) primary mirror and a 3.2 Gigapixel camera with a 9.6 sq. deg. field of view. This system will enable about 10,000 sq. deg. of sky to be covered twice per night, every three to four nights on average, with typical 5-sigma depth for point sources of r = 24.5 (AB). With over 800 observations in the ugrizy bands over a 10-year period, these data will enable coadded images reaching r = 27.5 (about 5 magnitudes deeper than SDSS) as well as studies of faint time-domain astronomy. The measured properties of newly discovered and known astrometric and photometric transients will be publicly reported within 60 sec after closing the shutter. The resulting hundreds of petabytes of imaging data for about 40 billion objects will be used for scientific investigations ranging from the properties of near-Earth asteroids to characterizations of dark matter and dark energy. For example, simulations estimate that LSST will discover about 1,000 quasars at redshifts exceeding 7; this sample will place tight constraints on the cosmic environment at the end of the reionization epoch. In addition to a brief introduction to LSST, I review the value of LSST data in support of epoch of reionization experiments and discuss how international participants can join LSST.

  8. The Statistics of Radio Astronomical Polarimetry: Disjoint, Superposed, and Composite Samples

    Energy Technology Data Exchange (ETDEWEB)

    Straten, W. van [Centre for Astrophysics and Supercomputing, Swinburne University of Technology, Hawthorn, VIC 3122 (Australia); Tiburzi, C., E-mail: willem.van.straten@aut.ac.nz [Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn (Germany)

    2017-02-01

    A statistical framework is presented for the study of the orthogonally polarized modes of radio pulsar emission via the covariances between the Stokes parameters. To accommodate the typically heavy-tailed distributions of single-pulse radio flux density, the fourth-order joint cumulants of the electric field are used to describe the superposition of modes with arbitrary probability distributions. The framework is used to consider the distinction between superposed and disjoint modes, with particular attention to the effects of integration over finite samples. If the interval over which the polarization state is estimated is longer than the timescale for switching between two or more disjoint modes of emission, then the modes are unresolved by the instrument. The resulting composite sample mean exhibits properties that have been attributed to mode superposition, such as depolarization. Because the distinction between disjoint modes and a composite sample of unresolved disjoint modes depends on the temporal resolution of the observing instrumentation, the arguments in favor of superposed modes of pulsar emission are revisited, and observational evidence for disjoint modes is described. In principle, the four-dimensional covariance matrix that describes the distribution of sample mean Stokes parameters can be used to distinguish between disjoint modes, superposed modes, and a composite sample of unresolved disjoint modes. More comprehensive and conclusive interpretation of the covariance matrix requires more detailed consideration of various relevant phenomena, including temporally correlated subpulse modulation (e.g., jitter), statistical dependence between modes (e.g., covariant intensities and partial coherence), and multipath propagation effects (e.g., scintillation and scattering).
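    The sample-mean Stokes parameters at the heart of this framework can be illustrated with the standard definitions from dual-polarization voltage samples (one common sign convention; this is not the paper's fourth-order cumulant machinery):

```python
import numpy as np

def stokes_from_field(ex, ey):
    """Sample-mean Stokes parameters (I, Q, U, V) from complex voltage
    samples of two orthogonal polarizations."""
    ex = np.asarray(ex)
    ey = np.asarray(ey)
    I = np.mean(np.abs(ex) ** 2 + np.abs(ey) ** 2)
    Q = np.mean(np.abs(ex) ** 2 - np.abs(ey) ** 2)
    U = np.mean(2.0 * np.real(ex * np.conj(ey)))
    V = np.mean(2.0 * np.imag(ex * np.conj(ey)))
    return I, Q, U, V
```

    Averaging samples drawn alternately from two orthogonal, fully polarized disjoint modes drives Q, U and V toward zero while I is preserved -- the depolarization of a composite sample of unresolved disjoint modes discussed above.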

  9. Rayleigh-Taylor instability of two superposed conducting Walters B' elastico-viscous fluids in hydromagnetics

    International Nuclear Information System (INIS)

    Sharma, R.C.; Kumar, Pardeep

    1998-01-01

    The Rayleigh-Taylor instability of two superposed electrically conducting Walters elastico-viscous fluids (Model B') of uniform densities, when the whole system is immersed in a uniform horizontal magnetic field, has been studied. The stability analysis has been carried out, for mathematical simplicity, for two highly viscoelastic fluids of equal kinematic viscosities and equal kinematic viscoelasticities. For the stable configuration, as in the hydrodynamic case, the system is found to be stable or unstable for the wave-number range k < (2v')^(-1/2) depending on the kinematic viscoelasticity v'. For the unstable configuration, the magnetic field has a stabilizing effect and completely stabilizes a certain wave-number range which was always unstable in the absence of the magnetic field. The behaviour of the growth rates with respect to the kinematic viscosity and kinematic viscoelasticity parameters is examined analytically. (author)

  10. Metallogenic epoch of the Jiapigou gold belt, Jilin Province, China

    Indian Academy of Sciences (India)

    Metallogenic epoch of the Jiapigou gold belt, Jilin Province, China: ... The Jiapigou gold belt is located on the northern margin of the North China Craton, and is one of the ...

  11. BRIGHTEST CLUSTER GALAXIES AT THE PRESENT EPOCH

    Energy Technology Data Exchange (ETDEWEB)

    Lauer, Tod R. [National Optical Astronomy Observatory, P.O. Box 26732, Tucson, AZ 85726 (United States); Postman, Marc [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Strauss, Michael A.; Graves, Genevieve J.; Chisari, Nora E. [Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544 (United States)

    2014-12-20

    We have obtained photometry and spectroscopy of 433 z ≤ 0.08 brightest cluster galaxies (BCGs) in a full-sky survey of Abell clusters to construct a BCG sample suitable for probing deviations from the local Hubble flow. The BCG Hubble diagram over 0 < z < 0.08 is consistent to within 2% with the Hubble relation specified by an Ω_m = 0.3, Λ = 0.7 cosmology. This sample allows us to explore the structural and photometric properties of BCGs at the present epoch, their location in their hosting galaxy clusters, and the effects of the cluster environment on their structure and evolution. We revisit the L_m-α relation for BCGs, which uses α, the log-slope of the BCG photometric curve of growth, to predict the metric luminosity L_m in an aperture of 14.3 kpc radius, for use as a distance indicator. Residuals in the relation are 0.27 mag rms. We measure central stellar velocity dispersions, σ, of the BCGs, finding the Faber-Jackson relation to flatten as the metric aperture grows to include an increasing fraction of the total BCG luminosity. A three-parameter "metric plane" relation using α and σ together gives the best prediction of L_m, with 0.21 mag residuals. The distribution of projected spatial offsets, r_x, of BCGs from the X-ray-defined cluster center is a steep γ = -2.33 power law over 1 < r_x < 10^3 kpc. The median offset is ∼10 kpc, but ∼15% of the BCGs have r_x > 100 kpc. The absolute cluster-dispersion-normalized BCG peculiar velocity |ΔV_1|/σ_c follows an exponential distribution with scale length 0.39 ± 0.03. Both L_m and α increase with σ_c. The α parameter is further moderated by both the spatial and velocity offset from the cluster center, with larger α correlated with the proximity of the BCG to the cluster mean velocity or potential center. At the same time, position in the cluster has little effect on L_m. Likewise, residuals from

  12. Statistical analyses in the study of solar wind-magnetosphere coupling

    International Nuclear Information System (INIS)

    Baker, D.N.

    1985-01-01

    Statistical analyses provide a valuable method for establishing initially the existence (or lack of existence) of a relationship between diverse data sets. Statistical methods also allow one to make quantitative assessments of the strengths of observed relationships. This paper reviews the essential techniques and underlying statistical bases for the use of correlative methods in solar wind-magnetosphere coupling studies. Techniques of visual correlation and time-lagged linear cross-correlation analysis are emphasized, but methods of multiple regression, superposed epoch analysis, and linear prediction filtering are also described briefly. The long history of correlation analysis in the area of solar wind-magnetosphere coupling is reviewed with the assessments organized according to data averaging time scales (minutes to years). It is concluded that these statistical methods can be very useful first steps, but that case studies and various advanced analysis methods should be employed to understand fully the average response of the magnetosphere to solar wind input. It is clear that many workers have not always recognized underlying assumptions of statistical methods and thus the significance of correlation results can be in doubt. Long-term averages (greater than or equal to 1 hour) can reveal gross relationships, but only when dealing with high-resolution data (1 to 10 min) can one reach conclusions pertinent to magnetospheric response time scales and substorm onset mechanisms
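The superposed epoch analysis mentioned here (and throughout this collection) can be sketched in a few lines: windows of a time series are stacked around each event onset and averaged, so that features locked to the events reinforce while unrelated variation averages out. The following is an illustrative stand-in, not code from any cited study; the toy series and onset times are invented for demonstration:

```python
import numpy as np

def superposed_epoch(series, onsets, before, after):
    """Average fixed windows of `series` around each onset index (epoch zero),
    discarding events whose window would fall outside the record."""
    windows = [series[t - before : t + after]
               for t in onsets
               if t - before >= 0 and t + after <= len(series)]
    return np.mean(windows, axis=0)

# toy series: a dip of depth 1 lasting 10 samples begins at each event onset
x = np.zeros(1000)
onsets = [100, 400, 700]
for t in onsets:
    x[t : t + 10] -= 1.0

avg = superposed_epoch(x, onsets, before=20, after=40)  # 60-sample mean epoch
```

In the averaged epoch, the event-locked dip survives at samples 20-29 (relative to the window start) while the rest of the window stays at the baseline.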

  13. Solution of the problem of superposing image and digital map for detection of new objects

    Science.gov (United States)

    Rizaev, I. S.; Miftakhutdinov, D. I.; Takhavova, E. G.

    2018-01-01

The problem of superposing a digital map of the terrain with an image of the same terrain is considered; the image may be represented in different frequency bands. Further analysis of the results of collating the digital map with the image of the corresponding terrain is described, and an approach to detecting differences between the information represented on the digital map and the information in the image of that area is offered. An algorithm is proposed for calculating the brightness values of the converted image area on the original picture; the calculation is based on navigation parameters and on arranged bench marks. Experiments were performed to address the posed problem, and their results are shown in this paper. The presented algorithms are applicable to ground-based remote sensing complexes for assessing differences between resulting images and accurate geopositional data, and are also suitable for detecting new objects in the image based on analysis of the match between the digital map and the image of the corresponding locality.

  14. Experimental investigation of convective stability in a superposed fluid and porous layer when heated from below

    Science.gov (United States)

    Chen, Falin; Chen, C. F.

    1989-01-01

    Experiments have been carried out in a horizontal superposed fluid and porous layer contained in a test box 24 cm x 12 cm x 4 cm high. The porous layer consisted of 3 mm diameter glass beads, and the fluids used were water, 60 and 90 percent glycerin-water solutions, and 100 percent glycerin. The depth ratio d, which is the ratio of the thickness of the fluid layer to that of the porous layer, varied from 0 to 1.0. Fluids of increasingly higher viscosity were used for cases with larger d in order to keep the temperature difference across the tank within reasonable limits. The size of the convection cells was inferred from temperature measurements made with embedded thermocouples and from temperature distributions at the top of the layer by use of liquid crystal film. The experimental results showed: (1) a precipitous decrease in the critical Rayleigh number as the depth of the fluid layer was increased from zero, and (2) an eightfold decrease in the critical wavelength between d = 0.1 and 0.2. Both of these results were predicted by the linear stability theory reported earlier (Chen and Chen, 1988).

  15. Atlas Basemaps in Web 2.0 Epoch

    Science.gov (United States)

    Chabaniuk, V.; Dyshlyk, O.

    2016-06-01

The authors have analyzed their experience of producing various Electronic Atlases (EA) and Atlas Information Systems (AtIS) of the so-called "classical type". These EA/AtIS have been implemented in the past decade in the Web 1.0 architecture (e.g., the National Atlas of Ukraine, the Atlas of radioactive contamination of Ukraine, and others). One of the main distinguishing features of these atlases was their static nature: the end user could not change the content of the EA/AtIS. Base maps are a very important element of any EA/AtIS. In classical-type EA/AtIS they were static datasets consisting of two parts: topographic data at a fixed scale and data on the administrative-territorial division of Ukraine. It is important to note that the technique of topographic data production was based on the use of direct channels of topographic entity observation (such as aerial photography) for the selected scale. Changes in the information technology of the past half-decade are characterized by the advent of the "Web 2.0 epoch". Due to this, phenomena such as "neo-cartography" and various mapping platforms like OpenStreetMap have appeared in cartography. These changes have forced developers of EA/AtIS to use new atlas basemaps. Our approach is described in the article. The phenomena of neo-cartography and/or Web 2.0 cartography are analysed by the authors using the previously developed Conceptual framework of EA/AtIS. This framework logically explains the relations of cartographic phenomena across three formations: Web 1.0, Web 1.0x1.0 and Web 2.0. Atlas basemaps of the Web 2.0 epoch are integrated information systems. We use several ways to integrate separate atlas basemaps into the information system - by building a weakly integrated information system, a structured system or a meta-system. This integrated information system consists of several basemaps and falls under the definition of "big data". In real projects the basemaps of three strata are already used: Conceptual

  16. The Cosmic Dawn and Epoch of Reionisation with SKA

    NARCIS (Netherlands)

    Koopmans, L.; Pritchard, J.; Mellema, G.; Aguirre, J.; Ahn, K.; Barkana, R.; van Bemmel, I.; Bernardi, G.; Bonaldi, A.; Briggs, F.; de Bruyn, A. G.; Chang, T. C.; Chapman, E.; Chen, X.; Ciardi, B.; Dayal, P.; Ferrara, A.; Fialkov, A.; Fiore, F.; Ichiki, K.; Illiev, I. T.; Inoue, S.; Jelic, V.; Jones, M.; Lazio, J.; Maio, U.; Majumdar, S.; Mack, K. J.; Mesinger, A.; Morales, M. F.; Parsons, A.; Pen, U. L.; Santos, M.; Schneider, R.; Semelin, B.; de Souza, R. S.; Subrahmanyan, R.; Takeuchi, T.; Vedantham, H.; Wagg, J.; Webster, R.; Wyithe, S.; Datta, K. K.; Trott, C.

    2014-01-01

    Concerted effort is currently ongoing to open up the Epoch of Reionization (EoR) ($z\\sim$15-6) for studies with IR and radio telescopes. Whereas IR detections have been made of sources (Lyman-$\\alpha$ emitters, quasars and drop-outs) in this redshift regime in relatively small fields of view, no

  17. The Cosmic Dawn and Epoch of Reionisation with SKA

    NARCIS (Netherlands)

    Koopmans, L.; Pritchard, J.; Mellema, G.; Aguirre, J.; Ahn, K.; Barkana, R.; van Bemmel, I.; Bernardi, G.; Bonaldi, A.; Briggs, F.; de Bruyn, A. G.; Chang, T. C.; Chapman, E.; Chen, X.; Ciardi, B.; Dayal, P.; Ferrara, A.; Fialkov, A.; Fiore, F.; Ichiki, K.; Illiev, I. T.; Inoue, S.; Jelic, V.; Jones, M.; Lazio, J.; Maio, U.; Majumdar, S.; Mack, K. J.; Mesinger, A.; Morales, M. F.; Parsons, A.; Pen, U. L.; Santos, M.; Schneider, R.; Semelin, B.; de Souza, R. S.; Subrahmanyan, R.; Takeuchi, T.; Vedantham, H.; Wagg, J.; Webster, R.; Wyithe, S.; Datta, K. K.; Trott, C.

    2015-01-01

    Concerted effort is currently ongoing to open up the Epoch of Reionization (EoR) ($z\\sim$15-6) for studies with IR and radio telescopes. Whereas IR detections have been made of sources (Lyman-$\\alpha$ emitters, quasars and drop-outs) in this redshift regime in relatively small fields of view, no

  18. Transfection effect of microbubbles on cells in superposed ultrasound waves and behavior of cavitation bubble.

    Science.gov (United States)

    Kodama, Tetsuya; Tomita, Yukio; Koshiyama, Ken-Ichiro; Blomley, Martin J K

    2006-06-01

collapse of UCAs were a key factor for transfection, and their intensities were enhanced by the interaction of the superposed ultrasound with decreasing height of the medium. Hypothesizing that free cavitation bubbles were generated from cavitation nuclei created by fragmented UCA shells, we carried out a numerical analysis of free spherical bubble motion in the ultrasound field. By analyzing the interaction of the shock wave generated by a cavitation bubble with a cell membrane, we estimated the shock wave propagation distance from the center of the cavitation bubble that would induce cell membrane damage.

  19. A search for changing look quasars in second epoch imaging

    Science.gov (United States)

    Findlay, Joseph; Myers, Adam; McGreer, Ian

    2018-01-01

Over nearly two decades, the Sloan Digital Sky Survey has compiled a catalog of over half a million confirmed quasars. During that period, approximately ten percent of these objects have been spectroscopically observed in two or more epochs over baselines of ten or more years. This recently led to the discovery of the largest change in luminosity ever observed in a quasar. The dimming emission reflected very significant changes in continuum and broad-line properties; the source had effectively transitioned from a Type I quasar to a Type II AGN. Since then, several more "changing look" quasars have been discovered in multi-epoch SDSS spectroscopy, among them objects with rising and falling luminosities and appearing and disappearing broad lines. The origin of this behavior is still very uncertain; currently favored is the scenario in which an accreting black hole is simply starved of fuel. Other plausible scenarios include flaring due to stellar tidal disruption close to the black hole, or large changes in accretion flow, which can occur during transitions between radiatively efficient and inefficient accretion regimes. Monitoring larger numbers of changing look quasars will help to elucidate these ideas. In this poster, we report on the progress of a pilot study in which we hope to learn how to select changing look quasars in multi-epoch imaging. This will allow us to take advantage of the entire SDSS quasar catalog, rather than just the ten percent of objects with multi-epoch spectroscopy. Comparing archival SDSS and more recent Legacy Survey imaging over ten-year baselines, we select objects whose photometry is consistent with the large changes in luminosity expected in changing look quasars. We aim to build up a catalog of both transitioned and transitioning objects for future monitoring.

  20. Epoch-based Entropy for Early Screening of Alzheimer's Disease.

    Science.gov (United States)

    Houmani, N; Dreyfus, G; Vialatte, F B

    2015-12-01

In this paper, we introduce a novel entropy measure, termed epoch-based entropy. This measure quantifies the disorder of EEG signals at both the time level and the spatial level, using local density estimation by a Hidden Markov Model on inter-channel stationary epochs. The investigation is conducted on a multi-centric EEG database recorded from patients at an early stage of Alzheimer's disease (AD) and from age-matched healthy subjects. We investigate the classification performance of this method, its robustness to noise, and its sensitivity to sampling frequency and to variations of hyperparameters. The measure is compared to two alternative complexity measures, Shannon's entropy and correlation dimension. The classification accuracies for the discrimination of AD patients from healthy subjects were estimated using a linear classifier designed on a development dataset and subsequently tested on an independent test set. Epoch-based entropy reached a classification accuracy of 83% on the test dataset (specificity = 83.3%, sensitivity = 82.3%), outperforming the two other complexity measures. Furthermore, it was shown to be more stable to hyperparameter variations, and less sensitive to noise and sampling frequency disturbances, than the other two complexity measures.
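The published measure estimates local signal densities with a Hidden Markov Model on inter-channel stationary epochs, which is beyond a short sketch. A deliberately simplified stand-in, illustrating only the "entropy per epoch, averaged over epochs" idea with plain amplitude histograms in place of the HMM densities, might look like this (all parameters are illustrative, not the paper's):

```python
import numpy as np

def per_epoch_entropy(signal, epoch_len, n_bins=16):
    """Split a 1-D signal into fixed-length epochs, histogram the amplitudes
    within each epoch on a common set of bins, and return the mean Shannon
    entropy (bits) across epochs."""
    n_epochs = len(signal) // epoch_len
    edges = np.histogram_bin_edges(signal, bins=n_bins)
    entropies = []
    for k in range(n_epochs):
        seg = signal[k * epoch_len : (k + 1) * epoch_len]
        counts, _ = np.histogram(seg, bins=edges)
        p = counts / counts.sum()
        p = p[p > 0]                      # 0 * log(0) is taken as 0
        entropies.append(-np.sum(p * np.log2(p)))
    return float(np.mean(entropies))

rng = np.random.default_rng(0)
flat = per_epoch_entropy(np.ones(1024), epoch_len=128)        # fully ordered signal
noisy = per_epoch_entropy(rng.normal(size=1024), epoch_len=128)
```

A constant signal scores zero bits while a noisy one scores several, matching the intuition that the measure quantifies disorder; the HMM-based estimator in the paper refines this by modeling local, inter-channel structure.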

  1. Superposed ruptile deformational events revealed by field and VOM structural analysis

    Science.gov (United States)

    Kumaira, Sissa; Guadagnin, Felipe; Keller Lautert, Maiara

    2017-04-01

Virtual outcrop models (VOM) are becoming an important tool in the analysis of geological structures due to the possibility of obtaining the geometry, and in some cases the kinematics, of the analyzed structures in a three-dimensional photorealistic space. These data are used to gain quantitative information on deformational features which, coupled with numerical models, can assist in understanding deformational processes. Old basement units commonly register superposed deformational events, either ductile or ruptile, along their evolution. The Porongos Belt, located in southern Brazil, has a complex deformational history, registering at least five ductile and ruptile deformational events. In this study, we present a structural analysis of a quarry in the Porongos Belt, coupling field and VOM structural information to understand the processes involved in the last two deformational events. Field information was acquired using traditional structural methods for the analysis of ruptile structures, such as descriptions, drawings, acquisition of orientation vectors and kinematic analysis. The VOM was created with the image-based modeling method through photogrammetric data acquisition and orthorectification. Photogrammetric data were acquired using a Sony a3500 camera; a total of 128 photographs were taken from ca. 10-20 m from the outcrop in different orientations. Thirty-two control point coordinates were acquired using a combination of RTK dGPS surveying and total station work, providing a precision of a few millimeters in x, y and z. Photographs were imported into the Photo Scan software to create a 3D dense point cloud with a structure-from-motion algorithm, which was triangulated and textured to generate the VOM. The VOM was georeferenced (oriented and scaled) using the ground control points, and later analyzed in the OpenPlot software to extract structural information. Data were imported into the Wintensor software to obtain tensor orientations, and the Move software to process and

  2. ATLAS BASEMAPS IN WEB 2.0 EPOCH

    Directory of Open Access Journals (Sweden)

    V. Chabaniuk

    2016-06-01

Full Text Available The authors have analyzed their experience of producing various Electronic Atlases (EA) and Atlas Information Systems (AtIS) of the so-called "classical type". These EA/AtIS have been implemented in the past decade in the Web 1.0 architecture (e.g., the National Atlas of Ukraine, the Atlas of radioactive contamination of Ukraine, and others). One of the main distinguishing features of these atlases was their static nature: the end user could not change the content of the EA/AtIS. Base maps are a very important element of any EA/AtIS. In classical-type EA/AtIS they were static datasets consisting of two parts: topographic data at a fixed scale and data on the administrative-territorial division of Ukraine. It is important to note that the technique of topographic data production was based on the use of direct channels of topographic entity observation (such as aerial photography) for the selected scale. Changes in the information technology of the past half-decade are characterized by the advent of the "Web 2.0 epoch". Due to this, phenomena such as "neo-cartography" and various mapping platforms like OpenStreetMap have appeared in cartography. These changes have forced developers of EA/AtIS to use new atlas basemaps. Our approach is described in the article. The phenomena of neo-cartography and/or Web 2.0 cartography are analysed by the authors using the previously developed Conceptual framework of EA/AtIS. This framework logically explains the relations of cartographic phenomena across three formations: Web 1.0, Web 1.0x1.0 and Web 2.0. Atlas basemaps of the Web 2.0 epoch are integrated information systems. We use several ways to integrate separate atlas basemaps into the information system - by building a weakly integrated information system, a structured system or a meta-system. This integrated information system consists of several basemaps and falls under the definition of "big data". In real projects the basemaps of three strata are already used

  3. Primordial black hole formation during the QCD epoch

    International Nuclear Information System (INIS)

    Jedamzik, K.

    1997-01-01

We consider the formation of horizon-size primordial black holes (PBHs) from pre-existing density fluctuations during cosmic phase transitions. It is pointed out that the formation of PBHs should be particularly efficient during the QCD epoch due to a substantial reduction of pressure forces during adiabatic collapse, or equivalently, a significant decrease in the effective speed of sound during the color-confinement transition. Our considerations imply that for generic initial density perturbation spectra PBH mass functions are expected to exhibit a pronounced peak on the QCD-horizon mass scale ∼1 M{sub ⊙}. This mass scale is roughly coincident with the estimated masses for compact objects recently observed in our galactic halo by the MACHO Collaboration. Black holes formed during the QCD epoch may offer an attractive explanation for the origin of halo dark matter, evading possibly problematic nucleosynthesis and luminosity bounds on baryonic halo dark matter. copyright 1997 The American Physical Society

  4. Lunar Impact Basins: Stratigraphy, Sequence and Ages from Superposed Impact Crater Populations Measured from Lunar Orbiter Laser Altimeter (LOLA) Data

    Science.gov (United States)

    Fassett, C. I.; Head, J. W.; Kadish, S. J.; Mazarico, E.; Neumann, G. A.; Smith, D. E.; Zuber, M. T.

    2012-01-01

Impact basin formation is a fundamental process in the evolution of the Moon and records the history of impactors in the early solar system. In order to assess the stratigraphy, sequence, and ages of impact basins and the impactor population as a function of time, we have used topography from the Lunar Orbiter Laser Altimeter (LOLA) on the Lunar Reconnaissance Orbiter (LRO) to measure the superposed impact crater size-frequency distributions for 30 lunar basins (D ≥ 300 km). These data generally support the widely used Wilhelms sequence of lunar basins, although we find significantly higher densities of superposed craters on many lunar basins than derived by Wilhelms (50% higher densities). Our data also provide new insight into the timing of the transition between distinct crater populations characteristic of ancient and young lunar terrains. The transition from a lunar impact flux dominated by Population 1 to Population 2 occurred before the mid-Nectarian. This is before the end of the period of rapid cratering, and potentially before the end of the hypothesized Late Heavy Bombardment. LOLA-derived crater densities also suggest that many Pre-Nectarian basins, such as South Pole-Aitken, have been cratered to saturation equilibrium. Finally, both crater counts and stratigraphic observations based on LOLA data are applicable to specific basin stratigraphic problems of interest; for example, using these data, we suggest that Serenitatis is older than Nectaris, and Humboldtianum is younger than Crisium. Sample return missions to specific basins can anchor these measurements to a Pre-Imbrian absolute chronology.

  5. Late Globalization and Evolution, Episodes and Epochs of Industries

    DEFF Research Database (Denmark)

    Turcan, Romeo V.; Boujarzadeh, Behnam; Dholakia, Nikhilesh

    While the empirical focus of this paper is the Danish Textile and Fashion Industry (DTFI) – specifically the episodes and epochs in the emergence and evolution of DTFI, in essence the micro and macro time-slices – the theoretical intent is wider. We aim to explore the conceptual terrain of what we...... for further exploration of the late globalization phenomenon. To get to the empirical case study, we follow a macro-conceptual to a micro-empirical path. We discuss the multidisciplinary and multifaceted field of late globalization and employing the historic-analytic approach to study DTFI we draw out very...... specific, empirically derived, conceptual themes about the patterns of global interactions that characterized the evolutionary trajectory of DTFI. We return to a final macro-conceptual section on late globalization where the particular DTFI case study advances the knowledge register only slightly; and we...

  6. The 21-cm Signal from the cosmological epoch of recombination

    Energy Technology Data Exchange (ETDEWEB)

    Fialkov, A. [Departement de Physique, Ecole Normale Superieure, CNRS, 24 rue Lhomond, Paris, 75005 (France); Loeb, A., E-mail: anastasia.fialkov@phys.ens.fr, E-mail: aloeb@cfa.harvard.edu [Department of Astronomy, Harvard University, 60 Garden Street, MS-51, Cambridge, MA, 02138 (United States)

    2013-11-01

    The redshifted 21-cm emission by neutral hydrogen offers a unique tool for mapping structure formation in the early universe in three dimensions. Here we provide the first detailed calculation of the 21-cm emission signal during and after the epoch of hydrogen recombination in the redshift range of z ∼ 500–1,100, corresponding to observed wavelengths of 100–230 meters. The 21-cm line deviates from thermal equilibrium with the cosmic microwave background (CMB) due to the excess Lyα radiation from hydrogen and helium recombinations. The resulting 21-cm signal reaches a brightness temperature of a milli-Kelvin, orders of magnitude larger than previously estimated. Its detection by a future lunar or space-based observatory could improve dramatically the statistical constraints on the cosmological initial conditions compared to existing two-dimensional maps of the CMB anisotropies.

  7. The 21-cm Signal from the cosmological epoch of recombination

    International Nuclear Information System (INIS)

    Fialkov, A.; Loeb, A.

    2013-01-01

    The redshifted 21-cm emission by neutral hydrogen offers a unique tool for mapping structure formation in the early universe in three dimensions. Here we provide the first detailed calculation of the 21-cm emission signal during and after the epoch of hydrogen recombination in the redshift range of z ∼ 500–1,100, corresponding to observed wavelengths of 100–230 meters. The 21-cm line deviates from thermal equilibrium with the cosmic microwave background (CMB) due to the excess Lyα radiation from hydrogen and helium recombinations. The resulting 21-cm signal reaches a brightness temperature of a milli-Kelvin, orders of magnitude larger than previously estimated. Its detection by a future lunar or space-based observatory could improve dramatically the statistical constraints on the cosmological initial conditions compared to existing two-dimensional maps of the CMB anisotropies

  8. Description of nighttime cough epochs in patients with stable COPD GOLD II-IV.

    Science.gov (United States)

    Fischer, Patrick; Gross, Volker; Kroenig, Johannes; Weissflog, Andreas; Hildebrandt, Olaf; Sohrabi, Keywan; Koehler, Ulrich

Chronic cough is one of the main symptoms of COPD. Ambulatory objective monitoring provides novel insights into the determinants and characteristics of nighttime cough in COPD. Nighttime cough was monitored objectively with the LEOSound lung sound monitor in patients with stable COPD stages II-IV. In 30 patients, with 10 patients in each stage group, nighttime cough was analyzed for epoch frequency, epoch severity (epoch length and coughs per epoch), and pattern (productive or nonproductive). Cough was found in all patients, ranging from 1 to 294 events over the recording period. In 29 patients, cough epochs were monitored, ranging from 1 to 75 epochs. The highest number of cough epochs was found in patients with COPD stage III. Active smokers had significantly more productive cough epochs (61%) than nonsmokers (24%). We found a high rate of nighttime cough epochs in patients with COPD, especially those in stage III. Productive cough was found predominantly in patients with persistent smoking. The LEOSound lung sound monitor offers a practical and valuable opportunity to evaluate cough objectively.

  9. On the link between extreme floods and excess monsoon epochs in South Asia

    Energy Technology Data Exchange (ETDEWEB)

    Kale, Vishwas [University of Pune, Department of Geography, Pune (India)

    2012-09-15

This paper provides a synoptic view of extreme monsoon floods on all nine large rivers of South Asia and their association with excess (above-normal) monsoon rainfall periods. Annual maximum flood series for 18 gauging stations spread over four countries (India, Pakistan, Bangladesh and Nepal) and long-term monsoon rainfall data were analyzed to ascertain whether the extreme floods were clustered in time and whether they coincided with multi-decade excess monsoon rainfall epochs at the basin level. Simple techniques, such as Cramer's t-test, regression and Mann-Kendall (MK) tests and the Hurst method, were used to evaluate the trends and patterns of the flood and rainfall series. The MK test reveals the absence of any long-term tendency in all the series. However, Cramer's t-test and the Hurst-Mandelbrot rescaled range statistic provide evidence that both rainfall and flood time series are persistent. Using Cramer's t-test, the excess monsoon epochs for each basin were identified. The excess monsoon periods for different basins were found to be highly asynchronous with respect to duration as well as beginning and end. Three main conclusions readily emerge from the analyses. Extreme floods (>90th percentile) in South Asia show a tendency to cluster in time. About three-fourths of the extreme floods have occurred during the excess monsoon periods between ∼1840 and 2000 AD, implying a noteworthy link between the two. The frequency of large floods was higher during the post-1940 period in general, and during three decades (1940s, 1950s and 1980s) in particular. (orig.)
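The Mann-Kendall test used above is a standard nonparametric trend test. A minimal sketch (textbook form, without the tie correction the study may have applied; the toy series is invented) is:

```python
import numpy as np
from math import sqrt
from statistics import NormalDist

def mann_kendall(x):
    """Mann-Kendall trend test (no tie correction): returns the S statistic,
    the normal-approximation Z score, and a two-sided p-value."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    diffs = np.sign(x[None, :] - x[:, None])       # element [i, j] = sign(x_j - x_i)
    s = int(diffs[np.triu_indices(n, k=1)].sum())  # sum over all pairs i < j
    var = n * (n - 1) * (2 * n + 5) / 18.0
    z = 0.0 if s == 0 else (s - np.sign(s)) / sqrt(var)
    p = 2.0 * (1.0 - NormalDist().cdf(abs(z)))
    return s, z, p

s_up, z_up, p_up = mann_kendall(np.arange(20.0))  # strictly increasing series
```

A strictly increasing series gives the maximum S = n(n-1)/2 with a vanishing p-value, while a trend-free series gives S near zero, which is the sense in which the study reports "absence of any long-term tendency".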

  10. Thinking social sciences from Latin America at the epochal change

    Directory of Open Access Journals (Sweden)

    Jaime Antonio Preciado Coronado

    2016-07-01

Full Text Available From the legacy of original disciplinary approaches, such as dependency theory and its Marxian critics, or the neo-structural economic theory founded by the Economic Commission for Latin America (ECLA), the Latin American social sciences reject Anglo-European-centered approaches as a way of reaffirming their own critical thinking, including with respect to neo-colonial practices. The challenge for this critical thinking is to be simultaneously cosmopolitan and Latin American. In this process, Latin American social thinking is regaining its originality and the vigor of its proposals thanks to a rich south-south dialogue, which implies a global character for its reflections and a questioning of its universal references. Although neither classical nor Western Marxism is hegemonic within critical theory, (neo-)Marxism enriched with the critique of the coloniality of power, world-systems theory, critical geopolitics and political ecology is recovering the field of critical theory as a founding key of epochal thinking. Epistemological debates with post-structuralism and postmodern approaches configure various recent developments in critical theory

  11. The “Anthropocene” epoch: Scientific decision or political statement?

    Science.gov (United States)

    Finney, Stanley C.; Edwards, Lucy E.

    2016-01-01

    The proposal for the “Anthropocene” epoch as a formal unit of the geologic time scale has received extensive attention in scientific and public media. However, most articles on the Anthropocene misrepresent the nature of the units of the International Chronostratigraphic Chart, which is produced by the International Commission on Stratigraphy (ICS) and serves as the basis for the geologic time scale. The stratigraphic record of the Anthropocene is minimal, especially with its recently proposed beginning in 1945; it is that of a human lifespan, and that definition relegates considerable anthropogenic change to a “pre-Anthropocene.” The utility of the Anthropocene requires careful consideration by its various potential users. Its concept is fundamentally different from the chronostratigraphic units that are established by ICS in that the documentation and study of the human impact on the Earth system are based more on direct human observation than on a stratigraphic record. The drive to officially recognize the Anthropocene may, in fact, be political rather than scientific.

  12. A dusty, normal galaxy in the epoch of reionization

    DEFF Research Database (Denmark)

    Watson, Darach; Christensen, Lise; Knudsen, Kirsten Kraiberg

    2015-01-01

    Candidates for the modest galaxies that formed most of the stars in the early universe, at redshifts $z > 7$, have been found in large numbers with extremely deep restframe-UV imaging. But it has proved difficult for existing spectrographs to characterise them in the UV. The detailed properties...... of these galaxies could be measured from dust and cool gas emission at far-infrared wavelengths if the galaxies have become sufficiently enriched in dust and metals. So far, however, the most distant UV-selected galaxy detected in dust emission is only at $z = 3.25$, and recent results have cast doubt on whether...... dust and molecules can be found in typical galaxies at this early epoch. Here we report thermal dust emission from an archetypal early universe star-forming galaxy, A1689-zD1. We detect its stellar continuum in spectroscopy and determine its redshift to be $z = 7.5\\pm0.2$ from a spectroscopic detection...

  13. Epoch making NIRS studies seen through citation trends

    International Nuclear Information System (INIS)

    Dan, Ippeita

    2009-01-01

Near-infrared spectroscopy (NIRS) studies are investigated through citation trends, covering only the literature on brain function measurement and its methodology, together with NIRS principles, technological development, present state and future prospects. The investigation first surveys the names of important authors of the relevant papers in Web of Science and Google Scholar, with the search words NIRS and brain, and optical topography as an option. Second, more than 100 papers by those authors citing any of them are picked up, and the papers are ranked according to Web of Science citation number; the top nineteen are presented here. Impactful and epoch-making papers are reviewed, with explanations of: the establishment of technology for measuring cerebral blood flow change, and subsequently brain function, by NIRS; development of multi-channel detection; simultaneous measurement with other imaging modalities; examination of NIRS validity; spatial analysis of NIRS; and measurement of brain function. The highest citation count, 1,238, belongs to the paper by F. F. Jobsis in 'Science' (1977). It should be noted that 10 of the top 19 papers are by Japanese authors. However, the review articles omitted from the present literature survey are mostly by foreign authors: an effort to systematize the fields concerned might be required in this country. (K.T.)

  14. Systematic Uncertainties in Black Hole Masses Determined from Single Epoch Spectra

    DEFF Research Database (Denmark)

    Denney, Kelly D.; Peterson, Bradley M.; Dietrich, Matthias

    2008-01-01

    We explore the nature of systematic errors that can arise in measurement of black hole masses from single-epoch spectra of active galactic nuclei (AGNs) by utilizing the many epochs available for NGC 5548 and PG1229+204 from reverberation mapping databases. In particular, we examine systematics due...

  15. Linguistic Engineering and Linguistic of Engineering: Adaptation of Linguistic Paradigm for Circumstance of Engineering Epoch

    OpenAIRE

    Natalya Halina

    2014-01-01

    The article is devoted to the problems of linguistic knowledge in the Engineering Epoch. The Engineering Epoch is the time of adaptation to information flows through knowledge management. The system of adaptation mechanisms is connected with language and linguistic technologies, forming the new linguistic patterns of Linguistic Engineering and the Linguistic of Engineering.

  16. Hydrogen Epoch of Reionization Array (HERA) Calibrated FFT Correlator Simulation

    Science.gov (United States)

    Salazar, Jeffrey David; Parsons, Aaron

    2018-01-01

    The Hydrogen Epoch of Reionization Array (HERA) project is an astronomical radio interferometer array with a redundant baseline configuration. Interferometer arrays are widely used in radio astronomy because they have a variety of advantages over single-antenna systems. For example, they produce images (visibilities) closely matching those of a large antenna (such as the Arecibo observatory), while both the hardware and maintenance costs are significantly lower. However, this method has some complications, one being the computational cost of correlating data from all of the antennas. A correlator is an electronic device that cross-correlates the data between the individual antennas; these cross-correlations are what radio astronomers call visibilities. HERA, being in its early stages, utilizes a traditional correlator system, whose cost scales as N², where N is the number of antennas in the array. The purpose of a redundant baseline configuration is to enable a more efficient Fast Fourier Transform (FFT) correlator, which scales as N log₂ N. The data acquired from this sort of setup, however, inherit geometric delays and uncalibrated antenna gains. This particular project simulates the process of calibrating signals from astronomical sources. Each signal "received" by an antenna in the simulation is given a random antenna gain and geometric delay. The "linsolve" Python module was used to solve for the unknown variables in the simulation (complex gains and delays), which then gave values for the true visibilities. This first version of the simulation only mimics a one-dimensional redundant telescope array detecting a small number of sources located in the volume above the antenna plane. Future versions, using GPUs, will handle a two-dimensional redundant array of telescopes detecting a large number of sources in the volume above the array.
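    The amplitude half of such a redundant-array calibration reduces to linear least squares, which is essentially what "linsolve" automates. The sketch below is a toy illustration, not HERA's pipeline: the antenna count, gain spread and calibrator visibility are invented, and only gain amplitudes (not phases or delays) are solved for.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    N = 6                                            # invented antenna count
    g_true = np.exp(rng.normal(scale=0.1, size=N))   # per-antenna gain amplitudes
    V = 2.0                                          # known calibrator visibility amplitude

    # Every pair (i, j) measures |v_ij| = g_i * g_j * V
    pairs = [(i, j) for i in range(N) for j in range(i + 1, N)]
    v = np.array([g_true[i] * g_true[j] * V for i, j in pairs])

    # Taking logs linearises the problem: log|v_ij| - log V = log g_i + log g_j
    A = np.zeros((len(pairs), N))
    for row, (i, j) in enumerate(pairs):
        A[row, i] = 1.0
        A[row, j] = 1.0
    b = np.log(v) - np.log(V)
    log_g, *_ = np.linalg.lstsq(A, b, rcond=None)
    g_est = np.exp(log_g)                            # recovered gain amplitudes
    ```

    Phase (delay) calibration follows the same linearisation but carries degeneracies (an overall phase and a tilt across the array) that must be pinned by reference antennas, which is why the full problem is harder than this amplitude-only sketch.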

  17. Seeking the epoch of maximum luminosity for dusty quasars

    International Nuclear Information System (INIS)

    Vardanyan, Valeri; Weedman, Daniel; Sargsyan, Lusine

    2014-01-01

    Infrared luminosities νL_ν(7.8 μm) arising from dust reradiation are determined for Sloan Digital Sky Survey (SDSS) quasars with 1.4 < z < 5. The most luminous quasars reach νL_ν(7.8 μm) > 10^46.6 erg s⁻¹ for all 2 < z < 5, indicating that the epoch when quasars first reached their maximum luminosity has not yet been identified at any redshift below 5. The most ultraviolet-luminous quasars, defined by rest-frame νL_ν(0.25 μm), have the largest values of the ratio νL_ν(0.25 μm)/νL_ν(7.8 μm), with a maximum ratio at z = 2.9. From these results, we conclude that the quasars most luminous in the ultraviolet have the smallest dust content and appear luminous primarily because of lessened extinction. Observed ultraviolet/infrared luminosity ratios are used to define 'obscured' quasars as those having >5 mag of ultraviolet extinction. We present a new summary of obscured quasars discovered with the Spitzer Infrared Spectrograph and determine the infrared luminosity function of these obscured quasars at z ∼ 2.1. This is compared with infrared luminosity functions of optically discovered, unobscured quasars in the SDSS and in the AGN and Galaxy Evolution Survey. The comparison indicates comparable numbers of obscured and unobscured quasars at z ∼ 2.1, with a possible excess of obscured quasars at fainter luminosities.

  18. Identifying individual sleep apnea/hypoapnea epochs using smartphone-based pulse oximetry.

    Science.gov (United States)

    Garde, Ainara; Dekhordi, Parastoo; Ansermino, J Mark; Dumont, Guy A

    2016-08-01

    Sleep apnea, characterized by frequent pauses in breathing during sleep, poses a serious threat to the healthy growth and development of children. Polysomnography (PSG), the gold standard for sleep apnea diagnosis, is resource intensive and confined to sleep laboratories, thus reducing its accessibility. Pulse oximetry alone, providing blood oxygen saturation (SpO2) and blood volume changes in tissue (PPG), has the potential to identify children with sleep apnea. Thus, we aim to develop a tool for at-home sleep apnea screening that provides a detailed and automated 30 s epoch-by-epoch sleep apnea analysis. We propose to extract features characterizing pulse oximetry (SpO2 and pulse rate variability [PRV], a surrogate measure of heart rate variability) to create a multivariate logistic regression model that identifies epochs containing apnea/hypopnea events. Overnight pulse oximetry was collected using a smartphone-based pulse oximeter, simultaneously with standard PSG, from 160 children at the British Columbia Children's Hospital. The sleep technician manually scored all apnea/hypopnea events during the PSG study. Based on these scores we labeled each epoch as containing or not containing apnea/hypopnea. We randomly divided the subjects into training data (40%), used to develop the model applying the LASSO method, and testing data (60%), used to validate the model. The developed model was assessed epoch-by-epoch for each subject. The test dataset had a median area under the receiver operating characteristic (ROC) curve of 81%; the model provided a median accuracy of 74%, sensitivity of 75%, and specificity of 73% when using a risk threshold similar to the percentage of apnea/hypopnea epochs. Thus, providing a detailed epoch-by-epoch analysis with at-home pulse oximetry alone is feasible, with accuracy, sensitivity and specificity values above 73%. However, the performance might decrease when analyzing subjects with a low number of apnea/hypopnea events.
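    The classification step described above can be sketched with synthetic features standing in for the SpO2/PRV measures. The paper applies the LASSO method; the toy below approximates it with L1-penalised logistic regression fitted by proximal gradient descent (the feature counts, penalty weight and learning rate are all invented for illustration, not the study's values).

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n_epochs, n_feat = 2000, 10            # invented: 2000 thirty-second epochs
    X = rng.normal(size=(n_epochs, n_feat))
    w_true = np.zeros(n_feat)
    w_true[:3] = [2.0, -1.5, 1.0]          # only 3 features carry apnea signal
    p = 1.0 / (1.0 + np.exp(-(X @ w_true)))
    y = (rng.uniform(size=n_epochs) < p).astype(float)  # 1 = apnea/hypopnea epoch

    def fit_l1_logistic(X, y, lam=0.02, lr=0.1, iters=3000):
        """L1-penalised logistic regression via proximal gradient (ISTA)."""
        w = np.zeros(X.shape[1])
        for _ in range(iters):
            z = 1.0 / (1.0 + np.exp(-(X @ w)))
            w -= lr * (X.T @ (z - y) / len(y))                       # gradient step
            w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)   # soft-threshold
        return w

    w = fit_l1_logistic(X, y)
    accuracy = (((X @ w) > 0) == (y > 0.5)).mean()   # epoch-by-epoch accuracy
    ```

    The L1 penalty drives the seven uninformative coefficients toward zero, mirroring the feature selection LASSO performs on the pulse-oximetry features.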

  19. Application Of A New Semi-Empirical Model For Forming Limit Prediction Of Sheet Material Including Superposed Loads Of Bending And Shearing

    Science.gov (United States)

    Held, Christian; Liewald, Mathias; Schleich, Ralf; Sindel, Manfred

    2010-06-01

    The use of lightweight materials offers substantial strength and weight advantages in car body design. Unfortunately such kinds of sheet material are more susceptible to wrinkling, spring back and fracture during press shop operations. For characterization of capability of sheet material dedicated to deep drawing processes in the automotive industry, mainly Forming Limit Diagrams (FLD) are used. However, new investigations at the Institute for Metal Forming Technology have shown that High Strength Steel Sheet Material and Aluminum Alloys show increased formability in case of bending loads are superposed to stretching loads. Likewise, by superposing shearing on in plane uniaxial or biaxial tension formability changes because of materials crystallographic texture. Such mixed stress and strain conditions including bending and shearing effects can occur in deep-drawing processes of complex car body parts as well as subsequent forming operations like flanging. But changes in formability cannot be described by using the conventional FLC. Hence, for purpose of improvement of failure prediction in numerical simulation codes significant failure criteria for these strain conditions are missing. Considering such aspects in defining suitable failure criteria which is easy to implement into FEA a new semi-empirical model has been developed considering the effect of bending and shearing in sheet metals formability. This failure criterion consists of the combination of the so called cFLC (combined Forming Limit Curve), which considers superposed bending load conditions and the SFLC (Shear Forming Limit Curve), which again includes the effect of shearing on sheet metal's formability.

  20. Application Of A New Semi-Empirical Model For Forming Limit Prediction Of Sheet Material Including Superposed Loads Of Bending And Shearing

    International Nuclear Information System (INIS)

    Held, Christian; Liewald, Mathias; Schleich, Ralf; Sindel, Manfred

    2010-01-01

    The use of lightweight materials offers substantial strength and weight advantages in car body design. Unfortunately such kinds of sheet material are more susceptible to wrinkling, spring back and fracture during press shop operations. For characterization of capability of sheet material dedicated to deep drawing processes in the automotive industry, mainly Forming Limit Diagrams (FLD) are used. However, new investigations at the Institute for Metal Forming Technology have shown that High Strength Steel Sheet Material and Aluminum Alloys show increased formability in case of bending loads are superposed to stretching loads. Likewise, by superposing shearing on in plane uniaxial or biaxial tension formability changes because of materials crystallographic texture. Such mixed stress and strain conditions including bending and shearing effects can occur in deep-drawing processes of complex car body parts as well as subsequent forming operations like flanging. But changes in formability cannot be described by using the conventional FLC. Hence, for purpose of improvement of failure prediction in numerical simulation codes significant failure criteria for these strain conditions are missing. Considering such aspects in defining suitable failure criteria which is easy to implement into FEA a new semi-empirical model has been developed considering the effect of bending and shearing in sheet metals formability. This failure criterion consists of the combination of the so called cFLC (combined Forming Limit Curve), which considers superposed bending load conditions and the SFLC (Shear Forming Limit Curve), which again includes the effect of shearing on sheet metal's formability.

  1. Epochs of radioactivity in historical evolution of the earth with reference to evolution of biosphere

    International Nuclear Information System (INIS)

    Neruchev, S.G.

    1976-01-01

    Periodic epochs of intense contamination of the environment by uranium over the course of the Earth's evolution, and the biogenic mechanism of uranium accumulation in sediments during the lifetime of organisms, are established. Global differentiation of the radioactivity epochs and the essential effect of periodic radiation on the evolution of the biosphere are shown. The radiational-mutational mechanism is shown to be extremely nonuniform during the evolution of the organic kingdom. It has been found that the intermittency of the radioactive epochs is responsible for peculiarities in the stratigraphic distribution of sedimentary uranium, sapropelic shales, phosphorites, oil-producing rocks and other minerals

  2. Organic Chemostratigraphic Markers Characteristic of the (Informally Designated) Anthropocene Epoch

    Science.gov (United States)

    Kruge, M. A.

    2008-12-01

    Recognizing the tremendous collective impact of humans on the environment in the industrial age, the proposed designation of the current time period as the Anthropocene Epoch has considerable merit. One of the signature activities during this time continues to be the intensive extraction, processing, and combustion of fossil fuels. While fossil fuels themselves are naturally-occurring, they are most often millions of years old and associated with deeply buried strata. They may be found at the surface, for example, as natural oil seeps or coal seam outcrops, but these are relatively rare occurrences. Fossil fuels and their myriad by-products become the source of distinctive organic chemostratigraphic marker compounds for the Anthropocene when they occur out of their original geological context, i.e., as widespread contaminants in sediments and soils. These persistent compounds have high long-term preservation potential, particularly when deposited under low oxygen conditions. Fossil fuels can occur as environmental contaminants in raw form (e.g., crude petroleum spilled during transport) or as manufactured products (e.g., diesel oil from a leaking storage facility, coal tar from a manufactured gas plant, plastic waste in a landfill, pesticides from petroleum feedstock in agricultural soils). Distinctive assemblages of hydrocarbon marker compounds including acyclic isoprenoids, hopanes, and steranes can be readily detected by gas chromatography/mass spectrometric analysis of surface sediments and soils. Polycyclic aromatic hydrocarbons (PAHs), along with sulfur-, oxygen-, and nitrogen-containing aromatic compounds, are also characteristic of fossil fuels and are readily detectable as well. More widespread is the airfall deposition of fossil fuel combustion products from vehicular, domestic and industrial sources. These occur in higher concentrations in large urban centers, but are also detected in remote areas. Parent (nonmethylated) PAHs such as phenanthrene

  3. Upper Limits on the 21 cm Epoch of Reionization Power Spectrum from One Night with LOFAR

    Science.gov (United States)

    Patil, A. H.; Yatawatta, S.; Koopmans, L. V. E.; de Bruyn, A. G.; Brentjens, M. A.; Zaroubi, S.; Asad, K. M. B.; Hatef, M.; Jelić, V.; Mevius, M.; Offringa, A. R.; Pandey, V. N.; Vedantham, H.; Abdalla, F. B.; Brouw, W. N.; Chapman, E.; Ciardi, B.; Gehlot, B. K.; Ghosh, A.; Harker, G.; Iliev, I. T.; Kakiichi, K.; Majumdar, S.; Mellema, G.; Silva, M. B.; Schaye, J.; Vrbanec, D.; Wijnholds, S. J.

    2017-03-01

    We present the first limits on the Epoch of Reionization 21 cm H I power spectra, in the redshift range z = 7.9-10.6, using the Low-Frequency Array (LOFAR) High-Band Antenna (HBA). In total, 13.0 hr of data were used from observations centered on the North Celestial Pole. After subtraction of the sky model and the noise bias, we detect a non-zero Δ²_I = (56 ± 13 mK)² (1-σ) excess variance and a best 2-σ upper limit of Δ²_21 < (79.6 mK)² at k = 0.053 h cMpc⁻¹ in the range z = 9.6-10.6. The excess variance decreases when optimizing the smoothness of the direction- and frequency-dependent gain calibration, and with increasing the completeness of the sky model. It is likely caused by (I) residual side-lobe noise on calibration baselines, (II) leverage due to nonlinear effects, (III) noise and ionosphere-induced gain errors, or a combination thereof. Further analyses of the excess variance will be discussed in forthcoming publications.

  4. Brain network segregation and integration during an epoch-related working memory fMRI experiment.

    Science.gov (United States)

    Fransson, Peter; Schiffler, Björn C; Thompson, William Hedley

    2018-05-17

    The characterization of brain subnetwork segregation and integration has previously focused on changes that are detectable at the level of entire sessions or epochs of imaging data. In this study, we applied time-varying functional connectivity analysis together with temporal network theory to calculate point-by-point estimates in subnetwork segregation and integration during an epoch-based (2-back, 0-back, baseline) working memory fMRI experiment as well as during resting-state. This approach allowed us to follow task-related changes in subnetwork segregation and integration at a high temporal resolution. At a global level, the cognitively more taxing 2-back epochs elicited an overall stronger response of integration between subnetworks compared to the 0-back epochs. Moreover, the visual, sensorimotor and fronto-parietal subnetworks displayed characteristic and distinct temporal profiles of segregation and integration during the 0- and 2-back epochs. During the interspersed epochs of baseline, several subnetworks, including the visual, fronto-parietal, cingulo-opercular and dorsal attention subnetworks showed pronounced increases in segregation. Using a drift diffusion model we show that the response time for the 2-back trials are correlated with integration for the fronto-parietal subnetwork and correlated with segregation for the visual subnetwork. Our results elucidate the fast-evolving events with regard to subnetwork integration and segregation that occur in an epoch-related task fMRI experiment. Our findings suggest that minute changes in subnetwork integration are of importance for task performance. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  5. Faint Object Detection in Multi-Epoch Observations via Catalog Data Fusion

    Energy Technology Data Exchange (ETDEWEB)

    Budavári, Tamás; Szalay, Alexander S. [Department of Physics and Astronomy, The Johns Hopkins University, 3400 North Charles Street, Baltimore, MD 21218 (United States); Loredo, Thomas J. [Cornell Center for Astrophysics and Planetary Science, Cornell University, Ithaca, NY 14853 (United States)

    2017-03-20

    Astronomy in the time-domain era faces several new challenges. One of them is the efficient use of observations obtained at multiple epochs. The work presented here addresses faint object detection and describes an incremental strategy for separating real objects from artifacts in ongoing surveys. The idea is to produce low-threshold single-epoch catalogs and to accumulate information across epochs. This is in contrast to more conventional strategies based on co-added or stacked images. We adopt a Bayesian approach, addressing object detection by calculating the marginal likelihoods for hypotheses asserting that there is no object or one object in a small image patch containing at most one cataloged source at each epoch. The object-present hypothesis interprets the sources in a patch at different epochs as arising from a genuine object; the no-object hypothesis interprets candidate sources as spurious, arising from noise peaks. We study the detection probability for constant-flux objects in a Gaussian noise setting, comparing results based on single and stacked exposures to results based on a series of single-epoch catalog summaries. Our procedure amounts to generalized cross-matching: it is the product of a factor accounting for the matching of the estimated fluxes of the candidate sources and a factor accounting for the matching of their estimated directions. We find that probabilistic fusion of multi-epoch catalogs can detect sources with similar sensitivity and selectivity compared to stacking. The probabilistic cross-matching framework underlying our approach plays an important role in maintaining detection sensitivity and points toward generalizations that could accommodate variability and complex object structure.
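    The marginal-likelihood comparison at the heart of this approach can be sketched for the constant-flux, Gaussian-noise case the authors study. The numbers below (source flux, noise level, epoch count, and the flat flux prior with its range) are invented for illustration and are not the paper's values.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    sigma = 1.0                      # per-epoch flux uncertainty (invented)
    true_flux = 2.5                  # constant source flux (invented)
    n_epochs = 8
    f_hat = true_flux + rng.normal(scale=sigma, size=n_epochs)  # catalog fluxes

    def log_gauss(x, mu, s):
        return -0.5 * np.log(2 * np.pi * s**2) - (x - mu) ** 2 / (2 * s**2)

    # H0 (no object): each catalog entry is a noise fluctuation about zero flux.
    logL0 = log_gauss(f_hat, 0.0, sigma).sum()

    # H1 (one object): constant flux f with a flat prior on [0, 5] (assumed),
    # marginalised numerically over a grid of candidate fluxes.
    grid = np.linspace(0.0, 5.0, 501)
    logL_f = np.array([log_gauss(f_hat, f, sigma).sum() for f in grid])
    m = logL_f.max()
    logL1 = m + np.log(np.exp(logL_f - m).mean())  # grid mean = flat-prior average

    log_bayes = logL1 - logL0        # > 0 favours the object-present hypothesis
    ```

    Accumulating epochs sharpens the common-flux likelihood, so a source too faint to pass any single-epoch detection threshold can still yield a decisive Bayes factor across many epochs, which is the sense in which catalog fusion rivals image stacking.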

  6. Faint Object Detection in Multi-Epoch Observations via Catalog Data Fusion

    International Nuclear Information System (INIS)

    Budavári, Tamás; Szalay, Alexander S.; Loredo, Thomas J.

    2017-01-01

    Astronomy in the time-domain era faces several new challenges. One of them is the efficient use of observations obtained at multiple epochs. The work presented here addresses faint object detection and describes an incremental strategy for separating real objects from artifacts in ongoing surveys. The idea is to produce low-threshold single-epoch catalogs and to accumulate information across epochs. This is in contrast to more conventional strategies based on co-added or stacked images. We adopt a Bayesian approach, addressing object detection by calculating the marginal likelihoods for hypotheses asserting that there is no object or one object in a small image patch containing at most one cataloged source at each epoch. The object-present hypothesis interprets the sources in a patch at different epochs as arising from a genuine object; the no-object hypothesis interprets candidate sources as spurious, arising from noise peaks. We study the detection probability for constant-flux objects in a Gaussian noise setting, comparing results based on single and stacked exposures to results based on a series of single-epoch catalog summaries. Our procedure amounts to generalized cross-matching: it is the product of a factor accounting for the matching of the estimated fluxes of the candidate sources and a factor accounting for the matching of their estimated directions. We find that probabilistic fusion of multi-epoch catalogs can detect sources with similar sensitivity and selectivity compared to stacking. The probabilistic cross-matching framework underlying our approach plays an important role in maintaining detection sensitivity and points toward generalizations that could accommodate variability and complex object structure.

  7. Assessing worst case scenarios in movement demands derived from global positioning systems during international rugby union matches: Rolling averages versus fixed length epochs

    Science.gov (United States)

    Cunningham, Daniel J.; Shearer, David A.; Carter, Neil; Drawer, Scott; Pollard, Ben; Bennett, Mark; Eager, Robin; Cook, Christian J.; Farrell, John; Russell, Mark

    2018-01-01

    The assessment of competitive movement demands in team sports has traditionally relied upon global positioning system (GPS) analyses presented as fixed-time epochs (e.g., 5–40 min). More recently, presenting game data as a rolling average has become prevalent due to concerns over a loss of sampling resolution associated with the windowing of data over fixed periods. Accordingly, this study compared rolling average (ROLL) and fixed-time (FIXED) epochs for quantifying the peak movement demands of international rugby union match-play as a function of playing position. Elite players from three different squads (n = 119) were monitored using 10 Hz GPS during 36 matches played in the 2014–2017 seasons. Players categorised broadly as forwards and backs, and then by positional sub-group (FR: front row, SR: second row, BR: back row, HB: half back, MF: midfield, B3: back three) were monitored during match-play for peak values of high-speed running (>5 m·s-1; HSR) and relative distance covered (m·min-1) over 60–300 s using two types of sample-epoch (ROLL, FIXED). Irrespective of the method used, as the epoch length increased, values for the intensity of running actions decreased (e.g., For the backs using the ROLL method, distance covered decreased from 177.4 ± 20.6 m·min-1 in the 60 s epoch to 107.5 ± 13.3 m·min-1 for the 300 s epoch). For the team as a whole, and irrespective of position, estimates of fixed effects indicated significant between-method differences across all time-points for both relative distance covered and HSR. Movement demands were underestimated consistently by FIXED versus ROLL with differences being most pronounced using 60 s epochs (95% CI HSR: -6.05 to -4.70 m·min-1, 95% CI distance: -18.45 to -16.43 m·min-1). For all HSR time epochs except one, all backs groups increased more (p < 0.01) from FIXED to ROLL than the forward groups. Linear mixed modelling of ROLL data highlighted that for HSR (except 60 s epoch), SR was the only group not
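    The FIXED-versus-ROLL underestimation reported above follows directly from how the windows are placed: every fixed window is also one of the rolling windows, so the rolling peak can never be lower, and a burst straddling a fixed boundary gets split. A toy illustration with invented 1 Hz per-second distance data (not the study's GPS data):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    dist = rng.uniform(1.0, 3.0, size=600)   # 10 min of per-second distances (m)
    dist[230:270] += 4.0                     # 40 s burst straddling the 240 s boundary

    epoch = 60  # seconds, so each 60 s window sum is directly in m·min-1
    # FIXED: non-overlapping 60 s windows aligned to the recording start
    fixed_peak = max(dist[i:i + epoch].sum() for i in range(0, len(dist), epoch))
    # ROLL: every possible 60 s window, via a rolling sum
    roll_peak = np.convolve(dist, np.ones(epoch), mode="valid").max()
    ```

    Here the fixed grid splits the burst across two adjacent windows, so `fixed_peak` understates the true peak 60 s demand that `roll_peak` captures.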

  8. A shift of thermokarst lakes from carbon sources to sinks during the Holocene epoch

    Science.gov (United States)

    Walter Anthony, K. M.; Zimov, S. A.; Grosse, G.; Jones, Miriam C.; Anthony, P.; Chapin, F. S.; Finlay, J. C.; Mack, M. C.; Davydov, S.; Frenzel, P.F.; Frolking, S.

    2014-01-01

    Thermokarst lakes formed across vast regions of Siberia and Alaska during the last deglaciation and are thought to be a net source of atmospheric methane and carbon dioxide during the Holocene epoch [1-4]. However, the same thermokarst lakes can also sequester carbon [5], and it remains uncertain whether carbon uptake by thermokarst lakes can offset their greenhouse gas emissions. Here we use field observations of Siberian permafrost exposures, radiocarbon dating and spatial analyses to quantify Holocene carbon stocks and fluxes in lake sediments overlying thawed Pleistocene-aged permafrost. We find that carbon accumulation in deep thermokarst-lake sediments since the last deglaciation is about 1.6 times larger than the mass of Pleistocene-aged permafrost carbon released as greenhouse gases when the lakes first formed. Although methane and carbon dioxide emissions following thaw lead to immediate radiative warming, carbon uptake in peat-rich sediments occurs over millennial timescales. We assess thermokarst-lake carbon feedbacks to climate with an atmospheric perturbation model and find that thermokarst basins switched from a net radiative warming to a net cooling climate effect about 5,000 years ago. High rates of Holocene carbon accumulation in lake sediments (47 ± 10 grams of carbon per square metre per year; mean ± standard error) were driven by thermokarst erosion and deposition of terrestrial organic matter, by nutrient release from thawing permafrost that stimulated lake productivity, and by slow decomposition in cold, anoxic lake bottoms. When lakes eventually drained, permafrost formation rapidly sequestered sediment carbon. Our estimate of about 160 petagrams of Holocene organic carbon in deep lake basins of Siberia and Alaska increases the circumpolar peat carbon pool estimate for permafrost regions by over 50 per cent (ref. 6). The carbon in perennially frozen drained lake sediments may become vulnerable to mineralization as permafrost disappears [7]

  9. Was the Sun especially active at the end of the late glacial epoch?

    Science.gov (United States)

    Alekseeva, Liliya

    In their pioneering work, the geophysicists A. Brekke and A. Egeland (1983) collected beliefs of different peoples associated with northern lights. Our analyses of this collection show that these beliefs are mainly related to the mythological idea of "abnormal" dead (dead, childless old maids in Finnish beliefs; killed people; spirits dangerous to children). We find similar motifs in Slavic fairy tales about the "Thrice-Nine Land," regarded in folkloric studies as the other world (the Land where mobile and agitated warlike girls live, whose Head Girl is characterized by the words "white snow, pretty light, the prettiest in the World," but whose name "Mariya Morevna" refers to the word "mort"; where a river flows with its banks covered by human bones; where the witch Baba-Yaga dwells, being extremely dangerous for children). Moreover, it can be noted that similar narrative fabulous myths deal with the concept of auroral-oval northern lights, since some specific features of the natural auroral forms are mentioned there, with their particular spatial orientations (to the North or West). This resembles the manner in which Ancient Greek myths describe the real properties of heavenly phenomena in a mythological language. It is interesting that myths on the high-latitude northern lights spread even to the south of Europe (and possibly to India and Iran). This fact can be understood in view of the following. It has been established that, during the late glacial epoch, the environmental and cultural conditions were similar over the area from the Pyrenees to the Ural Mountains; the pattern of hunters' settlements outlined the glacial sheet from the outside. Relics of the hunters' beliefs can now be found in the Arctic, where the environment and lifestyle remain nearly unchanged. The ethnographer Yu.B. Simchenko (1976) has reconstructed the most archaic Arctic myths. According to them, the World of the dead is associated with the world of ice governed by the "Ice

  10. Heavy metals in human bones in different historical epochs.

    Science.gov (United States)

    Martínez-García, M J; Moreno, J M; Moreno-Clavel, J; Vergara, N; García-Sánchez, A; Guillamón, A; Portí, M; Moreno-Grau, S

    2005-09-15

    The concentration of the metals lead, copper, zinc, cadmium and iron was determined in bone remains belonging to 30 individuals buried in the Region of Cartagena dating from different historical periods and in eight persons who had died in recent times. The metals content with respect to lead, cadmium and copper was determined either by anodic stripping voltammetry or by atomic absorption spectroscopy on the basis of the concentrations present in the bone remains. In all cases, zinc and iron were quantified by means of atomic absorption spectroscopy. The lead concentrations found in the bone remains in our city are greater than those reported in the literature for other locations. This led to the consideration of the sources of these metals in our area, both the contribution from atmospheric aerosols as well as that from the soil in the area. Correlation analysis leads us to consider the presence of the studied metals in the analysed bone samples to be the consequence of analogous inputs, namely the inhalation of atmospheric aerosols and diverse contributions in the diet. The lowest values found in the studied bone remains correspond to the Neolithic period, with similar contents to present-day samples with respect to lead, copper, cadmium and iron. As regards the evolution over time of the concentrations of the metals under study, a clear increase in these is observed between the Neolithic period and the grouping made up of the Bronze Age, Roman domination and the Byzantine period. The trend lines used to classify the samples into 7 periods show that the maximum values of lead correspond to the Roman and Byzantine periods. For copper, this peak is found in the Byzantine Period and for iron, in the Islamic Period. Zinc shows an increasing tendency over the periods under study and cadmium is the only metal whose trend lines shows a decreasing slope.

  11. The Square Kilometre Array Epoch of Reionisation and Cosmic Dawn Experiment

    Science.gov (United States)

    Trott, Cathryn M.

    2018-05-01

    The Square Kilometre Array (SKA) Epoch of Reionisation and Cosmic Dawn (EoR/CD) experiments aim to explore the growth of structure and production of ionising radiation in the first billion years of the Universe. Here I describe the experiments planned for the future low-frequency components of the Observatory, and work underway to define, design and execute these programs.

  12. Different habitats within a region contain evolutionary heritage from different epochs depending on the abiotic environment

    NARCIS (Netherlands)

    Bartish, I.V.; Ozinga, W.A.; Bartish, M.I.; Wamelink, G.W.W.; Hennekens, S.M.; Prinzing, Andreas

    2016-01-01

    Aim: Biodiversity hot-spots are regions containing evolutionary heritage from ancient or recent geological epochs, i.e. evolutionary 'museums' or 'cradles', respectively. We hypothesize that: (1) there are also 'museums' and 'cradles' within regions - some species pools of particular habitat

  13. On the spin-temperature evolution during the epoch of reionization

    NARCIS (Netherlands)

    Thomas, Rajat M.; Zaroubi, Saleem

    Simulations estimating the brightness temperature (δT_b) of the redshifted 21 cm from the epoch of reionization (EoR) often assume that the spin temperature (T_s) is decoupled from the background cosmic microwave background (CMB) temperature and is much larger than it, i.e. T_s ≫ T_CMB. Although

  14. Detectability of the 21-cm CMB cross-correlation from the epoch of reionization

    NARCIS (Netherlands)

    Tashiro, Hiroyuki; Aghanim, Nabila; Langer, Mathieu; Douspis, Marian; Zaroubi, Saleem; Jelic, Vibor

    The 21-cm line fluctuations and the cosmic microwave background (CMB) are powerful probes of the epoch of reionization of the Universe. We study the potential of the cross-correlation between 21-cm line fluctuations and CMB anisotropy to obtain further constraints on the reionization history. We

  15. Importance of epoch length and registration time on accelerometer measurements in younger children

    DEFF Research Database (Denmark)

    Dencker, M; Svensson, J; El-Naaman, B

    2012-01-01

    The aim of this study was to investigate the effect of epoch length on accumulation of minutes of physical activity per day over a spectrum of intensities, and the effect that selection of number of hours of acceptable registration required per day had on number of days that were considered accep...

  16. Episodes and Epochs in the Evolution of Danish Textile and Fashion Industry

    DEFF Research Database (Denmark)

    Boujarzadeh, Behnam; Turcan, Romeo V.; Dholakia, Nikhilesh

    2016-01-01

    In this paper we explore the emergence and evolution of industries. Specifically we investigate the episodes and epochs in the emergence and evolution of Danish Textile and Fashion Industry. We collected historical data on Danish Textile and Fashion Industry between 1945 and 2015. We employ radar...

  17. Polarization leakage in epoch of reionization windows : The Low Frequency Array Case

    NARCIS (Netherlands)

    Asad, Khan

    2017-01-01

    The farther we look in space, the earlier we see in time. By observing a radio signal of 21cm wavelength coming from the epoch of reionization, when the universe was less than a billion years old, we can understand how the first stars, galaxies and black holes formed. This signal has not been

  18. The Corporate University's Role in Managing an Epoch in Learning Organisation Innovation

    Science.gov (United States)

    Dealtry, Richard

    2006-01-01

    Purpose: The purpose of this paper is to set the scene for some radical epochal thinking about the approach and future strategic directions in the management of organisational learning, following the author's earlier editorial theme concerning the need for exploration and innovation in organisational learning management.…

  19. The effect of epoch length on estimated EEG functional connectivity and brain network organisation

    Science.gov (United States)

    Fraschini, Matteo; Demuru, Matteo; Crobe, Alessandra; Marrosu, Francesco; Stam, Cornelis J.; Hillebrand, Arjan

    2016-06-01

    Objective. Graph theory and network science tools have revealed fundamental mechanisms of functional brain organization in resting-state M/EEG analysis. Nevertheless, it is still not clearly understood how several methodological aspects may bias the topology of the reconstructed functional networks. In this context, the literature shows inconsistency in the chosen length of the selected epochs, impeding a meaningful comparison between results from different studies. Approach. The aim of this study was to provide a network approach insensitive to the effects that epoch length has on functional connectivity and network reconstruction. Two different measures, the phase lag index (PLI) and the amplitude envelope correlation (AEC) were applied to EEG resting-state recordings for a group of 18 healthy volunteers using non-overlapping epochs with variable length (1, 2, 4, 6, 8, 10, 12, 14 and 16 s). Weighted clustering coefficient (CCw), weighted characteristic path length (L w) and minimum spanning tree (MST) parameters were computed to evaluate the network topology. The analysis was performed on both scalp and source-space data. Main results. Results from scalp analysis show a decrease in both mean PLI and AEC values with an increase in epoch length, with a tendency to stabilize at a length of 12 s for PLI and 6 s for AEC. Moreover, CCw and L w show very similar behaviour, with metrics based on AEC more reliable in terms of stability. In general, MST parameters stabilize at short epoch lengths, particularly for MSTs based on PLI (1-6 s versus 4-8 s for AEC). At the source-level the results were even more reliable, with stability already at 1 s duration for PLI-based MSTs. Significance. The present work suggests that both PLI and AEC depend on epoch length and that this has an impact on the reconstructed network topology, particularly at the scalp-level. 
Source-level MST topology is less sensitive to differences in epoch length, therefore enabling the comparison of brain
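The phase lag index used in the study above has a compact definition: the absolute mean sign of the sine of the instantaneous phase difference between two channels, which makes its dependence on the number of samples per epoch easy to see. A minimal sketch with synthetic phase series (not EEG data; all signal parameters here are illustrative):

```python
import numpy as np

def phase_lag_index(phase_x, phase_y):
    """PLI = |<sign(sin(dphi))>| over one epoch.
    0 = no consistent phase lag, 1 = a constant nonzero lag;
    zero-lag (volume-conduction-like) coupling contributes nothing."""
    dphi = np.asarray(phase_x) - np.asarray(phase_y)
    return np.abs(np.mean(np.sign(np.sin(dphi))))

# Longer epochs average the sign over more samples, which is one reason
# the estimated PLI (and the derived network topology) depends on epoch length.
t = np.linspace(0.0, 2.0, 512)            # a 2 s "epoch" at 256 Hz (illustrative)
phase_a = 2 * np.pi * 10.0 * t            # 10 Hz oscillation
phase_b = phase_a - np.pi / 4             # constant 45-degree lag
print(phase_lag_index(phase_a, phase_b))  # constant nonzero lag -> 1.0
```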

  20. Designing Successful Next-Generation Instruments to Detect the Epoch of Reionization

    Science.gov (United States)

    Thyagarajan, Nithyanandan; Hydrogen Epoch of Reionization Array (HERA) team, Murchison Widefield Array (MWA) team

    2018-01-01

    The Epoch of Reionization (EoR) signifies a period of intense evolution of the Inter-Galactic Medium (IGM) in the early Universe, caused by the first generations of stars and galaxies, during which the neutral IGM became completely ionized by redshift z ≥ 6. This important epoch is poorly explored to date. Measurement of the redshifted 21 cm line from neutral hydrogen during the EoR promises to provide the most direct constraints on this epoch. Ongoing experiments to detect the redshifted 21 cm power spectrum during reionization, including the Murchison Widefield Array (MWA), the Precision Array for Probing the Epoch of Reionization (PAPER), and the Low Frequency Array (LOFAR), appear to be severely affected by bright foregrounds and unaccounted-for instrumental systematics. For example, the spectral structure introduced by wide-field effects, aperture shapes and angular power patterns of the antennas, electrical and geometrical reflections in the antennas and electrical paths, and antenna position errors can be major limiting factors. These mimic the 21 cm signal and severely degrade the instrument performance. It is imperative for the next generation of experiments to eliminate these systematics at their source via robust instrument design. I will discuss a generic framework to set cosmologically motivated antenna performance specifications and design strategies using the Precision Radio Interferometry Simulator (PRISim) -- a high-precision tool that I have developed for simulations of foregrounds and the instrument transfer function, intended primarily for 21 cm EoR studies but also broadly applicable to interferometer-based intensity mapping experiments. The Hydrogen Epoch of Reionization Array (HERA), designed in part based on this framework, is expected to detect the 21 cm signal with high significance. I will present this framework and the simulations, and their potential for designing upcoming radio instruments such as HERA and the Square Kilometre Array (SKA).

  1. trash epoch

    Directory of Open Access Journals (Sweden)

    Elena Grigoryeva

    2012-02-01

    Full Text Available Unconventionally, the editorial of this issue begins with two epigraphs: "trash: 1. foolish ideas or talk; nonsense; 2. chiefly US and Canadian: useless or unwanted matter or objects; 3. a literary or artistic production of poor quality; 4. chiefly US and Canadian: a poor or worthless person or a group of such people" [Collins English Dictionary]; and "You got to make good out of bad. That's all there is to make it with" (Robert Penn Warren, “All the King’s Men”). But before talking about the topic of the issue, we should pay off a debt. The previous (double) issue was completely devoted to Irkutsk’s jubilee. We will fill in the gaps and look back on the events of the past year missed by PB. There are a lot of them: the UIA Congress in Tokyo (5–7), the Zodchestvo Festival and the UAR Plenary Meeting in Moscow (16–29), the review competition of graduation works in Erevan (30–39) and Les Ateliers of Urban Planning and Development in Cergy-Pontoise (48–54). The New Year’s news blends into the topic of the issue. This year in Irkutsk discarded Christmas trees began to be recycled into fertilizer. In the Kemerovo region old Christmas trees are used as food for cows. So the supercilious attitude toward trash is no longer relevant. In the future we shall eat it; actually, we are already eating it. Artists help us adjust to the idea that we will live in the global dump. Trash Muses inspired Zhenya Kraineva and Konstantin Lidin to a diptych about trash art (66–75). The first essential step in waste recycling is to change our view on waste. In people’s minds the dump is turned into a deposit of valuable resources. The detailed text by Andrei Ivanov (90–99) tells us about a slum district obtaining a new status and turning into a historical heritage. It is very urgent to convert trash into resources. On the Earth, mountains of garbage grow twice as fast as the population. 17% of homes in Sweden are heated with garbage. Japan processes 96% of its waste, Germany 95%, Poland 16%, Russia only 5%. The territory of Russia has by now accumulated 100 billion tons of garbage – there is scarcely any deposit of such a large scale. Waste recycling plants are a vital task for today’s and tomorrow’s architecture (102–105). Architecture itself is no longer designed for centuries. The green standards contain such an essential item as engineering redesign for dismantled or demolished structures. Instead of the philosophical principle “my house is my fortress”, impermanence is on the agenda today. A house, like short-lived packaging or tomorrow’s garbage, is now the statement of the problem. This issue returns to its regular sections. In the HERITAGE section there is an article by our American contributor Brian Spencer (124–129), and a debut of Fabio Todeschini, a professor from South Africa (130–136). In the LANDSCAPE section we start publishing a series of articles by Olga Smirnova (142–147), a Krasnoyarsk architect, poet and traveler…

  2. Faint galaxies - Bounds on the epoch of galaxy formation and the cosmological deceleration parameter

    International Nuclear Information System (INIS)

    Yoshii, Yuzuru; Peterson, B.A.

    1991-01-01

    Models of galaxy luminosity evolution are used to interpret the observed color distributions, redshift distributions, and number counts of faint galaxies. It is found from the color distributions that the redshift corresponding to the epoch of galaxy formation must be greater than three, and that the number counts of faint galaxies, which are sensitive to the slope of the faint end of the luminosity function, are incompatible with q0 = 1/2 and indicate a smaller value. The models assume that the sequence of galaxy types is due to different star-formation rates, that the period of galaxy formation can be characterized by a single epoch, and that after formation, galaxies change in luminosity by star formation and stellar evolution, maintaining a constant comoving space density. 40 refs

  3. Period, epoch, and prediction errors of ephemerides from continuous sets of timing measurements

    Science.gov (United States)

    Deeg, H. J.

    2015-06-01

    Space missions such as Kepler and CoRoT have led to large numbers of eclipse or transit measurements in nearly continuous time series. This paper shows how to obtain the period error in such measurements from a basic linear least-squares fit, and how to correctly derive the timing error in the prediction of future transit or eclipse events. Assuming strict periodicity, a formula for the period error of these time series is derived: σ_P = σ_T (12/(N³ − N))^(1/2), where σ_P is the period error, σ_T the timing error of a single measurement, and N the number of measurements. Compared to the iterative method for period error estimation by Mighell & Plavchan (2013), this much simpler formula leads to smaller period errors, whose correctness has been verified through simulations. For the prediction of times of future periodic events, the usual linear ephemerides, in which epoch errors are quoted for the first timing measurement, are prone to an overestimation of the error of that prediction. This may be avoided by a correction for the duration of the time series. An alternative is the derivation of ephemerides whose reference epoch and epoch error are given for the centre of the time series. For long continuous or near-continuous time series whose acquisition is completed, such central epochs should be the preferred way to quote linear ephemerides. While this work was motivated by the analysis of eclipse timing measurements in space-based light curves, it should be applicable to any other problem with an uninterrupted sequence of discrete timings for which the determination of a zero point, of a constant period, and of the associated errors is needed.
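The closed-form period error quoted in the abstract is easy to check against a Monte Carlo over linear least-squares ephemeris fits; the setup below (period, noise level, number of timings, helper names) is illustrative, not taken from the paper:

```python
import numpy as np

def period_error(sigma_T, N):
    # Closed-form period error: sigma_P = sigma_T * sqrt(12 / (N^3 - N))
    return sigma_T * np.sqrt(12.0 / (N**3 - N))

# Monte Carlo check: fit a linear ephemeris T_i = T0 + i * P to noisy timings
# and compare the scatter of the fitted periods with the formula.
rng = np.random.default_rng(1)
N, P, sigma_T = 50, 3.21, 0.01           # hypothetical transit series
cycles = np.arange(N)
fitted_periods = []
for _ in range(2000):
    timings = 100.0 + cycles * P + rng.normal(0.0, sigma_T, N)
    slope, _ = np.polyfit(cycles, timings, 1)   # slope = fitted period
    fitted_periods.append(slope)
print(np.std(fitted_periods))   # close to the formula value below
print(period_error(sigma_T, N))
```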

  4. EPOCH 0013 - the impact of elevated CO₂ upon the response of European forests

    Energy Technology Data Exchange (ETDEWEB)

    Lee, H.S.J.; Jarvis, P.G. [University of Edinburgh, Edinburgh (United Kingdom). Inst. of Ecology and Resource Management

    1994-12-31

    Apart from details on the EPOCH project, this paper includes information on the currently funded project under the Environment R & D programme (the likely impact of rising CO₂ and temperature on European forests, 1993 & 1994) and the proposed extension of this in Environment R & D, phase II (predicting the response of European forests to global change, 1995 & 1996). Some suggestions for future work under Framework IV are also proposed. 30 refs., 14 figs., 8 tabs.

  5. THE TIME EVOLUTION OF HH 1 FROM FOUR EPOCHS OF HST IMAGES

    Energy Technology Data Exchange (ETDEWEB)

    Raga, A. C.; Esquivel, A. [Instituto de Ciencias Nucleares, Universidad Nacional Autónoma de México, Ap. 70-543, 04510 D.F., México (Mexico); Reipurth, B. [Institute for Astronomy, University of Hawaii at Manoa, Hilo, HI 96720 (United States); Bally, J., E-mail: raga@nucleares.unam.mx [Center for Astrophysics and Space Astronomy, University of Colorado, UCB 389, Boulder, CO 80309 (United States)

    2016-05-15

    We present an analysis of four epochs of Hα and [S ii] λλ 6716/6731 Hubble Space Telescope (HST) images of HH 1. For determining proper motions, we explore a new method based on the analysis of spatially degraded images obtained convolving the images with wavelet functions of chosen widths. With this procedure, we are able to generate maps of proper motion velocities along and across the outflow axis, as well as (angularly integrated) proper motion velocity distributions. From the four available epochs, we find the time evolution of the velocities, intensities, and spatial distribution of the line emission. We find that over the last two decades HH 1 shows a clear acceleration. Also, the Hα and [S ii] intensities first dropped and then recovered in the more recent (2014) images. Finally, we show a comparison between the two available HST epochs of [O iii] λ 5007 (1994 and 2014), in which we see a clear drop in the value of the [O iii]/Hα ratio.

  6. The Reel Deal: Interpreting HST Multi-Epoch Movies of YSO Jets.

    Science.gov (United States)

    Frank, Adam

    2010-09-01

    The goal of this proposal is to bring the theoretical interpretation of Young Stellar Object jets and their environments to a new level of realism. We propose to build on the results of a successful Cycle 16 observing proposal that has obtained 3rd epoch images of HH jets. We will use Adaptive Mesh Refinement MHD simulations (developed by our team) to carry forward a detailed program of modeling and interpretation of the time-dependent behavior revealed in the new, extended multi-epoch data set. Only with the third epoch observations can we explore forces: i.e. accelerations, decelerations and structural changes to develop an accurate understanding of physical processes occurring in hypersonic, magnetized jet flows. Our studies will allow us to characterize the jets and, therefore, make the crucial link with jet central engines. We note an innovative feature of our project is its link with laboratory astrophysical experiments of jets. Our analysis of the observations will be used to determine future laboratory experiments which will explore “clumpy” jet propagation issues.

  7. Clinical and Cost Comparison Evaluation of Inpatient Versus Outpatient Administration of EPOCH-Containing Regimens in Non-Hodgkin Lymphoma.

    Science.gov (United States)

    Evans, Sarah S; Gandhi, Arpita S; Clemmons, Amber B; DeRemer, David L

    2017-08-01

    Etoposide, prednisone, vincristine, cyclophosphamide, doxorubicin (EPOCH)-containing regimens are frequently utilized in non-Hodgkin's lymphoma; however, the incidence of febrile neutropenia (FN) in patients receiving inpatient versus outpatient EPOCH has not been described. Additionally, no comparisons have been made regarding the financial implications of EPOCH administration in either setting. This study's primary objective was to compare hospital admissions for FN in patients receiving inpatient or outpatient EPOCH. A single-center, institutional review board-approved review was conducted for adults receiving EPOCH beginning January 2010. Clinical and financial data were collected through chart review and the institution's financial department. Descriptive statistics were utilized for analysis. A total of 25 patients received 86 cycles of an EPOCH-containing regimen (61 [70.9%] inpatient). Five (8.2%) inpatient cycles resulted in an admission for FN compared to 4 (16%) outpatient cycles. Prophylactic antifungal and antiviral agents were prescribed more often after inpatient cycles (>80%) than after outpatient cycles. Outpatient administration was associated with cost savings of approximately US$141 116 for both chemotherapy costs and hospital day avoidance. EPOCH-containing regimens can be safely administered in the outpatient setting, which may result in cost savings for healthcare institutions.

  8. Improving the repeatability of Motor Unit Number Index (MUNIX) by introducing additional epochs at low contraction levels.

    Science.gov (United States)

    Peng, Yun; Zhang, Yingchun

    2017-07-01

    To evaluate the repeatability of the Motor Unit Number Index (MUNIX) under repeatability conditions, specify the origin of variations and provide strategies for quality control. MUNIX calculations were performed on the biceps brachii muscles of eight healthy subjects. The negative effect of suboptimal electrode positions on MUNIX accuracy was eliminated by employing the high-density surface electromyography technique. MUNIX procedures that utilized a variety of surface interferential pattern (SIP) epoch recruitment strategies (including the original MUNIX procedure, two proposed improvement strategies and their combinations) were described. For each MUNIX procedure, ten thousand different SIP pools were constructed by randomly recruiting the necessary SIP epochs from a large SIP epoch pool (3 datasets, 9 independent electromyography recordings at different contraction levels per dataset and 10 SIP epochs per recording) and implemented for MUNIX calculation. The repeatability of each MUNIX procedure was assessed by summarizing the resulting MUNIX distribution and compared to investigate the effect of SIP epoch selection strategy on repeatability performance. SIP epochs selected at lower contraction levels have a stronger influence on the repeatability of MUNIX than those selected at higher contraction levels. MUNIX under repeatability conditions follows a normal distribution, and the standard deviation can be significantly reduced by introducing more epochs near the MUNIX definition line. The MUNIX technique shows an inherent variation attributable to SIP epochs at low contraction levels. It is recommended that more epochs be sampled at these low contraction levels to improve repeatability. The present study thoroughly documented the inherent variation of MUNIX and its causes, and offered practical solutions to improve the repeatability of MUNIX. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V.
All rights reserved.

  9. Newton law corrections and instabilities in f(R) gravity with the effective cosmological constant epoch

    International Nuclear Information System (INIS)

    Nojiri, Shin'ichi; Odintsov, Sergei D.

    2007-01-01

    We consider a class of modified f(R) gravities with an effective cosmological constant epoch in the early and late universe. Such models pass most solar system tests and satisfy cosmological bounds. Despite their very attractive properties, it is shown that one realistic class of such models may lead to significant Newton law corrections at large cosmological scales. Nevertheless, these corrections are small in the solar system as well as in the future universe. Another realistic model with an acceptable Newton law regime exhibits the matter instability.

  10. Will nonlinear peculiar velocity and inhomogeneous reionization spoil 21 cm cosmology from the epoch of reionization?

    Science.gov (United States)

    Shapiro, Paul R; Mao, Yi; Iliev, Ilian T; Mellema, Garrelt; Datta, Kanan K; Ahn, Kyungjin; Koda, Jun

    2013-04-12

    The 21 cm background from the epoch of reionization is a promising cosmological probe: line-of-sight velocity fluctuations distort redshift, so brightness fluctuations in Fourier space depend upon angle, which linear theory shows can separate cosmological from astrophysical information. Nonlinear fluctuations in ionization, density, and velocity change this, however. The validity and accuracy of the separation scheme are tested here for the first time, by detailed reionization simulations. The scheme works reasonably well early in reionization (≲40% ionized), but not late (≳80% ionized).
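The linear-theory separation scheme tested above rests on the angular decomposition of the redshift-space power spectrum, P(k, μ) = P_μ0(k) + P_μ2(k) μ² + P_μ4(k) μ⁴, where the μ⁴ coefficient traces the matter power spectrum alone. The fit can be sketched as a least-squares problem on the basis {1, μ², μ⁴}; the coefficient values below are hypothetical, not from the paper:

```python
import numpy as np

# Mock "observed" power at one k for several lines of sight
# (mu = cosine of the angle to the line of sight; values illustrative).
mu = np.linspace(0.05, 1.0, 20)
P0, P2, P4 = 5.0, 2.0, 0.7              # hypothetical coefficients at one k
P_obs = P0 + P2 * mu**2 + P4 * mu**4

# Recover the three coefficients by least squares on {1, mu^2, mu^4}.
A = np.stack([np.ones_like(mu), mu**2, mu**4], axis=1)
coeffs, *_ = np.linalg.lstsq(A, P_obs, rcond=None)
print(coeffs)    # -> approximately [5.0, 2.0, 0.7]
```

Nonlinear fluctuations in ionization, density, and velocity mix these terms, which is exactly why the simulations above find the scheme degrading late in reionization.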

  11. Elucidating dark energy with future 21 cm observations at the epoch of reionization

    Energy Technology Data Exchange (ETDEWEB)

    Kohri, Kazunori [The Graduate University for Advanced Studies (SOKENDAI), 1-1 Oho, Tsukuba 305-0801 (Japan); Oyama, Yoshihiko [Institute for Cosmic Ray Research, The University of Tokyo, 5-1-5 Kashiwanoha, Kashiwa, Chiba 277-8582 (Japan); Sekiguchi, Toyokazu [Center for Theoretical Physics of the Universe, Institute for Basic Science (IBS), 193, Munjiro, Yuseoung-gu, Daejeon 34051 (Korea, Republic of); Takahashi, Tomo, E-mail: kohri@post.kek.jp, E-mail: oyamayo@icrr.u-tokyo.ac.jp, E-mail: sekiguti@ibs.re.kr, E-mail: tomot@cc.saga-u.ac.jp [Department of Physics, Saga University, 1 Honjo, Saga 840-8502 (Japan)

    2017-02-01

    We investigate how precisely we can determine the nature of dark energy, such as the equation of state (EoS) and its time dependence, by using future observations of 21 cm fluctuations at the epoch of reionization (6.8 ≲ z ≲ 10), such as the Square Kilometre Array (SKA) and Omniscope, in combination with those from the cosmic microwave background, baryon acoustic oscillations, type Ia supernovae and direct measurement of the Hubble constant. We consider several parametrizations for the EoS and find that future 21 cm observations will be powerful in constraining models of dark energy, especially when its EoS varies at high redshifts.

  12. Cosmological constraints on variations of the fine structure constant at the epoch of recombination

    International Nuclear Information System (INIS)

    Menegoni, E; Galli, S; Archidiacono, M; Calabrese, E; Melchiorri, A

    2013-01-01

    In this brief work we investigate any possible variation of the fine structure constant at the epoch of recombination. The recent measurements of the Cosmic Microwave Background anisotropies at arcminute angular scales performed by the ACT and SPT experiments are probing the damping regime of Cosmic Microwave Background fluctuations. We study the role of a mechanism that could affect the shape of the Cosmic Microwave Background angular fluctuations at those scales, namely a change in the recombination process through variations in the fine structure constant α

  13. Multi-epoch VLBA Imaging of 20 New TeV Blazars: Apparent Jet Speeds

    Science.gov (United States)

    Piner, B. Glenn; Edwards, Philip G.

    2018-01-01

    We present 88 multi-epoch Very Long Baseline Array (VLBA) images (most at an observing frequency of 8 GHz) of 20 TeV blazars, all of the high-frequency-peaked BL Lac (HBL) class, that have not been previously studied at multiple epochs on the parsec scale. From these 20 sources, we analyze the apparent speeds of 43 jet components that are all detected at four or more epochs. As has been found for other TeV HBLs, the apparent speeds of these components are relatively slow. About two-thirds of the components have an apparent speed that is consistent (within 2σ) with no motion, and some of these components may be stationary patterns whose apparent speed does not relate to the underlying bulk flow speed. In addition, a superluminal tail to the apparent speed distribution of the TeV HBLs is detected for the first time, with eight components in seven sources having a 2σ lower limit on the apparent speed exceeding 1c. We combine the data from these 20 sources with an additional 18 sources from the literature to analyze the complete apparent speed distribution of all 38 TeV HBLs that have been studied with very long baseline interferometry at multiple epochs. The highest 2σ apparent speed lower limit considering all sources is 3.6c. This suggests that bulk Lorentz factors of up to about 4, but probably not much higher, exist in the parsec-scale radio-emitting regions of these sources, consistent with estimates obtained in the radio by other means such as brightness temperatures. This can be reconciled with the high Lorentz factors estimated from the high-energy data if the jet has velocity structures consisting of different emission regions with different Lorentz factors. In particular, we analyze the current apparent speed data for the TeV HBLs in the context of a model with a fast central spine and a slower outer layer.

  14. The optical variability of SDSS quasars from multi-epoch spectroscopy. I. Results from 60 quasars with ≥ six-epoch spectra

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Hengxiao; Gu, Minfeng, E-mail: hxguo@shao.ac.cn, E-mail: gumf@shao.ac.cn [Key Laboratory for Research in Galaxies and Cosmology, Shanghai Astronomical Observatory, Chinese Academy of Sciences, 80 Nandan Road, Shanghai 200030 (China)

    2014-09-01

    In a sample of 60 quasars selected from the Sloan Digital Sky Survey with at least six-epoch spectroscopy, we investigate the variability of emission lines and continuum luminosity at various aspects. A strong anti-correlation between the variability and continuum luminosity at 2500 Å is found for the sample, which is consistent with previous works. In individual sources, we find that half of the sample objects follow the trend of being bluer when brighter, while the remaining half follow the redder-when-brighter (RWB) trend. Although the mechanism for RWB is unclear, the effects of host galaxy contribution due to seeing variations cannot be completely ruled out. As expected from the photoionization model, the positive correlations between the broad emission line and continuum luminosity are found in most individual sources, as well as for the whole sample. We confirm the Baldwin effect in most individual objects and the whole sample, while a negative Baldwin effect is also found in several quasars, which can be at least partly (if not all) due to the host galaxy contamination. We find positive correlations between the broad emission line luminosity and line width in most individual quasars, as well as the whole sample, implying a line base that is more variable than the line core.

  15. Schubert and Beethoven - Adorno’s early antipodes of music in the bourgeois epoch

    Directory of Open Access Journals (Sweden)

    Jeremić-Molnar Dragana

    2012-01-01

    Full Text Available In this article the authors reconstruct the dichotomies which the young Theodor Adorno was trying to detect in the music of the bourgeois epoch and personify in two antipodes – Franz Schubert and Ludwig van Beethoven. Although already a devotee of Arnold Schönberg and the musical avant-gardism of the 20th century, Adorno was, in his works prior to his exile from Germany (1934), intensively dealing with Schubert and his opposition to Beethoven. While Beethoven was a bold and progressive revolutionary, fascinated by “practical reason” and the mission to rise up and reach the stars, Schubert wanted none of it (almost anticipating the failure of the whole revolutionary project). Instead, he was looking backwards, to primordial nature and the possibility of man participating in its mythic cycles of death and regeneration. The lack of synthesis between these two opposing tendencies in the music of the early bourgeois epoch led to the “negative dialectics” of Schönberg and 20th-century musical avant-gardism, and to the final separation of Beethovenian musical progress and Schubertian musical mimesis. [Project of the Ministry of Science of the Republic of Serbia, no. 177019: Identiteti srpske muzike u svetskom kulturnom kontekstu, and no. 179035: Izazovi nove društvene integracije u Srbiji – koncepti i akteri]

  16. THE HYDROGEN EPOCH OF REIONIZATION ARRAY DISH. I. BEAM PATTERN MEASUREMENTS AND SCIENCE IMPLICATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Neben, Abraham R.; Hewitt, Jacqueline N.; Ewall-Wice, Aaron [MIT Kavli Institute, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States); Bradley, Richard F. [Department of Electrical and Computer Engineering, University of Virginia, Charlottesville, VA 22904 (United States); DeBoer, David R.; Parsons, Aaron R.; Ali, Zaki S.; Cheng, Carina; Patra, Nipanjana; Dillon, Joshua S. [Department of Astronomy, University of California, Berkeley, CA (United States); Aguirre, James E.; Kohn, Saul A. [Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, PA (United States); Thyagarajan, Nithyanandan; Bowman, Judd; Jacobs, Daniel C. [Arizona State University, School of Earth and Space Exploration, Tempe, AZ 85287 (United States); Dickenson, Roger; Doolittle, Phillip; Egan, Dennis; Hedrick, Mike; Klima, Patricia J. [National Radio Astronomy Observatory, Charlottesville, VA (United States); and others

    2016-08-01

    The Hydrogen Epoch of Reionization Array (HERA) is a radio interferometer aiming to detect the power spectrum of 21 cm fluctuations from neutral hydrogen from the epoch of reionization (EOR). Drawing on lessons from the Murchison Widefield Array and the Precision Array for Probing the EOR, HERA is a hexagonal array of large (14 m diameter) dishes with suspended dipole feeds. The dish not only determines overall sensitivity, but also affects the observed frequency structure of foregrounds in the interferometer. This is the first of a series of four papers characterizing the frequency and angular response of the dish with simulations and measurements. In this paper, we focus on the angular response (i.e., power pattern), which sets the relative weighting between sky regions of high and low delay and thus apparent source frequency structure. We measure the angular response at 137 MHz using the ORBCOMM beam mapping system of Neben et al. We measure a collecting area of 93 m² in the optimal dish/feed configuration, implying that HERA-320 should detect the EOR power spectrum at z ∼ 9 with a signal-to-noise ratio of 12.7 using a foreground avoidance approach with a single season of observations and 74.3 using a foreground subtraction approach. Finally, we study the impact of these beam measurements on the distribution of foregrounds in Fourier space.

  17. The epoch of cosmic heating by early sources of X-rays

    Science.gov (United States)

    Eide, Marius B.; Graziani, Luca; Ciardi, Benedetta; Feng, Yu; Kakiichi, Koki; Di Matteo, Tiziana

    2018-05-01

Observations of the 21 cm line from neutral hydrogen indicate that an epoch of heating (EoH) might have preceded the later epoch of reionization. Here we study the effects on the ionization state and the thermal history of the intergalactic medium (IGM) during the EoH induced by different assumptions on ionizing sources in the high-redshift Universe: (i) stars; (ii) X-ray binaries (XRBs); (iii) thermal bremsstrahlung of the hot interstellar medium (ISM); and (iv) accreting nuclear black holes (BHs). To this aim, we post-process outputs from the (100 h⁻¹ comoving Mpc)³ hydrodynamical simulation MassiveBlack-II with the cosmological 3D radiative transfer code CRASH, which follows the propagation of ultraviolet and X-ray photons, computing the thermal and ionization state of hydrogen and helium through the EoH. We find that stars determine the fully ionized morphology of the IGM, while the spectrally hard XRBs pave the way for efficient subsequent heating and ionization by the spectrally softer ISM. With the seeding prescription in MassiveBlack-II, BHs do not contribute significantly to either ionization or heating. With only stars, most of the IGM remains in a cold state (with a median T = 11 K at z = 10); however, the presence of more energetic sources raises the temperature of regions around the brightest and more clustered sources above that of the cosmic microwave background, opening the possibility of observing the 21 cm signal in emission.

  18. Coordinating supplier-retailer using multiple common replenishment epochs with retailers’ choices

    Directory of Open Access Journals (Sweden)

    Juhwen Hwang

    2013-06-01

Full Text Available Purpose: Provide a coordination strategy using multiple common replenishment epochs (MCRE) for a single-supplier multi-retailer supply chain. Design/methodology/approach: The demand of a product occurs only within a group of heterogeneous and independent retailers with constant rates, whereas all their order requests are fulfilled by the supplier. The supplier decides on a set of MCREs with a general price and an extra bonus to entice the retailers to join any one of the MCREs, or lets them remain with their original order time epochs. A retailer is willing to participate in a CRE as long as the retailer’s cost increase is within its tolerance. This paper provides a mixed integer program to determine the MCRE strategies that minimize the total costs of the supplier. Findings: The results illustrate that the MCRE model provided in the paper can generate a better replenishment coordination scheme than single-CRE models. Practical implications: Replenishment coordination is one of the most important mechanisms for improving efficiency in supply chains, e.g., chain convenience stores in the modern retail industry. Originality/value: This is follow-up research on Joint Economic Lot Size (JELS) models with a focus on multiple retailers and their replenishment coordination.

  19. Identification enhancement of auditory evoked potentials in EEG by epoch concatenation and temporal decorrelation.

    Science.gov (United States)

    Zavala-Fernandez, H; Orglmeister, R; Trahms, L; Sander, T H

    2012-12-01

Event-related potentials (ERPs) recorded by electroencephalography (EEG) are brain responses following an external stimulus, e.g., a sound or an image. They are used in fundamental cognitive research and in neurological and psychiatric clinical research. ERPs are weaker than spontaneous brain activity, and therefore it is difficult or even impossible to identify an ERP in the brain activity following an individual stimulus. For this reason, a blind source separation method relying on statistical information is proposed for the isolation of ERPs after auditory stimulation. In this paper it is suggested to integrate epoch concatenation into the popular temporal decorrelation algorithm SOBI/TDSEP, which relies on time-shifted correlations. With the proposed epoch concatenation temporal decorrelation (ecTD) algorithm, a component representing the auditory evoked potential (AEP) is found in electroencephalographic data from an auditory stimulation experiment lasting 3 min. The ecTD result is compared with the averaged AEP and is superior to the result from the SOBI/TDSEP algorithm. Furthermore, the ecTD processing leads to significant increases in the signal-to-noise ratio (shape SNR) of the AEP and reduces the computation time by 50% compared to the SOBI/TDSEP calculation. It can be concluded that data concatenation in combination with temporal decorrelation is useful for isolating and improving the properties of an AEP, especially in a short-duration stimulation experiment. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
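As a rough illustration of the idea, the sketch below concatenates synthetic epochs and separates sources by diagonalizing a time-lagged covariance. This is an AMUSE-style single-lag simplification of SOBI/TDSEP, not the paper's ecTD algorithm; the signals, mixing matrix, and epoch layout are invented for the example.

```python
import numpy as np

# Two synthetic sources with different lag-1 autocorrelations, mixed into
# two "channels", recorded in four epochs of 500 samples each.
t = np.arange(2000) / 1000.0
s1 = np.sin(2 * np.pi * 7 * t)                # "evoked" component
s2 = np.sign(np.sin(2 * np.pi * 23 * t))      # background rhythm
S = np.vstack([s1, s2])
A = np.array([[1.0, 0.6], [0.4, 1.0]])        # mixing ("electrodes")
epochs = [A @ S[:, i:i + 500] for i in range(0, 2000, 500)]

X = np.hstack(epochs)                         # epoch concatenation
X = X - X.mean(axis=1, keepdims=True)

# Whiten, then diagonalize a symmetrized lag-1 covariance (AMUSE).
C0 = X @ X.T / X.shape[1]
d, E = np.linalg.eigh(C0)
W_wh = E @ np.diag(d ** -0.5) @ E.T           # whitening matrix
Z = W_wh @ X
C1 = Z[:, 1:] @ Z[:, :-1].T / (Z.shape[1] - 1)
_, V = np.linalg.eigh((C1 + C1.T) / 2)
Y = V.T @ Z                                   # recovered sources

# Each recovered row should match one true source up to sign and scale.
corr = np.abs(np.corrcoef(np.vstack([Y, S]))[:2, 2:])
print(corr.max(axis=1))  # both values near 1
```

The method works because, after whitening, sources with different autocorrelation at the chosen lag produce distinct eigenvalues of the lagged covariance, so its eigenvectors recover the unmixing directions.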

  20. THE HYDROGEN EPOCH OF REIONIZATION ARRAY DISH. I. BEAM PATTERN MEASUREMENTS AND SCIENCE IMPLICATIONS

    International Nuclear Information System (INIS)

    Neben, Abraham R.; Hewitt, Jacqueline N.; Ewall-Wice, Aaron; Bradley, Richard F.; DeBoer, David R.; Parsons, Aaron R.; Ali, Zaki S.; Cheng, Carina; Patra, Nipanjana; Dillon, Joshua S.; Aguirre, James E.; Kohn, Saul A.; Thyagarajan, Nithyanandan; Bowman, Judd; Jacobs, Daniel C.; Dickenson, Roger; Doolittle, Phillip; Egan, Dennis; Hedrick, Mike; Klima, Patricia J.

    2016-01-01

The Hydrogen Epoch of Reionization Array (HERA) is a radio interferometer aiming to detect the power spectrum of 21 cm fluctuations from neutral hydrogen from the epoch of reionization (EOR). Drawing on lessons from the Murchison Widefield Array and the Precision Array for Probing the EOR, HERA is a hexagonal array of large (14 m diameter) dishes with suspended dipole feeds. The dish not only determines overall sensitivity, but also affects the observed frequency structure of foregrounds in the interferometer. This is the first of a series of four papers characterizing the frequency and angular response of the dish with simulations and measurements. In this paper, we focus on the angular response (i.e., power pattern), which sets the relative weighting between sky regions of high and low delay and thus apparent source frequency structure. We measure the angular response at 137 MHz using the ORBCOMM beam mapping system of Neben et al. We measure a collecting area of 93 m² in the optimal dish/feed configuration, implying that HERA-320 should detect the EOR power spectrum at z ∼ 9 with a signal-to-noise ratio of 12.7 using a foreground avoidance approach with a single season of observations and 74.3 using a foreground subtraction approach. Finally, we study the impact of these beam measurements on the distribution of foregrounds in Fourier space.

  1. A fast pulse phase estimation method for X-ray pulsar signals based on epoch folding

    Directory of Open Access Journals (Sweden)

    Xue Mengfan

    2016-06-01

Full Text Available X-ray pulsar-based navigation (XPNAV) is an attractive method for autonomous deep-space navigation in the future. Pulse phase estimation is a key task in XPNAV and its accuracy directly determines the navigation accuracy. State-of-the-art pulse phase estimation techniques either suffer from poor estimation accuracy, or involve the maximization of a generally non-convex objective function, thus resulting in a large computational cost. In this paper, a fast pulse phase estimation method based on epoch folding is presented. The statistical properties of the observed profile obtained through epoch folding are developed. Based on this, we treat the joint probability distribution of the observed profile as the likelihood function and utilize a fast Fourier transform-based procedure to estimate the pulse phase. The computational complexity of the proposed estimator is analyzed as well. Experimental results show that the proposed estimator significantly outperforms the currently used cross-correlation (CC) and nonlinear least squares (NLS) estimators, while significantly reducing the computational complexity compared with the NLS and maximum likelihood (ML) estimators.
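For orientation, the epoch-folding step itself can be sketched in a few lines: fold photon arrival times at the known period into a phase histogram, then align the observed profile with a template via FFT cross-correlation. This is a minimal illustration of the folding idea, not the likelihood-based estimator derived in the paper; the pulse period, profile shape, and names are invented.

```python
import numpy as np

def epoch_fold(event_times, period, n_bins=32):
    """Fold photon arrival times at a known period into a phase histogram."""
    phases = np.mod(event_times, period) / period        # phase in [0, 1)
    profile, _ = np.histogram(phases, bins=n_bins, range=(0.0, 1.0))
    return profile

def estimate_phase_shift(observed, template):
    """Circular shift (in bins) best aligning observed profile with template,
    found via FFT-based cross-correlation."""
    xcorr = np.fft.ifft(np.fft.fft(observed) * np.conj(np.fft.fft(template)))
    return int(np.argmax(xcorr.real))

# Synthetic test: events clustered near phase 0.25 of a 1 s pulse.
rng = np.random.default_rng(0)
times = np.arange(5000) + rng.normal(0.25, 0.02, size=5000)
profile = epoch_fold(times, period=1.0, n_bins=32)
template = np.roll(profile, -8)        # template shifted by 8 bins
shift = estimate_phase_shift(profile, template)
print(shift)  # 8: the applied shift is recovered
```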

  2. OPENING THE 21 cm EPOCH OF REIONIZATION WINDOW: MEASUREMENTS OF FOREGROUND ISOLATION WITH PAPER

    Energy Technology Data Exchange (ETDEWEB)

    Pober, Jonathan C.; Parsons, Aaron R.; Ali, Zaki [Astronomy Department, U. California, Berkeley, CA (United States); Aguirre, James E.; Moore, David F. [Department of Physics and Astronomy, U. Pennsylvania, Philadelphia, PA (United States); Bradley, Richard F. [Department of Electrical and Computer Engineering, U. Virginia, Charlottesville, VA (United States); Carilli, Chris L. [National Radio Astronomy Observatory, Socorro, NM (United States); DeBoer, Dave; Dexter, Matthew; MacMahon, Dave [Radio Astronomy Laboratory, U. California, Berkeley, CA (United States); Gugliucci, Nicole E. [Department of Astronomy, U. Virginia, Charlottesville, VA (United States); Jacobs, Daniel C. [School of Earth and Space Exploration, Arizona State U., Tempe, AZ (United States); Klima, Patricia J. [National Radio Astronomy Observatory, Charlottesville, VA (United States); Manley, Jason; Walbrugh, William P. [Square Kilometer Array, South Africa Project, Cape Town (South Africa); Stefan, Irina I. [Cavendish Laboratory, Cambridge (United Kingdom)

    2013-05-10

We present new observations with the Precision Array for Probing the Epoch of Reionization with the aim of measuring the properties of foreground emission for 21 cm epoch of reionization (EoR) experiments at 150 MHz. We focus on the footprint of the foregrounds in cosmological Fourier space to understand which modes of the 21 cm power spectrum will most likely be compromised by foreground emission. These observations confirm predictions that foregrounds can be isolated to a "wedge"-like region of two-dimensional (k⊥, k∥)-space, creating a window for cosmological studies at higher k∥ values. We also find that the emission extends past the nominal edge of this wedge due to spectral structure in the foregrounds, with this feature most prominent on the shortest baselines. Finally, we filter the data to retain only this "unsmooth" emission and image its specific k∥ modes. The resultant images show an excess of power at the lowest modes, but no emission can be clearly localized to any one region of the sky. This image is highly suggestive that the most problematic foregrounds for 21 cm EoR studies will not be easily identifiable bright sources, but rather an aggregate of fainter emission.
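The horizon boundary of the foreground wedge has a standard form in the delay-spectrum literature, quoted here for orientation (it is background to, not a result of, this abstract):

```latex
% Maximum line-of-sight mode contaminated by flat-spectrum foregrounds
% from sources at zenith angle \theta (the "wedge" boundary):
k_\parallel \;\le\; \frac{H(z)\, D_C(z)}{c\,(1+z)}\,\sin\theta \; k_\perp
% where D_C(z) is the comoving distance to redshift z and H(z) the Hubble
% rate; \theta = 90^\circ (the horizon) gives the widest wedge, and
% intrinsic spectral structure in the foregrounds pushes power beyond it.
```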

  3. Signals from the epoch of cosmological recombination (Karl Schwarzschild Award Lecture 2008)

    Science.gov (United States)

    Sunyaev, R. A.; Chluba, J.

    2009-07-01

The physical ingredients needed to describe the epoch of cosmological recombination are amazingly simple and well understood. This fact allows us to take into account a very large variety of physical processes, still finding potentially measurable consequences for the energy spectrum and temperature anisotropies of the Cosmic Microwave Background (CMB). In this contribution we provide a short historical overview of the cosmological recombination epoch and its connection to the CMB. We also highlight some of the detailed physics studied over the past few years in the context of the cosmological recombination of hydrogen and helium. The impact of these considerations is two-fold: the associated release of photons during this epoch leads to interesting and unique deviations of the CMB energy spectrum from a perfect blackbody, which, in particular at decimeter wavelengths and in the Wien part of the CMB spectrum, may become observable in the near future. Despite the fact that the abundance of helium is rather small, it still contributes a sizeable number of photons to the full recombination spectrum, leading to additional distinct spectral features. Observing the spectral distortions from the epochs of hydrogen and helium recombination would in principle provide an additional way to determine some of the key parameters of the Universe (e.g. the specific entropy, the CMB monopole temperature and the pre-stellar abundance of helium). It also permits us to confront our detailed understanding of the recombination process with direct observational evidence. In this contribution we illustrate how the theoretical spectral template of the cosmological recombination spectrum may be utilized for this purpose. We also show that, because hydrogen and helium recombine at very different epochs, it is possible to address questions related to the thermal history of our Universe. In particular the cosmological recombination radiation may

  4. Three-dimensional structure of the enveloped bacteriophage phi12: an incomplete T = 13 lattice is superposed on an enclosed T = 1 shell.

    Directory of Open Access Journals (Sweden)

    Hui Wei

    2009-09-01

Full Text Available Bacteriophage phi12 is a member of the Cystoviridae, a unique group of lipid-containing, membrane-enveloped bacteriophages that infect the bacterial plant pathogen Pseudomonas syringae pv. phaseolicola. The genomes of the virus species contain three double-stranded RNA (dsRNA) segments, and the virus capsid itself is organized in multiple protein shells. The segmented dsRNA genome, the multi-layered arrangement of the capsid and the overall viral replication scheme make the Cystoviridae similar to the Reoviridae. We present structural studies of cystovirus phi12 obtained using cryo-electron microscopy and image processing techniques. We have collected images of isolated phi12 virions and generated reconstructions of both the entire particles and the polymerase complex (PC). We find that in the nucleocapsid (NC), the phi12 P8 protein is organized on an incomplete T = 13 icosahedral lattice where the symmetry axes of the T = 13 layer and the enclosed T = 1 layer of the PC superpose. This is the same general protein-component organization found in phi6 NCs, but the detailed structure of the entire phi12 P8 layer is distinct from that found in the best-classified cystovirus species, phi6. In the reconstruction of the NC, the P8 layer includes protein density surrounding the hexamers of P4 that sit at the 5-fold vertices of the icosahedral lattice. We believe these novel features correspond to dimers of protein P7. In conclusion, we have determined that the phi12 NC surface is composed of an incomplete T = 13 P8 layer forming a net-like configuration. The significance of this finding in regard to cystovirus assembly is that vacancies in the lattice could have the potential to accommodate additional viral proteins that are required for RNA packaging and synthesis.

  5. Superposed Redox Chemistry of Fused Carbon Rings in Cyclooctatetraene-Based Organic Molecules for High-Voltage and High-Capacity Cathodes.

    Science.gov (United States)

    Zhao, Xiaolin; Qiu, Wujie; Ma, Chao; Zhao, Yingqin; Wang, Kaixue; Zhang, Wenqing; Kang, Litao; Liu, Jianjun

    2018-01-24

Even though many organic cathodes have been developed and have achieved significant improvements in energy density and reversibility, some organic materials still deliver relatively low voltage and limited discharge capacity because their energy storage mechanism is based solely on redox reactions of a limited set of functional groups [N-O, C═X (X = O, N, S)] linked to aromatic rings. Here, a series of cyclooctatetraene-based (C8H8) organic molecules were demonstrated to have high-capacity, high-voltage electrochemical activity arising from the carbon rings themselves, by means of first-principles calculations and electronic structure analysis. The fused molecules C8-C4-C8 (C16H12) and C8-C4-C8-C4-C8 (C24H16) contain, respectively, four and eight electron-deficient carbons, generating high capacity through their multiple redox reactions. Our sodiation calculations predict that C16H12 and C24H16 exhibit discharge capacities of 525.3 and 357.2 mA h g⁻¹ as the voltage changes from 3.5 to 1.0 V and from 3.7 to 1.3 V versus Na⁺/Na, respectively. Electronic structure analysis reveals that the high voltages are attributed to superposed electron stabilization mechanisms, including double-bond reformation and aromatization of the carbon rings. The high thermodynamic stability of these C24H16-based systems strongly suggests the feasibility of experimental realization. The present work provides evidence that cyclooctatetraene-based organic molecules fused with the C4 ring are promising for designing high-capacity and high-voltage organic rechargeable cathodes.

  6. Satellite- and Epoch Differenced Precise Point Positioning Based on a Regional Augmentation Network

    Directory of Open Access Journals (Sweden)

    Bin Wu

    2012-06-01

Full Text Available Precise Point Positioning (PPP) has been demonstrated as a simple and effective approach for user positioning. The key issue in PPP is how to shorten convergence time and improve positioning efficiency. Recent research has mainly focused on ambiguity resolution by correcting residual phase errors at a single station. The success of this approach (referred to hereafter as NORM-PPP) depends on how rapidly one can fix wide-lane and narrow-lane ambiguities to achieve the first ambiguity-fixed solution. The convergence time of NORM-PPP is receiver-type dependent, and normally takes 15-20 min. Different from the general algorithm and theory, by which float ambiguities are estimated and integer ambiguities are fixed, we concentrate on a differential PPP approach: the satellite- and epoch-differenced (SDED) approach. In general, the SDED approach eliminates receiver clocks and ambiguity parameters and thus avoids the complicated residual phase modeling procedure. As a further development of the SDED approach, we use a regional augmentation network to derive tropospheric delays and remaining un-modeled errors at user sites. By adding these corrections and applying Robust estimation, the weak mathematical properties due to the ED operation are much improved. Implementing this new approach, we need only two epochs of data to achieve PPP positioning converging to centimeter-level accuracy. Using seven days of GPS data at six CORS stations in Shanghai, we demonstrate that the success rate, defined as the case when all three directions converge to the desired positioning accuracy of 10 cm, reaches 100% when the interval between the two epochs is longer than 15 min. Comparing the results for a 15 min interval with those for 10 min, the position RMS improves from 2.47, 3.95, 5.78 cm to 2.21, 3.93, 4.90 cm in the North, East and Up directions, respectively. Combining the SDED coordinates at the starting point and the ED relative
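The differencing algebra at the heart of the SDED approach can be shown with a toy numpy example: differencing across satellites cancels the receiver clock, and differencing across epochs cancels the (constant) carrier-phase ambiguities. All numbers below (ranges, clock errors, ambiguities, wavelength) are invented for illustration, and observation noise is omitted.

```python
import numpy as np

c = 299792458.0          # speed of light, m/s
lam = 0.19               # roughly the GPS L1 carrier wavelength, m

# Synthetic carrier-phase observations (in meters) for 2 satellites, 2 epochs:
# phi = geometric range + c * receiver_clock + lambda * ambiguity
rho = np.array([[20.0e6, 20.3e6],     # epoch 1: ranges to sat A, sat B
                [20.1e6, 20.25e6]])   # epoch 2
dt_r = np.array([3.2e-6, 3.5e-6])     # receiver clock error per epoch (s)
N = np.array([11234.0, -5421.0])      # constant ambiguities per satellite

phi = rho + c * dt_r[:, None] + lam * N[None, :]

# Satellite difference (sat A - sat B) cancels the receiver clock ...
sd = phi[:, 0] - phi[:, 1]
# ... and the epoch difference (epoch 2 - epoch 1) cancels the ambiguities,
# leaving only the change in differential geometry.
sded = sd[1] - sd[0]
expected = (rho[1, 0] - rho[1, 1]) - (rho[0, 0] - rho[0, 1])
print(sded - expected)  # ≈ 0: clocks and ambiguities are eliminated
```

What remains after both differences is purely geometric, which is why the method needs external corrections (e.g., the regional network's tropospheric delays) rather than ambiguity fixing.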

  7. Direct detection of projectile relics from the end of the lunar basin-forming epoch.

    Science.gov (United States)

    Joy, Katherine H; Zolensky, Michael E; Nagashima, Kazuhide; Huss, Gary R; Ross, D Kent; McKay, David S; Kring, David A

    2012-06-15

    The lunar surface, a key proxy for the early Earth, contains relics of asteroids and comets that have pummeled terrestrial planetary surfaces. Surviving fragments of projectiles in the lunar regolith provide a direct measure of the types and thus the sources of exogenous material delivered to the Earth-Moon system. In ancient [>3.4 billion years ago (Ga)] regolith breccias from the Apollo 16 landing site, we located mineral and lithologic relics of magnesian chondrules from chondritic impactors. These ancient impactor fragments are not nearly as diverse as those found in younger (3.4 Ga to today) regolith breccias and soils from the Moon or that presently fall as meteorites to Earth. This suggests that primitive chondritic asteroids, originating from a similar source region, were common Earth-Moon-crossing impactors during the latter stages of the basin-forming epoch.

  8. Analysis of Accuracy and Epoch on Back-propagation BFGS Quasi-Newton

    Science.gov (United States)

    Silaban, Herlan; Zarlis, Muhammad; Sawaluddin

    2017-12-01

    Back-propagation is one of the learning algorithms on artificial neural networks that have been widely used to solve various problems, such as pattern recognition, prediction and classification. The Back-propagation architecture will affect the outcome of learning processed. BFGS Quasi-Newton is one of the functions that can be used to change the weight of back-propagation. This research tested some back-propagation architectures using classical back-propagation and back-propagation with BFGS. There are 7 architectures that have been tested on glass dataset with various numbers of neurons, 6 architectures with 1 hidden layer and 1 architecture with 2 hidden layers. BP with BFGS improves the convergence of the learning process. The average improvement convergence is 98.34%. BP with BFGS is more optimal on architectures with smaller number of neurons with decreased epoch number is 94.37% with the increase of accuracy about 0.5%.
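A minimal sketch of the idea of swapping plain gradient descent for a quasi-Newton weight update, using SciPy's BFGS on a tiny network. The 2-4-1 architecture and XOR data are illustrative stand-ins for the paper's glass-dataset experiments, not a reproduction of them.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])   # XOR targets

def loss(w):
    W1 = w[:8].reshape(2, 4); b1 = w[8:12]   # input -> 4 hidden units
    W2 = w[12:16]; b2 = w[16]                # hidden -> 1 linear output
    h = np.tanh(X @ W1 + b1)
    out = h @ W2 + b2
    return np.mean((out - y) ** 2)           # mean squared error

# BFGS plays the role of the quasi-Newton weight update; a few random
# restarts guard against bad local minima on this small problem.
results = [minimize(loss, rng.normal(scale=0.5, size=17), method="BFGS")
           for _ in range(6)]
best = min(results, key=lambda r: r.fun)
print(best.fun)  # near zero when the XOR mapping is learned
```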

  9. Modeling high-order synchronization epochs and transitions in the cardiovascular system

    Science.gov (United States)

    García-Álvarez, David; Bahraminasab, Alireza; Stefanovska, Aneta; McClintock, Peter V. E.

    2007-12-01

We study a system consisting of two coupled phase oscillators in the presence of noise. This system is used as a model for the cardiorespiratory interaction in wakefulness and anaesthesia. We show that long-range correlated noise produces transitions between epochs with different n:m synchronisation ratios, as observed in the cardiovascular system. Also, we see that the smaller the noise (especially the one acting on the slower oscillator), the longer the synchronisation time, exactly as happens in anaesthesia compared with wakefulness. The dependence of the synchronisation time on the couplings, in the presence of noise, is studied; such dependence is softened by low-frequency noise. We show that the coupling from the slow oscillator to the fast one (respiration to heart) plays a more important role in synchronisation. Finally, we see that the isolines of equal synchronisation time appear to be a linear combination of the two couplings.
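A toy version of such a model can be simulated directly: two noisy phase oscillators coupled so that they lock in a 4:1 ratio, integrated with the Euler-Maruyama scheme. All parameter values below are illustrative, not taken from the paper; in a locked epoch the generalized phase difference stays bounded instead of drifting.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two coupled phase oscillators: phi1 fast ("heart"), phi2 slow
# ("respiration"), with a 4:1 frequency ratio and additive noise.
w1 = 2 * np.pi * 1.02        # fast oscillator, slightly detuned
w2 = 2 * np.pi * 0.25        # slow oscillator
eps = 0.5                    # coupling strength (both directions)
D = 0.01                     # noise intensity
dt, n = 1e-3, 100_000

noise1 = rng.normal(0, np.sqrt(2 * D * dt), n)
noise2 = rng.normal(0, np.sqrt(2 * D * dt), n)
phi1 = phi2 = 0.0
psi = np.empty(n)            # generalized phase difference phi1 - 4*phi2
for i in range(n):
    d1 = (w1 + eps * np.sin(4 * phi2 - phi1)) * dt + noise1[i]
    d2 = (w2 + eps * np.sin(phi1 - 4 * phi2)) * dt + noise2[i]
    phi1 += d1
    phi2 += d2
    psi[i] = phi1 - 4 * phi2

# In a 4:1 synchronisation epoch psi stays bounded; without coupling it
# would drift by roughly (w1 - 4*w2) * t.
print(np.std(psi[n // 2:]))  # small spread => phase-locked
```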

  10. Evolution of the intergalactic medium - What happened during the epoch z = 3-10?

    Science.gov (United States)

    Ikeuchi, S.; Ostriker, J. P.

    1986-01-01

An attempt is made to model consistently the thermal and dynamic history of the intergalactic medium (IGM) from the era of reheating (z = 10-5) to the present, and to provide a unified explanation for the origin of ordinary galaxies, blue compact objects, and Lyman-alpha clouds. The evolution of the intergalactic gas is analyzed, treating the IGM as perfectly homogeneous at every epoch and taking into account radiative and Compton cooling, adiabatic cooling, shock heating, and heating produced by the diffuse UV flux. It is suggested that the IGM must have been heated to above 10⁶ K by shock heating caused either by explosions of pregalactic objects or by expanding voids. The formation of intergalactic clouds by fragmentation of the resulting shells and the subsequent collapse of the shells to form galaxies are studied. An attempt is made to determine model parameters on the basis of an analysis of Lyman-alpha absorption lines.

  11. The faint-end of galaxy luminosity functions at the Epoch of Reionization

    Science.gov (United States)

    Yue, B.; Castellano, M.; Ferrara, A.; Fontana, A.; Merlin, E.; Amorín, R.; Grazian, A.; Mármol-Queralto, E.; Michałowski, M. J.; Mortlock, A.; Paris, D.; Parsa, S.; Pilo, S.; Santini, P.; Di Criscienzo, M.

    2018-05-01

During the Epoch of Reionization (EoR), feedback effects reduce the efficiency of the star formation process in small halos or even fully quench it. The galaxy luminosity function (LF) may then turn over at the faint-end. We analyze the number counts of z > 5 galaxies observed in the fields of four Frontier Fields (FFs) clusters and obtain constraints on the LF faint-end: for the turn-over magnitude at z ∼ 6, M_UV^T ≳ -13.3; for the circular velocity threshold below which the star formation process is quenched, v_c^* ≲ 47 km s⁻¹. We have not yet found significant evidence of the presence of feedback effects suppressing star formation in small galaxies.

  12. A Lyman Break Galaxy in the Epoch of Reionization from Hubble Space Telescope (HST) Grism Spectroscopy

    Science.gov (United States)

    Rhoads, James E.; Malhotra, Sangeeta; Stern, Daniel K.; Gardner, Jonathan P.; Dickinson, Mark; Pirzkal, Norbert; Spinrad, Hyron; Reddy, Naveen; Dey, Arjun; Hathi, Nimish; and others

    2013-01-01

Slitless grism spectroscopy from space offers dramatic advantages for studying high redshift galaxies: high spatial resolution to match the compact sizes of the targets, a dark and uniform sky background, and simultaneous observation over fields ranging from five square arcminutes (HST) to over 1000 square arcminutes (Euclid). Here we present observations of a galaxy at z = 6.57, at the end of the reionization epoch, identified using slitless HST grism spectra from the PEARS survey (Probing Evolution And Reionization Spectroscopically) and reconfirmed with Keck + DEIMOS. This high redshift identification is enabled by the depth of the PEARS survey. Substantially higher redshifts are precluded for PEARS data by the declining sensitivity of the ACS grism at wavelengths λ > 0.95 μm. Spectra of Lyman breaks at yet higher redshifts will be possible using comparably deep observations with IR-sensitive grisms.

  13. The Mars water cycle at other epochs: History of the polar caps and layered terrain

    Science.gov (United States)

    Jakosky, Bruce M.; Henderson, Bradley G.; Mellon, Michael T.

    1992-01-01

The atmospheric water cycle at the present epoch involves summertime sublimation of water from the north polar cap, transport of water through the atmosphere, and condensation on one or both winter CO2 caps. Exchange with the regolith is important seasonally, but the water content of the atmosphere appears to be controlled by the polar caps. The net annual transport through the atmosphere, integrated over long timescales, must be the driving force behind the long-term evolution of the polar caps; clearly, this feeds back into the evolution of the layered terrain. We have investigated the behavior of the seasonal water cycle and the net integrated behavior at the pole for the last 10⁷ years. Our model of the water cycle includes the solar input, CO2 condensation and sublimation, and summertime water sublimation through the seasonal cycles, and incorporates the long-term variations in the orbital elements describing the Martian orbit.

  14. The hydrogen epoch of reionization array dish III: measuring chromaticity of prototype element with reflectometry

    Science.gov (United States)

    Patra, Nipanjana; Parsons, Aaron R.; DeBoer, David R.; Thyagarajan, Nithyanandan; Ewall-Wice, Aaron; Hsyu, Gilbert; Leung, Tsz Kuk; Day, Cherie K.; de Lera Acedo, Eloy; Aguirre, James E.; Alexander, Paul; Ali, Zaki S.; Beardsley, Adam P.; Bowman, Judd D.; Bradley, Richard F.; Carilli, Chris L.; Cheng, Carina; Dillon, Joshua S.; Fadana, Gcobisa; Fagnoni, Nicolas; Fritz, Randall; Furlanetto, Steve R.; Glendenning, Brian; Greig, Bradley; Grobbelaar, Jasper; Hazelton, Bryna J.; Jacobs, Daniel C.; Julius, Austin; Kariseb, MacCalvin; Kohn, Saul A.; Lebedeva, Anna; Lekalake, Telalo; Liu, Adrian; Loots, Anita; MacMahon, David; Malan, Lourence; Malgas, Cresshim; Maree, Matthys; Martinot, Zachary; Mathison, Nathan; Matsetela, Eunice; Mesinger, Andrei; Morales, Miguel F.; Neben, Abraham R.; Pieterse, Samantha; Pober, Jonathan C.; Razavi-Ghods, Nima; Ringuette, Jon; Robnett, James; Rosie, Kathryn; Sell, Raddwine; Smith, Craig; Syce, Angelo; Tegmark, Max; Williams, Peter K. G.; Zheng, Haoxuan

    2018-04-01

Spectral structure due to the instrument response is the current limiting factor for experiments attempting to detect the redshifted 21 cm signal from the Epoch of Reionization (EoR). Recent advances in the delay spectrum methodology for measuring the redshifted 21 cm EoR power spectrum brought new attention to the impact of an antenna's frequency response on the viability of making this challenging measurement. The delay spectrum methodology provides a somewhat straightforward relationship between the time-domain response of an instrument, which can be directly measured, and the power spectrum modes accessible to a 21 cm EoR experiment. In this paper, we derive the explicit relationship between antenna reflection coefficient (S11) measurements made by a Vector Network Analyzer (VNA) and the extent of additional foreground contamination in delay space. In the light of this mathematical framework, we examine the chromaticity of a prototype antenna element that will constitute the Hydrogen Epoch of Reionization Array (HERA) between 100 and 200 MHz. These reflectometry measurements exhibit additional structures relative to electromagnetic simulations, but we find that even without any further design improvement, such an antenna element will support measuring spatial k modes with line-of-sight components of k∥ > 0.2 h Mpc⁻¹. We also find that when combined with the powerful inverse covariance weighting method used in optimal quadratic estimation of redshifted 21 cm power spectra, the HERA prototype elements can successfully measure the power spectrum at spatial modes as low as k∥ > 0.1 h Mpc⁻¹. This work represents a major step toward understanding the HERA antenna element and highlights a straightforward method for characterizing instrument response for future experiments designed to detect the 21 cm EoR power spectrum.
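The core of the mapping from S11(f) to delay space is a windowed inverse Fourier transform of the reflection coefficient. The sketch below illustrates this on a toy S11 model (a single internal reflection at 60 ns, an assumption for the example, not a HERA measurement):

```python
import numpy as np

# Frequency axis spanning the HERA band, and a toy reflection coefficient:
# a single echo of amplitude 0.2 delayed by tau0.
freqs = np.linspace(100e6, 200e6, 1024)          # Hz
tau0 = 60e-9                                     # reflection delay, s
s11 = 0.2 * np.exp(-2j * np.pi * freqs * tau0)

# Window to suppress spectral leakage, then transform to the delay domain.
window = np.blackman(freqs.size)
delay_response = np.fft.ifft(s11 * window)
delays = np.fft.fftfreq(freqs.size, d=freqs[1] - freqs[0])

peak = delays[np.argmax(np.abs(delay_response))]
print(peak)  # close to 6e-8 s: the echo appears at its delay
```

In the delay spectrum picture, such structure at large delay is what scatters smooth foreground power to high k∥, which is why the antenna's time-domain response bounds the accessible modes.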

  15. SYSTEMATIC UNCERTAINTIES IN BLACK HOLE MASSES DETERMINED FROM SINGLE-EPOCH SPECTRA

    International Nuclear Information System (INIS)

    Denney, Kelly D.; Peterson, Bradley M.; Dietrich, Matthias; Bentz, Misty C.; Vestergaard, Marianne

    2009-01-01

We explore the nature of systematic errors that can arise in measurement of black hole masses from single-epoch (SE) spectra of active galactic nuclei (AGNs) by utilizing the many epochs available for NGC 5548 and PG1229+204 from reverberation mapping (RM) databases. In particular, we examine systematics due to AGN variability, contamination due to constant spectral components (i.e., narrow lines and host galaxy flux), data quality (i.e., signal-to-noise ratio (S/N)), and blending of spectral features. We investigate the effect that each of these systematics has on the precision and accuracy of SE masses calculated from two commonly used line width measures by comparing these results to recent RM studies. We calculate masses by characterizing the broad Hβ emission line by both the full width at half maximum and the line dispersion, and demonstrate the importance of removing narrow emission-line components and host starlight. We find that the reliability of line width measurements rapidly decreases for S/N lower than ∼10-20 (per pixel), and that fitting the line profiles instead of direct measurement of the data does not mitigate this problem but can, in fact, introduce systematic errors. We also conclude that a full spectral decomposition to deblend the AGN and galaxy spectral features is unnecessary, except to judge the contribution of the host galaxy to the luminosity and to deblend any emission lines that may inhibit accurate line width measurements. Finally, we present an error budget which summarizes the minimum observable uncertainties as well as the amount of additional scatter and/or systematic offset that can be expected from the individual sources of error investigated. In particular, we find that the minimum observable uncertainty in SE mass estimates due to variability is ∼20% for high-S/N (∼20 pixel⁻¹) spectra.
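For orientation, both the RM and SE masses discussed here rest on the virial relation, quoted below in its standard form (the radius-luminosity scaling is the usual SE ingredient, not a result of this paper, and the calibration of f varies between studies):

```latex
M_{\rm BH} \;=\; f\,\frac{\Delta V^{2}\,R_{\rm BLR}}{G},
\qquad
R_{\rm BLR} \;\propto\; L^{1/2}
% \Delta V: broad H\beta width (FWHM or line dispersion \sigma_{line});
% R_{BLR}: broad-line-region radius, measured directly in RM but inferred
% from the radius--luminosity relation in the SE method;
% f: dimensionless factor absorbing the unknown BLR geometry/kinematics.
```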

  16. Gravitationally neutral dark matter-dark antimatter universe crystal with epochs of decelerated and accelerated expansion

    Science.gov (United States)

    Gribov, I. A.; Trigger, S. A.

    2016-11-01

    A large-scale self-similar crystallized phase of a finite gravitationally neutral universe (GNU)—a huge GNU-ball—with a spherical 2D-boundary immersed into an endless empty 3D-space is considered. The main assumptions of this universe model are: (1) the existence of stable elementary particles-antiparticles with opposite gravitational “charges” (M_+gr and M_-gr), which have the same positive inertial mass M_in = |M_±gr| ≥ 0 and are equally presented in the universe during all evolution epochs; (2) the gravitational interaction between masses of opposite “charges” is repulsive; (3) the unbroken baryon-antibaryon symmetry; (4) the M_+gr-M_-gr “charges” symmetry, valid for two equally presented matter-antimatter GNU-components: (a) ordinary matter (OM)-ordinary antimatter (OAM), (b) dark matter (DM)-dark antimatter (DAM). The GNU-ball is a weightless crystallized dust of equally presented, mutually repulsive (OM+DM) clusters and (OAM+DAM) anticlusters. Newtonian GNU-hydrodynamics gives the observable spatial flatness and an ideal Hubble flow. The GNU in the obtained large-scale self-similar crystallized phase preserves the absence of cluster-anticluster collisions and simultaneously explains the observable large-scale universe phenomena: (1) the absence of matter-antimatter cluster annihilation, (2) the stability and homogeneity of the self-similar Hubble flow, (3) flatness, (4) bubble and cosmic-net structures as 3D-2D-1D decrystallization phases with decelerative (a ≤ 0) and accelerative (a ≥ 0) expansion epochs, (5) the dark energy (DE) phenomena with Λ_VACUUM = 0, (6) the fine-tuning nature of DE and DM; it also predicts (7) evaporation into isolated huge M_±gr superclusters without a Big Rip.

  17. Gravitationally neutral dark matter–dark antimatter universe crystal with epochs of decelerated and accelerated expansion

    International Nuclear Information System (INIS)

    Gribov, I A; Trigger, S A

    2016-01-01

    A large-scale self-similar crystallized phase of a finite gravitationally neutral universe (GNU)—a huge GNU-ball—with a spherical 2D-boundary immersed into an endless empty 3D-space is considered. The main assumptions of this universe model are: (1) the existence of stable elementary particles-antiparticles with opposite gravitational “charges” (M_+gr and M_-gr), which have the same positive inertial mass M_in = |M_±gr| ≥ 0 and are equally presented in the universe during all evolution epochs; (2) the gravitational interaction between masses of opposite “charges” is repulsive; (3) the unbroken baryon-antibaryon symmetry; (4) the M_+gr-M_-gr “charges” symmetry, valid for two equally presented matter-antimatter GNU-components: (a) ordinary matter (OM)-ordinary antimatter (OAM), (b) dark matter (DM)-dark antimatter (DAM). The GNU-ball is a weightless crystallized dust of equally presented, mutually repulsive (OM+DM) clusters and (OAM+DAM) anticlusters. Newtonian GNU-hydrodynamics gives the observable spatial flatness and an ideal Hubble flow. The GNU in the obtained large-scale self-similar crystallized phase preserves the absence of cluster-anticluster collisions and simultaneously explains the observable large-scale universe phenomena: (1) the absence of matter-antimatter cluster annihilation, (2) the stability and homogeneity of the self-similar Hubble flow, (3) flatness, (4) bubble and cosmic-net structures as 3D-2D-1D decrystallization phases with decelerative (a ≤ 0) and accelerative (a ≥ 0) expansion epochs, (5) the dark energy (DE) phenomena with Λ_VACUUM = 0, (6) the fine-tuning nature of DE and DM; it also predicts (7) evaporation into isolated huge M_±gr superclusters without a Big Rip. (paper)

  18. Two-epoch cross-sectional case record review protocol comparing quality of care of hospital emergency admissions at weekends versus weekdays.

    Science.gov (United States)

    Bion, Julian; Aldridge, Cassie P; Girling, Alan; Rudge, Gavin; Beet, Chris; Evans, Tim; Temple, R Mark; Roseveare, Chris; Clancy, Mike; Boyal, Amunpreet; Tarrant, Carolyn; Sutton, Elizabeth; Sun, Jianxia; Rees, Peter; Mannion, Russell; Chen, Yen-Fu; Watson, Samuel Ian; Lilford, Richard

    2017-12-22

    The mortality associated with weekend admission to hospital (the 'weekend effect') has for many years been attributed to deficiencies in quality of hospital care, often assumed to be due to suboptimal senior medical staffing at weekends. This protocol describes a case note review to determine whether there are differences in care quality for emergency admissions (EAs) to hospital at weekends compared with weekdays, and whether the difference has reduced over time as health policies have changed to promote 7-day services. Cross-sectional two-epoch case record review of 20 acute hospital Trusts in England. Anonymised case records of 4000 EAs to hospital, 2000 at weekends and 2000 on weekdays, covering two epochs (financial years 2012-2013 and 2016-2017). Admissions will be randomly selected across the whole of each epoch from Trust electronic patient records. Following training, structured implicit case reviews will be conducted by consultants or senior registrars (senior residents) in acute medical specialities (60 case records per reviewer), and limited to the first 7 days following hospital admission. The co-primary outcomes are the weekend:weekday admission ratio of errors per case record, and a global assessment of care quality on a Likert scale. Error rates will be analysed using mixed effects logistic regression models, and care quality using ordinal regression methods. Secondary outcomes include error typology, error-related adverse events and any correlation between error rates and staffing. The data will also be used to inform a parallel health economics analysis. The project has received ethics approval from the South West Wales Research Ethics Committee (REC): reference 13/WA/0372. Informed consent is not required for accessing anonymised patient case records from which patient identifiers had been removed. The findings will be disseminated through peer-reviewed publications in high-quality journals and through local High-intensity Specialist-Led Acute

  19. LOFAR insights into the epoch of reionization from the cross-power spectrum of 21 cm emission and galaxies

    NARCIS (Netherlands)

    Wiersma, R. P. C.; Ciardi, B.; Thomas, R. M.; Harker, G. J. A.; Zaroubi, S.; Bernardi, G.; Brentjens, M.; de Bruyn, A. G.; Daiboo, S.; Jelic, V.; Kazemi, S.; Koopmans, L. V. E.; Labropoulos, P.; Martinez, O.; Offringa, A.; Pandey, V. N.; Schaye, J.; Veligatla, V.; Vedantham, H.; Yatawatta, S.; Mellema, G.

    2013-01-01

    Using a combination of N-body simulations, semi-analytic models and radiative transfer calculations, we have estimated the theoretical cross-power spectrum between galaxies and the 21 cm emission from neutral hydrogen during the epoch of reionization. In accordance with previous studies, we find

  20. Combinations of Epoch Durations and Cut-Points to Estimate Sedentary Time and Physical Activity among Adolescents

    Science.gov (United States)

    Fröberg, Andreas; Berg, Christina; Larsson, Christel; Boldemann, Cecilia; Raustorp, Anders

    2017-01-01

    The purpose of the current study was to investigate how combinations of different epoch durations and cut-points affect estimates of sedentary time and physical activity in adolescents. Accelerometer data from 101 adolescents were derived and 30 combinations were used to estimate sedentary time, light, moderate, vigorous, and combined…

  1. THE LICK AGN MONITORING PROJECT: RECALIBRATING SINGLE-EPOCH VIRIAL BLACK HOLE MASS ESTIMATES

    Energy Technology Data Exchange (ETDEWEB)

    Park, Daeseong; Woo, Jong-Hak [Astronomy Program, Department of Physics and Astronomy, Seoul National University, Seoul 151-742 (Korea, Republic of); Treu, Tommaso; Bennert, Vardha N. [Department of Physics, University of California, Santa Barbara, CA 93106 (United States); Barth, Aaron J.; Walsh, Jonelle [Department of Physics and Astronomy, 4129 Frederick Reines Hall, University of California, Irvine, CA 92697-4575 (United States); Bentz, Misty C. [Department of Physics and Astronomy, Georgia State University Atlanta, GA 30303 (United States); Canalizo, Gabriela [Department of Physics and Astronomy, University of California, Riverside, 900 University Ave., Riverside, CA 92521 (United States); Filippenko, Alexei V. [Department of Astronomy, University of California, Berkeley, CA 94720-3411 (United States); Gates, Elinor [Lick Observatory, P.O. Box 85, Mount Hamilton, CA 95140 (United States); Greene, Jenny E. [Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544 (United States); Malkan, Matthew A., E-mail: woo@astro.snu.ac.kr [Department of Physics and Astronomy, University of California, Los Angeles, CA 90024 (United States)

    2012-03-01

    We investigate the calibration and uncertainties of black hole (BH) mass estimates based on the single-epoch (SE) method, using homogeneous and high-quality multi-epoch spectra obtained by the Lick Active Galactic Nucleus (AGN) Monitoring Project for nine local Seyfert 1 galaxies with BH masses < 10^8 M_Sun. By decomposing the spectra into their AGN and stellar components, we study the variability of the SE Hβ line width (full width at half-maximum intensity, FWHM_Hβ, or dispersion, σ_Hβ) and of the AGN continuum luminosity at 5100 Å (L_5100). From the distribution of the 'virial products' (∝ FWHM_Hβ^2 L_5100^0.5 or σ_Hβ^2 L_5100^0.5) measured from SE spectra, we estimate the uncertainty due to the combined variability as ≈0.05 dex (12%). This is subdominant with respect to the total uncertainty in SE mass estimates, which is dominated by uncertainties in the size-luminosity relation and virial coefficient, and is estimated to be ≈0.46 dex (a factor of ≈3). By comparing the Hβ line profile of the SE, mean, and root-mean-square (rms) spectra, we find that the Hβ line is broader in the mean (and SE) spectra than in the rms spectra by ≈0.1 dex (25%) for our sample with FWHM_Hβ < 3000 km s^-1. This result is at variance with larger mass BHs, where the difference is typically found to be much less than 0.1 dex. To correct for this systematic difference of the Hβ line profile, we introduce a line-width-dependent virial factor, resulting in a recalibration of SE BH mass estimators for low-mass AGNs.
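
    A virial product of the form FWHM_Hβ^2 L_5100^0.5 maps to a BH mass once a virial factor and zero point are chosen; a minimal sketch, where `log_f` and `log_norm` are illustrative placeholders rather than the recalibrated values derived in the paper:

```python
import math

def virial_product(fwhm_kms, l5100_erg_s):
    """Virial product ~ FWHM^2 * L^0.5, normalised to 1000 km/s and 1e44 erg/s."""
    return (fwhm_kms / 1e3) ** 2 * math.sqrt(l5100_erg_s / 1e44)

def se_black_hole_mass(fwhm_kms, l5100_erg_s, log_f=0.05, log_norm=6.8):
    """Single-epoch BH mass (solar masses):
    log M = log_f + log_norm + 2 log(FWHM / 1000 km/s) + 0.5 log(L / 1e44 erg/s).
    log_f and log_norm are placeholder calibration constants for illustration."""
    return 10.0 ** (log_f + log_norm) * virial_product(fwhm_kms, l5100_erg_s)

m = se_black_hole_mass(3000.0, 1e43)
print(f"log10(M_BH / M_sun) ~ {math.log10(m):.2f}")
```

    A line-width-dependent virial factor of the kind the paper introduces would make `log_f` a function of the measured FWHM rather than a constant.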

  2. A FLUX SCALE FOR SOUTHERN HEMISPHERE 21 cm EPOCH OF REIONIZATION EXPERIMENTS

    Energy Technology Data Exchange (ETDEWEB)

    Jacobs, Daniel C.; Bowman, Judd [School of Earth and Space Exploration, Arizona State University, Tempe, AZ (United States); Parsons, Aaron R.; Ali, Zaki; Pober, Jonathan C. [Astronomy Department, University of California, Berkeley, CA (United States); Aguirre, James E.; Moore, David F. [Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, PA (United States); Bradley, Richard F. [Department of Electrical and Computer Engineering, University of Virginia, Charlottesville, VA (United States); Carilli, Chris L. [National Radio Astronomy Observatory, Socorro, NM (United States); DeBoer, David R.; Dexter, Matthew R.; MacMahon, Dave H. E. [Radio Astronomy Lab., University of California, Berkeley, CA (United States); Gugliucci, Nicole E.; Klima, Pat [National Radio Astronomy Observatory, Charlottesville, VA (United States); Manley, Jason R.; Walbrugh, William P. [Square Kilometer Array, South Africa Project, Cape Town (South Africa); Stefan, Irina I. [Cavendish Laboratory, Cambridge (United Kingdom)

    2013-10-20

    We present a catalog of spectral measurements covering a 100-200 MHz band for 32 sources, derived from observations with a 64 antenna deployment of the Donald C. Backer Precision Array for Probing the Epoch of Reionization (PAPER) in South Africa. For transit telescopes such as PAPER, calibration of the primary beam is a difficult endeavor and errors in this calibration are a major source of error in the determination of source spectra. In order to decrease our reliance on an accurate beam calibration, we focus on calibrating sources in a narrow declination range from –46° to –40°. Since sources at similar declinations follow nearly identical paths through the primary beam, this restriction greatly reduces errors associated with beam calibration, yielding a dramatic improvement in the accuracy of derived source spectra. Extrapolating from higher frequency catalogs, we derive the flux scale using a Monte Carlo fit across multiple sources that includes uncertainty from both catalog and measurement errors. Fitting spectral models to catalog data and these new PAPER measurements, we derive new flux models for Pictor A and 31 other sources at nearby declinations; 90% are found to confirm and refine a power-law model for flux density. Of particular importance is the new Pictor A flux model, which is accurate to 1.4% and shows that between 100 MHz and 2 GHz, in contrast with previous models, the spectrum of Pictor A is consistent with a single power law given by a flux at 150 MHz of 382 ± 5.4 Jy and a spectral index of –0.76 ± 0.01. This accuracy represents an order of magnitude improvement over previous measurements in this band and is limited by the uncertainty in the catalog measurements used to estimate the absolute flux scale. The simplicity and improved accuracy of Pictor A's spectrum make it an excellent calibrator in a band important for experiments seeking to measure 21 cm emission from the epoch of reionization.
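
    The quoted single power law for Pictor A (382 Jy at 150 MHz, spectral index -0.76) can be evaluated directly; `pictor_a_flux_jy` is a hypothetical helper name:

```python
def pictor_a_flux_jy(freq_mhz, s150=382.0, alpha=-0.76):
    """Single power-law flux model: S(nu) = S_150 * (nu / 150 MHz)^alpha."""
    return s150 * (freq_mhz / 150.0) ** alpha

for f in (100.0, 150.0, 200.0):
    print(f"{f:.0f} MHz: {pictor_a_flux_jy(f):.1f} Jy")
```

    The stated 1.4% model accuracy applies to the fitted normalisation and index, not to this evaluation itself.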

  3. Transverse tectonic structural elements across Himalayan mountain front, eastern Arunachal Himalaya, India: Implication of superposed landform development on analysis of neotectonics

    Science.gov (United States)

    Bhakuni, S. S.; Luirei, Khayingshing; Kothyari, Girish Ch.; Imsong, Watinaro

    2017-04-01

    mountain front along the Sesseri, Siluk, Siku, Siang, Mingo, Sileng, Dikari, and Simen rivers. At some such junctions, landforms associated with the active right-lateral strike-slip faults are superposed over the earlier landforms formed by transverse normal faults. In addition to linear transverse features, we see evidence that the fold-thrust belt of the frontal part of the Arunachal Himalaya has also been affected by the neotectonically active NW-SE trending major fold known as the Siang antiform, which again is aligned transverse to the mountain front. The folding of the HFT and MBT along this antiform has reshaped the landscape developed between its two limbs, the western running N-S and the eastern NW-SE. The transverse faults are parallel to the already reported deep-seated transverse seismogenic strike-slip fault. The take-home message, therefore, is that any assessment of neotectonics and seismic hazard in the Himalayan region must take into account the role of transverse tectonics.

  4. Polarization leakage in epoch of reionization windows - II. Primary beam model and direction-dependent calibration

    Science.gov (United States)

    Asad, K. M. B.; Koopmans, L. V. E.; Jelić, V.; Ghosh, A.; Abdalla, F. B.; Brentjens, M. A.; de Bruyn, A. G.; Ciardi, B.; Gehlot, B. K.; Iliev, I. T.; Mevius, M.; Pandey, V. N.; Yatawatta, S.; Zaroubi, S.

    2016-11-01

    Leakage of diffuse polarized emission into Stokes I caused by the polarized primary beam of the instrument might mimic the spectral structure of the 21-cm signal coming from the epoch of reionization (EoR), making their separation difficult. Therefore, understanding the polarimetric performance of the antenna is crucial for a successful detection of the EoR signal. Here, we have calculated the accuracy of the nominal model beam of the Low Frequency ARray (LOFAR) in predicting the leakage from Stokes I to Q, U by comparing it with the corresponding leakage of compact sources actually observed in the 3C 295 field. We have found that the model beam has errors of ≤10 per cent on the predicted levels of leakage of ∼1 per cent within the field of view, i.e., if the leakage is removed perfectly using this model, the residual will be reduced to 10^-3 of the Stokes I flux. If similar levels of accuracy can be obtained in removing leakage from Stokes Q, U to I, we can say, based on the results of our previous paper, that removal of this leakage using this beam model would ensure that the leakage is well below the expected EoR signal in almost the whole instrumental k-space of the cylindrical power spectrum. We have also shown here that direction-dependent calibration can remove instrumentally polarized compact sources, given an unpolarized sky model, very close to the local noise level.

  5. An Improved Statistical Point-source Foreground Model for the Epoch of Reionization

    Energy Technology Data Exchange (ETDEWEB)

    Murray, S. G.; Trott, C. M.; Jordan, C. H. [ARC Centre of Excellence for All-sky Astrophysics (CAASTRO) (Australia)

    2017-08-10

    We present a sophisticated statistical point-source foreground model for low-frequency radio Epoch of Reionization (EoR) experiments using the 21 cm neutral hydrogen emission line. Motivated by our understanding of the low-frequency radio sky, we enhance the realism of two model components compared with existing models: the source count distributions as a function of flux density and spatial position (source clustering), extending current formalisms for the foreground covariance of 2D power-spectral modes in 21 cm EoR experiments. The former we generalize to an arbitrarily broken power law, and the latter to an arbitrary isotropically correlated field. This paper presents expressions for the modified covariance under these extensions, and shows that for a more realistic source spatial distribution, extra covariance arises in the EoR window that was previously unaccounted for. Failure to include this contribution can yield bias in the final power spectrum and underestimate uncertainties, potentially leading to a false detection of signal. The extent of this effect is uncertain, owing to ignorance of physical model parameters, but we show that it is dependent on the relative abundance of faint sources, to the effect that our extension will become more important for future deep surveys. Finally, we show that under some parameter choices, ignoring source clustering can lead to false detections on large scales, due to both the induced bias and an artificial reduction in the estimated measurement uncertainty.
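
    The "arbitrarily broken power law" generalization of the source counts can be sketched by sampling fluxes segment by segment via the inverse CDF; the break, bounds, and slopes below are illustrative assumptions, not the paper's fitted parameters:

```python
import numpy as np

rng = np.random.default_rng(3)

def _seg_integral(s_lo, s_hi, b):
    """Integral of S^-b over [s_lo, s_hi] (assumes b != 1)."""
    return (s_hi ** (1.0 - b) - s_lo ** (1.0 - b)) / (1.0 - b)

def _seg_sample(n, s_lo, s_hi, b):
    """Inverse-CDF draws from a single power-law segment dN/dS ~ S^-b."""
    u = rng.random(n)
    lo, hi = s_lo ** (1.0 - b), s_hi ** (1.0 - b)
    return (lo + u * (hi - lo)) ** (1.0 / (1.0 - b))

def sample_broken_counts(n, s_min, s_break, s_max, b1, b2):
    """Fluxes from dN/dS with slope -b1 below s_break and -b2 above,
    continuous at the break (two-segment illustration)."""
    w1 = _seg_integral(s_min, s_break, b1)
    # continuity factor s_break^(b2-b1) matches the two segments at the break
    w2 = s_break ** (b2 - b1) * _seg_integral(s_break, s_max, b2)
    n1 = rng.binomial(n, w1 / (w1 + w2))   # how many sources fall below the break
    faint = _seg_sample(n1, s_min, s_break, b1)
    bright = _seg_sample(n - n1, s_break, s_max, b2)
    return np.concatenate([faint, bright])

fluxes = sample_broken_counts(10000, 1e-3, 0.1, 10.0, 1.6, 2.5)
print(f"fraction below the break: {(fluxes < 0.1).mean():.2f}")
```

    With these slopes the counts are dominated by faint sources, which is the regime in which the paper argues the extra clustering covariance matters most.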

  6. Diverse properties of interstellar medium embedding gamma-ray bursts at the epoch of reionization

    International Nuclear Information System (INIS)

    Cen, Renyue; Kimm, Taysun

    2014-01-01

    Analysis is performed on ultra-high-resolution large-scale cosmological radiation-hydrodynamic simulations to quantify, for the first time, the physical environment of long-duration gamma-ray bursts (GRBs) at the epoch of reionization. We find that, on parsec scales, 13% of GRBs remain in high-density (≥10^4 cm^-3) low-temperature star-forming regions, whereas 87% of GRBs occur in low-density (∼10^-2.5 cm^-3) high-temperature regions heated by supernovae. More importantly, the spectral properties of GRB afterglows, such as the neutral hydrogen column density, total hydrogen column density, dust column density, gas temperature, and metallicity of intervening absorbers, vary strongly from sight line to sight line. Although our model explains extant limited observationally inferred values with respect to circumburst density, metallicity, column density, and dust properties, a substantially larger sample of high-z GRB afterglows would be required to facilitate a statistically solid test of the model. Our findings indicate that any attempt to infer the physical properties (such as metallicity) of the interstellar medium (ISM) of the host galaxy based on a very small number (usually one) of sight lines would be precarious. Utilizing high-z GRBs to probe the ISM and intergalactic medium should be undertaken properly, taking into consideration the physical diversities of the ISM.

  7. An Improved Statistical Point-source Foreground Model for the Epoch of Reionization

    Science.gov (United States)

    Murray, S. G.; Trott, C. M.; Jordan, C. H.

    2017-08-01

    We present a sophisticated statistical point-source foreground model for low-frequency radio Epoch of Reionization (EoR) experiments using the 21 cm neutral hydrogen emission line. Motivated by our understanding of the low-frequency radio sky, we enhance the realism of two model components compared with existing models: the source count distributions as a function of flux density and spatial position (source clustering), extending current formalisms for the foreground covariance of 2D power-spectral modes in 21 cm EoR experiments. The former we generalize to an arbitrarily broken power law, and the latter to an arbitrary isotropically correlated field. This paper presents expressions for the modified covariance under these extensions, and shows that for a more realistic source spatial distribution, extra covariance arises in the EoR window that was previously unaccounted for. Failure to include this contribution can yield bias in the final power spectrum and underestimate uncertainties, potentially leading to a false detection of signal. The extent of this effect is uncertain, owing to ignorance of physical model parameters, but we show that it is dependent on the relative abundance of faint sources, to the effect that our extension will become more important for future deep surveys. Finally, we show that under some parameter choices, ignoring source clustering can lead to false detections on large scales, due to both the induced bias and an artificial reduction in the estimated measurement uncertainty.

  8. Study of redshifted H I from the epoch of reionization with drift scan

    Energy Technology Data Exchange (ETDEWEB)

    Paul, Sourabh; Sethi, Shiv K.; Subrahmanyan, Ravi; Shankar, N. Udaya; Dwarakanath, K. S.; Deshpande, Avinash A. [Raman Research Institute, Bangalore (India); Bernardi, Gianni [Square Kilometre Array South Africa (SKA SA), 3rd Floor, The Park, Park Road, Pinelands 7405 (South Africa); Bowman, Judd D. [Arizona State University, Tempe, AZ85281 (United States); Briggs, Frank; Gaensler, Bryan M. [ARC Centre of Excellence for All-sky Astrophysics (CAASTRO), 44 Rosehill Street, Redfern, NSW 2016 (Australia); Cappallo, Roger J.; Corey, Brian E.; Goeke, Robert F. [MIT Haystack Observatory, Westford, MA 01886 (United States); Emrich, David [Curtin University, Perth (Australia); Greenhill, Lincoln J.; Kasper, Justin C. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Hazelton, Bryna J. [University of Washington, Seattle, WA 98195 (United States); Hewitt, Jacqueline N. [MIT Kavli Institute for Astrophysics and Space Research, 77 Massachusetts Avenue, 37-241, Cambridge, MA 02139 (United States); Johnston-Hollitt, Melanie [Victoria University of Wellington, P.O. Box 600, Wellington 6140 (New Zealand); Kaplan, David L., E-mail: sourabh@rri.res.in, E-mail: sethi@rri.res.in [University of Wisconsin-Milwaukee, Milwaukee, WI 53201 (United States); and others

    2014-09-20

    Detection of the epoch of reionization (EoR) in the redshifted 21 cm line is a challenging task. Here, we formulate the detection of the EoR signal using the drift scan strategy. This method potentially has better instrumental stability compared to the case where a single patch of sky is tracked. We demonstrate that the correlation time between measured visibilities could extend up to 1-2 hr for an interferometer array such as the Murchison Widefield Array, which has a wide primary beam. We estimate the EoR power based on a cross-correlation of visibilities over time and show that the drift scan strategy is capable of detecting the EoR signal with a signal-to-noise ratio comparable to or better than that of the tracking case. We also estimate the visibility correlation for a set of bright point sources and argue that the statistical inhomogeneity of bright point sources might allow their separation from the EoR signal.
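
    The benefit of estimating power from a cross-correlation of visibilities measured at different times, rather than from a single squared measurement, can be sketched with a toy model (all numbers and noise levels are invented): noise that is independent between the two time samples averages away, while the common sky signal survives.

```python
import numpy as np

rng = np.random.default_rng(1)
n_epochs, n_vis = 200, 64

def cnoise(scale, shape):
    """Circular complex Gaussian noise."""
    return rng.normal(0.0, scale, shape) + 1j * rng.normal(0.0, scale, shape)

sky = cnoise(1.0, n_vis)                   # common sky signal per baseline
v1 = sky + cnoise(2.0, (n_epochs, n_vis))  # two time-separated measurements
v2 = sky + cnoise(2.0, (n_epochs, n_vis))  # with independent noise

# The cross-power keeps only the correlated sky term (E|sky|^2 = 2 here),
# whereas the auto-power is biased upward by the noise variance (E|noise|^2 = 8).
cross_power = np.mean(v1 * np.conj(v2)).real
auto_power = np.mean(v1 * np.conj(v1)).real
print(f"cross-power ~ {cross_power:.2f}, auto-power ~ {auto_power:.2f}")
```

    In the paper's setting the "two measurements" are visibilities separated in time within the correlation window of the drifting sky, rather than the idealised repeats used here.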

  9. A New Method for Single-Epoch Ambiguity Resolution with Indoor Pseudolite Positioning.

    Science.gov (United States)

    Li, Xin; Zhang, Peng; Guo, Jiming; Wang, Jinling; Qiu, Weining

    2017-04-21

    Ambiguity resolution (AR) is crucial for high-precision indoor pseudolite positioning. Because of characteristics specific to pseudolite positioning systems (the geometry of the stationary pseudolites is invariant over time, the indoor signal is easily interrupted, and the first-order linear truncation error cannot be ignored), a new AR method based on the idea of the ambiguity function method (AFM) is proposed in this paper. The proposed method is a single-epoch, nonlinear method that is especially well suited to indoor pseudolite positioning. Considering the very low computational efficiency of the conventional AFM, we adopt an improved particle swarm optimization (IPSO) algorithm to search for the best solution in the coordinate domain, and a variance test of the least-squares adjustment is conducted to ensure the reliability of the resolved ambiguity. Several experiments, including static and kinematic tests, are conducted to verify the validity of the proposed AR method. Numerical results show that the IPSO significantly improves the computational efficiency of the AFM and has a more thorough search capability than the conventional grid search method. For the indoor pseudolite system, which had an initial approximate coordinate precision better than 0.2 m, the AFM exhibited good performance in both static and kinematic tests. With the ambiguity obtained from our proposed method, indoor pseudolite positioning can achieve centimeter-level precision using a low-cost single-frequency software receiver.
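
    The core AFM idea, that integer ambiguities cancel inside a complex exponential so the search can run directly in the coordinate domain, can be sketched as follows. Everything here (pseudolite geometry, receiver position, noise level) is invented for illustration, and a brute-force grid search stands in for the paper's improved PSO:

```python
import numpy as np

rng = np.random.default_rng(0)
WAVELEN = 0.1903  # GPS L1 carrier wavelength (m)

# Hypothetical indoor pseudolite positions (m) and true receiver position.
plites = np.array([[0.0, 0.0, 3.0], [10.0, 0.0, 0.5], [0.0, 10.0, 5.0],
                   [10.0, 10.0, 2.0], [5.0, 0.0, 4.0], [0.0, 5.0, 1.0]])
x_true = np.array([4.2, 6.7, 1.1])

def phase_cycles(x):
    """Carrier-phase ranges in cycles (clock terms ignored in this toy)."""
    return np.linalg.norm(plites - x, axis=1) / WAVELEN

obs = phase_cycles(x_true) + rng.normal(0.0, 0.002, len(plites))

def ambiguity_function(res):
    """AFM cost: integer ambiguities vanish inside exp(2*pi*i*residual),
    so the function peaks at the true position without fixing N explicitly."""
    return np.abs(np.exp(2j * np.pi * res).sum(axis=-1)) / plites.shape[0]

# Search the coordinate domain around an approximate position (better than
# 0.2 m, as in the paper); the grid is a simple stand-in for the IPSO search.
x_approx = x_true + np.array([0.08, -0.05, 0.04])
span = np.arange(-0.3, 0.3001, 0.01)
gx, gy, gz = np.meshgrid(span, span, span, indexing="ij")
cand = x_approx + np.stack([gx, gy, gz], axis=-1).reshape(-1, 3)

res = obs - np.linalg.norm(cand[:, None, :] - plites[None, :, :], axis=2) / WAVELEN
af = ambiguity_function(res)
x_best = cand[af.argmax()]
print("position error (m):", np.linalg.norm(x_best - x_true))
```

    The grid evaluates every candidate, which is exactly the inefficiency the paper's IPSO search is designed to avoid.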

  10. Methodological proposal for the jotted issue of the first epoch of the Hero magazine.

    Directory of Open Access Journals (Sweden)

    Maité García Díaz

    2013-07-01

    Full Text Available During the neocolonial republic there emerged in Sancti Spiritus a very significant magazine of artistic, literary and scientific character: Hero, founded by Jacinto Gomer Fernández-Morera and Anastacio Fernández-Morera del Castillo; it first appeared on December 20th, 1907. The magazine is a vivid reflection of the commercial, literary, cultural, scientific and historic panorama and, above all, of the life of the middle and upper classes of Sancti Spiritus at the beginning of the XX century. The need to make the texts of Hero available to high school and university students, to graduates with a humanistic profile, and to researchers made evident the need for an investigation pursuing such purposes. Hence the methodological proposal for the jotted issue of the first epoch of the Hero magazine (1907-1908), comprising 38 publications, which departs from the fundamental theorizations about the jotted issue toward updating these publication texts for their potential readers.

  11. Self-shielding of hydrogen in the IGM during the epoch of reionization

    Science.gov (United States)

    Chardin, Jonathan; Kulkarni, Girish; Haehnelt, Martin G.

    2018-04-01

    We investigate self-shielding of intergalactic hydrogen against ionizing radiation in radiative transfer simulations of cosmic reionization carefully calibrated with Lyα forest data. While self-shielded regions manifest as Lyman-limit systems in the post-reionization Universe, here we focus on their evolution during reionization (redshifts z = 6-10). At these redshifts, the spatial distribution of hydrogen-ionizing radiation is highly inhomogeneous, and some regions of the Universe are still neutral. After masking the neutral regions and ionizing sources in the simulation, we find that the hydrogen photoionization rate depends on the local hydrogen density in a manner very similar to that in the post-reionization Universe. The characteristic physical hydrogen density above which self-shielding becomes important at these redshifts is about n_H ∼ 3 × 10^-3 cm^-3, or ∼20 times the mean hydrogen density, reflecting the fact that during reionization photoionization rates are typically low enough that the filaments in the cosmic web are often self-shielded. The value of the typical self-shielding density decreases by a factor of 3 between redshifts z = 3 and 10, and follows the evolution of the average photoionization rate in ionized regions in a simple fashion. We provide a simple parameterization of the photoionization rate as a function of density in self-shielded regions during the epoch of reionization.
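
    A parameterization of this kind can be sketched with the Rahmati et al. (2013) fitting form for the suppression of the photoionization rate in dense gas; the coefficients below are taken from that work (not the fit derived in this paper), combined with the quoted characteristic density n_H ∼ 3 × 10^-3 cm^-3:

```python
def gamma_suppression(n_h, n_ss):
    """Photoionization-rate suppression Gamma/Gamma_UVB as a function of
    hydrogen density, using the Rahmati et al. (2013) fitting form."""
    x = n_h / n_ss
    return 0.98 * (1.0 + x ** 1.64) ** -2.28 + 0.02 * (1.0 + x) ** -0.84

N_SS = 3e-3  # characteristic self-shielding density during reionization (cm^-3)
suppression = {n: gamma_suppression(n, N_SS) for n in (3e-4, 3e-3, 3e-2)}
for n, g in suppression.items():
    print(f"n_H = {n:.0e} cm^-3: Gamma/Gamma_UVB ~ {g:.3f}")
```

    The redshift evolution described above would enter through n_ss tracking the average photoionization rate in ionized regions.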

  12. IslandFAST: A Semi-numerical Tool for Simulating the Late Epoch of Reionization

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Yidong; Chen, Xuelei [Key Laboratory for Computational Astrophysics, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100012 (China); Yue, Bin [National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100012 (China)

    2017-08-01

    We present the algorithm and main results of our semi-numerical simulation, islandFAST, which was developed from 21cmFAST and designed for the late stage of reionization. The islandFAST simulation predicts the evolution and size distribution of the large-scale underdense neutral regions (neutral islands), and we find that the late Epoch of Reionization proceeds very quickly, showing a characteristic scale of the neutral islands at each redshift. Using islandFAST, we compare the impact of two types of absorption systems, i.e., large-scale underdense neutral islands versus small-scale overdense absorbers, in regulating the reionization process. The neutral islands dominate the morphology of the ionization field, while the small-scale absorbers dominate the mean free path of ionizing photons, and also delay and prolong the reionization process. With our semi-numerical simulation, the evolution of the ionizing background can be derived self-consistently given a model for the small absorbers. The hydrogen ionization rate of the ionizing background is reduced by an order of magnitude in the presence of dense absorbers.

  13. Coarse Initial Orbit Determination for a Geostationary Satellite Using Single-Epoch GPS Measurements

    Directory of Open Access Journals (Sweden)

    Ghangho Kim

    2015-04-01

    Full Text Available A practical algorithm is proposed for determining the orbit of a geostationary orbit (GEO) satellite using single-epoch measurements from a Global Positioning System (GPS) receiver under sparse visibility of the GPS satellites. The algorithm uses three components of a state vector to determine the satellite’s state, even when it is impossible to apply the classical single-point solution (SPS). Through consideration of the characteristics of the GEO orbital elements and GPS measurements, the components of the state vector are reduced to three. Nevertheless, the algorithm remains sufficiently accurate for a GEO satellite. The developed algorithm was tested on simulated measurements from two or three GPS satellites, and the maximum position error was found to be less than approximately 40 km, and in some cases only a few kilometers, within the geometric range, even when the classical SPS solution was unattainable. In addition, extended Kalman filter (EKF) tests of a GEO satellite with the estimated initial state were performed to validate the algorithm. In the EKF, a reliable dynamic model was adopted to reduce the probability of divergence that can be caused by large errors in the initial state.

  14. ARECIBO MULTI-EPOCH H I ABSORPTION MEASUREMENTS AGAINST PULSARS: TINY-SCALE ATOMIC STRUCTURE

    International Nuclear Information System (INIS)

    Stanimirovic, S.; Weisberg, J. M.; Pei, Z.; Tuttle, K.; Green, J. T.

    2010-01-01

We present results from multi-epoch neutral hydrogen (H I) absorption observations of six bright pulsars with the Arecibo telescope. Moving through the interstellar medium (ISM) with transverse velocities of 10-150 AU yr^-1, these pulsars have swept across 1-200 AU over the course of our experiment, allowing us to probe the existence and properties of the tiny-scale atomic structure (TSAS) in the cold neutral medium (CNM). While most of the observed pulsars show no significant change in their H I absorption spectra, we have identified at least two clear TSAS-induced opacity variations in the direction of B1929+10. These observations require strong spatial inhomogeneities in either the TSAS clouds' physical properties themselves or else in the clouds' galactic distribution. While TSAS is occasionally detected on spatial scales down to 10 AU, it is too rare to be characterized by a spectrum of turbulent CNM fluctuations on scales of 10^1-10^3 AU, as previously suggested by some work. In the direction of B1929+10, an apparent correlation between TSAS and interstellar clouds inside the warm Local Bubble (LB) indicates that TSAS may be tracing the fragmentation of the LB wall via hydrodynamic instabilities. While similar fragmentation events occur frequently throughout the ISM, the warm medium surrounding these cold cloudlets induces a natural selection effect wherein small TSAS clouds evaporate quickly and are rare, while large clouds survive longer and become a general property of the ISM.

  15. GNSS Single Frequency, Single Epoch Reliable Attitude Determination Method with Baseline Vector Constraint

    Directory of Open Access Journals (Sweden)

    Ang Gong

    2015-12-01

Full Text Available For Global Navigation Satellite System (GNSS) single frequency, single epoch attitude determination, this paper proposes a new reliable method with a baseline vector constraint. First, prior knowledge of the baseline length, heading, and pitch, obtained from other navigation equipment or sensors, is used to rigorously reconstruct the objective function. Then, the search strategy is improved: a gradually enlarged ellipsoidal search space is substituted for the non-ellipsoidal one, ensuring that the correct ambiguity candidates lie within it and allowing the search to be carried out directly by the least-squares ambiguity decorrelation (LAMBDA) method. Some vector candidates are then eliminated by a derived approximate inequality, which accelerates the search. Experimental results show that, compared to the traditional method with only a baseline length constraint, the new method can exploit a priori three-dimensional baseline knowledge to fix ambiguities reliably and achieve a high success rate. The tests also verify that it is not very sensitive to baseline vector error and performs robustly when the angular error is moderate.
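The baseline-length part of the constraint can be illustrated with a toy screening step: candidates whose implied fixed baseline length violates the known length are discarded. The linear candidate-to-baseline model and the tolerance below are hypothetical stand-ins, not the paper's derived inequality:

```python
import numpy as np

def baseline_from_ambiguity(a, b_float, G, wavelength=0.19):
    """Toy linear model: fixing integer ambiguities a shifts the float
    baseline estimate b_float by G @ a carrier wavelengths (illustrative)."""
    return b_float + wavelength * (G @ a)

def screen_by_length(candidates, b_float, G, known_len, tol=0.05):
    """Keep only ambiguity candidates whose implied baseline length is
    within tol metres of the a priori known baseline length."""
    kept = []
    for a in candidates:
        b = baseline_from_ambiguity(np.asarray(a, dtype=float), b_float, G)
        if abs(np.linalg.norm(b) - known_len) <= tol:
            kept.append(a)
    return kept
```

In practice such a screen shrinks the candidate set before (or during) the LAMBDA search, which is where the speed-up comes from.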

  16. The mean free path of hydrogen ionizing photons during the epoch of reionization

    Science.gov (United States)

    Rahmati, Alireza; Schaye, Joop

    2018-05-01

We use the Aurora radiation-hydrodynamical simulations to study the mean free path (MFP) for hydrogen ionizing photons during the epoch of reionization. We directly measure the MFP by averaging the distance 1 Ry photons travel before reaching an optical depth of unity along random lines-of-sight. During reionization the free paths tend to end in neutral gas with densities near the cosmic mean, while after reionization the end points tend to be overdense but highly ionized. Despite the increasing importance of discrete, over-dense systems, the cumulative contribution of systems with N_HI ≲ 10^16.5 cm^-2 suffices to drive the MFP at z ≈ 6, while at earlier times higher column densities are more important. After reionization the typical size of HI systems is close to the local Jeans length, but during reionization it is much larger. The mean free path for photons originating close to galaxies, MFP_gal, is much smaller than the cosmic MFP. After reionization this enhancement can remain significant up to starting distances of ~1 comoving Mpc. During reionization, however, MFP_gal for distances ~10^2-10^3 comoving kpc typically exceeds the cosmic MFP. These findings have important consequences for models that interpret the intergalactic MFP as the distance escaped ionizing photons can travel from galaxies before being absorbed and may cause them to underestimate the required escape fraction from galaxies, and/or the required emissivity of ionizing photons after reionization.
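The direct measurement described here (average distance to optical depth unity along random sightlines) can be sketched on a periodic opacity grid. The grid, units, and axis-aligned sightlines are simplifying assumptions for illustration, not the Aurora analysis code:

```python
import numpy as np

def mean_free_path(kappa, dx, n_los=400, seed=1):
    """Average distance to optical depth tau = 1 along random x-axis
    sightlines through a periodic 3D opacity grid kappa [1/(units of dx)]."""
    rng = np.random.default_rng(seed)
    n = kappa.shape[0]
    dists = []
    for _ in range(n_los):
        i0, j, k = rng.integers(n, size=3)   # random starting cell
        tau = 0.0
        for step in range(1, 50 * n + 1):    # cap the path length
            tau += kappa[(i0 + step - 1) % n, j, k] * dx
            if tau >= 1.0:
                dists.append(step * dx)
                break
    return float(np.mean(dists))
```

For a uniform opacity field this reduces to the textbook result MFP = 1/kappa, which makes a convenient sanity check.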

  17. New Finds of Painted Ceramics of the Epoch of the Abkhazian Kingdom

    Directory of Open Access Journals (Sweden)

    Armarchuk Ekaterina A.

    2012-03-01

    Full Text Available The painted polished red-clay ceramic items of the 8th-10th centuries, found on the Anakopia and other sites of Northern Abkhazia and the adjoining district of the city of Sochi are considered in the article. These are small and medium sized single-handled narrow-necked jugs. The painting is made in dark brown paint. The ornament consists mainly of straight and wavy lines; hatching in the form of oblique grids, braids and patterns of specks occur. Taking into account the new finds made during the 2007-2008 excavations of the necropolis on the Sakharnaya Golovka Mountain and of the church near Veseloye village, the analysis of painted polished ware of the Abkhazian Kingdom epoch makes it possible to determine its territorial and chronological distribution limits. The area of this pottery covers the Black Sea coast from the mouth of the Mzymta river to New Athos. Based on the results of the 1950-1980s excavations, it has been dated to the 8th-10th centuries; however, the new materials allow restricting the interval to the 9th-10th centuries. The differences in the vessels manufacturing technology indicate the presence of at least two production centers. The small painted burnished jars discovered in Christian graves suggest that they had been used to store incense or chrism.

  18. FIVE NEW TRANSIT EPOCHS OF THE EXOPLANET OGLE-TR-111b

    International Nuclear Information System (INIS)

    Hoyer, S.; Rojo, P.; Lopez-Morales, M.; DIaz, R. F.; Chambers, J.; Minniti, D.

    2011-01-01

We report five new transit epochs of the extrasolar planet OGLE-TR-111b, observed in the v-HIGH and Bessell I bands with the FORS1 and FORS2 at the ESO Very Large Telescope between 2008 April and May. The new transits have been combined with all previously published transit data for this planet to provide a new transit timing variations (TTVs) analysis of its orbit. We find no TTVs with amplitudes larger than 1.5 minutes over a four-year observation time baseline, in agreement with the recent result by Adams et al. Dynamical simulations fully exclude the presence of additional planets in the system with masses greater than 1.3, 0.4, and 0.5 M⊕ at the 3:2, 1:2, and 2:1 resonances, respectively. We also place an upper limit of about 30 M⊕ on the mass of potential second planets in the region between the 3:2 and 1:2 mean-motion resonances.
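The TTV test itself is a linear-ephemeris fit followed by an inspection of the observed-minus-calculated residuals; a minimal sketch of that step (not the authors' pipeline, which also propagates timing uncertainties):

```python
import numpy as np

def oc_residuals_minutes(epochs, times_days):
    """Fit T(E) = T0 + P*E by least squares and return observed-minus-
    calculated (O-C) residuals in minutes (input times in days)."""
    A = np.column_stack([np.ones_like(epochs, dtype=float), epochs])
    (t0, period), *_ = np.linalg.lstsq(A, times_days, rcond=None)
    return (times_days - (t0 + period * epochs)) * 24.0 * 60.0
```

A "no TTV above 1.5 minutes" statement corresponds to all O-C residuals staying inside that band over the baseline.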

  19. The Shock Dynamics of Heterogeneous YSO Jets: 3D Simulations Meet Multi-epoch Observations

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, E. C.; Frank, A. [Department of Physics and Astronomy, University of Rochester, Rochester, NY 14627-0171 (United States); Hartigan, P. [Department of Physics and Astronomy, Rice University, 6100 S. Main, Houston, TX 77521-1892 (United States); Lebedev, S. V. [Blackett Laboratory, Imperial College London, Prince Consort Road, London SW7 2BW (United Kingdom)

    2017-03-10

    High-resolution observations of young stellar object (YSO) jets show them to be composed of many small-scale knots or clumps. In this paper, we report results of 3D numerical simulations designed to study how such clumps interact and create morphologies and kinematic patterns seen in emission line observations. Our simulations focus on clump scale dynamics by imposing velocity differences between spherical, over-dense regions, which then lead to the formation of bow shocks as faster clumps overtake slower material. We show that much of the spatial structure apparent in emission line images of jets arises from the dynamics and interactions of these bow shocks. Our simulations show a variety of time-dependent features, including bright knots associated with Mach stems where the shocks intersect, a “frothy” emission structure that arises from the presence of the Nonlinear Thin Shell Instability along the surfaces of the bow shocks, and the merging and fragmentation of clumps. Our simulations use a new non-equilibrium cooling method to produce synthetic emission maps in H α and [S ii]. These are directly compared to multi-epoch Hubble Space Telescope observations of Herbig–Haro jets. We find excellent agreement between features seen in the simulations and the observations in terms of both proper motion and morphologies. Thus we conclude that YSO jets may be dominated by heterogeneous structures and that interactions between these structures and the shocks they produce can account for many details of YSO jet evolution.

  20. Luminescence in Primordial Helium Lines at the Pre-recombination Epoch

    Science.gov (United States)

    Dubrovich, V. K.; Grachev, S. I.

    2018-04-01

The formation of luminescent subordinate He I lines by the absorption of radiation from a source in lines of the main He I series in an expanding Universe is considered. A burst of radiation in continuum is assumed to occur at some instant of time corresponding to redshift z_0. This radiation is partially absorbed at different z < z_0 in lines of the main He I series (different pumping channels) and then is partially converted into radiation in subordinate lines. If ν_ik is the laboratory transition frequency of some subordinate line emerging at some z, then at the present epoch its frequency will be ν = ν_ik/(1 + z). The quantum yield, i.e., the number of photons emitted in the subordinate line per initial excited atom, has been calculated for different z (and, consequently, for different ν). Several pumping channels have been considered. We show that the luminescent lines can be both emission and absorption ones; the same line can be an emission one for one of the pumping channels and an absorption one for another. For example, the 1s2s-1s2p (¹S-¹P*) line is an emission one for the 1s²-1s2p pumping and an absorption one for the 1s²-1s3p pumping. We show that in the frequency range 30-80 GHz the total quantum yield for the first and second of the above channels can reach +50 and -50%, respectively.
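The redshifting relation ν = ν_ik/(1 + z) is easy to make concrete. Taking, as an assumed example, the He I 1s2s-1s2p singlet line at a rest wavelength of about 2.058 μm (a standard He I line; the paper's specific channels may differ), the observed frequency lands in the quoted 30-80 GHz window for z of a few thousand:

```python
C = 2.99792458e8            # speed of light [m/s]
LAMBDA_IK = 2.0581e-6       # approx. rest wavelength of He I 2^1S-2^1P [m]
NU_IK = C / LAMBDA_IK       # rest-frame transition frequency [Hz], ~1.46e14

def observed_frequency(z):
    """Present-day frequency of a line emitted at redshift z."""
    return NU_IK / (1.0 + z)
```

For this line, z ≈ 1820 gives an observed frequency of about 80 GHz and z ≈ 4855 about 30 GHz, i.e. deep in the pre-recombination era as the title states.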

  1. Simultaneously constraining the astrophysics of reionisation and the epoch of heating with 21CMMC

    Science.gov (United States)

    Greig, Bradley; Mesinger, Andrei

    2018-05-01

We extend our MCMC sampler of 3D EoR simulations, 21CMMC, to perform parameter estimation directly on light-cones of the cosmic 21cm signal. This brings theoretical analysis one step closer to matching the expected 21-cm signal from next generation interferometers like HERA and the SKA. Using the light-cone version of 21CMMC, we quantify biases in the recovered astrophysical parameters obtained from the 21cm power spectrum when using the co-eval approximation to fit a mock 3D light-cone observation. While ignoring the light-cone effect does not bias the parameters under most assumptions, it can still underestimate their uncertainties. However, significant biases (~few - 10 σ) are possible if all of the following conditions are met: (i) foreground removal is very efficient, allowing large physical scales (k ~ 0.1 Mpc-1) to be used in the analysis; (ii) theoretical modelling is accurate to ~10 per cent in the power spectrum amplitude; and (iii) the 21cm signal evolves rapidly (i.e. the epochs of reionisation and heating overlap significantly).

  2. The Mars water cycle at other epochs: Recent history of the polar caps and layered terrain

    Science.gov (United States)

    Jakosky, Bruce M.; Henderson, Bradley G.; Mellon, Michael T.

    1992-01-01

The Martian polar caps and layered terrain presumably evolve through the deposition and removal of small amounts of water and dust each year; the current cap attributes therefore represent the incremental transport of single years integrated over long periods of time. We studied the role of condensation and sublimation of water ice in this process by examining the seasonal water cycle during the last 10^7 yr. In the model, axial obliquity, eccentricity, and L_s of perihelion vary according to dynamical models. At each epoch, the seasonal variations in temperature are calculated at the two poles, keeping track of the seasonal CO2 cap and the summertime sublimation of water vapor into the atmosphere; the net exchange of water between the two caps is calculated from the difference in summertime sublimation between the two caps (or from the sublimation from one cap if the other is covered with CO2 frost all year). Results from the model can help to explain (1) the apparent inconsistency between the timescales inferred for layer formation and the much older crater-retention age of the cap and (2) the difference in size of the two residual caps, the south being smaller than the north.

  3. Coarse Initial Orbit Determination for a Geostationary Satellite Using Single-Epoch GPS Measurements

    Science.gov (United States)

    Kim, Ghangho; Kim, Chongwon; Kee, Changdon

    2015-01-01

    A practical algorithm is proposed for determining the orbit of a geostationary orbit (GEO) satellite using single-epoch measurements from a Global Positioning System (GPS) receiver under the sparse visibility of the GPS satellites. The algorithm uses three components of a state vector to determine the satellite’s state, even when it is impossible to apply the classical single-point solutions (SPS). Through consideration of the characteristics of the GEO orbital elements and GPS measurements, the components of the state vector are reduced to three. However, the algorithm remains sufficiently accurate for a GEO satellite. The developed algorithm was tested on simulated measurements from two or three GPS satellites, and the calculated maximum position error was found to be less than approximately 40 km or even several kilometers within the geometric range, even when the classical SPS solution was unattainable. In addition, extended Kalman filter (EKF) tests of a GEO satellite with the estimated initial state were performed to validate the algorithm. In the EKF, a reliable dynamic model was adapted to reduce the probability of divergence that can be caused by large errors in the initial state. PMID:25835299

  4. Effects of Varying Epoch Lengths, Wear Time Algorithms, and Activity Cut-Points on Estimates of Child Sedentary Behavior and Physical Activity from Accelerometer Data.

    Science.gov (United States)

    Banda, Jorge A; Haydel, K Farish; Davila, Tania; Desai, Manisha; Bryson, Susan; Haskell, William L; Matheson, Donna; Robinson, Thomas N

    2016-01-01

To examine the effects of accelerometer epoch lengths, wear time (WT) algorithms, and activity cut-points on estimates of WT, sedentary behavior (SB), and physical activity (PA). 268 7-11 year-olds with BMI ≥ 85th percentile for age and sex wore accelerometers on their right hips for 4-7 days. Data were processed and analyzed at epoch lengths of 1-, 5-, 10-, 15-, 30-, and 60-seconds. For each epoch length, WT minutes/day was determined using three common WT algorithms, and minutes/day and percent time spent in SB, light (LPA), moderate (MPA), and vigorous (VPA) PA were determined using five common activity cut-points. ANOVA tested differences in WT, SB, LPA, MPA, VPA, and MVPA when using the different epoch lengths, WT algorithms, and activity cut-points. WT minutes/day varied significantly by epoch length when using the NHANES WT algorithm (p < 0.05), but not when using the other two WT algorithms. Minutes/day and percent time spent in SB, LPA, MPA, VPA, and MVPA varied significantly by epoch length for all sets of activity cut-points tested with all three WT algorithms (all p < 0.05). Scaling WT algorithms and activity cut-point definitions to match different epoch lengths may introduce significant errors. Estimates of SB and PA from studies that process and analyze data using different epoch lengths, WT algorithms, and/or activity cut-points are not comparable, potentially leading to very different results, interpretations, and conclusions, misleading research and public policy.
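The core mechanics, re-integrating counts at a chosen epoch length and applying a cut-point scaled to that length, can be sketched as follows (the cut-point value is a placeholder, not one of the five published sets):

```python
import numpy as np

def rebin_counts(counts_1s, epoch_s):
    """Sum 1-second accelerometer counts into non-overlapping epochs of
    epoch_s seconds (any trailing partial epoch is discarded)."""
    n = (len(counts_1s) // epoch_s) * epoch_s
    return counts_1s[:n].reshape(-1, epoch_s).sum(axis=1)

def minutes_below_cutpoint(counts_1s, epoch_s, sb_cut_per_min=100):
    """Minutes classified sedentary: epochs whose summed counts fall below
    the cut-point after scaling it from counts/min to counts/epoch."""
    epochs = rebin_counts(counts_1s, epoch_s)
    cut = sb_cut_per_min * epoch_s / 60.0
    return (epochs < cut).sum() * epoch_s / 60.0
```

An intermittent signal shows the epoch-length sensitivity directly: one second of 120 counts in each otherwise-idle minute classifies as non-sedentary at a 60-s epoch (120 counts/epoch exceeds the scaled cut) but as 59/60 sedentary at a 1-s epoch, which is exactly the kind of divergence the study quantifies.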

  5. Imprints of quasar duty cycle on the 21cm signal from the Epoch of Reionization

    Science.gov (United States)

    Bolgar, Florian; Eames, Evan; Hottier, Clément; Semelin, Benoit

    2018-05-01

    Quasars contribute to the 21-cm signal from the Epoch of Reionization (EoR) primarily through their ionizing UV and X-ray emission. However, their radio continuum and Lyman-band emission also regulates the 21-cm signal in their direct environment, potentially leaving the imprint of their duty cycle. We develop a model for the radio and UV luminosity functions of quasars from the EoR, and constrain it using recent observations. Our model is consistent with the recent discovery of the quasar J1342+0928 at redshift ˜7.5, and also predicts only a few quasars suitable for 21-cm forest observations (˜10 mJy) in the sky. We exhibit a new effect on the 21-cm signal observed against the CMB: a radio-loud quasar can leave the imprint of its duty cycle on the 21-cm tomography. We apply this effect in a cosmological simulation and conclude that the effect of typical radio-loud quasars is most likely negligible in an SKA field of view. For a ˜10mJy quasar the effect is stronger though hardly observable at SKA resolution. Then we study the contribution of the lyman band (Ly-α to Ly-β) emission of quasars to the Wouthuisen-Field coupling. The collective effect of quasars on the 21-cm power spectrum is larger than the thermal noise at low k, though featureless. However, a distinctive pattern around the brightest quasars in an SKA field of view may be observable in the tomography, encoding the duration of their duty cycle. This pattern has a high signal-to-noise ratio for the brightest quasar in a typical SKA shallow survey.

  6. Chimenea and other tools: Automated imaging of multi-epoch radio-synthesis data with CASA

    Science.gov (United States)

    Staley, T. D.; Anderson, G. E.

    2015-11-01

In preparing the way for the Square Kilometre Array and its pathfinders, there is a pressing need to begin probing the transient sky in a fully robotic fashion using the current generation of radio telescopes. Effective exploitation of such surveys requires a largely automated data-reduction process. This paper introduces an end-to-end automated reduction pipeline, AMIsurvey, used for calibrating and imaging data from the Arcminute Microkelvin Imager Large Array. AMIsurvey makes use of several component libraries which have been packaged separately for open-source release. The most scientifically significant of these is chimenea, which implements a telescope-agnostic algorithm for automated imaging of pre-calibrated multi-epoch radio-synthesis data, of the sort typically acquired for transient surveys or follow-up. The algorithm aims to improve upon standard imaging pipelines by utilizing iterative RMS-estimation and automated source-detection to avoid the so-called 'clean bias', and makes use of CASA subroutines for the underlying image-synthesis operations. At a lower level, AMIsurvey relies upon two libraries, drive-ami and drive-casa, built to allow use of mature radio-astronomy software packages from within Python scripts. While targeted at automated imaging, the drive-casa interface can also be used to automate interaction with any of the CASA subroutines from a generic Python process. Additionally, these packages may be of wider technical interest beyond radio-astronomy, since they demonstrate use of the Python library pexpect to emulate terminal interaction with an external process. This approach allows for rapid development of a Python interface to any legacy or externally-maintained pipeline which accepts command-line input, without requiring alterations to the original code.
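The iterative RMS-estimation step mentioned above is, in essence, kappa-sigma clipping of the residual image; a generic numpy sketch (chimenea's actual thresholding logic may differ):

```python
import numpy as np

def clipped_rms(image, kappa=3.0, iters=5):
    """Estimate background RMS by iteratively discarding pixels more than
    kappa sigma from the mean, so that bright sources do not inflate the
    noise estimate used to set clean thresholds."""
    data = np.asarray(image, dtype=float).ravel()
    for _ in range(iters):
        mu, rms = data.mean(), data.std()
        data = data[np.abs(data - mu) < kappa * rms]
    return float(data.std())
```

On an image containing a few bright sources, the raw standard deviation overestimates the noise by a large factor, while the clipped estimate stays close to the true background RMS.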

  7. New probe of magnetic fields in the prereionization epoch. I. Formalism

    Science.gov (United States)

    Venumadhav, Tejaswi; Oklopčić, Antonija; Gluscevic, Vera; Mishra, Abhilash; Hirata, Christopher M.

    2017-04-01

We propose a method of measuring extremely weak magnetic fields in the intergalactic medium prior to and during the epoch of cosmic reionization. The method utilizes the Larmor precession of spin-polarized neutral hydrogen in the triplet state of the hyperfine transition. This precession leads to a systematic change in the brightness temperature fluctuations of the 21-cm line from the high-redshift universe, and thus the statistics of these fluctuations encode information about the magnetic field the atoms are immersed in. The method is most suited to probing fields that are coherent on large scales; in this paper, we consider a homogeneous magnetic field over the scale of the 21-cm fluctuations. Due to the long lifetime of the triplet state of the 21-cm transition, this technique is naturally sensitive to extremely weak field strengths, of order 10^-19 G at a reference redshift of ~20 (or 10^-21 G if scaled to the present day). Therefore, this might open up the possibility of probing primordial magnetic fields just prior to reionization. If the magnetic fields are much stronger, it is still possible to use this method to infer their direction, and place a lower limit on their strength. In this paper (Paper I in a series on this effect), we perform detailed calculations of the microphysics behind this effect, and take into account all the processes that affect the hyperfine transition, including radiative decays, collisions, and optical pumping by Lyman-α photons. We conclude with an analytic formula for the brightness temperature of linear-regime fluctuations in the presence of a magnetic field, and discuss its limiting behavior for weak and strong fields.

  8. MULTI-EPOCH OBSERVATIONS OF HD 69830: HIGH-RESOLUTION SPECTROSCOPY AND LIMITS TO VARIABILITY

    Energy Technology Data Exchange (ETDEWEB)

    Beichman, C. A.; Tanner, A. M.; Bryden, G.; Akeson, R. L.; Ciardi, D. R. [NASA Exoplanet Science Institute, Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA, 91125 (United States); Lisse, C. M. [Johns Hopkins University, Applied Physics Laboratory, Laurel, MD 20723 (United States); Boden, A. F. [Caltech Optical Observatories, California Institute of Technology, Pasadena, CA 91125 (United States); Dodson-Robinson, S. E.; Salyk, C. [University of Texas, Astronomy Department, Austin, TX 78712 (United States); Wyatt, M. C., E-mail: chas@pop.jpl.nasa.gov [Institute of Astronomy, University of Cambridge, Cambridge, CB3 0HA (United Kingdom)

    2011-12-10

The main-sequence solar-type star HD 69830 has an unusually large amount of dusty debris orbiting close to three planets found via the radial velocity technique. In order to explore the dynamical interaction between the dust and planets, we have performed multi-epoch photometry and spectroscopy of the system over several orbits of the outer dust. We find no evidence for changes in either the dust amount or its composition, with upper limits of 5%-7% (1σ per spectral element) on the variability of the dust spectrum over 1 year, 3.3% (1σ) on the broadband disk emission over 4 years, and 33% (1σ) on the broadband disk emission over 24 years. Detailed modeling of the spectrum of the emitting dust indicates that the dust is located outside of the orbits of the three planets and has a composition similar to main-belt, C-type asteroids in our solar system. Additionally, we find no evidence for a wide variety of gas species associated with the dust. Our new higher signal-to-noise spectra do not confirm our previously claimed detection of H2O ice leading to a firm conclusion that the debris can be associated with the break-up of one or more C-type asteroids formed in the dry, inner regions of the protoplanetary disk of the HD 69830 system. The modeling of the spectral energy distribution and high spatial resolution observations in the mid-infrared are consistent with a ~1 AU location for the emitting material.

  9. Multi-band, multi-epoch observations of the transiting warm Jupiter WASP-80b

    Energy Technology Data Exchange (ETDEWEB)

    Fukui, Akihiko; Kuroda, Daisuke [Okayama Astrophysical Observatory, National Astronomical Observatory of Japan, Asakuchi, Okayama 719-0232 (Japan); Kawashima, Yui; Ikoma, Masahiro; Kurosaki, Kenji [Department of Earth and Planetary Science, Graduate School of Science, The University of Tokyo, 7-3-1 Bunkyo-ku, Tokyo 113-0033 (Japan); Narita, Norio; Nishiyama, Shogo; Takahashi, Yasuhiro H.; Nagayama, Shogo [National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka, Tokyo 181-8588 (Japan); Onitsuka, Masahiro; Baba, Haruka; Ryu, Tsuguru [The Graduate University for Advanced Studies, 2-21-1 Osawa, Mitaka, Tokyo 181-8588 (Japan); Ita, Yoshifusa; Onozato, Hiroki [Astronomical Institute, Graduate School of Science, Tohoku University, 6-3 Aramaki Aoba, Aoba-ku, Sendai, Miyagi 980-8578 (Japan); Hirano, Teruyuki; Kawauchi, Kiyoe [Department of Earth and Planetary Sciences, Tokyo Institute of Technology, 2-12-1 Ookayama, Meguro-ku, Tokyo 152-8551 (Japan); Hori, Yasunori [Department of Astronomy and Astrophysics, University of California, Santa Cruz, CA 95064 (United States); Nagayama, Takahiro [Department of Physics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8602 (Japan); Tamura, Motohide [Department of Astronomy, Graduate School of Science, The University of Tokyo, and National Astronomical Observatory of Japan (Japan); Kawai, Nobuyuki, E-mail: afukui@oao.nao.ac.jp [Department of Physics, Tokyo Institute of Technology, 2-12-1, Oookayama, Meguro, Tokyo 152-8551 (Japan); and others

    2014-08-01

WASP-80b is a warm Jupiter transiting a bright late-K/early-M dwarf, providing a good opportunity to extend the atmospheric study of hot Jupiters toward the lower temperature regime. We report multi-band, multi-epoch transit observations of WASP-80b by using three ground-based telescopes covering from optical (g', Rc, and Ic bands) to near-infrared (NIR; J, H, and Ks bands) wavelengths. We observe 5 primary transits, each in 3 or 4 different bands simultaneously, obtaining 17 independent transit light curves. Combining them with results from previous works, we find that the observed transmission spectrum is largely consistent with both a solar abundance and thick cloud atmospheric models at a 1.7σ discrepancy level. On the other hand, we find a marginal spectral rise in the optical region compared to the NIR region at the 2.9σ level, which possibly indicates the existence of haze in the atmosphere. We simulate theoretical transmission spectra for a solar abundance but hazy atmosphere, finding that a model with equilibrium temperature of 600 K can explain the observed data well, having a discrepancy level of 1.0σ. We also search for transit timing variations, but find no timing excess larger than 50 s from a linear ephemeris. In addition, we conduct 43 day long photometric monitoring of the host star in the optical bands, finding no significant variation in the stellar brightness. Combined with the fact that no spot-crossing event is observed in the five transits, our results confirm previous findings that the host star appears quiet for spot activities, despite the indications of strong chromospheric activities.

  10. IMAGING THE EPOCH OF REIONIZATION: LIMITATIONS FROM FOREGROUND CONFUSION AND IMAGING ALGORITHMS

    International Nuclear Information System (INIS)

    Vedantham, Harish; Udaya Shankar, N.; Subrahmanyan, Ravi

    2012-01-01

    Tomography of redshifted 21 cm transition from neutral hydrogen using Fourier synthesis telescopes is a promising tool to study the Epoch of Reionization (EoR). Limiting the confusion from Galactic and extragalactic foregrounds is critical to the success of these telescopes. The instrumental response or the point-spread function (PSF) of such telescopes is inherently three dimensional with frequency mapping to the line-of-sight (LOS) distance. EoR signals will necessarily have to be detected in data where continuum confusion persists; therefore, it is important that the PSF has acceptable frequency structure so that the residual foreground does not confuse the EoR signature. This paper aims to understand the three-dimensional PSF and foreground contamination in the same framework. We develop a formalism to estimate the foreground contamination along frequency, or equivalently LOS dimension, and establish a relationship between foreground contamination in the image plane and visibility weights on the Fourier plane. We identify two dominant sources of LOS foreground contamination—'PSF contamination' and 'gridding contamination'. We show that PSF contamination is localized in LOS wavenumber space, beyond which there potentially exists an 'EoR window' with negligible foreground contamination where we may focus our efforts to detect EoR. PSF contamination in this window may be substantially reduced by judicious choice of a frequency window function. Gridding and imaging algorithms create additional gridding contamination and we propose a new imaging algorithm using the Chirp Z Transform that significantly reduces this contamination. Finally, we demonstrate the analytical relationships and the merit of the new imaging algorithm for the case of imaging with the Murchison Widefield Array.
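The Chirp Z Transform at the heart of the proposed imaging algorithm evaluates a DFT on an arbitrary spiral of points via Bluestein's identity nk = (n² + k² - (k-n)²)/2, which turns the sum into a single FFT-sized convolution. A minimal numpy version of the generic transform (the paper's imaging application of it is more elaborate):

```python
import numpy as np

def czt(x, m, w, a):
    """Chirp z-transform: X[k] = sum_n x[n] * a**(-n) * w**(n*k), k=0..m-1,
    computed with Bluestein's algorithm (one FFT-length convolution)."""
    x = np.asarray(x, dtype=complex)
    n = len(x)
    k = np.arange(max(n, m))
    wk2 = w ** (k.astype(float) ** 2 / 2.0)        # chirp factors w^(k^2/2)
    y = x * a ** (-np.arange(n)) * wk2[:n]          # pre-multiplied input
    L = 1
    while L < n + m - 1:                            # FFT length for linear conv
        L *= 2
    v = np.zeros(L, dtype=complex)
    v[:m] = 1.0 / wk2[:m]                           # v[j] = w^(-j^2/2), j >= 0
    v[L - n + 1:] = 1.0 / wk2[1:n][::-1]            # wrapped negative indices
    g = np.fft.ifft(np.fft.fft(y, L) * np.fft.fft(v))
    return g[:m] * wk2[:m]
```

Setting w = exp(-2πi/N) and a = 1 recovers the ordinary DFT, while other choices of w and a let the transform zoom onto an arbitrary band, which is what gives the imaging algorithm its flexibility over a plain FFT grid.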

  11. Spectroscopic confirmation of an ultra-faint galaxy at the epoch of reionization

    Science.gov (United States)

    Hoag, Austin; Bradač, Maruša; Trenti, Michele; Treu, Tommaso; Schmidt, Kasper B.; Huang, Kuang-Han; Lemaux, Brian C.; He, Julie; Bernard, Stephanie R.; Abramson, Louis E.; Mason, Charlotte A.; Morishita, Takahiro; Pentericci, Laura; Schrabback, Tim

    2017-04-01

Within one billion years of the Big Bang, intergalactic hydrogen was ionized by sources emitting ultraviolet and higher energy photons. This was the final phenomenon to globally affect all the baryons (visible matter) in the Universe. It is referred to as cosmic reionization and is an integral component of cosmology. It is broadly expected that intrinsically faint galaxies were the primary ionizing sources due to their abundance in this epoch [1,2]. However, at the highest redshifts (z > 7.5, lookback time 13.1 Gyr), all galaxies with spectroscopic confirmations to date are intrinsically bright and, therefore, not necessarily representative of the general population [3]. Here, we report the unequivocal spectroscopic detection of a low luminosity galaxy at z > 7.5. We detected the Lyman-α emission line at ~10,504 Å in two separate observations with MOSFIRE [4] on the Keck I Telescope and independently with the Hubble Space Telescope's slitless grism spectrograph, implying a source redshift of z = 7.640 ± 0.001. The galaxy is gravitationally magnified by the massive galaxy cluster MACS J1423.8+2404 (z = 0.545), with an estimated intrinsic luminosity of M_AB = -19.6 ± 0.2 mag and a stellar mass of 3.0(+1.5/-0.8) × 10^8 solar masses. Both are an order of magnitude lower than the four other Lyman-α emitters currently known at z > 7.5, making it probably the most distant representative source of reionization found to date.

  12. Epoch of reionization 21 cm forecasting from MCMC-constrained semi-numerical models

    Science.gov (United States)

    Hassan, Sultan; Davé, Romeel; Finlator, Kristian; Santos, Mario G.

    2017-06-01

    The recent low value of the Planck Collaboration XLVII integrated optical depth to Thomson scattering suggests that reionization occurred fairly suddenly, disfavouring extended reionization scenarios. This will have a significant impact on the 21 cm power spectrum. Using a semi-numerical framework, we improve our model from instantaneous to include time-integrated ionization and recombination effects, and find that this leads to a more sudden reionization. It also yields larger H II bubbles that lead to an order of magnitude more 21 cm power on large scales, while suppressing the small-scale ionization power. Local fluctuations in the neutral hydrogen density play the dominant role in boosting the 21 cm power spectrum on large scales, while recombinations are subdominant. We use a Markov chain Monte Carlo approach to constrain our model to observations of the star formation rate functions at z = 6, 7, 8 from Bouwens et al., the Planck Collaboration XLVII optical depth measurements, and the Becker & Bolton ionizing emissivity data at z ∼ 5. We then use this constrained model to perform 21 cm forecasting for the Low Frequency Array, the Hydrogen Epoch of Reionization Array and the Square Kilometre Array in order to determine how well such data can characterize the sources driving reionization. We find that the mock 21 cm power spectrum alone can somewhat constrain the halo mass dependence of ionizing sources, the photon escape fraction and the ionizing amplitude, but combining the mock 21 cm data with other current observations enables us to separately constrain all these parameters. Our framework illustrates how future 21 cm data can play a key role in understanding the sources and topology of reionization as observations improve.
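The Markov chain Monte Carlo constraint step described above can be illustrated with a minimal Metropolis-Hastings sampler. Everything in the sketch below (the toy amplitude model, the mock data, the proposal width) is invented for illustration and is not the authors' semi-numerical framework.

```python
import numpy as np

# Minimal Metropolis-Hastings sampler: constrain the amplitude of a toy
# model against mock data, loosely in the spirit of fitting model
# parameters to observations at several redshifts.
rng = np.random.default_rng(1)

true_amp = 2.0
z = np.array([5.0, 6.0, 7.0, 8.0])
sigma = 0.2
data = true_amp / (1 + z) + sigma * rng.standard_normal(z.size)

def log_like(amp):
    model = amp / (1 + z)
    return -0.5 * np.sum(((data - model) / sigma) ** 2)

amp, chain = 1.0, []
for _ in range(20000):
    prop = amp + 0.1 * rng.standard_normal()       # random-walk proposal
    if np.log(rng.uniform()) < log_like(prop) - log_like(amp):
        amp = prop                                 # accept
    chain.append(amp)

posterior = np.array(chain[5000:])                 # discard burn-in
print(posterior.mean(), posterior.std())           # should bracket true_amp
```

Real analyses would use many parameters, convergence diagnostics and tuned proposals; this only shows the accept/reject mechanics.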

  13. The Egyptian geomagnetic reference field to the Epoch, 2010.0

    Science.gov (United States)

    Deebes, H. A.; Abd Elaal, E. M.; Arafa, T.; Lethy, A.; El Emam, A.; Ghamry, E.; Odah, H.

    2017-06-01

    The present work is a compilation of two tasks within the frame of the project "Geomagnetic Survey & Detailed Geomagnetic Measurements within the Egyptian Territory", funded by the Science and Technology Development Fund agency (STDF). The National Research Institute of Astronomy and Geophysics (NRIAG) has conducted a new extensive land geomagnetic survey that covers the whole Egyptian territory. The field measurements have been made at 3212 points along all the asphalted roads, defined tracks, and ill-defined tracks in Egypt, with a total length of 11,586 km. The measurements cover for the first time new areas such as the south-eastern borders of Egypt, including Halayeb and Shalatin, the Qattara Depression in the western desert, and the new roads between the Farafra and Bahariya oases. A marine geomagnetic survey has also been carried out, for the first time, in Lake Nasser. The Misallat and Abu Simbel geomagnetic observatories have been used to reduce the field data to the epoch 2010.0. During the field measurements, whenever possible, the old stations occupied by previous observers were re-occupied to determine the secular variations at these points. The geomagnetic anomaly maps, the normal geomagnetic field maps with their corresponding secular variation maps, and the normal geomagnetic field equations of the geomagnetic elements (EGRF) with their corresponding secular variation equations are presented. The anomalous sites discovered from the anomaly maps are noted. In addition, a correlation between the International Geomagnetic Reference Field (IGRF) 2010.0 and the Egyptian Geomagnetic Reference Field (EGRF) 2010.0 is given.

  14. THE OPTICAL VARIABILITY OF SDSS QUASARS FROM MULTI-EPOCH SPECTROSCOPY. II. COLOR VARIATION

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Hengxiao; Gu, Minfeng, E-mail: hxguo@shao.ac.cn, E-mail: gumf@shao.ac.cn [Key Laboratory for Research in Galaxies and Cosmology, Shanghai Astronomical Observatory, Chinese Academy of Sciences, 80 Nandan Road, Shanghai 200030 (China)

    2016-05-01

    We investigated the optical/ultraviolet (UV) color variations for a sample of 2169 quasars based on multi-epoch spectroscopy in the Sloan Digital Sky Survey Data Releases 7 (DR7) and 9 (DR9). To correct the systematic difference between DR7 and DR9 due to the different instrumental setups, we produced a correction spectrum by using a sample of F-stars observed in both DR7 and DR9. The correction spectrum was then applied to quasars when comparing the spectra of DR7 with DR9. For each object, the color variation was explored by comparing the spectral index of the continuum power-law fit on the brightest spectrum with that of the faintest one, and also by the shape of their difference spectrum. Among the 1876 quasars with consistent color variations from the two methods, we found that most sources (1755, ∼94%) show the bluer-when-brighter (BWB) trend, while the redder-when-brighter (RWB) trend is detected in only 121 objects (∼6%). The common BWB trend is supported by the composite spectrum constructed from bright spectra, which is bluer than that from faint spectra, and also by the blue composite difference spectrum. The correction spectrum is proven to be highly reliable by comparing the composite spectra from corrected DR9 and original DR7 spectra. Assuming that the optical/UV variability is triggered by fluctuations, the RWB trend can likely be explained if the fluctuations occur first in the outer disk region, and the inner disk region has not yet fully responded while the fluctuations propagate inward. In contrast, the common BWB trend implies that the fluctuations more often happen first in the inner disk region.
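The continuum power-law fit used above to define color variation amounts to a straight-line fit in log-log space. The sketch below uses mock spectra with invented normalizations and indices (not the paper's data) to show how a bright-state and a faint-state spectral index would be measured and compared; in the F_lambda ∝ lambda^alpha convention, a bluer continuum has a more negative alpha.

```python
import numpy as np

# Estimate the continuum spectral index alpha (F_lambda ∝ lambda^alpha)
# by linear regression in log-log space, for mock bright/faint spectra.
rng = np.random.default_rng(2)
lam = np.linspace(4000.0, 9000.0, 500)     # wavelength grid (Angstrom)

def make_spectrum(norm, alpha):
    # Power-law continuum with 1% multiplicative noise.
    return norm * (lam / 5000.0) ** alpha * (1 + 0.01 * rng.standard_normal(lam.size))

def spectral_index(flux):
    slope, _ = np.polyfit(np.log(lam), np.log(flux), 1)
    return slope

bright = make_spectrum(2.0, -1.6)   # steeper (bluer) continuum when bright
faint = make_spectrum(1.0, -1.2)

print(spectral_index(bright), spectral_index(faint))
# bluer-when-brighter: the bright-state index is more negative in F_lambda
```

Real quasar spectra would first require masking emission lines before the continuum fit; that step is omitted here.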

  15. Persistent near-tropical warmth on the Antarctic continent during the early Eocene epoch.

    Science.gov (United States)

    Pross, Jörg; Contreras, Lineth; Bijl, Peter K; Greenwood, David R; Bohaty, Steven M; Schouten, Stefan; Bendle, James A; Röhl, Ursula; Tauxe, Lisa; Raine, J Ian; Huck, Claire E; van de Flierdt, Tina; Jamieson, Stewart S R; Stickley, Catherine E; van de Schootbrugge, Bas; Escutia, Carlota; Brinkhuis, Henk

    2012-08-02

    The warmest global climates of the past 65 million years occurred during the early Eocene epoch (about 55 to 48 million years ago), when the Equator-to-pole temperature gradients were much smaller than today and atmospheric carbon dioxide levels were in excess of one thousand parts per million by volume. Recently the early Eocene has received considerable interest because it may provide insight into the response of Earth's climate and biosphere to the high atmospheric carbon dioxide levels that are expected in the near future as a consequence of unabated anthropogenic carbon emissions. Climatic conditions of the early Eocene 'greenhouse world', however, are poorly constrained in critical regions, particularly Antarctica. Here we present a well-dated record of early Eocene climate on Antarctica from an ocean sediment core recovered off the Wilkes Land coast of East Antarctica. The information from biotic climate proxies (pollen and spores) and independent organic geochemical climate proxies (indices based on branched tetraether lipids) yields quantitative, seasonal temperature reconstructions for the early Eocene greenhouse world on Antarctica. We show that the climate in lowland settings along the Wilkes Land coast (at a palaeolatitude of about 70° south) supported the growth of highly diverse, near-tropical forests characterized by mesothermal to megathermal floral elements including palms and Bombacoideae. Notably, winters were extremely mild (warmer than 10 °C) and essentially frost-free despite polar darkness, which provides a critical new constraint for the validation of climate models and for understanding the response of high-latitude terrestrial ecosystems to increased carbon dioxide forcing.

  16. Sub-horizon evolution of cold dark matter perturbations through dark matter-dark energy equivalence epoch

    International Nuclear Information System (INIS)

    Piattella, O.F.; Martins, D.L.A.; Casarini, L.

    2014-01-01

    We consider a cosmological model of the late universe constituted by standard cold dark matter plus a dark energy component with constant equation of state w and constant effective speed of sound. By neglecting fluctuations in the dark energy component, we obtain an equation describing the evolution of sub-horizon cold dark matter perturbations through the epoch of dark matter-dark energy equality. We explore its analytic solutions and calculate an exact w-dependent correction for the dark matter growth function, logarithmic growth function and growth index parameter through the epoch considered. We test our analytic approximation against the numerical solution and find that the discrepancy is less than 1% during the cosmic evolution up to a = 100.
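Under the stated assumption of negligible dark-energy fluctuations, the sub-horizon growth equation referred to above can be integrated numerically. The sketch below uses the standard textbook form of that equation in a flat wCDM background with assumed parameter values (Om0 = 0.3, w = -0.9); it is a generic illustration, not the paper's exact solution or normalization.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate the sub-horizon CDM growth equation in flat wCDM, neglecting
# dark-energy perturbations:
#   delta'' + (3/a + E'/E) delta' - (3/2) Om0 delta / (a^5 E^2) = 0,
# with E^2(a) = Om0 a^-3 + (1 - Om0) a^(-3(1+w)).
Om0, w = 0.3, -0.9   # assumed illustrative values

def E2(a):
    return Om0 * a**-3 + (1 - Om0) * a**(-3 * (1 + w))

def dlnE_da(a):
    dE2 = -3 * Om0 * a**-4 - 3 * (1 + w) * (1 - Om0) * a**(-4 - 3 * w)
    return dE2 / (2 * E2(a))

def rhs(a, y):
    delta, ddelta = y
    return [ddelta,
            -(3 / a + dlnE_da(a)) * ddelta + 1.5 * Om0 * delta / (a**5 * E2(a))]

# Deep in matter domination the growing mode is delta ∝ a, so start there.
a_i = 1e-2
sol = solve_ivp(rhs, (a_i, 1.0), [a_i, 1.0], rtol=1e-8)
D1 = sol.y[0][-1]
print(D1)   # growth today, suppressed below 1 by dark energy
```

With the delta(a_i) = a_i normalization, D1 < 1 quantifies how much dark energy suppresses growth relative to pure matter domination.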

  17. A Method for Exploring Program and Portfolio Affordability Tradeoffs Under Uncertainty Using Epoch-Era Analysis: A Case Application to Carrier Strike Group Design

    Science.gov (United States)

    2015-04-30

    This research introduces a method to conduct portfolio design for affordability by augmenting Epoch-Era Analysis with aspects of Modern Portfolio Theory (MPT) and elements of the system-of-systems (SoS) design literature. The proposed method is demonstrated in a case study on the design of a carrier strike group.

  18. Basic Geomagnetic Network of the Republic of Croatia 2004 – 2012, with Geomagnetic Field Maps for 2009.5 epoch

    Directory of Open Access Journals (Sweden)

    Mario Brkić

    2013-12-01

    After more than half a century, the scientific book Basic Geomagnetic Network of the Republic of Croatia 2004 – 2012, with Geomagnetic Field Maps for 2009.5 epoch describes the recent geomagnetic field on Croatian territory. A review of the research of the past decade, together with original solutions, makes the book a document of contribution to geodesy and geomagnetism in Croatia. The book's introduction gives an overview of two centuries of history and of the strategic, security, economic and scientific significance of knowing the geomagnetic field on Croatian territory. All the activities related to the updating of geomagnetic information that took place in the last decade marked a big step toward the countries where geomagnetic surveying is a mature scientific and technical discipline, and a scientific contribution to understanding the nature of the Earth's magnetism. The declination, inclination and total intensity maps (along with the normal annual changes) for the epoch 2009.5 are given in the Appendix. The book (ISBN 978-953-293-521-9) is published by the State Geodetic Administration of the Republic of Croatia. Besides the editor-in-chief, M. Brkić, the authors are: E. Vujić, D. Šugar, E. Jungwirth, D. Markovinović, M. Rezo, M. Pavasović, O. Bjelotomić, M. Šljivarić, M. Varga and V. Poslončec-Petrić. The book contains 48 pages and 3 maps, and is published in 200 copies. The CIP record is available in the digital catalogue of the National and University Library in Zagreb under number 861937.

  19. The phenomenon of literature images transformation into musical images and paradigmal images of corresponding epochs (philosophy of history analysis)

    Directory of Open Access Journals (Sweden)

    M. V. Masayev

    2014-01-01

    The author comes to the conclusion that the images of M. A. Sholokhov's novels And Quiet Flows the Don and Virgin Soil Upturned and of the short story The Fate of a Man, having become musical images in I. I. Dzerzhinsky's operas Quiet Flows the Don, Virgin Soil Upturned, The Fate of a Man and Grigoriy Melekhov, turned into real paradigmal symbols of the epochs of the civil war, collectivization and the Great Patriotic War.

  20. Global Peak in Atmospheric Radiocarbon Provides a Potential Definition for the Onset of the Anthropocene Epoch in 1965.

    Science.gov (United States)

    Turney, Chris S M; Palmer, Jonathan; Maslin, Mark A; Hogg, Alan; Fogwill, Christopher J; Southon, John; Fenwick, Pavla; Helle, Gerhard; Wilmshurst, Janet M; McGlone, Matt; Bronk Ramsey, Christopher; Thomas, Zoë; Lipson, Mathew; Beaven, Brent; Jones, Richard T; Andrews, Oliver; Hua, Quan

    2018-02-19

    Anthropogenic activity is now recognised as having profoundly and permanently altered the Earth system, suggesting we have entered a human-dominated geological epoch, the 'Anthropocene'. To formally define the onset of the Anthropocene, a synchronous global signature within geological-forming materials is required. Here we report a series of precisely-dated tree-ring records from Campbell Island (Southern Ocean) that capture peak atmospheric radiocarbon (¹⁴C) resulting from Northern Hemisphere-dominated thermonuclear bomb tests during the 1950s and 1960s. The only alien tree on the island, a Sitka spruce (Picea sitchensis), allows us to seasonally resolve Southern Hemisphere atmospheric ¹⁴C, demonstrating that the 'bomb peak' in this remote and pristine location occurred in the last quarter of 1965 (October-December), coincident with the broader changes associated with the post-World War II 'Great Acceleration' in industrial capacity and consumption. Our findings provide a precisely-resolved potential Global Stratotype Section and Point (GSSP) or 'golden spike', marking the onset of the Anthropocene Epoch.

  1. PROBING THE EPOCH OF PRE-REIONIZATION BY CROSS-CORRELATING COSMIC MICROWAVE AND INFRARED BACKGROUND ANISOTROPIES

    International Nuclear Information System (INIS)

    Atrio-Barandela, F.; Kashlinsky, A.

    2014-01-01

    The epoch of first star formation and the state of the intergalactic medium (IGM) at that time are not directly observable with current telescopes. The radiation from those early sources is now part of the cosmic infrared background (CIB) and, as these sources ionize the gas around them, the IGM plasma would produce faint temperature anisotropies in the cosmic microwave background (CMB) via the thermal Sunyaev-Zeldovich (TSZ) effect. While these TSZ anisotropies are too faint to be detected directly, we show that the cross-correlation of maps of source-subtracted CIB fluctuations from Euclid with suitably constructed microwave maps at different frequencies can probe the physical state of the gas during reionization and test/constrain models of the early CIB sources. We identify the frequency-combined, CMB-subtracted microwave maps from space- and ground-based instruments and show that they can be cross-correlated with the forthcoming all-sky Euclid CIB maps to detect the cross-power at scales ∼5'-60' with signal-to-noise ratios (S/Ns) of up to S/N ∼ 4-8, depending on the contribution to the Thomson optical depth during those pre-reionization epochs (Δτ ≅ 0.05) and the temperature of the IGM (up to ∼10⁴ K). Such a measurement would offer a new window to explore the emergence and physical properties of these first light sources.
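The cross-correlation strategy (extracting a component common to two maps whose individual noise is too large for a direct detection) can be demonstrated on mock flat-sky maps. The map size, noise levels and "shared signal" below are arbitrary illustrations, not the Euclid/CMB configuration of the abstract.

```python
import numpy as np

# Minimal flat-sky cross-power estimate between two maps that share a
# correlated component buried under independent noise in each map.
rng = np.random.default_rng(3)
n = 128
common = rng.standard_normal((n, n))                  # shared signal
map_a = common + 3.0 * rng.standard_normal((n, n))    # map A with its own noise
map_b = common + 3.0 * rng.standard_normal((n, n))    # map B with its own noise

fa, fb = np.fft.fft2(map_a), np.fft.fft2(map_b)
# Mean cross-power per Fourier mode; independent noise averages away,
# leaving the power of the common component.
cross = np.real(fa * np.conj(fb)).mean() / n**2
auto = (np.abs(np.fft.fft2(common)) ** 2).mean() / n**2

print(cross, auto)  # cross-power traces the common component's power
```

Neither auto-spectrum alone would reveal the common signal here, since each map is noise-dominated; the cross-spectrum does.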

  2. A model and solving algorithm of combination planning for weapon equipment based on Epoch-era analysis method

    Science.gov (United States)

    Wang, Meng; Zhang, Huaiqiang; Zhang, Kan

    2017-10-01

    This paper addresses weapon equipment portfolio planning, in which short-term usage demands and long-term development demands must be planned for jointly, and in which the definition of equipment capability demand is inherently fuzzy. The expression of demand is assumed to be an interval number or a discrete number. Using the epoch-era analysis method, a long planning cycle is broken into several short planning cycles with different demand values. A multi-stage stochastic programming model is built that maximizes long-term planning-cycle demand satisfaction under constraints of budget, equipment development time and short-planning-cycle demand. A scenario tree is used to discretize the interval values of the demand, and a genetic algorithm is designed to solve the problem. Finally, a case study demonstrates the feasibility and effectiveness of the proposed model.
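The genetic-algorithm solving step can be sketched on a deliberately tiny portfolio problem. The costs, values, budget and GA settings below are invented for illustration (not the paper's model, which is multi-stage and scenario-based); the sketch only shows selection, crossover and mutation on 0/1 portfolio genomes under a budget constraint.

```python
import numpy as np

# Toy GA: choose a 0/1 portfolio of equipment programs maximizing total
# capability value subject to a budget.
rng = np.random.default_rng(4)
cost = np.array([4.0, 3.0, 5.0, 2.0, 6.0, 1.0])    # invented program costs
value = np.array([7.0, 5.0, 9.0, 3.0, 10.0, 2.0])  # invented capability values
budget = 10.0

def fitness(pop):
    f = pop @ value
    f[pop @ cost > budget] = 0.0        # infeasible portfolios score zero
    return f

pop = rng.integers(0, 2, size=(40, cost.size))
best_val, best = -1.0, None
for _ in range(100):
    f = fitness(pop)
    if f.max() > best_val:              # record the best-so-far solution
        best_val, best = f.max(), pop[np.argmax(f)].copy()
    p = (f + 1e-9) / (f + 1e-9).sum()   # roulette-wheel selection
    parents = pop[rng.choice(len(pop), size=len(pop), p=p)]
    children = parents.copy()
    for i in range(0, len(pop), 2):     # one-point crossover on pairs
        c = rng.integers(1, cost.size)
        children[i, c:], children[i + 1, c:] = parents[i + 1, c:], parents[i, c:]
    flip = rng.random(children.shape) < 0.02        # bit-flip mutation
    pop = np.where(flip, 1 - children, children)

print(best, best_val)   # best feasible portfolio found and its value
```

A multi-stage version would evaluate each genome against every scenario-tree branch and epoch, but the genetic operators are the same.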

  3. What next-generation 21 cm power spectrum measurements can teach us about the epoch of reionization

    International Nuclear Information System (INIS)

    Pober, Jonathan C.; Morales, Miguel F.; Liu, Adrian; McQuinn, Matthew; Parsons, Aaron R.; Dillon, Joshua S.; Hewitt, Jacqueline N.; Tegmark, Max; Aguirre, James E.; Bowman, Judd D.; Jacobs, Daniel C.; Bradley, Richard F.; Carilli, Chris L.; DeBoer, David R.; Werthimer, Dan J.

    2014-01-01

    A number of experiments are currently working toward a measurement of the 21 cm signal from the epoch of reionization (EoR). Whether or not these experiments deliver a detection of cosmological emission, their limited sensitivity will prevent them from providing detailed information about the astrophysics of reionization. In this work, we consider what types of measurements will be enabled by the next generation of larger 21 cm EoR telescopes. To calculate the type of constraints that will be possible with such arrays, we use simple models for the instrument, foreground emission, and the reionization history. We focus primarily on an instrument modeled after the ∼0.1 km² collecting area Hydrogen Epoch of Reionization Array concept design and parameterize the uncertainties with regard to foreground emission by considering different limits to the recently described 'wedge' footprint in k space. Uncertainties in the reionization history are accounted for using a series of simulations that vary the ionizing efficiency and minimum virial temperature of the galaxies responsible for reionization, as well as the mean free path of ionizing photons through the intergalactic medium. Given various combinations of models, we consider the significance of the possible power spectrum detections, the ability to trace the power spectrum evolution versus redshift, the detectability of salient power spectrum features, and the achievable level of quantitative constraints on astrophysical parameters. Ultimately, we find that 0.1 km² of collecting area is enough to ensure a very high significance (≳30σ) detection of the reionization power spectrum in even the most pessimistic scenarios. This sensitivity should allow for meaningful constraints on the reionization history and astrophysical parameters, especially if foreground subtraction techniques can be improved and successfully implemented.

  4. INTERFEROMETRIC MONITORING OF GAMMA-RAY BRIGHT AGNs. I. THE RESULTS OF SINGLE-EPOCH MULTIFREQUENCY OBSERVATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Sang-Sung; Wajima, Kiyoaki; Algaba, Juan-Carlos; Zhao, Guang-Yao; Hodgson, Jeffrey A.; Byun, Do-Young; Kang, Sincheol; Kim, Soon-Wook; Kino, Motoki [Korea Astronomy and Space Science Institute, 776 Daedeok-daero, Yuseong-gu, Daejeon 34055 (Korea, Republic of); Kim, Dae-Won; Park, Jongho; Kim, Jae-Young; Trippe, Sascha [Department of Physics and Astronomy, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826 (Korea, Republic of); Miyazaki, Atsushi [Japan Space Forum, 3-2-1, Kandasurugadai, Chiyoda-ku, Tokyo 101-0062 Japan (Japan); Kim, Jeong-Sook, E-mail: sslee@kasi.re.kr [National Astronomical Observatory of Japan, 2211 Osawa, Mitaka, Tokyo 1818588 (Japan)

    2016-11-01

    We present results of single-epoch very long baseline interferometry (VLBI) observations of gamma-ray bright active galactic nuclei (AGNs) using the Korean VLBI Network (KVN) at the 22, 43, 86, and 129 GHz bands, which are part of a KVN key science program, Interferometric Monitoring of Gamma-Ray Bright AGNs. We selected a total of 34 radio-loud AGNs, of which 30 sources are gamma-ray bright AGNs with flux densities of >6 × 10⁻¹⁰ ph cm⁻² s⁻¹. Single-epoch multifrequency VLBI observations of the target sources were conducted during a 24 hr session on 2013 November 19 and 20. All observed sources were detected and imaged at all frequency bands, with or without a frequency phase transfer technique, which enabled the imaging of 12 faint sources at 129 GHz, except for one source. Many of the target sources are resolved on milliarcsecond scales, yielding a core-jet structure, with the VLBI core dominating the synchrotron emission on milliarcsecond scales. CLEAN flux densities of the target sources are 0.43–28 Jy, 0.32–21 Jy, 0.18–11 Jy, and 0.35–8.0 Jy in the 22, 43, 86, and 129 GHz bands, respectively. Spectra of the target sources become steeper at higher frequencies, with spectral index means of −0.40, −0.62, and −1.00 in the 22–43 GHz, 43–86 GHz, and 86–129 GHz bands, respectively, implying that the target sources become optically thin at higher frequencies (e.g., 86–129 GHz).

  5. The Early Prevention of Obesity in CHildren (EPOCH) Collaboration - an Individual Patient Data Prospective Meta-Analysis

    Directory of Open Access Journals (Sweden)

    Simes John

    2010-11-01

    Background: Efforts to prevent the development of overweight and obesity have increasingly focused early in the life course, as we recognise that both metabolic and behavioural patterns are often established within the first few years of life. Randomised controlled trials (RCTs) of interventions are even more powerful when, with forethought, they are synthesised into an individual patient data (IPD) prospective meta-analysis (PMA). An IPD PMA is a unique research design where several trials are identified for inclusion in an analysis before any of the individual trial results become known and the data are provided for each randomised patient. This methodology minimises the publication and selection bias often associated with a retrospective meta-analysis by allowing hypotheses, analysis methods and selection criteria to be specified a priori. Methods/Design: The Early Prevention of Obesity in CHildren (EPOCH) Collaboration was formed in 2009. The main objective of the EPOCH Collaboration is to determine if early intervention for childhood obesity impacts on body mass index (BMI) z scores at age 18-24 months. Additional research questions will focus on whether early intervention has an impact on children's dietary quality, TV viewing time, duration of breastfeeding and parenting styles. This protocol includes the hypotheses, inclusion criteria and outcome measures to be used in the IPD PMA. The sample size of the combined dataset at final outcome assessment (approximately 1800 infants) will allow greater precision when exploring differences in the effect of early intervention with respect to pre-specified participant- and intervention-level characteristics. Discussion: Finalisation of the data collection procedures and analysis plans will be complete by the end of 2010. Data collection and analysis will occur during 2011-2012 and results should be available by 2013. Trial registration number: ACTRN12610000789066.

  6. THE HYDROGEN EPOCH OF REIONIZATION ARRAY DISH. II. CHARACTERIZATION OF SPECTRAL STRUCTURE WITH ELECTROMAGNETIC SIMULATIONS AND ITS SCIENCE IMPLICATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Ewall-Wice, Aaron; Hewitt, Jacqueline; Neben, Abraham R. [MIT Kavli Institute for Cosmological Physics, Cambridge, MA, 02139 (United States); Bradley, Richard; Dickenson, Roger; Doolittle, Phillip; Egan, Dennis; Hedrick, Mike; Klima, Patricia [National Radio Astronomy Observatory, Charlottesville, VA (United States); Deboer, David; Parsons, Aaron; Ali, Zaki S.; Cheng, Carina; Patra, Nipanjana; Dillon, Joshua S. [Department of Astronomy, University of California, Berkeley, CA (United States); Aguirre, James [Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, PA (United States); Bowman, Judd; Thyagarajan, Nithyanandan [Arizona State University, School of Earth and Space Exploration, Tempe, AZ 85287 (United States); Venter, Mariet [Department of Electrical and Electronic Engineering, Stellenbosch University, Stellenbosch, SA (South Africa); Acedo, Eloy de Lera [Cavendish Laboratory, University of Cambridge, Cambridge (United Kingdom); and others

    2016-11-10

    We use time-domain electromagnetic simulations to determine the spectral characteristics of the Hydrogen Epoch of Reionization Array (HERA) antenna. These simulations are part of a multi-faceted campaign to determine the effectiveness of the dish’s design for obtaining a detection of redshifted 21 cm emission from the epoch of reionization. Our simulations show the existence of reflections between HERA’s suspended feed and its parabolic dish reflector that fall below -40 dB at 150 ns and, for reasonable impedance matches, have a negligible impact on HERA’s ability to constrain EoR parameters. It follows that despite the reflections they introduce, dishes are effective for increasing the sensitivity of EoR experiments at a relatively low cost. We find that electromagnetic resonances in the HERA feed’s cylindrical skirt, which is intended to reduce cross coupling and beam ellipticity, introduce significant power at large delays (-40 dB at 200 ns), which can lead to some loss of measurable Fourier modes and a modest reduction in sensitivity. Even in the presence of this structure, we find that the spectral response of the antenna is sufficiently smooth for delay filtering to contain foreground emission at line-of-sight wave numbers below k∥ ≲ 0.2 h Mpc⁻¹, in the region where the current PAPER experiment operates. Incorporating these results into a Fisher Matrix analysis, we find that the spectral structure observed in our simulations has only a small effect on the tight constraints HERA can achieve on parameters associated with the astrophysics of reionization.
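The delay-filtering argument above rests on a simple Fourier property: a spectrally smooth instrument response keeps foreground power at low delays, while a reflection at delay tau imprints a ripple of period 1/tau across frequency and scatters power out to that delay. The bandpass shape, ripple amplitude and 300 ns reflection delay in the sketch below are toy numbers, not the HERA simulation results.

```python
import numpy as np

# A smooth bandpass vs. the same bandpass with a faint 300 ns reflection
# ripple, compared in delay space after windowing.
freqs = np.linspace(100e6, 200e6, 1024)                  # Hz
smooth = np.exp(-0.5 * ((freqs - 150e6) / 30e6) ** 2)    # smooth response
ripple = smooth * (1 + 1e-4 * np.cos(2 * np.pi * freqs * 300e-9))

window = np.blackman(freqs.size)                         # suppress edge leakage
delays = np.fft.fftshift(np.fft.fftfreq(freqs.size, freqs[1] - freqs[0]))
delay_smooth = np.fft.fftshift(np.fft.fft(smooth * window))
delay_ripple = np.fft.fftshift(np.fft.fft(ripple * window))

def excess(d):
    # Peak power beyond |delay| > 200 ns, relative to the overall peak.
    p = np.abs(d) ** 2
    return p[np.abs(delays) > 200e-9].max() / p.max()

print(excess(delay_smooth), excess(delay_ripple))
# the reflection ripple injects power at high delay; the smooth response does not
```

In an EoR analysis, power scattered past the delay cut in this way contaminates line-of-sight modes that would otherwise be foreground-free.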

  8. A new species of great ape from the late Miocene epoch in Ethiopia.

    Science.gov (United States)

    Suwa, Gen; Kono, Reiko T; Katoh, Shigehiro; Asfaw, Berhane; Beyene, Yonas

    2007-08-23

    With the discovery of Ardipithecus, Orrorin and Sahelanthropus, our knowledge of hominid evolution before the emergence of Pliocene species of Australopithecus has significantly increased, extending the hominid fossil record back to at least 6 million years (Myr) ago. However, because of the dearth of fossil hominoid remains in sub-Saharan Africa spanning the period 12-7 Myr ago, nothing is known of the actual timing and mode of divergence of the African ape and hominid lineages. Most genomic-based studies suggest a late divergence date (5-6 Myr ago and 6-8 Myr ago for the human-chimp and human-gorilla splits, respectively), and some palaeontological and molecular analyses hypothesize a Eurasian origin of the African ape and hominid clade. We report here the discovery and recognition of a new species of great ape, Chororapithecus abyssinicus, from the 10-10.5-Myr-old deposits of the Chorora Formation at the southern margin of the Afar rift. To the best of our knowledge, these are the first fossils of a large-bodied Miocene ape from the African continent north of Kenya. They exhibit a gorilla-sized dentition that combines distinct shearing crests with thick enamel on its 'functional' side cusps. Visualization of the enamel-dentine junction by micro-computed tomography reveals shearing crest features that partly resemble the modern gorilla condition. These features represent genetically based structural modifications probably associated with an initial adaptation to a comparatively fibrous diet. The relatively flat cuspal enamel-dentine junction and thick enamel, however, suggest a concurrent adaptation to hard and/or abrasive food items. The combined evidence suggests that Chororapithecus may be a basal member of the gorilla clade, and that the latter exhibited some amount of adaptive and phyletic diversity around 10-11 Myr ago.

  9. A HIGH-RESOLUTION, MULTI-EPOCH SPECTRAL ATLAS OF PECULIAR STARS INCLUDING RAVE, GAIA , AND HERMES WAVELENGTH RANGES

    International Nuclear Information System (INIS)

    Tomasella, Lina; Munari, Ulisse; Zwitter, Tomaz

    2010-01-01

    We present an Echelle+CCD, high signal-to-noise ratio, high-resolution (R = 20,000) spectroscopic atlas of 108 well-known objects representative of the most common types of peculiar and variable stars. The wavelength interval extends from 4600 to 9400 Å and includes the RAVE, Gaia, and HERMES wavelength ranges. Multi-epoch spectra are provided for the majority of the observed stars. A total of 425 spectra of peculiar stars, which were collected during 56 observing nights between 1998 November and 2002 August, are presented. The spectra are given in FITS format and heliocentric wavelengths, with accurate subtraction of both the sky background and the scattered light. Auxiliary material useful for custom applications (telluric dividers, spectrophotometric stars, flat-field tracings) is also provided. The atlas aims to provide a homogeneous database of the spectral appearance of stellar peculiarities, a tool useful both for classification purposes and for inter-comparison studies. It could also serve in the planning and development of automated classification algorithms designed for RAVE, Gaia, HERMES, and other large-scale spectral surveys. The spectrum of XX Oph is discussed in some detail as an example of the content of the present atlas.

  10. BEAM-FORMING ERRORS IN MURCHISON WIDEFIELD ARRAY PHASED ARRAY ANTENNAS AND THEIR EFFECTS ON EPOCH OF REIONIZATION SCIENCE

    Energy Technology Data Exchange (ETDEWEB)

    Neben, Abraham R.; Hewitt, Jacqueline N.; Dillon, Joshua S.; Goeke, R.; Morgan, E. [Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States); Bradley, Richard F. [Dept. of Electrical and Computer Engineering, University of Virginia, Charlottesville, VA, 22904 (United States); Bernardi, G. [Square Kilometre Array South Africa (SKA SA), Cape Town 7405 (South Africa); Bowman, J. D. [School of Earth and Space Exploration, Arizona State University, Tempe, AZ 85287 (United States); Briggs, F. [Research School of Astronomy and Astrophysics, Australian National University, Canberra, ACT 2611 (Australia); Cappallo, R. J.; Corey, B. E.; Lonsdale, C. J.; McWhirter, S. R. [MIT Haystack Observatory, Westford, MA 01886 (United States); Deshpande, A. A. [Raman Research Institute, Bangalore 560080 (India); Greenhill, L. J. [Harvard-Smithsonian Center for Astrophysics, Cambridge, MA 02138 (United States); Hazelton, B. J.; Morales, M. F. [Department of Physics, University of Washington, Seattle, WA 98195 (United States); Johnston-Hollitt, M. [School of Chemical and Physical Sciences, Victoria University of Wellington, Wellington 6140 (New Zealand); Kaplan, D. L. [Department of Physics, University of Wisconsin–Milwaukee, Milwaukee, WI 53201 (United States); Mitchell, D. A. [CSIRO Astronomy and Space Science (CASS), P.O. Box 76, Epping, NSW 1710 (Australia); and others

    2016-03-20

    Accurate antenna beam models are critical for radio observations aiming to isolate the redshifted 21 cm spectral line emission from the Dark Ages and the Epoch of Reionization (EOR) and unlock the scientific potential of 21 cm cosmology. Past work has focused on characterizing mean antenna beam models using either satellite signals or astronomical sources as calibrators, but antenna-to-antenna variation due to imperfect instrumentation has remained unexplored. We characterize this variation for the Murchison Widefield Array (MWA) through laboratory measurements and simulations, finding typical deviations of the order of ±10%–20% near the edges of the main lobe and in the sidelobes. We consider the ramifications of these results for image- and power spectrum-based science. In particular, we simulate visibilities measured by a 100 m baseline and find that using an otherwise perfect foreground model, unmodeled beam-forming errors severely limit foreground subtraction accuracy within the region of Fourier space contaminated by foreground emission (the “wedge”). This region likely contains much of the cosmological signal, and accessing it will require measurement of per-antenna beam patterns. However, unmodeled beam-forming errors do not contaminate the Fourier space region expected to be free of foreground contamination (the “EOR window”), showing that foreground avoidance remains a viable strategy.
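
    The antenna-to-antenna variation described above can be mimicked with a toy Monte Carlo: perturb the per-element gains of an idealized phased array and compare the resulting power pattern with the ideal array factor. This one-dimensional 16-element sketch uses assumed numbers (element spacing, 5% gain scatter) and is not the MWA beamformer model.

```python
import numpy as np

# Toy Monte Carlo of per-element beam-forming errors (assumed geometry,
# not the MWA tile model): a uniform linear array with randomly
# perturbed element gains.
rng = np.random.default_rng(0)

n_elem = 16
spacing = 1.1                      # element spacing in wavelengths (assumed)
theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
phase = 2j * np.pi * spacing * np.sin(theta)[:, None] * np.arange(n_elem)

ideal = np.abs(np.exp(phase).sum(axis=1)) ** 2            # ideal array factor
gains = 1.0 + 0.05 * rng.standard_normal(n_elem)          # ~5% gain errors
perturbed = np.abs((gains * np.exp(phase)).sum(axis=1)) ** 2

# Deviation of the perturbed power pattern, relative to the beam peak
frac_dev = np.abs(perturbed - ideal) / ideal.max()
print(f"peak fractional deviation: {frac_dev.max():.3f}")
```

    Larger gain and phase scatter, of the kind the paper measures, pushes these deviations toward the ±10%-20% level near the main-lobe edges and in the sidelobes.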

  11. PROBING THE EPOCH OF REIONIZATION WITH THE Lyα FOREST AT z ∼ 4-5

    International Nuclear Information System (INIS)

    Cen Renyue; McDonald, Patrick; Trac, Hy; Loeb, Abraham

    2009-01-01

    The inhomogeneous cosmological reionization process leaves tangible imprints in the intergalactic medium (IGM) down to z ∼ 4-5. The Lyα forest flux power spectrum provides a potentially powerful probe of the epoch of reionization. With the existing Sloan Digital Sky Survey I/II quasar sample, we show that two cosmological reionization scenarios, one completing reionization at z = 6 and the other at z = 9, can be distinguished at the ∼7σ level by utilizing Lyα forest absorption spectra at z = 3.9-4.1 in the absence of other physical processes that may also affect the Lyα flux power spectrum. The difference may not be distinguishable at such high significance after marginalization over other effects, but, in any case, one will need to consider this effect in order to correctly interpret the power spectrum in this redshift range. The redshift range z = 4-5 may provide the best window because there is still enough transmitted flux and there are enough quasars to measure precise statistics of the flux fluctuations, and the IGM still retains a significant amount of memory of reionization.

  12. Modeling the Radio Foreground for Detection of CMB Spectral Distortions from the Cosmic Dawn and the Epoch of Reionization

    Energy Technology Data Exchange (ETDEWEB)

    Sathyanarayana Rao, Mayuri; Subrahmanyan, Ravi; Shankar, N Udaya [Raman Research Institute, C V Raman Avenue, Sadashivanagar, Bangalore 560080 (India); Chluba, Jens, E-mail: mayuris@rri.res.in [Jodrell Bank Centre for Astrophysics, University of Manchester, Oxford Road, M13 9PL (United Kingdom)

    2017-05-01

    Cosmic baryon evolution during the Cosmic Dawn and Reionization results in redshifted 21-cm spectral distortions in the cosmic microwave background (CMB). These encode information about the nature and timing of first sources over redshifts 30–6 and appear at meter wavelengths as a tiny CMB distortion along with the Galactic and extragalactic radio sky, which is orders of magnitude brighter. Therefore, detection requires precise methods to model foregrounds. We present a method of foreground fitting using maximally smooth (MS) functions. We demonstrate the usefulness of MS functions over traditionally used polynomials to separate foregrounds from the Epoch of Reionization (EoR) signal. We also examine the level of spectral complexity in plausible foregrounds using GMOSS, a physically motivated model of the radio sky, and find that they are indeed smooth and can be modeled by MS functions to levels sufficient to discern the vanilla model of the EoR signal. We show that MS functions are loss resistant and robustly preserve EoR signal strength and turning points in the residuals. Finally, we demonstrate that in using a well-calibrated spectral radiometer and modeling foregrounds with MS functions, the global EoR signal can be detected with a Bayesian approach with 90% confidence in 10 minutes’ integration.
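
    The smooth-function fitting idea can be illustrated with a toy sketch: fit a low-order polynomial in log-log space to an assumed power-law foreground and inspect the residuals that a ~100 mK EoR signal would have to stand out against. This is an illustrative stand-in, not the paper's MS-function or GMOSS machinery; the band, amplitude, and spectral index below are assumptions.

```python
import numpy as np

# Toy foreground sketch (assumed values, not the paper's GMOSS model):
# a single power-law foreground fitted by a low-order polynomial in
# log(frequency)-log(temperature) space.
freq = np.linspace(50e6, 100e6, 200)      # Hz, a Cosmic Dawn band
T_fg = 3000.0 * (freq / 75e6) ** -2.5     # K, assumed power-law foreground

coeffs = np.polyfit(np.log10(freq), np.log10(T_fg), deg=3)
T_fit = 10 ** np.polyval(coeffs, np.log10(freq))

resid_mK = (T_fg - T_fit) * 1e3           # residuals in millikelvin
print(f"max |residual| = {np.abs(resid_mK).max():.2e} mK")
```

    For an exact power law the log-log fit is essentially perfect; the paper's point is that physically plausible foregrounds remain smooth enough for maximally smooth functions to fit them to below the EoR signal level without absorbing the signal itself.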

  13. Multi-epoch analysis of the X-ray spectrum of the active galactic nucleus in NGC 5506

    Science.gov (United States)

    Sun, Shangyu; Guainazzi, Matteo; Ni, Qingling; Wang, Jingchun; Qian, Chenyang; Shi, Fangzheng; Wang, Yu; Bambi, Cosimo

    2018-05-01

    We present a multi-epoch X-ray spectroscopy analysis of the nearby narrow-line Seyfert I galaxy NGC 5506. For the first time, spectra taken by Chandra, XMM-Newton, Suzaku, and NuSTAR - covering the 2000-2014 time span - are analyzed simultaneously, using state-of-the-art models to describe reprocessing of the primary continuum by optically thick matter in the AGN environment. The main goal of our study is determining the spin of the supermassive black hole (SMBH). The nuclear X-ray spectrum is photoelectrically absorbed by matter with column density ≃ 3 × 10^22 cm^-2. A soft excess is present at energies lower than the photoelectric cut-off. Both photo-ionized and collisionally ionized components are required to fit it. This component is constant over the time-scales probed by our data. The spectrum at energies higher than 2 keV is variable. We propose that its evolution could be driven by flux-dependent changes in the geometry of the innermost regions of the accretion disk. The black hole spin in NGC 5506 is constrained to be 0.93 ± 0.04 at 90% confidence level for one interesting parameter.

  14. Female fertility following dose-adjusted EPOCH-R chemotherapy in primary mediastinal B-cell lymphomas.

    Science.gov (United States)

    Gharwan, Helen; Lai, Catherine; Grant, Cliona; Dunleavy, Kieron; Steinberg, Seth M; Shovlin, Margaret; Fojo, Tito; Wilson, Wyndham H

    2016-07-01

    We assessed fertility/gonadal function in premenopausal women treated with dose-adjusted EPOCH-Rituximab for untreated primary mediastinal B-cell lymphoma (PMBL). Eligible patients were ≤ 50 years old and premenopausal. Serial reproductive histories were obtained and hormonal assays were performed on serum samples before treatment, at the end of treatment, and 4-18 months later. Twenty-eight eligible women had a median age (range) of 31 (21-50) years and were followed a median of 7.3 years. Of 23 patients who completed a questionnaire, 19 (83%) were and four were not menstruating prior to chemotherapy. Amenorrhea developed in 12 patients during chemotherapy. At > 1-year follow-up, 14/19 (74%) patients were menstruating, all years old, and six (43%) of these patients delivered healthy children. Hormonal assays showed ovarian dysfunction during chemotherapy in all patients, with varying recovery at 4-18 months after treatment. Fertility was preserved in most women, with ovarian failure confined to patients > 40 years old.

  15. Sun in the Epoch of "Lowered" Solar Activity: the Comparative Analysis of the Current 24 Solar Cycle and Past Authentic Low Cycles

    Science.gov (United States)

    Vitaly, Ishkov

    A reliable series of relative sunspot numbers (14 solar cycles, 165 years) points to a single scenario of solar activity: the alternation of epochs of "increased" (cycles 18-22) and "lowered" (cycles 12-16 and 24-...) solar activity, with periods of solar magnetic field reconstruction in the sunspot-formation zone (cycles 11, 12, 23) marking the transition from one epoch to another. The regime of magnetic field production changes significantly in these periods, providing stable conditions of solar activity for the subsequent five cycles. Space-based solar research has made it possible to investigate quite fully the characteristics and parameters of the solar cycles of the epoch of "increased" solar activity (cycles 20-22) and of the reconstruction period (cycles 22-23) leading to the epoch of "lowered" solar activity (cycles 24-...). In this scenario, solar cycle 24 is the first cycle of the second epoch of "lowered" solar activity, so its development and characteristics should broadly follow those of the low solar cycles (12, 14, and 16). In the current cycle, sunspot-forming activity is reduced: the average areas of sunspot groups correspond to values typical of an epoch of "lowered" solar activity, the average magnetic field in sunspot umbrae has decreased to approximately 700 gauss, and only 4 very large sunspot groups (≥1500 mvh) have been observed so far. Flare activity is also substantially reduced: over the course of the current cycle there have been 368 M-class and 32 X-class flares, of which only 2 exceeded class X5. Solar proton events have been predominantly of small intensity; only 5 reached ≥100 pfu (S2) and 4 reached ≥1000 pfu (S3). The first five years of cycle 24's evolution confirm this assumption and the possibility of giving a qualitative forecast of its evolution and development.

  16. The Epoch of Reionization

    NARCIS (Netherlands)

    Zaroubi, Saleem

    2013-01-01

    The Universe's dark ages end with the formation of the first generation of galaxies. These objects start emitting ultraviolet radiation that carves out ionized regions around them. After a sufficient number of ionizing sources have formed, the ionized fraction of the gas in the Universe rapidly

  17. The Epoch Pilot Program.

    Science.gov (United States)

    Uhlenberg, Donald M.; Molenaar, Richard A.

    1982-01-01

    Describes a program for high school students who are between their junior and senior years which provides an opportunity to take part in aviation courses at the University of North Dakota. Students take courses leading to a private pilot license, and earn college credit for their efforts. (JN)

  18. Recombination epoch revisited

    International Nuclear Information System (INIS)

    Krolik, J.H.

    1989-01-01

    Previous studies of cosmological recombination have shown that this process produces as a by-product a highly superthermal population of Ly-alpha photons which retard completion of recombination. Cosmological redshifting was thought to determine the frequency distribution of the photons, while two-photon decay of hydrogen's 2s state was thought to control their numbers. It is shown here that frequency diffusion due to photon scattering dominates the cosmological redshift in the frequency range near line center, which fixes the ratio of ground state to excited state populations, while incoherent scattering into the far-red damping wing effectively destroys Ly-alpha photons at a rate which is competitive with two-photon decay. The former effect tends to hold back recombination, while the latter tends to accelerate it; the net result depends on cosmological parameters, particularly the combination Ω_b h/√(2q_0), where Ω_b is the fraction of the critical density provided by baryons. 18 references

  19. Physicists epoch and personalities

    CERN Document Server

    Feinberg, E L; Leonidov, A V

    2011-01-01

    The book is a collection of memoirs on famous Soviet physicists of the 20th century, such as Tamm, Vavilov, Sakharov, Landau and others. The narrative is situated within a remarkably well-described historical, cultural and social context. Of special interest are the chapters devoted to Soviet and German atomic projects.

  20. First Results from the Lyman Alpha Galaxies in the Epoch of Reionization (LAGER) Survey: Cosmological Reionization at z ∼ 7

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, Zhen-Ya; Jiang, Chunyan [CAS Key Laboratory for Research in Galaxies and Cosmology, Shanghai Astronomical Observatory, Shanghai 200030 (China); Wang, Junxian; Hu, Weida; Kong, Xu [CAS Key Laboratory for Research in Galaxies and Cosmology, Department of Astronomy, University of Science and Technology of China, Hefei, Anhui 230026 (China); Rhoads, James; Malhotra, Sangeeta; Gonzalez, Alicia [School of Earth and Space Exploration, Arizona State University, Tempe, AZ 85287 (United States); Infante, Leopoldo; Galaz, Gaspar; Barrientos, L. Felipe [Institute of Astrophysics and Center for Astroengineering, Pontificia Universidad Catolica de Chile, Santiago 7820436 (Chile); Walker, Alistair R. [Cerro Tololo Inter-American Observatory, Casilla 603, La Serena (Chile); Jiang, Linhua [The Kavli Institute for Astronomy and Astrophysics, Peking University, Beijing 100871 (China); Hibon, Pascale [European Southern Observatory, Alonso de Cordova 3107, Casilla 19001, Santiago (Chile); Zheng, XianZhong, E-mail: zhengzy@shao.ac.cn, E-mail: linfante@astro.puc.cl, E-mail: jxw@ustc.edu.cn, E-mail: Sangeeta.Malhotra@asu.edu, E-mail: James.Rhoads@asu.edu [Purple Mountain Observatory, Chinese Academy of Sciences, Nanjing 210008 (China)

    2017-06-20

    We present the first results from the ongoing Lyman Alpha Galaxies in the Epoch of Reionization (LAGER) project, which is the largest narrowband survey for z ∼ 7 galaxies to date. Using a specially built narrowband filter NB964 for the superb large-area Dark Energy Camera (DECam) on the NOAO/CTIO 4 m Blanco telescope, LAGER has collected 34 hr NB964 narrowband imaging data in the 3 deg^2 COSMOS field. We have identified 23 Ly α Emitter candidates at z = 6.9 in the central 2-deg^2 region, where DECam and public COSMOS multi-band images exist. The resulting luminosity function (LF) can be described as a Schechter function modified by a significant excess at the bright end (four galaxies with L_Lyα ∼ 10^43.4±0.2 erg s^−1). The number density at L_Lyα ∼ 10^43.4±0.2 erg s^−1 is little changed from z = 6.6, while at fainter L_Lyα it is substantially reduced. Overall, we see a fourfold reduction in Ly α luminosity density from z = 5.7 to z = 6.9. Combined with a more modest evolution of the continuum UV luminosity density, this suggests a factor of ∼3 suppression of Ly α by radiative transfer through the z ∼ 7 intergalactic medium (IGM). It indicates an IGM neutral fraction of x_HI ∼ 0.4–0.6 (assuming Ly α velocity offsets of 100–200 km s^−1). The changing shape of the Ly α LF between z ≲ 6.6 and z = 6.9 supports the hypothesis of ionized bubbles in a patchy reionization at z ∼ 7.
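
    The luminosity function discussed above is a Schechter form; a minimal sketch of evaluating it shows why the exponential cutoff makes a bright-end excess noteworthy. The parameter values below are illustrative placeholders, not the fitted LAGER z = 6.9 values.

```python
import numpy as np

# Hedged sketch of a Schechter luminosity function, the parametric form
# the abstract modifies with a bright-end excess.  Parameters are
# placeholders, not the survey's fitted values.
def schechter(L, phi_star, L_star, alpha):
    """Number density per unit luminosity, dn/dL."""
    x = L / L_star
    return (phi_star / L_star) * x ** alpha * np.exp(-x)

L = np.logspace(42.0, 44.0, 50)                # erg/s, Ly-alpha luminosities
phi = schechter(L, phi_star=1e-3, L_star=10**43.0, alpha=-1.8)

# The exp(-L/L*) factor makes the bright end fall off steeply, which is
# why an observed excess of L ~ 10^43.4 erg/s emitters stands out.
print(phi[0], phi[-1])
```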

  1. Polarization leakage in epoch of reionization windows - III. Wide-field effects of narrow-field arrays

    Science.gov (United States)

    Asad, K. M. B.; Koopmans, L. V. E.; Jelić, V.; de Bruyn, A. G.; Pandey, V. N.; Gehlot, B. K.

    2018-05-01

    Leakage of polarized Galactic diffuse emission into total intensity can potentially mimic the 21-cm signal coming from the epoch of reionization (EoR), as both of them might have fluctuating spectral structure. Although we are sensitive to the EoR signal only in small fields of view, chromatic side-lobes from further away can contaminate the inner region. Here, we explore the effects of leakage into the `EoR window' of the cylindrically averaged power spectra (PS) within wide fields of view using both observation and simulation of the 3C196 and North Celestial Pole (NCP) fields, two observing fields of the LOFAR-EoR project. We present the polarization PS of two one-night observations of the two fields and find that the NCP field has higher fluctuations along frequency, and consequently exhibits more power at high-k∥ that could potentially leak to Stokes I. Subsequently, we simulate LOFAR observations of Galactic diffuse polarized emission based on a model to assess what fraction of polarized power leaks into Stokes I because of the primary beam. We find that the rms fractional leakage over the instrumental k-space is 0.35 per cent in the 3C196 field and 0.27 per cent in the NCP field, and it does not change significantly within the diameters of 15°, 9°, and 4°. Based on the observed PS and simulated fractional leakage, we show that a similar level of leakage into Stokes I is expected in the 3C196 and NCP fields, and the leakage can be considered to be a bias in the PS.

  2. Astronomical tuning of the end-Permian extinction and the Early Triassic Epoch of South China and Germany

    Science.gov (United States)

    Li, Mingsong; Ogg, James; Zhang, Yang; Huang, Chunju; Hinnov, Linda; Chen, Zhong-Qiang; Zou, Zhuoyan

    2016-05-01

    The timing of the end-Permian mass extinction and subsequent prolonged recovery during the Early Triassic Epoch can be established from astronomically controlled climate cycles recorded in continuous marine sedimentary sections. Astronomical-cycle tuning of spectral gamma-ray logs from biostratigraphically-constrained cyclic stratigraphy through marine sections at Meishan, Chaohu, Daxiakou and Guandao in South China yields an integrated time scale for the Early Triassic, which is consistent with scaling of magnetostratigraphy from climatic cycles in continental deposits of the Germanic Basin. The main marine mass extinction interval at Meishan is constrained to less than 40% of a 100-kyr (kilo-year) cycle (i.e., less than 40 kyr). Marine reptiles in the Mesozoic at Chaohu that are considered to represent a significant recovery of marine ecosystems did not appear until 4.7 myr (million years) after the end-Permian extinction. The durations of the Griesbachian, Dienerian, Smithian and Spathian substages, including the uncertainty in placement of widely used conodont biostratigraphic datums for their boundaries, are 1.4 ± 0.1, 0.6 ± 0.1, 1.7 ± 0.1 and 1.4 ± 0.1 myr, implying a total span for the Early Triassic of 5.1 ± 0.1 myr. Therefore, relative to an assigned 251.902 ± 0.024 Ma for the Permian-Triassic boundary from the Meishan GSSP, the ages for these substage boundaries are 250.5 ± 0.1 Ma for base Dienerian, 249.9 ± 0.1 Ma for base Smithian (base of Olenekian stage), 248.2 ± 0.1 Ma for base Spathian, and 246.8 ± 0.1 Ma for the base of the Anisian Stage. This astronomically calibrated timescale provides rates for the recurrent carbon isotope excursions and for trends in sediment accumulation through the Early Triassic of the studied sections in South China.
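
    The substage-boundary ages quoted in the abstract are simple differences from the Permian-Triassic boundary age, so the arithmetic can be checked directly:

```python
# Check of the stage-boundary arithmetic quoted in the abstract: subtract
# the astronomically tuned substage durations from the Permian-Triassic
# boundary age (251.902 Ma, Meishan GSSP).
ptb_ma = 251.902                      # Permian-Triassic boundary, Ma
durations_myr = {                     # substage durations from the tuning
    "Griesbachian": 1.4,
    "Dienerian": 0.6,
    "Smithian": 1.7,
    "Spathian": 1.4,
}

age = ptb_ma
boundaries = {}
for substage, dur in durations_myr.items():
    age -= dur
    boundaries[f"top of {substage}"] = round(age, 1)

total = round(sum(durations_myr.values()), 1)
print(boundaries)   # 250.5, 249.9, 248.2, 246.8 Ma
print(total)        # 5.1 Myr span for the Early Triassic
```

    The computed ages reproduce the quoted 250.5, 249.9, 248.2, and 246.8 Ma boundaries and the 5.1 Myr total span.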

  3. EVOLUTION IN THE H I GAS CONTENT OF GALAXY GROUPS: PRE-PROCESSING AND MASS ASSEMBLY IN THE CURRENT EPOCH

    Energy Technology Data Exchange (ETDEWEB)

    Hess, Kelley M. [Astrophysics, Cosmology and Gravity Centre (ACGC), Department of Astronomy, University of Cape Town, Rondebosch 7701 (South Africa); Wilcots, Eric M., E-mail: hess@ast.uct.ac.za, E-mail: ewilcots@astro.wisc.edu [Department of Astronomy, University of Wisconsin-Madison, Madison, WI 53706 (United States)

    2013-11-01

    We present an analysis of the neutral hydrogen (H I) content and distribution of galaxies in groups as a function of their parent dark matter halo mass. The Arecibo Legacy Fast ALFA survey α.40 data release allows us, for the first time, to study the H I properties of over 740 galaxy groups in the volume of sky common to the Sloan Digital Sky Survey (SDSS) and ALFALFA surveys. We assigned ALFALFA H I detections a group membership based on an existing magnitude/volume-limited SDSS Data Release 7 group/cluster catalog. Additionally, we assigned group 'proximity' membership to H I detected objects whose optical counterpart falls below the limiting optical magnitude—thereby not contributing substantially to the estimate of the group stellar mass, but significantly to the total group H I mass. We find that only 25% of the H I detected galaxies reside in groups or clusters, in contrast to approximately half of all optically detected galaxies. Further, we plot the relative positions of optical and H I detections in groups as a function of parent dark matter halo mass to reveal strong evidence that H I is being processed in galaxies as a result of the group environment: as optical membership increases, groups become increasingly deficient of H I rich galaxies at their center and the H I distribution of galaxies in the most massive groups starts to resemble the distribution observed in comparatively more extreme cluster environments. We find that the lowest H I mass objects lose their gas first as they are processed in the group environment, and it is evident that the infall of gas rich objects is important to the continuing growth of large scale structure at the present epoch, replenishing the neutral gas supply of groups. Finally, we compare our results to those of cosmological simulations and find that current models cannot simultaneously predict the H I selected halo occupation distribution for both low and high mass halos.

  4. Sequential Probability Ratio Test for Collision Avoidance Maneuver Decisions Based on a Bank of Norm-Inequality-Constrained Epoch-State Filters

    Science.gov (United States)

    Carpenter, J. R.; Markley, F. L.; Alfriend, K. T.; Wright, C.; Arcido, J.

    2011-01-01

    Sequential probability ratio tests explicitly allow decision makers to incorporate false alarm and missed detection risks, and are potentially less sensitive to modeling errors than a procedure that relies solely on a probability of collision threshold. Recent work on constrained Kalman filtering has suggested an approach to formulating such a test for collision avoidance maneuver decisions: a filter bank with two norm-inequality-constrained epoch-state extended Kalman filters. One filter models the null hypothesis that the miss distance is inside the combined hard body radius at the predicted time of closest approach, and one filter models the alternative hypothesis. The epoch-state filter developed for this method explicitly accounts for any process noise present in the system. The method appears to work well using a realistic example based on an upcoming highly-elliptical orbit formation flying mission.
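
    The decision logic of a sequential probability ratio test can be conveyed with a minimal Wald test between two Gaussian hypotheses. This sketch omits the paper's Kalman filter bank and norm-inequality constraints; the thresholds follow Wald's standard approximations, and all numbers are illustrative.

```python
import math
import random

# Generic Wald sequential probability ratio test between two Gaussian
# hypotheses: a simplified stand-in for the filter-bank formulation in
# the paper (no Kalman filtering here).
def sprt(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
    """Return 'H0', 'H1', or 'undecided' after consuming samples."""
    upper = math.log((1 - beta) / alpha)      # accept H1 above this
    lower = math.log(beta / (1 - alpha))      # accept H0 below this
    llr = 0.0
    for x in samples:
        # log-likelihood ratio increment for N(mu1, sigma) vs N(mu0, sigma)
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= upper:
            return "H1"
        if llr <= lower:
            return "H0"
    return "undecided"

random.seed(1)
data = [random.gauss(5.0, 1.0) for _ in range(200)]   # truth: mean = 5
print(sprt(data, mu0=0.0, mu1=5.0, sigma=1.0))
```

    The appeal for maneuver decisions is that the alpha and beta parameters directly encode the tolerated false-alarm and missed-detection risks, rather than a single probability threshold.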

  5. The selection of royal figures in the image of power during the Palaiologan epoch: Byzantium - Serbia - Bulgaria

    Directory of Open Access Journals (Sweden)

    Vojvodić Dragan

    2009-01-01

    The preserved presentations of the Byzantine basileis of the XIII, XIV and XV centuries show that the creators of the late Byzantine monarchical portraits adhered to certain traditional rules when selecting the personages from the ruling house, which they were to portray. Defining which figures were to be depicted in the portrayal of power depended to a large extent on the changing circumstances and events in the imperial house. However, at the same time this was also based on a significantly more profound conception that rested on principles that had evolved in the course of a long history. The understanding of who could personify power was refracted through the prism of ideology and reflected in carefully shaped iconographic matrices. The omission of the images of certain members of the ruler's house, just as much as their inclusion, carried a certain meaning, as did the hierarchical arrangement of those who were portrayed. Generally speaking, this depended on the degree of their kinship with the sovereign, their sex, titles or dignities, and the connection of the members of the dynasty with the emperor's particular marriage. Therefore, one can rather clearly distinguish certain constants, if not rules, according to which some figures were omitted and others included, and the specific changes that occurred from the end of the Middle Byzantine period till the fall of the Empire. The development of a unique kind of feudalism played a particular role in the specific characteristics in determining who was to appear in the monarchical portraits of the Palaiologan epoch in Byzantium and the states in its neighbourhood. As the preserved portrait ensembles and known written testimonies indicate, we find that the images of the rulers' daughters did not feature in presentations of the 'emperors of the Romans' from the Late Byzantine period. In the Palaiologan epoch, they did not participate in the governing of the state nor were they taken into

  6. The variation of the baryon-to-photon ratio during different cosmological epochs due to decay and annihilation of dark matter

    International Nuclear Information System (INIS)

    Zavarygin, E O; Ivanchik, A V

    2015-01-01

    An influence of annihilation and decay of the dark matter particles on the baryon-to-photon ratio has been studied for different cosmological epochs. We consider different parameter values of the dark matter particles, such as mass, annihilation cross section, lifetime and so on. The obtained results are compared with the data which come from the Big Bang nucleosynthesis calculation and from the analysis of the anisotropy of the cosmic microwave background radiation. It has been shown that the modern value of the dark matter density Ω_CDM = 0.26 is enough to provide a variation of the baryon-to-photon ratio up to Δη/η ∼ 0.01-1 for decay of the dark matter particles, but it also leads to an excess of the diffuse gamma ray background. We use the observational data on the diffuse gamma ray background in order to determine our constraints on the model of the dark matter particle decay and on the corresponding variation of the baryon-to-photon ratio: Δη/η ≲ 10^-5. It has been shown that the variation of the baryon-to-photon ratio caused by the annihilation of the dark matter particles is negligible during the cosmological epochs from Big Bang nucleosynthesis to the present epoch. (paper)
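
    For orientation, the present-day baryon-to-photon ratio η = n_b/n_γ follows from standard background quantities. The sketch below uses commonly quoted approximations (assumed values, not taken from the paper) for the photon and baryon number densities today.

```python
# Back-of-envelope estimate of the baryon-to-photon ratio eta = n_b/n_gamma.
# Photon number density today, from the CMB temperature T = 2.725 K:
#   n_gamma ~ 410.7 photons/cm^3.
# Baryon number density from Omega_b h^2 via the critical density:
#   n_b ~ 1.123e-5 * (Omega_b h^2) protons/cm^3.
# Both coefficients are standard approximations, not results of the paper.
omega_b_h2 = 0.0224          # assumed Planck-like value

n_gamma = 410.7              # cm^-3
n_b = 1.123e-5 * omega_b_h2  # cm^-3

eta = n_b / n_gamma
print(f"eta ~ {eta:.2e}")    # of order 6e-10
```

    A Δη/η ∼ 10^-5 constraint on decaying dark matter, as in the abstract, thus corresponds to an absolute shift of only ∼10^-14 in η.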

  7. End-of-treatment and serial PET imaging in primary mediastinal B-cell lymphoma following dose-adjusted-EPOCH-R: A paradigm shift in clinical decision making.

    Science.gov (United States)

    Melani, Christopher; Advani, Ranjana; Roschewski, Mark; Walters, Kelsey M; Chen, Clara C; Baratto, Lucia; Ahlman, Mark A; Miljkovic, Milos D; Steinberg, Seth M; Lam, Jessica; Shovlin, Margaret; Dunleavy, Kieron; Pittaluga, Stefania; Jaffe, Elaine S; Wilson, Wyndham H

    2018-05-10

    Dose-adjusted-EPOCH-R obviates the need for radiotherapy in most patients with primary mediastinal B-cell lymphoma. End-of-treatment PET, however, does not accurately identify patients at risk of treatment failure, thereby confounding clinical decision making. To define the role of PET in primary mediastinal B-cell lymphoma following dose-adjusted-EPOCH-R, we extended enrollment and follow-up on our published phase II trial and independent series. Ninety-three patients received dose-adjusted-EPOCH-R without radiotherapy. End-of-treatment PET was performed in 80 patients, of whom 57 received 144 serial scans. One nuclear medicine physician from each institution blindly reviewed all scans from their respective institution. End-of-treatment PET was negative (Deauville 1-3) in 55 (69%) patients with one treatment failure (8-year event-free and overall survival of 96.0% and 97.7%). Among 25 (31%) patients with a positive (Deauville 4-5) end-of-treatment PET, there were 5 (20%) treatment failures (8-year event-free and overall survival of 71.1% and 84.3%). Linear regression analysis of serial scans showed a significant decrease in SUVmax in positive end-of-treatment PET non-progressors compared to an increase in treatment failures. Among 6 treatment failures, the median end-of-treatment SUVmax was 15.4 (range, 1.9-21.3) and 4 achieved long-term remission with salvage therapy. Virtually all patients with a negative end-of-treatment PET following dose-adjusted-EPOCH-R achieved durable remissions and should not receive radiotherapy. Among patients with a positive end-of-treatment PET, only 5/25 (20%) had treatment failure. Serial PET imaging distinguished end-of-treatment PET positive patients without treatment failure, thereby reducing unnecessary radiotherapy by 80%, and should be considered in all patients with an initial positive PET following dose-adjusted-EPOCH-R (NCT00001337). Copyright © 2018, Ferrata Storti Foundation.

  8. VIRIAL BLACK HOLE MASS ESTIMATES FOR 280,000 AGNs FROM THE SDSS BROADBAND PHOTOMETRY AND SINGLE-EPOCH SPECTRA

    Energy Technology Data Exchange (ETDEWEB)

    Kozłowski, Szymon, E-mail: simkoz@astrouw.edu.pl [Warsaw University Observatory, Al. Ujazdowskie, 4 00-478 Warszawa (Poland)

    2017-01-01

    We use the Sloan Digital Sky Survey (SDSS) Quasar Data Release 12 (DR12Q), containing nearly 300,000 active galactic nuclei (AGNs), to calculate the monochromatic luminosities at 5100, 3000, and 1350 Å, derived from the broadband extinction-corrected SDSS magnitudes. After matching these sources to their counterparts from the SDSS Quasar Data Release 7 (DR7Q), we find very high correlations between our luminosities and DR7Q spectra-based luminosities with minute mean offsets (∼0.01 dex) and dispersions of differences of 0.11, 0.10, and 0.12 dex, respectively, across a luminosity range of 2.5 dex. We then estimate the black hole (BH) masses of the AGNs using the broad line region radius–disk luminosity relations and the FWHM of the Mg ii and C iv emission lines, to provide a catalog of 283,033 virial BH mass estimates (132,451 for Mg ii, 213,071 for C iv, and 62,489 for both) along with the estimates of the bolometric luminosity and Eddington ratio for 0.1 <  z  < 5.5 and for roughly a quarter of the sky covered by SDSS. The BH mass estimates from Mg ii turned out to be closely matched to the ones from DR7Q with a dispersion of differences of 0.34 dex across a BH mass range of ∼2 dex. We uncovered a bias in the derived C iv FWHMs from DR12Q as compared to DR7Q, which we correct empirically. The C iv BH mass estimates should be used with caution because the C iv line is known to cause problems in the estimation of BH mass from single-epoch spectra. Finally, after the FWHM correction, the AGN BH mass estimates from C iv closely match the DR7Q ones (with a dispersion of 0.28 dex), and more importantly the Mg ii and C iv BH masses agree internally with a mean offset of 0.07 dex and a dispersion of 0.39 dex.
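
    Single-epoch virial estimates of the kind described above combine a continuum luminosity with a broad-line FWHM. The sketch below shows the generic functional form; the calibration coefficients a and b are illustrative placeholders, not the Mg II or C IV calibrations used to build the catalog.

```python
import math

# Hedged sketch of a single-epoch virial black hole mass estimate of the
# generic form log10(M_BH/M_sun) = a + b*log10(L/1e44) + 2*log10(FWHM/1000).
# The coefficients below are placeholders for illustration only; real
# calibrations differ by emission line and should be taken from the
# literature.
def virial_log_mbh(L_erg_s, fwhm_kms, a=6.86, b=0.5):
    """log10 of the virial black hole mass in solar masses."""
    return (a
            + b * math.log10(L_erg_s / 1e44)
            + 2.0 * math.log10(fwhm_kms / 1000.0))

# Example: a luminous quasar with a 4000 km/s broad line
log_m = virial_log_mbh(L_erg_s=1e45, fwhm_kms=4000.0)
print(f"log10(M_BH/M_sun) = {log_m:.2f}")
```

    The quadratic dependence on FWHM is why the C IV FWHM bias uncovered between DR12Q and DR7Q propagates directly into the mass estimates and must be corrected.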

  9. THE IMPACT OF THE IONOSPHERE ON GROUND-BASED DETECTION OF THE GLOBAL EPOCH OF REIONIZATION SIGNAL

    Energy Technology Data Exchange (ETDEWEB)

    Sokolowski, Marcin; Wayth, Randall B.; Tremblay, Steven E.; Tingay, Steven J.; Waterson, Mark; Tickner, Jonathan; Emrich, David; Schlagenhaufer, Franz; Kenney, David; Padhi, Shantanu, E-mail: marcin.sokolowski@curtin.edu.au [International Centre for Radio Astronomy Research, Curtin University, G.P.O Box U1987, Perth, WA 6845 (Australia)

    2015-11-01

    The redshifted 21 cm line of neutral hydrogen (H i), potentially observable at low radio frequencies (∼50–200 MHz), is a promising probe of the physical conditions of the intergalactic medium during Cosmic Dawn and the Epoch of Reionization (EoR). The sky-averaged H i signal is expected to be extremely weak (∼100 mK) in comparison to the Galactic foreground emission (∼10^4 K). Moreover, the sky-averaged spectra measured by ground-based instruments are affected by chromatic propagation effects (∼tens of kelvin) originating in the ionosphere. We analyze data collected with the upgraded Broadband Instrument for Global Hydrogen Reionization Signal system deployed at the Murchison Radio-astronomy Observatory to assess the significance of ionospheric effects on the detection of the global EoR signal. The ionospheric effects identified in these data are, particularly during nighttime, dominated by absorption and emission. We measure some properties of the ionosphere, such as the electron temperature (T_e ≈ 470 K at nighttime) and the magnitude and variability of the optical depth (τ_100MHz ≈ 0.01 and δτ ≈ 0.005 at nighttime). According to the results of a statistical test applied to a large data sample, very long integrations (∼100 hr collected over approximately 2 months) lead to increased signal-to-noise ratio even in the presence of ionospheric variability. This is further supported by the structure of the power spectrum of the sky temperature fluctuations, which has flicker noise characteristics at frequencies ≳10^−5 Hz, but becomes flat below ≈10^−5 Hz. Hence, we conclude that the stochastic error introduced by the chromatic ionospheric effects tends to zero on average. Therefore, the ionospheric effects and fluctuations are not fundamental impediments preventing ground-based instruments from integrating down to the precision required by global EoR experiments, provided that the ionospheric contribution is
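    The nighttime numbers quoted above (electron temperature around 470 K, optical depth around 0.01) can be plugged into a single-slab absorption/emission model to gauge the size of the ionospheric contribution. A minimal sketch of that model, not the authors' pipeline:

    ```python
    import math

    def observed_sky_temperature(t_sky, tau, t_e):
        """Sky temperature seen through an absorbing and emitting ionosphere.

        Single-slab model: the incoming brightness temperature t_sky is
        attenuated by exp(-tau) while the slab adds its own thermal emission
        t_e * (1 - exp(-tau)).  Parameter values below follow the nighttime
        figures quoted in the abstract.
        """
        return t_sky * math.exp(-tau) + t_e * (1.0 - math.exp(-tau))

    # Galactic foreground of ~1e4 K seen through the nighttime ionosphere:
    t_obs = observed_sky_temperature(1.0e4, 0.01, 470.0)
    delta = t_obs - 1.0e4   # net ionospheric contribution (negative: net absorption)
    ```

    With these inputs the net effect is a deficit of roughly a hundred kelvin, i.e. orders of magnitude larger than the ∼100 mK cosmological signal, which is why averaging down the ionospheric fluctuations matters so much for global EoR experiments.
    
    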

  10. Galaxy formation in the reionization epoch as hinted by Wide Field Camera 3 observations of the Hubble Ultra Deep Field

    International Nuclear Information System (INIS)

    Yan Haojing; Windhorst, Rogier A.; Cohen, Seth H.; Hathi, Nimish P.; Ryan, Russell E.; O'Connell, Robert W.; McCarthy, Patrick J.

    2010-01-01

    towards z ∼ 6. In this scenario, the majority of the stellar mass that the universe assembled through the reionization epoch seems to remain undetected by current observations at z ∼ 6. (research papers)

  11. Assembling the Infrared Extragalactic Background Light with CIBER-2: Probing Inter-Halo Light and the Epoch of Reionization.

    Science.gov (United States)

    Bock, James

    We propose to carry out a program of observations with the Cosmic Infrared Background Experiment (CIBER-2). CIBER-2 is a near-infrared sounding rocket experiment designed to measure spatial fluctuations in the extragalactic background light. CIBER-2 scientifically follows on from the detection of fluctuations with the CIBER-1 imaging instrument, and will use measurement techniques developed and successfully demonstrated by CIBER-1. With high-sensitivity, multi-band imaging measurements, CIBER-2 will elucidate the history of interhalo light (IHL) production and carry out a deep search for extragalactic background fluctuations associated with the epoch of reionization (EOR). CIBER-1 has made high-quality detections of large-scale fluctuations over 4 sounding rocket flights. CIBER-1 measured the amplitude and spatial power spectrum of fluctuations, and observed an electromagnetic spectrum that is close to Rayleigh-Jeans, but with a statistically significant turnover at 1.1 um. The fluctuations cross-correlate with Spitzer images and are significantly bluer than the spectrum of the integrated background derived from galaxy counts. We interpret the CIBER-1 fluctuations as arising from IHL, low-mass stars tidally stripped from their parent galaxies during galaxy mergers. The first generation of stars and their remnants are likely responsible for the reionization of the intergalactic medium, observed to be ionized out to the most distant quasars at a redshift of 6. The total luminosity produced by first stars is uncertain, but a lower limit can be placed assuming a minimal number of photons to produce and sustain reionization. This 'minimal' extragalactic background component associated with reionization is detectable in fluctuations at the design sensitivity of CIBER-2. The CIBER-2 instrument is optimized for sensitivity to surface brightness in a short sounding rocket flight. The instrument consists of a 28 cm wide-field telescope operating in 6 spectral bands

  12. Thermal analyses of KBS-3H type repository

    International Nuclear Information System (INIS)

    Ikonen, K.

    2003-12-01

    This report contains the temperature dimensioning of the KBS-3H type nuclear fuel repository, in which the fuel canisters are disposed horizontally in horizontal tunnels according to the preliminary SKB (Swedish Nuclear Fuel and Waste Management Co) and Posiva plan. The maximum temperature on the canister surface is limited to the design temperature of +100 deg C. However, due to uncertainties in thermal analysis parameters (like scattering in rock conductivity) the allowable calculated maximum canister temperature is set to 90 deg C, giving a safety margin of 10 deg C. The allowable temperature is controlled by adjusting the space between adjacent canisters, adjacent tunnels and the distance between separate panels of the repository, and by the pre-cooling time, which affects the power of the canisters. With reasonable canister and tunnel spacing the maximum temperature of 90 deg C is achieved with an initial canister power of 1700 W. It became apparent that the temperature of canister surfaces can be determined by superposing analytic line heat source models much more efficiently than by numerical analysis, if the analytic model is first verified and calibrated by numerical analysis. This was done by comparing the surface temperatures of the central canister in a single infinite canister queue calculated numerically and analytically. In addition, the results from SKB analysis were used for comparison and for confirming the calculation procedure. For the Olkiluoto repository a reference case of one panel having 1500 canisters was analysed. The canisters are disposed in a rectangular geometry in a certain order. The calculation was performed separately for both Olkiluoto BWR canisters and Loviisa PWR canisters. The result was the minimum allowable spacing between canisters. (orig.)
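    The superposition of analytic line heat sources mentioned above can be sketched in a few lines: the temperature rise from one infinite line source involves the exponential integral E1, and contributions from parallel canister rows simply add. All parameter values below are illustrative, not the Posiva/SKB design figures:

    ```python
    import math

    def exp1(x, terms=60):
        """Exponential integral E1(x) via its convergent power series (x > 0)."""
        s = -0.5772156649015329 - math.log(x)   # -gamma - ln(x)
        term = 1.0
        for n in range(1, terms + 1):
            term *= -x / n                      # term = (-x)^n / n!
            s -= term / n                       # adds (-1)^(n+1) x^n / (n * n!)
        return s

    def line_source_dT(q_per_m, k_rock, alpha, r, t):
        """Temperature rise (K) at radius r (m) and time t (s) from an infinite
        line source of strength q_per_m (W/m) in rock of thermal conductivity
        k_rock (W/m/K) and diffusivity alpha (m^2/s)."""
        return q_per_m / (4.0 * math.pi * k_rock) * exp1(r * r / (4.0 * alpha * t))

    # Superpose three parallel canister "lines" spaced 10 m apart, evaluated
    # 0.5 m from the central line after 10 years of constant heating:
    canister_lines = [-10.0, 0.0, 10.0]
    point = 0.5
    ten_years = 10 * 3.156e7
    dT = sum(line_source_dT(100.0, 2.6, 1.2e-6, abs(pos - point), ten_years)
             for pos in canister_lines)
    ```

    Because each line source is evaluated in closed form, scanning thousands of canister positions and times costs almost nothing, which is the efficiency advantage over full numerical analysis noted in the abstract.
    
    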

  13. Laser Beam Focus Analyser

    DEFF Research Database (Denmark)

    Nielsen, Peter Carøe; Hansen, Hans Nørgaard; Olsen, Flemming Ove

    2007-01-01

    The quantitative and qualitative description of laser beam characteristics is important for process implementation and optimisation. In particular, a need for quantitative characterisation of beam diameter was identified when using fibre lasers for micro manufacturing. Here the beam diameter limits the obtainable features in direct laser machining as well as heat affected zones in welding processes. This paper describes the development of a measuring unit capable of analysing beam shape and diameter of lasers to be used in manufacturing processes. The analyser is based on the principle of a rotating mechanical wire being swept through the laser beam at varying Z-heights. The reflected signal is analysed and the resulting beam profile determined. The development comprised the design of a flexible fixture capable of providing both rotation and Z-axis movement, control software including data capture...

  14. Contesting Citizenship: Comparative Analyses

    DEFF Research Database (Denmark)

    Siim, Birte; Squires, Judith

    2007-01-01

    importance of particularized experiences and multiple inequality agendas). These developments shape the way citizenship is both practiced and analysed. Mapping neat citizenship models onto distinct nation-states and evaluating these in relation to formal equality is no longer an adequate approach. Comparative citizenship analyses need to be considered in relation to multiple inequalities and their intersections and to multiple governance and trans-national organising. This, in turn, suggests that comparative citizenship analysis needs to consider new spaces in which struggles for equal citizenship occur...

  15. Relative Contribution of the Hydrogen 2s Two-Photon Decay and Lyman-α Escape Channels during the Epoch of Cosmological Recombination

    Science.gov (United States)

    Rubiño-Martin, J. A.; Sunyaev, R. A.

    2018-01-01

    We discuss the evolution of the ratio of the number of recombinations due to 2s two-photon escape to those due to the escape of Lyman-α photons from the resonance during the epoch of cosmological recombination, within the width of the last scattering surface and near its boundaries. We discuss how this ratio evolves in time, and how it defines the profile of the Lyman-α line in the spectrum of the CMB. One of the key reasons for explaining its time dependence is the strong overpopulation of the 2p level relative to the 2s level at redshifts z ≲ 750.

  16. Risk analysis of fuel pontoons (Risico-analyse brandstofpontons)

    NARCIS (Netherlands)

    Uijt de Haag P; Post J; LSO

    2001-01-01

    To determine the risks of fuel pontoons in a marina, a generic risk analysis was carried out. A reference system was defined, consisting of a concrete fuel pontoon with a relatively large capacity and throughput. The pontoon is assumed to be located in a

  17. Fast multichannel analyser

    Energy Technology Data Exchange (ETDEWEB)

    Berry, A; Przybylski, M M; Sumner, I [Science Research Council, Daresbury (UK). Daresbury Lab.

    1982-10-01

    A fast multichannel analyser (MCA) capable of sampling at a rate of 10^7 s^-1 has been developed. The instrument is based on an 8 bit parallel encoding analogue to digital converter (ADC) reading into a fast histogramming random access memory (RAM) system, giving 256 channels of 64 k count capacity. The prototype unit is in CAMAC format.

  18. A fast multichannel analyser

    International Nuclear Information System (INIS)

    Berry, A.; Przybylski, M.M.; Sumner, I.

    1982-01-01

    A fast multichannel analyser (MCA) capable of sampling at a rate of 10^7 s^-1 has been developed. The instrument is based on an 8 bit parallel encoding analogue to digital converter (ADC) reading into a fast histogramming random access memory (RAM) system, giving 256 channels of 64 k count capacity. The prototype unit is in CAMAC format. (orig.)
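    The histogramming scheme of such an MCA (each 8-bit ADC sample increments one of 256 channel counters) can be modelled in a few lines. A software sketch of the principle only, not the CAMAC hardware:

    ```python
    import random

    CHANNELS = 256  # 8-bit ADC -> 256 channels, as in the instrument described

    def histogram_samples(samples, channels=CHANNELS):
        """Software model of the MCA's histogramming RAM: each ADC sample
        increments the count in its channel.  (Python ints are unbounded;
        the hardware provides 64 k counts per channel.)"""
        hist = [0] * channels
        for s in samples:
            hist[s & 0xFF] += 1   # mask to 8 bits, mirroring the parallel ADC
        return hist

    random.seed(0)
    # Simulated pulse-height spectrum: a Gaussian peak near channel 128.
    samples = [min(255, max(0, int(random.gauss(128, 12)))) for _ in range(10000)]
    hist = histogram_samples(samples)
    ```

    The hardware achieves its 10^7 s^-1 rate precisely because this increment is done in fast RAM rather than in software, but the resulting spectrum is the same.
    
    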

  19. MULTI-EPOCH VERY LONG BASELINE ARRAY OBSERVATIONS OF THE COMPACT WIND-COLLISION REGION IN THE QUADRUPLE SYSTEM Cyg OB2 no. 5

    Energy Technology Data Exchange (ETDEWEB)

    Dzib, Sergio A.; Rodriguez, Luis F.; Loinard, Laurent; Ortiz-Leon, Gisela N.; Araudo, Anabella T. [Centro de Radioastronomia y Astrofisica, Universidad Nacional Autonoma de Mexico, Morelia 58089 (Mexico); Mioduszewski, Amy J., E-mail: s.dzib@crya.unam.mx [National Radio Astronomy Observatory, 1003 Lopezville Road, Socorro, NM 87801 (United States)

    2013-02-15

    We present multi-epoch Very Long Baseline Array observations of the compact wind-collision region in the Cyg OB2 no. 5 system. These observations confirm the arc-shaped morphology of the emission reported earlier. The total flux as a function of time is roughly constant when the source is 'on', but falls below the detection limit as the wind-collision region approaches periastron in its orbit around the contact binary at the center of the system. In addition, at one of the 'on' epochs, the flux drops to about a fifth of its average value. We suggest that this apparent variation could result from the inhomogeneity of the wind that hides part of the flux rather than from an intrinsic variation. We measured a trigonometrical parallax, for the most compact radio emission of 0.61 ± 0.22 mas, corresponding to a distance of 1.65 +0.96 −0.44 kpc, in agreement with recent trigonometrical parallaxes measured for objects in the Cygnus X complex. Using constraints on the total mass of the system and orbital parameters previously reported in the literature, we obtain two independent indirect measurements of the distance to the Cyg OB2 no. 5 system, both consistent with 1.3-1.4 kpc. Finally, we suggest that the companion star responsible for the wind interaction, yet undetected, is of spectral type between B0.5 and O8.

  20. Sensitivity of the Hydrogen Epoch of Reionization Array and its build-out stages to one-point statistics from redshifted 21 cm observations

    Science.gov (United States)

    Kittiwisit, Piyanat; Bowman, Judd D.; Jacobs, Daniel C.; Beardsley, Adam P.; Thyagarajan, Nithyanandan

    2018-03-01

    We present a baseline sensitivity analysis of the Hydrogen Epoch of Reionization Array (HERA) and its build-out stages to one-point statistics (variance, skewness, and kurtosis) of redshifted 21 cm intensity fluctuation from the Epoch of Reionization (EoR) based on realistic mock observations. By developing a full-sky 21 cm light-cone model, taking into account the proper field of view and frequency bandwidth, utilizing a realistic measurement scheme, and assuming perfect foreground removal, we show that HERA will be able to recover statistics of the sky model with high sensitivity by averaging over measurements from multiple fields. All build-out stages will be able to detect variance, while skewness and kurtosis should be detectable for HERA128 and larger. We identify sample variance as the limiting constraint of the measurements at the end of reionization. The sensitivity can also be further improved by performing frequency windowing. In addition, we find that strong sample variance fluctuation in the kurtosis measured from an individual field of observation indicates the presence of outlying cold or hot regions in the underlying fluctuations, a feature that can potentially be used as an EoR bubble indicator.
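    The three one-point statistics considered above are simple functions of the sample moments of the brightness-temperature fluctuations. A minimal sketch on a toy Gaussian field (real HERA maps, instrument effects, and foreground handling are far more involved):

    ```python
    import random

    def one_point_stats(values):
        """Variance, skewness, and excess kurtosis of a 1-D sample, the three
        one-point statistics discussed for the 21 cm fluctuations."""
        n = len(values)
        mean = sum(values) / n
        m2 = sum((v - mean) ** 2 for v in values) / n
        m3 = sum((v - mean) ** 3 for v in values) / n
        m4 = sum((v - mean) ** 4 for v in values) / n
        variance = m2
        skewness = m3 / m2 ** 1.5
        kurtosis = m4 / m2 ** 2 - 3.0   # excess kurtosis: 0 for a Gaussian
        return variance, skewness, kurtosis

    random.seed(1)
    # A Gaussian field gives skewness ~ 0 and excess kurtosis ~ 0; outlying
    # hot or cold EoR "bubble" regions would push both away from zero, which
    # is the bubble-indicator idea mentioned in the abstract.
    var, skew, kurt = one_point_stats([random.gauss(0.0, 5.0) for _ in range(20000)])
    ```

    Averaging such statistics over many independent fields is what beats down the sample variance identified above as the limiting constraint at the end of reionization.
    
    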

  1. MULTI-EPOCH VERY LONG BASELINE ARRAY OBSERVATIONS OF THE COMPACT WIND-COLLISION REGION IN THE QUADRUPLE SYSTEM Cyg OB2 no. 5

    International Nuclear Information System (INIS)

    Dzib, Sergio A.; Rodríguez, Luis F.; Loinard, Laurent; Ortiz-León, Gisela N.; Araudo, Anabella T.; Mioduszewski, Amy J.

    2013-01-01

    We present multi-epoch Very Long Baseline Array observations of the compact wind-collision region in the Cyg OB2 no. 5 system. These observations confirm the arc-shaped morphology of the emission reported earlier. The total flux as a function of time is roughly constant when the source is 'on', but falls below the detection limit as the wind-collision region approaches periastron in its orbit around the contact binary at the center of the system. In addition, at one of the 'on' epochs, the flux drops to about a fifth of its average value. We suggest that this apparent variation could result from the inhomogeneity of the wind that hides part of the flux rather than from an intrinsic variation. We measured a trigonometrical parallax, for the most compact radio emission of 0.61 ± 0.22 mas, corresponding to a distance of 1.65 +0.96 –0.44 kpc, in agreement with recent trigonometrical parallaxes measured for objects in the Cygnus X complex. Using constraints on the total mass of the system and orbital parameters previously reported in the literature, we obtain two independent indirect measurements of the distance to the Cyg OB2 no. 5 system, both consistent with 1.3-1.4 kpc. Finally, we suggest that the companion star responsible for the wind interaction, yet undetected, is of spectral type between B0.5 and O8.

  2. Real-time forecasting of ICME shock arrivals at L1 during the "April Fool’s Day" epoch: 28 March – 21 April 2001

    Directory of Open Access Journals (Sweden)

    W. Sun

    Full Text Available The Sun was extremely active during the "April Fool’s Day" epoch of 2001. We chose the period from a solar flare on 28 March 2001 to a final shock arrival at Earth on 21 April 2001. The activity consisted of two presumed helmet-streamer blowouts, seven M-class flares, and nine X-class flares, the last of which was behind the west limb. We have been experimenting since February 1997 with real-time, end-to-end forecasting of interplanetary coronal mass ejection (ICME) shock arrival times. Since August 1998, these forecasts have been distributed in real-time by e-mail to a list of interested scientists and operational USAF and NOAA forecasters. They are made using three different solar wind models. We describe here the solar events observed during the April Fool’s 2001 epoch, along with the predicted and actual shock arrival times, and the ex post facto correction to the real-time coronal shock speed observations. It appears that the initial estimates of coronal shock speeds from Type II radio burst observations and coronal mass ejections were too high by as much as 30%. We conclude that a 3-dimensional coronal density model should be developed for application to observations of solar flares and their Type II radio burst observations.

    Key words. Interplanetary physics (flare and stream dynamics; interplanetary shocks) – Magnetospheric physics (storms and substorms)

  3. Possible future HERA analyses

    International Nuclear Information System (INIS)

    Geiser, Achim

    2015-12-01

    A variety of possible future analyses of HERA data in the context of the HERA data preservation programme are collected, motivated, and commented on. The focus is placed on possible future analyses of the existing ep collider data and their physics scope. Comparisons to the original scope of the HERA programme are made, and cross references to topics also covered by other participants of the workshop are given. This includes topics on QCD, proton structure, diffraction, jets, hadronic final states, heavy flavours, electroweak physics, and the application of related theory and phenomenology topics like NNLO QCD calculations, low-x related models, nonperturbative QCD aspects, and electroweak radiative corrections. Synergies with other collider programmes are also addressed. In summary, the range of physics topics which can still be uniquely covered using the existing data is very broad and of considerable physics interest, often matching the interest of results from colliders currently in operation. Due to well-established data and MC sets, calibrations, and analysis procedures the manpower and expertise needed for a particular analysis is often very much smaller than that needed for an ongoing experiment. Since centrally funded manpower to carry out such analyses is not available any longer, this contribution not only targets experienced self-funded experimentalists, but also theorists and master-level students who might wish to carry out such an analysis.

  4. Biomass feedstock analyses

    Energy Technology Data Exchange (ETDEWEB)

    Wilen, C.; Moilanen, A.; Kurkela, E. [VTT Energy, Espoo (Finland). Energy Production Technologies

    1996-12-31

    The overall objectives of the project 'Feasibility of electricity production from biomass by pressurized gasification systems' within the EC Research Programme JOULE II were to evaluate the potential of advanced power production systems based on biomass gasification and to study the technical and economic feasibility of these new processes with different types of biomass feedstocks. This report was prepared as part of this R and D project. The objectives of this task were to perform fuel analyses of potential woody and herbaceous biomasses with specific regard to the gasification properties of the selected feedstocks. The analyses of 15 Scandinavian and European biomass feedstocks included density, proximate and ultimate analyses, trace compounds, ash composition and fusion behaviour in oxidizing and reducing atmospheres. The wood-derived fuels, such as whole-tree chips, forest residues, bark and to some extent willow, can be expected to have good gasification properties. Difficulties caused by ash fusion and sintering in straw combustion and gasification are generally known. The ash and alkali metal contents of the European biomasses harvested in Italy resembled those of the Nordic straws, and it is expected that they behave to a great extent as straw in gasification. No direct relation between the ash fusion behaviour (determined according to the standard method) and, for instance, the alkali metal content was found in the laboratory determinations. A more profound characterisation of the fuels would require gasification experiments in a thermobalance and a PDU (Process Development Unit) rig. (orig.) (10 refs.)

  5. AMS analyses at ANSTO

    Energy Technology Data Exchange (ETDEWEB)

    Lawson, E.M. [Australian Nuclear Science and Technology Organisation, Lucas Heights, NSW (Australia). Physics Division

    1998-03-01

    The major use of ANTARES is Accelerator Mass Spectrometry (AMS) with ^14C being the most commonly analysed radioisotope - presently about 35 % of the available beam time on ANTARES is used for ^14C measurements. The accelerator measurements are supported by, and dependent on, a strong sample preparation section. The ANTARES AMS facility supports a wide range of investigations into fields such as global climate change, ice cores, oceanography, dendrochronology, anthropology, and classical and Australian archaeology. Described here are some examples of the ways in which AMS has been applied to support research into the archaeology, prehistory and culture of this continent's indigenous Aboriginal peoples. (author)

  6. AMS analyses at ANSTO

    International Nuclear Information System (INIS)

    Lawson, E.M.

    1998-01-01

    The major use of ANTARES is Accelerator Mass Spectrometry (AMS) with ^14C being the most commonly analysed radioisotope - presently about 35 % of the available beam time on ANTARES is used for ^14C measurements. The accelerator measurements are supported by, and dependent on, a strong sample preparation section. The ANTARES AMS facility supports a wide range of investigations into fields such as global climate change, ice cores, oceanography, dendrochronology, anthropology, and classical and Australian archaeology. Described here are some examples of the ways in which AMS has been applied to support research into the archaeology, prehistory and culture of this continent's indigenous Aboriginal peoples. (author)

  7. Analyses of MHD instabilities

    International Nuclear Information System (INIS)

    Takeda, Tatsuoki

    1985-01-01

    In this article, analyses of the MHD stabilities which govern the global behavior of a fusion plasma are described from the viewpoint of numerical computation. First, we describe the high-accuracy calculation of the MHD equilibrium and then the analysis of the linear MHD instability. The former is the basis of the stability analysis and the latter is closely related to the limiting beta value, which is a very important theoretical issue of tokamak research. To attain a stable tokamak plasma with good confinement properties it is necessary to control or suppress disruptive instabilities. Next, we describe the nonlinear MHD instabilities which relate to the disruption phenomena. Lastly, we describe vectorization of the MHD codes. The above MHD codes for fusion plasma analyses are relatively simple though very time-consuming, and the parts of the codes which need a lot of CPU time are concentrated in a small portion of the codes; moreover, the codes are usually used by their developers themselves, which makes it comparatively easy to attain a high performance ratio on the vector processor. (author)

  8. Uncertainty Analyses and Strategy

    International Nuclear Information System (INIS)

    Kevin Coppersmith

    2001-01-01

    The DOE identified a variety of uncertainties, arising from different sources, during its assessment of the performance of a potential geologic repository at the Yucca Mountain site. In general, the number and detail of process models developed for the Yucca Mountain site, and the complex coupling among those models, make the direct incorporation of all uncertainties difficult. The DOE has addressed these issues in a number of ways using an approach to uncertainties that is focused on producing a defensible evaluation of the performance of a potential repository. The treatment of uncertainties oriented toward defensible assessments has led to analyses and models with so-called 'conservative' assumptions and parameter bounds, where conservative implies lower performance than might be demonstrated with a more realistic representation. The varying maturity of the analyses and models, and uneven level of data availability, result in total system level analyses with a mix of realistic and conservative estimates (for both probabilistic representations and single values). That is, some inputs have realistically represented uncertainties, and others are conservatively estimated or bounded. However, this approach is consistent with the 'reasonable assurance' approach to compliance demonstration, which was called for in the U.S. Nuclear Regulatory Commission's (NRC) proposed 10 CFR Part 63 regulation (64 FR 8640 [DIRS 101680]). A risk analysis that includes conservatism in the inputs will result in conservative risk estimates. Therefore, the approach taken for the Total System Performance Assessment for the Site Recommendation (TSPA-SR) provides a reasonable representation of processes and conservatism for purposes of site recommendation. However, mixing unknown degrees of conservatism in models and parameter representations reduces the transparency of the analysis and makes the development of coherent and consistent probability statements about projected repository

  9. A simple beam analyser

    International Nuclear Information System (INIS)

    Lemarchand, G.

    1977-01-01

    (ee'p) experiments allow one to measure the missing energy distribution as well as the momentum distribution of the extracted proton in the nucleus versus the missing energy. Such experiments are presently conducted on SACLAY's A.L.S. 300 Linac. Electrons and protons are respectively analysed by two spectrometers and detected in their focal planes. Counting rates are usually low and include time coincidences and accidentals. The signal-to-noise ratio is dependent on the physics of the experiment and the resolution of the coincidence, therefore it is mandatory to get a beam current distribution as flat as possible. Using new technologies has allowed to monitor in real time the behavior of the beam pulse and determine when the duty cycle can be considered good with respect to a numerical basis

  10. EEG analyses with SOBI.

    Energy Technology Data Exchange (ETDEWEB)

    Glickman, Matthew R.; Tang, Akaysha (University of New Mexico, Albuquerque, NM)

    2009-02-01

    The motivating vision behind Sandia's MENTOR/PAL LDRD project has been that of systems which use real-time psychophysiological data to support and enhance human performance, both individually and of groups. Relevant and significant psychophysiological data being a necessary prerequisite to such systems, this LDRD has focused on identifying and refining such signals. The project has focused in particular on EEG (electroencephalogram) data as a promising candidate signal because it (potentially) provides a broad window on brain activity with relatively low cost and logistical constraints. We report here on two analyses performed on EEG data collected in this project using the SOBI (Second Order Blind Identification) algorithm to identify two independent sources of brain activity: one in the frontal lobe and one in the occipital. The first study looks at directional influences between the two components, while the second study looks at inferring gender based upon the frontal component.
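    SOBI separates sources by exploiting their differing temporal autocorrelation structure. A minimal sketch of its single-lag special case (the AMUSE algorithm); full SOBI jointly diagonalizes lagged covariance matrices at many lags, but one lag is enough to show the idea:

    ```python
    import numpy as np

    def amuse(x, lag=1):
        """Single-lag special case (AMUSE) of second-order blind separation.

        x is a (channels, samples) array of roughly zero-mean mixed signals.
        Whiten the data, then eigendecompose the symmetrized lagged
        covariance of the whitened signals; the eigenvectors give the
        unmixing rotation.  Returns the estimated sources."""
        n = x.shape[1]
        c0 = x @ x.T / n                       # zero-lag covariance
        d, e = np.linalg.eigh(c0)
        w = e @ np.diag(d ** -0.5) @ e.T       # whitening matrix
        z = w @ x
        cl = z[:, lag:] @ z[:, :-lag].T / (n - lag)
        cl = (cl + cl.T) / 2.0                 # symmetrized lagged covariance
        _, u = np.linalg.eigh(cl)              # rotation that diagonalizes it
        return u.T @ z

    # Two sinusoids with very different lag-1 autocorrelation, mixed linearly
    # (a stand-in for the frontal and occipital EEG components in the text):
    t = np.arange(5000)
    s = np.vstack([np.sin(0.05 * t), np.sin(1.1 * t)])
    a = np.array([[1.0, 0.6], [0.4, 1.0]])     # unknown mixing matrix
    y = amuse(a @ s)
    ```

    The recovered rows of y match the true sources up to sign and permutation, which is the usual ambiguity of blind source separation.
    
    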

  11. Pathway-based analyses.

    Science.gov (United States)

    Kent, Jack W

    2016-02-03

    New technologies for acquisition of genomic data, while offering unprecedented opportunities for genetic discovery, also impose severe burdens of interpretation and penalties for multiple testing. The Pathway-based Analyses Group of the Genetic Analysis Workshop 19 (GAW19) sought reduction of multiple-testing burden through various approaches to aggregation of high-dimensional data in pathways informed by prior biological knowledge. Experimental methods tested included the use of "synthetic pathways" (random sets of genes) to estimate power and false-positive error rate of methods applied to simulated data; data reduction via independent components analysis, single-nucleotide polymorphism (SNP)-SNP interaction, and use of gene sets to estimate genetic similarity; and general assessment of the efficacy of prior biological knowledge to reduce the dimensionality of complex genomic data. The work of this group explored several promising approaches to managing high-dimensional data, with the caveat that these methods are necessarily constrained by the quality of external bioinformatic annotation.

  12. Analysing Access Control Specifications

    DEFF Research Database (Denmark)

    Probst, Christian W.; Hansen, René Rydhof

    2009-01-01

    When prosecuting crimes, the main question to answer is often who had a motive and the possibility to commit the crime. When investigating cyber crimes, the question of possibility is often hard to answer, as in a networked system almost any location can be accessed from almost anywhere. The most common tool to answer this question, analysis of log files, faces the problem that the amount of logged data may be overwhelming. This problem gets even worse in the case of insider attacks, where the attacker's actions usually will be logged as permissible, standard actions - if they are logged at all. Recent events have revealed intimate knowledge of surveillance and control systems on the side of the attacker, making it often impossible to deduce the identity of an inside attacker from logged data. In this work we present an approach that analyses the access control configuration to identify the set

  13. Network class superposition analyses.

    Directory of Open Access Journals (Sweden)

    Carl A B Pearson

    Full Text Available Networks are often used to understand a whole system by modeling the interactions among its pieces. Examples include biomolecules in a cell interacting to provide some primary function, or species in an environment forming a stable community. However, these interactions are often unknown; instead, the pieces' dynamic states are known, and network structure must be inferred. Because observed function may be explained by many different networks (e.g., ≈ 10^30 for the yeast cell cycle process), considering dynamics beyond this primary function means picking a single network or suitable sample: measuring over all networks exhibiting the primary function is computationally infeasible. We circumvent that obstacle by calculating the network class ensemble. We represent the ensemble by a stochastic matrix T, which is a transition-by-transition superposition of the system dynamics for each member of the class. We present concrete results for T derived from boolean time series dynamics on networks obeying the Strong Inhibition rule, by applying T to several traditional questions about network dynamics. We show that the distribution of the number of point attractors can be accurately estimated with T. We show how to generate Derrida plots based on T. We show that T-based Shannon entropy outperforms other methods at selecting experiments to further narrow the network structure. We also outline an experimental test of predictions based on T. We motivate all of these results in terms of a popular molecular biology boolean network model for the yeast cell cycle, but the methods and analyses we introduce are general. We conclude with open questions for T, for example, application to other models, computational considerations when scaling up to larger systems, and other potential analyses.
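    The transition-by-transition superposition defining T can be illustrated with toy 2-node boolean networks (the yeast-cell-cycle class discussed in the abstract is vastly larger): each deterministic member network contributes its transitions with equal weight, yielding a row-stochastic T.

    ```python
    from itertools import product

    def class_transition_matrix(update_rules, n_nodes=2):
        """Ensemble transition matrix T for a class of boolean networks.

        Each element of update_rules is one network, given as a function
        mapping a state tuple to its successor state tuple.  T[i][j] is the
        fraction of networks in the class that send state i to state j, a
        transition-by-transition superposition of the members' dynamics."""
        states = list(product((0, 1), repeat=n_nodes))
        index = {s: k for k, s in enumerate(states)}
        T = [[0.0] * len(states) for _ in states]
        for rule in update_rules:
            for s in states:
                T[index[s]][index[rule(s)]] += 1.0 / len(update_rules)
        return T

    # Two hypothetical candidate networks consistent with some observed behavior:
    net_a = lambda s: (s[1], s[0])         # swap the two nodes
    net_b = lambda s: (s[1], 1 - s[0])     # swap and negate
    T = class_transition_matrix([net_a, net_b])
    ```

    Diagonal entries of T immediately give class-averaged point-attractor information: T[i][i] is the fraction of member networks for which state i is a fixed point, which is how the distribution of point attractors can be read off the ensemble.
    
    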

  14. Does resting-state connectivity reflect depressive rumination? A tale of two analyses.

    Science.gov (United States)

    Berman, Marc G; Misic, Bratislav; Buschkuehl, Martin; Kross, Ethan; Deldin, Patricia J; Peltier, Scott; Churchill, Nathan W; Jaeggi, Susanne M; Vakorin, Vasily; McIntosh, Anthony R; Jonides, John

    2014-12-01

    Major Depressive Disorder (MDD) is characterized by rumination. Prior research suggests that resting-state brain activation reflects rumination when depressed individuals are not task engaged. However, no study has directly tested this. Here we investigated whether resting-state epochs differ from induced ruminative states for healthy and depressed individuals. Most previous research on resting-state networks comes from seed-based analyses with the posterior cingulate cortex (PCC). By contrast, we examined resting state connectivity by using the complete multivariate connectivity profile (i.e., connections across all brain nodes) and by comparing these results to seeded analyses. We find that unconstrained resting-state intervals differ from active rumination states in strength of connectivity and that overall connectivity was higher for healthy vs. depressed individuals. Relationships between connectivity and subjective mood (i.e., behavior) were strongly observed during induced rumination epochs. Furthermore, connectivity patterns that related to subjective mood were strikingly different for MDD and healthy control (HC) groups suggesting different mood regulation mechanisms. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. Seismic fragility analyses

    International Nuclear Information System (INIS)

    Kostov, Marin

    2000-01-01

    In the last two decades an increasing number of probabilistic seismic risk assessments have been performed. The basic ideas of the procedure for performing a Probabilistic Safety Analysis (PSA) of critical structures (NUREG/CR-2300, 1983) can also be used for normal industrial and residential buildings, dams or other structures. The general formulation of the risk assessment procedure applied in this investigation is presented in Franzini, et al., 1984. The probability of failure of a structure for an expected lifetime (for example 50 years) can be obtained from the annual frequency of failure, β_E, determined by the relation β_E = ∫ |dβ(x)/dx| P(f|x) dx. Here β(x) is the annual frequency of exceedance of load level x (for example, the variable x may be peak ground acceleration) and P(f|x) is the conditional probability of structure failure at a given seismic load level x. The problem leads to the assessment of the seismic hazard β(x) and the fragility P(f|x). The seismic hazard curves are obtained by the probabilistic seismic hazard analysis. The fragility curves are obtained after the response of the structure is defined as probabilistic and its capacity and the associated uncertainties are assessed. Finally the fragility curves are combined with the seismic loading to estimate the frequency of failure for each critical scenario. The frequency of failure due to a seismic event is represented by the scenario with the highest frequency. The tools usually applied for probabilistic safety analyses of critical structures could relatively easily be adopted for ordinary structures. The key problems are the seismic hazard definition and the fragility analyses. The fragility could be derived either from scaling procedures or by direct generation. Both approaches are presented in the paper. After the seismic risk (in terms of failure probability) is assessed there are several approaches for risk reduction. Generally the methods could be classified in two groups. The
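The hazard-fragility convolution β_E = ∫ |dβ(x)/dx| P(f|x) dx can be evaluated numerically once both curves are specified. A minimal sketch, assuming a power-law hazard curve and a lognormal fragility; the parameter values are illustrative, not from the paper:

```python
import numpy as np
from math import log, sqrt, erf

def hazard(x):
    """Annual frequency of exceeding PGA x [g]: beta(x) = k0 * x**(-k)."""
    k0, k = 1e-4, 2.5          # hypothetical hazard-curve parameters
    return k0 * x ** (-k)

def fragility(x, median=0.6, beta_u=0.4):
    """P(f|x): lognormal CDF with hypothetical median capacity 0.6 g."""
    return 0.5 * (1.0 + erf(log(x / median) / (beta_u * sqrt(2.0))))

# beta_E = integral of |d beta/dx| * P(f|x) dx, via the trapezoid rule.
x = np.linspace(0.05, 3.0, 2000)
dbeta_dx = np.gradient(hazard(x), x)          # negative: hazard decreases
integrand = -dbeta_dx * np.array([fragility(v) for v in x])
annual_failure_freq = float(np.sum(0.5 * (integrand[1:] + integrand[:-1])
                                   * np.diff(x)))

# Failure probability over a 50-year lifetime (Poisson assumption).
p_fail_50yr = 1.0 - np.exp(-50.0 * annual_failure_freq)
```

The dominant contributions come from load levels near the median capacity, which is why the fragility curve's median and uncertainty drive the result.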

  16. THE INNER DISK STRUCTURE, DISK-PLANET INTERACTIONS, AND TEMPORAL EVOLUTION IN THE β PICTORIS SYSTEM: A TWO-EPOCH HST/STIS CORONAGRAPHIC STUDY

    Energy Technology Data Exchange (ETDEWEB)

    Apai, Dániel; Schneider, Glenn [Department of Astronomy and Steward Observatory, The University of Arizona, Tucson, AZ 85721 (United States); Grady, Carol A. [Eureka Scientific, 2452 Delmer, Suite 100, Oakland CA 96002 (United States); Wyatt, Mark C. [Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA (United Kingdom); Lagrange, Anne-Marie [Université Grenoble Alpes, IPAG, F-38000, Grenoble (France); Kuchner, Marc J.; Stark, Christopher J. [NASA Goddard Space Flight Center, Exoplanets and Stellar Astrophysics Laboratory, Code 667, Greenbelt, MD 20771 (United States); Lubow, Stephen H., E-mail: apai@arizona.edu [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States)

    2015-02-20

    We present deep Hubble Space Telescope/Space Telescope Imaging Spectrograph coronagraphic images of the β Pic debris disk obtained at two epochs separated by 15 yr. The new images and the re-reduction of the 1997 data provide the most sensitive and detailed views of the disk at optical wavelengths, as well as the optical coronagraphic image of the disk with the smallest inner working angle to date. Our observations characterize the large-scale and inner-disk asymmetries and we identify multiple breaks in the disk radial surface brightness profile. We study in detail the radial and vertical disk structure and show that the disk is warped. We explore the disk at the location of the β Pic b super-Jupiter and find that the disk surface brightness slope is continuous between 0.''5 and 2.''0, arguing for no change at the separations where β Pic b orbits. The two-epoch images constrain the disk's surface brightness evolution on orbital and radiation pressure blow-out timescales. We place an upper limit of 3% on the disk surface brightness change between 3'' and 5'', including the locations of the disk warp and the CO and dust clumps. We discuss the new observations in the context of high-resolution multi-wavelength images and divide the disk asymmetries into two groups: axisymmetric and non-axisymmetric. The axisymmetric structures (warp, large-scale butterfly, etc.) are consistent with disk structure models that include interactions of a planetesimal belt and a non-coplanar giant planet. The non-axisymmetric features, however, require a different explanation.

  17. THE MULTI-EPOCH NEARBY CLUSTER SURVEY: TYPE Ia SUPERNOVA RATE MEASUREMENT IN z ∼ 0.1 CLUSTERS AND THE LATE-TIME DELAY TIME DISTRIBUTION

    International Nuclear Information System (INIS)

    Sand, David J.; Graham, Melissa L.; Bildfell, Chris; Pritchet, Chris; Zaritsky, Dennis; Just, Dennis W.; Herbert-Fort, Stéphane; Hoekstra, Henk; Sivanandam, Suresh; Foley, Ryan J.; Mahdavi, Andisheh

    2012-01-01

    We describe the Multi-Epoch Nearby Cluster Survey, designed to measure the cluster Type Ia supernova (SN Ia) rate in a sample of 57 X-ray selected galaxy clusters, with redshifts of 0.05 < z < 0.15. Utilizing our real-time analysis pipeline, we spectroscopically confirmed twenty-three cluster SNe Ia, four of which were intracluster events. Using our deep Canada-France-Hawaii Telescope/MegaCam imaging, we measured total stellar luminosities in each of our galaxy clusters, and we performed detailed supernova (SN) detection efficiency simulations. Bringing these ingredients together, we measure an overall cluster SN Ia rate within R_200 (1 Mpc) of 0.042 +0.012/–0.010 +0.010/–0.008 SNuM (0.049 +0.016/–0.014 +0.005/–0.004 SNuM) and an SN Ia rate within red-sequence galaxies of 0.041 +0.015/–0.015 +0.005/–0.010 SNuM (0.041 +0.019/–0.015 +0.005/–0.004 SNuM). The red-sequence SN Ia rate is consistent with published rates in early-type/elliptical galaxies in the 'field'. Using our red-sequence SN Ia rate, and other cluster SN measurements in early-type galaxies up to z ∼ 1, we derive the late-time (>2 Gyr) delay time distribution (DTD) of SN Ia assuming a cluster early-type galaxy star formation epoch of z_f = 3. Assuming a power-law form for the DTD, Ψ(t) ∝ t^s, we find s = –1.62 ± 0.54. This result is consistent with predictions for the double degenerate SN Ia progenitor scenario (s ∼ –1) and is also in line with recent calculations for the double detonation explosion mechanism (s ∼ –2). The most recent calculations of the single degenerate scenario DTD predict an order-of-magnitude drop-off in SN Ia rate ∼6-7 Gyr after stellar formation, and the observed cluster rates cannot rule this out.
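The power-law delay time distribution fit reported above, Ψ(t) ∝ t^s, reduces to a straight-line fit in log-log space. A minimal sketch with hypothetical rate points (not the measured cluster rates) chosen to fall near s ≈ -1.6:

```python
import numpy as np

# Hypothetical late-time SN Ia rates: delay time t [Gyr] vs. rate
# [arbitrary units], standing in for the cluster measurements.
t = np.array([3.0, 5.0, 8.0, 11.0])
rate = np.array([0.060, 0.027, 0.013, 0.008])

# Psi(t) ~ t^s  =>  log(rate) = s * log(t) + const; fit the slope s.
s, log_norm = np.polyfit(np.log(t), np.log(rate), 1)
```

In practice the published fit also propagates the asymmetric rate uncertainties, which a plain least-squares slope ignores.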

  18. The Galaxy mass function up to z =4 in the GOODS-MUSIC sample: into the epoch of formation of massive galaxies

    Science.gov (United States)

    Fontana, A.; Salimbeni, S.; Grazian, A.; Giallongo, E.; Pentericci, L.; Nonino, M.; Fontanot, F.; Menci, N.; Monaco, P.; Cristiani, S.; Vanzella, E.; de Santis, C.; Gallozzi, S.

    2006-12-01

    Aims. The goal of this work is to measure the evolution of the Galaxy Stellar Mass Function and of the resulting Stellar Mass Density up to redshift ≃4, in order to study the assembly of massive galaxies in the high redshift Universe. Methods. We have used the GOODS-MUSIC catalog, containing 3000 Ks-selected galaxies with multi-wavelength coverage extending from the U band to the Spitzer 8 μm band, of which 27% have spectroscopic redshifts and the remaining fraction have accurate photometric redshifts. On this sample we have applied a standard fitting procedure to measure stellar masses. We compute the Galaxy Stellar Mass Function and the resulting Stellar Mass Density up to redshift ≃4, taking the biases and incompleteness effects properly into account. Results. Within the well-known trend of global decline of the Stellar Mass Density with redshift, we show that the decline of the more massive galaxies may be described by an exponential timescale of ≃6 Gyr up to z ≃ 1.5, and proceeds much faster thereafter, with an exponential timescale of ≃0.6 Gyr. We also show that there is some evidence for a differential evolution of the Galaxy Stellar Mass Function, with low mass galaxies evolving faster than more massive ones up to z ≃ 1-1.5, and that the Galaxy Stellar Mass Function remains remarkably flat (i.e. with a slope close to the local one) up to z ≃ 1-1.3. Conclusions. The observed behaviour of the Galaxy Stellar Mass Function is consistent with a scenario where about 50% of present-day massive galaxies formed at a vigorous rate in the epoch between redshift 4 and 1.5, followed by a milder evolution until the present-day epoch.

  19. Interpretation of an Epoch in the Novel "the Big Green Tent" by L. Ulitskaya: Linguistic-Cultural Analysis of Verbal Lexicon

    Directory of Open Access Journals (Sweden)

    Ildar Ch. Safin

    2017-11-01

    Full Text Available This article is the verbal lexicon analysis based on the text of the novel "The Big Green Tent" by L. Ulitskaya. The creative manner of the contemporary writer attracts the attention of researchers, her writings describe the emotional experiences of the heroes and also give a generalized image of time full of historical details and features. The language of her stories and short stories is characterized by a special style in the description of time realities. A verb in the text allows the author to express the events and the circumstances that characterize an action in its dynamics due to the fact that verbal categories reflect the real reality in our consciousness. The method of linguistic cultural analysis of verbal lexicon in the novel "The Big Green Tent" made it possible to single out exactly those language units that the writer carefully selects for the creation and interpretation of the era. A special emphasis in the study is made on the creation of an expressive-emotional style of narration using the stylistic capabilities of the Russian verb. The individual author's methods of narration expressiveness creation are singled out: synonymous series, euphemisms, colloquial lexicon, etc. The conducted study and a careful analysis of the selected factual material testifies that, recreating an epoch, the master of the word invariably uses that language arsenal that brightly and fully conveys the color of time. L. Ulitskaya is able to be not only an indifferent witness of the epoch, but also her tenacious observer and interpreter. The analyzed factual material and the main points of this research can be used in the courses on stylistics and linguistic culturology, and also as an illustrative material during the classes on the linguistic analysis of a literary text.

  20. New limits on 21 cm epoch of reionization from paper-32 consistent with an x-ray heated intergalactic medium at z = 7.7

    International Nuclear Information System (INIS)

    Parsons, Aaron R.; Liu, Adrian; Ali, Zaki S.; Pober, Jonathan C.; Aguirre, James E.; Moore, David F.; Bradley, Richard F.; Carilli, Chris L.; DeBoer, David R.; Dexter, Matthew R.; MacMahon, David H. E.; Gugliucci, Nicole E.; Jacobs, Daniel C.; Klima, Pat; Manley, Jason R.; Walbrugh, William P.; Stefan, Irina I.

    2014-01-01

    We present new constraints on the 21 cm Epoch of Reionization (EoR) power spectrum derived from three months of observing with a 32-antenna, dual-polarization deployment of the Donald C. Backer Precision Array for Probing the Epoch of Reionization in South Africa. In this paper, we demonstrate the efficacy of the delay-spectrum approach to avoiding foregrounds, achieving over eight orders of magnitude of foreground suppression (in mK²). Combining this approach with a procedure for removing off-diagonal covariances arising from instrumental systematics, we achieve a best 2σ upper limit of (41 mK)² for k = 0.27 h Mpc⁻¹ at z = 7.7. This limit falls within an order of magnitude of the brighter predictions of the expected 21 cm EoR signal level. Using the upper limits set by these measurements, we generate new constraints on the brightness temperature of 21 cm emission in neutral regions for various reionization models. We show that for several ionization scenarios, our measurements are inconsistent with cold reionization. That is, heating of the neutral intergalactic medium (IGM) is necessary to remain consistent with the constraints we report. Hence, we have suggestive evidence that by z = 7.7, the H I has been warmed from its cold primordial state, probably by X-rays from high-mass X-ray binaries or miniquasars. The strength of this evidence depends on the ionization state of the IGM, which we are not yet able to constrain. This result is consistent with standard predictions for how reionization might have proceeded.

  1. New limits on 21 cm epoch of reionization from paper-32 consistent with an x-ray heated intergalactic medium at z = 7.7

    Energy Technology Data Exchange (ETDEWEB)

    Parsons, Aaron R.; Liu, Adrian; Ali, Zaki S.; Pober, Jonathan C. [Astronomy Department, University of California, Berkeley, CA (United States); Aguirre, James E.; Moore, David F. [Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, PA (United States); Bradley, Richard F. [Department of Electrical and Computer Engineering, University of Virginia, Charlottesville, VA (United States); Carilli, Chris L. [National Radio Astronomy Observatory, Socorro, NM (United States); DeBoer, David R.; Dexter, Matthew R.; MacMahon, David H. E. [Radio Astronomy Laboratory, University of California, Berkeley, CA (United States); Gugliucci, Nicole E. [Department of Astronomy, University of Virginia, Charlottesville, VA (United States); Jacobs, Daniel C. [School of Earth and Space Exploration, Arizona State University, Tempe, AZ (United States); Klima, Pat [National Radio Astronomy Observatory, Charlottesville, VA (United States); Manley, Jason R.; Walbrugh, William P. [Square Kilometer Array, South Africa Project, Cape Town (South Africa); Stefan, Irina I. [Cavendish Laboratory, Cambridge (United Kingdom)

    2014-06-20

    We present new constraints on the 21 cm Epoch of Reionization (EoR) power spectrum derived from three months of observing with a 32-antenna, dual-polarization deployment of the Donald C. Backer Precision Array for Probing the Epoch of Reionization in South Africa. In this paper, we demonstrate the efficacy of the delay-spectrum approach to avoiding foregrounds, achieving over eight orders of magnitude of foreground suppression (in mK²). Combining this approach with a procedure for removing off-diagonal covariances arising from instrumental systematics, we achieve a best 2σ upper limit of (41 mK)² for k = 0.27 h Mpc⁻¹ at z = 7.7. This limit falls within an order of magnitude of the brighter predictions of the expected 21 cm EoR signal level. Using the upper limits set by these measurements, we generate new constraints on the brightness temperature of 21 cm emission in neutral regions for various reionization models. We show that for several ionization scenarios, our measurements are inconsistent with cold reionization. That is, heating of the neutral intergalactic medium (IGM) is necessary to remain consistent with the constraints we report. Hence, we have suggestive evidence that by z = 7.7, the H I has been warmed from its cold primordial state, probably by X-rays from high-mass X-ray binaries or miniquasars. The strength of this evidence depends on the ionization state of the IGM, which we are not yet able to constrain. This result is consistent with standard predictions for how reionization might have proceeded.

  2. Website-analyse

    DEFF Research Database (Denmark)

    Thorlacius, Lisbeth

    2009-01-01

    … or dead ends when he or she visits the site. Studies in the design and analysis of the visual and aesthetic aspects of planning and using websites have, however, only to a limited extent received reflective treatment. That is the background for this chapter, which opens with a review of the aesthetic … The website is increasingly the preferred medium for information retrieval, company presentation, e-commerce, entertainment, education and social contact. In step with this growing diversity of communication activities on the web, more focus has been placed on optimizing the design and … planning of the functional and content-related aspects of websites. There is a large body of theory and method books specializing in the technical issues of interaction and navigation, as well as the linguistic content of websites. The Danish HCI (Human Computer Interaction …

  3. A channel profile analyser

    International Nuclear Information System (INIS)

    Gobbur, S.G.

    1983-01-01

    It is well understood that, due to the wide-band noise present in a nuclear analog-to-digital converter, events at the boundaries of adjacent channels are shared. It is a difficult and laborious process to determine exactly the shape of the channels at the boundaries. A simple scheme has been developed for the direct display of the channel shape of any type of ADC on a cathode ray oscilloscope. This has been accomplished by sequentially incrementing the reference voltage of a precision pulse generator by a fraction of a channel and storing the ADC data in alternate memory locations of a multichannel pulse height analyser. Alternate locations are needed because of the sharing at the channel boundaries. In the flat region of the profile, alternate memory locations are channels with zero counts and channels with full-scale counts. At the boundaries, all memory locations have counts; this display directly reveals the channel boundaries. (orig.)

  4. NOAA's National Snow Analyses

    Science.gov (United States)

    Carroll, T. R.; Cline, D. W.; Olheiser, C. M.; Rost, A. A.; Nilsson, A. O.; Fall, G. M.; Li, L.; Bovitz, C. T.

    2005-12-01

    NOAA's National Operational Hydrologic Remote Sensing Center (NOHRSC) routinely ingests all of the electronically available, real-time, ground-based snow data; airborne snow water equivalent data; satellite areal extent of snow cover information; and numerical weather prediction (NWP) model forcings for the coterminous U.S. The NWP model forcings are physically downscaled from their native 13 km² spatial resolution to a 1 km² resolution for the CONUS. The downscaled NWP forcings drive an energy-and-mass-balance snow accumulation and ablation model at a 1 km² spatial resolution and a 1 hour temporal resolution for the country. The ground-based, airborne, and satellite snow observations are assimilated into the snow model's simulated state variables using a Newtonian nudging technique. The principal advantages of the assimilation technique are: (1) approximate balance is maintained in the snow model, (2) physical processes are easily accommodated in the model, and (3) asynoptic data are incorporated at the appropriate times. The snow model is reinitialized with the assimilated snow observations to generate a variety of snow products that combine to form NOAA's NOHRSC National Snow Analyses (NSA). The NOHRSC NSA incorporate all of the information necessary and available to produce a "best estimate" of real-time snow cover conditions at 1 km² spatial resolution and 1 hour temporal resolution for the country. The NOHRSC NSA consist of a variety of daily, operational products that characterize real-time snowpack conditions, including: snow water equivalent, snow depth, surface and internal snowpack temperatures, surface and blowing snow sublimation, and snowmelt for the CONUS. The products are generated and distributed in a variety of formats, including: interactive maps, time series, alphanumeric products (e.g., mean areal snow water equivalent on a hydrologic basin-by-basin basis), text and map discussions, map animations, and quantitative gridded products.
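Newtonian nudging, the assimilation technique named above, relaxes a model state toward observations by adding a term proportional to the observation-minus-model difference, so balance is approximately preserved and asynoptic data enter at their valid times. A minimal single-variable sketch; the melt rate, nudging weight, and observation series are hypothetical, not NOHRSC values:

```python
# One-variable toy model of snow water equivalent (SWE) with nudging.
def nudge_step(swe, melt_rate, obs, weight, dt):
    """One model step: physics tendency plus relaxation toward the
    observation when one is available (obs is None otherwise)."""
    tendency = -melt_rate * swe                 # simple ablation term
    if obs is not None:
        tendency += weight * (obs - swe)        # Newtonian nudging term
    return max(swe + dt * tendency, 0.0)

swe = 100.0                                     # initial SWE [mm]
obs_series = [None, None, 80.0, None, 75.0]     # asynoptic observations
for obs in obs_series:
    swe = nudge_step(swe, melt_rate=0.02, obs=obs, weight=0.5, dt=1.0)
```

The nudging weight sets the relaxation timescale: a large weight snaps the state to each observation, while a small one lets the model physics dominate between observation times.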

  5. Landscape and vegetation change on the Iberian Peninsula during the Roman Epoch - A reconstruction based on Geo-Bioarchives

    Science.gov (United States)

    Schneider, Heike

    2010-05-01

    Archaeological investigations suggest that the first strong landscape changes on the Iberian Peninsula resulted from the Roman occupation (Schattner 1998, Teichner 2007). Recent sedimentological investigations in flood plains, lagoons and estuaries do not reflect this development; they often show a decrease in sedimentation during this period (Thorndycraft & Benito 2006 a/b). In contrast, analyses of sediments from Roman dams (Hinderer et al. 2004, Solanas 2005) document massive erosion processes. The aim of the project presented here is to reconstruct the effects of the Roman land use system on vegetation and landscape development. To this end, different geo-bioarchives at several sites in Portugal and Spain (estuaries, palaeo-river channels and Roman dams) are currently being investigated at high temporal resolution using palynological and sedimentological methods. First results show that anthropogenic impact started clearly before Roman times, with a peak in human activity during the Iron Age (Schneider et al. 2008). During the Roman occupation phase different effects are visible: the inland areas document a massive increase in vegetation change, while the coastal areas, which had been more strongly developed earlier, show only slight and very local changes in land use and vegetation. References: Hinderer, M., Silva, C. & Ries, J. (2004): Erosion im zentralen Ebrobecken und Sedimentakkumulation in Talsperren. GeoLeipzig 2004, Geowissenschaften sichern Zukunft. Schriftenreihe der Dt. Geol. Gesell. 34. Schattner, T.G. (1998): Archäologischer Wegweiser durch Portugal. Kulturgeschichte der antiken Welt 74. Mainz. Schneider, H., Höfer, D., Trog, C., Daut, G., Hilbich, C. & Mäusbacher, R. (2008): Geoarcheological reconstruction of lagoon development in the Algarve Region (South Portugal). Terra Nostra 2008/2, Abstract Volume 12th IPC: 248. Solanas, O.L.-P. (2005): El aterramiento del embalse romano de Muel: implicaciones para la evolución de la erosión y el uso de los recursos hídricos en el valle del

  6. THE MULTI-EPOCH NEARBY CLUSTER SURVEY: TYPE Ia SUPERNOVA RATE MEASUREMENT IN z ∼ 0.1 CLUSTERS AND THE LATE-TIME DELAY TIME DISTRIBUTION

    Energy Technology Data Exchange (ETDEWEB)

    Sand, David J.; Graham, Melissa L. [Las Cumbres Observatory Global Telescope Network, 6740 Cortona Drive, Suite 102, Santa Barbara, CA 93117 (United States); Bildfell, Chris; Pritchet, Chris [Department of Physics and Astronomy, University of Victoria, P.O. Box 3055, STN CSC, Victoria BC V8W 3P6 (Canada); Zaritsky, Dennis; Just, Dennis W.; Herbert-Fort, Stephane [Steward Observatory, University of Arizona, Tucson, AZ 85721 (United States); Hoekstra, Henk [Leiden Observatory, Leiden University, Niels Bohrweg 2, NL-2333 CA Leiden (Netherlands); Sivanandam, Suresh [Dunlap Institute for Astronomy and Astrophysics, 50 St. George Street, Toronto, ON M5S 3H4 (Canada); Foley, Ryan J. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Mahdavi, Andisheh, E-mail: dsand@lcogt.net [Department of Physics and Astronomy, San Francisco State University, San Francisco, CA 94132 (United States)

    2012-02-20

    We describe the Multi-Epoch Nearby Cluster Survey, designed to measure the cluster Type Ia supernova (SN Ia) rate in a sample of 57 X-ray selected galaxy clusters, with redshifts of 0.05 < z < 0.15. Utilizing our real-time analysis pipeline, we spectroscopically confirmed twenty-three cluster SNe Ia, four of which were intracluster events. Using our deep Canada-France-Hawaii Telescope/MegaCam imaging, we measured total stellar luminosities in each of our galaxy clusters, and we performed detailed supernova (SN) detection efficiency simulations. Bringing these ingredients together, we measure an overall cluster SN Ia rate within R_200 (1 Mpc) of 0.042 +0.012/–0.010 +0.010/–0.008 SNuM (0.049 +0.016/–0.014 +0.005/–0.004 SNuM) and an SN Ia rate within red-sequence galaxies of 0.041 +0.015/–0.015 +0.005/–0.010 SNuM (0.041 +0.019/–0.015 +0.005/–0.004 SNuM). The red-sequence SN Ia rate is consistent with published rates in early-type/elliptical galaxies in the 'field'. Using our red-sequence SN Ia rate, and other cluster SN measurements in early-type galaxies up to z ∼ 1, we derive the late-time (>2 Gyr) delay time distribution (DTD) of SN Ia assuming a cluster early-type galaxy star formation epoch of z_f = 3. Assuming a power-law form for the DTD, Ψ(t) ∝ t^s, we find s = –1.62 ± 0.54. This result is consistent with predictions for the double degenerate SN Ia progenitor scenario (s ∼ –1) and is also in line with recent calculations for the double detonation explosion mechanism (s ∼ –2). The most recent calculations of the single degenerate scenario DTD predict an order-of-magnitude drop-off in SN Ia rate ∼6-7 Gyr after stellar formation, and the observed cluster rates cannot rule this out.

  7. MULTI-EPOCH IMAGING POLARIMETRY OF THE SiO MASERS IN THE EXTENDED ATMOSPHERE OF THE MIRA VARIABLE TX CAM

    International Nuclear Information System (INIS)

    Kemball, Athol J.; Diamond, Philip J.; Gonidakis, Ioannis; Mitra, Modhurita; Yim, Kijeong; Pan, K.-C.; Chiang, H.-F.

    2009-01-01

    We present a time series of synoptic images of the linearly polarized v = 1, J = 1-0 SiO maser emission toward the Mira variable, TX Cam. These data comprise 43 individual epochs at an approximate biweekly sampling over an optical pulsation phase range of φ = 0.68 to φ = 1.82. The images have an angular resolution of ∼500 μas and were obtained using the Very Long Baseline Array (VLBA), operating in the 43 GHz band in spectral-line, polarization mode. We have previously published the total intensity time series for this pulsation phase range; this paper serves to present the linearly polarized image sequence and an associated animation representing the evolution of the linear polarization morphology over time. We find a predominantly tangential polarization morphology, a high degree of persistence in linear polarization properties over individual component lifetimes, and stronger linear polarization in the inner projected shell than at larger projected shell radii. We present an initial polarization proper motion analysis examining the possible dynamical influence of magnetic fields in component motions in the extended atmospheres of late-type, evolved stars.

  8. Foreground and Sensitivity Analysis for Broadband (2D) 21 cm-Lyα and 21 cm-Hα Correlation Experiments Probing the Epoch of Reionization

    Science.gov (United States)

    Neben, Abraham R.; Stalder, Brian; Hewitt, Jacqueline N.; Tonry, John L.

    2017-11-01

    A detection of the predicted anticorrelation between 21 cm and either Lyα or Hα from the epoch of reionization (EoR) would be a powerful probe of the first galaxies. While 3D intensity maps isolate foregrounds in low-k∥ modes, infrared surveys cannot yet match the field of view and redshift resolution of radio intensity mapping experiments. In contrast, 2D (i.e., broadband) infrared intensity maps can be measured with current experiments and are limited by foregrounds instead of photon or thermal noise. We show that 2D experiments can measure most of the 3D fluctuation power at low k∥ … We set an upper limit on residual foregrounds of the 21 cm-Lyα cross-power spectrum at z ∼ 7 of Δ² < … (kJy sr⁻¹ mK) (95%) at ℓ ∼ 800. We predict levels of foreground correlation and sample variance noise in future experiments, showing that higher-resolution surveys such as LOFAR, SKA-LOW, and the Dark Energy Survey can start to probe models of the 21 cm-Lyα EoR cross spectrum.

  9. The magnetic epoch-6 carbon shift: a change in the ocean's 13C/12C ratio 6.2 million years ago

    International Nuclear Information System (INIS)

    Vincent, E.; Killingley, J.S.; Berger, W.H.

    1980-01-01

    Tropical Indian Ocean planktonic and benthic foraminifera have ¹³C/¹²C ratios which change abruptly within Magnetic Epoch 6, about 6.2 million years ago. All species analyzed in the Late Miocene section of DSDP Site 238 show a shift towards lighter values of δ¹³C by about 0.8‰. The oxygen isotope signal indicates that the pre-shift period is climatically quiet while the post-shift period has strong fluctuations. The authors suggest that the shift reflects a sudden increase in the rate of supply of organic carbon from coastal lowlands and from shelves exposed by regression, as well as a change in deep circulation patterns and ocean fertility. The event marks the transition of the ocean-atmosphere system from a quiet Early and Middle Neogene climate regime toward a Late Neogene regime characterized by climatic amplifying mechanisms (albedo feedback, bottom water production) located around the northern North Atlantic. The beginning of this regime may have been strongly influenced by the isolation of the Mediterranean basin. (Auth.)

  10. Heavy ion therapy: Bevalac epoch

    International Nuclear Information System (INIS)

    Castro, J.R.

    1993-10-01

    An overview of heavy ion therapy at the Bevalac complex (SuperHILAC linear accelerator + Bevatron) is given. Treatment planning, clinical results with helium ions on the skull base and uveal melanoma, clinical results with high-LET charged particles, neon radiotherapy of prostate cancer, heavy charged particle irradiation for unfavorable soft tissue sarcoma, preliminary results in heavy charged particle irradiation of bone sarcoma, and irradiation of bile duct carcinoma with charged particles and/or photons are all covered.

  11. The extended epoch of galaxy formation: Age dating of 3600 galaxies with 2 < z < 6.5 in the VIMOS Ultra-Deep Survey

    Science.gov (United States)

    Thomas, R.; Le Fèvre, O.; Scodeggio, M.; Cassata, P.; Garilli, B.; Le Brun, V.; Lemaux, B. C.; Maccagni, D.; Pforr, J.; Tasca, L. A. M.; Zamorani, G.; Bardelli, S.; Hathi, N. P.; Tresse, L.; Zucca, E.; Koekemoer, A. M.

    2017-06-01

    In this paper we aim at improving constraints on the epoch of galaxy formation by measuring the ages of 3597 galaxies with reliable spectroscopic redshifts 2 ≤ z ≤ 6.5 in the VIMOS Ultra Deep Survey (VUDS). We derive ages and other physical parameters from the simultaneous fitting with the GOSSIP+ software of observed UV rest-frame spectra and photometric data from the u band up to 4.5 μm using model spectra from composite stellar populations. We perform extensive simulations and conclude that at z ≥ 2 the joint analysis of spectroscopy and photometry, combined with restricted age possibilities when taking the age of the Universe into account, substantially reduces systematic uncertainties and degeneracies in the age derivation; we find that age measurements from this process are reliable. We find that galaxy ages range from very young with a few tens of million years to substantially evolved with ages up to 1.5 Gyr or more. This large age spread is similar for different age definitions including ages corresponding to the last major star formation event, stellar mass-weighted ages, and ages corresponding to the time since the formation of 25% of the stellar mass. We derive the formation redshift z_f from the measured ages and find galaxies that may have started forming stars as early as z_f ∼ 15. We produce the formation redshift function (FzF), the number of galaxies per unit volume formed at a redshift z_f, and compare the FzF in increasing observed redshift bins, finding a remarkably constant FzF. The FzF is parametrized with (1 + z)^ζ, where ζ ≃ 0.58 ± 0.06, indicating a smooth increase of about 2 dex from the earliest redshifts, z ∼ 15, to the lowest redshifts of our sample at z ∼ 2. Remarkably, this observed increase in the number of forming galaxies is of the same order as the observed rise in the star formation rate density (SFRD). The ratio of the comoving SFRD to the FzF gives an average SFR per galaxy of 7-17 M⊙/yr at z ∼ 4-6, in agreement with the

  12. A 6% measurement of the Hubble parameter at z ∼0.45: direct evidence of the epoch of cosmic re-acceleration

    International Nuclear Information System (INIS)

    Moresco, Michele; Cimatti, Andrea; Citro, Annalisa; Pozzetti, Lucia; Jimenez, Raul; Verde, Licia; Maraston, Claudia; Thomas, Daniel; Wilkinson, David; Tojeiro, Rita

    2016-01-01

    Deriving the expansion history of the Universe is a major goal of modern cosmology. To date, the most accurate measurements have been obtained with Type Ia Supernovae (SNe) and Baryon Acoustic Oscillations (BAO), providing evidence for the existence of a transition epoch at which the expansion rate changes from decelerated to accelerated. However, these results have been obtained within the framework of specific cosmological models that must be implicitly or explicitly assumed in the measurement. It is therefore crucial to obtain measurements of the accelerated expansion of the Universe independently of assumptions on cosmological models. Here we exploit the unprecedented statistics provided by the Baryon Oscillation Spectroscopic Survey (BOSS, [1-3]) Data Release 9 to provide new constraints on the Hubble parameter H(z) using the cosmic chronometers approach. We extract a sample of more than 130,000 of the most massive and passively evolving galaxies, obtaining five new cosmology-independent H(z) measurements in the redshift range 0.3 < z < 0.5, with an accuracy of ∼11-16% incorporating both statistical and systematic errors. Once combined, these measurements yield a 6% accuracy constraint of H(z = 0.4293) = 91.8 ± 5.3 km/s/Mpc. The new data are crucial to provide the first cosmology-independent determination of the transition redshift at high statistical significance, measuring z_t = 0.4 ± 0.1, and to significantly disfavor the null hypothesis of no transition between decelerated and accelerated expansion at the 99.9% confidence level. This analysis highlights the wide potential of the cosmic chronometers approach: it makes it possible to derive constraints on the expansion history of the Universe that are competitive with standard probes, and most importantly, since the estimates are independent of the cosmological model, it can constrain cosmologies beyond, and including, the ΛCDM model.
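The cosmic chronometers approach rests on the relation H(z) = -(1+z)^-1 dz/dt, so a differential age between two nearby redshift bins of passively evolving galaxies yields H directly, with no cosmological model assumed. A minimal sketch (the Δz and Δt values below are invented for illustration; only the central redshift matches the paper's measurement):

```python
GYR_TO_KMS_MPC = 977.79  # 1 Gyr^-1 expressed in km/s/Mpc

def hubble_from_chronometers(z_eff, dz, dt_gyr):
    """H(z) in km/s/Mpc from a redshift step dz spanned by a differential age dt (Gyr)."""
    # H(z) = -(1+z)^-1 * dz/dt; dz < 0 for increasing cosmic time, so H > 0
    return -GYR_TO_KMS_MPC * dz / ((1.0 + z_eff) * dt_gyr)

# Hypothetical numbers: two samples 0.1 apart in redshift whose stellar
# populations differ in age by 0.745 Gyr, centred at z = 0.4293
H = hubble_from_chronometers(0.4293, -0.1, 0.745)  # ~92 km/s/Mpc
```

The design point is that only the *differential* age between the two bins enters, so absolute age calibration errors largely cancel.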

  13. HST/COS OBSERVATIONS OF THE QUASAR HE 2347-4342: PROBING THE EPOCH OF He II PATCHY REIONIZATION AT REDSHIFTS z = 2.4-2.9

    International Nuclear Information System (INIS)

    Shull, J. Michael; France, Kevin; Danforth, Charles W.; Smith, Britton; Tumlinson, Jason

    2010-01-01

    We report ultraviolet spectra of the high-redshift (z_em ∼ 2.9) quasar, HE 2347-4342, taken by the Cosmic Origins Spectrograph (COS) on the Hubble Space Telescope. Spectra in the G130M (medium resolution, 1135-1440 Å) and G140L (low resolution, 1030-2000 Å) gratings exhibit patchy Gunn-Peterson absorption in the 303.78 Å Lyα line of He II between z = 2.39-2.87 (G140L) and z = 2.74-2.90 (G130M). With COS, we obtain better spectral resolution, higher signal-to-noise ratio (S/N), and better determined backgrounds than previous studies, with sensitivity to abundance fractions x_HeII ∼ 0.01 in filaments of the cosmic web. The He II optical depths from COS are higher than those with the Far Ultraviolet Spectroscopic Explorer and range from τ_HeII ≤ 0.02 to τ_HeII ≥ 5, with a slow recovery in mean optical depth to ⟨τ_HeII⟩ ≤ 2 at z_abs ∼ z_QSO and minimal 'proximity effect' of flux transmission at the He II edge. We propose a QSO systemic redshift z_QSO = 2.904 ± 0.002, some Δz = 0.019 higher than that derived from O I λ1302 emission. Three long troughs (4-10 Å or 25-60 Mpc comoving distance) of strong He II absorption between z = 2.75 and 2.90 are uncharacteristic of the intergalactic medium if He II reionized at z_r ∼ 3. Contrary to recent indirect estimates (z_r = 3.2 ± 0.2) from H I optical depths, the epoch of He II reionization may extend to z ≲ 2.7.

  14. An Extended ADOP for Performance Evaluation of Single-Frequency Single-Epoch Positioning by BDS/GPS in Asia-Pacific Region

    Directory of Open Access Journals (Sweden)

    Xin Liu

    2017-09-01

    Full Text Available Single-Frequency Single-Epoch (SFSE) high-precision positioning has always been a hot spot of Global Navigation Satellite Systems (GNSS), and the ambiguity dilution of precision (ADOP) is a well-known scalar measure for the success rate of ambiguity resolution. The traditional ADOP expression is complicated, so the SFSE extended ADOP (E-ADOP), with the newly defined Summation-Multiplication Ratio of Weight (SMRW) and two theorems for short baselines, was developed. This simplifies the ADOP expression; gives clearer insight into the influences of the SMRW and the number of satellites on the E-ADOP; and makes theoretical analysis of the E-ADOP more convenient than that of the ADOP, so that the E-ADOP value can be predicted more accurately than the ADOP value from the ADOP expression. The E-ADOP reveals that the number of satellites and the SMRW, or high-elevation satellites, are important for ADOP, and through the E-ADOP we studied which factor dominates ADOP in different conditions and makes ADOP differ between the BeiDou Navigation Satellite System (BDS), the Global Positioning System (GPS), and BDS/GPS. Based on experimental results of SFSE positioning with different baselines, some conclusions are made: (1) ADOP decreases when new satellites are added, mainly because the number of satellites becomes larger; (2) when the number of satellites is constant, ADOP is mainly affected by the SMRW; (3) in contrast to systems where low-elevation satellites are the majority or where low- and high-elevation satellites are equally distributed, in systems where high-elevation satellites are the majority the SMRW mainly makes ADOP smaller, even if there are fewer satellites than in the two previous cases, and the difference in numbers of satellites can be expanded as the proportion of high-elevation satellites becomes larger; and (4) ADOP of BDS is smaller than ADOP of GPS mainly because of its SMRW.
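The classical ADOP that the E-ADOP simplifies is the determinant-based scalar ADOP = |det Q_a|^(1/(2n)) for an n-ambiguity covariance matrix Q_a, from which a widely used success-rate approximation P ≈ (2Φ(1/(2·ADOP)) - 1)^n follows. A minimal numerical sketch of the traditional definition (the covariance matrix is a made-up example; this is not the paper's E-ADOP):

```python
import math
import numpy as np

def adop(Q):
    """Ambiguity dilution of precision (cycles) for an n x n ambiguity covariance Q."""
    n = Q.shape[0]
    return abs(np.linalg.det(Q)) ** (1.0 / (2.0 * n))

def success_rate(Q):
    """ADOP-based approximation to the ambiguity-resolution success rate."""
    n = Q.shape[0]
    # Phi is the standard normal CDF, written via the error function
    phi = 0.5 * (1.0 + math.erf(1.0 / (2.0 * adop(Q)) / math.sqrt(2.0)))
    return (2.0 * phi - 1.0) ** n

# Made-up example: three double-difference ambiguities, each with 0.1-cycle precision
Q = np.diag([0.01, 0.01, 0.01])
a = adop(Q)          # 0.1 cycles
p = success_rate(Q)  # close to 1 for ADOP well below ~0.15 cycles
```

With a diagonal Q the ADOP reduces to the geometric mean of the ambiguity standard deviations, which makes the 0.1-cycle result easy to verify by hand.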

  15. SARAS 2: a spectral radiometer for probing cosmic dawn and the epoch of reionization through detection of the global 21-cm signal

    Science.gov (United States)

    Singh, Saurabh; Subrahmanyan, Ravi; Shankar, N. Udaya; Rao, Mayuri Sathyanarayana; Girish, B. S.; Raghunathan, A.; Somashekar, R.; Srivani, K. S.

    2018-04-01

    The global 21-cm signal from Cosmic Dawn (CD) and the Epoch of Reionization (EoR), at redshifts z ˜ 6-30, probes the nature of first sources of radiation as well as physics of the Inter-Galactic Medium (IGM). Given that the signal is predicted to be extremely weak, of wide fractional bandwidth, and lies in a frequency range that is dominated by Galactic and Extragalactic foregrounds as well as Radio Frequency Interference, detection of the signal is a daunting task. Critical to the experiment is the manner in which the sky signal is represented through the instrument. It is of utmost importance to design a system whose spectral bandpass and additive spurious signals can be well calibrated and any calibration residual does not mimic the signal. Shaped Antenna measurement of the background RAdio Spectrum (SARAS) is an ongoing experiment that aims to detect the global 21-cm signal. Here we present the design philosophy of the SARAS 2 system and discuss its performance and limitations based on laboratory and field measurements. Laboratory tests with the antenna replaced with a variety of terminations, including a network model for the antenna impedance, show that the gain calibration and modeling of internal additive signals leave no residuals with Fourier amplitudes exceeding 2 mK, or residual Gaussians of 25 MHz width with amplitudes exceeding 2 mK. Thus, even accounting for reflection and radiation efficiency losses in the antenna, the SARAS 2 system is capable of detection of complex 21-cm profiles at the level predicted by currently favoured models for thermal baryon evolution.
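The redshift range quoted above maps to observing frequency through ν = ν₀/(1+z) with ν₀ = 1420.406 MHz, which is what places the z ∼ 6-30 signal in the foreground-dominated low-frequency radio band. A small sketch of that mapping:

```python
NU0_MHZ = 1420.406  # rest frequency of the 21-cm hyperfine line of neutral hydrogen

def freq_mhz(z):
    """Observed frequency (MHz) of the 21-cm line emitted at redshift z."""
    return NU0_MHZ / (1.0 + z)

def redshift(nu_mhz):
    """Redshift probed when observing the 21-cm line at frequency nu_mhz (MHz)."""
    return NU0_MHZ / nu_mhz - 1.0

# z ~ 6-30 spans roughly 46-203 MHz, well inside the FM/Galactic-foreground band
band = (freq_mhz(30.0), freq_mhz(6.0))
```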

  16. A 6% measurement of the Hubble parameter at z ∼0.45: direct evidence of the epoch of cosmic re-acceleration

    Energy Technology Data Exchange (ETDEWEB)

    Moresco, Michele; Cimatti, Andrea; Citro, Annalisa [Dipartimento di Fisica e Astronomia, Università di Bologna, V.le Berti Pichat, 6/2, 40127, Bologna (Italy); Pozzetti, Lucia [INAF—Osservatorio Astronomico di Bologna, via Ranzani 1, 40127 Bologna (Italy); Jimenez, Raul; Verde, Licia [ICREA and ICC, University of Barcelona (IEEC-UB), Barcelona 08028 (Spain); Maraston, Claudia; Thomas, Daniel; Wilkinson, David [Institute of Cosmology and Gravitation, Dennis Sciama Building, University of Portsmouth, Burnaby Road, Portsmouth, PO1 3FX (United Kingdom); Tojeiro, Rita, E-mail: michele.moresco@unibo.it, E-mail: lucia.pozzetti@oabo.inaf.it, E-mail: a.cimatti@unibo.it, E-mail: rauljimenez@g.harvard.edu, E-mail: claudia.maraston@port.ac.uk, E-mail: liciaverde@icc.ub.edu, E-mail: daniel.thomas@port.ac.uk, E-mail: annalisa.citro@unibo.it, E-mail: rmftr@st-andrews.ac.uk, E-mail: david.wilkinson@port.ac.uk [School of Physics and Astronomy, University of St. Andrews, Saint Andrews, KY16 9SS (United Kingdom)

    2016-05-01

    Deriving the expansion history of the Universe is a major goal of modern cosmology. To date, the most accurate measurements have been obtained with Type Ia Supernovae (SNe) and Baryon Acoustic Oscillations (BAO), providing evidence for the existence of a transition epoch at which the expansion rate changes from decelerated to accelerated. However, these results have been obtained within the framework of specific cosmological models that must be implicitly or explicitly assumed in the measurement. It is therefore crucial to obtain measurements of the accelerated expansion of the Universe independently of assumptions on cosmological models. Here we exploit the unprecedented statistics provided by the Baryon Oscillation Spectroscopic Survey (BOSS, [1-3]) Data Release 9 to provide new constraints on the Hubble parameter H(z) using the cosmic chronometers approach. We extract a sample of more than 130,000 of the most massive and passively evolving galaxies, obtaining five new cosmology-independent H(z) measurements in the redshift range 0.3 < z < 0.5, with an accuracy of ∼11-16% incorporating both statistical and systematic errors. Once combined, these measurements yield a 6% accuracy constraint of H(z = 0.4293) = 91.8 ± 5.3 km/s/Mpc. The new data are crucial to provide the first cosmology-independent determination of the transition redshift at high statistical significance, measuring z_t = 0.4 ± 0.1, and to significantly disfavor the null hypothesis of no transition between decelerated and accelerated expansion at the 99.9% confidence level. This analysis highlights the wide potential of the cosmic chronometers approach: it makes it possible to derive constraints on the expansion history of the Universe that are competitive with standard probes, and most importantly, since the estimates are independent of the cosmological model, it can constrain cosmologies beyond, and including, the ΛCDM model.

  17. A PRECISION MULTI-BAND TWO-EPOCH PHOTOMETRIC CATALOG OF 44 MILLION SOURCES IN THE NORTHERN SKY FROM A COMBINATION OF THE USNO-B AND SLOAN DIGITAL SKY SURVEY CATALOGS

    International Nuclear Information System (INIS)

    Madsen, G. J.; Gaensler, B. M.

    2013-01-01

    A key science driver for the next generation of wide-field optical and radio surveys is the exploration of the time variable sky. These surveys will have unprecedented sensitivity and areal coverage, but will be limited in their ability to detect variability on time scales longer than the lifetime of the surveys. We present a new precision, multi-epoch photometric catalog that spans 60 yr by combining the US Naval Observatory-B (USNO-B) and Sloan Digital Sky Survey (SDSS) Data Release 9 (DR9) catalogs. We recalibrate the photometry of the original USNO-B catalog and create a catalog with two epochs of photometry in up to five different bands for 43,647,887 optical point sources that lie in the DR9 footprint of the northern sky. The recalibrated objects span a magnitude range 14 ≲ m ≲ 20 and are accurate to ≈0.1 mag. We minimize the presence of spurious objects and those with inaccurate magnitudes by identifying and removing several sources of systematic errors in the two originating catalogs, with a focus on spurious objects that exhibit large apparent magnitude variations. After accounting for these effects, we find ≈250,000 stars and quasars that show significant (≥4σ) changes in brightness between the USNO-B and SDSS DR9 epochs. We discuss the historical value of the catalog and its application to the study of long time scale, large amplitude variable stars and quasars
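The "significant (≥4σ) changes in brightness" criterion used to flag candidate variables from two epochs amounts to comparing the magnitude difference with the combined photometric errors. A minimal sketch (the magnitudes and error values below are invented for illustration):

```python
import math

def variability_sigma(m1, m2, err1, err2):
    """Significance (in sigma) of the magnitude change between two epochs."""
    return abs(m1 - m2) / math.sqrt(err1 ** 2 + err2 ** 2)

def is_variable(m1, m2, err1, err2, threshold=4.0):
    """Flag a source whose two-epoch brightness change exceeds the threshold."""
    return variability_sigma(m1, m2, err1, err2) >= threshold

# Hypothetical source: a 0.8 mag change with ~0.1 mag errors in each catalog
sig = variability_sigma(17.2, 18.0, 0.1, 0.1)  # ~5.7 sigma
```

At the catalog's quoted ≈0.1 mag accuracy, a 4σ cut corresponds to changes of roughly 0.6 mag, which is why the catalog targets large-amplitude, long-timescale variables.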

  18. THE TYPE II SUPERNOVA RATE IN z ≈ 0.1 GALAXY CLUSTERS FROM THE MULTI-EPOCH NEARBY CLUSTER SURVEY

    Energy Technology Data Exchange (ETDEWEB)

    Graham, M. L.; Sand, D. J. [Las Cumbres Observatory Global Telescope Network, 6740 Cortona Drive, Suite 102, Santa Barbara, CA 93117 (United States); Bildfell, C. J.; Pritchet, C. J. [Department of Physics and Astronomy, University of Victoria, P.O. Box 3055, STN CSC, Victoria BC V8W 3P6 (Canada); Zaritsky, D.; Just, D. W.; Herbert-Fort, S. [Steward Observatory, University of Arizona, Tucson, AZ 85721 (United States); Hoekstra, H. [Leiden Observatory, Leiden University, Niels Bohrweg 2, NL-2333 CA Leiden (Netherlands); Sivanandam, S. [Dunlap Institute for Astronomy and Astrophysics, 50 St. George St., Toronto, ON M5S 3H4 (Canada); Foley, R. J. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States)

    2012-07-01

    We present seven spectroscopically confirmed Type II cluster supernovae (SNe II) discovered in the Multi-Epoch Nearby Cluster Survey, a supernova survey targeting 57 low-redshift 0.05 < z < 0.15 galaxy clusters with the Canada-France-Hawaii Telescope. We find the rate of Type II supernovae within R_200 of z ≈ 0.1 galaxy clusters to be 0.026^{+0.085}_{-0.018} (stat) ^{+0.003}_{-0.001} (sys) SNuM. Surprisingly, one SN II is in a red-sequence host galaxy that shows no clear evidence of recent star formation (SF). This is unambiguous evidence in support of ongoing, low-level SF in at least some cluster elliptical galaxies, and illustrates that galaxies that appear to be quiescent cannot be assumed to host only Type Ia SNe. Based on this single SN II we make the first measurement of the SN II rate in red-sequence galaxies, and find it to be 0.007^{+0.014}_{-0.007} (stat) ^{+0.009}_{-0.001} (sys) SNuM. We also make the first derivation of cluster specific star formation rates (sSFR) from cluster SN II rates. We find that for all galaxy types the sSFR is 5.1^{+15.8}_{-3.1} (stat) ± 0.9 (sys) M⊙ yr^{-1} (10^{12} M⊙)^{-1}, and for red-sequence galaxies only it is 2.0^{+4.2}_{-0.9} (stat) ± 0.4 (sys) M⊙ yr^{-1} (10^{12} M⊙)^{-1}. These values agree with SFRs measured from infrared and ultraviolet photometry, and Hα emission from optical spectroscopy. Additionally, we use the SFR derived from our SN II rate to show that although a small fraction of cluster Type Ia SNe may originate in the young stellar population and experience a short delay time, these results do not preclude the use of cluster SN Ia rates to derive the late-time delay time distribution for SNe Ia.

  19. Constraining the Evolution of the Ionizing Background and the Epoch of Reionization with z~6 Quasars. II. A Sample of 19 Quasars

    Science.gov (United States)

    Fan, Xiaohui; Strauss, Michael A.; Becker, Robert H.; White, Richard L.; Gunn, James E.; Knapp, Gillian R.; Richards, Gordon T.; Schneider, Donald P.; Brinkmann, J.; Fukugita, Masataka

    2006-07-01

    We study the evolution of the ionization state of the intergalactic medium (IGM) at the end of the reionization epoch using moderate-resolution spectra of a sample of 19 quasars at 5.74 < z < 6.42. At z > 5.7 the Gunn-Peterson (GP) optical depth evolution changes from τ_GP^eff ∝ (1+z)^4.3 to (1+z)^≳11, and the average length of dark gaps with τ > 3.5 increases from <10 to >80 comoving Mpc. The dispersion of IGM properties along different lines of sight also increases rapidly, implying fluctuations by a factor of ≳4 in the UV background at z > 6, when the mean free path of UV photons is comparable to the correlation length of the star-forming galaxies that are thought to have caused reionization. The mean length of dark gaps shows the most dramatic increase at z ∼ 6, as well as the largest line-of-sight variations. We suggest using dark gap statistics as a powerful probe of the ionization state of the IGM at yet higher redshift. The sizes of H II regions around luminous quasars decrease rapidly toward higher redshift, suggesting that the neutral fraction of the IGM has increased by a factor of ≳10 from z = 5.7 to 6.4, consistent with the value derived from the GP optical depth. The mass-averaged neutral fraction is 1%-4% at z ∼ 6.2 based on the GP optical depth and H II region size measurements. The observations suggest that z ∼ 6 is the end of the overlapping stage of reionization and are inconsistent with a mostly neutral IGM at z ∼ 6, as indicated by the finite length of the dark absorption gaps. Based on observations obtained with the Sloan Digital Sky Survey, at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration, made possible by the generous financial support of the W. M. Keck Foundation, with the MMT Observatory, a joint facility of the University of Arizona and the Smithsonian Institution, and with the Kitt Peak National Observatory 4 m Mayall Telescope. This paper
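The effective GP optical depth used throughout this record is defined from the mean transmitted flux in the Lyα forest, τ_eff = -ln⟨F⟩, and its low-redshift evolution is modelled as a power law in (1+z). A minimal sketch (the flux values and the power-law normalisation are invented for illustration; only the 4.3 slope comes from the abstract):

```python
import math

def tau_eff(fluxes):
    """Effective Gunn-Peterson optical depth from transmitted-flux samples."""
    mean_f = sum(fluxes) / len(fluxes)
    return -math.log(mean_f)

def tau_power_law(z, tau0=0.85, z0=5.7, slope=4.3):
    """Illustrative (1+z)^4.3 scaling, anchored at an assumed tau0 at z0."""
    return tau0 * ((1.0 + z) / (1.0 + z0)) ** slope

# Hypothetical pixels with mean transmitted flux 0.10 -> tau_eff ~ 2.3
t = tau_eff([0.12, 0.08, 0.10])
```

Note that τ_eff is built from the mean *flux*, not the mean optical depth, so a few highly transmissive pixels can dominate and keep τ_eff finite even when most of the spectrum is saturated.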

  20. Sample preparation in foodomic analyses.

    Science.gov (United States)

    Martinović, Tamara; Šrajer Gajdošik, Martina; Josić, Djuro

    2018-04-16

    Representative sampling and adequate sample preparation are key factors for the successful performance of further steps in foodomic analyses, as well as for correct data interpretation. Incorrect sampling and improper sample preparation can be sources of severe bias in foodomic analyses, and it is well known that neither wrong sampling nor wrong sample treatment can be corrected afterwards. These facts, frequently neglected in the past, are now taken into consideration, and the progress in sampling and sample preparation in foodomics is reviewed here. We report the use of highly sophisticated instruments for both high-performance and high-throughput analyses, as well as miniaturization and the use of laboratory robotics in metabolomics, proteomics, peptidomics and genomics. This article is protected by copyright. All rights reserved.

  1. Superposing pure quantum states with partial prior information

    Science.gov (United States)

    Dogra, Shruti; Thomas, George; Ghosh, Sibasish; Suter, Dieter

    2018-05-01

    The principle of superposition is an intriguing feature of quantum mechanics, which is regularly exploited in many different circumstances. A recent work [M. Oszmaniec et al., Phys. Rev. Lett. 116, 110403 (2016), 10.1103/PhysRevLett.116.110403] shows that the fundamentals of quantum mechanics restrict the process of superimposing two unknown pure states, even though it is possible to superimpose two quantum states with partial prior knowledge. The prior knowledge imposes geometrical constraints on the choice of input states. We discuss an experimentally feasible protocol to superimpose multiple pure states of a d -dimensional quantum system and carry out an explicit experimental realization for two single-qubit pure states with partial prior information on a two-qubit NMR quantum information processor.

  2. Natural convection in superposed fluid-porous layers

    CERN Document Server

    Bagchi, Aniruddha

    2013-01-01

    Natural Convection in Composite Fluid-Porous Domains provides a timely overview of the current state of understanding on the phenomenon of convection in composite fluid-porous layers. Natural convection in horizontal fluid-porous layers has received renewed attention because of engineering problems such as post-accident cooling of nuclear reactors, contaminant transport in groundwater, and convection in fibrous insulation systems. Because applications of the problem span many scientific domains, the book serves as a valuable resource for a wide audience.

  3. Monogamy relations of quantum entanglement for partially coherently superposed states

    Science.gov (United States)

    Shi, Xian

    2017-12-01

    Not Available Project partially supported by the National Key Research and Development Program of China (Grant No. 2016YFB1000902), the National Natural Science Foundation of China (Grant Nos. 61232015, 61472412, and 61621003), the Beijing Science and Technology Project (2016), Tsinghua-Tencent-AMSS-Joint Project (2016), and the Key Laboratory of Mathematics Mechanization Project: Quantum Computing and Quantum Information Processing.

  4. The Wigner distribution function for squeezed vacuum superposed state

    International Nuclear Information System (INIS)

    Zayed, E.M.E.; Daoud, A.S.; AL-Laithy, M.A.; Naseem, E.N.

    2005-01-01

    In this paper, we construct the Wigner distribution function for a single-mode squeezed vacuum mixed state, which is a superposition of squeezed vacuum states. This state is defined as a P-representation of the density operator. The obtained Wigner function depends, besides the phase-space variables, on the mean number of photons occupied by the coherent state of the mode. This mean number is related to the mean free path through a given relation, which enables us to measure this number experimentally by measuring the mean free path.

  5. Descriptive Analyses of Mechanical Systems

    DEFF Research Database (Denmark)

    Andreasen, Mogens Myrup; Hansen, Claus Thorp

    2003-01-01

    Foreword: Product analysis and technology analysis can be carried out with a broad socio-technical aim in order to understand cultural, sociological, design-related, business-related, and many other aspects. One sub-area of this is the systemic analysis and description of products and systems. The present compendium…

  6. Analysing and Comparing Encodability Criteria

    Directory of Open Access Journals (Sweden)

    Kirstin Peters

    2015-08-01

    Full Text Available Encodings, or proofs of their absence, are the main way to compare process calculi. To analyse the quality of encodings and to rule out trivial or meaningless encodings, they are augmented with quality criteria. There exist many different criteria, and different variants of criteria, for reasoning in different settings. This leads to incomparable results. Moreover, it is not always clear whether the criteria used to obtain a result in a particular setting do indeed fit that setting. We show how to formally reason about and compare encodability criteria by mapping them onto requirements on a relation between source and target terms that is induced by the encoding function. In particular, we analyse the common criteria full abstraction, operational correspondence, divergence reflection, success sensitiveness, and respect of barbs; e.g., we analyse the exact nature of the simulation relation (coupled simulation versus bisimulation) that is induced by different variants of operational correspondence. In this way we reduce the problem of analysing or comparing encodability criteria to the better-understood problem of comparing relations on processes.

  7. Analysing Children's Drawings: Applied Imagination

    Science.gov (United States)

    Bland, Derek

    2012-01-01

    This article centres on a research project in which freehand drawings provided a richly creative and colourful data source of children's imagined, ideal learning environments. Issues concerning the analysis of the visual data are discussed, in particular, how imaginative content was analysed and how the analytical process was dependent on an…

  8. Impact analyses after pipe rupture

    International Nuclear Information System (INIS)

    Chun, R.C.; Chuang, T.Y.

    1983-01-01

    Two of the French pipe whip experiments are reproduced with the computer code WIPS. The WIPS results are in good agreement with the experimental data and with the French computer code TEDEL. This justifies the use of the WIPS pipe element, in conjunction with its U-bar element, in a simplified method of impact analysis.

  9. Millifluidic droplet analyser for microbiology

    NARCIS (Netherlands)

    Baraban, L.; Bertholle, F.; Salverda, M.L.M.; Bremond, N.; Panizza, P.; Baudry, J.; Visser, de J.A.G.M.; Bibette, J.

    2011-01-01

    We present a novel millifluidic droplet analyser (MDA) for precisely monitoring the dynamics of microbial populations over multiple generations in numerous (∼10³) aqueous emulsion droplets (100 nL). As a first application, we measure the growth rate of a bacterial strain and determine the minimal

  10. Analyser of sweeping electron beam

    International Nuclear Information System (INIS)

    Strasser, A.

    1993-01-01

    The electron beam analyser has an array of conductors that can be positioned in the field of the sweeping beam, an electronic signal treatment system for the analysis of the signals generated in the conductors by the incident electrons and a display for the different characteristics of the electron beam

  11. EVIDENCE FOR PopIII-LIKE STELLAR POPULATIONS IN THE MOST LUMINOUS Lyα EMITTERS AT THE EPOCH OF REIONIZATION: SPECTROSCOPIC CONFIRMATION

    Energy Technology Data Exchange (ETDEWEB)

    Sobral, David; Santos, Sérgio [Instituto de Astrofísica e Ciências do Espaço, Universidade de Lisboa, OAL, Tapada da Ajuda, PT1349-018 Lisbon (Portugal); Matthee, Jorryt; Röttgering, Huub J. A. [Leiden Observatory, Leiden University, P.O. Box 9513, NL-2300 RA Leiden (Netherlands); Darvish, Behnam; Mobasher, Bahram; Hemmati, Shoubaneh [Department of Physics and Astronomy, University of California, 900 University Avenue, Riverside, CA 92521 (United States); Schaerer, Daniel, E-mail: sobral@iastro.pt [Observatoire de Genève, Département d’Astronomie, Université de Genève, 51 Ch. des Maillettes, 1290 Versoix (Switzerland)

    2015-08-01

    Faint Lyα emitters become increasingly rarer toward the reionization epoch (z ∼ 6-7). However, observations from a very large (∼5 deg²) Lyα narrow-band survey at z = 6.6 show that this is not the case for the most luminous emitters, capable of ionizing their own local bubbles. Here we present follow-up observations of the two most luminous Lyα candidates in the COSMOS field: “MASOSA” and “CR7.” We used X-SHOOTER, SINFONI, and FORS2 on the Very Large Telescope, and DEIMOS on Keck, to confirm both candidates beyond any doubt. We find redshifts of z = 6.541 and z = 6.604 for “MASOSA” and “CR7,” respectively. MASOSA has a strong detection in Lyα with a line width of 386 ± 30 km s⁻¹ (FWHM) and with very high EW_0 (>200 Å), but is undetected in the continuum, implying very low stellar mass and a likely young, metal-poor stellar population. “CR7,” with an observed Lyα luminosity of 10^{43.92±0.05} erg s⁻¹, is the most luminous Lyα emitter ever found at z > 6 and is spatially extended (∼16 kpc). “CR7” reveals a narrow Lyα line with 266 ± 15 km s⁻¹ FWHM, being detected in the near-infrared (NIR) (rest-frame UV; β = −2.3 ± 0.1) and in IRAC/Spitzer. We detect a narrow He II 1640 Å emission line (6σ, FWHM = 130 ± 30 km s⁻¹) in CR7 which can explain the clear excess seen in the J-band photometry (EW_0 ∼ 80 Å). We find no other emission lines from the UV to the NIR in our X-SHOOTER spectra (He II/O III] 1663 Å > 3 and He II/C III] 1908 Å > 2.5). We conclude that CR7 is best explained by a combination of a PopIII-like population, which dominates the rest-frame UV and the nebular emission, and a more normal stellar population, which presumably dominates the mass. Hubble Space Telescope/WFC3 observations show that the light is indeed spatially separated between a very blue component, coincident with Lyα and He ii emission, and two red components (∼5 kpc away), which

  12. The Evolution of the Faint End of the UV Luminosity Function during the Peak Epoch of Star Formation (1 < z < 3)

    Science.gov (United States)

    Alavi, Anahita; Siana, Brian; Richard, Johan; Rafelski, Marc; Jauzac, Mathilde; Limousin, Marceau; Freeman, William R.; Scarlata, Claudia; Robertson, Brant; Stark, Daniel P.; Teplitz, Harry I.; Desai, Vandana

    2016-11-01

    We present a robust measurement of the rest-frame UV luminosity function (LF) and its evolution during the peak epoch of cosmic star formation at 1 < z < 3. We use our deep near-ultraviolet imaging from WFC3/UVIS on the Hubble Space Telescope and existing Advanced Camera for Surveys (ACS)/WFC and WFC3/IR imaging of three lensing galaxy clusters, Abell 2744 and MACS J0717 from the Hubble Frontier Field survey and Abell 1689. Combining deep UV imaging and high magnification from strong gravitational lensing, we use photometric redshifts to identify 780 ultra-faint galaxies with M_UV < −12.5 AB mag at 1 < z < 3. From these samples, we identified five new, faint, multiply imaged systems in A1689. We run a Monte Carlo simulation to estimate the completeness correction and effective volume for each cluster using the latest published lensing models. We compute the rest-frame UV LF and find best-fit faint-end slopes of α = −1.56 ± 0.04, α = −1.72 ± 0.04, and α = −1.94 ± 0.06 at 1.0 < z < 1.6, 1.6 < z < 2.2, and 2.2 < z < 3.0, respectively. Our results demonstrate that the UV LF becomes steeper from z ∼ 1.3 to z ∼ 2.6 with no sign of a turnover down to M_UV = −14 AB mag. We further derive the UV LFs using the Lyman break “dropout” selection and confirm the robustness of our conclusions against different selection methodologies. Because the sample sizes are so large and extend to such faint luminosities, the statistical uncertainties are quite small, and systematic uncertainties (due to the assumed size distribution, for example) likely dominate. If we restrict our analysis to galaxies and volumes above >50% completeness in order to minimize these systematics, we still find that the faint-end slope is steep and getting steeper with redshift, though with slightly shallower (less negative) values (α = −1.55 ± 0.06, −1.69 ± 0.07, and −1.79 ± 0.08 for z ∼ 1.3, 1.9, and 2.6, respectively). Finally, we conclude that the faint star
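The faint-end slope α quoted above enters through the Schechter form of the luminosity function, which in absolute magnitudes reads φ(M) = 0.4 ln10 · φ* · 10^{0.4(α+1)(M*−M)} · exp(−10^{0.4(M*−M)}). A minimal sketch of how α controls the faint end (φ* and M* below are placeholder values, not the paper's fits; only α = −1.94 is taken from the abstract):

```python
import math

def schechter_mag(M, phi_star, M_star, alpha):
    """Schechter luminosity function in magnitudes: number density per mag."""
    x = 10.0 ** (0.4 * (M_star - M))  # luminosity in units of L*
    return 0.4 * math.log(10.0) * phi_star * x ** (alpha + 1.0) * math.exp(-x)

# Placeholder normalisation and knee; alpha = -1.94 is the paper's z ~ 2.6 slope
phi_faint = schechter_mag(-14.0, 1e-3, -20.0, -1.94)
phi_bright = schechter_mag(-19.0, 1e-3, -20.0, -1.94)
# With a steep slope (alpha < -1), galaxies near M_UV = -14 are far more
# numerous per magnitude than those one magnitude below the knee
```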

  13. Workload analyse of assembling process

    Science.gov (United States)

    Ghenghea, L. D.

    2015-11-01

    The workload is the most important indicator for managers responsible for industrial technological processes, whether these are automated, mechanized, or simply manual; in each case, machines or workers will be the focus of workload measurements. The paper presents a workload analysis of a largely manual assembly technology for a roller-bearing assembly process, carried out in a large company with integrated bearing manufacturing processes. In this analysis, the delay sampling technique was used to identify and divide all bearing assemblers' activities, and to obtain information about how much of the 480-minute working day workers devote to each activity. The study shows some ways to increase process productivity without additional investment, and also indicates that process automation could be the way to achieve maximum productivity.
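The delay (work) sampling technique mentioned above estimates the share of the working day spent on each activity from random-instant observations; the observed proportion p of an activity carries a standard error of sqrt(p(1-p)/n) for n observations. A minimal sketch (the activity names and tallies below are invented, not data from the study):

```python
import math

def work_sampling(observations, day_minutes=480):
    """Estimated minutes per activity, with the standard error of each estimate."""
    n = sum(observations.values())
    result = {}
    for activity, count in observations.items():
        p = count / n                          # observed proportion of instants
        se = math.sqrt(p * (1.0 - p) / n)      # binomial standard error of p
        result[activity] = (p * day_minutes, se * day_minutes)
    return result

# Hypothetical tallies from 200 random observations of one assembler
obs = {"assembling": 120, "handling": 50, "idle/delay": 30}
estimate = work_sampling(obs)  # assembling -> 288 min of a 480-min day
```

The standard-error term is what fixes the number of observation rounds: halving the uncertainty on a proportion requires roughly four times as many random observations.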

  14. Mitogenomic analyses from ancient DNA

    DEFF Research Database (Denmark)

    Paijmans, Johanna L. A.; Gilbert, Tom; Hofreiter, Michael

    2013-01-01

    The analysis of ancient DNA is playing an increasingly important role in conservation genetic, phylogenetic and population genetic analyses, as it allows incorporating extinct species into DNA sequence trees and adds time depth to population genetics studies. For many years, these types of DNA...... analyses (whether using modern or ancient DNA) were largely restricted to the analysis of short fragments of the mitochondrial genome. However, due to many technological advances during the past decade, a growing number of studies have explored the power of complete mitochondrial genome sequences...... yielded major progress with regard to both the phylogenetic positions of extinct species, as well as resolving population genetics questions in both extinct and extant species....

  15. Recriticality analyses for CAPRA cores

    International Nuclear Information System (INIS)

    Maschek, W.; Thiem, D.

    1995-01-01

    The first scoping calculations performed show that the energetics levels from recriticalities in CAPRA cores are in the same range as in conventional cores. However, considerable uncertainties exist and further analyses are necessary. Additional investigations are performed for the separation scenarios of fuel/steel/inert and matrix material, as a large influence of these processes on possible ramp rates and kinetics parameters was detected in the calculations. (orig./HP)

  17. Technical center for transportation analyses

    International Nuclear Information System (INIS)

    Foley, J.T.

    1978-01-01

    A description is presented of an information search/retrieval/research activity of Sandia Laboratories which provides technical environmental information which may be used in transportation risk analyses, environmental impact statements, development of design and test criteria for packaging of energy materials, and transportation mode research studies. General activities described are: (1) history of center development; (2) environmental information storage/retrieval system; (3) information searches; (4) data needs identification; and (5) field data acquisition system and applications

  18. Methodology of cost benefit analyses

    International Nuclear Information System (INIS)

    Patrik, M.; Babic, P.

    2000-10-01

    The report addresses financial aspects of proposed investments and other steps which are intended to contribute to nuclear safety. The aim is to provide introductory insight into the procedures and potential of cost-benefit analyses as a routine guide when making decisions on costly provisions as one of the tools to assess whether a particular provision is reasonable. The topic is applied to the nuclear power sector. (P.A.)

  19. Chapter No.4. Safety analyses

    International Nuclear Information System (INIS)

    2002-01-01

    In 2001 the activity in the field of safety analyses focused on verification of the safety analysis reports for NPP V-2 Bohunice and NPP Mochovce concerning the new profiled fuel, and on the probabilistic safety assessment study for NPP Mochovce. Calculational safety analyses were performed and expert reviews for internal UJD needs were elaborated. An important part of the work was also the solution of scientific and technical tasks appointed within bilateral co-operation projects between UJD and its international partner organisations, as well as within international projects ordered and financed by the European Commission. All these activities served as independent support for UJD in its deterministic and probabilistic safety assessment of nuclear installations. Special attention was paid to a review of the level 1 probabilistic safety assessment study for NPP Mochovce. The study elaborated the probabilistic safety analysis of the NPP at full power operation, and the contribution of the technical and operational improvements to risk reduction was quantified. The core damage frequency of the reactor was calculated, and the dominant initiating events and accident sequences with the major contributions to the risk were determined. The target of the review was to assess the acceptability of the sources of input information, assumptions, models, data, analyses and obtained results, so that the probabilistic model could give a realistic picture of the NPP. The review of the study was performed by UJD in co-operation with the IAEA (IPSART mission) as well as with other external organisations which were not involved in the elaboration of the reviewed document and the probabilistic model of the NPP. The review was made in accordance with the IAEA guidelines and methodical documents of UJD and US NRC.
In the field of calculational safety analyses the UJD activity was focused on the analysis of an operational event and analyses of the selected accident scenarios

  20. Analysing the Wrongness of Killing

    DEFF Research Database (Denmark)

    Di Nucci, Ezio

    2014-01-01

    This article provides an in-depth analysis of the wrongness of killing by comparing different versions of three influential views: the traditional view that killing is always wrong; the liberal view that killing is wrong if and only if the victim does not want to be killed; and Don Marquis‟ future...... of value account of the wrongness of killing. In particular, I illustrate the advantages that a basic version of the liberal view and a basic version of the future of value account have over competing alternatives. Still, ultimately none of the views analysed here are satisfactory; but the different...

  1. Methodological challenges in carbohydrate analyses

    Directory of Open Access Journals (Sweden)

    Mary Beth Hall

    2007-07-01

    Carbohydrates can provide up to 80% of the dry matter in animal diets, yet their specific evaluation for research and diet formulation is only now becoming a focus in the animal sciences. Partitioning of dietary carbohydrates for nutritional purposes should reflect differences in digestion and fermentation characteristics and effects on animal performance. Key challenges in designating nutritionally important carbohydrate fractions include classifying the carbohydrates in terms of nutritional characteristics and selecting analytical methods that describe the desired fraction. The relative lack of information on the digestion characteristics of various carbohydrates and their interactions with other fractions in diets means that fractions will not soon be perfectly established. Developing a system of carbohydrate analysis that could be used across animal species could enhance the utility of analyses and the amount of data we can obtain on the dietary effects of carbohydrates. Based on the quantities present in diets and apparent effects on animal performance, some nutritionally important classes of carbohydrates that may be valuable to measure include sugars, starch, fructans, insoluble fiber, and soluble fiber. Essential to the selection of methods for these fractions is agreement on precisely which carbohydrates should be included in each. Each of these fractions has analyses that could potentially be used to measure it, but most of the available methods have weaknesses that must be evaluated to see whether they are fatal and the assay is unusable, or whether the assay may still be made workable. Factors we must consider as we seek to analyze carbohydrates to describe diets: Does the assay accurately measure the desired fraction? Is the assay for research, regulatory, or field use (which affects considerations of acceptable costs and throughput)? What are acceptable accuracy and variability of measures? Is the assay robust (which enhances the accuracy of values)?
For some carbohydrates, we

  2. Deformation analyse of the high point field Košická Nová Ves

    Directory of Open Access Journals (Sweden)

    Sedlák Vladimír

    2003-09-01

    From the scientific point of view, deformation measurements serve for an objective determination of movements; from the technical point of view, they inform the choice of building technologies and construction procedures. Movements determined by means of geodetic terrestrial or satellite navigation technologies give information about displacements at a concrete time, on the basis of repeated geodetic measurements in concrete time intervals (epochs). Investigation of the level (vertical) deformation of the points of the monitoring station stabilized in the fill slope territory of Košická Nová Ves is the main task of the presented paper. Levelling measurements were realized in autumn 2000 (epoch 2000.9), considered the first epoch of the deformation measurement, and in spring 2001 (epoch 2001.3), considered the second epoch of the deformation measurement.
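    The two-epoch comparison described above reduces to differencing repeated height measurements and testing each difference against the combined measurement accuracy; a minimal sketch with hypothetical heights and standard deviations:

```python
import math

def vertical_displacement(h_epoch1, h_epoch2, sigma1, sigma2, k=2.0):
    """Displacement between two levelling epochs with a simple significance
    test: movement is taken as proven if |d| > k * sqrt(s1^2 + s2^2)."""
    d = h_epoch2 - h_epoch1
    tolerance = k * math.hypot(sigma1, sigma2)
    return d, abs(d) > tolerance

# Hypothetical monitored point: 1.2 mm apparent settlement between epochs,
# with 0.4 mm levelling accuracy per epoch (heights in metres).
d, significant = vertical_displacement(215.4320, 215.4308, 0.0004, 0.0004)
```

    With these numbers the 1.2 mm change just exceeds the 2-sigma tolerance of about 1.13 mm, so the point would be flagged as moving rather than as measurement noise.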

  3. Theorising and Analysing Academic Labour

    Directory of Open Access Journals (Sweden)

    Thomas Allmer

    2018-01-01

    The aim of this article is to contextualise universities historically within capitalism and to analyse academic labour and the deployment of digital media theoretically and critically. It argues that the post-war expansion of the university can be considered as medium and outcome of informational capitalism and as a dialectical development of social achievement and advanced commodification. The article strives to identify the class position of academic workers, introduces the distinction between academic work and labour, discusses the connection between academic, information and cultural work, and suggests a broad definition of university labour. It presents a theoretical model of working conditions that helps to systematically analyse the academic labour process and to provide an overview of working conditions at universities. The paper furthermore argues for the need to consider the development of education technologies as a dialectics of continuity and discontinuity, discusses the changing nature of the forces and relations of production, and the impact on the working conditions of academics in the digital university. Based on Erik Olin Wright’s inclusive approach of social transformation, the article concludes with the need to bring together anarchist, social democratic and revolutionary strategies for establishing a socialist university in a commons-based information society.

  4. CFD analyses in regulatory practice

    International Nuclear Information System (INIS)

    Bloemeling, F.; Pandazis, P.; Schaffrath, A.

    2012-01-01

    Numerical software is used in nuclear regulatory procedures for many problems in the fields of neutron physics, structural mechanics, thermal hydraulics etc. Among other things, the software is employed in dimensioning and designing systems and components and in simulating transients and accidents. In nuclear technology, analyses of this kind must meet strict requirements. Computational Fluid Dynamics (CFD) codes were developed for computing multidimensional flow processes of the type occurring in reactor cooling systems or in containments. Extensive experience has been accumulated by now in selected single-phase flow phenomena. At the present time, there is a need for development and validation with respect to the simulation of multi-phase and multi-component flows. As insufficient input by the user can lead to faulty results, the validity of the results and an assessment of uncertainties are guaranteed only through consistent application of so-called Best Practice Guidelines. The authors present the possibilities now available to CFD analyses in nuclear regulatory practice. This includes a discussion of the fundamental requirements to be met by numerical software, especially the demands upon computational analysis made by nuclear rules and regulations. In conclusion, 2 examples are presented of applications of CFD analysis to nuclear problems: Determining deboration in the condenser reflux mode of operation, and protection of the reactor pressure vessel (RPV) against brittle failure. (orig.)

  5. Severe accident recriticality analyses (SARA)

    DEFF Research Database (Denmark)

    Frid, W.; Højerup, C.F.; Lindholm, I.

    2001-01-01

    with all three codes. The core initial and boundary conditions prior to recriticality have been studied with the severe accident codes SCDAP/RELAP5, MELCOR and MAAP4. The results of the analyses show that all three codes predict recriticality-both super-prompt power bursts and quasi steady-state power......Recriticality in a BWR during reflooding of an overheated partly degraded core, i.e. with relocated control rods, has been studied for a total loss of electric power accident scenario. In order to assess the impact of recriticality on reactor safety, including accident management strategies......, which results in large energy deposition in the fuel during power burst in some accident scenarios. The highest value, 418 cal g(-1), was obtained with SIMULATE-3K for an Oskarshamn 3 case with reflooding rate of 2000 kg s(-1). In most cases, however, the predicted energy deposition was smaller, below...

  6. Hydrogen Analyses in the EPR

    International Nuclear Information System (INIS)

    Worapittayaporn, S.; Eyink, J.; Movahed, M.

    2008-01-01

    In severe accidents with core melting large amounts of hydrogen may be released into the containment. The EPR provides a combustible gas control system to prevent hydrogen combustion modes with the potential to challenge the containment integrity due to excessive pressure and temperature loads. This paper outlines the approach for the verification of the effectiveness and efficiency of this system. Specifically, the justification is a multi-step approach. It involves the deployment of integral codes, lumped parameter containment codes and CFD codes and the use of the sigma criterion, which provides the link to the broad experimental data base for flame acceleration (FA) and deflagration to detonation transition (DDT). The procedure is illustrated with an example. The performed analyses show that hydrogen combustion at any time does not lead to pressure or temperature loads that threaten the containment integrity of the EPR. (authors)

  7. Uncertainty and Sensitivity Analyses Plan

    International Nuclear Information System (INIS)

    Simpson, J.C.; Ramsdell, J.V. Jr.

    1993-04-01

    Hanford Environmental Dose Reconstruction (HEDR) Project staff are developing mathematical models to be used to estimate the radiation dose that individuals may have received as a result of emissions since 1944 from the US Department of Energy's (DOE) Hanford Site near Richland, Washington. An uncertainty and sensitivity analyses plan is essential to understand and interpret the predictions from these mathematical models. This is especially true in the case of the HEDR models where the values of many parameters are unknown. This plan gives a thorough documentation of the uncertainty and hierarchical sensitivity analysis methods recommended for use on all HEDR mathematical models. The documentation includes both technical definitions and examples. In addition, an extensive demonstration of the uncertainty and sensitivity analysis process is provided using actual results from the Hanford Environmental Dose Reconstruction Integrated Codes (HEDRIC). This demonstration shows how the approaches used in the recommended plan can be adapted for all dose predictions in the HEDR Project
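    The kind of parameter-uncertainty propagation the plan documents can be sketched with a toy multiplicative dose model (the model form and input distributions below are illustrative assumptions, not the HEDR models):

```python
import random
import statistics

def toy_dose(release, transport, intake):
    """Hypothetical multiplicative dose model: source term x transport x intake."""
    return release * transport * intake

def propagate(n=20_000, seed=42):
    """Monte Carlo uncertainty analysis: sample the uncertain inputs,
    collect the output distribution, and report a median and 5th-95th
    percentile band instead of a single point estimate."""
    rng = random.Random(seed)
    doses = [
        toy_dose(rng.lognormvariate(0.0, 0.5),   # uncertain source term
                 rng.uniform(0.5, 1.5),          # uncertain transport factor
                 rng.lognormvariate(0.0, 0.2))   # uncertain intake factor
        for _ in range(n)
    ]
    cuts = statistics.quantiles(doses, n=20)     # 5% steps
    return cuts[0], statistics.median(doses), cuts[-1]

p5, med, p95 = propagate()
```

    A hierarchical sensitivity analysis then asks which input's variance drives the spread, for example by re-running with one input frozen at its median and comparing the resulting bands.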

  8. The hemispherical deflector analyser revisited

    Energy Technology Data Exchange (ETDEWEB)

    Benis, E.P. [Institute of Electronic Structure and Laser, P.O. Box 1385, 71110 Heraklion, Crete (Greece)], E-mail: benis@iesl.forth.gr; Zouros, T.J.M. [Institute of Electronic Structure and Laser, P.O. Box 1385, 71110 Heraklion, Crete (Greece); Department of Physics, University of Crete, P.O. Box 2208, 71003 Heraklion, Crete (Greece)

    2008-04-15

    Using the basic spectrometer trajectory equation for motion in an ideal 1/r potential derived in Eq. (101) of part I [T.J.M. Zouros, E.P. Benis, J. Electron Spectrosc. Relat. Phenom. 125 (2002) 221], the operational characteristics of a hemispherical deflector analyser (HDA) such as dispersion, energy resolution, energy calibration, input lens magnification and energy acceptance window are investigated from first principles. These characteristics are studied as a function of the entry point R0 and the nominal value of the potential V(R0) at entry. Electron-optics simulations and actual laboratory measurements are compared to our theoretical results for an ideal biased paracentric HDA using a four-element zoom lens and a two-dimensional position sensitive detector (2D-PSD). These results should be of particular interest to users of modern HDAs utilizing a PSD.

  10. Analysing Protocol Stacks for Services

    DEFF Research Database (Denmark)

    Gao, Han; Nielson, Flemming; Nielson, Hanne Riis

    2011-01-01

    We show an approach, CaPiTo, to model service-oriented applications using process algebras such that, on the one hand, we can achieve a certain level of abstraction without being overwhelmed by the underlying implementation details and, on the other hand, we respect the concrete industrial...... standards used for implementing the service-oriented applications. By doing so, we will be able to not only reason about applications at different levels of abstractions, but also to build a bridge between the views of researchers on formal methods and developers in industry. We apply our approach...... to the financial case study taken from Chapter 0-3. Finally, we develop a static analysis to analyse the security properties as they emerge at the level of concrete industrial protocols....

  11. Analysing performance through value creation

    Directory of Open Access Journals (Sweden)

    Adrian TRIFAN

    2015-12-01

    This paper draws a parallel between two ways of measuring financial performance: the first uses data offered by accounting and lays emphasis on maximizing profit, while the second aims at creating value. The traditional approach to performance is based on indicators drawn from accounting data: ROI, ROE, EPS. Traditional management based on analysing accounting data has shown its limits, and a new approach is needed, based on creating value. Value-based performance evaluation tries to avoid the errors due to accounting data by using other specific indicators: EVA, MVA, TSR, CVA. The main objective shifts from maximizing income to maximizing the value created for shareholders. The theoretical part is accompanied by a practical analysis regarding the creation of value and an analysis of the main indicators which evaluate this concept.
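    Of the value-based indicators listed, EVA is the simplest to state: net operating profit after taxes minus a charge for all capital employed. A sketch with hypothetical figures:

```python
def eva(nopat, invested_capital, wacc):
    """Economic Value Added: net operating profit after taxes minus the
    charge for all capital employed (WACC x invested capital)."""
    return nopat - wacc * invested_capital

# A profit of 120 on 1000 of capital at a 10% WACC still creates value...
creates = eva(nopat=120.0, invested_capital=1000.0, wacc=0.10)
# ...while 80 of profit on the same base destroys value despite being
# 'profitable' in the accounting sense.
destroys = eva(nopat=80.0, invested_capital=1000.0, wacc=0.10)
```

    This is the shift the paper describes: the benchmark moves from zero profit to the full cost of the capital shareholders have tied up.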

  12. Thermal analyses of spent nuclear fuel repository

    International Nuclear Information System (INIS)

    Ikonen, K.

    2003-06-01

    calibrated by numerical analysis. By superposing single line heat sources, the evolution of the temperature field of the repository can be determined efficiently. Efficient visualisation programmes were used to display the results. Visualisation is an important element in assuring the reliability of the calculation process. (orig.)
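    The superposition of single line heat sources mentioned above rests on the classical infinite-line-source solution, ΔT = q/(4πk) · E1(r²/4αt); a minimal sketch (the rock properties and source layout below are illustrative assumptions, not the report's data):

```python
import math

def exp_integral_e1(x):
    """E1(x) from its power series; adequate for the small arguments
    (x <~ 1) that arise at repository distances and long times:
    E1(x) = -gamma - ln(x) + sum_{n>=1} (-1)**(n+1) * x**n / (n * n!)."""
    euler_gamma = 0.5772156649015329
    s, term = 0.0, 1.0
    for n in range(1, 40):
        term *= x / n                    # term = x**n / n!
        s += (-1) ** (n + 1) * term / n
    return -euler_gamma - math.log(x) + s

def line_source_rise(q, k, alpha, r, t):
    """Temperature rise (K) at radius r (m) from an infinite line source of
    constant strength q (W/m) after time t (s), in rock of conductivity
    k (W/m K) and diffusivity alpha (m^2/s)."""
    return q / (4.0 * math.pi * k) * exp_integral_e1(r * r / (4.0 * alpha * t))

YEAR = 3.156e7  # seconds per year
# Superpose three hypothetical deposition lines 25 m apart, 30 years of heating:
dT = sum(line_source_rise(10.0, 3.0, 1.5e-6, r, 30 * YEAR)
         for r in (0.5, 25.0, 50.0))
```

    A real repository analysis would also let the source strength q decay with time; the constant-strength case above shows only the superposition step.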

  13. Spectroscopic Analyses of Neutron Capture Elements in Open Clusters

    Science.gov (United States)

    O'Connell, Julia E.

    The evolution of elements as a function of age throughout the Milky Way disk provides strong constraints for galaxy evolution models and on star formation epochs. In an effort to provide such constraints, we conducted an investigation into r- and s-process elemental abundances for a large sample of open clusters as part of an optical follow-up to the SDSS-III/APOGEE-1 near-infrared survey. To obtain data for neutron capture abundance analysis, we conducted a long-term observing campaign spanning three years (2013-2016) using the McDonald Observatory Otto Struve 2.1-meter telescope and Sandiford Cass Echelle Spectrograph (SES, R = λ/Δλ ≈ 60,000). The SES provides a wavelength range of ~1400 Å, making it uniquely suited to investigate a number of other important chemical abundances as well as the neutron capture elements. For this study, we derive abundances for 18 elements covering four nucleosynthetic families (light, iron-peak, neutron capture and alpha elements) for ~30 open clusters within 6 kpc of the Sun, with ages ranging from ~80 Myr to ~10 Gyr. Both equivalent width (EW) measurements and spectral synthesis methods were employed to derive abundances for all elements. Initial estimates for the model stellar atmospheres (effective temperature and surface gravity) were provided by the APOGEE data set and then re-derived for our optical spectra by removing abundance trends as a function of excitation potential and reduced width log(EW/λ). With the exception of Ba II and Zr I, abundance analyses for all neutron capture elements were performed by generating synthetic spectra from the new stellar parameters. In order to remove molecular contamination, or blending from nearby atomic features, the synthetic spectra were modeled by a best-fit Gaussian to the observed data. Nd II shows a slight enhancement in all cluster stars, while other neutron capture elements follow solar abundance trends. Ba II shows a large cluster-to-cluster abundance spread

  14. Proteins analysed as virtual knots

    Science.gov (United States)

    Alexander, Keith; Taylor, Alexander J.; Dennis, Mark R.

    2017-02-01

    Long, flexible physical filaments are naturally tangled and knotted, from macroscopic string down to long-chain molecules. The existence of knotting in a filament naturally affects its configuration and properties, and may be very stable or disappear rapidly under manipulation and interaction. Knotting has been previously identified in protein backbone chains, for which these mechanical constraints are of fundamental importance to their molecular functionality, despite their being open curves in which the knots are not mathematically well defined; knotting can only be identified by closing the termini of the chain somehow. We introduce a new method for resolving knotting in open curves using virtual knots, which are a wider class of topological objects that do not require a classical closure and so naturally capture the topological ambiguity inherent in open curves. We describe the results of analysing proteins in the Protein Data Bank by this new scheme, recovering and extending previous knotting results, and identifying topological interest in some new cases. The statistics of virtual knots in protein chains are compared with those of open random walks and Hamiltonian subchains on cubic lattices, identifying a regime of open curves in which the virtual knotting description is likely to be important.

  15. Digital image analyser for autoradiography

    International Nuclear Information System (INIS)

    Muth, R.A.; Plotnick, J.

    1985-01-01

    The most critical parameter in quantitative autoradiography for assay of tissue concentrations of tracers is the ability to obtain precise and accurate measurements of the optical density of the images. Existing high-precision systems for image analysis, rotating drum densitometers, are expensive, suffer from mechanical problems and are slow. More moderately priced and reliable video-camera-based systems are available, but their outputs generally do not have the uniformity and stability necessary for high-resolution quantitative autoradiography. The authors have designed and constructed an image analyser optimized for quantitative single and multiple tracer autoradiography, which the authors refer to as a memory-mapped charge-coupled device scanner (MM-CCD). The input is from a linear array of CCDs which is used to optically scan the autoradiograph. Images are digitized into 512 x 512 picture elements with 256 gray levels and the data is stored in buffer video memory in less than two seconds. Images can then be transferred to RAM memory by direct memory-mapping for further processing. Arterial blood curve data and optical-density-calibrated standards data can be entered, and the optical density images can be converted automatically to tracer concentration or functional images. In double tracer studies, images produced from both exposures can be stored and processed in RAM to yield "pure" individual tracer concentration or functional images. Any processed image can be transmitted back to the buffer memory to be viewed on a monitor and processed for region of interest analysis
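    The automatic conversion from optical density to tracer concentration described above is a calibration-curve lookup against co-exposed standards; a minimal sketch using piecewise-linear interpolation (the standard values are hypothetical):

```python
from bisect import bisect_left

def make_od_to_conc(standards):
    """Build an optical density -> concentration converter from calibrated
    standards, given as (optical_density, concentration) pairs, by
    piecewise-linear interpolation (clamped at the calibration range ends)."""
    pts = sorted(standards)
    ods = [p[0] for p in pts]

    def convert(od):
        i = bisect_left(ods, od)
        if i == 0:
            return pts[0][1]
        if i == len(pts):
            return pts[-1][1]
        (od0, c0), (od1, c1) = pts[i - 1], pts[i]
        return c0 + (c1 - c0) * (od - od0) / (od1 - od0)

    return convert

# Hypothetical standards (OD, concentration in nCi/g) co-exposed with the film:
convert = make_od_to_conc([(0.05, 0.0), (0.40, 50.0), (0.90, 150.0), (1.40, 300.0)])
pixel_conc = convert(0.65)  # one pixel's optical density mapped to concentration
```

    Applying the converter to every pixel of a digitized 512 x 512 image is what turns an optical density map into the concentration or functional image the abstract describes.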

  16. Severe Accident Recriticality Analyses (SARA)

    Energy Technology Data Exchange (ETDEWEB)

    Frid, W. [Swedish Nuclear Power Inspectorate, Stockholm (Sweden); Hoejerup, F. [Risoe National Lab. (Denmark); Lindholm, I.; Miettinen, J.; Puska, E.K. [VTT Energy, Helsinki (Finland); Nilsson, Lars [Studsvik Eco and Safety AB, Nykoeping (Sweden); Sjoevall, H. [Teoliisuuden Voima Oy (Finland)

    1999-11-01

    Recriticality in a BWR has been studied for a total loss of electric power accident scenario. In a BWR, the B4C control rods would melt and relocate from the core before the fuel during core uncovery and heat-up. If electric power returns during this time-window unborated water from ECCS systems will start to reflood the partly control rod free core. Recriticality might take place for which the only mitigating mechanisms are the Doppler effect and void formation. In order to assess the impact of recriticality on reactor safety, including accident management measures, the following issues have been investigated in the SARA project: 1. the energy deposition in the fuel during super-prompt power burst, 2. the quasi steady-state reactor power following the initial power burst and 3. containment response to elevated quasi steady-state reactor power. The approach was to use three computer codes and to further develop and adapt them for the task. The codes were SIMULATE-3K, APROS and RECRIT. Recriticality analyses were carried out for a number of selected reflooding transients for the Oskarshamn 3 plant in Sweden with SIMULATE-3K and for the Olkiluoto 1 plant in Finland with all three codes. The core state initial and boundary conditions prior to recriticality have been studied with the severe accident codes SCDAP/RELAP5, MELCOR and MAAP4. The results of the analyses show that all three codes predict recriticality - both superprompt power bursts and quasi steady-state power generation - for the studied range of parameters, i.e. with core uncovery and heat-up to maximum core temperatures around 1800 K and water flow rates of 45 kg/s to 2000 kg/s injected into the downcomer. Since the recriticality takes place in a small fraction of the core the power densities are high which results in large energy deposition in the fuel during power burst in some accident scenarios.
The highest value, 418 cal/g, was obtained with SIMULATE-3K for an Oskarshamn 3 case with reflooding

  17. Severe accident recriticality analyses (SARA)

    Energy Technology Data Exchange (ETDEWEB)

    Frid, W. E-mail: wiktor.frid@ski.se; Hoejerup, F.; Lindholm, I.; Miettinen, J.; Nilsson, L.; Puska, E.K.; Sjoevall, H

    2001-11-01

    Recriticality in a BWR during reflooding of an overheated partly degraded core, i.e. with relocated control rods, has been studied for a total loss of electric power accident scenario. In order to assess the impact of recriticality on reactor safety, including accident management strategies, the following issues have been investigated in the SARA project: (1) the energy deposition in the fuel during super-prompt power burst; (2) the quasi steady-state reactor power following the initial power burst; and (3) containment response to elevated quasi steady-state reactor power. The approach was to use three computer codes and to further develop and adapt them for the task. The codes were SIMULATE-3K, APROS and RECRIT. Recriticality analyses were carried out for a number of selected reflooding transients for the Oskarshamn 3 plant in Sweden with SIMULATE-3K and for the Olkiluoto 1 plant in Finland with all three codes. The core initial and boundary conditions prior to recriticality have been studied with the severe accident codes SCDAP/RELAP5, MELCOR and MAAP4. The results of the analyses show that all three codes predict recriticality--both super-prompt power bursts and quasi steady-state power generation--for the range of parameters studied, i.e. with core uncovering and heat-up to maximum core temperatures of approximately 1800 K, and water flow rates of 45-2000 kg/s injected into the downcomer. Since recriticality takes place in a small fraction of the core, the power densities are high, which results in large energy deposition in the fuel during power burst in some accident scenarios. The highest value, 418 cal/g, was obtained with SIMULATE-3K for an Oskarshamn 3 case with reflooding rate of 2000 kg/s. In most cases, however, the predicted energy deposition was smaller, below the regulatory limits for fuel failure, but close to or above recently observed thresholds for fragmentation and dispersion of high burn-up fuel.
The highest calculated

  19. Severe Accident Recriticality Analyses (SARA)

    International Nuclear Information System (INIS)

    Frid, W.; Hoejerup, F.; Lindholm, I.; Miettinen, J.; Puska, E.K.; Nilsson, Lars; Sjoevall, H.

    1999-11-01

    Recriticality in a BWR has been studied for a total loss of electric power accident scenario. In a BWR, the B4C control rods would melt and relocate from the core before the fuel during core uncovery and heat-up. If electric power returns during this time window, unborated water from the ECCS systems will start to reflood the partly control-rod-free core. Recriticality might then take place, for which the only mitigating mechanisms are the Doppler effect and void formation. In order to assess the impact of recriticality on reactor safety, including accident management measures, the following issues have been investigated in the SARA project: 1. the energy deposition in the fuel during a super-prompt power burst, 2. the quasi steady-state reactor power following the initial power burst and 3. the containment response to elevated quasi steady-state reactor power. The approach was to use three computer codes and to further develop and adapt them for the task. The codes were SIMULATE-3K, APROS and RECRIT. Recriticality analyses were carried out for a number of selected reflooding transients for the Oskarshamn 3 plant in Sweden with SIMULATE-3K and for the Olkiluoto 1 plant in Finland with all three codes. The core initial and boundary conditions prior to recriticality have been studied with the severe accident codes SCDAP/RELAP5, MELCOR and MAAP4. The results of the analyses show that all three codes predict recriticality, both super-prompt power bursts and quasi steady-state power generation, for the studied range of parameters, i.e. with core uncovery and heat-up to maximum core temperatures around 1800 K and water flow rates of 45 kg/s to 2000 kg/s injected into the downcomer. Since the recriticality takes place in a small fraction of the core, the power densities are high, which results in large energy deposition in the fuel during the power burst in some accident scenarios. The highest value, 418 cal/g, was obtained with SIMULATE-3K for an Oskarshamn 3 case with reflooding

  20. Pawnee Nation Energy Option Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Matlock, M.; Kersey, K.; Riding In, C.

    2009-07-21

    Pawnee Nation of Oklahoma Energy Option Analyses. In 2003, the Pawnee Nation leadership identified the need for the tribe to comprehensively address its energy issues. During a strategic energy planning workshop a general framework was laid out and the Pawnee Nation Energy Task Force was created to work toward further development of the tribe’s energy vision. The overarching goals of the “first steps” project were to identify the most appropriate focus for its strategic energy initiatives going forward, and to provide the information necessary to take the next steps in pursuit of the “best fit” energy options. Description of Activities Performed: The research team reviewed existing data pertaining to the availability of biomass (focusing on woody biomass, agricultural biomass/bio-energy crops, and methane capture), solar, wind and hydropower resources on the Pawnee-owned lands. Using these data, combined with assumptions about costs and revenue streams, the research team performed preliminary feasibility assessments for each resource category. The research team also reviewed available funding resources and made recommendations to Pawnee Nation highlighting those resources with the greatest potential for financially viable development, both in the near term and over a longer time horizon. Findings and Recommendations: Due to a lack of financial incentives for renewable energy, particularly at the state level, combined with mediocre renewable energy resources, renewable energy development opportunities are limited for Pawnee Nation. However, near-term potential exists for development of solar hot water at the gym, and an exterior wood-fired boiler system at the tribe’s main administrative building. Pawnee Nation should also explore options for developing LFGTE resources in collaboration with the City of Pawnee. Significant potential may also exist for development of bio-energy resources within the next decade. Pawnee Nation representatives should closely monitor

  1. Improving word coverage using unsupervised morphological analyser

    Indian Academy of Sciences (India)

    To enable a computer to process information in human languages, ... unsupervised morphological analyser (UMA) would learn how to analyse a language just by looking ... result for English, but they performed remarkably worse for Finnish and Turkish.

  2. Techniques for Analysing Problems in Engineering Projects

    DEFF Research Database (Denmark)

    Thorsteinsson, Uffe

    1998-01-01

    Description of how a CPM network can be used for analysing complex problems in engineering projects.
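
The record above describes using CPM (critical path method) networks to analyse complex project problems. As a rough sketch of the core computation, the fragment below finds earliest finish times with a forward pass over a topologically sorted activity network and then walks back along tight predecessors to recover the critical path; the activities, durations and dependencies are invented for illustration, not taken from the record.

```python
# Minimal CPM sketch: durations and dependencies are hypothetical.
from graphlib import TopologicalSorter

durations = {"design": 3, "procure": 2, "build": 5, "test": 2}
deps = {"procure": {"design"}, "build": {"design"}, "test": {"procure", "build"}}

def critical_path(durations, deps):
    # Forward pass: earliest finish of each task given its predecessors.
    order = list(TopologicalSorter(deps).static_order())
    finish = {}
    for task in order:
        start = max((finish[d] for d in deps.get(task, ())), default=0)
        finish[task] = start + durations[task]
    # Backward walk: follow predecessors whose finish equals this task's start.
    path = [max(finish, key=finish.get)]
    while True:
        task = path[-1]
        start = finish[task] - durations[task]
        preds = [d for d in deps.get(task, ()) if finish[d] == start]
        if not preds:
            break
        path.append(preds[0])
    return list(reversed(path)), max(finish.values())

path, makespan = critical_path(durations, deps)
print(path, makespan)  # ['design', 'build', 'test'] 10
```

Any activity on the returned path has zero float: delaying it delays the whole project, which is what makes the CPM view useful for problem analysis.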

  3. Automatic incrementalization of Prolog based static analyses

    DEFF Research Database (Denmark)

    Eichberg, Michael; Kahl, Matthias; Saha, Diptikalyan

    2007-01-01

    Modern development environments integrate various static analyses into the build process. Analyses that analyze the whole project whenever the project changes are impractical in this context. We present an approach to automatic incrementalization of analyses that are specified as tabled logic programs...

  4. Fracture analyses of WWER reactor pressure vessels

    International Nuclear Information System (INIS)

    Sievers, J.; Liu, X.

    1997-01-01

    In the paper, the methodology of fracture assessment based on finite element (FE) calculations is first described and compared with simplified methods. The FE-based methodology was verified by analyses of large-scale thermal shock experiments in the framework of the international comparative study FALSIRE (Fracture Analyses of Large Scale Experiments) organized by GRS and ORNL. Furthermore, selected results from fracture analyses of different WWER-type RPVs with postulated cracks under different loading transients are presented. 11 refs, 13 figs, 1 tab

  5. Fracture analyses of WWER reactor pressure vessels

    Energy Technology Data Exchange (ETDEWEB)

    Sievers, J; Liu, X [Gesellschaft fuer Anlagen- und Reaktorsicherheit mbH (GRS), Koeln (Germany)

    1997-09-01

    In the paper, the methodology of fracture assessment based on finite element (FE) calculations is first described and compared with simplified methods. The FE-based methodology was verified by analyses of large-scale thermal shock experiments in the framework of the international comparative study FALSIRE (Fracture Analyses of Large Scale Experiments) organized by GRS and ORNL. Furthermore, selected results from fracture analyses of different WWER-type RPVs with postulated cracks under different loading transients are presented. 11 refs, 13 figs, 1 tab.

  6. [Anne Arold. Kontrastive Analyse...] / Paul Alvre

    Index Scriptorium Estoniae

    Alvre, Paul, 1921-2008

    2001-01-01

    Review of: Arold, Anne. Kontrastive Analyse der Wortbildungsmuster im Deutschen und im Estnischen (am Beispiel der Aussehensadjektive). Tartu, 2000. (Dissertationes philologiae germanicae Universitatis Tartuensis)

  7. Influence of geomagnetic activity and atmospheric pressure in hypertensive adults.

    Science.gov (United States)

    Azcárate, T; Mendoza, B

    2017-09-01

    We performed a study of the behavior of systolic and diastolic arterial blood pressure under natural variables such as the atmospheric pressure and the horizontal geomagnetic field component. We worked with a group of eight adult hypertensive volunteers, four men and four women, with ages between 18 and 27 years, in Mexico City during a geomagnetic storm in 2014. The data were divided by gender, age, and day/night cycle. We studied the time series, between the systolic and diastolic blood pressure and the natural variables, using three methods: correlations, bivariate analysis, and superposed epoch analysis (within a window of 2 days around the day of occurrence of the geomagnetic storm). The correlation analysis indicated a correlation between the systolic and diastolic blood pressure and both the atmospheric pressure and the horizontal geomagnetic field component, largest during the night. Furthermore, the correlation and bivariate analyses showed that the largest correlations are between the systolic and diastolic blood pressure and the horizontal geomagnetic field component. Finally, the superposed epoch analysis showed that the largest number of significant changes in blood pressure under the influence of the geomagnetic field occurred in the systolic blood pressure for men.
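
The superposed epoch method used in this record (and throughout these search results) reduces to a simple computation: align fixed-width windows of a time series on event onsets and average them lag by lag, so that a response common to the events reinforces while unrelated variation cancels. A minimal sketch with synthetic data, not the study's blood-pressure series:

```python
# Superposed epoch analysis sketch; the series and onsets are synthetic.
import numpy as np

def superposed_epoch(series, onsets, before, after):
    """Stack windows [onset-before, onset+after] and average per lag."""
    windows = [series[t - before : t + after + 1] for t in onsets]
    return np.mean(windows, axis=0)

series = np.zeros(200)
onsets = [50, 120, 170]
for t in onsets:                 # bury a common +2.0 response at each onset
    series[t] += 2.0

profile = superposed_epoch(series, onsets, before=5, after=5)
print(profile)                   # 2.0 at lag 0 (index 5), zeros elsewhere
```

With real, noisy data the averaging over many events is what makes the epoch-locked signal stand out above the background.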

  8. Influence of geomagnetic activity and atmospheric pressure in hypertensive adults

    Science.gov (United States)

    Azcárate, T.; Mendoza, B.

    2017-09-01

    We performed a study of the behavior of systolic and diastolic arterial blood pressure under natural variables such as the atmospheric pressure and the horizontal geomagnetic field component. We worked with a group of eight adult hypertensive volunteers, four men and four women, with ages between 18 and 27 years, in Mexico City during a geomagnetic storm in 2014. The data were divided by gender, age, and day/night cycle. We studied the time series, between the systolic and diastolic blood pressure and the natural variables, using three methods: correlations, bivariate analysis, and superposed epoch analysis (within a window of 2 days around the day of occurrence of the geomagnetic storm). The correlation analysis indicated a correlation between the systolic and diastolic blood pressure and both the atmospheric pressure and the horizontal geomagnetic field component, largest during the night. Furthermore, the correlation and bivariate analyses showed that the largest correlations are between the systolic and diastolic blood pressure and the horizontal geomagnetic field component. Finally, the superposed epoch analysis showed that the largest number of significant changes in blood pressure under the influence of the geomagnetic field occurred in the systolic blood pressure for men.

  9. An MDE Approach for Modular Program Analyses

    NARCIS (Netherlands)

    Yildiz, Bugra Mehmet; Bockisch, Christoph; Aksit, Mehmet; Rensink, Arend

    Program analyses are an important tool to check if a system fulfills its specification. A typical implementation strategy for program analyses is to use an imperative, general-purpose language like Java, and access the program to be analyzed through libraries that offer an API for reading, writing

  10. Random error in cardiovascular meta-analyses

    DEFF Research Database (Denmark)

    Albalawi, Zaina; McAlister, Finlay A; Thorlund, Kristian

    2013-01-01

    BACKGROUND: Cochrane reviews are viewed as the gold standard in meta-analyses given their efforts to identify and limit systematic error which could cause spurious conclusions. The potential for random error to cause spurious conclusions in meta-analyses is less well appreciated. METHODS: We exam...

  11. Diversity of primary care systems analysed.

    NARCIS (Netherlands)

    Kringos, D.; Boerma, W.; Bourgueil, Y.; Cartier, T.; Dedeu, T.; Hasvold, T.; Hutchinson, A.; Lember, M.; Oleszczyk, M.; Pavlick, D.R.

    2015-01-01

    This chapter analyses differences between countries and explains why countries differ regarding the structure and process of primary care. The components of primary care strength that are used in the analyses are health policy-making, workforce development and in the care process itself (see Fig.

  12. Approximate analyses of inelastic effects in pipework

    International Nuclear Information System (INIS)

    Jobson, D.A.

    1983-01-01

    This presentation shows figures concerned with analyses of inelastic effects in pipework, as follows: comparison of experimental results with calculated results from simplified analyses for free end rotation and for circumferential strain; interrupted stress relaxation; regenerated relaxation caused by reversed yield; buckling of straight pipe under combined bending and torsion; results of fatigue tests of pipe bends

  13. Level II Ergonomic Analyses, Dover AFB, DE

    Science.gov (United States)

    1999-02-01

    IERA-RS-BR-TR-1999-0002, United States Air Force IERA: Level II Ergonomic Analyses, Dover AFB, DE. Andrew Marcotte; Marilyn Joyce (The Joyce ...). Contents include: 1.0 Introduction; 1.1 Purpose of the Level II Ergonomic Analyses; 1.2 Approach; 1.2.1 Initial Shop Selection and Administration of the ...

  14. Automatic incrementalization of Prolog based static analyses

    DEFF Research Database (Denmark)

    Eichberg, Michael; Kahl, Matthias; Saha, Diptikalyan

    2007-01-01

    Modern development environments integrate various static analyses into the build process. Analyses that analyze the whole project whenever the project changes are impractical in this context. We present an approach to automatic incrementalization of analyses that are specified as tabled logic programs and evaluated using incremental tabled evaluation, a technique for efficiently updating memo tables in response to changes in facts and rules. The approach has been implemented and integrated into the Eclipse IDE. Our measurements show that this technique is effective for automatically...
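
The incremental tabled evaluation this record describes can be illustrated with a toy memo table: each tabled answer records which base facts it consumed, so retracting a fact invalidates only the dependent answers rather than the whole table. The reachability relation and its representation below are invented for illustration and are not the authors' Prolog implementation.

```python
# Toy sketch of incremental tabled evaluation over a "reach" relation.
class IncrementalTable:
    def __init__(self, facts):
        self.facts = set(facts)   # base facts, here graph edges (a, b)
        self.memo = {}            # query -> memoized answer
        self.deps = {}            # fact -> set of queries that used it

    def reach(self, src):
        """Tabled reachability: the answer set is memoized per source."""
        if src in self.memo:
            return self.memo[src]
        seen, stack = {src}, [src]
        while stack:
            node = stack.pop()
            for a, b in self.facts:
                if a == node:
                    # Record that this answer depends on edge (a, b).
                    self.deps.setdefault((a, b), set()).add(src)
                    if b not in seen:
                        seen.add(b)
                        stack.append(b)
        self.memo[src] = seen
        return seen

    def retract(self, fact):
        """Remove a fact; discard only the answers that consumed it."""
        self.facts.discard(fact)
        for query in self.deps.pop(fact, ()):
            self.memo.pop(query, None)

t = IncrementalTable({(1, 2), (2, 3), (4, 5)})
print(t.reach(1))   # {1, 2, 3}
print(t.reach(4))   # {4, 5}
t.retract((2, 3))   # invalidates reach(1); reach(4) stays memoized
print(t.reach(1))   # {1, 2}
```

Real systems must also handle fact additions and rule changes, but the dependency-tracking idea is the same.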

  15. Cost-Benefit Analyses of Transportation Investments

    DEFF Research Database (Denmark)

    Næss, Petter

    2006-01-01

    This paper discusses the practice of cost-benefit analyses of transportation infrastructure investment projects from the meta-theoretical perspective of critical realism. Such analyses are based on a number of untenable ontological assumptions about social value, human nature and the natural... willingness-to-pay investigations. Accepting the ontological and epistemological assumptions of cost-benefit analysis involves an implicit acceptance of the ethical and political values favoured by these assumptions. Cost-benefit analyses of transportation investment projects tend to neglect long-term environmental consequences...

  16. Plasma Sterilization: New Epoch in Medical Textiles

    Science.gov (United States)

    Senthilkumar, P.; Arun, N.; Vigneswaran, C.

    2015-04-01

    Clothing is perceived to be a second skin to the human body, since it is in close contact with the human skin most of the time. In hospitals, textile materials are used in different forms, and sterilization of these materials is an essential requirement for preventing the spread of germs. The need for appropriate disinfection and sterilization techniques is of paramount importance. There has been a continuous demand for novel sterilization techniques appropriate for use on various textile materials, as the existing sterilization techniques suffer from various technical and economic drawbacks. Plasma sterilization is an alternative method that is friendlier and more effective across the wide spectrum of prokaryotic and eukaryotic microorganisms. Basically, the main inactivation factors for cells exposed to plasma are heat, UV radiation and various reactive species. Plasma exposure can kill micro-organisms on a surface in addition to removing adsorbed monolayers of surface contaminants. Advantages of plasma surface treatment are removal of contaminants from the surface, change in the surface energy and sterilization of the surface. Plasma sterilization aims to kill and/or remove all micro-organisms which may cause infection of humans or animals, or which can cause spoilage of foods or other goods. This review paper emphasizes the necessity for sterilization, the essentials of sterilization, the mechanism of plasma sterilization and the parameters influencing it.

  17. Epoch-based analysis of speech signals

    Indian Academy of Sciences (India)

    on speech production characteristics, but also helps in accurate analysis of speech. .... include time delay estimation, speech enhancement from single and multi- ...... log ( E[k] / ∑_{l=0}^{K−1} E[l] ), (7) where K is the number of samples in the ...
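
Equation (7) in the snippet above is a normalized log-energy: each sample's (or frame's) energy is divided by the total energy before taking the logarithm, so the exponentials of the values sum to one. A minimal sketch with hypothetical energy values:

```python
# Normalized log-energy in the spirit of the snippet's Eq. (7):
# log(E[k] / sum_{l=0}^{K-1} E[l]); the energies below are made up.
import math

def normalized_log_energy(E):
    total = sum(E)
    return [math.log(e / total) for e in E]

E = [1.0, 2.0, 4.0, 1.0]    # per-sample (or per-frame) energies
nle = normalized_log_energy(E)
print(nle[2])               # log(4/8) = log(0.5) ≈ -0.693
```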

  18. z~2: An Epoch of Disk Assembly

    Science.gov (United States)

    Simons, Raymond C.; Kassin, Susan A.; Weiner, Benjamin; Heckman, Timothy M.; Trump, Jonathan; SIGMA, DEEP2

    2018-01-01

    At z = 0, the majority of massive star-forming galaxies contain thin, rotationally supported gas disks. It was once accepted that galaxies form thin disks early: collisional gas with high velocity dispersion should dissipate energy, conserve angular momentum, and develop strong rotational support in only a few galaxy crossing times (~few hundred Myr). However, this picture is complicated at high redshift, where the processes governing galaxy assembly tend to be violent and inhospitable to disk formation. We present results from our SIGMA survey of star-forming galaxy kinematics at z = 2. These results challenge the simple picture described above: galaxies at z = 2 are unlike local well-ordered disks. Their kinematics tend to be much more disordered, as quantified by their low ratios of rotational velocity to gas velocity dispersion (Vrot/σg): less than 35% of galaxies have Vrot/σg > 3. For comparison, nearly 100% of local star-forming galaxies meet this same threshold. We combine our high redshift sample with a similar low redshift sample from the DEEP2 survey. This combined sample covers a continuous redshift baseline over 0.1 < z < 2.5, spanning 10 Gyrs of cosmic time. Over this period, galaxies exhibit remarkably smooth kinematic evolution on average. All galaxies tend towards rotational support with time, and it is reached earlier in higher mass systems. This is due to both a significant decline in gas velocity dispersion and a mild rise in ordered rotational motions. These results indicate that z = 2 is a period of disk assembly, during which the strong rotational support present in today’s massive disk galaxies is only just beginning to emerge.

  18. Nature (and) culture in the Anthropocene epoch

    Directory of Open Access Journals (Sweden)

    Grażyna Gajewska

    2012-01-01

    Full Text Available The article deals with a new understanding of nature and culture that is crystallizing in the intellectual current known as the post-humanities. The starting point is the analysis of a work of art from the area of bioart: the transgenic plant named Edunia, part of a larger project, Natural History of the Enigma. Edunia does not occur in nature; it was created by the artist Eduardo Kac with the help of specialists in genetic engineering. A new form of life defined as a "plantimal", it shows an expression of the artist's DNA in the decorative flowers of a petunia. The rose-colored petals are "interspersed" with dark red veins whose feature is the expression of Kac's gene, because the artist took care that his DNA was found precisely in the venation of the flower. In the article I present two interpretative paths for this work of bioart; they are not a sharp counterpoint to one another, but place the accents somewhat differently between nature and culture in their mutual entanglements. One of these paths may be described as an attempt to make others aware of, or remind them of, our evolutionary species condition, the other as an attempt to treat nature as an important actor in sociopolitical activities.

  20. Comparison with Russian analyses of meteor impact

    Energy Technology Data Exchange (ETDEWEB)

    Canavan, G.H.

    1997-06-01

    The inversion model for meteor impacts is used to discuss Russian analyses and compare principal results. For common input parameters, the models produce consistent estimates of impactor parameters. Directions for future research are discussed and prioritized.

  1. 7 CFR 94.102 - Analyses available.

    Science.gov (United States)

    2010-01-01

    ... analyses for total ash, fat by acid hydrolysis, moisture, salt, protein, beta-carotene, catalase... glycol, SLS, and zeolex. There are also tests for starch, total sugars, sugar profile, whey, standard...

  2. Anthocyanin analyses of Vaccinium fruit dietary supplements

    Science.gov (United States)

    Vaccinium fruit ingredients within dietary supplements were identified by comparisons with anthocyanin analyses of known Vaccinium profiles (demonstration of anthocyanin fingerprinting). Available Vaccinium supplements were purchased and analyzed; their anthocyanin profiles (based on HPLC separation...

  3. Analyse of Maintenance Cost in ST

    CERN Document Server

    Jenssen, B W

    2001-01-01

    An analysis has been carried out in ST concerning the total costs for the division. Even though the target was the maintenance costs in ST, the overall budget has been analysed, since there is a close relation between investments & consolidation and the required level of maintenance. The purpose of the analysis was to focus on maintenance cost in ST as a ratio of total maintenance costs over the replacement value of the equipment, and to make some comparisons with other industries and laboratories. Families of equipment have been defined and their corresponding ratios calculated. This first approach gives us some "quantitative" measurements. This analysis should be combined with performance indicators (more "qualitative" measurements) that tell us how well we are performing. This will help us in defending our budget and making better priorities, and it will satisfy the requirements of our external auditors.

  4. A History of Rotorcraft Comprehensive Analyses

    Science.gov (United States)

    Johnson, Wayne

    2013-01-01

    A history of the development of rotorcraft comprehensive analyses is presented. Comprehensive analyses are digital computer programs that calculate the aeromechanical behavior of the rotor and aircraft, bringing together the most advanced models of the geometry, structure, dynamics, and aerodynamics available in rotary wing technology. The development of the major codes of the last five decades from industry, government, and universities is described. A number of common themes observed in this history are discussed.

  5. Safety analyses for reprocessing and waste processing

    International Nuclear Information System (INIS)

    1983-03-01

    Presentation of an incident analysis of process steps of the RP, simplified considerations concerning safety, and safety analyses of the storage and solidification facilities of the RP. A release tree method is developed and tested. An incident analysis of process steps, the evaluation of the SRL-study and safety analyses of the storage and solidification facilities of the RP are performed in particular. (DG) [de

  6. Risk analyses of nuclear power plants

    International Nuclear Information System (INIS)

    Jehee, J.N.T.; Seebregts, A.J.

    1991-02-01

    Probabilistic risk analyses of nuclear power plants are carried out by systematically analyzing the possible consequences of a broad spectrum of causes of accidents. The risk can be expressed in the probabilities for meltdown, radioactive releases, or harmful effects on the environment. Following risk policies for chemical installations, as expressed in the mandatory External Safety Reports (EVRs) or, e.g., the publication "How to deal with risks", probabilistic risk analyses are required for nuclear power plants

  7. FORMATION EPOCHS, STAR FORMATION HISTORIES, AND SIZES OF MASSIVE EARLY-TYPE GALAXIES IN CLUSTER AND FIELD ENVIRONMENTS AT z = 1.2: INSIGHTS FROM THE REST-FRAME ULTRAVIOLET

    International Nuclear Information System (INIS)

    Rettura, Alessandro; Demarco, R.; Ford, H. C.; Rosati, P.; Gobat, R.; Nonino, M.; Fosbury, R. A. E.; Menci, N.; Strazzullo, V.; Mei, S.

    2010-01-01

    We derive stellar masses, ages, and star formation histories (SFHs) of massive early-type galaxies in the z = 1.237 RDCS1252.9-2927 cluster and compare them with those measured in a similarly mass-selected sample of field contemporaries drawn from the Great Observatories Origin Deep Survey South Field. Robust estimates of these parameters are obtained by comparing a large grid of composite stellar population models with 8-9 band photometry in the rest-frame near-ultraviolet, optical, and IR, thus sampling the entire relevant domain of emission of the different stellar populations. Additionally, we present new, deep U-band photometry of both fields, giving access to the critical far-ultraviolet rest frame, in order to empirically constrain the dependence of the most recent star formation processes on the environment. We also analyze the morphological properties of both samples to examine the dependence of their scaling relations on their mass and environment. We find that early-type galaxies, both in the cluster and in the field, show analogous optical morphologies, follow comparable mass versus size relation, have congruent average surface stellar mass densities, and lie on the same Kormendy relation. We also show that a fraction of early-type galaxies in the field employ longer timescales, τ, to assemble their mass than their cluster contemporaries. Hence, we conclude that while the formation epoch of early-type galaxies only depends on their mass, the environment does regulate the timescales of their SFHs. Our deep U-band imaging strongly supports this conclusion. We show that cluster galaxies are at least 0.5 mag fainter than their field contemporaries of similar mass and optical-to-infrared colors, implying that the last episode of star formation must have happened more recently in the field than in the cluster.

  8. Adubação foliar: I. Épocas de aplicação de fósforo na cultura da soja Leaf fertilization: I. Epochs of phosphorus application on soybeans

    Directory of Open Access Journals (Sweden)

    Pedro Milanez de Rezende

    2005-12-01

    + R1 + R4 + R6 respectivamente.The search for new alternatives in order to increase soybeans productivity has been constant objective of researchers and farmers. The crop responses to phosphorus application in the soil are well defined, being this nutrient very important on its development and yield. The leaf fertilization on this crop appears as a new rationale option, mainly when the plant nutrient levels are low. So, this work aimed to study the effect of phosphorus leaf fertilization, applied at different plant stage, including: V5, R1, R4, V5 + R1, V5 + R4, R1 + R4, V5 + R1 + R4, V5 + R1 + R4 + R6 and test plot. The experiment was installed in a soybeans crop, Monarca cultivar, at Palmital Farm, Ijaci county, Minas Gerais state, Brazil, using a totally randomized design, with 9 treatments and 3 replications. The chelate Quimifol P30 in liquid form with 30% of the nutrient soluble in CNA + water in the, with doses of 2 l. ha-1, was utilized as phosphorus source, using the applications performed with a constant pressure CO2-nebulizer. The different epochs of phosphorous application significantly altered the grains yield, proportioning significant increases, up to 16% for the V5, V5 + R1, V5 + R4, V5 + R1 + R4, V5 + R1 + R4 + R6 epochs, when compared to the test plot, clearly expressing the positive effect of these applications at V5 stage. The plant height, first legume insertion, and lodging index characteristics were not significantly altered by the different epochs evaluated. It was observed significant response for the nutrient leaf amounts only in the case of K and Zn indices, exclusively in the V5 + R4, and in the V5, V5 + R1 and V5 + R1 + R4 + R6 treatments, respectively.

  9. Mass separated neutral particle energy analyser

    International Nuclear Information System (INIS)

    Takeuchi, Hiroshi; Matsuda, Toshiaki; Miura, Yukitoshi; Shiho, Makoto; Maeda, Hikosuke; Hashimoto, Kiyoshi; Hayashi, Kazuo.

    1983-09-01

    A mass separated neutral particle energy analyser which could simultaneously measure hydrogen and deuterium atoms emitted from a tokamak plasma was constructed. The analyser was calibrated for energy and mass separation in the energy range from 0.4 keV to 9 keV. In order to investigate the behavior of deuterons and protons in the JFT-2 tokamak plasma heated with ion cyclotron waves and neutral beam injection, this analyser was installed on the JFT-2 tokamak. It was found that the energy spectrum could be determined with sufficient accuracy. The ion temperature and the ratio of deuteron to proton density obtained from the energy spectrum were in good agreement with the values deduced from the Doppler broadening of the Ti XIV line and from the line intensities of Hα and Dα, respectively. (author)

  10. Advanced toroidal facility vacuum vessel stress analyses

    International Nuclear Information System (INIS)

    Hammonds, C.J.; Mayhall, J.A.

    1987-01-01

    The complex geometry of the Advanced Toroidal Facility (ATF) vacuum vessel required special analysis techniques in investigating the structural behavior of the design. The response of a large-scale finite element model was found for transportation and operational loading. Several computer codes and systems, including the National Magnetic Fusion Energy Computer Center Cray machines, were implemented in accomplishing these analyses. The work combined complex methods that taxed the limits of both the codes and the computer systems involved. Using MSC/NASTRAN cyclic-symmetry solutions permitted using only 1/12 of the vessel geometry to mathematically analyze the entire vessel. This allowed the greater detail and accuracy demanded by the complex geometry of the vessel. Critical buckling-pressure analyses were performed with the same model. The development, results, and problems encountered in performing these analyses are described. 5 refs., 3 figs

  11. Thermal and stress analyses with ANSYS program

    International Nuclear Information System (INIS)

    Kanoo, Iwao; Kawaguchi, Osamu; Asakura, Junichi.

    1975-03-01

    Some analyses of heat conduction and elastic/inelastic stresses, carried out at the Power Reactor and Nuclear Fuel Development Corporation (PNC) in fiscal 1973 using the ANSYS (Engineering Analysis System) program, are summarized. In chapter I, the present state of the structural analysis programs available for an FBR (fast breeder reactor) at PNC is explained. Chapter II is a brief description of the current status of ANSYS. In chapter III, 8 examples of steady-state and transient thermal analyses of fast-reactor plant components are presented, and in chapter IV, 5 examples of inelastic structural analysis. With advances in the finite element method, its applications in design studies should extend progressively in the future. The present report, it is hoped, will serve as a reference for similar analyses and at the same time help in understanding the deformation and strain behaviors of structures. (Mori, K.)

  12. Periodic safety analyses; Les essais periodiques

    Energy Technology Data Exchange (ETDEWEB)

    Gouffon, A; Zermizoglou, R

    1990-12-01

    The IAEA Safety Guide 50-SG-S8, devoted to 'Safety Aspects of Foundations of Nuclear Power Plants', indicates that the operator of an NPP should establish a program of inspection of safe operation during construction, start-up and the service life of the plant, in order to obtain the data needed for estimating the lifetime of structures and components. At the same time, the program should ensure that the safety margins are appropriate. Periodic safety analyses are an important part of the safety inspection program. Periodic safety reporting is a method for testing the whole safety system, or a part of it, against precise criteria. Periodic safety analyses are not meant for qualification of the plant components. Separate analyses are devoted to: start-up, qualification of components and materials, and aging. All these analyses are described in this presentation. The last chapter describes the experience obtained with PWR-900 and PWR-1300 units during 1986-1989.

  13. A Simple, Reliable Precision Time Analyser

    Energy Technology Data Exchange (ETDEWEB)

    Joshi, B. V.; Nargundkar, V. R.; Subbarao, K.; Kamath, M. S.; Eligar, S. K. [Atomic Energy Establishment Trombay, Bombay (India)

    1966-06-15

    A 30-channel time analyser is described. The time analyser was designed and built for pulsed neutron research but can be applied to other uses. Most of the logic is performed by means of ferrite memory core and transistor switching circuits. This leads to great versatility, low power consumption, extreme reliability and low cost. The analyser described provides channel widths from 10 µs to 10 ms; arbitrarily wider channels are easily obtainable. It can handle counting rates up to 2000 counts/min in each channel with less than 1% dead time loss. There is a provision for an initial delay equal to 100 channel widths. An input pulse de-randomizer unit using tunnel diodes ensures exactly equal channel widths. A brief description of the principles involved in core switching circuitry is given. The core-transistor transfer loop is compared with the usual core-diode loops and is shown to be more versatile and better adapted to the making of a time analyser. The circuits derived from the basic loop are described. These include the scale of ten, the frequency dividers and the delay generator. The current drivers developed for driving the cores are described. The crystal-controlled clock which controls the width of the time channels and synchronizes the operation of the various circuits is described. The detector pulse de-randomizer unit using tunnel diodes is described. The scheme of the time analyser is then described, showing how the various circuits are integrated to form a versatile time analyser. (author)
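
    The channel-sorting logic of such a time analyser (fixed channel widths, an initial delay equal to 100 channel widths, out-of-range pulses discarded) can be sketched in software. The function below is an illustrative model only, not the instrument's core-transistor logic, and all names and numbers are assumptions.

```python
def sort_into_channels(timestamps_us, channel_width_us=10.0,
                       n_channels=30, delay_channels=100):
    """Histogram pulse arrival times (in microseconds after the trigger)
    into fixed-width channels, after an initial delay of
    `delay_channels` channel widths; pulses outside the 30-channel
    window are discarded."""
    t0 = delay_channels * channel_width_us      # initial delay
    counts = [0] * n_channels
    for t in timestamps_us:
        ch = int((t - t0) // channel_width_us)  # channel index
        if 0 <= ch < n_channels:
            counts[ch] += 1
    return counts
```

    With 10 µs channels the initial delay spans 1000 µs, so a pulse arriving at 1005 µs lands in channel 0, while pulses arriving before 1000 µs or after 1300 µs are discarded.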

  14. Fundamental data analyses for measurement control

    International Nuclear Information System (INIS)

    Campbell, K.; Barlich, G.L.; Fazal, B.; Strittmatter, R.B.

    1987-02-01

    A set of measurement control data analyses was selected for use by analysts responsible for maintaining the measurement quality of nuclear materials accounting instrumentation. The analyses consist of control charts for bias and precision and of statistical tests used as analytic supplements to the control charts. They provide the desired detection sensitivity and yet can be interpreted locally, quickly, and easily. The control charts provide for visual inspection of data and enable an alert reviewer to spot problems, possibly before statistical tests detect them. The statistical tests are useful for automating the detection of departures from the controlled state or from the underlying assumptions (such as normality). 8 refs., 3 figs., 5 tabs
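
    As a rough illustration of the control-chart idea (not the implementation described in the report), the sketch below computes Shewhart-style limits from in-control baseline measurements and flags points falling outside them; the function names and the 3-sigma default are assumptions.

```python
import statistics

def control_limits(baseline, k=3.0):
    """Shewhart-style limits from in-control baseline measurements:
    centre line at the mean, limits at mean +/- k standard deviations."""
    mean = statistics.fmean(baseline)
    s = statistics.stdev(baseline)
    return mean - k * s, mean, mean + k * s

def out_of_control(values, lcl, ucl):
    """Indices of measurements falling outside the control limits."""
    return [i for i, v in enumerate(values) if not lcl <= v <= ucl]
```

    A reviewer would plot `values` against the limits for visual inspection; `out_of_control` automates the departure check, in the spirit of the statistical supplements mentioned above.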

  15. A theoretical framework for analysing preschool teaching

    DEFF Research Database (Denmark)

    Chaiklin, Seth

    2014-01-01

    This article introduces a theoretical framework for analysing preschool teaching as a historically-grounded societal practice. The aim is to present a unified framework that can be used to analyse and compare both historical and contemporary examples of preschool teaching practice within and across...... national traditions. The framework has two main components, an analysis of preschool teaching as a practice, formed in relation to societal needs, and an analysis of the categorical relations which necessarily must be addressed in preschool teaching activity. The framework is introduced and illustrated...

  16. Power System Oscillatory Behaviors: Sources, Characteristics, & Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Follum, James D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Tuffner, Francis K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Dosiek, Luke A. [Union College, Schenectady, NY (United States); Pierre, John W. [Univ. of Wyoming, Laramie, WY (United States)

    2017-05-17

    This document is intended to provide a broad overview of the sources, characteristics, and analyses of natural and forced oscillatory behaviors in power systems. These aspects are necessarily linked. Oscillations appear in measurements with distinguishing characteristics derived from the oscillation’s source. These characteristics determine which analysis methods can be appropriately applied, and the results from these analyses can only be interpreted correctly with an understanding of the oscillation’s origin. To describe oscillations both at their source within a physical power system and within measurements, a perspective from the boundary between power system and signal processing theory has been adopted.
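
    The link between an oscillation's source and its measured signature can be illustrated with a toy signal: a forced oscillation appears as a sharp spectral peak at the forcing frequency, in contrast to the broader peaks of lightly damped natural modes. The sketch below locates such a peak with a simple periodogram; the sampling rate, frequency, and amplitudes are invented for illustration and are not taken from the report.

```python
import numpy as np

# Synthetic "measurement": a 0.7 Hz forced oscillation buried in noise,
# sampled at 30 samples/s (roughly PMU-like).  Illustrative values only.
fs = 30.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
x = 0.5 * np.sin(2 * np.pi * 0.7 * t) + rng.normal(0, 0.3, t.size)

# Periodogram of the detrended record: the forcing frequency shows up
# as a single dominant bin.
freqs = np.fft.rfftfreq(t.size, 1 / fs)
power = np.abs(np.fft.rfft(x - x.mean())) ** 2
peak_hz = freqs[np.argmax(power)]
```

    For this synthetic record the dominant periodogram bin falls at the 0.7 Hz forcing frequency; distinguishing such a peak from a natural mode requires the source-aware interpretation the report discusses.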

  17. 10 CFR 61.13 - Technical analyses.

    Science.gov (United States)

    2010-01-01

    ... air, soil, groundwater, surface water, plant uptake, and exhumation by burrowing animals. The analyses... processes such as erosion, mass wasting, slope failure, settlement of wastes and backfill, infiltration through covers over disposal areas and adjacent soils, and surface drainage of the disposal site. The...

  18. Analysing Simple Electric Motors in the Classroom

    Science.gov (United States)

    Yap, Jeff; MacIsaac, Dan

    2006-01-01

    Electromagnetic phenomena and devices such as motors are typically unfamiliar to both teachers and students. To better visualize and illustrate the abstract concepts (such as magnetic fields) underlying electricity and magnetism, we suggest that students construct and analyse the operation of a simply constructed Johnson electric motor. In this…

  19. En kvantitativ metode til analyse af radio

    Directory of Open Access Journals (Sweden)

    Christine Lejre

    2014-06-01

    Full Text Available In the Danish as well as the international literature on radio, proposed methods for analysing the radio medium are sparse. This is presumably because the radio medium is difficult to analyse: it is a medium that is not visualised in the form of images or supported by printed text. The purpose of this article is to describe a new quantitative method for the analysis of radio that takes particular account of the radio medium's modality: sound structured as a linear progression in time. The method thus supports the radio medium both as a medium in time and as a blind medium. The method was developed in connection with a comparative analysis of cultural programmes on P1 and Radio24syv carried out for Danmarks Radio. The article suggests that the method is well suited to the analysis not only of radio but also of other media platforms and various journalistic subject areas.

  20. Analysing User Lifetime in Voluntary Online Collaboration

    DEFF Research Database (Denmark)

    McHugh, Ronan; Larsen, Birger

    2010-01-01

    This paper analyses persuasion in online collaboration projects. It introduces a set of heuristics that can be applied to such projects and combines these with a quantitative analysis of user activity over time. Two example sites are studied: Open Street Map and The Pirate Bay. Results show that ...

  1. Analyses of hydraulic performance of velocity caps

    DEFF Research Database (Denmark)

    Christensen, Erik Damgaard; Degn Eskesen, Mark Chr.; Buhrkall, Jeppe

    2014-01-01

    The hydraulic performance of a velocity cap has been investigated. Velocity caps are often used in connection with offshore intakes. CFD (computational fluid dynamics) was used to examine the flow through the cap openings and further down into the intake pipes. This was combined with dimension analyses...

  2. Quantitative analyses of shrinkage characteristics of neem ...

    African Journals Online (AJOL)

    Quantitative analyses of shrinkage characteristics of neem (Azadirachta indica A. Juss.) wood were carried out. Forty five wood specimens were prepared from the three ecological zones of north eastern Nigeria, viz: sahel savanna, sudan savanna and guinea savanna for the research. The results indicated that the wood ...

  3. UMTS signal measurements with digital spectrum analysers

    International Nuclear Information System (INIS)

    Licitra, G.; Palazzuoli, D.; Ricci, A. S.; Silvi, A. M.

    2004-01-01

    The launch of the Universal Mobile Telecommunications System (UMTS), the most recent mobile telecommunications standard, has made it necessary to update measurement instrumentation and methodologies. In order to define the most reliable measurement procedure for assessing exposure to electromagnetic fields, the features of modern spectrum analysers relevant to correct signal characterisation have been reviewed. (authors)

  4. Hybrid Logical Analyses of the Ambient Calculus

    DEFF Research Database (Denmark)

    Bolander, Thomas; Hansen, Rene Rydhof

    2010-01-01

    In this paper, hybrid logic is used to formulate three control flow analyses for Mobile Ambients, a process calculus designed for modelling mobility. We show that hybrid logic is very well-suited to express the semantic structure of the ambient calculus and how features of hybrid logic can...

  5. Micromechanical photothermal analyser of microfluidic samples

    DEFF Research Database (Denmark)

    2014-01-01

    The present invention relates to a micromechanical photothermal analyser of microfluidic samples comprising an oblong micro-channel extending longitudinally from a support element, the micro-channel is made from at least two materials with different thermal expansion coefficients, wherein...

  6. Systematic review and meta-analyses

    DEFF Research Database (Denmark)

    Dreier, Julie Werenberg; Andersen, Anne-Marie Nybo; Berg-Beckhoff, Gabriele

    2014-01-01

    1990 were excluded. RESULTS: The available literature supported an increased risk of adverse offspring health in association with fever during pregnancy. The strongest evidence was available for neural tube defects, congenital heart defects, and oral clefts, in which meta-analyses suggested between a 1...

  7. Secundaire analyses organisatiebeleid psychosociale arbeidsbelasting (PSA)

    NARCIS (Netherlands)

    Kraan, K.O.; Houtman, I.L.D.

    2016-01-01

    What organisational policy on psychosocial workload (PSA) looks like as of 2014, and how it relates to other policies and outcome measures, are the central questions of this study. The results of these in-depth analyses can benefit the ongoing campaign 'Check je

  8. Exergoeconomic and environmental analyses of CO2/NH3 cascade refrigeration systems

    NARCIS (Netherlands)

    Mosaffa, A. H.; Garousi Farshi, L; Infante Ferreira, C.A.; Rosen, M. A.

    2016-01-01

    Exergoeconomic and environmental analyses are presented for two CO2/NH3 cascade refrigeration systems equipped with (1) two flash tanks and (2) a flash tank along with a flash intercooler with indirect subcooler. A comparative study is performed for the proposed systems, and

  9. Meta-analyses on viral hepatitis

    DEFF Research Database (Denmark)

    Gluud, Lise L; Gluud, Christian

    2009-01-01

    This article summarizes the meta-analyses of interventions for viral hepatitis A, B, and C. Some of the interventions assessed are described in small trials with unclear bias control. Other interventions are supported by large, high-quality trials. Although attempts have been made to adjust...

  10. Multivariate differential analyses of adolescents' experiences of ...

    African Journals Online (AJOL)

    Aggression is reasoned to be dependent on aspects such as self-concept, moral reasoning, communication, frustration tolerance and family relationships. To analyse the data from questionnaires of 101 families (95 adolescents, 95 mothers and 91 fathers) Cronbach Alpha, various consecutive first and second order factor ...

  11. Chromosomal evolution and phylogenetic analyses in Tayassu ...

    Indian Academy of Sciences (India)

    Chromosome preparation and karyotype description. The material analysed consists of chromosome preparations of the tayassuid species T. pecari (three individuals) and. P. tajacu (four individuals) and were made from short-term lymphocyte cultures of whole blood samples using standard protocols (Chaves et al. 2002).

  12. Grey literature in meta-analyses.

    Science.gov (United States)

    Conn, Vicki S; Valentine, Jeffrey C; Cooper, Harris M; Rantz, Marilyn J

    2003-01-01

    In meta-analysis, researchers combine the results of individual studies to arrive at cumulative conclusions. Meta-analysts sometimes include "grey literature" in their evidential base, which includes unpublished studies and studies published outside widely available journals. Because grey literature is a source of data that might not employ peer review, critics have questioned the validity of its data and the results of meta-analyses that include it. To examine evidence regarding whether grey literature should be included in meta-analyses and strategies to manage grey literature in quantitative synthesis. This article reviews evidence on whether the results of studies published in peer-reviewed journals are representative of results from broader samplings of research on a topic as a rationale for inclusion of grey literature. Strategies to enhance access to grey literature are addressed. The most consistent and robust difference between published and grey literature is that published research is more likely to contain results that are statistically significant. Effect size estimates of published research are about one-third larger than those of unpublished studies. Unfunded and small sample studies are less likely to be published. Yet, importantly, methodological rigor does not differ between published and grey literature. Meta-analyses that exclude grey literature likely (a) over-represent studies with statistically significant findings, (b) inflate effect size estimates, and (c) provide less precise effect size estimates than meta-analyses including grey literature. Meta-analyses should include grey literature to fully reflect the existing evidential base and should assess the impact of methodological variations through moderator analysis.
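
    The reported effect size inflation can be illustrated with a minimal fixed-effect (inverse-variance) pooling sketch: excluding hypothetical "grey" studies with smaller effects pulls the pooled estimate upward. All effect sizes and variances below are invented for illustration, not drawn from the article.

```python
import math

def fixed_effect_pool(effects, variances):
    """Inverse-variance (fixed-effect) pooling of study effect sizes."""
    weights = [1 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    return pooled, se

# Hypothetical studies: published effects run larger than grey ones.
published = ([0.45, 0.50, 0.40], [0.02, 0.03, 0.02])
grey      = ([0.20, 0.25],       [0.04, 0.05])

pooled_pub, _ = fixed_effect_pool(*published)
pooled_all, _ = fixed_effect_pool(published[0] + grey[0],
                                  published[1] + grey[1])
```

    Here `pooled_pub` exceeds `pooled_all`, mirroring the article's point that meta-analyses excluding grey literature tend to inflate effect size estimates.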

  13. Thermal analyses. Information on the expected baking process; Thermische analyses. Informatie over een te verwachten bakgedrag

    Energy Technology Data Exchange (ETDEWEB)

    Van Wijck, H. [Stichting Technisch Centrum voor de Keramische Industrie TCKI, Velp (Netherlands)

    2009-09-01

    The design process and the drying process for architectural ceramics and pottery partly determine the characteristics of the final product, but the largest changes occur during the baking process. An overview is provided of the different thermal analyses and how the information from these analyses can predict the process in practice. (mk)

  14. Analyses and characterization of double shell tank

    Energy Technology Data Exchange (ETDEWEB)

    1994-10-04

    Evaporator candidate feed from tank 241-AP-108 (108-AP) was sampled under prescribed protocol. Physical, inorganic, and radiochemical analyses were performed on tank 108-AP. Characterization of evaporator feed tank waste is needed primarily to evaluate its suitability to be safely processed through the evaporator. Such analyses should provide sufficient information on the waste composition to determine confidently whether constituent concentrations are not only within safe operating limits but also compatible with the functional limits for operation of the evaporator. Characterization of tank constituent concentrations should provide data that enable a prediction of where the types and amounts of environmentally hazardous waste are likely to occur in the evaporator product streams.

  15. DCH analyses using the CONTAIN code

    International Nuclear Information System (INIS)

    Hong, Sung Wan; Kim, Hee Dong

    1996-08-01

    This report describes CONTAIN analyses performed during participation in the 'DCH issue resolution for ice condenser plants' project sponsored by the NRC at SNL. Even though the calculations were performed for an ice condenser plant, the CONTAIN code has been used for analyses of many phenomena in PWR containments, and the DCH module can be applied to any plant type. The present ice condenser issue resolution effort is intended to provide guidance as to what might be needed to resolve DCH for ice condenser plants. It includes both a screening analysis and a scoping study if the screening analysis cannot provide a complete resolution. The following results concern DCH loads, in descending order of importance: 1. Availability of ignition sources prior to vessel breach; 2. Availability and effectiveness of ice in the ice condenser; 3. Loads modeling uncertainties related to co-ejected RPV water; 4. Other loads modeling uncertainties. 10 tabs., 3 figs., 14 refs. (Author)

  16. DCH analyses using the CONTAIN code

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Sung Wan; Kim, Hee Dong [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1996-08-01

    This report describes CONTAIN analyses performed during participation in the 'DCH issue resolution for ice condenser plants' project sponsored by the NRC at SNL. Even though the calculations were performed for an ice condenser plant, the CONTAIN code has been used for analyses of many phenomena in PWR containments, and the DCH module can be applied to any plant type. The present ice condenser issue resolution effort is intended to provide guidance as to what might be needed to resolve DCH for ice condenser plants. It includes both a screening analysis and a scoping study if the screening analysis cannot provide a complete resolution. The following results concern DCH loads, in descending order of importance: 1. Availability of ignition sources prior to vessel breach; 2. Availability and effectiveness of ice in the ice condenser; 3. Loads modeling uncertainties related to co-ejected RPV water; 4. Other loads modeling uncertainties. 10 tabs., 3 figs., 14 refs. (Author)

  17. Analyses and characterization of double shell tank

    International Nuclear Information System (INIS)

    1994-01-01

    Evaporator candidate feed from tank 241-AP-108 (108-AP) was sampled under prescribed protocol. Physical, inorganic, and radiochemical analyses were performed on tank 108-AP. Characterization of evaporator feed tank waste is needed primarily to evaluate its suitability to be safely processed through the evaporator. Such analyses should provide sufficient information on the waste composition to determine confidently whether constituent concentrations are not only within safe operating limits but also compatible with the functional limits for operation of the evaporator. Characterization of tank constituent concentrations should provide data that enable a prediction of where the types and amounts of environmentally hazardous waste are likely to occur in the evaporator product streams

  18. Soil analyses by ICP-MS (Review)

    International Nuclear Information System (INIS)

    Yamasaki, Shin-ichi

    2000-01-01

    Soil analyses by inductively coupled plasma mass spectrometry (ICP-MS) are reviewed. The first half of the paper is devoted to the development of techniques applicable to soil analyses, where diverse analytical parameters are carefully evaluated. However, the choice of soil samples is somewhat arbitrary, and only a limited number of samples (mostly reference materials) are examined. In the second half, efforts are mostly concentrated on the introduction of reports, where a large number of samples and/or very precious samples have been analyzed. Although the analytical techniques used in these reports are not necessarily novel, valuable information concerning such topics as background levels of elements in soils, chemical forms of elements in soils and behavior of elements in soil ecosystems and the environment can be obtained. The major topics discussed are total elemental analysis, analysis of radionuclides with long half-lives, speciation, leaching techniques, and isotope ratio measurements. (author)

  19. Sorption analyses in materials science: selected oxides

    International Nuclear Information System (INIS)

    Fuller, E.L. Jr.; Condon, J.B.; Eager, M.H.; Jones, L.L.

    1981-01-01

    Physical adsorption studies have been shown to be extremely valuable in studying the chemistry and structure of dispersed materials. Many processes rely on the access to the large amount of surface made available by the high degree of dispersion. Conversely, there are many applications where consolidation of the dispersed solids is required. Several systems (silica gel, alumina catalysts, mineralogic alumino-silicates, and yttrium oxide plasters) have been studied to show the type and amount of chemical and structural information that can be obtained. Some review of current theories is given and additional concepts are developed based on statistical and thermodynamic arguments. The results are applied to sorption data to show that detailed sorption analyses are extremely useful and can provide valuable information that is difficult to obtain by any other means. Considerable emphasis has been placed on data analyses and interpretation of a nonclassical nature to show the potential of such studies, a potential that is often neither recognized nor utilized

  20. Standardized analyses of nuclear shipping containers

    International Nuclear Information System (INIS)

    Parks, C.V.; Hermann, O.W.; Petrie, L.M.; Hoffman, T.J.; Tang, J.S.; Landers, N.F.; Turner, W.D.

    1983-01-01

    This paper describes improved capabilities for analyses of nuclear fuel shipping containers within SCALE, a modular code system for Standardized Computer Analyses for Licensing Evaluation. Criticality analysis improvements include the new KENO V.a code, which contains an enhanced geometry package, and a new control module which uses KENO V.a and allows a criticality search on optimum pitch (maximum k-effective) to be performed. The SAS2 sequence is a new shielding analysis module which couples fuel burnup, source term generation, and radial cask shielding. The SAS5 shielding sequence allows a multidimensional Monte Carlo analysis of a shipping cask with code-generated biasing of the particle histories. The thermal analysis sequence (HTAS1) provides an easy-to-use tool for evaluating a shipping cask's response to accident conditions. Together, these sequences enhance the capability of the SCALE system to provide the cask designer or evaluator with automated procedures and easy-to-understand input that lead to standardization

  1. Quantitative Analyse und Visualisierung der Herzfunktionen

    Science.gov (United States)

    Sauer, Anne; Schwarz, Tobias; Engel, Nicole; Seitel, Mathias; Kenngott, Hannes; Mohrhardt, Carsten; Loßnitzer, Dirk; Giannitsis, Evangelos; Katus, Hugo A.; Meinzer, Hans-Peter

    Computer-assisted, image-based analysis of cardiac function is now standard in cardiology. The available products usually require a high degree of user interaction and hence increased time expenditure. This work presents an approach that offers the cardiologist a largely automatic analysis of cardiac function from MRI image data, thereby saving time. All relevant cardio-physiological parameters are computed and visualised by means of diagrams and graphs. These computations are evaluated by comparing the determined values with manually measured ones. The mean error so obtained, 2.85 mm for wall thickness and 1.61 mm for wall thickening, still lies within one pixel size of the images used.

  2. Exergetic and thermoeconomic analyses of power plants

    International Nuclear Information System (INIS)

    Kwak, H.-Y.; Kim, D.-J.; Jeon, J.-S.

    2003-01-01

    Exergetic and thermoeconomic analyses were performed for a 500-MW combined cycle plant. In these analyses, mass and energy conservation laws were applied to each component of the system. Quantitative balances of the exergy and exergetic cost for each component, and for the whole system, were carefully considered. The exergoeconomic model, which represents the productive structure of the system considered, was used to visualize the cost formation process and the productive interaction between components. The computer program developed in this study can determine the production costs of power plants, such as gas- and steam-turbine plants and gas-turbine cogeneration plants. The program can also be used to study plant characteristics, namely, thermodynamic performance and sensitivity to changes in process and/or component design variables
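
    A minimal sketch of the exergetic cost balance underlying such thermoeconomic analyses is given below: for a single component, the cost rate of the product equals the cost rate of the fuel plus the capital cost rate. The SPECO-style formulation and all numbers are illustrative assumptions, not the authors' model.

```python
def unit_product_cost(c_fuel, ex_fuel, z_dot, ex_product):
    """Exergetic cost balance for one component:
    c_fuel     : unit cost of fuel exergy      [$ / kWh]
    ex_fuel    : fuel exergy rate              [kW]
    z_dot      : capital and O&M cost rate     [$ / h]
    ex_product : product exergy rate           [kW]
    Returns the unit cost of product exergy in $/kWh:
    (c_fuel * Ex_fuel + Z) / Ex_product."""
    return (c_fuel * ex_fuel + z_dot) / ex_product
```

    For example, with fuel exergy of 1000 kW bought at 0.05 $/kWh, a capital cost rate of 20 $/h, and 800 kW of product exergy, the unit product cost is 0.0875 $/kWh; chaining such balances component by component yields the cost formation process the abstract describes.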

  3. Pratique de l'analyse fonctionelle

    CERN Document Server

    Tassinari, Robert

    1997-01-01

    Developing a product or service that is perfectly suited to the customer's needs and requirements is essential for a company. To leave nothing to chance, a rigorous methodology must be followed: that of functional analysis. This book defines the method precisely, as well as its fields of application. It describes the most effective methods for product design and the pursuit of quality, and introduces the notion of internal functional analysis. A key work for optimising product design processes within a company. -- Key ideas, by Business Digest

  4. Kinetic stability analyses in a bumpy cylinder

    International Nuclear Information System (INIS)

    Dominguez, R.R.; Berk, H.L.

    1981-01-01

    Recent interest in the ELMO Bumpy Torus (EBT) has prompted a number of stability analyses of both the hot electron rings and the toroidal plasma. Typically these works employ the local approximation, neglecting radial eigenmode structure and ballooning effects to perform the stability analysis. In the present work we develop a fully kinetic formalism for performing nonlocal stability analyses in a bumpy cylinder. We show that the Vlasov-Maxwell integral equations (with one ignorable coordinate) are self-adjoint and hence amenable to analysis using numerical techniques developed for self-adjoint systems of equations. The representation we obtain for the kernel of the Vlasov-Maxwell equations is a differential operator of arbitrarily high order. This form leads to a manifestly self-adjoint system of differential equations for long wavelength modes

  5. Sectorial Group for Incident Analyses (GSAI)

    International Nuclear Information System (INIS)

    Galles, Q.; Gamo, J. M.; Jorda, M.; Sanchez-Garrido, P.; Lopez, F.; Asensio, L.; Reig, J.

    2013-01-01

    In 2008, the UNESA Nuclear Energy Committee (CEN) proposed the creation of a working group formed by experts from all Spanish NPPs to analyze jointly the relevant incidents occurring at each of the plants. This initiative was a response to a historical situation in which the exchange of information on incidents between the Spanish NPPs was below the desired level. In June 2009, UNESA Guide CEN-29 established the performance criteria for the so-called Sectorial Group for Incident Analyses (GSAI), whose activity would be coordinated by UNESA's Group of Operating Experience, under the Operations Commission (COP). (Author)

  6. Analyses of cavitation instabilities in ductile metals

    DEFF Research Database (Denmark)

    Tvergaard, Viggo

    2007-01-01

    Cavitation instabilities have been predicted for a single void in a ductile metal stressed under high triaxiality conditions. In experiments for a ceramic reinforced by metal particles a single dominant void has been observed on the fracture surface of some of the metal particles bridging a crack......, and also tests for a thin ductile metal layer bonding two ceramic blocks have indicated rapid void growth. Analyses for these material configurations are discussed here. When the void radius is very small, a nonlocal plasticity model is needed to account for observed size-effects, and recent analyses......, while the surrounding voids are represented by a porous ductile material model in terms of a field quantity that specifies the variation of the void volume fraction in the surrounding metal....

  7. Analysing organic transistors based on interface approximation

    International Nuclear Information System (INIS)

    Akiyama, Yuto; Mori, Takehiko

    2014-01-01

    Temperature-dependent characteristics of organic transistors are analysed thoroughly using interface approximation. In contrast to amorphous silicon transistors, it is characteristic of organic transistors that the accumulation layer is concentrated on the first monolayer, and it is appropriate to consider interface charge rather than band bending. On the basis of this model, observed characteristics of hexamethylenetetrathiafulvalene (HMTTF) and dibenzotetrathiafulvalene (DBTTF) transistors with various surface treatments are analysed, and the trap distribution is extracted. In turn, starting from a simple exponential distribution, we can reproduce the temperature-dependent transistor characteristics as well as the gate voltage dependence of the activation energy, so we can investigate various aspects of organic transistors self-consistently under the interface approximation. Small deviation from such an ideal transistor operation is discussed assuming the presence of an energetically discrete trap level, which leads to a hump in the transfer characteristics. The contact resistance is estimated by measuring the transfer characteristics up to the linear region
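
    The extraction of an activation energy from temperature-dependent characteristics, mentioned above, amounts to an Arrhenius fit. The sketch below performs a least-squares fit of ln I against 1/T on synthetic data; it is a generic illustration, not the interface-approximation analysis of the paper, and all names are assumptions.

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def activation_energy(temps_k, currents):
    """Extract an activation energy (eV) from temperature-dependent
    currents via a least-squares Arrhenius fit:
    ln I = ln I0 - Ea / (kB * T), so the slope of ln I vs 1/T is -Ea/kB."""
    xs = [1.0 / t for t in temps_k]
    ys = [math.log(i) for i in currents]
    n = len(xs)
    x_mean = sum(xs) / n
    y_mean = sum(ys) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
             / sum((x - x_mean) ** 2 for x in xs))
    return -slope * K_B
```

    Repeating the fit at each gate voltage would give the gate-voltage dependence of the activation energy that the abstract discusses.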

  8. New environmental metabarcodes for analysing soil DNA

    DEFF Research Database (Denmark)

    Epp, Laura S.; Boessenkool, Sanne; Bellemain, Eva P.

    2012-01-01

    Metabarcoding approaches use total and typically degraded DNA from environmental samples to analyse biotic assemblages and can potentially be carried out for any kinds of organisms in an ecosystem. These analyses rely on specific markers, here called metabarcodes, which should be optimized for taxonomic resolution, minimal bias in amplification of the target organism group and short sequence length. Using bioinformatic tools, we developed metabarcodes for several groups of organisms: fungi, bryophytes, enchytraeids, beetles and birds. The ability of these metabarcodes to amplify the target groups was systematically evaluated by (i) in silico PCRs using all standard sequences in the EMBL public database as templates, (ii) in vitro PCRs of DNA extracts from surface soil samples from a site in Varanger, northern Norway and (iii) in vitro PCRs of DNA extracts from permanently frozen sediment samples of late...

  9. Visuelle Analyse von E-mail-Verkehr

    OpenAIRE

    Mansmann, Florian

    2003-01-01

    This work describes methods for the visual geographic analysis of e-mail traffic. Host addresses and IP addresses can be filtered out of an e-mail header. Using a database, these host and IP addresses are assigned geographic coordinates. A visualisation displays several thousand e-mail routes in a clear manner. In addition, interactive manipulation facilities are presented which allow a visual exploration of the data...

  10. BWR core melt progression phenomena: Experimental analyses

    International Nuclear Information System (INIS)

    Ott, L.J.

    1992-01-01

    In the BWR Core Melt Progression Phenomena Program, experimental results concerning severe fuel damage and core melt progression in BWR core geometry are used to evaluate existing models of the governing phenomena. These include control blade eutectic liquefaction and the subsequent relocation and attack on the channel box structure; oxidation heating and hydrogen generation; Zircaloy melting and relocation; and the continuing oxidation of zirconium with metallic blockage formation. Integral data have been obtained from the BWR DF-4 experiment in the ACRR and from BWR tests in the German CORA ex-reactor fuel-damage test facility. Additional integral data will be obtained from a new CORA BWR test, the full-length FLHT-6 BWR test in the NRU test reactor, and the new program of ex-reactor experiments at Sandia National Laboratories (SNL) on metallic melt relocation and blockage formation. An essential part of this activity is interpretation and use of the results of the BWR tests. The Oak Ridge National Laboratory (ORNL) has developed experiment-specific models for analysis of the BWR experiments; to date, these models have permitted far more precise analyses of the conditions in these experiments than were previously available. These analyses have provided a basis for more accurate interpretation of the phenomena that the experiments are intended to investigate. The results of posttest analyses of BWR experiments are discussed and significant findings from these analyses are explained. The ORNL control blade/canister models, with materials interaction, relocation and blockage models, are currently being implemented in SCDAP/RELAP5 as an optional structural component.

  11. En Billig GPS Data Analyse Platform

    DEFF Research Database (Denmark)

    Andersen, Ove; Christiansen, Nick; Larsen, Niels T.

    2011-01-01

    This article presents a complete software platform for the analysis of GPS data. The platform is built exclusively from open-source components. The individual components of the platform are described in detail. Advantages and disadvantages of using open source are discussed, including which IT policy measures... organisations with a digital road map and GPS data can begin to perform traffic analyses on these data. It is a requirement that suitable IT competences are present in the organisation.

  12. Neuronal network analyses: premises, promises and uncertainties

    OpenAIRE

    Parker, David

    2010-01-01

    Neuronal networks assemble the cellular components needed for sensory, motor and cognitive functions. Any rational intervention in the nervous system will thus require an understanding of network function. Obtaining this understanding is widely considered to be one of the major tasks facing neuroscience today. Network analyses have been performed for some years in relatively simple systems. In addition to the direct insights these systems have provided, they also illustrate some of the diffic...

  13. Modelling and analysing oriented fibrous structures

    International Nuclear Information System (INIS)

    Rantala, M; Lassas, M; Siltanen, S; Sampo, J; Takalo, J; Timonen, J

    2014-01-01

    A mathematical model for fibrous structures using a direction-dependent scaling law is presented. The orientation of fibrous nets (e.g. paper) is analysed with a method based on the curvelet transform. The curvelet-based orientation analysis has been tested successfully on real data from paper samples: the major directions of fibre orientation can apparently be recovered. Similar results are achieved in tests on data simulated by the new model, allowing a comparison with ground truth.

  14. Kinematic gait analyses in healthy Golden Retrievers

    OpenAIRE

    Silva, Gabriela C.A.; Cardoso, Mariana Trés; Gaiad, Thais P.; Brolio, Marina P.; Oliveira, Vanessa C.; Assis Neto, Antonio; Martins, Daniele S.; Ambrósio, Carlos E.

    2014-01-01

    Kinematic analysis concerns the relative movement between rigid bodies and finds application in gait analysis and other body movements; interpretation of its data, when changes are present, guides the choice of the treatment to be instituted. The objective of this study was to standardize the gait of healthy Golden Retriever dogs to assist in the diagnosis and treatment of musculoskeletal disorders. We used a kinematic analysis system to analyse the gait of seven female Golden Retriever dogs,...

  15. Evaluation of periodic safety status analyses

    International Nuclear Information System (INIS)

    Faber, C.; Staub, G.

    1997-01-01

    In order to carry out the evaluation of safety status analyses by the safety assessor within the periodic safety reviews of nuclear power plants, safety-goal-oriented requirements have been formulated together with complementary evaluation criteria. Their application in an interdisciplinary cooperation covering the subject areas involved facilitates a complete safety-goal-oriented assessment of the plant status. The procedure is outlined briefly by an example for the safety goal 'reactivity control' for BWRs. (orig.) [de

  16. Application of RUNTA code in flood analyses

    International Nuclear Information System (INIS)

    Perez Martin, F.; Benitez Fonzalez, F.

    1994-01-01

    Flood probability analyses carried out to date indicate the need to evaluate a large number of flood scenarios. This necessity is due to a variety of reasons, the most important of which include:
    - Large number of potential flood sources
    - Wide variety of characteristics of flood sources
    - Large possibility of flood-affected areas becoming interlinked, depending on the location of the potential flood sources
    - Diversity of flood flows from one flood source, depending on the size of the rupture and mode of operation
    - Isolation times applicable
    - Uncertainties in respect of the structural resistance of doors, penetration seals and floors
    - Applicable degrees of obstruction of the floor drainage system
    Consequently, a tool which carries out the large number of calculations usually required in flood analyses, with speed and flexibility, is considered necessary. The RUNTA code enables the range of possible scenarios to be calculated numerically, in accordance with all those parameters which, as a result of previous flood analyses, it is necessary to take into account in order to cover all the possible floods associated with each flood area.

  17. An analyser for power plant operations

    International Nuclear Information System (INIS)

    Rogers, A.E.; Wulff, W.

    1990-01-01

    Safe and reliable operation of power plants is essential. Power plant operators need a forecast of what the plant will do when its current state is disturbed. The in-line plant analyser provides precisely this information at relatively low cost. The plant analyser scheme uses a mathematical model of the dynamic behaviour of the plant to establish a numerical simulation. Over a period of time, the simulation is calibrated with measurements from the particular plant in which it is used. The analyser then provides a reference against which to evaluate the plant's current behaviour. It can be used to alert the operator to any atypical excursions or combinations of readings that indicate malfunction or off-normal conditions that, as the Three Mile Island event suggests, are not easily recognised by operators. In a look-ahead mode, it can forecast the behaviour resulting from an intended change in settings or operating conditions. Then, when such changes are made, the plant's behaviour can be tracked against the forecast in order to assure that the plant is behaving as expected. It can be used to investigate malfunctions that have occurred and test possible adjustments in operating procedures. Finally, it can be used to consider how far from the limits of performance the elements of the plant are operating. Then by adjusting settings, the required power can be generated with as little stress as possible on the equipment. (6 figures) (Author)

  18. Comparison of elastic and inelastic analyses

    International Nuclear Information System (INIS)

    Ammerman, D.J.; Heinstein, M.W.; Wellman, G.W.

    1992-01-01

    The use of inelastic analysis methods instead of the traditional elastic analysis methods in the design of radioactive material (RAM) transport packagings leads to a better understanding of the response of the package to mechanical loadings. Thus, better assessment of the containment, thermal protection, and shielding integrity of the package after a structural accident event can be made. A more accurate prediction of the package response can lead to enhanced safety and also allow for a more efficient use of materials, possibly leading to a package with higher capacity or lower weight. This paper discusses the advantages and disadvantages of using inelastic analysis in the design of RAM shipping packages. The use of inelastic analysis presents several problems to the package designer. When using inelastic analysis the entire nonlinear response of the material must be known, including the effects of temperature changes and strain rate. Another problem is that there are currently no acceptance criteria for this type of analysis approved by regulatory agencies. Inelastic analysis acceptance criteria based on failure stress, failure strain, or plastic energy density could be developed. For both elastic and inelastic analyses it is also important to include other sources of stress in the analyses, such as fabrication stresses, thermal stresses, stresses from bolt preloading, and contact stresses at material interfaces. Offsetting these added difficulties is the improved knowledge of the package behavior. This allows for incorporation of a more uniform margin of safety, which can result in weight savings and a higher level of confidence in the post-accident configuration of the package. In this paper, comparisons between elastic and inelastic analyses are made for a simple ring structure and for a package to transport a large quantity of RAM by rail (rail cask) with lead gamma shielding to illustrate the differences in the two analysis techniques.

  19. IDEA: Interactive Display for Evolutionary Analyses.

    Science.gov (United States)

    Egan, Amy; Mahurkar, Anup; Crabtree, Jonathan; Badger, Jonathan H; Carlton, Jane M; Silva, Joana C

    2008-12-08

    The availability of complete genomic sequences for hundreds of organisms promises to make obtaining genome-wide estimates of substitution rates, selective constraints and other molecular evolution variables of interest an increasingly important approach to addressing broad evolutionary questions. Two of the programs most widely used for this purpose are codeml and baseml, parts of the PAML (Phylogenetic Analysis by Maximum Likelihood) suite. A significant drawback of these programs is their lack of a graphical user interface, which can limit their user base and considerably reduce their efficiency. We have developed IDEA (Interactive Display for Evolutionary Analyses), an intuitive graphical input and output interface which interacts with PHYLIP for phylogeny reconstruction and with codeml and baseml for molecular evolution analyses. IDEA's graphical input and visualization interfaces eliminate the need to edit and parse text input and output files, reducing the likelihood of errors and improving processing time. Further, its interactive output display gives the user immediate access to results. Finally, IDEA can process data in parallel on a local machine or computing grid, allowing genome-wide analyses to be completed quickly. IDEA provides a graphical user interface that allows the user to follow a codeml or baseml analysis from parameter input through to the exploration of results. Novel options streamline the analysis process, and post-analysis visualization of phylogenies, evolutionary rates and selective constraint along protein sequences simplifies the interpretation of results. The integration of these functions into a single tool eliminates the need for lengthy data handling and parsing, significantly expediting access to global patterns in the data.

  20. IDEA: Interactive Display for Evolutionary Analyses

    Directory of Open Access Journals (Sweden)

    Carlton Jane M

    2008-12-01

    Background: The availability of complete genomic sequences for hundreds of organisms promises to make obtaining genome-wide estimates of substitution rates, selective constraints and other molecular evolution variables of interest an increasingly important approach to addressing broad evolutionary questions. Two of the programs most widely used for this purpose are codeml and baseml, parts of the PAML (Phylogenetic Analysis by Maximum Likelihood) suite. A significant drawback of these programs is their lack of a graphical user interface, which can limit their user base and considerably reduce their efficiency. Results: We have developed IDEA (Interactive Display for Evolutionary Analyses), an intuitive graphical input and output interface which interacts with PHYLIP for phylogeny reconstruction and with codeml and baseml for molecular evolution analyses. IDEA's graphical input and visualization interfaces eliminate the need to edit and parse text input and output files, reducing the likelihood of errors and improving processing time. Further, its interactive output display gives the user immediate access to results. Finally, IDEA can process data in parallel on a local machine or computing grid, allowing genome-wide analyses to be completed quickly. Conclusion: IDEA provides a graphical user interface that allows the user to follow a codeml or baseml analysis from parameter input through to the exploration of results. Novel options streamline the analysis process, and post-analysis visualization of phylogenies, evolutionary rates and selective constraint along protein sequences simplifies the interpretation of results. The integration of these functions into a single tool eliminates the need for lengthy data handling and parsing, significantly expediting access to global patterns in the data.

  1. Bayesian uncertainty analyses of probabilistic risk models

    International Nuclear Information System (INIS)

    Pulkkinen, U.

    1989-01-01

    Applications of Bayesian principles to uncertainty analyses are discussed in the paper. A short review of the most important uncertainties and their causes is provided. An application of the principle of maximum entropy to the determination of Bayesian prior distributions is described. An approach based on so-called probabilistic structures is presented in order to develop a method for the quantitative evaluation of modelling uncertainties. The method is applied to a small example case. Ideas for application areas for the proposed method are discussed.
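    The maximum-entropy step mentioned above can be made concrete: on a discretised support with a known mean, the maximum-entropy distribution has the exponential-family form p_i ∝ exp(-λx_i), and λ is fixed by matching the mean. A minimal numerical sketch (the grid, target mean and bisection bounds are illustrative assumptions, not taken from the paper):

```python
import math

def maxent_prior(xs, target_mean, lo=-50.0, hi=50.0, tol=1e-10):
    """Discrete maximum-entropy distribution on grid `xs` with a fixed mean.

    Solves for lambda in p_i ∝ exp(-lambda * x_i) by bisection so that
    sum(p_i * x_i) matches `target_mean`.
    """
    def mean_for(lam):
        ws = [math.exp(-lam * x) for x in xs]
        z = sum(ws)
        return sum(w * x for w, x in zip(ws, xs)) / z

    # mean_for is decreasing in lambda, so plain bisection works.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) > target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    ws = [math.exp(-lam * x) for x in xs]
    z = sum(ws)
    return [w / z for w in ws]

grid = [i / 20 for i in range(21)]   # support [0, 1], illustrative
p = maxent_prior(grid, target_mean=0.3)
print(sum(p), sum(pi * xi for pi, xi in zip(p, grid)))
```

    With no constraint beyond the support, the same construction reduces to the uniform distribution; each added moment constraint tilts the prior by another exponential factor.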

  2. Safety analyses for high-temperature reactors

    International Nuclear Information System (INIS)

    Mueller, A.

    1978-01-01

    The safety evaluation of HTRs may be based on the three methods presented here: the licensing procedure, probabilistic risk analysis, and damage extent analysis. Thereby all safety aspects of the HTR - from normal operation to extreme (hypothetical) accidents - are covered. The analyses within the licensing procedure of the HTR-1160 have shown that for normal operation and for the design basis accidents the radiation exposures remain clearly below the maximum permissible levels as prescribed by the radiation protection ordinance, so that no real hazard for the population will arise from them. (orig./RW) [de

  3. Introduction: Analysing Emotion and Theorising Affect

    Directory of Open Access Journals (Sweden)

    Peta Tait

    2016-08-01

    This discussion introduces ideas of emotion and affect for a volume of articles demonstrating the scope of approaches used in their study within the humanities and creative arts. The volume offers multiple perspectives on emotion and affect within 20th-century and 21st-century texts, arts and organisations and their histories. The discussion explains how emotion encompasses the emotions, emotional feeling, sensation and mood and how these can be analysed particularly in relation to literature, art and performance. It briefly summarises concepts of affect theory within recent approaches before introducing the articles.

  4. Applications of neural network to numerical analyses

    International Nuclear Information System (INIS)

    Takeda, Tatsuoki; Fukuhara, Makoto; Ma, Xiao-Feng; Liaqat, Ali

    1999-01-01

    Applications of a multi-layer neural network to numerical analyses are described. We are mainly concerned with computed tomography and the solution of differential equations. In both cases, as the objective function for the training process of the neural network we employed the residuals of the integral equation or the differential equations. This differs from conventional neural network training, where the sum of the squared errors of the output values is adopted as the objective function. For model problems both methods gave satisfactory results, and the methods are considered promising for some kinds of problems. (author)
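    The residual-as-objective idea described above can be illustrated with a deliberately simplified stand-in for the neural network: a polynomial trial solution for the ODE y' = -y, y(0) = 1, whose coefficients are chosen to minimise the squared equation residual at collocation points. (The ODE, the trial function and the least-squares solver are illustrative choices, not the authors' setup, which trains a multi-layer network on the same kind of residual objective.)

```python
import numpy as np

# Trial solution y(t) = 1 + sum_j a_j t^j automatically satisfies y(0) = 1.
# The residual r(t) = y'(t) + y(t) is linear in the coefficients a_j, so
# minimising sum_i r(t_i)^2 over collocation points is a least-squares problem.
degree = 4
ts = np.linspace(0.0, 1.0, 50)                      # collocation points

# Column j-1 holds d/dt(t^j) + t^j = j t^(j-1) + t^j, for j = 1..degree.
A = np.column_stack([j * ts ** (j - 1) + ts ** j for j in range(1, degree + 1)])
b = -np.ones_like(ts)                               # moves the constant term over

coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)

def y(t):
    return 1.0 + sum(a * t ** j for j, a in enumerate(coeffs, start=1))

print(y(1.0), np.exp(-1.0))                          # should be close
```

    The key point carried over from the paper is that no reference solution values appear in the objective: only the equation residual is penalised, and the initial condition is built into the trial form.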

  5. Komparativ analyse - Scandinavian Airlines & Norwegian Air Shuttle

    OpenAIRE

    Kallesen, Martin Nystrup; Singh, Ravi Pal; Boesen, Nana Wiaberg

    2017-01-01

    The project is based on an examination of how companies the size of Scandinavian Airlines or Norwegian Air Shuttle use their finances and how they see their external environment. This has led us to research the relationship between the companies and their finances as well as their external environment, and how they differ in both. To do this we have utilised a variety of methods to analyse the companies, including PESTEL, SWOT, TOWS, DCF, risk analysis, sensitivity analysis, Porter's ...

  6. Implementing partnerships in nonreactor facility safety analyses

    International Nuclear Information System (INIS)

    Courtney, J.C.; Perry, W.H.; Phipps, R.D.

    1996-01-01

    Faculty and students from LSU have been participating in nuclear safety analyses and radiation protection projects at ANL-W at INEL since 1973. A mutually beneficial relationship has evolved that has resulted in generation of safety-related studies acceptable to Argonne and DOE, NRC, and state regulatory groups. Most of the safety projects have involved the Hot Fuel Examination Facility or the Fuel Conditioning Facility; both are hot cells that receive spent fuel from EBR-II. A table shows some of the major projects at ANL-W that involved LSU students and faculty

  7. Cost/benefit analyses of environmental impact

    International Nuclear Information System (INIS)

    Goldman, M.I.

    1974-01-01

    Various aspects of cost-benefit analyses are considered. Some topics discussed are: regulations of the National Environmental Policy Act (NEPA); statement of AEC policy and procedures for implementation of NEPA; Calvert Cliffs decision; AEC Regulatory Guide; application of risk-benefit analysis to nuclear power; application of the as low as practicable (ALAP) rule to radiation discharges; thermal discharge restrictions proposed by EPA under the 1972 Amendment to the Water Pollution Control Act; estimates of somatic and genetic insult per unit population exposure; occupational exposure; EPA Point Source Guidelines for Discharges from Steam Electric Power Plants; and costs of closed-cycle cooling using cooling towers. (U.S.)

  8. The phaco machine: analysing new technology.

    Science.gov (United States)

    Fishkind, William J

    2013-01-01

    The phaco machine is frequently overlooked as the crucial surgical instrument it is. Understanding how to set its parameters begins with understanding fundamental concepts of machine function. This study analyses the critical concepts of partial occlusion phaco, occlusion phaco and pump technology. In addition, phaco energy categories as well as variations of phaco energy production are explored. Contemporary power modulations and pump controls allow for the enhancement of partial occlusion phacoemulsification. These significant changes in anterior chamber dynamics produce a balanced environment for phaco, fewer complications, and improved patient outcomes.

  9. Nuclear analyses of the Pietroasa gold hoard

    International Nuclear Information System (INIS)

    Cojocaru, V.; Besliu, C.

    1999-01-01

    By means of nuclear analyses the concentrations of Au, Ag, Cu, Ir, Os, Pt, Co and Hg were measured in the 12 artifacts of the gold hoard discovered in 1837 at Pietroasa, Buzau county, in Romania. The concentrations of the first four elements were used to compare different stylistic groups assumed by historians. Comparisons with gold nuggets from the old Dacian territory and gold Roman imperial coins were also made. A good agreement was found with the oldest hypothesis, which considers that the hoard comprises three styles appropriated mainly by the Goths. (author)

  10. An evaluation of the Olympus "Quickrate" analyser.

    Science.gov (United States)

    Williams, D G; Wood, R J; Worth, H G

    1979-02-01

    The Olympus "Quickrate", a photometer built for both kinetic and end-point analysis, was evaluated in this laboratory. Aspartate transaminase, lactate dehydrogenase, hydroxybutyrate dehydrogenase, creatine kinase, alkaline phosphatase and gamma glutamyl transpeptidase were measured in the kinetic mode and glucose, urea, total protein, albumin, bilirubin, calcium and iron in the end-point mode. Overall, good correlation was observed with routine methodologies and the precision of the methods was acceptable. An electrical evaluation was also performed. In our hands, the instrument proved to be simple to use and gave no trouble. It should prove useful for paediatric and emergency work, and as a back-up for other analysers.

  11. Analyses of containment structures with corrosion damage

    International Nuclear Information System (INIS)

    Cherry, J.L.

    1997-01-01

    Corrosion damage that has been found in a number of nuclear power plant containment structures can degrade the pressure capacity of the vessel. This has prompted concerns regarding the capacity of corroded containments to withstand accident loadings. To address these concerns, finite element analyses have been performed for a typical PWR Ice Condenser containment structure. Using ABAQUS, the pressure capacity was calculated for a typical vessel with no corrosion damage. Multiple analyses were then performed with the location of the corrosion and the amount of corrosion varied in each analysis. Using a strain-based failure criterion, a "lower bound", "best estimate", and "upper bound" failure level was predicted for each case. These limits were established by: determining the amount of variability that exists in material properties of typical containments, estimating the amount of uncertainty associated with the level of modeling detail and modeling assumptions, and estimating the effect of corrosion on the material properties.

  12. Analyser Framework to Verify Software Components

    Directory of Open Access Journals (Sweden)

    Rolf Andreas Rasenack

    2009-01-01

    Today, it is important for software companies to build software systems in a short time interval, to reduce costs and to maintain a good market position. Therefore, well-organized and systematic development approaches are required. Reusing well-tested software components can be a good way to develop software applications effectively. The reuse of software components is less expensive and less time-consuming than development from scratch. But it is dangerous to assume that software components can be combined without any problems. Individual software components are well tested, of course, but problems can still occur when they are composed. Most problems stem from interaction and communication. To avoid such errors, a framework has to be developed for analysing software components; that framework determines the compatibility of the corresponding software components. The promising approach discussed here presents a novel technique for analysing software components by applying an Abstract Syntax Language Tree (ASLT). A supportive environment will be designed that checks the compatibility of black-box software components. This article addresses the question of how coupled software components can be verified using an analyser framework, and describes the usage of the ASLT. Black-box software components and the Abstract Syntax Language Tree are the basis for developing the proposed framework and are discussed here to provide the background knowledge. The practical implementation of this framework is discussed, and results are shown using a test environment.
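    The details of the ASLT are not given in the abstract, but the general idea of checking component compatibility on syntax trees rather than raw text can be sketched with Python's built-in ast module: extract the function signatures one component provides and verify another component's calls against them. (The component sources and function names below are invented for illustration.)

```python
import ast

def provided_signatures(source):
    """Map each function name in the component to its positional-arg count."""
    tree = ast.parse(source)
    return {node.name: len(node.args.args)
            for node in ast.walk(tree)
            if isinstance(node, ast.FunctionDef)}

def incompatible_calls(source, signatures):
    """Return (name, given, expected) for calls that mismatch a signature."""
    problems = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            name = node.func.id
            if name in signatures and len(node.args) != signatures[name]:
                problems.append((name, len(node.args), signatures[name]))
    return problems

# Hypothetical provider and consumer components.
provider = "def load(path):\n    return open(path).read()\n"
consumer = "data = load('a.txt', 'r')\n"   # wrong arity: 2 args instead of 1

sigs = provided_signatures(provider)
print(incompatible_calls(consumer, sigs))  # → [('load', 2, 1)]
```

    A real framework would compare far more than arity (types, keyword arguments, protocols), but the principle is the same: both components are reduced to trees, and compatibility is decided on the trees.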

  13. Passive safety injection experiments and analyses (PAHKO)

    International Nuclear Information System (INIS)

    Tuunanen, J.

    1998-01-01

    The PAHKO project involved experiments on the PACTEL facility and computer simulations of selected experiments. The experiments focused on the performance of Passive Safety Injection Systems (PSIS) of Advanced Light Water Reactors (ALWRs) in Small Break Loss-Of-Coolant Accident (SBLOCA) conditions. The PSIS consisted of a Core Make-up Tank (CMT) and two pipelines (Pressure Balancing Line, PBL, and Injection Line, IL). The examined PSIS worked efficiently in SBLOCAs, although the flow through the PSIS stopped temporarily if the break was very small and hot water filled the CMT. The experiments demonstrated the importance of the flow distributor in the CMT to limit rapid condensation. The project included validation of three thermal-hydraulic computer codes (APROS, CATHARE and RELAP5). The analyses showed the codes are capable of simulating the overall behaviour of the transients. The detailed analyses of the results showed some models in the codes still need improvements. Especially, further development of models for thermal stratification, condensation and natural circulation flow with small driving forces would be necessary for accurate simulation of the PSIS phenomena. (orig.)

  14. Used Fuel Management System Interface Analyses - 13578

    Energy Technology Data Exchange (ETDEWEB)

    Howard, Robert; Busch, Ingrid [Oak Ridge National Laboratory, P.O. Box 2008, Bldg. 5700, MS-6170, Oak Ridge, TN 37831 (United States); Nutt, Mark; Morris, Edgar; Puig, Francesc [Argonne National Laboratory (United States); Carter, Joe; Delley, Alexcia; Rodwell, Phillip [Savannah River National Laboratory (United States); Hardin, Ernest; Kalinina, Elena [Sandia National Laboratories (United States); Clark, Robert [U.S. Department of Energy (United States); Cotton, Thomas [Complex Systems Group (United States)

    2013-07-01

    Preliminary system-level analyses of the interfaces between at-reactor used fuel management, consolidated storage facilities, and disposal facilities, along with the development of supporting logistics simulation tools, have been initiated to provide the U.S. Department of Energy (DOE) and other stakeholders with information regarding the various alternatives for managing used nuclear fuel (UNF) generated by the current fleet of light water reactors operating in the United States. An important UNF management system interface consideration is the need for ultimate disposal of UNF assemblies contained in waste packages that are sized to be compatible with different geologic media. Thermal analyses indicate that waste package sizes for the geologic media under consideration by the Used Fuel Disposition Campaign may be significantly smaller than the canisters being used for on-site dry storage by the nuclear utilities. Therefore, at some point along the UNF disposition pathway, there could be a need to repackage fuel assemblies already loaded and being loaded into the dry storage canisters currently in use. The implications of where and when the packaging or repackaging of commercial UNF will occur are key questions being addressed in this evaluation. The analysis demonstrated that thermal considerations will have a major impact on the operation of the system and that acceptance priority, rates, and facility start dates have significant system implications. (authors)

  15. Sensitivity in risk analyses with uncertain numbers.

    Energy Technology Data Exchange (ETDEWEB)

    Tucker, W. Troy; Ferson, Scott

    2006-06-01

    Sensitivity analysis is a study of how changes in the inputs to a model influence the results of the model. Many techniques have recently been proposed for use when the model is probabilistic. This report considers the related problem of sensitivity analysis when the model includes uncertain numbers that can involve both aleatory and epistemic uncertainty and the method of calculation is Dempster-Shafer evidence theory or probability bounds analysis. Some traditional methods for sensitivity analysis generalize directly for use with uncertain numbers, but, in some respects, sensitivity analysis for these analyses differs from traditional deterministic or probabilistic sensitivity analyses. A case study of a dike reliability assessment illustrates several methods of sensitivity analysis, including traditional probabilistic assessment, local derivatives, and a "pinching" strategy that hypothetically reduces the epistemic uncertainty or aleatory uncertainty, or both, in an input variable to estimate the reduction of uncertainty in the outputs. The prospects for applying the methods to black box models are also considered.
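    The "pinching" strategy described above can be sketched with a plain Monte Carlo model: estimate the output variance, then fix ("pinch") one uncertain input at a nominal value and measure how much the output uncertainty shrinks. (The two-input model and its distributions are invented for illustration; the report itself works with uncertain numbers in Dempster-Shafer or probability-bounds form rather than plain Monte Carlo.)

```python
import random
import statistics

def model(x1, x2):
    # Hypothetical response, dominated by the uncertainty in x1.
    return x1 + x2

def output_variance(sample_x1, sample_x2, n=20000):
    rng = random.Random(0)  # fixed seed for a reproducible estimate
    return statistics.pvariance(
        model(sample_x1(rng), sample_x2(rng)) for _ in range(n))

x1 = lambda rng: rng.gauss(0.0, 1.0)      # large uncertainty
x2 = lambda rng: rng.gauss(0.0, 0.1)      # small uncertainty
pinched_x1 = lambda rng: 0.0              # x1 pinched to its nominal value

base = output_variance(x1, x2)
pinched = output_variance(pinched_x1, x2)
reduction = 1.0 - pinched / base
print(f"variance {base:.3f} -> {pinched:.3f}, reduction {reduction:.0%}")
```

    Ranking inputs by the uncertainty reduction their pinching achieves gives a sensitivity ordering; in the uncertain-number setting the same idea is applied to bounds or belief structures instead of a single variance.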

  16. Fractal and multifractal analyses of bipartite networks

    Science.gov (United States)

    Liu, Jin-Long; Wang, Jian; Yu, Zu-Guo; Xie, Xian-Hua

    2017-03-01

    Bipartite networks have attracted considerable interest in various fields. Fractality and multifractality of unipartite (classical) networks have been studied in recent years, but these properties of bipartite networks have not yet been studied. In this paper, we try to unfold the self-similarity structure of bipartite networks by performing fractal and multifractal analyses for a variety of real-world bipartite network data sets and models. First, we find fractality in some bipartite networks, including the CiteULike, Netflix, MovieLens (ml-20m) and Delicious data sets and the (u, v)-flower model. Meanwhile, we observe shifted power-law or exponential behaviour in several other networks. We then focus on the multifractal properties of bipartite networks. Our results indicate that multifractality exists in those bipartite networks possessing fractality. To capture the inherent attribute of bipartite networks with two different types of nodes, we assign different weights to the nodes of the different classes, and show the existence of multifractality in these node-weighted bipartite networks. In addition, for the data sets with ratings, we modify two existing algorithms for fractal and multifractal analyses of edge-weighted unipartite networks to study the self-similarity of the corresponding edge-weighted bipartite networks. The results show that our modified algorithms are feasible and can effectively uncover the self-similarity structure of these edge-weighted bipartite networks and their corresponding node-weighted versions.
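    The fractal analyses referred to above rest on box covering: count the number of boxes N_B of diameter less than l_B needed to cover the network and look for scaling N_B ∝ l_B^(-d_B). A minimal sketch of the standard greedy-colouring box-covering algorithm on a toy graph (the path graph is an illustrative input, not one of the paper's data sets; the bipartite and weighted variants add further machinery):

```python
from collections import deque

def bfs_distances(adj, src):
    """Hop distances from src to every reachable node."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def box_count(adj, l_b):
    """Greedy box covering: nodes in one box must be at distance < l_b."""
    nodes = list(adj)
    dist = {u: bfs_distances(adj, u) for u in nodes}
    box = {}
    for u in nodes:
        # Boxes (colours) already used by nodes too far from u are banned.
        banned = {box[v] for v in box
                  if dist[u].get(v, float("inf")) >= l_b}
        c = 0
        while c in banned:
            c += 1
        box[u] = c
    return len(set(box.values()))

# Toy example: a path graph 0-1-2-...-7.
n = 8
adj = {i: [j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)}
counts = {l: box_count(adj, l) for l in range(1, n + 1)}
print(counts)
```

    Fitting log N_B against log l_B over the scaling range yields the box dimension d_B; the paper's node- and edge-weighted versions replace hop distance with weighted distances.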

  17. Special analyses reveal coke-deposit structure

    International Nuclear Information System (INIS)

    Albright, L.F.

    1988-01-01

    A scanning electron microscope (SEM) and an energy dispersive X-ray analyzer (EDAX) have been used to obtain information that clarifies the three mechanisms of coke formation in ethylene furnaces, and to analyze the metal condition at the exit of the furnace. The results can be used to examine furnace operations and develop improved ethylene plant practices. In this first of four articles on the analyses of coke and metal samples, the coking mechanisms and coke deposits in a section of tube from an actual ethylene furnace (Furnace A) from a plant on the Texas Gulf Coast are discussed. The second article in the series will analyze the condition of the tube metal in the same furnace. To show how coke deposition and metal condition depend on the operating parameters of an ethylene furnace, the third article in the series will show the coke deposition in a Texas Gulf Coast furnace tube (Furnace B) that operated at shorter residence time. The fourth article discusses the metal condition in that furnace. Some recommendations, based on the analyses and findings, are offered in the fourth article that could help extend the life of ethylene furnace tubes, and also improve overall ethylene plant operations.

  18. Overview of cooperative international piping benchmark analyses

    International Nuclear Information System (INIS)

    McAfee, W.J.

    1982-01-01

    This paper presents an overview of an effort initiated in 1976 by the International Working Group on Fast Reactors (IWGFR) of the International Atomic Energy Agency (IAEA) to evaluate detailed and simplified inelastic analysis methods for piping systems with particular emphasis on piping bends. The procedure was to collect from participating IAEA member countries descriptions of tests and test results for piping systems or bends (with emphasis on high-temperature inelastic tests), to compile, evaluate, and issue a selected number of these problems for analysis, and to compile and make a preliminary evaluation of the analysis results. Of the problem descriptions submitted, three were selected to be used: a 90°-elbow at 600°C with an in-plane transverse force; a 90°-elbow with an in-plane moment; and a 180°-elbow at room temperature with a reversed, cyclic, in-plane transverse force. A variety of both detailed and simplified analysis solutions were obtained. A brief comparative assessment of the analyses is contained in this paper. 15 figures

  19. Ethics of cost analyses in medical education.

    Science.gov (United States)

    Walsh, Kieran

    2013-11-01

    Cost analyses in medical education are rarely straightforward, and rarely lead to clear-cut conclusions. Occasionally they do lead to clear conclusions, but even when that happens, some stakeholders will ask difficult but valid questions about what to do following cost analyses, specifically about distributive justice in the allocation of resources. At present there is little or no debate about these issues, and the rationing decisions that are taken in medical education are largely made subconsciously. Distributive justice 'concerns the nature of a socially just allocation of goods in a society'. Inevitably there is a large degree of subjectivity in the judgment as to whether an allocation is seen as socially just or ethical. There are different principles by which we can view distributive justice, and these affect the prism of subjectivity through which we see certain problems. For example, we might say that distributive justice at a certain institution or in a certain medical education system operates according to the principle that resources must be divided equally amongst learners. Another system may say that resources should be distributed according to the needs of learners or even of patients. No ethical system or model is inherently right or wrong; each depends on the context in which the educator is working.

  20. Pathway analyses implicate glial cells in schizophrenia.

    Directory of Open Access Journals (Sweden)

    Laramie E Duncan

    Full Text Available The quest to understand the neurobiology of schizophrenia and bipolar disorder is ongoing, with multiple lines of evidence indicating abnormalities of glia, mitochondria, and glutamate in both disorders. Despite high heritability estimates of 81% for schizophrenia and 75% for bipolar disorder, compelling links between findings from neurobiological studies and findings from large-scale genetic analyses are only beginning to emerge. Ten publicly available gene sets (pathways) related to glia, mitochondria, and glutamate were tested for association to schizophrenia and bipolar disorder using MAGENTA as the primary analysis method. To determine the robustness of associations, secondary analyses were performed with ALIGATOR, INRICH, and Set Screen. Data from the Psychiatric Genomics Consortium (PGC) were used for all analyses. There were 1,068,286 SNP-level p-values for schizophrenia (9,394 cases/12,462 controls), and 2,088,878 SNP-level p-values for bipolar disorder (7,481 cases/9,250 controls). The Glia-Oligodendrocyte pathway was associated with schizophrenia, after correction for multiple tests, according to the primary analysis (MAGENTA p = 0.0005, 75% requirement for individual gene significance), and also achieved nominal levels of significance with INRICH (p = 0.0057) and ALIGATOR (p = 0.022). For bipolar disorder, Set Screen yielded nominally and method-wide significant associations to all three glial pathways, with the strongest association to the Glia-Astrocyte pathway (p = 0.002). Consistent with findings of white matter abnormalities in schizophrenia by other methods of study, the Glia-Oligodendrocyte pathway was associated with schizophrenia in our genomic study. These findings suggest that the abnormalities of myelination observed in schizophrenia are at least in part due to inherited factors, contrasted with the alternative of purely environmental causes (e.g. medication effects or lifestyle). While not the primary purpose of our study

  1. DEPUTY: analysing architectural structures and checking style

    International Nuclear Information System (INIS)

    Gorshkov, D.; Kochelev, S.; Kotegov, S.; Pavlov, I.; Pravilnikov, V.; Wellisch, J.P.

    2001-01-01

    The DepUty (dependencies utility) can be classified as a project and process management tool. The main goal of DepUty is to assist, by means of source code analysis and graphical representation using UML, in understanding the dependencies of sub-systems and packages in CMS Object Oriented software, in understanding the architectural structure, and in scheduling code releases in modularised integration. It also allows a newcomer to more easily understand the global structure of CMS software, and to avoid circular dependencies up-front or re-factor the code in case it was already too close to the edge of non-maintainability. The authors discuss the various views DepUty provides to analyse package dependencies, and illustrate both the metrics and style checking facilities it provides

  2. Response surface use in safety analyses

    International Nuclear Information System (INIS)

    Prosek, A.

    1999-01-01

    When thousands of complex computer code runs related to nuclear safety are needed for statistical analysis, a response surface is used to replace the computer code. The main purpose of the study was to develop and demonstrate a tool called the optimal statistical estimator (OSE), intended for response surface generation of complex and non-linear phenomena. The performance of the optimal statistical estimator was tested against the results of 59 different RELAP5/MOD3.2 code calculations of the small-break loss-of-coolant accident in a two-loop pressurized water reactor. The results showed that OSE adequately predicted the response surface for the peak cladding temperature. Some good characteristics of the OSE, such as monotonic behaviour between two neighbouring points and independence of the number of output parameters, suggest that OSE can be used for response surface generation of any safety or system parameter in thermal-hydraulic safety analyses. (author)
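    The general idea of replacing an expensive code with a response surface can be sketched as follows. This is not the OSE itself: the quadratic surrogate, the toy stand-in for the thermal-hydraulic code, and all parameter names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for expensive code runs (not RELAP5): inputs x
# (e.g. break size, initial power) and a scalar output y (e.g. peak
# cladding temperature).
def expensive_code(x):
    return 900 + 150 * x[0] - 40 * x[1] + 30 * x[0] * x[1]

X = rng.uniform(0, 1, size=(59, 2))          # 59 "code calculations"
y = np.array([expensive_code(x) for x in X])

# Quadratic response surface: y ~ c0 + c1*x1 + c2*x2 + c3*x1*x2 + ...
def design(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2])

coef, *_ = np.linalg.lstsq(design(X), y, rcond=None)
surrogate = lambda X: design(np.atleast_2d(X)) @ coef

# Thousands of cheap surrogate evaluations for the statistical analysis
sample = rng.uniform(0, 1, size=(10_000, 2))
pct95 = np.percentile(surrogate(sample), 95)
```

    Once fitted, the surrogate answers statistical questions (percentiles, sensitivities) at negligible cost compared with rerunning the code.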

  3. Spatial Analyses of Harappan Urban Settlements

    Directory of Open Access Journals (Sweden)

    Hirofumi Teramura

    2006-12-01

    Full Text Available The Harappan Civilization occupies a unique place among the early civilizations of the world, with its well-planned urban settlements, advanced handicrafts and technology, and religious and trade activities. Using a Geographical Information System (GIS), this study presents spatial analyses that locate urban settlements on a digital elevation model (DEM) according to the three phases of early, mature and late. Understanding the relationship between the spatial distribution of Harappan sites and changes in factors such as topographic features, river passages or sea level will lead to an understanding of the dynamism of this civilization. It will also afford a glimpse of the factors behind the formation, development, and decline of the Harappan Civilization.

  4. The plant design analyser and its applications

    International Nuclear Information System (INIS)

    Whitmarsh-Everiss, M.J.

    1992-01-01

    Consideration is given to the history of computational methods for the non-linear dynamic analysis of plant behaviour. This is traced from analogue to hybrid computers. When these were phased out, simulation languages were used in batch mode and the interactive computational capabilities were lost. These have subsequently been recovered, using mainframe computing architecture, in the context of small models using the Prototype Plant Design Analyser. Given the development of parallel processing architectures, the restriction on model size can be lifted. This capability and the use of advanced workstations and graphics software have enabled an advanced interactive design environment to be developed. The system is generic and can be used, with suitable graphics development, to study the dynamics and control behaviour of any plant or system for minimum cost. Examples of past and possible future uses are identified. (author)

  5. Abundance analyses of thirty cool carbon stars

    International Nuclear Information System (INIS)

    Utsumi, Kazuhiko

    1985-01-01

    The results were previously obtained by use of absolute gf-values and the cosmic abundance as a standard. These gf-values were found to contain large systematic errors, and as a result the solar photospheric abundances were revised. Our previous results, therefore, must be revised using the new gf-values, and the abundance analyses are extended to as many carbon stars as possible. In conclusion, in normal cool carbon stars heavy metals are overabundant by factors of 10-100 and rare-earth elements are overabundant by a factor of about 10; in J-type cool carbon stars the 12C/13C ratio is smaller, the C2 and CN bands and the Li 6708 line are stronger than in normal cool carbon stars, and the abundances of s-process elements with respect to Fe are nearly normal. (Mori, K.)

  6. Analysing Medieval Urban Space; a methodology

    Directory of Open Access Journals (Sweden)

    Marlous L. Craane MA

    2007-08-01

    Full Text Available This article has been written in reaction to recent developments in medieval history and archaeology, to study not only the buildings in a town but also the spaces that hold them together. It discusses a more objective and interdisciplinary approach for analysing urban morphology and use of space. It proposes a 'new' methodology by combining town plan analysis and space syntax. This methodology was trialled on the city of Utrecht in the Netherlands. By comparing the results of this 'new' methodology with the results of previous, more conventional, research, this article shows that space syntax can be applied successfully to medieval urban contexts. It does this by demonstrating a strong correlation between medieval economic spaces and the most integrated spaces, just as is found in the study of modern urban environments. It thus provides a strong basis for the use of this technique in future research of medieval urban environments.

  7. Reliability and safety analyses under fuzziness

    International Nuclear Information System (INIS)

    Onisawa, T.; Kacprzyk, J.

    1995-01-01

    Fuzzy theory, for example possibility theory, is compatible with probability theory. What has been shown so far is that probability theory need not be replaced by fuzzy theory, but rather that the former works much better in applications if it is combined with the latter. In fact, it is said that there are two essential uncertainties in the field of reliability and safety analyses: one is probabilistic uncertainty, which is more relevant for mechanical systems and the natural environment, and the other is fuzziness (imprecision) caused by the presence of human beings in systems. Classical probability theory alone is therefore not sufficient to deal with uncertainties in humanistic systems. In such a context this collection of works marks a milestone in the debate between probability theory and fuzzy theory. The volume covers fault analysis, lifetime analysis, reliability, quality control, safety analysis and risk analysis. (orig./DG). 106 figs

  8. Precise Chemical Analyses of Planetary Surfaces

    Science.gov (United States)

    Kring, David; Schweitzer, Jeffrey; Meyer, Charles; Trombka, Jacob; Freund, Friedemann; Economou, Thanasis; Yen, Albert; Kim, Soon Sam; Treiman, Allan H.; Blake, David

    1996-01-01

    We identify the chemical elements and element ratios that should be analyzed to address many of the issues identified by the Committee on Planetary and Lunar Exploration (COMPLEX). We determined that most of these issues require two sensitive instruments to analyze the necessary complement of elements. In addition, it is useful in many cases to use one instrument to analyze the outermost planetary surface (e.g. to determine weathering effects), while a second is used to analyze a subsurface volume of material (e.g., to determine the composition of unaltered planetary surface material). This dual approach to chemical analyses will also facilitate the calibration of orbital and/or Earth-based spectral observations of the planetary body. We determined that in many cases the scientific issues defined by COMPLEX can only be fully addressed with combined packages of instruments that would supplement the chemical data with mineralogic or visual information.

  9. Seismic analyses of structures. 1st draft

    International Nuclear Information System (INIS)

    David, M.

    1995-01-01

    The dynamic analysis presented in this paper refers to the seismic analysis of the main building of the Paks NPP. The aim of the analysis was to determine the floor response spectra as the response to seismic input. The analysis was performed with a 3-dimensional calculation model, and the floor response spectra were determined for a number of levels from the floor response time histories; no other adjustments were applied. The following results of the seismic analysis are presented: the 3-dimensional finite element model; the basic assumptions of the dynamic analyses; a table of frequencies and included factors; modal masses for all modes; and floor response spectra in all the selected nodes, with figures of the indicated nodes and important nodes of free vibration

  10. Analysing Terrorism from a Systems Thinking Perspective

    Directory of Open Access Journals (Sweden)

    Lukas Schoenenberger

    2014-02-01

    Full Text Available Given the complexity of terrorism, solutions based on single factors are destined to fail. Systems thinking offers various tools for helping researchers and policy makers comprehend terrorism in its entirety. We have developed a semi-quantitative systems thinking approach for characterising relationships between variables critical to terrorism and their impact on the system as a whole. For a better understanding of the mechanisms underlying terrorism, we present a 16-variable model characterising the critical components of terrorism and perform a series of highly focused analyses. We show how to determine which variables are best suited for government intervention, describing in detail their effects on the key variable—the political influence of a terrorist network. We also offer insights into how to elicit variables that destabilise and ultimately break down these networks. Because we clarify our novel approach with fictional data, the primary importance of this paper lies in the new framework for reasoning that it provides.
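    The kind of semi-quantitative influence-matrix analysis described above can be sketched with a toy model. The variables and weights below are invented for illustration and are not the paper's 16-variable model:

```python
import numpy as np

# Hypothetical influence matrix M: M[i, j] is the strength (0-3) with which
# variable i drives variable j.  Variables and weights are illustrative only.
variables = ["funding", "recruitment", "attacks", "political influence"]
M = np.array([
    [0, 3, 2, 1],   # funding strongly drives recruitment and attacks
    [0, 0, 3, 1],
    [1, 1, 0, 3],
    [2, 2, 0, 0],
])

active = M.sum(axis=1)    # how strongly a variable drives the system
passive = M.sum(axis=0)   # how strongly it is driven by the system

# Variables with a high active sum and a low passive sum are natural
# candidates for government intervention: they move the system without
# being easily moved back.
leverage = active / (1 + passive)
best = variables[int(np.argmax(leverage))]
```

    Richer variants of this approach trace indirect effects by taking powers of M, which is how one can ask what ultimately destabilises the key variable.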

  11. Seismic analyses of structures. 1st draft

    Energy Technology Data Exchange (ETDEWEB)

    David, M [David Consulting, Engineering and Design Office (Czech Republic)]

    1995-07-01

    The dynamic analysis presented in this paper refers to the seismic analysis of the main building of the Paks NPP. The aim of the analysis was to determine the floor response spectra as the response to seismic input. The analysis was performed with a 3-dimensional calculation model, and the floor response spectra were determined for a number of levels from the floor response time histories; no other adjustments were applied. The following results of the seismic analysis are presented: the 3-dimensional finite element model; the basic assumptions of the dynamic analyses; a table of frequencies and included factors; modal masses for all modes; and floor response spectra in all the selected nodes, with figures of the indicated nodes and important nodes of free vibration.

  12. Project analysis and integration economic analyses summary

    Science.gov (United States)

    Macomber, H. L.

    1986-01-01

    An economic-analysis summary was presented for the manufacture of crystalline-silicon modules involving silicon ingot/sheet, growth, slicing, cell manufacture, and module assembly. Economic analyses provided: useful quantitative aspects for complex decision-making to the Flat-plate Solar Array (FSA) Project; yardsticks for design and performance to industry; and demonstration of how to evaluate and understand the worth of research and development both to JPL and other government agencies and programs. It was concluded that future research and development funds for photovoltaics must be provided by the Federal Government because the solar industry today does not reap enough profits from its present-day sales of photovoltaic equipment.

  13. Level 2 probabilistic event analyses and quantification

    International Nuclear Information System (INIS)

    Boneham, P.

    2003-01-01

    In this paper an example of the quantification of a severe accident phenomenological event is given. The analysis performed to assess the probability that the debris released from the reactor vessel was in a coolable configuration in the lower drywell is presented, together with an assessment of the type of core/concrete attack that would occur. The evaluation of ex-vessel debris coolability by an event in the Simplified Boiling Water Reactor (SBWR) Containment Event Tree (CET), and a detailed Decomposition Event Tree (DET) developed to aid in the quantification of this CET event, are considered. The headings in the DET, selected to represent plant physical states (e.g., reactor vessel pressure at the time of vessel failure) and the uncertainties associated with the occurrence of critical physical phenomena (e.g., debris configuration in the lower drywell) considered important to assessing whether the debris was coolable or not coolable ex-vessel, are also discussed

  14. Externalizing Behaviour for Analysing System Models

    DEFF Research Database (Denmark)

    Ivanova, Marieta Georgieva; Probst, Christian W.; Hansen, René Rydhof

    2013-01-01

    System models have recently been introduced to model organisations and evaluate their vulnerability to threats, especially insider threats. For the latter these models are very suitable, since insiders can be assumed to have more knowledge about the attacked organisation than outside attackers. Therefore, many attacks are considerably easier to perform for insiders than for outsiders. However, current models do not support the explicit specification of different behaviours. Instead, behaviour is deeply embedded in the analyses supported by the models, meaning that it is a complex, if not impossible, task to change behaviours. Especially when considering social engineering or the human factor in general, the ability to use different kinds of behaviours is essential. In this work we present an approach to make behaviour a separate component in system models, and explore how to integrate it

  15. ATLAS helicity analyses in beauty hadron decays

    CERN Document Server

    Smizanska, M

    2000-01-01

    The ATLAS detector will allow a precise spatial reconstruction of the kinematics of B hadron decays. In combination with the efficient lepton identification applied already at trigger level, ATLAS is expected to provide large samples of exclusive decay channels cleanly separable from background. These data sets will allow spin-dependent analyses leading to the determination of production and decay parameters which are not accessible if the helicity amplitudes are not separated. Measurement feasibility studies for the decays B_s^0 → J/ψ φ and Λ_b^0 → Λ J/ψ, presented in this document, show the experimental precision that can be achieved in the determination of B_s^0 and Λ_b^0 characteristics. (19 refs).

  16. Thermal hydraulic reactor safety analyses and experiments

    International Nuclear Information System (INIS)

    Holmstroem, H.; Eerikaeinen, L.; Kervinen, T.; Kilpi, K.; Mattila, L.; Miettinen, J.; Yrjoelae, V.

    1989-04-01

    The report introduces the results of the thermal hydraulic reactor safety research performed in the Nuclear Engineering Laboratory of the Technical Research Centre of Finland (VTT) during the years 1972-1987. Practical applications, i.e. analyses for the safety authorities and power companies, are also presented. The emphasis is on a description of the state-of-the-art know-how. The report describes VTT's most important computer codes, both those of foreign origin and those developed at VTT, and their assessment work; VTT's own experimental research; as well as international experimental projects and other forms of cooperation VTT has participated in. Appendix 8 contains a comprehensive list of the most important publications and technical reports produced. They present the content and results of the research in detail. (orig.)

  17. Digital analyses of cartometric Fruska Gora guidelines

    Directory of Open Access Journals (Sweden)

    Živković Dragica

    2013-01-01

    Full Text Available Modern geomorphological research has been using quantitative statistical and cartographic methods to analyse topographic relief features and their mutual connections on the basis of good-quality numeric parameters. Such topographic features are important for many natural processes. Important morphological characteristics include the slope angle of the topography, hypsometry, and topographic exposition. Even small, little-known variations in relief slope can deeply affect land configuration, hypsometry, topographic exposition, etc. Exposition modifies light and heat and thereby a set of interconnected phenomena: soil and air temperature, soil disintegration, the length of the vegetation period, the intensity of photosynthesis, the yield of agricultural crops, the height of the snow limit, etc. [Projekat Ministarstva nauke Republike Srbije, br. 176008 i br. III44006]

  18. Attitude stability analyses for small artificial satellites

    International Nuclear Information System (INIS)

    Silva, W R; Zanardi, M C; Formiga, J K S; Cabette, R E S; Stuchi, T J

    2013-01-01

    The objective of this paper is to analyze the stability of the rotational motion of a symmetrical spacecraft, in a circular orbit. The equilibrium points and regions of stability are established when components of the gravity gradient torque acting on the spacecraft are included in the equations of rotational motion, which are described by the Andoyer's variables. The nonlinear stability of the equilibrium points of the rotational motion is analysed here by the Kovalev-Savchenko theorem. With the application of the Kovalev-Savchenko theorem, it is possible to verify if they remain stable under the influence of the terms of higher order of the normal Hamiltonian. In this paper, numerical simulations are made for a small hypothetical artificial satellite. Several stable equilibrium points were determined and regions around these points have been established by variations in the orbital inclination and in the spacecraft principal moment of inertia. The present analysis can directly contribute in the maintenance of the spacecraft's attitude

  19. Cointegration Approach to Analysing Inflation in Croatia

    Directory of Open Access Journals (Sweden)

    Lena Malešević-Perović

    2009-06-01

    Full Text Available The aim of this paper is to analyse the determinants of inflation in Croatia in the period 1994:6-2006:6. We use a cointegration approach and find that increases in wages positively influence inflation in the long-run. Furthermore, in the period from June 1994 onward, the depreciation of the currency also contributed to inflation. Money does not explain Croatian inflation. This irrelevance of the money supply is consistent with its endogeneity to exchange rate targeting, whereby the money supply is determined by developments in the foreign exchange market. The value of inflation in the previous period is also found to be significant, thus indicating some inflation inertia.
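    A cointegration analysis of this kind typically follows the Engle-Granger two-step recipe. The sketch below uses synthetic data, not the paper's Croatian series, and replaces the formal ADF test on the residuals with a crude stationarity check:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic illustration: two series (say, prices and wages) sharing a
# common stochastic trend, so they are individually I(1) but cointegrated.
n = 300
trend = np.cumsum(rng.normal(size=n))            # common I(1) trend
wages = trend + rng.normal(scale=0.3, size=n)
prices = 0.8 * trend + rng.normal(scale=0.3, size=n)

# Step 1: long-run OLS regression prices_t = a + b * wages_t + u_t
A = np.column_stack([np.ones(n), wages])
(a, b), *_ = np.linalg.lstsq(A, prices, rcond=None)
resid = prices - (a + b * wages)

# Step 2: test the residuals for stationarity.  A full ADF test has its
# own critical values; here we just estimate rho in
# delta_u_t = rho * u_{t-1} + e_t and check that rho is clearly negative.
du, lag = np.diff(resid), resid[:-1]
rho = (lag @ du) / (lag @ lag)
cointegrated = rho < -0.1       # crude stand-in for the ADF decision
```

    If the residuals are stationary, the lagged residual can then enter an error-correction model, which is how long-run drivers such as wages and the exchange rate are separated from short-run inflation dynamics.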

  20. Comprehensive immunoproteogenomic analyses of malignant pleural mesothelioma.

    Science.gov (United States)

    Lee, Hyun-Sung; Jang, Hee-Jin; Choi, Jong Min; Zhang, Jun; de Rosen, Veronica Lenge; Wheeler, Thomas M; Lee, Ju-Seog; Tu, Thuydung; Jindra, Peter T; Kerman, Ronald H; Jung, Sung Yun; Kheradmand, Farrah; Sugarbaker, David J; Burt, Bryan M

    2018-04-05

    We generated a comprehensive atlas of the immunologic cellular networks within human malignant pleural mesothelioma (MPM) using mass cytometry. Data-driven analyses of these high-resolution single-cell data identified 2 distinct immunologic subtypes of MPM with vastly different cellular composition, activation states, and immunologic function; mass spectrometry demonstrated differential abundance of MHC-I and -II neopeptides directly identified between these subtypes. The clinical relevance of this immunologic subtyping was investigated with a discriminatory molecular signature derived through comparison of the proteomes and transcriptomes of these 2 immunologic MPM subtypes. This molecular signature, representative of a favorable intratumoral cell network, was independently associated with improved survival in MPM and predicted response to immune checkpoint inhibitors in patients with MPM and melanoma. These data additionally suggest a potentially novel mechanism of response to checkpoint blockade: requirement for high measured abundance of neopeptides in the presence of high expression of MHC proteins specific for these neopeptides.

  1. Deterministic analyses of severe accident issues

    International Nuclear Information System (INIS)

    Dua, S.S.; Moody, F.J.; Muralidharan, R.; Claassen, L.B.

    2004-01-01

    Severe accidents in light water reactors involve complex physical phenomena. In the past there has been a heavy reliance on simple assumptions regarding physical phenomena alongside of probability methods to evaluate risks associated with severe accidents. Recently GE has developed realistic methodologies that permit deterministic evaluations of severe accident progression and of some of the associated phenomena in the case of Boiling Water Reactors (BWRs). These deterministic analyses indicate that with appropriate system modifications, and operator actions, core damage can be prevented in most cases. Furthermore, in cases where core-melt is postulated, containment failure can either be prevented or significantly delayed to allow sufficient time for recovery actions to mitigate severe accidents

  2. Natural hazards in mountain areas and spatial analysis

    Directory of Open Access Journals (Sweden)

    Yannick Manche

    1999-06-01

    Full Text Available The concept of risk rests on two notions: the hazard, which represents the physical phenomenon through its magnitude and return period; and the vulnerability, which represents the set of assets and people that can be affected by a natural phenomenon. Risk is then defined as the intersection of these two notions. This theoretical view makes it possible to model hazard and vulnerability independently. This work is mainly concerned with taking vulnerability into account in the management of natural risks. Its evaluation necessarily involves a certain amount of spatial analysis, taking into account human occupation and the different scales of land use. But the spatial evaluation, whether of assets and people or of indirect effects, runs into numerous problems. The extent of land occupation has to be estimated. Moreover, processing the data implies constant changes of scale in order to pass from point elements to surfaces, something that geographic information systems do not handle perfectly. Risk management imposes strong planning constraints; taking vulnerability into account allows a better understanding and management of the spatial constraints implied by natural risks. hazard, spatial analysis, natural risks, GIS, vulnerability

  3. Isotropy analyses of the Planck convergence map

    Science.gov (United States)

    Marques, G. A.; Novaes, C. P.; Bernui, A.; Ferreira, I. S.

    2018-01-01

    The presence of matter in the path of relic photons causes distortions in the angular pattern of the cosmic microwave background (CMB) temperature fluctuations, modifying their properties in a slight but measurable way. Recently, the Planck Collaboration released the estimated convergence map, an integrated measure of the large-scale matter distribution that produced the weak gravitational lensing (WL) phenomenon observed in Planck CMB data. We perform exhaustive analyses of this convergence map, calculating the variance in small and large regions of the sky, excluding the area masked due to Galactic contamination, and compare them with the features expected in the set of simulated convergence maps also released by the Planck Collaboration. Our goal is to search for sky directions or regions where the WL imprints anomalous signatures on the variance estimator, revealed through a χ2 analysis at a statistically significant level. In the local analysis of the Planck convergence map, we identified eight patches of the sky in disagreement, at more than 2σ, with what is observed in the average of the simulations. In contrast, in the large-regions analysis we found no statistically significant discrepancies, but, interestingly, the regions with the highest χ2 values surround the ecliptic poles. Thus, our results show good agreement with the features expected in the Λ cold dark matter concordance model, as given by the simulations. Yet, the outlier regions found here could suggest that the data still contain residual contamination, such as noise, due to over- or underestimation of systematic effects in the simulation data set.
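    The patch-wise variance comparison against a simulation ensemble can be sketched as follows (synthetic one-dimensional "patches" instead of sky maps; sizes and the 2σ threshold are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for the analysis: split a "map" into patches, compute the
# variance of each patch, and flag patches whose variance deviates from
# the simulation ensemble by more than 2 sigma.  All data are synthetic.
n_patch, n_pix, n_sim = 12, 500, 100
data = rng.normal(0, 1, size=(n_patch, n_pix))
data[3] *= 1.6                                  # one injected anomalous patch

sims = rng.normal(0, 1, size=(n_sim, n_patch, n_pix))
sim_var = sims.var(axis=2)                      # shape (n_sim, n_patch)
mu, sigma = sim_var.mean(axis=0), sim_var.std(axis=0)

z = (data.var(axis=1) - mu) / sigma             # per-patch deviation
anomalous = np.flatnonzero(np.abs(z) > 2)
```

    With many patches tested, a few 2σ excursions are expected by chance, which is why such analyses compare the number and clustering of outliers against the full simulation ensemble rather than flagging single patches in isolation.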

  4. The radiation analyses of ITER lower ports

    International Nuclear Information System (INIS)

    Petrizzi, L.; Brolatti, G.; Martin, A.; Loughlin, M.; Moro, F.; Villari, R.

    2010-01-01

    The ITER Vacuum Vessel has upper, equatorial, and lower ports used for equipment installation, diagnostics, heating and current drive systems, cryo-vacuum pumping, and access inside the vessel for maintenance. At the level of the divertor, the nine lower ports for remote handling, cryo-vacuum pumping and diagnostics are inclined downwards and toroidally located one every 40°. The cryopump port additionally has a branch to accommodate a second cryopump. The ports, as openings in the Vacuum Vessel, permit radiation streaming out of the vessel, which affects the heating of the components in the outer regions of the machine inside and outside the ports. Safety concerns are also raised with respect to the dose after shutdown at the cryostat behind the ports: in such zones the radiation dose level must be kept below the regulatory limit to allow personnel access for maintenance purposes. Neutronic analyses have been required to qualify the ITER project with regard to the lower ports. A 3-D model was used to take into account full details of the ports and the lower machine surroundings. MCNP5 version 1.40 has been used with the FENDL 2.1 nuclear data library. The ITER 40° model distributed by the ITER Organization was developed in the lower part to include the relevant details. The results of a first analysis, focused on the cryopump system only, were recently published. In this paper more complete data on the cryopump port and analyses for the remote handling port and the diagnostic rack are presented; the results of both analyses give a complete map of the radiation loads in the outer divertor ports. Nuclear heating, dpa, tritium production, and dose rates after shutdown are provided, and the implications for the design are discussed.

  5. Database-Driven Analyses of Astronomical Spectra

    Science.gov (United States)

    Cami, Jan

    2012-03-01

    Spectroscopy is one of the most powerful tools to study the physical properties and chemical composition of very diverse astrophysical environments. In principle, each nuclide has a unique set of spectral features; thus, establishing the presence of a specific material at astronomical distances requires no more than finding a laboratory spectrum of the right material that perfectly matches the astronomical observations. Once the presence of a substance is established, a careful analysis of the observational characteristics (wavelengths or frequencies, intensities, and line profiles) allows one to determine many physical parameters of the environment in which the substance resides, such as temperature, density, and velocity. Because of this great diagnostic potential, ground-based and space-borne astronomical observatories often include instruments to carry out spectroscopic analyses of various celestial objects and events. Of particular interest is molecular spectroscopy at infrared wavelengths. From the spectroscopic point of view, molecules differ from atoms in their ability to vibrate and rotate, and quantum physics inevitably causes those motions to be quantized. The energies required to excite vibrations or rotations are such that vibrational transitions generally occur at infrared wavelengths, whereas pure rotational transitions typically occur at sub-mm wavelengths. Molecular vibration and rotation are coupled though, and thus at infrared wavelengths one commonly observes a multitude of ro-vibrational transitions (see Figure 13.1). At lower spectral resolution, all transitions blend into one broad ro-vibrational molecular band. These transitions are, moreover, sensitive to the isotopic composition of the molecule. Molecular spectroscopy thus allows us to see a difference of one neutron in an atomic nucleus that is located at astronomical distances! Since the detection of the first interstellar molecules (the CH [21] and CN [14] radicals), more than 150 species have been detected in space, ranging in size from diatomic
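
    The coupling of vibration and rotation, and the isotopic sensitivity of the band origin, can be made concrete with a small sketch. This uses the rigid-rotor/harmonic-oscillator approximation; the CO constants below are approximate textbook values, not taken from this chapter.

```python
import math

def line_positions(nu0, B, j_max):
    """P- and R-branch line positions (cm^-1) for a rigid-rotor,
    harmonic-oscillator diatomic: R(J) = nu0 + 2B(J+1), P(J) = nu0 - 2B*J."""
    r_branch = [nu0 + 2.0 * B * (j + 1) for j in range(j_max)]
    p_branch = [nu0 - 2.0 * B * j for j in range(1, j_max + 1)]
    return p_branch, r_branch

# Illustrative constants for the 12C16O fundamental band (approximate values).
nu0_12CO, B_12CO = 2143.3, 1.93
p, r = line_positions(nu0_12CO, B_12CO, j_max=3)

# Isotopic substitution changes the reduced mass mu, shifting the band
# origin roughly as nu0' = nu0 * sqrt(mu / mu').
mu_12CO = 12.0 * 16.0 / (12.0 + 16.0)
mu_13CO = 13.0 * 16.0 / (13.0 + 16.0)
nu0_13CO = nu0_12CO * math.sqrt(mu_12CO / mu_13CO)
print(round(nu0_13CO, 1))   # band origin of 13C16O, shifted by tens of cm^-1
```

    The ~50 cm⁻¹ shift between ¹²CO and ¹³CO is exactly the "one neutron seen at astronomical distances" effect described above.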

  6. High performance liquid chromatography in pharmaceutical analyses

    Directory of Open Access Journals (Sweden)

    Branko Nikolin

    2004-05-01

    Full Text Available In the testing of drugs before marketing and in their subsequent control over the last ten years, high performance liquid chromatography has replaced numerous spectroscopic methods and gas chromatography in quantitative and qualitative analysis. In the first period of HPLC application it was thought that it would become a complementary method to gas chromatography; today, however, it has nearly completely replaced gas chromatography in pharmaceutical analysis. The use of a liquid mobile phase, with the possibility of modifying its polarity during chromatography and all other modifications of the mobile phase depending upon the characteristics of the substance being tested, is a great advantage of the separation process in comparison with other methods. The wide choice of stationary phases is the next factor that enables good separation. The separation column is connected to specific and sensitive detector systems (spectrofluorimetric, diode-array, and electrochemical detectors) and to hyphenated systems such as HPLC-MS and HPLC-NMR; these are the basic elements on which the wide and effective application of the HPLC method is based. The purpose of high performance liquid chromatography (HPLC) analysis of any drug is to confirm the identity of the drug, provide quantitative results, and monitor the progress of therapy of a disease.1 Fig. 1 presents a chromatogram obtained from the plasma of depressed patients 12 h before oral administration of dexamethasone. HPLC may also be used to further our understanding of normal and disease processes in the human body through biomedical and therapeutic research during investigations prior to drug registration. The analysis of drugs and metabolites in biological fluids, particularly plasma, serum or urine, is one of the most demanding but also one of the most common uses of high performance liquid chromatography. Blood, plasma or

  7. High performance liquid chromatography in pharmaceutical analyses.

    Science.gov (United States)

    Nikolin, Branko; Imamović, Belma; Medanhodzić-Vuk, Saira; Sober, Miroslav

    2004-05-01

    In the testing of drugs before marketing and in their subsequent control over the last ten years, high performance liquid chromatography has replaced numerous spectroscopic methods and gas chromatography in quantitative and qualitative analysis. In the first period of HPLC application it was thought that it would become a complementary method to gas chromatography; today, however, it has nearly completely replaced gas chromatography in pharmaceutical analysis. The use of a liquid mobile phase, with the possibility of modifying its polarity during chromatography and all other modifications of the mobile phase depending upon the characteristics of the substance being tested, is a great advantage of the separation process in comparison with other methods. The wide choice of stationary phases is the next factor that enables good separation. The separation column is connected to specific and sensitive detector systems (spectrofluorimetric, diode-array, and electrochemical detectors) and to hyphenated systems such as HPLC-MS and HPLC-NMR; these are the basic elements on which the wide and effective application of the HPLC method is based. The purpose of high performance liquid chromatography (HPLC) analysis of any drug is to confirm the identity of the drug, provide quantitative results, and monitor the progress of therapy of a disease.1) Fig. 1 presents a chromatogram obtained from the plasma of depressed patients 12 h before oral administration of dexamethasone. HPLC may also be used to further our understanding of normal and disease processes in the human body through biomedical and therapeutic research during investigations prior to drug registration. The analysis of drugs and metabolites in biological fluids, particularly plasma, serum or urine, is one of the most demanding but also one of the most common uses of high performance liquid chromatography. Blood, plasma or serum contains numerous endogenous

  8. Uncertainty Analyses for Back Projection Methods

    Science.gov (United States)

    Zeng, H.; Wei, S.; Wu, W.

    2017-12-01

    So far, few comprehensive error analyses for back projection methods have been conducted, although it is evident that high frequency seismic waves can be easily affected by earthquake depth, focal mechanisms, and the Earth's 3D structure. Here we perform 1D and 3D synthetic tests for two back projection methods, MUltiple SIgnal Classification (MUSIC) (Meng et al., 2011) and Compressive Sensing (CS) (Yao et al., 2011). We generate synthetics for both point sources and finite rupture sources with different depths and focal mechanisms, as well as 1D and 3D structures in the source region. The 3D synthetics are generated through a hybrid scheme combining the Direct Solution Method and the Spectral Element Method. We then back project the synthetic data using MUSIC and CS. The synthetic tests show that depth phases can be back projected as artificial sources, both in space and time. For instance, for a source depth of 10 km, back projection gives a strong signal 8 km away from the true source. Such bias increases with depth; e.g., the error in horizontal location can exceed 20 km for a depth of 40 km. If the array is located near the nodal direction of the direct P-waves, the teleseismic P-waves are dominated by the depth phases, so back projections are actually imaging the reflection points of the depth phases rather than the rupture front. Besides depth phases, the strong and long-lasting coda waves caused by 3D effects near the trench add further complexity, as tested here. The strength contrast between different frequency contents in the rupture models also introduces some variation into the back projection results. In the synthetic tests, MUSIC and CS derive consistent results; MUSIC is more computationally efficient, while CS works better for sparse arrays. In summary, our analyses indicate that the impact of the various factors mentioned above should be taken into consideration when interpreting back projection images, before we can use them to infer earthquake rupture physics.

  9. Scanning electron microscopy and micro-analyses

    International Nuclear Information System (INIS)

    Brisset, F.; Repoux, L.; Ruste, J.; Grillon, F.; Robaut, F.

    2008-01-01

    Scanning electron microscopy (SEM) and the related micro-analyses are involved in extremely varied domains, from academic environments to industrial ones. The overall theoretical bases, the main technical characteristics, and complementary information about practical usage and maintenance are developed in this book. High-vacuum and controlled-vacuum electron microscopes are thoroughly presented, as well as the latest generation of EDS (energy dispersive spectrometer) and WDS (wavelength dispersive spectrometer) micro-analysers. Beside these main topics, other analysis and observation techniques are covered, such as EBSD (electron backscattering diffraction), 3-D imaging, FIB (focussed ion beams), Monte-Carlo simulations, and in-situ tests. This book, in French, is the only one that treats this subject in such an exhaustive way. It is a fully updated version of a previous edition from 1979, and gathers the lectures given in 2006 at the summer school of Saint Martin d'Heres (France).
Content: 1 - electron-matter interactions; 2 - characteristic X-radiation, Bremsstrahlung; 3 - electron guns in SEM; 4 - elements of electronic optics; 5 - vacuum techniques; 6 - detectors used in SEM; 7 - image formation and optimization in SEM; 7a - SEM practical instructions for use; 8 - controlled pressure microscopy; 8a - applications; 9 - energy selection X-spectrometers (energy dispersive spectrometers - EDS); 9a - EDS analysis; 9b - X-EDS mapping; 10 - technological aspects of WDS; 11 - processing of EDS and WDS spectra; 12 - X-microanalysis quantifying methods; 12a - quantitative WDS microanalysis of very light elements; 13 - statistics: precision and detection limits in microanalysis; 14 - analysis of stratified samples; 15 - crystallography applied to EBSD; 16 - EBSD: history, principle and applications; 16a - EBSD analysis; 17 - Monte Carlo simulation; 18 - insulating samples in SEM and X-ray microanalysis; 18a - insulating

  10. Period Study and Analyses of 2017 Observations of the Totally Eclipsing, Solar Type Binary, MT Camelopardalis

    Science.gov (United States)

    Faulkner, Danny R.; Samec, Ronald G.; Caton, Daniel B.

    2018-06-01

    We report here on a period study and the analysis of BVRcIc light curves (taken in 2017) of MT Cam (GSC03737-01085), a solar-type (T ~ 5500 K) eclipsing binary. D. Caton observed MT Cam on 05, 14, 15, 16, and 17 December 2017 with the 0.81-m reflector at Dark Sky Observatory. Six times of minimum light were calculated from four primary eclipses and two secondary eclipses: HJD I = 2458092.4937±0.0002, 2458102.74600±0.0021, 2458104.5769±0.0002, 2458104.9434±0.0029; HJD II = 2458103.6610±0.0001, 2458104.7607±0.0020. Six times of minimum light were also calculated from data taken by Terrell, Gross, and Cooney in their 2016 and 2004 observations (reported in IBVS #6166; TGC, hereafter). In addition, six more times of minimum light were taken from the literature. From all 18 times of minimum light, we determined the following light elements: JD Hel Min I = 2458102.7460(4) + 0.36613937(5) E. We found the orbital period was constant over the 14 years spanning all observations. We note that TGC found a slightly increasing period; however, our results were obtained from a period study rather than a comparison of observations from only two epochs with the Wilson-Devinney (W-D) program. A simultaneous BVRcIc Johnson-Cousins filtered W-D solution gives a mass ratio (0.3385±0.0014) very nearly the same as TGC's (0.347±0.003), and a component temperature difference of only ~40 K. As with TGC, no spot was needed in the modeling. Our modeling (beginning with Binary Maker 3.0 fits) was done without prior knowledge of TGC's, which shows the agreement achieved when independent analyses are done with the W-D code. The present observations were taken 1.8 years after the last curves by TGC, so some variation is expected. The Roche lobe fill-out of the binary is ~13% and the inclination is ~83.5 degrees. The system is a shallow-contact W-type W UMa binary, although the amplitudes of the primary and secondary eclipses are very nearly identical. An eclipse duration of ~21
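
    A period study of this kind rests on a linear ephemeris, T_min = T0 + P·E. As a minimal sketch (using the light elements quoted above; the O−C machinery here is generic, not the authors' code), one can compute the cycle count and observed-minus-computed residual for any observed minimum:

```python
T0 = 2458102.7460     # reference epoch of primary minimum (HJD), from the light elements
P = 0.36613937        # orbital period in days, from the light elements

def predicted_minimum(epoch):
    """Predicted time of primary minimum for integer cycle count E."""
    return T0 + P * epoch

def o_minus_c(t_obs):
    """Observed-minus-computed residual and the nearest integer cycle number."""
    epoch = round((t_obs - T0) / P)
    return t_obs - predicted_minimum(epoch), epoch

# One of the primary minima reported above:
resid, epoch = o_minus_c(2458104.5769)
print(epoch, round(resid, 4))
```

    A flat trend of such residuals against epoch over all 18 minima is what supports the conclusion of a constant period.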

  11. Oxygen isotope analyses of ground ice from North of West Siberia, from Yakutia and from Chukotka

    International Nuclear Information System (INIS)

    Vaikmaee, R.; Vassilchuk, Y.

    1991-01-01

    The aim of the present work is to make available to specialists the large amount of original factual material obtained by studying the oxygen isotope composition of different types of permafrost and ground ice. The samples analysed were systematically collected over a period of many years from different permafrost areas of the Soviet Union, with the aim of elucidating the regularities of isotope composition formation in different types of ground ice and selecting the most promising objects for paleoclimatic reconstructions. Much attention was paid to methodological problems of isotopic analysis, from the collection, transportation and storage of samples up to the interpretation of the results obtained. Besides permafrost isotope data covering a large geographical area, a good deal of data concerns the isotopic composition of precipitation and surface water in permafrost areas. This is of great consequence for understanding the regularities of isotope composition formation in permafrost. The largest chapter gives a brief account of the isotopic composition of different types of ground ice. The conclusion has been reached that, for paleoclimatic research, syngenetic ice wedges are the most promising. On the basis of this representative data bank it can be maintained with confidence that the isotopic composition provides a reliable basis for differentiating ice wedges originating in different epochs; however, it also reveals regional regularities. Much more complicated is the interpretation of the isotopic composition of textural ice. In some cases it is possible to use the distribution of 18O in vertical sections of textural ice for their stratigraphic division. One has to consider here the different mechanisms of textural ice formation, as a result of which the initial isotopic composition of the ice-forming water can in some cases be highly modified. A problem of its own is the investigation of 18O variations in the section of massive

  12. Multichannel amplitude analyser for nuclear spectrometry

    International Nuclear Information System (INIS)

    Jankovic, S.; Milovanovic, B.

    2003-01-01

    A multichannel amplitude analyser with 4096 channels was designed, based on a fast 12-bit analog-to-digital converter. The intended purpose of the instrument is the recording of nuclear spectra by means of scintillation detectors. The computer link is established through an opto-isolated serial connection, reducing the instrument's sensitivity to disturbances originating from the digital circuitry. The data displayed on the screen are refreshed every 2.5 seconds. Impulse peak detection is implemented through differentiation of the amplified input signal, while synchronization with the data coming from the converter output exploits the internal 'pipeline' structure of the converter itself. The mode of operation of the built-in microcontroller ensures that no impulses are missed, and a simple logic network prevents the initiation of the amplitude reading sequence for the next impulse if it arrives shortly after the preceding one. The solution proposed here demonstrated good performance at a comparatively low manufacturing cost, and is thus well suited for educational purposes (author)
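
    The detection scheme described above (differentiate the signal, take a positive-to-negative derivative crossing as a peak, and hold off further readings shortly after an accepted impulse) can be sketched in software. The waveform, pulse shape, threshold, and hold-off length below are all hypothetical stand-ins for the instrument's analog front end.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical digitized detector signal: baseline noise plus two shaped
# pulses whose heights (in ADC counts) encode the deposited energy.
n = 2000
signal = rng.normal(0.0, 2.0, size=n)
t = np.arange(60)
pulse = (t / 10.0) * np.exp(1.0 - t / 10.0)      # unit-height pulse shape
signal[400:460] += 900.0 * pulse
signal[1200:1260] += 1500.0 * pulse

# Peak detection via differentiation: accept a sample as a peak when the
# derivative crosses from positive to negative above a noise threshold.
# The hold-off interval mimics the simple logic network that blocks a new
# amplitude reading shortly after the previous impulse.
deriv = np.diff(signal)
threshold, hold_off = 50.0, 100
peaks, last = [], -hold_off
for i in range(1, n - 1):
    if deriv[i - 1] > 0 >= deriv[i] and signal[i] > threshold and i - last > hold_off:
        peaks.append(i)
        last = i

# Sort each accepted peak amplitude into one of 4096 channels (12-bit ADC).
channels = np.clip(np.round(signal[peaks]), 0, 4095).astype(int)
histogram = np.bincount(channels, minlength=4096)
print(len(peaks), channels.tolist())
```

    Accumulating such histograms over many pulses yields the pulse-height spectrum the analyser records.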

  13. Scleral topography analysed by optical coherence tomography.

    Science.gov (United States)

    Bandlitz, Stefan; Bäumer, Joachim; Conrad, Uwe; Wolffsohn, James

    2017-08-01

    A detailed evaluation of the corneo-scleral profile (CSP) is of particular relevance in soft and scleral lens fitting. The aim of this study was to use optical coherence tomography (OCT) to analyse the profile of the limbal sclera and to evaluate the relationship between central corneal radii, corneal eccentricity, and scleral radii. Using OCT (Optos OCT/SLO; Dunfermline, Scotland, UK), the limbal scleral radii (SR) of 30 subjects (11M, 19F; mean age 23.8±2.0 SD years) were measured in eight meridians 45° apart. Central corneal radii (CR) and corneal eccentricity (CE) were evaluated using the Oculus Keratograph 4 (Oculus, Wetzlar, Germany). Differences between SR in the meridians and the associations between SR and corneal topography were assessed. Median SR measured along 45° (58.0 mm; interquartile range, 46.8-84.8 mm) differed significantly (p < 0.05) from the other meridians. Scleral radii were not predictable from corneal topography and may provide additional data useful in fitting soft and scleral contact lenses. Copyright © 2017 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.

  14. Bayesian analyses of seasonal runoff forecasts

    Science.gov (United States)

    Krzysztofowicz, R.; Reese, S.

    1991-12-01

    Forecasts of seasonal snowmelt runoff volume provide indispensable information for rational decision making by water project operators, irrigation district managers, and farmers in the western United States. Bayesian statistical models and communication frames have been researched in order to enhance the forecast information disseminated to the users, and to characterize forecast skill from the decision maker's point of view. Four products are presented: (i) a Bayesian Processor of Forecasts, which provides a statistical filter for calibrating the forecasts, and a procedure for estimating the posterior probability distribution of the seasonal runoff; (ii) the Bayesian Correlation Score, a new measure of forecast skill, which is related monotonically to the ex ante economic value of forecasts for decision making; (iii) a statistical predictor of monthly cumulative runoffs within the snowmelt season, conditional on the total seasonal runoff forecast; and (iv) a framing of the forecast message that conveys the uncertainty associated with the forecast estimates to the users. All analyses are illustrated with numerical examples of forecasts for six gauging stations from the period 1971-1988.
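
    The core idea of a Bayesian Processor of Forecasts — combining a climatological prior with a forecast to obtain a posterior distribution of the seasonal runoff — can be sketched under the simplest conjugate normal-normal assumption. The numbers below are purely illustrative, not taken from the study.

```python
# Minimal sketch of Bayesian forecast processing under a normal-normal
# (conjugate) model: the climatological prior on seasonal runoff is combined
# with a forecast treated as a noisy measurement with known error variance.

def posterior(prior_mean, prior_var, forecast, forecast_err_var):
    """Posterior mean and variance of runoff given one forecast."""
    w = prior_var / (prior_var + forecast_err_var)   # weight on the forecast
    post_mean = prior_mean + w * (forecast - prior_mean)
    post_var = (1.0 - w) * prior_var
    return post_mean, post_var

# Illustrative climatology: mean seasonal runoff 500 units, variance 100^2;
# a forecast of 620 units with error variance 50^2.
m, v = posterior(500.0, 100.0**2, 620.0, 50.0**2)
print(round(m, 1), round(v ** 0.5, 1))
```

    The posterior mean sits between climatology and the forecast, weighted by their relative precisions, and the posterior variance is always smaller than the prior variance — the calibration-and-sharpening behaviour the filter is meant to provide.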

  15. Analyses of demand response in Denmark

    International Nuclear Information System (INIS)

    Moeller Andersen, F.; Grenaa Jensen, S.; Larsen, Helge V.; Meibom, P.; Ravn, H.; Skytte, K.; Togeby, M.

    2006-10-01

    Due to characteristics of the power system, costs of producing electricity vary considerably over short time intervals. Yet, many consumers do not experience corresponding variations in the price they pay for consuming electricity. The topic of this report is: are consumers willing and able to respond to short-term variations in electricity prices, and if so, what is the social benefit of consumers doing so? Taking Denmark and the Nord Pool market as a case, the report focuses on what is known as short-term consumer flexibility or demand response in the electricity market. With focus on market efficiency, efficient allocation of resources and security of supply, the report describes demand response from a micro-economic perspective and provides empirical observations and case studies. The report aims at evaluating benefits from demand response. However, only elements contributing to an overall value are presented. In addition, the analyses are limited to benefits for society, and costs of obtaining demand response are not considered. (au)

  16. WIND SPEED AND ENERGY POTENTIAL ANALYSES

    Directory of Open Access Journals (Sweden)

    A. TOKGÖZLÜ

    2013-01-01

    Full Text Available This paper provides a case study on the application of wavelet techniques to analyse wind speed and energy (a renewable and environmentally friendly form of energy). Solar and wind are the main energy sources that offer farmers the potential to use kinetic energy captured by a windmill for pumping water, drying crops, heating greenhouses, rural electrification, or cooking. Larger wind turbines (over 1 MW) can pump enough water for small-scale irrigation. This study initiated a data-gathering process for wavelet analyses, different scale effects, and their role in wind speed and direction variations. The wind data gathering system is mounted at latitude 37°50' N, longitude 30°33' E, at a height of 1200 m above mean sea level, on a hill near the Süleyman Demirel University campus. Ten-minute average values of wind speed and direction at two levels (10 m and 30 m above ground level) were recorded by a data logger between July 2001 and February 2002. Wind speed values ranged between 0 m/s and 54 m/s. The annual mean speed at the 10 m level is 4.5 m/s. Prevalent wind
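
    As a sketch of the wavelet idea applied to such records, a Haar decomposition splits a wind-speed series into detail coefficients at successive dyadic time scales, whose energies show which scales carry the variability. The eight-sample series below is hypothetical; in the study the inputs were 10-minute means from the two anemometer levels.

```python
import numpy as np

def haar_step(x):
    """One level of the (unnormalized) Haar wavelet transform: pairwise
    averages (approximation) and pairwise half-differences (detail)."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / 2.0
    detail = (x[0::2] - x[1::2]) / 2.0
    return approx, detail

# Hypothetical series of 10-minute mean wind speeds (m/s).
wind = np.array([4.1, 4.3, 5.0, 5.4, 6.2, 5.8, 4.9, 4.5])

levels = []
a = wind
for _ in range(3):                 # 3 dyadic scales: 20, 40, 80 minutes
    a, d = haar_step(a)
    levels.append(d)

# Energy of the detail coefficients per scale indicates which time scales
# carry most of the wind-speed variability.
energy = [float(np.sum(d ** 2)) for d in levels]
print([round(e, 4) for e in energy])
```

    Here most of the energy sits in the middle scale, i.e. the 40-minute fluctuations dominate this toy record; the same per-scale bookkeeping, with a proper wavelet family, underlies the scale-effect analysis the paper describes.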

  17. PRECLOSURE CONSEQUENCE ANALYSES FOR LICENSE APPLICATION

    Energy Technology Data Exchange (ETDEWEB)

    S. Tsai

    2005-01-12

    Radiological consequence analyses are performed for potential releases from normal operations in surface and subsurface facilities and from Category 1 and Category 2 event sequences during the preclosure period. Surface releases from normal repository operations are primarily from radionuclides released from opening a transportation cask during dry transfer operations of spent nuclear fuel (SNF) in Dry Transfer Facility 1 (DTF 1), Dry Transfer Facility 2 (DTF 2), the Canister Handling Facility (CHF), or the Fuel Handling Facility (FHF). Subsurface releases from normal repository operations are from resuspension of waste package surface contamination and neutron activation of ventilated air and silica dust from host rock in the emplacement drifts. The purpose of this calculation is to demonstrate that the preclosure performance objectives, specified in 10 CFR 63.111(a) and 10 CFR 63.111(b), have been met for the proposed design and operations in the geologic repository operations area. Preclosure performance objectives are discussed in Section 6.2.3 and are summarized in Tables 1 and 2.

  18. Soil deflation analyses from wind erosion events

    Directory of Open Access Journals (Sweden)

    Lenka Lackóová

    2015-09-01

    Full Text Available There are various methods to assess soil erodibility by wind. This paper focuses on aggregate analysis with a laser particle sizer, ANALYSETTE 22 (FRITSCH GmbH), used to determine the size distribution of soil particles detached by wind (deflated particles). Ten soil samples, trapped along the same length of the erosion surface (150-155 m) but at different wind speeds, were analysed. The soil was sampled from a flat, smooth area without vegetation cover or soil crust, not affected by the impact of windbreaks or other barriers, from a depth of at most 2.5 cm. Prior to analysis the samples were prepared according to the relevant specifications. An experiment was also conducted using a device that enables characterisation of the vertical movement of the deflated material. The trapped samples showed no differences in particle size or in the proportions of size fractions at different hourly average wind speeds. It was observed that most of the particles travelling in saltation mode (size 50-500 μm; 58-70% of the sample) moved vertically up to 26 cm above the soil surface. At greater heights, particles moving in suspension mode (floating in the air; size < 100 μm) accounted for up to 90% of the samples. This result suggests that the boundary between the two modes of vertical movement of deflated soil particles lies at about 25 cm above the soil surface.

  19. Genomic analyses of modern dog breeds.

    Science.gov (United States)

    Parker, Heidi G

    2012-02-01

    A rose may be a rose by any other name, but when you call a dog a poodle it becomes a very different animal than if you call it a bulldog. Both the poodle and the bulldog are examples of dog breeds of which there are >400 recognized worldwide. Breed creation has played a significant role in shaping the modern dog from the length of his leg to the cadence of his bark. The selection and line-breeding required to maintain a breed has also reshaped the genome of the dog, resulting in a unique genetic pattern for each breed. The breed-based population structure combined with extensive morphologic variation and shared human environments have made the dog a popular model for mapping both simple and complex traits and diseases. In order to obtain the most benefit from the dog as a genetic system, it is necessary to understand the effect structured breeding has had on the genome of the species. That is best achieved by looking at genomic analyses of the breeds, their histories, and their relationships to each other.

  20. Interim Basis for PCB Sampling and Analyses

    International Nuclear Information System (INIS)

    BANNING, D.L.

    2001-01-01

    This document was developed as an interim basis for sampling and analysis of polychlorinated biphenyls (PCBs) and will be used until a formal data quality objective (DQO) document is prepared and approved. On August 31, 2000, the Framework Agreement for Management of Polychlorinated Biphenyls (PCBs) in Hanford Tank Waste was signed by the U.S. Department of Energy (DOE), the Environmental Protection Agency (EPA), and the Washington State Department of Ecology (Ecology) (Ecology et al. 2000). This agreement outlines the management of double shell tank (DST) waste as Toxic Substances Control Act (TSCA) PCB remediation waste based on a risk-based disposal approval option per Title 40 of the Code of Federal Regulations 761.61(c). The agreement calls for ''Quantification of PCBs in DSTs, single shell tanks (SSTs), and incoming waste to ensure that the vitrification plant and other ancillary facilities PCB waste acceptance limits and the requirements of the anticipated risk-based disposal approval are met.'' Waste samples will be analyzed for PCBs to satisfy this requirement. This document describes the DQO process undertaken to assure appropriate data will be collected to support management of PCBs and is presented in a DQO format. The DQO process was implemented in accordance with EPA QA/G-4, Guidance for the Data Quality Objectives Process (EPA 1994), and the Data Quality Objectives for Sampling and Analyses, HNF-IP-0842/Rev. 1A, Vol. IV, Section 4.16 (Banning 1999)

  1. Achieving reasonable conservatism in nuclear safety analyses

    International Nuclear Information System (INIS)

    Jamali, Kamiar

    2015-01-01

    In the absence of methods that explicitly account for uncertainties, seeking reasonable conservatism in nuclear safety analyses can quickly lead to extreme conservatism. The rate of divergence to extreme conservatism is often beyond the expert analysts’ intuitive feeling, but can be demonstrated mathematically. Too much conservatism in addressing the safety of nuclear facilities is not beneficial to society. Using certain properties of lognormal distributions for representation of input parameter uncertainties, example calculations for the risk and consequence of a fictitious facility accident scenario are presented. Results show that there are large differences between the calculated 95th percentiles and the extreme bounding values derived from using all input variables at their upper-bound estimates. Showing the relationship of the mean values to the key parameters of the output distributions, the paper concludes that the mean is the ideal candidate for representation of the value of an uncertain parameter. The mean value is proposed as the metric that is consistent with the concept of reasonable conservatism in nuclear safety analysis, because its value increases towards higher percentiles of the underlying positively skewed distribution with increasing levels of uncertainty. Insensitivity of the results to the actual underlying distributions is briefly demonstrated. - Highlights: • Multiple conservative assumptions can quickly diverge into extreme conservatism. • Mathematics and attractive properties provide basis for wide use of lognormal distribution. • Mean values are ideal candidates for representation of parameter uncertainties. • Mean values are proposed as reasonably conservative estimates of parameter uncertainties
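
    The divergence described here can be demonstrated numerically: with lognormal input uncertainties, stacking upper-bound estimates yields a result far beyond the output distribution's own 95th percentile, while the mean stays modest. The parameters below (five inputs, error factor 3) are illustrative, not the paper's fictitious-facility numbers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Five independent lognormal input parameters, each with median 1 and an
# error factor of 3 (ratio of the 95th percentile to the median).
n_inputs, n_samples = 5, 200_000
sigma = np.log(3.0) / 1.645          # lognormal sigma implied by the error factor
x = rng.lognormal(mean=0.0, sigma=sigma, size=(n_samples, n_inputs))

result = x.prod(axis=1)              # output modeled as the product of the inputs

bounding = 3.0 ** n_inputs           # every input taken at its 95th percentile
p95 = float(np.quantile(result, 0.95))
mean = float(result.mean())

print(round(bounding), round(p95, 1), round(mean, 2))
```

    The bounding value (243) exceeds the output's actual 95th percentile (about 12) by roughly a factor of twenty, and the mean (about 3) sits well below both — illustrating why compounding upper bounds is extreme rather than reasonable conservatism, and why the mean tracks increasing uncertainty gracefully.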

  2. CFD analyses of coolant channel flowfields

    Science.gov (United States)

    Yagley, Jennifer A.; Feng, Jinzhang; Merkle, Charles L.

    1993-01-01

    The flowfield characteristics in rocket engine coolant channels are analyzed by means of a numerical model. The channels are characterized by large length-to-diameter ratios, high Reynolds numbers, and asymmetrical heating. At representative flow conditions, the channel length is approximately twice the hydraulic entrance length, so that fully developed conditions would be reached for a constant-property fluid. For the supercritical hydrogen that is used as the coolant, the strong property variations create significant secondary flows in the cross-plane which have a major influence on the flow and the resulting heat transfer. Comparison of constant- and variable-property solutions shows substantial differences. In addition, the property variations prevent fully developed flow. The density variation accelerates the fluid in the channels, increasing the pressure drop without an accompanying increase in heat flux. Analyses of the inlet configuration suggest that side entry from a manifold can affect the development of the velocity profile because of vortices generated as the flow enters the channel. Current work is focused on studying the effects of channel bifurcation on the flow field and the heat transfer characteristics.

  3. Fast and accurate methods for phylogenomic analyses

    Directory of Open Access Journals (Sweden)

    Warnow Tandy

    2011-10-01

    Full Text Available Abstract Background Species phylogenies are not estimated directly, but rather through phylogenetic analyses of different gene datasets. However, true gene trees can differ from the true species tree (and hence from one another) due to biological processes such as horizontal gene transfer, incomplete lineage sorting, and gene duplication and loss, so that no single gene tree is a reliable estimate of the species tree. Several methods have been developed to estimate species trees from estimated gene trees, differing according to the specific algorithmic technique used and the biological model used to explain differences between species and gene trees. Relatively little is known about the relative performance of these methods. Results We report on a study evaluating several different methods for estimating species trees from sequence datasets, simulating sequence evolution under a complex model including indels (insertions and deletions), substitutions, and incomplete lineage sorting. The most important finding of our study is that some fast and simple methods are nearly as accurate as the most accurate methods, which employ sophisticated statistical methods and are computationally quite intensive. We also observe that methods that explicitly consider errors in the estimated gene trees produce more accurate trees than methods that assume the estimated gene trees are correct. Conclusions Our study shows that highly accurate estimations of species trees are achievable, even when gene trees differ from each other and from the species tree, and that these estimations can be obtained using fairly simple and computationally tractable methods.

  4. Mediation Analyses in the Real World

    DEFF Research Database (Denmark)

    Lange, Theis; Starkopf, Liis

    2016-01-01

    The paper by Nguyen et al.1 published in this issue of Epidemiology presents a comparison of the recently suggested inverse odds ratio approach for addressing mediation and a more conventional Baron and Kenny-inspired method. Interestingly, the comparison is not done through a discussion of restr… it simultaneously ensures that the comparison is based on properties which matter in actual applications, and makes the comparison accessible for a broader audience. In a wider context, the choice to stay close to real-life problems mirrors a general trend within the literature on mediation analysis, namely to put… applications using the inverse odds ratio approach, as it simply has not had enough time to move from theoretical concept to published applied paper, we do expect to be able to judge the willingness of authors and journals to employ the causal inference-based approach to mediation analyses. Our hope…

  5. Reproducibility of neuroimaging analyses across operating systems.

    Science.gov (United States)

    Glatard, Tristan; Lewis, Lindsay B; Ferreira da Silva, Rafael; Adalat, Reza; Beck, Natacha; Lepage, Claude; Rioux, Pierre; Rousseau, Marc-Etienne; Sherif, Tarek; Deelman, Ewa; Khalili-Mahani, Najmeh; Evans, Alan C

    2015-01-01

    Neuroimaging pipelines are known to generate different results depending on the computing platform where they are compiled and executed. We quantify these differences for brain tissue classification, fMRI analysis, and cortical thickness (CT) extraction, using three of the main neuroimaging packages (FSL, Freesurfer and CIVET) and different versions of GNU/Linux. We also identify some causes of these differences using library and system call interception. We find that these packages use mathematical functions based on single-precision floating-point arithmetic whose implementations in operating systems continue to evolve. While these differences have little or no impact on simple analysis pipelines such as brain extraction and cortical tissue classification, their accumulation creates important differences in longer pipelines such as subcortical tissue classification, fMRI analysis, and cortical thickness extraction. With FSL, most Dice coefficients between subcortical classifications obtained on different operating systems remain above 0.9, but values as low as 0.59 are observed. Independent component analyses (ICA) of fMRI data differ between operating systems in one third of the tested subjects, due to differences in motion correction. With Freesurfer and CIVET, in some brain regions we find an effect of build or operating system on cortical thickness. A first step to correct these reproducibility issues would be to use more precise representations of floating-point numbers in the critical sections of the pipelines. The numerical stability of pipelines should also be reviewed.
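
    The Dice coefficients quoted above compare pairs of segmentations. As a minimal stand-alone sketch (the masks below are toy data, not the paper's), the metric can be computed as:

```python
def dice_coefficient(a, b):
    """Dice overlap between two binary masks given as flat 0/1 sequences."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    total = sum(a) + sum(b)
    return 1.0 if total == 0 else 2.0 * inter / total

# Toy voxel masks for one structure classified on two operating systems
mask_os1 = [1, 1, 0, 1, 0, 0]
mask_os2 = [1, 1, 0, 0, 0, 0]
print(dice_coefficient(mask_os1, mask_os2))  # 2*2/(3+2) = 0.8
```

A value of 1.0 means identical masks, so the cross-platform values as low as 0.59 reported above indicate substantial disagreement.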

  6. Activation analyses for different fusion structural alloys

    International Nuclear Information System (INIS)

    Attaya, H.; Smith, D.

    1991-01-01

    The leading candidate structural materials, viz., the vanadium alloys, the nickel- or manganese-stabilized austenitic steels, and the ferritic steels, are analysed in terms of their induced activation in the TPSS fusion power reactor. The TPSS reactor has 1950 MW fusion power and inboard and outboard average neutron wall loadings of 3.75 and 5.35 MW/m², respectively. The results show that, after one year of continuous operation, the vanadium alloys have the least radioactivity at reactor shutdown. The maximum difference between the induced radioactivity in the vanadium alloys and in the other iron-based alloys occurs at about 10 years after reactor shutdown. At this time, the total reactor radioactivity using the vanadium alloys is about two orders of magnitude less than with any other alloy. The difference is even larger in the first wall: the FW vanadium activation is three orders of magnitude less than the FW activation of the other alloys. 2 refs., 7 figs

  7. Statistical analyses of extreme food habits

    International Nuclear Information System (INIS)

    Breuninger, M.; Neuhaeuser-Berthold, M.

    2000-01-01

    This report summarizes the results of the project ''Statistical analyses of extreme food habits'', which was commissioned by the National Office for Radiation Protection as a contribution to the amendment of the ''General Administrative Regulation to paragraph 45 of the Decree on Radiation Protection: determination of the radiation exposure by emission of radioactive substances from facilities of nuclear technology''. Its aim is to show whether the calculation of the radiation dose ingested via food intake by 95% of the population, as planned in a provisional draft, overestimates the true exposure. If such an overestimation exists, its magnitude should be determined. It was possible to prove the existence of this overestimation, but its magnitude could only be estimated roughly. To identify its real extent, it is necessary to include the specific activities of the nuclides, which were not available for this investigation. In addition, the report shows how the amounts consumed from different groups of foods influence each other and which connections between these amounts should be taken into account in order to estimate the radiation exposure as precisely as possible. (orig.) [de

  8. Evaluation of the Olympus AU-510 analyser.

    Science.gov (United States)

    Farré, C; Velasco, J; Ramón, F

    1991-01-01

    The selective multitest Olympus AU-510 analyser was evaluated according to the recommendations of the Comision de Instrumentacion de la Sociedad Española de Quimica Clinica and the European Committee for Clinical Laboratory Standards. The evaluation was carried out in two stages: an examination of the analytical units and then an evaluation in routine work conditions. The operational characteristics of the system were also studied. The first stage included a photometric study: depending on the absorbance, the inaccuracy varies between +0.5% and -0.6% at 405 nm and from -5.6% to 10.6% at 340 nm; the imprecision ranges between -0.22% and 0.56% at 405 nm and between 0.09% and 2.74% at 340 nm. Linearity was acceptable, apart from a very low absorbance for NADH at 340 nm, and the imprecision of the serum sample pipetter was satisfactory. Twelve serum analytes were studied under routine conditions: glucose, urea, urate, cholesterol, triglycerides, total bilirubin, creatinine, phosphate, iron, aspartate aminotransferase, alanine aminotransferase and gamma-glutamyl transferase. The within-run imprecision (CV%) ranged from 0.67% for phosphate to 2.89% for iron, and the between-run imprecision from 0.97% for total bilirubin to 7.06% for iron. There was no carry-over in a study of the serum sample pipetter. Carry-over studies with the reagent and sample pipetters showed some cross-contamination in the iron assay.

  9. PRECLOSURE CONSEQUENCE ANALYSES FOR LICENSE APPLICATION

    International Nuclear Information System (INIS)

    S. Tsai

    2005-01-01

    Radiological consequence analyses are performed for potential releases from normal operations in surface and subsurface facilities and from Category 1 and Category 2 event sequences during the preclosure period. Surface releases from normal repository operations are primarily radionuclides released from opening a transportation cask during dry transfer operations of spent nuclear fuel (SNF) in Dry Transfer Facility 1 (DTF 1), Dry Transfer Facility 2 (DTF 2), the Canister Handling Facility (CHF), or the Fuel Handling Facility (FHF). Subsurface releases from normal repository operations arise from resuspension of waste package surface contamination and from neutron activation of ventilated air and of silica dust from host rock in the emplacement drifts. The purpose of this calculation is to demonstrate that the preclosure performance objectives, specified in 10 CFR 63.111(a) and 10 CFR 63.111(b), have been met for the proposed design and operations in the geologic repository operations area. Preclosure performance objectives are discussed in Section 6.2.3 and are summarized in Tables 1 and 2.

  10. Genomic analyses of the CAM plant pineapple.

    Science.gov (United States)

    Zhang, Jisen; Liu, Juan; Ming, Ray

    2014-07-01

    The innovation of crassulacean acid metabolism (CAM) photosynthesis in arid and/or low CO2 conditions is a remarkable case of adaptation in flowering plants. As the most important crop that utilizes CAM photosynthesis, the genetic and genomic resources of pineapple have been developed over many years. Genetic diversity studies using various types of DNA markers led to the reclassification of the two genera Ananas and Pseudananas and nine species into one genus Ananas and two species, A. comosus and A. macrodontes, with five botanical varieties in A. comosus. Five genetic maps have been constructed using F1 or F2 populations, and high-density genetic maps generated by genotype sequencing are essential resources for sequencing and assembling the pineapple genome and for marker-assisted selection. There are abundant expressed sequence tag resources but limited genomic sequences in pineapple. Genes involved in the CAM pathway have been analysed in several CAM plants, but only a few of them are from pineapple. A reference genome of pineapple is being generated and will accelerate genetic and genomic research in this major CAM crop. This reference genome of pineapple provides the foundation for studying the origin and regulatory mechanism of CAM photosynthesis, and the opportunity to evaluate the classification of Ananas species and botanical cultivars. © The Author 2014. Published by Oxford University Press on behalf of the Society for Experimental Biology. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  11. Social Media Analyses for Social Measurement

    Science.gov (United States)

    Schober, Michael F.; Pasek, Josh; Guggenheim, Lauren; Lampe, Cliff; Conrad, Frederick G.

    2016-01-01

    Demonstrations that analyses of social media content can align with measurement from sample surveys have raised the question of whether survey research can be supplemented or even replaced with less costly and burdensome data mining of already-existing or “found” social media content. But just how trustworthy such measurement can be—say, to replace official statistics—is unknown. Survey researchers and data scientists approach key questions from starting assumptions and analytic traditions that differ on, for example, the need for representative samples drawn from frames that fully cover the population. New conversations between these scholarly communities are needed to understand the potential points of alignment and non-alignment. Across these approaches, there are major differences in (a) how participants (survey respondents and social media posters) understand the activity they are engaged in; (b) the nature of the data produced by survey responses and social media posts, and the inferences that are legitimate given the data; and (c) practical and ethical considerations surrounding the use of the data. Estimates are likely to align to differing degrees depending on the research topic and the populations under consideration, the particular features of the surveys and social media sites involved, and the analytic techniques for extracting opinions and experiences from social media. Traditional population coverage may not be required for social media content to effectively predict social phenomena to the extent that social media content distills or summarizes broader conversations that are also measured by surveys. PMID:27257310

  12. Reliability Analyses of Groundwater Pollutant Transport

    Energy Technology Data Exchange (ETDEWEB)

    Dimakis, Panagiotis

    1997-12-31

    This thesis develops a probabilistic finite element model for the analysis of groundwater pollution problems. Two computer codes were developed: (1) one using the finite element technique to solve the two-dimensional steady-state equations of groundwater flow and pollutant transport, and (2) a first-order reliability method code that can perform a probabilistic analysis of any given analytical or numerical equation. The two codes were connected into one model, PAGAP (Probability Analysis of Groundwater And Pollution). PAGAP can be used to obtain (1) the probability that the concentration at a given point at a given time will exceed a specified value, (2) the probability that the maximum concentration at a given point will exceed a specified value, and (3) the probability that the residence time at a given point will exceed a specified period. PAGAP can be used as a tool for assessment purposes and risk analyses, for instance assessing the efficiency of a proposed remediation technique or studying the effects of parameter distributions for a given problem (sensitivity study). The model has been applied to study the largest self-sustained, precipitation-controlled aquifer in northern Europe, which underlies Oslo's new major airport. 92 refs., 187 figs., 26 tabs.
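
    The kind of quantity PAGAP reports, e.g. the probability that a concentration exceeds a specified value under uncertain inputs, can be illustrated with a plain Monte Carlo sketch (the one-line transport model and the lognormal parameter distributions below are invented for illustration, not taken from the thesis, which couples a finite element solver to a first-order reliability method):

```python
import math
import random

random.seed(42)

def concentration(k, v, x=50.0):
    """Toy 1-D advection-decay model c = exp(-k*x/v); stands in for the
    finite element transport solver."""
    return math.exp(-k * x / v)

def prob_exceed(threshold, n=100_000):
    """Monte Carlo estimate of P(concentration at the observation point
    exceeds `threshold`) under uncertain decay rate and pore velocity."""
    hits = 0
    for _ in range(n):
        k = random.lognormvariate(math.log(0.01), 0.3)  # decay rate [1/m]
        v = random.lognormvariate(math.log(1.0), 0.2)   # velocity [m/d]
        if concentration(k, v) > threshold:
            hits += 1
    return hits / n

p = prob_exceed(0.6)
print(f"P(c > 0.6) ~= {p:.3f}")
```

A first-order reliability method reaches the same probability much faster by locating the most probable failure point instead of sampling, which is why PAGAP uses it.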

  13. System for analysing sickness absenteeism in Poland.

    Science.gov (United States)

    Indulski, J A; Szubert, Z

    1997-01-01

    The National System of Sickness Absenteeism Statistics has been functioning in Poland since 1977 as part of the national health statistics. The system is based on a 15-percent random sample of copies of certificates of temporary incapacity for work issued by all health care units and authorised private medical practitioners. A certificate of temporary incapacity for work is received by every insured employee who is compelled to stop working due to sickness or accident, or due to the necessity to care for a sick member of his or her family. The certificate is required on the first day of sickness. Analyses of disease- and accident-related sickness absenteeism carried out each year in Poland within the statistical system lead to two main conclusions: 1. Diseases of the musculoskeletal and peripheral nervous systems, which together account for one third of total sickness absenteeism, are a major health problem of the working population in Poland. During the past five years, incapacity for work caused by these diseases in males increased 2.5-fold. 2. Circulatory diseases, in particular arterial hypertension and ischaemic heart disease (41% and 27% of sickness days, respectively), are an essential health problem among males of productive age, especially in the 40-and-older age group. Absenteeism due to these diseases has increased in males more than twofold.

  14. Comparative analyses of bidirectional promoters in vertebrates

    Directory of Open Access Journals (Sweden)

    Taylor James

    2008-05-01

    Full Text Available Abstract Background Orthologous genes with deep phylogenetic histories are likely to retain similar regulatory features. In this report we utilize orthology assignments for pairs of genes co-regulated by bidirectional promoters to map the ancestral history of the promoter regions. Results Our mapping of bidirectional promoters from humans to fish shows that many such promoters emerged after the divergence of chickens and fish. Furthermore, annotations of promoters in deep phylogenies enable detection of missing data or assembly problems present in higher vertebrates. The functional importance of bidirectional promoters is indicated by selective pressure to maintain the arrangement of genes regulated by the promoter over long evolutionary time spans. Characteristics unique to bidirectional promoters are further elucidated using a technique for unsupervised classification, known as ESPERR. Conclusion Results of these analyses will aid in our understanding of the evolution of bidirectional promoters, including whether the regulation of two genes evolved as a consequence of their proximity or if function dictated their co-regulation.

  15. Thermomagnetic Analyses to Test Concrete Stability

    Science.gov (United States)

    Geiss, C. E.; Gourley, J. R.

    2017-12-01

    Over the past decades, pyrrhotite-containing aggregate has been used in concrete to build basements and foundations in central Connecticut. The sulphur in the pyrrhotite reacts to form several secondary minerals, and the associated changes in volume lead to a loss of structural integrity. As a result, hundreds of homes have been rendered worthless, as remediation costs often exceed the value of the homes, and the value of many other homes constructed during the same period is in question because concrete provenance and potential future structural issues are unknown. While minor abundances of pyrrhotite are difficult to detect or quantify by traditional means, the mineral is easily identified through its magnetic properties. All concrete samples from affected homes show a clear increase in magnetic susceptibility above 220°C, due to the γ-transition of Fe9S10 [1], and a clearly defined Curie temperature near 320°C for Fe7S8. X-ray analyses confirm the presence of pyrrhotite and ettringite in these samples. Synthetic mixtures of commercially available concrete and pyrrhotite show that the method is semi-quantitative but needs to be calibrated for specific pyrrhotite mineralogies. 1. Schwarz, E.J., Magnetic properties of pyrrhotite and their use in applied geology and geophysics. 1975, Geological Survey of Canada: Ottawa, ON, Canada.

  17. Validating experimental and theoretical Langmuir probe analyses

    Science.gov (United States)

    Pilling, L. S.; Carnegie, D. A.

    2007-08-01

    Analysis of Langmuir probe characteristics contains a paradox, in that it is unknown a priori which theory is applicable before it is applied. Theories are often assumed to be correct when certain criteria are met, although those criteria may not validate the approach used. We have analysed Langmuir probe data from cylindrical double and single probes acquired from a dc discharge plasma over a wide variety of conditions. This discharge contains a dual-temperature distribution, and hence fitting a theoretically generated curve is impractical. To determine the densities, an examination of the current theories was necessary. For conditions where the probe radius is of the same order of magnitude as the Debye length, the gradient expected for orbital-motion-limited (OML) collection is approximately the same as the radial-motion gradients. An analysis of the 'gradients' from radial-motion theory was able to resolve the differences from the OML gradient value of two. The method was also able to determine whether radial or OML theory applied without knowledge of the electron temperature or separation of the ion and electron contributions. Only the value of the space potential is needed to determine the applicable theory.

  18. Bench top and portable mineral analysers, borehole core analysers and in situ borehole logging

    International Nuclear Information System (INIS)

    Howarth, W.J.; Watt, J.S.

    1982-01-01

    Bench-top and portable mineral analysers are usually based on balanced-filter techniques using scintillation detectors or on low-resolution proportional detectors. The application of radioisotope X-ray techniques to in situ borehole logging is increasing; it is particularly suited to logging for tin and elements of higher atomic number.

  19. Integrated Field Analyses of Thermal Springs

    Science.gov (United States)

    Shervais, K.; Young, B.; Ponce-Zepeda, M. M.; Rosove, S.

    2011-12-01

    A group of undergraduate researchers in the SURE internship offered by the Southern California Earthquake Center (SCEC) has examined thermal springs in southern Idaho and northern Utah, as well as mud volcanoes in the Salton Sea, California. We used an integrated approach to estimate the setting and maximum temperature, including water chemistry, iPad-based image and database management, microbiology, and gas analyses with a modified Giggenbach sampler. All springs were characterized using GISRoam (tmCogent3D). We are performing geothermometry calculations as well as comparisons with temperature-gradient data while also analyzing biological samples. Analyses include water temperature, pH, electrical conductivity, and TDS measured in the field. Each sample is sealed, chilled, and delivered to a water lab within 12 hours. Temperatures are continuously monitored with Solinst Levelogger Juniors. Through a partnership with a local community college geology club, we receive results on a monthly basis and are able to process initial data earlier in order to evaluate data over a longer time span. The springs and mudpots contained microbial organisms, which were analyzed using single-colony isolation, polymerase chain reaction, and DNA sequencing, showing the impact of the organisms on the springs or vice versa. Soon we will collect gas samples at sites that show signs of gas, using a hybrid of the Giggenbach method and our own methods. Drawing gas samples has proven a challenge; however, we devised a method to draw out gas samples utilizing the Giggenbach flask, transferring samples to glass blood-sample tubes, replacing the NaOH in the Giggenbach flask, and evacuating it in the field for multiple samples using a vacuum pump. We also use a floating platform devised to carry and lower a levelogger, and an in-line fuel filter from a tractor in order to keep mud from contaminating the equipment. The use of raster…

  20. Transient Seepage for Levee Engineering Analyses

    Science.gov (United States)

    Tracy, F. T.

    2017-12-01

    Historically, steady-state seepage analyses have been a key tool for designing levees by practicing engineers. However, with the advances in computer modeling, transient seepage analysis has become a potentially viable tool. A complication is that the levees usually have partially saturated flow, and this is significantly more complicated in transient flow. This poster illustrates four elements of our research in partially saturated flow relating to the use of transient seepage for levee design: (1) a comparison of results from SEEP2D, SEEP/W, and SLIDE for a generic levee cross section common to the southeastern United States; (2) the results of a sensitivity study of varying saturated hydraulic conductivity, the volumetric water content function (as represented by van Genuchten), and volumetric compressibility; (3) a comparison of when soils do and do not exhibit hysteresis, and (4) a description of proper and improper use of transient seepage in levee design. The variables considered for the sensitivity and hysteresis studies are pore pressure beneath the confining layer at the toe, the flow rate through the levee system, and a levee saturation coefficient varying between 0 and 1. Getting results for SEEP2D, SEEP/W, and SLIDE to match proved more difficult than expected. After some effort, the results matched reasonably well. Differences in results were caused by various factors, including bugs, different finite element meshes, different numerical formulations of the system of nonlinear equations to be solved, and differences in convergence criteria. Varying volumetric compressibility affected the above test variables the most. The levee saturation coefficient was most affected by the use of hysteresis. The improper use of pore pressures from a transient finite element seepage solution imported into a slope stability computation was found to be the most grievous mistake in using transient seepage in the design of levees.

  1. Summary of the analyses for recovery factors

    Science.gov (United States)

    Verma, Mahendra K.

    2017-07-17

    Introduction: In order to determine the hydrocarbon potential of oil reservoirs within the U.S. sedimentary basins for which the carbon dioxide enhanced oil recovery (CO2-EOR) process has been considered suitable, the CO2 Prophet model was chosen by the U.S. Geological Survey (USGS) to be the primary source for estimating recovery-factor values for individual reservoirs. The choice was made because of the model’s reliability and the ease with which it can be used to assess a large number of reservoirs. The other two approaches—the empirical decline curve analysis (DCA) method and a review of published literature on CO2-EOR projects—were deployed to verify the results of the CO2 Prophet model. This chapter discusses the results from CO2 Prophet (chapter B, by Emil D. Attanasi, this report) and compares them with results from decline curve analysis (chapter C, by Hossein Jahediesfanjani) and those reported in the literature for selected reservoirs with adequate data for analyses (chapter D, by Ricardo A. Olea). To estimate the technically recoverable hydrocarbon potential for oil reservoirs where CO2-EOR has been applied, two of the three approaches—CO2 Prophet modeling and DCA—do not include analysis of economic factors, while the third approach—review of published literature—implicitly includes economics. For selected reservoirs, DCA has provided estimates of the technically recoverable hydrocarbon volumes, which, in combination with calculated amounts of original oil in place (OOIP), helped establish incremental CO2-EOR recovery factors for individual reservoirs. The review of published technical papers and reports has provided substantial information on recovery factors for 70 CO2-EOR projects that are either commercially profitable or classified as pilot tests. When comparing the results, it is important to bear in mind the differences and limitations of these three approaches.
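
    Decline curve analysis, one of the two verification approaches above, estimates technically recoverable volumes from the fitted decline of production rates. A minimal sketch under an assumed exponential decline (all numbers are illustrative; real DCA also fits hyperbolic and harmonic forms and works from measured rate histories):

```python
import math

def eur_exponential(qi, q_limit, D):
    """Recoverable volume for exponential decline q(t) = qi * exp(-D*t),
    producing until the rate falls to the economic limit q_limit:
    Np = (qi - q_limit) / D."""
    return (qi - q_limit) / D

def time_to_limit(qi, q_limit, D):
    """Time at which the rate declines to q_limit."""
    return math.log(qi / q_limit) / D

qi, q_limit, D = 1000.0, 50.0, 0.15   # rates per year, decline rate in 1/yr
eur = eur_exponential(qi, q_limit, D)
t_lim = time_to_limit(qi, q_limit, D)
print(f"EUR = {eur:.0f} rate-units x yr, reached after {t_lim:.1f} yr")
```

Dividing such a DCA-derived recoverable volume by the calculated OOIP gives the incremental recovery factor used in the comparison above.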

  2. The ABC (Analysing Biomolecular Contacts) database

    Directory of Open Access Journals (Sweden)

    Walter Peter

    2007-03-01

    Full Text Available As protein-protein interactions are one of the basic mechanisms in most cellular processes, it is desirable to understand the molecular details of protein-protein contacts and ultimately be able to predict which proteins interact. Interface areas on a protein surface that are involved in protein interactions exhibit certain characteristics. Therefore, several attempts have been made to distinguish protein interactions from each other and to categorize them. One such classification divides them into transient and permanent interactions. Previously, two of the authors analysed several properties of transient complexes, such as the amino acid and secondary-structure element composition and pairing preferences. Certainly, interfaces can be characterized by many more possible attributes, and this is a subject of intense ongoing research. Although several freely available online databases exist that illuminate various aspects of protein-protein interactions, we decided to construct a new database collecting all desired interface features and allowing for facile selection of subsets of complexes. MySQL is used as the database server, and the program logic is written in Java. Furthermore, several class extensions and tools were included, such as Jmol to visualize the interfaces and JFreeChart for the representation of diagrams and statistics. The contact data are automatically generated from standard PDB files by a tcl/tk script running through the molecular visualization package VMD. Currently the database contains 536 interfaces extracted from 479 PDB files, and it can be queried by various types of parameters. Here, we describe the database design and demonstrate its usefulness with a number of selected features.

  3. Trend analyses with river sediment rating curves

    Science.gov (United States)

    Warrick, Jonathan A.

    2015-01-01

    Sediment rating curves, which are fitted relationships between river discharge (Q) and suspended-sediment concentration (C), are commonly used to assess patterns and trends in river water quality. In many of these studies it is assumed that rating curves have a power-law form (i.e., C = aQ^b, where a and b are fitted parameters). Two fundamental questions about the utility of these techniques are assessed in this paper: (i) How well do the parameters, a and b, characterize trends in the data? (ii) Are trends in rating curves diagnostic of changes to river water or sediment discharge? As noted in previous research, the offset parameter, a, is not an independent variable for most rivers, but rather strongly dependent on b and Q. Here it is shown that a is a poor metric for trends in the vertical offset of a rating curve, and a new parameter, â, as determined by the discharge-normalized power function [C = â(Q/QGM)^b], where QGM is the geometric mean of the Q values sampled, provides a better characterization of trends. However, these techniques must be applied carefully, because curvature in the relationship between log(Q) and log(C), which exists for many rivers, can produce false trends in â and b. Also, it is shown that trends in â and b are not uniquely diagnostic of river water or sediment supply conditions. For example, an increase in â can be caused by an increase in sediment supply, a decrease in water supply, or a combination of these conditions. Large changes in water and sediment supplies can occur without any change in the parameters â and b. Thus, trend analyses using sediment rating curves must include additional assessments of the time-dependent rates and trends of river water, sediment concentrations, and sediment discharge.
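
    The discharge-normalized fit described above can be sketched in a few lines: regress log(C) on log(Q/QGM), so that the intercept is â, the concentration at the geometric-mean discharge (the discharge and concentration data below are synthetic, for illustration only):

```python
import math

# Synthetic discharge (m3/s) and suspended-sediment concentration (mg/L)
Q = [5.0, 12.0, 30.0, 80.0, 150.0, 400.0]
C = [20.0, 55.0, 160.0, 500.0, 1000.0, 3100.0]

# Least-squares fit of log(C) = log(a_hat) + b * log(Q/QGM),
# the discharge-normalized power function C = a_hat * (Q/QGM)^b
logQ = [math.log(q) for q in Q]
QGM = math.exp(sum(logQ) / len(logQ))      # geometric mean of sampled Q
x = [lq - math.log(QGM) for lq in logQ]    # log(Q/QGM), centered by design
y = [math.log(c) for c in C]

n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
    sum((xi - xbar) ** 2 for xi in x)
a_hat = math.exp(ybar - b * xbar)          # concentration at Q = QGM

print(f"QGM = {QGM:.1f}, a_hat = {a_hat:.1f}, b = {b:.2f}")
```

Because the regressor is centered on log(QGM), â and b are nearly uncorrelated, which is what makes â a better trend metric than the raw offset a.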

  4. BN-600 hybrid core benchmark analyses

    International Nuclear Information System (INIS)

    Kim, Y.I.; Stanculescu, A.; Finck, P.; Hill, R.N.; Grimm, K.N.

    2003-01-01

Benchmark analyses for the hybrid BN-600 reactor, which contains three uranium enrichment zones and one plutonium zone in the core, have been performed within the framework of an IAEA-sponsored Coordinated Research Project. The results for several relevant reactivity parameters obtained by the participants with their own state-of-the-art basic data and codes were compared in terms of calculational uncertainty, and their effects on the ULOF transient behavior of the hybrid BN-600 core were evaluated. The comparison of the diffusion and transport results obtained for the homogeneous representation generally shows good agreement for most parameters between the RZ and HEX-Z models. The burnup effect and the heterogeneity effect on most reactivity parameters also show good agreement for the HEX-Z diffusion and transport theory results. A large difference noticed for the sodium and steel density coefficients is mainly due to differences in the spatial coefficient predictions for non-fuelled regions. The burnup reactivity loss was evaluated to be 0.025 (4.3 $) within a ∼5.0% standard deviation. The heterogeneity effect on most reactivity coefficients was estimated to be small. The heterogeneity treatment reduced the control rod worth by 2.3%. The heterogeneity effect on the k-eff and control rod worth appeared to differ strongly depending on the heterogeneity treatment method. A substantial spread noticed for several reactivity coefficients did not have a significant impact on the transient behavior prediction. This result is attributable to compensating effects between several reactivity effects and the specific design of the partially MOX-fuelled hybrid core. (author)

  5. Analysing 21cm signal with artificial neural network

    Science.gov (United States)

    Shimabukuro, Hayato; Semelin, Benoit

    2018-05-01

The 21cm signal from the epoch of reionization (EoR) should be observed within the next decade. We expect the cosmic 21cm signal at the EoR to provide both cosmological and astrophysical information. In order to extract fruitful information from observation data, we need to develop an inversion method. For such a method, we introduce the artificial neural network (ANN), one of the machine learning techniques. We apply the ANN to the inversion problem of constraining astrophysical parameters from the 21cm power spectrum. We train the architecture of the neural network with 70 training datasets and apply it to 54 test datasets with different parameter values. We find that the quality of the parameter reconstruction depends on the sensitivity of the power spectrum to the different parameter sets at a given redshift, and also that the accuracy of reconstruction is improved by increasing the number of given redshifts. We conclude that the ANN is a viable inversion method whose main strength is that it requires only a sparse sampling of the parameter space and thus should be usable with full simulation.
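As a toy illustration of this kind of ANN inversion (the network size, learning rate, and the three-bin mock "spectra" below are invented stand-ins, not the paper's simulation setup), a one-hidden-layer network can be trained by plain gradient descent to map a spectrum back to the single parameter that generated it:

```python
import math
import random

def train_inverter(spectra, params, hidden=6, lr=0.1, epochs=4000, seed=1):
    """Train a one-hidden-layer tanh network (full-batch gradient descent,
    pure Python) to invert a mock 'power spectrum' to its parameter."""
    random.seed(seed)
    d = len(spectra[0])
    n = len(spectra)
    W1 = [[random.uniform(-0.5, 0.5) for _ in range(d)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    W2 = [random.uniform(-0.5, 0.5) for _ in range(hidden)]
    b2 = 0.0

    def forward(x):
        h = [math.tanh(sum(w * xj for w, xj in zip(W1[i], x)) + b1[i])
             for i in range(hidden)]
        return h, sum(w * hi for w, hi in zip(W2, h)) + b2

    for _ in range(epochs):
        gW1 = [[0.0] * d for _ in range(hidden)]
        gb1 = [0.0] * hidden
        gW2 = [0.0] * hidden
        gb2 = 0.0
        for x, t in zip(spectra, params):          # accumulate full-batch MSE gradients
            h, y = forward(x)
            e = 2.0 * (y - t) / n                  # d(MSE)/dy for this sample
            gb2 += e
            for i in range(hidden):
                gW2[i] += e * h[i]
                dh = e * W2[i] * (1.0 - h[i] * h[i])  # backprop through tanh
                gb1[i] += dh
                for j in range(d):
                    gW1[i][j] += dh * x[j]
        b2 -= lr * gb2
        for i in range(hidden):
            W2[i] -= lr * gW2[i]
            b1[i] -= lr * gb1[i]
            for j in range(d):
                W1[i][j] -= lr * gW1[i][j]

    return lambda x: forward(x)[1]

# 70 toy training examples, echoing the paper's training-set size; the
# 3-bin "spectra" are an invented smooth function of the parameter theta.
thetas = [i / 69 for i in range(70)]
spectra = [[t, t * t, math.sin(t)] for t in thetas]
predict = train_inverter(spectra, thetas)
```

The real pipeline trains on simulated 21cm power spectra at several redshifts; this sketch only shows the mechanics of fitting an inverse mapping.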

  6. Vibro-spring particle size distribution analyser

    International Nuclear Information System (INIS)

    Patel, Ketan Shantilal

    2002-01-01

This thesis describes the design and development of an automated pre-production particle size distribution analyser for particles in the 20 - 2000 μm size range. This work is a follow-up to the vibro-spring particle sizer reported by Shaeri. In its most basic form, the instrument comprises a horizontally held closed-coil helical spring that is partly filled with the test powder and sinusoidally vibrated in the transverse direction. Particle size distribution data are obtained by stretching the spring to known lengths and measuring the mass of the powder discharged from the spring's coils. The size of the particles, on the other hand, is determined from the spring 'intercoil' distance. The instrument developed by Shaeri had limited use due to its inability to measure sample mass directly. For the device reported here, modifications are made to the original configuration to establish means of direct sample mass measurement. The feasibility of techniques for measuring the mass of powder retained within the spring is investigated in detail. Initially, the measurement of mass is executed in situ from the vibration characteristics, based on the spring's first harmonic resonant frequency. This method is often erratic and unreliable due to particle-particle and particle-spring-wall interactions and spring bending. A much more successful alternative is found in a more complicated arrangement in which the spring forms part of a stiff cantilever system pivoted along its main axis. Here, the sample mass is determined in the 'static mode' by monitoring the cantilever beam's deflection following the deliberate termination of vibration. The system performance has been optimised through variations of the mechanical design of the key components and the operating procedure, as well as by taking into account the effect of changes in the ambient temperature on the system's response. The thesis also describes the design and development of the ancillary mechanisms. These include the pneumatic

  7. Kuosheng Mark III containment analyses using GOTHIC

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Ansheng, E-mail: samuellin1999@iner.gov.tw; Chen, Yen-Shu; Yuann, Yng-Ruey

    2013-10-15

Highlights: • The Kuosheng Mark III containment model is established using GOTHIC. • Containment pressure and temperature responses due to LOCA are presented. • The calculated results are all below the design values and compared with the FSAR results. • The calculated results can serve as an analysis reference for an SPU project in the future. -- Abstract: Kuosheng nuclear power plant in Taiwan is a twin-unit BWR/6 plant, and both units utilize the Mark III containment. Currently, the plant is performing a stretch power uprate (SPU) project to increase the core thermal power to 103.7% OLTP (original licensed thermal power). However, the containment analysis in the Kuosheng Final Safety Analysis Report (FSAR) was completed more than twenty-five years ago. The purpose of this study is to establish a Kuosheng Mark III containment model using the containment program GOTHIC. The containment pressure and temperature responses under the design-basis accidents, which are the main steam line break (MSLB) and the recirculation line break (RCLB) accidents, are investigated. Short-term and long-term analyses are presented in this study. The short-term analysis calculates the drywell peak pressure and temperature, which occur in the early stage of the LOCAs. The long-term analysis calculates the peak pressure and temperature of the reactor building space. In the short-term analysis, the calculated peak drywell-to-wetwell differential pressure is 140.6 kPa for the MSLB, which is below the design value of 189.6 kPa. The calculated peak drywell temperature is 158 °C, which is still below the design value of 165.6 °C. In addition, in the long-term analysis, the calculated peak containment pressure is 47 kPa G, which is below the design value of 103.4 kPa G. The calculated peak containment temperature is 74.7 °C, which is lower than the design value of 93.3 °C. Therefore, the Kuosheng Mark III containment can maintain its integrity after

  8. YALINA Booster subcritical assembly modeling and analyses

    International Nuclear Information System (INIS)

    Talamo, A.; Gohar, Y.; Aliberti, G.; Cao, Y.; Zhong, Z.; Kiyavitskaya, H.; Bournos, V.; Fokov, Y.; Routkovskaya, C.; Sadovich, S.

    2010-01-01

Full text: Accurate simulation models of the YALINA Booster assembly of the Joint Institute for Power and Nuclear Research (JIPNR)-Sosny, Belarus have been developed by Argonne National Laboratory (ANL) of the USA. YALINA-Booster has coupled zones operating with fast and thermal neutron spectra, which require special attention in the modelling process. Three different uranium enrichments of 90%, 36% or 21% were used in the fast zone and 10% uranium enrichment was used in the thermal zone. Two of the most advanced Monte Carlo computer programs have been utilized for the ANL analyses: MCNP of the Los Alamos National Laboratory and MONK of British Nuclear Fuels Limited and SERCO Assurance. The developed geometrical models for both computer programs reproduce all the details of the YALINA Booster facility as described in the technical specifications defined in the International Atomic Energy Agency (IAEA) report, without any geometrical approximation or material homogenization. Material impurities and the measured material densities have been used in the models. The results for the neutron multiplication factors calculated in criticality mode (keff) and in source mode (ksrc) with an external neutron source from the two Monte Carlo programs are very similar. Different external neutron sources have been investigated, including californium, deuterium-deuterium (D-D), and deuterium-tritium (D-T) neutron sources. The spatial neutron flux profiles and the neutron spectra in the experimental channels were calculated. In addition, the kinetic parameters were defined, including the effective delayed neutron fraction, the prompt neutron lifetime, and the neutron generation time. A new calculation methodology has been developed at ANL to simulate the pulsed neutron source experiments. In this methodology, the MCNP code is used to simulate the detector response from a single pulse of the external neutron source and a C code is used to superimpose the pulse until the

  9. Altools: a user friendly NGS data analyser.

    Science.gov (United States)

    Camiolo, Salvatore; Sablok, Gaurav; Porceddu, Andrea

    2016-02-17

Genotyping by re-sequencing has become a standard approach to estimate single nucleotide polymorphism (SNP) diversity, haplotype structure and biodiversity, and has been defined as an efficient approach to address the geographical population genomics of several model species. To access core SNPs and insertion/deletion polymorphisms (indels), and to infer the phyletic patterns of speciation, most such approaches map short reads to the reference genome. Variant calling is important to establish patterns of genome-wide association studies (GWAS) for quantitative trait loci (QTLs), and to determine the population and haplotype structure based on SNPs, thus allowing context-dependent trait and evolutionary analysis. Several tools have been developed to investigate such polymorphisms as well as more complex genomic rearrangements such as copy number variations, presence/absence variations and large deletions. The programs available for this purpose have different strengths (e.g. accuracy, sensitivity and specificity) and weaknesses (e.g. low computation speed, complex installation procedures and the absence of a user-friendly interface). Here we introduce Altools, a software package that is easy to install and use, and which allows the precise detection of polymorphisms and structural variations. Altools uses the BWA/SAMtools/VarScan pipeline to call SNPs and indels, and the dnaCopy algorithm to achieve genome segmentation according to local coverage differences in order to identify copy number variations. It also uses insert size information from the alignment of paired-end reads and detects potential large deletions. A double mapping approach (BWA/BLASTn) identifies precise breakpoints while ensuring rapid elaboration. Finally, Altools implements several processes that yield deeper insight into the genes affected by the detected polymorphisms. Altools was used to analyse both simulated and real next-generation sequencing (NGS) data and performed satisfactorily in terms of

  10. First Super-Earth Atmosphere Analysed

    Science.gov (United States)

    2010-12-01

    The atmosphere around a super-Earth exoplanet has been analysed for the first time by an international team of astronomers using ESO's Very Large Telescope. The planet, which is known as GJ 1214b, was studied as it passed in front of its parent star and some of the starlight passed through the planet's atmosphere. We now know that the atmosphere is either mostly water in the form of steam or is dominated by thick clouds or hazes. The results will appear in the 2 December 2010 issue of the journal Nature. The planet GJ 1214b was confirmed in 2009 using the HARPS instrument on ESO's 3.6-metre telescope in Chile (eso0950) [1]. Initial findings suggested that this planet had an atmosphere, which has now been confirmed and studied in detail by an international team of astronomers, led by Jacob Bean (Harvard-Smithsonian Center for Astrophysics), using the FORS instrument on ESO's Very Large Telescope. "This is the first super-Earth to have its atmosphere analysed. We've reached a real milestone on the road toward characterising these worlds," said Bean. GJ 1214b has a radius of about 2.6 times that of the Earth and is about 6.5 times as massive, putting it squarely into the class of exoplanets known as super-Earths. Its host star lies about 40 light-years from Earth in the constellation of Ophiuchus (the Serpent Bearer). It is a faint star [2], but it is also small, which means that the size of the planet is large compared to the stellar disc, making it relatively easy to study [3]. The planet travels across the disc of its parent star once every 38 hours as it orbits at a distance of only two million kilometres: about seventy times closer than the Earth orbits the Sun. To study the atmosphere, the team observed the light coming from the star as the planet passed in front of it [4]. During these transits, some of the starlight passes through the planet's atmosphere and, depending on the chemical composition and weather on the planet, specific wavelengths of light are

  11. Systems reliability analyses and risk analyses for the licencing procedure under atomic law

    International Nuclear Information System (INIS)

    Berning, A.; Spindler, H.

    1983-01-01

For the licencing procedure under atomic law in accordance with Article 7 AtG, the nuclear power plant as a whole needs to be assessed, and the reliability of systems and plant components that are essential to safety must be determined with probabilistic methods. This requirement is a consequence of the safety criteria for nuclear power plants issued by the Home Department (BMI). Systems reliability studies and risk analyses used in licencing procedures under atomic law are identified. The emphasis is on licencing decisions, mainly for PWR-type reactors. Reactor Safety Commission (RSK) guidelines, examples of reasoning in legal proceedings and arguments put forth by objectors are also dealt with. Correlations between reliability analyses made by experts and licencing decisions are shown by means of examples. (orig./HP) [de

  12. SAND: an automated VLBI imaging and analysing pipeline - I. Stripping component trajectories

    Science.gov (United States)

    Zhang, M.; Collioud, A.; Charlot, P.

    2018-02-01

    We present our implementation of an automated very long baseline interferometry (VLBI) data-reduction pipeline that is dedicated to interferometric data imaging and analysis. The pipeline can handle massive VLBI data efficiently, which makes it an appropriate tool to investigate multi-epoch multiband VLBI data. Compared to traditional manual data reduction, our pipeline provides more objective results as less human interference is involved. The source extraction is carried out in the image plane, while deconvolution and model fitting are performed in both the image plane and the uv plane for parallel comparison. The output from the pipeline includes catalogues of CLEANed images and reconstructed models, polarization maps, proper motion estimates, core light curves and multiband spectra. We have developed a regression STRIP algorithm to automatically detect linear or non-linear patterns in the jet component trajectories. This algorithm offers an objective method to match jet components at different epochs and to determine their proper motions.

  13. Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses.

    Science.gov (United States)

    Faul, Franz; Erdfelder, Edgar; Buchner, Axel; Lang, Albert-Georg

    2009-11-01

    G*Power is a free power analysis program for a variety of statistical tests. We present extensions and improvements of the version introduced by Faul, Erdfelder, Lang, and Buchner (2007) in the domain of correlation and regression analyses. In the new version, we have added procedures to analyze the power of tests based on (1) single-sample tetrachoric correlations, (2) comparisons of dependent correlations, (3) bivariate linear regression, (4) multiple linear regression based on the random predictor model, (5) logistic regression, and (6) Poisson regression. We describe these new features and provide a brief introduction to their scope and handling.
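For the bivariate correlation case, the kind of computation such a power analysis performs can be sketched with the standard Fisher z approximation (a textbook approximation shown for illustration; this is not G*Power's own algorithm, which uses exact distributions for several tests):

```python
import math
from statistics import NormalDist

def correlation_power(rho, n, alpha=0.05):
    """Approximate power of a two-sided test of H0: correlation = 0, using
    the Fisher z-transformation of a sample Pearson correlation."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1.0 - alpha / 2.0)       # two-sided critical value
    shift = math.atanh(rho) * math.sqrt(n - 3)   # mean of z*sqrt(n-3) under H1
    # probability of falling in either rejection region under H1
    return nd.cdf(shift - z_crit) + nd.cdf(-shift - z_crit)

power = correlation_power(rho=0.3, n=100)        # roughly 0.86
```

As expected, power grows with sample size at a fixed true correlation, which is the trade-off such programs are used to explore.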

  14. Forced vibration tests and simulation analyses of a nuclear reactor building. Part 2: simulation analyses

    International Nuclear Information System (INIS)

    Kuno, M.; Nakagawa, S.; Momma, T.; Naito, Y.; Niwa, M.; Motohashi, S.

    1995-01-01

Forced vibration tests of a BWR-type reactor building, Hamaoka Unit 4, were performed. Valuable data on the dynamic characteristics of the soil-structure interaction system were obtained through the tests. Simulation analyses of the fundamental dynamic characteristics of the soil-structure system were conducted using a basic lumped-mass soil-structure model (lattice model), and strong correlation with the measured data was obtained. Furthermore, detailed simulation models were employed to investigate the effects of simultaneously induced vertical response and of the response of the adjacent turbine building on the lateral response of the reactor building. (author). 4 refs., 11 figs

  15. Early 500 MHz prototype LEP RF Cavity with superposed storage cavity

    CERN Multimedia

    CERN PhotoLab

    1981-01-01

    The principle of transferring the RF power back and forth between the accelerating cavity and a side-coupled storage cavity was demonstrated with this 500 MHz prototype. In LEP, the accelerating frequency was 352.2 MHz, and accelerating and storage cavities were consequently larger. See also 8002294, 8006061, 8407619X, and Annual Reports 1980, p.115; 1981, p.95; 1985, vol.I, p.13.

  16. Novel Method for Superposing 3D Digital Models for Monitoring Orthodontic Tooth Movement.

    Science.gov (United States)

    Schmidt, Falko; Kilic, Fatih; Piro, Neltje Emma; Geiger, Martin Eberhard; Lapatki, Bernd Georg

    2018-04-18

    Quantitative three-dimensional analysis of orthodontic tooth movement (OTM) is possible by superposition of digital jaw models made at different times during treatment. Conventional methods rely on surface alignment at palatal soft-tissue areas, which is applicable to the maxilla only. We introduce two novel numerical methods applicable to both maxilla and mandible. The OTM from the initial phase of multi-bracket appliance treatment of ten pairs of maxillary models were evaluated and compared with four conventional methods. The median range of deviation of OTM for three users was 13-72% smaller for the novel methods than for the conventional methods, indicating greater inter-observer agreement. Total tooth translation and rotation were significantly different (ANOVA, p < 0.01) for OTM determined by use of the two numerical and four conventional methods. Directional decomposition of OTM from the novel methods showed clinically acceptable agreement with reference results except for vertical translations (deviations of medians greater than 0.6 mm). The difference in vertical translational OTM can be explained by maxillary vertical growth during the observation period, which is additionally recorded by conventional methods. The novel approaches are, thus, particularly suitable for evaluation of pure treatment effects, because growth-related changes are ignored.

  17. Flapping current sheet with superposed waves seen in space and on the ground

    Science.gov (United States)

    Wang, Guoqiang; Volwerk, Martin; Nakamura, Rumi; Boakes, Peter; Zhang, Tielong; Ge, Yasong; Yoshikawa, Akimasa; Baishev, Dmitry

    2015-04-01

A wavy current sheet event observed on 15 October 2004 between 1235 and 1300 UT has been studied using Cluster and ground-based magnetometer data. Waves propagating from the tail centre to the duskside flank, with a period of ~30 s and a wavelength of ~1 RE, are superimposed on a flapping current sheet and accompanied by a bursty bulk flow (BBF). Three Pi2 pulsations, with onsets at ~1236, ~1251 and ~1255 UT, respectively, are observed at the Tixie (TIK) station located near the foot-points of Cluster. The mechanism creating the Pi2 (period ~40 s) with onset at ~1236 UT is unclear. The second Pi2 (period ~90 s, onset at ~1251 UT) is associated with a strong field-aligned current with a strong transverse magnetic field component, observed by Cluster with a time delay of ~60 s; we suggest that it is caused by Alfvén waves bouncing between the northern and southern ionosphere, which transport the field-aligned current. The third Pi2 (period ~60 s) shows almost no damping over its first three periods; its oscillations occur in one-to-one conjunction with periodic field-aligned currents at a 72 s delay, and we suggest that it is generated by these periodic currents. We conclude that strong field-aligned currents generated in the plasma sheet during flapping with superimposed higher-frequency waves can drive Pi2 pulsations on the ground, and that periodic field-aligned currents can even control the period of the Pi2s.

  18. Structural analysis of superposed fault systems of the Bornholm horst block, Tornquist Zone, Denmark

    DEFF Research Database (Denmark)

    Graversen, Ole

    2009-01-01

    The Bornholm horst block is composed of Precambrian crystalline basement overlain by Palaeozoicand Mesozoic cover rocks. The cover intervals are separated by an angular unconformity and a hiatus spanning the Devonian through Middle Triassic interval. Late Palaeozoic faulting of the Early Palaeozo...

  19. Effect of magnetic field on Rayleigh-Taylor instability of two superposed fluids

    International Nuclear Information System (INIS)

    Sharma, P K; Tiwari, Anita; Chhajlani, R K

    2012-01-01

The effect of a two-dimensional magnetic field on the Rayleigh-Taylor (R-T) instability in an incompressible plasma is investigated, including simultaneously the effects of suspended particles and the porosity of the medium. The relevant linearized perturbation equations have been solved. The explicit expression of the linear growth rate is obtained for fixed boundary conditions. A stability criterion for the medium is derived, and Rayleigh-Taylor instabilities in different configurations are discussed. It is found that the basic Rayleigh-Taylor instability condition is modified by the presence of the magnetic field, suspended particles and porosity of the medium. In the case of an unstable R-T configuration, the magnetic field has a stabilizing effect on the system. It is also found that the growth rate of an unstable R-T mode decreases with increasing relaxation frequency, thereby showing a stabilizing influence on the R-T configuration.
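For orientation, the classical hydromagnetic result that such analyses modify is the growth rate of the Rayleigh-Taylor mode for two superposed, inviscid, incompressible fluids with a uniform horizontal magnetic field (standard textbook form, e.g. Chandrasekhar's treatment; this is not the paper's modified expression, which adds suspended-particle and porosity terms):

```latex
\gamma^{2} \;=\; g k \,\frac{\rho_{2}-\rho_{1}}{\rho_{2}+\rho_{1}}
\;-\; \frac{2\,(\mathbf{k}\cdot\mathbf{B})^{2}}{\mu_{0}\,(\rho_{1}+\rho_{2})}
```

Here the fluid of density ρ₂ overlies that of density ρ₁, k is the horizontal perturbation wavenumber and B the uniform horizontal field. The first term drives the instability when the heavier fluid is on top; the magnetic tension term is the stabilizing contribution referred to in the abstract, suppressing short-wavelength modes aligned with the field.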

  20. The Tacit 'Quantum' of Meeting the Aesthetic Sign; Contextualize, Entangle, Superpose, Collapse or Decohere.

    Science.gov (United States)

    Broekaert, Jan

    2018-01-01

The semantically ambiguous nature of the sign and aspects of non-classicality of elementary matter as described by quantum theory show a remarkably coherent analogy. We focus on how the ambiguous nature of the image, text and art work bears functional resemblance to the dynamics of contextuality, entanglement, superposition, collapse and decoherence as these phenomena are known in quantum theory. These quantum-like properties in linguistic signs have previously been identified in formal descriptions of e.g. concept combinations and mental lexicon representations and have been reported on in the literature. In this approach the informationalized, communicated, mediatized conceptual configuration (of e.g. the art work) in the personal reflected mind behaves like a quantum state function in a higher-dimensional complex space, in which it is time and again contextually collapsed and further cognitively entangled (Aerts et al. in Found Sci 4:115-132, 1999; in Lect Notes Comput Sci 7620:36-47, 2012). The observer-consumer of signs becomes the empowered 'produmer' (Floridi in The philosophy of information, Oxford University Press, Oxford, 2011), creating the cognitive outcome of the interaction while losing most of any 'classical givenness' of the sign (Bal and Bryson in Art Bull 73:174-208, 1991). These quantum-like descriptions are now developed here in four example aesthetic signs: the installation Mist room by Ann Veronica Janssens (2010), the installation Sections of a happy moment by David Claerbout (2010), the photograph The Falling Man by Richard Drew (New York Times, p. 7, September 12, 2001) and the documentary Huicholes. The Last Peyote Guardians by Vilchez and Stefani (2014). Our present work further develops the use of a previously developed quantum model for concept representation in natural language. In our present approach to the aesthetic sign, we extend to individual, idiosyncratic observer contexts instead of socially shared group contexts, and as such also include multiple idiosyncratic creations of meaning and experience. This irreducible superposition emerges as the core feature of the aesthetic sign and is most critically embedded in the 'no-interpretation' interpretation of the documentary signal.

  1. Subsecond dopamine fluctuations in human striatum encode superposed error signals about actual and counterfactual reward

    Science.gov (United States)

    Kishida, Kenneth T.; Saez, Ignacio; Lohrenz, Terry; Witcher, Mark R.; Laxton, Adrian W.; Tatter, Stephen B.; White, Jason P.; Ellis, Thomas L.; Phillips, Paul E. M.; Montague, P. Read

    2016-01-01

    In the mammalian brain, dopamine is a critical neuromodulator whose actions underlie learning, decision-making, and behavioral control. Degeneration of dopamine neurons causes Parkinson’s disease, whereas dysregulation of dopamine signaling is believed to contribute to psychiatric conditions such as schizophrenia, addiction, and depression. Experiments in animal models suggest the hypothesis that dopamine release in human striatum encodes reward prediction errors (RPEs) (the difference between actual and expected outcomes) during ongoing decision-making. Blood oxygen level-dependent (BOLD) imaging experiments in humans support the idea that RPEs are tracked in the striatum; however, BOLD measurements cannot be used to infer the action of any one specific neurotransmitter. We monitored dopamine levels with subsecond temporal resolution in humans (n = 17) with Parkinson’s disease while they executed a sequential decision-making task. Participants placed bets and experienced monetary gains or losses. Dopamine fluctuations in the striatum fail to encode RPEs, as anticipated by a large body of work in model organisms. Instead, subsecond dopamine fluctuations encode an integration of RPEs with counterfactual prediction errors, the latter defined by how much better or worse the experienced outcome could have been. How dopamine fluctuations combine the actual and counterfactual is unknown. One possibility is that this process is the normal behavior of reward processing dopamine neurons, which previously had not been tested by experiments in animal models. Alternatively, this superposition of error terms may result from an additional yet-to-be-identified subclass of dopamine neurons. PMID:26598677

  2. Mineralogical-geochemical specificity of the uranium mineralization superposed on the oxidized rocks

    International Nuclear Information System (INIS)

    Bulatov, S.G.; Shchetochkin, V.N.

    1975-01-01

    Taking as an example a uranium deposit connected with oxidation zones developing along the strata, the author examines the mineralogical and geochemical features of a pitchblende-sooty uraninite mineralization superimposed on limonitized sandstones. The typical relations between ore mineralization with new formations of the infiltration oxidation process and the changes caused by the action of rising thermal solutions on the rocks are given. Based on these relations, two generations of different ages of rich pitchblende-sooty uraninite ores are distinguished, separated by the time of development of the oxidation processes. The typical change around the ore is a reduction of limonitized rocks, accompanied by their pyritization, clarification and hematitization. The ore concentrations were formed as a result of the action of rising thermal solutions that had interacted with oxidized rocks. The development of late oxidation processes caused the redistribution of these ore concentrations and their downward shift along the stratum slope following the limonitization boundary. On the basis of the data presented, comments of a forecasting and prospecting nature are made. (author)

  3. Energy band theory of heterometal superposed film and relevant comments on superconductivity in heterometal systems

    International Nuclear Information System (INIS)

    Zhang, L.; Yin, D.

    1981-08-01

A method for calculating the electronic structure of a heterogeneous metal-metal interface is discussed. It combines a series of well-defined interface plane-wave orbitals with the muffin-tin orbitals. The problem of high-Tc superconductivity in systems containing metal-metal interfaces, and the related problem in compounds, is addressed

  4. Evidence for an All-Or-None Perceptual Response: Single-Trial Analyses of Magnetoencephalography Signals Indicate an Abrupt Transition Between Visual Perception and Its Absence

    Science.gov (United States)

    Sekar, Krithiga; Findley, William M.; Llinás, Rodolfo R.

    2014-01-01

Whether consciousness is an all-or-none or graded phenomenon is an area of inquiry that has received considerable interest in neuroscience and is as yet still debated. In this magnetoencephalography (MEG) study we used a single-stimulus paradigm with sub-threshold, threshold and supra-threshold duration inputs to assess whether stimulus perception is continuous with, or abruptly differentiated from, unconscious stimulus processing in the brain. By grouping epochs according to stimulus identification accuracy and exposure duration, we were able to investigate whether a high-amplitude perception-related cortical event was (1) evoked only for conditions where perception was most probable, (2) invariant in amplitude once evoked, and (3) largely absent for conditions where perception was least probable (criteria satisfying an all-or-none hypothesis). We found that averaged evoked responses showed a gradual increase in amplitude with increasing perceptual strength. However, single-trial analyses demonstrated that stimulus perception was correlated with an all-or-none response, the temporal precision of which increased systematically as perception transitioned from ambiguous to robust states. Due to the poor signal-to-noise resolution of single-trial data, whether perception-related responses, whenever present, were invariant in amplitude could not be unambiguously demonstrated. However, our findings strongly suggest that visual perception of simple stimuli is associated with an all-or-none cortical evoked response, the temporal precision of which varies as a function of perceptual strength. PMID:22020091

  5. Integrated Waste Treatment Unit (IWTU) Input Coal Analyses and Off-Gass Filter (OGF) Content Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Jantzen, Carol M. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Missimer, David M. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Guenther, Chris P. [National Energy Technology Lab. (NETL), Morgantown, WV (United States); Shekhawat, Dushyant [National Energy Technology Lab. (NETL), Morgantown, WV (United States); VanEssendelft, Dirk T. [National Energy Technology Lab. (NETL), Morgantown, WV (United States); Means, Nicholas C. [AECOM Technology Corp., Oak Ridge, TN (United States)

    2015-04-23

    in process piping and materials, in excessive off-gas absorbent loading, and in undesired process emissions. The ash content of the coal is important because the ash adds to the DMR and other vessel products, which affects the final waste product mass and composition. The amount and composition of the ash also affect the reaction kinetics; thus ash content and composition contribute to the mass balance. In addition, sodium, potassium, calcium, sulfur, and possibly silica and alumina in the ash may contribute to wall-scale formation. Sodium, potassium, and alumina in the ash will be overwhelmed by the sodium, potassium, and alumina from the feed, but the impact of the other ash components needs to be quantified. A maximum coal particle size is specified so the feed system does not plug, and a minimum particle size is specified to prevent excess elutriation from the DMR to the Process Gas Filter (PGF). A vendor specification was used to procure the calcined coal for IWTU processing. While the vendor supplied a composite analysis for the 22 tons of coal (Appendix A), this study compares independent analyses of the coal performed at the Savannah River National Laboratory (SRNL) and at the National Energy Technology Laboratory (NETL). Three supersacks were sampled at three different heights within each sack in order to determine within-bag and between-bag variability of the coal. These analyses were also compared to the vendor's composite analyses, to the coal specification, and to historic data on Bestac coal analyses performed at Hazen Research Inc. (HRI) between 2004 and 2011.

  6. Integrating and scheduling an open set of static analyses

    DEFF Research Database (Denmark)

    Eichberg, Michael; Mezini, Mira; Kloppenburg, Sven

    2006-01-01

    to keep the set of analyses open. We propose an approach to integrating and scheduling an open set of static analyses which decouples the individual analyses and coordinates the analysis executions such that the overall time and space consumption is minimized. The approach has been implemented...... for the Eclipse IDE and has been used to integrate a wide range of analyses such as finding bug patterns, detecting violations of design guidelines, or type system extensions for Java....
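The decoupled scheduling idea described above can be sketched as a dependency-driven topological sort: each analysis declares the facts it produces and consumes, and producers are scheduled before consumers. (The analysis names and the `needs`/`makes` structure below are hypothetical illustrations, not the Eclipse integration described in the record.)

```python
# Sketch of scheduling an open set of static analyses by declared
# dependencies (Kahn's topological sort). Names are hypothetical.
from collections import defaultdict, deque

analyses = {
    "bytecode-model": {"needs": [],               "makes": ["cfg"]},
    "bug-patterns":   {"needs": ["cfg"],          "makes": []},
    "design-rules":   {"needs": ["cfg", "types"], "makes": []},
    "type-extension": {"needs": ["cfg"],          "makes": ["types"]},
}

def schedule(analyses):
    # Map each produced fact to its producing analysis.
    producer = {}
    for name, spec in analyses.items():
        for fact in spec["makes"]:
            producer[fact] = name
    # Build in-degrees and consumer lists from the declared needs.
    indeg = {name: 0 for name in analyses}
    users = defaultdict(list)
    for name, spec in analyses.items():
        for fact in spec["needs"]:
            users[producer[fact]].append(name)  # KeyError if nothing makes it
            indeg[name] += 1
    ready = deque(sorted(n for n, d in indeg.items() if d == 0))
    order = []
    while ready:
        n = ready.popleft()
        order.append(n)
        for m in users[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                ready.append(m)
    if len(order) != len(analyses):
        raise ValueError("cyclic analysis dependencies")
    return order

order = schedule(analyses)
```

Because the schedule is derived from declarations rather than hard-coded, a new analysis can be dropped in simply by stating what it needs and makes, which is the sense in which the set stays open.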

  7. Signals and memory in tree-ring width and density data

    Czech Academy of Sciences Publication Activity Database

    Esper, J.; Schneider, L.; Smerdon, J. E.; Schoene, B.; Büntgen, Ulf

    2015-01-01

    Vol. 35, October 2015, pp. 62-70. ISSN 1125-7865. Institutional support: RVO:67179843. Keywords: summer temperature variations * major volcanic eruptions * European summer * chronologies * climate * variability * reconstruction * precipitation * millennium * centuries * Maximum latewood density * Temperature * Autocorrelation * Superposed epoch analysis * Volcanic eruption * Northern Hemisphere. Subject RIV: EH - Ecology, Behaviour. Impact factor: 2.107, year: 2015

  8. Angular analyses in relativistic quantum mechanics

    Energy Technology Data Exchange (ETDEWEB)

    Moussa, P [Commissariat a l' Energie Atomique, 91 - Saclay (France). Centre d' Etudes Nucleaires

    1968-06-01

    This work describes the angular analysis of reactions between particles with spin in a fully relativistic fashion. One-particle states are introduced, following Wigner's method, as representations of the inhomogeneous Lorentz group. In order to perform the angular analyses, the reduction of the product of two representations of the inhomogeneous Lorentz group is studied. Clebsch-Gordan coefficients are computed for the following couplings: l-s coupling, helicity coupling, multipolar coupling, and symmetric coupling for more than two particles. Massless and massive particles are handled simultaneously. Along the way we construct spinorial amplitudes and free fields, and we recall how convergence theorems for angular expansions are established from analyticity hypotheses. Finally, we substitute these hypotheses for the idea of a 'potential radius', which at low energy gives the usual 'centrifugal barrier' factors. The presence of such factors had never before been deduced from hypotheses compatible with relativistic invariance. (author)

  9. Comparative biochemical analyses of venous blood and peritoneal fluid from horses with colic using a portable analyser and an in-house analyser.

    Science.gov (United States)

    Saulez, M N; Cebra, C K; Dailey, M

    2005-08-20

    Fifty-six horses with colic were examined over a period of three months. The concentrations of glucose, lactate, sodium, potassium and chloride, and the pH of samples of blood and peritoneal fluid, were determined with a portable clinical analyser and with an in-house analyser, and the results were compared. Compared with the in-house analyser, the portable analyser gave higher pH values for blood and peritoneal fluid, with greater variability in the alkaline range; lower pH values in the acidic range; lower concentrations of glucose in the range below 8.3 mmol/l; and lower concentrations of lactate in venous blood in the range below 5 mmol/l and in peritoneal fluid in the range below 2 mmol/l, with less variability. On average, the portable analyser underestimated the concentrations of lactate and glucose in peritoneal fluid in comparison with the in-house analyser. Its measurements of the concentrations of sodium and chloride in peritoneal fluid had a higher bias and were more variable than the measurements in venous blood, and its measurements of potassium in venous blood and peritoneal fluid had a smaller bias and less variability than the measurements made with the in-house analyser.
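The bias-and-variability comparison reported above is commonly quantified with Bland-Altman statistics: the mean of the paired differences estimates the systematic offset between the two instruments, and ±1.96 standard deviations of the differences give the 95% limits of agreement. A sketch with made-up paired lactate readings (illustrative numbers, not the study's data):

```python
import numpy as np

# Bland-Altman-style agreement between two analysers. The paired lactate
# readings (mmol/l) are synthetic illustrations, not the study's data.
portable = np.array([1.1, 1.8, 2.4, 3.0, 3.9, 4.6, 1.4, 2.1])
in_house = np.array([1.3, 2.0, 2.7, 3.3, 4.2, 5.0, 1.6, 2.4])

diff = portable - in_house
bias = diff.mean()                           # systematic offset
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement
```

A negative `bias` corresponds to the portable unit underestimating relative to the in-house analyser, the pattern the abstract reports for lactate and glucose in peritoneal fluid.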

  10. The flamenco dance in the cafés cantantes epoch: a historiographical review

    Directory of Open Access Journals (Sweden)

    Juan Zagalaz

    2012-01-01

    The main objective of this article is to examine how flamencology and flamenco historiography have perceived the phenomenon that consolidated flamenco dancing in the era of the 'cafés cantantes', from the second half of the 19th century to the early years of the 20th century. To this end, we performed a thorough bibliographic review analysing the interpretations of the most prominent specialists in the field in order to offer, as far as possible, a transversal and synthetic view of the state of the art. We examine the emergence of elements key to the establishment of this art, such as the appearance of a new theatrical space, the progressive professionalisation of the dancers, and various technical and stylistic advances. We also analyse the different types of dances performed during this period, concluding that a multitude of historiographical perspectives exist, but that no study of the period is commensurate with the importance of this art in the development of flamenco dance.

  11. ATHENA/INTRA analyses for ITER, NSSR-2

    International Nuclear Information System (INIS)

    Shen, Kecheng; Eriksson, John; Sjoeberg, A.

    1999-02-01

    This summary report covers thermal-hydraulic analyses made at Studsvik Eco and Safety AB for the ITER NSSR-2 safety documentation. The objective of the analyses was to reveal the safety characteristics of various heat transfer systems at specified operating conditions and to indicate the conditions under which there were obvious risks of jeopardising the structural integrity of the coolant systems. In the latter case, some analyses were also made to indicate conceivable mitigating measures for maintaining the integrity. The analyses were primarily concerned with the First Wall and Divertor heat transfer systems. Several enveloping transients were analysed with associated specific flow and heat-load boundary conditions. The analyses were performed with the ATHENA and INTRA codes.

  13. Methods and procedures for shielding analyses for the SNS

    International Nuclear Information System (INIS)

    Popova, I.; Ferguson, F.; Gallmeier, F.X.; Iverson, E.; Lu, Wei

    2011-01-01

    To ensure radiologically safe Spallation Neutron Source operation, shielding analyses are performed according to Oak Ridge National Laboratory internal regulations and in compliance with the Code of Federal Regulations. An overview of ongoing shielding work for the accelerator facility and neutron beam lines is presented, together with the methods used for the analyses and the associated procedures and regulations. (author)

  14. SENSITIVITY ANALYSIS FOR SALTSTONE DISPOSAL UNIT COLUMN DEGRADATION ANALYSES

    Energy Technology Data Exchange (ETDEWEB)

    Flach, G.

    2014-10-28

    PORFLOW-related analyses supporting a sensitivity analysis for Saltstone Disposal Unit (SDU) column degradation were performed. Previous analyses (Flach and Taylor 2014) used a model in which the SDU columns degraded piecewise from the top and bottom simultaneously. The current analysis employs a model in which all pieces of the column degrade at the same time. Information was extracted from the analyses that may be useful in determining the distribution of Tc-99 in the various SDUs over time and in determining flow balances for the SDUs.

  15. Analysing harmonic motions with an iPhone’s magnetometer

    Science.gov (United States)

    Yavuz, Ahmet; Kağan Temiz, Burak

    2016-05-01

    In this paper, we propose an experiment for analysing harmonic motion using an iPhone’s (or iPad’s) magnetometer. This experiment consists of the detection of magnetic field variations obtained from an iPhone’s magnetometer sensor. A graph of harmonic motion is directly displayed on the iPhone’s screen using the Sensor Kinetics application. Data from this application was analysed with Eureqa software to establish the equation of the harmonic motion. Analyses show that the use of an iPhone’s magnetometer to analyse harmonic motion is a practical and effective method for small oscillations and frequencies less than 15-20 Hz.
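As a rough stand-in for the curve-fitting step described above, the oscillation frequency can be estimated from the FFT peak of the recorded trace. The sketch below uses a synthetic 2.5 Hz "magnetometer" signal, well inside the paper's sub-15-20 Hz range; the sample rate, offset, amplitude and noise level are assumptions, not values from the experiment.

```python
import numpy as np

# Estimate the frequency of a harmonic trace such as a magnetometer log of
# an oscillating magnet (synthetic data; all parameters are illustrative).
fs = 100.0                       # sample rate, Hz
t = np.arange(0, 10, 1 / fs)     # 10 s of data -> 0.1 Hz FFT resolution
f_true = 2.5                     # oscillation frequency, Hz
signal = 40.0 + 6.0 * np.sin(2 * np.pi * f_true * t) \
         + np.random.default_rng(1).normal(0.0, 0.5, t.size)

# Remove the constant field offset, then locate the spectral peak.
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
f_est = freqs[np.argmax(spectrum)]
```

With 10 s of data the frequency resolution is 0.1 Hz, which is ample for the low-frequency oscillations the paper targets; fitting the full sinusoid (amplitude and phase) would refine this further.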

  16. Discrete frequency identification using the HP 5451B Fourier analyser

    International Nuclear Information System (INIS)

    Holland, L.; Barry, P.

    1977-01-01

    Frequency analysis with the HP 5451B discrete-frequency Fourier analyser is studied. The advantages of cross-correlation analysis for identifying discrete frequencies in background noise are discussed in conjunction with the elimination of aliasing and wraparound error. Discrete frequency identification is illustrated by a series of graphs giving the results of analysing 'electrical' and 'acoustical' white noise and sinusoidal signals
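The correlation idea in this record can be demonstrated in a few lines: the autocorrelation of white noise decays toward zero at non-zero lags, while that of a sinusoid stays periodic, so a tone buried well below the noise floor reappears in the autocorrelation. (Synthetic signal; the 50 Hz tone and noise level are assumptions, not data from the report.)

```python
import numpy as np

# Autocorrelation pulls a discrete frequency out of white noise: the noise
# contribution averages away at non-zero lags while the sinusoid's
# autocorrelation, 0.5*cos(2*pi*f*k/fs), stays periodic (period 20 lags here).
rng = np.random.default_rng(2)
fs = 1000.0
n = 20000
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 50.0 * t) + rng.normal(0.0, 2.0, n)  # SNR well below 1

# Biased-corrected sample autocorrelation for lags 0..39.
acf = np.array([np.dot(x[:n - k], x[k:]) / (n - k) for k in range(40)])
```

At lag 20 (one full 50 Hz period) the autocorrelation returns to about +0.5 and at lag 10 (half a period) to about -0.5, even though the tone is invisible in the raw trace, which is the practical advantage the record describes.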

  17. A Java Bytecode Metamodel for Composable Program Analyses

    NARCIS (Netherlands)

    Yildiz, Bugra Mehmet; Bockisch, Christoph; Rensink, Arend; Aksit, Mehmet; Seidl, Martina; Zschaler, Steffen

    Program analyses are an important tool to check if a system fulfills its specification. A typical implementation strategy for program analyses is to use an imperative, general-purpose language like Java; and access the program to be analyzed through libraries for manipulating intermediate code, such

  18. Finite strain analyses of deformations in polymer specimens

    DEFF Research Database (Denmark)

    Tvergaard, Viggo

    2016-01-01

    Analyses of the stress and strain state in test specimens or structural components made of polymer are discussed. This includes the Izod impact test, based on full 3D transient analyses. Also a long thin polymer tube under internal pressure has been studied, where instabilities develop, such as b...

  19. Multipole analyses and photo-decay couplings at intermediate energies

    International Nuclear Information System (INIS)

    Workman, R.L.; Arndt, R.A.; Zhujun Li

    1992-01-01

    The authors describe the results of several multipole analyses of pion-photoproduction data to 2 GeV in the lab photon energy. Comparisons are made with previous analyses. The photo-decay couplings for the delta are examined in detail. Problems in the representation of photoproduction data are discussed, with an emphasis on the recent LEGS data. 16 refs., 4 tabs

  20. Houdbaarheid en conservering van grondwatermonsters voor anorganische analyses [Storage life and preservation of groundwater samples for inorganic analyses]

    NARCIS (Netherlands)

    Cleven RFMJ; Gast LFL; Boshuis-Hilverdink ME; LAC

    1995-01-01

    The storage life and the possibilities for preservation of inorganic analyses of groundwater samples have been investigated. Groundwater samples, with and without preservation with acid, from four locations in the Netherlands have been analysed ten times over a period of three months on six